Sample records for quality limited segments

  1. A Novel Unsupervised Segmentation Quality Evaluation Method for Remote Sensing Images

    PubMed Central

    Tang, Yunwei; Jing, Linhai; Ding, Haifeng

    2017-01-01

    The segmentation of a high spatial resolution remote sensing image is a critical step in geographic object-based image analysis (GEOBIA). Evaluating the performance of segmentation without ground truth data, i.e., unsupervised evaluation, is important for the comparison of segmentation algorithms and the automatic selection of optimal parameters. This unsupervised strategy currently faces several challenges in practice, such as difficulties in designing effective indicators and limitations of the spectral values in the feature representation. This study proposes a novel unsupervised evaluation method to quantitatively measure the quality of segmentation results to overcome these problems. In this method, multiple spectral and spatial features of images are first extracted simultaneously and then integrated into a feature set to improve the quality of the feature representation of ground objects. The indicators designed for spatial stratified heterogeneity and spatial autocorrelation are included to estimate the properties of the segments in this integrated feature set. These two indicators are then combined into a global assessment metric as the final quality score. The trade-offs of the combined indicators are accounted for using a strategy based on the Mahalanobis distance, which can be exhibited geometrically. The method is tested on two segmentation algorithms and three testing images. The proposed method is compared with two existing unsupervised methods and a supervised method to confirm its capabilities. Through comparison and visual analysis, the results verified the effectiveness of the proposed method and demonstrated the reliability and improvements of this method with respect to other methods. PMID:29064416
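
The abstract's final scoring step, combining the two indicators via a Mahalanobis distance, can be sketched roughly as follows. This is a hedged illustration, not the paper's exact formulation: the indicator values, the ideal point (1, 1), and the use of the candidate set's own covariance are all assumptions.

```python
# Illustrative sketch: score each candidate segmentation by the Mahalanobis
# distance of its (heterogeneity, autocorrelation) indicator pair from an
# assumed ideal point; the covariance of all candidates whitens the two
# indicators so their trade-off is accounted for geometrically.

def mahalanobis_score(pair, pairs, ideal=(1.0, 1.0)):
    """Distance of one indicator pair from the ideal point, whitened by
    the sample covariance of all candidate pairs (lower = better)."""
    n = len(pairs)
    mx = sum(p[0] for p in pairs) / n
    my = sum(p[1] for p in pairs) / n
    sxx = sum((p[0] - mx) ** 2 for p in pairs) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in pairs) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pairs) / (n - 1)
    det = sxx * syy - sxy * sxy
    # inverse of the 2x2 covariance matrix
    ixx, iyy, ixy = syy / det, sxx / det, -sxy / det
    dx, dy = pair[0] - ideal[0], pair[1] - ideal[1]
    return (dx * dx * ixx + 2 * dx * dy * ixy + dy * dy * iyy) ** 0.5

# candidate segmentations, each scored by the two indicators in [0, 1]
candidates = [(0.9, 0.8), (0.5, 0.4), (0.7, 0.9), (0.2, 0.3)]
scores = [mahalanobis_score(c, candidates) for c in candidates]
best = candidates[scores.index(min(scores))]  # lowest distance = best quality
```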

  2. Water quality of Cisadane River based on watershed segmentation

    NASA Astrophysics Data System (ADS)

    Effendi, Hefni; Ayu Permatasari, Prita; Muslimah, Sri; Mursalin

    2018-05-01

    The growth of population and industrialization, combined with land development along the river, cause water pollution and environmental deterioration. The Cisadane River is one of the rivers in Indonesia where urbanization, industrialization, and agriculture are the main sources of pollution. The Cisadane River is an interesting case for investigating the effect of land use on water quality and for comparing water quality across river segments. The main objectives of this study were to examine whether there is a correlation between land use and water quality in the Cisadane River and whether water quality differs between its upstream and downstream sections. This study compared water quality with land use conditions in each segment of the river. Land use classification showed that river segments with more undeveloped area have better water quality than segments with more developed area. In general, BOD and COD values increased from upstream to downstream. However, BOD and COD values did not increase steadily in each segment, and water quality is closely related to the surrounding land use. Therefore, it cannot be concluded that water quality downstream is worse than in the upstream area.

  3. Segments from red blood cell units should not be used for quality testing.

    PubMed

    Kurach, Jayme D R; Hansen, Adele L; Turner, Tracey R; Jenkins, Craig; Acker, Jason P

    2014-02-01

    Nondestructive testing of blood components could permit in-process quality control and reduce discards. Tubing segments, generated during red blood cell (RBC) component production, were tested to determine their suitability as a sample source for quality testing. Leukoreduced RBC components were produced from whole blood (WB) by two different methods: WB filtration and buffy coat (BC). Components and their corresponding segments were tested on Days 5 and 42 of hypothermic storage (HS) for spun hematocrit (Hct), hemoglobin (Hb) content, percentage hemolysis, hematologic indices, and adenosine triphosphate concentration to determine whether segment quality represents unit quality. Segment samples overestimated hemolysis on Days 5 and 42 of HS in both BC- and WB filtration-produced RBCs (p < 0.001 for all). Hct and Hb levels in the segments were also significantly different from the units at both time points for both production methods (p < 0.001 for all). Indeed, for all variables tested, different results were obtained from segment and unit samples, and these differences were not consistent across production methods. The quality of samples from tubing segments is not representative of the quality of the corresponding RBC unit. Segments are not suitable surrogates with which to assess RBC quality. © 2013 American Association of Blood Banks.
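
For context, the percentage-hemolysis figure that the segment samples overestimated is conventionally computed from spun hematocrit, supernatant hemoglobin, and total hemoglobin. A sketch of that standard formula follows; the abstract does not give the paper's exact assay protocol, and the numbers below are illustrative.

```python
# Standard percent-hemolysis formula used in RBC quality control:
# the fraction of total hemoglobin found free in the supernatant,
# corrected for the packed-cell volume.

def percent_hemolysis(hct_percent, supernatant_hb, total_hb):
    """hct_percent: spun hematocrit in %; supernatant_hb and total_hb
    in the same units (e.g. g/dL). Returns hemolysis in percent."""
    return (100.0 - hct_percent) * supernatant_hb / total_hb

# e.g. Hct 60%, supernatant Hb 0.25 g/dL, total Hb 20 g/dL
h = percent_hemolysis(60.0, 0.25, 20.0)  # 0.5%
```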

  4. [Evaluation of Image Quality of Readout Segmented EPI with Readout Partial Fourier Technique].

    PubMed

    Yoshimura, Yuuki; Suzuki, Daisuke; Miyahara, Kanae

    Readout-segmented EPI (readout segmentation of long variable echo-trains: RESOLVE) segments k-space in the readout direction. Using the partial Fourier method in the readout direction shortens the imaging time; however, the effect on image quality of the resulting incomplete data sampling is a concern. We varied the partial Fourier setting in the readout direction for each segment and examined the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and distortion ratio for changes in image quality due to the differences in data sampling. As the number of sampled segments decreased, SNR and CNR decreased, whereas the distortion ratio did not change. Image quality with minimum sampling differs greatly from that with full data sampling, and caution is required when using it.

  5. Segmentation quality evaluation using region-based precision and recall measures for remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Xueliang; Feng, Xuezhi; Xiao, Pengfeng; He, Guangjun; Zhu, Liujun

    2015-04-01

    Segmentation of remote sensing images is a critical step in geographic object-based image analysis. Evaluating the performance of segmentation algorithms is essential to identify effective segmentation methods and optimize their parameters. In this study, we propose region-based precision and recall measures and use them to compare two image partitions for the purpose of evaluating segmentation quality. The two measures are calculated based on region overlapping and presented as a point or a curve in a precision-recall space, which can indicate segmentation quality in both geometric and arithmetic respects. Furthermore, the precision and recall measures are combined by using four different methods. We examine and compare the effectiveness of the combined indicators through geometric illustration, in an effort to reveal segmentation quality clearly and capture the trade-off between the two measures. In the experiments, we adopted the multiresolution segmentation (MRS) method for evaluation. The proposed measures are compared with four existing discrepancy measures to further confirm their capabilities. Finally, we suggest using a combination of the region-based precision-recall curve and the F-measure for supervised segmentation evaluation.
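
A minimal sketch of region-overlap precision and recall combined into an F-measure, in the spirit of the abstract; the paper's exact overlap and matching definitions may differ, and the toy partitions below are illustrative.

```python
# Region-based precision/recall sketch: each partition is a list of
# pixel-id sets; a pixel counts as matched if it lies in the region of
# the other partition that best overlaps its own region.

def overlap_fraction(partition_a, partition_b):
    """Area-weighted fraction of partition_a's pixels lying in the
    best-overlapping region of partition_b."""
    total = sum(len(seg) for seg in partition_a)
    matched = sum(max(len(seg & ref) for ref in partition_b)
                  for seg in partition_a)
    return matched / total

def f_measure(segments, reference, beta=1.0):
    """Combine region-based precision and recall into a single score."""
    precision = overlap_fraction(segments, reference)
    recall = overlap_fraction(reference, segments)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# toy 1-D "image" of 8 pixels: the segmentation splits the second
# reference region in two (over-segmentation), which shows up here on
# the recall side of the comparison
reference = [set(range(0, 4)), set(range(4, 8))]
segments = [set(range(0, 4)), set(range(4, 6)), set(range(6, 8))]
p = overlap_fraction(segments, reference)  # 1.0
r = overlap_fraction(reference, segments)  # 0.75
```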

  6. Angler segmentation using perceptions of experiential quality in the Great Barrier Reef Marine Park

    Treesearch

    William Smith; Gerard Kyle; Stephen G. Sutton

    2012-01-01

    This study investigated the efficacy of segmenting anglers using their perceptions of trip quality in the Great Barrier Reef Marine Park (GBRMP). Analysis revealed five segments of anglers whose perceptions of trip quality differed. We named the segments slow action, plenty of action, weather sensitive, gloomy gusses, and ok corral, and assessed variation among them...

  7. A promising limited angular computed tomography reconstruction via segmentation based regional enhancement and total variation minimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Wenkun; Zhang, Hanming; Li, Lei

    2016-08-15

    X-ray computed tomography (CT) is a powerful and common inspection technique used for industrial non-destructive testing. However, large-sized and heavily absorbing objects cause artifacts because of either the lack of specimen penetration in specific directions or the acquisition of data from only a limited angular range of views. Although sparse optimization-based methods, such as the total variation (TV) minimization method, can suppress artifacts to some extent, reconstructing images that converge to accurate values remains difficult because of the deficiency in continuous angular data and inconsistency in the projections. To address this problem, we use the idea of regional enhancement of the true values and suppression of the illusory artifacts outside the region to develop an efficient iterative algorithm. This algorithm combines regional enhancement of the true values with TV minimization for the limited angular reconstruction. In this algorithm, a segmentation approach is introduced to distinguish the regions of different image knowledge and generate the support mask of the image. A new regularization term, which contains the support knowledge to enhance the true values of the image, is incorporated into the objective function. The proposed optimization model is then solved efficiently by variable splitting and the alternating direction method. A compensation approach is also designed to extract useful information from the initial projections, thus reducing false segmentation results and correcting the segmentation support and the segmented image. The results obtained from comparing both simulation studies and real CT data set reconstructions indicate that the proposed algorithm generates a more accurate image than do the other reconstruction methods. The experimental results show that this algorithm can produce high-quality reconstructed images for the limited angular reconstruction and

  8. A promising limited angular computed tomography reconstruction via segmentation based regional enhancement and total variation minimization

    NASA Astrophysics Data System (ADS)

    Zhang, Wenkun; Zhang, Hanming; Li, Lei; Wang, Linyuan; Cai, Ailong; Li, Zhongguo; Yan, Bin

    2016-08-01

    X-ray computed tomography (CT) is a powerful and common inspection technique used for industrial non-destructive testing. However, large-sized and heavily absorbing objects cause artifacts because of either the lack of specimen penetration in specific directions or the acquisition of data from only a limited angular range of views. Although sparse optimization-based methods, such as the total variation (TV) minimization method, can suppress artifacts to some extent, reconstructing images that converge to accurate values remains difficult because of the deficiency in continuous angular data and inconsistency in the projections. To address this problem, we use the idea of regional enhancement of the true values and suppression of the illusory artifacts outside the region to develop an efficient iterative algorithm. This algorithm combines regional enhancement of the true values with TV minimization for the limited angular reconstruction. In this algorithm, a segmentation approach is introduced to distinguish the regions of different image knowledge and generate the support mask of the image. A new regularization term, which contains the support knowledge to enhance the true values of the image, is incorporated into the objective function. The proposed optimization model is then solved efficiently by variable splitting and the alternating direction method. A compensation approach is also designed to extract useful information from the initial projections, thus reducing false segmentation results and correcting the segmentation support and the segmented image. The results obtained from comparing both simulation studies and real CT data set reconstructions indicate that the proposed algorithm generates a more accurate image than do the other reconstruction methods. The experimental results show that this algorithm can produce high-quality reconstructed images for the limited angular reconstruction and suppress

  9. Perceived image quality with simulated segmented bifocal corrections

    PubMed Central

    Dorronsoro, Carlos; Radhakrishnan, Aiswaryah; de Gracia, Pablo; Sawides, Lucie; Marcos, Susana

    2016-01-01

    Bifocal contact or intraocular lenses use the principle of simultaneous vision to correct for presbyopia. A modified two-channel simultaneous vision simulator provided with an amplitude transmission spatial light modulator was used to optically simulate 14 segmented bifocal patterns (+3 diopters addition) with different far/near pupillary distributions of equal energy. Five subjects with paralyzed accommodation evaluated image quality and subjective preference through the segmented bifocal corrections. There are strong and systematic perceptual differences across the patterns, subjects, and observation distances: 48% of the conditions evaluated were significantly preferred or rejected. Optical simulations (in terms of through-focus Strehl ratio from Hartmann-Shack aberrometry) accurately predicted the pattern producing the highest perceived quality in 4 out of 5 subjects, both for far and near vision. These perceptual differences arise primarily from optical grounds but have an important neural component. PMID:27895981

  10. Designing image segmentation studies: Statistical power, sample size and reference standard quality.

    PubMed

    Gibson, Eli; Hu, Yipeng; Huisman, Henkjan J; Barratt, Dean C

    2017-12-01

    Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula relating reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of fewer than 4 subjects and errors in the detectable accuracy difference of less than 0.6%. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
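
The flavor of such a power calculation can be sketched with the textbook normal-approximation formula for a paired comparison of mean per-subject accuracy; the paper's actual derivation additionally models reference-standard quality, so this is only an illustrative simplification.

```python
# Sketch of a paired-test sample size calculation (normal approximation):
# n grows with the variance of per-subject accuracy differences and
# shrinks with the square of the detectable difference.
from math import ceil
from statistics import NormalDist

def sample_size(delta, sd_diff, alpha=0.05, power=0.8):
    """Subjects needed to detect a mean per-subject accuracy difference
    `delta`, given per-subject differences with standard deviation
    `sd_diff` (two-sided paired z-test)."""
    z = NormalDist().inv_cdf
    return ceil(((z(1 - alpha / 2) + z(power)) * sd_diff / delta) ** 2)

# detect a 1% accuracy difference when per-subject differences have SD 4%
n = sample_size(delta=0.01, sd_diff=0.04)  # 126 subjects
```

Halving the detectable difference quadruples the required sample size, which is why the trade-off against reference-standard quality that the paper analyzes matters in practice.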

  11. Coronary artery analysis: Computer-assisted selection of best-quality segments in multiple-phase coronary CT angiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Chuan, E-mail: chuan@umich.edu; Chan, Heang-Ping

    Purpose: The authors are developing an automated method to identify the best-quality coronary arterial segment from multiple-phase coronary CT angiography (cCTA) acquisitions, which may be used by either interpreting physicians or computer-aided detection systems to optimally and efficiently utilize the diagnostic information available in multiple-phase cCTA for the detection of coronary artery disease. Methods: After initialization with a manually identified seed point, each coronary artery tree is automatically extracted from multiple cCTA phases using our multiscale coronary artery response enhancement and 3D rolling balloon region growing vessel segmentation and tracking method. The coronary artery trees from multiple phases are then aligned by a global registration using an affine transformation with quadratic terms and nonlinear simplex optimization, followed by a local registration using a cubic B-spline method with fast localized optimization. The corresponding coronary arteries among the available phases are identified using a recursive coronary segment matching method. Each of the identified vessel segments is transformed by the curved planar reformation (CPR) method. Four features are extracted from each corresponding segment as quality indicators in the original computed tomography volume and the straightened CPR volume, and each quality indicator is used as a voting classifier for the arterial segment. A weighted voting ensemble (WVE) classifier is designed to combine the votes of the four voting classifiers for each corresponding segment. The segment with the highest WVE vote is then selected as the best-quality segment. In this study, the training and test sets consisted of 6 and 20 cCTA cases, respectively, each with 6 phases, containing a total of 156 cCTA volumes and 312 coronary artery trees. An observer preference study was also conducted with one expert cardiothoracic radiologist and four nonradiologist readers to visually rank vessel

  12. Heterologous Packaging Signals on Segment 4, but Not Segment 6 or Segment 8, Limit Influenza A Virus Reassortment.

    PubMed

    White, Maria C; Steel, John; Lowen, Anice C

    2017-06-01

    Influenza A virus (IAV) RNA packaging signals serve to direct the incorporation of IAV gene segments into virus particles, and this process is thought to be mediated by segment-segment interactions. These packaging signals are segment and strain specific, and as such, they have the potential to impact reassortment outcomes between different IAV strains. Our study aimed to quantify the impact of packaging signal mismatch on IAV reassortment using the human seasonal influenza A/Panama/2007/99 (H3N2) and pandemic influenza A/Netherlands/602/2009 (H1N1) viruses. Focusing on the three most divergent segments, we constructed pairs of viruses that encoded identical proteins but differed in the packaging signal regions on a single segment. We then evaluated the frequency with which segments carrying homologous versus heterologous packaging signals were incorporated into reassortant progeny viruses. We found that, when segment 4 (HA) of coinfecting parental viruses was modified, there was a significant preference for the segment containing matched packaging signals relative to the background of the virus. This preference was apparent even when the homologous HA constituted a minority of the HA segment population available in the cell for packaging. Conversely, when segment 6 (NA) or segment 8 (NS) carried modified packaging signals, there was no significant preference for homologous packaging signals. These data suggest that movement of NA and NS segments between the human H3N2 and H1N1 lineages is unlikely to be restricted by packaging signal mismatch, while movement of the HA segment would be more constrained. Our results indicate that the importance of packaging signals in IAV reassortment is segment dependent. IMPORTANCE Influenza A viruses (IAVs) can exchange genes through reassortment. This process contributes to both the highly diverse population of IAVs found in nature and the formation of novel epidemic and pandemic IAV strains. Our study sought to determine the

  13. Heterologous Packaging Signals on Segment 4, but Not Segment 6 or Segment 8, Limit Influenza A Virus Reassortment

    PubMed Central

    White, Maria C.; Steel, John

    2017-01-01

    ABSTRACT Influenza A virus (IAV) RNA packaging signals serve to direct the incorporation of IAV gene segments into virus particles, and this process is thought to be mediated by segment-segment interactions. These packaging signals are segment and strain specific, and as such, they have the potential to impact reassortment outcomes between different IAV strains. Our study aimed to quantify the impact of packaging signal mismatch on IAV reassortment using the human seasonal influenza A/Panama/2007/99 (H3N2) and pandemic influenza A/Netherlands/602/2009 (H1N1) viruses. Focusing on the three most divergent segments, we constructed pairs of viruses that encoded identical proteins but differed in the packaging signal regions on a single segment. We then evaluated the frequency with which segments carrying homologous versus heterologous packaging signals were incorporated into reassortant progeny viruses. We found that, when segment 4 (HA) of coinfecting parental viruses was modified, there was a significant preference for the segment containing matched packaging signals relative to the background of the virus. This preference was apparent even when the homologous HA constituted a minority of the HA segment population available in the cell for packaging. Conversely, when segment 6 (NA) or segment 8 (NS) carried modified packaging signals, there was no significant preference for homologous packaging signals. These data suggest that movement of NA and NS segments between the human H3N2 and H1N1 lineages is unlikely to be restricted by packaging signal mismatch, while movement of the HA segment would be more constrained. Our results indicate that the importance of packaging signals in IAV reassortment is segment dependent. IMPORTANCE Influenza A viruses (IAVs) can exchange genes through reassortment. This process contributes to both the highly diverse population of IAVs found in nature and the formation of novel epidemic and pandemic IAV strains. Our study sought to

  14. Automated identification of best-quality coronary artery segments from multiple-phase coronary CT angiography (cCTA) for vessel analysis

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Chughtai, Aamer; Wei, Jun; Kazerooni, Ella A.

    2016-03-01

    We are developing an automated method to identify the best-quality segment among the corresponding segments in multiple-phase cCTA. The coronary artery trees are automatically extracted from different cCTA phases using our multi-scale vessel segmentation and tracking method. An automated registration method is then used to align the multiple-phase artery trees. The corresponding coronary artery segments are identified in the registered vessel trees and are straightened by curved planar reformation (CPR). Four features are extracted from each segment in each phase as quality indicators in the original CT volume and the straightened CPR volume. Each quality indicator is used as a voting classifier to vote on the corresponding segments. A newly designed weighted voting ensemble (WVE) classifier is finally used to determine the best-quality coronary segment. An observer preference study was conducted with three readers who visually rated the quality of the vessels with rankings of 1 to 6. Six and 10 cCTA cases were used as the training and test sets, respectively, in this preliminary study. For the 10 test cases, the agreement between the automatically identified best-quality (AI-BQ) segments and the radiologist's top 2 rankings was 79.7%, and the agreements between AI-BQ and the other two readers were 74.8% and 83.7%, respectively. The results demonstrate that the performance of our automated method was comparable to that of experienced readers for identification of the best-quality coronary segments.
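
A weighted voting ensemble of this kind can be sketched as below; the four quality indicators and their weights are not specified in the abstract, so the names and values here are illustrative assumptions.

```python
# Sketch of a weighted voting ensemble for picking the best-quality phase:
# each quality indicator casts one vote for the phase it scores highest,
# and votes are combined with per-indicator weights.

def best_phase(indicator_scores, weights):
    """indicator_scores[k][p] = score of indicator k for candidate phase p
    (higher = better). Returns the index of the winning phase."""
    n_phases = len(indicator_scores[0])
    votes = [0.0] * n_phases
    for k, scores in enumerate(indicator_scores):
        winner = max(range(n_phases), key=lambda p: scores[p])
        votes[winner] += weights[k]
    return max(range(n_phases), key=lambda p: votes[p])

# 4 hypothetical indicators x 3 candidate phases
scores = [
    [0.6, 0.9, 0.7],   # e.g. vessel contrast
    [0.5, 0.8, 0.9],   # e.g. sharpness along the centerline
    [0.4, 0.7, 0.6],   # e.g. low motion-artifact measure
    [0.9, 0.6, 0.5],   # e.g. inverted noise level
]
weights = [0.4, 0.3, 0.2, 0.1]
phase = best_phase(scores, weights)  # phase 1 wins with weight 0.4 + 0.2
```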

  15. Cross-Talk Limits of Highly Segmented Semiconductor Detectors

    NASA Astrophysics Data System (ADS)

    Pullia, Alberto; Weisshaar, Dirk; Zocca, Francesca; Bazzacco, Dino

    2011-06-01

    Cross-talk limits of monolithic, highly segmented semiconductor detectors for high-resolution X-/gamma-ray spectrometry are investigated. Cross-talk causes false signal components, yielding amplitude losses and fold-dependent shifts of the spectral lines, which partially spoil the spectroscopic performance of the detector. Two complementary electrical models are developed that quantitatively describe the inter-channel cross-talk of monolithic segmented detectors whose electrodes are read out by charge-sensitive preamplifiers. The first is designated here as the Cross-Capacitance (CC) model, the second as the Split-Charge (SC) model. The CC model builds around the parasitic capacitances Cij linking the preamplifier outputs and the neighboring channel inputs. The SC model builds around the finite value of the decoupling capacitance CC used to read out the high-voltage detector electrode. The key parameters of the models are identified, and approaches to minimizing their impact are shown. Using a quasi-coaxial segmented germanium detector, it is found that the SC cross-talk becomes negligible for decoupling capacitances larger than 1 nF, where instead the CC cross-talk tends to dominate. The residual cross-talk may be reduced by minimizing the stray capacitances Cij through a careful design of the layout of the Printed Circuit Board (PCB) on which the input transistors are mounted. Cij can be made as low as 5 fF, but it is shown that even in that case the impact of the CC cross-talk on the detector performance is not negligible. Finally, an algorithm for cross-talk correction is presented and elaborated.
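
A first-order cross-talk correction consistent with this kind of linear model can be sketched as follows; the coupling coefficients below are illustrative, not measured values from the paper.

```python
# Sketch: observed channel amplitudes are modeled as o = (I + X) t, where
# X holds small cross-coupling coefficients (proportional to the parasitic
# capacitances). To first order in X, the true amplitudes are recovered as
# t ~= (I - X) o.

def correct_first_order(observed, xtalk):
    """xtalk[i][j]: fraction of channel j's signal leaking into channel i."""
    n = len(observed)
    return [observed[i] - sum(xtalk[i][j] * observed[j]
                              for j in range(n) if j != i)
            for i in range(n)]

# two neighboring segments with 1% leakage each way; the observed values
# correspond to true signals of roughly (1.0, 0.01) before leakage
observed = [1.0001, 0.02]
xtalk = [[0.0, 0.01], [0.01, 0.0]]
corrected = correct_first_order(observed, xtalk)
```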

  16. Light-leaking region segmentation of FOG fiber based on quality evaluation of infrared image

    NASA Astrophysics Data System (ADS)

    Liu, Haoting; Wang, Wei; Gao, Feng; Shan, Lianjie; Ma, Yuzhou; Ge, Wenqian

    2014-07-01

    To improve the assembly reliability of the Fiber Optic Gyroscope (FOG), a light-leakage detection system and method are developed. First, an agile movement control platform is designed to implement pose control of the FOG optical path component in 6 Degrees of Freedom (DOF). Second, an infrared camera is employed to capture working-state images of the corresponding fibers in the optical path component after the manual assembly of the FOG, so that the entire light transmission process of key sections in the light path can be recorded. Third, an image-quality-evaluation-based region segmentation method is developed for the light leakage images. In contrast to traditional methods, image quality metrics, including region contrast, edge blur, and image noise level, are first computed to characterize the infrared image; robust segmentation algorithms, including graph cut and flood fill, are then applied for region segmentation according to the specific image quality. Finally, after segmentation of the light leakage region, the typical light-leaking types, such as the point defect, the wedge defect, and the surface defect, can be identified. The image-quality-based approach dramatically improves the applicability of the proposed system. Extensive experimental results have proved the validity and effectiveness of this method.

  17. Improving iris recognition performance using segmentation, quality enhancement, match score fusion, and indexing.

    PubMed

    Vatsa, Mayank; Singh, Richa; Noore, Afzel

    2008-08-01

    This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a nonideal iris image using the modified Mumford-Shah functional. Different enhancement algorithms are concurrently applied on the segmented iris image to produce multiple enhanced versions of the iris image. A support-vector-machine-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good-quality regions to create a single high-quality iris image. Two distinct features are extracted from the high-quality iris image. The global textural feature is extracted using the 1-D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, whereas an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.

  18. Training time and quality of smartphone-based anterior segment screening in rural India.

    PubMed

    Ludwig, Cassie A; Newsom, Megan R; Jais, Alexandre; Myung, David J; Murthy, Somasheila I; Chang, Robert T

    2017-01-01

    We aimed at evaluating the ability of individuals without ophthalmologic training to quickly capture high-quality images of the cornea by using a smartphone and low-cost anterior segment imaging adapter (the "EyeGo" prototype). Seven volunteers photographed 1,502 anterior segments from 751 high school students in Varni, India, by using an iPhone 5S with an attached EyeGo adapter. Primary outcome measures were median photograph quality of the cornea and anterior segment of the eye (validated Fundus Photography vs Ophthalmoscopy Trial Outcomes in the Emergency Department [FOTO-ED] study; 1-5 scale; 5, best) and the time required to take each photograph. Volunteers were surveyed on their familiarity with using a smartphone (1-5 scale; 5, very comfortable) and comfort in assessing problems with the eye (1-5 scale; 5, very comfortable). Binomial logistic regression was performed using image quality (low quality: <4; high quality: ≥4) as the dependent variable and age, comfort using a smartphone, and comfort in assessing problems with the eye as independent variables. Six of the seven volunteers captured high-quality (median ≥4/5) images with a median time of ≤25 seconds per eye for all the eyes screened. Four of the seven volunteers demonstrated significant reductions in time to acquire photographs (P1=0.01, P5=0.01, P6=0.01, and P7=0.01), and three of the seven volunteers demonstrated significant improvements in the quality of photographs between the first 100 and last 100 eyes screened (P1<0.001, P2<0.001, and P6<0.01). Self-reported comfort using a smartphone (odds ratio [OR] =1.25; 95% CI =1.13 to 1.39) and self-reported comfort diagnosing eye conditions (OR =1.17; 95% CI =1.07 to 1.29) were significantly associated with an ability to take a high-quality image (≥4/5). There was a nonsignificant association between younger age and ability to take a high-quality image. Individuals without ophthalmic training were able to quickly capture a high-quality

  19. Training time and quality of smartphone-based anterior segment screening in rural India

    PubMed Central

    Ludwig, Cassie A; Newsom, Megan R; Jais, Alexandre; Myung, David J; Murthy, Somasheila I; Chang, Robert T

    2017-01-01

    Objective We aimed at evaluating the ability of individuals without ophthalmologic training to quickly capture high-quality images of the cornea by using a smartphone and low-cost anterior segment imaging adapter (the “EyeGo” prototype). Methods Seven volunteers photographed 1,502 anterior segments from 751 high school students in Varni, India, by using an iPhone 5S with an attached EyeGo adapter. Primary outcome measures were median photograph quality of the cornea and anterior segment of the eye (validated Fundus Photography vs Ophthalmoscopy Trial Outcomes in the Emergency Department [FOTO-ED] study; 1–5 scale; 5, best) and the time required to take each photograph. Volunteers were surveyed on their familiarity with using a smartphone (1–5 scale; 5, very comfortable) and comfort in assessing problems with the eye (1–5 scale; 5, very comfortable). Binomial logistic regression was performed using image quality (low quality: <4; high quality: ≥4) as the dependent variable and age, comfort using a smartphone, and comfort in assessing problems with the eye as independent variables. Results Six of the seven volunteers captured high-quality (median ≥4/5) images with a median time of ≤25 seconds per eye for all the eyes screened. Four of the seven volunteers demonstrated significant reductions in time to acquire photographs (P1=0.01, P5=0.01, P6=0.01, and P7=0.01), and three of the seven volunteers demonstrated significant improvements in the quality of photographs between the first 100 and last 100 eyes screened (P1<0.001, P2<0.001, and P6<0.01). Self-reported comfort using a smartphone (odds ratio [OR] =1.25; 95% CI =1.13 to 1.39) and self-reported comfort diagnosing eye conditions (OR =1.17; 95% CI =1.07 to 1.29) were significantly associated with an ability to take a high-quality image (≥4/5). There was a nonsignificant association between younger age and ability to take a high-quality image. Conclusion Individuals without ophthalmic training were

  20. SCOUT: simultaneous time segmentation and community detection in dynamic networks

    PubMed Central

    Hulovatyy, Yuriy; Milenković, Tijana

    2016-01-01

    Many evolving complex real-world systems can be modeled via dynamic networks. An important problem in dynamic network research is community detection, which finds groups of topologically related nodes. Typically, this problem is approached by assuming either that each time point has a distinct community organization or that all time points share a single community organization. The reality likely lies between these two extremes. To find the compromise, we consider community detection in the context of the problem of segment detection, which identifies contiguous time periods with consistent network structure. Consequently, we formulate a combined problem of segment community detection (SCD), which simultaneously partitions the network into contiguous time segments with consistent community organization and finds this community organization for each segment. To solve SCD, we introduce SCOUT, an optimization framework that explicitly considers both segmentation quality and partition quality. SCOUT addresses limitations of existing methods that can be adapted to solve SCD, which consider only one of segmentation quality or partition quality. In a thorough evaluation, SCOUT outperforms the existing methods in terms of both accuracy and computational complexity. We apply SCOUT to biological network data to study human aging. PMID:27881879

  1. Document segmentation for high-quality printing

    NASA Astrophysics Data System (ADS)

    Ancin, Hakan

    1997-04-01

    A technique to segment dark text on the light background of mixed-mode color documents is presented. This process does not perceptually change graphics and photo regions. Color documents are scanned and printed from various media, which usually do not have a clean background. This is especially the case for printouts generated from thin magazine samples; these printouts usually include text and figures from the back of the page, which is called bleeding. Removal of bleeding artifacts improves the perceptual quality of the printed document and reduces color ink usage. By detecting the light background of the document, these artifacts are removed from background regions. Detection of dark text regions also enables the halftoning algorithms to use true black ink for black text pixels instead of composite black. The processed document contains sharp black text on a white background, resulting in improved perceptual quality and better ink utilization. The described method is memory efficient and requires only a small number of scan lines of the high-resolution color document during processing.

  2. Manual versus Automated Carotid Artery Plaque Component Segmentation in High and Lower Quality 3.0 Tesla MRI Scans

    PubMed Central

    Smits, Loek P.; van Wijk, Diederik F.; Duivenvoorden, Raphael; Xu, Dongxiang; Yuan, Chun; Stroes, Erik S.; Nederveen, Aart J.

    2016-01-01

    Purpose To study the interscan reproducibility of manual versus automated segmentation of carotid artery plaque components, and the agreement between both methods, in high and lower quality MRI scans. Methods 24 patients with 30–70% carotid artery stenosis were planned for 3T carotid MRI, followed by a rescan within 1 month. A multicontrast protocol (T1w, T2w, PDw and TOF sequences) was used. After co-registration and delineation of the lumen and outer wall, segmentation of plaque components (lipid-rich necrotic cores (LRNC) and calcifications) was performed both manually and automatically. Scan quality was assessed using a visual quality scale. Results Agreement for the detection of LRNC (Cohen’s kappa (k) = 0.04) and calcification (k = 0.41) between the manual and automated segmentation methods was poor. In the high-quality scans (visual quality score ≥ 3), the agreement between manual and automated segmentation increased to k = 0.55 and k = 0.58 for the detection of LRNC and calcification larger than 1 mm2, respectively. Both manual and automated analysis showed good interscan reproducibility for the quantification of LRNC (intraclass correlation coefficient (ICC) of 0.94 and 0.80 respectively) and calcified plaque area (ICC of 0.95 and 0.77, respectively). Conclusion Agreement between manual and automated segmentation of LRNC and calcifications was poor, despite a good interscan reproducibility of both methods. The agreement between both methods increased to moderate in high quality scans. These findings indicate that image quality is a critical determinant of the performance of both manual and automated segmentation of carotid artery plaque components. PMID:27930665
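
    The kappa values above are chance-corrected agreement scores. As a minimal sketch of Cohen's kappa for two binary raters (the per-plaque detections below are invented, not from the study):

```python
import numpy as np

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters on the same items
    (undefined when chance agreement is exactly 1)."""
    a, b = np.asarray(a), np.asarray(b)
    p_obs = np.mean(a == b)                              # observed agreement
    p_chance = sum(np.mean(a == c) * np.mean(b == c)     # expected by chance
                   for c in np.union1d(a, b))
    return (p_obs - p_chance) / (1.0 - p_chance)

# Invented per-plaque detections (1 = component detected) by the two methods
manual    = [1, 0, 1, 1, 0, 0, 1, 0]
automated = [1, 0, 0, 1, 0, 1, 1, 0]
kappa = cohens_kappa(manual, automated)   # 6/8 observed vs 0.5 by chance
```

    Kappa near 0 (as for LRNC detection above) means agreement barely better than chance, even when raw percent agreement looks respectable.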

  3. WE-G-207-05: Relationship Between CT Image Quality, Segmentation Performance, and Quantitative Image Feature Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, J; Nishikawa, R; Reiser, I

    Purpose: Segmentation quality can affect quantitative image feature analysis. The objective of this study is to examine the relationship between computed tomography (CT) image quality, segmentation performance, and quantitative image feature analysis. Methods: A total of 90 pathology-proven breast lesions in 87 dedicated breast CT images were considered. An iterative image reconstruction (IIR) algorithm was used to obtain CT images with different quality. With different combinations of 4 variables in the algorithm, this study obtained a total of 28 different qualities of CT images. Two imaging tasks/objectives were considered: 1) segmentation and 2) classification of the lesion as benign or malignant. Twenty-three image features were extracted after segmentation using a semi-automated algorithm and 5 of them were selected via a feature selection technique. Logistic regression was trained and tested using leave-one-out cross-validation and its area under the ROC curve (AUC) was recorded. The standard deviation of a homogeneous portion and the gradient of a parenchymal portion of an example breast were used as estimates of image noise and sharpness. The DICE coefficient was computed using a radiologist’s drawing of the lesion. Mean DICE and AUC were used as performance metrics for each of the 28 reconstructions. The relationship between segmentation and classification performance under different reconstructions was compared. Distributions (median, 95% confidence interval) of DICE and AUC for each reconstruction were also compared. Results: Moderate correlation (Pearson’s rho = 0.43, p-value = 0.02) between DICE and AUC values was found. However, the variation between DICE and AUC values for each reconstruction increased as the image sharpness increased. There was a combination of IIR parameters that resulted in the best segmentation with the worst classification performance. Conclusion: There are certain images that yield better segmentation or better classification performance, but not necessarily both.
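
    The DICE coefficient used above measures overlap between an algorithm's mask and the radiologist's drawing. A minimal sketch (the masks below are hypothetical flattened stand-ins for 2-D lesion masks):

```python
import numpy as np

def dice(seg, ref):
    """Dice overlap 2|A∩B| / (|A| + |B|); 1.0 means perfect agreement."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    total = seg.sum() + ref.sum()
    return 2.0 * (seg & ref).sum() / total if total else 1.0   # both empty

# Hypothetical flattened masks: algorithm output vs. radiologist's drawing
auto_mask   = [1, 1, 1, 0, 0]
expert_mask = [1, 1, 0, 1, 0]
overlap = dice(auto_mask, expert_mask)   # 2*2 / (3+3)
```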

  4. Segmentation of Image Ensembles via Latent Atlases

    PubMed Central

    Van Leemput, Koen; Menze, Bjoern H.; Wells, William M.; Golland, Polina

    2010-01-01

    Spatial priors, such as probabilistic atlases, play an important role in MRI segmentation. However, the availability of comprehensive, reliable and suitable manual segmentations for atlas construction is limited. We therefore propose a method for joint segmentation of corresponding regions of interest in a collection of aligned images that does not require labeled training data. Instead, a latent atlas, initialized by at most a single manual segmentation, is inferred from the evolving segmentations of the ensemble. The algorithm is based on probabilistic principles but is solved using partial differential equations (PDEs) and energy minimization criteria. We evaluate the method on two datasets, segmenting subcortical and cortical structures in a multi-subject study and extracting brain tumors in a single-subject multi-modal longitudinal experiment. We compare the segmentation results to manual segmentations, when those exist, and to the results of a state-of-the-art atlas-based segmentation method. The quality of the results supports the latent atlas as a promising alternative when existing atlases are not compatible with the images to be segmented. PMID:20580305

  5. Automatic initialization and quality control of large-scale cardiac MRI segmentations.

    PubMed

    Albà, Xènia; Lekadir, Karim; Pereañez, Marco; Medrano-Gracia, Pau; Young, Alistair A; Frangi, Alejandro F

    2018-01-01

    Continuous advances in imaging technologies enable ever more comprehensive phenotyping of human anatomy and physiology. Concomitant reduction of imaging costs has resulted in widespread use of imaging in large clinical trials and population imaging studies. Magnetic Resonance Imaging (MRI), in particular, offers one-stop-shop multidimensional biomarkers of cardiovascular physiology and pathology. A wide range of analysis methods offer sophisticated cardiac image assessment and quantification for clinical and research studies. However, most methods have only been evaluated on relatively small databases often not accessible for open and fair benchmarking. Consequently, published performance indices are not directly comparable across studies and their translation and scalability to large clinical trials or population imaging cohorts is uncertain. Most existing techniques still rely on considerable manual intervention for the initialization and quality control of the segmentation process, becoming prohibitive when dealing with thousands of images. The contributions of this paper are three-fold. First, we propose a fully automatic method for initializing cardiac MRI segmentation, by using image features and random forests regression to predict an initial position of the heart and key anatomical landmarks in an MRI volume. In processing a full imaging database, the technique predicts the optimal corrective displacements and positions in relation to the initial rough intersections of the long and short axis images. Second, we introduce for the first time a quality control measure capable of identifying incorrect cardiac segmentations with no visual assessment. The method uses statistical, pattern and fractal descriptors in a random forest classifier to detect failures to be corrected or removed from subsequent statistical analysis. Finally, we validate these new techniques within a full pipeline for cardiac segmentation applicable to large-scale cardiac MRI databases. 

  6. Automated tumor volumetry using computer-aided image segmentation.

    PubMed

    Gaonkar, Bilwaj; Macyszyn, Luke; Bilello, Michel; Sadaghiani, Mohammed Salehi; Akbari, Hamed; Atthiah, Mark A; Ali, Zarina S; Da, Xiao; Zhan, Yiqang; O'Rourke, Donald; Grady, Sean M; Davatzikos, Christos

    2015-05-01

    Accurate segmentation of brain tumors, and quantification of tumor volume, is important for diagnosis, monitoring, and planning therapeutic intervention. Manual segmentation is not widely used because of time constraints. Previous efforts have mainly produced methods that are tailored to a particular type of tumor or acquisition protocol and have mostly failed to produce a method that functions on different tumor types and is robust to changes in scanning parameters, resolution, and image quality, thereby limiting their clinical value. Herein, we present a semiautomatic method for tumor segmentation that is fast, accurate, and robust to a wide variation in image quality and resolution. A semiautomatic segmentation method based on the geodesic distance transform was developed and validated by using it to segment 54 brain tumors. Glioblastomas, meningiomas, and brain metastases were segmented. Qualitative validation was based on physician ratings provided by three clinical experts. Quantitative validation was based on comparing semiautomatic and manual segmentations. Tumor segmentations obtained using manual and automatic methods were compared quantitatively using the Dice measure of overlap. Subjective evaluation was performed by having human experts rate the computerized segmentations on a 0-5 rating scale where 5 indicated perfect segmentation. The proposed method addresses a significant, unmet need in the field of neuro-oncology. Specifically, this method enables clinicians to obtain accurate and reproducible tumor volumes without the need for manual segmentation. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.

  7. Automated Tumor Volumetry Using Computer-Aided Image Segmentation

    PubMed Central

    Bilello, Michel; Sadaghiani, Mohammed Salehi; Akbari, Hamed; Atthiah, Mark A.; Ali, Zarina S.; Da, Xiao; Zhan, Yiqang; O'Rourke, Donald; Grady, Sean M.; Davatzikos, Christos

    2015-01-01

    Rationale and Objectives Accurate segmentation of brain tumors, and quantification of tumor volume, is important for diagnosis, monitoring, and planning therapeutic intervention. Manual segmentation is not widely used because of time constraints. Previous efforts have mainly produced methods that are tailored to a particular type of tumor or acquisition protocol and have mostly failed to produce a method that functions on different tumor types and is robust to changes in scanning parameters, resolution, and image quality, thereby limiting their clinical value. Herein, we present a semiautomatic method for tumor segmentation that is fast, accurate, and robust to a wide variation in image quality and resolution. Materials and Methods A semiautomatic segmentation method based on the geodesic distance transform was developed and validated by using it to segment 54 brain tumors. Glioblastomas, meningiomas, and brain metastases were segmented. Qualitative validation was based on physician ratings provided by three clinical experts. Quantitative validation was based on comparing semiautomatic and manual segmentations. Results Tumor segmentations obtained using manual and automatic methods were compared quantitatively using the Dice measure of overlap. Subjective evaluation was performed by having human experts rate the computerized segmentations on a 0–5 rating scale where 5 indicated perfect segmentation. Conclusions The proposed method addresses a significant, unmet need in the field of neuro-oncology. Specifically, this method enables clinicians to obtain accurate and reproducible tumor volumes without the need for manual segmentation. PMID:25770633

  8. A wavefront compensation approach to segmented mirror figure control

    NASA Technical Reports Server (NTRS)

    Redding, David; Breckenridge, Bill; Sevaston, George; Lau, Ken

    1991-01-01

    We consider the 'figure-control' problem for a spaceborne sub-millimeter-wave telescope, the Precision Segmented Reflector Project Focus Mission Telescope. We show that the performance of any figure control system is subject to limits on the controllability and observability of the quality of the wavefront. We present a wavefront-compensation method for the Focus Mission Telescope which uses mirror-figure sensors and three-axis segment actuators to directly minimize wavefront errors due to segment position errors. This approach shows significantly better performance when compared with a panel-state-compensation approach.

  9. Spontaneous ignition temperature limits of jet A fuel in research-combustor segment

    NASA Technical Reports Server (NTRS)

    Ingebo, R. D.

    1974-01-01

    The effects of inlet-air pressure and reference velocity on the spontaneous-ignition temperature limits of Jet A fuel were determined in a combustor segment with a primary-zone length of 0.076 m (3 in.). At a constant reference velocity of 21.4 m/sec (170 ft/sec), increasing the inlet-air pressure from 21 to 207 N/sq cm decreased the spontaneous-ignition temperature limit from approximately 700 to 555 K. At a constant inlet-air pressure of 41 N/sq cm, increasing the reference velocity from 12.2 to 30.5 m/sec increased the spontaneous-ignition temperature limit from approximately 575 to 800 K. Results are compared with other data in the literature.

  10. Quality of Radiomic Features in Glioblastoma Multiforme: Impact of Semi-Automated Tumor Segmentation Software

    PubMed Central

    Lee, Myungeun; Woo, Boyeong; Kuo, Michael D.; Jamshidi, Neema

    2017-01-01

    Objective The purpose of this study was to evaluate the reliability and quality of radiomic features in glioblastoma multiforme (GBM) derived from tumor volumes obtained with semi-automated tumor segmentation software. Materials and Methods MR images of 45 GBM patients (29 males, 16 females) were downloaded from The Cancer Imaging Archive, in which post-contrast T1-weighted imaging and fluid-attenuated inversion recovery MR sequences were used. Two raters independently segmented the tumors using two semi-automated segmentation tools (TumorPrism3D and 3D Slicer). Regions of interest corresponding to contrast-enhancing lesion, necrotic portions, and non-enhancing T2 high signal intensity component were segmented for each tumor. A total of 180 imaging features were extracted, and their quality was evaluated in terms of stability, normalized dynamic range (NDR), and redundancy, using intra-class correlation coefficients, cluster consensus, and the Rand statistic. Results Our study results showed that most of the radiomic features in GBM were highly stable. Over 90% of the 180 features showed good stability (intra-class correlation coefficient [ICC] ≥ 0.8), whereas only 7 features were of poor stability (ICC < 0.5). Most first-order statistics and morphometric features showed moderate-to-high NDR (4 > NDR ≥ 1), while over 35% of the texture features showed poor NDR (< 1). Features were shown to cluster into only 5 groups, indicating that they were highly redundant. Conclusion The use of semi-automated software tools provided sufficiently reliable tumor segmentation and feature stability, thus helping to overcome the inherent inter-rater and intra-rater variability of user intervention. However, certain aspects of feature quality, including NDR and redundancy, need to be assessed for determination of representative signature features before further development of radiomics. PMID:28458602
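
    Feature stability above is scored with intra-class correlation coefficients. As a rough sketch, one common variant is the two-way mixed, single-measurement consistency ICC(3,1); the feature values below are invented, and other ICC variants differ in how they treat rater effects:

```python
import numpy as np

def icc_consistency(ratings):
    """ICC(3,1): two-way mixed-effects, single-measurement, consistency.
    `ratings` is (n_subjects, n_raters); a constant offset between raters
    does not reduce consistency."""
    x = np.asarray(ratings, float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Hypothetical feature values for 4 lesions measured by two raters
feature = [[2.0, 2.1], [3.0, 3.1], [5.0, 5.2], [4.0, 3.9]]
icc = icc_consistency(feature)   # close to 1: the raters rank lesions alike
```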

  11. Quality of Radiomic Features in Glioblastoma Multiforme: Impact of Semi-Automated Tumor Segmentation Software.

    PubMed

    Lee, Myungeun; Woo, Boyeong; Kuo, Michael D; Jamshidi, Neema; Kim, Jong Hyo

    2017-01-01

    The purpose of this study was to evaluate the reliability and quality of radiomic features in glioblastoma multiforme (GBM) derived from tumor volumes obtained with semi-automated tumor segmentation software. MR images of 45 GBM patients (29 males, 16 females) were downloaded from The Cancer Imaging Archive, in which post-contrast T1-weighted imaging and fluid-attenuated inversion recovery MR sequences were used. Two raters independently segmented the tumors using two semi-automated segmentation tools (TumorPrism3D and 3D Slicer). Regions of interest corresponding to contrast-enhancing lesion, necrotic portions, and non-enhancing T2 high signal intensity component were segmented for each tumor. A total of 180 imaging features were extracted, and their quality was evaluated in terms of stability, normalized dynamic range (NDR), and redundancy, using intra-class correlation coefficients, cluster consensus, and the Rand statistic. Our study results showed that most of the radiomic features in GBM were highly stable. Over 90% of the 180 features showed good stability (intra-class correlation coefficient [ICC] ≥ 0.8), whereas only 7 features were of poor stability (ICC < 0.5). Most first-order statistics and morphometric features showed moderate-to-high NDR (4 > NDR ≥ 1), while over 35% of the texture features showed poor NDR (< 1). Features were shown to cluster into only 5 groups, indicating that they were highly redundant. The use of semi-automated software tools provided sufficiently reliable tumor segmentation and feature stability, thus helping to overcome the inherent inter-rater and intra-rater variability of user intervention. However, certain aspects of feature quality, including NDR and redundancy, need to be assessed for determination of representative signature features before further development of radiomics.

  12. Interactive segmentation of tongue contours in ultrasound video sequences using quality maps

    NASA Astrophysics Data System (ADS)

    Ghrenassia, Sarah; Ménard, Lucie; Laporte, Catherine

    2014-03-01

    Ultrasound (US) imaging is an effective and noninvasive way of studying the tongue motions involved in normal and pathological speech, and the results of US studies are of interest for the development of new strategies in speech therapy. State-of-the-art tongue shape analysis techniques based on US images depend on semi-automated tongue segmentation and tracking techniques. Recent work has mostly focused on improving the accuracy of the tracking techniques themselves. However, occasional errors remain inevitable, regardless of the technique used, and the tongue tracking process must thus be supervised by a speech scientist who corrects these errors manually or semi-automatically. This paper proposes an interactive framework to facilitate this process. In this framework, the user is guided towards potentially problematic portions of the US image sequence by a segmentation quality map that is based on the normalized energy of an active contour model and is automatically produced during tracking. When a problematic segmentation is identified, corrections to the segmented contour can be made on one image and propagated both forward and backward in the problematic subsequence, thereby improving the user experience. The interactive tools were tested in combination with two different tracking algorithms. Preliminary results illustrate the potential of the proposed framework, suggesting that it generally reduces user interaction time, with little change in segmentation repeatability.

  13. Self-correcting multi-atlas segmentation

    NASA Astrophysics Data System (ADS)

    Gao, Yi; Wilford, Andrew; Guo, Liang

    2016-03-01

    In multi-atlas segmentation, one typically registers several atlases to the new image, and their respective segmented label images are transformed and fused to form the final segmentation. After each registration, the quality of the registration is reflected by a single global value: the final registration cost. Ideally, if the registration quality could be evaluated at each point, independently of the registration process, and this evaluation also provided a direction in which the deformation could be further improved, the overall segmentation performance would improve. We propose such a self-correcting multi-atlas segmentation method. The method is applied to hippocampus segmentation from brain images, and a statistically significant improvement is observed.
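
    The fusion step described above is often done by per-voxel majority voting over the transformed atlas labels. A minimal sketch of that baseline (the abstract does not specify the fusion rule used; the tiny 2x2 label maps below are invented):

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse atlas label maps (already warped to the target image) by
    per-voxel majority vote; ties resolve to the smaller label."""
    stack = np.stack([np.asarray(m) for m in label_maps])
    votes = stack.reshape(len(label_maps), -1)
    fused = np.array([np.bincount(v).argmax() for v in votes.T])
    return fused.reshape(stack.shape[1:])

# Three invented 2x2 segmentations propagated from different atlases
a = [[0, 1], [1, 1]]
b = [[0, 1], [0, 1]]
c = [[1, 1], [0, 0]]
fused = majority_vote([a, b, c])
```

    Weighted variants replace the flat vote with per-atlas or per-voxel weights, which is where a pointwise registration-quality estimate like the one proposed above could plug in.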

  14. Automated Segmentability Index for Layer Segmentation of Macular SD-OCT Images.

    PubMed

    Lee, Kyungmoo; Buitendijk, Gabriëlle H S; Bogunovic, Hrvoje; Springelkamp, Henriët; Hofman, Albert; Wahle, Andreas; Sonka, Milan; Vingerling, Johannes R; Klaver, Caroline C W; Abràmoff, Michael D

    2016-03-01

    To automatically identify which spectral-domain optical coherence tomography (SD-OCT) scans will provide reliable automated layer segmentations for more accurate layer thickness analyses in population studies. Six hundred ninety macular SD-OCT image volumes (6.0 × 6.0 × 2.3 mm³) were obtained from one eye of each of 690 subjects (74.6 ± 9.7 [mean ± SD] years; 37.8% male) randomly selected from the population-based Rotterdam Study. The dataset consisted of 420 OCT volumes with successful automated retinal nerve fiber layer (RNFL) segmentations obtained from our previously reported graph-based segmentation method and 270 volumes with failed segmentations. To evaluate the reliability of the layer segmentations, we have developed a new metric, the segmentability index SI, which is obtained from a random forest regressor based on 12 features using OCT voxel intensities, edge-based costs, and on-surface costs. The SI was compared with two well-known quality indices, the quality index (QI) and the maximum tissue contrast index (mTCI), using receiver operating characteristic (ROC) analysis. The area under the curve (AUC) and its 95% confidence interval (CI) were 0.713 (0.621 to 0.805) for the QI, 0.756 (0.673 to 0.838) for the mTCI, and 0.852 (0.784 to 0.920) for the SI. The SI AUC is significantly larger than either the QI or mTCI AUC (P < 0.01). The segmentability index SI is well suited to identify SD-OCT scans for which successful automated intraretinal layer segmentations can be expected. Interpreting the quantification of SD-OCT images requires the underlying segmentation to be reliable, but standard SD-OCT quality metrics do not predict which segmentations are reliable and which are not. The segmentability index SI presented in this study does allow reliable segmentations to be identified, which is important for more accurate layer thickness analyses in research and population studies.
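
    The ROC comparison above reduces each quality index to a single AUC. A minimal sketch of AUC in its rank-statistic (Mann-Whitney) form; the scores and outcomes below are invented:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a random positive outscores a random negative,
    with ties counting one half."""
    s, y = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = s[y], s[~y]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Invented segmentability-style scores; 1 = automated segmentation succeeded
si_scores = [0.9, 0.8, 0.5, 0.4, 0.3, 0.6]
success   = [1,   1,   1,   0,   0,   0]
quality_auc = auc(si_scores, success)   # 8 of 9 pos/neg pairs ranked correctly
```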

  15. Marital Quality and Cognitive Limitations in Late Life

    PubMed Central

    Thomas, Patricia A.; Umberson, Debra

    2016-01-01

    Objectives. Identifying factors associated with cognitive limitations among older adults has become a major public health objective. Given the importance of marital relationships for older adults’ health, this study examines the association between marital quality and change in cognitive limitations in late life, directionality of the relationship between marital quality and cognitive limitations, and potential gender differences in these associations. Method. Latent growth curve models were used to estimate the association of marital quality with change in cognitive limitations among older adults and the direction of the association between marital quality and cognitive limitations using 4 waves of the Americans’ Changing Lives survey (N = 841). Results. Results indicate that more frequent negative (but not positive) marital experiences are associated with a slower increase in cognitive limitations over time, and the direction of this association does not operate in the reverse (i.e., cognitive limitations did not lead to change in marital quality over time). The association between negative marital experiences and cognitive limitations is similar for men and women. Discussion. The discussion highlights possible explanations for the apparent protective effect of negative marital experiences for older adults’ cognitive health over time, regardless of gender. PMID:25765315

  16. Spatial limitations of fast temporal segmentation are best modeled by V1 receptive fields.

    PubMed

    Goodbourn, Patrick T; Forte, Jason D

    2013-11-22

    The fine temporal structure of events influences the spatial grouping and segmentation of visual-scene elements. Although adjacent regions flickering asynchronously at high temporal frequencies appear identical, the visual system signals a boundary between them. These "phantom contours" disappear when the gap between regions exceeds a critical value (g(max)). We used g(max) as an index of neuronal receptive-field size to compare with known receptive-field data from along the visual pathway and thus infer the location of the mechanism responsible for fast temporal segmentation. Observers viewed a circular stimulus reversing in luminance contrast at 20 Hz for 500 ms. A gap of constant retinal eccentricity segmented each stimulus quadrant; on each trial, participants identified a target quadrant containing counterphasing inner and outer segments. Through varying the gap width, g(max) was determined at a range of retinal eccentricities. We found that g(max) increased from 0.3° to 0.8° for eccentricities from 2° to 12°. These values correspond to receptive-field diameters of neurons in primary visual cortex that have been reported in single-cell and fMRI studies and are consistent with the spatial limitations of motion detection. In a further experiment, we found that modulation sensitivity depended critically on the length of the contour and could be predicted by a simple model of spatial summation in early cortical neurons. The results suggest that temporal segmentation is achieved by neurons at the earliest cortical stages of visual processing, most likely in primary visual cortex.

  17. Mindcontrol: A web application for brain segmentation quality control.

    PubMed

    Keshavan, Anisha; Datta, Esha; McDonough, Ian M; Madan, Christopher R; Jordan, Kesshi; Henry, Roland G

    2018-04-15

    Tissue classification plays a crucial role in the investigation of normal neural development, brain-behavior relationships, and the disease mechanisms of many psychiatric and neurological illnesses. Ensuring the accuracy of tissue classification is important for quality research and, in particular, the translation of imaging biomarkers to clinical practice. Assessment with the human eye is vital to correct various errors inherent to all currently available segmentation algorithms. Manual quality assurance becomes methodologically difficult at a large scale - a problem of increasing importance as the number of data sets is on the rise. To make this process more efficient, we have developed Mindcontrol, an open-source web application for the collaborative quality control of neuroimaging processing outputs. The Mindcontrol platform consists of a dashboard to organize data, descriptive visualizations to explore the data, an imaging viewer, and an in-browser annotation and editing toolbox for data curation and quality control. Mindcontrol is flexible and can be configured for the outputs of any software package in any data organization structure. Example configurations for three large, open-source datasets are presented: the 1000 Functional Connectomes Project (FCP), the Consortium for Reliability and Reproducibility (CoRR), and the Autism Brain Imaging Data Exchange (ABIDE) Collection. These demo applications link descriptive quality control metrics, regional brain volumes, and thickness scalars to a 3D imaging viewer and editing module, resulting in an easy-to-implement quality control protocol that can be scaled for any size and complexity of study. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  18. Unraveling Pancreatic Segmentation.

    PubMed

    Renard, Yohann; de Mestier, Louis; Perez, Manuela; Avisse, Claude; Lévy, Philippe; Kianmanesh, Reza

    2018-04-01

    Limited pancreatic resections are increasingly performed, but the rate of postoperative fistula is higher than after classical resections. Pancreatic segmentation, if anatomically and radiologically identifiable, may theoretically help the surgeon remove selected anatomical portions with their own segmental pancreatic duct and thus might decrease the postoperative fistula rate. We aimed to systematically and comprehensively review the previously proposed pancreatic segmentations and to discuss their relevance and limitations. The PubMed database was searched for articles investigating pancreatic segmentation, including human or animal anatomy and cadaveric or surgical studies. Overall, 47 of 99 articles were selected and grouped into 4 main hypotheses of pancreatic segmentation methodology: anatomic, vascular, embryologic and lymphatic. The head, body and tail segments are gross descriptions without distinct borders. The arterial territories defined vascular segments and isolated an isthmic paucivascular area. The embryological theory relied on the fusion planes of the embryological buds. The lymphatic drainage pathways defined the lymphatic segmentation. These theories had differences but converged toward separating the head and body/tail parts, and the anterior from the posterior and inferior parts of the pancreatic head. The rate of postoperative fistula was not decreased when surgical resection was performed following any of these segmentation theories; hence, none of them appeared relevant enough to guide pancreatic transections. Current pancreatic segmentation theories do not enable defining anatomical-surgical pancreatic segments. Other approaches should be explored, in particular focusing on pancreatic ducts, through pancreatic duct reconstruction and 3D embryologic modeling.

  19. Image quality assessment of automatic three-segment MR attenuation correction vs. CT attenuation correction.

    PubMed

    Partovi, Sasan; Kohan, Andres; Gaeta, Chiara; Rubbert, Christian; Vercher-Conejero, Jose L; Jones, Robert S; O'Donnell, James K; Wojtylak, Patrick; Faulhaber, Peter

    2013-01-01

    The purpose of this study is to systematically evaluate the usefulness of positron emission tomography/magnetic resonance imaging (PET/MRI) in a clinical setting by assessing the image quality of PET images reconstructed with a three-segment MR attenuation correction (MRAC) versus the standard CT attenuation correction (CTAC). We prospectively studied 48 patients who had their clinically scheduled FDG-PET/CT followed by an FDG-PET/MRI. Three nuclear radiologists evaluated the image quality of CTAC vs. MRAC using a five-point Likert scale. A two-sided, paired t-test was performed for comparison purposes. The image quality was further assessed by categorizing it as acceptable (4 or 5 on the Likert scale) or unacceptable (1, 2, or 3) using the McNemar test. When assessing image quality on the Likert scale, one reader observed a significant difference between CTAC and MRAC (p=0.0015), whereas the other two readers did not (p=0.8924 and p=0.1880, respectively). In the grouped analysis, no significant difference was found between CTAC and MRAC for any reader (p=0.6137 for reader 1, p=1 for reader 2, and p=0.8137 for reader 3). All three readers reported artifacts more often on the MRAC images than on the CTAC images. There was no clinically significant difference in quality between PET images generated on a PET/MRI system and those from a PET/computed tomography (PET/CT) system. PET images using the automatic three-segment MR attenuation method provided diagnostic image quality. However, future research on the image quality obtained with different MR attenuation-based methods is warranted before PET/MRI can be used clinically.
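The two comparisons described in this record (a paired t-test on the raw Likert scores and McNemar's test on the acceptable/unacceptable dichotomy) can be sketched as follows; `compare_likert` and the continuity-corrected McNemar statistic are illustrative choices, not the authors' code:

```python
import numpy as np
from scipy import stats

def compare_likert(ctac, mrac, acceptable=4):
    """Paired t-test on five-point Likert scores, plus McNemar's test
    on the dichotomized ratings (>= 4 acceptable, < 4 unacceptable)."""
    ctac, mrac = np.asarray(ctac), np.asarray(mrac)
    _, p_ttest = stats.ttest_rel(ctac, mrac)
    ok_ct, ok_mr = ctac >= acceptable, mrac >= acceptable
    b = int(np.sum(ok_ct & ~ok_mr))   # acceptable on CTAC only
    c = int(np.sum(~ok_ct & ok_mr))   # acceptable on MRAC only
    if b + c == 0:
        p_mcnemar = 1.0               # no discordant pairs
    else:
        chi2 = (abs(b - c) - 1.0) ** 2 / (b + c)  # continuity-corrected
        p_mcnemar = stats.chi2.sf(chi2, df=1)
    return p_ttest, p_mcnemar
```

The McNemar test ignores concordant pairs and asks only whether the two attenuation methods flip ratings across the acceptability threshold in one direction more often than the other.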

  20. Risk segmentation: goal or problem?

    PubMed

    Feldman, R; Dowd, B

    2000-07-01

    This paper traces the evolution of economists' views about risk segmentation in health insurance markets. Originally seen as a desirable goal, risk segmentation has come to be viewed as leading to abnormal profits, wasted resources, and inefficient limitations on coverage and services. We suggest that risk segmentation may be efficient if one takes an ex post view (i.e., after consumers' risks are known). From this perspective, managed care may be a much better method for achieving risk segmentation than limitations on coverage. The most serious objection to risk segmentation is the ex ante concern that it undermines long-term insurance contracts that would protect consumers against changes in lifetime risk.

  1. On the evaluation of segmentation editing tools

    PubMed Central

    Heckel, Frank; Moltz, Jan H.; Meine, Hans; Geisler, Benjamin; Kießling, Andreas; D’Anastasi, Melvin; dos Santos, Daniel Pinto; Theruvath, Ashok Joseph; Hahn, Horst K.

    2014-01-01

    Abstract. Efficient segmentation editing tools are important components in the segmentation process, as no automatic methods exist that always generate sufficient results. Evaluating segmentation editing algorithms is challenging, because their quality depends on the user’s subjective impression. So far, no established methods for an objective, comprehensive evaluation of such tools exist and, particularly, intermediate segmentation results are not taken into account. We discuss the evaluation of editing algorithms in the context of tumor segmentation in computed tomography. We propose a rating scheme to qualitatively measure the accuracy and efficiency of editing tools in user studies. In order to objectively summarize the overall quality, we propose two scores based on the subjective rating and the quantified segmentation quality over time. Finally, a simulation-based evaluation approach is discussed, which allows a more reproducible evaluation without the need for human input. This automated evaluation complements user studies, allowing a more convincing evaluation, particularly during development, where frequent user studies are not possible. The proposed methods have been used to evaluate two dedicated editing algorithms on 131 representative tumor segmentations. We show how the comparison of editing algorithms benefits from the proposed methods. Our results also show the correlation of the suggested quality score with the qualitative ratings. PMID:26158063

  2. Crossing the quality chasm in resource-limited settings.

    PubMed

    Maru, Duncan Smith-Rohrberg; Andrews, Jason; Schwarz, Dan; Schwarz, Ryan; Acharya, Bibhav; Ramaiya, Astha; Karelas, Gregory; Rajbhandari, Ruma; Mate, Kedar; Shilpakar, Sona

    2012-11-30

    Over the last decade, extensive scientific and policy innovations have begun to reduce the "quality chasm": the gulf between best practices and actual implementation that exists in resource-rich medical settings. While limited data exist, this chasm is likely to be equally acute and deadly in resource-limited areas. Although health systems have begun to be scaled up in impoverished areas, scale-up is only the foundation necessary to deliver effective healthcare to the poor. This perspective piece describes a vision for a global quality improvement movement in resource-limited areas. The following action items are a first step toward achieving this vision: 1) revise global health investment mechanisms to value quality; 2) enhance human resources for improving health systems quality; 3) scale up data capacity; 4) deepen community accountability and engagement initiatives; 5) implement evidence-based quality improvement programs; and 6) develop an implementation science research agenda.

  3. Fizeau interferometric cophasing of segmented mirrors: experimental validation.

    PubMed

    Cheetham, Anthony; Cvetojevic, Nick; Norris, Barnaby; Sivaramakrishnan, Anand; Tuthill, Peter

    2014-06-02

    We present an optical testbed demonstration of the Fizeau Interferometric Cophasing of Segmented Mirrors (FICSM) algorithm. FICSM allows a segmented mirror to be phased with a science imaging detector and three filters (selected among the normal science complement). It requires no specialised, dedicated wavefront-sensing hardware. Applying random piston and tip/tilt aberrations of more than 5 wavelengths to a small segmented mirror array produced an initial unphased point spread function with an estimated Strehl ratio of 9%, which served as the starting point for our phasing algorithm. After using the FICSM algorithm to cophase the pupil, we estimated a Strehl ratio of 94% based on a comparison between our data and simulated encircled-energy metrics. Our final image quality is limited by the accuracy of our segment actuation, which yields a root mean square (RMS) wavefront error of 25 nm. This is the first hardware demonstration of coarse and fine phasing of an 18-segment pupil with the James Webb Space Telescope (JWST) geometry using a single algorithm. FICSM can be implemented on JWST using any of its scientific imaging cameras, making it useful as a fallback in the event that accepted phasing strategies encounter problems. We present an operational sequence that would cophase such an 18-segment primary in three sequential iterations of the FICSM algorithm. Similar sequences can be readily devised for any segmented mirror.
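The reported figures are consistent with the Maréchal approximation relating RMS wavefront error σ to Strehl ratio, S ≈ exp(-(2πσ/λ)²). The abstract does not state the operating wavelength; assuming a visible wavelength near 633 nm, the 25 nm residual maps to roughly the quoted 94%:

```python
import math

def marechal_strehl(rms_error_nm, wavelength_nm):
    """Marechal approximation: S ~ exp(-(2*pi*sigma/lambda)**2),
    valid for small residual wavefront errors."""
    phase_rms = 2.0 * math.pi * rms_error_nm / wavelength_nm
    return math.exp(-phase_rms ** 2)

# 25 nm RMS wavefront error at an assumed 633 nm wavelength
print(round(marechal_strehl(25.0, 633.0), 2))  # 0.94
```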

  4. A fast and efficient segmentation scheme for cell microscopic image.

    PubMed

    Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H

    2007-04-27

    Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast enough to process huge quantities of images. Recent studies have focused on improving segmentation quality. Several segmentation schemes achieve good quality, but their processing time is too expensive to deal with a great number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial, since classification accounts for most of the processing time needed to segment an image. The main contribution of this work is a method to reduce the complexity of the decision functions produced by support vector machines (SVM) while preserving the recognition rate. Vector quantization is used to reduce the inherent redundancy present in huge pixel databases (i.e., images with expert pixel segmentation). Hybrid color space design is also used to improve both the data set size reduction rate and the recognition rate. A new decision function quality criterion is defined to select a good trade-off between the recognition rate and the processing time of the pixel decision function. The first results of this study show that fast and efficient pixel classification with SVM is possible. Moreover, posterior class pixel probabilities are easy to estimate with Platt's method. A new segmentation scheme using probabilistic pixel classification has then been developed. This scheme has several free parameters whose selection must be automated, but existing criteria for evaluating segmentation quality are not well adapted to cell segmentation, especially when comparison with an expert pixel segmentation is required. Another important contribution of this paper is therefore the definition of a new quality criterion for the evaluation of cell segmentation. The results presented here show that selecting the free parameters of the segmentation scheme by optimisation of this new cell segmentation quality criterion produces efficient cell segmentation.
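The vector-quantization step can be sketched as below, using SciPy's `kmeans2` as a stand-in (the `vq_reduce` helper and the per-class codebook size are hypothetical; the paper's actual VQ scheme and hybrid color space design are not reproduced here):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def vq_reduce(features, labels, k=16, seed=0):
    """Replace each class's pixel samples with k codebook vectors,
    shrinking the training set handed to the (slow) SVM trainer."""
    xs, ys = [], []
    for cls in np.unique(labels):
        pts = features[labels == cls]
        k_eff = min(k, len(pts))
        codebook, _ = kmeans2(pts, k_eff, minit="points", seed=seed)
        xs.append(codebook)
        ys.append(np.full(len(codebook), cls))
    return np.vstack(xs), np.concatenate(ys)
```

For instance, two classes of 200 three-channel pixels each, quantized to 8 codewords per class, yield a 16-sample training set in place of 400 samples.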

  5. The use of segmented regression in analysing interrupted time series studies: an example in pre-hospital ambulance care.

    PubMed

    Taljaard, Monica; McKenzie, Joanne E; Ramsay, Craig R; Grimshaw, Jeremy M

    2014-06-19

    An interrupted time series design is a powerful quasi-experimental approach for evaluating effects of interventions introduced at a specific point in time. To utilize the strength of this design, a modification to standard regression analysis, such as segmented regression, is required. In segmented regression analysis, the change in intercept and/or slope from pre- to post-intervention is estimated and used to test causal hypotheses about the intervention. We illustrate segmented regression using data from a previously published study that evaluated the effectiveness of a collaborative intervention to improve quality in pre-hospital ambulance care for acute myocardial infarction (AMI) and stroke. In the original analysis, a standard regression model was used with time as a continuous variable. We contrast the results from this standard regression analysis with those from segmented regression analysis. We discuss the limitations of the former and advantages of the latter, as well as the challenges of using segmented regression in analysing complex quality improvement interventions. Based on the estimated change in intercept and slope from pre- to post-intervention using segmented regression, we found insufficient evidence of a statistically significant effect on quality of care for stroke, although potential clinically important effects for AMI cannot be ruled out. Segmented regression analysis is the recommended approach for analysing data from an interrupted time series study. Several modifications to the basic segmented regression analysis approach are available to deal with challenges arising in the evaluation of complex quality improvement interventions.
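The intercept- and slope-change model described in this record can be sketched with ordinary least squares (a minimal illustration; the name `segmented_regression` is hypothetical, and a real interrupted-time-series analysis would also account for autocorrelation):

```python
import numpy as np

def segmented_regression(y, t, t0):
    """Fit y = b0 + b1*t + b2*post + b3*(t - t0)*post, where
    post = 1 for observations at or after the interruption t0.
    b2 estimates the change in level (intercept), b3 the change in slope."""
    t = np.asarray(t, dtype=float)
    post = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta  # [b0, b1, b2, b3]
```

On noiseless synthetic data with a level jump of 3 and a slope change of 1 at t0 = 12, the fit recovers those coefficients exactly.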

  6. 3D conformal planning using low segment multi-criteria IMRT optimization

    PubMed Central

    Khan, Fazal; Craft, David

    2014-01-01

    Purpose To evaluate automated multicriteria optimization (MCO) – designed for intensity modulated radiation therapy (IMRT), but invoked with limited segmentation – to efficiently produce high quality 3D conformal radiation therapy (3D-CRT) plans. Methods Ten patients previously planned with 3D-CRT to various disease sites (brain, breast, lung, abdomen, pelvis), were replanned with a low-segment inverse multicriteria optimized technique. The MCO-3D plans used the same beam geometry of the original 3D plans, but were limited to an energy of 6 MV. The MCO-3D plans were optimized using fluence-based MCO IMRT and then, after MCO navigation, segmented with a low number of segments. The 3D and MCO-3D plans were compared by evaluating mean dose for all structures, D95 (dose that 95% of the structure receives) and homogeneity indexes for targets, D1 and clinically appropriate dose volume objectives for individual organs at risk (OARs), monitor units (MUs), and physician preference. Results The MCO-3D plans reduced the OAR mean doses (41 out of a total of 45 OARs had a mean dose reduction, p<<0.01) and monitor units (seven out of ten plans have reduced MUs; the average reduction is 17%, p=0.08) while maintaining clinical standards on coverage and homogeneity of target volumes. All MCO-3D plans were preferred by physicians over their corresponding 3D plans. Conclusion High quality 3D plans can be produced using MCO-IMRT optimization, resulting in automated field-in-field type plans with good monitor unit efficiency. Adopting this technology in a clinic could improve plan quality, and streamline treatment plan production by utilizing a single system applicable to both IMRT and 3D planning. PMID:25413405

  7. Coupled dictionary learning for joint MR image restoration and segmentation

    NASA Astrophysics Data System (ADS)

    Yang, Xuesong; Fan, Yong

    2018-03-01

    To achieve better segmentation of MR images, image restoration is typically used as a preprocessing step, especially for low-quality MR images. Recent studies have demonstrated that dictionary learning methods could achieve promising performance for both image restoration and image segmentation. These methods typically learn paired dictionaries of image patches from different sources and use a common sparse representation to characterize paired image patches, such as low-quality image patches and their corresponding high quality counterparts for the image restoration, and image patches and their corresponding segmentation labels for the image segmentation. Since learning these dictionaries jointly in a unified framework may improve the image restoration and segmentation simultaneously, we propose a coupled dictionary learning method to concurrently learn dictionaries for joint image restoration and image segmentation based on sparse representations in a multi-atlas image segmentation framework. Particularly, three dictionaries, including a dictionary of low quality image patches, a dictionary of high quality image patches, and a dictionary of segmentation label patches, are learned in a unified framework so that the learned dictionaries of image restoration and segmentation can benefit each other. Our method has been evaluated for segmenting the hippocampus in MR T1 images collected with scanners of different magnetic field strengths. The experimental results have demonstrated that our method achieved better image restoration and segmentation performance than state of the art dictionary learning and sparse representation based image restoration and image segmentation methods.

  8. Image Information Mining Utilizing Hierarchical Segmentation

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Marchisio, Giovanni; Koperski, Krzysztof; Datcu, Mihai

    2002-01-01

    The Hierarchical Segmentation (HSEG) algorithm is an approach for producing high quality, hierarchically related image segmentations. The VisiMine image information mining system utilizes clustering and segmentation algorithms for reducing visual information in multispectral images to a manageable size. The project discussed herein seeks to enhance the VisiMine system through incorporating hierarchical segmentations from HSEG into the VisiMine system.

  9. SU-E-I-71: Quality Assessment of Surrogate Metrics in Multi-Atlas-Based Image Segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, T; Ruan, D

    Purpose: With the ever-growing data of heterogeneous quality, relevance assessment of atlases becomes increasingly critical for multi-atlas-based image segmentation. However, there is no universally recognized best relevance metric, and even a standard for comparing candidates remains elusive. This study, for the first time, designs a quantification to assess relevance metrics’ quality, based on a novel perspective of the metric as a surrogate for inferring the inaccessible oracle geometric agreement. Methods: We first develop an inference model relating surrogate metrics in image space to the underlying oracle relevance metric in segmentation label space, via a monotonically non-decreasing function subject to random perturbations. Subsequently, we investigate the model parameters to reveal the key factors contributing to a surrogate’s ability to prognosticate the oracle relevance value for the specific task of atlas selection. Finally, we design an effective contrast-to-noise ratio (eCNR) to quantify surrogates’ quality based on insights from these analyses and empirical observations. Results: The inference model was specialized to a linear function with normally distributed perturbations, with the surrogate metric exemplified by several widely used image similarity metrics, i.e., MSD/NCC/(N)MI. The surrogates’ behavior in selecting the most relevant atlases was assessed under varying eCNR, showing that surrogates with high eCNR dominated those with low eCNR in retaining the most relevant atlases. In an end-to-end validation, NCC/(N)MI (eCNR 0.12) yielded statistically better segmentation than MSD (eCNR 0.10), with a mean DSC of about 0.85 (first and third quartiles: 0.83, 0.89) versus 0.84 (0.81, 0.89) for MSD. Conclusion: The designed eCNR is capable of characterizing surrogate metrics’ quality in prognosticating the oracle relevance value. It has been demonstrated to

  10. Pavement management segment consolidation

    DOT National Transportation Integrated Search

    1998-01-01

    Dividing roads into "homogeneous" segments has been a major problem for all areas of highway engineering. SDDOT uses Deighton Associates Limited software, dTIMS, to analyze life-cycle costs for various rehabilitation strategies on each segment of roa...

  11. Market Segmentation for Information Services.

    ERIC Educational Resources Information Center

    Halperin, Michael

    1981-01-01

    Discusses the advantages and limitations of market segmentation as strategy for the marketing of information services made available by nonprofit organizations, particularly libraries. Market segmentation is defined, a market grid for libraries is described, and the segmentation of information services is outlined. A 16-item reference list is…

  12. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, Kevin M.; Meservey, Richard H.; Landon, Mark D.

    1999-01-01

    A process for improving packaging efficiency uses three dimensional, computer simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer simulated, facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation and sequence of the segmentation and packaging of the contaminated items is determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded.

  13. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, K.M.; Meservey, R.H.; Landon, M.D.

    1999-08-10

    A process for improving packaging efficiency uses three dimensional, computer simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D and D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer simulated, facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation and sequence of the segmentation and packaging of the contaminated items is determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded. 3 figs.

  14. Fast segmentation of industrial quality pavement images using Laws texture energy measures and k-means clustering

    NASA Astrophysics Data System (ADS)

    Mathavan, Senthan; Kumar, Akash; Kamal, Khurram; Nieminen, Michael; Shah, Hitesh; Rahman, Mujib

    2016-09-01

    Thousands of pavement images are collected by road authorities daily for condition monitoring surveys. These images typically have intensity variations and texture nonuniformities that make their segmentation challenging. The automated segmentation of such pavement images is crucial for accurate, thorough, and expedited health monitoring of roads. In the pavement monitoring area, well-known texture descriptors, such as gray-level co-occurrence matrices and local binary patterns, are often used for surface segmentation and identification. These, despite being the established methods for texture discrimination, are inherently slow. This work evaluates Laws texture energy measures as a viable alternative for pavement images for the first time. k-means clustering is used to partition the feature space, limiting the human subjectivity in the process. Data classification, hence image segmentation, is performed by the k-nearest neighbor method. Laws texture energy masks are shown to perform well with resulting accuracy and precision values of more than 80%. The implementations of the algorithm, in both MATLAB® and OpenCV/C++, are extensively compared against the state of the art for execution speed, clearly showing the advantages of the proposed method. Furthermore, the OpenCV-based segmentation shows a 100% increase in processing speed when compared to the fastest algorithm available in literature.
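A minimal sketch of the Laws texture-energy computation described in this record (the kernel pair, window size, and the `laws_energy` helper are illustrative choices, not the authors' implementation):

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

# Classic 5-tap Laws kernels: Level, Edge, Spot
L5 = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
E5 = np.array([-1.0, -2.0, 0.0, 2.0, 1.0])
S5 = np.array([-1.0, 0.0, 2.0, 0.0, -1.0])

def laws_energy(img, k1, k2, win=15):
    """Convolve with the separable 2D mask outer(k1, k2), then average
    the absolute response over a win x win window (the 'energy' map).
    Per-pixel energy vectors from several masks would then be fed to
    k-means clustering and k-NN classification for segmentation."""
    mask = np.outer(k1, k2)
    resp = convolve(np.asarray(img, dtype=float), mask, mode="reflect")
    return uniform_filter(np.abs(resp), size=win)
```

An E5E5 energy map, for instance, responds strongly to rough, crack-like texture and weakly to smooth pavement, which is what makes the feature space separable by clustering.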

  15. Segmenting patients and physicians using preferences from discrete choice experiments.

    PubMed

    Deal, Ken

    2014-01-01

    People often form groups or segments that have similar interests and needs and seek similar benefits from health providers. Health organizations need to understand whether the same health treatments, prevention programs, services, and products should be applied to everyone in the relevant population or whether different treatments need to be provided to each of several segments that are relatively homogeneous internally but heterogeneous among segments. Our objective was to explain the purposes, benefits, and methods of segmentation for health organizations, and to illustrate the process of segmenting health populations based on preference coefficients from a discrete choice conjoint experiment (DCE) using an example study of prevention of cyberbullying among university students. We followed a two-level procedure for investigating segmentation incorporating several methods for forming segments in Level 1 using DCE preference coefficients and testing their quality, reproducibility, and usability by health decision makers. Covariates (demographic, behavioral, lifestyle, and health state variables) were included in Level 2 to further evaluate quality and to support the scoring of large databases and developing typing tools for assigning those in the relevant population, but not in the sample, to the segments. Several segmentation solution candidates were found during the Level 1 analysis, and the relationship of the preference coefficients to the segments was investigated using predictive methods. Those segmentations were tested for their quality and reproducibility and three were found to be very close in quality. While one seemed better than others in the Level 1 analysis, another was very similar in quality and proved ultimately better in predicting segment membership using covariates in Level 2. 
The two segments in the final solution were profiled for attributes that would support the development and acceptance of cyberbullying prevention programs among university students.

  16. Best Merge Region Growing Segmentation with Integrated Non-Adjacent Region Object Aggregation

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Tarabalka, Yuliya; Montesano, Paul M.; Gofman, Emanuel

    2012-01-01

    Best merge region growing normally produces segmentations with closed connected region objects. Recognizing that spectrally similar objects often appear in spatially separate locations, we present an approach for tightly integrating best merge region growing with non-adjacent region object aggregation, which we call Hierarchical Segmentation or HSeg. However, the original implementation of non-adjacent region object aggregation in HSeg required excessive computing time even for moderately sized images because of the required intercomparison of each region with all other regions. This problem was previously addressed by a recursive approximation of HSeg, called RHSeg. In this paper we introduce a refined implementation of non-adjacent region object aggregation in HSeg that reduces the computational requirements of HSeg without resorting to the recursive approximation. In this refinement, HSeg's region intercomparisons among non-adjacent regions are limited to regions of a dynamically determined minimum size. We show that this refined version of HSeg can process moderately sized images in about the same amount of time as RHSeg incorporating the original HSeg. Nonetheless, RHSeg is still required for processing very large images due to its lower computer memory requirements and amenability to parallel processing. We then note a limitation of RHSeg with the original HSeg for high spatial resolution images, and show how incorporating the refined HSeg into RHSeg overcomes this limitation. The quality of the image segmentations produced by the refined HSeg is then compared with other available best merge segmentation approaches. Finally, we comment on the unique nature of the hierarchical segmentations produced by HSeg.

  17. Probabilistic segmentation and intensity estimation for microarray images.

    PubMed

    Gottardo, Raphael; Besag, Julian; Stephens, Matthew; Murua, Alejandro

    2006-01-01

    We describe a probabilistic approach to simultaneous image segmentation and intensity estimation for complementary DNA microarray experiments. The approach overcomes several limitations of existing methods. In particular, it (a) uses a flexible Markov random field approach to segmentation that allows for a wider range of spot shapes than existing methods, including relatively common 'doughnut-shaped' spots; (b) models the image directly as background plus hybridization intensity, and estimates the two quantities simultaneously, avoiding the common logical error that estimates of foreground may be less than those of the corresponding background if the two are estimated separately; and (c) uses a probabilistic modeling approach to simultaneously perform segmentation and intensity estimation, and to compute spot quality measures. We describe two approaches to parameter estimation: a fast algorithm, based on the expectation-maximization and the iterated conditional modes algorithms, and a fully Bayesian framework. These approaches produce comparable results, and both appear to offer some advantages over other methods. We use an HIV experiment to compare our approach to two commercial software products: Spot and Arrayvision.

  18. Quality comparison of continuous steam sterilization segmented-flow aseptic processing versus conventional canning of whole and sliced mushrooms.

    PubMed

    Anderson, N M; Walker, P N

    2011-08-01

    This study was carried out to investigate segmented-flow aseptic processing of particle foods. A pilot-scale continuous steam sterilization unit capable of producing shelf stable aseptically processed whole and sliced mushrooms was developed. The system utilized pressurized steam as the heating medium to achieve high temperature-short time processing conditions with high and uniform heat transfer that will enable static temperature penetration studies for process development. Segmented-flow technology produced a narrower residence time distribution than pipe-flow aseptic processing; thus, whole and sliced mushrooms were processed only as long as needed to achieve the target F₀  = 7.0 min and were not overcooked. Continuous steam sterilization segmented-flow aseptic processing produced shelf stable aseptically processed mushrooms of superior quality to conventionally canned mushrooms. When compared to conventionally canned mushrooms, aseptically processed yield (weight basis) increased 6.1% (SD = 2.9%) and 6.6% (SD = 2.2%), whiteness (L) improved 3.1% (SD = 1.9%) and 4.7% (SD = 0.7%), color difference (ΔE) improved 6.0% (SD = 1.3%) and 8.5% (SD = 1.5%), and texture improved 3.9% (SD = 1.7%) and 4.6% (SD = 4.2%), for whole and sliced mushrooms, respectively. Segmented-flow aseptic processing eliminated a separate blanching step, eliminated the unnecessary packaging of water and promoted the use of bag-in-box and other versatile aseptic packaging methods. Segmented-flow aseptic processing is capable of producing shelf stable aseptically processed particle foods of superior quality to a conventionally canned product. This unique continuous steam sterilization process eliminates the need for a separate blanching step, reduces or eliminates the need for a liquid carrier, and promotes the use of bag-in-box and other versatile aseptic packaging methods. © 2011 Institute of Food Technologists®
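For reference, the color difference ΔE reported above is, in its classic CIE76 form, simply the Euclidean distance between two L*a*b* triples (the abstract does not state which ΔE formula the authors used):

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in L*a*b* space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

print(delta_e76((50.0, 0.0, 0.0), (50.0, 3.0, 4.0)))  # 5.0
```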

  19. Applying the algorithm "assessing quality using image registration circuits" (AQUIRC) to multi-atlas segmentation

    NASA Astrophysics Data System (ADS)

    Datteri, Ryan; Asman, Andrew J.; Landman, Bennett A.; Dawant, Benoit M.

    2014-03-01

    Multi-atlas registration-based segmentation is a popular technique in the medical imaging community, used to transform anatomical and functional information from a set of atlases onto a new patient that lacks this information. The accuracy of the projected information on the target image is dependent on the quality of the registrations between the atlas images and the target image. Recently, we have developed a technique called AQUIRC that aims at estimating the error of a non-rigid registration at the local level and was shown to correlate to error in a simulated case. Herein, we extend upon this work by applying AQUIRC to atlas selection at the local level across multiple structures in cases in which non-rigid registration is difficult. AQUIRC is applied to 6 structures, the brainstem, optic chiasm, left and right optic nerves, and the left and right eyes. We compare the results of AQUIRC to that of popular techniques, including Majority Vote, STAPLE, Non-Local STAPLE, and Locally-Weighted Vote. We show that AQUIRC can be used as a method to combine multiple segmentations and increase the accuracy of the projected information on a target image, and is comparable to cutting edge methods in the multi-atlas segmentation field.

  20. Translation-aware semantic segmentation via conditional least-square generative adversarial networks

    NASA Astrophysics Data System (ADS)

    Zhang, Mi; Hu, Xiangyun; Zhao, Like; Pang, Shiyan; Gong, Jinqi; Luo, Min

    2017-10-01

    Semantic segmentation has recently made rapid progress in the fields of remote sensing and computer vision. However, many leading approaches cannot simultaneously translate label maps to possible source images with a limited number of training images. The core issues are insufficient adversarial information to interpret the inverse process and the lack of a proper objective loss function to overcome the vanishing gradient problem. We propose the use of conditional least-squares generative adversarial networks (CLS-GAN) to delineate visual objects and solve these problems. We trained the CLS-GAN network for semantic segmentation to discriminate dense prediction information drawn either from training images or from generative networks. We show that the optimal objective function of CLS-GAN is a special class of f-divergence and yields a generator that lies on the decision boundary of the discriminator, which mitigates the vanishing gradient. We also demonstrate the effectiveness of the proposed architecture at translating images from label maps in the learning process. Experiments on a limited number of high-resolution images, including close-range and remote sensing datasets, indicate that the proposed method improves semantic segmentation accuracy and can simultaneously generate high-quality images from label maps.
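    The least-squares adversarial objective underlying CLS-GAN replaces the usual log-loss with a quadratic penalty, which keeps gradients alive for samples far from the decision boundary. A framework-free numpy sketch of the two losses, with the common 0/1/1 label coding assumed here:

```python
# Least-squares GAN losses over discriminator scores: the discriminator
# regresses real samples toward b=1 and fakes toward a=0; the generator pushes
# fake scores toward c=1. Because the penalty is quadratic, confidently wrong
# fakes still receive a large, informative gradient.
import numpy as np

def d_loss_ls(d_real, d_fake, a=0.0, b=1.0):
    return 0.5 * np.mean((d_real - b) ** 2) + 0.5 * np.mean((d_fake - a) ** 2)

def g_loss_ls(d_fake, c=1.0):
    return 0.5 * np.mean((d_fake - c) ** 2)

# A perfect discriminator (reals scored 1, fakes scored 0) has zero loss:
assert d_loss_ls(np.ones(4), np.zeros(4)) == 0.0
```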

  1. Segmented-field radiography in scoliosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniel, W.W.; Barnes, G.T.; Nasca, R.J.

    1985-02-01

    A method of scoliosis imaging using segmented fields is presented. The method is advantageous for patients requiring serial radiographic monitoring, as it results in markedly reduced radiation doses to critical organs, particularly the breast. Absorbed dose to the breast was measured to be 8.8 mrad (88 µGy) for a full-field examination and 0.051 mrad (5.1 µGy) for the segmented-field study. The segmented-field technique also results in improved image quality. Experience with 53 studies in 23 patients is reported.
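    As a unit check on the doses above: 1 rad = 0.01 Gy, so 1 mrad = 10 µGy.

```python
# mrad -> microgray conversion: 1 rad = 0.01 Gy, hence 1 mrad = 10 uGy.
def mrad_to_ugy(mrad):
    return mrad * 10.0

# Full-field examination figure from the abstract:
assert abs(mrad_to_ugy(8.8) - 88.0) < 1e-9
```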

  2. Automated Segmentation Errors When Using Optical Coherence Tomography to Measure Retinal Nerve Fiber Layer Thickness in Glaucoma.

    PubMed

    Mansberger, Steven L; Menda, Shivali A; Fortune, Brad A; Gardiner, Stuart K; Demirel, Shaban

    2017-02-01

    To characterize the error of optical coherence tomography (OCT) measurements of retinal nerve fiber layer (RNFL) thickness when using automated retinal layer segmentation algorithms without manual refinement. Cross-sectional study. This study was set in a glaucoma clinical practice, and the dataset included 3490 scans from 412 eyes of 213 individuals with a diagnosis of glaucoma or glaucoma suspect. We used spectral domain OCT (Spectralis) to measure RNFL thickness in a 6-degree peripapillary circle, and exported the native "automated segmentation only" results. In addition, we exported the results after "manual refinement" to correct errors in the automated segmentation of the anterior (internal limiting membrane) and the posterior boundary of the RNFL. Our outcome measures included differences in RNFL thickness and glaucoma classification (i.e., normal, borderline, or outside normal limits) between scans with automated segmentation only and scans using manual refinement. Automated segmentation only resulted in a thinner global RNFL thickness (1.6 μm thinner, P < .001) when compared to manual refinement. When adjusted by operator, a multivariate model showed increased differences with decreasing RNFL thickness (P < .001), decreasing scan quality (P < .001), and increasing age (P < .03). Manual refinement changed 298 of 3486 (8.5%) of scans to a different global glaucoma classification, wherein 146 of 617 (23.7%) of borderline classifications became normal. Superior and inferior temporal clock hours had the largest differences. Automated segmentation without manual refinement resulted in reduced global RNFL thickness and overestimated the classification of glaucoma. Differences increased in eyes with a thinner RNFL, older age, and decreased scan quality. Operators should inspect and manually refine OCT retinal layer segmentation when assessing RNFL thickness in the management of patients with glaucoma. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Precision segmented reflector, figure verification sensor

    NASA Technical Reports Server (NTRS)

    Manhart, Paul K.; Macenka, Steve A.

    1989-01-01

    The Precision Segmented Reflector (PSR) program currently under way at the Jet Propulsion Laboratory is a test bed and technology demonstration program designed to develop and study the structural and material technologies required for lightweight, precision segmented reflectors. A Figure Verification Sensor (FVS) designed to monitor the active control system of the segments is described, a best-fit surface is defined, and the image and wavefront quality of the assembled array of reflecting panels is assessed.

  4. Noninvasive Fetal Electrocardiography Part II: Segmented-Beat Modulation Method for Signal Denoising

    PubMed Central

    Agostinelli, Angela; Sbrollini, Agnese; Burattini, Luca; Fioretti, Sandro; Di Nardo, Francesco; Burattini, Laura

    2017-01-01

    Background: Fetal well-being evaluation may be accomplished by monitoring cardiac activity through fetal electrocardiography. Direct fetal electrocardiography (acquired through scalp electrodes) is the gold standard, but its invasiveness limits its clinical applicability. Instead, clinical use of indirect fetal electrocardiography (acquired through abdominal electrodes) is limited by its poor signal quality. Objective: The aim of this study was to evaluate the suitability of the Segmented-Beat Modulation Method to denoise indirect fetal electrocardiograms in order to achieve a signal quality at least comparable to that of direct recordings. Method: Direct and indirect recordings, simultaneously acquired from 5 pregnant women during labor, were filtered with the Segmented-Beat Modulation Method and correlated in order to assess their morphological correspondence. Signal-to-noise ratio was used to quantify their quality. Results: Amplitude was higher in direct than indirect fetal electrocardiograms (median: 104 µV vs. 22 µV; P=7.66·10⁻⁴), whereas noise was comparable (median: 70 µV vs. 49 µV, P=0.45). Moreover, fetal electrocardiogram amplitude was significantly higher than the affecting noise in direct recordings (P=3.17·10⁻²) and significantly lower in indirect recordings (P=1.90·10⁻³). Consequently, the signal-to-noise ratio was initially higher for direct than indirect recordings (median: 3.3 dB vs. -2.3 dB; P=3.90·10⁻³), but became lower after denoising of the indirect ones (median: 9.6 dB; P=9.84·10⁻⁴). Eventually, direct and indirect recordings were highly correlated (median: ρ=0.78; P<10⁻²⁰⁸), indicating that the two electrocardiograms were morphologically equivalent. Conclusion: The Segmented-Beat Modulation Method is particularly useful for denoising indirect fetal electrocardiograms and may contribute to the spread of this noninvasive technique in clinical practice. PMID:28567129
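    The signal-to-noise figures above are amplitude ratios expressed in decibels, SNR = 20·log10(signal/noise). A small sketch using the reported median amplitudes:

```python
# SNR in dB from signal and noise amplitudes (both in microvolts here).
import math

def snr_db(signal_uv, noise_uv):
    return 20 * math.log10(signal_uv / noise_uv)

# Direct recording medians (104 uV signal, 70 uV noise) give roughly +3.4 dB,
# close to the reported 3.3 dB median; the indirect medians (22 uV over 49 uV)
# give a negative SNR, i.e. the noise exceeds the fetal signal:
assert 3.0 < snr_db(104, 70) < 3.5
assert snr_db(22, 49) < 0
```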

  5. Video rate color region segmentation for mobile robotic applications

    NASA Astrophysics Data System (ADS)

    de Cabrol, Aymeric; Bonnin, Patrick J.; Hugel, Vincent; Blazevic, Pierre; Chetto, Maryline

    2005-08-01

    Color regions may be an interesting image feature to extract for visual tasks in robotics, such as navigation and obstacle avoidance. But whereas numerous methods are used for vision systems embedded on robots, only a few use this kind of segmentation, mainly because of the processing duration. In this paper, we propose a new real-time (i.e., video rate) color region segmentation followed by a robust color classification and a merging of regions, dedicated to various applications such as the RoboCup four-legged league or an industrial conveyor wheeled robot. The performance of this algorithm and a comparison with other methods, in terms of result quality and processing time, are provided. For better-quality results, the obtained speed-up is between 2 and 4; for same-quality results, it is up to 10. We also present the outlines of the Dynamic Vision System of the CLEOPATRE Project, for which this segmentation has been developed, and the Clear Box Methodology, which allowed us to create the new color region segmentation from the evaluation and knowledge of other well-known segmentations.

  6. Two-Phase and Graph-Based Clustering Methods for Accurate and Efficient Segmentation of Large Mass Spectrometry Images.

    PubMed

    Dexter, Alex; Race, Alan M; Steven, Rory T; Barnes, Jennifer R; Hulme, Heather; Goodwin, Richard J A; Styles, Iain B; Bunch, Josephine

    2017-11-07

    Clustering is widely used in MSI to segment anatomical features and differentiate tissue types, but existing approaches are both CPU and memory-intensive, limiting their application to small, single data sets. We propose a new approach that uses a graph-based algorithm with a two-phase sampling method that overcomes this limitation. We demonstrate the algorithm on a range of sample types and show that it can segment anatomical features that are not identified using commonly employed algorithms in MSI, and we validate our results on synthetic MSI data. We show that the algorithm is robust to fluctuations in data quality by successfully clustering data with a designed-in variance using data acquired with varying laser fluence. Finally, we show that this method is capable of generating accurate segmentations of large MSI data sets acquired on the newest generation of MSI instruments and evaluate these results by comparison with histopathology.

  7. Factors influencing trust in doctors: a community segmentation strategy for quality improvement in healthcare

    PubMed Central

    Gopichandran, Vijayaprasad; Chetlapalli, Satish Kumar

    2013-01-01

    Background Trust is a forward-looking covenant between the patient and the doctor in which the patient optimistically accepts his/her vulnerability. Trust is known to improve clinical outcomes. Objectives To explore the factors that determine patients' trust in doctors and to segment the community based on the factors which drive their trust. Setting Resource-poor urban and rural settings in Tamil Nadu, a state in southern India. Participants A questionnaire was administered to a sample of 625 adult community-dwelling respondents from four districts of Tamil Nadu, India, chosen by a multistage sampling strategy. Outcome measures The outcomes were to understand the main domains of factors influencing trust in doctors and to segment the community based on which of these domains predominantly influenced their trust. Results Factor analysis revealed five main categories, namely comfort with the doctor, doctor with personal involvement with the patient, behaviourally competent doctor, doctor with a simple appearance and culturally competent doctor, which together explained 49.3% of the total variance. Using k-means cluster analysis, the respondents were segmented into four groups: those with 'comfort-based trust'; those with 'emotionally assessed trust', who were predominantly older and belonged to lower socioeconomic status; those with 'personal trust', who were younger people from higher socioeconomic strata of the community; and those with 'objectively assessed trust', who were younger women. Conclusions Trust in doctors seems to be influenced by the doctor's behaviour, perceived comfort levels, personal involvement with the patient, and to a lesser extent by cultural competence and the doctor's physical appearance. On the basis of these dimensions, the community can be segmented into distinct groups, and trust building can happen in a strategic manner which may lead to improvement in the perceived quality of care. PMID:24302512
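    The community segmentation above rests on k-means clustering of factor scores. A minimal numpy implementation of Lloyd's algorithm; the 2-D points and k = 2 below are illustrative, not the study's data:

```python
# Plain k-means (Lloyd's algorithm): alternate between assigning each point to
# its nearest centre and recomputing each centre as the mean of its points.
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Distance of every point to every centre, then nearest-centre labels.
        labels = np.linalg.norm(points[:, None] - centers, axis=2).argmin(axis=1)
        centers = np.stack([points[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels, _ = kmeans(pts, k=2)
# The two tight pairs end up in different clusters:
assert labels[0] == labels[1] and labels[2] == labels[3] and labels[0] != labels[2]
```

In the study's setting the rows would be respondents and the columns their scores on the five trust factors.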

  8. Blood vessel segmentation algorithms - Review of methods, datasets and evaluation metrics.

    PubMed

    Moccia, Sara; De Momi, Elena; El Hadji, Sara; Mattos, Leonardo S

    2018-05-01

    Blood vessel segmentation is a topic of high interest in medical image analysis, since the analysis of vessels is crucial for diagnosis, treatment planning and execution, and evaluation of clinical outcomes in different fields, including laryngology, neurosurgery and ophthalmology. Automatic or semi-automatic vessel segmentation can support clinicians in performing these tasks. Different medical imaging techniques are currently used in clinical practice, and an appropriate choice of the segmentation algorithm is mandatory to deal with the characteristics of the adopted imaging technique (e.g. resolution, noise and vessel contrast). This paper aims at reviewing the most recent and innovative blood vessel segmentation algorithms. Among the algorithms and approaches considered, we investigated in depth the most novel blood vessel segmentation methods, including machine-learning, deformable-model, and tracking-based approaches. This paper analyzes more than 100 articles focused on blood vessel segmentation methods. For each analyzed approach, summary tables are presented reporting the imaging technique used, the anatomical region and the performance measures employed. Benefits and disadvantages of each method are highlighted. Despite the constant progress and efforts in the field, several issues still need to be overcome. A relevant limitation is the segmentation of pathological vessels. Unfortunately, no consistent research effort has been devoted to this issue yet. Research is needed because some of the main assumptions made for healthy vessels (such as linearity and circular cross-section) do not hold in pathological tissues, which instead require new vessel model formulations. Moreover, image intensity drops, noise and low contrast still represent an important obstacle for the achievement of a high-quality enhancement. This is particularly true for optical imaging, where the image quality is usually lower in terms of noise and contrast with respect to magnetic

  9. Automated brain tumor segmentation using spatial accuracy-weighted hidden Markov Random Field.

    PubMed

    Nie, Jingxin; Xue, Zhong; Liu, Tianming; Young, Geoffrey S; Setayesh, Kian; Guo, Lei; Wong, Stephen T C

    2009-09-01

    A variety of algorithms have been proposed for brain tumor segmentation from multi-channel sequences, however, most of them require isotropic or pseudo-isotropic resolution of the MR images. Although co-registration and interpolation of low-resolution sequences, such as T2-weighted images, onto the space of the high-resolution image, such as T1-weighted image, can be performed prior to the segmentation, the results are usually limited by partial volume effects due to interpolation of low-resolution images. To improve the quality of tumor segmentation in clinical applications where low-resolution sequences are commonly used together with high-resolution images, we propose the algorithm based on Spatial accuracy-weighted Hidden Markov random field and Expectation maximization (SHE) approach for both automated tumor and enhanced-tumor segmentation. SHE incorporates the spatial interpolation accuracy of low-resolution images into the optimization procedure of the Hidden Markov Random Field (HMRF) to segment tumor using multi-channel MR images with different resolutions, e.g., high-resolution T1-weighted and low-resolution T2-weighted images. In experiments, we evaluated this algorithm using a set of simulated multi-channel brain MR images with known ground-truth tissue segmentation and also applied it to a dataset of MR images obtained during clinical trials of brain tumor chemotherapy. The results show that more accurate tumor segmentation results can be obtained by comparing with conventional multi-channel segmentation algorithms.

  10. Simulation of nutrient and sediment concentrations and loads in the Delaware inland bays watershed: Extension of the hydrologic and water-quality model to ungaged segments

    USGS Publications Warehouse

    Gutierrez-Magness, Angelica L.

    2006-01-01

    Rapid population increases, agriculture, and industrial practices have been identified as important sources of excessive nutrients and sediments in the Delaware Inland Bays watershed. The amount and effect of excessive nutrients and sediments in the Inland Bays watershed have been well documented by the Delaware Geological Survey, the Delaware Department of Natural Resources and Environmental Control, the U.S. Environmental Protection Agency's National Estuary Program, the Delaware Center for Inland Bays, the University of Delaware, and other agencies. This documentation and data previously were used to develop a hydrologic and water-quality model of the Delaware Inland Bays watershed to simulate nutrients and sediment concentrations and loads, and to calibrate the model by comparing concentrations and streamflow data at six stations in the watershed over a limited period of time (October 1998 through April 2000). Although the model predictions of nutrient and sediment concentrations for the calibrated segments were fairly accurate, the predictions for the 28 ungaged segments located near tidal areas, where stream data were not available, were above the range of values measured in the area. The cooperative study established in 2000 by the Delaware Department of Natural Resources and Environmental Control, the Delaware Geological Survey, and the U.S. Geological Survey was extended to evaluate the model predictions in ungaged segments and to ensure that the model, developed as a planning and management tool, could accurately predict nutrient and sediment concentrations within the measured range of values in the area. The evaluation of the predictions was limited to the period of calibration (1999) of the 2003 model. To develop estimates on ungaged watersheds, parameter values from calibrated segments are transferred to the ungaged segments; however, accurate predictions are unlikely where parameter transference is subject to error. The unexpected nutrient and

  11. [In Vitro Evaluation of the Optical Quality of Segmental Refractive Multifocal Intraocular Lenses].

    PubMed

    Yildirim, Timur Mert; Auffarth, Gerd Uwe; Tandogan, Tamer; Liebing, Stephanie; Labuz, Grzegorz; Choi, Chul Young; Khoramnia, Ramin

    2017-11-08

    In customised patient care, it is important to know the optical quality of different intraocular lenses (IOL). In this study, the optical quality of three segmental intraocular lenses was compared. The LENTIS Comfort LS-313 MF15, LENTIS Mplus X LS-313 MF30 and LENTIS High Add IOL LS-313 MF80 (Oculentis, Berlin, Germany), each with a far power of + 21 D, were analysed on the optical bench OptiSpheric IOL PRO (Trioptics GmbH, Wedel, Germany). The lenses have almost the same optical design but differ in the power of the near segment: the MF15 has a + 1.5 D addition to improve vision at intermediate distances, the MF30 has a near addition of + 3 D and the MF80 has a near addition of + 8 D. The modulation transfer function area (MTFa) and the Strehl ratio were examined for apertures of 3 mm (photopic) and 4.5 mm (mesopic). The MTFa values for the far focus are 33.34/30.80/51.53 (MF15/MF30/MF80) with an aperture of 3 mm and 25.38/22.52/43.15 for 4.5 mm. The MTFa values for the intermediate focus are 29.85/16.21/6.25 for a 3 mm aperture and 23.92/8.05/3.08 for 4.5 mm. The MTFa values for the near focus are 9.75/21.49/33.12 for an aperture of 3 mm and 4.95/22.70/31.68 for 4.5 mm. The Strehl ratio of the far focus is 0.34/0.30/0.52 for an aperture of 3 mm and 0.24/0.22/0.43 for 4.5 mm. For the intermediate focus, the Strehl ratio is 0.30/0.17/0.07 for an aperture of 3 mm and 0.24/0.08/0.03 for 4.5 mm. The Strehl ratio of the near focus is 0.10/0.22/0.33 for an aperture of 3 mm and 0.05/0.23/0.32 for 4.5 mm. We confirmed that the near addition influences the optical quality of segmental bifocal intraocular lenses. For the far focus, the results of the MF15 and MF30 are similar. At intermediate distances, the MF15 achieves the best results. For near distances, the MF30 achieves better optical values than the MF15. The MF80, which has been designed for patients with maculopathies, achieves good results for far and near distances.
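    The Strehl ratio reported above is the peak intensity of the lens's point spread function relative to the peak of an ideal, diffraction-limited PSF carrying the same total energy. A small sketch with illustrative 1-D PSFs:

```python
# Strehl ratio: peak of the measured PSF over peak of the ideal PSF, after
# normalising both to unit total energy so only the shape matters.
import numpy as np

def strehl_ratio(psf_measured, psf_ideal):
    m = psf_measured / psf_measured.sum()
    i = psf_ideal / psf_ideal.sum()
    return m.max() / i.max()

ideal = np.array([0.0, 1.0, 8.0, 1.0, 0.0])    # sharp, diffraction-limited peak
blurred = np.array([0.5, 2.0, 5.0, 2.0, 0.5])  # same energy spread more widely
assert 0 < strehl_ratio(blurred, ideal) < 1
```

For a multifocal IOL measured at one focus, light diverted to the other focal zones spreads the PSF, which is why the Strehl values above trade off between far, intermediate and near.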

  12. A model to identify high crash road segments with the dynamic segmentation method.

    PubMed

    Boroujerdian, Amin Mirza; Saffarzadeh, Mahmoud; Yousefi, Hassan; Ghassemian, Hassan

    2014-12-01

    Currently, high social and economic costs, in addition to physical and mental consequences, put road safety among the most important issues. This paper aims at presenting a novel approach capable of identifying the location as well as the length of high-crash road segments. It focuses on the location of accidents occurring along the road and their effective regions. In other words, due to applicability and budget limitations in improving the safety of road segments, it is not possible to address all high-crash road segments. Therefore, it is of utmost importance to identify high-crash road segments and their real length to be able to prioritize safety improvements. In this paper, after evaluating the deficiencies of current road segmentation models, the different kinds of errors caused by these methods are addressed. One of the main deficiencies of these models is that they cannot identify the length of high-crash road segments. In this paper, identifying the length of high-crash road segments (corresponding to the arrangement of accidents along the road) is achieved by converting accident data to the road response signal of through traffic with a dynamic model based on wavelet theory. The significant advantage of the presented method is multi-scale segmentation. In other words, this model identifies high-crash road segments of different lengths and can also recognize small segments within long segments. Applying the presented model to a real case to identify 10-20 percent of high-crash road segments showed an improvement of 25-38 percent relative to existing methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
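    The wavelet model itself is not reproduced here, but its key idea, multi-scale detection of high-crash stretches, can be sketched with sliding windows of several lengths over per-kilometre crash counts. The data, scales and threshold below are illustrative:

```python
# Multi-scale stand-in for the wavelet analysis: scan crash counts with windows
# of several lengths and flag every window whose mean count exceeds a multiple
# of the route-wide mean. Short windows find short black spots; longer windows
# can surface extended high-crash stretches.
import numpy as np

def high_crash_segments(counts, scales=(1, 2, 4), factor=2.0):
    counts = np.asarray(counts, dtype=float)
    mean = counts.mean()
    hits = []
    for w in scales:
        sums = np.convolve(counts, np.ones(w), mode="valid")  # window totals
        for start, s in enumerate(sums):
            if s / w > factor * mean:
                hits.append((start, start + w))  # [start, end) in km
    return hits

counts = [1, 0, 9, 8, 1, 0, 1, 0]
segs = high_crash_segments(counts)
# The short high-crash stretch around km 2-4 is found at more than one scale:
assert (2, 3) in segs and (2, 4) in segs
```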

  13. A method of setting limits for the purpose of quality assurance

    NASA Astrophysics Data System (ADS)

    Sanghangthum, Taweap; Suriyapee, Sivalee; Kim, Gwe-Ya; Pawlicki, Todd

    2013-10-01

    The result from any quality assurance measurement needs to be checked against some limits for acceptability. There are two types of limits: those that define clinical acceptability (action limits) and those that are meant to serve as a warning that the measurement is close to the action limits (tolerance limits). Currently, there is no standard procedure to set these limits. In this work, we propose an operational procedure to set tolerance limits and action limits. The approach to establishing the limits is based on techniques of quality engineering using control charts and a process capability index. The method differs for tolerance limits and action limits, with action limits categorized into those that are specified and those that are unspecified. The procedure is first to ensure process control using the I-MR control charts. Then, the tolerance limits are set equal to the control chart limits on the I chart. Action limits are determined using the Cpm process capability index, with the requirement that the process be in control. The limits from the proposed procedure are compared to an existing or conventional method. Four examples are investigated: two of volumetric modulated arc therapy (VMAT) point dose quality assurance (QA) and two of routine linear accelerator output QA. The tolerance limits range from about 6% larger to 9% smaller than conventional action limits for the VMAT QA cases. For the linac output QA, tolerance limits are about 60% smaller than conventional action limits. The operational procedure described in this work is based on established quality management tools and provides a systematic guide to setting up tolerance and action limits for different equipment and processes.
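    The two quality-engineering tools named above can be sketched directly: I-chart limits are set from the mean moving range using the standard individuals-chart constant 2.66 (3/d2 with d2 = 1.128), and Cpm penalises both spread and distance from target. The QA data and specification limits below are illustrative:

```python
# I-chart control limits from the moving range, and the Cpm capability index.
import statistics

def i_chart_limits(x):
    mr = [abs(b - a) for a, b in zip(x, x[1:])]  # moving ranges between readings
    mr_bar = statistics.mean(mr)
    centre = statistics.mean(x)
    # 2.66 = 3/d2 (d2 = 1.128 for subgroups of size 2) is the standard constant.
    return centre - 2.66 * mr_bar, centre + 2.66 * mr_bar

def cpm(x, lsl, usl, target):
    mu = statistics.mean(x)
    var = statistics.pvariance(x)
    tau = (var + (mu - target) ** 2) ** 0.5  # penalises an off-target mean
    return (usl - lsl) / (6 * tau)

data = [99.8, 100.2, 100.0, 99.9, 100.1, 100.0]  # e.g. daily output in % of nominal
lcl, ucl = i_chart_limits(data)
assert lcl < min(data) and ucl > max(data)       # process looks in control
assert cpm(data, lsl=97.0, usl=103.0, target=100.0) > 1.0
```

In the proposed procedure, (lcl, ucl) would serve as the tolerance limits, and the action limits would be derived from the Cpm requirement.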

  14. Automated glioblastoma segmentation based on a multiparametric structured unsupervised classification.

    PubMed

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V; Robles, Montserrat; Aparici, F; Martí-Bonmatí, L; García-Gómez, Juan M

    2015-01-01

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of supervised methods. In this sense, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. As non-structured algorithms, we evaluated K-means, Fuzzy K-means and the Gaussian Mixture Model (GMM), whereas as structured classification algorithms we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after the segmentations. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves on the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation.

  16. Unsupervised motion-based object segmentation refined by color

    NASA Astrophysics Data System (ADS)

    Piek, Matthijs C.; Braspenning, Ralph; Varekamp, Chris

    2003-06-01

    For various applications, such as data compression, structure from motion, medical imaging and video enhancement, there is a need for an algorithm that divides video sequences into independently moving objects. Because our focus is on video enhancement and structure from motion for consumer electronics, we strive for a low-complexity solution. For still images, several approaches exist based on colour, but these fall short in both speed and segmentation quality. For instance, colour-based watershed algorithms produce a so-called oversegmentation, with many segments covering each single physical object. Other colour segmentation approaches exist which somehow limit the number of segments to reduce this oversegmentation problem. However, this often results in inaccurate edges or even missed objects. Most likely, colour is an inherently insufficient cue for real-world object segmentation, because real-world objects can display complex combinations of colours. For video sequences, however, an additional cue is available, namely the motion of objects. When different objects in a scene have different motion, the motion cue alone is often enough to reliably distinguish objects from one another and from the background. However, because of the limited resolution of efficient motion estimators, like the 3DRS block matcher, the resulting segmentation is not at pixel resolution but at block resolution. Existing pixel-resolution motion estimators are more sensitive to noise, suffer more from aperture problems, correspond less to the true motion of objects when compared to block-based approaches, or are too computationally expensive. From its tendency to oversegmentation, it is apparent that colour segmentation is particularly effective near edges of homogeneously coloured areas. On the other hand, block-based true motion estimation is particularly effective in heterogeneous areas, because heterogeneous areas improve the chance a block is unique and thus decrease the
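    Block-based motion estimation of the kind referred to above (3DRS is a block matcher) can be sketched as a sum-of-absolute-differences search: for a block in the previous frame, find the displacement within a search window of the current frame that minimises the SAD. Frames, block size and search range below are illustrative:

```python
# Exhaustive SAD block matching for a single block. Real estimators like 3DRS
# test only a few candidate vectors per block instead of a full search.
import numpy as np

def match_block(prev, curr, top, left, size=2, search=2):
    block = prev[top:top + size, left:left + size]
    best, best_pos = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= curr.shape[0] - size and 0 <= x <= curr.shape[1] - size:
                sad = np.abs(curr[y:y + size, x:x + size] - block).sum()
                if best is None or sad < best:
                    best, best_pos = sad, (dy, dx)
    return best_pos  # motion vector (dy, dx)

prev = np.zeros((6, 6)); prev[1:3, 1:3] = 1.0   # bright 2x2 patch
curr = np.zeros((6, 6)); curr[2:4, 3:5] = 1.0   # same patch moved by (+1, +2)
assert match_block(prev, curr, top=1, left=1) == (1, 2)
```

Segmenting the resulting block-resolution vector field, then refining the segment borders with colour, is the combination the paper pursues.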

  17. James Webb Space Telescope Optical Simulation Testbed: Segmented Mirror Phase Retrieval Testing

    NASA Astrophysics Data System (ADS)

    Laginja, Iva; Egron, Sylvain; Brady, Greg; Soummer, Remi; Lajoie, Charles-Philippe; Bonnefois, Aurélie; Long, Joseph; Michau, Vincent; Choquet, Elodie; Ferrari, Marc; Leboulleux, Lucie; Mazoyer, Johan; N’Diaye, Mamadou; Perrin, Marshall; Petrone, Peter; Pueyo, Laurent; Sivaramakrishnan, Anand

    2018-01-01

    The James Webb Space Telescope (JWST) Optical Simulation Testbed (JOST) is a hardware simulator designed to produce JWST-like images. A model of the JWST three-mirror anastigmat is realized with three lenses in the form of a Cooke triplet, which provides JWST-like optical quality over a field equivalent to a NIRCam module, and an Iris AO segmented mirror with hexagonal elements stands in for the JWST segmented primary. This setup successfully produces images extremely similar to NIRCam images from cryotesting in terms of PSF morphology and sampling relative to the diffraction limit. The testbed is used for staff training of the wavefront sensing and control (WFS&C) team and for independent analysis of WFS&C scenarios of the JWST. Algorithms like geometric phase retrieval (GPR) that may be used in flight, as well as potential upgrades to JWST WFS&C, will be explored. We report on the current status of the testbed after alignment, implementation of the segmented mirror, and testing of phase retrieval techniques. This optical bench complements other work at the Makidon laboratory at the Space Telescope Science Institute, including the investigation of coronagraphy for segmented-aperture telescopes. Beyond JWST, we intend to use JOST for WFS&C studies for future large segmented space telescopes such as LUVOIR.

  18. Segmental stiff skin syndrome (SSS): A distinct clinical entity.

    PubMed

    Myers, Kathryn L; Mir, Adnan; Schaffer, Julie V; Meehan, Shane A; Orlow, Seth J; Brinster, Nooshin K

    2016-07-01

    Stiff skin syndrome (SSS) is a noninflammatory, fibrosing condition of the skin, often affecting the limb girdles. We present 4 new patients with SSS with largely unilateral, segmental distribution. To date, reported cases of SSS have been grouped based on generally accepted clinical and histopathologic findings. The purpose of this study was to analyze differences in clinical and histopathologic findings between previously reported SSS cases. This is a retrospective review of 4 new cases and 48 previously published cases of SSS obtained from PubMed search. Of 52 total cases, 18 (35%) were segmentally distributed and 34 (65%) were widespread. The average age of onset was 4.1 years versus 1.6 years for segmental versus widespread SSS, respectively. Limitation in joint mobility affected 44% of patients with segmental SSS and 97% of patients with widespread SSS. Histopathologic findings were common between the 2 groups. This was a retrospective study of previously published cases limited by the completeness and accuracy of the reviewed cases. We propose a distinct clinical entity, segmental SSS, characterized by a segmental distribution, later age of onset, and less severe functional limitation. Both segmental SSS and widespread SSS share common diagnostic histopathologic features. Copyright © 2016 American Academy of Dermatology, Inc. All rights reserved.

  19. Efficient segmentation of 3D fluoroscopic datasets from mobile C-arm

    NASA Astrophysics Data System (ADS)

    Styner, Martin A.; Talib, Haydar; Singh, Digvijay; Nolte, Lutz-Peter

    2004-05-01

    The emerging mobile fluoroscopic 3D technology linked with a navigation system combines the advantages of CT-based and C-arm-based navigation. The intra-operative, automatic segmentation of 3D fluoroscopy datasets enables the combined visualization of surgical instruments and anatomical structures for enhanced planning, surgical eye-navigation and landmark digitization. We performed a thorough evaluation of several segmentation algorithms using a large set of data from different anatomical regions and man-made phantom objects. The analyzed segmentation methods include automatic thresholding, morphological operations, an adapted region growing method and an implicit 3D geodesic snake method. In regard to computational efficiency, all methods performed within acceptable limits on a standard desktop PC (30 s to 5 min). In general, the best results were obtained with datasets from long bones, followed by extremities. The segmentations of spine, pelvis and shoulder datasets were generally of poorer quality. As expected, the threshold-based methods produced the worst results. The combined thresholding and morphological operations method was considered appropriate for a smaller set of clean images. The region growing method performed much better in regard to computational efficiency and segmentation correctness, especially for datasets of joints and of the lumbar and cervical spine regions. The less efficient implicit snake method was additionally able to remove wrongly segmented skin tissue regions. This study presents a step towards efficient intra-operative segmentation of 3D fluoroscopy datasets, but there is room for improvement. Next, we plan to study model-based approaches for datasets from the knee and hip joint region, which would thereafter be applied to all anatomical regions in our continuing development of an ideal segmentation procedure for 3D fluoroscopic images.
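
    The adapted region-growing method itself is not specified in the abstract; a minimal intensity-based region grower (our generic sketch, assuming a single seed point and a fixed tolerance) illustrates the principle:

```python
from collections import deque
import numpy as np

# Generic region-growing sketch: starting from `seed`, accept
# 4-connected neighbours whose value is within `tol` of the seed value.
def region_grow(img, seed, tol=10):
    h, w = img.shape
    ref = img[seed]
    mask = np.zeros((h, w), dtype=bool)
    q = deque([seed])
    mask[seed] = True
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and abs(int(img[ny, nx]) - int(ref)) <= tol:
                mask[ny, nx] = True     # pixel joins the region
                q.append((ny, nx))
    return mask
```

A 3D version only needs a third loop axis and 6-connectivity; the paper's adaptation presumably adds more robust acceptance criteria.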

  20. Membrane Shell Reflector Segment Antenna

    NASA Technical Reports Server (NTRS)

    Fang, Houfei; Im, Eastwood; Lin, John; Moore, James

    2012-01-01

    The mesh reflector is the only type of large, in-space deployable antenna that has successfully flown in space. However, state-of-the-art large deployable mesh antenna systems are RF-frequency-limited by both global shape accuracy and local surface quality. The limitations of mesh reflectors stem from two factors. First, at higher frequencies, the porosity and surface roughness of the mesh result in loss and scattering of the signal. Second, the mesh material does not have any bending stiffness and thus cannot be formed into true parabolic (or other desired) shapes. To advance deployable reflector technology at high RF frequencies beyond the current state of the art, significant improvements need to be made in three major aspects: a high-stability and high-precision deployable truss; a continuously curved RF reflecting surface (both the function of the surface and its first derivative are continuous); and an RF reflecting surface made of a continuous material. To meet these three requirements, the Membrane Shell Reflector Segment (MSRS) antenna was developed.

  1. Anatomy-aware measurement of segmentation accuracy

    NASA Astrophysics Data System (ADS)

    Tizhoosh, H. R.; Othman, A. A.

    2016-03-01

    Quantifying the accuracy of segmentation and manual delineation of organs, tissue types and tumors in medical images is a necessary measurement that suffers from multiple problems. One major shortcoming of all accuracy measures is that they neglect the anatomical significance or relevance of different zones within a given segment. Hence, existing accuracy metrics measure the overlap of a given segment with the ground truth without any anatomical discrimination inside the segment. For instance, if we understand the rectal wall or urethral sphincter as anatomical zones, then current accuracy measures ignore their significance when they are applied to assess the quality of prostate gland segments. In this paper, we propose an anatomy-aware measurement scheme for segmentation accuracy in medical images. The idea is to create a "master gold" based on a consensus shape containing not just the outline of the segment but also the outlines of the internal zones, where existent or relevant. To apply this new approach to accuracy measurement, we introduce anatomy-aware extensions of both the Dice coefficient and the Jaccard index and investigate their effect using 500 synthetic prostate ultrasound images with 20 different segments for each image. We show that through anatomy-sensitive calculation of segmentation accuracy, namely by considering relevant anatomical zones, not only can the measurement of individual users change, but the ranking of users' segmentation skills may also require reordering.
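
    One plausible way to realize such an anatomy-aware overlap measure (our sketch, not necessarily the authors' exact formulation) is to weight each pixel by the importance of the anatomical zone it lies in; with uniform weights it reduces to the ordinary Dice coefficient:

```python
import numpy as np

# Zone-weighted Dice sketch: overlap is accumulated zone weight
# instead of pixel count, so errors in critical zones cost more.
def weighted_dice(seg, gold, zone_weights):
    # zone_weights: per-pixel weight map (higher = more critical zone)
    inter = zone_weights[(seg > 0) & (gold > 0)].sum()
    a = zone_weights[seg > 0].sum()
    b = zone_weights[gold > 0].sum()
    return 2 * inter / (a + b) if (a + b) else 1.0
```

Missing a single pixel in a heavily weighted zone lowers the score far more than missing the same pixel under uniform weights, which is exactly the discrimination the plain Dice coefficient lacks.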

  2. Gamifying Video Object Segmentation.

    PubMed

    Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela

    2017-10-01

    Video object segmentation can be considered one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to deal effectively with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; these limitations appear more evident when we compare the performance of automated methods with that of humans. However, manually segmenting objects in videos is largely impractical, as it requires a lot of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method which exploits, on the one hand, the capability of humans to correctly identify objects in visual scenes, and on the other hand, the collective human brainpower to solve challenging and large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, exploiting data provided by over 60 users, demonstrated that our method shows a better trade-off between annotation time and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.

  3. Interactive Tooth Separation from Dental Model Using Segmentation Field

    PubMed Central

    2016-01-01

    Tooth segmentation on a dental model is an essential step in computer-aided design systems for orthodontic virtual treatment planning. However, quickly and accurately identifying the cutting boundary that separates teeth from the dental model remains a challenge, owing to the varied geometrical shapes of teeth, complex tooth arrangements, differing dental model qualities, and varying degrees of crowding. Most previously presented segmentation approaches cannot balance fine segmentation results against simple, time-efficient operating procedures. In this article, we present a novel, effective and efficient framework that achieves tooth segmentation based on a segmentation field, obtained by solving a linear system defined by a discrete Laplace-Beltrami operator with Dirichlet boundary conditions. A set of contour lines is sampled from the smooth scalar field, and candidate cutting boundaries can be detected in concave regions with large variations of field data. The segmentation field's sensitivity to concave seams facilitates effective tooth partition and avoids the need for an appropriate curvature threshold value, which is unreliable in some cases. Our tooth segmentation algorithm is robust to low-quality dental models and effective on dental models with different levels of crowding. Experiments, including segmentation tests on dental models of varying complexity, experiments on dental meshes with different modeling resolutions and surface noise, and a comparison between our method and the morphologic skeleton segmentation method, demonstrate the effectiveness of our method. PMID:27532266
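
    The segmentation-field idea can be illustrated on a toy grid graph: pin Dirichlet values at a few vertices and solve the discrete Laplace system, yielding a smooth scalar field whose iso-contours are candidate cuts. Using a pixel grid as a stand-in for the mesh Laplace-Beltrami operator is our simplification of the paper's setup:

```python
import numpy as np

# Solve a discrete Laplace equation on an h x w grid with Dirichlet
# values pinned at the vertices listed in `fixed` (a dict (y, x) -> value).
def segmentation_field(h, w, fixed):
    n = h * w
    L = np.zeros((n, n))
    b = np.zeros(n)
    for y in range(h):
        for x in range(w):
            i = y * w + x
            if (y, x) in fixed:            # Dirichlet vertex: f(i) = value
                L[i, i] = 1.0
                b[i] = fixed[(y, x)]
                continue
            nbrs = [(y + dy, x + dx) for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            L[i, i] = len(nbrs)            # graph Laplacian row
            for ny, nx in nbrs:
                L[i, ny * w + nx] = -1.0
    return np.linalg.solve(L, b).reshape(h, w)
```

On a 1×5 strip with 0 pinned at one end and 1 at the other, the field is exactly linear, and the 0.5 iso-contour sits midway; on a mesh, the analogous contours hug the concave seams between teeth.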

  4. Building Roof Segmentation from Aerial Images Using a Line-and Region-Based Watershed Segmentation Technique

    PubMed Central

    Merabet, Youssef El; Meurie, Cyril; Ruichek, Yassine; Sbihi, Abderrahmane; Touahni, Raja

    2015-01-01

    In this paper, we present a novel strategy for roof segmentation from aerial images (orthophotoplans) based on the cooperation of edge- and region-based segmentation methods. The proposed strategy is composed of three major steps. The first one, called the pre-processing step, consists of simplifying the acquired image with an appropriate couple of invariant and gradient, optimized for the application, in order to limit the illumination changes (shadows, brightness, etc.) affecting the images. The second step is composed of two main parallel treatments: on the one hand, the simplified image is segmented by watershed regions. Even if this first segmentation provides good results in general, the image is often over-segmented. To alleviate this problem, an efficient region merging strategy adapted to the orthophotoplan particularities, based on a 2D modeling of roof ridges, is applied. On the other hand, the simplified image is segmented by watershed lines. The third step consists of integrating both watershed segmentation strategies into a single cooperative segmentation scheme in order to achieve satisfactory segmentation results. Tests have been performed on orthophotoplans containing 100 roofs of varying complexity, and the results are evaluated with the VINET criterion using ground-truth image segmentation. A comparison with five popular segmentation techniques from the literature demonstrates the effectiveness and reliability of the proposed approach. Indeed, we obtain a good segmentation rate of 96% with the proposed method, compared to 87.5% with statistical region merging (SRM), 84% with mean shift, 82% with color structure code (CSC), 80% with the efficient graph-based segmentation algorithm (EGBIS) and 71% with JSEG. PMID:25648706
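
    The region-merging stage can be caricatured without the paper's roof-ridge model: repeatedly merge the pair of 4-adjacent regions with the smallest mean-intensity difference until no pair falls below a tolerance. The criterion and tolerance here are our illustrative assumptions, not the authors':

```python
import numpy as np

# Toy region merging on a label map: greedily merge the most similar
# pair of 4-adjacent regions while their mean difference is below `tol`.
def merge_regions(labels, img, tol=20.0):
    labels = labels.copy()
    while True:
        ids = np.unique(labels)
        means = {s: img[labels == s].mean() for s in ids}
        best = None
        # scan horizontal and vertical pixel pairs for region adjacencies
        for shift in ((1, 0), (0, 1)):
            a = labels[shift[0]:, shift[1]:]
            b = labels[:labels.shape[0] - shift[0], :labels.shape[1] - shift[1]]
            for s, t in set(zip(a[a != b].ravel(), b[a != b].ravel())):
                d = abs(means[s] - means[t])
                if d < tol and (best is None or d < best[0]):
                    best = (d, s, t)
        if best is None:
            return labels                    # no pair similar enough
        labels[labels == best[1]] = best[2]  # merge the best pair
```

In an oversegmented watershed output, this collapses fragments of the same roof face while leaving genuinely different faces separate.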

  5. Segmented media and medium damping in microwave assisted magnetic recording

    NASA Astrophysics Data System (ADS)

    Bai, Xiaoyu; Zhu, Jian-Gang

    2018-05-01

    In this paper, we present a methodology of segmented media stack design for microwave-assisted magnetic recording. Through micro-magnetic modeling, it is demonstrated that an optimized media segmentation is able to yield a high signal-to-noise ratio even with limited ac field power. With proper segmentation, the ac field power can be utilized more efficiently, which alleviates the requirement for medium damping that has previously been considered a critical limitation. The micro-magnetic modeling also shows that, with segmentation optimization, the recording signal-to-noise ratio can have very little dependence on damping across different recording linear densities.

  6. Limited-preparation CT colonography in frail elderly patients: a feasibility study.

    PubMed

    Keeling, Aoife N; Slattery, Michael M; Leong, Sum; McCarthy, Eoghan; Susanto, Maja; Lee, Michael J; Morrin, Martina M

    2010-05-01

    Full colonic preparation can be onerous and may be poorly tolerated in frail elderly patients. The purpose of this study was to prospectively assess the image quality and diagnostic yield of limited-preparation CT colonography (CTC) in elderly patients with suspected colorectal cancer who were deemed medically unfit or unsuitable for colonoscopy. A prospective study was performed of 67 elderly patients with reduced functional status referred for CTC. Participants were prescribed a limited bowel preparation consisting of a low-residue diet for 3 days, 1 L of 2% oral diatrizoate meglumine (Gastrografin) 24 hours before CTC, and 1 L of 2% oral Gastrografin over the 2 hours immediately before CTC. No cathartic preparation was administered. All colonic segments were graded from 1 to 5 for image quality (1, unreadable; 2, poor; 3, equivocal; 4, good; 5, excellent) and reader confidence. Clinical and conventional colonoscopy follow-up findings were documented, and all colonic and extracolonic pathologic findings were documented. Overall image quality and reader confidence in the evaluation of the colon was rated good or excellent in 84% of the colonic segments. Colonic abnormalities were identified in 12 patients (18%), including four colonic tumors, two polyps, and seven colonic strictures. Incidental extraintestinal findings were detected in 43 patients (64%), including nine patients with lesions radiologically consistent with malignancy. Limited-preparation low-dose CTC is a feasible and useful minimally invasive technique with which to evaluate the colon and exclude gross pathology (mass lesions and polyps > 1 cm) in elderly patients with diminished performance status, yielding good to excellent image quality.

  7. Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images.

    PubMed

    Pereira, Sergio; Pinto, Adriano; Alves, Victor; Silva, Carlos A

    2016-05-01

    Among brain tumors, gliomas are the most common and aggressive, leading to a very short life expectancy in their highest grade. Thus, treatment planning is a key stage to improve the quality of life of oncological patients. Magnetic resonance imaging (MRI) is a widely used imaging technique to assess these tumors, but the large amount of data produced by MRI prevents manual segmentation in a reasonable time, limiting the use of precise quantitative measurements in clinical practice. So, automatic and reliable segmentation methods are required; however, the large spatial and structural variability among brain tumors makes automatic segmentation a challenging problem. In this paper, we propose an automatic segmentation method based on Convolutional Neural Networks (CNN), exploring small 3×3 kernels. The use of small kernels allows designing a deeper architecture, besides having a positive effect against overfitting, given the smaller number of weights in the network. We also investigated the use of intensity normalization as a pre-processing step, which, though not common in CNN-based segmentation methods, proved together with data augmentation to be very effective for brain tumor segmentation in MRI images. Our proposal was validated on the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013), obtaining simultaneously the first position for the complete, core, and enhancing regions in the Dice Similarity Coefficient metric (0.88, 0.83, 0.77) for the Challenge data set. Also, it obtained the overall first position on the online evaluation platform. We also participated in the on-site BRATS 2015 Challenge using the same model, obtaining second place, with Dice Similarity Coefficient metrics of 0.78, 0.65, and 0.75 for the complete, core, and enhancing regions, respectively.
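
    The small-kernel argument is easy to check by counting weights: two stacked 3×3 convolutions cover the same 5×5 receptive field as a single 5×5 convolution but use fewer parameters (assuming C input and output channels throughout, biases ignored):

```python
# Weight count of `layers` stacked kernel x kernel conv layers,
# each with `channels` input and output channels (no biases).
def conv_weights(kernel, layers, channels):
    return layers * (kernel * kernel * channels * channels)

C = 64
two_3x3 = conv_weights(3, 2, C)   # two 3x3 layers: 2 * 9 * C^2 weights
one_5x5 = conv_weights(5, 1, C)   # one 5x5 layer: 25 * C^2 weights
```

With C = 64 this is 73,728 versus 102,400 weights for the same receptive field, plus an extra nonlinearity in the deeper stack.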

  8. Photoreceptor Outer Segment on Internal Limiting Membrane after Macular Hole Surgery: Implications for Pathogenesis.

    PubMed

    Grinton, Michael E; Sandinha, Maria T; Steel, David H W

    2015-01-01

    This report presents a case which highlights key principles in the pathophysiology of macular holes. It has been hypothesized that anteroposterior (AP) and tangential vitreous traction on the fovea are the primary underlying factors causing macular holes [Nischal and Pearson; in Kanski and Bowling: Clinical Ophthalmology: A Systemic Approach, 2011, pp 629-631]. Spectral domain optical coherence tomography (OCT) has subsequently corroborated this theory in part but shown that AP vitreofoveal traction is the more common scenario [Steel and Lotery: Eye 2013;27:1-21]. This study was conducted as a single case report. A 63-year-old female presented to her optician with blurred and distorted vision in her left eye. OCT showed a macular hole with a minimum linear diameter of 370 µm, with persistent broad vitreofoveal attachment on both sides of the hole edges. The patient underwent combined left phacoemulsification and pars plana vitrectomy, internal limiting membrane (ILM) peel and gas injection. The ILM was examined by electron microscopy and showed the presence of a cone outer segment on the retinal side. Post-operative OCT at 11 weeks showed a closed hole with recovery of the foveal contour and good vision. Our case shows the presence of a photoreceptor outer segment on the retinal side of the ILM and reinforces the importance of tangential traction in the development of some macular holes. The case highlights the theory of transmission of inner retinal forces to the photoreceptors via Müller cells and how a full-thickness macular hole defect can occur in the absence of AP vitreomacular traction.

  9. Bayesian Fusion of Color and Texture Segmentations

    NASA Technical Reports Server (NTRS)

    Manduchi, Roberto

    2000-01-01

    In many applications one would like to use information from both color and texture features in order to segment an image. We propose a novel technique to combine "soft" segmentations computed for two or more features independently. Our algorithm merges models according to a mean entropy criterion and allows choosing the appropriate number of classes for the final grouping. This technique also makes it possible to improve the quality of supervised classification based on one feature (e.g., color) by merging information from unsupervised segmentation based on another feature (e.g., texture).
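
    A toy version of fusing two "soft" segmentations (assuming conditional independence of the features, which is our simplification of the paper's model-merging scheme) multiplies the per-pixel class posteriors and renormalizes; mean entropy then scores how peaked the fused grouping is:

```python
import numpy as np

# Fuse two per-pixel class posteriors (last axis = classes) under a
# naive conditional-independence assumption, then renormalize.
def fuse(p_color, p_texture):
    joint = p_color * p_texture
    joint /= joint.sum(axis=-1, keepdims=True)
    return joint

# Mean per-pixel entropy in bits: lower means a more decisive grouping.
def mean_entropy(p):
    return float(-(p * np.log2(np.clip(p, 1e-12, 1.0))).sum(axis=-1).mean())
```

Where the two cues agree, the fused posterior is sharper than either input, so its mean entropy drops, which is the kind of criterion the merging step can optimize.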

  10. Segment scheduling method for reducing 360° video streaming latency

    NASA Astrophysics Data System (ADS)

    Gudumasu, Srinivas; Asbun, Eduardo; He, Yong; Ye, Yan

    2017-09-01

    360° video is an emerging new format in the media industry enabled by the growing availability of virtual reality devices. It provides the viewer a new sense of presence and immersion. Compared to conventional rectilinear video (2D or 3D), 360° video poses a new and difficult set of engineering challenges on video processing and delivery. Enabling a comfortable and immersive user experience requires very high video quality and very low latency, while the large video file size poses a challenge to delivering 360° video at high quality and at scale. Conventionally, 360° video represented in equirectangular or other projection formats can be encoded as a single standards-compliant bitstream using existing video codecs such as H.264/AVC or H.265/HEVC. Such a method usually needs very high bandwidth to provide an immersive user experience, and at the client side much of that bandwidth, and the computational power used to decode the video, is wasted because the user only watches a small portion (i.e., the viewport) of the entire picture. Viewport-dependent 360° video processing and delivery approaches spend more bandwidth on the viewport than on non-viewports and are therefore able to reduce the overall transmission bandwidth. This paper proposes a dual-buffer segment scheduling algorithm for viewport-adaptive streaming methods to reduce latency when switching between high-quality viewports in 360° video streaming. The approach decouples the scheduling of viewport segments and non-viewport segments to ensure the viewport segment requested matches the latest user head orientation. A base layer buffer stores all lower-quality segments, and a viewport buffer stores high-quality viewport segments corresponding to the viewer's most recent head orientation. The scheduling scheme determines the viewport requesting time based on the buffer status and the head orientation. This paper also discusses how to deploy the proposed scheduling design for various viewport adaptive video
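
    The dual-buffer decision logic can be sketched as a small policy function; the buffer targets, the rule ordering and all names are our assumptions, not the paper's exact scheduler:

```python
# Toy dual-buffer scheduler: keep the low-quality base buffer full so
# playback never stalls, and request high-quality viewport segments as
# late as possible so they match the latest head orientation.
def next_request(base_level, vp_level, vp_matches_head,
                 base_target=10.0, vp_target=2.0):
    # base_level / vp_level: seconds of media buffered in each buffer
    if base_level < base_target:
        return "base"          # never risk a stall in the base layer
    if vp_level < vp_target or not vp_matches_head:
        return "viewport"      # refill, or refresh after a head turn
    return "idle"
```

Keeping the viewport buffer deliberately shallow is what bounds the switching latency: after a head turn, only a couple of seconds of stale high-quality segments can ever be in flight.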

  11. Segmentation of radiologic images with self-organizing maps: the segmentation problem transformed into a classification task

    NASA Astrophysics Data System (ADS)

    Pelikan, Erich; Vogelsang, Frank; Tolxdorff, Thomas

    1996-04-01

    The texture-based segmentation of x-ray images of focal bone lesions using topological maps is introduced. Texture characteristics are described by image-point correlation of feature images to feature vectors. For the segmentation, the topological map is labeled using an improved labeling strategy. Results of the technique are demonstrated on original and synthetic x-ray images and quantified with the aid of quality measures. In addition, a classifier-specific contribution analysis is applied for assessing the feature space.

  12. High Quality Facade Segmentation Based on Structured Random Forest, Region Proposal Network and Rectangular Fitting

    NASA Astrophysics Data System (ADS)

    Rahmani, K.; Mayer, H.

    2018-05-01

    In this paper we present a pipeline for high-quality semantic segmentation of building facades using a Structured Random Forest (SRF), a Region Proposal Network (RPN) based on a Convolutional Neural Network (CNN), as well as rectangular fitting optimization. Our main contribution is that we employ features created by the RPN as channels in the SRF. We empirically show that this is very effective, especially for doors and windows. Our pipeline is evaluated on two datasets, where we outperform current state-of-the-art methods. Additionally, we quantify the contribution of the RPN and the rectangular fitting optimization to the accuracy of the result.

  13. Performance evaluation of image segmentation algorithms on microscopic image data.

    PubMed

    Beneš, Miroslav; Zitová, Barbara

    2015-01-01

    In our paper, we present a performance evaluation of image segmentation algorithms on microscopic image data. In spite of the existence of many algorithms for image data partitioning, there is no universal and 'best' method yet. Moreover, images of microscopic samples can vary in character and quality, which can negatively influence the performance of image segmentation algorithms. Thus, the issue of selecting a suitable method for a given set of image data is of great interest. We carried out a large number of experiments with a variety of segmentation methods to evaluate the behaviour of individual approaches on the testing set of microscopic images (cross-section images taken in three different modalities from the field of art restoration). The segmentation results were assessed by several indices used for measuring the output quality of image segmentation algorithms. In the end, the benefit of a segmentation combination approach is studied, and the applicability of the achieved results to another representative of the microscopic data category - biological samples - is shown. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.

  14. Quality of life in smokers: focus on functional limitations rather than on lung function?

    PubMed Central

    Geijer, Roeland MM; Sachs, Alfred PE; Verheij, Theo JM; Kerstjens, Huib AM; Kuyvenhoven, Marijke M; Hoes, Arno W

    2007-01-01

    Background The Global Initiative for Chronic Obstructive Lung Disease (GOLD) classification of severity of chronic obstructive pulmonary disease (COPD) is based solely on obstruction and does not capture physical functioning. The hypothesis that the Medical Research Council (MRC) dyspnoea scale would correlate better with quality of life than the level of airflow limitation was examined. Aim To study the associations between quality of life in smokers and limitations in physical functioning (MRC dyspnoea scale), and between quality of life and airflow limitation (GOLD COPD stages). Design Cross-sectional study. Setting The city of IJsselstein, a small town in the centre of The Netherlands. Method Male smokers aged 40–65 years without a prior diagnosis of COPD and enlisted with a general practice participated in this study. Quality of life was assessed by means of a generic questionnaire (SF-36) and a disease-specific questionnaire (QOLRIQ). Results A total of 395 subjects (mean age 55.4 years, 27.1 pack-years) performed adequate spirometry and completed the questionnaires. Limitations of physical functioning according to the MRC dyspnoea scale were found in 25.1% (99/395) of the participants and airflow limitation in 40.2% (159/395). The correlations of limitations of physical functioning with all quality-of-life components were stronger than the correlations of all quality-of-life subscales with the severity of airflow limitation. Conclusion In middle-aged smokers the correlation of limitations of physical functioning (MRC dyspnoea scale) with quality of life was stronger than the correlation of the severity of airflow limitation with quality of life. Future staging systems for severity of COPD should capture this and not rely on forced expiratory volume in one second (FEV1) alone. PMID:17550673

  15. LINKS: learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images.

    PubMed

    Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H; Lin, Weili; Shen, Dinggang

    2015-03-01

    Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effects, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matter of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8 months of age, and the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used a multi-atlas label fusion strategy, which has the limitation of treating the different available image modalities equally and is often computationally expensive. To cope with these limitations, in this paper we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images for tissue segmentation. Here, the multi-source images initially include only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge, where the proposed method was ranked top among all competing methods. Moreover, to alleviate possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach for further improving the segmentation accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.
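
    The iterative integration loop can be sketched as follows, with a trivial nearest-centroid classifier standing in for the random forest so the example stays dependency-free; every name here is illustrative, not the authors' code:

```python
import numpy as np

# Stand-in classifier: soft class probabilities from distance to the
# per-class training centroids (a toy substitute for a random forest).
def fit_predict(X_train, y_train, X):
    classes = np.unique(y_train)
    cents = np.array([X_train[y_train == c].mean(axis=0) for c in classes])
    d = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    probs = np.exp(-d)
    return probs / probs.sum(axis=1, keepdims=True)

# Core idea of the framework: at each round, the current tissue
# probability maps are appended as extra feature channels.
def iterative_segmentation(feats_train, y_train, feats, rounds=3):
    Xtr, X = feats_train, feats
    for _ in range(rounds):
        p_tr = fit_predict(Xtr, y_train, Xtr)
        p = fit_predict(Xtr, y_train, X)
        Xtr = np.hstack([feats_train, p_tr])   # re-stack onto raw features
        X = np.hstack([feats, p])
    return p.argmax(axis=1)
```

The point of the loop is that later rounds see both the raw multi-modality features and the previous round's probability estimates, so spatially consistent label evidence is fed back in.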

  17. Error analysis of speed of sound reconstruction in ultrasound limited angle transmission tomography.

    PubMed

    Jintamethasawat, Rungroj; Lee, Won-Mean; Carson, Paul L; Hooi, Fong Ming; Fowlkes, J Brian; Goodsitt, Mitchell M; Sampson, Richard; Wenisch, Thomas F; Wei, Siyuan; Zhou, Jian; Chakrabarti, Chaitali; Kripfgans, Oliver D

    2018-04-07

    We have investigated limited angle transmission tomography to estimate speed of sound (SOS) distributions for breast cancer detection. That requires both accurate delineations of major tissues, in this case by segmentation of prior B-mode images, and calibration of the relative positions of the opposed transducers. Experimental sensitivity evaluation of the reconstructions with respect to segmentation and calibration errors is difficult with our current system. Therefore, parametric studies of SOS errors in our bent-ray reconstructions were simulated. They included mis-segmentation of an object of interest or a nearby object, and miscalibration of relative transducer positions in 3D. Close correspondence of reconstruction accuracy was verified in the simplest case, a cylindrical object in homogeneous background with induced segmentation and calibration inaccuracies. Simulated mis-segmentation in object size and lateral location produced maximum SOS errors of 6.3% within 10 mm diameter change and 9.1% within 5 mm shift, respectively. Modest errors in assumed transducer separation produced the maximum SOS error from miscalibrations (57.3% within 5 mm shift), still, correction of this type of error can easily be achieved in the clinic. This study should aid in designing adequate transducer mounts and calibration procedures, and in specification of B-mode image quality and segmentation algorithms for limited angle transmission tomography relying on ray tracing algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.
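
    For a straight ray between the opposed transducers, the sensitivity to the assumed separation is simple arithmetic: the reconstructed speed of sound scales with the assumed distance, so a 5 mm error on a 200 mm separation gives a 2.5% SOS error. The numbers are illustrative, not the paper's geometry:

```python
# Straight-ray back-of-envelope: SOS is reconstructed as assumed
# distance divided by measured time of flight, so a separation error
# maps directly into a proportional SOS error.
def sos_estimate(assumed_sep_mm, true_sep_mm, true_sos=1540.0):
    t = true_sep_mm / true_sos   # true time of flight (consistent units)
    return assumed_sep_mm / t    # reconstructed speed of sound

est = sos_estimate(205.0, 200.0)       # 5 mm overestimate on 200 mm
rel_err = (est - 1540.0) / 1540.0      # = 5/200 = 2.5%
```

The much larger errors reported above arise because bent-ray reconstruction distributes a separation error non-uniformly over the SOS map rather than as a single global scale factor.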

  18. Video-based noncooperative iris image segmentation.

    PubMed

    Du, Yingzi; Arslanturk, Emrah; Zhou, Zhi; Belcher, Craig

    2011-02-01

    In this paper, we propose a video-based noncooperative iris image segmentation scheme that incorporates a quality filter to quickly eliminate images without an eye, employs a coarse-to-fine segmentation scheme to improve the overall efficiency, uses a direct least squares fitting of ellipses method to model the deformed pupil and limbic boundaries, and develops a window gradient-based method to remove noise in the iris region. A remote iris acquisition system is set up to collect noncooperative iris video images. An objective method is used to quantitatively evaluate the accuracy of the segmentation results. The experimental results demonstrate the effectiveness of this method. The proposed method would make noncooperative iris recognition or iris surveillance possible.
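
    The ellipse-modelling step can be sketched with an unconstrained linear least-squares conic fit (the paper uses a direct least-squares ellipse method; this simpler variant is a reasonable stand-in when the boundary points clearly trace an ellipse, and the pupil-like test points below are synthetic).

```python
import numpy as np

# Fit the conic  a x^2 + b xy + c y^2 + d x + e y = 1  by linear least squares.
def fit_conic(x, y):
    A = np.column_stack([x * x, x * y, y * y, x, y])
    coef, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
    return coef  # (a, b, c, d, e)

# Points on a pupil-like ellipse centred at (3, 2) with semi-axes 4 and 2.
t = np.linspace(0, 2 * np.pi, 100)
x = 3 + 4 * np.cos(t)
y = 2 + 2 * np.sin(t)
a, b, c, d, e = fit_conic(x, y)
```

The fitted conic is an ellipse exactly when the discriminant b² − 4ac is negative, which is worth checking before using the fit as a pupil or limbic boundary.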

  19. Stress distribution pattern of screw-retained restorations with segmented vs. non-segmented abutments: A finite element analysis

    PubMed Central

    Aalaei, Shima; Rajabi Naraki, Zahra; Nematollahi, Fatemeh; Beyabanaki, Elaheh; Shahrokhi Rad, Afsaneh

    2017-01-01

    Background. Screw-retained restorations are favored in some clinical situations such as limited inter-occlusal spaces. This study was designed to compare stresses developed in the peri-implant bone in two different types of screw-retained restorations (segmented vs. non-segmented abutment) using a finite element model. Methods. An implant, 4.1 mm in diameter and 10 mm in length, was placed in the first molar site of a mandibular model with 1 mm of cortical bone on the buccal and lingual sides. Segmented and non-segmented screw abutments with their crowns were placed on the simulated implant in each model. After loading (100 N, axial and 45° non-axial), von Mises stress was recorded using ANSYS software, version 12.0.1. Results. The maximum stresses in the non-segmented abutment screw were less than those of segmented abutment (87 vs. 100, and 375 vs. 430 MPa under axial and non-axial loading, respectively). The maximum stresses in the peri-implant bone for the model with segmented abutment were less than those of non-segmented ones (21 vs. 24 MPa, and 31 vs. 126 MPa under vertical and angular loading, respectively). In addition, the micro-strain of peri-implant bone for the segmented abutment restoration was less than that of non-segmented abutment. Conclusion. Under axial and non-axial loadings, non-segmented abutment showed less stress concentration in the screw, while there was less stress and strain in the peri-implant bone in the segmented abutment. PMID:29184629

  20. Modelling, fabrication and characterization of a polymeric micromixer based on sequential segmentation.

    PubMed

    Nguyen, Nam-Trung; Huang, Xiaoyang

    2006-06-01

    Effective and fast mixing is important for many microfluidic applications. In many cases, mixing is limited by molecular diffusion due to the constraints of laminar flow in the microscale regime. According to the scaling law, decreasing the mixing path can shorten the mixing time and enhance mixing quality. One of the techniques for reducing the mixing path is sequential segmentation. This technique divides solvent and solute into segments in the axial direction. The so-called Taylor-Aris dispersion can improve axial transport by three orders of magnitude. The mixing path can be controlled by the switching frequency and the mean velocity of the flow. The mixing ratio can be controlled by pulse width modulation of the switching signal. This paper first presents a simple time-dependent one-dimensional analytical model for sequential segmentation. The model considers an arbitrary mixing ratio between solute and solvent as well as axial Taylor-Aris dispersion. Next, a micromixer was designed and fabricated based on polymeric micromachining. The micromixer was formed by laminating four polymer layers, which were micromachined with a CO2 laser. Switching of the fluid flows was realized by two piezoelectric valves. Mixing experiments were evaluated optically. The concentration profile along the mixing channel agrees qualitatively well with the analytical model. Furthermore, mixing results at different switching frequencies were investigated. Due to the dynamic behavior of the valves and the fluidic system, mixing quality decreases with increasing switching frequency.
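
    Back-of-envelope numbers for the scaling argument above, with assumed channel dimensions and diffusivity (not the paper's device values): the Taylor-Aris result for a circular capillary, D_eff = D(1 + Pe²/48), shows how dispersion boosts axial transport, and since the segment length scales as u/(2f), doubling the switching frequency cuts the diffusive mixing time fourfold.

```python
# Assumed order-of-magnitude values for a water-based microchannel flow.
D = 1e-9      # molecular diffusivity, m^2/s
u = 1e-3      # mean velocity, m/s
d = 100e-6    # channel (hydraulic) diameter, m

Pe = u * d / D                   # Peclet number = 100
D_eff = D * (1 + Pe**2 / 48)     # Taylor-Aris effective axial diffusivity

# Sequential segmentation: segment length set by switching frequency f,
# L_seg = u / (2 f); diffusive mixing time over half a segment ~ L^2 / (2 D_eff).
def mixing_time(f):
    L_seg = u / (2 * f)
    return (L_seg / 2) ** 2 / (2 * D_eff)

t_1Hz, t_10Hz = mixing_time(1.0), mixing_time(10.0)
```

With these numbers the dispersion enhancement is roughly a factor of 200, consistent with the "orders of magnitude" improvement cited above, and t scales as 1/f².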

  1. Objective measurements to evaluate glottal space segmentation from laryngeal images.

    PubMed

    Gutiérrez-Arriola, J M; Osma-Ruiz, V; Sáenz-Lechón, N; Godino-Llorente, J I; Fraile, R; Arias-Londoño, J D

    2012-01-01

    Objective evaluation of the results of medical image segmentation is a known problem. Applied to the task of automatically detecting the glottal area from laryngeal images, this paper proposes a new objective measurement to evaluate the quality of a segmentation algorithm by comparing its output with the results given by a human expert. The new figure of merit is called the Area Index, and its effectiveness is compared with one of the most widely used figures of merit in the literature: the Pratt Index. Results over 110 laryngeal images presented high correlations between both indexes, demonstrating that the proposed measure is comparable to the Pratt Index and is a good indicator of segmentation quality.
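
    For reference, the Pratt Index (Pratt's figure of merit) used as the baseline above can be sketched as follows; the α = 1/9 scaling constant is the conventional choice, and the edge coordinates are toy data rather than glottal boundaries.

```python
import numpy as np

# Pratt's figure of merit: compares detected boundary pixels against an ideal
# (expert) boundary, penalising each detected pixel by its squared distance
# to the nearest ideal pixel.  Equals 1 for a perfect match.
def pratt_fom(detected, ideal, alpha=1.0 / 9.0):
    detected = np.asarray(detected, float)
    ideal = np.asarray(ideal, float)
    # distance from each detected point to the closest ideal point
    d = np.sqrt(((detected[:, None, :] - ideal[None, :, :]) ** 2).sum(-1)).min(1)
    return (1.0 / (1.0 + alpha * d ** 2)).sum() / max(len(detected), len(ideal))

ideal = [(i, 0) for i in range(10)]              # a straight ideal edge
perfect = pratt_fom(ideal, ideal)                # exact match
shifted = pratt_fom([(i, 1) for i in range(10)], ideal)  # edge off by 1 pixel
```

A uniform one-pixel offset drops the score to 1/(1 + α), i.e. 0.9 with the conventional α, which gives a feel for the index's sensitivity.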

  2. Multiclassifier fusion in human brain MR segmentation: modelling convergence.

    PubMed

    Heckemann, Rolf A; Hajnal, Joseph V; Aljabar, Paul; Rueckert, Daniel; Hammers, Alexander

    2006-01-01

    Segmentations of MR images of the human brain can be generated by propagating an existing atlas label volume to the target image. By fusing multiple propagated label volumes, the segmentation can be improved. We developed a model that predicts the improvement of labelling accuracy and precision based on the number of segmentations used as input. Using a cross-validation study on brain image data as well as numerical simulations, we verified the model. Fit parameters of this model are potential indicators of the quality of a given label propagation method or the consistency of the input segmentations used.
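
    The simplest fusion rule for propagated label volumes is a voxelwise majority vote, sketched here on toy 2x2 "volumes"; real label-fusion frameworks layer weighting and performance modelling on top of this.

```python
import numpy as np

# Voxelwise majority vote over several propagated label volumes.
# Labels are small non-negative integers; ties resolve to the lowest label.
def majority_vote(label_volumes):
    stack = np.stack(label_volumes)              # (n_atlases, *volume_shape)
    n_labels = stack.max() + 1
    # count votes per label at every voxel, then take the winning label
    votes = np.stack([(stack == k).sum(0) for k in range(n_labels)])
    return votes.argmax(0)

seg1 = np.array([[0, 1], [2, 2]])
seg2 = np.array([[0, 1], [1, 2]])
seg3 = np.array([[0, 0], [2, 2]])
fused = majority_vote([seg1, seg2, seg3])
```

The convergence model described above predicts how the accuracy of `fused` improves as more input segmentations are added to the vote.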

  3. Comparison of image segmentation of lungs using methods: connected threshold, neighborhood connected, and threshold level set segmentation

    NASA Astrophysics Data System (ADS)

    Amanda, A. R.; Widita, R.

    2016-03-01

    The aim of this research is to compare several image segmentation methods for lungs based on performance evaluation parameters (Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR)). In this study, the methods compared were connected threshold, neighborhood connected, and threshold level set segmentation on images of the lungs. These three methods require one important parameter, i.e., the threshold. The threshold interval was obtained from the histogram of the original image. The software used to segment the images was InsightToolkit-4.7.0 (ITK). Five lung images were analyzed in this research. The results were then compared using the performance evaluation parameters determined using MATLAB. A segmentation method is considered to be of good quality if it has the smallest MSE value and the highest PSNR. The results show that for four of the sample images the connected threshold method best met these criteria, while for one sample the threshold level set segmentation did. Therefore, it can be concluded that the connected threshold method is better than the other two methods for these cases.
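
    The two evaluation parameters can be computed in a few lines; this sketch assumes 8-bit images (MAX = 255) and uses toy arrays rather than the study's lung images.

```python
import numpy as np

# Mean Square Error between two images (cast to float to avoid uint8 wraparound).
def mse(a, b):
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

# Peak Signal to Noise Ratio in dB: 10 log10(MAX^2 / MSE); infinite for a
# perfect match.  Lower MSE / higher PSNR indicates a better segmentation here.
def psnr(a, b, max_val=255.0):
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

ref = np.zeros((4, 4), dtype=np.uint8)
test = ref.copy()
test[0, 0] = 16          # one pixel differs by 16 gray levels
```

For the single differing pixel, MSE = 16²/16 = 16 and PSNR = 10 log10(255²/16) ≈ 36.1 dB.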

  4. Fetal brain volumetry through MRI volumetric reconstruction and segmentation

    PubMed Central

    Estroff, Judy A.; Barnewolt, Carol E.; Connolly, Susan A.; Warfield, Simon K.

    2013-01-01

    Purpose Fetal MRI volumetry is a useful technique but it is limited by a dependency upon motion-free scans, tedious manual segmentation, and spatial inaccuracy due to thick-slice scans. An image processing pipeline that addresses these limitations was developed and tested. Materials and methods The principal sequences acquired in fetal MRI clinical practice are multiple orthogonal single-shot fast spin echo scans. State-of-the-art image processing techniques were used for inter-slice motion correction and super-resolution reconstruction of high-resolution volumetric images from these scans. The reconstructed volume images were processed with intensity non-uniformity correction and the fetal brain extracted by using supervised automated segmentation. Results Reconstruction, segmentation and volumetry of the fetal brains were performed for a cohort of twenty-five clinically acquired fetal MRI scans. Performance metrics for volume reconstruction, segmentation and volumetry were determined by comparison with manual tracings in five randomly chosen cases. Finally, analysis of the fetal brain and parenchymal volumes was performed based on the gestational age of the fetuses. Conclusion The image processing pipeline developed in this study enables volume rendering and accurate fetal brain volumetry by addressing the limitations of current volumetry techniques, which include dependency on motion-free scans, manual segmentation, and inaccurate thick-slice interpolation. PMID:20625848

  5. Readout-Segmented Echo-Planar Imaging in Diffusion-Weighted MR Imaging in Breast Cancer: Comparison with Single-Shot Echo-Planar Imaging in Image Quality

    PubMed Central

    Kim, Yun Ju; Kang, Bong Joo; Park, Chang Suk; Kim, Hyeon Sook; Son, Yo Han; Porter, David Andrew; Song, Byung Joo

    2014-01-01

    Objective The purpose of this study was to compare the image quality of standard single-shot echo-planar imaging (ss-EPI) and that of readout-segmented EPI (rs-EPI) in patients with breast cancer. Materials and Methods Seventy-one patients with 74 breast cancers underwent both ss-EPI and rs-EPI. For qualitative comparison of image quality, three readers independently assessed the two sets of diffusion-weighted (DW) images. To evaluate geometric distortion, a comparison was made between lesion lengths derived from contrast enhanced MR (CE-MR) images and those obtained from the corresponding DW images. For assessment of image parameters, signal-to-noise ratio (SNR), lesion contrast, and contrast-to-noise ratio (CNR) were calculated. Results The rs-EPI was superior to ss-EPI in most criteria regarding the qualitative image quality. Anatomical structure distinction, delineation of the lesion, ghosting artifact, and overall image quality were significantly better in rs-EPI. Regarding the geometric distortion, lesion length on ss-EPI was significantly different from that of CE-MR, whereas there were no significant differences between CE-MR and rs-EPI. The rs-EPI was superior to ss-EPI in SNR and CNR. Conclusion Readout-segmented EPI is superior to ss-EPI in the aspect of image quality in DW MR imaging of the breast. PMID:25053898

  6. Spinal cord grey matter segmentation challenge.

    PubMed

    Prados, Ferran; Ashburner, John; Blaiotta, Claudia; Brosch, Tom; Carballido-Gamio, Julio; Cardoso, Manuel Jorge; Conrad, Benjamin N; Datta, Esha; Dávid, Gergely; Leener, Benjamin De; Dupont, Sara M; Freund, Patrick; Wheeler-Kingshott, Claudia A M Gandini; Grussu, Francesco; Henry, Roland; Landman, Bennett A; Ljungberg, Emil; Lyttle, Bailey; Ourselin, Sebastien; Papinutto, Nico; Saporito, Salvatore; Schlaeger, Regina; Smith, Seth A; Summers, Paul; Tam, Roger; Yiannakas, Marios C; Zhu, Alyssa; Cohen-Adad, Julien

    2017-05-15

    An important image processing step in spinal cord magnetic resonance imaging is the ability to reliably and accurately segment grey and white matter for tissue-specific analysis. There are several semi- or fully-automated segmentation methods for cervical cord cross-sectional area measurement with excellent performance, close or equal to manual segmentation. However, grey matter segmentation is still challenging due to its small cross-sectional size and shape, and active research is being conducted by several groups around the world in this field. Therefore, a grey matter spinal cord segmentation challenge was organised to test the capabilities of various methods using the same multi-centre and multi-vendor dataset, acquired with distinct 3D gradient-echo sequences. This challenge aimed to characterize the state of the art in the field as well as to identify new opportunities for future improvements. Six different spinal cord grey matter segmentation methods, developed independently by various research groups across the world, were compared to manual segmentation outcomes, the present gold standard. All algorithms provided good overall results for detecting the grey matter butterfly, albeit with variable performance in certain quality-of-segmentation metrics. The data have been made publicly available and the challenge web site remains open to new submissions. No modifications were introduced to any of the presented methods as a result of this challenge for the purposes of this publication. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  7. Postoperative quality-of-life assessment in patients with spine metastases treated with long-segment pedicle-screw fixation.

    PubMed

    Bernard, Florian; Lemée, Jean-Michel; Lucas, Olivier; Menei, Philippe

    2017-06-01

    OBJECTIVE In recent decades, progress in the medical management of cancer has been significant, resulting in considerable extension of survival for patients with metastatic disease. This has, in turn, led to increased attention to the optimal surgical management of bone lesions, including metastases to the spine. In addition, there has been a shift in focus toward improving quality of life and reducing hospital stay for these patients, and many minimally invasive techniques have been introduced with the aim of reducing the morbidity associated with more traditional open approaches. The goal of this study was to assess the efficacy of long-segment percutaneous pedicle screw stabilization for the treatment of instability associated with thoracolumbar spine metastases in neurologically intact patients. METHODS This study was a retrospective review of data from a prospective database. The authors analyzed cases in which long-segment percutaneous pedicle screw fixation was performed for the palliative treatment of thoracolumbar spinal instability due to spinal metastases in neurologically intact patients. All of the patients included in the study underwent surgery between January 2014 and May 2015 at the authors' institution. Postoperative radiation therapy was planned within 10 days following the stabilization in all cases. Clinical and radiological follow-up assessments were planned for 3 days, 3 weeks, 6 weeks, 3 months, 6 months, and 1 year after surgery. Outcome was assessed by means of standard postoperative evaluation and oncological and spinal quality of life measures (European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire Version 3.0 [EORTC QLQ-C30] and Oswestry Disability Index [ODI], respectively). Moreover, 5 patients were given an activity monitoring device for recording the distance walked daily; preoperative and postoperative daily distances were compared. RESULTS Data from 17 cases were analyzed. There were no

  8. Design limitations of Bryan disc arthroplasty.

    PubMed

    Fong, Shee Yan; DuPlessis, Stephan J; Casha, Steven; Hurlbert, R John

    2006-01-01

    Disc arthroplasty is gaining momentum as a surgical procedure in the treatment of spinal degenerative disease. Results must be carefully scrutinized to recognize benefits as well as limitations. The aim of this study was to investigate factors associated with segmental kyphosis after Bryan disc replacement. Prospective study of a consecutively enrolled cohort of 10 patients treated in a single center using the Bryan cervical disc prosthesis for single-level segmental reconstruction in the surgical treatment of cervical radiculopathy and/or myelopathy. Radiographic and quality of life outcome measures. Static and dynamic lateral radiographs were digitally analyzed in patients undergoing Bryan disc arthroplasty throughout a minimum 3-month follow-up period. Observations were compared with preoperative studies looking for predictive factors of postoperative spinal alignment. Postoperative end plate angles through the Bryan disc in the neutral position were kyphotic in 9 of 10 patients. Compared with preoperative end plate angulation there was a mean change of -7 degrees (towards kyphosis) in postoperative end plate alignment (p=.007, 95% confidence interval [CI] -6 degrees to -13 degrees). This correlated significantly with postoperative reduction in posterior vertebral body height of the caudal segment (p=.011, r2=.575) and postoperative functional spine unit (FSU) kyphosis (p=.032, r2=.46). Despite intraoperative distraction, postoperative FSU height was significantly reduced, on average by 1.7 mm (p=.040, 95% CI 0.5-2.8 mm). Asymmetrical end plate preparation occurs because of suboptimal coordinates to which the milling jig is referenced. Although segmental motion is preserved, Bryan disc arthroplasty demonstrates a propensity towards kyphotic orientation through the prosthesis likely as a result of intraoperative lordotic distraction. FSU angulation tends towards kyphosis, and FSU height is decreased in the postoperative state from lack of anterior column support.

  9. Reactive power and voltage control strategy based on dynamic and adaptive segment for DG inverter

    NASA Astrophysics Data System (ADS)

    Zhai, Jianwei; Lin, Xiaoming; Zhang, Yongjun

    2018-03-01

    The inverter of distributed generation (DG) can supply reactive power to help solve the problem of out-of-limit voltage in an active distribution network (ADN). Therefore, a reactive voltage control strategy based on dynamic and adaptive segments for the DG inverter is put forward in this paper to actively control voltage. The proposed strategy adjusts the segmented voltage thresholds of the Q(U) droop curve dynamically and adaptively according to the voltage of the grid-connected point and the power direction of the adjacent downstream line. The reactive power reference of the DG inverter is then obtained through the modified Q(U) control strategy, and the reactive power of the inverter is controlled to track this reference value. The proposed control strategy not only controls the local voltage of the grid-connected point but also helps to maintain voltage within the qualified range, considering the terminal voltage of the distribution feeder and the reactive support for the adjacent downstream DG. The scheme using the proposed strategy is compared with a scheme without the reactive support of the DG inverter and a scheme using the Q(U) control strategy with constant segmented voltage thresholds. The simulation results suggest that the proposed method significantly improves the mitigation of out-of-limit voltage, restrains voltage variation, and improves voltage quality.
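
    A minimal sketch of a segmented Q(U) droop curve of the kind the strategy adapts; all thresholds and the reactive power limit below are illustrative per-unit assumptions, not the paper's tuned values. Shifting u1..u4 at runtime is, in essence, what the dynamic and adaptive threshold adjustment amounts to.

```python
# Segmented Q(U) droop: voltage u in per-unit, reactive power reference in
# per-unit of the inverter's capability.  Thresholds u1..u4 define the
# segments; positive Q is injection (voltage support), negative is absorption.
def q_reference(u, u1=0.97, u2=0.99, u3=1.01, u4=1.03, q_max=0.3):
    if u <= u1:                      # undervoltage: full reactive injection
        return q_max
    if u < u2:                       # droop down toward the dead band
        return q_max * (u2 - u) / (u2 - u1)
    if u <= u3:                      # dead band: no reactive exchange
        return 0.0
    if u < u4:                       # droop toward full absorption
        return -q_max * (u - u3) / (u4 - u3)
    return -q_max                    # overvoltage: full reactive absorption
```

For example, at u = 0.98 pu (halfway down the lower droop segment) the reference is half of q_max; narrowing the dead band [u2, u3] makes the inverter respond to smaller voltage deviations.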

  10. The evaluation of single-view and multi-view fusion 3D echocardiography using image-driven segmentation and tracking.

    PubMed

    Rajpoot, Kashif; Grau, Vicente; Noble, J Alison; Becher, Harald; Szmigielski, Cezary

    2011-08-01

    Real-time 3D echocardiography (RT3DE) promises a more objective and complete cardiac functional analysis by dynamic 3D image acquisition. Despite several efforts towards automation of left ventricle (LV) segmentation and tracking, these remain challenging research problems due to the poor quality of acquired images, which usually contain missing anatomical information, speckle noise, and a limited field-of-view (FOV). Recently, multi-view fusion 3D echocardiography has been introduced, in which multiple conventional single-view RT3DE images are acquired with small probe movements and fused together after alignment. This multi-view fusion helps to improve image quality and anatomical information and extends the FOV. We now take this work further by comparing single-view and multi-view fused images in a systematic study. In order to better illustrate the differences, this work evaluates the image quality and information content of single-view and multi-view fused images using image-driven LV endocardial segmentation and tracking. The image-driven methods were utilized to fully exploit the image quality and anatomical information present in the image, thus purposely not including any high-level constraints like prior shape or motion knowledge in the analysis approaches. Experiments show that multi-view fused images are better suited for LV segmentation and tracking, while relatively more failures and errors were observed on single-view images. Copyright © 2011 Elsevier B.V. All rights reserved.
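
    A toy illustration of the core benefit of fusing aligned views: averaging N registered views with independent noise reduces the noise level by roughly sqrt(N). Real ultrasound speckle is not independent additive Gaussian noise, so this idealises the gain, but it conveys why fused images support segmentation better.

```python
import numpy as np

rng = np.random.default_rng(42)

# A 1D "ground truth" signal standing in for an aligned image line,
# plus 9 noisy views of it with independent noise realisations.
truth = np.linspace(0, 1, 1000)
views = [truth + rng.normal(0, 0.2, truth.shape) for _ in range(9)]

# Fusion by simple averaging of the registered views.
fused = np.mean(views, axis=0)

err_single = np.std(views[0] - truth)   # ~0.2
err_fused = np.std(fused - truth)       # ~0.2 / 3 for 9 views
```

In practice the views must first be spatially aligned, and more sophisticated fusion rules (e.g. wavelet-based) are used, but the noise-averaging effect is the same.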

  11. Figure-ground segmentation based on class-independent shape priors

    NASA Astrophysics Data System (ADS)

    Li, Yang; Liu, Yang; Liu, Guojun; Guo, Maozu

    2018-01-01

    We propose a method to generate figure-ground segmentation by incorporating shape priors into the graph-cuts algorithm. Given an image, we first obtain a linear representation of the image and then apply directional chamfer matching to generate class-independent, nonparametric shape priors, which provide shape clues for the graph-cuts algorithm. We then enforce the shape priors in a graph-cuts energy function to produce the object segmentation. In contrast to previous segmentation methods, the proposed method shares shape knowledge across different semantic classes and does not require class-specific model training. The approach therefore obtains high-quality segmentations for objects. We experimentally validate that the proposed method outperforms previous approaches on the challenging PASCAL VOC 2010/2012 and Berkeley (BSD300) segmentation datasets.

  12. Segmentation Fusion Techniques with Application to Plenoptic Images: A Survey.

    NASA Astrophysics Data System (ADS)

    Evin, D.; Hadad, A.; Solano, A.; Drozdowicz, B.

    2016-04-01

    The segmentation of anatomical and pathological structures plays a key role in the characterization of clinically relevant evidence from digital images. Recently, plenoptic imaging has emerged as a promising way to enrich the diagnostic potential of conventional photography. Since a plenoptic image comprises a set of slightly different versions of the target scene, we propose to make use of those images to improve segmentation quality relative to single-image segmentation. The problem of finding a segmentation solution from multiple images of a single scene is called segmentation fusion. This paper reviews the issue of segmentation fusion in order to find solutions that can be applied to plenoptic images, particularly images from the ophthalmological domain.

  13. Statistical Validation of Image Segmentation Quality Based on a Spatial Overlap Index1

    PubMed Central

    Zou, Kelly H.; Warfield, Simon K.; Bharatha, Aditya; Tempany, Clare M.C.; Kaus, Michael R.; Haker, Steven J.; Wells, William M.; Jolesz, Ferenc A.; Kikinis, Ron

    2005-01-01

    Rationale and Objectives To examine a statistical validation method based on the spatial overlap between two sets of segmentations of the same anatomy. Materials and Methods The Dice similarity coefficient (DSC) was used as a statistical validation metric to evaluate the performance of both the reproducibility of manual segmentations and the spatial overlap accuracy of automated probabilistic fractional segmentation of MR images, illustrated on two clinical examples. Example 1: 10 consecutive cases of prostate brachytherapy patients underwent both preoperative 1.5T and intraoperative 0.5T MR imaging. For each case, 5 repeated manual segmentations of the prostate peripheral zone were performed separately on preoperative and on intraoperative images. Example 2: A semi-automated probabilistic fractional segmentation algorithm was applied to MR imaging of 9 cases with 3 types of brain tumors. DSC values were computed and logit-transformed values were compared in the mean with the analysis of variance (ANOVA). Results Example 1: The mean DSCs of 0.883 (range, 0.876–0.893) with 1.5T preoperative MRI and 0.838 (range, 0.819–0.852) with 0.5T intraoperative MRI (P < .001) were within and at the margin of the range of good reproducibility, respectively. Example 2: Wide ranges of DSC were observed in brain tumor segmentations: Meningiomas (0.519–0.893), astrocytomas (0.487–0.972), and other mixed gliomas (0.490–0.899). Conclusion The DSC value is a simple and useful summary measure of spatial overlap, which can be applied to studies of reproducibility and accuracy in image segmentation. We observed generally satisfactory but variable validation results in two clinical applications. This metric may be adapted for similar validation tasks. PMID:14974593
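
    The DSC and the logit transform used in the analysis above are straightforward to compute; the masks here are synthetic squares, not the study's prostate or tumor segmentations.

```python
import numpy as np

# Dice similarity coefficient between two binary masks:
# DSC = 2 |A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical).
def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Logit transform applied to DSC values before the ANOVA comparison.
def logit(p):
    return np.log(p / (1.0 - p))

a = np.zeros((10, 10), bool)
a[2:8, 2:8] = True       # 6x6 square, 36 voxels
b = np.zeros((10, 10), bool)
b[3:9, 3:9] = True       # same square shifted by one voxel
d = dice(a, b)           # intersection is 5x5 = 25, so DSC = 50/72
```

The logit transform maps DSC from (0, 1) onto the whole real line, which makes the usual normality assumptions of ANOVA more defensible.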

  14. Partial wetting gas-liquid segmented flow microreactor.

    PubMed

    Kazemi Oskooei, S Ali; Sinton, David

    2010-07-07

    A microfluidic reactor strategy for reducing plug-to-plug transport in gas-liquid segmented flow microfluidic reactors is presented. The segmented flow is generated in a wetting portion of the chip that transitions downstream to a partially wetting reaction channel that serves to disconnect the liquid plugs. The resulting residence time distributions show little dependence on channel length, and over 60% narrowing in residence time distribution as compared to an otherwise similar reactor. This partial wetting strategy mitigates a central limitation (plug-to-plug dispersion) while preserving the many attractive features of gas-liquid segmented flow reactors.

  15. Assessment of water quality and suitability analysis of River Ganga in Rishikesh, India

    NASA Astrophysics Data System (ADS)

    Haritash, A. K.; Gaur, Shalini; Garg, Sakshi

    2016-11-01

    The water samples were collected from the River Ganga in Rishikesh during December 2008 to assess its suitability for drinking, irrigation, and industrial uses using various indices. Based on the values obtained and the suggested designated best use, water in the upper segment can be used for drinking but only after disinfection (Class A); the middle segment is fit for organized outdoor bathing (Class B); and the lower segment in Rishikesh can be used as a drinking water source (Class C). All the parameters were within the specified limits for drinking water quality except E. coli. The indices of suitability for irrigation and industrial application were also evaluated. The irrigation quality ranged from good to excellent at almost all places, with the exception of percent sodium. The abundance of major ions followed the trend K+ > Ca2+ > Cl- > HCO3- > Na+ > Mg2+ > CO32-. The major cations suggested that the water is more of the alkaline (Na + K) than the alkaline earth (Ca + Mg) type. The heavy metals (Pb, Cu, Zn, Ni) were found either absent or within the specified limits. There was no specific industrial input of pollutants. Industrial applications of the river water should be limited since the water was found to be aggressive, based on the Langelier saturation index (0.3) and the Ryznar stability index (8.8), with the problem of heavy to intolerable corrosion. The water quality of the Ganga in Rishikesh was good, with the exception of the most probable number (MPN) count, which needs regular monitoring and control measures.

  16. Machine learning based brain tumour segmentation on limited data using local texture and abnormality.

    PubMed

    Bonte, Stijn; Goethals, Ingeborg; Van Holen, Roel

    2018-05-07

    Brain tumour segmentation in medical images is a very challenging task due to the large variety in tumour shape, position, appearance, scanning modalities and scanning parameters. Most existing segmentation algorithms use information from four different MRI sequences, but since these are often not all available, there is a need for a method able to delineate the different tumour tissues from a minimal amount of data. We present a novel approach using a Random Forests model combining voxelwise texture and abnormality features on contrast-enhanced T1 and FLAIR MRI. We transform the two scans into 275 feature maps. A random forest model then calculates the probability of belonging to one of 4 tumour classes or 5 normal classes. Afterwards, a dedicated voxel clustering algorithm provides the final tumour segmentation. We trained our method on the BraTS 2013 database and validated it on the larger BraTS 2017 dataset. We achieve median Dice scores of 40.9% (low-grade glioma) and 75.0% (high-grade glioma) in delineating the active tumour, and 68.4%/80.1% for the total abnormal region including oedema. Our fully automated brain tumour segmentation algorithm is able to delineate contrast-enhancing tissue and oedema with high accuracy based only on post-contrast T1-weighted and FLAIR MRI, whereas for non-enhancing tumour tissue and necrosis only moderate results are obtained. This makes the method especially suitable for high-grade glioma. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Clinical Effects of Posterior Limited Long-Segment Pedicle Instrumentation for the Treatment of Thoracolumbar Fractures.

    PubMed

    Liang, Chengmin; Liu, Bin; Zhang, Wei; Yu, Haiyang; Cao, Jie; Yin, Wen

    2018-06-01

    The purpose of this study was to assess the clinical effects of treating thoracolumbar fractures with posterior limited long-segment pedicle instrumentation (LLSPI). A total of 58 thoracolumbar fracture patients were retrospectively analyzed, including 31 cases that were fixed by skipping the fractured vertebra with 6 screws using LLSPI and 27 cases that were fixed by skipping the fractured vertebra with 4 screws using short-segment pedicle instrumentation (SSPI). Surgery time, blood loss, hospital stay, Oswestry disability index (ODI), neurological function, sagittal kyphotic Cobb angle (SKA), percentage of anterior vertebral height (PAVH), instrumentation failure, and the loss of SKA and PAVH were recorded before and after surgery. No significant differences were observed in either the surgery time or hospital stay (P > 0.05), while there were significant differences in blood loss between the two groups. At the final follow-up, both the ODI and the neurological status were notably improved compared to the preoperative state (P < 0.05), but the difference between the two groups was relatively small. Furthermore, the SKA and PAVH were notably improved at the final follow-up compared to postoperative values (P < 0.05), but no significant difference was observed between the two groups. During long-term follow-up, the loss of SKA and PAVH in the LLSPI group was significantly less than that in the SSPI group (P < 0.05). Based on strict criteria for data collection and analysis, the clinical effects of LLSPI for the treatment of thoracolumbar fractures were satisfactory, especially for maintaining the height of the fractured vertebra and reducing the loss of SKA and instrumentation failure rates.

  18. Compound image segmentation of published biomedical figures.

    PubMed

    Li, Pengyuan; Jiang, Xiangying; Kambhamettu, Chandra; Shatkay, Hagit

    2018-04-01

    Images convey essential information in biomedical publications. As such, there is a growing interest within the bio-curation and bio-databases communities to store images from publications as evidence for biomedical processes and experimental results. However, many of the images in biomedical publications are compound images consisting of multiple panels, where each individual panel potentially conveys a different type of information. Segmenting such images into their constituent panels is an essential first step toward utilizing them. In this article, we develop a new compound image segmentation system, FigSplit, which is based on Connected Component Analysis. To overcome shortcomings typically manifested by existing methods, we develop a quality assessment step for evaluating and modifying segmentations. Two methods are proposed to re-segment the images if the initial segmentation is inaccurate. Experimental results show the effectiveness of our method compared with other methods. The system is publicly available for use at: https://www.eecis.udel.edu/~compbio/FigSplit. The code is available upon request. shatkay@udel.edu. Supplementary data are available online at Bioinformatics.
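
    Panel splitting by Connected Component Analysis can be sketched with scipy.ndimage on a synthetic compound figure; FigSplit's actual pipeline, including its quality assessment and re-segmentation steps, is substantially more elaborate.

```python
import numpy as np
from scipy import ndimage

# Synthetic compound figure: two dark panels on a white page,
# separated by a white gutter.
page = np.full((40, 80), 255, dtype=np.uint8)     # white page
page[5:35, 5:35] = 30                              # left panel
page[5:35, 45:75] = 60                             # right panel

# Threshold so panels become foreground blobs, then label connected
# components and take their bounding boxes as panel candidates.
foreground = page < 200
labels, n_panels = ndimage.label(foreground)
boxes = ndimage.find_objects(labels)               # one slice pair per panel
```

On real figures the thresholding step is preceded by background estimation, and touching or overlapping panels are what make the quality-assessment and re-segmentation stages necessary.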

  19. Large aperture segmented optics for space-to-ground communications.

    PubMed

    Lucy, R F

    1968-08-01

    A large aperture, moderate quality segmented optical array for use in noncoherent space-to-ground laser communications is determined as a function of resolution, diameter, focal length, and number of segments in the array. Secondary optics and construction tolerances are also discussed. Performance predictions show a typical receiver to be capable of megahertz communications at Mars distances during daylight operation.
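
    For context, the diffraction limit that drives the resolution/diameter trade-off for such an aperture is given by the standard Rayleigh criterion (a textbook formula, not taken from this paper):

    ```python
    def rayleigh_resolution(wavelength_m, diameter_m):
        """Rayleigh diffraction limit of a circular aperture, in radians:
        theta = 1.22 * lambda / D."""
        return 1.22 * wavelength_m / diameter_m
    ```

    For a 1 m aperture at 500 nm this gives roughly 0.6 microradians; a "moderate quality" segmented receiver deliberately accepts a poorer figure than this, since a noncoherent communications receiver only needs to concentrate light on a detector, not form diffraction-limited images.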

  20. Words in Puddles of Sound: Modelling Psycholinguistic Effects in Speech Segmentation

    ERIC Educational Resources Information Center

    Monaghan, Padraic; Christiansen, Morten H.

    2010-01-01

    There are numerous models of how speech segmentation may proceed in infants acquiring their first language. We present a framework for considering the relative merits and limitations of these various approaches. We then present a model of speech segmentation that aims to reveal important sources of information for speech segmentation, and to…

  1. The semiotics of medical image Segmentation.

    PubMed

    Baxter, John S H; Gibson, Eli; Eagleson, Roy; Peters, Terry M

    2018-02-01

    As the interaction between clinicians and computational processes increases in complexity, more nuanced mechanisms are required to describe how their communication is mediated. Medical image segmentation in particular affords a large number of distinct loci for interaction which can act on a deep, knowledge-driven level which complicates the naive interpretation of the computer as a symbol processing machine. Using the perspective of the computer as dialogue partner, we can motivate the semiotic understanding of medical image segmentation. Taking advantage of Peircean semiotic traditions and new philosophical inquiry into the structure and quality of metaphors, we can construct a unified framework for the interpretation of medical image segmentation as a sign exchange in which each sign acts as an interface metaphor. This allows for a notion of finite semiosis, described through a schematic medium, that can rigorously describe how clinicians and computers interpret the signs mediating their interaction. Altogether, this framework provides a unified approach to the understanding and development of medical image segmentation interfaces. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Design and Optimization of the SPOT Primary Mirror Segment

    NASA Technical Reports Server (NTRS)

    Budinoff, Jason G.; Michaels, Gregory J.

    2005-01-01

The 3 m Spherical Primary Optical Telescope (SPOT) will utilize a single ring of 0.86111 point-to-point hexagonal mirror segments. The f/2.85 spherical mirror blanks will be fabricated by the same replication process used for mass-produced commercial telescope mirrors. Diffraction-limited phasing will require segment-to-segment radius of curvature (ROC) variation of approx. 1 micron. Low-cost, replicated segment ROC variations are estimated to be almost 1 mm, necessitating a method for segment ROC adjustment and matching. A mechanical architecture has been designed that allows segment ROC to be adjusted up to 400 microns while introducing a minimum figure error, allowing segment-to-segment ROC matching. A key feature of the architecture is the unique back profile of the mirror segments. The back profile of the mirror was developed with shape optimization in MSC.Nastran™ using optical performance response equations written with SigFit. A candidate back profile was generated which minimized ROC-adjustment-induced surface error while meeting the constraints imposed by the fabrication method. Keywords: optimization, radius of curvature, Pyrex spherical mirror, SigFit

  3. Fully automatic multi-atlas segmentation of CTA for partial volume correction in cardiac SPECT/CT

    NASA Astrophysics Data System (ADS)

    Liu, Qingyi; Mohy-ud-Din, Hassan; Boutagy, Nabil E.; Jiang, Mingyan; Ren, Silin; Stendahl, John C.; Sinusas, Albert J.; Liu, Chi

    2017-05-01

    Anatomical-based partial volume correction (PVC) has been shown to improve image quality and quantitative accuracy in cardiac SPECT/CT. However, this method requires manual segmentation of various organs from contrast-enhanced computed tomography angiography (CTA) data. In order to achieve fully automatic CTA segmentation for clinical translation, we investigated the most common multi-atlas segmentation methods. We also modified the multi-atlas segmentation method by introducing a novel label fusion algorithm for multiple organ segmentation to eliminate overlap and gap voxels. To evaluate our proposed automatic segmentation, eight canine 99mTc-labeled red blood cell SPECT/CT datasets that incorporated PVC were analyzed, using the leave-one-out approach. The Dice similarity coefficient of each organ was computed. Compared to the conventional label fusion method, our proposed label fusion method effectively eliminated gaps and overlaps and improved the CTA segmentation accuracy. The anatomical-based PVC of cardiac SPECT images with automatic multi-atlas segmentation provided consistent image quality and quantitative estimation of intramyocardial blood volume, as compared to those derived using manual segmentation. In conclusion, our proposed automatic multi-atlas segmentation method of CTAs is feasible, practical, and facilitates anatomical-based PVC of cardiac SPECT/CT images.
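
    Plain majority voting illustrates why per-voxel label fusion by argmax can produce neither overlaps nor gaps: every voxel receives exactly one label. This is a simplified stand-in for the paper's novel fusion algorithm, and `fuse_labels` is a hypothetical name:

    ```python
    import numpy as np

    def fuse_labels(atlas_segmentations, n_labels):
        """Majority-vote label fusion. Each warped atlas segmentation casts one
        vote per voxel; argmax assigns every voxel exactly one organ label, so
        the fused organ masks cannot overlap or leave gap voxels between them."""
        votes = np.zeros(atlas_segmentations[0].shape + (n_labels,))
        for seg in atlas_segmentations:
            for lab in range(n_labels):
                votes[..., lab] += (seg == lab)
        return votes.argmax(axis=-1)
    ```

    Fusing organ masks independently, by contrast, allows two organs to claim the same voxel or neither to claim it, which is exactly the overlap/gap problem the paper's fusion step addresses.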

  4. Pupil-segmentation-based adaptive optical correction of a high-numerical-aperture gradient refractive index lens for two-photon fluorescence endoscopy.

    PubMed

    Wang, Chen; Ji, Na

    2012-06-01

    The intrinsic aberrations of high-NA gradient refractive index (GRIN) lenses limit their image quality as well as field of view. Here we used a pupil-segmentation-based adaptive optical approach to correct the inherent aberrations in a two-photon fluorescence endoscope utilizing a 0.8 NA GRIN lens. By correcting the field-dependent aberrations, we recovered diffraction-limited performance across a large imaging field. The consequent improvements in imaging signal and resolution allowed us to detect fine structures that were otherwise invisible inside mouse brain slices.

  5. SVM Pixel Classification on Colour Image Segmentation

    NASA Astrophysics Data System (ADS)

    Barui, Subhrajit; Latha, S.; Samiappan, Dhanalakshmi; Muthu, P.

    2018-04-01

The aim of image segmentation is to simplify the representation of an image, with the help of clustered pixels, into something meaningful to analyze. Segmentation is typically used to locate boundaries and curves in an image; more precisely, it labels every pixel in an image so that each pixel has an independent identity. SVM pixel classification on colour image segmentation is the topic highlighted in this paper. It holds useful application in the fields of concept-based image retrieval, machine vision, medical imaging and object detection. The process is accomplished step by step. At first we need to recognize the type of colour and the texture used as an input to the SVM classifier. These inputs are extracted via a local spatial similarity measure model and a steerable filter, also known as a Gabor filter. The classifier is then trained using FCM (Fuzzy C-Means). Both the pixel-level information of the image and the ability of the SVM classifier are combined through a sophisticated algorithm to form the final image. The method yields a well-developed segmented image, with increased quality and faster processing of the segmented image compared with other segmentation methods proposed earlier. One of the latest applications is the Light L16 camera.
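
    A minimal per-pixel SVM classifier conveys the core idea. This is a sketch only: it uses raw RGB values as features rather than the paper's spatial-similarity and Gabor texture features, and the scikit-learn API is an assumed implementation choice:

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def segment_pixels(img, labeled_mask):
        """Train an SVM on the pixels marked in `labeled_mask` (values > 0 are
        class labels, 0 means unlabeled) and then classify every pixel.
        Features here are just per-pixel RGB values, for brevity."""
        h, w, c = img.shape
        X = img.reshape(-1, c).astype(float)
        y = labeled_mask.reshape(-1)
        clf = SVC(kernel="rbf", gamma="scale")
        clf.fit(X[y > 0], y[y > 0])            # fit on labeled pixels only
        return clf.predict(X).reshape(h, w)    # label every pixel of the image
    ```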

  6. Open-source software platform for medical image segmentation applications

    NASA Astrophysics Data System (ADS)

    Namías, R.; D'Amato, J. P.; del Fresno, M.

    2017-11-01

Segmenting 2D and 3D images is a crucial and challenging problem in medical image analysis. Although several image segmentation algorithms have been proposed for different applications, no universal method currently exists. Moreover, their use is usually limited when detection of complex and multiple adjacent objects of interest is needed. In addition, the continually increasing volumes of medical imaging scans require more efficient segmentation software design and highly usable applications. In this context, we present an extension of our previous segmentation framework which allows the combination of existing explicit deformable models in an efficient and transparent way, handling simultaneously different segmentation strategies and interacting with a graphical user interface (GUI). We present the object-oriented design and the general architecture, which consists of two layers: the GUI at the top layer, and the processing core filters at the bottom layer. We apply the framework to segmenting different real-case medical image scenarios on publicly available datasets, including bladder and prostate segmentation from 2D MRI, and heart segmentation in 3D CT. Our experiments on these concrete problems show that this framework facilitates complex and multi-object segmentation goals while providing a fast prototyping open-source segmentation tool.

  7. Prevalence of Incidental Clinoid Segment Saccular Aneurysms.

    PubMed

    Revilla-Pacheco, Francisco; Escalante-Seyffert, María Cecilia; Herrada-Pineda, Tenoch; Manrique-Guzman, Salvador; Perez-Zuniga, Irma; Rangel-Suarez, Sergio; Rubalcava-Ortega, Johnatan; Loyo-Varela, Mauro

    2018-04-12

Clinoid segment aneurysms are cerebral vascular lesions recently described in the neurosurgical literature. They arise from the clinoid segment of the internal carotid artery, which is the segment limited rostrally by the dural carotid ring and caudally by the carotid-oculomotor membrane. Although clinoid segment aneurysms represent a common incidental finding in magnetic resonance studies, their prevalence has not yet been reported. The aim was to determine the prevalence of incidental clinoid segment saccular aneurysms diagnosed by magnetic resonance imaging, as well as their anatomic architecture and their association with smoking, arterial hypertension, age, and sex of patients. A total of 500 patients were prospectively studied with magnetic resonance imaging time-of-flight sequence and angioresonance with contrast material, to search for incidental saccular intracranial aneurysms. The site of primary interest was the clinoid segment, but the presence of aneurysms in any other location was determined for comparison. The relation among the presence of clinoid segment aneurysms, demographic factors, and secondary diagnosis of arterial hypertension, smoking, and other vascular/neoplastic cerebral lesions was analyzed. We found a global prevalence of incidental aneurysms of 7% (95% confidence interval, 5-9), with a prevalence of clinoid segment aneurysms of 3% (95% confidence interval, 2-4). Univariate logistic regression analysis showed a statistically significant relationship among incidental aneurysms, systemic arterial hypertension (P = 0.000), and smoking (P = 0.004). In the studied population, incidental clinoid segment aneurysms constitute the variety with highest prevalence. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. 30 CFR 816.42 - Hydrologic balance: Water quality standards and effluent limitations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 3 2010-07-01 2010-07-01 false Hydrologic balance: Water quality standards and effluent limitations. 816.42 Section 816.42 Mineral Resources OFFICE OF SURFACE MINING RECLAMATION AND... STANDARDS-SURFACE MINING ACTIVITIES § 816.42 Hydrologic balance: Water quality standards and effluent...

  9. 30 CFR 817.42 - Hydrologic balance: Water quality standards and effluent limitations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 3 2011-07-01 2011-07-01 false Hydrologic balance: Water quality standards and effluent limitations. 817.42 Section 817.42 Mineral Resources OFFICE OF SURFACE MINING RECLAMATION AND... STANDARDS-UNDERGROUND MINING ACTIVITIES § 817.42 Hydrologic balance: Water quality standards and effluent...

  10. 30 CFR 817.42 - Hydrologic balance: Water quality standards and effluent limitations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 3 2010-07-01 2010-07-01 false Hydrologic balance: Water quality standards and effluent limitations. 817.42 Section 817.42 Mineral Resources OFFICE OF SURFACE MINING RECLAMATION AND... STANDARDS-UNDERGROUND MINING ACTIVITIES § 817.42 Hydrologic balance: Water quality standards and effluent...

  11. 30 CFR 816.42 - Hydrologic balance: Water quality standards and effluent limitations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 3 2011-07-01 2011-07-01 false Hydrologic balance: Water quality standards and effluent limitations. 816.42 Section 816.42 Mineral Resources OFFICE OF SURFACE MINING RECLAMATION AND... STANDARDS-SURFACE MINING ACTIVITIES § 816.42 Hydrologic balance: Water quality standards and effluent...

  12. Iterative cross section sequence graph for handwritten character segmentation.

    PubMed

    Dawoud, Amer

    2007-08-01

    The iterative cross section sequence graph (ICSSG) is an algorithm for handwritten character segmentation. It expands the cross section sequence graph concept by applying it iteratively at equally spaced thresholds. The iterative thresholding reduces the effect of information loss associated with image binarization. ICSSG preserves the characters' skeletal structure by preventing the interference of pixels that causes flooding of adjacent characters' segments. Improving the structural quality of the characters' skeleton facilitates better feature extraction and classification, which improves the overall performance of optical character recognition (OCR). Experimental results showed significant improvements in OCR recognition rates compared to other well-established segmentation algorithms.

  13. Integrated multi-choice goal programming and multi-segment goal programming for supplier selection considering imperfect-quality and price-quantity discounts in a multiple sourcing environment

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Ter; Chen, Huang-Mu; Zhuang, Zheng-Yun

    2014-05-01

Supplier selection (SS) is a multi-criteria and multi-objective problem in which multi-segment (e.g. imperfect-quality discount (IQD) and price-quantity discount (PQD)) and multi-aspiration-level problems may be especially important; however, little attention has been given to dealing with both of them simultaneously. This study proposes a model for integrating multi-choice goal programming and multi-segment goal programming to solve the above-mentioned problems, making the following main contributions: (1) it allows decision-makers to set multiple aspiration levels on the right-hand side of each goal to suit real-world situations; (2) the PQD and IQD conditions are considered in the proposed model simultaneously; and (3) the proposed model can solve a SS problem with n suppliers where each supplier offers m IQD with r PQD intervals, where only ? extra binary variables are required. The usefulness of the proposed model is explained using a real case. The results indicate that the proposed model not only can deal with a SS problem with multi-segment and multi-aspiration levels, but also can help the decision-maker to find the appropriate order quantities for each supplier by considering cost, quality and delivery.
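
    The goal-programming machinery can be sketched as a small linear program: deviations from each aspiration level become decision variables whose sum is minimised. The following is a two-supplier, two-goal toy model solved with SciPy's `linprog`; it omits the binary variables, discount intervals, and multi-choice structure of the full model, and all names and numbers are illustrative assumptions:

    ```python
    from scipy.optimize import linprog

    def goal_program(costs, qualities, demand, cost_goal, quality_goal):
        """Order x1, x2 units from two suppliers to meet demand exactly,
        minimising cost overrun (d_c) above `cost_goal` plus quality
        shortfall (d_q) below `quality_goal`.
        Decision vector: [x1, x2, d_c, d_q], all non-negative."""
        c = [0, 0, 1, 1]                                 # minimise the deviations
        A_ub = [[costs[0], costs[1], -1, 0],             # total cost - d_c <= cost_goal
                [-qualities[0], -qualities[1], 0, -1]]   # -(total quality) - d_q <= -quality_goal
        b_ub = [cost_goal, -quality_goal]
        A_eq = [[1, 1, 0, 0]]                            # x1 + x2 = demand
        b_eq = [demand]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
        return res.x, res.fun
    ```

    A zero objective value means every aspiration level is met exactly; positive values quantify how far the best attainable order plan falls short of the goals.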

  14. Dispersal Limitations on Fish Community Recovery Following Long-term Water Quality Remediation

    DOE PAGES

    McManamay, Ryan A.; Jett, Robert T.; Ryon, Michael G.; ...

    2016-02-22

Holistic restoration approaches, such as water quality remediation, are likely to meet conservation objectives because they are typically implemented at watershed scales, as opposed to individual stream reaches. However, habitat fragmentation may impose constraints on the ecological effectiveness of holistic restoration strategies by limiting colonization following remediation. We questioned the importance of dispersal limitations to fish community recovery following long-term water quality remediation and species reintroductions across the White Oak Creek (WOC) watershed near Oak Ridge, Tennessee (USA). Long-term (26 years) responses in fish species richness and biomass to water quality remediation were evaluated in light of habitat fragmentation and population isolation from instream barriers, which varied in their passage potential. In addition, ordination techniques were used to determine the relative importance of habitat connectivity and water quality in explaining variation in fish communities relative to environmental fluctuations, i.e., streamflow. Ecological recovery (changes in richness) at each site was negatively related to barrier index, a measure of community isolation by barriers relative to stream distance. Following species reintroductions, dispersal by fish species was consistently in the downstream direction and upstream passage above barriers was non-existent. The importance of barrier index in explaining variation in fish communities was stronger during higher flow conditions, but decreased over time, an indication of increasing community stability and loss of seasonal migrants. Compared to habitat fragmentation, existing water quality concerns (i.e., outfalls, point source discharges) were unrelated to ecological recovery, but explained relatively high variation in community dynamics. Our results suggest that habitat fragmentation limited the ecological effectiveness of intensive water quality remediation efforts and fish reintroduction

  16. An Algorithm to Automate Yeast Segmentation and Tracking

    PubMed Central

    Doncic, Andreas; Eser, Umut; Atay, Oguzhan; Skotheim, Jan M.

    2013-01-01

    Our understanding of dynamic cellular processes has been greatly enhanced by rapid advances in quantitative fluorescence microscopy. Imaging single cells has emphasized the prevalence of phenomena that can be difficult to infer from population measurements, such as all-or-none cellular decisions, cell-to-cell variability, and oscillations. Examination of these phenomena requires segmenting and tracking individual cells over long periods of time. However, accurate segmentation and tracking of cells is difficult and is often the rate-limiting step in an experimental pipeline. Here, we present an algorithm that accomplishes fully automated segmentation and tracking of budding yeast cells within growing colonies. The algorithm incorporates prior information of yeast-specific traits, such as immobility and growth rate, to segment an image using a set of threshold values rather than one specific optimized threshold. Results from the entire set of thresholds are then used to perform a robust final segmentation. PMID:23520484
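
    The multi-threshold idea, segmenting at a set of thresholds and combining the results rather than committing to one optimized cutoff, can be sketched as a per-pixel vote. This is a simplification of the paper's algorithm, which also exploits yeast-specific priors such as immobility and growth rate:

    ```python
    import numpy as np

    def robust_threshold_segment(img, thresholds):
        """Segment at several thresholds and keep the pixels that are
        foreground in a majority of them, making the final segmentation
        robust to any single poorly chosen threshold."""
        stack = np.stack([img > t for t in thresholds])  # one mask per threshold
        return stack.mean(axis=0) > 0.5                  # majority vote per pixel
    ```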

  17. Segmented all-electron Gaussian basis sets of double and triple zeta qualities for Fr, Ra, and Ac

    NASA Astrophysics Data System (ADS)

    Campos, C. T.; de Oliveira, A. Z.; Ferreira, I. B.; Jorge, F. E.; Martins, L. S. C.

    2017-05-01

Segmented all-electron basis sets of valence double and triple zeta qualities plus polarization functions for the elements Fr, Ra, and Ac are generated using non-relativistic and Douglas-Kroll-Hess (DKH) Hamiltonians. The sets are augmented with diffuse functions to appropriately describe the electrons far from the nuclei. At the DKH-B3LYP level, first atomic ionization energies as well as bond lengths, dissociation energies, and polarizabilities of a sample of diatomics are calculated. Comparison with theoretical and experimental data available in the literature is carried out. It is verified that, despite the small sizes of the basis sets, they are nevertheless reliable.

  18. Glioblastoma Segmentation: Comparison of Three Different Software Packages.

    PubMed

    Fyllingen, Even Hovig; Stensjøen, Anne Line; Berntsen, Erik Magnus; Solheim, Ole; Reinertsen, Ingerid

    2016-01-01

To facilitate a more widespread use of volumetric tumor segmentation in clinical studies, there is an urgent need for reliable, user-friendly segmentation software. The aim of this study was therefore to compare three different software packages for semi-automatic brain tumor segmentation of glioblastoma, namely BrainVoyager™ QX, ITK-Snap and 3D Slicer, and to make data available for future reference. Pre-operative, contrast-enhanced T1-weighted 1.5 or 3 Tesla Magnetic Resonance Imaging (MRI) scans were obtained in 20 consecutive patients who underwent surgery for glioblastoma. MRI scans were segmented twice in each software package by two investigators. Intra-rater, inter-rater and between-software agreement was compared by using differences of means with 95% limits of agreement (LoA), Dice's similarity coefficients (DSC) and Hausdorff distance (HD). Time expenditure of segmentations was measured using a stopwatch. Eighteen tumors were included in the analyses. Inter-rater agreement was highest for BrainVoyager, with a difference of means of 0.19 mL and 95% LoA from -2.42 mL to 2.81 mL. Between-software agreement and 95% LoA were very similar for the different software packages. Intra-rater, inter-rater and between-software DSC were ≥ 0.93 in all analyses. Time expenditure was approximately 41 min per segmentation in BrainVoyager, and 18 min per segmentation in both 3D Slicer and ITK-Snap. Our main findings were that there is high agreement within and between the software packages, in terms of small intra-rater, inter-rater and between-software differences of means and high Dice's similarity coefficients. Time expenditure was highest for BrainVoyager, but all software packages were relatively time-consuming, which may limit usability in an everyday clinical setting.
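
    The Dice similarity coefficient used throughout this comparison is straightforward to compute from two binary masks; this is a minimal implementation of the standard definition DSC = 2|A ∩ B| / (|A| + |B|):

    ```python
    import numpy as np

    def dice(a, b):
        """Dice similarity coefficient between two binary segmentations.
        Returns 1.0 for identical masks and 0.0 for disjoint ones."""
        a, b = a.astype(bool), b.astype(bool)
        intersection = np.logical_and(a, b).sum()
        return 2.0 * intersection / (a.sum() + b.sum())
    ```

    DSC values of 0.93 and above, as reported here, indicate that over nine tenths of the combined voxel mass is shared between the two segmentations being compared.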

  19. Segmentation of culturally diverse visitors' values in forest recreation management

    Treesearch

    C. Li; H.C. Zinn; G.E. Chick; J.D. Absher; A.R. Graefe; Y. Hsu

    2007-01-01

The purpose of this study was to examine the potential utility of Hofstede's (1980) measure of cultural values for group segmentation in an ethnically diverse population in a forest recreation context, and to validate the values segmentation, if any, via socio-demographic and service quality related variables. In 2002, the visitors to the Angeles National Forest (ANF)...

  20. Space Network Ground Segment Sustainment (SGSS) Project: Developing a COTS-Intensive Ground System

    NASA Technical Reports Server (NTRS)

    Saylor, Richard; Esker, Linda; Herman, Frank; Jacobsohn, Jeremy; Saylor, Rick; Hoffman, Constance

    2013-01-01

The purpose of the Space Network Ground Segment Sustainment (SGSS) project is to implement a new, modern ground segment that will enable the NASA Space Network (SN) to deliver high-quality services to the SN community into the future. The key SGSS goals are to: (1) re-engineer the SN ground segment; and (2) enable cost efficiencies in the operability and maintainability of the broader SN.

  1. A Bayesian Approach for Image Segmentation with Shape Priors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Hang; Yang, Qing; Parvin, Bahram

    2008-06-20

Color and texture have been widely used in image segmentation; however, their performance is often hindered by scene ambiguities, overlapping objects, or missing parts. In this paper, we propose an interactive image segmentation approach with shape prior models within a Bayesian framework. Interactive features, through mouse strokes, reduce ambiguities, and the incorporation of shape priors enhances quality of the segmentation where color and/or texture are not solely adequate. The novelties of our approach are in (i) formulating the segmentation problem in a well-defined Bayesian framework with multiple shape priors, (ii) efficiently estimating parameters of the Bayesian model, and (iii) multi-object segmentation through user-specified priors. We demonstrate the effectiveness of our method on a set of natural and synthetic images.

  2. A novel pipeline for adrenal tumour segmentation.

    PubMed

    Koyuncu, Hasan; Ceylan, Rahime; Erdogan, Hasan; Sivri, Mesut

    2018-06-01

Adrenal tumours occur on adrenal glands surrounded by organs and osteoid. These tumours can be categorized as functional, non-functional, malign, or benign. Depending on their appearance in the abdomen, adrenal tumours can arise from one adrenal gland (unilateral) or from both adrenal glands (bilateral) and can connect with other organs, including the liver, spleen, pancreas, etc. This connection phenomenon constitutes the most important handicap against adrenal tumour segmentation. Size change, variety of shape, diverse location, and low contrast (similar grey values between the various tissues) are other disadvantages compounding segmentation difficulty. Few studies have considered adrenal tumour segmentation, and no significant improvement has been achieved for unilateral, bilateral, adherent, or noncohesive tumour segmentation. There is also no recognised segmentation pipeline or method for adrenal tumours including different shape, size, or location information. This study proposes an adrenal tumour segmentation (ATUS) pipeline designed to eliminate the above disadvantages. ATUS incorporates a number of image methods, including contrast limited adaptive histogram equalization, split and merge based on quadtree decomposition, mean shift segmentation, large grey level eliminator, and region growing. Performance assessment of ATUS was realised on 32 arterial and portal phase computed tomography images using six metrics: Dice, Jaccard, sensitivity, specificity, accuracy, and structural similarity index. ATUS achieved remarkable segmentation performance and was not affected by the discussed handicaps, particularly adherence to other organs, with success rates of 83.06%, 71.44%, 86.44%, 99.66%, 99.43%, and 98.51% for the respective metrics on images including sufficient contrast uptake. The proposed ATUS system realises detailed adrenal tumour segmentation, and avoids known disadvantages preventing accurate
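
    The final region-growing stage of a pipeline like ATUS can be sketched as a 4-connected flood fill from a seed pixel. This is greatly simplified: the actual pipeline precedes region growing with CLAHE, quadtree split-and-merge, mean-shift segmentation and grey-level elimination, and the function name and tolerance here are assumptions:

    ```python
    from collections import deque
    import numpy as np

    def region_grow(img, seed, tol=10):
        """Grow a region from `seed` by repeatedly adding 4-connected
        neighbours whose grey value is within `tol` of the seed's value."""
        h, w = img.shape
        seed_val = float(img[seed])
        mask = np.zeros((h, w), bool)
        mask[seed] = True
        q = deque([seed])
        while q:
            r, c = q.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                        and abs(float(img[nr, nc]) - seed_val) <= tol):
                    mask[nr, nc] = True
                    q.append((nr, nc))
        return mask
    ```

    The low-contrast problem the abstract describes is visible even in this sketch: if the tumour and an adjacent organ share similar grey values, the flood fill leaks across the boundary, which is why the preceding contrast-enhancement and elimination steps matter.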

  3. Elaboration of a semi-automated algorithm for brain arteriovenous malformation segmentation: initial results.

    PubMed

    Clarençon, Frédéric; Maizeroi-Eugène, Franck; Bresson, Damien; Maingreaud, Flavien; Sourour, Nader; Couquet, Claude; Ayoub, David; Chiras, Jacques; Yardin, Catherine; Mounayer, Charbel

    2015-02-01

The purpose of our study was to distinguish the different components of a brain arteriovenous malformation (bAVM) on 3D rotational angiography (3D-RA) using a semi-automated segmentation algorithm. Data from 3D-RA of 15 patients (8 males, 7 females; 14 supratentorial bAVMs, 1 infratentorial) were used to test the algorithm. Segmentation was performed in two steps: (1) nidus segmentation by propagation (vertical, then horizontal) of tagging on the reference slice (i.e., the slice on which the nidus had the largest surface); (2) contiguity propagation (based on density and variance) from tagging of arteries and veins distant from the nidus. Segmentation quality was evaluated by comparison with six frames/s DSA by two independent reviewers. Analysis of superselective microcatheterisation was performed to resolve discrepancies. Mean duration for bAVM segmentation was 64 ± 26 min. Quality of segmentation was evaluated as good or fair in 93% of cases. Segmentation gave better results than six frames/s DSA for the depiction of a focal ectasia on the main draining vein and for the evaluation of the venous drainage pattern. This segmentation algorithm is a promising tool that may help improve the understanding of bAVM angio-architecture, especially the venous drainage. • The segmentation algorithm allows for the distinction of the AVM's components • This algorithm helps to see the venous drainage of bAVMs more precisely • This algorithm may help to reduce the treatment-related complication rate.

  4. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    NASA Astrophysics Data System (ADS)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods of the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists) it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed to assess the performance of an automatic CP segmentation algorithm are presented. The first one is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method—supervised classification—was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers—linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC)—were trained using the designed features and evaluated using a leave-one-out cross validation. Results show that the LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
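
    The box-whisker criterion of the first method amounts to flagging values outside the Tukey fences Q1 − 1.5·IQR and Q3 + 1.5·IQR. The sketch below is a generic univariate version; the paper's actual features are derived from the image data rather than being raw quality scores:

    ```python
    import numpy as np

    def iqr_outliers(scores, k=1.5):
        """Flag values outside [Q1 - k*IQR, Q3 + k*IQR], the whisker fences
        of a box plot. Applied to per-subject quality features, low outliers
        are candidate segmentation failures to exclude from analysis."""
        q1, q3 = np.percentile(scores, [25, 75])
        iqr = q3 - q1
        lo, hi = q1 - k * iqr, q3 + k * iqr
        return [s for s in scores if s < lo or s > hi]
    ```

    Being non-parametric, the fence rule needs no training labels, which is why the paper reserves the supervised classifiers (LDA, LR, SVM, RFC) for the larger, manually categorized dataset.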

  5. Among-species differences in pollen quality and quantity limitation: implications for endemics in biodiverse hotspots.

    PubMed

    Alonso, Conchita; Navarro-Fernández, Carmen M; Arceo-Gómez, Gerardo; Meindl, George A; Parra-Tabla, Víctor; Ashman, Tia-Lynn

    2013-11-01

    Insufficient pollination is a function of quantity and quality of pollen receipt, and the relative contribution of each to pollen limitation may vary with intrinsic plant traits and extrinsic ecological properties. Community-level studies are essential to evaluate variation across species in quality limitation under common ecological conditions. This study examined whether endemic species are more limited by pollen quantity or quality than non-endemic co-flowering species in three endemic-rich plant communities located in biodiversity hotspots of different continents (Andalusia, California and Yucatan). Natural variations in pollen receipt and pollen tube formation were analysed for 20 insect-pollinated plants. Endemic and non-endemic species that co-flowered were paired in order to estimate and compare the quantity and quality components of pre-zygotic pollination success, obtained through piecewise regression analysis of the relationship between pollen grains and pollen tubes of naturally pollinated wilted flowers. Pollen tubes did not frequently exceed the number of ovules per flower. Only the combination of abundant and good quality pollen and a low number of ovules per flower conferred relief from pre-zygotic pollen limitation in the three stochastic pollination environments studied. Quality of pollen receipt was found to be as variable as quantity among study species. The relative pollination success of endemic and non-endemic species, and its quantity and quality components, was community dependent. Assessing both quality and quantity of pollen receipt is key to determining the ovule fertilization potential of both endemic and widespread plants in biodiverse hotspot regions. 
Large natural variation among flowers of the same species in the two components and pollen tube formation deserves further analysis in order to estimate the environmental, phenotypic and intraindividual sources of variation that may affect how plants evolve to overcome this limitation in
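    The quantity and quality components above come from piecewise regression of pollen tubes on pollen grains. As a hedged illustration only (a brute-force breakpoint search with independent OLS fits per side, not the authors' exact model):

    ```python
    def ols(xs, ys):
        """Ordinary least squares for one predictor; returns slope, intercept, SSE."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx if sxx else 0.0
        intercept = my - slope * mx
        sse = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
        return slope, intercept, sse

    def piecewise_fit(xs, ys):
        """Brute-force a single breakpoint: fit separate OLS lines left/right of
        each candidate split and keep the split with the lowest total SSE.
        Returns the x-value of the last point in the left segment."""
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        xs = [xs[i] for i in order]
        ys = [ys[i] for i in order]
        best = None
        for k in range(2, len(xs) - 1):          # at least 2 points per segment
            _, _, e1 = ols(xs[:k], ys[:k])
            _, _, e2 = ols(xs[k:], ys[k:])
            if best is None or e1 + e2 < best[1]:
                best = (xs[k - 1], e1 + e2)
        return best[0]
    ```

    With hypothetical data where pollen tubes rise one-for-one with grains and then saturate, the recovered breakpoint marks the pollen load beyond which extra grains add no tubes.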

  6. Segmented slant hole collimator for stationary cardiac SPECT: Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mao, Yanfei, E-mail: ymao@ucair.med.utah.edu; Yu, Zhicong; Zeng, Gengsheng L.

    2015-09-15

    Purpose: This work is a preliminary study of a stationary cardiac SPECT system. The goal of this research is to propose a stationary cardiac SPECT system using segmented slant-hole collimators and to perform computer simulations to test the feasibility. Compared to rotational SPECT, a stationary system has the benefit of acquiring temporally consistent projections. The most challenging issue in building a stationary system is providing sufficient projection view-angles. Methods: A GATE (GEANT4 application for tomographic emission) Monte Carlo model was developed to simulate a two-detector stationary cardiac SPECT that uses segmented slant-hole collimators. Each detector contains seven segmented slant-hole sections that slant to a common volume at the rotation center. Consequently, 14 view-angles over 180° were acquired without any gantry rotation. The NCAT phantom was used for data generation and a tailored maximum-likelihood expectation-maximization algorithm was used for image reconstruction. Effects of the limited number of view-angles and of data truncation were carefully evaluated in the paper. Results: Simulation results indicated that the proposed segmented slant-hole stationary cardiac SPECT system is able to acquire sufficient data for cardiac imaging without a loss of image quality, even when the uptakes in the liver and kidneys are high. Seven views are acquired simultaneously at each detector, leading to a 5-fold sensitivity gain over the conventional dual-head system at the same total acquisition time, which in turn increases the signal-to-noise ratio by 19%. The segmented slant-hole SPECT system also showed good performance in lesion detection. In our prototype system, a short hole-length was used to reduce the dead zone between neighboring collimator segments. The measured sensitivity gain is about 17-fold over the conventional dual-head system. Conclusions: The GATE Monte Carlo simulations confirm the feasibility of the proposed stationary cardiac SPECT system.
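    The maximum-likelihood expectation-maximization (ML-EM) reconstruction mentioned above follows the standard multiplicative update x <- x * A^T(y / Ax) / A^T(1). A toy 1D sketch under stated assumptions: the system matrix `A` below is a hypothetical 3-measurement, 2-voxel example, not the simulated SPECT geometry.

    ```python
    def mlem(A, y, n_iter=50):
        """Toy ML-EM: A is a list of detector rows (system matrix), y the
        measured counts. Each iteration forward-projects the estimate,
        back-projects the measured/estimated ratio, and rescales by the
        per-voxel sensitivity A^T(1)."""
        nvox = len(A[0])
        x = [1.0] * nvox
        sens = [sum(A[i][j] for i in range(len(A))) for j in range(nvox)]
        for _ in range(n_iter):
            proj = [sum(a * v for a, v in zip(row, x)) for row in A]
            ratio = [yi / pi if pi > 0 else 0.0 for yi, pi in zip(y, proj)]
            back = [sum(A[i][j] * ratio[i] for i in range(len(A))) for j in range(nvox)]
            x = [xj * bj / sj if sj > 0 else 0.0 for xj, bj, sj in zip(x, back, sens)]
        return x
    ```

    On consistent noiseless data the iterates converge to the activity that exactly reproduces the measurements.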

  7. LDR segmented mirror technology assessment study

    NASA Technical Reports Server (NTRS)

    Krim, M.; Russo, J.

    1983-01-01

    In the mid-1990s, NASA plans to orbit a giant telescope, whose aperture may be as great as 30 meters, for infrared and sub-millimeter astronomy. Its primary mirror will be deployed or assembled in orbit from a mosaic of possibly hundreds of mirror segments. Each segment must be shaped to precise curvature tolerances so that diffraction-limited performance will be achieved at 30 microns (nominal operating wavelength). All panels must lie within 1 micron of a theoretical surface described by the optical prescription of the telescope's primary mirror. To attain diffraction-limited performance, the issues of alignment and/or position sensing, position control to micron tolerances, and structural, thermal, and mechanical considerations for stowing, deploying, and erecting the reflector must be resolved. Radius of curvature precision influences panel size, shape, material, and type of construction. Two superior material choices emerged: fused quartz (sufficiently homogeneous with respect to thermal expansivity to permit a thin shell substrate to be drape molded between graphite dies to a precise enough off-axis asphere for optical finishing of the as-received segment) and Pyrex or Duran (less expensive than quartz and formable at lower temperatures). The optimal reflector panel size is between 1-1/2 and 2 meters. Making one two-meter mirror every two weeks requires new approaches to manufacturing off-axis parabolic or aspheric segments (drape molding on precision dies and subsequent finishing on a nonrotationally symmetric finishing machine). Proof-of-concept developmental programs were identified to prove the feasibility of the materials and manufacturing ideas.

  8. Combining multi-atlas segmentation with brain surface estimation

    NASA Astrophysics Data System (ADS)

    Huo, Yuankai; Carass, Aaron; Resnick, Susan M.; Pham, Dzung L.; Prince, Jerry L.; Landman, Bennett A.

    2016-03-01

    Whole brain segmentation (with comprehensive cortical and subcortical labels) and cortical surface reconstruction are two essential techniques for investigating the human brain. The two tasks are typically conducted independently, however, which leads to spatial inconsistencies and hinders further integrated cortical analyses. To obtain self-consistent whole brain segmentations and surfaces, FreeSurfer segregates the subcortical and cortical segmentations before and after the cortical surface reconstruction. However, this "segmentation to surface to parcellation" strategy has shown limitation in various situations. In this work, we propose a novel "multi-atlas segmentation to surface" method called Multi-atlas CRUISE (MaCRUISE), which achieves self-consistent whole brain segmentations and cortical surfaces by combining multi-atlas segmentation with the cortical reconstruction method CRUISE. To our knowledge, this is the first work that achieves the reliability of state-of-the-art multi-atlas segmentation and labeling methods together with accurate and consistent cortical surface reconstruction. Compared with previous methods, MaCRUISE has three features: (1) MaCRUISE obtains 132 cortical/subcortical labels simultaneously from a single multi-atlas segmentation before reconstructing volume consistent surfaces; (2) Fuzzy tissue memberships are combined with multi-atlas segmentations to address partial volume effects; (3) MaCRUISE reconstructs topologically consistent cortical surfaces by using the sulci locations from multi-atlas segmentation. Two data sets, one consisting of five subjects with expertly traced landmarks and the other consisting of 100 volumes from elderly subjects are used for validation. Compared with CRUISE, MaCRUISE achieves self-consistent whole brain segmentation and cortical reconstruction without compromising on surface accuracy. MaCRUISE is comparably accurate to FreeSurfer while achieving greater robustness across an elderly population.

  9. Combining Multi-atlas Segmentation with Brain Surface Estimation.

    PubMed

    Huo, Yuankai; Carass, Aaron; Resnick, Susan M; Pham, Dzung L; Prince, Jerry L; Landman, Bennett A

    2016-02-27

    Whole brain segmentation (with comprehensive cortical and subcortical labels) and cortical surface reconstruction are two essential techniques for investigating the human brain. The two tasks are typically conducted independently, however, which leads to spatial inconsistencies and hinders further integrated cortical analyses. To obtain self-consistent whole brain segmentations and surfaces, FreeSurfer segregates the subcortical and cortical segmentations before and after the cortical surface reconstruction. However, this "segmentation to surface to parcellation" strategy has shown limitations in various situations. In this work, we propose a novel "multi-atlas segmentation to surface" method called Multi-atlas CRUISE (MaCRUISE), which achieves self-consistent whole brain segmentations and cortical surfaces by combining multi-atlas segmentation with the cortical reconstruction method CRUISE. To our knowledge, this is the first work that achieves the reliability of state-of-the-art multi-atlas segmentation and labeling methods together with accurate and consistent cortical surface reconstruction. Compared with previous methods, MaCRUISE has three features: (1) MaCRUISE obtains 132 cortical/subcortical labels simultaneously from a single multi-atlas segmentation before reconstructing volume consistent surfaces; (2) Fuzzy tissue memberships are combined with multi-atlas segmentations to address partial volume effects; (3) MaCRUISE reconstructs topologically consistent cortical surfaces by using the sulci locations from multi-atlas segmentation. Two data sets, one consisting of five subjects with expertly traced landmarks and the other consisting of 100 volumes from elderly subjects are used for validation. Compared with CRUISE, MaCRUISE achieves self-consistent whole brain segmentation and cortical reconstruction without compromising on surface accuracy. MaCRUISE is comparably accurate to FreeSurfer while achieving greater robustness across an elderly population.

  10. Semiautomatic Segmentation of Glioma on Mobile Devices.

    PubMed

    Wu, Ya-Ping; Lin, Yu-Song; Wu, Wei-Guo; Yang, Cong; Gu, Jian-Qin; Bai, Yan; Wang, Mei-Yun

    2017-01-01

    Brain tumor segmentation is the first and the most critical step in clinical applications of radiomics. However, segmenting brain images by radiologists is labor intense and prone to inter- and intraobserver variability. Stable and reproducible brain image segmentation algorithms are thus important for successful tumor detection in radiomics. In this paper, we propose a supervised brain image segmentation method, especially for magnetic resonance (MR) brain images with glioma. This paper uses hard edge multiplicative intrinsic component optimization to preprocess glioma medical image on the server side, and then, the doctors could supervise the segmentation process on mobile devices in their convenient time. Since the preprocessed images have the same brightness for the same tissue voxels, they have small data size (typically 1/10 of the original image size) and simple structure of 4 types of intensity value. This observation thus allows follow-up steps to be processed on mobile devices with low bandwidth and limited computing performance. Experiments conducted on 1935 brain slices from 129 patients show that more than 30% of the sample can reach 90% similarity; over 60% of the samples can reach 85% similarity, and more than 80% of the sample could reach 75% similarity. The comparisons with other segmentation methods also demonstrate both efficiency and stability of the proposed approach.

  11. A difference-matrix metaheuristic for intensity map segmentation in step-and-shoot IMRT delivery.

    PubMed

    Gunawardena, Athula D A; D'Souza, Warren D; Goadrich, Laura D; Meyer, Robert R; Sorensen, Kelly J; Naqvi, Shahid A; Shi, Leyuan

    2006-05-21

    At an intermediate stage of radiation treatment planning for IMRT, most commercial treatment planning systems for IMRT generate intensity maps that describe the grid of beamlet intensities for each beam angle. Intensity map segmentation of the matrix of individual beamlet intensities into a set of MLC apertures and corresponding intensities is then required in order to produce an actual radiation delivery plan for clinical use. Mathematically, this is a very difficult combinatorial optimization problem, especially when mechanical limitations of the MLC lead to many constraints on aperture shape, and setup times for apertures make the number of apertures an important factor in overall treatment time. We have developed, implemented and tested on clinical cases a metaheuristic (that is, a method that provides a framework to guide the repeated application of another heuristic) that efficiently generates very high-quality (low aperture number) segmentations. Our computational results demonstrate that the number of beam apertures and monitor units in the treatment plans resulting from our approach is significantly smaller than the corresponding values for treatment plans generated by the heuristics embedded in a widely used commercial system. We also contrast the excellent results of our fast and robust metaheuristic with results from an 'exact' method, branch-and-cut, which attempts to construct optimal solutions, but, within clinically acceptable time limits, generally fails to produce good solutions, especially for intensity maps with more than five intensity levels. Finally, we show that in no instance is there a clinically significant change of quality associated with our more efficient plans.
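    For intuition, intensity-map segmentation can be illustrated on a single leaf pair: a 1D row of beamlet intensities is peeled into contiguous apertures, each with a weight (monitor units). The greedy decomposition below is a naive baseline sketch only, not the metaheuristic or branch-and-cut methods discussed above, and it ignores MLC mechanical constraints.

    ```python
    def decompose_row(profile):
        """Greedily peel contiguous positive runs off a 1D beamlet-intensity
        profile: each step opens one aperture over the leftmost positive run
        and delivers that run's minimum intensity."""
        profile = list(profile)
        apertures = []                      # (left_index, right_index, weight)
        while any(v > 0 for v in profile):
            left = next(i for i, v in enumerate(profile) if v > 0)
            right = left
            while right + 1 < len(profile) and profile[right + 1] > 0:
                right += 1
            weight = min(profile[left:right + 1])
            for i in range(left, right + 1):
                profile[i] -= weight
            apertures.append((left, right, weight))
        return apertures
    ```

    The apertures always sum back to the original profile; the optimization problem is to do the same with as few apertures and as few total monitor units as possible.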

  12. Transcriptomic insights into citrus segment membrane's cell wall components relating to fruit sensory texture.

    PubMed

    Wang, Xun; Lin, Lijin; Tang, Yi; Xia, Hui; Zhang, Xiancong; Yue, Maolan; Qiu, Xia; Xu, Ke; Wang, Zhihui

    2018-04-23

    During fresh fruit consumption, sensory texture is one factor that affects the organoleptic qualities. Chemical components of plant cell walls, including pectin, cellulose, hemicellulose and lignin, play central roles in determining the textural qualities. To explore the genes and regulatory pathways involved in fresh citrus' perceived sensory texture, we performed mRNA-seq analyses of the segment membranes of two citrus cultivars, Shiranui and Kiyomi, with different organoleptic textures. Segment membranes were sampled at two developmental stages of citrus fruit, the beginning and end of the expansion period. More than 3000 differentially expressed genes were identified. The gene ontology analysis revealed that more categories were significantly enriched in 'Shiranui' than in 'Kiyomi' at both developmental stages. In total, 108 significantly enriched pathways were obtained, with most belonging to metabolism. A detailed transcriptomic analysis revealed potential critical genes involved in the metabolism of cell wall structures, for example, GAUT4 in pectin synthesis, CESA1, 3 and 6, and SUS4 in cellulose synthesis, CSLC5, XXT1 and XXT2 in hemicellulose synthesis, and CSE in lignin synthesis. Low levels, or no expression, of genes involved in cellulose and hemicellulose, such as CESA4, CESA7, CESA8, IRX9 and IRX14, confirmed that secondary cell walls were negligible or absent in citrus segment membranes. A chemical component analysis of the segment membranes from mature fruit revealed that the pectin, cellulose and lignin contents, and the segment membrane's weight (% of segment) were greater in 'Kiyomi'. Organoleptic quality of citrus is easily overlooked. It is mainly determined by sensory texture perceived in citrus segment membrane properties. We performed mRNA-seq analyses of citrus segment membranes to explore the genes and regulatory pathways involved in fresh citrus' perceived sensory texture. Transcriptomic data showed high repeatability between two

  13. Correction tool for Active Shape Model based lumbar muscle segmentation.

    PubMed

    Valenzuela, Waldo; Ferguson, Stephen J; Ignasiak, Dominika; Diserens, Gaelle; Vermathen, Peter; Boesch, Chris; Reyes, Mauricio

    2015-08-01

    In the clinical environment, the accuracy and speed of the image segmentation process play a key role in the analysis of pathological regions. Despite advances in anatomic image segmentation, time-effective correction tools are commonly needed to improve segmentation results. Therefore, these tools must provide fast corrections with a low number of interactions, and a user-independent solution. In this work we present a new interactive method for correcting image segmentations. Given an initial segmentation and the original image, our tool provides a 2D/3D environment that enables 3D shape correction through simple 2D interactions. Our scheme is based on direct manipulation of free-form deformation adapted to a 2D environment. This approach enables an intuitive and natural correction of 3D segmentation results. The developed method has been implemented into a software tool and has been evaluated for the task of lumbar muscle segmentation from magnetic resonance images. Experimental results show that full segmentation correction could be performed within an average correction time of 6±4 minutes and an average of 68±37 interactions, while maintaining the quality of the final segmentation result within an average Dice coefficient of 0.92±0.03.
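    The Dice coefficient used above to score the corrected segmentations is straightforward to compute. A minimal sketch for binary masks represented as sets of voxel coordinates:

    ```python
    def dice(mask_a, mask_b):
        """Dice similarity coefficient between two binary masks given as sets
        of voxel coordinates: 2|A ∩ B| / (|A| + |B|); 1.0 means identical."""
        if not mask_a and not mask_b:
            return 1.0
        return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))
    ```

    Two masks of four voxels each that share two voxels score 2·2/8 = 0.5.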

  14. Threshold-based segmentation of fluorescent and chromogenic images of microglia, astrocytes and oligodendrocytes in FIJI.

    PubMed

    Healy, Sinead; McMahon, Jill; Owens, Peter; Dockery, Peter; FitzGerald, Una

    2018-02-01

    Image segmentation is often imperfect, particularly in complex image sets such as z-stack micrographs of slice cultures, and there is a need for sufficient detail about the parameters used in quantitative image analysis to allow independent repeatability and appraisal. For the first time, we have critically evaluated, quantified and validated the performance of different segmentation methodologies using z-stack images of ex vivo glial cells. The BioVoxxel toolbox plugin, available in FIJI, was used to measure the relative quality, accuracy, specificity and sensitivity of 16 global and 9 local automatic thresholding algorithms. Automatic thresholding yields improved binary representation of glial cells compared with the conventional user-chosen single-threshold approach for confocal z-stacks acquired from ex vivo slice cultures. The performance of threshold algorithms varies considerably in quality, specificity, accuracy and sensitivity, with entropy-based thresholds scoring highest for fluorescent staining. We have used the BioVoxxel toolbox to correctly and consistently select the best automated threshold algorithm to segment z-projected images of ex vivo glial cells for downstream digital image analysis and to define segmentation quality. The automated OLIG2 cell count was validated using stereology. As image segmentation and feature extraction can quite critically affect the performance of successive steps in the image analysis workflow, it is becoming increasingly necessary to consider the quality of digital segmentation methodologies. Here, we have applied, validated and extended an existing performance-check methodology in the BioVoxxel toolbox to z-projected images of ex vivo glial cells.
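    Among the global algorithms such a comparison covers, Otsu's method is a representative example: it picks the threshold that maximizes the between-class variance of the intensity histogram. A self-contained sketch for illustration only (the BioVoxxel toolbox evaluates many more algorithms, including the entropy-based ones that scored highest here):

    ```python
    def otsu_threshold(pixels, levels=256):
        """Otsu's global threshold: return the level t that maximizes the
        between-class variance w0*w1*(m0 - m1)^2 of background (<= t) and
        foreground (> t)."""
        hist = [0] * levels
        for p in pixels:
            hist[p] += 1
        total = len(pixels)
        sum_all = sum(i * h for i, h in enumerate(hist))
        best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
        for t in range(levels):
            w0 += hist[t]
            if w0 == 0:
                continue                 # no background class yet
            w1 = total - w0
            if w1 == 0:
                break                    # no foreground class left
            sum0 += t * hist[t]
            m0 = sum0 / w0               # background mean
            m1 = (sum_all - sum0) / w1   # foreground mean
            var_between = w0 * w1 * (m0 - m1) ** 2
            if var_between > best_var:
                best_var, best_t = var_between, t
        return best_t
    ```

    On a clearly bimodal image the returned threshold separates the two intensity populations exactly.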

  15. Automated segmentation and dose-volume analysis with DICOMautomaton

    NASA Astrophysics Data System (ADS)

    Clark, H.; Thomas, S.; Moiseenko, V.; Lee, R.; Gill, B.; Duzenli, C.; Wu, J.

    2014-03-01

    Purpose: Exploration of historical data for regional organ dose sensitivity is limited by the effort needed to (sub-)segment large numbers of contours. A system has been developed which can rapidly perform autonomous contour sub-segmentation and generic dose-volume computations, substantially reducing the effort required for exploratory analyses. Methods: A contour-centric approach is taken which enables lossless, reversible segmentation and dramatically reduces computation time compared with voxel-centric approaches. Segmentation can be specified on a per-contour, per-organ, or per-patient basis, and can be performed along either an embedded plane or in terms of the contour's bounds (e.g., split an organ into fractional-volume/dose pieces along any 3D unit vector). More complex segmentation techniques are available. Anonymized data from 60 head-and-neck cancer patients were used to compare dose-volume computations with Varian's Eclipse™ (Varian Medical Systems, Inc.). Results: Computed mean doses and dose-volume histograms agree strongly with those from Varian's Eclipse™. Contours which have been segmented can be injected back into patient data permanently and in a Digital Imaging and Communications in Medicine (DICOM)-conforming manner. Lossless segmentation persists across such injection, and remains fully reversible. Conclusions: DICOMautomaton allows researchers to rapidly, accurately, and autonomously segment large amounts of data into intricate structures suitable for analyses of regional organ dose sensitivity.

  16. Electro-Optic Segment-Segment Sensors for Radio and Optical Telescopes

    NASA Technical Reports Server (NTRS)

    Abramovici, Alex

    2012-01-01

    A document discusses an electro-optic sensor that consists of a collimator, attached to one segment, and a quad diode, attached to an adjacent segment. Relative segment-segment motion causes the beam from the collimator to move across the quad diode, thus generating a measurable electric signal. This sensor type, which is relatively inexpensive, can be configured as an edge sensor or as a remote segment-segment motion sensor.

  17. Addition of simethicone improves small bowel capsule endoscopy visualisation quality.

    PubMed

    Krijbolder, M S; Grooteman, K V; Bogers, S K; de Jong, D J

    2018-01-01

    Small bowel capsule endoscopy (SBCE) is an important diagnostic tool for small-bowel diseases but its quality may be hampered by intraluminal gas. This study evaluated the added value of the anti-foaming agent, simethicone, to a bowel preparation with polyethylene glycol (PEG) on the quality of small bowel visualisation and its use in the Netherlands. This was a retrospective, single-blind, cohort study. Patients in the PEG group only received PEG prior to SBCE. Patients in the PEG-S group ingested additional simethicone. Two investigators assessed the quality of small-bowel visualisation using a four-point scale for 'intraluminal gas' and 'faecal contamination'. By means of a survey, the use of anti-foaming agents was assessed in a random sample of 16 Dutch hospitals performing SBCE. The quality of small bowel visualisation in the PEG group (n = 33) was significantly more limited by intraluminal gas when compared with the PEG-S group (n = 31): proximal segment 83.3% in PEG group vs. 18.5% in PEG-S group (p < 0.01), distal segment 66.7% vs. 18.5% respectively (p < 0.01). No difference was observed in the amount of faecal contamination (proximal segment 80.0% PEG vs. 59.3% PEG-S, p = 0.2; distal segment 90.0% PEG vs. 85.2% PEG-S, p = 0.7), mean small bowel transit times (4.0 PEG vs. 3.9 hours PEG-S, p = 0.7) and diagnostic yield (43.3% PEG vs. 22.2% PEG-S, p = 0.16). Frequency of anti-foaming agent use in the Netherlands was low (3/16, 18.8%). Simethicone is of added value to a PEG bowel preparation in improving the quality of visualisation of the small bowel by reducing intraluminal gas. At present, the use of anti-foaming agents in SBCE preparation is not standard practice in the Netherlands.

  18. MRI Segmentation of the Human Brain: Challenges, Methods, and Applications

    PubMed Central

    Despotović, Ivana

    2015-01-01

    Image segmentation is one of the most important tasks in medical image analysis and is often the first and the most critical step in many clinical applications. In brain MRI analysis, image segmentation is commonly used for measuring and visualizing the brain's anatomical structures, for analyzing brain changes, for delineating pathological regions, and for surgical planning and image-guided interventions. In the last few decades, various segmentation techniques of different accuracy and degree of complexity have been developed and reported in the literature. In this paper we review the most popular methods commonly used for brain MRI segmentation. We highlight differences between them and discuss their capabilities, advantages, and limitations. To address the complexity and challenges of the brain MRI segmentation problem, we first introduce the basic concepts of image segmentation. Then, we explain different MRI preprocessing steps including image registration, bias field correction, and removal of nonbrain tissue. Finally, after reviewing different brain MRI segmentation methods, we discuss the validation problem in brain MRI segmentation. PMID:25945121

  19. Flexible methods for segmentation evaluation: results from CT-based luggage screening.

    PubMed

    Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry

    2014-01-01

    Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms' behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. Our objective was to develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors, that measure feature recovery, and that allow us to prioritize segments. We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms.

  20. Dynamic deformable models for 3D MRI heart segmentation

    NASA Astrophysics Data System (ADS)

    Zhukov, Leonid; Bao, Zhaosheng; Gusikov, Igor; Wood, John; Breen, David E.

    2002-05-01

    Automated or semiautomated segmentation of medical images decreases interstudy variation, observer bias, and postprocessing time, as well as providing clinically relevant quantitative data. In this paper we present a new dynamic deformable modeling approach to 3D segmentation. It utilizes recently developed dynamic remeshing techniques and curvature estimation methods to produce high-quality meshes. The approach has been implemented in an interactive environment that allows a user to specify an initial model and identify key features in the data. These features act as hard constraints that the model must not pass through as it deforms. We have employed the method to perform semi-automatic segmentation of heart structures from cine MRI data.

  1. Consistent cortical reconstruction and multi-atlas brain segmentation.

    PubMed

    Huo, Yuankai; Plassard, Andrew J; Carass, Aaron; Resnick, Susan M; Pham, Dzung L; Prince, Jerry L; Landman, Bennett A

    2016-09-01

    Whole brain segmentation and cortical surface reconstruction are two essential techniques for investigating the human brain. Spatial inconsistencies, which can hinder further integrated analyses of brain structure, can result because these two tasks are typically conducted independently of each other. FreeSurfer obtains self-consistent whole brain segmentations and cortical surfaces. It starts with subcortical segmentation, then carries out cortical surface reconstruction, and ends with cortical segmentation and labeling. However, this "segmentation to surface to parcellation" strategy has shown limitations in various cohorts such as older populations with large ventricles. In this work, we propose a novel "multi-atlas segmentation to surface" method called Multi-atlas CRUISE (MaCRUISE), which achieves self-consistent whole brain segmentations and cortical surfaces by combining multi-atlas segmentation with the cortical reconstruction method CRUISE. A modification called MaCRUISE(+) is designed to perform well when white matter lesions are present. Compared to the benchmarks CRUISE and FreeSurfer, the surface accuracy of MaCRUISE and MaCRUISE(+) is validated using two independent datasets with expertly placed cortical landmarks. A third independent dataset with expertly delineated volumetric labels is employed to compare segmentation performance. Finally, 200 MR volumetric images from an older adult sample are used to assess the robustness of MaCRUISE and FreeSurfer. The advantages of MaCRUISE are: (1) MaCRUISE constructs self-consistent voxelwise segmentations and cortical surfaces, while MaCRUISE(+) is robust to white matter pathology. (2) MaCRUISE achieves more accurate whole brain segmentations than independently conducting the multi-atlas segmentation. (3) MaCRUISE is comparable in accuracy to FreeSurfer (when FreeSurfer does not exhibit global failures) while achieving greater robustness across an older adult population. MaCRUISE has been made freely available.

  2. What provides a better value for your time? The use of relative value units to compare posterior segmental instrumentation of vertebral segments.

    PubMed

    Orr, R Douglas; Sodhi, Nipun; Dalton, Sarah E; Khlopas, Anton; Sultan, Assem A; Chughtai, Morad; Newman, Jared M; Savage, Jason; Mroz, Thomas E; Mont, Michael A

    2018-02-02

    Relative value units (RVUs) are a compensation model based on the effort required to provide a procedure or service to a patient. Thus, procedures that are more complex and require greater technical skill and aftercare, such as multilevel spine surgery, should provide greater physician compensation. However, there are limited data comparing RVUs with operative time. Therefore, this study aims to compare mean (1) operative times; (2) RVUs; and (3) RVU/min between posterior segmental instrumentation of 3-6, 7-12, and ≥13 vertebral segments, and to perform annual cost difference analysis. A total of 437 patients who underwent instrumentation of 3-6 segments (Cohort 1, current procedural terminology [CPT] code: 22842), 67 patients who had instrumentation of 7-12 segments (Cohort 2, CPT code: 22843), and 16 patients who had instrumentation of ≥13 segments (Cohort 3, CPT code: 22844) were identified from the National Surgical Quality Improvement Program (NSQIP) database. Mean operative times, RVUs, and RVU/min, as well as an annualized cost difference analysis, were calculated and compared using Student t test. This study received no funding from any party or entity. Cohort 1 had shorter mean operative times than Cohorts 2 and 3 (217 minutes vs. 325 minutes vs. 426 minutes, p<.05). Cohort 1 had a lower mean RVU than Cohorts 2 and 3 (12.6 vs. 13.4 vs. 16.4). Cohort 1 had a greater RVU/min than Cohorts 2 and 3 (0.08 vs. 0.05, p<.05; and 0.08 vs. 0.05, p>.05). A $112,432.12 annualized cost difference between Cohorts 1 and 2, a $176,744.76 difference between Cohorts 1 and 3, and a $64,312.55 difference between Cohorts 2 and 3 were calculated. The RVU/min takes into account not just the value provided but also the operative times required for highly complex cases. The RVU/min for fewer vertebral level instrumentation being greater (0.08 vs. 0.05), as well as the $177,000 annualized cost difference, indicates that compensation is not proportional to the added time, effort
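    The study's central metric is simply RVUs earned divided by operative minutes. A sketch with illustrative numbers only: the RVU value, minutes, annual operating time and dollars-per-RVU below are hypothetical assumptions, not the cohort figures reported above.

    ```python
    def rvu_per_min(rvu, operative_minutes):
        """Work intensity of a procedure: relative value units per minute
        of operative time."""
        return rvu / operative_minutes

    def annualized_difference(rate_a, rate_b, minutes_per_year, dollars_per_rvu):
        """Hypothetical annual compensation gap if the same operating-room
        time were spent on procedure A instead of procedure B."""
        return (rate_a - rate_b) * minutes_per_year * dollars_per_rvu
    ```

    For example, a 16-RVU case taking 200 minutes yields 0.08 RVU/min; sustained over a year of operating time, a 0.08 vs. 0.05 RVU/min gap compounds into a large compensation difference.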

  3. Automatic brain tissue segmentation based on graph filter.

    PubMed

    Kong, Youyong; Chen, Xiaopeng; Wu, Jiasong; Zhang, Pinzheng; Chen, Yang; Shu, Huazhong

    2018-05-09

    Accurate segmentation of brain tissues from magnetic resonance imaging (MRI) is of significant importance in clinical applications and neuroscience research. Accurate segmentation is challenging due to tissue heterogeneity, which is caused by noise, bias field and partial volume effects. To overcome this limitation, this paper presents a novel algorithm for brain tissue segmentation based on supervoxels and a graph filter. Firstly, a supervoxel method is employed to generate effective supervoxels for the 3D MRI image. Secondly, the supervoxels are classified into different types of tissues based on filtering of graph signals. The performance is evaluated on the BrainWeb 18 dataset and the Internet Brain Segmentation Repository (IBSR) 18 dataset. The proposed method achieves mean dice similarity coefficients (DSC) of 0.94, 0.92 and 0.90 for the segmentation of white matter (WM), grey matter (GM) and cerebrospinal fluid (CSF) for the BrainWeb 18 dataset, and mean DSC of 0.85, 0.87 and 0.57 for the segmentation of WM, GM and CSF for the IBSR 18 dataset. The proposed approach discriminates well between different types of brain tissue in brain MRI, and has high potential for clinical applications.
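
    The Dice similarity coefficient (DSC) used to report these results is twice the overlap of two binary masks divided by the sum of their sizes; a minimal sketch (not the authors' code):

```python
import numpy as np

def dice(seg: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    intersection = np.logical_and(seg, truth).sum()
    return 2.0 * intersection / (seg.sum() + truth.sum())
```

    A DSC of 1.0 means perfect agreement with the reference segmentation; 0.0 means no overlap at all.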

  4. Multi-atlas based segmentation using probabilistic label fusion with adaptive weighting of image similarity measures.

    PubMed

    Sjöberg, C; Ahnesjö, A

    2013-06-01

    Label fusion multi-atlas approaches for image segmentation can give better segmentation results than single atlas methods. We present a multi-atlas label fusion strategy based on probabilistic weighting of distance maps. Relationships between image similarities and segmentation similarities are estimated in a learning phase and used to derive fusion weights that are proportional to the probability for each atlas to improve the segmentation result. The method was tested using a leave-one-out strategy on a database of 21 pre-segmented prostate patients for different image registrations combined with different image similarity scorings. The probabilistic weighting yields results that are equal or better compared to both fusion with equal weights and results using the STAPLE algorithm. Results from the experiments demonstrate that label fusion by weighted distance maps is feasible, and that probabilistic weighted fusion improves segmentation quality more the stronger the individual atlas segmentation quality depends on the corresponding registered image similarity. The regions used for evaluation of the image similarity measures were found to be more important than the choice of similarity measure. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
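
    Probability-weighted fusion of distance maps can be sketched as follows, assuming signed distance maps that are negative inside the structure and per-atlas weights already estimated in the learning phase; this is an illustration of the fusion step, not the authors' implementation:

```python
import numpy as np

def fuse_labels(distance_maps, weights):
    """Fuse atlas segmentations, given as signed distance maps (negative
    inside the structure), using per-atlas probability-derived weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # normalize the weights
    fused = sum(wi * d for wi, d in zip(w, distance_maps))
    return fused < 0                                  # inside where the fused distance is negative
```

    Equal weights reduce this to a plain distance-map average; the probabilistic weighting gives more influence to atlases whose registered image similarity predicts a better segmentation.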

  5. MRI brain tumor segmentation based on improved fuzzy c-means method

    NASA Astrophysics Data System (ADS)

    Deng, Wankai; Xiao, Wei; Pan, Chao; Liu, Jianguo

    2009-10-01

    This paper focuses on image segmentation, one of the key problems in medical image processing. A new medical image segmentation method is proposed based on the fuzzy c-means algorithm and spatial information. Firstly, we classify the image into the region of interest and background using the fuzzy c-means algorithm. Then we use information about the tissues' gradients and the intensity inhomogeneities of regions to improve the quality of segmentation. The sum of the mean variance within the region and the reciprocal of the mean gradient along the edge of the region is chosen as the objective function; the minimum of this sum gives the optimal result. The results show that the clustering segmentation algorithm is effective.
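
    The objective described, mean intra-region variance plus the reciprocal of the mean gradient magnitude along the region boundary, can be sketched for a two-region (foreground/background) segmentation as follows; the 4-connected boundary detection and the small epsilon are illustrative simplifications, not the authors' code:

```python
import numpy as np

def boundary_pixels(mask):
    """Mask pixels with at least one 4-connected background neighbor."""
    b = np.zeros_like(mask, dtype=bool)
    b[:-1] |= mask[:-1] & ~mask[1:]
    b[1:] |= mask[1:] & ~mask[:-1]
    b[:, :-1] |= mask[:, :-1] & ~mask[:, 1:]
    b[:, 1:] |= mask[:, 1:] & ~mask[:, :-1]
    return b

def objective(image, mask):
    """Intra-region variance plus the reciprocal of the mean gradient
    magnitude along the region boundary; lower is better (compact
    regions separated by strong edges)."""
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)
    edge = boundary_pixels(mask)
    variance_term = image[mask].var() + image[~mask].var()
    return variance_term + 1.0 / (grad[edge].mean() + 1e-9)
```

    Minimizing this value rewards homogeneous regions (low variance) whose boundaries coincide with strong image gradients.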

  6. Innovative visualization and segmentation approaches for telemedicine

    NASA Astrophysics Data System (ADS)

    Nguyen, D.; Roehrig, Hans; Borders, Marisa H.; Fitzpatrick, Kimberly A.; Roveda, Janet

    2014-09-01

    In health care applications, we obtain, manage, store, and communicate high-quality, large-volume image data through integrated devices. In this paper we propose several promising methods that can assist physicians in image data processing and communication. We design a new semi-automated segmentation approach for radiological images, such as CT and MRI, to clearly identify the areas of interest. This approach combines the advantages of both region-based and boundary-based methods. It comprises three key steps: coarse segmentation using a fuzzy affinity and homogeneity operator, image division and reclassification using the Voronoi Diagram, and refinement of boundary lines using the level set model.

  7. Segmental volvulus in the neonate: A particular clinical entity.

    PubMed

    Khen-Dunlop, Naziha; Beaudoin, Sylvie; Marion, Blandine; Rousseau, Véronique; Giuseppi, Agnes; Nicloux, Muriel; Grevent, David; Salomon, Laurent J; Aigrain, Yves; Lapillonne, Alexandre; Sarnacki, Sabine

    2017-03-01

    Complete intestinal volvulus is mainly related to congenital anomalies of so-called intestinal malrotation, whereas segmental volvulus appears as a distinct entity, mostly observed during the perinatal period. Because these two situations are still lumped together, the aim of this study was to describe the particular condition of neonatal segmental volvulus. We analyzed the circumstances of diagnosis and management of 17 consecutive neonates operated on for segmental volvulus over a 10-year period in a single institution. During the same period, 19 cases of neonatal complete midgut volvulus were operated on. Prenatal US exam anomalies were observed in 16/17 (94%) of segmental volvulus cases, significantly more frequently than in complete volvulus (p=0.003). Intestinal malposition was described peroperatively in all cases of complete volvulus, but also in 4/17 segmental volvulus cases (23%). Intestinal resection was performed in 88% of segmental volvulus cases, whereas only one extensive intestinal necrosis was observed in complete volvulus. Parenteral nutrition was required in all patients with segmental volvulus, with a median duration of 50 days (range 5-251). Segmental volvulus occurs mainly prenatally and leads to fetal ultrasound anomalies. This situation, despite a limited length of intestinal loss, is associated with significant postnatal morbidity. Treatment study. Level IV. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Optomechanical design software for segmented mirrors

    NASA Astrophysics Data System (ADS)

    Marrero, Juan

    2016-08-01

    The software package presented in this paper, still under development, was born to help analyze the influence of the many parameters involved in the design of a large segmented mirror telescope. In summary, it is a set of tools which were added to a common framework as they were needed. Great emphasis has been placed on the graphical presentation, as scientific visualization nowadays cannot be conceived without the use of a helpful 3D environment, showing the analyzed system as close to reality as possible. Use of third-party software packages is limited to ANSYS, which should be available in the system only if FEM results are needed. Among the various functionalities of the software, the following are worth mentioning: automatic 3D model construction of a segmented mirror from a set of parameters, geometric ray tracing, automatic 3D model construction of a telescope structure around the defined mirrors from a set of parameters, segmented mirror human access assessment, analysis of integration tolerances, assessment of segment collisions, structural deformation under gravity and thermal variation, mirror support system analysis including warping harness mechanisms, etc.

  9. 77 FR 64039 - Limited Approval and Disapproval of Air Quality Implementation Plans; Nevada; Clark County...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-18

    ... Approval and Disapproval of Air Quality Implementation Plans; Nevada; Clark County; Stationary Source... limited approval and limited disapproval of revisions to the Clark County portion of the applicable state... limited approval and limited disapproval action is to update the applicable SIP with current Clark County...

  10. Food Insecurity and Perceived Diet Quality Among Low-Income Older Americans with Functional Limitations.

    PubMed

    Chang, Yunhee; Hickman, Haley

    2018-05-01

    To evaluate how functional limitations are associated with food insecurity and perceived diet quality in low-income older Americans. Nationwide repeated cross-sectional surveys regarding health and nutritional status. The National Health and Nutrition Examination Surveys, 2007-2008, 2009-2010, and 2011-2012. Individuals aged ≥65 years with household incomes ≤130% of the federal poverty level (n = 1,323). Dependent variables included dichotomous indicators of food insecurity and poor-quality diet, measured with the household food security survey module and respondents' own ratings, respectively. Independent variable was presence of limitations in physical functioning. Weighted logistic regressions with nested controls and interaction terms. Functional limitations in low-income older adults were associated with 1.69 times higher odds of food insecurity (P < .01) and 1.65 times higher odds of poor-quality diet (P < .01) after accounting for individuals' health care needs and socioeconomic conditions. These associations were greatest among those living alone (odds ratio = 3.38 for food insecurity; 3.07 for poor-quality diet; P < .05) and smallest among those living with a partner. Low-income older adults who live alone with functional limitations are exposed to significant nutritional risk. Resources should be directed to facilitating their physical access to healthful foods. Copyright © 2017 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  11. Mining Quality Phrases from Massive Text Corpora

    PubMed Central

    Liu, Jialu; Shang, Jingbo; Wang, Chi; Ren, Xiang; Han, Jiawei

    2015-01-01

    Text data are ubiquitous and play an essential role in big data applications. However, text data are mostly unstructured. Transforming unstructured text into structured units (e.g., semantically meaningful phrases) will substantially reduce semantic ambiguity and enhance the power and efficiency of manipulating such data using database technology. Thus, mining quality phrases is a critical research problem in the field of databases. In this paper, we propose a new framework that extracts quality phrases from text corpora, integrated with phrasal segmentation. The framework requires only limited training, but the quality of the phrases so generated is close to human judgment. Moreover, the method is scalable: both computation time and required space grow linearly as corpus size increases. Our experiments on large text corpora demonstrate the quality and efficiency of the new method. PMID:26705375

  12. Pupil-segmentation-based adaptive optics for microscopy

    NASA Astrophysics Data System (ADS)

    Ji, Na; Milkie, Daniel E.; Betzig, Eric

    2011-03-01

    Inhomogeneous optical properties of biological samples make it difficult to obtain diffraction-limited resolution in depth. Correcting the sample-induced optical aberrations requires adaptive optics (AO). However, the direct wavefront-sensing approach commonly used in astronomy is not suitable for most biological samples, due to their strong scattering of light. We developed an image-based AO approach that is insensitive to sample scattering. By comparing images of the sample taken with different segments of the pupil illuminated, local tilt in the wavefront is measured from image shift. The aberrated wavefront is then obtained either by measuring the local phase directly using interference or with phase reconstruction algorithms similar to those used in astronomical AO. We implemented this pupil-segmentation-based approach in a two-photon fluorescence microscope and demonstrated that diffraction-limited resolution can be recovered from nonbiological and biological samples.

  13. Distance-based over-segmentation for single-frame RGB-D images

    NASA Astrophysics Data System (ADS)

    Fang, Zhuoqun; Wu, Chengdong; Chen, Dongyue; Jia, Tong; Yu, Xiaosheng; Zhang, Shihong; Qi, Erzhao

    2017-11-01

    Over-segmentation into super-pixels is a widely used preprocessing step in segmentation algorithms. An over-segmentation algorithm groups an image into regions of perceptually similar pixels, but performs poorly on color images alone in indoor environments. Fortunately, RGB-D images can improve performance on indoor scenes. In order to segment RGB-D images into super-pixels effectively, we propose a novel algorithm, DBOS (Distance-Based Over-Segmentation), which achieves full coverage of the image by super-pixels. DBOS fills the holes in depth images to fully utilize the depth information, and applies a SLIC-like framework for fast running. Additionally, depth features such as plane projection distance are extracted to compute the distance measure at the core of SLIC-like frameworks. Experiments on RGB-D images from the NYU Depth V2 dataset demonstrate that DBOS outperforms state-of-the-art methods in quality while maintaining comparable speed.
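
    The core of a SLIC-like framework is the pixel-to-cluster-center distance; a hedged sketch of how a depth term might extend the standard color-plus-spatial distance (the field names, weighting constants, and the exact depth feature are illustrative placeholders, not DBOS's actual formulation):

```python
import numpy as np

def slic_like_distance(pix, center, m_color=10.0, m_depth=10.0, grid_step=20.0):
    """SLIC-style pixel-to-center distance extended with a depth term.
    `pix` / `center` are dicts with 'lab' (3,), 'xy' (2,) and scalar
    'depth' entries; the weighting constants trade color fidelity
    against spatial compactness and depth coherence."""
    d_color = np.linalg.norm(pix["lab"] - center["lab"])
    d_space = np.linalg.norm(pix["xy"] - center["xy"])
    d_depth = abs(pix["depth"] - center["depth"])
    return np.sqrt((d_color / m_color) ** 2
                   + (d_space / grid_step) ** 2
                   + (d_depth / m_depth) ** 2)
```

    Each pixel is assigned to the nearest center under this distance, and centers are then re-estimated, exactly as in color-only SLIC.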

  14. Prostate segmentation by sparse representation based classification

    PubMed Central

    Gao, Yaozong; Liao, Shu; Shen, Dinggang

    2012-01-01

    Purpose: The segmentation of prostate in CT images is of essential importance to external beam radiotherapy, which is one of the major treatments for prostate cancer nowadays. During the radiotherapy, the prostate is radiated by high-energy x rays from different directions. In order to maximize the dose to the cancer and minimize the dose to the surrounding healthy tissues (e.g., bladder and rectum), the prostate in the new treatment image needs to be accurately localized. Therefore, the effectiveness and efficiency of external beam radiotherapy highly depend on the accurate localization of the prostate. However, due to the low contrast of the prostate with its surrounding tissues (e.g., bladder), the unpredicted prostate motion, and the large appearance variations across different treatment days, it is challenging to segment the prostate in CT images. In this paper, the authors present a novel classification based segmentation method to address these problems. Methods: To segment the prostate, the proposed method first uses sparse representation based classification (SRC) to enhance the prostate in CT images by pixel-wise classification, in order to overcome the limitation of poor contrast of the prostate images. Then, based on the classification results, previous segmented prostates of the same patient are used as patient-specific atlases to align onto the current treatment image and the majority voting strategy is finally adopted to segment the prostate. In order to address the limitations of the traditional SRC in pixel-wise classification, especially for the purpose of segmentation, the authors extend SRC from the following four aspects: (1) A discriminant subdictionary learning method is proposed to learn a discriminant and compact representation of training samples for each class so that the discriminant power of SRC can be increased and also SRC can be applied to the large-scale pixel-wise classification. 
(2) The L1 regularized sparse coding is replaced by

  15. Prostate segmentation by sparse representation based classification.

    PubMed

    Gao, Yaozong; Liao, Shu; Shen, Dinggang

    2012-10-01

    The segmentation of prostate in CT images is of essential importance to external beam radiotherapy, which is one of the major treatments for prostate cancer nowadays. During the radiotherapy, the prostate is radiated by high-energy x rays from different directions. In order to maximize the dose to the cancer and minimize the dose to the surrounding healthy tissues (e.g., bladder and rectum), the prostate in the new treatment image needs to be accurately localized. Therefore, the effectiveness and efficiency of external beam radiotherapy highly depend on the accurate localization of the prostate. However, due to the low contrast of the prostate with its surrounding tissues (e.g., bladder), the unpredicted prostate motion, and the large appearance variations across different treatment days, it is challenging to segment the prostate in CT images. In this paper, the authors present a novel classification based segmentation method to address these problems. To segment the prostate, the proposed method first uses sparse representation based classification (SRC) to enhance the prostate in CT images by pixel-wise classification, in order to overcome the limitation of poor contrast of the prostate images. Then, based on the classification results, previous segmented prostates of the same patient are used as patient-specific atlases to align onto the current treatment image and the majority voting strategy is finally adopted to segment the prostate. In order to address the limitations of the traditional SRC in pixel-wise classification, especially for the purpose of segmentation, the authors extend SRC from the following four aspects: (1) A discriminant subdictionary learning method is proposed to learn a discriminant and compact representation of training samples for each class so that the discriminant power of SRC can be increased and also SRC can be applied to the large-scale pixel-wise classification. 
(2) The L1 regularized sparse coding is replaced by the elastic net in

  16. Fast Appearance Modeling for Automatic Primary Video Object Segmentation.

    PubMed

    Yang, Jiong; Price, Brian; Shen, Xiaohui; Lin, Zhe; Yuan, Junsong

    2016-02-01

    Automatic segmentation of the primary object in a video clip is a challenging problem, as there is no prior knowledge of the primary object. Most existing techniques thus adopt an iterative approach to foreground and background appearance modeling, i.e., fix the appearance model while optimizing the segmentation, and fix the segmentation while optimizing the appearance model. However, these approaches may rely on good initialization, can easily be trapped in local optima, and are usually time-consuming for analyzing videos. To address these limitations, we propose a novel and efficient appearance modeling technique for automatic primary video object segmentation in the Markov random field (MRF) framework. It embeds the appearance constraint as auxiliary nodes and edges in the MRF structure, and can optimize both the segmentation and the appearance model parameters simultaneously in one graph cut. Extensive experimental evaluations validate the superiority of the proposed approach over state-of-the-art methods in both efficiency and effectiveness.

  17. Development of Image Segmentation Methods for Intracranial Aneurysms

    PubMed Central

    Qian, Yi; Morgan, Michael

    2013-01-01

    Though providing vital means for the visualization, diagnosis, and quantification of decision-making processes in the treatment of vascular pathologies, vascular segmentation remains a process marred by numerous challenges. In this study, we validate segmentations of eight aneurysms using two existing methods: the Region Growing Threshold and the Chan-Vese model. These methods were evaluated by comparing their results with a manually performed segmentation. Based upon this validation study, we propose a new Threshold-Based Level Set (TLS) method to overcome the existing problems. With divergent methods of segmentation, we found that the volumes of the aneurysm models differed by up to 24%. The local anatomical shapes of the arteries around the aneurysms were likewise found to significantly influence the results of these simulations. In contrast, the volume differences calculated via the TLS method remained relatively low, at only around 5%, revealing inherent limitations in the application of cerebrovascular segmentation. The proposed TLS method holds potential for use in automatic aneurysm segmentation without the setting of a seed point or intensity threshold. This technique will further enable the segmentation of anatomically complex cerebrovascular shapes, thereby allowing for more accurate and efficient simulations from medical imagery. PMID:23606905

  18. Flexible methods for segmentation evaluation: Results from CT-based luggage screening

    PubMed Central

    Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry

    2017-01-01

    BACKGROUND Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms’ behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. OBJECTIVE To develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors. The methods must measure feature recovery and allow us to prioritize segments. METHODS We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. RESULTS Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. CONCLUSIONS Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms. PMID:24699346

  19. Problem of data quality and the limitations of the infrastructure approach

    NASA Astrophysics Data System (ADS)

    Behlen, Fred M.; Sayre, Richard E.; Rackus, Edward; Ye, Dingzhong

    1998-07-01

    The 'Infrastructure Approach' is a PACS implementation methodology wherein the archive, network and information systems interfaces are acquired first, and workstations are installed later. The approach allows building a history of archived image data, so that most prior examinations are available in digital form when workstations are deployed. A limitation of the Infrastructure Approach is that the deferred use of digital image data defeats many data quality management functions that are provided automatically by human mechanisms when data is immediately used for the completion of clinical tasks. If the digital data is used solely for archiving while reports are interpreted from film, the radiologist serves only as a check against lost films, and another person must be designated as responsible for the quality of the digital data. Data from the Radiology Information System and the PACS were analyzed to assess the nature and frequency of system and data quality errors. The error level was found to be acceptable if supported by auditing and error resolution procedures requiring additional staff time, and in any case was better than the loss rate of a hardcopy film archive. It is concluded that the problem of data quality compromises, but does not negate, the value of the Infrastructure Approach. The Infrastructure Approach is best employed only to a limited extent: any phased PACS implementation should include a substantial complement of workstations dedicated to softcopy interpretation for at least some applications, with full deployment following not long thereafter.

  20. Real-time segmentation of burst suppression patterns in critical care EEG monitoring

    PubMed Central

    Westover, M. Brandon; Shafi, Mouhsin M.; Ching, ShiNung; Chemali, Jessica J.; Purdon, Patrick L.; Cash, Sydney S.; Brown, Emery N.

    2014-01-01

    Objective Develop a real-time algorithm to automatically discriminate suppressions from non-suppressions (bursts) in electroencephalograms of critically ill adult patients. Methods A real-time method for segmenting adult ICU EEG data into bursts and suppressions is presented based on thresholding local voltage variance. Results are validated against manual segmentations by two experienced human electroencephalographers. We compare inter-rater agreement between manual EEG segmentations by experts with inter-rater agreement between human vs automatic segmentations, and investigate the robustness of segmentation quality to variations in algorithm parameter settings. We further compare the results of using these segmentations as input for calculating the burst suppression probability (BSP), a continuous measure of depth-of-suppression. Results Automated segmentation was comparable to manual segmentation, i.e. algorithm-vs-human agreement was comparable to human-vs-human agreement, as judged by comparing raw EEG segmentations or the derived BSP signals. Results were robust to modest variations in algorithm parameter settings. Conclusions Our automated method satisfactorily segments burst suppression data across a wide range of adult ICU EEG patterns. Performance is comparable to or exceeds that of manual segmentation by human electroencephalographers. Significance Automated segmentation of burst suppression EEG patterns is an essential component of quantitative brain activity monitoring in critically ill and anesthetized adults. The segmentations produced by our algorithm provide a basis for accurate tracking of suppression depth. PMID:23891828
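
    Thresholding local voltage variance, as the method describes, can be sketched with a centered sliding window; the window length and variance threshold below are illustrative, not the paper's parameters:

```python
import numpy as np

def segment_suppressions(eeg, fs, win_s=0.5, var_thresh=25.0):
    """Label each EEG sample as suppression (True) or burst (False) by
    thresholding the local variance in a centered sliding window.
    `eeg` is a 1-D voltage trace, `fs` the sampling rate in Hz."""
    n = int(win_s * fs)
    pad = n // 2
    x = np.concatenate([eeg[:pad][::-1], eeg, eeg[-pad:][::-1]])  # reflect-pad ends
    c1 = np.cumsum(np.insert(x, 0, 0.0))          # running sum
    c2 = np.cumsum(np.insert(x ** 2, 0, 0.0))     # running sum of squares
    mean = (c1[n:] - c1[:-n]) / n
    var = (c2[n:] - c2[:-n]) / n - mean ** 2      # per-window variance
    return var[: len(eeg)] < var_thresh
```

    Low-variance stretches are labeled suppressions and high-variance stretches bursts; the resulting binary signal is the input for a BSP-style depth-of-suppression estimate.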

  1. Real-time segmentation of burst suppression patterns in critical care EEG monitoring.

    PubMed

    Brandon Westover, M; Shafi, Mouhsin M; Ching, Shinung; Chemali, Jessica J; Purdon, Patrick L; Cash, Sydney S; Brown, Emery N

    2013-09-30

    Develop a real-time algorithm to automatically discriminate suppressions from non-suppressions (bursts) in electroencephalograms of critically ill adult patients. A real-time method for segmenting adult ICU EEG data into bursts and suppressions is presented based on thresholding local voltage variance. Results are validated against manual segmentations by two experienced human electroencephalographers. We compare inter-rater agreement between manual EEG segmentations by experts with inter-rater agreement between human vs automatic segmentations, and investigate the robustness of segmentation quality to variations in algorithm parameter settings. We further compare the results of using these segmentations as input for calculating the burst suppression probability (BSP), a continuous measure of depth-of-suppression. Automated segmentation was comparable to manual segmentation, i.e. algorithm-vs-human agreement was comparable to human-vs-human agreement, as judged by comparing raw EEG segmentations or the derived BSP signals. Results were robust to modest variations in algorithm parameter settings. Our automated method satisfactorily segments burst suppression data across a wide range of adult ICU EEG patterns. Performance is comparable to or exceeds that of manual segmentation by human electroencephalographers. Automated segmentation of burst suppression EEG patterns is an essential component of quantitative brain activity monitoring in critically ill and anesthetized adults. The segmentations produced by our algorithm provide a basis for accurate tracking of suppression depth. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. SEGMENTAL NEUROFIBROMATOSIS: A REPORT OF 3 CASES

    PubMed Central

    Gabhane, Sushma Kashinath; Kotwal, Mrunmayi Nishikant; Bobhate, Sudhakar K

    2010-01-01

    Neurofibromatosis is a genetic disorder of neural crest-derived cells that primarily affects the growth of neural tissues. It is broadly divided into three categories: (a) von Recklinghausen's neurofibromatosis or NF-1, (b) bilateral acoustic neuroma (NF-2), and (c) all other neurofibromatoses, including alternate or atypical forms of the disease. Patients with the generalized form of NF-1 are characterized by multiple café-au-lait spots and neurofibromas and are diagnosed easily. But when an individual has a small number of lesions in a limited region of the body, they may be neglected by the patient or not recognized by clinicians as a segmental form of neurofibromatosis. We describe three cases of segmental neurofibromatosis (SNF). These cases have been classified as segmental NF according to Riccardi's definition of SNF and classification of neurofibromatosis. The segmental form of NF may evolve into a complete form over time. Also, this disorder may be transmitted to the offspring of these individuals. Hence genetic counseling of these individuals must include these facts. PMID:20418991

  3. Revascularization of diaphyseal bone segments by vascular bundle implantation.

    PubMed

    Nagi, O N

    2005-11-01

    Vascularized bone transfer is an effective, established treatment for avascular necrosis and atrophic or infected nonunions. However, limited donor sites and technical difficulty limit its application. Vascular bundle transplantation may provide an alternative. However, even if vascular ingrowth is presumed to occur in such situations, its extent in aiding revascularization for ultimate graft incorporation is not well understood. A rabbit tibia model was used to study and compare vascularized, segmental, diaphyseal, nonvascularized conventional, and vascular bundle-implanted grafts with a combination of angiographic, radiographic, histopathologic, and bone scanning techniques. Complete graft incorporation in conventional grafts was observed at 6 months, whereas it was 8 to 12 weeks with either of the vascularized grafts. The pattern of radionuclide uptake and the duration of graft incorporation between vascular segmental bone grafts (with intact endosteal blood supply) and vascular bundle-implanted segmental grafts were similar. A vascular bundle implanted in the recipient bone was found to anastomose extensively with the intraosseous circulation at 6 weeks. Effective revascularization of bone could be seen when a simple vascular bundle was introduced into a segment of bone deprived of its normal blood supply. This simple technique offers promise for improvement of bone graft survival in clinical circumstances.

  4. Limited Use of Price and Quality Advertising Among American Hospitals

    PubMed Central

    Wilks, Chrisanne E A; Richter, Jason P

    2013-01-01

    Background Consumer-directed policies, including health savings accounts, have been proposed and implemented to involve individuals more directly with the cost of their health care. The hope is this will ultimately encourage providers to compete for patients based on price or quality, resulting in lower health care costs and better health outcomes. Objective To evaluate American hospital websites to learn whether hospitals advertise directly to consumers using price or quality data. Methods Structured review of websites of 10% of American hospitals (N=474) to evaluate whether price or quality information is available to consumers and identify what hospitals advertise about to attract consumers. Results On their websites, 1.3% (6/474) of hospitals advertised about price and 19.0% (90/474) had some price information available; 5.7% (27/474) of hospitals advertised about quality outcomes information and 40.9% (194/474) had some quality outcome data available. Price and quality information that was available was limited and of minimal use to compare hospitals. Hospitals were more likely to advertise about service lines (56.5%, 268/474), access (49.6%, 235/474), awards (34.0%, 161/474), and amenities (30.8%, 146/474). Conclusions Insufficient information currently exists for consumers to choose hospitals on the basis of price or quality, making current consumer-directed policies unlikely to realize improved quality or lower costs. Consumers may be more interested in information not related to cost or clinical factors when choosing a hospital, so consumer-directed strategies may be better served before choosing a provider, such as when choosing a health plan. PMID:23988296

  5. Limited use of price and quality advertising among American hospitals.

    PubMed

    Muhlestein, David B; Wilks, Chrisanne E A; Richter, Jason P

    2013-08-29

    Consumer-directed policies, including health savings accounts, have been proposed and implemented to involve individuals more directly with the cost of their health care. The hope is this will ultimately encourage providers to compete for patients based on price or quality, resulting in lower health care costs and better health outcomes. To evaluate American hospital websites to learn whether hospitals advertise directly to consumers using price or quality data. Structured review of websites of 10% of American hospitals (N=474) to evaluate whether price or quality information is available to consumers and identify what hospitals advertise about to attract consumers. On their websites, 1.3% (6/474) of hospitals advertised about price and 19.0% (90/474) had some price information available; 5.7% (27/474) of hospitals advertised about quality outcomes information and 40.9% (194/474) had some quality outcome data available. Price and quality information that was available was limited and of minimal use to compare hospitals. Hospitals were more likely to advertise about service lines (56.5%, 268/474), access (49.6%, 235/474), awards (34.0%, 161/474), and amenities (30.8%, 146/474). Insufficient information currently exists for consumers to choose hospitals on the basis of price or quality, making current consumer-directed policies unlikely to realize improved quality or lower costs. Consumers may be more interested in information not related to cost or clinical factors when choosing a hospital, so consumer-directed strategies may be better served before choosing a provider, such as when choosing a health plan.

  6. Segmentation and clustering as complementary sources of information

    NASA Astrophysics Data System (ADS)

    Dale, Michael B.; Allison, Lloyd; Dale, Patricia E. R.

    2007-03-01

    This paper examines the effects of using a segmentation method to identify change-points or edges in vegetation. It identifies coherence (spatial or temporal) in place of unconstrained clustering. The segmentation method involves change-point detection along a sequence of observations so that each cluster formed is composed of adjacent samples; this is a form of constrained clustering. The protocol identifies one or more models, one for each section identified, and the quality of each is assessed using a minimum message length criterion, which provides a rational basis for selecting an appropriate model. Although the segmentation is less efficient than clustering, it does provide other information because it incorporates textural similarity as well as homogeneity. In addition it can be useful in determining various scales of variation that may apply to the data, providing a general method of small-scale pattern analysis.
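The constrained-clustering protocol described above can be illustrated with a minimal dynamic-programming segmenter: contiguous clusters are formed by minimizing within-segment squared error plus a fixed per-segment penalty, a crude stand-in for the minimum message length criterion the paper uses (the penalty value and data below are purely illustrative).

```python
import numpy as np

def segment_sequence(x, penalty):
    """Optimal contiguous segmentation of a 1-D series by dynamic
    programming: minimizes total within-segment squared error plus a
    fixed per-segment penalty (a simplified proxy for a
    minimum-message-length model cost)."""
    n = len(x)
    # prefix sums for O(1) segment-cost queries
    s1 = np.concatenate([[0.0], np.cumsum(x)])
    s2 = np.concatenate([[0.0], np.cumsum(np.asarray(x, float) ** 2)])

    def cost(i, j):  # sum of squared errors of x[i:j] around its mean
        m = j - i
        return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / m

    best = np.full(n + 1, np.inf)
    best[0] = 0.0
    back = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + cost(i, j) + penalty
            if c < best[j]:
                best[j], back[j] = c, i
    cuts, j = [], n          # recover change-points by backtracking
    while j > 0:
        cuts.append(back[j])
        j = back[j]
    return sorted(cuts)[1:]  # drop the leading 0

x = [1, 1, 1, 9, 9, 9, 9, 5, 5]
print(segment_sequence(x, penalty=1.0))  # change-points at indices 3 and 7
```

Each cluster is a run of adjacent samples, which is exactly the coherence constraint the abstract contrasts with unconstrained clustering.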

  7. A dynamic water-quality modeling framework for the Neuse River estuary, North Carolina

    USGS Publications Warehouse

    Bales, Jerad D.; Robbins, Jeanne C.

    1999-01-01

    As a result of fish kills in the Neuse River estuary in 1995, nutrient reduction strategies were developed for point and nonpoint sources in the basin. However, because of the interannual variability in the natural system and the resulting complex hydrologic-nutrient interactions, it is difficult to detect through a short-term observational program the effects of management activities on Neuse River estuary water quality and aquatic health. A properly constructed water-quality model can be used to evaluate some of the potential effects of management actions on estuarine water quality. Such a model can be used to predict estuarine response to present and proposed nutrient strategies under the same set of meteorological and hydrologic conditions, thus removing the vagaries of weather and streamflow from the analysis. A two-dimensional, laterally averaged hydrodynamic and water-quality modeling framework was developed for the Neuse River estuary by using previously collected data. Development of the modeling framework consisted of (1) computational grid development, (2) assembly of data for model boundary conditions and model testing, (3) selection of initial values of model parameters, and (4) limited model testing. The model domain extends from Streets Ferry to Oriental, N.C., includes seven lateral embayments that have continual exchange with the mainstem of the estuary, three point-source discharges, and three tributary streams. Thirty-five computational segments represent the mainstem of the estuary, and the entire framework contains a total of 60 computational segments. Each computational cell is 0.5 meter thick; segment lengths range from 500 meters to 7,125 meters. Data that were used to develop the modeling framework were collected during March through October 1991 and represent the most comprehensive data set available prior to 1997. Most of the data were collected by the North Carolina Division of Water Quality, the University of North Carolina

  8. Segmentation in Tardigrada and diversification of segmental patterns in Panarthropoda.

    PubMed

    Smith, Frank W; Goldstein, Bob

    2017-05-01

    The origin and diversification of segmented metazoan body plans has fascinated biologists for over a century. The superphylum Panarthropoda includes three phyla of segmented animals: Euarthropoda, Onychophora, and Tardigrada. This superphylum includes representatives with relatively simple and representatives with relatively complex segmented body plans. At one extreme of this continuum, euarthropods exhibit an incredible diversity of serially homologous segments. Furthermore, distinct tagmosis patterns are exhibited by different classes of euarthropods. At the other extreme, all tardigrades share a simple segmented body plan that consists of a head and four leg-bearing segments. The modular body plans of panarthropods make them a tractable model for understanding diversification of animal body plans more generally. Here we review results of recent morphological and developmental studies of tardigrade segmentation. These results complement investigations of segmentation processes in other panarthropods and paleontological studies to illuminate the earliest steps in the evolution of panarthropod body plans. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Assessment of Multiresolution Segmentation for Extracting Greenhouses from WORLDVIEW-2 Imagery

    NASA Astrophysics Data System (ADS)

    Aguilar, M. A.; Aguilar, F. J.; García Lorca, A.; Guirado, E.; Betlej, M.; Cichon, P.; Nemmaoui, A.; Vallario, A.; Parente, C.

    2016-06-01

    The latest breed of very high resolution (VHR) commercial satellites opens new possibilities for cartographic and remote sensing applications. The object-based image analysis (OBIA) approach has proved to be the best option when working with VHR satellite imagery. OBIA considers spectral, geometric, textural and topological attributes associated with meaningful image objects. Thus, the first step of OBIA, referred to as segmentation, is to delineate objects of interest. Determination of an optimal segmentation is crucial for good performance of the second stage of OBIA, the classification process. The main goal of this work is to assess the multiresolution segmentation algorithm provided by eCognition software for delineating greenhouses from WorldView-2 multispectral orthoimages. Specifically, the focus is on finding the optimal parameters of the multiresolution segmentation approach (i.e., Scale, Shape and Compactness) for plastic greenhouses. The optimum Scale parameter estimation was based on the idea of local variance of object heterogeneity within a scene (ESP2 tool). Moreover, different segmentation results were attained by using different combinations of Shape and Compactness values. Assessment of segmentation quality based on the discrepancy between reference polygons and corresponding image segments was carried out to identify the optimal setting of multiresolution segmentation parameters. Three discrepancy indices were used: Potential Segmentation Error (PSE), Number-of-Segments Ratio (NSR) and Euclidean Distance 2 (ED2).
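The three discrepancy indices above are conventionally combined as the Euclidean distance of the (PSE, NSR) pair. The sketch below assumes the over-segmented area and the polygon/segment counts have already been extracted from the reference and segmentation layers; the region-correspondence rule is simplified and the numbers are illustrative.

```python
import math

def segmentation_discrepancy(oversegmented_area, total_reference_area,
                             n_reference, n_corresponding):
    """Discrepancy indices for segmentation evaluation (a sketch;
    exact correspondence rules vary by implementation).
    PSE: area of corresponding segments lying outside their reference
    polygons, normalised by the total reference area.
    NSR: mismatch between reference-polygon and segment counts.
    ED2: Euclidean combination of PSE and NSR."""
    pse = oversegmented_area / total_reference_area
    nsr = abs(n_reference - n_corresponding) / n_reference
    return pse, nsr, math.hypot(pse, nsr)

# hypothetical values: 120 area units spill outside 1000 units of
# reference polygons; 40 reference polygons vs 50 matched segments
pse, nsr, ed2 = segmentation_discrepancy(120.0, 1000.0, 40, 50)
print(round(pse, 3), round(nsr, 3), round(ed2, 3))
```

A perfect one-to-one segmentation drives both PSE and NSR (and hence ED2) to zero, which is why ED2 is used to rank candidate Scale/Shape/Compactness settings.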

  10. Tissue segmentation of computed tomography images using a Random Forest algorithm: a feasibility study

    NASA Astrophysics Data System (ADS)

    Polan, Daniel F.; Brady, Samuel L.; Kaufman, Robert A.

    2016-09-01

    There is a need for robust, fully automated whole body organ segmentation for diagnostic CT. This study investigates and optimizes a Random Forest algorithm for automated organ segmentation; explores the limitations of a Random Forest algorithm applied to the CT environment; and demonstrates segmentation accuracy in a feasibility study of pediatric and adult patients. To the best of our knowledge, this is the first study to investigate a trainable Weka segmentation (TWS) implementation using Random Forest machine-learning as a means to develop a fully automated tissue segmentation tool developed specifically for pediatric and adult examinations in a diagnostic CT environment. Current innovation in computed tomography (CT) is focused on radiomics, patient-specific radiation dose calculation, and image quality improvement using iterative reconstruction, all of which require specific knowledge of tissue and organ systems within a CT image. The purpose of this study was to develop a fully automated Random Forest classifier algorithm for segmentation of neck-chest-abdomen-pelvis CT examinations based on pediatric and adult CT protocols. Seven materials were classified: background, lung/internal air or gas, fat, muscle, solid organ parenchyma, blood/contrast enhanced fluid, and bone tissue using Matlab and the TWS plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, evaluated over a voxel radius of 2^n (n from 0 to 4), along with noise reduction and edge preserving filters: Gaussian, bilateral, Kuwahara, and anisotropic diffusion. The Random Forest algorithm used 200 trees with 2 features randomly selected per node. The optimized auto-segmentation algorithm resulted in 16 image features, including features derived from the maximum, mean, and variance filters and from the Gaussian and Kuwahara filters. Dice similarity coefficient (DSC) calculations between manually segmented and Random Forest algorithm segmented images from 21
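The pipeline in this abstract (local-statistics features per voxel, then a Random Forest with a small number of candidate features per split) can be sketched with NumPy and scikit-learn in place of the TWS/FIJI stack. The image, labels, filter radii, and forest size here are illustrative, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def voxel_features(img, radii=(1, 2, 4)):
    """Per-pixel feature vectors: raw intensity plus local mean and
    variance over square windows of a few radii (a simplified analogue
    of the TWS mean/variance filter bank described in the abstract)."""
    feats = [img.ravel()]
    for r in radii:
        pad = np.pad(img, r, mode='reflect')
        win = np.lib.stride_tricks.sliding_window_view(pad, (2*r+1, 2*r+1))
        flat = win.reshape(img.shape[0], img.shape[1], -1)
        feats.append(flat.mean(axis=2).ravel())
        feats.append(flat.var(axis=2).ravel())
    return np.column_stack(feats)

rng = np.random.default_rng(0)
img = np.zeros((32, 32)) + rng.normal(0, 0.05, (32, 32))
img[8:24, 8:24] += 1.0                      # bright "organ" on a dark background
labels = np.zeros((32, 32), dtype=int)
labels[8:24, 8:24] = 1

X = voxel_features(img)
clf = RandomForestClassifier(n_estimators=50, max_features=2, random_state=0)
clf.fit(X, labels.ravel())
pred = clf.predict(X).reshape(img.shape)
print((pred == labels).mean())  # training accuracy, near 1.0 here
```

In a real deployment the classifier would be trained on manually labeled slices and applied to unseen examinations; the per-node feature subsampling (`max_features=2`) mirrors the "2 features randomly selected per node" setting.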

  11. Fast segmentation and high-quality three-dimensional volume mesh creation from medical images for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jermyn, Michael; Ghadyani, Hamid; Mastanduno, Michael A.; Turner, Wes; Davis, Scott C.; Dehghani, Hamid; Pogue, Brian W.

    2013-08-01

    Multimodal approaches that combine near-infrared (NIR) and conventional imaging modalities have been shown to improve optical parameter estimation dramatically and thus represent a prevailing trend in NIR imaging. These approaches typically involve applying anatomical templates from magnetic resonance imaging/computed tomography/ultrasound images to guide the recovery of optical parameters. However, merging these data sets using current technology requires multiple software packages, substantial expertise, significant time-commitment, and often results in unacceptably poor mesh quality for optical image reconstruction, a reality that represents a significant roadblock for translational research of multimodal NIR imaging. This work addresses these challenges directly by introducing automated digital imaging and communications in medicine image stack segmentation and a new one-click three-dimensional mesh generator optimized for multimodal NIR imaging, and combining these capabilities into a single software package (available for free download) with a streamlined workflow. Image processing time and mesh quality benchmarks were examined for four common multimodal NIR use-cases (breast, brain, pancreas, and small animal) and were compared to a commercial image processing package. Applying these tools resulted in a fivefold decrease in image processing time and 62% improvement in minimum mesh quality, in the absence of extra mesh postprocessing. These capabilities represent a significant step toward enabling translational multimodal NIR research for both expert and nonexpert users in an open-source platform.

  12. Osteoimmune Mechanisms of Segmental Bone Fracture Healing and Therapy

    DTIC Science & Technology

    2016-09-01

    to civilians. Despite efforts involving allografts, surgery and fixators, intramedullary nailing and invasive plate fixing to heal segmental...efforts are focused on: tissue engineering approaches aimed at developing osteoconductive scaffolds, better quality synthetic bone grafts, and use of

  13. Validation of automatic segmentation of ribs for NTCP modeling.

    PubMed

    Stam, Barbara; Peulen, Heike; Rossi, Maddalena M G; Belderbos, José S A; Sonke, Jan-Jakob

    2016-03-01

    Determination of a dose-effect relation for rib fractures in a large patient group has been limited by the time consuming manual delineation of ribs. Automatic segmentation could facilitate such an analysis. We determine the accuracy of automatic rib segmentation in the context of normal tissue complication probability modeling (NTCP). Forty-one patients with stage I/II non-small cell lung cancer treated with SBRT to 54 Gy in 3 fractions were selected. Using the 4DCT derived mid-ventilation planning CT, all ribs were manually contoured and automatically segmented. Accuracy of segmentation was assessed using volumetric, shape and dosimetric measures. Manual and automatic dosimetric parameters Dx and EUD were tested for equivalence using the Two One-Sided T-test (TOST), and assessed for agreement using Bland-Altman analysis. NTCP models based on manual and automatic segmentation were compared. Automatic segmentation was comparable with the manual delineation in radial direction, but larger near the costal cartilage and vertebrae. Manual and automatic Dx and EUD were significantly equivalent. The Bland-Altman analysis showed good agreement. The two NTCP models were very similar. Automatic rib segmentation was significantly equivalent to manual delineation and can be used for NTCP modeling in a large patient group. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
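The equivalence testing used above can be illustrated with a paired Two One-Sided T-test (TOST): manual and automatic dosimetric parameters are declared equivalent when the mean difference is shown to lie inside a preset margin. The EUD-like values and the margin below are invented for illustration; they are not the study's data.

```python
import numpy as np
from scipy import stats

def tost_paired(a, b, delta, alpha=0.05):
    """Two One-Sided T-test for equivalence of paired measurements
    (e.g. manual vs. automatic EUD per patient). `delta` is the
    equivalence margin in the same units as the data."""
    d = np.asarray(a, float) - np.asarray(b, float)
    n = len(d)
    se = d.std(ddof=1) / np.sqrt(n)
    t_lower = (d.mean() + delta) / se   # H0: mean difference <= -delta
    t_upper = (d.mean() - delta) / se   # H0: mean difference >= +delta
    p_lower = 1 - stats.t.cdf(t_lower, n - 1)
    p_upper = stats.t.cdf(t_upper, n - 1)
    p = max(p_lower, p_upper)           # both one-sided tests must reject
    return p, p < alpha

manual = [20.1, 18.4, 22.3, 19.8, 21.0, 20.5, 19.2, 20.9]  # hypothetical Gy
auto   = [20.0, 18.6, 22.1, 19.9, 21.2, 20.4, 19.1, 21.0]
p, equivalent = tost_paired(manual, auto, delta=0.5)
print(equivalent)
```

Note the logic is inverted relative to an ordinary t-test: rejecting both one-sided null hypotheses is what establishes equivalence, which is why a small p-value here supports "automatic = manual".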

  14. Automatic and hierarchical segmentation of the human skeleton in CT images.

    PubMed

    Fu, Yabo; Liu, Shi; Li, Harold; Yang, Deshan

    2017-04-07

    Accurate segmentation of each bone of the human skeleton is useful in many medical disciplines. The results of bone segmentation could facilitate bone disease diagnosis and post-treatment assessment, and support planning and image guidance for many treatment modalities including surgery and radiation therapy. As a medium level medical image processing task, accurate bone segmentation can facilitate automatic internal organ segmentation by providing stable structural reference for inter- or intra-patient registration and internal organ localization. Even though bones in CT images can be visually observed with minimal difficulty due to the high image contrast between the bony structures and surrounding soft tissues, automatic and precise segmentation of individual bones is still challenging due to the many limitations of the CT images. The common limitations include low signal-to-noise ratio, insufficient spatial resolution, and indistinguishable image intensity between spongy bones and soft tissues. In this study, a novel and automatic method is proposed to segment all the major individual bones of the human skeleton above the upper legs in CT images based on an articulated skeleton atlas. The reported method is capable of automatically segmenting 62 major bones, including 24 vertebrae and 24 ribs, by traversing a hierarchical anatomical tree and by using both rigid and deformable image registration. The degrees of freedom of femora and humeri are modeled to support patients in different body and limb postures. The segmentation results are evaluated using the Dice coefficient and point-to-surface error (PSE) against manual segmentation results as the ground-truth. The results suggest that the reported method can automatically segment and label the human skeleton into detailed individual bones with high accuracy. The overall average Dice coefficient is 0.90. The average PSEs are 0.41 mm for the mandible, 0.62 mm for cervical vertebrae, 0.92 mm for thoracic
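The Dice coefficient used as the headline metric above is straightforward to compute from binary masks; a minimal sketch with illustrative masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

manual = np.zeros((10, 10), bool); manual[2:8, 2:8] = True  # 36 voxels
auto   = np.zeros((10, 10), bool); auto[3:8, 2:8] = True    # 30 voxels
print(round(dice(manual, auto), 3))  # 2*30 / (36+30) ≈ 0.909
```

The study's companion metric, point-to-surface error, instead measures geometric distance between boundary surfaces, so the two together capture both volumetric overlap and boundary accuracy.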

  15. Automatic and hierarchical segmentation of the human skeleton in CT images

    NASA Astrophysics Data System (ADS)

    Fu, Yabo; Liu, Shi; Li, H. Harold; Yang, Deshan

    2017-04-01

    Accurate segmentation of each bone of the human skeleton is useful in many medical disciplines. The results of bone segmentation could facilitate bone disease diagnosis and post-treatment assessment, and support planning and image guidance for many treatment modalities including surgery and radiation therapy. As a medium level medical image processing task, accurate bone segmentation can facilitate automatic internal organ segmentation by providing stable structural reference for inter- or intra-patient registration and internal organ localization. Even though bones in CT images can be visually observed with minimal difficulty due to the high image contrast between the bony structures and surrounding soft tissues, automatic and precise segmentation of individual bones is still challenging due to the many limitations of the CT images. The common limitations include low signal-to-noise ratio, insufficient spatial resolution, and indistinguishable image intensity between spongy bones and soft tissues. In this study, a novel and automatic method is proposed to segment all the major individual bones of the human skeleton above the upper legs in CT images based on an articulated skeleton atlas. The reported method is capable of automatically segmenting 62 major bones, including 24 vertebrae and 24 ribs, by traversing a hierarchical anatomical tree and by using both rigid and deformable image registration. The degrees of freedom of femora and humeri are modeled to support patients in different body and limb postures. The segmentation results are evaluated using the Dice coefficient and point-to-surface error (PSE) against manual segmentation results as the ground-truth. The results suggest that the reported method can automatically segment and label the human skeleton into detailed individual bones with high accuracy. The overall average Dice coefficient is 0.90. The average PSEs are 0.41 mm for the mandible, 0.62 mm for cervical vertebrae, 0.92 mm for thoracic

  16. A contrast enhancement method for improving the segmentation of breast lesions on ultrasonography.

    PubMed

    Flores, Wilfrido Gómez; Pereira, Wagner Coelho de Albuquerque

    2017-01-01

    This paper presents an adaptive contrast enhancement method based on a sigmoidal mapping function (SACE) used for improving the computerized segmentation of breast lesions on ultrasound. First, from the original ultrasound image an intensity variation map is obtained, which is used to generate local sigmoidal mapping functions related to distinct contextual regions. Then, a bilinear interpolation scheme is used to transform every original pixel to a new gray level value. Also, four contrast enhancement techniques widely used in breast ultrasound enhancement are implemented: histogram equalization (HEQ), contrast limited adaptive histogram equalization (CLAHE), fuzzy enhancement (FEN), and sigmoid based enhancement (SEN). In addition, these contrast enhancement techniques are considered in a computerized lesion segmentation scheme based on watershed transformation. The performance comparison among techniques is assessed in terms of both the quality of contrast enhancement and the segmentation accuracy. The former is quantified by the measure, where the greater the value, the better the contrast enhancement, whereas the latter is calculated by the Jaccard index, which should tend towards unity to indicate adequate segmentation. The experiments consider a data set with 500 breast ultrasound images. The results show that SACE outperforms its counterparts, where the median values for the measure are: SACE: 139.4, SEN: 68.2, HEQ: 64.1, CLAHE: 62.8, and FEN: 7.9. Considering the segmentation performance results, the SACE method presents the largest accuracy, where the median values for the Jaccard index are: SACE: 0.81, FEN: 0.80, CLAHE: 0.79, HEQ: 0.77, and SEN: 0.63. The SACE method performs well due to the combination of three elements: (1) the intensity variation map reduces intensity variations that could distort the real response of the mapping function, (2) the sigmoidal mapping function enhances the gray level range where the transition between lesion and background
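The core idea behind sigmoid-based enhancement can be sketched as a gray-level mapping: intensities near a pivot are stretched and extremes compressed. This is an illustration of the generic S-curve mapping (closer to the SEN baseline), not the published SACE algorithm, which adds the intensity variation map and per-region local mappings; `gain` and the pivot choice are assumptions.

```python
import numpy as np

def sigmoid_enhance(img, center=None, gain=8.0):
    """Generic sigmoidal gray-level mapping: contrast is expanded
    around `center` (defaults to the image mean) and compressed at
    the extremes; output is re-normalised to [0, 1]."""
    img = np.asarray(img, float)
    lo, hi = img.min(), img.max()
    x = (img - lo) / (hi - lo + 1e-12)          # normalise to [0, 1]
    c = x.mean() if center is None else center  # pivot of the S-curve
    y = 1.0 / (1.0 + np.exp(-gain * (x - c)))
    return (y - y.min()) / (y.max() - y.min())  # re-normalise

img = np.array([[0.2, 0.3], [0.5, 0.8]])
out = sigmoid_enhance(img)
print(out)
```

Because the sigmoid is strictly increasing, intensity ordering is preserved while the lesion/background transition range gets a steeper slope, which is what helps the subsequent watershed segmentation.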

  17. Exploring the Constraint Profile of Winter Sports Resort Tourist Segments.

    PubMed

    Priporas, Constantinos-Vasilios; Vassiliadis, Chris A; Bellou, Victoria; Andronikidis, Andreas

    2015-09-01

    Many studies have confirmed the importance of market segmentation both theoretically and empirically. Surprisingly though, no study has so far addressed the issue from the perspective of leisure constraints. Since different consumers face different barriers, we look at participation in leisure activities as an outcome of the negotiation process that winter sports resort tourists go through, to balance between related motives and constraints. This empirical study reports the findings on the applicability of constraining factors in segmenting the tourists who visit winter sports resorts. Utilizing data from 1,391 tourists of winter sports resorts in Greece, five segments were formed based on their constraint, demographic, and behavioral profile. Our findings indicate that such segmentation sheds light on factors that could potentially limit the full utilization of the market. To maximize utilization, we suggest customizing marketing to the profile of each distinct winter sports resort tourist segment that emerged.

  18. Exploring the Constraint Profile of Winter Sports Resort Tourist Segments

    PubMed Central

    Priporas, Constantinos-Vasilios; Vassiliadis, Chris A.; Bellou, Victoria; Andronikidis, Andreas

    2014-01-01

    Many studies have confirmed the importance of market segmentation both theoretically and empirically. Surprisingly though, no study has so far addressed the issue from the perspective of leisure constraints. Since different consumers face different barriers, we look at participation in leisure activities as an outcome of the negotiation process that winter sports resort tourists go through, to balance between related motives and constraints. This empirical study reports the findings on the applicability of constraining factors in segmenting the tourists who visit winter sports resorts. Utilizing data from 1,391 tourists of winter sports resorts in Greece, five segments were formed based on their constraint, demographic, and behavioral profile. Our findings indicate that such segmentation sheds light on factors that could potentially limit the full utilization of the market. To maximize utilization, we suggest customizing marketing to the profile of each distinct winter sports resort tourist segment that emerged. PMID:29708114

  19. Development of the segment alignment maintenance system (SAMS) for the Hobby-Eberly Telescope

    NASA Astrophysics Data System (ADS)

    Booth, John A.; Adams, Mark T.; Ames, Gregory H.; Fowler, James R.; Montgomery, Edward E.; Rakoczy, John M.

    2000-07-01

    A sensing and control system for maintaining optical alignment of ninety-one 1-meter mirror segments forming the Hobby-Eberly Telescope (HET) primary mirror array is now under development. The Segment Alignment Maintenance System (SAMS) is designed to sense relative shear motion between each segment edge pair and calculate individual segment tip, tilt, and piston position errors. Error information is sent to the HET primary mirror control system, which corrects the physical position of each segment as often as once per minute. Development of SAMS is required to meet optical image quality specifications for the telescope. Segment misalignment over time is thought to be due to thermal inhomogeneity within the steel mirror support truss. Challenging problems of sensor resolution, dynamic range, mechanical mounting, calibration, stability, robust algorithm development, and system integration must be overcome to achieve a successful operational solution.
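Recovering per-segment position errors from relative edge measurements is a least-squares inversion. The toy below works in one dimension with piston only (real SAMS estimates tip, tilt, and piston across 91 segments); the edge list and readings are invented for illustration, and one segment is pinned as the reference since only relative motion is sensed.

```python
import numpy as np

def reconstruct_pistons(edges, readings, n_segments):
    """Least-squares recovery of segment piston errors from relative
    edge-sensor readings. Each sensor on edge (i, j) is modeled as
    reading piston_j - piston_i; segment 0 is the fixed reference."""
    rows, b = [], []
    for (i, j), r in zip(edges, readings):
        row = np.zeros(n_segments)
        row[i], row[j] = -1.0, 1.0
        rows.append(row)
        b.append(r)
    # pin segment 0 to zero so the system has a unique solution
    row = np.zeros(n_segments); row[0] = 1.0
    rows.append(row); b.append(0.0)
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(b), rcond=None)
    return x

edges = [(0, 1), (1, 2), (2, 3)]
readings = [0.5, -0.2, 0.1]        # e.g. microns of relative shear
print(np.round(reconstruct_pistons(edges, readings, 4), 3))
```

With redundant sensors (more edges than segments) the same least-squares fit averages out sensor noise, which is one reason edge-sensing arrays are over-determined in practice.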

  20. Computer Based Melanocytic and Nevus Image Enhancement and Segmentation.

    PubMed

    Jamil, Uzma; Akram, M Usman; Khalid, Shehzad; Abbas, Sarmad; Saleem, Kashif

    2016-01-01

    Digital dermoscopy aids dermatologists in monitoring potentially cancerous skin lesions. Melanoma is the fifth most common form of skin cancer; it is rare but the most dangerous. Melanoma is curable if it is detected at an early stage. Automated segmentation of a cancerous lesion from normal skin is the most critical yet tricky part of computerized lesion detection and classification. The effectiveness and accuracy of lesion classification are critically dependent on the quality of lesion segmentation. In this paper, we have proposed a novel approach that can automatically preprocess the image and then segment the lesion. The system filters unwanted artifacts including hairs, gel, bubbles, and specular reflection. A novel approach is presented using the concept of wavelets for detection and inpainting of the hairs present in the cancer images. The contrast of the lesion with the skin is enhanced using an adaptive sigmoidal function that takes care of the localized intensity distribution within a given lesion's images. We then present a segmentation approach to precisely segment the lesion from the background. The proposed approach is tested on the European database of dermoscopic images. Results are compared with the competitors to demonstrate the superiority of the suggested approach.

  1. Active Segmentation.

    PubMed

    Mishra, Ajay; Aloimonos, Yiannis

    2009-01-01

    The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene, which can either be an object or just a part of it. We define as a basic segmentation problem the task of segmenting the region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour (a connected set of boundary edge fragments in the edge map of the scene) around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring visual processing to the next level. Our approach is different from current approaches: while existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach.

  2. Aberration correction in wide-field fluorescence microscopy by segmented-pupil image interferometry.

    PubMed

    Scrimgeour, Jan; Curtis, Jennifer E

    2012-06-18

    We present a new technique for the correction of optical aberrations in wide-field fluorescence microscopy. Segmented-Pupil Image Interferometry (SPII) uses a liquid crystal spatial light modulator placed in the microscope's pupil plane to split the wavefront originating from a fluorescent object into an array of individual beams. Distortion of the wavefront arising from either system or sample aberrations results in displacement of the images formed from the individual pupil segments. Analysis of image registration allows for the local tilt in the wavefront at each segment to be corrected with respect to a central reference. A second correction step optimizes the image intensity by adjusting the relative phase of each pupil segment through image interferometry. This ensures that constructive interference between all segments is achieved at the image plane. Improvements in image quality are observed when Segmented-Pupil Image Interferometry is applied to correct aberrations arising from the microscope's optical path.

  3. Segmentation of pomegranate MR images using spatial fuzzy c-means (SFCM) algorithm

    NASA Astrophysics Data System (ADS)

    Moradi, Ghobad; Shamsi, Mousa; Sedaaghi, M. H.; Alsharif, M. R.

    2011-10-01

    Segmentation is one of the fundamental issues of image processing and machine vision. It plays a prominent role in a variety of image processing applications. In this paper, one of the most important applications of image processing, the segmentation of pomegranate MR images, is explored. Pomegranate is a fruit with pharmacological properties such as being anti-viral and anti-cancer. Having a high quality product in hand would be a critical factor in its marketing. The internal quality of the product is comprehensively important in the sorting process. The determination of qualitative features cannot be made manually. Therefore, the segmentation of the internal structures of the fruit needs to be performed as accurately as possible in the presence of noise. The fuzzy c-means (FCM) algorithm is noise-sensitive, and noisy pixels are misclassified. As a solution, this paper proposes the spatial FCM (SFCM) algorithm for pomegranate MR image segmentation. The algorithm is performed by incorporating spatial neighborhood information into FCM and modifying the fuzzy membership function for each class. The segmentation results on the original pomegranate MR images and on images corrupted by Gaussian, Salt & Pepper, and Speckle noise show that the SFCM algorithm performs significantly better than the FCM algorithm. Also, after diverse steps of qualitative and quantitative analysis, we have concluded that the SFCM algorithm with a 5×5 window size is better than the other windows.
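The spatial modification to FCM can be sketched as follows: standard membership/centroid updates, with each pixel's membership re-weighted by the summed memberships of its neighborhood so isolated noisy pixels are pulled toward the class of their surroundings. This follows the general spatial-FCM idea (in the spirit of Chuang et al.) rather than the paper's exact formulation; the 3×3 window, exponents, and test image are illustrative.

```python
import numpy as np

def sfcm(img, c=2, m=2.0, p=1, q=2, iters=20):
    """Spatial fuzzy c-means sketch: FCM updates plus a spatial
    function h (summed 3x3-neighbourhood membership) that re-weights
    each pixel's membership as u' ∝ u^p * h^q."""
    H, W = img.shape
    x = img.ravel().astype(float)
    centers = np.percentile(x, np.linspace(10, 90, c))  # spread initial centers
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = d ** (-2.0 / (m - 1))                # standard FCM membership
        u /= u.sum(axis=1, keepdims=True)
        um = u.reshape(H, W, c)
        pad = np.pad(um, ((1, 1), (1, 1), (0, 0)), mode='edge')
        h = sum(pad[i:i+H, j:j+W] for i in range(3) for j in range(3))
        u = ((um ** p) * (h ** q)).reshape(-1, c)  # spatial re-weighting
        u /= u.sum(axis=1, keepdims=True)
        centers = (u.T ** m @ x) / (u ** m).sum(axis=0)
    return u.argmax(axis=1).reshape(H, W), centers

img = np.full((8, 8), 0.1)
img[2:6, 2:6] = 0.9
img[1, 1] = 0.7     # isolated noisy pixel on the background
labels, centers = sfcm(img)
print(labels[1, 1] == labels[0, 0])  # noisy pixel absorbed by its neighbourhood
```

Plain FCM would assign the 0.7 pixel to the bright class (it is nearer that centroid); the spatial term overrules it because all of its neighbors are confidently background, which is exactly the noise-robustness the abstract reports.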

  4. Enhancing atlas based segmentation with multiclass linear classifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sdika, Michaël, E-mail: michael.sdika@creatis.insa-lyon.fr

    Purpose: To present a method to enrich atlases for atlas based segmentation. Such enriched atlases can then be used as a single atlas or within a multiatlas framework. Methods: In this paper, machine learning techniques have been used to enhance the atlas based segmentation approach. The enhanced atlas defined in this work is a pair composed of a gray level image alongside an image of multiclass classifiers with one classifier per voxel. Each classifier embeds local information from the whole training dataset that allows for the correction of some systematic errors in the segmentation and accounts for possible local registration errors. The authors also propose to use these images of classifiers within a multiatlas framework: results produced by a set of such local classifier atlases can be combined using a label fusion method. Results: Experiments have been made on the in vivo images of the IBSR dataset and a comparison has been made with several state-of-the-art methods such as FreeSurfer and the multiatlas nonlocal patch based method of Coupé or Rousseau. These experiments show that their method is competitive with state-of-the-art methods while having a low computational cost. Further enhancement has also been obtained with a multiatlas version of their method. It is also shown that, in this case, nonlocal fusion is unnecessary. The multiatlas fusion can therefore be done efficiently. Conclusions: The single atlas version has similar quality as state-of-the-art multiatlas methods but with the computational cost of a naive single atlas segmentation. The multiatlas version offers an improvement in quality and can be done efficiently without a nonlocal strategy.
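The label fusion step in a multiatlas framework can be as simple as per-voxel plurality voting over the propagated atlas labelings. The sketch below shows this baseline fusion rule only; the paper fuses the outputs of per-voxel classifier atlases, and its exact fusion method may differ.

```python
import numpy as np

def majority_vote_fusion(label_maps):
    """Simple label-fusion baseline for multiatlas segmentation:
    each registered atlas contributes one label per voxel and the
    plurality label wins (ties broken by the lowest label id)."""
    stack = np.stack(label_maps)                 # (n_atlases, ...spatial)
    n_labels = stack.max() + 1
    votes = np.stack([(stack == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

# three hypothetical 2x2 label maps produced by three atlases
a1 = np.array([[0, 1], [1, 2]])
a2 = np.array([[0, 1], [2, 2]])
a3 = np.array([[0, 0], [1, 2]])
print(majority_vote_fusion([a1, a2, a3]))
```

Weighted variants (e.g. weighting each atlas by local image similarity) follow the same structure, replacing the unit vote with a per-voxel weight.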

  5. Novel multiresolution mammographic density segmentation using pseudo 3D features and adaptive cluster merging

    NASA Astrophysics Data System (ADS)

    He, Wenda; Juette, Arne; Denton, Erica R. E.; Zwiggelaar, Reyer

    2015-03-01

    Breast cancer is the most frequently diagnosed cancer in women. Early detection, precise identification of women at risk, and application of appropriate disease prevention measures are by far the most effective ways to overcome the disease. Successful mammographic density segmentation is a key aspect in deriving correct tissue composition, ensuring an accurate mammographic risk assessment. However, mammographic densities have not yet been fully incorporated into non-image based risk prediction models (e.g. the Gail and the Tyrer-Cuzick models) because of unreliable segmentation consistency and accuracy. This paper presents a novel multiresolution mammographic density segmentation: a concept of stack representation is proposed, and 3D texture features are extracted by adapting techniques based on classic 2D first-order statistics. An unsupervised clustering technique is employed to achieve mammographic segmentation, with two improvements: 1) consistent segmentation through an optimal centroid initialisation step, and 2) a significantly reduced number of missegmentations through an adaptive cluster merging technique. A set of full field digital mammograms was used in the evaluation. Visual assessment indicated substantial improvement in segmented anatomical structures and tissue specific areas, especially in low mammographic density categories. The developed method demonstrated an ability to improve the quality of mammographic segmentation via clustering, with a 26% increase in the number of segmented images of good quality compared with the standard clustering approach. This in turn can be useful in early breast cancer detection, risk-stratified screening, and aiding radiologists in decision making prior to surgery and/or treatment.
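The cluster merging step can be illustrated with a small sketch. The fixed distance tolerance here is a stand-in for the paper's adaptive merging criterion, and all names are illustrative:

```python
import numpy as np

def merge_close_clusters(centroids, labels, tol=10.0):
    """Illustrative cluster merging: after an initial clustering, clusters
    whose centroids lie within `tol` of each other are merged, reducing
    missegmentation caused by over-clustering."""
    cents = [np.atleast_1d(np.asarray(c, float)) for c in centroids]
    mapping = list(range(len(cents)))
    for i in range(len(cents)):
        for j in range(i + 1, len(cents)):
            if np.linalg.norm(cents[i] - cents[j]) < tol:
                mapping[j] = mapping[i]  # follows any earlier merge of i
    # relabel every pixel through the merge mapping
    flat = [mapping[l] for l in np.ravel(labels)]
    return np.array(flat).reshape(np.shape(labels)), mapping
```

For example, centroids at intensities 0, 2 and 50 with `tol=10` collapse the first two clusters into one label while leaving the third untouched.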

  6. Image segmentation-based robust feature extraction for color image watermarking

    NASA Astrophysics Data System (ADS)

    Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen

    2018-04-01

    This paper proposes a local digital image watermarking method based on robust feature extraction. Segmentation is achieved by Simple Linear Iterative Clustering (SLIC), based on which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is proposed. Our method adaptively extracts feature regions from the blocks segmented by SLIC, selecting the most robust feature region in each segmented image. Each feature region is decomposed into low-frequency and high-frequency components by the Discrete Cosine Transform (DCT), and the watermark images are then embedded into the coefficients of the low-frequency component. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method performs well under various attacks and achieves a trade-off between high robustness and good image quality.
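The low-frequency embedding step can be illustrated with plain dither modulation on the DC coefficient of a block DCT. The paper uses the distortion-compensated variant (DC-DM), so this is a simplified sketch of the underlying quantization idea, not the authors' scheme:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (so M.T is the inverse transform)."""
    k = np.arange(n)
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2.0)
    return M

def embed_bit(block, bit, delta=8.0):
    """Embed one watermark bit into the lowest-frequency DCT coefficient by
    dither modulation: quantize it onto one of two interleaved lattices,
    one lattice per bit value."""
    M = dct_matrix(block.shape[0])
    C = M @ block @ M.T                   # 2D DCT of the feature region
    d = 0.0 if bit == 0 else delta / 2.0  # per-bit dither
    C[0, 0] = delta * np.round((C[0, 0] - d) / delta) + d
    return M.T @ C @ M                    # inverse 2D DCT

def extract_bit(block, delta=8.0):
    M = dct_matrix(block.shape[0])
    c = (M @ block @ M.T)[0, 0]
    # the nearer lattice identifies the embedded bit
    errs = [abs(c - (delta * np.round((c - d) / delta) + d))
            for d in (0.0, delta / 2.0)]
    return int(np.argmin(errs))
```

Because the two lattices are separated by delta/2, the extracted bit survives coefficient perturbations smaller than delta/4, which is the source of the robustness the abstract reports.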

  7. Control and Automation of Fluid Flow, Mass Transfer and Chemical Reactions in Microscale Segmented Flow

    NASA Astrophysics Data System (ADS)

    Abolhasani, Milad

    Flowing trains of uniformly sized bubbles/droplets (i.e., segmented flows) and the associated mass transfer enhancement over their single-phase counterparts have been studied extensively during the past fifty years. Although the scaling behaviour of segmented flow formation is increasingly well understood, the predictive adjustment of the desired flow characteristics that influence the mixing and residence times remains a challenge. Currently, a time-consuming, slow and often inconsistent manual manipulation of experimental conditions is required to address this task. In my thesis, I have overcome the above-mentioned challenges and developed an experimental strategy that, for the first time, provides predictive control over segmented flows in a hands-off manner. A computer-controlled platform consisting of a real-time image processing module within an integral controller, a silicon-based microreactor and an automated fluid delivery technique was designed, implemented and validated. In the first part of my thesis I utilized this approach for the automated screening of physical mass transfer and solubility characteristics of carbon dioxide (CO2) in a physical solvent at a well-defined temperature and pressure and a throughput of 12 conditions per hour. Second, by applying the segmented flow approach to a recently discovered CO2 chemical absorbent, frustrated Lewis pairs (FLPs), I determined the thermodynamic characteristics of the CO2-FLP reaction. Finally, the segmented flow approach was employed for characterization and investigation of the CO2-governed liquid-liquid phase separation process. The second part of my thesis utilized the segmented flow platform for the preparation and shape control of high quality colloidal nanomaterials (e.g., CdSe/CdS) via the automated control of residence times up to approximately 5 minutes. By introducing a novel oscillatory segmented flow concept, I was able to further extend the residence time limitation to 24 hours.
    A case study of a …

  8. Improving HIV outcomes in resource-limited countries: the importance of quality indicators.

    PubMed

    Ahonkhai, Aima A; Bassett, Ingrid V; Ferris, Timothy G; Freedberg, Kenneth A

    2012-11-24

    Resource-limited countries increasingly depend on quality indicators to improve outcomes within HIV treatment programs, but indicators of program performance suitable for use at the local program level remain underdeveloped. Using the existing literature as a guide, we applied standard quality improvement (QI) concepts to the continuum of HIV care from HIV diagnosis, to enrollment and retention in care, and highlighted critical service delivery process steps to identify opportunities for performance indicator development. We then identified existing indicators to measure program performance, citing examples used by pivotal donor agencies, and assessed their feasibility for use in surveying local program performance. Clinical delivery steps without existing performance measures were identified as opportunities for measure development. Using National Quality Forum (NQF) criteria as a guide, we developed measurement concepts suitable for use at the local program level that address existing gaps in program performance assessment. This analysis of the HIV continuum of care identified seven critical process steps providing numerous opportunities for performance measurement. Analysis of care delivery process steps and the application of NQF criteria identified 24 new measure concepts that are potentially useful for improving operational performance in HIV care at the local level. An evidence-based set of program-level quality indicators is critical for the improvement of HIV care in resource-limited settings. These performance indicators should be utilized as treatment programs continue to grow.

  9. Assessment of a spectral domain OCT segmentation software in a retrospective cohort study of exudative AMD patients.

    PubMed

    Tilleul, Julien; Querques, Giuseppe; Canoui-Poitrine, Florence; Leveziel, Nicolas; Souied, Eric H

    2013-01-01

    To assess the ability of the Spectralis optical coherence tomography (OCT) segmentation software to identify the inner limiting membrane and Bruch's membrane in exudative age-related macular degeneration (AMD) patients. Thirty-eight eyes of 38 treatment-naive exudative AMD patients were retrospectively included. All patients had a complete ophthalmologic examination including Spectralis OCT at baseline and at months 1 and 2. Reliability of the segmentation software was assessed by 2 ophthalmologists and was defined as good if both the inner limiting membrane and Bruch's membrane were correctly drawn. A total of 38 patients' charts were reviewed (114 scans). The inner limiting membrane was correctly drawn by the segmentation software in 114/114 spectral domain OCT scans (100%). Conversely, Bruch's membrane was correctly drawn in 59/114 scans (51.8%). The software was less reliable in locating Bruch's membrane in the presence of pigment epithelium detachment (PED) than without PED (42.5 vs. 73.5%, respectively; p = 0.049), but its reliability was not associated with SRF or CME (p = 0.55 and p = 0.10, respectively). Segmentation of the inner limiting membrane was consistently reliable, but Bruch's membrane segmentation was poorly reliable using the automatic Spectralis segmentation software. Based on this software, evaluation of retinal thickness may be incorrect, particularly in cases of PED; PED is indeed an important parameter that is not accounted for when measuring retinal thickness. Copyright © 2012 S. Karger AG, Basel.

  10. The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Zhou, Liqing

    2015-12-01

    With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional remote sensing image segmentation techniques cannot meet the demands of massive remote sensing image processing and storage. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process, building a cheap and efficient computer cluster that uses parallel processing to implement the mean shift segmentation algorithm based on the MapReduce model. This not only preserves the quality of remote sensing image segmentation but also improves segmentation speed, better meeting real-time requirements. The MapReduce-based parallel mean shift segmentation algorithm is therefore of practical significance and value.
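The map/reduce split for mean shift can be sketched in a single process: each mapper independently converges its chunk of feature vectors to a density mode, and the reducer fuses identical modes into segment labels. Function names are illustrative and no actual Hadoop/MapReduce runtime is involved:

```python
import numpy as np
from itertools import chain

def mean_shift_point(x, data, bw=2.0, iters=30):
    """Shift one feature vector to its density mode (flat kernel)."""
    for _ in range(iters):
        near = np.linalg.norm(data - x, axis=1) < bw
        nx = data[near].mean(axis=0)
        if np.linalg.norm(nx - x) < 1e-6:
            break
        x = nx
    return x

def map_phase(chunk, data, bw=2.0):
    # each mapper converges its chunk's pixels independently
    return [tuple(np.round(mean_shift_point(p, data, bw), 1)) for p in chunk]

def reduce_phase(mapped):
    # the reducer groups identical (rounded) modes into segment labels
    modes = sorted(set(chain.from_iterable(mapped)))
    label = {m: i for i, m in enumerate(modes)}
    return [[label[m] for m in part] for part in mapped]
```

The mappers never communicate, which is what makes the algorithm embarrassingly parallel across cluster nodes; only the small per-chunk mode lists travel to the reducer.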

  11. GeoSegmenter: A statistically learned Chinese word segmenter for the geoscience domain

    NASA Astrophysics Data System (ADS)

    Huang, Lan; Du, Youfu; Chen, Gongyang

    2015-03-01

    Unlike English, the Chinese language has no space between words. Segmenting texts into words, known as the Chinese word segmentation (CWS) problem, thus becomes a fundamental issue for processing Chinese documents and the first step in many text mining applications, including information retrieval, machine translation and knowledge acquisition. However, for the geoscience subject domain, the CWS problem remains unsolved. Although a generic segmenter can be applied to process geoscience documents, it lacks domain-specific knowledge and consequently its segmentation accuracy drops dramatically. This motivated us to develop a segmenter specifically for the geoscience subject domain: the GeoSegmenter. We first proposed a generic two-step framework for domain specific CWS. Following this framework, we built GeoSegmenter using conditional random fields, a principled statistical framework for sequence learning. Specifically, GeoSegmenter first identifies general terms by using a generic baseline segmenter. Then it recognises geoscience terms by learning and applying a model that can transform the initial segmentation into the goal segmentation. Empirical experimental results on geoscience documents and benchmark datasets showed that GeoSegmenter could effectively recognise both geoscience terms and general terms.
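The two-step idea (baseline segmentation, then a learned transformation toward the goal segmentation) can be sketched with a dictionary lookup standing in for the trained CRF model. The lexicon and token examples are hypothetical, purely to show the shape of the second step:

```python
def refine_segmentation(tokens, domain_terms):
    """Second-step sketch: transform a baseline segmentation by merging
    adjacent tokens that form a known domain term. A dictionary stands in
    for the paper's learned CRF transformation model."""
    out, i = [], 0
    while i < len(tokens):
        # greedily try the longest multi-token merge starting at position i
        for j in range(len(tokens), i, -1):
            cand = "".join(tokens[i:j])
            if j - i > 1 and cand in domain_terms:
                out.append(cand)
                i = j
                break
        else:
            out.append(tokens[i])
            i += 1
    return out
```

A baseline segmenter might split a geoscience term like 花岗岩 ("granite") into 花岗 + 岩; the second step re-joins it while leaving general terms untouched.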

  12. An Approach for Reducing the Error Rate in Automated Lung Segmentation

    PubMed Central

    Gill, Gurman; Beichel, Reinhard R.

    2016-01-01

    Robust lung segmentation is challenging, especially when tens of thousands of lung CT scans need to be processed, as required by large multi-center studies. The goal of this work was to develop and assess a method for the fusion of segmentation results from two different methods to generate lung segmentations that have a lower failure rate than individual input segmentations. As basis for the fusion approach, lung segmentations generated with a region growing and model-based approach were utilized. The fusion result was generated by comparing input segmentations and selectively combining them using a trained classification system. The method was evaluated on a diverse set of 204 CT scans of normal and diseased lungs. The fusion approach resulted in a Dice coefficient of 0.9855 ± 0.0106 and showed a statistically significant improvement compared to both input segmentation methods. In addition, the failure rate at different segmentation accuracy levels was assessed. For example, when requiring that lung segmentations must have a Dice coefficient of better than 0.97, the fusion approach had a failure rate of 6.13%. In contrast, the failure rate for region growing and model-based methods was 18.14% and 15.69%, respectively. Therefore, the proposed method improves the quality of the lung segmentations, which is important for subsequent quantitative analysis of lungs. Also, to enable a comparison with other methods, results on the LOLA11 challenge test set are reported. PMID:27447897
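The evaluation quantities used above are straightforward to compute; the following sketch shows the Dice coefficient and the failure rate at a required accuracy level (the 0.97 threshold from the abstract):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary segmentation masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def failure_rate(dice_scores, threshold=0.97):
    """Fraction of cases whose Dice falls below the required accuracy."""
    d = np.asarray(dice_scores, float)
    return float((d < threshold).mean())
```

With per-scan Dice scores from two candidate methods, comparing `failure_rate` values reproduces the kind of comparison the abstract reports (6.13% for the fusion approach versus 18.14% and 15.69% for the inputs).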

  13. A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images.

    PubMed

    Vázquez, David; Bernal, Jorge; Sánchez, F Javier; Fernández-Esparrach, Gloria; López, Antonio M; Romero, Adriana; Drozdzal, Michal; Courville, Aaron

    2017-01-01

    Colorectal cancer (CRC) is the third leading cause of cancer death worldwide. Currently, the standard approach to reducing CRC-related mortality is to perform regular screening in search of polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are the polyp miss rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing decision support systems (DSS) that help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset, and taking advantage of advances in the semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCNs). We perform a comparative study to show that FCNs significantly outperform, without any further postprocessing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.

  14. Denoising and segmentation of retinal layers in optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Dash, Puspita; Sigappi, A. N.

    2018-04-01

    Optical Coherence Tomography (OCT) is an imaging technique used to localize the intra-retinal boundaries for the diagnosis of macular diseases. Due to speckle noise and low image contrast, accurate segmentation of individual retinal layers is difficult. For this reason, a method for retinal layer segmentation from OCT images is presented. This paper proposes a pre-processing filtering approach for denoising and a graph based technique for segmenting retinal layers in OCT images. These techniques are used to segment the retinal layers of normal subjects as well as patients with Diabetic Macular Edema (DME). An algorithm based on gradient information and shortest path search is applied to optimize the edge selection. In this paper the four main layers of the retina are segmented, namely the internal limiting membrane (ILM), retinal pigment epithelium (RPE), inner nuclear layer (INL) and outer nuclear layer (ONL). The proposed method is applied to a database of OCT images from ten normal subjects and twenty DME affected patients, and the results are found to be promising.
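The gradient-plus-shortest-path idea can be sketched with a dynamic program: find the minimum-cost left-to-right path through the B-scan, where rows with strong vertical intensity gradient (candidate layer edges) are cheap to traverse. This is a simplified sketch of the general technique, not the paper's exact graph construction:

```python
import numpy as np

def trace_layer_boundary(img):
    """Trace one layer boundary as the cheapest left-to-right path,
    allowing row moves of +/-1 per column (dynamic programming)."""
    g = np.abs(np.diff(img.astype(float), axis=0))      # vertical gradient
    g = np.vstack([g, g[-1]])                           # pad to image height
    cost = 1.0 - (g - g.min()) / (np.ptp(g) + 1e-9)     # strong edge = cheap
    h, w = cost.shape
    acc = cost.copy()
    back = np.zeros((h, w), int)
    for c in range(1, w):
        for r in range(h):
            lo, hi = max(0, r - 1), min(h, r + 2)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    # backtrack the minimal path from the cheapest final-column row
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(w - 1, 0, -1):
        path.append(back[path[-1], c])
    return path[::-1]
```

On a synthetic B-scan with a single horizontal intensity step, the recovered path follows the row of the step across every column, which is the behaviour a layer-boundary tracker needs.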

  15. MIA-Clustering: a novel method for segmentation of paleontological material.

    PubMed

    Dunmore, Christopher J; Wollny, Gert; Skinner, Matthew M

    2018-01-01

    Paleontological research increasingly uses high-resolution micro-computed tomography (μCT) to study the inner architecture of modern and fossil bone material to answer important questions regarding vertebrate evolution. This non-destructive method allows for the measurement of otherwise inaccessible morphology. Digital measurement is predicated on the accurate segmentation of modern or fossilized bone from other structures imaged in μCT scans, as errors in segmentation can result in inaccurate calculations of structural parameters. Several approaches to image segmentation have been proposed with varying degrees of automation, ranging from completely manual segmentation to the selection of input parameters required for computational algorithms. Many of these segmentation algorithms provide speed and reproducibility at the cost of the flexibility that manual segmentation provides. In particular, the segmentation of modern and fossil bone in the presence of materials such as desiccated soft tissue, soil matrix or precipitated crystalline material can be difficult. Here we present a free, open-source segmentation algorithm application capable of segmenting modern and fossil bone, which also reduces subjective user decisions to a minimum. We compare the effectiveness of this algorithm with another leading method by using both to measure the parameters of a reference object of known dimensions, as well as to segment an example problematic fossil scan. The results demonstrate that the medical image analysis-clustering method produces accurate segmentations and offers more flexibility than methods of equivalent precision. Its free availability, flexibility in dealing with non-bone inclusions, and limited need for user input give it broad applicability in anthropological, anatomical, and paleontological contexts.

  16. System Estimates Radius of Curvature of a Segmented Mirror

    NASA Technical Reports Server (NTRS)

    Rakoczy, John

    2008-01-01

    A system that estimates the global radius of curvature (GRoC) of a segmented telescope mirror has been developed for use as one of the subsystems of a larger system that exerts precise control over the displacements of the mirror segments. This GRoC-estimating system, when integrated into the overall control system along with a mirror-segment-actuation subsystem and edge sensors (sensors that measure displacements at selected points on the edges of the segments), makes it possible to control the GRoC mirror-deformation mode, to which mode contemporary edge sensors are insufficiently sensitive. This system thus makes it possible to control the GRoC of the mirror with sufficient precision to obtain the best possible image quality and/or to impose a required wavefront correction on incoming or outgoing light. In its mathematical aspect, the system utilizes all the information available from the edge-sensor subsystem in a unique manner that yields estimates of all the states of the segmented mirror. The system does this by exploiting a special set of mirror boundary conditions and mirror influence functions in such a way as to sense displacements in degrees of freedom that would otherwise be unobservable by means of an edge-sensor subsystem, all without need to augment the edge-sensor system with additional metrological hardware. Moreover, the accuracy of the estimates increases with the number of mirror segments.

  17. Pleural effusion segmentation in thin-slice CT

    NASA Astrophysics Data System (ADS)

    Donohue, Rory; Shearer, Andrew; Bruzzi, John; Khosa, Huma

    2009-02-01

    A pleural effusion is excess fluid that collects in the pleural cavity, the fluid-filled space that surrounds the lungs. Surplus amounts of such fluid can impair breathing by limiting the expansion of the lungs during inhalation. Measuring the fluid volume is indicative of the effectiveness of any treatment, but due to the similarity of the effusion to surrounding regions, the presence of fragments of collapsed lung, and topological changes, accurate quantification of the effusion volume is a difficult imaging problem. A novel code is presented which performs conditional region growing to accurately segment the effusion shape across a dataset. We demonstrate the applicability of our technique in the segmentation of pleural effusions and pulmonary masses.
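Conditional region growing in general has a simple core: starting from a seed, accept connected neighbours only while they satisfy a condition (here an intensity window and an optional anatomical mask). A minimal 2D sketch, with illustrative parameter names, not the paper's code:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, lo, hi, mask=None):
    """Grow a region from `seed`, accepting 4-connected neighbours whose
    intensity lies in [lo, hi] and (optionally) inside a mask, e.g. one
    restricting growth to the pleural space."""
    h, w = img.shape
    grown = np.zeros((h, w), bool)
    q = deque([seed])
    grown[seed] = True
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not grown[nr, nc]
                    and lo <= img[nr, nc] <= hi
                    and (mask is None or mask[nr, nc])):
                grown[nr, nc] = True
                q.append((nr, nc))
    return grown
```

The "conditional" part is what keeps the growth from leaking into collapsed-lung fragments of similar intensity: the mask and intensity window act as the stopping conditions.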

  18. Wavelet-based Encoding Scheme for Controlling Size of Compressed ECG Segments in Telecardiology Systems.

    PubMed

    Al-Busaidi, Asiya M; Khriji, Lazhar; Touati, Farid; Rasid, Mohd Fadlee; Mnaouer, Adel Ben

    2017-09-12

    One of the major issues in time-critical medical applications using wireless technology is the size of the payload packet, which is generally designed to be very small to improve the transmission process. Using small packets to transmit continuous ECG data is still costly. Thus, data compression is commonly used to reduce the huge amount of ECG data transmitted through telecardiology devices. In this paper, a new ECG compression scheme is introduced to ensure that the compressed ECG segments fit into the available limited payload packets, while maintaining a fixed compression ratio (CR) to preserve the diagnostic information. The scheme automatically divides the ECG block into segments, while keeping the other compression parameters fixed. This scheme adopts the discrete wavelet transform (DWT) to decompose the ECG data, a bit-field preserving (BFP) method to preserve the quality of the DWT coefficients, and a modified run-length encoding (RLE) scheme to encode the coefficients. The proposed dynamic compression scheme showed promising results with a percentage packet reduction (PR) of about 85.39% at low percentage root-mean square difference (PRD) values, less than 1%. ECG records from the MIT-BIH Arrhythmia Database were used to test the proposed method. The simulation results showed promising performance that satisfies the needs of portable telecardiology systems, like the limited payload size and low power consumption.
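The DWT-plus-RLE pipeline can be sketched with a one-level Haar transform and a plain run-length encoder. This is a toy stand-in for the paper's scheme (which uses deeper DWT decomposition, bit-field preserving and a modified RLE); all names are illustrative:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, float)
    return (x[0::2] + x[1::2]) / 2.0, (x[0::2] - x[1::2]) / 2.0

def rle(values):
    """Run-length encode a sequence as [value, run_length] pairs."""
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

def compress_segment(seg, q=0.5):
    """Compress one ECG segment: transform, threshold the (mostly small)
    detail band so it yields long zero runs, then run-length encode."""
    a, d = haar_dwt(seg)
    d = np.where(np.abs(d) < q, 0.0, d)
    return rle(np.round(a, 1)), rle(d)
```

In the paper's scheme the block is then re-split into segments sized so that each compressed segment fits the payload limit; here that would amount to shrinking the segment length until the encoded pair lists fit the packet budget.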

  19. Cortical Enhanced Tissue Segmentation of Neonatal Brain MR Images Acquired by a Dedicated Phased Array Coil

    PubMed Central

    Shi, Feng; Yap, Pew-Thian; Fan, Yong; Cheng, Jie-Zhi; Wald, Lawrence L.; Gerig, Guido; Lin, Weili; Shen, Dinggang

    2010-01-01

    The acquisition of high quality MR images of neonatal brains is largely hampered by their characteristically small head size and low tissue contrast. As a result, subsequent image processing and analysis, especially for brain tissue segmentation, are often hindered. To overcome this problem, a dedicated phased array neonatal head coil is utilized to improve MR image quality by effectively combining images obtained from 8 coil elements without lengthening data acquisition time. In addition, a subject-specific atlas based tissue segmentation algorithm is specifically developed for the delineation of fine structures in the acquired neonatal brain MR images. The proposed tissue segmentation method first enhances the sheet-like cortical gray matter (GM) structures in neonatal images with a Hessian filter for generation of a cortical GM prior. Then, the prior is combined with our neonatal population atlas to form a cortical enhanced hybrid atlas, which we refer to as the subject-specific atlas. Various experiments are conducted to compare the proposed method with manual segmentation results, as well as with two additional population atlas based segmentation methods. Results show that the proposed method is capable of segmenting the neonatal brain with the highest accuracy, compared to the other two methods. PMID:20862268

  20. Segmentation and learning in the quantitative analysis of microscopy images

    NASA Astrophysics Data System (ADS)

    Ruggiero, Christy; Ross, Amy; Porter, Reid

    2015-02-01

    In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest to use machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.

  1. Semi-automated brain tumor and edema segmentation using MRI.

    PubMed

    Xie, Kai; Yang, Jie; Zhang, Z G; Zhu, Y M

    2005-10-01

    Manual segmentation of brain tumors from magnetic resonance images is a challenging and time-consuming task. A semi-automated method has been developed for brain tumor and edema segmentation that provides objective, reproducible segmentations close to the manual results. Additionally, the method segments non-enhancing brain tumor and edema from healthy tissues in magnetic resonance images. In this study, a semi-automated method was developed for brain tumor and edema segmentation and volume measurement using magnetic resonance imaging (MRI), and several novel algorithms for tumor segmentation from MRI were integrated in this medical diagnosis system. We exploit a hybrid level set (HLS) segmentation method driven by region and boundary information simultaneously: region information serves as a robust propagation force, while boundary information serves as an accurate stopping functional. Ten patients with brain tumors of different size, shape and location were selected, and a total of 246 axial tumor-containing slices obtained from these 10 patients were used to evaluate the effectiveness of the segmentation methods. The method was applied to 10 non-enhancing brain tumors and satisfactory results were achieved. Two quantitative measures of tumor segmentation quality, namely correspondence ratio (CR) and percent matching (PM), were computed. For the segmentation of brain tumor, the volume total PM varies from 79.12 to 93.25% with a mean of 85.67+/-4.38%, while the volume total CR varies from 0.74 to 0.91 with a mean of 0.84+/-0.07. For the segmentation of edema, the volume total PM varies from 72.86 to 87.29% with a mean of 79.54+/-4.18%, while the volume total CR varies from 0.69 to 0.85 with a mean of 0.79+/-0.08. The HLS segmentation method performs better than the classical level set (LS) segmentation method in both PM and CR. The results of this research may have potential applications, both as a staging procedure and a method of …
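The two quality measures can be computed directly from binary masks. One common convention is sketched below (PM = matched voxels over manual voxels; CR penalises false positives at half weight); the paper's exact formulas may differ, so treat this as illustrative:

```python
import numpy as np

def pm_cr(auto_mask, manual_mask):
    """Percent matching (PM) and correspondence ratio (CR) between an
    automated and a manual segmentation, under one common definition."""
    a = np.asarray(auto_mask, bool)
    m = np.asarray(manual_mask, bool)
    tp = np.logical_and(a, m).sum()       # voxels matching the manual mask
    fp = np.logical_and(a, ~m).sum()      # automated voxels outside it
    pm = 100.0 * tp / m.sum()
    cr = (tp - 0.5 * fp) / m.sum()
    return pm, cr
```

Note that PM alone rewards over-segmentation (a mask covering everything scores 100%), which is why CR's false-positive penalty is reported alongside it.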

  2. Deep learning and texture-based semantic label fusion for brain tumor segmentation

    NASA Astrophysics Data System (ADS)

    Vidyaratne, L.; Alam, M.; Shboul, Z.; Iftekharuddin, K. M.

    2018-02-01

    Brain tumor segmentation is a fundamental step in surgical treatment and therapy. Many hand-crafted and learning based methods have been proposed for automatic brain tumor segmentation from MRI. Studies have shown that these approaches have their inherent advantages and limitations. This work proposes a semantic label fusion algorithm that combines two representative state-of-the-art segmentation algorithms: a texture based hand-crafted method and a deep learning based method, to obtain robust tumor segmentation. We evaluate the proposed method using the publicly available BRATS 2017 brain tumor segmentation challenge dataset. The results show that the proposed method offers improved segmentation by alleviating the inherent weaknesses of each input: the extensive false positives of the texture based method and the false tumor tissue classification problem of the deep learning method, respectively. Furthermore, we investigate the effect of patient gender on segmentation performance using a subset of the validation dataset. Notably, the substantial improvement in brain tumor segmentation performance achieved in this work recently enabled our group to secure first place in the overall patient survival prediction task at the BRATS 2017 challenge.

  3. Deep Learning and Texture-Based Semantic Label Fusion for Brain Tumor Segmentation.

    PubMed

    Vidyaratne, L; Alam, M; Shboul, Z; Iftekharuddin, K M

    2018-01-01

    Brain tumor segmentation is a fundamental step in surgical treatment and therapy. Many hand-crafted and learning based methods have been proposed for automatic brain tumor segmentation from MRI. Studies have shown that these approaches have their inherent advantages and limitations. This work proposes a semantic label fusion algorithm that combines two representative state-of-the-art segmentation algorithms: a texture based hand-crafted method and a deep learning based method, to obtain robust tumor segmentation. We evaluate the proposed method using the publicly available BRATS 2017 brain tumor segmentation challenge dataset. The results show that the proposed method offers improved segmentation by alleviating the inherent weaknesses of each input: the extensive false positives of the texture based method and the false tumor tissue classification problem of the deep learning method, respectively. Furthermore, we investigate the effect of patient gender on segmentation performance using a subset of the validation dataset. Notably, the substantial improvement in brain tumor segmentation performance achieved in this work recently enabled our group to secure first place in the overall patient survival prediction task at the BRATS 2017 challenge.

  4. Fast segmentation of stained nuclei in terabyte-scale, time resolved 3D microscopy image stacks.

    PubMed

    Stegmaier, Johannes; Otte, Jens C; Kobitski, Andrei; Bartschat, Andreas; Garcia, Ariel; Nienhaus, G Ulrich; Strähle, Uwe; Mikut, Ralf

    2014-01-01

    Automated analysis of multi-dimensional microscopy images has become an integral part of modern research in life science. Most available algorithms that provide sufficient segmentation quality, however, are infeasible for a large amount of data due to their high complexity. In this contribution we present a fast parallelized segmentation method that is especially suited for the extraction of stained nuclei from microscopy images, e.g., of developing zebrafish embryos. The idea is to transform the input image based on gradient and normal directions in the proximity of detected seed points such that it can be handled by straightforward global thresholding like Otsu's method. We evaluate the quality of the obtained segmentation results on a set of real and simulated benchmark images in 2D and 3D and show the algorithm's superior performance compared to other state-of-the-art algorithms. We achieve an up to ten-fold decrease in processing times, allowing us to process large data sets while still providing reasonable segmentation results.
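The global thresholding step the authors reduce their problem to is classic Otsu's method: choose the threshold maximising the between-class variance of the intensity histogram. A self-contained sketch (the seed-point image transform that precedes it in the paper is not reproduced here):

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's method: return the threshold that maximises between-class
    variance of the image's intensity histogram."""
    hist, edges = np.histogram(np.ravel(img), bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                 # class-0 probability up to each bin
    mu = np.cumsum(p * centers)       # class-0 cumulative mean
    mu_t = mu[-1]                     # global mean
    w1 = 1.0 - w0
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    between[np.isnan(between)] = 0.0  # empty-class bins carry no information
    return centers[int(np.argmax(between))]
```

For bimodal data such as stained nuclei against background, any threshold between the two modes maximises the criterion, so the returned value cleanly separates the classes.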

  5. Segmentation in cohesive systems constrained by elastic environments

    PubMed Central

    Novak, I.

    2017-01-01

    The complexity of fracture-induced segmentation in elastically constrained cohesive (fragile) systems originates from the presence of competing interactions. The role of discreteness in such phenomena is of interest in a variety of fields, from hierarchical self-assembly to developmental morphogenesis. In this paper, we study the analytically solvable example of segmentation in a breakable mass–spring chain elastically linked to a deformable lattice structure. We explicitly construct the complete set of local minima of the energy in this prototypical problem and identify among them the states corresponding to the global energy minima. We show that, even in the continuum limit, the dependence of the segmentation topology on the stretching/pre-stress parameter in this problem takes the form of a devil's type staircase. The peculiar nature of this staircase, characterized by locking in rational microstructures, is of particular importance for biological applications, where its structure may serve as an explanation of the robustness of stress-driven segmentation. This article is part of the themed issue ‘Patterning through instabilities in complex media: theory and applications.’ PMID:28373383

  6. Segmentation in cohesive systems constrained by elastic environments

    NASA Astrophysics Data System (ADS)

    Novak, I.; Truskinovsky, L.

    2017-04-01

    The complexity of fracture-induced segmentation in elastically constrained cohesive (fragile) systems originates from the presence of competing interactions. The role of discreteness in such phenomena is of interest in a variety of fields, from hierarchical self-assembly to developmental morphogenesis. In this paper, we study the analytically solvable example of segmentation in a breakable mass-spring chain elastically linked to a deformable lattice structure. We explicitly construct the complete set of local minima of the energy in this prototypical problem and identify among them the states corresponding to the global energy minima. We show that, even in the continuum limit, the dependence of the segmentation topology on the stretching/pre-stress parameter in this problem takes the form of a devil's type staircase. The peculiar nature of this staircase, characterized by locking in rational microstructures, is of particular importance for biological applications, where its structure may serve as an explanation of the robustness of stress-driven segmentation. This article is part of the themed issue 'Patterning through instabilities in complex media: theory and applications.'

  7. 3D exemplar-based random walks for tooth segmentation from cone-beam computed tomography images.

    PubMed

    Pei, Yuru; Ai, Xingsheng; Zha, Hongbin; Xu, Tianmin; Ma, Gengyu

    2016-09-01

Tooth segmentation is an essential step in acquiring patient-specific dental geometries from cone-beam computed tomography (CBCT) images. Tooth segmentation from CBCT images is still a challenging task considering the comparatively low image quality caused by the limited radiation dose, as well as structural ambiguities from intercuspation and nearby alveolar bones. The goal of this paper is to present and discuss the latest accomplishments in semisupervised tooth segmentation with adaptive 3D shape constraints. The authors propose a 3D exemplar-based random walk method for tooth segmentation from CBCT images. The proposed method integrates semisupervised label propagation and regularization by 3D exemplar registration. To begin with, the pure random walk method is used to obtain an initial segmentation of the teeth, which tends to be erroneous because of the structural ambiguity of CBCT images. Then, as an iterative refinement, the authors conduct a regularization using 3D exemplar registration, as well as label propagation by random walks with soft constraints, to improve the tooth segmentation. In the first stage of the iteration, 3D exemplars with well-defined topologies are adapted to fit the tooth contours obtained from the random-walk-based segmentation. The soft constraints on voxel labeling are defined by the shape-based foreground dentine probability acquired by the exemplar registration, as well as the appearance-based probability from a support vector machine (SVM) classifier. In the second stage, the labels of the volume-of-interest (VOI) are updated by the random walks with soft constraints. The two stages are optimized iteratively. Instead of one-shot label propagation in the VOI, the iterative refinement process achieves a reliable tooth segmentation by virtue of exemplar-based random walks with adaptive soft constraints. The proposed method was applied to the tooth segmentation of twenty clinically captured CBCT images. Three metrics

  8. Android application for handwriting segmentation using PerTOHS theory

    NASA Astrophysics Data System (ADS)

    Akouaydi, Hanen; Njah, Sourour; Alimi, Adel M.

    2017-03-01

This paper addresses the problem of handwriting segmentation on mobile devices. Many applications have been developed to facilitate handwriting recognition and to overcome the limited number of keys on keyboards by offering a drawing space for writing instead. Here, we present a mobile application for handwriting segmentation based on PerTOHS theory (Perceptual Theory of On-line Handwriting Segmentation), in which handwriting is defined as a sequence of elementary and perceptual codes. The theory analyzes the written script and learns the visual code features of the handwriting in order to generate new ones via the generated perceptual sequences. To obtain this classification, we apply the Beta-elliptic model, a fuzzy detector, and genetic algorithms to extract the EPCs (Elementary Perceptual Codes) and GPCs (Global Perceptual Codes) that compose the script. Finally, we present our Android application M-PerTOHS for handwriting segmentation.

  9. Automatic atlas-based three-label cartilage segmentation from MR knee images

    PubMed Central

    Shan, Liang; Zach, Christopher; Charles, Cecil; Niethammer, Marc

    2016-01-01

Osteoarthritis (OA) is the most common form of joint disease and is often characterized by cartilage changes. Accurate quantitative methods are needed to rapidly screen large image databases to assess changes in cartilage morphology. We therefore propose a new automatic atlas-based cartilage segmentation method for future automatic OA studies. Atlas-based segmentation methods have been demonstrated to be robust and accurate in brain imaging and therefore also hold high promise to allow for reliable and high-quality segmentations of cartilage. Nevertheless, atlas-based methods have not been well explored for cartilage segmentation. A particular challenge is the thinness of cartilage, its relatively small volume in comparison to surrounding tissue and the difficulty of locating cartilage interfaces – for example, the interface between femoral and tibial cartilage. This paper focuses on the segmentation of femoral and tibial cartilage, proposing a multi-atlas segmentation strategy with non-local patch-based label fusion which can robustly identify candidate regions of cartilage. This method is combined with a novel three-label segmentation method which guarantees the spatial separation of femoral and tibial cartilage, and ensures spatial regularity while preserving the thin cartilage shape through anisotropic regularization. Our segmentation energy is convex and therefore guarantees globally optimal solutions. We perform an extensive validation of the proposed method on 706 images of the Pfizer Longitudinal Study. Our validation includes comparisons of different atlas segmentation strategies, different local classifiers, and different types of regularizers. To compare to other cartilage segmentation approaches we validate based on the 50 images of the SKI10 dataset. PMID:25128683

  10. Two-stage atlas subset selection in multi-atlas based image segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu

    2015-06-15

Purpose: Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance as the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The

  11. Two-stage atlas subset selection in multi-atlas based image segmentation.

    PubMed

    Zhao, Tingting; Ruan, Dan

    2015-06-01

Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance as the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. The authors have developed a novel two-stage atlas
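The Dice similarity coefficient used above to report segmentation agreement is simple to compute from two binary masks; a minimal NumPy sketch (generic, not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for two binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks count as a perfect match
    return 2.0 * np.logical_and(a, b).sum() / denom
```

A value of 1 means perfect overlap and 0 means none, so the reported jump from 0.82 to 0.95 is a substantial gain.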

  12. Carbon limitation patterns in buried and open urban streams

    EPA Science Inventory

    Urban streams alternate between darkened buried segments dominated by heterotrophic processes and lighted open segments dominated by autotrophic processes. We hypothesized that labile carbon leaking from autotrophic cells would reduce heterotrophic carbon limitation in open chan...

  13. Atlas-based fuzzy connectedness segmentation and intensity nonuniformity correction applied to brain MRI.

    PubMed

    Zhou, Yongxin; Bai, Jing

    2007-01-01

A framework that combines atlas registration, fuzzy connectedness (FC) segmentation, and parametric bias field correction (PABIC) is proposed for the automatic segmentation of brain magnetic resonance imaging (MRI). First, the atlas is registered onto the MRI to initialize the subsequent FC segmentation. Original techniques are proposed to estimate the necessary initial parameters of the FC segmentation. The result of the FC segmentation is then used to initialize the following PABIC algorithm. Finally, we re-apply the FC technique to the PABIC-corrected MRI to obtain the final segmentation. Thus, we avoid expert human intervention and provide a fully automatic method for brain MRI segmentation. Experiments on both simulated and real MRI images demonstrate the validity of the method, as well as its limitations. Being a fully automatic method, it is expected to find wide application, such as in three-dimensional visualization, radiation therapy planning, and medical database construction.

  14. Supervised segmentation of phenotype descriptions for the human skeletal phenome using hybrid methods.

    PubMed

    Groza, Tudor; Hunter, Jane; Zankl, Andreas

    2012-10-15

Over the course of the last few years there has been a significant amount of research on ontology-based formalization of phenotype descriptions. In order to fully capture the intrinsic value and knowledge expressed within them, we need to take advantage of their inner structure, which implicitly combines qualities and anatomical entities. The first step in this process is the segmentation of the phenotype descriptions into their atomic elements. We present a two-phase hybrid segmentation method that combines a series of individual classifiers using different aggregation schemes (set operations and simple majority voting). The approach is tested on a corpus of skeletal phenotype descriptions drawn from the Human Phenotype Ontology. Experimental results show that the best hybrid method achieves an F-Score of 97.05% in the first phase and F-Scores of 97.16% / 94.50% in the second phase. The performance of the initial segmentation of anatomical entities and qualities (phase I) is not affected by the presence / absence of external resources, such as domain dictionaries. From a generic perspective, hybrid methods may not always improve the segmentation accuracy, as they are heavily dependent on the goal and data characteristics.
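The simple majority voting used to aggregate the individual classifiers can be sketched as a token-wise voter (a generic illustration under an assumed integer label encoding, not the authors' hybrid pipeline):

```python
import numpy as np

def majority_vote(label_matrix):
    """Fuse per-classifier label sequences by token-wise majority voting.

    label_matrix: (n_classifiers, n_tokens) integer labels; ties resolve
    to the smallest label because np.unique returns sorted values.
    """
    labels = np.asarray(label_matrix)
    fused = []
    for column in labels.T:  # one token position at a time
        values, counts = np.unique(column, return_counts=True)
        fused.append(values[np.argmax(counts)])
    return np.array(fused)
```

Set-operation schemes (union, intersection of the classifiers' positive spans) would replace the per-token count with a set expression, but the voting form above is the simplest aggregator.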

  15. Nonparametric rank regression for analyzing water quality concentration data with multiple detection limits.

    PubMed

    Fu, Liya; Wang, You-Gan

    2011-02-15

Environmental data usually include measurements, such as water quality data, which fall below detection limits because of limitations of the instruments or of certain analytical methods used. The fact that some responses are not detected needs to be properly taken into account in the statistical analysis of such data. However, it is well known that analyzing a data set with detection limits is challenging, and we often have to rely on traditional parametric methods or simple imputation methods. Distributional assumptions can lead to biased inference, and justification of distributions is often not possible when the data are correlated and there is a large proportion of data below detection limits. The extent of bias is usually unknown. To draw valid conclusions and hence provide useful advice for environmental management authorities, it is essential to develop and apply an appropriate statistical methodology. This paper proposes rank-based procedures for analyzing non-normally distributed data collected at different sites over a period of time in the presence of multiple detection limits. To take account of temporal correlations within each site, we propose an optimal linear combination of estimating functions and apply the induced smoothing method to reduce the computational burden. Finally, we apply the proposed method to water quality data collected in the Susquehanna River Basin in the United States, which clearly demonstrates the advantages of the rank regression models.
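To give a flavor of rank-based robustness, a classic nonparametric trend estimate, the Theil-Sen slope (the median of all pairwise slopes), fits in a few lines. This is a standard textbook illustration, not the estimating-function method proposed in the paper, and it does not by itself handle censored (below-detection-limit) values:

```python
import itertools
import statistics

def theil_sen_slope(x, y):
    """Median of all pairwise slopes: robust to a minority of gross outliers."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in itertools.combinations(range(len(x)), 2)
              if x[j] != x[i]]
    return statistics.median(slopes)
```

A single gross outlier leaves the estimate untouched, which is the kind of robustness that motivates rank-based procedures over least squares for messy environmental data.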

  16. Recommendations for the Use of Automated Gray Matter Segmentation Tools: Evidence from Huntington's Disease.

    PubMed

    Johnson, Eileanoir B; Gregory, Sarah; Johnson, Hans J; Durr, Alexandra; Leavitt, Blair R; Roos, Raymund A; Rees, Geraint; Tabrizi, Sarah J; Scahill, Rachael I

    2017-01-01

    The selection of an appropriate segmentation tool is a challenge facing any researcher aiming to measure gray matter (GM) volume. Many tools have been compared, yet there is currently no method that can be recommended above all others; in particular, there is a lack of validation in disease cohorts. This work utilizes a clinical dataset to conduct an extensive comparison of segmentation tools. Our results confirm that all tools have advantages and disadvantages, and we present a series of considerations that may be of use when selecting a GM segmentation method, rather than a ranking of these tools. Seven segmentation tools were compared using 3 T MRI data from 20 controls, 40 premanifest Huntington's disease (HD), and 40 early HD participants. Segmented volumes underwent detailed visual quality control. Reliability and repeatability of total, cortical, and lobular GM were investigated in repeated baseline scans. The relationship between each tool was also examined. Longitudinal within-group change over 3 years was assessed via generalized least squares regression to determine sensitivity of each tool to disease effects. Visual quality control and raw volumes highlighted large variability between tools, especially in occipital and temporal regions. Most tools showed reliable performance and the volumes were generally correlated. Results for longitudinal within-group change varied between tools, especially within lobular regions. These differences highlight the need for careful selection of segmentation methods in clinical neuroimaging studies. This guide acts as a primer aimed at the novice or non-technical imaging scientist providing recommendations for the selection of cohort-appropriate GM segmentation software.

  17. Research on Segmentation Monitoring Control of IA-RWA Algorithm with Probe Flow

    NASA Astrophysics Data System (ADS)

    Ren, Danping; Guo, Kun; Yao, Qiuyan; Zhao, Jijun

    2018-04-01

The impairment-aware routing and wavelength assignment algorithm with probe flow (P-IA-RWA) can accurately estimate the transmission quality of a link when a connection request arrives, but it also introduces problems: the probe flow data used by the P-IA-RWA algorithm can create competition for wavelength resources. To reduce this competition and the blocking probability of the network, a new P-IA-RWA algorithm with a segmentation monitoring-control mechanism (SMC-P-IA-RWA) is proposed. The algorithm reduces the time that network resources are held by the probe flow. It segments the candidate path suitably for data transmission, and the transmission quality of the probe flow sent by the source node is monitored at the endpoint of each segment. The transmission quality of the data can also be monitored, so that appropriate action can be taken to avoid unnecessary probe flows. The simulation results show that the proposed SMC-P-IA-RWA algorithm can effectively reduce the blocking probability. It better resolves the competition for resources between the probe flow and the main data to be transferred, and it is more suitable for scheduling control in large-scale networks.

  18. Using simulated fluorescence cell micrographs for the evaluation of cell image segmentation algorithms.

    PubMed

    Wiesmann, Veit; Bergler, Matthias; Palmisano, Ralf; Prinzen, Martin; Franz, Daniela; Wittenberg, Thomas

    2017-03-18

    Manual assessment and evaluation of fluorescent micrograph cell experiments is time-consuming and tedious. Automated segmentation pipelines can ensure efficient and reproducible evaluation and analysis with constant high quality for all images of an experiment. Such cell segmentation approaches are usually validated and rated in comparison to manually annotated micrographs. Nevertheless, manual annotations are prone to errors and display inter- and intra-observer variability which influence the validation results of automated cell segmentation pipelines. We present a new approach to simulate fluorescent cell micrographs that provides an objective ground truth for the validation of cell segmentation methods. The cell simulation was evaluated twofold: (1) An expert observer study shows that the proposed approach generates realistic fluorescent cell micrograph simulations. (2) An automated segmentation pipeline on the simulated fluorescent cell micrographs reproduces segmentation performances of that pipeline on real fluorescent cell micrographs. The proposed simulation approach produces realistic fluorescent cell micrographs with corresponding ground truth. The simulated data is suited to evaluate image segmentation pipelines more efficiently and reproducibly than it is possible on manually annotated real micrographs.

  19. A Manual Segmentation Tool for Three-Dimensional Neuron Datasets.

    PubMed

    Magliaro, Chiara; Callara, Alejandro L; Vanello, Nicola; Ahluwalia, Arti

    2017-01-01

To date, automated or semi-automated software and algorithms for the segmentation of neurons from three-dimensional imaging datasets have had limited success. The gold standard for neural segmentation is considered to be manual isolation performed by an expert. To facilitate the manual isolation of complex objects from image stacks, such as neurons in their native arrangement within the brain, a new Manual Segmentation Tool (ManSegTool) has been developed. ManSegTool allows users to load an image stack, scroll through the images, and manually draw the structures of interest stack by stack. Users can eliminate unwanted regions or split structures (e.g., branches from different neurons that are too close to each other but, to the experienced eye, clearly belong to a unique cell), view the object in 3D, and save the results obtained. The tool can be used for testing the performance of a single-neuron segmentation algorithm or for extracting complex objects where the available automated methods still fail. Here we describe the software's main features and then show an example of how ManSegTool can be used to segment neuron images acquired using a confocal microscope. In particular, expert neuroscientists were asked to segment different neurons, from which morphometric variables were subsequently extracted as a benchmark for precision. In addition, a literature-defined index for evaluating the goodness of segmentation was used as a benchmark for accuracy. Neocortical layer axons from a DIADEM challenge dataset were also segmented with ManSegTool and compared with the manual "gold standard" generated for the competition.

  20. Cellular image segmentation using n-agent cooperative game theory

    NASA Astrophysics Data System (ADS)

    Dimock, Ian B.; Wan, Justin W. L.

    2016-03-01

    Image segmentation is an important problem in computer vision and has significant applications in the segmentation of cellular images. Many different imaging techniques exist and produce a variety of image properties which pose difficulties to image segmentation routines. Bright-field images are particularly challenging because of the non-uniform shape of the cells, the low contrast between cells and background, and imaging artifacts such as halos and broken edges. Classical segmentation techniques often produce poor results on these challenging images. Previous attempts at bright-field imaging are often limited in scope to the images that they segment. In this paper, we introduce a new algorithm for automatically segmenting cellular images. The algorithm incorporates two game theoretic models which allow each pixel to act as an independent agent with the goal of selecting their best labelling strategy. In the non-cooperative model, the pixels choose strategies greedily based only on local information. In the cooperative model, the pixels can form coalitions, which select labelling strategies that benefit the entire group. Combining these two models produces a method which allows the pixels to balance both local and global information when selecting their label. With the addition of k-means and active contour techniques for initialization and post-processing purposes, we achieve a robust segmentation routine. The algorithm is applied to several cell image datasets including bright-field images, fluorescent images and simulated images. Experiments show that the algorithm produces good segmentation results across the variety of datasets which differ in cell density, cell shape, contrast, and noise levels.
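The k-means initialization mentioned above can be illustrated with a minimal 1-D two-means split of pixel intensities into background and foreground classes (a generic sketch, not the paper's implementation):

```python
import numpy as np

def two_means_labels(pixels, iters=20):
    """Label pixels 0/1 with 1-D 2-means, a common initial fg/bg split."""
    pixels = np.asarray(pixels, dtype=float)
    centers = np.array([pixels.min(), pixels.max()])  # start centers at the extremes
    for _ in range(iters):
        # assign each pixel to its nearest center
        labels = (np.abs(pixels - centers[0]) > np.abs(pixels - centers[1])).astype(int)
        for k in (0, 1):
            if np.any(labels == k):  # recompute each center from its members
                centers[k] = pixels[labels == k].mean()
    return labels
```

Such a coarse intensity split is only a starting point; on low-contrast bright-field images it is exactly where the game-theoretic refinement and active contours described above take over.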

  1. NOTE: Reducing the number of segments in unidirectional MLC segmentations

    NASA Astrophysics Data System (ADS)

    Mellado, X.; Cruz, S.; Artacho, J. M.; Canellas, M.

    2010-02-01

In intensity-modulated radiation therapy (IMRT), fluence matrices obtained from a treatment planning system are usually delivered by a linear accelerator equipped with a multileaf collimator (MLC). A segmentation method is needed for decomposing these fluence matrices into segments suitable for the MLC, and the number of segments used is an important factor in treatment time. In this work, an algorithm for reducing the number of segments (NS) is presented for unidirectional segmentations, where there is no backtracking of the MLC leaves. It uses a geometrical representation of the segmentation output to search for the key values in a fluence matrix that complicate its decomposition. The NS reduction is achieved by performing minor modifications to these values, under the condition of avoiding substantial modifications of the dose-volume histogram, and does not, on average, increase the total number of monitor units delivered. The proposed method was tested using two clinical cases planned with the PCRT 3D® treatment planning system.

  2. Automatic rectum limit detection by anatomical markers correlation.

    PubMed

    Namías, R; D'Amato, J P; del Fresno, M; Vénere, M

    2014-06-01

Several diseases take place at the end of the digestive system. Many of them can be diagnosed by means of different medical imaging modalities together with computer-aided detection (CAD) systems. These CAD systems mainly focus on the complete segmentation of the digestive tube. However, the detection of limits between different sections could provide important information to these systems. In this paper we present an automatic method for detecting the rectum and sigmoid colon limit using a novel global curvature analysis over the centerline of the segmented digestive tube in different imaging modalities. The results are compared with the gold-standard rectum upper limit through a validation scheme comprising two different anatomical markers: the third sacral vertebra and the average rectum length. Experimental results on both magnetic resonance imaging (MRI) and computed tomography colonography (CTC) acquisitions show the efficacy of the proposed strategy for the automatic detection of rectum limits. The method is intended for application to rectum segmentation in MRI for geometrical modeling and as a source of contextual information in virtual colonoscopies and CAD systems. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Automated classification of bone marrow cells in microscopic images for diagnosis of leukemia: a comparison of two classification schemes with respect to the segmentation quality

    NASA Astrophysics Data System (ADS)

    Krappe, Sebastian; Benz, Michaela; Wittenberg, Thomas; Haferlach, Torsten; Münzenmayer, Christian

    2015-03-01

The morphological analysis of bone marrow smears is fundamental for the diagnosis of leukemia. Currently, the counting and classification of the different types of bone marrow cells is done manually using a bright-field microscope. This is a time-consuming, partly subjective, and tedious process. Furthermore, repeated examinations of a slide yield intra- and inter-observer variances. For this reason, automation of morphological bone marrow analysis is being pursued. This analysis comprises several steps: image acquisition and smear detection, cell localization and segmentation, feature extraction, and cell classification. The automated classification of bone marrow cells depends on the automated cell segmentation and the choice of adequate features extracted from different parts of the cell. In this work we focus on the evaluation of support vector machines (SVMs) and random forests (RFs) for the differentiation of bone marrow cells into 16 different classes, including immature and abnormal cell classes. Data sets of different segmentation quality are used to test the two approaches. Automated solutions for the morphological analysis of bone marrow smears could use such a classifier to pre-classify bone marrow cells and thereby shorten the examination duration.

  4. Analysis and design of segment control system in segmented primary mirror

    NASA Astrophysics Data System (ADS)

    Yu, Wenhao; Li, Bin; Chen, Mo; Xian, Hao

    2017-10-01

Segmented primary mirrors will be widely adopted in future giant telescopes such as TMT, E-ELT, and GMT. High-performance control of the segmented primary mirror is one of the challenging technologies for telescopes using segmented primary mirrors. The control of each segment is the basis of the control system of a segmented mirror. Correcting the tilt and tip of a single segment is the main work of this paper, which is divided into two parts. First, a harmonic response analysis performed on a finite element model of a single segment matches the Bode diagram of a second-order system with a natural frequency of 45 Hz and a damping ratio of 0.005. Second, a control system model is established, and speed feedback is introduced into the control loop to suppress the gain at the resonance point and increase the open-loop bandwidth to 30 Hz or even higher. A corresponding controller is designed based on the control system model described above.
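The identified dynamics imply a sharp resonance: for a unit-DC-gain second-order system H(s) = ωn² / (s² + 2ζωn·s + ωn²) with fn = 45 Hz and ζ = 0.005, the peak gain near resonance is about 1/(2ζ) = 100, which is exactly what the speed feedback is introduced to suppress. A small numerical check (an illustrative model, not the paper's finite element code):

```python
import numpy as np

def second_order_gain(f, fn=45.0, zeta=0.005):
    """|H(j*2*pi*f)| for H(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2)."""
    wn = 2 * np.pi * fn
    s = 1j * 2 * np.pi * np.asarray(f, dtype=float)
    return np.abs(wn ** 2 / (s ** 2 + 2 * zeta * wn * s + wn ** 2))
```

At f = fn the denominator reduces to j·2ζωn², so the gain is 1/(2ζ) = 100, while the DC gain is 1; a 40 dB resonance peak like this is why the uncompensated open-loop bandwidth must stay well below 45 Hz.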

  5. Maximum-likelihood techniques for joint segmentation-classification of multispectral chromosome images.

    PubMed

    Schwartzkopf, Wade C; Bovik, Alan C; Evans, Brian L

    2005-12-01

    Traditional chromosome imaging has been limited to grayscale images, but recently a 5-fluorophore combinatorial labeling technique (M-FISH) was developed wherein each class of chromosomes binds with a different combination of fluorophores. This results in a multispectral image, where each class of chromosomes has distinct spectral components. In this paper, we develop new methods for automatic chromosome identification by exploiting the multispectral information in M-FISH chromosome images and by jointly performing chromosome segmentation and classification. We (1) develop a maximum-likelihood hypothesis test that uses multispectral information, together with conventional criteria, to select the best segmentation possibility; (2) use this likelihood function to combine chromosome segmentation and classification into a robust chromosome identification system; and (3) show that the proposed likelihood function can also be used as a reliable indicator of errors in segmentation, errors in classification, and chromosome anomalies, which can be indicators of radiation damage, cancer, and a wide variety of inherited diseases. We show that the proposed multispectral joint segmentation-classification method outperforms past grayscale segmentation methods when decomposing touching chromosomes. We also show that it outperforms past M-FISH classification techniques that do not use segmentation information.

  6. Heuristic Bayesian segmentation for discovery of coexpressed genes within genomic regions.

    PubMed

    Pehkonen, Petri; Wong, Garry; Törönen, Petri

    2010-01-01

    Segmentation aims to separate homogeneous areas from sequential data and plays a central role in data mining. It has applications ranging from finance to molecular biology, where bioinformatics tasks such as genome data analysis are active application fields. In this paper, we present a novel application of segmentation to locating genomic regions with coexpressed genes. We aim at automated discovery of such regions without requiring user-given parameters. In order to perform the segmentation within a reasonable time, we use heuristics. Most heuristic segmentation algorithms require some decision on the number of segments. This is usually accomplished using asymptotic model selection methods like the Bayesian information criterion. Such methods are based on simplifications, which can limit their usage. In this paper, we propose a Bayesian model selection to choose the most appropriate result from heuristic segmentation. Our Bayesian model presents a simple prior for segmentation solutions with various segment numbers and a modified Dirichlet prior for modeling multinomial data. We show with various artificial data sets in our benchmark system that our model selection criterion has the best overall performance. The application of our method to yeast cell-cycle gene expression data reveals potential active and passive regions of the genome.
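The kind of asymptotic criterion the authors improve upon can be sketched with a toy BIC score for a piecewise-constant Gaussian segmentation: the segment count with the lowest score wins. This is a simplified stand-in, not the paper's Dirichlet-based Bayesian criterion.

```python
import math

def gaussian_log_lik(xs):
    """MLE Gaussian log-likelihood of one segment."""
    n = len(xs)
    mu = sum(xs) / n
    var = max(sum((x - mu) ** 2 for x in xs) / n, 1e-9)  # guard zero variance
    return -0.5 * n * (math.log(2 * math.pi * var) + 1.0)

def bic(data, boundaries):
    """BIC of a segmentation; boundaries are indices where segments start."""
    cuts = [0] + list(boundaries) + [len(data)]
    segments = [data[a:b] for a, b in zip(cuts, cuts[1:])]
    log_lik = sum(gaussian_log_lik(seg) for seg in segments)
    k = 2 * len(segments)  # one mean and one variance per segment
    return k * math.log(len(data)) - 2.0 * log_lik

data = [0.0, 0.2] * 10 + [5.0, 5.2] * 10   # clear change point at index 20
print(bic(data, [20]) < bic(data, []))      # → True: two segments win
```

The paper's point is that criteria like this rest on asymptotic simplifications; their fully Bayesian score plays the same role of ranking candidate segment counts.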

  7. Segmental Isotopic Labeling of Proteins for Nuclear Magnetic Resonance

    PubMed Central

    Dongsheng, Liu; Xu, Rong; Cowburn, David

    2009-01-01

    Nuclear Magnetic Resonance (NMR) spectroscopy has emerged as one of the principal techniques of structural biology. It is not only a powerful method for elucidating 3D structures under near-physiological conditions, but also a convenient method for studying protein-ligand interactions and protein dynamics. A major drawback of macromolecular NMR is its size limitation, caused by slower tumbling rates and greater spectral complexity as size increases. Segmental isotopic labeling allows specific segment(s) within a protein to be selectively examined by NMR, thus significantly reducing the spectral complexity for large proteins and allowing a variety of solution-based NMR strategies to be applied. Two related approaches are generally used in the segmental isotopic labeling of proteins: expressed protein ligation and protein trans-splicing. Here we describe the methodology and recent application of expressed protein ligation and protein trans-splicing for NMR structural studies of proteins and protein complexes. We also describe the protocol used in our lab for the segmental isotopic labeling of a 50 kDa protein Csk (C-terminal Src Kinase) using expressed protein ligation methods. PMID:19632474

  8. Blood Pool Segmentation Results in Superior Virtual Cardiac Models than Myocardial Segmentation for 3D Printing.

    PubMed

    Farooqi, Kanwal M; Lengua, Carlos Gonzalez; Weinberg, Alan D; Nielsen, James C; Sanz, Javier

    2016-08-01

    The method of cardiac magnetic resonance (CMR) three-dimensional (3D) image acquisition and post-processing which should be used to create optimal virtual models for 3D printing has not been studied systematically. Patients (n = 19) who had undergone CMR including both 3D balanced steady-state free precession (bSSFP) imaging and contrast-enhanced magnetic resonance angiography (MRA) were retrospectively identified. Post-processing for the creation of virtual 3D models involved using both myocardial (MS) and blood pool (BP) segmentation, resulting in four groups: Group 1-bSSFP/MS, Group 2-bSSFP/BP, Group 3-MRA/MS and Group 4-MRA/BP. The models created were assessed by two raters for overall quality (1-poor; 2-good; 3-excellent) and ability to identify predefined vessels (1-5: superior vena cava, inferior vena cava, main pulmonary artery, ascending aorta and at least one pulmonary vein). A total of 76 virtual models were created from 19 patient CMR datasets. The mean overall quality scores for Raters 1/2 were 1.63 ± 0.50/1.26 ± 0.45 for Group 1, 2.12 ± 0.50/2.26 ± 0.73 for Group 2, 1.74 ± 0.56/1.53 ± 0.61 for Group 3 and 2.26 ± 0.65/2.68 ± 0.48 for Group 4. The numbers of identified vessels for Raters 1/2 were 4.11 ± 1.32/4.05 ± 1.31 for Group 1, 4.90 ± 0.46/4.95 ± 0.23 for Group 2, 4.32 ± 1.00/4.47 ± 0.84 for Group 3 and 4.74 ± 0.56/4.63 ± 0.49 for Group 4. Models created using BP segmentation (Groups 2 and 4) received significantly higher ratings than those created using MS for both overall quality and number of vessels visualized (p < 0.05), regardless of the acquisition technique. There were no significant differences between Groups 1 and 3. The ratings for Raters 1 and 2 had good correlation for overall quality (ICC = 0.63) and excellent correlation for the total number of vessels visualized (ICC = 0.77). The intra-rater reliability was good for Rater A (ICC = 0.65). Three models were successfully printed

  9. Spatial and temporal variation of water quality of a segment of Marikina River using multivariate statistical methods.

    PubMed

    Chounlamany, Vanseng; Tanchuling, Maria Antonia; Inoue, Takanobu

    2017-09-01

    Payatas landfill in Quezon City, Philippines, releases leachate to the Marikina River through a creek. Multivariate statistical techniques were applied to study temporal and spatial variations in water quality of a segment of the Marikina River. The data set included 12 physico-chemical parameters for five monitoring stations over a year. Cluster analysis grouped the monitoring stations into four clusters and identified January-May as dry season and June-September as wet season. Principal components analysis showed that three latent factors are responsible for the data set, explaining 83% of its total variance. The chemical oxygen demand, biochemical oxygen demand, total dissolved solids, Cl⁻ and PO₄³⁻ are influenced by anthropogenic impact/eutrophication pollution from point sources. Total suspended solids, turbidity and SO₄²⁻ are influenced by rain and soil erosion. The highest state of pollution is at the Payatas creek outfall from March to May, whereas at downstream stations it is in May. The current study indicates that river monitoring requires only four stations, nine water quality parameters and testing over three specific months of the year. The findings of this study imply that Payatas landfill requires a proper leachate collection and treatment system to reduce its impact on the Marikina River.

  10. Integrative image segmentation optimization and machine learning approach for high quality land-use and land-cover mapping using multisource remote sensing data

    NASA Astrophysics Data System (ADS)

    Gibril, Mohamed Barakat A.; Idrees, Mohammed Oludare; Yao, Kouame; Shafri, Helmi Zulhaidi Mohd

    2018-01-01

    The growing use of optimization for geographic object-based image analysis and the possibility to derive a wide range of information about the image in textual form make machine learning (data mining) a versatile tool for information extraction from multiple data sources. This paper presents an application of data mining for land-cover classification by fusing SPOT-6, RADARSAT-2, and derived datasets. First, the images and other derived indices (normalized difference vegetation index, normalized difference water index, and soil-adjusted vegetation index) were combined and subjected to a segmentation process, with optimal segmentation parameters obtained using a combination of spatial and Taguchi statistical optimization. The image objects, which carry all the attributes of the input datasets, were extracted and related to the target land-cover classes through a data mining algorithm (decision tree) for classification. To evaluate the performance, the result was compared with two nonparametric classifiers: support vector machine (SVM) and random forest (RF). Furthermore, the decision tree classification result was evaluated against six unoptimized trials segmented using arbitrary parameter combinations. The results show that the optimized process produces better land-use/land-cover classification, with overall classification accuracies of 91.79% for the decision tree, versus 87.25% and 88.69% for SVM and RF, respectively, while the six unoptimized classifications yield overall accuracies between 84.44% and 88.08%. The higher accuracy of the optimized data mining classification approach compared to the unoptimized results indicates that the optimization process has a significant impact on classification quality.

  11. What is a segment?

    PubMed

    Hannibal, Roberta L; Patel, Nipam H

    2013-12-17

    Animals have been described as segmented for more than 2,000 years, yet a precise definition of segmentation remains elusive. Here we give the history of the definition of segmentation, followed by a discussion on current controversies in defining a segment. While there is a general consensus that segmentation involves the repetition of units along the anterior-posterior (a-p) axis, long-running debates exist over whether a segment can be composed of only one tissue layer, whether the most anterior region of the arthropod head is considered segmented, and whether and how the vertebrate head is segmented. Additionally, we discuss whether a segment can be composed of a single cell in a column of cells, or a single row of cells within a grid of cells. We suggest that 'segmentation' be used in its more general sense, the repetition of units with a-p polarity along the a-p axis, to prevent artificial classification of animals. We further suggest that this general definition be combined with an exact description of what is being studied, as well as a clearly stated hypothesis concerning the specific nature of the potential homology of structures. These suggestions should facilitate dialogue among scientists who study vastly differing segmental structures.

  12. A Scalable Framework For Segmenting Magnetic Resonance Images

    PubMed Central

    Hore, Prodip; Goldgof, Dmitry B.; Gu, Yuhua; Maudsley, Andrew A.; Darkazanli, Ammar

    2009-01-01

    A fast, accurate and fully automatic method of segmenting magnetic resonance images of the human brain is introduced. The approach scales well allowing fast segmentations of fine resolution images. The approach is based on modifications of the soft clustering algorithm, fuzzy c-means, that enable it to scale to large data sets. Two types of modifications to create incremental versions of fuzzy c-means are discussed. They are much faster when compared to fuzzy c-means for medium to extremely large data sets because they work on successive subsets of the data. They are comparable in quality to application of fuzzy c-means to all of the data. The clustering algorithms coupled with inhomogeneity correction and smoothing are used to create a framework for automatically segmenting magnetic resonance images of the human brain. The framework is applied to a set of normal human brain volumes acquired from different magnetic resonance scanners using different head coils, acquisition parameters and field strengths. Results are compared to those from two widely used magnetic resonance image segmentation programs, Statistical Parametric Mapping and the FMRIB Software Library (FSL). The results are comparable to FSL while providing significant speed-up and better scalability to larger volumes of data. PMID:20046893
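For intuition, here is a minimal (non-incremental) fuzzy c-means update in one dimension; the incremental variants the abstract describes apply updates like these to successive subsets of the data. The data and initialization are illustrative only.

```python
def fuzzy_c_means(points, centers, m=2.0, iters=30, eps=1e-9):
    """1-D fuzzy c-means: returns refined cluster centers."""
    for _ in range(iters):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
        u = []
        for x in points:
            d = [max(abs(x - c), eps) for c in centers]  # eps avoids /0
            row = [1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                             for j in range(len(centers)))
                   for i in range(len(centers))]
            u.append(row)
        # Center update: mean of the points weighted by u^m.
        centers = [
            sum(u[k][i] ** m * points[k] for k in range(len(points)))
            / sum(u[k][i] ** m for k in range(len(points)))
            for i in range(len(centers))
        ]
    return centers

pts = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
c = fuzzy_c_means(pts, [1.0, 9.0])
print([round(x, 1) for x in c])
```

The scalability problem is visible here: every iteration touches every point, which is why the paper's incremental versions work on successive chunks instead of the full volume.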

  13. Advanced Dispersed Fringe Sensing Algorithm for Coarse Phasing Segmented Mirror Telescopes

    NASA Technical Reports Server (NTRS)

    Spechler, Joshua A.; Hoppe, Daniel J.; Sigrist, Norbert; Shi, Fang; Seo, Byoung-Joon; Bikkannavar, Siddarayappa A.

    2013-01-01

    Segment mirror phasing, a critical step of segment mirror alignment, requires the ability to sense and correct the relative pistons between segments, from up to a few hundred microns down to a fraction of a wavelength, in order to bring the mirror system to its full diffraction capability. The performance of a telescope with a segmented primary mirror strongly depends on how well those primary mirror segments can be phased. One process used to phase primary mirror segments in the axial piston direction is dispersed fringe sensing (DFS), an elegant method of coarse phasing segmented mirrors that is essentially a signal fitting and processing operation. When sampling the aperture of a telescope, using auto-collimating flats (ACFs) is more economical, and DFS technology can be used to co-phase the ACFs. DFS performance accuracy is dependent upon careful calibration of the system as well as other factors such as internal optical alignment, system wavefront errors, and detector quality. Novel improvements to the algorithm have led to substantial enhancements in DFS performance. The Advanced Dispersed Fringe Sensing (ADFS) algorithm is designed to reduce the sensitivity to calibration errors by determining the optimal fringe extraction line. Applying an angular extraction-line dithering procedure, combining this dithering process with an error function, and minimizing the phase term of the fitted signal defines, in essence, the ADFS algorithm.

  14. 49 CFR 214.319 - Working limits, generally.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... control over working limits for the purpose of establishing on-track safety. (b) Only one roadway worker shall have control over working limits on any one segment of track. (c) All affected roadway workers..., DEPARTMENT OF TRANSPORTATION RAILROAD WORKPLACE SAFETY Roadway Worker Protection § 214.319 Working limits...

  15. Fast Segmentation of Stained Nuclei in Terabyte-Scale, Time Resolved 3D Microscopy Image Stacks

    PubMed Central

    Stegmaier, Johannes; Otte, Jens C.; Kobitski, Andrei; Bartschat, Andreas; Garcia, Ariel; Nienhaus, G. Ulrich; Strähle, Uwe; Mikut, Ralf

    2014-01-01

    Automated analysis of multi-dimensional microscopy images has become an integral part of modern research in life science. Most available algorithms that provide sufficient segmentation quality, however, are infeasible for a large amount of data due to their high complexity. In this contribution we present a fast parallelized segmentation method that is especially suited for the extraction of stained nuclei from microscopy images, e.g., of developing zebrafish embryos. The idea is to transform the input image based on gradient and normal directions in the proximity of detected seed points such that it can be handled by straightforward global thresholding like Otsu’s method. We evaluate the quality of the obtained segmentation results on a set of real and simulated benchmark images in 2D and 3D and show the algorithm’s superior performance compared to other state-of-the-art algorithms. We achieve an up to ten-fold decrease in processing times, allowing us to process large data sets while still providing reasonable segmentation results. PMID:24587204
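The global-thresholding step the method relies on (Otsu's method) can be sketched in a few lines: pick the cut point that maximizes between-class variance. The toy intensity data below are invented for illustration.

```python
def otsu_threshold(values):
    """Return the value t that best splits `values` into <= t and > t."""
    candidates = sorted(set(values))[:-1]   # cutting above the max is useless
    best_t, best_score = candidates[0], -1.0
    n = len(values)
    for t in candidates:
        low = [v for v in values if v <= t]
        high = [v for v in values if v > t]
        w0, w1 = len(low) / n, len(high) / n
        mu0 = sum(low) / len(low)
        mu1 = sum(high) / len(high)
        score = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if score > best_score:
            best_t, best_score = t, score
    return best_t

# Bimodal toy "intensities": background near 1-2, stained nuclei near 8-9.
pixels = [1] * 50 + [2] * 50 + [8] * 50 + [9] * 50
print(otsu_threshold(pixels))   # → 2
```

Otsu's method assumes a roughly bimodal histogram, which is exactly what the paper's gradient/normal-direction transform is designed to produce before thresholding.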

  16. Host polymer influence on dilute polystyrene segmental dynamics

    NASA Astrophysics Data System (ADS)

    Lutz, T. R.

    2005-03-01

    We have utilized deuterium NMR to investigate the segmental dynamics of dilute (2%) d3-polystyrene (PS) chains in miscible polymer blends with polybutadiene, poly(vinyl ethylene), polyisoprene, poly(vinyl methylether) and poly(methyl methacrylate). In the dilute limit, we find qualitative differences depending upon whether the host polymer has dynamics that are faster or slower than that of pure PS. In blends where PS is the fast (low Tg) component, segmental dynamics are slowed upon blending and can be fit by the Lodge-McLeish model. When PS is the slow (high Tg) component, PS segmental dynamics speed up upon blending, but cannot be fit by the Lodge-McLeish model unless a temperature dependent self-concentration is employed. These results are qualitatively consistent with a recent suggestion by Kant, Kumar and Colby (Macromolecules, 2003, 10087), based upon data at higher concentrations. Furthermore, as the slow component, we find the segmental dynamics of PS has a temperature dependence similar to that of its host. This suggests viewing the high Tg component dynamics in a miscible blend as similar to a polymer in a low molecular weight solvent.

  17. Adaptive distance metric learning for diffusion tensor image segmentation.

    PubMed

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C N; Chu, Winnie C W

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework.

  18. Adaptive Distance Metric Learning for Diffusion Tensor Image Segmentation

    PubMed Central

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C. N.; Chu, Winnie C. W.

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework. PMID:24651858
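The graph-based semi-supervised component can be illustrated with a minimal label-propagation sketch: labels spread over a similarity graph so unlabeled nodes inherit the class of similar neighbors. The similarity matrix and soft-clamping scheme below are a generic stand-in, not the authors' kernel-metric optimization.

```python
def propagate(weights, labels, alpha=0.9, iters=100):
    """weights: symmetric n*n similarity matrix; labels: 0/1 for labeled
    nodes, None for unlabeled ones. Returns a predicted class per node."""
    n, classes = len(weights), 2
    y = [[0.0] * classes for _ in range(n)]
    for i, lab in enumerate(labels):
        if lab is not None:
            y[i][lab] = 1.0
    f = [row[:] for row in y]
    for _ in range(iters):
        new_f = []
        for i in range(n):
            deg = sum(weights[i]) or 1.0
            row = []
            for c in range(classes):
                # Smooth over neighbors, softly clamped to the given labels.
                smooth = sum(weights[i][j] * f[j][c] for j in range(n)) / deg
                row.append(alpha * smooth + (1.0 - alpha) * y[i][c])
            new_f.append(row)
        f = new_f
    return [max(range(classes), key=lambda c: fi[c]) for fi in f]

# 4 nodes: 0-1 strongly similar, 2-3 strongly similar, weak cross link.
W = [[0, 1.0, 0.01, 0], [1.0, 0, 0, 0], [0.01, 0, 0, 1.0], [0, 0, 1.0, 0]]
print(propagate(W, [0, None, None, 1]))   # → [0, 0, 1, 1]
```

In the paper, the edge weights themselves come from the learned tensor distance metric, so a better metric directly yields a better propagation result.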

  19. Influencing factors for household water quality improvement in reducing diarrhoea in resource-limited areas.

    PubMed

    Zin, Thant; Mudin, Kamarudin D; Myint, Than; Naing, Daw K S; Sein, Tracy; Shamsul, B S

    2013-01-01

    Water and sanitation are major public health issues exacerbated by rapid population growth, limited resources, disasters and environmental depletion. This study was undertaken to examine the influencing factors for household water quality improvement for reducing diarrhoea in resource-limited areas. Data were collected from articles and reviews from relevant randomized controlled trials, new articles, systematic reviews and meta-analyses from PubMed, the World Health Organization (WHO), the United Nations Children's Fund (UNICEF) and the WELL Resource Centre for Water, Sanitation and Environmental Health. Water quality relevant to diarrhoea prevention can be affected by contamination during storage, collection and even at point-of-use. Point-of-use (household-based) water treatment is the most cost-effective method for prevention of diarrhoea. Chemical disinfection, filtration, thermal disinfection, solar disinfection, and flocculation and disinfection are the five most promising household water treatment methodologies for resource-limited areas. Promoting household water treatment is essential for preventing diarrhoeal disease. In addition, the water should be of acceptable taste and appropriate for emergency and non-emergency use.

  20. Random walks based multi-image segmentation: Quasiconvexity results and GPU-based solutions

    PubMed Central

    Collins, Maxwell D.; Xu, Jia; Grady, Leo; Singh, Vikas

    2012-01-01

    We recast the Cosegmentation problem using Random Walker (RW) segmentation as the core segmentation algorithm, rather than the traditional MRF approach adopted in the literature so far. Our formulation is similar to previous approaches in the sense that it also permits Cosegmentation constraints (which impose consistency between the extracted objects from ≥ 2 images) using a nonparametric model. However, several previous nonparametric cosegmentation methods have the serious limitation that they require adding one auxiliary node (or variable) for every pair of pixels that are similar (which effectively limits such methods to describing only those objects that have high entropy appearance models). In contrast, our proposed model completely eliminates this restrictive dependence; the resulting improvements are quite significant. Our model further allows an optimization scheme exploiting quasiconvexity for model-based segmentation with no dependence on the scale of the segmented foreground. Finally, we show that the optimization can be expressed in terms of linear algebra operations on sparse matrices which are easily mapped to GPU architecture. We provide a highly specialized CUDA library for Cosegmentation exploiting this special structure, and report experimental results showing these advantages. PMID:25278742

  1. Complete grain boundaries from incomplete EBSD maps: the influence of segmentation on grain size determinations

    NASA Astrophysics Data System (ADS)

    Heilbronner, Renée; Kilian, Ruediger

    2017-04-01

    Grain size analyses are carried out for a number of reasons; for example, the dynamically recrystallized grain size of quartz is used to assess the flow stresses during deformation. Typically a thin section or polished surface is used. If the expected grain size is large enough (10 µm or larger), the images can be obtained on a light microscope; if the grain size is smaller, the SEM is used. The grain boundaries are traced (a process called segmentation, which can be done manually or via image processing) and the size of the cross-sectional areas (segments) is determined. From the resulting size distributions, 'the grain size' or 'average grain size', usually a mean diameter or similar, is derived. When carrying out such grain size analyses, a number of aspects are critical for the reproducibility of the result: the resolution of the imaging equipment (light microscope or SEM), the type of images that are used for segmentation (cross polarized, partial or full orientation images, CIP versus EBSD), the segmentation procedure (algorithm) itself, the quality of the segmentation, and the mathematical definition and calculation of 'the average grain size'. The quality of the segmentation depends very strongly on the criteria that are used for identifying grain boundaries (for example, angles of misorientation versus shape considerations), on pre- and post-processing (filtering), and on the quality of the recorded images (most notably on the indexing ratio). In this contribution, we consider experimentally deformed Black Hills quartzite with dynamically recrystallized grain sizes in the range of 2 - 15 µm. We compare two basic methods of segmentation of EBSD maps (orientation based versus shape based) and explore how the choice of methods influences the result of the grain size analysis. We also compare different measures for grain size (mean versus mode versus RMS, and 2D versus 3D) in order to determine which of the definitions of 'average grain size' yields the
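The competing 'average grain size' definitions can be made concrete with a small sketch converting segment areas to equivalent circular diameters and comparing mean versus RMS; the function names are ours, not from the study.

```python
import math

def equivalent_diameters(areas):
    """Diameter of the circle with the same cross-sectional area."""
    return [2.0 * math.sqrt(a / math.pi) for a in areas]

def mean_diameter(areas):
    d = equivalent_diameters(areas)
    return sum(d) / len(d)

def rms_diameter(areas):
    """RMS weights large grains more heavily than the arithmetic mean."""
    d = equivalent_diameters(areas)
    return math.sqrt(sum(x * x for x in d) / len(d))

# Two segments with radii 1 and 2 (areas pi and 4*pi).
areas = [math.pi, 4.0 * math.pi]
print(round(mean_diameter(areas), 3))   # → 3.0
print(round(rms_diameter(areas), 3))    # → 3.162
```

Even in this two-grain toy case the two definitions disagree, which is why the choice of measure matters for the reproducibility of 'the grain size'.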

  2. A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology.

    PubMed

    Kumar, Neeraj; Verma, Ruchika; Sharma, Sanuj; Bhargava, Surabhi; Vahadane, Abhishek; Sethi, Amit

    2017-07-01

    Nuclear segmentation in digital microscopic tissue images can enable extraction of high-quality features for nuclear morphometrics and other analysis in computational pathology. Conventional image processing techniques, such as Otsu thresholding and watershed segmentation, do not work effectively on challenging cases, such as chromatin-sparse and crowded nuclei. In contrast, machine learning-based segmentation can generalize across various nuclear appearances. However, training machine learning algorithms requires data sets of images, in which a vast number of nuclei have been annotated. Publicly accessible and annotated data sets, along with widely agreed upon metrics to compare techniques, have catalyzed tremendous innovation and progress on other image classification problems, particularly in object recognition. Inspired by their success, we introduce a large publicly accessible data set of hematoxylin and eosin (H&E)-stained tissue images with more than 21000 painstakingly annotated nuclear boundaries, whose quality was validated by a medical doctor. Because our data set is taken from multiple hospitals and includes a diversity of nuclear appearances from several patients, disease states, and organs, techniques trained on it are likely to generalize well and work right out-of-the-box on other H&E-stained images. We also propose a new metric to evaluate nuclear segmentation results that penalizes object- and pixel-level errors in a unified manner, unlike previous metrics that penalize only one type of error. We also propose a segmentation technique based on deep learning that lays a special emphasis on identifying the nuclear boundaries, including those between the touching or overlapping nuclei, and works well on a diverse set of test images.

  3. Automatic multi-organ segmentation using learning-based segmentation and level set optimization.

    PubMed

    Kohlberger, Timo; Sofka, Michal; Zhang, Jingdan; Birkbeck, Neil; Wetzl, Jens; Kaftan, Jens; Declerck, Jérôme; Zhou, S Kevin

    2011-01-01

    We present a novel generic segmentation system for fully automatic multi-organ segmentation from CT medical images. It combines the advantages of learning-based approaches on a point cloud-based shape representation, such as speed, robustness, and point correspondences, with those of PDE-optimization-based level set approaches, such as high accuracy and straightforward prevention of segment overlaps. In a benchmark on 10-100 annotated datasets for the liver, the lungs, and the kidneys, we show that the proposed system yields segmentation accuracies of 1.17-2.89 mm average surface error. The level set segmentation (which is initialized by the learning-based segmentations) contributes a 20%-40% increase in accuracy.

  4. Limits on quality of life in communication after total laryngectomy

    PubMed Central

    Chaves, Adriana Di Donato; Pernambuco, Leandro de Araújo; Balata, Patrícia Maria Mendes; Santos, Veridiana da Silva; de Lima, Leilane Maria; de Souza, Síntia Ribeiro; da Silva, Hilton Justino

    2012-01-01

    Summary Introduction: Among people affected by cancer, impairment of quality of life can have devastating effects. The self-image of post-laryngectomy patients may be compromised, affecting quality of life in this population. Objective: To characterize quality of life related to communication in people who have undergone total laryngectomy. Methods: This is an observational, cross-sectional, descriptive study. The sample comprised 15 patients interviewed from January to February 2011. We used the Quality of Life in Communication Protocol for Post-laryngectomy patients, adapted from Bertocello (2004); the questionnaire contains 55 questions. The protocol was organized by the nature of the responses, classified as positive or negative, with respect to five communication domains: family relationships, social relationships, personal analysis, morphofunctional aspect, and use of writing. To promote and guarantee the autonomy of the respondents, the examiners used assistive technology with the Visual Response Scale. Results: Responses indicating that total laryngectomy compromises quality of life in communication amounted to 463 occurrences (65.7%), and responses suggesting good quality of life amounted to 242 occurrences (34.3%), out of a total of 705 responses. Among the five communication domains, four had percentages above 63% for occurrences of negative content concerning impact on communication. The morphofunctional aspect had the highest percentage of negative content, at 77.3% of occurrences. Conclusions: The results showed important limitations of a personal and social nature due to poor communication with their peers. Thus, there is a need for multidisciplinary interventions that aim to minimize the

  5. Limits on quality of life in communication after total laryngectomy.

    PubMed

    Chaves, Adriana Di Donato; Pernambuco, Leandro de Araújo; Balata, Patrícia Maria Mendes; Santos, Veridiana da Silva; de Lima, Leilane Maria; de Souza, Síntia Ribeiro; da Silva, Hilton Justino

    2012-10-01

     Among people affected by cancer, impairment of quality of life can have devastating effects. The self-image of post-laryngectomy patients may be compromised, affecting quality of life in this population.  To characterize quality of life related to communication in people who have undergone total laryngectomy.  This is an observational, cross-sectional, descriptive study. The sample comprised 15 patients interviewed from January to February 2011. We used the Quality of Life in Communication Protocol for Post-laryngectomy patients, adapted from Bertocello (2004); the questionnaire contains 55 questions. The protocol was organized by the nature of the responses, classified as positive or negative, with respect to five communication domains: family relationships, social relationships, personal analysis, morphofunctional aspect, and use of writing. To promote and guarantee the autonomy of the respondents, the examiners used assistive technology with the Visual Response Scale.  Responses indicating that total laryngectomy compromises quality of life in communication amounted to 463 occurrences (65.7%), and responses suggesting good quality of life amounted to 242 occurrences (34.3%), out of a total of 705 responses. Among the five communication domains, four had percentages above 63% for occurrences of negative content concerning impact on communication. The morphofunctional aspect had the highest percentage of negative content, at 77.3% of occurrences.  The results showed important limitations of a personal and social nature due to poor communication with their peers. Thus, there is a need for multidisciplinary interventions that aim to minimize the negative impact on these people's communication

  6. James Webb Space Telescope optical simulation testbed IV: linear control alignment of the primary segmented mirror

    NASA Astrophysics Data System (ADS)

    Egron, Sylvain; Soummer, Rémi; Lajoie, Charles-Philippe; Bonnefois, Aurélie; Long, Joseph; Michau, Vincent; Choquet, Elodie; Ferrari, Marc; Leboulleux, Lucie; Levecq, Olivier; Mazoyer, Johan; N'Diaye, Mamadou; Perrin, Marshall; Petrone, Peter; Pueyo, Laurent; Sivaramakrishnan, Anand

    2017-09-01

    The James Webb Space Telescope (JWST) Optical Simulation Testbed (JOST) is a tabletop experiment designed to study wavefront sensing and control for a segmented space telescope, such as JWST. With the JWST Science and Operations Center co-located at STScI, JOST was developed to provide both a platform for staff training and a way to test alternate wavefront sensing and control strategies for independent validation or future improvements beyond the baseline operations. The design of JOST reproduces the physics of JWST's three-mirror anastigmat (TMA) using three custom aspheric lenses. It provides image quality similar to JWST's (80% Strehl ratio) over a field equivalent to a NIRCam module, but at 633 nm. An Iris AO segmented mirror stands in for the segmented primary mirror of JWST. Actuators allow us to control (1) the 18 segments of the segmented mirror in piston, tip, and tilt and (2) the second lens, which stands in for the secondary mirror, in tip, tilt, and x, y, z positions. We present the most recent experimental results for the segmented mirror alignment. Our implementation of the wavefront sensing (WFS) algorithms using phase diversity is tested in simulation and experimentally. The wavefront control (WFC) algorithms, which rely on a linear model for optical aberrations induced by misalignment of the secondary lens and the segmented mirror, are tested and validated both in simulation and experimentally. In this proceeding, we present the performance of the full active optics control loop in the presence of perturbations on the segmented mirror, and we detail the quality of the alignment correction.

  7. A hybrid approach of using symmetry technique for brain tumor segmentation.

    PubMed

    Saddique, Mubbashar; Kazmi, Jawad Haider; Qureshi, Kalim

    2014-01-01

    Tumor and related abnormalities are a major cause of disability and death worldwide. Magnetic resonance imaging (MRI) is a superior modality due to its noninvasiveness and high-quality images of both soft tissues and bones. In this paper we present two hybrid segmentation techniques, and their results are compared with well-recognized techniques in this area. The first technique is based on symmetry, and we call it the hybrid algorithm using symmetry and active contour (HASA). In HASA, we take the reflection image, calculate the difference image, and then apply the active contour on the difference image to segment the tumor. To avoid unimportant segmented regions, we improve the results by proposing an enhancement in the form of the second technique, EHASA. In EHASA, we also take the reflection of the original image, calculate the difference image, and then convert this image into a binary image. This binary image is mapped onto the original image, followed by the application of active contouring to segment the tumor region.
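
    The reflection-and-difference step that both HASA and EHASA share can be illustrated compactly. A minimal pure-Python sketch, under the assumption of a grayscale image stored as a list of rows with a vertical symmetry axis at the midline (the function names and the toy 4x4 "slice" are hypothetical, not the paper's code):

```python
def reflect_horizontal(img):
    """Mirror each row about the vertical midline."""
    return [row[::-1] for row in img]

def difference_image(img):
    """Absolute difference between the image and its reflection.

    Symmetric structures cancel out; asymmetric regions (e.g. a
    tumor present in only one hemisphere) remain bright.
    """
    mirrored = reflect_horizontal(img)
    return [[abs(a - b) for a, b in zip(r1, r2)]
            for r1, r2 in zip(img, mirrored)]

# Toy 4x4 "slice": symmetric background with one bright blob.
img = [
    [10, 10, 10, 10],
    [10, 90, 10, 10],   # blob at column 1 has no mirror counterpart
    [10, 10, 10, 10],
    [10, 10, 10, 10],
]
diff = difference_image(img)
```

    Active contouring (HASA) or binarization (EHASA) would then operate on `diff`, where only the asymmetric region remains bright.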

  8. 3D exemplar-based random walks for tooth segmentation from cone-beam computed tomography images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pei, Yuru, E-mail: peiyuru@cis.pku.edu.cn; Ai, Xin

    Purpose: Tooth segmentation is an essential step in acquiring patient-specific dental geometries from cone-beam computed tomography (CBCT) images. Tooth segmentation from CBCT images is still a challenging task considering the comparatively low image quality caused by the limited radiation dose, as well as structural ambiguities from intercuspation and nearby alveolar bones. The goal of this paper is to present and discuss the latest accomplishments in semisupervised tooth segmentation with adaptive 3D shape constraints. Methods: The authors propose a 3D exemplar-based random walk method of tooth segmentation from CBCT images. The proposed method integrates semisupervised label propagation and regularization by 3D exemplar registration. To begin with, the pure random walk method is used to get an initial segmentation of the teeth, which tends to be erroneous because of the structural ambiguity of CBCT images. Then, as an iterative refinement, the authors conduct a regularization using 3D exemplar registration, as well as label propagation by random walks with soft constraints, to improve the tooth segmentation. In the first stage of the iteration, 3D exemplars with well-defined topologies are adapted to fit the tooth contours, which are obtained from the random-walk-based segmentation. The soft constraints on voxel labeling are defined by the shape-based foreground dentine probability acquired by the exemplar registration, as well as the appearance-based probability from a support vector machine (SVM) classifier. In the second stage, the labels of the volume-of-interest (VOI) are updated by the random walks with soft constraints. The two stages are optimized iteratively. Instead of one-shot label propagation in the VOI, the iterative refinement process achieves a reliable tooth segmentation by virtue of exemplar-based random walks with adaptive soft constraints. Results: The proposed method was applied for tooth segmentation of twenty clinically

  9. TED: A Tolerant Edit Distance for segmentation evaluation.

    PubMed

    Funke, Jan; Klein, Jonas; Moreno-Noguer, Francesc; Cardona, Albert; Cook, Matthew

    2017-02-15

    In this paper, we present a novel error measure to compare a computer-generated segmentation of images or volumes against ground truth. This measure, which we call the Tolerant Edit Distance (TED), is motivated by two observations that we usually encounter in biomedical image processing: (1) Some errors, like small boundary shifts, are tolerable in practice. Which errors are tolerable is application dependent and should be explicitly expressible in the measure. (2) Non-tolerable errors have to be corrected manually. The effort needed to do so should be reflected by the error measure. Our measure is the minimal weighted sum of split and merge operations to apply to one segmentation such that it resembles another segmentation within specified tolerance bounds. This is in contrast to other commonly used measures like the Rand index or variation of information, which integrate small but tolerable differences. Additionally, the TED provides intuitive numbers and allows the localization and classification of errors in images or volumes. We demonstrate the applicability of the TED on 3D segmentations of neurons in electron microscopy images, where topological correctness is arguably more important than exact boundary locations. Furthermore, we show that the TED is not just limited to evaluation tasks. We use it as the loss function in a max-margin learning framework to find parameters of an automatic neuron segmentation algorithm. We show that training to minimize the TED, i.e., to minimize crucial errors, leads to higher segmentation accuracy compared to other learning methods. Copyright © 2016. Published by Elsevier Inc.
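
    The split/merge idea at the heart of the TED can be sketched without the tolerance machinery. The following is a simplified, unweighted illustration (all names and the toy labelings are hypothetical; the actual TED additionally minimises over tolerable boundary shifts and weights the operations):

```python
from collections import defaultdict

def split_merge_counts(truth, pred):
    """Count split and merge errors between two flat labelings.

    A ground-truth segment overlapped by k predicted segments
    contributes k-1 splits; a predicted segment overlapping k
    ground-truth segments contributes k-1 merges.
    """
    overlap_t = defaultdict(set)   # truth label -> predicted labels it touches
    overlap_p = defaultdict(set)   # predicted label -> truth labels it touches
    for t, p in zip(truth, pred):
        overlap_t[t].add(p)
        overlap_p[p].add(t)
    splits = sum(len(s) - 1 for s in overlap_t.values())
    merges = sum(len(s) - 1 for s in overlap_p.values())
    return splits, merges

truth = [1, 1, 1, 2, 2, 2]
pred  = [1, 1, 3, 3, 3, 3]   # truth segment 1 is split; segments 1 and 2 are merged
errors = split_merge_counts(truth, pred)
```

    Unlike the Rand index, the result is directly interpretable as a number of corrective operations an annotator would have to perform.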

  10. Colony image acquisition and genetic segmentation algorithm and colony analyses

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2012-01-01

    Colony analysis is used in many fields such as food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, and sterility testing. In order to reduce labor and increase analysis accuracy, many researchers and developers have worked on image analysis systems. The main problems in these systems are image acquisition, image segmentation, and image analysis. In this paper, to acquire colony images with good quality, an illumination box was constructed. In the box, the distances between lights and dish, camera lens and lights, and camera lens and dish are adjusted optimally. Image segmentation is based on a genetic approach that allows one to treat the segmentation problem as a global optimization. After image pre-processing and image segmentation, the colony analyses are performed. The colony image analysis consists of (1) basic colony parameter measurements; (2) colony size analysis; (3) colony shape analysis; and (4) colony surface measurements. All the above visual colony parameters can be selected and combined together to make new engineering parameters. The colony analysis can be applied in different applications.

  11. Segmented trapped vortex cavity

    NASA Technical Reports Server (NTRS)

    Grammel, Jr., Leonard Paul (Inventor); Pennekamp, David Lance (Inventor); Winslow, Jr., Ralph Henry (Inventor)

    2010-01-01

    An annular trapped vortex cavity assembly segment includes a cavity forward wall, a cavity aft wall, and a cavity radially outer wall therebetween defining a cavity segment therein. A cavity opening extends between the forward and aft walls at a radially inner end of the assembly segment. Radially spaced apart pluralities of air injection first and second holes extend through the forward and aft walls respectively. The segment may include first and second expansion joint features at distal first and second ends respectively of the segment. The segment may include a forward subcomponent including the cavity forward wall attached to an aft subcomponent including the cavity aft wall. The forward and aft subcomponents include forward and aft portions of the cavity radially outer wall respectively. A ring of the segments may be circumferentially disposed about an axis to form an annular segmented vortex cavity assembly.

  12. Using multimodal information for the segmentation of fluorescent micrographs with application to virology and microbiology.

    PubMed

    Held, Christian; Wenzel, Jens; Webel, Rike; Marschall, Manfred; Lang, Roland; Palmisano, Ralf; Wittenberg, Thomas

    2011-01-01

    In order to improve the reproducibility and objectivity of fluorescence-microscopy-based experiments and to enable the evaluation of large datasets, flexible segmentation methods are required which are able to adapt to different stainings and cell types. This adaptation is usually achieved by manual adjustment of the segmentation method's parameters, which is time consuming and challenging for biologists with no knowledge of image processing. To avoid this, parameters of the presented methods automatically adapt to user-generated ground truth to determine the best method and the optimal parameter setup. These settings can then be used for segmentation of the remaining images. As robust segmentation methods form the core of such a system, the currently used watershed-transform-based segmentation routine is replaced by a fast-marching level-set-based segmentation routine which incorporates knowledge of the cell nuclei. Our evaluations reveal that incorporation of multimodal information improves segmentation quality for the presented fluorescent datasets.
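
    The parameter-adaptation idea above - searching for the setup that best reproduces user-generated ground truth, then reusing it on the remaining images - can be sketched with a toy thresholding "segmenter". Everything here (function names, the Dice criterion, the candidate grid, the toy data) is an illustrative assumption, not the paper's actual routine:

```python
def dice(a, b):
    """Dice overlap between two binary masks given as flat 0/1 lists."""
    inter = sum(x & y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

def threshold_segment(img, t):
    """Trivial stand-in for a segmentation routine: global threshold."""
    return [1 if v > t else 0 for v in img]

def tune(img, ground_truth, candidates):
    """Pick the parameter whose segmentation best matches the
    user-annotated ground truth; reuse it on remaining images."""
    return max(candidates,
               key=lambda t: dice(threshold_segment(img, t), ground_truth))

img = [12, 40, 200, 210, 30, 220]   # one annotated training image
gt  = [0, 0, 1, 1, 0, 1]            # user-generated ground truth
best = tune(img, gt, candidates=[50, 100, 150])
```

    In a real system the candidate set would range over the parameters of several segmentation routines, and the winning method/parameter pair would be applied to the unannotated images.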

  13. Recommendations for the Use of Automated Gray Matter Segmentation Tools: Evidence from Huntington’s Disease

    PubMed Central

    Johnson, Eileanoir B.; Gregory, Sarah; Johnson, Hans J.; Durr, Alexandra; Leavitt, Blair R.; Roos, Raymund A.; Rees, Geraint; Tabrizi, Sarah J.; Scahill, Rachael I.

    2017-01-01

    The selection of an appropriate segmentation tool is a challenge facing any researcher aiming to measure gray matter (GM) volume. Many tools have been compared, yet there is currently no method that can be recommended above all others; in particular, there is a lack of validation in disease cohorts. This work utilizes a clinical dataset to conduct an extensive comparison of segmentation tools. Our results confirm that all tools have advantages and disadvantages, and we present a series of considerations that may be of use when selecting a GM segmentation method, rather than a ranking of these tools. Seven segmentation tools were compared using 3 T MRI data from 20 controls, 40 premanifest Huntington’s disease (HD), and 40 early HD participants. Segmented volumes underwent detailed visual quality control. Reliability and repeatability of total, cortical, and lobular GM were investigated in repeated baseline scans. The relationship between each tool was also examined. Longitudinal within-group change over 3 years was assessed via generalized least squares regression to determine sensitivity of each tool to disease effects. Visual quality control and raw volumes highlighted large variability between tools, especially in occipital and temporal regions. Most tools showed reliable performance and the volumes were generally correlated. Results for longitudinal within-group change varied between tools, especially within lobular regions. These differences highlight the need for careful selection of segmentation methods in clinical neuroimaging studies. This guide acts as a primer aimed at the novice or non-technical imaging scientist providing recommendations for the selection of cohort-appropriate GM segmentation software. PMID:29066997

  14. Fast ITTBC using pattern code on subband segmentation

    NASA Astrophysics Data System (ADS)

    Koh, Sung S.; Kim, Hanchil; Lee, Kooyoung; Kim, Hongbin; Jeong, Hun; Cho, Gangseok; Kim, Chunghwa

    2000-06-01

    Iterated Transformation Theory-Based Coding suffers from very high computational complexity in encoding phase. This is due to its exhaustive search. In this paper, our proposed image coding algorithm preprocess an original image to subband segmentation image by wavelet transform before image coding to reduce encoding complexity. A similar block is searched by using the 24 block pattern codes which are coded by the edge information in the image block on the domain pool of the subband segmentation. As a result, numerical data shows that the encoding time of the proposed coding method can be reduced to 98.82% of that of Joaquin's method, while the loss in quality relative to the Jacquin's is about 0.28 dB in PSNR, which is visually negligible.

  15. Tensor scale-based fuzzy connectedness image segmentation

    NASA Astrophysics Data System (ADS)

    Saha, Punam K.; Udupa, Jayaram K.

    2003-05-01

    Tangible solutions to image segmentation are vital in many medical imaging applications. Toward this goal, a framework based on fuzzy connectedness was developed in our laboratory. A fundamental notion called "affinity" - a local fuzzy hanging-togetherness relation on voxels - determines the effectiveness of this segmentation framework in real applications. In this paper, we introduce the notion of "tensor scale" - a recently developed local morphometric parameter - into the affinity definition and study its effectiveness. Although our previous notion of "local scale" using the spherical model successfully incorporated local structure size into affinity and resulted in measurable improvements in segmentation results, a major limitation of the previous approach was that it ignored local structural orientation and anisotropy. The current approach of using tensor scale in affinity computation allows an effective utilization of local size, orientation, and anisotropy in a unified manner. Tensor scale is used for computing both the homogeneity- and object-feature-based components of affinity. Preliminary results of the proposed method on several medical images and computer-generated phantoms of realistic shapes are presented. Further extensions of this work are discussed.

  16. A Novel Segmentation Approach Combining Region- and Edge-Based Information for Ultrasound Images

    PubMed Central

    Luo, Yaozhong; Liu, Longzhong; Li, Xuelong

    2017-01-01

    Ultrasound imaging has become one of the most popular medical imaging modalities with numerous diagnostic applications. However, ultrasound (US) image segmentation, which is the essential process for further analysis, is a challenging task due to the poor image quality. In this paper, we propose a new segmentation scheme that combines both region- and edge-based information in the robust graph-based (RGB) segmentation method. The only interaction required is to select two diagonal points to determine a region of interest (ROI) on the original image. The ROI image is smoothed by a bilateral filter and then contrast-enhanced by histogram equalization. Then, the enhanced image is filtered by pyramid mean shift to improve homogeneity. Optimized by the particle swarm optimization (PSO) algorithm, the RGB segmentation method is performed to segment the filtered image. The segmentation results of our method have been compared with the corresponding results obtained by three existing approaches, and four metrics have been used to measure the segmentation performance. The experimental results show that the method achieves the best overall performance, with the lowest ARE (10.77%), the second highest TPVF (85.34%), and the second lowest FPVF (4.48%). PMID:28536703
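
    Of the preprocessing steps listed above, histogram equalization is the easiest to show concretely. A minimal pure-Python sketch for a flat grayscale image (in practice an image library would be used; the constant-image edge case is ignored, and the toy data is hypothetical):

```python
def equalize(img, levels=256):
    """CDF-based histogram equalization of a flat grayscale image,
    spreading clustered intensities over the full dynamic range."""
    hist = [0] * levels
    for v in img:
        hist[v] += 1
    cdf, total = [], 0
    for h in hist:                 # cumulative distribution of intensities
        total += h
        cdf.append(total)
    n = len(img)
    cdf_min = next(c for c in cdf if c > 0)
    # Remap each pixel so equal-population intensity bins become equispaced.
    return [round((cdf[v] - cdf_min) / (n - cdf_min) * (levels - 1))
            for v in img]

low_contrast = [100, 100, 101, 101, 102, 102]  # values crowded into 3 levels
enhanced = equalize(low_contrast)
```

    The three crowded gray levels are stretched across the full 0-255 range, which is what makes subsequent region/edge extraction on low-contrast US images more reliable.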

  17. What is a segment?

    PubMed Central

    2013-01-01

    Animals have been described as segmented for more than 2,000 years, yet a precise definition of segmentation remains elusive. Here we give the history of the definition of segmentation, followed by a discussion on current controversies in defining a segment. While there is a general consensus that segmentation involves the repetition of units along the anterior-posterior (a-p) axis, long-running debates exist over whether a segment can be composed of only one tissue layer, whether the most anterior region of the arthropod head is considered segmented, and whether and how the vertebrate head is segmented. Additionally, we discuss whether a segment can be composed of a single cell in a column of cells, or a single row of cells within a grid of cells. We suggest that ‘segmentation’ be used in its more general sense, the repetition of units with a-p polarity along the a-p axis, to prevent artificial classification of animals. We further suggest that this general definition be combined with an exact description of what is being studied, as well as a clearly stated hypothesis concerning the specific nature of the potential homology of structures. These suggestions should facilitate dialogue among scientists who study vastly differing segmental structures. PMID:24345042

  18. Shortest-path constraints for 3D multiobject semiautomatic segmentation via clustering and Graph Cut.

    PubMed

    Kéchichian, Razmig; Valette, Sébastien; Desvignes, Michel; Prost, Rémy

    2013-11-01

    We derive shortest-path constraints from graph models of structure adjacency relations and introduce them into a joint centroidal Voronoi image clustering and Graph Cut multiobject semiautomatic segmentation framework. The vicinity prior model thus defined is a piecewise-constant model incurring multiple levels of penalization, capturing the spatial configuration of structures in multiobject segmentation. Qualitative and quantitative analyses and comparison with a Potts prior-based approach and our previous contribution on synthetic, simulated, and real medical images show that the vicinity prior allows for the correct segmentation of distinct structures having identical intensity profiles and improves the precision of segmentation boundary placement while being fairly robust to clustering resolution. The clustering approach we take to simplify images prior to segmentation strikes a good balance between boundary adaptivity and cluster compactness criteria, while furthermore allowing control of the trade-off. Compared with a direct application of segmentation on voxels, the clustering step improves the overall runtime and memory footprint of the segmentation process by up to an order of magnitude without compromising the quality of the result.

  19. In Vivo Imaging of Human Cone Photoreceptor Inner Segments

    PubMed Central

    Scoles, Drew; Sulai, Yusufu N.; Langlo, Christopher S.; Fishman, Gerald A.; Curcio, Christine A.; Carroll, Joseph; Dubra, Alfredo

    2014-01-01

    Purpose. An often overlooked prerequisite to cone photoreceptor gene therapy development is residual photoreceptor structure that can be rescued. While advances in adaptive optics (AO) retinal imaging have recently enabled direct visualization of individual cone and rod photoreceptors in the living human retina, these techniques largely detect strongly directionally-backscattered (waveguided) light from normal intact photoreceptors. This represents a major limitation in using existing AO imaging to quantify structure of remnant cones in degenerating retina. Methods. Photoreceptor inner segment structure was assessed with a novel AO scanning light ophthalmoscopy (AOSLO) differential phase technique, that we termed nonconfocal split-detector, in two healthy subjects and four subjects with achromatopsia. Ex vivo preparations of five healthy donor eyes were analyzed for comparison of inner segment diameter to that measured in vivo with split-detector AOSLO. Results. Nonconfocal split-detector AOSLO reveals the photoreceptor inner segment with or without the presence of a waveguiding outer segment. The diameter of inner segments measured in vivo is in good agreement with histology. A substantial number of foveal and parafoveal cone photoreceptors with apparently intact inner segments were identified in patients with the inherited disease achromatopsia. Conclusions. The application of nonconfocal split-detector to emerging human gene therapy trials will improve the potential of therapeutic success, by identifying patients with sufficient retained photoreceptor structure to benefit the most from intervention. Additionally, split-detector imaging may be useful for studies of other retinal degenerations such as AMD, retinitis pigmentosa, and choroideremia where the outer segment is lost before the remainder of the photoreceptor cell. PMID:24906859

  20. [Review on HSPF model for simulation of hydrology and water quality processes].

    PubMed

    Li, Zhao-fu; Liu, Hong-Yu; Li, Yan

    2012-07-01

    Hydrological Simulation Program-FORTRAN (HSPF), written in FORTRAN, is one of the best semi-distributed hydrology and water quality models, and was first developed based on the Stanford Watershed Model. Many studies on HSPF model application have been conducted. It can represent the contributions of sediment, nutrients, pesticides, conservatives, and fecal coliforms from agricultural areas; continuously simulate water quantity and quality processes; and capture the effects of climate change and land use change on water quantity and quality. HSPF consists of three basic application components: PERLND (Pervious Land Segment), IMPLND (Impervious Land Segment), and RCHRES (free-flowing reach or mixed reservoirs). In general, HSPF has extensive application in the modeling of hydrology or water quality processes and the analysis of climate change and land use change. However, it has seen limited use in China. The main problems with HSPF include: (1) some algorithms and procedures still need revision; (2) due to the high standard for input data, the accuracy of the model is limited by spatial and attribute data; (3) the model is only applicable to the simulation of well-mixed rivers, reservoirs, and one-dimensional water bodies, so it must be integrated with other models to solve more complex problems. At present, studies on HSPF model development are still ongoing, such as revision of the model platform, extension of model functions, method development for model calibration, and analysis of parameter sensitivity. With the accumulation of basic data and improvement of data sharing, the HSPF model will be applied more extensively in China.

  1. Attribute importance segmentation of Norwegian seafood consumers: The inclusion of salient packaging attributes.

    PubMed

    Olsen, Svein Ottar; Tuu, Ho Huu; Grunert, Klaus G

    2017-10-01

    The main purpose of this study is to identify consumer segments based on the importance of product attributes when buying seafood for homemade meals on weekdays. There is a particular focus on the relative importance of the packaging attributes of fresh seafood. The results are based on a representative survey of 840 Norwegian consumers between 18 and 80 years of age. This study found that taste, freshness, nutritional value and naturalness are the most important attributes for the home consumption of seafood. Except for the high importance of information about expiration date, most other packaging attributes have only medium importance. Three consumer segments are identified based on the importance of 33 attributes associated with seafood: Perfectionists, Quality Conscious and Careless Consumers. The Quality Conscious consumers feel more self-confident in their evaluation of quality, and are less concerned with packaging, branding, convenience and emotional benefits compared to the Perfectionists. Careless Consumers are important as regular consumers of convenient and pre-packed seafood products and value recipe information on the packaging. The seafood industry may use the results provided in this study to strengthen their positioning of seafood across three different consumer segments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Segmenting words from natural speech: subsegmental variation in segmental cues.

    PubMed

    Rytting, C Anton; Brew, Chris; Fosler-Lussier, Eric

    2010-06-01

    Most computational models of word segmentation are trained and tested on transcripts of speech, rather than the speech itself, and assume that speech is converted into a sequence of symbols prior to word segmentation. We present a way of representing speech corpora that avoids this assumption, and preserves acoustic variation present in speech. We use this new representation to re-evaluate a key computational model of word segmentation. One finding is that high levels of phonetic variability degrade the model's performance. While robustness to phonetic variability may be intrinsically valuable, this finding needs to be complemented by parallel studies of the actual abilities of children to segment phonetically variable speech.

  3. Magnetic resonance imaging evaluation of adjacent segments after cervical disc arthroplasty: magnet strength and its effect on image quality. Clinical article.

    PubMed

    Antosh, Ivan J; DeVine, John G; Carpenter, Clyde T; Woebkenberg, Brian J; Yoest, Stephen M

    2010-12-01

    from the Co-Cr endplates. The open 0.2-T MR imaging unit reduces artifact at adjacent levels after cervical disc arthroplasty without a significant reduction in the image quality. Magnetic resonance imaging can be used to evaluate the adjacent segments after disc arthroplasty if magnet strength is addressed, providing another means to assess the long-term efficacy of this novel treatment.

  4. Segmental Vitiligo.

    PubMed

    van Geel, Nanja; Speeckaert, Reinhart

    2017-04-01

    Segmental vitiligo is characterized by its early onset, rapid stabilization, and unilateral distribution. Recent evidence suggests that segmental and nonsegmental vitiligo could represent variants of the same disease spectrum. Observational studies with respect to its distribution pattern point to a possible role of cutaneous mosaicism, whereas the original stated dermatomal distribution seems to be a misnomer. Although the exact pathogenic mechanism behind the melanocyte destruction is still unknown, increasing evidence has been published on the autoimmune/inflammatory theory of segmental vitiligo. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Fault segmentation: New concepts from the Wasatch Fault Zone, Utah, USA

    USGS Publications Warehouse

    Duross, Christopher; Personius, Stephen F.; Crone, Anthony J.; Olig, Susan S.; Hylland, Michael D.; Lund, William R.; Schwartz, David P.

    2016-01-01

    The question of whether structural segment boundaries along multisegment normal faults such as the Wasatch fault zone (WFZ) act as persistent barriers to rupture is critical to seismic hazard analyses. We synthesized late Holocene paleoseismic data from 20 trench sites along the central WFZ to evaluate earthquake rupture length and fault segmentation. For the youngest (<3 ka) and best-constrained earthquakes, differences in earthquake timing across prominent primary segment boundaries, especially for the most recent earthquakes on the north-central WFZ, are consistent with segment-controlled ruptures. However, broadly constrained earthquake times, dissimilar event times along the segments, the presence of smaller-scale (subsegment) boundaries, and areas of complex faulting permit partial-segment and multisegment (e.g., spillover) ruptures that are shorter (~20–40 km) or longer (~60–100 km) than the primary segment lengths (35–59 km). We report a segmented WFZ model that includes 24 earthquakes since ~7 ka and yields mean estimates of recurrence (1.1–1.3 kyr) and vertical slip rate (1.3–2.0 mm/yr) for the segments. However, additional rupture scenarios that include segment boundary spatial uncertainties, floating earthquakes, and multisegment ruptures are necessary to fully address epistemic uncertainties in rupture length. We compare the central WFZ to paleoseismic and historical surface ruptures in the Basin and Range Province and central Italian Apennines and conclude that displacement profiles have limited value for assessing the persistence of segment boundaries but can aid in interpreting prehistoric spillover ruptures. Our comparison also suggests that the probabilities of shorter and longer ruptures on the WFZ need to be investigated.

  6. An adaptive multi-feature segmentation model for infrared image

    NASA Astrophysics Data System (ADS)

    Zhang, Tingting; Han, Jin; Zhang, Yi; Bai, Lianfa

    2016-04-01

    Active contour models (ACMs) have been extensively applied to image segmentation; conventional region-based active contour models utilize only global or local single-feature information to minimize the energy functional that drives the contour evolution. Considering the limitations of original ACMs, an adaptive multi-feature segmentation model is proposed to handle infrared images with blurred boundaries and low contrast. In the proposed model, several essential local statistical features are introduced to construct a multi-feature signed pressure function (MFSPF). In addition, we draw upon an adaptive weight coefficient to modify the level set formulation, which is formed by integrating the MFSPF built from local statistical features with a signed pressure function based on global information. Experimental results demonstrate that the proposed method makes up for the inadequacy of the original method and achieves desirable results in segmenting infrared images.

  7. Multi-Atlas Segmentation using Partially Annotated Data: Methods and Annotation Strategies.

    PubMed

    Koch, Lisa M; Rajchl, Martin; Bai, Wenjia; Baumgartner, Christian F; Tong, Tong; Passerat-Palmbach, Jonathan; Aljabar, Paul; Rueckert, Daniel

    2017-08-22

    Multi-atlas segmentation is a widely used tool in medical image analysis, providing robust and accurate results by learning from annotated atlas datasets. However, the availability of fully annotated atlas images for training is limited due to the time required for the labelling task. Segmentation methods requiring only a proportion of each atlas image to be labelled could therefore reduce the workload on expert raters tasked with annotating atlas images. To address this issue, we first re-examine the labelling problem common in many existing approaches and formulate its solution in terms of a Markov Random Field energy minimisation problem on a graph connecting atlases and the target image. This provides a unifying framework for multi-atlas segmentation. We then show how modifications in the graph configuration of the proposed framework enable the use of partially annotated atlas images and investigate different partial annotation strategies. The proposed method was evaluated on two Magnetic Resonance Imaging (MRI) datasets for hippocampal and cardiac segmentation. Experiments were performed aimed at (1) recreating existing segmentation techniques with the proposed framework and (2) demonstrating the potential of employing sparsely annotated atlas data for multi-atlas segmentation.
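
    A useful reference point for the framework described above is the classic per-voxel majority-vote fusion baseline that graph-based multi-atlas methods refine. A minimal sketch with hypothetical data (three registered atlases propagating labels onto a five-voxel target); this is the simple baseline, not the paper's MRF formulation:

```python
from collections import Counter

def majority_vote(atlas_labels):
    """Fuse per-voxel labels from several registered atlases by
    taking, at each target voxel, the most frequent propagated label."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*atlas_labels)]

# Three hypothetical atlases, each a flat labeling of the same 5 voxels.
atlases = [
    [0, 1, 1, 2, 2],
    [0, 1, 2, 2, 2],
    [0, 0, 1, 2, 0],
]
fused = majority_vote(atlases)
```

    Partially annotated atlases break this baseline (some voxels receive no votes), which is one motivation for the graph-based label propagation formulation described in the abstract.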

  8. A Multiatlas Segmentation Using Graph Cuts with Applications to Liver Segmentation in CT Scans

    PubMed Central

    2014-01-01

    An atlas-based segmentation approach is presented that combines low-level operations, an affine probabilistic atlas, and a multiatlas-based segmentation. The proposed combination provides highly accurate segmentation due to registrations and atlas selections based on the regions of interest (ROIs) and coarse segmentations. Our approach shares the following common elements between the probabilistic atlas and multiatlas segmentation: (a) the spatial normalisation and (b) the segmentation method, which is based on minimising a discrete energy function using graph cuts. The method is evaluated for the segmentation of the liver in computed tomography (CT) images. Low-level operations define a ROI around the liver from an abdominal CT. We generate a probabilistic atlas using an affine registration based on geometry moments from manually labelled data. Next, a coarse segmentation of the liver is obtained from the probabilistic atlas with low computational effort. Then, a multiatlas segmentation approach improves the accuracy of the segmentation. Both the atlas selections and the nonrigid registrations of the multiatlas approach use a binary mask defined by coarse segmentation. We experimentally demonstrate that this approach performs better than atlas selections and nonrigid registrations in the entire ROI. The segmentation results are comparable to those obtained by human experts and to other recently published results. PMID:25276219

  9. [Potentials and limitations of the planned compulsory quality assurance program for cataract surgery (Qesü)].

    PubMed

    Hahn, U; Bertram, B; Krummenauer, F; Reuscher, A; Fabian, E; Neuhann, T; Schmickler, S; Neuhann, I

    2013-04-01

    Cataract surgery is scheduled for a federal program for quality improvement across the different sectors of care (outpatient care and hospitals). In case of implementation not only ophthalmic surgeons but all ophthalmologists would have to contribute to the documentation. Urgency, potential benefits and limitations of a compulsory compared to a voluntary quality assessment system are analyzed.

  10. Can masses of non-experts train highly accurate image classifiers? A crowdsourcing approach to instrument segmentation in laparoscopic images.

    PubMed

    Maier-Hein, Lena; Mersmann, Sven; Kondermann, Daniel; Bodenstedt, Sebastian; Sanchez, Alexandro; Stock, Christian; Kenngott, Hannes Gotz; Eisenmann, Mathias; Speidel, Stefanie

    2014-01-01

    Machine learning algorithms are gaining increasing interest in the context of computer-assisted interventions. One of the bottlenecks so far, however, has been the availability of training data, typically generated by medical experts with very limited resources. Crowdsourcing is a new trend that is based on outsourcing cognitive tasks to many anonymous untrained individuals from an online community. In this work, we investigate the potential of crowdsourcing for segmenting medical instruments in endoscopic image data. Our study suggests that (1) segmentations computed from annotations of multiple anonymous non-experts are comparable to those made by medical experts and (2) training data generated by the crowd is of the same quality as that annotated by medical experts. Given the speed of annotation, scalability and low costs, this implies that the scientific community might no longer need to rely on experts to generate reference or training data for certain applications. To trigger further research in endoscopic image processing, the data used in this study will be made publicly available.

  11. Model-based video segmentation for vision-augmented interactive games

    NASA Astrophysics Data System (ADS)

    Liu, Lurng-Kuo

    2000-04-01

    This paper presents an architecture and algorithms for model-based video object segmentation and its application to vision-augmented interactive games. We are especially interested in real-time, low-cost, vision-based applications that can be implemented in software on a PC. We use different models for the background and a player object. The object segmentation algorithm is performed at two different levels: pixel level and object level. At the pixel level, the segmentation algorithm is formulated as a maximum a posteriori probability (MAP) problem. The statistical likelihood of each pixel is calculated and used in the MAP problem. Object-level segmentation is used to improve segmentation quality by utilizing information about the spatial and temporal extent of the object. The concept of an active region, which is defined based on a motion histogram and trajectory prediction, is introduced to indicate the likelihood of a video object region for both background and foreground modeling. It also reduces the overall computational complexity. In contrast with other applications, the proposed video object segmentation system is able to create background and foreground models on the fly, even without introductory background frames. Furthermore, we apply different rates of self-tuning to the scene model so that the system can adapt to the environment when there is a scene change. We applied the proposed video object segmentation algorithms to several prototype virtual interactive games. In our prototype vision-augmented interactive games, a player can immerse himself/herself inside a game and can virtually interact with other animated characters in real time without being constrained by helmets, gloves, special sensing devices, or the background environment. Potential applications of the proposed algorithms include human-computer gesture interfaces and object-based video coding such as MPEG-4.
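    The pixel-level MAP step can be illustrated with Gaussian likelihoods per class: each pixel receives the label maximising log-prior plus log-likelihood. A toy sketch; the class models and priors below are invented, not the paper's:

```python
import numpy as np

def map_segment(pixels, means, stds, priors):
    """Per-pixel MAP labelling with Gaussian likelihoods: choose the class
    maximising log p(class) + log p(pixel | class).  A minimal stand-in for
    the pixel-level step described above (the actual likelihood models used
    in the paper are richer)."""
    pixels = np.asarray(pixels, dtype=float)
    log_post = []
    for m, s, p in zip(means, stds, priors):
        ll = -0.5 * ((pixels - m) / s) ** 2 - np.log(s)  # Gaussian log-likelihood
        log_post.append(ll + np.log(p))                  # plus log prior
    return np.argmax(log_post, axis=0)

# background ~ N(50, 10), player ~ N(200, 20); player is rarer a priori
frame = np.array([[45, 60, 190], [55, 210, 205]])
labels = map_segment(frame, means=[50, 200], stds=[10, 20], priors=[0.7, 0.3])
```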

  12. Mathematical Analysis of Space Radiator Segmenting for Increased Reliability and Reduced Mass

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.

    2001-01-01

    Spacecraft for long duration deep space missions will need to be designed to survive micrometeoroid bombardment of their surfaces, some of which may actually be punctured. To avoid loss of the entire mission, the damage due to such punctures must be limited to small, localized areas. This is especially true for power system radiators, which necessarily feature large surface areas to reject heat at relatively low temperature to the space environment by thermal radiation. It may be intuitively obvious that if a space radiator is composed of a large number of independently operating segments, such as heat pipes, a random micrometeoroid puncture will result only in the loss of the punctured segment, and not the entire radiator. Due to the redundancy achieved by independently operating segments, the wall thickness and consequently the weight of such segments can be drastically reduced. Probability theory is used to estimate the magnitude of such weight reductions as the number of segments is increased. An analysis of relevant parameter values required for minimum mass segmented radiators is also included.
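    The reliability argument can be made concrete with a binomial model: if each of n independent segments is punctured with some probability over the mission, the radiator survives whenever enough segments remain intact. A sketch with illustrative numbers (the paper's actual parameter values are not reproduced here):

```python
from math import comb

def radiator_reliability(n_segments, p_puncture, min_surviving):
    """P(at least `min_surviving` of `n_segments` independent segments
    survive), with per-segment puncture probability `p_puncture` over the
    mission.  Illustrates why segmenting boosts reliability: a puncture
    costs one segment instead of ending the mission."""
    q = 1.0 - p_puncture                       # per-segment survival probability
    return sum(comb(n_segments, s) * q**s * p_puncture**(n_segments - s)
               for s in range(min_surviving, n_segments + 1))

# one monolithic radiator: a single puncture is fatal
single = radiator_reliability(1, 0.05, 1)       # 0.95
# 100 segments with a 10% area margin: mission survives if >= 90 remain
segmented = radiator_reliability(100, 0.05, 90)
```

    With the margin built in, the segmented design is more reliable even though each individual segment is just as likely to be hit.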

  13. Measuring nanometre-scale electric fields in scanning transmission electron microscopy using segmented detectors.

    PubMed

    Brown, H G; Shibata, N; Sasaki, H; Petersen, T C; Paganin, D M; Morgan, M J; Findlay, S D

    2017-11-01

    Electric field mapping using segmented detectors in the scanning transmission electron microscope has recently been achieved at the nanometre scale. However, converting these results to quantitative field measurements involves assumptions whose validity is unclear for thick specimens. We consider three approaches to quantitative reconstruction of the projected electric potential using segmented detectors: a segmented detector approximation to differential phase contrast and two variants on ptychographical reconstruction. Limitations to these approaches are also studied, particularly errors arising from detector segment size, inelastic scattering, and non-periodic boundary conditions. A simple calibration experiment is described which corrects the differential phase contrast reconstruction to give reliable quantitative results despite the finite detector segment size and the effects of plasmon scattering in thick specimens. A plasmon scattering correction to the segmented detector ptychography approaches is also given. Avoiding the imposition of periodic boundary conditions on the reconstructed projected electric potential leads to more realistic reconstructions. Copyright © 2017 Elsevier B.V. All rights reserved.
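    The basic segmented-detector measurement can be sketched for an idealised 4-quadrant detector: the normalised difference of opposite segments approximates the probe deflection, and hence the projected field in the thin phase-object regime. The normalisation convention below is an assumption:

```python
def dpc_signal(quadrants):
    """First-moment (DPC) estimate from an idealised 4-quadrant detector.
    `quadrants` = intensities (I_right, I_left, I_top, I_bottom).
    The difference of opposite segments, normalised by the total signal,
    is proportional to the probe deflection -- and hence, in the phase-
    object approximation, to the projected electric field component."""
    i_r, i_l, i_t, i_b = quadrants
    total = i_r + i_l + i_t + i_b
    return (i_r - i_l) / total, (i_t - i_b) / total

# undeflected beam: equal signal in all quadrants -> zero measured field
assert dpc_signal((1.0, 1.0, 1.0, 1.0)) == (0.0, 0.0)
# beam pushed right by a field: excess signal on the right segment
ex, ey = dpc_signal((1.2, 0.8, 1.0, 1.0))
```

    The paper's point is that turning such signals into quantitative fields in thick specimens needs the calibration and plasmon corrections it describes; this sketch is only the ideal-detector starting point.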

  14. Segmentation precision of abdominal anatomy for MRI-based radiotherapy

    PubMed Central

    Noel, Camille E.; Zhu, Fan; Lee, Andrew Y.; Yanle, Hu; Parikh, Parag J.

    2014-01-01

    The limited soft tissue visualization provided by computed tomography, the standard imaging modality for radiotherapy treatment planning and daily localization, has motivated studies on the use of magnetic resonance imaging (MRI) for better characterization of treatment sites, such as the prostate and head and neck. However, no studies have been conducted on MRI-based segmentation for the abdomen, a site that could greatly benefit from enhanced soft tissue targeting. We investigated the interobserver and intraobserver precision in segmentation of abdominal organs on MR images for treatment planning and localization. Manual segmentation of 8 abdominal organs was performed by 3 independent observers on MR images acquired from 14 healthy subjects. Observers repeated segmentation 4 separate times for each image set. Interobserver and intraobserver contouring precision was assessed by computing 3-dimensional overlap (Dice coefficient [DC]) and distance to agreement (Hausdorff distance [HD]) of segmented organs. The mean and standard deviation of intraobserver and interobserver DC and HD values were DCintraobserver = 0.89 ± 0.12, HDintraobserver = 3.6 mm ± 1.5, DCinterobserver = 0.89 ± 0.15, and HDinterobserver = 3.2 mm ± 1.4. Overall, metrics indicated good interobserver/intraobserver precision (mean DC > 0.7, mean HD < 4 mm). Results suggest that MRI offers good segmentation precision for abdominal sites. These findings support the utility of MRI for abdominal planning and localization, as emerging MRI technologies, techniques, and onboard imaging devices are beginning to enable MRI-based radiotherapy. PMID:24726701
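    The two precision metrics used above are straightforward to reproduce. A small numpy sketch of the Dice coefficient and a brute-force symmetric Hausdorff distance on binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient of two boolean masks: 2|A .. B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the voxel sets of two masks
    (brute-force pairwise distances; fine for small examples)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# two 5x5 squares offset by one voxel diagonally
m1 = np.zeros((10, 10), bool); m1[2:7, 2:7] = True
m2 = np.zeros((10, 10), bool); m2[3:8, 3:8] = True
```

    For this pair, Dice = 2*16/50 = 0.64 and the Hausdorff distance is sqrt(2) voxels; against the study's thresholds (DC > 0.7, HD < 4 mm) the overlap would be flagged as borderline.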

  15. [Domestic and international trends concerning allowable limits of error in external quality assessment scheme].

    PubMed

    Hosogaya, Shigemi; Ozaki, Yukio

    2005-06-01

    Many external quality assessment schemes (EQAS) are performed to support quality improvement of the services provided by participating laboratories for the benefit of patients. The EQAS organizer shall be responsible for ensuring that the method of evaluation is appropriate for maintaining the credibility of the schemes. Procedures to evaluate each participating laboratory are gradually being standardized. In most cases of EQAS, the peer group mean is used as the target for accuracy, and the peer group standard deviation is used as the criterion for inter-laboratory variation. On the other hand, Fraser CG, et al. proposed desirable quality specifications for imprecision and inaccuracy derived from inter- and intra-individual biological variation. We also proposed allowable limits of analytical error: less than one-half of the average intra-individual variation for the evaluation of imprecision, and less than one-quarter of the combined inter- and intra-individual variation for the evaluation of inaccuracy. When expressed as coefficients of variation, these allowable limits can be applied over a wide range of analyte levels.
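    The quoted allowable limits can be computed directly from biologic variation data. Taking the combined inter- plus intra-individual term as a quadrature sum is the common Fraser-style reading and is an assumption here:

```python
def allowable_limits(cv_intra, cv_inter):
    """Biologic-variation-based quality specifications, following the
    proposal quoted in the abstract (Fraser-style formulas assumed):
      allowable imprecision < 1/2 of the average intra-individual CV
      allowable inaccuracy  < 1/4 of the combined intra- plus inter-
                              individual variation (quadrature sum here).
    CVs are in percent."""
    imprecision = 0.5 * cv_intra
    inaccuracy = 0.25 * (cv_intra**2 + cv_inter**2) ** 0.5
    return imprecision, inaccuracy

# illustrative biologic CVs (in %); not values from the paper
imp, bias = allowable_limits(cv_intra=6.0, cv_inter=15.0)
```

    Because both limits are expressed as CVs, they scale with the measured level, which is why the abstract notes they apply over a wide range of analyte levels.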

  16. Learning a constrained conditional random field for enhanced segmentation of fallen trees in ALS point clouds

    NASA Astrophysics Data System (ADS)

    Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe

    2018-06-01

    In this study, we present a method for improving the quality of automatic single fallen tree stem segmentation in ALS data by applying a specialized constrained conditional random field (CRF). The entire processing pipeline is composed of two steps. First, short stem segments of equal length are detected and a subset of them is selected for further processing, while in the second step the chosen segments are merged to form entire trees. The first step is accomplished using the specialized CRF defined on the space of segment labelings, capable of finding segment candidates which are easier to merge subsequently. To achieve this, the CRF considers not only the features of every candidate individually, but incorporates pairwise spatial interactions between adjacent segments into the model. In particular, pairwise interactions include a collinearity/angular deviation probability which is learned from training data as well as the ratio of spatial overlap, whereas unary potentials encode a learned probabilistic model of the laser point distribution around each segment. Each of these components enters the CRF energy with its own balance factor. To process previously unseen data, we first calculate the subset of segments for merging on a grid of balance factors by minimizing the CRF energy. Then, we perform the merging and rank the balance configurations according to the quality of their resulting merged trees, obtained from a learned tree appearance model. The final result is derived from the top-ranked configuration. We tested our approach on 5 plots from the Bavarian Forest National Park using reference data acquired in a field inventory. Compared to our previous segment selection method without pairwise interactions, an increase in detection correctness and completeness of up to 7 and 9 percentage points, respectively, was observed.
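    The segment-selection step can be illustrated with a tiny binary CRF over stem segments (keep/discard for merging), minimised by brute force. The unary and pairwise potentials and the balance factor below are invented placeholders for the paper's learned point-distribution, collinearity, and overlap models:

```python
from itertools import product

def min_energy_labeling(unary, pairs, pairwise, w_pair=1.0):
    """Exhaustive minimisation of a small binary CRF energy over segment
    labels (1 = keep segment for merging, 0 = discard):
      E(x) = sum_i unary[i][x_i] + w_pair * sum_(i,j) pairwise[(i,j)][x_i][x_j]
    A brute-force stand-in for the paper's CRF inference; only practical
    for a handful of segments."""
    n = len(unary)
    best, best_e = None, float("inf")
    for x in product((0, 1), repeat=n):
        e = sum(unary[i][x[i]] for i in range(n))
        e += w_pair * sum(pairwise[p][x[p[0]]][x[p[1]]] for p in pairs)
        if e < best_e:
            best, best_e = x, e
    return best, best_e

# three adjacent stem segments; the pairwise term rewards keeping
# collinear neighbours together and penalises mixed decisions
unary = [[0.5, 0.1], [0.4, 0.3], [0.5, 0.1]]      # cost of label 0 / label 1
pairs = [(0, 1), (1, 2)]
pairwise = {(0, 1): [[0.0, 0.3], [0.3, -0.6]],    # both kept -> bonus
            (1, 2): [[0.0, 0.3], [0.3, -0.6]]}
labels, energy = min_energy_labeling(unary, pairs, pairwise)
```

    Sweeping `w_pair` over a grid and re-ranking the merged results, as the paper does with its balance factors, would wrap this minimisation in an outer loop.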

  17. Segmentation of stereo terrain images

    NASA Astrophysics Data System (ADS)

    George, Debra A.; Privitera, Claudio M.; Blackmon, Theodore T.; Zbinden, Eric; Stark, Lawrence W.

    2000-06-01

    We have studied four approaches to segmentation of images: three automatic ones using image processing algorithms and a fourth approach, human manual segmentation. We were motivated toward helping with an important NASA Mars rover mission task -- replacing laborious manual path planning with automatic navigation of the rover on the Mars terrain. The goal of the automatic segmentations was to identify an obstacle map on the Mars terrain to enable automatic path planning for the rover. The automatic segmentation was first explored with two different segmentation methods: one based on pixel luminance, and the other based on pixel altitude generated through stereo image processing. The third automatic segmentation was achieved by combining these two types of image segmentation. Human manual segmentation of Martian terrain images was used for evaluating the effectiveness of the combined automatic segmentation as well as for determining how different humans segment the same images. Comparisons between two different segmentations, manual or automatic, were measured using a similarity metric, SAB. Based on this metric, the combined automatic segmentation did fairly well in agreeing with the manual segmentation. This was a demonstration of a positive step towards automatically creating the accurate obstacle maps necessary for automatic path planning and rover navigation.
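    The abstract does not define the similarity metric S_AB, so a Jaccard-style overlap between two binary obstacle maps is used below as a plausible, explicitly hypothetical stand-in:

```python
import numpy as np

def similarity(seg_a, seg_b):
    """Overlap between two binary obstacle maps, Jaccard style:
    |A .. B| / |A .. B|.  A hypothetical stand-in for the S_AB metric
    used in the study, which the abstract does not define."""
    a, b = np.asarray(seg_a, bool), np.asarray(seg_b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# manual vs. combined-automatic obstacle maps of the same terrain patch
manual = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]], bool)
auto   = np.array([[1, 1, 0], [0, 0, 0], [0, 0, 1]], bool)
score = similarity(manual, auto)   # 2 overlapping of 4 marked cells
```

    A score near 1 would indicate the automatic obstacle map closely agrees with the human one, which is the comparison the study reports.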

  18. 40 CFR 130.7 - Total maximum daily loads (TMDL) and individual water quality-based effluent limitations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 21 2010-07-01 2010-07-01 false Total maximum daily loads (TMDL) and individual water quality-based effluent limitations. 130.7 Section 130.7 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS WATER QUALITY PLANNING AND MANAGEMENT § 130.7 Total...

  19. 40 CFR 130.7 - Total maximum daily loads (TMDL) and individual water quality-based effluent limitations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 22 2011-07-01 2011-07-01 false Total maximum daily loads (TMDL) and individual water quality-based effluent limitations. 130.7 Section 130.7 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS WATER QUALITY PLANNING AND MANAGEMENT § 130.7 Total...

  20. Image Segmentation, Registration, Compression, and Matching

    NASA Technical Reports Server (NTRS)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine-invariant matching that exploits a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge of scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, and supports first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken to achieve automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of model sizes by efficiently coding the geometry and connectivity.

  1. SHERPA: an image segmentation and outline feature extraction tool for diatoms and other objects.

    PubMed

    Kloster, Michael; Kauer, Gerhard; Beszteri, Bánk

    2014-06-25

    Light microscopic analysis of diatom frustules is widely used both in basic and applied research, notably taxonomy, morphometrics, water quality monitoring and paleo-environmental studies. In these applications, usually large numbers of frustules need to be identified and/or measured. Although there is a need for automation in these applications, and image processing and analysis methods supporting these tasks have previously been developed, they did not become widespread in diatom analysis. While methodological reports for a wide variety of methods for image segmentation, diatom identification and feature extraction are available, no single implementation combining a subset of these into a readily applicable workflow accessible to diatomists exists. The newly developed tool SHERPA offers a versatile image processing workflow focused on the identification and measurement of object outlines, handling all steps from image segmentation over object identification to feature extraction, and providing interactive functions for reviewing and revising results. Special attention was given to ease of use, applicability to a broad range of data and problems, and supporting high throughput analyses with minimal manual intervention. Tested with several diatom datasets from different sources and of various compositions, SHERPA proved its ability to successfully analyze large amounts of diatom micrographs depicting a broad range of species. SHERPA is unique in combining the following features: application of multiple segmentation methods and selection of the one giving the best result for each individual object; identification of shapes of interest based on outline matching against a template library; quality scoring and ranking of resulting outlines supporting quick quality checking; extraction of a wide range of outline shape descriptors widely used in diatom studies and elsewhere; minimizing the need for, but enabling manual quality control and corrections. 
Although primarily

  2. Assessment of Image Quality of Repeated Limited Transthoracic Echocardiography After Cardiac Surgery.

    PubMed

    Canty, David J; Heiberg, Johan; Tan, Jen A; Yang, Yang; Royse, Alistair G; Royse, Colin F; Mobeirek, Abdulelah; Shaer, Fayez El; Albacker, Turki; Nazer, Rakan I; Fouda, Muhammed; Bakir, Bakir M; Alsaddique, Ahmed A

    2017-06-01

    The use of limited transthoracic echocardiography (TTE) has been restricted in patients after cardiac surgery due to reported poor image quality. The authors hypothesized that the hemodynamic state could be evaluated in a high proportion of patients at repeated intervals after cardiac surgery. Prospective observational study. Tertiary university hospital. The study comprised 51 patients aged 18 years or older presenting for cardiac surgery. Patients underwent TTE before surgery and at 3 time points after cardiac surgery. Images were assessed offline using an image quality scoring system by 2 expert observers. Hemodynamic state was assessed using the iHeartScan protocol, and the primary endpoint was the proportion of limited TTE studies in which the hemodynamic state was interpretable at each of the 3 postoperative time points. Hemodynamic state interpretability varied over time and was highest before surgery (90%) and lowest on the first postoperative day (49%) (p < 0.01). This variation in interpretability over time was reflected in all 3 transthoracic windows, ranging from 43% to 80% before surgery and from 2% to 35% on the first postoperative day (p < 0.01). Image quality scores were highest with the apical window, ranging from 53% to 77% across time points, and lowest with the subcostal window, ranging from 4% to 70% across time points (p < 0.01). Hemodynamic state can be determined with TTE in a high proportion of cardiac surgery patients after extubation and removal of surgical drains. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Dynamic updating atlas for heart segmentation with a nonlinear field-based model.

    PubMed

    Cai, Ken; Yang, Rongqian; Yue, Hongwei; Li, Lihua; Ou, Shanxing; Liu, Feng

    2017-09-01

    Segmentation of cardiac computed tomography (CT) images is an effective method for assessing the dynamic function of the heart and lungs. In the atlas-based heart segmentation approach, the quality of segmentation usually relies upon atlas images, and the selection of those reference images is a key step. The optimal goal in this selection process is to have the reference images as close to the target image as possible. This study proposes an atlas dynamic update algorithm using a scheme of nonlinear deformation field. The proposed method is based on the features among double-source CT (DSCT) slices. The extraction of these features will form a base to construct an average model and the created reference atlas image is updated during the registration process. A nonlinear field-based model was used to effectively implement a 4D cardiac segmentation. The proposed segmentation framework was validated with 14 4D cardiac CT sequences. The algorithm achieved an acceptable accuracy (1.0-2.8 mm). Our proposed method that combines a nonlinear field-based model and dynamic updating atlas strategies can provide an effective and accurate way for whole heart segmentation. The success of the proposed method largely relies on the effective use of the prior knowledge of the atlas and the similarity explored among the to-be-segmented DSCT sequences. Copyright © 2016 John Wiley & Sons, Ltd.

  4. Strategies for Limiting Engineers' Potential Liability for Indoor Air Quality Problems.

    PubMed

    von Oppenfeld, Rolf R; Freeze, Mark E; Sabo, Sean M

    1998-10-01

    Engineers face indoor air quality (IAQ) issues at the design phase of building construction as well as during the investigation and mitigation of potential indoor air pollution problems during building operation. IAQ issues that can be identified are "building-related illnesses" that may include problems of volatile organic compounds (VOCs). IAQ issues that cannot be identified are termed "sick building syndrome." Frequently, microorganism-caused illnesses are difficult to confirm. Engineers who provide professional services that directly or indirectly impact IAQ face significant potential liability to clients and third parties when performing these duties. Potential theories supporting liability claims for IAQ problems against engineers include breach of contract and various common law tort theories such as negligence and negligent misrepresentation. Furthermore, an increasing number of federal, state, and local regulations affect IAQ issues and can directly increase the potential liability of engineers. A duty to disclose potential or actual air quality concerns to third parties may apply for engineers in given circumstances. Such a duty may arise from judicial precedent, the Model Guide for Professional Conduct for Engineers, or the Code of Ethics for Engineers. Practical strategies engineers can use to protect themselves from liability include regular training and continuing education in relevant regulatory, scientific, and case law developments; detailed documentation and recordkeeping practices; adequate insurance coverage; contractual indemnity clauses; contractual provisions limiting liability to the scope of work performed; and contractual provisions limiting the extent of liability for engineers' negligence. Furthermore, through the proper use of building materials and construction techniques, an engineer or other design professional can effectively limit the potential for IAQ liability.

  5. Three-dimensional rendering of segmented object using matlab - biomed 2010.

    PubMed

    Anderson, Jeffrey R; Barrett, Steven F

    2010-01-01

    The three-dimensional rendering of microscopic objects is a difficult and challenging task that often requires specialized image processing techniques. Previous work described a semi-automatic segmentation process for fluorescently stained neurons collected as a sequence of slice images with a confocal laser scanning microscope. Once properly segmented, each individual object can be rendered and studied as a three-dimensional virtual object. This paper describes the work associated with the design and development of Matlab files to create three-dimensional images from the segmented object data previously mentioned. Part of the motivation for this work is to integrate both the segmentation and rendering processes into one software application, providing a seamless transition from the segmentation tasks to the rendering and visualization tasks. Previously these tasks were accomplished on two different computer systems, Windows and Linux. This split limits the usefulness of the segmentation and rendering applications to those who have both computer systems readily available. The focus of this work is to create custom Matlab image processing algorithms for object rendering and visualization, and to merge these capabilities into the Matlab files that were developed especially for the image segmentation task. The completed Matlab application will contain both the segmentation and rendering processes in a single graphical user interface, or GUI. This process for rendering three-dimensional images in Matlab requires that a sequence of two-dimensional binary images, each representing a cross-sectional slice of the object, be reassembled in a 3D space and covered with a surface. Additional segmented objects can be rendered in the same 3D space. The surface properties of each object can be varied by the user to aid in the study and analysis of the objects. This interactive process becomes a powerful visual tool to study and understand microscopic objects.
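    The core reassembly step described above can be sketched outside Matlab: stack the binary slice images into a volume, then extract the boundary voxels that a surface-covering step would triangulate. A numpy sketch (the shell extraction is an illustrative simplification, not the paper's renderer):

```python
import numpy as np

def stack_slices(slices):
    """Reassemble a sequence of 2-D binary cross-sections into a 3-D
    volume -- the first step of the rendering pipeline (done in Matlab
    in the paper; numpy here)."""
    return np.stack([np.asarray(s, bool) for s in slices], axis=0)

def surface_voxels(vol):
    """Object voxels with at least one background face-neighbour: the
    shell that a surface-covering step (e.g. isosurface extraction)
    would triangulate."""
    pad = np.pad(vol, 1, constant_values=False)
    core = np.ones_like(vol, dtype=bool)
    for ax in range(3):                     # are all 6 face-neighbours object?
        core &= np.roll(pad, 1, axis=ax)[1:-1, 1:-1, 1:-1]
        core &= np.roll(pad, -1, axis=ax)[1:-1, 1:-1, 1:-1]
    return vol & ~core                      # object minus its interior

# three slices of a filled 3x3 square -> a 3x3x3 cube; every voxel except
# the single interior one lies on the shell
vol = stack_slices([np.ones((3, 3))] * 3)
shell = surface_voxels(vol)
```

    A second segmented object could be stacked into the same array and rendered alongside, mirroring the multi-object display the paper describes.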

  6. CT image segmentation methods for bone used in medical additive manufacturing.

    PubMed

    van Eijnatten, Maureen; van Dijk, Roelof; Dobbe, Johannes; Streekstra, Geert; Koivisto, Juha; Wolff, Jan

    2018-01-01

    The accuracy of additive manufactured medical constructs is limited by errors introduced during image segmentation. The aim of this study was to review the existing literature on different image segmentation methods used in medical additive manufacturing. Thirty-two publications that reported on the accuracy of bone segmentation based on computed tomography images were identified using PubMed, ScienceDirect, Scopus, and Google Scholar. The advantages and disadvantages of the different segmentation methods used in these studies were evaluated and reported accuracies were compared. The spread between the reported accuracies was large (0.04 mm - 1.9 mm). Global thresholding was the most commonly used segmentation method with accuracies under 0.6 mm. The disadvantage of this method is the extensive manual post-processing required. Advanced thresholding methods could improve the accuracy to under 0.38 mm. However, such methods are currently not included in commercial software packages. Statistical shape model methods resulted in accuracies from 0.25 mm to 1.9 mm but are only suitable for anatomical structures with moderate anatomical variations. Thresholding remains the most widely used segmentation method in medical additive manufacturing. To improve the accuracy and reduce the costs of patient-specific additive manufactured constructs, more advanced segmentation methods are required. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
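    Global thresholding, the most common method in the reviewed studies, can be made automatic with Otsu's method, which picks the intensity cut maximising between-class variance. A numpy sketch on synthetic CT-like intensities (the HU distributions are invented, and commercial packages may instead use fixed HU thresholds):

```python
import numpy as np

def otsu_threshold(values, n_bins=256):
    """Global threshold maximising between-class variance (Otsu's method),
    an automatic variant of the global thresholding the review found most
    common.  Returns an intensity cut: voxels above it are labelled bone."""
    hist, edges = np.histogram(values, bins=n_bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                        # class-0 weight for each cut
    mu = np.cumsum(p * centers)              # cumulative mean
    mu_t = mu[-1]                            # global mean
    # between-class variance; guard the 0/0 at the histogram ends
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0
    return centers[np.argmax(sigma_b)]

# synthetic CT-like intensities: soft tissue around 40 HU, bone around 700 HU
rng = np.random.default_rng(0)
hu = np.concatenate([rng.normal(40, 30, 5000), rng.normal(700, 100, 1000)])
t = otsu_threshold(hu)                       # lands between the two tissue modes
```

    The review's point stands regardless of the threshold chosen: a single global cut still needs manual post-processing wherever cortical bone is thin or intensities overlap.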

  7. Echocardiographic Image Quality Deteriorates with Age in Children and Young Adults with Duchenne Muscular Dystrophy.

    PubMed

    Power, Alyssa; Poonja, Sabrina; Disler, Dal; Myers, Kimberley; Patton, David J; Mah, Jean K; Fine, Nowell M; Greenway, Steven C

    2017-01-01

    Advances in medical care for patients with Duchenne muscular dystrophy (DMD) have resulted in improved survival and an increased prevalence of cardiomyopathy. Serial echocardiographic surveillance is recommended to detect early cardiac dysfunction and initiate medical therapy. Clinical anecdote suggests that echocardiographic quality diminishes over time, impeding accurate assessment of left ventricular systolic function. Furthermore, evidence-based guidelines for the use of cardiac imaging in DMD, including cardiac magnetic resonance imaging (CMR), are limited. The objective of our single-center, retrospective study was to quantify the deterioration in echocardiographic image quality with increasing patient age and identify an age at which CMR should be considered. We retrospectively reviewed and graded the image quality of serial echocardiograms obtained in young patients with DMD. The quality of 16 left ventricular segments in two echocardiographic views was visually graded using a binary scoring system. An endocardial border delineation percentage (EBDP) score was calculated by dividing the number of segments with adequate endocardial delineation in each imaging window by the total number of segments present in that window and multiplying by 100. Linear regression analysis was performed to model the relationship between the EBDP scores and patient age. Fifty-five echocardiograms from 13 patients (mean age 11.6 years, range 3.6-19.9) were systematically reviewed. By 13 years of age, 50% of the echocardiograms were classified as suboptimal with ≥30% of segments inadequately visualized, and by 15 years of age, 78% of studies were suboptimal. Linear regression analysis revealed a negative correlation between patient age and EBDP score (r = -2.49, 95% confidence interval -4.73 to -0.25; p = 0.032), with the score decreasing by 2.5% for each 1 year increase in age. Echocardiographic image quality declines with increasing age in DMD. Alternate
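    The EBDP score and the age regression reported above are straightforward to reproduce. A sketch with invented numbers chosen to mirror the reported decline of roughly 2.5% per year of age:

```python
import numpy as np

def ebdp_score(adequate, total):
    """Endocardial border delineation percentage for one imaging window:
    adequately visualised segments / segments present, as a percentage."""
    return 100.0 * adequate / total

# decline of image quality with age, as in the study's regression
# (toy, perfectly linear data -- the real scores are noisy)
ages   = np.array([4.0, 8.0, 12.0, 16.0, 20.0])
scores = np.array([95.0, 85.0, 75.0, 65.0, 55.0])
slope, intercept = np.polyfit(ages, scores, 1)   # % of segments per year of age
```

    A fitted slope near -2.5 reproduces the study's headline number; extrapolating the line suggests when image quality drops below a usable fraction and CMR might be considered.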

  8. An efficient global energy optimization approach for robust 3D plane segmentation of point clouds

    NASA Astrophysics Data System (ADS)

    Dong, Zhen; Yang, Bisheng; Hu, Pingbo; Scherer, Sebastian

    2018-03-01

    Automatic 3D plane segmentation is necessary for many applications including point cloud registration, building information model (BIM) reconstruction, simultaneous localization and mapping (SLAM), and point cloud compression. However, most existing 3D plane segmentation methods still suffer from low precision and recall, and from inaccurate and incomplete boundaries, especially for low-quality point clouds collected by RGB-D sensors. To overcome these challenges, this paper formulates the plane segmentation problem as a global energy optimization, which is robust to high levels of noise and clutter. First, the proposed method divides the raw point cloud into multiscale supervoxels, and considers planar supervoxels and individual points corresponding to nonplanar supervoxels as basic units. Then, an efficient hybrid region growing algorithm is utilized to generate an initial plane set by incrementally merging adjacent basic units with similar features. Next, the initial plane set is further enriched and refined in a mutually reinforcing manner under the framework of global energy optimization. Finally, the performance of the proposed method is evaluated with respect to six metrics (i.e., plane precision, plane recall, under-segmentation rate, over-segmentation rate, boundary precision, and boundary recall) on two benchmark datasets. Comprehensive experiments demonstrate that the proposed method obtained good performance both in high-quality TLS point clouds (i.e., http://SEMANTIC3D.NET)

  9. Survey statistics of automated segmentations applied to optical imaging of mammalian cells.

    PubMed

    Bajcsy, Peter; Cardone, Antonio; Chalfoun, Joe; Halter, Michael; Juba, Derek; Kociolek, Marcin; Majurski, Michael; Peskin, Adele; Simon, Carl; Simon, Mylene; Vandecreme, Antoine; Brady, Mary

    2015-10-15

    The goal of this survey paper is to review cellular measurements using optical microscopy imaging followed by automated image segmentation. The cellular measurements of primary interest are taken from mammalian cells and their components. They are denoted as two- or three-dimensional (2D or 3D) image objects of biological interest. In our applications, such cellular measurements are important for understanding cell phenomena, such as cell counts, cell-scaffold interactions, cell colony growth rates, or cell pluripotency stability, as well as for establishing quality metrics for stem cell therapies. In this context, this survey paper is focused on automated segmentation as a software-based measurement leading to quantitative cellular measurements. We define the scope of this survey and a classification schema first. Next, all found and manually filtered publications are classified according to the main categories: (1) objects of interest (or objects to be segmented), (2) imaging modalities, (3) digital data axes, (4) segmentation algorithms, (5) segmentation evaluations, (6) computational hardware platforms used for segmentation acceleration, and (7) object (cellular) measurements. Finally, all classified papers are converted programmatically into a set of hyperlinked web pages with occurrence and co-occurrence statistics of assigned categories. The survey paper presents to a reader: (a) the state-of-the-art overview of published papers about automated segmentation applied to optical microscopy imaging of mammalian cells, (b) a classification of segmentation aspects in the context of cell optical imaging, (c) histogram and co-occurrence summary statistics about cellular measurements, segmentations, segmented objects, segmentation evaluations, and the use of computational platforms for accelerating segmentation execution, and (d) open research problems to pursue. The novel contributions of this survey paper are: (1) a new type of classification of cellular

  10. Speech segmentation in aphasia

    PubMed Central

    Peñaloza, Claudia; Benetello, Annalisa; Tuomiranta, Leena; Heikius, Ida-Maria; Järvinen, Sonja; Majos, Maria Carmen; Cardona, Pedro; Juncadella, Montserrat; Laine, Matti; Martin, Nadine; Rodríguez-Fornells, Antoni

    2017-01-01

    Background Speech segmentation is one of the initial and mandatory phases of language learning. Although some people with aphasia have shown a preserved ability to learn novel words, their speech segmentation abilities have not been explored. Aims We examined the ability of individuals with chronic aphasia to segment words from running speech via statistical learning. We also explored the relationships between speech segmentation and aphasia severity, and short-term memory capacity. We further examined the role of lesion location in speech segmentation and short-term memory performance. Methods & Procedures The experimental task was first validated with a group of young adults (n = 120). Participants with chronic aphasia (n = 14) were exposed to an artificial language and were evaluated in their ability to segment words using a speech segmentation test. Their performance was contrasted against chance level and compared to that of a group of elderly matched controls (n = 14) using group and case-by-case analyses. Outcomes & Results As a group, participants with aphasia were significantly above chance level in their ability to segment words from the novel language and did not significantly differ from the group of elderly controls. Speech segmentation ability in the aphasic participants was not associated with aphasia severity although it significantly correlated with word pointing span, a measure of verbal short-term memory. Case-by-case analyses identified four individuals with aphasia who performed above chance level on the speech segmentation task, all with predominantly posterior lesions and mild fluent aphasia. Their short-term memory capacity was also better preserved than in the rest of the group. Conclusions Our findings indicate that speech segmentation via statistical learning can remain functional in people with chronic aphasia and suggest that this initial language learning mechanism is associated with the functionality of the verbal short-term memory

  11. Can segmentation evaluation metric be used as an indicator of land cover classification accuracy?

    NASA Astrophysics Data System (ADS)

    Švab Lenarčič, Andreja; Đurić, Nataša; Čotar, Klemen; Ritlop, Klemen; Oštir, Krištof

    2016-10-01

    It is a broadly established belief that the segmentation result significantly affects subsequent image classification accuracy. However, the actual correlation between the two has never been evaluated. Such an evaluation would be of considerable importance for any attempts to automate the object-based classification process, as it would reduce the amount of user intervention required to fine-tune the segmentation parameters. We conducted an assessment of segmentation and classification by analyzing 100 different segmentation parameter combinations, 3 classifiers, 5 land cover classes, 20 segmentation evaluation metrics, and 7 classification accuracy measures. The reliability of a segmentation evaluation metric as an indicator of land cover classification accuracy was defined in terms of the linear correlation between the two. All unsupervised metrics that are not based on the number of segments have a very strong correlation with all classification measures and are therefore reliable indicators of land cover classification accuracy. On the other hand, the correlation for supervised metrics depends on so many factors that they cannot be trusted as reliable classification quality indicators. The land cover classification algorithms studied in this paper are widely used; the presented results are therefore broadly applicable.
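
    The reliability criterion above, linear correlation between a segmentation evaluation metric and a classification accuracy measure across parameter combinations, amounts to computing Pearson's r. A minimal sketch with invented sample values (the variable names and data are assumptions, not from the paper):

```python
import math

def pearson_r(xs, ys):
    """Pearson linear correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# One metric value and one overall accuracy per segmentation parameter setting
metric = [0.61, 0.70, 0.74, 0.82, 0.90]
accuracy = [0.55, 0.64, 0.69, 0.78, 0.88]
print(pearson_r(metric, accuracy))  # a "very strong" r marks the metric as reliable
```

    In the study this computation would be repeated for each of the 20 metrics against each of the 7 accuracy measures.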

  12. Evolutionarily stable range limits set by interspecific competition.

    PubMed

    Price, Trevor D; Kirkpatrick, Mark

    2009-04-22

    A combination of abiotic and biotic factors probably restricts the range of many species. Recent evolutionary models and tests of those models have asked how a gradual change in environmental conditions can set the range limit, with a prominent idea being that gene flow disrupts local adaptation. We investigate how biotic factors, explicitly competition for limited resources, result in evolutionarily stable range limits even in the absence of the disruptive effect of gene flow. We model two competing species occupying different segments of the resource spectrum. If one segment of the resource spectrum declines across space, a species that specializes on that segment can be driven to extinction, even though in the absence of competition it would evolve to exploit other abundant resources and so be saved. The result is that a species range limit is set in both evolutionary and ecological time, as the resources associated with its niche decline. Factors promoting this outcome include: (i) inherent gaps in the resource distribution, (ii) relatively high fitness of the species when in its own niche, and low fitness in the alternative niche, even when resource abundances are similar in each niche, (iii) strong interspecific competition, and (iv) asymmetric interspecific competition. We suggest that these features are likely to be common in multispecies communities, thereby setting evolutionarily stable range limits.

  13. Evolutionarily stable range limits set by interspecific competition

    PubMed Central

    Price, Trevor D.; Kirkpatrick, Mark

    2009-01-01

    A combination of abiotic and biotic factors probably restricts the range of many species. Recent evolutionary models and tests of those models have asked how a gradual change in environmental conditions can set the range limit, with a prominent idea being that gene flow disrupts local adaptation. We investigate how biotic factors, explicitly competition for limited resources, result in evolutionarily stable range limits even in the absence of the disruptive effect of gene flow. We model two competing species occupying different segments of the resource spectrum. If one segment of the resource spectrum declines across space, a species that specializes on that segment can be driven to extinction, even though in the absence of competition it would evolve to exploit other abundant resources and so be saved. The result is that a species range limit is set in both evolutionary and ecological time, as the resources associated with its niche decline. Factors promoting this outcome include: (i) inherent gaps in the resource distribution, (ii) relatively high fitness of the species when in its own niche, and low fitness in the alternative niche, even when resource abundances are similar in each niche, (iii) strong interspecific competition, and (iv) asymmetric interspecific competition. We suggest that these features are likely to be common in multispecies communities, thereby setting evolutionarily stable range limits. PMID:19324813

  14. Basic test framework for the evaluation of text line segmentation and text parameter extraction.

    PubMed

    Brodić, Darko; Milivojević, Dragan R; Milivojević, Zoran

    2010-01-01

    Text line segmentation is an essential stage in off-line optical character recognition (OCR) systems. It is key because inaccurately segmented text lines will lead to OCR failure. Text line segmentation of handwritten documents is a complex and diverse problem, complicated by the nature of handwriting. Hence, text line segmentation is a leading challenge in handwritten document image processing. Due to inconsistencies in the measurement and evaluation of text segmentation algorithm quality, a basic set of measurement methods is required. Currently, no commonly accepted set exists, and algorithm evaluation is custom oriented. In this paper, a basic test framework for the evaluation of text feature extraction algorithms is proposed. The framework consists of a few experiments primarily linked to text line segmentation, skew rate, and reference text line evaluation. Although the experiments are mutually independent, the results obtained are strongly cross-linked. Its suitability for different types of letters and languages, as well as its adaptability, are its main advantages. Thus, the paper presents an efficient evaluation method for text analysis algorithms.

  15. Basic Test Framework for the Evaluation of Text Line Segmentation and Text Parameter Extraction

    PubMed Central

    Brodić, Darko; Milivojević, Dragan R.; Milivojević, Zoran

    2010-01-01

    Text line segmentation is an essential stage in off-line optical character recognition (OCR) systems. It is key because inaccurately segmented text lines will lead to OCR failure. Text line segmentation of handwritten documents is a complex and diverse problem, complicated by the nature of handwriting. Hence, text line segmentation is a leading challenge in handwritten document image processing. Due to inconsistencies in the measurement and evaluation of text segmentation algorithm quality, a basic set of measurement methods is required. Currently, no commonly accepted set exists, and algorithm evaluation is custom oriented. In this paper, a basic test framework for the evaluation of text feature extraction algorithms is proposed. The framework consists of a few experiments primarily linked to text line segmentation, skew rate, and reference text line evaluation. Although the experiments are mutually independent, the results obtained are strongly cross-linked. Its suitability for different types of letters and languages, as well as its adaptability, are its main advantages. Thus, the paper presents an efficient evaluation method for text analysis algorithms. PMID:22399932

  16. Physics of Limiting Phenomena in Superconducting Microwave Resonators: Vortex Dissipation, Ultimate Quench and Quality Factor Degradation Mechanisms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Checchin, Mattia

    Superconducting niobium accelerating cavities are radio-frequency devices able to accelerate charged particles up to energies of tera-electron-volts. Such accelerating structures are, however, limited in terms of quality factor and accelerating gradient, which translates, in some cases, into higher capital costs of construction and operation of superconducting rf accelerators. Looking forward to a new generation of more affordable accelerators, the physical description of limiting mechanisms in superconducting microwave resonators is discussed. In particular, the physics behind the dissipation introduced by vortices in the superconductor, the ultimate quench limitations, and the quality factor degradation mechanism after a quench are described in detail. One of the limiting factors of the quality factor is the dissipation introduced by trapped magnetic flux vortices. The radio-frequency complex response of trapped vortices in superconductors is derived by solving the equation of motion for a magnetic flux line, assuming a bi-dimensional and mean-free-path-dependent Lorentzian-shaped pinning potential. The resulting surface resistance shows a bell-shaped trend as a function of the mean free path, in agreement with the experimental data observed. This bell-shaped trend of the surface resistance is described in terms of the interplay of two limiting regimes, identified as the pinning and flux-flow regimes for low and large mean free path values, respectively. The model predicts that the dissipation regime, pinning- or flux-flow-dominated, can be tuned either by acting on the frequency or on the electron mean free path. The effect of different configurations of pinning sites and strengths on the vortex surface resistance is also discussed. Accelerating cavities are also limited by the quench of the superconductive state, which limits the maximum accelerating gradient achievable. The accelerating field limiting factor is usually associated with the

  17. Physics of limiting phenomena in superconducting microwave resonators: Vortex dissipation, ultimate quench and quality factor degradation mechanisms

    NASA Astrophysics Data System (ADS)

    Checchin, Mattia

    Superconducting niobium accelerating cavities are radio-frequency devices able to accelerate charged particles up to energies of tera-electron-volts. Such accelerating structures are, however, limited in terms of quality factor and accelerating gradient, which translates, in some cases, into higher capital costs of construction and operation of superconducting rf accelerators. Looking forward to a new generation of more affordable accelerators, the physical description of limiting mechanisms in superconducting microwave resonators is discussed. In particular, the physics behind the dissipation introduced by vortices in the superconductor, the ultimate quench limitations, and the quality factor degradation mechanism after a quench are described in detail. One of the limiting factors of the quality factor is the dissipation introduced by trapped magnetic flux vortices. The radio-frequency complex response of trapped vortices in superconductors is derived by solving the equation of motion for a magnetic flux line, assuming a bi-dimensional and mean-free-path-dependent Lorentzian-shaped pinning potential. The resulting surface resistance shows a bell-shaped trend as a function of the mean free path, in agreement with the experimental data observed. This bell-shaped trend of the surface resistance is described in terms of the interplay of two limiting regimes, identified as the pinning and flux-flow regimes for low and large mean free path values, respectively. The model predicts that the dissipation regime, pinning- or flux-flow-dominated, can be tuned either by acting on the frequency or on the electron mean free path. The effect of different configurations of pinning sites and strengths on the vortex surface resistance is also discussed. Accelerating cavities are also limited by the quench of the superconductive state, which limits the maximum accelerating gradient achievable. The accelerating field limiting factor is usually associated with the superheating

  18. Optical coherence tomography in anterior segment imaging

    PubMed Central

    Kalev-Landoy, Maya; Day, Alexander C.; Cordeiro, M. Francesca; Migdal, Clive

    2008-01-01

    Purpose To evaluate the ability of optical coherence tomography (OCT), designed primarily to image the posterior segment, to visualize the anterior chamber angle (ACA) in patients with different angle configurations. Methods In a prospective observational study, the anterior segments of 26 eyes of 26 patients were imaged using the Zeiss Stratus OCT, model 3000. Imaging of the anterior segment was achieved by adjusting the focusing control on the Stratus OCT. A total of 16 patients had abnormal angle configurations including narrow or closed angles and plateau irides, and 10 had normal angle configurations as determined by prior full ophthalmic examination, including slit-lamp biomicroscopy and gonioscopy. Results In all cases, OCT provided high-resolution information regarding iris configuration. The ACA itself was clearly visualized in patients with narrow or closed angles, but not in patients with open angles. Conclusions Stratus OCT offers a non-contact, convenient and rapid method of assessing the configuration of the anterior chamber. Despite its limitations, it may be of help during the routine clinical assessment and treatment of patients with glaucoma, particularly when gonioscopy is not possible or difficult to interpret. PMID:17355288

  19. Colony image acquisition and segmentation

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2007-12-01

    Counting of colonies and plaques has a large number of applications, including food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, sterility testing, AMES testing, pharmaceuticals, paints, sterile fluids, and fungal contamination. Recently, many researchers and developers have worked on systems of this kind, but investigation shows that existing systems have problems, chiefly in image acquisition and image segmentation. In order to acquire colony images of good quality, an illumination box was constructed: the box includes front lighting and back lighting, which users can select based on the properties of the colony dishes. With the illumination box, lighting is uniform, and the colony dish can be placed in the same position every time, which simplifies image processing. The developed colony image segmentation algorithm consists of three sub-algorithms: (1) image classification, (2) image processing, and (3) colony delineation. The colony delineation algorithm mainly contains procedures based on grey-level similarity, boundary tracing, shape information, and colony exclusion. In addition, a number of algorithms were developed for colony analysis. The system has been tested with satisfactory results.

  20. Variability and Reproducibility of Segmental Longitudinal Strain Measurement: A Report From the EACVI-ASE Strain Standardization Task Force.

    PubMed

    Mirea, Oana; Pagourelias, Efstathios D; Duchenne, Jurgen; Bogaert, Jan; Thomas, James D; Badano, Luigi P; Voigt, Jens-Uwe

    2018-01-01

    In this study, we compared left ventricular (LV) segmental strain measurements obtained with different ultrasound machines and post-processing software packages. Global longitudinal strain (GLS) has proven to be a reproducible and valuable tool in clinical practice. Data about the reproducibility and intervendor differences of segmental strain measurements, however, are missing. We included 63 volunteers with cardiac magnetic resonance-proven infarct scar with segmental LV function ranging from normal to severely impaired. Each subject was examined within 2 h by a single expert sonographer with machines from multiple vendors. All 3 apical views were acquired twice to determine the test-retest and the intervendor variability. Segmental longitudinal peak systolic, end-systolic, and post-systolic strain were measured using 7 vendor-specific systems (Hitachi, Tokyo, Japan; Esaote, Florence, Italy; GE Vingmed Ultrasound, Horten, Norway; Philips, Andover, Massachusetts; Samsung, Seoul, South Korea; Siemens, Mountain View, California; and Toshiba, Otawara, Japan) and 2 independent software packages (Epsilon, Ann Arbor, Michigan; and TOMTEC, Unterschleissheim, Germany) and compared among vendors. Image quality and tracking feasibility differed among vendors (analysis of variance, p < 0.05). The absolute test-retest difference ranged from 2.5% to 4.9% for peak systolic, 2.6% to 5.0% for end-systolic, and 2.5% to 5.0% for post-systolic strain. The average segmental strain values varied significantly between vendors (up to 4.5%). Segmental strain parameters from each vendor correlated well with the mean of all vendors (r² range 0.58 to 0.81) but showed very different ranges of values. Bias and limits of agreement were up to -4.6 ± 7.5%. In contrast to GLS, LV segmental longitudinal strain measurements have a higher variability on top of the known intervendor bias. The fidelity of different software to follow segmental function varies considerably. We conclude that

  1. Segmentation precision of abdominal anatomy for MRI-based radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noel, Camille E.; Zhu, Fan; Lee, Andrew Y.

    2014-10-01

    The limited soft tissue visualization provided by computed tomography, the standard imaging modality for radiotherapy treatment planning and daily localization, has motivated studies on the use of magnetic resonance imaging (MRI) for better characterization of treatment sites, such as the prostate and head and neck. However, no studies have been conducted on MRI-based segmentation for the abdomen, a site that could greatly benefit from enhanced soft tissue targeting. We investigated the interobserver and intraobserver precision in segmentation of abdominal organs on MR images for treatment planning and localization. Manual segmentation of 8 abdominal organs was performed by 3 independent observers on MR images acquired from 14 healthy subjects. Observers repeated segmentation 4 separate times for each image set. Interobserver and intraobserver contouring precision was assessed by computing 3-dimensional overlap (Dice coefficient [DC]) and distance to agreement (Hausdorff distance [HD]) of segmented organs. The mean and standard deviation of intraobserver and interobserver DC and HD values were DC(intraobserver) = 0.89 ± 0.12, HD(intraobserver) = 3.6 ± 1.5 mm, DC(interobserver) = 0.89 ± 0.15, and HD(interobserver) = 3.2 ± 1.4 mm. Overall, metrics indicated good interobserver/intraobserver precision (mean DC > 0.7, mean HD < 4 mm). Results suggest that MRI offers good segmentation precision for abdominal sites. These findings support the utility of MRI for abdominal planning and localization, as emerging MRI technologies, techniques, and onboard imaging devices are beginning to enable MRI-based radiotherapy.
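
    The two agreement metrics used in this study, the Dice coefficient and the Hausdorff distance, can be sketched on toy point sets as follows. The coordinates are invented for illustration; a real evaluation would operate on 3D voxel label masks.

```python
import math

def dice(a: set, b: set) -> float:
    """Dice coefficient: 2|A∩B| / (|A| + |B|), 1.0 for perfect overlap."""
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b) -> float:
    """Symmetric Hausdorff distance: worst-case boundary disagreement."""
    def directed(p, q):
        return max(min(math.dist(x, y) for y in q) for x in p)
    return max(directed(a, b), directed(b, a))

# Two hypothetical contours of the same organ, as sets of pixel coordinates
seg1 = {(0, 0), (0, 1), (1, 0), (1, 1)}
seg2 = {(0, 0), (0, 1), (1, 0), (2, 2)}
print(dice(seg1, seg2))       # 3 shared pixels out of 4 + 4 -> 0.75
print(hausdorff(seg1, seg2))  # driven by the outlier pixel (2, 2)
```

    DC is insensitive to where the disagreement occurs, while HD is dominated by the single worst point, which is why the study reports both.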

  2. Pancreas and cyst segmentation

    NASA Astrophysics Data System (ADS)

    Dmitriev, Konstantin; Gutenko, Ievgeniia; Nadeem, Saad; Kaufman, Arie

    2016-03-01

    Accurate segmentation of abdominal organs from medical images is an essential part of surgical planning and computer-aided disease diagnosis. Many existing algorithms are specialized for the segmentation of healthy organs. Cystic pancreas segmentation is especially challenging due to its low contrast boundaries, variability in shape, location and the stage of the pancreatic cancer. We present a semi-automatic segmentation algorithm for pancreata with cysts. In contrast to existing automatic segmentation approaches for healthy pancreas segmentation which are amenable to atlas/statistical shape approaches, a pancreas with cysts can have even higher variability with respect to the shape of the pancreas due to the size and shape of the cyst(s). Hence, fine results are better attained with semi-automatic steerable approaches. We use a novel combination of random walker and region growing approaches to delineate the boundaries of the pancreas and cysts with respective best Dice coefficients of 85.1% and 86.7%, and respective best volumetric overlap errors of 26.0% and 23.5%. Results show that the proposed algorithm for pancreas and pancreatic cyst segmentation is accurate and stable.
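
    Seeded region growing, one ingredient of the semi-automatic pipeline above, can be illustrated on a toy 2D grid. The paper combines it with a random walker; this standalone sketch, its intensity tolerance, and the sample image are assumptions for illustration only.

```python
from collections import deque

def region_grow(image, seed, tol):
    """Collect 4-connected pixels whose intensity is within tol of the seed."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region, frontier = {seed}, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region

# Toy image: a dark "organ" (values near 10) next to a bright "cyst" (near 50)
image = [
    [10, 11, 50, 52],
    [12, 10, 51, 53],
    [11, 12, 10, 55],
]
print(len(region_grow(image, (0, 0), tol=3)))  # 7 connected low-intensity pixels
```

    Growing from a second seed inside the bright blob would delineate the cyst separately, mirroring the pancreas/cyst split in the paper.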

  3. Why segmentation matters: experience-driven segmentation errors impair “morpheme” learning

    PubMed Central

    Finn, Amy S.; Hudson Kam, Carla L.

    2015-01-01

    We ask whether an adult learner’s knowledge of their native language impedes statistical learning in a new language beyond just word segmentation (as previously shown). In particular, we examine the impact of native-language word-form phonotactics on learners’ ability to segment words into their component morphemes and learn phonologically triggered variation of morphemes. We find that learning is impaired when words and component morphemes are structured to conflict with a learner’s native-language phonotactic system, but not when native-language phonotactics do not conflict with morpheme boundaries in the artificial language. A learner’s native-language knowledge can therefore have a cascading impact affecting word segmentation and the morphological variation that relies upon proper segmentation. These results show that getting word segmentation right early in learning is deeply important for learning other aspects of language, even those (morphology) that are known to pose a great difficulty for adult language learners. PMID:25730305

  4. Pulmonary Lobe Segmentation with Probabilistic Segmentation of the Fissures and a Groupwise Fissure Prior

    PubMed Central

    Bragman, Felix J.S.; McClelland, Jamie R.; Jacob, Joseph; Hurst, John R.; Hawkes, David J.

    2017-01-01

    A fully automated, unsupervised lobe segmentation algorithm is presented based on a probabilistic segmentation of the fissures and the simultaneous construction of a population model of the fissures. A two-class probabilistic segmentation segments the lung into candidate fissure voxels and the surrounding parenchyma. This was combined with anatomical information and a groupwise fissure prior to drive non-parametric surface fitting to obtain the final segmentation. The performance of our fissure segmentation was validated on 30 patients from the COPDGene cohort, achieving a high median F1-score of 0.90 and showed general insensitivity to filter parameters. We evaluated our lobe segmentation algorithm on the LOLA11 dataset, which contains 55 cases at varying levels of pathology. We achieved the highest score of 0.884 of the automated algorithms. Our method was further tested quantitatively and qualitatively on 80 patients from the COPDGene study at varying levels of functional impairment. Accurate segmentation of the lobes is shown at various degrees of fissure incompleteness for 96% of all cases. We also show the utility of including a groupwise prior in segmenting the lobes in regions of grossly incomplete fissures. PMID:28436850

  5. Investigation of Primary Mirror Segment's Residual Errors for the Thirty Meter Telescope

    NASA Technical Reports Server (NTRS)

    Seo, Byoung-Joon; Nissly, Carl; Angeli, George; MacMynowski, Doug; Sigrist, Norbert; Troy, Mitchell; Williams, Eric

    2009-01-01

    The primary mirror segment aberrations after shape corrections with warping harness have been identified as the single largest error term in the Thirty Meter Telescope (TMT) image quality error budget. In order to better understand the likely errors and how they will impact the telescope performance we have performed detailed simulations. We first generated unwarped primary mirror segment surface shapes that met TMT specifications. Then we used the predicted warping harness influence functions and a Shack-Hartmann wavefront sensor model to determine estimates for the 492 corrected segment surfaces that make up the TMT primary mirror. Surface and control parameters, as well as the number of subapertures were varied to explore the parameter space. The corrected segment shapes were then passed to an optical TMT model built using the Jet Propulsion Laboratory (JPL) developed Modeling and Analysis for Controlled Optical Systems (MACOS) ray-trace simulator. The generated exit pupil wavefront error maps provided RMS wavefront error and image-plane characteristics like the Normalized Point Source Sensitivity (PSSN). The results have been used to optimize the segment shape correction and wavefront sensor designs as well as provide input to the TMT systems engineering error budgets.

  6. 78 FR 35929 - Proposed Listing of Additional Waters To Be Included on Indiana's 2010 List of Impaired Waters...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-14

    ... the availability of EPA's proposed decision identifying water quality limited segments and associated... water quality standards and for which total maximum daily loads (TMDLs) must be prepared. On May 8, 2013..., EPA approved Indiana's listing of certain water quality limited segments and associated pollutants...

  7. Fully Convolutional Neural Networks Improve Abdominal Organ Segmentation.

    PubMed

    Bobo, Meg F; Bao, Shunxing; Huo, Yuankai; Yao, Yuang; Virostko, Jack; Plassard, Andrew J; Lyu, Ilwoo; Assad, Albert; Abramson, Richard G; Hilmes, Melissa A; Landman, Bennett A

    2018-03-01

    Abdominal image segmentation is a challenging, yet important clinical problem. Variations in body size, position, and relative organ positions greatly complicate the segmentation process. Historically, multi-atlas methods have achieved leading results across imaging modalities and anatomical targets. However, deep learning is rapidly overtaking classical approaches for image segmentation. Recently, Zhou et al. showed that fully convolutional networks produce excellent results in abdominal organ segmentation of computed tomography (CT) scans. Yet, deep learning approaches have not been applied to whole abdomen magnetic resonance imaging (MRI) segmentation. Herein, we evaluate the applicability of an existing fully convolutional neural network (FCNN) designed for CT imaging to segment abdominal organs on T2 weighted (T2w) MRIs with two examples. In the primary example, we compare a classical multi-atlas approach with FCNN on forty-five T2w MRIs acquired from splenomegaly patients with five organs labeled (liver, spleen, left kidney, right kidney, and stomach). Thirty-six images were used for training while nine were used for testing. The FCNN resulted in a Dice similarity coefficient (DSC) of 0.930 in spleens, 0.730 in left kidneys, 0.780 in right kidneys, 0.913 in livers, and 0.556 in stomachs. The performance measures for livers, spleens, right kidneys, and stomachs were significantly better than multi-atlas (p < 0.05, Wilcoxon rank-sum test). In a secondary example, we compare the multi-atlas approach with FCNN on 138 distinct T2w MRIs with manually labeled pancreases (one label). On the pancreas dataset, the FCNN resulted in a median DSC of 0.691 in pancreases versus 0.287 for multi-atlas. The results are highly promising given relatively limited training data and without specific training of the FCNN model and illustrate the potential of deep learning approaches to transcend imaging modalities.
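
    The per-organ Wilcoxon rank-sum comparison reported above can be sketched as follows. In practice one would use scipy.stats.ranksums; this self-contained version uses the normal approximation and assumes no tied values, and the per-subject DSC samples are invented, not the study's data.

```python
import math

def rank_sum_p(xs, ys):
    """Two-sided p-value for the Wilcoxon rank-sum (Mann-Whitney) test,
    normal approximation, assuming no tied observations."""
    pooled = sorted([(v, 0) for v in xs] + [(v, 1) for v in ys])
    # Sum of ranks assigned to the first sample
    w = sum(rank for rank, (_, grp) in enumerate(pooled, start=1) if grp == 0)
    n1, n2 = len(xs), len(ys)
    mean = n1 * (n1 + n2 + 1) / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mean) / sd
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

# Hypothetical per-subject spleen DSCs for FCNN vs. multi-atlas
fcnn = [0.93, 0.91, 0.90, 0.88, 0.92]
atlas = [0.80, 0.78, 0.82, 0.75, 0.79]
print(rank_sum_p(fcnn, atlas))  # small p: FCNN significantly better
```

    For the small per-organ sample sizes in the study, an exact test (as scipy can provide) would be preferable to the normal approximation shown here.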

  8. Fully convolutional neural networks improve abdominal organ segmentation

    NASA Astrophysics Data System (ADS)

    Bobo, Meg F.; Bao, Shunxing; Huo, Yuankai; Yao, Yuang; Virostko, Jack; Plassard, Andrew J.; Lyu, Ilwoo; Assad, Albert; Abramson, Richard G.; Hilmes, Melissa A.; Landman, Bennett A.

    2018-03-01

    Abdominal image segmentation is a challenging, yet important clinical problem. Variations in body size, position, and relative organ positions greatly complicate the segmentation process. Historically, multi-atlas methods have achieved leading results across imaging modalities and anatomical targets. However, deep learning is rapidly overtaking classical approaches for image segmentation. Recently, Zhou et al. showed that fully convolutional networks produce excellent results in abdominal organ segmentation of computed tomography (CT) scans. Yet, deep learning approaches have not been applied to whole abdomen magnetic resonance imaging (MRI) segmentation. Herein, we evaluate the applicability of an existing fully convolutional neural network (FCNN) designed for CT imaging to segment abdominal organs on T2-weighted (T2w) MRIs with two examples. In the primary example, we compare a classical multi-atlas approach with FCNN on forty-five T2w MRIs acquired from splenomegaly patients with five organs labeled (liver, spleen, left kidney, right kidney, and stomach). Thirty-six images were used for training while nine were used for testing. The FCNN resulted in a Dice similarity coefficient (DSC) of 0.930 in spleens, 0.730 in left kidneys, 0.780 in right kidneys, 0.913 in livers, and 0.556 in stomachs. The performance measures for livers, spleens, right kidneys, and stomachs were significantly better than multi-atlas (p < 0.05, Wilcoxon rank-sum test). In a secondary example, we compare the multi-atlas approach with FCNN on 138 distinct T2w MRIs with manually labeled pancreases (one label). On the pancreas dataset, the FCNN resulted in a median DSC of 0.691 in pancreases versus 0.287 for multi-atlas. The results are highly promising given relatively limited training data and without specific training of the FCNN model and illustrate the potential of deep learning approaches to transcend imaging modalities.

  9. Automatic liver segmentation in computed tomography using general-purpose shape modeling methods.

    PubMed

    Spinczyk, Dominik; Krasoń, Agata

    2018-05-29

    Liver segmentation in computed tomography is required in many clinical applications. The segmentation methods used can be classified according to a number of criteria. One important criterion for method selection is the shape representation of the segmented organ. The aim of this work is automatic liver segmentation using general-purpose shape modeling methods. As part of the research, methods based on shape information at various levels of sophistication were used. The single-atlas-based segmentation method was used as the simplest shape-based method; it deforms a single atlas using free-form deformation of control-point curves. Subsequently, the classic and a modified Active Shape Model (ASM) were used, based on mean body-shape models. As the most advanced and principal method, generalized statistical shape models (Gaussian Process Morphable Models) were used, which are based on multi-dimensional Gaussian distributions of the shape deformation field. Mutual information and the sum of squared distances were used as similarity measures. The poorest results were obtained for the single-atlas method. For the ASM method, the Dice coefficient was above 55% for seven of the 10 analyzed test images, and over 70% for three of them, which placed the method in second place. The best results were obtained for the method based on the generalized statistical distribution of the deformation field, with a Dice coefficient of 88.5%. CONCLUSIONS: This value of the Dice coefficient can be explained by the use of general-purpose shape modeling methods with a large variance in the shape of the modeled object (the liver), together with the size of our training data set, which was limited to 10 cases. The results obtained with the presented fully automatic method are comparable with those of dedicated methods for liver segmentation. In addition, the deformation features of the
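
    The sum of squared distances named above as one of the two similarity measures is the simplest intensity-based registration metric. A minimal sketch (the flattened-image format and function name are illustrative):

```python
def sum_of_squared_differences(image_a, image_b):
    """Sum of squared intensity differences; lower means more similar.

    Works on flattened, equally sized intensity lists; in registration
    this is evaluated after warping the moving image onto the fixed one.
    """
    if len(image_a) != len(image_b):
        raise ValueError("images must have the same number of pixels")
    return sum((a - b) ** 2 for a, b in zip(image_a, image_b))

fixed  = [10, 12, 14, 200]
moving = [11, 12, 13, 190]
print(sum_of_squared_differences(fixed, moving))  # → 102
```

    Unlike mutual information, this metric assumes the two images share the same intensity characteristics, which is why the two measures are often offered as alternatives.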

  10. Segmentation-free image processing and analysis of precipitate shapes in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Bales, Ben; Pollock, Tresa; Petzold, Linda

    2017-06-01

    Segmentation based image analysis techniques are routinely employed for quantitative analysis of complex microstructures containing two or more phases. The primary advantage of these approaches is that spatial information on the distribution of phases is retained, enabling subjective judgements of the quality of the segmentation and subsequent analysis process. The downside is that computing micrograph segmentations with data from morphologically complex microstructures gathered with error-prone detectors is challenging and, if no special care is taken, the artifacts of the segmentation will make any subsequent analysis and conclusions uncertain. In this paper we demonstrate, using a two phase nickel-base superalloy microstructure as a model system, a new methodology for analysis of precipitate shapes using a segmentation-free approach based on the histogram of oriented gradients feature descriptor, a classic tool in image analysis. The benefits of this methodology for analysis of microstructure in two and three-dimensions are demonstrated.
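
    The histogram of oriented gradients descriptor at the heart of the segmentation-free approach can be illustrated with a stripped-down sketch: central-difference gradients and an unsigned orientation histogram, without the cell/block structure and normalisation a full HOG implementation would add:

```python
import math

def orientation_histogram(image, n_bins=9):
    """Magnitude-weighted histogram of gradient orientations on a 2D grid.

    A simplified stand-in for the HOG descriptor: unsigned orientations
    binned over [0, 180) degrees, border pixels skipped.
    """
    rows, cols = len(image), len(image[0])
    hist = [0.0] * n_bins
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            gx = image[y][x + 1] - image[y][x - 1]   # horizontal gradient
            gy = image[y + 1][x] - image[y - 1][x]   # vertical gradient
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            angle = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(angle / 180.0 * n_bins) % n_bins] += mag
    return hist

# A vertical edge produces purely horizontal gradients,
# so all the energy lands in the 0-degree bin.
img = [[0, 0, 9, 9]] * 4
h = orientation_histogram(img)
print(h.index(max(h)))  # → 0
```

    The distribution of such orientation histograms over a micrograph characterises precipitate shapes without ever committing to a pixel-wise segmentation.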

  11. Automated intraretinal segmentation of SD-OCT images in normal and age-related macular degeneration eyes

    PubMed Central

    de Sisternes, Luis; Jonna, Gowtham; Moss, Jason; Marmor, Michael F.; Leng, Theodore; Rubin, Daniel L.

    2017-01-01

    This work introduces and evaluates an automated intra-retinal segmentation method for spectral-domain optical coherence tomography (SD-OCT) retinal images. While quantitative assessment of retinal features in SD-OCT data is important, manual segmentation is extremely time-consuming and subjective. We address challenges that have hindered prior automated methods, including poor performance with diseased retinas relative to healthy retinas, and data smoothing that obscures image features such as small retinal drusen. Our novel segmentation approach is based on the iterative adaptation of a weighted median process, wherein a three-dimensional weighting function is defined according to image intensity and gradient properties, and a set of smoothness constraints and pre-defined rules are considered. We compared the segmentation results for 9 segmented outlines associated with intra-retinal boundaries to those drawn by hand by two retinal specialists and to those produced by an independent state-of-the-art automated software tool in a set of 42 clinical images (from 14 patients). These images were obtained with a Zeiss Cirrus SD-OCT system, including healthy, early or intermediate AMD, and advanced AMD eyes. As a qualitative evaluation of accuracy, a highly experienced third independent reader blindly rated the quality of the outlines produced by each method. The accuracy and image detail of our method was superior in healthy and early or intermediate AMD eyes (98.15% and 97.78% of results not needing substantial editing) to the automated method we compared against. While the performance was not as good in advanced AMD (68.89%), it was still better than the manual outlines or the comparison method (which failed in such cases). We also tested our method’s performance on images acquired with a different SD-OCT manufacturer, collected from a large publicly available data set (114 healthy and 255 AMD eyes), and compared the data quantitatively to reference standard markings of the

  12. A functional-based segmentation of human body scans in arbitrary postures.

    PubMed

    Werghi, Naoufel; Xiao, Yijun; Siebert, Jan Paul

    2006-02-01

    This paper presents a general framework that aims to address the task of segmenting three-dimensional (3-D) scan data representing the human form into subsets which correspond to functional human body parts. Such a task is challenging due to the articulated and deformable nature of the human body. A salient feature of this framework is that it is able to cope with various body postures and is in addition robust to noise, holes, irregular sampling and rigid transformations. Although whole human body scanners are now capable of routinely capturing the shape of the whole body in machine readable format, they have not yet realized their potential to provide automatic extraction of key body measurements. Automated production of anthropometric databases is a prerequisite to satisfying the needs of certain industrial sectors (e.g., the clothing industry). This implies that in order to extract specific measurements of interest, whole body 3-D scan data must be segmented by machine into subsets corresponding to functional human body parts. However, previously reported attempts at automating the segmentation process suffer from various limitations, such as being restricted to a standard specific posture and being vulnerable to scan data artifacts. Our human body segmentation algorithm advances the state of the art to overcome the above limitations and we present experimental results obtained using both real and synthetic data that confirm the validity, effectiveness, and robustness of our approach.

  13. The limits of metrical segmentation: intonation modulates infants' extraction of embedded trochees.

    PubMed

    Zahner, Katharina; Schönhuber, Muna; Braun, Bettina

    2016-11-01

    We tested German nine-month-olds' reliance on pitch and metrical stress for segmentation. In a headturn-preference paradigm, infants were familiarized with trisyllabic words (weak-strong-weak (WSW) stress pattern) in sentence-contexts. The words were presented in one of three naturally occurring intonation conditions: one in which high pitch was aligned with the stressed syllable and two misalignment conditions (with high pitch preceding vs. following the stressed syllable). Infants were tested on the SW unit of the WSW carriers. Experiment 1 showed recognition only when the stressed syllable was high-pitched. Intonation of test items (similar vs. dissimilar to familiarization) had no influence (Experiment 2). Thus, German nine-month-olds perceive stressed syllables as word onsets only when high-pitched, although they already generalize over different pitch contours. Different mechanisms underlying this pattern of results are discussed.

  14. Automated Urban Travel Interpretation: A Bottom-up Approach for Trajectory Segmentation.

    PubMed

    Das, Rahul Deb; Winter, Stephan

    2016-11-23

    Understanding travel behavior is critical for effective urban planning as well as for enabling various context-aware service provisions to support mobility as a service (MaaS). Both applications rely on the sensor traces generated by travellers' smartphones. These traces can be used to interpret travel modes, both for generating automated travel diaries as well as for real-time travel mode detection. Current approaches segment a trajectory by certain criteria, e.g., drop in speed. However, these criteria are heuristic, and, thus, existing approaches are subjective and involve significant vagueness and uncertainty in activity transitions in space and time. Also, segmentation approaches are not suited for real-time interpretation of open-ended segments, and cannot cope with the frequent gaps in the location traces. To address all these challenges, a novel, state-based bottom-up approach is proposed. This approach assumes a fixed atomic segment of a homogeneous state, instead of an event-based segment, and a progressive iteration until a new state is found. The research investigates how an atomic state-based approach can be developed in such a way that it can work in real time, near-real time and offline mode and in different environmental conditions with their varying quality of sensor traces. The results show the proposed bottom-up model outperforms the existing event-based segmentation models in terms of adaptivity, flexibility, accuracy and richness in information delivery pertinent to automated travel behavior interpretation.
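
    The bottom-up idea of growing segments from fixed atomic windows of a homogeneous state can be sketched as follows. The two-state speed classifier and its threshold are illustrative assumptions, not the classifier of the cited paper:

```python
def segment_by_state(speeds, walk_max=2.0):
    """Merge consecutive atomic windows that share a movement state.

    Each speed (m/s) stands for one fixed atomic window; the walk/vehicle
    split at `walk_max` is a placeholder for a real state classifier.
    Returns a list of (state, start_index, end_index) segments.
    """
    def state(v):
        return "walk" if v <= walk_max else "vehicle"

    segments = []
    for i, v in enumerate(speeds):
        s = state(v)
        if segments and segments[-1][0] == s:
            segments[-1] = (s, segments[-1][1], i)   # grow the open segment
        else:
            segments.append((s, i, i))               # a new state begins
    return segments

speeds = [1.1, 1.4, 0.9, 8.0, 9.5, 8.8, 1.2]
print(segment_by_state(speeds))
# → [('walk', 0, 2), ('vehicle', 3, 5), ('walk', 6, 6)]
```

    Because segments are grown one atomic window at a time, the same loop can run online over an open-ended trace, which is the property the paper contrasts with event-based segmentation.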

  15. Automated Urban Travel Interpretation: A Bottom-up Approach for Trajectory Segmentation

    PubMed Central

    Das, Rahul Deb; Winter, Stephan

    2016-01-01

    Understanding travel behavior is critical for effective urban planning as well as for enabling various context-aware service provisions to support mobility as a service (MaaS). Both applications rely on the sensor traces generated by travellers’ smartphones. These traces can be used to interpret travel modes, both for generating automated travel diaries as well as for real-time travel mode detection. Current approaches segment a trajectory by certain criteria, e.g., drop in speed. However, these criteria are heuristic, and, thus, existing approaches are subjective and involve significant vagueness and uncertainty in activity transitions in space and time. Also, segmentation approaches are not suited for real-time interpretation of open-ended segments, and cannot cope with the frequent gaps in the location traces. To address all these challenges, a novel, state-based bottom-up approach is proposed. This approach assumes a fixed atomic segment of a homogeneous state, instead of an event-based segment, and a progressive iteration until a new state is found. The research investigates how an atomic state-based approach can be developed in such a way that it can work in real time, near-real time and offline mode and in different environmental conditions with their varying quality of sensor traces. The results show the proposed bottom-up model outperforms the existing event-based segmentation models in terms of adaptivity, flexibility, accuracy and richness in information delivery pertinent to automated travel behavior interpretation. PMID:27886053

  16. Automatic aortic root segmentation in CTA whole-body dataset

    NASA Astrophysics Data System (ADS)

    Gao, Xinpei; Kitslaar, Pieter H.; Scholte, Arthur J. H. A.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke; Reiber, Johan H. C.

    2016-03-01

    Trans-catheter aortic valve replacement (TAVR) is an evolving technique for patients with serious aortic stenosis disease. Typically, in this application a CTA data set is obtained of the patient's arterial system from the subclavian artery to the femoral arteries, to evaluate the quality of the vascular access route and analyze the aortic root to determine if and which prosthesis should be used. In this paper, we concentrate on the automated segmentation of the aortic root. The purpose of this study was to automatically segment the aortic root in computed tomography angiography (CTA) datasets to support TAVR procedures. The method in this study includes 4 major steps. First, the patient's cardiac CTA image was resampled to reduce the computation time. Next, the cardiac CTA image was segmented using an atlas-based approach. The most similar atlas was selected from a total of 8 atlases based on its image similarity to the input CTA image. Third, the aortic root segmentation from the previous step was transferred to the patient's whole-body CTA image by affine registration and refined in the fourth step using a deformable subdivision surface model fitting procedure based on image intensity. The pipeline was applied to 20 patients. The ground truth was created by an analyst who semi-automatically corrected the contours of the automatic method, where necessary. The average Dice similarity index between the segmentations of the automatic method and the ground truth was found to be 0.965±0.024. In conclusion, the current results are very promising.
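
    The second step above selects the most similar of the 8 atlases by image similarity. The abstract does not name the similarity measure, so the sketch below uses normalized cross-correlation as an illustrative choice, on flattened intensity lists:

```python
import math

def normalized_cross_correlation(a, b):
    """NCC between two equally sized flattened images, in [-1, 1]."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    da = [x - mean_a for x in a]
    db = [x - mean_b for x in b]
    denom = math.sqrt(sum(x * x for x in da) * sum(x * x for x in db))
    return sum(x * y for x, y in zip(da, db)) / denom if denom else 0.0

def select_atlas(target, atlases):
    """Return the index of the atlas most similar to the target image."""
    return max(range(len(atlases)),
               key=lambda i: normalized_cross_correlation(target, atlases[i]))

target = [10, 20, 30, 40]
atlases = [[40, 30, 20, 10],   # anti-correlated
           [11, 19, 33, 41],   # close match
           [25, 25, 25, 25]]   # flat, no structure
print(select_atlas(target, atlases))  # → 1
```

    In practice the atlases would first be registered to the input CTA image; the selected atlas then supplies the initial aortic root contours for refinement.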

  17. UrQt: an efficient software for the Unsupervised Quality trimming of NGS data.

    PubMed

    Modolo, Laurent; Lerat, Emmanuelle

    2015-04-29

    Quality control is a necessary step of any Next Generation Sequencing analysis. Although customary, this step still requires manual interventions to empirically choose tuning parameters according to various quality statistics. Moreover, current quality control procedures that provide a "good quality" data set, are not optimal and discard many informative nucleotides. To address these drawbacks, we present a new quality control method, implemented in UrQt software, for Unsupervised Quality trimming of Next Generation Sequencing reads. Our trimming procedure relies on a well-defined probabilistic framework to detect the best segmentation between two segments of unreliable nucleotides, framing a segment of informative nucleotides. Our software only requires one user-friendly parameter to define the minimal quality threshold (phred score) to consider a nucleotide to be informative, which is independent of both the experiment and the quality of the data. This procedure is implemented in C++ in an efficient and parallelized software with a low memory footprint. We tested the performances of UrQt compared to the best-known trimming programs, on seven RNA and DNA sequencing experiments and demonstrated its optimality in the resulting tradeoff between the number of trimmed nucleotides and the quality objective. By finding the best segmentation to delimit a segment of good quality nucleotides, UrQt greatly increases the number of reads and of nucleotides that can be retained for a given quality objective. UrQt source files, binary executables for different operating systems and documentation are freely available (under the GPLv3) at the following address: https://lbbe.univ-lyon1.fr/-UrQt-.html .
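
    The core idea of framing one informative segment between two unreliable ones can be sketched as a maximum-scoring-segment scan. The +1/-1 scoring below is an illustrative stand-in for UrQt's probabilistic model; only the phred threshold mirrors the tool's single parameter:

```python
def best_informative_segment(phred, threshold=20):
    """Find the contiguous segment maximising (good - bad) base counts.

    Scores each base +1 if its phred quality meets `threshold`, else -1,
    then runs a max-subarray (Kadane) scan over the read.
    Returns (start, end) inclusive, or None for an all-bad read.
    """
    best, best_span = 0, None
    run, run_start = 0, 0
    for i, q in enumerate(phred):
        score = 1 if q >= threshold else -1
        if run <= 0:
            run, run_start = score, i     # restart the candidate segment
        else:
            run += score
        if run > best:
            best, best_span = run, (run_start, i)
    return best_span

quals = [5, 8, 30, 32, 28, 35, 7, 6]
print(best_informative_segment(quals))  # → (2, 5)
```

    Trimming then keeps only the bases inside the returned span, discarding the low-quality head and tail instead of applying a fixed-length cut.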

  18. CT-based patient modeling for head and neck hyperthermia treatment planning: manual versus automatic normal-tissue-segmentation.

    PubMed

    Verhaart, René F; Fortunati, Valerio; Verduijn, Gerda M; van Walsum, Theo; Veenland, Jifke F; Paulides, Margarethus M

    2014-04-01

    Clinical trials have shown that hyperthermia, as adjuvant to radiotherapy and/or chemotherapy, improves treatment of patients with locally advanced or recurrent head and neck (H&N) carcinoma. Hyperthermia treatment planning (HTP) guided H&N hyperthermia is being investigated, which requires patient specific 3D patient models derived from Computed Tomography (CT)-images. To decide whether a recently developed automatic-segmentation algorithm can be introduced in the clinic, we compared the impact of manual- and automatic normal-tissue-segmentation variations on HTP quality. CT images of seven patients were segmented automatically and manually by four observers, to study inter-observer and intra-observer geometrical variation. To determine the impact of this variation on HTP quality, HTP was performed using the automatic and manual segmentation of each observer, for each patient. This impact was compared to other sources of patient model uncertainties, i.e. varying gridsizes and dielectric tissue properties. Despite geometrical variations, manual and automatic generated 3D patient models resulted in an equal, i.e. 1%, variation in HTP quality. This variation was minor with respect to the total of other sources of patient model uncertainties, i.e. 11.7%. Automatically generated 3D patient models can be introduced in the clinic for H&N HTP. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  19. Alignment and Integration Techniques for Mirror Segment Pairs on the Constellation X Telescope

    NASA Technical Reports Server (NTRS)

    Hadjimichael, Theo; Lehan, John; Olsen, Larry; Owens, Scott; Saha, Timo; Wallace, Tom; Zhang, Will

    2007-01-01

    We present the concepts behind current alignment and integration techniques for testing a Constellation-X primary-secondary mirror segment pair in an x-ray beam line test. We examine the effects of a passive mount on thin glass x-ray mirror segments, and the issues of mount shape and environment on alignment. We also investigate how bonding and transfer to a permanent housing affects the quality of the final image, comparing predicted results to a full x-ray test on a primary secondary pair.

  20. Constraint factor graph cut-based active contour method for automated cellular image segmentation in RNAi screening.

    PubMed

    Chen, C; Li, H; Zhou, X; Wong, S T C

    2008-05-01

    Image-based, high throughput genome-wide RNA interference (RNAi) experiments are increasingly carried out to facilitate the understanding of gene functions in intricate biological processes. Automated screening of such experiments generates a large number of images with great variations in image quality, which makes manual analysis unreasonably time-consuming. Therefore, effective techniques for automatic image analysis are urgently needed, in which segmentation is one of the most important steps. This paper proposes a fully automatic method for cell segmentation in genome-wide RNAi screening images. The method consists of two steps: nuclei and cytoplasm segmentation. Nuclei are extracted and labelled to initialize cytoplasm segmentation. Since the quality of RNAi images is rather poor, a novel scale-adaptive steerable filter is designed to enhance the images in order to extract the long and thin protrusions on spiky cells. Then, the constraint factor GCBAC method and morphological algorithms are combined into an integrated method to segment tightly clustered cells. Compared with the results obtained using seeded watershed and with the ground truth, i.e., manual labelling by experts on RNAi screening data, our method achieves higher accuracy. Compared with active contour methods, our method consumes much less time. The positive results indicate that the proposed method can be applied in automatic image analysis of multi-channel image screening data.

  1. Reporting trends and outcomes in ST-segment-elevation myocardial infarction national hospital quality assessment programs.

    PubMed

    McCabe, James M; Kennedy, Kevin F; Eisenhauer, Andrew C; Waldman, Howard M; Mort, Elizabeth A; Pomerantsev, Eugene; Resnic, Frederic S; Yeh, Robert W

    2014-01-14

    For patients who undergo primary percutaneous coronary intervention (PCI) for ST-segment-elevation myocardial infarction, the door-to-balloon time is an important performance measure reported to the Centers for Medicare & Medicaid Services (CMS) and tied to hospital quality assessment and reimbursement. We sought to assess the use and impact of exclusion criteria associated with the CMS measure of door-to-balloon time in primary PCI. All primary PCI-eligible patients at 3 Massachusetts hospitals (Brigham and Women's, Massachusetts General, and North Shore Medical Center) were evaluated for CMS reporting status. Rates of CMS reporting exclusion were the primary end points of interest. Key secondary end points were between-group differences in patient characteristics, door-to-balloon times, and 1-year mortality rates. From 2005 to 2011, 26% (408) of the 1548 primary PCI cases were excluded from CMS reporting. This percentage increased over the study period from 13.9% in 2005 to 36.7% in the first 3 quarters of 2011 (P<0.001). The most frequent cause of exclusion was for a diagnostic dilemma such as a nondiagnostic initial ECG, accounting for 31.2% of excluded patients. Although 95% of CMS-reported cases met door-to-balloon time goals in 2011, this was true of only 61% of CMS-excluded cases and consequently 82.6% of all primary PCI cases performed that year. The 1-year mortality for CMS-excluded patients was double that of CMS-included patients (13.5% versus 6.6%; P<0.001). More than a quarter of patients who underwent primary PCI were excluded from hospital quality reports collected by CMS, and this percentage has grown substantially over time. These findings may have significant implications for our understanding of process improvement in primary PCI and mechanisms for reimbursement through Medicare.

  2. Crossword: A Fully Automated Algorithm for the Segmentation and Quality Control of Protein Microarray Images

    PubMed Central

    2015-01-01

    Biological assays formatted as microarrays have become a critical tool for the generation of the comprehensive data sets required for systems-level understanding of biological processes. Manual annotation of data extracted from images of microarrays, however, remains a significant bottleneck, particularly for protein microarrays due to the sensitivity of this technology to weak artifact signal. In order to automate the extraction and curation of data from protein microarrays, we describe an algorithm called Crossword that logically combines information from multiple approaches to fully automate microarray segmentation. Automated artifact removal is also accomplished by segregating structured pixels from the background noise using iterative clustering and pixel connectivity. Correlation of the location of structured pixels across image channels is used to identify and remove artifact pixels from the image prior to data extraction. This component improves the accuracy of data sets while reducing the requirement for time-consuming visual inspection of the data. Crossword enables a fully automated protocol that is robust to significant spatial and intensity aberrations. Overall, the average amount of user intervention is reduced by an order of magnitude and the data quality is increased through artifact removal and reduced user variability. The increase in throughput should aid the further implementation of microarray technologies in clinical studies. PMID:24417579
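
    The pixel-connectivity step used to segregate structured pixels from background noise is, at its core, connected-component labelling. A minimal 4-connected flood-fill sketch (Crossword's actual iterative-clustering stage is more involved):

```python
from collections import deque

def connected_components(mask):
    """Label 4-connected foreground components in a binary grid.

    Returns (component_count, label_grid), where label 0 is background
    and components are numbered from 1 in scan order.
    """
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                current += 1
                queue = deque([(r, c)])
                labels[r][c] = current
                while queue:                      # breadth-first flood fill
                    y, x = queue.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return current, labels

spot_mask = [[1, 1, 0, 0],
             [0, 1, 0, 1],
             [0, 0, 0, 1]]
n, _ = connected_components(spot_mask)
print(n)  # → 2
```

    Components whose locations correlate across image channels can then be flagged as artifacts rather than true spots, which is the correlation test the abstract describes.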

  3. Contour-Driven Atlas-Based Segmentation

    PubMed Central

    Wachinger, Christian; Fritscher, Karl; Sharp, Greg; Golland, Polina

    2016-01-01

    We propose new methods for automatic segmentation of images based on an atlas of manually labeled scans and contours in the image. First, we introduce a Bayesian framework for creating initial label maps from manually annotated training images. Within this framework, we model various registration- and patch-based segmentation techniques by changing the deformation field prior. Second, we perform contour-driven regression on the created label maps to refine the segmentation. Image contours and image parcellations give rise to non-stationary kernel functions that model the relationship between image locations. Setting the kernel to the covariance function in a Gaussian process establishes a distribution over label maps supported by image structures. Maximum a posteriori estimation of the distribution over label maps conditioned on the outcome of the atlas-based segmentation yields the refined segmentation. We evaluate the segmentation in two clinical applications: the segmentation of parotid glands in head and neck CT scans and the segmentation of the left atrium in cardiac MR angiography images. PMID:26068202

  4. An Efficient Pipeline for Abdomen Segmentation in CT Images.

    PubMed

    Koyuncu, Hasan; Ceylan, Rahime; Sivri, Mesut; Erdogan, Hasan

    2018-04-01

    Computed tomography (CT) scans usually include some disadvantages due to the nature of the imaging procedure, and these handicaps prevent accurate abdomen segmentation. Discontinuous abdomen edges, bed section of CT, patient information, closeness between the edges of the abdomen and CT, poor contrast, and a narrow histogram can be regarded as the most important handicaps that occur in abdominal CT scans. Currently, one or more handicaps can arise and prevent technicians obtaining abdomen images through simple segmentation techniques. In other words, CT scans can include the bed section of CT, a patient's diagnostic information, low-quality abdomen edges, low-level contrast, and narrow histogram, all in one scan. These phenomena constitute a challenge, and an efficient pipeline that is unaffected by handicaps is required. In addition, analysis such as segmentation, feature selection, and classification has meaning for a real-time diagnosis system in cases where the abdomen section is directly used with a specific size. A statistical pipeline is designed in this study that is unaffected by the handicaps mentioned above. Intensity-based approaches, morphological processes, and histogram-based procedures are utilized to design an efficient structure. Performance evaluation is realized in experiments on 58 CT images (16 training, 16 test, and 26 validation) that include the abdomen and one or more disadvantage(s). The first part of the data (16 training images) is used to detect the pipeline's optimum parameters, while the second and third parts are utilized to evaluate and to confirm the segmentation performance. The segmentation results are presented as the means of six performance metrics. Thus, the proposed method achieves remarkable average rates for training/test/validation of 98.95/99.36/99.57% (jaccard), 99.47/99.67/99.79% (dice), 100/99.91/99.91% (sensitivity), 98.47/99.23/99.85% (specificity), 99.38/99.63/99.87% (classification accuracy), and 98

  5. Validation tools for image segmentation

    NASA Astrophysics Data System (ADS)

    Padfield, Dirk; Ross, James

    2009-02-01

    A large variety of image analysis tasks require the segmentation of various regions in an image. For example, segmentation is required to generate accurate models of brain pathology that are important components of modern diagnosis and therapy. While the manual delineation of such structures gives accurate information, the automatic segmentation of regions such as the brain and tumors from such images greatly enhances the speed and repeatability of quantifying such structures. The ubiquitous need for such algorithms has led to a wide range of image segmentation algorithms with various assumptions, parameters, and robustness. The evaluation of such algorithms is an important step in determining their effectiveness. Therefore, rather than developing new segmentation algorithms, we here describe validation methods for segmentation algorithms. Using similarity metrics comparing the automatic to manual segmentations, we demonstrate methods for optimizing the parameter settings for individual cases and across a collection of datasets using the Design of Experiment framework. We then employ statistical analysis methods to compare the effectiveness of various algorithms. We investigate several region-growing algorithms from the Insight Toolkit and compare their accuracy to that of a separate statistical segmentation algorithm. The segmentation algorithms are used with their optimized parameters to automatically segment the brain and tumor regions in MRI images of 10 patients. The validation tools indicate that none of the ITK algorithms studied outperforms the statistical segmentation algorithm with statistical significance, although they perform reasonably well considering their simplicity.

  6. A new method named as Segment-Compound method of baffle design

    NASA Astrophysics Data System (ADS)

    Qin, Xing; Yang, Xiaoxu; Gao, Xin; Liu, Xishuang

    2017-02-01

    As observation demands increase, so do demands on lens imaging quality. A Segment-Compound baffle design method is proposed in this paper. Three traditional methods of baffle design are characterized as Inside-to-Outside, Outside-to-Inside, and Mirror Symmetry. Using a transmission-type optical system, the four methods were each applied to design a stray-light suppression structure for it. The structures were then modeled and simulated with SolidWorks, CAXA, and TracePro, and point source transmittance (PST) curves were obtained to describe their performance. The results show that the Segment-Compound method suppresses stray light more effectively. Moreover, it is easy to implement and requires no special materials.

  7. Lithography-induced limits to scaling of design quality

    NASA Astrophysics Data System (ADS)

    Kahng, Andrew B.

    2014-03-01

    Quality and value of an IC product are functions of power, performance, area, cost and reliability. The forthcoming 2013 ITRS roadmap observes that while manufacturers continue to enable potential Moore's Law scaling of layout densities, the "realizable" scaling in competitive products has for some years been significantly less. In this paper, we consider aspects of the question, "To what extent should this scaling gap be blamed on lithography?" Non-ideal scaling of layout densities has been attributed to (i) layout restrictions associated with multi-patterning technologies (SADP, LELE, LELELE), as well as (ii) various ground rule and layout style choices that stem from misalignment, reliability, variability, device architecture, and electrical performance vs. power constraints. Certain impacts seem obvious, e.g., loss of 2D flexibility and new line-end placement constraints with SADP, or algorithmically intractable layout stitching and mask coloring formulations with LELELE. However, these impacts may well be outweighed by weaknesses in design methodology and tooling. Arguably, the industry has entered a new era in which many new factors - (i) standard-cell library architecture, and layout guardbanding for automated place-and-route; (ii) performance model guardbanding and signoff analyses; (iii) physical design and manufacturing handoff algorithms spanning detailed placement and routing, stitching and RET; and (iv) reliability guardbanding - all contribute, hand in hand with lithography, to a newly-identified "design capability gap". How specific aspects of process and design enablements limit the scaling of design quality is a fundamental question whose answer must guide future R&D investment at the design-manufacturing interface.

  8. Importance of reporting segmental bowel preparation scores during colonoscopy in clinical practice

    PubMed Central

    Jain, Deepanshu; Momeni, Mojdeh; Krishnaiah, Mahesh; Anand, Sury; Singhal, Shashideep

    2015-01-01

    AIM: To evaluate the impact of reporting bowel preparation using the Boston Bowel Preparation Scale (BBPS) in clinical practice. METHODS: The study was a prospective observational cohort study which enrolled subjects reporting for screening colonoscopy. All subjects received a gallon of polyethylene glycol as the bowel preparation regimen. After colonoscopy the endoscopists determined the quality of bowel preparation using the BBPS. Segmental scores were combined to calculate a composite BBPS. The site and size of the polyps detected were recorded. Pathology reports were reviewed to determine advanced adenoma detection rates (AADR). Segmental AADRs were calculated and categorized based on the segmental BBPS to determine the differential impact of bowel prep on AADR. RESULTS: Three hundred and sixty subjects were enrolled in the study with a mean age of 59.2 years, 36.3% males and 63.8% females. Four subjects with incomplete colonoscopy due to a BBPS of 0 in any segment were excluded. Based on composite BBPS, subjects were divided into 3 groups; Group-0 (poor bowel prep, BBPS 0-3) n = 26 (7.3%), Group-1 (suboptimal bowel prep, BBPS 4-6) n = 121 (34%) and Group-2 (adequate bowel prep, BBPS 7-9) n = 209 (58.7%). AADR showed a linear trend through Groups 0 to 2, with an AADR of 3.8%, 14.8% and 16.7% respectively. Also seen was a linear increasing trend in segmental AADR with improvement in segmental BBPS. There were statistically significant differences in AADR between Groups 0 and 2 (3.8% vs 16.7%, P < 0.05), Groups 1 and 2 (14.8% vs 16.7%, P < 0.05) and Groups 0 and 1 (3.8% vs 14.8%, P < 0.05). The χ2 method was used to compute P values for determining statistical significance. CONCLUSION: Segmental AADRs correlate with segmental BBPS. It is thus valuable to report segmental BBPS in colonoscopy reports in clinical practice. PMID:25852286
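    The composite BBPS used above is simply the sum of three segmental scores (right, transverse and left colon, each rated 0-3), binned into the study's three preparation-quality groups. A minimal sketch of that scoring logic (function names are illustrative, not from the paper):

```python
def composite_bbps(right, transverse, left):
    """Sum the three segmental BBPS scores (each 0-3) into a composite 0-9 score."""
    for score in (right, transverse, left):
        if not 0 <= score <= 3:
            raise ValueError("segmental BBPS scores range from 0 to 3")
    return right + transverse + left

def prep_group(composite):
    """Bin a composite BBPS score into the study's preparation-quality groups."""
    if composite <= 3:
        return "Group-0 (poor)"
    if composite <= 6:
        return "Group-1 (suboptimal)"
    return "Group-2 (adequate)"

# Example: a well-prepared right and transverse colon, left colon scored 2
print(prep_group(composite_bbps(3, 3, 2)))  # Group-2 (adequate)
```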

  9. Importance of reporting segmental bowel preparation scores during colonoscopy in clinical practice.

    PubMed

    Jain, Deepanshu; Momeni, Mojdeh; Krishnaiah, Mahesh; Anand, Sury; Singhal, Shashideep

    2015-04-07

    To evaluate the impact of reporting bowel preparation using the Boston Bowel Preparation Scale (BBPS) in clinical practice. The study was a prospective observational cohort study which enrolled subjects reporting for screening colonoscopy. All subjects received a gallon of polyethylene glycol as the bowel preparation regimen. After colonoscopy the endoscopists determined the quality of bowel preparation using the BBPS. Segmental scores were combined to calculate a composite BBPS. The site and size of the polyps detected were recorded. Pathology reports were reviewed to determine advanced adenoma detection rates (AADR). Segmental AADRs were calculated and categorized based on the segmental BBPS to determine the differential impact of bowel prep on AADR. Three hundred and sixty subjects were enrolled in the study with a mean age of 59.2 years, 36.3% males and 63.8% females. Four subjects with incomplete colonoscopy due to a BBPS of 0 in any segment were excluded. Based on composite BBPS, subjects were divided into 3 groups; Group-0 (poor bowel prep, BBPS 0-3) n = 26 (7.3%), Group-1 (suboptimal bowel prep, BBPS 4-6) n = 121 (34%) and Group-2 (adequate bowel prep, BBPS 7-9) n = 209 (58.7%). AADR showed a linear trend through Groups 0 to 2, with an AADR of 3.8%, 14.8% and 16.7% respectively. Also seen was a linear increasing trend in segmental AADR with improvement in segmental BBPS. There were statistically significant differences in AADR between Groups 0 and 2 (3.8% vs 16.7%, P < 0.05), Groups 1 and 2 (14.8% vs 16.7%, P < 0.05) and Groups 0 and 1 (3.8% vs 14.8%, P < 0.05). The χ2 method was used to compute P values for determining statistical significance. Segmental AADRs correlate with segmental BBPS. It is thus valuable to report segmental BBPS in colonoscopy reports in clinical practice.

  10. Integration of Sparse Multi-modality Representation and Geometrical Constraint for Isointense Infant Brain Segmentation

    PubMed Central

    Wang, Li; Shi, Feng; Li, Gang; Lin, Weili; Gilmore, John H.; Shen, Dinggang

    2014-01-01

    Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination process. During the first year of life, the signal contrast between white matter (WM) and gray matter (GM) in MR images undergoes inverse changes. In particular, the inversion of WM/GM signal contrast appears around 6–8 months of age, where brain tissues appear isointense and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a novel segmentation method to address the above-mentioned challenge based on the sparse representation of the complementary tissue distribution information from T1, T2 and diffusion-weighted images. Specifically, we first derive an initial segmentation from a library of aligned multi-modality images with ground-truth segmentations by using sparse representation in a patch-based fashion. The segmentation is further refined by the integration of the geometrical constraint information. The proposed method was evaluated on 22 6-month-old training subjects using leave-one-out cross-validation, as well as 10 additional infant testing subjects, showing superior results in comparison to other state-of-the-art methods. PMID:24505729
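    The patch-based labelling idea above can be illustrated in a drastically simplified form: for each target patch, look up the most similar patch in the library of aligned, ground-truth-labelled images and transfer its label. The sketch below uses nearest-neighbour matching as a stand-in for the paper's sparse representation over the whole library; all names and data are illustrative:

```python
def transfer_label(patch, library):
    """Return the tissue label of the most similar library patch.

    patch:   tuple of voxel intensities
    library: list of (patch, label) pairs with ground-truth labels
    Similarity is plain sum of squared differences; the paper instead
    reconstructs the patch sparsely from the whole library.
    """
    def ssd(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    best_patch, best_label = min(library, key=lambda entry: ssd(patch, entry[0]))
    return best_label

# Tiny example: 3-voxel "patches" labelled white matter (WM) or gray matter (GM)
library = [((0.9, 0.8, 0.9), "WM"), ((0.3, 0.2, 0.3), "GM")]
print(transfer_label((0.85, 0.8, 0.95), library))  # WM
```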

  11. Integration of sparse multi-modality representation and geometrical constraint for isointense infant brain segmentation.

    PubMed

    Wang, Li; Shi, Feng; Li, Gang; Lin, Weili; Gilmore, John H; Shen, Dinggang

    2013-01-01

    Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination process. During the first year of life, the signal contrast between white matter (WM) and gray matter (GM) in MR images undergoes inverse changes. In particular, the inversion of WM/GM signal contrast appears around 6-8 months of age, where brain tissues appear isointense and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a novel segmentation method to address the above-mentioned challenge based on the sparse representation of the complementary tissue distribution information from T1, T2 and diffusion-weighted images. Specifically, we first derive an initial segmentation from a library of aligned multi-modality images with ground-truth segmentations by using sparse representation in a patch-based fashion. The segmentation is further refined by the integration of the geometrical constraint information. The proposed method was evaluated on 22 6-month-old training subjects using leave-one-out cross-validation, as well as 10 additional infant testing subjects, showing superior results in comparison to other state-of-the-art methods.

  12. Crowdsourcing for identification of polyp-free segments in virtual colonoscopy videos

    NASA Astrophysics Data System (ADS)

    Park, Ji Hwan; Mirhosseini, Seyedkoosha; Nadeem, Saad; Marino, Joseph; Kaufman, Arie; Baker, Kevin; Barish, Matthew

    2017-03-01

    Virtual colonoscopy (VC) allows a physician to virtually navigate within a reconstructed 3D colon model searching for colorectal polyps. Though VC is widely recognized as a highly sensitive and specific test for identifying polyps, one limitation is the reading time, which can take over 30 minutes per patient. Large amounts of the colon are often devoid of polyps, and identifying these polyp-free segments could substantially reduce the required reading time for the interrogating radiologist. To this end, we have tested the ability of the collective crowd intelligence of non-expert workers to identify polyp candidates and polyp-free regions. We presented twenty short videos flying through a segment of a virtual colon to each worker, and the crowd was asked to determine whether or not a possible polyp was observed within that video segment. We evaluated our framework on Amazon Mechanical Turk and found that the crowd was able to achieve a sensitivity of 80.0% and specificity of 86.5% in identifying video segments which contained a clinically proven polyp. Since each polyp appeared in multiple consecutive segments, all polyps were in fact identified. Using the crowd results as a first pass, 80% of the video segments could in theory be skipped by the radiologist, equating to a significant time saving and enabling more VC examinations to be performed.
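    The sensitivity and specificity figures reported above follow the standard definitions over per-segment crowd decisions. A minimal sketch; the confusion counts below are illustrative values consistent with the reported rates, not the study's actual counts:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP).

    TP: polyp-containing segments flagged by the crowd
    FN: polyp-containing segments the crowd missed
    TN: polyp-free segments correctly passed
    FP: polyp-free segments falsely flagged
    """
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts consistent with the reported 80.0% / 86.5%
sens, spec = sensitivity_specificity(tp=80, fn=20, tn=865, fp=135)
print(sens, spec)  # 0.8 0.865
```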

  13. Laparoscopic surgery of postero-lateral segments: a comparison between transthoracic and abdominal approach.

    PubMed

    Fuks, David; Gayet, Brice

    2015-06-01

    Lesions located in the postero-lateral part of the liver (segments 6 and 7) have been considered as poor candidates for a laparoscopic liver resection due to the limited visualization and difficulty in bleeding control. Although no comparison has been done between transthoracic and abdominal resection of tumors located in the postero-lateral segments, we propose a description of these different strategies, specifying the benefits as well as the disadvantages of the various approaches.

  14. Gap-free segmentation of vascular networks with automatic image processing pipeline.

    PubMed

    Hsu, Chih-Yang; Ghaffari, Mahsa; Alaraj, Ali; Flannery, Michael; Zhou, Xiaohong Joe; Linninger, Andreas

    2017-03-01

    Current image processing techniques capture large vessels reliably but often fail to preserve connectivity in bifurcations and small vessels. Imaging artifacts and noise can create gaps and discontinuities of intensity that hinder segmentation of vascular trees. However, topological analysis of vascular trees requires proper connectivity without gaps, loops or dangling segments. Proper tree connectivity is also important for high quality rendering of surface meshes for scientific visualization or 3D printing. We present a fully automated vessel enhancement pipeline with automated parameter settings for vessel enhancement of tree-like structures from customary imaging sources, including 3D rotational angiography, magnetic resonance angiography, magnetic resonance venography, and computed tomography angiography. The output of the filter pipeline is a vessel-enhanced image which is ideal for generating anatomically consistent network representations of the cerebral angioarchitecture for further topological or statistical analysis. The filter pipeline combined with computational modeling can potentially improve computer-aided diagnosis of cerebrovascular diseases by delivering biometrics and anatomy of the vasculature. It may serve as the first step in fully automatic epidemiological analysis of large clinical datasets. The automatic analysis would enable rigorous statistical comparison of biometrics in subject-specific vascular trees. The robust and accurate image segmentation using a validated filter pipeline would also eliminate the operator dependency that has been observed in manual segmentation. Moreover, manual segmentation is prohibitively time-consuming, given that vascular trees have thousands of segments and bifurcations, so interactive segmentation consumes excessive human resources. Subject-specific trees are a first step toward patient-specific hemodynamic simulations for assessing treatment outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
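    The tree-connectivity requirement above (no gaps, no loops) can be checked on an extracted centerline graph: a valid tree on n nodes has exactly n - 1 edges and is fully connected. A minimal sketch with breadth-first search; the graph representation is an assumption for illustration, not the paper's pipeline:

```python
from collections import deque

def is_tree(edges, root):
    """Check that an undirected segment graph is one connected, loop-free tree.

    edges: list of (node, node) pairs; root: any node of the graph.
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    # a tree on n nodes has exactly n - 1 edges (rules out loops)
    if len(edges) != len(adj) - 1:
        return False
    seen = {root}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen) == len(adj)  # connected: no gaps, no stranded segments

print(is_tree([(0, 1), (1, 2), (1, 3)], root=0))  # True: a small Y-bifurcation
print(is_tree([(0, 1), (2, 3)], root=0))          # False: gap between two pieces
```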

  15. Impact assisted segmented cutterhead

    DOEpatents

    Morrell, Roger J.; Larson, David A.; Ruzzi, Peter L.

    1992-01-01

    An impact assisted segmented cutterhead device is provided for cutting various surfaces, from coal to granite. The device comprises a plurality of cutting bit segments deployed in side-by-side relationship to form a continuous cutting face, and a plurality of impactors individually associated with respective cutting bit segments. An impactor rod of each impactor connects that impactor to the corresponding cutting bit segment. A plurality of shock mounts dampen the vibration from the associated impactors. Mounting brackets are used to mount the cutterhead to a base machine.

  16. Multiresolution texture models for brain tumor segmentation in MRI.

    PubMed

    Iftekharuddin, Khan M; Ahmed, Shaheen; Hossen, Jakir

    2011-01-01

    In this study we discuss different types of texture features, such as Fractal Dimension (FD) and multifractional Brownian motion (mBm), for estimating random structures and varying appearance of brain tissues and tumors in magnetic resonance images (MRI). We use different selection techniques including Kullback-Leibler Divergence (KLD) for ranking different texture and intensity features. We then exploit graph cut, self-organizing maps (SOM) and expectation maximization (EM) techniques to fuse selected features for brain tumor segmentation in multimodality T1, T2, and FLAIR MRI. We use different similarity metrics to evaluate the quality and robustness of these selected features for tumor segmentation in MRI for real pediatric patients. We also demonstrate a non-patient-specific automated tumor prediction scheme by using improved AdaBoost classification based on these image features.

  17. The elastic ratio: introducing curvature into ratio-based image segmentation.

    PubMed

    Schoenemann, Thomas; Masnou, Simon; Cremers, Daniel

    2011-09-01

    We present the first ratio-based image segmentation method that allows imposing curvature regularity of the region boundary. Our approach is a generalization of the ratio framework pioneered by Jermyn and Ishikawa so as to allow penalty functions that take into account the local curvature of the curve. The key idea is to cast the segmentation problem as one of finding cyclic paths of minimal ratio in a graph where each graph node represents a line segment. Among ratios whose discrete counterparts can be globally minimized with our approach, we focus in particular on the elastic ratio [Formula: see text] that depends, given an image I, on the oriented boundary C of the segmented region candidate. Minimizing this ratio amounts to finding a curve, neither small nor too curvy, through which the brightness flux is maximal. We prove the existence of minimizers for this criterion among continuous curves with mild regularity assumptions. We also prove that the discrete minimizers provided by our graph-based algorithm converge, as the resolution increases, to continuous minimizers. In contrast to most existing segmentation methods with computable and meaningful, i.e., nondegenerate, global optima, the proposed approach is fully unsupervised in the sense that it does not require any kind of user input such as seed nodes. Numerical experiments demonstrate that curvature regularity allows substantial improvement of the quality of segmentations. Furthermore, our results allow drawing conclusions about global optima of a parameterization-independent version of the snakes functional: the proposed algorithm allows determining parameter values where the functional has a meaningful solution and simultaneously provides the corresponding global solution.

  18. Rediscovering market segmentation.

    PubMed

    Yankelovich, Daniel; Meer, David

    2006-02-01

    In 1964, Daniel Yankelovich introduced in the pages of HBR the concept of nondemographic segmentation, by which he meant the classification of consumers according to criteria other than age, residence, income, and such. The predictive power of marketing studies based on demographics was no longer strong enough to serve as a basis for marketing strategy, he argued. Buying patterns had become far better guides to consumers' future purchases. In addition, properly constructed nondemographic segmentations could help companies determine which products to develop, which distribution channels to sell them in, how much to charge for them, and how to advertise them. But more than 40 years later, nondemographic segmentation has become just as unenlightening as demographic segmentation had been. Today, the technique is used almost exclusively to fulfill the needs of advertising, which it serves mainly by populating commercials with characters that viewers can identify with. It is true that psychographic types like "High-Tech Harry" and "Joe Six-Pack" may capture some truth about real people's lifestyles, attitudes, self-image, and aspirations. But they are no better than demographics at predicting purchase behavior. Thus they give corporate decision makers very little idea of how to keep customers or capture new ones. Now, Daniel Yankelovich returns to these pages, with consultant David Meer, to argue the case for a broad view of nondemographic segmentation. They describe the elements of a smart segmentation strategy, explaining how segmentations meant to strengthen brand identity differ from those capable of telling a company which markets it should enter and what goods to make. And they introduce their "gravity of decision spectrum", a tool that focuses on the form of consumer behavior that should be of the greatest interest to marketers--the importance that consumers place on a product or product category.

  19. A prior feature SVM-MRF based method for mouse brain segmentation.

    PubMed

    Wu, Teresa; Bae, Min Hyeok; Zhang, Min; Pan, Rong; Badea, Alexandra

    2012-02-01

    We introduce an automated method, called prior feature Support Vector Machine-Markov Random Field (pSVMRF), to segment three-dimensional mouse brain Magnetic Resonance Microscopy (MRM) images. Our earlier work, extended MRF (eMRF), integrated Support Vector Machine (SVM) and Markov Random Field (MRF) approaches, leading to improved segmentation accuracy; however, the computation of eMRF is very expensive, which may limit its segmentation performance and robustness. In this study pSVMRF reduces training and testing time for SVM, while boosting segmentation performance. Unlike the eMRF approach, where MR intensity information and location priors are linearly combined, pSVMRF combines this information in a nonlinear fashion, and enhances the discriminative ability of the algorithm. We validate the proposed method using MR imaging of unstained and actively stained mouse brain specimens, and compare segmentation accuracy with two existing methods: eMRF and MRF. C57BL/6 mice are used for training and testing, using cross validation. For formalin fixed C57BL/6 specimens, pSVMRF outperforms both eMRF and MRF. The segmentation accuracy for C57BL/6 brains, stained or not, was similar for larger structures like the hippocampus and caudate putamen (~87%), but increased substantially for smaller regions like the substantia nigra (from 78.36% to 91.55%) and anterior commissure (from ~50% to ~80%). To test segmentation robustness against increased anatomical variability we add two strains, BXD29 and a transgenic mouse model of Alzheimer's disease. Segmentation accuracy for the new strains is ~80% for hippocampus and caudate putamen, indicating that pSVMRF is a promising approach for phenotyping mouse models of human brain disorders. Copyright © 2011 Elsevier Inc. All rights reserved.

  20. INDUCTION HEATING PROCESS FOR MELTING TITANIUM (COLD-WALL CRUCIBLES, SEGMENTED AND NON-SEGMENTED).

    DTIC Science & Technology

    system during melting tests. Three types of cold-wall crucibles were investigated. The first was a four-segment copper crucible, the second a non-segmented silicon bronze crucible, and the third a two-segment copper crucible coated with BeO. Attempts to melt titanium in an induction field in a cold

  1. Segmented ion thruster

    NASA Technical Reports Server (NTRS)

    Brophy, John R. (Inventor)

    1993-01-01

    Apparatus and methods for large-area, high-power ion engines comprise dividing a single engine into a combination of smaller discharge chambers (or segments) configured to operate as a single large-area engine. This segmented ion thruster (SIT) approach enables the development of 100-kW class argon ion engines for operation at a specific impulse of 10,000 s. A combination of six 30-cm diameter ion chambers operating as a single engine can process over 100 kW. Such a segmented ion engine can be operated from a single power processor unit.

  2. Quality expectations and tolerance limits of trial master files (TMF) - Developing a risk-based approach for quality assessments of TMFs.

    PubMed

    Hecht, Arthur; Busch-Heidger, Barbara; Gertzen, Heiner; Pfister, Heike; Ruhfus, Birgit; Sanden, Per-Holger; Schmidt, Gabriele B

    2015-01-01

    This article addresses the question of when a trial master file (TMF) can be considered sufficiently accurate and complete: What attributes does the TMF need to have so that a clinical trial can be adequately reconstructed from documented data and procedures? Clinical trial sponsors face significant challenges in assembling the TMF, especially when dealing with large, international, multicenter studies; despite all newly introduced archiving techniques it is becoming more and more difficult to ensure that the TMF is complete. This is directly reflected in the number of inspection findings reported and published by the EMA in 2014. Based on quality risk management principles in clinical trials the authors defined the quality expectations for the different document types in a TMF and furthermore defined tolerance limits for missing documents. This publication provides guidance on what type of documents and processes are most important, and in consequence, indicates on which documents and processes trial team staff should focus in order to achieve a high-quality TMF. The members of this working group belong to the CQAG Group (Clinical Quality Assurance Germany) and are QA (quality assurance) experts (auditors or compliance functions) with long-term experience in the practical handling of TMFs.

  3. Segment fusion of ToF-SIMS images.

    PubMed

    Milillo, Tammy M; Miller, Mary E; Fischione, Remo; Montes, Angelina; Gardella, Joseph A

    2016-06-08

    The imaging capabilities of time-of-flight secondary ion mass spectrometry (ToF-SIMS) have not been used to their full potential in the analysis of polymer and biological samples. Imaging has been limited by the size of the dataset and the chemical complexity of the sample being imaged. Pixel- and segment-based image fusion algorithms commonly used in remote sensing, ecology, geography, and geology provide a way to improve spatial resolution and classification of biological images. In this study, a sample of Arabidopsis thaliana was treated with silver nanoparticles and imaged with ToF-SIMS. These images provide insight into the uptake mechanism for the silver nanoparticles into the plant tissue, giving new understanding of the mechanism of uptake of heavy metals in the environment. The Munechika algorithm was programmed in-house and applied to achieve pixel-based fusion, which improved the spatial resolution of the image obtained. Multispectral and quadtree segment- or region-based fusion algorithms were performed using eCognition software, a commercially available remote sensing software suite, and used to classify the images. The Munechika fusion improved the spatial resolution for the images containing silver nanoparticles, while the segment fusion allowed classification and fusion based on the tissue types in the sample, suggesting potential pathways for the uptake of the silver nanoparticles.

  4. SHERPA: an image segmentation and outline feature extraction tool for diatoms and other objects

    PubMed Central

    2014-01-01

    Background Light microscopic analysis of diatom frustules is widely used both in basic and applied research, notably taxonomy, morphometrics, water quality monitoring and paleo-environmental studies. In these applications, usually large numbers of frustules need to be identified and/or measured. Although there is a need for automation in these applications, and image processing and analysis methods supporting these tasks have previously been developed, they did not become widespread in diatom analysis. While methodological reports for a wide variety of methods for image segmentation, diatom identification and feature extraction are available, no single implementation combining a subset of these into a readily applicable workflow accessible to diatomists exists. Results The newly developed tool SHERPA offers a versatile image processing workflow focused on the identification and measurement of object outlines, handling all steps from image segmentation over object identification to feature extraction, and providing interactive functions for reviewing and revising results. Special attention was given to ease of use, applicability to a broad range of data and problems, and supporting high throughput analyses with minimal manual intervention. Conclusions Tested with several diatom datasets from different sources and of various compositions, SHERPA proved its ability to successfully analyze large amounts of diatom micrographs depicting a broad range of species. 
SHERPA is unique in combining the following features: application of multiple segmentation methods and selection of the one giving the best result for each individual object; identification of shapes of interest based on outline matching against a template library; quality scoring and ranking of resulting outlines supporting quick quality checking; extraction of a wide range of outline shape descriptors widely used in diatom studies and elsewhere; minimizing the need for, but enabling manual quality control and

  5. Understanding the optics to aid microscopy image segmentation.

    PubMed

    Yin, Zhaozheng; Li, Kang; Kanade, Takeo; Chen, Mei

    2010-01-01

    Image segmentation is essential for many automated microscopy image analysis systems. Rather than treating microscopy images as general natural images and rushing into the image processing warehouse for solutions, we propose to study a microscope's optical properties to model its image formation process first using phase contrast microscopy as an exemplar. It turns out that the phase contrast imaging system can be relatively well explained by a linear imaging model. Using this model, we formulate a quadratic optimization function with sparseness and smoothness regularizations to restore the "authentic" phase contrast images that directly correspond to specimen's optical path length without phase contrast artifacts such as halo and shade-off. With artifacts removed, high quality segmentation can be achieved by simply thresholding the restored images. The imaging model and restoration method are quantitatively evaluated on two sequences with thousands of cells captured over several days.
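    As the abstract notes, once halo and shade-off artifacts are removed the final segmentation step reduces to simple thresholding of the restored image. A minimal version of that step (the restoration itself is the paper's contribution and is not sketched here):

```python
def threshold_segment(restored, t):
    """Binary segmentation of a restored image (2D list of intensities).

    Pixels at or above threshold t become foreground (1), the rest background (0).
    """
    return [[1 if value >= t else 0 for value in row] for row in restored]

# Toy 2x3 "restored" image; bright pixels correspond to specimen regions
restored = [[0.1, 0.9, 0.8],
            [0.2, 0.95, 0.1]]
print(threshold_segment(restored, 0.5))  # [[0, 1, 1], [0, 1, 0]]
```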

  6. Segment Specification for the Payload Segment of the Reusable Reentry Satellite: Rodent Module Version

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The Reusable Reentry Satellite (RRS) System is composed of the payload segment (PS), vehicle segment (VS), and mission support (MS) segments. This specification establishes the performance, design, development, and test requirements for the RRS Rodent Module (RM).

  7. Recreational Water Quality Criteria Limits

    EPA Pesticide Factsheets

    This set of Frequently Asked Questions (FAQ) provides an overview of NPDES permitting applicable to continuous dischargers (such as POTWs) based on water quality standards for pathogens and pathogen indicators associated with fecal contamination.

  8. Automatic segmentation of cerebral white matter hyperintensities using only 3D FLAIR images.

    PubMed

    Simões, Rita; Mönninghoff, Christoph; Dlugaj, Martha; Weimar, Christian; Wanke, Isabel; van Cappellen van Walsum, Anne-Marie; Slump, Cornelis

    2013-09-01

    Magnetic Resonance (MR) white matter hyperintensities have been shown to predict an increased risk of developing cognitive decline. However, their actual role in the conversion to dementia is still not fully understood. Automatic segmentation methods can help in the screening and monitoring of Mild Cognitive Impairment patients who take part in large population-based studies. Most existing segmentation approaches use multimodal MR images. However, multiple acquisitions represent a limitation in terms of both patient comfort and computational complexity of the algorithms. In this work, we propose an automatic lesion segmentation method that uses only three-dimensional fluid-attenuated inversion recovery (FLAIR) images. We use a modified context-sensitive Gaussian mixture model to determine voxel class probabilities, followed by correction of FLAIR artifacts. We evaluate the method against the manual segmentation performed by an experienced neuroradiologist and compare the results with other unimodal segmentation approaches. Finally, we apply our method to the segmentation of multiple sclerosis lesions by using a publicly available benchmark dataset. Results show a similar performance to other state-of-the-art multimodal methods, as well as to the human rater. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Why Segmentation Matters: Experience-Driven Segmentation Errors Impair "Morpheme" Learning

    ERIC Educational Resources Information Center

    Finn, Amy S.; Hudson Kam, Carla L.

    2015-01-01

    We ask whether an adult learner's knowledge of their native language impedes statistical learning in a new language beyond just word segmentation (as previously shown). In particular, we examine the impact of native-language word-form phonotactics on learners' ability to segment words into their component morphemes and learn phonologically…

  10. Oscillatory network with self-organized dynamical connections for synchronization-based image segmentation.

    PubMed

    Kuzmina, Margarita; Manykin, Eduard; Surina, Irina

    2004-01-01

    An oscillatory network of columnar architecture located in a 3D spatial lattice was recently designed by the authors as an oscillatory model of the brain's visual cortex. Each network oscillator is a relaxational neural oscillator whose internal dynamics are tuned by local visual image characteristics, namely brightness and elementary bar orientation. It can demonstrate either an activity state (stable undamped oscillations) or "silence" (quickly damped oscillations). Self-organized nonlocal dynamical connections between oscillators depend on oscillator activity levels and on the orientations of cortical receptive fields. The network operates by settling into a state of clusterized synchronization. At the current stage, grey-level image segmentation tasks are carried out by a 2D oscillatory network obtained as a limiting version of the source model. With the addition of network coupling-strength control, the reduced 2D network performs synchronization-based image segmentation. New results on the segmentation of brightness and texture images presented in the paper demonstrate accurate network performance and informative visualization of segmentation results, inherent in the model.
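    The synchronization principle can be sketched with a toy network of Kuramoto-style phase oscillators (a far simpler stand-in for the paper's relaxational oscillators) in which coupling exists only between pixels of similar brightness, so each brightness cluster phase-locks separately:

```python
import numpy as np

# Toy synchronization-based segmentation: one phase oscillator per pixel, coupled
# only to pixels of similar brightness (a stand-in for the paper's self-organized
# dynamical connections).
rng = np.random.default_rng(0)
brightness = np.array([0.1] * 8 + [0.9] * 8)   # two "segments" of 8 pixels each
n = len(brightness)
coupling = (np.abs(brightness[:, None] - brightness[None, :]) < 0.2).astype(float)
np.fill_diagonal(coupling, 0.0)

theta = rng.uniform(0, 2 * np.pi, n)           # random initial phases
k, dt = 2.0, 0.05
for _ in range(400):                           # Euler integration of the phase dynamics
    dtheta = (k / n) * (coupling * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = (theta + dt * dtheta) % (2 * np.pi)

# Pixels whose phases have locked together belong to the same segment; the order
# parameter |mean(exp(i*theta))| of each group approaches 1 when it synchronizes.
print(round(abs(np.exp(1j * theta[:8]).mean()), 3),
      round(abs(np.exp(1j * theta[8:]).mean()), 3))
```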

  11. Lung segment geometry study: simulation of largest possible tumours that fit into bronchopulmonary segments.

    PubMed

    Welter, S; Stöcker, C; Dicken, V; Kühl, H; Krass, S; Stamatis, G

    2012-03-01

    Segmental resection in stage I non-small cell lung cancer (NSCLC) has been well described and is considered to have survival rates similar to lobectomy, but with increased rates of local tumour recurrence due to inadequate parenchymal margins. Consequently, segmentectomy is currently performed only when the tumour is smaller than 2 cm. Three-dimensional reconstructions of bronchopulmonary segments were generated from 11 thin-slice CT scans, and virtual spherical tumours were placed over the segments, respecting all segmental borders. Next, virtual parenchymal safety margins of 2 cm and 3 cm were subtracted and the size of the remaining tumour calculated. The maximum tumour diameters with a 30-mm parenchymal safety margin ranged from 26.1 mm in right-sided segments 7 + 8 to 59.8 mm in the left apical segments 1-3. Using three-dimensional reconstructions of lung CT scans, we demonstrated that segmentectomy or resection of segmental groups should be feasible with adequate margins, even for larger tumours in selected cases.

  12. Pulmonary airways tree segmentation from CT examinations using adaptive volume of interest

    NASA Astrophysics Data System (ADS)

    Park, Sang Cheol; Kim, Won Pil; Zheng, Bin; Leader, Joseph K.; Pu, Jiantao; Tan, Jun; Gur, David

    2009-02-01

    Airways tree segmentation is an important step in quantitatively assessing the severity of and changes in several lung diseases such as chronic obstructive pulmonary disease (COPD), asthma, and cystic fibrosis. It can also be used in guiding bronchoscopy. The purpose of this study is to develop an automated scheme for segmenting the airways tree structure depicted on chest CT examinations. After lung volume segmentation, the scheme defines the first cylinder-like volume of interest (VOI) using a series of images depicting the trachea. The scheme then iteratively defines and adds subsequent VOIs using a region growing algorithm combined with adaptively determined thresholds in order to trace possible sections of airways located inside the combined VOI in question. The airway tree segmentation process is automatically terminated after the scheme assesses all defined VOIs in the iteratively assembled VOI list. In this preliminary study, ten CT examinations with 1.25 mm section thickness and two different CT image reconstruction kernels ("bone" and "standard") were selected and used to test the proposed airways tree segmentation scheme. The experimental results showed that (1) adopting this approach effectively prevented the scheme from infiltrating into the parenchyma, (2) the proposed method segmented the airways trees reasonably accurately, with a lower false-positive identification rate compared with other previously reported schemes based on 2-D image segmentation and data analyses, and (3) the proposed adaptive, iterative threshold selection method for the region growing step in each identified VOI enabled the scheme to segment the airways trees reliably to the 4th generation in this limited dataset, with successful segmentation up to the 5th generation in a fraction of the airway tree branches.
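    The region-growing core of such a scheme might look like the following sketch on a synthetic volume; the adaptive threshold selection and cylinder-like VOI bookkeeping described above are omitted, and the Hounsfield values are illustrative:

```python
import numpy as np
from collections import deque

def region_grow(vol, seed, threshold):
    """3D 6-connected region growing: collect voxels darker than threshold (air ~ low HU)."""
    grown = np.zeros(vol.shape, dtype=bool)
    grown[seed] = True
    q = deque([seed])
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < vol.shape[0] and 0 <= ny < vol.shape[1]
                    and 0 <= nx < vol.shape[2]
                    and not grown[nz, ny, nx] and vol[nz, ny, nx] < threshold):
                grown[nz, ny, nx] = True
                q.append((nz, ny, nx))
    return grown

# Synthetic volume: parenchyma at -700 HU, a 3x3 air-filled "trachea" at -1000 HU.
vol = np.full((20, 10, 10), -700.0)
vol[:, 4:7, 4:7] = -1000.0
mask = region_grow(vol, seed=(0, 5, 5), threshold=-900.0)
print(int(mask.sum()))  # 20 * 3 * 3 = 180 voxels
```

    Raising the threshold toward parenchymal values makes the grown region leak out of the airway, which is the failure mode the adaptive threshold in the paper is designed to avoid.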

  13. Automatic segmentation in three-dimensional analysis of fibrovascular pigment epithelial detachment using high-definition optical coherence tomography.

    PubMed

    Ahlers, C; Simader, C; Geitzenauer, W; Stock, G; Stetson, P; Dastmalchi, S; Schmidt-Erfurth, U

    2008-02-01

    A limited number of scans compromises the ability of conventional optical coherence tomography (OCT) to track chorioretinal disease in its full extension. Failures in edge-detection algorithms falsify the results of retinal mapping even further. High-definition OCT (HD-OCT) is based on raster scanning and was used to visualise the localisation and volume of intra- and sub-retinal-pigment-epithelial (RPE) changes in fibrovascular pigment epithelial detachments (fPED). 22 eyes with fPED were imaged using a frequency-domain, high-speed prototype of the Cirrus HD-OCT. The axial resolution was 6 μm, and the scanning speed was 25 kA-scans/s. Two different scanning patterns covering an area of 6 x 6 mm in the macular retina were compared. Three-dimensional topographic reconstructions and volume calculations were performed using MATLAB-based automatic segmentation software. Detailed information about the layer-specific distribution of fluid accumulation and volumetric measurements can be obtained for retinal and sub-RPE volumes. Both raster scans show a high correlation (p<0.01; R2>0.89) of measured values, i.e., PED volume/area, retinal volume and mean retinal thickness. Quality control of the automatic segmentation revealed reasonable results in over 90% of the examinations. Automatic segmentation allows for detailed quantitative and topographic analysis of the RPE and the overlying retina. In fPED, the 128 x 512 scanning pattern shows mild advantages compared with the 256 x 256 scan. Together with the ability for automatic segmentation, HD-OCT clearly improves the clinical monitoring of chorioretinal disease by adding relevant new parameters. HD-OCT is likely capable of enhancing the understanding of pathophysiology and the benefits of treatment for current anti-CNV strategies in the future.

  14. Survey of contemporary trends in color image segmentation

    NASA Astrophysics Data System (ADS)

    Vantaram, Sreenath Rao; Saber, Eli

    2012-10-01

    In recent years, the acquisition of image and video information for processing, analysis, understanding, and exploitation of the underlying content in various applications, ranging from remote sensing to biomedical imaging, has grown at an unprecedented rate. Analysis by human observers is quite laborious, tiresome, and time consuming, if not infeasible, given the large and continuously rising volume of data. Hence the need for systems capable of automatically and effectively analyzing the aforementioned imagery for a variety of uses that span the spectrum from homeland security to elderly care. In order to achieve the above, tools such as image segmentation provide the appropriate foundation for expediting and improving the effectiveness of subsequent high-level tasks by providing a condensed and pertinent representation of image information. We provide a comprehensive survey of color image segmentation strategies adopted over the last decade, though notable contributions in the gray scale domain will also be discussed. Our taxonomy of segmentation techniques is sampled from a wide spectrum of spatially blind (or feature-based) approaches such as clustering and histogram thresholding as well as spatially guided (or spatial domain-based) methods such as region growing/splitting/merging, energy-driven parametric/geometric active contours, supervised/unsupervised graph cuts, and watersheds, to name a few. In addition, qualitative and quantitative results of prominent algorithms on several images from the Berkeley segmentation dataset are shown in order to furnish a fair indication of the current quality of the state of the art. Finally, we provide a brief discussion on our current perspective of the field as well as its associated future trends.

  15. Multi-scale image segmentation method with visual saliency constraints and its application

    NASA Astrophysics Data System (ADS)

    Chen, Yan; Yu, Jie; Sun, Kaimin

    2018-03-01

    The proposed method performs better for texture image segmentation than traditional multi-scale image segmentation methods, and it gives priority control to the saliency objects of interest. The method has been used in image quality evaluation, scattered residential area extraction, sparse forest extraction and other applications to verify its validity. All applications showed good results.

  16. Evolution of fine scale segmentation at intermediate ridges: example of Alarcon Rise and Endeavour Segment.

    NASA Astrophysics Data System (ADS)

    Le Saout, M.; Clague, D. A.; Paduan, J. B.; Caress, D. W.

    2016-12-01

    Mid-ocean ridges are marked by segmentation of the axis and of the underlying magmatic system. Fine-scale segmentation has mainly been studied along fast-spreading ridges. Here we analyze the evolution of the 3rd- and 4th-order segmentation along two intermediate spreading centers characterized by contrasting morphologies. Alarcon Rise, with a full spreading rate of 49 mm/yr, is characterized by an axial high and a relatively narrow axial summit trough. Endeavour Segment has a spreading rate of 52.5 mm/yr and exhibits a wide axial valley affected by numerous faults. These two ridges are characterized by high and low volcanic periods, respectively. The segmentation is analyzed using high-resolution bathymetric cross-sections perpendicular to the axes. These profiles are 1200 m long for Alarcon Rise and 2400 m long at Endeavour Segment, and are spaced 100 m apart. The discontinuity order is based on variations, on either side of each offset, in (1) the geometry and orientation of the axial summit trough or graben, (2) the lava morphology, and (3) the distribution of hydrothermal vents. Alarcon Rise is marked by a recent southeastward jump in volcanic activity. Comparison between the current and previous segmentation reveals a rapid evolution of the 3rd-order segmentation in the most active part of the ridge, with a lengthening of the central 3rd-order segment of 8 km over 3-4 ky. However, no relation is observed in the 4th-order segmentation before and after the axis jump. Along Endeavour, traces of the previous 3rd-order discontinuities are still perceptible on the walls of the graben. This 3rd-order segmentation has persisted at least during the last 4.5 ky. Indeed, it is visible in the distribution of the recent hydrothermal vents observed in the axial valley as well as in the segmentation of the axial magma lens. Analysis of the two ridges suggests that small-scale segmentation varies primarily during high magmatic phases.

  17. Combining watershed and graph cuts methods to segment organs at risk in radiotherapy

    NASA Astrophysics Data System (ADS)

    Dolz, Jose; Kirisli, Hortense A.; Viard, Romain; Massoptier, Laurent

    2014-03-01

    Computer-aided segmentation of anatomical structures in medical images is a valuable tool for efficient radiation therapy planning (RTP). As delineation errors strongly affect the radiation oncology treatment, it is crucial to delineate geometric structures accurately. In this paper, a semi-automatic segmentation approach for computed tomography (CT) images, based on watershed and graph-cuts methods, is presented. The watershed pre-segmentation groups small areas of similar intensities into homogeneous labels, which are subsequently used as input for the graph-cuts algorithm. This methodology does not require prior knowledge of the structure to be segmented; even so, it performs well on complex shapes and low-intensity structures. The presented method also allows the user to add foreground and background strokes in any of the three standard orthogonal views - axial, sagittal or coronal - making interaction with the algorithm easy and fast. The segmentation information is propagated within the whole volume, providing a spatially coherent result. The proposed algorithm has been evaluated on 9 CT volumes by comparing its segmentation performance over several organs - lungs, liver, spleen, heart and aorta - to manual delineations from experts. A Dice coefficient higher than 0.89 was achieved in every case, demonstrating that the proposed approach works well for all the anatomical structures analyzed. Given the quality of the results, the introduction of the proposed approach into the RTP process will be a helpful tool for organs-at-risk (OARs) segmentation.
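    The watershed pre-segmentation stage can be sketched as a marker-driven priority flood; the graph-cuts refinement that consumes these labels is omitted here, and the image is a toy two-basin example:

```python
import heapq
import numpy as np

def watershed(img, markers):
    """Marker-based watershed by priority flooding: lowest intensities are claimed first."""
    labels = markers.copy()
    heap = [(img[p], p) for p in zip(*np.nonzero(markers))]
    heapq.heapify(heap)
    while heap:
        _, (y, x) = heapq.heappop(heap)
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1] and labels[ny, nx] == 0:
                labels[ny, nx] = labels[y, x]   # neighbour joins this catchment basin
                heapq.heappush(heap, (img[ny, nx], (ny, nx)))
    return labels

# Two flat basins separated by a bright ridge at column 3; one marker per basin.
img = np.zeros((5, 7))
img[:, 3] = 10.0
markers = np.zeros((5, 7), dtype=int)
markers[2, 0], markers[2, 6] = 1, 2
labels = watershed(img, markers)
print(labels[2])
```

    Each resulting label would then become one node of the graph on which the min-cut is computed, which is far cheaper than building the graph over raw pixels.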

  18. A local segmentation parameter optimization approach for mapping heterogeneous urban environments using VHR imagery

    NASA Astrophysics Data System (ADS)

    Grippa, Tais; Georganos, Stefanos; Lennert, Moritz; Vanhuysse, Sabine; Wolff, Eléonore

    2017-10-01

    Mapping large heterogeneous urban areas using object-based image analysis (OBIA) remains challenging, especially with respect to the segmentation process. This can be explained both by the complex arrangement of heterogeneous land-cover classes and by the high diversity of urban patterns encountered throughout a scene. In this context, a single segmentation parameter may fail to produce satisfying segmentation results for the whole scene. Nonetheless, it is possible to subdivide the city into smaller local zones that are rather homogeneous in their urban pattern. These zones can then be used to optimize the segmentation parameter locally, instead of using the whole image or a single representative spatial subset. This paper assesses the contribution of a local approach to segmentation parameter optimization compared with a global approach. Ouagadougou, located in sub-Saharan Africa, is used as a case study. First, the whole scene is segmented using a single globally optimized segmentation parameter. Second, the city is subdivided into 283 local zones, homogeneous in terms of building size and building density. Each local zone is then segmented using a locally optimized segmentation parameter. Unsupervised segmentation parameter optimization (USPO), relying on an optimization function that tends to maximize both intra-object homogeneity and inter-object heterogeneity, is used to select the segmentation parameter automatically for both approaches. Finally, a land-use/land-cover classification is performed using the Random Forest (RF) classifier. The results reveal that the local approach outperforms the global one, especially by limiting confusion between buildings and neighboring bare soil.
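    The kind of optimization function USPO relies on can be illustrated with two commonly used ingredients: area-weighted intra-segment variance (homogeneity) and Moran's I over the segment-mean image (heterogeneity). This follows a widely cited formulation and may differ in detail from the tool used in the paper:

```python
import numpy as np

def weighted_variance(img, labels):
    """Area-weighted intra-segment variance: low values mean homogeneous objects."""
    segs = np.unique(labels)
    areas = np.array([(labels == s).sum() for s in segs])
    variances = np.array([img[labels == s].var() for s in segs])
    return float((areas * variances).sum() / areas.sum())

def morans_i(img, labels):
    """Global Moran's I of the per-pixel segment-mean image (4-neighbour contiguity).
    Values near +1 mean neighbouring objects are spectrally similar (undersegmentation)."""
    mean_img = np.zeros_like(img, dtype=float)
    for s in np.unique(labels):
        mean_img[labels == s] = img[labels == s].mean()
    x = mean_img - mean_img.mean()
    cross = (x[1:, :] * x[:-1, :]).sum() + (x[:, 1:] * x[:, :-1]).sum()
    n_pairs = x[1:, :].size + x[:, 1:].size
    return float(x.size * cross / (n_pairs * (x ** 2).sum()))

# A two-class image: the correct left/right split has zero intra-segment variance,
# while a wrong top/bottom split mixes both classes inside each object.
img = np.hstack([np.zeros((6, 4)), np.ones((6, 4))])
good = np.hstack([np.zeros((6, 4), int), np.ones((6, 4), int)])
bad = np.vstack([np.zeros((3, 8), int), np.ones((3, 8), int)])
print(weighted_variance(img, good), weighted_variance(img, bad))
```

    A USPO-style search would normalize the two criteria over a range of segmentation parameters and keep the parameter with the best combined score.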

  19. How does playing adapted sports affect quality of life of people with mobility limitations? Results from a mixed-method sequential explanatory study.

    PubMed

    Côté-Leclerc, Félix; Boileau Duchesne, Gabrielle; Bolduc, Patrick; Gélinas-Lafrenière, Amélie; Santerre, Corinne; Desrosiers, Johanne; Levasseur, Mélanie

    2017-01-25

    Occupations, including physical activity, are a strong determinant of health. However, mobility limitations can restrict opportunities to perform these occupations, which may affect quality of life. Some people will turn to adapted sports to meet their need to be involved in occupations. Little is known, however, about how participation in adapted sports affects the quality of life of people with mobility limitations. This study thus aimed to explore the influence of adapted sports on quality of life in adult wheelchair users. A mixed-method sequential explanatory design was used, including a quantitative and a qualitative component with a clinical research design. A total of 34 wheelchair users aged 18 to 62, who regularly played adapted sports, completed the Quality of Life Index (/30). Their scores were compared to those obtained by people of similar age without limitations (general population). Ten of the wheelchair users also participated in individual semi-structured interviews exploring their perceptions regarding how sports-related experiences affected their quality of life. The participants were 9 women and 25 men with paraplegia, the majority of whom worked and played an individual adapted sport (athletics, tennis or rugby) at the international or national level. People with mobility limitations who participated in adapted sports had a quality of life comparable to the group without limitations (21.9 ± 3.3 vs 22.3 ± 2.9 respectively), except for poorer family-related quality of life (21.0 ± 5.3 vs 24.1 ± 4.9 respectively). Based on the interviews, participants reported that the positive effect of adapted sports on the quality of life of people with mobility limitations operates mainly through the following: personal factors (behavior-related abilities and health), social participation (in general and through interpersonal relationships), and environmental factors (society's perceptions and support from the environment). Some contextual

  20. Multi-atlas segmentation enables robust multi-contrast MRI spleen segmentation for splenomegaly

    NASA Astrophysics Data System (ADS)

    Huo, Yuankai; Liu, Jiaqi; Xu, Zhoubing; Harrigan, Robert L.; Assad, Albert; Abramson, Richard G.; Landman, Bennett A.

    2017-02-01

    Non-invasive spleen volume estimation is essential in detecting splenomegaly. Magnetic resonance imaging (MRI) has been used to facilitate splenomegaly diagnosis in vivo. However, achieving accurate spleen volume estimation from MR images is challenging given the great inter-subject variance of human abdomens and the wide variety of clinical images/modalities. Multi-atlas segmentation has been shown to be a promising approach for handling heterogeneous data and difficult anatomical scenarios. In this paper, we propose to use multi-atlas segmentation frameworks for MRI spleen segmentation for splenomegaly. To the best of our knowledge, this is the first work that integrates multi-atlas segmentation for splenomegaly as seen on MRI. To address the particular concerns of spleen MRI, automated and novel semi-automated atlas selection approaches are introduced. The automated approach iteratively selects a subset of atlases using the selective and iterative method for performance level estimation (SIMPLE). To further control outliers, semi-automated craniocaudal-length-based SIMPLE atlas selection (L-SIMPLE) is proposed to introduce a spatial prior that guides the iterative atlas selection. A dataset from a clinical trial containing 55 MRI volumes (28 T1-weighted and 27 T2-weighted) was used to evaluate the different methods. Both automated and semi-automated methods achieved median DSC > 0.9. The outliers were alleviated by L-SIMPLE (≈1 min of manual effort per scan), which achieved a Pearson correlation of 0.9713 with the manual segmentation. The results demonstrate that multi-atlas segmentation is able to achieve accurate spleen segmentation from multi-contrast splenomegaly MRI scans.
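    The iterative selection idea behind SIMPLE can be sketched as follows: fuse the current atlas set by majority vote, score each atlas against the fusion by Dice overlap, and discard poor performers. The masks and cutoff below are invented for illustration, and the published procedure differs in detail:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def simple_select(atlas_masks, dice_cutoff=0.7, max_iters=10):
    """Iteratively discard atlases that disagree with the majority-vote fusion."""
    keep = list(range(len(atlas_masks)))
    for _ in range(max_iters):
        fusion = np.mean([atlas_masks[i] for i in keep], axis=0) >= 0.5
        scores = {i: dice(atlas_masks[i], fusion) for i in keep}
        survivors = [i for i in keep if scores[i] >= dice_cutoff]
        if survivors == keep or not survivors:
            break
        keep = survivors
    return keep

# Four registered atlas label maps agree on a square "spleen"; a fifth is misplaced.
atlases = []
for _ in range(4):
    m = np.zeros((20, 20), dtype=bool)
    m[5:15, 5:15] = True
    atlases.append(m)
outlier = np.zeros((20, 20), dtype=bool)
outlier[0:4, 0:4] = True
atlases.append(outlier)
print(simple_select(atlases))  # prints [0, 1, 2, 3]: the outlier (index 4) is dropped
```

    L-SIMPLE, as described in the abstract, additionally biases this loop with a spatial prior derived from the craniocaudal spleen length.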

  1. Multi-atlas Segmentation Enables Robust Multi-contrast MRI Spleen Segmentation for Splenomegaly.

    PubMed

    Huo, Yuankai; Liu, Jiaqi; Xu, Zhoubing; Harrigan, Robert L; Assad, Albert; Abramson, Richard G; Landman, Bennett A

    2017-02-11

    Non-invasive spleen volume estimation is essential in detecting splenomegaly. Magnetic resonance imaging (MRI) has been used to facilitate splenomegaly diagnosis in vivo. However, achieving accurate spleen volume estimation from MR images is challenging given the great inter-subject variance of human abdomens and the wide variety of clinical images/modalities. Multi-atlas segmentation has been shown to be a promising approach for handling heterogeneous data and difficult anatomical scenarios. In this paper, we propose to use multi-atlas segmentation frameworks for MRI spleen segmentation for splenomegaly. To the best of our knowledge, this is the first work that integrates multi-atlas segmentation for splenomegaly as seen on MRI. To address the particular concerns of spleen MRI, automated and novel semi-automated atlas selection approaches are introduced. The automated approach iteratively selects a subset of atlases using the selective and iterative method for performance level estimation (SIMPLE). To further control outliers, semi-automated craniocaudal-length-based SIMPLE atlas selection (L-SIMPLE) is proposed to introduce a spatial prior that guides the iterative atlas selection. A dataset from a clinical trial containing 55 MRI volumes (28 T1-weighted and 27 T2-weighted) was used to evaluate the different methods. Both automated and semi-automated methods achieved median DSC > 0.9. The outliers were alleviated by L-SIMPLE (≈1 min of manual effort per scan), which achieved a Pearson correlation of 0.9713 with the manual segmentation. The results demonstrate that multi-atlas segmentation is able to achieve accurate spleen segmentation from multi-contrast splenomegaly MRI scans.

  2. Comprehensive evaluation of an image segmentation technique for measuring tumor volume from CT images

    NASA Astrophysics Data System (ADS)

    Deng, Xiang; Huang, Haibin; Zhu, Lei; Du, Guangwei; Xu, Xiaodong; Sun, Yiyong; Xu, Chenyang; Jolly, Marie-Pierre; Chen, Jiuhong; Xiao, Jie; Merges, Reto; Suehling, Michael; Rinck, Daniel; Song, Lan; Jin, Zhengyu; Jiang, Zhaoxia; Wu, Bin; Wang, Xiaohong; Zhang, Shuai; Peng, Weijun

    2008-03-01

    Comprehensive quantitative evaluation of tumor segmentation techniques on large-scale clinical data sets is crucial for routine clinical use of CT-based tumor volumetry for cancer diagnosis and treatment response evaluation. In this paper, we present a systematic validation study of a semi-automatic image segmentation technique for measuring tumor volume from CT images. The segmentation algorithm was tested using clinical data of 200 tumors in 107 patients with liver, lung, lymphoma and other types of cancer. The performance was evaluated using both accuracy and reproducibility. The accuracy was assessed using 7 commonly used metrics that provide complementary information regarding the quality of the segmentation results. The reproducibility was measured by the variation of the volume measurements from 10 independent segmentations. The effect of disease type, lesion size and slice thickness of the image data on the accuracy measures was also analyzed. Our results demonstrate that the tumor segmentation algorithm showed good correlation with ground truth for all four lesion types (r = 0.97, 0.99, 0.97, 0.98, p < 0.0001 for liver, lung, lymphoma and other, respectively). The segmentation algorithm produces relatively reproducible volume measurements on all lesion types (coefficient of variation in the range of 10-20%). Our results show that the algorithm is insensitive to lesion size (coefficient of determination close to 0) and slice thickness of the image data (p > 0.90). The validation framework used in this study has the potential to facilitate the development of new tumor segmentation algorithms and to assist large-scale evaluation of segmentation techniques for other clinical applications.
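    Two of the reported measures, volume correlation with ground truth and the coefficient of variation across repeated segmentations, might be computed as follows (the volumes are hypothetical, not study data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between measured and ground-truth volumes."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

def coefficient_of_variation(volumes):
    """Reproducibility of repeated volume measurements: std / mean, in percent."""
    v = np.asarray(volumes, float)
    return float(100.0 * v.std(ddof=1) / v.mean())

# Hypothetical tumour volumes (mL): ground truth vs. a slightly noisy measurement,
# and ten repeated segmentations of one lesion.
truth = [12.0, 30.5, 7.2, 55.0, 21.3]
measured = [11.5, 31.0, 7.8, 53.2, 22.0]
repeats = [20.1, 21.4, 19.5, 20.8, 21.0, 19.9, 20.5, 21.2, 20.0, 20.3]
print(round(pearson_r(truth, measured), 3), round(coefficient_of_variation(repeats), 2))
```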

  3. Efficient graph-cut tattoo segmentation

    NASA Astrophysics Data System (ADS)

    Kim, Joonsoo; Parra, Albert; Li, He; Delp, Edward J.

    2015-03-01

    Law enforcement is interested in exploiting tattoos as an information source to identify, track and prevent gang-related crimes. Many tattoo image retrieval systems have been described. In a retrieval system tattoo segmentation is an important step for retrieval accuracy since segmentation removes background information in a tattoo image. Existing segmentation methods do not extract the tattoo very well when the background includes textures and color similar to skin tones. In this paper we describe a tattoo segmentation approach by determining skin pixels in regions near the tattoo. In these regions graph-cut segmentation using a skin color model and a visual saliency map is used to find skin pixels. After segmentation we determine which set of skin pixels are connected with each other that form a closed contour including a tattoo. The regions surrounded by the closed contours are considered tattoo regions. Our method segments tattoos well when the background includes textures and color similar to skin.

  4. Segmented-memory recurrent neural networks.

    PubMed

    Chen, Jinmiao; Chaudhari, Narendra S

    2009-08-01

    Conventional recurrent neural networks (RNNs) have difficulty learning long-term dependencies. To tackle this problem, we propose an architecture called the segmented-memory recurrent neural network (SMRNN). A symbolic sequence is broken into segments and then presented to the SMRNN as input, one symbol per cycle. The SMRNN uses separate internal states to store symbol-level and segment-level context. The symbol-level context is updated for each input symbol; the segment-level context is updated after each segment. The SMRNN is trained using an extended real-time recurrent learning algorithm. We test the performance of the SMRNN on the information-latching problem, the "two-sequence problem" and protein secondary structure (PSS) prediction. Our results indicate that the SMRNN performs better on long-term dependency problems than conventional RNNs. In addition, we theoretically analyze how the segmented memory of the SMRNN helps in learning long-term temporal dependencies, and we study the impact of the segment length.
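    The two-level state update can be sketched as a forward pass in which the symbol-level state changes at every step while the segment-level state changes only at segment boundaries. The dimensions, tanh cells, random weights, and the per-segment reset below are illustrative assumptions, and training is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_sym, d_seg, seg_len = 4, 8, 8, 5

# Randomly initialized weights for the two recurrences (no training here).
W_sx = rng.normal(0, 0.3, (d_sym, d_in))    # input  -> symbol state
W_ss = rng.normal(0, 0.3, (d_sym, d_sym))   # symbol -> symbol recurrence
W_gs = rng.normal(0, 0.3, (d_seg, d_sym))   # symbol -> segment state
W_gg = rng.normal(0, 0.3, (d_seg, d_seg))   # segment -> segment recurrence

def smrnn_forward(sequence):
    """Return the final segment-level state for a sequence of input vectors."""
    h_sym = np.zeros(d_sym)
    h_seg = np.zeros(d_seg)
    for t, x in enumerate(sequence, start=1):
        h_sym = np.tanh(W_sx @ x + W_ss @ h_sym)   # updated at every symbol
        if t % seg_len == 0:                       # updated once per segment
            h_seg = np.tanh(W_gs @ h_sym + W_gg @ h_seg)
            h_sym = np.zeros(d_sym)                # reset symbol context (one plausible choice)
    return h_seg

seq = rng.normal(size=(20, d_in))                  # 20 symbols = 4 segments of length 5
out = smrnn_forward(seq)
print(out.shape)
```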

  5. Using Predictability for Lexical Segmentation.

    PubMed

    Çöltekin, Çağrı

    2017-09-01

    This study investigates a strategy based on predictability of consecutive sub-lexical units in learning to segment a continuous speech stream into lexical units using computational modeling and simulations. Lexical segmentation is one of the early challenges during language acquisition, and it has been studied extensively through psycholinguistic experiments as well as computational methods. However, despite strong empirical evidence, the explicit use of predictability of basic sub-lexical units in models of segmentation is underexplored. This paper presents an incremental computational model of lexical segmentation for exploring the usefulness of predictability for lexical segmentation. We show that the predictability cue is a strong cue for segmentation. Contrary to earlier reports in the literature, the strategy yields state-of-the-art segmentation performance with an incremental computational model that uses only this particular cue in a cognitively plausible setting. The paper also reports an in-depth analysis of the model, investigating the conditions affecting the usefulness of the strategy. Copyright © 2016 Cognitive Science Society, Inc.
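    The predictability cue can be demonstrated with the classic forward transitional-probability computation over syllable bigrams; the two-word toy lexicon and the 0.9 threshold are invented for illustration, and the paper's incremental model is considerably more elaborate:

```python
import random
from collections import Counter

random.seed(3)
words = [("tu", "pi", "ro"), ("go", "la", "bu")]
stream = [syl for _ in range(200) for syl in random.choice(words)]

# Forward transitional probability TP(a -> b) = count(ab) / count(a).
unigrams = Counter(stream[:-1])
bigrams = Counter(zip(stream[:-1], stream[1:]))
tp = {pair: n / unigrams[pair[0]] for pair, n in bigrams.items()}

# Posit a word boundary wherever predictability drops below a threshold:
# within-word transitions have TP = 1.0, cross-boundary transitions ~0.5.
segments, current = [], [stream[0]]
for a, b in zip(stream[:-1], stream[1:]):
    if tp[(a, b)] < 0.9:
        segments.append(tuple(current))
        current = []
    current.append(b)
segments.append(tuple(current))
print(set(segments))  # the two lexical items are recovered
```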

  6. A Hierarchical Building Segmentation in Digital Surface Models for 3D Reconstruction

    PubMed Central

    Yan, Yiming; Gao, Fengjiao; Deng, Shupei; Su, Nan

    2017-01-01

    In this study, a hierarchical method for segmenting buildings in a digital surface model (DSM), used in a novel framework for 3D reconstruction, is proposed. Most 3D reconstructions of buildings are model-based. However, these methods are limited by their overreliance on the completeness of offline-constructed building models, and completeness is not easily guaranteed since buildings in modern cities can be of many types. Therefore, a model-free framework using a high-precision DSM and texture images of buildings was introduced. There are two key problems with this framework. The first is how to accurately extract the buildings from the DSM. Most segmentation methods are limited either by terrain factors or by the difficult choice of parameter settings. A level-set method is employed to roughly find the building regions in the DSM, and then a recently proposed ‘occlusions of random textures model’ is used to enhance the local segmentation of the buildings. The second problem is how to generate the facades of buildings. Synergizing with the corresponding texture images, we propose a roof-contour guided interpolation of building facades. The 3D reconstruction results achieved with airborne-like and satellite images are compared. Experiments show that the segmentation method performs well, that 3D reconstruction is easily performed by our framework, and that better visualization results are obtained with airborne-like images, which can further be replaced by UAV images. PMID:28125018

  7. A Hierarchical Building Segmentation in Digital Surface Models for 3D Reconstruction.

    PubMed

    Yan, Yiming; Gao, Fengjiao; Deng, Shupei; Su, Nan

    2017-01-24

    In this study, a hierarchical method for segmenting buildings in a digital surface model (DSM), used in a novel framework for 3D reconstruction, is proposed. Most 3D reconstructions of buildings are model-based. However, these methods are limited by their overreliance on the completeness of offline-constructed building models, and completeness is not easily guaranteed since buildings in modern cities can be of many types. Therefore, a model-free framework using a high-precision DSM and texture images of buildings was introduced. There are two key problems with this framework. The first is how to accurately extract the buildings from the DSM. Most segmentation methods are limited either by terrain factors or by the difficult choice of parameter settings. A level-set method is employed to roughly find the building regions in the DSM, and then a recently proposed 'occlusions of random textures model' is used to enhance the local segmentation of the buildings. The second problem is how to generate the facades of buildings. Synergizing with the corresponding texture images, we propose a roof-contour guided interpolation of building facades. The 3D reconstruction results achieved with airborne-like and satellite images are compared. Experiments show that the segmentation method performs well, that 3D reconstruction is easily performed by our framework, and that better visualization results are obtained with airborne-like images, which can further be replaced by UAV images.

  8. High power broadband all fiber super-fluorescent source with linear polarization and near diffraction-limited beam quality.

    PubMed

    Ma, Pengfei; Huang, Long; Wang, Xiaolin; Zhou, Pu; Liu, Zejin

    2016-01-25

    In this manuscript, a high power broadband superfluorescent source (SFS) with linear polarization and near-diffraction-limited beam quality is achieved based on an ytterbium-doped (Yb-doped), all-fiberized and polarization-maintained master oscillator power amplifier (MOPA) configuration. The MOPA structure generates a linearly polarized output power of 1427 W with a slope efficiency of 80% and a full width at half maximum (FWHM) of 11 nm, which is power scaled by an order of magnitude compared with previously reported SFSs with linear polarization. In the experiment, both the polarization extinction ratio (PER) and beam quality (M(2) factor) degrade little during the power scaling process. At maximal output power, the PER and M(2) factor are measured to be 19.1 dB and 1.14, respectively. The root-mean-square (RMS) and peak-valley (PV) values of the power fluctuation at maximal output power are just 0.48% and within 3%, respectively. Further power scaling of the whole system is limited by the available pump sources. To the best of our knowledge, this is the first demonstration of a kilowatt-level broadband SFS with linear polarization and near-diffraction-limited beam quality.

  9. Crowdsourcing the creation of image segmentation algorithms for connectomics.

    PubMed

    Arganda-Carreras, Ignacio; Turaga, Srinivas C; Berger, Daniel R; Cireşan, Dan; Giusti, Alessandro; Gambardella, Luca M; Schmidhuber, Jürgen; Laptev, Dmitry; Dwivedi, Sarvesh; Buhmann, Joachim M; Liu, Ting; Seyedhosseini, Mojtaba; Tasdizen, Tolga; Kamentsky, Lee; Burget, Radim; Uher, Vaclav; Tan, Xiao; Sun, Changming; Pham, Tuan D; Bas, Erhan; Uzunbas, Mustafa G; Cardona, Albert; Schindelin, Johannes; Seung, H Sebastian

    2015-01-01

    To stimulate progress in automating the reconstruction of neural circuits, we organized the first international challenge on 2D segmentation of electron microscopic (EM) images of the brain. Participants submitted boundary maps predicted for a test set of images, and were scored based on their agreement with a consensus of human expert annotations. The winning team had no prior experience with EM images, and employed a convolutional network. This "deep learning" approach has since become accepted as a standard for segmentation of EM images. The challenge has continued to accept submissions, and the best so far has resulted from cooperation between two teams. The challenge has probably saturated, as algorithms cannot progress beyond limits set by ambiguities inherent in 2D scoring and the size of the test dataset. Retrospective evaluation of the challenge scoring system reveals that it was not sufficiently robust to variations in the widths of neurite borders. We propose a solution to this problem, which should be useful for a future 3D segmentation challenge.
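
    The challenge scored submitted boundary maps by their agreement with a consensus of expert labelings; metrics of this family derive from the Rand index. As an illustrative sketch only (not the challenge's exact foreground-restricted Rand error), a plain Rand index between two label images can be computed from a contingency table:

```python
import numpy as np

def rand_index(seg_a, seg_b):
    """Rand index between two label images: the fraction of pixel pairs
    on which the two segmentations agree (same segment vs. different)."""
    a = np.asarray(seg_a).ravel()
    b = np.asarray(seg_b).ravel()
    n = a.size
    # Contingency table n_ij = number of pixels with label i in A and j in B.
    _, ai = np.unique(a, return_inverse=True)
    _, bi = np.unique(b, return_inverse=True)
    cont = np.zeros((ai.max() + 1, bi.max() + 1))
    np.add.at(cont, (ai, bi), 1)

    def c2(x):  # sum of (x choose 2) over an array of counts
        return (x * (x - 1) / 2).sum()

    total = n * (n - 1) / 2
    return (total + 2 * c2(cont) - c2(cont.sum(1)) - c2(cont.sum(0))) / total
```

Label permutations do not matter: two segmentations that split the image identically but with swapped label values still score 1.0.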

  10. Three-dimensional choroidal segmentation in spectral OCT volumes using optic disc prior information

    NASA Astrophysics Data System (ADS)

    Hu, Zhihong; Girkin, Christopher A.; Hariri, Amirhossein; Sadda, SriniVas R.

    2016-03-01

    Recently, much attention has been focused on determining the role of the peripapillary choroid - the layer between the outer retinal pigment epithelium (RPE)/Bruch's membrane (BM) and the choroid-sclera (C-S) junction - in the pathogenesis of glaucoma, whether primary or secondary. However, automated choroidal segmentation in spectral-domain optical coherence tomography (SD-OCT) images of the optic nerve head (ONH) has not been reported, probably because the presence of the BM opening (BMO, corresponding to the optic disc) can deflect the choroidal segmentation from its correct position. The purpose of this study is to develop a 3D graph-based approach to identify the 3D choroidal layer in ONH-centered SD-OCT images using the BMO prior information. More specifically, an initial 3D choroidal segmentation was first performed using the 3D graph search algorithm, with varying surface interaction constraints based on the choroidal morphological model. To assist the choroidal segmentation, two other surfaces, the internal limiting membrane and the inner-outer segment junction, were also segmented. Based on the segmented layer between the RPE/BM and the C-S junction, a 2D projection map was created. The BMO in the projection map was detected by a 2D graph search. The pre-defined BMO information was then incorporated into the surface interaction constraints of the 3D graph search to obtain a more accurate choroidal segmentation. Twenty SD-OCT images from 20 healthy subjects were used. The mean differences of the choroidal borders between the algorithm and manual segmentation were at a sub-voxel level, indicating a high level of segmentation accuracy.

  11. Segmentation of Image Data from Complex Organotypic 3D Models of Cancer Tissues with Markov Random Fields

    PubMed Central

    Robinson, Sean; Guyon, Laurent; Nevalainen, Jaakko; Toriseva, Mervi

    2015-01-01

    Organotypic, three dimensional (3D) cell culture models of epithelial tumour types such as prostate cancer recapitulate key aspects of the architecture and histology of solid cancers. Morphometric analysis of multicellular 3D organoids is particularly important when additional components such as the extracellular matrix and tumour microenvironment are included in the model. The complexity of such models has so far limited their successful implementation. There is a great need for automatic, accurate and robust image segmentation tools to facilitate the analysis of such biologically relevant 3D cell culture models. We present a segmentation method based on Markov random fields (MRFs) and illustrate our method using 3D stack image data from an organotypic 3D model of prostate cancer cells co-cultured with cancer-associated fibroblasts (CAFs). The 3D segmentation output suggests that these cell types are in physical contact with each other within the model, which has important implications for tumour biology. Segmentation performance is quantified using ground truth labels and we show how each step of our method increases segmentation accuracy. We provide the ground truth labels along with the image data and code. Using independent image data we show that our segmentation method is also more generally applicable to other types of cellular microscopy and not only limited to fluorescence microscopy. PMID:26630674
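
    The paper's exact MRF formulation is not reproduced in this record. As a hedged illustration of MRF-based segmentation in general, a minimal two-class model optimized by iterated conditional modes (ICM) with a Potts smoothness prior might look like the following (synchronous updates and fixed, assumed class means are simplifications):

```python
import numpy as np

def icm_segment(image, mu=(0.2, 0.8), beta=1.0, n_iter=5):
    """Two-class MRF segmentation by ICM: each pixel label minimises a
    data term (squared distance to its class mean) plus a Potts term
    counting disagreeing 4-neighbours, weighted by beta."""
    img = np.asarray(image, float)
    # Initialise by nearest class mean.
    labels = (np.abs(img - mu[1]) < np.abs(img - mu[0])).astype(int)
    for _ in range(n_iter):
        for k in (0, 1):
            data = (img - mu[k]) ** 2
            # Count 4-neighbours not equal to k (borders count as agreeing).
            pad = np.pad(labels, 1, constant_values=k)
            disagree = ((pad[:-2, 1:-1] != k).astype(int) +
                        (pad[2:, 1:-1] != k) + (pad[1:-1, :-2] != k) +
                        (pad[1:-1, 2:] != k))
            cost_k = data + beta * disagree
            if k == 0:
                cost0 = cost_k
            else:
                labels = (cost_k < cost0).astype(int)
    return labels
```

Larger beta enforces smoother regions; beta = 0 reduces the model to per-pixel nearest-mean classification.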

  12. Segmentation of Image Data from Complex Organotypic 3D Models of Cancer Tissues with Markov Random Fields.

    PubMed

    Robinson, Sean; Guyon, Laurent; Nevalainen, Jaakko; Toriseva, Mervi; Åkerfelt, Malin; Nees, Matthias

    2015-01-01

    Organotypic, three dimensional (3D) cell culture models of epithelial tumour types such as prostate cancer recapitulate key aspects of the architecture and histology of solid cancers. Morphometric analysis of multicellular 3D organoids is particularly important when additional components such as the extracellular matrix and tumour microenvironment are included in the model. The complexity of such models has so far limited their successful implementation. There is a great need for automatic, accurate and robust image segmentation tools to facilitate the analysis of such biologically relevant 3D cell culture models. We present a segmentation method based on Markov random fields (MRFs) and illustrate our method using 3D stack image data from an organotypic 3D model of prostate cancer cells co-cultured with cancer-associated fibroblasts (CAFs). The 3D segmentation output suggests that these cell types are in physical contact with each other within the model, which has important implications for tumour biology. Segmentation performance is quantified using ground truth labels and we show how each step of our method increases segmentation accuracy. We provide the ground truth labels along with the image data and code. Using independent image data we show that our segmentation method is also more generally applicable to other types of cellular microscopy and not only limited to fluorescence microscopy.

  13. Robust generative asymmetric GMM for brain MR image segmentation.

    PubMed

    Ji, Zexuan; Xia, Yong; Zheng, Yuhui

    2017-11-01

    Accurate segmentation of brain tissues from magnetic resonance (MR) images based on unsupervised statistical models such as the Gaussian mixture model (GMM) has been widely studied over the last decades. However, most GMM-based segmentation methods suffer from limited accuracy due to the influences of noise and intensity inhomogeneity in brain MR images. To further improve the accuracy of brain MR image segmentation, this paper presents a Robust Generative Asymmetric GMM (RGAGMM) for simultaneous brain MR image segmentation and intensity inhomogeneity correction. First, we develop an asymmetric distribution to fit the data shapes, and thus construct a spatially constrained asymmetric model. Then, we incorporate two pseudo-likelihood quantities and bias field estimation into the model's log-likelihood, aiming to exploit the neighboring within-cluster and between-cluster priors and to alleviate the impact of intensity inhomogeneity, respectively. Finally, an expectation maximization algorithm is derived to iteratively maximize the approximation of the data log-likelihood function, overcoming the intensity inhomogeneity in the image and segmenting the brain MR images simultaneously. To demonstrate the performance of the proposed algorithm, we first applied it to a synthetic brain MR image to show the intermediate illustrations and the estimated distribution. The next group of experiments is carried out on clinical 3T brain MR images, which contain serious intensity inhomogeneity and noise. Then we quantitatively compare our algorithm to state-of-the-art segmentation approaches using the Dice coefficient (DC) on benchmark images obtained from IBSR and BrainWeb with different levels of noise and intensity inhomogeneity. The comparison results on various brain MR images demonstrate the superior performance of the proposed algorithm in dealing with noise and intensity inhomogeneity. In this paper, the RGAGMM
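
    The RGAGMM itself adds an asymmetric distribution, spatial priors and bias-field estimation; as a baseline sketch only, plain expectation maximization for the two-component 1D Gaussian mixture that such methods build on can be written as:

```python
import numpy as np

def gmm_em_1d(x, n_iter=50):
    """EM for a two-component 1D Gaussian mixture (intensity clustering).
    Returns means, standard deviations, weights and responsibilities."""
    x = np.asarray(x, float)
    mu = np.percentile(x, [25, 75]).astype(float)   # crude initialisation
    sd = np.full(2, x.std())
    w = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] proportional to w_k N(x_i | mu_k, sd_k)
        d = (x[:, None] - mu) ** 2
        lik = w * np.exp(-0.5 * d / sd**2) / (sd * np.sqrt(2 * np.pi))
        r = lik / lik.sum(1, keepdims=True)
        # M-step: weighted moment updates
        nk = r.sum(0)
        w = nk / x.size
        mu = (r * x[:, None]).sum(0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(0) / nk)
    return mu, sd, w, r
```

A hard segmentation is then `r.argmax(1)`; the paper's contribution lies precisely in what this baseline lacks (robustness to noise and bias fields).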

  14. Implementation and assessment of diffusion-weighted partial Fourier readout-segmented echo-planar imaging.

    PubMed

    Frost, Robert; Porter, David A; Miller, Karla L; Jezzard, Peter

    2012-08-01

    Single-shot echo-planar imaging has been used widely in diffusion magnetic resonance imaging due to the difficulties in correcting motion-induced phase corruption in multishot data. Readout-segmented EPI has addressed the multishot problem by introducing a two-dimensional nonlinear navigator correction with online reacquisition of uncorrectable data to enable acquisition of high-resolution diffusion data with reduced susceptibility artifact and T2* blurring. The primary shortcoming of readout-segmented EPI in its current form is its long acquisition time (longer than similar-resolution single-shot echo-planar imaging protocols by approximately the number of readout segments), which limits the number of diffusion directions. By omitting readout segments at one side of k-space and using partial Fourier reconstruction, readout-segmented EPI acquisition times could be reduced. In this study, the effects of homodyne and projection onto convex sets reconstructions on estimates of the fractional anisotropy, mean diffusivity, and diffusion orientation in fiber tracts and raw T2- and trace-weighted signal are compared, along with signal-to-noise ratio results. It is found that projection onto convex sets reconstruction with 3/5 segments in a 2 mm isotropic diffusion tensor image acquisition and 9/13 segments in a 0.9 × 0.9 × 4.0 mm(3) diffusion-weighted image acquisition provides good fidelity relative to the full k-space parameters. This allows application of readout-segmented EPI to tractography studies, and clinical stroke and oncology protocols. Copyright © 2011 Wiley-Liss, Inc.
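
    A hedged sketch of the POCS idea on 1D partial Fourier data (a toy version, not the paper's navigator-corrected multi-shot pipeline): estimate a smooth phase from the symmetric centre of k-space, then alternate between enforcing that phase in image space and the measured samples in k-space:

```python
import numpy as np

def pocs_partial_fourier(k_meas, mask, n_center=15, n_iter=10):
    """1D POCS partial-Fourier reconstruction. `k_meas` holds the
    measured k-space (zero outside the sampled `mask`); `n_center` is
    the width of the symmetric centre band used for phase estimation."""
    n = k_meas.size
    freq = np.fft.fftfreq(n, d=1.0 / n).astype(int)
    centre = np.abs(freq) <= n_center // 2
    # Low-resolution phase estimate from the symmetric centre of k-space.
    phase = np.angle(np.fft.ifft(np.where(centre, k_meas, 0)))
    k = k_meas.copy()
    for _ in range(n_iter):
        im = np.abs(np.fft.ifft(k)) * np.exp(1j * phase)  # phase constraint
        k = np.fft.fft(im)
        k[mask] = k_meas[mask]                            # data consistency
    return np.fft.ifft(k)
```

For a smooth, nearly real object this recovers most of the unsampled conjugate-symmetric half of k-space, which simple zero-filling discards.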

  15. Speculation detection for Chinese clinical notes: Impacts of word segmentation and embedding models.

    PubMed

    Zhang, Shaodian; Kang, Tian; Zhang, Xingting; Wen, Dong; Elhadad, Noémie; Lei, Jianbo

    2016-04-01

    Speculations represent uncertainty toward certain facts. In clinical texts, identifying speculations is a critical step of natural language processing (NLP). While it is a nontrivial task in many languages, detecting speculations in Chinese clinical notes can be particularly challenging because word segmentation may be necessary as an upstream operation. The objective of this paper is to construct a state-of-the-art speculation detection system for Chinese clinical notes and to investigate whether embedding features and word segmentations are worth exploiting toward this overall task. We propose a sequence labeling based system for speculation detection, which relies on features from bag of characters, bag of words, character embedding, and word embedding. We experiment on a novel dataset of 36,828 clinical notes with 5103 gold-standard speculation annotations on 2000 notes, and compare the systems in which word embeddings are calculated based on word segmentations given by general and by domain specific segmenters respectively. Our systems are able to reach performance as high as 92.2% measured by F score. We demonstrate that word segmentation is critical to produce high quality word embedding to facilitate downstream information extraction applications, and suggest that a domain dependent word segmenter can be vital to such a clinical NLP task in Chinese language. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Segmentation of multiple heart cavities in 3-D transesophageal ultrasound images.

    PubMed

    Haak, Alexander; Vegas-Sánchez-Ferrero, Gonzalo; Mulder, Harriët W; Ren, Ben; Kirişli, Hortense A; Metz, Coert; van Burken, Gerard; van Stralen, Marijn; Pluim, Josien P W; van der Steen, Antonius F W; van Walsum, Theo; Bosch, Johannes G

    2015-06-01

    Three-dimensional transesophageal echocardiography (TEE) is an excellent modality for real-time visualization of the heart and monitoring of interventions. To improve the usability of 3-D TEE for intervention monitoring and catheter guidance, automated segmentation is desired. However, 3-D TEE segmentation is still a challenging task due to the complex anatomy with multiple cavities, the limited TEE field of view, and typical ultrasound artifacts. We propose to segment all cavities within the TEE view with a multi-cavity active shape model (ASM) in conjunction with a tissue/blood classification based on a gamma mixture model (GMM). 3-D TEE image data of twenty patients were acquired with a Philips X7-2t matrix TEE probe. Tissue probability maps were estimated by a two-class (blood/tissue) GMM. A statistical shape model containing the left ventricle, right ventricle, left atrium, right atrium, and aorta was derived from computed tomography angiography (CTA) segmentations by principal component analysis. ASMs of the whole heart and individual cavities were generated and consecutively fitted to tissue probability maps. First, an average whole-heart model was aligned with the 3-D TEE based on three manually indicated anatomical landmarks. Second, pose and shape of the whole-heart ASM were fitted by a weighted update scheme excluding parts outside of the image sector. Third, pose and shape of ASM for individual heart cavities were initialized by the previous whole heart ASM and updated in a regularized manner to fit the tissue probability maps. The ASM segmentations were validated against manual outlines by two observers and CTA derived segmentations. Dice coefficients and point-to-surface distances were used to determine segmentation accuracy. ASM segmentations were successful in 19 of 20 cases. The median Dice coefficient for all successful segmentations versus the average observer ranged from 90% to 71% compared with an inter-observer range of 95% to 84%. The
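
    Segmentation accuracy in studies like this is commonly reported with Dice coefficients; for binary masks the metric is simply:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

A Dice of 1.0 means perfect overlap; two equally sized masks overlapping by half score about 0.5. The empty-vs-empty case is defined here as 1.0 by convention.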

  17. Retinal Nerve Fiber Layer Segmentation on FD-OCT Scans of Normal Subjects and Glaucoma Patients.

    PubMed

    Mayer, Markus A; Hornegger, Joachim; Mardin, Christian Y; Tornow, Ralf P

    2010-11-08

    Automated measurements of the retinal nerve fiber layer thickness on circular OCT B-Scans provide physicians additional parameters for glaucoma diagnosis. We propose a novel retinal nerve fiber layer segmentation algorithm for frequency domain data that can be applied on scans from both normal healthy subjects, as well as glaucoma patients, using the same set of parameters. In addition, the algorithm remains almost unaffected by image quality. The main part of the segmentation process is based on the minimization of an energy function consisting of gradient and local smoothing terms. A quantitative evaluation comparing the automated segmentation results to manually corrected segmentations from three reviewers is performed. A total of 72 scans from glaucoma patients and 132 scans from normal subjects, all from different persons, composed the database for the evaluation of the segmentation algorithm. A mean absolute error per A-Scan of 2.9 µm was achieved on glaucomatous eyes, and 3.6 µm on healthy eyes. The mean absolute segmentation error over all A-Scans lies below 10 µm on 95.1% of the images. Thus our approach provides a reliable tool for extracting diagnostic relevant parameters from OCT B-Scans for glaucoma diagnosis.
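
    The paper's energy function is not given in full in this record. A minimal dynamic-programming analogue (an assumption, not the authors' exact gradient-plus-smoothing energy) finds one boundary row per column of a cost image while penalizing jumps between neighbouring columns:

```python
import numpy as np

def dp_layer_boundary(cost, max_jump=1, smooth=0.5):
    """Find one boundary row per column minimising the sum of
    cost[row, col] plus smooth * |jump| between neighbouring columns
    (jumps limited to +/- max_jump), via dynamic programming."""
    n_rows, n_cols = cost.shape
    acc = cost[:, 0].astype(float)
    back = np.zeros((n_rows, n_cols), int)
    for c in range(1, n_cols):
        new = np.full(n_rows, np.inf)
        for r in range(n_rows):
            lo, hi = max(0, r - max_jump), min(n_rows, r + max_jump + 1)
            prev = acc[lo:hi] + smooth * np.abs(np.arange(lo, hi) - r)
            j = int(np.argmin(prev))
            new[r] = cost[r, c] + prev[j]
            back[r, c] = lo + j
        acc = new
    path = np.empty(n_cols, int)
    path[-1] = int(np.argmin(acc))
    for c in range(n_cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path
```

In layer segmentation the cost would typically be a (negated) vertical gradient of the B-scan, so the path latches onto strong, smooth edges.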

  18. Retinal Nerve Fiber Layer Segmentation on FD-OCT Scans of Normal Subjects and Glaucoma Patients

    PubMed Central

    Mayer, Markus A.; Hornegger, Joachim; Mardin, Christian Y.; Tornow, Ralf P.

    2010-01-01

    Automated measurements of the retinal nerve fiber layer thickness on circular OCT B-Scans provide physicians additional parameters for glaucoma diagnosis. We propose a novel retinal nerve fiber layer segmentation algorithm for frequency domain data that can be applied on scans from both normal healthy subjects, as well as glaucoma patients, using the same set of parameters. In addition, the algorithm remains almost unaffected by image quality. The main part of the segmentation process is based on the minimization of an energy function consisting of gradient and local smoothing terms. A quantitative evaluation comparing the automated segmentation results to manually corrected segmentations from three reviewers is performed. A total of 72 scans from glaucoma patients and 132 scans from normal subjects, all from different persons, composed the database for the evaluation of the segmentation algorithm. A mean absolute error per A-Scan of 2.9 µm was achieved on glaucomatous eyes, and 3.6 µm on healthy eyes. The mean absolute segmentation error over all A-Scans lies below 10 µm on 95.1% of the images. Thus our approach provides a reliable tool for extracting diagnostic relevant parameters from OCT B-Scans for glaucoma diagnosis. PMID:21258556

  19. Rigid shape matching by segmentation averaging.

    PubMed

    Wang, Hongzhi; Oliensis, John

    2010-04-01

    We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.

  20. Noise destroys feedback enhanced figure-ground segmentation but not feedforward figure-ground segmentation.

    PubMed

    Romeo, August; Arall, Marina; Supèr, Hans

    2012-01-01

    Figure-ground (FG) segmentation is the separation of visual information into background and foreground objects. In the visual cortex, FG responses are observed in the late stimulus response period, when neurons fire in tonic mode, and are accompanied by a switch in cortical state. When such a switch does not occur, FG segmentation fails. Currently, it is not known what happens in the brain on such occasions. A biologically plausible feedforward spiking neuron model was previously devised that performed FG segmentation successfully. After incorporating feedback the FG signal was enhanced, which was accompanied by a change in spiking regime. In a feedforward model neurons respond in a bursting mode whereas in the feedback model neurons fired in tonic mode. It is known that bursts can overcome noise, while tonic firing appears to be much more sensitive to noise. In the present study, we try to elucidate how the presence of noise can impair FG segmentation, and to what extent the feedforward and feedback pathways can overcome noise. We show that noise specifically destroys the feedback enhanced FG segmentation and leaves the feedforward FG segmentation largely intact. Our results predict that noise produces failure in FG perception.

  1. Noise destroys feedback enhanced figure-ground segmentation but not feedforward figure-ground segmentation

    PubMed Central

    Romeo, August; Arall, Marina; Supèr, Hans

    2012-01-01

    Figure-ground (FG) segmentation is the separation of visual information into background and foreground objects. In the visual cortex, FG responses are observed in the late stimulus response period, when neurons fire in tonic mode, and are accompanied by a switch in cortical state. When such a switch does not occur, FG segmentation fails. Currently, it is not known what happens in the brain on such occasions. A biologically plausible feedforward spiking neuron model was previously devised that performed FG segmentation successfully. After incorporating feedback the FG signal was enhanced, which was accompanied by a change in spiking regime. In a feedforward model neurons respond in a bursting mode whereas in the feedback model neurons fired in tonic mode. It is known that bursts can overcome noise, while tonic firing appears to be much more sensitive to noise. In the present study, we try to elucidate how the presence of noise can impair FG segmentation, and to what extent the feedforward and feedback pathways can overcome noise. We show that noise specifically destroys the feedback enhanced FG segmentation and leaves the feedforward FG segmentation largely intact. Our results predict that noise produces failure in FG perception. PMID:22934028

  2. Quantitative analysis of retina layer elasticity based on automatic 3D segmentation (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    He, Youmin; Qu, Yueqiao; Zhang, Yi; Ma, Teng; Zhu, Jiang; Miao, Yusi; Humayun, Mark; Zhou, Qifa; Chen, Zhongping

    2017-02-01

    Age-related macular degeneration (AMD) is an eye condition that is considered to be one of the leading causes of blindness among people over 50. Recent studies suggest that the mechanical properties in retina layers are affected during the early onset of disease. Therefore, it is necessary to identify such changes in the individual layers of the retina so as to provide useful information for disease diagnosis. In this study, we propose using an acoustic radiation force optical coherence elastography (ARF-OCE) system to dynamically excite the porcine retina and detect the vibrational displacement with phase resolved Doppler optical coherence tomography. Due to the vibrational mechanism of the tissue response, the image quality is compromised during elastogram acquisition. In order to properly analyze the images, all signals, including the trigger and control signals for excitation, as well as detection and scanning signals, are synchronized within the OCE software and are kept consistent between frames, making it possible for easy phase unwrapping and elasticity analysis. In addition, a combination of segmentation algorithms is used to accommodate the compromised image quality. An automatic 3D segmentation method has been developed to isolate and measure the relative elasticity of every individual retinal layer. Two different segmentation schemes based on random walker and dynamic programming are implemented. The algorithm has been validated using a 3D region of the porcine retina, where individual layers have been isolated and analyzed using statistical methods. The errors compared to manual segmentation will be calculated.

  3. Compatibility of segmented thermoelectric generators

    NASA Technical Reports Server (NTRS)

    Snyder, J.; Ursell, T.

    2002-01-01

    It is well known that power generation efficiency improves when materials with appropriate properties are combined either in a cascaded or segmented fashion across a temperature gradient. Past methods for selecting materials for segmentation were mainly concerned with materials that have the highest figure of merit in the temperature range. However, the example of SiGe segmented with Bi2Te3 and/or various skutterudites shows a marked decline in device efficiency even though SiGe has the highest figure of merit in the temperature range. The origin of the incompatibility of SiGe with other thermoelectric materials leads to a general definition of compatibility and intrinsic efficiency. The compatibility factor, derived as s = (√(1+zT) - 1)/(αT), is a function of only intrinsic material properties and temperature, and represents a ratio of electrical current to conduction heat. For maximum efficiency the compatibility factor should not change with temperature, both within a single material and in the segmented leg as a whole. This leads to a measure of compatibility not only between segments, but also within a segment. General temperature trends show that materials are more self-compatible at higher temperatures, and segmentation is more difficult across a larger ΔT. The compatibility factor can be used as a quantitative guide for deciding whether a material is better suited for segmentation or cascading. Analysis of compatibility factors and intrinsic efficiency for optimal segmentation is discussed, with intent to predict optimal material properties, temperature interfaces, and/or current-to-heat ratios.
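
    The compatibility factor, s = (√(1+zT) - 1)/(αT) in standard notation, is straightforward to evaluate numerically. In the sketch below the units are assumed to be z in 1/K, T in K and the Seebeck coefficient α in V/K, giving s in 1/V:

```python
import numpy as np

def compatibility_factor(z, T, seebeck):
    """Thermoelectric compatibility factor s = (sqrt(1 + zT) - 1) / (alpha * T):
    the reduced current density at which a segment's efficiency is maximised."""
    return (np.sqrt(1.0 + z * T) - 1.0) / (seebeck * T)
```

Two materials are considered well matched for segmentation when their compatibility factors are within roughly a factor of two of each other over the operating range.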

  4. Pupil Tracking for Real-Time Motion Corrected Anterior Segment Optical Coherence Tomography

    PubMed Central

    Carrasco-Zevallos, Oscar M.; Nankivil, Derek; Viehland, Christian; Keller, Brenton; Izatt, Joseph A.

    2016-01-01

    Volumetric acquisition with anterior segment optical coherence tomography (ASOCT) is necessary to obtain accurate representations of the tissue structure and to account for asymmetries of the anterior eye anatomy. Additionally, recent interest in imaging of anterior segment vasculature and aqueous humor flow resulted in application of OCT angiography techniques to generate en face and 3D micro-vasculature maps of the anterior segment. Unfortunately, ASOCT structural and vasculature imaging systems do not capture volumes instantaneously and are subject to motion artifacts due to involuntary eye motion that may hinder their accuracy and repeatability. Several groups have demonstrated real-time tracking for motion-compensated in vivo OCT retinal imaging, but these techniques are not applicable in the anterior segment. In this work, we demonstrate a simple and low-cost pupil tracking system integrated into a custom swept-source OCT system for real-time motion-compensated anterior segment volumetric imaging. Pupil oculography hardware coaxial with the swept-source OCT system enabled fast detection and tracking of the pupil centroid. The pupil tracking ASOCT system with a field of view of 15 x 15 mm achieved diffraction-limited imaging over a lateral tracking range of +/- 2.5 mm and was able to correct eye motion at up to 22 Hz. Pupil tracking ASOCT offers a novel real-time motion compensation approach that may facilitate accurate and reproducible anterior segment imaging. PMID:27574800
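
    Pupil centroid detection of the kind described can be sketched as follows; this is a simplified assumption (dark-pupil thresholding followed by first image moments), ignoring glint rejection and morphological cleanup that a real tracker would need:

```python
import numpy as np

def pupil_centroid(frame, threshold=50):
    """Locate the pupil in a grayscale eye image as the centroid
    (first image moments) of pixels darker than `threshold`."""
    mask = np.asarray(frame) < threshold
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # no dark region found
    return xs.mean(), ys.mean()
```

The (x, y) centroid, computed per camera frame, would then drive the galvanometer offsets that keep the OCT scan centred on the moving eye.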

  5. Automated localization and segmentation techniques for B-mode ultrasound images: A review.

    PubMed

    Meiburger, Kristen M; Acharya, U Rajendra; Molinari, Filippo

    2018-01-01

    B-mode ultrasound imaging is used extensively in medicine. Hence, there is a need to have efficient segmentation tools to aid in computer-aided diagnosis, image-guided interventions, and therapy. This paper presents a comprehensive review on automated localization and segmentation techniques for B-mode ultrasound images. The paper first describes the general characteristics of B-mode ultrasound images. Then insight on the localization and segmentation of tissues is provided, both in the case in which the organ/tissue localization provides the final segmentation and in the case in which a two-step segmentation process is needed, due to the desired boundaries being too fine to locate from within the entire ultrasound frame. Subsequently, examples of some main techniques found in the literature are shown, including but not limited to shape priors, superpixel and classification, local pixel statistics, active contours, edge-tracking, dynamic programming, and data mining. Ten selected applications (abdomen/kidney, breast, cardiology, thyroid, liver, vascular, musculoskeletal, obstetrics, gynecology, prostate) are then investigated in depth, and the performances of a few specific applications are compared. In conclusion, future perspectives for B-mode based segmentation, such as the integration of RF information, the employment of higher frequency probes when possible, the focus on completely automatic algorithms, and the increase in available data are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Li; Gao, Yaozong; Shi, Feng

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT
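
    The full method uses region-specific registration and sparse coding over aligned atlases; a drastically simplified sketch of the underlying patch-based label propagation idea (nearest-neighbour patch matching at the same position, with no search window or sparse weights) is:

```python
import numpy as np

def patch_label_fusion(target, atlases, atlas_labels, patch=3):
    """Label each target pixel by patch matching: compare the patch
    around the pixel with the same-position patch in every aligned
    atlas and copy the label of the closest atlas."""
    r = patch // 2
    t = np.pad(target, r, mode='edge')
    padded = [np.pad(a, r, mode='edge') for a in atlases]
    out = np.zeros(target.shape, int)
    for i in range(target.shape[0]):
        for j in range(target.shape[1]):
            tp = t[i:i + patch, j:j + patch]
            d = [((p[i:i + patch, j:j + patch] - tp) ** 2).sum()
                 for p in padded]
            out[i, j] = atlas_labels[int(np.argmin(d))][i, j]
    return out
```

The sparse-representation version instead reconstructs each target patch as a sparse combination of many atlas patches and fuses the labels with the sparse coefficients as weights.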

  7. Segmental bone defects: from cellular and molecular pathways to the development of novel biological treatments

    PubMed Central

    Pneumaticos, Spyros G; Triantafyllopoulos, Georgios K; Basdra, Efthimia K; Papavassiliou, Athanasios G

    2010-01-01

    Several conditions in clinical orthopaedic practice can lead to the development of a diaphyseal segmental bone defect, which cannot heal without intervention. Segmental bone defects have been traditionally treated with bone grafting and/or distraction osteogenesis, methods that have many advantages, but also major drawbacks, such as limited availability, risk of disease transmission and prolonged treatment. In order to overcome such limitations, biological treatments have been developed based on specific pathways of bone physiology and healing. Bone tissue engineering is a dynamic field of research, combining osteogenic cells, osteoinductive factors, such as bone morphogenetic proteins, and scaffolds with osteoconductive and osteoinductive attributes, to produce constructs that could be used as bone graft substitutes for the treatment of segmental bone defects. Scaffolds are usually made of ceramic or polymeric biomaterials, or combinations of both in composite materials. The purpose of the present review is to discuss in detail the molecular and cellular basis for the development of bone tissue engineering constructs. PMID:20345845

  8. Segmenting the Adult Education Market.

    ERIC Educational Resources Information Center

    Aurand, Tim

    1994-01-01

    Describes market segmentation and how the principles of segmentation can be applied to the adult education market. Indicates that applying segmentation techniques to adult education programs results in programs that are educationally and financially satisfying and serve an appropriate population. (JOW)

  9. From Acoustic Segmentation to Language Processing: Evidence from Optical Imaging

    PubMed Central

    Obrig, Hellmuth; Rossi, Sonja; Telkemeyer, Silke; Wartenburger, Isabell

    2010-01-01

    During language acquisition in infancy and when learning a foreign language, the segmentation of the auditory stream into words and phrases is a complex process. Intuitively, learners use “anchors” to segment the acoustic speech stream into meaningful units like words and phrases. Regularities on a segmental (e.g., phonological) or suprasegmental (e.g., prosodic) level can provide such anchors. Regarding the neuronal processing of these two kinds of linguistic cues a left-hemispheric dominance for segmental and a right-hemispheric bias for suprasegmental information has been reported in adults. Though lateralization is common in a number of higher cognitive functions, its prominence in language may also be a key to understanding the rapid emergence of the language network in infants and the ease at which we master our language in adulthood. One question here is whether the hemispheric lateralization is driven by linguistic input per se or whether non-linguistic, especially acoustic factors, “guide” the lateralization process. Methodologically, functional magnetic resonance imaging provides unsurpassed anatomical detail for such an enquiry. However, instrumental noise, experimental constraints and interference with EEG assessment limit its applicability, pointedly in infants and also when investigating the link between auditory and linguistic processing. Optical methods have the potential to fill this gap. Here we review a number of recent studies using optical imaging to investigate hemispheric differences during segmentation and basic auditory feature analysis in language development. PMID:20725516

  10. Segmentation of 3D ultrasound computer tomography reflection images using edge detection and surface fitting

    NASA Astrophysics Data System (ADS)

    Hopp, T.; Zapf, M.; Ruiter, N. V.

    2014-03-01

    An essential processing step for comparison of Ultrasound Computer Tomography images to other modalities, as well as for use in further image processing, is to segment the breast from the background. In this work we present a (semi-)automated 3D segmentation method which is based on the detection of the breast boundary in coronal slice images and a subsequent surface fitting. The method was evaluated using a software phantom and in-vivo data. The fully automatically processed phantom results showed that a segmentation of approx. 10% of the slices of a dataset is sufficient to recover the overall breast shape. Application to 16 in-vivo datasets was performed successfully using semi-automated processing, i.e. using a graphical user interface for manual corrections of the automated breast boundary detection. The processing time for the segmentation of an in-vivo dataset could be reduced significantly, by a factor of four, compared to a fully manual segmentation. Comparison to manually segmented images identified a smoother surface for the semi-automated segmentation, with an average of 11% of differing voxels and an average surface deviation of 2 mm. Limitations of the edge detection may be overcome by future updates of the KIT USCT system, allowing fully automated usage of our segmentation approach.

  11. Performing label-fusion-based segmentation using multiple automatically generated templates.

    PubMed

    Chakravarty, M Mallar; Steadman, Patrick; van Eede, Matthijs C; Calcott, Rebecca D; Gu, Victoria; Shaw, Philip; Raznahan, Armin; Collins, D Louis; Lerch, Jason P

    2013-10-01

    Classically, model-based segmentation procedures match magnetic resonance imaging (MRI) volumes to an expertly labeled atlas using nonlinear registration. The accuracy of these techniques is limited due to atlas biases, misregistration, and resampling error. Multi-atlas-based approaches are used as a remedy and involve matching each subject to a number of manually labeled templates. This approach yields numerous independent segmentations that are fused using a voxel-by-voxel label-voting procedure. In this article, we demonstrate how the multi-atlas approach can be extended to work with input atlases that are unique and extremely time consuming to construct by generating a library of multiple automatically generated templates of different brains (MAGeT Brain). We demonstrate the efficacy of our method for the mouse and human using two different nonlinear registration algorithms (ANIMAL and ANTs). The input atlases consist of a high-resolution mouse brain atlas and an atlas of the human basal ganglia and thalamus derived from serial histological data. MAGeT Brain segmentation improves the identification of the mouse anterior commissure (mean Dice kappa value κ = 0.801), but may be encountering a ceiling effect for hippocampal segmentations. Applying MAGeT Brain to human subcortical structures improves segmentation accuracy for all structures compared to regular model-based techniques (κ = 0.845, 0.752, and 0.861 for the striatum, globus pallidus, and thalamus, respectively). Experiments performed with three manually derived input templates suggest that MAGeT Brain can approach or exceed the accuracy of multi-atlas label-fusion segmentation (κ = 0.894, 0.815, and 0.895 for the striatum, globus pallidus, and thalamus, respectively). Copyright © 2012 Wiley Periodicals, Inc.
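    The voxel-by-voxel label-voting procedure that fuses the independent template segmentations can be sketched in pure Python. This is a minimal illustration of majority voting over flattened label maps, assuming the registration step has already aligned them; MAGeT Brain itself operates on full 3D label volumes:

```python
from collections import Counter

def fuse_labels(candidate_maps):
    """Fuse several aligned candidate label maps by voxel-wise majority vote."""
    fused = []
    for votes in zip(*candidate_maps):  # one tuple of candidate labels per voxel
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

# Three automatically generated template segmentations of a 4-voxel strip
maps = [
    [1, 1, 0, 2],
    [1, 0, 0, 2],
    [1, 1, 0, 1],
]
print(fuse_labels(maps))  # [1, 1, 0, 2]
```

    Each voxel simply takes the label most of the templates agree on, which is why accuracy grows with the size of the template library.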

  12. A prior feature SVM – MRF based method for mouse brain segmentation

    PubMed Central

    Wu, Teresa; Bae, Min Hyeok; Zhang, Min; Pan, Rong; Badea, Alexandra

    2012-01-01

    We introduce an automated method, called prior feature Support Vector Machine-Markov Random Field (pSVMRF), to segment three-dimensional mouse brain Magnetic Resonance Microscopy (MRM) images. Our earlier work, extended MRF (eMRF), integrated Support Vector Machine (SVM) and Markov Random Field (MRF) approaches, leading to improved segmentation accuracy; however, the computation of eMRF is very expensive, which may limit its segmentation performance and robustness. In this study pSVMRF reduces training and testing time for SVM, while boosting segmentation performance. Unlike the eMRF approach, where MR intensity information and location priors are linearly combined, pSVMRF combines this information in a nonlinear fashion, and enhances the discriminative ability of the algorithm. We validate the proposed method using MR imaging of unstained and actively stained mouse brain specimens, and compare segmentation accuracy with two existing methods: eMRF and MRF. C57BL/6 mice are used for training and testing, using cross validation. For formalin fixed C57BL/6 specimens, pSVMRF outperforms both eMRF and MRF. The segmentation accuracy for C57BL/6 brains, stained or not, was similar for larger structures like hippocampus and caudate putamen (~87%), but increased substantially for smaller regions like substantia nigra (from 78.36% to 91.55%) and anterior commissure (from ~50% to ~80%). To test segmentation robustness against increased anatomical variability we add two strains, BXD29 and a transgenic mouse model of Alzheimer’s Disease. Segmentation accuracy for new strains is 80% for hippocampus and caudate putamen, indicating that pSVMRF is a promising approach for phenotyping mouse models of human brain disorders. PMID:21988893

  13. Globally optimal tumor segmentation in PET-CT images: a graph-based co-segmentation method.

    PubMed

    Han, Dongfeng; Bayouth, John; Song, Qi; Taurani, Aakant; Sonka, Milan; Buatti, John; Wu, Xiaodong

    2011-01-01

    Tumor segmentation in PET and CT images is notoriously challenging due to the low spatial resolution in PET and low contrast in CT images. In this paper, we have proposed a general framework to use both PET and CT images simultaneously for tumor segmentation. Our method utilizes the strength of each imaging modality: the superior contrast of PET and the superior spatial resolution of CT. We formulate this problem as a Markov Random Field (MRF) based segmentation of the image pair with a regularized term that penalizes the segmentation difference between PET and CT. Our method simulates the clinical practice of delineating tumor simultaneously using both PET and CT, and is able to concurrently segment tumor from both modalities, achieving globally optimal solutions in low-order polynomial time by a single maximum flow computation. The method was evaluated on clinically relevant tumor segmentation problems. The results showed that our method can effectively make use of both PET and CT image information, yielding segmentation accuracy of 0.85 in Dice similarity coefficient and an average median Hausdorff distance (HD) of 6.4 mm, a 10% (resp., 16%) improvement compared to the graph cuts method solely using the PET (resp., CT) images.
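    The two evaluation metrics reported above can be computed directly from voxel coordinate sets. A minimal sketch, using 2D coordinates for brevity (the paper works on 3D volumes, and its reported HD is a median over cases):

```python
def dice(a, b):
    """Dice similarity coefficient between two voxel sets."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets."""
    dist = lambda p, q: sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5
    directed = lambda x, y: max(min(dist(p, q) for q in y) for p in x)
    return max(directed(a, b), directed(b, a))

truth = [(0, 0), (0, 1), (1, 0), (1, 1)]
seg   = [(0, 0), (0, 1), (1, 0), (2, 2)]
print(dice(truth, seg))       # 0.75
print(hausdorff(truth, seg))  # 1.4142135623730951
```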

  14. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras

    PubMed Central

    Morris, Mark; Sellers, William I.

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include regression models, which have limited accuracy; geometric models, with lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints. PMID:25780778
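    The geometric idea of deriving inertial parameters from hull outlines can be illustrated in 2D with the shoelace formula. This is only a lower-dimensional analogue of the study's 3D convex-hull approach (the uniform `density` parameter is an assumption for illustration):

```python
def polygon_properties(verts, density=1.0):
    """Signed-area (shoelace) mass and centre of mass of a simple polygon,
    a 2D analogue of computing segment parameters from a hull outline."""
    area = cx = cy = 0.0
    n = len(verts)
    for i in range(n):
        x0, y0 = verts[i]
        x1, y1 = verts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        area += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    area *= 0.5
    cx /= 6 * area
    cy /= 6 * area
    return density * abs(area), (cx, cy)  # mass per unit thickness, centroid

mass, com = polygon_properties([(0, 0), (1, 0), (1, 1), (0, 1)])
print(mass, com)  # 1.0 (0.5, 0.5)
```

    Refining the hull with more vertices improves the estimate, mirroring how the method's accuracy is adjusted by the number of segment subdivisions.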

  15. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras.

    PubMed

    Peyer, Kathrin E; Morris, Mark; Sellers, William I

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include regression models, which have limited accuracy; geometric models, with lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints.

  16. Atlas selection for hippocampus segmentation: Relevance evaluation of three meta-information parameters.

    PubMed

    Dill, Vanderson; Klein, Pedro Costa; Franco, Alexandre Rosa; Pinho, Márcio Sarroglia

    2018-04-01

    Current state-of-the-art methods for whole and subfield hippocampus segmentation use pre-segmented templates, also known as atlases, in the pre-processing stages. Typically, the input image is registered to the template, which provides prior information for the segmentation process. Using a single standard atlas increases the difficulty in dealing with individuals who have a brain anatomy that is morphologically different from the atlas, especially in older brains. To increase the segmentation precision in these cases, without any manual intervention, multiple atlases can be used. However, registration to many templates leads to a high computational cost. Researchers have proposed to use an atlas pre-selection technique based on meta-information followed by the selection of an atlas based on image similarity. Unfortunately, this method also presents a high computational cost due to the image-similarity process. Thus, it is desirable to pre-select a smaller number of atlases as long as this does not impact the segmentation quality. To pick out an atlas that provides the best registration, we evaluate the use of three meta-information parameters (medical condition, age range, and gender) to choose the atlas. In this work, 24 atlases were defined, each based on a combination of the three meta-information parameters. These atlases were used to segment 352 volumes from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Hippocampus segmentation with each of these atlases was evaluated and compared to reference segmentations of the hippocampus, which are available from ADNI. The use of atlas selection by meta-information led to a significant gain in the Dice similarity coefficient, which reached 0.68 ± 0.11, compared to 0.62 ± 0.12 when using only the standard MNI152 atlas. Statistical analysis showed that the three meta-information parameters provided a significant improvement in the segmentation accuracy. Copyright © 2018 Elsevier Ltd
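    The meta-information pre-selection step amounts to a simple lookup over the atlas library before any costly registration. A hypothetical sketch (the field names `condition`, `age_range`, and `gender`, and the atlas entries, are illustrative, not the authors' data model):

```python
def select_atlas(atlases, condition, age, gender):
    """Return the first atlas whose meta-information matches the subject,
    avoiding image-similarity comparisons against the whole library."""
    for atlas in atlases:
        lo, hi = atlas["age_range"]
        if (atlas["condition"] == condition
                and lo <= age <= hi
                and atlas["gender"] == gender):
            return atlas
    return None  # caller can fall back to a standard atlas such as MNI152

atlases = [
    {"name": "AD_70-79_F", "condition": "AD", "age_range": (70, 79), "gender": "F"},
    {"name": "CN_60-69_M", "condition": "CN", "age_range": (60, 69), "gender": "M"},
]
print(select_atlas(atlases, "CN", 65, "M")["name"])  # CN_60-69_M
```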

  17. 75 FR 77798 - Approval and Promulgation of Air Quality Implementation Plans; Delaware; Limiting Emissions of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-14

    ... Promulgation of Air Quality Implementation Plans; Delaware; Limiting Emissions of Volatile Organic Compounds... Organic Compounds (VOC) from Consumer and Commercial Products, Section 3.0, Portable Fuel Containers. In... second comment period. Any parties interested in commenting on this action should do so at this time...

  18. Limited English proficient Hmong- and Spanish-speaking patients' perceptions of the quality of interpreter services.

    PubMed

    Lor, Maichou; Xiong, Phia; Schwei, Rebecca J; Bowers, Barbara J; Jacobs, Elizabeth A

    2016-02-01

    Language barriers are a large and growing problem for patients in the US and around the world. Interpreter services are a standard solution for addressing language barriers and most research has focused on utilization of interpreter services and their effect on health outcomes for patients who do not speak the same language as their healthcare providers including nurses. However, there is limited research on patients' perceptions of these interpreter services. To examine Hmong- and Spanish-speaking patients' perceptions of interpreter service quality in the context of receiving cancer preventive services. Twenty limited English proficient Hmong (n=10) and Spanish-speaking participants (n=10) ranging in age from 33 to 75 years were interviewed by two bilingual researchers in a Midwestern state. Interviews were audio taped, transcribed verbatim, and translated into English. Analysis was done using conventional content analysis. The two groups shared perceptions about the quality of interpreter services as variable along three dimensions. Specifically, both groups evaluated quality of interpreters based on the interpreters' ability to provide: (a) literal interpretation, (b) cultural interpretation, and (c) emotional interpretation during the health care encounter. The groups differed, however, on how they described the consequences of poor interpretation quality. Hmong participants described how poor quality interpretation could lead to: (a) poor interpersonal relationships among patients, providers, and interpreters, (b) inability of patients to follow through with treatment plans, and (c) emotional distress for patients. Our study highlights the fact that patients are discerning consumers of interpreter services; and could be effective partners in efforts to reform and enhance interpreter services. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Limited English proficient Hmong- and Spanish-speaking patients’ perceptions of the quality of interpreter services

    PubMed Central

    Lor, Maichou; Xiong, Phia; Schweia, Rebecca J.; Bowers, Barbara; Jacobs, Elizabeth A.

    2015-01-01

    Background Language barriers are a large and growing problem for patients in the U.S. and around the world. Interpreter services are a standard solution for addressing language barriers and most research has focused on utilization of interpreter services and their effect on health outcomes for patients who do not speak the same language as their healthcare providers including nurses. However, there is limited research on patients’ perceptions of these interpreter services. Objective To examine Hmong- and Spanish-speaking patients’ perceptions of interpreter service quality in the context of receiving cancer preventive services. Methods Twenty limited English proficient Hmong (n=10) and Spanish-speaking participants (N=10) ranging in age from 33 to 75 years were interviewed by two bilingual researchers in a Midwestern state. Interviews were audio taped, transcribed verbatim, and translated into English. Analysis was done using conventional content analysis. Results The two groups shared perceptions about the quality of interpreter services as variable along three dimensions. Specifically, both groups evaluated quality of interpreters based on the interpreters’ ability to provide: (a) literal interpretation, (b) cultural interpretation, and (c) emotional interpretation during the health care encounter. The groups differed, however, on how they described the consequences of poor interpretation quality. Hmong participants described how poor quality interpretation could lead to: (a) poor interpersonal relationships among patients, providers, and interpreters, (b) inability of patients to follow through with treatment plans, and (c) emotional distress for patients. Conclusions Our study highlights the fact that patients are discerning consumers of interpreter services; and could be effective partners in efforts to reform and enhance interpreter services. PMID:25865517

  20. Exploring DeepMedic for the purpose of segmenting white matter hyperintensity lesions

    NASA Astrophysics Data System (ADS)

    Lippert, Fiona; Cheng, Bastian; Golsari, Amir; Weiler, Florian; Gregori, Johannes; Thomalla, Götz; Klein, Jan

    2018-02-01

    DeepMedic, an open source software library based on a multi-channel, multi-resolution 3D convolutional neural network, has recently been made publicly available for brain lesion segmentation. It has already been shown that segmentation tasks on MRI data of patients with traumatic brain injuries, brain tumors, and ischemic stroke lesions can be performed very well. In this paper we describe how it can be used efficiently for detecting and segmenting white matter hyperintensity lesions, and we examine whether it can be applied to single-channel routine 2D FLAIR data. For evaluation, we annotated 197 datasets with different numbers and sizes of white matter hyperintensity lesions. Our experiments show that substantial segmentation quality can be achieved. Compared to the original parametrization of the DeepMedic neural network, the training time can be drastically reduced by adjusting the corresponding training parameters, while the Dice coefficients remain nearly unchanged. This makes it possible to perform a whole training process within a single day on an NVIDIA GeForce GTX 580 graphics board, which also makes this library very interesting for research on low-end GPU hardware.

  1. Impact of image quality on OCT angiography based quantitative measurements.

    PubMed

    Al-Sheikh, Mayss; Ghasemi Falavarjani, Khalil; Akil, Handan; Sadda, SriniVas R

    2017-01-01

    To study the impact of image quality on quantitative measurements and the frequency of segmentation error with optical coherence tomography angiography (OCTA). Seventeen eyes of 10 healthy individuals were included in this study. OCTA was performed using a swept-source device (Triton, Topcon). Each subject underwent three scanning sessions 1-2 min apart; the first two scans were obtained under standard conditions and for the third session, the image quality index was reduced using application of a topical ointment. En face OCTA images of the retinal vasculature were generated using the default segmentation for the superficial and deep retinal layers (SRL, DRL). Intraclass correlation coefficient (ICC) was used as a measure of repeatability. The frequency of segmentation error, motion artifact, banding artifact and projection artifact was also compared among the three sessions. The frequency of segmentation error and motion artifact was statistically similar between high and low image quality sessions (P = 0.707 and P = 1, respectively). However, the frequency of projection and banding artifact was higher with a lower image quality. The vessel density in the SRL was highly repeatable in the high image quality sessions (ICC = 0.8); however, the repeatability was low when comparing the high and low image quality measurements (ICC = 0.3). In the DRL, the repeatability of the vessel density measurements was fair in the high quality sessions (ICC = 0.6 and ICC = 0.5, with and without automatic artifact removal, respectively) and poor comparing high and low image quality sessions (ICC = 0.3 and ICC = 0.06, with and without automatic artifact removal, respectively). The frequency of artifacts is higher and the repeatability of the measurements is lower with lower image quality. The image quality index should always be considered in OCTA-based quantitative measurements.
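    The repeatability statistic used here, an intraclass correlation coefficient, can be computed from a one-way ANOVA decomposition. A minimal ICC(1,1) sketch (the study does not state which ICC variant it used, so this is illustrative; the sample values are made up):

```python
def icc1(subjects):
    """One-way random-effects ICC(1,1). `subjects` is a list of per-subject
    measurement lists over k repeated sessions."""
    n, k = len(subjects), len(subjects[0])
    grand = sum(sum(s) for s in subjects) / (n * k)
    means = [sum(s) / k for s in subjects]
    # Between-subject and within-subject mean squares
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((v - m) ** 2 for s, m in zip(subjects, means) for v in s) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfectly repeatable vessel-density measurements across two sessions -> ICC = 1
print(icc1([[50.1, 50.1], [47.3, 47.3], [52.8, 52.8]]))  # 1.0
```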

  2. ECG signal quality during arrhythmia and its application to false alarm reduction.

    PubMed

    Behar, Joachim; Oster, Julien; Li, Qiao; Clifford, Gari D

    2013-06-01

    An automated algorithm to assess electrocardiogram (ECG) quality for both normal and abnormal rhythms is presented for false arrhythmia alarm suppression of intensive care unit (ICU) monitors. A particular focus is given to the quality assessment of a wide variety of arrhythmias. Data from three databases were used: the Physionet Challenge 2011 dataset, the MIT-BIH arrhythmia database, and the MIMIC II database. The quality of more than 33 000 single-lead 10 s ECG segments were manually assessed and another 12 000 bad-quality single-lead ECG segments were generated using the Physionet noise stress test database. Signal quality indices (SQIs) were derived from the ECGs segments and used as the inputs to a support vector machine classifier with a Gaussian kernel. This classifier was trained to estimate the quality of an ECG segment. Classification accuracies of up to 99% on the training and test set were obtained for normal sinus rhythm and up to 95% for arrhythmias, although performance varied greatly depending on the type of rhythm. Additionally, the association between 4050 ICU alarms from the MIMIC II database and the signal quality, as evaluated by the classifier, was studied. Results suggest that the SQIs should be rhythm specific and that the classifier should be trained for each rhythm call independently. This would require a substantially increased set of labeled data in order to train an accurate algorithm.
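    Signal quality indices of this kind are simple statistics computed over an ECG segment. One commonly used example is the kurtosis-based kSQI, sketched below as an illustration (the paper does not enumerate its SQIs here, so this is an assumption about the family of features, and the toy signals are stand-ins):

```python
def ksqi(x):
    """Kurtosis of an ECG segment: QRS-dominated ECG is strongly peaked
    (high kurtosis), while many noise types are closer to Gaussian or flat."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    return sum((v - mean) ** 4 for v in x) / (n * var * var)

spiky = [0.0] * 9 + [10.0]               # crude stand-in for an R-peak
flat = [(-1.0) ** i for i in range(10)]  # square-wave-like "noise"
print(ksqi(spiky) > ksqi(flat))  # True
```

    A vector of such SQIs per 10 s segment would then form the input to the Gaussian-kernel support vector machine classifier.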

  3. An automatic brain tumor segmentation tool.

    PubMed

    Diaz, Idanis; Boulanger, Pierre; Greiner, Russell; Hoehn, Bret; Rowe, Lindsay; Murtha, Albert

    2013-01-01

    This paper introduces an automatic brain tumor segmentation method (ABTS) for segmenting multiple components of brain tumor using four magnetic resonance image modalities. ABTS's four stages involve automatic histogram multi-thresholding and morphological operations including geodesic dilation. Our empirical results, on 16 real tumors, show that ABTS works very effectively, achieving a Dice accuracy compared to expert segmentation of 81% in segmenting edema and 85% in segmenting gross tumor volume (GTV).
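    The histogram-thresholding stage can be illustrated with a single-threshold Otsu computation, which picks the cut maximizing between-class variance. This is a simplification: ABTS uses automatic multi-thresholding across four MR modalities plus morphological operations:

```python
def otsu_threshold(values, bins=64):
    """Return the intensity threshold that maximizes between-class variance."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_var, best_bin = -1.0, 0
    w_bg = sum_bg = 0.0
    for t in range(bins):
        w_bg += hist[t]          # background weight up to bin t
        w_fg = total - w_bg      # foreground weight above bin t
        if w_bg == 0 or w_fg == 0:
            continue
        sum_bg += t * hist[t]
        mu_bg = sum_bg / w_bg
        mu_fg = (sum_all - sum_bg) / w_fg
        var = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if var > best_var:
            best_var, best_bin = var, t
    return lo + (best_bin + 1) * width

# Bimodal "intensities": background near 1-2, lesion near 10-11
voxels = [1, 1, 2, 2, 1, 2, 10, 11, 10, 11]
t = otsu_threshold(voxels)
print(2 < t < 10)  # True
```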

  4. W. M. Keck Observatory primary mirror segment repair project: overview and status

    NASA Astrophysics Data System (ADS)

    Meeks, Robert L.; Doyle, Steve; Higginson, Jamie; Hudek, John S.; Irace, William; McBride, Dennis; Pollard, Mike; Tai, Kuochou; Von Boeckmann, Tod; Wold, Leslie; Wold, Truman

    2016-07-01

    The W. M. Keck Observatory Segment Repair Project is repairing stress-induced fractures near the support points in the primary mirror segments. The cracks are believed to result from deficiencies in the original design and implementation of the adhesive joints connecting the Invar support components to the ZERODUR mirror. Stresses caused by temperature cycling over 20 years of service drove cracks that developed at the glass-metal interfaces. Over the last few years the extent and cause of the cracks have been studied, and new supports have been designed. Repair of the damaged glass required development of specialized tools and procedures for: (1) transport of the segments; (2) pre-repair metrology to establish the initial condition; (3) removal of support hardware assemblies; (4) removal of the original supports; (5) grinding and re-surfacing the damaged glass areas; (6) etching to remove sub-surface damage; (7) bonding new supports; (8) re-installation of support assemblies; and (9) post-repair metrology. Repair of the first segment demonstrated the new tools and processes. On-sky measurements before and after repair verified compliance with the requirements. This paper summarizes the repair process, on-sky results, and transportation system, and also provides an update on the project status and schedule for repairing all 84 mirror segments. Strategies for maintaining quality and ensuring that repairs are done consistently are also presented.

  5. Joint level-set and spatio-temporal motion detection for cell segmentation.

    PubMed

    Boukari, Fatima; Makrogiannis, Sokratis

    2016-08-10

    …Chan-Vese techniques, and 4% compared to the nonlinear spatio-temporal diffusion method. Despite the wide variation in cell shape, density, mitotic events, and image quality among the datasets, our proposed method produced promising segmentation results. These results indicate the efficiency and robustness of this method especially for mitotic events and low SNR imaging, enabling the application of subsequent quantification tasks.

  6. U.S. Army Custom Segmentation System

    DTIC Science & Technology

    2007-06-01

    segmentation is individual or intergroup differences in response to marketing-mix variables.

    Presumptions about segments:
    • different demands in a...product or service category
    • respond differently to changes in the marketing mix

    Criteria for segments:
    • The segments must exist in the environment

  7. On-sky performance of the Zernike phase contrast sensor for the phasing of segmented telescopes.

    PubMed

    Surdej, Isabelle; Yaitskova, Natalia; Gonte, Frederic

    2010-07-20

    The Zernike phase contrast method is a novel technique to phase the primary mirrors of segmented telescopes. It has been tested on-sky on a unit telescope of the Very Large Telescope with a segmented mirror conjugated to the primary mirror to emulate a segmented telescope. The theoretical background of this sensor and the algorithm used to retrieve the piston, tip, and tilt information are described. The performance of the sensor as a function of parameters such as star magnitude, seeing, and integration time is discussed. The phasing accuracy has always been below 15 nm root mean square wavefront error under normal conditions of operation and the limiting star magnitude achieved on-sky with this sensor is 15.7 in the red, which would be sufficient to phase segmented telescopes in closed-loop during observations.

  8. Limits of the possible: diagnostic image quality in coronary angiography with third-generation dual-source CT.

    PubMed

    Ochs, Marco M; Siepen, Fabian Aus dem; Fritz, Thomas; Andre, Florian; Gitsioudis, Gitsios; Korosoglou, Grigorios; Seitz, Sebastian; Bogomazov, Yuriy; Schlett, Christopher L; Sokiranski, Roman; Sommer, Andre; Gückel, Friedemann; Brado, Matthias; Kauczor, Hans-Ulrich; Görich, Johannes; Friedrich, Matthias G W; Katus, Hugo A; Buss, Sebastian J

    2017-07-01

    The use of coronary CT angiography (CTA) is appropriate in patients with acute or chronic chest pain; however, the diagnostic accuracy may be challenged by increased Agatston score (AS), increased heart rate, arrhythmia and severe obesity. Thus, we aim to determine the potential of the recently introduced third-generation dual-source CT (DSCT) for CTA in a 'real-life' clinical setting. Two hundred and sixty-eight consecutive patients (age: 67 ± 10 years; BMI: 27 ± 5 kg/m²; 61% male) undergoing clinically indicated CTA with DSCT were included in the retrospective single-center analysis. A contrast-enhanced volume dataset was acquired in sequential (SSM) (n = 151) or helical scan mode (HSM) (n = 117). Coronary segments were classified as of diagnostic or non-diagnostic image quality. A subset underwent invasive angiography to determine the diagnostic accuracy of CTA. SSM (96.8 ± 6%) and HSM (97.5 ± 8%) showed no significant differences in the overall diagnostic image quality. However, AS had significant influence on diagnostic image quality exclusively in SSM (B = 0.003; p = 0.0001), but not in HSM. Diagnostic image quality significantly decreased in SSM in patients with AS ≥2,000 (p = 0.03). SSM (sensitivity: 93.9%; specificity: 96.7%; PPV: 88.6%; NPV: 98.3%) and HSM (sensitivity: 97.4%; specificity: 94.3%; PPV: 86.0%; NPV: 99.0%) provided comparable diagnostic accuracy (p = n.s.). SSM yielded significantly lower radiation doses compared with HSM (2.1 ± 2.0 vs. 5.1 ± 3.3 mSv; p = 0.0001) in age- and BMI-matched cohorts. SSM in third-generation DSCT enables significant dose savings and provides robust diagnostic image quality in patients with AS ≤2,000 independent of heart rate, heart rhythm or obesity.

  9. Simple "TRS" Auxiliary tube for retraction of anterior segment using segmental T loop mechanics.

    PubMed

    Shyagali, Tarulatha R; Rajpara, Yagnesh; Trivedi, Kalyani

    2014-01-01

    Segmental T loop is the most popular frictionless mechanics so far. This biomechanically sound system was designed for Burstone's canine bracket, which can be extra inventory for clinicians who want to practice the segmental T loop routinely. The present manuscript provides an alternative to Burstone's canine bracket for the retraction of the anterior segment.

  10. Image segmentation using joint spatial-intensity-shape features: application to CT lung nodule segmentation

    NASA Astrophysics Data System (ADS)

    Ye, Xujiong; Siddique, Musib; Douiri, Abdel; Beddoe, Gareth; Slabaugh, Greg

    2009-02-01

    Automatic segmentation of medical images is a challenging problem due to the complexity and variability of human anatomy, poor contrast of the object being segmented, and noise resulting from the image acquisition process. This paper presents a novel feature-guided method for the segmentation of 3D medical lesions. The proposed algorithm combines 1) a volumetric shape feature (shape index) based on high-order partial derivatives; 2) mean shift clustering in a joint spatial-intensity-shape (JSIS) feature space; and 3) a modified expectation-maximization (MEM) algorithm on the mean shift mode map to merge the neighboring regions (modes). In such a scenario, the volumetric shape feature is integrated into the process of the segmentation algorithm. The joint spatial-intensity-shape features provide rich information for the segmentation of the anatomic structures or lesions (tumors). The proposed method has been evaluated on a clinical dataset of thoracic CT scans that contains 68 nodules. A volume overlap ratio between each segmented nodule and the ground truth annotation is calculated. Using the proposed method, the mean overlap ratio over all the nodules is 0.80. On visual inspection and using a quantitative evaluation, the experimental results demonstrate the potential of the proposed method. It can properly segment a variety of nodules including juxta-vascular and juxta-pleural nodules, which are challenging for conventional methods due to the high similarity of intensities between the nodules and their adjacent tissues. This approach could also be applied to lesion segmentation in other anatomies, such as polyps in the colon.
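The volumetric shape feature the abstract relies on can be sketched compactly. Below is a 2D illustration of a Koenderink-style shape index computed from Hessian eigenvalues (the paper uses the 3D volumetric version and combines it with mean shift clustering); the function name and the `eps` guard are our own assumptions:

```python
import numpy as np

def shape_index(img, eps=1e-8):
    """Koenderink-style shape index from the Hessian of a 2D image.

    Values near +1 flag cap-like (blob) structures such as bright nodules,
    values near -1 cup-like ones; ridges fall in between. 2D sketch of the
    volumetric shape feature described in the abstract.
    """
    gy, gx = np.gradient(img.astype(float))
    gyy, _ = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    # Eigenvalues (principal curvatures) of the Hessian [[gxx, gxy], [gxy, gyy]]
    half_trace = (gxx + gyy) / 2.0
    root = np.sqrt(((gxx - gyy) / 2.0) ** 2 + gxy ** 2)
    k1, k2 = half_trace + root, half_trace - root   # k1 >= k2
    return (2.0 / np.pi) * np.arctan((k1 + k2) / (k2 - k1 - eps))
```

In a joint spatial-intensity-shape feature space, this per-voxel value would be stacked with coordinates and intensity before clustering.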

  11. PCL-based Shape Memory Polymers with Variable PDMS Soft Segment Lengths

    PubMed Central

    Zhang, Dawei; Giese, Melissa L.; Prukop, Stacy L.; Grunlan, Melissa A.

    2012-01-01

Thermoresponsive shape memory polymers (SMPs) are stimuli-responsive materials that return to their permanent shape from a temporary shape in response to heating. The design of new SMPs that achieve a broader range of properties, including mechanical behavior, is critical to realizing their potential in biomedical as well as industrial and aerospace applications. To tailor the properties of SMPs, “AB networks” comprised of two distinct polymer components have been investigated but are overwhelmingly limited to those in which both components are organic. In the present work, we prepared inorganic-organic SMPs comprised of inorganic polydimethylsiloxane (PDMS) segments of varying lengths and organic poly(ε-caprolactone) (PCL) segments. PDMS has a very low Tg (−125 °C), which makes it a particularly effective soft segment for tailoring the mechanical properties of PCL-based SMPs. The SMPs were prepared via the rapid photocure of solutions of diacrylated PCL40-block-PDMSm-block-PCL40 macromers (m = 20, 37, 66 and 130). The resulting inorganic-organic SMP networks exhibited excellent shape fixity and recovery. By changing the PDMS segment length, the thermal, mechanical, and surface properties were systematically altered. PMID:22904597

  12. Segmentation of the Himalayas as revealed by arc-parallel gravity anomalies.

    PubMed

    Hetényi, György; Cattin, Rodolphe; Berthet, Théo; Le Moigne, Nicolas; Chophel, Jamyang; Lechmann, Sarah; Hammer, Paul; Drukpa, Dowchu; Sapkota, Soma Nath; Gautier, Stéphanie; Thinley, Kinzang

    2016-09-21

Lateral variations along the Himalayan arc are suggested by an increasing number of studies and carry important information about the orogen's segmentation. Here we compile the hitherto most complete land gravity dataset in the region, which enables the highest-resolution analysis currently possible. To study lateral variations in collisional structure we compute arc-parallel gravity anomalies (APaGA) by subtracting the average arc-perpendicular profile from our dataset; we compute arc-parallel topography anomalies (APaTA) likewise. We find no direct correlation between APaGA, APaTA and background seismicity, as is suggested in oceanic subduction contexts. In the Himalayas, APaTA mainly reflect relief and erosional effects, whereas APaGA reflect the deep structure of the orogen with clear lateral boundaries. Four segments with disparate flexural geometry are outlined: NE India, Bhutan, Nepal & India until Dehradun, and NW India. The segment boundaries in the India plate are related to inherited structures, and the boundaries of the Shillong block are highlighted by seismic activity. We find that large earthquakes of the past millennium do not propagate across the segment boundaries defined by APaGA; these therefore seem to set limits for potential ruptures of megathrust earthquakes.
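The core anomaly computation described in the abstract, subtracting the average arc-perpendicular profile, reduces to a single vectorized operation once the data are projected onto arc coordinates. A minimal sketch (function name and gridded input are our assumptions; real gravity data would be irregular and need projection and interpolation first):

```python
import numpy as np

def arc_parallel_anomaly(grid):
    """Arc-parallel anomaly (APaGA/APaTA-style computation).

    `grid` is a 2D array indexed [along_arc, across_arc]. The average
    arc-perpendicular profile over all along-arc positions is removed
    from every row, leaving only lateral (along-arc) variations.
    """
    profile = np.nanmean(grid, axis=0)      # average arc-perpendicular profile
    return grid - profile[np.newaxis, :]    # anomaly relative to that profile
```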

  13. Segmentation of the Himalayas as revealed by arc-parallel gravity anomalies

    PubMed Central

    Hetényi, György; Cattin, Rodolphe; Berthet, Théo; Le Moigne, Nicolas; Chophel, Jamyang; Lechmann, Sarah; Hammer, Paul; Drukpa, Dowchu; Sapkota, Soma Nath; Gautier, Stéphanie; Thinley, Kinzang

    2016-01-01

Lateral variations along the Himalayan arc are suggested by an increasing number of studies and carry important information about the orogen’s segmentation. Here we compile the hitherto most complete land gravity dataset in the region, which enables the highest-resolution analysis currently possible. To study lateral variations in collisional structure we compute arc-parallel gravity anomalies (APaGA) by subtracting the average arc-perpendicular profile from our dataset; we compute arc-parallel topography anomalies (APaTA) likewise. We find no direct correlation between APaGA, APaTA and background seismicity, as is suggested in oceanic subduction contexts. In the Himalayas, APaTA mainly reflect relief and erosional effects, whereas APaGA reflect the deep structure of the orogen with clear lateral boundaries. Four segments with disparate flexural geometry are outlined: NE India, Bhutan, Nepal & India until Dehradun, and NW India. The segment boundaries in the India plate are related to inherited structures, and the boundaries of the Shillong block are highlighted by seismic activity. We find that large earthquakes of the past millennium do not propagate across the segment boundaries defined by APaGA; these therefore seem to set limits for potential ruptures of megathrust earthquakes. PMID:27649782

  14. Object segmentation using graph cuts and active contours in a pyramidal framework

    NASA Astrophysics Data System (ADS)

    Subudhi, Priyambada; Mukhopadhyay, Susanta

    2018-03-01

Graph cuts and active contours are two very popular interactive object segmentation techniques in the field of computer vision and image processing. However, both approaches have well-known limitations. Graph cut methods perform efficiently and give globally optimal segmentation results for smaller images. For larger images, however, huge graphs must be constructed, which not only takes an unacceptable amount of memory but also greatly increases the time required for segmentation. In the case of active contours, on the other hand, initial contour selection plays an important role in the accuracy of the segmentation, so a proper selection of the initial contour may improve both the complexity and the accuracy of the result. In this paper, we combine these two approaches to overcome their respective drawbacks and develop a fast technique for object segmentation. We use a pyramidal framework and apply the mincut/maxflow algorithm on the lowest-resolution image with the fewest seed points possible, which is very fast due to the smaller size of the image. The obtained segmentation contour is then super-sampled and used as the initial contour for the next higher-resolution image. As the initial contour is very close to the actual contour, fewer iterations are required for the contour to converge. The process is repeated for all the higher-resolution images, and experimental results show that our approach is faster as well as more memory efficient than either graph cut or active contour segmentation alone.
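The coarse-to-fine control flow described above can be sketched without a full maxflow solver. In the sketch below, a plain global threshold stands in for the paper's mincut/maxflow step at the coarsest level, and at each finer level only pixels near the upsampled coarse contour are re-decided, mimicking the cheap refinement of an initialization that is already close to the true boundary. All names and the threshold are illustrative assumptions:

```python
import numpy as np

def downsample(img):
    """One pyramid level: 2x2 block averaging."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def coarse_to_fine_segment(img, levels=3, thresh=0.5):
    """Coarse-to-fine segmentation in an image pyramid.

    The expensive decision (here: a stand-in threshold) runs only on the
    coarsest image; the mask is then upsampled level by level, and only
    pixels adjacent to the coarse contour are re-decided.
    """
    pyramid = [np.asarray(img, float)]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1]))
    mask = pyramid[-1] > thresh                      # decision on smallest image
    for level in pyramid[-2::-1]:                    # walk back up to full size
        up = np.kron(mask.astype(np.uint8), np.ones((2, 2), np.uint8)).astype(bool)
        up = up[:level.shape[0], :level.shape[1]]
        # Re-decide only pixels adjacent to the coarse contour
        border = (up ^ np.roll(up, 1, axis=0)) | (up ^ np.roll(up, 1, axis=1))
        mask = np.where(border, level > thresh, up)
    return mask
```

The memory saving comes from never building a full-resolution graph; only the narrow band near the contour is re-examined at fine levels.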

  15. Identifying Benefit Segments among College Students.

    ERIC Educational Resources Information Center

    Brown, Joseph D.

    1991-01-01

    Using concept of market segmentation (dividing market into distinct groups requiring different product benefits), surveyed 398 college students to determine benefit segments among students selecting a college to attend and factors describing each benefit segment. Identified one major segment of students (classroomers) plus three minor segments…

  16. Stress Analysis of Bolted, Segmented Cylindrical Shells Exhibiting Flange Mating-Surface Waviness

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Phillips, Dawn R.; Raju, Ivatury S.

    2009-01-01

Bolted, segmented cylindrical shells are a common structural component in many engineering systems, especially aerospace launch vehicles. Segmented shells are often needed due to limitations of manufacturing capabilities or transportation issues related to very long, large-diameter cylindrical shells. These cylindrical shells typically have a flange or ring welded to opposite ends so that shell segments can be mated together and bolted to form a larger structural system. As the diameter of these shells increases, maintaining strict fabrication tolerances for the flanges to be flat and parallel on a welded structure is an extreme challenge. Local fit-up stresses develop in the structure due to flange mating-surface mismatch (flange waviness). These local stresses need to be considered when predicting a critical initial flaw size. Flange waviness is one contributor to the fit-up stress state. The present paper describes the modeling and analysis effort to simulate fit-up stresses due to flange waviness in a typical bolted, segmented cylindrical shell. Results from parametric studies are presented for various flange mating-surface waviness distributions and amplitudes.

  17. TuMore: generation of synthetic brain tumor MRI data for deep learning based segmentation approaches

    NASA Astrophysics Data System (ADS)

    Lindner, Lydia; Pfarrkirchner, Birgit; Gsaxner, Christina; Schmalstieg, Dieter; Egger, Jan

    2018-03-01

Accurate segmentation and measurement of brain tumors plays an important role in clinical practice and research, as it is critical for treatment planning and monitoring of tumor growth. However, brain tumor segmentation is one of the most challenging tasks in medical image analysis. Since manual segmentations are subjective, time consuming, and neither accurate nor reliable, there is a need for objective, robust and fast automated segmentation methods that provide competitive performance. Deep learning based approaches are therefore gaining interest in the field of medical image segmentation. When the training data set is large enough, deep learning approaches can be extremely effective, but in domains like medicine, only limited data is available in the majority of cases. For this reason, we propose a method to create a large dataset of brain MRI (Magnetic Resonance Imaging) images containing synthetic brain tumors - specifically glioblastomas - and the corresponding ground truth, which can subsequently be used to train deep neural networks.

  18. DTI segmentation by statistical surface evolution.

    PubMed

    Lenglet, Christophe; Rousson, Mikaël; Deriche, Rachid

    2006-06-01

We address the problem of the segmentation of cerebral white matter structures from diffusion tensor images (DTI). DTI produces, from a set of diffusion-weighted MR images, tensor-valued images in which each voxel is assigned a 3 × 3 symmetric, positive-definite matrix. This second-order tensor is simply the covariance matrix of a zero-mean local Gaussian process modeling the average motion of water molecules. As we show in this paper, the definition of a dissimilarity measure and of statistics between such quantities is a nontrivial task which must be tackled carefully. We claim and demonstrate that, by using the theoretically well-founded differential geometrical properties of the manifold of multivariate normal distributions, it is possible to improve the quality of the segmentation results obtained with other dissimilarity measures such as the Euclidean distance or the Kullback-Leibler divergence. The main goal of this paper is to prove that the choice of the probability metric, i.e., the dissimilarity measure, has a deep impact on the tensor statistics and, hence, on the achieved results. We introduce a variational formulation, in the level-set framework, to estimate the optimal segmentation of a DTI under the hypothesis that diffusion tensors exhibit a Gaussian distribution in the different partitions. We must also respect the geometric constraints imposed by the interfaces existing among the cerebral structures, detected by the gradient of the DTI. We show how to express all the statistical quantities for the different probability metrics. We validate and compare the results obtained on various synthetic datasets, a biological rat spinal cord phantom and human brain DTIs.
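The Riemannian alternative to the Euclidean distance that this line of work builds on is the affine-invariant geodesic distance between symmetric positive-definite (SPD) tensors, d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F. A numpy-only sketch (names are ours; we assume well-conditioned 3×3 tensors so the eigendecompositions are stable):

```python
import numpy as np

def _sym_pow(S, p):
    """Matrix power of a symmetric positive-definite matrix via eigh."""
    w, V = np.linalg.eigh(S)
    return (V * w ** p) @ V.T

def affine_invariant_distance(A, B):
    """Geodesic (affine-invariant) distance between SPD diffusion tensors:
    d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F.
    """
    R = _sym_pow(A, -0.5)
    M = R @ B @ R
    w, _ = np.linalg.eigh((M + M.T) / 2.0)   # symmetrize for numerical safety
    return np.sqrt(np.sum(np.log(w) ** 2))
```

Unlike the Euclidean distance, this metric is invariant under affine changes of coordinates and never "sees" non-positive tensors, which is why it yields better tensor statistics in the segmentation energy.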

  19. Weakly Supervised Segmentation-Aided Classification of Urban Scenes from 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Guinard, S.; Landrieu, L.

    2017-05-01

We consider the problem of the semantic classification of 3D LiDAR point clouds obtained from urban scenes when the training set is limited. We propose a non-parametric segmentation model for urban scenes composed of anthropic objects of simple shapes, partitioning the scene into geometrically-homogeneous segments whose size is determined by the local complexity. This segmentation can be integrated into a conditional random field (CRF) classifier in order to capture the high-level structure of the scene. For each cluster, this allows us to aggregate the noisy predictions of a weakly-supervised classifier to produce a higher-confidence data term. We demonstrate the improvement provided by our method on two publicly available large-scale datasets.
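The aggregation step, pooling noisy per-point predictions within each geometrically-homogeneous segment, can be sketched as simple majority voting (a stand-in for the paper's CRF data term; function and variable names are our assumptions):

```python
import numpy as np

def aggregate_segment_predictions(segment_ids, point_preds, n_classes):
    """Aggregate noisy per-point class predictions within each segment.

    Majority voting inside each segment yields a higher-confidence label,
    which is then propagated back to every point in that segment.
    """
    seg_labels = {}
    for seg in np.unique(segment_ids):
        votes = np.bincount(point_preds[segment_ids == seg], minlength=n_classes)
        seg_labels[seg] = int(np.argmax(votes))
    return np.array([seg_labels[s] for s in segment_ids])
```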

  20. Computer aided system for segmentation and visualization of microcalcifications in digital mammograms.

    PubMed

    Reljin, Branimir; Milosević, Zorica; Stojić, Tomislav; Reljin, Irini

    2009-01-01

Two methods for segmentation and visualization of microcalcifications in digital or digitized mammograms are described. The first method is based on modern mathematical morphology, while the second uses a multifractal approach. In the first method, an appropriate combination of morphological operations yields high local contrast enhancement followed by significant suppression of background tissue, irrespective of its radiological density. Through an iterative procedure, this method strongly emphasizes only small bright details, i.e. possible microcalcifications. In the multifractal approach, corresponding multifractal "images" are created from the initial mammogram, from which a radiologist has the freedom to change the level of segmentation. A user-friendly computer-aided visualization (CAV) system embedding the two methods is realized. The interactive approach enables the physician to control the level and the quality of segmentation. The suggested methods were tested on mammograms from the MIAS database as a gold standard and from clinical practice, using digitized films and digital images from a full-field digital mammography system.
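The morphological idea behind the first method, enhancing small bright details while suppressing background tissue of any density, is classically achieved with a white top-hat transform (image minus its grey-level opening). A numpy-only sketch under the assumption of a flat square structuring element (the paper's exact operator combination is more elaborate):

```python
import numpy as np

def _shifted(img, size):
    """All size x size neighborhood shifts of `img`, edge-padded."""
    p = size // 2
    pad = np.pad(img, p, mode='edge')
    H, W = img.shape
    return [pad[p + dy:p + dy + H, p + dx:p + dx + W]
            for dy in range(-p, p + 1) for dx in range(-p, p + 1)]

def white_top_hat(img, size=5):
    """White top-hat: image minus its grey-level opening.

    Keeps bright details smaller than the structuring element (candidate
    microcalcifications) and flattens the background regardless of its
    local density.
    """
    eroded = np.min(_shifted(img, size), axis=0)   # grey erosion
    opened = np.max(_shifted(eroded, size), axis=0)  # then grey dilation
    return img - opened
```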

  1. On-Line Detection and Segmentation of Sports Motions Using a Wearable Sensor.

    PubMed

    Kim, Woosuk; Kim, Myunggyu

    2018-03-19

    In sports motion analysis, observation is a prerequisite for understanding the quality of motions. This paper introduces a novel approach to detect and segment sports motions using a wearable sensor for supporting systematic observation. The main goal is, for convenient analysis, to automatically provide motion data, which are temporally classified according to the phase definition. For explicit segmentation, a motion model is defined as a sequence of sub-motions with boundary states. A sequence classifier based on deep neural networks is designed to detect sports motions from continuous sensor inputs. The evaluation on two types of motions (soccer kicking and two-handed ball throwing) verifies that the proposed method is successful for the accurate detection and segmentation of sports motions. By developing a sports motion analysis system using the motion model and the sequence classifier, we show that the proposed method is useful for observation of sports motions by automatically providing relevant motion data for analysis.

  2. A validation framework for brain tumor segmentation.

    PubMed

    Archip, Neculai; Jolesz, Ferenc A; Warfield, Simon K

    2007-10-01

We introduce a validation framework for the segmentation of brain tumors from magnetic resonance (MR) images. A novel unsupervised semiautomatic brain tumor segmentation algorithm is also presented. The proposed framework consists of 1) T1-weighted MR images of patients with brain tumors, 2) segmentations of brain tumors performed by four independent experts, 3) segmentations of brain tumors generated by a semiautomatic algorithm, and 4) a software tool that estimates the performance of segmentation algorithms. We demonstrate the validation of the novel segmentation algorithm within the proposed framework. We show its performance and compare it with existing segmentation methods. The image datasets and software are available at http://www.brain-tumor-repository.org/. We present an Internet resource that provides access to MR brain tumor image data and segmentations that can be openly used by the research community. Its purpose is to encourage the development and evaluation of segmentation methods by providing raw test and image data, human expert segmentation results, and methods for comparing segmentation results.
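The kind of spatial-overlap measure such a validation tool computes when comparing algorithmic output against expert segmentations is typically the Dice similarity coefficient; a minimal sketch (the framework's own performance-estimation tool is richer than this single metric):

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice similarity coefficient between two binary segmentations.

    Returns 1.0 for identical masks, 0.0 for disjoint ones; two empty
    masks are treated as a perfect match.
    """
    a, b = np.asarray(seg_a, bool), np.asarray(seg_b, bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * inter / denom
```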

  3. Cavity contour segmentation in chest radiographs using supervised learning and dynamic programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maduskar, Pragnya, E-mail: pragnya.maduskar@radboudumc.nl; Hogeweg, Laurens; Sánchez, Clara I.

, respectively, 2.48 ± 2.19 and 8.32 ± 5.66 mm, whereas these distances were 1.66 ± 1.29 and 5.75 ± 4.88 mm between the segmentations by the reference reader and the independent observer, respectively. The automatic segmentations were also visually assessed by two trained CXR readers as “excellent,” “adequate,” or “insufficient.” The readers had good agreement in assessing the cavity outlines and 84% of the segmentations were rated as “excellent” or “adequate” by both readers. Conclusions: The proposed cavity segmentation technique produced results with a good degree of overlap with manual expert segmentations. The evaluation measures demonstrated that the results approached the results of the experienced chest radiologists, in terms of overlap measure and contour distance measures. Automatic cavity segmentation can be employed in TB clinics for treatment monitoring, especially in resource limited settings where radiologists are not available.

  4. Metric Learning for Hyperspectral Image Segmentation

    NASA Technical Reports Server (NTRS)

    Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca

    2011-01-01

We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
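Learning a metric with multiclass LDA amounts to solving a generalized eigenproblem on the within- and between-class scatter matrices; distances in the transformed space then serve as the task-specific similarity. A plain-numpy sketch, assuming a non-singular within-class scatter (names are ours):

```python
import numpy as np

def lda_transform(X, y, n_components=2):
    """Multiclass Linear Discriminant Analysis as a learned metric.

    Solves Sw^{-1} Sb for its leading eigenvectors to obtain a linear
    transform W that spreads labeled classes apart; Euclidean distances
    in X @ W become the task-specific similarity measure.
    """
    X, y = np.asarray(X, float), np.asarray(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)               # within-class scatter
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)  # between-class scatter
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(evals.real)[::-1]
    W = evecs.real[:, order[:n_components]]
    return X @ W, W
```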

  5. Generalized expectation-maximization segmentation of brain MR images

    NASA Astrophysics Data System (ADS)

    Devalkeneer, Arnaud A.; Robe, Pierre A.; Verly, Jacques G.; Phillips, Christophe L. M.

    2006-03-01

Manual segmentation of medical images is impractical because it is time consuming, not reproducible, and prone to human error. It is also very difficult to take into account the 3D nature of the images. Thus, semi- or fully-automatic methods are of great interest. Current segmentation algorithms based on an Expectation-Maximization (EM) procedure present some limitations. The algorithm by Ashburner et al., 2005, does not allow multichannel inputs, e.g. two MR images of different contrast, and does not use spatial constraints between adjacent voxels, e.g. Markov random field (MRF) constraints. The solution of Van Leemput et al., 1999, employs a simplified model (mixture coefficients are not estimated and only one Gaussian is used per tissue class, with three for the image background). We have thus implemented an algorithm that combines the features of these two approaches: multichannel inputs, intensity bias correction, a multi-Gaussian histogram model, and MRF constraints. Our proposed method classifies tissues in three iterative main stages by way of a Generalized-EM (GEM) algorithm: (1) estimation of the Gaussian parameters modeling the histogram of the images, (2) correction of image intensity non-uniformity, and (3) modification of prior classification knowledge by MRF techniques. The goal of the GEM algorithm is to maximize the log-likelihood across the classes and voxels. Our segmentation algorithm was validated on synthetic data (with the Dice metric criterion) and real data (by a neurosurgeon) and compared to the original algorithms by Ashburner et al. and Van Leemput et al. Our combined approach leads to more robust and accurate segmentation.
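The E and M steps underlying this family of tissue classifiers can be illustrated with a minimal two-component 1D Gaussian mixture (the paper's GEM additionally interleaves bias-field correction and MRF priors between these steps; the initialization and variance floor below are our assumptions):

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    """EM for a two-component 1D Gaussian mixture over voxel intensities."""
    x = np.asarray(x, float)
    mu = np.array([x.min(), x.max()])              # crude initialization
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each voxel
        p = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means and variances
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
    return pi, mu, var
```

A "generalized" EM only requires each M-step to increase (not maximize) the expected log-likelihood, which is what makes interleaving the bias and MRF updates legitimate.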

  6. Efficacy of texture, shape, and intensity feature fusion for posterior-fossa tumor segmentation in MRI.

    PubMed

    Ahmed, Shaheen; Iftekharuddin, Khan M; Vossough, Arastoo

    2011-03-01

    Our previous works suggest that fractal texture feature is useful to detect pediatric brain tumor in multimodal MRI. In this study, we systematically investigate efficacy of using several different image features such as intensity, fractal texture, and level-set shape in segmentation of posterior-fossa (PF) tumor for pediatric patients. We explore effectiveness of using four different feature selection and three different segmentation techniques, respectively, to discriminate tumor regions from normal tissue in multimodal brain MRI. We further study the selective fusion of these features for improved PF tumor segmentation. Our result suggests that Kullback-Leibler divergence measure for feature ranking and selection and the expectation maximization algorithm for feature fusion and tumor segmentation offer the best results for the patient data in this study. We show that for T1 and fluid attenuation inversion recovery (FLAIR) MRI modalities, the best PF tumor segmentation is obtained using the texture feature such as multifractional Brownian motion (mBm) while that for T2 MRI is obtained by fusing level-set shape with intensity features. In multimodality fused MRI (T1, T2, and FLAIR), mBm feature offers the best PF tumor segmentation performance. We use different similarity metrics to evaluate quality and robustness of these selected features for PF tumor segmentation in MRI for ten pediatric patients.

  7. Prognostic validation of a 17-segment score derived from a 20-segment score for myocardial perfusion SPECT interpretation.

    PubMed

    Berman, Daniel S; Abidov, Aiden; Kang, Xingping; Hayes, Sean W; Friedman, John D; Sciammarella, Maria G; Cohen, Ishac; Gerlach, James; Waechter, Parker B; Germano, Guido; Hachamovitch, Rory

    2004-01-01

    Recently, a 17-segment model of the left ventricle has been recommended as an optimally weighted approach for interpreting myocardial perfusion single photon emission computed tomography (SPECT). Methods to convert databases from previous 20- to new 17-segment data and criteria for abnormality for the 17-segment scores are needed. Initially, for derivation of the conversion algorithm, 65 patients were studied (algorithm population) (pilot group, n = 28; validation group, n = 37). Three conversion algorithms were derived: algorithm 1, which used mid, distal, and apical scores; algorithm 2, which used distal and apical scores alone; and algorithm 3, which used maximal scores of the distal septal, lateral, and apical segments in the 20-segment model for 3 corresponding segments of the 17-segment model. The prognosis population comprised 16,020 consecutive patients (mean age, 65 +/- 12 years; 41% women) who had exercise or vasodilator stress technetium 99m sestamibi myocardial perfusion SPECT and were followed up for 2.1 +/- 0.8 years. In this population, 17-segment scores were derived from 20-segment scores by use of algorithm 2, which demonstrated the best agreement with expert 17-segment reading in the algorithm population. The prognostic value of the 20- and 17-segment scores was compared by converting the respective summed scores into percent myocardium abnormal. Conversion algorithm 2 was found to be highly concordant with expert visual analysis by the 17-segment model (r = 0.982; kappa = 0.866) in the algorithm population. In the prognosis population, 456 cardiac deaths occurred during follow-up. When the conversion algorithm was applied, extent and severity of perfusion defects were nearly identical by 20- and derived 17-segment scores. The receiver operating characteristic curve areas by 20- and 17-segment perfusion scores were identical for predicting cardiac death (both 0.77 +/- 0.02, P = not significant). The optimal prognostic cutoff value for either 20

  8. Large deep neural networks for MS lesion segmentation

    NASA Astrophysics Data System (ADS)

    Prieto, Juan C.; Cavallari, Michele; Palotai, Miklos; Morales Pinzon, Alfredo; Egorova, Svetlana; Styner, Martin; Guttmann, Charles R. G.

    2017-02-01

Multiple sclerosis (MS) is a multi-factorial autoimmune disorder, characterized by spatial and temporal dissemination of brain lesions that are visible in T2-weighted and Proton Density (PD) MRI. Assessment of lesion burden is useful for monitoring the course of the disease and assessing correlates of clinical outcomes. Although there are established semi-automated methods to measure lesion volume, most of them require human interaction and editing, which is time consuming and limits the ability to analyze large sets of data with high accuracy. The primary objective of this work is to improve existing segmentation algorithms and accelerate the time-consuming operation of identifying and validating MS lesions. In this paper, a Deep Neural Network for MS lesion segmentation is implemented. The MS lesion samples are extracted from the Partners Comprehensive Longitudinal Investigation of Multiple Sclerosis (CLIMB) study. A set of 900 subjects with T2, PD and manually corrected label map images were used to train a Deep Neural Network to identify MS lesions. Initial tests using this network achieved a 90% accuracy rate. A secondary goal was to enable this data repository for big data analysis by using this algorithm to segment the remaining cases available in the CLIMB repository.

  9. WCE video segmentation using textons

    NASA Astrophysics Data System (ADS)

    Gallo, Giovanni; Granata, Eliana

    2010-03-01

Wireless Capsule Endoscopy (WCE) integrates wireless transmission with image and video technology. It has been used to examine the small intestine noninvasively. Medical specialists look for significant events in the WCE video by direct visual inspection, manually labelling clinically relevant frames in tiring sessions of up to one hour. This limits WCE usage. The ability to automatically discriminate digestive organs such as the esophagus, stomach, small intestine and colon is therefore of great advantage. In this paper we propose to use textons for the automatic discrimination of abrupt changes within a video. In particular, we consider as features, for each frame, hue, saturation, value, high-frequency energy content and the responses to a bank of Gabor filters. The experiments were conducted on ten video segments extracted from WCE videos, in which the significant events had been previously labelled by experts. Results show that the proposed method may eliminate up to 70% of the frames from further investigation. The direct analysis by the doctors may hence be concentrated only on eventful frames. A graphical tool showing sudden changes in the texton frequencies for each frame is also proposed as a visual aid to find clinically relevant segments of the video.
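Once each frame is reduced to a feature vector (e.g. a normalized histogram of texton labels, or the hue/saturation/Gabor-energy features listed above), the organ-boundary candidates are simply the frames where that descriptor jumps. A minimal sketch; the L1 distance and threshold are illustrative assumptions:

```python
import numpy as np

def abrupt_changes(frame_features, thresh=0.25):
    """Flag abrupt content changes between consecutive frames.

    Each row of `frame_features` is a per-frame descriptor; a large L1
    jump between consecutive rows marks a candidate organ boundary.
    Returns the indices of the frames immediately after each jump.
    """
    f = np.asarray(frame_features, float)
    jumps = np.abs(np.diff(f, axis=0)).sum(axis=1)
    return np.where(jumps > thresh)[0] + 1
```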

  10. Fast Edge Detection and Segmentation of Terrestrial Laser Scans Through Normal Variation Analysis

    NASA Astrophysics Data System (ADS)

    Che, E.; Olsen, M. J.

    2017-09-01

Terrestrial Laser Scanning (TLS) utilizes light detection and ranging (lidar) to effectively and efficiently acquire point cloud data for a wide variety of applications. Segmentation is a common post-processing procedure that groups the point cloud into a number of clusters to simplify the data for the subsequent modelling and analysis needed for most applications. This paper presents a novel method to rapidly segment TLS data based on edge detection and region growing. First, by computing the projected incidence angles and performing normal variation analysis, the silhouette edges and intersection edges are separated from the smooth surfaces. A modified region growing algorithm then groups the points lying on the same smooth surface. The proposed method efficiently exploits the gridded scan pattern utilized during acquisition of TLS data by most sensors and takes advantage of parallel programming to process approximately 1 million points per second. Moreover, the proposed segmentation does not require estimation of the normal at each point, which limits the extent to which errors in normal estimation propagate to the segmentation. Both an indoor and an outdoor scene are used in experiments to demonstrate and discuss the effectiveness and robustness of the proposed segmentation method.
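The grouping step, growing regions over the gridded scan while normals stay similar, can be sketched as a breadth-first flood fill on the grid (the paper's edge-detection stage beforehand, and its avoidance of explicit per-point normals, are omitted; names and the angle threshold are our assumptions):

```python
import numpy as np
from collections import deque

def grow_regions(normals, angle_thresh_deg=10.0):
    """Region growing on a gridded scan using normal variation.

    `normals` is an (H, W, 3) array of unit normals on the scan grid;
    4-connected points whose normals differ by less than the angle
    threshold are grouped, so each smooth surface gets one label.
    """
    H, W, _ = normals.shape
    labels = -np.ones((H, W), dtype=int)
    cos_t = np.cos(np.radians(angle_thresh_deg))
    cur = 0
    for sy in range(H):
        for sx in range(W):
            if labels[sy, sx] != -1:
                continue
            labels[sy, sx] = cur
            q = deque([(sy, sx)])
            while q:                                  # BFS over the grid
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < H and 0 <= nx < W and labels[ny, nx] == -1
                            and np.dot(normals[y, x], normals[ny, nx]) > cos_t):
                        labels[ny, nx] = cur
                        q.append((ny, nx))
            cur += 1
    return labels
```

Working directly on the scan grid, rather than on an unordered point cloud with k-nearest-neighbor searches, is what makes this style of segmentation fast.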

  11. Cryo-EM Structure Determination Using Segmented Helical Image Reconstruction.

    PubMed

    Fromm, S A; Sachse, C

    2016-01-01

Treating helices as single-particle-like segments followed by helical image reconstruction has become the method of choice for high-resolution structure determination of well-ordered helical viruses as well as flexible filaments. In this review, we illustrate how the combination of the latest hardware developments with optimized image processing routines has led to a series of near-atomic resolution structures of helical assemblies. Originally, the treatment of helices as a sequence of segments followed by Fourier-Bessel reconstruction revealed the potential to determine near-atomic resolution structures from helical specimens. In the meantime, real-space image processing of helices in a stack of single particles was developed, enabling the structure determination of specimens that resisted classical Fourier helical reconstruction and also facilitating high-resolution structure determination. Despite the progress in real-space analysis, the combination of Fourier and real-space processing is still commonly used to better estimate the symmetry parameters, as the imposition of the correct helical symmetry is essential for high-resolution structure determination. Recent hardware advancement through the introduction of direct electron detectors has significantly enhanced image quality and, together with improved image processing procedures, has made segmented helical reconstruction a very productive cryo-EM structure determination method. © 2016 Elsevier Inc. All rights reserved.

  12. Anterior Segment Ischemia after Strabismus Surgery

    PubMed Central

    Göçmen, Emine Seyhan; Atalay, Yonca; Evren Kemer, Özlem; Sarıkatipoğlu, Hikmet Yavuz

    2017-01-01

    A 46-year-old male patient was referred to our clinic with complaints of diplopia and esotropia in his right eye that developed after a car accident. The patient had right esotropia in primary position and abduction of the right eye was totally limited. Primary deviation was over 40 prism diopters at near and distance. The patient was diagnosed with sixth nerve palsy and 18 months after trauma, he underwent right medial rectus muscle recession. Ten months after the first operation, full-thickness tendon transposition of the superior and inferior rectus muscles (with Foster suture) was performed. On the first postoperative day, slit-lamp examination revealed corneal edema, 3+ cells in the anterior chamber and an irregular pupil. According to these findings, the diagnosis was anterior segment ischemia. Treatment with 0.1/5 mL topical dexamethasone drops (16 times/day), cyclopentolate hydrochloride drops (3 times/day) and 20 mg oral fluocortolone (3 times/day) was initiated. After 1 week of treatment, corneal edema regressed and the anterior chamber was clean. Topical and systemic steroid treatment was gradually discontinued. At postoperative 1 month, the patient was orthophoric and there were no pathologic findings besides the irregular pupil. Anterior segment ischemia is one of the most serious complications of strabismus surgery. Despite the fact that in most cases the only remaining sequela is an irregular pupil, serious circulation deficits could lead to phthisis bulbi. The clinical features of anterior segment ischemia should be well recognized, and especially in high-risk cases, preventive measures should be taken. PMID:28182149

  13. Generalized pixel profiling and comparative segmentation with application to arteriovenous malformation segmentation.

    PubMed

    Babin, D; Pižurica, A; Bellens, R; De Bock, J; Shang, Y; Goossens, B; Vansteenkiste, E; Philips, W

    2012-07-01

    Extraction of structural and geometric information from 3-D images of blood vessels is a well known and widely addressed segmentation problem. The segmentation of cerebral blood vessels is of great importance in diagnostic and clinical applications, with a special application in diagnostics and surgery on arteriovenous malformations (AVM). However, the techniques addressing the problem of the AVM inner structure segmentation are rare. In this work we present a novel method of pixel profiling with the application to segmentation of the 3-D angiography AVM images. Our algorithm stands out in situations with low resolution images and high variability of pixel intensity. Another advantage of our method is that the parameters are set automatically, which yields little manual user intervention. The results on phantoms and real data demonstrate its effectiveness and potentials for fine delineation of AVM structure. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. Sensitivity analysis for future space missions with segmented telescopes for high-contrast imaging

    NASA Astrophysics Data System (ADS)

    Leboulleux, Lucie; Pueyo, Laurent; Sauvage, Jean-François; Mazoyer, Johan; Soummer, Remi; Fusco, Thierry; Sivaramakrishnan, Anand

    2018-01-01

    The detection and analysis of biomarkers on Earth-like planets using direct imaging will require both high-contrast imaging and spectroscopy at very close angular separation (a 10^10 star-to-planet flux ratio at separations of a few 0.1"). This goal can only be achieved with large telescopes in space to overcome atmospheric turbulence, often combined with a coronagraphic instrument with wavefront control. Large segmented space telescopes such as those studied for the LUVOIR mission will generate segment-level instabilities and cophasing errors in addition to local mirror surface errors and other aberrations of the overall optical system. These effects contribute directly to the degradation of the final image quality and contrast. We present an analytical model that produces coronagraphic images of a segmented-pupil telescope in the presence of segment phasing aberrations expressed as Zernike polynomials. This model relies on a pair-based projection of the segmented pupil and provides results that match an end-to-end simulation with an rms error on the final contrast of ~3%. The analytical model can be applied to both static and dynamic modes, in either monochromatic or broadband light. It removes the need for the end-to-end Monte-Carlo simulations that would otherwise be required to build a rigorous error budget, by enabling quasi-instantaneous analytical evaluations. The ability to invert the analytical model directly provides constraints and tolerances on all segment-level phasing and aberrations.

  15. Model-based segmentation of the facial nerve and chorda tympani in pediatric CT scans

    NASA Astrophysics Data System (ADS)

    Reda, Fitsum A.; Noble, Jack H.; Rivas, Alejandro; Labadie, Robert F.; Dawant, Benoit M.

    2011-03-01

    In image-guided cochlear implant surgery an electrode array is implanted in the cochlea to treat hearing loss. Access to the cochlea is achieved by drilling from the outer skull to the cochlea through the facial recess, a region bounded by the facial nerve and the chorda tympani. To exploit existing methods for computing automatically safe drilling trajectories, the facial nerve and chorda tympani need to be segmented. The effectiveness of traditional segmentation approaches to achieve this is severely limited because the facial nerve and chorda are small structures (~1 mm and ~0.3 mm in diameter, respectively) and exhibit poor image contrast. We have recently proposed a technique to achieve this task in adult patients, which relies on statistical models of the structures. These models contain intensity and shape information along the central axes of both structures. In this work we use the same method to segment pediatric scans. We show that substantial differences exist between the anatomy of children and the anatomy of adults, which lead to poor segmentation results when an adult model is used to segment a pediatric volume. We have built a new model for pediatric cases and we have applied it to ten scans. A leave-one-out validation experiment was conducted in which manually segmented structures were compared to automatically segmented structures. The maximum segmentation error was 1 mm. This result indicates that accurate segmentation of the facial nerve and chorda in pediatric scans is achievable, thus suggesting that safe drilling trajectories can also be computed automatically.

  16. STEM Employment in the New Economy: A Labor Market Segmentation Approach

    ERIC Educational Resources Information Center

    Torres-Olave, Blanca M.

    2013-01-01

    The present study examined the extent to which the U.S. STEM labor market is stratified in terms of quality of employment. Through a series of cluster analyses and Chi-square tests on data drawn from the 2008 Survey of Income Program Participation (SIPP), the study found evidence of segmentation in the highly-skilled STEM and non-STEM samples,…

  17. Support for context effects on segmentation and segments depends on the context.

    PubMed

    Heffner, Christopher C; Newman, Rochelle S; Idsardi, William J

    2017-04-01

    Listeners must adapt to differences in speech rate across talkers and situations. Speech rate adaptation effects are strong for adjacent syllables (i.e., proximal syllables). For studies that have assessed adaptation effects on speech rate information more than one syllable removed from a point of ambiguity in speech (i.e., distal syllables), the difference in strength between different types of ambiguity is stark. Studies of word segmentation have shown large shifts in perception as a result of distal rate manipulations, while studies of segmental perception have shown only weak, or even nonexistent, effects. However, no study has standardized methods and materials to study context effects for both types of ambiguity simultaneously. Here, a set of sentences was created that differed as minimally as possible except for whether the sentences were ambiguous to the voicing of a consonant or ambiguous to the location of a word boundary. The sentences were then rate-modified to slow down the distal context speech rate to various extents, dependent on three different definitions of distal context that were adapted from previous experiments, along with a manipulation of proximal context to assess whether proximal effects were comparable across ambiguity types. The results indicate that the definition of distal influenced the extent of distal rate effects strongly for both segments and segmentation. They also establish the presence of distal rate effects on word-final segments for the first time. These results were replicated, with some caveats regarding the perception of individual segments, in an Internet-based sample recruited from Mechanical Turk.

  18. Optic disc segmentation: level set methods and blood vessels inpainting

    NASA Astrophysics Data System (ADS)

    Almazroa, A.; Sun, Weiwei; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan

    2017-03-01

    Segmenting the optic disc (OD) is an important and essential step in creating a frame of reference for diagnosing optic nerve head (ONH) pathology such as glaucoma. Therefore, a reliable OD segmentation technique is necessary for automatic screening of ONH abnormalities. The main contribution of this paper is in presenting a novel OD segmentation algorithm based on applying a level set method on a localized OD image. To prevent the blood vessels from interfering with the level set process, an inpainting technique is applied. The algorithm is evaluated using a new retinal fundus image dataset called RIGA (Retinal Images for Glaucoma Analysis). In the case of low quality images, a double level set is applied in which the first level set is considered to be a localization for the OD. Five hundred and fifty images are used to test the algorithm accuracy as well as its agreement with manual markings by six ophthalmologists. The accuracy of the algorithm in marking the optic disc area and centroid is 83.9%, and the best agreement is observed between the results of the algorithm and manual markings in 379 images.

  19. Segmentation of hospital markets: where do HMO enrollees get care?

    PubMed

    Escarce, J J; Shea, J A; Chen, W

    1997-01-01

    Commercially insured and Medicare patients who are not in health maintenance organizations (HMOs) tend to use different hospitals than HMO patients use. This phenomenon, called market segmentation, raises important questions about how hospitals that treat many HMO patients differ from those that treat few HMO patients, especially with regard to quality of care. This study of patients undergoing coronary artery bypass graft surgery found no evidence that HMOs in southeast Florida systematically channel their patients to high-volume or low-mortality hospitals. These findings are consistent with other evidence that in many areas of the country, incentives for managed care plans to reduce costs may outweigh incentives to improve quality.

  20. Improving Spleen Volume Estimation via Computer Assisted Segmentation on Clinically Acquired CT Scans

    PubMed Central

    Xu, Zhoubing; Gertz, Adam L.; Burke, Ryan P.; Bansal, Neil; Kang, Hakmook; Landman, Bennett A.; Abramson, Richard G.

    2016-01-01

    OBJECTIVES Multi-atlas fusion is a promising approach for computer-assisted segmentation of anatomical structures. The purpose of this study was to evaluate the accuracy and time efficiency of multi-atlas segmentation for estimating spleen volumes on clinically-acquired CT scans. MATERIALS AND METHODS Under IRB approval, we obtained 294 deidentified (HIPAA-compliant) abdominal CT scans on 78 subjects from a recent clinical trial. We compared five pipelines for obtaining splenic volumes: Pipeline 1–manual segmentation of all scans, Pipeline 2–automated segmentation of all scans, Pipeline 3–automated segmentation of all scans with manual segmentation for outliers on a rudimentary visual quality check, Pipelines 4 and 5–volumes derived from a unidimensional measurement of craniocaudal spleen length and three-dimensional splenic index measurements, respectively. Using Pipeline 1 results as ground truth, the accuracy of Pipelines 2–5 (Dice similarity coefficient [DSC], Pearson correlation, R-squared, and percent and absolute deviation of volume from ground truth) were compared for point estimates of splenic volume and for change in splenic volume over time. Time cost was also compared for Pipelines 1–5. RESULTS Pipeline 3 was dominant in terms of both accuracy and time cost. With a Pearson correlation coefficient of 0.99, an average absolute volume deviation of 23.7 cm3, and 1 minute per scan, Pipeline 3 yielded the best results. The second-best approach was Pipeline 5, with a Pearson correlation coefficient of 0.98, an average absolute deviation of 46.92 cm3, and 1 minute 30 seconds per scan. Manual segmentation (Pipeline 1) required 11 minutes per scan. CONCLUSION A computer-automated segmentation approach with manual correction of outliers generated accurate splenic volumes with reasonable time efficiency. PMID:27519156
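    The accuracy metrics named above are simple to compute. The sketch below is illustrative only, using made-up toy masks rather than the study's data, and shows the Dice similarity coefficient and the signed percent deviation of an estimated volume from ground truth.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def percent_volume_deviation(est_vol, true_vol):
    """Signed percent deviation of an estimated volume from ground truth."""
    return 100.0 * (est_vol - true_vol) / true_vol

# Toy example: two overlapping organ masks on a small grid.
truth = np.zeros((10, 10), dtype=bool); truth[2:8, 2:8] = True  # 36 voxels
auto = np.zeros((10, 10), dtype=bool);  auto[3:8, 2:8] = True   # 30 voxels
d = dice(truth, auto)                                  # 2*30/(36+30) = 10/11
dev = percent_volume_deviation(float(auto.sum()), float(truth.sum()))
```

    In a volumetric study, voxel counts would be multiplied by the voxel volume before comparing, but the overlap and deviation formulas are unchanged.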

  1. Standardisation of DNA quantitation by image analysis: quality control of instrumentation.

    PubMed

    Puech, M; Giroud, F

    1999-05-01

    DNA image analysis is frequently performed in clinical practice as a prognostic tool and to improve diagnosis. The precision of prognosis and diagnosis depends on the accuracy of analysis and particularly on the quality of image analysis systems. It has been reported that image analysis systems used for DNA quantification differ widely in their characteristics (Thunissen et al.: Cytometry 27: 21-25, 1997). This induces inter-laboratory variations when the same sample is analysed in different laboratories. In microscopic image analysis, the principal instrumentation errors arise from the optical and electronic parts of systems. They bring about problems of instability, non-linearity, and shading and glare phenomena. The aim of this study is to establish tools and standardised quality control procedures for microscopic image analysis systems. Specific reference standard slides have been developed to control instability, non-linearity, shading and glare phenomena and segmentation efficiency. Some systems have been controlled with these tools and these quality control procedures. Interpretation criteria and accuracy limits of these quality control procedures are proposed according to the conclusions of a European project called PRESS project (Prototype Reference Standard Slide). Beyond these limits, tested image analysis systems are not qualified to realise precise DNA analysis. The different procedures presented in this work determine if an image analysis system is qualified to deliver sufficiently precise DNA measurements for cancer case analysis. If the controlled systems are beyond the defined limits, some recommendations are given to find a solution to the problem.

  2. Structure-properties relationships of novel poly(carbonate-co-amide) segmented copolymers with polyamide-6 as hard segments and polycarbonate as soft segments

    NASA Astrophysics Data System (ADS)

    Yang, Yunyun; Kong, Weibo; Yuan, Ye; Zhou, Changlin; Cai, Xufu

    2018-04-01

    Novel poly(carbonate-co-amide) (PCA) block copolymers are prepared with polycarbonate diol (PCD) as soft segments, polyamide-6 (PA6) as hard segments, and 4,4'-diphenylmethane diisocyanate (MDI) as a coupling agent through reactive processing. The reactive processing strategy is eco-friendly and resolves the incompatibility between the polyamide and PCD segments during preparation. The chemical structure, crystalline properties, thermal properties, mechanical properties and water resistance were extensively studied by Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), differential scanning calorimetry (DSC), thermal gravimetric analysis (TGA), dynamic mechanical analysis (DMA), tensile testing, water contact angle and water absorption measurements, respectively. The as-prepared PCAs exhibit obvious microphase separation between the crystalline hard PA6 phase and the amorphous PCD soft segments. Meanwhile, the PCAs showed outstanding mechanical properties, with a maximum tensile strength of 46.3 MPa and an elongation at break of 909%. The contact angle and water absorption results indicate that the PCAs demonstrate outstanding water resistance even though they possess hydrophilic surfaces. The TGA measurements prove that the thermal stability of PCA can satisfy the requirements of repeated processing without decomposition.

  3. Semi-automatic brain tumor segmentation by constrained MRFs using structural trajectories.

    PubMed

    Zhao, Liang; Wu, Wei; Corso, Jason J

    2013-01-01

    Quantifying volume and growth of a brain tumor is a primary prognostic measure and hence has received much attention in the medical imaging community. Most methods have sought a fully automatic segmentation, but the variability in shape and appearance of brain tumors has limited their success and further adoption in the clinic. In reaction, we present a semi-automatic brain tumor segmentation framework for multi-channel magnetic resonance (MR) images. This framework does not require prior model construction and only requires manual labels on one automatically selected slice. All other slices are labeled by an iterative multi-label Markov random field optimization with hard constraints. Structural trajectories (the medical image analog of optical flow) and 3D image over-segmentation are used to capture pixel correspondences between consecutive slices for pixel labeling. We show robustness and effectiveness through an evaluation on the 2012 MICCAI BRATS Challenge Dataset; our results indicate superior performance to baselines and demonstrate the utility of the constrained MRF formulation.

  4. A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.

    PubMed

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle

    2016-03-01

    On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using a MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE) measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (<1 ms) with a satisfying accuracy (Dice=0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and a combination of different
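    The TRE used here is simply the Euclidean distance between ROI centroids. A minimal sketch, with hypothetical masks rather than the study's images:

```python
import numpy as np

def target_registration_error(mask_a, mask_b):
    """TRE as the Euclidean distance between the centroids of two ROI masks."""
    ca = np.array(np.nonzero(mask_a)).mean(axis=1)
    cb = np.array(np.nonzero(mask_b)).mean(axis=1)
    return float(np.linalg.norm(ca - cb))

# Toy masks: the automatic contour is shifted 3 pixels along one axis.
manual = np.zeros((20, 20), dtype=bool); manual[5:10, 5:10] = True
auto = np.zeros((20, 20), dtype=bool);   auto[5:10, 8:13] = True
tre = target_registration_error(manual, auto)  # 3.0 pixels
```

    In practice the pixel distance would be scaled by the image spacing to report TRE in millimetres.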

  7. An improved K-means clustering method for cDNA microarray image segmentation.

    PubMed

    Wang, T N; Li, T J; Shao, G F; Wu, S X

    2015-07-14

    Microarray technology is a powerful tool for human genetic research and other biomedical applications. Numerous improvements to the standard K-means algorithm have been proposed to carry out the image segmentation step. However, most of the previous studies classify the image into two clusters. In this paper, we propose a novel K-means algorithm that first classifies the image into three clusters, and then designates one of the three clusters as the background region and the other two as the foreground region. The proposed method was evaluated on six different data sets. The analyses of accuracy, efficiency, expression values, special gene spots, and noise images demonstrate the effectiveness of our method in improving the segmentation quality.
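    The three-cluster idea can be sketched with a plain 1-D Lloyd's k-means on pixel intensities. This is a simplification for illustration: the paper's actual algorithm and initialisation are not reproduced here, and the deterministic evenly-spaced seeding is an assumption of this sketch.

```python
import numpy as np

def kmeans_1d(values, k=3, iters=50):
    """Lloyd's k-means on scalar intensities, with centres seeded
    evenly across the intensity range for deterministic behaviour."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        # Assign each value to its nearest centre, then update centres.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# Synthetic spot intensities: dark background, mid halo, bright spot centre.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(10, 1, 500),    # background pixels
                      rng.normal(80, 1, 100),    # halo pixels
                      rng.normal(200, 1, 50)])   # spot pixels
labels, centers = kmeans_1d(img)
# Darkest cluster = background; the other two together = foreground.
foreground = labels != np.argmin(centers)
```

    Splitting into three clusters lets the mid-intensity halo join the spot rather than being absorbed into the background, which is the motivation the abstract describes.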

  8. Morphological Parsing and the Use of Segmentation Cues in Reading Finnish Compounds

    ERIC Educational Resources Information Center

    Bertram, Raymond; Pollatsek, Alexander; Hyona, Jukka

    2004-01-01

    This eye movement study investigated the use of two types of segmentation cues in processing long Finnish compounds. The cues were related to the vowel quality properties of the constituents and properties of the consonant starting the second constituent. In Finnish, front vowels never appear with back vowels in a lexeme, but different quality…

  9. Intelligent multi-spectral IR image segmentation

    NASA Astrophysics Data System (ADS)

    Lu, Thomas; Luong, Andrew; Heim, Stephen; Patel, Maharshi; Chen, Kang; Chao, Tien-Hsin; Chow, Edward; Torres, Gilbert

    2017-05-01

    This article presents a neural network based multi-spectral image segmentation method. A neural network is trained on the selected features of both the objects and background in the longwave (LW) infrared (IR) images. Multiple iterations of training are performed until the accuracy of the segmentation reaches a satisfactory level. The segmentation boundary of the LW image is used to segment the midwave (MW) and shortwave (SW) IR images. A second neural network detects the local discontinuities and refines the accuracy of the local boundaries. This article compares the neural network based segmentation method to the Wavelet-threshold and Grab-Cut methods. Test results have shown increased accuracy and robustness of this segmentation scheme for multi-spectral IR images.

  10. Limited view angle iterative CT reconstruction

    NASA Astrophysics Data System (ADS)

    Kisner, Sherman J.; Haneda, Eri; Bouman, Charles A.; Skatter, Sondre; Kourinny, Mikhail; Bedford, Simon

    2012-03-01

    Computed Tomography (CT) is widely used for transportation security to screen baggage for potential threats. For example, many airports use X-ray CT to scan the checked baggage of airline passengers. The resulting reconstructions are then used for both automated and human detection of threats. Recently, there has been growing interest in the use of model-based reconstruction techniques in CT security systems. Model-based reconstruction offers a number of potential advantages over more traditional direct reconstruction such as filtered backprojection (FBP). Perhaps one of the greatest advantages is the potential to reduce reconstruction artifacts when non-traditional scan geometries are used. For example, FBP tends to produce very severe streaking artifacts when applied to limited-view data, which can adversely affect subsequent processing such as segmentation and detection. In this paper, we investigate the use of model-based reconstruction in conjunction with limited-view scanning architectures, and we illustrate the value of these methods using transportation security examples. The advantage of limited-view architectures is that they have the potential to reduce the cost and complexity of a scanning system; their disadvantage is that limited-view data can result in structured artifacts in reconstructed images. Our method of reconstruction depends on the formulation of both a forward projection model for the system and a prior model that accounts for the contents and densities of typical baggage. In order to evaluate our new method, we use realistic models of baggage with randomly inserted simple simulated objects. Using this approach, we show that model-based reconstruction can substantially reduce artifacts and improve important metrics of image quality such as the accuracy of the estimated CT numbers.
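    In its simplest quadratic form, model-based reconstruction of this kind reduces to regularized least squares. The toy sketch below makes two illustrative assumptions: a tiny hand-built matrix stands in for the forward projection model, and a plain quadratic penalty stands in for the baggage prior. It recovers a data-consistent object from fewer measurements than unknowns.

```python
import numpy as np

def reconstruct(A, b, lam=1e-3, iters=500):
    """Minimise ||Ax - b||^2 + lam*||x||^2 by gradient descent.

    A plays the role of the forward projection model; the quadratic
    penalty is a stand-in for a structured prior over the object.
    """
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)  # safe step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x -= step * (A.T @ (A @ x - b) + lam * x)
    return x

# Limited-view toy problem: 4 unknowns observed through only 3 'projections'.
A = np.array([[1., 1., 0., 0.],
              [0., 1., 1., 0.],
              [0., 0., 1., 1.]])
x_true = np.array([1.0, 0.0, 2.0, 0.0])
b = A @ x_true
x_hat = reconstruct(A, b)
```

    Because the system is underdetermined, the prior selects one of the many data-consistent reconstructions; the paper's baggage prior plays the same role with far more structure than this minimum-norm penalty.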

  11. Summary of available state ambient stream-water-quality data, 1990-98, and limitations for national assessment

    USGS Publications Warehouse

    Pope, Larry M.; Rosner, Stacy M.; Hoffman, Darren C.; Ziegler, Andrew C.

    2004-01-01

    The investigation described in this report summarized data from State ambient stream-water-quality monitoring sites for 10 water-quality constituents or measurements (suspended solids, fecal coliform bacteria, ammonia as nitrogen, nitrite plus nitrate as nitrogen, total phosphorus, total arsenic, dissolved solids, chloride, sulfate, and pH). These 10 water-quality constituents or measurements commonly are listed nationally as major contributors to degradation of surface water. Water-quality data were limited to that electronically accessible from the U.S. Environmental Protection Agency Storage and Retrieval System (STORET), the U.S. Geological Survey National Water Information System (NWIS), or individual State databases. Forty-two States had ambient stream-water-quality data electronically accessible for some or all of the constituents or measurements summarized during this investigation. Ambient in this report refers to data collected for the purpose of evaluating stream ecosystems in relation to human health, environmental and ecological conditions, and designated uses. Generally, data were from monitoring sites assessed for State 305(b) reports. Comparisons of monitoring data among States are problematic for several reasons, including differences in the basic spatial design of monitoring networks; water-quality constituents for which samples are analyzed; water-quality criteria to which constituent concentrations are compared; quantity and comprehensiveness of water-quality data; sample collection, processing, and handling; analytical methods; temporal variability in sample collection; and quality-assurance practices. Large differences among the States in number of monitoring sites precluded a general assumption that statewide water-quality conditions were represented by data from these sites. Furthermore, data from individual monitoring sites may not represent water-quality conditions at the sites because sampling conditions and protocols are unknown. 

  12. Quality expectations and tolerance limits of trial master files (TMF) – Developing a risk-based approach for quality assessments of TMFs

    PubMed Central

    Hecht, Arthur; Busch-Heidger, Barbara; Gertzen, Heiner; Pfister, Heike; Ruhfus, Birgit; Sanden, Per-Holger; Schmidt, Gabriele B.

    2015-01-01

    This article addresses the question of when a trial master file (TMF) can be considered sufficiently accurate and complete: What attributes does the TMF need to have so that a clinical trial can be adequately reconstructed from documented data and procedures? Clinical trial sponsors face significant challenges in assembling the TMF, especially when dealing with large, international, multicenter studies; despite all newly introduced archiving techniques, it is becoming increasingly difficult to ensure that the TMF is complete. This is directly reflected in the number of inspection findings reported and published by the EMA in 2014. Based on quality risk management principles in clinical trials, the authors defined the quality expectations for the different document types in a TMF and furthermore defined tolerance limits for missing documents. This publication provides guidance on which types of documents and processes are most important and, in consequence, indicates the documents and processes on which trial team staff should focus in order to achieve a high-quality TMF. The members of this working group belong to the CQAG Group (Clinical Quality Assurance Germany) and are QA (quality assurance) experts (auditors or compliance functions) with long-term experience in the practical handling of TMFs. PMID:26693218

  13. Active hexagonally segmented mirror to investigate new optical phasing technologies for segmented telescopes.

    PubMed

    Gonté, Frédéric; Dupuy, Christophe; Luong, Bruno; Frank, Christoph; Brast, Roland; Sedghi, Baback

    2009-11-10

    The primary mirror of the future European Extremely Large Telescope will be equipped with 984 hexagonal segments. The alignment of the segments in piston, tip, and tilt within a few nanometers requires an optical phasing sensor. A test bench has been designed to study four different optical phasing sensor technologies. The core element of the test bench is an active segmented mirror composed of 61 flat hexagonal segments with a size of 17 mm side to side. Each of them can be controlled in piston, tip, and tilt by three piezoactuators with a precision better than 1 nm. The context of this development, the requirements, the design, and the integration of this system are explained. The first results on the final precision obtained in closed-loop control are also presented.

  14. Efficient threshold for volumetric segmentation

    NASA Astrophysics Data System (ADS)

    Burdescu, Dumitru D.; Brezovan, Marius; Stanescu, Liana; Stoica Spahiu, Cosmin; Ebanca, Daniel

    2015-07-01

    Image segmentation plays a crucial role in the effective understanding of digital images. However, research into a general-purpose segmentation algorithm that suits a variety of applications is still very much active. Among the many approaches to image segmentation, the graph-based approach is gaining popularity primarily due to its ability to reflect global image properties. Volumetric image segmentation can simply result in an image partition composed of relevant regions, but the most fundamental challenge for a segmentation algorithm is to precisely define the volumetric extent of an object, which may be represented by the union of multiple regions. The aim of this paper is to present a new method, with an efficient threshold, to detect visual objects in color volumetric images. We present a unified framework for volumetric image segmentation and contour extraction that uses a virtual tree-hexagonal structure defined on the set of image voxels. The advantage of using a virtual tree-hexagonal network superposed over the initial image voxels is that it reduces the execution time and the memory space used, without losing the initial resolution of the image.

  15. Segmentation of prostate biopsy needles in transrectal ultrasound images

    NASA Astrophysics Data System (ADS)

    Krefting, Dagmar; Haupt, Barbara; Tolxdorff, Thomas; Kempkensteffen, Carsten; Miller, Kurt

    2007-03-01

    Prostate cancer is the most common cancer in men. Tissue extraction at different locations (biopsy) is the gold standard for diagnosis of prostate cancer. These biopsies are commonly guided by transrectal ultrasound imaging (TRUS). The exact location of the extracted tissue within the gland is desired for more specific diagnosis and better therapy planning. While the orientation and position of the needle within clinical TRUS images are constrained, the apparent length and visibility of the needle vary strongly. Marker lines are present, and tissue inhomogeneities and deflection artefacts may appear. Simple intensity-, gradient- or edge-detection-based segmentation methods fail. Therefore, a multivariate statistical classifier is implemented. The independent feature model is built by supervised learning using a set of manually segmented needles. The feature space is spanned by common binary object features such as size and eccentricity, as well as imaging-system-dependent features like distance and orientation relative to the marker line. Object extraction is done by multi-step binarization of the region of interest. The ROI is automatically determined at the beginning of the segmentation, and marker lines are removed from the images. The segmentation itself is realized by scale-invariant classification using maximum likelihood estimation with the Mahalanobis distance as discriminator. The technique presented here was successfully applied in 94% of 1835 TRUS images from 30 tissue extractions. It provides a robust method for biopsy needle localization in clinical prostate biopsy TRUS images.
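
    The classification step described above can be sketched as follows. The feature names, their class statistics, and the decision threshold are illustrative stand-ins, not values from the paper; only the use of the Mahalanobis distance as the discriminator follows the abstract:

```python
import numpy as np

# Toy "needle" class statistics in a 3D feature space
# (size, eccentricity, distance to marker line), as if learned from
# manually segmented training examples -- illustrative values only.
mean = np.array([120.0, 0.95, 4.0])
cov = np.diag([400.0, 0.001, 4.0])
cov_inv = np.linalg.inv(cov)

def mahalanobis(x):
    # Distance of a candidate object's feature vector from the needle class,
    # scaled by the class covariance.
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def is_needle(x, threshold=3.0):
    # Accept a candidate if it lies within `threshold` Mahalanobis units
    # of the needle class; the threshold value is a hypothetical choice.
    return mahalanobis(x) < threshold
```

    A candidate object whose feature vector lies close to the learned needle class is accepted; in the paper this decision is combined with the multi-step binarization and marker-line removal described above.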

  16. Unsupervised segmentation of MRI knees using image partition forests

    NASA Astrophysics Data System (ADS)

    Marčan, Marija; Voiculescu, Irina

    2016-03-01

    Nowadays many people are affected by arthritis, a condition of the joints with limited prevention measures but various options for treatment, the most radical of which is surgery. Successful surgery can make use of careful analysis of patient-based models generated from medical images, usually by manual segmentation. In this work we show how to automate the segmentation of a crucial and complex joint -- the knee. To achieve this goal we rely on our novel way of representing a 3D voxel volume as a hierarchical structure of partitions, which we have named the Image Partition Forest (IPF). The IPF contains several partition layers of increasing coarseness, with partitions nested across layers in the form of adjacency graphs. On the basis of a set of properties (size, mean intensity, coordinates) of each node in the IPF, we classify nodes into different features. Values indicating whether or not any particular node belongs to the femur or tibia are assigned through node filtering and node-based region growing. So far we have evaluated our method on 15 MRI knee images. Our unsupervised segmentation, compared against a hand-segmented gold standard, has achieved an average Dice similarity coefficient of 0.95 for the femur and 0.93 for the tibia, and an average symmetric surface distance of 0.98 mm for the femur and 0.73 mm for the tibia. The paper also discusses ways to introduce stricter morphological and spatial conditioning in the bone labelling process.

  17. Bacterial communities in different locations, seasons and segments of a dairy wastewater treatment system consisting of six segments.

    PubMed

    Hirota, Kikue; Yokota, Yuji; Sekimura, Toru; Uchiumi, Hiroshi; Guo, Yong; Ohta, Hiroyuki; Yumoto, Isao

    2016-08-01

    A dairy wastewater treatment system composed of a 1st segment (no aeration) equipped with a facility for the destruction of milk fat particles, four successive aerobic treatment segments with activated sludge, and a final sludge settlement segment was developed. The activated sludge is circulated through the six segments by settling sediments (activated sludge) in the 6th segment and sending the sediments back to the 1st and 2nd segments. Microbiota was examined using samples from the non-aerated 1st and aerated 2nd segments obtained from two farms using the same system in summer or winter. Principal component analysis showed that the change in microbiota from the 1st to 2nd segments concomitant with effective wastewater treatment is affected by the concentrations of activated sludge and organic matter (biological oxygen demand [BOD]) and the dissolved oxygen (DO) content. Microbiota from five segments (the 1st and the four successive aerobic segments) in one location was also examined. Although the activated sludge circulates throughout all the segments, microbiota fluctuation was observed. The observed successive changes in microbiota reflected the changes in the concentrations of organic matter and other physicochemical conditions (such as DO), suggesting that the microbiota changes flexibly depending on the environmental conditions in the segments. The genera Dechloromonas, Zoogloea and Leptothrix were frequently observed in this wastewater treatment system throughout the analyses of microbiota in this study. Copyright © 2016. Published by Elsevier B.V.

  18. Space Segment (SS) and the Navigation User Segment (US) Interface Control Document (ICD)

    DOT National Transportation Integrated Search

    1993-10-10

    This Interface Control Document (ICD) defines the requirements related to the interface between the Space Segment (SS) of the Global Positioning System (GPS) and the Navigation Users Segment of the GPS. 2880k, 154p.

  19. Analysis of a kinetic multi-segment foot model part II: kinetics and clinical implications.

    PubMed

    Bruening, Dustin A; Cooney, Kevin M; Buczek, Frank L

    2012-04-01

    Kinematic multi-segment foot models have seen increased use in clinical and research settings, but the addition of kinetics has been limited and hampered by measurement limitations and modeling assumptions. In this second of two companion papers, we complete the presentation and analysis of a three-segment kinetic foot model by incorporating kinetic parameters and calculating joint moments and powers. The model was tested on 17 pediatric subjects (ages 7-18 years) during normal gait. Ground reaction forces were measured using two adjacent force platforms, requiring targeted walking and the creation of two sub-models to analyze the ankle, midtarsal, and 1st metatarsophalangeal joints. Targeted walking resulted in only minimal kinematic and kinetic differences compared with walking at self-selected speeds. Joint moments and powers were calculated and ensemble averages are presented as a normative database for comparison purposes. Ankle joint powers are shown to be overestimated when using a traditional single-segment foot model, as substantial angular velocities are attributed to the midtarsal joint. Power transfer is apparent between the 1st metatarsophalangeal and midtarsal joints in terminal stance/pre-swing. While the measurement approach presented here is limited to clinical populations with only minimal impairments, some elements of the model can also be incorporated into routine clinical gait analysis. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. Scaling Relations for the Thermal Structure of Segmented Oceanic Transform Faults

    NASA Astrophysics Data System (ADS)

    Wolfson-Schwehr, M.; Boettcher, M. S.; Behn, M. D.

    2015-12-01

    Mid-ocean ridge-transform faults (RTFs) are a natural laboratory for studying strike-slip earthquake behavior due to their relatively simple geometry, well-constrained slip rates, and quasi-periodic seismic cycles. However, deficiencies in our understanding of the limited size of the largest RTF earthquakes are due, in part, to not considering the effect of short intra-transform spreading centers (ITSCs) on fault thermal structure. We use COMSOL Multiphysics to run a series of 3D finite element simulations of segmented RTFs with a visco-plastic rheology. The models test a range of RTF segment lengths (L = 10-150 km), ITSC offset lengths (O = 1-30 km), and spreading rates (V = 2-14 cm/yr). The lithosphere and upper mantle are approximated as steady-state, incompressible flow. Coulomb failure incorporates brittle processes in the lithosphere, and a temperature-dependent flow law for dislocation creep of olivine activates ductile deformation in the mantle. ITSC offsets as small as 2 km affect the thermal structure underlying many segmented RTFs, reducing the area above the 600 °C isotherm, A600, and thus the size of the largest expected earthquakes, Mc. We develop a scaling relation for the critical ITSC offset length, OC, which significantly reduces the thermal effect of adjacent fault segments of length L1 and L2. OC is defined as the ITSC offset that results in an area loss ratio of R = (Aunbroken - Acombined)/(Aunbroken - Adecoupled) = 63%, where Aunbroken = C600(L1 + L2)^1.5 V^-0.6 is A600 for an RTF of length L1 + L2; Adecoupled = C600(L1^1.5 + L2^1.5)V^-0.6 is the combined A600 of RTFs of lengths L1 and L2, respectively; and Acombined = Aunbroken exp(-O/OC) + Adecoupled(1 - exp(-O/OC)). C600 is a constant. We use OC and kinematic fault parameters (L1, L2, O, and V) to develop a scaling relation for the approximate seismogenic area, Aseg, for each segment of an RTF system composed of two fault segments. Finally, we estimate the size of Mc on a fault segment based on Aseg.
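
    The scaling relations above can be turned into a small numerical sketch. The constant C600 is a placeholder (its fitted value is not given in the abstract), and the units are only nominal:

```python
import math

def a600(L, V, c600=1.0):
    # A600 = C600 * L^1.5 * V^-0.6: fault area above the 600 C isotherm
    # for a transform segment of length L at spreading rate V.
    return c600 * L**1.5 * V**-0.6

def combined_area(L1, L2, O, Oc, V, c600=1.0):
    # Acombined interpolates between the unbroken and fully decoupled
    # areas with an exponential weight in the ITSC offset O.
    a_unbroken = a600(L1 + L2, V, c600)
    a_decoupled = a600(L1, V, c600) + a600(L2, V, c600)
    w = math.exp(-O / Oc)
    return a_unbroken * w + a_decoupled * (1.0 - w)

def area_loss_ratio(L1, L2, O, Oc, V, c600=1.0):
    # R = (Aunbroken - Acombined) / (Aunbroken - Adecoupled)
    a_unbroken = a600(L1 + L2, V, c600)
    a_decoupled = a600(L1, V, c600) + a600(L2, V, c600)
    a_combined = combined_area(L1, L2, O, Oc, V, c600)
    return (a_unbroken - a_combined) / (a_unbroken - a_decoupled)
```

    Substituting the definitions shows the ratio simplifies to R = 1 - exp(-O/OC), so R = 63% exactly when the ITSC offset equals the critical offset OC, independent of the segment lengths and spreading rate.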

  1. Segmented ceramic liner for induction furnaces

    DOEpatents

    Gorin, Andrew H.; Holcombe, Cressie E.

    1994-01-01

    A non-fibrous ceramic liner for induction furnaces is provided by vertically stackable ring-shaped liner segments made of ceramic material in a light-weight cellular form. The liner segments can each be fabricated as a single unit or from a plurality of arcuate segments joined together by an interlocking mechanism. Also, the liner segments can be formed of a single ceramic material or can be constructed of multiple concentric layers with the layers being of different ceramic materials and/or cellular forms. Thermomechanically damaged liner segments are selectively replaceable in the furnace.

  2. Segmented ceramic liner for induction furnaces

    DOEpatents

    Gorin, A.H.; Holcombe, C.E.

    1994-07-26

    A non-fibrous ceramic liner for induction furnaces is provided by vertically stackable ring-shaped liner segments made of ceramic material in a light-weight cellular form. The liner segments can each be fabricated as a single unit or from a plurality of arcuate segments joined together by an interlocking mechanism. Also, the liner segments can be formed of a single ceramic material or can be constructed of multiple concentric layers with the layers being of different ceramic materials and/or cellular forms. Thermomechanically damaged liner segments are selectively replaceable in the furnace. 5 figs.

  3. Novel multimodality segmentation using level sets and Jensen-Rényi divergence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Markel, Daniel, E-mail: daniel.markel@mail.mcgill.ca; Zaidi, Habib; Geneva Neuroscience Center, Geneva University, CH-1205 Geneva

    2013-12-15

    Purpose: Positron emission tomography (PET) is playing an increasing role in radiotherapy treatment planning. However, despite progress, robust algorithms for PET and multimodal image segmentation are still lacking, especially if the algorithm were extended to image-guided and adaptive radiotherapy (IGART). This work presents a novel multimodality segmentation algorithm using the Jensen-Rényi divergence (JRD) to evolve the geometric level set contour. The algorithm offers improved noise tolerance which is particularly applicable to segmentation of regions found in PET and cone-beam computed tomography. Methods: A steepest gradient ascent optimization method is used in conjunction with the JRD and a level set active contour to iteratively evolve a contour to partition an image based on statistical divergence of the intensity histograms. The algorithm is evaluated using PET scans of pharyngolaryngeal squamous cell carcinoma with the corresponding histological reference. The multimodality extension of the algorithm is evaluated using 22 PET/CT scans of patients with lung carcinoma and a physical phantom scanned under varying image quality conditions. Results: The average concordance index (CI) of the JRD segmentation of the PET images was 0.56 with an average classification error of 65%. The segmentation of the lung carcinoma images had a maximum diameter relative error of 63%, 19.5%, and 14.8% when using CT, PET, and combined PET/CT images, respectively. The estimated maximal diameters of the gross tumor volume (GTV) showed a high correlation with the macroscopically determined maximal diameters, with an R^2 value of 0.85 and 0.88 using the PET and PET/CT images, respectively. Results from the physical phantom show that the JRD is more robust to image noise compared to mutual information and region growing. Conclusions: The JRD has shown improved noise tolerance compared to mutual information for the purpose of PET image segmentation. Presented is a flexible framework for multimodal image segmentation.

  4. Novel multimodality segmentation using level sets and Jensen-Rényi divergence.

    PubMed

    Markel, Daniel; Zaidi, Habib; El Naqa, Issam

    2013-12-01

    Positron emission tomography (PET) is playing an increasing role in radiotherapy treatment planning. However, despite progress, robust algorithms for PET and multimodal image segmentation are still lacking, especially if the algorithm were extended to image-guided and adaptive radiotherapy (IGART). This work presents a novel multimodality segmentation algorithm using the Jensen-Rényi divergence (JRD) to evolve the geometric level set contour. The algorithm offers improved noise tolerance which is particularly applicable to segmentation of regions found in PET and cone-beam computed tomography. A steepest gradient ascent optimization method is used in conjunction with the JRD and a level set active contour to iteratively evolve a contour to partition an image based on statistical divergence of the intensity histograms. The algorithm is evaluated using PET scans of pharyngolaryngeal squamous cell carcinoma with the corresponding histological reference. The multimodality extension of the algorithm is evaluated using 22 PET/CT scans of patients with lung carcinoma and a physical phantom scanned under varying image quality conditions. The average concordance index (CI) of the JRD segmentation of the PET images was 0.56 with an average classification error of 65%. The segmentation of the lung carcinoma images had a maximum diameter relative error of 63%, 19.5%, and 14.8% when using CT, PET, and combined PET/CT images, respectively. The estimated maximal diameters of the gross tumor volume (GTV) showed a high correlation with the macroscopically determined maximal diameters, with an R^2 value of 0.85 and 0.88 using the PET and PET/CT images, respectively. Results from the physical phantom show that the JRD is more robust to image noise compared to mutual information and region growing. The JRD has shown improved noise tolerance compared to mutual information for the purpose of PET image segmentation. Presented is a flexible framework for multimodal image segmentation.
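
    For two intensity histograms, the Jensen-Rényi divergence used by the two records above can be sketched as the Rényi entropy of the histogram mixture minus the mean of the individual entropies. The choice α = 0.5 and equal mixture weights are assumptions for illustration, not parameters reported in the abstracts:

```python
import numpy as np

def renyi_entropy(p, alpha=0.5):
    # Rényi entropy H_a(p) = log(sum p_i^a) / (1 - a) of a normalized histogram.
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def jensen_renyi(p, q, alpha=0.5, w=0.5):
    # JRD of two histograms: entropy of the weighted mixture minus the
    # weighted mean of the individual entropies. Non-negative for 0 < alpha < 1.
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    m = w * p + (1.0 - w) * q
    return renyi_entropy(m, alpha) - (w * renyi_entropy(p, alpha)
                                      + (1.0 - w) * renyi_entropy(q, alpha))
```

    The divergence is zero for identical histograms and grows as the foreground and background histograms separate, which is the quantity the contour evolution drives upward in the algorithm described above.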

  5. The successive projection algorithm as an initialization method for brain tumor segmentation using non-negative matrix factorization.

    PubMed

    Sauwen, Nicolas; Acou, Marjan; Bharath, Halandur N; Sima, Diana M; Veraart, Jelle; Maes, Frederik; Himmelreich, Uwe; Achten, Eric; Van Huffel, Sabine

    2017-01-01

    Non-negative matrix factorization (NMF) has become a widely used tool for additive parts-based analysis in a wide range of applications. As NMF is a non-convex problem, the quality of the solution will depend on the initialization of the factor matrices. In this study, the successive projection algorithm (SPA) is proposed as an initialization method for NMF. SPA builds on convex geometry and allocates endmembers based on successive orthogonal subspace projections of the input data. SPA is a fast and reproducible method, and it aligns well with the assumptions made in near-separable NMF analyses. SPA was applied to multi-parametric magnetic resonance imaging (MRI) datasets for brain tumor segmentation using different NMF algorithms. Comparison with common initialization methods shows that SPA achieves similar segmentation quality and it is competitive in terms of convergence rate. Whereas SPA was previously applied as a direct endmember extraction tool, we have shown improved segmentation results when using SPA as an initialization method, as it allows further enhancement of the sources during the NMF iterative procedure.
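
    The column-selection step at the heart of SPA can be sketched in a few lines of NumPy; this is a generic textbook version, not the authors' implementation:

```python
import numpy as np

def spa(X, r):
    # Successive Projection Algorithm: greedily pick r columns of X
    # (features x samples). At each step, take the column with the largest
    # residual norm, then project all columns onto the orthogonal complement
    # of the chosen one.
    R = X.astype(float).copy()
    idx = []
    for _ in range(r):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        idx.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(u, u @ R)  # remove the chosen direction
    return idx
```

    The selected columns can then seed an NMF factor matrix, e.g. W0 = X[:, spa(X, r)], before the iterative NMF updates refine the sources, as the study above proposes.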

  6. Combined magnitude and phase-based segmentation of the cerebral cortex in 7T MR images of the elderly.

    PubMed

    Doan, Nhat Trung; van Rooden, Sanneke; Versluis, Maarten J; Webb, Andrew G; van der Grond, Jeroen; van Buchem, Mark A; Reiber, Johan H C; Milles, Julien

    2012-07-01

    To propose a new method that integrates both magnitude and phase information obtained from magnetic resonance (MR) T2*-weighted scans for cerebral cortex segmentation of the elderly. This method makes use of K-means clustering on magnitude and phase images to compute an initial segmentation, which is further refined by means of transformation with reconstruction criteria. The method was evaluated against the manual segmentation of 7T in vivo MR data of 20 elderly subjects (age = 67.7 ± 10.9). The added value of combining magnitude and phase was also evaluated by comparing the performance of the proposed method with the results obtained when limiting the available data to either magnitude or phase. The proposed method shows good overlap agreement, as quantified by the Dice Index (0.79 ± 0.04), limited bias (average relative volume difference = 2.94%), and reasonable volumetric correlation (R = 0.555, p = 0.011). Using the combined magnitude and phase information significantly improves the segmentation accuracy compared with using either magnitude or phase. This study suggests that the proposed method is an accurate and robust approach for cerebral cortex segmentation in datasets presenting low gray/white matter contrast. Copyright © 2012 Wiley Periodicals, Inc.
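
    The initial clustering step can be illustrated with a minimal K-means on per-voxel [magnitude, phase] feature vectors. The deterministic farthest-point initialization, the cluster count, and the toy data are our own simplifications; the paper's refinement by transformation with reconstruction criteria is not reproduced here:

```python
import numpy as np

def init_centers(X, k):
    # Deterministic farthest-point initialization: start from the first
    # sample, then repeatedly add the sample farthest from all chosen centers.
    centers = [X[0]]
    for _ in range(k - 1):
        d = ((X[:, None, :] - np.array(centers)[None]) ** 2).sum(-1).min(axis=1)
        centers.append(X[int(np.argmax(d))])
    return np.array(centers)

def kmeans(X, k, iters=20):
    # Lloyd iterations on feature vectors such as [magnitude, phase] per voxel.
    centers = init_centers(X, k)
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers
```

    In the two-dimensional magnitude/phase feature space, voxels with similar contrast in both channels fall into the same cluster, which is what lets the combined features outperform either channel alone.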

  7. Integrated circuit layer image segmentation

    NASA Astrophysics Data System (ADS)

    Masalskis, Giedrius; Petrauskas, Romas

    2010-09-01

    In this paper we present IC layer image segmentation techniques specifically created for precise metal layer feature extraction. During our research we used many samples of real-life de-processed IC metal layer images, which were obtained using an optical light microscope. We have created sequences of image processing filters that provide segmentation results of sufficient precision for our application. The filter sequences were fine-tuned to provide the best possible results depending on the properties of the IC manufacturing process and imaging technology. The proposed IC image segmentation filter sequences were experimentally tested and compared with conventional direct segmentation algorithms.

  8. Segmentation of discrete vector fields.

    PubMed

    Li, Hongyu; Chen, Wenbin; Shen, I-Fan

    2006-01-01

    In this paper, we propose an approach for 2D discrete vector field segmentation based on the Green function and normalized cut. The method is inspired by discrete Hodge Decomposition such that a discrete vector field can be broken down into three simpler components, namely, curl-free, divergence-free, and harmonic components. We show that the Green Function Method (GFM) can be used to approximate the curl-free and the divergence-free components to achieve our goal of the vector field segmentation. The final segmentation curves that represent the boundaries of the influence region of singularities are obtained from the optimal vector field segmentations. These curves are composed of piecewise smooth contours or streamlines. Our method is applicable to both linear and nonlinear discrete vector fields. Experiments show that the segmentations obtained using our approach essentially agree with human perceptual judgement.

  9. Segmentation of the zebrafish axial skeleton relies on notochord sheath cells and not on the segmentation clock

    PubMed Central

    Lleras Forero, Laura; Narayanan, Rachna; Huitema, Leonie FA; VanBergen, Maaike; Apschner, Alexander; Peterson-Maduro, Josi; Logister, Ive; Valentin, Guillaume

    2018-01-01

    Segmentation of the axial skeleton in amniotes depends on the segmentation clock, which patterns the paraxial mesoderm and the sclerotome. While the segmentation clock clearly operates in teleosts, the role of the sclerotome in establishing the axial skeleton is unclear. We severely disrupt zebrafish paraxial segmentation, yet observe a largely normal segmentation process of the chordacentra. We demonstrate that axial entpd5+ notochord sheath cells are responsible for chordacentrum mineralization, and serve as a marker for axial segmentation. While autonomous within the notochord sheath, entpd5 expression and centrum formation show some plasticity and can respond to myotome pattern. These observations reveal for the first time the dynamics of notochord segmentation in a teleost, and are consistent with an autonomous patterning mechanism that is influenced, but not determined by adjacent paraxial mesoderm. This behavior is not consistent with a clock-type mechanism in the notochord. PMID:29624170

  10. Segmentation of the zebrafish axial skeleton relies on notochord sheath cells and not on the segmentation clock.

    PubMed

    Lleras Forero, Laura; Narayanan, Rachna; Huitema, Leonie Fa; VanBergen, Maaike; Apschner, Alexander; Peterson-Maduro, Josi; Logister, Ive; Valentin, Guillaume; Morelli, Luis G; Oates, Andrew C; Schulte-Merker, Stefan

    2018-04-06

    Segmentation of the axial skeleton in amniotes depends on the segmentation clock, which patterns the paraxial mesoderm and the sclerotome. While the segmentation clock clearly operates in teleosts, the role of the sclerotome in establishing the axial skeleton is unclear. We severely disrupt zebrafish paraxial segmentation, yet observe a largely normal segmentation process of the chordacentra. We demonstrate that axial entpd5+ notochord sheath cells are responsible for chordacentrum mineralization, and serve as a marker for axial segmentation. While autonomous within the notochord sheath, entpd5 expression and centrum formation show some plasticity and can respond to myotome pattern. These observations reveal for the first time the dynamics of notochord segmentation in a teleost, and are consistent with an autonomous patterning mechanism that is influenced, but not determined by adjacent paraxial mesoderm. This behavior is not consistent with a clock-type mechanism in the notochord. © 2018, Lleras Forero et al.

  11. Assessment of LVEF using a new 16-segment wall motion score in echocardiography.

    PubMed

    Lebeau, Real; Serri, Karim; Lorenzo, Maria Di; Sauvé, Claude; Le, Van Hoai Viet; Soulières, Vicky; El-Rayes, Malak; Pagé, Maude; Zaïani, Chimène; Garot, Jérôme; Poulin, Frédéric

    2018-06-01

    The Simpson biplane method and 3D transthoracic echocardiography (TTE), radionuclide angiography (RNA) and cardiac magnetic resonance imaging (CMR) are the most accepted techniques for left ventricular ejection fraction (LVEF) assessment. The wall motion score index (WMSI) by TTE is an accepted complement. However, the conversion from WMSI to LVEF is obtained through a regression equation, which may limit its use. In this retrospective study, we aimed to validate a new method to derive LVEF from the wall motion score in 95 patients. The new score consists of attributing a segmental EF to each LV segment based on the wall motion score and averaging all 16 segmental EFs into a global LVEF. This segmental EF score was calculated on TTE in 95 patients, and RNA was used as the reference LVEF method. LVEF using the new segmental EF 15-40-65 score on TTE was compared to the reference method using linear regression and Bland-Altman analyses. The median LVEF was 45% (interquartile range 32-53%; range 15 to 65%). Our new segmental EF 15-40-65 score derived on TTE correlated strongly with RNA-LVEF (r = 0.97). Overall, the new score resulted in good agreement of LVEF compared to RNA (mean bias 0.61%). The standard deviation of the distribution of inter-method differences for the comparison of the new score with RNA was 6.2%, indicating good precision. LVEF assessment using segmental EF derived from the wall motion score applied to each of the 16 LV segments has excellent correlation and agreement with a reference method. © 2018 The authors.
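
    A minimal sketch of the scoring scheme, under the assumption (suggested by the score's name) that wall motion scores 1, 2 and 3 map to segmental EFs of 65%, 40% and 15%; how dyskinetic segments are scored is not stated in the abstract:

```python
# Hypothetical mapping implied by the "15-40-65" name:
# 1 = normokinetic -> 65%, 2 = hypokinetic -> 40%, 3 = akinetic -> 15%.
SEGMENTAL_EF = {1: 65.0, 2: 40.0, 3: 15.0}

def lvef_from_wall_motion(scores):
    # Average the segmental EF of all 16 LV segments into a global LVEF (%).
    if len(scores) != 16:
        raise ValueError("expected 16 segmental wall motion scores")
    return sum(SEGMENTAL_EF[s] for s in scores) / 16.0
```

    For example, a ventricle with eight normokinetic and eight akinetic segments averages to an LVEF of 40%, the same as one that is uniformly hypokinetic.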

  12. Automated intraretinal layer segmentation of optical coherence tomography images using graph-theoretical methods

    NASA Astrophysics Data System (ADS)

    Roy, Priyanka; Gholami, Peyman; Kuppuswamy Parthasarathy, Mohana; Zelek, John; Lakshminarayanan, Vasudevan

    2018-02-01

    Segmentation of spectral-domain Optical Coherence Tomography (SD-OCT) images facilitates visualization and quantification of sub-retinal layers for diagnosis of retinal pathologies. However, manual segmentation is subjective, expertise-dependent, and time-consuming, which limits the applicability of SD-OCT. Efforts are therefore being made to implement active contours, artificial intelligence, and graph search to automatically segment retinal layers with accuracy comparable to that of manual segmentation, to ease clinical decision-making. However, low optical contrast, heavy speckle noise, and pathologies pose challenges to automated segmentation. The graph-based image segmentation approach stands out from the rest because of its ability to minimize the cost function while maximizing the flow. This study has developed and implemented a shortest-path-based graph-search algorithm for automated intraretinal layer segmentation of SD-OCT images. The algorithm estimates the minimal-weight path between two graph nodes based on their gradients. Boundary position indices (BPI) are computed from the transition between pixel intensities. The mean difference between the BPIs of two consecutive layers quantifies the individual layer thickness, which shows statistically insignificant differences when compared to a previous study [for overall retina: p = 0.17, for individual layers: p > 0.05 (except one layer: p = 0.04)]. These results substantiate the accurate delineation of seven intraretinal boundaries in SD-OCT images by this algorithm, with a mean computation time of 0.93 seconds (64-bit Windows 10, Core i5, 8 GB RAM). Besides being self-reliant for denoising, the algorithm is further computationally optimized to restrict segmentation to a user-defined region of interest. The efficiency and reliability of this algorithm, even in noisy image conditions, make it clinically applicable.
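
    The shortest-path idea can be illustrated with a toy Dijkstra search on a grid of node weights (in the paper these derive from intensity gradients). The left-to-right connectivity and cost model here are simplifications of a boundary-tracking graph search, not the authors' formulation:

```python
import heapq

def min_weight_path(weights):
    # Dijkstra over a 2D grid of node weights (rows x cols), moving one column
    # right per step to the same, upper, or lower row. Returns the cheapest
    # left-to-right path as a list of (row, col) -- a toy boundary tracker.
    rows, cols = len(weights), len(weights[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    prev = {}
    pq = []
    for r in range(rows):
        dist[r][0] = weights[r][0]
        pq.append((dist[r][0], r, 0))
    heapq.heapify(pq)
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dist[r][c] or c == cols - 1:
            continue  # stale entry, or already at the last column
        for dr in (-1, 0, 1):
            nr = r + dr
            if 0 <= nr < rows and d + weights[nr][c + 1] < dist[nr][c + 1]:
                dist[nr][c + 1] = d + weights[nr][c + 1]
                prev[(nr, c + 1)] = (r, c)
                heapq.heappush(pq, (dist[nr][c + 1], nr, c + 1))
    # Backtrack from the cheapest node in the last column.
    r = min(range(rows), key=lambda i: dist[i][cols - 1])
    path = [(r, cols - 1)]
    while path[-1] in prev:
        path.append(prev[path[-1]])
    return path[::-1]
```

    With weights inversely related to gradient strength, the minimum-weight path hugs a layer boundary; restricting the grid to a region of interest, as the study does, shrinks the graph and the run time.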

  13. Technical Limitations of Electronic Health Records in Community Health Centers: Implications on Ambulatory Care Quality

    ERIC Educational Resources Information Center

    West, Christopher E.

    2010-01-01

    Research objectives: This dissertation examines the state of development of each of the eight core electronic health record (EHR) functionalities as described by the IOM and describes how the current state of these functionalities limit quality improvement efforts in ambulatory care settings. There is a great deal of literature describing both the…

  14. Effects of uneven-aged and diameter-limit management on West Virginia tree and wood quality

    Treesearch

    Michael C. Wiemann; Thomas M. Schuler; John E. Baumgras

    2004-01-01

    Uneven-aged and diameter-limit management were compared with an unmanaged control on the Fernow Experimental Forest near Parsons, West Virginia, to determine how treatment affects the quality of red oak (Quercus rubra L.), sugar maple (Acer saccharum Marsh.), and yellow-poplar (Liriodendron tulipifera L.). Periodic harvests slightly increased stem lean, which often...

  15. [Quality criteria in medicine: which limits?].

    PubMed

    Minvielle, E

    2006-06-01

    This article aims to develop a critical appraisal of the development of quality criteria in medicine. The COMPAQH (Coordination for Measuring Performance and Assuring Quality in Hospitals) project (Ministry of Health / High Authority of Health / National Institute of Medical Research) supports this analysis. Based on the testing of 42 quality indicators (QIs), the project yields findings not only on how to build criteria, but also on how to interpret and disseminate results among physicians and hospital managers. Criteria must be elaborated in a pragmatic way and must comply with practice guidelines supported by scientific evidence. The associated risk is creating and developing a normative medicine. Collaboration with professional societies may be useful in preventing this risk.

  16. A completely automated processing pipeline for lung and lung lobe segmentation and its application to the LIDC-IDRI data base

    NASA Astrophysics Data System (ADS)

    Blaffert, Thomas; Wiemker, Rafael; Barschdorf, Hans; Kabus, Sven; Klinder, Tobias; Lorenz, Cristian; Schadewaldt, Nicole; Dharaiya, Ekta

    2010-03-01

    Automated segmentation of lung lobes in thoracic CT images is relevant for various diagnostic purposes, such as localization of tumors within the lung or quantification of emphysema. Since emphysema is a known risk factor for lung cancer, the two purposes are even related to each other. The main steps of the segmentation pipeline described in this paper are the lung detector, the lung segmentation based on a watershed algorithm, and the lung lobe segmentation based on mesh model adaptation. The segmentation procedure was applied to data sets from the Image Database Resource Initiative (IDRI) database, which currently contains over 500 thoracic CT scans with delineated lung nodule annotations. We visually assessed the reliability of the individual segmentation steps, finding a success rate of 98% for lung detection and 90% for lung delineation. For about 20% of the cases we found the lobe segmentation not to be anatomically plausible. A modeling confidence measure is introduced that gives a quantitative indication of segmentation quality. As a demonstration of the segmentation method, we studied the correlation between emphysema score and malignancy on a per-lobe basis.
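    The watershed step of such a pipeline can be illustrated with a minimal marker-based priority flood: labeled markers grow outward over the image in order of increasing intensity, so low-intensity basins fill before high-intensity ridges separate them. This is a toy stand-in for the paper's watershed lung segmentation, with all names hypothetical.

    ```python
    import heapq
    import numpy as np

    def marker_watershed(image, markers):
        """Marker-based watershed via priority flood: grow nonzero marker
        labels over 4-connected neighbors in order of increasing intensity."""
        labels = markers.copy()
        rows, cols = image.shape
        # Seed the priority queue with all marked pixels.
        heap = [(image[r, c], r, c)
                for r in range(rows) for c in range(cols) if markers[r, c]]
        heapq.heapify(heap)
        while heap:
            _, r, c = heapq.heappop(heap)
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and labels[nr, nc] == 0:
                    labels[nr, nc] = labels[r, c]  # flood into the neighbor
                    heapq.heappush(heap, (image[nr, nc], nr, nc))
        return labels
    ```

    With two markers placed in basins on either side of a high-intensity ridge, each basin is filled entirely by its own label before the flood crosses the ridge.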

  17. Metric Learning to Enhance Hyperspectral Image Segmentation

    NASA Technical Reports Server (NTRS)

    Thompson, David R.; Castano, Rebecca; Bue, Brian; Gilmore, Martha S.

    2013-01-01

    Unsupervised hyperspectral image segmentation can reveal spatial trends that show the physical structure of the scene to an analyst. Such segmentations highlight borders and reveal areas of homogeneity and change. Segmentations are independently helpful for object recognition, and assist with automated production of symbolic maps. Additionally, a good segmentation can dramatically reduce the number of effective spectra in an image, enabling analyses that would otherwise be computationally prohibitive. Specifically, using an over-segmentation of the image instead of individual pixels can reduce noise and potentially improve the results of statistical post-analysis. In this innovation, a metric learning approach is presented to improve the performance of unsupervised hyperspectral image segmentation. The prototype demonstrations attempt a superpixel segmentation in which the image is conservatively over-segmented; that is, single surface features may be split into multiple segments, but each individual segment, or superpixel, is ensured to have homogeneous mineralogy.
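    The over-segmentation idea (grouping pixels into spectrally homogeneous superpixels to reduce the number of effective spectra) can be sketched as a k-means-style clustering with a combined spectral-plus-spatial distance. A scalar weight on the spatial term here stands in for the learned metric of the abstract; this is a simplified illustration under assumed names and parameters, not the authors' method.

    ```python
    import numpy as np

    def superpixel_segment(cube, step=2, w_spatial=0.1, n_iter=3):
        """K-means-style superpixel over-segmentation of a hyperspectral
        cube (H x W x B). Distance = spectral Euclidean distance plus a
        weighted spatial term, keeping superpixels compact and homogeneous."""
        H, W, B = cube.shape
        # Seed cluster centers (spectrum, position) on a regular grid.
        seeds = [(cube[y, x].astype(float), np.array([y, x], float))
                 for y in range(step // 2, H, step)
                 for x in range(step // 2, W, step)]
        labels = np.zeros((H, W), int)
        for _ in range(n_iter):
            # Assign each pixel to the nearest seed in the combined metric.
            for y in range(H):
                for x in range(W):
                    d = [np.linalg.norm(cube[y, x] - s)
                         + w_spatial * np.linalg.norm([y, x] - p)
                         for s, p in seeds]
                    labels[y, x] = int(np.argmin(d))
            # Update each seed to its segment's mean spectrum and position.
            for k in range(len(seeds)):
                mask = labels == k
                if mask.any():
                    ys, xs = np.nonzero(mask)
                    seeds[k] = (cube[mask].mean(axis=0),
                                np.array([ys.mean(), xs.mean()]))
        return labels
    ```

    On a toy cube with two spectrally distinct halves, the resulting superpixels each cover spectrally identical pixels, i.e. the image is conservatively over-segmented but never mixes materials within a segment.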

  18. Lifetime Segmented Assimilation Trajectories and Health Outcomes in Latino and Other Community Residents

    PubMed Central

    Marsiglia, Flavio F.; Kulis, Stephen; Kellison, Joshua G.

    2010-01-01

    Objectives. Under an ecodevelopmental framework, we examined lifetime segmented assimilation trajectories (diverging assimilation pathways influenced by prior life conditions) and related them to quality-of-life indicators in a diverse sample of 258 men in the Phoenix, AZ, metropolitan area. Methods. We used a growth mixture model analysis of lifetime changes in socioeconomic status and acculturation to identify distinct lifetime segmented assimilation trajectory groups, which we compared on life satisfaction, exercise, and dietary behaviors. We hypothesized that lifetime assimilation change toward mainstream American culture (upward assimilation) would be associated with favorable health outcomes, and downward assimilation change with unfavorable health outcomes. Results. A growth mixture model latent class analysis identified 4 distinct assimilation trajectory groups. In partial support of the study hypotheses, the extreme upward assimilation trajectory group (the most successful of the assimilation pathways) exhibited the highest life satisfaction and the lowest frequency of unhealthy food consumption. Conclusions. Upward segmented assimilation is associated in adulthood with certain positive health outcomes. This may be the first study to model upward and downward lifetime segmented assimilation trajectories, and to associate these with life satisfaction, exercise, and dietary behaviors. PMID:20167890

  19. 3D statistical shape models incorporating 3D random forest regression voting for robust CT liver segmentation

    NASA Astrophysics Data System (ADS)

    Norajitra, Tobias; Meinzer, Hans-Peter; Maier-Hein, Klaus H.

    2015-03-01

    During image segmentation, 3D Statistical Shape Models (SSM) usually conduct a limited search for target landmarks within one-dimensional search profiles perpendicular to the model surface. In addition, landmark appearance is modeled only locally based on linear profiles and weak learners, altogether leading to segmentation errors from landmark ambiguities and limited search coverage. We present a new method for 3D SSM segmentation based on 3D Random Forest Regression Voting. For each surface landmark, a Random Regression Forest is trained that learns a 3D spatial displacement function between the corresponding reference landmark and a set of surrounding sample points, based on an infinite set of non-local randomized 3D Haar-like features. Landmark search is then conducted omni-directionally within 3D search spaces, where voxelwise forest predictions of the landmark position contribute to a common voting map which reflects the overall position estimate. Segmentation experiments were conducted on a set of 45 CT volumes of the human liver, of which 40 images were randomly chosen for training and 5 for testing. Without parameter optimization, using a simple candidate selection and a single-resolution approach, excellent results were achieved, while faster convergence and better concavity segmentation were observed, altogether underlining the potential of our approach in terms of increased robustness from distinct landmark detection and from better search coverage.
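    The regression-voting scheme can be sketched independently of the forest itself: each sample point predicts a displacement to the landmark and casts a vote, and the maximum of the accumulated vote map gives the landmark estimate, so a few wrong predictions are simply outvoted. Below, a 2D toy is used for brevity and `predict_disp` is a hypothetical stand-in for the trained Random Regression Forest; all names are assumptions.

    ```python
    import numpy as np

    def regression_vote(image, predict_disp, stride=1):
        """Landmark localization by regression voting: every sampled point
        predicts a displacement (dy, dx) to the landmark and casts one vote;
        the argmax of the vote map is the landmark estimate."""
        H, W = image.shape
        votes = np.zeros((H, W))
        for y in range(0, H, stride):
            for x in range(0, W, stride):
                dy, dx = predict_disp(image, y, x)
                vy, vx = y + dy, x + dx
                if 0 <= vy < H and 0 <= vx < W:
                    votes[vy, vx] += 1  # accumulate the vote
        return np.unravel_index(np.argmax(votes), votes.shape)
    ```

    Even when a fraction of the sample points cast stray votes, the consensus of the remaining points still pinpoints the landmark, which is the robustness argument made in the abstract.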

  20. Amagmatic Accretionary Segments, Ultraslow Spreading and Non-Volcanic Rifted Margins (Invited)

    NASA Astrophysics Data System (ADS)

    Dick, H. J.; Snow, J. E.

    2009-12-01

    The evolution of non-volcanic rifted margins is key to understanding continental breakup and the early evolution of some of the world’s most productive hydrocarbon basins. However, the early stages of such rifting are constrained by limited observations on ancient heavily sedimented margins such as Newfoundland and Iberia. Ultraslow spreading ridges, however, provide a modern analogue for early continental rifting. Ultraslow spreading ridges (<20 mm/yr) comprise ~30% of the global ridge system (e.g. the Gakkel, Southwest Indian, Terceira, and Knipovitch Ridges). They have unique tectonics with widely spaced volcanic segments and amagmatic accretionary ridge segments. The volcanic segments, though far from hot spots, include some of the largest axial volcanoes on the global ridge system, and have unusual magma chemistry, often showing local isotopic and incompatible element enrichment unrelated to mantle hot spots. The transition from slow to ultraslow tectonics and spreading is not uniquely defined by spreading rate, and may also be moderated by magma supply and mantle temperature. Amagmatic accretionary segments are the fourth class of plate boundary structure, and, we believe, the defining tectonic feature of early continental breakup. They form at effective spreading rates <12 mm/yr, assume any orientation to spreading, and replace transform faults and magmatic segments. At amagmatic segments the earth splits apart with the mantle emplaced directly to the seafloor, and great slabs of peridotite are uplifted to form the rift mountains. A thick conductive lid suppresses mantle melting, and magmatic segments form only at widely spaced intervals, with only scattered volcanics in between. Amagmatic segments link with the magmatic segments forming curvilinear plate boundaries, rather than the step-like morphology found at faster spreading ridges.
These are all key features of non-volcanic rifted margins; explaining, for example, the presence of mantle peridotites emplaced