Sample records for segmentation procedure based

  1. Surgical motion characterization in simulated needle insertion procedures

    NASA Astrophysics Data System (ADS)

    Holden, Matthew S.; Ungi, Tamas; Sargent, Derek; McGraw, Robert C.; Fichtinger, Gabor

    2012-02-01

    PURPOSE: Evaluation of surgical performance in image-guided needle insertions is of emerging interest, both to promote patient safety and to improve the efficiency and effectiveness of training. The purpose of this study was to determine whether a Markov model-based algorithm can segment a needle-based surgical procedure into its five constituent tasks more accurately than a simple threshold-based algorithm. METHODS: Simulated needle trajectories were generated with known ground truth segmentation by a synthetic procedural data generator, with random noise added to each degree of freedom of motion. The respective learning algorithms were trained, and then tested on different procedures to determine task segmentation accuracy. In the threshold-based algorithm, a change in tasks was detected when the needle crossed a position/velocity threshold. In the Markov model-based algorithm, task segmentation was performed by identifying the sequence of Markov models most likely to have produced the series of observations. RESULTS: For amplitudes of translational noise greater than 0.01 mm, the Markov model-based algorithm was significantly more accurate in task segmentation than the threshold-based algorithm (82.3% vs. 49.9%, p < 0.001 for amplitude 10.0 mm). For amplitudes less than 0.01 mm, the two algorithms produced results that did not differ significantly. CONCLUSION: Task segmentation of simulated needle insertion procedures involving translational noise was improved by using a Markov model-based algorithm rather than a threshold-based algorithm.
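The Markov-model step described above, identifying the state sequence most likely to have produced the observations, can be illustrated with a toy Viterbi decoder. Everything below (the `viterbi_segment` name, the 1-D Gaussian emission model, and all parameter values) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def viterbi_segment(obs, means, stds, trans, init):
    """Most likely hidden-state sequence for 1-D Gaussian emissions."""
    n_states = len(means)
    # log-likelihood of every observation under every state's emission model
    ll = (-0.5 * ((obs[:, None] - means[None, :]) / stds[None, :]) ** 2
          - np.log(stds[None, :] * np.sqrt(2.0 * np.pi)))
    log_trans = np.log(trans)
    delta = np.log(init) + ll[0]             # best log-score ending in each state
    back = np.zeros((len(obs), n_states), dtype=int)
    for t in range(1, len(obs)):
        scores = delta[:, None] + log_trans  # rows: previous state, cols: next state
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(n_states)] + ll[t]
    path = np.empty(len(obs), dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(len(obs) - 1, 0, -1):     # trace the best path backwards
        path[t - 1] = back[t, path[t]]
    return path
```

Task boundaries are then simply the indices where `path` changes value, which is what distinguishes this approach from detecting a single position/velocity threshold crossing.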

  2. A web-based procedure for liver segmentation in CT images

    NASA Astrophysics Data System (ADS)

    Yuan, Rong; Luo, Ming; Wang, Luyao; Xie, Qingguo

    2015-03-01

    Liver segmentation in CT images has been acknowledged as a basic and indispensable part of computer-aided liver surgery systems for operation design and risk evaluation. In this paper, we introduce and implement a web-based procedure for liver segmentation to help radiologists and surgeons obtain an accurate result efficiently and conveniently. Several clinical datasets are used to evaluate accessibility and accuracy. The procedure appears to be a promising approach for extracting liver volumes of various shapes. Moreover, users can access the segmentation wherever the Internet is available, without any dedicated machine.

  3. Breast tumor segmentation in high resolution x-ray phase contrast analyzer based computed tomography.

    PubMed

    Brun, E; Grandl, S; Sztrókay-Gaul, A; Barbone, G; Mittone, A; Gasilov, S; Bravin, A; Coan, P

    2014-11-01

    Phase contrast computed tomography has emerged as an imaging method that is able to outperform present-day clinical mammography in breast tumor visualization while maintaining an equivalent average dose. To this day, no segmentation technique takes into account the specificity of the phase contrast signal. In this study, the authors propose a new mathematical framework for human-guided breast tumor segmentation. This method has been applied to high-resolution images of excised human organs, each of several gigabytes. The authors present a segmentation procedure based on the viscous watershed transform and demonstrate the efficacy of this method on analyzer based phase contrast images. The segmentation of tumors inside two full human breasts is then shown as an example of this procedure's possible applications. A correct and precise identification of the tumor boundaries was obtained and confirmed by manual contouring performed independently by four experienced radiologists. The authors demonstrate that applying the viscous watershed transform allows them to perform the segmentation of tumors in high-resolution x-ray analyzer based phase contrast breast computed tomography images. Combining the additional information provided by the segmentation procedure with the already high definition of morphological details and tissue boundaries offered by phase contrast imaging techniques will represent a valuable multistep procedure to be used in future medical diagnostic applications.

  4. Blurry-frame detection and shot segmentation in colonoscopy videos

    NASA Astrophysics Data System (ADS)

    Oh, JungHwan; Hwang, Sae; Tavanapong, Wallapak; de Groen, Piet C.; Wong, Johnny

    2003-12-01

    Colonoscopy is an important screening procedure for colorectal cancer. During this procedure, the endoscopist visually inspects the colon. Human inspection, however, is not without error. We hypothesize that colonoscopy videos may contain additional valuable information missed by the endoscopist. Video segmentation is the first necessary step for the content-based video analysis and retrieval to provide efficient access to the important images and video segments from a large colonoscopy video database. Based on the unique characteristics of colonoscopy videos, we introduce a new scheme to detect and remove blurry frames, and segment the videos into shots based on the contents. Our experimental results show that the average precision and recall of the proposed scheme are over 90% for the detection of non-blurry images. The proposed method of blurry frame detection and shot segmentation is extensible to the videos captured from other endoscopic procedures such as upper gastrointestinal endoscopy, enteroscopy, cystoscopy, and laparoscopy.
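The record does not spell out its blurry-frame detector; a common stand-in for such a detector is the variance-of-Laplacian focus measure, sketched below. The function names and the threshold value are assumptions for illustration, not the authors' method.

```python
import numpy as np

def laplacian_variance(gray):
    """Focus measure: variance of a 4-neighbour Laplacian.
    Low values indicate little high-frequency edge energy, i.e. blur."""
    g = np.asarray(gray, dtype=float)
    lap = (-4.0 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return lap.var()

def is_blurry(gray, threshold=50.0):
    """Flag a frame as blurry when its focus measure falls below threshold."""
    return laplacian_variance(gray) < threshold
```

In a shot-segmentation pipeline, frames flagged this way would be removed before the remaining frames are grouped into shots by content similarity.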

  5. Markov models of genome segmentation

    NASA Astrophysics Data System (ADS)

    Thakur, Vivek; Azad, Rajeev K.; Ramaswamy, Ram

    2007-01-01

    We introduce Markov models for segmentation of symbolic sequences, extending a segmentation procedure based on the Jensen-Shannon divergence that has been introduced earlier. Higher-order Markov models are more sensitive to the details of local patterns and in application to genome analysis, this makes it possible to segment a sequence at positions that are biologically meaningful. We show the advantage of higher-order Markov-model-based segmentation procedures in detecting compositional inhomogeneity in chimeric DNA sequences constructed from genomes of diverse species, and in application to the E. coli K12 genome, boundaries of genomic islands, cryptic prophages, and horizontally acquired regions are accurately identified.
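The underlying Jensen-Shannon segmentation step that this record extends can be sketched as a search for the split point maximizing the weighted JS divergence between the symbol distributions of the two halves. The helper names and the fixed DNA alphabet below are illustrative assumptions; the paper's higher-order Markov extension is not reproduced here.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits; zero-probability symbols contribute nothing."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def js_divergence(left_counts, right_counts):
    """Weighted Jensen-Shannon divergence between two count vectors."""
    nl, nr = left_counts.sum(), right_counts.sum()
    total = left_counts + right_counts
    return (entropy(total / total.sum())
            - (nl * entropy(left_counts / nl)
               + nr * entropy(right_counts / nr)) / (nl + nr))

def best_split(seq, alphabet="ACGT"):
    """Split position that maximizes the JS divergence between halves."""
    idx = {a: i for i, a in enumerate(alphabet)}
    x = np.array([idx[s] for s in seq])
    best_pos, best_d = None, -1.0
    for pos in range(1, len(x)):
        lc = np.bincount(x[:pos], minlength=len(alphabet)).astype(float)
        rc = np.bincount(x[pos:], minlength=len(alphabet)).astype(float)
        d = js_divergence(lc, rc)
        if d > best_d:
            best_pos, best_d = pos, d
    return best_pos, best_d
```

Recursive application of `best_split` to each half, stopping when the divergence falls below a significance threshold, yields the full segmentation.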

  6. Delphi definition of the EADC-ADNI Harmonized Protocol for hippocampal segmentation on magnetic resonance

    PubMed Central

    Boccardi, Marina; Bocchetta, Martina; Apostolova, Liana G.; Barnes, Josephine; Bartzokis, George; Corbetta, Gabriele; DeCarli, Charles; deToledo-Morrell, Leyla; Firbank, Michael; Ganzola, Rossana; Gerritsen, Lotte; Henneman, Wouter; Killiany, Ronald J.; Malykhin, Nikolai; Pasqualetti, Patrizio; Pruessner, Jens C.; Redolfi, Alberto; Robitaille, Nicolas; Soininen, Hilkka; Tolomeo, Daniele; Wang, Lei; Watson, Craig; Wolf, Henrike; Duvernoy, Henri; Duchesne, Simon; Jack, Clifford R.; Frisoni, Giovanni B.

    2015-01-01

    Background This study aimed to have international experts converge on a harmonized definition of whole hippocampus boundaries and segmentation procedures, to define standard operating procedures for magnetic resonance (MR)-based manual hippocampal segmentation. Methods The panel received a questionnaire regarding whole hippocampus boundaries and segmentation procedures. Quantitative information was supplied to allow evidence-based answers. A recursive and anonymous Delphi procedure was used to achieve convergence. Significance of agreement among panelists was assessed by exact probability on Fisher’s and binomial tests. Results Agreement was significant on the inclusion of alveus/fimbria (P =.021), whole hippocampal tail (P =.013), medial border of the body according to visible morphology (P =.0006), and on this combined set of features (P =.001). This definition captures 100% of hippocampal tissue, 100% of Alzheimer’s disease-related atrophy, and demonstrated good reliability on preliminary intrarater (0.98) and inter-rater (0.94) estimates. Discussion Consensus was achieved among international experts with respect to hippocampal segmentation using MR resulting in a harmonized segmentation protocol. PMID:25130658

  7. Delphi definition of the EADC-ADNI Harmonized Protocol for hippocampal segmentation on magnetic resonance.

    PubMed

    Boccardi, Marina; Bocchetta, Martina; Apostolova, Liana G; Barnes, Josephine; Bartzokis, George; Corbetta, Gabriele; DeCarli, Charles; deToledo-Morrell, Leyla; Firbank, Michael; Ganzola, Rossana; Gerritsen, Lotte; Henneman, Wouter; Killiany, Ronald J; Malykhin, Nikolai; Pasqualetti, Patrizio; Pruessner, Jens C; Redolfi, Alberto; Robitaille, Nicolas; Soininen, Hilkka; Tolomeo, Daniele; Wang, Lei; Watson, Craig; Wolf, Henrike; Duvernoy, Henri; Duchesne, Simon; Jack, Clifford R; Frisoni, Giovanni B

    2015-02-01

    This study aimed to have international experts converge on a harmonized definition of whole hippocampus boundaries and segmentation procedures, to define standard operating procedures for magnetic resonance (MR)-based manual hippocampal segmentation. The panel received a questionnaire regarding whole hippocampus boundaries and segmentation procedures. Quantitative information was supplied to allow evidence-based answers. A recursive and anonymous Delphi procedure was used to achieve convergence. Significance of agreement among panelists was assessed by exact probability on Fisher's and binomial tests. Agreement was significant on the inclusion of alveus/fimbria (P = .021), whole hippocampal tail (P = .013), medial border of the body according to visible morphology (P = .0006), and on this combined set of features (P = .001). This definition captures 100% of hippocampal tissue, 100% of Alzheimer's disease-related atrophy, and demonstrated good reliability on preliminary intrarater (0.98) and inter-rater (0.94) estimates. Consensus was achieved among international experts with respect to hippocampal segmentation using MR resulting in a harmonized segmentation protocol. Copyright © 2015 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  8. An Approach to a Comprehensive Test Framework for Analysis and Evaluation of Text Line Segmentation Algorithms

    PubMed Central

    Brodic, Darko; Milivojevic, Dragan R.; Milivojevic, Zoran N.

    2011-01-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key step for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms deal with text databases as reference templates. Because of this mismatch, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multiline text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency, based on the obtained error type classification, are proposed. The first is based on the segmentation line error description, while the second incorporates well-known signal detection theory. Each has different capabilities and conveniences, but they can be used as complements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation line error description has some advantages, characterized by five measures that describe the measurement procedures. PMID:22164106

  9. An approach to a comprehensive test framework for analysis and evaluation of text line segmentation algorithms.

    PubMed

    Brodic, Darko; Milivojevic, Dragan R; Milivojevic, Zoran N

    2011-01-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key step for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms deal with text databases as reference templates. Because of this mismatch, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multiline text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency, based on the obtained error type classification, are proposed. The first is based on the segmentation line error description, while the second incorporates well-known signal detection theory. Each has different capabilities and conveniences, but they can be used as complements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation line error description has some advantages, characterized by five measures that describe the measurement procedures.

  10. Segmentation by fusion of histogram-based k-means clusters in different color spaces.

    PubMed

    Mignotte, Max

    2008-05-01

    This paper presents a new, simple, and efficient segmentation approach based on a fusion procedure which aims at combining several segmentation maps associated with simpler partition models in order to finally obtain a more reliable and accurate segmentation result. The different label fields to be fused in our application are given by the same simple (K-means based) clustering technique applied to an input image expressed in different color spaces. Our fusion strategy aims at combining these segmentation maps with a final clustering procedure using, as input features, the local histograms of the class labels previously estimated and associated with each site for all these initial partitions. This fusion framework remains simple to implement, fast, general enough to be applied to various computer vision applications (e.g., motion detection and segmentation), and has been successfully applied on the Berkeley image database. The experiments reported in this paper illustrate the potential of this approach compared with the state-of-the-art segmentation methods recently proposed in the literature.
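A drastically simplified sketch of the fusion idea described above: given several label fields, build a per-pixel local histogram of class labels for each field and run a final clustering on the concatenated histogram features. The hand-rolled k-means (with a deterministic farthest-point initialisation), the window size, and all names are assumptions; the paper's actual pipeline clusters the image in several color spaces first and is considerably more elaborate.

```python
import numpy as np

def kmeans(X, k, iters=25):
    """Plain Lloyd's k-means with deterministic farthest-point init (a sketch)."""
    centers = [X[0].astype(float)]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))].astype(float))
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def local_label_histograms(label_map, n_labels, win=1):
    """Per-pixel histogram of class labels in a (2*win+1)^2 neighbourhood."""
    H, W = label_map.shape
    feats = np.zeros((H, W, n_labels))
    for i in range(H):
        for j in range(W):
            patch = label_map[max(0, i - win):i + win + 1,
                              max(0, j - win):j + win + 1]
            feats[i, j] = np.bincount(patch.ravel(), minlength=n_labels)
    return feats / feats.sum(axis=2, keepdims=True)

def fuse(label_maps, k_final, win=1):
    """Final clustering on the concatenated local label-histogram features."""
    H, W = label_maps[0].shape
    feats = np.concatenate(
        [local_label_histograms(lm, lm.max() + 1, win) for lm in label_maps],
        axis=2)
    return kmeans(feats.reshape(H * W, -1), k_final).reshape(H, W)
```

Each input label field votes through its local histogram, so the final clustering tends to keep regions on which the initial segmentations agree and to smooth out their disagreements.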

  11. Use of market segmentation to identify untapped consumer needs in vision correction surgery for future growth.

    PubMed

    Loarie, Thomas M; Applegate, David; Kuenne, Christopher B; Choi, Lawrence J; Horowitz, Diane P

    2003-01-01

    Market segmentation analysis identifies discrete segments of the population whose beliefs are consistent with exhibited behaviors such as purchase choice. This study applies market segmentation analysis to low myopes (-1 to -3 D with less than 1 D cylinder) in their consideration and choice of a refractive surgery procedure to discover opportunities within the market. A quantitative survey based on focus group research was sent to a demographically balanced sample of myopes using contact lenses and/or glasses. A variable reduction process followed by a clustering analysis was used to discover discrete belief-based segments. The resulting segments were validated both analytically and through in-market testing. Discontented individuals who wear contact lenses are the primary target for vision correction surgery. However, 81% of the target group is apprehensive about laser in situ keratomileusis (LASIK). They are nervous about the procedure and strongly desire reversibility and exchangeability. There exists a large untapped opportunity for vision correction surgery within the low myope population. Market segmentation analysis helped determine how to best meet this opportunity through repositioning existing procedures or developing new vision correction technology, and could also be applied to identify opportunities in other vision correction populations.

  12. Adaptive segmentation of nuclei in H&E-stained tendon microscopy

    NASA Astrophysics Data System (ADS)

    Chuang, Bo-I.; Wu, Po-Ting; Hsu, Jian-Han; Jou, I.-Ming; Su, Fong-Chin; Sun, Yung-Nien

    2015-12-01

    Tendinopathy has become a common clinical issue in recent years. In most cases, such as trigger finger or tennis elbow, the pathological changes can be observed under H&E-stained tendon microscopy. However, qualitative analysis is too subjective, and thus the results depend heavily on the observers. We develop an automatic segmentation procedure that segments and counts the nuclei in H&E-stained tendon microscopy quickly and precisely. This procedure first determines the complexity of the image and then segments the nuclei. For complex images, the proposed method adopts sampling-based thresholding to segment the nuclei, while for simple images, Laplacian-based thresholding is employed to re-segment the nuclei more accurately. In the experiments, the proposed method is compared with results outlined by experts. The nucleus counts from the proposed method are close to the experts' counts, and its processing time is much shorter.
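The threshold-then-count idea behind such a procedure can be sketched as a global threshold followed by 4-connected component labelling. The sampling-based and Laplacian-based thresholding variants from the record are not reproduced; the mean-based default threshold and the function name are assumptions.

```python
import numpy as np
from collections import deque

def count_nuclei(gray, threshold=None):
    """Count dark blobs (nuclei) via thresholding plus 4-connected
    component labelling. `threshold` defaults to the image mean."""
    g = np.asarray(gray, dtype=float)
    if threshold is None:
        threshold = g.mean()
    mask = g < threshold            # stained nuclei are darker than background
    seen = np.zeros_like(mask, dtype=bool)
    H, W = mask.shape
    count = 0
    for i in range(H):
        for j in range(W):
            if mask[i, j] and not seen[i, j]:
                count += 1          # new component: flood-fill it
                q = deque([(i, j)])
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < H and 0 <= nx < W
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return count
```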

  13. Computer Aided Segmentation Analysis: New Software for College Admissions Marketing.

    ERIC Educational Resources Information Center

    Lay, Robert S.; Maguire, John J.

    1983-01-01

    Compares segmentation solutions obtained using a binary segmentation algorithm (THAID) and a new chi-square-based procedure (CHAID) that segments the prospective pool of college applicants using application and matriculation as criteria. Results showed a higher number of estimated qualified inquiries and more accurate estimates with CHAID. (JAC)
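The flavor of a chi-square-based split search like CHAID's can be sketched by scoring every binary split of a categorical predictor against a binary outcome (e.g., matriculated or not) with the Pearson chi-square statistic. This omits CHAID's category-merging steps and significance corrections; all names and data below are illustrative.

```python
import numpy as np

def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    table = np.asarray(table, dtype=float)
    expected = np.outer(table.sum(1), table.sum(0)) / table.sum()
    return ((table - expected) ** 2 / expected).sum()

def best_binary_split(categories, outcome):
    """Binary split of a categorical predictor maximizing chi-square
    against a 0/1 outcome (a sketch of one CHAID-style step)."""
    cats = sorted(set(categories))
    best_set, best_chi = None, -1.0
    # enumerate every proper, non-empty subset sent to the left branch
    for bits in range(1, 2 ** len(cats) - 1):
        left = {c for i, c in enumerate(cats) if bits >> i & 1}
        table = np.zeros((2, 2))
        for c, y in zip(categories, outcome):
            table[0 if c in left else 1, y] += 1
        if (table.sum(axis=0) == 0).any() or (table.sum(axis=1) == 0).any():
            continue                # degenerate split, chi-square undefined
        chi = chi_square(table)
        if chi > best_chi:
            best_set, best_chi = left, chi
    return best_set, best_chi
```

Applied recursively to each branch, this kind of split search produces the segmentation tree that distinguishes CHAID from binary techniques such as THAID.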

  14. A comparison study of atlas-based 3D cardiac MRI segmentation: global versus global and local transformations

    NASA Astrophysics Data System (ADS)

    Daryanani, Aditya; Dangi, Shusil; Ben-Zikri, Yehuda Kfir; Linte, Cristian A.

    2016-03-01

    Magnetic Resonance Imaging (MRI) is a standard-of-care imaging modality for cardiac function assessment and guidance of cardiac interventions thanks to its high image quality and lack of exposure to ionizing radiation. Cardiac health parameters such as left ventricular volume, ejection fraction, myocardial mass, thickness, and strain can be assessed by segmenting the heart from cardiac MRI images. Furthermore, the segmented pre-operative anatomical heart models can be used to precisely identify regions of interest to be treated during minimally invasive therapy. Hence, the use of accurate and computationally efficient segmentation techniques is critical, especially for intra-procedural guidance applications that rely on the peri-operative segmentation of subject-specific datasets without delaying the procedure workflow. Atlas-based segmentation incorporates prior knowledge of the anatomy of interest from expertly annotated image datasets. Typically, the ground truth atlas label is propagated to a test image using a combination of global and local registration. The high computational cost of non-rigid registration motivated us to obtain an initial segmentation using global transformations, based on an atlas of the left ventricle from a population of patient MRI images, and to refine it using a well-developed graph-cut-based technique. Here we quantitatively compare the segmentations obtained from the global and global-plus-local atlases and refined using graph-cut-based techniques with the expert segmentations according to several similarity metrics, including the Dice similarity coefficient, Jaccard coefficient, Hausdorff distance, and mean absolute distance error.
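The similarity metrics named in this record are standard and compact to state; a minimal NumPy sketch for binary masks follows (function names are illustrative, and the brute-force Hausdorff computation is only practical for small masks).

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard coefficient (intersection over union)."""
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def hausdorff(a, b):
    """Symmetric Hausdorff distance (in pixels) between two binary masks."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Dice and Jaccard reward volume overlap, while the Hausdorff distance penalizes the single worst boundary disagreement, which is why studies such as this one report both kinds of measure.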

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brun, E., E-mail: emmanuel.brun@esrf.fr; Grandl, S.; Sztrókay-Gaul, A.

    Purpose: Phase contrast computed tomography has emerged as an imaging method that is able to outperform present-day clinical mammography in breast tumor visualization while maintaining an equivalent average dose. To this day, no segmentation technique takes into account the specificity of the phase contrast signal. In this study, the authors propose a new mathematical framework for human-guided breast tumor segmentation. This method has been applied to high-resolution images of excised human organs, each of several gigabytes. Methods: The authors present a segmentation procedure based on the viscous watershed transform and demonstrate the efficacy of this method on analyzer based phase contrast images. The segmentation of tumors inside two full human breasts is then shown as an example of this procedure's possible applications. Results: A correct and precise identification of the tumor boundaries was obtained and confirmed by manual contouring performed independently by four experienced radiologists. Conclusions: The authors demonstrate that applying the viscous watershed transform allows them to perform the segmentation of tumors in high-resolution x-ray analyzer based phase contrast breast computed tomography images. Combining the additional information provided by the segmentation procedure with the already high definition of morphological details and tissue boundaries offered by phase contrast imaging techniques will represent a valuable multistep procedure to be used in future medical diagnostic applications.

  16. Noise/spike detection in phonocardiogram signal as a cyclic random process with non-stationary period interval.

    PubMed

    Naseri, H; Homaeinezhad, M R; Pourkhajeh, H

    2013-09-01

    The major aim of this study is to describe a unified procedure for detecting noisy segments and spikes in transduced signals with a cyclic but non-stationary periodic nature. According to this procedure, the cycles of the signal (onset and offset locations) are detected. Then, the cycles are clustered into a finite number of groups based on appropriate geometrical- and frequency-based time series. Next, the median template of each time series of each cluster is calculated. Afterwards, a correlation-based technique is devised for comparing a test cycle feature with the associated time series of each cluster. Finally, by applying a suitably chosen threshold to the calculated correlation values, a segment is classified as either clean or noisy. As a key merit of this research, the procedure can support the decision either to apply orthogonal-expansion-based filtering or to remove noisy segments. In this paper, the application of the proposed method is described comprehensively by applying it to phonocardiogram (PCG) signals to find noisy cycles. The database consists of 126 records from several patients at a domestic research station, acquired with a 3M Littmann® 3200 electronic stethoscope at a 4 kHz sampling frequency. By applying the noisy-segment detection algorithm to this database, a sensitivity of Se = 91.41% and a positive predictive value of PPV = 92.86% were obtained based on physicians' assessments. Copyright © 2013 Elsevier Ltd. All rights reserved.
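The correlation-against-median-template test at the heart of this procedure can be sketched as follows, assuming cycles have already been detected and resampled to a common length. The function name and the correlation threshold are assumptions for illustration.

```python
import numpy as np

def noisy_cycles(cycles, threshold=0.8):
    """Flag cycles whose correlation with the median template falls below
    `threshold`. Expects an array of cycles resampled to equal length."""
    cycles = np.asarray(cycles, dtype=float)
    template = np.median(cycles, axis=0)   # robust per-sample template
    flags = []
    for c in cycles:
        r = np.corrcoef(c, template)[0, 1]
        flags.append(r < threshold)
    return np.array(flags)
```

Because the template is a median rather than a mean, a minority of corrupted cycles barely distorts it, so those cycles stand out with low correlation.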

  17. Segmenting Student Markets with a Student Satisfaction and Priorities Survey.

    ERIC Educational Resources Information Center

    Borden, Victor M. H.

    1995-01-01

    A market segmentation analysis of 872 university students compared 2 hierarchical clustering procedures for deriving market segments: 1 using matching-type measures and an agglomerative clustering algorithm, and 1 using the chi-square based automatic interaction detection. Results and implications for planning, evaluating, and improving academic…

  18. Rapid Contour-based Segmentation for 18F-FDG PET Imaging of Lung Tumors by Using ITK-SNAP: Comparison to Expert-based Segmentation.

    PubMed

    Besson, Florent L; Henry, Théophraste; Meyer, Céline; Chevance, Virgile; Roblot, Victoire; Blanchet, Elise; Arnould, Victor; Grimon, Gilles; Chekroun, Malika; Mabille, Laurence; Parent, Florence; Seferian, Andrei; Bulifon, Sophie; Montani, David; Humbert, Marc; Chaumet-Riffaud, Philippe; Lebon, Vincent; Durand, Emmanuel

    2018-04-03

    Purpose To assess the performance of the ITK-SNAP software for fluorodeoxyglucose (FDG) positron emission tomography (PET) segmentation of complex-shaped lung tumors compared with an optimized, expert-based manual reference standard. Materials and Methods Seventy-six FDG PET images of thoracic lesions were retrospectively segmented by using ITK-SNAP software. Each tumor was manually segmented by six raters to generate an optimized reference standard by using the simultaneous truth and performance level estimate algorithm. Four raters segmented 76 FDG PET images of lung tumors twice by using the ITK-SNAP active contour algorithm. Accuracy of the ITK-SNAP procedure was assessed by using the Dice coefficient and Hausdorff metric. Interrater and intrarater reliability were estimated by using intraclass correlation coefficients of output volumes. Finally, the ITK-SNAP procedure was compared with currently recommended PET tumor delineation methods based on thresholding at 41% (VOI41) and 50% (VOI50) of the tumor's maximal metabolism intensity for the volume of interest (VOI). Results Accuracy estimates for the ITK-SNAP procedure indicated a Dice coefficient of 0.83 (95% confidence interval: 0.77, 0.89) and a Hausdorff distance of 12.6 mm (95% confidence interval: 9.82, 15.32). Interrater reliability was an intraclass correlation coefficient of 0.94 (95% confidence interval: 0.91, 0.96). The intrarater reliabilities were intraclass correlation coefficients above 0.97. Finally, the VOI41 and VOI50 accuracy metrics were as follows: Dice coefficient, 0.48 (95% confidence interval: 0.44, 0.51) and 0.34 (95% confidence interval: 0.30, 0.38), respectively, and Hausdorff distance, 25.6 mm (95% confidence interval: 21.7, 31.4) and 31.3 mm (95% confidence interval: 26.8, 38.4), respectively. Conclusion ITK-SNAP is accurate and reliable for active-contour-based segmentation of heterogeneous thoracic PET tumors. ITK-SNAP surpassed the recommended PET methods compared with ground truth manual segmentation. © RSNA, 2018.
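The VOI41/VOI50 reference methods used for comparison in this record are plain percentage-of-maximum thresholds, which a one-function sketch can capture (the function name is illustrative).

```python
import numpy as np

def percent_max_voi(img, fraction=0.41):
    """Binary VOI: voxels at or above `fraction` of the maximum uptake.
    fraction=0.41 and 0.50 correspond to the VOI41 and VOI50 rules."""
    img = np.asarray(img, dtype=float)
    return img >= fraction * img.max()
```

Because a single global fraction ignores lesion heterogeneity and background uptake, such thresholds tend to underperform on complex-shaped tumors, consistent with the low Dice coefficients reported above.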

  19. Automatic Segmentation of the Cortical Grey and White Matter in MRI Using a Region-Growing Approach Based on Anatomical Knowledge

    NASA Astrophysics Data System (ADS)

    Wasserthal, Christian; Engel, Karin; Rink, Karsten; Brechmann, André

    We propose an automatic procedure for the correct segmentation of grey and white matter in MR data sets of the human brain. Our method exploits general anatomical knowledge for the initial segmentation and for the subsequent refinement of the estimation of the cortical grey matter. Our results are comparable to manual segmentations.
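A bare region-growing step (without the anatomical-knowledge priors this record relies on) can be sketched as breadth-first growth that accepts neighbours close to the running region mean. The tolerance parameter and function name are assumptions for illustration.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    """Grow a region from `seed`, accepting 4-neighbours whose intensity
    lies within `tol` of the current region mean."""
    img = np.asarray(img, dtype=float)
    H, W = img.shape
    mask = np.zeros((H, W), dtype=bool)
    mask[seed] = True
    total, n = img[seed], 1          # running sum and size of the region
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < H and 0 <= nx < W and not mask[ny, nx]
                    and abs(img[ny, nx] - total / n) <= tol):
                mask[ny, nx] = True
                total += img[ny, nx]
                n += 1
                q.append((ny, nx))
    return mask
```

In an anatomy-aware pipeline, seeds and tolerances would be chosen from prior knowledge (e.g., expected white-matter intensity) rather than fixed constants.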

  20. Automated identification of brain tumors from single MR images based on segmentation with refined patient-specific priors

    PubMed Central

    Sanjuán, Ana; Price, Cathy J.; Mancini, Laura; Josse, Goulven; Grogan, Alice; Yamamoto, Adam K.; Geva, Sharon; Leff, Alex P.; Yousry, Tarek A.; Seghier, Mohamed L.

    2013-01-01

    Brain tumors can have different shapes or locations, making their identification very challenging. In functional MRI, it is not unusual that patients have only one anatomical image due to time and financial constraints. Here, we provide a modified automatic lesion identification (ALI) procedure which enables brain tumor identification from single MR images. Our method rests on (A) a modified segmentation-normalization procedure with an explicit “extra prior” for the tumor and (B) an outlier detection procedure for abnormal voxel (i.e., tumor) classification. To minimize tissue misclassification, the segmentation-normalization procedure requires prior information of the tumor location and extent. We therefore propose that ALI is run iteratively so that the output of Step B is used as a patient-specific prior in Step A. We test this procedure on real T1-weighted images from 18 patients, and the results were validated in comparison to two independent observers' manual tracings. The automated procedure identified the tumors successfully with an excellent agreement with the manual segmentation (area under the ROC curve = 0.97 ± 0.03). The proposed procedure increases the flexibility and robustness of the ALI tool and will be particularly useful for lesion-behavior mapping studies, or when lesion identification and/or spatial normalization are problematic. PMID:24381535
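The outlier-detection step for abnormal voxel classification can be sketched as a simple z-score test against a reference distribution of healthy tissue. The real ALI procedure is considerably more sophisticated (it iterates with the segmentation-normalization step); the function name and threshold below are illustrative assumptions.

```python
import numpy as np

def abnormal_voxels(image, reference_mean, reference_std, z_thresh=3.0):
    """Flag voxels deviating from a healthy reference distribution by
    more than `z_thresh` standard deviations."""
    z = (np.asarray(image, dtype=float) - reference_mean) / reference_std
    return np.abs(z) > z_thresh
```

In the iterative scheme described above, the mask returned here would feed back into the segmentation step as a patient-specific "extra prior" for the tumor class.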

  1. General Staining and Segmentation Procedures for High Content Imaging and Analysis.

    PubMed

    Chambers, Kevin M; Mandavilli, Bhaskar S; Dolman, Nick J; Janes, Michael S

    2018-01-01

    Automated quantitative fluorescence microscopy, also known as high content imaging (HCI), is a rapidly growing analytical approach in cell biology. Because automated image analysis relies heavily on robust demarcation of cells and subcellular regions, reliable methods for labeling cells are a critical component of the HCI workflow. Labeling of cells for image segmentation is typically performed with fluorescent probes that bind DNA for nuclear-based cell demarcation, or with those that react with proteins for image analysis based on whole-cell staining. These reagents, along with instrument and software settings, play an important role in the successful segmentation of cells in a population for automated and quantitative image analysis. In this chapter, we describe standard procedures for labeling and image segmentation in both live and fixed cell samples. The chapter also provides troubleshooting guidelines for some of the common problems associated with these aspects of HCI.

  2. New Software for Market Segmentation Analysis: A Chi-Square Interaction Detector. AIR 1983 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Lay, Robert S.

    The advantages and disadvantages of new software for market segmentation analysis are discussed, and the application of this new chi-square-based procedure (CHAID) is illustrated. A comparison is presented of an earlier binary segmentation technique (THAID) and a multiple discriminant analysis. It is suggested that CHAID is superior to earlier…

  3. Numerical Simulations for Distribution Characteristics of Internal Forces on Segments of Tunnel Linings

    NASA Astrophysics Data System (ADS)

    Li, Shouju; Shangguan, Zichang; Cao, Lijuan

    A procedure based on FEM is proposed to simulate the interaction between the concrete segments of tunnel linings and the surrounding soils. The beam element BEAM3 in the ANSYS software was used to model the segments. The ground loss induced by the shield tunneling and segment installation processes is simulated in the finite element analysis. The distributions of bending moment, axial force, and shear force on the segments were computed by FEM. The computed internal forces on the segments will be used to design the reinforcement bars of the shield linings. Numerically simulated ground settlements agree with observed values.

  4. Segmenting hospitals for improved management strategy.

    PubMed

    Malhotra, N K

    1989-09-01

    The author presents a conceptual framework for the a priori and clustering-based approaches to segmentation and evaluates them in the context of segmenting institutional health care markets. An empirical study is reported in which the hospital market is segmented on three state-of-being variables. The segmentation approach also takes into account important organizational decision-making variables. The sophisticated Thurstone Case V procedure is employed. Several marketing implications for hospitals, other health care organizations, hospital suppliers, and donor publics are identified.

  5. Automated 3D renal segmentation based on image partitioning

    NASA Astrophysics Data System (ADS)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context, and still demands a great deal of user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and the true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
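
The volume-overlap measures cited above are straightforward to compute. A minimal sketch on binary masks (the toy masks are illustrative, not from the paper):

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists)."""
    inter = sum(x & y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

def jaccard(a, b):
    """Jaccard coefficient (intersection over union) for the same masks."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union

seg  = [1, 1, 1, 0, 0, 0]   # automated segmentation
gold = [0, 1, 1, 1, 0, 0]   # hand-segmented gold standard
print(round(dice(seg, gold), 3), round(jaccard(seg, gold), 3))  # 0.667 0.5
```

In practice the masks would be flattened 3D volumes; the formulas are unchanged.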

  6. Functionally interpretable local coordinate systems for the upper extremity using inertial & magnetic measurement systems.

    PubMed

    de Vries, W H K; Veeger, H E J; Cutti, A G; Baten, C; van der Helm, F C T

    2010-07-20

    Inertial Magnetic Measurement Systems (IMMS) are becoming increasingly popular because they allow measurements outside the motion laboratory. The latest models enable long-term, accurate measurement of segment motion in terms of joint angles, provided initial segment orientations can be determined accurately. The standard procedure for defining segmental orientation is based on the measurement of the positions of bony landmarks (BLM). However, IMMS do not deliver position information, so an alternative method to establish IMMS-based, anatomically understandable segment orientations is proposed. For five subjects, IMMS recordings were collected in a standard anatomical position for the definition of static axes, and during a series of standardized motions for the estimation of kinematic axes of rotation. For all axes, the intra- and inter-individual dispersion was estimated. Subsequently, local coordinate systems (LCS) were constructed from the combination of IMMS axes with the lowest dispersion and compared with BLM-based LCS. The repeatability of the method appeared to be high; for every segment at least two axes could be determined with a dispersion of at most 3.8 degrees. Comparison of IMMS-based with BLM-based LCS yielded compatible results for the thorax, but less compatible results for the humerus, forearm and hand, where differences in orientation rose to 17.2 degrees. Although different from the 'gold standard' BLM-based LCS, IMMS-based LCS can be constructed repeatably, enabling the estimation of segment orientations outside the laboratory. A procedure for the definition of local reference frames using IMMS is proposed. 2010 Elsevier Ltd. All rights reserved.

  7. Segmentation of medical images using explicit anatomical knowledge

    NASA Astrophysics Data System (ADS)

    Wilson, Laurie S.; Brown, Stephen; Brown, Matthew S.; Young, Jeanne; Li, Rongxin; Luo, Suhuai; Brandt, Lee

    1999-07-01

    Knowledge-based image segmentation is defined in terms of the separation of image analysis procedures from the representation of knowledge. Such an architecture is particularly suitable for medical image segmentation because of the large amount of structured domain knowledge. A general methodology for the application of knowledge-based methods to medical image segmentation is described. This includes frames for knowledge representation, fuzzy logic for anatomical variations, and a strategy for determining the order of segmentation from the model specification. The method has been applied to three separate problems: 3D thoracic CT, chest X-rays and CT angiography. The application of the same methodology to such a range of applications suggests a major role in medical imaging for segmentation methods incorporating a representation of anatomical knowledge.

  8. Statistical segmentation of multidimensional brain datasets

    NASA Astrophysics Data System (ADS)

    Desco, Manuel; Gispert, Juan D.; Reig, Santiago; Santos, Andres; Pascau, Javier; Malpica, Norberto; Garcia-Barreno, Pedro

    2001-07-01

    This paper presents an automatic segmentation procedure for MRI neuroimages that overcomes some of the problems of multidimensional clustering techniques, such as partial volume effects (PVE), processing speed, and the difficulty of incorporating a priori knowledge. The method is a three-stage procedure: 1) exclusion of background and skull voxels using threshold-based region-growing techniques with fully automated seed selection; 2) Expectation-Maximization algorithms are used to estimate the probability density function (PDF) of the remaining voxels, which are assumed to be mixtures of Gaussians; these voxels can then be classified into cerebrospinal fluid (CSF), white matter and grey matter. Using this procedure, our method takes advantage of the full covariance matrix (instead of the diagonal) for the joint PDF estimation; in addition, logistic discrimination techniques are more robust against violation of multi-Gaussian assumptions. 3) A priori knowledge is added using Markov Random Field techniques. The algorithm has been tested on a dataset of 30 brain MRI studies (co-registered T1 and T2 MRI). Our method was compared with clustering techniques and with template-based statistical segmentation, using manual segmentation as a gold standard. Our results were more robust and closer to the gold standard.
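
The Expectation-Maximization stage of such a pipeline can be sketched in one dimension. This is a generic two-component Gaussian mixture fit on invented toy data, not the authors' full-covariance multidimensional implementation:

```python
import math
import random

def em_two_gaussians(xs, iters=50):
    """Fit a two-component 1D Gaussian mixture to xs with EM."""
    mu = [min(xs), max(xs)]        # crude initialisation at the data extremes
    var = [1.0, 1.0]
    w = [0.5, 0.5]                 # mixture weights
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means, variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            var[k] = max(var[k], 1e-6)   # guard against variance collapse
    return mu, var, w

random.seed(0)
data = [random.gauss(0, 1) for _ in range(300)] + [random.gauss(5, 1) for _ in range(300)]
mu, var, w = em_two_gaussians(data)
print(sorted(mu))  # the two component means, near 0 and 5
```

Each voxel would then be assigned to the component with the highest responsibility; the paper's method additionally regularizes these assignments with a Markov Random Field.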

  9. A completely automated processing pipeline for lung and lung lobe segmentation and its application to the LIDC-IDRI data base

    NASA Astrophysics Data System (ADS)

    Blaffert, Thomas; Wiemker, Rafael; Barschdorf, Hans; Kabus, Sven; Klinder, Tobias; Lorenz, Cristian; Schadewaldt, Nicole; Dharaiya, Ekta

    2010-03-01

    Automated segmentation of lung lobes in thoracic CT images is relevant for various diagnostic purposes, such as localization of tumors within the lung or quantification of emphysema. Since emphysema is a known risk factor for lung cancer, the two purposes are even related to each other. The main steps of the segmentation pipeline described in this paper are the lung detector, the lung segmentation based on a watershed algorithm, and the lung lobe segmentation based on mesh model adaptation. The segmentation procedure was applied to data sets from the Image Database Resource Initiative (IDRI) database, which currently contains over 500 thoracic CT scans with delineated lung nodule annotations. We visually assessed the reliability of the individual segmentation steps, with a success rate of 98% for the lung detection and 90% for lung delineation. For about 20% of the cases we found the lobe segmentation not to be anatomically plausible. A modeling confidence measure is introduced that gives a quantitative indication of the segmentation quality. As a demonstration of the segmentation method, we studied the correlation between emphysema score and malignancy on a per-lobe basis.

  10. A robust hidden Markov Gauss mixture vector quantizer for a noisy source.

    PubMed

    Pyun, Kyungsuk Peter; Lim, Johan; Gray, Robert M

    2009-07-01

    Noise is ubiquitous in real life and changes image acquisition, communication, and processing characteristics in an uncontrolled manner. Gaussian noise and salt-and-pepper noise, in particular, are prevalent in noisy communication channels, camera and scanner sensors, and medical MRI images. It is not unusual for highly sophisticated image processing algorithms developed for clean images to malfunction when used on noisy images. For example, hidden Markov Gauss mixture models (HMGMM) have been shown to perform well in image segmentation applications, but they are quite sensitive to image noise. We propose a modified HMGMM procedure specifically designed to improve performance in the presence of noise. The key feature of the proposed procedure is the adjustment of covariance matrices in Gauss mixture vector quantizer codebooks to minimize an overall minimum discrimination information (MDI) distortion. In adjusting covariance matrices, we expand or shrink their elements based on the noisy image. While most results reported in the literature assume a particular noise type, we propose a framework that does not assume particular noise characteristics. Without denoising the corrupted source, we apply our method directly to the segmentation of noisy sources. We apply the proposed procedure to the segmentation of aerial images with salt-and-pepper noise and with independent Gaussian noise, and we compare our results with those of the median filter restoration method and the blind deconvolution-based method, respectively. We show that our procedure outperforms image restoration-based techniques and closely matches the performance of HMGMM for clean images in terms of both visual segmentation results and error rate.

  11. Vertical stratification of forest canopy for segmentation of understory trees within small-footprint airborne LiDAR point clouds

    NASA Astrophysics Data System (ADS)

    Hamraz, Hamid; Contreras, Marco A.; Zhang, Jun

    2017-08-01

    An airborne LiDAR point cloud representing a forest contains 3D data from which the vertical stand structure, even of understory layers, can be derived. This paper presents a tree segmentation approach for multi-story stands that stratifies the point cloud into canopy layers and segments individual tree crowns within each layer using a digital-surface-model-based tree segmentation method. The novelty of the approach is the stratification procedure, which separates the point cloud into an overstory and multiple understory tree canopy layers by analyzing vertical distributions of LiDAR points within overlapping locales. The procedure makes no a priori assumptions about the shape and size of the tree crowns and can, independent of the tree segmentation method, be utilized to vertically stratify tree crowns of forest canopies. We applied the proposed approach to the University of Kentucky Robinson Forest, a natural deciduous forest with complex and highly variable terrain and vegetation structure. The segmentation results showed that the stratification procedure strongly improved the detection of understory trees (from 46% to 68%) at the cost of introducing a fair number of over-segmented understory trees (increased from 1% to 16%), while barely affecting the overall segmentation quality of overstory trees. Results of the vertical stratification of the canopy showed that the point density of understory canopy layers was suboptimal for performing a reasonable tree segmentation, suggesting that acquiring denser LiDAR point clouds would allow further improvements in segmenting understory trees. As shown by inspecting correlations of the results with forest structure, the segmentation approach is applicable to a variety of forest types.
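
As a rough sketch of the stratification idea (a simplification, not the authors' locale-based algorithm), a vertical histogram of point heights can be cut wherever a sufficiently long run of empty bins separates canopy layers:

```python
def stratify(heights, bin_size=1.0, gap_bins=2):
    """Split point heights into canopy layers wherever the vertical
    histogram contains a run of at least `gap_bins` empty bins."""
    lo = min(heights)
    nbins = int((max(heights) - lo) / bin_size) + 1
    hist = [0] * nbins
    for h in heights:
        hist[min(int((h - lo) / bin_size), nbins - 1)] += 1
    cuts, empty = [], 0
    for i, count in enumerate(hist):
        if count == 0:
            empty += 1
        else:
            if empty >= gap_bins:
                cuts.append(lo + (i - empty / 2) * bin_size)  # mid-gap height
            empty = 0
    layers = [[] for _ in range(len(cuts) + 1)]
    for h in heights:
        layers[sum(h >= c for c in cuts)].append(h)
    return layers

# toy heights (metres): an understory layer and an overstory layer
understory = [1.0, 1.5, 2.0, 2.2]
overstory = [12.0, 13.5, 14.0, 15.2]
print([len(layer) for layer in stratify(understory + overstory)])  # [4, 4]
```

Tree segmentation would then run separately on the points of each returned layer.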

  12. Spine segmentation from C-arm CT data sets: application to region-of-interest volumes for spinal interventions

    NASA Astrophysics Data System (ADS)

    Buerger, C.; Lorenz, C.; Babic, D.; Hoppenbrouwers, J.; Homan, R.; Nachabe, R.; Racadio, J. M.; Grass, M.

    2017-03-01

    Spinal fusion is a common procedure to stabilize the spinal column by fixating parts of the spine. In such procedures, metal screws are inserted through the patient's back into a vertebra, and the screws of adjacent vertebrae are connected by metal rods to form a fixed bridge. These procedures require 3D image guidance for intervention planning and outcome control. Here, for anatomical guidance, an automated approach for vertebra segmentation from C-arm CT images of the spine is introduced and evaluated. As a prerequisite, 3D C-arm CT images are acquired covering the vertebrae of interest. An automatic model-based segmentation approach is applied to delineate the outline of the vertebrae of interest. The segmentation approach is based on 24 partial models of the cervical, thoracic and lumbar vertebrae, which aggregate information about (i) the basic shape itself, (ii) trained features for image-based adaptation, and (iii) potential shape variations. Since the volume data sets generated by the C-arm system are limited to a certain region of the spine, the target vertebra, and hence the initial model position, is assigned interactively. The approach was trained and tested on 21 human cadaver scans. A 3-fold cross validation against ground truth annotations yields overall mean segmentation errors of 0.5 mm for T1 to 1.1 mm for C6. The results are promising and show potential to support the clinician in pedicle screw path and rod planning, allowing accurate and reproducible insertions.

  13. Line fiducial material and thickness considerations for ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Ameri, Golafsoun; McLeod, A. J.; Baxter, John S. H.; Chen, Elvis C. S.; Peters, Terry M.

    2015-03-01

    Ultrasound calibration is a necessary procedure in many image-guided interventions, relating the position of tools and anatomical structures in the ultrasound image to a common coordinate system. This is a necessary component of augmented reality environments in image-guided interventions as it allows for a 3D visualization where other surgical tools outside the imaging plane can be found. Accuracy of ultrasound calibration fundamentally affects the total accuracy of this interventional guidance system. Many ultrasound calibration procedures have been proposed based on a variety of phantom materials and geometries. These differences lead to differences in representation of the phantom on the ultrasound image which subsequently affect the ability to accurately and automatically segment the phantom. For example, taut wires are commonly used as line fiducials in ultrasound calibration. However, at large depths or oblique angles, the fiducials appear blurred and smeared in ultrasound images making it hard to localize their cross-section with the ultrasound image plane. Intuitively, larger diameter phantoms with lower echogenicity are more accurately segmented in ultrasound images in comparison to highly reflective thin phantoms. In this work, an evaluation of a variety of calibration phantoms with different geometrical and material properties for the phantomless calibration procedure was performed. The phantoms used in this study include braided wire, plastic straws, and polyvinyl alcohol cryogel tubes with different diameters. Conventional B-mode and synthetic aperture images of the phantoms at different positions were obtained. The phantoms were automatically segmented from the ultrasound images using an ellipse fitting algorithm, the centroid of which is subsequently used as a fiducial for calibration. Calibration accuracy was evaluated for these procedures based on the leave-one-out target registration error. It was shown that larger diameter phantoms with lower echogenicity are more accurately segmented in comparison to highly reflective thin phantoms. This improvement in segmentation accuracy leads to a lower fiducial localization error, which ultimately results in low target registration error. This would have a profound effect on calibration procedures and the feasibility of different calibration procedures in the context of image-guided procedures.

  14. Multifractal-based nuclei segmentation in fish images.

    PubMed

    Reljin, Nikola; Slavkovic-Ilic, Marijeta; Tapia, Coya; Cihoric, Nikola; Stankovic, Srdjan

    2017-09-01

    A method for nuclei segmentation in fluorescence in-situ hybridization (FISH) images, based on inverse multifractal analysis (IMFA), is proposed. From the blue channel of the FISH image in RGB format, the matrix of Hölder exponents, in one-to-one correspondence with the image pixels, is determined first. The following semi-automatic procedure is proposed: initial nuclei segmentation is performed automatically from the matrix of Hölder exponents by applying a predefined hard threshold; the user then evaluates the result and can refine the segmentation by changing the threshold, if necessary. After successful nuclei segmentation, the HER2 (human epidermal growth factor receptor 2) score can be determined in the usual way: by counting red and green dots within the segmented nuclei and finding their ratio. The IMFA segmentation method was tested on 100 clinical cases, evaluated by a skilled pathologist. The test results show that the new method has advantages over previously reported methods.

  15. Automatic segmentation of colon glands using object-graphs.

    PubMed

    Gunduz-Demir, Cigdem; Kandemir, Melih; Tosun, Akif Burak; Sokmensuer, Cenk

    2010-02-01

    Gland segmentation is an important step in automating the analysis of biopsies that contain glandular structures. However, this remains a challenging problem, as variation in staining, fixation, and sectioning procedures leads to a considerable number of artifacts and variances in tissue sections, which may result in large variations in gland appearance. In this work, we report a new approach for gland segmentation. This approach decomposes the tissue image into a set of primitive objects and segments glands making use of the organizational properties of these objects, which are quantified through the definition of object-graphs. As opposed to the previous literature, the proposed approach employs object-based information for the gland segmentation problem, instead of using pixel-based information alone. Working with images of colon tissues, our experiments demonstrate that the proposed object-graph approach yields high segmentation accuracies for the training and test sets and significantly improves the segmentation performance of its pixel-based counterparts. The experiments also show that the object-based structure of the proposed approach provides more tolerance to artifacts and variances in tissues.

  16. Performing label-fusion-based segmentation using multiple automatically generated templates.

    PubMed

    Chakravarty, M Mallar; Steadman, Patrick; van Eede, Matthijs C; Calcott, Rebecca D; Gu, Victoria; Shaw, Philip; Raznahan, Armin; Collins, D Louis; Lerch, Jason P

    2013-10-01

    Classically, model-based segmentation procedures match magnetic resonance imaging (MRI) volumes to an expertly labeled atlas using nonlinear registration. The accuracy of these techniques is limited by atlas biases, misregistration, and resampling error. Multi-atlas-based approaches are used as a remedy and involve matching each subject to a number of manually labeled templates. This approach yields numerous independent segmentations that are fused using a voxel-by-voxel label-voting procedure. In this article, we demonstrate how the multi-atlas approach can be extended to work with input atlases that are unique and extremely time consuming to construct, by generating a library of multiple automatically generated templates of different brains (MAGeT Brain). We demonstrate the efficacy of our method for the mouse and human using two different nonlinear registration algorithms (ANIMAL and ANTs). The input atlases consist of a high-resolution mouse brain atlas and an atlas of the human basal ganglia and thalamus derived from serial histological data. MAGeT Brain segmentation improves the identification of the mouse anterior commissure (mean Dice kappa value κ = 0.801), but may be encountering a ceiling effect for hippocampal segmentations. Applying MAGeT Brain to human subcortical structures improves segmentation accuracy for all structures compared to regular model-based techniques (κ = 0.845, 0.752, and 0.861 for the striatum, globus pallidus, and thalamus, respectively). Experiments performed with three manually derived input templates suggest that MAGeT Brain can approach or exceed the accuracy of multi-atlas label-fusion segmentation (κ = 0.894, 0.815, and 0.895 for the striatum, globus pallidus, and thalamus, respectively). Copyright © 2012 Wiley Periodicals, Inc.
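
The voxel-by-voxel label-voting step at the heart of label fusion is simple to sketch (toy labels below are illustrative, not from the paper):

```python
from collections import Counter

def fuse_labels(segmentations):
    """Voxel-wise majority vote over several candidate segmentations.

    `segmentations` is a list of equally sized label lists, one per
    template-derived candidate segmentation."""
    fused = []
    for voxel_labels in zip(*segmentations):
        fused.append(Counter(voxel_labels).most_common(1)[0][0])
    return fused

# three candidate segmentations of a 6-voxel "image" (labels 0, 1, 2)
s1 = [0, 1, 1, 2, 2, 0]
s2 = [0, 1, 2, 2, 2, 0]
s3 = [0, 0, 1, 2, 1, 0]
print(fuse_labels([s1, s2, s3]))  # [0, 1, 1, 2, 2, 0]
```

MAGeT Brain generates its candidate segmentations from automatically created templates rather than from many manually labeled atlases, but the fusion step is the same majority vote.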

  17. Integrating shape into an interactive segmentation framework

    NASA Astrophysics Data System (ADS)

    Kamalakannan, S.; Bryant, B.; Sari-Sarraf, H.; Long, R.; Antani, S.; Thoma, G.

    2013-02-01

    This paper presents a novel interactive annotation toolbox which extends a well-known user-steered segmentation framework, namely Intelligent Scissors (IS). IS, posed as a shortest path problem, is essentially driven by lower level image based features. All the higher level knowledge about the problem domain is obtained from the user through mouse clicks. The proposed work integrates one higher level feature, namely shape up to a rigid transform, into the IS framework, thus reducing the burden on the user and the subjectivity involved in the annotation procedure, especially during instances of occlusions, broken edges, noise and spurious boundaries. The above mentioned scenarios are commonplace in medical image annotation applications and, hence, such a tool will be of immense help to the medical community. As a first step, an offline training procedure is performed in which a mean shape and the corresponding shape variance is computed by registering training shapes up to a rigid transform in a level-set framework. The user starts the interactive segmentation procedure by providing a training segment, which is a part of the target boundary. A partial shape matching scheme based on a scale-invariant curvature signature is employed in order to extract shape correspondences and subsequently predict the shape of the unsegmented target boundary. A `zone of confidence' is generated for the predicted boundary to accommodate shape variations. The method is evaluated on segmentation of digital chest x-ray images for lung annotation which is a crucial step in developing algorithms for screening Tuberculosis.

  18. Automatic aortic root segmentation in CTA whole-body dataset

    NASA Astrophysics Data System (ADS)

    Gao, Xinpei; Kitslaar, Pieter H.; Scholte, Arthur J. H. A.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke; Reiber, Johan H. C.

    2016-03-01

    Trans-catheter aortic valve replacement (TAVR) is an evolving technique for patients with serious aortic stenosis disease. Typically, in this application a CTA data set is obtained of the patient's arterial system from the subclavian artery to the femoral arteries, to evaluate the quality of the vascular access route and analyze the aortic root to determine if and which prosthesis should be used. In this paper, we concentrate on the automated segmentation of the aortic root. The purpose of this study was to automatically segment the aortic root in computed tomography angiography (CTA) datasets to support TAVR procedures. The method in this study includes 4 major steps. First, the patient's cardiac CTA image was resampled to reduce the computation time. Next, the cardiac CTA image was segmented using an atlas-based approach. The most similar atlas was selected from a total of 8 atlases based on its image similarity to the input CTA image. Third, the aortic root segmentation from the previous step was transferred to the patient's whole-body CTA image by affine registration and refined in the fourth step using a deformable subdivision surface model fitting procedure based on image intensity. The pipeline was applied to 20 patients. The ground truth was created by an analyst who semi-automatically corrected the contours of the automatic method, where necessary. The average Dice similarity index between the segmentations of the automatic method and the ground truth was found to be 0.965±0.024. In conclusion, the current results are very promising.

  19. An analog scrambler for speech based on sequential permutations in time and frequency

    NASA Astrophysics Data System (ADS)

    Cox, R. V.; Jayant, N. S.; McDermott, B. J.

    Permutation of speech segments is an operation frequently used in the design of scramblers for analog speech privacy. In this paper, a sequential procedure for segment permutation is considered. This procedure can be extended to two-dimensional permutation of time segments and frequency bands. Subjective testing shows that this combination gives a residual intelligibility for spoken digits of 20 percent with a delay of 256 ms. (A lower bound for this test would be 10 percent.) The complexity of implementing such a system is considered, and the issues of synchronization and channel equalization are addressed. Computer simulation results for the system using both real and simulated channels are examined.
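
The basic time-segment permutation can be sketched as follows. This is a generic keyed shuffle (assuming the signal length is a multiple of the segment length), not the paper's sequential two-dimensional procedure:

```python
import random

def scramble(samples, seg_len, key):
    """Split a signal into fixed-length segments and permute them with a
    keyed RNG; returns the scrambled signal and the permutation used."""
    segs = [samples[i:i + seg_len] for i in range(0, len(samples), seg_len)]
    order = list(range(len(segs)))
    random.Random(key).shuffle(order)
    return [s for i in order for s in segs[i]], order

def descramble(scrambled, seg_len, order):
    """Invert the permutation to recover the original signal."""
    segs = [scrambled[i:i + seg_len] for i in range(0, len(scrambled), seg_len)]
    out = [None] * len(segs)
    for pos, src in enumerate(order):
        out[src] = segs[pos]
    return [s for seg in out for s in seg]

signal = list(range(12))          # stand-in for speech samples
enc, order = scramble(signal, 3, key=42)
print(descramble(enc, 3, order) == signal)  # True
```

A full scrambler would permute frequency bands as well and share the key, rather than the permutation itself, between the endpoints.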

  20. Jansen-MIDAS: A multi-level photomicrograph segmentation software based on isotropic undecimated wavelets.

    PubMed

    de Siqueira, Alexandre Fioravante; Cabrera, Flávio Camargo; Nakasuga, Wagner Massayuki; Pagamisse, Aylton; Job, Aldo Eloizo

    2018-01-01

    Image segmentation, the process of separating the elements within a picture, is frequently used for obtaining information from photomicrographs. Segmentation methods should be used with reservations, since incorrect results can mislead when interpreting regions of interest (ROI), decreasing the success rate of subsequent procedures. Multi-Level Starlet Segmentation (MLSS) and Multi-Level Starlet Optimal Segmentation (MLSOS) were developed as an alternative to general segmentation tools. These methods gave rise to Jansen-MIDAS, an open-source software package that scientists can use to obtain several segmentations of their photomicrographs. It is a reliable alternative for processing different types of photomicrographs: previous versions of Jansen-MIDAS were used to segment ROI in photomicrographs of two different materials with an accuracy superior to 89%. © 2017 Wiley Periodicals, Inc.

  1. Multi-spectral brain tissue segmentation using automatically trained k-Nearest-Neighbor classification.

    PubMed

    Vrooman, Henri A; Cocosco, Chris A; van der Lijn, Fedde; Stokking, Rik; Ikram, M Arfan; Vernooij, Meike W; Breteler, Monique M B; Niessen, Wiro J

    2007-08-01

    Conventional k-Nearest-Neighbor (kNN) classification, which has been successfully applied to classify brain tissue in MR data, requires training on manually labeled subjects. This manual labeling is a laborious and time-consuming procedure. In this work, a new fully automated brain tissue classification procedure is presented, in which kNN training is automated. This is achieved by non-rigidly registering the MR data with a tissue probability atlas to automatically select training samples, followed by a post-processing step to keep the most reliable samples. The accuracy of the new method was compared to rigid registration-based training and to conventional kNN-based segmentation using training on manually labeled subjects for segmenting gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) in 12 data sets. Furthermore, for all classification methods, the performance was assessed when varying the free parameters. Finally, the robustness of the fully automated procedure was evaluated on 59 subjects. The automated training method using non-rigid registration with a tissue probability atlas was significantly more accurate than rigid registration. For both automated training using non-rigid registration and for the manually trained kNN classifier, the difference with the manual labeling by observers was not significantly larger than inter-observer variability for all tissue types. From the robustness study, it was clear that, given an appropriate brain atlas and optimal parameters, our new fully automated, non-rigid registration-based method gives accurate and robust segmentation results. A similarity index was used for comparison with manually trained kNN. The similarity indices were 0.93, 0.92 and 0.92, for CSF, GM and WM, respectively. It can be concluded that our fully automated method using non-rigid registration may replace manual segmentation, and thus that automated brain tissue segmentation without laborious manual training is feasible.
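
The kNN classification at the core of this pipeline can be sketched in a few lines. This is a generic Euclidean-distance kNN on invented toy intensity features, not the authors' atlas-trained implementation:

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority label among its k nearest training samples.

    `train` is a list of (feature_vector, label) pairs; squared Euclidean
    distance is used for ranking (monotone with true distance)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda t: dist(t[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# toy two-feature "intensity" samples for two tissue classes
train = [((0.10, 0.20), "GM"), ((0.20, 0.10), "GM"), ((0.15, 0.15), "GM"),
         ((0.80, 0.90), "WM"), ((0.90, 0.80), "WM"), ((0.85, 0.85), "WM")]
print(knn_classify(train, (0.2, 0.2)))  # GM
print(knn_classify(train, (0.9, 0.9)))  # WM
```

In the paper, the training pairs are selected automatically by registering a tissue probability atlas to the subject, rather than by manual labeling.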

  2. [Target volume segmentation of PET images by an iterative method based on threshold value].

    PubMed

    Castro, P; Huerga, C; Glaría, L A; Plaza, R; Rodado, S; Marín, M D; Mañas, A; Serrada, A; Núñez, L

    2014-01-01

    An automatic segmentation method is presented for PET images, based on an iterative threshold-based approximation that includes the influence of both lesion size and the background present during acquisition. Optimal threshold values representing a correct segmentation of volumes were determined in a PET phantom study that contained spheres of different sizes in different known background environments. These optimal values were normalized to the background and fitted, by regression techniques, to a function of two variables: lesion volume and signal-to-background ratio (SBR). This fitted function was used to build an iterative segmentation method and, based on it, a procedure for automatic delineation was proposed. The procedure was validated on phantom images and its viability confirmed by applying it retrospectively to two oncology patients. The resulting fitted function depended linearly on the SBR and inversely and negatively on the volume. During validation of the proposed method, the volume deviations with respect to the real value and the CT volume were below 10% and 9%, respectively, except for lesions with a volume below 0.6 ml. The proposed automatic segmentation method can be applied in clinical practice to tumor radiotherapy treatment planning in a simple and reliable way, with a precision close to the resolution of PET images. Copyright © 2013 Elsevier España, S.L.U. and SEMNIM. All rights reserved.
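
The iterative fixed-point idea can be sketched on a toy 1D "image". The fixed fraction used below is invented for illustration; the paper instead fits its own two-variable function of lesion volume and SBR:

```python
def iterative_threshold(values, background, frac=0.4, iters=20):
    """Fixed-point thresholding sketch: the threshold is a fraction of the
    lesion uptake above background, re-estimated from the current segment.
    The fraction 0.4 is an invented stand-in for the paper's fitted
    volume/SBR-dependent function."""
    thr = background + frac * (max(values) - background)
    lesion = []
    for _ in range(iters):
        lesion = [v for v in values if v >= thr]
        mean_uptake = sum(lesion) / len(lesion)     # mean of current lesion
        new_thr = background + frac * (mean_uptake - background)
        if abs(new_thr - thr) < 1e-6:               # converged
            break
        thr = new_thr
    return thr, lesion

# flat background of 1.0 with a small hot lesion
values = [1.0] * 50 + [2.0] * 10 + [8.0, 9.0, 10.0, 9.0, 8.0]
thr, lesion = iterative_threshold(values, background=1.0)
print(round(thr, 2), len(lesion))  # 4.12 5
```

The iteration converges because including more voxels lowers the lesion mean, which in turn lowers the threshold until the segmented set stops changing.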

  3. Development of a novel 2D color map for interactive segmentation of histological images.

    PubMed

    Chaudry, Qaiser; Sharma, Yachna; Raza, Syed H; Wang, May D

    2012-05-01

    We present a color segmentation approach based on a two-dimensional color map derived from the input image. Pathologists stain tissue biopsies with various colored dyes to see the expression of biomarkers. In these images, because of color variation due to inconsistencies in experimental procedures and lighting conditions, the segmentation used to analyze biological features is usually ad hoc. Many algorithms, like K-means, use a single metric to segment the image into different color classes and rarely provide users with powerful color control. Our interactive 2D color map segmentation technique, based on human color perception and the color distribution of the input image, enables user control without noticeable delay. Our methodology works for different staining types and different types of cancer tissue images. The results show good accuracy with low response and computational time, making the method feasible for user-interactive segmentation of histological images.

  4. H-RANSAC: A Hybrid Point Cloud Segmentation Combining 2D and 3D Data

    NASA Astrophysics Data System (ADS)

    Adam, A.; Chatzilari, E.; Nikolopoulos, S.; Kompatsiaris, I.

    2018-05-01

    In this paper, we present a novel 3D segmentation approach operating on point clouds generated from overlapping images. The aim of the proposed hybrid approach is to effectively segment co-planar objects by leveraging the structural information originating from the 3D point cloud and the visual information from the 2D images, without resorting to learning-based procedures. More specifically, the proposed hybrid approach, H-RANSAC, is an extension of the well-known RANSAC plane-fitting algorithm, incorporating an additional consistency criterion based on the results of 2D segmentation. Our expectation that the integration of 2D data into 3D segmentation will achieve more accurate results is validated experimentally in the domain of 3D city models. Results show that H-RANSAC can successfully delineate building components like main facades and windows, and provide more accurate segmentation results compared to the typical RANSAC plane-fitting algorithm.
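The core of the approach is RANSAC plane fitting with an extra 2D-consistency check. The sketch below implements plain RANSAC and approximates the consistency criterion by requiring inliers to share a majority 2D-segmentation label; this label rule is a simplification of H-RANSAC, not the paper's exact criterion.

```python
import numpy as np

def ransac_plane(points, labels=None, n_iter=200, thresh=0.05, seed=0):
    """Fit a plane to a 3D point cloud by RANSAC.

    `labels` is an optional per-point 2D-segmentation label; when given,
    inliers must additionally share the majority label, a simplified
    stand-in for H-RANSAC's 2D consistency criterion.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                       # degenerate (collinear) sample
            continue
        normal /= norm
        dist = np.abs((points - sample[0]) @ normal)
        inliers = dist < thresh
        if labels is not None and inliers.any():
            majority = np.bincount(labels[inliers]).argmax()
            inliers &= labels == majority
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# toy cloud: 100 points on the z=0 plane (label 0) plus 30 off-plane points (label 1)
rng = np.random.default_rng(1)
plane = np.column_stack([rng.random((100, 2)), np.zeros(100)])
noise = rng.random((30, 3)) + np.array([0.0, 0.0, 1.0])
pts = np.vstack([plane, noise])
lab = np.array([0] * 100 + [1] * 30)
inliers = ransac_plane(pts, lab)
```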

  5. Effects of CT resolution and radiodensity threshold on the CFD evaluation of nasal airflow.

    PubMed

    Quadrio, Maurizio; Pipolo, Carlotta; Corti, Stefano; Messina, Francesco; Pesci, Chiara; Saibene, Alberto M; Zampini, Samuele; Felisati, Giovanni

    2016-03-01

    The article focuses on the robustness of a CFD-based procedure for the quantitative evaluation of nasal airflow. The ability of CFD to yield robust results despite unavoidable procedural and modeling inaccuracies must be demonstrated before this tool can become part of clinical practice in this field. The present article specifically addresses the sensitivity of the CFD procedure to the spatial resolution of the available CT scans, as well as to the choice of the segmentation level of the CT images. We found no critical problems concerning these issues; nevertheless, the choice of the segmentation level is potentially delicate if carried out by an untrained operator.

  6. Optimal reinforcement of training datasets in semi-supervised landmark-based segmentation

    NASA Astrophysics Data System (ADS)

    Ibragimov, Bulat; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2015-03-01

    During the last couple of decades, the development of computerized image segmentation has shifted from unsupervised to supervised methods, which made segmentation results more accurate and robust. However, the main disadvantage of supervised segmentation is the need for manual image annotation, which is time-consuming and subject to human error. To reduce the need for manual annotation, we propose a novel learning approach for training dataset reinforcement in the area of landmark-based segmentation, where newly detected landmarks are optimally combined with reference landmarks from the training dataset, thereby enriching the training process. The approach is formulated as a nonlinear optimization problem whose solution is a vector of weighting factors that measures how reliable the detected landmarks are. Detected landmarks found to be more reliable are included in the training procedure with higher weighting factors, whereas those found to be less reliable are included with lower weighting factors. The approach is integrated into the landmark-based game-theoretic segmentation framework and validated on the problem of lung field segmentation from chest radiographs.

  7. Algorithm for Automatic Segmentation of Nuclear Boundaries in Cancer Cells in Three-Channel Luminescent Images

    NASA Astrophysics Data System (ADS)

    Lisitsa, Y. V.; Yatskou, M. M.; Apanasovich, V. V.; Apanasovich, T. V.

    2015-09-01

    We have developed an algorithm for segmentation of cancer cell nuclei in three-channel luminescent images of microbiological specimens. The algorithm is based on using a correlation between fluorescence signals in the detection channels for object segmentation, which permits complete automation of the data analysis procedure. We have carried out a comparative analysis of the proposed method and conventional algorithms implemented in the CellProfiler and ImageJ software packages. Our algorithm has an object localization uncertainty which is 2-3 times smaller than for the conventional algorithms, with comparable segmentation accuracy.
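The abstract's key idea is to use correlation between fluorescence channels to separate objects from background. A minimal sketch, assuming a local-window Pearson correlation and illustrative window size and threshold (the paper's exact correlation measure is not given):

```python
import numpy as np

def correlated_mask(ch_a, ch_b, win=3, thresh=0.8):
    """Mask pixels whose local windows in two channels correlate strongly.

    A simplified stand-in for using inter-channel signal correlation to
    pick out shared structure; `win` and `thresh` are illustrative choices.
    """
    h, w = ch_a.shape
    r = win // 2
    mask = np.zeros((h, w), dtype=bool)
    for i in range(r, h - r):
        for j in range(r, w - r):
            a = ch_a[i - r:i + r + 1, j - r:j + r + 1].ravel()
            b = ch_b[i - r:i + r + 1, j - r:j + r + 1].ravel()
            if a.std() > 0 and b.std() > 0:
                mask[i, j] = np.corrcoef(a, b)[0, 1] > thresh
    return mask

# toy channels: a shared bright blob (correlated structure) plus independent noise
rng = np.random.default_rng(0)
blob = np.zeros((16, 16))
blob[6:10, 6:10] = 1.0
ch1 = blob + 0.01 * rng.random((16, 16))
ch2 = blob + 0.01 * rng.random((16, 16))
mask = correlated_mask(ch1, ch2)
```

Pixels whose windows straddle the blob boundary correlate strongly across channels; pure-noise windows generally do not.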

  8. An adaptive Fuzzy C-means method utilizing neighboring information for breast tumor segmentation in ultrasound images.

    PubMed

    Feng, Yuan; Dong, Fenglin; Xia, Xiaolong; Hu, Chun-Hong; Fan, Qianmin; Hu, Yanle; Gao, Mingyuan; Mutic, Sasa

    2017-07-01

    Ultrasound (US) imaging has been widely used in breast tumor diagnosis and treatment intervention. Automatic delineation of the tumor is a crucial first step, especially for computer-aided diagnosis (CAD) and US-guided breast procedures. However, intrinsic properties of US images such as low contrast and blurry boundaries pose challenges to automatic segmentation of the breast tumor. The purpose of this study was therefore to propose a segmentation algorithm that can contour the breast tumor in US images. To utilize the neighbor information of each pixel, a Hausdorff-distance-based fuzzy c-means (FCM) method was adopted. The size of the neighbor region was adaptively updated by comparing the mutual information between neighboring regions. The objective function of the clustering process combined the Euclidean distance with the adaptively calculated Hausdorff distance. Segmentation results were evaluated by comparison with three experts' manual segmentations. The results were also compared with a kernel-induced-distance-based FCM with spatial constraints, the method without adaptive region selection, and conventional FCM. Results from segmenting 30 patient images showed that the adaptive method achieved sensitivity, specificity, Jaccard similarity, and Dice coefficient of 93.60 ± 5.33%, 97.83 ± 2.17%, 86.38 ± 5.80%, and 92.58 ± 3.68%, respectively. The region-based metrics of average symmetric surface distance (ASSD), root mean square symmetric distance (RMSD), and maximum symmetric surface distance (MSSD) were 0.03 ± 0.04 mm, 0.04 ± 0.03 mm, and 1.18 ± 1.01 mm, respectively. All metrics except sensitivity were better than those of the non-adaptive algorithm and the conventional FCM; only the three region-based metrics were better than those of the kernel-induced-distance-based FCM with spatial constraints. Adaptively including pixel neighbor information thus improved segmentation performance in US images. The results demonstrate the potential application of the method in breast tumor CAD and other US-guided procedures. © 2017 American Association of Physicists in Medicine.
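For readers unfamiliar with the clustering core that the paper extends, here is plain fuzzy c-means on a 1-D intensity vector. The adaptive Hausdorff neighbor term of the paper is deliberately omitted; this shows only the baseline objective the proposed method builds on.

```python
import numpy as np

def fcm(x, c=2, m=2.0, n_iter=100, tol=1e-6):
    """Plain fuzzy c-means on a 1-D feature vector (baseline FCM only)."""
    x = np.asarray(x, dtype=float)
    centers = np.quantile(x, np.linspace(0.25, 0.75, c))  # spread initial centres
    for _ in range(n_iter):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2 / (m - 1))
        u /= u.sum(axis=0)                  # memberships sum to 1 per pixel
        um = u ** m
        new_centers = (um @ x) / um.sum(axis=1)
        if np.abs(new_centers - centers).max() < tol:
            centers = new_centers
            break
        centers = new_centers
    return centers, u

# toy "image": dark background pixels near 0.1, bright tumour pixels near 0.9
x = np.array([0.1, 0.12, 0.09, 0.11, 0.9, 0.88, 0.91, 0.92])
centers, u = fcm(x)
labels = u.argmax(axis=0)
```

On this toy vector the two centers converge to the cluster means and the hard labels split background from tumour pixels.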

  9. A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung.

    PubMed

    Guo, Shengwen; Fei, Baowei

    2009-03-27

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.

  10. A minimal path searching approach for active shape model (ASM)-based segmentation of the lung

    NASA Astrophysics Data System (ADS)

    Guo, Shengwen; Fei, Baowei

    2009-02-01

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.

  11. A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung

    PubMed Central

    Guo, Shengwen; Fei, Baowei

    2013-01-01

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs. PMID:24386531

  12. A segmentation approach for a delineation of terrestrial ecoregions

    NASA Astrophysics Data System (ADS)

    Nowosad, J.; Stepinski, T.

    2017-12-01

    Terrestrial ecoregions are the result of regionalizing land into homogeneous units of similar ecological and physiographic features. Terrestrial Ecoregions of the World (TEW) is a commonly used global ecoregionalization based on expert knowledge and in situ observations. Ecological Land Units (ELUs) are a global classification of 250 m cells into 4000 types on the basis of the categorical values of four environmental variables. ELUs are automatically calculated and reproducible, but they are not a regionalization, which makes them impractical for GIS-based spatial analysis and for comparison with TEW. We have regionalized terrestrial ecosystems on the basis of patterns of the same variables (land cover, soils, landform, and bioclimate) previously used in ELUs. Considering patterns of categorical variables makes segmentation, and thus regionalization, possible. The original raster datasets of the four variables are first transformed into regular grids of square blocks of cells called eco-sites. Eco-sites are elementary land units containing local patterns of physiographic characteristics and are thus assumed to contain a single ecosystem. Next, eco-sites are locally aggregated using a procedure analogous to image segmentation. The procedure optimizes the pattern homogeneity of all four environmental variables within each segment. The result is a regionalization of the landmass into land units characterized by a uniform pattern of land cover, soils, landforms, and climate, and, by inference, by a uniform ecosystem. Because several disjoint segments may have very similar characteristics, we cluster the segments to obtain a smaller set of segment types, which we identify with ecoregions. Our approach is automatic, reproducible, updatable, and customizable. It yields the first automatic delineation of ecoregions on the global scale. In the resulting vector database each ecoregion/segment is described by numerous attributes, which make it a valuable GIS resource for global ecological and conservation studies.
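The first step of the pipeline, turning a categorical raster into eco-sites described by their local pattern, can be sketched as a per-block category histogram; the subsequent segment-growing and clustering stages are left out, and the block size is illustrative.

```python
import numpy as np

def eco_site_signatures(raster, block=5, n_classes=None):
    """Summarise a categorical raster as per-block category histograms.

    Each `block` x `block` window is one "eco-site" whose signature is its
    local composition of categories, following the abstract's idea of
    comparing local patterns rather than single cells. Aggregation of
    similar neighbouring sites is not reproduced here.
    """
    h, w = raster.shape
    k = n_classes or raster.max() + 1
    hb, wb = h // block, w // block
    sigs = np.zeros((hb, wb, k))
    for i in range(hb):
        for j in range(wb):
            cells = raster[i * block:(i + 1) * block, j * block:(j + 1) * block]
            sigs[i, j] = np.bincount(cells.ravel(), minlength=k) / block**2
    return sigs

# toy land-cover raster: left half class 0, right half class 1
raster = np.zeros((10, 10), dtype=int)
raster[:, 5:] = 1
sigs = eco_site_signatures(raster, block=5, n_classes=2)
```

Segments would then be grown by merging adjacent eco-sites whose signatures are similar.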

  13. Multineuronal vectorization is more efficient than time-segmental vectorization for information extraction from neuronal activities in the inferior temporal cortex.

    PubMed

    Kaneko, Hidekazu; Tamura, Hiroshi; Tate, Shunta; Kawashima, Takahiro; Suzuki, Shinya S; Fujita, Ichiro

    2010-08-01

    In order for patients with disabilities to control assistive devices with their own neural activity, multineuronal spike trains must be efficiently decoded because only limited computational resources can be used to generate prosthetic control signals in portable real-time applications. In this study, we compare the abilities of two vectorizing procedures (multineuronal and time-segmental) to extract information from spike trains during the same total neuron-seconds. In the multineuronal vectorizing procedure, we defined a response vector whose components represented the spike counts of one to five neurons. In the time-segmental vectorizing procedure, a response vector consisted of components representing a neuron's spike counts for one to five time-segment(s) of a response period of 1 s. Spike trains were recorded from neurons in the inferior temporal cortex of monkeys presented with visual stimuli. We examined whether the amount of information of the visual stimuli carried by these neurons differed between the two vectorizing procedures. The amount of information calculated with the multineuronal vectorizing procedure, but not the time-segmental vectorizing procedure, significantly increased with the dimensions of the response vector. We conclude that the multineuronal vectorizing procedure is superior to the time-segmental vectorizing procedure in efficiently extracting information from neuronal signals. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
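The two vectorizing procedures compared in the abstract can be illustrated directly: one response vector built from several neurons' spike counts over the full window, and one built from a single neuron's counts over equal time segments. The toy spike counts and times below are invented for illustration.

```python
import numpy as np

def multineuronal_vector(spike_counts):
    """Response vector: one spike count per neuron over the full window."""
    return np.asarray(spike_counts, dtype=float)

def time_segmental_vector(spike_times, n_segments, period=1.0):
    """Response vector: one neuron's counts in n equal time segments."""
    hist, _ = np.histogram(spike_times, bins=n_segments, range=(0.0, period))
    return hist.astype(float)

# toy data: totals from three neurons, and one neuron's spike times in a 1-s window
v_multi = multineuronal_vector([12, 7, 20])
v_seg = time_segmental_vector([0.05, 0.1, 0.3, 0.55, 0.9], n_segments=5)
```

Both vectors use the same total neuron-seconds; the study's finding is that information grows with dimension only for the multineuronal construction.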

  14. Object Segmentation and Ground Truth in 3D Embryonic Imaging.

    PubMed

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets.
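Step one of the procedure segments nuclei from image derivatives. The sketch below shows only the derivative map that such a method would threshold, computed with central differences; the selective post-processing of the paper is not reproduced.

```python
import numpy as np

def edge_strength(img):
    """Gradient-magnitude map via central differences (np.gradient)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

# toy nucleus: a bright square on a dark background
img = np.zeros((9, 9))
img[3:6, 3:6] = 1.0
g = edge_strength(img)
```

The map is zero in flat regions (including the nucleus interior) and positive on the nucleus boundary, which is what a derivative-based segmenter exploits.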

  15. Object Segmentation and Ground Truth in 3D Embryonic Imaging

    PubMed Central

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C.

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets. PMID:27332860

  16. A segmentation editing framework based on shape change statistics

    NASA Astrophysics Data System (ADS)

    Mostapha, Mahmoud; Vicory, Jared; Styner, Martin; Pizer, Stephen

    2017-02-01

    Segmentation is a key task in medical image analysis because its accuracy significantly affects successive steps. Automatic segmentation methods often produce inadequate segmentations, which the user must then edit manually, slice by slice. Because such editing is time-consuming, an editing tool is needed that enables the user to produce accurate segmentations by drawing only a sparse set of contours. This paper describes such a framework as applied to a single object. Constrained by the additional information provided by the manually drawn contours, the proposed framework utilizes object shape statistics to transform the failed automatic segmentation into a more accurate version. Instead of modeling the object shape, the proposed framework utilizes shape change statistics generated to capture the object deformation from the failed automatic segmentation to its corresponding correct segmentation. An optimization procedure minimizes an energy function consisting of two terms: an external contour-match term and an internal shape-change regularity term. The high accuracy of the proposed segmentation editing approach was confirmed by testing it on a simulated data set based on 10 in-vivo infant magnetic resonance brain data sets using four similarity metrics. Segmentation results indicated that our method can provide efficient and adequately accurate segmentations (a Dice accuracy increase of 10%) from very sparse contours (only 10%), which promises to greatly decrease the work expected from the user.

  17. Development of techniques for producing static strata maps and development of photointerpretive methods based on multitemporal LANDSAT data

    NASA Technical Reports Server (NTRS)

    Colwell, R. N. (Principal Investigator); Hay, C. M.; Thomas, R. W.; Benson, A. S.

    1977-01-01

    Progress in the evaluation of the static stratification procedure and the development of alternative photointerpretive techniques to the present LACIE procedure for the identification of training fields is reported. Statistically significant signature controlling variables were defined for use in refining the stratification procedure. A subset of the 1973-74 Kansas LACIE segments for wheat was analyzed.

  18. Evaluation of a segment-based LANDSAT full-frame approach to crop area estimation

    NASA Technical Reports Server (NTRS)

    Bauer, M. E. (Principal Investigator); Hixson, M. M.; Davis, S. M.

    1981-01-01

    As the registration of LANDSAT full frames enters the realm of current technology, sampling methods should be examined which utilize other than the segment data used for LACIE. The effect of separating the functions of sampling for training and sampling for area estimation was examined. The frame selected for analysis was acquired over north central Iowa on August 9, 1978. A stratification of the full frame was defined. Training data came from segments within the frame. Two classification and estimation procedures were compared: statistics developed on one segment were used to classify that segment, and pooled statistics from the segments were used to classify a systematic sample of pixels. Comparisons to USDA/ESCS estimates illustrate that the full-frame sampling approach can provide accurate and precise area estimates.

  19. Model-based segmentation of hand radiographs

    NASA Astrophysics Data System (ADS)

    Weiler, Frank; Vogelsang, Frank

    1998-06-01

    An important procedure in pediatrics is to determine the skeletal maturity of a patient from radiographs of the hand. There is great interest in the automation of this tedious and time-consuming task. We present a new method for the segmentation of the bones of the hand, which allows the assessment of the skeletal maturity with an appropriate database of reference bones, similar to the atlas based methods. The proposed algorithm uses an extended active contour model for the segmentation of the hand bones, which incorporates a-priori knowledge of shape and topology of the bones in an additional energy term. This `scene knowledge' is integrated in a complex hierarchical image model, that is used for the image analysis task.

  20. A continuous analog of run length distributions reflecting accumulated fractionation events.

    PubMed

    Yu, Zhe; Sankoff, David

    2016-11-11

    We propose a new, continuous model of the fractionation process (duplicate gene deletion after polyploidization) on the real line. The aim is to infer how much DNA is deleted at a time, based on segment lengths for alternating deleted (invisible) and undeleted (visible) regions. After deriving a number of analytical results for "one-sided" fractionation, we undertake a series of simulations that help us identify the distribution of segment lengths as a gamma with shape and rate parameters evolving over time. This leads to an inference procedure based on observed length distributions for visible and invisible segments. We suggest extensions of this mathematical and simulation work to biologically realistic discrete models, including two-sided fractionation.
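The inference procedure fits a gamma distribution to observed segment lengths. As a sketch, the basic method-of-moments estimator (shape = m²/v, rate = m/v) is shown below; the paper's actual estimator and its time-evolving parameters are not specified in the abstract, so this is only the simplest fitting step one could apply to a length sample.

```python
import numpy as np

def gamma_moments_fit(lengths):
    """Method-of-moments estimate of gamma shape and rate from lengths."""
    x = np.asarray(lengths, dtype=float)
    m, v = x.mean(), x.var()
    shape = m * m / v        # shape k = mean^2 / variance
    rate = m / v             # rate  b = mean / variance
    return shape, rate

# sanity check: lengths drawn from a known gamma(shape=2, rate=0.5)
rng = np.random.default_rng(0)
sample = rng.gamma(shape=2.0, scale=2.0, size=50000)   # scale = 1/rate
shape_hat, rate_hat = gamma_moments_fit(sample)
```

With a large sample the estimates recover the generating parameters closely.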

  1. BlobContours: adapting Blobworld for supervised color- and texture-based image segmentation

    NASA Astrophysics Data System (ADS)

    Vogel, Thomas; Nguyen, Dinh Quyen; Dittmann, Jana

    2006-01-01

    Extracting features is the first and one of the most crucial steps in the image retrieval process. While the color and texture features of digital images can be extracted rather easily, the shape and layout features depend on reliable image segmentation. Unsupervised image segmentation, often used in image analysis, works on a merely syntactical basis; that is, an unsupervised segmentation algorithm can segment only regions, not objects. To obtain high-level objects, which is desirable in image retrieval, human assistance is needed. Supervised image segmentation schemes can improve the reliability of segmentation and segmentation refinement. In this paper we propose a novel interactive image segmentation technique that combines the reliability of a human expert with the precision of automated image segmentation. The iterative procedure can be considered a variation on the Blobworld algorithm introduced by Carson et al. of the EECS Department, University of California, Berkeley. Starting with an initial segmentation as provided by the Blobworld framework, our algorithm, BlobContours, gradually updates it by recalculating every blob based on the original features and the updated number of Gaussians. Since the original algorithm was hardly designed for interactive processing, we had to consider additional requirements for realizing a supervised segmentation scheme on the basis of Blobworld. Increasing the transparency of the algorithm through user-controlled iterative segmentation, providing different types of visualization for the segmented image, and decreasing the computational time of segmentation are three major requirements discussed in detail.
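Blobworld-style segmentation fits a Gaussian mixture to per-pixel features by EM. The sketch below runs EM on a 1-D feature, a stand-in for Blobworld's joint color/texture/position features; the interactive per-blob re-estimation of BlobContours is not reproduced.

```python
import numpy as np

def em_gmm_1d(x, k=2, n_iter=200):
    """EM for a 1-D Gaussian mixture (the clustering core of Blobworld)."""
    x = np.asarray(x, dtype=float)
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # spread initial means
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        dens = pi[:, None] / np.sqrt(2 * np.pi * var[:, None]) * \
            np.exp(-0.5 * (x[None, :] - mu[:, None]) ** 2 / var[:, None])
        resp = dens / dens.sum(axis=0)
        # M-step: update weights, means, and variances
        nk = resp.sum(axis=1)
        pi = nk / len(x)
        mu = (resp @ x) / nk
        var = (resp * (x[None, :] - mu[:, None]) ** 2).sum(axis=1) / nk + 1e-9
    return mu, var, pi

# toy feature values drawn from two well-separated "blobs"
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.2, 0.02, 200), rng.normal(0.8, 0.02, 200)])
mu, var, pi = em_gmm_1d(x)
```

Assigning each pixel to its most responsible component yields the blobs; BlobContours then lets the user adjust the number of Gaussians and re-run this loop.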

  2. Computer vision based nacre thickness measurement of Tahitian pearls

    NASA Astrophysics Data System (ADS)

    Loesdau, Martin; Chabrier, Sébastien; Gabillon, Alban

    2017-03-01

    The Tahitian pearl is the most valuable export product of French Polynesia, contributing, with over 61 million Euros, more than 50% of the total export income. To maintain its excellent reputation on the international market, an obligatory quality control for every pearl intended for export has been established by the local government. One of the controlled quality parameters is the pearl's nacre thickness. The evaluation is currently done manually by experts who visually analyze X-ray images of the pearls. In this article, a computer vision based approach to automate this procedure is presented. Even though computer vision based approaches for pearl nacre thickness measurement exist in the literature, the very specific features of the Tahitian pearl, namely its large shape variety and the occurrence of cavities, have so far not been considered. The presented work closes this gap. Our method consists of segmenting the pearl from X-ray images with a model-based approach, segmenting the pearl's nucleus with a purpose-built heuristic circle detection, and segmenting possible cavities with region growing. From the obtained boundaries, the two-dimensional nacre thickness profile can be calculated. A certainty measurement accounting for imaging and segmentation imprecision is included in the procedure. The proposed algorithms are tested on 298 manually evaluated Tahitian pearls, showing that it is generally possible to automatically evaluate the nacre thickness of Tahitian pearls with computer vision. Furthermore, the results show that the automatic measurement is more precise and faster than the manual one.

  3. Automatic and manual segmentation of healthy retinas using high-definition optical coherence tomography.

    PubMed

    Golbaz, Isabelle; Ahlers, Christian; Goesseringer, Nina; Stock, Geraldine; Geitzenauer, Wolfgang; Prünte, Christian; Schmidt-Erfurth, Ursula Margarethe

    2011-03-01

    This study compared automatic and manual segmentation modalities in the retina of healthy eyes using high-definition optical coherence tomography (HD-OCT). Twenty retinas in 20 healthy individuals were examined using an HD-OCT system (Carl Zeiss Meditec, Inc.). Three-dimensional imaging was performed with an axial resolution of 6 μm at a maximum scanning speed of 25,000 A-scans/second. Volumes of 6 × 6 × 2 mm were scanned. Scans were analysed using a MATLAB-based algorithm and a manual segmentation software system (3D-Doctor). The volume values calculated by the two methods were compared. Statistical analysis revealed a high correlation between automatic and manual modes of segmentation. The automatic mode of measuring retinal volume and the corresponding three-dimensional images provided results similar to the manual segmentation procedure. Both methods were able to visualize retinal and subretinal features accurately. Both methods provided realistic volumetric data when applied to raster scan sets. Manual segmentation represents an adequate tool with which to control automated processes and to identify clinically relevant structures, whereas automatic procedures will be needed to obtain data in larger patient populations. © 2009 The Authors. Journal compilation © 2009 Acta Ophthalmol.

  4. Automated methods for hippocampus segmentation: the evolution and a review of the state of the art.

    PubMed

    Dill, Vanderson; Franco, Alexandre Rosa; Pinho, Márcio Sarroglia

    2015-04-01

    The segmentation of the hippocampus in Magnetic Resonance Imaging (MRI) has been an important procedure to diagnose and monitor several clinical situations. The precise delineation of the borders of this brain structure makes it possible to obtain a measure of the volume and estimate its shape, which can be used to diagnose some diseases, such as Alzheimer's disease, schizophrenia and epilepsy. As the manual segmentation procedure in three-dimensional images is highly time consuming and the reproducibility is low, automated methods introduce substantial gains. On the other hand, the implementation of those methods is a challenge because of the low contrast of this structure in relation to the neighboring areas of the brain. Within this context, this research presents a review of the evolution of automatized methods for the segmentation of the hippocampus in MRI. Many proposed methods for segmentation of the hippocampus have been published in leading journals in the medical image processing area. This paper describes these methods presenting the techniques used and quantitatively comparing the methods based on Dice Similarity Coefficient. Finally, we present an evaluation of those methods considering the degree of user intervention, computational cost, segmentation accuracy and feasibility of application in a clinical routine.

  5. Missing observations in multiyear rotation sampling designs

    NASA Technical Reports Server (NTRS)

    Gbur, E. E.; Sielken, R. L., Jr. (Principal Investigator)

    1982-01-01

    Because multiyear estimation of at-harvest stratum crop proportions is more efficient than single-year estimation, the behavior of multiyear estimators in the presence of missing acquisitions was studied. Only the (worst) case in which a segment proportion cannot be estimated for the entire year is considered. The effect of these missing segments on the variance of the at-harvest stratum crop proportion estimator is considered both when missing segments are not replaced and when they are replaced by segments not sampled in previous years. The principal recommendations are to replace missing segments according to some specified strategy, and to use a sequential procedure for selecting a sampling design; i.e., choose an optimal two-year design and then, based on the observed two-year design after segment losses have been taken into account, choose the best possible three-year design having the observed two-year parent design.

  6. A volumetric pulmonary CT segmentation method with applications in emphysema assessment

    NASA Astrophysics Data System (ADS)

    Silva, José Silvestre; Silva, Augusto; Santos, Beatriz S.

    2006-03-01

    A segmentation method is a mandatory pre-processing step in many automated or semi-automated analysis tasks such as region identification and densitometric analysis, or even for 3D visualization purposes. In this work we present a fully automated volumetric pulmonary segmentation algorithm based on intensity discrimination and morphologic procedures. Our method first identifies the trachea and primary bronchi; the pulmonary region is then identified by applying a threshold followed by morphologic operations. When both lungs are in contact, additional procedures are performed to obtain two separate lung volumes. To evaluate the performance of the method, we compared contours extracted from 3D lung surfaces with reference contours, using several figures of merit. Results show that the worst case generally occurs at the middle sections of high-resolution CT exams, due to the presence of aerial and vascular structures. Nevertheless, the average error is smaller than the average error associated with radiologist inter-observer variability, which suggests that our method produces lung contours similar to those drawn by radiologists. The information created by our segmentation algorithm is used by an identification and representation method for pulmonary emphysema that also classifies emphysema according to its severity. Two clinically proven thresholds are applied to identify regions with severe emphysema and regions with highly severe emphysema. Based on this thresholding strategy, an application for volumetric emphysema assessment was developed, offering new display paradigms for the visualization of classification results. This framework is easily extendable to accommodate other classifiers, namely those related to texture-based segmentation, as is often the case with interstitial diseases.
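The threshold-plus-morphology core of such a pipeline can be sketched in a few lines: threshold low-density voxels, then apply an opening (erosion followed by dilation) to remove isolated specks. Cross-shaped 3x3 operators are hand-rolled here with array shifts; trachea removal and lung separation from the paper are not reproduced.

```python
import numpy as np

def binary_erode(mask):
    """Cross-shaped erosion: a pixel survives only if it and its 4-neighbours are set."""
    m = mask.copy()
    m[1:] &= mask[:-1]; m[:-1] &= mask[1:]
    m[:, 1:] &= mask[:, :-1]; m[:, :-1] &= mask[:, 1:]
    return m

def binary_dilate(mask):
    """Cross-shaped dilation: set a pixel if it or any 4-neighbour is set."""
    m = mask.copy()
    m[1:] |= mask[:-1]; m[:-1] |= mask[1:]
    m[:, 1:] |= mask[:, :-1]; m[:, :-1] |= mask[:, 1:]
    return m

def segment_lowdensity(img, thresh):
    """Threshold low-density (air-filled) pixels, then open to remove specks."""
    return binary_dilate(binary_erode(img < thresh))

# toy slice: two dark "lungs" (0.1) in bright tissue (0.9) plus a one-pixel speck
img = np.full((12, 12), 0.9)
img[2:10, 1:5] = 0.1
img[2:10, 7:11] = 0.1
img[0, 11] = 0.1                      # isolated dark speck (noise)
seg = segment_lowdensity(img, thresh=0.5)
```

The opening removes the isolated speck while keeping both lung regions (slightly trimmed at their corners by the cross-shaped operator).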

  7. Lesion identification using unified segmentation-normalisation models and fuzzy clustering

    PubMed Central

    Seghier, Mohamed L.; Ramlackhansingh, Anil; Crinion, Jenny; Leff, Alexander P.; Price, Cathy J.

    2008-01-01

In this paper, we propose a new automated procedure for lesion identification from single images based on the detection of outlier voxels. We demonstrate the utility of this procedure using artificial and real lesions. The scheme rests on two innovations: First, we augment the generative model used for combined segmentation and normalization of images with an empirical prior for an atypical tissue class, which can be optimised iteratively. Second, we adopt a fuzzy clustering procedure to identify outlier voxels in normalised gray and white matter segments. These two advances suppress misclassification of voxels and restrict lesion identification to gray/white matter lesions, respectively. Our analyses show a high sensitivity for detecting and delineating brain lesions with different sizes, locations, and textures. Our approach has important implications for the generation of lesion overlap maps of a given population and the assessment of lesion-deficit mappings. From a clinical perspective, our method should help to compute the total lesion volume or to precisely trace lesion boundaries, which might be pertinent for surgical or diagnostic purposes. PMID:18482850

  8. Northeast Artificial Intelligence Consortium Annual Report for 1987. Volume 4. Research in Automated Photointerpretation

    DTIC Science & Technology

    1989-03-01

[Figure captions from the report: "Automated Photointerpretation Testbed"; "An Initial Segmentation of an Image".] ...Markov random field (MRF) theory provides a powerful alternative texture model and has resulted in intensive research activity in MRF model-based texture analysis... interpretation process. 5. Additional, and perhaps more powerful, features have to be incorporated into the image segmentation procedure. 6. Object detection...

  9. Echogenicity based approach to detect, segment and track the common carotid artery in 2D ultrasound images.

    PubMed

    Narayan, Nikhil S; Marziliano, Pina

    2015-08-01

Automatic detection and segmentation of the common carotid artery in transverse ultrasound (US) images of the thyroid gland play a vital role in the success of US-guided intervention procedures. We propose in this paper a novel method to accurately detect, segment and track the carotid in 2D and 2D+t US images of the thyroid gland using concepts based on tissue echogenicity and ultrasound image formation. We first segment the hypoechoic anatomical regions of interest using local phase and energy in the input image. We then make use of a Hessian-based blob-like analysis to detect the carotid within the segmented hypoechoic regions. The carotid artery is segmented by making use of a least squares ellipse fit for the edge points around the detected carotid candidate. Experiments performed on a multivendor dataset of 41 images show that the proposed algorithm can segment the carotid artery with high sensitivity (99.6 ± 0.2%) and specificity (92.9 ± 0.1%). Further experiments on a public database containing 971 images of the carotid artery showed that the proposed algorithm can achieve a detection accuracy of 95.2% with a 2% increase in performance when compared to the state-of-the-art method.
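The least-squares ellipse fit in the final step can be sketched as a direct conic fit, a x^2 + b xy + c y^2 + d x + e y = 1, with the centre recovered from where the conic's gradient vanishes. This is a generic formulation under our own normalisation, not the paper's exact estimator:

```python
import numpy as np

def fit_ellipse(x, y):
    """Least-squares conic fit a x^2 + b xy + c y^2 + d x + e y = 1 to edge
    points; returns the conic coefficients and the ellipse centre."""
    D = np.column_stack([x * x, x * y, y * y, x, y])
    coef, *_ = np.linalg.lstsq(D, np.ones_like(x), rcond=None)
    a, b, c, d, e = coef
    # the centre is where the gradient of the conic vanishes
    cx, cy = np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])
    return coef, (cx, cy)
```

This unconstrained fit works for clean edge points but can return a non-ellipse conic under heavy noise; a constrained estimator (e.g. Fitzgibbon's) guarantees an ellipse.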

  10. Precise determination of anthropometric dimensions by means of image processing methods for estimating human body segment parameter values.

    PubMed

    Baca, A

    1996-04-01

    A method has been developed for the precise determination of anthropometric dimensions from the video images of four different body configurations. High precision is achieved by incorporating techniques for finding the location of object boundaries with sub-pixel accuracy, the implementation of calibration algorithms, and by taking into account the varying distances of the body segments from the recording camera. The system allows automatic segment boundary identification from the video image, if the boundaries are marked on the subject by black ribbons. In connection with the mathematical finite-mass-element segment model of Hatze, body segment parameters (volumes, masses, the three principal moments of inertia, the three local coordinates of the segmental mass centers etc.) can be computed by using the anthropometric data determined videometrically as input data. Compared to other, recently published video-based systems for the estimation of the inertial properties of body segments, the present algorithms reduce errors originating from optical distortions, inaccurate edge-detection procedures, and user-specified upper and lower segment boundaries or threshold levels for the edge-detection. The video-based estimation of human body segment parameters is especially useful in situations where ease of application and rapid availability of comparatively precise parameter values are of importance.

  11. Automatic extraction of numeric strings in unconstrained handwritten document images

    NASA Astrophysics Data System (ADS)

    Haji, M. Mehdi; Bui, Tien D.; Suen, Ching Y.

    2011-01-01

    Numeric strings such as identification numbers carry vital pieces of information in documents. In this paper, we present a novel algorithm for automatic extraction of numeric strings in unconstrained handwritten document images. The algorithm has two main phases: pruning and verification. In the pruning phase, the algorithm first performs a new segment-merge procedure on each text line, and then using a new regularity measure, it prunes all sequences of characters that are unlikely to be numeric strings. The segment-merge procedure is composed of two modules: a new explicit character segmentation algorithm which is based on analysis of skeletal graphs and a merging algorithm which is based on graph partitioning. All the candidate sequences that pass the pruning phase are sent to a recognition-based verification phase for the final decision. The recognition is based on a coarse-to-fine approach using probabilistic RBF networks. We developed our algorithm for the processing of real-world documents where letters and digits may be connected or broken in a document. The effectiveness of the proposed approach is shown by extensive experiments done on a real-world database of 607 documents which contains handwritten, machine-printed and mixed documents with different types of layouts and levels of noise.

  12. Semantic Segmentation of Building Elements Using Point Cloud Hashing

    NASA Astrophysics Data System (ADS)

    Chizhova, M.; Gurianov, A.; Hess, M.; Luhmann, T.; Brunn, A.; Stilla, U.

    2018-05-01

For the interpretation of point clouds, the semantic definition of segments extracted from point clouds or images is a common problem. Usually, the semantics of geometrically pre-segmented point cloud elements are determined using probabilistic networks and scene databases. The proposed semantic segmentation method is based on the psychological human interpretation of geometric objects, especially on fundamental rules of primary comprehension. Starting from these rules, buildings can be quite well and simply classified by a human operator (e.g. an architect) into different building types and structural elements (dome, nave, transept etc.), including particular building parts which are visually detected. The key part of the procedure is a novel method based on hashing, where point cloud projections are transformed into binary pixel representations. The segmentation approach, demonstrated on the example of classical Orthodox churches, is suitable for other buildings and objects characterized by a particular typology in their construction (e.g. industrial objects in standardized environments with strict component design allowing clear semantic modelling).
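The "point cloud projections transformed into binary pixel representations" step can be sketched as a simple occupancy rasterisation whose packed bits serve as a hash key. The grid resolution, the bounding-box normalisation, and the use of `tobytes()` as the key are our assumptions, not the paper's scheme:

```python
import numpy as np

def binary_projection(points, resolution=16):
    """Project a 3D point cloud onto the XY plane and rasterise its footprint
    into a binary occupancy grid; the packed bytes act as a hashable key."""
    xy = points[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    # normalise to the bounding box so the key is translation/scale invariant
    idx = np.floor((xy - lo) / (hi - lo + 1e-9) * resolution).astype(int)
    idx = np.clip(idx, 0, resolution - 1)
    grid = np.zeros((resolution, resolution), dtype=bool)
    grid[idx[:, 1], idx[:, 0]] = True
    return grid, grid.tobytes()
```

Because the footprint is normalised to its bounding box before rasterisation, a translated copy of the same cloud produces an identical key, which is what makes such binary signatures usable for matching against a library of building-part templates.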

  13. A Modular Hierarchical Approach to 3D Electron Microscopy Image Segmentation

    PubMed Central

    Liu, Ting; Jones, Cory; Seyedhosseini, Mojtaba; Tasdizen, Tolga

    2014-01-01

    The study of neural circuit reconstruction, i.e., connectomics, is a challenging problem in neuroscience. Automated and semi-automated electron microscopy (EM) image analysis can be tremendously helpful for connectomics research. In this paper, we propose a fully automatic approach for intra-section segmentation and inter-section reconstruction of neurons using EM images. A hierarchical merge tree structure is built to represent multiple region hypotheses and supervised classification techniques are used to evaluate their potentials, based on which we resolve the merge tree with consistency constraints to acquire final intra-section segmentation. Then, we use a supervised learning based linking procedure for the inter-section neuron reconstruction. Also, we develop a semi-automatic method that utilizes the intermediate outputs of our automatic algorithm and achieves intra-segmentation with minimal user intervention. The experimental results show that our automatic method can achieve close-to-human intra-segmentation accuracy and state-of-the-art inter-section reconstruction accuracy. We also show that our semi-automatic method can further improve the intra-segmentation accuracy. PMID:24491638

  14. Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT.

    PubMed

    Mazaheri, Samaneh; Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah

    2015-01-01

Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. Despite numerous attempts at automating ventricle segmentation and tracking in echocardiography, this remains a challenging task because of low-quality images with missing anatomical details or speckle noise and a restricted field of view. This paper presents a fusion method which particularly aims to increase the segment-ability of echocardiography features such as the endocardium and to improve image contrast. In addition, it tries to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information according to an integration feature across all overlapping images, using a combination of principal component analysis and the discrete wavelet transform. For evaluation, the results of several well-known techniques are compared with the proposed method, and different metrics are implemented to evaluate its performance. It is concluded that the presented pixel-based method based on the integration of PCA and DWT gives the best segment-ability of cardiac ultrasound images and the best performance on all metrics.
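The PCA half of such a fusion can be sketched as follows: the leading eigenvector of the two images' 2x2 covariance matrix supplies the blending weights. The DWT stage and the paper's exact integration rule are omitted; this is a generic PCA-fusion sketch, not the authors' pipeline:

```python
import numpy as np

def pca_fusion(img_a, img_b):
    """PCA image fusion: the principal eigenvector of the covariance of the
    two flattened images gives the per-image blending weights."""
    data = np.stack([img_a.ravel(), img_b.ravel()])
    cov = np.cov(data)                         # 2x2 covariance matrix
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    v = np.abs(vecs[:, np.argmax(vals)])       # leading eigenvector
    w = v / v.sum()                            # normalise to fusion weights
    return w[0] * img_a + w[1] * img_b, w
```

Intuitively, the image carrying more variance (more structure) receives the larger weight; if one input is nearly constant, the fused result is dominated by the other.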

  15. Label fusion based brain MR image segmentation via a latent selective model

    NASA Astrophysics Data System (ADS)

    Liu, Gang; Guo, Xiantang; Zhu, Kai; Liao, Hengxu

    2018-04-01

Multi-atlas segmentation is an effective and increasingly popular approach for automatically labeling objects of interest in medical images. Recently, segmentation methods based on generative models and patch-based techniques have become the two principal branches of label fusion. However, these generative models and patch-based techniques are only loosely related, and the requirement for higher accuracy, faster segmentation, and robustness is always a great challenge. In this paper, we propose a novel algorithm that combines the two branches using a global weighted fusion strategy based on a patch latent selective model to perform segmentation of specific anatomical structures in human brain magnetic resonance (MR) images. In establishing this probabilistic model of label fusion between the target patch and the patch dictionary, we explored the Kronecker delta function in the label prior, which is more suitable than other models, and designed a latent selective model as a membership prior to determine from which training patch the intensity and label of the target patch are generated at each spatial location. Because the image background is an equally important factor for segmentation, it is analyzed in the label fusion procedure and regarded as an isolated label, so that the background and the regions of interest keep the same privilege. During label fusion with the global weighted fusion scheme, we use Bayesian inference and the expectation maximization algorithm to estimate the labels of the target scan and produce the segmentation map. Experimental results indicate that the proposed algorithm is more accurate and robust than the other segmentation methods.

  16. New software tools for enhanced precision in robot-assisted laser phonomicrosurgery.

    PubMed

    Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G

    2012-01-01

    This paper describes a new software package created to enhance precision during robot-assisted laser phonomicrosurgery procedures. The new software is composed of three tools for camera calibration, automatic tumor segmentation, and laser tracking. These were designed and developed to improve the outcome of this demanding microsurgical technique, and were tested herein to produce quantitative performance data. The experimental setup was based on the motorized laser micromanipulator created by Istituto Italiano di Tecnologia and the experimental protocols followed are fully described in this paper. The results show the new tools are robust and effective: The camera calibration tool reduced residual errors (RMSE) to 0.009 ± 0.002 mm under 40× microscope magnification; the automatic tumor segmentation tool resulted in deep lesion segmentations comparable to manual segmentations (RMSE= 0.160 ± 0.028 mm under 40× magnification); and the laser tracker tool proved to be reliable even during cutting procedures (RMSE= 0.073 ± 0.023 mm under 40× magnification). These results demonstrate the new software package can provide excellent improvements to the previous microsurgical system, leading to important enhancements in surgical outcome.

  17. A Patch-Based Approach for the Segmentation of Pathologies: Application to Glioma Labelling.

    PubMed

    Cordier, Nicolas; Delingette, Herve; Ayache, Nicholas

    2016-04-01

    In this paper, we describe a novel and generic approach to address fully-automatic segmentation of brain tumors by using multi-atlas patch-based voting techniques. In addition to avoiding the local search window assumption, the conventional patch-based framework is enhanced through several simple procedures: an improvement of the training dataset in terms of both label purity and intensity statistics, augmented features to implicitly guide the nearest-neighbor-search, multi-scale patches, invariance to cube isometries, stratification of the votes with respect to cases and labels. A probabilistic model automatically delineates regions of interest enclosing high-probability tumor volumes, which allows the algorithm to achieve highly competitive running time despite minimal processing power and resources. This method was evaluated on Multimodal Brain Tumor Image Segmentation challenge datasets. State-of-the-art results are achieved, with a limited learning stage thus restricting the risk of overfit. Moreover, segmentation smoothness does not involve any post-processing.

  18. Surgeon and type of anesthesia predict variability in surgical procedure times.

    PubMed

    Strum, D P; Sampson, A R; May, J H; Vargas, L G

    2000-05-01

    Variability in surgical procedure times increases the cost of healthcare delivery by increasing both the underutilization and overutilization of expensive surgical resources. To reduce variability in surgical procedure times, we must identify and study its sources. Our data set consisted of all surgeries performed over a 7-yr period at a large teaching hospital, resulting in 46,322 surgical cases. To study factors associated with variability in surgical procedure times, data mining techniques were used to segment and focus the data so that the analyses would be both technically and intellectually feasible. The data were subdivided into 40 representative segments of manageable size and variability based on headers adopted from the common procedural terminology classification. Each data segment was then analyzed using a main-effects linear model to identify and quantify specific sources of variability in surgical procedure times. The single most important source of variability in surgical procedure times was surgeon effect. Type of anesthesia, age, gender, and American Society of Anesthesiologists risk class were additional sources of variability. Intrinsic case-specific variability, unexplained by any of the preceding factors, was found to be highest for shorter surgeries relative to longer procedures. Variability in procedure times among surgeons was a multiplicative function (proportionate to time) of surgical time and total procedure time, such that as procedure times increased, variability in surgeons' surgical time increased proportionately. Surgeon-specific variability should be considered when building scheduling heuristics for longer surgeries. Results concerning variability in surgical procedure times due to factors such as type of anesthesia, age, gender, and American Society of Anesthesiologists risk class may be extrapolated to scheduling in other institutions, although specifics on individual surgeons may not. 
This research identifies factors associated with variability in surgical procedure times, knowledge of which may ultimately be used to improve surgical scheduling and operating room utilization.

  19. 14 CFR 97.3 - Symbols and terms used in procedures.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... established on the intermediate course or final approach course. (2) Initial approach altitude is the altitude (or altitudes, in high altitude procedure) prescribed for the initial approach segment of an...: Speed 166 knots or more. Approach procedure segments for which altitudes (minimum altitudes, unless...

  20. 14 CFR 97.3 - Symbols and terms used in procedures.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... established on the intermediate course or final approach course. (2) Initial approach altitude is the altitude (or altitudes, in high altitude procedure) prescribed for the initial approach segment of an...: Speed 166 knots or more. Approach procedure segments for which altitudes (minimum altitudes, unless...

  1. SU-C-207B-03: A Geometrical Constrained Chan-Vese Based Tumor Segmentation Scheme for PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, L; Zhou, Z; Wang, J

Purpose: Accurate segmentation of tumor in PET is challenging when part of the tumor is connected to normal organs/tissues with no difference in intensity. Conventional segmentation methods, such as thresholding or region growing, cannot generate satisfactory results in this case. We propose a geometrically constrained Chan-Vese based scheme that segments tumor in PET for this special case by considering the similarity between two adjacent slices. Methods: The proposed scheme performs segmentation in a slice-by-slice fashion, where an accurate segmentation of one slice is used as the guidance for segmentation of the remaining slices. For a slice in which the tumor is not directly connected to organs/tissues with similar intensity values, a conventional clustering-based segmentation method under the user's guidance is used to obtain an exact tumor contour. This is set as the initial contour, and the Chan-Vese algorithm is applied to segment the tumor in the next adjacent slice by adding constraints on tumor size, position and shape. This procedure is repeated until the last PET slice containing tumor. The proposed geometrically constrained Chan-Vese based algorithm was implemented in Matlab, and its performance was tested on several cervical cancer patients in whom the cervix and bladder are connected with similar activity values. Positive predictive values (PPV) are calculated to characterize the segmentation accuracy of the proposed scheme. Results: Tumors were accurately segmented by the proposed method even when they are connected with the bladder with no difference in intensity. The average PPVs were 0.9571±0.0355 and 0.9894±0.0271 for 17 slices and 11 slices of PET from two patients, respectively. Conclusion: We have developed a new scheme to segment tumor in PET images for the special case where the tumor is very similar to or connected to normal organs/tissues in the image. The proposed scheme provides a reliable way to segment such tumors.

  2. Automated detection of videotaped neonatal seizures of epileptic origin.

    PubMed

    Karayiannis, Nicolaos B; Xiong, Yaohua; Tao, Guozhi; Frost, James D; Wise, Merrill S; Hrachovy, Richard A; Mizrahi, Eli M

    2006-06-01

    This study aimed at the development of a seizure-detection system by training neural networks with quantitative motion information extracted from short video segments of neonatal seizures of the myoclonic and focal clonic types and random infant movements. The motion of the infants' body parts was quantified by temporal motion-strength signals extracted from video segments by motion-segmentation methods based on optical flow computation. The area of each frame occupied by the infants' moving body parts was segmented by clustering the motion parameters obtained by fitting an affine model to the pixel velocities. The motion of the infants' body parts also was quantified by temporal motion-trajectory signals extracted from video recordings by robust motion trackers based on block-motion models. These motion trackers were developed to adjust autonomously to illumination and contrast changes that may occur during the video-frame sequence. Video segments were represented by quantitative features obtained by analyzing motion-strength and motion-trajectory signals in both the time and frequency domains. Seizure recognition was performed by conventional feed-forward neural networks, quantum neural networks, and cosine radial basis function neural networks, which were trained to detect neonatal seizures of the myoclonic and focal clonic types and to distinguish them from random infant movements. The computational tools and procedures developed for automated seizure detection were evaluated on a set of 240 video segments of 54 patients exhibiting myoclonic seizures (80 segments), focal clonic seizures (80 segments), and random infant movements (80 segments). Regardless of the decision scheme used for interpreting the responses of the trained neural networks, all the neural network models exhibited sensitivity and specificity>90%. 
For one of the decision schemes proposed for interpreting the responses of the trained neural networks, the majority of the trained neural-network models exhibited sensitivity>90% and specificity>95%. In particular, cosine radial basis function neural networks achieved the performance targets of this phase of the project (i.e., sensitivity>95% and specificity>95%). The best among the motion segmentation and tracking methods developed in this study produced quantitative features that constitute a reliable basis for detecting neonatal seizures. The performance targets of this phase of the project were achieved by combining the quantitative features obtained by analyzing motion-strength signals with those produced by analyzing motion-trajectory signals. The computational procedures and tools developed in this study to perform off-line analysis of short video segments will be used in the next phase of this project, which involves the integration of these procedures and tools into a system that can process and analyze long video recordings of infants monitored for seizures in real time.

  3. A segmentation/clustering model for the analysis of array CGH data.

    PubMed

    Picard, F; Robin, S; Lebarbier, E; Daudin, J-J

    2007-09-01

    Microarray-CGH (comparative genomic hybridization) experiments are used to detect and map chromosomal imbalances. A CGH profile can be viewed as a succession of segments that represent homogeneous regions in the genome whose representative sequences share the same relative copy number on average. Segmentation methods constitute a natural framework for the analysis, but they do not provide a biological status for the detected segments. We propose a new model for this segmentation/clustering problem, combining a segmentation model with a mixture model. We present a new hybrid algorithm called dynamic programming-expectation maximization (DP-EM) to estimate the parameters of the model by maximum likelihood. This algorithm combines DP and the EM algorithm. We also propose a model selection heuristic to select the number of clusters and the number of segments. An example of our procedure is presented, based on publicly available data sets. We compare our method to segmentation methods and to hidden Markov models, and we show that the new segmentation/clustering model is a promising alternative that can be applied in the more general context of signal processing.
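The dynamic-programming half of the DP-EM idea can be illustrated on a 1-D signal: find the optimal split into k segments minimising within-segment squared error, a piecewise-constant stand-in for the paper's model (function names and the constant-per-segment cost are our simplifications):

```python
import numpy as np

def dp_segment(y, k):
    """Optimal split of signal y into k contiguous segments minimising total
    within-segment SSE, by dynamic programming. O(k * n^2) time."""
    n = len(y)
    p1 = np.concatenate([[0.0], np.cumsum(y)])            # prefix sums
    p2 = np.concatenate([[0.0], np.cumsum(np.square(y))])  # prefix sums of y^2

    def sse(i, j):
        """SSE of fitting a constant to y[i..j] (inclusive), via prefix sums."""
        s, s2, m = p1[j + 1] - p1[i], p2[j + 1] - p2[i], j - i + 1
        return s2 - s * s / m

    best = np.full((k + 1, n + 1), np.inf)
    cut = np.zeros((k + 1, n + 1), dtype=int)
    best[0][0] = 0.0
    for seg in range(1, k + 1):
        for j in range(seg, n + 1):
            for i in range(seg - 1, j):
                c = best[seg - 1][i] + sse(i, j - 1)
                if c < best[seg][j]:
                    best[seg][j], cut[seg][j] = c, i
    # backtrack the segment boundaries as half-open (start, end) pairs
    bounds, j = [], n
    for seg in range(k, 0, -1):
        i = int(cut[seg][j])
        bounds.append((i, int(j)))
        j = i
    return bounds[::-1], float(best[k][n])
```

In the DP-EM setting this exact DP step alternates with EM updates of the cluster parameters; here it simply recovers a clean two-level change point with zero residual cost.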

  4. Automatic Segmenting Structures in MRI's Based on Texture Analysis and Fuzzy Logic

    NASA Astrophysics Data System (ADS)

    Kaur, Mandeep; Rattan, Munish; Singh, Pushpinder

    2017-12-01

The purpose of this paper is to present a variational method for geometric contours which keeps the level set function close to a signed distance function, thereby removing the need for an expensive re-initialization procedure. The level set method is applied to magnetic resonance images (MRI) to track irregularities in them, as medical imaging plays a substantial part in the treatment, therapy and diagnosis of various organs, tumors and abnormalities; it favors the patient with speedier and more decisive disease control and fewer side effects. The geometrical shape, the tumor's size and abnormal tissue growth can be calculated by segmentation of the image. Automatic segmentation in medical imaging is still a great challenge for researchers. Based on texture analysis, different images are processed by optimizing the level set segmentation. Traditionally, optimization was manual for every image, with each parameter selected one after another. By applying fuzzy logic, the segmentation is correlated with texture features, making it automatic and more effective. There is no initialization of parameters, and the system works intelligently: it segments different MRI images without tuning the level set parameters and gives optimized results for all MRIs.

  5. Efficient segmentation of 3D fluoroscopic datasets from mobile C-arm

    NASA Astrophysics Data System (ADS)

    Styner, Martin A.; Talib, Haydar; Singh, Digvijay; Nolte, Lutz-Peter

    2004-05-01

The emerging mobile fluoroscopic 3D technology linked with a navigation system combines the advantages of CT-based and C-arm-based navigation. The intra-operative, automatic segmentation of 3D fluoroscopy datasets enables the combined visualization of surgical instruments and anatomical structures for enhanced planning, surgical eye-navigation and landmark digitization. We performed a thorough evaluation of several segmentation algorithms using a large set of data from different anatomical regions and man-made phantom objects. The analyzed segmentation methods include automatic thresholding, morphological operations, an adapted region growing method and an implicit 3D geodesic snake method. In regard to computational efficiency, all methods performed within acceptable limits on a standard desktop PC (30 s to 5 min). In general, the best results were obtained with datasets from long bones, followed by extremities. The segmentations of spine, pelvis and shoulder datasets were generally of poorer quality. As expected, the threshold-based methods produced the worst results. The combined thresholding and morphological operations methods were considered appropriate for a smaller set of clean images. The region growing method performed generally much better in regard to computational efficiency and segmentation correctness, especially for datasets of joints, and lumbar and cervical spine regions. The less efficient implicit snake method was able to additionally remove wrongly segmented skin tissue regions. This study presents a step towards efficient intra-operative segmentation of 3D fluoroscopy datasets, but there is room for improvement. Next, we plan to study model-based approaches for datasets from the knee and hip joint region, which would be thenceforth applied to all anatomical regions in our continuing development of an ideal segmentation procedure for 3D fluoroscopic images.
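Of the methods compared, region growing is the simplest to sketch: a breadth-first flood fill that accepts 6-connected voxels whose intensity stays within a tolerance of the seed value. The seed, tolerance, and fixed-reference criterion are illustrative choices, not the adapted method evaluated in the study:

```python
from collections import deque
import numpy as np

def region_grow(volume, seed, tol):
    """Breadth-first region growing on a 3D array: accept 6-connected voxels
    whose intensity lies within +/- tol of the seed voxel's value."""
    ref = volume[seed]
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if (all(0 <= n[i] < volume.shape[i] for i in range(3))
                    and not mask[n] and abs(volume[n] - ref) <= tol):
                mask[n] = True
                queue.append(n)
    return mask
```

Adapted variants typically update the acceptance statistics as the region grows rather than comparing against the seed alone, which makes them more robust to intensity drift across a dataset.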

  6. Combined texture feature analysis of segmentation and classification of benign and malignant tumour CT slices.

    PubMed

    Padma, A; Sukanesh, R

    2013-01-01

A computer software system is designed for the segmentation and classification of benign and malignant tumour slices in brain computed tomography (CT) images. This paper presents a method to find and select both the dominant run length and co-occurrence texture features of the region of interest (ROI) of the tumour region of each slice, segmented by fuzzy c-means clustering (FCM), and evaluates the performance of support vector machine (SVM)-based classifiers in classifying benign and malignant tumour slices. Two hundred and six tumour-confirmed CT slices are considered in this study. A total of 17 texture features are extracted by a feature extraction procedure, and six features are selected using principal component analysis (PCA). The SVM-based classifier is constructed with the selected features, and the segmentation results are compared with ground truth (target) labelled by an experienced radiologist. Quantitative analysis between ground truth and segmented tumour is presented in terms of segmentation accuracy, segmentation error and overlap similarity measures such as the Jaccard index. The classification performance of the SVM-based classifier with the same selected features is also evaluated using a 10-fold cross-validation method. The results show that some newly found texture features make an important contribution to classifying benign and malignant tumour slices efficiently and accurately with less computational time. The experimental results showed that the proposed system achieves the highest segmentation and classification accuracy, as measured by the Jaccard index, sensitivity and specificity.
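The FCM step used for segmentation can be sketched in plain NumPy. The 1-D feature values, fuzzifier m = 2, and linspace initialisation are our choices for illustration, not the paper's configuration:

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=50):
    """Plain fuzzy c-means on 1-D feature values x: alternate the membership
    update and the centroid update for a fixed number of iterations."""
    centers = np.linspace(x.min(), x.max(), c)          # deterministic init
    u = np.full((c, len(x)), 1.0 / c)
    for _ in range(iters):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12  # c x n distances
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=0)                          # fuzzy memberships
        w = u ** m
        centers = (w @ x) / w.sum(axis=1)                  # weighted centroids
    return centers, u
```

Unlike hard k-means, each sample keeps a graded membership in every cluster, which is what lets FCM delineate tumour boundaries where intensities are ambiguous.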

  7. Brain tumor segmentation based on local independent projection-based classification.

    PubMed

    Huang, Meiyan; Yang, Wei; Wu, Yao; Jiang, Jun; Chen, Wufan; Feng, Qianjin

    2014-10-01

    Brain tumor segmentation is an important procedure for early tumor diagnosis and radiotherapy planning. Although numerous brain tumor segmentation methods have been presented, enhancing tumor segmentation methods is still challenging because brain tumor MRI images exhibit complex characteristics, such as high diversity in tumor appearance and ambiguous tumor boundaries. To address this problem, we propose a novel automatic tumor segmentation method for MRI images. This method treats tumor segmentation as a classification problem. Additionally, the local independent projection-based classification (LIPC) method is used to classify each voxel into different classes. A novel classification framework is derived by introducing the local independent projection into the classical classification model. Locality is important in the calculation of local independent projections for LIPC. Locality is also considered in determining whether local anchor embedding is more applicable in solving linear projection weights compared with other coding methods. Moreover, LIPC considers the data distribution of different classes by learning a softmax regression model, which can further improve classification performance. In this study, 80 brain tumor MRI images with ground truth data are used as training data and 40 images without ground truth data are used as testing data. The segmentation results of testing data are evaluated by an online evaluation tool. The average dice similarities of the proposed method for segmenting complete tumor, tumor core, and contrast-enhancing tumor on real patient data are 0.84, 0.685, and 0.585, respectively. These results are comparable to other state-of-the-art methods.

  8. Measurement of gamma' precipitates in a nickel-based superalloy using energy-filtered transmission electron microscopy coupled with automated segmenting techniques.

    PubMed

    Tiley, J S; Viswanathan, G B; Shiveley, A; Tschopp, M; Srinivasan, R; Banerjee, R; Fraser, H L

    2010-08-01

    Precipitates of the ordered L1₂ gamma' phase (dispersed in the face-centered cubic, or FCC, gamma matrix) were imaged in Rene 88 DT, a commercial multicomponent Ni-based superalloy, using energy-filtered transmission electron microscopy (EFTEM). Imaging was performed using the Cr, Co, Ni, Ti and Al elemental L-absorption edges in the energy loss spectrum. Manual and automated segmentation procedures were utilized for identification of precipitate boundaries and measurement of precipitate sizes. The automated region growing technique for precipitate identification was found to measure precipitate diameters accurately. In addition, the region growing technique provided a repeatable method for optimizing segmentation under varying EFTEM conditions.

  9. Weather analysis and interpretation procedures developed for the US/Canada wheat and barley exploratory experiment

    NASA Technical Reports Server (NTRS)

    Trenchard, M. H. (Principal Investigator)

    1980-01-01

    Procedures and techniques for analyzing meteorological conditions at segments during the growing season were developed for the U.S./Canada Wheat and Barley Exploratory Experiment. The main product and analysis tool is the segment-level climagraph, which depicts meteorological variables for the current year over time, compared with climatological normals. The variable values for each segment are estimates derived through objective analysis of values obtained at first-order stations in the region. The procedures and products documented represent a baseline for future Foreign Commodity Production Forecasting experiments.

  10. An Efficient Pipeline for Abdomen Segmentation in CT Images.

    PubMed

    Koyuncu, Hasan; Ceylan, Rahime; Sivri, Mesut; Erdogan, Hasan

    2018-04-01

    Computed tomography (CT) scans usually include some disadvantages due to the nature of the imaging procedure, and these drawbacks prevent accurate abdomen segmentation. Discontinuous abdomen edges, the bed section of the CT, patient information, closeness between the edges of the abdomen and the CT, poor contrast, and a narrow histogram can be regarded as the most important drawbacks that occur in abdominal CT scans. One or more of these drawbacks can arise and prevent technicians from obtaining abdomen images through simple segmentation techniques. In other words, a single CT scan can include the bed section, a patient's diagnostic information, low-quality abdomen edges, low contrast, and a narrow histogram all at once. These phenomena constitute a challenge, and an efficient pipeline that is unaffected by them is required. In addition, analyses such as segmentation, feature selection, and classification are meaningful for a real-time diagnosis system in cases where the abdomen section is directly used with a specific size. A statistical pipeline is designed in this study that is unaffected by the drawbacks mentioned above. Intensity-based approaches, morphological processes, and histogram-based procedures are utilized to design an efficient structure. Performance is evaluated in experiments on 58 CT images (16 training, 16 test, and 26 validation) that include the abdomen and one or more of the disadvantages. The first part of the data (16 training images) is used to find the pipeline's optimum parameters, while the second and third parts are utilized to evaluate and confirm the segmentation performance. The segmentation results are presented as the means of six performance metrics. The proposed method achieves remarkable average rates for training/test/validation of 98.95/99.36/99.57% (Jaccard), 99.47/99.67/99.79% (Dice), 100/99.91/99.91% (sensitivity), 98.47/99.23/99.85% (specificity), 99.38/99.63/99.87% (classification accuracy), and 98.98/99.45/99.66% (precision). In summary, a statistical pipeline performing abdomen segmentation is achieved that is not affected by these disadvantages, providing a detailed abdomen segmentation suitable for use before organ and tumor segmentation, feature extraction, and classification.
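    A heavily simplified, hypothetical sketch of the intensity-plus-morphology idea behind such a pipeline: threshold the scan, then keep only the largest connected foreground component so that small artefacts such as the bed section and text annotations fall away. The threshold value and 4-connectivity choice here are illustrative assumptions, not the paper's tuned parameters.

    ```python
    from collections import deque
    import numpy as np

    def segment_abdomen(img, thr):
        """Keep the largest 4-connected component above an intensity threshold."""
        fg = img > thr
        h, w = fg.shape
        seen = np.zeros_like(fg)
        best = []
        for sy in range(h):
            for sx in range(w):
                if fg[sy, sx] and not seen[sy, sx]:
                    # flood-fill one component with breadth-first search
                    comp, q = [], deque([(sy, sx)])
                    seen[sy, sx] = True
                    while q:
                        y, x = q.popleft()
                        comp.append((y, x))
                        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                            if 0 <= ny < h and 0 <= nx < w and fg[ny, nx] and not seen[ny, nx]:
                                seen[ny, nx] = True
                                q.append((ny, nx))
                    if len(comp) > len(best):
                        best = comp
        mask = np.zeros_like(fg)
        for y, x in best:
            mask[y, x] = True
        return mask
    ```

    A real pipeline would follow this with morphological closing and histogram-based refinement, but the largest-component step alone already discards most non-abdomen structures.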

  11. Hierarchical brain tissue segmentation and its application in multiple sclerosis and Alzheimer's disease

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Udupa, Jayaram K.; Moonis, Gul; Schwartz, Eric; Balcer, Laura

    2005-04-01

    Based on Fuzzy Connectedness (FC) object delineation principles and algorithms, a hierarchical brain tissue segmentation technique has been developed for MR images. After MR image background intensity inhomogeneity correction and intensity standardization, three FC objects for cerebrospinal fluid (CSF), gray matter (GM), and white matter (WM) are generated via FC object delineation, and an intracranial (IC) mask is created via morphological operations. Then, the IC mask is decomposed into parenchymal (BP) and CSF masks, while the BP mask is separated into WM and GM masks. WM mask is further divided into pure and dirty white matter masks (PWM and DWM). In Multiple Sclerosis studies, a severe white matter lesion (LS) mask is defined from DWM mask. Based on the segmented brain tissue images, a histogram-based method has been developed to find disease-specific, image-based quantitative markers for characterizing the macromolecular manifestation of the two diseases. These same procedures have been applied to 65 MS (46 patients and 19 normal subjects) and 25 AD (15 patients and 10 normal subjects) data sets, each of which consists of FSE PD- and T2-weighted MR images. Histograms representing standardized PD and T2 intensity distributions and their numerical parameters provide an effective means for characterizing the two diseases. The procedures are systematic, nearly automated, robust, and the results are reproducible.

  12. Development of techniques for producing static strata maps and development of photointerpretive methods based on multitemporal LANDSAT data

    NASA Technical Reports Server (NTRS)

    Colwell, R. N. (Principal Investigator)

    1977-01-01

    The results and progress of work conducted in support of the Large Area Crop Inventory Experiment (LACIE) are documented. Research was conducted for two tasks. These tasks include: (1) evaluation of the UCB static stratification procedure and modification of that procedure if warranted; and (2) the development of alternative photointerpretive techniques to the present LACIE procedure for the identification and selection of training areas for machine-processing of LACIE segments.

  13. A Rapid Segmentation-Insensitive "Digital Biopsy" Method for Radiomic Feature Extraction: Method and Pilot Study Using CT Images of Non-Small Cell Lung Cancer.

    PubMed

    Echegaray, Sebastian; Nair, Viswam; Kadoch, Michael; Leung, Ann; Rubin, Daniel; Gevaert, Olivier; Napel, Sandy

    2016-12-01

    Quantitative imaging approaches compute features within images' regions of interest. Segmentation is rarely completely automatic, requiring time-consuming editing by experts. We propose a new paradigm, called "digital biopsy," that allows for the collection of intensity- and texture-based features from these regions at least 1 order of magnitude faster than the current manual or semiautomated methods. A radiologist reviewed automated segmentations of lung nodules from 100 preoperative volume computed tomography scans of patients with non-small cell lung cancer, and manually adjusted the nodule boundaries in each section, to be used as a reference standard, requiring up to 45 minutes per nodule. We also asked a different expert to generate a digital biopsy for each patient using a paintbrush tool to paint a contiguous region of each tumor over multiple cross-sections, a procedure that required an average of <3 minutes per nodule. We simulated additional digital biopsies using morphological procedures. Finally, we compared the features extracted from these digital biopsies with our reference standard using intraclass correlation coefficient (ICC) to characterize robustness. Comparing the reference standard segmentations to our digital biopsies, we found that 84/94 features had an ICC >0.7; comparing erosions and dilations, using a sphere of 1.5-mm radius, of our digital biopsies to the reference standard segmentations resulted in 41/94 and 53/94 features, respectively, with ICCs >0.7. We conclude that many intensity- and texture-based features remain consistent between the reference standard and our method while substantially reducing the amount of operator time required.
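    The robustness measure used above is the intraclass correlation coefficient. A minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single measure), one common choice for comparing feature values obtained from two segmentations of the same cases; the abstract does not specify which ICC form the authors used, so treat this variant as an assumption:

    ```python
    import numpy as np

    def icc_2_1(x):
        """ICC(2,1): two-way random, absolute agreement, single measure.
        x: (n_subjects, k_raters) array of feature values."""
        n, k = x.shape
        m = x.mean()
        r = x.mean(axis=1)   # per-subject means
        c = x.mean(axis=0)   # per-rater (per-segmentation) means
        msr = k * ((r - m) ** 2).sum() / (n - 1)                     # between-subject
        msc = n * ((c - m) ** 2).sum() / (k - 1)                     # between-rater
        mse = ((x - r[:, None] - c[None, :] + m) ** 2).sum() / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    ```

    Features with ICC above 0.7 across the reference and "digital biopsy" segmentations were counted as robust in the study.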

  14. Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT

    PubMed Central

    Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah

    2015-01-01

    Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts in direction of automation ventricle segmentation and tracking in echocardiography, due to low quality images with missing anatomical details or speckle noises and restricted field of view, this problem is a challenging task. This paper presents a fusion method which particularly intends to increase the segment-ability of echocardiography features such as endocardial and improving the image contrast. In addition, it tries to expand the field of view, decreasing impact of noise and artifacts and enhancing the signal to noise ratio of the echo images. The proposed algorithm weights the image information regarding an integration feature between all the overlapping images, by using a combination of principal component analysis and discrete wavelet transform. For evaluation, a comparison has been done between results of some well-known techniques and the proposed method. Also, different metrics are implemented to evaluate the performance of proposed algorithm. It has been concluded that the presented pixel-based method based on the integration of PCA and DWT has the best result for the segment-ability of cardiac ultrasound images and better performance in all metrics. PMID:26089965
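    A minimal sketch of the PCA half of such a fusion scheme: weight two registered views by the leading eigenvector of their joint covariance, so the more informative image contributes more to the fused result. The DWT decomposition stage of the paper is omitted here for brevity, and the function below is an illustration, not the authors' implementation.

    ```python
    import numpy as np

    def pca_fusion(img1, img2):
        """Fuse two registered images with weights from the leading
        principal component of their joint covariance."""
        data = np.stack([img1.ravel(), img2.ravel()])
        cov = np.cov(data)                       # 2x2 covariance of the two views
        vals, vecs = np.linalg.eigh(cov)
        v = np.abs(vecs[:, np.argmax(vals)])     # leading eigenvector
        w = v / v.sum()                          # normalise to convex weights
        return w[0] * img1 + w[1] * img2
    ```

    Because the weights are non-negative and sum to one, the fused image is a per-pixel convex combination of the inputs, which preserves the intensity range of the source echoes.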

  15. Planning of electroporation-based treatments using Web-based treatment-planning software.

    PubMed

    Pavliha, Denis; Kos, Bor; Marčan, Marija; Zupanič, Anže; Serša, Gregor; Miklavčič, Damijan

    2013-11-01

    Electroporation-based treatment combining high-voltage electric pulses and poorly permanent cytotoxic drugs, i.e., electrochemotherapy (ECT), is currently used for treating superficial tumor nodules by following standard operating procedures. Besides ECT, another electroporation-based treatment, nonthermal irreversible electroporation (N-TIRE), is also efficient at ablating deep-seated tumors. To perform ECT or N-TIRE of deep-seated tumors, following standard operating procedures is not sufficient and patient-specific treatment planning is required for successful treatment. Treatment planning is required because of the use of individual long-needle electrodes and the diverse shape, size and location of deep-seated tumors. Many institutions that already perform ECT of superficial metastases could benefit from treatment-planning software that would enable the preparation of patient-specific treatment plans. To this end, we have developed a Web-based treatment-planning software for planning electroporation-based treatments that does not require prior engineering knowledge from the user (e.g., the clinician). The software includes algorithms for automatic tissue segmentation and, after segmentation, generation of a 3D model of the tissue. The procedure allows the user to define how the electrodes will be inserted. Finally, electric field distribution is computed, the position of electrodes and the voltage to be applied are optimized using the 3D model and a downloadable treatment plan is made available to the user.

  16. A fully convolutional networks (FCN) based image segmentation algorithm in binocular imaging system

    NASA Astrophysics Data System (ADS)

    Long, Zourong; Wei, Biao; Feng, Peng; Yu, Pengwei; Liu, Yuanyuan

    2018-01-01

    This paper proposes an image segmentation algorithm based on fully convolutional networks (FCN) for a binocular imaging system under various circumstances. Image segmentation is addressed through semantic segmentation: the FCN classifies pixels so as to achieve semantic-level image segmentation. Unlike classical convolutional neural networks (CNN), the FCN uses convolution layers instead of fully connected layers, so it can accept images of arbitrary size. In this paper, we combine the convolutional neural network and scale-invariant feature matching to solve the problem of visual positioning under different scenarios. All high-resolution images are captured with our calibrated binocular imaging system, and several groups of test data are collected to verify the method. The experimental results show that the binocular images are effectively segmented without over-segmentation. With these segmented images, feature matching via the SURF method is implemented to obtain regional information for further image processing. The final positioning procedure shows that the results are acceptable in the range of 1.4 to 1.6 m, with a distance error of less than 10 mm.

  17. From 2D to 3D Supervised Segmentation and Classification for Cultural Heritage Applications

    NASA Astrophysics Data System (ADS)

    Grilli, E.; Dininno, D.; Petrucci, G.; Remondino, F.

    2018-05-01

    The digital management of architectural heritage information is still a complex problem, as a heritage object requires an integrated representation of various types of information in order to develop appropriate restoration or conservation strategies. Currently, there is extensive research focused on automatic procedures for segmentation and classification of 3D point clouds or meshes, which can accelerate the study of a monument and integrate it with heterogeneous information and attributes useful to characterize and describe the surveyed object. The aim of this study is to propose an optimal, repeatable and reliable procedure to manage various types of 3D surveying data and associate them with such information and attributes. In particular, this paper presents an approach for classifying 3D heritage models, starting from the segmentation of their textures based on supervised machine learning methods. Experimental results on three different case studies demonstrate that the proposed approach is effective and has considerable further potential.

  18. Twelve automated thresholding methods for segmentation of PET images: a phantom study.

    PubMed

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M

    2012-06-21

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing of high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information about the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on a clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with the 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. The Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
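    Of the twelve algorithms, the Ridler method (iterative intermeans, also known as ISODATA thresholding) is the simplest to sketch: starting from the image mean, repeatedly set the threshold to the midpoint of the two class means until it stabilises. A minimal version, not tied to the authors' implementation:

    ```python
    import numpy as np

    def ridler_threshold(img, tol=1e-3):
        """Ridler-Calvard (ISODATA) clustering threshold."""
        img = np.asarray(img, dtype=float)
        t = img.mean()                      # initial guess: global mean
        while True:
            fg, bg = img[img > t], img[img <= t]
            t_new = 0.5 * (fg.mean() + bg.mean())   # midpoint of class means
            if abs(t_new - t) < tol:
                return t_new
            t = t_new
    ```

    Unlike the fixed 42%-of-maximum rule, the threshold adapts to each image's histogram, which is why such methods can cope with varying signal-to-background ratios.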

  19. Twelve automated thresholding methods for segmentation of PET images: a phantom study

    NASA Astrophysics Data System (ADS)

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M.

    2012-06-01

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing of high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information about the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on a clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with the 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. The Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.

  20. Needle segmentation using 3D Hough transform in 3D TRUS guided prostate transperineal therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu Wu; Imaging Research Laboratories, Robarts Research Institute, Western University, London, Ontario N6A 5K8; Yuchi Ming

    Purpose: Prostate adenocarcinoma is the most common noncutaneous malignancy in American men, with over 200 000 new cases diagnosed each year. Prostate interventional therapy, such as cryotherapy and brachytherapy, is an effective treatment for prostate cancer. Its success relies on correct needle implant position. This paper proposes a robust and efficient needle segmentation method, which acts as an aid to localize the needle in three-dimensional (3D) transrectal ultrasound (TRUS) guided prostate therapy. Methods: Locating the needle in a 3D TRUS image is a three-step process. First, the original 3D ultrasound image containing a needle is cropped; the cropped image is then converted to a binary format based on its histogram. Second, a 3D Hough transform based needle segmentation method is applied to the 3D binary image in order to locate the needle axis. The position of the needle endpoint is finally determined by an optimal threshold based analysis of the intensity probability distribution. The overall efficiency is improved by implementing a coarse-fine searching strategy. The proposed method was validated on tissue-mimicking agar phantoms, chicken breast phantoms, and 3D TRUS patient images from prostate brachytherapy and cryotherapy procedures by comparison to manual segmentation. The robustness of the proposed approach was tested by varying parameters such as needle insertion angle, needle insertion length, binarization threshold level, and cropping size. Results: The validation results indicate that the proposed Hough transform based method is accurate and robust, with an achieved endpoint localization accuracy of 0.5 mm for agar phantom images, 0.7 mm for chicken breast phantom images, and 1 mm for in vivo patient cryotherapy and brachytherapy images. The mean execution time of the needle segmentation algorithm was 2 s for a 3D TRUS image of size 264 × 376 × 630 voxels. Conclusions: The proposed needle segmentation algorithm is accurate, robust, and suitable for 3D TRUS guided prostate transperineal therapy.
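    The dominant-axis idea behind the Hough step can be illustrated with a lightweight stand-in: a least-squares (PCA/SVD) line fit to the coordinates of the binary needle voxels. This is not the paper's 3D Hough transform, which votes over discretised line parameters and is more robust to outliers, but it recovers the same axis on clean data:

    ```python
    import numpy as np

    def needle_axis(voxels):
        """Fit a 3D line (centroid + unit direction) to needle voxel coordinates."""
        pts = np.asarray(voxels, dtype=float)
        centroid = pts.mean(axis=0)
        # the first right singular vector of the centred cloud is its dominant axis
        _, _, vt = np.linalg.svd(pts - centroid)
        direction = vt[0]
        return centroid, direction / np.linalg.norm(direction)
    ```

    In a Hough formulation, each foreground voxel instead votes for all discretised (direction, offset) line parameters passing through it, and the accumulator maximum gives the axis; the PCA fit above corresponds to the noise-free limit of that vote.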

  1. Synthetic aperture imaging in ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Ameri, Golafsoun; Baxter, John S. H.; McLeod, A. Jonathan; Jayaranthe, Uditha L.; Chen, Elvis C. S.; Peters, Terry M.

    2014-03-01

    Ultrasound calibration allows ultrasound images to be incorporated into a variety of interventional applications. Traditional Z-bar calibration procedures rely on wired phantoms with an a priori known geometry. The line fiducials produce small, localized echoes which are then segmented from an array of ultrasound images acquired at different tracked probe positions. In conventional B-mode ultrasound, the wires at greater depths appear blurred and are difficult to segment accurately, limiting the accuracy of ultrasound calibration. This paper presents a novel ultrasound calibration procedure that takes advantage of synthetic aperture imaging to reconstruct high-resolution ultrasound images at arbitrary depths. In these images, line fiducials are much more readily and accurately segmented, leading to decreased calibration error. The proposed calibration technique is compared to one based on B-mode ultrasound. The fiducial localization error was improved from 0.21 mm in conventional B-mode images to 0.15 mm in synthetic aperture images, an improvement of 29%. This resulted in an overall reduction of calibration error from a target registration error of 2.00 mm to 1.78 mm, an improvement of 11%. Synthetic aperture images display greatly improved segmentation capabilities due to their improved resolution and interpretability, resulting in improved calibration.

  2. Modeling of market segmentation for new IT product development

    NASA Astrophysics Data System (ADS)

    Nasiopoulos, Dimitrios K.; Sakas, Damianos P.; Vlachos, D. S.; Mavrogianni, Amanda

    2015-02-01

    Businesses from all Information Technology sectors use market segmentation[1] in their product development[2] and strategic planning[3]. Many studies have concluded that market segmentation is the norm of modern marketing. With the rapid development of technology, customer needs are becoming increasingly diverse. These needs can no longer be satisfied by a one-size-fits-all mass marketing approach. IT businesses can cope with this diversity by pooling customers[4] with similar requirements and buying behavior into segments. Better choices about which segments are the most appropriate to serve can then be made, making the best use of finite resources. Despite the attention segmentation receives and the resources invested in it, growing evidence suggests that businesses have problems operationalizing segmentation[5]. These problems take various forms. It may be assumed that the segmentation process necessarily results in homogeneous groups of customers for whom appropriate marketing programs and procedures can be developed; when this does not hold, the segmentation process a company follows can fail. This raises concerns about what causes segmentation failure and how it might be overcome. To help prevent failure, we created a dynamic simulation model of market segmentation[6] based on the basic factors leading to this segmentation.

  3. Segmentation of bone and soft tissue regions in digital radiographic images of extremities

    NASA Astrophysics Data System (ADS)

    Pakin, S. Kubilay; Gaborski, Roger S.; Barski, Lori L.; Foos, David H.; Parker, Kevin J.

    2001-07-01

    This paper presents an algorithm for segmentation of computed radiography (CR) images of extremities into bone and soft tissue regions. The algorithm is region-based: regions are constructed using a growing procedure with two different statistical tests. Following the growing process, a tissue classification procedure is employed. The purpose of the classification is to label each region as either bone or soft tissue. This binary classification is achieved by a voting procedure that clusters the regions in each neighborhood system into two classes. The voting procedure provides a crucial compromise between local and global analysis of the image, which is necessary due to the strong exposure variations seen on the imaging plate. The existence of regions large enough that exposure variations can be observed across them also makes it necessary to use overlapping blocks during classification. After the classification step, the resulting bone and soft tissue regions are refined by fitting a second-order surface to each tissue and re-evaluating the label of each region according to the distance between the region and the surfaces. The performance of the algorithm is tested on a variety of extremity images using manually segmented images as the gold standard. The experiments showed that our algorithm provided a bone boundary with an average area overlap of 90% compared to the gold standard.
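    A hypothetical sketch of region growing with a simple statistical membership test. The abstract does not give the exact form of the paper's two tests; the z-score rule and tolerance floor below are illustrative assumptions:

    ```python
    from collections import deque
    import numpy as np

    def grow_region(img, seed, z=2.5, tol=10.0):
        """Grow a region from a seed pixel: a 4-connected neighbour joins if it
        lies within z standard deviations (or a tolerance floor) of the current
        region mean."""
        h, w = img.shape
        mask = np.zeros((h, w), dtype=bool)
        mask[seed] = True
        vals = [float(img[seed])]
        q = deque([seed])
        while q:
            y, x = q.popleft()
            for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                    mu, sd = np.mean(vals), np.std(vals)
                    if abs(img[ny, nx] - mu) <= max(z * sd, tol):
                        mask[ny, nx] = True
                        vals.append(float(img[ny, nx]))
                        q.append((ny, nx))
        return mask
    ```

    The tolerance floor matters early on, when the region holds too few pixels for a stable standard deviation; a second, independent test (as in the paper) would further guard against "leaking" across weak boundaries.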

  4. Shape-driven 3D segmentation using spherical wavelets.

    PubMed

    Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen

    2006-01-01

    This paper presents a novel active surface segmentation algorithm using a multiscale shape representation and prior. We define a parametric model of a surface using spherical wavelet functions and learn a prior probability distribution over the wavelet coefficients to model shape variations at different scales and spatial locations in a training set. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior in the segmentation framework. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to the segmentation of brain caudate nucleus, of interest in the study of schizophrenia. Our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm by capturing finer shape details.

  5. Analysis of simulated angiographic procedures. Part 2: extracting efficiency data from audio and video recordings.

    PubMed

    Duncan, James R; Kline, Benjamin; Glaiberman, Craig B

    2007-04-01

    To create and test methods of extracting efficiency data from recordings of simulated renal stent procedures. Task analysis was performed and used to design a standardized testing protocol. Five experienced angiographers then performed 16 renal stent simulations using the Simbionix AngioMentor angiographic simulator. Audio and video recordings of these simulations were captured from multiple vantage points. The recordings were synchronized and compiled. A series of efficiency metrics (procedure time, contrast volume, and tool use) were then extracted from the recordings. The intraobserver and interobserver variability of these individual metrics was also assessed. The metrics were converted to costs and aggregated to determine the fixed and variable costs of a procedure segment or the entire procedure. Task analysis and pilot testing led to a standardized testing protocol suitable for performance assessment. Task analysis also identified seven checkpoints that divided the renal stent simulations into six segments. Efficiency metrics for these different segments were extracted from the recordings and showed excellent intra- and interobserver correlations. Analysis of the individual and aggregated efficiency metrics demonstrated large differences between segments as well as between different angiographers. These differences persisted when efficiency was expressed as either total or variable costs. Task analysis facilitated both protocol development and data analysis. Efficiency metrics were readily extracted from recordings of simulated procedures. Aggregating the metrics and dividing the procedure into segments revealed potential insights that could be easily overlooked because the simulator currently does not attempt to aggregate the metrics and only provides data derived from the entire procedure. The data indicate that analysis of simulated angiographic procedures will be a powerful method of assessing performance in interventional radiology.

  6. Accurate cytogenetic biodosimetry through automated dicentric chromosome curation and metaphase cell selection

    PubMed Central

    Wilkins, Ruth; Flegal, Farrah; Knoll, Joan H.M.; Rogan, Peter K.

    2017-01-01

    Accurate digital image analysis of abnormal microscopic structures relies on high quality images and on minimizing the rates of false positive (FP) and negative objects in images. Cytogenetic biodosimetry detects dicentric chromosomes (DCs) that arise from exposure to ionizing radiation, and determines radiation dose received based on DC frequency. Improvements in automated DC recognition increase the accuracy of dose estimates by reclassifying FP DCs as monocentric chromosomes or chromosome fragments. We also present image segmentation methods to rank high quality digital metaphase images and eliminate suboptimal metaphase cells. A set of chromosome morphology segmentation methods selectively filtered out FP DCs arising primarily from sister chromatid separation, chromosome fragmentation, and cellular debris. This reduced FPs by an average of 55% and was highly specific to these abnormal structures (≥97.7%) in three samples. Additional filters selectively removed images with incomplete, highly overlapped, or missing metaphase cells, or with poor overall chromosome morphologies that increased FP rates. Image selection is optimized and FP DCs are minimized by combining multiple feature based segmentation filters and a novel image sorting procedure based on the known distribution of chromosome lengths. Applying the same image segmentation filtering procedures to both calibration and test samples reduced the average dose estimation error from 0.4 Gy to <0.2 Gy, obviating the need to first manually review these images. This reliable and scalable solution enables batch processing for multiple samples of unknown dose, and meets current requirements for triage radiation biodosimetry of high quality metaphase cell preparations. PMID:29026522

  7. An Image Segmentation Based on a Genetic Algorithm for Determining Soil Coverage by Crop Residues

    PubMed Central

    Ribeiro, Angela; Ranz, Juan; Burgos-Artizzu, Xavier P.; Pajares, Gonzalo; Sanchez del Arco, Maria J.; Navarrete, Luis

    2011-01-01

    Determination of the soil coverage by crop residues after ploughing is a fundamental element of Conservation Agriculture. This paper presents the application of genetic algorithms employed during the fine tuning of the segmentation process of a digital image with the aim of automatically quantifying the residue coverage. In other words, the objective is to achieve a segmentation that would permit the discrimination of the texture of the residue so that the output of the segmentation process is a binary image in which residue zones are isolated from the rest. The RGB images used come from a sample of images in which sections of terrain were photographed with a conventional camera positioned in zenith orientation atop a tripod. The images were taken outdoors under uncontrolled lighting conditions. Up to 92% similarity was achieved between the images obtained by the segmentation process proposed in this paper and the templates made by an elaborate manual tracing process. In addition to the proposed segmentation procedure and the fine tuning procedure that was developed, a global quantification of the soil coverage by residues for the sampled area was achieved that differed by only 0.85% from the quantification obtained using template images. Moreover, the proposed method does not depend on the type of residue present in the image. The study was conducted at the experimental farm “El Encín” in Alcalá de Henares (Madrid, Spain). PMID:22163966
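    A toy sketch of the fine-tuning idea: a genetic algorithm evolving a single segmentation parameter (here reduced to one binarization threshold) to maximise pixel-wise agreement with a manually traced template. The paper tunes a full segmentation process with multiple parameters; everything below, including population size and mutation scale, is illustrative:

    ```python
    import random
    import numpy as np

    def ga_tune_threshold(img, template, pop=20, gens=30, seed=0):
        """Evolve a scalar threshold whose binarisation best matches a template."""
        rng = random.Random(seed)
        lo, hi = float(img.min()), float(img.max())
        genomes = [rng.uniform(lo, hi) for _ in range(pop)]

        def fitness(t):
            # fraction of pixels where the binarised image agrees with the template
            return float(np.mean((img > t) == template))

        for _ in range(gens):
            genomes.sort(key=fitness, reverse=True)
            elite = genomes[: pop // 4]                 # selection (elitism)
            children = []
            while len(elite) + len(children) < pop:
                a, b = rng.sample(elite, 2)             # crossover: blend two parents
                child = 0.5 * (a + b) + rng.gauss(0, 0.05 * (hi - lo))  # mutation
                children.append(min(hi, max(lo, child)))
            genomes = elite + children
        return max(genomes, key=fitness)
    ```

    Because the elite genomes survive each generation unchanged, the best threshold found never degrades, mirroring how the paper's tuning converges toward segmentations close to the manual templates.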

  8. Chemical synthesis of the precursor molecule of the Aequorea green fluorescent protein, subsequent folding, and development of fluorescence

    PubMed Central

    Nishiuchi, Yuji; Inui, Tatsuya; Nishio, Hideki; Bódi, József; Kimura, Terutoshi; Tsuji, Frederick I.; Sakakibara, Shumpei

    1998-01-01

    The present paper describes the total chemical synthesis of the precursor molecule of the Aequorea green fluorescent protein (GFP). The molecule is made up of 238 amino acid residues in a single polypeptide chain and is nonfluorescent. To carry out the synthesis, a procedure, first described in 1981 for the synthesis of complex peptides, was used. The procedure is based on performing segment condensation reactions in solution while providing maximum protection to the segment. The effectiveness of the procedure has been demonstrated by the synthesis of various biologically active peptides and small proteins, such as human angiogenin, a 123-residue protein analogue of ribonuclease A, human midkine, a 121-residue protein, and pleiotrophin, a 136-residue protein analogue of midkine. The GFP precursor molecule was synthesized from 26 fully protected segments in solution, and the final 238-residue peptide was treated with anhydrous hydrogen fluoride to obtain the precursor molecule of GFP containing two Cys(acetamidomethyl) residues. After removal of the acetamidomethyl groups, the product was dissolved in 0.1 M Tris⋅HCl buffer (pH 8.0) in the presence of DTT. After several hours at room temperature, the solution began to emit a green fluorescence (λmax = 509 nm) under near-UV light. Both fluorescence excitation and fluorescence emission spectra were measured and were found to have the same shape and maxima as those reported for native GFP. The present results demonstrate the utility of the segment condensation procedure in synthesizing large protein molecules such as GFP. The result also provides evidence that the formation of the chromophore in GFP is not dependent on any external cofactor. PMID:9811837

  9. Tumor or abnormality identification from magnetic resonance images using statistical region fusion based segmentation.

    PubMed

    Subudhi, Badri Narayan; Thangaraj, Veerakumar; Sankaralingam, Esakkirajan; Ghosh, Ashish

    2016-11-01

    In this article, a statistical fusion based segmentation technique is proposed to identify different abnormalities in magnetic resonance images (MRI). The proposed scheme follows seed selection, region growing-merging and fusion of multiple image segments. Initially, an image is divided into a number of blocks, and for each block we compute the phase component of its Fourier transform. The phase component of each block reflects the gray-level variation within the block but is highly correlated across blocks. Hence a singular value decomposition (SVD) is applied to generate a singular value for each block. A thresholding procedure is then applied to these singular values to identify edgy and smooth regions, and some seed points are selected for segmentation. Considering each seed point, we perform a binary segmentation of the complete MRI, and hence with all seed points we get an equal number of binary images. A parcel based statistical fusion process is used to fuse all the binary images into multiple segments. Effectiveness of the proposed scheme is tested on identifying different abnormalities: prostatic carcinoma detection, tuberculous granuloma identification and intracranial neoplasm or brain tumor detection. The proposed technique is validated by comparing its results against seven state-of-the-art techniques with six performance evaluation measures. Copyright © 2016 Elsevier Inc. All rights reserved.
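The seed-selection step (block-wise FFT phase followed by SVD) can be illustrated as below. This is a minimal sketch under stated assumptions: square blocks, and the largest singular value of the phase matrix used as an edge-activity score; function names are hypothetical.

```python
import numpy as np

def block_edge_scores(image, block=8):
    """For each block, take the phase of its 2-D FFT and use the largest
    singular value of the phase matrix as an edge-activity score; smooth
    blocks score near zero, blocks containing edges score higher."""
    h, w = image.shape
    scores = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            patch = image[i*block:(i+1)*block, j*block:(j+1)*block]
            phase = np.angle(np.fft.fft2(patch))      # phase component
            scores[i, j] = np.linalg.svd(phase, compute_uv=False)[0]
    return scores
```

Thresholding `scores` then separates edgy from smooth blocks, and seed points for the region growing can be drawn from the smooth ones.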

  10. Determination of optimal ultrasound planes for the initialisation of image registration during endoscopic ultrasound-guided procedures.

    PubMed

    Bonmati, Ester; Hu, Yipeng; Gibson, Eli; Uribarri, Laura; Keane, Geri; Gurusami, Kurinchi; Davidson, Brian; Pereira, Stephen P; Clarkson, Matthew J; Barratt, Dean C

    2018-06-01

    Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields-of-view of ultrasound and optical devices, as well as the anatomical variability and limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate. However, the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific EUS-visible anatomical landmark locations to maximise the accuracy and robustness of a feature-based multimodality registration method. A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provide a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised planes versus non-optimised planes using manually segmented CT images and simulated ([Formula: see text]) or retrospective clinical ([Formula: see text]) EUS landmarks. The results show a lower 90th percentile TRE when registration is initialised using the optimised planes compared with a non-optimised initialisation approach (p value [Formula: see text]). The proposed simulation-based method to find optimised EUS planes and landmarks for EUS-guided procedures may have the potential to improve registration accuracy. Further work will investigate applying the technique in a clinical setting.
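The Monte Carlo TRE estimation above can be sketched as: perturb the candidate landmarks with an assumed isotropic Gaussian localisation error, recover a rigid registration (here via the standard Kabsch algorithm, which is one common choice rather than necessarily the paper's), and read off the 90th percentile of the error at a target point. Names and the error model are illustrative.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, Q.mean(0) - R @ P.mean(0)

def tre_percentile(landmarks, target, sigma=1.0, trials=1000, q=90, seed=0):
    """Monte Carlo estimate of the q-th percentile target registration error
    for one candidate landmark configuration, under isotropic Gaussian
    localisation noise of standard deviation `sigma`."""
    rng = np.random.default_rng(seed)
    errs = np.empty(trials)
    for k in range(trials):
        noisy = landmarks + rng.normal(0.0, sigma, landmarks.shape)
        R, t = kabsch(noisy, landmarks)               # register noisy -> true
        errs[k] = np.linalg.norm((R @ target + t) - target)
    return float(np.percentile(errs, q))
```

Ranking candidate EUS planes by this percentile reproduces the selection criterion described in the abstract: widely spread landmark configurations yield lower predicted TREs than tightly clustered ones.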

  11. Multi-atlas and label fusion approach for patient-specific MRI based skull estimation.

    PubMed

    Torrado-Carvajal, Angel; Herraiz, Joaquin L; Hernandez-Tamames, Juan A; San Jose-Estepar, Raul; Eryaman, Yigitcan; Rozenholc, Yves; Adalsteinsson, Elfar; Wald, Lawrence L; Malpica, Norberto

    2016-04-01

    MRI-based skull segmentation is a useful procedure for many imaging applications. This study describes a methodology for automatic segmentation of the complete skull from a single T1-weighted volume. The skull is estimated using a multi-atlas segmentation approach. Using a whole head computed tomography (CT) scan database, the skull in a new MRI volume is detected by nonrigid image registration of the volume to every CT, and combination of the individual segmentations by label-fusion. We have compared Majority Voting, Simultaneous Truth and Performance Level Estimation (STAPLE), Shape Based Averaging (SBA), and the Selective and Iterative Method for Performance Level Estimation (SIMPLE) algorithms. The pipeline has been evaluated quantitatively using images from the Retrospective Image Registration Evaluation database (reaching an overlap of 72.46 ± 6.99%), a clinical CT-MR dataset (maximum overlap of 78.31 ± 6.97%), and a whole head CT-MRI pair (maximum overlap 78.68%). A qualitative evaluation has also been performed on MRI acquisition of volunteers. It is possible to automatically segment the complete skull from MRI data using a multi-atlas and label fusion approach. This will allow the creation of complete MRI-based tissue models that can be used in electromagnetic dosimetry applications and attenuation correction in PET/MR. © 2015 Wiley Periodicals, Inc.
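Of the label-fusion rules compared, Majority Voting is simple enough to sketch; a minimal per-voxel version for binary masks is shown below (STAPLE, SBA and SIMPLE can be viewed as weighted or iterative refinements of the same idea; the function name is hypothetical).

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse binary masks from several registered atlases by per-voxel
    strict majority voting: a voxel is labelled 1 when more than half
    of the atlases label it 1."""
    stack = np.stack(label_maps)                   # (n_atlases, ...) of {0, 1}
    return (stack.sum(axis=0) * 2 > stack.shape[0]).astype(np.uint8)
```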

  12. Shape-Driven 3D Segmentation Using Spherical Wavelets

    PubMed Central

    Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen

    2013-01-01

    This paper presents a novel active surface segmentation algorithm using a multiscale shape representation and prior. We define a parametric model of a surface using spherical wavelet functions and learn a prior probability distribution over the wavelet coefficients to model shape variations at different scales and spatial locations in a training set. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior in the segmentation framework. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to the segmentation of brain caudate nucleus, of interest in the study of schizophrenia. Our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm by capturing finer shape details. PMID:17354875

  13. Automatic pelvis segmentation from x-ray images of a mouse model

    NASA Astrophysics Data System (ADS)

    Al Okashi, Omar M.; Du, Hongbo; Al-Assam, Hisham

    2017-05-01

    The automatic detection and quantification of skeletal structures has a variety of applications in biological research. Accurate segmentation of the pelvis from X-ray images of mice in a high-throughput project such as the Mouse Genomes Project not only saves time and cost but also helps achieve an unbiased quantitative analysis within the phenotyping pipeline. This paper proposes an automatic solution for pelvis segmentation based on structural and orientation properties of the pelvis in X-ray images. The solution consists of three stages: pre-processing the image to extract the pelvis area, preparing an initial pelvis mask, and final pelvis segmentation. Experimental results on a set of 100 X-ray images showed consistent performance of the algorithm. The automated solution overcomes the weaknesses of a manual annotation procedure, where intra- and inter-observer variations cannot be avoided.

  14. Engineering flight and guest pilot evaluation report, phase 2. [DC 8 aircraft

    NASA Technical Reports Server (NTRS)

    Morrison, J. A.; Anderson, E. B.; Brown, G. W.; Schwind, G. K.

    1974-01-01

    Prior to the flight evaluation, the two-segment profile capabilities of the DC-8-61 were evaluated and flight procedures were developed in a flight simulator at the UA Flight Training Center in Denver, Colorado. The flight evaluation reported was conducted to determine the validity of the simulation results, further develop the procedures and use of the area navigation system in the terminal area, certify the system for line operation, and obtain evaluations of the system and procedures by a number of pilots from the industry. The full area navigation capabilities of the special equipment installed were developed to provide terminal area guidance for two-segment approaches. The objectives of this evaluation were: (1) perform an engineering flight evaluation sufficient to certify the two-segment system for the six-month in-service evaluation; (2) evaluate the suitability of a modified RNAV system for flying two-segment approaches; and (3) provide evaluation of the two-segment approach by management and line pilots.

  15. Segmentation of clustered cells in negative phase contrast images with integrated light intensity and cell shape information.

    PubMed

    Wang, Y; Wang, C; Zhang, Z

    2018-05-01

    Automated cell segmentation plays a key role in characterisations of cell behaviours for both biology research and clinical practices. Currently, the segmentation of clustered cells still remains as a challenge and is the main reason for false segmentation. In this study, the emphasis was put on the segmentation of clustered cells in negative phase contrast images. A new method was proposed to combine both light intensity and cell shape information through the construction of grey-weighted distance transform (GWDT) within preliminarily segmented areas. With the constructed GWDT, the clustered cells can be detected and then separated with a modified region skeleton-based method. Moreover, a contour expansion operation was applied to get optimised detection of cell boundaries. In this paper, the working principle and detailed procedure of the proposed method are described, followed by the evaluation of the method on clustered cell segmentation. Results show that the proposed method achieves an improved performance in clustered cell segmentation compared with other methods, with 85.8% and 97.16% accuracy rate for clustered cells and all cells, respectively. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
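A grey-weighted distance transform is, in essence, a geodesic distance in which bright (or dark) pixels are expensive to cross, which is what lets intensity and shape information combine when separating touching cells. A minimal Dijkstra-based sketch is shown below; this is an illustration of the transform itself, not the authors' implementation, and the names are hypothetical.

```python
import heapq
import numpy as np

def gwdt(intensity, mask, seeds):
    """Grey-weighted distance transform inside `mask`: the geodesic
    distance from the seed pixels, where stepping onto a pixel costs
    that pixel's intensity (4-connected neighbourhood)."""
    h, w = intensity.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for (r, c) in seeds:
        dist[r, c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:                      # stale queue entry
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and mask[rr, cc]:
                nd = d + intensity[rr, cc]
                if nd < dist[rr, cc]:
                    dist[rr, cc] = nd
                    heapq.heappush(heap, (nd, rr, cc))
    return dist
```

Running this once per detected cell centre and assigning each pixel to the seed with the smallest grey-weighted distance yields a watershed-like separation of a clustered region.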

  16. Segmentation of Large Unstructured Point Clouds Using Octree-Based Region Growing and Conditional Random Fields

    NASA Astrophysics Data System (ADS)

    Bassier, M.; Bonduel, M.; Van Genechten, B.; Vergauwen, M.

    2017-11-01

    Point cloud segmentation is a crucial step in scene understanding and interpretation. The goal is to decompose the initial data into sets of workable clusters with similar properties. Additionally, it is a key aspect in the automated procedure from point cloud data to BIM. Current approaches typically only segment a single type of primitive such as planes or cylinders. Also, current algorithms suffer from oversegmenting the data and are often sensor or scene dependent. In this work, a method is presented to automatically segment large unstructured point clouds of buildings. More specifically, the segmentation is formulated as a graph optimisation problem. First, the data is oversegmented with a greedy octree-based region growing method. The growing is conditioned on the segmentation of planes as well as smooth surfaces. Next, the candidate clusters are represented by a Conditional Random Field after which the most likely configuration of candidate clusters is computed given a set of local and contextual features. The experiments prove that the used method is a fast and reliable framework for unstructured point cloud segmentation. Processing speeds up to 40,000 points per second are recorded for the region growing. Additionally, the recall and precision of the graph clustering is approximately 80%. Overall, nearly 22% of oversegmentation is reduced by clustering the data. These clusters will be classified and used as a basis for the reconstruction of BIM models.

  17. Cardiac Multi-detector CT Segmentation Based on Multiscale Directional Edge Detector and 3D Level Set.

    PubMed

    Antunes, Sofia; Esposito, Antonio; Palmisano, Anna; Colantoni, Caterina; Cerutti, Sergio; Rizzo, Giovanna

    2016-05-01

    Extraction of the cardiac surfaces of interest from multi-detector computed tomographic (MDCT) data is a pre-requisite step for cardiac analysis, as well as for image guidance procedures. Most of the existing methods need manual corrections, which is time-consuming. We present a fully automatic segmentation technique for the extraction of the right ventricle, left ventricular endocardium and epicardium from MDCT images. The method consists of a 3D level set surface evolution approach coupled to a new stopping function based on a multiscale directional second derivative Gaussian filter, which is able to stop propagation precisely on the real boundary of the structures of interest. We validated the segmentation method on 18 MDCT volumes from healthy and pathologic subjects using manual segmentation performed by a team of expert radiologists as gold standard. Segmentation errors were assessed for each structure, resulting in a surface-to-surface mean error below 0.5 mm and a percentage of surface distances with errors less than 1 mm above 80%. Moreover, in comparison to other segmentation approaches proposed in previous work, our method showed improved accuracy (the percentage of surface distances with errors less than 1 mm increased by 8-20% for all structures). The obtained results suggest that our approach is accurate and effective for the segmentation of ventricular cavities and myocardium from MDCT images.

  18. Hybrid region merging method for segmentation of high-resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Xueliang; Xiao, Pengfeng; Feng, Xuezhi; Wang, Jiangeng; Wang, Zuo

    2014-12-01

    Image segmentation remains a challenging problem for object-based image analysis. In this paper, a hybrid region merging (HRM) method is proposed to segment high-resolution remote sensing images. HRM integrates the advantages of global-oriented and local-oriented region merging strategies into a unified framework. The globally most-similar pair of regions is used to determine the starting point of a growing region, which provides an elegant way to avoid the problem of starting point assignment and to enhance the optimization ability for local-oriented region merging. During the region growing procedure, the merging iterations are constrained within the local vicinity, so that the segmentation is accelerated and can reflect the local context, as compared with the global-oriented method. A set of high-resolution remote sensing images is used to test the effectiveness of the HRM method, and three region-based remote sensing image segmentation methods are adopted for comparison, including the hierarchical stepwise optimization (HSWO) method, the local-mutual best region merging (LMM) method, and the multiresolution segmentation (MRS) method embedded in eCognition Developer software. Both the supervised evaluation and visual assessment show that HRM performs better than HSWO and LMM by combining both their advantages. The segmentation results of HRM and MRS are visually comparable, but HRM can describe objects as single regions better than MRS, and the supervised and unsupervised evaluation results further prove the superiority of HRM.
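The global-oriented strategy that HRM builds on — repeatedly merging the most-similar pair of adjacent regions — can be sketched on a region adjacency graph. The sketch below is illustrative only: mean-intensity difference stands in for the paper's similarity measure, and it implements the plain global baseline (comparable to HSWO) rather than the full hybrid scheme.

```python
def merge_regions(means, sizes, adjacency, n_final):
    """Greedy region merging: while more than `n_final` regions remain,
    merge the globally most-similar pair of adjacent regions, where
    similarity is the absolute difference of region mean intensities."""
    means, sizes = dict(means), dict(sizes)
    adj = {k: set(v) for k, v in adjacency.items()}
    while len(means) > n_final:
        a, b = min(((a, b) for a in adj for b in adj[a] if a < b),
                   key=lambda p: abs(means[p[0]] - means[p[1]]))
        total = sizes[a] + sizes[b]
        means[a] = (means[a] * sizes[a] + means[b] * sizes[b]) / total
        sizes[a] = total
        adj[a] |= adj.pop(b) - {a, b}          # inherit b's neighbours
        for n in adj:                          # rewire edges b -> a
            if b in adj[n]:
                adj[n].remove(b)
                if n != a:
                    adj[n].add(a)
        del means[b], sizes[b]
    return means, sizes
```

HRM's refinement is to use the globally most-similar pair only as the starting point of a growing region and then constrain further merges to the local vicinity, which is what accelerates the procedure and makes it context-aware.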

  19. Operational flight evaluation of the two-segment approach for use in airline service

    NASA Technical Reports Server (NTRS)

    Schwind, G. K.; Morrison, J. A.; Nylen, W. E.; Anderson, E. B.

    1975-01-01

    United Airlines has developed and evaluated a two-segment noise abatement approach procedure for use on Boeing 727 aircraft in air carrier service. In a flight simulator, the two-segment approach was studied in detail and a profile and procedures were developed. Equipment adaptable to contemporary avionics and navigation systems was designed and manufactured by Collins Radio Company and was installed and evaluated in B-727-200 aircraft. The equipment, profile, and procedures were evaluated out of revenue service by pilots representing government agencies, airlines, airframe manufacturers, and professional pilot associations. A system was then placed into scheduled airline service for six months during which 555 two-segment approaches were flown at three airports by 55 airline pilots. The system was determined to be safe, easy to fly, and compatible with the airline operational environment.

  20. Automatic colonic lesion detection and tracking in endoscopic videos

    NASA Astrophysics Data System (ADS)

    Li, Wenjing; Gustafsson, Ulf; A-Rahim, Yoursif

    2011-03-01

    The biology of colorectal cancer offers an opportunity for both early detection and prevention. Compared with other imaging modalities, optical colonoscopy is the procedure of choice for simultaneous detection and removal of colonic polyps. Computer assisted screening makes it possible to assist physicians and potentially improve the accuracy of the diagnostic decision during the exam. This paper presents an unsupervised method to detect and track colonic lesions in endoscopic videos. The aim of the lesion screening and tracking is to facilitate detection of polyps and abnormal mucosa in real time as the physician is performing the procedure. For colonic lesion detection, the conventional marker controlled watershed based segmentation is used to segment the colonic lesions, followed by an adaptive ellipse fitting strategy to further validate the shape. For colonic lesion tracking, a mean shift tracker with background modeling is used to track the target region from the detection phase. The approach has been tested on colonoscopy videos acquired during regular colonoscopic procedures and demonstrated promising results.

  1. Deep residual networks for automatic segmentation of laparoscopic videos of the liver

    NASA Astrophysics Data System (ADS)

    Gibson, Eli; Robu, Maria R.; Thompson, Stephen; Edwards, P. Eddie; Schneider, Crispin; Gurusamy, Kurinchi; Davidson, Brian; Hawkes, David J.; Barratt, Dean C.; Clarkson, Matthew J.

    2017-03-01

    Motivation: For primary and metastatic liver cancer patients undergoing liver resection, a laparoscopic approach can reduce recovery times and morbidity while offering equivalent curative results; however, only about 10% of tumours reside in anatomical locations that are currently accessible for laparoscopic resection. Augmenting laparoscopic video with registered vascular anatomical models from pre-procedure imaging could support using laparoscopy in a wider population. Segmentation of liver tissue on laparoscopic video supports the robust registration of anatomical liver models by filtering out false anatomical correspondences between pre-procedure and intra-procedure images. In this paper, we present a convolutional neural network (CNN) approach to liver segmentation in laparoscopic liver procedure videos. Method: We defined a CNN architecture comprising fully-convolutional deep residual networks with multi-resolution loss functions. The CNN was trained in a leave-one-patient-out cross-validation on 2050 video frames from 6 liver resections and 7 laparoscopic staging procedures, and evaluated using the Dice score. Results: The CNN yielded segmentations with Dice scores >=0.95 for the majority of images; however, the inter-patient variability in median Dice score was substantial. Four failure modes were identified from low scoring segmentations: minimal visible liver tissue, inter-patient variability in liver appearance, automatic exposure correction, and pathological liver tissue that mimics non-liver tissue appearance. Conclusion: CNNs offer a feasible approach for accurately segmenting liver from other anatomy on laparoscopic video, but additional data or computational advances are necessary to address challenges due to the high inter-patient variability in liver appearance.

  2. Applicability of NASA (ARC) two-segment approach procedures to Boeing Aircraft

    NASA Technical Reports Server (NTRS)

    Allison, R. L.

    1974-01-01

    An engineering study to determine the feasibility of applying the NASA (ARC) two-segment approach procedures and avionics to the Boeing fleet of commercial jet transports is presented. This feasibility study is concerned with the speed/path control and systems compability aspects of the procedures. Path performance data are provided for representative Boeing 707/727/737/747 passenger models. Thrust margin requirements for speed/path control are analyzed for still air and shearing tailwind conditions. Certification of the two-segment equipment and possible effects on existing airplane certification are discussed. Operational restrictions on use of the procedures with current autothrottles and in icing or reported tailwind conditions are recommended. Using the NASA/UAL 727 procedures as a baseline, maximum upper glide slopes for representative 707/727/737/747 models are defined as a starting point for further study and/or flight evaluation programs.

  3. On-line range images registration with GPGPU

    NASA Astrophysics Data System (ADS)

    Będkowski, J.; Naruniec, J.

    2013-03-01

    This paper concerns the implementation of algorithms for two important aspects of modern 3D data processing: data registration and segmentation. The solution proposed for the first topic is based on 3D space decomposition, while the latter is based on image processing and local neighbourhood search. Data processing is implemented using NVIDIA compute unified device architecture (NVIDIA CUDA) parallel computation. The result of the segmentation is a coloured map where different colours correspond to different objects, such as walls, floor and stairs. The research is related to the problem of collecting 3D data with an RGB-D camera mounted on a rotating head, to be used in mobile robot applications. The data registration algorithm is aimed at on-line processing. The iterative closest point (ICP) approach is chosen as the registration method. Computations are based on a parallel fast nearest neighbour search. This procedure decomposes 3D space into cubic buckets and, therefore, the time of the matching is deterministic. The first data segmentation technique uses accelerometers integrated with the RGB-D sensor to obtain rotation compensation, and an image processing method to define prerequisites of the known categories. The second technique uses the adapted nearest neighbour search procedure to obtain normal vectors for each range point.
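The cubic-bucket decomposition that makes the nearest-neighbour lookup time deterministic can be sketched as a hash on integer cell coordinates. A minimal CPU sketch is shown below (the paper's version runs in CUDA); it searches only the 27 buckets surrounding the query, so it assumes the true neighbour lies within one cell of the query point, and all names are hypothetical.

```python
from collections import defaultdict
import numpy as np

def build_buckets(points, cell):
    """Decompose 3-D space into cubic buckets of side `cell`,
    hashing each point index under its integer cell coordinates."""
    buckets = defaultdict(list)
    for idx, p in enumerate(points):
        buckets[tuple((p // cell).astype(int))].append(idx)
    return buckets

def nearest(query, points, buckets, cell):
    """Nearest neighbour of `query`, scanning only the 3x3x3 block of
    buckets around it -- a bounded, deterministic amount of work."""
    key = (query // cell).astype(int)
    best, best_d = -1, np.inf
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for idx in buckets.get((key[0] + dx, key[1] + dy, key[2] + dz), ()):
                    d = np.linalg.norm(points[idx] - query)
                    if d < best_d:
                        best, best_d = idx, d
    return best, best_d
```

Inside an ICP loop, one bucket structure is built for the reference cloud once per iteration and every source point is matched with this constant-time lookup, which is what keeps the per-point matching cost independent of the cloud size.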

  4. Acceleration of Image Segmentation Algorithm for (Breast) Mammogram Images Using High-Performance Reconfigurable Dataflow Computers

    PubMed Central

    Filipovic, Nenad D.

    2017-01-01

    Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. Breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. It is often the case that already existing image databases have large sets of data whose processing requires a lot of time, and thus the acceleration of each of the processing stages in breast cancer detection is a very important issue. In this paper, the implementation of the already existing algorithm for region-of-interest based image segmentation for mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. As a dataflow engine (DFE) of such HPRDC, Maxeler's acceleration card is used. The experiments for examining the acceleration of that algorithm on the Reconfigurable Dataflow Computers (RDCs) are performed with two types of mammogram images with different resolutions. There were, also, several DFE configurations and each of them gave a different acceleration value of algorithm execution. Those acceleration values are presented and experimental results showed good acceleration. PMID:28611851

  5. Acceleration of Image Segmentation Algorithm for (Breast) Mammogram Images Using High-Performance Reconfigurable Dataflow Computers.

    PubMed

    Milankovic, Ivan L; Mijailovic, Nikola V; Filipovic, Nenad D; Peulic, Aleksandar S

    2017-01-01

    Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. Breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. It is often the case that already existing image databases have large sets of data whose processing requires a lot of time, and thus the acceleration of each of the processing stages in breast cancer detection is a very important issue. In this paper, the implementation of the already existing algorithm for region-of-interest based image segmentation for mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. As a dataflow engine (DFE) of such HPRDC, Maxeler's acceleration card is used. The experiments for examining the acceleration of that algorithm on the Reconfigurable Dataflow Computers (RDCs) are performed with two types of mammogram images with different resolutions. There were, also, several DFE configurations and each of them gave a different acceleration value of algorithm execution. Those acceleration values are presented and experimental results showed good acceleration.

  6. SU-E-J-220: Evaluation of Atlas-Based Auto-Segmentation (ABAS) in Head-And-Neck Adaptive Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Q; Yan, D

    2014-06-01

    Purpose: Evaluate the accuracy of atlas-based auto segmentation of organs at risk (OARs) on both helical CT (HCT) and cone beam CT (CBCT) images in head and neck (HN) cancer adaptive radiotherapy (ART). Methods: Six HN patients treated in the ART process were included in this study. For each patient, three images were selected: pretreatment planning CT (PreTx-HCT), in-treatment CT for replanning (InTx-HCT) and a CBCT acquired on the same day as the InTx-HCT. Three clinical procedures of auto segmentation and deformable registration performed in the ART process were evaluated: a) auto segmentation on PreTx-HCT using multi-subject atlases, b) intra-patient propagation of OARs from PreTx-HCT to InTx-HCT using deformable HCT-to-HCT image registration, and c) intra-patient propagation of OARs from PreTx-HCT to CBCT using deformable CBCT-to-HCT image registration. Seven OARs (brainstem, cord, L/R parotid, L/R submandibular gland and mandible) were manually contoured on PreTx-HCT and InTx-HCT for comparison. In addition, manual contours on InTx-CT were copied onto the same-day CBCT, and a local region rigid body registration was performed accordingly for each individual OAR. For procedures a) and b), auto contours were compared to manual contours, and for c) auto contours were compared to those rigidly transferred contours on CBCT. Dice similarity coefficients (DSC) and mean surface distances of agreement (MSDA) were calculated for evaluation. Results: For procedure a), the mean DSC/MSDA of most OARs are >80%/±2mm. For intra-patient HCT-to-HCT propagation, the results improved to >85%/±1.5mm. Compared to HCT-to-HCT, the mean DSC for HCT-to-CBCT propagation drops ∼2–3% and MSDA increases ∼0.2mm. This result indicates that the inferior imaging quality of CBCT only slightly degrades auto propagation performance. Conclusion: Auto segmentation and deformable propagation can generate OAR structures on HCT and CBCT images with clinically acceptable accuracy. Therefore, they can be reliably implemented in the clinical HN ART process.
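The Dice similarity coefficient used as the evaluation measure above is straightforward to compute for binary masks; a minimal version follows (the helper name is hypothetical).

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())
```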

  7. Thoracoscopic stapler-based "bidirectional" segmentectomy for posterior basal segment (S10) and its variants.

    PubMed

    Sato, Masaaki; Murayama, Tomonori; Nakajima, Jun

    2018-04-01

    Thoracoscopic segmentectomy for the posterior basal segment (S10) and its variants (e.g., S9+10 and S10b+c combined subsegmentectomy) is one of the most challenging anatomical segmentectomies. A stapler-based segmentectomy is attractive because it simplifies the operation and prevents post-operative air leakage. However, this approach makes thoracoscopic S10 segmentectomy even trickier. The challenges stem mostly from three factors: first, as with other basal segments, "three-dimensional" stapling is needed to fold a cuboidal segment; second, the corresponding pulmonary artery does not directly face the interlobar fissure or the hilum, making identification of the target artery difficult; third, the anatomy of S10 and adjacent segments such as the superior (S6) and medial basal (S7) segments is variable. To overcome these challenges, this article summarizes the "bidirectional approach", which allows for solid confirmation of the anatomy while avoiding separation of S6 and the basal segment. To assist this approach under a limited thoracoscopic view, we also show stapling techniques to fold the cuboidal segment with the aid of "standing stitches". Attention should also be paid to the anatomy of adjacent segments, particularly that of S7, which tends to be congested after stapling. The use of virtual-assisted lung mapping (VAL-MAP) is also recommended to demarcate resection lines, because it flexibly allows for complex procedures such as combined subsegmentectomy (e.g., S10b+c), extended segmentectomy (e.g., S10+S9b), and non-anatomically extended segmentectomy.

  8. ST-segment resolution with bivalirudin versus heparin and routine glycoprotein IIb/IIIa inhibitors started in the ambulance in ST-segment elevation myocardial infarction patients transported for primary percutaneous coronary intervention: The EUROMAX ST-segment resolution substudy.

    PubMed

    Van't Hof, Arnoud; Giannini, Francesco; Ten Berg, Jurrien; Tolsma, Rudolf; Clemmensen, Peter; Bernstein, Debra; Coste, Pierre; Goldstein, Patrick; Zeymer, Uwe; Hamm, Christian; Deliargyris, Efthymios; Steg, Philippe G

    2017-08-01

    Myocardial reperfusion after primary percutaneous coronary intervention (PCI) can be assessed by the extent of post-procedural ST-segment resolution. The European Ambulance Acute Coronary Syndrome Angiography (EUROMAX) trial compared pre-hospital bivalirudin with pre-hospital heparin or enoxaparin, with or without GPIIb/IIIa inhibitors (GPIs), in primary PCI. This nested substudy was performed in centres routinely using pre-hospital GPI in order to compare the impact of the randomized treatments on ST-segment resolution after primary PCI. Residual cumulative ST-segment deviation on the single one-hour post-procedure electrocardiogram (ECG), assessed by an independent core laboratory, was the primary endpoint. It was calculated that 762 evaluable patients were needed to show non-inferiority (85% power, alpha 2.5%) between randomized treatments. A total of 871 patients participated, with electrocardiographic data available in 824 (95%). Residual ST-segment deviation one hour after PCI was 3.8±4.9 mm versus 3.9±5.2 mm for bivalirudin and heparin+GPI, respectively (p=0.0019 for non-inferiority). Overall, there were no differences between randomized treatments in any measure of ST-segment resolution either before or after the index procedure. Pre-hospital treatment with bivalirudin is non-inferior to pre-hospital heparin+GPI with regard to residual ST-segment deviation and ST-segment resolution, reflecting comparable myocardial reperfusion with the two strategies.

  9. Pressure Oscillations and Structural Vibrations in Space Shuttle RSRM and ETM-3 Motors

    NASA Technical Reports Server (NTRS)

    Mason, D. R.; Morstadt, R. A.; Cannon, S. M.; Gross, E. G.; Nielsen, D. B.

    2004-01-01

    The complex interactions between internal motor pressure oscillations resulting from vortex shedding, the motor's internal acoustic modes, and the motor's structural vibration modes were assessed for the Space Shuttle four-segment booster Reusable Solid Rocket Motor and for the five-segment engineering test motor ETM-3. Two approaches were applied: 1) a predictive procedure based on numerically solving modal representations of a solid rocket motor's acoustic equations of motion, and 2) a computational fluid dynamics two-dimensional axisymmetric large eddy simulation at discrete motor burn times.

  10. NSEG, a segmented mission analysis program for low and high speed aircraft. Volume 1: Theoretical development

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Rozendaal, H. L.

    1977-01-01

    A rapid mission analysis code based on the use of approximate flight path equations of motion is presented. Equation form varies with segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed characteristics were specified in tabular form. The code also contains extensive flight envelope performance mapping capabilities. Approximate takeoff and landing analyses were performed. At high speeds, centrifugal lift effects were accounted for. Extensive turbojet and ramjet engine scaling procedures were incorporated in the code.

  11. Recuperator assembly and procedures

    DOEpatents

    Kang, Yungmo; McKeirnan, Jr., Robert D.

    2008-08-26

    A construction of recuperator core segments is provided which insures proper assembly of the components of the recuperator core segment, and of a plurality of recuperator core segments. Each recuperator core segment must be constructed so as to prevent nesting of fin folds of the adjacent heat exchanger foils of the recuperator core segment. A plurality of recuperator core segments must be assembled together so as to prevent nesting of adjacent fin folds of adjacent recuperator core segments.

  12. In-Situ Observations of Longitudinal Compression Damage in Carbon-Epoxy Cross Ply Laminates Using Fast Synchrotron Radiation Computed Tomography

    NASA Technical Reports Server (NTRS)

    Bergan, Andrew C.; Garcea, Serafina C.

    2017-01-01

    The role of longitudinal compressive failure mechanisms in notched cross-ply laminates is studied experimentally with in-situ synchrotron radiation based computed tomography. Carbon/epoxy specimens loaded monotonically in uniaxial compression exhibited a quasi-stable failure process, which was captured with computed tomography scans recorded continuously with a temporal resolution of 2.4 seconds and a spatial resolution of 1.1 microns per voxel. A detailed chronology of the initiation and propagation of longitudinal matrix splitting cracks, in-plane and out-of-plane kink bands, shear-driven fiber failure, delamination, and transverse matrix cracks is provided with a focus on kink bands as the dominant failure mechanism. An automatic segmentation procedure is developed to identify the boundary surfaces of a kink band. The segmentation procedure enables 3-dimensional visualization of the kink band and conveys the orientation, inclination, and spatial variation of the kink band. The kink band inclination and length are examined using the segmented data, revealing tunneling and spatial variations not apparent from studying the 2-dimensional section data.

  13. Topography-guided transepithelial PRK after intracorneal ring segments implantation and corneal collagen CXL in a three-step procedure for keratoconus.

    PubMed

    Coskunseven, Efekan; Jankov, Mirko R; Grentzelos, Michael A; Plaka, Argyro D; Limnopoulou, Aliki N; Kymionis, George D

    2013-01-01

    To present the results of topography-guided transepithelial photorefractive keratectomy (PRK) after intracorneal ring segments implantation followed by corneal collagen cross-linking (CXL) for keratoconus. In this prospective case series, 10 patients (16 eyes) with progressive keratoconus were included. All patients underwent topography-guided transepithelial PRK after Keraring intracorneal ring segments (Mediphacos Ltda) implantation, followed by CXL treatment. The follow-up period was 6 months after the last procedure for all patients. Time interval between both intracorneal ring segments implantation and CXL and between CXL and topography-guided transepithelial PRK was 6 months. LogMAR mean uncorrected distance visual acuity and mean corrected distance visual acuity were significantly improved (P<.05) from 1.14±0.36 and 0.75±0.24 preoperatively to 0.25±0.13 and 0.13±0.06 after the completion of the three-step procedure, respectively. Mean spherical equivalent refraction was significantly reduced (P<.05) from -5.66±5.63 diopters (D) preoperatively to -0.98±2.21 D after the three-step procedure. Mean steep and flat keratometry values were significantly reduced (P<.05) from 54.65±5.80 D and 47.80±3.97 D preoperatively to 45.99±3.12 D and 44.69±3.19 D after the three-step procedure, respectively. Combined topography-guided transepithelial PRK with intracorneal ring segments implantation and CXL in a three-step procedure seems to be an effective, promising treatment sequence offering patients a functional visual acuity and ceasing progression of the ectatic disorder. A longer follow-up and larger case series are necessary to thoroughly evaluate safety, stability, and efficacy of this innovative procedure. Copyright 2013, SLACK Incorporated.

  14. The segmentation of bones in pelvic CT images based on extraction of key frames.

    PubMed

    Yu, Hui; Wang, Haijun; Shi, Yao; Xu, Ke; Yu, Xuyao; Cao, Yuzhen

    2018-05-22

    Bone segmentation is important in computed tomography (CT) imaging of the pelvis, which assists physicians in the early diagnosis of pelvic injury, in planning operations, and in evaluating the effects of surgical treatment. This study developed a new algorithm for the accurate, fast, and efficient segmentation of the pelvis. The proposed method consists of two main parts: the extraction of key frames and the segmentation of pelvic CT images. Key frames were extracted based on pixel difference, mutual information and normalized correlation coefficient. In the pelvis segmentation phase, skeleton extraction from CT images and a marker-based watershed algorithm were combined to segment the pelvis. To meet the requirements of clinical application, physician's judgment is needed. Therefore the proposed methodology is semi-automated. In this paper, 5 sets of CT data were used to test the overlapping area, and 15 CT images were used to determine the average deviation distance. The average overlapping area of the 5 sets was greater than 94%, and the minimum average deviation distance was approximately 0.58 pixels. In addition, the key frame extraction efficiency and the running time of the proposed method were evaluated on 20 sets of CT data. For each set, approximately 13% of the images were selected as key frames, and the average processing time was approximately 2 min (the time for manual marking was not included). The proposed method is able to achieve accurate, fast, and efficient segmentation of pelvic CT image sequences. Segmentation results not only provide an important reference for early diagnosis and decisions regarding surgical procedures, they also offer more accurate data for medical image registration, recognition and 3D reconstruction.
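
One of the key-frame criteria named above, the normalized correlation coefficient between consecutive CT slices, can be sketched as follows. The selection loop and the 0.9 threshold are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def normalized_correlation(f1, f2):
    """Normalized correlation coefficient between two grayscale frames (arrays)."""
    a = f1.astype(float).ravel() - f1.mean()
    b = f2.astype(float).ravel() - f2.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 1.0

def select_key_frames(frames, threshold=0.9):
    """Keep a frame as a key frame when its correlation with the
    previous key frame drops below the threshold (i.e., content changed)."""
    keys = [0]
    for i in range(1, len(frames)):
        if normalized_correlation(frames[keys[-1]], frames[i]) < threshold:
            keys.append(i)
    return keys
```

In the paper this criterion is combined with pixel difference and mutual information; a real implementation would fuse all three before flagging a slice as a key frame.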

  15. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    PubMed Central

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing special subject information depends on this extraction. On the basis of WorldView-2 high-resolution data and the optimal segmentation parameter method of object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor parameter and compact factor parameters were computed with the use of control variables and the combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762

  16. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters.

    PubMed

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing special subject information depends on this extraction. On the basis of WorldView-2 high-resolution data and the optimal segmentation parameter method of object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor parameter and compact factor parameters were computed with the use of control variables and the combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme.

  17. Systems Maintenance Automated Repair Tasks (SMART)

    NASA Technical Reports Server (NTRS)

    Schuh, Joseph; Mitchell, Brent; Locklear, Louis; Belson, Martin A.; Al-Shihabi, Mary Jo Y.; King, Nadean; Norena, Elkin; Hardin, Derek

    2010-01-01

    SMART is a uniform automated discrepancy analysis and repair-authoring platform that improves technical accuracy and timely delivery of repair procedures for a given discrepancy (see figure a). SMART will minimize data errors, create uniform repair processes, and enhance the existing knowledge base of engineering repair processes. This innovation is the first tool developed that links the hardware specification requirements with the actual repair methods, sequences, and required equipment. SMART is flexibly designed to be useable by multiple engineering groups requiring decision analysis, and by any work authorization and disposition platform (see figure b). The organizational logic creates the link between specification requirements of the hardware, and specific procedures required to repair discrepancies. The first segment in the SMART process uses a decision analysis tree to define all the permutations between component/ subcomponent/discrepancy/repair on the hardware. The second segment uses a repair matrix to define what the steps and sequences are for any repair defined in the decision tree. This segment also allows for the selection of specific steps from multivariable steps. SMART will also be able to interface with outside databases and to store information from them to be inserted into the repair-procedure document. Some of the steps will be identified as optional, and would only be used based on the location and the current configuration of the hardware. The output from this analysis would be sent to a work authoring system in the form of a predefined sequence of steps containing required actions, tools, parts, materials, certifications, and specific requirements controlling quality, functional requirements, and limitations.

  18. The Out of Service Guest Pilot Evaluation of the Two-segment Noise Abatement Approach in the Boeing B727-200

    NASA Technical Reports Server (NTRS)

    Nylen, W. E.

    1974-01-01

    Guest pilot evaluation results of an approach profile modification for reducing ground-level noise under the approach paths of jet aircraft are reported. Evaluation results were used to develop a two-segment landing approach procedure and the equipment necessary to obtain pilot, airline, and FAA acceptance of the two-segment approach as a routine way of operating aircraft on approach and landing. Data are given on pilot workload and acceptance of the procedure.

  19. Chain-Wise Generalization of Road Networks Using Model Selection

    NASA Astrophysics Data System (ADS)

    Bulatov, D.; Wenzel, S.; Häufel, G.; Meidow, J.

    2017-05-01

    Streets are essential entities of urban terrain and their automated extraction from airborne sensor data is cumbersome because of a complex interplay of geometric, topological and semantic aspects. Given a binary image representing the road class, centerlines of road segments are extracted by means of skeletonization. The focus of this paper lies in a well-reasoned representation of these segments by means of geometric primitives, such as straight line segments as well as circle and ellipse arcs. We propose the fusion of raw segments based on similarity criteria; the output of this process is a set of so-called chains, which better match the intuitive perception of what a street is. Further, we propose a two-step approach for chain-wise generalization. First, the chain is pre-segmented using circlePeucker, and finally, model selection is used to decide whether two neighboring segments should be fused to a new geometric entity. Thereby, we consider both variance-covariance analysis of residuals and model complexity. The results on a complex data-set with many traffic roundabouts indicate the benefits of the proposed procedure.

  20. Segmentation of optic disc and optic cup in retinal fundus images using shape regression.

    PubMed

    Sedai, Suman; Roy, Pallab K; Mahapatra, Dwarikanath; Garnavi, Rahil

    2016-08-01

    Glaucoma is one of the leading causes of blindness. Manual examination of the optic cup and disc is a standard procedure used for detecting glaucoma. This paper presents a fully automatic regression-based method which accurately segments the optic cup and disc in retinal colour fundus images. First, we roughly segment the optic disc using the circular Hough transform. The approximated optic disc is then used to compute the initial optic disc and cup shapes. We propose a robust and efficient cascaded shape regression method which iteratively learns the final shape of the optic cup and disc from a given initial shape. Gradient boosted regression trees are employed to learn each regressor in the cascade. A novel data augmentation approach is proposed to improve the regressors' performance by generating synthetic training data. The proposed optic cup and disc segmentation method was applied to an image set of 50 patients and demonstrates high segmentation accuracy for the optic cup and disc, with Dice metrics of 0.95 and 0.85, respectively. A comparative study shows that our proposed method outperforms state-of-the-art optic cup and disc segmentation methods.

  1. Implementation of a School-Based Fluoride Tablet Program in a Rural Community.

    ERIC Educational Resources Information Center

    Eriksen, Michael; And Others

    A segment of a 3-year dental research project involving 2,000 school children aged 5-13 conducted in a rural Pennsylvania county, this study presents 1 component in a 3-pronged attempt to determine the effectiveness of a school-based dental health delivery system. The implementation procedures of this program are described as involving:…

  2. The successive projection algorithm as an initialization method for brain tumor segmentation using non-negative matrix factorization.

    PubMed

    Sauwen, Nicolas; Acou, Marjan; Bharath, Halandur N; Sima, Diana M; Veraart, Jelle; Maes, Frederik; Himmelreich, Uwe; Achten, Eric; Van Huffel, Sabine

    2017-01-01

    Non-negative matrix factorization (NMF) has become a widely used tool for additive parts-based analysis in a wide range of applications. As NMF is a non-convex problem, the quality of the solution will depend on the initialization of the factor matrices. In this study, the successive projection algorithm (SPA) is proposed as an initialization method for NMF. SPA builds on convex geometry and allocates endmembers based on successive orthogonal subspace projections of the input data. SPA is a fast and reproducible method, and it aligns well with the assumptions made in near-separable NMF analyses. SPA was applied to multi-parametric magnetic resonance imaging (MRI) datasets for brain tumor segmentation using different NMF algorithms. Comparison with common initialization methods shows that SPA achieves similar segmentation quality and it is competitive in terms of convergence rate. Whereas SPA was previously applied as a direct endmember extraction tool, we have shown improved segmentation results when using SPA as an initialization method, as it allows further enhancement of the sources during the NMF iterative procedure.
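
The core of SPA is simple to state: repeatedly pick the data column with the largest norm (the most "extreme" point under the near-separability assumption) and project all columns onto the orthogonal complement of the selected direction. A minimal numpy sketch; the toy matrix is illustrative, not MRI data:

```python
import numpy as np

def spa(X, r):
    """Successive projection algorithm: return r column indices of X
    selected by successive orthogonal subspace projections."""
    R = X.astype(float).copy()
    indices = []
    for _ in range(r):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))  # most extreme column
        indices.append(j)
        u = R[:, j:j + 1]
        u = u / np.linalg.norm(u)
        R = R - u @ (u.T @ R)  # project out the selected direction
    return indices

# Columns 0 and 1 are pure "endmembers"; column 2 is a 50/50 mixture.
X = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5]])
print(spa(X, 2))  # picks the two endmember columns: [0, 1]
```

The selected columns would then seed the factor matrices of an NMF solver, in place of random initialization.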

  3. Artificial extracellular matrix for biomedical applications: biocompatible and biodegradable poly (tetramethylene ether) glycol/poly (ε-caprolactone diol)-based polyurethanes.

    PubMed

    Shahrousvand, Mohsen; Mir Mohamad Sadeghi, Gity; Salimi, Ali

    2016-12-01

    Cells, as a tissue component, need a viscoelastic, biocompatible, biodegradable, and wettable extracellular matrix for their biological activity. In this study, in order to prepare biomedical polyurethane elastomers with good mechanical behavior and biodegradability, a series of novel polyester-polyether-based polyurethanes (PUs) were synthesized using a two-step bulk reaction by the melting pre-polymer method, taking 1,4-butanediol (BDO) as the chain extender, hexamethylene diisocyanate as the hard segment, and poly (tetramethylene ether) glycol (PTMEG) and poly (ε-caprolactone diol) (PCL-Diol) as the soft segment, without a catalyst. The soft-to-hard segment ratio was kept constant in all samples. Polyurethane characteristics such as thermal and mechanical properties, wettability and water adsorption, biodegradability, and cellular behavior were changed by varying the ratio of polyether diol to polyester diol in the soft segment. Our present work provides a new procedure for the preparation of polyurethanes with engineered surface properties and biodegradability, which could be good candidates for bone, cartilage, and skin tissue engineering.

  4. Fuzzy Markov random fields versus chains for multispectral image segmentation.

    PubMed

    Salzenstein, Fabien; Collet, Christophe

    2006-11-01

    This paper deals with a comparison of recent statistical models based on fuzzy Markov random fields and chains for multispectral image segmentation. The fuzzy scheme takes into account discrete and continuous classes which model the imprecision of the hidden data. In this framework, we assume the dependence between bands and we express the general model for the covariance matrix. A fuzzy Markov chain model is developed in an unsupervised way. This method is compared with the fuzzy Markovian field model previously proposed by one of the authors. The segmentation task is processed with Bayesian tools, such as the well-known MPM (Mode of Posterior Marginals) criterion. Our goal is to compare the robustness and rapidity for both methods (fuzzy Markov fields versus fuzzy Markov chains). Indeed, such fuzzy-based procedures seem to be a good answer, e.g., for astronomical observations when the patterns present diffuse structures. Moreover, these approaches allow us to process missing data in one or several spectral bands which correspond to specific situations in astronomy. To validate both models, we perform and compare the segmentation on synthetic images and raw multispectral astronomical data.

  5. Hierarchical image segmentation via recursive superpixel with adaptive regularity

    NASA Astrophysics Data System (ADS)

    Nakamura, Kensuke; Hong, Byung-Woo

    2017-11-01

    A fast and accurate hierarchical segmentation algorithm based on a recursive superpixel technique is presented. We propose a superpixel energy formulation in which the trade-off between data fidelity and regularization is dynamically determined based on the local residual in the energy optimization procedure. We also present an energy optimization algorithm that allows a pixel to be shared by multiple regions, improving accuracy and yielding an appropriate number of segments. Qualitative and quantitative evaluations demonstrate that our algorithm, combining the proposed energy and optimization, outperforms the conventional k-means algorithm by up to 29.10% in F-measure. We also perform a comparative analysis with state-of-the-art hierarchical segmentation algorithms. Our algorithm yields smooth regions throughout the hierarchy, as opposed to the others, which include insignificant details. It also achieves the best balance between accuracy and computational time: our method runs 36.48% faster than the region-merging approach, the fastest of the compared algorithms, while achieving comparable accuracy.

  6. Image processing of vaporizing GDI sprays: a new curvature-based approach

    NASA Astrophysics Data System (ADS)

    Lazzaro, Maurizio; Ianniello, Roberto

    2018-01-01

    This article introduces an innovative method for the segmentation of Mie-scattering and schlieren images of GDI sprays. The contours of the liquid phase are obtained by segmenting the scattering images of the spray by means of optimal filtering of the image, relying on variational methods, and an original thresholding procedure based on an iterative application of Otsu’s method. The segmentation of schlieren images, to get the contours of the spray vapour phase, is obtained by exploiting the surface curvature of the image to strongly enhance the intensity texture due to the vapour density gradients. This approach allows one to unambiguously discern the whole vapour phase of the spray from the background. Additional information about the spray liquid phase can be obtained by thresholding filtered schlieren images. The potential of this method has been substantiated in the segmentation of schlieren and scattering images of a GDI spray of isooctane. The fuel, heated to 363 K, was injected into nitrogen at a density of 1.12 and 3.5 kg m-3 with temperatures of 333 K and 573 K.
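
Otsu's method, which the thresholding procedure above applies iteratively, picks the gray-level threshold that maximizes between-class variance of the image histogram. A minimal sketch of a single Otsu pass (the article's iterative re-application is not reproduced here, and the bimodal sample data are illustrative):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                        # class-0 probability up to bin k
    mu = np.cumsum(p * np.arange(bins))         # cumulative mean up to bin k
    mu_t = mu[-1]                               # global mean
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan                  # avoid division by zero at the ends
    sigma_b = (mu_t * omega - mu) ** 2 / denom  # between-class variance per bin
    k = int(np.nanargmax(sigma_b))
    return edges[k + 1]                         # threshold at the best bin boundary

# Bimodal sample: a dark population at 10 and a bright one at 200.
values = np.concatenate([np.full(100, 10.0), np.full(100, 200.0)])
t = otsu_threshold(values)
```

Iterating the method on the sub-range above (or below) the first threshold, as the article describes, refines the split when the intensity distribution has more than two modes.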

  7. Detecting the optic disc boundary in digital fundus images using morphological, edge detection, and feature extraction techniques.

    PubMed

    Aquino, Arturo; Gegundez-Arias, Manuel Emilio; Marin, Diego

    2010-11-01

    Optic disc (OD) detection is an important step in developing systems for automated diagnosis of various serious ophthalmic pathologies. This paper presents a new template-based methodology for segmenting the OD from digital retinal images. This methodology uses morphological and edge detection techniques followed by the Circular Hough Transform to obtain a circular OD boundary approximation. It requires a pixel located within the OD as initial information. For this purpose, a location methodology based on a voting-type algorithm is also proposed. The algorithms were evaluated on the 1200 images of the publicly available MESSIDOR database. The location procedure succeeded in 99% of cases, taking an average computational time of 1.67 s with a standard deviation of 0.14 s. The segmentation algorithm achieved an average overlap of 86% between automated segmentations and true OD regions, with an average computational time of 5.69 s and a standard deviation of 0.54 s. Moreover, a discussion of the advantages and disadvantages of the models most commonly used for OD segmentation is also presented in this paper.
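
The Circular Hough Transform step can be sketched as an accumulator vote over candidate circle centres: each edge pixel votes for all centres lying at the given radius from it, and the true centre collects the most votes. This toy single-radius version with synthetic edge points is illustrative, not the authors' implementation:

```python
import numpy as np

def hough_circle_center(edge_points, radius, shape):
    """Vote for circle centres at a fixed radius given (y, x) edge coordinates."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 180, endpoint=False)
    for (y, x) in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # accumulate votes
    return np.unravel_index(np.argmax(acc), acc.shape)

# Synthetic edge points on a circle of radius 5 centred at (20, 20).
ts = np.linspace(0, 2 * np.pi, 36, endpoint=False)
pts = [(20 + 5 * np.sin(t), 20 + 5 * np.cos(t)) for t in ts]
cy, cx = hough_circle_center(pts, 5, (40, 40))
```

A full OD detector would repeat the vote over a range of plausible radii and keep the (centre, radius) pair with the strongest accumulator peak.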

  8. Assessing the Robustness of Complete Bacterial Genome Segmentations

    NASA Astrophysics Data System (ADS)

    Devillers, Hugo; Chiapello, Hélène; Schbath, Sophie; El Karoui, Meriem

    Comparison of closely related bacterial genomes has revealed the presence of highly conserved sequences forming a "backbone" that is interrupted by numerous, less conserved, DNA fragments. Segmentation of bacterial genomes into backbone and variable regions is particularly useful to investigate bacterial genome evolution. Several software tools have been designed to compare complete bacterial chromosomes and a few online databases store pre-computed genome comparisons. However, very few statistical methods are available to evaluate the reliability of these software tools and to compare the results obtained with them. To fill this gap, we have developed two local scores to measure the robustness of bacterial genome segmentations. Our method uses a simulation procedure based on random perturbations of the compared genomes. The scores presented in this paper are simple to implement, and our results show that they allow easy discrimination between robust and non-robust bacterial genome segmentations when using aligners such as MAUVE and MGA.

  9. Computer-implemented system and method for automated and highly accurate plaque analysis, reporting, and visualization

    NASA Technical Reports Server (NTRS)

    Kemp, James Herbert (Inventor); Talukder, Ashit (Inventor); Lambert, James (Inventor); Lam, Raymond (Inventor)

    2008-01-01

    A computer-implemented system and method of intra-oral analysis for measuring plaque removal is disclosed. The system includes hardware for real-time image acquisition and software to store the acquired images on a patient-by-patient basis. The system implements algorithms to segment teeth of interest from surrounding gum, and uses a real-time image-based morphing procedure to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The system integrates these components into a single software suite with an easy-to-use graphical user interface (GUI) that allows users to do an end-to-end run of a patient record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image.

  10. Machine learning in soil classification.

    PubMed

    Bhattacharya, B; Solomatine, D P

    2006-03-01

    In a number of engineering problems, e.g. in geotechnics, petroleum engineering, etc., intervals of measured series data (signals) are to be attributed a class while maintaining the constraint of contiguity, and standard classification methods can be inadequate. Classification in this case needs the involvement of an expert who observes the magnitude and trends of the signals in addition to any a priori information that might be available. In this paper, an approach for automating this classification procedure is presented. Firstly, a segmentation algorithm is developed and applied to segment the measured signals. Secondly, the salient features of these segments are extracted using the boundary energy method. Based on the measured data and the extracted features, classifiers are built to assign classes to the segments; they employ Decision Trees, ANN and Support Vector Machines. The methodology was tested in classifying sub-surface soil using measured data from Cone Penetration Testing, and satisfactory results were obtained.

  11. Object-Based Point Cloud Analysis of Full-Waveform Airborne Laser Scanning Data for Urban Vegetation Classification

    PubMed Central

    Rutzinger, Martin; Höfle, Bernhard; Hollaus, Markus; Pfeifer, Norbert

    2008-01-01

    Airborne laser scanning (ALS) is a remote sensing technique well-suited for 3D vegetation mapping and structure characterization because the emitted laser pulses are able to penetrate small gaps in the vegetation canopy. The backscattered echoes from the foliage, woody vegetation, the terrain, and other objects are detected, leading to a cloud of points. Higher echo densities (>20 echoes/m2) and additional classification variables from full-waveform (FWF) ALS data, namely echo amplitude, echo width and information on multiple echoes from one shot, offer new possibilities in classifying the ALS point cloud. Currently, FWF sensor information is rarely used for classification purposes. This contribution presents an object-based point cloud analysis (OBPA) approach, combining segmentation and classification of the 3D FWF ALS points, designed to detect tall vegetation in urban environments. The definition of tall vegetation includes trees and shrubs, but excludes grassland and herbage. In the applied procedure, FWF ALS echoes are segmented by a seeded region growing procedure. All echoes, sorted in descending order by their surface roughness, are used as seed points. Segments are grown based on echo width homogeneity. Next, segment statistics (mean, standard deviation, and coefficient of variation) are calculated by aggregating echo features such as amplitude and surface roughness. For classification, a rule base is derived automatically from a training area using a statistical classification tree. To demonstrate our method we present data of three sites with around 500,000 echoes each. The accuracy of the classified vegetation segments is evaluated for two independent validation sites. In a point-wise error assessment, where the classification is compared with manually classified 3D points, completeness and correctness better than 90% are reached for the validation sites.
In comparison to many other algorithms, the proposed 3D point classification works directly on the original measurements, i.e. the acquired points. Gridding of the data is not necessary, a process which is inherently coupled with loss of data and precision. In particular, the 3D properties provide good separability of building and terrain points when these are occluded by vegetation. PMID:27873771
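The seeded region-growing step described above can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the search radius, the echo-width tolerance, and the brute-force neighbour query are assumptions chosen for clarity.

```python
import numpy as np

def region_grow(points, widths, roughness, radius=1.0, width_tol=0.5):
    """Seeded region growing over a 3D FWF point cloud.

    Seeds are visited in order of descending surface roughness; a segment
    absorbs unlabelled neighbours whose echo width stays within `width_tol`
    of the running segment mean (the echo-width homogeneity criterion).
    """
    n = len(points)
    labels = np.full(n, -1)
    seg = 0
    for seed in np.argsort(-roughness):          # roughest echoes seed first
        if labels[seed] != -1:
            continue
        labels[seed] = seg
        members, frontier = [seed], [seed]
        while frontier:
            p = frontier.pop()
            dist = np.linalg.norm(points - points[p], axis=1)
            for q in np.where((dist < radius) & (labels == -1))[0]:
                if abs(widths[q] - widths[members].mean()) < width_tol:
                    labels[q] = seg
                    members.append(q)
                    frontier.append(q)
        seg += 1
    return labels
```

Two spatially separated clusters with distinct echo widths come out as two segments; real FWF data would of course use a spatial index (e.g. a k-d tree) instead of the O(n²) distance scan.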

  12. 3D geometric split-merge segmentation of brain MRI datasets.

    PubMed

    Marras, Ioannis; Nikolaidis, Nikolaos; Pitas, Ioannis

    2014-05-01

    In this paper, a novel method for MRI volume segmentation based on region adaptive splitting and merging is proposed. The method, called Adaptive Geometric Split Merge (AGSM) segmentation, aims at finding complex geometrical shapes that consist of homogeneous geometrical 3D regions. In each volume splitting step, several splitting strategies are examined and the most appropriate is activated. A way to find the maximal homogeneity axis of the volume is also introduced. Along this axis, the volume splitting technique divides the entire volume in a number of large homogeneous 3D regions, while at the same time, it defines more clearly small homogeneous regions within the volume in such a way that they have greater probabilities of survival at the subsequent merging step. Region merging criteria are proposed to this end. The presented segmentation method has been applied to brain MRI medical datasets to provide segmentation results when each voxel is composed of one tissue type (hard segmentation). The volume splitting procedure does not require training data, while it demonstrates improved segmentation performance in noisy brain MRI datasets, when compared to the state of the art methods. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Restoring Wood-Rich Hotspots in Mountain Stream Networks

    NASA Astrophysics Data System (ADS)

    Wohl, E.; Scott, D.

    2016-12-01

    Mountain streams commonly include substantial longitudinal variability in valley and channel geometry, alternating repeatedly between steep, narrow and relatively wide, low-gradient segments. Segments that are wider and lower gradient than neighboring steeper sections are hotspots with respect to: retention of large wood (LW) and finer sediment and organic matter; uptake of nutrients; and biomass and biodiversity of aquatic and riparian organisms. These segments are also more likely to be transport-limited with respect to floodplain and instream LW. Management designed to protect and restore riverine LW and the physical and ecological processes facilitated by the presence of LW is likely to be most effective if focused on relatively low-gradient stream segments. These segments can be identified using a simple, reach-scale gradient analysis based on high-resolution DEMs, with field visits to identify factors that potentially limit or facilitate LW recruitment and retention, such as forest disturbance history or land use. Drawing on field data from the western US, this presentation outlines a procedure for mapping relatively low-gradient segments in a stream network and for identifying those segments where LW reintroduction or retention is most likely to maximize the environmental benefits derived from the presence of LW while minimizing the hazards associated with it.
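The reach-scale gradient screening described above reduces to a short computation on a longitudinal profile extracted from a DEM. The fixed reach length and the 2% slope threshold below are placeholders; the presentation does not specify its cut-off values.

```python
def low_gradient_reaches(distance_m, elevation_m, reach_len=100.0, slope_thresh=0.02):
    """Chop a downstream long-profile into fixed-length reaches and flag
    those whose mean slope falls below the threshold, i.e. candidate
    wood-retention hotspots. Returns (reach start distance, slope, flag)."""
    reaches, start = [], 0
    for i in range(1, len(distance_m)):
        if distance_m[i] - distance_m[start] >= reach_len or i == len(distance_m) - 1:
            drop = elevation_m[start] - elevation_m[i]
            run = distance_m[i] - distance_m[start]
            slope = drop / run
            reaches.append((distance_m[start], slope, slope < slope_thresh))
            start = i
    return reaches
```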

  14. Segmental Musculoskeletal Examinations using Dual-Energy X-Ray Absorptiometry (DXA): Positioning and Analysis Considerations

    PubMed Central

    Hart, Nicolas H.; Nimphius, Sophia; Spiteri, Tania; Cochrane, Jodie L.; Newton, Robert U.

    2015-01-01

    Musculoskeletal examinations provide informative and valuable quantitative insight into muscle and bone health. DXA is one mainstream tool used to accurately and reliably determine body composition components and bone mass characteristics in-vivo. Presently, whole body scan models separate the body into axial and appendicular regions, however there is a need for localised appendicular segmentation models to further examine regions of interest within the upper and lower extremities. Similarly, inconsistencies pertaining to patient positioning exist in the literature which influence measurement precision and analysis outcomes, highlighting a need for a standardised procedure. This paper provides standardised and reproducible: 1) positioning and analysis procedures using DXA and 2) reliable segmental examinations through descriptive appendicular boundaries. Whole-body scans were performed on forty-six (n = 46) football athletes (age: 22.9 ± 4.3 yrs; height: 1.85 ± 0.07 m; weight: 87.4 ± 10.3 kg; body fat: 11.4 ± 4.5 %) using DXA. All segments across all scans were analysed three times by the main investigator on three separate days, and by three independent investigators a week following the original analysis. To examine intra-rater and inter-rater (between-day and between-researcher) reliability, coefficients of variation (CV) and intraclass correlation coefficients (ICC) were determined. Positioning and segmental analysis procedures presented in this study produced very high, nearly perfect intra-tester (CV ≤ 2.0%; ICC ≥ 0.988) and inter-tester (CV ≤ 2.4%; ICC ≥ 0.980) reliability, demonstrating excellent reproducibility within and between practitioners. Standardised examinations of axial and appendicular segments are necessary. Future studies aiming to quantify and report segmental analyses of the upper- and lower-body musculoskeletal properties using whole-body DXA scans are encouraged to use the patient positioning and image analysis procedures outlined in this paper.
Key points Musculoskeletal examinations using DXA technology require highly standardised and reproducible patient positioning and image analysis procedures to accurately measure and monitor axial, appendicular and segmental regions of interest. Internal rotation and fixation of the lower-limbs is strongly recommended during whole-body DXA scans to prevent undesired movement, improve frontal mass accessibility and enhance ankle joint visibility during scan performance and analysis. Appendicular segmental analyses using whole-body DXA scans are highly reliable for all regional upper-body and lower-body segmentations, with hard-tissue (CV ≤ 1.5%; R ≥ 0.990) achieving greater reliability and lower error than soft-tissue (CV ≤ 2.4%; R ≥ 0.980) masses when using our appendicular segmental boundaries. PMID:26336349
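The CV and ICC reliability statistics quoted above can be reproduced with a few lines of NumPy. The abstract does not state which ICC model was used, so the one-way ICC(1,1) below is an assumption; treat it as a stand-in.

```python
import numpy as np

def reliability(measures):
    """measures: subjects x raters array of repeated segment measurements.

    Returns (mean within-subject CV in %, one-way ICC(1,1))."""
    m = np.asarray(measures, float)
    n, k = m.shape
    cv = np.mean(m.std(axis=1, ddof=1) / m.mean(axis=1)) * 100.0
    grand = m.mean()
    msb = k * np.sum((m.mean(axis=1) - grand) ** 2) / (n - 1)               # between-subject mean square
    msw = np.sum((m - m.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))  # within-subject mean square
    icc = (msb - msw) / (msb + (k - 1) * msw)
    return cv, icc
```

With three raters in near agreement across subjects that differ widely, the CV is small and the ICC approaches 1, matching the pattern of values reported above.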

  15. Efficacy and safety of catheter-based radiofrequency renal denervation in stented renal arteries.

    PubMed

    Mahfoud, Felix; Tunev, Stefan; Ruwart, Jennifer; Schulz-Jander, Daniel; Cremers, Bodo; Linz, Dominik; Zeller, Thomas; Bhatt, Deepak L; Rocha-Singh, Krishna; Böhm, Michael; Melder, Robert J

    2014-12-01

    In selected patients with hypertension, renal artery (RA) stenting is used to treat significant atherosclerotic stenoses. However, blood pressure often remains uncontrolled after the procedure. Although catheter-based renal denervation (RDN) can reduce blood pressure in certain patients with resistant hypertension, there are no data on the feasibility and safety of RDN in stented RA. We report marked blood pressure reduction after RDN in a patient with resistant hypertension who underwent previous stenting. Subsequently, radiofrequency ablation was investigated within the stented segment of porcine RA, distal to the stented segment, and in nonstented RA and compared with stent only and untreated controls. There were neither observations of thrombus nor gross or histological changes in the kidneys. After radiofrequency ablation of the nonstented RA, sympathetic nerves innervating the kidney were significantly reduced, as indicated by significant decreases in sympathetic terminal axons and reduction of norepinephrine in renal tissue. Similar denervation efficacy was found when RDN was performed distal to a renal stent. In contrast, when radiofrequency ablation was performed within the stented segment of the RA, significant sympathetic nerve ablation was not seen. Histological observation showed favorable healing in all arteries. Radiofrequency ablation of previously stented RA demonstrated that RDN provides equally safe experimental procedural outcomes in a porcine model whether the radiofrequency treatment is delivered within the stent struts, adjacent to them, or in a nonstented RA. However, efficacious RDN is only achieved when radiofrequency ablation is delivered to the nonstented RA segment distal to the stent. © 2014 American Heart Association, Inc.

  16. a Universal De-Noising Algorithm for Ground-Based LIDAR Signal

    NASA Astrophysics Data System (ADS)

    Ma, Xin; Xiang, Chengzhi; Gong, Wei

    2016-06-01

    Ground-based lidar, working as an effective remote sensing tool, plays an irreplaceable role in the study of the atmosphere, since it has the ability to provide the atmospheric vertical profile. However, the appearance of noise in a lidar signal is unavoidable, which leads to difficulties and complexities when searching for more information. Every de-noising method has its own characteristics but also certain limitations, since the lidar signal varies as the atmosphere changes. In this paper, a universal de-noising algorithm is proposed to enhance the SNR of a ground-based lidar signal, based on signal segmentation and reconstruction. The signal segmentation, serving as the keystone of the algorithm, divides the lidar signal into three different parts, which are processed by different de-noising methods according to their own characteristics. The signal reconstruction is a relatively simple procedure that splices the signal sections end to end. Finally, a series of simulated signal tests and a real dual field-of-view lidar signal show the feasibility of the universal de-noising algorithm.
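The segment-then-reconstruct idea can be illustrated as below. The breakpoints and the particular smoothers per segment (raw near field, short and long moving averages) are illustrative assumptions; the paper applies different, unspecified de-noising methods to each part.

```python
import numpy as np

def denoise_lidar(signal, b1, b2):
    """Segment a range-resolved lidar return at indices b1, b2 and apply a
    different smoother to each part: the near field is left raw (high SNR),
    the mid field gets a short moving average, and the noise-dominated far
    field gets a long moving average. The parts are spliced back together.
    """
    def movavg(x, w):
        kernel = np.ones(w) / w
        return np.convolve(x, kernel, mode="same")

    near, mid, far = signal[:b1], signal[b1:b2], signal[b2:]
    return np.concatenate([near, movavg(mid, 5), movavg(far, 21)])
```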

  17. Jersey number detection in sports video for athlete identification

    NASA Astrophysics Data System (ADS)

    Ye, Qixiang; Huang, Qingming; Jiang, Shuqiang; Liu, Yang; Gao, Wen

    2005-07-01

    Athlete identification is important for sports video content analysis since users often care about the video clips featuring their preferred athletes. In this paper, we propose a method for athlete identification by combining the segmentation, tracking and recognition procedures into a coarse-to-fine scheme for jersey number (digital characters on sport shirts) detection. Firstly, image segmentation is employed to separate the jersey number regions from the background, and size/pipe-like attributes of digital characters are used to filter out candidates. Then, a K-NN (K nearest neighbor) classifier is employed to classify a candidate into a digit in "0-9" or negative. In the recognition procedure, we use Zernike moment features, which are invariant to rotation and scale, for digital shape recognition. Synthetic training samples with different fonts are used to represent the pattern of digital characters with non-rigid deformation. Once a character candidate is detected, an SSD (smallest square distance)-based tracking procedure is started. The recognition procedure is performed every several frames in the tracking process. After tracking tens of frames, the overall recognition results are combined to determine whether a candidate is a true jersey number by a voting procedure. Experiments on several types of sports video show encouraging results.
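The classification-plus-voting stage can be sketched with a toy K-NN and a track-level majority vote. The feature vectors here stand in for the Zernike moments, and `k` and the vote share are assumed values.

```python
from collections import Counter

def knn_label(candidate, training, k=3):
    """Tiny K-NN: training is a list of (feature_vector, label) pairs."""
    ranked = sorted(
        training,
        key=lambda t: sum((a - b) ** 2 for a, b in zip(candidate, t[0])))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

def vote_over_track(frame_labels, min_share=0.5):
    """Combine per-frame recognitions over a tracked candidate: accept the
    majority digit only if it wins at least `min_share` of the frames."""
    votes = Counter(frame_labels)
    label, count = votes.most_common(1)[0]
    return label if count / len(frame_labels) >= min_share else None
```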

  18. Comparison between manual and semi-automatic segmentation of nasal cavity and paranasal sinuses from CT images.

    PubMed

    Tingelhoff, K; Moral, A I; Kunkel, M E; Rilk, M; Wagner, I; Eichhorn, K G; Wahl, F M; Bootz, F

    2007-01-01

    Segmentation of medical image data has become increasingly important in recent years. The results are used for diagnosis, surgical planning or workspace definition of robot-assisted systems. The purpose of this paper is to find out whether manual or semi-automatic segmentation is adequate for the ENT surgical workflow or whether fully automatic segmentation of paranasal sinuses and nasal cavity is needed. We present a comparison of manual and semi-automatic segmentation of paranasal sinuses and the nasal cavity. Manual segmentation is performed by custom software whereas semi-automatic segmentation is realized by a commercial product (Amira). For this study we used a CT dataset of the paranasal sinuses which consists of 98 transversal slices, each 1.0 mm thick, with a resolution of 512 x 512 pixels. For the analysis of both segmentation procedures we used volume, extension (width, length and height), segmentation time and 3D-reconstruction. The segmentation time was reduced from 960 minutes with manual to 215 minutes with semi-automatic segmentation. We found the highest variances when segmenting the nasal cavity. For the paranasal sinuses, the volume differences between manual and semi-automatic segmentation are not significant. Depending on the required segmentation accuracy, both approaches deliver useful results and could be used for, e.g., robot-assisted systems. Nevertheless, both procedures are not useful for the everyday surgical workflow, because they take too much time. Fully automatic and reproducible segmentation algorithms are needed for segmentation of paranasal sinuses and nasal cavity.

  19. In-vivo segmentation and quantification of coronary lesions by optical coherence tomography images for a lesion type definition and stenosis grading.

    PubMed

    Celi, Simona; Berti, Sergio

    2014-10-01

    Optical coherence tomography (OCT) is a catheter-based medical imaging technique that produces cross-sectional images of blood vessels. This technique is particularly useful for studying coronary atherosclerosis. In this paper, we present a new framework that allows a segmentation and quantification of OCT images of coronary arteries to define the plaque type and stenosis grading. These analyses are usually carried out on-line on the OCT-workstation where measuring is mainly operator-dependent and mouse-based. The aim of this program is to simplify and improve the processing of OCT images for morphometric investigations and to present a fast procedure to obtain 3D geometrical models that can also be used for external purposes such as for finite element simulations. The main phases of our toolbox are the lumen segmentation and the identification of the main tissues in the artery wall. We validated the proposed method with identification and segmentation manually performed by expert OCT readers. The method was evaluated on ten datasets from clinical routine and the validation was performed on 210 images randomly extracted from the pullbacks. Our results show that automated segmentation of the vessel and of the tissue components are possible off-line with a precision that is comparable to manual segmentation for the tissue component and to the proprietary-OCT-console for the lumen segmentation. Several OCT sections have been processed to provide clinical outcome. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Anatomical modeling of the bronchial tree

    NASA Astrophysics Data System (ADS)

    Hentschel, Gerrit; Klinder, Tobias; Blaffert, Thomas; Bülow, Thomas; Wiemker, Rafael; Lorenz, Cristian

    2010-02-01

    The bronchial tree is of direct clinical importance in the context of respective diseases, such as chronic obstructive pulmonary disease (COPD). It furthermore constitutes a reference structure for object localization in the lungs and it finally provides access to lung tissue in, e.g., bronchoscope based procedures for diagnosis and therapy. This paper presents a comprehensive anatomical model for the bronchial tree, including statistics of position, relative and absolute orientation, length, and radius of 34 bronchial segments, going beyond previously published results. The model has been built from 16 manually annotated CT scans, covering several branching variants. The model is represented as a centerline/tree structure but can also be converted in a surface representation. Possible model applications are either to anatomically label extracted bronchial trees or to improve the tree extraction itself by identifying missing segments or sub-trees, e.g., if located beyond a bronchial stenosis. Bronchial tree labeling is achieved using a naïve Bayesian classifier based on the segment properties contained in the model in combination with tree matching. The tree matching step makes use of branching variations covered by the model. An evaluation of the model has been performed in a leave-one-out manner. In total, 87% of the branches resulting from preceding airway tree segmentation could be correctly labeled. The individualized model enables the detection of missing branches, allowing a targeted search, e.g., a local rerun of the tree segmentation.

  1. User Interaction in Semi-Automatic Segmentation of Organs at Risk: a Case Study in Radiotherapy.

    PubMed

    Ramkumar, Anjana; Dolz, Jose; Kirisli, Hortense A; Adebahr, Sonja; Schimek-Jasch, Tanja; Nestle, Ursula; Massoptier, Laurent; Varga, Edit; Stappers, Pieter Jan; Niessen, Wiro J; Song, Yu

    2016-04-01

    Accurate segmentation of organs at risk is an important step in radiotherapy planning. Manual segmentation being a tedious procedure and prone to inter- and intra-observer variability, there is a growing interest in automated segmentation methods. However, automatic methods frequently fail to provide satisfactory results, and post-processing corrections are often needed. Semi-automatic segmentation methods are designed to overcome these problems by combining physicians' expertise and computers' potential. This study evaluates two semi-automatic segmentation methods with different types of user interactions, named "strokes" and "contour", to provide insights into the role and impact of human-computer interaction. Two physicians participated in the experiment. In total, 42 case studies were carried out on five different types of organs at risk. For each case study, both the human-computer interaction process and the quality of the segmentation results were measured subjectively and objectively. Furthermore, different measures of the process and the results were correlated. A total of 36 quantifiable and ten non-quantifiable correlations were identified for each type of interaction. Among those pairs of measures, 20 of the contour method and 22 of the strokes method were strongly or moderately correlated, either directly or inversely. Based on those correlated measures, it is concluded that: (1) in the design of semi-automatic segmentation methods, user interactions need to be less cognitively challenging; (2) based on the observed workflows and preferences of physicians, there is a need for flexibility in the interface design; (3) the correlated measures provide insights that can be used in improving user interaction design.
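The strong/moderate correlation bookkeeping reported above amounts to computing Pearson coefficients and bucketing their magnitudes. The cut-offs of 0.7 and 0.4 are common conventions assumed here; the paper does not state its thresholds.

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def strength(r, strong=0.7, moderate=0.4):
    """Bucket |r| into strong / moderate / weak (direct or inverse alike)."""
    a = abs(r)
    return "strong" if a >= strong else "moderate" if a >= moderate else "weak"
```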

  2. Thoracoscopic stapler-based “bidirectional” segmentectomy for posterior basal segment (S10) and its variants

    PubMed Central

    Murayama, Tomonori; Nakajima, Jun

    2018-01-01

    Thoracoscopic segmentectomy for the posterior basal segment (S10) and its variants (e.g., S9+10 and S10b+c combined subsegmentectomy) is one of the most challenging anatomical segmentectomies. Stapler-based segmentectomy is attractive because it simplifies the operation and helps prevent post-operative air leakage. However, this approach makes thoracoscopic S10 segmentectomy even more tricky. The challenges stem mostly from the following three reasons: first, similar to other basal segments, "three-dimensional" stapling is needed to fold a cuboidal segment; second, the associated pulmonary artery does not directly face the interlobar fissure or the hilum, making identification of the target artery difficult; third, the anatomy of S10 and adjacent segments such as the superior (S6) and medial basal (S7) segments is variable. To overcome these challenges, this article summarizes the "bidirectional approach" that allows for solid confirmation of anatomy while avoiding separation of S6 and the basal segment. To assist this approach under a limited thoracoscopic view, we also show stapling techniques to fold the cuboidal segment with the aid of "standing stitches". Attention should also be paid to the anatomy of adjacent segments, particularly that of S7, which tends to be congested after stapling. The use of virtual-assisted lung mapping (VAL-MAP) is also recommended to demarcate resection lines because it flexibly allows for complex procedures such as combined subsegmentectomy (e.g., S10b+c), extended segmentectomy (e.g., S10+S9b), and non-anatomically extended segmentectomy. PMID:29785292

  3. Bayesian Image Segmentations by Potts Prior and Loopy Belief Propagation

    NASA Astrophysics Data System (ADS)

    Tanaka, Kazuyuki; Kataoka, Shun; Yasuda, Muneki; Waizumi, Yuji; Hsu, Chiou-Ting

    2014-12-01

    This paper presents a Bayesian image segmentation model based on a Potts prior and loopy belief propagation. The proposed Bayesian model involves several terms, including the pairwise interactions of Potts models, and the mean vectors and covariance matrices of Gaussian distributions in color image modeling. These terms are often referred to as hyperparameters in statistical machine learning theory. In order to determine these hyperparameters, we propose a new scheme for hyperparameter estimation based on conditional maximization of entropy in the Potts prior. The algorithm is formulated in terms of loopy belief propagation. In addition, we compare our conditional maximum entropy framework with the conventional maximum likelihood framework, and also clarify how the first-order phase transitions in loopy belief propagation for Potts models influence our hyperparameter estimation procedures.

  4. A fully-automated multiscale kernel graph cuts based particle localization scheme for temporal focusing two-photon microscopy

    NASA Astrophysics Data System (ADS)

    Huang, Xia; Li, Chunqiang; Xiao, Chuan; Sun, Wenqing; Qian, Wei

    2017-03-01

    The temporal focusing two-photon microscope (TFM) is developed to perform depth-resolved wide field fluorescence imaging by capturing frames sequentially. However, due to strong, non-negligible noise and diffraction rings surrounding particles, further research is extremely difficult without a precise particle localization technique. In this paper, we developed a fully-automated scheme to locate particle positions with high noise tolerance. Our scheme includes the following procedures: noise reduction using a hybrid Kalman filter method, particle segmentation based on a multiscale kernel graph cuts global and local segmentation algorithm, and a kinematic estimation based particle tracking method. Both isolated and partially overlapped particles can be accurately identified with removal of unrelated pixels. Based on our quantitative analysis, 96.22% of isolated particles and 84.19% of partially overlapped particles were successfully detected.
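The noise-reduction step can be illustrated with a scalar random-walk Kalman filter. The paper uses a hybrid Kalman method whose details are not given in the abstract, so the model and the `q`, `r` variances below are assumptions.

```python
def kalman_denoise(measurements, q=1e-3, r=0.5):
    """Scalar random-walk Kalman filter over a 1D measurement sequence
    (q: process variance, r: measurement variance)."""
    x, p = measurements[0], 1.0
    out = [x]
    for z in measurements[1:]:
        p = p + q                  # predict: state uncertainty grows
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with the innovation
        p = (1 - k) * p
        out.append(x)
    return out
```

On a noisy constant signal the filtered trace settles near the true value with far less variance than the raw measurements.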

  5. Development of A Two-Stage Procedure for the Automatic Recognition of Dysfluencies in the Speech of Children Who Stutter: I. Psychometric Procedures Appropriate for Selection of Training Material for Lexical Dysfluency Classifiers

    PubMed Central

    Howell, Peter; Sackin, Stevie; Glenn, Kazan

    2007-01-01

    This program of work is intended to develop automatic recognition procedures to locate and assess stuttered dysfluencies. This and the following article together develop and test recognizers for repetitions and prolongations. The automatic recognizers classify the speech in two stages: in the first, the speech is segmented, and in the second the segments are categorized. The units that are segmented are words. Here, assessments by human judges on the speech of 12 children who stutter are described using a corresponding procedure. The accuracy of word boundary placement across judges, categorization of the words as fluent, repetition or prolongation, and duration of the different fluency categories are reported. These measures allow reliable instances of repetitions and prolongations to be selected for training and assessing the recognizers in the subsequent paper. PMID:9328878

  6. Completion of a Liver Surgery Complexity Score and Classification Based on an International Survey of Experts.

    PubMed

    Lee, Major K; Gao, Feng; Strasberg, Steven M

    2016-08-01

    Liver resections have classically been distinguished as "minor" or "major," based on number of segments removed. This is flawed because the number of segments resected alone does not convey the complexity of a resection. We recently developed a 3-tiered classification for the complexity of liver resections based on utility weighting by experts. This study aims to complete the earlier classification and to illustrate its application. Two surveys were administered to expert liver surgeons. Experts were asked to rate the difficulty of various open liver resections on a scale of 1 to 10. Statistical methods were then used to develop a complexity score for each procedure. Sixty-six of 135 (48.9%) surgeons responded to the earlier survey, and 66 of 122 (54.1%) responded to the current survey. In all, 19 procedures were rated. The lowest mean score of 1.36 (indicating least difficult) was given to peripheral wedge resection. Right hepatectomy with IVC reconstruction was deemed most difficult, with a score of 9.35. Complexity scores were similar for 9 procedures present in both surveys. Caudate resection, hepaticojejunostomy, and vascular reconstruction all increased the complexity of standard resections significantly. These data permit quantitative assessment of the difficulty of a variety of liver resections. The complexity scores generated allow for separation of liver resections into 3 categories of complexity (low complexity, medium complexity, and high complexity) on a quantitative basis. This provides a more accurate representation of the complexity of procedures in comparative studies. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
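The scoring-and-tiering procedure can be sketched directly: average the expert ratings per procedure and cut the scale into three bands. The cut-points used below are hypothetical; the published classification derives its boundaries statistically.

```python
def complexity_tiers(ratings, low_cut=4.0, high_cut=7.0):
    """ratings: {procedure: [expert scores on a 1-10 scale]}.

    Returns {procedure: (mean score, tier)} using a hypothetical 3-tier cut
    into low / medium / high complexity."""
    out = {}
    for proc, scores in ratings.items():
        mean = sum(scores) / len(scores)
        tier = ("low" if mean < low_cut else
                "medium" if mean < high_cut else "high")
        out[proc] = (round(mean, 2), tier)
    return out
```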

  7. A model-based approach for estimation of changes in lumbar segmental kinematics associated with alterations in trunk muscle forces.

    PubMed

    Shojaei, Iman; Arjmand, Navid; Meakin, Judith R; Bazrgari, Babak

    2018-03-21

    The kinematics information from imaging, if combined with optimization-based biomechanical models, may provide a unique platform for personalized assessment of trunk muscle forces (TMFs). Such a method, however, is feasible only if differences in lumbar spine kinematics due to differences in TMFs can be captured by the current imaging techniques. A finite element model of the spine within an optimization procedure was used to estimate segmental kinematics of lumbar spine associated with five different sets of TMFs. Each set of TMFs was associated with a hypothetical trunk neuromuscular strategy that optimized one aspect of lower back biomechanics. For each set of TMFs, the segmental kinematics of lumbar spine was estimated for a single static trunk flexed posture involving, respectively, 40° and 10° of thoracic and pelvic rotations. Minimum changes in the angular and translational deformations of a motion segment with alterations in TMFs ranged from 0° to 0.7° and 0 mm to 0.04 mm, respectively. Maximum changes in the angular and translational deformations of a motion segment with alterations in TMFs ranged from 2.4° to 7.6° and 0.11 mm to 0.39 mm, respectively. The differences in kinematics of lumbar segments between each combination of two sets of TMFs in 97% of cases for angular deformation and 55% of cases for translational deformation were within the reported accuracy of current imaging techniques. Therefore, it might be possible to use image-based kinematics of lumbar segments along with computational modeling for personalized assessment of TMFs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Random walks with shape prior for cochlea segmentation in ex vivo μCT.

    PubMed

    Ruiz Pujadas, Esmeralda; Kjer, Hans Martin; Piella, Gemma; Ceresa, Mario; González Ballester, Miguel Angel

    2016-09-01

    Cochlear implantation is a safe and effective surgical procedure to restore hearing in deaf patients. However, the level of restoration achieved may vary due to differences in anatomy, implant type and surgical access. In order to reduce the variability of the surgical outcomes, we previously proposed the use of a high-resolution model built from μCT images and then adapted to patient-specific clinical CT scans. As the accuracy of the model is dependent on the precision of the original segmentation, it is extremely important to have accurate μCT segmentation algorithms. We propose a new framework for cochlea segmentation in ex vivo μCT images using random walks where a distance-based shape prior is combined with a region term estimated by a Gaussian mixture model. The prior is also weighted by a confidence map to adjust its influence according to the strength of the image contour. Random walks is performed iteratively, and the prior mask is aligned in every iteration. We tested the proposed approach on ten μCT data sets and compared it with other random walks-based segmentation techniques such as guided random walks (Eslami et al. in Med Image Anal 17(2):236-253, 2013) and constrained random walks (Li et al. in Advances in image and video technology. Springer, Berlin, pp 215-226, 2012). Our approach demonstrated higher accuracy results due to the probability density model constituted by the region term and shape prior information weighted by a confidence map. The weighted combination of the distance-based shape prior with a region term into random walks provides accurate segmentations of the cochlea. The experiments suggest that the proposed approach is robust for cochlea segmentation.

  9. Incorporating partially identified sample segments into acreage estimation procedures: Estimates using only observations from the current year

    NASA Technical Reports Server (NTRS)

    Sielken, R. L., Jr. (Principal Investigator)

    1981-01-01

    Several methods of estimating individual crop acreages using a mixture of completely identified and partially identified (generic) segments from a single growing year are derived and discussed. A small Monte Carlo study of eight estimators is presented. The relative empirical behavior of these estimators is discussed, as are the effects of segment sample size and amount of partial identification. The principal recommendations are: (1) do not exclude, but rather incorporate, partially identified sample segments into the estimation procedure; (2) avoid having a large percentage (say 80%) of only partially identified segments in the sample; and (3) use the maximum likelihood estimator, although the weighted least squares estimator and the least squares ratio estimator both perform almost as well. Sets of spring small grains (North Dakota) data were used.
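One member of the estimator family compared above, a simple ratio-style estimator, can be sketched as follows: generic (partially identified) small-grain acreage is allocated to wheat in proportion to the wheat share observed in the fully identified segments. The function and its inputs are illustrative, not the report's notation.

```python
def acreage_estimate(identified, generic_totals):
    """identified: list of (wheat_acres, other_small_grain_acres) pairs from
    fully identified segments; generic_totals: combined small-grain acres
    from partially identified segments. Allocates generic acreage by the
    identified wheat share (a simple ratio estimator)."""
    wheat = sum(w for w, _ in identified)
    total = sum(w + o for w, o in identified)
    share = wheat / total
    return wheat + share * sum(generic_totals)
```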

  10. Modification to area navigation equipment for instrument two-segment approaches

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A two-segment aircraft landing approach concept utilizing an area navigation (RNAV) system to execute the two-segment approach and eliminate the requirement for co-located distance measuring equipment (DME) was investigated. This concept permits non-precision approaches to be made, down to appropriate minima, to runways not equipped with ILS systems. A hardware and software retrofit kit for the concept was designed, built, and tested on a DC-8-61 aircraft for flight evaluation. A two-segment approach profile and piloting procedure for that aircraft, providing adequate safety margin under adverse weather, in the presence of system failures, and with the occurrence of an abused approach, was also developed. The two-segment approach procedure and equipment were demonstrated to line pilots under conditions representative of those encountered in air carrier service.

  11. NSEG: A segmented mission analysis program for low and high speed aircraft. Volume 2: Program users manual

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Rozendaal, H. L.

    1977-01-01

    A rapid mission analysis code based on the use of approximate flight path equations of motion is described. The equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics are specified in tabular form. In addition to its mission performance calculation capabilities, the code also contains extensive flight envelope performance mapping capabilities. Approximate takeoff and landing analyses can be performed. At high speeds, centrifugal lift effects are taken into account. Extensive turbojet and ramjet engine scaling procedures are incorporated in the code.

  12. Dentalmaps: Automatic Dental Delineation for Radiotherapy Planning in Head-and-Neck Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thariat, Juliette, E-mail: jthariat@hotmail.com; Ramus, Liliane; INRIA

    Purpose: To propose an automatic atlas-based segmentation framework of the dental structures, called Dentalmaps, and to assess its accuracy and relevance to guide dental care in the context of intensity-modulated radiotherapy. Methods and Materials: A multi-atlas-based segmentation, less sensitive to artifacts than previously published head-and-neck segmentation methods, was used. The manual segmentations of a 21-patient database were first deformed onto the query using nonlinear registrations with the training images and then fused to estimate the consensus segmentation of the query. Results: The framework was evaluated with a leave-one-out protocol. The maximum doses estimated using manual contours were considered as ground truth and compared with the maximum doses estimated using automatic contours. The dose estimation error was within 2-Gy accuracy in 75% of cases (with a median of 0.9 Gy), whereas it was within 2-Gy accuracy in 30% of cases only with the visual estimation method without any contour, which is the routine practice procedure. Conclusions: Dose estimates using this framework were more accurate than visual estimates without dental contour. Dentalmaps represents a useful documentation and communication tool between radiation oncologists and dentists in routine practice. Prospective multicenter assessment is underway on patients extrinsic to the database.

  13. Land Cover Classification in a Complex Urban-Rural Landscape with Quickbird Imagery

    PubMed Central

    Moran, Emilio Federico.

    2010-01-01

    High spatial resolution images have been increasingly used for urban land use/cover classification, but the high spectral variation within the same land cover, the spectral confusion among different land covers, and the shadow problem often lead to poor classification performance based on the traditional per-pixel spectral-based classification methods. This paper explores approaches to improve urban land cover classification with Quickbird imagery. Traditional per-pixel spectral-based supervised classification, incorporation of textural images and multispectral images, spectral-spatial classifier, and segmentation-based classification are examined in a relatively new developing urban landscape, Lucas do Rio Verde in Mato Grosso State, Brazil. This research shows that use of spatial information during the image classification procedure, either through the integrated use of textural and spectral images or through the use of segmentation-based classification method, can significantly improve land cover classification performance. PMID:21643433
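    The finding above, that adding spatial information to per-pixel spectral classification improves performance, can be sketched as follows. This is a minimal illustration, not the paper's procedure: the function names are assumptions, and local variance stands in for the more elaborate texture measures typically derived from high-resolution imagery.

    ```python
    import numpy as np

    def local_variance(band, size=3):
        """A simple texture image: variance over a size x size window,
        computed with a brute-force loop for clarity."""
        h, w = band.shape
        pad = size // 2
        padded = np.pad(band.astype(float), pad, mode="edge")
        out = np.empty((h, w))
        for r in range(h):
            for c in range(w):
                out[r, c] = padded[r:r + size, c:c + size].var()
        return out

    def stack_features(bands, size=3):
        """Stack spectral bands with their texture images into a
        per-pixel feature array for a supervised classifier."""
        feats = list(bands) + [local_variance(b, size) for b in bands]
        return np.stack(feats, axis=-1)
    ```

    Any per-pixel classifier can then be trained on the stacked features instead of the raw spectral values alone.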

  14. Segmentation, modeling and classification of the compact objects in a pile

    NASA Technical Reports Server (NTRS)

    Gupta, Alok; Funka-Lea, Gareth; Wohn, Kwangyoen

    1990-01-01

    The problem of interpreting dense range images obtained from the scene of a heap of man-made objects is discussed. A range image interpretation system consisting of segmentation, modeling, verification, and classification procedures is described. First, the range image is segmented into regions and reasoning is done about the physical support of these regions. Second, for each region several possible three-dimensional interpretations are made based on various scenarios of the object's physical support. Finally, each interpretation is tested against the data for its consistency. The superquadric model, augmented with tapering deformations along the major axis, is selected as the three-dimensional shape descriptor. Experimental results obtained from some complex range images of mail pieces are reported to demonstrate the soundness and the robustness of our approach.

  15. Classification of microscopy images of Langerhans islets

    NASA Astrophysics Data System (ADS)

    Švihlík, Jan; Kybic, Jan; Habart, David; Berková, Zuzana; Girman, Peter; Kříž, Jan; Zacharovová, Klára

    2014-03-01

    Evaluation of images of Langerhans islets is a crucial procedure for planning an islet transplantation, which is a promising diabetes treatment. This paper deals with segmentation of microscopy images of Langerhans islets and evaluation of islet parameters such as area, diameter, or volume (IE). For all the available images, the ground truth and the islet parameters were independently evaluated by four medical experts. We use a pixelwise linear classifier (perceptron algorithm) and an SVM (support vector machine) for image segmentation. The volume is estimated based on circle or ellipse fitting to individual islets. The segmentations were compared with the corresponding ground truth. Quantitative islet parameters were also evaluated and compared with the parameters given by the medical experts. We can conclude that the accuracy of the presented fully automatic algorithm is comparable with that of the medical experts.
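    A minimal sketch of how islet parameters can be derived from a binary segmentation mask, assuming an equivalent-circle fit and the conventional 150 µm reference diameter for islet equivalents (IE); the function name and pixel size are illustrative assumptions, not details from the paper.

    ```python
    import math

    def islet_parameters(mask, pixel_size_um=2.0):
        """Estimate islet area, equivalent-circle diameter and spherical
        volume from a binary mask (the paper also fits ellipses, which
        is omitted here for brevity)."""
        area_px = sum(v for row in mask for v in row)
        area = area_px * pixel_size_um ** 2            # um^2
        diameter = 2.0 * math.sqrt(area / math.pi)     # equivalent circle, um
        volume = (math.pi / 6.0) * diameter ** 3       # sphere volume, um^3
        ie = volume / ((math.pi / 6.0) * 150.0 ** 3)   # islet equivalents
        return area, diameter, volume, ie
    ```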

  16. Automated brain tumor segmentation using spatial accuracy-weighted hidden Markov Random Field.

    PubMed

    Nie, Jingxin; Xue, Zhong; Liu, Tianming; Young, Geoffrey S; Setayesh, Kian; Guo, Lei; Wong, Stephen T C

    2009-09-01

    A variety of algorithms have been proposed for brain tumor segmentation from multi-channel sequences; however, most of them require isotropic or pseudo-isotropic resolution of the MR images. Although co-registration and interpolation of low-resolution sequences, such as T2-weighted images, onto the space of the high-resolution image, such as a T1-weighted image, can be performed prior to the segmentation, the results are usually limited by partial volume effects due to interpolation of low-resolution images. To improve the quality of tumor segmentation in clinical applications where low-resolution sequences are commonly used together with high-resolution images, we propose an algorithm based on the Spatial accuracy-weighted Hidden Markov random field and Expectation maximization (SHE) approach for both automated tumor and enhanced-tumor segmentation. SHE incorporates the spatial interpolation accuracy of low-resolution images into the optimization procedure of the Hidden Markov Random Field (HMRF) to segment tumor using multi-channel MR images with different resolutions, e.g., high-resolution T1-weighted and low-resolution T2-weighted images. In experiments, we evaluated this algorithm using a set of simulated multi-channel brain MR images with known ground-truth tissue segmentation and also applied it to a dataset of MR images obtained during clinical trials of brain tumor chemotherapy. The results show that more accurate tumor segmentation can be obtained in comparison with conventional multi-channel segmentation algorithms.

  17. Automated side-chain model building and sequence assignment by template matching.

    PubMed

    Terwilliger, Thomas C

    2003-01-01

    An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer.
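    The sequence-assignment step described above can be sketched as follows, under simplifying assumptions: a uniform prior over offsets and an illustrative `best_alignment` function that sums log-probabilities from the per-position probability matrix (the RESOLVE implementation is considerably more elaborate).

    ```python
    import math

    AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

    def best_alignment(prob_matrix, sequence):
        """Score every alignment of a main-chain segment against the
        protein sequence by summing log-probabilities, then return the
        best offset and its posterior probability (uniform prior)."""
        n, m = len(prob_matrix), len(sequence)
        scores = []
        for offset in range(m - n + 1):
            s = 0.0
            for i in range(n):
                p = prob_matrix[i][AA.index(sequence[offset + i])]
                s += math.log(max(p, 1e-12))  # guard against zeros
            scores.append(s)
        mx = max(scores)
        w = [math.exp(s - mx) for s in scores]
        post = [x / sum(w) for x in w]
        best = max(range(len(post)), key=post.__getitem__)
        return best, post[best]
    ```

    High-confidence matches would be kept only when the posterior exceeds a chosen cutoff, mirroring the "high-confidence matches are kept" step in the abstract.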

  18. What provides a better value for your time? The use of relative value units to compare posterior segmental instrumentation of vertebral segments.

    PubMed

    Orr, R Douglas; Sodhi, Nipun; Dalton, Sarah E; Khlopas, Anton; Sultan, Assem A; Chughtai, Morad; Newman, Jared M; Savage, Jason; Mroz, Thomas E; Mont, Michael A

    2018-02-02

    Relative value units (RVUs) are a compensation model based on the effort required to provide a procedure or service to a patient. Thus, procedures that are more complex and require greater technical skill and aftercare, such as multilevel spine surgery, should provide greater physician compensation. However, there are limited data comparing RVUs with operative time. Therefore, this study aims to compare mean (1) operative times; (2) RVUs; and (3) RVU/min between posterior segmental instrumentation of 3-6, 7-12, and ≥13 vertebral segments, and to perform an annualized cost difference analysis. A total of 437 patients who underwent instrumentation of 3-6 segments (Cohort 1, current procedural terminology [CPT] code: 22842), 67 patients who had instrumentation of 7-12 segments (Cohort 2, CPT code: 22843), and 16 patients who had instrumentation of ≥13 segments (Cohort 3, CPT code: 22844) were identified from the National Surgical Quality Improvement Program (NSQIP) database. Mean operative times, RVUs, and RVU/min, as well as an annualized cost difference analysis, were calculated and compared using the Student t test. This study received no funding from any party or entity. Cohort 1 had shorter mean operative times than Cohorts 2 and 3 (217 minutes vs. 325 minutes vs. 426 minutes, p<.05). Cohort 1 had a lower mean RVU than Cohorts 2 and 3 (12.6 vs. 13.4 vs. 16.4). Cohort 1 had a greater RVU/min than Cohorts 2 and 3 (0.08 vs. 0.05, p<.05; 0.08 vs. 0.05, p>.05). A $112,432.12 annualized cost difference between Cohorts 1 and 2, a $176,744.76 difference between Cohorts 1 and 3, and a $64,312.55 difference between Cohorts 2 and 3 were calculated. The RVU/min takes into account not just the value provided but also the operative times required for highly complex cases. The greater RVU/min for instrumentation of fewer vertebral levels (0.08 vs. 0.05), together with the roughly $177,000 annualized cost difference, indicates that compensation is not proportional to the added time, effort, and skill required for more complex cases. Copyright © 2018 Elsevier Inc. All rights reserved.
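    The RVU/min figure is plain arithmetic; a sketch using the cohort means reported above (note that the ratios of the reported means come out near 0.06 and 0.04 rather than the rounded per-patient figures of 0.08 and 0.05 quoted in the abstract, so treat the exact values as illustrative).

    ```python
    def rvu_per_min(rvu, operative_min):
        """Relative value units generated per operative minute."""
        return rvu / operative_min

    # Cohort means reported in the abstract: (mean RVU, mean minutes).
    cohorts = {
        "22842 (3-6 segments)":  (12.6, 217),
        "22843 (7-12 segments)": (13.4, 325),
        "22844 (>=13 segments)": (16.4, 426),
    }
    rates = {name: rvu_per_min(r, t) for name, (r, t) in cohorts.items()}
    ```

    The ordering of the rates reproduces the study's central observation: RVU/min falls as the number of instrumented segments rises.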

  19. Method 365.5 Determination of Orthophosphate in Estuarine and Coastal Waters by Automated Colorimetric Analysis

    EPA Science Inventory

    This method provides a procedure for the determination of low-level orthophosphate concentrations normally found in estuarine and/or coastal waters. It is based upon the method of Murphy and Riley1 adapted for automated segmented flow analysis2 in which the two reagent solutions ...

  20. Extracting oil palm crown from WorldView-2 satellite image

    NASA Astrophysics Data System (ADS)

    Korom, A.; Phua, M.-H.; Hirata, Y.; Matsuura, T.

    2014-02-01

    Oil palm (OP) is the most important commercial crop in Malaysia. Estimating the crowns is important for biomass estimation from high resolution satellite (HRS) imagery. This study examined extraction of individual OP crowns from a WorldView-2 image using a twofold algorithm: masking of non-OP pixels and detection of individual OP crowns based on watershed segmentation of greyscale images. The study site was located in Beluran district, central Sabah, where mature OPs with ages ranging from 15 to 25 years have been planted. We examined two compound vegetation indices, (NDVI+1)*DVI and NDII, for masking non-OP crown areas. Using kappa statistics, an optimal threshold value was set with the highest accuracy at 90.6% for differentiating OP crown areas from non-OP areas. After the watershed segmentation of OP crown areas with additional post-procedures, about 77% of individual OP crowns were successfully detected in comparison to manual delineation. The shape and location of each crown segment were then assessed based on a modified version of the goodness measures of Möller et al., which was 0.3, indicating an acceptable CSGM (combined segmentation goodness measures) agreement between the automated and manually delineated crowns (the perfect case is '1').

  1. Segmentation of British Sign Language (BSL): mind the gap!

    PubMed

    Orfanidou, Eleni; McQueen, James M; Adam, Robert; Morgan, Gary

    2015-01-01

    This study asks how users of British Sign Language (BSL) recognize individual signs in connected sign sequences. We examined whether this is achieved through modality-specific or modality-general segmentation procedures. A modality-specific feature of signed languages is that, during continuous signing, there are salient transitions between sign locations. We used the sign-spotting task to ask if and how BSL signers use these transitions in segmentation. A total of 96 real BSL signs were preceded by nonsense signs which were produced in either the target location or another location (with a small or large transition). Half of the transitions were within the same major body area (e.g., head) and half were across body areas (e.g., chest to hand). Deaf adult BSL users (a group of natives and early learners, and a group of late learners) spotted target signs best when there was a minimal transition and worst when there was a large transition. When location changes were present, both groups performed better when transitions were to a different body area than when they were within the same area. These findings suggest that transitions do not provide explicit sign-boundary cues in a modality-specific fashion. Instead, we argue that smaller transitions help recognition in a modality-general way by limiting lexical search to signs within location neighbourhoods, and that transitions across body areas also aid segmentation in a modality-general way, by providing a phonotactic cue to a sign boundary. We propose that sign segmentation is based on modality-general procedures which are core language-processing mechanisms.

  2. Mode calculations in unstable resonators with flowing saturable gain. 1: Hermite-Gaussian expansion.

    PubMed

    Siegman, A E; Sziklas, E A

    1974-12-01

    We present a procedure for calculating the three-dimensional mode pattern, the output beam characteristics, and the power output of an oscillating high-power laser taking into account a nonuniform, transversely flowing, saturable gain medium; index inhomogeneities inside the laser resonator; and arbitrary mirror distortion and misalignment. The laser is divided into a number of axial segments. The saturated gain-and-index variation across each short segment is lumped into a complex gain profile across the midplane of that segment. The circulating optical wave within the resonator is propagated from midplane to midplane in free-space fashion and is multiplied by the lumped complex gain profile upon passing through each midplane. After each complete round trip of the optical wave inside the resonator, the saturated gain profiles are recalculated based upon the circulating fields in the cavity. The procedure when applied to typical unstable-resonator flowing-gain lasers shows convergence to a single distorted steady-state mode of oscillation. Typical near-field and far-field results are presented. Several empirical rules of thumb for finite truncated Hermite-Gaussian expansions, including an approximate sampling theorem, have been developed as part of the calculations.

  3. Robust and efficient fiducial tracking for augmented reality in HD-laparoscopic video streams

    NASA Astrophysics Data System (ADS)

    Mueller, M.; Groch, A.; Baumhauer, M.; Maier-Hein, L.; Teber, D.; Rassweiler, J.; Meinzer, H.-P.; Wegner, In.

    2012-02-01

    Augmented Reality (AR) is a convenient way of porting information from medical images into the surgical field of view and can deliver valuable assistance to the surgeon, especially in laparoscopic procedures. In addition, high definition (HD) laparoscopic video devices are a great improvement over the previously used low resolution equipment. However, in AR applications that rely on real-time detection of fiducials from video streams, the demand for efficient image processing has increased due to the introduction of HD devices. We present an algorithm based on the well-known Conditional Density Propagation (CONDENSATION) algorithm which can satisfy these new demands. By incorporating a prediction around an already existing and robust segmentation algorithm, we can speed up the whole procedure while leaving the robustness of the fiducial segmentation untouched. For evaluation purposes we tested the algorithm on recordings from real interventions, allowing for a meaningful interpretation of the results. Our results show that we can accelerate the segmentation by a factor of 3.5 on average. Moreover, the prediction information can be used to compensate for fiducials that are temporarily occluded or out of scope, providing greater stability.

  4. Automated segmentation and tracking for large-scale analysis of focal adhesion dynamics.

    PubMed

    Würflinger, T; Gamper, I; Aach, T; Sechi, A S

    2011-01-01

    Cell adhesion, a process mediated by the formation of discrete structures known as focal adhesions (FAs), is pivotal to many biological events including cell motility. Much is known about the molecular composition of FAs, although our knowledge of the spatio-temporal recruitment and the relative occupancy of the individual components present in the FAs is still incomplete. To fill this gap, an essential prerequisite is a highly reliable procedure for the recognition, segmentation and tracking of FAs. Although manual segmentation and tracking may provide some advantages when done by an expert, its performance is usually hampered by subjective judgement and the long time required in analysing large data sets. Here, we developed a model-based segmentation and tracking algorithm that overcomes these problems. In addition, we developed a dedicated computational approach to correct segmentation errors that may arise from the analysis of poorly defined FAs. Thus, by achieving accurate and consistent FA segmentation and tracking, our work establishes the basis for a comprehensive analysis of FA dynamics under various experimental regimes and the future development of mathematical models that simulate FA behaviour. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.
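    As a rough illustration of FA tracking (not the authors' model-based algorithm), a greedy nearest-neighbour linker over per-frame centroids can be sketched as below; the `max_dist` gate and function name are assumptions.

    ```python
    import math

    def track_centroids(frames, max_dist=5.0):
        """Greedily link focal-adhesion centroids across frames:
        each existing track claims its nearest unmatched centroid in
        the next frame if it lies within max_dist; leftover centroids
        start new tracks (newly formed adhesions)."""
        tracks = [[c] for c in frames[0]]
        for frame in frames[1:]:
            unmatched = list(frame)
            for tr in tracks:
                if not unmatched:
                    continue
                last = tr[-1]
                best = min(unmatched, key=lambda p: math.dist(p, last))
                if math.dist(best, last) <= max_dist:
                    tr.append(best)
                    unmatched.remove(best)
            tracks.extend([[c] for c in unmatched])
        return tracks
    ```

    A model-based tracker such as the one in the paper additionally uses shape and intensity information, which makes it far more robust to poorly defined FAs than this distance-only sketch.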

  5. Monitoring of mirror degradation of fluorescence detectors at the Pierre Auger Observatory due to dust sedimentation

    NASA Astrophysics Data System (ADS)

    Nozka, L.; Hiklova, H.; Horvath, P.; Hrabovsky, M.; Mandat, D.; Palatka, M.; Pech, M.; Ridky, J.; Schovanek, P.

    2018-05-01

    We present results of the monitoring method we have used to characterize the optical performance deterioration, due to dust, of the mirror segments produced for fluorescence detectors used in astrophysics experiments. The method is based on the measurement of scatter profiles of reflected light. The scatter profiles and the reflectivity of the mirror segments sufficiently describe the performance of the mirrors from the perspective of reconstruction algorithms. The method is demonstrated on our mirror segments installed in the frame of the Pierre Auger Observatory project. Although they are installed in air-conditioned buildings, both dust sedimentation and natural aging of the reflective layer deteriorate the optical throughput of the segments. In the paper, we summarize data from ten years of operation of the fluorescence detectors. During this time, we periodically measured in situ the scatter characteristics, represented by the specular reflectivity and the reflectivity of the diffuse component of the segment surface at a wavelength of 670 nm (measured by means of the optical scatter technique as well). These measurements were extended with full Bidirectional Reflectance Distribution Function (BRDF) profiles of selected segments made in the laboratory. Cleaning procedures are also discussed in the paper.

  6. Fast Edge Detection and Segmentation of Terrestrial Laser Scans Through Normal Variation Analysis

    NASA Astrophysics Data System (ADS)

    Che, E.; Olsen, M. J.

    2017-09-01

    Terrestrial Laser Scanning (TLS) utilizes light detection and ranging (lidar) to effectively and efficiently acquire point cloud data for a wide variety of applications. Segmentation is a common post-processing procedure that groups the point cloud into a number of clusters to simplify the data for the subsequent modelling and analysis needed for most applications. This paper presents a novel method to rapidly segment TLS data based on edge detection and region growing. First, by computing the projected incidence angles and performing the normal variation analysis, the silhouette edges and intersection edges are separated from the smooth surfaces. Then a modified region growing algorithm groups the points lying on the same smooth surface. The proposed method efficiently exploits the gridded scan pattern utilized during acquisition of TLS data by most sensors and takes advantage of parallel programming to process approximately 1 million points per second. Moreover, the proposed segmentation does not require estimation of the normal at each point, which prevents normal-estimation errors from propagating into the segmentation. Both an indoor and an outdoor scene are used in an experiment to demonstrate and discuss the effectiveness and robustness of the proposed segmentation method.

  7. A KST framework for correlation network construction from time series signals

    NASA Astrophysics Data System (ADS)

    Qi, Jin-Peng; Gu, Quan; Zhu, Ying; Zhang, Ping

    2018-04-01

    A KST (Kolmogorov-Smirnov test and T statistic) method is used for construction of a correlation network based on the fluctuation of each time series within the multivariate time signals. In this method, each time series is divided equally into multiple segments, and the maximal data fluctuation in each segment is calculated by a KST change detection procedure. Connections between the time series are derived from the data fluctuation matrix and are used for construction of the fluctuation correlation network (FCN). The method was tested with synthetic simulations, and the result was compared with those from using KS or T alone for detection of data fluctuation. The novelty of this study is that the correlation analysis is based on the data fluctuation in each segment of each time series rather than on the original time signals, which would be more meaningful for many real-world applications and for analysis of large-scale time signals where prior knowledge is uncertain.
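    A compact sketch of the FCN construction under stated simplifications: the per-segment max-min range stands in for the KS/T change statistic, and the correlation threshold is an assumed parameter.

    ```python
    import numpy as np

    def fluctuation_matrix(signals, n_seg=4):
        """Split each series into equal segments and record the maximal
        data fluctuation per segment (here the max-min range stands in
        for the KST change statistic)."""
        F = []
        for x in signals:
            segs = np.array_split(np.asarray(x, dtype=float), n_seg)
            F.append([s.max() - s.min() for s in segs])
        return np.array(F)

    def fluctuation_network(signals, n_seg=4, threshold=0.8):
        """Connect two series when their per-segment fluctuation
        profiles correlate above a threshold."""
        F = fluctuation_matrix(signals, n_seg)
        A = (np.abs(np.corrcoef(F)) >= threshold).astype(int)
        np.fill_diagonal(A, 0)  # no self-loops
        return A
    ```

    Series whose bursts of activity occur in the same segments are linked, even when their raw amplitudes differ, which is exactly the point of correlating fluctuations rather than the original signals.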

  8. Interactive Tooth Separation from Dental Model Using Segmentation Field

    PubMed Central

    2016-01-01

    Tooth segmentation on a dental model is an essential step of computer-aided-design systems for orthodontic virtual treatment planning. However, fast and accurate identification of the cutting boundary that separates teeth from the dental model still remains a challenge, due to the various geometrical shapes of teeth, complex tooth arrangements, different dental model qualities, and varying degrees of crowding. Most segmentation approaches presented before are not able to achieve a balance between fine segmentation results and simple operating procedures with low time consumption. In this article, we present a novel, effective and efficient framework that achieves tooth segmentation based on a segmentation field, which is solved from a linear system defined by a discrete Laplace-Beltrami operator with Dirichlet boundary conditions. A set of contour lines is sampled from the smooth scalar field, and candidate cutting boundaries can be detected from concave regions with large variations of field data. The sensitivity of the segmentation field to concave seams facilitates effective tooth partition and avoids the need to choose an appropriate curvature threshold value, which is unreliable in some cases. Our tooth segmentation algorithm is robust to dental models of low quality and is effective on dental models with different levels of crowding. Experiments, including segmentation tests on dental models of varying complexity, experiments on dental meshes with different modeling resolutions and surface noise, and a comparison between our method and the morphologic skeleton segmentation method, demonstrate the effectiveness of our method. PMID:27532266
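    The core linear solve can be sketched on a plain graph with uniform weights (a real implementation would assemble a cotangent-weighted Laplace-Beltrami operator on the dental mesh; the names here are illustrative):

    ```python
    import numpy as np

    def harmonic_field(n_vertices, edges, fixed):
        """Solve a discrete Laplace equation L*x = 0 with Dirichlet
        values at the 'fixed' vertices, on a uniform-weight graph."""
        L = np.zeros((n_vertices, n_vertices))
        for i, j in edges:
            L[i, i] += 1; L[j, j] += 1
            L[i, j] -= 1; L[j, i] -= 1
        b = np.zeros(n_vertices)
        for v, val in fixed.items():   # impose Dirichlet constraints
            L[v, :] = 0.0
            L[v, v] = 1.0
            b[v] = val
        return np.linalg.solve(L, b)
    ```

    Sampling an iso-contour of the resulting field (e.g. at 0.5) between two sets of fixed vertices then yields a candidate cutting boundary, analogous to the contour lines sampled from the scalar field above.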

  9. Semiautomatic segmentation of the heart from CT images based on intensity and morphological features

    NASA Astrophysics Data System (ADS)

    Redwood, Abena B.; Camp, Jon J.; Robb, Richard A.

    2005-04-01

    The incidence of certain types of cardiac arrhythmias is increasing. Effective, minimally invasive treatment has remained elusive. Pharmacologic treatment has been limited by drug intolerance and recurrence of disease. Catheter based ablation has been moderately successful in treating certain types of cardiac arrhythmias, including typical atrial flutter and fibrillation, but there remains a relatively high rate of recurrence. Additional side effects associated with cardiac ablation procedures include stroke, perivascular lung damage, and skin burns caused by x-ray fluoroscopy. Access to patient specific 3-D cardiac images has the potential to significantly improve the process of cardiac ablation by providing the physician with a volume visualization of the heart. This would facilitate more effective guidance of the catheter, increase the accuracy of the ablative process, and eliminate or minimize the damage to surrounding tissue. In this study, a semiautomatic method for faithful cardiac segmentation was investigated using Analyze - a comprehensive processing software package developed at the Biomedical Imaging Resource, Mayo Clinic. This method included interactive segmentation based on mathematical morphology and separation of the chambers based on morphological connections. The external surfaces of the hearts were readily segmented, while accurate separation of individual chambers was a challenge. Nonetheless, a skilled operator could manage the task in a few minutes. Useful improvements suggested in this paper would give this method a promising future.

  10. MLS data segmentation using Point Cloud Library procedures. (Polish Title: Segmentacja danych MLS z użyciem procedur Point Cloud Library)

    NASA Astrophysics Data System (ADS)

    Grochocka, M.

    2013-12-01

    Mobile laser scanning is a dynamically developing measurement technology that is becoming increasingly widespread for acquiring three-dimensional spatial information. Continuous technical progress, based on the use of new tools and the development of the technology, and thus the better use of existing resources, reveals new horizons for the extensive use of MLS technology. A mobile laser scanning system is usually used for mapping linear objects, and in particular for the inventory of roads, railways, bridges, shorelines, shafts, tunnels, and even geometrically complex urban spaces. The measurement is done from the perspective of the object's use and does not interfere with movement or work around it. This paper presents the initial results of the segmentation of data acquired by MLS. The data used in this work were obtained as part of an inventory measurement of railway line infrastructure. The measurement of point clouds was carried out using profile scanners installed on a railway platform. To process the data, the tools of the open-source Point Cloud Library (PCL) were used. These tools provide template-based programming libraries. PCL is an open, independent project, operating on a large scale, for processing 2D/3D images and point clouds. The PCL software is released under the terms of the BSD license (Berkeley Software Distribution License), which means it is free for commercial and research use. The article presents a number of issues related to the use of this software and its capabilities. Data segmentation is based on the template library pcl_segmentation, which contains segmentation algorithms for separating clusters. These algorithms are best suited to processing point clouds consisting of a number of spatially isolated regions. The library performs cluster extraction based on model fitting with the sample consensus method for various parametric models (planes, cylinders, spheres, lines, etc.). Most of the mathematical operations are carried out using the Eigen library, a set of templates for linear algebra.

  11. Iris recognition: on the segmentation of degraded images acquired in the visible wavelength.

    PubMed

    Proença, Hugo

    2010-08-01

    Iris recognition imaging constraints are receiving increasing attention. There are several proposals to develop systems that operate in the visible wavelength and in less constrained environments. These imaging conditions introduce noisy artifacts that lead to severely degraded images, making iris segmentation a major issue. Having observed that existing iris segmentation methods tend to fail in these challenging conditions, we present a segmentation method that can handle degraded images acquired in less constrained conditions. We offer the following contributions: 1) we consider the sclera the most easily distinguishable part of the eye in degraded images, 2) we propose a new type of feature that measures the proportion of sclera in each direction and is fundamental in segmenting the iris, and 3) the entire procedure runs in deterministically linear time with respect to the size of the image, making it suitable for real-time applications.
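    A sketch of a directional sclera-proportion feature, with caveats: this simplified version normalizes by the total sclera count and considers only four axis-aligned directions (the paper's feature is richer), but it illustrates how cumulative sums keep the whole feature map linear in the image size.

    ```python
    import numpy as np

    def sclera_proportions(sclera_mask):
        """For every pixel, the proportion of sclera pixels lying
        strictly to its left, right, top and bottom, computed with
        cumulative sums so the full map costs O(pixels)."""
        m = np.asarray(sclera_mask, dtype=float)
        left = np.cumsum(m, axis=1) - m        # sclera strictly left
        up = np.cumsum(m, axis=0) - m          # sclera strictly above
        total_row = m.sum(axis=1, keepdims=True)
        total_col = m.sum(axis=0, keepdims=True)
        right = total_row - left - m
        down = total_col - up - m
        denom = max(m.sum(), 1.0)
        return left / denom, right / denom, up / denom, down / denom
    ```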

  12. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras

    PubMed Central

    Peyer, Kathrin E.; Morris, Mark; Sellers, William I.

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include using regression models, which have limited accuracy; geometric models, with lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras, and 3D point cloud data were generated using structure-from-motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling was applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints. PMID:25780778
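    The convex-hull step can be sketched as below, assuming scipy is available and a uniform segment density (the density value and function name are illustrative; the paper additionally subdivides segments to tune accuracy).

    ```python
    import numpy as np
    from scipy.spatial import ConvexHull

    def segment_parameters(points, density=1000.0):
        """Convex-hull volume, mass and centre of mass for one body
        segment given its 3D point cloud (density in kg/m^3). The hull
        is decomposed into tetrahedra sharing an interior apex, whose
        volumes and centroids combine into exact hull quantities."""
        hull = ConvexHull(points)
        ref = points[hull.vertices].mean(axis=0)   # interior apex
        volume, weighted = 0.0, np.zeros(3)
        for simplex in hull.simplices:             # surface triangles
            a, b, c = points[simplex]
            v = abs(np.dot(a - ref, np.cross(b - ref, c - ref))) / 6.0
            centroid = (a + b + c + ref) / 4.0     # tetrahedron centroid
            volume += v
            weighted += v * centroid
        return volume, density * volume, weighted / volume
    ```

    Moments of inertia can be accumulated the same way over the tetrahedral decomposition, which is how geometric body-segment models typically obtain the full inertial parameter set.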

  13. Automatic MRI 2D brain segmentation using graph searching technique.

    PubMed

    Pedoia, Valentina; Binaghi, Elisabetta

    2013-09-01

    Accurate and efficient segmentation of the whole brain in magnetic resonance (MR) images is a key task in many neuroscience and medical studies, either because the whole brain is the final anatomical structure of interest or because the automatic extraction facilitates further analysis. The problem of segmenting brain MRI images has been extensively addressed by many researchers. Despite the relevant achievements obtained, automated segmentation of brain MRI imagery is still a challenging problem whose solution has to cope with critical aspects such as anatomical variability and pathological deformation. In the present paper, we describe and experimentally evaluate a method for segmenting the brain from MRI images based on two-dimensional graph searching principles for border detection. The segmentation of the whole brain over the entire volume is accomplished slice by slice, automatically detecting frames including the eyes. The method is fully automatic and easily reproducible, computing the internal main parameters directly from the image data. The segmentation procedure is conceived as a tool of general applicability, although design requirements are especially commensurate with the accuracy required in clinical tasks such as surgical planning and post-surgical assessment. Several experiments were performed to assess the performance of the algorithm on a varied set of MRI images, obtaining good results in terms of accuracy and stability. Copyright © 2012 John Wiley & Sons, Ltd.
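
    The two-dimensional graph-searching principle for border detection is, in its simplest dynamic-programming form, a minimum-cost path search through a cost image. The sketch below is a generic illustration of that principle, not the authors' implementation.

```python
import numpy as np

def min_cost_border(cost):
    """Minimum-cost top-to-bottom path through a cost image, the classic
    dynamic-programming form of 2D graph searching for border detection.
    Returns one column index per row."""
    h, w = cost.shape
    acc = cost.astype(float).copy()
    back = np.zeros((h, w), dtype=int)
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(0, j - 1), min(w, j + 2)   # 8-connected step
            k = lo + int(np.argmin(acc[i - 1, lo:hi]))
            back[i, j] = k
            acc[i, j] += acc[i - 1, k]
    # backtrack from the cheapest endpoint in the last row
    path = [int(np.argmin(acc[-1]))]
    for i in range(h - 1, 0, -1):
        path.append(back[i, path[-1]])
    return path[::-1]
```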

  15. Modelling of subject specific based segmental dynamics of knee joint

    NASA Astrophysics Data System (ADS)

    Nasir, N. H. M.; Ibrahim, B. S. K. K.; Huq, M. S.; Ahmad, M. K. I.

    2017-09-01

    This study determines segmental dynamics parameters using a subject-specific method. Five hemiplegic patients participated in the study, two men and three women. Their ages ranged from 50 to 60 years, weights from 60 to 70 kg and heights from 145 to 170 cm. The sample group included patients with different sides of stroke. The segmental dynamics parameters representing knee joint function were obtained using Winter's measurements, and the model was generated by employing Kane's equations of motion. Inertial parameters in the form of anthropometry were identified and measured by applying standard human dimensions to the hemiplegic subjects. The inertial parameters are the location of the centre of mass (COM) along the length of the limb segment, the moment of inertia around the COM, and the masses of the shank and foot, which are needed to generate accurate equations of motion. The investigation also highlights several advantages of employing Winter's anthropometric tables together with Kane's equations of motion in movement biomechanics. A general procedure is presented to yield accurate estimates of the inertial parameters of the knee joint for subjects with a history of stroke.
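
    A minimal sketch of the table-based step, using segment ratios as commonly tabulated from Winter's anthropometric tables (the numeric values below should be verified against the original table before use):

```python
# Winter's anthropometric ratios: segment mass / body mass, COM position /
# segment length from the proximal end, and radius of gyration / segment
# length about the COM. Values as commonly tabulated; verify against Winter.
WINTER = {
    'shank': dict(mass_ratio=0.0465, com_ratio=0.433, rog_ratio=0.302),
    'foot':  dict(mass_ratio=0.0145, com_ratio=0.50,  rog_ratio=0.475),
}

def segment_parameters(segment, body_mass_kg, segment_length_m):
    """Mass (kg), COM distance from the proximal joint (m), and moment of
    inertia about the COM (kg m^2) for one body segment."""
    p = WINTER[segment]
    mass = p['mass_ratio'] * body_mass_kg
    com = p['com_ratio'] * segment_length_m
    inertia = mass * (p['rog_ratio'] * segment_length_m) ** 2
    return mass, com, inertia
```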

  16. Reproducibility of tract segmentation between sessions using an unsupervised modelling-based approach.

    PubMed

    Clayden, Jonathan D; Storkey, Amos J; Muñoz Maniega, Susana; Bastin, Mark E

    2009-04-01

    This work describes a reproducibility analysis of scalar water diffusion parameters, measured within white matter tracts segmented using a probabilistic shape modelling method. In common with previously reported neighbourhood tractography (NT) work, the technique optimises seed point placement for fibre tracking by matching the tracts generated using a number of candidate points against a reference tract, which is derived from a white matter atlas in the present study. No direct constraints are applied to the fibre tracking results. An Expectation-Maximisation algorithm is used to fully automate the procedure and to make dramatically more efficient use of data than earlier NT methods. Within-subject and between-subject variances for fractional anisotropy and mean diffusivity within the tracts are then separated using a random effects model. We find test-retest coefficients of variation (CVs) similar to those reported in another study using landmark-guided single seed points, and subject-to-subject CVs similar to those of a constraint-based multiple-ROI method. We conclude that our approach is at least as effective as other methods for tract segmentation using tractography, whilst also having some additional benefits, such as its provision of a goodness-of-match measure for each segmentation.
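
    The separation of within- and between-subject variance can be sketched with a one-way random-effects decomposition; this is a generic illustration (not the authors' exact model) that returns the corresponding coefficients of variation.

```python
import numpy as np

def variance_components(data):
    """data: subjects x repeated sessions. One-way random-effects split of
    variance into within- and between-subject components, returned as
    coefficients of variation relative to the grand mean."""
    data = np.asarray(data, float)
    n, k = data.shape
    subj_means = data.mean(axis=1)
    grand = data.mean()
    # mean squares within and between subjects (one-way ANOVA)
    msw = ((data - subj_means[:, None]) ** 2).sum() / (n * (k - 1))
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)
    var_within = msw
    var_between = max((msb - msw) / k, 0.0)
    return np.sqrt(var_within) / grand, np.sqrt(var_between) / grand
```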

  17. Estimation of 3D reconstruction errors in a stereo-vision system

    NASA Astrophysics Data System (ADS)

    Belhaoua, A.; Kohler, S.; Hirsch, E.

    2009-06-01

    The paper presents an approach for error estimation for the various steps of an automated 3D vision-based reconstruction procedure of manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state whether reconstruction results fulfill tolerance rules or not. Thus, the goal is to evaluate independently the error for each step of the stereo-vision based 3D reconstruction (e.g., for calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we particularly analyze the segmentation error due to localization errors for extracted edge points supposed to belong to lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, enabling evaluation of their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables assessment of the quality of the 3D reconstruction, as illustrated by the experimental results shown.
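
    As a simple illustration of the propagation step, consider the first-order propagation of a disparity (localization) error into depth for a rectified stereo pair; this textbook formula is a stand-in for the paper's full error model.

```python
def depth_uncertainty(f_px, baseline_m, disparity_px, sigma_d_px):
    """First-order propagation of a disparity error into depth for a
    rectified stereo pair: Z = f*B/d, so sigma_Z = f*B/d**2 * sigma_d."""
    z = f_px * baseline_m / disparity_px
    sigma_z = f_px * baseline_m / disparity_px ** 2 * sigma_d_px
    return z, sigma_z
```

    Note the quadratic growth in 1/d: the same pixel-level segmentation error produces far larger depth uncertainty for distant (small-disparity) points.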

  18. Segmentation-less Digital Rock Physics

    NASA Astrophysics Data System (ADS)

    Tisato, N.; Ikeda, K.; Goldfarb, E. J.; Spikes, K. T.

    2017-12-01

    In the last decade, Digital Rock Physics (DRP) has become an avenue to investigate physical and mechanical properties of geomaterials. DRP offers the advantage of simulating laboratory experiments on numerical samples that are obtained from analytical methods. Potentially, DRP could spare part of the time and resources that are allocated to performing complicated laboratory tests. Like classic laboratory tests, the goal of DRP is to accurately estimate physical properties of rocks, such as hydraulic permeability or elastic moduli. Nevertheless, the physical properties of samples imaged using micro-computed tomography (μCT) are estimated through segmentation of the μCT dataset. Segmentation proves to be a challenging and arbitrary procedure that typically leads to inaccurate estimates of physical properties. Here we present a novel technique to extract physical properties from a μCT dataset without the use of segmentation. We show examples in which we use the segmentation-less method to simulate elastic wave propagation and pressure wave diffusion to estimate elastic properties and permeability, respectively. The proposed method takes advantage of effective medium theories and uses the density and the porosity that are measured in the laboratory to constrain the results. We discuss the results and highlight that segmentation-less DRP is more accurate than segmentation-based DRP approaches and theoretical modeling for the studied rock. In conclusion, the segmentation-less approach presented here seems to be a promising method to improve accuracy and to ease the overall workflow of DRP.
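
    A minimal sketch of the segmentation-less idea under simplifying assumptions: grey values are mapped linearly to per-voxel porosity, rescaled so the mean matches the laboratory-measured porosity, and a Voigt-Reuss-Hill average of assumed mineral/fluid moduli (quartz ≈37 GPa, water ≈2.2 GPa) gives a per-voxel effective modulus. The exact mappings used in the paper may differ.

```python
import numpy as np

def voxel_porosity(ct, lab_porosity):
    """Linear mapping of CT grey values to per-voxel porosity, rescaled so
    the mean matches the laboratory porosity (no segmentation step)."""
    g = ct.astype(float)
    phi = (g.max() - g) / (g.max() - g.min())   # darker voxels = more porous
    return phi * (lab_porosity / phi.mean())

def hill_modulus(phi, k_mineral=37.0, k_fluid=2.2):
    """Voigt-Reuss-Hill average of a mineral/fluid mix per voxel (GPa)."""
    voigt = (1 - phi) * k_mineral + phi * k_fluid
    reuss = 1.0 / ((1 - phi) / k_mineral + phi / k_fluid)
    return 0.5 * (voigt + reuss)
```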

  19. VirSSPA- a virtual reality tool for surgical planning workflow.

    PubMed

    Suárez, C; Acha, B; Serrano, C; Parra, C; Gómez, T

    2009-03-01

    A virtual reality tool, called VirSSPA, was developed to optimize the planning of surgical processes. Two segmentation algorithms for Computed Tomography (CT) images were implemented: a region-growing procedure for soft tissues and a thresholding algorithm for bones. The algorithms operate semiautomatically, since they only require the user to select a seed with the mouse on each tissue to be segmented. The novelty of the paper is the adaptation of an enhancement method based on histogram thresholding applied to CT images for surgical planning, which simplifies subsequent segmentation. A substantial improvement of the virtual reality tool VirSSPA was obtained with these algorithms. VirSSPA was used to optimize surgical planning, to decrease the time spent on surgical planning and to improve operative results. The success rate increases because surgeons are able to see the exact extent of the patient's ailment. This tool can decrease operating room time, thus resulting in reduced costs. Virtual simulation was effective for optimizing surgical planning, which could, consequently, result in improved outcomes at reduced cost.
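
    The region-growing step for soft tissue can be sketched generically as a breadth-first flood from a user-selected seed with an intensity tolerance (the tolerance rule is an assumption, not the paper's exact criterion):

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol):
    """Grow a region from a seed pixel, accepting 4-connected neighbours
    whose intensity is within tol of the seed intensity."""
    h, w = img.shape
    ref = float(img[seed])
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not mask[ni, nj] \
                    and abs(float(img[ni, nj]) - ref) <= tol:
                mask[ni, nj] = True
                q.append((ni, nj))
    return mask
```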

  20. Computer aided system for segmentation and visualization of microcalcifications in digital mammograms.

    PubMed

    Reljin, Branimir; Milosević, Zorica; Stojić, Tomislav; Reljin, Irini

    2009-01-01

    Two methods for segmentation and visualization of microcalcifications in digital or digitized mammograms are described. The first method is based on modern mathematical morphology, while the second uses a multifractal approach. In the first method, an appropriate combination of morphological operations yields high local contrast enhancement followed by significant suppression of background tissue, irrespective of its radiological density. Through an iterative procedure, this method strongly emphasizes only small bright details, the possible microcalcifications. In the multifractal approach, corresponding multifractal "images" are created from the initial mammogram, from which a radiologist is free to change the level of segmentation. A user-friendly computer-aided visualization (CAV) system embedding the two methods was realized. The interactive approach enables the physician to control the level and the quality of segmentation. The suggested methods were tested on mammograms from the MIAS database as a gold standard, and on digitized films and digital images from full-field digital mammography in clinical practice.
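
    The morphological enhancement can be illustrated with a white top-hat transform, a standard way to emphasize bright details smaller than the structuring element while suppressing background tissue; this sketch assumes SciPy's ndimage module and is not the authors' exact operator combination.

```python
import numpy as np
from scipy import ndimage  # assumed available

def enhance_microcalcifications(img, spot_size=3):
    """White top-hat: subtract a grey-scale opening so only bright details
    smaller than the structuring element survive, suppressing background
    tissue regardless of its density."""
    return ndimage.white_tophat(img.astype(float), size=spot_size * 2 + 1)
```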

  1. Automatic tracking of laparoscopic instruments for autonomous control of a cameraman robot.

    PubMed

    Khoiy, Keyvan Amini; Mirbagheri, Alireza; Farahmand, Farzam

    2016-01-01

    An automated instrument tracking procedure was designed and developed for autonomous control of a cameraman robot during laparoscopic surgery. The procedure was based on an innovative marker-free segmentation algorithm for detecting the tip of the surgical instruments in laparoscopic images. A compound measure of the Saturation and Value components of HSV color space was incorporated and further enhanced using the Hue component and some essential characteristics of the instrument segment, e.g., crossing the image boundaries. The procedure was then integrated into the controlling system of the RoboLens cameraman robot, within a triple-thread parallel processing scheme, such that the tip is always kept at the center of the image. Assessment of the performance of the system on prerecorded real surgery movies revealed an accuracy rate of 97% for high quality images and about 80% for those suffering from poor lighting and/or blood, water and smoke noise. A reasonably satisfying performance was also observed when employing the system for autonomous control of the robot in a laparoscopic surgery phantom, with a mean time delay of 200 ms. It was concluded that with further developments, the proposed procedure can provide a practical solution for autonomous control of cameraman robots during laparoscopic surgery operations.
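
    A rough sketch of a compound Saturation/Value measure: metallic instruments are bright and weakly saturated, so V·(1−S) scores them highly. The product form is an illustrative assumption; the paper does not specify the exact measure here.

```python
import numpy as np

def instrument_likelihood(rgb):
    """Per-pixel score for metallic instrument pixels in an RGB image using
    a compound Saturation/Value measure: bright, weakly saturated pixels
    score high. V*(1-S) is an illustrative choice."""
    rgb = rgb.astype(float) / 255.0
    v = rgb.max(axis=-1)                       # HSV Value
    mn = rgb.min(axis=-1)
    s = np.where(v > 0, (v - mn) / np.where(v > 0, v, 1.0), 0.0)  # Saturation
    return v * (1.0 - s)
```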

  2. Design of multi-body Lambert type orbits with specified departure and arrival positions

    NASA Astrophysics Data System (ADS)

    Ishii, Nobuaki; Kawaguchi, Jun'ichiro; Matsuo, Hiroki

    1991-10-01

    A new procedure for designing a multi-body Lambert-type orbit comprising a multiple-swingby process is developed, aiming at relieving a numerical difficulty inherent to the highly nonlinear swingby mechanism. The proposed algorithm, Recursive Multi-Step Linearization, first divides the whole orbit into several trajectory segments. Then, making maximum use of piecewise transition matrices, the segmented orbit is repeatedly upgraded until an approximate orbit initially based on a patched-conics method eventually converges. In an application to the four-body Earth-Moon system with the Sun's gravitation, a double lunar swingby orbit including 12 lunar swingbys is successfully designed without any velocity mismatch.
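
    The core linearisation idea, correcting the departure velocity with a finite-difference transition matrix until the propagated endpoint hits the target position, can be sketched on a toy dynamics model; constant gravity stands in here for the multi-body propagation of the paper.

```python
import numpy as np

def propagate(r0, v0, T, g=np.array([0.0, -9.81])):
    # stand-in dynamics; the paper would use the full multi-body propagation
    return r0 + v0 * T + 0.5 * g * T ** 2

def solve_departure_velocity(r0, r_target, T, v_guess, tol=1e-10):
    """Newton iteration on the initial velocity using a finite-difference
    transition matrix d r(T)/d v0 -- the linearisation step at the heart of
    Lambert-type targeting."""
    v = np.asarray(v_guess, float)
    for _ in range(20):
        miss = propagate(r0, v, T) - r_target
        if np.linalg.norm(miss) < tol:
            break
        eps = 1e-6
        J = np.column_stack([
            (propagate(r0, v + eps * e, T) - propagate(r0, v, T)) / eps
            for e in np.eye(2)])
        v = v - np.linalg.solve(J, miss)
    return v
```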

  3. Iterative-cuts: longitudinal and scale-invariant segmentation via user-defined templates for rectosigmoid colon in gynecological brachytherapy.

    PubMed

    Lüddemann, Tobias; Egger, Jan

    2016-04-01

    Among all types of cancer, gynecological malignancies belong to the fourth most frequent type of cancer among women. In addition to chemotherapy and external beam radiation, brachytherapy is the standard procedure for the treatment of these malignancies. In the process of treatment planning, localization of the tumor as the target volume and of adjacent organs at risk by segmentation is crucial to accomplish an optimal radiation distribution to the tumor while simultaneously preserving healthy tissue. Segmentation is performed manually and represents a time-consuming task in clinical daily routine. This study focuses on the segmentation of the rectum/sigmoid colon as an organ-at-risk in gynecological brachytherapy. The proposed segmentation method uses an interactive, graph-based segmentation scheme with a user-defined template. The scheme creates a directed two-dimensional graph, followed by the minimal cost closed set computation on the graph, resulting in an outlining of the rectum. The graph's outline is dynamically adapted to the last calculated cut. Evaluation was performed by comparing manual segmentations of the rectum/sigmoid colon to results achieved with the proposed method. Comparison of the algorithmic and manual results yielded a Dice similarity coefficient value of [Formula: see text], compared with [Formula: see text] for the comparison of two manual segmentations by the same physician. Utilizing the proposed methodology resulted in a median time of [Formula: see text], compared to 300 s needed for pure manual segmentation.

  4. Interactive and scale invariant segmentation of the rectum/sigmoid via user-defined templates

    NASA Astrophysics Data System (ADS)

    Lüddemann, Tobias; Egger, Jan

    2016-03-01

    Among all types of cancer, gynecological malignancies belong to the fourth most frequent type of cancer among women. Besides chemotherapy and external beam radiation, brachytherapy is the standard procedure for the treatment of these malignancies. In the process of treatment planning, localization of the tumor as the target volume and of adjacent organs at risk by segmentation is crucial to accomplish an optimal radiation distribution to the tumor while simultaneously preserving healthy tissue. Segmentation is performed manually and represents a time-consuming task in clinical daily routine. This study focuses on the segmentation of the rectum/sigmoid colon as an Organ-At-Risk in gynecological brachytherapy. The proposed segmentation method uses an interactive, graph-based segmentation scheme with a user-defined template. The scheme creates a directed two-dimensional graph, followed by the minimal cost closed set computation on the graph, resulting in an outlining of the rectum. The graph's outline is dynamically adapted to the last calculated cut. Evaluation was performed by comparing manual segmentations of the rectum/sigmoid colon to results achieved with the proposed method. Comparison of the algorithmic and manual results yielded a Dice Similarity Coefficient value of 83.85±4.08%, compared with 83.97±8.08% for the comparison of two manual segmentations by the same physician. Utilizing the proposed methodology resulted in a median time of 128 seconds per dataset, compared to 300 seconds needed for pure manual segmentation.
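
    The Dice Similarity Coefficient used in the evaluation above is twice the overlap of the two masks divided by their total size:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```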

  5. Morphology-based three-dimensional segmentation of coronary artery tree from CTA scans

    NASA Astrophysics Data System (ADS)

    Banh, Diem Phuc T.; Kyprianou, Iacovos S.; Paquerault, Sophie; Myers, Kyle J.

    2007-03-01

    We developed an algorithm based on a rule-based threshold framework to segment the coronary arteries from angiographic computed tomography (CTA) data. Computerized segmentation of the coronary arteries is a challenging procedure due to the presence of diverse anatomical structures surrounding the heart in cardiac CTA data. The proposed algorithm incorporates various levels of image processing and organ information, including region, connectivity and morphology operations. It consists of three successive stages. The first stage involves the extraction of the three-dimensional scaffold of the heart envelope. This stage is semiautomatic, requiring a reader to review the CTA scans and manually select points along the heart envelope in individual slices. These points are further processed using a surface spline-fitting technique to automatically generate the heart envelope. The second stage consists of segmenting the left heart chambers and coronary arteries using grayscale threshold, size and connectivity criteria. This is followed by applying morphology operations to further detach the left and right coronary arteries from the aorta. In the final stage, the 3D vessel tree is reconstructed and labeled using an Isolated Connected Threshold technique. The algorithm was developed and tested on a patient coronary artery CTA that was graciously shared by the Department of Radiology of the Massachusetts General Hospital. The test showed that our method consistently segmented the vessels above 79% of the maximum gray-level and automatically extracted 55 of the 58 coronary segments that can be seen in the CTA scan by a reader. These results are an encouraging step toward our objective of generating high resolution models of the male and female heart that will subsequently be used as phantoms for medical imaging system optimization studies.

  6. Interactive experimenters' planning procedures and mission control

    NASA Technical Reports Server (NTRS)

    Desjardins, R. L.

    1973-01-01

    The computerized mission control and planning system routinely generates a 24-hour schedule in one hour of operator time by incorporating the time dimension into experiment planning procedures. Planning is validated interactively as it is generated, segment by segment, within the frame of specific event times. The planner simply points a light pen at the time mark of interest on the time line to enter specific event times into the schedule.

  7. Laboratory Preparation in the Ocular Therapy Curriculum.

    ERIC Educational Resources Information Center

    Cummings, Roger W.

    1986-01-01

    Aspects of laboratory preparation necessary for undergraduate or graduate optometric training in the use of therapeutic drugs are discussed, including glaucoma therapy, anterior segment techniques, posterior segment, and systemic procedures. (MSE)

  8. Preservation or Restoration of Segmental and Regional Spinal Lordosis Using Minimally Invasive Interbody Fusion Techniques in Degenerative Lumbar Conditions: A Literature Review.

    PubMed

    Uribe, Juan S; Myhre, Sue Lynn; Youssef, Jim A

    2016-04-01

    A literature review. The purpose of this study was to review lumbar segmental and regional alignment changes following treatment with a variety of minimally invasive surgery (MIS) interbody fusion procedures for short-segment, degenerative conditions. An increasing number of lumbar fusions are being performed with minimally invasive exposures, despite a perception that minimally invasive lumbar interbody fusion procedures are unable to affect segmental and regional lordosis. Through a MEDLINE and Google Scholar search, a total of 23 articles were identified that reported alignment following minimally invasive lumbar fusion for degenerative (nondeformity) lumbar spinal conditions to examine aggregate changes in postoperative alignment. Of the 23 studies identified, 28 study cohorts were included in the analysis. Procedural cohorts included MIS ALIF (two), extreme lateral interbody fusion (XLIF) (16), and MIS posterior/transforaminal lumbar interbody fusion (P/TLIF) (11). Across 19 study cohorts and 720 patients, the weighted average of lumbar lordosis preoperatively for all procedures was 43.5° (range 28.4°-52.5°) and increased 3.4° (9%) (range -2° to 7.4°) postoperatively (P < 0.001). Segmental lordosis increased, on average, by 4° from a weighted average of 8.3° preoperatively (range -0.8° to 15.8°) to 11.2° at postoperative time points (range -0.2° to 22.8°) (P < 0.001) in 1182 patients from 24 study cohorts. Simple linear regression revealed a significant relationship between preoperative lumbar lordosis and change in lumbar lordosis (r = 0.413; P = 0.003), wherein lower preoperative lumbar lordosis predicted a greater increase in postoperative lumbar lordosis. Significant gains in both weighted average lumbar lordosis and segmental lordosis were seen following MIS interbody fusion. None of the segmental lordosis cohorts and only two of the 19 lumbar lordosis cohorts showed decreases in lordosis postoperatively. 
These results suggest that MIS approaches are able to impact regional and local segmental alignment and that preoperative patient factors can impact the extent of correction gained (preserving vs. restoring alignment). Level of Evidence: 4.
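
    The simple linear regression reported above (preoperative lordosis vs. change in lordosis) corresponds to an ordinary least-squares fit with a Pearson correlation coefficient, e.g.:

```python
import numpy as np

def regression_line(x, y):
    """Least-squares slope/intercept and Pearson r, as used to relate
    preoperative lordosis to postoperative change."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope, intercept = np.polyfit(x, y, 1)
    r = np.corrcoef(x, y)[0, 1]
    return slope, intercept, r
```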

  9. Flight evaluation of two-segment approaches using area navigation guidance equipment

    NASA Technical Reports Server (NTRS)

    Schwind, G. K.; Morrison, J. A.; Nylen, W. E.; Anderson, E. B.

    1976-01-01

    A two-segment noise abatement approach procedure for use on DC-8-61 aircraft in air carrier service was developed and evaluated. The approach profile and procedures were developed in a flight simulator. Full guidance is provided throughout the approach by a Collins Radio Company three-dimensional area navigation (RNAV) system which was modified to provide the two-segment approach capabilities. Modifications to the basic RNAV software included safety protection logic considered necessary for an operationally acceptable two-segment system. With an aircraft out of revenue service, the system was refined and extensively flight tested, and the profile and procedures were evaluated by representatives of the airlines, airframe manufacturers, the Air Line Pilots Association, and the Federal Aviation Administration. The system was determined to be safe and operationally acceptable. It was then placed into scheduled airline service for an evaluation during which 180 approaches were flown by 48 airline pilots. The approach was determined to be compatible with the airline operational environment, although operation of the RNAV system in the existing terminal area air traffic control environment was difficult.

  10. Automated identification of the lung contours in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Nery, F.; Silvestre Silva, J.; Ferreira, N. C.; Caramelo, F. J.; Faustino, R.

    2013-03-01

    Positron Emission Tomography (PET) is a nuclear medicine imaging technique that permits three-dimensional analysis of physiological processes in vivo. One of the areas where PET has demonstrated its advantages is the staging of lung cancer, where it offers better sensitivity and specificity than other techniques such as CT. On the other hand, accurate segmentation, an important procedure for Computer Aided Diagnostics (CAD) and automated image analysis, is a challenging task given the low spatial resolution and the high noise that are intrinsic characteristics of PET images. This work presents an algorithm for the segmentation of lungs in PET images, to be used in CAD and group analysis in a large patient database. The lung boundaries are automatically extracted from a PET volume through the application of a marker-driven watershed segmentation procedure which is robust to noise. In order to test the effectiveness of the proposed method, we compared the segmentation results in several slices using our approach with the results obtained from manual delineation. The manual delineation was performed by nuclear medicine physicians using a software routine that we developed specifically for this task. To quantify the similarity between the contours obtained from the two methods, we used figures of merit based on both region and contour definitions. Results show that the performance of the algorithm was similar to that of human physicians. Additionally, we found that the algorithm-physician agreement is similar (statistically significant) to the inter-physician agreement.
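
    A marker-driven watershed can be sketched as a priority-flood from the markers: pixels are flooded in order of increasing intensity, and each takes the label of the basin that reaches it first. This minimal version omits the gradient computation and watershed-line bookkeeping of production implementations.

```python
import heapq
import numpy as np

def marker_watershed(img, markers):
    """Minimal marker-driven watershed on a 2D image. markers holds nonzero
    integer labels at the seed pixels; every pixel ends up with the label of
    the basin that floods it first."""
    h, w = img.shape
    labels = markers.copy()
    heap, counter = [], 0          # counter breaks ties deterministically
    for i in range(h):
        for j in range(w):
            if labels[i, j]:
                heapq.heappush(heap, (img[i, j], counter, i, j))
                counter += 1
    while heap:
        _, _, i, j = heapq.heappop(heap)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and labels[ni, nj] == 0:
                labels[ni, nj] = labels[i, j]
                heapq.heappush(heap, (img[ni, nj], counter, ni, nj))
                counter += 1
    return labels
```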

  11. A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images.

    PubMed

    Vázquez, David; Bernal, Jorge; Sánchez, F Javier; Fernández-Esparrach, Gloria; López, Antonio M; Romero, Adriana; Drozdzal, Michal; Courville, Aaron

    2017-01-01

    Colorectal cancer (CRC) is the third leading cause of cancer death worldwide. Currently, the standard approach to reducing CRC-related mortality is to perform regular screening in search of polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are the polyp miss rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing decision support systems (DSS) aiming to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset and taking advantage of advances in the semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCNs). We perform a comparative study to show that FCNs significantly outperform, without any further postprocessing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.

  12. Segmentation of multiple heart cavities in 3-D transesophageal ultrasound images.

    PubMed

    Haak, Alexander; Vegas-Sánchez-Ferrero, Gonzalo; Mulder, Harriët W; Ren, Ben; Kirişli, Hortense A; Metz, Coert; van Burken, Gerard; van Stralen, Marijn; Pluim, Josien P W; van der Steen, Antonius F W; van Walsum, Theo; Bosch, Johannes G

    2015-06-01

    Three-dimensional transesophageal echocardiography (TEE) is an excellent modality for real-time visualization of the heart and monitoring of interventions. To improve the usability of 3-D TEE for intervention monitoring and catheter guidance, automated segmentation is desired. However, 3-D TEE segmentation is still a challenging task due to the complex anatomy with multiple cavities, the limited TEE field of view, and typical ultrasound artifacts. We propose to segment all cavities within the TEE view with a multi-cavity active shape model (ASM) in conjunction with a tissue/blood classification based on a gamma mixture model (GMM). 3-D TEE image data of twenty patients were acquired with a Philips X7-2t matrix TEE probe. Tissue probability maps were estimated by a two-class (blood/tissue) GMM. A statistical shape model containing the left ventricle, right ventricle, left atrium, right atrium, and aorta was derived from computed tomography angiography (CTA) segmentations by principal component analysis. ASMs of the whole heart and individual cavities were generated and consecutively fitted to tissue probability maps. First, an average whole-heart model was aligned with the 3-D TEE based on three manually indicated anatomical landmarks. Second, pose and shape of the whole-heart ASM were fitted by a weighted update scheme excluding parts outside of the image sector. Third, pose and shape of ASM for individual heart cavities were initialized by the previous whole heart ASM and updated in a regularized manner to fit the tissue probability maps. The ASM segmentations were validated against manual outlines by two observers and CTA derived segmentations. Dice coefficients and point-to-surface distances were used to determine segmentation accuracy. ASM segmentations were successful in 19 of 20 cases. The median Dice coefficient for all successful segmentations versus the average observer ranged from 90% to 71% compared with an inter-observer range of 95% to 84%. 
The agreement against the CTA segmentations was slightly lower with a median Dice coefficient between 85% and 57%. In this work, we successfully showed the accuracy and robustness of the proposed multi-cavity segmentation scheme. This is a promising development for intraoperative procedure guidance, e.g., in cardiac electrophysiology.
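
    The statistical shape model at the heart of the ASM can be sketched as a PCA over aligned training shapes, with mode weights clamped to ±3 standard deviations during fitting (the regularised update mentioned above); this is a generic illustration, not the authors' pipeline.

```python
import numpy as np

def build_shape_model(shapes, n_modes=2):
    """PCA statistical shape model from aligned training shapes
    (rows = shapes, columns = stacked landmark coordinates)."""
    X = np.asarray(shapes, float)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = Vt[:n_modes]                 # principal variation modes
    var = (s ** 2) / (len(X) - 1)        # variance explained per mode
    return mean, modes, var[:n_modes]

def fit_shape(mean, modes, var, target, limit=3.0):
    """Project a target shape onto the model, clamping each mode weight to
    +/- limit standard deviations -- the regularised ASM update step."""
    b = modes @ (np.asarray(target, float) - mean)
    b = np.clip(b, -limit * np.sqrt(var), limit * np.sqrt(var))
    return mean + modes.T @ b
```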

  13. Automated detection of videotaped neonatal seizures based on motion segmentation methods.

    PubMed

    Karayiannis, Nicolaos B; Tao, Guozhi; Frost, James D; Wise, Merrill S; Hrachovy, Richard A; Mizrahi, Eli M

    2006-07-01

    This study was aimed at the development of a seizure detection system by training neural networks using quantitative motion information extracted by motion segmentation methods from short video recordings of infants monitored for seizures. The motion of the infants' body parts was quantified by temporal motion strength signals extracted from video recordings by motion segmentation methods based on optical flow computation. The area of each frame occupied by the infants' moving body parts was segmented by direct thresholding, by clustering of the pixel velocities, and by clustering the motion parameters obtained by fitting an affine model to the pixel velocities. The computational tools and procedures developed for automated seizure detection were tested and evaluated on 240 short video segments selected and labeled by physicians from a set of video recordings of 54 patients exhibiting myoclonic seizures (80 segments), focal clonic seizures (80 segments), and random infant movements (80 segments). The experimental study described in this paper provided the basis for selecting the most effective strategy for training neural networks to detect neonatal seizures as well as the decision scheme used for interpreting the responses of the trained neural networks. Depending on the decision scheme used for interpreting the responses of the trained neural networks, the best neural networks exhibited sensitivity above 90% or specificity above 90%. The best among the motion segmentation methods developed in this study produced quantitative features that constitute a reliable basis for detecting myoclonic and focal clonic neonatal seizures. The performance targets of this phase of the project may be achieved by combining the quantitative features described in this paper with those obtained by analyzing motion trajectory signals produced by motion tracking methods. A video system based upon automated analysis potentially offers a number of advantages. 
Infants who are at risk for seizures could be monitored continuously using relatively inexpensive and non-invasive video techniques that supplement direct observation by nursery personnel. This would represent a major advance in seizure surveillance and offers the possibility for earlier identification of potential neurological problems and subsequent intervention.
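    The affine motion model mentioned in the abstract (fitting pixel velocities to a parametric field before clustering the parameters) can be sketched with a plain least-squares fit. This is a generic illustration under our own variable names, not the authors' implementation:

    ```python
    import numpy as np

    def fit_affine_motion(xs, ys, us, vs):
        """Least-squares fit of a 6-parameter affine motion model
        u = a1 + a2*x + a3*y,  v = a4 + a5*x + a6*y
        to optical-flow vectors (us, vs) at pixel coordinates (xs, ys)."""
        A = np.column_stack([np.ones_like(xs), xs, ys])
        a_u, *_ = np.linalg.lstsq(A, us, rcond=None)
        a_v, *_ = np.linalg.lstsq(A, vs, rcond=None)
        return np.concatenate([a_u, a_v])  # (a1..a6)

    # Synthetic flow generated by a known affine field
    rng = np.random.default_rng(0)
    xs = rng.uniform(0, 100, 500)
    ys = rng.uniform(0, 100, 500)
    true = np.array([1.0, 0.02, -0.01, -0.5, 0.0, 0.03])
    us = true[0] + true[1] * xs + true[2] * ys
    vs = true[3] + true[4] * xs + true[5] * ys
    est = fit_affine_motion(xs, ys, us, vs)
    print(np.allclose(est, true))  # noise-free fit recovers the parameters
    ```

    In the paper's pipeline, the fitted parameter vectors (rather than raw velocities) would then be clustered to separate moving body parts from the background.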

  14. Atlas-based system for functional neurosurgery

    NASA Astrophysics Data System (ADS)

    Nowinski, Wieslaw L.; Yeo, Tseng T.; Yang, Guo L.; Dow, Douglas E.

    1997-05-01

    This paper addresses the development of an atlas-based system for preoperative functional neurosurgery planning and training, intraoperative support, and postoperative analysis. The system is based on the Atlas of Stereotaxy of the Human Brain by Schaltenbrand and Wahren, used for interactive segmentation and labeling of clinical data in 2D/3D and for assisting stereotactic targeting. The atlas microseries are digitized, enhanced, segmented, labeled, aligned, and organized into mutually preregistered atlas volumes; 3D models of the structures are also constructed. The atlas may be interactively registered with the actual patient's data. Several other features are also provided, including data reformatting, visualization, navigation, mensuration, and stereotactic path display and editing in 2D/3D. The system increases the accuracy of target definition and reduces both planning time and the duration of the procedure itself. It also constitutes a research platform for the construction of more advanced neurosurgery-support tools and brain atlases.

  15. Automatic media-adventitia IVUS image segmentation based on sparse representation framework and dynamic directional active contour model.

    PubMed

    Zakeri, Fahimeh Sadat; Setarehdan, Seyed Kamaledin; Norouzi, Somayye

    2017-10-01

    Segmentation of the arterial wall boundaries from intravascular ultrasound images is an important image processing task for quantifying arterial wall characteristics such as shape, area, thickness and eccentricity. Since manual segmentation of these boundaries is a laborious and time-consuming procedure, many researchers have attempted to develop (semi-)automatic segmentation techniques as a powerful tool for educational and clinical purposes, but as yet no clinically approved method is on the market. This paper presents a deterministic-statistical strategy for automatic media-adventitia border detection by a fourfold algorithm. First, a smoothed initial contour is extracted based on classification in the sparse representation framework combined with the dynamic directional convolution vector field. Next, an active contour model is utilized to propagate the initial contour toward the borders of interest. Finally, the extracted contour is refined in the leakage, side-branch opening and calcification regions based on the image texture patterns. The performance of the proposed algorithm is evaluated by comparing the results to borders manually traced by an expert on 312 different IVUS images obtained from four different patients. The statistical analysis of the results demonstrates the efficiency of the proposed method in media-adventitia border detection, with sufficient consistency in the leakage and calcification regions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Toward optimal feature and time segment selection by divergence method for EEG signals classification.

    PubMed

    Wang, Jie; Feng, Zuren; Lu, Na; Luo, Jing

    2018-06-01

    Feature selection plays an important role in motor imagery pattern classification based on EEG signals. It is a process that aims to select an optimal feature subset from the original set. Two significant advantages are: lowering the computational burden so as to speed up the learning procedure, and removing redundant and irrelevant features so as to improve the classification performance. Therefore, feature selection is widely employed in the classification of EEG signals in practical brain-computer interface systems. In this paper, we present a novel statistical model that selects the optimal feature subset based on the Kullback-Leibler divergence measure and automatically selects the optimal subject-specific time segment. The proposed method comprises four successive stages: broad frequency band filtering and common spatial pattern enhancement as preprocessing; feature extraction by autoregressive model and log-variance; Kullback-Leibler divergence based optimal feature and time segment selection; and linear discriminant analysis classification. More importantly, this paper provides a potential framework for combining other feature extraction models and classification algorithms with the proposed method for EEG signal classification. Experiments on single-trial EEG signals from two public competition datasets not only demonstrate that the proposed method is effective in selecting discriminative features and time segments, but also show that it yields relatively better classification results than other competitive methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
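    The abstract does not give the exact divergence criterion, but a common instantiation is to rank features by the symmetric Kullback-Leibler divergence between per-class Gaussian fits of each feature. A minimal sketch under that assumption (all names ours):

    ```python
    import numpy as np

    def gauss_kl(m1, s1, m2, s2):
        """KL divergence KL(N(m1, s1^2) || N(m2, s2^2)) for univariate Gaussians."""
        return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

    def rank_features(X1, X2):
        """Rank features by symmetric KL divergence between two classes.
        X1, X2: (trials x features) arrays, e.g. log-variance features per class."""
        m1, s1 = X1.mean(0), X1.std(0) + 1e-12
        m2, s2 = X2.mean(0), X2.std(0) + 1e-12
        d = gauss_kl(m1, s1, m2, s2) + gauss_kl(m2, s2, m1, s1)
        return np.argsort(d)[::-1]  # most discriminative first

    # Feature 0 separates the classes; feature 1 does not
    rng = np.random.default_rng(1)
    X1 = np.column_stack([rng.normal(0, 1, 200), rng.normal(0, 1, 200)])
    X2 = np.column_stack([rng.normal(3, 1, 200), rng.normal(0, 1, 200)])
    print(rank_features(X1, X2)[0])  # → 0
    ```

    The same score, computed per candidate time window, could drive the subject-specific time segment selection the abstract describes.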

  17. Reverberation Chamber Uniformity Validation and Radiated Susceptibility Test Procedures for the NASA High Intensity Radiated Fields Laboratory

    NASA Technical Reports Server (NTRS)

    Koppen, Sandra V.; Nguyen, Truong X.; Mielnik, John J.

    2010-01-01

    The NASA Langley Research Center's High Intensity Radiated Fields Laboratory has developed a capability based on the RTCA/DO-160F Section 20 guidelines for radiated electromagnetic susceptibility testing in reverberation chambers. Phase 1 of the test procedure utilizes mode-tuned stirrer techniques and E-field probe measurements to validate chamber uniformity, determine chamber loading effects, and define a radiated susceptibility test process. The test procedure is segmented into numbered operations that are largely software controlled. This document is intended as a laboratory test reference and includes diagrams of test setups, equipment lists, and test results and analysis. Phase 2 of development is discussed.

  18. Revised Methods for Characterizing Stream Habitat in the National Water-Quality Assessment Program

    USGS Publications Warehouse

    Fitzpatrick, Faith A.; Waite, Ian R.; D'Arconte, Patricia J.; Meador, Michael R.; Maupin, Molly A.; Gurtz, Martin E.

    1998-01-01

    Stream habitat is characterized in the U.S. Geological Survey's National Water-Quality Assessment (NAWQA) Program as part of an integrated physical, chemical, and biological assessment of the Nation's water quality. The goal of stream habitat characterization is to relate habitat to other physical, chemical, and biological factors that describe water-quality conditions. To accomplish this goal, environmental settings are described at sites selected for water-quality assessment. In addition, spatial and temporal patterns in habitat are examined at local, regional, and national scales. This habitat protocol contains updated methods for evaluating habitat in NAWQA Study Units. Revisions are based on lessons learned after 6 years of applying the original NAWQA habitat protocol to NAWQA Study Unit ecological surveys. Similar to the original protocol, these revised methods for evaluating stream habitat are based on a spatially hierarchical framework that incorporates habitat data at basin, segment, reach, and microhabitat scales. This framework provides a basis for national consistency in collection techniques while allowing flexibility in habitat assessment within individual Study Units. Procedures are described for collecting habitat data at basin and segment scales; these procedures include use of geographic information system databases, topographic maps, and aerial photographs. Data collected at the reach scale include channel, bank, and riparian characteristics.

  19. Computer aided diagnosis and treatment planning for developmental dysplasia of the hip

    NASA Astrophysics Data System (ADS)

    Li, Bin; Lu, Hongbing; Cai, Wenli; Li, Xiang; Meng, Jie; Liang, Zhengrong

    2005-04-01

    Developmental dysplasia of the hip (DDH) is a congenital malformation in which the proximal femur and acetabulum are subluxatable, dislocatable, or dislocated. Early diagnosis and treatment are important because failure to diagnose and improper treatment can result in significant morbidity. In this paper, we designed and implemented a computer-aided system for the diagnosis and treatment planning of this disease. In this design, the patient first receives a CT (computed tomography) or MRI (magnetic resonance imaging) scan. A mixture-based partial-volume (PV) algorithm was applied to perform bone segmentation on the CT image, followed by three-dimensional (3D) reconstruction and display of the segmented image, demonstrating the spatial relationship between the acetabulum and femurs for visual judgment. Several standard procedures, such as the Salter procedure, Pemberton procedure and femoral shortening osteotomy, were simulated on the screen to rehearse a virtual treatment plan. Quantitative measurements of the Acetabular Index (AI) and Femoral Neck Anteversion (FNA) were performed on the 3D image for evaluation of DDH and treatment plans. The PC graphics-card GPU architecture was exploited to accelerate 3D rendering and geometric manipulation. The prototype system was implemented in a PC/Windows environment and is currently under clinical trial on patient datasets.

  20. Efficiency Benefits Using the Terminal Area Precision Scheduling and Spacing System

    NASA Technical Reports Server (NTRS)

    Thipphavong, Jane; Swenson, Harry N.; Lin, Paul; Seo, Anthony Y.; Bagasol, Leonard N.

    2011-01-01

    NASA has developed a capability for terminal area precision scheduling and spacing (TAPSS) to increase the use of fuel-efficient arrival procedures during periods of traffic congestion at a high-density airport. Sustained use of fuel-efficient procedures throughout the entire arrival phase of flight reduces overall fuel burn, greenhouse gas emissions and noise pollution. The TAPSS system is a 4D trajectory-based strategic planning and control tool that computes schedules and sequences for arrivals to facilitate optimal profile descents. This paper focuses on quantifying the efficiency benefits associated with using the TAPSS system, measured by reduction of level segments during aircraft descent and flight distance and time savings. The TAPSS system was tested in a series of human-in-the-loop simulations and compared to current procedures. Compared to the current use of the TMA system, simulation results indicate a reduction of total level segment distance by 50% and flight distance and time savings by 7% in the arrival portion of flight (200 nm from the airport). The TAPSS system resulted in aircraft maintaining continuous descent operations longer and with more precision, both achieved under heavy traffic demand levels.

  1. Prostate segmentation by feature enhancement using domain knowledge and adaptive region based operations

    NASA Astrophysics Data System (ADS)

    Nanayakkara, Nuwan D.; Samarabandu, Jagath; Fenster, Aaron

    2006-04-01

    Estimation of prostate location and volume is essential in determining a dose plan for ultrasound-guided brachytherapy, a common prostate cancer treatment. However, manual segmentation is difficult, time consuming and prone to variability. In this paper, we present a semi-automatic discrete dynamic contour (DDC) model based image segmentation algorithm, which effectively combines a multi-resolution model refinement procedure together with the domain knowledge of the image class. The segmentation begins on a low-resolution image by defining a closed DDC model by the user. This contour model is then deformed progressively towards higher resolution images. We use a combination of a domain knowledge based fuzzy inference system (FIS) and a set of adaptive region based operators to enhance the edges of interest and to govern the model refinement using a DDC model. The automatic vertex relocation process, embedded into the algorithm, relocates deviated contour points back onto the actual prostate boundary, eliminating the need of user interaction after initialization. The accuracy of the prostate boundary produced by the proposed algorithm was evaluated by comparing it with a manually outlined contour by an expert observer. We used this algorithm to segment the prostate boundary in 114 2D transrectal ultrasound (TRUS) images of six patients scheduled for brachytherapy. The mean distance between the contours produced by the proposed algorithm and the manual outlines was 2.70 ± 0.51 pixels (0.54 ± 0.10 mm). We also showed that the algorithm is insensitive to variations of the initial model and parameter values, thus increasing the accuracy and reproducibility of the resulting boundaries in the presence of noise and artefacts.

  2. Iterative normalization method for improved prostate cancer localization with multispectral magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Samil Yetik, Imam

    2012-04-01

    Use of multispectral magnetic resonance imaging has received great interest for prostate cancer localization in research and clinical studies. Manual extraction of prostate tumors from multispectral magnetic resonance imaging is inefficient and subjective, while automated segmentation is objective and reproducible. Supervised, automated segmentation approaches rely on learning to obtain information from a training dataset. However, in this procedure, all patients are assumed to have similar properties for the tumor and normal tissues, and segmentation performance suffers because variations across patients are ignored. To overcome this difficulty, we propose a new iterative normalization method based on relative intensity values of tumor and normal tissues to normalize multispectral magnetic resonance images and improve segmentation performance. The idea of relative intensity mimics manual segmentation performed by human readers, who compare the contrast between regions without knowing the actual intensity values. We compare the segmentation performance of the proposed method with that of z-score normalization followed by support vector machine, local active contours, and fuzzy Markov random field. Our experimental results demonstrate that our method outperforms the three other state-of-the-art algorithms, with a specificity of 0.73, sensitivity of 0.69, and accuracy of 0.79, significantly better than the alternative methods.
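    A minimal sketch of the relative-intensity idea, assuming a simple linear rescaling that maps the current tumor/normal mean intensities onto fixed reference values and alternates with re-segmentation. The function names and the toy threshold "segmenter" are ours, not the paper's:

    ```python
    import numpy as np

    def relative_intensity_normalize(img, tumor_mask, normal_mask,
                                     ref_tumor=1.0, ref_normal=0.0):
        """Linearly rescale intensities so that the current estimates of
        mean tumor and mean normal-tissue intensity map onto fixed reference
        values, mimicking a reader who judges contrast between regions."""
        mt = img[tumor_mask].mean()
        mn = img[normal_mask].mean()
        scale = (ref_tumor - ref_normal) / (mt - mn)
        return (img - mn) * scale + ref_normal

    def iterative_normalize(img, init_tumor, init_normal, segment, n_iter=5):
        """Alternate normalization and re-segmentation until labels stabilize.
        `segment` is any routine returning (tumor_mask, normal_mask)."""
        tumor, normal = init_tumor, init_normal
        for _ in range(n_iter):
            img_n = relative_intensity_normalize(img, tumor, normal)
            tumor, normal = segment(img_n)
        return img_n, tumor

    # Toy example: threshold-based "segmenter" on a synthetic 1-D image
    img = np.concatenate([np.full(50, 40.0), np.full(50, 90.0)])  # normal, tumor
    segment = lambda x: (x > 0.5, x <= 0.5)
    img_n, tumor = iterative_normalize(img, img > 60, img <= 60, segment)
    print(img_n.min(), img_n.max())  # normal maps to 0.0, tumor to 1.0
    ```

    In the paper, the re-segmentation step would be the actual supervised classifier rather than a fixed threshold; the point of the loop is that normalization and segmentation improve each other.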

  3. Automated segmentation of the atrial region and fossa ovalis towards computer-aided planning of inter-atrial wall interventions.

    PubMed

    Morais, Pedro; Vilaça, João L; Queirós, Sandro; Marchi, Alberto; Bourier, Felix; Deisenhofer, Isabel; D'hooge, Jan; Tavares, João Manuel R S

    2018-07-01

    Image-fusion strategies have been applied to improve inter-atrial septal (IAS) wall minimally-invasive interventions. To this end, several landmarks are initially identified on richly-detailed datasets throughout the planning stage and then combined with intra-operative images, enhancing the relevant structures and easing the procedure. Nevertheless, such planning is still performed manually, which is time-consuming and not necessarily reproducible, hampering its regular application. In this article, we present a novel automatic strategy to segment the atrial region (left/right atrium and aortic tract) and the fossa ovalis (FO). The method starts by initializing multiple 3D contours based on an atlas-based approach with global transforms only and refining them to the desired anatomy using a competitive segmentation strategy. The obtained contours are then applied to estimate the FO by evaluating both IAS wall thickness and the expected FO spatial location. The proposed method was evaluated in 41 computed tomography datasets, by comparing the atrial region segmentation and FO estimation results against manually delineated contours. The automatic segmentation method presented a performance similar to the state-of-the-art techniques and high feasibility, failing only in the segmentation of one aortic tract and of one right atrium. The FO estimation method presented an acceptable result in all patients, with a performance comparable to the inter-observer variability. Moreover, it was faster and fully free of user interaction. Hence, the proposed method proved feasible for automatically segmenting the anatomical models for the planning of IAS wall interventions, making it exceptionally attractive for use in clinical practice. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Use of Binary Partition Tree and energy minimization for object-based classification of urban land cover

    NASA Astrophysics Data System (ADS)

    Li, Mengmeng; Bijker, Wietske; Stein, Alfred

    2015-04-01

    Two main challenges are faced when classifying urban land cover from very high resolution satellite images: obtaining an optimal image segmentation and distinguishing buildings from other man-made objects. For optimal segmentation, this work proposes a hierarchical representation of an image by means of a Binary Partition Tree (BPT) and an unsupervised evaluation of image segmentations by energy minimization. For building extraction, we apply fuzzy sets to create a fuzzy landscape of shadows, which in turn involves a two-step procedure. The first step is a preliminary image classification at a fine segmentation level to generate vegetation and shadow information. The second step models the directional relationship between building and shadow objects to extract building information at the optimal segmentation level. We conducted the experiments on two datasets of Pléiades images from Wuhan City, China. To demonstrate its performance, the proposed classification is compared at the optimal segmentation level with Maximum Likelihood Classification and Support Vector Machine classification. The results show that the proposed classification produced the highest overall accuracies and kappa coefficients, and the smallest over-classification and under-classification geometric errors. We conclude first that integrating BPT with energy minimization offers an effective means for image segmentation. Second, we conclude that the directional relationship between building and shadow objects represented by a fuzzy landscape is important for building extraction.
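    A Binary Partition Tree records a hierarchy of region merges. The sketch below builds one over a 1-D sequence by repeatedly merging the most similar adjacent regions; the mean-difference dissimilarity and all names are a toy illustration of our choosing, not the paper's region model:

    ```python
    def build_bpt(values):
        """Build a Binary Partition Tree over a 1-D sequence of region values.
        At each step the two most similar adjacent regions are merged; each
        merge becomes an internal tree node. Dissimilarity = |mean difference|."""
        # each region: (member indices, mean value, node id)
        regions = [([i], v, i) for i, v in enumerate(values)]
        next_id = len(values)
        merges = []
        while len(regions) > 1:
            # find the most similar adjacent pair
            k = min(range(len(regions) - 1),
                    key=lambda j: abs(regions[j][1] - regions[j + 1][1]))
            a, b = regions[k], regions[k + 1]
            members = a[0] + b[0]
            mean = (a[1] * len(a[0]) + b[1] * len(b[0])) / len(members)
            merges.append((next_id, a[2], b[2]))      # (parent, left, right)
            regions[k:k + 2] = [(members, mean, next_id)]
            next_id += 1
        return merges  # the merge order encodes the tree bottom-up

    # Two dark and two bright regions: each pair merges before the final root
    print(build_bpt([0.1, 0.12, 0.9, 0.88]))
    ```

    An "optimal segmentation" is then a cut through this tree; the paper selects the cut by minimizing an unsupervised energy rather than at a fixed merge level.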

  5. Semi-automatic medical image segmentation with adaptive local statistics in Conditional Random Fields framework.

    PubMed

    Hu, Yu-Chi J; Grossberg, Michael D; Mageras, Gikas S

    2008-01-01

    Planning radiotherapy and surgical procedures usually requires onerous manual segmentation of anatomical structures from medical images. In this paper we present a semi-automatic and accurate segmentation method that dramatically reduces the time and effort required of expert users. This is accomplished by giving the user an intuitive graphical interface for indicating samples of target and non-target tissue by loosely drawing a few brush strokes on the image. We use these brush strokes to provide the statistical input for a Conditional Random Field (CRF) based segmentation. Since we extract purely statistical information from the user input, we eliminate the need for assumptions on boundary contrast previously used by many other methods. A new feature of our method is that the statistics from one image can be reused on related images without registration. To demonstrate this, we show that boundary statistics provided on a few 2D slices of volumetric medical data can be propagated through the entire 3D stack of images without using geometric correspondence between images. In addition, the image segmentation from the CRF can be formulated as a minimum s-t graph cut problem, which has a solution that is both globally optimal and fast. The combination of fast segmentation and minimal, reusable user input makes this a powerful technique for the segmentation of medical images.
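    The minimum s-t graph cut formulation can be illustrated on a toy 1-D signal: unary costs become capacities on terminal edges, the pairwise smoothness term becomes a capacity between neighbors, and the minimum cut gives the globally optimal binary labeling. This sketch uses a plain Edmonds-Karp max-flow and squared-distance unaries of our own choosing, not the paper's brush-stroke statistics:

    ```python
    from collections import deque, defaultdict

    def max_flow_min_cut(cap, s, t):
        """Edmonds-Karp max-flow; returns the set of nodes on the source side
        of the minimum s-t cut. `cap` maps (u, v) -> capacity."""
        adj = defaultdict(list)
        for u, v in list(cap):
            adj[u].append(v)
            adj[v].append(u)
            cap.setdefault((v, u), 0)   # ensure reverse edges exist
        flow = defaultdict(int)
        while True:
            parent = {s: None}
            q = deque([s])
            while q and t not in parent:          # BFS for a shortest augmenting path
                u = q.popleft()
                for v in adj[u]:
                    if v not in parent and cap[(u, v)] - flow[(u, v)] > 0:
                        parent[v] = u
                        q.append(v)
            if t not in parent:
                break
            path, v = [], t                       # recover the path and augment
            while parent[v] is not None:
                path.append((parent[v], v)); v = parent[v]
            bottleneck = min(cap[e] - flow[e] for e in path)
            for e in path:
                flow[e] += bottleneck
                flow[(e[1], e[0])] -= bottleneck
        return set(parent)  # residual-reachable nodes = source side of the cut

    def segment_1d(intensities, mu_fg, mu_bg, lam):
        """Binary MAP labeling of a 1-D signal: unary = squared distance to the
        class mean, pairwise = Potts penalty `lam` between neighbors."""
        n = len(intensities)
        cap = {}
        for i, x in enumerate(intensities):
            cap[('s', i)] = (x - mu_bg) ** 2   # paid if i is labeled background
            cap[(i, 't')] = (x - mu_fg) ** 2   # paid if i is labeled foreground
            if i + 1 < n:
                cap[(i, i + 1)] = lam
                cap[(i + 1, i)] = lam
        fg_side = max_flow_min_cut(cap, 's', 't')
        return [1 if i in fg_side else 0 for i in range(n)]

    print(segment_1d([0.1, 0.2, 0.9, 0.8, 0.85], mu_fg=1.0, mu_bg=0.0, lam=0.2))
    ```

    Because the pairwise term is submodular, the cut value equals the labeling energy, so the result is the global optimum rather than a local one.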

  6. Small amounts of tissue preserve pancreatic function: Long-term follow-up study of middle-segment preserving pancreatectomy.

    PubMed

    Lu, Zipeng; Yin, Jie; Wei, Jishu; Dai, Cuncai; Wu, Junli; Gao, Wentao; Xu, Qing; Dai, Hao; Li, Qiang; Guo, Feng; Chen, Jianmin; Xi, Chunhua; Wu, Pengfei; Zhang, Kai; Jiang, Kuirong; Miao, Yi

    2016-11-01

    Middle-segment preserving pancreatectomy (MPP) is a novel procedure for treating multifocal lesions of the pancreas while preserving pancreatic function. However, long-term pancreatic function after this procedure remains unclear. The aims of the current study are to investigate short- and long-term outcomes, especially long-term pancreatic endocrine function, after MPP. From September 2011 to December 2015, 7 patients underwent MPP in our institution, and 5 cases with long-term outcomes were further analyzed in a retrospective manner. Percentage of tissue preservation was calculated using computed tomography volumetry. Serum insulin and C-peptide levels after oral glucose challenge were evaluated in 5 patients. Beta-cell secreting function, including modified homeostasis model assessment of beta-cell function (HOMA2-beta), area under the curve (AUC) for C-peptide, and C-peptide index, was evaluated and compared with that after pancreaticoduodenectomy (PD) and total pancreatectomy. Exocrine function was assessed based on questionnaires. Our case series included 3 women and 2 men, with a median age of 50 (37-81) years. Four patients underwent pylorus-preserving PD together with distal pancreatectomy (DP), including 1 with the spleen preserved. The remaining patient underwent the Beger procedure and spleen-preserving DP. Median operation time and estimated intraoperative blood loss were 330 (250-615) min and 800 (400-5500) mL, respectively. Histological examination revealed 3 cases of metastatic lesions to the pancreas, 1 case of chronic pancreatitis, and 1 neuroendocrine tumor. Major postoperative complications included 3 cases of delayed gastric emptying and 2 cases of postoperative pancreatic fistula. Imaging studies showed that segments representing 18.2% to 39.5% of the pancreas with good blood supply had been preserved. 
With a median 35.0 months of follow-up on pancreatic function, only 1 of the 4 preoperatively euglycemic patients developed new-onset diabetes mellitus. Beta-cell function parameters in this group of patients were quite comparable to those after the Whipple procedure, and seemed better than those after total pancreatectomy. No symptoms of hypoglycemia were identified in any patient, although half of the patients reported symptoms of exocrine insufficiency. In conclusion, MPP is a feasible and effective procedure for middle-segment-sparing multicentric lesions in the pancreas, and patients exhibit satisfactory endocrine function after surgery.

  7. Segmented Domain Decomposition Multigrid For 3-D Turbomachinery Flows

    NASA Technical Reports Server (NTRS)

    Celestina, M. L.; Adamczyk, J. J.; Rubin, S. G.

    2001-01-01

    A Segmented Domain Decomposition Multigrid (SDDMG) procedure was developed for three-dimensional viscous flow problems as they apply to turbomachinery flows. The procedure divides the computational domain into a coarse mesh comprised of uniformly spaced cells. To resolve smaller length scales such as the viscous layer near a surface, segments of the coarse mesh are subdivided into a finer mesh. This is repeated until adequate resolution of the smallest relevant length scale is obtained. Multigrid is used to communicate information between the different grid levels. To test the procedure, simulation results will be presented for a compressor and turbine cascade. These simulations are intended to show the ability of the present method to generate grid independent solutions. Comparisons with data will also be presented. These comparisons will further demonstrate the usefulness of the present work for they allow an estimate of the accuracy of the flow modeling equations independent of error attributed to numerical discretization.

  8. Lordosis Re-Creation in TLIF and PLIF: A Cadaveric Study of the Influence of Surgical Bone Resection and Cage Angle.

    PubMed

    Robertson, Peter A; Armstrong, William A; Woods, Daniel L; Rawlinson, Jeremy J

    2018-04-24

    Controlled cadaveric study of surgical technique in transforaminal and posterior lumbar interbody fusion (TLIF and PLIF). OBJECTIVE: To evaluate the contribution of surgical techniques and cage variables to lordosis re-creation in posterior interbody fusion (TLIF/PLIF). The major contributors to lumbar lordosis are the lordotic lower lumbar discs. The pathologies requiring treatment with segmental fusion are frequently hypolordotic or kyphotic. Current posterior-based interbody techniques have a poor track record for recreating lordosis, although re-creation of lordosis with optimum anatomical alignment is associated with better outcomes and reduced adjacent-segment change needing revision. It is unclear whether surgical techniques or cage parameters contribute significantly to lordosis re-creation. Eight instrumented cadaveric motion segments were evaluated with pre- and post-experimental radiological assessment of lordosis. Each motion segment was instrumented with pedicle screw fixation to allow segmental stabilization. The surgical procedures were unilateral TLIF with an 18° lordotic, 27 mm length cage; unilateral TLIF (18°, 27 mm) with bilateral facetectomy; unilateral TLIF (18°, 27 mm) with posterior column osteotomy; PLIF with bilateral cages (18°, 22 mm); and PLIF with bilateral cages (24°, 22 mm). Cage insertion used an 'insert and rotate' technique. Pooled results demonstrated a mean increase in lordosis of 2.2° with each procedural step (lordosis increase was serially 1.8°, 3.5°, 1.6°, 2.5° and 1.6° through the procedures). TLIF and PLIF with posterior column osteotomy increased lordosis significantly compared with unilateral TLIF and TLIF with bilateral facetectomy. The major contributors to lordosis re-creation were posterior column osteotomy, and PLIF with paired shorter cages rather than TLIF. 
This study demonstrates that the surgical approach to posterior interbody surgery influences lordosis gain, and that posterior column osteotomy optimizes lordosis gain in TLIF. The bilateral cages used in PLIF are shorter and associated with further gain in lordosis. This information has the potential to aid surgical planning when attempting to recreate lordosis to optimize outcomes.

  9. Software and Algorithms for Biomedical Image Data Processing and Visualization

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Lambert, James; Lam, Raymond

    2004-01-01

    A new software equipped with novel image processing algorithms and graphical-user-interface (GUI) tools has been designed for automated analysis and processing of large amounts of biomedical image data. The software, called PlaqTrak, has been specifically used for analysis of plaque on teeth of patients. New algorithms have been developed and implemented to segment teeth of interest from surrounding gum, and a real-time image-based morphing procedure is used to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The PlaqTrak system integrates these components into a single software suite with an easy-to-use GUI (see Figure 1) that allows users to do an end-to-end run of a patient's record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image. The automated and accurate processing of the captured images to segment each tooth [see Figure 2(a)] and then detect plaque on a tooth-by-tooth basis is a critical component of the PlaqTrak system to do clinical trials and analysis with minimal human intervention. These features offer distinct advantages over other competing systems that analyze groups of teeth or synthetic teeth. PlaqTrak divides each segmented tooth into eight regions using an advanced graphics morphing procedure [see results on a chipped tooth in Figure 2(b)], and a pattern recognition classifier is then used to locate plaque [red regions in Figure 2(d)] and enamel regions. The morphing allows analysis within regions of teeth, thereby facilitating detailed statistical analysis such as the amount of plaque present on the biting surfaces on teeth. 
This software system is applicable to a host of biomedical applications, such as cell analysis and life detection, or robotic applications, such as product inspection or assembly of parts in space and industry.

  10. Focal liver lesions segmentation and classification in nonenhanced T2-weighted MRI.

    PubMed

    Gatos, Ilias; Tsantis, Stavros; Karamesini, Maria; Spiliopoulos, Stavros; Karnabatidis, Dimitris; Hazle, John D; Kagadis, George C

    2017-07-01

    To automatically segment and classify focal liver lesions (FLLs) on nonenhanced T2-weighted magnetic resonance imaging (MRI) scans using a computer-aided diagnosis (CAD) algorithm. 71 FLLs (30 benign lesions, 19 hepatocellular carcinomas, and 22 metastases) on T2-weighted MRI scans were delineated by the proposed CAD scheme. The FLL segmentation procedure involved wavelet multiscale analysis to extract accurate edge information and mean intensity values for consecutive edges computed using horizontal and vertical analysis, which were fed into the subsequent fuzzy C-means algorithm for final FLL border extraction. Texture information for each extracted lesion was derived using 42 first- and second-order textural features from grayscale value histogram, co-occurrence, and run-length matrices. Twelve morphological features were also extracted to capture any shape differentiation between classes. Feature selection was performed with stepwise multilinear regression analysis that led to a reduced feature subset. A multiclass Probabilistic Neural Network (PNN) classifier was then designed and used for lesion classification. PNN model evaluation was performed using the leave-one-out (LOO) method and receiver operating characteristic (ROC) curve analysis. The mean overlap between the automatically segmented FLLs and the manual segmentations performed by radiologists was 0.91 ± 0.12. The highest classification accuracies in the PNN model for the benign, hepatocellular carcinoma, and metastatic FLLs were 94.1%, 91.4%, and 94.1%, respectively, with sensitivity/specificity values of 90%/97.3%, 89.5%/92.2%, and 90.9%/95.6%, respectively. The overall classification accuracy for the proposed system was 90.1%. Our diagnostic system using sophisticated FLL segmentation and classification algorithms is a powerful tool for routine clinical MRI-based liver evaluation and can be a supplement to contrast-enhanced MRI to prevent unnecessary invasive procedures. 
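    The fuzzy C-means step used for the final border extraction can be sketched in its textbook form, alternating membership and center updates; the 1-D toy data and parameter values below are ours, not the paper's:

    ```python
    import numpy as np

    def fuzzy_cmeans(x, n_clusters=2, m=2.0, n_iter=100, seed=0):
        """Plain fuzzy C-means on a 1-D feature vector.
        Returns cluster centers and the (n_points x n_clusters) memberships."""
        rng = np.random.default_rng(seed)
        u = rng.random((len(x), n_clusters))
        u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1
        for _ in range(n_iter):
            w = u ** m
            centers = (w.T @ x) / w.sum(axis=0)    # membership-weighted means
            d = np.abs(x[:, None] - centers[None, :]) + 1e-12
            u = 1.0 / (d ** (2 / (m - 1)))         # standard FCM update
            u /= u.sum(axis=1, keepdims=True)
        return centers, u

    # Two well-separated intensity populations (e.g. lesion vs background edges)
    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(0.2, 0.02, 100), rng.normal(0.8, 0.02, 100)])
    centers, u = fuzzy_cmeans(x)
    print(np.round(np.sort(centers), 1))  # → [0.2 0.8]
    ```

    Unlike hard k-means, each point keeps a graded membership in every cluster, which is what lets ambiguous edge intensities contribute to both candidate borders before the final assignment.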
© 2017 American Association of Physicists in Medicine.

  11. The introduction of capillary structures in 4D simulated vascular tree for ART 3.5D algorithm further validation

    NASA Astrophysics Data System (ADS)

    Barra, Beatrice; El Hadji, Sara; De Momi, Elena; Ferrigno, Giancarlo; Cardinale, Francesco; Baselli, Giuseppe

    2017-03-01

    Several neurosurgical procedures, such as arteriovenous malformation (AVM) and aneurysm embolizations and StereoElectroEncephaloGraphy (SEEG), require accurate reconstruction of the cerebral vascular tree, as well as the classification of arteries and veins, in order to increase the safety of the intervention. Segmentation of arteries and veins from 4D CT perfusion scans has already been proposed in different studies. Nonetheless, such procedures require long acquisition protocols, and the radiation dose given to the patient is not negligible. Hence, space is open to approaches attempting to recover the dynamic information from standard Contrast Enhanced Cone Beam Computed Tomography (CE-CBCT) scans. The algorithm proposed by our team, called ART 3.5D, is a novel algorithm based on the postprocessing of both the angiogram and the raw data of a standard Digital Subtraction Angiography from a CBCT (DSACBCT), allowing artery and vein segmentation and labeling without requiring any additional radiation exposure for the patient or lowering the resolution. In addition, while previous versions of the algorithm considered only the distinction of arteries and veins, here capillary-phase simulation and identification are introduced, providing further information useful for more precise vasculature segmentation.

  12. Site conditions related to erosion on logging roads

    Treesearch

    R. M. Rice; J. D. McCashion

    1985-01-01

    Synopsis - Data collected from 299 road segments in northwestern California were used to develop and test a procedure for estimating and managing road-related erosion. Site conditions and the design of each segment were described by 30 variables. Equations developed using 149 of the road segments were tested on the other 150. The best multiple regression equation...

  13. Infant Word Segmentation Revisited: Edge Alignment Facilitates Target Extraction

    ERIC Educational Resources Information Center

    Seidl, Amanda; Johnson, Elizabeth K.

    2006-01-01

    In a landmark study, Jusczyk and Aslin (1995) demonstrated that English-learning infants are able to segment words from continuous speech at 7.5 months of age. In the current study, we explored the possibility that infants segment words from the edges of utterances more readily than from the middle of utterances. The same procedure was used as in…

  14. An Automatic Method for Geometric Segmentation of Masonry Arch Bridges for Structural Engineering Purposes

    NASA Astrophysics Data System (ADS)

    Riveiro, B.; DeJong, M.; Conde, B.

    2016-06-01

    Despite the tremendous advantages of laser scanning technology for the geometric characterization of built constructions, there are important limitations preventing more widespread implementation in the structural engineering domain. Even though the technology provides extensive and accurate information to perform structural assessment and health monitoring, many people are resistant to the technology due to the processing times involved. Thus, new methods that can automatically process LiDAR data and subsequently provide an automatic and organized interpretation are required. This paper presents a new method for fully automated point cloud segmentation of masonry arch bridges. The method efficiently creates segmented, spatially related and organized point clouds, each of which contains the relevant geometric data for a particular component (pier, arch, spandrel wall, etc.) of the structure. The segmentation procedure comprises a heuristic approach for the separation of different vertical walls, after which image processing tools adapted to voxel structures allow the efficient segmentation of the main structural elements of the bridge. The proposed methodology provides the essential processed data required for structural assessment of masonry arch bridges based on geometric anomalies. The method is validated using a representative sample of masonry arch bridges in Spain.

  15. Landmark-guided diffeomorphic demons algorithm and its application to automatic segmentation of the whole spine and pelvis in CT images.

    PubMed

    Hanaoka, Shouhei; Masutani, Yoshitaka; Nemoto, Mitsutaka; Nomura, Yukihiro; Miki, Soichiro; Yoshikawa, Takeharu; Hayashi, Naoto; Ohtomo, Kuni; Shimizu, Akinobu

    2017-03-01

    A fully automatic multiatlas-based method for segmentation of the spine and pelvis in a torso CT volume is proposed. A novel landmark-guided diffeomorphic demons algorithm is used to register a given CT image to multiple atlas volumes. This algorithm can utilize both grayscale image information and given landmark coordinate information optimally. The segmentation has four steps. Firstly, 170 bony landmarks are detected in the given volume. Using these landmark positions, an atlas selection procedure is performed to reduce the computational cost of the following registration. Then the chosen atlas volumes are registered to the given CT image. Finally, voxelwise label voting is performed to determine the final segmentation result. The proposed method was evaluated using 50 torso CT datasets as well as the public SpineWeb dataset. As a result, a mean distance error of [Formula: see text] and a mean Dice coefficient of [Formula: see text] were achieved for the whole spine and the pelvic bones, which are competitive with other state-of-the-art methods. From the experimental results, the usefulness of the proposed segmentation method was validated.
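
The final voxelwise label voting step is a standard multiatlas technique; a minimal sketch, assuming integer label maps already registered to the target space:

```python
import numpy as np

def majority_vote(label_maps):
    """Voxelwise majority voting across registered atlas label maps.

    label_maps: list of integer arrays of identical shape.
    Returns an array of the same shape holding the most frequent
    label at each voxel (ties resolved in favor of the lowest label).
    """
    stack = np.stack(label_maps)                      # (n_atlases, ...)
    labels = np.unique(stack)
    # Count votes for each candidate label at every voxel.
    votes = np.stack([(stack == l).sum(axis=0) for l in labels])
    return labels[np.argmax(votes, axis=0)]
```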

  16. An Integrated Approach to Segmentation and Nonrigid Registration for Application in Image-Guided Pelvic Radiotherapy

    PubMed Central

    Lu, Chao; Chelikani, Sudhakar; Papademetris, Xenophon; Knisely, Jonathan P.; Milosevic, Michael F.; Chen, Zhe; Jaffray, David A.; Staib, Lawrence H.; Duncan, James S.

    2011-01-01

    External beam radiotherapy (EBRT) has become the preferred option for non-surgical treatment of prostate cancer and cervical cancer. In order to deliver higher doses to cancerous regions within these pelvic structures (i.e., prostate or cervix) while maintaining or lowering the doses to surrounding non-cancerous regions, it is critical to account for setup variation, organ motion, anatomical changes due to treatment and intra-fraction motion. In previous work, manual segmentation of the soft tissues was performed and the images were then registered based on the manual segmentation. In this paper, we present an integrated automatic approach to multiple organ segmentation and nonrigid constrained registration, which can achieve these two aims simultaneously. The segmentation and registration steps are both formulated using a Bayesian framework, and they constrain each other using an iterative conditional model strategy. We also propose a new strategy to assess cumulative actual dose for this novel integrated algorithm, in order both to determine whether the intended treatment is being delivered and, potentially, whether or not a plan should be adjusted for future treatment fractions. Quantitative results show that the automatic segmentation produced results with an accuracy comparable to manual segmentation, while the registration part significantly outperforms both rigid and non-rigid registration. Clinical application and evaluation of dose delivery show the superiority of the proposed method over the procedure currently used in clinical practice, i.e., manual segmentation followed by rigid registration. PMID:21646038

  17. An investigation of the use of temporal decomposition in space mission scheduling

    NASA Technical Reports Server (NTRS)

    Bullington, Stanley E.; Narayanan, Venkat

    1994-01-01

    This research involves an examination of techniques for solving scheduling problems in long-duration space missions. The mission timeline is broken up into several time segments, which are then scheduled incrementally. Three methods are presented for identifying the activities that are to be attempted within these segments. The first method is a mathematical model, which is presented primarily to illustrate the structure of the temporal decomposition problem. Since the mathematical model is bound to be computationally prohibitive for realistic problems, two heuristic assignment procedures are also presented. The first heuristic method is based on dispatching rules for activity selection, and the second heuristic assigns performances of a model evenly over timeline segments. These heuristics are tested using a sample Space Station mission and a Spacelab mission. The results are compared with those obtained by scheduling the missions without any problem decomposition. The applicability of this approach to large-scale mission scheduling problems is also discussed.
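
The dispatching-rule heuristic can be illustrated with a toy sketch. The Activity fields, the capacity model, and the specific rule used here (earliest due segment first, longer activities first) are illustrative assumptions, not the paper's actual formulation:

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    duration: float     # hours of segment capacity consumed
    due_segment: int    # latest timeline segment it may run in

def assign_by_dispatch(activities, n_segments, capacity):
    """Greedy dispatching-rule assignment of activities to timeline segments.

    Each activity is placed in the earliest segment with enough remaining
    capacity, at or before its due segment; activities that fit nowhere
    are returned as unassigned.
    """
    remaining = [capacity] * n_segments
    schedule = {s: [] for s in range(n_segments)}
    unassigned = []
    for act in sorted(activities, key=lambda a: (a.due_segment, -a.duration)):
        for s in range(min(act.due_segment, n_segments - 1) + 1):
            if remaining[s] >= act.duration:
                schedule[s].append(act.name)
                remaining[s] -= act.duration
                break
        else:
            unassigned.append(act.name)
    return schedule, unassigned
```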

  18. Complex Building Detection Through Integrating LIDAR and Aerial Photos

    NASA Astrophysics Data System (ADS)

    Zhai, R.

    2015-02-01

    This paper proposes a new approach to digital building detection through the integration of LiDAR data and aerial imagery. It is known that most building rooftops are represented by different regions grown from different seed pixels. Considering the principles of image segmentation, this paper employs a new region-based technique to segment images, combining the advantages of LiDAR and aerial images. First, multiple seed points are selected in an automated way by taking several constraints into consideration. Then, region growing proceeds by combining the elevation attribute from the LiDAR data, the visibility attribute from the DEM (Digital Elevation Model), and the radiometric attribute from the warped images in the segmentation. Through this combination, pixels with similar height, visibility, and spectral attributes are merged into one region, which is believed to represent the whole building area. The proposed methodology was implemented on real data and competitive results were achieved.
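
The multi-attribute region growing described above can be sketched as follows; the 4-connectivity and the specific tolerance thresholds are illustrative assumptions (the paper combines height, visibility and radiometry, of which two attributes are shown here):

```python
import numpy as np
from collections import deque

def region_grow(height, spectral, seed, h_tol=0.5, s_tol=10.0):
    """Region growing from a seed pixel.

    A 4-neighbour is merged when both its height (e.g. from LiDAR) and
    its spectral value stay within tolerance of the seed's values.
    Returns a boolean mask of the grown region.
    """
    h, w = height.shape
    mask = np.zeros((h, w), dtype=bool)
    q = deque([seed])
    mask[seed] = True
    h0, s0 = height[seed], spectral[seed]
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if (abs(height[nr, nc] - h0) <= h_tol
                        and abs(spectral[nr, nc] - s0) <= s_tol):
                    mask[nr, nc] = True
                    q.append((nr, nc))
    return mask
```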

  19. TOPEX Microwave Radiometer - Thermal design verification test and analytical model validation

    NASA Technical Reports Server (NTRS)

    Lin, Edward I.

    1992-01-01

    The testing of the TOPEX Microwave Radiometer (TMR) is described in terms of hardware development based on the modeling and thermal vacuum testing conducted. The TMR and the vacuum-test facility are described, and the thermal verification test includes a hot steady-state segment, a cold steady-state segment, and a cold survival mode segment, totalling 65 hours. A graphic description is given of the test history as it relates to temperature tracking, and two multinode TMR test-chamber models are compared to the test results. Large discrepancies between the test data and the model predictions are attributed to contact conductance, effective emittance from the multilayer insulation, and heat leaks related to deviations from the flight configuration. The TMR thermal testing/modeling effort is shown to provide technical corrections for the procedure outlined, and the need for validating predictive models is underscored.

  20. A probability tracking approach to segmentation of ultrasound prostate images using weak shape priors

    NASA Astrophysics Data System (ADS)

    Xu, Robert S.; Michailovich, Oleg V.; Solovey, Igor; Salama, Magdy M. A.

    2010-03-01

    Prostate specific antigen density is an established parameter for indicating the likelihood of prostate cancer. To this end, the size and volume of the gland have become pivotal quantities used by clinicians during the standard cancer screening process. As an alternative to manual palpation, an increasing number of volume estimation methods are based on imagery data of the prostate. The necessity to process large volumes of such data requires automatic segmentation algorithms that can accurately and reliably identify the true prostate region. In particular, transrectal ultrasound (TRUS) imaging has become a standard means of assessing the prostate due to its safe nature and high benefit-to-cost ratio. Unfortunately, modern TRUS images are still plagued by many ultrasound imaging artifacts, such as speckle noise and shadowing, which result in relatively low contrast and reduced SNR of the acquired images. Consequently, many modern segmentation methods incorporate prior knowledge about the prostate geometry to enhance traditional segmentation techniques. In this paper, a novel approach to the problem of TRUS segmentation, particularly the definition of the prostate shape prior, is presented. The proposed approach is based on the concept of distribution tracking, which provides a unified framework for tracking both photometric and morphological features of the prostate. In particular, the tracking of morphological features defines a novel type of "weak" shape prior. The latter acts as a regularization force that minimally biases the segmentation procedure, while rendering the final estimate stable and robust. The value of the proposed methodology is demonstrated in a series of experiments.

  1. Comparison of atlas-based techniques for whole-body bone segmentation.

    PubMed

    Arabi, Hossein; Zaidi, Habib

    2017-02-01

    We evaluate the accuracy of whole-body bone extraction from whole-body MR images using a number of atlas-based segmentation methods. The motivation behind this work is to find the most promising approach for the purpose of MRI-guided derivation of PET attenuation maps in whole-body PET/MRI. To this end, a variety of atlas-based segmentation strategies commonly used in medical image segmentation and pseudo-CT generation were implemented and evaluated in terms of whole-body bone segmentation accuracy. Bone segmentation was performed on 23 whole-body CT/MR image pairs via a leave-one-out cross-validation procedure. The evaluated segmentation techniques include: (i) intensity averaging (IA), (ii) majority voting (MV), (iii) global and (iv) local (voxel-wise) weighting atlas fusion frameworks implemented utilizing normalized mutual information (NMI), normalized cross-correlation (NCC) and mean square distance (MSD) as image similarity measures for calculating the weighting factors, along with other atlas-dependent algorithms, such as (v) shape-based averaging (SBA) and (vi) Hofmann's pseudo-CT generation method. The performance evaluation of the different segmentation techniques was carried out in terms of estimating bone extraction accuracy from whole-body MRI using standard metrics, such as the Dice similarity coefficient (DSC) and relative volume difference (RVD), considering bony structures obtained from intensity thresholding of the reference CT images as the ground truth. Considering the Dice criterion, global weighting atlas fusion methods provided moderate improvement of whole-body bone segmentation (DSC = 0.65 ± 0.05) compared to non-weighted IA (DSC = 0.60 ± 0.02). The locally weighted atlas fusion approach using the MSD similarity measure outperformed the other strategies by achieving a DSC of 0.81 ± 0.03, while using the NCC and NMI measures resulted in a DSC of 0.78 ± 0.05 and 0.75 ± 0.04, respectively.
Despite very long computation times, the extracted bone obtained from both the SBA (DSC = 0.56 ± 0.05) and Hofmann's (DSC = 0.60 ± 0.02) methods exhibited no improvement compared to non-weighted IA. Finding the optimum parameters for implementation of the atlas fusion approach, such as the weighting factors and image similarity patch size, has a great impact on the performance of atlas-based segmentation approaches. The voxel-wise atlas fusion approach exhibited excellent performance in terms of cancelling out non-systematic registration errors, leading to accurate and reliable segmentation results. Denoising and normalization of MR images, together with optimization of the involved parameters, play a key role in improving bone extraction accuracy. Copyright © 2016 Elsevier B.V. All rights reserved.
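
The local (voxel-wise) weighting idea with an MSD similarity measure can be sketched on 1-D signals; the patch size, the exponential weighting, and the binary labels are simplifying assumptions of this sketch, not the paper's exact scheme:

```python
import numpy as np

def msd_weighted_fusion(target, atlas_images, atlas_labels, patch=3, beta=1.0):
    """Voxelwise weighted fusion of binary atlas labels.

    Each atlas's vote at a voxel is weighted by exp(-beta * MSD), where
    MSD is the local mean square distance between the target and the
    registered atlas image over a small patch. 1-D signals for brevity.
    """
    n = target.size
    half = patch // 2
    num = np.zeros(n)
    den = np.zeros(n)
    for img, lab in zip(atlas_images, atlas_labels):
        for i in range(n):
            lo, hi = max(0, i - half), min(n, i + half + 1)
            msd = np.mean((target[lo:hi] - img[lo:hi]) ** 2)
            w = np.exp(-beta * msd)      # high similarity -> high weight
            num[i] += w * lab[i]
            den[i] += w
    return (num / den) >= 0.5
```

A dissimilar atlas thus contributes a down-weighted vote rather than being discarded outright.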

  2. An eye movement based reading intervention in lexical and segmental readers with acquired dyslexia.

    PubMed

    Ablinger, Irene; von Heyden, Kerstin; Vorstius, Christian; Halm, Katja; Huber, Walter; Radach, Ralph

    2014-01-01

    Due to their brain damage, aphasic patients with acquired dyslexia often rely to a greater extent on lexical or segmental reading procedures. Thus, therapy intervention mostly targets the more impaired reading strategy. In the present work we introduce a novel therapy approach based on real-time measurement of patients' eye movements as they attempt to read words. More specifically, an eye movement contingent technique of stepwise letter de-masking was used to support sequential reading, whereas fixation-dependent initial masking of non-central letters stimulated a lexical (parallel) reading strategy. Four lexical and four segmental readers with acquired central dyslexia received our intensive reading intervention. All participants showed remarkable improvements, as evident in reduced total reading time, a reduced number of fixations per word and improved reading accuracy. Both types of intervention led to item-specific training effects in all subjects. A generalisation to untrained items was found only in segmental readers after the lexical training. Eye movement analyses were also used to compare word processing before and after therapy, indicating that all patients, with one exception, maintained their preferred reading strategy. However, in several cases the balance between sequential and lexical processing became less extreme, indicating a more effective individual interplay of the two word processing routes.

  3. Experimental comparison of landmark-based methods for 3D elastic registration of pre- and postoperative liver CT data

    NASA Astrophysics Data System (ADS)

    Lange, Thomas; Wörz, Stefan; Rohr, Karl; Schlag, Peter M.

    2009-02-01

    The qualitative and quantitative comparison of pre- and postoperative image data is an important way to validate surgical procedures, in particular if computer-assisted planning and/or navigation is performed. Due to deformations after surgery, partially caused by the removal of tissue, a non-rigid registration scheme is a prerequisite for a precise comparison. Interactive landmark-based schemes are a suitable approach if high accuracy and reliability are difficult to achieve with automatic registration approaches. Incorporation of a priori knowledge about the anatomical structures to be registered may help to reduce interaction time and improve accuracy. For pre- and postoperative CT data of oncological liver resections, the intrahepatic vessels are suitable anatomical structures. In addition to using branching landmarks for registration, we here introduce quasi-landmarks at vessel segments with high localization precision perpendicular to the vessels and low precision along the vessels. A comparison of interpolating thin-plate splines (TPS), interpolating Gaussian elastic body splines (GEBS) and approximating GEBS on landmarks at vessel branchings, as well as approximating GEBS on the introduced vessel segment landmarks, is performed. It turns out that the segment landmarks provide registration accuracies as good as branching landmarks and can improve accuracy when combined with branching landmarks. For a low number of landmarks, segment landmarks are even superior.
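
Interpolating thin-plate splines, one of the compared schemes, can be sketched in 2-D. The paper works with 3-D liver data; the 2-D case and the plain interpolating form shown here (kernel U(r) = r² log r, no approximation term) are simplifications for illustration:

```python
import numpy as np

def tps_fit(src, dst):
    """Fit an interpolating 2-D thin-plate spline mapping src -> dst
    landmarks (each of shape (N, 2)). Returns a function warping
    arbitrary (M, 2) point sets."""
    n = src.shape[0]
    def U(r):
        # Radial kernel U(r) = r^2 log r, with U(0) = 0.
        return np.where(r == 0, 0.0, r ** 2 * np.log(r + 1e-300))
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2)
    K = U(d)
    P = np.hstack([np.ones((n, 1)), src])       # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    params = np.linalg.solve(A, b)              # kernel weights + affine
    w, a = params[:n], params[n:]
    def warp(pts):
        d = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2)
        return U(d) @ w + np.hstack([np.ones((pts.shape[0], 1)), pts]) @ a
    return warp
```

By construction the warp reproduces the landmark correspondences exactly and reduces to a pure affine map when the displacement field is affine.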

  4. Colony image acquisition and segmentation

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2007-12-01

    For counting of both colonies and plaques, there is a large number of applications, including food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, sterility testing, AMES testing, pharmaceuticals, paints, sterile fluids and fungal contamination. Recently, many researchers and developers have made efforts toward this kind of system. On investigation, some existing systems have problems, mainly with image acquisition and image segmentation. In order to acquire colony images of good quality, an illumination box was constructed: the box includes front lighting and back lighting, which can be selected by users based on the properties of the colony dishes. With the illumination box, lighting can be uniform, and the colony dish can be placed in the same position every time, which makes image processing easy. The developed colony image segmentation algorithm consists of three sub-algorithms: (1) image classification; (2) image processing; and (3) colony delineation. The colony delineation algorithm mainly contains procedures based on grey-level similarity, boundary tracing, shape information and colony exclusion. In addition, a number of algorithms were developed for colony analysis. The system has been tested with satisfactory results.

  5. A correlative approach to segmenting phases and ferrite morphologies in transformation-induced plasticity steel using electron back-scattering diffraction and energy dispersive X-ray spectroscopy.

    PubMed

    Gazder, Azdiar A; Al-Harbi, Fayez; Spanke, Hendrik Th; Mitchell, David R G; Pereloma, Elena V

    2014-12-01

    Using a combination of electron back-scattering diffraction and energy dispersive X-ray spectroscopy data, a segmentation procedure was developed to comprehensively distinguish austenite, martensite, polygonal ferrite, ferrite in granular bainite and bainitic ferrite laths in a thermo-mechanically processed low-Si, high-Al transformation-induced plasticity steel. The efficacy of the ferrite morphologies segmentation procedure was verified by transmission electron microscopy. The variation in carbon content between the ferrite in granular bainite and bainitic ferrite laths was explained on the basis of carbon partitioning during their growth. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Segmentation of lung fields using Chan-Vese active contour model in chest radiographs

    NASA Astrophysics Data System (ADS)

    Sohn, Kiwon

    2011-03-01

    A CAD tool for chest radiographs consists of several procedures, and the very first step is segmentation of the lung fields. We develop a novel methodology for segmentation of lung fields in chest radiographs that satisfies the following two requirements. First, we aim to develop a segmentation method that does not need a training stage with manual estimation of anatomical features in a large training dataset of images. Secondly, for ease of implementation, it is desirable to apply a well-established model that is widely used for various image-partitioning practices. The Chan-Vese active contour model, which is based on the Mumford-Shah functional in the level set framework, is applied for segmentation of the lung fields. With this model, segmentation of lung fields can be carried out without detailed prior knowledge of the radiographic anatomy of the chest, yet in some chest radiographs the trachea regions are unfavorably segmented out in addition to the lung field contours. To eliminate artifacts from the trachea, we locate the upper end of the trachea, find a vertical center line of the trachea and delineate it, and then brighten the trachea region to make it less distinctive. The segmentation process is finalized by subsequent morphological operations. We randomly selected 30 images from the Japanese Society of Radiological Technology image database to test the proposed methodology, and the results are shown. We hope our segmentation technique can help promote CAD tools, especially for emerging chest radiographic imaging techniques such as dual energy radiography and chest tomosynthesis.
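
A greatly reduced sketch of the Chan-Vese idea: the code below keeps only the piecewise-constant data term of the Mumford-Shah functional (alternating mean estimation and pixel reassignment) and omits the level-set evolution and the curvature/length penalty of the full model:

```python
import numpy as np

def two_phase_chan_vese(img, n_iter=20):
    """Simplified two-phase Chan-Vese style segmentation.

    Alternately estimate the mean intensities c1, c2 of foreground and
    background, then reassign each pixel to the closer mean.
    Returns a boolean foreground mask.
    """
    mask = img > img.mean()                 # initial partition
    for _ in range(n_iter):
        c1 = img[mask].mean() if mask.any() else 0.0
        c2 = img[~mask].mean() if (~mask).any() else 0.0
        new_mask = (img - c1) ** 2 < (img - c2) ** 2
        if np.array_equal(new_mask, mask):
            break                           # converged
        mask = new_mask
    return mask
```

The curvature term dropped here is what gives the real model its smooth contours; for production use, a full implementation such as scikit-image's `chan_vese` would be the natural choice.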

  7. Iterative-cuts: longitudinal and scale-invariant segmentation via user-defined templates for rectosigmoid colon in gynecological brachytherapy

    PubMed Central

    Lüddemann, Tobias; Egger, Jan

    2016-01-01

    Among all types of cancer, gynecological malignancies are the fourth most frequent type of cancer among women. In addition to chemotherapy and external beam radiation, brachytherapy is the standard procedure for the treatment of these malignancies. In the course of treatment planning, localization of the tumor as the target volume, and of adjacent organs at risk, by segmentation is crucial to accomplish an optimal radiation distribution to the tumor while simultaneously preserving healthy tissue. Segmentation is performed manually and represents a time-consuming task in the clinical daily routine. This study focuses on the segmentation of the rectum/sigmoid colon as an organ-at-risk in gynecological brachytherapy. The proposed segmentation method uses an interactive, graph-based segmentation scheme with a user-defined template. The scheme creates a directed two-dimensional graph, followed by computation of the minimal cost closed set on the graph, resulting in an outline of the rectum. The graph's outline is dynamically adapted to the last calculated cut. Evaluation was performed by comparing manual segmentations of the rectum/sigmoid colon to results achieved with the proposed method. The comparison of the algorithmic to the manual result yielded a Dice similarity coefficient of 83.85 ± 4.08%, compared to 83.97 ± 8.08% for the comparison of two manual segmentations by the same physician. Utilizing the proposed methodology resulted in a median time of 128 s/dataset, compared to 300 s needed for purely manual segmentation. PMID:27403448
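
The Dice similarity coefficient used for evaluation is straightforward to compute; a minimal sketch, reported in percent as in the abstract above:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks,
    DSC = 2|A ∩ B| / (|A| + |B|), returned in percent."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 100.0                # both masks empty: define as perfect
    return 100.0 * 2.0 * np.logical_and(a, b).sum() / denom
```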

  8. Hyperspectral image segmentation of the common bile duct

    NASA Astrophysics Data System (ADS)

    Samarov, Daniel; Wehner, Eleanor; Schwarz, Roderich; Zuzak, Karel; Livingston, Edward

    2013-03-01

    Over the course of the last several years, hyperspectral imaging (HSI) has seen increased usage in biomedicine. Within the medical field in particular, HSI has been recognized as having the potential to make an immediate impact by reducing the risks and complications associated with laparotomies (surgical procedures involving large incisions into the abdominal wall) and related procedures. There are several ongoing studies focused on such applications. Hyperspectral images were acquired during pancreatoduodenectomies (commonly referred to as Whipple procedures), a surgical procedure done to remove cancerous tumors involving the pancreas and gallbladder. As a result of the complexity of the local anatomy, identifying where the common bile duct (CBD) is can be difficult, resulting in a comparatively high incidence of injury to the CBD and associated complications. It is here that HSI has the potential to help reduce the risk of such events. Because the bile contained within the CBD exhibits a unique spectral signature, we are able to utilize HSI segmentation algorithms to help identify where the CBD is. In the work presented here we discuss approaches to this segmentation problem and present the results.

  9. Scale-space for empty catheter segmentation in PCI fluoroscopic images.

    PubMed

    Bacchuwar, Ketan; Cousty, Jean; Vaillant, Régis; Najman, Laurent

    2017-07-01

    In this article, we present a method for empty guiding catheter segmentation in fluoroscopic X-ray images. Since the guiding catheter is a commonly visible landmark, its segmentation is an important and difficult building block for Percutaneous Coronary Intervention (PCI) procedure modeling. In a number of clinical situations, the catheter is empty and appears as a low-contrast structure with two parallel and partially disconnected edges. To segment it, we work on the level-set scale-space of the image, the min tree, to extract curve blobs. We then propose a novel structural scale-space, a hierarchy built on these curve blobs. The deep connected component, i.e. the cluster of curve blobs on this hierarchy, that maximizes the likelihood of being an empty catheter is retained as the final segmentation. We evaluate the performance of the algorithm on a database of 1250 fluoroscopic images from 6 patients. As a result, we obtain very good qualitative and quantitative segmentation performance, with mean precision and recall of 80.48% and 63.04%, respectively. We develop a novel structural scale-space to segment a structured object, the empty catheter, in challenging situations where the information content is very sparse in the images. Fully automatic empty catheter segmentation in X-ray fluoroscopic images is an important preliminary step in PCI procedure modeling, as it aids in tagging the arrival and removal locations of other interventional tools.

  10. Step-by-Step Technique for Segmental Reconstruction of Reverse Hill-Sachs Lesions Using Homologous Osteochondral Allograft.

    PubMed

    Alkaduhimi, Hassanin; van den Bekerom, Michel P J; van Deurzen, Derek F P

    2017-06-01

    Posterior shoulder dislocations are accompanied by high forces and can result in an anteromedial humeral head impression fracture called a reverse Hill-Sachs lesion. This reverse Hill-Sachs lesion can result in serious complications, including posttraumatic osteoarthritis, posterior dislocations, osteonecrosis, persistent joint stiffness, and loss of shoulder function. Treatment is challenging and depends on the amount of bone loss. Several techniques have been reported for the surgical treatment of lesions larger than 20%. However, there is still limited evidence with regard to the optimal procedure. Favorable results have been reported for segmental reconstruction of the reverse Hill-Sachs lesion with bone allograft. Although segmental reconstruction has been used in several studies, the technique has not yet been described in detail. In this report we propose a step-by-step description of how to perform a segmental reconstruction of a reverse Hill-Sachs defect.

  11. Automated MRI segmentation for individualized modeling of current flow in the human head.

    PubMed

    Huang, Yu; Dmochowski, Jacek P; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C

    2013-12-01

    High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process of building such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Accurate placement of many high-density electrodes on an individual scalp is also a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. A fully automated segmentation technique based on Statistical Parametric Mapping 8 (SPM8), including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference in current flow distributions in the resulting HD-tDCS models and the optimized current flow intensities on cortical targets. The segmentation tool segments not just the brain but also provides accurate results for CSF, skull and other soft tissues, with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29%, respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly.
Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.

  12. Anatomic Patterns of Renal Arterial Sympathetic Innervation: New Aspects for Renal Denervation.

    PubMed

    Imnadze, Guram; Balzer, Stefan; Meyer, Baerbel; Neumann, Joerg; Krech, Rainer Horst; Thale, Joachim; Franz, Norbert; Warnecke, Henning; Awad, Khaled; Hayek, Salim S; Devireddy, Chandan

    2016-12-01

    Initial studies of catheter-based renal arterial sympathetic denervation to lower blood pressure in resistant hypertensive patients renewed interest in the sympathetic nervous system's role in the pathogenesis of hypertension. However, the SYMPLICITY HTN-3 study failed to meet its prespecified blood pressure lowering efficacy endpoint. To date, only a limited number of studies have described the microanatomy of renal nerves, of which only two involve humans. Renal arteries were harvested from 15 cadavers from the Klinikum Osnabruck and Schuchtermann Klinik, Bad Rothenfelde. Each artery was divided longitudinally into equal thirds (proximal, middle, and distal), with each section then divided into equal superior, inferior, anterior, and posterior quadrants, which were then stained. Segments containing no renal nerves were given a score of 0; 1-2 nerves with diameter <300 µm, a score of 1; 3-4 nerves or nerve diameter 300-599 µm, a score of 2; and >4 nerves or nerve diameter ≥600 µm, a score of 3. A total of 22 renal arteries (9 right-sided, 13 left-sided) were suitable for examination. Overall, 691 sections of 5 mm thickness were prepared. Right renal arteries had a significantly higher mean innervation grade (1.56 ± 0.85) compared to left renal arteries (1.09 ± 0.87) (P < 0.001). Medial (1.30 ± 0.59) and distal (1.39 ± 0.62) innervation was higher than proximal (1.17 ± 0.55) innervation (P < 0.001). When divided into quadrants, the anterior (1.52 ± 0.96) and superior (1.71 ± 0.89) segments were more innervated than the posterior (0.96 ± 0.72) and inferior (0.90 ± 0.68) segments (P < 0.001). We found that the right renal artery has significantly higher innervation scores than the left, that the anterior and superior quadrants of the renal arteries scored higher in innervation than the posterior and inferior quadrants, and that the distal third of the renal arteries is more innervated than the more proximal segments.
These findings warrant further evaluation of the spatial innervation patterns of the renal artery in order to understand how they may enhance catheter-based renal arterial denervation procedural strategy and outcomes. The SYMPLICITY HTN-3 study dealt a blow to the idea of catheter-based renal arterial sympathetic denervation. We investigated the location and patterns of periarterial renal nerves in cadaveric human renal arteries. To quantify the density of the renal nerves, we created a novel innervation score. On average, the right renal arteries were significantly more densely innervated than the left renal arteries; the anterior and superior segments were significantly more innervated than the posterior and inferior segments; and absolute innervation scores in the proximal third of the left or right renal arteries were always lower than in the distal segments. These findings may enhance catheter-based renal arterial denervation procedural strategy and outcomes. © 2016, Wiley Periodicals, Inc.
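    The scoring rule stated in the abstract can be written down directly. A minimal sketch; the function and argument names are ours, not the authors':

    ```python
    def innervation_score(n_nerves: int, max_diam_um: float) -> int:
        """Score one renal artery section, following the rule in the abstract:
        0: no nerves; 1: 1-2 nerves, all <300 um; 2: 3-4 nerves or a nerve
        diameter of 300-599 um; 3: >4 nerves or a nerve diameter >=600 um."""
        if n_nerves == 0:
            return 0
        if n_nerves > 4 or max_diam_um >= 600:
            return 3
        if n_nerves >= 3 or max_diam_um >= 300:
            return 2
        return 1

    print(innervation_score(0, 0))     # 0: no nerves
    print(innervation_score(2, 150))   # 1: few, small nerves
    print(innervation_score(4, 150))   # 2: 3-4 nerves
    print(innervation_score(2, 650))   # 3: large-diameter nerve
    ```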

  13. Segmentation for the enhancement of microcalcifications in digital mammograms.

    PubMed

    Milosevic, Marina; Jankovic, Dragan; Peulic, Aleksandar

    2014-01-01

    Microcalcification clusters appear as groups of small, bright particles with arbitrary shapes on mammographic images. They are the earliest sign of breast carcinomas, and their detection is key to improving breast cancer prognosis. However, because microcalcifications have low contrast and noise-like properties, they are difficult to detect. This work is devoted to developing a system for the detection of microcalcifications in digital mammograms. After removing noise from the mammogram using the Discrete Wavelet Transformation (DWT), we first selected the region of interest (ROI) in order to demarcate the breast region on the mammogram. Segmenting the region of interest is one of the most important stages of the mammogram processing procedure. The proposed segmentation method is based on filtering with the Sobel filter. This process identifies the significant pixels that belong to the edges of microcalcifications. Microcalcifications were detected by increasing the contrast of the images obtained by applying the Sobel operator. To confirm the effectiveness of this microcalcification segmentation method, the Support Vector Machine (SVM) and k-Nearest Neighbor (k-NN) algorithms were employed for the classification task using a cross-validation technique.
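    The Sobel edge step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the percentile threshold and the toy image are our assumptions:

    ```python
    import numpy as np
    from scipy import ndimage

    def sobel_edge_candidates(image, percentile=99):
        """Flag high-gradient pixels as candidate microcalcification edges."""
        img = image.astype(float)
        gx = ndimage.sobel(img, axis=0)   # vertical gradient component
        gy = ndimage.sobel(img, axis=1)   # horizontal gradient component
        magnitude = np.hypot(gx, gy)      # gradient magnitude
        return magnitude >= np.percentile(magnitude, percentile)

    # toy mammogram patch: dark background with one small bright particle
    patch = np.zeros((32, 32))
    patch[15:18, 15:18] = 1.0
    mask = sobel_edge_candidates(patch)
    print(mask.any())  # True: pixels around the bright spot are flagged
    ```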

  14. First ALPPS procedure using a total robotic approach.

    PubMed

    Vicente, E; Quijano, Y; Ielpo, B; Fabra, I

    2016-12-01

    The ALPPS procedure is gaining interest. Indications and technical aspects of this technique are still under debate [1]. Only 4 totally laparoscopic ALPPS procedures have been described in the literature, and none by a robotic approach [2-4]. This video demonstrates the technical aspects of a totally robotic ALPPS. A 58-year-old man with sigmoid adenocarcinoma and multiple right liver metastases extending to segments IV and I received neoadjuvant XELOX and 5-fluorouracil. Preoperative volumetric CT showed an FLR/TLV (Future Liver Remnant/Total Liver Volume) ratio of 28%. A totally robotic ALPPS procedure was planned using the DaVinci Si. Tumor resection from the FLR (including segment I) was followed by parenchymal transection between the FLR and the diseased part of the liver with concomitant right portal vein ligation. At this step, small branches from the left portal vein to segment IV were divided along the round ligament. The right biliary tract was also divided, as it had been partially devitalized during dissection, being partially encircled by a metastasis at segment IV. The second stage was performed totally robotically on the 13th postoperative day, with an FLR/TLV of 40%. No strong adhesions were found, making this stage much easier than with an open approach. During this step, the right hepatic artery and right suprahepatic vein were divided. Finally, the specimen was retrieved inside a plastic bag through a Pfannenstiel incision. Postoperative pathology showed margins free of disease. The ALPPS procedure performed by a robotic approach could be a safe and feasible technique in experienced centers with advanced robotic skills. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Developing a Procedure for Segmenting Meshed Heat Networks of Heat Supply Systems without Outflows

    NASA Astrophysics Data System (ADS)

    Tokarev, V. V.

    2018-06-01

    The heat supply systems of cities have, as a rule, a ring structure with the possibility of redistributing the flows. Although a ring structure is more reliable than a radial one, the operators of heat networks prefer to run them in normal modes according to a scheme without overflows of the heat carrier between the heat mains. With such a scheme, it is easier to adjust the networks and to detect and locate faults in them. The article proposes a formulation of the heat network segmenting problem. The problem is set in terms of optimization, with the heat supply system's excess hydraulic power used as the optimization criterion. The heat supply system computer model has a hierarchically interconnected multilevel structure. Since iterative calculations are only carried out for the level of trunk heat networks, decomposing the entire system into levels allows the dimensionality of the solved subproblems to be reduced by an order of magnitude. An attempt to solve the problem by fully enumerating possible segmentation versions is not feasible for systems of realistic size. The article suggests a procedure for searching for a rational segmentation of heat supply networks, limiting the search to versions of dividing the system into segments near the flow convergence nodes, with subsequent refining of the solution. The refinement is performed in two stages according to the total excess hydraulic power criterion. At the first stage, the loads are redistributed among the sources. After that, the heat networks are divided into independent fragments, and the possibility of reducing the excess hydraulic power in the obtained fragments is checked by shifting the division places inside a fragment. The proposed procedure was tested on the example of a municipal heat supply system involving six heat mains fed from a common source, 24 loops within the plane of the feeding mains, and more than 5000 consumers. Application of the proposed segmentation procedure made it possible to find a configuration requiring 3% less hydraulic power in the heat supply system than the one found using the simultaneous segmentation method.

  16. Modeling organizational justice improvements in a pediatric health service : a discrete-choice conjoint experiment.

    PubMed

    Cunningham, Charles E; Kostrzewa, Linda; Rimas, Heather; Chen, Yvonne; Deal, Ken; Blatz, Susan; Bowman, Alida; Buchanan, Don H; Calvert, Randy; Jennings, Barbara

    2013-01-01

    Patients value health service teams that function effectively. Organizational justice is linked to the performance, health, and emotional adjustment of the members of these teams. We used a discrete-choice conjoint experiment to study the organizational justice improvement preferences of pediatric health service providers. Using themes from a focus group with 22 staff, we composed 14 four-level organizational justice improvement attributes. A sample of 652 staff (76 % return) completed 30 choice tasks, each presenting three hospitals defined by experimentally varying the attribute levels. Latent class analysis yielded three segments. Procedural justice attributes were more important to the Decision Sensitive segment, 50.6 % of the sample. They preferred to contribute to and understand how all decisions were made and expected management to act promptly on more staff suggestions. Interactional justice attributes were more important to the Conduct Sensitive segment (38.5 %). A universal code of respectful conduct, consequences encouraging respectful interaction, and management's response when staff disagreed with them were more important to this segment. Distributive justice attributes were more important to the Benefit Sensitive segment, 10.9 % of the sample. Simulations predicted that, while Decision Sensitive (74.9 %) participants preferred procedural justice improvements, Conduct (74.6 %) and Benefit Sensitive (50.3 %) participants preferred interactional justice improvements. Overall, 97.4 % of participants would prefer an approach combining procedural and interactional justice improvements. Efforts to create the health service environments that patients value need to be comprehensive enough to address the preferences of segments of staff who are sensitive to different dimensions of organizational justice.

  17. Complete grain boundaries from incomplete EBSD maps: the influence of segmentation on grain size determinations

    NASA Astrophysics Data System (ADS)

    Heilbronner, Renée; Kilian, Ruediger

    2017-04-01

    Grain size analyses are carried out for a number of reasons, for example, the dynamically recrystallized grain size of quartz is used to assess the flow stresses during deformation. Typically a thin section or polished surface is used. If the expected grain size is large enough (10 µm or larger), the images can be obtained on a light microscope, if the grain size is smaller, the SEM is used. The grain boundaries are traced (the process is called segmentation and can be done manually or via image processing) and the size of the cross sectional areas (segments) is determined. From the resulting size distributions, 'the grain size' or 'average grain size', usually a mean diameter or similar, is derived. When carrying out such grain size analyses, a number of aspects are critical for the reproducibility of the result: the resolution of the imaging equipment (light microscope or SEM), the type of images that are used for segmentation (cross polarized, partial or full orientation images, CIP versus EBSD), the segmentation procedure (algorithm) itself, the quality of the segmentation and the mathematical definition and calculation of 'the average grain size'. The quality of the segmentation depends very strongly on the criteria that are used for identifying grain boundaries (for example, angles of misorientation versus shape considerations), on pre- and post-processing (filtering) and on the quality of the recorded images (most notably on the indexing ratio). In this contribution, we consider experimentally deformed Black Hills quartzite with dynamically re-crystallized grain sizes in the range of 2 - 15 µm. We compare two basic methods of segmentations of EBSD maps (orientation based versus shape based) and explore how the choice of methods influences the result of the grain size analysis. 
We also compare different measures of grain size (mean versus mode versus RMS, and 2D versus 3D) in order to determine which definition of 'average grain size' yields the most stable results.
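    The competing size measures mentioned above differ only in how the per-segment equivalent diameters are aggregated. A minimal sketch with hypothetical segment areas (the mode is omitted since it requires binning a distribution):

    ```python
    import numpy as np

    # hypothetical cross-sectional segment areas, in square micrometers
    areas = np.array([12.0, 20.0, 35.0, 50.0, 80.0, 120.0])

    # equivalent circular diameter of each segment: d = 2 * sqrt(A / pi)
    diameters = 2.0 * np.sqrt(areas / np.pi)

    mean_d = diameters.mean()                  # arithmetic mean diameter
    rms_d = np.sqrt((diameters ** 2).mean())   # root-mean-square diameter

    # the RMS weights large grains more heavily, so it never falls below the mean
    print(round(mean_d, 2), round(rms_d, 2))
    ```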

  18. Accuracy and Reproducibility of Adipose Tissue Measurements in Young Infants by Whole Body Magnetic Resonance Imaging

    PubMed Central

    Bauer, Jan Stefan; Noël, Peter Benjamin; Vollhardt, Christiane; Much, Daniela; Degirmenci, Saliha; Brunner, Stefanie; Rummeny, Ernst Josef; Hauner, Hans

    2015-01-01

    Purpose MR might be well suited to obtain reproducible and accurate measures of fat tissues in infants. This study evaluates MR-measurements of adipose tissue in young infants in vitro and in vivo. Material and Methods MR images of ten phantoms simulating subcutaneous fat of an infant’s torso were obtained using a 1.5T MR scanner with and without simulated breathing. Scans consisted of a cartesian water-suppression turbo spin echo (wsTSE) sequence, and a PROPELLER wsTSE sequence. Fat volume was quantified directly and by MR imaging using k-means clustering and threshold-based segmentation procedures to calculate accuracy in vitro. Whole body MR was obtained in sleeping young infants (average age 67±30 days). This study was approved by the local review board. All parents gave written informed consent. To obtain reproducibility in vivo, cartesian and PROPELLER wsTSE sequences were repeated in seven and four young infants, respectively. Overall, 21 repetitions were performed for the cartesian sequence and 13 repetitions for the PROPELLER sequence. Results In vitro accuracy errors depended on the chosen segmentation procedure, ranging from 5.4% to 76%, while the sequence showed no significant influence. Artificial breathing increased the minimal accuracy error to 9.1%. In vivo reproducibility errors for total fat volume of the sleeping infants ranged from 2.6% to 3.4%. Neither segmentation nor sequence significantly influenced reproducibility. Conclusion With both cartesian and PROPELLER sequences an accurate and reproducible measure of body fat was achieved. Adequate segmentation was mandatory for high accuracy. PMID:25706876

  19. Segmentation and analysis of mouse pituitary cells with graphic user interface (GUI)

    NASA Astrophysics Data System (ADS)

    González, Erika; Medina, Lucía.; Hautefeuille, Mathieu; Fiordelisio, Tatiana

    2018-02-01

    In this work, we present a method to perform pituitary cell segmentation in image stacks acquired by fluorescence microscopy from pituitary slice preparations. Although many procedures have been developed for cell segmentation, they are generally based on edge detection and require high-resolution images. In the biological preparations we worked on, however, the cells are not well defined: experts identify them by their intracellular calcium activity, seen as fluorescence intensity changes in different regions over time. These intensity changes were associated with time series over regions and, because they present a particular behavior, were used in a classification procedure to perform cell segmentation. Two logistic regression classifiers were implemented for the time-series classification task, using as features the area under the curve and skewness in the first classifier and skewness and kurtosis in the second. Once both decision boundaries had been found in two different feature spaces by training on 120 time series, they were tested over 12 image stacks through a Python graphical user interface (GUI), generating binary images in which white pixels correspond to cells and black pixels to background. Results show that the area-skewness classifier reduces the time an expert dedicates to locating cells by up to 75% in some stacks, versus 92% for the kurtosis-skewness classifier, evaluated on the number of regions the method found. Given these promising results, we expect the method to improve further as more relevant features are added to the classifier.
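    The per-region features named above (area under the curve, skewness, kurtosis) are straightforward to extract from an intensity time series. A minimal sketch; the sampling step and the synthetic traces are our assumptions, not data from the study:

    ```python
    import numpy as np
    from scipy.stats import skew, kurtosis

    def trace_features(trace, dt=1.0):
        """Features used by the classifiers: AUC, skewness, kurtosis."""
        auc = float(trace.sum() * dt)  # area under the curve (rectangle rule)
        return auc, float(skew(trace)), float(kurtosis(trace))

    t = np.linspace(0.0, 10.0, 200)
    cell_like = np.exp(-(t - 3.0) ** 2)    # calcium transient: rare high values
    background = 0.05 + 0.01 * np.sin(t)   # flat, near-symmetric fluctuation

    # a transient trace is far more right-skewed than background noise,
    # which is what makes skewness a useful cell/background discriminator
    print(trace_features(cell_like)[1] > trace_features(background)[1])  # True
    ```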

  20. Accuracy and reproducibility of adipose tissue measurements in young infants by whole body magnetic resonance imaging.

    PubMed

    Bauer, Jan Stefan; Noël, Peter Benjamin; Vollhardt, Christiane; Much, Daniela; Degirmenci, Saliha; Brunner, Stefanie; Rummeny, Ernst Josef; Hauner, Hans

    2015-01-01

    MR might be well suited to obtain reproducible and accurate measures of fat tissues in infants. This study evaluates MR-measurements of adipose tissue in young infants in vitro and in vivo. MR images of ten phantoms simulating subcutaneous fat of an infant's torso were obtained using a 1.5T MR scanner with and without simulated breathing. Scans consisted of a cartesian water-suppression turbo spin echo (wsTSE) sequence, and a PROPELLER wsTSE sequence. Fat volume was quantified directly and by MR imaging using k-means clustering and threshold-based segmentation procedures to calculate accuracy in vitro. Whole body MR was obtained in sleeping young infants (average age 67±30 days). This study was approved by the local review board. All parents gave written informed consent. To obtain reproducibility in vivo, cartesian and PROPELLER wsTSE sequences were repeated in seven and four young infants, respectively. Overall, 21 repetitions were performed for the cartesian sequence and 13 repetitions for the PROPELLER sequence. In vitro accuracy errors depended on the chosen segmentation procedure, ranging from 5.4% to 76%, while the sequence showed no significant influence. Artificial breathing increased the minimal accuracy error to 9.1%. In vivo reproducibility errors for total fat volume of the sleeping infants ranged from 2.6% to 3.4%. Neither segmentation nor sequence significantly influenced reproducibility. With both cartesian and PROPELLER sequences an accurate and reproducible measure of body fat was achieved. Adequate segmentation was mandatory for high accuracy.
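    The k-means clustering segmentation compared above reduces, for a single intensity channel, to a two-class 1D k-means on voxel values. A minimal sketch; the intensity distributions are synthetic placeholders, not values from the study:

    ```python
    import numpy as np

    def kmeans_1d(values, iters=20):
        """Minimal two-class k-means on voxel intensities (illustrative only)."""
        c = np.array([values.min(), values.max()], dtype=float)  # init centroids
        for _ in range(iters):
            labels = np.abs(values[:, None] - c[None, :]).argmin(axis=1)
            for k in range(2):
                if np.any(labels == k):
                    c[k] = values[labels == k].mean()
        return labels, c

    rng = np.random.default_rng(0)
    voxels = np.concatenate([rng.normal(100, 5, 500),    # lean-tissue-like
                             rng.normal(300, 10, 120)])  # fat-like intensities
    labels, centroids = kmeans_1d(voxels)
    fat_fraction = (labels == centroids.argmax()).mean()
    print(round(fat_fraction, 2))  # ≈ 0.19 (120 of 620 voxels)
    ```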

  1. Poster - 32: Atlas Selection for Automated Segmentation of Pelvic CT for Prostate Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mallawi, Abrar; Farrell, Tom; Diamond, Kevin-Ro

    2016-08-15

    Atlas-based segmentation has recently been evaluated for use in prostate radiotherapy. In a typical approach, the essential step is the selection of an atlas from a database that best matches the target image. This work proposes an atlas selection strategy and evaluates its impact on final segmentation accuracy. Several anatomical parameters indicating overall prostate and body shape were measured, all on CT images. A brute-force procedure was first performed for a training dataset of 20 patients using image registration to pair subjects with similar contours; each subject served as a target image to which all remaining 19 images were affinely registered. The overlap between the prostate and femoral heads was quantified for each pair using the Dice Similarity Coefficient (DSC). Finally, an atlas selection procedure was designed, relying on the computation of a similarity score defined as a weighted sum of differences between the target and atlas subjects' anatomical measurements. The algorithm's ability to predict the most similar atlas was excellent, achieving mean DSCs of 0.78 ± 0.07 and 0.90 ± 0.02 for the CTV and either femoral head. The proposed atlas selection yielded 0.72 ± 0.11 and 0.87 ± 0.03 for the CTV and either femoral head. The DSCs obtained with the proposed selection method were slightly lower than the maximum established by brute force, but this does not include potential improvements expected with deformable registration. The proposed atlas selection method provides reasonable segmentation accuracy.
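    The DSC overlap measure used throughout the record above has a one-line definition: twice the intersection over the sum of the two mask sizes. A minimal sketch on toy binary masks:

    ```python
    import numpy as np

    def dice(a, b):
        """Dice Similarity Coefficient between two binary masks."""
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    auto = np.zeros((10, 10), dtype=int)
    auto[2:7, 2:7] = 1        # 25-pixel "automatic" contour
    manual = np.zeros((10, 10), dtype=int)
    manual[3:8, 3:8] = 1      # same size, shifted by one pixel

    print(dice(auto, auto))              # 1.0: perfect overlap
    print(round(dice(auto, manual), 2))  # 0.64: 2*16 / (25 + 25)
    ```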

  2. Effects of a Velocity-Vector Based Command Augmentation System and Synthetic Vision System Terrain Portrayal and Guidance Symbology Concepts on Single-Pilot Performance

    NASA Technical Reports Server (NTRS)

    Liu, Dahai; Goodrich, Kenneth H.; Peak, Bob

    2010-01-01

    This study investigated the effects of synthetic vision system (SVS) concepts and advanced flight controls on the performance of pilots flying a light, single-engine general aviation airplane. We evaluated the effects and interactions of two levels of terrain portrayal, guidance symbology, and flight control response type on pilot performance during the conduct of a relatively complex instrument approach procedure. The terrain and guidance presentations were evaluated as elements of an integrated primary flight display system. The approach procedure used in the study included a steeply descending, curved segment as might be encountered in emerging, required navigation performance (RNP) based procedures. Pilot performance measures consisted of flight technical performance, perceived workload, perceived situational awareness and subjective preference. The results revealed that an elevation based generic terrain portrayal significantly improved perceived situation awareness without adversely affecting flight technical performance or workload. Other factors (pilot instrument rating, control response type, and guidance symbology) were not found to significantly affect the performance measures.

  3. Definition of a European promotion concept

    NASA Astrophysics Data System (ADS)

    Andersen, T. A. E.; Blume, H. T.; Brouwer, M. P. A. M.; Dangelo, L.; Duwe, H.; Eilersen, N.; Gaida, M.; Herten, M.; Iversen, T.-H.; Jungius, C.

    1992-07-01

    A marketing strategy for the services offered by the Columbus program is presented. The marketing goals, activities and means, and the procedures for monitoring and control are defined within the context of a first tentative marketing plan for Columbus utilization. The proposed organizational structure, based on national user support organizations within Europe, allows as far as possible for a clear coupling of the organization to the market segment.

  4. From Phenomena to Objects: Segmentation of Fuzzy Objects and its Application to Oceanic Eddies

    NASA Astrophysics Data System (ADS)

    Wu, Qingling

    A challenging image analysis problem that has received limited attention to date is the isolation of fuzzy objects---i.e. those with inherently indeterminate boundaries---from continuous field data. This dissertation seeks to bridge the gap between, on the one hand, the recognized need for Object-Based Image Analysis of fuzzy remotely sensed features, and on the other, the optimization of existing image segmentation techniques for the extraction of more discretely bounded features. Using mesoscale oceanic eddies as a case study of a fuzzy object class evident in Sea Surface Height Anomaly (SSHA) imagery, the dissertation demonstrates firstly, that the widely used region-growing and watershed segmentation techniques can be optimized and made comparable in the absence of ground truth data using the principle of parsimony. However, they both have significant shortcomings, with the region growing procedure creating contour polygons that do not follow the shape of eddies while the watershed technique frequently subdivides eddies or groups together separate eddy objects. Secondly, it was determined that these problems can be remedied by using a novel Non-Euclidian Voronoi (NEV) tessellation technique. NEV is effective in isolating the extrema associated with eddies in SSHA data while using a non-Euclidian cost-distance based procedure (based on cumulative gradients in ocean height) to define the boundaries between fuzzy objects. Using this procedure as the first stage in isolating candidate eddy objects, a novel "region-shrinking" multicriteria eddy identification algorithm was developed that includes consideration of shape and vorticity. Eddies identified by this region-shrinking technique compare favorably with those identified by existing techniques, while simplifying and improving existing automated eddy detection algorithms. 
However, it also tends to find a larger number of eddies as a result of its ability to separate what other techniques identify as connected eddies. The research presented here is of significance not only to eddy research in oceanography, but also to other areas of Earth System Science for which the automated detection of features lacking rigid boundary definitions is of importance.

  5. Prosthetic component segmentation with blur compensation: a fast method for 3D fluoroscopy.

    PubMed

    Tarroni, Giacomo; Tersi, Luca; Corsi, Cristiana; Stagni, Rita

    2012-06-01

    A new method for prosthetic component segmentation from fluoroscopic images is presented. The hybrid approach we propose combines diffusion filtering, region growing, and level-set techniques without exploiting any a priori knowledge of the analyzed geometry. The method was evaluated on a synthetic dataset including 270 images of knee and hip prostheses merged with real fluoroscopic data simulating different conditions of blurring and illumination gradient. The performance of the method was assessed by comparing estimated contours to references using different metrics. Results showed that the segmentation procedure is fast, accurate, independent of the operator as well as of the specific geometrical characteristics of the prosthetic component, and able to compensate for the amount of blurring and the illumination gradient. Importantly, the method allows a strong reduction of the required user interaction time when compared to traditional segmentation techniques. Its effectiveness and robustness in different image conditions, together with its simplicity and fast implementation, make this prosthetic component segmentation procedure promising and suitable for multiple clinical applications, including the assessment of in vivo joint kinematics in a variety of cases.
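    Of the three components combined in the hybrid approach, region growing is the simplest to illustrate. A minimal intensity-based sketch (not the authors' hybrid pipeline; the tolerance and toy image are our assumptions):

    ```python
    import numpy as np
    from collections import deque

    def region_grow(image, seed, tol=10.0):
        """Grow a region from a seed pixel, accepting 4-connected neighbors
        whose intensity stays within tol of the seed intensity."""
        mask = np.zeros(image.shape, dtype=bool)
        ref = float(image[seed])
        q = deque([seed])
        while q:
            y, x = q.popleft()
            if mask[y, x]:
                continue
            mask[y, x] = True
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                        and not mask[ny, nx]
                        and abs(float(image[ny, nx]) - ref) <= tol):
                    q.append((ny, nx))
        return mask

    # toy fluoroscopic frame: bright background with a dark 10x10 "component"
    img = np.full((20, 20), 200.0)
    img[5:15, 5:15] = 50.0
    mask = region_grow(img, (10, 10), tol=10.0)
    print(int(mask.sum()))  # 100: the full dark component, nothing else
    ```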

  6. Numerical arc segmentation algorithm for a radio conference: A software tool for communication satellite systems planning

    NASA Technical Reports Server (NTRS)

    Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.

    1988-01-01

    The Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC) on the Use of the Geostationary Satellite Orbit and the Planning of Space Services Utilizing It. Through careful selection of the predetermined arc (PDA) for each administration, flexibility can be increased in terms of choice of system technical characteristics and specific orbit location while reducing the need for coordination among administrations. The NASARC software determines pairwise compatibility between all possible service areas at discrete arc locations. NASARC then exhaustively enumerates groups of administrations whose satellites can be closely located in orbit, and finds the arc segment over which each such compatible group exists. From the set of all possible compatible groupings, groups and their associated arc segments are selected using a heuristic procedure such that a PDA is identified for each administration. Various aspects of the NASARC concept and how the software accomplishes specific features of allotment planning are discussed.

  7. A univocal definition of the neuronal soma morphology using Gaussian mixture models.

    PubMed

    Luengo-Sanchez, Sergio; Bielza, Concha; Benavides-Piccione, Ruth; Fernaud-Espinosa, Isabel; DeFelipe, Javier; Larrañaga, Pedro

    2015-01-01

    The definition of the soma is fuzzy, as there is no clear line demarcating the soma of the labeled neurons and the origin of the dendrites and axon. Thus, the morphometric analysis of the neuronal soma is highly subjective. In this paper, we provide a mathematical definition and an automatic segmentation method to delimit the neuronal soma. We applied this method to the characterization of pyramidal cells, which are the most abundant neurons in the cerebral cortex. Since there are no benchmarks with which to compare the proposed procedure, we validated the goodness of this automatic segmentation method against manual segmentation by neuroanatomists to set up a framework for comparison. We concluded that there were no significant differences between automatically and manually segmented somata, i.e., the proposed procedure segments the neurons similarly to how a neuroanatomist does. It also provides univocal, justifiable and objective cutoffs. Thus, this study is a means of characterizing pyramidal neurons in order to objectively compare the morphometry of the somata of these neurons in different cortical areas and species.

  8. Using data mining to segment healthcare markets from patients' preference perspectives.

    PubMed

    Liu, Sandra S; Chen, Jie

    2009-01-01

    This paper aims to provide an example of how to use data mining techniques to identify patient segments regarding preferences for healthcare attributes and their demographic characteristics. Data were derived from a number of individuals who received in-patient care at a health network in 2006. Data mining and conventional hierarchical clustering with average linkage and Pearson correlation procedures are employed and compared to show how each procedure best determines segmentation variables. Data mining tools identified three differentiable segments by means of cluster analysis. These three clusters have significantly different demographic profiles. The study reveals, when compared with traditional statistical methods, that data mining provides an efficient and effective tool for market segmentation. When there are numerous cluster variables involved, researchers and practitioners need to incorporate factor analysis for reducing variables to clearly and meaningfully understand clusters. Interests and applications in data mining are increasing in many businesses. However, this technology is seldom applied to healthcare customer experience management. The paper shows that efficient and effective application of data mining methods can aid the understanding of patient healthcare preferences.
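    The conventional baseline named above, hierarchical clustering with average linkage and Pearson correlation distance, can be sketched as follows. The preference matrix is synthetic; the two latent profiles and all parameter values are our assumptions:

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(1)
    # hypothetical patients-by-attributes preference ratings, two latent profiles
    profile_a = np.array([5.0, 5.0, 1.0, 1.0, 1.0])  # values the first attributes
    profile_b = np.array([1.0, 1.0, 1.0, 5.0, 5.0])  # values the last attributes
    prefs = np.vstack([profile_a + rng.normal(0, 0.2, (20, 5)),
                       profile_b + rng.normal(0, 0.2, (20, 5))])

    # average linkage on Pearson-correlation distance, then cut into 2 clusters
    tree = linkage(pdist(prefs, metric="correlation"), method="average")
    labels = fcluster(tree, t=2, criterion="maxclust")

    print(len(set(labels)))       # 2 patient segments recovered
    print(len(set(labels[:20])))  # 1: the first profile lands in one segment
    ```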

  9. Correction of oral contrast artifacts in CT-based attenuation correction of PET images using an automated segmentation algorithm.

    PubMed

    Ahmadian, Alireza; Ay, Mohammad R; Bidgoli, Javad H; Sarkar, Saeed; Zaidi, Habib

    2008-10-01

    Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium with high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μ-map), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, high CT number object segmentation using combined region- and boundary-based segmentation and second, object classification to bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled followed by Gaussian smoothing to match the resolution of PET images. A piecewise calibration curve was then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of generated μ-maps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions depending on their volume and the concentration of contrast medium. 
Two PET/CT studies known to be problematic demonstrated the applicability of the technique in clinical setting. More importantly, correction of oral contrast artifacts improved the readability and interpretation of the PET scan and showed substantial decrease of the SUV (104.3%) after correction. An automated segmentation algorithm for classification of irregular shapes of regions containing contrast medium was developed for wider applicability of the SCC algorithm for correction of oral contrast artifacts during the CTAC procedure. The algorithm is being refined and further validated in clinical setting.
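    The final conversion step, a piecewise calibration from CT numbers to linear attenuation coefficients at 511 keV, can be sketched as simple interpolation between breakpoints. The breakpoints and coefficient values below are illustrative textbook approximations (air, water, dense bone), not the curve used in the study:

    ```python
    import numpy as np

    # assumed breakpoints: CT number (HU) -> linear attenuation (1/cm) at 511 keV
    hu_points = np.array([-1000.0, 0.0, 3000.0])
    mu_points = np.array([0.0, 0.096, 0.207])  # air, water, dense bone (approx.)

    def hu_to_mu(hu):
        """Piecewise-linear conversion of CT pixel values to 511 keV
        attenuation coefficients, clamped at the calibration endpoints."""
        return np.interp(hu, hu_points, mu_points)

    print(float(hu_to_mu(0.0)))      # water: 0.096 /cm
    print(float(hu_to_mu(-1000.0)))  # air: 0.0 /cm
    ```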

  10. On-Line Use of Three-Dimensional Marker Trajectory Estimation From Cone-Beam Computed Tomography Projections for Precise Setup in Radiotherapy for Targets With Respiratory Motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Worm, Esben S., E-mail: esbeworm@rm.dk; Department of Medical Physics, Aarhus University Hospital, Aarhus; Hoyer, Morten

    2012-05-01

    Purpose: To develop and evaluate accurate and objective on-line patient setup based on a novel semiautomatic technique in which three-dimensional marker trajectories were estimated from two-dimensional cone-beam computed tomography (CBCT) projections. Methods and Materials: Seven treatment courses of stereotactic body radiotherapy for liver tumors were delivered in 21 fractions in total to 6 patients by a linear accelerator. Each patient had two to three gold markers implanted close to the tumors. Before treatment, a CBCT scan with approximately 675 two-dimensional projections was acquired during a full gantry rotation. The marker positions were segmented in each projection. From this, the three-dimensional marker trajectories were estimated using a probability-based method. The required couch shifts for patient setup were calculated from the mean marker positions along the trajectories. A motion phantom moving with known tumor trajectories was used to examine the accuracy of the method. Trajectory-based setup was retrospectively used off-line for the first five treatment courses (15 fractions) and on-line for the last two treatment courses (6 fractions). Automatic marker segmentation was compared with manual segmentation. The trajectory-based setup was compared with setup based on conventional CBCT guidance on the markers (first 15 fractions). Results: Phantom measurements showed that trajectory-based estimation of the mean marker position was accurate within 0.3 mm. The on-line trajectory-based patient setup was performed within approximately 5 minutes. The automatic marker segmentation agreed with manual segmentation within 0.36 ± 0.50 pixels (mean ± SD; pixel size, 0.26 mm at isocenter). The accuracy of conventional volumetric CBCT guidance was compromised by motion smearing (≤21 mm) that induced an absolute three-dimensional setup error of 1.6 ± 0.9 mm (maximum, 3.2 mm) relative to trajectory-based setup. 
Conclusions: The first on-line clinical use of trajectory estimation from CBCT projections for precise setup in stereotactic body radiotherapy was demonstrated. Uncertainty in the conventional CBCT-based setup procedure was eliminated with the new method.

  11. Region-Based Building Rooftop Extraction and Change Detection

    NASA Astrophysics Data System (ADS)

    Tian, J.; Metzlaff, L.; d'Angelo, P.; Reinartz, P.

    2017-09-01

    Automatic extraction of building changes is important for many applications, such as disaster monitoring and city planning. Although much research based on both 2D and 3D data is available, improvements in accuracy and efficiency are still needed. The introduction of digital surface models (DSMs) to building change detection has strongly improved the resulting accuracy. In this paper, a post-classification approach is proposed for building change detection using satellite stereo imagery. Firstly, DSMs are generated from satellite stereo imagery and further refined by using a segmentation result obtained from the Sobel gradients of the panchromatic image. Besides the refined DSMs, the panchromatic image and the pansharpened multispectral image are used as input features for mean-shift segmentation. The DSM is used to calculate the normalized DSM (nDSM), out of which the initial building candidate regions are extracted. The candidate mask is further refined by morphological filtering and by excluding shadow regions. Following this, all segments that overlap with a building candidate region are determined. A building-oriented segment-merging procedure is introduced to generate a final building rooftop mask. As the last step, object-based change detection is performed by directly comparing the building rooftops extracted from the pre- and post-event imagery and by fusing the change indicators with the rooftop region map. A quantitative and qualitative assessment of the proposed approach is provided using WorldView-2 satellite data from Istanbul, Turkey.
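The nDSM-based candidate extraction and morphological refinement described above can be sketched as follows. The height threshold, structuring element, and area filter are illustrative assumptions standing in for the paper's refinement steps:

```python
import numpy as np
from scipy import ndimage

def building_candidates(dsm, dtm, height_thresh=3.0, min_pixels=25):
    """Extract an initial building-candidate mask from a normalized DSM.

    nDSM = DSM - DTM gives above-ground height; thresholding keeps raised
    objects, morphological opening suppresses thin spurious responses, and
    a connected-component area filter removes small regions (noise, single
    trees). Thresholds here are hypothetical, not the paper's values."""
    ndsm = np.asarray(dsm, float) - np.asarray(dtm, float)
    mask = ndsm > height_thresh
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= min_pixels) + 1
    return np.isin(labels, keep)
```

In the full pipeline this mask would additionally be intersected with shadow-free regions before the segment-merging step.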

  12. Assessing age-related gray matter decline with voxel-based morphometry depends significantly on segmentation and normalization procedures

    PubMed Central

    Callaert, Dorothée V.; Ribbens, Annemie; Maes, Frederik; Swinnen, Stephan P.; Wenderoth, Nicole

    2014-01-01

    Healthy ageing coincides with a progressive decline of brain gray matter (GM), ultimately affecting the entire brain. For a long time, manual delineation-based volumetry within predefined regions of interest (ROIs) has been the gold standard for assessing such degeneration. Voxel-Based Morphometry (VBM) offers an automated alternative that, however, relies critically on the segmentation and spatial normalization of a large collection of images from different subjects. This can be achieved via different algorithms, with SPM5/SPM8, DARTEL of SPM8, and the FSL tools (FAST, FNIRT) being three of the most frequently used. We complemented these voxel-based measurements with an ROI-based approach, whereby the ROIs are defined by transforming an atlas (containing different tissue probability maps as well as predefined anatomic labels) to the individual subject images in order to obtain volumetric information at the level of the whole brain or within separate ROIs. Comparing GM decline between 21 young subjects (mean age 23) and 18 elderly subjects (mean age 66) revealed that volumetric measurements differed significantly between methods. The unified segmentation/normalization of SPM5/SPM8 revealed the largest age-related differences and DARTEL the smallest, with FSL being more similar to the DARTEL approach. Method-specific differences were substantial after segmentation and most pronounced for the cortical structures in close vicinity to major sulci and fissures. Our findings suggest that algorithms that provide only limited degrees of freedom for local deformations (such as the unified segmentation and normalization of SPM5/SPM8) tend to overestimate between-group differences in VBM results when compared to methods providing more flexible warping. This difference seems to be most pronounced if the anatomy of one of the groups deviates from custom templates, a finding that is of particular importance when results are compared across studies using different VBM methods. 
PMID:25002845

  13. 36 CFR 223.195 - Procedures for identifying and marking unprocessed timber.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... pattern may not be used to mark logs from any other source for a period of 24 months after all logs have..., they shall be replaced. If the log is cut into two or more segments, each segment shall be identified... preserve identification of log pieces shall not apply to logs cut into two or more segments as a part of...

  14. Evaluation of non-rigid registration parameters for atlas-based segmentation of CT images of human cochlea

    NASA Astrophysics Data System (ADS)

    Elfarnawany, Mai; Alam, S. Riyahi; Agrawal, Sumit K.; Ladak, Hanif M.

    2017-02-01

    Cochlear implant surgery is a hearing restoration procedure for patients with profound hearing loss. In this surgery, an electrode is inserted into the cochlea to stimulate the auditory nerve and restore the patient's hearing. Clinical computed tomography (CT) images are used for planning and evaluation of electrode placement, but their low resolution limits the visualization of internal cochlear structures. Therefore, high-resolution micro-CT images are used to develop atlas-based segmentation methods to extract these nonvisible anatomical features in clinical CT images. Accurate registration of the high- and low-resolution CT images is a prerequisite for reliable atlas-based segmentation. In this study, we evaluate and compare different non-rigid B-spline registration parameters using micro-CT and clinical CT images of five cadaveric human cochleae. The varied registration parameters are the cost function (normalized correlation (NC), mutual information (MI), and mean squared error (MSE)), the interpolation method (linear, windowed-sinc, and B-spline), and the sampling percentage (1%, 10%, and 100%). We compare the registration results visually and quantitatively using the Dice similarity coefficient (DSC), Hausdorff distance (HD), and absolute percentage error in cochlear volume. Using the MI or MSE cost functions and linear or windowed-sinc interpolation resulted in visually undesirable deformation of internal cochlear structures. Quantitatively, the transforms using a 100% sampling percentage yielded the highest DSC and smallest HD (0.828 ± 0.021 and 0.25 ± 0.09 mm, respectively). Therefore, B-spline registration with the NC cost function, B-spline interpolation, and a 100% sampling percentage can be the foundation for developing an optimized atlas-based segmentation algorithm of intracochlear structures in clinical CT images.
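The two overlap measures quoted above (DSC and HD) can be computed directly. This is a generic sketch (brute-force Hausdorff over point sets), not the authors' implementation:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets (brute force:
    for each set, take the farthest nearest-neighbour distance)."""
    pts_a, pts_b = np.asarray(pts_a, float), np.asarray(pts_b, float)
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

In practice the point sets would be the surface voxels of the registered and reference segmentations, scaled by the voxel spacing to get millimetres.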

  15. Segmentation of Dilated Hemorrhoidal Veins in Hemorrhoidal Disease.

    PubMed

    Díaz-Flores, Lucio; Gutiérrez, Ricardo; González-Gómez, Miriam; García, Pino; Sáez, Francisco J; Díaz-Flores, Lucio; Carrasco, José Luis; Madrid, Juan F

    2018-06-18

    Vein segmentation is a vascular remodeling process mainly studied in experimental conditions and linked to hemodynamic factors, with clinical implications. The aim of this work is to assess the morphologic characteristics, associated findings, and mechanisms that participate in vein segmentation in humans. To this end, we examined 156 surgically obtained cases of hemorrhoidal disease. Segmentation occurred in 65 cases and was most prominent in 15, which were selected for serial sections, immunohistochemistry, and immunofluorescence procedures. The dilated veins showed differently sized spaces, separated by thin septa. Findings associated with vein segmentation were: (a) vascular channels formed from the vein intima endothelial cells (ECs) and located in the vein wall and/or intraluminal fibrin, (b) vascular loops formed by interconnected vascular channels (venous-venous connections), which encircled vein wall components or fibrin and formed folds/pillars/papillae (FPPs; the encircling ECs formed the FPP cover and the encircled components formed the core), and (c) FPP splitting, remodeling, alignment, and fusion, originating septa. Thrombosis was observed in some nonsegmented veins, while the segmented veins only occasionally contained thrombi. Dense microvasculature was also present in the interstitium and around veins. In conclusion, the findings suggest that hemorrhoidal vein segmentation is an adaptive process in which a piecemeal angiogenic mechanism participates, predominantly by intussusception, giving rise to intravascular FPPs, followed by linear rearrangement, remodeling and fusion of FPPs, and septa formation. Identification of other markers, as well as the molecular bases, hemodynamic relevance, and possible therapeutic implications of vein segmentation in dilated hemorrhoidal veins, requires further study. © 2018 S. Karger AG, Basel.

  16. Doing More for More: Unintended Consequences of Financial Incentives for Oncology Specialty Care.

    PubMed

    O'Neil, Brock; Graves, Amy J; Barocas, Daniel A; Chang, Sam S; Penson, David F; Resnick, Matthew J

    2016-02-01

    Specialty care remains a significant contributor to health care spending but largely unaddressed in novel payment models aimed at promoting value-based delivery. Bladder cancer, chiefly managed by subspecialists, is among the most costly. In 2005, Centers for Medicare and Medicaid Services (CMS) dramatically increased physician payment for office-based interventions for bladder cancer to shift care from higher cost facilities, but the impact is unknown. This study evaluated the effect of financial incentives on patterns of fee-for-service (FFS) bladder cancer care. Data from a 5% sample of Medicare beneficiaries from 2001-2013 were evaluated using interrupted time-series analysis with segmented regression. Primary outcomes were the effects of CMS fee modifications on utilization and site of service for procedures associated with the diagnosis and treatment of bladder cancer. Rates of related bladder cancer procedures that were not affected by the fee change were concurrent controls. Finally, the effect of payment changes on both diagnostic yield and need for redundant procedures were studied. All statistical tests were two-sided. Utilization of clinic-based procedures increased by 644% (95% confidence interval [CI] = 584% to 704%) after the fee change, but without reciprocal decline in facility-based procedures. Procedures unaffected by the fee incentive remained unchanged throughout the study period. Diagnostic yield decreased by 17.0% (95% CI = 12.7% to 21.3%), and use of redundant office-based procedures increased by 76.0% (95% CI = 59% to 93%). Financial incentives in bladder cancer care have unintended and costly consequences in the current FFS environment. The observed price sensitivity is likely to remain a major issue in novel payment models failing to incorporate procedure-based specialty physicians. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
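The interrupted time-series analysis with segmented regression used in this study is typically modelled with a level-change and a trend-change term at the intervention date. A minimal sketch, assuming the standard four-parameter form (the variable names are illustrative):

```python
import numpy as np

def segmented_regression(y, t, t_break):
    """Fit an interrupted time-series model by ordinary least squares:

        y = b0 + b1*t + b2*step + b3*(t - t_break)*step + noise,

    where step = 1 from the intervention onward. Returns the coefficients
    (baseline level, baseline trend, level change, trend change)."""
    t = np.asarray(t, float)
    step = (t >= t_break).astype(float)
    X = np.column_stack([np.ones_like(t), t, step, (t - t_break) * step])
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
    return coef
```

Applied to quarterly utilization counts with the 2005 fee change as `t_break`, the level-change coefficient would capture the jump in clinic-based procedures described above; a full analysis would add autocorrelation-robust standard errors.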

  17. Segmenting patients and physicians using preferences from discrete choice experiments.

    PubMed

    Deal, Ken

    2014-01-01

    People often form groups or segments that have similar interests and needs and seek similar benefits from health providers. Health organizations need to understand whether the same health treatments, prevention programs, services, and products should be applied to everyone in the relevant population or whether different treatments need to be provided to each of several segments that are relatively homogeneous internally but heterogeneous among segments. Our objective was to explain the purposes, benefits, and methods of segmentation for health organizations, and to illustrate the process of segmenting health populations based on preference coefficients from a discrete choice conjoint experiment (DCE) using an example study of prevention of cyberbullying among university students. We followed a two-level procedure for investigating segmentation incorporating several methods for forming segments in Level 1 using DCE preference coefficients and testing their quality, reproducibility, and usability by health decision makers. Covariates (demographic, behavioral, lifestyle, and health state variables) were included in Level 2 to further evaluate quality and to support the scoring of large databases and developing typing tools for assigning those in the relevant population, but not in the sample, to the segments. Several segmentation solution candidates were found during the Level 1 analysis, and the relationship of the preference coefficients to the segments was investigated using predictive methods. Those segmentations were tested for their quality and reproducibility and three were found to be very close in quality. While one seemed better than others in the Level 1 analysis, another was very similar in quality and proved ultimately better in predicting segment membership using covariates in Level 2. 
The two segments in the final solution were profiled for attributes that would support the development and acceptance of cyberbullying prevention programs among university students. The two segments were very different: one wanted substantial penalties against cyberbullies and was willing to devote time to a prevention program, while the other felt no need to be involved in prevention and wanted only minor penalties. Segmentation recognizes key differences in why patients and physicians prefer different health programs and treatments. A viable segmentation solution may lead to adapting prevention programs and treatments for each targeted segment and/or to educating and communicating to better inform those in each segment of the program/treatment benefits. Segment members' revealed preferences showing behavioral changes provide the ultimate basis for evaluating the segmentation benefits to the health organization.
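Forming segments from DCE preference coefficients can be illustrated with a simple clustering pass over the per-respondent part-worths. This k-means sketch is a generic stand-in for the Level 1 segmentation methods compared in the study, not the authors' procedure:

```python
import numpy as np

def kmeans_segments(coefs, k=2, n_iter=50, seed=0):
    """Cluster respondents into preference segments by k-means on their
    DCE preference (part-worth) coefficient vectors."""
    rng = np.random.default_rng(seed)
    coefs = np.asarray(coefs, float)
    centers = coefs[rng.choice(len(coefs), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each respondent to the nearest segment centroid.
        d = np.linalg.norm(coefs[:, None, :] - centers[None, :, :], axis=-1)
        assign = d.argmin(axis=1)
        # Move each centroid to the mean of its members (skip empty clusters).
        for j in range(k):
            if np.any(assign == j):
                centers[j] = coefs[assign == j].mean(axis=0)
    return assign, centers
```

Level 2 covariates (demographics, behaviour, health state) would then be used to predict segment membership for respondents outside the sample.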

  18. Evaluation of the procedure for separating barley from other spring small grains. [North Dakota, South Dakota, Minnesota and Montana

    NASA Technical Reports Server (NTRS)

    Magness, E. R. (Principal Investigator)

    1980-01-01

    The success of the Transition Year procedure to separate and label barley and the other small grains was assessed. It was decided that developers of the procedure would carry out the exercise in order to prevent compounding procedural problems with implementation problems. The evaluation proceeded by labeling the spring small grains first. The accuracy of this labeling was, on average, somewhat better than that in the Transition Year operations. Other departures from the original procedure included a regionalization of the labeling process, the use of trend analysis, and the removal of time constraints from the actual processing. Segment selection, ground truth derivation, and the data available for each segment in the analysis are discussed. Labeling accuracy is examined for North Dakota, South Dakota, Minnesota, and Montana, as well as for the entire four-state area. Errors are characterized.

  19. Treatment Using the SpyGlass Digital System in a Patient with Hepatolithiasis after a Whipple Procedure.

    PubMed

    Harima, Hirofumi; Hamabe, Kouichi; Hisano, Fusako; Matsuzaki, Yuko; Itoh, Tadahiko; Sanuki, Kazutoshi; Sakaida, Isao

    2018-05-23

    An 89-year-old man was referred to our hospital for treatment of hepatolithiasis causing recurrent cholangitis. He had undergone a prior Whipple procedure. Computed tomography demonstrated left-sided hepatolithiasis. First, we conducted peroral direct cholangioscopy (PDCS) using an ultraslim endoscope. Although PDCS was successfully conducted, it was unsuccessful in removing all the stones. The stones located in the B2 segment were difficult to remove because the endoscope could not be inserted deeply into this segment due to the small size of the intrahepatic bile duct. Next, we substituted the endoscope with an upper gastrointestinal endoscope. After positioning the endoscope, the SpyGlass digital system (SPY-DS) was successfully inserted deep into the B2 segment. Upon visualizing the residual stones, we conducted SPY-DS-guided electrohydraulic lithotripsy. The stones were disintegrated and completely removed. In cases of PDCS failure, a treatment strategy using the SPY-DS can be considered for patients with hepatolithiasis after a Whipple procedure.

  20. A multiscale Markov random field model in wavelet domain for image segmentation

    NASA Astrophysics Data System (ADS)

    Dai, Peng; Cheng, Yu; Wang, Shengchun; Du, Xinyu; Wu, Dan

    2017-07-01

    The human visual system supports feature detection, learning, and selective attention, exhibiting hierarchical organization and bidirectional connections between neural populations. In this paper, a multiscale Markov random field (MRF) model in the wavelet domain is proposed by mimicking some of the image-processing functions of the visual system. For an input scene, our model provides sparse representations using wavelet transforms and extracts the scene's topological organization using the MRF. In addition, the hierarchy of the visual system is simulated using a pyramid framework in our model. There are two information flows in our model: a bottom-up procedure to extract input features and a top-down procedure to provide feedback control. The two procedures are controlled simply by two pyramidal parameters, and some Gestalt laws are also integrated implicitly. Equipped with such biologically inspired properties, our model can be used to accomplish different image segmentation tasks, such as edge detection and region segmentation.
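The multiscale wavelet decomposition feeding such a model can be sketched with repeated 2x2 Haar-style analysis. This is a minimal illustration of building the pyramid input, assuming an unnormalized average/difference filter bank; it is not the paper's MRF model itself:

```python
import numpy as np

def haar_pyramid(img, levels=3):
    """Build a multiscale representation by repeated 2x2 Haar-style analysis.

    At each level the approximation (ll) is carried down and the three
    detail bands (lh, hl, hh) are kept, giving the coarse-to-fine bands a
    wavelet-domain MRF would label."""
    a = np.asarray(img, float)
    pyramid = []
    for _ in range(levels):
        # Trim to even size, then average/difference over 2x2 blocks.
        a = a[: a.shape[0] // 2 * 2, : a.shape[1] // 2 * 2]
        tl, tr = a[0::2, 0::2], a[0::2, 1::2]
        bl, br = a[1::2, 0::2], a[1::2, 1::2]
        ll = (tl + tr + bl + br) / 4
        lh = (tl + tr - bl - br) / 4   # horizontal detail
        hl = (tl - tr + bl - br) / 4   # vertical detail
        hh = (tl - tr - bl + br) / 4   # diagonal detail
        pyramid.append((lh, hl, hh))
        a = ll
    return a, pyramid
```

A top-down pass would then propagate coarse labels down the pyramid as priors for the finer-scale MRF.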

  1. Recommendations for the classification of group A rotaviruses using all 11 genomic RNA segments.

    PubMed

    Matthijnssens, Jelle; Ciarlet, Max; Rahman, Mustafizur; Attoui, Houssam; Bányai, Krisztián; Estes, Mary K; Gentsch, Jon R; Iturriza-Gómara, Miren; Kirkwood, Carl D; Martella, Vito; Mertens, Peter P C; Nakagomi, Osamu; Patton, John T; Ruggeri, Franco M; Saif, Linda J; Santos, Norma; Steyer, Andrej; Taniguchi, Koki; Desselberger, Ulrich; Van Ranst, Marc

    2008-01-01

    Recently, a classification system was proposed for rotaviruses in which all the 11 genomic RNA segments are used (Matthijnssens et al. in J Virol 82:3204-3219, 2008). Based on nucleotide identity cut-off percentages, different genotypes were defined for each genome segment. A nomenclature for the comparison of complete rotavirus genomes was considered in which the notations Gx-P[x]-Ix-Rx-Cx-Mx-Ax-Nx-Tx-Ex-Hx are used for the VP7-VP4-VP6-VP1-VP2-VP3-NSP1-NSP2-NSP3-NSP4-NSP5/6 encoding genes, respectively. This classification system is an extension of the previously applied genotype-based system which made use of the rotavirus gene segments encoding VP4, VP7, VP6, and NSP4. In order to assign rotavirus strains to one of the established genotypes or a new genotype, a standard procedure is proposed in this report. As more human and animal rotavirus genomes will be completely sequenced, new genotypes for each of the 11 gene segments may be identified. A Rotavirus Classification Working Group (RCWG) including specialists in molecular virology, infectious diseases, epidemiology, and public health was formed, which can assist in the appropriate delineation of new genotypes, thus avoiding duplications and helping minimize errors. Scientists discovering a potentially new rotavirus genotype for any of the 11 gene segments are invited to send the novel sequence to the RCWG, where the sequence will be analyzed, and a new nomenclature will be advised as appropriate. The RCWG will update the list of classified strains regularly and make this accessible on a website. Close collaboration with the Study Group Reoviridae of the International Committee on the Taxonomy of Viruses will be maintained.
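The nucleotide-identity cut-off comparison at the heart of this classification can be illustrated with a percent-identity calculation over an aligned pair of gene segments. A minimal sketch, assuming the sequences are already aligned to equal length (real genotyping also handles gaps and uses curated alignments):

```python
def percent_identity(seq1, seq2):
    """Nucleotide identity (%) over an aligned pair of sequences, the
    quantity compared against a genotype's identity cut-off percentage."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq1, seq2))
    return 100.0 * matches / len(seq1)
```

A strain whose, say, VP7 segment falls below the cut-off against all established G genotypes would be a candidate for referral to the RCWG as a potentially new genotype.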

  2. Precise Alignment and Permanent Mounting of Thin and Lightweight X-ray Segments

    NASA Technical Reports Server (NTRS)

    Biskach, Michael P.; Chan, Kai-Wing; Hong, Melinda N.; Mazzarella, James R.; McClelland, Ryan S.; Norman, Michael J.; Saha, Timo T.; Zhang, William W.

    2012-01-01

    To provide observations that support current research efforts in high-energy astrophysics, future X-ray telescope designs must provide matching or better angular resolution while significantly increasing the total collecting area. In such a design, the permanent mounting of thin and lightweight segments is critical to the overall performance of the complete X-ray optic assembly. The thin and lightweight segments used in the assembly of the modules are designed to maintain or exceed the resolution of existing X-ray telescopes while providing a substantial increase in collecting area. Such thin and delicate X-ray segments are easily distorted, yet they must be aligned to the arcsecond level and retain accurate alignment for many years. The Next Generation X-ray Optic (NGXO) group at NASA Goddard Space Flight Center has designed, assembled, and implemented new hardware and procedures with the short-term goal of aligning three pairs of X-ray segments in a technology demonstration module while maintaining 10 arcsec alignment through environmental testing, as part of the eventual design and construction of a full-sized module capable of housing hundreds of X-ray segments. The recent attempts at multiple-segment-pair alignment and permanent mounting are described, along with an overview of the procedure used. A look at what the next year will bring for the alignment and permanent segment mounting effort illustrates some of the challenges left to overcome before an attempt to populate a full-sized module can begin.

  3. Identification of Matra Region and Overlapping Characters for OCR of Printed Bengali Scripts

    NASA Astrophysics Data System (ADS)

    Goswami, Subhra Sundar

    One of the important reasons for a poor recognition rate in optical character recognition (OCR) systems is error in character segmentation. In the case of Bangla scripts, errors occur for several reasons, including incorrect detection of the matra (headline), over-segmentation, and under-segmentation. We have proposed a robust method for detecting the headline region. The existence of overlapping characters (in under-segmented parts) in scanned printed documents is a major problem in designing an effective character segmentation procedure for OCR systems. In this paper, a predictive algorithm is developed for effectively identifying overlapping characters and then selecting the cut borders for segmentation. Our method can be successfully used to achieve high recognition results.
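A common first approximation of matra detection is a horizontal projection profile: the connecting headline of a Bangla text line produces the densest row of ink. This heuristic sketch is illustrative only; the paper's detector of the matra region is more robust:

```python
import numpy as np

def find_matra_row(binary_glyphs):
    """Locate the matra (headline) row of a binarized Bangla text line as
    the row with the highest ink count in the horizontal projection
    profile (True = ink pixel)."""
    profile = np.asarray(binary_glyphs, bool).sum(axis=1)
    return int(profile.argmax())
```

Character cut-borders would then be sought in the vertical projection of the region below the detected matra.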

  4. Adaptive segmentation of cerebrovascular tree in time-of-flight magnetic resonance angiography.

    PubMed

    Hao, J T; Li, M L; Tang, F L

    2008-01-01

    Accurate segmentation of the human vasculature is an important prerequisite for a number of clinical procedures, such as diagnosis, image-guided neurosurgery, and pre-surgical planning. In this paper, an improved statistical approach to extracting the whole cerebrovascular tree in time-of-flight magnetic resonance angiography is proposed. Firstly, in order to obtain a more accurate segmentation result, a localized observation model is proposed instead of defining the observation model over the entire dataset. Secondly, for the binary segmentation, an improved Iterated Conditional Modes (ICM) algorithm is presented to accelerate the segmentation process. The experimental results showed that the proposed algorithm obtains more satisfactory segmentation results and saves more processing time than conventional approaches.
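The baseline ICM idea can be sketched for binary segmentation with an Ising smoothness prior. This is a generic synchronous-update variant with a global data term, purely for illustration; the paper's contribution is precisely to replace such a global observation model with a localized one:

```python
import numpy as np

def icm_binary(obs, beta=1.0, n_iter=5):
    """Binary segmentation by Iterated Conditional Modes (ICM).

    obs: image scaled to [0, 1]. The data term is the squared distance to
    the class means 0 and 1; the Ising term penalizes disagreement with
    the 4-neighbourhood. All pixels are updated simultaneously each sweep
    (a synchronous variant of classic sequential ICM)."""
    obs = np.asarray(obs, float)
    labels = (obs > 0.5).astype(int)          # threshold initialisation
    for _ in range(n_iter):
        padded = np.pad(labels, 1)            # zero padding at the border
        # Number of 4-neighbours currently labelled 1, for every pixel.
        ones = (padded[:-2, 1:-1] + padded[2:, 1:-1]
                + padded[1:-1, :-2] + padded[1:-1, 2:])
        energy0 = (obs - 0.0) ** 2 + beta * ones        # cost of label 0
        energy1 = (obs - 1.0) ** 2 + beta * (4 - ones)  # cost of label 1
        labels = (energy1 < energy0).astype(int)
    return labels
```

The smoothness term is what removes isolated bright voxels that a plain threshold would misclassify as vessel.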

  5. A LiDAR based analysis of hydraulic hazard mapping

    NASA Astrophysics Data System (ADS)

    Cazorzi, F.; De Luca, A.; Checchinato, A.; Segna, F.; Dalla Fontana, G.

    2012-04-01

    Mapping hydraulic hazard is a delicate procedure, as it involves technical and socio-economic aspects. On the one hand, no dangerous areas should be excluded; on the other hand, it is important not to extend the area placed under use restrictions beyond what is necessary. The availability of a high-resolution topographic survey nowadays allows this task to be faced with innovative procedures, both in the planning (mapping) phase and in the map validation phase. The latter is the object of the present work. It should be stressed that the described procedure is proposed purely as a preliminary analysis based on topography only, and therefore is not intended in any way to replace more sophisticated analysis methods based on hydraulic modelling. The reference elevation model is a combination of the digital terrain model and the digital building model (DTM+DBM). The option of using the standard surface model (DSM) is not viable, as the DSM represents the vegetation canopy as a solid volume. This has the consequence of unrealistically treating the vegetation as a geometric obstacle to water flow. In some cases the construction of the topographic model requires the identification and digitization of the principal breaklines, such as river banks, ditches, and similar natural or artificial structures. The geometrical and topological procedure for the validation of the hydraulic hazard maps consists of two steps. In the first step, the whole area is subdivided into fluvial segments, with length chosen as a reasonable trade-off between the need to keep the hydrographical unit as complete as possible and the need to separate sections of the river bed with significantly different morphology. Each of these segments is a single elongated polygon, whose shape can be quite complex, especially for meandering river sections, where the flow direction (i.e., the potential energy gradient associated with the talweg) is often inverted. 
In the second step, the segments are analysed one by one. Each segment is split into many reaches, so that within any of them the slope of the piezometric line can be approximated as zero. As a consequence, the hydraulic profile (open channel flow) in every reach is assumed horizontal, both downslope and across the cross-section. Each reach can be seen as a polygon, delimited laterally by the hazard mapping boundaries and longitudinally by two successive cross sections, usually orthogonal to the talweg line. Simulating the progressive increase of the river stage, with a horizontal piezometric line, allows the definition of the stage-area and stage-volume relationships. Such relationships are obtained exclusively from the geometric information provided by the high-resolution elevation model. The maximum flooded area resulting from the simulation is finally compared to the potentially floodable area described by the hazard maps, to give a flooding index for every reach. Index values lower than 100% show that the mapped hazard area exceeds the maximum floodable area. Very low index values identify spots where there is a significant incongruity between the hazard map and the topography, and where specific verification is probably needed. The procedure was successfully used for the validation of many hazard maps across Italy.
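The stage-area and stage-volume relationships for one reach follow directly from the elevation grid under the horizontal-water-surface assumption described above. A hypothetical helper, not the authors' code:

```python
import numpy as np

def stage_area_volume(dem, stages, cell_area=1.0):
    """Stage-area and stage-volume curves for one reach, assuming a
    horizontal water surface at each stage over the elevation grid.

    dem: 2D array of ground+building elevations (DTM+DBM) for the reach.
    stages: iterable of water-surface elevations to evaluate."""
    dem = np.asarray(dem, float)
    areas, volumes = [], []
    for s in stages:
        depth = np.clip(s - dem, 0.0, None)   # water depth per cell
        areas.append((depth > 0).sum() * cell_area)
        volumes.append(depth.sum() * cell_area)
    return np.array(areas), np.array(volumes)
```

The flooding index of a reach would then compare the maximum simulated flooded area against the area of the mapped hazard polygon.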

  6. Numerical simulation and optimal design of Segmented Planar Imaging Detector for Electro-Optical Reconnaissance

    NASA Astrophysics Data System (ADS)

    Chu, Qiuhui; Shen, Yijie; Yuan, Meng; Gong, Mali

    2017-12-01

    Segmented Planar Imaging Detector for Electro-Optical Reconnaissance (SPIDER) is a cutting-edge electro-optical imaging technology intended to miniaturize and flatten imaging systems. In this paper, the principle of SPIDER is numerically demonstrated based on partially coherent light theory, and a novel concept of an adjustable-baseline-pairing SPIDER system is proposed. The simulation results verify that imaging quality can be effectively improved by adjusting the Nyquist sampling density, optimizing the baseline pairing method, and increasing the number of spectral channels of the demultiplexer. An adjustable baseline pairing algorithm is therefore established to further enhance image quality, and the optimal design procedure of SPIDER for arbitrary targets is summarized. A SPIDER system with the adjustable baseline pairing method can broaden the applications of SPIDER and reduce cost at the same imaging quality.

  7. A Virtual Reality System for PTCD Simulation Using Direct Visuo-Haptic Rendering of Partially Segmented Image Data.

    PubMed

    Fortmeier, Dirk; Mastmeyer, Andre; Schröder, Julian; Handels, Heinz

    2016-01-01

    This study presents a new visuo-haptic virtual reality (VR) training and planning system for percutaneous transhepatic cholangio-drainage (PTCD) based on partially segmented virtual patient models. We only use partially segmented image data instead of a full segmentation and circumvent the necessity of surface or volume mesh models. Haptic interaction with the virtual patient during virtual palpation, ultrasound probing and needle insertion is provided. Furthermore, the VR simulator includes X-ray and ultrasound simulation for image-guided training. The visualization techniques are GPU-accelerated by implementation in Cuda and include real-time volume deformations computed on the grid of the image data. Computation on the image grid enables straightforward integration of the deformed image data into the visualization components. To provide shorter rendering times, the performance of the volume deformation algorithm is improved by a multigrid approach. To evaluate the VR training system, a user evaluation has been performed and deformation algorithms are analyzed in terms of convergence speed with respect to a fully converged solution. The user evaluation shows positive results with increased user confidence after a training session. It is shown that using partially segmented patient data and direct volume rendering is suitable for the simulation of needle insertion procedures such as PTCD.

  8. Automatic morphometry in Alzheimer's disease and mild cognitive impairment

    PubMed Central

    Heckemann, Rolf A.; Keihaninejad, Shiva; Aljabar, Paul; Gray, Katherine R.; Nielsen, Casper; Rueckert, Daniel; Hajnal, Joseph V.; Hammers, Alexander

    2011-01-01

    This paper presents a novel, publicly available repository of anatomically segmented brain images of healthy subjects as well as patients with mild cognitive impairment and Alzheimer's disease. The underlying magnetic resonance images have been obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. T1-weighted screening and baseline images (1.5 T and 3 T) have been processed with the multi-atlas based MAPER procedure, resulting in labels for 83 regions covering the whole brain in 816 subjects. Selected segmentations were subjected to visual assessment. The segmentations are self-consistent, as evidenced by strong agreement between segmentations of paired images acquired at different field strengths (Jaccard coefficient: 0.802 ± 0.0146). Morphometric comparisons between diagnostic groups (normal; stable mild cognitive impairment; mild cognitive impairment with progression to Alzheimer's disease; Alzheimer's disease) showed highly significant group differences for individual regions, the majority of which were located in the temporal lobe. Additionally, significant effects were seen in the parietal lobe. Increased left/right asymmetry was found in posterior cortical regions. An automatically derived white-matter hypointensities index was found to be a suitable means of quantifying white-matter disease. This repository of segmentations is a potentially valuable resource to researchers working with ADNI data. PMID:21397703
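
    The self-consistency check above relies on the Jaccard coefficient, the ratio of intersection to union of two label sets. A minimal sketch on toy 2-D masks (not the MAPER labels themselves):

```python
# Jaccard coefficient between two segmentations represented as pixel sets
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# toy example: two 10x10 square masks offset by one column
a = {(x, y) for x in range(10) for y in range(10)}
b = {(x, y) for x in range(1, 11) for y in range(10)}
print(round(jaccard(a, b), 3))  # 90/110 -> 0.818
```

    A value near 0.8, as here, already indicates substantial overlap; the 0.802 reported above was obtained over whole-brain label sets.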

  9. Crew procedures and workload of retrofit concepts for microwave landing system

    NASA Technical Reports Server (NTRS)

    Summers, Leland G.; Jonsson, Jon E.

    1989-01-01

    Crew procedures and workload for Microwave Landing Systems (MLS) that could be retrofitted into existing transport aircraft were evaluated. Two MLS receiver concepts were developed. One is capable of capturing a runway centerline and the other is capable of capturing a segmented approach path. Crew procedures were identified and crew task analyses were performed using each concept. Crew workload comparisons were made between the MLS concepts and an ILS baseline using a task-timeline workload model. Workload indexes were obtained for each scenario. The results showed that workload was comparable to the ILS baseline for the MLS centerline capture concept, but significantly higher for the segmented path capture concept.

  10. Different methods of image segmentation in the process of meat marbling evaluation

    NASA Astrophysics Data System (ADS)

    Ludwiczak, A.; Ślósarz, P.; Lisiak, D.; Przybylak, A.; Boniecki, P.; Stanisz, M.; Koszela, K.; Zaborowicz, M.; Przybył, K.; Wojcieszak, D.; Janczak, D.; Bykowska, M.

    2015-07-01

    The assessment of meat marbling from digital images is increasingly popular, as computer vision tools become more and more advanced. However, when muscle cross-sections are used as the data source for evaluating the marbling level, several problems remain. There is a need for an accurate method that would facilitate this evaluation procedure and increase its accuracy. The presented research compared different image segmentation tools with regard to their usefulness for evaluating meat marbling on anatomical muscle cross-sections. This study is an initial trial in the presented field of research and an introduction to the processing and analysis of ultrasonic images.
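
    One of the simplest segmentation tools that could enter such a comparison is global histogram thresholding. As an illustration only (the specific methods compared in the study are not named here), Otsu's method picks the threshold that maximizes between-class variance of the gray-level histogram:

```python
# Otsu's method: choose the threshold maximizing between-class variance
def otsu(hist):
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(len(hist)):
        w0 += hist[t]            # pixels at or below candidate threshold t
        if w0 == 0:
            continue
        w1 = total - w0          # pixels above t
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0           # mean of the lower class
        m1 = (sum_all - sum0) / w1  # mean of the upper class
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# hypothetical bimodal 8-bin histogram: dark muscle vs bright marbling fat
hist = [40, 30, 5, 0, 0, 6, 25, 20]
print(otsu(hist))  # -> 2 (boundary between the two histogram modes)
```

    On a real cross-section image the histogram would have 256 bins, but the search is identical.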

  11. The virtual craniofacial patient: 3D jaw modeling and animation.

    PubMed

    Enciso, Reyes; Memon, Ahmed; Fidaleo, Douglas A; Neumann, Ulrich; Mah, James

    2003-01-01

    In this paper, we present new developments in the area of 3D human jaw modeling and animation. CT (Computed Tomography) scans have traditionally been used to evaluate patients with dental implants, assess tumors, cysts, fractures and surgical procedures. More recently this data has been utilized to generate models. Researchers have reported semi-automatic techniques to segment and model the human jaw from CT images and manually segment the jaw from MRI images. Recently opto-electronic and ultrasonic-based systems (JMA from Zebris) have been developed to record mandibular position and movement. In this research project we introduce: (1) automatic patient-specific three-dimensional jaw modeling from CT data and (2) three-dimensional jaw motion simulation using jaw tracking data from the JMA system (Zebris).

  12. NSEG: A segmented mission analysis program for low and high speed aircraft. Volume 3: Demonstration problems

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Rozendaal, H. L.

    1977-01-01

    Program NSEG is a rapid mission analysis code based on the use of approximate flight path equations of motion. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics are specified in tabular form. In addition to its mission performance calculation capabilities, the code also contains extensive flight envelope performance mapping capabilities. For example, rate-of-climb, turn rates, and energy maneuverability parameter values may be mapped in the Mach-altitude plane. Approximate takeoff and landing analyses are also performed. At high speeds, centrifugal lift effects are accounted for. Extensive turbojet and ramjet engine scaling procedures are incorporated in the code.

  13. Laparoscopic ureteral reimplantation with Boari flap for the management of long-segment ureteral defect: A case series with review of the literature

    PubMed Central

    Bansal, Ankur; Sinha, Rahul Janak; Jhanwar, Ankur; Prakash, Gaurav; Purkait, Bimalesh; Singh, Vishwajeet

    2017-01-01

    Objective The incidence of ureteral stricture is showing a rising trend due to the increased use of laparoscopic and upper urinary tract endoscopic procedures. The Boari flap is the preferred method of repairing long-segment ureteral defects of 8–12 cm. The procedure has evolved from the classical open (transperitoneal and retroperitoneal) method to laparoscopic surgery and, recently, robotic surgery. The laparoscopic approach is cosmetically appealing, less morbid and associated with a shorter hospital stay. In this case series, we report our experience of performing laparoscopic ureteral reimplantation with Boari flap in 3 patients. Material and methods This prospective study was conducted between January 2011 and December 2014. Patients with a long-segment ureteral defect who had undergone laparoscopic Boari flap reconstruction were included in the study. The outcome of laparoscopic ureteral reimplantation with Boari flap for the management of long-segment ureteral defect was evaluated. Results The procedure was performed on 3 patients; the male to female ratio was 1:2. One patient had bilateral and the other two patients had left ureteral strictures. The mean length of ureteral stricture was 8.6 cm (range 8.2–9.2 cm). The mean operative time was 206 min (range 190–220 min). The average estimated blood loss was 100 mL (range 90–110 mL) and the mean hospital stay was 6 days (range 5–7 days). The mean follow-up was 19 months (range 17–22 months). None of the patients experienced any procedure-related complication in the perioperative period. Conclusion Laparoscopic ureteral reimplantation with Boari flap is safe and feasible and has excellent long-term results. However, the procedure is technically challenging and requires extensive experience in intracorporeal suturing. PMID:28861304


  15. Fully automated tumor segmentation based on improved fuzzy connectedness algorithm in brain MR images.

    PubMed

    Harati, Vida; Khayati, Rasoul; Farzan, Abdolreza

    2011-07-01

    Uncontrollable and unlimited cell growth leads to tumor genesis in the brain. If brain tumors are not diagnosed early and cured properly, they can cause permanent brain damage or even death. As in all treatment methods, information about tumor position and size is important for successful treatment; hence, an accurate and fully automated method for providing this information to physicians is necessary. A fully automatic and accurate method for tumor region detection and segmentation in brain magnetic resonance (MR) images is suggested. The presented approach is an improved, scale-based fuzzy connectedness (FC) algorithm in which the seed point is selected automatically. The algorithm is independent of the tumor type in terms of its pixel intensity. Tumor segmentation evaluation results based on similarity criteria (similarity index (SI) 92.89%, overlap fraction (OF) 91.75%, and extra fraction (EF) 3.95%) indicate higher performance of the proposed approach compared with conventional methods, especially in MR images of tumor regions with low contrast. Thus, the suggested method is useful for increasing the ability to automatically estimate tumor size and position in brain tissue, which provides more accurate planning of the required surgery, chemotherapy, and radiotherapy procedures. Copyright © 2011 Elsevier Ltd. All rights reserved.
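
    The three similarity criteria quoted above can be computed directly from binary masks. A sketch using one common set of definitions (SI as the Dice ratio, OF and EF measured against the reference mask; the toy masks below are hypothetical, not the study's data):

```python
# Similarity metrics between a segmented region and a reference region,
# each represented as a set of pixel coordinates
def seg_metrics(seg, ref):
    seg, ref = set(seg), set(ref)
    inter = seg & ref
    si = 2 * len(inter) / (len(seg) + len(ref))   # similarity index (Dice)
    of = len(inter) / len(ref)                    # overlap fraction
    ef = len(seg - ref) / len(ref)                # extra fraction
    return si, of, ef

ref = {(x, y) for x in range(10) for y in range(10)}      # 100-px reference
seg = {(x, y) for x in range(1, 11) for y in range(10)}   # shifted by 1 column
si, of, ef = seg_metrics(seg, ref)
print(round(si, 2), of, ef)  # 0.9 0.9 0.1
```

    A high SI/OF with a low EF, as in the figures reported above, means the segmentation covers the reference without spilling far outside it.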

  16. Automatic firearm class identification from cartridge cases

    NASA Astrophysics Data System (ADS)

    Kamalakannan, Sridharan; Mann, Christopher J.; Bingham, Philip R.; Karnowski, Thomas P.; Gleason, Shaun S.

    2011-03-01

    We present a machine vision system for automatic identification of the class of firearms by extracting and analyzing two significant properties from spent cartridge cases, namely the Firing Pin Impression (FPI) and the Firing Pin Aperture Outline (FPAO). Within the framework of the proposed machine vision system, a white light interferometer is employed to image the head of the spent cartridge cases. As a first step of the algorithmic procedure, the Primer Surface Area (PSA) is detected using a circular Hough transform. Once the PSA is detected, a customized statistical region-based parametric active contour model is initialized around the center of the PSA and evolved to segment the FPI. Subsequently, the scaled version of the segmented FPI is used to initialize a customized Mumford-Shah based level set model in order to segment the FPAO. Once the shapes of FPI and FPAO are extracted, a shape-based level set method is used in order to compare these extracted shapes to an annotated dataset of FPIs and FPAOs from varied firearm types. A total of 74 cartridge case images non-uniformly distributed over five different firearms are processed using the aforementioned scheme and the promising nature of the results (95% classification accuracy) demonstrate the efficacy of the proposed approach.
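
    The first algorithmic step above, detecting the circular PSA with a Hough transform, can be sketched in a few lines. This toy version assumes a known radius and votes on a coarse accumulator; a production system would search over radii and use gradient information:

```python
import math

# synthetic edge map: points on a circle of radius 3 centered at (10, 10)
H, W, R = 21, 21, 3
edges = set()
for t in range(360):
    x = round(10 + R * math.cos(math.radians(t)))
    y = round(10 + R * math.sin(math.radians(t)))
    edges.add((x, y))

# circular Hough transform: each edge point votes for candidate centers
# lying at distance R from it
acc = {}
for (x, y) in edges:
    for t in range(360):
        cx = round(x - R * math.cos(math.radians(t)))
        cy = round(y - R * math.sin(math.radians(t)))
        if 0 <= cx < W and 0 <= cy < H:
            acc[(cx, cy)] = acc.get((cx, cy), 0) + 1

center = max(acc, key=acc.get)
print(center)  # should land on or next to the true center (10, 10)
```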

  17. Graph cuts with invariant object-interaction priors: application to intervertebral disc segmentation.

    PubMed

    Ben Ayed, Ismail; Punithakumar, Kumaradevan; Garvin, Gregory; Romano, Walter; Li, Shuo

    2011-01-01

    This study investigates novel object-interaction priors for graph cut image segmentation with application to intervertebral disc delineation in magnetic resonance (MR) lumbar spine images. The algorithm optimizes an original cost function which constrains the solution with learned prior knowledge about the geometric interactions between different objects in the image. Based on a global measure of similarity between distributions, the proposed priors are intrinsically invariant with respect to translation and rotation. We further introduce a scale variable from which we derive an original fixed-point equation (FPE), thereby achieving scale-invariance with only a few fast computations. The proposed priors relax the need for costly pose estimation (or registration) procedures and large training sets (we used a single subject for training), and can tolerate shape deformations, unlike template-based priors. Our formulation leads to an NP-hard problem which does not afford a form directly amenable to graph cut optimization. We proceeded to a relaxation of the problem via an auxiliary function, thereby obtaining a nearly real-time solution with few graph cuts. Quantitative evaluations over 60 intervertebral discs acquired from 10 subjects demonstrated that the proposed algorithm yields a high correlation with independent manual segmentations by an expert. We further demonstrate experimentally the invariance of the proposed geometric attributes. This supports the fact that a single subject is sufficient for training our algorithm, and confirms the relevance of the proposed priors to disc segmentation.

  18. First magnetic resonance imaging-guided aortic stenting and cava filter placement using a polyetheretherketone-based magnetic resonance imaging-compatible guidewire in swine: proof of concept.

    PubMed

    Kos, Sebastian; Huegli, Rolf; Hofmann, Eugen; Quick, Harald H; Kuehl, Hilmar; Aker, Stephanie; Kaiser, Gernot M; Borm, Paul J A; Jacob, Augustinus L; Bilecen, Deniz

    2009-05-01

    The purpose of this study was to demonstrate feasibility of percutaneous transluminal aortic stenting and cava filter placement under magnetic resonance imaging (MRI) guidance exclusively using a polyetheretherketone (PEEK)-based MRI-compatible guidewire. Percutaneous transluminal aortic stenting and cava filter placement were performed in 3 domestic swine. Procedures were performed under MRI-guidance in an open-bore 1.5-T scanner. The applied 0.035-inch guidewire has a PEEK core reinforced by fibres, floppy tip, hydrophilic coating, and paramagnetic markings for passive visualization. Through an 11F sheath, the guidewire was advanced into the abdominal (swine 1) or thoracic aorta (swine 2), and the stents were deployed. The guidewire was advanced into the inferior vena cava (swine 3), and the cava filter was deployed. Postmortem autopsy was performed. Procedural success, guidewire visibility, pushability, and stent support were qualitatively assessed by consensus. Procedure times were documented. Guidewire guidance into the abdominal and thoracic aortas and the inferior vena cava was successful. Stent deployments were successful in the abdominal (swine 1) and thoracic (swine 2) segments of the descending aorta. Cava filter positioning and deployment was successful. Autopsy documented good stent and filter positioning. Guidewire visibility through applied markers was rated acceptable for aortic stenting and good for venous filter placement. Steerability, pushability, and device support were good. The PEEK-based guidewire allows both percutaneous MRI-guided aortic stenting in the thoracic and abdominal segments of the descending aorta and filter placement in the inferior vena cava with acceptable to good device visibility, and offers good steerability, pushability, and device support.


  20. Interactive vs. automatic ultrasound image segmentation methods for staging hepatic lipidosis.

    PubMed

    Weijers, Gert; Starke, Alexander; Haudum, Alois; Thijssen, Johan M; Rehage, Jürgen; De Korte, Chris L

    2010-07-01

    The aim of this study was to test the hypothesis that automatic segmentation of vessels in ultrasound (US) images can produce similar or better results in grading fatty livers than interactive segmentation. A study was performed in postpartum dairy cows (N=151), as an animal model of human fatty liver disease, to test this hypothesis. Five transcutaneous and five intraoperative US liver images were acquired in each animal and a liver biopsy was taken. In liver tissue samples, triacylglycerol (TAG) was measured by biochemical analysis and hepatic diseases other than hepatic lipidosis were excluded by histopathologic examination. Ultrasonic tissue characterization (UTC) parameters--mean echo level, standard deviation (SD) of echo level, signal-to-noise ratio (SNR), residual attenuation coefficient (ResAtt) and axial and lateral speckle size--were derived using a computer-aided US (CAUS) protocol and software package. First, the liver tissue was interactively segmented by two observers. With increasing fat content, fewer hepatic vessels were visible in the ultrasound images and, therefore, a smaller proportion of the liver needed to be excluded from these images. Automatic-segmentation algorithms were implemented and it was investigated whether better results could be achieved than with the subjective and time-consuming interactive-segmentation procedure. The automatic-segmentation algorithms were based on both fixed and adaptive thresholding techniques in combination with a 'speckle'-shaped moving-window exclusion technique. All data were analyzed with and without postprocessing as contained in CAUS and with different automated-segmentation techniques. This enabled us to study the effect of the applied postprocessing steps on single and multiple linear regressions of the various UTC parameters with TAG. Improved correlations for all US parameters were found by using automatic-segmentation techniques. Stepwise multiple linear-regression formulas were derived and used to predict TAG level in the liver. Receiver-operating-characteristic (ROC) analysis was applied to assess the performance and area under the curve (AUC) of predicting TAG and to compare the sensitivity and specificity of the methods. Best speckle-size estimates and overall performance (R2 = 0.71, AUC = 0.94) were achieved by using an SNR-based adaptive automatic-segmentation method (used TAG threshold: 50 mg/g liver wet weight). Automatic segmentation is thus feasible and profitable.
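
    The ROC analysis above reduces to a rank statistic: the AUC equals the probability that a randomly chosen positive case (TAG above threshold) scores higher than a randomly chosen negative one. A stdlib sketch with made-up scores, not the study's data:

```python
# ROC AUC via the pairwise rank-sum (Mann-Whitney) statistic
def roc_auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0      # positive ranked above negative
            elif p == n:
                wins += 0.5      # ties count half
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4]   # hypothetical predicted TAG scores
labels = [1, 1, 0, 1, 0, 0]                # 1 = above the TAG threshold
print(roc_auc(scores, labels))  # 8 of 9 pairs ordered correctly
```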

  1. Automated main-chain model building by template matching and iterative fragment extension.

    PubMed

    Terwilliger, Thomas C

    2003-01-01

    An algorithm for the automated macromolecular model building of polypeptide backbones is described. The procedure is hierarchical. In the initial stages, many overlapping polypeptide fragments are built. In subsequent stages, the fragments are extended and then connected. Identification of the locations of helical and beta-strand regions is carried out by FFT-based template matching. Fragment libraries of helices and beta-strands from refined protein structures are then positioned at the potential locations of helices and strands and the longest segments that fit the electron-density map are chosen. The helices and strands are then extended using fragment libraries consisting of sequences three amino acids long derived from refined protein structures. The resulting segments of polypeptide chain are then connected by choosing those which overlap at two or more C(alpha) positions. The fully automated procedure has been implemented in RESOLVE and is capable of model building at resolutions as low as 3.5 Å. The algorithm is useful for building a preliminary main-chain model that can serve as a basis for refinement and side-chain addition.
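
    FFT-based template matching, the technique used above to locate helices and strands, scores every offset at once by evaluating the cross-correlation in the frequency domain. A one-dimensional sketch on a synthetic density profile (not RESOLVE's actual code):

```python
import numpy as np

# embed a small template in a noisy 1-D "density" profile
rng = np.random.default_rng(0)
signal = rng.normal(0, 0.1, 256)
template = np.array([0.2, 0.8, 1.0, 0.8, 0.2])
signal[100:105] += template          # true location: offset 100

# cross-correlation via the FFT convolution theorem:
# corr[k] = sum_m signal[k + m] * template[m]
n = len(signal)
corr = np.fft.irfft(np.fft.rfft(signal) * np.conj(np.fft.rfft(template, n)), n)
print(int(np.argmax(corr)))  # 100
```

    One pair of FFTs replaces a sliding-window dot product at every offset, which is what makes exhaustive template searches over a 3-D map tractable.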

  2. Novel Strategy to Evaluate Infectious Salmon Anemia Virus Variants by High Resolution Melting

    PubMed Central

    Sepúlveda, Dagoberto; Cárdenas, Constanza; Carmona, Marisela; Marshall, Sergio H.

    2012-01-01

    Genetic variability is a key problem in the prevention and therapy of RNA-based virus infections. Infectious Salmon Anemia virus (ISAv) is an RNA virus which aggressively attacks salmon producing farms worldwide and in particular in Chile. Just as with most of the Orthomyxovirus, ISAv displays high variability in its genome which is reflected by a wider infection potential, thus hampering management and prevention of the disease. Although a number of widely validated detection procedures exist, in this case there is a need of a more complex approach to the characterization of virus variability. We have adapted a procedure of High Resolution Melting (HRM) as a fine-tuning technique to fully differentiate viral variants detected in Chile and projected to other infective variants reported elsewhere. Out of the eight viral coding segments, the technique was adapted using natural Chilean variants for two of them, namely segments 5 and 6, recognized as virulence-associated factors. Our work demonstrates the versatility of the technique as well as its superior resolution capacity compared with standard techniques currently in use as key diagnostic tools. PMID:22719837

  3. [Presurgical alveolar molding using computer aided design in infants with unilateral complete cleft lip and palate].

    PubMed

    Zgong, Xin; Yu, Quan; Yu, Zhe-yuan; Wang, Guo-min; Qian, Yu-fen

    2012-04-01

    To establish a new method of presurgical alveolar molding using computer aided design (CAD) in infants with complete unilateral cleft lip and palate (UCLP). Ten infants with complete UCLP were recruited. A maxillary impression was taken at the first examination after birth. The study model was scanned by a non-contact three-dimensional laser scanner, and a digital model was constructed and analyzed to simulate the alveolar molding procedure with reverse engineering software (RapidForm 2006). The digital geometrical data were exported to produce a scale model using rapid prototyping technology. The whole set of appliances was fabricated based on these solid models. The digital model could be viewed and measured from any direction in the software. By the end of the presurgical nasoalveolar molding (NAM) treatment, before surgical lip repair, the cleft was narrowed and the malformed alveolar segments were aligned normally, significantly improving nasal symmetry and nostril shape. Presurgical NAM using CAD could simplify the treatment procedure and help predict the treatment objective, enabling precise control of the force and direction of alveolar segment movement.

  4. Estimation procedure of the efficiency of the heat network segment

    NASA Astrophysics Data System (ADS)

    Polivoda, F. A.; Sokolovskii, R. I.; Vladimirov, M. A.; Shcherbakov, V. P.; Shatrov, L. A.

    2017-07-01

    An extensive city heat network contains many segments, each operating with a different efficiency of heat-energy transfer. This work proposes an original technical approach: the energy-efficiency function of a heat-network segment is evaluated by interpreting two hyperbolic functions in the form of a transcendental equation. In essence, the problem studied is how the efficiency of the heat network changes with ambient temperature. Using methods of functional analysis, criterion dependences were derived for evaluating the efficiency of a given segment of the heat network and for finding the parameters of the most efficient control of heat supply to remote users. In general, the efficiency function of a heat-network segment is represented by a multidimensional surface, which allows it to be illustrated graphically. It was shown that the inverse problem can be solved as well: the required flow rate of the heating agent and its temperature can be found from a specified segment efficiency and ambient temperature, and requirements on heat insulation and pipe diameters can be formulated. The results were obtained in a strictly analytical form, which allows the derived functional dependences to be examined for extremums (maximums) under given external parameters. It is concluded that the procedure is expedient in two practically important cases: for an existing (built) network, where only the flow rate and temperature of the heating agent in the pipe can be changed, and for a projected (under construction) network, where the material parameters of the network can still be modified. The procedure allows the diameter and length of the pipes, the type of insulation, etc., to be refined. Pipe length may be treated as an independent parameter of the calculation; it is optimized according to other, economic, criteria specific to the project.
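
    A transcendental equation in hyperbolic functions, as described above, generally has no closed-form solution, so a numerical root finder is the natural tool. As a purely illustrative stand-in (the paper's actual equation is not reproduced here), bisection applied to a simple hyperbolic-function equation:

```python
import math

# bisection: shrink a sign-changing bracket [lo, hi] until it is tiny
def bisect(f, lo, hi, tol=1e-10):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:   # root lies in the left half
            hi = mid
        else:                     # root lies in the right half
            lo = mid
    return 0.5 * (lo + hi)

# illustrative transcendental equation: tanh(x) = x / 2, nonzero root
root = bisect(lambda x: math.tanh(x) - x / 2, 1.0, 3.0)
print(round(root, 4))  # ~1.915
```

    The same scheme works for any efficiency equation once a bracketing interval with a sign change is known.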

  5. Internal vibrations of a molecule consisting of rigid segments. I - Non-interacting internal vibrations

    NASA Technical Reports Server (NTRS)

    He, X. M.; Craven, B. M.

    1993-01-01

    For molecular crystals, a procedure is proposed for interpreting experimentally determined atomic mean square anisotropic displacement parameters (ADPs) in terms of the overall molecular vibration together with internal vibrations with the assumption that the molecule consists of a set of linked rigid segments. The internal librations (molecular torsional or bending modes) are described using the variable internal coordinates of the segmented body. With this procedure, the experimental ADPs obtained from crystal structure determinations involving six small molecules (sym-trinitrobenzene, adenosine, tetra-cyanoquinodimethane, benzamide, alpha-cyanoacetic acid hydrazide and N-acetyl-L-tryptophan methylamide) have been analyzed. As a consequence, vibrational corrections to the bond lengths and angles of the molecule are calculated as well as the frequencies and force constants for each internal torsional or bending vibration.

  6. A semi-automatic method for left ventricle volume estimate: an in vivo validation study

    NASA Technical Reports Server (NTRS)

    Corsi, C.; Lamberti, C.; Sarti, A.; Saracino, G.; Shiota, T.; Thomas, J. D.

    2001-01-01

    This study aims to validate the left ventricular (LV) volume estimates obtained by processing volumetric data with a segmentation model based on the level set technique. The validation has been performed by comparing real-time volumetric echo data (RT3DE) and magnetic resonance (MRI) data, following a defined validation protocol. The protocol was applied to twenty-four estimates (range 61-467 ml) obtained from normal and pathologic subjects who underwent both RT3DE and MRI. A statistical analysis was performed on each estimate and on clinical parameters such as stroke volume (SV) and ejection fraction (EF). Assuming MRI estimates (x) as a reference, an excellent correlation was found with the volumes measured by the segmentation procedure (y) (y=0.89x + 13.78, r=0.98). The mean error on SV was 8 ml and the mean error on EF was 2%. This study demonstrates that the segmentation technique is reliably applicable to human hearts in clinical practice.
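
    A validation of this kind reduces to a least-squares fit of the paired estimates and a Pearson correlation. A stdlib sketch on hypothetical MRI/RT3DE volume pairs (the numbers below are invented, not the study's data):

```python
import math

# least-squares fit y = a*x + b and Pearson correlation r
def fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx                     # slope
    b = my - a * mx                   # intercept
    r = sxy / math.sqrt(sxx * syy)    # correlation coefficient
    return a, b, r

# hypothetical paired LV volumes (ml): MRI reference vs segmentation estimate
mri = [61, 120, 180, 250, 320, 467]
seg = [68, 118, 175, 240, 300, 430]
a, b, r = fit(mri, seg)
print(round(a, 2), round(b, 1), round(r, 4))
```

    A slope near 1, a small intercept, and r close to 1, as in the reported y=0.89x + 13.78 with r=0.98, indicate agreement with the reference method.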

  7. A deep learning approach for real time prostate segmentation in freehand ultrasound guided biopsy.

    PubMed

    Anas, Emran Mohammad Abu; Mousavi, Parvin; Abolmaesumi, Purang

    2018-06-01

    Targeted prostate biopsy, incorporating multi-parametric magnetic resonance imaging (mp-MRI) and its registration with ultrasound, is currently the state-of-the-art in prostate cancer diagnosis. The registration process in most targeted biopsy systems today relies heavily on accurate segmentation of ultrasound images. Automatic or semi-automatic segmentation is typically performed offline prior to the start of the biopsy procedure. In this paper, we present a deep neural network based real-time prostate segmentation technique during the biopsy procedure, hence paving the way for dynamic registration of mp-MRI and ultrasound data. In addition to using convolutional networks for extracting spatial features, the proposed approach employs recurrent networks to exploit the temporal information among a series of ultrasound images. One of the key contributions in the architecture is to use residual convolution in the recurrent networks to improve optimization. We also exploit recurrent connections within and across different layers of the deep networks to maximize the utilization of the temporal information. Furthermore, we perform dense and sparse sampling of the input ultrasound sequence to make the network robust to ultrasound artifacts. Our architecture is trained on 2,238 labeled transrectal ultrasound images, with an additional 637 and 1,017 unseen images used for validation and testing, respectively. We obtain a mean Dice similarity coefficient of 93%, a mean surface distance error of 1.10 mm and a mean Hausdorff distance error of 3.0 mm. A comparison of the reported results with those of a state-of-the-art technique indicates statistically significant improvement achieved by the proposed approach. Copyright © 2018 Elsevier B.V. All rights reserved.
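
    Two of the metrics reported above, the Dice similarity coefficient and the Hausdorff distance, can be sketched on toy point sets (not the prostate contours themselves):

```python
import math

# Dice coefficient: overlap of two masks relative to their total size
def dice(a, b):
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

# symmetric Hausdorff distance: worst-case nearest-neighbor gap
def hausdorff(a, b):
    def directed(p, q):
        return max(min(math.dist(u, v) for v in q) for u in p)
    return max(directed(a, b), directed(b, a))

a = [(x, y) for x in range(5) for y in range(5)]       # 5x5 mask
b = [(x, y) for x in range(1, 6) for y in range(5)]    # shifted by 1 column
print(round(dice(a, b), 2), hausdorff(a, b))  # 0.8 1.0
```

    Dice summarizes bulk overlap, while the Hausdorff distance exposes the single worst boundary error, which is why both are usually reported together.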

  8. Line drawing extraction from gray level images by feature integration

    NASA Astrophysics Data System (ADS)

    Yoo, Hoi J.; Crevier, Daniel; Lepage, Richard; Myler, Harley R.

    1994-10-01

    We describe procedures that extract line drawings from digitized gray level images, without use of domain knowledge, by modeling preattentive and perceptual organization functions of the human visual system. First, edge points are identified by standard low-level processing, based on the Canny edge operator. Edge points are then linked into single-pixel-thick straight-line segments and circular arcs: this operation serves both to filter out isolated and highly irregular segments, and to lump the remaining points into a smaller number of structures for manipulation by later stages of processing. The next stages consist of linking the segments into a set of closed boundaries, which is the system's definition of a line drawing. According to the principles of Gestalt psychology, closure allows us to organize the world by filling in the gaps in a visual stimulus so as to perceive whole objects instead of disjoint parts. To achieve such closure, the system selects particular features or combinations of features by methods akin to those of preattentive processing in humans: features include gaps, pairs of straight or curved parallel lines, L- and T-junctions, pairs of symmetrical lines, and the orientation and length of single lines. These preattentive features are grouped into higher-level structures according to the principles of proximity, similarity, closure, symmetry, and feature conjunction. Achieving closure may require supplying missing segments linking contour concavities. Choices are made between competing structures on the basis of their overall compliance with the principles of closure and symmetry. Results include clean line drawings of curvilinear manufactured objects. The procedures described are part of a system called VITREO (viewpoint-independent 3-D recognition and extraction of objects).
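The Gestalt "proximity" grouping step described above can be illustrated with a toy sketch (not the VITREO implementation): segments whose endpoints lie within a gap threshold are greedily chained together.

```python
import math

def link_by_proximity(segments, max_gap):
    """Greedily chain line segments whose endpoints lie within max_gap.

    Each segment is ((x1, y1), (x2, y2)); a segment joins an existing chain
    if its start point is close to the chain's current end point, otherwise
    it seeds a new chain. Illustrative only; the real system also uses
    collinearity, symmetry, and junction cues.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    chains = []
    for seg in segments:
        for chain in chains:
            if dist(chain[-1][1], seg[0]) <= max_gap:
                chain.append(seg)
                break
        else:
            chains.append([seg])
    return chains

segments = [((0, 0), (1, 0)), ((1.1, 0), (2, 0)), ((5, 5), (6, 5))]
chains = link_by_proximity(segments, max_gap=0.2)
```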

  9. Inferring Aquifer Transmissivity from River Flow Data

    NASA Astrophysics Data System (ADS)

    Trichakis, Ioannis; Pistocchi, Alberto

    2016-04-01

    Daily streamflow data is the measurable result of many different hydrological processes within a basin; therefore, it includes information about all these processes. In this work, recession analysis applied to a pan-European dataset of measured streamflow was used to estimate hydrogeological parameters of the aquifers that contribute to the stream flow. Under the assumption that base-flow in times of no precipitation is mainly due to groundwater, we estimated parameters of European shallow aquifers connected with the stream network, and identified on the basis of the 1:1,500,000 scale Hydrogeological map of Europe. To this end, Master recession curves (MRCs) were constructed based on the RECESS model of the USGS for 1601 stream gauge stations across Europe. The process consists of three stages. Firstly, the model analyses the stream flow time-series. Then, it uses regression to calculate the recession index. Finally, it infers characteristics of the aquifer from the recession index. During time-series analysis, the model identifies those segments, where the number of successive recession days is above a certain threshold. The reason for this pre-processing lies in the necessity for an adequate number of points when performing regression at a later stage. The recession index derives from the semi-logarithmic plot of stream flow over time, and the post processing involves the calculation of geometrical parameters of the watershed through a GIS platform. The program scans the full stream flow dataset of all the stations. For each station, it identifies the segments with continuous recession that exceed a predefined number of days. When the algorithm finds all the segments of a certain station, it analyses them and calculates the best linear fit between time and the logarithm of flow. The algorithm repeats this procedure for the full number of segments, thus it calculates many different values of recession index for each station. 
After the program has found all the recession segments, it performs calculations to determine the expression for the MRC. Further processing of the MRCs can yield estimates of transmissivity or response time representative of the aquifers upstream of the station. These estimates can be useful for large scale (e.g. continental) groundwater modelling. The above procedure allowed calculating values of transmissivity for a large share of European aquifers, ranging from Tmin = 4.13E-04 m²/d to Tmax = 8.12E+03 m²/d, with an average value Taverage = 9.65E+01 m²/d. These results are in line with the literature, indicating that the procedure may provide realistic results for large-scale groundwater modelling. In this contribution we present the results in the perspective of their application for the parameterization of a pan-European bi-dimensional shallow groundwater flow model.
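The recession analysis described above can be sketched as follows: find the longest run of successive declining-flow days, regress log10(flow) on time over that run, and report the recession index K (days per log cycle of decline, K = -1/slope). This is a minimal single-station sketch in the spirit of the USGS RECESS procedure, not its implementation; the minimum-run threshold and the synthetic flow series are illustrative.

```python
import math

def recession_index(flows, min_days=5):
    """Recession index K from the longest continuous recession segment."""
    # Locate the longest run of strictly declining daily flows.
    best, start = (0, 0), 0
    for i in range(1, len(flows)):
        if flows[i] >= flows[i - 1]:          # recession interrupted
            if i - start > best[1] - best[0]:
                best = (start, i)
            start = i
    if len(flows) - start > best[1] - best[0]:
        best = (start, len(flows))
    s, e = best
    if e - s < min_days:                       # too short for a stable fit
        return None
    # Linear regression of log10(Q) against day number over the segment.
    t = list(range(e - s))
    y = [math.log10(q) for q in flows[s:e]]
    n = len(t)
    mt, my = sum(t) / n, sum(y) / n
    slope = (sum((ti - mt) * (yi - my) for ti, yi in zip(t, y))
             / sum((ti - mt) ** 2 for ti in t))
    return -1.0 / slope                        # days per log cycle

# Synthetic exponential recession: one log cycle every 20 days
flows = [100.0 * 10 ** (-d / 20.0) for d in range(10)]
k = recession_index(flows)
```

Transmissivity estimates then follow from K together with watershed geometry (aquifer half-width, saturated thickness), which is the GIS post-processing step mentioned above.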

  10. Implementation and evaluation of a new workflow for registration and segmentation of pulmonary MRI data for regional lung perfusion assessment.

    PubMed

    Böttger, T; Grunewald, K; Schöbinger, M; Fink, C; Risse, F; Kauczor, H U; Meinzer, H P; Wolf, Ivo

    2007-03-07

    Recently it has been shown that regional lung perfusion can be assessed using time-resolved contrast-enhanced magnetic resonance (MR) imaging. Quantification of the perfusion images has been attempted, based on definition of small regions of interest (ROIs). Use of complete lung segmentations instead of ROIs could possibly increase quantification accuracy. Due to the low signal-to-noise ratio, automatic segmentation algorithms cannot be applied. On the other hand, manual segmentation of the lung tissue is very time consuming and can become inaccurate, as the borders of the lung to adjacent tissues are not always clearly visible. We propose a new workflow for semi-automatic segmentation of the lung from additionally acquired morphological HASTE MR images. First the lung is delineated semi-automatically in the HASTE image. Next the HASTE image is automatically registered with the perfusion images. Finally, the transformation resulting from the registration is used to align the lung segmentation from the morphological dataset with the perfusion images. We evaluated rigid, affine and locally elastic transformations, suitable optimizers and different implementations of mutual information (MI) metrics to determine the best possible registration algorithm. We located the shortcomings of the registration procedure and under which conditions automatic registration will succeed or fail. Segmentation results were evaluated using overlap and distance measures. Integration of the new workflow reduces the time needed for post-processing of the data, simplifies the perfusion quantification and reduces interobserver variability in the segmentation process. In addition, the matched morphological data set can be used to identify morphologic changes as the source for the perfusion abnormalities.
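The mutual information (MI) metric evaluated for the HASTE-to-perfusion registration above measures how much knowing one image's intensity reduces uncertainty about the other's. A from-scratch histogram sketch (real pipelines use ITK-style implementations; bin count and images are illustrative):

```python
import math

def mutual_information(img_a, img_b, bins=8):
    """MI between two equally sized images, intensities in [0, 1)."""
    n = len(img_a)
    joint = {}
    for a, b in zip(img_a, img_b):
        key = (int(a * bins), int(b * bins))
        joint[key] = joint.get(key, 0) + 1
    # Marginal histograms from the joint histogram.
    pa, pb = {}, {}
    for (i, j), c in joint.items():
        pa[i] = pa.get(i, 0) + c
        pb[j] = pb.get(j, 0) + c
    mi = 0.0
    for (i, j), c in joint.items():
        pij = c / n
        mi += pij * math.log(pij / ((pa[i] / n) * (pb[j] / n)))
    return mi

ramp = [i / 16 for i in range(16)]
mi_self = mutual_information(ramp, ramp)          # maximal: images identical
mi_none = mutual_information(ramp, [0.0] * 16)    # zero: constant image
```

A registration optimizer perturbs the transformation parameters to maximize this quantity; MI is favored for multi-modal alignment because it assumes no linear intensity relationship between the images.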

  11. Automatic partitioning of head CTA for enabling segmentation

    NASA Astrophysics Data System (ADS)

    Suryanarayanan, Srikanth; Mullick, Rakesh; Mallya, Yogish; Kamath, Vidya; Nagaraj, Nithin

    2004-05-01

    Radiologists perform a CT Angiography procedure to examine vascular structures and associated pathologies such as aneurysms. Volume rendering is used to exploit volumetric capabilities of CT that provides complete interactive 3-D visualization. However, bone forms an occluding structure and must be segmented out. The anatomical complexity of the head creates a major challenge in the segmentation of bone and vessel. An analysis of the head volume reveals varying spatial relationships between vessel and bone that can be separated into three sub-volumes: "proximal", "middle", and "distal". The "proximal" and "distal" sub-volumes contain good spatial separation between bone and vessel (carotid referenced here). Bone and vessel appear contiguous in the "middle" partition that remains the most challenging region for segmentation. The partition algorithm is used to automatically identify these partition locations so that different segmentation methods can be developed for each sub-volume. The partition locations are computed using bone, image entropy, and sinus profiles along with a rule-based method. The algorithm is validated on 21 cases (varying volume sizes, resolution, clinical sites, pathologies) using ground truth identified visually. The algorithm is also computationally efficient, processing a 500+ slice volume in 6 seconds (an impressive 0.01 seconds / slice) that makes it an attractive algorithm for pre-processing large volumes. The partition algorithm is integrated into the segmentation workflow. Fast and simple algorithms are implemented for processing the "proximal" and "distal" partitions. Complex methods are restricted to only the "middle" partition. The partition-enabled segmentation has been successfully tested and results are shown from multiple cases.

  12. GRC-2007-C-01719

    NASA Image and Video Library

    2003-09-17

    The PF2 segment is an engineering model used to verify the flight design and the flight manufacturing procedures prior to the start of flight manufacturing. PF2 is also being used to verify the in-house operational procedures.

  13. Guidelines in the management of obstructing cancer of the left colon: consensus conference of the world society of emergency surgery (WSES) and peritoneum and surgery (PnS) society

    PubMed Central

    2010-01-01

    Background Obstructive left colon carcinoma (OLCC) is a challenging matter in terms of obstruction release as well as of oncological issues. Several options are available and no guidelines are established. The paper aims to generate evidence-based recommendations on management of OLCC. Methods The PubMed and Cochrane Library databases were queried for publications focusing on OLCC published prior to April 2010. An extensive retrieval, analysis, and grading of the literature was undertaken. The findings of the research were presented and discussed at length among panellists and audience at the Consensus Conference of the World Society of Emergency Surgery (WSES) and Peritoneum and Surgery (PnS) Society held in Bologna in July 2010. Comparisons of techniques are presented and final committee recommendations are enunciated. Results Hartmann's procedure should be preferred to loop colostomy (Grade 2B). Hartmann's procedure offers no survival benefit compared to segmental colonic resection with primary anastomosis (Grade 2C+); Hartmann's procedure should be considered in patients with high surgical risk (Grade 2C). Total colectomy and segmental colectomy with intraoperative colonic irrigation are associated with the same mortality/morbidity; however, total colectomy is associated with higher rates of impaired bowel function (Grade 1A). Segmental resection and primary anastomosis either with manual decompression or intraoperative colonic irrigation are associated with the same mortality/morbidity rate (Grade 1A). In palliation, stent placement is associated with similar mortality/morbidity rates and shorter hospital stay (Grade 2B). Stents as a bridge to surgery seem associated with a lower mortality rate, shorter hospital stay, and a lower colostomy formation rate (Grade 1B). Conclusions Loop colostomy and staged procedures should be adopted in dramatic scenarios, when neoadjuvant therapy could be expected. 
Hartmann's procedure should be performed in case of high risk of anastomotic dehiscence. Subtotal and total colectomy should be attempted in case of cecal perforation or synchronous colonic neoplasm. Primary resection and anastomosis with manual decompression seems to be the procedure of choice. Colonic stents represent the best option when skills are available. The power of the literature is relatively poor and the existing RCTs are often not sufficiently robust in design; thus, among 6 possible treatment modalities, only 2 reached Grade A. PMID:21189148

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoiber, Marcus H.; Brown, James B.

    This software implements the first base caller for nanopore data that calls bases directly from raw data. The basecRAWller algorithm has two major advantages over current nanopore base calling software: (1) streaming base calling and (2) base calling from the information-rich raw signal. The ability to perform truly streaming base calling as signal is received from the sequencer can be very powerful, as this is one of the major advantages of this technology compared to other sequencing technologies. As such, enabling as much streaming potential as possible will be increasingly important as this technology continues to become more widely applied in the biosciences. All other base callers currently employ the Viterbi algorithm, which requires the whole sequence before the base calling procedure can complete and thus precludes a natural streaming base calling procedure. The other major advantage of the basecRAWller algorithm is the prediction of bases from raw signal, which contains much richer information than the segmented chunks that current algorithms employ. This leads to the potential for much more accurate base calls, which would make this technology much more valuable to its growing user base.

  15. Segmentation method of eye region based on fuzzy logic system for classifying open and closed eyes

    NASA Astrophysics Data System (ADS)

    Kim, Ki Wan; Lee, Won Oh; Kim, Yeong Gon; Hong, Hyung Gil; Lee, Eui Chul; Park, Kang Ryoung

    2015-03-01

    The classification of eye openness and closure has been researched in various fields, e.g., driver drowsiness detection, physiological status analysis, and eye fatigue measurement. For a classification with high accuracy, accurate segmentation of the eye region is required. Most previous research used the segmentation method by image binarization on the basis that the eyeball is darker than skin, but the performance of this approach is frequently affected by thick eyelashes or shadows around the eye. Thus, we propose a fuzzy-based method for classifying eye openness and closure. First, the proposed method uses I and K color information from the HSI and CMYK color spaces, respectively, for eye segmentation. Second, the eye region is binarized using the fuzzy logic system based on I and K inputs, which is less affected by eyelashes and shadows around the eye. The combined image of I and K pixels is obtained through the fuzzy logic system. Third, in order to reflect the effect by all the inference values on calculating the output score of the fuzzy system, we use the revised weighted average method, where all the rectangular regions by all the inference values are considered for calculating the output score. Fourth, the classification of eye openness or closure is successfully made by the proposed fuzzy-based method with eye images of low resolution which are captured in the environment of people watching TV at a distance. By using the fuzzy logic system, our method does not require the additional procedure of training irrespective of the chosen database. Experimental results with two databases of eye images show that our method is superior to previous approaches.
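The "revised weighted average" defuzzification idea described above, where every rule's inference value contributes to the output score, can be sketched as follows. The membership functions and the two-rule system (dark I and high K suggest an eye pixel) are illustrative assumptions, not the paper's actual rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def weighted_average_output(inference_values, rule_outputs):
    """Weighted average over ALL rule inference values (no rule discarded)."""
    num = sum(w * z for w, z in zip(inference_values, rule_outputs))
    den = sum(inference_values)
    return num / den if den else 0.0

def eye_score(i_val, k_val):
    """Hypothetical two-rule system on normalized I (intensity) and K (black).

    Rule 1: dark I AND high K  -> eye pixel  (output 1)
    Rule 2: bright I AND low K -> skin pixel (output 0)
    """
    w_eye = min(tri(i_val, -0.5, 0.0, 0.5), tri(k_val, 0.5, 1.0, 1.5))
    w_skin = min(tri(i_val, 0.5, 1.0, 1.5), tri(k_val, -0.5, 0.0, 0.5))
    return weighted_average_output([w_eye, w_skin], [1.0, 0.0])
```

Thresholding this score per pixel yields the binarized eye region, which is the step the paper reports as more robust to eyelashes and shadows than plain intensity binarization.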

  16. Patterns of Emphysema Heterogeneity

    PubMed Central

    Valipour, Arschang; Shah, Pallav L.; Gesierich, Wolfgang; Eberhardt, Ralf; Snell, Greg; Strange, Charlie; Barry, Robert; Gupta, Avina; Henne, Erik; Bandyopadhyay, Sourish; Raffy, Philippe; Yin, Youbing; Tschirren, Juerg; Herth, Felix J.F.

    2016-01-01

    Background Although lobar patterns of emphysema heterogeneity are indicative of optimal target sites for lung volume reduction (LVR) strategies, the presence of segmental, or sublobar, heterogeneity is often underappreciated. Objective The aim of this study was to understand lobar and segmental patterns of emphysema heterogeneity, which may more precisely indicate optimal target sites for LVR procedures. Methods Patterns of emphysema heterogeneity were evaluated in a representative cohort of 150 severe (GOLD stage III/IV) chronic obstructive pulmonary disease (COPD) patients from the COPDGene study. High-resolution computerized tomography analysis software was used to measure tissue destruction throughout the lungs to compute heterogeneity (≥ 15% difference in tissue destruction) between (inter-) and within (intra-) lobes for each patient. Emphysema tissue destruction was characterized segmentally to define patterns of heterogeneity. Results Segmental tissue destruction revealed interlobar heterogeneity in the left lung (57%) and right lung (52%). Intralobar heterogeneity was observed in at least one lobe of all patients. No patient presented true homogeneity at a segmental level. There was true homogeneity across both lungs in 3% of the cohort when defining heterogeneity as ≥ 30% difference in tissue destruction. Conclusion Many LVR technologies for treatment of emphysema have focused on interlobar heterogeneity and target an entire lobe per procedure. Our observations suggest that a high proportion of patients with emphysema are affected by interlobar as well as intralobar heterogeneity. These findings prompt the need for a segmental approach to LVR in the majority of patients to treat only the most diseased segments and preserve healthier ones. PMID:26430783
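The heterogeneity criterion above (≥ 15% difference in tissue destruction) reduces to a simple spread test over per-lobe or per-segment destruction scores; the scores in this sketch are hypothetical, not COPDGene data.

```python
def is_heterogeneous(destruction, threshold=15.0):
    """True if any two regions differ by >= threshold percentage points.

    `destruction` maps a lobe (or segment) name to its emphysema tissue
    destruction score in percent.
    """
    vals = list(destruction.values())
    return max(vals) - min(vals) >= threshold

# Hypothetical per-lobe destruction scores (%)
left_lung = {"LUL": 48.0, "LLL": 30.0}                  # 18-point spread
right_lung = {"RUL": 41.0, "RML": 38.0, "RLL": 33.0}    # 8-point spread
```

Applying the same test within each lobe over its segments gives the intralobar classification; raising `threshold` to 30.0 reproduces the stricter definition under which only 3% of the cohort was truly homogeneous.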

  17. mizuRoute version 1: A river network routing tool for a continental domain water resources applications

    USGS Publications Warehouse

    Mizukami, Naoki; Clark, Martyn P.; Sampson, Kevin; Nijssen, Bart; Mao, Yixin; McMillan, Hilary; Viger, Roland; Markstrom, Steven; Hay, Lauren E.; Woods, Ross; Arnold, Jeffrey R.; Brekke, Levi D.

    2016-01-01

    This paper describes the first version of a stand-alone runoff routing tool, mizuRoute. The mizuRoute tool post-processes runoff outputs from any distributed hydrologic model or land surface model to produce spatially distributed streamflow at various spatial scales from headwater basins to continental-wide river systems. The tool can utilize both traditional grid-based river network and vector-based river network data. Both types of river network include river segment lines and the associated drainage basin polygons, but the vector-based river network can represent finer-scale river lines than the grid-based network. Streamflow estimates at any desired location in the river network can be easily extracted from the output of mizuRoute. The routing process is simulated as two separate steps. First, hillslope routing is performed with a gamma-distribution-based unit-hydrograph to transport runoff from a hillslope to a catchment outlet. The second step is river channel routing, which is performed with one of two routing scheme options: (1) a kinematic wave tracking (KWT) routing procedure; and (2) an impulse response function – unit-hydrograph (IRF-UH) routing procedure. The mizuRoute tool also includes scripts (python, NetCDF operators) to pre-process spatial river network data. This paper demonstrates mizuRoute's capabilities to produce spatially distributed streamflow simulations based on river networks from the United States Geological Survey (USGS) Geospatial Fabric (GF) data set in which over 54 000 river segments and their contributing areas are mapped across the contiguous United States (CONUS). A brief analysis of model parameter sensitivity is also provided. The mizuRoute tool can assist model-based water resources assessments including studies of the impacts of climate change on streamflow.
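The hillslope-routing step described above, convolving runoff with a gamma-distribution-based unit hydrograph, can be sketched as follows. The shape and scale parameters are illustrative, not mizuRoute's defaults.

```python
import math

def gamma_unit_hydrograph(shape, scale, n_steps):
    """Discrete gamma-pdf unit hydrograph, renormalized to sum to 1.

    Sampling at integer time steps and renormalizing conserves mass
    regardless of the truncation length n_steps.
    """
    pdf = [t ** (shape - 1) * math.exp(-t / scale)
           / (math.gamma(shape) * scale ** shape)
           for t in range(1, n_steps + 1)]
    total = sum(pdf)
    return [p / total for p in pdf]

def route_hillslope(runoff, uh):
    """Convolve a runoff series with the unit hydrograph (hillslope routing)."""
    out = [0.0] * (len(runoff) + len(uh) - 1)
    for i, r in enumerate(runoff):
        for j, w in enumerate(uh):
            out[i + j] += r * w
    return out

uh = gamma_unit_hydrograph(shape=2.0, scale=1.5, n_steps=10)
hydrograph = route_hillslope([10.0, 0.0, 0.0], uh)   # impulse of 10 mm runoff
```

The catchment outflow then feeds the channel routing step (KWT or IRF-UH) along the river network topology.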

  18. Functional segmentation of dynamic PET studies: Open source implementation and validation of a leader-follower-based algorithm.

    PubMed

    Mateos-Pérez, José María; Soto-Montenegro, María Luisa; Peña-Zalbidea, Santiago; Desco, Manuel; Vaquero, Juan José

    2016-02-01

    We present a novel segmentation algorithm for dynamic PET studies that groups pixels according to the similarity of their time-activity curves. Sixteen mice bearing a human tumor cell line xenograft (CH-157MN) were imaged with three different (68)Ga-DOTA-peptides (DOTANOC, DOTATATE, DOTATOC) using a small animal PET-CT scanner. Regional activities (input function and tumor) were obtained after manual delineation of regions of interest over the image. The algorithm was implemented under the jClustering framework and used to extract the same regional activities as in the manual approach. The volume of distribution in the tumor was computed using the Logan linear method. A Kruskal-Wallis test was used to investigate significant differences between the manually and automatically obtained volumes of distribution. The algorithm successfully segmented all the studies. No significant differences were found for the same tracer across different segmentation methods. Manual delineation revealed significant differences between DOTANOC and the other two tracers (DOTANOC - DOTATATE, p=0.020; DOTANOC - DOTATOC, p=0.033). Similar differences were found using the leader-follower algorithm. An open implementation of a novel segmentation method for dynamic PET studies is presented and validated in rodent studies. It successfully replicated the manual results obtained in small-animal studies, thus making it a reliable substitute for this task and, potentially, for other dynamic segmentation procedures. Copyright © 2016 Elsevier Ltd. All rights reserved.
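The leader-follower clustering family named above can be sketched minimally: the first curve seeds a cluster ("leader"), and each subsequent time-activity curve joins the nearest leader if close enough, otherwise it starts a new cluster. This is a sketch of the algorithm class, not the jClustering implementation (which also updates leaders with a learning rate); the threshold and curves are illustrative.

```python
def leader_follower(curves, threshold):
    """Assign each curve a cluster label by mean-squared distance to leaders."""
    def msd(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

    leaders, labels = [], []
    for c in curves:
        dists = [msd(c, leader) for leader in leaders]
        if dists and min(dists) < threshold:
            labels.append(dists.index(min(dists)))
        else:
            leaders.append(list(c))       # this curve becomes a new leader
            labels.append(len(leaders) - 1)
    return labels

# Two similar tumor-like TACs and one dissimilar blood-pool-like TAC
tacs = [[1.0, 2.0, 3.0], [1.1, 2.1, 3.1], [10.0, 9.0, 8.0]]
labels = leader_follower(tacs, threshold=1.0)
```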

  19. Variational-based segmentation of bio-pores in tomographic images

    NASA Astrophysics Data System (ADS)

    Bauer, Benjamin; Cai, Xiaohao; Peth, Stephan; Schladitz, Katja; Steidl, Gabriele

    2017-01-01

    X-ray computed tomography (CT) combined with a quantitative analysis of the resulting volume images is a fruitful technique in soil science. However, the variations in X-ray attenuation due to different soil components keep the segmentation of single components within these highly heterogeneous samples a challenging problem. Particularly demanding are bio-pores due to their elongated shape and the low gray value difference to the surrounding soil structure. Recently, variational models in connection with algorithms from convex optimization were successfully applied for image segmentation. In this paper we apply these methods for the first time for the segmentation of bio-pores in CT images of soil samples. We introduce a novel convex model which enforces smooth boundaries of bio-pores and takes the varying attenuation values in the depth into account. Segmentation results are reported for different real-world 3D data sets as well as for simulated data. These results are compared with two gray value thresholding methods, namely indicator kriging and a global thresholding procedure, and with a morphological approach. Pros and cons of the methods are assessed by considering geometric features of the segmented bio-pore systems. The variational approach features well-connected smooth pores while not detecting smaller or shallower pores. This is an advantage in cases where the main bio-pores network is of interest and where infillings, e.g., excrements of earthworms, would result in losing pore connections as observed for the other thresholding methods.

  20. A Method for the Evaluation of Thousands of Automated 3D Stem Cell Segmentations

    PubMed Central

    Bajcsy, Peter; Simon, Mylene; Florczyk, Stephen; Simon, Carl G.; Juba, Derek; Brady, Mary

    2016-01-01

    There is no segmentation method that performs perfectly with any data set in comparison to human segmentation. Evaluation procedures for segmentation algorithms become critical for their selection. The problems associated with segmentation performance evaluations and visual verification of segmentation results are exaggerated when dealing with thousands of 3D image volumes because of the amount of computation and manual inputs needed. We address the problem of evaluating 3D segmentation performance when segmentation is applied to thousands of confocal microscopy images (z-stacks). Our approach is to incorporate experimental imaging and geometrical criteria, and map them into computationally efficient segmentation algorithms that can be applied to a very large number of z-stacks. This is an alternative approach to considering existing segmentation methods and evaluating most state-of-the-art algorithms. We designed a methodology for 3D segmentation performance characterization that consists of design, evaluation and verification steps. The characterization integrates manual inputs from projected surrogate “ground truth” of statistically representative samples and from visual inspection into the evaluation. The novelty of the methodology lies in (1) designing candidate segmentation algorithms by mapping imaging and geometrical criteria into algorithmic steps, and constructing plausible segmentation algorithms with respect to the order of algorithmic steps and their parameters, (2) evaluating segmentation accuracy using samples drawn from probability distribution estimates of candidate segmentations, and (3) minimizing human labor needed to create surrogate “truth” by approximating z-stack segmentations with 2D contours from three orthogonal z-stack projections and by developing visual verification tools. We demonstrate the methodology by applying it to a dataset of 1253 mesenchymal stem cells. 
The cells reside on 10 different types of biomaterial scaffolds, and are stained for actin and nucleus yielding 128 460 image frames (on average 125 cells/scaffold × 10 scaffold types × 2 stains × 51 frames/cell). After constructing and evaluating six candidates of 3D segmentation algorithms, the most accurate 3D segmentation algorithm achieved an average precision of 0.82 and an accuracy of 0.84 as measured by the Dice similarity index where values greater than 0.7 indicate a good spatial overlap. A probability of segmentation success was 0.85 based on visual verification, and a computation time was 42.3 h to process all z-stacks. While the most accurate segmentation technique was 4.2 times slower than the second most accurate algorithm, it consumed on average 9.65 times less memory per z-stack segmentation. PMID:26268699

  1. UrQt: an efficient software for the Unsupervised Quality trimming of NGS data.

    PubMed

    Modolo, Laurent; Lerat, Emmanuelle

    2015-04-29

    Quality control is a necessary step of any Next Generation Sequencing analysis. Although customary, this step still requires manual interventions to empirically choose tuning parameters according to various quality statistics. Moreover, current quality control procedures that provide a "good quality" data set, are not optimal and discard many informative nucleotides. To address these drawbacks, we present a new quality control method, implemented in UrQt software, for Unsupervised Quality trimming of Next Generation Sequencing reads. Our trimming procedure relies on a well-defined probabilistic framework to detect the best segmentation between two segments of unreliable nucleotides, framing a segment of informative nucleotides. Our software only requires one user-friendly parameter to define the minimal quality threshold (phred score) to consider a nucleotide to be informative, which is independent of both the experiment and the quality of the data. This procedure is implemented in C++ in an efficient and parallelized software with a low memory footprint. We tested the performances of UrQt compared to the best-known trimming programs, on seven RNA and DNA sequencing experiments and demonstrated its optimality in the resulting tradeoff between the number of trimmed nucleotides and the quality objective. By finding the best segmentation to delimit a segment of good quality nucleotides, UrQt greatly increases the number of reads and of nucleotides that can be retained for a given quality objective. UrQt source files, binary executables for different operating systems and documentation are freely available (under the GPLv3) at the following address: https://lbbe.univ-lyon1.fr/-UrQt-.html .
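The segmentation idea above, framing one informative segment between two unreliable ones, can be illustrated with a brute-force sketch (UrQt itself uses a probabilistic model): score each candidate segment [i, j) by the number of bases at or above the phred threshold inside it plus the number below the threshold outside it, and keep the best cut.

```python
def best_segment(phred, t=20):
    """Return (start, end) of the segment best separating good from bad bases.

    O(n^3) brute force for clarity only; `phred` is a list of per-base
    quality scores and t is the minimal quality threshold.
    """
    n = len(phred)
    good = [1 if q >= t else 0 for q in phred]
    best_score, best_cut = -1, (0, 0)
    for i in range(n + 1):
        for j in range(i, n + 1):
            inside = sum(good[i:j])                     # good bases kept
            outside = sum(1 for g in good[:i] + good[j:] if g == 0)
            if inside + outside > best_score:           # correctly classified
                best_score, best_cut = inside + outside, (i, j)
    return best_cut

# Low-quality head and tail flanking a reliable middle
quals = [5, 8, 30, 35, 32, 28, 7, 4]
cut = best_segment(quals)
```

Trimming the read to `quals[cut[0]:cut[1]]` keeps the informative middle, which is how such a segmentation retains more nucleotides than fixed-window trimmers at the same quality objective.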

  2. Automated MRI Segmentation for Individualized Modeling of Current Flow in the Human Head

    PubMed Central

    Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.

    2013-01-01

    Objective High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography (HD-EEG) require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images (MRI) requires labor-intensive manual segmentation, even when leveraging available automated segmentation tools. Also, accurate placement of many high-density electrodes on individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach A fully automated segmentation technique based on Statistical Parametric Mapping 8 (SPM8), including an improved tissue probability map (TPM) and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on 4 healthy subjects and 7 stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view (FOV) extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. 
Significance Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials. PMID:24099977

  3. Automated MRI segmentation for individualized modeling of current flow in the human head

    NASA Astrophysics Data System (ADS)

    Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.

    2013-12-01

    Objective. High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Significance. 
Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.
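    Agreement between automated and manual segmentations of the kind evaluated above is commonly quantified with the Dice similarity coefficient. The sketch below shows that computation on toy binary masks; note the abstract reports percent deviations, not necessarily Dice, so this is only an illustrative metric, and the mask values are invented.

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two binary masks given as flat 0/1 sequences:
    2 * |A ∩ B| / (|A| + |B|). Two empty masks count as perfect agreement."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

auto   = [1, 1, 1, 0, 0, 1, 0, 0]   # hypothetical automated segmentation
manual = [1, 1, 0, 0, 0, 1, 1, 0]   # hypothetical manual segmentation
print(round(dice_coefficient(auto, manual), 3))  # 0.75
```

In practice the masks would be flattened voxel arrays per tissue class (brain, CSF, skull), with one Dice score reported per class.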

  4. Does semi-automatic bone-fragment segmentation improve the reproducibility of the Letournel acetabular fracture classification?

    PubMed

    Boudissa, M; Orfeuvre, B; Chabanas, M; Tonetti, J

    2017-09-01

    The Letournel classification of acetabular fracture shows poor reproducibility in inexperienced observers, despite the introduction of 3D imaging. We therefore developed a method of semi-automatic segmentation based on CT data. The present prospective study aimed to assess: (1) whether semi-automatic bone-fragment segmentation increased the rate of correct classification; (2) if so, in which fracture types; and (3) feasibility using the open-source itksnap 3.0 software package without incurring extra cost for users. Semi-automatic segmentation of acetabular fractures significantly increases the rate of correct classification by orthopedic surgery residents. Twelve orthopedic surgery residents classified 23 acetabular fractures. Six used conventional 3D reconstructions provided by the center's radiology department (conventional group) and 6 others used reconstructions obtained by semi-automatic segmentation using the open-source itksnap 3.0 software package (segmentation group). Bone fragments were identified by specific colors. Correct classification rates were compared between groups using the chi-square test. Assessment was repeated 2 weeks later, to determine intra-observer reproducibility. Correct classification rates were significantly higher in the "segmentation" group: 114/138 (83%) versus 71/138 (52%); P<0.0001. The difference was greater for simple (36/36 (100%) versus 17/36 (47%); P<0.0001) than complex fractures (79/102 (77%) versus 54/102 (53%); P=0.0004). Mean segmentation time per fracture was 27±3 min [range, 21-35 min]. The segmentation group showed excellent intra-observer correlation coefficients, overall (ICC=0.88), and for simple (ICC=0.92) and complex fractures (ICC=0.84). Semi-automatic segmentation, identifying the various bone fragments, was effective in increasing the rate of correct acetabular fracture classification on the Letournel system by orthopedic surgery residents. It may be considered for routine use in education and training. 
    Level of evidence: III, prospective case-control study of a diagnostic procedure. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
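    The group comparison reported above (114/138 versus 71/138 correct) can be reproduced with the Pearson chi-square statistic for a 2×2 table. This is an illustrative recomputation from the published counts only; the original analysis may have applied a continuity correction.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Correct / incorrect classifications from the abstract:
# segmentation group 114/138 correct, conventional group 71/138 correct
print(round(chi2_2x2(114, 24, 71, 67), 2))  # ≈ 30.31
```

A chi-square statistic of about 30.3 on one degree of freedom is consistent with the reported P<0.0001.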

  5. Time-independent Anisotropic Plastic Behavior by Mechanical Subelement Models

    NASA Technical Reports Server (NTRS)

    Pian, T. H. H.

    1983-01-01

    The paper describes a procedure for modelling the anisotropic elastic-plastic behavior of metals in a plane stress state by the mechanical sub-layer model. In this model the stress-strain curves along the longitudinal and transverse directions are represented by short smooth segments which are considered as piecewise linear for simplicity. The model is incorporated in a finite element analysis program which is based on the assumed-stress hybrid element and viscoplasticity theory.

  6. Medical image segmentation using genetic algorithms.

    PubMed

    Maulik, Ujjwal

    2009-03-01

    Genetic algorithms (GAs) have been found to be effective in the domain of medical image segmentation, since the problem can often be mapped to one of search in a complex and multimodal landscape. The challenges in medical image segmentation arise due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. The resulting search space is therefore often noisy with a multitude of local optima. Not only does the genetic algorithmic framework prove to be effective in coming out of local optima, it also brings considerable flexibility into the segmentation procedure. In this paper, an attempt has been made to review the major applications of GAs to the domain of medical image segmentation.
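    The search-based flavor of GA segmentation reviewed above can be conveyed with a toy example: a genetic algorithm evolving a single binarization threshold on a synthetic bimodal intensity sample, using Otsu's between-class variance as the fitness function. This is a hedged sketch, not any specific method from the review; real GA segmenters typically evolve richer chromosomes (cluster centers, contour parameters), and all data below are invented.

```python
import random

def fitness(t, pixels):
    """Otsu-style between-class variance of threshold t (higher is better)."""
    lo = [p for p in pixels if p <= t]
    hi = [p for p in pixels if p > t]
    if not lo or not hi:
        return 0.0
    w0, w1 = len(lo) / len(pixels), len(hi) / len(pixels)
    m0, m1 = sum(lo) / len(lo), sum(hi) / len(hi)
    return w0 * w1 * (m0 - m1) ** 2

def ga_threshold(pixels, pop_size=20, generations=30, seed=0):
    """Evolve a scalar threshold: elitist selection, midpoint crossover,
    Gaussian mutation."""
    rng = random.Random(seed)
    lo, hi = min(pixels), max(pixels)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, pixels), reverse=True)
        parents = pop[: pop_size // 2]               # keep the fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2                      # crossover: midpoint
            child += rng.gauss(0, (hi - lo) * 0.02)  # mutation
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return max(pop, key=lambda t: fitness(t, pixels))

# Synthetic bimodal "image": dark tissue around 40, bright lesion around 160
rng = random.Random(1)
pixels = [rng.gauss(40, 10) for _ in range(300)] + [rng.gauss(160, 15) for _ in range(100)]
print(60 < ga_threshold(pixels) < 140)  # threshold falls between the two modes
```

Even this tiny GA illustrates the point made in the abstract: the population-based search tolerates a noisy, multimodal fitness landscape where a greedy search could stall in a local optimum.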

  7. Radio Frequency Ablation Registration, Segmentation, and Fusion Tool

    PubMed Central

    McCreedy, Evan S.; Cheng, Ruida; Hemler, Paul F.; Viswanathan, Anand; Wood, Bradford J.; McAuliffe, Matthew J.

    2008-01-01

    The Radio Frequency Ablation Segmentation Tool (RFAST) is a software application developed using NIH's Medical Image Processing Analysis and Visualization (MIPAV) API for the specific purpose of assisting physicians in the planning of radio frequency ablation (RFA) procedures. The RFAST application sequentially leads the physician through the steps necessary to register, fuse, segment, visualize and plan the RFA treatment. Three-dimensional volume visualization of the CT dataset with segmented 3D surface models enables the physician to interactively position the ablation probe to simulate burns and to semi-manually simulate sphere packing in an attempt to optimize probe placement. PMID:16871716

  8. FLIS Procedures Manual. Document Identifier Code Input/Output Formats (Fixed Length). Volume 8.

    DTIC Science & Technology

    1997-04-01

    DATA ELEMENTS. SEGMENT R MAY BE REPEATED A MAXIMUM OF THREE (3) TIMES IN ORDER TO ACQUIRE THE REQUIRED MIX OF SEGMENTS OR INDIVIDUAL DATA ELEMENTS TO...preceding record. Marketing input DICs. QI Next DRN of appropriate segment will be QF The assigned NSN or PSCN being can- reflected in accordance with Table...Classified KFC Notification of Possible Duplicate (Submitter) Follow-Up Interrogation LFU Notification of Return, SSR Transaction

  9. Finding Acoustic Regularities in Speech: Applications to Phonetic Recognition

    DTIC Science & Technology

    1988-12-01

    University Press, Indiana, 1977. [12] N. Chomsky and M. Halle, The Sound Pattern of English, Harper and Row, New York, 1968. BIBLIOGRAPHY [13] Y.L...segments are related to the phonemes by a grammar which is determined using automated procedures operating on a set of training data. Thus important...segments which are described completely in acoustic terms. Next, these acoustic segments are related to the phonemes by a grammar which is determined

  10. Procedural key steps in laparoscopic colorectal surgery, consensus through Delphi methodology.

    PubMed

    Dijkstra, Frederieke A; Bosker, Robbert J I; Veeger, Nicolaas J G M; van Det, Marc J; Pierie, Jean Pierre E N

    2015-09-01

    While several procedural training curricula in laparoscopic colorectal surgery have been validated and published, none have focused on dividing surgical procedures into well-identified segments, which can be trained and assessed separately. This enables the surgeon and resident to focus on a specific segment, or combination of segments, of a procedure. Furthermore, it will provide a consistent and uniform method of training for residents rotating through different teaching hospitals. The goal of this study was to determine consensus on the key steps of laparoscopic right hemicolectomy and laparoscopic sigmoid colectomy among experts in our University Medical Center and affiliated hospitals. This will form the basis for the INVEST video-assisted side-by-side training curriculum. The Delphi method was used for determining consensus on key steps of both procedures. A list of 31 steps for laparoscopic right hemicolectomy and 37 steps for laparoscopic sigmoid colectomy was compiled from textbooks and national and international guidelines. In an online questionnaire, 22 experts in 12 hospitals within our teaching region were invited to rate all steps on a Likert scale on importance for the procedure. Consensus was reached in two rounds. Sixteen experts agreed to participate. Of these 16 experts, 14 (88%) completed the questionnaire for both procedures. Of the 14 who completed the first round, 13 (93%) completed the second round. Cronbach's alpha was 0.79 for the right hemicolectomy and 0.91 for the sigmoid colectomy, showing high internal consistency between the experts. For the right hemicolectomy, 25 key steps were established; for the sigmoid colectomy, 24 key steps were established. Expert consensus on the key steps for laparoscopic right hemicolectomy and laparoscopic sigmoid colectomy was reached. These key steps will form the basis for a video-assisted teaching curriculum.
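    Cronbach's alpha, used above to check consistency between experts, treats each expert as an "item" scored over the procedural steps. A minimal sketch follows, with invented Likert ratings (the actual survey data are not reproduced in the abstract):

```python
def cronbach_alpha(ratings):
    """Cronbach's alpha: `ratings` has one row per procedural step and one
    column per expert; alpha = k/(k-1) * (1 - sum(item vars)/var(totals))."""
    k = len(ratings[0])                      # number of experts ("items")
    def var(xs):                             # sample variance (ddof=1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    expert_vars = [var([row[j] for row in ratings]) for j in range(k)]
    step_totals = [sum(row) for row in ratings]
    return k / (k - 1) * (1 - sum(expert_vars) / var(step_totals))

# Invented 5-step, 3-expert Likert table with broad agreement
ratings_example = [[5, 5, 4], [4, 4, 4], [2, 3, 2], [5, 4, 5], [1, 2, 1]]
print(round(cronbach_alpha(ratings_example), 2))  # 0.96
```

Values near 1, like the 0.79 and 0.91 reported above, indicate that experts rank the importance of the steps consistently.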

  11. Automatic Skin Lesion Segmentation Using Deep Fully Convolutional Networks With Jaccard Distance.

    PubMed

    Yuan, Yading; Chao, Ming; Lo, Yeh-Chi

    2017-09-01

    Automatic skin lesion segmentation in dermoscopic images is a challenging task due to the low contrast between lesion and the surrounding skin, the irregular and fuzzy lesion borders, the existence of various artifacts, and various imaging acquisition conditions. In this paper, we present a fully automatic method for skin lesion segmentation by leveraging a 19-layer deep convolutional neural network that is trained end-to-end and does not rely on prior knowledge of the data. We propose a set of strategies to ensure effective and efficient learning with limited training data. Furthermore, we design a novel loss function based on Jaccard distance to eliminate the need for sample re-weighting, a typical procedure when using cross entropy as the loss function for image segmentation due to the strong imbalance between the number of foreground and background pixels. We evaluated the effectiveness, efficiency, as well as the generalization capability of the proposed framework on two publicly available databases. One is from the ISBI 2016 skin lesion analysis towards melanoma detection challenge, and the other is the PH2 database. Experimental results showed that the proposed method outperformed other state-of-the-art algorithms on these two databases. Our method is general enough and only needs minimum pre- and post-processing, which allows its adoption in a variety of medical image segmentation tasks.
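    The Jaccard-distance loss idea can be sketched framework-free: a soft intersection-over-union on predicted probabilities, with the loss defined as one minus the Jaccard index. This is a minimal illustration, and the exact formulation and smoothing constant in the paper may differ; the probabilities below are invented.

```python
def jaccard_loss(pred, target, eps=1e-7):
    """Soft Jaccard (IoU) loss between predicted foreground probabilities
    and a binary target mask, both flat sequences: 1 - |A∩B| / |A∪B|."""
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(p + t for p, t in zip(pred, target)) - inter
    return 1.0 - (inter + eps) / (union + eps)

pred   = [0.9, 0.8, 0.1, 0.2]   # hypothetical network probabilities
target = [1,   1,   0,   0]     # ground-truth lesion mask
print(round(jaccard_loss(pred, target), 3))  # 0.261
```

Because both intersection and union scale with the foreground size, the loss is insensitive to the foreground/background imbalance that forces re-weighting under cross entropy.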

  12. Three-dimensional illumination procedure for photodynamic therapy of dermatology

    NASA Astrophysics Data System (ADS)

    Hu, Xiao-ming; Zhang, Feng-juan; Dong, Fei; Zhou, Ya

    2014-09-01

    Light dosimetry is an important parameter that affects the efficacy of photodynamic therapy (PDT). However, the irregular morphologies of lesions complicate lesion segmentation and light irradiance adjustment. Therefore, this study developed an illumination demo system comprising a camera, a digital projector, and a computing unit to solve these problems. A three-dimensional model of a lesion was reconstructed using the developed system. Hierarchical segmentation was achieved with the superpixel algorithm. The expected light dosimetry on the targeted lesion was achieved with the proposed illumination procedure. Accurate control and optimization of light delivery can improve the efficacy of PDT.

  13. Multistate outbreak of toxic anterior segment syndrome, 2005.

    PubMed

    Kutty, Preeta K; Forster, Terri S; Wood-Koob, Carol; Thayer, Nancy; Nelson, Robert B; Berke, Stanley J; Pontacolone, Lillian; Beardsley, Thomas L; Edelhauser, Henry F; Arduino, Matthew J; Mamalis, Nick; Srinivasan, Arjun

    2008-04-01

    To present the findings of an outbreak of toxic anterior segment syndrome (TASS). Six states, 7 ophthalmology surgical centers, United States. Cases were identified through electronic communication networks and via reports to a national TASS referral center. Information on the procedure, details of instrument reprocessing, and products used during cataract surgery were also collected. Medications used during the procedures were tested for endotoxin using a kinetic assay. The search identified 112 case patients (median age 74 years) from 7 centers from July 19, 2005, through November 28, 2005. Common presenting clinical features included blurred vision (60%), anterior segment inflammation (49%), and cell deposition (56%). Of the patients, 100 (89%) had been exposed to a single brand of balanced salt solution manufactured by Cytosol Laboratories and distributed by Advanced Medical Optics as AMO Endosol. Two patients continued to have residual symptoms. There were no reports of significant breaches in sterile technique or instrument reprocessing. Of 14 balanced salt solution lots, 5 (35%) had levels exceeding the endotoxin limit (0.5 EU/mL). Based on these findings, the balanced salt solution product was withdrawn, resulting in a termination of the outbreak. This is the first known report of an outbreak of TASS caused by intrinsic contamination of a product with endotoxin. Ophthalmologists and epidemiologists should be aware of TASS and its common causes. To facilitate investigations of adverse outcomes such as TASS, those performing cataract surgeries should document the type and lot numbers of products used intraoperatively.

  14. Digital image processing and analysis for activated sludge wastewater treatment.

    PubMed

    Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed

    2015-01-01

    Activated sludge systems are generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI) and chemical oxygen demand (COD). For these measurements, tests are conducted in the laboratory and can take many hours to yield a final value. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of activated sludge but also to predict its future state. Characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation and analysis in the specific context of activated sludge. In the latter part, additional procedures such as z-stacking and image stitching are introduced for wastewater image preprocessing, which have not previously been used in the context of activated sludge. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, the image-analysis-based morphological parameters and their correlation with regard to monitoring and prediction of activated sludge are discussed. Hence it is observed that image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants.

  15. Group-wise feature-based registration of CT and ultrasound images of spine

    NASA Astrophysics Data System (ADS)

    Rasoulian, Abtin; Mousavi, Parvin; Hedjazi Moghari, Mehdi; Foroughi, Pezhman; Abolmaesumi, Purang

    2010-02-01

    Registration of pre-operative CT and freehand intra-operative ultrasound of the lumbar spine could aid surgeons in spinal needle injection, which is a common procedure for pain management. Patients are always in a supine position during the CT scan, and in the prone or sitting position during the intervention. This leads to a difference in the spinal curvature between the two imaging modalities, which means a single rigid registration cannot be used for all of the lumbar vertebrae. In this work, a method for group-wise registration of pre-operative CT and intra-operative freehand 2-D ultrasound images of the lumbar spine is presented. The approach utilizes a point-based registration technique based on the unscented Kalman filter, taking as input segmented vertebrae surfaces in both CT and ultrasound data. Ultrasound images are automatically segmented using a dynamic programming approach, while the CT images are semi-automatically segmented using thresholding. Since the curvature of the spine is different between the pre-operative and the intra-operative data, the registration approach is designed to simultaneously align individual groups of points segmented from each vertebra in the two imaging modalities. A biomechanical model is used to constrain the vertebrae transformation parameters during the registration and to ensure convergence. The mean target registration error achieved for individual vertebrae on five spine phantoms generated from CT data of patients is 2.47 mm with standard deviation of 1.14 mm.

  16. Automatic quantitative analysis of in-stent restenosis using FD-OCT in vivo intra-arterial imaging.

    PubMed

    Mandelias, Kostas; Tsantis, Stavros; Spiliopoulos, Stavros; Katsakiori, Paraskevi F; Karnabatidis, Dimitris; Nikiforidis, George C; Kagadis, George C

    2013-06-01

    A new segmentation technique is implemented for automatic lumen area extraction and stent strut detection in intravascular optical coherence tomography (OCT) images for the purpose of quantitative analysis of in-stent restenosis (ISR). In addition, a user-friendly graphical user interface (GUI) is developed based on the employed algorithm toward clinical use. Four clinical datasets of frequency-domain OCT scans of the human femoral artery were analyzed. First, a segmentation method based on fuzzy C means (FCM) clustering and wavelet transform (WT) was applied toward inner luminal contour extraction. Subsequently, stent strut positions were detected by utilizing metrics derived from the local maxima of the wavelet transform into the FCM membership function. The inner lumen contour and the position of stent strut were extracted with high precision. Compared to manual segmentation by an expert physician, the automatic lumen contour delineation had an average overlap value of 0.917 ± 0.065 for all OCT images included in the study. The strut detection procedure achieved an overall accuracy of 93.80% and successfully identified 9.57 ± 0.5 struts for every OCT image. Processing time was confined to approximately 2.5 s per OCT frame. A new fast and robust automatic segmentation technique combining FCM and WT for lumen border extraction and strut detection in intravascular OCT images was designed and implemented. The proposed algorithm integrated in a GUI represents a step forward toward the employment of automated quantitative analysis of ISR in clinical practice.
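    The fuzzy C-means component of the pipeline above can be sketched in its simplest 1-D form, clustering pixel intensities into soft classes. The published method combines FCM with wavelet-derived metrics, which this pure-Python illustration omits; the intensity values are invented.

```python
def fuzzy_c_means_1d(values, n_clusters=2, m=2.0, iters=50):
    """Plain 1-D fuzzy C-means: returns cluster centers and memberships
    u[i][j] = degree to which sample i belongs to cluster j."""
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * (j + 0.5) / n_clusters for j in range(n_clusters)]
    u = []
    for _ in range(iters):
        u = []
        for x in values:
            d = [abs(x - c) + 1e-12 for c in centers]  # avoid division by zero
            # u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
            u.append([1.0 / sum((d[j] / d[k]) ** (2 / (m - 1))
                                for k in range(n_clusters))
                      for j in range(n_clusters)])
        # center update: mean weighted by u_ij^m
        centers = [sum(u[i][j] ** m * values[i] for i in range(len(values))) /
                   sum(u[i][j] ** m for i in range(len(values)))
                   for j in range(n_clusters)]
    return centers, u

# Two intensity populations, e.g. dark lumen vs bright vessel wall
vals = [10, 12, 11, 9, 90, 95, 88, 92]
centers, u = fuzzy_c_means_1d(vals)
print(round(min(centers), 1), round(max(centers), 1))
```

The soft memberships (rather than hard labels) are what allow wavelet-based metrics to be folded into the membership function, as described in the abstract.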

  17. Maxillary segmental distraction in children with unilateral clefts of lip, palate, and alveolus.

    PubMed

    Zemann, Wolfgang; Pichelmayer, Margit

    2011-06-01

    Alveolar clefts are commonly closed by a bone grafting procedure. In cases of wide clefts, the deficiency of soft tissue in the cleft area may lead to wound dehiscence and loss of the bony graft. Segmental maxillary bony transfer has been reported to be useful in such cases. Standard distraction devices allow unidirectional movement of the transported segment; ideally, the distraction should strictly follow the dental arch. The aim of this study was to analyze distraction devices that were adapted to the individual clinical situation of the patients. The goal was to achieve a distraction strictly parallel to the dental arch. Six children with unilateral clefts of lip, palate, and alveolus, between 12 and 13 years of age, were included in the study. The width of the cleft was between 7 and 19 mm. Dental cast models were used to manufacture individual distraction devices that would allow a segmental bony transport strictly parallel to the dental arch. Segmental osteotomy was performed under general anesthesia. Distraction was started 5 days after surgery. All distractors were tooth-fixed but supported by palatally inserted orthodontic miniscrews. In all patients, a closure of the alveolar cleft was achieved. Two patients required additional bone grafting after the distraction procedure. The distraction was strictly parallel to the dental arch in all cases. In 1 case a slight cranial displacement of the transported maxillary segment could be noticed, leading to minor modifications of the following distractors. Distraction osteogenesis is a proper method to close wide alveolar clefts. Linear segmental transport is required in the posterior part of the dental arch, whereas in the frontal part the bony transport should run strictly parallel to the dental arch. An exactly guided segmental transport may reduce the postoperative orthodontic complexity. Copyright © 2011 Mosby, Inc. All rights reserved.

  18. 40 CFR 86.345-79 - Emission calculations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... gasoline-fueled engine test from the pre-test data. Apply the Y value to the K W equation for the entire test. (5) Calculate a separate Y value for each Diesel test segment from the pretest-segment data... New Gasoline-Fueled and Diesel-Fueled Heavy-Duty Engines; Gaseous Exhaust Test Procedures § 86.345-79...

  19. 40 CFR 86.345-79 - Emission calculations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... gasoline-fueled engine test from the pre-test data. Apply the Y value to the K W equation for the entire test. (5) Calculate a separate Y value for each Diesel test segment from the pretest-segment data... New Gasoline-Fueled and Diesel-Fueled Heavy-Duty Engines; Gaseous Exhaust Test Procedures § 86.345-79...

  20. 40 CFR 86.345-79 - Emission calculations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... gasoline-fueled engine test from the pre-test data. Apply the Y value to the K W equation for the entire test. (5) Calculate a separate Y value for each Diesel test segment from the pretest-segment data... New Gasoline-Fueled and Diesel-Fueled Heavy-Duty Engines; Gaseous Exhaust Test Procedures § 86.345-79...

  1. 40 CFR 86.345-79 - Emission calculations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... gasoline-fueled engine test from the pre-test data. Apply the Y value to the K W equation for the entire test. (5) Calculate a separate Y value for each Diesel test segment from the pretest-segment data... New Gasoline-Fueled and Diesel-Fueled Heavy-Duty Engines; Gaseous Exhaust Test Procedures § 86.345-79...

  2. Estimating acreage by double sampling using LANDSAT data

    NASA Technical Reports Server (NTRS)

    Pont, F.; Horwitz, H.; Kauth, R. (Principal Investigator)

    1982-01-01

    Double sampling techniques employing LANDSAT data for estimating the acreage of corn and soybeans were investigated and evaluated. The evaluation was based on estimated costs and correlations between two existing procedures having differing cost/variance characteristics, and included consideration of their individual merits when coupled with a fictional 'perfect' procedure of zero bias and variance. Two features of the analysis are: (1) the simultaneous estimation of two or more crops; and (2) the imposition of linear cost constraints among two or more types of resource. A reasonably realistic operational scenario was postulated. The costs were estimated from current experience with the measurement procedures involved, and the correlations were estimated from a set of 39 LACIE-type sample segments located in the U.S. Corn Belt. For a fixed variance of the estimate, double sampling with the two existing LANDSAT measurement procedures can result in a 25% or 50% cost reduction. Double sampling which included the fictional perfect procedure results in a more cost-effective combination when it is used with the lower cost/higher variance representative of the existing procedures.

  3. Comparison of in vivo 3D cone-beam computed tomography tooth volume measurement protocols.

    PubMed

    Forst, Darren; Nijjar, Simrit; Flores-Mir, Carlos; Carey, Jason; Secanell, Marc; Lagravere, Manuel

    2014-12-23

    The objective of this study was to analyze a set of previously developed and proposed image segmentation protocols for intra- and inter-rater precision in in vivo tooth volume measurements using cone-beam computed tomography (CBCT) images. Six 3D volume segmentation procedures were proposed and tested for intra- and inter-rater reliability in quantifying maxillary first molar volumes. Ten randomly selected maxillary first molars were measured in vivo, in random order, three times with 10 days' separation between measurements. Intra- and inter-rater agreement for all segmentation procedures was assessed using the intra-class correlation coefficient (ICC). The highest precision was obtained for automated thresholding with manual refinements. A tooth volume measurement protocol for CBCT images employing automated segmentation with manual human refinement on a 2D slice-by-slice basis in all three planes of space possessed excellent intra- and inter-rater reliability. Three-dimensional volume measurements of the entire tooth structure are more precise than 3D volume measurements of only the dental roots apical to the cemento-enamel junction (CEJ).
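    The intra-class correlation used above can be illustrated with the one-way random-effects form, ICC(1,1), on invented repeated volume measurements. The study may have used a different ICC model (e.g. two-way), so this is only a sketch of the computation.

```python
def icc_1_1(data):
    """One-way random-effects ICC(1,1): `data` has one row per target
    (tooth) and one column per repeated measurement."""
    n = len(data)                        # number of targets
    k = len(data[0])                     # measurements per target
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)   # between
    msw = sum((x - row_means[i]) ** 2
              for i, row in enumerate(data) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Invented tooth volumes (mm^3), three repeated measurements per tooth
volumes = [[450, 452, 451], [500, 498, 499], [470, 471, 469], [520, 521, 523]]
print(round(icc_1_1(volumes), 3))  # 0.999
```

When repeated measurements vary far less than the teeth differ from one another, the ICC approaches 1, matching the "excellent reliability" interpretation used in the abstract.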

  4. Paroxysmal atrial fibrillation prediction method with shorter HRV sequences.

    PubMed

    Boon, K H; Khalil-Hani, M; Malarvili, M B; Sia, C W

    2016-10-01

    This paper proposes a method that predicts the onset of paroxysmal atrial fibrillation (PAF), using heart rate variability (HRV) segments that are shorter than those applied in existing methods, while maintaining good prediction accuracy. PAF is a common cardiac arrhythmia that increases the health risk of a patient, and the development of an accurate predictor of PAF onset is clinically important because it raises the possibility of electrically stabilizing the heart and preventing the onset of atrial arrhythmias with different pacing techniques. We investigate the effect of HRV features extracted from different lengths of HRV segments prior to PAF onset with the proposed PAF prediction method. The pre-processing stage of the predictor includes QRS detection, HRV quantification and ectopic beat correction. Time-domain, frequency-domain, non-linear and bispectrum features are then extracted from the quantified HRV. In the feature selection, the HRV feature set and classifier parameters are optimized simultaneously using an optimization procedure based on a genetic algorithm (GA). Both the full feature set and a statistically significant feature subset are optimized by the GA. For the statistically significant feature subset, the Mann-Whitney U test is used to filter out features that do not pass the statistical test at the 20% significance level. The final stage of our predictor is a classifier based on a support vector machine (SVM). A 10-fold cross-validation is applied in performance evaluation, and the proposed method achieves 79.3% prediction accuracy using 15-minute HRV segments. This accuracy is comparable to that achieved by existing methods that use 30-minute HRV segments, most of which achieve accuracy of around 80%. More importantly, our method significantly outperforms those that apply segments shorter than 30 minutes. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
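    The Mann-Whitney U filtering step above can be sketched with the pairwise-comparison form of the U statistic (ties counted as 0.5); the p-value lookup at the 20% significance level is omitted here, and the feature values are invented.

```python
def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistic for sample xs versus ys: the number of
    (x, y) pairs with x > y, counting ties as 0.5."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Toy HRV feature (e.g. mean RR interval, in seconds) in two segment groups
pre_paf = [0.71, 0.69, 0.74, 0.68]   # segments preceding PAF onset
distant = [0.80, 0.83, 0.79, 0.85]   # segments far from any onset
print(mann_whitney_u(pre_paf, distant))  # 0.0: complete separation
```

A U near 0 or near len(xs)*len(ys) signals strong group separation; features whose U falls near the middle would fail the significance filter and be dropped before SVM training.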

  5. Contact-free determination of human body segment parameters by means of videometric image processing of an anthropomorphic body model

    NASA Astrophysics Data System (ADS)

    Hatze, Herbert; Baca, Arnold

    1993-01-01

    The development of noninvasive techniques for the determination of biomechanical body segment parameters (volumes, masses, the three principal moments of inertia, the three local coordinates of the segmental mass centers, etc.) receives increasing attention from the medical sciences (e.g., orthopaedic gait analysis), bioengineering, sport biomechanics, and the various space programs. In the present paper, a novel method is presented for determining body segment parameters rapidly and accurately. It is based on the video-image processing of four different body configurations and a finite mass-element human body model. The four video images of the subject in question are recorded against a black background, thus permitting the application of shape recognition procedures incorporating edge detection and calibration algorithms. In this way, a total of 181 object space dimensions of the subject's body segments can be reconstructed and used as anthropometric input data for the mathematical finite mass-element body model. The latter comprises 17 segments (abdomino-thoracic, head-neck, shoulders, upper arms, forearms, hands, abdomino-pelvic, thighs, lower legs, feet) and enables the user to compute all the required segment parameters for each of the 17 segments by means of the associated computer program. The hardware requirements are an IBM-compatible PC (1 MB memory) operating under MS-DOS or PC-DOS (Version 3.1 onwards) and incorporating a VGA board with a feature connector for connecting it to a super video windows framegrabber board, for which a 16-bit large slot must be available. In addition, a VGA monitor (50-70 Hz, horizontal scan rate at least 31.5 kHz), a common video camera and recorder, and a simple rectangular calibration frame are required. 
The advantage of the new method lies in its ease of application, its comparatively high accuracy, and in the rapid availability of the body segment parameters, which is particularly useful in clinical practice. An example of its practical application illustrates the technique.

  6. Quantification of regional fat volume in rat MRI

    NASA Astrophysics Data System (ADS)

    Sacha, Jaroslaw P.; Cockman, Michael D.; Dufresne, Thomas E.; Trokhan, Darren

    2003-05-01

    Multiple initiatives in the pharmaceutical and beauty care industries are directed at identifying therapies for weight management. Body composition measurements are critical for such initiatives. Imaging technologies that can be used to measure body composition noninvasively include DXA (dual energy x-ray absorptiometry) and MRI (magnetic resonance imaging). Unlike other approaches, MRI provides the ability to perform localized measurements of fat distribution. Several factors complicate the automatic delineation of fat regions and quantification of fat volumes. These include motion artifacts, field non-uniformity, brightness and contrast variations, chemical shift misregistration, and ambiguity in delineating anatomical structures. We have developed an approach to deal practically with those challenges. The approach is implemented in a package, the Fat Volume Tool, for automatic detection of fat tissue in MR images of the rat abdomen, including automatic discrimination between abdominal and subcutaneous regions. We suppress motion artifacts using masking based on detection of implicit landmarks in the images. Adaptive object extraction is used to compensate for intensity variations. This approach enables us to perform fat tissue detection and quantification in a fully automated manner. The package can also operate in manual mode, which can be used for verification of the automatic analysis or for performing supervised segmentation. In supervised segmentation, the operator has the ability to interact with the automatic segmentation procedures to touch-up or completely overwrite intermediate segmentation steps. The operator's interventions steer the automatic segmentation steps that follow. This improves the efficiency and quality of the final segmentation. Semi-automatic segmentation tools (interactive region growing, live-wire, etc.) improve both the accuracy and throughput of the operator when working in manual mode. 
The quality of automatic segmentation has been evaluated by comparing the results of fully automated analysis to manual analysis of the same images. The comparison shows a high degree of correlation that validates the quality of the automatic segmentation approach.
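    The interactive region growing mentioned above can be sketched as a seeded, 4-connected flood fill with an intensity tolerance. The Fat Volume Tool's actual implementation (adaptive object extraction, live-wire) is more sophisticated; the image and values below are invented.

```python
from collections import deque

def region_grow(image, seed, tol):
    """Collect 4-connected pixels whose intensity lies within `tol` of the
    seed pixel's intensity (breadth-first flood fill)."""
    h, w = len(image), len(image[0])
    base = image[seed[0]][seed[1]]
    seen = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen
                    and abs(image[ny][nx] - base) <= tol):
                seen.add((ny, nx))
                queue.append((ny, nx))
    return seen

# Invented 4x4 "slice": bright fat blob (~200) on dark background (~20)
img = [[20, 20, 20, 20],
       [20, 200, 210, 20],
       [20, 205, 198, 20],
       [20, 20, 20, 20]]
region = region_grow(img, seed=(1, 1), tol=30)
print(len(region))  # 4 pixels in the blob
```

Summing region sizes across slices and multiplying by voxel volume yields the fat volume estimate that would be compared against manual delineation.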

  7. A knowledge-based machine vision system for space station automation

    NASA Technical Reports Server (NTRS)

    Chipman, Laure J.; Ranganath, H. S.

    1989-01-01

    A simple knowledge-based approach to the recognition of objects in man-made scenes is being developed. Specifically, the system under development is a proposed enhancement to a robot arm for use in the space station laboratory module. The system will take a request from a user to find a specific object, and locate that object by using its camera input and information from a knowledge base describing the scene layout and attributes of the object types included in the scene. In order to use realistic test images in developing the system, researchers are using photographs of actual NASA simulator panels, which provide similar types of scenes to those expected in the space station environment. Figure 1 shows one of these photographs. In traditional approaches to image analysis, the image is transformed step by step into a symbolic representation of the scene. Often the first steps of the transformation are done without any reference to knowledge of the scene or objects. Segmentation of an image into regions generally produces a counterintuitive result in which regions do not correspond to objects in the image. After segmentation, a merging procedure attempts to group regions into meaningful units that will more nearly correspond to objects. Here, researchers avoid segmenting the image as a whole, and instead use a knowledge-directed approach to locate objects in the scene. The knowledge-based approach to scene analysis is described and the categories of knowledge used in the system are discussed.

  8. The Precedence of Global Features in the Perception of Map Symbols

    DTIC Science & Technology

    1988-06-01

be continually updated. The present study evaluated the feasibility of a serial model of visual processing. By comparing performance between a symbol...symbols, is based on a "filtering" procedure, consisting of a series of passive-to-active or global-to-local stages. Navon (1977, 1981a) has proposed a...packages or segments. This advances the earlier, static feature aggregation approaches to comprise a "figure." According to the global precedence model

  9. Vulnerable Atherosclerotic Plaque Elasticity Reconstruction Based on a Segmentation-Driven Optimization Procedure Using Strain Measurements: Theoretical Framework

    PubMed Central

    Le Floc’h, Simon; Tracqui, Philippe; Finet, Gérard; Gharib, Ahmed M.; Maurice, Roch L.; Cloutier, Guy; Pettigrew, Roderic I.

    2016-01-01

    It is now recognized that prediction of the vulnerable coronary plaque rupture requires not only an accurate quantification of fibrous cap thickness and necrotic core morphology but also a precise knowledge of the mechanical properties of plaque components. Indeed, such knowledge would allow a precise evaluation of the peak cap-stress amplitude, which is known to be a good biomechanical predictor of plaque rupture. Several studies have been performed to reconstruct a Young’s modulus map from strain elastograms. It seems that the main issue for improving such methods does not rely on the optimization algorithm itself, but rather on preconditioning requiring the best estimation of the plaque components’ contours. The present theoretical study was therefore designed to develop: 1) a preconditioning model to extract the plaque morphology in order to initiate the optimization process, and 2) an approach combining a dynamic segmentation method with an optimization procedure to highlight the modulogram of the atherosclerotic plaque. This methodology, based on the continuum mechanics theory prescribing the strain field, was successfully applied to seven intravascular ultrasound coronary lesion morphologies. The reconstructed cap thickness, necrotic core area, calcium area, and the Young’s moduli of the calcium, necrotic core, and fibrosis were obtained with mean relative errors of 12%, 4% and 1%, 43%, 32%, and 2%, respectively. PMID:19164080

  10. A semi-automated image analysis procedure for in situ plankton imaging systems.

    PubMed

    Bi, Hongsheng; Guo, Zhenhua; Benfield, Mark C; Fan, Chunlei; Ford, Michael; Shahrestani, Suzan; Sieracki, Jeffery M

    2015-01-01

Plankton imaging systems are capable of providing fine-scale observations that enhance our understanding of key physical and biological processes. However, processing the large volumes of data collected by imaging systems remains a major obstacle to their deployment, and existing approaches are designed either for images acquired under laboratory-controlled conditions or within clear waters. In the present study, we developed a semi-automated approach to analyze plankton taxa from images acquired by the ZOOplankton VISualization (ZOOVIS) system within turbid estuarine waters in Chesapeake Bay. Compared to images acquired under laboratory-controlled conditions or in clear waters, images from highly turbid waters are often of relatively low quality and more variable, due to the large number of objects and the nonlinear illumination within each image. We first customized a segmentation procedure to locate objects within each image and extracted them for classification. A maximally stable extremal regions algorithm was applied to segment large gelatinous zooplankton, and an adaptive threshold approach was developed to segment small organisms such as copepods. Unlike images acquired under laboratory-controlled conditions or in clear waters, where the target objects are often the majority class, the classification here is treated as a multi-class problem. We customized a two-level hierarchical classification procedure using support vector machines to classify the target objects (< 5%) and remove the non-target objects (> 95%). Histograms of oriented gradients feature descriptors were constructed for the segmented objects. In the first step, all non-target and target objects were classified into different groups: arrow-like, copepod-like, and gelatinous zooplankton. Each object was then passed to a group-specific classifier to remove most non-target objects. 
After this automatic stage, an expert or non-expert manually removed the non-target objects that the procedure could not remove. The procedure was tested on 89,419 images collected in Chesapeake Bay, and results were consistent with visual counts, with > 80% accuracy for all three groups.
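The small-organism branch of the segmentation can be approximated with a local-mean adaptive threshold followed by connected-component filtering. This is a hedged sketch: the block size, offset, and minimum area below are invented for illustration, and the MSER branch for gelatinous zooplankton and the HOG/SVM classifiers are not reproduced.

```python
import numpy as np
from scipy import ndimage

def adaptive_threshold(image, block=15, offset=0.05):
    """Mark pixels darker than their local mean by `offset`,
    so a nonlinear illumination field does not need a global cut."""
    local_mean = ndimage.uniform_filter(image, size=block)
    return image < local_mean - offset

def extract_objects(image, min_area=20):
    """Segment candidate organisms and return bounding-box slices
    of connected components large enough to matter."""
    mask = adaptive_threshold(image)
    labels, n = ndimage.label(mask)
    if n == 0:
        return []
    areas = ndimage.sum(mask, labels, index=range(1, n + 1))
    slices = ndimage.find_objects(labels)
    return [sl for sl, a in zip(slices, areas) if a >= min_area]
```

Each returned box would then be cropped and passed to the feature-extraction and classification stages.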

  12. Superselective intra-arterial hepatic injection of indocyanine green (ICG) for fluorescence image-guided segmental positive staining: experimental proof of the concept.

    PubMed

    Diana, Michele; Liu, Yu-Yin; Pop, Raoul; Kong, Seong-Ho; Legnèr, Andras; Beaujeux, Remy; Pessaux, Patrick; Soler, Luc; Mutter, Didier; Dallemagne, Bernard; Marescaux, Jacques

    2017-03-01

Intraoperative liver segmentation can be obtained by means of percutaneous intra-portal injection of a fluorophore and illumination with a near-infrared light source. However, the percutaneous approach is challenging in the minimally invasive setting. We aimed to evaluate the feasibility of fluorescence liver segmentation by superselective intra-hepatic arterial injection of indocyanine green (ICG). Eight pigs (mean weight: 26.01 ± 5.21 kg) were involved. Procedures were performed in a hybrid experimental operative suite equipped with the Artis Zeego® multiaxis robotic angiography system. A pneumoperitoneum was established and four laparoscopic ports were introduced. The celiac trunk was catheterized, and a microcatheter was advanced into different segmental hepatic artery branches. A near-infrared laparoscope (D-Light P, Karl Storz) was used to detect the fluorescent signal. To assess the correspondence between arterial-based fluorescence demarcation and liver volume, metallic markers were placed along the fluorescent border, followed by 3D CT scanning after injection of intra-arterial radiological contrast (n = 3). To assess the correspondence between arterial and portal supplies, percutaneous intra-portal angiography and intra-arterial angiography were performed simultaneously (n = 1). A bright fluorescent signal enhancing the demarcation of target segments was obtained from 0.1 mg/mL within a matter of seconds. Correspondence between the volume of hepatic segments and arterial territories was confirmed by CT angiography. Higher background fluorescence noise was found after positive staining by intra-portal ICG injection, due to parenchymal accumulation and porto-systemic shunting. In the experimental setting, intra-hepatic arterial ICG injection rapidly highlights hepatic target segment borders, with a better signal-to-background ratio than portal vein injection.

  13. Automated skin segmentation in ultrasonic evaluation of skin toxicity in breast cancer radiotherapy.

    PubMed

    Gao, Yi; Tannenbaum, Allen; Chen, Hao; Torres, Mylin; Yoshida, Emi; Yang, Xiaofeng; Wang, Yuefeng; Curran, Walter; Liu, Tian

    2013-11-01

    Skin toxicity is the most common side effect of breast cancer radiotherapy and impairs the quality of life of many breast cancer survivors. We, along with other researchers, have recently found quantitative ultrasound to be effective as a skin toxicity assessment tool. Although more reliable than standard clinical evaluations (visual observation and palpation), the current procedure for ultrasound-based skin toxicity measurements requires manual delineation of the skin layers (i.e., epidermis-dermis and dermis-hypodermis interfaces) on each ultrasound B-mode image. Manual skin segmentation is time consuming and subjective. Moreover, radiation-induced skin injury may decrease image contrast between the dermis and hypodermis, which increases the difficulty of delineation. Therefore, we have developed an automatic skin segmentation tool (ASST) based on the active contour model with two significant modifications: (i) The proposed algorithm introduces a novel dual-curve scheme for the double skin layer extraction, as opposed to the original single active contour method. (ii) The proposed algorithm is based on a geometric contour framework as opposed to the previous parametric algorithm. This ASST algorithm was tested on a breast cancer image database of 730 ultrasound breast images (73 ultrasound studies of 23 patients). We compared skin segmentation results obtained with the ASST with manual contours performed by two physicians. The average percentage differences in skin thickness between the ASST measurement and that of each physician were less than 5% (4.8 ± 17.8% and -3.8 ± 21.1%, respectively). In summary, we have developed an automatic skin segmentation method that ensures objective assessment of radiation-induced changes in skin thickness. Our ultrasound technology offers a unique opportunity to quantify tissue injury in a more meaningful and reproducible manner than the subjective assessments currently employed in the clinic. 
Copyright © 2013 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
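The region-based geometric contour that the ASST builds on can be illustrated by the data term of a Chan-Vese style model. The sketch below alternates between estimating inside/outside mean intensities and reassigning pixels; it deliberately omits the curvature regularizer and the dual-curve extension that are central to the actual ASST, so it is a toy stand-in, not the published algorithm.

```python
import numpy as np

def two_region_segment(image, iters=20):
    """Data term of a Chan-Vese geometric contour: each pixel joins
    the region whose mean intensity it matches better, and the two
    means are re-estimated until the partition is stable."""
    mask = image > image.mean()          # crude initialization
    for _ in range(iters):
        c_in = image[mask].mean()        # mean inside the contour
        c_out = image[~mask].mean()      # mean outside the contour
        new = (image - c_in) ** 2 < (image - c_out) ** 2
        if (new == mask).all():
            break
        mask = new
    return mask
```

A full geometric active contour would add a smoothness term so the boundary stays regular even when dermis/hypodermis contrast is low, which is exactly the difficulty the abstract describes.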

  14. Phase retrieval in digital speckle pattern interferometry by application of two-dimensional active contours called snakes.

    PubMed

    Federico, Alejandro; Kaufmann, Guillermo H

    2006-03-20

    We propose a novel approach to retrieving the phase map coded by a single closed-fringe pattern in digital speckle pattern interferometry, which is based on the estimation of the local sign of the quadrature component. We obtain the estimate by calculating the local orientation of the fringes that have previously been denoised by a weighted smoothing spline method. We carry out the procedure of sign estimation by determining the local abrupt jumps of size pi in the orientation field of the fringes and by segmenting the regions defined by these jumps. The segmentation method is based on the application of two-dimensional active contours (snakes), with which one can also estimate absent jumps, i.e., those that cannot be detected from the local orientation of the fringes. The performance of the proposed phase-retrieval technique is evaluated for synthetic and experimental fringes and compared with the results obtained with the spiral-phase- and Fourier-transform methods.

  15. A spectral water index based on visual bands

    NASA Astrophysics Data System (ADS)

    Basaeed, Essa; Bhaskar, Harish; Al-Mualla, Mohammed

    2013-10-01

Land-water segmentation is an important preprocessing step in a number of remote sensing applications such as target detection, environmental monitoring, and map updating. A Normalized Optical Water Index (NOWI) is proposed to accurately discriminate between land and water regions in multi-spectral satellite imagery from DubaiSat-1. NOWI exploits the spectral characteristics of water content (using visible bands) and uses a non-linear normalization procedure that places strong emphasis on small changes in lower brightness values whilst guaranteeing that the segmentation process remains image-independent. The NOWI representation is validated through systematic experiments, evaluated using robust metrics, and compared against various supervised classification algorithms. Analysis has indicated that NOWI has the advantages that it: a) is a pixel-based method that requires no global knowledge of the scene under investigation, b) can be easily implemented in parallel processing, c) is image-independent and requires no training, d) works in different environmental conditions, e) provides high accuracy and efficiency, and f) works directly on the input image without any form of pre-processing.
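The general shape of a visible-band water index with non-linear normalization can be sketched as follows. The exact NOWI formula is defined in the paper and is not reproduced here, so both the log-style stretch (which expands small changes at low brightness) and the band combination below are stand-in assumptions for illustration only.

```python
import numpy as np

def nonlinear_normalize(band):
    """Log-style stretch: expands small differences among dark
    pixels while compressing bright ones (illustrative, not the
    paper's normalization)."""
    band = band.astype(float)
    return np.log1p(band) / np.log1p(band.max())

def visual_band_water_index(blue, green, red):
    """A normalized-difference style index from visible bands only:
    water tends to reflect more in blue/green and little in red,
    so water pixels score positive and land pixels negative."""
    b, g, r = map(nonlinear_normalize, (blue, green, red))
    return (b + g - 2 * r) / (b + g + 2 * r + 1e-9)
```

Being per-pixel and training-free, such an index matches properties a) through c) claimed in the abstract, whatever the exact functional form.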

  16. Recognition of speaker-dependent continuous speech with KEAL

    NASA Astrophysics Data System (ADS)

    Mercier, G.; Bigorgne, D.; Miclet, L.; Le Guennec, L.; Querre, M.

    1989-04-01

A description of the speaker-dependent continuous speech recognition system KEAL is given. An unknown utterance is recognized by means of the following procedures: acoustic analysis, phonetic segmentation and identification, and word and sentence analysis. The combination of feature-based, speaker-independent coarse phonetic segmentation with speaker-dependent statistical classification techniques is one of the main design features of the acoustic-phonetic decoder. The lexical access component is essentially based on a statistical dynamic programming technique which aims at matching a phonemic lexical entry containing various phonological forms against a phonetic lattice. Sentence recognition is achieved by use of a context-free grammar and a parsing algorithm derived from Earley's parser. A speaker adaptation module allows some of the system parameters to be adjusted by matching known utterances with their acoustical representation. The task to be performed, described by its vocabulary and its grammar, is given as a parameter of the system. Continuously spoken sentences extracted from a 'pseudo-Logo' language are analyzed and results are presented.

  17. Amplitude-aware permutation entropy: Illustration in spike detection and signal segmentation.

    PubMed

    Azami, Hamed; Escudero, Javier

    2016-05-01

Signal segmentation and spike detection are two important biomedical signal processing applications. Often, non-stationary signals must be segmented into piece-wise stationary epochs, or spikes need to be found among a background of noise, before being further analyzed. Permutation entropy (PE) has been proposed to evaluate the irregularity of a time series. PE is conceptually simple, structurally robust to artifacts, and computationally fast. It has been extensively used in many applications, but it has two key shortcomings. First, when a signal is symbolized using the Bandt-Pompe procedure, only the order of the amplitude values is considered and information regarding the amplitudes themselves is discarded. Second, the effect of equal amplitude values within each embedded vector is not addressed. To address these issues, we propose a new entropy measure based on PE: the amplitude-aware permutation entropy (AAPE). AAPE is sensitive to changes in the amplitude, in addition to the frequency, of the signals because it is more flexible than the classical PE in quantifying the signal motifs. To demonstrate how the AAPE method can enhance the quality of signal segmentation and spike detection, a set of synthetic and realistically simulated neuronal signals, electroencephalograms and neuronal data is processed. We compare the performance of AAPE in these problems against state-of-the-art approaches and evaluate the significance of the differences with a repeated ANOVA with post hoc Tukey's test. In signal segmentation, the accuracy of the AAPE-based method is higher than that of conventional segmentation methods. AAPE also leads to more robust results in the presence of noise. The spike detection results show that AAPE can detect spikes well, even single-sample spikes, unlike PE. For multi-sample spikes, the changes in AAPE are larger than in PE. In summary, we introduce a new entropy metric, AAPE, that incorporates amplitude information into the formulation of PE. The AAPE algorithm can be used in almost any irregularity-based application in various signal and image processing fields. The MATLAB code of the AAPE is freely available. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
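A compact implementation of the idea: instead of counting each ordinal pattern once, every embedded vector contributes a weight mixing its mean absolute amplitude with its mean absolute increment, balanced by a parameter A. This follows the commonly cited AAPE weighting; the defaults m = 3, delay = 1, A = 0.5 are conventional choices for illustration, not prescriptions from this abstract.

```python
import numpy as np
from collections import defaultdict

def permutation_patterns(x, m=3, delay=1):
    """Yield (ordinal pattern, embedded vector) pairs."""
    n = len(x) - (m - 1) * delay
    for i in range(n):
        v = x[i:i + (m - 1) * delay + 1:delay]
        yield tuple(np.argsort(v)), v

def aape(x, m=3, delay=1, A=0.5):
    """Amplitude-aware permutation entropy: pattern probabilities
    are amplitude-weighted rather than plain counts, so AAPE sees
    amplitude changes that classical PE discards."""
    weights = defaultdict(float)
    for pattern, v in permutation_patterns(np.asarray(x, float), m, delay):
        w = (A / m) * np.abs(v).sum() + \
            ((1 - A) / (m - 1)) * np.abs(np.diff(v)).sum()
        weights[pattern] += w
    p = np.array(list(weights.values()))
    p /= p.sum()
    return float(-(p * np.log(p)).sum())
```

A constant signal yields a single pattern and zero entropy; a white-noise signal approaches the maximum ln(m!), exactly as for classical PE.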

  18. Extracting built-up areas from TerraSAR-X data using object-oriented classification method

    NASA Astrophysics Data System (ADS)

    Wang, SuYun; Sun, Z. C.

    2017-02-01

Based on single-polarized TerraSAR-X data, the approach generates homogeneous segments on an arbitrary number of scale levels by applying a region-growing algorithm which takes the intensity of backscatter and shape-related properties into account. The object-oriented procedure consists of three main steps: firstly, the analysis of the local speckle behavior in the SAR intensity data, leading to the generation of a texture image; secondly, a segmentation based on the intensity image; thirdly, the classification of each segment using the derived texture file and intensity information in order to identify and extract built-up areas (BAs). In our research, the distribution of BAs in Dongying City is derived from a single-polarized TSX SM image (acquired on 17th June 2013) with an average ground resolution of 3 m using our proposed approach. By cross-validating randomly selected validation points against geo-referenced field sites and QuickBird high-resolution imagery, confusion matrices with statistical indicators are calculated and used for assessing the classification results. The results demonstrate that an overall accuracy of 92.89% and a kappa coefficient of 0.85 could be achieved. We have shown that combining texture information, derived from the analysis of the local speckle divergence, with intensity information for built-up area extraction is feasible, efficient, and rapid.

  19. Robust membrane detection based on tensor voting for electron tomography.

    PubMed

    Martinez-Sanchez, Antonio; Garcia, Inmaculada; Asano, Shoh; Lucic, Vladan; Fernandez, Jose-Jesus

    2014-04-01

    Electron tomography enables three-dimensional (3D) visualization and analysis of the subcellular architecture at a resolution of a few nanometers. Segmentation of structural components present in 3D images (tomograms) is often necessary for their interpretation. However, it is severely hampered by a number of factors that are inherent to electron tomography (e.g. noise, low contrast, distortion). Thus, there is a need for new and improved computational methods to facilitate this challenging task. In this work, we present a new method for membrane segmentation that is based on anisotropic propagation of the local structural information using the tensor voting algorithm. The local structure at each voxel is then refined according to the information received from other voxels. Because voxels belonging to the same membrane have coherent structural information, the underlying global structure is strengthened. In this way, local information is easily integrated at a global scale to yield segmented structures. This method performs well under low signal-to-noise ratio typically found in tomograms of vitrified samples under cryo-tomography conditions and can bridge gaps present on membranes. The performance of the method is demonstrated by applications to tomograms of different biological samples and by quantitative comparison with standard template matching procedure. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. A review of accuracy assessment for object-based image analysis: From per-pixel to per-polygon approaches

    NASA Astrophysics Data System (ADS)

    Ye, Su; Pontius, Robert Gilmore; Rakshit, Rahul

    2018-07-01

    Object-based image analysis (OBIA) has gained widespread popularity for creating maps from remotely sensed data. Researchers routinely claim that OBIA procedures outperform pixel-based procedures; however, it is not immediately obvious how to evaluate the degree to which an OBIA map compares to reference information in a manner that accounts for the fact that the OBIA map consists of objects that vary in size and shape. Our study reviews 209 journal articles concerning OBIA published between 2003 and 2017. We focus on the three stages of accuracy assessment: (1) sampling design, (2) response design and (3) accuracy analysis. First, we report the literature's overall characteristics concerning OBIA accuracy assessment. Simple random sampling was the most used method among probability sampling strategies, slightly more than stratified sampling. Office interpreted remotely sensed data was the dominant reference source. The literature reported accuracies ranging from 42% to 96%, with an average of 85%. A third of the articles failed to give sufficient information concerning accuracy methodology such as sampling scheme and sample size. We found few studies that focused specifically on the accuracy of the segmentation. Second, we identify a recent increase of OBIA articles in using per-polygon approaches compared to per-pixel approaches for accuracy assessment. We clarify the impacts of the per-pixel versus the per-polygon approaches respectively on sampling, response design and accuracy analysis. Our review defines the technical and methodological needs in the current per-polygon approaches, such as polygon-based sampling, analysis of mixed polygons, matching of mapped with reference polygons and assessment of segmentation accuracy. Our review summarizes and discusses the current issues in object-based accuracy assessment to provide guidance for improved accuracy assessments for OBIA.
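The accuracy figures reviewed above all derive from a confusion matrix cross-tabulating reference and mapped classes at the sample units. For reference, overall accuracy and Cohen's kappa (the two indicators most often reported in the reviewed articles) can be computed as:

```python
import numpy as np

def confusion_matrix(reference, mapped, n_classes):
    """Rows index the reference class, columns the mapped class."""
    cm = np.zeros((n_classes, n_classes), int)
    for r, m in zip(reference, mapped):
        cm[r, m] += 1
    return cm

def overall_accuracy(cm):
    """Fraction of samples on the diagonal (agreement)."""
    return cm.trace() / cm.sum()

def kappa(cm):
    """Cohen's kappa: agreement corrected for chance agreement
    expected from the row and column marginals."""
    n = cm.sum()
    po = cm.trace() / n
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2
    return (po - pe) / (1 - pe)
```

In a per-polygon assessment the same formulas apply, but the sample units are (possibly area-weighted) polygons rather than pixels, which is precisely the methodological shift the review documents.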

  1. Medical diagnosis imaging systems: image and signal processing applications aided by fuzzy logic

    NASA Astrophysics Data System (ADS)

    Hata, Yutaka

    2010-04-01

First, we describe an automated procedure for segmenting an MR image of a human brain based on fuzzy logic for diagnosing Alzheimer's disease. The intensity thresholds for segmenting the whole brain of a subject are automatically determined by finding the peaks of the intensity histogram. After these thresholds are evaluated in a region-growing step, the whole brain can be identified. Next, we describe a procedure for decomposing the obtained whole brain into the left and right cerebral hemispheres, the cerebellum and the brain stem. Our method then identifies the whole brain, the left cerebral hemisphere, the right cerebral hemisphere, the cerebellum and the brain stem. Secondly, we describe a transskull sonography system that can visualize the shape of the skull and brain surface from any point, to examine skull fracture and some brain diseases. We employ fuzzy signal processing to determine the skull and brain surface. A phantom model, an animal model with soft tissue, an animal model with brain tissue, and a human subject's forehead are used with our system. The shapes of the skin surface, skull surface, skull bottom, and brain tissue surface are all successfully determined.
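The histogram-peak step can be sketched as a simple local-maximum search over the intensity histogram; the thresholds then fall in the valleys between the dominant peaks. The fuzzy-logic rules the paper layers on top are not reproduced, and the bin count and prominence cut-off below are arbitrary illustrations.

```python
import numpy as np

def histogram_peaks(image, bins=64, min_prominence=0.01):
    """Return intensity values at local maxima of the (density)
    histogram; in a bimodal brain image the two dominant peaks
    bracket the threshold used to seed region growing."""
    hist, edges = np.histogram(image, bins=bins, density=True)
    centers = (edges[:-1] + edges[1:]) / 2
    peaks = [i for i in range(1, bins - 1)
             if hist[i] > hist[i - 1] and hist[i] >= hist[i + 1]
             and hist[i] > min_prominence]
    return [centers[i] for i in peaks]
```

A fuzzy variant would replace the hard prominence cut-off with membership functions over peak height and position, which is where the paper's approach differs from this crisp sketch.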

  2. Object-based Classification for Detecting Landslides and Stochastic Procedure to landslide susceptibility maps - A Case at Baolai Village, SW Taiwan

    NASA Astrophysics Data System (ADS)

    Lin, Ying-Tong; Chang, Kuo-Chen; Yang, Ci-Jian

    2017-04-01

As a result of global warming over the past decades, Taiwan has experienced more and more extreme typhoons with hazardous massive landslides. In this study, we use an object-oriented analysis method to classify landslide areas at Baolai Village using Formosat-2 satellite images. We applied multiresolution segmentation to generate the image objects, used hierarchical logic to classify five different kinds of features, and then classified the landslides into different types. In addition, we use a stochastic procedure to integrate landslide susceptibility maps. This study assumed the extreme event of 2009 Typhoon Morakot, in which precipitation reached 1991.5 mm in 5 days, and mapped the most landslide-susceptible areas. The results show that the landslide area in the study region changed greatly: most landslides were caused by gully erosion leading to dip-slope slides, or by stream erosion, especially at undercut banks. From the landslide susceptibility maps, we find that old landslide areas have a high potential for renewed landslides in extreme events. This study demonstrates the changes in landslide area and the landslide-susceptible areas. Keywords: Formosat-2, object-oriented, segmentation, classification, landslide, Baolai Village, SW Taiwan, FS

  3. [Anopexy according to Longo for hemorrhoids].

    PubMed

    Ruppert, R

    2016-11-01

    The treatment for hemorrhoids ranges from conservative management to surgical procedures. The procedures are tailored to the individual grading of hemorrhoids and the individual complaints. The standard Goligher classification of the hemorrhoids is the basis for further treatment and no differentiation is made between segmental hemorrhoids and circular hemorrhoids. In the case of advanced circular hemorrhoid disease the surgical procedure with a stapler, so-called stapler anopexy, is the procedure of choice.

  4. Morphometric Atlas Selection for Automatic Brachial Plexus Segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van de Velde, Joris, E-mail: joris.vandevelde@ugent.be; Department of Radiotherapy, Ghent University, Ghent; Wouters, Johan

Purpose: The purpose of this study was to determine the effects of atlas selection based on different morphometric parameters on the accuracy of automatic brachial plexus (BP) segmentation for radiation therapy planning. The segmentation accuracy was measured by comparing all of the generated automatic segmentations with anatomically validated gold standard atlases developed using cadavers. Methods and Materials: Twelve cadaver computed tomography (CT) atlases (3 males, 9 females; mean age: 73 years) were included in the study. One atlas was selected to serve as a patient, and the other 11 atlases were registered separately onto this “patient” using deformable image registration. This procedure was repeated for every atlas as a patient. Next, the Dice and Jaccard similarity indices and inclusion index were calculated for every registered BP with the original gold standard BP. In parallel, differences in several morphometric parameters that may influence the BP segmentation accuracy were measured for the different atlases. Specific brachial plexus-related CT-visible bony points were used to define the morphometric parameters. Subsequently, correlations between the similarity indices and morphometric parameters were calculated. Results: A clear negative correlation between difference in protraction-retraction distance and the similarity indices was observed (mean Pearson correlation coefficient = −0.546). All of the other investigated Pearson correlation coefficients were weak. Conclusions: Differences in the shoulder protraction-retraction position between the atlas and the patient during planning CT influence the BP autosegmentation accuracy. A greater difference in the protraction-retraction distance between the atlas and the patient reduces the accuracy of the BP automatic segmentation result.
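The overlap indices used here are standard set-overlap measures on binary masks. A reference implementation follows; the "inclusion index" is given one plausible reading (coverage of the gold standard), since the abstract does not define it.

```python
import numpy as np

def dice(a, b):
    """Dice similarity: 2|A ∩ B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2 * (a & b).sum() / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard index: |A ∩ B| / |A ∪ B|."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return (a & b).sum() / (a | b).sum()

def inclusion(a, b):
    """Fraction of the gold-standard mask `a` covered by `b`
    (an assumed reading of the paper's 'inclusion index')."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return (a & b).sum() / a.sum()
```

Dice and Jaccard are monotonically related (J = D / (2 − D)), so correlating either against a morphometric parameter, as the study does, yields the same sign of correlation.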

  5. Dual-energy-based metal segmentation for metal artifact reduction in dental computed tomography.

    PubMed

    Hegazy, Mohamed A A; Eldib, Mohamed Elsayed; Hernandez, Daniel; Cho, Myung Hye; Cho, Min Hyoung; Lee, Soo Yeol

    2018-02-01

In a dental CT scan, the presence of dental fillings or dental implants generates severe metal artifacts that often compromise readability of the CT images. Many metal artifact reduction (MAR) techniques have been introduced, but dental CT scans still suffer from severe metal artifacts particularly when multiple dental fillings or implants exist around the region of interest. The high attenuation coefficient of teeth often causes erroneous metal segmentation, compromising the MAR performance. We propose a metal segmentation method for a dental CT that is based on dual-energy imaging with a narrow energy gap. Unlike a conventional dual-energy CT, we acquire two projection data sets at two close tube voltages (80 and 90 kVp), and then, we compute the difference image between the two projection images with an optimized weighting factor so as to maximize the contrast of the metal regions. We reconstruct CT images from the weighted difference image to identify the metal region with global thresholding. We forward project the identified metal region to designate the metal trace on the projection image. We substitute the pixel values on the metal trace with the ones computed by the region filling method. The region filling in the metal trace removes high-intensity data made by the metallic objects from the projection image. We reconstruct final CT images from the region-filled projection image with the fusion-based approach. We have done imaging experiments on a dental phantom and a human skull phantom using a lab-built micro-CT and a commercial dental CT system. We have corrected the projection images of a dental phantom and a human skull phantom using the single-energy- and dual-energy-based metal segmentation methods. The single-energy-based method often failed in correcting the metal artifacts on the slices on which tooth enamel exists. 
The dual-energy-based method showed better MAR performances in all cases regardless of the presence of tooth enamel on the slice of interest. We have compared the MAR performances between both methods in terms of the relative error (REL), the sum of squared difference (SSD) and the normalized absolute difference (NAD). For the dental phantom images corrected by the single-energy-based method, the metric values were 95.3%, 94.5%, and 90.6%, respectively, while they were 90.1%, 90.05%, and 86.4%, respectively, for the images corrected by the dual-energy-based method. For the human skull phantom images, the metric values were improved from 95.6%, 91.5%, and 89.6%, respectively, to 88.2%, 82.5%, and 81.3%, respectively. The proposed dual-energy-based method has shown better performance in metal segmentation leading to better MAR performance in dental imaging. We expect the proposed metal segmentation method can be used to improve the MAR performance of existing MAR techniques that have metal segmentation steps in their correction procedures. © 2017 American Association of Physicists in Medicine.
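Two of the steps above reduce to simple array operations: forming the weighted difference of the two projections, and filling the metal trace. Linear interpolation along detector rows is a common stand-in for the region-filling step (the paper's own filling method may differ), and the weighting factor is left as a free parameter to be tuned for metal contrast.

```python
import numpy as np

def weighted_difference(p_low, p_high, w):
    """Weighted difference of the 80 and 90 kVp projections; w is
    chosen to maximize metal contrast (its value is data-dependent
    and not specified here)."""
    return p_low - w * p_high

def metal_trace_fill(projection, trace_mask):
    """Replace metal-trace samples in each detector row by linear
    interpolation from the neighboring non-metal samples, removing
    the high-intensity metal data before reconstruction."""
    filled = projection.astype(float).copy()
    cols = np.arange(projection.shape[1])
    for row in range(projection.shape[0]):
        bad = trace_mask[row]
        if bad.any() and not bad.all():
            filled[row, bad] = np.interp(cols[bad], cols[~bad],
                                         filled[row, ~bad])
    return filled
```

Reconstructing from the filled sinogram, then fusing with the original reconstruction outside the metal region, completes the fusion-based MAR pipeline the abstract describes.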

  6. Extreme liver resections with preservation of segment 4 only

    PubMed Central

    Balzan, Silvio Marcio Pegoraro; Gava, Vinícius Grando; Magalhães, Marcelo Arbo; Dotto, Marcelo Luiz

    2017-01-01

    AIM To evaluate safety and outcomes of a new technique for extreme hepatic resections with preservation of segment 4 only. METHODS The new method of extreme liver resection consists of a two-stage hepatectomy. The first stage involves a right hepatectomy with middle hepatic vein preservation and induction of left lobe congestion; the second stage involves a left lobectomy. Thus, the remnant liver is represented by the segment 4 only (with or without segment 1, ± S1). Five patients underwent the new two-stage hepatectomy (congestion group). Data from volumetric assessment made before the second stage was compared with that of 10 matched patients (comparison group) that underwent a single-stage right hepatectomy with middle hepatic vein preservation. RESULTS The two stages of the procedure were successfully carried out on all 5 patients. For the congestion group, the overall volume of the left hemiliver had increased 103% (mean increase from 438 mL to 890 mL) at 4 wk after the first stage of the procedure. Hypertrophy of the future liver remnant (i.e., segment 4 ± S1) was higher than that of segments 2 and 3 (144% vs 54%, respectively, P < 0.05). The median remnant liver volume-to-body weight ratio was 0.3 (range, 0.28-0.40) before the first stage and 0.8 (range, 0.45-0.97) before the second stage. For the comparison group, the rate of hypertrophy of the left liver after right hepatectomy with middle hepatic vein preservation was 116% ± 34%. Hypertrophy rates of segments 2 and 3 (123% ± 47%) and of segment 4 (108% ± 60%, P > 0.05) were proportional. The mean preoperative volume of segments 2 and 3 was 256 ± 64 cc and increased to 572 ± 257 cc after right hepatectomy. Mean preoperative volume of segment 4 increased from 211 ± 75 cc to 439 ± 180 cc after surgery. CONCLUSION The proposed method for extreme hepatectomy with preservation of segment 4 only represents a technique that could allow complete resection of multiple bilateral liver metastases. PMID:28765703

  7. Automatic Nuclei Segmentation in H&E Stained Breast Cancer Histopathology Images

    PubMed Central

    Veta, Mitko; van Diest, Paul J.; Kornegoor, Robert; Huisman, André; Viergever, Max A.; Pluim, Josien P. W.

    2013-01-01

    The introduction of fast digital slide scanners that provide whole slide images has led to a revival of interest in image analysis applications in pathology. Segmentation of cells and nuclei is an important first step towards automatic analysis of digitized microscopy images. We therefore developed an automated nuclei segmentation method that works with hematoxylin and eosin (H&E) stained breast cancer histopathology images, which represent regions of whole digital slides. The procedure can be divided into four main steps: 1) pre-processing with color unmixing and morphological operators, 2) marker-controlled watershed segmentation at multiple scales and with different markers, 3) post-processing for rejection of false regions and 4) merging of the results from multiple scales. The procedure was developed on a set of 21 breast cancer cases (subset A) and tested on a separate validation set of 18 cases (subset B). The evaluation was done in terms of both detection accuracy (sensitivity and positive predictive value) and segmentation accuracy (Dice coefficient). The mean estimated sensitivity for subset A was 0.875 (±0.092) and for subset B 0.853 (±0.077). The mean estimated positive predictive value was 0.904 (±0.075) and 0.886 (±0.069) for subsets A and B, respectively. For both subsets, the distribution of the Dice coefficients had a high peak around 0.9, with the vast majority of segmentations having values larger than 0.8. PMID:23922958
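The evaluation metrics named above, sketched for binary masks (a generic pixel-wise formulation; the paper's object-level matching of detected nuclei is more involved):

```python
import numpy as np

def sensitivity(pred, truth):
    """TP / (TP + FN): fraction of true pixels recovered."""
    return (pred & truth).sum() / truth.sum()

def positive_predictive_value(pred, truth):
    """TP / (TP + FP): fraction of predicted pixels that are correct."""
    return (pred & truth).sum() / pred.sum()

def dice(pred, truth):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    return 2 * (pred & truth).sum() / (pred.sum() + truth.sum())

# Toy example: two 4x4 squares overlapping on 12 of 16 pixels each.
truth = np.zeros((8, 8), bool); truth[2:6, 2:6] = True
pred = np.zeros((8, 8), bool);  pred[3:7, 2:6] = True
```

For these toy masks all three metrics equal 12/16 = 0.75.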

  8. Automatic nuclei segmentation in H&E stained breast cancer histopathology images.

    PubMed

    Veta, Mitko; van Diest, Paul J; Kornegoor, Robert; Huisman, André; Viergever, Max A; Pluim, Josien P W

    2013-01-01

    The introduction of fast digital slide scanners that provide whole slide images has led to a revival of interest in image analysis applications in pathology. Segmentation of cells and nuclei is an important first step towards automatic analysis of digitized microscopy images. We therefore developed an automated nuclei segmentation method that works with hematoxylin and eosin (H&E) stained breast cancer histopathology images, which represent regions of whole digital slides. The procedure can be divided into four main steps: 1) pre-processing with color unmixing and morphological operators, 2) marker-controlled watershed segmentation at multiple scales and with different markers, 3) post-processing for rejection of false regions and 4) merging of the results from multiple scales. The procedure was developed on a set of 21 breast cancer cases (subset A) and tested on a separate validation set of 18 cases (subset B). The evaluation was done in terms of both detection accuracy (sensitivity and positive predictive value) and segmentation accuracy (Dice coefficient). The mean estimated sensitivity for subset A was 0.875 (±0.092) and for subset B 0.853 (±0.077). The mean estimated positive predictive value was 0.904 (±0.075) and 0.886 (±0.069) for subsets A and B, respectively. For both subsets, the distribution of the Dice coefficients had a high peak around 0.9, with the vast majority of segmentations having values larger than 0.8.

  9. An automated method for accurate vessel segmentation.

    PubMed

    Yang, Xin; Liu, Chaoyue; Le Minh, Hung; Wang, Zhiwei; Chien, Aichi; Cheng, Kwang-Ting Tim

    2017-05-07

    Vessel segmentation is a critical task for various medical applications, such as diagnosis assistance of diabetic retinopathy, quantification of cerebral aneurysm's growth, and guiding surgery in neurosurgical procedures. Despite technology advances in image segmentation, existing methods still suffer from low accuracy for vessel segmentation in two challenging yet common scenarios in clinical usage: (1) regions with a low signal-to-noise-ratio (SNR), and (2) at vessel boundaries disturbed by adjacent non-vessel pixels. In this paper, we present an automated system which can achieve highly accurate vessel segmentation for both 2D and 3D images even under these challenging scenarios. Three key contributions achieved by our system are: (1) a progressive contrast enhancement method to adaptively enhance contrast of challenging pixels that were otherwise indistinguishable, (2) a boundary refinement method to effectively improve segmentation accuracy at vessel borders based on Canny edge detection, and (3) a content-aware region-of-interest (ROI) adjustment method to automatically determine the locations and sizes of ROIs which contain ambiguous pixels and demand further verification. Extensive evaluation of our method is conducted on both 2D and 3D datasets. On a public 2D retinal dataset (named DRIVE (Staal 2004 IEEE Trans. Med. Imaging 23 501-9)) and our 2D clinical cerebral dataset, our approach achieves superior performance to the state-of-the-art methods including a vesselness based method (Frangi 1998 Int. Conf. on Medical Image Computing and Computer-Assisted Intervention) and an optimally oriented flux (OOF) based method (Law and Chung 2008 European Conf. on Computer Vision).
An evaluation on 11 clinical 3D CTA cerebral datasets shows that our method can achieve 94% average accuracy with respect to the manual segmentation reference, which is 23% to 33% better than the five baseline methods (Yushkevich 2006 Neuroimage 31 1116-28; Law and Chung 2008 European Conf. on Computer Vision; Law and Chung 2009 IEEE Trans. Image Process. 18 596-612; Wang 2015 J. Neurosci. Methods 241 30-6) with manually optimized parameters. Our system has also been applied clinically for cerebral aneurysm development analysis. Experimental results on 10 patients' data, with two 3D CT scans per patient, show that our system's automatic diagnosis outcomes are consistent with clinicians' manual measurements.
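An illustrative stand-in for the "progressive contrast enhancement" idea above (the paper's scheme is adaptive and pixel-selective; this sketch simply re-windows intensities around the median over several rounds so that near-threshold vessel pixels pull away from the background):

```python
import numpy as np

def window_stretch(img, lo, hi):
    """Linearly map [lo, hi] to [0, 1], clipping outside the window."""
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

def progressive_enhance(img, steps=3, half_width=0.2):
    """Repeatedly re-window around the current median so that ambiguous
    intensities near the decision boundary gain contrast."""
    out = img.astype(float)
    for _ in range(steps):
        m = np.median(out)
        out = window_stretch(out, m - half_width, m + half_width)
    return out

# Toy image: faint vessels at 0.55 on a 0.45 background.
img = np.full((32, 32), 0.45)
img[:, 10] = img[:, 20] = 0.55
out = progressive_enhance(img)
```

On the toy image the 0.1 vessel/background gap widens to 0.5 after three rounds, which is the qualitative effect the paper exploits before thresholding.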

  10. An automated method for accurate vessel segmentation

    NASA Astrophysics Data System (ADS)

    Yang, Xin; Liu, Chaoyue; Le Minh, Hung; Wang, Zhiwei; Chien, Aichi; Cheng, Kwang-Ting (Tim)

    2017-05-01

    Vessel segmentation is a critical task for various medical applications, such as diagnosis assistance of diabetic retinopathy, quantification of cerebral aneurysm’s growth, and guiding surgery in neurosurgical procedures. Despite technology advances in image segmentation, existing methods still suffer from low accuracy for vessel segmentation in two challenging yet common scenarios in clinical usage: (1) regions with a low signal-to-noise-ratio (SNR), and (2) at vessel boundaries disturbed by adjacent non-vessel pixels. In this paper, we present an automated system which can achieve highly accurate vessel segmentation for both 2D and 3D images even under these challenging scenarios. Three key contributions achieved by our system are: (1) a progressive contrast enhancement method to adaptively enhance contrast of challenging pixels that were otherwise indistinguishable, (2) a boundary refinement method to effectively improve segmentation accuracy at vessel borders based on Canny edge detection, and (3) a content-aware region-of-interest (ROI) adjustment method to automatically determine the locations and sizes of ROIs which contain ambiguous pixels and demand further verification. Extensive evaluation of our method is conducted on both 2D and 3D datasets. On a public 2D retinal dataset (named DRIVE (Staal 2004 IEEE Trans. Med. Imaging 23 501-9)) and our 2D clinical cerebral dataset, our approach achieves superior performance to the state-of-the-art methods including a vesselness based method (Frangi 1998 Int. Conf. on Medical Image Computing and Computer-Assisted Intervention) and an optimally oriented flux (OOF) based method (Law and Chung 2008 European Conf. on Computer Vision).
An evaluation on 11 clinical 3D CTA cerebral datasets shows that our method can achieve 94% average accuracy with respect to the manual segmentation reference, which is 23% to 33% better than the five baseline methods (Yushkevich 2006 Neuroimage 31 1116-28; Law and Chung 2008 European Conf. on Computer Vision; Law and Chung 2009 IEEE Trans. Image Process. 18 596-612; Wang 2015 J. Neurosci. Methods 241 30-6) with manually optimized parameters. Our system has also been applied clinically for cerebral aneurysm development analysis. Experimental results on 10 patients’ data, with two 3D CT scans per patient, show that our system’s automatic diagnosis outcomes are consistent with clinicians’ manual measurements.

  11. Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.

    2009-04-01

    The subdivision of a time series into homogeneous segments has been performed using various methods applied to different disciplines. In climatology, for example, it is accompanied by the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on Hidden Markov Model (HMM) and Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model, estimating the parameters and the best-state sequence, respectively, with the Baum-Welch and Viterbi algorithms, was applied. In order to avoid the well-known dependence of the Baum-Welch algorithm on the initial condition, a Genetic Algorithm was developed. This algorithm is characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover, the function to be minimized was derived following the approach of Kehagias (2004), i.e. it is the so-called complete log-likelihood. The number of states was determined applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Since this last issue is complex and influences the entire analysis, a Multi Response Permutation Procedure (MRPP; Mielke et al., 1981) was inserted. It tests the model with K+1 states (where K is the state number of the best model) if its likelihood is close to that of the K-state model. Finally, an evaluation of the GAMM performances, applied as a break detection method in the field of climate time series homogenization, is shown. 1. G. Celeux and J.B. Durand, Comput Stat 2008. 2. A. Kehagias, Stoch Envir Res 2004. 3. P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev 1981.
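The best-state-sequence step mentioned above can be sketched with a minimal log-domain Viterbi decoder for a left-to-right HMM (toy parameters below, not the values estimated by the Baum-Welch/GA procedure):

```python
import numpy as np

def viterbi(log_A, log_B, log_pi):
    """log_A: (K, K) transition log-probabilities, log_B: (T, K) per-observation
    emission log-likelihoods, log_pi: (K,) initial log-distribution.
    Returns the most likely state sequence (the segmentation)."""
    T, K = log_B.shape
    delta = log_pi + log_B[0]
    psi = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A     # scores[i, j]: best path ending i -> j
        psi[t] = scores.argmax(axis=0)      # best predecessor for each state
        delta = scores.max(axis=0) + log_B[t]
    path = [int(delta.argmax())]            # backtrack from the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# Toy left-to-right model: two states, a single allowed forward transition.
A = np.array([[0.9, 0.1], [1e-12, 1.0]])
B = np.array([[0.9, 0.1], [0.1, 0.9]])      # B[state, symbol]
obs = [0, 0, 0, 1, 1]
path = viterbi(np.log(A), np.log(B[:, obs].T), np.log(np.array([1.0, 1e-12])))
```

For the toy observations the decoder places the single change point between the third and fourth observations.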

  12. Inducing and assessing differentiated emotion-feeling states in the laboratory.

    PubMed

    Philippot, P

    1993-03-01

    Two questions are addressed. The first question pertains to the capacity of film segments to induce emotional states that are: (a) as comparable as possible to naturally occurring emotions; (b) similar across individuals; and (c) clearly differentiated across the intended emotions. The second question concerns the discriminant capacity of self-report questionnaires of emotion-feeling states differing in their theoretical assumptions. Subjects viewed six short film segments and rated the strength of their responses on one of three kinds of questionnaires. The questionnaires were: (1) the Differential Emotions Scale that postulates category-based distinctions between emotions; (2) the Semantic Differential that postulates that emotions are distinguished along bipolar dimensions; and (3) free labelling of their feelings by the subjects (control condition with no theoretical a priori). Overall, results indicate that film segments can elicit a diversity of predictable emotions, in the same way, in a majority of individuals. In the present procedure, the Differential Emotions Scale yielded a better discrimination between emotional states than the Semantic Differential. Implications for emotion research and theories of the cognitive structure of emotion are discussed.

  13. Small amounts of tissue preserve pancreatic function

    PubMed Central

    Lu, Zipeng; Yin, Jie; Wei, Jishu; Dai, Cuncai; Wu, Junli; Gao, Wentao; Xu, Qing; Dai, Hao; Li, Qiang; Guo, Feng; Chen, Jianmin; Xi, Chunhua; Wu, Pengfei; Zhang, Kai; Jiang, Kuirong; Miao, Yi

    2016-01-01

    Abstract Middle-segment preserving pancreatectomy (MPP) is a novel procedure for treating multifocal lesions of the pancreas while preserving pancreatic function. However, long-term pancreatic function after this procedure remains unclear. The aims of this current study are to investigate short- and long-term outcomes, especially long-term pancreatic endocrine function, after MPP. From September 2011 to December 2015, 7 patients underwent MPP in our institution, and 5 cases with long-term outcomes were further analyzed in a retrospective manner. Percentage of tissue preservation was calculated using computed tomography volumetry. Serum insulin and C-peptide levels after oral glucose challenge were evaluated in 5 patients. Beta-cell secreting function including modified homeostasis model assessment of beta-cell function (HOMA2-beta), area under the curve (AUC) for C-peptide, and C-peptide index were evaluated and compared with those after pancreaticoduodenectomy (PD) and total pancreatectomy. Exocrine function was assessed based on questionnaires. Our case series included 3 women and 2 men, with median age of 50 (37–81) years. Four patients underwent pylorus-preserving PD together with distal pancreatectomy (DP), including 1 with spleen preserved. The remaining patient underwent Beger procedure and spleen-preserving DP. Median operation time and estimated intraoperative blood loss were 330 (250–615) min and 800 (400–5500) mL, respectively. Histological examination revealed 3 cases of metastatic lesion to the pancreas, 1 case of chronic pancreatitis, and 1 neuroendocrine tumor. Major postoperative complications included 3 cases of delayed gastric emptying and 2 cases of postoperative pancreatic fistula. Imaging studies showed that segments representing 18.2% to 39.5% of the pancreas with good blood supply had been preserved. 
With a median follow-up of 35.0 months for pancreatic function, only 1 of the 4 preoperatively euglycemic patients developed new-onset diabetes mellitus. Beta-cell function parameters in this group of patients were quite comparable to those after the Whipple procedure, and seemed better than those after total pancreatectomy. No symptoms of hypoglycemia were identified in any patient, although half of the patients reported symptoms of exocrine insufficiency. In conclusion, MPP is a feasible and effective procedure for middle-segment-sparing multicentric lesions in the pancreas, and patients exhibited satisfactory endocrine function after surgery. PMID:27861351

  14. Data reduction using cubic rational B-splines

    NASA Technical Reports Server (NTRS)

    Chou, Jin J.; Piegl, Les A.

    1992-01-01

    A geometric method is proposed for fitting rational cubic B-spline curves to data that represent smooth curves including intersection or silhouette lines. The algorithm is based on the convex hull and the variation diminishing properties of Bezier/B-spline curves. The algorithm has the following structure: it tries to fit one Bezier segment to the entire data set and, if this is impossible, subdivides the data set and reconsiders the subset. After accepting the subset, the algorithm tries to find the longest run of points within a tolerance and then approximates this set with a Bezier cubic segment. The algorithm applies this procedure repeatedly to the rest of the data points until all points are fitted. It is concluded that the algorithm delivers fitting curves which approximate the data with high accuracy even in cases with large tolerances.
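The fit-or-subdivide loop described above can be sketched as follows (a simplified variant: least-squares cubic Bezier fits with chord-length parameters and halving on failure, rather than the paper's longest-run search and convex-hull tests):

```python
import numpy as np

def bernstein3(t):
    """Cubic Bernstein basis evaluated at parameters t, shape (n, 4)."""
    t = np.asarray(t)[:, None]
    return np.hstack([(1 - t) ** 3, 3 * t * (1 - t) ** 2,
                      3 * t ** 2 * (1 - t), t ** 3])

def fit_bezier(pts):
    """Least-squares cubic Bezier fit; returns control points and max deviation."""
    chord = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.r_[0.0, np.cumsum(chord)] / chord.sum()    # chord-length parameters
    A = bernstein3(t)
    ctrl, *_ = np.linalg.lstsq(A, pts, rcond=None)
    err = np.linalg.norm(A @ ctrl - pts, axis=1).max()
    return ctrl, err

def fit_within_tolerance(pts, tol):
    """Return a list of (4, dim) control polygons covering all points."""
    ctrl, err = fit_bezier(pts)
    if err <= tol or len(pts) <= 4:
        return [ctrl]
    mid = len(pts) // 2                    # split the data set and reconsider
    return (fit_within_tolerance(pts[:mid + 1], tol)
            + fit_within_tolerance(pts[mid:], tol))
```

As in the paper, a larger tolerance yields fewer segments: a straight line fits in one segment, while a tight tolerance on a wavy curve forces subdivision.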

  15. Surgical anatomy of segmental liver transplantation.

    PubMed

    Deshpande, R R; Heaton, N D; Rela, M

    2002-09-01

    The emergence of split and living donor liver transplantation has necessitated re-evaluation of liver anatomy in greater depth and from a different perspective than before. Early attempts at split liver transplantation were met with significant numbers of vascular and biliary complications. Technical innovations in this field have evolved largely by recognizing anatomical anomalies and variations at operation, and devising novel ways of dealing with them. This has led to increasing acceptance of these procedures and decreased morbidity and mortality rates, similar to those observed with whole liver transplantation. The following review is based on clinical experience of more than 180 split and living related liver transplantations in adults and children, performed over a 7-year period from 1994 to 2001. A comprehensive understanding and application of surgical anatomy of the liver is essential to improve and maintain the excellent results of segmental liver transplantation.

  16. Calibration of 3D ultrasound to an electromagnetic tracking system

    NASA Astrophysics Data System (ADS)

    Lang, Andrew; Parthasarathy, Vijay; Jain, Ameet

    2011-03-01

    Electromagnetic (EM) tracking is an important guidance tool that can be used to aid procedures requiring accurate localization such as needle injections or catheter guidance. Using EM tracking, the information from different modalities can be easily combined using pre-procedural calibration information. These calibrations are performed individually, per modality, allowing different imaging systems to be mixed and matched according to the procedure at hand. In this work, a framework for the calibration of a 3D transesophageal echocardiography probe to EM tracking is developed. The complete calibration framework includes three required steps: data acquisition, needle segmentation, and calibration. Ultrasound (US) images of an EM tracked needle must be acquired with the position of the needles in each volume subsequently extracted by segmentation. The calibration transformation is determined through a registration between the segmented points and the recorded EM needle positions. Additionally, the speed of sound is compensated for since calibration is performed in water, which has a different speed of sound than is assumed by the US machine. A statistical validation framework has also been developed to provide further information related to the accuracy and consistency of the calibration. Further validation of the calibration showed an accuracy of 1.39 mm.
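The registration step above reduces to a rigid point-set registration between segmented needle positions (US frame) and EM-reported positions. A least-squares solution via SVD (the standard Kabsch/Umeyama construction, sketched here without the paper's speed-of-sound correction):

```python
import numpy as np

def rigid_register(P, Q):
    """Find rotation R and translation t minimizing sum ||R @ P_i + t - Q_i||^2
    for paired point sets P, Q of shape (n, 3)."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)            # center both clouds
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)              # cross-covariance SVD
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t
```

On noise-free synthetic correspondences this recovers the ground-truth transform exactly; with measurement noise it gives the least-squares estimate.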

  17. A spatiotemporal-based scheme for efficient registration-based segmentation of thoracic 4-D MRI.

    PubMed

    Yang, Y; Van Reeth, E; Poh, C L; Tan, C H; Tham, I W K

    2014-05-01

    Dynamic three-dimensional (3-D) (four-dimensional, 4-D) magnetic resonance (MR) imaging is gaining importance in the study of pulmonary motion for respiratory diseases and pulmonary tumor motion for radiotherapy. To perform quantitative analysis using 4-D MR images, segmentation of anatomical structures such as the lung and pulmonary tumor is required. Manual segmentation of entire thoracic 4-D MRI data that typically contains many 3-D volumes acquired over several breathing cycles is extremely tedious, time consuming, and suffers from high user variability. This requires the development of new automated segmentation schemes for 4-D MRI data segmentation. Registration-based segmentation, which uses automatic registration methods for segmentation, has been shown to segment structures accurately in 4-D data series. However, directly applying registration-based segmentation to segment 4-D MRI series lacks efficiency. Here we propose an automated 4-D registration-based segmentation scheme that is based on spatiotemporal information for the segmentation of thoracic 4-D MR lung images. The proposed scheme saved up to 95% of the computation while achieving comparably accurate segmentations relative to directly applying registration-based segmentation to the 4-D dataset. The scheme facilitates rapid 3-D/4-D visualization of the lung and tumor motion and potentially the tracking of tumor during radiation delivery.
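The propagation step at the heart of registration-based segmentation can be sketched in 2-D with nearest-neighbour resampling (a toy constant displacement field below; real thoracic 4-D MRI uses deformable registration to obtain the field):

```python
import numpy as np

def propagate_mask(ref_mask, disp):
    """Carry a reference mask through a displacement field.
    disp: (2, H, W); target pixel (y, x) reads reference (y + dy, x + dx).
    Nearest-neighbour sampling keeps the labels binary."""
    H, W = ref_mask.shape
    yy, xx = np.mgrid[0:H, 0:W]
    ry = np.clip(np.rint(yy + disp[0]).astype(int), 0, H - 1)
    rx = np.clip(np.rint(xx + disp[1]).astype(int), 0, W - 1)
    return ref_mask[ry, rx]
```

Once one 3-D volume of the series is segmented, the same resampling applied per time point yields labels for the whole 4-D series.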

  18. A region-based segmentation method for ultrasound images in HIFU therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Dong, E-mail: dongz@whu.edu.cn; Liu, Yu; Yang, Yan

    Purpose: Precisely and efficiently locating a tumor with less manual intervention in ultrasound-guided high-intensity focused ultrasound (HIFU) therapy is one of the keys to guaranteeing the therapeutic result and improving the efficiency of the treatment. The segmentation of ultrasound images has always been difficult due to the influences of speckle, acoustic shadows, and signal attenuation as well as the variety of tumor appearance. The quality of HIFU guidance images is even poorer than that of conventional diagnostic ultrasound images because the ultrasonic probe used for HIFU guidance usually obtains images without making contact with the patient’s body. Therefore, the segmentation becomes more difficult. To solve the segmentation problem of ultrasound guidance image in the treatment planning procedure for HIFU therapy, a novel region-based segmentation method for uterine fibroids in HIFU guidance images is proposed. Methods: Tumor partitioning in HIFU guidance image without manual intervention is achieved by a region-based split-and-merge framework. A new iterative multiple region growing algorithm is proposed to first split the image into homogenous regions (superpixels). The features extracted within these homogenous regions will be more stable than those extracted within the conventional neighborhood of a pixel. The split regions are then merged by a superpixel-based adaptive spectral clustering algorithm. To ensure the superpixels that belong to the same tumor can be clustered together in the merging process, a particular construction strategy for the similarity matrix is adopted for the spectral clustering, and the similarity matrix is constructed by taking advantage of a combination of specifically selected first-order and second-order texture features computed from the gray levels and the gray level co-occurrence matrixes, respectively.
The tumor region is picked out automatically from the background regions by an algorithm according to a priori information about the tumor position, shape, and size. Additionally, an appropriate cluster number for spectral clustering can be determined by the same algorithm, thus the automatic segmentation of the tumor region is achieved. Results: To evaluate the performance of the proposed method, 50 uterine fibroid ultrasound images from different patients receiving HIFU therapy were segmented, and the obtained tumor contours were compared with those delineated by an experienced radiologist. For area-based evaluation results, the mean values of the true positive ratio, the false positive ratio, and the similarity were 94.42%, 4.71%, and 90.21%, respectively, and the corresponding standard deviations were 2.54%, 3.12%, and 3.50%, respectively. For distance-based evaluation results, the mean values of the normalized Hausdorff distance and the normalized mean absolute distance were 4.93% and 0.90%, respectively, and the corresponding standard deviations were 2.22% and 0.34%, respectively. The running time of the segmentation process was 12.9 s for a 318 × 333 (pixels) image. Conclusions: Experiments show that the proposed method can segment the tumor region accurately and efficiently with less manual intervention, which provides for the possibility of automatic segmentation and real-time guidance in HIFU therapy.
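The second-order texture features mentioned above can be sketched with a normalized gray-level co-occurrence matrix (GLCM) for the horizontal neighbour offset and two classic statistics (the feature choice here is illustrative, not the paper's exact selection):

```python
import numpy as np

def glcm_horizontal(img, levels):
    """Normalized co-occurrence counts of gray-level pairs one pixel apart
    horizontally; img must hold integer levels in [0, levels)."""
    a, b = img[:, :-1].ravel(), img[:, 1:].ravel()
    M = np.zeros((levels, levels))
    np.add.at(M, (a, b), 1)          # unbuffered accumulation of pair counts
    return M / M.sum()

def glcm_contrast(P):
    """Sum of (i - j)^2 * P[i, j]: high for rough texture."""
    i, j = np.indices(P.shape)
    return float(((i - j) ** 2 * P).sum())

def glcm_homogeneity(P):
    """Sum of P[i, j] / (1 + |i - j|): high for smooth texture."""
    i, j = np.indices(P.shape)
    return float((P / (1.0 + np.abs(i - j))).sum())
```

A flat region gives contrast 0 and homogeneity 1, while a 0/1 checkerboard gives contrast 1 and homogeneity 0.5, which is the kind of separation the similarity matrix exploits.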

  19. Multiscale 3-D shape representation and segmentation using spherical wavelets.

    PubMed

    Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen

    2007-04-01

    This paper presents a novel multiscale shape representation and segmentation algorithm based on the spherical wavelet transform. This work is motivated by the need to compactly and accurately encode variations at multiple scales in the shape representation in order to drive the segmentation and shape analysis of deep brain structures, such as the caudate nucleus or the hippocampus. Our proposed shape representation can be optimized to compactly encode shape variations in a population at the needed scale and spatial locations, enabling the construction of more descriptive, nonglobal, nonuniform shape probability priors to be included in the segmentation and shape analysis framework. In particular, this representation addresses the shortcomings of techniques that learn a global shape prior at a single scale of analysis and cannot represent fine, local variations in a population of shapes in the presence of a limited dataset. Specifically, our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We further refine the shape representation by separating into groups wavelet coefficients that describe independent global and/or local biological variations in the population, using spectral graph partitioning. We then learn a prior probability distribution induced over each group to explicitly encode these variations at different scales and spatial locations. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to two different brain structures, the caudate nucleus and the hippocampus, of interest in the study of schizophrenia. 
We show: 1) a reconstruction task of a test set to validate the expressiveness of our multiscale prior and 2) a segmentation task. In the reconstruction task, our results show that for a given training set size, our algorithm significantly improves the approximation of shapes in a testing set over the Point Distribution Model, which tends to oversmooth data. In the segmentation task, our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm, by capturing finer shape details.

  20. Multiscale 3-D Shape Representation and Segmentation Using Spherical Wavelets

    PubMed Central

    Nain, Delphine; Haker, Steven; Bobick, Aaron

    2013-01-01

    This paper presents a novel multiscale shape representation and segmentation algorithm based on the spherical wavelet transform. This work is motivated by the need to compactly and accurately encode variations at multiple scales in the shape representation in order to drive the segmentation and shape analysis of deep brain structures, such as the caudate nucleus or the hippocampus. Our proposed shape representation can be optimized to compactly encode shape variations in a population at the needed scale and spatial locations, enabling the construction of more descriptive, nonglobal, nonuniform shape probability priors to be included in the segmentation and shape analysis framework. In particular, this representation addresses the shortcomings of techniques that learn a global shape prior at a single scale of analysis and cannot represent fine, local variations in a population of shapes in the presence of a limited dataset. Specifically, our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We further refine the shape representation by separating into groups wavelet coefficients that describe independent global and/or local biological variations in the population, using spectral graph partitioning. We then learn a prior probability distribution induced over each group to explicitly encode these variations at different scales and spatial locations. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to two different brain structures, the caudate nucleus and the hippocampus, of interest in the study of schizophrenia. 
We show: 1) a reconstruction task of a test set to validate the expressiveness of our multiscale prior and 2) a segmentation task. In the reconstruction task, our results show that for a given training set size, our algorithm significantly improves the approximation of shapes in a testing set over the Point Distribution Model, which tends to oversmooth data. In the segmentation task, our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm, by capturing finer shape details. PMID:17427745

  1. SU-F-J-171: Robust Atlas Based Segmentation of the Prostate and Peripheral Zone Regions On MRI Utilizing Multiple MRI System Vendors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Padgett, K; Pollack, A; Stoyanova, R

    Purpose: Automatically generated prostate MRI contours can be used to aid in image registration with CT or ultrasound and to reduce the burden of contouring for radiation treatment planning. In addition, prostate and zonal contours can help automate quantitative imaging features extraction and the analyses of longitudinal MRI studies. These potential gains are limited if the solutions are not compatible across different MRI vendors. The goal of this study is to characterize an atlas based automatic segmentation procedure of the prostate collected on MRI systems from multiple vendors. Methods: The prostate and peripheral zone (PZ) were manually contoured by an expert radiation oncologist on T2-weighted scans acquired on both GE (n=31) and Siemens (n=33) 3T MRI systems. A leave-one-out approach was utilized where the target subject is removed from the atlas before the segmentation algorithm is initiated. The atlas-segmentation method finds the best nine matched atlas subjects and then performs a normalized intensity-based free-form deformable registration of these subjects to the target subject. These nine contours are then merged into a single contour using Simultaneous Truth and Performance Level Estimation (STAPLE). Contour comparisons were made using Dice similarity coefficients (DSC) and Hausdorff distances. Results: Using the T2 FatSat (FS) GE datasets, the atlas-generated contours resulted in an average DSC of 0.83±0.06 for prostate, 0.57±0.12 for PZ and 0.75±0.09 for CG. Similar results were found when using the Siemens data with a DSC of 0.79±0.14 for prostate, 0.54±0.16 for PZ and 0.70±0.9 for CG. Contrast between the prostate and surrounding anatomy, and between the PZ and CG contours, showed clear separation for both vendors; significance was found for all comparisons (p-value < 0.0001).
Conclusion: Atlas-based segmentation yielded promising results for all contours compared to expertly defined contours on both Siemens and GE 3T systems, providing fast and automatic segmentation of the prostate. Funding Support, Disclosures, and Conflict of Interest: AS Nelson is a partial owner of MIM Software, Inc. AS Nelson and A Swallen are current employees of MIM Software, Inc.
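The Dice similarity coefficient used above to score contour overlap is straightforward to compute; a minimal sketch (the toy masks are invented for illustration, not data from the study):

```python
def dice(a: set, b: set) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|), with masks given as sets of voxel coordinates."""
    if not a and not b:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * len(a & b) / (len(a) + len(b))

# Toy 2D example: two overlapping square "contours" on a 6x6 grid.
auto = {(r, c) for r in range(1, 4) for c in range(1, 4)}    # 9 voxels
manual = {(r, c) for r in range(2, 5) for c in range(2, 5)}  # 9 voxels, 4 shared
print(round(dice(auto, manual), 3))  # 2*4/(9+9) -> 0.444
```

A DSC of 1 means perfect overlap and 0 means disjoint masks, which is why the reported values of about 0.8 for the whole prostate versus about 0.55 for the thin PZ structure are plausible.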

  2. A semiautomatic segmentation method for prostate in CT images using local texture classification and statistical shape modeling.

    PubMed

    Shahedi, Maysam; Halicek, Martin; Guo, Rongrong; Zhang, Guoyi; Schuster, David M; Fei, Baowei

    2018-06-01

Prostate segmentation in computed tomography (CT) images is useful for treatment planning and procedure guidance, such as external beam radiotherapy and brachytherapy. However, because of the low soft-tissue contrast of CT images, manual segmentation of the prostate is a time-consuming task with high interobserver variation. In this study, we proposed a semiautomated, three-dimensional (3D) segmentation method for prostate CT images using shape and texture analysis, and we evaluated the method against manual reference segmentations. The prostate gland usually has a globular shape with a smoothly curved surface, so its shape can be accurately modeled or reconstructed from a limited number of well-distributed surface points. In a training dataset, using the prostate gland centroid as the origin of a coordinate system, we defined an intersubject correspondence between prostate surface points based on spherical coordinates. We applied this correspondence to generate a point distribution model of prostate shape using principal component analysis and to study the local texture difference between prostate and nonprostate tissue near the different prostate surface subregions. We then combined the learned shape and texture characteristics of the prostate in CT images with user inputs to segment a new image. We trained our segmentation algorithm on 23 CT images and tested it on two sets: 10 nonbrachytherapy and 37 post-low-dose-rate brachytherapy CT images. We used a set of error metrics to evaluate the segmentation results against two experts' manual reference segmentations. For both the nonbrachytherapy and post-brachytherapy image sets, the average measured Dice similarity coefficient (DSC) was 88% and the average mean absolute distance (MAD) was 1.9 mm. The average measured differences between the two experts on both datasets were 92% (DSC) and 1.1 mm (MAD).
The proposed semiautomatic segmentation algorithm showed fast, robust, and accurate performance for 3D prostate segmentation of CT images, specifically when no prior intrapatient information (i.e., previously segmented images) was available. The accuracy of the algorithm is comparable to the best performance results reported in the literature and approaches the interexpert variability observed in manual segmentation. © 2018 American Association of Physicists in Medicine.
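The mean absolute distance (MAD) reported above measures average surface-to-surface error; a minimal sketch of a symmetric point-set version, using made-up toy contours rather than the study's data:

```python
import math

def mean_absolute_distance(surf_a, surf_b):
    """Symmetric mean absolute surface distance between two point sets:
    average each point's distance to the nearest point on the other surface,
    in both directions, then take the mean of the two directed averages."""
    def one_way(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return 0.5 * (one_way(surf_a, surf_b) + one_way(surf_b, surf_a))

# Two toy "contours": unit squares offset by 1 mm along x.
a = [(0, 0), (1, 0), (1, 1), (0, 1)]
b = [(1, 0), (2, 0), (2, 1), (1, 1)]
print(mean_absolute_distance(a, b))  # 0.5
```

Unlike the DSC, which is dimensionless, MAD is in millimeters, which is why the abstract reports the two metrics side by side (88% DSC, 1.9 mm MAD).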

  3. Successful reimplantation of extruded long bone segments in open fractures of lower limb--a report of 3 cases.

    PubMed

    Shanmuganathan, Rajasekaran; Chandra Mohan, Arun Kamal; Agraharam, Devendra; Perumal, Ramesh; Jayaramaraju, Dheenadhayalan; Kulkarni, Sunil

    2015-07-01

Extruded bone segments are a rare complication of high-energy open fractures. Routinely, these fractures are treated by debridement followed by bone-loss management in the form of either bone transport or free fibula transfer. There are very few reports in the literature on reimplantation of extruded bone segments, and there are no clear guidelines regarding the timing of reimplantation, bone stabilisation, or sterilisation techniques. Reimplantation of extruded bone is a risky procedure because of the high chance of infection, which determines the final outcome and can result in secondary amputation. We present two cases of successful reimplantation of an extruded diaphyseal segment of the femur and one case of reimplantation of an extruded segment of the tibia. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Human body segmentation via data-driven graph cut.

    PubMed

    Li, Shifeng; Lu, Huchuan; Shao, Xingqing

    2014-11-01

Human body segmentation is a challenging and important problem in computer vision. Existing methods usually entail a time-consuming training phase for prior-knowledge learning, with complex shape matching for body segmentation. In this paper, we propose a data-driven method that integrates top-down body pose information and bottom-up low-level visual cues for segmenting humans in static images within the graph cut framework. The key idea of our approach is to first exploit human kinematics to search for body part candidates via dynamic programming, yielding high-level evidence. Then, body part classifiers are used to obtain bottom-up cues of human body distribution as low-level evidence. All the evidence collected from the top-down and bottom-up procedures is integrated in a graph cut framework for human body segmentation. Qualitative and quantitative experimental results demonstrate the merits of the proposed method in segmenting human bodies with arbitrary poses from cluttered backgrounds.
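The graph cut framework referred to above reduces segmentation to a minimum s-t cut, which is solvable by max-flow. Below is a self-contained Edmonds-Karp sketch on a toy four-pixel "image"; the unary (terminal) and pairwise (smoothness) weights are invented for illustration and are not taken from the paper:

```python
from collections import deque

def max_flow_min_cut(n, capacity, s, t):
    """Edmonds-Karp max flow on an n-node dense capacity matrix;
    returns the set of nodes on the source side of the minimum cut."""
    flow = [[0] * n for _ in range(n)]
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break  # no augmenting path left: flow is maximal
        # Find the bottleneck capacity along the path and augment
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(capacity[u][v] - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
    # Source side of the min cut = nodes still reachable in the residual graph
    side, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if v not in side and capacity[u][v] - flow[u][v] > 0:
                side.add(v)
                q.append(v)
    return side

# Toy 1-D "image": pixels 0-3, source=4 (foreground prior), sink=5 (background).
n, s, t = 6, 4, 5
cap = [[0] * n for _ in range(n)]
cap[s][0] = cap[s][1] = 5        # top-down prior: pixels 0,1 look like foreground
cap[2][t] = cap[3][t] = 5        # pixels 2,3 look like background
for i in range(3):               # smoothness term between neighbouring pixels
    cap[i][i + 1] = cap[i + 1][i] = 1
fg = sorted(v for v in max_flow_min_cut(n, cap, s, t) if v < 4)
print(fg)  # [0, 1]
```

In the paper's setting, the pose-derived evidence would set the terminal weights and low-level cues the pairwise weights; production systems would use a specialized solver (e.g. Boykov-Kolmogorov) rather than this didactic O(VE²) version.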

  5. Youth Attitude Tracking Study II Wave 17 -- Fall 1986.

    DTIC Science & Technology

    1987-06-01

Fragmentary table-of-contents excerpt: Preface; Segmentation Analyses; Methodology of YATS II; Sampling Design Overview; Appendix A: Sampling Design, Estimation Procedures and Estimated Sampling Errors; Appendix B: Data Collection Procedures.

  6. Federal Logistics Information System (FLIS) Procedures Manual, Volume 1, Change 1

    DTIC Science & Technology

    1996-07-01

Fragmentary excerpt listing FLIS Document Identifier Codes (DICs), e.g., KAT (Add FLIS Data Base Data), KDZ (Delete Logistics Transfer Data), KFA (Match Through Association), KFC (File Data Minus Security Classified Characteristics), and KEC (Output Exceeds AUTODIN Limitations).

  7. Standard operating procedure for calculating genome-to-genome distances based on high-scoring segment pairs.

    PubMed

    Auch, Alexander F; Klenk, Hans-Peter; Göker, Markus

    2010-01-28

DNA-DNA hybridization (DDH) is a widely applied wet-lab technique for estimating the overall similarity between the genomes of two organisms. Basing the species concept for prokaryotes ultimately on DDH was a pragmatic choice by microbiologists for deciding on the recognition of novel species, and it also allowed a relatively high degree of standardization compared to other areas of taxonomy. However, DDH is tedious and error-prone and, first and foremost, cannot be used to incrementally establish a comparative database. Recent studies have shown that in-silico methods for the comparison of genome sequences can be used to replace DDH. Considering the ongoing rapid technological progress of sequencing methods, genome-based prokaryote taxonomy is coming into reach. However, calculating distances between genomes depends on multiple choices of software and program settings. We here provide an overview of the modifications that can be applied to distance methods based on high-scoring segment pairs (HSPs) or maximal unique matches (MUMs) and that need to be documented. General recommendations on determining HSPs using BLAST or other algorithms are also provided. As a reference implementation, we introduce the GGDC web server (http://ggdc.gbdp.org).
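As an illustration of turning HSPs into a genome distance, here is one simple coverage-style formula in the spirit of the GBDP family; this is an assumed, illustrative variant, not the GGDC server's exact default formula:

```python
def hsp_coverage_distance(hsp_lengths, genome1_len, genome2_len):
    """Illustrative HSP-based genome distance (NOT the GGDC default):
    one minus the fraction of the two genomes covered by HSPs."""
    covered = 2 * sum(hsp_lengths)  # each HSP covers a stretch in both genomes
    return 1.0 - covered / (genome1_len + genome2_len)

# Toy example: three HSPs totaling 300 kb between two 1 Mb genomes.
print(hsp_coverage_distance([150_000, 100_000, 50_000], 1_000_000, 1_000_000))  # 0.7
```

The paper's point is precisely that several such formulas exist (coverage-, identity-, and length-based) and that which one is used, together with the BLAST settings that produced the HSPs, must be documented for results to be reproducible.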

  8. A procedure for testing prospective remembering in persons with neurological impairments.

    PubMed

    Titov, N; Knight, R G

    2000-10-01

A video-based procedure for assessing prospective remembering (PR) in brain-injured clients is described. In this task, a list of instructions is given, each comprising an action (buy a hamburger) and a cue (at McDonalds), which are to be recalled while watching a videotape segment showing the view of a person walking through a shopping area. A group of 12 clients with varying degrees of memory impairment undergoing rehabilitation completed both the video test and a comparable task in real life. Significant correlations were found between the two measures, indicating that a video-based analogue can be used to estimate prospective remembering in real life. Scores on the PR task were associated with accuracy of recall on a word-list task, but not with the Working Memory Index of the Wechsler Memory Scale-III, suggesting that the task is sensitive to levels of amnesic deficit.

  9. Sequence analysis of the canine mitochondrial DNA control region from shed hair samples in criminal investigations.

    PubMed

    Berger, C; Berger, B; Parson, W

    2012-01-01

In recent years, evidence from domestic dogs has increasingly been analyzed by forensic DNA testing. Canine hairs in particular have proved most suitable and practical, owing to the high rate of hair transfer between dogs and humans. Starting with the description of a contamination-free sample handling procedure, we give a detailed workflow for sequencing hypervariable segments (HVS) of the mtDNA control region from canine evidence. After the hair material is lysed and the DNA extracted by phenol/chloroform, the amplification and sequencing strategy covers HVS I and II of the canine control region and is optimized for DNA of medium-to-low quality and quantity. The sequencing procedure is based on the Sanger BigDye dye-terminator method, and separation of the sequencing reaction products is performed on a conventional multicolor fluorescence detection capillary electrophoresis platform. Finally, software-aided base calling and sequence interpretation are addressed with examples.

  10. Conjoint Analysis for New Service Development on Electricity Distribution in Indonesia

    NASA Astrophysics Data System (ADS)

    Widaningrum, D. L.; Chynthia; Astuti, L. D.; Seran, M. A. B.

    2017-07-01

Illegal use of electricity is still rampant in Indonesia, especially for activities where no power source is available, such as at street vendors' locations. It is not only detrimental to the state but also harmful to the perpetrators of electricity theft and the surrounding communities. The purpose of this study is to create a New Service Development (NSD) to provide a new electricity source for street vendors' activities based on their preferences. The methods applied in the NSD are Conjoint Analysis, Cluster Analysis, Quality Function Deployment (QFD), Service Blueprint, Process Flow Diagrams, and Quality Control Plan. The results of this study are the attributes of the new electricity service and their importance based on street vendors' preferences as customers, customer segmentation, the service design for the new service, the technical response design, the operational procedures design, and the quality control plan for the existing operational procedures.

  11. Partial lesions of the intratemporal segment of the facial nerve: graft versus partial reconstruction.

    PubMed

    Bento, Ricardo F; Salomone, Raquel; Brito, Rubens; Tsuji, Robinson K; Hausen, Mariana

    2008-09-01

In cases of partial lesions of the intratemporal segment of the facial nerve, should the surgeon perform an intraoperative partial reconstruction, or partially remove the injured segment and place a graft? We present results from partial lesion reconstruction of the intratemporal segment of the facial nerve. A retrospective study was performed on 42 patients who presented with partial lesions of the intratemporal segment of the facial nerve between 1988 and 2005. The patients were divided into 3 groups based on the procedure used: interposition of a partial graft on the injured area of the nerve (group 1; 12 patients); keeping the preserved part and performing tubulization (group 2; 8 patients); and dividing the injured nerve into proximal and distal parts and placing a total graft of the sural nerve (group 3; 22 patients). Fracture of the temporal bone was the most frequent cause of the lesion in all groups, followed by iatrogenic causes (p < 0.005). Results of grade III or better on the House-Brackmann scale were obtained by 1 patient (8.3%) in group 1, none (0.0%) in group 2, and 15 patients (68.2%) in group 3 (p < 0.001). The best surgical technique for treating a partial lesion of the facial nerve is still questionable. Among these 42 patients, the best results were obtained with a total graft of the facial nerve.

  12. Do Lordotic Cages Provide Better Segmental Lordosis Versus Nonlordotic Cages in Lateral Lumbar Interbody Fusion (LLIF)?

    PubMed

    Sembrano, Jonathan N; Horazdovsky, Ryan D; Sharma, Amit K; Yson, Sharon C; Santos, Edward R G; Polly, David W

    2017-05-01

A retrospective comparative radiographic review. To evaluate the radiographic changes brought about by lordotic and nonlordotic cages on segmental and regional lumbar sagittal alignment and disk height in lateral lumbar interbody fusion (LLIF). The effects of cage design on operative-level segmental lordosis in posterior interbody fusion procedures have been reported. However, there are no studies comparing the effect of sagittal implant geometry in LLIF. This is a comparative radiographic analysis of consecutive LLIF procedures performed with lordotic and nonlordotic interbody cages. Forty patients (61 levels) underwent LLIF. Average age was 57 years (range, 30-83 y). Ten-degree lordotic PEEK cages were used at 31 lumbar interbody levels, and nonlordotic cages at 30 levels. The following parameters were measured on preoperative and postoperative radiographs: segmental lordosis at the operative level; anterior and posterior disk heights at the operative level; segmental lordosis at the supra-adjacent and subjacent levels; and overall lumbar (L1-S1) lordosis. Measurement changes for each cage group were compared using paired t test analysis. The use of lordotic cages in LLIF resulted in a significant increase in lordosis at operative levels (2.8 degrees; P=0.01), whereas nonlordotic cages did not (0.6 degrees; P=0.71) when compared with preoperative segmental lordosis. Anterior and posterior disk heights were significantly increased in both groups (P<0.01). Neither cage group showed significant change in overall lumbar lordosis (lordotic P=0.86 vs. nonlordotic P=0.25). Lordotic cages provided a significant increase in operative-level segmental lordosis compared with nonlordotic cages, although overall lumbar lordosis remained unchanged. Anterior and posterior disk heights were significantly increased by both cages, providing a basis for indirect spinal decompression.

  13. Assembly Test Article (ATA)

    NASA Technical Reports Server (NTRS)

    Ricks, Glen A.

    1988-01-01

    The assembly test article (ATA) consisted of two live loaded redesigned solid rocket motor (RSRM) segments which were assembled and disassembled to simulate the actual flight segment stacking process. The test assembly joint was flight RSRM design, which included the J-joint insulation design and metal capture feature. The ATA test was performed mid-November through 24 December 1987, at Kennedy Space Center (KSC), Florida. The purpose of the test was: certification that vertical RSRM segment mating and separation could be accomplished without any damage; verification and modification of the procedures in the segment stacking/destacking documents; and certification of various GSE to be used for flight assembly and inspection. The RSRM vertical segment assembly/disassembly is possible without any damage to the insulation, metal parts, or seals. The insulation J-joint contact area was very close to the predicted values. Numerous deviations and changes to the planning documents were made to ensure the flight segments are effectively and correctly stacked. Various GSE were also certified for use on flight segments, and are discussed in detail.

  14. Multiresolution multiscale active mask segmentation of fluorescence microscope images

    NASA Astrophysics Data System (ADS)

    Srinivasa, Gowri; Fickus, Matthew; Kovačević, Jelena

    2009-08-01

    We propose an active mask segmentation framework that combines the advantages of statistical modeling, smoothing, speed and flexibility offered by the traditional methods of region-growing, multiscale, multiresolution and active contours respectively. At the crux of this framework is a paradigm shift from evolving contours in the continuous domain to evolving multiple masks in the discrete domain. Thus, the active mask framework is particularly suited to segment digital images. We demonstrate the use of the framework in practice through the segmentation of punctate patterns in fluorescence microscope images. Experiments reveal that statistical modeling helps the multiple masks converge from a random initial configuration to a meaningful one. This obviates the need for an involved initialization procedure germane to most of the traditional methods used to segment fluorescence microscope images. While we provide the mathematical details of the functions used to segment fluorescence microscope images, this is only an instantiation of the active mask framework. We suggest some other instantiations of the framework to segment different types of images.

  15. Sulci segmentation using geometric active contours

    NASA Astrophysics Data System (ADS)

    Torkaman, Mahsa; Zhu, Liangjia; Karasev, Peter; Tannenbaum, Allen

    2017-02-01

Sulci are groove-like regions lying in the depth of the cerebral cortex between gyri, which together form the folded appearance of human and mammalian brains. Sulci play an important role in the structural analysis of the brain, morphometry (i.e., the measurement of brain structures), anatomical labeling, and landmark-based registration.1 Moreover, sulcal morphological changes are related to cortical thickness, the measurement of which may provide useful information for studying a variety of psychiatric disorders. Manually extracting sulci requires complying with complex protocols, which makes the procedure both tedious and error prone.2 In this paper, we describe an automatic procedure, employing geometric active contours, which extracts the sulci. Sulcal boundaries are obtained by minimizing a certain energy functional whose minimum is attained at the boundary of the given sulci.

  16. SU-C-201-04: Quantification of Perfusion Heterogeneity Based On Texture Analysis for Fully Automatic Detection of Ischemic Deficits From Myocardial Perfusion Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Y; Huang, H; Su, T

Purpose: Texture-based quantification of image heterogeneity has been a popular topic in imaging studies in recent years. As previous studies mainly focus on oncological applications, we report our recent efforts to apply such techniques to cardiac perfusion imaging. A fully automated procedure has been developed to perform texture analysis for measuring image heterogeneity. Clinical data were used to evaluate the preliminary performance of these methods. Methods: Myocardial perfusion images from Thallium-201 scans were collected from 293 patients with suspected coronary artery disease. Each subject underwent a Tl-201 scan and a percutaneous coronary intervention (PCI) within three months. The PCI result was used as the gold standard for coronary ischemia of more than 70% stenosis. Each Tl-201 scan was spatially normalized to an image template for fully automatic segmentation of the LV. The segmented voxel intensities were then carried into texture analysis with our open-source software, the Chang Gung Image Texture Analysis toolbox (CGITA). To evaluate the clinical performance of image heterogeneity for detecting coronary stenosis, receiver operating characteristic (ROC) analysis was used to compute the overall accuracy, sensitivity, and specificity as well as the area under the curve (AUC). These indices were compared to those obtained from the commercially available semi-automatic software QPS. Results: With the fully automatic procedure for quantifying heterogeneity from Tl-201 scans, we achieved good discrimination, with good accuracy (74%), sensitivity (73%), specificity (77%), and an AUC of 0.82. This performance is similar to that of the semi-automatic QPS software, which gives a sensitivity of 71% and a specificity of 77%.
Conclusion: Based on fully automatic data processing procedures, our preliminary data indicate that the image heterogeneity of myocardial perfusion imaging can provide useful information for automatic determination of myocardial ischemia.
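The ROC/AUC evaluation described above can be computed directly from the rank statistics of the heterogeneity scores; a minimal sketch with invented toy scores (not the study's data):

```python
def roc_auc(scores_pos, scores_neg):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive (ischemic) case scores higher than a randomly
    chosen negative case, with ties counting one half."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Toy heterogeneity indices: stenosis-positive vs. stenosis-negative patients.
pos = [0.9, 0.8, 0.6, 0.55]
neg = [0.7, 0.5, 0.4, 0.3]
print(roc_auc(pos, neg))  # 14 of 16 pairs ranked correctly -> 0.875
```

An AUC of 0.82, as reported above, means the heterogeneity index ranks a diseased patient above a healthy one about 82% of the time, regardless of the operating threshold chosen for sensitivity/specificity.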

  17. Automatic multi-organ segmentation using learning-based segmentation and level set optimization.

    PubMed

    Kohlberger, Timo; Sofka, Michal; Zhang, Jingdan; Birkbeck, Neil; Wetzl, Jens; Kaftan, Jens; Declerck, Jérôme; Zhou, S Kevin

    2011-01-01

We present a novel generic segmentation system for fully automatic multi-organ segmentation of CT medical images. We combine the advantages of learning-based approaches on a point cloud-based shape representation, such as speed, robustness, and point correspondences, with those of PDE-optimization-based level set approaches, such as high accuracy and the straightforward prevention of segment overlaps. In a benchmark on 10-100 annotated datasets for the liver, the lungs, and the kidneys, we show that the proposed system yields segmentation accuracies of 1.17-2.89 mm average surface error. The level set segmentation (which is initialized by the learning-based segmentations) contributes a 20%-40% increase in accuracy.

  18. Mindfulness-Based Interventions for Older Adults: A Review of the Effects on Physical and Emotional Well-being

    PubMed Central

    Geiger, Paul J.; Boggero, Ian A.; Brake, C. Alex; Caldera, Carolina A.; Combs, Hannah L.; Peters, Jessica R.; Baer, Ruth A.

    2015-01-01

    This comprehensive review examined the effects of mindfulness-based interventions on the physical and emotional wellbeing of older adults, a rapidly growing segment of the general population. Search procedures yielded 15 treatment outcome studies meeting inclusion criteria. Support was found for the feasibility and acceptability of mindfulness-based interventions with older adults. Physical and emotional wellbeing outcome variables offered mixed support for the use of mindfulness-based interventions with older adults. Potential explanations of mixed findings may include methodological flaws, study limitations, and inconsistent modifications of protocols. These are discussed in detail and future avenues of research are discussed, emphasizing the need to incorporate geriatric populations into future mindfulness-based empirical research. PMID:27200109

  19. X-ray Computed Tomography Assessment of Air Void Distribution in Concrete

    NASA Astrophysics Data System (ADS)

    Lu, Haizhu

    Air void size and spatial distribution have long been regarded as critical parameters in the frost resistance of concrete. In cement-based materials, entrained air void systems play an important role in performance as related to durability, permeability, and heat transfer. Many efforts have been made to measure air void parameters in a more efficient and reliable manner in the past several decades. Standardized measurement techniques based on optical microscopy and stereology on flat cut and polished surfaces are widely used in research as well as in quality assurance and quality control applications. Other more automated methods using image processing have also been utilized, but still starting from flat cut and polished surfaces. The emergence of X-ray computed tomography (CT) techniques provides the capability of capturing the inner microstructure of materials at the micrometer and nanometer scale. X-ray CT's less demanding sample preparation and capability to measure 3D distributions of air voids directly provide ample prospects for its wider use in air void characterization in cement-based materials. However, due to the huge number of air voids that can exist within a limited volume, errors can easily arise in the absence of a formalized data processing procedure. In this study, air void parameters in selected types of cement-based materials (lightweight concrete, structural concrete elements, pavements, and laboratory mortars) have been measured using micro X-ray CT. The focus of this study is to propose a unified procedure for processing the data and to provide solutions to deal with common problems that arise when measuring air void parameters: primarily the reliable segmentation of objects of interest, uncertainty estimation of measured parameters, and the comparison of competing segmentation parameters.

  20. [Strategies and surgical management of endometriosis: CNGOF-HAS Endometriosis Guidelines].

    PubMed

    Roman, H; Ballester, M; Loriau, J; Canis, M; Bolze, P A; Niro, J; Ploteau, S; Rubod, C; Yazbeck, C; Collinet, P; Rabischong, B; Merlot, B; Fritel, X

    2018-03-01

The article presents the French guidelines for surgical management of endometriosis. Surgical treatment is recommended for mild to moderate endometriosis, as it decreases painful pelvic complaints and increases the likelihood of postoperative conception in infertile patients (A). Surgery may be proposed in symptomatic patients with ovarian endometriomas whose diameter exceeds 20 mm. Cystectomy allows for better postoperative pregnancy rates when compared to ablation using bipolar current, as well as lower recurrence rates when compared to ablation using bipolar current or CO2 laser. Ablation of ovarian endometriomas using bipolar current is not recommended (B). Surgery may be employed in patients with deep endometriosis infiltrating the colon and the rectum, with a good impact on painful complaints and postoperative conception. In these patients, the laparoscopic route increases the likelihood of postoperative spontaneous conception when compared to the open route. When compared to conservative rectal procedures (shaving or disc excision), segmental colorectal resection increases the risk of postoperative stenosis, requiring additional endoscopic or surgical procedures. In large deep endometriosis infiltrating the rectum (>20 mm length of bowel infiltration), conservative rectal procedures do not improve postoperative digestive function when compared to segmental resection. In patients with a bowel anastomosis, placing anti-adhesion agents in contact with the bowel suture is not recommended, owing to a higher risk of bowel fistula (C). Various other recommendations are proposed in the text; however, they are based on studies with a low level of evidence. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  1. Patient-specific geometrical modeling of orthopedic structures with high efficiency and accuracy for finite element modeling and 3D printing.

    PubMed

    Huang, Huajun; Xiang, Chunling; Zeng, Canjun; Ouyang, Hanbin; Wong, Kelvin Kian Loong; Huang, Wenhua

    2015-12-01

We improved the geometrical modeling procedure for fast and accurate reconstruction of orthopedic structures. This procedure consists of medical image segmentation, three-dimensional geometrical reconstruction, and assignment of material properties. The patient-specific orthopedic structures reconstructed by this improved procedure can be used in virtual surgical planning, 3D printing of real orthopedic structures, and finite element analysis. Conventional modeling consists of image segmentation, geometrical reconstruction, mesh generation, and assignment of material properties. The present study modified the conventional method to streamline the software operating procedures. Patients' CT images of different bones were acquired and subsequently reconstructed into models. The reconstruction procedures were three-dimensional image segmentation, modification of the edge length and quantity of meshes, and assignment of material properties according to gray-value intensity. We compared the performance of our procedure to conventional modeling in terms of software operating time, success rate, and mesh quality. Our proposed framework offers the following improvements in geometrical modeling: (1) processing time (femur: 87.16 ± 5.90%; pelvis: 80.16 ± 7.67%; thoracic vertebra: 17.81 ± 4.36%; P < 0.05); (2) least volume reduction (femur: 0.26 ± 0.06%; pelvis: 0.70 ± 0.47%; thoracic vertebra: 3.70 ± 1.75%; P < 0.01); and (3) mesh quality in terms of aspect ratio (femur: 8.00 ± 7.38%; pelvis: 17.70 ± 9.82%; thoracic vertebra: 13.93 ± 9.79%; P < 0.05) and maximum angle (femur: 4.90 ± 5.28%; pelvis: 17.20 ± 19.29%; thoracic vertebra: 3.86 ± 3.82%; P < 0.05). Our proposed patient-specific geometrical modeling requires less operating time and workload, and the orthopedic structures were generated at a higher success rate than with the conventional method. It is expected to benefit the surgical planning of orthopedic structures with less operating time and high modeling accuracy.

  2. Biodegradation Of thermoplastic polyurethanes from vegetable oils

    USDA-ARS?s Scientific Manuscript database

Thermoplastic urethanes based on polyricinoleic acid soft segments and MDI/BD hard segments, with varied soft segment concentration, were prepared. Soft segment concentration was varied from 40 to 70 wt %. Biodegradation was studied by respirometry. Segmented polyurethanes with soft segments based ...

  3. Influence of Domain Shift Factors on Deep Segmentation of the Drivable Path of AN Autonomous Vehicle

    NASA Astrophysics Data System (ADS)

    Bormans, R. P. A.; Lindenbergh, R. C.; Karimi Nejadasl, F.

    2018-05-01

One of the biggest challenges for an autonomous vehicle (and hence the WEpod) is to see the world as humans would see it. This understanding is the basis for a successful and reliable future of autonomous vehicles. Real-world data and semantic segmentation are generally used to achieve full understanding of the surroundings. However, deploying a pretrained segmentation network to a new, previously unseen domain will not attain performance similar to that on the domain it was trained on, due to the differences between the domains. Although research has been done on mitigating this domain shift, the factors that cause these differences are not yet fully explored. We filled this gap by investigating several such factors. A base network was created by a two-step fine-tuning procedure on a convolutional neural network (SegNet) pretrained on CityScapes (a dataset for semantic segmentation). The first tuning step is based on RobotCar (a road scenery dataset recorded in Oxford, UK), after which the network is fine-tuned a second time on the KITTI dataset (road scenery recorded in Germany). With this base, experiments were used to assess the importance of factors such as horizon line, colour, and training order for successful domain adaptation, in this case from the KITTI and RobotCar domains to the WEpod domain. For evaluation, ground-truth labels were created in a weakly supervised setting. Training on greyscale images instead of RGB images had a negative influence, resulting in drops in IoU of up to 23.9% for WEpod test images. Training order is a main contributor to domain adaptation, with an increase in IoU of 4.7%. This shows that the target domain (WEpod) is more closely related to RobotCar than to KITTI.

  4. Users manual for the US baseline corn and soybean segment classification procedure

    NASA Technical Reports Server (NTRS)

    Horvath, R.; Colwell, R. (Principal Investigator); Hay, C.; Metzler, M.; Mykolenko, O.; Odenweller, J.; Rice, D.

    1981-01-01

    A user's manual for the classification component of the FY-81 U.S. Corn and Soybean Pilot Experiment in the Foreign Commodity Production Forecasting Project of AgRISTARS is presented. This experiment is one of several major experiments in AgRISTARS designed to measure and advance the remote sensing technologies for cropland inventory. The classification procedure discussed is designed to produce segment proportion estimates for corn and soybeans in the U.S. Corn Belt (Iowa, Indiana, and Illinois) using LANDSAT data. The estimates are produced by an integrated Analyst/Machine procedure. The Analyst selects acquisitions, participates in stratification, and assigns crop labels to selected samples. In concert with the Analyst, the machine digitally preprocesses LANDSAT data to remove external effects, stratifies the data into field-like units and into spectrally similar groups, statistically samples the data for Analyst labeling, and combines the labeled samples into a final estimate.

  5. 78 FR 18262 - Proposed Amendment of Class E Airspace; Ogallala, NE

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ... accommodate new Standard Instrument Approach Procedures (SIAP) at Searle Field Airport. The FAA is taking this action to enhance the safety and management of Instrument Flight Rules (IFR) operations for SIAPs at the... standard instrument approach procedures at Searle Field Airport, Ogallala, NE. A small segment would extend...

  6. Off-Campus Registration Procedures.

    ERIC Educational Resources Information Center

    Maas, Michael L.

    Registration is one of the more critical functions that a college staff encounters each semester. To have a smooth, efficient, college-wide registration, it is essential that all segments of the college be aware of registration procedures as well as data control operations. This packet was designed to acquaint interested parties with the…

  7. A Stochastic-Variational Model for Soft Mumford-Shah Segmentation

    PubMed Central

    2006-01-01

    In contemporary image and vision analysis, stochastic approaches demonstrate great flexibility in representing and modeling complex phenomena, while variational-PDE methods gain enormous computational advantages over Monte Carlo or other stochastic algorithms. In combination, the two can lead to much more powerful novel models and efficient algorithms. In the current work, we propose a stochastic-variational model for soft (or fuzzy) Mumford-Shah segmentation of mixture image patterns. Unlike the classical hard Mumford-Shah segmentation, the new model allows each pixel to belong to each image pattern with some probability. Soft segmentation could lead to hard segmentation, and hence is more general. The modeling procedure, mathematical analysis on the existence of optimal solutions, and computational implementation of the new model are explored in detail, and numerical examples of both synthetic and natural images are presented. PMID:23165059
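
    As a loose illustration of soft versus hard segmentation (not the authors' variational model), per-pixel memberships can be built from, say, distances to assumed class means and then collapsed to a hard labelling by taking the most probable class:

```python
import numpy as np

def soft_memberships(image, means, beta=10.0):
    """Per-pixel membership probabilities for each class mean.
    Larger beta pushes the soft segmentation toward a hard one."""
    d2 = (image[..., None] - np.asarray(means)) ** 2   # squared distance to each mean
    w = np.exp(-beta * d2)
    return w / w.sum(axis=-1, keepdims=True)           # normalise to probabilities

img = np.array([[0.1, 0.9], [0.5, 0.05]])
probs = soft_memberships(img, means=[0.0, 1.0])
hard = probs.argmax(axis=-1)                           # soft -> hard segmentation
print(hard)
```

    This illustrates the abstract's point that soft segmentation subsumes hard segmentation: the hard labelling is recovered as a limit (or argmax) of the soft one.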

  8. Segmental hair analysis for differentiation of tilidine intake from external contamination using LC-ESI-MS/MS and MALDI-MS/MS imaging.

    PubMed

    Poetzsch, Michael; Baumgartner, Markus R; Steuer, Andrea E; Kraemer, Thomas

    2015-02-01

    Segmental hair analysis has been used for monitoring changes in drug consumption habits. Contamination from the environment or sweat might cause interpretative problems. For this reason, hair analysis results were compared in hair samples taken 24 h and 30 days after a single tilidine dose. The 24-h hair samples already showed high concentrations of tilidine and nortilidine. Analysis of wash water from sample preparation confirmed external contamination by sweat as the reason. The 30-day hair samples were still positive for tilidine in all segments. Negative wash-water analysis proved incorporation from sweat into the hair matrix. Interpretation of a forensic case was requested where two children had been administered tilidine by their nanny and tilidine/nortilidine had been detected in all hair segments, possibly indicating multiple applications. Taking into consideration the results of the present study and of MALDI-MS imaging, a single application as the cause of the analytical results could no longer be excluded. Interpretation of tilidine consumption behaviour based on segmental hair analysis has to be done with caution, even after typical wash procedures during sample preparation. External sweat contamination followed by incorporation into the hair matrix can mimic chronic intake. For assessment of external contamination, hair samples should be collected not only several weeks but also one to a few days after intake. MALDI-MS imaging of a single hair can be a complementary tool for interpretation. The limitations for interpretation of segmental hair analysis shown here might also be applicable to drugs with comparable physicochemical and pharmacokinetic properties. Copyright © 2014 John Wiley & Sons, Ltd.

  9. Surgical treatment of Parkinson’s disease: Past, present, and future

    PubMed Central

    Duker, Andrew P.; Espay, Alberto J.

    2013-01-01

    Advances in functional neurosurgery have expanded the treatment of Parkinson’s disease (PD), from early lesional procedures to targeted electrical stimulation of specific nodes in the basal ganglia circuitry. Deep brain stimulation (DBS), applied to selected patients with PD and difficult-to-manage motor fluctuations, yields substantial reductions in off time and dyskinesia. Outcomes for DBS of the two most-studied targets in PD, the subthalamic nucleus (STN) and the internal segment of the globus pallidus (GPi), appear to be broadly similar, and the choice is best made based on individual patient factors and surgeon preference. Emerging concepts in DBS include examination of new targets, such as the potential efficacy of pedunculopontine nucleus (PPN) stimulation for the treatment of freezing and falls, the utilization of pathologic oscillations in the beta band to construct an adaptive “closed-loop” DBS, and new technologies, including segmented electrodes to steer current toward specific neural populations. PMID:23896506

  10. LACIE performance predictor FOC users manual

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The LACIE Performance Predictor (LPP) is a computer simulation of the LACIE process for predicting worldwide wheat production. The simulation provides for the introduction of various errors into the system and provides estimates based on these errors, thus allowing the user to determine the impact of selected error sources. The FOC LPP simulates the acquisition of the sample segment data by the LANDSAT Satellite (DAPTS), the classification of the agricultural area within the sample segment (CAMS), the estimation of the wheat yield (YES), and the production estimation and aggregation (CAS). These elements include data acquisition characteristics, environmental conditions, classification algorithms, the LACIE aggregation and data adjustment procedures. The operational structure for simulating these elements consists of the following key programs: (1) LACIE Utility Maintenance Process, (2) System Error Executive, (3) Ephemeris Generator, (4) Access Generator, (5) Acquisition Selector, (6) LACIE Error Model (LEM), and (7) Post Processor.

  11. Image segmentation based upon topological operators: real-time implementation case study

    NASA Astrophysics Data System (ADS)

    Mahmoudi, R.; Akil, M.

    2009-02-01

    Thinning and crest restoration are of considerable interest in many image-processing applications. The recommended algorithms for these operations are those able to act directly on grayscale images while preserving topology, but their high computational cost remains the major obstacle to their adoption. In this paper we present an efficient implementation, on a RISC processor, of two powerful thinning and crest-restoring algorithms developed by our team. The proposed implementation improves execution time. A segmentation chain applied to medical imaging serves as a concrete example to illustrate the improvements brought by optimization techniques at both the algorithmic and architectural levels. In particular, use of the SSE instruction set of x86_32 processors (Pentium IV, 3.06 GHz) enables real-time processing: a throughput of 33 images (512×512) per second is achieved.
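
    The architectural-level speedups described above come from SIMD (SSE) instructions. A rough analogue in high-level code is replacing a per-pixel loop with a whole-array operation, which NumPy dispatches to vectorized C loops; the function names below are illustrative:

```python
import numpy as np

def threshold_scalar(img, t):
    """Process one pixel at a time (the slow, scalar pattern)."""
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = 255 if img[i, j] > t else 0
    return out

def threshold_vectorized(img, t):
    """Whole-array operation; NumPy's inner loops can use SIMD."""
    return np.where(img > t, 255, 0).astype(img.dtype)

img = np.array([[10, 200], [130, 90]], dtype=np.uint8)
assert np.array_equal(threshold_scalar(img, 128), threshold_vectorized(img, 128))
```

    The two functions compute the same result; only the execution strategy differs, which is the same algorithm-versus-architecture distinction the paper exploits.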

  12. Versatile robotic probe calibration for position tracking in ultrasound imaging.

    PubMed

    Bø, Lars Eirik; Hofstad, Erlend Fagertun; Lindseth, Frank; Hernes, Toril A N

    2015-05-07

    Within the field of ultrasound-guided procedures, there are a number of methods for ultrasound probe calibration. While these methods are usually developed for a specific probe, they are in principle easily adapted to other probes. In practice, however, the adaptation often proves tedious and this is impractical in a research setting, where new probes are tested regularly. Therefore, we developed a method which can be applied to a large variety of probes without adaptation. The method used a robot arm to move a plastic sphere submerged in water through the ultrasound image plane, providing a slow and precise movement. The sphere was then segmented from the recorded ultrasound images using a MATLAB programme and the calibration matrix was computed based on this segmentation in combination with tracking information. The method was tested on three very different probes demonstrating both great versatility and high accuracy.
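
    One common way to recover a rigid calibration transform from corresponding point sets (e.g. segmented sphere centres and tracked positions) is the SVD-based Kabsch/Procrustes solution. This is a hedged sketch of that general technique, not necessarily the exact computation used by the authors:

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares rotation R and translation t with R @ A[i] + t ≈ B[i]."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)                 # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cB - R @ cA

# synthetic check: recover a known rotation and translation
rng = np.random.default_rng(0)
A = rng.random((10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([1.0, -2.0, 0.5])
B = A @ R_true.T + t_true
R, t = rigid_transform(A, B)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

    With noisy real measurements the same closed form gives the best-fit transform in the least-squares sense.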

  13. Versatile robotic probe calibration for position tracking in ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Eirik Bø, Lars; Fagertun Hofstad, Erlend; Lindseth, Frank; Hernes, Toril A. N.

    2015-05-01

    Within the field of ultrasound-guided procedures, there are a number of methods for ultrasound probe calibration. While these methods are usually developed for a specific probe, they are in principle easily adapted to other probes. In practice, however, the adaptation often proves tedious and this is impractical in a research setting, where new probes are tested regularly. Therefore, we developed a method which can be applied to a large variety of probes without adaptation. The method used a robot arm to move a plastic sphere submerged in water through the ultrasound image plane, providing a slow and precise movement. The sphere was then segmented from the recorded ultrasound images using a MATLAB programme and the calibration matrix was computed based on this segmentation in combination with tracking information. The method was tested on three very different probes demonstrating both great versatility and high accuracy.

  14. Multiple Spectral-Spatial Classification Approach for Hyperspectral Data

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Benediktsson, Jon Atli; Chanussot, Jocelyn; Tilton, James C.

    2010-01-01

    A new multiple-classifier approach for spectral-spatial classification of hyperspectral images is proposed. Several classifiers are used independently to classify an image. For every pixel, if all the classifiers have assigned this pixel to the same class, the pixel is kept as a marker, i.e., a seed of the spatial region, with the corresponding class label. We propose to use spectral-spatial classifiers at the preliminary step of the marker selection procedure, each of them combining the results of a pixel-wise classification and a segmentation map. Different segmentation methods based on dissimilar principles lead to different classification results. Furthermore, a minimum spanning forest is built, where each tree is rooted on a classification-driven marker and forms a region in the spectral-spatial classification map. Experimental results are presented for two hyperspectral airborne images. The proposed method significantly improves classification accuracies, when compared to previously proposed classification techniques.
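
    The marker-selection rule described above (keep a pixel only when all classifiers agree) can be sketched as follows; the array names and the -1 "no marker" convention are illustrative:

```python
import numpy as np

def agreement_markers(label_maps):
    """Keep a pixel as a marker only where all classifiers agree on its class."""
    stack = np.stack(label_maps)              # (n_classifiers, H, W)
    agree = (stack == stack[0]).all(axis=0)   # unanimous agreement mask
    return np.where(agree, stack[0], -1)      # -1 marks "no marker"

m1 = np.array([[1, 2], [0, 0]])
m2 = np.array([[1, 2], [1, 0]])
m3 = np.array([[1, 1], [2, 0]])
print(agreement_markers([m1, m2, m3]))
```

    The surviving markers would then seed the minimum spanning forest that grows the final regions.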

  15. Automatic repositioning of jaw segments for three-dimensional virtual treatment planning of orthognathic surgery.

    PubMed

    Santos, Rodrigo Mologni Gonçalves Dos; De Martino, José Mario; Passeri, Luis Augusto; Attux, Romis Ribeiro de Faissol; Haiter Neto, Francisco

    2017-09-01

    To develop a computer-based method for automating the repositioning of jaw segments in the skull during three-dimensional virtual treatment planning of orthognathic surgery. The method speeds up the planning phase of the orthognathic procedure, releasing surgeons from laborious and time-consuming tasks. The method finds the optimal positions for the maxilla, mandibular body, and bony chin in the skull. Minimization of cephalometric differences between measured and standard values is considered. Cone-beam computed tomographic images acquired from four preoperative patients with skeletal malocclusion were used for evaluating the method. Dentofacial problems of the four patients were rectified, including skeletal malocclusion, facial asymmetry, and jaw discrepancies. The results show that the method could potentially be used in routine clinical practice to support treatment-planning decisions in orthognathic surgery. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  16. Large Area Crop Inventory Experiment (LACIE). LACIE phase 1 and phase 2 accuracy assessment. [Kansas, Texas, Minnesota, Montana, and North Dakota

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The author has identified the following significant results. The initial CAS estimates, which were made for each month from April through August, were considerably higher than the USDA/SRS estimates. This was attributed to: (1) the practice of considering bare ground as potential wheat and counting it as wheat; (2) overestimation of the wheat proportions in segments having only a small amount of wheat; and (3) the classification of confusion crops as wheat. At the end of the season most of the segments were reworked using improved methods based on experience gained during the season. In particular, new procedures were developed to solve the three problems listed above. These and other improvements used in the rework experiment resulted in at-harvest estimates that were much closer to the USDA/SRS estimates than those obtained during the regular season.

  17. A Multiatlas Segmentation Using Graph Cuts with Applications to Liver Segmentation in CT Scans

    PubMed Central

    2014-01-01

    An atlas-based segmentation approach is presented that combines low-level operations, an affine probabilistic atlas, and a multiatlas-based segmentation. The proposed combination provides highly accurate segmentation due to registrations and atlas selections based on the regions of interest (ROIs) and coarse segmentations. Our approach shares the following common elements between the probabilistic atlas and multiatlas segmentation: (a) the spatial normalisation and (b) the segmentation method, which is based on minimising a discrete energy function using graph cuts. The method is evaluated for the segmentation of the liver in computed tomography (CT) images. Low-level operations define a ROI around the liver from an abdominal CT. We generate a probabilistic atlas using an affine registration based on geometry moments from manually labelled data. Next, a coarse segmentation of the liver is obtained from the probabilistic atlas with low computational effort. Then, a multiatlas segmentation approach improves the accuracy of the segmentation. Both the atlas selections and the nonrigid registrations of the multiatlas approach use a binary mask defined by coarse segmentation. We experimentally demonstrate that this approach performs better than atlas selections and nonrigid registrations in the entire ROI. The segmentation results are comparable to those obtained by human experts and to other recently published results. PMID:25276219
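
    The paper fuses atlases via graph cuts; as a simpler, illustrative stand-in, multiatlas label maps are often combined by per-voxel majority voting:

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse propagated atlas labels by per-voxel majority vote."""
    stack = np.stack(label_maps)                              # (n_atlases, n_voxels)
    classes = np.unique(stack)
    counts = np.stack([(stack == c).sum(axis=0) for c in classes])
    return classes[counts.argmax(axis=0)]                     # ties go to the smaller label

a1 = np.array([0, 1, 1, 2])
a2 = np.array([0, 1, 2, 2])
a3 = np.array([1, 1, 2, 2])
print(majority_vote([a1, a2, a3]))
```

    Graph-cut fusion, as in the paper, replaces this independent per-voxel vote with a spatially regularised energy minimisation.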

  18. Determination of Isthmocele Using a Foley Catheter During Laparoscopic Repair of Cesarean Scar Defect.

    PubMed

    Akdemir, Ali; Sahin, Cagdas; Ari, Sabahattin Anil; Ergenoglu, Mete; Ulukus, Murat; Karadadas, Nedim

    2018-01-01

    To demonstrate a new technique of isthmocele repair via laparoscopic surgery. Case report (Canadian Task Force classification III). The local Ethics Committee waived the requirement for approval. Isthmocele localized at a low uterine segment is a defect of a previous caesarean scar due to poor myometrial healing after surgery [1]. This pouch accumulates menstrual bleeding, which can cause various disturbances and irregularities, including abnormal uterine bleeding, infertility, pelvic pain, and scar pregnancy [2-6]. Given the absence of a clearly defined surgical method in the literature, choosing the proper approach to treating isthmocele can be arduous. Laparoscopy provides a minimally invasive procedure in women with previous caesarean scar defects. A 28-year-old woman, gravida 2 para 2, presented with a complaint of prolonged postmenstrual bleeding for 5 years. She had undergone 2 cesarean deliveries. Transvaginal ultrasonography revealed a hypoechogenic area with menstrual blood in the anterior lower uterine segment. Magnetic resonance imaging showed an isthmocele localized at the anterior left lateral side of the uterus, with an estimated volume of approximately 12 cm³. After patient preparation, laparoscopy was performed. To repair the defect, the uterovesical peritoneal fold was incised and the bladder was mobilized from the lower uterine segment. During this surgery, differentiating the isthmocele from the abdomen can be challenging. Here we used a Foley catheter to identify the isthmocele. To do this, after mobilizing the bladder from the lower uterine segment, we inserted a Foley catheter into the uterine cavity through the cervical canal. We then filled the balloon of the catheter at the lower uterine segment under laparoscopic view, which allowed clear identification of the isthmocele pouch. The uterine defect was then incised.
The isthmocele cavity was accessed, the margins of the pouch were debrided, and the edges were surgically reapproximated with continuous nonlocking single-layer 2-0 polydioxanone sutures. We believed that single-layer suturing could provide proper healing without suture-related necrosis. During the procedure, the vesicouterine space was dissected without difficulty. The urine bag contained clear urine and there was no gas leakage; thus, we considered a safety test for the bladder superfluous. Based on concerns about the possible increased risk of adhesions, we did not cover the suture with peritoneum. The patient experienced no associated complications, and she reported complete resolution of prolonged postmenstrual bleeding at a 3-month follow-up. Even though the literature in this area is sparse, a laparoscopic approach to repairing an isthmocele is a safe and minimally invasive procedure. Our approach described here involves inserting a Foley catheter into the uterine cavity through the cervical canal, then filling the balloon in the lower uterine segment under laparoscopic view to identify the isthmocele. Copyright © 2017 AAGL. Published by Elsevier Inc. All rights reserved.

  19. Computer Aided Solution for Automatic Segmenting and Measurements of Blood Leucocytes Using Static Microscope Images.

    PubMed

    Abdulhay, Enas; Mohammed, Mazin Abed; Ibrahim, Dheyaa Ahmed; Arunkumar, N; Venkatraman, V

    2018-02-17

    Segmentation of blood leucocytes in medical images is regarded as a difficult process because of the variability of blood cells in shape and size and the difficulty of determining the location of the leucocytes. Manual analysis of blood tests to recognize leukocytes is tedious, time-consuming and error-prone because of the varied morphological components of the cells. Segmentation of medical imagery is considered difficult because of the complexity of the images and the absence of leucocyte models that fully capture the probable shapes of each structure while accounting for cell overlap, the wide variety of blood cells in shape and size, the various factors influencing the outer appearance of the leucocytes, and the low contrast of static microscope images combined with noise. We propose a strategy for segmenting blood leucocytes in static microscope images that combines three prevailing techniques from the computer-vision literature: image enhancement, support vector machine (SVM) segmentation, and filtering out of non-ROI (region of interest) areas on the basis of local binary patterns (LBP) and texture features. Each of these techniques is adapted to the blood leucocyte segmentation problem, making the resulting method considerably more robust than its individual components. Finally, we evaluate the framework by comparing its output with manual segmentation. The findings of this study demonstrate a new approach that automatically segments blood leucocytes and identifies them in static microscope images. First, the method uses a trainable segmentation procedure and a trained support vector machine classifier to accurately identify the position of the ROI. Then, filtering based on histogram analysis is proposed to discard non-ROI areas and select the right object.
Finally, the blood leucocyte type is identified using texture features. The performance of the proposed approach was tested by comparing the system against manual examination by a gynaecologist using diverse scales. A total of 100 microscope images were used for the comparison, and the results showed that the proposed solution is a viable alternative to manual segmentation for accurately determining the ROI. We evaluated blood leucocyte identification using the ROI texture (LBP features). The identification accuracy of the technique is about 95.3%, with 100% sensitivity and 91.66% specificity.
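
    The texture features mentioned above are based on local binary patterns (LBP). A minimal LBP sketch (8-neighbour codes with an unweighted >= threshold; the paper's exact convention may differ):

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour local binary pattern codes for interior pixels."""
    c = img[1:-1, 1:-1]                       # centre pixels
    neigh = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
             img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
             img[2:, :-2], img[1:-1, :-2]]    # 8 neighbours, clockwise
    code = np.zeros_like(c, dtype=np.int32)
    for bit, n in enumerate(neigh):
        code |= (n >= c).astype(np.int32) << bit
    return code

img = np.array([[9, 9, 9],
                [1, 5, 1],
                [9, 9, 9]])
print(lbp_codes(img))   # 1x1 array: the centre pixel's code
```

    A histogram of these codes over a candidate region (e.g. `np.bincount(code.ravel(), minlength=256)`) gives the texture descriptor used to accept or reject the ROI.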

  20. Weberized Mumford-Shah Model with Bose-Einstein Photon Noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen Jianhong, E-mail: jhshen@math.umn.edu; Jung, Yoon-Mo

    Human vision works equally well in a large dynamic range of light intensities, from only a few photons to typical midday sunlight. Contributing to such remarkable flexibility is a famous law in perceptual (both visual and aural) psychology and psychophysics known as Weber's Law. The current paper develops a new segmentation model based on the integration of Weber's Law and the celebrated Mumford-Shah segmentation model (Comm. Pure Appl. Math., vol. 42, pp. 577-685, 1989). Explained in detail are issues concerning why the classical Mumford-Shah model lacks light adaptivity, and why its 'weberized' version can more faithfully reflect human vision's superior segmentation capability in a variety of illuminance conditions from dawn to dusk. It is also argued that the popular Gaussian noise model is physically inappropriate for the weberization procedure. As a result, the intrinsic thermal noise of photon ensembles is introduced based on Bose and Einstein's distributions in quantum statistics, which turns out to be compatible with weberization both analytically and computationally. The current paper focuses on both the theory and computation of the weberized Mumford-Shah model with Bose-Einstein noise. In particular, Ambrosio-Tortorelli's Γ-convergence approximation theory is adapted (Boll. Un. Mat. Ital. B, vol. 6, pp. 105-123, 1992), and stable numerical algorithms are developed for the associated pair of nonlinear Euler-Lagrange PDEs.
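
    For reference, Weber's Law and one plausible form of a weberized smoothness term (illustrative only; the paper's exact functional may differ):

```latex
% Weber's Law: the just-noticeable intensity difference scales with intensity
\[
  \frac{\Delta I}{I} \approx \text{const.}
\]
% One way to "weberize" a smoothness term is to penalise relative
% (rather than absolute) variation of the image u:
\[
  \int_{\Omega \setminus K} \frac{|\nabla u|^2}{u^2}\, dx
  \;=\; \int_{\Omega \setminus K} |\nabla \log u|^2\, dx ,
\]
% which treats equal relative contrasts equally across illuminance levels,
% from dawn to dusk.
```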

  1. Remote sensing image segmentation based on Hadoop cloud platform

    NASA Astrophysics Data System (ADS)

    Li, Jie; Zhu, Lingling; Cao, Fubin

    2018-01-01

    To solve the problem that remote sensing image segmentation is slow and its real-time performance poor, this paper studies a method of remote sensing image segmentation based on the Hadoop platform. On the basis of analyzing the structural characteristics of the Hadoop cloud platform and its MapReduce programming component, this paper proposes an image segmentation method combining OpenCV with the Hadoop cloud platform. Firstly, the MapReduce image-processing model for the Hadoop cloud platform is designed, the image input and output are customized, and the segmentation method for the data file is rewritten. Then the Mean Shift image segmentation algorithm is implemented. Finally, a segmentation experiment on a remote sensing image is performed and compared against a MATLAB implementation of the Mean Shift algorithm on the same image. The experimental results show that, while maintaining good segmentation quality, the segmentation speed on the Hadoop cloud platform is greatly improved compared with single-machine MATLAB segmentation, a substantial improvement in the effectiveness of image segmentation.
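
    The Mean Shift step can be illustrated independently of Hadoop. A minimal flat-kernel mean-shift iteration on 1-D intensities (bandwidth and data are illustrative, not from the paper):

```python
import numpy as np

def mean_shift_1d(samples, x, bandwidth=1.0, iters=50):
    """Flat-kernel mean shift: repeatedly move x to the mean of nearby samples."""
    for _ in range(iters):
        near = samples[np.abs(samples - x) <= bandwidth]
        x_new = near.mean()
        if abs(x_new - x) < 1e-9:     # converged to a mode
            break
        x = x_new
    return x

# two well-separated intensity clusters; each start converges to its mode
samples = np.array([0.9, 1.0, 1.1, 4.9, 5.0, 5.1])
print(mean_shift_1d(samples, 1.4), mean_shift_1d(samples, 4.6))
```

    In the MapReduce setting, essentially this mode-seeking procedure runs over pixel feature vectors on each image split, with the per-split results merged afterwards.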

  2. Virtual modeling of polycrystalline structures of materials using particle packing algorithms and Laguerre cells

    NASA Astrophysics Data System (ADS)

    Morfa, Carlos Recarey; Farias, Márcio Muniz de; Morales, Irvin Pablo Pérez; Navarra, Eugenio Oñate Ibañez de; Valera, Roberto Roselló

    2018-04-01

    The influence of the microstructural heterogeneities is an important topic in the study of materials. In the context of computational mechanics, it is therefore necessary to generate virtual materials that are statistically equivalent to the microstructure under study, and to connect that geometrical description to the different numerical methods. Herein, the authors present a procedure to model continuous solid polycrystalline materials, such as rocks and metals, preserving their representative statistical grain size distribution. The first phase of the procedure consists of segmenting an image of the material into adjacent polyhedral grains representing the individual crystals. This segmentation allows estimating the grain size distribution, which is used as the input for an advancing front sphere packing algorithm. Finally, Laguerre diagrams are calculated from the obtained sphere packings. The centers of the spheres give the centers of the Laguerre cells, and their radii determine the cells' weights. The cell sizes in the obtained Laguerre diagrams have a distribution similar to that of the grains obtained from the image segmentation. That is why those diagrams are a convenient model of the original crystalline structure. The above-outlined procedure has been used to model real polycrystalline metallic materials. The main difference with previously existing methods lies in the use of a better particle packing algorithm.
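
    The Laguerre (power) cell assignment can be sketched discretely: each point belongs to the sphere centre minimising the power distance ||x - c||² - r², so a larger radius (weight) enlarges a cell. Names and data below are illustrative:

```python
import numpy as np

def power_diagram_labels(points, centers, radii):
    """Assign each point to the Laguerre (power) cell minimising
    the power distance ||x - c||^2 - r^2."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    power = d2 - np.asarray(radii) ** 2
    return power.argmin(axis=1)

centers = np.array([[0.0, 0.0], [4.0, 0.0]])
radii = np.array([1.0, 3.0])          # the larger weight enlarges the second cell
pts = np.array([[0.5, 0.0], [2.0, 0.0], [3.0, 0.0]])
print(power_diagram_labels(pts, centers, radii))
```

    Note the midpoint (2, 0) is claimed by the second cell despite being equidistant from both centres, which is exactly how the sphere radii from the packing control the grain-size distribution of the resulting cells.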

  3. An automated retinal imaging method for the early diagnosis of diabetic retinopathy.

    PubMed

    Franklin, S Wilfred; Rajan, S Edward

    2013-01-01

    Diabetic retinopathy is a microvascular complication of long-term diabetes and is the major cause of eyesight loss due to changes in the blood vessels of the retina. Major vision loss due to diabetic retinopathy is highly preventable with regular screening and timely intervention at the earlier stages. Retinal blood vessel segmentation methods help to identify the successive stages of sight-threatening diseases like diabetes. To develop and test a novel retinal imaging method which segments the blood vessels automatically from retinal images, helping ophthalmologists in the diagnosis and follow-up of diabetic retinopathy. This method labels each image pixel as vessel or nonvessel, which is, in turn, used for automatic recognition of the vasculature in retinal images. Retinal blood vessels were identified by means of a multilayer perceptron neural network, for which the inputs were derived from Gabor and moment-invariant-based features. The back-propagation algorithm, which provides an efficient technique for updating the weights of a feed-forward network, is utilized in our method. Quantitative results of sensitivity, specificity and predictive values were obtained, and the measured accuracy of our segmentation algorithm was 95.3%, which is better than that presented by state-of-the-art approaches. The evaluation procedure used and the demonstrated effectiveness of our automated retinal imaging method show it to be a powerful tool to diagnose diabetic retinopathy in the earlier stages.
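
    The Gabor features mentioned above come from filters like the following. A hedged sketch of a real-valued Gabor kernel (parameter values are illustrative, not the paper's):

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, wavelength=6.0, gamma=0.5):
    """Real part of a Gabor filter: an oriented Gaussian-modulated cosine."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * xr / wavelength)

k = gabor_kernel()
print(k.shape, k[7, 7])   # peak response at the centre
```

    Convolving the green channel of a fundus image with a bank of such kernels at several orientations yields the vessel-sensitive responses that feed the multilayer perceptron.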

  4. Towards computer-assisted TTTS: Laser ablation detection for workflow segmentation from fetoscopic video.

    PubMed

    Vasconcelos, Francisco; Brandão, Patrick; Vercauteren, Tom; Ourselin, Sebastien; Deprest, Jan; Peebles, Donald; Stoyanov, Danail

    2018-06-27

    Intrauterine foetal surgery is the treatment option for several congenital malformations. For twin-to-twin transfusion syndrome (TTTS), interventions involve the use of a laser fibre to ablate vessels in a shared placenta. The procedure presents a number of challenges for the surgeon, and computer-assisted technologies can potentially be a significant support. Vision-based sensing is the primary source of information from the intrauterine environment, and hence vision approaches are appealing for extracting higher-level information from the surgical site. In this paper, we propose a framework to detect one of the key steps during TTTS interventions: ablation. We adopt a deep learning approach, specifically the ResNet101 architecture, for classification of the different surgical actions performed during laser ablation therapy. We perform a two-fold cross-validation using almost 50,000 frames from five different TTTS ablation procedures. Our results show that deep learning methods are a promising approach for ablation detection. To our knowledge, this is the first attempt at automating photocoagulation detection using video, and our technique can be an important component of a larger assistive framework for enhanced foetal therapies. The current implementation does not include semantic segmentation or localisation of the ablation site; this would be a natural extension in future work.
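
    A two-fold cross-validation over five procedures suggests splitting frames by procedure, so frames from one intervention never appear in both folds (which would leak highly correlated video frames across the split). A minimal sketch; the data layout is assumed, not from the paper:

```python
import random

def two_fold_by_procedure(frames, seed=0):
    """Split (procedure_id, frame) pairs so no procedure spans both folds."""
    procs = sorted({p for p, _ in frames})
    rng = random.Random(seed)
    rng.shuffle(procs)
    fold_a = set(procs[: len(procs) // 2])
    a = [f for f in frames if f[0] in fold_a]
    b = [f for f in frames if f[0] not in fold_a]
    return a, b

frames = [(proc, i) for proc in "ABCDE" for i in range(3)]
a, b = two_fold_by_procedure(frames)
assert not ({p for p, _ in a} & {p for p, _ in b})   # no procedure leaks
```

    Training then alternates: fit on fold A and test on fold B, and vice versa.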

  5. pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis.

    PubMed

    Giannakopoulos, Theodoros

    2015-01-01

    Audio information plays a rather important role in the increasing digital content that is available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automations and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures including: feature extraction, classification of audio signals, supervised and unsupervised segmentation and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of the implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has been already used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation and health applications (e.g. monitoring eating habits). The feedback provided from all these particular audio applications has led to practical enhancement of the library.
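
    As a generic illustration of the short-term feature extraction such libraries perform (this is not pyAudioAnalysis's actual API), a signal can be framed with a sliding window and a feature computed per frame:

```python
import numpy as np

def short_term_features(signal, fs, win=0.050, step=0.025):
    """Frame a signal and compute per-frame energy and zero-crossing rate."""
    w, s = int(win * fs), int(step * fs)
    feats = []
    for start in range(0, len(signal) - w + 1, s):
        frame = signal[start:start + w]
        energy = float(np.mean(frame ** 2))
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))
        feats.append((energy, zcr))
    return feats

fs = 8000
t = np.arange(0, 0.2, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t)       # a 440 Hz test tone
feats = short_term_features(tone, fs)
print(len(feats), feats[0][0])
```

    Sequences of such per-frame feature vectors are what downstream classification and segmentation stages consume.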

  6. pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis

    PubMed Central

    Giannakopoulos, Theodoros

    2015-01-01

    Audio information plays an important role in the growing volume of digital content available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automation and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures, including feature extraction, classification of audio signals, supervised and unsupervised segmentation, and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has already been used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation, and health applications (e.g. monitoring eating habits). The feedback provided by all these particular audio applications has led to practical enhancement of the library. PMID:26656189

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuhn, Heinz-Dieter.

    The Visual to Infrared SASE Amplifier (VISA) [1] FEL is designed to achieve saturation at radiation wavelengths between 800 and 600 nm with a 4-m pure permanent magnet undulator. The undulator comprises four 99-cm segments each of which has four FODO focusing cells superposed on the beam by means of permanent magnets in the gap alongside the beam. Each segment will also have two beam position monitors and two sets of x-y dipole correctors. The trajectory walk-off in each segment will be reduced to a value smaller than the rms beam radius by means of magnet sorting, precise fabrication, and post-fabrication shimming and trim magnets. However, this leaves possible inter-segment alignment errors. A trajectory analysis code has been used in combination with the FRED3D [2] FEL code to simulate the effect of the shimming procedure and segment alignment errors on the electron beam trajectory and to determine the sensitivity of the FEL gain process to trajectory errors. The paper describes the technique used to establish tolerances for the segment alignment.

  8. Solving systems of linear equations by GPU-based matrix factorization in a Science Ground Segment

    NASA Astrophysics Data System (ADS)

    Legendre, Maxime; Schmidt, Albrecht; Moussaoui, Saïd; Lammers, Uwe

    2013-11-01

    Recently, graphics cards have been used to offload scientific computations from traditional CPUs for greater efficiency. This paper investigates the adaptation of a real-world linear system solver, which plays a central role in the data processing of the Science Ground Segment of ESA's astrometric Gaia mission. The paper quantifies the resource trade-offs between traditional CPU implementations and modern CUDA-based GPU implementations. It also analyses the impact on the pipeline architecture and system development. The investigation starts from a selected baseline algorithm with a reference implementation and a traditional linear system solver, and then explores various modifications to control flow and data layout to achieve higher resource efficiency. It turns out that with the current state of the art, the modifications impact non-technical system attributes. For example, the control flow of the original modified Cholesky transform is restructured in a way that degrades code locality and verifiability. The maintainability of the system is affected as well. On the system level, users will have to deal with more complex configuration control and testing procedures.
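
    The kernel at issue is a Cholesky-type factorization. As a baseline reference (a teaching sketch of the textbook dense algorithm, not the modified or GPU variant discussed in the paper), solving A x = b for a symmetric positive-definite A looks like this:

```python
# Dense Cholesky factorization A = L L^T followed by forward/backward
# substitution to solve A x = b.
import math

def cholesky(A):
    """Return lower-triangular L with A = L L^T."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def solve(A, b):
    """Solve A x = b via L y = b, then L^T x = y."""
    L = cholesky(A)
    n = len(b)
    y = [0.0] * n
    for i in range(n):                      # forward substitution
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):            # backward substitution
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

x = solve([[4.0, 2.0], [2.0, 3.0]], [10.0, 8.0])
```

    The inner accumulations over `k` are the memory-bound dot products whose data layout a GPU implementation has to reorganize.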

  9. Effects of aircraft and flight parameters on energy-efficient profile descents in time-based metered traffic

    NASA Technical Reports Server (NTRS)

    Dejarnette, F. R.

    1984-01-01

    Concepts to save fuel while preserving airport capacity by combining time-based metering with profile descent procedures were developed. A computer algorithm was developed to provide the flight crew with the information needed to fly from an entry fix to a metering fix and arrive there at a predetermined time, altitude, and airspeed. The flight from the metering fix to an aim point near the airport was also calculated. The flight path is divided into several descent and deceleration segments. Descents are performed at constant Mach number or calibrated airspeed, whereas decelerations occur at constant altitude. The time and distance associated with each segment are calculated from point-mass equations of motion for a clean configuration with idle thrust. Wind and nonstandard atmospheric properties have a large effect on the flight path, and uncertainty in the descent Mach number has a large effect on the predicted flight time. Of the possible combinations of Mach number and calibrated airspeed for a descent, only small changes were observed in the fuel consumed.
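
    For the constant-altitude deceleration segments, if idle-thrust deceleration is approximated as a constant rate a (a simplifying assumption for illustration; the study itself integrates the point-mass equations of motion), segment time and distance follow from elementary kinematics:

```python
# Constant-altitude deceleration segment under an assumed constant
# deceleration a: t = (v1 - v2) / a, d = (v1^2 - v2^2) / (2 a).

def decel_segment(v1, v2, a):
    """Time (s) and distance (m) to slow from v1 to v2 (m/s) at a (m/s^2)."""
    t = (v1 - v2) / a
    d = (v1 * v1 - v2 * v2) / (2.0 * a)
    return t, d

# e.g. slowing from 130 m/s to 110 m/s at 0.5 m/s^2 (illustrative numbers)
t, d = decel_segment(130.0, 110.0, 0.5)
```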

  10. Sequence-independent construction of ordered combinatorial libraries with predefined crossover points.

    PubMed

    Jézéquel, Laetitia; Loeper, Jacqueline; Pompon, Denis

    2008-11-01

    Combinatorial libraries coding for mosaic enzymes with predefined crossover points constitute useful tools to address and model structure-function relationships and for functional optimization of enzymes based on multivariate statistics. The presented method, called sequence-independent generation of a chimera-ordered library (SIGNAL), allows easy shuffling of any predefined amino acid segment between two or more proteins. This method is particularly well adapted to the exchange of protein structural modules. The procedure could also be well suited to generate ordered combinatorial libraries independent of sequence similarities in a robotized manner. Sequence segments to be recombined are first extracted by PCR from a single-stranded template coding for an enzyme of interest using a biotin-avidin-based method. This technique allows the reduction of parental template contamination in the final library. Specific PCR primers allow amplification of two complementary mosaic DNA fragments, overlapping in the region to be exchanged. Fragments are finally reassembled using a fusion PCR. The process is illustrated via the construction of a set of mosaic CYP2B enzymes using this highly modular approach.

  11. Sensitivity analysis of brain morphometry based on MRI-derived surface models

    NASA Astrophysics Data System (ADS)

    Klein, Gregory J.; Teng, Xia; Schoenemann, P. T.; Budinger, Thomas F.

    1998-07-01

    Quantification of brain structure is important for evaluating changes in brain size with growth and aging and for characterizing neurodegenerative disorders. Previous quantification efforts using ex vivo techniques suffered considerable error due to shrinkage of the cerebrum after extraction from the skull, deformation of slices during sectioning, and numerous other factors. In vivo imaging studies of brain anatomy avoid these problems and allow repeated studies following the progression of brain structure changes due to disease or natural processes. We have developed a methodology for obtaining triangular mesh models of the cortical surface from MRI brain datasets. The cortex is segmented from nonbrain tissue using a 2D region-growing technique combined with occasional manual edits. Once segmented, thresholding and image morphological operations (erosions and openings) are used to expose the regions between adjacent surfaces in deep cortical folds. A 2D region-following procedure is then used to find a set of contours outlining the cortical boundary on each slice. The contours on all slices are tiled together to form a closed triangular mesh model approximating the cortical surface. This model can be used for calculation of cortical surface area and volume, as well as other parameters of interest. Except for the initial segmentation of the cortex from the skull, the technique is automatic and requires only modest computation time on modern workstations. Though the use of image data avoids many of the pitfalls of ex vivo and sectioning techniques, our MRI-based technique is still vulnerable to errors that may impact the accuracy of estimated brain structure parameters. Potential inaccuracies include segmentation errors due to incorrect thresholding, missed deep sulcal surfaces, falsely segmented holes due to image noise, and surface tiling artifacts. The focus of this paper is the characterization of these errors and how they affect measurements of cortical surface area and volume.
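
    Given such a closed triangular mesh, the surface-area and volume computations mentioned above are straightforward: area is summed from triangle cross products, and volume follows from the divergence theorem as a sum of signed tetrahedron volumes. A minimal sketch (not the authors' code):

```python
# Surface area and enclosed volume of a closed, consistently oriented
# triangular mesh.
import math

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def sub(p, q):
    return (p[0]-q[0], p[1]-q[1], p[2]-q[2])

def mesh_area_volume(vertices, triangles):
    area = volume = 0.0
    for i, j, k in triangles:
        a, b, c = vertices[i], vertices[j], vertices[k]
        n = cross(sub(b, a), sub(c, a))
        area += 0.5 * math.sqrt(n[0]**2 + n[1]**2 + n[2]**2)
        # signed volume of the tetrahedron (origin, a, b, c)
        volume += (a[0]*(b[1]*c[2] - b[2]*c[1])
                   - a[1]*(b[0]*c[2] - b[2]*c[0])
                   + a[2]*(b[0]*c[1] - b[1]*c[0])) / 6.0
    return area, abs(volume)

# unit right tetrahedron with outward-oriented faces
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
area, vol = mesh_area_volume(verts, tris)
```

    The tiling artifacts the abstract mentions matter precisely because both quantities are sums over triangles: a falsely segmented hole adds spurious area without changing the enclosed volume much.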

  12. Psychological Distance to Reward: Effects of S+ Duration and the Delay Reduction It Signals

    ERIC Educational Resources Information Center

    Alessandri, Jerome; Stolarz-Fantino, Stephanie; Fantino, Edmund

    2011-01-01

    A concurrent-chains procedure was used to examine choice between segmented (two-component chained schedules) and unsegmented schedules (simple schedules) in terminal links with equal inter-reinforcement intervals. Previous studies using this kind of experimental procedure showed preference for unsegmented schedules for both pigeons and humans. In…

  13. Degenerative changes of the canine cervical spine after discectomy procedures, an in vivo study.

    PubMed

    Grunert, Peter; Moriguchi, Yu; Grossbard, Brian P; Ricart Arbona, Rodolfo J; Bonassar, Lawrence J; Härtl, Roger

    2017-06-23

    Discectomies are a common surgical treatment for disc herniations in the canine spine. However, the effect of these procedures on intervertebral disc tissue is not fully understood. The objective of this study was to assess degenerative changes of cervical spinal segments undergoing discectomy procedures, in vivo. Discectomies led to a 60% drop in disc height and 24% drop in foraminal height. Segments did not fuse but showed osteophyte formation as well as endplate sclerosis. MR imaging revealed terminal degenerative changes with collapse of the disc space and loss of T2 signal intensity. The endplates showed degenerative type II Modic changes. Quantitative MR imaging revealed that over 95% of Nucleus Pulposus tissue was extracted and that the nuclear as well as overall disc hydration significantly decreased. Histology confirmed terminal degenerative changes with loss of NP tissue, loss of Annulus Fibrosus organization and loss of cartilage endplate tissue. The bony endplate displayed sclerotic changes. Discectomies lead to terminal degenerative changes. Therefore, these procedures should be indicated with caution specifically when performed for prophylactic purposes.

  14. Evaluation of Hardware and Procedures for Astronaut Assembly and Repair of Large Precision Reflectors

    NASA Technical Reports Server (NTRS)

    Lake, Mark S.; Heard, Walter L., Jr.; Watson, Judith J.; Collins, Timothy J.

    2000-01-01

    A detailed procedure is presented that enables astronauts in extravehicular activity (EVA) to efficiently assemble and repair large (i.e., greater than 10m-diameter) segmented reflectors, supported by a truss, for space-based optical or radio-frequency science instruments. The procedure, estimated timelines, and reflector hardware performance are verified in simulated 0-g (neutral buoyancy) assembly tests of a 14m-diameter, offset-focus, reflector test article. The test article includes a near-flight-quality, 315-member, doubly curved support truss and 7 mockup reflector panels (roughly 2m in diameter) representing a portion of the 37 total panels needed to fully populate the reflector. Data from the tests indicate that a flight version of the design (including all reflector panels) could be assembled in less than 5 hours - less than the 6 hours normally permitted for a single EVA. This assembly rate essentially matches pre-test predictions that were based on a vast amount of historical data on EVA assembly of structures produced by NASA Langley Research Center. Furthermore, procedures and a tool for the removal and replacement of a damaged reflector panel were evaluated, and it was shown that EVA repair of this type of reflector is feasible with the use of appropriate EVA crew aids.

  15. Generalized expectation-maximization segmentation of brain MR images

    NASA Astrophysics Data System (ADS)

    Devalkeneer, Arnaud A.; Robe, Pierre A.; Verly, Jacques G.; Phillips, Christophe L. M.

    2006-03-01

    Manual segmentation of medical images is impractical because it is time consuming, not reproducible, and prone to human error. It is also very difficult to take into account the 3D nature of the images. Thus, semi- or fully-automatic methods are of great interest. Current segmentation algorithms based on an Expectation-Maximization (EM) procedure present some limitations. The algorithm by Ashburner et al., 2005, does not allow multichannel inputs, e.g. two MR images of different contrast, and does not use spatial constraints between adjacent voxels, e.g. Markov random field (MRF) constraints. The solution of Van Leemput et al., 1999, employs a simplified model (mixture coefficients are not estimated and only one Gaussian is used per tissue class, with three for the image background). We have thus implemented an algorithm that combines the features of these two approaches: multichannel inputs, intensity bias correction, a multi-Gaussian histogram model, and Markov random field (MRF) constraints. Our proposed method classifies tissues in three iterative main stages by way of a Generalized-EM (GEM) algorithm: (1) estimation of the Gaussian parameters modeling the histogram of the images, (2) correction of image intensity non-uniformity, and (3) modification of prior classification knowledge by MRF techniques. The goal of the GEM algorithm is to maximize the log-likelihood across the classes and voxels. Our segmentation algorithm was validated on synthetic data (with the Dice metric criterion) and real data (by a neurosurgeon) and compared to the original algorithms by Ashburner et al. and Van Leemput et al. Our combined approach leads to more robust and accurate segmentation.
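
    The core of any such method is EM fitting of a Gaussian mixture to the intensity histogram. A bare-bones 1-D, two-class sketch (without the bias-field correction or MRF prior that the paper adds, and with equal mixture weights for brevity) is:

```python
# Two-component, equal-weight 1-D Gaussian-mixture EM.
# E-step: per-sample responsibilities; M-step: weighted means/variances.
import math

def em_two_gaussians(xs, mu, iters=50):
    """Fit two Gaussians to xs; mu is the initial (mu1, mu2)."""
    mu1, mu2 = mu
    s1 = s2 = 1.0  # variances
    for _ in range(iters):
        # E-step: responsibility of component 1 for each sample
        r = []
        for x in xs:
            p1 = math.exp(-((x - mu1) ** 2) / (2 * s1)) / math.sqrt(s1)
            p2 = math.exp(-((x - mu2) ** 2) / (2 * s2)) / math.sqrt(s2)
            r.append(p1 / (p1 + p2))
        # M-step: responsibility-weighted means and variances
        w1, w2 = sum(r), sum(1 - ri for ri in r)
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / w1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / w2
        s1 = max(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, xs)) / w1, 1e-6)
        s2 = max(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, xs)) / w2, 1e-6)
    return mu1, mu2

xs = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]
m1, m2 = em_two_gaussians(xs, mu=(0.0, 6.0))
```

    The GEM variant interleaves extra maximization steps (bias-field and MRF updates) between these E- and M-steps, each of which only needs to increase, not maximize, the log-likelihood.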

  16. A spline-based regression parameter set for creating customized DARTEL MRI brain templates from infancy to old age.

    PubMed

    Wilke, Marko

    2018-02-01

    This dataset contains the regression parameters derived by analyzing segmented brain MRI images (gray matter and white matter) from a large population of healthy subjects, using a multivariate adaptive regression splines approach. A total of 1919 MRI datasets ranging in age from 1-75 years from four publicly available datasets (NIH, C-MIND, fCONN, and IXI) were segmented using the CAT12 segmentation framework, writing out gray matter and white matter images normalized using an affine-only spatial normalization approach. These images were then subjected to a six-step DARTEL procedure, employing an iterative non-linear registration approach and yielding increasingly crisp intermediate images. The resulting six datasets per tissue class were then analyzed using multivariate adaptive regression splines, using the CerebroMatic toolbox. This approach allows for flexibly modelling smoothly varying trajectories while taking into account demographic (age, gender) as well as technical (field strength, data quality) predictors. The resulting regression parameters described here can be used to generate matched DARTEL or SHOOT templates for a given population under study, from infancy to old age. The dataset and the algorithm used to generate it are publicly available at https://irc.cchmc.org/software/cerebromatic.php.

  17. Biologically inspired EM image alignment and neural reconstruction.

    PubMed

    Knowles-Barley, Seymour; Butcher, Nancy J; Meinertzhagen, Ian A; Armstrong, J Douglas

    2011-08-15

    Three-dimensional reconstruction of consecutive serial-section transmission electron microscopy (ssTEM) images of neural tissue currently requires many hours of manual tracing and annotation. Several computational techniques have already been applied to ssTEM images to facilitate 3D reconstruction and ease this burden. Here, we present an alternative computational approach for ssTEM image analysis. We have used biologically inspired receptive fields as a basis for a ridge detection algorithm to identify cell membranes, synaptic contacts and mitochondria. Detected line segments are used to improve alignment between consecutive images and we have joined small segments of membrane into cell surfaces using a dynamic programming algorithm similar to the Needleman-Wunsch and Smith-Waterman DNA sequence alignment procedures. A shortest path-based approach has been used to close edges and achieve image segmentation. Partial reconstructions were automatically generated and used as a basis for semi-automatic reconstruction of neural tissue. The accuracy of partial reconstructions was evaluated and 96% of membrane could be identified at the cost of 13% false positive detections. An open-source reference implementation is available in the Supplementary information. seymour.kb@ed.ac.uk; douglas.armstrong@ed.ac.uk Supplementary data are available at Bioinformatics online.
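
    The dynamic programming referred to is the textbook Needleman-Wunsch global alignment. In its classic sequence form (the scores here are illustrative, and this is not the authors' membrane-joining variant) it reads:

```python
# Needleman-Wunsch global alignment score via dynamic programming:
# score[i][j] = best score aligning a[:i] with b[:j].

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Return the optimal global alignment score of sequences a and b."""
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):   # aligning a[:i] against an empty prefix
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    return score[n][m]

s = needleman_wunsch("GATTACA", "GCATGCU")
```

    In the paper's setting, the "sequences" are chains of membrane segments and the scoring rewards geometric continuity rather than base identity, but the recurrence is the same.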

  18. Numerical investigation of liver radioembolization via computational particle-hemodynamics: The role of the microcatheter distal direction and microsphere injection point and velocity.

    PubMed

    Aramburu, Jorge; Antón, Raúl; Rivas, Alejandro; Ramos, Juan Carlos; Sangro, Bruno; Bilbao, José Ignacio

    2016-11-07

    Liver radioembolization is a treatment option for patients with primary and secondary liver cancer. The procedure consists of injecting radiation-emitting microspheres via an intra-arterially placed microcatheter, enabling the deposition of the microspheres in the tumoral bed. The microcatheter location and the particle injection rate are determined during a pretreatment work-up. The purpose of this study was to numerically investigate the effects of the injection characteristics during the first stage of microsphere travel through the bloodstream in a patient-specific hepatic artery (i.e., the near-tip particle-hemodynamics and the segment-to-segment particle distribution). Specifically, the influence of the distal direction of an end-hole microcatheter and of the particle injection point and velocity was analyzed. Results showed that the procedure targeted the right lobe when injecting from two of the three injection points under study, while the remaining injection point primarily targeted the left lobe. Changes in microcatheter direction and injection velocity resulted in an absolute difference in exiting particle percentage for a given liver segment of up to 20% and 30%, respectively. It can be concluded that even though microcatheter placement is presumably reproduced in the treatment session relative to the pretreatment angiography, the treatment may result in undesired segment-to-segment particle distribution, and therefore undesired treatment outcomes, due to modifications of any of the parameters studied, i.e., microcatheter direction and particle injection point and velocity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Incorporating scale into digital terrain analysis

    NASA Astrophysics Data System (ADS)

    Dragut, L. D.; Eisank, C.; Strasser, T.

    2009-04-01

    Digital Elevation Models (DEMs) and their derived terrain attributes are commonly used in soil-landscape modeling. Process-based terrain attributes meaningful to the soil properties of interest are sought to be produced through digital terrain analysis. Typically, standard 3 × 3 window-based algorithms are used for this purpose, thus tying the scale of the resulting layers to the spatial resolution of the available DEM. But this is likely to induce mismatches between scale domains of terrain information and soil properties of interest, which further propagate biases in soil-landscape modeling. We have started developing a procedure to incorporate scale into digital terrain analysis for terrain-based environmental modeling (Drăguţ et al., in press). The workflow was exemplified on crop yield data. Terrain information was generalized into successive scale levels with focal statistics on increasing neighborhood sizes. The degree of association between each terrain derivative and crop yield values was established iteratively for all scale levels through correlation analysis. The first peak of correlation indicated the scale level to be retained. While in a standard 3 × 3 window-based analysis mean curvature was one of the most poorly correlated terrain attributes, after generalization it turned into the best correlated variable. To illustrate the importance of scale, we compared the regression results of unfiltered and filtered mean curvature vs. crop yield. The comparison shows an improvement of R squared from a value of 0.01 when the curvature was not filtered to 0.16 when the curvature was filtered within a 55 × 55 m neighborhood. This indicates the optimum size of curvature information (scale) that influences soil fertility. We further used these results in an object-based image analysis environment to create terrain objects containing aggregated values of both terrain derivatives and crop yield. Hence, we introduce terrain segmentation as an alternative method for generating scale levels in terrain-based environmental modeling. Based on segments, R squared improved up to a value of 0.47. Before integrating the procedure described above into a software application, a thorough comparison between the results of different generalization techniques, on different datasets and terrain conditions, is necessary. This is the subject of our ongoing research as part of the SCALA project (Scales and Hierarchies in Landform Classification). References: Drăguţ, L., Schauppenlehner, T., Muhar, A., Strobl, J. and Blaschke, T., in press. Optimization of scale and parametrization for terrain segmentation: an application to soil-landscape modeling, Computers & Geosciences.
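
    The scale-search loop above (generalize with focal statistics of growing neighborhood size, then keep the size whose output correlates best with the target variable) can be illustrated on synthetic 1-D data; the signal and window radii here are invented for the sketch:

```python
# Generalize a "terrain derivative" with focal means of growing radius and
# pick the radius that correlates best with a target variable.

def focal_mean(values, radius):
    """Moving average over a window of 2*radius+1 cells (edges truncated)."""
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - radius), min(len(values), i + radius + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# target follows the broad trend; the raw signal adds alternating "noise"
trend = [float(i) for i in range(20)]
noisy = [t + (2.0 if i % 2 else -2.0) for i, t in enumerate(trend)]
best = max(range(0, 4), key=lambda r: pearson(focal_mean(noisy, r), trend))
```

    Smoothing suppresses the fine-scale variation that is uncorrelated with the target, so the correlation rises until the window starts averaging away the signal itself; the first peak marks the retained scale level.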

  20. Dosimetric impact of dual-energy CT tissue segmentation for low-energy prostate brachytherapy: a Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Remy, Charlotte; Lalonde, Arthur; Béliveau-Nadeau, Dominic; Carrier, Jean-François; Bouchard, Hugo

    2018-01-01

    The purpose of this study is to evaluate the impact of a novel tissue characterization method using dual-energy rather than single-energy computed tomography (DECT and SECT) on Monte Carlo (MC) dose calculations for low-dose-rate (LDR) prostate brachytherapy performed in a patient-like geometry. A virtual patient geometry is created using contours from a real patient pelvis CT scan, where known elemental compositions and varying densities are overwritten in each voxel. A second phantom is made with additional calcifications. Both phantoms are the ground truth with which all results are compared. Simulated CT images are generated from them using attenuation coefficients taken from the XCOM database with a 100 kVp spectrum for SECT and 80 and 140Sn kVp for DECT. Tissue segmentation for Monte Carlo dose calculation is made using a stoichiometric calibration method for the simulated SECT images. For the DECT images, Bayesian eigentissue decomposition is used. An LDR prostate brachytherapy plan is defined with 125I sources and then calculated using the EGSnrc user code Brachydose for each case. Dose distributions and dose-volume histograms (DVH) are compared to ground truth to assess the accuracy of tissue segmentation. For noiseless images, DECT-based tissue segmentation outperforms the SECT procedure, with a root mean square (RMS) error on relative dose differences of 2.39% versus 7.77%, and provides DVHs closest to the reference DVHs for all tissues. For a medium level of CT noise, Bayesian eigentissue decomposition still performs better on the overall dose calculation, as the RMS error is found to be 7.83% compared to 9.15% for SECT. Both methods give a similar DVH for the prostate, while the DECT segmentation remains more accurate for organs at risk and in the presence of calcifications, with less than 5% RMS error within the calcifications versus up to 154% for SECT. In a patient-like geometry, DECT-based tissue segmentation provides dose distributions with the highest accuracy and the least bias compared to SECT. When imaging noise is considered, the benefits of DECT are noticeable if important calcifications are found within the prostate.

  1. A new user-assisted segmentation and tracking technique for an object-based video editing system

    NASA Astrophysics Data System (ADS)

    Yu, Hong Y.; Hong, Sung-Hoon; Lee, Mike M.; Choi, Jae-Gark

    2004-03-01

    This paper presents a semi-automatic segmentation method which can be used to generate video object planes (VOPs) for object-based coding schemes and multimedia authoring environments. Semi-automatic segmentation can be considered a user-assisted segmentation technique. A user initially marks objects of interest around their boundaries, and the selected objects are then continuously separated from the unselected areas through time evolution in the image sequences. The proposed segmentation method consists of two processing steps: partially manual intra-frame segmentation and fully automatic inter-frame segmentation. The intra-frame segmentation incorporates user assistance to define the meaningful, complete visual object of interest to be segmented and to decide the precise object boundary. The inter-frame segmentation involves boundary and region tracking to obtain temporal coherence of the moving object based on the object boundary information of the previous frame. The proposed method shows stable, efficient results that could be suitable for many digital video applications, such as multimedia content authoring, content-based coding, and indexing. Based on these results, we have developed an object-based video editing system with several convenient editing functions.

  2. 23 CFR Appendix B to Subpart A of... - Designation of Segments of Section 332(a)(2) Corridors as Parts of the Interstate System

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 23 Highways 1 2014-04-01 2014-04-01 false Designation of Segments of Section 332(a)(2) Corridors as Parts of the Interstate System B Appendix B to Subpart A of Part 470 Highways FEDERAL HIGHWAY...) Corridors as Parts of the Interstate System The following guidance is comparable to current procedures for...

  3. 23 CFR Appendix B to Subpart A of... - Designation of Segments of Section 332(a)(2) Corridors as Parts of the Interstate System

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 23 Highways 1 2013-04-01 2013-04-01 false Designation of Segments of Section 332(a)(2) Corridors as Parts of the Interstate System B Appendix B to Subpart A of Part 470 Highways FEDERAL HIGHWAY...) Corridors as Parts of the Interstate System The following guidance is comparable to current procedures for...

  4. 23 CFR Appendix B to Subpart A of... - Designation of Segments of Section 332(a)(2) Corridors as Parts of the Interstate System

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 23 Highways 1 2012-04-01 2012-04-01 false Designation of Segments of Section 332(a)(2) Corridors as Parts of the Interstate System B Appendix B to Subpart A of Part 470 Highways FEDERAL HIGHWAY...) Corridors as Parts of the Interstate System The following guidance is comparable to current procedures for...

  5. 23 CFR Appendix B to Subpart A of... - Designation of Segments of Section 332(a)(2) Corridors as Parts of the Interstate System

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 23 Highways 1 2011-04-01 2011-04-01 false Designation of Segments of Section 332(a)(2) Corridors as Parts of the Interstate System B Appendix B to Subpart A of Part 470 Highways FEDERAL HIGHWAY...) Corridors as Parts of the Interstate System The following guidance is comparable to current procedures for...

  6. Statistical Segmentation of Surgical Instruments in 3D Ultrasound Images

    PubMed Central

    Linguraru, Marius George; Vasilyev, Nikolay V.; Del Nido, Pedro J.; Howe, Robert D.

    2008-01-01

    The recent development of real-time 3D ultrasound enables intracardiac beating heart procedures, but the distorted appearance of surgical instruments is a major challenge to surgeons. In addition, tissue and instruments have similar gray levels in US images and the interface between instruments and tissue is poorly defined. We present an algorithm that automatically estimates instrument location in intracardiac procedures. Expert-segmented images are used to initialize the statistical distributions of blood, tissue and instruments. Voxels are labeled through an iterative expectation-maximization algorithm using information from the neighboring voxels through a smoothing kernel. Once the three classes of voxels are separated, additional neighboring information is combined with the known shape characteristics of instruments in order to correct for misclassifications. We analyze the major axis of segmented data through their principal components and refine the results by a watershed transform, which corrects the results at the contact between instrument and tissue. We present results on 3D in-vitro data from a tank trial, and 3D in-vivo data from cardiac interventions on porcine beating hearts, using instruments of four types of materials. The comparison of algorithm results to expert-annotated images shows the correct segmentation and position of the instrument shaft. PMID:17521802
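
    The major-axis analysis mentioned above amounts to taking the principal component of the segmented voxel coordinates. A small sketch with synthetic 3-D points and power iteration on the covariance matrix (illustrative, not the authors' implementation):

```python
# Principal axis of a 3-D point cloud: center the points, build the 3x3
# covariance matrix, and extract its dominant eigenvector by power iteration.
import math, random

def principal_axis(points, iters=100):
    n = len(points)
    mean = [sum(p[d] for p in points) / n for d in range(3)]
    centered = [[p[d] - mean[d] for d in range(3)] for p in points]
    cov = [[sum(c[i] * c[j] for c in centered) / n for j in range(3)]
           for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):          # power iteration
        w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

random.seed(0)
# synthetic "instrument shaft": points along the x-axis with small jitter
pts = [(t, random.uniform(-0.05, 0.05), random.uniform(-0.05, 0.05))
       for t in [i / 10.0 for i in range(50)]]
axis = principal_axis(pts)
```

    For an elongated shaft the dominant eigenvalue far exceeds the other two, so the iteration converges quickly to the shaft direction.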

  7. Ground support system methodology and architecture

    NASA Technical Reports Server (NTRS)

    Schoen, P. D.

    1991-01-01

    A synergistic approach to systems test and support is explored. A building-block architecture provides transportability of data, procedures, and knowledge. The synergistic approach also lowers cost and risk over the life cycle of a program. Detecting design errors at the earliest phase reduces the cost of vehicle ownership. The distributed, scalable architecture is based on industry standards, maximizing transparency and maintainability. An autonomous control structure provides for distributed and segmented systems. Control of interfaces maximizes compatibility and reuse, reducing long-term program cost. An intelligent data management architecture also reduces analysis time and cost through automation.

  8. ACM-based automatic liver segmentation from 3-D CT images by combining multiple atlases and improved mean-shift techniques.

    PubMed

    Ji, Hongwei; He, Jiangping; Yang, Xin; Deklerck, Rudi; Cornelis, Jan

    2013-05-01

    In this paper, we present an autocontext model (ACM)-based automatic liver segmentation algorithm, which combines ACM, multiatlases, and mean-shift techniques to segment the liver from 3-D CT images. Our algorithm is a learning-based method and can be divided into two stages. At the first stage, i.e., the training stage, ACM is performed to learn a sequence of classifiers in each atlas space (based on each atlas and the other aligned atlases). With the use of multiple atlases, multiple sequences of ACM-based classifiers are obtained. At the second stage, i.e., the segmentation stage, the test image is segmented in each atlas space by applying each sequence of ACM-based classifiers. The final segmentation result is obtained by fusing the segmentation results from all atlas spaces via a multiclassifier fusion technique. Specifically, in order to speed up segmentation, given a test image we first use an improved mean-shift algorithm to perform over-segmentation and then implement region-based image labeling instead of the original, inefficient pixel-based image labeling. The proposed method is evaluated on the datasets of the MICCAI 2007 liver segmentation challenge. The experimental results show that the average volume overlap error and the average surface distance achieved by our method are 8.3% and 1.5 mm, respectively, which are comparable to the results reported in the existing state-of-the-art work on liver segmentation.
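
    The mean-shift step used for over-segmentation is mode seeking: each point is repeatedly moved to the weighted mean of its neighbors until it settles on a density mode. A minimal 1-D sketch with a flat kernel (the paper's improved variant operates on image intensities and is more elaborate):

```python
# 1-D mean shift with a flat kernel: shift x to the mean of all data points
# within the bandwidth until the position stabilizes on a mode.

def mean_shift_mode(x, data, bandwidth, iters=50):
    for _ in range(iters):
        window = [d for d in data if abs(d - x) <= bandwidth]
        x = sum(window) / len(window)
    return x

data = [1.0, 1.1, 1.2, 5.0, 5.1]
mode_low = mean_shift_mode(1.0, data, bandwidth=1.0)
mode_high = mean_shift_mode(5.1, data, bandwidth=1.0)
```

    Points that converge to the same mode form one region; labeling these regions instead of individual pixels is what makes the subsequent classification cheaper.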

  9. Cholecystokinin-Assisted Hydrodissection of the Gallbladder Fossa during FDG PET/CT-guided Liver Ablation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tewari, Sanjit O., E-mail: tewaris@mskcc.org; Petre, Elena N., E-mail: petree@mskcc.org; Osborne, Joseph, E-mail: osbornej@mskcc.org

    2013-12-15

    A 68-year-old female with colorectal cancer developed a metachronous isolated fluorodeoxyglucose-avid (FDG-avid) segment 5/6 gallbladder fossa hepatic lesion and was referred for percutaneous ablation. Pre-procedure computed tomography (CT) images demonstrated a distended gallbladder abutting the segment 5/6 hepatic metastasis. In order to perform ablation with clear margins and avoid direct puncture and aspiration of the gallbladder, cholecystokinin was administered intravenously to stimulate gallbladder contraction before hydrodissection. Subsequently, the lesion was ablated successfully with sufficient margins of greater than 1.0 cm, using microwave with ultrasound and FDG PET/CT guidance. The patient tolerated the procedure very well and was discharged home the next day.

  10. Group 4: Instructor training and qualifications

    NASA Technical Reports Server (NTRS)

    Sessa, R.

    1981-01-01

    Each professional instructor or check airman used in a LOFT training course should complete an FAA-approved training course in the appropriate aircraft type. Instructors used in such courses need not be type-rated. If an instructor or check airman who is presently not line-qualified is used as a LOFT instructor, he or she should remain current in line-operational procedures by observing operating procedures from the jump seat on three typical line segments per 90 days on the appropriate aircraft type. ("Line qualification" means completion as a flight crew member of at least three typical line segments per 90 days on the appropriate aircraft type.) The training should include the requirement of four hours of LOFT training in lieu of actual aircraft training or line operating experience.

  11. Cyanotic Premature Babies: A Videodisc-Based Program

    PubMed Central

    Tinsley, L.R.; Ashton, G.C.; Boychuk, R.B.; Easa, D.J.

    1989-01-01

    This program for the IBM InfoWindow system is designed to assist medical students and pediatric residents with diagnosis and management of premature infants exhibiting cyanosis. The program consists of six diverse case simulations, with additional information available on diagnosis, procedures, and relevant drugs. Respiratory difficulties accompanied by cyanosis are a common problem in premature infants at or just after birth, but the full diversity of causes is rarely seen in a short training period. The purpose of the program is to assist the student or resident with diagnosis and management of a variety of conditions which they may or may not see during their training. The opening menu permits selection from six cases, covering (1) respiratory distress syndrome proceeding through patent ductus arteriosus to pneumothorax, (2) a congenital heart disorder, (3) sepsis/pneumonia, (4) persistent fetal circulation, (5) diaphragmatic hernia, and (6) tracheo-esophageal fistula. In each case the student is provided with relevant introductory information and must then proceed with diagnosis and management. At each decision point the student may view information about relevant procedures, obtain assistance with diagnosis, or see information about useful drugs. Segments between decision points may be repeated if required. Provision is made for backtracking and review of instructional segments. The program is written in IBM's InfoWindow Presentation System authoring language and the video segments are contained on one side of a standard 12″ laserdisc. The program runs on IBM's InfoWindow System, with the touch screen used to initiate all student actions. The extensive graphics in the program were developed with Storyboard Plus, using the 640×350 resolution mode. This program is one of a number being developed for the Health Sciences Interactive Videodisc Consortium, and was funded in part by IBM Corporation.

  12. Comparison of enterotomy leak pressure among fresh, cooled, and frozen-thawed porcine jejunal segments.

    PubMed

    Aeschlimann, Kimberly A; Mann, F A; Middleton, John R; Belter, Rebecca C

    2018-05-01

    OBJECTIVE To determine whether stored (cooled or frozen-thawed) jejunal segments can be used to obtain dependable leak pressure data after enterotomy closure. SAMPLE 36 jejunal segments from 3 juvenile pigs. PROCEDURES Jejunal segments were harvested from euthanized pigs and assigned to 1 of 3 treatment groups (n = 12 segments/group) as follows: fresh (used within 4 hours after collection), cooled (stored overnight at 5°C before use), and frozen-thawed (frozen at -12°C for 8 days and thawed at room temperature [23°C] for 1 hour before use). Jejunal segments were suspended and 2-cm enterotomy incisions were made on the antimesenteric border. Enterotomies were closed with a simple continuous suture pattern. Lactated Ringer solution was infused into each segment until failure at the suture line was detected. Leak pressure was measured by use of a digital transducer. RESULTS Mean ± SD leak pressure for fresh, cooled, and frozen-thawed segments was 68.3 ± 23.7 mm Hg, 55.3 ± 28.1 mm Hg, and 14.4 ± 14.8 mm Hg, respectively. Overall, there were no significant differences in mean leak pressure among pigs, but a significant difference in mean leak pressure was detected among treatment groups. Mean leak pressure was significantly lower for frozen-thawed segments than for fresh or cooled segments, but mean leak pressure did not differ significantly between fresh and cooled segments. CONCLUSIONS AND CLINICAL RELEVANCE Fresh porcine jejunal segments or segments cooled overnight may be used for determining intestinal leak pressure, but frozen-thawed segments should not be used.

  13. Modeling and Density Estimation of an Urban Freeway Network Based on Dynamic Graph Hybrid Automata

    PubMed Central

    Chen, Yangzhou; Guo, Yuqi; Wang, Ying

    2017-01-01

    In this paper, in order to describe complex network systems, we first propose a general modeling framework by combining a dynamic graph with hybrid automata, and thus name it Dynamic Graph Hybrid Automata (DGHA). Then we apply this framework to model traffic flow over an urban freeway network by embedding the Cell Transmission Model (CTM) into the DGHA. In the modeling procedure, we adopt a dual digraph of the road network structure to describe the road topology, use linear hybrid automata to describe the multiple modes of dynamic densities in road segments, and transform the nonlinear expressions of the traffic flow transmitted between two road segments into piecewise linear functions in terms of multi-mode switchings. This modeling procedure is modularized and rule-based, and thus is easily extensible with the help of a combination algorithm for the dynamics of traffic flow. It can describe the dynamics of traffic flow over an urban freeway network with arbitrary topology structures and sizes. Next we analyze the mode types and number in the model of the whole freeway network, and deduce a Piecewise Affine Linear System (PWALS) model. Furthermore, based on the PWALS model, a multi-mode switched state observer is designed to estimate the traffic densities of the freeway network, where a set of observer gain matrices are computed by using the Lyapunov function approach. As an example, we apply the PWALS model and the corresponding switched state observer to traffic flow over the Beijing third ring road. In order to clearly interpret the principle of the proposed method and avoid computational complexity, we adopt a simplified version of the Beijing third ring road. Practical application to a large-scale road network will be implemented by a decentralized modeling approach and distributed observer design in future research. PMID:28353664
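    The piecewise-linear (multi-mode) flow rule that the DGHA framework encodes can be illustrated with a minimal Cell Transmission Model density update for a chain of road segments; all parameter values below are hypothetical, not taken from the paper:

```python
# Minimal Cell Transmission Model (CTM) update for a chain of road segments
# (illustrative sketch; parameters are hypothetical, not from the paper).

def ctm_step(rho, v=60.0, w=20.0, rho_max=200.0, q_max=6000.0,
             dt=10 / 3600, dx=0.5, inflow=3000.0, outflow_cap=6000.0):
    """One explicit CTM update of cell densities rho (veh/km per cell).

    Flow between cells is min(sending, receiving) -- the piecewise-linear
    (multi-mode) switching rule that motivates the hybrid-automaton model.
    """
    n = len(rho)
    # Sending capacity of each cell and receiving capacity of each cell.
    send = [min(v * r, q_max) for r in rho]
    recv = [min(w * (rho_max - r), q_max) for r in rho]
    # Inter-cell flows, plus boundary inflow and outflow.
    flows = [min(inflow, recv[0])]
    for i in range(n - 1):
        flows.append(min(send[i], recv[i + 1]))
    flows.append(min(send[-1], outflow_cap))
    # Conservation of vehicles: rho_i += dt/dx * (flow in - flow out).
    return [rho[i] + dt / dx * (flows[i] - flows[i + 1]) for i in range(n)]

rho = [30.0, 30.0, 30.0]
for _ in range(5):
    rho = ctm_step(rho)
print(rho)  # densities drift toward the free-flow equilibrium of the inflow
```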

  14. Modeling and Density Estimation of an Urban Freeway Network Based on Dynamic Graph Hybrid Automata.

    PubMed

    Chen, Yangzhou; Guo, Yuqi; Wang, Ying

    2017-03-29

    In this paper, in order to describe complex network systems, we first propose a general modeling framework by combining a dynamic graph with hybrid automata, and thus name it Dynamic Graph Hybrid Automata (DGHA). Then we apply this framework to model traffic flow over an urban freeway network by embedding the Cell Transmission Model (CTM) into the DGHA. In the modeling procedure, we adopt a dual digraph of the road network structure to describe the road topology, use linear hybrid automata to describe the multiple modes of dynamic densities in road segments, and transform the nonlinear expressions of the traffic flow transmitted between two road segments into piecewise linear functions in terms of multi-mode switchings. This modeling procedure is modularized and rule-based, and thus is easily extensible with the help of a combination algorithm for the dynamics of traffic flow. It can describe the dynamics of traffic flow over an urban freeway network with arbitrary topology structures and sizes. Next we analyze the mode types and number in the model of the whole freeway network, and deduce a Piecewise Affine Linear System (PWALS) model. Furthermore, based on the PWALS model, a multi-mode switched state observer is designed to estimate the traffic densities of the freeway network, where a set of observer gain matrices are computed by using the Lyapunov function approach. As an example, we apply the PWALS model and the corresponding switched state observer to traffic flow over the Beijing third ring road. In order to clearly interpret the principle of the proposed method and avoid computational complexity, we adopt a simplified version of the Beijing third ring road. Practical application to a large-scale road network will be implemented by a decentralized modeling approach and distributed observer design in future research.

  15. Automatic moment segmentation and peak detection analysis of heart sound pattern via short-time modified Hilbert transform.

    PubMed

    Sun, Shuping; Jiang, Zhongwei; Wang, Haibin; Fang, Yu

    2014-05-01

    This paper proposes a novel automatic method for the moment segmentation and peak detection analysis of the heart sound (HS) pattern, with special attention to the characteristics of the envelopes of HS and considering the properties of the Hilbert transform (HT). The moment segmentation and peak location are accomplished in two steps. First, by applying the Viola integral waveform method in the time domain, the envelope (E(T)) of the HS signal is obtained with an emphasis on the first heart sound (S1) and the second heart sound (S2). Then, based on the characteristics of E(T) and the properties of the HT of convex and concave functions, a novel method, the short-time modified Hilbert transform (STMHT), is proposed to automatically locate the moment segmentation and peak points for the HS by the zero crossing points of the STMHT. A fast algorithm for calculating the STMHT of E(T) can be expressed by multiplying E(T) by an equivalent window (W(E)). According to the range of heart beats, and based on numerical experiments and the important parameters of the STMHT, a moving window width of N=1 s is validated for locating the moment segmentation and peak points for HS. The proposed moment segmentation and peak location procedure is validated on sounds from the Michigan HS database and sounds from clinical heart diseases, such as ventricular septal defect (VSD), atrial septal defect (ASD), Tetralogy of Fallot (TOF), and rheumatic heart disease (RHD). As a result, for the sounds in which S2 can be separated from S1, the average accuracies achieved for the peak of S1 (AP₁), the peak of S2 (AP₂), the moment segmentation points from S1 to S2 (AT₁₂), and the cardiac cycle (ACC) are 98.53%, 98.31%, 98.36%, and 97.37%, respectively. For the sounds in which S1 cannot be separated from S2, the average accuracies achieved for the peaks of S1 and S2 (AP₁₂) and the cardiac cycle (ACC) are 100% and 96.69%. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
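    The zero-crossing idea behind the STMHT can be sketched with a simplified surrogate: peaks of a sampled envelope are located where its first difference crosses from positive to negative. The two-lobe envelope below is synthetic, standing in for the S1 and S2 bumps:

```python
import math

# Locating envelope peaks via zero crossings of a first difference -- a
# simplified stand-in for the paper's short-time modified Hilbert transform
# (STMHT). The signal below is synthetic, not real heart-sound data.

def zero_crossing_peaks(env):
    """Return indices where the first difference of env crosses + to -."""
    d = [env[i + 1] - env[i] for i in range(len(env) - 1)]
    return [i + 1 for i in range(len(d) - 1) if d[i] > 0 and d[i + 1] <= 0]

# Synthetic two-lobe envelope mimicking the S1 and S2 heart-sound bumps.
env = [math.exp(-((t - 30) / 8) ** 2) + 0.7 * math.exp(-((t - 80) / 8) ** 2)
       for t in range(120)]
print(zero_crossing_peaks(env))  # -> [30, 80]
```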

  16. Engineering simulation development and evaluation of the two-segment noise abatement approach conducted in the B-727-222 flight simulator

    NASA Technical Reports Server (NTRS)

    Nylen, W. E.

    1974-01-01

    Profile modification as a means of reducing ground level noise from jet aircraft in the landing approach is evaluated. A flight simulator was modified to incorporate the cockpit hardware which would be in the prototype airplane installation. The two-segment system operational and aircraft interface logic was accurately emulated in software. Programs were developed to permit data to be recorded in real time on the line printer, a 14-channel oscillograph, and an x-y plotter. The two-segment profile and procedures which were developed are described with emphasis on operational concepts and constraints. The two-segment system operational logic and the flight simulator capabilities are described. The findings influenced the ultimate system design and aircraft interface.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruland, Robert

    The Visible-Infrared SASE Amplifier (VISA) undulator consists of four 99 cm long segments. Each undulator segment is set up on a pulsed-wire bench to characterize the magnetic properties and to locate the magnetic axis of the FODO array. Subsequently, the location of the magnetic axis, as defined by the wire, is referenced to tooling balls on each magnet segment by means of a straightness interferometer. After installation in the vacuum chamber, the four magnet segments are aligned with respect to each other and globally to the beam line reference laser. A specially designed alignment fixture is used to mount one straightness interferometer each in the horizontal and vertical plane of the beam. The goal of these procedures is to keep the combined rms trajectory error, due to magnetic and alignment errors, to 50 µm.

  18. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging

    NASA Astrophysics Data System (ADS)

    Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart

    2015-02-01

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51 ± 1.92)% to (97.27 ± 0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients, ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of the selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.
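    The accuracy figures above are volumetric overlaps between an auto-segmented volume and the ground-truth estimate. A minimal sketch of volumetric overlap (Jaccard) and the related Dice coefficient on flat binary masks (illustrative; the paper's exact overlap definition may differ):

```python
# Volumetric overlap between an auto-segmented volume and a ground-truth
# volume, computed on flattened binary masks (illustrative sketch).

def volumetric_overlap(auto_mask, gt_mask):
    """Jaccard-style volumetric overlap |A ∩ B| / |A ∪ B|, in percent."""
    inter = sum(1 for a, g in zip(auto_mask, gt_mask) if a and g)
    union = sum(1 for a, g in zip(auto_mask, gt_mask) if a or g)
    return 100.0 * inter / union if union else 100.0

def dice(auto_mask, gt_mask):
    """Dice coefficient 2|A ∩ B| / (|A| + |B|), in percent."""
    inter = sum(1 for a, g in zip(auto_mask, gt_mask) if a and g)
    total = sum(auto_mask) + sum(gt_mask)
    return 100.0 * 2 * inter / total if total else 100.0

auto = [1, 1, 1, 0, 0, 0]
gt   = [0, 1, 1, 1, 0, 0]
print(volumetric_overlap(auto, gt))  # -> 50.0
print(dice(auto, gt))                # -> 66.66...
```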

  19. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging.

    PubMed

    Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L; Beauchemin, Steven S; Rodrigues, George; Gaede, Stewart

    2015-02-21

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51 ± 1.92)% to (97.27 ± 0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients, ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of the selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.

  20. Bronchoscopic Thermal Vapor Ablation: Best Practice Recommendations from an Expert Panel on Endoscopic Lung Volume Reduction.

    PubMed

    Gompelmann, Daniela; Shah, Pallav L; Valipour, Arschang; Herth, Felix J F

    2018-06-12

    Bronchoscopic thermal vapor ablation (BTVA) is one of the endoscopic lung volume reduction (ELVR) techniques that aim to reduce hyperinflation in patients with advanced emphysema and thereby improve respiratory mechanics. By targeted segmental vapor ablation, an inflammatory response leads to tissue and volume reduction of the most diseased emphysematous segments. So far, BTVA has been demonstrated in several single-arm trials and one multinational randomized controlled trial to improve lung function, exercise capacity, and quality of life in patients with upper lobe-predominant emphysema, irrespective of collateral ventilation. In this review, we emphasize the practical aspects of this ELVR method. Patients with upper lobe-predominant emphysema, forced expiratory volume in 1 second (FEV1) between 20 and 45% of predicted, residual volume (RV) > 175% of predicted, and carbon monoxide diffusing capacity (DLCO) ≥20% of predicted can be considered for BTVA treatment. Prior to the procedure, special software assists in identifying the target segments with the highest emphysema index, the highest volume, and the highest heterogeneity index relative to the untreated ipsilateral lung lobes. The procedure may be performed under deep sedation or, preferably, under general anesthesia. After positioning of the BTVA catheter and occlusion of the target segment by the occlusion balloon, heated water vapor is delivered over a predetermined time according to the vapor dose. After the procedure, patients should be strictly monitored to proactively detect symptoms of a localized inflammatory reaction, which may temporarily worsen the clinical status of the patient, and to detect complications. As the data are still very limited, BTVA should be performed within clinical trials or comprehensive registries where the product is commercially available. © 2018 S. Karger AG, Basel.

  1. Rigid shape matching by segmentation averaging.

    PubMed

    Wang, Hongzhi; Oliensis, John

    2010-04-01

    We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.
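    The "central" segmentation idea above, choosing the candidate that minimizes the average distance to all segmentations of an image, can be sketched with Hamming distance on label maps as a simple stand-in for the paper's segmentation distance:

```python
# Choosing a "central" segmentation: the candidate whose average distance to
# all segmentations is smallest. Hamming distance on flat label maps is an
# illustrative stand-in for the paper's segmentation distance.

def hamming(a, b):
    """Count of positions where two label maps disagree."""
    return sum(x != y for x, y in zip(a, b))

def central_segmentation(segs):
    """Return the segmentation minimizing total distance to the others."""
    return min(segs, key=lambda s: sum(hamming(s, t) for t in segs))

segs = [
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [0, 0, 1, 0],
    [0, 0, 1, 1],
]
print(central_segmentation(segs))  # -> [0, 0, 1, 1]
```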

  2. Multi-object model-based multi-atlas segmentation for rodent brains using dense discrete correspondences

    NASA Astrophysics Data System (ADS)

    Lee, Joohwi; Kim, Sun Hyung; Styner, Martin

    2016-03-01

    The delineation of rodent brain structures is challenging due to low-contrast multiple cortical and subcortical organs that closely interface with each other. Atlas-based segmentation has been widely employed due to its ability to delineate multiple organs at the same time via image registration. The use of multiple atlases and subsequent label fusion techniques has further improved the robustness and accuracy of atlas-based segmentation. However, the accuracy of atlas-based segmentation is still prone to registration errors; for example, the segmentation of in vivo MR images can be less accurate and robust against image artifacts than the segmentation of post mortem images. In order to improve the accuracy and robustness of atlas-based segmentation, we propose a multi-object, model-based, multi-atlas segmentation method. We first establish spatial correspondences across atlases using a set of dense pseudo-landmark particles. We build a multi-object point distribution model using those particles in order to capture inter- and intra-subject variation among brain structures. The segmentation is obtained by fitting the model to a subject image, followed by a label fusion process. Our results show that the proposed method achieved greater accuracy than comparable segmentation methods, including the widely used ANTs registration tool.
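    A point distribution model built from corresponding landmarks can be sketched as PCA over stacked landmark coordinates: the mean shape plus the leading modes of variation. The toy 2-D shapes below are hypothetical; the paper uses dense pseudo-landmark particles in 3-D:

```python
import numpy as np

# Toy point-distribution model: stack corresponding landmark coordinates
# from several aligned subjects, then capture shape variation with PCA via
# the SVD of the centered data (illustrative sketch only).

def build_pdm(shapes, n_modes=1):
    """shapes: (n_subjects, n_landmarks * dim) array of aligned landmarks.

    Returns the mean shape and the first n_modes principal variation modes.
    """
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    # Right singular vectors of the centered data are the variation modes.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_modes]

# Three subjects, three 2-D landmarks each, flattened as (x1,y1,x2,y2,x3,y3);
# all x-coordinates shift together across subjects.
shapes = [
    [0.0, 0.0, 1.0, 0.0, 1.0, 1.0],
    [0.1, 0.0, 1.1, 0.0, 1.1, 1.0],
    [-0.1, 0.0, 0.9, 0.0, 0.9, 1.0],
]
mean, modes = build_pdm(shapes)
print(mean)      # mean shape
print(modes[0])  # dominant mode: the x-coordinates move together
```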

  3. Segmentation quality evaluation using region-based precision and recall measures for remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Xueliang; Feng, Xuezhi; Xiao, Pengfeng; He, Guangjun; Zhu, Liujun

    2015-04-01

    Segmentation of remote sensing images is a critical step in geographic object-based image analysis. Evaluating the performance of segmentation algorithms is essential to identify effective segmentation methods and optimize their parameters. In this study, we propose region-based precision and recall measures and use them to compare two image partitions for the purpose of evaluating segmentation quality. The two measures are calculated based on region overlapping and presented as a point or a curve in a precision-recall space, which can indicate segmentation quality in both geometric and arithmetic respects. Furthermore, the precision and recall measures are combined by using four different methods. We examine and compare the effectiveness of the combined indicators through geometric illustration, in an effort to reveal segmentation quality clearly and capture the trade-off between the two measures. In the experiments, we adopted the multiresolution segmentation (MRS) method for evaluation. The proposed measures are compared with four existing discrepancy measures to further confirm their capabilities. Finally, we suggest using a combination of the region-based precision-recall curve and the F-measure for supervised segmentation evaluation.
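    Region-based precision and recall computed from region overlap can be sketched as follows; the best-overlap matching rule and the pixel sets here are simplifications of the paper's region-based definitions, for illustration only:

```python
# Region-overlap precision and recall between a segmentation and a reference
# partition, combined into an F-measure (illustrative; the matching rule is
# a simplification of the paper's region-based definition).

def precision_recall(seg_regions, ref_regions):
    """seg_regions / ref_regions: lists of sets of pixel ids.

    A segment pixel counts as correct if it lies in the reference region
    that overlaps its segment the most, and symmetrically for recall.
    """
    correct = sum(max(len(s & r) for r in ref_regions) for s in seg_regions)
    total_seg = sum(len(s) for s in seg_regions)
    precision = correct / total_seg

    matched = sum(max(len(r & s) for s in seg_regions) for r in ref_regions)
    total_ref = sum(len(r) for r in ref_regions)
    recall = matched / total_ref

    # F-measure: harmonic mean of the two region-based measures.
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f

ref = [set(range(0, 50)), set(range(50, 100))]    # reference partition
seg = [set(range(0, 40)), set(range(40, 100))]    # shifted segmentation
p, r, f = precision_recall(seg, ref)
print(round(p, 3), round(r, 3), round(f, 3))  # -> 0.9 0.9 0.9
```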

  4. Deep learning and texture-based semantic label fusion for brain tumor segmentation

    NASA Astrophysics Data System (ADS)

    Vidyaratne, L.; Alam, M.; Shboul, Z.; Iftekharuddin, K. M.

    2018-02-01

    Brain tumor segmentation is a fundamental step in surgical treatment and therapy. Many hand-crafted and learning-based methods have been proposed for automatic brain tumor segmentation from MRI. Studies have shown that these approaches have their inherent advantages and limitations. This work proposes a semantic label fusion algorithm that combines two representative state-of-the-art segmentation algorithms, a texture-based hand-crafted method and a deep learning based method, to obtain robust tumor segmentation. We evaluate the proposed method using the publicly available BRATS 2017 brain tumor segmentation challenge dataset. The results show that the proposed method offers improved segmentation by alleviating each method's inherent weaknesses: the extensive false positives of the texture-based method and the false tumor tissue classification problem of the deep learning method, respectively. Furthermore, we investigate the effect of patient gender on segmentation performance using a subset of the validation dataset. The substantial improvement in brain tumor segmentation performance proposed in this work recently enabled our group to secure first place in the overall patient survival prediction task at the BRATS 2017 challenge.

  5. Deep Learning and Texture-Based Semantic Label Fusion for Brain Tumor Segmentation.

    PubMed

    Vidyaratne, L; Alam, M; Shboul, Z; Iftekharuddin, K M

    2018-01-01

    Brain tumor segmentation is a fundamental step in surgical treatment and therapy. Many hand-crafted and learning-based methods have been proposed for automatic brain tumor segmentation from MRI. Studies have shown that these approaches have their inherent advantages and limitations. This work proposes a semantic label fusion algorithm that combines two representative state-of-the-art segmentation algorithms, a texture-based hand-crafted method and a deep learning based method, to obtain robust tumor segmentation. We evaluate the proposed method using the publicly available BRATS 2017 brain tumor segmentation challenge dataset. The results show that the proposed method offers improved segmentation by alleviating each method's inherent weaknesses: the extensive false positives of the texture-based method and the false tumor tissue classification problem of the deep learning method, respectively. Furthermore, we investigate the effect of patient gender on segmentation performance using a subset of the validation dataset. The substantial improvement in brain tumor segmentation performance proposed in this work recently enabled our group to secure first place in the overall patient survival prediction task at the BRATS 2017 challenge.

  6. Tracking transcriptional activities with high-content epifluorescent imaging

    NASA Astrophysics Data System (ADS)

    Hua, Jianping; Sima, Chao; Cypert, Milana; Gooden, Gerald C.; Shack, Sonsoles; Alla, Lalitamba; Smith, Edward A.; Trent, Jeffrey M.; Dougherty, Edward R.; Bittner, Michael L.

    2012-04-01

    High-content cell imaging based on fluorescent protein reporters has recently been used to track the transcriptional activities of multiple genes under different external stimuli for extended periods. This technology enhances our ability to discover treatment-induced regulatory mechanisms, temporally order their onsets and recognize their relationships. To fully realize these possibilities and explore their potential in biological and pharmaceutical applications, we introduce a new data processing procedure to extract information about the dynamics of cell processes based on this technology. The proposed procedure contains two parts: (1) image processing, where the fluorescent images are processed to identify individual cells and allow their transcriptional activity levels to be quantified; and (2) data representation, where the extracted time course data are summarized and represented in a way that facilitates efficient evaluation. Experiments show that the proposed procedure achieves fast and robust image segmentation with sufficient accuracy. The extracted cellular dynamics are highly reproducible and sensitive enough to detect subtle activity differences and identify mechanisms responding to selected perturbations. This method should be able to help biologists identify the alterations of cellular mechanisms that allow drug candidates to change cell behavior and thereby improve the efficiency of drug discovery and treatment design.
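    The first stage above, identifying individual cells so their activity levels can be quantified, can be sketched as thresholding followed by connected-component labeling. The intensity grid below is a toy example; the paper's pipeline is considerably more elaborate:

```python
# Threshold-and-label sketch of the image-processing stage: find individual
# bright cells in a fluorescence intensity grid via 4-connected component
# labeling (toy data; illustrative only).

def label_cells(img, thresh):
    """Label 4-connected components of pixels with intensity above thresh.

    img: list of lists of intensities. Returns (label grid, component count).
    """
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] > thresh and labels[y][x] == 0:
                next_label += 1
                # Flood-fill this component with an explicit stack.
                labels[y][x] = next_label
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and img[ny][nx] > thresh
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            stack.append((ny, nx))
    return labels, next_label

img = [
    [0, 9, 9, 0, 0],
    [0, 9, 0, 0, 8],
    [0, 0, 0, 0, 8],
]
labels, n = label_cells(img, thresh=5)
print(n)  # -> 2 separate cells
```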

  7. Retroperitoneal hemorrhage from an unrecognized puncture of the lumbar right segmental artery during lumbar chemical sympathectomy: diagnosis and management.

    PubMed

    Shin, Ho-Jin; Choi, Yun-Mi; Kim, Hye-Jin; Lee, Sun-Jae; Yoon, Seok-Hyun; Kim, Kyung-Hoon

    2014-12-01

    Lumbar chemical sympathectomy has been performed using fluoroscopic guidance for needle positioning. An 84-year-old woman with atherosclerosis obliterans was referred to the pain clinic for intractable cold allodynia of her right foot. A thermogram showed decreased temperature of both feet compared with the temperatures above both ankles. The patient agreed to undergo lumbar chemical sympathectomy using fluoroscopy after being informed of the associated risks of nerve injury, hemorrhage, infection, transient back pain, and transient hypotension. During the procedure and for three hours afterward, no abnormal signs or symptoms were found except an increase in right leg temperature. The patient was ambulatory after the procedure. However, one day after undergoing lumbar chemical sympathectomy, she visited our emergency department for abdominal discomfort and postural dizziness. Her blood pressure was 80/50 mmHg, and flank tenderness was noted. Retroperitoneal hemorrhage from the second right lumbar segmental artery was shown on computed tomography and angiography. Vital signs were stabilized immediately after embolization of the right lumbar segmental artery. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Liver hanging maneuver for right hemiliver in situ donation--anatomical considerations.

    PubMed

    Trotovsek, B; Gadzijev, E M; Ravnik, D; Hribernik, M

    2006-01-01

    An anatomical study was carried out to evaluate the safety of the liver hanging maneuver for the right hemiliver in living donor and in situ split-liver transplantation. During this procedure, a 4-6 cm blind dissection is performed between the inferior vena cava and the liver. Short subhepatic veins entering the inferior vena cava from segments 1 and 9 could be torn, with consequent hemorrhage. One hundred corrosion casts of livers were evaluated to establish the position and diameter of the short subhepatic veins and the inferior right hepatic vein. The average distance from the right border of the inferior vena cava to the opening of the segment 1 veins was 16.7 ± 3.4 mm, and to the entrance of the segment 9 veins was 5.0 ± 0.5 mm. The width of the narrowest point on the route of blind dissection was determined, with an average value of 8.7 ± 2.3 mm (range 2-15 mm). The results show that the liver hanging maneuver is a safe procedure. The proposed route of dissection minimizes the risk of disrupting short subhepatic veins (7%).

  9. Percutaneous coronary angioplasty of a left anterior descending artery implanted on a Dacron coronary prosthesis on an aortic conduit.

    PubMed

    Drobinski, G; Thomas, D; Funck, F; Metzger, J P; Canny, M; Grosgogeat, Y

    1986-08-01

    Certain surgical techniques may make it difficult to catheterize the coronary ostia and perform percutaneous coronary angioplasty. We report the case of a 48-year-old patient who developed unstable angina four years after a Bentall's procedure with reimplantation of the coronary arteries on a Dacron coronary prosthesis. The anginal pain was related to very severe stenosis of the proximal segment of the left anterior descending artery. The difficulties encountered during the dilatation procedure were due to: (a) the ectopic position of the ostium of the prosthesis on the anterior aortic wall; (b) the forces exerted on the aortic prosthesis wall and on the valvular prosthesis during positioning of the guiding catheter, which were poorly tolerated and induced a vagal reaction; (c) the direction taken by the distal tip of the guiding catheter, perpendicular to the wall of the aortic prosthesis; (d) the sinuosity of the arterial trajectory: the left coronary segment of the coronary prosthesis was directed towards the left circumflex artery rather than towards the left anterior descending artery. Coronary angioplasty succeeded after relatively complex technical procedures: a special guiding catheter, unusual intra-aortic manoeuvres for positioning the guiding catheter, and a dilatation catheter exchange over a 3-metre long guide wire in order to cross the stenotic segment with a super-low-profile dilatation catheter. There were no complications and the anginal pain disappeared.

  10. Posterior Surgery for Adolescent Idiopathic Scoliosis With Pedicle Screws and Ultrahigh-Molecular Weight Polyethylene Tape: Achieving the Ideal Thoracic Kyphosis.

    PubMed

    Imagama, Shiro; Ito, Zenya; Wakao, Norimitsu; Ando, Kei; Hirano, Kenichi; Tauchi, Ryoji; Muramoto, Akio; Matsui, Hiroki; Matsumoto, Tomohiro; Sakai, Yoshihito; Katayama, Yoshito; Matsuyama, Yukihiro; Ishiguro, Naoki

    2016-10-01

    Prospective clinical case series. To describe our surgical procedure and results for posterior correction and fusion with a hybrid approach using pedicle screws, hooks, and ultrahigh-molecular weight polyethylene tape with direct vertebral rotation (DVR) (the PSTH-DVR procedure) for treatment of adolescent idiopathic scoliosis (AIS) with satisfactory correction in the coronal and sagittal planes. Introduction of segmental pedicle screws in posterior surgery for AIS has facilitated good correction and fusion. However, procedures using only pedicle screws carry risks during screw insertion, higher costs, and decreased postoperative thoracic kyphosis. We have obtained good outcomes compared with segmental pedicle screw fixation in surgery for AIS using a relatively simple operative procedure (PSTH-DVR) that uses fewer pedicle screws. The subjects were 30 consecutive patients with AIS who underwent the PSTH-DVR procedure and were followed for a minimum of 2 years. Preoperative flexibility, preoperative and postoperative Cobb angles, correction rates, loss of correction, thoracic kyphotic angles (T5-T12), coronal balance, sagittal balance, and shoulder balance were measured on plain radiographs. Rib hump, operation time, estimated blood loss, spinal cord monitoring findings, complications, and Scoliosis Research Society (SRS)-22 scores were also examined. The mean preoperative curve of 58.0 degrees (range, 40-96 degrees) was corrected to a mean of 9.9 degrees postoperatively, and the correction rate was 83.6%. Fusion was obtained in all patients without loss of correction. In 10 cases with preoperative kyphosis angles (T5-T12) <10 degrees, the preoperative mean of 5.8 degrees improved to 20.2 degrees at the final follow-up. Rib hump and coronal, sagittal and shoulder balances were also improved, and good SRS-22 scores were achieved at final follow-up. The correction of deformity with PSTH-DVR is equivalent to that of all-pedicle-screw constructs. The procedure gives favorable correction, is advantageous for kyphosis compared with segmental screw fixation, and uses the minimum number of pedicle screws. Therefore, the PSTH-DVR procedure may be useful for treatment of idiopathic scoliosis.

  11. A systematic review of image segmentation methodology, used in the additive manufacture of patient-specific 3D printed models of the cardiovascular system.

    PubMed

    Byrne, N; Velasco Forte, M; Tandon, A; Valverde, I; Hussain, T

    2016-01-01

    Shortcomings in existing methods of image segmentation preclude the widespread adoption of patient-specific 3D printing as a routine decision-making tool in the care of those with congenital heart disease. We sought to determine the range of cardiovascular segmentation methods and how long each of these methods takes. A systematic review of literature was undertaken. Medical imaging modality, segmentation methods, segmentation time, segmentation descriptive quality (SDQ) and segmentation software were recorded. In total, 136 studies met the inclusion criteria (1 clinical trial; 80 journal articles; 55 conference, technical and case reports). The most frequently used image segmentation methods were brightness thresholding, region growing and manual editing, as supported by the most popular piece of proprietary software: Mimics (Materialise NV, Leuven, Belgium, 1992-2015). The use of bespoke software developed by individual authors was not uncommon. SDQ indicated that reporting of image segmentation methods was generally poor, with only one in three accounts providing sufficient detail for their procedure to be reproduced. Predominantly anecdotal and case reporting precluded rigorous assessment of risk of bias and strength of evidence. This review finds a reliance on manual and semi-automated segmentation methods which demand a high level of expertise and a significant time commitment on the part of the operator. In light of the findings, we have made recommendations regarding reporting of 3D printing studies. We anticipate that these findings will encourage the development of advanced image segmentation methods.
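
    The brightness thresholding that the review finds most common can be sketched in a few lines. This is an illustrative toy, not the Mimics pipeline; the image values and threshold are invented.

```python
def threshold_segment(image, level):
    """Brightness thresholding: return a binary mask that is 1 where
    intensity >= level and 0 elsewhere."""
    return [[1 if px >= level else 0 for px in row] for row in image]

# Toy 4x4 "CT slice": bright contrast-enhanced blood pool on a dark background.
slice_ = [
    [ 10,  20, 200, 210],
    [ 15, 190, 220, 205],
    [ 12, 185, 215,  30],
    [  8,  14,  25,  18],
]

mask = threshold_segment(slice_, 100)
print(mask)
```

    In practice this step is followed by region growing and manual editing to remove spurious bright structures.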

  12. Contour-Driven Atlas-Based Segmentation

    PubMed Central

    Wachinger, Christian; Fritscher, Karl; Sharp, Greg; Golland, Polina

    2016-01-01

    We propose new methods for automatic segmentation of images based on an atlas of manually labeled scans and contours in the image. First, we introduce a Bayesian framework for creating initial label maps from manually annotated training images. Within this framework, we model various registration- and patch-based segmentation techniques by changing the deformation field prior. Second, we perform contour-driven regression on the created label maps to refine the segmentation. Image contours and image parcellations give rise to non-stationary kernel functions that model the relationship between image locations. Setting the kernel to the covariance function in a Gaussian process establishes a distribution over label maps supported by image structures. Maximum a posteriori estimation of the distribution over label maps conditioned on the outcome of the atlas-based segmentation yields the refined segmentation. We evaluate the segmentation in two clinical applications: the segmentation of parotid glands in head and neck CT scans and the segmentation of the left atrium in cardiac MR angiography images. PMID:26068202
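
    As a rough illustration of a parcellation-gated, non-stationary kernel, the sketch below smooths an atlas-derived label map with a kernel that does not cross region boundaries. This is a simplified stand-in for the paper's Gaussian-process MAP refinement, not the authors' method; the 1D signal, region labels, and length scale are invented.

```python
import math

def kernel(i, j, region, ell=2.0):
    """Non-stationary kernel: squared-exponential in position, gated by an
    image parcellation so correlations do not cross region boundaries."""
    if region[i] != region[j]:
        return 0.0
    return math.exp(-((i - j) ** 2) / (2.0 * ell ** 2))

def refine(labels, region):
    """Kernel-weighted smoothing of an initial (atlas-based) label map,
    a simplified stand-in for the MAP estimate under the Gaussian process."""
    out = []
    for i in range(len(labels)):
        w = [kernel(i, j, region) for j in range(len(labels))]
        out.append(sum(wj * l for wj, l in zip(w, labels)) / sum(w))
    return out

# 1D toy: atlas labels with one erroneous voxel inside region B
region = ['A'] * 5 + ['B'] * 5
labels = [0, 0, 0, 0, 0, 1, 1, 0, 1, 1]   # index 7 is an atlas error
refined = refine(labels, region)
print([round(v) for v in refined])  # → [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
```

    The boundary-aware kernel corrects the isolated error at index 7 without leaking label mass from region A into region B.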

  13. At least seven days delayed stenting using minimalist immediate mechanical intervention (MIMI) in ST-segment elevation myocardial infarction: the SUPER-MIMI study.

    PubMed

    Mester, Petru; Bouvaist, Helene; Delarche, Nicolas; Bouisset, Frédéric; Abdellaoui, Mohamed; Petiteau, Pierre-Yves; Dubreuil, Olivier; Boueri, Ziad; Chettibi, Mohamed; Souteyrand, Géraud; Madiot, Hend; Belle, Loic

    2017-07-20

    The aim of this study was to ascertain whether a minimalist immediate mechanical intervention (MIMI) aiming to restore an optimal Thrombolysis In Myocardial Infarction (TIMI) flow in the culprit artery, followed ≥7 days later by a second percutaneous coronary intervention with intentional stenting, is safe in patients with ST-segment elevation myocardial infarction and large thrombotic burden. SUPER-MIMI was a prospective, observational trial conducted between January 2014 and April 2015 in 14 French centres. A total of 155 patients were enrolled. The pharmacological therapy was left to the operator's discretion. Eighty-one patients (52.3%) had glycoprotein IIb/IIIa inhibitors (GPI) initiated before the end of the first procedure. The median (interquartile range [IQR]) delay between the two procedures was eight (seven to 12) days. Infarct-related artery reocclusion between the two procedures (primary endpoint) occurred in two patients (1.3%), neither of whom received GPI treatment. TIMI flow was maintained or improved between the end of the first procedure and the beginning of the second procedure in all patients. Thrombotic burden and stenosis severity diminished significantly between the two procedures. Stents were ultimately implanted in 97 patients (62.6%). Deferred stenting (≥7 days) in patients with a high thrombus burden was safe on a background of GPI therapy.

  14. Mathematical modelling of the growth of human fetus anatomical structures.

    PubMed

    Dudek, Krzysztof; Kędzia, Wojciech; Kędzia, Emilia; Kędzia, Alicja; Derkowski, Wojciech

    2017-09-01

    The goal of this study was to present a procedure enabling mathematical analysis of the increase in linear sizes of human anatomical structures, to estimate mathematical model parameters and to evaluate their adequacy. The section material consisted of 67 foetuses for the rectus abdominis muscle and 75 foetuses for the biceps femoris muscle. The following methods were incorporated into the study: preparation and anthropologic methods, digital image acquisition, Image J computer system measurements and statistical analysis. We used an anthropologic method based on age determination with the use of crown-rump length-CRL (V-TUB) by Scammon and Calkins. The choice of mathematical function should be based on the real course of the curve presenting growth of an anatomical structure's linear size y in subsequent weeks t of pregnancy. Size changes can be described with a segmental-linear model or a one-function model with accuracy adequate for clinical purposes. The interdependence of size and age is described by many functions; however, the following are most often considered: linear, polynomial, spline, logarithmic, power, exponential, power-exponential, log-logistic I and II, Gompertz's I and II and von Bertalanffy's function. With the use of the procedures described above, mathematical model parameters were assessed for V-PL (the total length of body) and CRL body length increases, rectus abdominis total length h, its segments hI, hII, hIII, hIV, as well as biceps femoris length and width of the long head (LHL and LHW) and of the short head (SHL and SHW). The best adjustments to measurement results were observed in the exponential and Gompertz models.
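
    Of the candidate functions listed, the Gompertz curve can be contrasted with a straight-line fit in a few lines. The parameters and "measurements" below are invented for illustration; the study's actual fits used its foetal data.

```python
import math

def gompertz(t, a, b, c):
    """Gompertz growth curve y(t) = a * exp(-b * exp(-c*t))."""
    return a * math.exp(-b * math.exp(-c * t))

def linear_fit(ts, ys):
    """Closed-form least-squares line y = p + q*t."""
    n = len(ts)
    st, sy = sum(ts), sum(ys)
    stt = sum(t * t for t in ts)
    sty = sum(t * y for t, y in zip(ts, ys))
    q = (n * sty - st * sy) / (n * stt - st * st)
    p = (sy - q * st) / n
    return p, q

def sse(ys, fitted):
    """Sum of squared errors between data and a fitted curve."""
    return sum((y - f) ** 2 for y, f in zip(ys, fitted))

# hypothetical muscle lengths (mm) over weeks of pregnancy, generated from
# a Gompertz curve with invented parameters a=60, b=4, c=0.12
weeks = list(range(10, 31, 2))
lengths = [gompertz(t, 60.0, 4.0, 0.12) for t in weeks]

p, q = linear_fit(weeks, lengths)
linear = [p + q * t for t in weeks]
# the sigmoid data leave a nonzero residual under the best straight line
print(sse(lengths, linear) > sse(lengths, lengths))  # → True
```

    The same sum-of-squares comparison, applied across all candidate functions, is how a "best adjustment" such as the exponential or Gompertz model is selected.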

  15. Laparoscopic liver resection: Experience based guidelines

    PubMed Central

    Coelho, Fabricio Ferreira; Kruger, Jaime Arthur Pirola; Fonseca, Gilton Marques; Araújo, Raphael Leonardo Cunha; Jeismann, Vagner Birk; Perini, Marcos Vinícius; Lupinacci, Renato Micelli; Cecconello, Ivan; Herman, Paulo

    2016-01-01

    Laparoscopic liver resection (LLR) has been progressively developed along the past two decades. Despite initial skepticism, improved operative results made laparoscopic approach incorporated to surgical practice and operations increased in frequency and complexity. Evidence supporting LLR comes from case-series, comparative studies and meta-analysis. Despite lack of level 1 evidence, the body of literature is stronger and existing data confirms the safety, feasibility and benefits of laparoscopic approach when compared to open resection. Indications for LLR do not differ from those for open surgery. They include benign and malignant (both primary and metastatic) tumors and living donor liver harvesting. Currently, resection of lesions located on anterolateral segments and left lateral sectionectomy are performed systematically by laparoscopy in hepatobiliary specialized centers. Resection of lesions located on posterosuperior segments (1, 4a, 7, 8) and major liver resections were shown to be feasible but remain technically demanding procedures, which should be reserved to experienced surgeons. Hand-assisted and laparoscopy-assisted procedures appeared to increase the indications of minimally invasive liver surgery and are useful strategies applied to difficult and major resections. LLR proved to be safe for malignant lesions and offers some short-term advantages over open resection. Oncological results including resection margin status and long-term survival were not inferior to open resection. At present, surgical community expects high quality studies to base the already perceived better outcomes achieved by laparoscopy in major centers’ practice. Continuous surgical training, as well as new technologies should augment the application of laparoscopic liver surgery. Future applicability of new technologies such as robot assistance and image-guided surgery is still under investigation. PMID:26843910

  16. Segmentation of images of abdominal organs.

    PubMed

    Wu, Jie; Kamath, Markad V; Noseworthy, Michael D; Boylan, Colm; Poehlman, Skip

    2008-01-01

    Abdominal organ segmentation, that is, the delineation of organ areas in the abdomen, plays an important role in the process of radiological evaluation. Attempts to automate segmentation of abdominal organs will aid radiologists who are required to view thousands of images daily. This review outlines the current state-of-the-art semi-automated and automated methods used to segment abdominal organ regions from computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound images. Segmentation methods generally fall into three categories: pixel based, region based and boundary tracing. While pixel-based methods classify each individual pixel, region-based methods identify regions with similar properties. Boundary tracing is accomplished by a model of the image boundary. This paper evaluates the effectiveness of the above algorithms with an emphasis on their advantages and disadvantages for abdominal organ segmentation. Several evaluation metrics that compare machine-based segmentation with that of an expert (radiologist) are identified and examined. Finally, features based on intensity as well as the texture of a small region around a pixel are explored. This review concludes with a discussion of possible future trends for abdominal organ segmentation.
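
    A minimal region-based method, seeded region growing with 4-connectivity, can be sketched as follows; the toy image, seed, and tolerance are invented.

```python
from collections import deque

def region_grow(image, seed, tol):
    """Region-based segmentation: grow from the seed pixel, adding
    4-connected pixels whose intensity is within tol of the seed's."""
    h, w = len(image), len(image[0])
    sr, sc = seed
    base = image[sr][sc]
    mask = [[0] * w for _ in range(h)]
    mask[sr][sc] = 1
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr][nc] \
               and abs(image[nr][nc] - base) <= tol:
                mask[nr][nc] = 1
                q.append((nr, nc))
    return mask

# toy image: a homogeneous "organ" (values near 50) on a dark background
img = [
    [50, 52, 10, 11],
    [51, 53, 12, 10],
    [49, 50, 51, 12],
    [10, 48, 52, 13],
]
print(region_grow(img, (0, 0), 5))
```

    Pixel-based methods would instead classify every pixel independently; boundary tracing would fit a contour model to the organ edge.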

  17. Segmentation of stereo terrain images

    NASA Astrophysics Data System (ADS)

    George, Debra A.; Privitera, Claudio M.; Blackmon, Theodore T.; Zbinden, Eric; Stark, Lawrence W.

    2000-06-01

    We have studied four approaches to segmentation of images: three automatic ones using image processing algorithms and a fourth approach, human manual segmentation. We were motivated toward helping with an important NASA Mars rover mission task -- replacing laborious manual path planning with automatic navigation of the rover on the Mars terrain. The goal of the automatic segmentations was to identify an obstacle map on the Mars terrain to enable automatic path planning for the rover. The automatic segmentation was first explored with two different segmentation methods: one based on pixel luminance, and the other based on pixel altitude generated through stereo image processing. The third automatic segmentation was achieved by combining these two types of image segmentation. Human manual segmentation of Martian terrain images was used for evaluating the effectiveness of the combined automatic segmentation as well as for determining how different humans segment the same images. Comparisons between two different segmentations, manual or automatic, were measured using a similarity metric, S_AB. Based on this metric, the combined automatic segmentation did fairly well in agreeing with the manual segmentation. This was a demonstration of a positive step towards automatically creating the accurate obstacle maps necessary for automatic path planning and rover navigation.
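
    The abstract does not define S_AB, so the sketch below uses a Jaccard-style overlap ratio as a hypothetical stand-in for comparing two binary obstacle maps; the toy masks are invented.

```python
def similarity(mask_a, mask_b):
    """Jaccard-style overlap between two binary obstacle maps:
    |A ∩ B| / |A ∪ B| (1.0 when both maps are empty)."""
    inter = union = 0
    for row_a, row_b in zip(mask_a, mask_b):
        for a, b in zip(row_a, row_b):
            inter += a and b
            union += a or b
    return inter / union if union else 1.0

manual = [[1, 1, 0], [0, 1, 0], [0, 0, 0]]   # hypothetical human obstacle map
auto   = [[1, 1, 0], [0, 1, 1], [0, 0, 0]]   # hypothetical automatic map
print(similarity(manual, auto))  # → 0.75
```

    Any symmetric overlap measure would serve the same purpose of scoring agreement between manual and automatic segmentations.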

  18. Pulmonary embolism detection using localized vessel-based features in dual energy CT

    NASA Astrophysics Data System (ADS)

    Dicente Cid, Yashin; Depeursinge, Adrien; Foncubierta Rodríguez, Antonio; Platon, Alexandra; Poletti, Pierre-Alexandre; Müller, Henning

    2015-03-01

    Pulmonary embolism (PE) affects up to 600,000 patients and contributes to at least 100,000 deaths every year in the United States alone. Diagnosis of PE can be difficult as most symptoms are unspecific, and early diagnosis is essential for successful treatment. Computed Tomography (CT) images can show morphological anomalies that suggest the existence of PE. Various image-based procedures have been proposed for improving computer-aided diagnosis of PE. We propose a novel method for detecting PE based on localized vessel-based features computed in Dual Energy CT (DECT) images. DECT provides 4D data indexed by the three spatial coordinates and the energy level. The proposed features encode the variation of the Hounsfield units across the different energy levels and the CT attenuation related to the amount of iodine contrast in each vessel. A local classification of the vessels is obtained through the classification of these features. Moreover, the localization of the vessel in the lung provides better comparison between patients. Results show that the simple features designed are able to classify pulmonary embolism patients with an AUC (area under the receiver operating characteristic curve) of 0.71 on a lobe basis. Prior segmentation of the lung lobes is not necessary because an automatic atlas-based segmentation obtains similar AUC levels (0.65) for the same dataset. The automatic atlas reaches 0.80 AUC in a larger dataset with more control cases.
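
    The lobe-level AUC quoted above can be computed with the rank-based (Wilcoxon-Mann-Whitney) formulation; the per-lobe classifier scores below are invented.

```python
def roc_auc(scores_pos, scores_neg):
    """Rank-based AUC: the probability that a randomly chosen positive
    case scores higher than a randomly chosen negative case
    (ties count as 1/2)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# hypothetical per-lobe scores for PE-positive lobes and control lobes
pe   = [0.9, 0.7, 0.6, 0.4]
ctrl = [0.5, 0.3, 0.2, 0.6]
print(roc_auc(pe, ctrl))  # → 0.84375
```

    An AUC of 0.5 would mean chance-level discrimination; the paper's 0.71 on a lobe basis sits between chance and perfect separation.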

  19. Using external and internal locking plates in a two-stage protocol for treatment of segmental tibial fractures.

    PubMed

    Ma, Ching-Hou; Tu, Yuan-Kun; Yeh, Jih-Hsi; Yang, Shih-Chieh; Wu, Chin-Hsien

    2011-09-01

    Segmental tibial fractures usually follow high-energy trauma and are often associated with many complications. We designed a two-stage protocol for these complex injuries. The aim of this study was to assess the outcome of tibial segmental fractures treated according to this protocol. A prospective series of 25 consecutive segmental tibial fractures were treated using a two-stage procedure. In the first stage, a low-profile locking plate was applied as an external fixator to temporarily immobilize the fractures after anatomic reduction had been achieved, followed by soft-tissue reconstruction. The second stage involved definitive internal fixation with a locking plate using a minimally invasive percutaneous plate osteosynthesis technique. The median follow-up was 32 months (range, 20-44 months). All fractures achieved union. The median time for proximal fracture union was 23 weeks (range, 12-30 weeks) and that for distal fracture union was 27 weeks (range, 12-46 weeks; p = 0.08). Functional results were excellent in 21 patients and good in 4 patients. There were three cases of delayed union of the distal fracture. Valgus malunion >5 degrees occurred in two patients, and length discrepancy >1 cm was observed in two patients. Pin tract infection occurred in three patients. Use of the two-stage procedure for treatment of segmental tibial fractures is recommended. Surgeons can achieve good reduction with stable temporary fixation, soft-tissue reconstruction, ease of subsequent definitive fixation, and high union rates. Our patients obtained excellent knee and ankle joint motion, good functional outcomes, and a comfortable clinical course.

  20. A combined learning algorithm for prostate segmentation on 3D CT images.

    PubMed

    Ma, Ling; Guo, Rongrong; Zhang, Guoyi; Schuster, David M; Fei, Baowei

    2017-11-01

    Segmentation of the prostate on CT images has many applications in the diagnosis and treatment of prostate cancer. Because of the low soft-tissue contrast on CT images, prostate segmentation is a challenging task. A learning-based segmentation method is proposed for the prostate on three-dimensional (3D) CT images. We combine population-based and patient-based learning methods for segmenting the prostate on CT images. Population data can provide useful information to guide the segmentation processing. Because of inter-patient variations, patient-specific information is particularly useful for improving the segmentation accuracy for an individual patient. In this study, we combine a population learning method and a patient-specific learning method to improve the robustness of prostate segmentation on CT images. We train a population model based on the data from a group of prostate patients. We also train a patient-specific model based on the data of the individual patient and incorporate the information marked by user interaction into the segmentation processing. We calculate the similarity between the two models to obtain applicable population and patient-specific knowledge to compute the likelihood of a pixel belonging to prostate tissue. A new adaptive threshold method is developed to convert the likelihood image into a binary image of the prostate, and thus complete the segmentation of the gland on CT images. The proposed learning-based segmentation algorithm was validated using 3D CT volumes of 92 patients. All of the CT image volumes were manually segmented independently three times by two clinically experienced radiologists, and the manual segmentation results served as the gold standard for evaluation. The experimental results show that the segmentation method achieved a Dice similarity coefficient of 87.18 ± 2.99% compared to the manual segmentation. By combining the population learning and patient-specific learning methods, the proposed method is effective for segmenting the prostate on 3D CT images. The prostate CT segmentation method can be used in various applications including volume measurement and treatment planning of the prostate. © 2017 American Association of Physicists in Medicine.
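
    The Dice similarity coefficient used for evaluation is straightforward to compute from two binary masks; the toy masks below are invented.

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient 2*|A ∩ B| / (|A| + |B|)
    between two binary masks (1.0 when both masks are empty)."""
    inter = sum(a and b for ra, rb in zip(mask_a, mask_b)
                for a, b in zip(ra, rb))
    total = sum(map(sum, mask_a)) + sum(map(sum, mask_b))
    return 2.0 * inter / total if total else 1.0

gold = [[0, 1, 1], [0, 1, 1], [0, 0, 0]]   # hypothetical manual contour
auto = [[0, 1, 1], [0, 1, 0], [0, 1, 0]]   # hypothetical algorithm output
print(dice(gold, auto))  # → 0.75
```

    A Dice value of 1.0 means perfect overlap with the gold standard; the paper reports a mean of 87.18%.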

  1. An anatomically based protocol for the description of foot segment kinematics during gait.

    PubMed

    Leardini, A; Benedetti, M G; Catani, F; Simoncini, L; Giannini, S

    1999-10-01

    To design a technique for the in vivo description of ankle and other foot joint rotations to be applied in routine functional evaluation using non-invasive stereophotogrammetry. Position and orientation of tibia/fibula, calcaneus, mid-foot, 1st metatarsal and hallux segments were tracked during the stance phase of walking in nine asymptomatic subjects. Rigid clusters of reflective markers were used for foot segment pose estimation. Anatomical landmark calibration was applied for the reconstruction of anatomical landmarks. Previous studies have analysed only a limited number of joints or have proposed invasive techniques. Anatomical landmark trajectories were reconstructed in the laboratory frame using data from the anatomical calibration procedure. Anatomical co-ordinate frames were defined using the obtained landmark trajectories. Joint co-ordinate systems were used to calculate corresponding joint rotations in all three anatomical planes. The patterns of the joint rotations were highly repeatable within subjects. Consistent patterns between subjects were also exhibited at most of the joints. The method proposed enables a detailed description of ankle and other foot joint rotations on an anatomical base. Joint rotations can therefore be expressed in the well-established terminology necessary for their clinical interpretation. Functional evaluation of patients affected by foot diseases has recently called for more detailed and non-invasive protocols for the description of foot joint rotations during gait. The proposed method can help clinicians to distinguish between normal and pathological pattern of foot joint rotations, and to quantitatively assess the restoration of normal function after treatment.

  2. Three-dimensional reconstruction of teeth and jaws based on segmentation of CT images using watershed transformation.

    PubMed

    Naumovich, S S; Naumovich, S A; Goncharenko, V G

    2015-01-01

    The objective of the present study was the development and clinical testing of a three-dimensional (3D) reconstruction method for teeth and jaw bone tissue on the basis of CT images of the maxillofacial region. 3D reconstruction was performed using specially designed original software based on watershed transformation. Computed tomograms in Digital Imaging and Communications in Medicine (DICOM) format obtained on multispiral CT and CBCT scanners were used for creation of 3D models of teeth and the jaws. The processing algorithm performs stepwise threshold image segmentation with the placement of markers, in a multiplanar projection mode, in areas corresponding to the teeth and bone tissue. The developed software initially creates coarse 3D models of the entire dentition and the jaw. Then, further procedures refine the model of the jaw and cut the dentition into separate teeth. Proper selection of the segmentation threshold is very important for CBCT images, which have low contrast and a high noise level. The developed semi-automatic algorithm for processing multispiral and cone beam computed tomograms allows 3D models of teeth to be created, separating them from the bone tissue of the jaws. The software is easy to install in a dentist's workplace, has an intuitive interface and takes little time in processing. The obtained 3D models can be used for solving a wide range of scientific and clinical tasks.

  3. A method for determining the weak statistical stationarity of a random process

    NASA Technical Reports Server (NTRS)

    Sadeh, W. Z.; Koper, C. A., Jr.

    1978-01-01

    A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
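
    The equivalent-ensemble construction can be sketched directly: segment one long record into equal sub-records, then compare ensemble averages at fixed time indices with time averages over single records. The signal below is a synthetic stationary process invented for illustration.

```python
import math

def equivalent_ensemble(signal, n_records):
    """Segment one long time history into n equal, contiguous,
    finite sample records (the 'equivalent ensemble')."""
    m = len(signal) // n_records
    return [signal[i * m:(i + 1) * m] for i in range(n_records)]

def ensemble_average(records, t):
    """Average across the equivalent ensemble at a fixed time index t."""
    return sum(r[t] for r in records) / len(records)

def time_average(record):
    """Time average over a single sample record."""
    return sum(record) / len(record)

# toy stationary process: constant mean with a bounded fluctuation
x = [1.0 + 0.1 * math.sin(i) for i in range(1000)]
records = equivalent_ensemble(x, 10)

# weak stationarity: ensemble averages should be (nearly) time invariant
ens = [ensemble_average(records, t) for t in range(0, 100, 20)]
# ergodicity estimate: time averages over single records agree with them
tim = [time_average(r) for r in records]
print(max(ens) - min(ens) < 0.25, all(abs(m - 1.0) < 0.01 for m in tim))  # → True True
```

    A non-stationary signal (for example, one with a drifting mean) would fail the first check, since the ensemble averages would vary with the time index.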

  4. FEL Trajectory Analysis for the VISA Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuhn, Heinz-Dieter

    1998-10-06

    The Visual to Infrared SASE Amplifier (VISA) [1] FEL is designed to achieve saturation at radiation wavelengths between 800 and 600 nm with a 4-m pure permanent magnet undulator. The undulator comprises four 99-cm segments, each of which has four FODO focusing cells superposed on the beam by means of permanent magnets in the gap alongside the beam. Each segment will also have two beam position monitors and two sets of x-y dipole correctors. The trajectory walk-off in each segment will be reduced to a value smaller than the rms beam radius by means of magnet sorting, precise fabrication, and post-fabrication shimming and trim magnets. However, this leaves possible inter-segment alignment errors. A trajectory analysis code has been used in combination with the FRED3D [2] FEL code to simulate the effect of the shimming procedure and segment alignment errors on the electron beam trajectory and to determine the sensitivity of the FEL gain process to trajectory errors. The paper describes the technique used to establish tolerances for the segment alignment.

  5. Hepatic parenchymal atrophy induction for intractable segmental bile duct injury after liver resection.

    PubMed

    Hwang, Shin; Park, Gil-Chun; Ha, Tae-Yong; Ko, Gi-Young; Gwon, Dong-Il; Choi, Young-Il; Song, Gi-Won; Lee, Sung-Gyu

    2012-05-01

    Liver resection can result in various types of bile duct injuries but their treatment is usually difficult and often leads to intractable clinical course. We present an unusual case of hepatic segment III duct (B3) injury, which occurred after left medial sectionectomy for large hepatocellular carcinoma and was incidentally detected 1 week later due to bile leak. Since the pattern of this B3 injury was not adequate for operative biliary reconstruction, atrophy induction of the involved hepatic parenchyma was attempted. This treatment consisted of embolization of the segment III portal branch to inhibit bile production, induction of heavy adhesion at the bile leak site and clamping of the percutaneous transhepatic biliary drainage (PTBD) tube to accelerate segment III atrophy. This entire procedure, from liver resection to PTBD tube removal took 4 months. This patient has shown no other complication or tumor recurrence for 4 years to date. These findings suggest that percutaneous segmental portal vein embolization, followed by intentional clamping of external biliary drainage, can effectively control intractable bile leak from segmental bile duct injury.

  6. Development of OCR system for portable passport and visa reader

    NASA Astrophysics Data System (ADS)

    Visilter, Yury V.; Zheltov, Sergey Y.; Lukin, Anton A.

    1999-01-01

    Modern passport and visa documents include special machine-readable zones that satisfy ICAO standards, which makes it possible to develop automatic passport and visa readers. However, such OCR systems face several specific problems: low resolution of character images captured by a CCD camera (down to 150 dpi), substantial shifts and slopes (up to 10 degrees), rich paper texture under the character symbols, and non-homogeneous illumination. This paper presents the structure and some special aspects of an OCR system for a portable passport and visa reader. In our approach the binarization procedure is performed after the segmentation step and is applied to each character site separately. The character recognition procedure uses the structural information of the machine-readable zone. Special algorithms are developed for machine-readable zone extraction and character segmentation.
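
    Per-site binarization after segmentation could, for example, use Otsu's method; the paper does not name its binarization algorithm, so this is an assumed illustration with invented pixel values.

```python
def otsu_threshold(pixels):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the background/foreground split."""
    best_t, best_var = 0, -1.0
    for t in sorted(set(pixels))[:-1]:
        bg = [p for p in pixels if p <= t]
        fg = [p for p in pixels if p > t]
        wb, wf = len(bg) / len(pixels), len(fg) / len(pixels)
        mb, mf = sum(bg) / len(bg), sum(fg) / len(fg)
        var = wb * wf * (mb - mf) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# one character "site": dark glyph pixels on textured bright paper
site = [200, 210, 190, 205, 30, 40, 35, 215, 25, 198]
t = otsu_threshold(site)
print(t)  # → 40
```

    Binarizing each character site separately makes the threshold robust to the non-homogeneous illumination and paper texture mentioned above.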

  7. Calibration of a semi-automated segmenting method for quantification of adipose tissue compartments from magnetic resonance images of mice.

    PubMed

    Garteiser, Philippe; Doblas, Sabrina; Towner, Rheal A; Griffin, Timothy M

    2013-11-01

    To use an automated water-suppressed magnetic resonance imaging (MRI) method to objectively assess adipose tissue (AT) volumes in the whole body and specific regional body compartments (subcutaneous, thoracic and peritoneal) of obese and lean mice. Water-suppressed MR images were obtained on a 7T, horizontal-bore MRI system in whole bodies (excluding head) of 26-week-old male C57BL6J mice fed a control (10% kcal fat) or high-fat diet (60% kcal fat) for 20 weeks. Manual (outlined regions) versus automated (Gaussian fitting applied to threshold-weighted images) segmentation procedures were compared for whole body AT and regional AT volumes (i.e., subcutaneous, thoracic, and peritoneal). The automated AT segmentation method was compared to dual-energy X-ray absorptiometry (DXA) analysis. The average AT volumes for whole body and individual compartments correlated well between the manual outlining and the automated methods (R2>0.77, p<0.05). Subcutaneous, peritoneal, and total body AT volumes were increased 2-3 fold and thoracic AT volume increased more than 5-fold in diet-induced obese mice versus controls (p<0.05). MRI- and DXA-based method comparisons were highly correlative (R2=0.94, p<0.0001). Automated AT segmentation of water-suppressed MRI data using a global Gaussian filtering algorithm resulted in a fairly accurate assessment of total and regional AT volumes in a pre-clinical mouse model of obesity. © 2013 Elsevier Inc. All rights reserved.
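
    A much-simplified, assumption-laden version of Gaussian-fit thresholding: fit a single Gaussian to the intensity distribution and flag outliers above k standard deviations as adipose tissue. The voxel values and k are invented, and this is a stand-in for the paper's Gaussian fit on threshold-weighted images, not its algorithm.

```python
import statistics

def gaussian_threshold(voxels, k=3.0):
    """Assume the (background-dominated) intensity distribution is roughly
    Gaussian; flag voxels more than k standard deviations above the mean."""
    mu = statistics.fmean(voxels)
    sigma = statistics.pstdev(voxels)
    cut = mu + k * sigma
    return [1 if v > cut else 0 for v in voxels], cut

# 96 background voxels near 10 a.u. and 4 bright "fat" voxels near 200 a.u.
mask, cut = gaussian_threshold([10] * 96 + [200] * 4)
print(sum(mask))  # → 4 voxels classified as adipose tissue
```

    Summing the mask over each anatomical compartment (subcutaneous, thoracic, peritoneal) then yields the regional AT volumes that are compared against manual outlining and DXA.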

  8. Increasing the feasibility of minimally invasive procedures in type A aortic dissections: a framework for segmentation and quantification.

    PubMed

    Morariu, Cosmin Adrian; Terheiden, Tobias; Dohle, Daniel Sebastian; Tsagakis, Konstantinos; Pauli, Josef

    2016-02-01

    Our goal is to provide precise measurements of the aortic dimensions in case of dissection pathologies. Quantification of surface lengths and aortic radii/diameters, together with the visualization of the dissection membrane, represents a crucial prerequisite for enabling minimally invasive treatment of type A dissections, which by definition involve the ascending aorta. We seek a measure invariant to luminance and contrast for aortic outer wall segmentation. Therefore, we propose a 2D graph-based approach using phase congruency combined with additional features. Phase congruency is extended to 3D by designing a novel conic directional filter and adding a lowpass component to the 3D Log-Gabor filterbank for extracting the fine dissection membrane, which separates the true lumen from the false one within the aorta. The result of the outer wall segmentation is compared with manually annotated axial slices belonging to 11 CTA datasets. Quantitative assessment of our novel 2D/3D membrane extraction algorithms has been obtained for 10 datasets and reveals subvoxel accuracy in all cases. Aortic inner and outer surface lengths, determined within 2 cadaveric CT datasets, are validated against manual measurements performed by a vascular surgeon on excised aortas of the body donors. This contribution proposes a complete pipeline for segmentation and quantification of aortic dissections. Validation against ground truth of the 3D contour length quantification represents a significant step toward custom-designed stent-grafts.

  9. A novel line segment detection algorithm based on graph search

    NASA Astrophysics Data System (ADS)

    Zhao, Hong-dan; Liu, Guo-ying; Song, Xu

    2018-02-01

    To address the problem of extracting line segments from an image, a line segment detection method based on graph search is proposed. After obtaining the edge detection result of the image, candidate straight line segments are obtained in four directions. The adjacency relationships among the candidate segments are depicted by a graph model, on which a depth-first search is employed to determine which adjacent line segments should be merged. Finally, the least squares method is used to fit the detected straight lines. Comparative experimental results verify that the proposed algorithm achieves better results than the line segment detector (LSD).
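
    The merge step described above can be sketched as follows. This is a minimal illustration under assumptions not stated in the abstract: segments are given as endpoint pairs, two segments are "adjacent" when their nearest endpoints lie within a gap tolerance, and each connected component found by depth-first search is fitted with one least-squares line (note `np.polyfit` on y-vs-x cannot represent vertical lines).

```python
import numpy as np

def merge_segments(segments, gap=2.0):
    """Merge candidate line segments via graph search.

    segments: iterable of (x0, y0, x1, y1) endpoint tuples.
    Returns one (slope, intercept) least-squares line per merged group."""
    ends = [np.asarray(s, float).reshape(2, 2) for s in segments]
    n = len(ends)
    adj = [[] for _ in range(n)]
    for i in range(n):                      # build the adjacency graph
        for j in range(i + 1, n):
            d = min(np.linalg.norm(a - b) for a in ends[i] for b in ends[j])
            if d <= gap:
                adj[i].append(j)
                adj[j].append(i)
    seen, lines = [False] * n, []
    for s in range(n):
        if seen[s]:
            continue
        stack, comp = [s], []
        seen[s] = True
        while stack:                        # iterative depth-first search
            u = stack.pop()
            comp.append(u)
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    stack.append(v)
        pts = np.vstack([ends[k] for k in comp])
        slope, intercept = np.polyfit(pts[:, 0], pts[:, 1], 1)
        lines.append((slope, intercept))
    return lines
```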

  10. [Mexican Cardiology Society Guidelines on the management of patients with unstable angina and non-ST-segment elevation myocardial infarction. Cancún, Quintana Roo 15-16 November 2002. Cooperative Group of Consensus].

    PubMed

    Lupi-Herrera, Eulo

    2002-01-01

    Mexican Cardiology Society guidelines for the management of patients with unstable angina and non-ST-segment elevation myocardial infarction are presented. The Mexican Society of Cardiology undertook the elaboration of these guidelines in the area of acute coronary syndromes based on the recent report of RENASICA (National Registry of Acute Coronary Syndromes): 70% of the ACS seen in emergency departments during 1999-2001, in hospitals of second and third level of medical attention, corresponded to patients with unstable angina and non-ST-segment elevation myocardial infarction. Experts in the subject under consideration were selected to examine subject-specific data and to write the guidelines. Special groups were specifically chosen to perform a formal literature review, to weigh the strength of evidence for or against a particular treatment or procedure, and to include estimates of expected health outcomes where data exist. Current classifications were used in the recommendations, which summarize both the evidence and expert opinion and provide final recommendations for both patient evaluation and therapy. These guidelines represent an attempt to define practices that meet the needs of most patients in most circumstances in Mexico. The ultimate judgment regarding the care of a particular patient must be made by the physician and patient in light of all the available information and the circumstances presented by that patient. The present guidelines for the management of patients with unstable angina and non-ST-segment elevation myocardial infarction should be reviewed in the near future by Mexican cardiologists, in light of forthcoming advances in ACS without ST-segment elevation.

  11. Texture segmentation of non-cooperative spacecrafts images based on wavelet and fractal dimension

    NASA Astrophysics Data System (ADS)

    Wu, Kanzhi; Yue, Xiaokui

    2011-06-01

    With the increase of on-orbit manipulations and space conflicts, missions such as tracking and capturing target spacecraft have arisen. Unlike with cooperative spacecraft, fixing beacons or any other markers on the targets is impossible. Because the shape and geometry of a non-cooperative spacecraft are unknown, in order to localize the target and obtain its attitude, we need to segment the target image and recognize the target against the background. This also reduces the data volume and errors in subsequent procedures such as feature extraction and matching. Multi-resolution analysis in wavelet theory reflects how humans recognize images, from low resolution to high resolution. In addition, the spacecraft is the only man-made object in the image compared to the natural background, so differences between the fractal dimensions of target and background can certainly be observed. Combining the wavelet transform and fractal dimension, we propose in this paper a new segmentation algorithm for images that contain complicated backgrounds such as the universe and planet surfaces. First, a Daubechies wavelet basis is applied to decompose the image along both the x axis and the y axis, obtaining four sub-images. Then the fractal dimensions of the four sub-images are calculated using different methods; after analyzing the results, we choose Differential Box Counting on the low-resolution image as the criterion to segment the texture, since it shows the greatest divergence between different sub-images. This paper also presents experimental results obtained with the above algorithm, demonstrating that an accurate texture segmentation result can be obtained using the proposed technique.
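
    The Differential Box Counting estimator named above can be sketched as below. This is a textbook formulation, not the authors' exact variant: the image is treated as an intensity surface, boxes of side s (and proportional height) are counted over each grid cell, and the fractal dimension is the slope of log N versus log(1/s). A square image with side divisible by the box sizes is assumed for simplicity.

```python
import numpy as np

def differential_box_counting(gray, sizes=(2, 4, 8, 16)):
    """Differential Box Counting (DBC) fractal-dimension estimate.

    For each grid size s, count the boxes needed to cover the intensity
    surface in every s-by-s cell, then fit log(N) against log(1/s)."""
    g = np.asarray(gray, dtype=float)
    M = g.shape[0]
    counts = []
    for s in sizes:
        h = s * (g.max() + 1.0) / M            # box height for this scale
        n = 0
        for i in range(0, M - M % s, s):
            for j in range(0, M - M % s, s):
                block = g[i:i + s, j:j + s]
                l = np.ceil((block.max() + 1.0) / h)
                k = np.ceil((block.min() + 1.0) / h)
                n += int(l - k + 1)            # boxes spanning the surface
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope                               # fractal dimension estimate
```

    A flat patch yields dimension 2 (a plane), while rough texture pushes the estimate toward 3, which is what makes DBC usable as a segmentation criterion.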

  12. Adaptive attenuation of aliased ground roll using the shearlet transform

    NASA Astrophysics Data System (ADS)

    Hosseini, Seyed Abolfazl; Javaherian, Abdolrahim; Hassani, Hossien; Torabi, Siyavash; Sadri, Maryam

    2015-01-01

    Attenuation of ground roll is an essential step in seismic data processing. Spatial aliasing of the ground roll may cause it to overlap with reflections in the f-k domain. The shearlet transform is a directional, multidimensional transform that separates events with different dips and generates subimages at different scales and directions. In this study, the shearlet transform was used adaptively to attenuate aliased and non-aliased ground roll. After defining a filtering zone, an input shot record is divided into segments, each of which overlaps its adjacent segments. The shearlet transform is applied to each segment; the subimages containing aliased and non-aliased ground roll are identified, and the locations of these events in each subimage are selected adaptively. Based on these locations, a mute is applied to the selected subimages. After the inverse shearlet transform, the filtered segments are merged together using the Hanning function. This adaptive ground roll attenuation process was tested on synthetic data and on field shot records from the west of Iran. Analysis of the results using the f-k spectra revealed that the non-aliased and most of the aliased ground roll were attenuated by the proposed adaptive attenuation procedure. We also applied this method to shot records of a 2D land survey; the data sets before and after ground roll attenuation were stacked and compared. The stacked section after ground roll attenuation contained less linear ground roll noise and more continuous reflections than the stacked section before attenuation. The proposed method has some drawbacks, such as a longer run time than traditional methods such as f-k filtering, and reduced performance when the dip and frequency content of the aliased ground roll are the same as those of the reflections.
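
    The Hanning-based merge of overlapping filtered segments can be sketched as a 1-D overlap-add, shown below for a single trace. This is an illustration of the merging step only, under the assumption of equal-length segments spaced a fixed hop apart; the paper's segments are 2-D shot-record windows.

```python
import numpy as np

def merge_with_hanning(segments, hop):
    """Overlap-add merge of filtered, overlapping 1-D segments using a
    Hann taper so segment edges blend smoothly; weights are normalized
    so amplitudes are preserved wherever any window is nonzero."""
    seg_len = len(segments[0])
    out = np.zeros(hop * (len(segments) - 1) + seg_len)
    wsum = np.zeros_like(out)
    w = np.hanning(seg_len)
    for i, seg in enumerate(segments):
        out[i * hop:i * hop + seg_len] += w * seg
        wsum[i * hop:i * hop + seg_len] += w
    good = wsum > 1e-12
    out[good] /= wsum[good]
    return out
```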

  13. A minimally interactive method to segment enlarged lymph nodes in 3D thoracic CT images using a rotatable spiral-scanning technique

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Moltz, Jan H.; Bornemann, Lars; Hahn, Horst K.

    2012-03-01

    Precise size measurement of enlarged lymph nodes is a significant indicator for diagnosing malignancy, follow-up, and therapy monitoring of cancer diseases. The presence of diverse sizes and shapes, inhomogeneous enhancement, and the adjacency to neighboring structures with similar intensities make the segmentation task challenging. We present a semi-automatic approach requiring minimal user interaction to segment enlarged lymph nodes quickly and robustly. First, a stroke approximating the largest diameter of a specific lymph node is drawn manually, from which a volume of interest (VOI) is determined. Second, based on the statistical analysis of the intensities in the dilated stroke area, a region growing procedure is utilized within the VOI to create an initial segmentation of the target lymph node. Third, a rotatable spiral-scanning technique is proposed to resample the 3D boundary surface of the lymph node into a 2D boundary contour in a transformed polar image. The boundary contour is found by seeking the optimal path in the 2D polar image with a dynamic programming algorithm and is eventually transformed back to 3D. Ultimately, the boundary surface of the lymph node is determined using an interpolation scheme followed by post-processing steps. To test the robustness and efficiency of our method, a quantitative evaluation was conducted on a dataset of 315 lymph nodes acquired from 79 patients with lymphoma and melanoma. Compared to the reference segmentations, an average Dice coefficient of 0.88 with a standard deviation of 0.08, and an average absolute surface distance of 0.54 mm with a standard deviation of 0.48 mm, were achieved.
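
    The dynamic-programming boundary search in the polar image can be sketched as follows. This is a simplified illustration, not the authors' implementation: one radius is chosen per angle column of a cost image, the radius may change by at most `max_step` between neighboring angles, and (unlike a true closed contour) no wrap-around constraint ties the last column back to the first.

```python
import numpy as np

def optimal_polar_contour(cost, max_step=1):
    """Minimum-cost radial boundary in a polar image via dynamic programming.

    cost: (n_angles, n_radii) array, low values on the desired boundary.
    Returns the chosen radius index for every angle."""
    n_ang, n_rad = cost.shape
    acc = cost.astype(float).copy()                 # accumulated cost
    back = np.zeros((n_ang, n_rad), dtype=int)      # backtracking pointers
    for a in range(1, n_ang):
        for r in range(n_rad):
            lo, hi = max(0, r - max_step), min(n_rad, r + max_step + 1)
            prev = acc[a - 1, lo:hi]
            k = int(np.argmin(prev))
            acc[a, r] += prev[k]
            back[a, r] = lo + k
    path = np.empty(n_ang, dtype=int)               # trace cheapest path back
    path[-1] = int(np.argmin(acc[-1]))
    for a in range(n_ang - 1, 0, -1):
        path[a - 1] = back[a, path[a]]
    return path
```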

  14. Fully convolutional neural networks for polyp segmentation in colonoscopy

    NASA Astrophysics Data System (ADS)

    Brandao, Patrick; Mazomenos, Evangelos; Ciuti, Gastone; Caliò, Renato; Bianchi, Federico; Menciassi, Arianna; Dario, Paolo; Koulaouzidis, Anastasios; Arezzo, Alberto; Stoyanov, Danail

    2017-03-01

    Colorectal cancer (CRC) is one of the most common and deadliest forms of cancer, accounting for nearly 10% of all forms of cancer in the world. Even though colonoscopy is considered the most effective method for screening and diagnosis, the success of the procedure is highly dependent on the operator's skill and level of hand-eye coordination. In this work, we propose to adapt fully convolutional neural networks (FCNs) to identify and segment polyps in colonoscopy images. We converted three established networks into a fully convolutional architecture and fine-tuned their learned representations to the polyp segmentation task. We validate our framework on the 2015 MICCAI polyp detection challenge dataset, surpassing the state-of-the-art in automated polyp detection. Our method obtained high segmentation accuracy and a detection precision and recall of 73.61% and 86.31%, respectively.
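
    Metrics like those reported above can be computed as sketched below. Note one assumption: the paper's precision/recall are per-polyp detection figures, whereas this minimal version scores binary masks pixel-wise (Dice, precision, recall).

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise Dice coefficient, precision, and recall for binary masks."""
    pred, truth = np.asarray(pred).astype(bool), np.asarray(truth).astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    dice = 2.0 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return dice, precision, recall
```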

  15. Segmenting Bone Parts for Bone Age Assessment using Point Distribution Model and Contour Modelling

    NASA Astrophysics Data System (ADS)

    Kaur, Amandeep; Singh Mann, Kulwinder, Dr.

    2018-01-01

    Bone age assessment (BAA) is a task performed on radiographs by pediatricians in hospitals to predict final adult height and to diagnose growth disorders by monitoring skeletal development. For building an automatic bone age assessment system, the routine first step is pre-processing of the bone X-rays so that feature rows can be constructed. In this research paper, an enhanced point distribution algorithm using contours has been implemented for segmenting bone parts according to the well-established bone age assessment procedure; this segmentation supports construction of the feature rows and, later on, of the automatic bone age assessment system itself. The implemented segmentation algorithm shows a high degree of accuracy, in terms of recall and precision, in segmenting bone parts from left-hand X-rays.

  16. Bio-medical flow sensor [intravenous procedures]

    NASA Technical Reports Server (NTRS)

    Winkler, H. E. (Inventor)

    1981-01-01

    A bio-medical flow sensor comprising a packageable unit of a bottle, tubing, and hypodermic needle that can be pre-sterilized and is disposable. The tubing has spaced-apart tubular metal segments. The temperature of the metal segments, and of the fluid flowing within them, is sensed by thermistors; at a downstream location, heat is input to a metal segment by a resistor driven by the control electronics. The electrical power required for the resistor to maintain a constant temperature differential between the tubular metal segments is a measurable function of fluid flow through the tubing. The differential temperature measurement is made in the control electronics and can also be used to drive a flow control valve or pump on the tubing, to maintain a constant flow and to shut off the tubing when air is present.
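
    The power-to-flow conversion implied above can be sketched as a simple table lookup. The calibration values here are entirely hypothetical (a real sensor of this kind would be calibrated empirically); the sketch only illustrates that, at constant temperature differential, heater power increases monotonically with flow and can be inverted by interpolation.

```python
import numpy as np

def flow_from_power(power_mw, cal_power_mw, cal_flow_ml_min):
    """Estimate flow from the heater power needed to hold a constant
    temperature differential, by interpolating a monotone calibration table."""
    return float(np.interp(power_mw, cal_power_mw, cal_flow_ml_min))

# Hypothetical calibration: more flow carries away more heat -> more power.
cal_p = np.array([10.0, 20.0, 35.0, 55.0, 80.0])   # heater power, mW
cal_q = np.array([0.0, 1.0, 2.0, 4.0, 8.0])        # flow, mL/min
```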

  17. Estimates of Median Flows for Streams on the 1999 Kansas Surface Water Register

    USGS Publications Warehouse

    Perry, Charles A.; Wolock, David M.; Artman, Joshua C.

    2004-01-01

    The Kansas State Legislature, by enacting Kansas Statute KSA 82a-2001 et seq., mandated the criteria for determining which Kansas stream segments would be subject to classification by the State. One criterion for selection as a classified stream segment is based on the statistic of median flow being equal to or greater than 1 cubic foot per second. As specified by KSA 82a-2001 et seq., median flows were determined from U.S. Geological Survey streamflow-gaging-station data by using the most recent 10 years of gaged data (KSA) for each streamflow-gaging station. Median flows also were determined by using gaged data from the entire period of record (all-available hydrology, AAH). Least-squares multiple regression techniques were used, along with Tobit analyses, to develop equations for estimating median flows for uncontrolled stream segments. The drainage area of the gaging stations on uncontrolled stream segments used in the regression analyses ranged from 2.06 to 12,004 square miles. A logarithmic transformation of the data was needed to develop the best linear relation for computing median flows. In the regression analyses, the significant climatic and basin characteristics, in order of importance, were drainage area, mean annual precipitation, mean basin permeability, and mean basin slope. Tobit analyses of KSA data yielded a model standard error of prediction of 0.285 logarithmic units, and the best equations using Tobit analyses of AAH data had a model standard error of prediction of 0.250 logarithmic units. These regression equations and an interpolation procedure were used to compute median flows for the uncontrolled stream segments on the 1999 Kansas Surface Water Register. Measured median flows from gaging stations were incorporated into the regression-estimated median flows along the stream segments where available. The segments that were uncontrolled were interpolated using gaged data weighted according to the drainage area and the bias between the regression-estimated and gaged flow information. On controlled segments of Kansas streams, the median flow information was interpolated between gaging stations using only gaged data weighted by drainage area. Of the 2,232 total stream segments on the Kansas Surface Water Register, 34.5 percent of the segments had an estimated median streamflow of less than 1 cubic foot per second when the KSA analysis was used. When the AAH analysis was used, 36.2 percent of the segments had an estimated median streamflow of less than 1 cubic foot per second. This report supersedes U.S. Geological Survey Water-Resources Investigations Report 02-4292.
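
    The log-transformed regression model described above can be sketched as follows. Note a deliberate simplification: the report used Tobit analysis to handle median flows censored at zero, whereas this sketch fits ordinary least squares in log10 space; the basin characteristics and coefficients below are illustrative, not the report's.

```python
import numpy as np

def fit_log_regression(X, y):
    """OLS fit of log10(y) = b0 + b1*log10(x1) + ... over basin
    characteristics (e.g. drainage area, mean annual precipitation)."""
    A = np.column_stack([np.ones(len(y)), np.log10(X)])
    coef, *_ = np.linalg.lstsq(A, np.log10(y), rcond=None)
    return coef                 # intercept plus one exponent per feature

def predict_median_flow(coef, X):
    """Back-transform the log-space prediction to flow units."""
    A = np.column_stack([np.ones(len(X)), np.log10(X)])
    return 10.0 ** (A @ coef)
```

    Because the model is linear in log space, the fitted slopes are the exponents of a power-law relation, flow = c * area^b1 * precip^b2.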

  18. Energy flow during Olympic weight lifting.

    PubMed

    Garhammer, J

    1982-01-01

    Data obtained from 16-mm film of world-caliber Olympic weight lifters performing at major competitions were analyzed to study energy changes during body segment and barbell movements, energy transfer to the barbell, and energy transfer between segments during the lifting movements contested. Determination of barbell and body segment kinematics, together with rigid-link modeling and energy flow techniques, permitted the calculation of segment energy content and energy transfer between segments. Energy generation within, and transfer to and from, segments was determined at 0.04-s intervals by comparing mechanical energy changes of a segment with energy transfer at the joints, calculated from the scalar product of net joint force with absolute joint velocity, and the product of net joint torque due to muscular activity with absolute segment angular velocity. The results provided a detailed understanding of the magnitude and timing of energy input from dominant muscle groups during a lift. This information also provided a means of quantifying lifting technique. Comparisons of segment energy changes determined by the two methods were satisfactory but could likely be improved by employing more sophisticated data-smoothing methods. The procedures used in this study could easily be applied to weight training and rehabilitative exercises, to help determine their efficacy in producing desired results, or to ergonomic situations where a more detailed understanding of the demands made on the body during lifting tasks would be useful.
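
    The joint energy-transfer quantity described above can be written out directly. This is a minimal planar (2-D) sketch of the stated formula: translational power from the scalar product of net joint force and joint velocity, plus rotational power from net muscular joint torque times segment angular velocity, integrated over the 0.04-s film interval.

```python
import numpy as np

def joint_energy_flow(force, joint_vel, torque, seg_ang_vel, dt=0.04):
    """Energy (J) transferred across a joint during one film interval.

    force: net joint force vector (N); joint_vel: absolute joint velocity
    vector (m/s); torque: net muscular joint torque (N*m); seg_ang_vel:
    absolute segment angular velocity (rad/s)."""
    power = float(np.dot(force, joint_vel)) + float(torque * seg_ang_vel)
    return power * dt
```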

  19. Shot boundary detection and label propagation for spatio-temporal video segmentation

    NASA Astrophysics Data System (ADS)

    Piramanayagam, Sankaranaryanan; Saber, Eli; Cahill, Nathan D.; Messinger, David

    2015-02-01

    This paper proposes a two-stage algorithm for streaming video segmentation. In the first stage, shot boundaries are detected within a window of frames by comparing the dissimilarity between 2-D segmentations of each frame. In the second stage, the 2-D segments are propagated across the window of frames in both the spatial and temporal directions. The window is moved across the video to find all shot transitions and obtain spatio-temporal segments simultaneously. As opposed to techniques that operate on the entire video, the proposed approach consumes significantly less memory and enables segmentation of lengthy videos. We tested our segmentation-based shot detection method on the TRECVID 2007 video dataset and compared it with a block-based technique. Cut detection results on the TRECVID 2007 dataset indicate that our algorithm is comparable to the best of the block-based methods. The streaming video segmentation routine also achieves promising results on a challenging video segmentation benchmark database.
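
    A cut detector in the same spirit can be sketched as below. One substitution to note: the paper compares dissimilarity between 2-D *segmentations* of frames, while this minimal stand-in compares normalized intensity histograms of consecutive frames and flags a boundary when the histogram distance exceeds a threshold.

```python
import numpy as np

def detect_cuts(frames, bins=16, threshold=0.5):
    """Flag a shot boundary between consecutive frames when half the L1
    distance between their normalized intensity histograms exceeds
    `threshold` (a value in [0, 1])."""
    cuts, prev = [], None
    for idx, frame in enumerate(frames):
        h, _ = np.histogram(frame, bins=bins, range=(0, 256))
        h = h / max(h.sum(), 1)
        if prev is not None and 0.5 * np.abs(h - prev).sum() > threshold:
            cuts.append(idx)          # cut lies between frames idx-1 and idx
        prev = h
    return cuts
```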

  20. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Li; Gao, Yaozong; Shi, Feng

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of the CBCT image is an essential step in generating three-dimensional (3D) models for the diagnosis and treatment planning of patients with CMF deformities. However, due to poor image quality, including a very low signal-to-noise ratio and widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: The authors propose a method for fully automated CBCT segmentation that uses patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject, and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset of 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparison with the traditional registration strategy and a population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy in comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method using patch-based sparse representation and convex optimization, which achieves considerably accurate segmentation results on the 15-patient CBCT dataset.
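
    Patch-based label propagation of the kind used above can be sketched in a much-simplified form. Two substitutions to be clear about: Gaussian patch-similarity weights stand in for the paper's sparse coding, and a single voxel is labeled by a weighted vote over atlas patches rather than by the full convex segmentation framework.

```python
import numpy as np

def propagate_label(target_patch, atlas_patches, atlas_labels, sigma=1.0):
    """Label one voxel by comparing its surrounding patch with patches
    drawn from aligned atlases and taking a similarity-weighted vote."""
    t = np.asarray(target_patch, float).ravel()
    # Gaussian similarity weight per atlas patch
    w = np.array([np.exp(-np.sum((t - np.asarray(p, float).ravel()) ** 2)
                         / (2.0 * sigma ** 2)) for p in atlas_patches])
    labels = np.asarray(atlas_labels)
    scores = {lab: w[labels == lab].sum() for lab in np.unique(labels)}
    return max(scores, key=scores.get)     # label with the largest vote mass
```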
