DOE Office of Scientific and Technical Information (OSTI.GOV)
McCarroll, R; UT Health Science Center, Graduate School of Biomedical Sciences, Houston, TX; Beadle, B
Purpose: To investigate and validate the use of an independent deformable-based contouring algorithm for automatic verification of auto-contoured structures in the head and neck, towards fully automated treatment planning. Methods: Two independent automatic contouring algorithms [(1) Eclipse's Smart Segmentation followed by pixel-wise majority voting, (2) an in-house multi-atlas based method] were used to create contours of 6 normal structures of 10 head-and-neck patients. After rating by a radiation oncologist, the higher-performing algorithm was selected as the primary contouring method, and the other was used for automatic verification of the primary. To determine the ability of the verification algorithm to detect incorrect contours, contours from the primary method were shifted by 0.5 to 2 cm. Using a logit model, the structure-specific minimum detectable shift was identified. The models were then applied to a set of twenty different patients and the sensitivity and specificity of the models were verified. Results: Per physician rating, the multi-atlas method (4.8/5 point scale, with 3 rated as generally acceptable for planning purposes) was selected as primary and the Eclipse-based method (3.5/5) for verification. Mean distance to agreement and true positive rate were selected as covariates in an optimized logit model. These models, when applied to a group of twenty different patients, indicated that shifts could be detected at 0.5 cm (brain), 0.75 cm (mandible, cord), 1 cm (brainstem, cochlea), or 1.25 cm (parotid), with sensitivity and specificity greater than 0.95. If the sensitivity and specificity constraints are relaxed to 0.9, the detectable shifts of the mandible and brainstem were reduced by 0.25 cm. These shifts represent additional safety margins which might be considered if auto-contours are used for automatic treatment planning without physician review. Conclusion: Automatically contoured structures can be automatically verified.
This fully automated process could be used to flag auto-contours for special review or used with safety margins in a fully automatic treatment planning system.
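The verification step above hinges on a logit (logistic) model that maps a contour-comparison covariate to the probability that the primary contour is wrong. A minimal single-covariate sketch fitted by gradient descent; the distance-to-agreement values and labels below are hypothetical, not the paper's data or coefficients:

```python
import numpy as np

def fit_logit(x, y, lr=0.1, n_iter=5000):
    """Fit P(error) = sigmoid(b0 + b1*x) by gradient descent on the
    negative log-likelihood (a minimal stand-in for a stats-package fit)."""
    b0, b1 = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
        b0 -= lr * np.mean(p - y)        # gradient w.r.t. intercept
        b1 -= lr * np.mean((p - y) * x)  # gradient w.r.t. slope
    return b0, b1

# Hypothetical training data: mean distance-to-agreement (cm) between the
# primary and verification contours, labelled 1 if the primary contour was
# deliberately shifted (incorrect) and 0 otherwise.
mda = np.array([0.1, 0.2, 0.3, 0.4, 0.9, 1.1, 1.3, 1.6])
shifted = np.array([0, 0, 0, 0, 1, 1, 1, 1])

b0, b1 = fit_logit(mda, shifted)
prob = lambda x: 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
print(prob(0.2) < 0.5, prob(1.4) > 0.5)  # small disagreement passes, large flags
```

In the paper the model uses two covariates (mean distance to agreement and true positive rate) and is thresholded to achieve the stated sensitivity/specificity; the one-covariate version only illustrates the mechanism.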
Automated consensus contour building for prostate MRI.
Khalvati, Farzad
2014-01-01
Inter-observer variability is the lack of agreement among clinicians in contouring a given organ or tumour in a medical image. This variability is a source of uncertainty in radiation treatment planning. The consensus contour of a given case, proposed to reduce this variability, is generated by combining the manually generated contours of several clinicians. However, having access to several clinicians (e.g., radiation oncologists) to generate a consensus contour for one patient is costly. This paper presents an algorithm that automatically generates a consensus contour for a given case using the atlases of different clinicians. The algorithm was applied to prostate MR images of 15 patients manually contoured by 5 clinicians. The automatic consensus contours were compared to manual consensus contours, achieving a median Dice similarity coefficient (DSC) of 88%.
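The core consensus operation can be sketched as per-pixel majority voting over the clinicians' binary masks, scored against any single rater with the Dice similarity coefficient. The toy 5x5 masks below are illustrative only, not the paper's data:

```python
import numpy as np

def majority_vote(masks):
    """Per-pixel majority vote over binary masks (one per clinician)."""
    stack = np.stack(masks)
    return (stack.sum(axis=0) * 2 > len(masks)).astype(np.uint8)

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Three hypothetical rater masks that disagree at the boundary.
m1 = np.zeros((5, 5), np.uint8); m1[1:4, 1:4] = 1
m2 = np.zeros((5, 5), np.uint8); m2[1:4, 1:5] = 1
m3 = np.zeros((5, 5), np.uint8); m3[2:4, 1:4] = 1

consensus = majority_vote([m1, m2, m3])
print(dice(consensus, m1))
```

The paper's contribution is replacing the manual masks with atlas-propagated ones per clinician before this combination step; the voting itself is unchanged.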
Generic and robust method for automatic segmentation of PET images using an active contour model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhuang, Mingzan
Purpose: Although positron emission tomography (PET) images have shown potential to improve the accuracy of targeting in radiation therapy planning and assessment of response to treatment, the boundaries of tumors are not easily distinguishable from surrounding normal tissue owing to the low spatial resolution and inherently noisy characteristics of PET images. The objective of this study is to develop a generic and robust method for automatic delineation of tumor volumes using an active contour model and to evaluate its performance using phantom and clinical studies. Methods: MASAC, a method for automatic segmentation using an active contour model, incorporates histogram fuzzy C-means clustering and localized and textural information to constrain the active contour to detect boundaries in an accurate and robust manner. Moreover, the lattice Boltzmann method is used as an alternative approach for solving the level set equation to make it faster and suitable for parallel programming. Twenty simulated phantom studies and 16 clinical studies, including six cases of pharyngolaryngeal squamous cell carcinoma and ten cases of non-small cell lung cancer, were included to evaluate its performance. The proposed method was also compared with the contourlet-based active contour algorithm (CAC) and Schaefer's thresholding method (ST). The relative volume error (RE), Dice similarity coefficient (DSC), and classification error (CE) metrics were used to analyze the results quantitatively. Results: For the simulated phantom studies (PSs), MASAC and CAC provide similar segmentations of the different lesions, while ST fails to achieve reliable results. For the clinical datasets (CSs; 2 cases with connected high-uptake regions excluded), CAC provides the lowest mean RE (−8.38% ± 27.49%), while MASAC achieves the best mean DSC (0.71 ± 0.09) and mean CE (53.92% ± 12.65%).
MASAC could reliably quantify the different types of lesions assessed in this work with good accuracy, resulting in a mean RE of −13.35% ± 11.87% and −11.15% ± 23.66%, a mean DSC of 0.89 ± 0.05 and 0.71 ± 0.09, and a mean CE of 19.19% ± 7.89% and 53.92% ± 12.65%, for PSs and CSs, respectively. Conclusions: The authors' results demonstrate that the developed novel PET segmentation algorithm is applicable to the various types of lesions in this study and is capable of producing accurate and consistent target volume delineations, potentially reducing the intraobserver and interobserver variabilities observed with manual delineation and improving accuracy in treatment planning and outcome evaluation.
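The three reported metrics are straightforward to compute from binary masks. A sketch using one common definition of classification error (the paper's exact definition may differ); the 8x8 "lesion" is invented:

```python
import numpy as np

def segmentation_metrics(seg, ref):
    """Relative volume error (RE), Dice coefficient (DSC) and classification
    error (CE). CE here is (false positives + false negatives) / reference
    volume, in percent -- one common convention, assumed rather than taken
    from the paper."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    re = 100.0 * (seg.sum() - ref.sum()) / ref.sum()
    dsc = 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())
    fp = np.logical_and(seg, ~ref).sum()
    fn = np.logical_and(~seg, ref).sum()
    ce = 100.0 * (fp + fn) / ref.sum()
    return re, dsc, ce

ref = np.zeros((8, 8), bool); ref[2:6, 2:6] = True  # 16-voxel reference lesion
seg = np.zeros((8, 8), bool); seg[2:6, 2:5] = True  # undersegmented: 12 voxels
re, dsc, ce = segmentation_metrics(seg, ref)
print(round(re, 1), round(dsc, 3), round(ce, 1))  # -25.0 0.857 25.0
```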
NASA Astrophysics Data System (ADS)
Ahlers, Volker; Weigl, Paul; Schachtzabel, Hartmut
2005-04-01
Due to the increasing demand for high-quality ceramic crowns and bridges, the CAD/CAM-based production of dental restorations has been a subject of intensive research during the last fifteen years. A prerequisite for the efficient processing of the 3D measurement of prepared teeth with a minimal amount of user interaction is the automatic determination of the preparation line, which defines the sealing margin between the restoration and the prepared tooth. Current dental CAD/CAM systems mostly require the interactive definition of the preparation line by the user, at least by means of giving a number of start points. Previous approaches to the automatic extraction of the preparation line rely on single contour detection algorithms. In contrast, we use a combination of different contour detection algorithms to find several independent potential preparation lines from a height profile of the measured data. The different algorithms (gradient-based, contour-based, and region-based) show their strengths and weaknesses in different clinical situations. A classifier consisting of three stages (range check, decision tree, support vector machine), which is trained by human experts with real-world data, finally decides which is the correct preparation line. In a test with 101 clinical preparations, a success rate of 92.0% has been achieved. Thus the combination of different contour detection algorithms yields a reliable method for the automatic extraction of the preparation line, which enables the setup of a turn-key dental CAD/CAM process chain with a minimal amount of interactive screen work.
Automatic classification of killer whale vocalizations using dynamic time warping.
Brown, Judith C; Miller, Patrick J O
2007-08-01
A set of killer whale sounds from Marineland were recently classified automatically [Brown et al., J. Acoust. Soc. Am. 119, EL34-EL40 (2006)] into call types using dynamic time warping (DTW), multidimensional scaling, and k-means clustering to give near-perfect agreement with a perceptual classification. Here the effectiveness of four DTW algorithms on a larger and much more challenging set of calls by Northern Resident whales is examined, with each call consisting of two independently modulated pitch contours and having considerable overlap in contours for several of the perceptual call types. Classification results are given for each of the four algorithms for the low frequency contour (LFC), the high frequency contour (HFC), their derivatives, and weighted sums of the distances corresponding to LFC with HFC, LFC with its derivative, and HFC with its derivative. The best agreement with the perceptual classification was 90%, attained by the Sakoe-Chiba algorithm for the low frequency contours alone.
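Dynamic time warping itself is a short dynamic program. The sketch below omits the Sakoe-Chiba band constraint used in the best-performing variant, and the pitch contours are invented for illustration:

```python
import numpy as np

def dtw(a, b):
    """Classic DTW distance between two pitch contours, with absolute-value
    local cost and unconstrained unit steps."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two hypothetical low-frequency contours: same shape, different timing.
c1 = [100, 120, 150, 150, 130, 110]
c2 = [100, 100, 120, 150, 130, 110]
print(dtw(c1, c2) < dtw(c1, [200, 210, 220, 230, 240, 250]))
```

Classification then proceeds by clustering the pairwise DTW distance matrix, which is what the multidimensional scaling and k-means steps in the abstract operate on.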
Generation algorithm of craniofacial structure contour in cephalometric images
NASA Astrophysics Data System (ADS)
Mondal, Tanmoy; Jain, Ashish; Sardana, H. K.
2010-02-01
Anatomical structure tracing on cephalograms is a significant step in cephalometric analysis. Computerized cephalometric analysis involves both manual and automatic approaches; the manual approach is limited in accuracy and repeatability. In this paper we develop and test a novel method for automatic localization of craniofacial structure based on edges detected in the region of interest. Based on the grey-scale features of the different regions of cephalometric images, an algorithm for obtaining the tissue contour is put forward. Using edge detection with a specific threshold, an improved bidirectional contour-tracing approach is proposed: after interactive selection of the starting edge pixels, the tracking process searches repeatedly for an edge pixel in the neighborhood of the previously found edge pixel to segment the image, from which the craniofacial structures are obtained. The effectiveness of the algorithm is demonstrated by preliminary experimental results obtained with the proposed method.
Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano
2016-07-07
Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method, according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contour. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak to background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contour, to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology.
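The selection logic can be caricatured as: predict each candidate method's Dice score from image features, then run the method with the highest prediction. The two hand-built predictors below stand in for the learned decision trees; the feature names, thresholds and scores are illustrative assumptions, not the published model:

```python
# Hypothetical per-method accuracy predictors (normally learned decision
# trees over tumour volume, peak-to-background SUV ratio and texture).

def predict_dsc_thresholding(volume_ml, peak_to_bg):
    # simple thresholding tends to do well on small, high-contrast lesions
    if peak_to_bg > 6.0:
        return 0.85 if volume_ml < 20 else 0.75
    return 0.55

def predict_dsc_active_contour(volume_ml, peak_to_bg):
    # active contours cope better with low contrast and larger volumes
    if peak_to_bg > 6.0:
        return 0.80
    return 0.70 if volume_ml >= 20 else 0.60

METHODS = {
    "thresholding": predict_dsc_thresholding,
    "active_contour": predict_dsc_active_contour,
}

def select_method(volume_ml, peak_to_bg):
    """Pick the method whose predicted DSC is highest for this image."""
    return max(METHODS, key=lambda m: METHODS[m](volume_ml, peak_to_bg))

print(select_method(10, 8.0))  # small, high-contrast lesion
print(select_method(40, 3.0))  # large, low-contrast lesion
```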
Evaluation of an automatic segmentation algorithm for definition of head and neck organs at risk.
Thomson, David; Boylan, Chris; Liptrot, Tom; Aitkenhead, Adam; Lee, Lip; Yap, Beng; Sykes, Andrew; Rowbottom, Carl; Slevin, Nicholas
2014-08-03
The accurate definition of organs at risk (OARs) is required to fully exploit the benefits of intensity-modulated radiotherapy (IMRT) for head and neck cancer. However, manual delineation is time-consuming and there is considerable inter-observer variability. This is pertinent as function-sparing and adaptive IMRT have increased the number and frequency of delineation of OARs. We evaluated the accuracy and potential time saving of the Smart Probabilistic Image Contouring Engine (SPICE) automatic segmentation tool for defining OARs for salivary-, swallowing- and cochlea-sparing IMRT. Five clinicians recorded the time to delineate five organs at risk (parotid glands, submandibular glands, larynx, pharyngeal constrictor muscles and cochleae) for each of 10 CT scans. SPICE was then used to define these structures. The acceptability of SPICE contours was initially determined by visual inspection and the total time to modify them was recorded per scan. The Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm created a reference standard from all clinician contours. Clinician, SPICE and modified contours were compared against STAPLE by the Dice similarity coefficient (DSC) and mean/maximum distance to agreement (DTA). For all investigated structures, SPICE contours were less accurate than manual contours. However, for the parotid/submandibular glands they were acceptable (median DSC: 0.79/0.80; mean, maximum DTA: 1.5 mm, 14.8 mm/0.6 mm, 5.7 mm). Modified SPICE contours were also less accurate than manual contours. The utilisation of SPICE did not result in time saving or improved efficiency. Improvements in the accuracy of automatic segmentation for head and neck OARs would be worthwhile and are required before routine clinical implementation.
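Distance to agreement, used alongside DSC above, can be sketched for point-sampled contours as a brute-force nearest-point distance (a 2D stand-in for the 3D surface measure used in the paper); the square contours below are invented:

```python
import numpy as np

def distance_to_agreement(pts_a, pts_b):
    """Mean and maximum distance from each point of contour A to the
    nearest point of contour B (brute force, assuming modest point counts)."""
    a = np.asarray(pts_a, float)
    b = np.asarray(pts_b, float)
    # pairwise Euclidean distances, then nearest neighbour per point of A
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return d.mean(), d.max()

# Two hypothetical contours: B is A rigidly shifted by 1 mm in x.
contour_a = [(0, 0), (10, 0), (10, 10), (0, 10)]
contour_b = [(1, 0), (11, 0), (11, 10), (1, 10)]
mean_dta, max_dta = distance_to_agreement(contour_a, contour_b)
print(mean_dta, max_dta)  # both 1.0 for this rigid 1 mm shift
```

Note this one-sided measure is not symmetric; clinical tools typically report it in both directions or symmetrize it.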
Zhen, Xin; Zhou, Ling-hong; Lu, Wen-ting; Zhang, Shu-xu; Zhou, Lu
2010-12-01
To validate the efficiency and accuracy of an improved Demons deformable registration algorithm and evaluate its application to contour recontouring in 4D-CT. To increase the additional Demons force and reallocate the bilateral forces to accelerate convergence, we propose a novel energy function as the similarity measure and use a BFGS method for optimization, avoiding the need to specify the number of iterations. Mathematically transformed deformable CT images and a home-made deformable phantom were used to validate the accuracy of the improved algorithm, and its effectiveness for contour recontouring was tested. The improved algorithm showed relatively high registration accuracy and speed when compared with the classic Demons algorithm and an optical-flow based method. Visual inspection showed that the positions and shapes of the deformed contours agreed well with the physician-drawn contours. Deformable registration is a key technique in 4D-CT, and this improved Demons algorithm for contour recontouring can significantly reduce the workload of the physicians. The registration accuracy of this method proves to be sufficient for clinical needs.
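For reference, the classic (Thirion) Demons force that this work extends computes, at each voxel, a displacement driven by the intensity difference and the fixed-image gradient. A 1D sketch of the passive force only; the paper's bilateral-force reallocation and BFGS optimization are omitted:

```python
import numpy as np

def demons_force(fixed, moving):
    """Classic Demons passive force on a 1D image:
    u = (m - f) * grad(f) / (|grad f|^2 + (m - f)^2)."""
    f = np.asarray(fixed, float)
    m = np.asarray(moving, float)
    grad_f = np.gradient(f)
    diff = m - f
    denom = grad_f ** 2 + diff ** 2
    safe = np.where(denom == 0, 1.0, denom)  # avoid 0/0 where the image is flat
    return np.where(denom > 0, diff * grad_f / safe, 0.0)

# A step edge in the moving image, shifted one voxel right of the fixed one.
fixed = np.array([0, 0, 0, 1, 1, 1, 1, 1], float)
moving = np.array([0, 0, 0, 0, 1, 1, 1, 1], float)
u = demons_force(fixed, moving)
print(u)  # nonzero only where the images disagree across an edge
```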
Zakeri, Fahimeh Sadat; Setarehdan, Seyed Kamaledin; Norouzi, Somayye
2017-10-01
Segmentation of the arterial wall boundaries from intravascular ultrasound images is an important image processing task for quantifying arterial wall characteristics such as shape, area, thickness and eccentricity. Since manual segmentation of these boundaries is a laborious and time-consuming procedure, many researchers have attempted to develop (semi-)automatic segmentation techniques as a powerful tool for educational and clinical purposes, but as yet no clinically approved method is on the market. This paper presents a deterministic-statistical strategy for automatic media-adventitia border detection by a fourfold algorithm. First, a smoothed initial contour is extracted based on classification in the sparse representation framework, combined with the dynamic directional convolution vector field. Next, an active contour model is utilized to propagate the initial contour toward the borders of interest. Finally, the extracted contour is refined in the leakage, side-branch opening and calcification regions based on the image texture patterns. The performance of the proposed algorithm is evaluated by comparing the results to borders manually traced by an expert on 312 different IVUS images obtained from four different patients. The statistical analysis of the results demonstrates the efficiency of the proposed method in media-adventitia border detection, with adequate consistency in the leakage and calcification regions.
Automatic segmentation of equine larynx for diagnosis of laryngeal hemiplegia
NASA Astrophysics Data System (ADS)
Salehin, Md. Musfequs; Zheng, Lihong; Gao, Junbin
2013-10-01
This paper presents an automatic segmentation method for delineation of the clinically significant contours of the equine larynx from an endoscopic image. These contours are used to diagnose the most common disease of the equine larynx, laryngeal hemiplegia. In this study, a hierarchically structured contour map is obtained with the state-of-the-art segmentation algorithm gPb-OWT-UCM. The conic-shaped outer boundary of the equine larynx is extracted based on Pascal's theorem. Lastly, the Hough transform is applied to detect lines related to the edges of the vocal folds. The experimental results show that the proposed approach extracts the targeted contours of the equine larynx better than the gPb-OWT-UCM method alone.
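The final line-detection step is the standard Hough transform: each edge pixel votes for all (rho, theta) lines passing through it, and peaks in the accumulator correspond to detected edges. A minimal sketch with an invented vertical vocal-fold edge:

```python
import numpy as np

def hough_lines(points, n_theta=180, rho_res=1.0, size=100):
    """Minimal Hough transform: vote in (rho, theta) space and return the
    cell with the most votes, as (rho, theta in degrees)."""
    thetas = np.deg2rad(np.arange(n_theta))
    max_rho = size * np.sqrt(2)
    n_rho = int(2 * max_rho / rho_res) + 1
    acc = np.zeros((n_rho, n_theta), int)
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rhos + max_rho) / rho_res).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return r * rho_res - max_rho, np.rad2deg(thetas[t])

# Edge pixels of a hypothetical vertical edge at x = 30.
edge = [(30, y) for y in range(50)]
rho, theta_deg = hough_lines(edge)
print(rho, theta_deg)  # rho within half a bin of 30, theta 0 degrees
```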
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoang Duc, Albert K., E-mail: albert.hoangduc.ucl@gmail.com; McClelland, Jamie; Modat, Marc
Purpose: The aim of this study was to assess whether clinically acceptable segmentations of organs at risk (OARs) in head and neck cancer can be obtained automatically and efficiently using the novel "similarity and truth estimation for propagated segmentations" (STEPS) algorithm compared to the traditional "simultaneous truth and performance level estimation" (STAPLE) algorithm. Methods: First, 6 OARs were contoured by 2 radiation oncologists in a dataset of 100 patients with head and neck cancer on planning computed tomography images. Each image in the dataset was then automatically segmented with STAPLE and STEPS using those manual contours. The Dice similarity coefficient (DSC) was then used to compare the accuracy of these automatic methods. Second, in a blind experiment, three trained physicians graded manual and automatic segmentations into one of the following three grades: clinically acceptable as determined by universal delineation guidelines (grade A), reasonably acceptable for clinical practice upon manual editing (grade B), and not acceptable (grade C). Finally, STEPS segmentations graded B were selected and one of the physicians manually edited them to grade A. Editing time was recorded. Results: Significant improvements in DSC can be seen when using the STEPS algorithm on large structures such as the brainstem, spinal canal, and left/right parotid compared to the STAPLE algorithm (all p < 0.001). In addition, across all three trained physicians, manual and STEPS segmentation grades were not significantly different for the brainstem, spinal canal, parotid (right/left), and optic chiasm (all p > 0.100). In contrast, STEPS segmentation grades were lower for the eyes (p < 0.001). Across all OARs and all physicians, STEPS produced segmentations graded as well as manual contouring at a rate of 83%, giving a lower bound on this rate of 80% with 95% confidence.
Reduction in manual interaction time was on average 61% and 93% when automatic segmentations did and did not, respectively, require manual editing. Conclusions: The STEPS algorithm showed better performance than the STAPLE algorithm in segmenting OARs for radiotherapy of the head and neck. It can automatically produce clinically acceptable segmentation of OARs, with results as relevant as manual contouring for the brainstem, spinal canal, the parotids (left/right), and optic chiasm. A substantial reduction in manual labor was achieved when using STEPS even when manual editing was necessary.
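STAPLE, the baseline in this comparison, estimates each rater's sensitivity and specificity jointly with the hidden true segmentation by expectation-maximization. A minimal binary sketch on invented 6x6 masks, without the spatial priors of production implementations or STEPS's locally ranked atlas fusion:

```python
import numpy as np

def staple(segmentations, n_iter=30):
    """Minimal binary STAPLE: EM-estimate rater sensitivities p and
    specificities q together with the voxel-wise posterior W that each
    voxel is truly foreground."""
    D = np.stack([s.ravel().astype(float) for s in segmentations])  # raters x voxels
    prior = D.mean()                      # fixed prior P(true = 1)
    p = np.full(D.shape[0], 0.9)          # initial sensitivities
    q = np.full(D.shape[0], 0.9)          # initial specificities
    for _ in range(n_iter):
        # E-step: posterior probability that each voxel is foreground
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 1, 1 - q[:, None], q[:, None]), axis=0)
        W = a / (a + b + 1e-12)
        # M-step: re-estimate each rater's performance against W
        p = (D * W).sum(axis=1) / (W.sum() + 1e-12)
        q = ((1 - D) * (1 - W)).sum(axis=1) / ((1 - W).sum() + 1e-12)
    return W.reshape(segmentations[0].shape), p, q

truth = np.zeros((6, 6)); truth[1:5, 1:5] = 1
r1 = truth.copy()                        # accurate rater
r2 = truth.copy(); r2[1, 1] = 0          # misses one voxel
r3 = np.zeros((6, 6)); r3[1:5, 1:6] = 1  # over-segments an extra column
W, p, q = staple([r1, r2, r3])
print((W > 0.5).sum())
```

Thresholding W at 0.5 recovers the consensus; the over-segmenting rater is automatically down-weighted through its lower estimated specificity.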
NASA Astrophysics Data System (ADS)
Nanayakkara, Nuwan D.; Samarabandu, Jagath; Fenster, Aaron
2006-04-01
Estimation of prostate location and volume is essential in determining a dose plan for ultrasound-guided brachytherapy, a common prostate cancer treatment. However, manual segmentation is difficult, time consuming and prone to variability. In this paper, we present a semi-automatic discrete dynamic contour (DDC) model based image segmentation algorithm, which effectively combines a multi-resolution model refinement procedure together with the domain knowledge of the image class. The segmentation begins on a low-resolution image by defining a closed DDC model by the user. This contour model is then deformed progressively towards higher resolution images. We use a combination of a domain knowledge based fuzzy inference system (FIS) and a set of adaptive region based operators to enhance the edges of interest and to govern the model refinement using a DDC model. The automatic vertex relocation process, embedded into the algorithm, relocates deviated contour points back onto the actual prostate boundary, eliminating the need of user interaction after initialization. The accuracy of the prostate boundary produced by the proposed algorithm was evaluated by comparing it with a manually outlined contour by an expert observer. We used this algorithm to segment the prostate boundary in 114 2D transrectal ultrasound (TRUS) images of six patients scheduled for brachytherapy. The mean distance between the contours produced by the proposed algorithm and the manual outlines was 2.70 ± 0.51 pixels (0.54 ± 0.10 mm). We also showed that the algorithm is insensitive to variations of the initial model and parameter values, thus increasing the accuracy and reproducibility of the resulting boundaries in the presence of noise and artefacts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumarasiri, Akila, E-mail: akumara1@hfhs.org; Siddiqui, Farzan; Liu, Chang
2014-12-15
Purpose: To evaluate the clinical potential of deformable image registration (DIR)-based automatic propagation of physician-drawn contours from a planning CT to midtreatment CT images for head and neck (H and N) adaptive radiotherapy. Methods: Ten H and N patients, each with a planning CT (CT1) and a subsequent CT (CT2) taken approximately 3–4 weeks into treatment, were considered retrospectively. Clinically relevant organs and targets were manually delineated by a radiation oncologist on both sets of images. Four commercial DIR algorithms, two B-spline-based and two Demons-based, were used to deform CT1 and the relevant contour sets onto corresponding CT2 images. Agreement of the propagated contours with manually drawn contours on CT2 was visually rated by four radiation oncologists on a scale from 1 to 5, the volume overlap was quantified using Dice coefficients, and a distance analysis was done using center of mass (CoM) displacements and Hausdorff distances (HDs). Performance of these four commercial algorithms was validated using a parameter-optimized Elastix DIR algorithm. Results: All algorithms attained Dice coefficients of >0.85 for organs with clear boundaries and those with volumes >9 cm³. Organs with volumes <3 cm³ and/or those with poorly defined boundaries showed Dice coefficients of ∼0.5–0.6. For the propagation of small organs (<3 cm³), the B-spline-based algorithms showed higher mean Dice values (Dice = 0.60) than the Demons-based algorithms (Dice = 0.54). For the gross and planning target volumes, the respective mean Dice coefficients were 0.8 and 0.9. There was no statistically significant difference in the Dice coefficients, CoM, or HD among the investigated DIR algorithms.
The mean radiation oncologist visual scores of the four algorithms ranged from 3.2 to 3.8, which indicated that the quality of transferred contours was "clinically acceptable with minor modification or major modification in a small number of contours." Conclusions: Use of DIR-based contour propagation in the routine clinical setting is expected to increase the efficiency of H and N replanning, reducing the amount of time needed for manual target and organ delineations.
Dilated contour extraction and component labeling algorithm for object vector representation
NASA Astrophysics Data System (ADS)
Skourikhine, Alexei N.
2005-08-01
Object boundary extraction from binary images is important for many applications, e.g., image vectorization, automatic interpretation of images containing segmentation results, printed and handwritten documents and drawings, maps, and AutoCAD drawings. Efficient and reliable contour extraction is also important for pattern recognition due to its impact on shape-based object characterization and recognition. The presented contour tracing and component labeling algorithm produces dilated (sub-pixel) contours associated with corresponding regions. The algorithm has the following features: (1) it always produces non-intersecting, non-degenerate contours, including the case of one-pixel wide objects; (2) it associates the outer and inner (i.e., around hole) contours with the corresponding regions during the process of contour tracing in a single pass over the image; (3) it maintains desired connectivity of object regions as specified by 8-neighbor or 4-neighbor connectivity of adjacent pixels; (4) it avoids degenerate regions in both background and foreground; (5) it allows an easy augmentation that will provide information about the containment relations among regions; (6) it has a time complexity that is dominantly linear in the number of contour points. This early component labeling (contour-region association) enables subsequent efficient object-based processing of the image information.
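Feature (3) above, selectable 4- or 8-neighbour connectivity, is the crux of any component-labeling scheme. A minimal flood-fill labeling sketch (the paper's single-pass contour-region association is omitted); the diagonal-touching test image shows how the connectivity choice changes the region count:

```python
from collections import deque

def label_components(img, connectivity=8):
    """Label foreground regions of a binary image by flood fill,
    with 4- or 8-neighbour connectivity."""
    if connectivity == 8:
        nbrs = [(-1,-1), (-1,0), (-1,1), (0,-1), (0,1), (1,-1), (1,0), (1,1)]
    else:
        nbrs = [(-1,0), (1,0), (0,-1), (0,1)]
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for r in range(h):
        for c in range(w):
            if img[r][c] and not labels[r][c]:
                count += 1
                queue = deque([(r, c)])
                labels[r][c] = count
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in nbrs:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return labels, count

# Two diagonally touching pixels: one region under 8-connectivity, two under 4.
img = [[1, 0, 0],
       [0, 1, 0],
       [0, 0, 0]]
print(label_components(img, 8)[1], label_components(img, 4)[1])  # 1 2
```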
Film grain synthesis and its application to re-graining
NASA Astrophysics Data System (ADS)
Schallauer, Peter; Mörzinger, Roland
2006-01-01
Digital film restoration and special-effects compositing increasingly require automatic procedures for movie re-graining, since missing or inhomogeneous grain decreases perceived quality. For the purpose of grain synthesis, an existing texture synthesis algorithm has been evaluated and optimized. We show that this algorithm can produce synthetic grain which is perceptually similar to a given grain template, has high spatial and temporal variation, and can be applied to multi-spectral images. Furthermore, a re-graining application framework is proposed, which synthesizes artificial grain from an input grain template and composites it with the original image content. Due to its modular approach, this framework supports manual as well as automatic re-graining applications. Two example applications are presented: one for re-graining an entire movie and one for fully automatic re-graining of image regions produced by restoration algorithms. The low computational cost of the proposed algorithms allows application in industrial-grade software.
Prostate contouring in MRI guided biopsy.
Vikal, Siddharth; Haker, Steven; Tempany, Clare; Fichtinger, Gabor
2009-03-27
With MRI possibly becoming a modality of choice for detection and staging of prostate cancer, fast and accurate outlining of the prostate is required in the volume of clinical interest. We present a semi-automatic algorithm that uses a priori knowledge of prostate shape to arrive at the final prostate contour. The contour of one slice is then used as the initial estimate in the neighboring slices. Thus we propagate the contour in 3D through steps of refinement in each slice. The algorithm makes only minimum assumptions about the prostate shape. A statistical shape model of the prostate contour in polar transform space is employed to narrow the search space. Further, shape guidance is implicitly imposed by allowing only plausible edge orientations using template matching. The algorithm does not require region homogeneity, a discriminative edge force, or any particular edge profile. Likewise, it makes no assumption on the imaging coils and pulse sequences used and it is robust to the patient's pose (supine, prone, etc.). The contour method was validated using expert segmentation on clinical MRI data. We recorded a mean absolute distance of 2.0 ± 0.6 mm and a Dice similarity coefficient of 0.93 ± 0.3 in midsection. The algorithm takes about 1 second per slice.
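The polar transform space in which the shape model operates represents a closed contour as radius versus angle about its centroid. A minimal sketch of that representation alone (the published model adds learned shape statistics on top of it); the circular "prostate" contour is synthetic:

```python
import numpy as np

def contour_to_polar(points, centroid=None):
    """Re-express a closed contour as (angle, radius) about its centroid --
    the polar-transform representation used by shape models of roughly
    star-shaped organs."""
    pts = np.asarray(points, float)
    c = pts.mean(axis=0) if centroid is None else np.asarray(centroid, float)
    d = pts - c
    theta = np.arctan2(d[:, 1], d[:, 0])
    r = np.hypot(d[:, 0], d[:, 1])
    order = np.argsort(theta)  # sort by angle for a well-defined profile
    return theta[order], r[order]

# A synthetic circular contour of radius 5 around (10, 10).
angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
contour = np.c_[10 + 5 * np.cos(angles), 10 + 5 * np.sin(angles)]
theta, r = contour_to_polar(contour)
print(round(r.min(), 3), round(r.max(), 3))  # a circle maps to constant radius 5
```

In this space a contour becomes a 1D radius profile, so shape statistics and plausibility checks reduce to comparisons of 1D curves.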
Contour classification in thermographic images for detection of breast cancer
NASA Astrophysics Data System (ADS)
Okuniewski, Rafał; Nowak, Robert M.; Cichosz, Paweł; Jagodziński, Dariusz; Matysiewicz, Mateusz; Neumann, Łukasz; Oleszkiewicz, Witold
2016-09-01
Thermographic images of the breast taken by the Braster device are uploaded into a web application, which uses different classification algorithms to decide automatically whether a patient should be examined more thoroughly. This article presents an approach to the task of classifying contours visible in thermographic breast images taken by the Braster device, in order to decide on the existence of cancerous tumors in the breast. It presents the results of the research conducted on different classification algorithms.
Automated extraction and classification of time-frequency contours in humpback vocalizations.
Ou, Hui; Au, Whitlow W L; Zurk, Lisa M; Lammers, Marc O
2013-01-01
A time-frequency contour extraction and classification algorithm was created to analyze humpback whale vocalizations. The algorithm automatically extracted contours of whale vocalization units by searching for gray-level discontinuities in the spectrogram images. The unit-to-unit similarity was quantified by cross-correlating the contour lines. A library of distinctive humpback units was then generated by applying an unsupervised, cluster-based learning algorithm. The purpose of this study was to provide a fast and automated feature selection tool to describe the vocal signatures of animal groups. This approach could benefit a variety of applications such as species description, identification, and evolution of song structures. The algorithm was tested on humpback whale song data recorded at various locations in Hawaii from 2002 to 2003. Results presented in this paper showed low probability of false alarm (0%-4%) under noisy environments with small boat vessels and snapping shrimp. The classification algorithm was tested on a controlled set of 30 units forming six unit types, and all the units were correctly classified. In a case study on humpback data collected in the Auau Chanel, Hawaii, in 2002, the algorithm extracted 951 units, which were classified into 12 distinctive types.
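The unit-to-unit similarity step can be illustrated with a small sketch. Assuming contours sampled on a common time grid, the peak of a normalized cross-correlation serves as the similarity score (the function name and synthetic units below are illustrative, not the authors' code):

```python
import numpy as np

def contour_similarity(c1, c2):
    """Peak normalized cross-correlation between two time-frequency
    contour lines (hypothetical stand-in for the paper's metric)."""
    a = (c1 - c1.mean()) / (c1.std() * len(c1))
    b = (c2 - c2.mean()) / c2.std()
    return float(np.max(np.correlate(a, b, mode="full")))

t = np.linspace(0, 1, 100)
up = 400 + 200 * t          # synthetic upsweep unit (Hz)
up_shift = 420 + 200 * t    # same shape, frequency-offset
down = 600 - 200 * t        # synthetic downsweep unit

# same-shape units correlate far more strongly than opposite sweeps
print(contour_similarity(up, up_shift), contour_similarity(up, down))
```

Mean removal makes the score invariant to a constant frequency offset, which is why the shifted upsweep still matches its template perfectly.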
SU-E-J-129: Atlas Development for Cardiac Automatic Contouring Using Multi-Atlas Segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, R; Yang, J; Pan, T
Purpose: To develop a set of atlases for automatic contouring of cardiac structures to determine heart radiation dose and the associated toxicity. Methods: Six thoracic cancer patients with both contrast and non-contrast CT images were acquired for this study. Eight radiation oncologists manually and independently delineated cardiac contours on the non-contrast CT by referring to the fused contrast CT and following the RTOG 1106 atlas contouring guideline. Fifteen regions of interest (ROIs) were delineated, including heart, four chambers, four coronary arteries, pulmonary artery and vein, inferior and superior vena cava, and ascending and descending aorta. Individual expert contours were fused using the simultaneous truth and performance level estimation (STAPLE) algorithm for each ROI and each patient. The fused contours became atlases for an in-house multi-atlas segmentation. Using a leave-one-out test, we generated auto-segmented contours for each ROI and each patient. The auto-segmented contours were compared with the fused contours using the Dice similarity coefficient (DSC) and the mean surface distance (MSD). Results: Inter-observer variability was not obvious for heart, chambers, and aorta but was large for other structures that were not clearly distinguishable on the CT image. The average DSC between individual expert contours and the fused contours was less than 50% for coronary arteries and pulmonary vein, and the average MSD was greater than 4.0 mm. The largest MSD of expert contours deviating from the fused contours was 2.5 cm. The mean DSC and MSD of auto-segmented contours were within one standard deviation of expert contouring variability except for the right coronary artery. The coronary arteries, vena cava, and pulmonary vein had DSC<70% and MSD>3.0 mm. Conclusion: A set of cardiac atlases was created for cardiac automatic contouring, the accuracy of which was comparable to the variability in expert contouring. However, substantial modification may be needed for auto-segmented contours of indistinguishable small structures.
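The two agreement measures used here are simple to state in code. A minimal NumPy sketch, assuming binary masks on a common grid; the surface distance is brute-force for clarity (production code would use distance transforms):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient of two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def mean_surface_distance(a, b, spacing=1.0):
    """Symmetric mean surface distance between two 2-D binary masks,
    via brute-force point-to-point distances (illustrative only)."""
    def surface(m):
        # boundary pixels: mask pixels with at least one background 4-neighbor
        pad = np.pad(m, 1)
        core = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:])
        return np.argwhere(m & ~core)
    pa, pb = surface(a), surface(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return spacing * 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

a = np.zeros((40, 40), bool); a[10:30, 10:30] = True
b = np.zeros((40, 40), bool); b[12:32, 10:30] = True   # 2-pixel shift
print(dice(a, b))          # overlap of the two squares
print(mean_surface_distance(a, b))
```

With a 2-pixel shift of a 20×20 square, Dice is 0.9 and the mean surface distance lies between 0 and the 2-pixel shift.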
Delpon, Grégory; Escande, Alexandre; Ruef, Timothée; Darréon, Julien; Fontaine, Jimmy; Noblet, Caroline; Supiot, Stéphane; Lacornerie, Thomas; Pasquier, David
2016-01-01
Automated atlas-based segmentation (ABS) algorithms have the potential to reduce the variability in volume delineation. Several vendors offer software packages that are mainly used for cranial, head and neck, and prostate cases. The present study compares the contours produced by a radiation oncologist to the contours computed by different automated ABS algorithms for prostate bed cases, including femoral heads, bladder, and rectum. Contour comparison was evaluated by different metrics such as volume ratio, Dice coefficient, and Hausdorff distance. Results depended on the volume of interest and showed some discrepancies between the different software packages. Automatic contours could be a good starting point for the delineation of organs since efficient editing tools are provided by the different vendors. They should become an important aid for organ-at-risk delineation in the next few years. PMID:27536556
Segmenting Bone Parts for Bone Age Assessment using Point Distribution Model and Contour Modelling
NASA Astrophysics Data System (ADS)
Kaur, Amandeep; Singh Mann, Kulwinder, Dr.
2018-01-01
Bone age assessment (BAA) is a task performed on radiographs by pediatricians in hospitals to predict final adult height and to diagnose growth disorders by monitoring skeletal development. The first routine step in building an automatic bone age assessment system is pre-processing of the bone X-rays so that a feature row can be constructed. In this research paper, an enhanced point distribution algorithm using contours has been implemented for segmenting bone parts following the well-established procedure of bone age assessment; this segmentation is helpful in building the feature row and, later on, in the construction of an automatic bone age assessment system. Implementation of the segmentation algorithm shows a high degree of accuracy, in terms of recall and precision, in segmenting bone parts from left-hand X-rays.
Giri, Maria Grazia; Cavedon, Carlo; Mazzarotto, Renzo; Ferdeghini, Marco
2016-05-01
The aim of this study was to implement a Dirichlet process mixture (DPM) model for automatic tumor edge identification on (18)F-fluorodeoxyglucose positron emission tomography ((18)F-FDG PET) images by optimizing the parameters on which the algorithm depends, to validate it experimentally, and to test its robustness. The DPM model belongs to the class of the Bayesian nonparametric models and uses the Dirichlet process prior for flexible nonparametric mixture modeling, without any preliminary choice of the number of mixture components. The DPM algorithm implemented in the statistical software package R was used in this work. The contouring accuracy was evaluated on several image data sets: on an IEC phantom (spherical inserts with diameter in the range 10-37 mm) acquired by a Philips Gemini Big Bore PET-CT scanner, using 9 different target-to-background ratios (TBRs) from 2.5 to 70; on a digital phantom simulating spherical/uniform lesions and tumors, irregular in shape and activity; and on 20 clinical cases (10 lung and 10 esophageal cancer patients). The influence of the DPM parameters on contour generation was studied in two steps. In the first one, only the IEC spheres having diameters of 22 and 37 mm and a sphere of the digital phantom (41.6 mm diameter) were studied by varying the main parameters until the diameter of the spheres was obtained within 0.2% of the true value. In the second step, the results obtained for this training set were applied to the entire data set to determine DPM based volumes of all available lesions. These volumes were compared to those obtained by applying already known algorithms (Gaussian mixture model and gradient-based) and to true values, when available. Only one parameter was found able to significantly influence segmentation accuracy (ANOVA test). This parameter was linearly connected to the uptake variance of the tested region of interest (ROI). 
In the first step of the study, a calibration curve was determined to automatically generate the optimal parameter from the variance of the ROI. This "calibration curve" was then applied to contour the whole data set. The accuracy (mean discrepancy between DPM model-based contours and reference contours) of volume estimation was below (1 ± 7)% on the whole data set (1 SD). The overlap between true and automatically segmented contours, measured by the Dice similarity coefficient, was 0.93 with a SD of 0.03. The proposed DPM model was able to accurately reproduce known volumes of FDG concentration, with high overlap between segmented and true volumes. For all the analyzed inserts of the IEC phantom, the algorithm proved to be robust to variations in radius and in TBR. The main advantage of this algorithm was that no setting of DPM parameters was required in advance, since the proper setting of the only parameter that could significantly influence the segmentation results was automatically related to the uptake variance of the chosen ROI. Furthermore, the algorithm did not need any preliminary choice of the optimum number of classes to describe the ROIs within PET images and no assumption about the shape of the lesion and the uptake heterogeneity of the tracer was required.
Automatic quantitative analysis of in-stent restenosis using FD-OCT in vivo intra-arterial imaging.
Mandelias, Kostas; Tsantis, Stavros; Spiliopoulos, Stavros; Katsakiori, Paraskevi F; Karnabatidis, Dimitris; Nikiforidis, George C; Kagadis, George C
2013-06-01
A new segmentation technique is implemented for automatic lumen area extraction and stent strut detection in intravascular optical coherence tomography (OCT) images for the purpose of quantitative analysis of in-stent restenosis (ISR). In addition, a user-friendly graphical user interface (GUI) is developed based on the employed algorithm toward clinical use. Four clinical datasets of frequency-domain OCT scans of the human femoral artery were analyzed. First, a segmentation method based on fuzzy C means (FCM) clustering and wavelet transform (WT) was applied toward inner luminal contour extraction. Subsequently, stent strut positions were detected by utilizing metrics derived from the local maxima of the wavelet transform into the FCM membership function. The inner lumen contour and the position of stent strut were extracted with high precision. Compared to manual segmentation by an expert physician, the automatic lumen contour delineation had an average overlap value of 0.917 ± 0.065 for all OCT images included in the study. The strut detection procedure achieved an overall accuracy of 93.80% and successfully identified 9.57 ± 0.5 struts for every OCT image. Processing time was confined to approximately 2.5 s per OCT frame. A new fast and robust automatic segmentation technique combining FCM and WT for lumen border extraction and strut detection in intravascular OCT images was designed and implemented. The proposed algorithm integrated in a GUI represents a step forward toward the employment of automated quantitative analysis of ISR in clinical practice.
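The FCM step at the core of the lumen segmentation can be sketched in a few lines. This is generic 1-D intensity clustering using the standard FCM update equations; the paper's method additionally folds wavelet-derived metrics into the membership function:

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy C-means on a 1-D feature vector (e.g. pixel
    intensities). A generic sketch, not the authors' exact variant."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                      # fuzzy memberships, columns sum to 1
    for _ in range(iters):
        w = u ** m
        centers = (w @ x) / w.sum(axis=1)   # membership-weighted centroids
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / d ** (2.0 / (m - 1.0))    # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u

# synthetic "lumen vs. wall" intensities
x = np.concatenate([np.full(100, 0.2), np.full(100, 0.8)])
centers, u = fuzzy_c_means(x)
print(np.sort(centers))
```

On two well-separated intensity groups the centroids converge to the group values, and each pixel's membership column still sums to one.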
Automatic segmentation of mandible in panoramic x-ray.
Abdi, Amir Hossein; Kasaei, Shohreh; Mehdizadeh, Mojdeh
2015-10-01
As the panoramic x-ray is the most common extraoral radiography in dentistry, segmentation of its anatomical structures facilitates diagnosis and registration of dental records. This study presents a fast and accurate method for automatic segmentation of mandible in panoramic x-rays. In the proposed four-step algorithm, a superior border is extracted through horizontal integral projections. A modified Canny edge detector accompanied by morphological operators extracts the inferior border of the mandible body. The exterior borders of ramuses are extracted through a contour tracing method based on the average model of mandible. The best-matched template is fetched from the atlas of mandibles to complete the contour of left and right processes. The algorithm was tested on a set of 95 panoramic x-rays. Evaluating the results against manual segmentations of three expert dentists showed that the method is robust. It achieved an average performance of [Formula: see text] in Dice similarity, specificity, and sensitivity.
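The first of the four steps, extracting a border from horizontal integral projections, reduces to a row-sum profile. A minimal sketch on a toy image (the border-at-maximum-gradient rule is an illustrative assumption, not the authors' exact criterion):

```python
import numpy as np

def horizontal_projection_border(img):
    """Locate a strong horizontal border as the row where the
    horizontal integral projection (row-wise intensity sum)
    changes most sharply."""
    proj = img.sum(axis=1)                       # one value per row
    return int(np.argmax(np.abs(np.diff(proj)))) + 1

# toy "radiograph": dark soft tissue above row 60, brighter bone below
img = np.zeros((100, 120))
img[60:, :] = 1.0
print(horizontal_projection_border(img))  # prints 60
```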
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stützer, Kristin; Haase, Robert; Exner, Florian
2016-09-15
Purpose: Rating both a lung segmentation algorithm and a deformable image registration (DIR) algorithm for subsequent lung computed tomography (CT) images by different evaluation techniques. Furthermore, investigating the relative performance and the correlation of the different evaluation techniques to address their potential value in a clinical setting. Methods: Two to seven subsequent CT images (69 in total) of 15 lung cancer patients were acquired prior to, during, and after radiochemotherapy. Automated lung segmentations were compared to manually adapted contours. DIR between the first and all following CT images was performed with a fast algorithm specialized for lung tissue registration, requiring the lung segmentation as input. DIR results were evaluated based on landmark distances, lung contour metrics, and vector field inconsistencies in different subvolumes defined by eroding the lung contour. Correlations between the results from the three methods were evaluated. Results: Automated lung contour segmentation was satisfactory in 18 cases (26%), failed in 6 cases (9%), and required manual correction in 45 cases (66%). Initial and corrected contours had large overlap but showed strong local deviations. Landmark-based DIR evaluation revealed high accuracy compared to CT resolution with an average error of 2.9 mm. Contour metrics of deformed contours were largely satisfactory. The median vector length of inconsistency vector fields was 0.9 mm in the lung volume and slightly smaller for the eroded volumes. There was no clear correlation between the three evaluation approaches. Conclusions: Automatic lung segmentation remains challenging but can assist the manual delineation process. Proven by three techniques, the inspected DIR algorithm delivers reliable results for the lung CT data sets acquired at different time points. Clinical application of DIR demands a fast DIR evaluation to identify unacceptable results, for instance, by combining different automated DIR evaluation methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nam, Hyeong Soo; Kim, Chang-Soo; Yoo, Hongki, E-mail: kjwmm@korea.ac.kr, E-mail: hyoo@hanyang.ac.kr
Purpose: Intravascular optical coherence tomography (IV-OCT) is a high-resolution imaging method used to visualize the microstructure of arterial walls in vivo. IV-OCT enables the clinician to clearly observe and accurately measure stent apposition and neointimal coverage of coronary stents, which are associated with side effects such as in-stent thrombosis. In this study, the authors present an algorithm for quantifying stent apposition and neointimal coverage by automatically detecting lumen contours and stent struts in IV-OCT images. Methods: The algorithm utilizes OCT intensity images and their first and second gradient images along the axial direction to detect lumen contours and stent strut candidates. These stent strut candidates are classified into true and false stent struts based on their features, using an artificial neural network with one hidden layer and ten nodes. After segmentation, either the protrusion distance (PD) or neointimal thickness (NT) for each strut is measured automatically. In randomly selected image sets covering a large variety of clinical scenarios, the results of the algorithm were compared to those of manual segmentation by IV-OCT readers. Results: Stent strut detection showed a 96.5% positive predictive value and a 92.9% true positive rate. In addition, case-by-case validation also showed comparable accuracy for most cases. High correlation coefficients (R > 0.99) were observed for PD and NT between the algorithmic and the manual results, showing little bias (0.20 and 0.46 μm, respectively) and a narrow range of limits of agreement (36 and 54 μm, respectively). In addition, the algorithm worked well in various clinical scenarios and even in cases with a low level of stent malapposition and neointimal coverage. Conclusions: The presented automatic algorithm enables robust and fast detection of lumen contours and stent struts and provides quantitative measurements of PD and NT. In addition, the algorithm was validated using various clinical cases to demonstrate its reliability. Therefore, this technique can be effectively utilized for clinical trials on stent-related side effects, including in-stent thrombosis and in-stent restenosis.
Denoising and 4D visualization of OCT images
Gargesha, Madhusudhana; Jenkins, Michael W.; Rollins, Andrew M.; Wilson, David L.
2009-01-01
We are using Optical Coherence Tomography (OCT) to image structure and function of the developing embryonic heart in avian models. Fast OCT imaging produces very large 3D (2D + time) and 4D (3D volumes + time) data sets, which greatly challenge one's ability to visualize results. Noise in OCT images poses additional challenges. We created an algorithm with a quick, data set specific optimization for reduction of both shot and speckle noise and applied it to 3D visualization and image segmentation in OCT. When compared to baseline algorithms (median, Wiener, orthogonal wavelet, basic non-orthogonal wavelet), a panel of experts judged the new algorithm to give much improved volume renderings concerning both noise and 3D visualization. Specifically, the algorithm provided a better visualization of the myocardial and endocardial surfaces, and of the interaction of the embryonic heart tube with surrounding tissue. Quantitative evaluation using an image quality figure of merit also indicated superiority of the new algorithm. Noise reduction aided semi-automatic 2D image segmentation, as quantitatively evaluated using a contour distance measure with respect to an expert segmented contour. In conclusion, the noise reduction algorithm should be quite useful for visualization and quantitative measurements (e.g., heart volume, stroke volume, contraction velocity, etc.) in OCT embryo images. With its semi-automatic, data set specific optimization, we believe that the algorithm can be applied to OCT images from other applications. PMID:18679509
Active contour-based visual tracking by integrating colors, shapes, and motions.
Hu, Weiming; Zhou, Xue; Li, Wei; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen
2013-05-01
In this paper, we present a framework for active contour-based visual tracking using level sets. The main components of our framework include contour-based tracking initialization, color-based contour evolution, adaptive shape-based contour evolution for non-periodic motions, dynamic shape-based contour evolution for periodic motions, and the handling of abrupt motions. For the initialization of contour-based tracking, we develop an optical flow-based algorithm for automatically initializing contours at the first frame. For the color-based contour evolution, Markov random field theory is used to measure correlations between values of neighboring pixels for posterior probability estimation. For adaptive shape-based contour evolution, the global shape information and the local color information are combined to hierarchically evolve the contour, and a flexible shape updating model is constructed. For the dynamic shape-based contour evolution, a shape mode transition matrix is learnt to characterize the temporal correlations of object shapes. For the handling of abrupt motions, particle swarm optimization is adopted to capture the global motion which is applied to the contour in the current frame to produce an initial contour in the next frame.
A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.
Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle
2016-03-08
On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using a MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and the target registration error (TRE), measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (< 1 ms) with a satisfying accuracy (Dice = 0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour. These results suggest applying an image quality determination procedure before segmentation and combining different methods for optimal segmentation with the on-board MR-IGRT system.
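The two evaluation measures are straightforward to compute. A sketch on a synthetic high-contrast ROI, where global thresholding (the fastest method in the comparison) recovers the manual mask exactly (the toy image and names are illustrative):

```python
import numpy as np

def threshold_segment(img, t):
    """Binary segmentation by global thresholding."""
    return img > t

def target_registration_error(m1, m2):
    """TRE as defined above: distance between the centroids of the
    manual ROI and the automatically segmented ROI."""
    c1 = np.argwhere(m1).mean(axis=0)
    c2 = np.argwhere(m2).mean(axis=0)
    return float(np.linalg.norm(c1 - c2))

truth = np.zeros((64, 64), bool); truth[20:40, 20:40] = True
img = truth.astype(float) + 0.05          # high-contrast toy "kidney"
auto = threshold_segment(img, 0.5)
print(target_registration_error(truth, auto))  # prints 0.0
```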
[Development of a Software for Automatically Generated Contours in Eclipse TPS].
Xie, Zhao; Hu, Jinyou; Zou, Lian; Zhang, Weisha; Zou, Yuxin; Luo, Kelin; Liu, Xiangxiang; Yu, Luxin
2015-03-01
Automatic generation of planning targets and auxiliary contours has been achieved in Eclipse TPS 11.0. The scripting language AutoHotkey was used to develop a software tool for automatically generated contours in Eclipse TPS. This software, named Contour Auto Margin (CAM), is composed of contour operation functions, script-generation visualization, and script file operations. Ten cases of different cancers were selected separately; in Eclipse TPS 11.0, scripts generated by the software could not only automatically generate contours but also perform contour post-processing. For the different cancers, there was no difference between automatically generated contours and manually created contours. CAM is a user-friendly and powerful tool that can generate contours quickly and automatically in Eclipse TPS 11.0, greatly saving plan preparation time and improving the working efficiency of radiation therapy physicists.
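The kind of contour operation such scripts automate, for example a uniform margin expansion from a CTV to a PTV-style volume, can be sketched outside the TPS. This is a hypothetical brute-force stand-in; the actual CAM tool drives Eclipse's own contour functions:

```python
import numpy as np

def expand_margin(mask, margin_vox):
    """Grow a binary structure mask by an isotropic margin (in voxels)
    by brute-force distance checking. Illustrative only."""
    pts = np.argwhere(mask)
    out = np.zeros_like(mask)
    for idx in np.ndindex(mask.shape):
        # voxel belongs to the expansion if any structure voxel is within range
        if np.any(np.linalg.norm(pts - idx, axis=1) <= margin_vox):
            out[idx] = True
    return out

ctv = np.zeros((20, 20), bool); ctv[8:12, 8:12] = True
ptv = expand_margin(ctv, 2)
print(ctv.sum(), ptv.sum())   # the expanded volume is strictly larger
```

In practice a distance transform or morphological dilation with a round structuring element does the same job efficiently.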
An automatic method to detect and track the glottal gap from high speed videoendoscopic images.
Andrade-Miranda, Gustavo; Godino-Llorente, Juan I; Moro-Velázquez, Laureano; Gómez-García, Jorge Andrés
2015-10-29
The image-based analysis of vocal fold vibration plays an important role in the diagnosis of voice disorders. The analysis is based not only on direct observation of the video sequences, but also on an objective characterization of the phonation process by means of features extracted from the recorded images. Such analysis depends, however, on a prior accurate identification of the glottal gap, which is the most challenging step for further automatic assessment of vocal fold vibration. In this work, a complete framework to automatically segment and track the glottal area (or glottal gap) is proposed. The algorithm identifies a region of interest (ROI) that is adapted over time and combines active contours and the watershed transform for the final delineation of the glottis; an automatic procedure for synthesizing different videokymograms (VKGs) is also proposed. Thanks to the ROI implementation, the technique is robust to camera shifting, and objective tests proved the effectiveness and performance of the approach in the most challenging scenario, namely when closure of the vocal folds is incomplete. The novelties of the proposed algorithm lie in the use of temporal information to identify an adaptive ROI and the use of watershed merging combined with active contours for glottis delimitation. Additionally, an automatic procedure for synthesizing multiline VKGs by identification of the glottal main axis is developed.
Diffusion tensor driven contour closing for cell microinjection targeting.
Becattini, Gabriele; Mattos, Leonardo S; Caldwell, Darwin G
2010-01-01
This article introduces a novel approach to robust automatic detection of unstained living cells in bright-field (BF) microscope images, with the goal of producing a target list for an automated microinjection system. The overall image analysis process is described and includes: preprocessing, ridge enhancement, image segmentation, shape analysis, and injection point definition. The developed algorithm implements a new version of anisotropic contour completion (ACC) based on the partial differential equation (PDE) for heat diffusion, which improves the cell segmentation process by elongating the edges only along their tangent direction. The developed ACC algorithm is equivalent to a dilation of the binary edge image with a continuous elliptic structural element that takes into account the local orientation of the contours, preventing extension in the normal direction. Experiments carried out on real images of 10 to 50 microm CHO-K1 adherent cells show remarkable reliability of the algorithm, with up to 85% success in cell detection and injection point definition.
Automated recognition of the pericardium contour on processed CT images using genetic algorithms.
Rodrigues, É O; Rodrigues, L O; Oliveira, L S N; Conci, A; Liatsis, P
2017-08-01
This work proposes the use of Genetic Algorithms (GA) in tracing and recognizing the pericardium contour of the human heart using Computed Tomography (CT) images. We assume that each slice of the pericardium can be modelled by an ellipse, the parameters of which need to be optimally determined. An optimal ellipse would be one that closely follows the pericardium contour and, consequently, appropriately separates the epicardial and mediastinal fats of the human heart. Tracing and automatically identifying the pericardium contour aids in medical diagnosis. Usually, this process is done manually or not done at all due to the effort required. Besides, detecting the pericardium may improve previously proposed automated methodologies that separate the two types of fat associated with the human heart. Quantification of these fats provides important health risk marker information, as they are associated with the development of certain cardiovascular pathologies. Finally, we conclude that GA offers satisfactory solutions in a feasible amount of processing time.
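A toy version of the GA search over ellipse parameters can be sketched as follows. For brevity the candidate ellipses are axis-aligned with plain Gaussian mutation; the actual method also handles orientation and works on extracted CT edge points:

```python
import numpy as np

rng = np.random.default_rng(1)

def ellipse_points(cx, cy, a, b, n=72):
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return np.c_[cx + a * np.cos(t), cy + b * np.sin(t)]

def fitness(params, pts):
    # negative mean deviation of the points from the candidate ellipse
    cx, cy, a, b = params
    d = np.hypot((pts[:, 0] - cx) / a, (pts[:, 1] - cy) / b)
    return -np.mean(np.abs(d - 1.0))

def ga_fit(pts, pop=60, gens=120, sigma=2.0):
    P = rng.uniform([0, 0, 5, 5], [128, 128, 60, 60], (pop, 4))
    for _ in range(gens):
        scores = np.array([fitness(p, pts) for p in P])
        elite = P[np.argsort(scores)[-pop // 4:]]       # selection
        P = elite[rng.integers(0, len(elite), pop)]     # reproduction
        P = P + rng.normal(0, sigma, (pop, 4))          # mutation
        P[:, 2:] = np.abs(P[:, 2:])                     # keep axes positive
        sigma *= 0.97                                   # anneal step size
    scores = np.array([fitness(p, pts) for p in P])
    return P[np.argmax(scores)]

contour = ellipse_points(64, 64, 30, 20)   # synthetic "pericardium" slice
best = ga_fit(contour)
print(np.round(best, 1))
```

With a seeded generator the run is deterministic; the best individual should land close to the generating parameters (64, 64, 30, 20).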
SU-F-J-113: Multi-Atlas Based Automatic Organ Segmentation for Lung Radiotherapy Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, J; Han, J; Ailawadi, S
Purpose: Normal organ segmentation is a time-consuming and labor-intensive step in lung radiotherapy treatment planning. The aim of this study is to evaluate the performance of a multi-atlas based segmentation approach for automatic organs at risk (OAR) delineation. Methods: Fifteen lung stereotactic body radiation therapy patients were randomly selected. Planning CT images and OAR contours of the heart (HT), aorta (AO), vena cava (VC), pulmonary trunk (PT), and esophagus (ES) were exported and used as reference and atlas sets. For automatic organ delineation for a given target CT, 1) all atlas sets were deformably warped to the target CT, 2) the deformed sets were accumulated and normalized to produce organ probability density (OPD) maps, and 3) the OPD maps were converted to contours via image thresholding. The optimal threshold for each organ was empirically determined by comparing the auto-segmented contours against their respective reference contours. The delineated results were evaluated by measuring contour similarity metrics: DICE, mean distance (MD), and true detection rate (TD), where DICE = (2 × intersection volume/sum of the two volumes) and TD = {1.0 - (false positive + false negative)/2.0}. The Diffeomorphic Demons algorithm was employed for CT-CT deformable image registrations. Results: Optimal thresholds were determined to be 0.53 for HT, 0.38 for AO, 0.28 for PT, 0.43 for VC, and 0.31 for ES. The mean similarity metrics (DICE[%], MD[mm], TD[%]) were (88, 3.2, 89) for HT, (79, 3.2, 82) for AO, (75, 2.7, 77) for PT, (68, 3.4, 73) for VC, and (51, 2.7, 60) for ES. Conclusion: The investigated multi-atlas based approach produced reliable segmentations for the organs with large and relatively clear boundaries (HT and AO). However, the detection of small and narrow organs with diffuse boundaries (ES) was challenging. Sophisticated atlas selection and multi-atlas fusion algorithms may further improve the quality of the segmentations.
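Steps 2-3 of the delineation pipeline and the TD metric can be sketched directly. Toy 2-D masks stand in for the deformably warped atlas contours, and normalizing the FP/FN counts by the reference volume is an assumption on our part, since the abstract does not spell out the normalization:

```python
import numpy as np

def fuse_and_threshold(masks, tau):
    """Accumulate deformed atlas masks into an organ probability
    density (OPD) map and threshold it. (Step 1, the deformable
    registration itself, is assumed done.)"""
    opd = np.mean(masks, axis=0)        # voxel-wise organ probability
    return opd >= tau

def true_detection_rate(auto, ref):
    """TD = 1 - (false positive + false negative)/2, with both
    rates normalized by the reference volume (our assumption)."""
    fp = np.logical_and(auto, ~ref).sum() / ref.sum()
    fn = np.logical_and(~auto, ref).sum() / ref.sum()
    return 1.0 - (fp + fn) / 2.0

ref = np.zeros((32, 32), bool); ref[8:24, 8:24] = True
# three "deformed atlas" masks, each slightly misregistered
masks = []
for dy, dx in [(0, 0), (1, 0), (0, -1)]:
    m = np.zeros((32, 32), bool)
    m[8 + dy:24 + dy, 8 + dx:24 + dx] = True
    masks.append(m)

auto = fuse_and_threshold(np.array(masks), tau=0.5)
print(true_detection_rate(auto, ref))
```

With tau = 0.5 a voxel must appear in a majority of the warped atlases, so small registration errors are voted away and TD stays close to 1.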
Three-dimensional contour edge detection algorithm
NASA Astrophysics Data System (ADS)
Wang, Yizhou; Ong, Sim Heng; Kassim, Ashraf A.; Foong, Kelvin W. C.
2000-06-01
This paper presents a novel algorithm for automatically extracting 3D contour edges, which are points of maximum surface curvature in a surface range image. The 3D image data are represented as a surface polygon mesh. The algorithm transforms the range data, obtained by scanning a dental plaster cast, into a 2D gray scale image by linearly converting the z-value of each vertex to a gray value. The Canny operator is applied to the median-filtered image to obtain the edge pixels and their orientations. A vertex in the 3D object corresponding to the detected edge pixel and its neighbors in the direction of the edge gradient are further analyzed with respect to their n-curvatures to extract the real 3D contour edges. This algorithm provides a fast method of reducing and sorting the unwieldy data inherent in the surface mesh representation. It employs powerful 2D algorithms to extract features from the transformed 3D models and refers to the 3D model for further analysis of selected data. This approach substantially reduces the computational burden without losing accuracy. It is also easily extended to detect 3D landmarks and other geometrical features, thus making it applicable to a wide range of applications.
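The 3D-to-2D transform at the heart of this algorithm (linear z-to-gray mapping of the range data, then median filtering and edge detection) can be sketched as follows; a Sobel gradient magnitude stands in for the Canny operator used in the paper:

```python
import numpy as np
from scipy.ndimage import median_filter, sobel

def mesh_depth_to_gray(z_map, levels=256):
    """Linearly map vertex z-values (a range image) to gray values,
    mirroring the paper's 3D-to-2D conversion."""
    z = np.asarray(z_map, dtype=float)
    span = z.max() - z.min()
    gray = (z - z.min()) / (span if span else 1.0) * (levels - 1)
    return gray.astype(np.uint8)

def edge_strength(gray):
    """Median-filter the gray image, then take a gradient magnitude.
    (The paper applies the Canny operator; Sobel is a simpler stand-in.)"""
    smoothed = median_filter(gray.astype(float), size=3)
    gx, gy = sobel(smoothed, axis=1), sobel(smoothed, axis=0)
    return np.hypot(gx, gy)
```

Detected edge pixels would then be mapped back to mesh vertices for the curvature-based refinement the abstract describes.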
Digital computer programs for generating oblique orthographic projections and contour plots
NASA Technical Reports Server (NTRS)
Giles, G. L.
1975-01-01
User and programmer documentation is presented for two programs for automatic plotting of digital data. One of the programs generates oblique orthographic projections of three-dimensional numerical models, and the other generates contour plots of data distributed over an arbitrary planar region. A general description of the computational algorithms, user instructions, and complete listings of the programs are given. Several plots are included to illustrate various program options, and a single example is described to facilitate learning the use of the programs.
Brain MRI Tumor Detection using Active Contour Model and Local Image Fitting Energy
NASA Astrophysics Data System (ADS)
Nabizadeh, Nooshin; John, Nigel
2014-03-01
Automatic abnormality detection in Magnetic Resonance Imaging (MRI) is an important issue in many diagnostic and therapeutic applications. Here an automatic brain tumor detection method is introduced that uses T1-weighted images and K. Zhang et al.'s active contour model driven by local image fitting (LIF) energy. The local image fitting energy captures local image information, which enables the algorithm to segment images with intensity inhomogeneities. An advantage of this method is that the LIF energy functional has lower computational complexity than the local binary fitting (LBF) energy functional; moreover, it maintains the sub-pixel accuracy and boundary regularization properties. In Zhang's algorithm, a new level set method based on Gaussian filtering is used to implement the variational formulation, which is not only robust in preventing the energy functional from being trapped in a local minimum, but also effective in keeping the level set function regular. Experiments show that the proposed method achieves highly accurate brain tumor segmentation results.
Automatic rocks detection and classification on high resolution images of planetary surfaces
NASA Astrophysics Data System (ADS)
Aboudan, A.; Pacifici, A.; Murana, A.; Cannarsa, F.; Ori, G. G.; Dell'Arciprete, I.; Allemand, P.; Grandjean, P.; Portigliotti, S.; Marcer, A.; Lorenzoni, L.
2013-12-01
High-resolution images can be used to obtain rock locations and sizes on planetary surfaces. In particular, the rock size-frequency distribution is a key parameter to evaluate surface roughness, to investigate the geologic processes that formed the surface, and to assess the hazards related to spacecraft landing. The manual search for rocks on high-resolution images (even for small areas) can be very labor-intensive. An automatic or semi-automatic algorithm to identify rocks is mandatory to enable further processing, such as determining rock presence, size, height (by means of shadows) and spatial distribution over an area of interest. Accurate localization of rock and shadow contours is the key step in rock detection. An approach to contour detection based on morphological operators and statistical thresholding is presented in this work. The identified contours are then fitted using a proper geometric model of the rocks or shadows and used to estimate salient rock parameters (position, size, area, height). The performance of this approach has been evaluated both on images of a Martian analogue area in the Moroccan desert and on HiRISE images. Results have been compared with ground truth obtained by means of manual rock mapping and prove the effectiveness of the algorithm. The rock abundance and rock size-frequency distributions derived on selected HiRISE images have been compared with the results of similar analyses performed for the landing site certification of Mars landers (Viking, Pathfinder, MER, MSL) and with the available thermal data from IRTM and TES.
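The statistical thresholding and morphological cleanup described here can be sketched with scipy.ndimage; the threshold rule (bright outliers beyond mean + k·std) and the equivalent-circle size estimate are simplified assumptions, not the authors' exact geometric model:

```python
import numpy as np
from scipy import ndimage

def detect_rock_candidates(image, k=2.0):
    """Statistical thresholding plus morphological cleanup for blob detection."""
    img = np.asarray(image, dtype=float)
    # Statistical threshold: pixels more than k standard deviations above the mean.
    mask = img > img.mean() + k * img.std()
    # A morphological opening removes isolated noise pixels.
    mask = ndimage.binary_opening(mask)
    labels, n = ndimage.label(mask)
    # Fit each blob with an equivalent-area circle to estimate a diameter.
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    diameters = 2.0 * np.sqrt(np.asarray(sizes) / np.pi)
    return labels, diameters
```

The diameters feed directly into a size-frequency histogram like the one the study compares against lander-site analyses.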
NASA Astrophysics Data System (ADS)
Leavens, Claudia; Vik, Torbjørn; Schulz, Heinrich; Allaire, Stéphane; Kim, John; Dawson, Laura; O'Sullivan, Brian; Breen, Stephen; Jaffray, David; Pekar, Vladimir
2008-03-01
Manual contouring of target volumes and organs at risk in radiation therapy is extremely time-consuming, in particular for treating the head-and-neck area, where a single patient treatment plan can take several hours to contour. As radiation treatment delivery moves towards adaptive treatment, the need for more efficient segmentation techniques will increase. We are developing a method for automatic model-based segmentation of the head and neck. This process can be broken down into three main steps: i) automatic landmark identification in the image dataset of interest, ii) automatic landmark-based initialization of deformable surface models to the patient image dataset, and iii) adaptation of the deformable models to the patient-specific anatomical boundaries of interest. In this paper, we focus on the validation of the first step of this method, quantifying the results of our automatic landmark identification method. We use an image atlas formed by applying thin-plate spline (TPS) interpolation to ten atlas datasets, using 27 manually identified landmarks in each atlas/training dataset. The principal variation modes returned by principal component analysis (PCA) of the landmark positions were used by an automatic registration algorithm, which sought the corresponding landmarks in the clinical dataset of interest using a controlled random search algorithm. Applying a run time of 60 seconds to the random search, a root mean square (rms) distance to the ground-truth landmark position of 9.5 +/- 0.6 mm was calculated for the identified landmarks. Automatic segmentation of the brain, mandible and brain stem, using the detected landmarks, is demonstrated.
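The PCA of landmark positions that drives the controlled random search can be sketched with a plain SVD; the array shapes (10 training sets of 27 3D landmarks, as in the abstract) are illustrative:

```python
import numpy as np

def landmark_pca(training_sets, n_modes=3):
    """PCA over training landmark configurations.
    training_sets: (n_samples, n_landmarks, 3) array of (x, y, z) points."""
    X = np.asarray(training_sets, float).reshape(len(training_sets), -1)
    mean = X.mean(axis=0)
    # SVD of the centered data yields the principal variation modes.
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = Vt[:n_modes]                             # shape modes (eigenvectors)
    variances = (S[:n_modes] ** 2) / (len(X) - 1)    # variance per mode
    return mean, modes, variances

def synthesize(mean, modes, coeffs):
    """Candidate landmark set = mean + sum_i b_i * mode_i; the search
    would perturb the coefficients b_i to match the clinical dataset."""
    return mean + coeffs @ modes
```

Restricting the search to the leading modes is what keeps a 60-second random search tractable.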
SU-F-T-423: Automating Treatment Planning for Cervical Cancer in Low- and Middle- Income Countries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kisling, K; Zhang, L; Yang, J
Purpose: To develop and test two independent algorithms that automatically create the photon treatment fields for a four-field box beam arrangement, a common treatment technique for cervical cancer in low- and middle-income countries. Methods: Two algorithms were developed and integrated into Eclipse using its Advanced Programming Interface. 3D Method: We automatically segment bony anatomy on CT using an in-house multi-atlas contouring tool and project the structures into the beam’s-eye-view. We identify anatomical landmarks on the projections to define the field apertures. 2D Method: We generate DRRs for all four beams. An atlas of DRRs for six standard patients with corresponding field apertures is deformably registered to the test patient DRRs. The set of deformed atlas apertures is fitted to an expected shape to define the final apertures. Both algorithms were tested on 39 patient CTs, and the resulting treatment fields were scored by a radiation oncologist. We also investigated the feasibility of using one algorithm as an independent check of the other. Results: 96% of the 3D-Method-generated fields and 79% of the 2D-Method-generated fields were scored acceptable for treatment (“Per Protocol” or “Acceptable Variation”). The 3D Method generated more fields scored “Per Protocol” than the 2D Method (62% versus 17%). The 4% of the 3D-Method-generated fields that were scored “Unacceptable Deviation” were all due to an improper L5 vertebra contour resulting in an unacceptable superior jaw position. When these same patients were planned with the 2D Method, the superior jaw was acceptable, suggesting that the 2D Method can be used to independently check the 3D Method. Conclusion: Our results show that the 3D Method is feasible for automatically generating cervical treatment fields. Furthermore, the 2D Method can serve as an automatic, independent check of the automatically generated treatment fields.
These algorithms will be implemented for fully automated cervical treatment planning.
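A much-simplified sketch of the 3D Method's geometry: project the segmented bony mask into the beam's-eye-view, then derive jaw positions. The bounding-box-plus-margin rule and the margin value are hypothetical stand-ins for the paper's landmark-based aperture definition:

```python
import numpy as np

def beams_eye_view(mask3d, axis=1):
    """Project a 3D bony-structure mask along the beam axis (AP here)."""
    return mask3d.any(axis=axis)

def field_aperture(bev, spacing_cm, margin_cm=0.7):
    """Jaw positions from the projected structure's bounding box plus a
    margin (a simplification of the landmark-based rules in the paper)."""
    rows, cols = np.nonzero(bev)
    y1 = rows.min() * spacing_cm - margin_cm
    y2 = rows.max() * spacing_cm + margin_cm
    x1 = cols.min() * spacing_cm - margin_cm
    x2 = cols.max() * spacing_cm + margin_cm
    return (x1, x2, y1, y2)
```

The abstract's "Unacceptable Deviation" failure mode (a bad L5 contour shifting the superior jaw) corresponds here to `rows.min()` being driven by a faulty input mask, which is exactly what an independent second method can catch.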
Analysis and synthesis of intonation using the Tilt model.
Taylor, P
2000-03-01
This paper introduces the Tilt intonational model and describes how this model can be used to automatically analyze and synthesize intonation. In the model, intonation is represented as a linear sequence of events, which can be pitch accents or boundary tones. Each event is characterized by continuous parameters representing amplitude, duration, and tilt (a measure of the shape of the event). The paper describes an event detector, in effect an intonational recognition system, which produces a transcription of an utterance's intonation. The features and parameters of the event detector are discussed and performance figures are shown on a variety of read and spontaneous speaker independent conversational speech databases. Given the event locations, algorithms are described which produce an automatic analysis of each event in terms of the Tilt parameters. Synthesis algorithms are also presented which generate F0 contours from Tilt representations. The accuracy of these is shown by comparing synthetic F0 contours to real F0 contours. The paper concludes with an extensive discussion on linguistic representations of intonation and gives evidence that the Tilt model goes a long way to satisfying the desired goals of such a representation in that it has the right number of degrees of freedom to be able to describe and synthesize intonation accurately.
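The Tilt parameters for a single event can be computed from its rise and fall amplitudes and durations; the formulas below paraphrase the model (amplitude tilt and duration tilt averaged into one tilt value) and should be checked against Taylor's paper before reuse:

```python
def tilt_parameters(a_rise, a_fall, d_rise, d_fall):
    """Convert rise/fall measurements of one intonational event into
    Tilt parameters: overall amplitude, overall duration, and tilt in
    [-1, 1] (+1 = pure rise, -1 = pure fall, 0 = symmetric rise-fall)."""
    amp = abs(a_rise) + abs(a_fall)
    dur = d_rise + d_fall
    t_amp = (abs(a_rise) - abs(a_fall)) / amp if amp else 0.0
    t_dur = (d_rise - d_fall) / dur if dur else 0.0
    tilt = 0.5 * (t_amp + t_dur)
    return amp, dur, tilt
```

Synthesis runs the mapping in reverse: given (amplitude, duration, tilt), recover rise/fall parts and render the F0 contour for each event.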
Automated skin segmentation in ultrasonic evaluation of skin toxicity in breast cancer radiotherapy.
Gao, Yi; Tannenbaum, Allen; Chen, Hao; Torres, Mylin; Yoshida, Emi; Yang, Xiaofeng; Wang, Yuefeng; Curran, Walter; Liu, Tian
2013-11-01
Skin toxicity is the most common side effect of breast cancer radiotherapy and impairs the quality of life of many breast cancer survivors. We, along with other researchers, have recently found quantitative ultrasound to be effective as a skin toxicity assessment tool. Although more reliable than standard clinical evaluations (visual observation and palpation), the current procedure for ultrasound-based skin toxicity measurements requires manual delineation of the skin layers (i.e., epidermis-dermis and dermis-hypodermis interfaces) on each ultrasound B-mode image. Manual skin segmentation is time consuming and subjective. Moreover, radiation-induced skin injury may decrease image contrast between the dermis and hypodermis, which increases the difficulty of delineation. Therefore, we have developed an automatic skin segmentation tool (ASST) based on the active contour model with two significant modifications: (i) The proposed algorithm introduces a novel dual-curve scheme for the double skin layer extraction, as opposed to the original single active contour method. (ii) The proposed algorithm is based on a geometric contour framework as opposed to the previous parametric algorithm. This ASST algorithm was tested on a breast cancer image database of 730 ultrasound breast images (73 ultrasound studies of 23 patients). We compared skin segmentation results obtained with the ASST with manual contours performed by two physicians. The average percentage differences in skin thickness between the ASST measurement and that of each physician were less than 5% (4.8 ± 17.8% and -3.8 ± 21.1%, respectively). In summary, we have developed an automatic skin segmentation method that ensures objective assessment of radiation-induced changes in skin thickness. Our ultrasound technology offers a unique opportunity to quantify tissue injury in a more meaningful and reproducible manner than the subjective assessments currently employed in the clinic. 
Prostate segmentation in MRI using fused T2-weighted and elastography images
NASA Astrophysics Data System (ADS)
Nir, Guy; Sahebjavaher, Ramin S.; Baghani, Ali; Sinkus, Ralph; Salcudean, Septimiu E.
2014-03-01
Segmentation of the prostate in medical imaging is a challenging and important task for surgical planning and delivery of prostate cancer treatment. Automatic prostate segmentation can improve speed, reproducibility and consistency of the process. In this work, we propose a method for automatic segmentation of the prostate in magnetic resonance elastography (MRE) images. The method utilizes the complementary property of the elastogram and the corresponding T2-weighted image, which are obtained from the phase and magnitude components of the imaging signal, respectively. It follows a variational approach to propagate an active contour model based on the combination of region statistics in the elastogram and the edge map of the T2-weighted image. The method is fast and does not require prior shape information. The proposed algorithm is tested on 35 clinical image pairs from five MRE data sets, and is evaluated in comparison with manual contouring. The mean absolute distance between the automatic and manual contours is 1.8mm, with a maximum distance of 5.6mm. The relative area error is 7.6%, and the duration of the segmentation process is 2s per slice.
SU-F-J-72: A Clinical Usable Integrated Contouring Quality Evaluation Software for Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, S; Dolly, S; Cai, B
Purpose: To introduce the Auto Contour Evaluation (ACE) software, a clinically usable, user-friendly, efficient, all-in-one toolbox for automatically identifying common contouring errors in radiotherapy treatment planning using supervised machine learning techniques. Methods: ACE is developed in C# using the Microsoft .Net framework and Windows Presentation Foundation (WPF) for elegant GUI design and smooth GUI transition animations through the integration of graphics engines and high dots-per-inch (DPI) settings on modern high-resolution monitors. The industry-standard Model-View-ViewModel (MVVM) design pattern was chosen as the major architecture of ACE for a neat coding structure, deep modularization, easy maintainability, and seamless communication with other clinical software. ACE consists of 1) a patient data importing module integrated with the clinical patient database server, 2) a module for simultaneously displaying 2D DICOM images and RT structures, 3) a 3D RT structure visualization module using the Visualization Toolkit (VTK) library, and 4) a contour evaluation module using supervised pattern recognition algorithms to detect contouring errors and display detection results. ACE relies on supervised learning algorithms to handle all image processing and data processing jobs. Implementations of the related algorithms are powered by the Accord.Net scientific computing library for better efficiency and effectiveness. Results: ACE can take a patient’s CT images and RT structures from commercial treatment planning software via direct user input or from the patient database. All functionalities, including 2D and 3D image visualization and RT contour error detection, have been demonstrated with real clinical patient cases. Conclusion: ACE implements supervised learning algorithms and combines image processing and graphical visualization modules for RT contour verification.
ACE has great potential for automated radiotherapy contouring quality verification. Structured with the MVVM pattern, it is highly maintainable and extensible, and supports smooth connections with other clinical software tools.
Brain tumor segmentation with Vander Lugt correlator based active contour.
Essadike, Abdelaziz; Ouabida, Elhoussaine; Bouzid, Abdenbi
2018-07-01
The manual segmentation of brain tumors from medical images is an error-prone, sensitive, and time-consuming process. This paper presents an automatic and fast method of brain tumor segmentation. In the proposed method, a numerical simulation of the optical Vander Lugt correlator is used for automatically detecting the abnormal tissue region. The tumor filter used in the simulated optical correlation is tailored to all brain tumor types, and especially to glioblastoma, which is considered the most aggressive cancer. The simulated optical correlation, computed between Magnetic Resonance Images (MRI) and this filter, precisely and automatically estimates an initial contour inside the tumorous tissue. In the segmentation part, the detected initial contour is used to define an active contour model, formulating the task as an energy minimization problem. As a result, this initial contour assists the algorithm in evolving an active contour model towards the exact tumor boundaries. Equally important, for comparison purposes, we considered different active contour models and investigated their impact on the performance of the segmentation task. Several images from the BRATS database, with tumors anywhere in the image and of different sizes, contrast, and shape, are used to test the proposed system. Furthermore, several performance metrics are computed to present an aggregate overview of the proposed method's advantages. The proposed method achieves high accuracy in detecting the tumorous tissue via a parameter returned by the simulated optical correlation. In addition, the proposed method yields better performance than the active contour based methods, with averages of Sensitivity = 0.9733, Dice coefficient = 0.9663, Hausdorff distance = 2.6540, and Specificity = 0.9994, and is faster, with an average computational time of 0.4119 s per image.
Results reported on the BRATS database reveal that our proposed system improves over recently published state-of-the-art methods in brain tumor detection and segmentation.
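The simulated optical correlation reduces to frequency-domain matched filtering; a minimal numpy sketch with a hypothetical impulse template (the paper's tumor filter is far more elaborate):

```python
import numpy as np

def correlation_peak(image, tumor_filter):
    """Simulated optical correlation: multiply the image spectrum by the
    conjugated filter spectrum, invert, and take the peak location as a
    seed point inside the tumor. `tumor_filter` is a same-size template."""
    spec = np.fft.fft2(image) * np.conj(np.fft.fft2(tumor_filter))
    corr = np.abs(np.fft.ifft2(spec))
    return np.unravel_index(np.argmax(corr), corr.shape)
```

The peak coordinates would then seed the initial contour that the active contour model evolves toward the tumor boundary.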
Lymph node segmentation by dynamic programming and active contours.
Tan, Yongqiang; Lu, Lin; Bonde, Apurva; Wang, Deling; Qi, Jing; Schwartz, Lawrence H; Zhao, Binsheng
2018-03-03
Enlarged lymph nodes are indicators of cancer staging, and the change in their size is a reflection of treatment response. Automatic lymph node segmentation is challenging, as the boundary can be unclear and the surrounding structures complex. This work communicates a new three-dimensional algorithm for the segmentation of enlarged lymph nodes. The algorithm requires a user to draw a region of interest (ROI) enclosing the lymph node. Rays are cast from the center of the ROI, and the intersections of the rays with the boundary of the lymph node form a triangle mesh. The intersection points are determined by dynamic programming. The triangle mesh initializes an active contour which evolves to a low-energy boundary. Three radiologists independently delineated the contours of 54 lesions from 48 patients. The Dice coefficient was used to evaluate the algorithm's performance. The mean Dice coefficient between the computer and the majority-vote results was 83.2%. The mean Dice coefficients between the three radiologists' manual segmentations were 84.6%, 86.2%, and 88.3%. The performance of this segmentation algorithm suggests its potential clinical value for quantifying enlarged lymph nodes.
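The ray-casting initialization can be sketched as below; note that the paper determines the intersection points jointly by dynamic programming, whereas this sketch takes the strongest per-ray intensity step as a simplified stand-in:

```python
import numpy as np

def cast_rays(image, center, n_rays=36, max_r=None):
    """Cast rays from the ROI center and place a boundary point at the
    largest intensity step along each ray (simplified; the published
    method optimizes all rays jointly by dynamic programming)."""
    cy, cx = center
    max_r = max_r or min(image.shape) // 2 - 1
    pts = []
    for theta in np.linspace(0.0, 2 * np.pi, n_rays, endpoint=False):
        r = np.arange(1, max_r)
        ys = np.clip((cy + r * np.sin(theta)).astype(int), 0, image.shape[0] - 1)
        xs = np.clip((cx + r * np.cos(theta)).astype(int), 0, image.shape[1] - 1)
        profile = image[ys, xs].astype(float)
        step = np.abs(np.diff(profile))     # intensity change along the ray
        i = int(np.argmax(step))
        pts.append((ys[i], xs[i]))
    return pts
```

The resulting point ring is exactly the kind of polygon that can initialize an active contour for refinement.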
2013-01-01
Background: Activity of disease in patients with multiple sclerosis (MS) is monitored by detecting and delineating hyper-intense lesions on MRI scans. The Minimum Area Contour Change (MACC) algorithm was created with two main goals: a) to improve inter-operator agreement on outlining regions of interest (ROIs) and b) to automatically propagate longitudinal ROIs from the baseline scan to a follow-up scan. Methods: The MACC algorithm first identifies an outer bound for the solution path, forms a large number of iso-contour curves based on equally spaced contour values, and then selects the best contour value to outline the lesion. The MACC software was tested on a set of 17 FLAIR MRI images evaluated by a pair of human experts and a longitudinal dataset of 12 pairs of T2-weighted Fluid Attenuated Inversion Recovery (FLAIR) images that had lesion analysis ROIs drawn by a single expert operator. Results: In the tests where two human experts evaluated the same MRI images, the MACC program demonstrated that it could markedly reduce inter-operator outline error. In the longitudinal part of the study, the MACC program created ROIs on follow-up scans that were in close agreement with the original expert’s ROIs. Finally, in a post-hoc analysis of 424 follow-up scans, 91% of propagated MACC ROIs were accepted by an expert, and only 9% of the final accepted ROIs had to be created or edited by the expert. Conclusion: When used with an expert operator's verification of automatically created ROIs, MACC can be used to improve inter-operator agreement and decrease analysis time, which should improve data collected and analyzed in multicenter clinical trials. PMID:24004511
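The contour-value selection idea (equally spaced levels, pick the one whose enclosed area changes least) can be sketched with a thresholded-area proxy; this is an illustrative reading of the MACC criterion, not the published implementation:

```python
import numpy as np

def macc_level(image, mask, n_levels=64):
    """Pick the iso-contour level whose enclosed area changes least
    between neighboring levels (the minimum-area-contour-change idea),
    restricted to an outer-bound mask around the lesion."""
    vals = image[mask]
    # Equally spaced candidate contour values strictly inside the range.
    levels = np.linspace(vals.min(), vals.max(), n_levels + 2)[1:-1]
    areas = np.array([(vals >= lv).sum() for lv in levels])
    change = np.abs(np.diff(areas))
    best = int(np.argmin(change))   # most stable neighboring pair of levels
    return levels[best]
```

A stable level sits on the steep intensity shoulder of the lesion, which is why it is relatively insensitive to the operator's exact choice.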
DOE Office of Scientific and Technical Information (OSTI.GOV)
Labine, Alexandre; Carrier, Jean-François; Bedwani, Stéphane
2014-08-15
Purpose: To investigate an automatic bronchial and vessel bifurcation detection algorithm for deformable image registration (DIR) assessment to improve lung cancer radiation treatment. Methods: 4DCT datasets were acquired and exported to the Varian treatment planning system (TPS) Eclipse™ for contouring. The lungs TPS contour was used as the prior shape for a segmentation algorithm based on hierarchical surface deformation that identifies the deformed lung volumes of the 10 breathing phases. A Hounsfield unit (HU) threshold filter was applied within the segmented lung volumes to identify blood vessels and airways. Segmented blood vessels and airways were skeletonised using a hierarchical curve-skeleton algorithm based on a generalized potential field approach. A graph representation of the computed skeleton was generated to assign one of three labels to each node: termination, continuation, or branching. Results: 320 ± 51 bifurcations were detected in the right lung of a patient across the 10 breathing phases. The bifurcations were visually analyzed. 92 ± 10 bifurcations were found in the upper half of the lung and 228 ± 45 in the lower half. Discrepancies between the ten vessel trees were mainly ascribed to large deformations and to regions where the HU varies. Conclusions: We established an automatic method for DIR assessment using the morphological information of the patient anatomy. This approach allows a description of the lung's internal structure movement, which is needed to validate the DIR deformation fields for accurate 4D cancer treatment planning.
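The node-labeling step on the skeleton graph follows directly from node degree; a minimal sketch over an edge list:

```python
def label_skeleton_nodes(edges):
    """Classify skeleton-graph nodes by degree: 1 = termination,
    2 = continuation, >= 3 = branching (a bifurcation)."""
    degree = {}
    for a, b in edges:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    kind = {1: "termination", 2: "continuation"}
    return {n: kind.get(d, "branching") for n, d in degree.items()}

# Toy airway tree: a trunk 0-1-2 splitting at node 2 into leaves 3 and 4.
labels = label_skeleton_nodes([(0, 1), (1, 2), (2, 3), (2, 4)])
```

Counting the "branching" nodes in each breathing phase gives the per-phase bifurcation tallies reported in the Results.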
Automatic exudate detection by fusing multiple active contours and regionwise classification.
Harangi, Balazs; Hajdu, Andras
2014-11-01
In this paper, we propose a method for the automatic detection of exudates in digital fundus images. Our approach can be divided into three stages: candidate extraction, precise contour segmentation, and the labeling of candidates as true or false exudates. For candidate detection, we borrow a grayscale morphology-based method to identify possible regions containing these bright lesions. Then, to extract the precise boundary of the candidates, we introduce a complex active contour-based method. Namely, to increase the accuracy of segmentation, we extract additional possible contours by taking advantage of the diverse behavior of different pre-processing methods. After selecting an appropriate combination of the extracted contours, a region-wise classifier is applied to remove the false exudate candidates. For this task, we consider several region-based features and extract an appropriate feature subset to train a Naïve-Bayes classifier, optimized further by an adaptive boosting technique. Regarding experimental studies, the method was tested on publicly available databases, both to measure the accuracy of the segmentation of exudate regions and to recognize their presence at image level. In a quantitative evaluation on publicly available datasets, the proposed approach outperformed several state-of-the-art exudate detector algorithms.
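The grayscale-morphology candidate stage can be sketched with a white top-hat (image minus its grey opening), which highlights bright lesions smaller than the structuring element; the element size and contrast threshold below are hypothetical:

```python
import numpy as np
from scipy import ndimage

def exudate_candidates(green_channel, size=5, min_contrast=10):
    """White top-hat candidate extraction for bright lesions: subtract
    the grey opening, then keep pixels exceeding a contrast threshold.
    `size` and `min_contrast` are illustrative values."""
    img = np.asarray(green_channel, dtype=float)
    tophat = img - ndimage.grey_opening(img, size=(size, size))
    return tophat > min_contrast
```

Each connected candidate region would then be described by region-based features and passed to the boosted Naïve-Bayes classifier described in the abstract.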
A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.
Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle
2016-03-01
On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and the target registration error (TRE), measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (e.g., the kidney), the thresholding method provided the best speed (<1 ms) with satisfying accuracy (Dice = 0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour.
Results suggest an image quality determination procedure before segmentation and a combination of different methods for optimal segmentation with the on-board MR-IGRT system.
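The two evaluation metrics used in this study are easy to restate in code: the Dice coefficient and the TRE as a centroid-to-centroid distance:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def tre(a, b, spacing=1.0):
    """Target registration error as defined in the study: distance
    between the centroids of the manual and automatic ROI masks."""
    ca = np.mean(np.argwhere(a), axis=0)
    cb = np.mean(np.argwhere(b), axis=0)
    return float(np.linalg.norm((ca - cb) * spacing))
```

Dice rewards overlap while TRE penalizes drift of the tracked region, so the two together characterize both shape agreement and localization.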
Segmentation algorithm on smartphone dual camera: application to plant organs in the wild
NASA Astrophysics Data System (ADS)
Bertrand, Sarah; Cerutti, Guillaume; Tougne, Laure
2018-04-01
To identify the species of a tree, botanists inspect its different organs: the leaves, the bark, the flowers, and the fruits. To develop an algorithm that automatically identifies the species, we need to extract these objects of interest from their complex natural environment. In this article, we focus on the segmentation of flowers and fruits, and we present a new segmentation method based on an active contour algorithm driven by two probability maps. The first map is constructed via the dual camera found on the back of the latest smartphones. The second map is produced by a multilayer perceptron (MLP). The combination of these two maps to drive the evolution of the object contour allows an efficient segmentation of the organ from a natural background.
Computer-aided US diagnosis of breast lesions by using cell-based contour grouping.
Cheng, Jie-Zhi; Chou, Yi-Hong; Huang, Chiun-Sheng; Chang, Yeun-Chung; Tiu, Chui-Mei; Chen, Kuei-Wu; Chen, Chung-Ming
2010-06-01
To develop a computer-aided diagnostic algorithm with automatic boundary delineation for differential diagnosis of benign and malignant breast lesions at ultrasonography (US) and investigate the effect of boundary quality on the performance of a computer-aided diagnostic algorithm. This was an institutional review board-approved retrospective study with waiver of informed consent. A cell-based contour grouping (CBCG) segmentation algorithm was used to delineate the lesion boundaries automatically. Seven morphologic features were extracted. The classifier was a logistic regression function. Five hundred twenty breast US scans were obtained from 520 subjects (age range, 15-89 years), including 275 benign (mean size, 15 mm; range, 5-35 mm) and 245 malignant (mean size, 18 mm; range, 8-29 mm) lesions. The newly developed computer-aided diagnostic algorithm was evaluated on the basis of boundary quality and differentiation performance. The segmentation algorithms and features in two conventional computer-aided diagnostic algorithms were used for comparative study. The CBCG-generated boundaries were shown to be comparable with the manually delineated boundaries. The area under the receiver operating characteristic curve (AUC) and differentiation accuracy were 0.968 +/- 0.010 and 93.1% +/- 0.7, respectively, for all 520 breast lesions. At the 5% significance level, the newly developed algorithm was shown to be superior to the use of the boundaries and features of the two conventional computer-aided diagnostic algorithms in terms of AUC (0.974 +/- 0.007 versus 0.890 +/- 0.008 and 0.788 +/- 0.024, respectively). The newly developed computer-aided diagnostic algorithm that used a CBCG segmentation method to measure boundaries achieved a high differentiation performance.
Segmentation of radiographic images under topological constraints: application to the femur.
Gamage, Pavan; Xie, Sheng Quan; Delmas, Patrice; Xu, Wei Liang
2010-09-01
A framework for radiographic image segmentation under topological control based on two-dimensional (2D) image analysis was developed. The system is intended for use in common radiological tasks including fracture treatment analysis, osteoarthritis diagnostics and osteotomy management planning. The segmentation framework utilizes a generic three-dimensional (3D) model of the bone of interest to define the anatomical topology. Non-rigid registration is performed between the projected contours of the generic 3D model and extracted edges of the X-ray image to achieve the segmentation. For fractured bones, the segmentation requires an additional step where a region-based active contours curve evolution is performed with a level set Mumford-Shah method to obtain the fracture surface edge. The application of the segmentation framework to analysis of human femur radiographs was evaluated. The proposed system has two major innovations. First, definition of the topological constraints does not require a statistical learning process, so the method is generally applicable to a variety of bony anatomy segmentation problems. Second, the methodology is able to handle both intact and fractured bone segmentation. Testing on clinical X-ray images yielded an average root mean squared distance (between the automatically segmented femur contour and the manual segmented ground truth) of 1.10 mm with a standard deviation of 0.13 mm. The proposed point correspondence estimation algorithm was benchmarked against three state-of-the-art point matching algorithms, demonstrating successful non-rigid registration for the cases of interest. A topologically constrained automatic bone contour segmentation framework was developed and tested, providing robustness to noise, outliers, deformations and occlusions.
Automatic treatment plan re-optimization for adaptive radiotherapy guided with the initial plan DVHs
Li, Nan; Zarepisheh, Masoud; Uribe-Sanchez, Andres; Moore, Kevin; Tian, Zhen; Zhen, Xin; Graves, Yan Jiang; Gautier, Quentin; Mell, Loren; Zhou, Linghong; Jia, Xun; Jiang, Steve
2013-12-21
Adaptive radiation therapy (ART) can reduce normal tissue toxicity and/or improve tumor control through treatment adaptations based on the current patient anatomy. Developing an efficient and effective re-planning algorithm is an important step toward the clinical realization of ART. For the re-planning process, the manual trial-and-error approach of fine-tuning planning parameters is time-consuming and usually considered impractical, especially for online ART. It is desirable to automate this step to yield a plan of acceptable quality with minimal intervention. In ART, prior information from the original plan is available, such as the dose-volume histogram (DVH), which can be employed to facilitate the automatic re-planning process. The goal of this work is to develop an automatic re-planning algorithm to generate a plan with similar, or possibly better, DVH curves compared with the clinically delivered original plan. Specifically, our algorithm iterates two loops. The inner loop is the traditional fluence map optimization, in which we optimize a quadratic objective function penalizing the deviation of the dose received by each voxel from its prescribed or threshold dose with a set of fixed voxel weighting factors. In the outer loop, the voxel weighting factors in the objective function are adjusted according to the deviation of the current DVH curves from those in the original plan. The process is repeated until the DVH curves are acceptable or the maximum number of iterations is reached. The whole algorithm is implemented on a GPU for high efficiency. The feasibility of our algorithm has been demonstrated with three head-and-neck cancer IMRT cases, each having an initial planning CT scan and another treatment CT scan acquired in the middle of the treatment course. Compared with the DVH curves in the original plan, the DVH curves in the resulting plan using our algorithm with 30 iterations are better for almost all structures.
The re-optimization process takes about 30 s using our in-house optimization engine.
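The two-loop scheme described above can be sketched with a toy quadratic fluence map optimization. Everything below is illustrative: the dose-deposition matrix is random, the prescription is uniform, and a simple per-voxel deviation stands in for the DVH-curve comparison that drives the real weight update.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_beamlets = 50, 20
D = rng.random((n_voxels, n_beamlets))   # toy dose-deposition matrix (assumption)
d_presc = np.full(n_voxels, 10.0)        # prescribed dose per voxel
w = np.ones(n_voxels)                    # voxel weighting factors

def inner_fmo(w, x, steps=400, lr=5e-4):
    """Inner loop: projected gradient descent on sum_v w_v (d_v - p_v)^2 with x >= 0."""
    for _ in range(steps):
        grad = D.T @ (2.0 * w * (D @ x - d_presc))
        x = np.maximum(x - lr * grad, 0.0)   # beamlet intensities stay non-negative
    return x

x = np.zeros(n_beamlets)
for _ in range(5):                           # outer loop
    x = inner_fmo(w, x)
    dev = np.abs(D @ x - d_presc)            # stand-in for the DVH-curve deviation
    w = 1.0 + dev / dev.max()                # up-weight the worst-matching voxels
```

The published algorithm compares full DVH curves against the original plan and runs on a GPU; this sketch only shows the alternation between fixed-weight optimization and weight adjustment.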
Human body motion tracking based on quantum-inspired immune cloning algorithm
NASA Astrophysics Data System (ADS)
Han, Hong; Yue, Lichuan; Jiao, Licheng; Wu, Xing
2009-10-01
In a static monocular camera system, recovering an accurate 3D human body posture remains a great challenge for computer vision. This paper presents human posture recognition from video sequences using the Quantum-Inspired Immune Cloning Algorithm (QICA). The algorithm comprises three parts. First, prior knowledge of the human body is used: the key joint points are detected automatically from the human contours and from skeletons thinned from those contours. Second, because of the complexity of human movement, a forecasting mechanism for occluded joint points is introduced to obtain optimal 2D key joint points of the human body. Finally, the pose is estimated by using QICA to optimize the match between the 2D projection of the 3D human key joint points and the 2D detected key joint points; this recovers the movement of the human body well because the algorithm can find not only the global optimal solution but also local optimal solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schoot, A. J. A. J. van de, E-mail: a.j.schootvande@amc.uva.nl; Schooneveldt, G.; Wognum, S.
Purpose: The aim of this study is to develop and validate a generic method for automatic bladder segmentation on cone beam computed tomography (CBCT), independent of gender and treatment position (prone or supine), using only pretreatment imaging data. Methods: Data of 20 patients, treated for tumors in the pelvic region with the entire bladder visible on CT and CBCT, were divided into four equally sized groups based on gender and treatment position. The full and empty bladder contours, which can be acquired with pretreatment CT imaging, were used to generate a patient-specific bladder shape model. This model was used to guide the segmentation process on CBCT. To obtain the bladder segmentation, the reference bladder contour was deformed iteratively by maximizing the cross-correlation between directional grey value gradients over the reference and CBCT bladder edge. To overcome incorrect segmentations caused by CBCT image artifacts, automatic adaptations were implemented. Moreover, locally incorrect segmentations could be adapted manually. After each adapted segmentation, the bladder shape model was expanded and new shape patterns were calculated for subsequent segmentations. All available CBCTs were used to validate the segmentation algorithm. The bladder segmentations were validated by comparison with the manual delineations, and the segmentation performance was quantified using the Dice similarity coefficient (DSC), surface distance error (SDE) and SD of contour-to-contour distances. Also, bladder volumes obtained by manual delineations and segmentations were compared using a Bland-Altman error analysis. Results: The mean DSC, mean SDE, and mean SD of contour-to-contour distances between segmentations and manual delineations were 0.87, 0.27 cm and 0.22 cm (female, prone), 0.85, 0.28 cm and 0.22 cm (female, supine), 0.89, 0.21 cm and 0.17 cm (male, supine) and 0.88, 0.23 cm and 0.17 cm (male, prone), respectively.
Manual local adaptations improved the segmentation results significantly (p < 0.01) based on DSC (6.72%) and SD of contour-to-contour distances (0.08 cm) and decreased the 95% confidence intervals of the bladder volume differences. Moreover, expanding the shape model improved the segmentation results significantly (p < 0.01) based on DSC and SD of contour-to-contour distances. Conclusions: This patient-specific shape-model-based automatic bladder segmentation method on CBCT is accurate and generic. Our segmentation method needs only two pretreatment imaging data sets as prior knowledge, is independent of patient gender and treatment position, and allows the segmentation to be adapted manually at a local level.
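The Dice similarity coefficient used throughout these abstracts has a compact definition: twice the overlap divided by the sum of the two mask sizes. A minimal sketch on synthetic binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto   = np.zeros((8, 8), dtype=bool); auto[2:6, 2:6] = True    # 16 px
manual = np.zeros((8, 8), dtype=bool); manual[3:7, 3:7] = True  # 16 px, 9 px overlap
print(dice(auto, manual))  # → 0.5625  (2*9 / (16+16))
```

A DSC of 1.0 means identical masks; values near the 0.85-0.89 range reported above indicate substantial but imperfect overlap.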
Mishra, Ajay; Aloimonos, Yiannis
2009-01-01
The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene, which can be either an object or just a part of it. We define as a basic segmentation problem the task of segmenting the region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour, a connected set of boundary edge fragments in the edge map of the scene, around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue-independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring visual processing to the next level. Our approach is different from current approaches: while existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach.
NASA Astrophysics Data System (ADS)
Cheng, Guanghui; Yang, Xiaofeng; Wu, Ning; Xu, Zhijian; Zhao, Hongfu; Wang, Yuefeng; Liu, Tian
2013-02-01
Xerostomia (dry mouth), resulting from radiation damage to the parotid glands, is one of the most common and distressing side effects of head-and-neck cancer radiotherapy. Recent MRI studies have demonstrated that the volume reduction of parotid glands is an important indicator for radiation damage and xerostomia. In the clinic, parotid-volume evaluation is exclusively based on physicians' manual contours. However, manual contouring is time-consuming and prone to inter-observer and intra-observer variability. Here, we report a fully automated multi-atlas-based registration method for parotid-gland delineation in 3D head-and-neck MR images. The multi-atlas segmentation utilizes a hybrid deformable image registration to map the target subject to multiple patients' images, applies the transformation to the corresponding segmented parotid glands, and subsequently uses the multiple patient-specific pairs (head-and-neck MR image and transformed parotid-gland mask) to train support vector machine (SVM) to reach consensus to segment the parotid gland of the target subject. This segmentation algorithm was tested with head-and-neck MRIs of 5 patients following radiotherapy for the nasopharyngeal cancer. The average parotid-gland volume overlapped 85% between the automatic segmentations and the physicians' manual contours. In conclusion, we have demonstrated the feasibility of an automatic multi-atlas based segmentation algorithm to segment parotid glands in head-and-neck MR images.
Gregoretti, Francesco; Cesarini, Elisa; Lanzuolo, Chiara; Oliva, Gennaro; Antonelli, Laura
2016-01-01
The large amount of data generated in biological experiments that rely on advanced microscopy can be handled only with automated image analysis. Most analyses require reliable cell image segmentation, ideally capable of detecting subcellular structures. We present an automatic segmentation method to detect Polycomb group (PcG) protein areas isolated from nuclei regions in high-resolution fluorescent cell image stacks. It combines two segmentation algorithms that use an active contour model and a classification technique, serving as a tool to better understand the subcellular three-dimensional distribution of PcG proteins in live cell image sequences. We obtained accurate results throughout several cell image datasets, coming from different cell types and corresponding to different fluorescent labels, without requiring elaborate adjustments to each dataset.
Breast boundary detection with active contours
NASA Astrophysics Data System (ADS)
Balic, I.; Goyal, P.; Roy, O.; Duric, N.
2014-03-01
Ultrasound tomography is a modality that can be used to image various characteristics of the breast, such as sound speed, attenuation, and reflectivity. In the considered setup, the breast is immersed in water and scanned along the coronal axis from the chest wall to the nipple region. To improve image visualization, it is desirable to remove the water background. To this end, the 3D boundary of the breast must be accurately estimated. We present an iterative algorithm based on active contours that automatically detects the boundary of a breast using a 3D stack of attenuation images obtained from an ultrasound tomography scanner. We build upon an existing method to design an algorithm that is fast, fully automated, and reliable. We demonstrate the effectiveness of the proposed technique using clinical data sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Padgett, K; Pollack, A; Stoyanova, R
Purpose: Automatically generated prostate MRI contours can be used to aid in image registration with CT or ultrasound and to reduce the burden of contouring for radiation treatment planning. In addition, prostate and zonal contours can help automate quantitative imaging feature extraction and the analysis of longitudinal MRI studies. These potential gains are limited if the solutions are not compatible across different MRI vendors. The goal of this study is to characterize an atlas-based automatic segmentation procedure of the prostate collected on MRI systems from multiple vendors. Methods: The prostate and peripheral zone (PZ) were manually contoured by an expert radiation oncologist on T2-weighted scans acquired on both GE (n=31) and Siemens (n=33) 3T MRI systems. A leave-one-out approach was utilized, where the target subject is removed from the atlas before the segmentation algorithm is initiated. The atlas-segmentation method finds the best nine matched atlas subjects and then performs a normalized intensity-based free-form deformable registration of these subjects to the target subject. These nine contours are then merged into a single contour using Simultaneous Truth and Performance Level Estimation (STAPLE). Contour comparisons were made using Dice similarity coefficients (DSC) and Hausdorff distances. Results: Using the T2 FatSat (FS) GE datasets, the atlas-generated contours resulted in an average DSC of 0.83±0.06 for prostate, 0.57±0.12 for PZ and 0.75±0.09 for CG. Similar results were found when using the Siemens data, with a DSC of 0.79±0.14 for prostate, 0.54±0.16 for PZ and 0.70±0.9 for CG. For both vendors, the contrast between the prostate and the surrounding anatomy, and between the PZ and CG contours, showed clear separation (p < 0.0001 for all comparisons).
Conclusion: Atlas-based segmentation yielded promising results for all contours compared to expertly defined contours on both Siemens and GE 3T systems, providing fast and automatic segmentation of the prostate. Funding Support, Disclosures, and Conflict of Interest: AS Nelson is a partial owner of MIM Software, Inc. AS Nelson and A Swallen are current employees of MIM Software, Inc.
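STAPLE, used above to fuse the nine atlas contours, iteratively estimates each atlas's sensitivity and specificity. A much simpler per-voxel majority vote conveys the basic label-fusion idea; the masks below are synthetic and this is a crude stand-in, not the STAPLE algorithm itself.

```python
import numpy as np

def majority_vote(masks):
    """Fuse binary masks by per-voxel majority; a simple stand-in for STAPLE."""
    stack = np.stack(masks)
    return stack.sum(axis=0) > stack.shape[0] / 2.0

m1 = np.array([[1, 1, 0], [0, 0, 0]], dtype=bool)
m2 = np.array([[1, 0, 0], [1, 0, 0]], dtype=bool)
m3 = np.array([[1, 1, 1], [0, 0, 0]], dtype=bool)
fused = majority_vote([m1, m2, m3])  # keep voxels labeled by at least 2 of 3 atlases
```

Unlike this fixed vote, STAPLE weights each atlas by its estimated reliability, which matters when atlas quality varies.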
Automatic liver contouring for radiotherapy treatment planning
NASA Astrophysics Data System (ADS)
Li, Dengwang; Liu, Li; Kapp, Daniel S.; Xing, Lei
2015-09-01
To develop automatic and efficient liver contouring software for planning 3D-CT and four-dimensional computed tomography (4D-CT) for application in clinical radiation therapy treatment planning systems. The algorithm comprises three steps for overcoming the challenge of similar intensities between the liver region and its surrounding tissues. First, the total variation model with the L1 norm (TV-L1), which has the characteristic of multi-scale decomposition and an edge-preserving property, is used for removing the surrounding muscles and tissues. Second, an improved level set model that contains both global and local energy functions is utilized to extract liver contour information sequentially. In the global energy function, the local correlation coefficient (LCC) is constructed based on the gray level co-occurrence matrix both of the initial liver region and the background region. The LCC can calculate the correlation of a pixel with the foreground and background regions, respectively. The LCC is combined with intensity distribution models to classify pixels during the evolutionary process of the level set based method. The obtained liver contour is used as the candidate liver region for the following step. In the third step, voxel-based texture characterization is employed for refining the liver region and obtaining the final liver contours. The proposed method was validated based on the planning CT images of a group of 25 patients undergoing radiation therapy treatment planning. These included ten lung cancer patients with normal appearing livers and ten patients with hepatocellular carcinoma or liver metastases. The method was also tested on abdominal 4D-CT images of a group of five patients with hepatocellular carcinoma or liver metastases. 
The false positive volume percentage, the false negative volume percentage, and the dice similarity coefficient between liver contours obtained by a developed algorithm and a current standard delineated by the expert group are on an average 2.15-2.57%, 2.96-3.23%, and 91.01-97.21% for the CT images with normal appearing livers, 2.28-3.62%, 3.15-4.33%, and 86.14-93.53% for the CT images with hepatocellular carcinoma or liver metastases, and 2.37-3.96%, 3.25-4.57%, and 82.23-89.44% for the 4D-CT images also with hepatocellular carcinoma or liver metastases, respectively. The proposed three-step method can achieve efficient automatic liver contouring for planning CT and 4D-CT images with follow-up treatment planning and should find widespread applications in future treatment planning systems.
NASA Astrophysics Data System (ADS)
Mohammad, Fatimah; Ansari, Rashid; Shahidi, Mahnaz
2013-03-01
The visibility and continuity of the inner segment outer segment (ISOS) junction layer of the photoreceptors on spectral domain optical coherence tomography images is known to be related to visual acuity in patients with age-related macular degeneration (AMD). Automatic detection and segmentation of lesions and pathologies in retinal images is crucial for the screening, diagnosis, and follow-up of patients with retinal diseases. One of the challenges of using the classical level-set algorithms for segmentation involves the placement of the initial contour. Manually defining the contour or randomly placing it in the image may lead to segmentation of erroneous structures. It is important to be able to define the contour automatically by using information provided by image features. We explored a level-set method which is based on the classical Chan-Vese model and which utilizes image feature information for automatic contour placement for the segmentation of pathologies in fluorescein angiograms and en face retinal images of the ISOS layer. This was accomplished by exploiting a priori knowledge of the shape and intensity distribution, allowing the use of projection profiles to detect the presence of pathologies that are characterized by intensity differences with surrounding areas in retinal images. We first tested our method by applying it to fluorescein angiograms. We then applied our method to en face retinal images of patients with AMD. The experimental results demonstrate that the proposed method provided a quick and improved outcome as compared to the classical Chan-Vese method in which the initial contour is randomly placed, thus indicating the potential to provide a more accurate and detailed view of changes in pathologies due to disease progression and treatment.
Semi-automatic 3D lung nodule segmentation in CT using dynamic programming
NASA Astrophysics Data System (ADS)
Sargent, Dustin; Park, Sun Young
2017-02-01
We present a method for semi-automatic segmentation of lung nodules in chest CT that can be extended to general lesion segmentation in multiple modalities. Most semi-automatic algorithms for lesion segmentation or similar tasks use region-growing or edge-based contour finding methods such as level-set. However, lung nodules and other lesions are often connected to surrounding tissues, which makes these algorithms prone to growing the nodule boundary into the surrounding tissue. To solve this problem, we apply a 3D extension of the 2D edge linking method with dynamic programming to find a closed surface in a spherical representation of the nodule ROI. The algorithm requires a user to draw a maximal diameter across the nodule in the slice in which the nodule cross section is the largest. We report the lesion volume estimation accuracy of our algorithm on the FDA lung phantom dataset, and the RECIST diameter estimation accuracy on the lung nodule dataset from the SPIE 2016 lung nodule classification challenge. The phantom results in particular demonstrate that our algorithm has the potential to mitigate the disparity in measurements performed by different radiologists on the same lesions, which could improve the accuracy of disease progression tracking.
NASA Astrophysics Data System (ADS)
Khamwan, Kitiwat; Krisanachinda, Anchali; Pluempitiwiriyawej, Charnchai
2012-10-01
This study presents an automatic method to trace the boundary of the tumour in positron emission tomography (PET) images. It has been discovered that Otsu's threshold value is biased when the within-class variances of the object and the background are significantly different. To solve the problem, a double-stage threshold search that minimizes the energy between the first Otsu threshold and the maximum intensity value is introduced. Such shifted-optimal thresholding is embedded into a region-based active contour so that both algorithms are performed consecutively. The efficiency of the method is validated using six sphere inserts (0.52-26.53 cc volume) of the IEC/2001 torso phantom. Both the spheres and the phantom background were filled with 18F solution, and PET images were acquired at four source-to-background ratios (SBRs). The results illustrate that the tumour volumes segmented by the combined algorithm are of higher accuracy than those from the traditional active contour. The method was then implemented clinically in ten oesophageal cancer patients, and the results were evaluated and compared with manual tracing by an experienced radiation oncologist. The advantage of the algorithm is the reduced erroneous delineation, which improves the precision and accuracy of PET tumour contouring. Moreover, the combined method is robust, independent of the SBR threshold-volume curves, and does not require prior lesion size measurement.
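The double-stage search described above rethresholds the intensities lying above the first Otsu threshold. A simplified sketch on synthetic intensities illustrates the idea (the paper's energy formulation differs in detail; here the second stage is simply Otsu applied again above the first cut):

```python
import numpy as np

def otsu(values):
    """Otsu's threshold: pick the cut maximizing between-class variance."""
    vals = np.sort(np.ravel(values))
    best_t, best_var = vals[0], -1.0
    for t in np.unique(vals)[:-1]:
        lo, hi = vals[vals <= t], vals[vals > t]
        w0, w1 = lo.size / vals.size, hi.size / vals.size
        var = w0 * w1 * (lo.mean() - hi.mean()) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic PET-like histogram: dominant dim background, mid tissue, small bright lesion.
img = np.concatenate([np.full(900, 1.0), np.full(80, 3.0), np.full(20, 5.0)])
t1 = otsu(img)            # → 1.0: dominated by the large background class
t2 = otsu(img[img > t1])  # → 3.0: second stage isolates the bright lesion
```

The first threshold merges the lesion with mid-intensity tissue; restricting the second search to values above it recovers a cut that separates the lesion.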
Multi-sensor image registration based on algebraic projective invariants.
Li, Bin; Wang, Wei; Ye, Hao
2013-04-22
A new automatic feature-based registration algorithm is presented for multi-sensor images with projective deformation. Contours are firstly extracted from both reference and sensed images as basic features in the proposed method. Since it is difficult to design a projective-invariant descriptor from the contour information directly, a new feature named Five Sequential Corners (FSC) is constructed based on the corners detected from the extracted contours. By introducing algebraic projective invariants, we design a descriptor for each FSC that is ensured to be robust against projective deformation. Further, no gray scale related information is required in calculating the descriptor, thus it is also robust against the gray scale discrepancy between the multi-sensor image pairs. Experimental results utilizing real image pairs are presented to show the merits of the proposed registration method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dise, J; Liang, X; Lin, L
Purpose: To evaluate an automatic interstitial catheter digitization algorithm that reduces treatment planning time and provides a means for adaptive re-planning in HDR brachytherapy of gynecologic cancers. Methods: The semi-automatic catheter digitization tool utilizes a region growing algorithm in conjunction with a spline model of the catheters. The CT images were first pre-processed to enhance the contrast between the catheters and soft tissue. Several seed locations were selected in each catheter for the region growing algorithm. The spline model of the catheters assisted the region growing by preventing inter-catheter cross-over caused by air or metal artifacts. Source dwell positions from day one CT scans were applied to subsequent CTs and forward calculated using the automatically digitized catheter positions. This method was applied to 10 patients who had received HDR interstitial brachytherapy on an IRB-approved image-guided radiation therapy protocol. The prescribed dose was 18.75 or 20 Gy delivered in 5 fractions, twice daily, over 3 consecutive days. Dosimetric comparisons were made between automatic and manual digitization on day two CTs. Results: The region growing algorithm, assisted by the spline model of the catheters, was able to digitize all catheters. The difference between automatically and manually digitized positions was 0.8±0.3 mm. The digitization time ranged from 34 to 43 minutes, with a mean of 37 minutes. The bulk of the time was spent on manual selection of initial seed positions and spline parameter adjustments. There was no significant difference in dosimetric parameters between the automatically and manually digitized plans. D90% to the CTV was 91.5±4.4% for the manual digitization versus 91.4±4.4% for the automatic digitization (p=0.56). Conclusion: A region growing algorithm was developed to semi-automatically digitize interstitial catheters in HDR brachytherapy using the Syed-Neblett template.
This automatic digitization tool was shown to be accurate compared to manual digitization.
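The seeded region growing at the core of the digitization tool can be sketched as a breadth-first flood fill with an intensity tolerance. The image and seed below are synthetic, and the sketch omits the spline constraint the actual tool layers on top to prevent inter-catheter cross-over.

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol):
    """Grow a region from `seed`, accepting 4-neighbours within `tol` of the seed value."""
    mask = np.zeros(img.shape, dtype=bool)
    ref = img[seed]
    mask[seed] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                    and not mask[nr, nc] and abs(img[nr, nc] - ref) <= tol):
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask

img = np.zeros((5, 5)); img[1:4, 2] = 100.0   # a bright vertical "catheter" track
print(region_grow(img, (2, 2), tol=5).sum())  # → 3 (only the bright column is grown)
```

In the clinical setting the seeds are picked manually per catheter, which is why seed selection dominated the reported digitization time.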
Memory-Efficient Onboard Rock Segmentation
NASA Technical Reports Server (NTRS)
Burl, Michael C.; Thompson, David R.; Bornstein, Benjamin J.; deGranville, Charles K.
2013-01-01
Rockster-MER is an autonomous perception capability that was uploaded to the Mars Exploration Rover Opportunity in December 2009. This software provides the vision front end for a larger software system known as AEGIS (Autonomous Exploration for Gathering Increased Science), which was recently named 2011 NASA Software of the Year. As the first step in AEGIS, Rockster-MER analyzes an image captured by the rover, and detects and automatically identifies the boundary contours of rocks and regions of outcrop present in the scene. This initial segmentation step reduces the data volume from millions of pixels into hundreds (or fewer) of rock contours. Subsequent stages of AEGIS then prioritize the best rocks according to scientist-defined preferences and take high-resolution, follow-up observations. Rockster-MER has performed robustly from the outset on the Mars surface under challenging conditions. Rockster-MER is a specially adapted, embedded version of the original Rockster algorithm ("Rock Segmentation Through Edge Regrouping," (NPO-44417) Software Tech Briefs, September 2008, p. 25). Although the new version performs the same basic task as the original code, the software has been (1) significantly upgraded to overcome the severe onboard resource limitations (CPU, memory, power, time) and (2) "bulletproofed" through code reviews and extensive testing and profiling to avoid the occurrence of faults. Because of the limited computational power of the RAD6000 flight processor on Opportunity (roughly two orders of magnitude slower than a modern workstation), the algorithm was heavily tuned to improve its speed. Several functional elements of the original algorithm were removed as a result of an extensive cost/benefit analysis conducted on a large set of archived rover images. The algorithm was also required to operate below a stringent 4MB high-water memory ceiling; hence, numerous tricks and strategies were introduced to reduce the memory footprint.
Local filtering operations were re-coded to operate on horizontal data stripes across the image. Data types were reduced to smaller sizes where possible. Binary-valued intermediate results were squeezed into a more compact, one-bit-per-pixel representation through bit packing and bit manipulation macros. An estimated 16-fold reduction in memory footprint relative to the original Rockster algorithm was achieved. The resulting memory footprint is less than four times the base image size. Also, memory allocation calls were modified to draw from a static pool and consolidated to reduce memory management overhead and fragmentation. Rockster-MER has now been run onboard Opportunity numerous times as part of AEGIS with exceptional performance. Sample results are available on the AEGIS website at http://aegis.jpl.nasa.gov.
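The one-bit-per-pixel packing described above is the same idea NumPy exposes as `packbits`/`unpackbits`; a small sketch (the flight code used C bit-manipulation macros, not NumPy):

```python
import numpy as np

mask = (np.arange(16) % 2 == 0).astype(np.uint8)      # 16 binary "pixels": 1,0,1,0,...
packed = np.packbits(mask)                            # 2 bytes instead of 16
restored = np.unpackbits(packed)[:mask.size]          # unpack, trim byte-alignment padding
print(packed.nbytes, np.array_equal(restored, mask))  # → 2 True
```

Packing eight binary pixels per byte gives the 8x saving that, combined with smaller data types, accounts for much of the reported 16-fold footprint reduction.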
NASA Astrophysics Data System (ADS)
Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.
2012-02-01
Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton in an automatic way. The algorithm is based on the analysis of the human shape. We decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. This algorithm has been applied in a context where the user is standing in front of a camera stereo pair. The process is completed after the user assumes a predefined initial posture so as to identify the main joints and construct the human model. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Y; Chen, I; Kashani, R
Purpose: In MRI-guided online adaptive radiation therapy, re-contouring of the bowel is time-consuming and can impact the overall time patients spend on the table. This study aims to auto-segment the bowel on volumetric MR images using an interactive multi-region labeling algorithm. Methods: 5 patients with locally advanced pancreatic cancer underwent fractionated radiotherapy (18–25 fractions each, 118 fractions in total) on an MRI-guided radiation therapy system with a 0.35 Tesla magnet and three Co-60 sources. At each fraction, a volumetric MR image of the patient was acquired with the patient in the treatment position. An interactive two-dimensional multi-region labeling technique based on a graph cut solver was applied to several typical MRI images to segment the large bowel and small bowel, followed by a shape-based contour interpolation to generate entire bowel contours along all image slices. The resulting contours were compared with the physician's manual contouring using the metrics of Dice coefficient and Hausdorff distance. Results: Image data sets from the first 5 fractions of each patient were selected (25 image data sets in total) for the segmentation test. The algorithm segmented the large and small bowel effectively and efficiently. All bowel segments were successfully identified, auto-contoured and matched with manual contours. The time cost by the algorithm for each image slice was within 30 seconds. For the large bowel, the calculated Dice coefficients and Hausdorff distances (mean±std) were 0.77±0.07 and 13.13±5.01 mm, respectively; for the small bowel, the corresponding metrics were 0.73±0.08 and 14.15±4.72 mm, respectively. Conclusion: The preliminary results demonstrate the potential of the proposed algorithm for auto-segmenting the large and small bowel on low-field MRI images in MRI-guided adaptive radiation therapy. Further work will focus on improving its segmentation accuracy and lessening human interaction.
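The Hausdorff distance used above as a contour metric is the worst-case closest-point distance, taken symmetrically in both directions. A sketch on two tiny synthetic point sets:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets (N x 2 arrays)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).max(),   # farthest point of A from B
               d.min(axis=0).max())   # farthest point of B from A

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [4.0, 0.0]])
print(hausdorff(A, B))  # → 3.0 (driven by B's outlier at x=4)
```

Because it reports the single worst disagreement, Hausdorff distance is sensitive to outliers, which is why it is usually paired with an overlap measure such as the Dice coefficient, as in the results above.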
Automatic treatment plan re-optimization for adaptive radiotherapy guided with the initial plan DVHs
NASA Astrophysics Data System (ADS)
Li, Nan; Zarepisheh, Masoud; Uribe-Sanchez, Andres; Moore, Kevin; Tian, Zhen; Zhen, Xin; Jiang Graves, Yan; Gautier, Quentin; Mell, Loren; Zhou, Linghong; Jia, Xun; Jiang, Steve
2013-12-01
Adaptive radiation therapy (ART) can reduce normal tissue toxicity and/or improve tumor control through treatment adaptations based on the current patient anatomy. Developing an efficient and effective re-planning algorithm is an important step toward the clinical realization of ART. For the re-planning process, a manual trial-and-error approach to fine-tune planning parameters is time-consuming and usually considered impractical, especially for online ART. It is desirable to automate this step to yield a plan of acceptable quality with minimal intervention. In ART, prior information from the original plan is available, such as the dose-volume histogram (DVH), which can be employed to facilitate the automatic re-planning process. The goal of this work is to develop an automatic re-planning algorithm to generate a plan with similar, or possibly better, DVH curves compared with the clinically delivered original plan. Specifically, our algorithm iterates the following two loops. The inner loop is the traditional fluence map optimization, in which we optimize a quadratic objective function penalizing the deviation of the dose received by each voxel from its prescribed or threshold dose, with a set of fixed voxel weighting factors. In the outer loop, the voxel weighting factors in the objective function are adjusted according to the deviation of the current DVH curves from those in the original plan. The process is repeated until the DVH curves are acceptable or the maximum number of iterations is reached. The whole algorithm is implemented on a GPU for high efficiency. The feasibility of our algorithm has been demonstrated with three head-and-neck cancer IMRT cases, each having an initial planning CT scan and another treatment CT scan acquired in the middle of the treatment course. Compared with the DVH curves in the original plan, the DVH curves in the plan produced by our algorithm after 30 iterations are better for almost all structures.
The re-optimization process takes about 30 s using our in-house optimization engine. This work was originally presented at the 54th AAPM annual meeting in Charlotte, NC, July 29-August 2, 2012.
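The two-loop structure described above can be sketched numerically. This is a heavily simplified toy, not the authors' clinical implementation: a closed-form weighted least-squares solve stands in for the fluence-map optimization, per-voxel reference doses stand in for the full DVH comparison, and the dose-influence matrix `D`, reference doses `d_ref`, and the multiplicative weight update are all invented choices:

```python
# Toy inner/outer re-planning loop: the inner step solves a weighted quadratic
# dose objective in closed form; the outer step upweights voxels whose dose
# deviates most from the reference plan (a crude stand-in for DVH matching).
import numpy as np

rng = np.random.default_rng(0)
D = rng.random((40, 6))                               # 40 voxels, 6 beamlets
d_ref = np.clip(rng.normal(1.0, 0.2, 40), 0.1, None)  # "original plan" doses

def inner_fmo(w):
    # minimize sum_i w_i * (D x - d_ref)_i^2 via the normal equations
    Dw = D * w[:, None]
    x = np.linalg.solve(D.T @ Dw, Dw.T @ d_ref)
    return D @ x

w = np.ones(40)                     # fixed voxel weighting factors (inner loop)
d = inner_fmo(w)
worst0 = np.abs(d - d_ref).max()
for _ in range(30):                 # outer loop: adjust the weighting factors
    w *= 1.0 + np.abs(d - d_ref)    # emphasize voxels deviating from reference
    w /= w.mean()                   # keep weights well scaled
    d = inner_fmo(w)
worst30 = np.abs(d - d_ref).max()
print(worst30 <= worst0)
```

The re-weighting drives the worst-deviating voxels toward the reference doses, mirroring how the paper's outer loop pushes the current DVH curves toward the original plan's.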
Welding deviation detection algorithm based on extremum of molten pool image contour
NASA Astrophysics Data System (ADS)
Zou, Yong; Jiang, Lipei; Li, Yunhua; Xue, Long; Huang, Junfen; Huang, Jiqiang
2016-01-01
Welding deviation detection is the basis of robotic seam-tracking welding, but on-line real-time measurement of welding deviation is still not well solved by existing methods. Gas metal arc welding (GMAW) molten pool images contain abundant information that is very important for seam-tracking control. By studying the molten pool images, the physical meaning of the curvature extrema of the molten pool contour is revealed: the points carrying deviation information, the welding wire center and the molten tip center, are the global and local maxima of the contour curvature, and the horizontal welding deviation is the positional difference between these two extremum points. A new method of welding deviation detection is presented, comprising preprocessing of the molten pool images, extraction and segmentation of the contours, location of the contour extremum points, and calculation of the welding deviation. Extracting the contours is the premise, segmenting the contour lines is the foundation, and locating the contour extremum points is the key. The contour images are extracted with a discrete dyadic wavelet transform and divided into two sub-contours, the welding wire and the molten tip. The curvature at each point of the two sub-contour lines is calculated with an approximate multi-point curvature formula for plane curves, and the two curvature extremum points provide the features needed for the welding deviation calculation. Tests and analyses show that the maximum error of the obtained on-line welding deviation is 2 pixels (0.16 mm), and the algorithm is stable enough to meet the real-time control requirements of the pipeline at speeds below 500 mm/min. The method can be applied to on-line automatic welding deviation detection.
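The key step above, finding curvature extrema along a contour, can be sketched with a three-point (Menger) curvature estimate in place of the paper's multi-point formula; the contour below is an invented toy shape, not a molten pool image:

```python
# Estimate curvature at each contour point from its two neighbours and locate
# the curvature extremum (here, a sharp tip in a toy polyline contour).
from math import dist

def curvature(p0, p1, p2):
    """Menger curvature of three points: 4*area / (|p0p1| * |p1p2| * |p0p2|).
    cross2 is twice the triangle area (cross-product magnitude)."""
    a, b, c = dist(p0, p1), dist(p1, p2), dist(p0, p2)
    cross2 = abs((p1[0] - p0[0]) * (p2[1] - p0[1])
                 - (p2[0] - p0[0]) * (p1[1] - p0[1]))
    return 2 * cross2 / (a * b * c) if a * b * c else 0.0

# toy contour: flat, a sharp tip at index 5, then flat again
contour = ([(float(x), 0.0) for x in range(5)] + [(5.0, 3.0)]
           + [(float(x), 0.0) for x in range(6, 11)])
ks = [curvature(contour[i - 1], contour[i], contour[i + 1])
      for i in range(1, len(contour) - 1)]
tip = 1 + max(range(len(ks)), key=ks.__getitem__)  # index of curvature maximum
print(tip)  # → 5
```

In the paper's setting the global and local curvature maxima mark the wire center and molten tip center, and the horizontal offset between them gives the deviation.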
On a program manifold's stability of one contour automatic control systems
NASA Astrophysics Data System (ADS)
Zumatov, S. S.
2017-12-01
A methodology for analyzing the stability of single-contour automatic control systems with feedback in the presence of non-linearities is expounded. The methodology is based on the simplest mathematical models of nonlinear controllable systems. The stability of program manifolds of single-contour automatic control systems is investigated, and sufficient conditions for the absolute stability of such program manifolds are obtained. The Hurwitz angle of absolute stability is determined. Sufficient conditions for the absolute stability of the program manifold of an aircraft course-control system in autopilot mode are obtained by means of Lyapunov's second method.
An automated skin segmentation of Breasts in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.
Lee, Chia-Yen; Chang, Tzu-Fang; Chang, Nai-Yun; Chang, Yeun-Chung
2018-04-18
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is used to diagnose breast disease. Obtaining anatomical information from DCE-MRI requires that the skin be removed manually so that blood vessels and tumors can be clearly observed by physicians and radiologists; this demands considerable manpower and time. We develop an automated skin segmentation algorithm in which the surface skin is removed rapidly and correctly. The rough skin area is segmented by the active contour model and then analyzed in segments according to the continuity of the skin thickness to improve accuracy. Blood vessels and mammary glands are retained, which remedies the tendency of active contours to remove some blood vessels. After three-dimensional reconstruction, the DCE-MRIs without the skin reveal internal anatomical information for clinical applications. Dice's coefficients of the 3D reconstructed images for skin removal were 93.2% with the proposed algorithm versus 61.4% with the active contour model alone. Automatic skin segmentation is about 165 times faster than manual segmentation. Texture information at the tumor position with and without the skin was compared using paired t-tests, all yielding p < 0.05, which suggests that the proposed algorithm may enhance the observability of tumors at the 0.05 significance level.
Wang, Jinke; Guo, Haoyan
2016-01-01
This paper presents a fully automatic framework for lung segmentation, in which the juxta-pleural nodule problem is brought into strong focus. The proposed scheme consists of three phases: skin boundary detection, rough segmentation of the lung contour, and pulmonary parenchyma refinement. First, the chest skin boundary is extracted through image alignment, morphological operations, and connected-region analysis. Second, diagonal-based border tracing is implemented for lung contour segmentation, with a maximum-cost-path algorithm used to separate the left and right lungs. Finally, arc-based border smoothing and concavity-based border correction yield the refined pulmonary parenchyma. The scheme is evaluated on 45 volumes of chest scans, with volume difference (VD) 11.15 ± 69.63 cm³, volume overlap error (VOE) 3.5057 ± 1.3719%, average surface distance (ASD) 0.7917 ± 0.2741 mm, root mean square distance (RMSD) 1.6957 ± 0.6568 mm, maximum symmetric absolute surface distance (MSD) 21.3430 ± 8.1743 mm, and an average time cost of 2 seconds per image. These preliminary results on accuracy and complexity show that the scheme is a promising tool for lung segmentation with juxta-pleural nodules.
Derbidge, Renatus; Feiten, Linus; Conradt, Oliver; Heusser, Peter; Baumgartner, Stephan
2013-01-01
Photographs of mistletoe (Viscum album L.) berries taken by a permanently fixed camera during their development in autumn were subjected to an outline shape analysis by fitting path curves using a mathematical algorithm from projective geometry. During growth and maturation the shape of mistletoe berries can be described by a set of such path curves, making it possible to extract changes of shape using a single parameter, Lambda, which describes the outline shape of a path curve. Here we present methods and software to capture and measure these changes of form over time. The paper describes the software used to automate a number of tasks, including contour recognition, optimization of the contour fit via hill-climbing, derivation of the path curves, computation of Lambda, and blinding of the pictures for the operator. The validity of the program is demonstrated by results from three independent measurements showing circadian rhythm in mistletoe berries. The program is available as open source and will be applied in a project to analyze the chronobiology of shape in mistletoe berries and the buds of their host trees. PMID:23565255
Yang, C; Paulson, E; Li, X
2012-06-01
To develop and evaluate a tool that can improve the accuracy of contour transfer between different image modalities under the challenging conditions of low image contrast and large deformation, compared with several commonly used methods, for radiation treatment planning. The software tool comprises the following steps and functionalities: (1) accepting input images of different modalities; (2) converting existing contours on reference images (e.g., MRI) into delineated volumes and adjusting the intensity within the volumes to match the intensity distribution of the target images (e.g., CT), for an enhanced similarity metric; (3) registering reference and target images using appropriate deformable registration algorithms (e.g., B-spline, demons) and generating deformed contours; (4) mapping the deformed volumes onto the target images and calculating the mean, variance, and center of mass as initialization parameters for subsequent fuzzy connectedness (FC) segmentation of the target images; (5) generating an affinity map from the FC segmentation; (6) obtaining final contours by modifying the deformed contours with the affinity map using a gradient distance weighting algorithm. The tool was tested with CT and MR images of four pancreatic cancer patients acquired at the same respiration phase to minimize motion distortion. Dice's coefficient was calculated against direct delineation on the target image. Contours generated by various methods, including rigid transfer, auto-segmentation, deformable-only transfer, and the proposed method, were compared. Fuzzy connectedness segmentation needs careful parameter initialization and user involvement. Automatic contour transfer by multi-modality deformable registration yields up to 10% accuracy improvement over rigid transfer. The two additional proposed steps, adjusting the intensity distribution and modifying the deformed contour with the affinity map, further improve the transfer accuracy to 14% on average.
Deformable image registration aided by contrast adjustment and fuzzy connectedness segmentation improves the contour transfer accuracy between multi-modality images, particularly with large deformation and low image contrast. © 2012 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Kobayashi, Hiroshi; Suzuki, Seiji; Takahashi, Hisanori; Tange, Akira; Kikuchi, Kohki
This study deals with a method to realize automatic contour extraction of facial features such as the eyebrows, eyes, and mouth from time-series frontal face images with various facial expressions. Because Snakes, one of the best-known contour extraction methods, has several disadvantages, we propose a new method to overcome these issues. We define an elastic contour model to hold the contour shape, and determine the elastic energy from the amount of deformation of this model. We also use an image energy obtained from brightness differences at the control points of the elastic contour model. Applying dynamic programming, we determine the contour position at which the total of the elastic energy and the image energy is minimized. Using frontal face images sampled at 1/30 s as they change from neutral to one of the six typical facial expressions, obtained from 20 subjects, we have evaluated our method and found that it enables highly accurate automatic contour extraction of facial features.
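The dynamic-programming minimization described above can be sketched as a standard chain optimization: each control point picks one of several candidate offsets, paying a unary "image energy" plus a pairwise "elastic energy" between neighbours. The candidate energies below are invented numbers, not brightness measurements:

```python
# Dynamic programming over a chain of control points: minimize the sum of
# unary (image) energies and pairwise (elastic) smoothness energies.
def dp_contour(unary, smooth_w=1.0):
    """unary[i][k]: image energy of control point i at candidate offset k.
    Elastic energy penalizes the squared offset difference of neighbours."""
    n, m = len(unary), len(unary[0])
    cost = list(unary[0])
    back = []
    for i in range(1, n):
        prev, cost, step = cost, [], []
        for k in range(m):
            j = min(range(m), key=lambda j: prev[j] + smooth_w * (j - k) ** 2)
            cost.append(prev[j] + smooth_w * (j - k) ** 2 + unary[i][k])
            step.append(j)
        back.append(step)
    k = min(range(m), key=cost.__getitem__)
    path = [k]
    for step in reversed(back):   # backtrack the optimal offsets
        k = step[k]
        path.append(k)
    return path[::-1]

# four control points, three candidate offsets each; point 2 prefers offset 0
unary = [[3, 0, 3], [3, 0, 3], [0, 3, 3], [3, 0, 3]]
print(dp_contour(unary))  # → [1, 1, 0, 1]
```

Raising `smooth_w` makes the elastic term dominate, pulling point 2 toward its neighbours' offsets; that trade-off is exactly what the total-energy minimization balances.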
Zhou, Lu; Zhou, Linghong; Zhang, Shuxu; Zhen, Xin; Yu, Hui; Zhang, Guoqian; Wang, Ruihao
2014-01-01
Deformable image registration (DIR) is widely used in radiation therapy, for example in automatic contour generation, dose accumulation, and tumor growth or regression analysis. To achieve higher registration accuracy and faster convergence, an improved 'diffeomorphic demons' registration algorithm was proposed and validated. Based on Brox et al.'s gradient constancy assumption and Malis's efficient second-order minimization (ESM) algorithm, a grey-value gradient similarity term and a transformation error term were added to the demons energy function, and a formula was derived to calculate the update of the transformation field. The limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm was used to optimize the energy function so that the number of iterations could be determined automatically. The proposed algorithm was validated using mathematically deformed images and physically deformed phantom images. Compared with the original 'diffeomorphic demons' algorithm, the proposed registration method achieves higher precision and faster convergence. Because scanning conditions differ between fractions of fractionated radiotherapy, the density ranges of the treatment image and the planning image may differ; in such cases, the improved demons algorithm can support faster and more accurate radiotherapy.
Automatic contouring of geologic fabric and finite strain data on the unit hyperboloid
NASA Astrophysics Data System (ADS)
Vollmer, Frederick W.
2018-06-01
Fabric and finite strain analysis, an integral part of studies of geologic structures and orogenic belts, is commonly done by the analysis of particles whose shapes can be approximated as ellipses. Given a sample of such particles, the mean and confidence intervals of particular parameters can be calculated; however, taking the extra step of plotting and contouring the density distribution can identify asymmetries or modes related to sedimentary fabrics or other factors. A common graphical strain analysis technique is to plot final ellipse ratios, Rf, versus orientations, φf, on polar Elliott or Rf/φ plots to examine the density distribution. The plot may be contoured; however, it is desirable to have a contouring method that is rapid, reproducible, and based on the underlying geometry of the data. The unit hyperboloid, H2, gives a natural parameter space for two-dimensional strain, and various projections, including equal-area and stereographic, have useful properties for examining density distributions for anisotropy. An index, Ia, is given to quantify the magnitude and direction of anisotropy. Elliott and Rf/φ plots can be understood by applying hyperbolic geometry and recognizing them as projections of H2. Both distort area, however, so the equal-area projection is preferred for examining density distributions. The algorithm presented here gives fast, accurate, and reproducible contours of density distributions calculated directly on H2. The algorithm back-projects the data onto H2, where the density calculation is done at regular nodes using a weighting value based on the hyperboloid distribution, which is then contoured. It is implemented as an Octave-compatible MATLAB function that plots ellipse data using a variety of projections, and calculates and displays contours of their density distribution on H2.
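The node-density calculation on H2 can be sketched numerically. This is a hedged illustration, not the paper's algorithm: it assumes the common Elliott convention of mapping each ellipse to a hyperboloid point at hyperbolic radius ln(Rf) and angle 2φf (the paper's exact parameterization and weighting function may differ), and the Gaussian-like kernel and all data values are invented:

```python
# Density estimation at nodes on the unit hyperboloid H2: map each ellipse
# (Rf, phi_f) to a point on H2, then weight data by hyperbolic distance.
import math

def h2_point(rf, phi):
    """Hyperboloid coordinates (t, x, y): radius ln(Rf), angle 2*phi_f."""
    rho, theta = math.log(rf), 2 * phi
    return (math.cosh(rho),
            math.sinh(rho) * math.cos(theta),
            math.sinh(rho) * math.sin(theta))

def h2_dist(p, q):
    """Hyperbolic distance: cosh d = t1*t2 - x1*x2 - y1*y2 (Minkowski form)."""
    c = p[0] * q[0] - p[1] * q[1] - p[2] * q[2]
    return math.acosh(max(c, 1.0))

def density(node, data, k=10.0):
    """Node density: weights decay with squared hyperbolic distance (toy kernel)."""
    return sum(math.exp(-k * h2_dist(node, d) ** 2) for d in data)

# a tight cluster of ellipse measurements (Rf, phi_f in radians)
data = [h2_point(rf, phi) for rf, phi in [(1.8, 0.10), (2.0, 0.12), (1.9, 0.08)]]
node_near = h2_point(1.9, 0.10)   # grid node inside the cluster
node_far = h2_point(4.0, 1.20)    # grid node far from it
print(density(node_near, data) > density(node_far, data))  # → True
```

Contouring these node densities over a regular grid, then projecting to an equal-area plot, reproduces the workflow the abstract describes.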
Oost, Elco; Koning, Gerhard; Sonka, Milan; Oemrawsingh, Pranobe V; Reiber, Johan H C; Lelieveldt, Boudewijn P F
2006-09-01
This paper describes a new approach to the automated segmentation of X-ray left ventricular (LV) angiograms, based on active appearance models (AAMs) and dynamic programming. A coupling of shape and texture information between the end-diastolic (ED) and end-systolic (ES) frame was achieved by constructing a multiview AAM. Over-constraining of the model was compensated for by employing dynamic programming, integrating both intensity and motion features in the cost function. Two applications are compared: a semi-automatic method with manual model initialization, and a fully automatic algorithm. The first proved to be highly robust and accurate, demonstrating high clinical relevance. Based on experiments involving 70 patient data sets, the algorithm's success rate was 100% for ED and 99% for ES, with average unsigned border positioning errors of 0.68 mm for ED and 1.45 mm for ES. Calculated volumes were accurate and unbiased. The fully automatic algorithm, with intrinsically less user interaction was less robust, but showed a high potential, mostly due to a controlled gradient descent in updating the model parameters. The success rate of the fully automatic method was 91% for ED and 83% for ES, with average unsigned border positioning errors of 0.79 mm for ED and 1.55 mm for ES.
Contour detection improved by context-adaptive surround suppression.
Sang, Qiang; Cai, Biao; Chen, Hao
2017-01-01
Recently, many image processing applications have taken advantage of a psychophysical and neurophysiological mechanism, called "surround suppression" to extract object contour from a natural scene. However, these traditional methods often adopt a single suppression model and a fixed input parameter called "inhibition level", which needs to be manually specified. To overcome these drawbacks, we propose a novel model, called "context-adaptive surround suppression", which can automatically control the effect of surround suppression according to image local contextual features measured by a surface estimator based on a local linear kernel. Moreover, a dynamic suppression method and its stopping mechanism are introduced to avoid manual intervention. The proposed algorithm is demonstrated and validated by a broad range of experimental results.
Eyeglasses Lens Contour Extraction from Facial Images Using an Efficient Shape Description
Borza, Diana; Darabant, Adrian Sergiu; Danescu, Radu
2013-01-01
This paper presents a system that automatically extracts the position of the eyeglasses and the accurate shape and size of the frame lenses in facial images. The novelty brought by this paper consists in three key contributions. The first one is an original model for representing the shape of the eyeglasses lens, using Fourier descriptors. The second one is a method for generating the search space starting from a finite, relatively small number of representative lens shapes based on Fourier morphing. Finally, we propose an accurate lens contour extraction algorithm using a multi-stage Monte Carlo sampling technique. Multiple experiments demonstrate the effectiveness of our approach. PMID:24152926
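The Fourier-descriptor shape representation named above can be illustrated with a toy closed contour: sample it as complex numbers, take the DFT coefficients as descriptors, and reconstruct from a truncated low-frequency set. The contour and truncation level are invented, not the paper's lens model:

```python
# Fourier descriptors of a closed contour: DFT of complex boundary samples,
# reconstruction from low-frequency coefficients only.
import cmath, math

def dft(z):
    n = len(z)
    return [sum(z[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) / n
            for k in range(n)]

def reconstruct(coeffs, keep):
    n = len(coeffs)
    kept = {k: c for k, c in enumerate(coeffs)
            if min(k, n - k) <= keep}          # low frequencies only
    return [sum(c * cmath.exp(2j * math.pi * k * t / n)
                for k, c in kept.items())
            for t in range(n)]

# ellipse-like "lens" contour sampled at 32 points
n = 32
lens = [complex(2 * math.cos(2 * math.pi * t / n),
                math.sin(2 * math.pi * t / n)) for t in range(n)]
desc = dft(lens)
approx = reconstruct(desc, keep=2)
err = max(abs(a - b) for a, b in zip(lens, approx))
print(err < 1e-9)  # → True (an ellipse needs only frequencies ±1)
```

Truncating to a handful of descriptors is what makes a small search space of representative shapes, and morphing between descriptor sets generates candidate contours.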
Characterization of Intraventricular and Intracerebral Hematomas in Non-Contrast CT
Nowinski, Wieslaw L; Gomolka, Ryszard S; Qian, Guoyu; Gupta, Varsha; Ullman, Natalie L; Hanley, Daniel F
2014-01-01
Characterization of hematomas is essential in scan reading, manual delineation, and the design of automatic segmentation algorithms. Our purpose is to characterize the distribution of intraventricular (IVH) and intracerebral hematomas (ICH) in NCCT scans, study their relationship to gray matter (GM), and introduce a new tool for quantitative hematoma delineation. We used 289 serial retrospective scans of 51 patients. Hematomas were manually delineated in a two-stage process: contours generated in the first stage were quantified and enhanced in the second stage. Delineation was based on new quantitative rules and hematoma profiling, assisted by a dedicated tool that superimposes quantitative information on the scans with a 3D hematoma display. The tool provides: density maps (40-85 HU), contrast maps (8/15 HU), mean horizontal/vertical contrasts for hematoma contours, and hematoma contours below a specified mean contrast (8 HU). White matter (WM) and GM were segmented automatically. IVH/ICH on serial NCCT is characterized by a 59.0 HU mean, 60.0 HU median, 11.6 HU standard deviation, 23.9 HU mean contrast, -0.99 HU/day slope, and -0.24 skewness (changing over time from negative to positive). Its 0.1st-99.9th percentile range corresponds to 25-88 HU. WM and GM are highly correlated (R² = 0.88; p < 10⁻¹⁰) whereas the GM-GS correlation is weak (R² = 0.14; p < 10⁻¹⁰). The intersection point of the mean GM-hematoma density distributions is at 55.6 ± 5.8 HU, with corresponding GM/hematoma percentiles of 88th/40th. Objective characterization of IVH/ICH and quantitatively stated rules will help raters delineate hematomas more robustly and facilitate the design of algorithms for automatic hematoma segmentation. Our two-stage process is general and potentially applicable to delineating other pathologies on various modalities more robustly and quantitatively. PMID:24976197
Automatic choroid cells segmentation and counting in fluorescence microscopic image
NASA Astrophysics Data System (ADS)
Fei, Jianjun; Zhu, Weifang; Shi, Fei; Xiang, Dehui; Lin, Xiao; Yang, Lei; Chen, Xinjian
2016-03-01
In this paper, we propose a method to automatically segment and count rhesus choroid-retinal vascular endothelial cells (RF/6A) in fluorescence microscopy images, based on shape classification, bottleneck detection, and an accelerated Dijkstra algorithm. The proposed method includes four main steps. First, a thresholding filter and morphological operations are applied to reduce noise. Second, a shape classifier decides whether a connected component needs to be segmented; in this step, an AdaBoost classifier is applied with a set of shape features. Third, bottleneck positions are found from the contours of the connected components. Finally, cell segmentation and counting are completed with the accelerated Dijkstra algorithm using gradient information between the bottleneck positions. The results show the feasibility and efficiency of the proposed method.
Automatic extraction of planetary image features
NASA Technical Reports Server (NTRS)
LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)
2013-01-01
A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, a watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features, such as, small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as close contours in the gradient to be segmented.
A compressed sensing method with analytical results for lidar feature classification
NASA Astrophysics Data System (ADS)
Allen, Josef D.; Yuan, Jiangbo; Liu, Xiuwen; Rahmes, Mark
2011-04-01
We present an innovative way to autonomously classify LiDAR points into bare earth, building, vegetation, and other categories. One desirable product of LiDAR data is the automatic classification of the points in the scene. Our algorithm automatically classifies scene points using compressed sensing via Orthogonal Matching Pursuit, together with a generalized K-Means clustering algorithm, to extract buildings and foliage from a Digital Surface Model (DSM). This technology reduces manual editing while being cost-effective for large-scale automated global scene modeling. Quantitative analyses are provided using Receiver Operating Characteristic (ROC) curves to show the probability of detection and false alarm for building versus vegetation classification. Histograms are shown with sample-size metrics. Our inpainting algorithms then fill the voids where buildings and vegetation were removed, utilizing Computational Fluid Dynamics (CFD) techniques and Partial Differential Equations (PDE) to create an accurate Digital Terrain Model (DTM) [6]. Inpainting preserves building height contour consistency and the edge sharpness of identified inpainted regions. Qualitative results illustrate other benefits, such as terrain inpainting's unique ability to minimize or eliminate undesirable terrain data artifacts.
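The Orthogonal Matching Pursuit step named above can be sketched as the standard greedy routine: repeatedly pick the dictionary atom most correlated with the residual, then re-fit all selected coefficients by least squares. The dictionary `A` and signal `y` below are invented toy data, not the authors' LiDAR features:

```python
# Orthogonal Matching Pursuit: greedy sparse approximation of y over the
# columns (atoms) of A, re-fitting the selected coefficients each iteration.
import numpy as np

def omp(A, y, k):
    """Select k atoms greedily; return the sparse coefficient vector."""
    support, r = [], y.copy()
    x = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))  # most correlated atom
        x, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x                         # orthogonal residual
    coef = np.zeros(A.shape[1])
    coef[support] = x
    return coef

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
A /= np.linalg.norm(A, axis=0)      # unit-norm atoms
truth = np.zeros(40)
truth[[3, 17]] = [2.0, -1.5]        # 2-sparse ground truth
y = A @ truth
est = omp(A, y, 2)
```

In a classification pipeline of the kind the abstract describes, such sparse codes over class-specific dictionaries serve as the features fed to the clustering stage.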
Gatos, Ilias; Tsantis, Stavros; Spiliopoulos, Stavros; Skouroliakou, Aikaterini; Theotokas, Ioannis; Zoumpoulis, Pavlos; Hazle, John D; Kagadis, George C
2015-07-01
To detect and classify focal liver lesions (FLLs) from contrast-enhanced ultrasound (CEUS) imaging by means of an automated quantification algorithm. The proposed algorithm employs a sophisticated segmentation method to detect and contour focal lesions in 52 CEUS video sequences (30 benign and 22 malignant). Lesion detection uses wavelet transform zero crossings as an initialization step for a Markov random field model that extracts the lesion contour. After FLL detection across frames, a time-intensity curve (TIC) is computed, which captures the contrast agent's behavior relative to adjacent parenchyma at all vascular phases for each patient. From each TIC, eight features were automatically calculated and fed into a support vector machine (SVM) classifier in the design of the image analysis model. With regard to FLL detection accuracy, detected lesions had an average overlap of 0.89 ± 0.16 with manual segmentations across all CEUS frame subsets included in the study. The highest classification accuracy of the SVM model was 90.3%, misdiagnosing three benign and two malignant FLLs, with sensitivity and specificity of 93.1% and 86.9%, respectively. The proposed quantification system, which combines FLL detection and classification, may be of value to physicians as a second-opinion tool for avoiding unnecessary invasive procedures.
Automatic pre-processing for an object-oriented distributed hydrological model using GRASS-GIS
NASA Astrophysics Data System (ADS)
Sanzana, P.; Jankowfsky, S.; Branger, F.; Braud, I.; Vargas, X.; Hitschfeld, N.
2012-04-01
Landscapes are very heterogeneous, which impacts the hydrological processes occurring in catchments, especially in the modeling of peri-urban catchments. Hydrological Response Units (HRUs), resulting from the intersection of different maps, such as land use, soil types, geology, and flow networks, allow these elements to be represented explicitly, preserving the natural and artificial contours of the different layers. These HRUs are used as the model mesh in some distributed object-oriented hydrological models, allowing a topologically oriented approach. The connectivity between polygons and polylines provides a detailed representation of the water balance and overland flow in these distributed hydrological models, based on irregular hydro-landscape units. When computing fluxes between HRUs, geometrical parameters, such as the distance between an HRU's centroid and the river network, and the length of the perimeter, can affect the realism of the calculated overland, sub-surface, and groundwater fluxes. It is therefore necessary to process the original model mesh to avoid these numerical problems. We present an automatic pre-processing tool implemented in the open-source GRASS-GIS software, built from several Python scripts and existing algorithms such as the Triangle software. First, scripts were developed to improve the topology of the various elements, for example snapping the river network to the closest contours. Where data are derived from remote sensing, such as vegetation areas, perimeters contain many right angles, which were smoothed. Second, the algorithms address badly shaped elements of the model mesh, such as polygons with narrow shapes, markedly irregular contours, and/or centroids outside the polygons. To identify these elements we used shape descriptors.
The convexity index was considered the best descriptor for identifying them, with a threshold of 0.75. Segmentation procedures were implemented and applied with criteria of homogeneous slope, convexity of the elements, and maximum HRU area. These tasks were implemented using a triangulation approach, applying the Triangle software, to dissolve polygons according to the convexity index criterion. The automatic pre-processing was applied to two peri-urban French catchments, the Mercier and Chaudanne, of 7.3 km² and 4.1 km², respectively. We show that the optimized mesh allows a substantial improvement of the overland flow pathways, because the segmentation procedure gives a more realistic representation of the drainage network. KEYWORDS: GRASS-GIS, Hydrological Response Units, Automatic processing, Peri-urban catchments, Geometrical Algorithms
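The convexity-index descriptor used above can be sketched as polygon area divided by convex-hull area, flagging HRUs below the quoted 0.75 threshold; the L-shaped polygon below is an invented example, not one of the catchment HRUs:

```python
# Convexity index of a polygon: shoelace area / convex-hull area.
# Concave (badly shaped) polygons score well below 1.

def area(poly):
    """Shoelace area of a simple polygon (vertices in order)."""
    return abs(sum(x0 * y1 - x1 * y0 for (x0, y0), (x1, y1)
                   in zip(poly, poly[1:] + poly[:1]))) / 2

def hull(points):
    """Andrew's monotone-chain convex hull."""
    pts = sorted(set(points))
    def half(pts):
        h = []
        for p in pts:
            while len(h) > 1 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                                  - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return half(pts) + half(pts[::-1])

def convexity_index(poly):
    return area(poly) / area(hull(poly))

# L-shaped HRU polygon: concave, so its index falls below the 0.75 threshold
hru = [(0, 0), (4, 0), (4, 1), (1, 1), (1, 4), (0, 4)]
ci = convexity_index(hru)
print(round(ci, 3), ci < 0.75)  # → 0.609 True
```

A convex polygon scores exactly 1; narrow or markedly irregular HRUs score low and are passed to the triangulation-based segmentation step.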
Qi, Xin; Xing, Fuyong; Foran, David J.; Yang, Lin
2013-01-01
Background: Automated analysis of imaged histopathology specimens could provide support for improved reliability of detection and classification in a range of investigative and clinical cancer applications. Automated segmentation of cells in digitized tissue microarrays (TMAs) is often the prerequisite for quantitative analysis; however, overlapping cells pose significant challenges for traditional segmentation algorithms. Objectives: In this paper, we propose a novel automatic algorithm to separate overlapping cells in stained histology specimens acquired using bright-field RGB imaging. Methods: It starts by systematically identifying salient regions of interest throughout the image based on their underlying visual content. The segmentation algorithm then performs a quick, voting-based seed detection. Finally, the contour of each cell is obtained using a repulsive level-set deformable model initialized with the seeds generated in the previous step. We compared the experimental results with the most current literature, and measured the pixel-wise accuracy between human experts' annotations and those generated by the automatic segmentation algorithm. Results: The method was tested on 100 image patches containing more than 1000 overlapping cells. The overall precision and recall of the developed algorithm are 90% and 78%, respectively. We also implemented the algorithm on a GPU; the parallel implementation is 22 times faster than its sequential C/C++ implementation. Conclusion: The proposed algorithm can accurately detect the center of each overlapping cell and effectively separate overlapping cells, and the GPU proves to be an efficient parallel platform for overlapping cell segmentation. PMID:22526139
NASA Astrophysics Data System (ADS)
Rüther, Heinz; Martine, Hagai M.; Mtalo, E. G.
This paper presents a novel approach to semiautomatic building extraction in informal settlement areas from aerial photographs. The proposed approach uses a strategy of delineating buildings by optimising their approximate building contour position. Approximate building contours are derived automatically by locating elevation blobs in digital surface models. Building extraction is then effected by means of the snakes algorithm and the dynamic programming optimisation technique. With dynamic programming, the building contour optimisation problem is realized through a discrete multistage process and solved by the "time-delayed" algorithm, as developed in this work. The proposed building extraction approach is a semiautomatic process, with user-controlled operations linking fully automated subprocesses. Inputs into the proposed building extraction system are ortho-images and digital surface models, the latter being generated through image matching techniques. Buildings are modeled as "lumps" or elevation blobs in digital surface models, which are derived by altimetric thresholding of digital surface models. Initial windows for building extraction are provided by projecting the elevation blobs centre points onto an ortho-image. In the next step, approximate building contours are extracted from the ortho-image by region growing constrained by edges. Approximate building contours thus derived are inputs into the dynamic programming optimisation process in which final building contours are established. The proposed system is tested on two study areas: Marconi Beam in Cape Town, South Africa, and Manzese in Dar es Salaam, Tanzania. Sixty percent of buildings in the study areas have been extracted and verified and it is concluded that the proposed approach contributes meaningfully to the extraction of buildings in moderately complex and crowded informal settlement areas.
Shimol, Eli Ben; Joskowicz, Leo; Eliahou, Ruth; Shoshan, Yigal
2018-02-01
Stereotactic radiosurgery (SRS) is a common treatment for intracranial meningiomas. SRS is planned on a pre-therapy gadolinium-enhanced T1-weighted MRI scan (Gd-T1w MRI) in which the meningioma contours have been delineated. Post-SRS therapy serial Gd-T1w MRI scans are then acquired for longitudinal treatment evaluation. Accurate tumor volume change quantification is required for treatment efficacy evaluation and for treatment continuation. We present a new algorithm for the automatic segmentation and volumetric assessment of meningioma in post-therapy Gd-T1w MRI scans. The inputs are the pre- and post-therapy Gd-T1w MRI scans and the meningioma delineation in the pre-therapy scan. The output is the meningioma delineations and volumes in the post-therapy scan. The algorithm uses the pre-therapy scan and its meningioma delineation to initialize an extended Chan-Vese active contour method and as a strong patient-specific intensity and shape prior for the post-therapy scan meningioma segmentation. The algorithm is automatic, obviates the need for independent tumor localization and segmentation initialization, and incorporates the same tumor delineation criteria in both the pre- and post-therapy scans. Our experimental results on retrospective pre- and post-therapy scans with a total of 32 meningiomas, with volumes ranging from 0.4 to 26.5 cm³, yield a Dice coefficient of [Formula: see text]% with respect to ground-truth delineations in post-therapy scans created by two clinicians. These results indicate a high correspondence to the ground-truth delineations. Our algorithm yields more reliable and accurate tumor volume change measurements than other stand-alone segmentation methods. It may be a useful tool for quantitative meningioma prognosis evaluation after SRS.
Hautvast, Gilion L T F; Salton, Carol J; Chuang, Michael L; Breeuwer, Marcel; O'Donnell, Christopher J; Manning, Warren J
2012-05-01
Quantitative analysis of short-axis functional cardiac magnetic resonance images can be performed using automatic contour detection methods. The resulting myocardial contours must be reviewed and possibly corrected, which can be time-consuming, particularly when performed across all cardiac phases. We quantified the impact of manual contour corrections on both analysis time and quantitative measurements obtained from left ventricular short-axis cine images acquired from 1555 participants of the Framingham Heart Study Offspring cohort using computer-aided contour detection methods. The total analysis time for a single case was 7.6 ± 1.7 min for an average of 221 ± 36 myocardial contours per participant. This included 4.8 ± 1.6 min for manual correction of 2% of all automatically detected endocardial contours and 8% of all automatically detected epicardial contours. However, the impact of these corrections on global left ventricular parameters was limited, introducing differences of 0.4 ± 4.1 mL for end-diastolic volume, -0.3 ± 2.9 mL for end-systolic volume, 0.7 ± 3.1 mL for stroke volume, and 0.3 ± 1.8% for ejection fraction. We conclude that left ventricular functional parameters can be obtained in under 5 min from short-axis functional cardiac magnetic resonance images using automatic contour detection methods. Manual correction more than doubles the analysis time, with minimal impact on left ventricular volumes and ejection fraction. Copyright © 2011 Wiley Periodicals, Inc.
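The global parameters reported above are simple functions of the segmented volumes, which is why millilitre-scale contour corrections barely move them. A minimal sketch of the arithmetic (illustrative values only):

```python
def lv_parameters(edv_ml, esv_ml):
    """Stroke volume (mL) and ejection fraction (%) from end-diastolic
    and end-systolic left ventricular volumes, the global parameters
    whose sensitivity to contour corrections the study quantifies."""
    sv = edv_ml - esv_ml          # stroke volume
    ef = 100.0 * sv / edv_ml      # ejection fraction
    return sv, ef

sv, ef = lv_parameters(160.0, 40.0)   # → (120.0, 75.0)
```

A 0.4 mL error on a ~160 mL end-diastolic volume perturbs the ejection fraction by well under one percentage point, consistent with the abstract's 0.3 ± 1.8% figure.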
Deeley, MA; Chen, A; Datteri, R; Noble, J; Cmelak, A; Donnelly, EF; Malcolm, A; Moretti, L; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Dawant, BM
2013-01-01
Image segmentation has become a vital and often rate-limiting step in modern radiotherapy treatment planning. In recent years, the pace and scope of algorithm development, and even introduction into the clinic, have far exceeded evaluative studies. In this work we build upon our previous evaluation of a registration-driven segmentation algorithm in the context of 8 expert raters and 20 patients who underwent radiotherapy for large space-occupying tumors in the brain. We tested four hypotheses concerning the impact of manual segmentation editing in a randomized single-blinded study. We tested these hypotheses on the normal structures of the brainstem, optic chiasm, eyes, and optic nerves using the Dice similarity coefficient, volume, and signed Euclidean distance error to evaluate the impact of editing on inter-rater variance and accuracy. Accuracy analyses relied on two simulated ground truth estimation methods: STAPLE and a novel implementation of probability maps. The experts were presented with automatic, their own, and their peers' segmentations from our previous study to edit. We found that, independent of source, editing reduced inter-rater variance while maintaining or improving accuracy, and improved efficiency with at least a 60% reduction in contouring time. In areas where raters performed poorly when contouring from scratch, editing of the automatic segmentations reduced the prevalence of total anatomical miss from approximately 16% to 8% of the total slices contained within the ground truth estimations. These findings suggest that contour editing could be useful for consensus building, such as in developing delineation standards, and that automated methods, and perhaps even less sophisticated atlases, could improve efficiency and accuracy and reduce inter-rater variance. PMID:23685866
Automatic vasculature identification in coronary angiograms by adaptive geometrical tracking.
Xiao, Ruoxiu; Yang, Jian; Goyal, Mahima; Liu, Yue; Wang, Yongtian
2013-01-01
Owing to the uneven distribution of contrast agent and the perspective projection of X-ray imaging, the vasculature in angiographic images has low contrast and is generally superimposed on other organic tissues; it is therefore very difficult to identify the vasculature and quantitatively estimate blood flow directly from angiographic images. In this paper, we propose a fully automatic algorithm named adaptive geometrical vessel tracking (AGVT) for coronary artery identification in X-ray angiograms. Initially, a ridge enhancement (RE) image is obtained utilizing multiscale Hessian information. Then, automatic initialization procedures, including seed point detection and initial direction determination, are performed on the RE image. The extracted ridge points are adjusted to the geometrical centerline points adaptively through diameter estimation. Bifurcations are identified by discriminating the connecting relationships of the tracked ridge points. Finally, all the tracked centerlines are merged and smoothed by classifying the connecting components on the vascular structures. Synthetic angiographic images and clinical angiograms are used to evaluate the performance of the proposed algorithm, which is compared with two other vascular tracking techniques in terms of efficiency and accuracy, demonstrating successful application of the proposed segmentation and extraction scheme to vasculature identification.
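Multiscale Hessian ridge enhancement is a standard ingredient here, in the spirit of Frangi's vesselness filter. A single-scale 2D sketch follows; the constants beta and c are illustrative assumptions, not the authors' values, and a multiscale version would take the maximum response over several sigmas:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ridge_enhance(image, sigma=2.0, beta=0.5, c=0.1):
    """Single-scale ridge (vesselness) measure from Hessian eigenvalues."""
    s2 = sigma ** 2                                  # gamma-normalization
    Hxx = s2 * gaussian_filter(image, sigma, order=(0, 2))
    Hyy = s2 * gaussian_filter(image, sigma, order=(2, 0))
    Hxy = s2 * gaussian_filter(image, sigma, order=(1, 1))
    tmp = np.sqrt((Hxx - Hyy) ** 2 + 4.0 * Hxy ** 2)
    l1 = 0.5 * (Hxx + Hyy + tmp)                     # Hessian eigenvalues
    l2 = 0.5 * (Hxx + Hyy - tmp)
    swap = np.abs(l1) > np.abs(l2)                   # order so |l1| <= |l2|
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)           # blob-vs-line ratio
    s = np.sqrt(l1 ** 2 + l2 ** 2)                   # structureness
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1.0 - np.exp(-s ** 2 / (2 * c ** 2)))
    v[l2 > 0] = 0.0                                  # keep bright ridges only
    return v

# A bright line responds far more strongly than flat background.
img = np.zeros((40, 40)); img[20, :] = 1.0
v = ridge_enhance(img)
```

For dark vessels on a bright background (as in X-ray angiography), the sign test on l2 would simply be inverted.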
Automatic Contour Tracking in Ultrasound Images
ERIC Educational Resources Information Center
Li, Min; Kambhamettu, Chandra; Stone, Maureen
2005-01-01
In this paper, a new automatic contour tracking system, EdgeTrak, for ultrasound image sequences of the human tongue is presented. The images are produced by a head and transducer support system (HATS). The noise and unrelated high-contrast edges in ultrasound images make it very difficult to automatically detect the correct tongue surfaces. In…
A Novel Method for Reconstructing Broken Contour Lines Extracted from Scanned Topographic Maps
NASA Astrophysics Data System (ADS)
Wang, Feng; Liu, Pingzhi; Yang, Yun; Wei, Haiping; An, Xiaoya
2018-05-01
It is known that after segmentation and morphological operations on scanned topographic maps, gaps occur in the contour lines. It is also well known that filling these gaps and reconstructing the contour lines with high accuracy and completeness is not an easy problem. In this paper, a novel method is proposed for automatically or semiautomatically filling the gaps and reconstructing broken contour lines in binary images. After introducing the reconstruction procedure, the key step of end-point auto-matching and reconnection is discussed in depth, and several key algorithms and mechanisms are presented and realized, including multiple incremental back-tracing to obtain the weighted average direction angle at an end point, a maximum-constraint-angle control mechanism based on multiple gradient ranks, the combination of weighted Euclidean distance and deviation angle to determine the optimum matching end point, and bidirectional parabola control. Finally, experimental comparisons on typical samples between the proposed method and another representative method indicate that the former achieves higher accuracy and completeness, and better stability and applicability.
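The matching criterion, a weighted Euclidean distance plus a deviation angle under a maximum-angle gate, can be sketched as follows; the weights and the 60° gate are illustrative assumptions, not the paper's tuned values:

```python
import math

def match_cost(p, q, dir_p, w_dist=1.0, w_angle=0.5, max_angle=60.0):
    """Cost of reconnecting contour end point p to candidate end point q.
    dir_p is the contour's direction angle at p in degrees; candidates
    deviating more than max_angle from it are rejected outright."""
    d = math.hypot(q[0] - p[0], q[1] - p[1])           # Euclidean distance
    ang = math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))
    dev = abs((ang - dir_p + 180.0) % 360.0 - 180.0)   # deviation in [0, 180]
    if dev > max_angle:                                # max-constraint-angle gate
        return math.inf
    return w_dist * d + w_angle * dev

# A candidate straight ahead beats an equally distant one 45 degrees off.
straight = match_cost((0, 0), (10, 0), dir_p=0.0)
askew = match_cost((0, 0), (7, 7), dir_p=0.0)
```

The optimum matching end point is then simply the candidate with minimal finite cost.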
Automatic metastatic brain tumor segmentation for stereotactic radiosurgery applications.
Liu, Yan; Stojadinovic, Strahinja; Hrycushko, Brian; Wardak, Zabi; Lu, Weiguo; Yan, Yulong; Jiang, Steve B; Timmerman, Robert; Abdulrahman, Ramzi; Nedzi, Lucien; Gu, Xuejun
2016-12-21
The objective of this study is to develop an automatic segmentation strategy for efficient and accurate metastatic brain tumor delineation on contrast-enhanced T1-weighted (T1c) magnetic resonance images (MRI) for stereotactic radiosurgery (SRS) applications. The proposed four-step automatic brain metastases segmentation strategy comprises pre-processing, initial contouring, contour evolution, and contour triage. First, T1c brain images are preprocessed to remove the skull. Second, an initial tumor contour is created using a multi-scaled adaptive threshold-based bounding box and a super-voxel clustering technique. Third, the initial contours are evolved to the tumor boundary using a regional active contour technique. Fourth, all detected false-positive contours are removed with geometric characterization. The segmentation process was validated on a realistic virtual phantom containing Gaussian or Rician noise; for each type of noise distribution, five different noise levels were tested. Twenty-one cases from the multimodal brain tumor image segmentation (BRATS) challenge dataset and fifteen clinical metastases cases were also included in the validation. Segmentation performance was quantified by the Dice coefficient (DC), normalized mutual information (NMI), structural similarity (SSIM), Hausdorff distance (HD), mean surface-to-surface distance (MSSD), and standard deviation of surface-to-surface distance (SDSSD). In the numerical phantom study, the evaluation yielded a DC of 0.98 ± 0.01, an NMI of 0.97 ± 0.01, an SSIM of 0.999 ± 0.001, an HD of 2.2 ± 0.8 mm, an MSSD of 0.1 ± 0.1 mm, and an SDSSD of 0.3 ± 0.1 mm. The validation on the BRATS data resulted in a DC of 0.89 ± 0.08, which outperforms the BRATS challenge algorithms. Evaluation on clinical datasets gave a DC of 0.86 ± 0.09, an NMI of 0.80 ± 0.11, an SSIM of 0.999 ± 0.001, an HD of 8.8 ± 12.6 mm, an MSSD of 1.5 ± 3.2 mm, and an SDSSD of 1.8 ± 3.4 mm when compared with the physician-drawn ground truth. The results indicate that the developed automatic segmentation strategy yields accurate brain tumor delineation and could serve as a useful clinical tool for SRS applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, J; Ates, O; Li, X
Purpose: To develop a tool that can quickly and automatically assess the quality of contours generated by auto-segmentation during online adaptive replanning. Methods: Because of the strict time requirements of online replanning and the lack of 'ground truth' contours in daily images, our method starts by assessing image registration accuracy, focusing on the surface of the organ in question. Several metrics tightly related to registration accuracy, including Jacobian maps, contour-shell deformation, and voxel-based root mean square (RMS) analysis, were computed. To identify correct contours, additional metrics and an adaptive decision tree are introduced. As a proof of principle, tests were performed with CT sets: planning CTs and daily CTs acquired using a CT-on-rails during routine CT-guided RT delivery for 20 prostate cancer patients. The contours generated on the daily CTs using an auto-segmentation tool (ADMIRE, Elekta, MIM), based on deformable image registration between the planning CT and daily CT, were tested. Results: The deformed contours of the 20 patients, with a total of 60 structures, were manually checked as baselines; 49% of the contours were incorrect. To evaluate the quality of local deformation, the Jacobian determinant (1.047±0.045) on the contours was analyzed. In an analysis of the deformed rectum contour shell, a higher error-contour detection rate (0.41) was obtained, compared with 0.32 for the manual check. All automated detections took less than 5 seconds. Conclusion: The proposed method can effectively detect contour errors at both micro and macro scales by evaluating multiple deformable registration metrics in a parallel computing process. Future work will focus on improving practicability and optimizing the calculation algorithms and metric selection.
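Of the metrics listed, the Jacobian determinant of the deformation field is straightforward to compute directly. A minimal 2D sketch, assuming a displacement field in voxel units (an illustration of the metric, not the authors' implementation):

```python
import numpy as np

def jacobian_det(dvf):
    """Voxel-wise Jacobian determinant of a 2D displacement field.

    dvf has shape (2, H, W): dvf[0] is the y-displacement and dvf[1] the
    x-displacement. The deformation is phi(p) = p + dvf(p), so det J = 1
    everywhere for the identity; values far from 1 flag implausible
    local expansion or contraction near a contour surface."""
    dy_dy, dy_dx = np.gradient(dvf[0])
    dx_dy, dx_dx = np.gradient(dvf[1])
    return (1.0 + dy_dy) * (1.0 + dx_dx) - dy_dx * dx_dy

# Zero displacement: the determinant is exactly 1 everywhere.
zero_det = jacobian_det(np.zeros((2, 8, 8)))
```

A uniform 10% dilation would give det J = 1.1² = 1.21 at every voxel, close to the 1.047 ± 0.045 range the abstract reports for plausible organ deformation.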
Thapaliya, Kiran; Pyun, Jae-Young; Park, Chun-Su; Kwon, Goo-Rak
2013-01-01
The level set approach is a powerful tool for segmenting images. This paper proposes a method for segmenting brain tumors in MR images. A new signed pressure function (SPF) that can efficiently stop the contours at weak or blurred edges is introduced. The local statistics of the different objects present in the MR images were calculated and used to identify the tumor among the different objects. In level set methods, the calculation of the parameters is a challenging task; here, the parameters for different types of images were calculated automatically. A basic thresholding value was updated and adjusted automatically for each MR image and used to calculate the different parameters of the proposed algorithm. The proposed algorithm was tested on magnetic resonance images of the brain for tumor segmentation, and its performance was evaluated visually and quantitatively. Numerical experiments on brain tumor images highlighted the efficiency and robustness of this method. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
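A generic signed-pressure-function level-set evolution, after Zhang et al.'s SBGFRLS formulation, can serve as a stand-in for the paper's locally adapted SPF; the parameter values below are illustrative, not the authors':

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spf_evolve(image, phi, alpha=20.0, iters=50, dt=0.1):
    """Minimal SPF level-set evolution: the signed pressure shrinks the
    contour outside the object and expands it inside."""
    for _ in range(iters):
        fg, bg = phi > 0, phi <= 0
        c1 = image[fg].mean() if fg.any() else 0.0   # mean inside contour
        c2 = image[bg].mean() if bg.any() else 0.0   # mean outside
        spf = image - (c1 + c2) / 2.0                # + inside object, - outside
        spf = spf / (np.abs(spf).max() + 1e-12)
        phi = phi + dt * alpha * spf                 # balloon-like update
        phi = gaussian_filter(phi, 1.0)              # smoothing replaces re-init
    return phi

# Grow a small initial circle onto a bright square "tumor".
img = np.zeros((40, 40)); img[10:30, 10:30] = 1.0
yy, xx = np.mgrid[0:40, 0:40]
phi = spf_evolve(img, np.where((yy - 20) ** 2 + (xx - 20) ** 2 < 16, 1.0, -1.0))
```

The paper's contribution would enter through local statistics and an automatically adjusted threshold in place of the global means c1 and c2 used here.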
Fully automatic registration and segmentation of first-pass myocardial perfusion MR image sequences.
Gupta, Vikas; Hendriks, Emile A; Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F
2010-11-01
Derivation of diagnostically relevant parameters from first-pass myocardial perfusion magnetic resonance images involves the tedious and time-consuming manual segmentation of the myocardium in a large number of images. To reduce the manual interaction and expedite the perfusion analysis, we propose an automatic registration and segmentation method for the derivation of perfusion linked parameters. A complete automation was accomplished by first registering misaligned images using a method based on independent component analysis, and then using the registered data to automatically segment the myocardium with active appearance models. We used 18 perfusion studies (100 images per study) for validation in which the automatically obtained (AO) contours were compared with expert drawn contours on the basis of point-to-curve error, Dice index, and relative perfusion upslope in the myocardium. Visual inspection revealed successful segmentation in 15 out of 18 studies. Comparison of the AO contours with expert drawn contours yielded 2.23 ± 0.53 mm and 0.91 ± 0.02 as point-to-curve error and Dice index, respectively. The average difference between manually and automatically obtained relative upslope parameters was found to be statistically insignificant (P = .37). Moreover, the analysis time per slice was reduced from 20 minutes (manual) to 1.5 minutes (automatic). We proposed an automatic method that significantly reduced the time required for analysis of first-pass cardiac magnetic resonance perfusion images. The robustness and accuracy of the proposed method were demonstrated by the high spatial correspondence and statistically insignificant difference in perfusion parameters, when AO contours were compared with expert drawn contours. Copyright © 2010 AUR. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Luo, Yun-Gang; Ko, Jacky Kl; Shi, Lin; Guan, Yuefeng; Li, Linong; Qin, Jing; Heng, Pheng-Ann; Chu, Winnie Cw; Wang, Defeng
2015-07-01
Thalassemia patients with myocardial iron loading can be identified using T2* magnetic resonance imaging (MRI). To quantitatively assess cardiac iron loading, we propose an effective algorithm, based on morphological operations and the geodesic active contour (GAC), to segment aligned free-induction-decay sequential myocardium images. Nine patients with thalassemia major were recruited (10 male and 16 female) to undergo a thoracic MRI scan in the short-axis view. Free-induction-decay images were registered for T2* mapping. The GAC was utilized to segment the aligned MR images with a robust initialization. Segmented myocardium regions were divided into sectors for a region-based quantification of cardiac iron loading. Our proposed automatic segmentation approach achieves a true positive rate of 84.6% and a false positive rate of 53.8%. The area difference between manual and automatic segmentation was 25.5% after 1000 iterations. Results from the T2* analysis indicated that regions with T2* lower than 20 ms suffered from heavy iron loading in thalassemia major patients. The proposed method benefits from the abundant edge information of the free-induction-decay sequential MRI. Experimental results demonstrate that the proposed method is feasible for myocardium segmentation and clinically applicable for measuring myocardial iron loading.
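T2* mapping itself reduces to fitting a mono-exponential decay to the multi-echo signal at each pixel. A per-pixel sketch with illustrative (not the study's) echo times:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_t2star(te_ms, signal):
    """Fit S(TE) = S0 * exp(-TE / T2*) to a free-induction-decay series.
    T2* below roughly 20 ms corresponds to the heavy-iron-loading range
    cited in the abstract."""
    model = lambda te, s0, t2s: s0 * np.exp(-te / t2s)
    (s0, t2s), _ = curve_fit(model, te_ms, signal, p0=(signal[0], 10.0))
    return s0, t2s

# A synthetic decay with a known T2* of 8 ms is recovered by the fit.
te = np.array([2.0, 4.0, 6.0, 9.0, 12.0, 15.0])
s0, t2s = fit_t2star(te, 100.0 * np.exp(-te / 8.0))
```

Repeating this fit inside each segmented sector yields the region-based iron-loading map the abstract describes.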
Surface- and Contour-Preserving Origamic Architecture Paper Pop-Ups.
Le, Sang N; Leow, Su-Jun; Le-Nguyen, Tuong-Vu; Ruiz, Conrado; Low, Kok-Lim
2013-08-02
Origamic architecture (OA) is a form of papercraft that involves cutting and folding a single sheet of paper to produce a 3D pop-up, and is commonly used to depict architectural structures. Because of the strict geometric and physical constraints, OA design requires considerable skill and effort. In this paper, we present a method to automatically generate an OA design that closely depicts an input 3D model. Our algorithm is guided by a novel set of geometric conditions to guarantee the foldability and stability of the generated pop-ups. The generality of the conditions allows our algorithm to generate valid pop-up structures that are previously not accounted for by other algorithms. Our method takes a novel image-domain approach to convert the input model to an OA design. It performs surface segmentation of the input model in the image domain, and carefully represents each surface with a set of parallel patches. Patches are then modified to make the entire structure foldable and stable. Visual and quantitative comparisons of results have shown our algorithm to be significantly better than the existing methods in the preservation of contours, surfaces and volume. The designs have also been shown to more closely resemble those created by real artists.
Automated identification of the lung contours in positron emission tomography
NASA Astrophysics Data System (ADS)
Nery, F.; Silvestre Silva, J.; Ferreira, N. C.; Caramelo, F. J.; Faustino, R.
2013-03-01
Positron Emission Tomography (PET) is a nuclear medicine imaging technique that permits three-dimensional analysis of physiological processes in vivo. One of the areas where PET has demonstrated its advantages is the staging of lung cancer, where it offers better sensitivity and specificity than other techniques such as CT. On the other hand, accurate segmentation, an important procedure for computer-aided diagnostics (CAD) and automated image analysis, is a challenging task given the low spatial resolution and high noise that are intrinsic characteristics of PET images. This work presents an algorithm for the segmentation of lungs in PET images, to be used in CAD and group analysis in a large patient database. The lung boundaries are automatically extracted from a PET volume through the application of a marker-driven watershed segmentation procedure which is robust to noise. To test the effectiveness of the proposed method, we compared the segmentation results in several slices against manual delineation performed by nuclear medicine physicians, who used a software routine that we developed specifically for this task. To quantify the similarity between the contours obtained from the two methods, we used figures of merit based on both region and contour definitions. Results show that the performance of the algorithm was similar to that of the physicians. Additionally, we found that the algorithm-physician agreement is statistically similar to the inter-physician agreement.
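A marker-driven watershed can be sketched with SciPy's image-foresting-transform watershed; the toy "slice", seeds, and gradient barrier below are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np
from scipy.ndimage import watershed_ift, grey_dilation, grey_erosion

# Synthetic slice: two bright lung-like regions on a dark background.
img = np.zeros((40, 40), dtype=np.uint8)
img[8:32, 5:17] = 200    # left "lung"
img[8:32, 23:35] = 200   # right "lung"

# Morphological gradient: high along region boundaries, which act as the
# barriers that marker-driven flooding must not cross.
grad = (grey_dilation(img, size=3) - grey_erosion(img, size=3)).astype(np.uint8)

markers = np.zeros((40, 40), dtype=np.int16)
markers[20, 10] = 1      # seed inside the left lung
markers[20, 29] = 2      # seed inside the right lung
markers[1, 1] = 3        # background seed
labels = watershed_ift(grad, markers)
```

Because each seed floods outward over low-gradient pixels until it meets a barrier, every pixel inherits the label of its nearest marker in the cost sense, which is what makes the approach robust to the speckle-like noise of PET.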
Comparative study on the performance of textural image features for active contour segmentation.
Moraru, Luminita; Moldovanu, Simona
2012-07-01
We present a computerized method for the semi-automatic detection of contours in ultrasound images. The novelty of our study is the introduction of a fast and efficient image function for parametric active contour models. This new function is a combination of the gray-level information and first-order statistical features, called standard deviation parameters. In a comprehensive study, the developed algorithm and the efficiency of segmentation were first tested on synthetic images. Tests were also performed on breast and liver ultrasound images. The proposed method was compared with the watershed approach to show its efficiency. The performance of the segmentation was estimated using the area error rate. Using the standard deviation textural feature and a 5×5 kernel, our curve evolution was able to produce results close to the minimal area error rate (namely 8.88% for breast images and 10.82% for liver images). The image resolution was evaluated using the contrast-to-gradient method. The experiments showed promising segmentation results.
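The standard-deviation feature over a 5×5 kernel can be computed efficiently from running means via the identity std² = E[x²] − E[x]²; a short sketch (illustrative, not the authors' code):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(image, size=5):
    """Local standard deviation in a size x size window: the first-order
    statistical (textural) feature combined with gray levels above."""
    img = image.astype(float)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img ** 2, size)
    # Clip tiny negative values caused by floating-point cancellation.
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

# Uniform regions give ~0; a texture/intensity edge gives a high response.
img = np.zeros((20, 20)); img[:, 10:] = 100.0
s = local_std(img, 5)
```

The resulting map is high exactly where the active contour should slow down and lock onto a boundary.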
NASA Astrophysics Data System (ADS)
Irshad, Mehreen; Muhammad, Nazeer; Sharif, Muhammad; Yasmeen, Mussarat
2018-04-01
Conventionally, cardiac MR image analysis is done manually. Automated examination can replace the monotonous task of analyzing massive amounts of data to assess the global and regional function of the cardiac left ventricle (LV). This task is performed using MR images to calculate analytic cardiac parameters such as end-systolic volume, end-diastolic volume, ejection fraction, and myocardial mass. These analytic parameters depend upon accurate delineation of the epicardial, endocardial, papillary muscle, and trabeculation contours. In this paper, we propose an automatic segmentation method that uses the sum-of-absolute-differences technique to localize the left ventricle, and blind morphological operations to automatically segment and detect the epicardial and endocardial contours of the LV. We evaluate the proposed work on the benchmark Sunnybrook dataset. Contours of the epicardium and endocardium are compared quantitatively to determine their accuracy, and high matching values are observed: the overlap between the automatic examination and the ground-truth analysis given by an expert reaches an index value of 91.30%. The proposed method for automatic segmentation gives better performance than existing techniques in terms of accuracy.
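Sum-of-absolute-differences localization amounts to minimizing the SAD over candidate template placements. A brute-force sketch on toy data (not the paper's LV template; real implementations restrict the search region):

```python
import numpy as np

def sad_localize(image, template):
    """Locate a template in an image by minimizing the sum of absolute
    differences over all placements; returns the (row, col) of the best fit."""
    th, tw = template.shape
    H, W = image.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            sad = np.abs(image[y:y + th, x:x + tw] - template).sum()
            if sad < best:
                best, best_pos = sad, (y, x)
    return best_pos

# A 5x5 bright patch hidden at (12, 8) is found exactly.
img = np.zeros((30, 30)); img[12:17, 8:13] = 1.0
pos = sad_localize(img, np.ones((5, 5)))   # → (12, 8)
```

In the proposed pipeline this localization step would precede the morphological contour extraction.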
Lee, Wen-Li; Chang, Koyin; Hsieh, Kai-Sheng
2016-09-01
Segmenting lung fields in a chest radiograph is essential for automatic image analysis. We present an unsupervised method based on a multiresolution fractal feature vector, which characterizes the lung field region effectively. A fuzzy c-means clustering algorithm is then applied to obtain a satisfactory initial contour, and the final contour is obtained by deformable models. The results show the feasibility and high performance of the proposed method. Furthermore, based on the segmentation of the lung fields, the cardiothoracic ratio (CTR), a simple index for evaluating cardiac hypertrophy, can be measured. After identifying a suspicious symptom based on the estimated CTR, a physician can suggest that the patient undergo additional extensive tests before a treatment plan is finalized.
Interactive segmentation of tongue contours in ultrasound video sequences using quality maps
NASA Astrophysics Data System (ADS)
Ghrenassia, Sarah; Ménard, Lucie; Laporte, Catherine
2014-03-01
Ultrasound (US) imaging is an effective and noninvasive way of studying the tongue motions involved in normal and pathological speech, and the results of US studies are of interest for the development of new strategies in speech therapy. State-of-the-art tongue shape analysis based on US images depends on semi-automated tongue segmentation and tracking techniques. Recent work has mostly focused on improving the accuracy of the tracking techniques themselves. However, occasional errors remain inevitable, regardless of the technique used, and the tongue tracking process must thus be supervised by a speech scientist who corrects these errors manually or semi-automatically. This paper proposes an interactive framework to facilitate this process. In this framework, the user is guided toward potentially problematic portions of the US image sequence by a segmentation quality map that is based on the normalized energy of an active contour model and produced automatically during tracking. When a problematic segmentation is identified, corrections to the segmented contour can be made on one image and propagated both forward and backward through the problematic subsequence, thereby improving the user experience. The interactive tools were tested in combination with two different tracking algorithms. Preliminary results illustrate the potential of the proposed framework, suggesting that it generally reduces user interaction time, with little change in segmentation repeatability.
A GENERAL ALGORITHM FOR THE CONSTRUCTION OF CONTOUR PLOTS
NASA Technical Reports Server (NTRS)
Johnson, W.
1994-01-01
The graphical presentation of experimentally or theoretically generated data sets frequently involves the construction of contour plots. A general computer algorithm has been developed for the construction of contour plots. The algorithm provides for efficient and accurate contouring with a modular approach which allows flexibility in modifying the algorithm for special applications. The algorithm accepts as input data values at a set of points irregularly distributed over a plane. The algorithm is based on an interpolation scheme in which the points in the plane are connected by straight line segments to form a set of triangles. In general, the data are smoothed using a least-squares-error fit to a bivariate polynomial. To construct the contours, interpolation along the edges of the triangles is performed, using the bivariate polynomial if data smoothing was performed. Once the contour points have been located, the contour may be drawn. This program is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 series computer with a central memory requirement of approximately 100K of 8-bit bytes. This computer algorithm was developed in 1981.
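The edge-interpolation step this abstract describes can be sketched for the unsmoothed (piecewise-linear) case: a contour level crosses a triangle edge wherever the data values at its endpoints straddle the level. All names below are illustrative:

```python
import numpy as np

def edge_crossing(p1, p2, v1, v2, level):
    """Point on segment p1->p2 where linearly interpolated data crosses
    `level`; None if the edge does not straddle the level (edges lying
    exactly on the level are skipped in this sketch)."""
    if v1 == v2 or (v1 - level) * (v2 - level) > 0:
        return None
    t = (level - v1) / (v2 - v1)
    return (1 - t) * np.asarray(p1, float) + t * np.asarray(p2, float)

def triangle_contour_segment(tri_pts, tri_vals, level):
    """Crossing points of one contour level within one triangle: either
    no crossing, or a straight segment between two edge crossings."""
    pts = [edge_crossing(tri_pts[i], tri_pts[j], tri_vals[i], tri_vals[j], level)
           for i, j in ((0, 1), (1, 2), (2, 0))]
    return [p for p in pts if p is not None][:2]
```

Chaining such segments across adjacent triangles yields the full contour line.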
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Spencer; Rodrigues, George, E-mail: george.rodrigues@lhsc.on.ca; Department of Epidemiology/Biostatistics, University of Western Ontario, London
2013-01-01
Purpose: To perform a rigorous technological assessment and statistical validation of a software technology for anatomic delineation of the prostate on MRI datasets. Methods and Materials: A 3-phase validation strategy was used. Phase I consisted of anatomic atlas building using 100 prostate cancer MRI datasets to provide training data for the segmentation algorithms. In phase II, 2 experts contoured 15 new MRI prostate cancer cases using 3 approaches (manual, N points, and region of interest). In phase III, 5 new physicians with variable MRI prostate contouring experience segmented the same 15 phase II datasets using 3 approaches: manual, N points with no editing, and full autosegmentation with user editing allowed. Statistical analyses for the time and accuracy (Dice similarity coefficient) endpoints used traditional descriptive statistics, analysis of variance, analysis of covariance, and the pooled Student t test. Results: In phase I, the average (SD) total and per-slice contouring times for the 2 physicians were 228 (75) and 17 (3.5) seconds, and 209 (65) and 15 (3.9) seconds, respectively. In phase II, statistically significant differences in physician contouring time were observed based on physician, type of contouring, and case sequence. The N points strategy resulted in superior segmentation accuracy when initial autosegmented contours were compared with final contours. In phase III, statistically significant differences in contouring time were again observed based on physician, type of contouring, and case sequence. The average relative time savings for N points and autosegmentation were 49% and 27%, respectively, compared with manual contouring. The N points and autosegmentation strategies resulted in average Dice values of 0.89 and 0.88, respectively. Pre- and postedited autosegmented contours demonstrated a higher average Dice similarity coefficient of 0.94. Conclusion: The software provided robust contours with minimal editing required.
Time savings were observed for all physicians irrespective of experience level and baseline manual contouring speed.
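The accuracy endpoint used above, the Dice similarity coefficient, is simple to state in code (a generic sketch, not the study's implementation):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) between two
    binary segmentation masks; 1.0 means perfect overlap."""
    a, b = a.astype(bool), b.astype(bool)
    denom = int(a.sum()) + int(b.sum())
    return 2.0 * int(np.logical_and(a, b).sum()) / denom if denom else 1.0
```

Values around 0.9, as reported here, are commonly taken as clinically acceptable agreement for prostate contours.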
Intelligent vision guide for automatic ventilation grommet insertion into the tympanic membrane.
Gao, Wenchao; Tan, Kok Kiong; Liang, Wenyu; Gan, Chee Wee; Lim, Hsueh Yee
2016-03-01
Otitis media with effusion is a worldwide ear disease. The current treatment is to surgically insert a ventilation grommet into the tympanic membrane. A robotic device allowing automatic grommet insertion was designed in a previous study; however, the part of the membrane where the malleus bone is attached to the inner surface must be avoided during the insertion process. This paper proposes a synergy of an optical flow technique and a gradient vector flow active contour algorithm to achieve online tracking of the malleus under endoscopic vision, to guide the working channel to move efficiently during the surgery. The proposed method shows more stable and accurate tracking performance than current tracking methods in preclinical tests. With satisfactory tracking results, vision guidance towards a suitable insertion spot can be provided to the device so that it performs the surgery automatically. Copyright © 2015 John Wiley & Sons, Ltd.
Computer analysis of gallbladder ultrasonic images towards recognition of pathological lesions
NASA Astrophysics Data System (ADS)
Ogiela, M. R.; Bodzioch, S.
2011-06-01
This paper presents a new approach to gallbladder ultrasonic image processing and analysis aimed at automatic detection and interpretation of disease symptoms in processed US images. First, the paper presents a new heuristic method for filtering gallbladder contours from images. A major stage in this filtration is to segment and section off the areas occupied by the organ. The paper provides an algorithm for the holistic extraction of gallbladder image contours, based on rank filtration as well as on the analysis of line profile sections of the examined organ. The second part concerns detecting the most important lesion symptoms of the gallbladder. Automating a diagnostic process always comes down to developing algorithms that analyze the object of the diagnosis and verify the occurrence of symptoms related to a given condition. The methodology of computer analysis of US gallbladder images presented here is clearly utilitarian in nature and, after standardisation, can be used as a technique for supporting the diagnosis of selected gallbladder disorders using images of this organ.
Optimizing the 3D-reconstruction technique for serial block-face scanning electron microscopy.
Wernitznig, Stefan; Sele, Mariella; Urschler, Martin; Zankel, Armin; Pölt, Peter; Rind, F Claire; Leitinger, Gerd
2016-05-01
Elucidating the anatomy of neuronal circuits and localizing the synaptic connections between neurons can give us important insights into how the neuronal circuits work. We are using serial block-face scanning electron microscopy (SBEM) to investigate the anatomy of a collision detection circuit including the Lobula Giant Movement Detector (LGMD) neuron in the locust, Locusta migratoria. For this, thousands of serial electron micrographs are produced that allow us to trace the neuronal branching pattern. The reconstruction of neurons was previously done manually by drawing the cell outlines of each cell in each image separately. This approach was very time-consuming and troublesome. To make the process more efficient, new interactive software was developed. It uses the contrast between the neuron under investigation and its surroundings for semi-automatic segmentation. For segmentation, the user manually sets starting regions and the algorithm automatically selects a volume within the neuron until the edges corresponding to the neuronal outline are reached. Internally, the algorithm optimizes a 3D active contour segmentation model formulated as a cost function that takes the SEM image edges into account. This reduced the reconstruction time while staying close to the manual reference segmentation result. Our algorithm is easy to use for a fast segmentation process; unlike previous methods, it requires neither image training nor extended computing capacity. Our semi-automatic segmentation algorithm led to a dramatic reduction in processing time for the 3D reconstruction of identified neurons. Copyright © 2016 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Y; Liao, Z; Jiang, W
Purpose: To evaluate the feasibility of using an automatic segmentation tool to delineate cardiac substructures from computed tomography (CT) images for cardiac toxicity analysis in non-small cell lung cancer (NSCLC) patients after radiotherapy. Methods: A multi-atlas segmentation tool developed in-house was used to automatically delineate eleven cardiac substructures, including the whole heart, the four heart chambers, and six great vessels, from the averaged 4DCT planning images of 49 NSCLC patients. The automatically segmented contours were edited appropriately by two experienced radiation oncologists. The modified contours were compared with the auto-segmented contours using the Dice similarity coefficient (DSC) and mean surface distance (MSD) to evaluate how much modification was needed. In addition, the dose volume histograms (DVH) of the modified contours were compared with those of the auto-segmented contours to evaluate the dosimetric difference between modified and auto-segmented contours. Results: Across the eleven structures, the averaged DSC values ranged from 0.73 ± 0.08 to 0.95 ± 0.04 and the averaged MSD values ranged from 1.3 ± 0.6 mm to 2.9 ± 5.1 mm for the 49 patients. Overall, the modifications were small. The pulmonary vein (PV) and the inferior vena cava required the most modification. The V30 (volume receiving 30 Gy or above) for the whole heart and the mean dose to the whole heart and four heart chambers did not show statistically significant differences between modified and auto-segmented contours. The maximum dose to the great vessels did not show statistically significant differences except for the PV. Conclusion: The automatic segmentation of the cardiac substructures did not require substantial modification.
The dosimetric evaluation showed no statistically significant difference between auto-segmented and modified contours except for the PV, which suggests that auto-segmented contours are feasible for cardiac dose response studies in clinical practice with only minor modification of the PV.
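The second agreement metric above, mean surface distance, can be sketched for point-sampled surfaces; this brute-force illustration assumes the surfaces are given as point arrays (the study's tool presumably operates on full 3-D surfaces):

```python
import numpy as np

def mean_surface_distance(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """Symmetric mean surface distance between two surfaces given as
    (N, D) and (M, D) arrays of sampled surface points: average each
    point's distance to the nearest point of the other surface, in
    both directions."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

Unlike the overlap-based DSC, MSD is expressed in physical units (mm here), which is why both metrics are reported together.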
Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos
2014-01-01
Assessing the structural integrity of the hippocampus (HC) is an essential step toward the prevention, diagnosis, and follow-up of various brain disorders, owing to the implication of structural changes of the HC in those disorders. In this respect, the development of automatic segmentation methods that can accurately, reliably, and reproducibly segment the HC has attracted considerable attention over the past decades. This paper presents an innovative 3-D fully automatic method to be used on top of the multiatlas concept for HC segmentation. The method is based on a subject-specific set of 3-D optimal local maps (OLMs) that locally control the influence of each energy term of a hybrid active contour model (ACM). The complete set of OLMs for a set of training images is defined simultaneously via an optimization scheme. At the same time, the optimal ACM parameters are also calculated. Therefore, heuristic parameter fine-tuning is not required. Training OLMs are subsequently combined, by applying an extended multiatlas concept, to produce the OLMs that are anatomically most suitable to the test image. The proposed algorithm was tested on three different and publicly available data sets. Its accuracy was compared with that of state-of-the-art methods, demonstrating the efficacy and robustness of the proposed method. PMID:27170866
Chen, C; Li, H; Zhou, X; Wong, S T C
2008-05-01
Image-based, high throughput genome-wide RNA interference (RNAi) experiments are increasingly carried out to facilitate the understanding of gene functions in intricate biological processes. Automated screening of such experiments generates a large number of images with great variations in image quality, which makes manual analysis unreasonably time-consuming. Therefore, effective techniques for automatic image analysis are urgently needed, in which segmentation is one of the most important steps. This paper proposes a fully automatic method for cell segmentation in genome-wide RNAi screening images. The method consists of two steps: nuclei and cytoplasm segmentation. Nuclei are extracted and labelled to initialize the cytoplasm segmentation. Since the quality of RNAi images is rather poor, a novel scale-adaptive steerable filter is designed to enhance the images in order to extract the long and thin protrusions of spiky cells. Then, a constraint-factor GCBAC method and morphological algorithms are combined into an integrated method to segment tightly clustered cells. Compared with the results obtained using seeded watershed and the ground truth, that is, manual labelling by experts in RNAi screening data, our method achieves higher accuracy. Compared with active contour methods, our method consumes much less time. These positive results indicate that the proposed method can be applied in the automatic image analysis of multi-channel image screening data.
NASA Astrophysics Data System (ADS)
Schmitt, R.; Niggemann, C.; Mersmann, C.
2008-04-01
Fibre-reinforced plastics (FRP) are particularly suitable for components where light-weight structures with advanced mechanical properties are required, e.g. for aerospace parts. Nevertheless, many manufacturing processes for FRP include manual production steps without integrated quality control. A vital step in the process chain is the lay-up of the textile preform, as it greatly affects the geometry and the mechanical performance of the final part. In order to automate FRP production, an inline machine vision system is needed for closed-loop control of the preform lay-up. This work describes the development of a novel laser light-section sensor for optical inspection of textile preforms and its integration and validation in a machine vision prototype. The proposed method aims at determining the contour position of each textile layer through edge scanning. The scanning route is automatically derived in a preliminary step using texture analysis algorithms. As sensor output, a distinct step profile is computed from the acquired greyscale image. The contour position is determined with sub-pixel accuracy using a novel algorithm based on a non-linear least-squares fit to a sigmoid function. The whole contour position is generated through data fusion of the measured edge points. The proposed method provides robust process automation for FRP production, improving process quality and reducing scrap rates. Hence, the range of economically feasible FRP products can be increased and new market segments with cost-sensitive products can be addressed.
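The sub-pixel edge localisation described above, a nonlinear least-squares fit of the grey-value profile to a sigmoid, might look like the sketch below. The abstract does not give the sensor's exact model, so the parameterisation and function names here are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, a, b, x0, w):
    """Step-like model of a grey-value profile across an edge:
    baseline a, step height b, edge position x0, edge width w."""
    return a + b / (1.0 + np.exp(-(x - x0) / w))

def subpixel_edge(profile):
    """Fit the profile to the sigmoid and return the inflection point
    x0, i.e. the edge position with sub-pixel accuracy."""
    x = np.arange(len(profile), dtype=float)
    p0 = (float(profile.min()), float(np.ptp(profile)), len(profile) / 2.0, 1.0)
    (_, _, x0, _), _ = curve_fit(sigmoid, x, profile, p0=p0, maxfev=5000)
    return x0
```

Because the fit uses the whole profile rather than a single threshold crossing, the recovered x0 can lie between pixel centres, which is the source of the sub-pixel accuracy.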
Schaly, B; Bauman, G S; Battista, J J; Van Dyk, J
2005-02-07
The goal of this study is to validate a deformable model using contour-driven thin-plate splines for application to radiation therapy dose mapping. Our testing includes a virtual spherical phantom as well as real computed tomography (CT) data from ten prostate cancer patients with radio-opaque markers surgically implanted into the prostate and seminal vesicles. In the spherical mathematical phantom, homologous control points generated automatically from input contour data in CT slice geometry were compared to homologous control point placement using analytical geometry as the ground truth. The doses delivered to specific voxels driven by both sets of homologous control points were compared to determine the accuracy of dose tracking via the deformable model. A 3D analytical, spherically symmetric dose distribution with a dose gradient of approximately 10% per mm was used for this phantom. This test showed that the uncertainty in calculating the delivered dose to a tissue element depends on slice thickness and on the variation in defining homologous landmarks, where dose agreement of 3-4% in high dose gradient regions was achieved. In the patient data, radio-opaque marker positions driven by the thin-plate spline algorithm were compared to the actual marker positions as identified in the CT scans. It is demonstrated that the deformable model is accurate (approximately 2.5 mm) to within the intra-observer contouring variability. This work shows that the algorithm is appropriate for describing changes in pelvic anatomy and for the dose mapping application with dose gradients characteristic of conformal and intensity modulated radiation therapy.
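A thin-plate spline mapping of the kind used here can be reproduced with off-the-shelf tools. The sketch below uses SciPy's RBF interpolator with a thin-plate-spline kernel to map a voxel position through a deformation defined by homologous control points; the control points themselves are made up for illustration:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Made-up homologous control points: planning-CT (source) positions and
# their matching treatment-day (target) positions, here a pure shift.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
dst = src + np.array([0.1, 0.0])

tps = RBFInterpolator(src, dst, kernel='thin_plate_spline')

# Map an arbitrary voxel position through the deformation; the dose at
# the mapped position is accumulated to the original tissue element.
mapped = tps(np.array([[0.25, 0.75]]))
```

Because the thin-plate spline includes an affine polynomial term, a pure translation of the control points is reproduced exactly everywhere, which makes it a convenient sanity check.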
NASA Astrophysics Data System (ADS)
Bruynooghe, Michel M.
1998-04-01
In this paper, we present a robust method for automatic object detection and delineation in noisy complex images. The proposed procedure is a three-stage process that integrates image segmentation by multidimensional pixel clustering and geometrically constrained optimization of deformable contours. The first step is to enhance the original image by nonlinear unsharp masking. The second step is to segment the enhanced image by multidimensional pixel clustering, using our reducible-neighborhoods clustering algorithm, which has an attractive theoretical worst-case complexity. Candidate objects are then extracted and initially delineated by an optimized region merging algorithm based on ascendant hierarchical clustering with contiguity constraints and on the maximization of average contour gradients. The third step is to optimize the delineation of the previously extracted and initially delineated objects. Deformable object contours are modeled by cubic splines, and an affine invariant is used to control the undesired formation of cusps and loops. Nonlinear constrained optimization is used to maximize the external energy; this avoids the difficult and non-reproducible choice of regularization parameters required by classical snake models. The proposed method has been applied successfully to the detection of fine and subtle microcalcifications in X-ray mammographic images, to defect detection by moire image analysis, and to the analysis of microrugosities of thin metallic films. A later implementation of the proposed method on a digital signal processor associated with a vector coprocessor would allow the design of a real-time object detection and delineation system for applications in medical imaging and in industrial computer vision.
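The first stage, unsharp masking, sharpens by adding back a scaled high-pass residual. The paper uses a nonlinear variant whose details the abstract does not give; the textbook linear baseline it builds on is:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img: np.ndarray, sigma: float = 2.0, amount: float = 1.5) -> np.ndarray:
    """Linear unsharp masking: add `amount` times the difference
    between the image and its Gaussian-blurred version, which boosts
    fine detail such as microcalcifications."""
    blurred = gaussian_filter(img.astype(float), sigma)
    return img + amount * (img - blurred)
```

A flat region has a zero residual, so enhancement concentrates on edges and small bright structures.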
Research on detection method of UAV obstruction based on binocular vision
NASA Astrophysics Data System (ADS)
Zhu, Xiongwei; Lei, Xusheng; Sui, Zhehao
2018-04-01
For autonomous obstacle positioning and ranging during UAV (unmanned aerial vehicle) flight, a system based on binocular vision is constructed. A three-stage image preprocessing method is proposed to solve the problems of noise and brightness differences in the actual captured images. The distance to the nearest obstacle is calculated using the disparity map generated by binocular vision. The contour of the obstacle is then extracted by post-processing of the disparity map, and a color-based adaptive parameter adjustment algorithm is designed to extract obstacle contours automatically. Finally, safety distance measurement and obstacle positioning during UAV flight are achieved. Based on a series of tests, the distance measurement error remains within 2.24% over the measuring range from 5 m to 20 m.
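The ranging step from a disparity map rests on the standard pinhole stereo relation Z = f·B/d. A minimal sketch (variable names are illustrative, not from the paper):

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth of a point from its stereo disparity: Z = f * B / d, with
    focal length f in pixels, baseline B in metres, and disparity d in
    pixels; the result is in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Because depth is inversely proportional to disparity, range error grows with distance, which is consistent with quoting accuracy as a percentage of the 5-20 m measuring range.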
Unsupervised quantification of abdominal fat from CT images using Greedy Snakes
NASA Astrophysics Data System (ADS)
Agarwal, Chirag; Dallal, Ahmed H.; Arbabshirani, Mohammad R.; Patel, Aalpen; Moore, Gregory
2017-02-01
Adipose tissue has been associated with adverse consequences of obesity. Total adipose tissue (TAT) is divided into subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT). Intra-abdominal fat (VAT), located inside the abdominal cavity, is a major factor in the classic obesity-related pathologies. Since direct measurement of visceral and subcutaneous fat is not trivial, substitute metrics like waist circumference (WC) and body mass index (BMI) are used in clinical settings to quantify obesity. Abdominal fat can be assessed effectively using CT or MRI, but manual fat segmentation is rather subjective and time-consuming. Hence, an automatic and accurate quantification tool for abdominal fat is needed. The goal of this study is to extract TAT, VAT and SAT from abdominal CT in a fully automated, unsupervised fashion using energy minimization techniques. We applied a four-step framework consisting of 1) initial body contour estimation, 2) approximation of the body contour, 3) estimation of the inner abdominal contour using the Greedy Snakes algorithm, and 4) voting, to segment the subcutaneous and visceral fat. We validated our algorithm on 952 clinical abdominal CT images (from 476 patients with a very wide BMI range) collected from various radiology departments of Geisinger Health System. To our knowledge, this is the first study of its kind on such a large and diverse clinical dataset. Our algorithm obtained a 3.4% error for VAT segmentation compared to manual segmentation. These personalized and accurate measurements of fat can complement traditional population-health-driven obesity metrics such as BMI and WC.
CT liver volumetry using geodesic active contour segmentation with a level-set algorithm
NASA Astrophysics Data System (ADS)
Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Obajuluwa, Ademola; Xu, Jianwu; Hori, Masatoshi; Baron, Richard
2010-03-01
Automatic liver segmentation on CT images is challenging because the liver often abuts other organs of similar density. Our purpose was to develop an accurate automated liver segmentation scheme for measuring liver volumes. We developed an automated volumetry scheme for the liver in CT based on a 5-step schema. First, an anisotropic smoothing filter was applied to portal-venous phase CT images to remove noise while preserving the liver structure, followed by an edge enhancer to enhance the liver boundary. Using the boundary-enhanced image as a speed function, a fast-marching algorithm generated an initial surface that roughly estimated the liver shape. A geodesic-active-contour segmentation algorithm coupled with level-set contour evolution refined the initial surface so as to more precisely fit the liver boundary. The liver volume was calculated based on the refined liver surface. Hepatic CT scans of eighteen prospective liver donors were obtained under a liver transplant protocol with a multi-detector CT system. The automated liver volumes obtained were compared with those manually traced by a radiologist, used as the "gold standard." The mean liver volume obtained with our scheme was 1,520 cc, whereas the mean manual volume was 1,486 cc, with a mean absolute difference of 104 cc (7.0%). CT liver volumetry based on the automated scheme agreed closely with the gold-standard manual volumetry (intra-class correlation coefficient 0.95) with no statistically significant difference (p(F<=f)=0.32), and required substantially less completion time. Our automated scheme provides an efficient and accurate way of measuring liver volumes.
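Once the refined surface is voxelised, the final volume computation is straightforward: count the voxels inside the segmentation and multiply by the per-voxel volume. A generic sketch (not the authors' code):

```python
import numpy as np

def volume_cc(mask: np.ndarray, spacing_mm) -> float:
    """Volume of a binary 3-D segmentation in cubic centimetres:
    voxel count times the per-voxel volume, with voxel spacing given
    in millimetres per axis."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return int(mask.astype(bool).sum()) * voxel_mm3 / 1000.0
```

Anisotropic spacing (e.g. thicker slices than in-plane resolution) is handled naturally by the per-axis spacing product.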
Novel 3-D free-form surface profilometry for reverse engineering
NASA Astrophysics Data System (ADS)
Chen, Liang-Chia; Huang, Zhi-Xue
2005-01-01
This article proposes an innovative 3-D surface contouring approach for automatic and accurate free-form surface reconstruction using a sensor integration concept. The study addresses a critical problem in the accurate measurement of free-form surfaces by developing an automatic reconstruction approach. Unacceptable measuring accuracy is mainly due to errors arising from the use of inadequate measuring strategies, which end up producing inaccurate digitised data and costly post-data processing in Reverse Engineering (RE). This article thus aims to develop automatic digitising strategies that ensure surface reconstruction efficiency as well as accuracy. The developed approach consists of two main stages, namely rapid shape identification (RSI) and automated laser scanning (ALS), for completing 3-D surface profilometry. The approach effectively utilises the advantages of on-line geometric information to evaluate the degree of satisfaction of user-defined digitising accuracy within a triangular topological patch. An industrial case study was used to demonstrate the feasibility of the approach.
Ghita, Ovidiu; Dietlmeier, Julia; Whelan, Paul F
2014-10-01
In this paper, we investigate the segmentation of closed contours in subcellular data using a framework that primarily combines pairwise affinity grouping principles with a graph partitioning contour searching approach. One salient problem that has precluded the application of these methods to large-scale segmentation problems is the onerous computational complexity required to generate comprehensive representations that include all pairwise relationships between all pixels in the input data. To compensate for this problem, a practical solution is to reduce the complexity of the input data by applying an over-segmentation technique prior to the application of the computationally demanding strands of the segmentation process. This approach opens the opportunity to build specific shape and intensity models that can be successfully employed to extract the salient structures in the input image, which are further processed to identify the cycles in an undirected graph. The proposed framework has been applied to the segmentation of mitochondria membranes in electron microscopy data, which are characterized by low contrast and low signal-to-noise ratio. The algorithm has been quantitatively evaluated using two datasets in which the segmentation results were compared with the corresponding manual annotations. The performance of the proposed algorithm has been measured using standard metrics, such as precision and recall, and the experimental results indicate a high level of segmentation accuracy.
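For binary masks, the evaluation metrics named above reduce to ratios of the true-positive pixel count (a generic sketch, not the paper's evaluation code):

```python
import numpy as np

def precision_recall(pred: np.ndarray, truth: np.ndarray):
    """Precision (fraction of predicted pixels that are correct) and
    recall (fraction of true pixels that were found) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = int(np.logical_and(pred, truth).sum())
    precision = tp / int(pred.sum()) if pred.any() else 1.0
    recall = tp / int(truth.sum()) if truth.any() else 1.0
    return precision, recall
```

Reporting both guards against degenerate solutions: an over-eager segmenter scores high recall but low precision, and vice versa.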
NASA Astrophysics Data System (ADS)
Cai, Lei; Wang, Lin; Li, Bo; Zhang, Libao; Lv, Wen
2017-06-01
Vehicle tracking is currently one of the most active research topics in machine vision and an important part of intelligent transportation systems. However, in both theory and technology, it still faces many challenges, including real-time performance and robustness. In video surveillance, targets must be detected in real time and their positions calculated accurately so that their motions can be judged. The contents of video sequence images and the target motion are complex, so the objects cannot be expressed by a unified mathematical model. Object tracking is defined as locating the moving target of interest in each frame of a video. Current tracking technology can achieve reliable results in simple environments for targets with easily identified characteristics. However, in more complex environments, the target is easily lost because of the mismatch between the target appearance and its dynamic model. Moreover, the target usually has a complex shape, but traditional tracking algorithms usually represent the tracking result by a simple geometric shape such as a rectangle or circle, which cannot provide accurate information for subsequent higher-level applications. This paper combines a traditional object-tracking technique, the mean-shift algorithm, with an image segmentation algorithm, the active-contour model, to obtain the outlines of objects during the tracking process and to handle topology changes automatically. Meanwhile, the outline information is used to aid the tracking algorithm and improve it.
ACE Design Study and Experiments
1976-06-01
… orthophoto on off-line printer
o Automatically compute contours on UNIVAC 1108 and plot on CALCOMP
o Manually trace planimetry and drainage from … orthophoto*
o Manually edit and trace plotted contours to obtain completed contour manuscript*
  - Edit errors
  - Add missing contour detail
  - Combine … stereomodels
  - Contours adjusted to drainage chart and spot elevations
  - Referring to orthophoto, rectified photos, original photos
o Normal
Rodrigues, Pedro L.; Rodrigues, Nuno F.; Duque, Duarte; Granja, Sara; Correia-Pinto, Jorge; Vilaça, João L.
2014-01-01
Background. Regulating mechanisms of branching morphogenesis of fetal lung rat explants have been an essential tool for molecular research. This work presents a new methodology to accurately quantify the epithelium, the outer contour, and the peripheral airway buds of lung explants during cellular development from microscopic images. Methods. The outer contour was defined using an adaptive, multiscale threshold algorithm whose level was automatically calculated based on an entropy maximization criterion. The inner lung epithelium was defined by a clustering procedure that groups small image regions according to the minimum description length principle and local statistical properties. Finally, the number of peripheral buds was counted as the branched ends of the skeleton in a skeletonized image of the lung inner epithelium. Results. The time for lung branching morphometric analysis was reduced by 98% in contrast to the manual method. The best results were obtained in the first two days of cellular development, with smaller standard deviations. Nonsignificant differences were found between the automatic and manual results on all culture days. Conclusions. The proposed method introduces a series of advantages related to its intuitive use and accuracy, making the technique suitable for images with different lighting characteristics and allowing reliable comparison between different researchers. PMID:25250057
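The entropy-maximization criterion cited for the outer-contour threshold is in the spirit of Kapur's classic method: choose the grey level that maximizes the summed entropies of the two histogram classes it induces. The sketch below shows that criterion only, not the paper's adaptive multiscale variant:

```python
import numpy as np

def entropy_threshold(img: np.ndarray, nbins: int = 256) -> float:
    """Maximum-entropy (Kapur-style) threshold selection: for each
    candidate cut of the grey-level histogram, sum the Shannon
    entropies of the foreground and background distributions and
    keep the cut that maximizes the sum."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist / hist.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, nbins):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue  # one class empty: not a valid cut
        q0, q1 = p[:t] / p0, p[t:] / p1
        h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
            - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
        if h > best_h:
            best_t, best_h = t, h
    return float(edges[best_t])
```

Unlike a fixed threshold, the criterion adapts to each image's histogram, which suits images with varying illumination.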
ERIC Educational Resources Information Center
Keane, Brian P.; Mettler, Everett; Tsoi, Vicky; Kellman, Philip J.
2011-01-01
Multiple object tracking (MOT) is an attentional task wherein observers attempt to track multiple targets among moving distractors. Contour interpolation is a perceptual process that fills-in nonvisible edges on the basis of how surrounding edges (inducers) are spatiotemporally related. In five experiments, we explored the automaticity of…
NASA Astrophysics Data System (ADS)
Chaisaowong, Kraisorn; Kraus, Thomas
2014-03-01
Pleural thickenings can be caused by asbestos exposure and may evolve into malignant pleural mesothelioma. While early diagnosis plays the key role in early treatment, and therefore helps to reduce morbidity, the growth rate of a pleural thickening can in turn be essential evidence for an early diagnosis of pleural mesothelioma. The detection of pleural thickenings is today done by visual inspection of CT data, which is time-consuming and subject to the physician's judgment. Computer-assisted diagnosis systems to automatically assess pleural mesothelioma have been reported worldwide; in this paper, an image analysis pipeline to automatically detect pleural thickenings and measure their volume is described. We first delineate the pleural contour in the CT images automatically. An adaptive surface-based smoothing technique is then applied to the pleural contours to identify all potential thickenings. A subsequent tissue-specific, topology-oriented detection step based on a probabilistic Hounsfield unit model of pleural plaques then identifies the genuine pleural thickenings among them. The assessment of the detected pleural thickenings is based on volumetry of the 3D model, created by a mesh construction algorithm followed by a Laplace-Beltrami eigenfunction expansion surface smoothing technique. Finally, the spatiotemporal matching of pleural thickenings from consecutive CT data is carried out based on semi-automatic lung registration towards the assessment of their growth rate. With these methods, a new computer-assisted diagnosis system is presented to assure a precise and reproducible assessment of pleural thickenings towards the diagnosis of pleural mesothelioma in its early stage.
NASA Astrophysics Data System (ADS)
Li, Dengwang; Liu, Li; Chen, Jinhu; Li, Hongsheng; Yin, Yong; Ibragimov, Bulat; Xing, Lei
2017-01-01
Atlas-based segmentation utilizes a library of previously delineated contours of similar cases to facilitate automatic segmentation. The problem, however, remains challenging because of the limited information carried by the contours in the library. In this study, we developed a narrow-shell strategy to enhance the information of each contour in the library and to improve the accuracy of the existing atlas-based approach. This study presents a new concept for atlas-based segmentation: instead of using the complete volume of the target organs, only information along the organ contours from the atlas images is used to guide segmentation of the new image. In setting up the atlas library, we included not only the coordinates of contour points, but also the image features adjacent to the contour. In this work, 139 CT images with normal-appearing livers collected for radiotherapy treatment planning were used to construct the library. The CT images within the library were first registered to each other using affine registration. A nonlinear narrow shell was generated alongside the object contours of the registered images. Matching voxels were selected inside the common narrow shell of a library case and a new case using a speeded-up robust features (SURF) strategy. A deformable registration was then performed using a thin-plate-spline (TPS) technique. The contour associated with the library case was propagated automatically onto the new image by exploiting the deformation field vectors. The liver contour was finally obtained by employing level-set-based energy optimization within the narrow shell. The performance of the proposed method was evaluated by quantitatively comparing the auto-segmentation results with contours delineated by physicians. A novel atlas-based segmentation technique with inclusion of neighborhood image features, through the introduction of a narrow shell surrounding the target objects, was thus established.
Application of the technique to 30 liver cases suggested that it was capable of reliably segmenting liver cases from CT, 4D-CT, and CBCT images with little human interaction. The accuracy and speed of the proposed method were quantitatively validated by comparing the automatic segmentation results with the manual delineation results. The Jaccard similarity metric between the automatically generated liver contours and the physician-delineated results is on average 90%-96% for planning images. Incorporation of image features into the library contours improves the currently available atlas-based auto-contouring techniques and provides a clinically practical solution for auto-segmentation. The proposed narrow-shell atlas-based method can achieve efficient automatic liver propagation for CT, 4D-CT, and CBCT images for subsequent treatment planning and should find widespread application in future treatment planning systems.
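The Jaccard similarity metric reported above is straightforward to compute from binary segmentation masks. The following is a minimal sketch with synthetic masks; the arrays and values are illustrations, not data from the study:

```python
import numpy as np

def jaccard(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two binary masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return float(np.logical_and(a, b).sum() / union)

# Two overlapping 6x6 squares, shifted by one voxel in each direction.
auto = np.zeros((10, 10), dtype=bool); auto[2:8, 2:8] = True
manual = np.zeros((10, 10), dtype=bool); manual[3:9, 3:9] = True
print(round(jaccard(auto, manual), 3))  # 25 shared voxels / 47 in the union
```

In practice the same computation runs slice-by-slice or over full 3D volumes; only the mask shapes change.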
Scheinker, Alexander; Baily, Scott; Young, Daniel; ...
2014-08-01
In this work, an implementation of a recently developed model-independent adaptive control scheme for tuning uncertain and time-varying systems is demonstrated on the Los Alamos linear particle accelerator. The main benefits of the algorithm are its simplicity, its ability to handle an arbitrary number of components without increased complexity, and its extreme robustness to measurement noise, a property that is both analytically proven and demonstrated in the experiments performed. We report on the application of this algorithm to the simultaneous tuning of two buncher radio frequency (RF) cavities, in order to maximize beam acceptance into the accelerating electromagnetic field cavities of the machine, with the tuning based only on a noisy measurement of the surviving beam current downstream from the two bunching cavities. The algorithm automatically responds to arbitrary phase shifts of the cavity phases, re-tuning the cavity settings and maximizing beam acceptance. Because it is model independent, it can be utilized for continuous adaptation to time variation of a large system, such as that due to thermal drift or damage to components, in which case the remaining functional components would be automatically re-tuned to compensate for the failing ones. We start by discussing the general model-independent adaptive scheme and how it may be digitally applied to a large class of multi-parameter uncertain systems, and then present our experimental results.
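The abstract does not give implementation details, but model-independent tuning of this kind is commonly realized as an extremum-seeking loop: each parameter is dithered at a distinct frequency, and correlating the dither with a single noisy scalar measurement yields a gradient estimate without any model of the system. A minimal sketch under that assumption; the `beam_current` function and all constants are hypothetical stand-ins, not the accelerator's actual response:

```python
import numpy as np

rng = np.random.default_rng(0)

def beam_current(phases):
    # Hypothetical stand-in for the measured surviving beam current,
    # maximized when both cavity phases sit at their (unknown) optima.
    optimum = np.array([0.3, -0.7])
    return 1.0 - np.sum((phases - optimum) ** 2) + 0.01 * rng.normal()

theta = np.array([1.5, 1.0])     # initial cavity phase settings
omegas = np.array([60.0, 73.0])  # distinct dither frequencies (rad/s)
a, k, dt = 0.2, 2.0, 1e-3        # dither amplitude, feedback gain, time step

for step in range(20000):
    t = step * dt
    dither = a * np.cos(omegas * t)
    J = beam_current(theta + dither)                 # noisy scalar measurement
    theta = theta + dt * k * J * np.cos(omegas * t)  # correlate to ascend J

print(np.round(theta, 2))  # settles near the unknown optimum
```

On average the update behaves like gradient ascent with gain k·a/2, which is why no explicit model of `beam_current` is needed.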
Automatic structural matching of 3D image data
NASA Astrophysics Data System (ADS)
Ponomarev, Svjatoslav; Lutsiv, Vadim; Malyshev, Igor
2015-10-01
A new image matching technique is described. It is implemented as an object-independent hierarchical structural juxtaposition algorithm based on an alphabet of simple object-independent contour structural elements. The structural matching applied implements an optimized method of walking through a truncated tree of all possible juxtapositions of two sets of structural elements. The algorithm was initially developed for dealing with 2D images such as aerospace photographs, and it turned out to be sufficiently robust and reliable for successfully matching pictures of natural landscapes taken in differing seasons, from differing aspect angles, and by differing sensors (visible optical, IR, and SAR pictures, as well as depth maps and geographical vector-type maps). In the version reported here, the algorithm is enhanced by the additional use of information on the third spatial coordinate of observed points on object surfaces. Thus, it is now capable of matching images of 3D scenes in the tasks of automatic navigation of extremely low-flying unmanned vehicles or autonomous terrestrial robots. The basic principles of 3D structural description and matching of images are described, and examples of image matching are presented.
Building Extraction Based on Openstreetmap Tags and Very High Spatial Resolution Image in Urban Area
NASA Astrophysics Data System (ADS)
Kang, L.; Wang, Q.; Yan, H. W.
2018-04-01
How to derive building contours from VHR images is the essential problem in automatic building extraction for urban areas. To solve this problem, OSM data are introduced to offer vector contour information for buildings, which is hard to obtain from VHR images alone. First, we import OSM data into a database; the line-string data of OSM with tags such as building, amenity, and office are selected and combined into complete contours. Second, the accuracy of the building contours is confirmed by comparing them with the real buildings in Google Earth. Third, maximum likelihood classification is conducted with the confirmed building contours, and the result demonstrates that the proposed approach is effective and accurate. The approach offers a new way toward automatic interpretation of VHR images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, H; Tan, J; Kavanaugh, J
Purpose: Radiotherapy (RT) contours delineated either manually or semi-automatically require verification before clinical use. Manual evaluation is very time consuming. A new integrated software tool using supervised pattern contour recognition was thus developed to facilitate this process. Methods: The contouring tool was developed using an object-oriented programming language, C#, and application programming interfaces such as the Visualization Toolkit (VTK). The C# language served as the tool's design basis. The Accord.NET scientific computing libraries were utilized for the required statistical data processing and pattern recognition, while VTK was used to build and render 3-D mesh models of critical RT structures with real-time, 360° visualization. Principal component analysis (PCA) was used so the system could learn and update the geometric variations of normal structures, using physician-approved RT contours as a training dataset. The in-house supervised PCA-based contour recognition method was used for automatically evaluating contour normality/abnormality. The function for reporting the contour evaluation results was implemented using C# and the Windows Forms Designer. Results: The software input was RT simulation images and RT structures from commercial clinical treatment planning systems. Several capabilities were demonstrated: automatic assessment of RT contours, file loading/saving of various modality medical images and RT contours, and generation/visualization of 3-D images and anatomical models. Moreover, it supported 360° rendering of the RT structures in a multi-slice view, which allows physicians to visually check and edit abnormally contoured structures. Conclusion: This new software integrates a supervised learning framework with image processing and graphical visualization modules for RT contour verification.
This tool has great potential for facilitating treatment planning with the assistance of an automatic contour evaluation module, avoiding unnecessary manual verification for physicians/dosimetrists. In addition, its nature as a compact and stand-alone tool allows for future extensibility to include additional functions for physicians' clinical needs.
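The core idea behind PCA-based contour verification can be sketched as follows: contours resembling the physician-approved training set reconstruct well from the leading principal components, while abnormal ones leave a large residual. All feature vectors below are synthetic illustrations, not the tool's actual features:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training set: each row is a contour's geometric feature
# vector (e.g. centroid coordinates, volume, sampled radial distances)
# taken from physician-approved structures.
scales = np.array([3, 2, 1, 0.1, 0.1, 0.1, 0.1, 0.1])
train = rng.normal(size=(50, 8)) * scales

mean = train.mean(axis=0)
# Principal components of the centered training data via SVD.
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:3]  # keep the modes explaining most variance

def reconstruction_error(x):
    """Distance between x and its projection onto the PCA subspace."""
    centered = x - mean
    proj = components.T @ (components @ centered)
    return float(np.linalg.norm(centered - proj))

# A contour resembling the training data scores low; a perturbed
# (abnormal) contour scores high and would be flagged for review.
normal = rng.normal(size=8) * scales
abnormal = normal + np.array([0, 0, 0, 5, 5, 0, 0, 0])
print(reconstruction_error(normal) < reconstruction_error(abnormal))
```

A threshold on this residual, chosen from the training distribution, then separates "normal" from "abnormal" contours.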
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hsin-Chen; Tan, Jun; Dolly, Steven
2015-02-15
Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to re-verify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures' geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as the anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations, of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models.
These training data and the remaining fifteen testing data sets were separately employed to test the effectiveness of the proposed contouring error detection strategy. Results: An evaluation tool was implemented to illustrate how the proposed strategy automatically detects radiation therapy contouring errors for a given patient and provides 3D graphical visualization of the error detection results as well. The contouring error detection results were achieved with an average sensitivity of 0.954/0.906 and an average specificity of 0.901/0.909 on the centroid/volume-related contouring errors of all the tested samples. As for the detection results on structural shape-related contouring errors, an average sensitivity of 0.816 and an average specificity of 0.94 on all the tested samples were obtained. The promising results indicated the feasibility of the proposed strategy for the detection of contouring errors with a low false detection rate. Conclusions: The proposed strategy can reliably identify contouring errors based upon inter- and intrastructural constraints derived from clinically approved contours. It holds great potential for improving the radiation therapy workflow. ROC and box plot analyses allow for analytical tuning of the system parameters to satisfy clinical requirements. Future work will focus on improving the strategy's reliability by utilizing more training sets and additional geometric attribute constraints.
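The sensitivity/specificity figures above, and a ROC-style threshold sweep, can be computed as in the following sketch. The scores and labels are synthetic stand-ins; the actual GAD model-fitting residuals are not reproduced here:

```python
import numpy as np

def sensitivity_specificity(scores, labels, threshold):
    """labels: 1 = true contouring error, 0 = acceptable contour.
    A score above the threshold is flagged as an error."""
    pred = scores > threshold
    tp = np.sum(pred & (labels == 1))   # errors correctly flagged
    tn = np.sum(~pred & (labels == 0))  # good contours correctly passed
    fn = np.sum(~pred & (labels == 1))
    fp = np.sum(pred & (labels == 0))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical model-fitting residual scores for known-bad (1) and
# known-good (0) contours.
scores = np.array([0.9, 0.8, 0.75, 0.3, 0.2, 0.7, 0.4, 0.1])
labels = np.array([1, 1, 1, 0, 0, 1, 0, 0])

# Sweep candidate thresholds; keep the one maximizing Youden's J = sens + spec - 1.
best = max(np.unique(scores),
           key=lambda t: sum(sensitivity_specificity(scores, labels, t)) - 1)
sens, spec = sensitivity_specificity(scores, labels, best)
print(float(best), float(sens), float(spec))
```

Clinically motivated constraints (e.g. a floor on specificity) can be applied by restricting the sweep rather than maximizing Youden's J.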
Wang, Zhuo; Camino, Acner; Hagag, Ahmed M; Wang, Jie; Weleber, Richard G; Yang, Paul; Pennesi, Mark E; Huang, David; Li, Dengwang; Jia, Yali
2018-05-01
Optical coherence tomography (OCT) can demonstrate early deterioration of the photoreceptor integrity caused by inherited retinal degeneration diseases (IRDs). A machine learning method based on random forests was developed to automatically detect continuous areas of preserved ellipsoid zone structure (an easily recognizable part of the photoreceptors on OCT) in 16 eyes of patients with choroideremia (a type of IRD). Pseudopodial extensions protruding from the preserved ellipsoid zone areas are detected separately by a local active contour routine. The algorithm is implemented on en face images with minimum segmentation requirements, only needing delineation of the Bruch's membrane, thus evading the inaccuracies and technical challenges associated with automatic segmentation of the ellipsoid zone in eyes with severe retinal degeneration. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, J; Gong, Y; Bar-Ad, V
Purpose: Accurate contour delineation is crucial for radiotherapy. Atlas-based automatic segmentation tools can be used to increase the efficiency of contour accuracy evaluation. This study aims to optimize technical parameters utilized in the tool by exploring the impact of library size and atlas number on the accuracy of cardiac contour evaluation. Methods: Patient CT DICOMs from RTOG 0617 were used for this study. Five experienced physicians delineated the cardiac structures, including pericardium, atria, and ventricles, following an atlas guideline. The consistency of cardiac structure delineation using the atlas guideline was verified by a study with four observers and seventeen patients. The CT and cardiac structure DICOM files were then used for the ABAS technique. To study the impact of library size (LS) and atlas number (AN) on automatic contour accuracy, automatic contours were generated with varied technique parameters for five randomly selected patients. Three LS values (20, 60, and 100) were studied using commercially available software. The AN was four, as recommended by the manufacturer. Using the manual contour as the gold standard, the Dice Similarity Coefficient (DSC) was calculated between the manual and automatic contours. Five-patient-averaged DSCs were calculated for comparison for each cardiac structure. In order to study the impact of AN, the LS was set to 100, and AN was tested from one to five. The five-patient-averaged DSCs were also calculated for each cardiac structure. Results: DSC values are highest when LS is 100 and AN is four. The DSC is 0.90±0.02 for pericardium, 0.75±0.06 for atria, and 0.86±0.02 for ventricles. Conclusion: By comparing DSC values, the combination AN=4 and LS=100 gives the best performance. This project was supported by NCI grants U24CA12014, U24CA180803, U10CA180868, U10CA180822, a PA CURE grant, and Bristol-Myers Squibb and Eli Lilly.
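The Dice Similarity Coefficient used for comparison above is defined as 2|A ∩ B| / (|A| + |B|). A minimal sketch with synthetic masks (not the study's data):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice Similarity Coefficient, 2|A ∩ B| / (|A| + |B|), for binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

manual = np.zeros((20, 20), dtype=bool); manual[5:15, 5:15] = True  # gold standard
auto = np.zeros((20, 20), dtype=bool);   auto[6:16, 6:16] = True    # auto contour
# A one-voxel diagonal shift of a 10x10 square: overlap 9x9 = 81 voxels,
# so DSC = 2*81 / (100+100) = 0.81.
print(round(float(dice(manual, auto)), 2))
```

Averaging this value over patients, as in the study, is then a simple mean over per-case DSCs.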
Broch, Ole; Bein, Berthold; Gruenewald, Matthias; Masing, Sarah; Huenges, Katharina; Haneya, Assad; Steinfath, Markus; Renner, Jochen
2016-01-01
Objective. Today, several different pulse contour algorithms exist for the calculation of cardiac output (CO). The aim of the present study was to compare the accuracy of nine different pulse contour algorithms with transpulmonary thermodilution before and after cardiopulmonary bypass (CPB). Methods. Thirty patients scheduled for elective coronary surgery were studied before and after CPB. A passive leg raising maneuver was also performed. Measurements included CO obtained by transpulmonary thermodilution (CO_TPTD) and by nine pulse contour algorithms (CO_X1-9). Calibration of the pulse contour algorithms was performed by esophageal Doppler ultrasound after induction of anesthesia and 15 min after CPB. Correlations, Bland-Altman analysis, and four-quadrant and polar analyses were also calculated. Results. There was only a poor correlation between CO_TPTD and CO_X1-9 during passive leg raising and in the periods before and after CPB. The percentage error exceeded the required 30% limit. Four-quadrant and polar analysis revealed poor trending ability for most algorithms before and after CPB. The Liljestrand-Zander algorithm revealed the best reliability. Conclusions. Estimation of CO by nine different pulse contour algorithms revealed poor accuracy compared with transpulmonary thermodilution. Furthermore, the less-invasive algorithms showed an insufficient capability for trending hemodynamic changes before and after CPB. The Liljestrand-Zander algorithm demonstrated the highest reliability. This trial is registered with NCT02438228 (ClinicalTrials.gov).
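The 30% percentage-error criterion referenced above is conventionally computed as 1.96 times the standard deviation of the paired differences (the Bland-Altman limits of agreement), divided by the mean reference CO. A sketch with hypothetical paired readings, not the study's data:

```python
import numpy as np

def percentage_error(co_ref, co_test):
    """Percentage error: 1.96 * SD of the paired differences divided by the
    mean reference CO; values above ~30% are conventionally taken to
    indicate clinically unacceptable agreement."""
    diff = np.asarray(co_test) - np.asarray(co_ref)
    return 100.0 * 1.96 * np.std(diff, ddof=1) / np.mean(co_ref)

# Hypothetical paired CO readings (L/min): thermodilution vs. pulse contour.
co_tptd = [4.8, 5.2, 6.1, 4.5, 5.9, 5.0]
co_pc = [4.1, 6.0, 5.2, 5.3, 6.8, 4.2]
pe = percentage_error(co_tptd, co_pc)
print(pe > 30.0)  # this synthetic example fails the 30% criterion
```

Note the sample standard deviation (`ddof=1`) is used, matching the usual Bland-Altman convention for small samples.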
Data mining for multiagent rules, strategies, and fuzzy decision tree structure
NASA Astrophysics Data System (ADS)
Smith, James F., III; Rhyne, Robert D., II; Fisher, Kristin
2002-03-01
A fuzzy logic based resource manager (RM) has been developed that automatically allocates electronic attack resources in real-time over many dissimilar platforms. Two different data mining algorithms have been developed to determine rules, strategies, and fuzzy decision tree structure. The first data mining algorithm uses a genetic algorithm as a data mining function and is called from an electronic game. The game allows a human expert to play against the resource manager in a simulated battlespace with each of the defending platforms being exclusively directed by the fuzzy resource manager and the attacking platforms being controlled by the human expert or operating autonomously under their own logic. This approach automates the data mining problem. The game automatically creates a database reflecting the domain expert's knowledge. It calls a data mining function, a genetic algorithm, for data mining of the database as required and allows easy evaluation of the information mined in the second step. The criterion for re-optimization is discussed as well as experimental results. Then a second data mining algorithm that uses a genetic program as a data mining function is introduced to automatically discover fuzzy decision tree structures. Finally, a fuzzy decision tree generated through this process is discussed.
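A genetic algorithm used as a data-mining function follows the usual selection/crossover/mutation loop. The sketch below evolves bitstrings against a toy fitness function standing in for the expert-database fitness measure; all parameters are hypothetical:

```python
import random

random.seed(0)

def fitness(bits):
    # Toy stand-in for the fitness measure computed against the
    # expert-generated database (here: number of 1-bits).
    return sum(bits)

def evolve(pop_size=30, length=16, generations=60, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

In the actual RM application, the bitstring would encode candidate rules or strategies, and fitness would score them against the recorded expert play.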
Tumour auto-contouring on 2d cine MRI for locally advanced lung cancer: A comparative study.
Fast, Martin F; Eiben, Björn; Menten, Martin J; Wetscherek, Andreas; Hawkes, David J; McClelland, Jamie R; Oelfke, Uwe
2017-12-01
Radiotherapy guidance based on magnetic resonance imaging (MRI) is currently becoming a clinical reality. Fast 2d cine MRI sequences are expected to increase the precision of radiation delivery by facilitating tumour delineation during treatment. This study compares four auto-contouring algorithms for the task of delineating the primary tumour in six locally advanced (LA) lung cancer patients. Twenty-two cine MRI sequences were acquired using either a balanced steady-state free precession or a spoiled gradient echo imaging technique. Contours derived by the auto-contouring algorithms were compared against manual reference contours. A selection of eight image data sets was also used to assess the inter-observer delineation uncertainty. Algorithmically derived contours agreed well with the manual reference contours (median Dice similarity index: ⩾0.91). Multi-template matching and deformable image registration performed significantly better than feature-driven registration and the pulse-coupled neural network (PCNN). Neither MRI sequence nor image orientation was a conclusive predictor for algorithmic performance. Motion significantly degraded the performance of the PCNN. The inter-observer variability was of the same order of magnitude as the algorithmic performance. Auto-contouring of tumours on cine MRI is feasible in LA lung cancer patients. Despite large variations in implementation complexity, the different algorithms all have relatively similar performance. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
Automatic correction of dental artifacts in PET/MRI
Ladefoged, Claes N.; Andersen, Flemming L.; Keller, Sune. H.; Beyer, Thomas; Law, Ian; Højgaard, Liselotte; Darkner, Sune; Lauze, Francois
2015-01-01
A challenge when using current magnetic resonance (MR)-based attenuation correction in positron emission tomography/MR imaging (PET/MRI) is that the MRIs can have a signal void around the dental fillings that is segmented as artificial air-regions in the attenuation map. For artifacts connected to the background, we propose an extension to an existing active contour algorithm to delineate the outer contour using the nonattenuation corrected PET image and the original attenuation map. We propose a combination of two different methods for differentiating the artifacts within the body from the anatomical air-regions by first using a template of artifact regions, and second, representing the artifact regions with a combination of active shape models and k-nearest-neighbors. The accuracy of the combined method has been evaluated using 25 F18-fluorodeoxyglucose PET/MR patients. Results showed that the approach was able to correct an average of 97±3% of the artifact areas. PMID:26158104
Automatic estimation of heart boundaries and cardiothoracic ratio from chest x-ray images
NASA Astrophysics Data System (ADS)
Dallal, Ahmed H.; Agarwal, Chirag; Arbabshirani, Mohammad R.; Patel, Aalpen; Moore, Gregory
2017-03-01
Cardiothoracic ratio (CTR) is a widely used radiographic index to assess heart size on chest X-rays (CXRs). Recent studies have suggested that two-dimensional CTR might also contain clinical information about heart function. However, manual measurement of such indices is both subjective and time consuming. This study proposes a fast algorithm to automatically estimate CTR indices from CXRs. The algorithm has three main steps: 1) model-based lung segmentation, 2) estimation of heart boundaries from lung contours, and 3) computation of cardiothoracic indices from the estimated boundaries. We extended a previously employed lung detection algorithm to automatically estimate heart boundaries without using ground truth heart markings. We used two datasets: a publicly available dataset with 247 images as well as a clinical dataset with 167 studies from Geisinger Health System. The models of lung fields are learned from both datasets. The lung regions in a given test image are estimated by registering the learned models to patient CXRs. Then, the heart region is estimated by applying the Harris operator on the segmented lung fields to detect the corner points corresponding to the heart boundaries. The algorithm calculates three indices: CTR1D, CTR2D, and the cardiothoracic area ratio (CTAR). The method was tested on 103 clinical CXRs, and average error rates of 7.9%, 25.5%, and 26.4% (for CTR1D, CTR2D, and CTAR, respectively) were achieved. The proposed method outperforms previous CTR estimation methods without using any heart templates. This method can have important clinical implications, as it can provide fast and accurate estimates of cardiothoracic indices.
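The one-dimensional CTR itself is a simple ratio of transverse diameters; given estimated boundary x-coordinates, it can be computed as in this sketch (pixel values are hypothetical):

```python
def ctr_1d(heart_left, heart_right, thorax_left, thorax_right):
    """One-dimensional cardiothoracic ratio: maximal transverse heart
    diameter divided by maximal transverse thoracic diameter, with all
    arguments given as x-coordinates in pixels."""
    return (heart_right - heart_left) / (thorax_right - thorax_left)

# Hypothetical x-coordinates from estimated heart and thorax boundaries.
# Ratios above ~0.5 are conventionally read as suggesting cardiomegaly.
print(round(ctr_1d(310, 650, 120, 820), 2))
```

CTR2D and CTAR generalize the same idea to areas, dividing heart-region area by thoracic area instead of diameters.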
Information mining in remote sensing imagery
NASA Astrophysics Data System (ADS)
Li, Jiang
The volume of remotely sensed imagery continues to grow at an enormous rate due to the advances in sensor technology, and our capability for collecting and storing images has greatly outpaced our ability to analyze and retrieve information from the images. This motivates us to develop image information mining techniques, which is very much an interdisciplinary endeavor drawing upon expertise in image processing, databases, information retrieval, machine learning, and software design. This dissertation proposes and implements an extensive remote sensing image information mining (ReSIM) system prototype for mining useful information implicitly stored in remote sensing imagery. The system consists of three modules: image processing subsystem, database subsystem, and visualization and graphical user interface (GUI) subsystem. Land cover and land use (LCLU) information corresponding to spectral characteristics is identified by supervised classification based on support vector machines (SVM) with automatic model selection, while textural features that characterize spatial information are extracted using Gabor wavelet coefficients. Within LCLU categories, textural features are clustered using an optimized k-means clustering approach to acquire search efficient space. The clusters are stored in an object-oriented database (OODB) with associated images indexed in an image database (IDB). A k-nearest neighbor search is performed using a query-by-example (QBE) approach. Furthermore, an automatic parametric contour tracing algorithm and an O(n) time piecewise linear polygonal approximation (PLPA) algorithm are developed for shape information mining of interesting objects within the image. A fuzzy object-oriented database based on the fuzzy object-oriented data (FOOD) model is developed to handle the fuzziness and uncertainty. 
Three specific applications are presented: integrated land cover and texture pattern mining, shape information mining for change detection of lakes, and fuzzy normalized difference vegetation index (NDVI) pattern mining. The study results show the effectiveness of the proposed system prototype and the potentials for other applications in remote sensing.
La Macchia, Mariangela; Fellin, Francesco; Amichetti, Maurizio; Cianchetti, Marco; Gianolini, Stefano; Paola, Vitali; Lomax, Antony J; Widesott, Lamberto
2012-09-18
To validate, in the context of adaptive radiotherapy, three commercial software solutions for atlas-based segmentation. Fifteen patients, five for each group, with cancer of the Head&Neck, pleura, and prostate were enrolled in the study. In addition to the treatment planning CT (pCT) images, one replanning CT (rCT) image set was acquired for each patient during the RT course. Three experienced physicians outlined on the pCT and rCT all the volumes of interest (VOIs). We used three software solutions (VelocityAI 2.6.2 (V), MIM 5.1.1 (M) by MIMVista, and ABAS 2.0 (A) by CMS-Elekta) to generate the automatic contouring on the repeated CT. All the VOIs obtained with automatic contouring (AC) were successively corrected manually. We recorded the time needed for: 1) ex novo VOI definition on rCT; 2) generation of AC by the three software solutions; 3) manual correction of AC. To compare the quality of the volumes obtained automatically by the software and manually corrected with those drawn from scratch on rCT, we used the following indices: overlap coefficient (DICE), sensitivity, inclusiveness index, difference in volume, and displacement differences on three axes (x, y, z) from the isocenter. The time saved by the three software solutions for all the sites, compared to manual contouring from scratch, is statistically significant and similar across the three solutions. The time saved for each site is as follows: about an hour for Head&Neck, about 40 minutes for prostate, and about 20 minutes for mesothelioma. The best DICE similarity coefficient index was obtained with manual correction for: A (contours for prostate), A and M (contours for H&N), and M (contours for mesothelioma). From a clinical point of view, the automated contouring workflow was shown to be significantly shorter than the manual contouring process, even though manual correction of the VOIs is always needed.
Geometry Processing of Conventionally Produced Mouse Brain Slice Images.
Agarwal, Nitin; Xu, Xiangmin; Gopi, M
2018-04-21
Brain mapping research in most neuroanatomical laboratories relies on conventional processing techniques, which often introduce histological artifacts such as tissue tears and tissue loss. In this paper we present techniques and algorithms for automatic registration and 3D reconstruction of conventionally produced mouse brain slices in a standardized atlas space. This is achieved first by constructing a virtual 3D mouse brain model from annotated slices of the Allen Reference Atlas (ARA). Virtual re-slicing of the reconstructed model generates ARA-based slice images corresponding to the microscopic images of histological brain sections. These image pairs are aligned using a geometric approach through contour images. Histological artifacts in the microscopic images are detected and removed using Constrained Delaunay Triangulation before performing global alignment. Finally, non-linear registration is performed by solving Laplace's equation with Dirichlet boundary conditions. Our methods provide significant improvements over previously reported registration techniques for the tested slices in 3D space, especially on slices with significant histological artifacts. Further, as one application, we count the number of neurons in various anatomical regions using a dataset of 51 microscopic slices from a single mouse brain. To the best of our knowledge, the presented work is the first that automatically registers both clean and highly damaged high-resolution histological slices of mouse brain to a 3D annotated reference atlas space. This work represents a significant contribution to this subfield of neuroscience, as it provides neuroanatomists with tools for analyzing and processing histological data. Copyright © 2018 Elsevier B.V. All rights reserved.
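Solving Laplace's equation with Dirichlet boundary conditions, as used for the non-linear registration step above, can be approximated on a grid by Jacobi iteration: interior values relax to the average of their four neighbours while the boundary stays fixed, so the solution smoothly interpolates the boundary values. A minimal sketch; the grid size and boundary values are illustrative only:

```python
import numpy as np

# 20x20 grid; the top edge is held at 1 (a known boundary value, e.g. a
# prescribed displacement on the contour), all other edges at 0.
u = np.zeros((20, 20))
u[0, :] = 1.0

for _ in range(2000):
    # Jacobi update: each interior point becomes the mean of its four
    # neighbours; the boundary rows/columns are never overwritten.
    interior = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
    u[1:-1, 1:-1] = interior

print(0.0 < u[10, 10] < 1.0)  # interior values lie between the boundary extremes
```

By the maximum principle, the converged solution never exceeds the boundary extremes, which is what makes it a well-behaved interpolant for deformation fields.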
Lavdas, Ioannis; Glocker, Ben; Kamnitsas, Konstantinos; Rueckert, Daniel; Mair, Henrietta; Sandhu, Amandeep; Taylor, Stuart A; Aboagye, Eric O; Rockall, Andrea G
2017-10-01
As part of a program to implement automatic lesion detection methods for whole body magnetic resonance imaging (MRI) in oncology, we have developed, evaluated, and compared three algorithms for fully automatic, multiorgan segmentation in healthy volunteers. The first algorithm is based on classification forests (CFs), the second on 3D convolutional neural networks (CNNs), and the third on a multi-atlas (MA) approach. We examined data from 51 healthy volunteers, scanned prospectively with a standardized, multiparametric whole body MRI protocol at 1.5 T. The study was approved by the local ethics committee and written consent was obtained from the participants. MRI data were used as input data to the algorithms, while training was based on manual annotation of the anatomies of interest by clinical MRI experts. Fivefold cross-validation experiments were run on 34 artifact-free subjects. We report three overlap and three surface distance metrics to evaluate the agreement between the automatic and manual segmentations, namely the Dice similarity coefficient (DSC), recall (RE), precision (PR), average surface distance (ASD), root-mean-square surface distance (RMSSD), and Hausdorff distance (HD). Analysis of variance was used to compare pooled label metrics between the three algorithms and the DSC on a 'per-organ' basis. A Mann-Whitney U test was used to compare the pooled metrics between CFs and CNNs and the DSC on a 'per-organ' basis, when using different imaging combinations as input for training. All three algorithms resulted in robust segmenters that were effectively trained using a relatively small number of datasets, an important consideration in the clinical setting. Mean overlap metrics for all the segmented structures were: CFs: DSC = 0.70 ± 0.18, RE = 0.73 ± 0.18, PR = 0.71 ± 0.14; CNNs: DSC = 0.81 ± 0.13, RE = 0.83 ± 0.14, PR = 0.82 ± 0.10; MA: DSC = 0.71 ± 0.22, RE = 0.70 ± 0.34, PR = 0.77 ± 0.15.
Mean surface distance metrics for all the segmented structures were: CFs: ASD = 13.5 ± 11.3 mm, RMSSD = 34.6 ± 37.6 mm and HD = 185.7 ± 194.0 mm; CNNs: ASD = 5.48 ± 4.84 mm, RMSSD = 17.0 ± 13.3 mm and HD = 199.0 ± 101.2 mm; MA: ASD = 4.22 ± 2.42 mm, RMSSD = 6.13 ± 2.55 mm, and HD = 38.9 ± 28.9 mm. The pooled performance of CFs improved when all imaging combinations (T2w + T1w + DWI) were used as input, while the performance of CNNs deteriorated, though in neither case significantly. CNNs with T2w images as input performed significantly better than CFs with all imaging combinations as input for all anatomical labels, except for the bladder. Three state-of-the-art algorithms were developed and used to automatically segment major organs and bones in whole body MRI; good agreement with manual segmentations performed by clinical MRI experts was observed. CNNs perform favorably when using T2w volumes as input. Using multimodal MRI data as input to CNNs did not improve the segmentation performance. © 2017 American Association of Physicists in Medicine.
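The surface-distance metrics reported here (ASD, RMSSD, HD) can be sketched with brute-force nearest-neighbour distances between point samples of the two segmentation surfaces; symmetric definitions are assumed and may differ in detail from the study's:

```python
import numpy as np

def surface_distances(pts_a, pts_b):
    """Surface-distance metrics between two point sets sampled from
    segmentation surfaces (N×D and M×D arrays), via the full pairwise
    distance matrix (fine for small point sets)."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    a2b = d.min(axis=1)   # each point of A to its nearest point of B
    b2a = d.min(axis=0)
    asd = 0.5 * (a2b.mean() + b2a.mean())             # average surface distance
    rmssd = np.sqrt(0.5 * ((a2b**2).mean() + (b2a**2).mean()))
    hd = max(a2b.max(), b2a.max())                    # symmetric Hausdorff
    return asd, rmssd, hd

# Two parallel 'surfaces' one unit apart: all three metrics equal 1
a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 1.0], [1.0, 1.0]])
asd, rmssd, hd = surface_distances(a, b)
```

ASD averages over the whole surface while HD is driven by the single worst point, which is why HD values in the table above are an order of magnitude larger than ASD for the same segmenter.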
Daisne, Jean-François; Blumhofer, Andreas
2013-06-26
Intensity modulated radiotherapy for head and neck cancer necessitates accurate definition of organs at risk (OAR) and clinical target volumes (CTV). This crucial step is time consuming and prone to inter- and intra-observer variations. Automatic segmentation by atlas deformable registration may help to reduce time and variations. We aim to test a new commercial atlas algorithm for automatic segmentation of OAR and CTV in both ideal and clinical conditions. The updated Brainlab automatic head and neck atlas segmentation was tested on 20 patients: 10 cN0-stages (ideal population) and 10 unselected N-stages (clinical population). Following manual delineation of OAR and CTV, automatic segmentation of the same set of structures was performed and afterwards manually corrected. Dice Similarity Coefficient (DSC), Average Surface Distance (ASD) and Maximal Surface Distance (MSD) were calculated for "manual to automatic" and "manual to corrected" volume comparisons. In both groups, automatic segmentation saved about 40% of the corresponding manual segmentation time. This effect was more pronounced for OAR than for CTV. Editing the automatically obtained contours significantly improved DSC, ASD and MSD. Large distortions of normal anatomy or lack of iodine contrast were the limiting factors. The updated Brainlab atlas-based automatic segmentation tool for head and neck cancer patients is time-saving but still necessitates review and corrections by an expert.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rong, Y; Rao, S; Daly, M
Purpose: Adaptive radiotherapy requires complete new sets of region-of-interest (ROI) delineations on the mid-treatment CT images. This work aims at evaluating the accuracy of the RayStation hybrid deformable image registration (DIR) algorithm for its overall integrity and accuracy in contour propagation for adaptive planning. Methods: The hybrid DIR is based on the combination of an intensity-based algorithm and anatomical information provided by contours. Patients who received mid-treatment CT scans were identified for the study, including six lung patients (two mid-treatment CTs) and six head-and-neck (HN) patients (one mid-treatment CT). DIR-propagated ROIs were compared with physician-drawn ROIs for 8 ITVs and 7 critical organs (lungs, heart, esophagus, etc.) for the lung patients, as well as 14 GTVs and 20 critical organs (mandible, eyes, parotids, etc.) for the HN patients. Volume difference, center of mass (COM) difference, and Dice index were used for evaluation. Clinical relevance of each propagated ROI was scored by two physicians and correlated with the Dice index. Results: For critical organs, good agreement (Dice > 0.9) was seen on all 7 for lung patients and 13 out of 20 for HN patients, with the rest requiring minimal edits. For targets, COM differences were within 5 mm on average for all patients. For lung, 5 out of 8 ITVs required minimal edits (Dice 0.8–0.9), while the remaining 2 needed to be redrawn due to their small volumes (<10 cc). However, the propagated HN GTVs resulted in relatively low Dice values (0.5–0.8) due to their small volumes (3–40 cc) and high variability, among which 2 required redrawing due to new nodal targets identified on the mid-treatment CT scans. Conclusion: The hybrid DIR algorithm was found to be clinically useful and efficient for lung and HN patients, especially for propagated critical organ ROIs. It has potential to significantly improve the workflow in adaptive planning.
Zhou, Wu
2014-01-01
The accurate contour delineation of the target and/or organs at risk (OAR) is essential in treatment planning for image-guided radiation therapy (IGRT). Although many automatic contour delineation approaches have been proposed, few of them fulfill the requirements of clinical applications in terms of accuracy and efficiency. Moreover, clinicians would like to analyze the characteristics of regions of interest (ROI) and adjust contours manually during IGRT; an interactive tool for contour delineation is necessary in such cases. In this work, a novel approach of curve fitting for interactive contour delineation is proposed. It allows users to quickly improve contours with a simple mouse click. Initially, a region containing the object of interest is selected in the image; the program then automatically selects important control points from the region boundary, and the method of Hermite cubic curves is used to fit the control points. Hence, the optimized curve can be revised by moving its control points interactively. Several curve fitting methods are presented for comparison. Finally, in order to improve the accuracy of contour delineation, a curve refinement based on the maximum gradient magnitude is proposed: all the points on the curve are revised automatically towards the positions with maximum gradient magnitude. Experimental results show that Hermite cubic curves and the gradient-based curve refinement possess superior performance on the proposed platform in terms of accuracy, robustness, and computation time. Experimental results on real medical images demonstrate the efficiency, accuracy, and robustness of the proposed process in clinical applications. PACS number: 87.53.Tf PMID:24423846
Avendi, M R; Kheradvar, Arash; Jafarkhani, Hamid
2016-05-01
Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from the ground truth data. Convolutional networks are employed to automatically detect the LV chamber in the MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the-art methods. Excellent agreement with the ground truth was achieved. The validation metrics (percentage of good contours, Dice metric, average perpendicular distance, and conformity) were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78 obtained by other methods, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
Soffientini, Chiara D; De Bernardi, Elisabetta; Casati, Rosangela; Baselli, Giuseppe; Zito, Felicia
2017-01-01
Design, realization, scan, and characterization of a phantom for PET Automatic Segmentation (PET-AS) assessment are presented. Radioactive zeolites immersed in a radioactive heterogeneous background simulate realistic wall-less lesions with known irregular shape and known homogeneous or heterogeneous internal activity. Three different zeolite families were evaluated in terms of radioactive uptake homogeneity, necessary to define the activity and contour ground truth. Heterogeneous lesions were simulated by the perfect matching of two portions of a broken zeolite, soaked in two different 18F-FDG radioactive solutions. Heterogeneous backgrounds were obtained with tissue paper balls and sponge pieces immersed into radioactive solutions. Natural clinoptilolite proved to be the most suitable zeolite for the construction of artificial objects mimicking homogeneous and heterogeneous uptakes in 18F-FDG PET lesions. Heterogeneous backgrounds showed a coefficient of variation equal to 269% and 443% of that of a uniform radioactive solution. The assembled phantom included eight lesions with volumes ranging from 1.86 to 7.24 ml and lesion-to-background contrasts ranging from 4.8:1 to 21.7:1. A novel phantom for the evaluation of PET-AS algorithms was developed. It is provided with both reference contours and activity ground truth, and it covers a wide range of volumes and lesion-to-background contrasts. The dataset is open to the community of PET-AS developers and utilizers. © 2016 American Association of Physicists in Medicine.
Segmenting overlapping nano-objects in atomic force microscopy image
NASA Astrophysics Data System (ADS)
Wang, Qian; Han, Yuexing; Li, Qing; Wang, Bing; Konagaya, Akihiko
2018-01-01
Recently, techniques for nanoparticles have rapidly been developed for various fields, such as material science, medicine, and biology. In particular, methods of image processing have widely been used to automatically analyze nanoparticles. A technique to automatically segment overlapping nanoparticles with image processing and machine learning is proposed. Here, two tasks are necessary: elimination of image noise and separation of the overlapping shapes. For the first task, mean square error and the seed fill algorithm are adopted to remove noise and improve the quality of the original image. For the second task, four steps are needed to segment the overlapping nanoparticles. First, candidate split lines are obtained by connecting the high-curvature pixels on the contours. Second, the candidate split lines are classified with a machine learning algorithm. Third, the overlapping regions are detected with the method of density-based spatial clustering of applications with noise (DBSCAN). Finally, the best split lines are selected with a constrained minimum value. We give some experimental examples and compare our technique with two other methods. The results show the effectiveness of the proposed technique.
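The third step groups overlap regions with DBSCAN. A self-contained miniature of the clustering itself, with illustrative parameter values (the paper's settings are not given here):

```python
import numpy as np

def dbscan(points, eps=1.0, min_pts=3):
    """Minimal DBSCAN: returns one label per point (-1 = noise).
    Core points have >= min_pts neighbours (self included) within
    radius eps; clusters grow by expansion from core points."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    neigh = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neigh[i]) < min_pts:
            continue                      # already assigned, or not core
        labels[i] = cluster               # start a new cluster at core point i
        stack = list(neigh[i])
        while stack:                      # expand through density-connected points
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neigh[j]) >= min_pts:
                    stack.extend(neigh[j])
        cluster += 1
    return labels

# Two well-separated point blobs -> two clusters
pts = [[0, 0], [0, 0.5], [0.5, 0], [10, 10], [10, 10.5], [10.5, 10]]
labels = dbscan(pts, eps=1.0, min_pts=3)
```

On contour data, the points would be the endpoints of the candidate split lines, so each DBSCAN cluster corresponds to one overlap region.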
Geraghty, John P; Grogan, Garry; Ebert, Martin A
2013-04-30
This study investigates the variation in segmentation of several pelvic anatomical structures on computed tomography (CT) between multiple observers and a commercial automatic segmentation method, in the context of quality assurance and evaluation during a multicentre clinical trial. CT scans of two prostate cancer patients ('benchmarking cases'), one high risk (HR) and one intermediate risk (IR), were sent to multiple radiotherapy centres for segmentation of prostate, rectum and bladder structures according to the TROG 03.04 "RADAR" trial protocol definitions. The same structures were automatically segmented using iPlan software for the same two patients, allowing structures defined by automatic segmentation to be quantitatively compared with those defined by multiple observers. A sample of twenty trial patient datasets were also used to automatically generate anatomical structures for quantitative comparison with structures defined by individual observers for the same datasets. There was considerable agreement amongst all observers and automatic segmentation of the benchmarking cases for bladder (mean spatial variations < 0.4 cm across the majority of image slices). Although there was some variation in interpretation of the superior-inferior (cranio-caudal) extent of rectum, human-observer contours were typically within a mean 0.6 cm of automatically-defined contours. Prostate structures were more consistent for the HR case than the IR case with all human observers segmenting a prostate with considerably more volume (mean +113.3%) than that automatically segmented. Similar results were seen across the twenty sample datasets, with disagreement between iPlan and observers dominant at the prostatic apex and superior part of the rectum, which is consistent with observations made during quality assurance reviews during the trial. This study has demonstrated quantitative analysis for comparison of multi-observer segmentation studies. 
For automatic segmentation algorithms based on image registration, as in iPlan, it is apparent that agreement between observer and automatic segmentation will be a function of patient-specific image characteristics, particularly for anatomy with poor contrast definition. For this reason, it is suggested that automatic registration based on transformation of a single reference dataset adds a significant systematic bias to the resulting volumes, and their use in the context of a multicentre trial should be carefully considered.
A semi-automatic computer-aided method for surgical template design
NASA Astrophysics Data System (ADS)
Chen, Xiaojun; Xu, Lu; Yang, Yue; Egger, Jan
2016-02-01
This paper presents a generalized integrated framework for semi-automatic surgical template design. Several algorithms were implemented, including mesh segmentation, offset surface generation, collision detection, and ruled surface generation, and a special software tool named TemDesigner was developed. With a simple user interface, a customized template can be semi-automatically designed according to the preoperative plan. Firstly, mesh segmentation with a signed scalar per vertex is utilized to partition the inner surface from the input surface mesh based on the indicated point loop. Then, the offset surface of the inner surface is obtained by contouring the distance field of the inner surface, and segmented to generate the outer surface. A ruled surface is employed to connect the inner and outer surfaces. Finally, drilling tubes are generated according to the preoperative plan through collision detection and merging. It has been applied to template design for various kinds of surgeries, including oral implantology, cervical pedicle screw insertion, iliosacral screw insertion and osteotomy, demonstrating the efficiency, functionality and generality of our method.
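The offset-surface step contours a distance field of the inner surface. In 2D on a pixel grid the same idea looks like this (brute-force distances, and an assumed band width of one cell; the paper works on 3D meshes):

```python
import numpy as np

def offset_contour(mask, d):
    """Offset 'surface' of a 2D binary shape: the grid cells whose
    Euclidean distance to the shape lies in [d, d+1). Contouring the
    distance field at offset d is the 2D analogue of extracting the
    template's outer wall from the inner surface."""
    ys, xs = np.nonzero(mask)
    shape_pts = np.stack([ys, xs], axis=1)
    yy, xx = np.indices(mask.shape)
    grid = np.stack([yy.ravel(), xx.ravel()], axis=1)
    # distance from every grid cell to the nearest shape cell
    dist = np.linalg.norm(grid[:, None, :] - shape_pts[None, :, :],
                          axis=-1).min(axis=1).reshape(mask.shape)
    return (dist >= d) & (dist < d + 1)

# A single 'inner surface' pixel; the offset band is a ring of radius ~2
mask = np.zeros((9, 9), dtype=bool); mask[4, 4] = True
band = offset_contour(mask, 2.0)
```

For real meshes a fast distance transform (or a signed distance field) replaces the brute-force loop, but the contouring idea is the same.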
Automatized alignment control of wing mechanization in aerodynamic contour of aircraft
NASA Astrophysics Data System (ADS)
Odnokurtsev, K. A.
2018-05-01
This article describes a method for automated accuracy control of the aircraft aerodynamic contour when mounting wing mechanization (high-lift) elements. A control device equipped with distance sensors, installed in the wing assembly stand, is proposed. Deviations at control points are measured automatically by a dedicated computer program. Two kinds of sensor calibration are performed in advance to increase measurement accuracy. As a result, the time needed for control and adjustment of the mechanization elements is reduced.
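The advance sensor calibration can be sketched as a least-squares fit, assuming a linear sensor response; the readings and coefficients below are hypothetical, not from the article:

```python
import numpy as np

def calibrate_linear(raw, reference):
    """Least-squares linear calibration raw -> distance for one sensor.
    `raw` are sensor readings taken at known `reference` distances
    (e.g. against gauge blocks); returns (gain, offset)."""
    gain, offset = np.polyfit(raw, reference, 1)
    return gain, offset

# Hypothetical readings with a true response distance = 0.5*raw + 2.0
raw = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
ref = 0.5 * raw + 2.0
gain, offset = calibrate_linear(raw, ref)
```

With the calibration applied, each sensor's raw reading maps directly to a contour deviation at its control point.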
A man-made object detection for underwater TV
NASA Astrophysics Data System (ADS)
Cheng, Binbin; Wang, Wenwu; Chen, Yao
2018-03-01
Automatically searching for objects underwater is a challenging task. Usually forward-looking sonar is used to find the target, initial identification of the target is completed by side-scan sonar, and final confirmation of the target is accomplished by underwater TV. This paper presents an efficient method for automatic extraction of man-made sensitive targets in underwater TV. Firstly, the underwater TV image is simplified by taking full advantage of prior knowledge of the target and the background; then template matching is used for target detection; finally the target is confirmed by extracting parallel lines on the target contour. The algorithm is formulated for real-time execution on limited-memory commercial off-the-shelf platforms and is capable of detecting objects in underwater TV.
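The detection step uses template matching; a brute-force normalized cross-correlation sketch (a stand-in for illustration, not the paper's implementation):

```python
import numpy as np

def match_template(image, template):
    """Slide the template over the image and score each offset with
    normalized cross-correlation; return the (row, col) of the best
    match. O(N*M) brute force, fine for small images."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best, best_pos = -np.inf, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i+th, j:j+tw]
            wc = w - w.mean()
            denom = np.linalg.norm(wc) * tn
            score = (wc * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos

# Paste the template into a blank image and recover its position
img = np.zeros((10, 10))
tpl = np.array([[1.0, 2.0], [3.0, 4.0]])
img[3:5, 4:6] = tpl
pos = match_template(img, tpl)
```

Real-time versions compute the same scores via FFT-based correlation or integral images rather than an explicit double loop.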
Aishima, Jun; Russel, Daniel S; Guibas, Leonidas J; Adams, Paul D; Brunger, Axel T
2005-10-01
Automatic fitting methods that build molecules into electron-density maps usually fail below 3.5 Å resolution. As a first step towards addressing this problem, an algorithm has been developed using an approximation of the medial axis to simplify an electron-density isosurface. This approximation captures the central axis of the isosurface with a graph, which is then matched against a graph of the molecular model. One of the first applications of the medial axis to X-ray crystallography is presented here. When applied to ligand fitting, the method performs at least as well as methods based on selecting peaks in electron-density maps. Generalization of the method to recognition of common features across multiple contour levels could lead to powerful automatic fitting methods that perform well even at low resolution.
Atlas ranking and selection for automatic segmentation of the esophagus from CT scans
NASA Astrophysics Data System (ADS)
Yang, Jinzhong; Haas, Benjamin; Fang, Raymond; Beadle, Beth M.; Garden, Adam S.; Liao, Zhongxing; Zhang, Lifei; Balter, Peter; Court, Laurence
2017-12-01
In radiation treatment planning, the esophagus is an important organ-at-risk that should be spared in patients with head and neck cancer or thoracic cancer who undergo intensity-modulated radiation therapy. However, automatic segmentation of the esophagus from CT scans is extremely challenging because of the structure's inconsistent intensity, low contrast against the surrounding tissues, complex and variable shape and location, and random air bubbles. The goal of this study is to develop an online atlas selection approach that chooses a subset of optimal atlases for multi-atlas segmentation to delineate the esophagus automatically. We performed atlas selection in two phases. In the first phase, we used the correlation coefficient of the image content in a cubic region between each atlas and the new image to evaluate their similarity and to rank the atlases in an atlas pool. A subset of atlases based on this ranking was selected, and deformable image registration was performed to generate deformed contours and deformed images in the new image space. In the second phase of atlas selection, we used Kullback-Leibler divergence to measure the similarity of local-intensity histograms between the new image and each of the deformed images, and the measurements were used to rank the previously selected atlases. Deformed contours were overlapped sequentially, from the most to the least similar, and the overlap ratio was examined. We further identified a subset of optimal atlases by analyzing the variation of the overlap ratio versus the number of atlases. The deformed contours from these optimal atlases were fused together using a modified simultaneous truth and performance level estimation algorithm to produce the final segmentation. The approach was validated with promising results using both internal data sets (21 head and neck cancer patients and 15 thoracic cancer patients) and external data sets (30 thoracic patients).
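The second selection phase ranks atlases by Kullback-Leibler divergence between local intensity histograms; a compact sketch, with illustrative bin counts and intensity ranges:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    """KL(P||Q) between two intensity histograms (lower = more similar).
    Histograms are normalized to probability distributions; eps guards
    against empty bins."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def rank_atlases(new_img, deformed_imgs, bins=32, rng=(0, 256)):
    """Rank deformed atlas images by histogram similarity to the new
    image, most similar first (a stand-in for the second phase above)."""
    h_new, _ = np.histogram(new_img, bins=bins, range=rng)
    scores = [kl_divergence(h_new, np.histogram(img, bins=bins, range=rng)[0])
              for img in deformed_imgs]
    return np.argsort(scores)

# Atlas A shares the new image's intensity distribution; atlas B does not
new_img = np.arange(100, 120)
atlas_a = np.arange(100, 120)
atlas_b = np.arange(200, 220)
order = rank_atlases(new_img, [atlas_a, atlas_b])
```

In the full method the histograms are restricted to a local region around the registered esophagus rather than the whole image.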
The contour-buildup algorithm to calculate the analytical molecular surface.
Totrov, M; Abagyan, R
1996-01-01
A new algorithm is presented to calculate the analytical molecular surface, defined as a smooth envelope traced out by the surface of a probe sphere rolled over the molecule. The core of the algorithm is the sequential build-up of multi-arc contours on the van der Waals spheres. This algorithm yields a substantial reduction in both the memory and time requirements of surface calculations. Further, the contour-buildup principle is intrinsically "local", which makes calculations of partial molecular surfaces even more efficient. Additionally, the algorithm is equally applicable not only to convex patches but also to concave triangular patches, which may have complex multiple intersections. The algorithm permits the rigorous calculation of the full analytical molecular surface for a 100-residue protein in about 2 seconds on an SGI Indigo with an R4400 processor at 150 MHz, with performance scaling almost linearly with protein size. The contour-buildup algorithm is an order of magnitude faster than the original Connolly algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chao, M; Yuan, Y; Lo, Y
Purpose: To develop a novel strategy to extract lung tumor motion from cone beam CT (CBCT) projections using an active contour model with interpolated respiration learned from diaphragm motion. Methods: Tumor tracking on CBCT projections was accomplished with templates derived from the planning CT (pCT). There are three major steps in the proposed algorithm: 1) The pCT was modified to form two CT sets, a tumor-removed pCT and a tumor-only pCT; the respective digitally reconstructed radiographs, DRRtr and DRRto, following the same geometry as the CBCT projections, were generated correspondingly. 2) The DRRtr was rigidly registered with the CBCT projections on a frame-by-frame basis. Difference images between the CBCT projections and the registered DRRtr were generated, in which tumor visibility was appreciably enhanced. 3) An active contour method was applied to track the tumor motion on the tumor-enhanced projections, with DRRto as templates to initialize the tracking, while the respiratory motion was compensated for by interpolating the diaphragm motion estimated by our novel constrained linear regression approach. CBCT and pCT from five patients undergoing stereotactic body radiotherapy were included, in addition to scans from a Quasar phantom programmed with known motion. Manual tumor tracking was performed on the CBCT projections and compared to the automatic tracking to evaluate the algorithm's accuracy. Results: The phantom study showed that the error between the automatic tracking and the ground truth was within 0.2 mm. For the patients, the discrepancy between the calculation and the manual tracking was between 1.4 and 2.2 mm, depending on the location and shape of the lung tumor. Similar patterns were observed in the frequency domain. Conclusion: The new algorithm demonstrated the feasibility of tracking the lung tumor from noisy CBCT projections, providing a potential solution to better motion management for lung radiation therapy.
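Step 3 compensates respiration by interpolating a diaphragm trace estimated on a subset of projections. A minimal sketch with hypothetical positions; the paper's constrained linear regression is replaced here by plain interpolation:

```python
import numpy as np

# Hypothetical diaphragm positions (mm) detected on a sparse subset of
# projection frames; the respiratory signal for every frame is recovered
# by interpolating between them.
sparse_t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])     # seconds
sparse_pos = np.array([0.0, 8.0, 0.5, 7.5, 1.0])   # diaphragm height (mm)

all_t = np.linspace(0.0, 2.0, 41)                  # one entry per projection
resp = np.interp(all_t, sparse_t, sparse_pos)      # interpolated respiration
```

The interpolated trace then supplies a per-frame vertical offset that initializes the active contour close to the tumor's expected position.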
An automatic frequency control loop using overlapping DFTs (Discrete Fourier Transforms)
NASA Technical Reports Server (NTRS)
Aguirre, S.
1988-01-01
An automatic frequency control (AFC) loop is introduced and analyzed in detail. The new scheme is a generalization of the well-known Cross-Product AFC loop that uses running overlapping discrete Fourier transforms (DFTs) to create a discriminator curve. Linear analysis is included and supported with computer simulations. The algorithm is tested in a low carrier-to-noise ratio (CNR) dynamic environment, and the probability of loss of lock is estimated via computer simulations. The algorithm discussed is a suboptimum tracking scheme with a larger frequency error variance compared to an optimum strategy, but offers simplicity of implementation and a very low operating threshold CNR. This technique can be applied during the carrier acquisition and re-acquisition process in the Advanced Receiver.
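The discriminator idea can be illustrated with the classic cross-product detector on complex baseband samples; this is a simplification, as the paper's scheme builds the discriminator curve from overlapping DFTs instead:

```python
import numpy as np

def cross_product_discriminator(x):
    """Classic cross-product frequency detector on complex baseband
    samples: e[n] = I[n-1]*Q[n] - I[n]*Q[n-1] = Im(x[n] * conj(x[n-1])).
    For a unit-amplitude tone its mean equals sin(2*pi*f/fs)."""
    return np.imag(x[1:] * np.conj(x[:-1])).mean()

fs = 1000.0
f = 25.0                                    # true carrier offset, Hz
n = np.arange(2000)
x = np.exp(2j * np.pi * f * n / fs)         # noiseless complex tone
e = cross_product_discriminator(x)
f_hat = fs * np.arcsin(e) / (2 * np.pi)     # invert the discriminator curve
```

In the loop, the discriminator output (rather than this open-loop inversion) drives the NCO frequency toward zero error; the DFT-overlap variant trades this sample-by-sample product for bin-domain products, lowering the usable CNR threshold.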
Bernard, Florian; Deuter, Christian Eric; Gemmar, Peter; Schachinger, Hartmut
2013-10-01
Using the positions of the eyelids is an effective and contact-free way to measure startle-induced eye-blinks, which play an important role in human psychophysiological research. To the best of our knowledge, no methods conveniently usable by psychophysiological researchers exist for efficient detection and tracking of the exact eyelid contours in image sequences captured at high speed. In this publication a semi-automatic, model-based eyelid contour detection and tracking algorithm for the analysis of high-speed video recordings from an eye tracker is presented. As a large number of images had been acquired prior to method development, it was important that our technique be able to deal with images recorded without any special parametrisation of the eye tracker. The method entails pupil detection and specular reflection removal, and makes use of dynamic model adaptation. In a proof-of-concept study we achieved a correct detection rate of 90.6%. With this approach, we provide a feasible method to accurately assess eye-blinks from high-speed video recordings. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Esophagus segmentation in CT via 3D fully convolutional neural network and random walk.
Fechter, Tobias; Adebahr, Sonja; Baltas, Dimos; Ben Ayed, Ismail; Desrosiers, Christian; Dolz, Jose
2017-12-01
Precise delineation of organs at risk is a crucial task in radiotherapy treatment planning for delivering high doses to the tumor while sparing healthy tissues. In recent years, automated segmentation methods have shown an increasingly high performance for the delineation of various anatomical structures. However, this task remains challenging for organs like the esophagus, which have a versatile shape and poor contrast to neighboring tissues. For human experts, segmenting the esophagus from CT images is a time-consuming and error-prone process. To tackle these issues, we propose a random walker approach driven by a 3D fully convolutional neural network (CNN) to automatically segment the esophagus from CT images. First, a soft probability map is generated by the CNN. Then, an active contour model (ACM) is fitted to the CNN soft probability map to get a first estimation of the esophagus location. The outputs of the CNN and ACM are then used in conjunction with a probability model based on CT Hounsfield unit (HU) values to drive the random walker. Training and evaluation were done on 50 CTs from two different datasets, with clinically used, peer-reviewed esophagus contours. Results were assessed regarding spatial overlap and shape similarity. The esophagus contours generated by the proposed algorithm showed a mean Dice coefficient of 0.76 ± 0.11, an average symmetric square distance of 1.36 ± 0.90 mm, and an average Hausdorff distance of 11.68 ± 6.80 mm, compared to the reference contours. These results translate to a very good agreement with reference contours and an increase in accuracy compared to existing methods. Furthermore, when considering the results reported in the literature for the publicly available Synapse dataset, our method outperformed all existing approaches, which suggests that the proposed method represents the current state-of-the-art for automatic esophagus segmentation.
We show that a CNN can yield accurate estimations of esophagus location, and that the results of this model can be refined by a random walk step taking pixel intensities and neighborhood relationships into account. One of the main advantages of our network over previous methods is that it performs 3D convolutions, thus fully exploiting the 3D spatial context and performing an efficient volume-wise prediction. The whole segmentation process is fully automatic and yields esophagus delineations in very good agreement with the gold standard, showing that it can compete with previously published methods. © 2017 American Association of Physicists in Medicine.
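The fusion of the CNN soft map with the HU-based probability model can be sketched as seed generation for the random walker; mu, sigma, and the thresholds below are illustrative values, not taken from the paper:

```python
import numpy as np

def seeds_from_probability(cnn_prob, hu, mu=-20.0, sigma=40.0,
                           hi=0.8, lo=0.2):
    """Derive random-walker seed labels by combining a CNN soft
    probability map with a Gaussian HU intensity model:
    1 = confident esophagus, 2 = confident background,
    0 = undecided voxels left for the random walk to resolve."""
    hu_like = np.exp(-0.5 * ((hu - mu) / sigma) ** 2)  # intensity likelihood
    p = cnn_prob * hu_like                             # fused probability
    seeds = np.zeros(p.shape, dtype=int)
    seeds[p > hi] = 1
    seeds[p < lo] = 2
    return seeds

# Tiny example: a confident voxel, an ambiguous one, and two rejections
cnn_prob = np.array([[0.99, 0.5], [0.05, 0.9]])
hu = np.array([[-20.0, -20.0], [-20.0, 500.0]])
seeds = seeds_from_probability(cnn_prob, hu)
```

Note how the HU model vetoes the bottom-right voxel despite its high CNN score: at 500 HU (far from soft-tissue range) the fused probability collapses, which is the complementary behavior the combination is designed to provide.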
A fast hidden line algorithm with contour option. M.S. Thesis
NASA Technical Reports Server (NTRS)
Thue, R. E.
1984-01-01
The JonesD algorithm was modified to allow the processing of N-sided elements and implemented in conjunction with a 3-D contour generation algorithm. The total hidden line and contour subsystem is implemented in the MOVIE.BYU Display package, and is compared to the subsystems already existing in the MOVIE.BYU package. The comparison reveals that the modified JonesD hidden line and contour subsystem yields substantial processing-time savings when processing moderate-sized models comprising 1000 elements or fewer. There are, however, some limitations to the modified JonesD subsystem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClain, B; Olsen, J; Green, O
2015-06-15
Purpose: Online adaptive therapy (ART) relies on auto-contouring using deformable image registration (DIR). DIR’s inherent uncertainties require user intervention and manual edits while the patient is on the table. We investigated the dosimetric impact of DIR errors on the quality of re-optimized plans, and used the findings to establish regions for focusing manual edits to where DIR errors can result in clinically relevant dose differences. Methods: Our clinical implementation of online adaptive MR-IGRT involves using DIR to transfer contours from CT to daily MR, followed by a physician’s edits. The plan is then re-optimized to meet the organs at risk (OARs) constraints. Re-optimized abdomen and pelvis plans generated based on physician-edited OARs were selected as the baseline for evaluation. Plans were then re-optimized on auto-deformed contours with manual edits limited to pre-defined uniform rings (0 to 5 cm) around the PTV. A 0 cm ring indicates that the auto-deformed OARs were used without editing. The magnitude of the variations caused by the non-deterministic optimizer was quantified by repeat re-optimizations on the same geometry to determine the mean and standard deviation (STD). For each re-optimized plan, various volumetric parameters for the PTV and the OARs were extracted, along with DVH and isodose evaluations. A plan was deemed acceptable if the variation from the baseline plan was within one STD. Results: Initial results show that for abdomen and pancreas cases, a minimum 5 cm margin around the PTV is required for contour corrections, while for pelvic and liver cases a 2–3 cm margin is sufficient. Conclusion: Focusing manual contour edits on regions of dosimetric relevance can reduce contouring time in the online ART process while maintaining a clinically comparable plan. Future work will further refine the contouring region by evaluating the path along the beams, dose gradients near the target, and OAR dose metrics.
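The uniform editing rings around the PTV described above can be sketched as distance-band masks. This is a hypothetical helper, not the study's code; a real implementation would use a fast Euclidean distance transform rather than the brute-force distances used here:

```python
import numpy as np

def ring_mask(ptv, r_inner, r_outer, spacing=1.0):
    """Boolean mask of voxels whose distance to the nearest PTV voxel lies
    in (r_inner, r_outer], excluding the PTV itself. Brute force, adequate
    only for small demonstration grids."""
    pts = np.argwhere(ptv) * spacing
    grid = np.indices(ptv.shape).reshape(ptv.ndim, -1).T * spacing
    d = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=-1).min(axis=1)
    d = d.reshape(ptv.shape)
    return (d > r_inner) & (d <= r_outer) & ~ptv
```

With a single-voxel PTV and a (0, 1] band, the ring is exactly the four face neighbors.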
Deformable planning CT to cone-beam CT image registration in head-and-neck cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou Jidong; Guerrero, Mariana; Chen, Wenjuan
2011-04-15
Purpose: The purpose of this work was to implement and validate a deformable CT to cone-beam computed tomography (CBCT) image registration method in head-and-neck cancer to eventually facilitate automatic target delineation on CBCT. Methods: Twelve head-and-neck cancer patients underwent a planning CT and weekly CBCT during the 5-7 week treatment period. The 12 planning CT images (moving images) of these patients were registered to their weekly CBCT images (fixed images) via the symmetric force Demons algorithm and using a multiresolution scheme. Histogram matching was used to compensate for the intensity difference between the two types of images. Using nine known anatomic points as registration targets, the accuracy of the registration was evaluated using the target registration error (TRE). In addition, region-of-interest (ROI) contours drawn on the planning CT were morphed to the CBCT images and the volume overlap index (VOI) between registered contours and manually delineated contours was evaluated. Results: The mean TRE value of the nine target points was less than 3.0 mm, the slice thickness of the planning CT. Of the 369 target points evaluated for registration accuracy, the average TRE value was 2.6 ± 0.6 mm. The mean TRE for bony tissue targets was 2.4 ± 0.2 mm, while the mean TRE for soft tissue targets was 2.8 ± 0.2 mm. The average VOI between the registered and manually delineated ROI contours was 76.2 ± 4.6%, which is consistent with that reported in previous studies. Conclusions: The authors have implemented and validated a deformable image registration method to register planning CT images to weekly CBCT images in head-and-neck cancer cases. The accuracy of the TRE values suggests that they can be used as a promising tool for automatic target delineation on CBCT.
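Given matched landmark pairs and a registration transform, the target registration error (TRE) used above reduces to a mean point-to-point distance. A minimal sketch (illustrative, not the study's implementation):

```python
import numpy as np

def target_registration_error(fixed_pts, moving_pts, transform):
    """Mean Euclidean distance between known target points on the fixed
    image and the registered (transformed) points from the moving image."""
    mapped = np.array([transform(p) for p in moving_pts])
    return np.linalg.norm(mapped - fixed_pts, axis=1).mean()
```

A perfect registration maps every moving landmark onto its fixed counterpart, giving a TRE of zero.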
Miller, Nathan D; Haase, Nicholas J; Lee, Jonghyun; Kaeppler, Shawn M; de Leon, Natalia; Spalding, Edgar P
2017-01-01
Grain yield of the maize plant depends on the sizes, shapes, and numbers of ears and the kernels they bear. An automated pipeline that can measure these components of yield from easily-obtained digital images is needed to advance our understanding of this globally important crop. Here we present three custom algorithms designed to compute such yield components automatically from digital images acquired by a low-cost platform. One algorithm determines the average space each kernel occupies along the cob axis using a sliding-window Fourier transform analysis of image intensity features. A second counts individual kernels removed from ears, including those in clusters. A third measures each kernel's major and minor axis after a Bayesian analysis of contour points identifies the kernel tip. Dimensionless ear and kernel shape traits that may interrelate yield components are measured by principal components analysis of contour point sets. Increased objectivity and speed compared to typical manual methods are achieved without loss of accuracy as evidenced by high correlations with ground truth measurements and simulated data. Millimeter-scale differences among ear, cob, and kernel traits that ranged more than 2.5-fold across a diverse group of inbred maize lines were resolved. This system for measuring maize ear, cob, and kernel attributes is being used by multiple research groups as an automated Web service running on community high-throughput computing and distributed data storage infrastructure. Users may create their own workflow using the source code that is staged for download on a public repository. © 2016 The Authors. The Plant Journal published by Society for Experimental Biology and John Wiley & Sons Ltd.
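The first algorithm's kernel-spacing estimate rests on finding the dominant spatial frequency of the intensity profile along the cob axis. A toy version on a synthetic profile (the paper applies this inside sliding windows; this sketch analyzes one window):

```python
import numpy as np

def dominant_period(signal):
    """Period (in samples) of the strongest nonzero-frequency component
    of a 1D intensity profile, via the real FFT."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal)))
    k = np.argmax(spec[1:]) + 1  # skip the DC bin
    return len(signal) / k
```

For a profile whose intensity repeats every 8 samples (one kernel per 8 pixels, say), the estimate recovers a period of 8.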
Auroux, Didier; Cohen, Laurent D.; Masmoudi, Mohamed
2011-01-01
We combine in this paper the topological gradient, which is a powerful method for edge detection in image processing, and a variant of the minimal path method in order to find connected contours. The topological gradient provides a more global analysis of the image than the standard gradient and identifies the main edges of an image. Several image processing problems (e.g., inpainting and segmentation) require continuous contours. For this purpose, we consider the fast marching algorithm in order to find minimal paths in the topological gradient image. This coupled algorithm quickly provides accurate and connected contours. We then present two numerical applications of this hybrid algorithm: image inpainting and segmentation. PMID:22194734
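Fast marching finds minimal paths through a cost image; Dijkstra's algorithm on a 4-connected grid is a simple discrete stand-in for the same idea (illustrative only, not the authors' continuous fast-marching scheme):

```python
import heapq

def min_path_cost(cost, start, goal):
    """Cheapest accumulated cost from start to goal on a 4-connected grid
    (Dijkstra; a discrete analogue of fast marching on a cost image)."""
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")
```

On a cost image where edges are cheap (low topological gradient magnitude along edges), the minimal path hugs the contour instead of cutting across expensive regions.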
Contour Connection Method for automated identification and classification of landslide deposits
NASA Astrophysics Data System (ADS)
Leshchinsky, Ben A.; Olsen, Michael J.; Tanyu, Burak F.
2015-01-01
Landslides are a common hazard worldwide that result in major economic, environmental and social impacts. Despite their devastating effects, inventorying existing landslides, often the regions at highest risk of reoccurrence, is challenging, time-consuming, and expensive. Current landslide mapping techniques include field inventorying, photogrammetric approaches, and use of bare-earth (BE) lidar digital terrain models (DTMs) to highlight regions of instability. However, few techniques have sufficient resolution, detail, and accuracy for mapping at the landscape scale, with the exception of BE DTMs, which can reveal the landscape beneath vegetation and other obstructions, highlighting landslide features including scarps, deposits, fans and more. Current approaches to landslide inventorying with lidar-derived BE DTMs include manual digitizing, statistical or machine learning approaches, and use of alternate sensors (e.g., hyperspectral imaging) with lidar. This paper outlines a novel algorithm to automatically and consistently detect landslide deposits at a landscape scale. The proposed method, named the Contour Connection Method (CCM), is based primarily on bare-earth lidar data and requires minimal user input, such as the landslide scarp and deposit gradients. The CCM algorithm functions by applying contours and nodes to a map and using vectors connecting the nodes to evaluate gradient and associated landslide features based on the user-defined input criteria. Furthermore, beyond its detection capabilities, CCM can potentially be used to classify different landscape features: each landslide feature has a distinct set of metadata - specifically, the density of connection vectors on each contour - that provides a unique signature for each landslide.
In this paper, demonstrations of CCM are presented by applying the algorithm to the region surrounding the Oso landslide in Washington (March 2014), as well as to two 14,000 ha DTMs in Oregon that were used to compare CCM with manually delineated landslide deposits. The results show the capability of CCM, with limited data requirements, to agree with manual delineation while achieving the results in far less time.
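The gradient test at the heart of CCM is rise over horizontal run along a connection vector between nodes on adjacent elevation contours. A toy sketch with illustrative threshold values (the thresholds here are hypothetical, not taken from the paper):

```python
import math

def connection_gradient(node_a, node_b):
    """Slope along a vector connecting nodes (x, y, elevation) on
    adjacent contours: elevation change over horizontal distance."""
    (xa, ya, za), (xb, yb, zb) = node_a, node_b
    run = math.hypot(xb - xa, yb - ya)
    return abs(zb - za) / run

def classify(node_a, node_b, scarp_grad=0.6, deposit_grad=0.2):
    """Label a connection as scarp, deposit, or other by comparing its
    gradient against user-supplied thresholds (values illustrative)."""
    g = connection_gradient(node_a, node_b)
    if g >= scarp_grad:
        return "scarp"
    if g <= deposit_grad:
        return "deposit"
    return "other"
```

Counting vectors per label on each contour then yields the density signature the paper uses to distinguish features.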
Simmat, I; Georg, P; Georg, D; Birkfellner, W; Goldner, G; Stock, M
2012-09-01
The goal of the current study was to evaluate the commercially available atlas-based autosegmentation software for clinical use in prostate radiotherapy. The accuracy was benchmarked against interobserver variability. A total of 20 planning computed tomographs (CTs) and 10 cone-beam CTs (CBCTs) were selected for prostate, rectum, and bladder delineation. The images varied with respect to individual parameters (age, body mass index) and setup parameters (contrast agent, rectal balloon, implanted markers). Automatically created contours with ABAS(®) and iPlan(®) were compared to an expert's delineation by calculating the Dice similarity coefficient (DSC) and conformity index. Demo-atlases of both systems showed different results for bladder (DSC(ABAS) 0.86 ± 0.17, DSC(iPlan) 0.51 ± 0.30) and prostate (DSC(ABAS) 0.71 ± 0.14, DSC(iPlan) 0.57 ± 0.19). Rectum delineation (DSC(ABAS) 0.78 ± 0.11, DSC(iPlan) 0.84 ± 0.08) demonstrated differences between the systems but better correlation of the automatically drawn volumes. ABAS(®) was closest to the interobserver benchmark. Autosegmentation with iPlan(®), ABAS(®) and manual segmentation took 0.5, 4 and 15-20 min, respectively. Automatic contouring on CBCT showed high dependence on image quality (DSC bladder 0.54, rectum 0.42, prostate 0.34). For clinical routine, efforts are still necessary to either redesign algorithms implemented in autosegmentation or to optimize image quality for CBCT to guarantee required accuracy and time savings for adaptive radiotherapy.
NASA Technical Reports Server (NTRS)
Mikic, I.; Krucinski, S.; Thomas, J. D.
1998-01-01
This paper presents a method for segmentation and tracking of cardiac structures in ultrasound image sequences. The developed algorithm is based on the active contour framework. This approach requires initial placement of the contour close to the desired position in the image, usually an object outline. Best contour shape and position are then calculated, assuming that at this configuration a global energy function, associated with a contour, attains its minimum. Active contours can be used for tracking by selecting a solution from a previous frame as an initial position in a present frame. Such an approach, however, fails for large displacements of the object of interest. This paper presents a technique that incorporates the information on pixel velocities (optical flow) into the estimate of initial contour to enable tracking of fast-moving objects. The algorithm was tested on several ultrasound image sequences, each covering one complete cardiac cycle. The contour successfully tracked boundaries of mitral valve leaflets, aortic root and endocardial borders of the left ventricle. The algorithm-generated outlines were compared against manual tracings by expert physicians. The automated method resulted in contours that were within the boundaries of intraobserver variability.
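The key idea above, seeding the snake in the next frame with motion information, can be illustrated with a global translation estimate. Phase correlation is used here as a crude stand-in for the dense optical flow of the paper (illustrative sketch, exact only for circular integer shifts):

```python
import numpy as np

def translation_estimate(frame0, frame1):
    """Global (row, col) shift of frame1 relative to frame0 via phase
    correlation; a simple stand-in for dense optical flow when the
    motion between frames is dominated by translation."""
    F0, F1 = np.fft.fft2(frame0), np.fft.fft2(frame1)
    cross = np.conj(F0) * F1
    cross /= np.abs(cross) + 1e-12       # keep only the phase
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peak coordinates into the signed shift range
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, corr.shape))
```

The previous frame's contour points, shifted by this estimate, give the snake a starting position close to the fast-moving object before energy minimization refines it.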
Image smoothing and enhancement via min/max curvature flow
NASA Astrophysics Data System (ADS)
Malladi, Ravikanth; Sethian, James A.
1996-03-01
We present a class of PDE-based algorithms suitable for a wide range of image processing applications. The techniques are applicable to both salt-and-pepper gray-scale noise and full-image continuous noise present in black and white images, gray-scale images, texture images and color images. At the core, the techniques rely on a level set formulation of evolving curves and surfaces and the viscosity in profile evolution. Essentially, the method consists of moving the isointensity contours in an image under curvature dependent speed laws to achieve enhancement. Compared to existing techniques, our approach has several distinct advantages. First, it contains only one enhancement parameter, which in most cases is automatically chosen. Second, the scheme automatically stops smoothing at some optimal point; continued application of the scheme produces no further change. Third, the method is one of the fastest possible schemes based on a curvature-controlled approach.
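One explicit step of curvature-driven smoothing in the level-set formulation can be written directly from finite differences. This sketch implements plain mean-curvature flow and omits the min/max speed switch of the paper (illustrative, not the authors' scheme):

```python
import numpy as np

def curvature_flow_step(u, dt=0.1, eps=1e-8):
    """One explicit step of u_t = kappa * |grad u|, moving isointensity
    contours under curvature-dependent speed. kappa is the level-set
    curvature computed from first and second differences."""
    uy, ux = np.gradient(u)
    uyy, uyx = np.gradient(uy)
    uxy, uxx = np.gradient(ux)
    num = uxx * uy**2 - 2.0 * ux * uy * uxy + uyy * ux**2
    den = (ux**2 + uy**2) ** 1.5 + eps
    kappa = num / den
    return u + dt * kappa * np.sqrt(ux**2 + uy**2)
```

A linear intensity ramp has straight isointensity contours (zero curvature), so the step leaves it unchanged; noisy isointensity contours, by contrast, are shortened and smoothed.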
Technique for Chestband Contour Shape-Mapping in Lateral Impact
Hallman, Jason J; Yoganandan, Narayan; Pintar, Frank A
2011-01-01
The chestband transducer permits noninvasive measurement of transverse plane biomechanical response during blunt thorax impact. Although experiments may reveal complex two-dimensional (2D) deformation response to boundary conditions, biomechanical studies have heretofore employed only uniaxial measurements to quantify chestband contours. The present study described and evaluated an algorithm by which source subject-specific contour data may be systematically mapped to a target generalized anthropometry for computational studies of biomechanical response or anthropomorphic test dummy development. Algorithm performance was evaluated using chestband contour datasets from two rigid lateral impact boundary conditions: flat wall and anterior-oblique wall. Comparing source and target anthropometry contours, peak deflections and deformation-time traces deviated by less than 4%. These results suggest that the algorithm is appropriate for 2D deformation response to lateral impact boundary conditions. PMID:21676399
NASA Astrophysics Data System (ADS)
Park, Seyoun; Robinson, Adam; Quon, Harry; Kiess, Ana P.; Shen, Colette; Wong, John; Plishker, William; Shekhar, Raj; Lee, Junghoon
2016-03-01
In this paper, we propose a CT-CBCT registration method to accurately predict the tumor volume change based on daily cone-beam CTs (CBCTs) during radiotherapy. CBCT is commonly used to reduce patient setup error during radiotherapy, but its poor image quality impedes accurate monitoring of anatomical changes. Although physician's contours drawn on the planning CT can be automatically propagated to daily CBCTs by deformable image registration (DIR), artifacts in CBCT often cause undesirable errors. To improve the accuracy of the registration-based segmentation, we developed a DIR method that iteratively corrects CBCT intensities by local histogram matching. Three popular DIR algorithms (B-spline, demons, and optical flow) with the intensity correction were implemented on a graphics processing unit for efficient computation. We evaluated their performances on six head and neck (HN) cancer cases. For each case, four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and every other fraction CBCTs to which the propagated GTV contours by DIR were compared. The performance was also compared with commercial image registration software based on conventional mutual information (MI), VelocityAI (Varian Medical Systems Inc.). The volume differences (mean ± std in cc) between the average of the manual segmentations and automatic segmentations are 3.70 ± 2.30 (B-spline), 1.25 ± 1.78 (demons), 0.93 ± 1.14 (optical flow), and 4.39 ± 3.86 (VelocityAI). The proposed method significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the results using VelocityAI. Although demonstrated only on HN nodal GTVs, the results imply that the proposed method can produce improved segmentation of other critical structures over conventional methods.
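The intensity-correction step builds on histogram matching. A global quantile-mapping version in NumPy conveys the core idea; the paper's variant applies matching locally and iteratively, so this sketch is only the building block:

```python
import numpy as np

def hist_match(src, ref):
    """Remap src intensities so their cumulative histogram matches that
    of ref (quantile mapping). Returns an array of src's shape whose
    value distribution approximates ref's."""
    s_vals, s_idx, s_counts = np.unique(src.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(ref.ravel(), return_counts=True)
    s_q = np.cumsum(s_counts) / src.size   # source quantiles
    r_q = np.cumsum(r_counts) / ref.size   # reference quantiles
    return np.interp(s_q, r_q, r_vals)[s_idx].reshape(src.shape)
```

Applied patch-wise with iteration, such matching pulls CBCT intensities toward the CT's local intensity statistics before each DIR pass.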
Automatic segmentation of bones from digital hand radiographs
NASA Astrophysics Data System (ADS)
Liu, Brent J.; Taira, Ricky K.; Shim, Hyeonjoon; Keaton, Patricia
1995-05-01
The purpose of this paper is to develop a robust and accurate method that automatically segments phalangeal and epiphyseal bones from digital pediatric hand radiographs exhibiting various stages of growth. The algorithm uses an object-oriented approach comprising several stages, beginning with the most general objects to be segmented, such as the outline of the hand from background, and proceeding in a succession of stages to the most specific object, such as a specific phalangeal bone from a digit of the hand. Each stage carries custom operators unique to the needs of that specific stage which aid in more accurate results. The method is further aided by a knowledge base where all model contours and other information, such as age, race, and sex, are stored. Shape models, 1-D wrist profiles, as well as an interpretation tree are used to map model and data contour segments. Shape analysis is performed using an arc-length orientation transform. The method is tested on close to 340 phalangeal and epiphyseal objects to be segmented from 17 cases of pediatric hand images obtained from our clinical PACS. Patient age ranges from 2 - 16 years. A pediatric radiologist preliminarily assessed the resulting object contours, which were found to be accurate to within 95% for cases with non-fused bones and to within 85% for cases with fused bones. With accurate and robust results, the method can be applied toward areas such as the determination of bone age, the development of a normal hand atlas, and the characterization of many congenital and acquired growth diseases. Furthermore, this method's architecture can be applied to other image segmentation problems.
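The arc-length orientation transform mentioned above tracks the tangent angle of the contour as a function of arc length. A related sanity check is that the signed turning angles of a simple closed contour sum to ±2π; a small sketch of that computation (illustrative, not the paper's implementation):

```python
import numpy as np

def total_turning(contour):
    """Sum of signed turning angles along a closed polygonal contour;
    +/-2*pi for a simple closed curve. The per-edge tangent angles here
    are the same quantity the orientation transform plots against
    arc length."""
    pts = np.asarray(contour, float)
    edges = np.roll(pts, -1, axis=0) - pts
    ang = np.arctan2(edges[:, 1], edges[:, 0])
    turn = np.diff(np.append(ang, ang[0]))
    # wrap each turn into [-pi, pi)
    turn = (turn + np.pi) % (2 * np.pi) - np.pi
    return turn.sum()
```

A square traversed counterclockwise turns by pi/2 at each corner, totaling 2*pi.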
Welding skate with computerized controls
NASA Technical Reports Server (NTRS)
Wall, W. A., Jr.
1968-01-01
New welding skate concept for automatic TIG welding of contoured or double-contoured parts combines lightweight welding apparatus with electrical circuitry which computes the desired torch angle and positions a torch and cold-wire guide angle manipulator.
Early-type galaxies: Automated reduction and analysis of ROSAT PSPC data
NASA Technical Reports Server (NTRS)
Mackie, G.; Fabbiano, G.; Harnden, F. R., Jr.; Kim, D.-W.; Maggio, A.; Micela, G.; Sciortino, S.; Ciliegi, P.
1996-01-01
Preliminary results of early-type galaxies that will be part of a galaxy catalog to be derived from the complete ROSAT database are presented. The stored data were reduced and analyzed by an automatic pipeline based on a command language script. The important features of the pipeline include new data time screening in order to maximize the signal-to-noise ratio of faint point-like sources, source detection via a wavelet algorithm, and the identification of sources with objects from existing catalogs. The pipeline outputs include reduced images, contour maps, surface brightness profiles, spectra, and color and hardness ratios.
The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.
Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut
2014-06-01
Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices validated on images from one reconstruction algorithm are therefore also valid for the other reconstruction algorithms.
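Index (d), the global inhomogeneity (GI) index, is commonly defined as the summed absolute deviation of tidal impedance values from their median, normalized by the total tidal impedance within the lung region. A minimal sketch of that commonly published formula (not taken from this paper's code):

```python
import numpy as np

def global_inhomogeneity(tidal_image, lung_mask):
    """GI index: sum of |pixel - median| over the lung region, divided
    by the sum of pixel values. Zero for perfectly uniform ventilation;
    larger values indicate more inhomogeneous ventilation."""
    v = tidal_image[lung_mask]
    return np.sum(np.abs(v - np.median(v))) / np.sum(v)
```

A perfectly uniform tidal image yields GI = 0 regardless of the reconstruction algorithm that produced it.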
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landry, Guillaume, E-mail: g.landry@lmu.de; Nijhuis, Reinoud; Thieke, Christian
2015-03-15
Purpose: Intensity modulated proton therapy (IMPT) of head and neck (H&N) cancer patients may be improved by plan adaptation. The decision to adapt the treatment plan based on a dose recalculation on the current anatomy requires a diagnostic quality computed tomography (CT) scan of the patient. As gantry-mounted cone beam CT (CBCT) scanners are currently being offered by vendors, they may offer daily or weekly updates of patient anatomy. CBCT image quality may not be sufficient for accurate proton dose calculation and it is likely necessary to perform CBCT CT number correction. In this work, the authors investigated deformable image registration (DIR) of the planning CT (pCT) to the CBCT to generate a virtual CT (vCT) to be used for proton dose recalculation. Methods: Datasets of six H&N cancer patients undergoing photon intensity modulated radiation therapy were used in this study to validate the vCT approach. Each dataset contained a CBCT acquired within 3 days of a replanning CT (rpCT), in addition to a pCT. The pCT and rpCT were delineated by a physician. A Morphons algorithm was employed in this work to perform DIR of the pCT to CBCT following a rigid registration of the two images. The contours from the pCT were deformed using the vector field resulting from DIR to yield a contoured vCT. The DIR accuracy was evaluated with a scale invariant feature transform (SIFT) algorithm comparing automatically identified matching features between vCT and CBCT. The rpCT was used as reference for evaluation of the vCT. The vCT and rpCT CT numbers were converted to stopping power ratio and the water equivalent thickness (WET) was calculated. IMPT dose distributions from treatment plans optimized on the pCT were recalculated with a Monte Carlo algorithm on the rpCT and vCT for comparison in terms of gamma index, dose volume histogram (DVH) statistics as well as proton range.
The DIR-generated contours on the vCT were compared to physician-drawn contours on the rpCT. Results: The DIR accuracy was better than 1.4 mm according to the SIFT evaluation. The mean WET differences between vCT (pCT) and rpCT were below 1 mm (2.6 mm). The fraction of voxels passing the 3%/3 mm gamma criterion was above 95% for the vCT vs rpCT. When using the rpCT contour set to derive DVH statistics from dose distributions calculated on the rpCT and vCT, the differences, expressed in terms of 30 fractions of 2 Gy, were within [−4, 2 Gy] for parotid glands (D_mean), spinal cord (D_2%), brainstem (D_2%), and CTV (D_95%). When using DIR-generated contours for the vCT, those differences ranged within [−8, 11 Gy]. Conclusions: In this work, the authors generated CBCT-based stopping power distributions using DIR of the pCT to a CBCT scan. DIR accuracy was below 1.4 mm as evaluated by the SIFT algorithm. Dose distributions calculated on the vCT agreed well with those calculated on the rpCT when using gamma index evaluation as well as DVH statistics based on the same contours. The use of DIR-generated contours introduced variability in DVH statistics.
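The water equivalent thickness (WET) along a ray is the path integral of the stopping power ratio relative to water; over discrete voxels it reduces to a Riemann sum. A minimal sketch (illustrative, with a hypothetical uniform step size):

```python
import numpy as np

def water_equivalent_thickness(spr_along_ray, step_mm):
    """WET of a ray through the patient: sum of stopping power ratios
    (relative to water) sampled along the ray, times the sampling step.
    Water (SPR = 1) gives WET equal to the geometric path length."""
    return float(np.sum(spr_along_ray) * step_mm)
```

Comparing this quantity ray-by-ray between vCT and rpCT is how the sub-millimeter WET differences above are obtained.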
2012-01-01
Background While progress has been made to develop automatic segmentation techniques for mitochondria, there remains a need for more accurate and robust techniques to delineate mitochondria in serial block-face scanning electron microscopic data. Previously developed texture-based methods are limited for solving this problem because texture alone is often not sufficient to identify mitochondria. This paper presents a new three-step method, the Cytoseg process, for automated segmentation of mitochondria contained in 3D electron microscopic volumes generated through serial block-face scanning electron microscopic imaging. The method consists of three steps. The first is a random forest patch classification step operating directly on 2D image patches. The second step consists of contour-pair classification. At the final step, we introduce a method to automatically seed a level set operation with output from previous steps. Results We report accuracy of the Cytoseg process on three types of tissue and compare it to a previous method based on Radon-Like Features. At step 1, we show that the patch classifier identifies mitochondria texture but creates many false positive pixels. At step 2, our contour processing step produces contours and then filters them with a second classification step, helping to improve overall accuracy. We show that our final level set operation, which is automatically seeded with output from previous steps, helps to smooth the results. Overall, our results show that use of contour pair classification and level set operations improve segmentation accuracy beyond patch classification alone. We show that the Cytoseg process performs well compared to another modern technique based on Radon-Like Features. Conclusions We demonstrated that texture-based methods for mitochondria segmentation can be enhanced with multiple steps that form an image processing pipeline.
While we used a random-forest based patch classifier to recognize texture, it would be possible to replace this with other texture identifiers, and we plan to explore this in future work. PMID:22321695
Yip, Eugene; Yun, Jihyun; Gabos, Zsolt; Baker, Sarah; Yee, Don; Wachowicz, Keith; Rathee, Satyapal; Fallone, B Gino
2018-01-01
Real-time tracking of lung tumors using magnetic resonance imaging (MRI) has been proposed as a potential strategy to mitigate the ill effects of breathing motion in radiation therapy. Several autocontouring methods have been evaluated against a "gold standard" of a single human expert user. However, contours drawn by experts have inherent intra- and interobserver variations. In this study, we aim to evaluate our user-trained autocontouring algorithm against manually drawn contours from multiple expert users, and to contextualize the accuracy of these autocontours within intra- and interobserver variations. Six non-small cell lung cancer patients were recruited, with institutional ethics approval. Patients were imaged with a clinical 3 T Philips MR scanner using a dynamic 2D balanced SSFP sequence under free breathing. Three radiation oncology experts, each in two separate sessions, contoured 130 dynamic images for each patient. For autocontouring, the first 30 images were used for algorithm training, and the remaining 100 images were autocontoured and evaluated. Autocontours were compared against manual contours in terms of Dice's coefficient (DC) and Hausdorff distance (d_H). Intra- and interobserver variations of the manual contours were also evaluated. When compared with the manual contours of the expert user who trained it, the algorithm generates autocontours whose evaluation metrics (same session: DC = 0.90(0.03), d_H = 3.8(1.6) mm; different session: DC = 0.88(0.04), d_H = 4.3(1.5) mm) are similar to or better than intraobserver variations (DC = 0.88(0.04), d_H = 4.3(1.7) mm) between two sessions. The algorithm's autocontours are also comparable to the manual contours from different expert users, with evaluation metrics (DC = 0.87(0.04), d_H = 4.8(1.7) mm) similar to interobserver variations (DC = 0.87(0.04), d_H = 4.7(1.6) mm).
Our autocontouring algorithm delineates tumor contours in dynamic MRI of the lung that are comparable to those of multiple human experts, but at a much faster speed (<20 ms per contour versus several seconds per contour). At the same time, the agreement between autocontours and manual contours is comparable to the intra- and interobserver variations. This algorithm may be a key component of the real-time tumor tracking workflow for our hybrid Linac-MR device in the future. © 2017 American Association of Physicists in Medicine.
A general algorithm for the construction of contour plots
NASA Technical Reports Server (NTRS)
Johnson, W.; Silva, F.
1981-01-01
An algorithm is described that performs the task of drawing equal level contours on a plane, which requires interpolation in two dimensions based on data prescribed at points distributed irregularly over the plane. The approach is described in detail. The computer program that implements the algorithm is documented and listed.
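Per grid-cell edge, the two-dimensional interpolation underlying contour plotting reduces to a linear crossing computation: find where the requested level crosses an edge whose endpoint values bracket it. A minimal sketch (illustrative, not the documented program itself):

```python
def edge_crossing(p0, v0, p1, v1, level):
    """Point where the contour of `level` crosses the edge p0-p1, given
    scalar values v0, v1 at the endpoints and assuming the field varies
    linearly along the edge. Caller must ensure v0 != v1 and that
    `level` lies between them."""
    t = (level - v0) / (v1 - v0)
    return (p0[0] + t * (p1[0] - p0[0]),
            p0[1] + t * (p1[1] - p0[1]))
```

Chaining such crossings cell by cell traces an equal-level contour across irregularly sampled data once values have been interpolated onto the cells.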
Parallel peak pruning for scalable SMP contour tree computation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carr, Hamish A.; Weber, Gunther H.; Sewell, Christopher M.
As data sets grow to exascale, automated data analysis and visualisation are increasingly important, to intermediate human understanding and to reduce demands on disk storage via in situ analysis. Trends in the architecture of high performance computing systems necessitate analysis algorithms that make effective use of combinations of massively multicore and distributed systems. One of the principal analytic tools is the contour tree, which analyses relationships between contours to identify features of more than local importance. Unfortunately, the predominant algorithms for computing the contour tree are explicitly serial, and founded on serial metaphors, which has limited the scalability of this form of analysis. While there is some work on distributed contour tree computation, and separately on hybrid GPU-CPU computation, there is no efficient algorithm with strong formal guarantees on performance allied with fast practical performance. In this paper, we report the first shared-memory SMP algorithm for fully parallel contour tree computation, with formal guarantees of O(lg n lg t) parallel steps and O(n lg n) work, and implementations with up to 10x parallel speed-up in OpenMP and up to 50x speed-up in NVIDIA Thrust.
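The serial metaphor the paper moves beyond sweeps vertices in sorted order and merges components with union-find. A 1D join-tree sketch that counts the tree's leaves (the local maxima) illustrates that baseline (simplified teaching code, not the paper's parallel algorithm):

```python
def count_maxima(values):
    """Number of leaves of the join tree of a 1D signal: sweep vertices
    from high to low value, maintaining connected components of the
    processed set with union-find. A vertex with no processed neighbor
    starts a new component, i.e., is a local maximum (a join-tree leaf)."""
    parent = {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    order = sorted(range(len(values)), key=lambda i: -values[i])
    leaves = 0
    for i in order:
        parent[i] = i
        roots = {find(j) for j in (i - 1, i + 1) if j in parent}
        if not roots:
            leaves += 1          # new superlevel-set component born here
        for r in roots:
            parent[r] = i        # components merge at this vertex
    return leaves
```

The same sweep on higher-dimensional meshes, with mesh neighbors in place of i-1 and i+1, yields the join tree half of the contour tree.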
[The automatic iris map overlap technology in computer-aided iridiagnosis].
He, Jia-feng; Ye, Hu-nian; Ye, Miao-yuan
2002-11-01
In the paper, iridology and computer-aided iridiagnosis technologies are briefly introduced and the extraction method of the collarette contour is then investigated. The iris map can be overlapped on the original iris image based on collarette contour extraction. The research on collarette contour extraction and iris map overlap is of great importance to computer-aided iridiagnosis technologies.
Estimation of contour motion and deformation for nonrigid object tracking
NASA Astrophysics Data System (ADS)
Shao, Jie; Porikli, Fatih; Chellappa, Rama
2007-08-01
We present an algorithm for nonrigid contour tracking in heavily cluttered background scenes. Based on the properties of nonrigid contour movements, a sequential framework for estimating contour motion and deformation is proposed. We solve the nonrigid contour tracking problem by decomposing it into three subproblems: motion estimation, deformation estimation, and shape regulation. First, we employ a particle filter to estimate the global motion parameters of the affine transform between successive frames. Then we generate a probabilistic deformation map to deform the contour. To improve robustness, multiple cues are used for deformation probability estimation. Finally, we use a shape prior model to constrain the deformed contour. This enables us to retrieve the occluded parts of the contours and accurately track them while allowing shape changes specific to the given object types. Our experiments show that the proposed algorithm significantly improves the tracker performance.
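As a rough illustration of the motion-estimation step described above, the sketch below runs a bootstrap particle filter over a translation-only motion model (a deliberate simplification of the paper's affine model; the toy frames and all names are ours).

```python
import numpy as np

rng = np.random.default_rng(0)

def make_frame(shift):
    """Toy 32x32 frame: a bright 8x8 square whose position encodes `shift`."""
    f = np.zeros((32, 32))
    f[8 + shift[0]:16 + shift[0], 8 + shift[1]:16 + shift[1]] = 1.0
    return f

def estimate_translation(prev, curr, n_particles=2000):
    """Bootstrap particle filter over integer translations: weight each
    particle by an image likelihood, then return the weighted mean state."""
    particles = rng.integers(-6, 7, size=(n_particles, 2))  # (dy, dx) hypotheses
    weights = np.empty(n_particles)
    for i, (dy, dx) in enumerate(particles):
        shifted = np.roll(prev, (dy, dx), axis=(0, 1))
        weights[i] = np.exp(-np.sum((shifted - curr) ** 2))  # SSD likelihood
    weights /= weights.sum()
    return weights @ particles        # weighted mean of the particle cloud

prev = make_frame((0, 0))
curr = make_frame((3, 2))
est = estimate_translation(prev, curr)
```

In the paper this global-motion estimate is only the first stage, followed by the probabilistic deformation map and the shape prior.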
Bacciu, Davide; Starita, Antonina
2008-11-01
Determining a compact neural coding for a set of input stimuli is an issue that encompasses several biological memory mechanisms as well as various artificial neural network models. In particular, establishing the optimal network structure is still an open problem when dealing with unsupervised learning models. In this paper, we introduce a novel learning algorithm, named competitive repetition-suppression (CoRe) learning, inspired by a cortical memory mechanism called repetition suppression (RS). We show how such a mechanism is used, at various levels of the cerebral cortex, to generate compact neural representations of the visual stimuli. From the general CoRe learning model, we derive a clustering algorithm, named CoRe clustering, that can automatically estimate the unknown cluster number from the data without using a priori information concerning the input distribution. We illustrate how CoRe clustering, besides its biological plausibility, possesses strong theoretical properties in terms of robustness to noise and outliers, and we provide an error function describing CoRe learning dynamics. Such a description is used to analyze CoRe's relationships with state-of-the-art clustering models and to highlight its similarity to rival penalized competitive learning (RPCL), showing how CoRe extends such a model by strengthening the rival penalization estimation by means of loss functions from robust statistics.
Bunyak, Filiz; Palaniappan, Kannappan; Chagin, Vadim; Cardoso, M
2009-01-01
Fluorescently tagged proteins such as GFP-PCNA produce rich, dynamically varying textural patterns of foci distributed in the nucleus. This enables the behavioral study of sub-cellular structures during different phases of the cell cycle. The varying punctate patterns of fluorescence, drastic changes in SNR, shape and position during mitosis, and the abundance of touching cells, however, require more sophisticated algorithms for reliable automatic cell segmentation and lineage analysis. Since the cell nuclei are non-uniform in appearance, a distribution-based modeling of foreground classes is essential. The recently proposed graph partitioning active contours (GPAC) algorithm supports region descriptors and flexible distance metrics. We extend GPAC for fluorescence-based cell segmentation using regional density functions and dramatically improve its efficiency for segmentation from O(N^4) to O(N^2) for an image with N^2 pixels, making it practical and scalable for high throughput microscopy imaging studies.
Du, Yuncheng; Budman, Hector M; Duever, Thomas A
2017-06-01
Accurate and fast quantitative analysis of living cells from fluorescence microscopy images is useful for evaluating experimental outcomes and cell culture protocols. An algorithm is developed in this work to automatically segment and distinguish apoptotic cells from normal cells. The algorithm involves three steps, consisting of two segmentation steps and a classification step. The segmentation steps are: (i) a coarse segmentation, combining a range filter with a marching squares method, used as a prefiltering step to provide the approximate positions of cells within a two-dimensional matrix used to store cell images, along with a count of the number of cells in a given image; and (ii) a fine segmentation step using the Active Contours Without Edges method, applied to the boundaries of cells identified in the coarse segmentation step. Although this basic two-step approach provides accurate edges when the cells in a given image are sparsely distributed, the occurrence of clusters of cells in high cell density samples requires further processing. Hence, a novel algorithm for clusters is developed to identify the edges of cells within clusters and to approximate their morphological features. Based on the segmentation results, a support vector machine classifier is developed for distinguishing apoptotic cells from normal cells, using three morphological features: the mean value of pixel intensities in the cellular regions, the variance of pixel intensities in the vicinity of cell boundaries, and the lengths of the boundaries. The algorithm is shown to be efficient in terms of computational time, quantitative analysis, and differentiation accuracy, as compared with the use of the active contours method without the proposed preliminary coarse segmentation step.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dowling, Jason A., E-mail: jason.dowling@csiro.au; University of Newcastle, Callaghan, New South Wales; Sun, Jidi
Purpose: To validate automatic substitute computed tomography (sCT) scans generated from standard T2-weighted (T2w) magnetic resonance (MR) pelvic scans for MR-Sim prostate treatment planning. Patients and Methods: A Siemens Skyra 3T MR imaging (MRI) scanner with laser bridge, flat couch, and pelvic coil mounts was used to scan 39 patients scheduled for external beam radiation therapy for localized prostate cancer. For sCT generation a whole-pelvis MRI scan (1.6 mm 3-dimensional isotropic T2w SPACE [Sampling Perfection with Application optimized Contrasts using different flip angle Evolution] sequence) was acquired. Three additional small field of view scans were acquired: T2w, T2*w, and T1w flip angle 80° for gold fiducials. Patients received a routine planning CT scan. Manual contouring of the prostate, rectum, bladder, and bones was performed independently on the CT and MR scans. Three experienced observers contoured each organ on MRI, allowing interobserver quantification. To generate a training database, each patient CT scan was coregistered to their whole-pelvis T2w using symmetric rigid registration and structure-guided deformable registration. A new multi-atlas local weighted voting method was used to generate automatic contours and sCT results. Results: The mean error in Hounsfield units between the sCT and corresponding patient CT (within the body contour) was 0.6 ± 14.7 (mean ± 1 SD), with a mean absolute error of 40.5 ± 8.2 Hounsfield units. Automatic contouring results were very close to the expert interobserver level (Dice similarity coefficient): prostate 0.80 ± 0.08, bladder 0.86 ± 0.12, rectum 0.84 ± 0.06, bones 0.91 ± 0.03, and body 1.00 ± 0.003. The change in monitor units between the sCT-based plans relative to the gold standard CT plan for the same dose prescription was found to be 0.3% ± 0.8%. The 3-dimensional γ pass rate was 1.00 ± 0.00 (2 mm/2%).
Conclusions: The MR-Sim setup and automatic sCT generation methods using standard MR sequences generate realistic contours and electron densities for prostate cancer radiation therapy dose planning and digitally reconstructed radiograph generation.
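The validation metrics quoted above (mean HU error inside the body contour, per-organ Dice similarity coefficient) are straightforward to compute; a small numpy sketch on toy masks, not the authors' code:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hu_errors(sct, ct, body):
    """Mean error and mean absolute error (in HU) inside the body mask."""
    diff = (sct - ct)[body]
    return diff.mean(), np.abs(diff).mean()

# Toy example: two overlapping 4x4 square masks on a 10x10 grid.
a = np.zeros((10, 10), bool); a[2:6, 2:6] = True   # 16 voxels
b = np.zeros((10, 10), bool); b[3:7, 3:7] = True   # 16 voxels, 9 shared
print(dice(a, b))  # 2*9 / (16+16) = 0.5625
```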
A Voronoi interior adjacency-based approach for generating a contour tree
NASA Astrophysics Data System (ADS)
Chen, Jun; Qiao, Chaofei; Zhao, Renliang
2004-05-01
A contour tree is a good graphical tool for representing the spatial relations of contour lines and has found many applications in map generalization, map annotation, terrain analysis, etc. A new approach for generating contour trees by introducing a Voronoi-based interior adjacency set concept is proposed in this paper. The immediate interior adjacency set is employed to identify all of the child contours of each contour without using contour elevations. It has advantages over existing methods such as the point-in-polygon method and the region-growing-based method. This new approach can be used for spatial data mining and knowledge discovery, such as the automatic extraction of terrain features and the construction of multi-resolution digital elevation models.
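For contrast, the baseline point-in-polygon method that the Voronoi approach improves upon can be sketched as follows: each contour's parent is the smallest-area contour containing it. This pure-Python sketch assumes nested, non-crossing contours; the rectangular contours are toy data.

```python
def point_in_poly(pt, poly):
    """Ray-casting point-in-polygon test."""
    x, y = pt
    inside = False
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

def shoelace_area(poly):
    """Polygon area via the shoelace formula."""
    return abs(sum(x1 * y2 - x2 * y1
                   for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]))) / 2

def contour_tree(contours):
    """parent[j] = index of the immediate enclosing contour of j
    (the smallest-area contour containing it), or None for roots."""
    parent = []
    for j, c in enumerate(contours):
        enclosing = [i for i, other in enumerate(contours)
                     if i != j and point_in_poly(c[0], other)]
        parent.append(min(enclosing, key=lambda i: shoelace_area(contours[i]))
                      if enclosing else None)
    return parent

# Nested toy contours: c2 inside c1 inside c0; c3 stands alone.
c0 = [(0, 0), (10, 0), (10, 10), (0, 10)]
c1 = [(2, 2), (8, 2), (8, 8), (2, 8)]
c2 = [(3, 3), (5, 3), (5, 5), (3, 5)]
c3 = [(20, 20), (22, 20), (22, 22), (20, 22)]
print(contour_tree([c0, c1, c2, c3]))  # [None, 0, 1, None]
```

The quadratic number of containment tests in this baseline is one of the costs the interior-adjacency approach avoids.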
Bohoudi, O; Bruynzeel, A M E; Senan, S; Cuijpers, J P; Slotman, B J; Lagerwaard, F J; Palacios, M A
2017-12-01
To implement a robust and fast stereotactic MR-guided adaptive radiation therapy (SMART) online strategy in locally advanced pancreatic cancer (LAPC). The SMART strategy for plan adaptation was implemented with the MRIdian system (ViewRay Inc.). At each fraction, OAR (re-)contouring is done within a distance of 3 cm from the PTV surface. Online plan re-optimization is based on robust prediction of OAR dose and optimization objectives, obtained by building an artificial neural network (ANN). The proposed limited re-contouring strategy for plan adaptation (SMART-3CM) is evaluated by comparing 50 previously delivered fractions against a standard (re-)planning method using full-scale OAR (re-)contouring (FULLOAR). Plan quality was assessed using PTV coverage (V95%, Dmean, D1cc) and institutional OAR constraints (e.g. V33Gy). SMART-3CM required a significantly lower number of optimizations than FULLOAR (4 vs 18 on average) to generate a plan meeting all objectives and institutional OAR constraints. PTV coverage with both strategies was identical (mean V95% = 89%). Adaptive plans with SMART-3CM exhibited significantly lower intermediate and high doses to all OARs than FULLOAR, which also failed in 36% of cases to adhere to the V33Gy dose constraint. The SMART-3CM approach for LAPC allows good OAR sparing and adequate target coverage while requiring only limited online (re-)contouring from clinicians. Copyright © 2017 Elsevier B.V. All rights reserved.
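The 3 cm re-contouring shell around the PTV can be illustrated with a brute-force distance computation on a toy grid (numpy; the spacing, grid size, and function names are our assumptions, not part of the MRIdian workflow):

```python
import numpy as np

def within_distance(mask, spacing, dmax):
    """Boolean map of voxels lying within `dmax` (same units as
    `spacing`) of any voxel of `mask` - brute force, fine for a toy grid."""
    pts = np.argwhere(mask) * spacing
    grid = np.indices(mask.shape).reshape(mask.ndim, -1).T * spacing
    d = np.sqrt(((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)).min(1)
    return d.reshape(mask.shape) <= dmax

# Toy 2D slice: a 3x3 "PTV" on a 20x20 grid with 0.5 cm pixels;
# OARs would be re-contoured only inside the 3 cm expansion shell.
ptv = np.zeros((20, 20), bool); ptv[9:12, 9:12] = True
shell = within_distance(ptv, 0.5, 3.0) & ~ptv
```

In clinical code a distance transform would replace the brute-force loop, but the resulting mask is the same idea: restrict editing to the region where dose gradients matter.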
Lin, Kun-Ju; Huang, Jia-Yann; Chen, Yung-Sheng
2011-12-01
Glomerular filtration rate (GFR) is a commonly accepted standard estimation of renal function. Gamma camera-based methods for estimating renal uptake of (99m)Tc-diethylenetriaminepentaacetic acid (DTPA) without blood or urine sampling have been widely used. Of these, the method introduced by Gates has been the most common. Currently, most gamma cameras are equipped with a commercial program for GFR determination, a semi-quantitative analysis performed by manually drawing a region of interest (ROI) over each kidney. The GFR value can then be computed automatically from the scintigraphic determination of (99m)Tc-DTPA uptake within the kidney. Delineating the kidney area is difficult when applying a fixed threshold value. Moreover, hand-drawn ROIs are tedious, time consuming, and highly dependent on operator skill. Thus, we developed a fully automatic renal ROI estimation system based on the temporal changes in intensity counts, an intensity-pair distribution image contrast enhancement method, adaptive thresholding, and morphological operations that can locate the kidney area and obtain the GFR value from a (99m)Tc-DTPA renogram. To evaluate the performance of the proposed approach, 30 clinical dynamic renograms were analyzed. The fully automatic approach failed in one patient with very poor renal function. Four patients had a unilateral kidney, and the others had bilateral kidneys. The automatic contours from the remaining 54 kidneys were compared with the manually drawn contours. The 54 kidneys were included for area error and boundary error analyses. There was high correlation between two physicians' manual contours and the contours obtained by our approach. For area error analysis, the mean true positive area overlap was 91%, the mean false negative was 13.4%, and the mean false positive was 9.3%. The boundary error was 1.6 pixels.
The GFR calculated using this automatic computer-aided approach is reproducible and may be applied to help nuclear medicine physicians in clinical practice.
Dentalmaps: Automatic Dental Delineation for Radiotherapy Planning in Head-and-Neck Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thariat, Juliette, E-mail: jthariat@hotmail.com; Ramus, Liliane; INRIA
Purpose: To propose an automatic atlas-based segmentation framework of the dental structures, called Dentalmaps, and to assess its accuracy and relevance to guide dental care in the context of intensity-modulated radiotherapy. Methods and Materials: A multi-atlas-based segmentation, less sensitive to artifacts than previously published head-and-neck segmentation methods, was used. The manual segmentations of a 21-patient database were first deformed onto the query using nonlinear registrations with the training images and then fused to estimate the consensus segmentation of the query. Results: The framework was evaluated with a leave-one-out protocol. The maximum doses estimated using manual contours were considered as ground truth and compared with the maximum doses estimated using automatic contours. The dose estimation error was within 2-Gy accuracy in 75% of cases (with a median of 0.9 Gy), whereas it was within 2-Gy accuracy in only 30% of cases with the visual estimation method without any contour, which is the routine practice procedure. Conclusions: Dose estimates using this framework were more accurate than visual estimates without dental contours. Dentalmaps represents a useful documentation and communication tool between radiation oncologists and dentists in routine practice. Prospective multicenter assessment is underway on patients extrinsic to the database.
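The fusion step described above (deformed atlas labels combined into a consensus segmentation) is often implemented as per-voxel majority voting; a minimal numpy sketch on toy label maps, not the Dentalmaps implementation:

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse propagated atlas labels: each voxel takes the label chosen
    by the most atlases (a simple stand-in for consensus fusion)."""
    stack = np.stack(label_maps)                    # (n_atlases, H, W)
    labels = np.unique(stack)
    # Per-voxel vote count for each label value, then argmax over labels.
    votes = np.stack([(stack == l).sum(axis=0) for l in labels])
    return labels[np.argmax(votes, axis=0)]

# Three toy 2x2 label maps propagated from three atlases:
a1 = np.array([[0, 1], [1, 1]])
a2 = np.array([[0, 1], [0, 1]])
a3 = np.array([[1, 1], [0, 1]])
print(majority_vote([a1, a2, a3]))  # voxel-wise consensus: [[0 1], [0 1]]
```

Weighted variants (e.g. the local weighted voting mentioned in other records above) replace the flat vote count with similarity-based weights.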
TU-AB-303-08: GPU-Based Software Platform for Efficient Image-Guided Adaptive Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, S; Robinson, A; McNutt, T
2015-06-15
Purpose: In this study, we develop an integrated software platform for adaptive radiation therapy (ART) that combines fast and accurate image registration, segmentation, and dose computation/accumulation methods. Methods: The proposed system consists of three key components: 1) deformable image registration (DIR), 2) automatic segmentation, and 3) dose computation/accumulation. The computationally intensive modules, including DIR and dose computation, have been implemented on a graphics processing unit (GPU). All required patient-specific data, including the planning CT (pCT) with contours, daily cone-beam CTs, and the treatment plan, are automatically queried and retrieved from their own databases. To improve the accuracy of DIR between the pCT and CBCTs, we use the double force demons DIR algorithm in combination with iterative CBCT intensity correction by local intensity histogram matching. Segmentation of the daily CBCT is then obtained by propagating contours from the pCT. The daily dose delivered to the patient is computed on the registered pCT by a GPU-accelerated superposition/convolution algorithm. Finally, computed daily doses are accumulated to show the total delivered dose to date. Results: Since the accuracy of DIR critically affects the quality of the other processes, we first evaluated our DIR method on eight head-and-neck cancer cases and compared its performance. Normalized mutual information (NMI) and normalized cross-correlation (NCC) were computed as similarity measures; our method produced an overall NMI of 0.663 and NCC of 0.987, outperforming conventional methods by 3.8% and 1.9%, respectively. Experimental results show that our registration method is more consistent and robust than existing algorithms, and also computationally efficient. Computation time at each fraction took around one minute (30–50 seconds for registration and 15–25 seconds for dose computation).
Conclusion: We developed an integrated GPU-accelerated software platform that enables accurate and efficient DIR, auto-segmentation, and dose computation, thus supporting an efficient ART workflow. This work was supported by NIH/NCI under grant R42CA137886.
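The two similarity measures used in the evaluation, NMI and NCC, can be computed in a few lines. This sketch uses one common definition of NMI, (H(A)+H(B))/H(A,B), which may differ from the authors' exact formulation:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.mean(a * b)

def nmi(a, b, bins=32):
    """Normalized mutual information, NMI = (H(A) + H(B)) / H(A,B)."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return (hx + hy) / hxy

img = np.random.default_rng(1).random((16, 16))
print(ncc(img, img), nmi(img, img))  # identical images: NCC = 1, NMI = 2
```

With this definition NMI ranges from 1 (independent) to 2 (identical), so the reported 0.663 presumably follows a different normalization; the relative comparison between methods is what matters.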
Placental fetal stem segmentation in a sequence of histology images
NASA Astrophysics Data System (ADS)
Athavale, Prashant; Vese, Luminita A.
2012-02-01
Recent research in perinatal pathology argues that analyzing properties of the placenta may reveal important information on how certain diseases progress. One important property is the structure of the placental fetal stems. Analysis of the fetal stems in a placenta could be useful in the study and diagnosis of some diseases like autism. To study the fetal stem structure effectively, we need to automatically and accurately track fetal stems through a sequence of digitized hematoxylin and eosin (H&E) stained histology slides. There are many obstacles to achieving this goal, among them: the large size of the images, misalignment of consecutive H&E slides, unpredictable inaccuracies of manual tracing, and the complicated texture patterns of various tissue types without clear characteristics. In this paper we propose a novel algorithm to achieve automatic tracing of the fetal stem in a sequence of H&E images, based on an inaccurate manual segmentation of a fetal stem in one of the images. This algorithm combines global affine registration, local non-affine registration and a novel 'dynamic' version of the active contours model without edges. We first use global affine image registration of all the images based on displacement, scaling and rotation. This gives us the approximate location of the corresponding fetal stem in the image that needs to be traced. We then use the affine registration algorithm "locally" near this location. At this point, we use a fast non-affine registration based on an L2-similarity measure and diffusion regularization to get a better location of the fetal stem. Finally, we have to take into account inaccuracies in the initial tracing. This is achieved through a novel dynamic version of the active contours model without edges where the coefficients of the fitting terms are computed iteratively to ensure that we obtain a unique stem in the segmentation.
The segmentation thus obtained can then be used as an initial guess to obtain segmentation in the rest of the images in the sequence. This constitutes an important step in the extraction and understanding of the fetal stem vasculature.
NASA Astrophysics Data System (ADS)
Dowling, J. A.; Burdett, N.; Greer, P. B.; Sun, J.; Parker, J.; Pichler, P.; Stanwell, P.; Chandra, S.; Rivest-Hénault, D.; Ghose, S.; Salvado, O.; Fripp, J.
2014-03-01
Our group have been developing methods for MRI-alone prostate cancer radiation therapy treatment planning. To assist with clinical validation of the workflow we are investigating a cloud platform solution for research purposes. Benefits of cloud computing can include increased scalability, performance and extensibility while reducing total cost of ownership. In this paper we demonstrate the generation of DICOM-RT directories containing an automatic average atlas based electron density image and fast pelvic organ contouring from whole pelvis MR scans.
NASA Technical Reports Server (NTRS)
Feinstein, S. P.; Girard, M. A.
1979-01-01
An automated technique for measuring particle diameters and their spatial coordinates from holographic reconstructions is being developed. Preliminary tests on actual cold-flow holograms of impinging jets indicate that a suitable discriminant algorithm consists of a Fourier-Gaussian noise filter and a contour thresholding technique. This process identifies circular as well as noncircular objects. The desired objects (in this case, circular or possibly ellipsoidal) are then selected automatically from the above set and stored with their parametric representations. From these data, drop-size distributions as a function of spatial coordinates can be generated and combustion effects due to hardware and/or physical variables studied.
[Medical image segmentation based on the minimum variation snake model].
Zhou, Changxiong; Yu, Shenglin
2007-02-01
It is difficult for the traditional parametric active contour (snake) model to deal with automatic segmentation of weak-edge medical images. After analyzing the snake and geometric active contour models, a minimum variation snake model was proposed and successfully applied to weak-edge medical image segmentation. The proposed model replaces the constant force in the balloon snake model with a variable force that incorporates information from the foreground and background regions. It drives the curve to evolve under the criterion of minimum variation of the foreground and background regions. Experiments have shown that the proposed model is robust to initial contour placement and can segment weak-edge medical images automatically. In addition, tests on noisy medical images filtered by an edge-preserving curvature flow filter also show good segmentation results.
Markel, D; Naqa, I El
2012-06-01
Positron emission tomography (PET) presents a valuable resource for delineating the biological tumor volume (BTV) for image-guided radiotherapy. However, accurate and consistent image segmentation is a significant challenge within the context of PET, owing to its low spatial resolution and high levels of noise. Active contour methods based on level set methods can be sensitive to noise and susceptible to failure in low contrast regions. Therefore, this work evaluates a novel active contour algorithm applied to the task of PET tumor segmentation. A novel active contour segmentation algorithm based on maximizing the Jensen-Renyi divergence between regions of interest was applied to the task of segmenting lesions in 7 patients with T3-T4 pharyngolaryngeal squamous cell carcinoma. The algorithm was implemented on an NVidia GeForce GTX 560M GPU. The cases were taken from the Louvain database, which includes contours of the macroscopically defined BTV drawn using histology of resected tissue. The images were pre-processed using denoising/deconvolution. The segmented volumes agreed well with the macroscopic contours, with an average concordance index and classification error of 0.6 ± 0.09 and 55 ± 16.5%, respectively. The algorithm in its present implementation requires approximately 0.5-1.3 sec per iteration and can reach convergence within 10-30 iterations. The Jensen-Renyi active contour method was shown to approach, and in terms of concordance outperform, a variety of PET segmentation methods that have been previously evaluated using the same data. Further evaluation on a larger dataset along with performance optimization is necessary before clinical deployment. © 2012 American Association of Physicists in Medicine.
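The divergence being maximized can be sketched for discrete distributions: the Jensen-Renyi divergence is the Renyi entropy of a mixture minus the mixture of the Renyi entropies. This is a two-region, equal-weight special case for illustration; the paper's region-based level set functional is more involved.

```python
import numpy as np

def renyi_entropy(p, alpha=0.5):
    """Renyi entropy of a discrete distribution p (alpha != 1)."""
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def jensen_renyi(p, q, alpha=0.5, w=0.5):
    """Jensen-Renyi divergence: Renyi entropy of the mixture minus the
    mixed Renyi entropies (non-negative for alpha in (0, 1))."""
    m = w * p + (1.0 - w) * q
    return renyi_entropy(m, alpha) - (w * renyi_entropy(p, alpha)
                                      + (1.0 - w) * renyi_entropy(q, alpha))

p = np.array([1.0, 0.0])
q = np.array([0.0, 1.0])
print(jensen_renyi(p, p))  # identical distributions: 0.0
print(jensen_renyi(p, q))  # maximally distinct: ln 2, about 0.693
```

Maximizing this quantity between the intensity distributions inside and outside the contour pushes the contour toward a boundary separating statistically distinct regions.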
Data Mining Citizen Science Results
NASA Astrophysics Data System (ADS)
Borne, K. D.
2012-12-01
Scientific discovery from big data is enabled through multiple channels, including data mining (through the application of machine learning algorithms) and human computation (commonly implemented through citizen science tasks). We will describe the results of new data mining experiments on the results from citizen science activities. Discovering patterns, trends, and anomalies in data are among the powerful contributions of citizen science. Establishing scientific algorithms that can subsequently re-discover the same types of patterns, trends, and anomalies in automatic data processing pipelines will ultimately result from the transformation of those human algorithms into computer algorithms, which can then be applied to much larger data collections. Scientific discovery from big data is thus greatly amplified through the marriage of data mining with citizen science.
Efficient hyperspectral image segmentation using geometric active contour formulation
NASA Astrophysics Data System (ADS)
Albalooshi, Fatema A.; Sidike, Paheding; Asari, Vijayan K.
2014-10-01
In this paper, we present a new formulation of geometric active contours that embeds local hyperspectral image information for accurate object region and boundary extraction. We exploit a self-organizing map (SOM) unsupervised neural network to train our model. The segmentation process is achieved by the construction of a level set cost functional in which the dynamic variable is the best matching unit (BMU) coming from the SOM map. In addition, we use Gaussian filtering to discipline the deviation of the level set functional from a signed distance function, which helps to eliminate the computationally expensive re-initialization step. By using the collective computational ability and energy convergence capability of the active contour model (ACM) energy functional, our method optimizes the geometric ACM energy functional with lower computational time and a smoother level set function. The proposed algorithm starts with feature extraction from raw hyperspectral images. In this step, the principal component analysis (PCA) transformation is employed, which helps in reducing dimensionality and selecting the best sets of significant spectral bands. Then the modified geometric level set functional based ACM is applied to the optimal number of spectral bands determined by the PCA. By introducing local significant spectral band information, our proposed method is capable of forcing the level set functional to be close to a signed distance function, and therefore considerably reduces the need for the expensive re-initialization procedure. To verify the effectiveness of the proposed technique, we use real-life hyperspectral images and test our algorithm in varying textural regions. This framework can be easily adapted to different applications for object segmentation in aerial hyperspectral imagery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Dengwang; Liu, Li; Kapp, Daniel S.
2015-06-15
Purpose: To facilitate automatic segmentation, in this work we propose a narrow-shell strategy to enhance the information of each contour in the library and to improve the accuracy of the existing atlas-based approach. Methods: In setting up an atlas-based library, we include not only the coordinates of contour points, but also the image features adjacent to the contour. 139 planning CT scans with normal appearing livers obtained during radiotherapy treatment planning were used to construct the library. The CT images within the library were registered to each other using affine registration. A nonlinear narrow shell was constructed, with its regional thickness determined by the distance between adjacent vertices along the contour. The narrow shell was automatically constructed both inside and outside of the liver contours. The common image features within the narrow shell between a new case and a library case were first selected by a Speed-up Robust Features (SURF) strategy. A deformable registration was then performed using a thin plate splines (TPS) technique. The contour associated with the library case was propagated automatically onto the images of the new patient by exploiting the deformation field vectors. The liver contour was finally obtained by employing a level set based energy function within the narrow shell. The performance of the proposed method was evaluated by comparing quantitatively the auto-segmentation results with those delineated by a physician. Results: Application of the technique to 30 liver cases suggested that the technique was capable of reliably segmenting organs such as the liver with little human intervention. Compared with the manual segmentation results by a physician, the average volumetric overlap percentage (VOP) was found to be 92.43% ± 2.14%.
Conclusion: Incorporation of image features into the library contours improves the currently available atlas-based auto-contouring techniques and provides a clinically practical solution for auto-segmentation. This work is supported by NIH/NIBIB (1R01-EB016777), National Natural Science Foundation of China (No.61471226 and No.61201441), Research funding from Shandong Province (No.BS2012DX038 and No.J12LN23), and Research funding from Jinan City (No.201401221 and No.20120109)
An automated workflow for patient-specific quality control of contour propagation
NASA Astrophysics Data System (ADS)
Beasley, William J.; McWilliam, Alan; Slevin, Nicholas J.; Mackay, Ranald I.; van Herk, Marcel
2016-12-01
Contour propagation is an essential component of adaptive radiotherapy, but current contour propagation algorithms are not yet sufficiently accurate to be used without manual supervision. Manual review of propagated contours is time-consuming, making routine implementation of real-time adaptive radiotherapy unrealistic. Automated methods of monitoring the performance of contour propagation algorithms are therefore required. We have developed an automated workflow for patient-specific quality control of contour propagation and validated it on a cohort of head and neck patients, for whom parotids were outlined by two observers. Two types of error were simulated: mislabelling of contours and introducing noise in the scans before propagation. The ability of the workflow to correctly predict the occurrence of errors was tested, taking both sets of observer contours as ground truth, using receiver operating characteristic analysis. The area under the curve was 0.90 and 0.85 for the observers, indicating good ability to predict the occurrence of errors. This tool could potentially be used to identify propagated contours that are likely to be incorrect, acting as a flag for manual review of these contours. This would make contour propagation more efficient, facilitating the routine implementation of adaptive radiotherapy.
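The receiver operating characteristic analysis used here reduces, for the area under the curve, to a rank statistic; a small self-contained sketch (the labels and scores are toy data, not the study's):

```python
import numpy as np

def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney rank statistic:
    P(score of a random error case > score of a random correct case),
    counting ties as one half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

labels = np.array([0, 0, 1, 1])          # 1 = simulated propagation error
scores = np.array([0.1, 0.4, 0.35, 0.8]) # workflow's error score per contour
print(roc_auc(labels, scores))  # 0.75
```

An AUC of 0.90 as reported above means a flagged (error) contour outranks a correct one 90% of the time.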
Albà, Xènia; Figueras I Ventura, Rosa M; Lekadir, Karim; Tobon-Gomez, Catalina; Hoogendoorn, Corné; Frangi, Alejandro F
2014-12-01
Magnetic resonance imaging (MRI), specifically late-enhanced MRI, is the standard clinical imaging protocol to assess cardiac viability. Segmentation of myocardial walls is a prerequisite for this assessment. Automatic and robust multisequence segmentation is required to support processing massive quantities of data. A generic rule-based framework to automatically segment the left ventricle myocardium is presented here. We use intensity information, and include shape and interslice smoothness constraints, providing robustness to subject- and study-specific changes. Our automatic initialization considers the geometrical and appearance properties of the left ventricle, as well as interslice information. The segmentation algorithm uses a decoupled, modified graph cut approach with control points, providing a good balance between flexibility and robustness. The method was evaluated on late-enhanced MRI images from a 20-patient in-house database, and on cine-MRI images from a 15-patient open access database, both using as reference manually delineated contours. Segmentation agreement, measured using the Dice coefficient, was 0.81±0.05 and 0.92±0.04 for late-enhanced MRI and cine-MRI, respectively. The method was also compared favorably to a three-dimensional Active Shape Model approach. The experimental validation with two magnetic resonance sequences demonstrates increased accuracy and versatility. © 2013 Wiley Periodicals, Inc.
Fan, Jiawei; Wang, Jiazhou; Zhang, Zhen; Hu, Weigang
2017-06-01
To develop a new automated treatment planning solution for breast and rectal cancer radiotherapy. The automated treatment planning solution developed in this study includes selection of an iteratively optimized training dataset, dose volume histogram (DVH) prediction for the organs at risk (OARs), and automatic generation of clinically acceptable treatment plans. The iteratively optimized training dataset is selected by iterative optimization from 40 treatment plans for left-breast and rectal cancer patients who received radiation therapy. A two-dimensional kernel density estimation algorithm (denoted two-parameter KDE), which incorporates two predictive features, was implemented to produce the predicted DVHs. Finally, 10 additional new left-breast treatment plans were re-planned using the Pinnacle3 Auto-Planning (AP) module (version 9.10, Philips Medical Systems) with the objective functions derived from the predicted DVH curves. Automatically generated re-optimized treatment plans were compared with the original manually optimized plans. By combining the iteratively optimized training dataset methodology and the two-parameter KDE prediction algorithm, our proposed automated planning strategy improves the accuracy of the DVH prediction. The automatically generated treatment plans using the dose objectives derived from the predicted DVHs can achieve better dose sparing for some OARs without compromising other metrics of plan quality. The proposed new automated treatment planning solution can be used to efficiently evaluate and improve the quality and consistency of the treatment plans for intensity-modulated breast and rectal cancer radiation therapy. © 2017 American Association of Physicists in Medicine.
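A two-feature Gaussian KDE of the kind described can be sketched in a few lines (isotropic bandwidth and toy feature vectors; the authors' exact kernel, features, and bandwidth choices are not specified here):

```python
import numpy as np

def kde2d(train, query, bandwidth=1.0):
    """Two-feature Gaussian kernel density estimate with an isotropic
    bandwidth, evaluated at a single query point."""
    d = query[None, :] - train                      # (n_train, 2) offsets
    k = np.exp(-0.5 * np.sum(d ** 2, axis=1) / bandwidth ** 2)
    return k.sum() / (len(train) * 2.0 * np.pi * bandwidth ** 2)

# Toy training features (e.g. two geometric predictors of an OAR's DVH):
train = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])
near = kde2d(train, np.array([0.0, 0.0]))
far = kde2d(train, np.array([5.0, 5.0]))
print(near > far)  # density is higher near the training data
```

In the planning workflow, densities like these over (feature, dose) pairs are turned into conditional DVH predictions that seed the optimizer's objectives.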
Gray matter segmentation of the spinal cord with active contours in MR images.
Datta, Esha; Papinutto, Nico; Schlaeger, Regina; Zhu, Alyssa; Carballido-Gamio, Julio; Henry, Roland G
2017-02-15
Fully or partially automated spinal cord gray matter segmentation techniques will enable pivotal gray matter measurements in the study of various neurological disorders. The objective of this work was multi-fold: (1) to develop a gray matter segmentation technique that combines registration methods and an existing delineation of the cord edge with Morphological Geodesic Active Contour (MGAC) models; (2) to assess the accuracy and reproducibility of the newly developed technique on 2D PSIR T1-weighted images; (3) to test how the algorithm performs on different resolutions and other contrasts; (4) to demonstrate how the algorithm can be extended to 3D scans; and (5) to show the clinical potential for multiple sclerosis patients. The MGAC algorithm was developed using a publicly available implementation of a morphological geodesic active contour model, with the spinal cord segmentation tool of the software Jim (Xinapse Systems) providing an initial estimate of the cord boundary. The MGAC algorithm was demonstrated on 2D PSIR images of the C2/C3 level at two different resolutions, 2D T2*-weighted images of the C2/C3 level, and a 3D PSIR image. These images were acquired from 45 healthy controls and 58 multiple sclerosis patients selected for the absence of evident lesions at the C2/C3 level. Accuracy was assessed through visual inspection, Hausdorff distances, and Dice similarity coefficients. Reproducibility was assessed through intraclass correlation coefficients. Validity was assessed by comparing segmented gray matter areas in images of different resolution for both manual and MGAC segmentations. Between MGAC and manual segmentations in healthy controls, the mean Dice similarity coefficient was 0.88 (0.82-0.93) and the mean Hausdorff distance was 0.61 (0.46-0.76) mm. The intraclass correlation coefficient from test and retest scans of healthy controls was 0.88.
The percent change between the manual segmentations from high- and low-resolution images was 25%, while the percent change between the MGAC segmentations from high- and low-resolution images was 13%. Between MGAC and manual segmentations in MS patients, the average Dice similarity coefficient was 0.86 (0.8-0.92) and the average Hausdorff distance was 0.83 (0.29-1.37) mm. We demonstrate that an automatic segmentation technique based on a morphological geodesic active contour algorithm can provide accurate and precise spinal cord gray matter segmentations on 2D PSIR images. We have also shown how this automated technique can potentially be extended to other imaging protocols. Copyright © 2016 Elsevier Inc. All rights reserved.
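The accuracy figures in this record (mean Dice 0.88, mean Hausdorff 0.61 mm) come from standard overlap and boundary metrics, which can be computed from two binary masks as follows (toy masks stand in for the MGAC and manual segmentations; distances are in pixels, not mm):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice similarity coefficient of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.sum(a & b) / (np.sum(a) + np.sum(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the point sets of two binary masks."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

manual = np.zeros((64, 64), bool); manual[20:40, 20:40] = True
auto = np.zeros((64, 64), bool); auto[22:40, 20:40] = True   # misses two rows
print(round(dice(auto, manual), 3), hausdorff(auto, manual))
```

A physical Hausdorff distance follows by multiplying the pixel distance by the in-plane resolution.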
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jia, J; Tian, Z; Gu, X
Purpose: To investigate the dosimetric benefit of adaptive re-planning for lung stereotactic body radiotherapy (SBRT). Methods: Five lung cancer patients treated with SBRT were retrospectively investigated. Our in-house supercomputing online re-planning environment (SCORE) was used to realize the re-planning process. First, deformable image registration was carried out to transfer contours from the treatment planning CT to each treatment CBCT. Then, automatic re-planning using fluence-map optimization guided by the original plan DVH was performed to obtain a new plan for the up-to-date patient geometry. We compared the re-optimized plan with the original plan projected onto the up-to-date patient geometry in terms of critical dosimetric parameters, such as PTV coverage and the maximum and volumetric constraint doses to the spinal cord and esophagus. Results: The average volume of PTV covered by the prescription dose across all patients improved by 7.56% after adaptive re-planning. The volume of the spinal cord receiving 14.5 Gy and 23 Gy (V14.5, V23) decreased by 1.48% and 0.68%, respectively. For the esophagus, the volume receiving 19.5 Gy (V19.5) was reduced by 1.37%. Meanwhile, the maximum dose dropped by 2.87% for the spinal cord and 4.80% for the esophagus. Conclusion: Our experimental results demonstrate that adaptive re-planning for lung SBRT has the potential to minimize the dosimetric effect of inter-fraction deformation and thus improve target coverage while reducing the risk of toxicity to nearby normal tissues.
Paiton, Dylan M.; Kenyon, Garrett T.; Brumby, Steven P.; Schultz, Peter F.; George, John S.
2015-07-28
An approach to detecting objects in an image dataset may combine texture/color detection, shape/contour detection, and/or motion detection using sparse, generative, hierarchical models with lateral and top-down connections. A first independent representation of objects in an image dataset may be produced using a color/texture detection algorithm. A second independent representation of objects in the image dataset may be produced using a shape/contour detection algorithm. A third independent representation of objects in the image dataset may be produced using a motion detection algorithm. The first, second, and third independent representations may then be combined into a single coherent output using a combinatorial algorithm.
Fully Automatic Segmentation of Fluorescein Leakage in Subjects With Diabetic Macular Edema
Rabbani, Hossein; Allingham, Michael J.; Mettu, Priyatham S.; Cousins, Scott W.; Farsiu, Sina
2015-01-01
Purpose. To create and validate software to automatically segment leakage area in real-world clinical fluorescein angiography (FA) images of subjects with diabetic macular edema (DME). Methods. Fluorescein angiography images obtained from 24 eyes of 24 subjects with DME were retrospectively analyzed. Both video and still-frame images were obtained using a Heidelberg Spectralis 6-mode HRA/OCT unit. We aligned early and late FA frames in the video by a two-step nonrigid registration method. To remove background artifacts, we subtracted early and late FA frames. Finally, after postprocessing steps, including detection and inpainting of the vessels, a robust active contour method was utilized to obtain leakage area in a 1500-μm-radius circular region centered at the fovea. Images were captured at different fields of view (FOVs) and were often contaminated with outliers, as is the case in real-world clinical imaging. Our algorithm was applied to these images with no manual input. Separately, all images were manually segmented by two retina specialists. The sensitivity, specificity, and accuracy of manual interobserver, manual intraobserver, and automatic methods were calculated. Results. The mean accuracy was 0.86 ± 0.08 for automatic versus manual, 0.83 ± 0.16 for manual interobserver, and 0.90 ± 0.08 for manual intraobserver segmentation methods. Conclusions. Our fully automated algorithm can reproducibly and accurately quantify the area of leakage of clinical-grade FA video and is congruent with expert manual segmentation. The performance was reliable for different DME subtypes. This approach has the potential to reduce time and labor costs and may yield objective and reproducible quantitative measurements of DME imaging biomarkers. PMID:25634978
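The sensitivity, specificity, and accuracy reported for the leakage segmentation above are pixel-wise confusion-matrix quantities. A sketch with toy masks standing in for the automatic and manual segmentations:

```python
import numpy as np

def seg_metrics(pred, ref):
    """Pixel-wise sensitivity, specificity and accuracy of a binary segmentation."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.sum(pred & ref)
    tn = np.sum(~pred & ~ref)
    fp = np.sum(pred & ~ref)
    fn = np.sum(~pred & ref)
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / pred.size

ref = np.zeros((100, 100), bool); ref[30:70, 30:70] = True      # "manual" leakage area
pred = np.zeros((100, 100), bool); pred[32:72, 30:70] = True    # "automatic" result
sens, spec, acc = seg_metrics(pred, ref)
print(round(sens, 3), round(spec, 3), round(acc, 3))
```

Averaging these per-image values over a cohort gives summary figures like the 0.86 mean accuracy in the record.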
Attention capture by contour onsets and offsets: no special role for onsets.
Watson, D G; Humphreys, G W
1995-07-01
In five experiments, we investigated the power of targets defined by the onset or offset of one of an object's parts (contour onsets and offsets) either to guide or to capture visual attention. In Experiment 1, search for a single contour onset target was compared with search for a single contour offset target against a static background of distractors; no difference was found in the efficiency with which each could be detected. In Experiment 2, onsets and offsets were compared for automatic attention capture when both occurred simultaneously. Unlike in previous studies, the effects of overall luminance change, new-object creation, and the number of onset and offset items were controlled. Contour onset and offset items were found to capture attention equally well. However, display size effects on both target types were also apparent. Such effects may have been due to competition for selection between multiple onset and offset stimuli. In Experiments 3 and 4, single onset and offset stimuli were presented simultaneously and pitted directly against one another against a background of static distractors. In Experiment 3, we examined "guided search" for a target formed either from an onset or from an offset among static items. In Experiment 4, the onsets and offsets were uncorrelated with the target location. Similar results occurred in both experiments: target onsets and offsets were detected more efficiently than static stimuli, which required serial search; there remained effects of display size on performance; but there was still no advantage for onsets. In Experiment 5, we examined automatic attention capture by single onset and offset stimuli presented individually among static distractors. Again, there was no advantage for onset over offset targets, and a display size effect was also present.
These results suggest that, both in isolation and in competition, onsets that do not form new objects neither guide nor gain automatic attention more efficiently than offsets. In addition, in contrast to previous studies in which onsets formed new objects, contour onsets and offsets did not reliably capture attention automatically.
WE-AB-BRA-05: Fully Automatic Segmentation of Male Pelvic Organs On CT Without Manual Intervention
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Y; Lian, J; Chen, R
Purpose: We aim to develop a fully automatic tool for accurate contouring of the major male pelvic organs in CT images for radiotherapy, without any manual initialization, while achieving superior performance over existing tools. Methods: A learning-based 3D deformable shape model was developed for automatic contouring. Specifically, we utilized a recent machine learning method, random forest, to jointly learn both an image regressor and a classifier for each organ. In particular, the image regressor is trained to predict the 3D displacement from each vertex of the 3D shape model toward the organ boundary, based on the local image appearance around the location of that vertex. The predicted 3D displacements are then used to drive the 3D shape model toward the target organ. Once the shape model is deformed close to the target organ, it is further refined by an organ likelihood map estimated by the learned classifier. As the organ likelihood map provides a good guide to the organ boundary, a precise contouring result can be achieved by deforming the 3D shape model locally to fit boundaries in the likelihood map. Results: We applied our method to 29 previously treated prostate cancer patients, each with one planning CT scan. Compared with manually delineated pelvic organs, our method obtains overlap ratios of 85.2%±3.74% for the prostate, 94.9%±1.62% for the bladder, and 84.7%±1.97% for the rectum. Conclusion: This work demonstrated the feasibility of a novel machine-learning-based approach for accurate and automatic contouring of the major male pelvic organs. It shows the potential to replace time-consuming and inconsistent manual contouring in the clinic. Compared with existing work, our method is both more accurate and more efficient, since it does not require any manual intervention such as manual landmark placement. Moreover, our method obtained contouring results very similar to those of clinical experts.
This project was partially supported by NCI grant 1R01CA140413.
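The regressor described in this record predicts a 3D displacement per shape-model vertex from local appearance features. A sketch using scikit-learn's random forest on synthetic data (the 12-feature appearance vector and the linear ground truth are invented for illustration; the paper's joint regressor/classifier training is not reproduced):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# hypothetical training data: local appearance features sampled around shape-model
# vertices, paired with the known 3D displacement to the organ boundary
X = rng.normal(size=(500, 12))                   # 12 appearance features per sample
true_w = rng.normal(size=(12, 3))
Y = X @ true_w + rng.normal(0, 0.1, (500, 3))    # displacement targets (dx, dy, dz)

# random forest handles multi-output regression directly
reg = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, Y)
disp = reg.predict(X[:1])                        # predicted displacement for one vertex
print(disp.shape)
```

At deformation time, each vertex would be moved by its predicted displacement, then the shape model re-fit and the process iterated toward the boundary.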
NASA Astrophysics Data System (ADS)
Taboada, B.; Vega-Alvarado, L.; Córdova-Aguilar, M. S.; Galindo, E.; Corkidi, G.
2006-09-01
Characterization of the multiphase systems occurring in fermentation processes is a time-consuming and tedious task when manual methods are used. This work describes a new semi-automatic methodology for the on-line assessment of the diameters of oil drops and air bubbles occurring in a complex simulated fermentation broth. High-quality digital images were obtained from the interior of a mechanically stirred tank. These images were pre-processed to find segments of edges belonging to the objects of interest. The contours of air bubbles and oil drops were then reconstructed using an improved Hough transform algorithm, which was tested in two-, three- and four-phase simulated fermentation model systems. The results were compared against those obtained manually by a trained observer, showing no statistically significant differences. The method reduced the total processing time for the measurement of bubbles and drops in the different systems by 21-50% and the manual intervention time for the segmentation procedure by 80-100%.
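Reconstructing circular bubble and drop contours with a Hough transform, as in this record, can be sketched with scikit-image's circular Hough transform on a synthetic edge image (the paper's improved algorithm is not reproduced; this is the standard variant):

```python
import numpy as np
from skimage.draw import circle_perimeter
from skimage.transform import hough_circle, hough_circle_peaks

# synthetic pre-processed edge image with one circular "bubble" contour (radius 12 px)
edges = np.zeros((64, 64), dtype=np.uint8)
rr, cc = circle_perimeter(30, 30, 12)
edges[rr, cc] = 1

# accumulate votes over a range of candidate radii, then pick the strongest circle
radii = np.arange(8, 20)
hspaces = hough_circle(edges, radii)
accums, cx, cy, rad = hough_circle_peaks(hspaces, radii, total_num_peaks=1)
print(int(cx[0]), int(cy[0]), int(rad[0]))
```

With broken or partial edge segments, as in real broth images, the accumulator-voting step is what lets the circle be recovered from incomplete contours.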
Hatt, Mathieu; Lee, John A.; Schmidtlein, Charles R.; El Naqa, Issam; Caldwell, Curtis; De Bernardi, Elisabetta; Lu, Wei; Das, Shiva; Geets, Xavier; Gregoire, Vincent; Jeraj, Robert; MacManus, Michael P.; Mawlawi, Osama R.; Nestle, Ursula; Pugachev, Andrei B.; Schöder, Heiko; Shepherd, Tony; Spezi, Emiliano; Visvikis, Dimitris; Zaidi, Habib; Kirov, Assen S.
2017-01-01
Purpose The purpose of this educational report is to provide an overview of the present state-of-the-art PET auto-segmentation (PET-AS) algorithms and their respective validation, with an emphasis on providing the user with help in understanding the challenges and pitfalls associated with selecting and implementing a PET-AS algorithm for a particular application. Approach A brief description of the different types of PET-AS algorithms is provided using a classification based on method complexity and type. The advantages and the limitations of the current PET-AS algorithms are highlighted based on current publications and existing comparison studies. A review of the available image datasets and contour evaluation metrics in terms of their applicability for establishing a standardized evaluation of PET-AS algorithms is provided. The performance requirements for the algorithms and their dependence on the application, the radiotracer used and the evaluation criteria are described and discussed. Finally, a procedure for algorithm acceptance and implementation, as well as the complementary role of manual and auto-segmentation are addressed. Findings A large number of PET-AS algorithms have been developed within the last 20 years. Many of the proposed algorithms are based on either fixed or adaptively selected thresholds. More recently, numerous papers have proposed the use of more advanced image analysis paradigms to perform semi-automated delineation of the PET images. However, the level of algorithm validation is variable and for most published algorithms is either insufficient or inconsistent which prevents recommending a single algorithm. This is compounded by the fact that realistic image configurations with low signal-to-noise ratios (SNR) and heterogeneous tracer distributions have rarely been used. Large variations in the evaluation methods used in the literature point to the need for a standardized evaluation protocol. 
Conclusions Available comparison studies suggest that PET-AS algorithms relying on advanced image analysis paradigms generally provide more accurate segmentation than approaches based on PET activity thresholds, particularly for realistic configurations. However, this may not be the case for simple-shape lesions in situations with a narrower range of parameters, where simpler methods may also perform well. Recent algorithms which employ some type of consensus or automatic selection between several PET-AS methods have the potential to overcome the limitations of the individual methods when appropriately trained. In either case, accuracy evaluation is required for each PET scanner and each scanning and image-reconstruction protocol. For the simpler, less robust approaches, adaptation to scanning conditions, tumor type, and tumor location by optimization of parameters is necessary. The results from the method evaluation stage can be used to estimate the contouring uncertainty. All PET-AS contours should be critically verified by a physician. A standard test, i.e., a benchmark dedicated to evaluating both existing and future PET-AS algorithms, needs to be designed to aid clinicians in evaluating and selecting PET-AS algorithms and to establish performance limits for their acceptance for clinical use. The initial steps toward designing and building such a standard are being undertaken by the task group members. PMID:28120467
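The fixed-threshold PET-AS family discussed in this report reduces, in its simplest form, to keeping voxels above a fraction of SUVmax. A toy sketch on a synthetic uptake volume (all values invented; real use would restrict the search to a region around the lesion):

```python
import numpy as np

def threshold_segment(suv, frac=0.4):
    """Fixed-threshold PET auto-segmentation: keep voxels above frac * SUVmax."""
    return suv >= frac * suv.max()

rng = np.random.default_rng(2)
suv = rng.uniform(0.5, 1.5, (32, 32, 32))   # background activity
suv[10:20, 10:20, 10:20] += 8.0             # hot "lesion" with high tumor-to-background
mask = threshold_segment(suv, frac=0.4)
print(int(mask.sum()))
```

The report's caveat is visible even here: the result depends entirely on `frac`, which is why adaptive and consensus methods were developed for low-SNR, heterogeneous uptake.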
Large-scale urban point cloud labeling and reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Liqiang; Li, Zhuqiang; Li, Anjian; Liu, Fangyu
2018-04-01
The large number of object categories and the many overlapping or closely neighboring objects in large-scale urban scenes pose great challenges for point cloud classification. In this paper, a novel framework is proposed for the classification and reconstruction of airborne laser scanning point cloud data. To label point clouds, we present a rectified-linear-unit neural network, named ReLu-NN, in which rectified linear units (ReLU) replace the traditional sigmoid as the activation function in order to speed up convergence. Since the features of the point cloud are sparse, we reduce the number of active neurons through dropout to avoid over-fitting during training. The set of feature descriptors for each 3D point is encoded through self-taught learning, forming a discriminative feature representation that is taken as the input of the ReLu-NN. The segmented building points are consolidated through an edge-aware point set resampling algorithm and then reconstructed into 3D lightweight models using the 2.5D contouring method (Zhou and Neumann, 2010). Compared with deep learning approaches, the proposed ReLu-NN can easily classify unorganized point clouds without rasterizing the data, and it does not need a large number of training samples. Most of the parameters in the network are learned, so the cost of intensive parameter tuning is significantly reduced. Experimental results on various datasets demonstrate that the proposed framework achieves better performance than other related algorithms in terms of classification accuracy and reconstruction quality.
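The ReLu-NN ingredients named above (ReLU activations, dropout on sparse features) can be sketched as a forward pass in plain NumPy; the layer sizes and weights below are invented, and training is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def forward(x, weights, drop_p=0.3, train=True):
    """Forward pass of a small ReLU network with inverted dropout on hidden layers."""
    h = x
    for i, (W, b) in enumerate(weights):
        h = h @ W + b
        if i < len(weights) - 1:               # hidden layers only
            h = relu(h)
            if train:
                mask = rng.random(h.shape) > drop_p
                h = h * mask / (1.0 - drop_p)  # inverted dropout keeps expectation
    return h

# hypothetical sizes: 33 self-taught feature descriptors -> 64 hidden -> 8 classes
weights = [(rng.normal(0, 0.1, (33, 64)), np.zeros(64)),
           (rng.normal(0, 0.1, (64, 8)), np.zeros(8))]
scores = forward(rng.normal(size=(5, 33)), weights, train=False)   # 5 points
print(scores.shape)
```

Dropout is active only during training; at inference (`train=False`) the full network is used, as in the paper's labeling stage.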
Rebouças Filho, Pedro Pedrosa; Cortez, Paulo César; da Silva Barros, Antônio C; C Albuquerque, Victor Hugo; R S Tavares, João Manuel
2017-01-01
The World Health Organization (WHO) estimates that 300 million people have asthma and 210 million people have Chronic Obstructive Pulmonary Disease (COPD), and that COPD will become the third leading cause of death worldwide by 2030. Computational vision systems are commonly used in pulmonology to address the task of image segmentation, which is essential for accurate medical diagnoses. Segmentation defines the regions of the lungs in CT images of the thorax that must be further analyzed by the system or by a specialist physician. This work proposes a novel technique named the 3D Adaptive Crisp Active Contour Method (3D ACACM) for the segmentation of CT lung images. The method starts with a sphere within the lung to be segmented, which is deformed by forces acting on it toward the lung borders. This process is performed iteratively to minimize an energy function associated with the 3D deformable model used. In the experimental assessment, the 3D ACACM is compared against three approaches commonly used in this field: automatic 3D region growing, a level-set algorithm based on coherent propagation, and semi-automatic segmentation by an expert using the 3D OsiriX toolbox. When applied to 40 CT scans of the chest, the 3D ACACM achieved an average F-measure of 99.22%, demonstrating its competence in segmenting lungs in CT images. Copyright © 2016 Elsevier B.V. All rights reserved.
Fast automatic delineation of cardiac volume of interest in MSCT images
NASA Astrophysics Data System (ADS)
Lorenz, Cristian; Lessick, Jonathan; Lavi, Guy; Bulow, Thomas; Renisch, Steffen
2004-05-01
Computed Tomography Angiography (CTA) is an emerging modality for assessing cardiac anatomy. Delineation of the cardiac volume of interest (VOI) is a pre-processing step for subsequent visualization or image processing. It serves to suppress anatomic structures that are not the primary focus of the cardiac application, such as the sternum, ribs, spinal column, descending aorta and pulmonary vasculature. These structures obscure standard visualizations such as direct volume renderings or maximum intensity projections. In addition, the outcome and performance of post-processing steps, such as ventricle suppression, coronary artery segmentation, or detection of the short and long axes of the heart, can be improved. The structures that make up the cardiac VOI (coronary arteries and veins, myocardium, ventricles and atria) differ tremendously in appearance. Moreover, there is no clear image feature associated with the contour (or, more precisely, cut-surface) distinguishing the cardiac VOI from surrounding tissue, which makes automatic delineation of the cardiac VOI a difficult task. The presented approach first locates the chest wall and descending aorta in all image slices, giving a rough estimate of the location of the heart. In a second step, a Fourier-based active contour approach delineates the border of the cardiac VOI slice by slice. The algorithm has been evaluated on 41 multi-slice CT datasets, including cases with coronary stents and venous and arterial bypasses. The typical processing time amounts to 5-10 s on a 1-GHz P3 PC.
Object-oriented approach to the automatic segmentation of bones from pediatric hand radiographs
NASA Astrophysics Data System (ADS)
Shim, Hyeonjoon; Liu, Brent J.; Taira, Ricky K.; Hall, Theodore R.
1997-04-01
The purpose of this paper is to develop a robust and accurate method that automatically segments phalangeal and epiphyseal bones from digital pediatric hand radiographs exhibiting various stages of growth. The development of this system draws on principles from object-oriented design, model-guided analysis, and feedback control. A system architecture called 'the object segmentation machine' was implemented incorporating these design philosophies. The system is aided by a knowledge base in which all model contours and other information, such as age, race, and sex, are stored. These models include object structure models, shape models, 1-D wrist profiles, and gray-level histogram models. Shape analysis is performed first, using an arc-length orientation transform to break down a given contour into elementary segments and curves. An interpretation tree then serves as an inference engine to map known model contour segments to data contour segments obtained from the transform. Spatial and anatomical relationships among contour segments act as constraints from the shape model. These constraints aid in generating a list of candidate matches, and the candidate match with the highest confidence is chosen as the current intermediate result. Verification of intermediate results is performed by a feedback control loop.
Automatic classification of canine PRG neuronal discharge patterns using K-means clustering.
Zuperku, Edward J; Prkic, Ivana; Stucke, Astrid G; Miller, Justin R; Hopp, Francis A; Stuth, Eckehard A
2015-02-01
Respiratory-related neurons in the parabrachial-Kölliker-Fuse (PB-KF) region of the pons play a key role in the control of breathing. The neuronal activities of these pontine respiratory group (PRG) neurons exhibit a variety of inspiratory (I), expiratory (E), phase-spanning and non-respiratory-modulated (NRM) discharge patterns. Due to this variety, it can be difficult to classify them into distinct subgroups according to their discharge contours. This report presents a method that automatically classifies neurons according to their discharge patterns and derives an average subgroup contour for each class. It is based on the K-means clustering technique and is implemented via SigmaPlot User-Defined transform scripts. The discharge patterns of 135 canine PRG neurons were classified into seven distinct subgroups. Additional methods for choosing the optimal number of clusters are described. Analysis of the results suggests that the K-means clustering method offers a robust, objective means of both automatically categorizing neuron patterns and establishing the underlying archetypical contours of subtypes based on the discharge patterns of groups of neurons. Published by Elsevier B.V.
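The K-means classification of discharge contours can be sketched with scikit-learn on synthetic ramp-shaped patterns (two invented archetypes stand in for the seven subgroups found in the paper; the cluster centers play the role of the average subgroup contours):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 50)
# hypothetical cycle-triggered discharge contours: ramp-up ("inspiratory-like")
# versus ramp-down ("expiratory-like") firing patterns, plus noise
ramp_up = np.tile(t, (20, 1)) + rng.normal(0, 0.05, (20, 50))
ramp_down = np.tile(1 - t, (20, 1)) + rng.normal(0, 0.05, (20, 50))
X = np.vstack([ramp_up, ramp_down])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
archetypes = km.cluster_centers_        # average subgroup contour of each class
print(sorted(np.bincount(km.labels_).tolist()))
```

Choosing the number of clusters (seven in the paper) would be handled separately, e.g. by elbow or silhouette criteria over candidate `n_clusters` values.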
Bercovich, A; Edan, Y; Alchanatis, V; Moallem, U; Parmet, Y; Honig, H; Maltz, E; Antler, A; Halachmi, I
2013-01-01
Body condition evaluation is a common tool for assessing the energy reserves of dairy cows and estimating their fatness or thinness. This study presents a computer-vision tool that automatically estimates a cow's body condition score. Top-view images of 151 cows were collected on an Israeli research dairy farm using a digital still camera located at the entrance to the milking parlor. The cow's tailhead area and its contour were segmented and extracted automatically. Two types of features of the tailhead contour were extracted: (1) the angles and distances between 5 anatomical points; and (2) the cow signature, a 1-dimensional vector of the Euclidean distances from each point on the normalized tailhead contour to the shape center. Two methods were applied to describe the cow signature and reduce its dimension: (1) partial least squares regression, and (2) Fourier descriptors of the cow signature. Three prediction models were compared with the manual scores of an expert. Results indicate that (1) it is possible to automatically extract and predict body condition from color images without any manual interference; and (2) Fourier descriptors of the cow signature give improved performance (R² = 0.77). Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
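Fourier descriptors of a 1-D contour signature, as used here for the cow signature, can be sketched as the low-order magnitudes of its FFT (the three-lobed signature below is synthetic, and the paper's exact normalization is omitted):

```python
import numpy as np

def fourier_descriptors(signature, n_coeffs=8):
    """Low-order Fourier descriptors of a 1-D contour signature.

    Magnitudes are invariant to rotation of the contour's start point,
    which is why they suit shape description.
    """
    F = np.fft.rfft(signature - signature.mean())   # drop the DC (mean radius) term
    return np.abs(F[1:n_coeffs + 1]) / len(signature)

theta = np.linspace(0, 2 * np.pi, 128, endpoint=False)
sig = 10 + 2 * np.cos(3 * theta)        # synthetic 3-lobed "tailhead" signature
fd = fourier_descriptors(sig)
print(int(np.argmax(fd)) + 1)           # dominant harmonic
```

The resulting low-dimensional descriptor vector is what would be fed to a regression model against expert body condition scores.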
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopal, A; Xu, H; Chen, S
Purpose: To compare the contour propagation accuracy of two deformable image registration (DIR) algorithms in the RayStation treatment planning system: the "Hybrid" algorithm, based on image intensities and anatomical information, and the "Biomechanical" algorithm, based on linear anatomical elasticity and finite element modeling. Methods: Both DIR algorithms were used for CT-to-CT deformation for 20 lung radiation therapy patients who underwent treatment plan revisions. Deformation accuracy was evaluated using landmark tracking to measure the target registration error (TRE) and inverse consistency error (ICE). The deformed contours were also evaluated against physician-drawn contours using Dice similarity coefficients (DSC). Contour propagation was qualitatively assessed using a visual quality score (VQS) assigned by physicians and a refinement quality score (RQS). Results: … DSC > 0.9 for lungs, > 0.85 for heart, and > 0.8 for liver, with similar qualitative assessments (VQS < 0.35, RQS > 0.75 for lungs). When anatomical structures were used to control the deformation, the DSC improved more significantly for the biomechanical DIR than for the hybrid DIR, while the VQS and RQS improved only for the controlling structures. However, while the inclusion of controlling structures improved the TRE for the hybrid DIR, it increased the TRE for the biomechanical DIR. Conclusion: The hybrid DIR was found to perform slightly better than the biomechanical DIR based on lower TRE, while the DSC, VQS, and RQS studies yielded comparable results for both. The use of controlling structures showed considerable improvement in the hybrid DIR results and is recommended for clinical use in contour propagation.
Altschuler, Ted S.; Molholm, Sophie; Butler, John S.; Mercier, Manuel R.; Brandwein, Alice B.; Foxe, John J.
2014-01-01
The adult human visual system can efficiently fill in missing object boundaries when low-level information from the retina is incomplete, but little is known about how these processes develop across childhood. A decade of visual-evoked potential (VEP) studies has produced a theoretical model identifying distinct phases of contour completion in adults. The first, termed a perceptual phase, occurs from approximately 100-200 ms and is associated with automatic boundary completion. The second, termed a conceptual phase, occurs between 230-400 ms and has been associated with the analysis of ambiguous objects, which seem to require more effort to complete. The electrophysiological markers of both phases have been localized to the lateral occipital complex, a cluster of ventral visual stream brain regions associated with object processing. We presented Kanizsa-type illusory contour stimuli, often used for exploring contour completion processes, to neurotypical persons ages 6-31 (N = 63), while parametrically varying the spatial extent of the induced contours, in order to better understand how filling-in processes develop across childhood and adolescence. Our results suggest that, while adults complete contour boundaries in a single discrete period during the automatic perceptual phase, children display an immature response pattern, engaging in more protracted processing across both timeframes and appearing to recruit more widely distributed regions that resemble those evoked during adult processing of higher-order ambiguous figures. However, children older than 5 years of age were remarkably adult-like in that the effects of contour processing were invariant to manipulation of contour extent. PMID:24365674
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Joe H.; University of Melbourne, Victoria; Lim Joon, Daryl
2015-06-01
Purpose: The purpose of this study was to compare the accuracy of [{sup 11}C]choline positron emission tomography (CHOL-PET) with that of the combination of T2-weighted and diffusion-weighted (T2W/DW) magnetic resonance imaging (MRI) for delineating malignant intraprostatic lesions (IPLs) for guiding focal therapies and to investigate factors predicting the accuracy of CHOL-PET. Methods and Materials: This study included 21 patients who underwent CHOL-PET and T2W/DW MRI prior to radical prostatectomy. Two observers manually delineated IPL contours for each scan, and automatic IPL contours were generated on CHOL-PET based on varying proportions of the maximum standardized uptake value (SUV). IPLs identified on prostatectomy specimens defined reference standard contours. The imaging-based contours were compared with the reference standard contours using Dice similarity coefficient (DSC), and sensitivity and specificity values. Factors that could potentially predict the DSC of the best contouring method were analyzed using linear models. Results: The best automatic contouring method, 60% of the maximum SUV (SUV{sub 60}), had similar correlations (DSC: 0.59) with the manual PET contours (DSC: 0.52, P=.127) and significantly better correlations than the manual MRI contours (DSC: 0.37, P<.001). The sensitivity and specificity values were 72% and 71% for SUV{sub 60}; 53% and 86% for PET manual contouring; and 28% and 92% for MRI manual contouring. The tumor volume and transition zone pattern could independently predict the accuracy of CHOL-PET. Conclusions: CHOL-PET is superior to the combination of T2W/DW MRI for delineating IPLs. The accuracy of CHOL-PET is insufficient for gland-sparing focal therapies but may be accurate enough for focal boost therapies. The transition zone pattern is a new classification that may predict how well CHOL-PET delineates IPLs.
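The SUV{sub 60} automatic contour described above is a fixed-fraction threshold of the maximum standardized uptake value. A minimal sketch on a toy uptake grid (the SUV values are hypothetical, for illustration only):

```python
def suv_threshold_contour(suv, fraction=0.6):
    """Return a binary mask of voxels whose uptake is at least
    `fraction` of the maximum SUV in the image (e.g. SUV60)."""
    peak = max(max(row) for row in suv)
    cut = fraction * peak
    return [[v >= cut for v in row] for row in suv]

# Hypothetical 3x3 SUV grid; max SUV is 10.0, so the cut is 6.0
suv = [[1.0, 2.0, 8.0],
       [0.5, 6.0, 10.0],
       [0.2, 1.0, 4.0]]
mask = suv_threshold_contour(suv)
print(mask[1])  # [False, True, True]
```

Real PET contouring tools apply the same rule in 3D, usually restricted to a region of interest around the lesion so distant high-uptake organs do not set the peak.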
Automatic Segmenting Structures in MRI's Based on Texture Analysis and Fuzzy Logic
NASA Astrophysics Data System (ADS)
Kaur, Mandeep; Rattan, Munish; Singh, Pushpinder
2017-12-01
The purpose of this paper is to present a variational method for geometric contours that keeps the level set function close to a signed distance function, thereby removing the need for the expensive re-initialization procedure. The level set method is then applied to magnetic resonance images (MRI) to track irregularities in them, since medical imaging plays a substantial part in the treatment, therapy and diagnosis of various organs, tumors and abnormalities, favoring the patient with speedier and more decisive disease control and fewer side effects. The geometrical shape, the tumor's size and abnormal tissue growth can be calculated by segmentation of the image. Fully automatic segmentation in medical imaging remains a great challenge for researchers. Based on texture analysis, different images are processed by optimizing the level set segmentation. Traditionally, this optimization was manual for every image, with each parameter selected one after another. By applying fuzzy logic, the segmentation of an image is driven by its texture features, making the process automatic and more effective. No initialization of parameters is required, and the system behaves like an intelligent one: it segments different MRI images without tuning the level set parameters and gives optimized results for all MRIs.
A segmentation editing framework based on shape change statistics
NASA Astrophysics Data System (ADS)
Mostapha, Mahmoud; Vicory, Jared; Styner, Martin; Pizer, Stephen
2017-02-01
Segmentation is a key task in medical image analysis because its accuracy significantly affects successive steps. Automatic segmentation methods often produce inadequate segmentations, which require the user to manually edit the produced segmentation slice by slice. Because editing is time-consuming, an editing tool that enables the user to produce accurate segmentations by only drawing a sparse set of contours would be needed. This paper describes such a framework as applied to a single object. Constrained by the additional information enabled by the manually segmented contours, the proposed framework utilizes object shape statistics to transform the failed automatic segmentation to a more accurate version. Instead of modeling the object shape, the proposed framework utilizes shape change statistics that were generated to capture the object deformation from the failed automatic segmentation to its corresponding correct segmentation. An optimization procedure was used to minimize an energy function that consists of two terms, an external contour match term and an internal shape change regularity term. The high accuracy of the proposed segmentation editing approach was confirmed by testing it on a simulated data set based on 10 in-vivo infant magnetic resonance brain data sets using four similarity metrics. Segmentation results indicated that our method can provide efficient and adequately accurate segmentations (Dice segmentation accuracy increase of 10%), with very sparse contours (only 10%), which is promising in greatly decreasing the work expected from the user.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paiton, Dylan M.; Kenyon, Garrett T.; Brumby, Steven P.
An approach to detecting objects in an image dataset may combine texture/color detection, shape/contour detection, and/or motion detection using sparse, generative, hierarchical models with lateral and top-down connections. A first independent representation of objects in an image dataset may be produced using a color/texture detection algorithm. A second independent representation of objects in the image dataset may be produced using a shape/contour detection algorithm. A third independent representation of objects in the image dataset may be produced using a motion detection algorithm. The first, second, and third independent representations may then be combined into a single coherent output using a combinatorial algorithm.
Interactive 3D segmentation using connected orthogonal contours.
de Bruin, P W; Dercksen, V J; Post, F H; Vossepoel, A M; Streekstra, G J; Vos, F M
2005-05-01
This paper describes a new method for interactive segmentation that is based on cross-sectional design and 3D modelling. The method represents a 3D model by a set of connected contours that are planar and orthogonal. Planar contours overlaid on image data are easily manipulated, and linked contours reduce the amount of user interaction. This method solves the contour-to-contour correspondence problem and can capture extrema of objects in a more flexible way than manual segmentation of a stack of 2D images. The resulting 3D model is guaranteed to be free of geometric and topological errors. We show that manual segmentation using connected orthogonal contours has great advantages over conventional manual segmentation. Furthermore, the method provides effective feedback and control for creating an initial model for, and control and steering of, (semi-)automatic segmentation methods.
Rayarao, Geetha; Biederman, Robert W W; Williams, Ronald B; Yamrozik, June A; Lombardi, Richard; Doyle, Mark
2018-01-01
To establish the clinical validity and accuracy of automatic thresholding and manual trimming (ATMT) by comparing the method with the conventional contouring method for in vivo cardiac volume measurements. CMR was performed on 40 subjects (30 patients and 10 controls) using steady-state free precession cine sequences with slices oriented in the short-axis and acquired contiguously from base to apex. Left ventricular (LV) volumes, end-diastolic volume, end-systolic volume, and stroke volume (SV) were obtained with ATMT and with the conventional contouring method. Additionally, SV was measured independently using CMR phase velocity mapping (PVM) of the aorta for validation. Three methods of calculating SV were compared by applying Bland-Altman analysis. The Bland-Altman standard deviation of variation (SD) and offset bias for LV SV for the three sets of data were: ATMT-PVM (7.65, [Formula: see text]), ATMT-contours (7.85, [Formula: see text]), and contour-PVM (11.01, 4.97), respectively. Equating the observed range to the error contribution of each approach, the error magnitude of ATMT:PVM:contours was in the ratio 1:2.4:2.5. Use of ATMT for measuring ventricular volumes accommodates trabeculae and papillary structures more intuitively than contemporary contouring methods. This results in lower variation when analyzing cardiac structure and function and consequently improved accuracy in assessing chamber volumes.
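The record above compares the three stroke-volume methods with Bland-Altman analysis, which summarizes method agreement by the mean and spread of paired differences. A minimal sketch (the stroke-volume values below are hypothetical, not the study's data):

```python
from statistics import mean, stdev

def bland_altman(x, y):
    """Bland-Altman agreement between two measurement methods.

    Returns (bias, sd): the mean and sample standard deviation of the
    paired differences; the limits of agreement are bias +/- 1.96*sd.
    """
    diffs = [a - b for a, b in zip(x, y)]
    return mean(diffs), stdev(diffs)

# Hypothetical left-ventricular stroke volumes (mL) from two methods
sv_a = [70.0, 82.0, 65.0, 90.0, 77.0]
sv_b = [68.0, 85.0, 66.0, 87.0, 75.0]
bias, sd = bland_altman(sv_a, sv_b)
print(round(bias, 2), round(sd, 2))  # 0.6 2.51
```

A smaller sd indicates tighter agreement between the two methods, which is how the abstract ranks ATMT-PVM ahead of the contour-based pairings.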
Automated detection of breast cancer in resected specimens with fluorescence lifetime imaging
NASA Astrophysics Data System (ADS)
Phipps, Jennifer E.; Gorpas, Dimitris; Unger, Jakob; Darrow, Morgan; Bold, Richard J.; Marcu, Laura
2018-01-01
Re-excision rates for breast cancer lumpectomy procedures are currently nearly 25% due to surgeons relying on inaccurate or incomplete methods of evaluating specimen margins. The objective of this study was to determine if cancer could be automatically detected in breast specimens from mastectomy and lumpectomy procedures by a classification algorithm that incorporated parameters derived from fluorescence lifetime imaging (FLIm). This study generated a database of co-registered histologic sections and FLIm data from breast cancer specimens (N = 20) and a support vector machine (SVM) classification algorithm able to automatically detect cancerous, fibrous, and adipose breast tissue. Classification accuracies were greater than 97% for automated detection of cancerous, fibrous, and adipose tissue from breast cancer specimens. The classification worked equally well for specimens scanned by hand or with a mechanical stage, demonstrating that the system could be used during surgery or on excised specimens. The ability of this technique to simply discriminate between cancerous and normal breast tissue, in particular to distinguish fibrous breast tissue from tumor, which is notoriously challenging for optical techniques, leads to the conclusion that FLIm has great potential to assess breast cancer margins. Identification of positive margins before waiting for complete histologic analysis could significantly reduce breast cancer re-excision rates.
Liu, Ruijie Rachel; Erwin, William D
2006-08-01
An algorithm was developed to estimate noncircular orbit (NCO) single-photon emission computed tomography (SPECT) detector radius on a SPECT/CT imaging system using the CT images, for incorporation into collimator resolution modeling for iterative SPECT reconstruction. Simulated male abdominal (arms up), male head and neck (arms down) and female chest (arms down) anthropomorphic phantom, and ten patient, medium-energy SPECT/CT scans were acquired on a hybrid imaging system. The algorithm simulated inward SPECT detector radial motion and object contour detection at each projection angle, employing the calculated average CT image and a fixed Hounsfield unit (HU) threshold. Calculated radii were compared to the observed true radii, and optimal CT threshold values, corresponding to patient bed and clothing surfaces, were found to be between -970 and -950 HU. The algorithm was constrained by the 45 cm CT field-of-view (FOV), which limited the detected radii to < or = 22.5 cm and led to occasional radius underestimation in the case of object truncation by CT. Two methods incorporating the algorithm were implemented: physical model (PM) and best fit (BF). The PM method computed an offset that produced maximum overlap of calculated and true radii for the phantom scans, and applied that offset as a calculated-to-true radius transformation. For the BF method, the calculated-to-true radius transformation was based upon a linear regression between calculated and true radii. For the PM method, a fixed offset of +2.75 cm provided maximum calculated-to-true radius overlap for the phantom study, which accounted for the camera system's object contour detect sensor surface-to-detector face distance. For the BF method, a linear regression of true versus calculated radius from a reference patient scan was used as a calculated-to-true radius transform. Both methods were applied to ten patient scans. 
For -970 and -950 HU thresholds, the combined overall average root-mean-square (rms) error in radial position for eight patient scans without truncation was 3.37 cm (12.9%) for PM and 1.99 cm (8.6%) for BF, indicating BF is superior to PM in the absence of truncation. For two patient scans with truncation, the rms error was 3.24 cm (12.2%) for PM and 4.10 cm (18.2%) for BF. The slightly better performance of PM in the case of truncation is anomalous, due to FOV edge truncation artifacts in the CT reconstruction, and thus is suspect. The calculated NCO contour for a patient SPECT/CT scan was used with an iterative reconstruction algorithm that incorporated compensation for system resolution. The resulting image was qualitatively superior to the image obtained by reconstructing the data using the fixed radius stored by the scanner. The result was also superior to the image reconstructed using the iterative algorithm provided with the system, which does not incorporate resolution modeling. These results suggest that, under conditions of no or only mild lateral truncation of the CT scan, the algorithm is capable of providing radius estimates suitable for iterative SPECT reconstruction collimator geometric resolution modeling.
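The core of the algorithm described above — simulating inward radial detector motion at each projection angle until a fixed HU threshold is crossed — can be sketched as follows. The phantom, geometry, and step size are illustrative assumptions, not the paper's implementation:

```python
import math

def detect_radius(hu_image, center, angle_deg, max_radius,
                  threshold=-960.0, step=0.5):
    """Simulate inward detector motion at one projection angle.

    Walk inward from max_radius toward the center and stop at the
    first sample whose Hounsfield value exceeds `threshold` (i.e.
    the patient/bed/clothing surface). hu_image(x, y) is a callable
    returning HU at a physical position; distances are in cm.
    """
    ang = math.radians(angle_deg)
    r = max_radius
    while r > 0:
        x = center[0] + r * math.cos(ang)
        y = center[1] + r * math.sin(ang)
        if hu_image(x, y) > threshold:
            return r
        r -= step
    return 0.0

# Hypothetical phantom: air (-1000 HU) outside a 15 cm circle of water (0 HU)
phantom = lambda x, y: 0.0 if math.hypot(x, y) <= 15.0 else -1000.0
r = detect_radius(phantom, (0.0, 0.0), 0.0, 22.5)
print(r)  # 15.0
```

Repeating this at every projection angle yields the calculated noncircular-orbit contour, to which the PM offset or BF linear regression would then be applied.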
Valente, João; Vieira, Pedro M; Couto, Carlos; Lima, Carlos S
2018-02-01
Poor brain extraction in Magnetic Resonance Imaging (MRI) has negative consequences for several types of post-extraction processing, such as tissue segmentation and related statistical measures or pattern recognition algorithms. Current state-of-the-art algorithms for brain extraction work on T1- and T2-weighted images and are not adequate for non-whole-brain images such as T2*FLASH@7T partial volumes. This paper proposes two new methods that work directly on T2*FLASH@7T partial volumes. The first is an improvement of the semi-automatic threshold-with-morphology approach adapted to incomplete volumes. The second method uses an improved version of a current implementation of the fuzzy c-means algorithm with bias correction for brain segmentation. Under high inhomogeneity conditions the performance of the first method degrades, requiring user intervention, which is unacceptable. The second method performed well for all volumes, being entirely automatic. State-of-the-art algorithms for brain extraction are mainly semi-automatic, requiring a correct initialization by the user and knowledge of the software. These methods cannot deal with partial volumes and/or need information from an atlas, which is not available in T2*FLASH@7T. Also, combined volumes suffer from manipulations such as re-sampling, which significantly deteriorates voxel intensity structures, making segmentation tasks difficult. The proposed method can overcome all these difficulties, reaching good results for brain extraction using only T2*FLASH@7T volumes. The development of this work will lead to an improvement of automatic brain lesion segmentation in T2*FLASH@7T volumes, which becomes more important when lesions such as cortical Multiple Sclerosis lesions need to be detected. Copyright © 2017 Elsevier B.V. All rights reserved.
An adaptive DPCM algorithm for predicting contours in NTSC composite video signals
NASA Astrophysics Data System (ADS)
Cox, N. R.
An adaptive DPCM algorithm is proposed for encoding digitized National Television Systems Committee (NTSC) color video signals. This algorithm essentially predicts picture contours in the composite signal without resorting to component separation. The contour parameters (slope thresholds) are optimized using four 'typical' television frames that have been sampled at three times the color subcarrier frequency. Three variations of the basic predictor are simulated and compared quantitatively with three non-adaptive predictors of similar complexity. By incorporating a dual-word-length coder and buffer memory, high quality color pictures can be encoded at 4.0 bits/pel or 42.95 Mbit/s. The effect of channel error propagation is also investigated.
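To make the DPCM idea in the record above concrete, here is a toy first-order DPCM codec: predict each sample from the previous reconstructed sample and transmit the quantized residual. This is a deliberately simple sketch, not the paper's adaptive contour predictor (which switches predictors based on slope thresholds in the composite signal):

```python
def dpcm_encode(samples, q=4):
    """Toy DPCM encoder: predict each sample as the previous
    *reconstructed* sample and quantize the prediction error to
    multiples of q, so encoder and decoder stay in lockstep."""
    pred, codes = 0, []
    for s in samples:
        e = s - pred
        c = round(e / q)          # quantized residual (the "code")
        codes.append(c)
        pred = pred + c * q       # decoder-matched reconstruction
    return codes

def dpcm_decode(codes, q=4):
    """Rebuild the sample sequence from the quantized residuals."""
    pred, out = 0, []
    for c in codes:
        pred = pred + c * q
        out.append(pred)
    return out

sig = [10, 12, 15, 20, 18]
codes = dpcm_encode(sig)
print(codes, dpcm_decode(codes))
```

Each reconstructed sample stays within one quantization step of the input; an adaptive scheme improves on this by changing the predictor near sharp contours, where a fixed predictor's residuals grow large.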
Automatic generation of investigator bibliographies for institutional research networking systems.
Johnson, Stephen B; Bales, Michael E; Dine, Daniel; Bakken, Suzanne; Albert, Paul J; Weng, Chunhua
2014-10-01
Publications are a key data source for investigator profiles and research networking systems. We developed ReCiter, an algorithm that automatically extracts bibliographies from PubMed using institutional information about the target investigators. ReCiter executes a broad query against PubMed, groups the results into clusters that appear to constitute distinct author identities and selects the cluster that best matches the target investigator. Using information about investigators from one of our institutions, we compared ReCiter results to queries based on author name and institution and to citations extracted manually from the Scopus database. Five judges created a gold standard using citations of a random sample of 200 investigators. About half of the 10,471 potential investigators had no matching citations in PubMed, and about 45% had fewer than 70 citations. Interrater agreement (Fleiss' kappa) for the gold standard was 0.81. Scopus achieved the best recall (sensitivity) of 0.81, while name-based queries had 0.78 and ReCiter had 0.69. ReCiter attained the best precision (positive predictive value) of 0.93 while Scopus had 0.85 and name-based queries had 0.31. ReCiter accesses the most current citation data, uses limited computational resources and minimizes manual entry by investigators. Generation of bibliographies using name-based queries will not yield high accuracy. Proprietary databases can perform well but require manual effort. Automated generation with higher recall is possible but requires additional knowledge about investigators. Copyright © 2014 Elsevier Inc. All rights reserved.
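The precision and recall figures reported above compare a retrieved citation set against a gold-standard set. A minimal sketch with hypothetical PubMed IDs:

```python
def precision_recall(retrieved, relevant):
    """Precision and recall of a retrieved citation set against a
    gold-standard set (both sets of e.g. PubMed IDs)."""
    tp = len(retrieved & relevant)                       # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical gold standard of 10 citations; the query returned 8,
# of which 7 are correct and 1 is a false positive.
gold = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
hits = {1, 2, 3, 4, 5, 6, 7, 99}
p, r = precision_recall(hits, gold)
print(p, r)  # 0.875 0.7
```

The trade-off in the abstract is visible in this form: a broad name-based query inflates `retrieved` and thus recall at the cost of precision, while ReCiter's cluster selection shrinks `retrieved` toward true matches.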
Fedotchev, A I
2010-01-01
A promising approach to non-pharmacological correction of stress-induced functional disorders in humans, based on double negative feedback from the patient's EEG, was validated and experimentally tested. The approach implies the simultaneous use of narrow-frequency EEG oscillators, characteristic of each patient and recorded in real time, in two independent negative-feedback contours: the traditional contour of adaptive biomanagement and an additional contour of resonance stimulation. In the latter, the negative-feedback signals from individual narrow-frequency EEG oscillators are not consciously recognized by the subject, but serve for automatic modulation of the parameters of the sensory impact. It was shown that combining active (conscious perception) and passive (automatic modulation) use of negative-feedback signals from the patient's narrow-frequency EEG components opens the possibility of considerably increasing the efficiency of EEG biomanagement procedures.
User-assisted video segmentation system for visual communication
NASA Astrophysics Data System (ADS)
Wu, Zhengping; Chen, Chun
2002-01-01
Video segmentation plays an important role for efficient storage and transmission in visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by the results from the study of the human visual system, we intend to solve the video segmentation problem into three separate phases: user-assisted feature points selection, feature points' automatic tracking, and contour formation. This splitting relieves the computer of ill-posed automatic segmentation problems, and allows a higher level of flexibility of the method. First, the precise feature points can be found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. At last, contour formation is used to extract the object, and plus a point insertion process to provide the feature points for next frame's tracking.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, W; Wu, Q; Yuan, L
Purpose: To improve the robustness of a knowledge-based automatic lung IMRT planning method and to further validate the reliability of this algorithm by utilizing it for the planning of clinical cases with non-coplanar beams. Methods: A lung IMRT planning method which automatically determines both plan optimization objectives and beam configurations with non-coplanar beams has been reported previously. A beam efficiency index map is constructed to guide beam angle selection in this algorithm. This index takes into account both the dose contributions from individual beams and the combined effect of multiple beams, which is represented by a beam separation score. We studied the effect of this beam separation score on plan quality and determined the optimal weight for this score. 14 clinical plans were re-planned with the knowledge-based algorithm. Significant dosimetric metrics for the PTV and OARs in the automatic plans were compared with those in the clinical plans by the two-sample t-test. In addition, a composite dosimetric quality index was defined to obtain the relationship between the plan quality and the beam separation score. Results: On average, we observed more than 15% reduction in conformity index and homogeneity index for PTV and in V{sub 40}, V{sub 60} for heart, with an 8% and 3% increase in V{sub 5}, V{sub 20} for lungs, respectively. The variation curve of the composite index as a function of angle spread score shows that 0.6 is the best value for the weight of the beam separation score. Conclusion: The optimal value for the beam angle spread score in automatic lung IMRT planning was obtained. With this value, the model can produce statistically the “best” achievable plans. This method can potentially improve the quality and planning efficiency of IMRT plans with non-coplanar angles.
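The abstract above does not give the functional form of the beam efficiency index, only that the separation score enters with an optimal weight of 0.6. As a purely hypothetical sketch of how such a weighted combination could rank candidate beam angles (the scores and the additive form are assumptions, not the authors' model):

```python
def beam_index(dose_scores, separation_scores, w_sep=0.6):
    """Hypothetical beam efficiency index: per-angle dose-contribution
    score plus a weighted beam-separation score (weight 0.6 as the
    abstract reports optimal)."""
    return [d + w_sep * s for d, s in zip(dose_scores, separation_scores)]

# Hypothetical scores for three candidate beam angles
dose = [0.80, 0.75, 0.60]
sep = [0.20, 0.50, 0.90]
scores = beam_index(dose, sep)
best = max(range(len(scores)), key=scores.__getitem__)
print(best, round(scores[best], 2))  # 2 1.14
```

With the separation term weighted at 0.6, a well-separated beam can outrank one with a higher individual dose contribution, which is the trade-off the weight sweep in the study optimizes.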
Towards automatic planning for manufacturing generative processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
CALTON,TERRI L.
2000-05-24
Generative process planning describes methods process engineers use to modify manufacturing/process plans after designs are complete. A completed design may result from the introduction of a new product based on an old design, an assembly upgrade, or modified product designs used for a family of similar products. An engineer designs an assembly and then creates plans capturing manufacturing processes, including assembly sequences, component joining methods, part costs, labor costs, etc. When new products originate as a result of an upgrade, component geometry may change, and/or additional components and subassemblies may be added to or are omitted from the original design. As a result, process engineers are forced to create new plans. This is further complicated by the fact that the process engineer is forced to manually generate these plans for each product upgrade. To generate new assembly plans for product upgrades, engineers must manually re-specify the manufacturing plan selection criteria and re-run the planners. To remedy this problem, special-purpose assembly planning algorithms have been developed to automatically recognize design modifications and automatically apply previously defined manufacturing plan selection criteria and constraints.
ARCOCT: Automatic detection of lumen border in intravascular OCT images.
Cheimariotis, Grigorios-Aris; Chatzizisis, Yiannis S; Koutkias, Vassilis G; Toutouzas, Konstantinos; Giannopoulos, Andreas; Riga, Maria; Chouvarda, Ioanna; Antoniadis, Antonios P; Doulaverakis, Charalambos; Tsamboulatidis, Ioannis; Kompatsiaris, Ioannis; Giannoglou, George D; Maglaveras, Nicos
2017-11-01
Intravascular optical coherence tomography (OCT) is an invaluable tool for the detection of pathological features on the arterial wall and the investigation of post-stenting complications. Computational lumen border detection in OCT images is highly advantageous, since it may support rapid morphometric analysis. However, automatic detection is very challenging, since OCT images typically include various artifacts that impact image clarity, including features such as side branches and intraluminal blood presence. This paper presents ARCOCT, a segmentation method for fully-automatic detection of lumen border in OCT images. ARCOCT relies on multiple, consecutive processing steps, accounting for image preparation, contour extraction and refinement. In particular, for contour extraction ARCOCT employs the transformation of OCT images based on physical characteristics such as reflectivity and absorption of the tissue and, for contour refinement, local regression using weighted linear least squares and a 2nd degree polynomial model is employed to achieve artifact and small-branch correction as well as smoothness of the artery mesh. Our major focus was to achieve accurate contour delineation in the various types of OCT images, i.e., even in challenging cases with branches and artifacts. ARCOCT has been assessed in a dataset of 1812 images (308 from stented and 1504 from native segments) obtained from 20 patients. ARCOCT was compared against ground-truth manual segmentation performed by experts on the basis of various geometric features (e.g. area, perimeter, radius, diameter, centroid, etc.) and closed contour matching indicators (the Dice index, the Hausdorff distance and the undirected average distance), using standard statistical analysis methods. The proposed method was proven very efficient and close to the ground-truth, exhibiting no statistically significant differences for most of the examined metrics.
ARCOCT allows accurate and fully-automated lumen border detection in OCT images. Copyright © 2017 Elsevier B.V. All rights reserved.
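Among the contour-matching indicators used above, the Hausdorff distance captures the worst-case disagreement between two contours. A minimal sketch on toy point sets (brute-force, for illustration; real evaluations use sampled contour points in physical units):

```python
def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets
    (contours sampled as (x, y) tuples)."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    def directed(u, v):
        # Worst case over u of the distance to the nearest point of v
        return max(min(dist(p, q) for q in v) for p in u)

    return max(directed(a, b), directed(b, a))

auto = [(0, 0), (1, 0), (1, 1), (0, 1)]
manual = [(0, 0), (1, 0), (1, 1), (0, 3)]
print(hausdorff(auto, manual))  # 2.0
```

Unlike the Dice index, which averages over area, the Hausdorff distance is driven entirely by the single largest local deviation, so it flags localized segmentation failures that overlap metrics can hide.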
Identification of the optic nerve head with genetic algorithms.
Carmona, Enrique J; Rincón, Mariano; García-Feijoó, Julián; Martínez-de-la-Casa, José M
2008-07-01
This work proposes creating an automatic system to locate and segment the optic nerve head (ONH) in eye fundus photographic images using genetic algorithms. Domain knowledge is used to create a set of heuristics that guide the various steps involved in the process. Initially, using an eye fundus colour image as input, a set of hypothesis points was obtained that exhibited geometric properties and intensity levels similar to the ONH contour pixels. Next, a genetic algorithm was used to find an ellipse containing the maximum number of hypothesis points in an offset of its perimeter, considering some constraints. The ellipse thus obtained is the approximation to the ONH. The segmentation method is tested on a sample of 110 eye fundus images, belonging to 55 patients with glaucoma (23.1%) and eye hypertension (76.9%), randomly selected from an eye fundus image base belonging to the Ophthalmology Service at Miguel Servet Hospital, Saragossa (Spain). The results obtained are competitive with those in the literature. The method's generalization capability is reinforced when it is applied to a different image base from the one used in our study and a discrepancy curve is obtained very similar to the one obtained in our image base. In addition, the robustness of the method proposed can be seen in the high percentage of images obtained with a discrepancy delta<5 (96% and 99% in our and a different image base, respectively). The results also confirm the hypothesis that the ONH contour can be properly approached with a non-deformable ellipse. Another important aspect of the method is that it directly provides the parameters characterising the shape of the papilla: lengths of its major and minor axes, its centre of location and its orientation with regard to the horizontal position.
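The fitness the genetic algorithm above maximizes — the number of hypothesis points lying within an offset of the ellipse perimeter — can be sketched as follows. This toy version uses an axis-aligned ellipse and a tolerance on the implicit equation; the paper's GA also evolves orientation and applies additional constraints:

```python
import math

def ellipse_fitness(points, cx, cy, a, b, tol=0.1):
    """Count hypothesis points lying within `tol` of the perimeter of
    the axis-aligned ellipse ((x-cx)/a)^2 + ((y-cy)/b)^2 = 1.
    A GA would evolve (cx, cy, a, b, ...) to maximize this count."""
    hits = 0
    for x, y in points:
        v = ((x - cx) / a) ** 2 + ((y - cy) / b) ** 2
        if abs(v - 1.0) <= tol:
            hits += 1
    return hits

# Points sampled on an ellipse centered at the origin (a=2, b=1),
# plus one outlier that should not be counted
pts = [(2 * math.cos(t), math.sin(t)) for t in (0, 1, 2, 3, 4)] + [(5, 5)]
print(ellipse_fitness(pts, 0, 0, 2, 1))  # 5
```

Because the fitness is a simple count, it is cheap to evaluate for each candidate ellipse in the population, which is what makes a GA search over ellipse parameters practical.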
Drechsler, Axel; Helling, Tobias; Steinfartz, Sebastian
2015-01-01
Capture–mark–recapture (CMR) approaches are the backbone of many studies in population ecology to gain insight on the life cycle, migration, habitat use, and demography of target species. The reliable and repeatable recognition of an individual throughout its lifetime is the basic requirement of a CMR study. Although invasive techniques are available to mark individuals permanently, noninvasive methods for individual recognition mainly rest on photographic identification of external body markings, which are unique at the individual level. The re-identification of an individual based on comparing shape patterns of photographs by eye is commonly used. Automated processes for photographic re-identification have been recently established, but their performance in large datasets (i.e., > 1000 individuals) has rarely been tested thoroughly. Here, we evaluated the performance of the program AMPHIDENT, an automatic algorithm to identify individuals on the basis of ventral spot patterns in the great crested newt (Triturus cristatus) versus the genotypic fingerprint of individuals based on highly polymorphic microsatellite loci using GENECAP. Between 2008 and 2010, we captured, sampled and photographed adult newts and calculated for 1648 samples/photographs recapture rates for both approaches. Recapture rates differed slightly with 8.34% for GENECAP and 9.83% for AMPHIDENT. With an estimated rate of 2% false rejections (FRR) and 0.00% false acceptances (FAR), AMPHIDENT proved to be a highly reliable algorithm for CMR studies of large datasets. We conclude that the application of automatic recognition software of individual photographs can be a rather powerful and reliable tool in noninvasive CMR studies for a large number of individuals. Because the cross-correlation of standardized shape patterns is generally applicable to any pattern that provides enough information, this algorithm is capable of becoming a single application with broad use in CMR studies for many species. 
PMID:25628871
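The cross-correlation of standardized shape patterns described above can be sketched as a normalized cross-correlation score between pattern images; the helper names and the acceptance threshold below are illustrative assumptions, not part of AMPHIDENT:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized, standardized
    pattern arrays; returns a score in [-1, 1]."""
    a = np.asarray(a, dtype=float).ravel() - np.asarray(a, dtype=float).mean()
    b = np.asarray(b, dtype=float).ravel() - np.asarray(b, dtype=float).mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

def best_match(query, gallery, threshold=0.8):
    """Return the index of the most similar stored pattern, or None when the
    best score stays below the (hypothetical) acceptance threshold."""
    scores = [ncc(query, g) for g in gallery]
    i = int(np.argmax(scores))
    return i if scores[i] >= threshold else None
```

A score of 1.0 indicates an identical standardized pattern (a recapture); returning None corresponds to treating the animal as a new individual.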
Clinical evaluation of atlas and deep learning based automatic contouring for lung cancer.
Lustberg, Tim; van Soest, Johan; Gooding, Mark; Peressutti, Devis; Aljabar, Paul; van der Stoep, Judith; van Elmpt, Wouter; Dekker, Andre
2018-02-01
Contouring of organs at risk (OARs) is an important but time-consuming part of radiotherapy treatment planning. The aim of this study was to investigate whether using institutionally customized, software-generated contours as a starting point for manual OAR contouring saves time for lung cancer patients. Twenty CT scans of stage I-III NSCLC patients were used to compare user-adjusted contours, initialized by either atlas-based or deep learning contouring, against manual delineation. The lungs, esophagus, spinal cord, heart and mediastinum were contoured for this study. The time to perform the manual tasks was recorded. With a median time of 20 min for manual contouring, the total median time saved was 7.8 min when using atlas-based contouring and 10 min for deep learning contouring. Both atlas-based and deep learning adjustment times were significantly lower than the manual contouring time for all OARs except the left lung and esophagus of the atlas-based contouring. User adjustment of software-generated contours is a viable strategy to reduce the OAR contouring time for lung radiotherapy while conforming to local clinical standards. In addition, deep learning contouring shows promising results compared with existing solutions. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
SU-E-J-224: Multimodality Segmentation of Head and Neck Tumors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aristophanous, M; Yang, J; Beadle, B
2014-06-01
Purpose: Develop an algorithm that is able to automatically segment tumor volume in Head and Neck cancer by integrating information from CT, PET and MR imaging simultaneously. Methods: Twenty-three patients recruited under an adaptive radiotherapy protocol had MR, CT and PET/CT scans within 2 months prior to the start of radiotherapy. The patients had unresectable disease and were treated either with chemoradiotherapy or radiation therapy alone. Using the Velocity software, the PET/CT and MR (T1 weighted + contrast) scans were registered to the planning CT using deformable and rigid registration, respectively. The PET and MR images were then resampled according to the registration to match the planning CT. The resampled images, together with the planning CT, were fed into a multi-channel segmentation algorithm, which is based on Gaussian mixture models and solved with the expectation-maximization algorithm and Markov random fields. A rectangular region of interest (ROI) was manually placed to identify the tumor area and facilitate the segmentation process. The auto-segmented tumor contours were compared with the gross tumor volume (GTV) manually defined by the physician. The volume difference and Dice similarity coefficient (DSC) between the manual and auto-segmented GTV contours were calculated as the quantitative evaluation metrics. Results: The multimodality segmentation algorithm was applied to all 23 patients. The volumes of the auto-segmented GTV ranged from 18.4cc to 32.8cc. The average (range) volume difference between the manual and auto-segmented GTV was −42% (−32.8% to 63.8%). The average DSC value was 0.62, ranging from 0.39 to 0.78. Conclusion: An algorithm for the automated definition of tumor volume using multiple imaging modalities simultaneously was successfully developed and implemented for Head and Neck cancer.
This development, along with more accurate registration algorithms, can aid physicians in their efforts to interpret the multitude of imaging information available in radiotherapy today. This project was supported by a grant from Varian Medical Systems.
NASA Astrophysics Data System (ADS)
Adame, Isabel M.; van der Geest, Rob J.; Wasserman, Bruce A.; Mohamed, Mona; Reiber, Johan H. C.; Lelieveldt, Boudewijn P. F.
2004-05-01
The composition and structure of atherosclerotic plaque is a primary focus of cardiovascular research. In vivo MRI provides a means to non-invasively image and assess the morphological features of atherosclerotic and normal human carotid arteries. To quantitatively assess the vulnerability and type of plaque, the contours of the lumen, the outer boundary of the vessel wall, and the plaque components need to be traced. To achieve this goal, we have developed an automated contour detection technique that consists of three consecutive steps. First, the outer boundary of the vessel wall is detected by means of an ellipse-fitting procedure in order to obtain smoothed shapes. Second, the lumen is segmented using fuzzy clustering; the region to be classified is that within the outer vessel wall boundary obtained from the previous step. Finally, plaque detection follows the same fuzzy clustering approach as lumen segmentation. However, plaque is more difficult to segment, as the pixel gray value can differ considerably from one region to another even when it corresponds to the same type of tissue, which makes further processing necessary. All three steps may combine information from different sequences (PD-, T2-, T1-weighted images, pre- and post-contrast) to improve the contour detection. The algorithm has been validated in vivo on 58 high-resolution PD- and T1-weighted MR images (19 patients). The results demonstrate excellent correspondence between automatic and manual area measurements for the lumen (r=0.94) and outer wall (r=0.92), and acceptable correspondence for fibrous cap thickness (r=0.76).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bellon, M; Kumarasiri, A; Kim, J
Purpose: To compare the performance of two deformable image registration (DIR) algorithms for contour propagation and to evaluate the accuracy of DIR for use with high dose rate (HDR) brachytherapy planning for cervical cancer. Methods: Five patients undergoing HDR ring and tandem brachytherapy were included in this retrospective study. All patients underwent CT simulation and replanning prior to each fraction (3–5 fractions total). CT-to-CT DIR was performed using two commercially available software platforms: SmartAdapt, Varian Medical Systems (Demons) and Velocity AI, Velocity Medical Solutions (B-spline). Fraction 1 contours were deformed and propagated to each subsequent image set and compared to contours manually drawn by an expert clinician. Dice similarity coefficients (DSC), defined as DSC(A,B) = 2|A∩B| / (|A| + |B|), were calculated to quantify the spatial overlap between manual (A) and deformed (B) contours. Additionally, clinician-assigned visual scores were used to describe and compare the performance of each DIR method and ultimately evaluate which was more clinically acceptable. Scoring was based on a 1–5 scale, with 1 meaning "clinically acceptable with no contour changes" and 5 meaning "clinically unacceptable". Results: Statistically significant differences were not observed between the two DIR algorithms. The average DSC for the bladder, rectum and rectosigmoid were 0.82±0.08, 0.67±0.13 and 0.48±0.18, respectively. The poorest contour agreement was observed for the rectosigmoid due to limited soft tissue contrast and drastic anatomical changes, i.e., organ shape/filling. Two clinicians gave nearly equivalent average scores of 2.75±0.91 for SmartAdapt and 2.75±0.94 for Velocity AI, indicating that for a majority of the cases more than one of the three contours evaluated required major modifications.
Conclusion: Limitations of both DIR algorithms resulted in inaccuracies in contour propagation in the pelvic region, thus hampering the clinical utility of this technology. Further work is required to optimize these algorithms and take advantage of the potential of DIR for HDR brachytherapy planning.
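The Dice similarity coefficient, DSC(A,B) = 2|A∩B| / (|A| + |B|), recurs throughout these abstracts and is straightforward to compute on binary masks; a minimal NumPy sketch (not any vendor's implementation):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    DSC(A, B) = 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # two empty masks agree trivially (a common convention)
    return 2.0 * np.logical_and(a, b).sum() / total
```

For example, masks [1,1,1,0] and [0,1,1,1] share 2 voxels out of 3+3, giving DSC = 4/6 ≈ 0.667; identical masks give 1.0 and disjoint masks give 0.0.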
Extracting contours of oval-shaped objects by Hough transform and minimal path algorithms
NASA Astrophysics Data System (ADS)
Tleis, Mohamed; Verbeek, Fons J.
2014-04-01
Circular and oval-like objects are very common in cell biology and microbiology. These objects need to be analyzed, and to that end digitized images from the microscope are used to build an automated analysis pipeline. It is essential to detect all the objects in an image as well as to extract the exact contour of each individual object, so that measurements such as shape and texture features can be performed on them. We achieve this measurement objective by approaching contour detection through dynamic programming. In this paper we describe a method that uses the Hough transform and two minimal path algorithms to detect contours of (ovoid-like) objects. These algorithms are based on an existing grey-weighted distance transform and a new algorithm that extracts the circular shortest path in an image. The methods were tested on an artificial dataset of 1000 images, with an F1-score of 0.972. In a case study with yeast cells, contours from our methods were compared with another solution using Pratt's figure of merit. Results indicate that our methods were more precise, based on a comparison with a ground-truth dataset. As far as yeast cells are concerned, the segmentation and measurement results enable, in future work, retrieving information from different developmental stages of the cell using complex features.
Biological object recognition in μ-radiography images
NASA Astrophysics Data System (ADS)
Prochazka, A.; Dammer, J.; Weyda, F.; Sopko, V.; Benes, J.; Zeman, J.; Jandejsek, I.
2015-03-01
This study presents the applicability of real-time microradiography to biological objects, namely the horse chestnut leafminer, Cameraria ohridella (Insecta: Lepidoptera, Gracillariidae), and the subsequent image processing, focusing on image segmentation and object recognition. Microradiography of insects such as the horse chestnut leafminer provides non-invasive imaging that leaves the organisms alive. The imaging requires a radiographic system with high spatial resolution (micrometer scale). Our radiographic system consists of a micro-focus X-ray tube and two types of detectors. The first is a charge-integrating detector (Hamamatsu flat panel); the second is a pixel semiconductor detector (Medipix2), which allows detection of single quantum photons of ionizing radiation. Numerous horse chestnut leafminer pupae in the microradiography images were easily recognizable in automatic mode using the image processing methods. We implemented an algorithm that counts the number of dead and alive pupae in images. The algorithm was based on two methods: 1) noise reduction using mathematical morphology filters, and 2) Canny edge detection. The accuracy of the algorithm is higher for the Medipix2 (average recall for detection of alive pupae = 0.99, for dead pupae = 0.83) than for the flat panel (average recall for detection of alive pupae = 0.99, for dead pupae = 0.77). We therefore conclude that Medipix2 has lower noise and displays the contours (edges) of biological objects better. Our method allows automatic selection and counting of dead and alive chestnut leafminer pupae, leading to faster monitoring of the population of one of the world's important insect pests.
SU-F-T-405: Development of a Rapid Cardiac Contouring Tool Using Landmark-Driven Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pelletier, C; Jung, J; Mosher, E
2016-06-15
Purpose: This study aims to develop a tool to rapidly delineate cardiac substructures for use in dosimetry for large-scale clinical trials or epidemiological investigations. The goal is a system that can semi-automatically delineate nine cardiac structures with reasonable accuracy within a couple of minutes. Methods: The cardiac contouring tool employs a most-similar-atlas method, where a selection criterion is used to pre-select the model most similar to the patient from a library of pre-defined atlases. Sixty contrast-enhanced cardiac computed tomography angiography (CTA) scans (30 male and 30 female) were manually contoured to serve as the atlas library. For each CTA, 12 structures were delineated. The Kabsch algorithm was used to compute the optimal rotation and translation matrices between the patient and each atlas, and the minimum root mean squared distance between the patient and atlas after transformation was used to select the most similar atlas. An initial study using 10 CTA sets was performed to assess system feasibility. A leave-one-patient-out validation was performed, and fit criteria were calculated to evaluate the fit accuracy against manual contours. Results: For the pilot study, mean Dice indices of 0.895 were achieved for the whole heart, 0.867 for the ventricles, and 0.802 for the atria. In addition, mean distance was measured via the chord length distribution (CLD) between ground truth and the atlas structures for the four coronary arteries. The mean CLD for all coronary arteries was below 14 mm, with the left circumflex artery showing the best agreement (7.08 mm). Conclusion: The cardiac contouring tool is able to delineate cardiac structures with reasonable accuracy in less than 90 seconds. Pilot data indicate that the system can delineate the whole heart and ventricles with reasonable accuracy using even a limited library. We are extending the atlas sets to 60 adult males and females in total.
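The Kabsch step above computes the rigid alignment minimizing RMSD between paired point sets; a standard SVD-based sketch (the landmark pairing between patient and atlas is assumed given):

```python
import numpy as np

def kabsch(P, Q):
    """Optimal rotation R and translation t minimizing the RMSD of R@p + t
    onto q over paired landmarks. P, Q: (N, 3) arrays of corresponding points."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # covariance of centered point sets
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

def rmsd_after_fit(P, Q):
    """Root mean squared distance after optimal rigid alignment, as used
    above to rank atlases by similarity."""
    R, t = kabsch(P, Q)
    return float(np.sqrt(np.mean(np.sum((P @ R.T + t - Q) ** 2, axis=1))))
```

Ranking atlases then amounts to evaluating `rmsd_after_fit` against each library entry and keeping the minimum.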
Semi-automatic geographic atrophy segmentation for SD-OCT images.
Chen, Qiang; de Sisternes, Luis; Leng, Theodore; Zheng, Luoluo; Kutzscher, Lauren; Rubin, Daniel L
2013-01-01
Geographic atrophy (GA) is a condition associated with retinal thinning and loss of the retinal pigment epithelium (RPE) layer. It appears in advanced stages of non-exudative age-related macular degeneration (AMD) and can lead to vision loss. We present a semi-automated GA segmentation algorithm for spectral-domain optical coherence tomography (SD-OCT) images. The method first identifies and segments a surface between the RPE and the choroid to generate retinal projection images in which the projection region is restricted to a sub-volume of the retina where the presence of GA can be identified. Subsequently, a geometric active contour model is employed to automatically detect and segment the extent of GA in the projection images. Two image datasets, consisting of 55 SD-OCT scans from twelve eyes in eight patients with GA and 56 SD-OCT scans from 56 eyes in 56 patients with GA, respectively, were utilized to qualitatively and quantitatively evaluate the proposed GA segmentation method. Experimental results suggest that the proposed algorithm can achieve high segmentation accuracy. The mean GA overlap ratios between our proposed method and outlines drawn in the SD-OCT scans, our method and outlines drawn in the fundus autofluorescence (FAF) images, and the commercial software (Carl Zeiss Meditec proprietary software, Cirrus version 6.0) and outlines drawn in FAF images were 72.60%, 65.88%, and 59.83%, respectively.
Design of thrust vectoring exhaust nozzles for real-time applications using neural networks
NASA Technical Reports Server (NTRS)
Prasanth, Ravi K.; Markin, Robert E.; Whitaker, Kevin W.
1991-01-01
Thrust vectoring continues to be an important issue in military aircraft system designs. A recently developed concept of vectoring aircraft thrust makes use of flexible exhaust nozzles. Subtle modifications in the nozzle wall contours produce a non-uniform flow field containing a complex pattern of shock and expansion waves; the end result, due to the asymmetric velocity and pressure distributions, is vectored thrust. Specification of the nozzle contours required for a desired thrust vector angle (an inverse design problem) has been achieved with genetic algorithms. This approach is computationally intensive and prevents the nozzles from being designed in real time, which is necessary for an operational aircraft system. An investigation was conducted into using genetic algorithms to train a neural network in an attempt to obtain two-dimensional nozzle contours in real time. Results show that genetic-algorithm-trained neural networks provide a viable, real-time alternative for designing thrust vectoring nozzle contours. Thrust vector angles up to 20 deg were obtained within an average error of 0.0914 deg. The error surfaces encountered were highly degenerate, and thus the robustness of genetic algorithms was well suited for minimizing global errors.
Volonghi, Paola; Tresoldi, Daniele; Cadioli, Marcello; Usuelli, Antonio M; Ponzini, Raffaele; Morbiducci, Umberto; Esposito, Antonio; Rizzo, Giovanna
2016-02-01
To propose and assess a new method that automatically extracts a three-dimensional (3D) geometric model of the thoracic aorta (TA) from 3D cine phase contrast MRI (PCMRI) acquisitions. The proposed method is composed of two steps: segmentation of the TA and creation of the 3D geometric model. The segmentation algorithm, based on Level Set, was set and applied to healthy subjects acquired in three different modalities (with and without SENSE reduction factors). Accuracy was evaluated using standard quality indices. The 3D model is characterized by the vessel surface mesh and its centerline; the comparison of models obtained from the three different datasets was also carried out in terms of radius of curvature (RC) and average tortuosity (AT). In all datasets, the segmentation quality indices confirmed very good agreement between manual and automatic contours (average symmetric distance < 1.44 mm, DICE Similarity Coefficient > 0.88). The 3D models extracted from the three datasets were found to be comparable, with differences of less than 10% for RC and 11% for AT. Our method was found effective on PCMRI data to provide a 3D geometric model of the TA, to support morphometric and hemodynamic characterization of the aorta. © 2015 Wiley Periodicals, Inc.
Contour matching for a fish recognition and migration-monitoring system
NASA Astrophysics Data System (ADS)
Lee, Dah-Jye; Schoenberger, Robert B.; Shiozawa, Dennis; Xu, Xiaoqian; Zhan, Pengcheng
2004-12-01
Fish migration is being monitored year round to provide valuable information for the study of behavioral responses of fish to environmental variations. However, currently all monitoring is done by human observers. An automatic fish recognition and migration monitoring system is more efficient and can provide more accurate data. Such a system includes automatic fish image acquisition, contour extraction, fish categorization, and data storage. Shape is a very important characteristic and shape analysis and shape matching are studied for fish recognition. Previous work focused on finding critical landmark points on fish shape using curvature function analysis. Fish recognition based on landmark points has shown satisfying results. However, the main difficulty of this approach is that landmark points sometimes cannot be located very accurately. Whole shape matching is used for fish recognition in this paper. Several shape descriptors, such as Fourier descriptors, polygon approximation and line segments, are tested. A power cepstrum technique has been developed in order to improve the categorization speed using contours represented in tangent space with normalized length. Design and integration including image acquisition, contour extraction and fish categorization are discussed in this paper. Fish categorization results based on shape analysis and shape matching are also included.
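Among the whole-shape descriptors tested above, Fourier descriptors are the most compact to illustrate. The sketch below derives translation-, scale-, and rotation-invariant magnitudes from a closed contour; the harmonic count `n` and the normalization choices are illustrative, not the paper's exact configuration:

```python
import numpy as np

def fourier_descriptors(contour, n=16):
    """Invariant Fourier descriptors of a closed contour given as an
    (N, 2) array of (x, y) points, N > n."""
    z = contour[:, 0] + 1j * contour[:, 1]   # contour as a complex signal
    F = np.fft.fft(z)
    F[0] = 0                                 # drop DC term -> translation invariance
    mags = np.abs(F)                         # magnitudes -> rotation/start-point invariance
    mags = mags / mags[1]                    # divide by first harmonic -> scale invariance
    return mags[1:n + 1]                     # (assumes a non-degenerate contour, mags[1] > 0)
```

Two contours can then be compared by the distance between their descriptor vectors, independent of where and how large the fish appears in the frame.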
Flame filtering and perimeter localization of wildfires using aerial thermal imagery
NASA Astrophysics Data System (ADS)
Valero, Mario M.; Verstockt, Steven; Rios, Oriol; Pastor, Elsa; Vandecasteele, Florian; Planas, Eulàlia
2017-05-01
Airborne thermal infrared (TIR) imaging systems are being increasingly used for wildfire tactical monitoring since they show important advantages over spaceborne platforms and visible sensors while being much more affordable and much lighter than multispectral cameras. However, the analysis of aerial TIR images entails a number of difficulties which have thus far prevented monitoring tasks from being totally automated. One issue that needs to be addressed is the appearance of flame projections during the geo-correction of off-nadir images. Filtering these flames is essential in order to accurately estimate the geographical location of the fuel burning interface. Therefore, we present a methodology which allows the automatic localisation of the active fire contour free of flame projections. The actively burning area is detected in TIR georeferenced images through a combination of intensity thresholding techniques, morphological processing and active contours. Subsequently, flame projections are filtered out by temporal frequency analysis of the appropriate contour descriptors. The proposed algorithm was tested on footage acquired during three large-scale field experimental burns. Results suggest this methodology may be suitable to automatise the acquisition of quantitative data about the fire evolution. As future work, a revision of the low-pass filter implemented for the temporal analysis (currently a median filter) is recommended. The availability of up-to-date information about the fire state would improve situational awareness during an emergency response and may be used to calibrate data-driven simulators capable of emitting short-term accurate forecasts of the subsequent fire evolution.
Medical Imaging Lesion Detection Based on Unified Gravitational Fuzzy Clustering
Vianney Kinani, Jean Marie; Gallegos Funes, Francisco; Mújica Vargas, Dante; Ramos Díaz, Eduardo; Arellano, Alfonso
2017-01-01
We develop a swift, robust, and practical tool for detecting brain lesions with minimal user intervention to assist clinicians and researchers in the diagnosis process, radiosurgery planning, and assessment of the patient's response to the therapy. We propose a unified gravitational fuzzy clustering-based segmentation algorithm, which integrates the Newtonian concept of gravity into fuzzy clustering. We first perform fuzzy rule-based image enhancement on our database which is comprised of T1/T2 weighted magnetic resonance (MR) and fluid-attenuated inversion recovery (FLAIR) images to facilitate a smoother segmentation. The scalar output obtained is fed into a gravitational fuzzy clustering algorithm, which separates healthy structures from the unhealthy. Finally, the lesion contour is automatically outlined through the initialization-free level set evolution method. An advantage of this lesion detection algorithm is its precision and its simultaneous use of features computed from the intensity properties of the MR scan in a cascading pattern, which makes the computation fast, robust, and self-contained. Furthermore, we validate our algorithm with large-scale experiments using clinical and synthetic brain lesion datasets. As a result, an 84%–93% overlap performance is obtained, with an emphasis on robustness with respect to different and heterogeneous types of lesion and a swift computation time. PMID:29158887
Discriminative Cooperative Networks for Detecting Phase Transitions
NASA Astrophysics Data System (ADS)
Liu, Ye-Hua; van Nieuwenburg, Evert P. L.
2018-04-01
The classification of states of matter and their corresponding phase transitions is a special kind of machine-learning task, where physical data allow for the analysis of new algorithms, which have not been considered in the general computer-science setting so far. Here we introduce an unsupervised machine-learning scheme for detecting phase transitions with a pair of discriminative cooperative networks (DCNs). In this scheme, a guesser network and a learner network cooperate to detect phase transitions from fully unlabeled data. The new scheme is efficient enough for dealing with phase diagrams in two-dimensional parameter spaces, where we can utilize an active contour model—the snake—from computer vision to host the two networks. The snake, with a DCN "brain," moves and learns actively in the parameter space, and locates phase boundaries automatically.
Medical image segmentation using 3D MRI data
NASA Astrophysics Data System (ADS)
Voronin, V.; Marchuk, V.; Semenishchev, E.; Cen, Yigang; Agaian, S.
2017-05-01
Precise segmentation of three-dimensional (3D) magnetic resonance imaging (MRI) data can be a very useful computer-aided diagnosis (CAD) tool in clinical routines. Accurate automatic extraction of a 3D component from MRI images is a challenging segmentation problem due to the small size of the objects of interest (e.g., blood vessels, bones) in each 2D slice and the complex surrounding anatomical structures. Our objective is to develop a specific segmentation scheme for accurately extracting parts of bones from MRI images. In this paper, we use a segmentation algorithm based on a modified active contour method to extract the parts of bones from MRI data sets. The proposed method demonstrates good accuracy in comparison with existing segmentation approaches on real MRI data.
Halftoning processing on a JPEG-compressed image
NASA Astrophysics Data System (ADS)
Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent
2003-12-01
Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time, and memory usage. In the wide-format printing industry, this becomes an important issue: e.g., a 1 m² input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation to the compressed format. This paper presents an innovative application of the halftoning-by-screening operation to JPEG-compressed images. The compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated with examples for different halftone masks. A pre-sharpening operation applied to a JPEG-compressed low-quality image is also described; it de-noises the image and enhances its contours.
GPU based contouring method on grid DEM data
NASA Astrophysics Data System (ADS)
Tan, Liheng; Wan, Gang; Li, Feng; Chen, Xiaohui; Du, Wenlong
2017-08-01
This paper presents a novel method to generate contour lines from grid DEM data based on the programmable GPU pipeline. Previous contouring approaches often use the CPU to construct a finite element mesh from the raw DEM data and then extract contour segments from the elements; they also need a tracing or sorting strategy to generate the final continuous contours. These approaches can be heavily CPU-bound and time-consuming, and the generated contours can be unsmooth if the raw data is sparsely distributed. Unlike the CPU approaches, we employ the GPU's vertex shader to generate a triangular mesh with arbitrary user-defined density, in which the height of each vertex is calculated through a third-order Cardinal spline function. Then, in the same frame, segments are extracted from the triangles by the geometry shader and transferred to the CPU side with an internal order in the GPU's transform feedback stage. Finally, we propose a "grid sorting" algorithm that produces the continuous contour lines by traversing the segments only once. Our method uses multiple stages of the GPU pipeline for computation, generates smooth contour lines, and is significantly faster than the previous CPU approaches. The algorithm can be easily implemented with the OpenGL 3.3 API or higher on consumer-level PCs.
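The third-order Cardinal spline used above to interpolate vertex heights can be written, for one dimension, in Hermite form with Cardinal tangents; a CPU-side sketch for clarity (the paper evaluates this in the vertex shader, and the tension value is a generic choice, not necessarily the paper's):

```python
def cardinal_spline(p0, p1, p2, p3, u, tension=0.5):
    """Third-order Cardinal spline interpolating between p1 and p2 for
    u in [0, 1], with p0 and p3 as neighboring samples.
    tension=0.5 gives the familiar Catmull-Rom spline."""
    m1 = tension * (p2 - p0)   # tangent at p1 from the neighboring samples
    m2 = tension * (p3 - p1)   # tangent at p2
    u2, u3 = u * u, u * u * u
    # Cubic Hermite basis applied to endpoints and tangents
    return ((2*u3 - 3*u2 + 1) * p1 + (u3 - 2*u2 + u) * m1
            + (-2*u3 + 3*u2) * p2 + (u3 - u2) * m2)
```

On grid DEM data the same formula is applied separably along rows and columns to obtain a smooth height at any user-defined mesh density.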
Zhou, Yongxin; Bai, Jing
2007-01-01
A framework that combines atlas registration, fuzzy connectedness (FC) segmentation, and parametric bias field correction (PABIC) is proposed for the automatic segmentation of brain magnetic resonance imaging (MRI). First, the atlas is registered onto the MRI to initialize the following FC segmentation. Original techniques are proposed to estimate necessary initial parameters of FC segmentation. Further, the result of the FC segmentation is utilized to initialize a following PABIC algorithm. Finally, we re-apply the FC technique on the PABIC corrected MRI to get the final segmentation. Thus, we avoid expert human intervention and provide a fully automatic method for brain MRI segmentation. Experiments on both simulated and real MRI images demonstrate the validity of the method, as well as the limitation of the method. Being a fully automatic method, it is expected to find wide applications, such as three-dimensional visualization, radiation therapy planning, and medical database construction.
Superpixel-based segmentation of glottal area from videolaryngoscopy images
NASA Astrophysics Data System (ADS)
Turkmen, H. Irem; Albayrak, Abdulkadir; Karsligil, M. Elif; Kocak, Ismail
2017-11-01
Segmentation of the glottal area with high accuracy is one of the major challenges for the development of systems for computer-aided diagnosis of vocal-fold disorders. We propose a hybrid model combining conventional methods with a superpixel-based segmentation approach. We first employed a superpixel algorithm to reveal the glottal area by eliminating the local variances of pixels caused by bleedings, blood vessels, and light reflections from mucosa. Then, the glottal area was detected by exploiting a seeded region-growing algorithm in a fully automatic manner. The experiments were conducted on videolaryngoscopy images obtained from patients with pathologic vocal folds as well as from healthy subjects. Finally, the proposed hybrid approach was compared with conventional region-growing and active-contour model-based glottal area segmentation algorithms. The performance of the proposed method was evaluated in terms of segmentation accuracy and elapsed time. The F-measure, true negative rate, and Dice coefficients of the hybrid method were calculated as 82%, 93%, and 82%, respectively, which are superior to the state-of-the-art glottal-area segmentation methods. The proposed hybrid model achieved high success rates and robustness, making it suitable for developing a computer-aided diagnosis system that can be used in clinical routines.
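The seeded region-growing step above can be sketched as a breadth-first flood from a seed pixel with an intensity tolerance; the fixed-tolerance homogeneity criterion here is a common simple variant, not necessarily the paper's exact test:

```python
from collections import deque

import numpy as np

def region_grow(img, seed, tol=10.0):
    """Seeded region growing: flood outward from `seed`, accepting
    4-connected pixels whose intensity lies within `tol` of the seed's."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    ref = float(img[seed])
    q = deque([seed])
    mask[seed] = True
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(img[ny, nx]) - ref) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask
```

With the seed placed inside the dark glottal gap, the flood stops at the brighter mucosa, yielding the glottal-area mask.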
Human recognition based on head-shoulder contour extraction and BP neural network
NASA Astrophysics Data System (ADS)
Kong, Xiao-fang; Wang, Xiu-qin; Gu, Guohua; Chen, Qian; Qian, Wei-xian
2014-11-01
In practical application scenarios like video surveillance and human-computer interaction, human body movements are uncertain because the human body is a non-rigid object. Because the head-shoulder part of the human body is less affected by movement and is seldom obscured by other objects, a head-shoulder model with stable characteristics can serve as a detection feature describing the human body in human detection and recognition. To extract the head-shoulder contour accurately, this paper proposes a method for establishing a head-shoulder model that combines edge detection with mean-shift clustering. First, an adaptive Gaussian mixture background-update method is used to extract targets from the video sequence. Second, edge detection extracts the contour of moving objects, and the mean-shift algorithm clusters parts of the target's contour. Third, the head-shoulder model is established from the width-to-height ratio of the human head-shoulder region combined with the projection histogram of the binary image, and the eigenvectors of the head-shoulder contour are acquired. Finally, the relationship between head-shoulder contour eigenvectors and moving objects is learned by training a back-propagation (BP) neural network classifier, and the model is applied for human detection and recognition. Experiments have shown that the proposed method, combining edge detection and the mean-shift algorithm, can extract the complete head-shoulder contour with low computational complexity and high efficiency.
Analysis of contour images using optics of spiral beams
NASA Astrophysics Data System (ADS)
Volostnikov, V. G.; Kishkin, S. A.; Kotova, S. P.
2018-03-01
An approach to the recognition of contour images by computer, based on the principles of coherent optics, is outlined. A mathematical description of the recognition algorithm and the results of numerical modelling are presented. The developed approach to contour-image recognition using the optics of spiral beams is described and justified.
Li, Sujiao; Zhang, Zhengxiang; Wang, Jue
2014-01-01
Prevention of pressure sores remains a significant problem confronting spinal cord injury patients and the elderly with limited mobility. One vital aspect of this subject concerns the development of cushions to decrease pressure ulcers for seated patients, particularly those bound to wheelchairs. Here, we present a novel cushion system that employs the interface pressure distribution between the cushion and the buttocks to design custom contoured foam cushions. An optimized normalization algorithm was proposed, with which the interface pressure distribution is transformed into the carving depth of the foam cushion according to the biomechanical characteristics of the foam. The shape and pressure-relief performance of the custom contoured foam cushions were investigated. The outcomes showed that the contoured shape of the personalized cushion matched the buttock contour very well. Moreover, the custom contoured cushion significantly alleviated pressure under the buttocks and increased subjective comfort and stability. Furthermore, the fabrication method not only decreased the unit production cost but also simplified the manufacturing procedure. All in all, this prototype seat cushion offers an effective and economical way to prevent pressure ulcers.
Knowledge discovery through games and game theory
NASA Astrophysics Data System (ADS)
Smith, James F., III; Rhyne, Robert D.
2001-03-01
A fuzzy logic based expert system has been developed that automatically allocates electronic attack (EA) resources in real-time over many dissimilar platforms. The platforms can be very general, e.g., ships, planes, robots, land-based facilities, etc. The potential foes the platforms deal with can also be general. The initial version of the algorithm was optimized using a genetic algorithm employing fitness functions constructed from expertise. A new approach is being explored that involves embedding the resource manager in an electronic game environment. The game allows a human expert to play against the resource manager in a simulated battlespace, with each of the defending platforms being exclusively directed by the fuzzy resource manager and the attacking platforms being controlled by the human expert or operating autonomously under their own logic. This approach automates the data mining problem. The game automatically creates a database reflecting the domain expert's knowledge, and it calls a data mining function, a genetic algorithm, to mine the database as required. The game allows easy evaluation of the information mined in the second step. The measure of effectiveness (MOE) for re-optimization is discussed. The mined information is extremely valuable, as shown through demanding scenarios.
Ilunga-Mbuyamba, Elisee; Avina-Cervantes, Juan Gabriel; Cepeda-Negrete, Jonathan; Ibarra-Manzano, Mario Alberto; Chalopin, Claire
2017-12-01
Brain tumor segmentation is a routine process in a clinical setting and provides useful information for diagnosis and treatment planning. Manual segmentation, performed by physicians or radiologists, is a time-consuming task due to the large quantity of medical data generated presently. Hence, automatic segmentation methods are needed, and several approaches have been introduced in recent years, including the Localized Region-based Active Contour Model (LRACM). Many LRACMs are popular, but each presents strong and weak points. In this paper, the automatic selection of an LRACM based on image content, and its application to brain tumor segmentation, is presented. Thereby, a framework to select one of three LRACMs, i.e., Local Gaussian Distribution Fitting (LGDF), localized Chan-Vese (C-V), and Localized Active Contour Model with Background Intensity Compensation (LACM-BIC), is proposed. Twelve visual features are extracted to properly select the method that may process a given input image. The system is based on a supervised approach. Applied specifically to Magnetic Resonance Imaging (MRI) images, the experiments showed that the proposed system is able to correctly select the suitable LRACM to handle a specific image. Consequently, the selection framework achieves better accuracy than the three LRACMs used separately.
Tracking fuzzy borders using geodesic curves with application to liver segmentation on planning CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, Yading, E-mail: yading.yuan@mssm.edu; Chao, Ming; Sheu, Ren-Dih
Purpose: This work aims to develop a robust and efficient method to track the fuzzy borders between the liver and abutting organs, where automatic liver segmentation usually suffers, and to investigate its applications in automatic liver segmentation on noncontrast-enhanced planning computed tomography (CT) images. Methods: In order to track the fuzzy liver–chestwall and liver–heart borders where oversegmentation is often found, a starting point and an ending point were first identified on the coronal view images; the fuzzy border was then determined as a geodesic curve constructed by minimizing the gradient-weighted path length between these two points near the fuzzy border. The minimization of path length was numerically solved by the fast-marching method. The resultant fuzzy borders were incorporated into the authors’ automatic segmentation scheme, in which the liver was initially estimated by patient-specific adaptive thresholding and then refined by a geodesic active contour model. Using planning CT images of 15 liver patients treated with stereotactic body radiation therapy, the liver contours extracted by the proposed computerized scheme were compared with those manually delineated by a radiation oncologist. Results: The proposed automatic liver segmentation method yielded an average Dice similarity coefficient of 0.930 ± 0.015, whereas it was 0.912 ± 0.020 if fuzzy border tracking was not used. The application of fuzzy border tracking was found to significantly improve segmentation performance. The mean liver volume obtained by the proposed method was 1727 cm³, whereas it was 1719 cm³ for the manually outlined volumes. The computer-generated liver volumes achieved excellent agreement with the manually outlined volumes, with a correlation coefficient of 0.98. Conclusions: The proposed method was shown to provide accurate segmentation of the liver in planning CT images where no contrast agent is applied. The authors’ results also clearly demonstrated that tracking the fuzzy borders can significantly reduce contour leakage during active contour evolution.
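The core idea of this abstract, a border found as the minimum gradient-weighted path between two endpoints, can be illustrated with a small Dijkstra search on a cost grid (the paper solves the same minimization with fast marching; Dijkstra on a 4-connected grid is a discrete stand-in, and the cost array here is synthetic).

```python
import numpy as np
import heapq

def geodesic_path(cost, start, end):
    """Minimal-cost 4-connected path on a cost grid (Dijkstra)."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    dist[start] = cost[start]
    prev = {}
    pq = [(cost[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(pq, (dist[nr, nc], (nr, nc)))
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Low cost along a "border" valley (column 2), high cost elsewhere,
# mimicking a gradient-weighted image where edges are cheap to follow.
cost = np.full((5, 5), 10.0)
cost[:, 2] = 1.0
path = geodesic_path(cost, (0, 2), (4, 2))
print(path)  # follows the valley straight down column 2
```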
Sanz-Requena, Roberto; Moratal, David; García-Sánchez, Diego Ramón; Bodí, Vicente; Rieta, José Joaquín; Sanchis, Juan Manuel
2007-03-01
Intravascular ultrasound (IVUS) imaging is used along with X-ray coronary angiography to detect vessel pathologies. Manual analysis of IVUS images is slow and time-consuming and is not feasible for clinical purposes. A semi-automated method is proposed to generate 3D reconstructions from IVUS video sequences, so that a fast diagnosis can easily be made, quantifying plaque length and severity as well as the plaque volume of the vessels under study. The methodology described in this work has four steps: pre-processing of IVUS images, segmentation of the media-adventitia contour, detection of the intima and plaque, and 3D reconstruction of the vessel. Pre-processing is intended to remove noise from the images without blurring the edges. Segmentation of the media-adventitia contour is achieved using active contours (snakes). In particular, we use the gradient vector flow (GVF) as the external force for the snakes. The detection of the lumen border is obtained by taking into account gray-level information of the inner part of the previously detected contours. A knowledge-based approach is used to determine which gray level corresponds statistically to the different regions of interest: intima, plaque, and lumen. The catheter region is automatically discarded. An estimate of plaque type is also given. Finally, 3D reconstruction of all detected regions is made. The suitability of this methodology has been verified for the analysis and visualization of plaque length, stenosis severity, automatic detection of the most problematic regions, calculation of plaque volumes, and a preliminary estimation of plaque type, obtaining for automatic measures of lumen and vessel area an average error smaller than 1 mm² (approximately 10% of the average measure), for calculation of plaque and lumen volumes errors smaller than 0.5 mm³ (approximately 20% of the average measure), and for plaque type estimates a mismatch of less than 8% in the analysed frames.
NASA Astrophysics Data System (ADS)
He, Nana; Zhang, Xiaolong; Zhao, Juanjuan; Zhao, Huilan; Qiang, Yan
2017-07-01
While the popular thin layer scanning technology of spiral CT has helped to improve diagnoses of lung diseases, the large volumes of scanning images produced by the technology also dramatically increase the load of physicians in lesion detection. Computer-aided diagnosis techniques like lesions segmentation in thin CT sequences have been developed to address this issue, but it remains a challenge to achieve high segmentation efficiency and accuracy without much involvement of human manual intervention. In this paper, we present our research on automated segmentation of lung parenchyma with an improved geodesic active contour model that is geodesic active contour model based on similarity (GACBS). Combining spectral clustering algorithm based on Nystrom (SCN) with GACBS, this algorithm first extracts key image slices, then uses these slices to generate an initial contour of pulmonary parenchyma of un-segmented slices with an interpolation algorithm, and finally segments lung parenchyma of un-segmented slices. Experimental results show that the segmentation results generated by our method are close to what manual segmentation can produce, with an average volume overlap ratio of 91.48%.
Yet another method for triangulation and contouring for automated cartography
NASA Technical Reports Server (NTRS)
De Floriani, L.; Falcidieno, B.; Nasy, G.; Pienovi, C.
1982-01-01
An algorithm is presented for hierarchical subdivision of a set of three-dimensional surface observations. The data structure used for obtaining the desired triangulation is also singularly appropriate for extracting contours. Some examples are presented, and the results obtained are compared with those given by Delaunay triangulation. The data points selected by the algorithm provide a better approximation to the desired surface than do randomly selected points.
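The Delaunay triangulation this entry compares against, and the piecewise-linear surface approximation built on it, can be sketched with SciPy. The sample surface and point set below are illustrative only:

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

# Hypothetical scattered surface samples z = f(x, y).
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1.0, (30, 2))
z = np.sin(np.pi * pts[:, 0]) * np.cos(np.pi * pts[:, 1])

# Triangulate the (x, y) sites; each simplex is a triangle over which
# a contourer can interpolate the surface linearly.
tri = Delaunay(pts)
interp = LinearNDInterpolator(tri, z)

print(tri.simplices.shape[1])          # 3 vertices per simplex
print(abs(float(interp(pts[0])) - z[0]) < 1e-9)  # exact at a data site
```

Hierarchical subdivision, as in the paper, instead inserts points one at a time where the current approximation errs most, which is why its selected points approximate the surface better than random ones.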
NASA Astrophysics Data System (ADS)
Gaillot, P.; Bardaine, T.; Lyon-Caen, H.
2004-12-01
In recent years, various automatic phase pickers based on the wavelet transform have been developed. The main motivation for using wavelet transforms is that they are excellent at finding the characteristics of transient signals, they have good time resolution at all periods, and they are easy to program for fast execution. Thus, the time-scale properties and flexibility of wavelets allow detection of P and S phases in a broad frequency range, making their utilization possible in various contexts. However, the direct application of an automatic picking program in a different context or network than the one for which it was initially developed quickly becomes tedious. In fact, independently of the strategy involved in automatic picking algorithms (window average, autoregressive, beamforming, optimization filtering, neural network), all developed algorithms use different parameters that depend on the objective of the seismological study, the region, and the seismological network. Classically, these parameters are defined manually by trial and error or by a calibrated learning stage. In order to facilitate this laborious process, we have developed an automated method that provides optimal parameters for the picking programs. The set of parameters can be explored using simulated annealing, which is a generic name for a family of optimization algorithms based on the principle of stochastic relaxation. The optimization process amounts to systematically modifying an initial realization so as to decrease the value of the objective function, bringing the realization acceptably close to the target statistics.
Different formulations of the optimization problem (objective function) are discussed using (1) world seismicity data recorded by the French national seismic monitoring network (ReNass), (2) regional seismicity data recorded in the framework of the Corinth Rift Laboratory (CRL) experiment, (3) induced seismicity data from the gas field of Lacq (Western Pyrenees), and (4) micro-seismicity data from glacier monitoring. The developed method is discussed and tested using our wavelet version of the standard STA-LTA algorithm.
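The STA/LTA trigger mentioned in the closing sentence is easy to sketch: a short-term average of signal energy divided by a long-term average, with a pick declared where the ratio exceeds a threshold. The trace, window lengths, and threshold below are illustrative, not the authors' tuned parameters (tuning them is precisely what their simulated-annealing method automates).

```python
import numpy as np

def sta_lta(x, nsta, nlta):
    """Causal STA/LTA characteristic function on squared amplitudes."""
    e = np.asarray(x, dtype=float) ** 2
    c = np.concatenate(([0.0], np.cumsum(e)))
    ratio = np.zeros(len(e))
    for i in range(nlta - 1, len(e)):
        sta = (c[i + 1] - c[i + 1 - nsta]) / nsta   # short-term average
        lta = (c[i + 1] - c[i + 1 - nlta]) / nlta   # long-term average
        if lta > 0:
            ratio[i] = sta / lta
    return ratio

# Synthetic trace: background noise with a burst of energy at sample 600.
rng = np.random.default_rng(2)
trace = rng.normal(0.0, 0.1, 1000)
trace[600:650] += rng.normal(0.0, 2.0, 50)

r = sta_lta(trace, nsta=20, nlta=200)
pick = int(np.argmax(r > 5.0))  # first sample where the trigger fires
print(pick)                      # close to the true onset at 600
```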
NASA Astrophysics Data System (ADS)
Jiang, Yang; Gong, Yuanzheng; Wang, Thomas D.; Seibel, Eric J.
2017-02-01
Multimodal endoscopy, with fluorescence-labeled probes binding to overexpressed molecular targets, is a promising technology for visualizing early-stage cancer. The target-to-background (T/B) ratio is the quantitative measure used to correlate fluorescence regions with cancer. Currently, T/B ratio calculation is a post-processing step and does not provide real-time feedback to the endoscopist. To achieve real-time computer-assisted diagnosis (CAD), we establish image processing protocols for calculating the T/B ratio and locating high-risk fluorescence regions to guide biopsy and therapy in Barrett's esophagus (BE) patients. Methods: The Chan-Vese algorithm, an active contour model, is used to segment high-risk regions in fluorescence videos. A semi-implicit gradient descent method was applied to minimize the energy function of this algorithm and evolve the segmentation. The surrounding background was then identified using a morphology operation. The average T/B ratio was computed, and regions of interest were highlighted based on user-selected thresholding. Evaluation was conducted on 50 fluorescence videos acquired from clinical recordings made with a custom multimodal endoscope. Results: With a processing speed of 2 fps on a laptop computer, we obtained accurate segmentation of the high-risk regions examined by experts. For each case, the clinical user could optimize the target boundary by changing the penalty on the area inside the contour. Conclusion: An automatic, real-time procedure for calculating the T/B ratio and identifying high-risk regions of early esophageal cancer was developed. Future work will increase the processing speed beyond 5 fps, refine the clinical interface, and extend the approach to additional GI cancers and fluorescence peptides.
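The morphology-plus-ratio step at the end of the pipeline is simple to illustrate: given a segmented target mask, the background is taken as a dilated ring around it, and the T/B ratio is the ratio of mean intensities. The frame below is synthetic, and a thresholded mask stands in for the Chan-Vese result.

```python
import numpy as np
from scipy.ndimage import binary_dilation

# Hypothetical fluorescence frame: bright target region on dim background.
img = np.full((40, 40), 10.0)
img[15:25, 15:25] = 80.0
target = img > 40  # stand-in for the Chan-Vese segmentation mask

# Background ring: dilate the target mask and subtract it (morphology step).
ring = binary_dilation(target, iterations=5) & ~target

tb_ratio = img[target].mean() / img[ring].mean()
print(tb_ratio)  # 8.0 for this toy frame
```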
Chiu, Stephanie J; Toth, Cynthia A; Bowes Rickman, Catherine; Izatt, Joseph A; Farsiu, Sina
2012-05-01
This paper presents a generalized framework for segmenting closed-contour anatomical and pathological features using graph theory and dynamic programming (GTDP). More specifically, the GTDP method previously developed for quantifying retinal and corneal layer thicknesses is extended to segment objects such as cells and cysts. The presented technique relies on a transform that maps closed-contour features in the Cartesian domain into lines in the quasi-polar domain. The features of interest are then segmented as layers via GTDP. Application of this method to segment closed-contour features in several ophthalmic image types is shown. Quantitative validation experiments for retinal pigmented epithelium cell segmentation in confocal fluorescence microscopy images attest to the accuracy of the presented technique.
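The key transform here, mapping a closed contour into a "layer", can be illustrated with a plain polar unwrap (the paper's quasi-polar transform is more general; this sketch, on a synthetic disk with an assumed known center, only shows the geometric idea).

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Toy image with a bright disk; unwrapping to polar coordinates turns
# the closed circular boundary into a nearly horizontal layer.
yy, xx = np.mgrid[0:64, 0:64]
img = ((yy - 32) ** 2 + (xx - 32) ** 2 <= 12 ** 2).astype(float)

theta = np.linspace(0.0, 2 * np.pi, 90, endpoint=False)
radii = np.arange(0, 30)
rr, tt = np.meshgrid(radii, theta, indexing="ij")
polar = map_coordinates(img, [32 + rr * np.sin(tt), 32 + rr * np.cos(tt)], order=1)

# In the polar domain, the boundary sits at one row for every angle,
# so a layer-segmentation routine (GTDP in the paper) can trace it.
edge_rows = polar.shape[0] - 1 - np.argmax(polar[::-1] > 0.5, axis=0)
print(edge_rows.min(), edge_rows.max())  # a narrow band around the disk radius
```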
Does flexibility in perceptual organization compete with automatic grouping?
van Assche, Mitsouko; Gos, Pierre; Giersch, Anne
2012-02-06
Segregated objects can be sought simultaneously, i.e., mentally "re-grouped." Although the mechanisms underlying such "re-grouping" clearly differ from automatic grouping, it is unclear whether or not the end products of "re-grouping" and automatic grouping are the same. If they are, they would have similar impact on visual organization but would be in conflict. We compared the consequences of grouping and re-grouping on the performance cost induced by stimuli presented across hemifields. Two identical and contiguous target figures had to be identified within a display of circles and squares alternating around a fixation point. Eye tracking was used to check central fixation. The target pair could be located in the same or separate hemifields. A large cost of presenting targets across hemifields was observed. Grouping by connectedness yielded two types of target pair, connected and unconnected. Subjects prioritized unconnected pairs efficiently when prompted to do so, suggesting "re-grouping." However, unlike automatic grouping, this did not affect the cost of across-hemifield presentation. The suggestion is that re-grouping yields different outputs to automatic grouping, such that a fresh representation resulting from re-grouping complements the one resulting from automatic grouping but does not replace it. This is one step toward understanding how our mental exploration of the world ties in and coexists with ongoing perception.
Hybrid Parallel Contour Trees, Version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sewell, Christopher; Fasel, Patricia; Carr, Hamish
A common operation in scientific visualization is to compute and render a contour of a data set. Given a function of the form f : R^d -> R, a level set is defined as an inverse image f^-1(h) for an isovalue h, and a contour is a single connected component of a level set. The Reeb graph can then be defined to be the result of contracting each contour to a single point, and is well defined for Euclidean spaces or for general manifolds. For simple domains, the graph is guaranteed to be a tree, and is called the contour tree. Analysis can then be performed on the contour tree in order to identify isovalues of particular interest, based on various metrics, and render the corresponding contours, without having to know such isovalues a priori. This code is intended to be the first data-parallel algorithm for computing contour trees. Our implementation will use the portable data-parallel primitives provided by Nvidia’s Thrust library, allowing us to compile our same code for both GPUs and multi-core CPUs. Native OpenMP and purely serial versions of the code will likely also be included. It will also be extended to provide a hybrid data-parallel / distributed algorithm, allowing scaling beyond a single GPU or CPU.
Giersch, Anne; van Assche, Mitsouko; Capa, Rémi L; Marrer, Corinne; Gounot, Daniel
2012-01-01
Looking at a pair of objects is easy when automatic grouping mechanisms bind these objects together, but visual exploration can also be more flexible. It is possible to mentally "re-group" two objects that are not only separate but belong to different pairs of objects. "Re-grouping" is in conflict with automatic grouping, since it entails a separation of each item from the set it belongs to. This ability appears to be impaired in patients with schizophrenia. Here we check if this impairment is selective, which would suggest a dissociation between grouping and "re-grouping," or if it impacts on usual, automatic grouping, which would call for a better understanding of the interactions between automatic grouping and "re-grouping." Sixteen outpatients with schizophrenia and healthy controls had to identify two identical and contiguous target figures within a display of circles and squares alternating around a fixation point. Eye-tracking was used to check central fixation. The target pair could be located in the same or separate hemifields. Identical figures were grouped by a connector (grouped automatically) or not (to be re-grouped). Attention modulation of automatic grouping was tested by manipulating the proportion of connected and unconnected targets, thus prompting subjects to focalize on either connected or unconnected pairs. Both groups were sensitive to automatic grouping in most conditions, but patients were unusually slowed down for connected targets while focalizing on unconnected pairs. In addition, this unusual effect occurred only when targets were presented within the same hemifield. Patients and controls differed on this asymmetry between within- and across-hemifield presentation, suggesting that patients with schizophrenia do not re-group figures in the same way as controls do. We discuss possible implications on how "re-grouping" ties in with ongoing, automatic perception in healthy volunteers.
Automated Coronal Loop Identification Using Digital Image Processing Techniques
NASA Technical Reports Server (NTRS)
Lee, Jong K.; Gary, G. Allen; Newman, Timothy S.
2003-01-01
The results of a master's thesis project studying computer algorithms for automatic identification of optically thin, 3-dimensional solar coronal loop centers from extreme ultraviolet and X-ray 2-dimensional images will be presented. These center splines are proxies of the associated magnetic field lines. The project addresses pattern-recognition problems in which there are no unique shapes or edges and in which photon and detector noise heavily influence the images. The study explores extraction techniques using: (1) linear feature recognition of local patterns (related to the inertia-tensor concept), (2) parameter space via the Hough transform, and (3) topological adaptive contours (snakes) that constrain curvature and continuity, as possible candidates for digital loop detection schemes. We have developed synthesized images of coronal loops to test the various loop identification algorithms. Since the topology of these solar features is dominated by the magnetic field structure, a first-order magnetic field approximation using multiple dipoles provides a priori information in the identification process. Results from both synthesized and solar images will be presented.
Automated diagnosis of diabetic retinopathy and glaucoma using fundus and OCT images.
Pachiyappan, Arulmozhivarman; Das, Undurti N; Murthy, Tatavarti Vsp; Tatavarti, Rao
2012-06-13
We describe a system for the automated diagnosis of diabetic retinopathy and glaucoma using fundus and optical coherence tomography (OCT) images. Automatic screening helps doctors quickly and more accurately identify the patient's condition. The macular abnormalities caused by diabetic retinopathy can be detected by applying morphological operations, filters, and thresholds to the fundus images of the patient. Early detection of glaucoma is done by estimating the Retinal Nerve Fiber Layer (RNFL) thickness from the OCT images of the patient. The RNFL thickness estimation uses an active-contour-based deformable snake algorithm to segment the anterior and posterior boundaries of the retinal nerve fiber layer. The algorithm was tested on a set of 89 fundus images, of which 85 were found to have at least mild retinopathy, and OCT images of 31 patients, of which 13 were found to be glaucomatous. The accuracy of optic disc detection is 97.75%. The proposed system is therefore accurate, reliable, and robust, and can be realized.
Using Gaussian mixture models to detect and classify dolphin whistles and pulses.
Peso Parada, Pablo; Cardenal-López, Antonio
2014-06-01
In recent years, a number of automatic detection systems for free-ranging cetaceans have been proposed that aim to detect not just surfaced, but also submerged, individuals. These systems are typically based on pattern-recognition techniques applied to underwater acoustic recordings. Using a Gaussian mixture model, a classification system was developed that detects sounds in recordings and classifies them as one of four types: background noise, whistles, pulses, and combined whistles and pulses. The classifier was tested using a database of underwater recordings made off the Spanish coast during 2011. Using cepstral-coefficient-based parameterization, a sound detection rate of 87.5% was achieved for a 23.6% classification error rate. To improve these results, two parameters computed using the multiple signal classification algorithm and an unpredictability measure were included in the classifier. These parameters, which helped to classify the segments containing whistles, increased the detection rate to 90.3% and reduced the classification error rate to 18.1%. Finally, the potential of the multiple signal classification algorithm and unpredictability measure for estimating whistle contours and classifying cetacean species was also explored, with promising results.
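The Gaussian-mixture classifier at the heart of this system can be sketched with a small EM fit. The one-dimensional toy feature below stands in for the cepstral-coefficient vectors of the paper, and the two components stand in for two of its four sound classes; this is a generic GMM sketch, not the authors' trained model.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy 1-D acoustic feature: "noise" frames around 0, "whistle" frames around 5.
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(5.0, 1.0, 300)])

# Two-component Gaussian mixture fitted by expectation-maximization.
mu = np.array([x.min(), x.max()])
var = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibilities of each component for each frame
    pdf = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    resp = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances
    nk = resp.sum(axis=0)
    w = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk

labels = resp.argmax(axis=1)   # classify each frame by its likeliest component
print(np.sort(mu).round(1))    # component means close to the true 0 and 5
```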
Cherry recognition in natural environment based on the vision of picking robot
NASA Astrophysics Data System (ADS)
Zhang, Qirong; Chen, Shanxiong; Yu, Tingzhong; Wang, Yan
2017-04-01
In order to realize automatic recognition of cherries in the natural environment, this paper designed a recognition method for a robot vision system. The first step of this method is to pre-process the cherry image by median filtering. The second step is to identify the colour of the cherry through the 0.9R-G colour difference formula, and then use the Otsu algorithm for threshold segmentation. The third step is to remove noise by using an area threshold. The fourth step is to remove holes in the cherry image by morphological closing and opening operations. The fifth step is to obtain the centroid and contour of the cherry by using the minimum enclosing rectangle and the Hough transform. Through this recognition process, we can successfully identify 96% of cherries that are free of occlusion and adhesion.
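The Otsu thresholding in the second step selects the cut that maximizes between-class variance of the colour-difference histogram. A minimal sketch, run on synthetic colour-difference values rather than real cherry images:

```python
import numpy as np

def otsu_threshold(values):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, _ = np.histogram(values, bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()   # class weights
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:t] * levels[:t]).sum() / w0   # class means
        m1 = (p[t:] * levels[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2          # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Toy 0.9R-G colour-difference values: dark background vs bright cherry pixels.
rng = np.random.default_rng(4)
diff = np.concatenate([rng.normal(40, 8, 500), rng.normal(180, 8, 200)])
diff = np.clip(diff, 0, 255)
t = otsu_threshold(diff)
print(40 < t < 180)  # True: the threshold falls between the two modes
```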
Automated recognition of microcalcification clusters in mammograms
NASA Astrophysics Data System (ADS)
Bankman, Isaac N.; Christens-Barry, William A.; Kim, Dong W.; Weinberg, Irving N.; Gatewood, Olga B.; Brody, William R.
1993-07-01
The widespread and increasing use of mammographic screening for early breast cancer detection is placing a significant strain on clinical radiologists. Large numbers of radiographic films have to be visually interpreted in fine detail to determine the subtle hallmarks of cancer that may be present. We developed an algorithm for detecting microcalcification clusters, the most common and useful signs of early, potentially curable breast cancer. We describe this algorithm, which utilizes contour map representations of digitized mammographic films, and discuss its benefits in overcoming difficulties often encountered in algorithmic approaches to radiographic image processing. We present experimental analyses of mammographic films employing this contour-based algorithm and discuss practical issues relevant to its use in an automated film interpretation instrument.
Estimation of prenatal aorta intima-media thickness in ultrasound examination
NASA Astrophysics Data System (ADS)
Veronese, Elisa; Poletti, Enea; Cosmi, Erich; Grisan, Enrico
2012-03-01
Prenatal events such as intrauterine growth restriction have been shown to be associated with an increased thickness of the abdominal aorta in the fetus. Therefore, the measurement of abdominal aortic intima-media thickness (aIMT) has recently been considered a sensitive marker of atherosclerosis risk. To date, measurement of the aortic diameter and of aIMT has been performed manually on US fetal images, making it susceptible to intra- and inter-operator variability. This work introduces an automatic algorithm that identifies the abdominal aorta and estimates its diameter and aIMT from videos recorded during routine third-trimester ultrasonographic fetal biometry. Firstly, in each frame, the algorithm locates and segments the region corresponding to the aorta by means of an active contour driven by two different external forces: a static vector field convolution force and a dynamic pressure force. Then, in each frame, the mean diameter of the vessel is computed to reconstruct the cardiac cycle: we expect the diameter to follow a sinusoidal trend, according to the heart rate. From the obtained sinusoid, we identify the frames corresponding to end diastole and end systole. Finally, in these frames we assess the aIMT. According to its definition, we take as aIMT the distance between the leading edge of the blood-intima interface and the leading edge of the media-adventitia interface on the far wall of the vessel. The correlation between automatic and manual measures of end-diastole and end-systole aIMT is 0.90 and 0.84, respectively.
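The cardiac-cycle reconstruction step, fitting a sinusoid to the per-frame diameters and reading off its extremes, can be sketched with SciPy. The diameter values below are synthetic, and the frame rate and amplitude are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical per-frame mean aortic diameters over one cardiac cycle
# (values are illustrative, not data from the paper).
frames = np.arange(30.0)
rng = np.random.default_rng(5)
diam = 4.0 + 0.5 * np.sin(2 * np.pi * frames / 30) + rng.normal(0, 0.02, 30)

# Fit the expected sinusoidal trend to recover the cycle phase.
model = lambda t, a, b, f, ph: a + b * np.sin(2 * np.pi * f * t + ph)
(a, b, f, ph), _ = curve_fit(model, frames, diam, p0=[4.0, 0.5, 1 / 30, 0.0])

fit = model(frames, a, b, f, ph)
frame_max, frame_min = int(np.argmax(fit)), int(np.argmin(fit))
print(frame_max, frame_min)  # extremes near the quarter and three-quarter cycle
```

The frames of maximal and minimal fitted diameter serve as proxies for the two cardiac phases at which aIMT is then measured.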
Whole vertebral bone segmentation method with a statistical intensity-shape model based approach
NASA Astrophysics Data System (ADS)
Hanaoka, Shouhei; Fritscher, Karl; Schuler, Benedikt; Masutani, Yoshitaka; Hayashi, Naoto; Ohtomo, Kuni; Schubert, Rainer
2011-03-01
An automatic segmentation algorithm for the vertebrae in human body CT images is presented. In particular, we focused on constructing and utilizing four different statistical intensity-shape combined models for the cervical, upper thoracic, lower thoracic, and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable-model-based initial segmentation method and a statistical shape-intensity-model-based precise segmentation method. The former is used as a pre-processing step to detect the position and orientation of each vertebra, which determines the initial condition for the latter precise segmentation method. The precise segmentation method needs prior knowledge of both the intensities and the shapes of the objects. After PCA analysis of such shape-intensity expressions obtained from training image sets, vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed by fitting this parametric model to the target image by maximum a posteriori estimation, combined with the geodesic active contour method. In experiments using 10 cases, the initial segmentation was successful in 6 cases and only partially failed in 4 cases (2 in the cervical area and 2 in the lumbo-sacral). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777, and 0.939 mm for the cervical, upper thoracic, lower thoracic, and lumbar spines, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed a fair performance for cervical, thoracic and lumbar vertebrae.
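The parametric model at the heart of the precise segmentation step, an instance expressed as the mean shape-intensity vector plus a linear combination of principal component vectors, reduces to a few lines. The toy 4-dimensional vectors below are invented purely for illustration:

```python
def reconstruct(mean, components, coeffs):
    """Parametric model: instance = mean + sum_k coeffs[k] * components[k].
    `mean` and each component are flat vectors over shape+intensity entries."""
    out = list(mean)
    for b, pc in zip(coeffs, components):
        for j, v in enumerate(pc):
            out[j] += b * v
    return out

# Toy 4-dimensional shape-intensity vector with two principal modes
mean = [1.0, 2.0, 3.0, 4.0]
pcs = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
x = reconstruct(mean, pcs, [0.5, -1.0])  # [1.5, 1.0, 3.0, 4.0]
```

The MAP fitting described in the abstract then searches over the coefficients (and a pose transform) to best explain the target image.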
Three-dimensional tracking for efficient fire fighting in complex situations
NASA Astrophysics Data System (ADS)
Akhloufi, Moulay; Rossi, Lucile
2009-05-01
Each year, hundreds of millions of hectares of forest burn, causing human and economic losses. For efficient fire fighting, personnel on the ground need tools that can predict fire front propagation. In this work, we present a new technique for automatically tracking fire spread in three-dimensional space. The proposed approach uses a stereo system to extract a 3D shape from fire images. A new segmentation technique is proposed that permits the extraction of fire regions in complex unstructured scenes. It works in the visible spectrum and combines information extracted from the YUV and RGB color spaces. Unlike other techniques, our algorithm does not require prior knowledge of the scene. The resulting fire regions are classified into different homogeneous zones using clustering techniques. Contours are then extracted, and a feature detection algorithm is used to detect interest points such as local maxima and corners. Points extracted from the stereo images are then used to compute the 3D shape of the fire front. The resulting data permit building the fire volume. The final model is used to compute important spatial and temporal fire characteristics such as spread dynamics, local orientation, and heading direction. Tests conducted on the ground show the efficiency of the proposed scheme, which is being integrated with a mathematical fire spread model in order to predict and anticipate fire behaviour during fire fighting. Also of interest to fire fighters is the proposed automatic segmentation technique, which can be used for early detection of fire in complex scenes.
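A per-pixel rule combining RGB and YUV information, as the abstract describes, might look like the sketch below. The RGB-to-YUV conversion uses the standard analog-video coefficients, but the red-dominance ordering rule and the thresholds are hypothetical placeholders, not the published ones:

```python
def rgb_to_yuv(r, g, b):
    """Standard YUV conversion (BT.601-style coefficients)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

def is_fire_pixel(r, g, b, y_min=128, v_min=20):
    """Hypothetical fire rule: flames are red-dominant (R > G > B),
    bright (high luminance Y), and have a strong red chrominance V."""
    y, u, v = rgb_to_yuv(r, g, b)
    return r > g > b and y > y_min and v > v_min

bright_flame = is_fire_pixel(250, 160, 40)   # bright orange pixel
vegetation = is_fire_pixel(60, 120, 60)      # green background pixel
```

The clustering into homogeneous zones and the stereo reconstruction would then operate on the connected regions this mask produces.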
Gao, Shan; van 't Klooster, Ronald; Kitslaar, Pieter H; Coolen, Bram F; van den Berg, Alexandra M; Smits, Loek P; Shahzad, Rahil; Shamonin, Denis P; de Koning, Patrick J H; Nederveen, Aart J; van der Geest, Rob J
2017-10-01
The quantification of vessel wall morphology and plaque burden requires vessel segmentation, which is generally performed by manual delineations. The purpose of our work is to develop and evaluate a new 3D model-based approach for carotid artery wall segmentation from dual-sequence MRI. The proposed method segments the lumen and outer wall surfaces including the bifurcation region by fitting a subdivision surface constructed hierarchical-tree model to the image data. In particular, a hybrid segmentation which combines deformable model fitting with boundary classification was applied to extract the lumen surface. The 3D model ensures the correct shape and topology of the carotid artery, while the boundary classification uses combined image information of 3D TOF-MRA and 3D BB-MRI to promote accurate delineation of the lumen boundaries. The proposed algorithm was validated on 25 subjects (48 arteries) including both healthy volunteers and atherosclerotic patients with 30% to 70% carotid stenosis. For both lumen and outer wall border detection, our result shows good agreement between manually and automatically determined contours, with contour-to-contour distance less than 1 pixel as well as Dice overlap greater than 0.87 at all different carotid artery sections. The presented 3D segmentation technique has demonstrated the capability of providing vessel wall delineation for 3D carotid MRI data with high accuracy and limited user interaction. This brings benefits to large-scale patient studies for assessing the effect of pharmacological treatment of atherosclerosis by reducing image analysis time and bias between human observers. © 2017 American Association of Physicists in Medicine.
MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.
Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen
2014-03-21
Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.
MRI brain tumor segmentation and necrosis detection using adaptive Sobolev snakes
NASA Astrophysics Data System (ADS)
Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen
2014-03-01
Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.
NASA Astrophysics Data System (ADS)
Diaz, Kristians; Castañeda, Benjamín; Miranda, César; Lavarello, Roberto; Llanos, Alejandro
2010-03-01
We developed a protocol for the acquisition of digital images and an algorithm for a color-based automatic segmentation of cutaneous lesions of Leishmaniasis. The protocol for image acquisition provides control over the working environment to manipulate brightness, lighting and undesirable shadows on the injury using indirect lighting. Also, this protocol was used to accurately calculate the area of the lesion expressed in mm2 even in curved surfaces by combining the information from two consecutive images. Different color spaces were analyzed and compared using ROC curves in order to determine the color layer with the highest contrast between the background and the wound. The proposed algorithm is composed of three stages: (1) Location of the wound determined by threshold and mathematical morphology techniques to the H layer of the HSV color space, (2) Determination of the boundaries of the wound by analyzing the color characteristics in the YIQ space based on masks (for the wound and the background) estimated from the first stage, and (3) Refinement of the calculations obtained on the previous stages by using the discrete dynamic contours algorithm. The segmented regions obtained with the algorithm were compared with manual segmentations made by a medical specialist. Broadly speaking, our results support that color provides useful information during segmentation and measurement of wounds of cutaneous Leishmaniasis. Results from ten images showed 99% specificity, 89% sensitivity, and 98% accuracy.
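Stage (1), locating the wound by thresholding the H layer of the HSV color space, might look like the following sketch using Python's stdlib colorsys. The hue band h_max is an assumed value, and the mathematical-morphology clean-up step from the paper is omitted:

```python
import colorsys

def wound_mask(rgb_pixels, h_max=0.08):
    """Flag pixels whose HSV hue falls in the reddish band (hue near 0,
    wrapping around 1). h_max is a hypothetical threshold, not the paper's."""
    mask = []
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        mask.append(h <= h_max or h >= 1 - h_max)
    return mask

# Reddish lesion pixel vs. greenish skin/background pixel
labels = wound_mask([(200, 60, 50), (90, 140, 80)])  # [True, False]
```

Stages (2) and (3) would refine this coarse mask using YIQ color statistics and discrete dynamic contours, as described above.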
Clustering Of Left Ventricular Wall Motion Patterns
NASA Astrophysics Data System (ADS)
Bjelogrlic, Z.; Jakopin, J.; Gyergyek, L.
1982-11-01
A method for the detection of wall regions with similar motion was presented. A model based on local direction information was used to measure left ventricular wall motion from a cineangiographic sequence. Three time functions were used to define segmental motion patterns: the distance of a ventricular contour segment from the mean contour, the velocity of a segment, and its acceleration. Motion patterns were clustered by the UPGMA algorithm and by an algorithm based on the K-nearest neighbor classification rule.
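The three time functions can be derived from the per-frame contour-segment distances by finite differencing, as in this sketch (the frame interval dt and the sample distances are invented):

```python
def motion_features(distances, dt=1.0):
    """Build the three time functions for a segmental motion pattern:
    distance from the mean contour, velocity (first difference),
    and acceleration (second difference)."""
    velocity = [(b - a) / dt for a, b in zip(distances, distances[1:])]
    acceleration = [(b - a) / dt for a, b in zip(velocity, velocity[1:])]
    return distances, velocity, acceleration

d, v, a = motion_features([0.0, 1.0, 3.0, 6.0])
# v = [1.0, 2.0, 3.0]; a = [1.0, 1.0]
```

The resulting feature vectors per segment are what the UPGMA or K-nearest-neighbor clustering would then operate on.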
Three-dimensional reconstruction from serial sections in PC-Windows platform by using 3D_Viewer.
Xu, Yi-Hua; Lahvis, Garet; Edwards, Harlene; Pitot, Henry C
2004-11-01
Three-dimensional (3D) reconstruction from serial sections allows identification of objects of interest in 3D and clarifies the relationship among these objects. 3D_Viewer, developed in our laboratory for this purpose, has four major functions: image alignment, movie frame production, movie viewing, and shift-overlay image generation. Color images captured from serial sections were aligned; then the contours of objects of interest were highlighted in a semi-automatic manner. These 2D images were then automatically stacked at different viewing angles, and their composite images on a projected plane were recorded by an image transform-shift-overlay technique. These composition images are used in the object-rotation movie show. The design considerations of the program and the procedures used for 3D reconstruction from serial sections are described. This program, with a digital image-capture system, a semi-automatic contours highlight method, and an automatic image transform-shift-overlay technique, greatly speeds up the reconstruction process. Since images generated by 3D_Viewer are in a general graphic format, data sharing with others is easy. 3D_Viewer is written in MS Visual Basic 6, obtainable from our laboratory on request.
Ping, Lichuan; Wang, Ningyuan; Tang, Guofang; Lu, Thomas; Yin, Li; Tu, Wenhe; Fu, Qian-Jie
2017-09-01
Because of limited spectral resolution, Mandarin-speaking cochlear implant (CI) users have difficulty perceiving fundamental frequency (F0) cues that are important to lexical tone recognition. To improve Mandarin tone recognition in CI users, we implemented and evaluated a novel real-time algorithm (C-tone) to enhance the amplitude contour, which is strongly correlated with the F0 contour. The C-tone algorithm was implemented in clinical processors and evaluated in eight users of the Nurotron NSP-60 CI system. Subjects were given 2 weeks of experience with C-tone. Recognition of Chinese tones, monosyllables, and disyllables in quiet was measured with and without the C-tone algorithm. Subjective quality ratings were also obtained for C-tone. After 2 weeks of experience with C-tone, there were small but significant improvements in recognition of lexical tones, monosyllables, and disyllables (P < 0.05 in all cases). Among lexical tones, the largest improvements were observed for Tone 3 (falling-rising) and the smallest for Tone 4 (falling). Improvements with C-tone were greater for disyllables than for monosyllables. Subjective quality ratings showed no strong preference for or against C-tone, except for perception of own voice, where C-tone was preferred. The real-time C-tone algorithm provided small but significant improvements for speech performance in quiet with no change in sound quality. Pre-processing algorithms to reduce noise and better real-time F0 extraction would improve the benefits of C-tone in complex listening environments. Chinese CI users' speech recognition in quiet can be significantly improved by modifying the amplitude contour to better resemble the F0 contour.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, S; Robinson, A; Kiess, A
2015-06-15
Purpose: The purpose of this study is to develop an accurate and effective technique to predict and monitor volume changes of the tumor and organs at risk (OARs) from daily cone-beam CTs (CBCTs). Methods: While CBCT is typically used to minimize the patient setup error, its poor image quality impedes accurate monitoring of daily anatomical changes in radiotherapy. Reconstruction artifacts in CBCT often cause undesirable errors in registration-based contour propagation from the planning CT, a conventional way to estimate anatomical changes. To improve the registration and segmentation accuracy, we developed a new deformable image registration (DIR) that iteratively corrects CBCT intensities using slice-based histogram matching during the registration process. Three popular DIR algorithms (hierarchical B-spline, demons, optical flow) augmented by the intensity correction were implemented on a graphics processing unit for efficient computation, and their performances were evaluated on six head and neck (HN) cancer cases. Four trained scientists manually contoured nodal gross tumor volume (GTV) on the planning CT and every other fraction CBCTs for each case, to which the propagated GTV contours by DIR were compared. The performance was also compared with commercial software, VelocityAI (Varian Medical Systems Inc.). Results: Manual contouring showed significant variations, [-76, +141]% from the mean of all four sets of contours. The volume differences (mean±std in cc) between the average manual segmentation and four automatic segmentations are 3.70±2.30 (B-spline), 1.25±1.78 (demons), 0.93±1.14 (optical flow), and 4.39±3.86 (VelocityAI). In comparison to the average volume of the manual segmentations, the proposed approach significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the conventional mutual information based method (VelocityAI).
Conclusion: The proposed CT-CBCT registration with local CBCT intensity correction can accurately predict the tumor volume change with reduced errors. Although demonstrated only on HN nodal GTVs, the results imply improved accuracy for other critical structures. This work was supported by NIH/NCI under grant R42CA137886.
Multiscale approach to contour fitting for MR images
NASA Astrophysics Data System (ADS)
Rueckert, Daniel; Burger, Peter
1996-04-01
We present a new multiscale contour fitting process which combines information about the image and the contour of the object at different levels of scale. The algorithm is based on energy minimizing deformable models but avoids some of the problems associated with these models. The segmentation algorithm starts by constructing a linear scale-space of an image through convolution of the original image with a Gaussian kernel at different levels of scale, where the scale corresponds to the standard deviation of the Gaussian kernel. At high levels of scale large scale features of the objects are preserved while small scale features, like object details as well as noise, are suppressed. In order to maximize the accuracy of the segmentation, the contour of the object of interest is then tracked in scale-space from coarse to fine scales. We propose a hybrid multi-temperature simulated annealing optimization to minimize the energy of the deformable model. At high levels of scale the SA optimization is started at high temperatures, enabling the SA optimization to find a global optimal solution. At lower levels of scale the SA optimization is started at lower temperatures (at the lowest level the temperature is close to 0). This enforces a more deterministic behavior of the SA optimization at lower scales and leads to an increasingly local optimization as high energy barriers cannot be crossed. The performance and robustness of the algorithm have been tested on spin-echo MR images of the cardiovascular system. The task was to segment the ascending and descending aorta in 15 datasets of different individuals in order to measure regional aortic compliance. The results show that the algorithm is able to provide more accurate segmentation results than the classic contour fitting process and is at the same time very robust to noise and initialization.
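The linear scale-space construction, convolving the input with Gaussians of increasing standard deviation, can be sketched in 1D as a simplified stand-in for the 2D image case. Border handling by clamping to the edge sample is an arbitrary choice here:

```python
import math

def gaussian_kernel(sigma, radius=None):
    """Normalized 1D Gaussian kernel with std. dev. sigma."""
    radius = radius or max(1, int(3 * sigma))
    k = [math.exp(-0.5 * (x / sigma) ** 2) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(signal, sigma):
    """One scale-space level: convolution with a Gaussian of std. dev. sigma
    (edges handled by clamping to the border sample)."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

# Coarse-to-fine: the contour would be tracked at decreasing sigma
signal = [0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0]
levels = [smooth(signal, s) for s in (2.0, 1.0, 0.5)]
```

The coarse-to-fine tracking and the temperature schedule of the simulated annealing would then be tied to the same sigma ladder, as the abstract describes.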
Probabilistic location estimation of acoustic emission sources in isotropic plates with one sensor
NASA Astrophysics Data System (ADS)
Ebrahimkhanlou, Arvin; Salamone, Salvatore
2017-04-01
This paper presents a probabilistic acoustic emission (AE) source localization algorithm for isotropic plate structures. The proposed algorithm requires only one sensor and uniformly monitors the entire area of such plates without any blind zones. In addition, it takes a probabilistic approach and quantifies localization uncertainties. The algorithm combines a modal acoustic emission (MAE) and a reflection-based technique to obtain information pertaining to the location of AE sources. To estimate confidence contours for the location of sources, uncertainties are quantified and propagated through the two techniques. The approach was validated using standard pencil lead break (PLB) tests on an aluminum plate. The results demonstrate that the proposed source localization algorithm successfully estimates confidence contours for the location of AE sources.
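One ingredient of single-sensor modal acoustic emission localization is the arrival-time lag between a fast and a slow Lamb-wave mode, from which the sensor-to-source distance follows directly. The group velocities below are illustrative placeholders, not the paper's values; in practice they depend on frequency and plate thickness:

```python
def source_distance(dt, v_fast=5400.0, v_slow=3100.0):
    """Distance (m) from the sensor given the arrival-time lag dt (s)
    between the fast (e.g. S0) and slow (e.g. A0) Lamb-wave modes:
    d/v_slow - d/v_fast = dt  =>  d = dt * v_fast * v_slow / (v_fast - v_slow).
    Velocities are assumed values for an aluminum plate."""
    return dt * v_fast * v_slow / (v_fast - v_slow)

d = source_distance(50e-6)  # a 50 microsecond mode lag
```

A probabilistic version, as in the paper, would propagate uncertainty in dt and in the velocities into a distribution over d rather than a point estimate.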
Research on cutting path optimization of sheet metal parts based on ant colony algorithm
NASA Astrophysics Data System (ADS)
Wu, Z. Y.; Ling, H.; Li, L.; Wu, L. H.; Liu, N. B.
2017-09-01
In view of the disadvantages of the current cutting path optimization methods of sheet metal parts, a new method based on ant colony algorithm was proposed in this paper. The cutting path optimization problem of sheet metal parts was taken as the research object. The essence and optimization goal of the optimization problem were presented. The traditional serial cutting constraint rule was improved. The cutting constraint rule with cross cutting was proposed. The contour lines of parts were discretized and the mathematical model of cutting path optimization was established. Thus the problem was converted into the selection problem of contour lines of parts. Ant colony algorithm was used to solve the problem. The principle and steps of the algorithm were analyzed.
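Once the contour lines are discretized, the cutting-order problem becomes TSP-like and can be attacked with a toy ant-colony search such as the sketch below. The parameters, the single-pierce-point simplification, and the pheromone update rule are all illustrative choices, not the paper's:

```python
import math
import random

def aco_order(points, n_ants=20, n_iter=50, evap=0.5, seed=1):
    """Toy ant-colony search for a cutting order over part contours, each
    reduced to a single pierce point; cost is total rapid-travel distance."""
    random.seed(seed)
    n = len(points)
    dist = [[math.dist(p, q) or 1e-9 for q in points] for p in points]
    tau = [[1.0] * n for _ in range(n)]          # pheromone trails
    best, best_len = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                w = [tau[i][j] / dist[i][j] for j in cand]  # pheromone / distance
                tour.append(random.choices(cand, weights=w)[0])
            length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
            if length < best_len:
                best, best_len = tour, length
            for a, b in zip(tour, tour[1:]):     # deposit on used edges
                tau[a][b] += 1.0 / length
        for row in tau:                          # evaporation
            row[:] = [evap * t for t in row]
    return best, best_len

order, length = aco_order([(0, 0), (0, 1), (1, 1), (1, 0)])
```

A faithful implementation would also encode the improved constraint rule with cross cutting as admissibility checks on each candidate move.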
Marchi, Luciana Manzotti De; Pini, Núbia Inocencya Pavesi; Hayacibara, Roberto Massayuki; Silva, Rafael Santos; Pascotto, Renata Corrêa
2012-01-01
To evaluate functional and periodontal aspects in patients with unilateral or bilateral congenitally missing maxillary lateral incisors, treated with either implants or space closure and tooth re-contouring. The sample consisted of 68 volunteers, divided into 3 groups: SCR - space closure and tooth re-contouring with composite resin (n = 26); SOI – implants placed in the area of agenesis (n = 20); and CG - control group (n = 22). A modified Helkimo questionnaire and the Research Diagnostic Criteria for Temporomandibular Disorders were used by a single, previously calibrated evaluator to assess signs and symptoms of temporomandibular joint disorder. The periodontal assessment involved the following aspects: plaque index, bleeding upon probing, pocket depth greater than 3 mm, gingival recession, abfraction, periodontal biotype and papilla index. The data were analyzed using Fisher's exact test and the nonparametric Mann-Whitney and Kruskal-Wallis tests (α=.05). No differences in periodontal status were found between treatments. None of the groups were associated with signs and symptoms of temporomandibular joint disorder. Both treatment alternatives for patients with congenitally missing maxillary lateral incisors were satisfactory and achieved functional and periodontal results similar to those of the control group. PMID:23346262
Zhang, Pin; Liang, Yanmei; Chang, Shengjiang; Fan, Hailun
2013-08-01
Accurate segmentation of renal tissues in abdominal computed tomography (CT) image sequences is an indispensable step for computer-aided diagnosis and pathology detection in clinical applications. In this study, the goal is to develop a radiology tool to extract renal tissues in CT sequences for the management of renal diagnosis and treatments. In this paper, the authors propose a new graph-cuts-based active contours model with an adaptive narrow-band width for kidney extraction in CT image sequences. Based on graph cuts and contextual continuity, the segmentation is carried out slice by slice. In the first stage, the middle two adjacent slices in a CT sequence are segmented interactively based on the graph cuts approach. Subsequently, the deformable contour evolves toward the renal boundaries by the proposed model for the kidney extraction of the remaining slices. In this model, the energy function combining boundary with regional information is optimized in the constructed graph, and the adaptive search range is determined by contextual continuity and the object size. In addition, in order to reduce the complexity of the min-cut computation, the nodes in the graph only have n-links, giving fewer edges. A total of 30 CT image sequences with normal and pathological renal tissues were used to evaluate the accuracy and effectiveness of our method. The experimental results reveal that the average Dice similarity coefficient of these image sequences ranges from 92.37% to 95.71%, and the corresponding standard deviation for each dataset is from 2.18% to 3.87%. In addition, the average automatic segmentation time for one kidney in each slice is about 0.36 s. Integrating the graph-cuts-based active contours model with contextual continuity, the algorithm takes advantage of energy minimization and the characteristics of image sequences. The proposed method achieves effective results for kidney segmentation in CT sequences.
NASA Astrophysics Data System (ADS)
Sands, Michelle M.; Borrego, David; Maynard, Matthew R.; Bahadori, Amir A.; Bolch, Wesley E.
2017-11-01
One of the hazards faced by space crew members in low-Earth orbit or in deep space is exposure to ionizing radiation. It has been shown previously that while differences in organ-specific and whole-body risk estimates due to body size variations are small for highly-penetrating galactic cosmic rays, large differences in these quantities can result from exposure to shorter-range trapped proton or solar particle event radiations. For this reason, it is desirable to use morphometrically accurate computational phantoms representing each astronaut for a risk analysis, especially in the case of a solar particle event. An algorithm was developed to automatically sculpt and scale the UF adult male and adult female hybrid reference phantom to the individual outer body contour of a given astronaut. This process begins with the creation of a laser-measured polygon mesh model of the astronaut's body contour. Using the auto-scaling program and selecting several anatomical landmarks, the UF adult male or female phantom is adjusted to match the laser-measured outer body contour of the astronaut. A dosimetry comparison study was conducted to compare the organ dose accuracy of both the autoscaled phantom and that based upon a height-weight matched phantom from the UF/NCI Computational Phantom Library. Monte Carlo methods were used to simulate the environment of the August 1972 and February 1956 solar particle events. Using a series of individual-specific voxel phantoms as a local benchmark standard, autoscaled phantom organ dose estimates were shown to provide a 1% and 10% improvement in organ dose accuracy for a population of females and males, respectively, as compared to organ doses derived from height-weight matched phantoms from the UF/NCI Computational Phantom Library. 
In addition, this slight improvement in organ dose accuracy from the autoscaled phantoms is accompanied by reduced computer storage requirements and a more rapid method for individualized phantom generation when compared to the UF/NCI Computational Phantom Library.
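The auto-scaling idea, adjusting the reference phantom so its landmarks match the laser-measured body contour, can be caricatured as a per-axis affine rescale. This is a gross simplification of the actual sculpt-and-scale procedure, and all coordinates below are invented:

```python
def autoscale(points, src_marks, dst_marks):
    """Rescale phantom surface points so that the per-axis extents of the
    source landmarks map onto those of the target (measured) landmarks."""
    maps = []
    for axis in range(3):
        s_lo = min(p[axis] for p in src_marks)
        s_hi = max(p[axis] for p in src_marks)
        d_lo = min(p[axis] for p in dst_marks)
        d_hi = max(p[axis] for p in dst_marks)
        maps.append(((d_hi - d_lo) / (s_hi - s_lo), s_lo, d_lo))
    return [tuple(sc * (p[a] - s0) + d0 for a, (sc, s0, d0) in enumerate(maps))
            for p in points]

# Reference phantom landmark box vs. a wider, taller subject
src = [(0, 0, 0), (40, 20, 170)]
dst = [(0, 0, 0), (50, 20, 185)]
scaled = autoscale([(20, 10, 85)], src, dst)  # midpoint maps to midpoint
```

The real method additionally sculpts the outer contour rather than applying a uniform map, which is why it outperforms simple height-weight matching.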
NASA Technical Reports Server (NTRS)
Hess, Ronald A.
1990-01-01
A collection of technical papers is presented covering modeling of pilot interaction with automated digital avionics systems and guidance and control algorithms for contour and nap-of-the-earth flight. The titles of the papers are as follows: (1) Automation effects in a multiloop manual control system; (2) A qualitative model of human interaction with complex dynamic systems; (3) Generalized predictive control of dynamic systems; (4) An application of generalized predictive control to rotorcraft terrain-following flight; (5) Self-tuning generalized predictive control applied to terrain-following flight; and (6) Precise flight path control using a predictive algorithm.
Automatic joint alignment measurements in pre- and post-operative long leg standing radiographs.
Goossen, A; Weber, G M; Dries, S P M
2012-01-01
For diagnosis or treatment assessment of knee joint osteoarthritis it is required to measure bone morphometry from radiographic images. We propose a method for automatic measurement of joint alignment from pre-operative as well as post-operative radiographs. In a two step approach we first detect and segment any implants or other artificial objects within the image. We exploit physical characteristics and avoid prior shape information to cope with the vast amount of implant types. Subsequently, we exploit the implant delineations to adapt the initialization and adaptation phase of a dedicated bone segmentation scheme using deformable template models. Implant and bone contours are fused to derive the final joint segmentation and thus the alignment measurements. We evaluated our method on clinical long leg radiographs and compared both the initialization rate, corresponding to the number of images successfully processed by the proposed algorithm, and the accuracy of the alignment measurement. Ground truth has been generated by an experienced orthopedic surgeon. For comparison a second reader reevaluated the measurements. Experiments on two sets of 70 and 120 digital radiographs show that 92% of the joints could be processed automatically and the derived measurements of the automatic method are comparable to a human reader for pre-operative as well as post-operative images with a typical error of 0.7° and correlations of r = 0.82 to r = 0.99 with the ground truth. The proposed method allows deriving objective measures of joint alignment from clinical radiographs. Its accuracy and precision are on par with a human reader for all evaluated measurements.
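One of the joint alignment measurements, the hip-knee-ankle angle, is simply the angle at the knee between the two bone axes; a sketch from landmark coordinates follows (the landmark values are made up, and a real pipeline would derive them from the segmented bone and implant contours):

```python
import math

def hka_angle(hip, knee, ankle):
    """Angle (degrees) at the knee between the femoral axis (hip->knee)
    and the tibial axis (knee->ankle); 180 means a perfectly straight leg."""
    v1 = (hip[0] - knee[0], hip[1] - knee[1])
    v2 = (ankle[0] - knee[0], ankle[1] - knee[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / n))

straight = hka_angle((0, 0), (0, 40), (0, 80))   # collinear landmarks
deviated = hka_angle((0, 0), (3, 40), (0, 80))   # slight varus/valgus offset
```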
Parmar, Chintan; Blezek, Daniel; Estepar, Raul San Jose; Pieper, Steve; Kim, John; Aerts, Hugo J. W. L.
2017-01-01
Purpose: Accurate segmentation of lung nodules is crucial in the development of imaging biomarkers for predicting malignancy of the nodules. Manual segmentation is time consuming and affected by inter-observer variability. We evaluated the robustness and accuracy of a publicly available semiautomatic segmentation algorithm that is implemented in the 3D Slicer Chest Imaging Platform (CIP) and compared it with the performance of manual segmentation. Methods: CT images of 354 manually segmented nodules were downloaded from the LIDC database. Four radiologists performed the manual segmentation and assessed various nodule characteristics. The semiautomatic CIP segmentation was initialized using the centroid of the manual segmentations, thereby generating four contours for each nodule. The robustness of both segmentation methods was assessed using the region of uncertainty (δ) and the Dice similarity index (DSI). The robustness of the segmentation methods was compared using the Wilcoxon signed-rank test (p_Wilcoxon < 0.05). The Dice similarity index between the manual and CIP segmentations (DSI_agree) was computed to estimate the accuracy of the semiautomatic contours. Results: The median computational time of the CIP segmentation was 10 s. The median CIP and manually segmented volumes were 477 ml and 309 ml, respectively. CIP segmentations were significantly more robust than manual segmentations (median δ_CIP = 14 ml, median DSI_CIP = 99% vs. median δ_manual = 222 ml, median DSI_manual = 82%) with p_Wilcoxon ≈ 10^−16. The agreement between CIP and manual segmentations had a median DSI_agree of 60%. While 13% (47/354) of the nodules did not require any manual adjustment, minor to substantial manual adjustments were needed for 87% (305/354) of the nodules. CIP segmentations were observed to perform poorly (median DSI_agree ≈ 50%) for non- and sub-solid nodules with subtle appearances and poorly defined boundaries.
Conclusion: Semi-automatic CIP segmentation can potentially reduce the physician workload for 13% of nodules owing to its computational efficiency and superior stability compared to manual segmentation. Although manual adjustment is needed for many cases, CIP segmentation provides a preliminary contour for physicians as a starting point. PMID:28594880
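The Dice similarity index used throughout this evaluation is straightforward to compute from voxel sets; a minimal sketch (the coordinates are invented):

```python
def dice(a, b):
    """Dice similarity index between two segmentations given as sets of
    voxel coordinates: 2|A ∩ B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty segmentations agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))

manual = {(0, 0), (0, 1), (1, 0), (1, 1)}
auto = {(0, 1), (1, 0), (1, 1), (2, 1)}
overlap = dice(manual, auto)  # 2*3 / (4+4) = 0.75
```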
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, C; Yin, Y
2015-06-15
Purpose: A method using four-dimensional (4D) PET/CT in the design of radiation treatment planning was proposed, and the changes in target volume and radiation dose distribution relative to standard three-dimensional (3D) PET/CT were examined. Methods: A target deformable registration method was used in which the patient's whole respiration process was considered and the effect of respiration motion was minimized when designing the radiotherapy plan. The gross tumor volume of a non-small-cell lung cancer was contoured on the 4D FDG-PET/CT and 3D PET/CT scans using two different techniques: manual contouring by an experienced radiation oncologist using a predetermined protocol, and an automatic technique using a constant threshold of standardized uptake value (SUV) greater than 2.5. The target volume and radiotherapy dose distribution between VOL3D and VOL4D were analyzed. Results: Over all phases, the average automatically and manually contoured GTV volumes were 18.61 cm3 (range, 16.39–22.03 cm3) and 31.29 cm3 (range, 30.11–35.55 cm3), respectively. The automatically and manually contoured volumes of the merged IGTV were 27.82 cm3 and 49.37 cm3, respectively. For the manual contours, compared to the 3D plan, the mean doses for the left, right, and total lung in the 4D plan showed average decreases of 21.55%, 15.17%, and 15.86%, respectively; the maximum dose to the spinal cord showed an average decrease of 2.35%. For the automatic contours, the mean doses for the left, right, and total lung showed average decreases of 23.48%, 16.84%, and 17.44%, respectively; the maximum dose to the spinal cord showed an average decrease of 1.68%. Conclusion: In comparison to 3D PET/CT, 4D PET/CT may better define the extent of moving tumors and reduce the contoured tumor volume, thereby optimizing radiation treatment planning for lung tumors.
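The automatic SUV-thresholding technique reduces to masking voxels above the cutoff and scaling by the voxel volume; a 2D toy sketch follows (the slice values and voxel size are invented, while the SUV > 2.5 threshold is the one named above):

```python
def suv_gtv_volume(suv_map, voxel_cc, threshold=2.5):
    """Count voxels with SUV above the threshold and return their
    total volume in cc. suv_map is a 2D list standing in for one slice."""
    n = sum(1 for row in suv_map for v in row if v > threshold)
    return n * voxel_cc

phase = [[0.8, 3.1, 4.0],
         [1.2, 2.6, 0.9]]
vol = suv_gtv_volume(phase, voxel_cc=0.05)  # 3 voxels exceed 2.5 -> ~0.15 cc
```

For the 4D case, applying this per respiratory phase and taking the union of the masks yields the merged IGTV described in the results.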
Re-Dimensional Thinking in Earth Science: From 3-D Virtual Reality Panoramas to 2-D Contour Maps
ERIC Educational Resources Information Center
Park, John; Carter, Glenda; Butler, Susan; Slykhuis, David; Reid-Griffin, Angelia
2008-01-01
This study examines the relationship of gender and spatial perception on student interactivity with contour maps and non-immersive virtual reality. Eighteen eighth-grade students elected to participate in a six-week activity-based course called "3-D GeoMapping." The course included nine days of activities related to topographic mapping.…
Soccer player recognition by pixel classification in a hybrid color space
NASA Astrophysics Data System (ADS)
Vandenbroucke, Nicolas; Macaire, Ludovic; Postaire, Jack-Gerard
1997-08-01
Soccer is a very popular sport all over the world. Coaches and sport commentators need accurate information about soccer games, especially about the players' behavior. This information can be gathered by inspectors who watch the soccer match and manually report the actions of the players involved in the principal phases of the game. Generally, these inspectors focus their attention on the few players standing near the ball and do not report the motion of all the other players. So it seems desirable to design a system which automatically tracks all the players in real time. That is why we propose to automatically track each player through the successive color images of the sequences acquired by a fixed color camera. Each player present in the image is modeled by an active contour model, or snake. When, during the soccer match, a player is hidden by another, the snakes which track these two players merge. It then becomes impossible to track the players unless the snakes are interactively re-initialized. Fortunately, in most cases, the two players do not belong to the same team. That is why we present an algorithm which recognizes the teams of the players by pixel classification. First, pixels representing the soccer ground must be withdrawn before considering the players themselves. To eliminate these pixels, the color characteristics of the ground are determined interactively. In a second step, dealing with windows containing only one player of one team, the color features which yield the best discrimination between the two teams are selected. Thanks to these color features, the pixels associated with the players of the two teams form two separated clusters in a color space. In fact, there are many color representation systems, and it is worthwhile to evaluate the features which provide the best separation between the two classes of pixels according to the players' soccer suits.
Finally, the classification process for image segmentation is based on the three most discriminating color features, which define the coordinates of each pixel in a 'hybrid color space.' Thanks to this hybrid color representation, each pixel can be assigned to one of the two classes by minimum-distance classification.
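Minimum-distance classification in a three-feature hybrid space amounts to assigning each pixel to the nearest class mean. A minimal sketch with invented team-color means (the paper selects the actual discriminating features per match):

```python
import math

def classify_pixel(pixel, class_means):
    """Assign a pixel (three hybrid color features) to the nearest class mean."""
    return min(class_means,
               key=lambda c: math.dist(pixel, class_means[c]))

# Invented class means in an assumed three-feature hybrid space
means = {"team_A": (0.9, 0.1, 0.2), "team_B": (0.1, 0.1, 0.9)}
label = classify_pixel((0.8, 0.2, 0.3), means)  # closest to team_A's mean
```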
A spectral k-means approach to bright-field cell image segmentation.
Bradbury, Laura; Wan, Justin W L
2010-01-01
Automatic segmentation of bright-field cell images is important to cell biologists, but difficult to accomplish due to the complex nature of cells in bright-field images (poor contrast, broken halo, missing boundaries). Standard approaches such as level set segmentation and active contours work well for fluorescent images, where cells appear round, but become less effective when optical artifacts such as halos exist in bright-field images. In this paper, we present a robust segmentation method which combines spectral and k-means clustering techniques to locate cells in bright-field images. This approach models an image as a matrix graph and segments different regions of the image by computing the appropriate eigenvectors of the matrix graph and applying the k-means algorithm. We illustrate the effectiveness of the method with segmentation results for C2C12 (muscle) cells in bright-field images.
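The k-means half of that pipeline can be sketched on scalar intensities; the spectral-embedding step (eigenvectors of the graph matrix) is omitted here, and the intensity values are invented:

```python
def kmeans_1d(values, iters=20):
    """Two-cluster Lloyd's algorithm on scalar intensities (toy sketch)."""
    centers = [min(values), max(values)]  # simple init for k = 2
    for _ in range(iters):
        clusters = [[], []]
        for v in values:
            idx = 0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
            clusters[idx].append(v)
        # recompute each center as its cluster mean (keep old center if empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Invented bright-field intensities: dim cell interior vs bright halo
centers = kmeans_1d([0.1, 0.15, 0.12, 0.8, 0.85, 0.9])
```

In the paper's method the clustering runs on eigenvector coordinates rather than raw intensities, which is what lets it cope with broken halos and missing boundaries.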
Capturing planar shapes by approximating their outlines
NASA Astrophysics Data System (ADS)
Sarfraz, M.; Riyazuddin, M.; Baig, M. H.
2006-05-01
A non-deterministic evolutionary approach for approximating the outlines of planar shapes has been developed. Non-uniform rational B-splines (NURBS) are used as the underlying curve approximation scheme, with simulated annealing as the evolutionary methodology. In addition to independent studies of the optimization of the weight and knot parameters of the NURBS, a separate scheme has been developed for the simultaneous optimization of weights and knots. The optimized NURBS models have been fitted over the contour data of the planar shapes for the final, automatic output. The output results are visually pleasing with respect to the threshold provided by the user. A web-based system has also been developed so that users worldwide can visualize the output over the Internet, with the freedom to set the various input parameters of the algorithm.
Segmentation and classification of cell cycle phases in fluorescence imaging.
Ersoy, Ilker; Bunyak, Filiz; Chagin, Vadim; Cardoso, M Christina; Palaniappan, Kannappan
2009-01-01
Current chemical biology methods for studying the spatiotemporal correlation between biochemical networks and cell cycle phase progression in live cells typically use fluorescence-based imaging of fusion proteins. Stable cell lines expressing the fluorescently tagged protein GFP-PCNA produce rich, dynamically varying sub-cellular foci patterns characterizing the cell cycle phases, including progress during the S-phase. Variable fluorescence patterns, drastic changes in SNR, shape and position changes, and an abundance of touching cells require sophisticated algorithms for reliable automatic segmentation and cell cycle classification. We extend the recently proposed graph partitioning active contours (GPAC) for fluorescence-based nucleus segmentation using regional density functions and dramatically improve its efficiency, making it scalable for high-content microscopy imaging. We utilize surface shape properties of the GFP-PCNA intensity field to obtain descriptors of foci patterns and perform automated cell cycle phase classification, and give quantitative performance by comparing our results to manually labeled data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, AS; Piper, J; Curry, K
2015-06-15
Purpose: Prostate MRI plays an important role in diagnosis, biopsy guidance, and therapy planning for prostate cancer. Prostate MRI contours can be used to aid image fusion for ultrasound biopsy guidance and delivery of radiation. Our goal in this study is to evaluate an automatic atlas-based segmentation method for generating prostate and peripheral zone (PZ) contours on MRI. Methods: T2-weighted MRIs were acquired on a 3T Discovery MR750 system (GE, Milwaukee). The volumes of interest (VOIs), prostate and PZ, were outlined by an expert radiation oncologist and used to create an atlas library for atlas-based segmentation. The atlas-segmentation accuracy was evaluated using a leave-one-out analysis. The method involved automatically finding the atlas subject that best matched the test subject, followed by a normalized intensity-based free-form deformable registration of the atlas subject to the test subject. The prostate and PZ contours were transformed to the test subject using the same deformation. For each test subject the three best matches were used, and the final contour was combined using majority vote. The atlas-segmentation process was fully automatic. Dice similarity coefficients (DSC) and mean Hausdorff distances were used for comparison. Results: VOI contours were available for 28 subjects. For the prostate, the atlas-based segmentation method resulted in an average DSC of 0.88 ± 0.08 and a mean Hausdorff distance of 1.1 ± 0.9 mm. The number of patients in each DSC range was as follows: 0.60–0.69 (1), 0.70–0.79 (2), 0.80–0.89 (13), >0.89 (11). For the PZ, the average DSC was 0.72 ± 0.17 and the average Hausdorff distance 0.9 ± 0.9 mm. The number of patients in each DSC range was as follows: <0.60 (4), 0.60–0.69 (6), 0.70–0.79 (7), 0.80–0.89 (9), >0.89 (1). Conclusion: The MRI atlas-based segmentation method achieved good results for both the whole prostate and the PZ compared to expert-defined VOIs.
The technique is fast, fully automatic, and has the potential to provide significant time savings for prostate VOI definition. AS Nelson and J Piper are partial owners of MIM Software, Inc. AS Nelson, J Piper, K Curry, and A Swallen are current employees of MIM Software, Inc.
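The two evaluation measures used above are straightforward to compute once contours are represented as voxel-coordinate sets. A minimal sketch on invented toy 2D sets; note it computes the classical (maximum) Hausdorff distance, while the study reports a mean variant:

```python
import math

def dice(a, b):
    """Dice similarity coefficient between two voxel-coordinate sets."""
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric (maximum) Hausdorff distance between two point sets."""
    h_ab = max(min(math.dist(p, q) for q in b) for p in a)
    h_ba = max(min(math.dist(p, q) for q in a) for p in b)
    return max(h_ab, h_ba)

# Invented toy contours on a 2D grid (real use would be 3D voxel sets)
auto = {(0, 0), (0, 1), (1, 0), (1, 1)}
manual = {(0, 0), (0, 1), (1, 0)}
dsc = dice(auto, manual)      # 2*3 / (4+3)
hd = hausdorff(auto, manual)  # the extra voxel (1,1) is 1 unit away
```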
A modified three-term PRP conjugate gradient algorithm for optimization models.
Wu, Yanlin
2017-01-01
The nonlinear conjugate gradient (CG) algorithm is a very effective method for optimization, especially for large-scale problems, because of its low memory requirement and simplicity. Zhang et al. (IMA J. Numer. Anal. 26:629-649, 2006) first proposed a three-term CG algorithm based on the well-known Polak-Ribière-Polyak (PRP) formula for unconstrained optimization, where their method has the sufficient descent property without any line search technique. They proved global convergence under the Armijo line search, but this fails under the Wolfe line search technique. Inspired by their method, we make a further study and give a modified three-term PRP CG algorithm. The presented method possesses the following features: (1) the sufficient descent property holds without any line search technique; (2) the trust region property of the search direction is automatically satisfied; (3) the steplength is bounded from below; (4) global convergence is established under the Wolfe line search. Numerical results show that the new algorithm is more effective than the normal method.
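A rough sketch of a three-term PRP direction update with a backtracking Armijo step. The formulas below paraphrase the general three-term structure d = -g + beta*d - theta*y (which yields g·d = -||g||² by construction, i.e. sufficient descent); the paper's modified algorithm may differ in its exact coefficients:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def three_term_prp(f, grad, x, tol=1e-10, max_iter=1000):
    """Toy three-term PRP CG with a backtracking Armijo-style step."""
    g = grad(x)
    d = [-gi for gi in g]          # initial steepest-descent direction
    for _ in range(max_iter):
        if dot(g, g) < tol:
            break
        # backtracking line search on the Armijo condition
        t, fx = 1.0, f(x)
        while f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * dot(g, d):
            t *= 0.5
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x)
        y = [gn - gi for gn, gi in zip(g_new, g)]
        gg = dot(g, g)
        beta = dot(g_new, y) / gg        # PRP coefficient
        theta = dot(g_new, d) / gg       # third-term coefficient
        # the beta*d and theta*y contributions to g_new.d cancel exactly,
        # so g_new.d = -||g_new||^2 regardless of the line search
        d = [-gn + beta * di - theta * yi
             for gn, di, yi in zip(g_new, d, y)]
        g = g_new
    return x

# Toy quadratic: f(x) = 0.5*(x0^2 + 10*x1^2)
f = lambda x: 0.5 * (x[0] ** 2 + 10 * x[1] ** 2)
grad = lambda x: [x[0], 10 * x[1]]
x_min = three_term_prp(f, grad, [3.0, 1.0])
```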
Shepherd, T; Teras, M; Beichel, RR; Boellaard, R; Bruynooghe, M; Dicken, V; Gooding, MJ; Julyan, PJ; Lee, JA; Lefèvre, S; Mix, M; Naranjo, V; Wu, X; Zaidi, H; Zeng, Z; Minn, H
2017-01-01
The impact of positron emission tomography (PET) on radiation therapy is held back by poor methods of defining functional volumes of interest. Many new software tools are being proposed for contouring target volumes but the different approaches are not adequately compared and their accuracy is poorly evaluated due to the ill-definition of ground truth. This paper compares the largest cohort to date of established, emerging and proposed PET contouring methods, in terms of accuracy and variability. We emphasize spatial accuracy and present a new metric that addresses the lack of unique ground truth. Thirty methods are used at 13 different institutions to contour functional volumes of interest in clinical PET/CT and a custom-built PET phantom representing typical problems in image guided radiotherapy. Contouring methods are grouped according to algorithmic type, level of interactivity and how they exploit structural information in hybrid images. Experiments reveal benefits of high levels of user interaction, as well as simultaneous visualization of CT images and PET gradients to guide interactive procedures. Method-wise evaluation identifies the danger of over-automation and the value of prior knowledge built into an algorithm. PMID:22692898
NASA Astrophysics Data System (ADS)
Bouaynaya, N.; Schonfeld, Dan
2005-03-01
Many real-world applications in computer vision and multimedia, such as augmented reality and environmental imaging, require an elastic, accurate contour around a tracked object. In the first part of the paper we introduce a novel tracking algorithm that combines a motion estimation technique with the Bayesian importance sampling framework. We use adaptive block matching (ABM) as the motion estimation technique and construct the proposal density from the estimated motion vector. The resulting algorithm requires a small number of particles for efficient tracking. The tracking is adaptive to different categories of motion even with poor a priori knowledge of the system dynamics; in particular, off-line learning is not needed. A parametric representation of the object is used for tracking purposes. In the second part of the paper, we refine the tracking output from a parametric sample to an elastic contour around the object. We use a 1D active contour model based on a dynamic programming scheme to refine the output of the tracker. To improve the convergence of the active contour, we perform the optimization over a set of randomly perturbed initial conditions. Our experiments are applied to head tracking. We report promising tracking results in complex environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pasquier, David; Lacornerie, Thomas; Vermandel, Maximilien
Purpose: Target-volume and organ-at-risk delineation is a time-consuming task in radiotherapy planning. The development of automated segmentation tools remains problematic because of pelvic organ shape variability. We evaluate a three-dimensional (3D) deformable-model approach and a seeded region-growing algorithm for automatic delineation of the prostate and organs at risk on magnetic resonance images. Methods and Materials: Manual and automatic delineation were compared in 24 patients using a sagittal T2-weighted (T2-w) turbo spin echo (TSE) sequence and an axial T1-weighted (T1-w) 3D fast-field echo (FFE) or TSE sequence. For automatic prostate delineation, an organ model-based method was used. Prostates without seminal vesicles were delineated as the clinical target volume (CTV). For automatic bladder and rectum delineation, a seeded region-growing method was used. Manual contouring was considered the reference method. The following parameters were measured: volume ratio (Vr) (automatic/manual), volume overlap (Vo) (ratio of the volume of intersection to the volume of union; optimal value = 1), and correctly delineated volume (Vc) (percent ratio of the volume of intersection to the manually defined volume; optimal value = 100). Results: For the CTV, the Vr, Vo, and Vc were 1.13 (±0.1 SD), 0.78 (±0.05 SD), and 94.75 (±3.3 SD), respectively. For the rectum, the Vr, Vo, and Vc were 0.97 (±0.1 SD), 0.78 (±0.06 SD), and 86.52 (±5 SD), respectively. For the bladder, the Vr, Vo, and Vc were 0.95 (±0.03 SD), 0.88 (±0.03 SD), and 91.29 (±3.1 SD), respectively. Conclusions: Our results show that the organ-model method is robust and results in reproducible prostate segmentation with minor interactive corrections. For automatic bladder and rectum delineation, magnetic resonance imaging soft-tissue contrast enables the use of region-growing methods.
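The three volume metrics defined above (Vr, Vo, Vc) translate directly into code if contours are represented as voxel sets. A toy sketch with invented sets, not the study's implementation:

```python
def volume_metrics(auto, manual):
    """Vr (automatic/manual), Vo (intersection/union, optimal 1), and
    Vc (100 * intersection/manual, optimal 100), per the definitions above."""
    inter = len(auto & manual)
    union = len(auto | manual)
    return (len(auto) / len(manual),
            inter / union,
            100.0 * inter / len(manual))

# Invented voxel sets: the automatic contour slightly over-segments
auto = {(0, 0), (0, 1), (1, 0), (1, 1), (2, 0)}
manual = {(0, 0), (0, 1), (1, 0), (1, 1)}
vr, vo, vc = volume_metrics(auto, manual)
```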
Spiral Light Beams and Contour Image Processing
NASA Astrophysics Data System (ADS)
Kishkin, Sergey A.; Kotova, Svetlana P.; Volostnikov, Vladimir G.
Spiral beams of light are characterized by their ability to remain structurally unchanged during propagation. They may have the shape of any closed curve. In the present paper a new approach is proposed within the framework of contour analysis, based on close cooperation between modern coherent optics, the theory of functions, and numerical methods. An algorithm for comparing contours is presented and theoretically justified, which makes it possible to determine whether two contours are similar to within a scale factor and/or rotation. The advantages and disadvantages of the proposed approach are considered, and the results of numerical modeling are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Hui; Liu, Yiping; Qiu, Tianshuang
2014-08-15
Purpose: To develop and evaluate a computerized semiautomatic segmentation method for accurate extraction of three-dimensional lesions from dynamic contrast-enhanced magnetic resonance images (DCE-MRIs) of the breast. Methods: The authors propose a new background distribution-based active contour model using level set (BDACMLS) to segment lesions in breast DCE-MRIs. The method starts with manual selection of a region of interest (ROI) that contains the entire lesion in a single slice where the lesion is enhanced. Then the lesion volume is separated from the volume data of interest, which is captured automatically. The core idea of BDACMLS is a new signed pressure function which is based solely on the intensity distribution combined with a pathophysiological basis. To compare the algorithm results, two experienced radiologists delineated all lesions jointly to obtain the ground truth. In addition, results generated by other level set (LS) based methods are also compared with the authors' method. Finally, the performance of the proposed method is evaluated by several region-based metrics such as the overlap ratio. Results: Forty-two studies with 46 lesions, comprising 29 benign and 17 malignant lesions, are evaluated. The dataset includes various typical pathologies of the breast such as invasive ductal carcinoma, ductal carcinoma in situ, scar carcinoma, phyllodes tumor, breast cysts, fibroadenoma, etc. The overlap ratio for BDACMLS with respect to manual segmentation is 79.55% ± 12.60% (mean ± s.d.). Conclusions: A new active contour model method has been developed and shown to successfully segment three-dimensional breast DCE-MRI lesions. The results from this model correspond more closely to manual segmentation, solve the weak-edge-passed problem, and improve the robustness in segmenting different lesions.
Active contour based segmentation of resected livers in CT images
NASA Astrophysics Data System (ADS)
Oelmann, Simon; Oyarzun Laura, Cristina; Drechsler, Klaus; Wesarg, Stefan
2015-03-01
The majority of state-of-the-art segmentation algorithms are able to give proper results in healthy organs but not in pathological ones. However, many clinical applications require an accurate segmentation of pathological organs. The determination of target boundaries for radiotherapy and liver volumetry calculations are examples of this. Volumetry measurements are of special interest after tumor resection for follow-up of liver regrowth. The segmentation of resected livers presents additional challenges that are not addressed by state-of-the-art algorithms. This paper presents a snakes-based algorithm specially developed for the segmentation of resected livers. The algorithm is enhanced with a novel dynamic smoothing technique that allows the active contour to propagate with different speeds depending on the intensities visible in its neighborhood. The algorithm is evaluated on 6 clinical CT images as well as 18 artificial datasets generated from additional clinical CT images.
NASA Astrophysics Data System (ADS)
Assi, Kondo Claude; Gay, Etienne; Chnafa, Christophe; Mendez, Simon; Nicoud, Franck; Abascal, Juan F. P. J.; Lantelme, Pierre; Tournoux, François; Garcia, Damien
2017-09-01
We propose a regularized least-squares method for reconstructing 2D velocity vector fields within the left ventricular cavity from single-view color Doppler echocardiographic images. Vector flow mapping is formulated as a quadratic optimization problem based on an ℓ2-norm minimization of a cost function composed of a Doppler data-fidelity term and a regularizer. The latter contains three physically interpretable expressions related to 2D mass conservation, Dirichlet boundary conditions, and smoothness. A finite difference discretization of the continuous problem was adopted in a polar coordinate system, leading to a sparse symmetric positive-definite system. The three regularization parameters were determined automatically by analyzing the L-hypersurface, a generalization of the L-curve. The performance of the proposed method was numerically evaluated using (1) a synthetic flow composed of a mixture of divergence-free and curl-free flow fields and (2) simulated flow data from a patient-specific CFD (computational fluid dynamics) model of a human left heart. The numerical evaluations showed that the vector flow fields reconstructed from the Doppler components were in good agreement with the original velocities, with a relative error less than 20%. It was also demonstrated that a perturbation of the domain contour has little effect on the rebuilt velocity fields. The capability of our intraventricular vector flow mapping (iVFM) algorithm was finally illustrated on in vivo echocardiographic color Doppler data acquired in patients. The vortex that forms during the rapid filling was clearly deciphered. This improved iVFM algorithm is expected to have a significant clinical impact in the assessment of diastolic function.
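The cost structure described, a data-fidelity term plus a regularizer minimized in the least-squares sense, can be illustrated with the simplest Tikhonov-regularized problem. This two-unknown toy is not the iVFM formulation; the matrix, data, and function name are invented:

```python
def ridge_2var(A, b, lam):
    """Solve min ||Ax - b||^2 + lam*||x||^2 for two unknowns via the
    normal equations (A^T A + lam*I) x = A^T b, inverted in closed form."""
    ata = [[sum(r[i] * r[j] for r in A) for j in range(2)] for i in range(2)]
    atb = [sum(r[i] * bi for r, bi in zip(A, b)) for i in range(2)]
    m00, m01 = ata[0][0] + lam, ata[0][1]
    m10, m11 = ata[1][0], ata[1][1] + lam
    det = m00 * m11 - m01 * m10
    return ((m11 * atb[0] - m01 * atb[1]) / det,
            (m00 * atb[1] - m10 * atb[0]) / det)

A = [[1, 0], [0, 1], [1, 1]]
b = [1.0, 2.0, 3.0]
x_fit = ridge_2var(A, b, 0.0)     # unregularized least squares
x_shrunk = ridge_2var(A, b, 1.0)  # the regularizer pulls the solution toward 0
```

The iVFM problem has the same shape but in many unknowns, with physics-based regularizers and a sparse solver; choosing the lam values is exactly what the L-hypersurface analysis automates.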
Automatic allograft bone selection through band registration and its application to distal femur.
Zhang, Yu; Qiu, Lei; Li, Fengzan; Zhang, Qing; Zhang, Li; Niu, Xiaohui
2017-09-01
Clinical reports suggest that large bone defects can be effectively restored by allograft bone transplantation, in which allograft bone selection plays an important role. In addition, there is strong demand for automatic allograft bone selection methods, as they could greatly improve the management efficiency of large bone banks. Although several automatic methods have been presented to select the most suitable allograft bone from a massive allograft bone bank, these methods still suffer from inaccuracy. In this paper, we propose an effective allograft bone selection method that does not use the contralateral bones. First, the allograft bone is globally aligned to the recipient bone by surface registration. Then, the global alignment is further refined through band registration. The band, defined as the recipient points within the lifted and lowered cutting planes, involves more of the local structure of the defect segment. Therefore, our method achieves robust alignment and high registration accuracy between the allograft and recipient. Moreover, the existing contour method and surface method can be unified into one framework under our method by adjusting the lift and lower distances of the cutting planes. Finally, our method has been validated on a database of distal femurs. The experimental results indicate that our method outperforms both the surface method and the contour method.
Improve accuracy for automatic acetabulum segmentation in CT images.
Liu, Hao; Zhao, Jianning; Dai, Ning; Qian, Hongbo; Tang, Yuehong
2014-01-01
Separation of the femoral head and acetabulum is one of the main difficulties in the diseased hip joint, due to deformed shapes and the extreme narrowness of the joint space. Improving segmentation accuracy is the key challenge for existing automatic or semi-automatic segmentation methods. In this paper, we propose a new method to improve the accuracy of the segmented acetabulum using surface fitting techniques, which essentially consists of three parts: (1) a surface iteration process to obtain an optimized surface; (2) replacement of ellipsoid fitting with two-phase quadric surface fitting; (3) a normal-matching method and an optimized-region method to capture edge points for the fitted quadric surface. Furthermore, this paper uses in vivo CT data sets from 40 actual patients (79 hip joints). Test results for these clinical cases show that: (1) the average error of the quadric surface fitting method is 2.3 mm; (2) the accuracy ratio of automatically recognized contours is larger than 89.4%; (3) the error ratio of section contours is less than 10% for acetabula without severe malformation and less than 30% for acetabula with severe malformation. Compared with similar methods, the accuracy of our method, which is applied in a software system, is significantly enhanced.
NASA Astrophysics Data System (ADS)
Yip, Stephen S. F.; Coroller, Thibaud P.; Sanford, Nina N.; Huynh, Elizabeth; Mamon, Harvey; Aerts, Hugo J. W. L.; Berbeco, Ross I.
2016-01-01
Change in PET-based textural features has shown promise in predicting cancer response to treatment. However, contouring tumour volumes on longitudinal scans is time-consuming. This study investigated the usefulness of contour propagation in texture analysis for the purpose of pathologic response prediction in esophageal cancer. Forty-five esophageal cancer patients underwent PET/CT scans before and after chemo-radiotherapy. Patients were classified into responders and non-responders after the surgery. Physician-defined tumour ROIs on pre-treatment PET were propagated onto the post-treatment PET using rigid and ten deformable registration algorithms. PET images were converted into 256 discrete values. Co-occurrence, run-length, and size zone matrix textures were computed within all ROIs. The relative difference of each texture at different treatment time-points was used to predict the pathologic responders. Their predictive value was assessed using the area under the receiver-operating-characteristic curve (AUC). Propagated ROIs from different algorithms were compared using the Dice similarity index (DSI). Contours propagated by the fast-demons, fast-free-form and rigid algorithms did not fully capture the high FDG uptake regions of tumours. Fast-demons propagated ROIs had the least agreement with other contours (DSI = 58%). Moderate to substantial overlap was found in the ROIs propagated by all other algorithms (DSI = 69%-79%). Rigidly propagated ROIs with co-occurrence texture failed to significantly differentiate between responders and non-responders (AUC = 0.58, q-value = 0.33), while the differentiation was significant with other textures (AUC = 0.71‒0.73, p < 0.009). Among the deformable algorithms, fast-demons (AUC = 0.68‒0.70, q-value < 0.03) and fast-free-form (AUC = 0.69‒0.74, q-value < 0.04) were the least predictive.
ROIs propagated by all other deformable algorithms with any texture significantly predicted pathologic responders (AUC = 0.72‒0.78, q-value < 0.01). Propagated ROIs using deformable registration for all textures can lead to accurate prediction of pathologic response, potentially expediting the temporal texture analysis process. However, fast-demons, fast-free-form, and rigid algorithms should be applied with care due to their inferior performance compared to other algorithms.
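The AUC used above is equivalent to the Mann-Whitney statistic: the probability that a randomly chosen responder's feature change ranks above a non-responder's. A minimal sketch with invented relative texture-change values:

```python
def auc(responders, non_responders):
    """AUC via the Mann-Whitney statistic: fraction of responder/non-responder
    pairs where the responder's value is higher (ties count 0.5)."""
    pairs = [(r, n) for r in responders for n in non_responders]
    wins = sum(1.0 if r > n else 0.5 if r == n else 0.0 for r, n in pairs)
    return wins / len(pairs)

# Invented relative texture changes: perfectly separated groups give AUC = 1
example_auc = auc([0.9, 0.8, 0.7], [0.3, 0.4])
```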
Cavity contour segmentation in chest radiographs using supervised learning and dynamic programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maduskar, Pragnya, E-mail: pragnya.maduskar@radboudumc.nl; Hogeweg, Laurens; Sánchez, Clara I.
Purpose: Efficacy of tuberculosis (TB) treatment is often monitored using chest radiography. Monitoring the size of cavities in pulmonary tuberculosis is important, as the size predicts severity of the disease and its persistence under therapy predicts relapse. The authors present a method for automatic cavity segmentation in chest radiographs. Methods: A two-stage method is proposed to segment the cavity borders, given a user-defined seed point close to the center of the cavity. First, a supervised learning approach is employed to train a pixel classifier using texture and radial features to identify the border pixels of the cavity. A likelihood value of belonging to the cavity border is assigned to each pixel by the classifier. The authors experimented with four different classifiers: k-nearest neighbor (kNN), linear discriminant analysis (LDA), GentleBoost (GB), and random forest (RF). Next, the constructed likelihood map was used as an input cost image in the polar-transformed image space for dynamic programming to trace the optimal maximum-cost path. This constructed path corresponds to the segmented cavity contour in image space. Results: The method was evaluated on 100 chest radiographs (CXRs) containing 126 cavities. The reference segmentation was manually delineated by an experienced chest radiologist. An independent observer (a chest radiologist) also delineated all cavities to estimate interobserver variability. The Jaccard overlap measure Ω was computed between the reference segmentation and the automatic segmentation, and between the reference segmentation and the independent observer's segmentation, for all cavities. A median overlap Ω of 0.81 (0.76 ± 0.16) and 0.85 (0.82 ± 0.11) was achieved between the reference segmentation and the automatic segmentation, and between the segmentations by the two radiologists, respectively.
The best reported mean contour distance and Hausdorff distance between the reference and the automatic segmentation were, respectively, 2.48 ± 2.19 and 8.32 ± 5.66 mm, whereas these distances were 1.66 ± 1.29 and 5.75 ± 4.88 mm between the segmentations by the reference reader and the independent observer, respectively. The automatic segmentations were also visually assessed by two trained CXR readers as “excellent,” “adequate,” or “insufficient.” The readers had good agreement in assessing the cavity outlines and 84% of the segmentations were rated as “excellent” or “adequate” by both readers. Conclusions: The proposed cavity segmentation technique produced results with a good degree of overlap with manual expert segmentations. The evaluation measures demonstrated that the results approached the results of the experienced chest radiologists, in terms of overlap measure and contour distance measures. Automatic cavity segmentation can be employed in TB clinics for treatment monitoring, especially in resource limited settings where radiologists are not available.
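The dynamic-programming step described in the method, tracing a maximum-cost path through a polar-transformed cost image, can be sketched as follows. The cost matrix is invented, and the one-pixel-per-row limit on radius change is an assumed smoothness constraint (the abstract does not state the paper's exact path constraints):

```python
def trace_contour(cost):
    """Max-cost path through a polar cost image: one radius per angle row,
    with the radius allowed to change by at most one pixel per row."""
    n, m = len(cost), len(cost[0])
    score = [cost[0][:]] + [[0.0] * m for _ in range(n - 1)]
    back = [[0] * m for _ in range(n)]
    for i in range(1, n):
        for j in range(m):
            # best predecessor among radii j-1, j, j+1 in the previous row
            prev = range(max(0, j - 1), min(m, j + 2))
            k = max(prev, key=lambda p: score[i - 1][p])
            score[i][j] = cost[i][j] + score[i - 1][k]
            back[i][j] = k
    # backtrack from the best final radius
    j = max(range(m), key=lambda p: score[n - 1][p])
    path = [j]
    for i in range(n - 1, 0, -1):
        j = back[i][j]
        path.append(j)
    path.reverse()
    return path

# Invented 3x3 cost image (rows = angles, columns = radii)
path = trace_contour([[0, 5, 0],
                      [0, 0, 5],
                      [0, 5, 0]])
```

In the paper the per-pixel costs come from the trained border-likelihood classifier, so the traced path follows the most border-like ring around the seed point.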
Gao, Yi; Kikinis, Ron; Bouix, Sylvain; Shenton, Martha; Tannenbaum, Allen
2012-08-01
Extracting anatomically and functionally significant structures is one of the important tasks for both the theoretical study of medical image analysis and the clinical and practical community. In the past, much work has been dedicated only to algorithmic development. Nevertheless, for clinical end users, a well-designed algorithm with interactive software is necessary for an algorithm to be utilized in their daily work. Furthermore, the software should preferably be open source, so that it can be used and validated not only by the authors but by the entire community. Therefore, the contribution of the present work is twofold: first, we propose a new robust-statistics-based conformal metric and the conformal area driven multiple active contour framework, to simultaneously extract multiple targets from MR and CT medical imagery in 3D. Second, an open source, graphically interactive 3D segmentation tool based on the aforementioned contour evolution is implemented and is publicly available for end users on multiple platforms. In using this software for the segmentation task, the process is initiated by user-drawn strokes (seeds) in the target region of the image. Then, local robust statistics are used to describe the object features, and such features are learned adaptively from the seeds under a non-parametric estimation scheme. Subsequently, several active contours evolve simultaneously with their interactions motivated by the principles of action and reaction: this not only guarantees mutual exclusiveness among the contours, but also no longer relies upon the assumption that the multiple objects fill the entire image domain, which was tacitly or explicitly assumed in many previous works. In doing so, the contours interact and converge to equilibrium at the desired positions of the desired multiple objects.
Furthermore, with the aim of not only validating the algorithm and the software, but also demonstrating how the tool is to be used, we provide the reader with reproducible experiments that demonstrate the capability of the proposed segmentation tool on several publicly available data sets. Copyright © 2012 Elsevier B.V. All rights reserved.
A 3D Interactive Multi-object Segmentation Tool using Local Robust Statistics Driven Active Contours
Gao, Yi; Kikinis, Ron; Bouix, Sylvain; Shenton, Martha; Tannenbaum, Allen
2012-01-01
PMID:22831773
Joint Denoising/Compression of Image Contours via Shape Prior and Context Tree
NASA Astrophysics Data System (ADS)
Zheng, Amin; Cheung, Gene; Florencio, Dinei
2018-07-01
With the advent of depth sensing technologies, the extraction of object contours in images (a common and important pre-processing step for later higher-level computer vision tasks like object detection and human action recognition) has become easier. However, acquisition noise in captured depth images means that detected contours suffer from unavoidable errors. In this paper, we propose to jointly denoise and compress detected contours in an image for bandwidth-constrained transmission to a client, who can then carry out the aforementioned application-specific tasks using the decoded contours as input. We first prove theoretically that, in general, a joint denoising/compression approach can outperform a separate two-stage approach that first denoises and then encodes contours lossily. Adopting a joint approach, we first propose a burst error model that models typical errors encountered in an observed string y of directional edges. We then formulate a rate-constrained maximum a posteriori (MAP) problem that trades off the posterior probability p(x'|y) of an estimated string x' given y with its code rate R(x'). We design a dynamic programming (DP) algorithm that solves the posed problem optimally, and propose a compact context representation called total suffix tree (TST) that can reduce the complexity of the algorithm dramatically. Experimental results show that our joint denoising/compression scheme noticeably outperformed a competing separate scheme in rate-distortion performance.
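The rate-constrained MAP trade-off solved by DP can be illustrated on a toy alphabet of directional edges. The likelihood and rate terms below are simplified stand-ins for the paper's burst error model and context-tree rate (a flat per-transition bit cost instead of a TST), and all names are illustrative.

```python
import math

def map_denoise_rate(y, p_stay=0.8, lam=1.0, alphabet=(0, 1, 2, 3)):
    """Choose x' minimising  -log p(x'|y) + lam * R(x')  by dynamic
    programming over a symbol trellis; the likelihood favours keeping
    observed symbols, the rate term charges bits on symbol changes."""
    bits = math.log2(len(alphabet))
    cost = {s: 0.0 for s in alphabet}   # best cost of a prefix ending in s
    back = []                           # backpointers per time step
    for t, obs in enumerate(y):
        new_cost, bp = {}, {}
        for s in alphabet:
            # likelihood: observed symbol is kept with probability p_stay
            like = -math.log(p_stay if s == obs
                             else (1 - p_stay) / (len(alphabet) - 1))
            best, arg = float('inf'), None
            for prev in alphabet:
                # crude rate model: pay log2|A| bits when the symbol changes
                rate = 0.0 if (t > 0 and s == prev) else bits
                c = cost[prev] + like + lam * rate
                if c < best:
                    best, arg = c, prev
            new_cost[s], bp[s] = best, arg
        cost = new_cost
        back.append(bp)
    s = min(cost, key=cost.get)         # backtrack from the cheapest end state
    x = [s]
    for bp in reversed(back[1:]):
        s = bp[s]
        x.append(s)
    return x[::-1]
```

With a large rate weight `lam`, an isolated (likely noisy) symbol is smoothed away; with a small `lam`, the observation is kept.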
Simple computer method provides contours for radiological images
NASA Technical Reports Server (NTRS)
Newell, J. D.; Keller, R. A.; Baily, N. A.
1975-01-01
The computer is provided with information concerning boundaries in the total image. The gradient of each point in the digitized image is calculated with the aid of a threshold technique; a set of algorithms is then invoked to reduce the number of gradient elements, retaining only the major ones for definition of the contour.
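The first stage (gradient magnitude against a threshold) can be sketched in a few lines; the subsequent gradient-element reduction algorithms are omitted, and the function name is illustrative.

```python
import numpy as np

def gradient_contour(img, thresh):
    """Mark candidate contour points where the gradient magnitude
    (central differences) exceeds a threshold."""
    gy, gx = np.gradient(img.astype(float))   # derivatives along rows, cols
    mag = np.hypot(gx, gy)
    return mag > thresh
```

On a step image, only points along the intensity edge are marked.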
Character feature integration of Chinese calligraphy and font
NASA Astrophysics Data System (ADS)
Shi, Cao; Xiao, Jianguo; Jia, Wenhua; Xu, Canhui
2013-01-01
A framework is proposed in this paper to effectively generate a new hybrid character type by integrating the local contour features of Chinese calligraphy with the structural features of a font in a computer system. To explore the traditional artistic manifestation of calligraphy, a multi-directional spatial filter is applied for local contour feature extraction. The contour of the character image is then divided into sub-images, and the sub-images at identical positions across various characters are modelled by a Gaussian distribution. According to this probability distribution, dilation and erosion operators are designed to adjust the boundary of the font image, generating new Chinese character images that possess both the contour features of artistic calligraphy and the elaborate structural features of the font. Experimental results demonstrate that the new characters are visually acceptable, and that the proposed framework is an effective and efficient strategy for automatically generating hybrid characters of calligraphy and font.
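The probability-guided boundary adjustment might be sketched as follows. This is a simplified stand-in, not the authors' operators: a per-pixel probability map drives 4-neighbour dilation or erosion of a glyph bitmap, and the thresholds and names are invented for illustration.

```python
import numpy as np

def adjust_boundary(font, prob, grow_th=0.7, shrink_th=0.2):
    """Grow the glyph where the calligraphy ensemble is confident
    (prob high) and shrink it where it is not, using 4-neighbour
    dilation and erosion."""
    p = np.pad(font, 1)
    neigh = np.stack([p[:-2, 1:-1], p[2:, 1:-1],      # up, down
                      p[1:-1, :-2], p[1:-1, 2:]])     # left, right
    dil = font | (neigh.max(axis=0) > 0)              # 4-neighbour dilation
    ero = font & (neigh.min(axis=0) > 0)              # 4-neighbour erosion
    out = font.copy()
    grow, shrink = prob > grow_th, prob < shrink_th
    out[grow] = dil[grow]
    out[shrink] = ero[shrink]
    return out
```

Pixels with intermediate probability are left untouched, so the font's structural skeleton is preserved.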
Courtney, Jane; Woods, Elena; Scholz, Dimitri; Hall, William W; Gautier, Virginie W
2015-01-01
We introduce here MATtrack, an open source MATLAB-based computational platform developed to process multi-TIFF files produced by a photo-conversion time lapse protocol for live cell fluorescent microscopy. MATtrack automatically performs a series of steps required for image processing, including extraction and import of numerical values from multi-TIFF files, red/green image classification using gating parameters, noise filtering, background extraction, contrast stretching and temporal smoothing. MATtrack also integrates a series of algorithms for quantitative image analysis enabling the construction of mean and standard deviation images, clustering and classification of subcellular regions and injection point approximation. In addition, MATtrack features a simple user interface, which enables monitoring of Fluorescent Signal Intensity in multiple Regions of Interest, over time. The latter encapsulates a region growing method to automatically delineate the contours of Regions of Interest selected by the user, and performs background and regional Average Fluorescence Tracking, and automatic plotting. Finally, MATtrack computes convenient visualization and exploration tools including a migration map, which provides an overview of the protein intracellular trajectories and accumulation areas. In conclusion, MATtrack is an open source MATLAB-based software package tailored to facilitate the analysis and visualization of large data files derived from real-time live cell fluorescent microscopy using photoconvertible proteins. It is flexible, user-friendly, compatible with Windows, Mac, and Linux, and a wide range of data acquisition software. MATtrack is freely available for download at eleceng.dit.ie/courtney/MATtrack.zip.
Courtney, Jane; Woods, Elena; Scholz, Dimitri; Hall, William W.; Gautier, Virginie W.
2015-01-01
PMID:26485569
Gao, Wei-Wei; Shen, Jian-Xin; Wang, Yu-Liang; Liang, Chun; Zuo, Jing
2013-02-01
In order to automatically detect hemorrhages in fundus images and develop an automated diabetic retinopathy screening system, a novel algorithm named locally adaptive region growing based on multi-template matching was established and studied. Firstly, the spectral signature of the major anatomical structures in the fundus was studied, so that the right channel among the RGB channels could be selected for each segmentation object. Secondly, the fundus image was preprocessed by means of HSV brightness correction and contrast limited adaptive histogram equalization (CLAHE). Then, seeds for region growing were found by removing the optic disc and vessels from the result of normalized cross-correlation (NCC) template matching on the preprocessed image with several templates. Finally, locally adaptive region growing segmentation was used to find the exact contours of hemorrhages, completing the automated detection of the lesions. The approach was tested on 90 fundus images of different resolutions with variable color, brightness and quality. Results suggest that the approach can detect hemorrhages in fundus images quickly and effectively, and that it is stable and robust. As a result, the approach can meet clinical demands.
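The NCC template-matching step that produces candidate region-growing seeds can be sketched as follows; this is a brute-force illustration, not the authors' implementation, and the names and threshold are illustrative.

```python
import numpy as np

def ncc(patch, template):
    """Normalised cross-correlation between an image patch and a template."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    if denom == 0:
        return 0.0                      # flat patch: no correlation
    return float((p * t).sum() / denom)

def match_seeds(img, template, th=0.8):
    """Slide the template over the image; positions whose NCC exceeds th
    become candidate seeds for the locally adaptive region growing."""
    th_rows, th_cols = template.shape
    H, W = img.shape
    seeds = []
    for i in range(H - th_rows + 1):
        for j in range(W - th_cols + 1):
            if ncc(img[i:i + th_rows, j:j + th_cols], template) > th:
                seeds.append((i, j))
    return seeds
```

In the paper these seeds are then filtered by removing the optic disc and vessel regions before region growing.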
Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai
2005-10-01
This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
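The FOCUSS re-weighted minimum-norm iteration at the core of SSLOFO (without the sLORETA initialization, standardization, and source-space adjustment steps) can be sketched on a tiny underdetermined system; the function name is illustrative.

```python
import numpy as np

def focuss(A, b, x0, iters=50):
    """Re-weighted minimum-norm iteration: each step solves the minimum-norm
    problem in coordinates scaled by the previous estimate, progressively
    focusing energy onto a few dominant sources."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        W = np.diag(x)                       # weights from the previous estimate
        x = W @ np.linalg.pinv(A @ W) @ b    # weighted minimum-norm solution
    return x
```

Starting from a uniform estimate, the iterate stays consistent with the measurements while concentrating energy on the dominant source.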
Segmenting breast cancerous regions in thermal images using fuzzy active contours
Ghayoumi Zadeh, Hossein; Haddadnia, Javad; Rahmani Seryasat, Omid; Mostafavi Isfahani, Sayed Mohammad
2016-01-01
Breast cancer is the main cause of death among young women in developing countries. The human body temperature carries critical medical information related to overall body status, and an abnormal rise in total or regional body temperature is a natural symptom in diagnosing many diseases. Thermal imaging (thermography) uses infrared radiation; it is fast, non-invasive, and non-contact, and the images it produces are useful for monitoring the temperature of the human body. In some clinical studies and biopsy tests, it is necessary for the clinician to know the extent of the cancerous area, and in such cases the thermal image is very useful; it is likewise beneficial for detecting the cancerous tissue core. This paper presents a fully automated approach to detect the thermal edge and core of the cancerous area in thermography images. In order to evaluate the proposed method, 60 patients with an average age of 44.9 years, suspected of breast tissue disease, were chosen; these patients were referred to the Tehran Imam Khomeini Imaging Center. Clinical examinations such as ultrasound, biopsy, a questionnaire, and eventually thermography were performed precisely on these individuals. Finally, the proposed model, based on a fuzzy active contour designed with fuzzy logic, is applied to segment the proven abnormal area in the thermal images. The presented method can segment cancerous tissue areas from their borders in thermal images of the breast area. To evaluate the proposed algorithm, the Hausdorff and mean distances between the manual and automatic methods were used; estimation of distance was conducted to accurately separate the thermal core and edge.
The Hausdorff distances between the proposed and the manual method for the thermal core and edge were 0.4719 ± 0.4389 mm and 0.3171 ± 0.1056 mm respectively, and the mean distances between the proposed and the manual method for the core and thermal edge were 0.0845 ± 0.0619 mm and 0.0710 ± 0.0381 mm respectively. Furthermore, the sensitivity in recognizing the thermal pattern in breast tissue masses is 85% and the accuracy is 91.98%. A thermal imaging system has thus been proposed that is able to recognize abnormal breast tissue masses; it uses fuzzy active contours to extract the abnormal regions automatically. PMID:28096784
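The Hausdorff and mean distances used for the evaluation can be computed for two contour point sets as follows (a direct brute-force sketch; the symmetric mean of nearest-neighbour distances is one common definition of "mean distance"):

```python
import numpy as np

def hausdorff(P, Q):
    """Symmetric Hausdorff distance between two contour point sets."""
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    return float(max(D.min(axis=1).max(), D.min(axis=0).max()))

def mean_distance(P, Q):
    """Symmetric mean of nearest-neighbour distances between two sets."""
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    return float(0.5 * (D.min(axis=1).mean() + D.min(axis=0).mean()))
```

The Hausdorff distance reports the worst local disagreement between the two contours, while the mean distance reports the typical one.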
Data-Parallel Algorithm for Contour Tree Construction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sewell, Christopher Meyer; Ahrens, James Paul; Carr, Hamish
2017-01-19
The goal of this project is to develop algorithms for additional visualization and analysis filters in order to expand the functionality of the VTK-m toolkit to support less critical but commonly used operators.
Finite grade pheromone ant colony optimization for image segmentation
NASA Astrophysics Data System (ADS)
Yuanjing, F.; Li, Y.; Liangjun, K.
2008-06-01
By combining the decision process of ant colony optimization (ACO) with the multistage decision process of image segmentation based on an active contour model (ACM), an algorithm called finite grade ACO (FACO) for image segmentation is proposed. This algorithm classifies pheromone into finite grades; pheromone updating is achieved by changing grades, so the updated quantity of pheromone is independent of the objective function. The algorithm, which provides a new approach to obtaining precise contours, is proven by means of finite Markov chains to converge linearly to the global optimal solution. Segmentation experiments with ultrasound heart images show the effectiveness of the algorithm. Comparison of results for the segmentation of left-ventricle images shows that this ACO approach is more effective than a GA approach, and the new pheromone updating strategy exhibits good time performance in the optimization process.
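The finite-grade pheromone update, decoupled from the objective function, might look like this; the grade count and the geometric spacing of pheromone levels are illustrative choices, not taken from the paper.

```python
def update_grade(grade, reinforced, n_grades=8):
    """Finite-grade pheromone update: instead of depositing an amount
    derived from the objective function, the pheromone moves up one
    grade on reinforced components and decays one grade elsewhere."""
    if reinforced:
        return min(grade + 1, n_grades - 1)
    return max(grade - 1, 0)

def pheromone(grade, tau_min=0.1, tau_max=1.0, n_grades=8):
    """Pheromone level associated with a grade (geometric spacing
    between tau_min and tau_max)."""
    r = (tau_max / tau_min) ** (1.0 / (n_grades - 1))
    return tau_min * r ** grade
```

Bounding the grades also bounds the pheromone ratio between any two components, which is what makes the Markov-chain convergence analysis tractable.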
A hand tracking algorithm with particle filter and improved GVF snake model
NASA Astrophysics Data System (ADS)
Sun, Yi-qi; Wu, Ai-guo; Dong, Na; Shao, Yi-zhe
2017-07-01
To address the problem that accurate hand information cannot be obtained by a particle filter alone, a hand tracking algorithm based on a particle filter combined with a skin-color adaptive gradient vector flow (GVF) snake model is proposed. Adaptive GVF and a skin-color adaptive external guidance force are introduced into the traditional GVF snake model, guiding the curve to converge quickly to the deep concave regions of the hand contour and capturing the complex hand contour accurately. The algorithm performs real-time correction of the particle filter parameters, avoiding the particle drift phenomenon. Experimental results show that the proposed algorithm reduces the root mean square error of hand tracking by 53%, and improves tracking accuracy in the case of complex and moving backgrounds, even with a large range of occlusion.
Active mask segmentation of fluorescence microscope images.
Srinivasa, Gowri; Fickus, Matthew C; Guo, Yusong; Linstedt, Adam D; Kovacević, Jelena
2009-08-01
We propose a new active mask algorithm for the segmentation of fluorescence microscope images of punctate patterns. It combines the (a) flexibility offered by active-contour methods, (b) speed offered by multiresolution methods, (c) smoothing offered by multiscale methods, and (d) statistical modeling offered by region-growing methods into a fast and accurate segmentation tool. The framework moves from the idea of the "contour" to that of "inside and outside," or masks, allowing for easy multidimensional segmentation. It adapts to the topology of the image through the use of multiple masks. The algorithm is almost invariant under initialization, allowing for random initialization, and uses a few easily tunable parameters. Experiments show that the active mask algorithm matches the ground truth well and outperforms the algorithm widely used in fluorescence microscopy, seeded watershed, both qualitatively, as well as quantitatively.
Yeap, P L; Noble, D J; Harrison, K; Bates, A M; Burnet, N G; Jena, R; Romanchikova, M; Sutcliffe, M P F; Thomas, S J; Barnett, G C; Benson, R J; Jefferies, S J; Parker, M A
2017-07-12
To determine delivered dose to the spinal cord, a technique has been developed to propagate manual contours from kilovoltage computed-tomography (kVCT) scans for treatment planning to megavoltage computed-tomography (MVCT) guidance scans. The technique uses the Elastix software to perform intensity-based deformable image registration of each kVCT scan to the associated MVCT scans. The registration transform is then applied to contours of the spinal cord drawn manually on the kVCT scan, to obtain contour positions on the MVCT scans. Different registration strategies have been investigated, with performance evaluated by comparing the resulting auto-contours with manual contours drawn by oncologists. The comparison metrics include the conformity index (CI) and the distance between centres (DBC). With optimised registration, auto-contours generally agree well with manual contours. Considering all 30 MVCT scans for each of three patients, the median CI is 0.759 ± 0.003, and the median DBC is (0.87 ± 0.01) mm. An intra-observer comparison for the same scans gives a median CI of 0.820 ± 0.002 and a DBC of (0.64 ± 0.01) mm. Good levels of conformity are also obtained when auto-contours are compared with manual contours from one observer for a single MVCT scan for each of 30 patients, and when they are compared with manual contours from six observers for two MVCT scans for each of three patients. Using the auto-contours to estimate organ position at treatment time, a preliminary study of 33 patients who underwent radiotherapy for head-and-neck cancers indicates good agreement between planned and delivered dose to the spinal cord.
NASA Astrophysics Data System (ADS)
Yeap, P. L.; Noble, D. J.; Harrison, K.; Bates, A. M.; Burnet, N. G.; Jena, R.; Romanchikova, M.; Sutcliffe, M. P. F.; Thomas, S. J.; Barnett, G. C.; Benson, R. J.; Jefferies, S. J.; Parker, M. A.
2017-08-01
To determine delivered dose to the spinal cord, a technique has been developed to propagate manual contours from kilovoltage computed-tomography (kVCT) scans for treatment planning to megavoltage computed-tomography (MVCT) guidance scans. The technique uses the Elastix software to perform intensity-based deformable image registration of each kVCT scan to the associated MVCT scans. The registration transform is then applied to contours of the spinal cord drawn manually on the kVCT scan, to obtain contour positions on the MVCT scans. Different registration strategies have been investigated, with performance evaluated by comparing the resulting auto-contours with manual contours drawn by oncologists. The comparison metrics include the conformity index (CI) and the distance between centres (DBC). With optimised registration, auto-contours generally agree well with manual contours. Considering all 30 MVCT scans for each of three patients, the median CI is 0.759 ± 0.003, and the median DBC is (0.87 ± 0.01) mm. An intra-observer comparison for the same scans gives a median CI of 0.820 ± 0.002 and a DBC of (0.64 ± 0.01) mm. Good levels of conformity are also obtained when auto-contours are compared with manual contours from one observer for a single MVCT scan for each of 30 patients, and when they are compared with manual contours from six observers for two MVCT scans for each of three patients. Using the auto-contours to estimate organ position at treatment time, a preliminary study of 33 patients who underwent radiotherapy for head-and-neck cancers indicates good agreement between planned and delivered dose to the spinal cord.
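The CI and DBC metrics can be sketched for binary masks as follows; the conformity index is taken here as the Jaccard ratio, one common definition that may differ in detail from the paper's, and the `spacing` parameter is an illustrative way to convert voxel offsets to millimetres.

```python
import numpy as np

def conformity_index(a, b):
    """Conformity index of two binary masks, taken here as the Jaccard
    ratio |A ∩ B| / |A ∪ B|."""
    a, b = a.astype(bool), b.astype(bool)
    return (a & b).sum() / (a | b).sum()

def distance_between_centres(a, b, spacing=1.0):
    """Distance between the centres of mass of two binary masks,
    optionally scaled by the voxel spacing."""
    ca = np.argwhere(a).mean(axis=0)
    cb = np.argwhere(b).mean(axis=0)
    return float(np.linalg.norm((ca - cb) * spacing))
```

Two identical masks give CI = 1 and DBC = 0; a rigid shift lowers the CI and shows up directly in the DBC.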
SU-F-J-115: Target Volume and Artifact Evaluation of a New Device-Less 4D CT Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, R; Pan, T
2016-06-15
Purpose: 4DCT is often used in radiation therapy treatment planning to define the extent of motion of the visible tumor (IGTV). Recently available software allows 4DCT images to be created without the use of an external motion surrogate. This study compares this device-less algorithm to a standard device-driven technique (RPM) with regard to artifacts and the creation of treatment volumes. Methods: 34 lung cancer patients who had previously received a cine 4DCT scan on a GE scanner with an RPM-determined respiratory signal were selected. Cine images were sorted into 10 phases based on both the RPM signal and the device-less algorithm. Contours were created on standard and device-less maximum intensity projection (MIP) images using a region growing algorithm and manual adjustment to remove other structures. Variations in measurements due to intra-observer differences in contouring were assessed by repeating a subset of 6 patients 2 additional times. Artifacts in each phase image were assessed using normalized cross correlation at each bed position transition. A score between +1 (artifacts "better" in all phases for device-less) and −1 (RPM similarly better) was assigned for each patient based on these results. Results: Device-less IGTV contours were 2.1 ± 1.0% smaller than standard IGTV contours (not significant, p = 0.15). The Dice similarity coefficient (DSC) was 0.950 ± 0.006, indicating good similarity between the contours. Intra-observer variation resulted in standard deviations of 1.2 percentage points in percent volume difference and 0.005 in DSC measurements. Only two patients had improved artifacts with RPM, and the average artifact score (0.40) was significantly greater than zero. Conclusion: Device-less 4DCT can be used in place of the standard method for target definition due to no observed difference between standard and device-less IGTVs. Phase image artifacts were significantly reduced with the device-less method.
Closed geometric models in medical applications
NASA Astrophysics Data System (ADS)
Jagannathan, Lakshmipathy; Nowinski, Wieslaw L.; Raphel, Jose K.; Nguyen, Bonnie T.
1996-04-01
Conventional surface fitting methods give twisted surfaces and complicate capping closures; this is typical of surfaces that lack rectangular topology. We suggest an algorithm which overcomes these limitations, and present its analysis together with experimental results. The algorithm assumes the mass center lies inside the object. Both capping-closure and twisting problems result from inadequate information on the geometric proximity of points and surfaces that are proximal in the parametric space. Geometric proximity at the contour level is handled by mapping the points along the contour onto a hyper-spherical space: the resulting angular gradation with respect to the centroid is monotonic and hence avoids the twisting problem. Inter-contour geometric proximity is achieved by partitioning the point set based on the angle it makes with the respective centroids. Capping complications are avoided by generating closed cross curves connecting curves which are reflections about the abscissa. The method is of immense use for the generation of deep cerebral structures and is applied to the deep structures generated from the Schaltenbrand-Wahren brain atlas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Folkerts, MM; University of California San Diego, La Jolla, California; Long, T
Purpose: To provide a tool to generate large sets of realistic virtual patient geometries and beamlet doses for treatment optimization research. This tool enables countless studies exploring the fundamental interplay between patient geometry, objective functions, weight selections, and achievable dose distributions for various algorithms and modalities. Methods: Generating realistic virtual patient geometries requires a small set of real patient data. We developed a normalized patient shape model (PSM) which captures organ and target contours in a correspondence-preserving manner. Using PSM-processed data, we perform principal component analysis (PCA) to extract major modes of variation from the population. These PCA modes can be shared without exposing patient information. The modes are re-combined with different weights to produce sets of realistic virtual patient contours. Because virtual patients lack imaging information, we developed a shape-based dose calculation (SBD) relying on the assumption that the region inside the body contour is water. SBD utilizes a 2D fluence-convolved scatter kernel, derived from Monte Carlo simulations, and can compute both full dose for a given set of fluence maps, or produce a dose matrix (dose per fluence pixel) for many modalities. Combining the shape model with SBD provides the data needed for treatment plan optimization research. Results: We used PSM to capture organ and target contours for 96 prostate cases, extracted the first 20 PCA modes, and generated 2048 virtual patient shapes by randomly sampling mode scores. Nearly half of the shapes were thrown out for failing anatomical checks; the remaining 1124 were used in computing dose matrices via SBD and a standard 7-beam protocol. As a proof of concept, and to generate data for later study, we performed fluence map optimization emphasizing PTV coverage.
Conclusions: We successfully developed and tested a tool for creating customizable sets of virtual patients suitable for large-scale radiation therapy optimization research.
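The PCA shape model and the virtual-shape sampling can be sketched as follows; this is a minimal illustration with hypothetical function names, omitting the correspondence-preserving PSM normalization, the anatomical plausibility checks, and the SBD dose engine.

```python
import numpy as np

def fit_shape_model(X, n_modes):
    """PCA shape model: mean shape plus the top modes of variation of a
    population of corresponding contour-point vectors (rows of X)."""
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    var = s ** 2 / (len(X) - 1)          # variance explained by each mode
    return mean, Vt[:n_modes], var[:n_modes]

def sample_shapes(mean, modes, var, n, rng):
    """Virtual shapes: mean + random mode scores scaled by each mode's
    standard deviation, as in the virtual-patient generation."""
    scores = rng.standard_normal((n, len(modes))) * np.sqrt(var)
    return mean + scores @ modes
```

Note that only the mean, modes, and variances need to be shared to generate new shapes; no individual patient vector is exposed.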
Image Processing of Porous Silicon Microarray in Refractive Index Change Detection.
Guo, Zhiqing; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola; Li, Chuanxi
2017-06-08
In this paper, a new method is proposed for extracting dots from the reflected-light image of a porous silicon (PSi) microarray. The method consists of three parts: pretreatment, tilt correction and spot segmentation. First, based on the characteristics of the different components in HSV (Hue, Saturation, Value) space, a special pretreatment is applied to the reflected-light image to obtain the contour edges of the array cells. Second, using the geometric relationship of the target object between the initial external rectangle and the minimum bounding rectangle (MBR), a new MBR-based tilt correction algorithm is proposed to adjust the image. Third, based on the specific requirements of reflected-light image segmentation, the array cells in the corrected image are segmented into dots that are as large as possible, with equal distances between them. Experimental results show that the pretreatment step effectively avoids the influence of complex backgrounds and completes the binarization of the image; the tilt correction algorithm has a short computation time, which makes it highly suitable for tilt correction of reflected-light images; and the segmentation algorithm arranges the dots regularly while excluding edges and bright spots. This method can be utilized for fast, accurate and automatic dot extraction from PSi microarray reflected-light images.
Image Processing of Porous Silicon Microarray in Refractive Index Change Detection
Guo, Zhiqing; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola; Li, Chuanxi
2017-01-01
PMID:28594383
Pomegranate MR images analysis using ACM and FCM algorithms
NASA Astrophysics Data System (ADS)
Morad, Ghobad; Shamsi, Mousa; Sedaaghi, M. H.; Alsharif, M. R.
2011-10-01
Segmentation plays an important role in image processing applications, and in this paper the segmentation of pomegranate magnetic resonance (MR) images is explored. Pomegranate has valuable nutritional and medicinal properties, and its maturity indices and the quality of its internal tissues play an important role in sorting; reliable determination of these features cannot easily be achieved by a human operator. Seeds and soft tissues are the main internal components of pomegranate. For research purposes such as non-destructive investigation, in order to determine the ripening index and the percentage of seeds during the growth period, segmentation of the internal structures should be performed as exactly as possible. In this paper, we present an automatic algorithm to segment the internal structure of pomegranate. Since the intensity of the stem and calyx is close to that of the internal tissues, their pixels are usually mislabeled as internal tissue by segmentation algorithms. To solve this problem, the fruit shape is first extracted from the background using an active contour model (ACM); the stem and calyx are then removed using morphological filters; finally, the image is segmented by fuzzy c-means (FCM). The experimental results show an accuracy of 95.91% in the presence of the stem and calyx, rising to 97.53% when the stem and calyx are first removed by morphological filters.
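The final FCM step can be sketched with a plain fuzzy c-means on a 1-D array of intensities; this is the standard textbook algorithm, not the authors' code, and the parameter values are illustrative.

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=50, seed=0):
    """Fuzzy c-means on intensities: alternate the membership update and
    the weighted-centre update for a fixed number of iterations."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                       # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)  # fuzzily weighted cluster centres
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))          # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u
```

For MR slices the same update runs on the flattened pixel intensities; here two well-separated intensity groups are recovered as the two cluster centres.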
NASA Astrophysics Data System (ADS)
Chioran, Doina; Nicoarǎ, Adrian; Roşu, Şerban; Cǎrligeriu, Virgil; Ianeş, Emilia
2013-10-01
Digital processing of two-dimensional cone beam computed tomography slices starts with identification of the contours of the elements within. This paper presents the collective work of specialists in medicine and in applied mathematics and computer science on elaborating and implementing algorithms for dental 2D imagery.
A Method for Identifying Contours in Processing Digital Images from Computer Tomograph
NASA Astrophysics Data System (ADS)
Roşu, Şerban; Pater, Flavius; Costea, Dan; Munteanu, Mihnea; Roşu, Doina; Fratila, Mihaela
2011-09-01
The first step in digital processing of two-dimensional computed tomography images is to identify the contours of the component elements. This paper presents the collective work of specialists in medicine and in applied mathematics and computer science on elaborating new algorithms and methods for medical 2D and 3D imagery.
An Algorithm for Converting Contours to Elevation Grids.
ERIC Educational Resources Information Center
Reid-Green, Keith S.
Some of the test questions for the National Council of Architectural Registration Boards deal with the site, including drainage, regrading, and the like. Some questions are most easily scored by examining contours, but others, such as water flow questions, are best scored from a grid in which each element is assigned its average elevation. This…
New descriptor for skeletons of planar shapes: the calypter
NASA Astrophysics Data System (ADS)
Pirard, Eric; Nivart, Jean-Francois
1994-05-01
The mathematical definition of the skeleton as the locus of centers of maximal inscribed discs is not directly digitizable. The idea presented in this paper is to incorporate the skeleton information and the chain code of the contour into a single descriptor by associating with each point of the contour the center and radius of the maximal inscribed disc tangent at that point. This new descriptor is called the calypter. Encoding a calypter is a three-stage algorithm: (1) chain coding of the contour; (2) Euclidean distance transformation; (3) climbing the distance relief from each contour point towards the corresponding maximal inscribed disc center. Here we introduce an integer Euclidean distance transform called the holodisc distance transform. Its major interest is to confer 8-connectivity on the isolevels of the generated distance relief, thereby allowing a climbing algorithm to proceed step by step towards the centers of the maximal inscribed discs. The calypter has a cyclic structure offering high-speed access to the skeleton data. Its potential uses are in high-speed Euclidean mathematical morphology, shape processing, and analysis.
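The locus-of-maximal-inscribed-discs definition can be illustrated with SciPy's Euclidean distance transform (the paper's integer holodisc transform is not reproduced here): inside a binary shape, the largest EDT value is the radius of the biggest inscribed disc, attained at its centre.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Binary shape: a filled 21x21 square inside a 31x31 image.
shape = np.zeros((31, 31), bool)
shape[5:26, 5:26] = True

# EDT: each foreground pixel gets its distance to the nearest background
# pixel; the maximum is the maximal inscribed disc radius, at the centre.
edt = distance_transform_edt(shape)
r = edt.max()
cy, cx = np.unravel_index(edt.argmax(), edt.shape)
```

For this square the maximal disc has radius 11 and is centred at pixel (15, 15); the skeleton climbing described in the abstract would walk up this relief from every contour point.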
Computerized adaptive control weld skate with CCTV weld guidance project
NASA Technical Reports Server (NTRS)
Wall, W. A.
1976-01-01
This report summarizes progress of the automatic computerized weld skate development portion of the Computerized Weld Skate with Closed Circuit Television (CCTV) Arc Guidance Project. The main goal of the project is to develop an automatic welding skate demonstration model equipped with CCTV weld guidance. The three main goals of the overall project are to: (1) develop a demonstration model computerized weld skate system, (2) develop a demonstration model automatic CCTV guidance system, and (3) integrate the two systems into a demonstration model of computerized weld skate with CCTV weld guidance for welding contoured parts.
Effective pore size and radius of capture for K(+) ions in K-channels.
Moldenhauer, Hans; Díaz-Franulic, Ignacio; González-Nilo, Fernando; Naranjo, David
2016-02-02
Reconciling protein functional data with crystal structures is arduous because rare conformations or crystallization artifacts may occur. Here we present a tool to validate the dimensions of open pore structures of potassium-selective ion channels. We used freely available algorithms to calculate the molecular contour of the pore and determine the effective internal pore radius (r(E)) in several K-channel crystal structures. r(E) was operationally defined as the radius of the biggest sphere able to enter the pore from the cytosolic side. We obtained consistent r(E) estimates for MthK and Kv1.2/2.1 structures, with r(E) = 5.3-5.9 Å and r(E) = 4.5-5.2 Å, respectively. We compared these structural estimates with functional assessments of the internal mouth radii of capture (r(C)) for two electrophysiological counterparts, the large-conductance calcium-activated K-channel (r(C) = 2.2 Å) and the Shaker Kv-channel (r(C) = 0.8 Å), for the MthK and Kv1.2/2.1 structures, respectively. The differences between r(E) and r(C) yielded consistent radii of 3.1-3.7 Å and 3.6-4.4 Å for hydrated K(+) ions. These hydrated K(+) estimates harmonize with others obtained with diverse experimental and theoretical methods. Thus, these findings validate the MthK and Kv1.2/2.1 structures as templates for open BK and Kv-channels, respectively.
Automatic firearm class identification from cartridge cases
NASA Astrophysics Data System (ADS)
Kamalakannan, Sridharan; Mann, Christopher J.; Bingham, Philip R.; Karnowski, Thomas P.; Gleason, Shaun S.
2011-03-01
We present a machine vision system for automatic identification of the class of firearms by extracting and analyzing two significant properties from spent cartridge cases, namely the Firing Pin Impression (FPI) and the Firing Pin Aperture Outline (FPAO). Within the framework of the proposed machine vision system, a white light interferometer is employed to image the head of the spent cartridge cases. As a first step of the algorithmic procedure, the Primer Surface Area (PSA) is detected using a circular Hough transform. Once the PSA is detected, a customized statistical region-based parametric active contour model is initialized around the center of the PSA and evolved to segment the FPI. Subsequently, the scaled version of the segmented FPI is used to initialize a customized Mumford-Shah based level set model in order to segment the FPAO. Once the shapes of FPI and FPAO are extracted, a shape-based level set method is used in order to compare these extracted shapes to an annotated dataset of FPIs and FPAOs from varied firearm types. A total of 74 cartridge case images non-uniformly distributed over five different firearms are processed using the aforementioned scheme and the promising nature of the results (95% classification accuracy) demonstrate the efficacy of the proposed approach.
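The circular Hough transform used above to detect the Primer Surface Area can be sketched as a plain accumulator over candidate centres at a known radius; this is a generic voting scheme, not the authors' implementation (the synthetic edge points are illustrative):

```python
import numpy as np

def hough_circle_center(edge_points, radius, shape):
    """Circular Hough voting at a known radius: every edge point votes
    for all candidate centres lying 'radius' away from it; the
    accumulator peak is the best-supported centre."""
    acc = np.zeros(shape)
    thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    for y, x in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(acc.argmax(), acc.shape)

# Synthetic edge map: points on a circle of radius 20 centred at (50, 60).
t = np.linspace(0.0, 2.0 * np.pi, 120, endpoint=False)
pts = np.c_[50.0 + 20.0 * np.sin(t), 60.0 + 20.0 * np.cos(t)]
center = hough_circle_center(pts, 20.0, (100, 120))
```

A production system would sweep over a range of radii and keep a 3D accumulator; here the radius is assumed known to keep the sketch short.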
NASA Astrophysics Data System (ADS)
Mazonakis, Michalis; Grinias, Elias; Pagonidis, Konstantin; Tziritas, George; Damilakis, John
2010-02-01
The purpose of this study was to develop and evaluate a semiautomatic method for left ventricular (LV) segmentation on cine MR images and subsequent estimation of cardiac parameters. The study group comprised cardiac MR examinations of 18 consecutive patients with known or suspected coronary artery disease. The new method allowed automatic detection of the LV endocardial and epicardial boundaries on each short-axis cine MR image using a Bayesian flooding segmentation algorithm and weighted least-squares B-spline minimization. Manual editing of the automatic contours could be performed for unsatisfactory segmentation results. The end-diastolic volume (EDV), end-systolic volume (ESV), ejection fraction (EF) and LV mass estimated by the new method were compared with reference values obtained by manually tracing the LV cavity borders. The reproducibility of the new method was determined using data from two independent observers. The mean proportion of endocardial and epicardial outlines not requiring any manual adjustment was more than 80% and 76% of the total contour number per study, respectively. The mean segmentation time including the required manual corrections was 2.3 ± 0.7 min per patient. LV volumes estimated by the semiautomatic method were significantly lower than those by manual tracing (P < 0.05), whereas no difference was found for EF and LV mass (P > 0.05). LV indices estimated by the two methods were well correlated (r >= 0.80). The mean difference between the manual and semiautomatic methods for estimating EDV, ESV, EF and LV mass was 6.1 ± 7.2 ml, 3.0 ± 5.2 ml, -0.6 ± 4.3% and -6.2 ± 12.2 g, respectively. The intraobserver and interobserver variability associated with the semiautomatic determination of LV indices was 0.5-1.2% and 0.8-3.9%, respectively. The estimation of LV parameters with the new semiautomatic segmentation method is technically feasible, highly reproducible and time-efficient.
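The EDV/ESV/EF arithmetic behind such evaluations reduces to disc summation over the segmented short-axis slices; the per-slice areas and slice spacing below are hypothetical, purely to show the computation:

```python
def lv_volume_ml(areas_mm2, slice_thickness_mm):
    """Disc-summation (Simpson) volume: sum of slice area x thickness,
    converted from mm^3 to ml."""
    return sum(areas_mm2) * slice_thickness_mm / 1000.0

# Hypothetical per-slice LV cavity areas (mm^2) at 8 mm slice spacing.
edv = lv_volume_ml([1200, 1500, 1600, 1400, 900], 8.0)  # end-diastole
esv = lv_volume_ml([600, 800, 900, 700, 400], 8.0)      # end-systole
ef = 100.0 * (edv - esv) / edv                          # ejection fraction, %
```

LV mass would follow the same pattern using the volume between epicardial and endocardial contours multiplied by the myocardial density.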
Garrett, Daniel S; Gronenborn, Angela M; Clore, G Marius
2011-12-01
The Contour Approach to Peak Picking was developed to aid in the analysis and interpretation of multidimensional NMR spectra of large biomolecules. In essence, it comprises an interactive graphics software tool to computationally select resonance positions in heteronuclear 3D and 4D spectra. Copyright © 2011. Published by Elsevier Inc.
A graphics package for meteorological data, version 1.5
NASA Technical Reports Server (NTRS)
Moorthi, Shrinivas; Suarez, Max; Phillips, Bill; Schemm, Jae-Kyung; Schubert, Siegfried
1989-01-01
A plotting package has been developed to simplify the task of plotting meteorological data. The calling sequences and examples of high level yet flexible routines which allow contouring, vectors and shading of cylindrical, polar, orthographic and Mollweide (egg) projections are given. Routines are also included for contouring pressure-latitude and pressure-longitude fields with linear or log scales in pressure (interpolation to fixed grid interval is done automatically). Also included is a fairly general line plotting routine. The present version (1.5) produces plots on WMS laser printers and uses graphics primitives from WOLFPLOT.
Zhang, Lian; Wang, Zhi; Shi, Chengyu; Long, Tengfei; Xu, X George
2018-05-30
Deformable image registration (DIR) is the key process for contour propagation and dose accumulation in adaptive radiation therapy (ART). However, ART currently suffers from a lack of understanding of the "robustness" of DIR-based contour propagation and of the subsequent dose variations caused by the algorithm itself and by its preset parameters. The purpose of this research was to evaluate the DIR-caused variations in contour propagation and dose accumulation during ART using the RayStation treatment planning system. Ten head and neck cancer patients were selected for retrospective studies. Contours were drawn by a single radiation oncologist and new treatment plans were generated on the weekly CT scans for all patients. For each DIR process, four deformation vector fields (DVFs) were generated to propagate contours and accumulate weekly dose using the following algorithms: (a) ANACONDA with simple preset parameters, (b) ANACONDA with detailed preset parameters, (c) MORFEUS with simple preset parameters, and (d) MORFEUS with detailed preset parameters. The geometric evaluation considered the DICE coefficient and the Hausdorff distance. The dosimetric evaluation included D95, Dmax, Dmean, Dmin, and the Homogeneity Index. For the geometric evaluation, the DICE coefficient variations of the GTV were found to be 0.78 ± 0.11, 0.96 ± 0.02, 0.64 ± 0.15, and 0.91 ± 0.03 for simple ANACONDA, detailed ANACONDA, simple MORFEUS, and detailed MORFEUS, respectively. For the dosimetric evaluation, the corresponding Homogeneity Index variations were found to be 0.137 ± 0.115, 0.006 ± 0.032, 0.197 ± 0.096, and 0.006 ± 0.033, respectively. Consistent geometric and dosimetric variations were also observed for both large and small organs. Overall, the results demonstrated that contour propagation and dose accumulation in clinical ART were influenced by the DIR algorithm and, to a greater extent, by the preset parameters.
A quality assurance procedure should be established for the proper use of a commercial DIR for adaptive radiation therapy. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
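The DICE coefficient and Hausdorff distance used for the geometric evaluation can be computed as follows; this is a generic sketch with toy binary masks, assuming SciPy's directed Hausdorff routine:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """DICE coefficient of two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Two 10x10 squares shifted by 2 pixels, as stand-ins for contours.
a = np.zeros((20, 20), bool); a[5:15, 5:15] = True
b = np.zeros((20, 20), bool); b[7:17, 5:15] = True

# Symmetric Hausdorff distance between the mask point sets.
pa, pb = np.argwhere(a), np.argwhere(b)
hd = max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
```

For this 2-pixel shift the DICE coefficient is 0.8 and the Hausdorff distance is 2.0; clinical evaluations apply the same formulas to propagated versus reference structure masks.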
Markel, D; Naqa, I El; Freeman, C; Vallières, M
2012-06-01
To present a novel joint segmentation/registration framework for multimodality image-guided and adaptive radiotherapy. A major challenge to this framework is the sensitivity of many segmentation or registration algorithms to noise. Presented is a level set active contour based on the Jensen-Renyi (JR) divergence to achieve improved noise robustness in a multi-modality imaging space. It was found that the JR divergence, when used for segmentation, has improved robustness to noise compared to mutual information or other entropy-based metrics; the MI metric failed at around 2/3 the noise power at which the JR divergence failed. The JR divergence metric is useful for the task of joint segmentation/registration of multimodality images and shows improved results compared to entropy-based metrics. The algorithm can be easily modified to incorporate non-intensity-based images, which would allow applications in multi-modality and texture analysis. © 2012 American Association of Physicists in Medicine.
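One common form of the Jensen-Renyi divergence can be sketched for discrete histograms as the entropy of the weighted mixture minus the weighted mean of the individual entropies; the paper's exact level-set formulation may differ, and alpha and the toy distributions below are illustrative:

```python
import numpy as np

def renyi_entropy(p, alpha=0.5):
    """Renyi entropy H_a(p) = log(sum p^a) / (1 - a)."""
    p = np.asarray(p, float)
    return np.log((p ** alpha).sum()) / (1.0 - alpha)

def jr_divergence(dists, weights, alpha=0.5):
    """Jensen-Renyi divergence of a set of distributions: non-negative
    for alpha in (0, 1), and zero iff all inputs coincide."""
    dists = np.asarray(dists, float)
    w = np.asarray(weights, float)
    mix = w @ dists                                # weighted mixture
    ents = np.array([renyi_entropy(p, alpha) for p in dists])
    return renyi_entropy(mix, alpha) - w @ ents

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.2, 0.7])
d_same = jr_divergence([p, p], [0.5, 0.5])   # identical inputs
d_diff = jr_divergence([p, q], [0.5, 0.5])   # distinct inputs
```

Identical inputs give zero divergence, while distinct histograms give a strictly positive value, which is what lets the divergence drive both alignment and region separation.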
Evaluation of atlas-based auto-segmentation software in prostate cancer patients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenham, Stuart, E-mail: stuart.greenham@ncahs.health.nsw.gov.au; Dean, Jenna; Fu, Cheuk Kuen Kenneth
2014-09-15
The performance and limitations of an atlas-based auto-segmentation software package (ABAS; Elekta Inc.) were evaluated using male pelvic anatomy as the area of interest. Contours from 10 prostate patients were selected to create atlases in ABAS. The contoured regions of interest were created manually to align with published guidelines and included the prostate, bladder, rectum, femoral heads and external patient contour. Twenty-four clinically treated prostate patients were auto-contoured using a randomised selection of two, four, six, eight or ten atlases. The concordance between the manually drawn and computer-generated contours was evaluated statistically using Pearson's product–moment correlation coefficient (r) and clinically in a validated qualitative evaluation. In the latter evaluation, six radiation therapists classified the degree of agreement for each structure using seven clinically appropriate categories. The ABAS software generated clinically acceptable contours for the bladder, rectum, femoral heads and external patient contour. For these structures, ABAS-generated volumes were highly correlated with the manually drawn ‘as treated’ volumes; for four atlases, for example, bladder r = 0.988 (P < 0.001), rectum r = 0.739 (P < 0.001) and left femoral head r = 0.560 (P < 0.001). The poorest results were seen for the prostate (r = 0.401, P < 0.05, four atlases); however, this was attributed to the comparison prostate volume being contoured on magnetic resonance imaging (MRI) rather than computed tomography (CT) data. For all structures, increasing the number of atlases did not consistently improve accuracy. ABAS-generated contours are clinically useful for a range of structures in the male pelvis. Clinically appropriate volumes were created, but editing of some contours was inevitably required. The ideal number of atlases for improving automatically generated contours is yet to be determined.
A new idea for visualization of lesions distribution in mammogram based on CPD registration method.
Pan, Xiaoguang; Qi, Buer; Yu, Hongfei; Wei, Haiping; Kang, Yan
2017-07-20
Mammography is currently the most effective technique for detecting breast cancer. Lesion distribution can provide support for clinical diagnosis and epidemiological studies. We present a new idea to help radiologists study the distribution of breast lesions conveniently, and we developed an automatic tool based on this idea that visualizes the lesion distribution in a standard mammogram. First, a lesion database is established for study; then, breast contours are extracted and different women's mammograms are matched to a standard mammogram; finally, the lesion distribution is shown in the standard mammogram, together with distribution statistics. The crucial step in developing this tool was matching different women's mammograms correctly. We used a hybrid breast contour extraction method combined with the coherent point drift (CPD) method to match the mammograms. We tested the automatic tool on four mass datasets totalling 641 images. The distribution results shown by the tool were consistent with results counted manually from the corresponding reports and mammograms. The registration error was less than 3.3 mm in average distance. The new idea is effective, and the automatic tool provides lesion distribution results consistent with radiologists' manual findings, simply and conveniently.
A hybrid skull-stripping algorithm based on adaptive balloon snake models
NASA Astrophysics Data System (ADS)
Liu, Hung-Ting; Sheu, Tony W. H.; Chang, Herng-Hua
2013-02-01
Skull-stripping is one of the most important preprocessing steps in neuroimage analysis. We propose a hybrid algorithm based on an adaptive balloon snake model to handle this challenging task. The proposed framework consists of two stages: first, fuzzy possibilistic c-means (FPCM) is used for voxel clustering, which provides a labeled image for the snake contour initialization. In the second stage, the contour is initialized outside the brain surface based on the FPCM result and evolves under the guidance of the balloon snake model, which drives the contour with an adaptive inward normal force to capture the boundary of the brain. The similarity indices indicate that our method outperformed the BSE and BET methods in skull-stripping the MR image volumes of the IBSR data set. Experimental results show the effectiveness of this new scheme and its potential in a wide variety of skull-stripping tasks.
The algorithm for automatic detection of the calibration object
NASA Astrophysics Data System (ADS)
Artem, Kruglov; Irina, Ugfeld
2017-06-01
The problem of automatic image calibration is considered in this paper. The most challenging task in automatic calibration is proper detection of the calibration object. Solving this problem required applying digital image processing methods and algorithms such as morphology, filtering, edge detection, and shape approximation. The step-by-step development of the algorithm and its adaptation to the specific conditions of log cuts in the image background is presented. Testing of the automatic calibration module was carried out under the production conditions of a logging enterprise. In the tests, the calibration object was automatically isolated in 86.1% of cases on average, with no type 1 errors. The algorithm was implemented in the automatic calibration module within mobile software for log deck volume measurement.
NASA Technical Reports Server (NTRS)
Waugh, Darryn W.; Plumb, R. Alan
1994-01-01
We present a trajectory technique, contour advection with surgery (CAS), for tracing the evolution of material contours in a specified (including observed) evolving flow. CAS uses the algorithms developed by Dritschel for contour dynamics/surgery to trace the evolution of specified contours. The contours are represented by a series of particles, which are advected by a specified, gridded wind distribution. The resolution of the contours is preserved by continually adjusting the number of particles, and fine-scale features are produced that are not present in the input data (and cannot easily be generated using standard trajectory techniques). The reliability of the CAS procedure, and its dependence on the spatial and temporal resolution of the wind field, are examined by comparison with high-resolution numerical data (from contour dynamics calculations and from a general circulation model) and with routine stratospheric analyses. These comparisons show that the large-scale motions dominate the deformation field and that CAS can accurately reproduce small scales from low-resolution wind fields. The CAS technique therefore enables examination of atmospheric tracer transport at previously unattainable resolution.
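The core CAS idea of advecting contour particles and continually re-resolving them can be sketched as follows; the analytic strain flow, RK2 stepping and insertion threshold are illustrative stand-ins for the gridded winds and Dritschel's full surgery rules:

```python
import numpy as np

def velocity(p):
    """Pure strain flow u = (x, -y): stretches material lines along x
    (a hypothetical stand-in for a gridded wind field)."""
    return np.stack([p[..., 0], -p[..., 1]], axis=-1)

def advect_contour(pts, dt, n_steps, dmax=0.2):
    """Advect contour particles with a midpoint (RK2) step, then insert
    new particles wherever neighbours drift further apart than dmax."""
    pts = np.asarray(pts, float)
    for _ in range(n_steps):
        k1 = velocity(pts)
        pts = pts + dt * velocity(pts + 0.5 * dt * k1)
        seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
        while seg.max() > dmax:                 # node insertion
            i = int(seg.argmax())
            mid = 0.5 * (pts[i] + pts[i + 1])
            pts = np.insert(pts, i + 1, mid, axis=0)
            seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    return pts

# A material line on the x-axis, resolved by 5 particles initially.
line = np.c_[np.linspace(1.0, 2.0, 5), np.zeros(5)]
out = advect_contour(line, 0.05, 40)
```

As the flow stretches the line, the particle count grows so that neighbour spacing never exceeds the threshold, which is how CAS preserves contour resolution in a deforming flow.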
Saliency-aware food image segmentation for personal dietary assessment using a wearable computer
Chen, Hsin-Chen; Jia, Wenyan; Sun, Xin; Li, Zhaoxin; Li, Yuecheng; Fernstrom, John D.; Burke, Lora E.; Baranowski, Thomas; Sun, Mingui
2015-01-01
Image-based dietary assessment has recently received much attention in the community of obesity research. In this assessment, foods in digital pictures are specified, and their portion sizes (volumes) are estimated. Although manual processing is currently the most utilized method, image processing holds much promise since it may eventually lead to automatic dietary assessment. In this paper we study the problem of segmenting food objects from images. This segmentation is difficult because of various food types, shapes and colors, different decorating patterns on food containers, and occlusions of food and non-food objects. We propose a novel method based on a saliency-aware active contour model (ACM) for automatic food segmentation from images acquired by a wearable camera. An integrated saliency estimation approach based on food location priors and visual attention features is designed to produce a salient map of possible food regions in the input image. Next, a geometric contour primitive is generated and fitted to the salient map by means of multi-resolution optimization with respect to a set of affine and elastic transformation parameters. The food regions are then extracted after contour fitting. Our experiments using 60 food images showed that the proposed method achieved significantly higher accuracy in food segmentation when compared to conventional segmentation methods. PMID:26257473
A new fractional order derivative based active contour model for colon wall segmentation
NASA Astrophysics Data System (ADS)
Chen, Bo; Li, Lihong C.; Wang, Huafeng; Wei, Xinzhou; Huang, Shan; Chen, Wensheng; Liang, Zhengrong
2018-02-01
Segmentation of the colon wall plays an important role in advancing computed tomographic colonography (CTC) toward a screening modality. Due to the low contrast of CT attenuation around the colon wall, accurate segmentation of both the inner and outer wall boundaries is very challenging. In this paper, building on the geodesic active contour model, we develop a new model for colon wall segmentation. First, tagged materials in CTC images are automatically removed via a partial volume (PV) based electronic colon cleansing (ECC) strategy. We then present a new fractional-order derivative based active contour model to segment the volumetric colon wall from the cleansed CTC images. In this model, the region-based Chan-Vese model is incorporated as an energy term so that not only edge/gradient information but also region/volume information is taken into account in the segmentation process. Furthermore, a fractional-order derivative energy term is developed to preserve low-frequency information and improve the noise immunity of the new segmentation model. The proposed colon wall segmentation approach was validated on 16 patient CTC scans. Experimental results indicate that the present scheme is very promising for automatic colon wall segmentation, thus facilitating computer-aided detection of initial colonic polyp candidates via CTC.
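A fractional-order derivative of a sampled signal can be sketched with the Grunwald-Letnikov recursion; this generic discretization is one common choice and is not claimed to be the authors' exact energy term:

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_j = (-1)^j * C(alpha, j), computed
    by the recursion w_j = w_{j-1} * (1 - (alpha + 1) / j)."""
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def gl_fractional_derivative(f, alpha, h=1.0):
    """Left-sided GL fractional derivative of a uniformly sampled
    signal: D^a f(x_k) ~ h^(-a) * sum_j w_j * f(x_{k-j})."""
    f = np.asarray(f, float)
    w = gl_weights(alpha, len(f))
    out = np.array([w[:k + 1] @ f[k::-1] for k in range(len(f))])
    return out / h ** alpha

# Sanity check: alpha = 1 reduces to the backward difference f[k] - f[k-1].
f = np.array([0.0, 1.0, 4.0, 9.0, 16.0])
d1 = gl_fractional_derivative(f, 1.0)
half = gl_fractional_derivative(f, 0.5)  # an order-1/2 derivative
```

For non-integer alpha the weights decay slowly, so the operator retains low-frequency information while still responding to edges, which is the property the abstract exploits for noise immunity.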
NASA Astrophysics Data System (ADS)
Babayan, Pavel; Smirnov, Sergey; Strotov, Valery
2017-10-01
This paper describes an aerial object recognition algorithm for on-board and stationary vision systems. The suggested algorithm is intended to recognize objects of a specific kind using a set of reference objects defined by 3D models, and is based on building an outer contour descriptor. The algorithm consists of two stages: learning and recognition. The learning stage explores the reference objects: using the 3D models, we build a database of training images by rendering each 3D model from viewpoints evenly distributed on a sphere, with the sphere points distributed according to the geosphere principle. The gathered training image set is used to calculate descriptors, which are then used in the recognition stage. The recognition stage estimates the similarity of the captured object to the reference objects by matching an observed image descriptor against the reference object descriptors. The experimental research was performed using a set of models of aircraft of different types (airplanes, helicopters, UAVs). The proposed orientation estimation algorithm showed good accuracy in all case studies, and real-time performance of the algorithm in an FPGA-based vision system was demonstrated.
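Even viewpoint sampling on a sphere can be sketched with a Fibonacci lattice; note this is a simple stand-in for the geosphere (subdivided icosahedron) distribution the paper actually uses:

```python
import numpy as np

def fibonacci_sphere(n):
    """Near-even distribution of n viewpoints on the unit sphere
    (Fibonacci lattice; a simple alternative to geosphere subdivision)."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i     # golden-angle longitudes
    z = 1.0 - 2.0 * (i + 0.5) / n              # even latitude bands
    r = np.sqrt(1.0 - z * z)
    return np.c_[r * np.cos(phi), r * np.sin(phi), z]

pts = fibonacci_sphere(200)                    # 200 rendering viewpoints
```

Each point would serve as a camera direction from which the 3D model is rendered to build one training image for the descriptor database.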
Localized Fault Recovery for Nested Fork-Join Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kestor, Gokcen; Krishnamoorthy, Sriram; Ma, Wenjing
Nested fork-join programs scheduled using work stealing can automatically balance load and adapt to changes in the execution environment. In this paper, we design an approach to efficiently recover from faults encountered by these programs. Specifically, we focus on localized recovery of the task space in the presence of fail-stop failures. We present an approach to efficiently track, under work stealing, the relationships between the work executed by various threads. This information is used to identify and schedule the tasks to be re-executed without interfering with normal task execution. The algorithm precisely computes the work lost, incurs minimal re-execution overhead, and can recover from an arbitrary number of failures. Experimental evaluation demonstrates low overheads in the absence of failures, recovery overheads on the same order as the lost work, and much lower recovery costs than alternative strategies.
Gao, Shan; van 't Klooster, Ronald; Brandts, Anne; Roes, Stijntje D; Alizadeh Dehnavi, Reza; de Roos, Albert; Westenberg, Jos J M; van der Geest, Rob J
2017-01-01
To develop and evaluate a method that can fully automatically identify the vessel wall boundaries and quantify the wall thickness for both the common carotid artery (CCA) and the descending aorta (DAO) from axial magnetic resonance (MR) images. 3T MRI data acquired with a T1-weighted gradient-echo black-blood imaging sequence from the carotid (39 subjects) and aorta (39 subjects) were used to develop and test the algorithm. Vessel wall segmentation was achieved by fitting a 3D cylindrical B-spline surface to the boundaries of the lumen and the outer wall, respectively. The tube fitting was based on edge detection performed on the signal intensity (SI) profile along the surface normal. To achieve a fully automated process, a Hough transform (HT) was developed to estimate the lumen centerline and radii of the target vessel. Using the outputs of the HT, a tube model for lumen segmentation was initialized and deformed to fit the image data. Finally, the lumen segmentation was dilated to initiate the adaptation procedure for the outer wall tube. The algorithm was validated by determining: 1) its performance against manual tracing; 2) its interscan reproducibility in quantifying vessel wall thickness (VWT); and 3) its capability of detecting VWT differences in hypertensive patients compared with healthy controls. Statistical analysis, including Bland-Altman analysis, t-tests, and sample size calculation, was performed for the purpose of algorithm evaluation. The mean distance between the manual and automatically detected lumen/outer wall contours was 0.00 ± 0.23/0.09 ± 0.21 mm for the CCA and 0.12 ± 0.24/0.14 ± 0.35 mm for the DAO. No significant difference was observed between the interscan VWT assessments using automated segmentation for either the CCA (P = 0.19) or the DAO (P = 0.94). Both manual and automated segmentation detected significantly higher carotid (P = 0.016 and P = 0.005) and aortic (P < 0.001 and P = 0.021) wall thickness in the hypertensive patients.
A reliable and reproducible pipeline for fully automatic vessel wall quantification was developed and validated on healthy volunteers as well as patients with increased vessel wall thickness. This method holds promise for aiding efficient image interpretation in large-scale cohort studies. Level of Evidence: 4. J. Magn. Reson. Imaging 2017;45:215-228. © 2016 International Society for Magnetic Resonance in Medicine.
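Bland-Altman statistics of the kind reported above reduce to the bias (mean difference) and 95% limits of agreement of the paired differences; the measurement values below are hypothetical, purely to show the computation:

```python
import numpy as np

def bland_altman(manual, automatic):
    """Bias and 95% limits of agreement for paired measurements."""
    d = np.asarray(manual, float) - np.asarray(automatic, float)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired wall-volume measurements from two methods.
manual_vals = [150.0, 142.0, 160.0, 155.0, 148.0]
auto_vals   = [144.0, 139.0, 153.0, 150.0, 141.0]
bias, (lo, hi) = bland_altman(manual_vals, auto_vals)
```

A systematic offset appears as a non-zero bias, while the width of the limits of agreement reflects random disagreement between the two methods.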
User Interaction in Semi-Automatic Segmentation of Organs at Risk: a Case Study in Radiotherapy.
Ramkumar, Anjana; Dolz, Jose; Kirisli, Hortense A; Adebahr, Sonja; Schimek-Jasch, Tanja; Nestle, Ursula; Massoptier, Laurent; Varga, Edit; Stappers, Pieter Jan; Niessen, Wiro J; Song, Yu
2016-04-01
Accurate segmentation of organs at risk is an important step in radiotherapy planning. Since manual segmentation is a tedious procedure and prone to inter- and intra-observer variability, there is a growing interest in automated segmentation methods. However, automatic methods frequently fail to provide satisfactory results, and post-processing corrections are often needed. Semi-automatic segmentation methods are designed to overcome these problems by combining physicians' expertise and computers' potential. This study evaluates two semi-automatic segmentation methods with different types of user interaction, named "strokes" and "contour", to provide insights into the role and impact of human-computer interaction. Two physicians participated in the experiment. In total, 42 case studies were carried out on five different types of organs at risk. For each case study, both the human-computer interaction process and the quality of the segmentation results were measured subjectively and objectively. Furthermore, different measures of the process and the results were correlated. A total of 36 quantifiable and 10 non-quantifiable correlations were identified for each type of interaction. Among those pairs of measures, 20 for the contour method and 22 for the strokes method were strongly or moderately correlated, either directly or inversely. Based on those correlated measures, it is concluded that: (1) in the design of semi-automatic segmentation methods, user interactions need to be less cognitively challenging; (2) based on the observed workflows and preferences of physicians, there is a need for flexibility in the interface design; (3) the correlated measures provide insights that can be used to improve user interaction design.
An efficient algorithm for automatic phase correction of NMR spectra based on entropy minimization
NASA Astrophysics Data System (ADS)
Chen, Li; Weng, Zhiqiang; Goh, LaiYoong; Garland, Marc
2002-09-01
A new algorithm for automatic phase correction of NMR spectra based on entropy minimization is proposed. The optimal zero-order and first-order phase corrections for an NMR spectrum are determined by minimizing entropy. The objective function is constructed using a Shannon-type information entropy measure, computed from the normalized derivative of the NMR spectral data. The algorithm has been successfully applied to experimental 1H NMR spectra. The results of automatic phase correction are found to be comparable to, or perhaps better than, manual phase correction. The advantages of this automatic phase correction algorithm include its simple mathematical basis and the straightforward, reproducible, and efficient optimization procedure. The algorithm is implemented in the Matlab program ACME—Automated phase Correction based on Minimization of Entropy.
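The core of such an entropy-minimization scheme is compact enough to sketch. Below is a minimal, zero-order-only illustration in Python; the actual ACME algorithm also fits a first-order (frequency-linear) term and uses a proper optimizer, and the penalty weight here is an arbitrary illustrative choice, not the paper's.

```python
import numpy as np

def acme_entropy(real_part):
    # Shannon-type entropy of the normalized first derivative: a properly
    # phased absorption spectrum has a concentrated derivative, hence low entropy.
    h = np.abs(np.diff(real_part))
    p = h / h.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def acme_objective(real_part, gamma=1e4):
    # Entropy plus a penalty on negative intensities (the paper combines both;
    # gamma is an illustrative placeholder value).
    neg = real_part[real_part < 0]
    return acme_entropy(real_part) + gamma * float(np.sum(neg ** 2))

def auto_phase_zero_order(spectrum, n_grid=720):
    # Brute-force search over the zero-order phase only, for clarity.
    phis = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    scores = [acme_objective((spectrum * np.exp(1j * p)).real) for p in phis]
    return float(phis[int(np.argmin(scores))])
```

A dispersive (mis-phased) line has a heavier-tailed derivative distribution than an absorptive one, which is why the entropy of the derivative discriminates between them.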
Contour-Driven Atlas-Based Segmentation
Wachinger, Christian; Fritscher, Karl; Sharp, Greg; Golland, Polina
2016-01-01
We propose new methods for automatic segmentation of images based on an atlas of manually labeled scans and contours in the image. First, we introduce a Bayesian framework for creating initial label maps from manually annotated training images. Within this framework, we model various registration- and patch-based segmentation techniques by changing the deformation field prior. Second, we perform contour-driven regression on the created label maps to refine the segmentation. Image contours and image parcellations give rise to non-stationary kernel functions that model the relationship between image locations. Setting the kernel to the covariance function in a Gaussian process establishes a distribution over label maps supported by image structures. Maximum a posteriori estimation of the distribution over label maps conditioned on the outcome of the atlas-based segmentation yields the refined segmentation. We evaluate the segmentation in two clinical applications: the segmentation of parotid glands in head and neck CT scans and the segmentation of the left atrium in cardiac MR angiography images. PMID:26068202
Automatic CT simulation optimization for radiation therapy: A general strategy.
Li, Hua; Yu, Lifeng; Anastasio, Mark A; Chen, Hsin-Chen; Tan, Jun; Gay, Hiram; Michalski, Jeff M; Low, Daniel A; Mutic, Sasa
2014-03-01
In radiation therapy, x-ray computed tomography (CT) simulation protocol specifications should be driven by the treatment planning requirements in lieu of duplicating diagnostic CT screening protocols. The purpose of this study was to develop a general strategy that allows for automatically, prospectively, and objectively determining the optimal patient-specific CT simulation protocols based on radiation-therapy goals, namely, maintenance of contouring quality and integrity while minimizing patient CT simulation dose. The authors proposed a general prediction strategy that provides automatic optimal CT simulation protocol selection as a function of patient size and treatment planning task. The optimal protocol is the one that delivers the minimum dose required to provide a CT simulation scan that yields accurate contours. Accurate treatment plans depend on accurate contours in order to conform the dose to actual tumor and normal organ positions. An image quality index, defined to characterize how simulation scan quality affects contour delineation, was developed and used to benchmark the contouring accuracy and treatment plan quality within the prediction strategy. A clinical workflow was developed to select the optimal CT simulation protocols incorporating patient size, target delineation, and radiation dose efficiency. An experimental study using an anthropomorphic pelvis phantom with added-bolus layers was used to demonstrate how the proposed prediction strategy could be implemented and how the optimal CT simulation protocols could be selected for prostate cancer patients based on patient size and treatment planning task. Clinical IMRT prostate treatment plans for seven CT scans with varied image quality indices were separately optimized and compared to verify the trace of target and organ dosimetry coverage. Based on the phantom study, the optimal image quality index for accurate manual prostate contouring was 4.4.
The optimal tube potentials for patient sizes of 38, 43, 48, 53, and 58 cm were 120, 140, 140, 140, and 140 kVp, respectively, and the corresponding minimum CTDIvol for achieving the optimal image quality index 4.4 were 9.8, 32.2, 100.9, 241.4, and 274.1 mGy, respectively. For patients with lateral sizes of 43-58 cm, 120-kVp scan protocols yielded up to 165% greater radiation dose relative to 140-kVp protocols, and 140-kVp protocols always yielded a greater image quality index compared to the same dose-level 120-kVp protocols. The trace of target and organ dosimetry coverage and the γ passing rates of seven IMRT dose distribution pairs indicated the feasibility of the proposed image quality index for the prediction strategy. A general strategy to predict the optimal CT simulation protocols in a flexible and quantitative way was developed that takes into account patient size, treatment planning task, and radiation dose. The experimental study indicated that the optimal CT simulation protocol and the corresponding radiation dose varied significantly for different patient sizes, contouring accuracy, and radiation treatment planning tasks.
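The selection rule itself (minimum dose among the protocols that meet the image-quality target) reduces to a small filter. A hypothetical sketch using the figures quoted above; the dictionary keys are ours, not the authors':

```python
def select_protocol(protocols, iq_target=4.4):
    # Keep protocols meeting the image-quality target, then take the one
    # with the lowest CTDIvol, i.e. minimum dose that still yields accurate contours.
    ok = [p for p in protocols if p["iq"] >= iq_target]
    return min(ok, key=lambda p: p["ctdi_vol"]) if ok else None

candidates = [
    {"kvp": 120, "ctdi_vol": 9.8, "iq": 4.4},   # values echo the abstract
    {"kvp": 140, "ctdi_vol": 32.2, "iq": 4.6},
    {"kvp": 120, "ctdi_vol": 6.0, "iq": 3.9},   # cheaper but below target
]
best = select_protocol(candidates)
```

In this toy example the 9.8 mGy protocol is chosen: the lower-dose scan fails the quality threshold and is excluded before the dose minimization.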
The research of automatic speed control algorithm based on Green CBTC
NASA Astrophysics Data System (ADS)
Lin, Ying; Xiong, Hui; Wang, Xiaoliang; Wu, Youyou; Zhang, Chuanqi
2017-06-01
Automatic speed control is one of the core technologies of train operation control systems. It is a typical multi-objective optimization problem, in which the controller must balance punctuality, comfort, energy saving, and precise stopping. Automatic speed control is now widely used in metro and inter-city railways, where it has been found to reduce driver workload and improve operation quality. However, current algorithms perform poorly at energy saving, sometimes worse than manual driving. To address this, this paper proposes an automatic speed control algorithm based on the Green CBTC system. The algorithm adjusts the operating state of the train to improve the utilization rate of regenerative braking feedback energy while still meeting the punctuality, comfort, and precise-stopping targets. As a result, the energy consumption of the Green CBTC system is lower than that of a traditional CBTC system. Simulation results show that the algorithm effectively reduces energy consumption by raising the utilization rate of regenerative braking feedback energy.
NASA Astrophysics Data System (ADS)
Roy, Priyanka; Gholami, Peyman; Kuppuswamy Parthasarathy, Mohana; Zelek, John; Lakshminarayanan, Vasudevan
2018-02-01
Segmentation of spectral-domain Optical Coherence Tomography (SD-OCT) images facilitates visualization and quantification of sub-retinal layers for diagnosis of retinal pathologies. However, manual segmentation is subjective, expertise dependent, and time-consuming, which limits the applicability of SD-OCT. Efforts are therefore being made to implement active contours, artificial intelligence, and graph search to automatically segment retinal layers with accuracy comparable to that of manual segmentation and so ease clinical decision-making, although low optical contrast, heavy speckle noise, and pathologies pose challenges to automated segmentation. The graph-based segmentation approach stands out from the rest because of its ability to minimize the cost function while maximizing the flow. This study developed and implemented a shortest-path-based graph-search algorithm for automated intraretinal layer segmentation of SD-OCT images. The algorithm estimates the minimal-weight path between two graph nodes based on their gradients. Boundary position indices (BPI) are computed from the transition between pixel intensities. The mean difference between the BPIs of two consecutive layers quantifies the individual layer thickness, which shows statistically insignificant differences when compared to a previous study [for overall retina: p = 0.17; for individual layers: p > 0.05 (except one layer: p = 0.04)]. These results substantiate the accurate delineation of seven intraretinal boundaries in SD-OCT images by this algorithm, with a mean computation time of 0.93 seconds (64-bit Windows 10, Core i5, 8 GB RAM). Besides being self-reliant for denoising, the algorithm is further computationally optimized to restrict segmentation to a user-defined region of interest. The efficiency and reliability of this algorithm, even in noisy image conditions, make it clinically applicable.
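A minimal-weight path through a gradient-derived node graph can be illustrated with a simple dynamic-programming shortest-path search. This is a generic sketch of the idea (one path per boundary, steps limited to adjacent rows), not the authors' implementation:

```python
import numpy as np

def trace_boundary(grad):
    # grad: 2-D array of vertical-gradient magnitudes (rows x cols).
    # Node cost is inversely related to gradient, so the cheapest
    # left-to-right path hugs the strongest edge; each step may move
    # at most one row up or down.
    cost = 1.0 / (grad + 1e-6)
    rows, cols = grad.shape
    acc = np.empty((rows, cols))
    acc[:, 0] = cost[:, 0]
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - 1), min(rows, r + 2)
            k = int(np.argmin(acc[lo:hi, c - 1]))
            acc[r, c] = cost[r, c] + acc[lo + k, c - 1]
            back[r, c] = lo + k
    # Backtrack from the cheapest endpoint in the last column.
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 1, 0, -1):
        path.append(int(back[path[-1], c]))
    return path[::-1]
```

On a synthetic image whose gradient is strong along a single row, the recovered path follows that row exactly; real OCT data would first need the denoising and gradient computation described in the abstract.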
Morais, Pedro; Vilaça, João L; Queirós, Sandro; Marchi, Alberto; Bourier, Felix; Deisenhofer, Isabel; D'hooge, Jan; Tavares, João Manuel R S
2018-07-01
Image-fusion strategies have been applied to improve inter-atrial septal (IAS) wall minimally-invasive interventions. Hereto, several landmarks are initially identified on richly-detailed datasets throughout the planning stage and then combined with intra-operative images, enhancing the relevant structures and easing the procedure. Nevertheless, such planning is still performed manually, which is time-consuming and not necessarily reproducible, hampering its regular application. In this article, we present a novel automatic strategy to segment the atrial region (left/right atrium and aortic tract) and the fossa ovalis (FO). The method starts by initializing multiple 3D contours based on an atlas-based approach with global transforms only and refining them to the desired anatomy using a competitive segmentation strategy. The obtained contours are then applied to estimate the FO by evaluating both IAS wall thickness and the expected FO spatial location. The proposed method was evaluated in 41 computed tomography datasets, by comparing the atrial region segmentation and FO estimation results against manually delineated contours. The automatic segmentation method presented a performance similar to the state-of-the-art techniques and a high feasibility, failing only in the segmentation of one aortic tract and of one right atrium. The FO estimation method presented an acceptable result in all the patients with a performance comparable to the inter-observer variability. Moreover, it was faster and fully user-interaction free. Hence, the proposed method proved to be feasible to automatically segment the anatomical models for the planning of IAS wall interventions, making it exceptionally attractive for use in the clinical practice. Copyright © 2018 Elsevier B.V. All rights reserved.
An adipose segmentation and quantification scheme for the intra abdominal region on minipigs
NASA Astrophysics Data System (ADS)
Engholm, Rasmus; Dubinskiy, Aleksandr; Larsen, Rasmus; Hanson, Lars G.; Christoffersen, Berit Østergaard
2006-03-01
This article describes a method for automatic segmentation of the abdomen into three anatomical regions: subcutaneous, retroperitoneal and visceral. For the last two regions the amount of adipose tissue (fat) is quantified. According to recent medical research, the distinction between retroperitoneal and visceral fat is important for studying metabolic syndrome, which is closely related to diabetes. However, previous work has neglected to address this point, treating the two types of fat together. We use T1-weighted three-dimensional magnetic resonance data of the abdomen of obese minipigs. The pigs were manually dissected right after the scan, to produce the "ground truth" segmentation. We perform automatic segmentation on a representative slice, which on humans has been shown to correlate with the amount of adipose tissue in the abdomen. The process of automatic fat estimation consists of three steps. First, the subcutaneous fat is removed with a modified active contour approach. The energy formulation of the active contour exploits the homogeneous nature of the subcutaneous fat and the smoothness of the boundary. Subsequently the retroperitoneal fat located around the abdominal cavity is separated from the visceral fat. For this, we formulate a cost function on a contour, based on intensities, edges, distance to center and smoothness, so as to exploit the properties of the retroperitoneal fat. We then globally optimize this function using dynamic programming. Finally, the fat content of the retroperitoneal and visceral regions is quantified based on a fuzzy c-means clustering of the intensities within the segmented regions. The segmentation proved satisfactory by visual inspection, and closely correlated with the manual dissection data. The correlation was 0.89 for the retroperitoneal fat, and 0.74 for the visceral fat.
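The final quantification step, fuzzy c-means clustering of the intensities inside a segmented region, can be sketched as follows. This is a generic 1-D FCM, not the authors' code, and the parameter values are illustrative:

```python
import numpy as np

def fuzzy_cmeans_1d(values, c=2, m=2.0, iters=100, seed=0):
    # Minimal 1-D fuzzy c-means: alternate center and membership updates.
    rng = np.random.default_rng(seed)
    u = rng.random((c, values.size))
    u /= u.sum(axis=0)                     # memberships sum to 1 per voxel
    for _ in range(iters):
        um = u ** m
        centers = um @ values / um.sum(axis=1)          # weighted means
        d = np.abs(values[None, :] - centers[:, None]) + 1e-12
        p = 2.0 / (m - 1.0)
        u = d ** -p / np.sum(d ** -p, axis=0)           # standard FCM update
    return centers, u
```

With c = 2 the two clusters would correspond to fat and non-fat intensities, and the fat fraction follows from the memberships of the brighter cluster.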
Zhang, Ling; Chen, Siping; Chin, Chien Ting; Wang, Tianfu; Li, Shengli
2012-08-01
To assist radiologists and decrease interobserver variability when using 2D ultrasonography (US) to locate the standardized plane of early gestational sac (SPGS) and to perform gestational sac (GS) biometric measurements. In this paper, the authors report the design of the first automatic solution, called "intelligent scanning" (IS), for selecting SPGS and performing biometric measurements using real-time 2D US. First, the GS is efficiently and precisely located in each ultrasound frame by exploiting a coarse to fine detection scheme based on the training of two cascade AdaBoost classifiers. Next, the SPGS are automatically selected by eliminating false positives. This is accomplished using local context information based on the relative position of anatomies in the image sequence. Finally, a database-guided multiscale normalized cuts algorithm is proposed to generate the initial contour of the GS, based on which the GS is automatically segmented for measurement by a modified snake model. This system was validated on 31 ultrasound videos involving 31 pregnant volunteers. The differences between system performance and radiologist performance with respect to SPGS selection and length and depth (diameter) measurements are 7.5% ± 5.0%, 5.5% ± 5.2%, and 6.5% ± 4.6%, respectively. Additional validations prove that the IS precision is in the range of interobserver variability. Our system can display the SPGS along with biometric measurements in approximately three seconds after the video ends, when using a 1.9 GHz dual-core computer. IS of the GS from 2D real-time US is a practical, reproducible, and reliable approach.
Kawalilak, C E; Johnston, J D; Cooper, D M L; Olszynski, W P; Kontulainen, S A
2016-02-01
Precision errors of cortical bone micro-architecture from high-resolution peripheral quantitative computed tomography (pQCT) ranged from 1 to 16 % and did not differ between automatic or manually modified endocortical contour methods in postmenopausal women or young adults. In postmenopausal women, manually modified contours led to generally higher cortical bone properties when compared to the automated method. The first objective of the study was to define in vivo precision errors (coefficient of variation root mean square (CV%RMS)) and least significant change (LSC) for cortical bone micro-architecture using two endocortical contouring methods, automatic (AUTO) and manually modified (MOD), in two groups (postmenopausal women and young adults) from high-resolution pQCT (HR-pQCT) scans. The second was to compare precision errors and bone outcomes obtained with both methods within and between groups. Using HR-pQCT, we scanned the distal radius and tibia of 34 postmenopausal women (mean age ± SD 74 ± 7 years) and 30 young adults (27 ± 9 years) twice. Cortical micro-architecture was determined using AUTO and MOD contour methods. CV%RMS and LSC were calculated. Repeated measures and multivariate ANOVA were used to compare mean CV% and bone outcomes between the methods within and between the groups. Significance was accepted at P < 0.05. CV%RMS ranged from 0.9 to 16.3 %. Within-group precision did not differ between evaluation methods. Compared to young adults, postmenopausal women had better precision for radial cortical porosity (precision difference 9.3 %) and pore volume (7.5 %) with MOD. Young adults had better precision for cortical thickness (0.8 %, MOD) and tibial cortical density (0.2 %, AUTO). In postmenopausal women, MOD resulted in 0.2-54 % higher values for most cortical outcomes, as well as 6-8 % lower radial and tibial cortical BMD and 2 % lower tibial cortical thickness.
Results suggest that AUTO and MOD endocortical contour methods provide comparable repeatability. In postmenopausal women, manual modification of endocortical contours led to generally higher cortical bone properties when compared to the automated method, while no between-method differences were observed in young adults.
Interactive outlining: an improved approach using active contours
NASA Astrophysics Data System (ADS)
Daneels, Dirk; van Campenhout, David; Niblack, Carlton W.; Equitz, Will; Barber, Ron; Fierens, Freddy
1993-04-01
The purpose of our work is to outline objects on images in an interactive environment. We use an improved method based on energy minimizing active contours or "snakes." Kass et al. proposed a variational technique; Amini used dynamic programming; and Williams and Shah introduced a fast, greedy algorithm. We combine the advantages of the latter two methods in a two-stage algorithm. The first stage is a greedy procedure that provides fast initial convergence. It is enhanced with a cost term that extends over a large number of points to avoid oscillations. The second stage, when accuracy becomes important, uses dynamic programming. This step is accelerated by the use of alternating search neighborhoods and by dropping stable points from the iterations. We have also added several features for user interaction. First, the user can define points of high confidence. Mathematically, this results in an extra cost term and, in that way, the robustness in difficult areas (e.g., noisy edges, sharp corners) is improved. We also give the user the possibility of incremental contour tracking, thus providing feedback on the refinement process. The algorithm has been tested on numerous photographic clip art images and extensive tests on medical images are in progress.
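The greedy first stage can be illustrated with a single relaxation pass in which each snake point moves to the neighboring pixel that minimizes a continuity-plus-image energy. This is a simplified sketch; the paper's cost additionally includes the long-range anti-oscillation term and the user-confidence terms:

```python
import numpy as np

def greedy_snake_step(pts, energy_img, alpha=1.0):
    # One greedy pass in the spirit of Williams and Shah: each point moves
    # to the 8-neighborhood position minimizing continuity + image energy.
    new = list(pts)
    rows, cols = energy_img.shape
    for i in range(len(pts)):
        best, best_e = pts[i], float("inf")
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                r, c = pts[i][0] + dr, pts[i][1] + dc
                if not (0 <= r < rows and 0 <= c < cols):
                    continue
                pr, pc = new[i - 1]                    # previous point (closed contour)
                cont = (r - pr) ** 2 + (c - pc) ** 2   # internal continuity term
                e = alpha * cont + energy_img[r, c]    # external (image) energy
                if e < best_e:
                    best_e, best = e, (r, c)
        new[i] = best
    return new
```

Iterating this step until few points move gives the fast initial convergence described above; the accurate dynamic-programming stage would then refine the result.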
Kume, Teruyoshi; Kim, Byeong-Keuk; Waseda, Katsuhisa; Sathyanarayana, Shashidhar; Li, Wenguang; Teo, Tat-Jin; Yock, Paul G; Fitzgerald, Peter J; Honda, Yasuhiro
2013-02-01
The aim of this study was to evaluate a new fully automated lumen border tracing system based on a novel multifrequency processing algorithm. We developed the multifrequency processing method to enhance arterial lumen detection by exploiting the differential scattering characteristics of blood and arterial tissue. The implementation of the method can be integrated into current intravascular ultrasound (IVUS) hardware. This study was performed in vivo with conventional 40-MHz IVUS catheters (Atlantis SR Pro™, Boston Scientific Corp, Natick, MA) in 43 clinical patients with coronary artery disease. A total of 522 frames were randomly selected, and lumen areas were measured after automatically tracing lumen borders with the new tracing system and a commercially available tracing system (TraceAssist™) referred to as the "conventional tracing system." The data assessed by the two automated systems were compared with the results of manual tracings by experienced IVUS analysts. New automated lumen measurements showed better agreement with manual lumen area tracings compared with those of the conventional tracing system (correlation coefficient: 0.819 vs. 0.509). When compared against manual tracings, the new algorithm also demonstrated improved systematic error (mean difference: 0.13 vs. -1.02 mm²) and random variability (standard deviation of difference: 2.21 vs. 4.02 mm²) compared with the conventional tracing system. This preliminary study showed that the novel fully automated tracing system based on the multifrequency processing algorithm can provide more accurate lumen border detection than current automated tracing systems and thus, offer a more reliable quantitative evaluation of lumen geometry. Copyright © 2011 Wiley Periodicals, Inc.
Cellular image segmentation using n-agent cooperative game theory
NASA Astrophysics Data System (ADS)
Dimock, Ian B.; Wan, Justin W. L.
2016-03-01
Image segmentation is an important problem in computer vision and has significant applications in the segmentation of cellular images. Many different imaging techniques exist and produce a variety of image properties which pose difficulties to image segmentation routines. Bright-field images are particularly challenging because of the non-uniform shape of the cells, the low contrast between cells and background, and imaging artifacts such as halos and broken edges. Classical segmentation techniques often produce poor results on these challenging images. Previous attempts at bright-field segmentation are often limited in scope to the particular images they segment. In this paper, we introduce a new algorithm for automatically segmenting cellular images. The algorithm incorporates two game theoretic models which allow each pixel to act as an independent agent with the goal of selecting their best labelling strategy. In the non-cooperative model, the pixels choose strategies greedily based only on local information. In the cooperative model, the pixels can form coalitions, which select labelling strategies that benefit the entire group. Combining these two models produces a method which allows the pixels to balance both local and global information when selecting their label. With the addition of k-means and active contour techniques for initialization and post-processing purposes, we achieve a robust segmentation routine. The algorithm is applied to several cell image datasets including bright-field images, fluorescent images and simulated images. Experiments show that the algorithm produces good segmentation results across the variety of datasets which differ in cell density, cell shape, contrast, and noise levels.
Threshold automatic selection hybrid phase unwrapping algorithm for digital holographic microscopy
NASA Astrophysics Data System (ADS)
Zhou, Meiling; Min, Junwei; Yao, Baoli; Yu, Xianghua; Lei, Ming; Yan, Shaohui; Yang, Yanlong; Dan, Dan
2015-01-01
The conventional quality-guided (QG) phase unwrapping algorithm is hard to apply to digital holographic microscopy because of its long execution time. In this paper, we present a threshold-automatic-selection hybrid phase unwrapping algorithm that combines the existing QG algorithm with the flood-fill (FF) algorithm to solve this problem. The original wrapped phase map is divided into high- and low-quality sub-maps by selecting a threshold automatically, and the FF and QG unwrapping algorithms are then applied to the respective sub-maps to unwrap the phase. The feasibility of the proposed method is demonstrated by experimental results, and the execution speed is shown to be much faster than that of the original QG unwrapping algorithm.
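One plausible way to pick the quality threshold automatically is Otsu's between-class-variance criterion applied to the histogram of the quality map. The paper does not commit to this particular rule; it is shown purely as an illustration of automatic threshold selection:

```python
import numpy as np

def otsu_threshold(q, nbins=64):
    # Split a phase-quality map into high- and low-quality classes by
    # maximizing the between-class variance of the quality histogram.
    hist, edges = np.histogram(q.ravel(), bins=nbins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_v = centers[0], -1.0
    for k in range(1, nbins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:k] * centers[:k]).sum() / w0   # class means
        m1 = (p[k:] * centers[k:]).sum() / w1
        v = w0 * w1 * (m0 - m1) ** 2            # between-class variance
        if v > best_v:
            best_v, best_t = v, centers[k]
    return best_t
```

Pixels above the threshold would go to the fast FF unwrapper and the rest to the slower QG pass, mirroring the hybrid split described above.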
Automatic segmentation of mammogram and tomosynthesis images
NASA Astrophysics Data System (ADS)
Sargent, Dusty; Park, Sun Young
2016-03-01
Breast cancer is one of the most common forms of cancer in terms of new cases and deaths both in the United States and worldwide. However, the survival rate with breast cancer is high if it is detected and treated before it spreads to other parts of the body. The most common screening methods for breast cancer are mammography and digital tomosynthesis, which involve acquiring X-ray images of the breasts that are interpreted by radiologists. The work described in this paper is aimed at optimizing the presentation of mammography and tomosynthesis images to the radiologist, thereby improving the early detection rate of breast cancer and the resulting patient outcomes. Breast cancer tissue has greater density than normal breast tissue, and appears as dense white image regions that are asymmetrical between the breasts. These irregularities are easily seen if the breast images are aligned and viewed side-by-side. However, since the breasts are imaged separately during mammography, the images may be poorly centered and aligned relative to each other, and may not properly focus on the tissue area. Similarly, although a full three dimensional reconstruction can be created from digital tomosynthesis images, the same centering and alignment issues can occur for digital tomosynthesis. Thus, a preprocessing algorithm that aligns the breasts for easy side-by-side comparison has the potential to greatly increase the speed and accuracy of mammogram reading. Likewise, the same preprocessing can improve the results of automatic tissue classification algorithms for mammography. In this paper, we present an automated segmentation algorithm for mammogram and tomosynthesis images that aims to improve the speed and accuracy of breast cancer screening by mitigating the above mentioned problems.
Our algorithm uses information in the DICOM header to facilitate preprocessing, and incorporates anatomical region segmentation and contour analysis, along with a hidden Markov model (HMM) for processing the multi-frame tomosynthesis images. The output of the algorithm is a new set of images that have been processed to show only the diagnostically relevant region and align the breasts so that they can be easily compared side-by-side. Our method has been tested on approximately 750 images, including various examples of mammogram, tomosynthesis, and scanned images, and has correctly segmented the diagnostically relevant image region in 97% of cases.
Fast Solvers for Moving Material Interfaces
2008-01-01
interface method—with the semi-Lagrangian contouring method developed in References [16-20]. We are now finalizing portable C/C++ codes for fast adaptive ... stepping scheme couples a CIR predictor with a trapezoidal corrector using the velocity evaluated from the CIR approximation. It combines the ... formula with efficient geometric algorithms and fast accurate contouring techniques. A modular adaptive implementation with fast new geometry modules
NASA Astrophysics Data System (ADS)
Sukhanov, AY
2017-02-01
We present an approximation of the Voigt contour over selected parameter intervals: for y less than 0.02 and absolute value of x less than 1.6, a simple formula gives a relative error of less than 0.1%, while for the other intervals the use of Hermite quadrature is suggested.
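For the intervals where Hermite quadrature is appropriate, the Voigt function K(x, y) = (y/π) ∫ exp(−t²) / ((x − t)² + y²) dt maps directly onto Gauss-Hermite nodes and weights. A generic sketch, not the paper's specific scheme; it degrades for small y (the near-axis pole defeats the quadrature), which is exactly the regime the paper's simple closed-form formula covers:

```python
import numpy as np

def voigt_hermite(x, y, n=100):
    # Gauss-Hermite quadrature of
    #   K(x, y) = (y / pi) * Integral exp(-t^2) / ((x - t)^2 + y^2) dt.
    # Accurate for moderate-to-large y; small y needs a dedicated formula.
    t, w = np.polynomial.hermite.hermgauss(n)
    return float(y / np.pi * np.sum(w / ((x - t) ** 2 + y ** 2)))
```

As a sanity check, on the line center K(0, y) equals exp(y²)·erfc(y), which the quadrature reproduces closely for y near 1.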
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaffney, David K., E-mail: david.gaffney@hci.utah.edu; King, Bronwyn; Viswanathan, Akila N.
Purpose: The purpose of this study was to develop a radiation therapy (RT) contouring atlas and recommendations for women with postoperative and locally advanced vulvar carcinoma. Methods and Materials: An international committee of 35 expert gynecologic radiation oncologists completed a survey of the treatment of vulvar carcinoma. An initial set of recommendations for contouring was discussed and generated by consensus. Two cases, 1 locally advanced and 1 postoperative, were contoured by 14 physicians. Contours were compared and analyzed using an expectation-maximization algorithm for simultaneous truth and performance level estimation (STAPLE), and a 95% confidence interval contour was developed. The level of agreement among contours was assessed using a kappa statistic. STAPLE contours underwent full committee editing to generate the final atlas consensus contours. Results: Analysis of the 14 contours showed substantial agreement, with kappa statistics of 0.69 and 0.64 for cases 1 and 2, respectively. There was high specificity for both cases (≥99%) and only moderate sensitivity of 71.3% and 64.9% for cases 1 and 2, respectively. Expert review and discussion generated consensus recommendations for contouring target volumes and treatment for postoperative and locally advanced vulvar cancer. Conclusions: These consensus recommendations for contouring and treatment of vulvar cancer identified areas of complexity and controversy. Given the lack of clinical research evidence in vulvar cancer radiation therapy, the committee advocates a conservative and consistent approach using standardized recommendations.
CAD-Based Aerodynamic Design of Complex Configurations using a Cartesian Method
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.
2003-01-01
A modular framework for aerodynamic optimization of complex geometries is developed. By working directly with a parametric CAD system, complex-geometry models are modified and tessellated in an automatic fashion. The use of a component-based Cartesian method significantly reduces the demands on the CAD system, and also provides for robust and efficient flowfield analysis. The optimization is controlled using either a genetic or quasi-Newton algorithm. Parallel efficiency of the framework is maintained even when subject to limited CAD resources by dynamically re-allocating the processors of the flow solver. Overall, the resulting framework can explore designs incorporating large shape modifications and changes in topology.
Uncertainty quantification-based robust aerodynamic optimization of laminar flow nacelle
NASA Astrophysics Data System (ADS)
Xiong, Neng; Tao, Yang; Liu, Zhiyong; Lin, Jun
2018-05-01
The aerodynamic performance of a laminar flow nacelle is highly sensitive to uncertain working conditions, especially surface roughness. An efficient robust aerodynamic optimization method, based on non-deterministic computational fluid dynamics (CFD) simulation and the Efficient Global Optimization (EGO) algorithm, was employed. A non-intrusive polynomial chaos method is used in conjunction with an existing well-verified CFD module to quantify the uncertainty propagation in the flow field. This paper investigates the roughness modeling behavior with the γ-Reθ shear stress transport model, which includes modeling of flow transition and surface roughness effects. The roughness effects are modeled to simulate sand-grain roughness. A Class-Shape Transformation-based parametric description of the nacelle contour, as part of an automatic design evaluation process, is presented. A Design of Experiments (DoE) was performed and a surrogate model was built with the Kriging method. The new nacelle design process demonstrates that significant improvements in both the mean and the variance of the efficiency are achieved, and that the proposed method can be applied successfully to laminar flow nacelle design.
An automatic gore panel mapping system
NASA Technical Reports Server (NTRS)
Shiver, John D.; Phelps, Norman N.
1990-01-01
The Automatic Gore Mapping System is being developed to reduce the time and labor costs associated with manufacturing the External Tank. The present chem-milling processes and procedures are discussed. The simulation of the system has to be downloaded to verify that the simulation package will translate the simulation code into robot code. A simulation of this system also has to be programmed for a gantry robot instead of the articulating robot that is presently in the system. Using the simulation package, it was discovered that the articulating robot cannot reach all the points on some of the panels; therefore, when the system is ready for production, a gantry robot will be used. A hydrosensor system is also being developed to replace the point-to-point contact probe. The hydrosensor will allow the robot to perform a non-contact continuous scan of the panel. It will also provide a faster scan of the panel because it eliminates the in-and-out movement required by the present end effector. The system software is currently being modified so that the hydrosensor will work with the system. The hydrosensor consists of a Krautkramer-Branson transducer encased in a plexiglass nozzle. The water stream pumped through the nozzle is the couplant for the probe. Software is also being written so that the robot will be able to draw the contour lines on the panel, displaying the out-of-tolerance regions. Presently the contour lines can only be displayed on the computer screens. Research is also being performed on improving and automating the method of scribing the panels. Presently the panels are manually scribed with a sharp knife. The use of a low-power laser or water jet is being studied as a method of scribing the panels. The contour-drawing pen will be replaced with a scribing tool and the robot will then move along the contour lines.
With these developments the Automatic Gore Mapping System will provide a reduction in the time and labor costs associated with manufacturing the External Tank. The system also has the potential of inspecting other manufactured parts.
Color and Contour Based Identification of Stem of Coconut Bunch
NASA Astrophysics Data System (ADS)
Kannan Megalingam, Rajesh; Manoharan, Sakthiprasad K.; Reddy, Rajesh G.; Sriteja, Gone; Kashyap, Ashwin
2017-08-01
Vision is a key component of artificial intelligence and automated robotics. Sensors or cameras are the sight organs of a robot; only through these can it locate itself or identify the shape of a regular or an irregular object. This paper presents a method for identifying an object based on color and contour recognition using a camera, through digital image processing techniques, for robotic applications. In order to identify the contour, a shape matching technique is used, which takes input data from the provided database and uses it to identify the contour by checking for a shape match. The shape match is based on the idea of iterating through each contour of the thresholded image. The color is identified on the HSV scale, by approximating the desired range of values from the database. HSV data, along with the iteration, is used for identifying a quadrilateral, which is the required contour. This algorithm could also be used in a non-deterministic plane, using HSV values exclusively.
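The HSV color check described in this abstract can be sketched in a few lines. The helper below is an illustration only, not the authors' pipeline: the hue range and the saturation/value floors (`s_min`, `v_min`) are assumed values for a brown coconut-bunch stem, and only the standard library is used.

```python
import colorsys

def in_hsv_range(rgb, h_range, s_min=0.4, v_min=0.3):
    """Return True if an RGB pixel (0-255 ints) falls in the target hue range.

    h_range is (low, high) in degrees; hues wrap around at 360.
    Saturation/value floors (assumed thresholds) reject washed-out or dark pixels.
    """
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h_deg = h * 360.0
    lo, hi = h_range
    in_hue = lo <= h_deg <= hi if lo <= hi else (h_deg >= lo or h_deg <= hi)
    return in_hue and s >= s_min and v >= v_min

def color_mask(image, h_range):
    """image: nested list of RGB tuples -> binary mask of matching pixels."""
    return [[1 if in_hsv_range(px, h_range) else 0 for px in row] for row in image]
```

A contour found inside such a mask could then be compared against stored shapes, which is the role of the shape-match step in the paper.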
Ecological statistics of Gestalt laws for the perceptual organization of contours.
Elder, James H; Goldberg, Richard M
2002-01-01
Although numerous studies have measured the strength of visual grouping cues for controlled psychophysical stimuli, little is known about the statistical utility of these various cues for natural images. In this study, we conducted experiments in which human participants trace perceived contours in natural images. These contours are automatically mapped to sequences of discrete tangent elements detected in the image. By examining relational properties between pairs of successive tangents on these traced curves, and between randomly selected pairs of tangents, we are able to estimate the likelihood distributions required to construct an optimal Bayesian model for contour grouping. We employed this novel methodology to investigate the inferential power of three classical Gestalt cues for contour grouping: proximity, good continuation, and luminance similarity. The study yielded a number of important results: (1) these cues, when appropriately defined, are approximately uncorrelated, suggesting a simple factorial model for statistical inference; (2) moderate image-to-image variation of the statistics indicates the utility of general probabilistic models for perceptual organization; (3) these cues differ greatly in their inferential power, proximity being by far the most powerful; and (4) statistical modeling of the proximity cue indicates a scale-invariant power law in close agreement with prior psychophysics.
NASA Astrophysics Data System (ADS)
Bouganssa, Issam; Sbihi, Mohamed; Zaim, Mounia
2017-07-01
The 2D Discrete Wavelet Transform (DWT) is a computationally intensive task that is usually implemented on specific architectures in many real-time imaging systems. In this paper, a high-throughput edge/contour detection algorithm is proposed based on the discrete wavelet transform. A technique for applying the filters in the three directions (horizontal, vertical and diagonal) of the image is used to capture the maximum of the existing contours. The proposed architectures were designed in VHDL and mapped to a Xilinx Spartan-6 FPGA. The synthesis results show that the proposed architecture has a low area cost and can operate at up to 100 MHz, which allows 2D wavelet analysis of a sequence of images while maintaining the flexibility of the system to support an adaptive algorithm.
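As a rough software illustration of the idea (the abstract does not specify which wavelet filters the hardware implements, so a single-level Haar transform is assumed here), the three directional detail subbands of a 2D DWT can be combined into an edge map:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar DWT: returns (LL, LH, HL, HH) subbands.

    Assumes the image has even height and width.
    """
    a = img.astype(float)
    # row-wise averaging (lo) and differencing (hi)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # column-wise averaging/differencing of both halves
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0   # approximation
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0   # horizontal detail
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0   # vertical detail
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0   # diagonal detail
    return LL, LH, HL, HH

def wavelet_edge_map(img):
    """Combine horizontal, vertical and diagonal detail energy into one edge map."""
    _, LH, HL, HH = haar_dwt2(img)
    return np.sqrt(LH**2 + HL**2 + HH**2)
```

On a step edge, the detail coefficients straddling the discontinuity dominate the map, which is the behavior the hardware filters exploit.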
The algorithm of fast image stitching based on multi-feature extraction
NASA Astrophysics Data System (ADS)
Yang, Chunde; Wu, Ge; Shi, Jing
2018-05-01
This paper proposes an improved image registration method combining Hu-invariant-moment contour information with feature point detection, aiming to solve problems in traditional image stitching algorithms such as the time-consuming feature point extraction process and the overload of redundant, invalid information. First, the neighborhood of pixels is used to extract contour information, with the Hu invariant moments employed as a similarity measure to extract SIFT feature points in similar regions. Then the Euclidean distance is replaced with a Hellinger kernel function to improve initial matching efficiency and reduce mismatched points, and the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to correct uneven exposure, and an improved multiresolution fusion algorithm fuses the mosaic images to realize seamless stitching. Experimental results confirm the high accuracy and efficiency of the method proposed in this paper.
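The Hellinger-kernel substitution mentioned in the abstract is closely related to the well-known RootSIFT trick: L1-normalize a histogram-like descriptor and take the element-wise square root, after which plain inner products compute the Hellinger kernel. A minimal sketch (not the authors' code; the epsilon guard is an assumption to avoid division by zero):

```python
import numpy as np

def hellinger_map(desc):
    """Map a non-negative descriptor so that inner products in the new space
    compute the Hellinger kernel: L1-normalize, then element-wise sqrt."""
    d = np.asarray(desc, dtype=float)
    d = d / (np.abs(d).sum() + 1e-12)   # L1 normalization
    return np.sqrt(d)

def hellinger_similarity(d1, d2):
    """Hellinger kernel between two histogram-like descriptors (1 = identical)."""
    return float(hellinger_map(d1) @ hellinger_map(d2))
```

With this mapping, the usual Euclidean nearest-neighbour machinery for SIFT matching can be kept unchanged while effectively comparing descriptors under the Hellinger kernel.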
Latest processing status and quality assessment of the GOMOS, MIPAS and SCIAMACHY ESA dataset
NASA Astrophysics Data System (ADS)
Niro, F.; Brizzi, G.; Saavedra de Miguel, L.; Scarpino, G.; Dehn, A.; Fehr, T.; von Kuhlmann, R.
2011-12-01
The GOMOS, MIPAS and SCIAMACHY instruments have been successfully observing the changing Earth's atmosphere since the launch of the ESA ENVISAT platform in March 2002. The measurements recorded by these instruments are relevant to the atmospheric chemistry community both for their time extent and for their variety of observing geometries and techniques. In order to fully exploit these measurements, it is crucial to maintain reliable data processing and distribution and to continuously improve the scientific output. The goal is to meet the evolving needs of both near-real-time and research applications. Within this frame, the ESA operational processor remains the reference code, although many scientific algorithms are nowadays available to users. In fact, the ESA algorithm has a well-established calibration and validation scheme, a certified quality assessment process and the ability to reach a wide user community. Moreover, the ESA algorithm upgrade procedures and the re-processing performance have much improved during the last two years, thanks to recent updates of the Ground Segment infrastructure and overall organization. The aim of this paper is to promote the usage and stress the quality of the ESA operational dataset for the GOMOS, MIPAS and SCIAMACHY missions. The recent upgrades to the ESA processors (GOMOS V6, MIPAS V5 and SCIAMACHY V5) will be presented, with detailed information on improvements in the scientific output and preliminary validation results. The planned algorithm evolution and ongoing re-processing campaigns will be described, including the adoption of advanced set-ups such as the MIPAS V6 re-processing on a cloud-computing system. Finally, the quality control process that guarantees a standard of quality to users will be illustrated.
In fact, the operational ESA algorithm is carefully tested before being switched into operations, and the near-real-time and off-line production is thoroughly verified via automatic quality control procedures. The scientific validity of the ESA dataset will additionally be illustrated with examples of applications it can support, such as ozone-hole monitoring, volcanic ash detection and analysis of atmospheric composition changes over past years.
NASA Astrophysics Data System (ADS)
Lindsay, D.
1985-02-01
Research on the automatic computer analysis of intonation using linguistic knowledge is described, covering the use of computer programs to analyze and classify fundamental frequency (F0) contours, work on the psychophysics of British English intonation, and the phonetics of F0 contours. Results suggest that F0 can be conveniently tracked to represent intonation through time, and the track can subsequently be used by a computer program as the basis for analysis. Nuclear intonation was studied, where the intonational nucleus is the region of auditory prominence, or information focus, found in all spoken sentences. The main mechanism behind such prominence is the perception of an extensive F0 movement on the nuclear syllable. A classification of the nuclear contour shape is a classification of the sentence type, often into categories that cannot be readily determined from the segmental phonemes of the utterance alone.
Atlas-based segmentation of 3D cerebral structures with competitive level sets and fuzzy control.
Ciofolo, Cybèle; Barillot, Christian
2009-06-01
We propose a novel approach for the simultaneous segmentation of multiple structures with competitive level sets driven by fuzzy control. To this end, several contours evolve simultaneously toward previously defined anatomical targets. A fuzzy decision system combines the a priori knowledge provided by an anatomical atlas with the intensity distribution of the image and the relative position of the contours. This combination automatically determines the directional term of the evolution equation of each level set. This leads to a local expansion or contraction of the contours, in order to match the boundaries of their respective targets. Two applications are presented: the segmentation of the brain hemispheres and the cerebellum, and the segmentation of deep internal structures. Experimental results on real magnetic resonance (MR) images are presented, quantitatively assessed and discussed.
Ji, Fuhai; Li, Jian; Fleming, Neal; Rose, David; Liu, Hong
2015-08-01
Phenylephrine is often used to treat intra-operative hypotension. Previous studies have shown that the FloTrac cardiac monitor may overestimate cardiac output (CO) changes following phenylephrine administration. A new algorithm (4th generation) has been developed to improve performance in this setting. We performed a prospective observational study to assess the effects of phenylephrine administration on CO values measured by the 3rd and 4th generation FloTrac algorithms. 54 patients were enrolled in this study. We used the Nexfin, a pulse contour method shown to be insensitive to vasopressor administration, as the reference method. Radial arterial pressures were recorded continuously in patients undergoing surgery. Phenylephrine administration times were documented. Arterial pressure recordings were subsequently analyzed offline using three different pulse contour analysis algorithms: FloTrac 3rd generation (G3), FloTrac 4th generation (G4), and Nexfin (nf). One minute of hemodynamic measurements was analyzed immediately before phenylephrine administration and then repeated when the mean arterial pressure peaked. A total of 157 (4.6 ± 3.2 per patient, range 1-15) paired sets of hemodynamic recordings were analyzed. Phenylephrine induced a significant increase in stroke volume (SV) and CO with the FloTrac G3, but not with FloTrac G4 or Nexfin algorithms. Agreement between FloTrac G3 and Nexfin was: 0.23 ± 1.19 l/min and concordance was 51.1%. In contrast, agreement between FloTrac G4 and Nexfin was: 0.19 ± 0.86 l/min and concordance was 87.2%. In conclusion, the pulse contour method of measuring CO, as implemented in FloTrac 4th generation algorithm, has significantly improved its ability to track the changes in CO induced by phenylephrine.
Effective pore size and radius of capture for K+ ions in K-channels
Moldenhauer, Hans; Díaz-Franulic, Ignacio; González-Nilo, Fernando; Naranjo, David
2016-01-01
Reconciling protein functional data with crystal structures is arduous because rare conformations or crystallization artifacts occur. Here we present a tool to validate the dimensions of open pore structures of potassium-selective ion channels. We used freely available algorithms to calculate the molecular contour of the pore and determine the effective internal pore radius (rE) in several K-channel crystal structures. rE was operationally defined as the radius of the biggest sphere able to enter the pore from the cytosolic side. We obtained consistent rE estimates for the MthK and Kv1.2/2.1 structures, with rE = 5.3–5.9 Å and rE = 4.5–5.2 Å, respectively. We compared these structural estimates with functional assessments of the internal mouth radii of capture (rC) for two electrophysiological counterparts, the large-conductance calcium-activated K-channel (rC = 2.2 Å) and the Shaker Kv-channel (rC = 0.8 Å), for the MthK and Kv1.2/2.1 structures, respectively. Calculating the difference between rE and rC produced consistent radii of 3.1–3.7 Å and 3.6–4.4 Å for hydrated K+ ions. These hydrated K+ estimates harmonize with others obtained with diverse experimental and theoretical methods. Thus, these findings validate the MthK and Kv1.2/2.1 structures as templates for open BK and Kv-channels, respectively. PMID:26831782
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beaudry, J.; Bergman, A.; British Columbia Cancer Agency - Vancouver Centre, Vancouver, BC
Lung tumours move due to respiratory motion. This is managed during planning by acquiring a 4DCT and capturing the excursion of the GTV (gross tumour volume) throughout the breathing cycle within an IGTV (internal gross tumour volume) contour. Patients undergo a verification cone-beam CT (CBCT) scan immediately prior to treatment. 3D reconstructed images do not account for tumour motion, resulting in image artefacts such as blurring. This may make it difficult to identify the tumour on reconstructed images. It would be valuable to create a 4DCBCT reconstruction of the tumour motion to confirm that it does indeed remain within the planned IGTV. CBCT projections of a Quasar Respiratory Motion Phantom are acquired in Treatment mode (half-fan scan) on a Varian TrueBeam accelerator. This phantom contains a mobile, low-density lung insert with an embedded 3 cm diameter tumour object. It is programmed to create a 15 s periodic, 2 cm (sup/inf) displacement. A Varian Real-time Position Management (RPM) tracking-box is placed on the phantom breathing platform. Breathing phase information is automatically integrated into the projection image files. Using in-house Matlab programs and the RTK (Reconstruction Tool Kit) open-source toolbox, the projections are re-binned into 10 phases and a 4DCBCT scan reconstructed. The planning IGTV is registered to the 4DCBCT and the tumour excursion is verified to remain within the planned contour. This technique successfully reconstructs 4DCBCT images using clinical modes for a breathing phantom. UBC-BCCA ethics approval has been obtained to perform 4DCBCT reconstructions on lung patients (REB#H12-00192). Clinical images will be accrued starting April 2014.
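The phase re-binning step can be sketched as follows. This is an illustrative stand-in for the in-house Matlab/RTK workflow, assuming each projection has already been assigned a breathing phase in [0, 1) from the RPM trace:

```python
import numpy as np

def bin_projections_by_phase(phases, n_bins=10):
    """Assign each CBCT projection to a respiratory phase bin.

    phases: breathing phase per projection in [0, 1) (0 = start of cycle).
    Returns a list of index arrays, one per phase bin; each bin's projections
    would then be reconstructed independently to form one frame of the 4DCBCT.
    """
    phases = np.asarray(phases) % 1.0
    bin_idx = np.minimum((phases * n_bins).astype(int), n_bins - 1)
    return [np.where(bin_idx == b)[0] for b in range(n_bins)]
```

For the 15 s periodic phantom motion described above, the phase of a projection acquired at time `t` seconds would simply be `(t / 15.0) % 1.0`.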
Gradient maintenance: A new algorithm for fast online replanning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahunbay, Ergun E., E-mail: eahunbay@mcw.edu; Li, X. Allen
2015-06-15
Purpose: Clinical use of online adaptive replanning has been hampered by the unpractically long time required to delineate volumes based on the image of the day. The authors propose a new replanning algorithm, named gradient maintenance (GM), which does not require the delineation of organs at risk (OARs), and can enhance automation, drastically reducing planning time and improving consistency and throughput of online replanning. Methods: The proposed GM algorithm is based on the hypothesis that if the dose gradient toward each OAR in daily anatomy can be maintained the same as that in the original plan, the intended plan quality of the original plan would be preserved in the adaptive plan. The algorithm requires a series of partial concentric rings (PCRs) to be automatically generated around the target toward each OAR on the planning and the daily images. The PCRs are used in the daily optimization objective function. The PCR dose constraints are generated with dose–volume data extracted from the original plan. To demonstrate this idea, GM plans generated using daily images acquired using an in-room CT were compared to regular optimization and image guided radiation therapy repositioning plans for representative prostate and pancreatic cancer cases. Results: The adaptive replanning using the GM algorithm, requiring only the target contour from the CT of the day, can be completed within 5 min without using high-power hardware. The obtained adaptive plans were almost as good as the regular optimization plans and were better than the repositioning plans for the cases studied. Conclusions: The newly proposed GM replanning algorithm, requiring only target delineation, not full delineation of OARs, substantially increased planning speed for online adaptive replanning.
The preliminary results indicate that the GM algorithm may be a solution to improve the ability for automation and may be especially suitable for sites with small-to-medium size targets surrounded by several critical structures.
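A toy version of the ring-generation step might look like the following. It is a guess at the geometry only: full concentric rings are labelled outward from the target via a brute-force distance map, whereas the paper's PCRs are partial rings restricted to the direction of each OAR.

```python
import numpy as np

def distance_to_target(mask):
    """Brute-force Euclidean distance from every pixel to the target mask.

    Fine for small illustrative grids; a real implementation would use a
    distance transform.
    """
    ys, xs = np.nonzero(mask)
    tgt = np.stack([ys, xs], axis=1)                  # (N, 2) target coords
    yy, xx = np.indices(mask.shape)
    pts = np.stack([yy.ravel(), xx.ravel()], axis=1)  # (M, 2) all pixels
    d = np.sqrt(((pts[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)).min(1)
    return d.reshape(mask.shape)

def concentric_rings(mask, ring_width, n_rings):
    """Label pixels by ring number (1..n_rings) outward from the target.

    0 marks the target itself and anything beyond the last ring.
    """
    d = distance_to_target(mask)
    rings = np.ceil(d / ring_width).astype(int)
    rings[(rings < 1) | (rings > n_rings)] = 0
    return rings
```

Each labelled ring would then receive a dose constraint derived from the dose–volume data of the original plan, so that the optimizer reproduces the original falloff.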
Ozel, Bora; Sezgin, Billur; Guney, Kirdar; Latifoglu, Osman; Celebi, Cemallettin
2015-02-01
Although aesthetic procedures are known to have a higher impact on women, men have become more inclined toward such procedures over the last decade. To determine the reason behind the increase in demand for male aesthetic procedures and to learn about the expectations and concerns related to body contouring surgery, a prospective questionnaire study was conducted on 200 Turkish males from January 1, 2011 to May 31, 2012. The questionnaire covered demographic information, previous aesthetic procedures, and thoughts on body contouring procedures together with the reasons given. The results showed that 53% of all participants considered undergoing body contouring surgery, giving the reason that they believed their current body structure required it. Of those who did not consider contouring operations, 92.5% said they felt that they did not need such a procedure. Statistical analysis showed that BMI was a significant factor in the decision-making process for wanting to undergo body contouring procedures. The results showed that men's consideration of aesthetic operations depends mainly on perceived necessity and that the most considered region for contouring was the abdominal zone. We can conclude that men are becoming more interested in body contouring operations, and therefore different surgical procedures should be refined and re-defined according to the expectations of this new patient group.
Photomask quality evaluation using lithography simulation and precision SEM image contour data
NASA Astrophysics Data System (ADS)
Murakawa, Tsutomu; Fukuda, Naoki; Shida, Soichi; Iwai, Toshimichi; Matsumoto, Jun; Nakamura, Takayuki; Hagiwara, Kazuyuki; Matsushita, Shohei; Hara, Daisuke; Adamov, Anthony
2012-11-01
To evaluate photomask quality, the current method uses spatial imaging by optical inspection tools. At the 1X nm node this technique reaches a resolution limit because small defects become difficult to extract. To simulate the influence of the mask error-enhancement factor (MEEF) for aggressive OPC at the 1X nm node, wide-FOV contour data and tone information are derived from high-precision SEM images. For this purpose we have developed a new contour data extraction algorithm with sub-nanometer accuracy over a wide field-of-view (FOV) SEM image (for example, more than 10 um x 10 um square). We evaluated the MEEF influence of a high-end photomask pattern using the wide-FOV contour data of the "E3630 MVM-SEM(TM)" and the lithography simulator "TrueMask(TM) DS" of D2S, Inc. As a result, we can detect "invisible defects" through their MEEF influence using the wide-FOV contour data and the lithography simulator.
Research on feature extraction techniques of Hainan Li brocade pattern
NASA Astrophysics Data System (ADS)
Zhou, Yuping; Chen, Fuqiang; Zhou, Yuhua
2016-03-01
Hainan Li brocade skills have been listed as intangible world cultural heritage; therefore, research on Hainan Li brocade patterns plays an important role in passing on Li brocade culture. This paper analyzes the meaning of Li brocade patterns and advances shape feature extraction techniques for original Li brocade patterns based on a contour tracking algorithm. First, edge detection is performed on the design patterns; then a morphological closing operation is used to smooth the image; and finally contour tracking is used to extract the outer contours of the Li brocade patterns. The extracted contour features are processed by means of morphology, and digital characteristics of the contours are obtained from invariant moments. Finally, different Li brocade design patterns are briefly analyzed according to their digital characteristics. The results showed that this pattern extraction method is feasible and effective for Li brocade pattern shapes.
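The invariant-moment step can be illustrated directly: Hu's seven moments, computed from scale-normalized central moments, are unchanged when a pattern is translated (and also under rotation and scaling). A self-contained numpy sketch, not the authors' implementation:

```python
import numpy as np

def hu_moments(img):
    """Compute Hu's seven invariant moments of a 2D binary/intensity image."""
    img = np.asarray(img, dtype=float)
    y, x = np.indices(img.shape)
    m00 = img.sum()
    xb = (x * img).sum() / m00   # centroid
    yb = (y * img).sum() / m00

    def mu(p, q):   # central moments
        return (((x - xb) ** p) * ((y - yb) ** q) * img).sum()

    def eta(p, q):  # scale-normalized central moments
        return mu(p, q) / m00 ** (1 + (p + q) / 2.0)

    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    e30, e03, e21, e12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = e20 + e02
    h2 = (e20 - e02) ** 2 + 4 * e11 ** 2
    h3 = (e30 - 3 * e12) ** 2 + (3 * e21 - e03) ** 2
    h4 = (e30 + e12) ** 2 + (e21 + e03) ** 2
    h5 = ((e30 - 3 * e12) * (e30 + e12) * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
          + (3 * e21 - e03) * (e21 + e03) * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2))
    h6 = ((e20 - e02) * ((e30 + e12) ** 2 - (e21 + e03) ** 2)
          + 4 * e11 * (e30 + e12) * (e21 + e03))
    h7 = ((3 * e21 - e03) * (e30 + e12) * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
          - (e30 - 3 * e12) * (e21 + e03) * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])
```

In a pattern-analysis pipeline like the one described, these seven numbers serve as the "digital characteristics" of an extracted contour, so the same motif drawn at different positions in a brocade yields the same signature.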
Real-Time Symbol Extraction From Grey-Level Images
NASA Astrophysics Data System (ADS)
Massen, R.; Simnacher, M.; Rosch, J.; Herre, E.; Wuhrer, H. W.
1988-04-01
A VME-bus image pipeline processor for extracting vectorized contours from grey-level images in real time is presented. This 3-giga-operations-per-second processor uses large-kernel convolvers and new non-linear neighbourhood processing algorithms to compute true 1-pixel-wide, noise-free contours without thresholding, even from grey-level images with widely varying edge sharpness. The local edge orientation is used as an additional cue to compute a list of vectors describing the closed and open contours in real time and to dump a CAD-like symbolic image description into a symbol memory at pixel clock rate.
Wang, Hui; Vees, Hansjörg; Miralbell, Raymond; Wissmeyer, Michael; Steiner, Charles; Ratib, Osman; Senthamizhchelvan, Srinivasan; Zaidi, Habib
2009-11-01
We evaluate the contribution of (18)F-choline PET/CT to the delineation of gross tumour volume (GTV) in locally recurrent prostate cancer after initial irradiation, using various PET image segmentation techniques. Seventeen patients with local-only recurrent prostate cancer (median = 5.7 years after initial irradiation) were included in the study. Rebiopsies were performed in 10 patients, confirming the local recurrence. Following injection of 300 MBq of (18)F-fluorocholine, dynamic PET frames (3 min each) were reconstructed from the list-mode acquisition. Five PET image segmentation techniques were used to delineate the (18)F-choline-based GTVs. These included manual delineation of contours (GTV(man)) by two teams, each consisting of a radiation oncologist and a nuclear medicine physician; a fixed threshold of 40% and 50% of the maximum signal intensity (GTV(40%) and GTV(50%)); signal-to-background-ratio-based adaptive thresholding (GTV(SBR)); and a region growing (GTV(RG)) algorithm. Geographic mismatches between the GTVs were also assessed using overlap analysis. Inter-observer variability for manual delineation of GTVs was high but not statistically significant (p = 0.459). In addition, the volumes and shapes of GTVs delineated using semi-automated techniques were significantly larger than those of GTVs defined manually. Semi-automated segmentation techniques for (18)F-choline PET-guided GTV delineation resulted in substantially larger GTVs compared to manual delineation and might replace the latter in determining recurrent prostate cancer for partial prostate re-irradiation. The selection of the most appropriate segmentation algorithm still needs to be determined.
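The fixed-threshold techniques (GTV(40%), GTV(50%)) are the simplest of the five to sketch: keep every voxel whose uptake is at least the chosen fraction of the maximum. Illustrative only; real pipelines restrict the search to a region of interest around the tumour.

```python
import numpy as np

def fixed_threshold_gtv(pet, frac=0.40):
    """Segment a PET volume at a fixed fraction of the maximum uptake.

    Returns a boolean mask (the threshold-defined GTV) and its voxel count.
    """
    pet = np.asarray(pet, dtype=float)
    mask = pet >= frac * pet.max()
    return mask, int(mask.sum())
```

Raising `frac` from 0.40 to 0.50 can only shrink the mask, which is one reason the different semi-automated techniques in the study produce systematically different GTV volumes.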
Automatic Syllabification in English: A Comparison of Different Algorithms
ERIC Educational Resources Information Center
Marchand, Yannick; Adsett, Connie R.; Damper, Robert I.
2009-01-01
Automatic syllabification of words is challenging, not least because the syllable is not easy to define precisely. Consequently, no accepted standard algorithm for automatic syllabification exists. There are two broad approaches: rule-based and data-driven. The rule-based method effectively embodies some theoretical position regarding the…
Abbreviation definition identification based on automatic precision estimates.
Sohn, Sunghwan; Comeau, Donald C; Kim, Won; Wilbur, W John
2008-09-25
The rapid growth of biomedical literature presents challenges for automatic text processing, one of which is abbreviation identification. The presence of unrecognized abbreviations in text hinders indexing algorithms and adversely affects information retrieval and extraction. Automatic abbreviation definition identification can help resolve these issues. However, abbreviations and their definitions identified by an automatic process are of uncertain validity, and due to the size of databases such as MEDLINE only a small fraction of abbreviation-definition pairs can be examined manually. An automatic way to estimate the accuracy of abbreviation-definition pairs extracted from text is therefore needed. In this paper we propose an abbreviation definition identification algorithm that employs a variety of strategies to identify the most probable abbreviation definition. In addition, our algorithm produces an accuracy estimate, pseudo-precision, for each strategy without using a human-judged gold standard. The pseudo-precisions determine the order in which the algorithm applies the strategies in seeking to identify the definition of an abbreviation. On the Medstract corpus our algorithm produced 97% precision and 85% recall, which is higher than previously reported results. We also annotated 1250 randomly selected MEDLINE records as a gold standard; on this set we achieved 96.5% precision and 83.2% recall. This compares favourably with the well-known Schwartz and Hearst algorithm. We developed an algorithm for abbreviation identification that uses a variety of strategies to identify the most probable definition for an abbreviation and also produces an estimated accuracy of the result. The process is purely automatic.
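The flavour of such strategies can be illustrated with a simplified Schwartz-Hearst-style matcher (a sketch in the spirit of the baseline mentioned above, not the authors' algorithm): scan the text before "(ABBR)" and match the abbreviation's letters right to left inside a bounded candidate window.

```python
def find_definition(text, abbrev):
    """Schwartz-Hearst-style search for the definition of abbrev in text.

    Looks at the words immediately before '(ABBR)' and matches each
    abbreviation character right-to-left; the first character must begin
    a word. Returns the definition string, or None if no match is found.
    """
    marker = "(" + abbrev + ")"
    pos = text.find(marker)
    if pos < 0:
        return None
    words = text[:pos].split()
    # candidate window: at most min(len(abbrev)+5, 2*len(abbrev)) words
    n = min(len(abbrev) + 5, 2 * len(abbrev))
    candidate = " ".join(words[-n:])
    i, j = len(candidate) - 1, len(abbrev) - 1
    while j >= 0 and i >= 0:
        if candidate[i].lower() == abbrev[j].lower():
            if j == 0:
                if i == 0 or candidate[i - 1] == " ":
                    return candidate[i:]   # first letter starts a word: done
                # mid-word hit for the first letter: keep scanning left
            else:
                j -= 1
        i -= 1
    return None
```

A pseudo-precision scheme like the paper's would then score how often each such strategy's output agrees with held-out evidence, without a human gold standard.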
NASA Astrophysics Data System (ADS)
Gan, Yu; Tsay, David; Amir, Syed B.; Marboe, Charles C.; Hendon, Christine P.
2016-03-01
Remodeling of the myocardium is associated with increased risk of arrhythmia and heart failure. Our objective is to automatically identify regions of fibrotic myocardium, dense collagen, and adipose tissue, which can serve as a way to guide radiofrequency ablation therapy or endomyocardial biopsies. Using computer vision and machine learning, we present an automated algorithm to classify tissue compositions from cardiac optical coherence tomography (OCT) images. Three-dimensional OCT volumes were obtained from 15 human hearts ex vivo within 48 hours of donor death (source, NDRI). We first segmented B-scans using a graph searching method. We estimated the boundary of each region by minimizing a cost function, which consisted of intensity, gradient, and contour smoothness terms. Then, features including texture measures, optical properties, and statistics of higher moments were extracted. We used a statistical model, the relevance vector machine, and trained this model with the above-mentioned features to classify tissue compositions. To validate our method, we applied our algorithm to 77 volumes. The validation datasets were manually segmented and classified by two investigators who were blind to our algorithm results and identified the tissues based on trichrome histology and pathology. The difference between automated and manual segmentation was 51.78 ± 50.96 μm. Experiments showed that the attenuation coefficients of dense collagen were significantly different from other tissue types (P < 0.05, ANOVA). Importantly, myocardial fibrosis tissues differed from normal myocardium in entropy and kurtosis. The tissue types were classified with an accuracy of 84%. The results show good agreement with histology.
Corner detection and sorting method based on improved Harris algorithm in camera calibration
NASA Astrophysics Data System (ADS)
Xiao, Ying; Wang, Yonghong; Dan, Xizuo; Huang, Anqi; Hu, Yue; Yang, Lianxiang
2016-11-01
In the traditional Harris corner detection algorithm, the threshold used to eliminate false corners is selected manually. In order to detect corners automatically, an improved algorithm combining Harris with the circular boundary theory of corners is proposed in this paper. After accurate corner coordinates are detected using the Harris and Forstner algorithms, false corners within the chessboard pattern of the calibration plate can be eliminated automatically using circular boundary theory. Moreover, a corner sorting method based on an improved calibration plate is proposed to eliminate false background corners and sort the remaining corners in order. Experimental results show that the proposed algorithms can eliminate all false corners and sort the remaining corners correctly and automatically.
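For reference, a plain-numpy Harris response with a fraction-of-maximum automatic threshold can be sketched as below. The fixed fraction is a much cruder criterion than the circular boundary theory used in the paper; it is shown only to make the response itself concrete.

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response using central-difference gradients and a
    3x3 summation window (no external image libraries)."""
    a = img.astype(float)
    Ix = np.zeros_like(a); Iy = np.zeros_like(a)
    Ix[:, 1:-1] = (a[:, 2:] - a[:, :-2]) / 2.0
    Iy[1:-1, :] = (a[2:, :] - a[:-2, :]) / 2.0

    def box3(m):  # 3x3 box filter by shifted accumulation
        out = np.zeros_like(m)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out[1:-1, 1:-1] += m[1 + dy:m.shape[0] - 1 + dy,
                                     1 + dx:m.shape[1] - 1 + dx]
        return out

    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

def corners_auto(img, frac=0.1):
    """Automatic thresholding: keep pixels whose response exceeds a fixed
    fraction of the maximum response. Returns (row, col) candidate corners."""
    R = harris_response(img)
    return np.argwhere(R > frac * R.max())
```

Edges and flat regions have non-positive response under this formulation, so only neighbourhoods of true corners survive the fraction-of-maximum cut; the paper's circular-boundary test then serves to reject the remaining false positives.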
Optimal Search Strategy for the Definition of a DNAPL Source
2009-08-01
Figure 29. Flow field results for the stochastic model (colored contours) and the potentiometric map created by a hydrogeologist using well water level measurements (black contours). Figure 30. Source search algorithm results (Section 5.1.3).
NASA Astrophysics Data System (ADS)
Wu, T. Y.; Lin, S. F.
2013-10-01
Automatic suspected-lesion extraction is an important application in computer-aided diagnosis (CAD). In this paper, we propose a method to automatically extract suspected parotid regions for clinical evaluation in head and neck CT images. The suspected lesion tissues in low-contrast tissue regions can be localized with feature-based segmentation (FBS) using local texture features, and can be delineated accurately by modified active contour models (ACM). First, the stationary wavelet transform (SWT) is introduced; the derived wavelet coefficients are applied to derive the local features for FBS and to generate enhanced energy maps for the ACM computation. Geometric shape features (GSFs) are proposed to analyze each soft tissue region segmented by FBS; the regions whose GSFs are most similar to those of lesions are extracted, and this information is also applied as the initial condition for the fine delineation computation. Consequently, the suspected lesions can be automatically localized and accurately delineated to aid clinical diagnosis. The performance of the proposed method is evaluated by comparison with results outlined by clinical experts. Experiments on 20 pathological CT data sets show that the true-positive (TP) rate for recognizing parotid lesions is about 94%, and the dimensional accuracy of the delineation results approaches over 93%.
NASA Astrophysics Data System (ADS)
Chan, Kwai H.; Lau, Rynson W.
1996-09-01
Image warping concerns transforming an image from one spatial coordinate system to another. It is widely used for the visual effect of deforming and morphing images in the film industry. A number of warping techniques have been introduced, mainly based on the corresponding-pair mapping of feature points, feature vectors or feature patches (mostly triangular or quadrilateral). However, warping of an image object with an arbitrary shape is often required. This calls for a warping technique based on the boundary contour instead of feature points or feature line-vectors. In addition, when feature point or feature vector based techniques are used, the object boundary must be approximated by points or vectors, and the matching process for the corresponding pairs becomes very time consuming if a fine approximation is required. In this paper, we propose a contour-based warping technique for warping image objects with arbitrary shapes. The novel idea of the new method is the introduction of mathematical morphology, which allows more flexible control of image warping. Two morphological operators are used as contour determinators: the erosion operator warps image content inside a user-specified contour, while the dilation operator warps image content outside the contour. This new method is proposed to assist further development of a semi-automatic motion morphing system when accompanied by robust feature extractors such as deformable templates or active contour models.
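The two morphological operators themselves can be illustrated with minimal binary erosion and dilation over a 3x3 square structuring element. This sketches the morphology only, not the warping that is driven by it; the border convention (outside treated as background for dilation, foreground for erosion) is an assumption.

```python
import numpy as np

def _shift_stack(mask, pad_value):
    """All nine 3x3-neighbourhood shifts of a boolean mask."""
    p = np.pad(mask, 1, constant_values=pad_value)
    h, w = mask.shape
    return [p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

def dilate(mask):
    """Binary dilation by a 3x3 square structuring element (grows the region)."""
    return np.logical_or.reduce(_shift_stack(mask, False))

def erode(mask):
    """Binary erosion by a 3x3 square structuring element (shrinks the region)."""
    return np.logical_and.reduce(_shift_stack(mask, True))
```

Iterating `erode` pulls a region's contour inward and iterating `dilate` pushes it outward, which is exactly the inside/outside control over a user-specified contour that the paper exploits.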
Accurate segmenting of cervical tumors in PET imaging based on similarity between adjacent slices.
Chen, Liyuan; Shen, Chenyang; Zhou, Zhiguo; Maquilan, Genevieve; Thomas, Kimberly; Folkert, Michael R; Albuquerque, Kevin; Wang, Jing
2018-06-01
Because cervical tumors in PET imaging lie close to the bladder, which accumulates a large amount of the excreted 18F-FDG tracer, conventional intensity-based segmentation methods often misclassify the bladder as tumor. Based on the observation that tumor position and area do not change dramatically from slice to slice, we propose a two-stage scheme that facilitates segmentation. In the first stage, we used a graph-cut based algorithm to obtain an initial contour of the tumor from local similarity information between voxels; this was achieved through manual contouring of the cervical tumor on one slice. In the second stage, the initial tumor contours were fine-tuned to a more accurate segmentation by incorporating similarity information on tumor shape and position among adjacent slices, according to an intensity-spatial-distance map. Experimental results illustrate that the proposed two-stage algorithm segments cervical tumors in 3D 18F-FDG PET images more effectively than the benchmarks used for comparison.
Data analysis using scale-space filtering and Bayesian probabilistic reasoning
NASA Technical Reports Server (NTRS)
Kulkarni, Deepak; Kutulakos, Kiriakos; Robinson, Peter
1991-01-01
This paper describes a program for analysis of output curves from a Differential Thermal Analyzer (DTA). The program first extracts probabilistic qualitative features from a DTA curve of a soil sample, and then uses Bayesian probabilistic reasoning to infer the minerals in the soil. The qualifier module employs a simple and efficient extension of scale-space filtering suitable for handling DTA data. We have observed that points can vanish from contours in the scale-space image when filtering operations are not highly accurate. To handle the problem of vanishing points, perceptual organization heuristics are used to group the points into lines. Next, these lines are grouped into contours by using additional heuristics. Probabilities are associated with these contours using domain-specific correlations. A Bayes tree classifier processes probabilistic features to infer the presence of different minerals in the soil. Experiments show that the algorithm that uses domain-specific correlations to infer qualitative features outperforms a domain-independent algorithm that does not.
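The core scale-space idea can be sketched briefly: smooth the same curve at several Gaussian scales and keep the features (here, peaks) that persist across scales. This is a generic illustration with a synthetic curve, not the paper's DTA-specific extension.

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def scale_space_peaks(signal, sigmas):
    # Smooth the curve at several scales and locate maxima (sign changes
    # of the first derivative) at each scale.  Features persisting
    # across scales are taken as real, as in scale-space filtering.
    levels = []
    for s in sigmas:
        sm = np.convolve(signal, gaussian_kernel(s), mode="same")
        d = np.diff(sm)
        peaks = np.nonzero((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
        levels.append(peaks)
    return levels

# Two thermal "events" on a synthetic DTA-like curve.
t = np.linspace(0.0, 1.0, 400)
curve = (np.exp(-(t - 0.3) ** 2 / 0.002)
         + 0.5 * np.exp(-(t - 0.7) ** 2 / 0.002))
levels = scale_space_peaks(curve, sigmas=[2, 4, 8])
```

Both synthetic peaks survive at every scale here; in noisy data, spurious fine-scale peaks vanish as the scale grows, which is what the program's qualifier module exploits.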
Digital signal processing algorithms for automatic voice recognition
NASA Technical Reports Server (NTRS)
Botros, Nazeih M.
1987-01-01
This research investigates the digital signal analysis algorithms currently implemented in automatic voice recognition, i.e., the capability of a computer to recognize and interact with verbal commands. The focus is on digital signal analysis of the speech signal rather than on linguistic analysis. Several digital signal processing algorithms are available for voice recognition, including Linear Predictive Coding (LPC), short-time Fourier analysis, and cepstrum analysis. Among these, LPC is the most widely used: it has a short execution time and does not require large memory storage. However, it has several limitations due to the assumptions used to develop it. The other two are frequency-domain algorithms with fewer assumptions, but they have not been widely implemented or investigated. With recent advances in digital technology, namely signal processors, these two frequency-domain algorithms merit investigation for implementation in voice recognition. This research is concerned with real-time, microprocessor-based recognition algorithms.
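LPC, the most widely used of the algorithms named above, fits an all-pole predictor to the speech signal. A compact NumPy implementation of the classic autocorrelation method with the Levinson-Durbin recursion:

```python
import numpy as np

def lpc(signal, order):
    # Autocorrelation method + Levinson-Durbin recursion.
    n = len(signal)
    r = np.array([np.dot(signal[:n - k], signal[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                      # reflection coefficient
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= (1.0 - k * k)                # residual prediction error
    return -a[1:]        # predictor: s[n] ~ sum_k coef[k] * s[n-1-k]

# A noise-free AR(1) signal s[n] = 0.9*s[n-1] should yield coef ~ [0.9, 0].
signal = 0.9 ** np.arange(200)
coefs = lpc(signal, order=2)
```

The short inner loop and small state are exactly why LPC suits the real-time, microprocessor-based setting the report targets.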
NASA Astrophysics Data System (ADS)
Ehsani, Amir Houshang; Quiel, Friedrich
2009-02-01
In this paper, we demonstrate artificial neural networks—self-organizing map (SOM)—as a semi-automatic method for extraction and analysis of landscape elements in the man and biosphere reserve "Eastern Carpathians". The Shuttle Radar Topography Mission (SRTM) collected data to produce generally available digital elevation models (DEM). Together with Landsat Thematic Mapper data, this provides a unique, consistent and nearly worldwide data set. To integrate the DEM with Landsat data, it was re-projected from geographic coordinates to UTM with 28.5 m spatial resolution using cubic convolution interpolation. To provide quantitative morphometric parameters, first-order (slope) and second-order derivatives of the DEM—minimum curvature, maximum curvature and cross-sectional curvature—were calculated by fitting a bivariate quadratic surface with a window size of 9×9 pixels. These surface curvatures are strongly related to landform features and geomorphological processes. Four morphometric parameters and seven Landsat-enhanced thematic mapper (ETM+) bands were used as input for the SOM algorithm. Once the network weights have been randomly initialized, different learning parameter sets, e.g. initial radius, final radius and number of iterations, were investigated. An optimal SOM with 20 classes using 1000 iterations and a final neighborhood radius of 0.05 provided a low average quantization error of 0.3394 and was used for further analysis. The effect of randomization of initial weights for optimal SOM was also studied. Feature space analysis, three-dimensional inspection and auxiliary data facilitated the assignment of semantic meaning to the output classes in terms of landform, based on morphometric analysis, and land use, based on spectral properties. Results were displayed as thematic map of landscape elements according to form, cover and slope. 
Spectral and morphometric signatures, with corresponding zoom samples superimposed with contour lines, were compared in detail to clarify the role of the morphometric parameters in separating landscape elements. The results revealed the efficiency of SOM for integrating SRTM and Landsat data in landscape analysis. Despite the stochastic nature of SOM, the results in this particular study are not sensitive to the randomization of initial weight vectors if many iterations are used. The procedure is reproducible for the same application with consistent results.
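The training loop and the quantization-error measure discussed above can be sketched with a minimal 1D SOM in NumPy. Parameter values (node count, decay schedules) are illustrative stand-ins for the settings explored in the study:

```python
import numpy as np

def train_som(data, n_nodes=20, iters=1000, r0=3.0, r_final=0.05, seed=0):
    # Weights start random; each step pulls the best-matching unit (BMU)
    # and its (shrinking) neighbourhood toward a randomly drawn sample.
    rng = np.random.default_rng(seed)
    w = rng.uniform(data.min(), data.max(), size=(n_nodes, data.shape[1]))
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = int(np.argmin(((w - x) ** 2).sum(axis=1)))
        radius = r0 * (r_final / r0) ** (t / iters)   # decays r0 -> r_final
        lr = 0.5 * (0.01 / 0.5) ** (t / iters)        # decays 0.5 -> 0.01
        h = np.exp(-(np.arange(n_nodes) - bmu) ** 2 / (2 * radius ** 2))
        w += lr * h[:, None] * (x - w)
    return w

def quantization_error(data, w):
    # Average distance from each sample to its best-matching node.
    d = np.linalg.norm(data[:, None, :] - w[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Two well-separated 2D clusters as stand-ins for input feature vectors.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (100, 2)),
                  rng.normal(3.0, 0.1, (100, 2))])
w = train_som(data, n_nodes=10, iters=2000)
```

After training, the average quantization error drops to roughly the within-cluster spread, which is the same criterion (a low average quantization error) used to select the optimal SOM above.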
A Construction System for CALL Materials from TV News with Captions
NASA Astrophysics Data System (ADS)
Kobayashi, Satoshi; Tanaka, Takashi; Mori, Kazumasa; Nakagawa, Seiichi
Many language learning materials have been published. Although repetition training is clearly necessary in language learning, it is difficult to maintain a learner's interest and motivation with existing materials, because they are limited in scope and content. In addition, the speech sounds used in most materials may not be natural in various situations. Nowadays, some TV news programs (CNN, ABC, PBS, NHK, etc.) carry closed or open captions corresponding to the announcer's speech. We have developed a system that builds Computer Assisted Language Learning (CALL) materials from such captioned newscasts, both for English learning by Japanese students and for Japanese learning by foreign students. The system computes the synchronization between captions and speech using HMMs and a forced-alignment algorithm. Materials made by the system offer the following functions: full/partial caption display, repeated listening, consulting an electronic dictionary, display of the user's and announcer's sound waveforms and pitch contours, and automatic construction of dictation tests. The materials have the advantages of polite, natural speech and varied, timely topics, and they open the possibility of automatically creating listening-comprehension tests and of storing and retrieving many materials. In this paper, we first present the organization of the system and then describe the results of questionnaires from trial use of the materials. The synchronization between captions and speech proved sufficiently accurate, and the overall results encourage further research on this system.
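Forced alignment with HMMs solves a dynamic-programming problem: find the lowest-cost monotone correspondence between two sequences. As a simple stand-in (not the system's HMM implementation), dynamic time warping over two toy 1D feature sequences shows the same mechanics:

```python
def dtw_align(a, b, dist=lambda x, y: abs(x - y)):
    # Fill the cumulative-cost table.
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    # Backtrack to recover which elements of a and b line up.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((D[i - 1][j - 1], i - 1, j - 1),
                      (D[i - 1][j], i - 1, j),
                      (D[i][j - 1], i, j - 1))
    return D[n][m], path[::-1]

# Two "feature sequences" that differ by one repeated frame align at cost 0.
cost, path = dtw_align([1, 2, 3, 4], [1, 1, 2, 3, 4])
```

In the real system the sequences would be acoustic features and caption-derived phone models, and the recovered path gives the caption-to-speech time synchronization.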
Automatic intraaortic balloon pump timing using an intrabeat dicrotic notch prediction algorithm.
Schreuder, Jan J; Castiglioni, Alessandro; Donelli, Andrea; Maisano, Francesco; Jansen, Jos R C; Hanania, Ramzi; Hanlon, Pat; Bovelander, Jan; Alfieri, Ottavio
2005-03-01
The efficacy of intraaortic balloon counterpulsation (IABP) during arrhythmic episodes is questionable. A novel algorithm for intrabeat prediction of the dicrotic notch was used for real time IABP inflation timing control. A windkessel model algorithm was used to calculate real-time aortic flow from aortic pressure. The dicrotic notch was predicted using a percentage of calculated peak flow. Automatic inflation timing was set at intrabeat predicted dicrotic notch and was combined with automatic IAB deflation. Prophylactic IABP was applied in 27 patients with low ejection fraction (< 35%) undergoing cardiac surgery. Analysis of IABP at a 1:4 ratio revealed that IAB inflation occurred at a mean of 0.6 +/- 5 ms from the dicrotic notch. In all patients accurate automatic timing at a 1:1 assist ratio was performed. Seventeen patients had episodes of severe arrhythmia, the novel IABP inflation algorithm accurately assisted 318 of 320 arrhythmic beats at a 1:1 ratio. The novel real-time intrabeat IABP inflation timing algorithm performed accurately in all patients during both regular rhythms and severe arrhythmia, allowing fully automatic intrabeat IABP timing.
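The inflation-timing rule described above can be sketched numerically: derive flow from pressure with a two-element windkessel model, then predict the dicrotic notch where the calculated flow falls to a percentage of its peak. R, C, and the percentage below are illustrative values, not the study's calibrated parameters:

```python
import numpy as np

def predict_notch(pressure, dt, R=1.0, C=1.2, frac=0.3):
    # Two-element windkessel: flow Q(t) = C*dP/dt + P/R.
    q = C * np.gradient(pressure, dt) + pressure / R
    peak = int(np.argmax(q))
    # Predict the notch at the first sample after peak flow where the
    # calculated flow drops below a fixed fraction of its peak.
    below = np.nonzero(q[peak:] < frac * q[peak])[0]
    return peak + int(below[0]) if below.size else None

# Synthetic beat: a systolic pressure bump on a diastolic baseline (mmHg).
dt = 0.002
t = np.arange(0.0, 0.8, dt)
pressure = 80 + 40 * np.exp(-(t - 0.15) ** 2 / 0.004)
idx = predict_notch(pressure, dt)
```

Because the rule only needs the pressure samples seen so far within the current beat, it can run intrabeat, which is what makes the timing robust to arrhythmia.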
NASA Technical Reports Server (NTRS)
Clement, Warren F.; Gorder, Peter J.; Jewell, Wayne F.
1991-01-01
Developing a single-pilot, all-weather nap-of-the-earth (NOE) capability requires fully automatic NOE (ANOE) navigation and flight control. Innovative guidance and control concepts are investigated in a four-fold research effort that: (1) organizes the on-board computer-based storage and real-time updating of NOE terrain profiles and obstacles in course-oriented coordinates indexed to the mission flight plan; (2) defines a class of automatic anticipative pursuit guidance algorithms and necessary data preview requirements to follow the vertical, lateral, and longitudinal guidance commands dictated by the updated flight profiles; (3) automates a decision-making process for unexpected obstacle avoidance; and (4) provides several rapid response maneuvers. Acquired knowledge from the sensed environment is correlated with prior knowledge of the recorded environment (terrain, cultural features, threats, and targets), which is then used to determine an appropriate evasive maneuver if a nonconformity of the sensed and recorded environments is observed. This four-fold research effort was evaluated in both fixed-base and moving-base real-time piloted simulations, thereby providing a practical demonstration for evaluating pilot acceptance of the automated concepts, supervisory override, manual operation, and re-engagement of the automatic system. Volume one describes the major components of the guidance and control laws as well as the results of the piloted simulations. Volume two describes the complete mathematical model of the fully automatic guidance system for rotorcraft NOE flight following planned flight profiles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Hua, E-mail: huli@radonc.wustl.edu; Chen, Hsin
Purpose: For the first time, MRI-guided radiation therapy systems can acquire cine images to dynamically monitor in-treatment internal organ motion. However, the complex head and neck (H&N) structures and low-contrast/resolution of on-board cine MRI images make automatic motion tracking a very challenging task. In this study, the authors proposed an integrated model-driven method to automatically track the in-treatment motion of the H&N upper airway, a complex and highly deformable region wherein internal motion often occurs in an either voluntary or involuntary manner, from cine MRI images for the analysis of H&N motion patterns. Methods: Considering the complex H&N structures and ensuring automatic and robust upper airway motion tracking, the authors firstly built a set of linked statistical shapes (including face, face-jaw, and face-jaw-palate) using principal component analysis from clinically approved contours delineated on a set of training data. The linked statistical shapes integrate explicit landmarks and implicit shape representation. Then, a hierarchical model-fitting algorithm was developed to align the linked shapes on the first image frame of a to-be-tracked cine sequence and to localize the upper airway region. Finally, a multifeature level set contour propagation scheme was performed to identify the upper airway shape change, frame-by-frame, on the entire image sequence. The multifeature fitting energy, including the information of intensity variations, edge saliency, curve geometry, and temporal shape continuity, was minimized to capture the details of moving airway boundaries. Sagittal cine MR image sequences acquired from three H&N cancer patients were utilized to demonstrate the performance of the proposed motion tracking method. Results: The tracking accuracy was validated by comparing the results to the average of two manual delineations in 50 randomly selected cine image frames from each patient.
The resulting average Dice similarity coefficient (93.28% ± 1.46%) and margin error (0.49 ± 0.12 mm) showed good agreement between the automatic and manual results. The comparison with three other deformable model-based segmentation methods illustrated the superior shape tracking performance of the proposed method. Large interpatient variations of swallowing frequency, swallowing duration, and upper airway cross-sectional area were observed from the testing cine image sequences. Conclusions: The proposed motion tracking method can provide accurate upper airway motion tracking results, and enable automatic and quantitative identification and analysis of in-treatment H&N upper airway motion. By integrating explicit and implicit linked-shape representations within a hierarchical model-fitting process, the proposed tracking method can process complex H&N structures and low-contrast/resolution cine MRI images. Future research will focus on the improvement of method reliability, patient motion pattern analysis for providing more information on patient-specific prediction of structure displacements, and motion effects on dosimetry for better H&N motion management in radiation therapy.
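The Dice similarity coefficient used to validate the tracking is a standard overlap measure between two binary masks. A minimal NumPy version, with toy masks standing in for automatic and manual contours:

```python
import numpy as np

def dice(a, b):
    # DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap.
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

auto = np.zeros((50, 50), dtype=bool)
auto[10:40, 10:40] = True                 # "automatic" contour mask
manual = np.zeros((50, 50), dtype=bool)
manual[12:42, 10:40] = True               # "manual" contour, shifted 2 px
d = dice(auto, manual)                    # 2*840/(900+900)
```

A small rigid shift of an otherwise identical contour already costs several percentage points of DSC, which puts the reported 93.28% average in context.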
Li, Hua; Chen, Hsin-Chen; Dolly, Steven; Li, Harold; Fischer-Valuck, Benjamin; Victoria, James; Dempsey, James; Ruan, Su; Anastasio, Mark; Mazur, Thomas; Gach, Michael; Kashani, Rojano; Green, Olga; Rodriguez, Vivian; Gay, Hiram; Thorstad, Wade; Mutic, Sasa
2016-08-01
Shape and texture fused recognition of flying targets
NASA Astrophysics Data System (ADS)
Kovács, Levente; Utasi, Ákos; Kovács, Andrea; Szirányi, Tamás
2011-06-01
This paper presents visual detection and recognition of flying targets (e.g. planes, missiles) based on automatically extracted shape and object texture information, for application areas like alerting, recognition and tracking. Targets are extracted based on robust background modeling and a novel contour extraction approach, and object recognition is done by comparisons to shape and texture based query results on a previously gathered real life object dataset. Application areas involve passive defense scenarios, including automatic object detection and tracking with cheap commodity hardware components (CPU, camera and GPS).
Path Planning Based on Ply Orientation Information for Automatic Fiber Placement on Mesh Surface
NASA Astrophysics Data System (ADS)
Pei, Jiazhi; Wang, Xiaoping; Pei, Jingyu; Yang, Yang
2018-03-01
This article introduces an investigation of path planning with ply orientation information for automatic fiber placement (AFP) on an open-contoured mesh surface. The new method makes use of the ply orientation information generated by loading characteristics on the surface, divides the surface into several zones according to that information, and then designs different fiber paths in different zones. The article also presents a new idea for up-layer design to compensate for defects between parts and improve the product's strength.
Emerging CFD technologies and aerospace vehicle design
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.
1995-01-01
With the recent focus on the needs of design and applications CFD, research groups have begun to address the traditional bottlenecks of grid generation and surface modeling. Now, a host of emerging technologies promise to shortcut or dramatically simplify the simulation process. This paper discusses the current status of these emerging technologies. It will argue that some tools are already available which can have a positive impact on portions of the design cycle. However, in most cases, these tools need to be integrated into specific engineering systems and process cycles to be used effectively. The rapidly maturing status of unstructured and Cartesian approaches for inviscid simulations suggests the possibility of highly automated Euler-boundary layer simulations with application to loads estimation and even preliminary design. Similarly, technology is available to link block-structured mesh generation algorithms with topology libraries to avoid tedious re-meshing of topologically similar configurations. Work in algorithm-based auto-blocking suggests that domain decomposition and point placement operations in multi-block mesh generation may be properly posed as problems in Computational Geometry, and following this approach may lead to robust algorithmic processes for automatic mesh generation.
A method for smoothing segmented lung boundary in chest CT images
NASA Astrophysics Data System (ADS)
Yim, Yeny; Hong, Helen
2007-03-01
To segment low-density lung regions in chest CT images, most methods use the difference in gray-level value of pixels. However, radiodense pulmonary vessels and pleural nodules that contact the surrounding anatomy are often excluded from the segmentation result. To smooth the lung boundary segmented by gray-level processing in chest CT images, we propose a new method using scan line search. Our method consists of three main steps. First, the lung boundary is extracted by our automatic segmentation method. Second, the segmented lung contour is smoothed in each axial CT slice. We propose a scan line search to track the points on the lung contour and efficiently find rapidly changing curvature. Finally, to provide consistent appearance between lung contours in adjacent axial slices, 2D closing in the coronal plane is applied within a pre-defined subvolume. Our method has been evaluated in terms of visual inspection, accuracy, and processing time. The results show that the smoothness of the lung contour was considerably increased by compensating for pulmonary vessels and pleural nodules.
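Finding "rapidly changing curvature" on a discrete contour, as the scan-line step above does, amounts to measuring the turning angle at each vertex. A small NumPy sketch (the exact curvature measure used by the paper is not specified; turning angle is one common choice):

```python
import numpy as np

def discrete_curvature(points):
    # Absolute turning angle at each vertex of a closed contour.
    p = np.asarray(points, dtype=float)
    v1 = p - np.roll(p, 1, axis=0)        # edge arriving at each vertex
    v2 = np.roll(p, -1, axis=0) - p       # edge leaving each vertex
    a1 = np.arctan2(v1[:, 1], v1[:, 0])
    a2 = np.arctan2(v2[:, 1], v2[:, 0])
    return np.abs(np.angle(np.exp(1j * (a2 - a1))))   # wrapped to [0, pi]

# A 10x10 square traced counter-clockwise: curvature is zero along the
# edges and pi/2 exactly at the four corners.
sq = ([(x, 0) for x in range(10)] + [(9, y) for y in range(1, 10)]
      + [(x, 9) for x in range(8, -1, -1)] + [(0, y) for y in range(8, 0, -1)])
k = discrete_curvature(sq)
```

On a lung contour, vertices with large turning angles mark the sharp indentations left by vessels or nodules, which are then candidates for smoothing.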
Semi-automated contour recognition using DICOMautomaton
NASA Astrophysics Data System (ADS)
Clark, H.; Wu, J.; Moiseenko, V.; Lee, R.; Gill, B.; Duzenli, C.; Thomas, S.
2014-03-01
Purpose: A system has been developed which recognizes and classifies Digital Imaging and Communications in Medicine contour data with minimal human intervention. It allows researchers to overcome obstacles which tax analysis and mining systems, including inconsistent naming conventions and differences in data age or resolution. Methods: Lexicographic and geometric analysis is used for recognition. Well-known lexicographic methods implemented include Levenshtein-Damerau, bag-of-characters, Double Metaphone, Soundex, and (word and character) N-grams. Geometrical implementations include 3D Fourier descriptors, probability spheres, Boolean overlap, simple feature comparison (e.g. eccentricity, volume) and rule-based techniques. Both analyses implement custom, domain-specific modules (e.g. emphasizing the differentiation of left/right organ variants). Contour labels from 60 head and neck patients are used for cross-validation. Results: Mixed lexicographical methods show an effective improvement in more than 10% of recognition attempts compared with a pure Levenshtein-Damerau approach when withholding 70% of the lexicon. Domain-specific and geometrical techniques further boost performance. Conclusions: DICOMautomaton allows users to recognize contours semi-automatically. As usage increases and the lexicon is filled with additional structures, performance improves, increasing the overall utility of the system.
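Two of the lexicographic matchers named above are easy to sketch: plain Levenshtein edit distance and a character-N-gram similarity. The example labels are hypothetical stand-ins for inconsistently named structures:

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance, one row at a time.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def ngram_similarity(a, b, n=2):
    # Dice similarity over character n-gram sets.
    grams = lambda s: {s[i:i + n] for i in range(len(s) - n + 1)}
    ga, gb = grams(a.lower()), grams(b.lower())
    return 2.0 * len(ga & gb) / (len(ga) + len(gb)) if (ga or gb) else 1.0

# Two inconsistent labels for the same structure still score as close.
d = levenshtein("l parotid", "lt_parotid")
s = ngram_similarity("l parotid", "lt_parotid")
```

A recognizer can combine several such scores (plus the geometric features) and pick the lexicon entry with the best aggregate match, which is the mixed-method approach the results describe. (Note this sketch is plain Levenshtein; the Damerau variant also counts transpositions.)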
Goumeidane, Aicha Baya; Nacereddine, Nafaa; Khamadja, Mohammed
2015-01-01
Accurate knowledge of a defect's shape is decisive for the analysis step in automatic radiographic inspection. Image segmentation is carried out on radiographic images to extract defect indications. This paper deals with weld defect delineation in radiographic images. The proposed method is based on a new statistics-based explicit active contour. An association of local and global modeling of the image pixel intensities is used to push the model to the desired boundaries. Furthermore, other strategies are proposed to accelerate its evolution and make the convergence speed depend only on the defect size, such as restricting computation to a band around the active contour curve. The experimental results are very promising: experiments on synthetic and radiographic images show the ability of the proposed model to extract a piecewise-homogeneous object from a very inhomogeneous background, even in a poor-quality image.
3D reconstruction of microminiature objects based on contour line
NASA Astrophysics Data System (ADS)
Li, Cailin; Wang, Qiang; Guo, Baoyun
2009-10-01
A new method for automatic 3D reconstruction of micro solids of revolution is presented in this paper. An image sequence covering 360° of the solid of revolution is acquired in a back-lit rotational photographic mode, with the rotation speed precisely controlled by a motor. First, the height of the turntable, the pixel size, and the rotation axis of the turntable are calibrated. Then, according to the calibrated rotation axis, turntable height, rotation angle, and pixel size, the contour points of each image are transformed into 3D points in the reference coordinate system to generate a point-cloud model. Finally, the geometric surface model of the solid of revolution is obtained from the relationship between adjacent contours. Experimental results on real images are presented, demonstrating the effectiveness of the approach.
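The contour-to-point-cloud transform above can be sketched under a simplifying orthographic-projection assumption (the paper's exact camera model is not given; all parameter names here are illustrative):

```python
import numpy as np

def contour_to_3d(u, v, theta, pixel_size, axis_u, table_height_v):
    # Orthographic back-lit model: the pixel's horizontal offset from the
    # calibrated rotation-axis column gives a signed radius, and its
    # vertical offset from the calibrated turntable row gives the height.
    r = (u - axis_u) * pixel_size
    z = (table_height_v - v) * pixel_size
    return np.array([r * np.cos(theta), r * np.sin(theta), z])

# The same silhouette point imaged half a turn later ends up on the
# opposite side of the rotation axis, at the same height.
p0 = contour_to_3d(120, 40, 0.0, 0.01, axis_u=100, table_height_v=200)
p180 = contour_to_3d(120, 40, np.pi, 0.01, axis_u=100, table_height_v=200)
```

Applying this to every contour pixel of every frame, with theta stepped by the motor's rotation increment, yields the point cloud from which the surface is meshed.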
2016-06-01
Technical Report: Algorithm for Automatic Detection, Localization and Characterization of Magnetic Dipole Targets Using the Laser Scalar Gradiometer
Leon Vaizer, Jesse Angle, Neil...
Jung, Jaehoon; Yoon, Inhye; Paik, Joonki
2016-01-01
This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors, using only a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems. PMID:27347978
HOLA: Human-like Orthogonal Network Layout.
Kieffer, Steve; Dwyer, Tim; Marriott, Kim; Wybrow, Michael
2016-01-01
Over the last 50 years a wide variety of automatic network layout algorithms have been developed. Some are fast heuristic techniques suitable for networks with hundreds of thousands of nodes, while others are multi-stage frameworks for higher-quality layout of smaller networks. However, despite decades of research, no current algorithm produces layouts of quality comparable to those drawn by a human. We give a new "human-centred" methodology for automatic network layout algorithm design that is intended to overcome this deficiency. User studies are first used to identify the aesthetic criteria algorithms should encode; then an algorithm is developed that is informed by these criteria; and finally, a follow-up study evaluates the algorithm's output. We have used this new methodology to develop an automatic orthogonal network layout method, HOLA, that achieves measurably better layout (by user study) than the best available orthogonal layout algorithm and produces layouts of comparable quality to those produced by hand.
Segmentation of breast ultrasound images based on active contours using neutrosophic theory.
Lotfollahi, Mahsa; Gity, Masoumeh; Ye, Jing Yong; Mahlooji Far, A
2018-04-01
Ultrasound imaging is an effective approach for diagnosing breast cancer, but it is highly operator-dependent. Recent advances in computer-aided diagnosis suggest that it can assist physicians in diagnosis, but the region of interest must still be defined before computer analysis. Since manual outlining of the tumor contour is tedious and time-consuming for a physician, developing an automatic segmentation method is important for clinical application. The present paper presents a novel method to segment breast ultrasound images. It utilizes a combination of region-based active contours and neutrosophic theory to overcome the natural properties of ultrasound images, including speckle noise and tissue-related textures. First, due to the inherent speckle noise and low contrast of these images, we utilize a non-local means filter and a fuzzy logic method for denoising and image enhancement, respectively. The paper then presents an improved weighted region-scalable active contour to segment breast ultrasound images using a new feature derived from neutrosophic theory. Applied to 36 breast ultrasound images, the method achieves a true-positive rate of 95%, a false-positive rate of 6%, and a similarity of 90%. The proposed method shows clear advantages over other conventional active contour segmentation methods, i.e., region-scalable fitting energy and weighted region-scalable fitting energy.
Contour Tracking in Echocardiographic Sequences via Sparse Representation and Dictionary Learning
Huang, Xiaojie; Dione, Donald P.; Compas, Colin B.; Papademetris, Xenophon; Lin, Ben A.; Bregasi, Alda; Sinusas, Albert J.; Staib, Lawrence H.; Duncan, James S.
2013-01-01
This paper presents a dynamical appearance model based on sparse representation and dictionary learning for tracking both endocardial and epicardial contours of the left ventricle in echocardiographic sequences. Instead of learning offline spatiotemporal priors from databases, we exploit the inherent spatiotemporal coherence of individual data to constrain cardiac contour estimation. The contour tracker is initialized with a manual tracing of the first frame. It employs multiscale sparse representation of local image appearance and learns online multiscale appearance dictionaries in a boosting framework as the image sequence is segmented frame-by-frame sequentially. The weights of multiscale appearance dictionaries are optimized automatically. Our region-based level set segmentation integrates a spectrum of complementary multilevel information including intensity, multiscale local appearance, and dynamical shape prediction. The approach is validated on twenty-six 4D canine echocardiographic images acquired from both healthy and post-infarct canines. The segmentation results agree well with expert manual tracings. The ejection fraction estimates also show good agreement with manual results. Advantages of our approach are demonstrated by comparisons with a conventional pure intensity model, a registration-based contour tracker, and a state-of-the-art database-dependent offline dynamical shape model. We also demonstrate the feasibility of clinical application by applying the method to four 4D human data sets. PMID:24292554
Semiautomatic segmentation of liver metastases on volumetric CT images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Jiayong; Schwartz, Lawrence H.; Zhao, Binsheng, E-mail: bz2166@cumc.columbia.edu
2015-11-15
Purpose: Accurate segmentation and quantification of liver metastases on CT images are critical to surgery/radiation treatment planning and therapy response assessment. To date, there are no reliable methods to perform such segmentation automatically. In this work, the authors present a method for semiautomatic delineation of liver metastases on contrast-enhanced volumetric CT images. Methods: The first step is to manually place a seed region-of-interest (ROI) in the lesion on an image. This ROI will (1) serve as an internal marker and (2) assist in automatically identifying an external marker. With these two markers, the lesion contour on the image can be accurately delineated using traditional watershed transformation. Density information will then be extracted from the segmented 2D lesion and help determine the 3D connected object that is a candidate for the lesion volume. The authors have developed a robust strategy to automatically determine internal and external markers for marker-controlled watershed segmentation. By manually placing a seed region-of-interest in the lesion to be delineated on a reference image, the method can automatically determine dual threshold values to approximately separate the lesion from its surrounding structures and refine the thresholds from the segmented lesion for the accurate segmentation of the lesion volume. This method was applied to 69 liver metastases (1.1–10.3 cm in diameter) from a total of 15 patients. An independent radiologist manually delineated all lesions and the resultant lesion volumes served as the “gold standard” for validation of the method’s accuracy. Results: The algorithm achieved a median overlap, overestimation ratio, and underestimation ratio of 82.3%, 6.0%, and 11.5%, respectively, and a median average boundary distance of 1.2 mm.
Conclusions: Preliminary results have shown that volumes of liver metastases on contrast-enhanced CT images can be accurately estimated by a semiautomatic segmentation method.
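Marker-controlled watershed segmentation of the kind described here can be sketched in a few lines with scipy and scikit-image. The internal and external markers are hard-coded on a synthetic lesion (the paper derives the external marker automatically; everything here is illustrative):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

# Synthetic CT-like slice: bright lesion (~100 HU) on darker liver (~60 HU).
rng = np.random.default_rng(1)
yy, xx = np.mgrid[:96, :96]
img = np.full((96, 96), 60.0)
lesion = (yy - 48) ** 2 + (xx - 48) ** 2 < 20 ** 2
img[lesion] = 100.0
img += rng.normal(0, 3, img.shape)

# Internal marker: a small seed ROI inside the lesion (the manual step).
markers = np.zeros(img.shape, dtype=int)
markers[(yy - 48) ** 2 + (xx - 48) ** 2 < 5 ** 2] = 1    # label 1 = lesion seed
# External marker: a region safely outside the lesion (automatic in the paper,
# hard-coded here).
markers[(yy - 48) ** 2 + (xx - 48) ** 2 > 35 ** 2] = 2   # label 2 = background

# Flood the gradient magnitude from the two markers; the watershed ridge
# settles on the lesion boundary.
gradient = ndi.gaussian_gradient_magnitude(img, sigma=2)
labels = watershed(gradient, markers)
seg = labels == 1
overlap = (seg & lesion).sum() / lesion.sum()
```

The overlap with the known synthetic lesion verifies that the flooding stops at the gradient ridge rather than leaking into the background.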
Nam, Haewon
2017-01-01
We propose a novel metal artifact reduction (MAR) algorithm for CT images that completes a corrupted sinogram along the metal trace region. When metal implants are located inside the field of view, they create a barrier to the transmitted X-ray beam due to the high attenuation of metals, which significantly degrades image quality. To fill in the metal trace region efficiently, the proposed algorithm uses multiple prior images with residual error compensation in sinogram space. The multiple prior images are generated by applying a recursive active contour (RAC) segmentation algorithm to the pre-corrected image acquired by MAR with linear interpolation, where the number of prior images is controlled by RAC depending on the object complexity. A sinogram basis is then acquired by forward projection of the prior images, and the metal trace region of the original sinogram is replaced by a linear combination of the prior-image sinograms. An additional correction in the metal trace region is then performed to compensate for residual errors caused by non-ideal data acquisition conditions. The performance of the proposed MAR algorithm is compared with MAR with linear interpolation and the normalized MAR algorithm using simulated and experimental data. The results show that the proposed algorithm outperforms the other MAR algorithms, especially when the object is complex with multiple bone objects. PMID:28604794
Object segmentation using graph cuts and active contours in a pyramidal framework
NASA Astrophysics Data System (ADS)
Subudhi, Priyambada; Mukhopadhyay, Susanta
2018-03-01
Graph cuts and active contours are two very popular interactive object segmentation techniques in the fields of computer vision and image processing. However, both approaches have well-known limitations. Graph cut methods perform efficiently, giving globally optimal segmentation results for smaller images; for larger images, however, huge graphs must be constructed, which not only takes an unacceptable amount of memory but also greatly increases the time required for segmentation. In the case of active contours, on the other hand, the initial contour selection plays an important role in the accuracy of the segmentation, so a proper selection of the initial contour may improve both the complexity and the accuracy of the result. In this paper, we combine these two approaches to overcome the above-mentioned drawbacks and develop a fast object segmentation technique. We use a pyramidal framework and apply the min-cut/max-flow algorithm on the lowest-resolution image with the fewest seed points possible, which is very fast due to the small size of the image. The obtained segmentation contour is then super-sampled and serves as the initial contour for the next higher-resolution image. As this initial contour is very close to the actual contour, fewer iterations are required for convergence. The process is repeated for all the higher-resolution images, and experimental results show that our approach is faster as well as more memory-efficient than either graph cut or active contour segmentation alone.
Zhou, Bin; Zhang, Zhendong; Wang, Ji; Yu, Y Eric; Liu, Xiaowei Sherry; Nishiyama, Kyle K; Rubin, Mishaela R; Shane, Elizabeth; Bilezikian, John P; Guo, X Edward
2016-06-01
Trabecular plate and rod microstructure plays a dominant role in the apparent mechanical properties of trabecular bone. With high-resolution computed tomography (CT) images, digital topological analysis (DTA) including skeletonization and topological classification was applied to transform the trabecular three-dimensional (3D) network into surface and curve skeletons. Using the DTA-based topological analysis and a new reconstruction/recovery scheme, individual trabecula segmentation (ITS) was developed to segment individual trabecular plates and rods and quantify the trabecular plate- and rod-related morphological parameters. High-resolution peripheral quantitative computed tomography (HR-pQCT) is an emerging in vivo imaging technique to visualize 3D bone microstructure. Based on HR-pQCT images, ITS was applied to various HR-pQCT datasets to examine trabecular plate- and rod-related microstructure and has demonstrated great potential in cross-sectional and longitudinal clinical applications. However, the reproducibility of ITS has not been fully determined. The aim of the current study is to quantify the precision errors of ITS plate-rod microstructural parameters. In addition, we utilized three different frequently used contour techniques to separate trabecular and cortical bone and to evaluate their effect on ITS measurements. Overall, good reproducibility was found for the standard HR-pQCT parameters with precision errors for volumetric BMD and bone size between 0.2%-2.0%, and trabecular bone microstructure between 4.9%-6.7% at the radius and tibia. High reproducibility was also achieved for ITS measurements using all three different contour techniques. 
For example, using automatic contour technology, low precision errors were found for plate and rod trabecular number (pTb.N, rTb.N, 0.9% and 3.6%), plate and rod trabecular thickness (pTb.Th, rTb.Th, 0.6% and 1.7%), plate trabecular surface (pTb.S, 3.4%), rod trabecular length (rTb.ℓ, 0.8%), and plate-plate junction density (P-P Junc.D, 2.3%) at the tibia. The precision errors at the radius were similar to those at the tibia. In addition, precision errors were affected by the contour technique. At the tibia, precision error by the manual contour method was significantly different from automatic and standard contour methods for pTb.N, rTb.N and rTb.Th. Precision error using the manual contour method was also significantly different from the standard contour method for rod trabecular number (rTb.N), rod trabecular thickness (rTb.Th), rod-rod and plate-rod junction densities (R-R Junc.D and P-R Junc.D) at the tibia. At the radius, the precision error was similar between the three different contour methods. Image quality was also found to significantly affect the ITS reproducibility. We concluded that ITS parameters are highly reproducible, giving assurance that future cross-sectional and longitudinal clinical HR-pQCT studies are feasible in the context of limited sample sizes.
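Precision errors of the kind quoted above are conventionally computed as a root-mean-square coefficient of variation over repeated scans, pooled across subjects. A minimal sketch with hypothetical repeat measurements (the function name and the numbers are illustrative, not the study's data):

```python
import numpy as np

def precision_error(repeats):
    """CV_RMS%: per-subject coefficient of variation of repeat scans,
    pooled across subjects as a root mean square.
    repeats: (n_subjects, n_repeats) array of one measurement, e.g. pTb.N."""
    repeats = np.asarray(repeats, dtype=float)
    cv = repeats.std(axis=1, ddof=1) / repeats.mean(axis=1)  # per-subject CV
    return 100.0 * np.sqrt(np.mean(cv ** 2))                 # pooled CV_RMS%

# Hypothetical triplicate measurements of plate trabecular number (1/mm).
ptbn = [[1.20, 1.21, 1.19],
        [1.05, 1.06, 1.05],
        [1.40, 1.38, 1.41]]
err = precision_error(ptbn)
```

For these toy triplicates the pooled precision error comes out below 1%, i.e. in the same range as the pTb.N values reported above.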
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardenas, C; The University of Texas Graduate School of Biomedical Sciences, Houston, TX; Wong, A
Purpose: To develop and test population-based machine learning algorithms for delineating high-dose clinical target volumes (CTVs) in H&N tumors. Automating and standardizing the contouring of CTVs can reduce both physician contouring time and inter-physician variability, which is one of the largest sources of uncertainty in H&N radiotherapy. Methods: Twenty-five node-negative patients treated with definitive radiotherapy were selected (6 right base of tongue, 11 left and 9 right tonsil). All patients had GTV and CTVs manually contoured by an experienced radiation oncologist prior to treatment. This contouring process, which is driven by anatomical, pathological, and patient specific information, typically results in non-uniform margin expansions about the GTV. Therefore, we tested two methods to delineate high-dose CTV given a manually-contoured GTV: (1) regression-support vector machines (SVM) and (2) classification-SVM. These models were trained and tested on each patient group using leave-one-out cross-validation. The volume difference (VD) and Dice similarity coefficient (DSC) between the manual and auto-contoured CTV were calculated to evaluate the results. Distances from GTV-to-CTV were computed about each patient’s GTV and these distances, in addition to distances from GTV to surrounding anatomy in the expansion direction, were utilized in the regression-SVM method. The classification-SVM method used categorical voxel-information (GTV, selected anatomical structures, else) from a 3×3×3 cm³ ROI centered about the voxel to classify voxels as CTV. Results: Volumes for the auto-contoured CTVs ranged from 17.1 to 149.1cc and 17.4 to 151.9cc; the average (range) VD between manual and auto-contoured CTV were 0.93 (0.48–1.59) and 1.16 (0.48–1.97); while average (range) DSC values were 0.75 (0.59–0.88) and 0.74 (0.59–0.81) for the regression-SVM and classification-SVM methods, respectively.
Conclusion: We developed two novel machine learning methods to delineate high-dose CTVs for H&N patients. Both methods showed promising results, pointing toward a standardized, automated contouring process for clinical target volumes. Funded by a Varian Medical Systems grant.
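The classification-SVM idea, i.e. labeling each voxel as CTV or not from categorical information in a local neighborhood, can be sketched with scikit-learn. The two features below (fractions of GTV and of barrier anatomy in the voxel's ROI) and the labeling rule are synthetic stand-ins for the paper's 3×3×3 cm³ categorical neighborhood:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 400
gtv_frac = rng.uniform(0, 1, n)    # fraction of local ROI occupied by GTV
anat_frac = rng.uniform(0, 1, n)   # fraction occupied by barrier anatomy
X = np.column_stack([gtv_frac, anat_frac])

# Synthetic rule: voxels near the GTV and away from barrier anatomy are CTV.
y = ((gtv_frac > 0.3) & (anat_frac < 0.6)).astype(int)

# Train an RBF support vector classifier on 300 voxels, test on the rest.
clf = SVC(kernel="rbf", gamma="scale").fit(X[:300], y[:300])
acc = clf.score(X[300:], y[300:])
```

With a clean separable rule the held-out accuracy is high; real voxel features are of course far noisier.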
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baldini, Elizabeth H., E-mail: ebaldini@partners.org; Abrams, Ross A.; Bosch, Walter
Purpose: The purpose of this study was to evaluate the variability in target volume and organ at risk (OAR) contour delineation for retroperitoneal sarcoma (RPS) among 12 sarcoma radiation oncologists. Methods and Materials: Radiation planning computed tomography (CT) scans for 2 cases of RPS were distributed among 12 sarcoma radiation oncologists with instructions for contouring gross tumor volume (GTV), clinical target volume (CTV), high-risk CTV (HR CTV: area judged to be at high risk of resulting in positive margins after resection), and OARs: bowel bag, small bowel, colon, stomach, and duodenum. Analysis of contour agreement was performed using the simultaneous truth and performance level estimation (STAPLE) algorithm and kappa statistics. Results: Ten radiation oncologists contoured both RPS cases, 1 contoured only RPS1, and 1 contoured only RPS2 such that each case was contoured by 11 radiation oncologists. The first case (RPS 1) was a patient with a de-differentiated (DD) liposarcoma (LPS) with a predominant well-differentiated (WD) component, and the second case (RPS 2) was a patient with DD LPS made up almost entirely of a DD component. Contouring agreement for GTV and CTV contours was high. However, the agreement for HR CTVs was only moderate. For OARs, agreement for stomach, bowel bag, small bowel, and colon was high, but agreement for duodenum (distorted by tumor in one of these cases) was fair to moderate. Conclusions: For preoperative treatment of RPS, sarcoma radiation oncologists contoured GTV, CTV, and most OARs with a high level of agreement. HR CTV contours were more variable. Further clarification of this volume with the help of sarcoma surgical oncologists is necessary to reach consensus. More attention to delineation of the duodenum is also needed.
Baldini, Elizabeth H.; Abrams, Ross A.; Bosch, Walter; Roberge, David; Haas, Rick L.M.; Catton, Charles N.; Indelicato, Daniel J.; Olsen, Jeffrey R.; Deville, Curtiland; Chen, Yen-Lin; Finkelstein, Steven E.; DeLaney, Thomas F.; Wang, Dian
2015-01-01
Purpose The purpose of this study was to evaluate the variability in target volume and organ at risk (OAR) contour delineation for retroperitoneal sarcoma (RPS) among 12 sarcoma radiation oncologists. Methods and Materials Radiation planning computed tomography (CT) scans for 2 cases of RPS were distributed among 12 sarcoma radiation oncologists with instructions for contouring gross tumor volume (GTV), clinical target volume (CTV), high-risk CTV (HR CTV: area judged to be at high risk of resulting in positive margins after resection), and OARs: bowel bag, small bowel, colon, stomach, and duodenum. Analysis of contour agreement was performed using the simultaneous truth and performance level estimation (STAPLE) algorithm and kappa statistics. Results Ten radiation oncologists contoured both RPS cases, 1 contoured only RPS1, and 1 contoured only RPS2 such that each case was contoured by 11 radiation oncologists. The first case (RPS 1) was a patient with a de-differentiated (DD) liposarcoma (LPS) with a predominant well-differentiated (WD) component, and the second case (RPS 2) was a patient with DD LPS made up almost entirely of a DD component. Contouring agreement for GTV and CTV contours was high. However, the agreement for HR CTVs was only moderate. For OARs, agreement for stomach, bowel bag, small bowel, and colon was high, but agreement for duodenum (distorted by tumor in one of these cases) was fair to moderate. Conclusions For preoperative treatment of RPS, sarcoma radiation oncologists contoured GTV, CTV, and most OARs with a high level of agreement. HR CTV contours were more variable. Further clarification of this volume with the help of sarcoma surgical oncologists is necessary to reach consensus. More attention to delineation of the duodenum is also needed. PMID:26194680
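The kappa part of the agreement analysis above can be illustrated with scikit-learn by comparing two observers' binary contour masks voxel-wise (the masks and the 2% disagreement rate are synthetic; the study's STAPLE consensus step is not reproduced here):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(4)
# A rectangular "structure" serves as observer A's contour mask.
obs_a = np.zeros((64, 64), dtype=int)
obs_a[20:44, 18:46] = 1

# Observer B agrees except at ~2% of voxels, flipped at random.
obs_b = obs_a.copy()
flip = rng.random(obs_a.shape) < 0.02
obs_b[flip] = 1 - obs_b[flip]

# Cohen's kappa on the flattened masks: chance-corrected voxel agreement.
kappa = cohen_kappa_score(obs_a.ravel(), obs_b.ravel())
```

Kappa discounts the large background region on which any two observers agree by chance, which is why it is preferred over raw percent agreement for contour studies.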
Schaly, Bryan; Kempe, Jeff; Venkatesan, Varagur; Mitchell, Sylvia; Battista, Jerry J
2017-11-01
During radiation therapy of head and neck cancer, the decision to consider replanning a treatment because of anatomical changes has significant resource implications. We developed an algorithm that compares cone-beam computed tomography (CBCT) image pairs and provides an automatic alert as to when remedial action may be required. Retrospective CBCT data from ten head and neck cancer patients who were replanned during their treatment were used to train the algorithm on when to recommend a repeat CT simulation (re-CT). An additional 20 patients (replanned and not replanned) were used to validate the predictive power of the algorithm. CBCT images were compared in 3D using the gamma index, combining Hounsfield Unit (HU) difference with distance-to-agreement (DTA), where the CBCT study acquired on the first fraction is used as the reference. We defined the match quality parameter (MQPx) as the difference between the xth percentiles of the failed-pixel histograms calculated from the reference gamma comparison and subsequent comparisons, where the reference gamma comparison is taken from the first two CBCT images acquired during treatment. The decision to consider re-CT was based on three consecutive MQP values being less than or equal to a threshold value, such that re-CT recommendations were within ±3 fractions of the actual re-CT order date for the training cases. Receiver operating characteristic analysis showed that the best trade-off in sensitivity and specificity was achieved using gamma criteria of 3 mm DTA and 30 HU difference, and the 80th percentile of the failed-pixel histogram. A sensitivity of 82% and 100% was achieved in the training and validation cases, respectively, with a false positive rate of ~30%. We have demonstrated that gamma analysis of CBCT-acquired anatomy can be used to flag patients for possible replanning in a manner consistent with local clinical practice guidelines. © 2017 The Authors.
Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
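The anatomical gamma comparison (3 mm DTA, 30 HU) can be sketched as a brute-force 2D search in NumPy. The function name, image sizes, and search window are illustrative, not the authors' implementation:

```python
import numpy as np

def gamma_pass(ref, test, dta=3.0, dhu=30.0, voxel=1.0, search=4):
    """Voxel-wise gamma between two HU images: a voxel passes if some test
    voxel within the search window satisfies
    sqrt((d/dta)^2 + (dHU/dhu)^2) <= 1.  `voxel` is pixel spacing in mm."""
    best = np.full(ref.shape, np.inf)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            dist = voxel * np.hypot(dy, dx)
            if dist > dta:          # gamma >= dist/dta > 1: cannot pass
                continue
            shifted = np.roll(np.roll(test, dy, axis=0), dx, axis=1)
            g = np.sqrt((dist / dta) ** 2 + ((shifted - ref) / dhu) ** 2)
            best = np.minimum(best, g)
    return best <= 1.0

# Identical images pass everywhere; a large 100 HU blob fails in its
# interior, where no matching HU value lies within the 3 mm DTA.
ref = np.zeros((32, 32))
test = ref.copy()
test[8:24, 8:24] = 100.0
passed = gamma_pass(ref, test)
```

Counting the failed pixels per comparison yields the failed-pixel histograms from which the MQP values above are derived.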
The influence of tone inventory on ERP without focal attention: a cross-language study.
Zheng, Hong-Ying; Peng, Gang; Chen, Jian-Yong; Zhang, Caicai; Minett, James W; Wang, William S-Y
2014-01-01
This study investigates the effect of tone inventories on brain activities underlying pitch without focal attention. We find that the electrophysiological responses to across-category stimuli are larger than those to within-category stimuli when the pitch contours are superimposed on nonspeech stimuli; however, there is no electrophysiological response difference associated with category status in speech stimuli. Moreover, this category effect in nonspeech stimuli is stronger for Cantonese speakers. Results of previous and present studies lead us to conclude that brain activities to the same native lexical tone contrasts are modulated by speakers' language experiences not only in active phonological processing but also in automatic feature detection without focal attention. In contrast to the condition with focal attention, where phonological processing is stronger for speech stimuli, the feature detection (pitch contours in this study) without focal attention as shaped by language background is superior in relatively regular stimuli, that is, the nonspeech stimuli. The results suggest that Cantonese listeners outperform Mandarin listeners in automatic detection of pitch features because of the denser Cantonese tone system.
Estimating spatial travel times using automatic vehicle identification data
DOT National Transportation Integrated Search
2001-01-01
Prepared ca. 2001. The paper describes an algorithm that was developed for estimating reliable and accurate average roadway link travel times using Automatic Vehicle Identification (AVI) data. The algorithm presented is unique in two aspects. First, ...
NASA Astrophysics Data System (ADS)
Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting
2018-02-01
Image segmentation plays an important role in medical science. One application is multimodality imaging, especially the fusion of structural imaging with functional imaging, which includes CT, MRI, and newer technologies such as optical imaging for obtaining functional images. The fusion process requires precisely extracted structural information in order to register the images. Here we used image enhancement and morphometry methods to extract accurate contours of different tissues such as skull, cerebrospinal fluid (CSF), grey matter (GM), and white matter (WM) on 5 fMRI head image datasets. We then utilized a convolutional neural network to perform automatic segmentation of the images in a deep learning manner. This approach greatly reduced the processing time compared to manual and semi-automatic segmentation and is of great importance for improving speed and accuracy as more and more samples are learned. The contours of the borders of the different tissues on all images were accurately extracted and visualized in 3D. This can be used in low-level light therapy and in optical simulation software such as MCVM. We obtained a precise three-dimensional distribution of the brain, which offers doctors and researchers quantitative volume data and detailed morphological characterization for personalized precision medicine of cerebral atrophy/expansion. We hope this technique can bring convenience to medical visualization and personalized medicine.
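Once tissues are labeled (whether by thresholding, morphometry, or a CNN), their borders can be extracted as sub-pixel contours with marching squares. A toy sketch with scikit-image, using an annular "skull" around a "brain" disc (all geometry is illustrative):

```python
import numpy as np
from skimage import measure

yy, xx = np.mgrid[:64, :64]
label = np.zeros((64, 64))
label[(yy - 32) ** 2 + (xx - 32) ** 2 < 25 ** 2] = 1   # skull + interior
label[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] = 2   # brain

# Marching squares at level 0.5 on each tissue's binary mask.
skull_border = measure.find_contours((label == 1).astype(float), 0.5)
brain_border = measure.find_contours((label == 2).astype(float), 0.5)
```

The annular skull mask yields two closed curves (outer and inner boundary), the brain disc one; stacking such per-slice contours gives the 3D tissue surfaces used for visualization.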
Automated map sharpening by maximization of detail and connectivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terwilliger, Thomas C.; Sobolev, Oleg V.; Afonine, Pavel V.
An algorithm for automatic map sharpening is presented that is based on optimization of the detail and connectivity of the sharpened map. The detail in the map is reflected in the surface area of an iso-contour surface that contains a fixed fraction of the volume of the map, where a map with high level of detail has a high surface area. The connectivity of the sharpened map is reflected in the number of connected regions defined by the same iso-contour surfaces, where a map with high connectivity has a small number of connected regions. By combining these two measures in a metric termed the `adjusted surface area', map quality can be evaluated in an automated fashion. This metric was used to choose optimal map-sharpening parameters without reference to a model or other interpretations of the map. Map sharpening by optimization of the adjusted surface area can be carried out for a map as a whole or it can be carried out locally, yielding a locally sharpened map. To evaluate the performance of various approaches, a simple metric based on map–model correlation that can reproduce visual choices of optimally sharpened maps was used. The map–model correlation is calculated using a model with B factors (atomic displacement factors; ADPs) set to zero. Finally, this model-based metric was used to evaluate map sharpening and to evaluate map-sharpening approaches, and it was found that optimization of the adjusted surface area can be an effective tool for map sharpening.
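The two ingredients of the adjusted-surface-area metric (iso-surface area and connected-region count at a fixed enclosed volume fraction) can be computed with scikit-image. The volume fraction, the synthetic maps, and the function name are illustrative; the paper's exact combination of the two measures is not reproduced:

```python
import numpy as np
from skimage import measure

def surface_metrics(vol, fraction=0.2):
    """Iso-contour `vol` at the level enclosing the top `fraction` of voxels;
    return (surface area, number of connected regions) at that level."""
    level = np.quantile(vol, 1.0 - fraction)
    verts, faces, _, _ = measure.marching_cubes(vol, level=level)
    area = measure.mesh_surface_area(verts, faces)
    n_regions = measure.label(vol > level).max()
    return area, n_regions

# A smooth map (single compact blob) vs. an over-sharpened, fragmented one.
rng = np.random.default_rng(5)
zz, yy, xx = np.mgrid[:24, :24, :24]
blob = np.exp(-((zz - 12) ** 2 + (yy - 12) ** 2 + (xx - 12) ** 2) / 30.0)
noisy = blob + rng.normal(0, 0.4, blob.shape)

a_smooth, r_smooth = surface_metrics(blob)
a_noisy, r_noisy = surface_metrics(noisy)
```

The smooth map yields one connected region, while the noise-fragmented map yields many; penalizing region count against surface area is what lets the metric reward detail without rewarding noise.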
Automated map sharpening by maximization of detail and connectivity
Terwilliger, Thomas C.; Sobolev, Oleg V.; Afonine, Pavel V.; ...
2018-05-18
An algorithm for automatic map sharpening is presented that is based on optimization of the detail and connectivity of the sharpened map. The detail in the map is reflected in the surface area of an iso-contour surface that contains a fixed fraction of the volume of the map, where a map with high level of detail has a high surface area. The connectivity of the sharpened map is reflected in the number of connected regions defined by the same iso-contour surfaces, where a map with high connectivity has a small number of connected regions. By combining these two measures in a metric termed the `adjusted surface area', map quality can be evaluated in an automated fashion. This metric was used to choose optimal map-sharpening parameters without reference to a model or other interpretations of the map. Map sharpening by optimization of the adjusted surface area can be carried out for a map as a whole or it can be carried out locally, yielding a locally sharpened map. To evaluate the performance of various approaches, a simple metric based on map–model correlation that can reproduce visual choices of optimally sharpened maps was used. The map–model correlation is calculated using a model with B factors (atomic displacement factors; ADPs) set to zero. Finally, this model-based metric was used to evaluate map sharpening and to evaluate map-sharpening approaches, and it was found that optimization of the adjusted surface area can be an effective tool for map sharpening.
Fully automated contour detection of the ascending aorta in cardiac 2D phase-contrast MRI.
Codari, Marina; Scarabello, Marco; Secchi, Francesco; Sforza, Chiarella; Baselli, Giuseppe; Sardanelli, Francesco
2018-04-01
In this study we proposed a fully automated method for localizing and segmenting the ascending aortic lumen with phase-contrast magnetic resonance imaging (PC-MRI). Twenty-five phase-contrast series were randomly selected out of a large population dataset of patients whose cardiac MRI examination, performed from September 2008 to October 2013, was unremarkable. The local Ethical Committee approved this retrospective study. The ascending aorta was automatically identified on each phase of the cardiac cycle using a priori knowledge of aortic geometry. The frame that maximized the area, eccentricity, and solidity parameters was chosen for unsupervised initialization. Aortic segmentation was performed on each frame using the active contours without edges technique. The entire algorithm was developed using Matlab R2016b. To validate the proposed method, the manual segmentation performed by a highly experienced operator was used. Dice similarity coefficient, Bland-Altman analysis, and Pearson's correlation coefficient were used as performance metrics. Comparing automated and manual segmentation of the aortic lumen on 714 images, Bland-Altman analysis showed a bias of −6.68 mm², a coefficient of repeatability of 91.22 mm², a mean area measurement of 581.40 mm², and a reproducibility of 85%. Automated and manual segmentation were highly correlated (R=0.98). The Dice similarity coefficient versus the manual reference standard was 94.6±2.1% (mean±standard deviation). A fully automated and robust method for identification and segmentation of the ascending aorta on PC-MRI was developed. Its application on patients with a variety of pathologic conditions is advisable. Copyright © 2017 Elsevier Inc. All rights reserved.
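A Bland-Altman comparison of two measurement methods reduces to the bias (mean difference) and a repeatability coefficient derived from the spread of the differences. A minimal sketch with hypothetical area values, taking the coefficient of repeatability as 1.96 × SD of the differences (definitions vary between papers):

```python
import numpy as np

def bland_altman(auto, manual):
    """Return (bias, coefficient of repeatability) between two methods:
    mean difference and 1.96 * SD of the differences."""
    d = np.asarray(auto, dtype=float) - np.asarray(manual, dtype=float)
    return d.mean(), 1.96 * d.std(ddof=1)

# Hypothetical aortic lumen areas (mm^2) from the two methods.
manual = np.array([560.0, 610.0, 585.0, 540.0, 600.0])
auto = manual + np.array([-5.0, -8.0, -4.0, -9.0, -7.0])
bias, cr = bland_altman(auto, manual)
```

Here the automated method systematically under-measures by a few mm², which shows up as a negative bias, just as in the study's reported −6.68 mm².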
Barbosa, Jocelyn; Lee, Kyubum; Lee, Sunwon; Lodhi, Bilal; Cho, Jae-Gu; Seo, Woo-Keun; Kang, Jaewoo
2016-03-12
Facial palsy or paralysis (FP) is a symptom in which voluntary muscle movement is lost on one side of the human face, which can be devastating for patients. Traditional assessment methods depend solely on the clinician's judgment and are therefore time-consuming and subjective. Hence, a quantitative assessment system is invaluable for physicians beginning the rehabilitation process, yet producing a reliable and robust method remains challenging and is still under way. We introduce a novel approach for quantitative assessment of facial paralysis that tackles the classification problem of FP type and degree of severity. Specifically, a novel method of quantitative assessment is presented: an algorithm that extracts the human iris and detects facial landmarks, and a hybrid approach combining rule-based and machine learning algorithms to analyze and prognosticate facial paralysis using the captured images. A method combining the optimized Daugman's algorithm and a Localized Active Contour (LAC) model is proposed to efficiently extract the iris and facial landmarks or key points. To improve the performance of LAC, appropriate parameters of the initial evolving curve for facial feature segmentation are automatically selected. The symmetry score is measured by the ratio between features extracted from the two sides of the face. Hybrid classifiers (i.e., rule-based with regularized logistic regression) were employed for discriminating healthy and unhealthy subjects, FP type classification, and facial paralysis grading based on the House-Brackmann (H-B) scale. Quantitative analysis was performed to evaluate the performance of the proposed approach, and experiments show that the proposed method is efficient.
Facial movement feature extraction on facial images based on iris segmentation and LAC-based key point detection along with a hybrid classifier provides a more efficient way of addressing classification problem on facial palsy type and degree of severity. Combining iris segmentation and key point-based method has several merits that are essential for our real application. Aside from the facial key points, iris segmentation provides significant contribution as it describes the changes of the iris exposure while performing some facial expressions. It reveals the significant difference between the healthy side and the severe palsy side when raising eyebrows with both eyes directed upward, and can model the typical changes in the iris region.
TU-AB-303-11: Predict Parotids Deformation Applying SIS Epidemiological Model in H&N Adaptive RT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maffei, N; Guidi, G; University of Bologna, Bologna, Bologna
2015-06-15
Purpose: The aim is to investigate the use of epidemiological models to predict morphological variations in patients undergoing radiation therapy (RT). The susceptible-infected-susceptible (SIS) deterministic model was applied to simulate warping within a focused region of interest (ROI). The hypothesis is to treat each voxel as a single subject of the whole sample and displacement vector fields as an infection. Methods: Using Raystation hybrid deformation algorithms and automatic re-contouring based on a mesh grid, we post-processed 360 MVCT images of 12 H&N patients treated with Tomotherapy. The study focused on the parotid glands, identified by the literature and previous analysis as the ROI most susceptible to warping in the H&N region. Susceptible (S) and infectious (I) cases were identified as voxels with inter-fraction movement respectively under and over a set threshold. IronPython scripting allowed us to export positions and displacement data of surface voxels for every fraction. A homemade MATLAB toolbox was developed to model the SIS. Results: The SIS model was validated by simulating organ motion on a QUASAR phantom. Applying the model to patients, within a [0–1 cm] range, a single-voxel movement of 0.4 cm was selected as the displacement threshold. SIS indexes were evaluated by MATLAB simulations. A dynamic time warping algorithm was used to assess the match between the model and parotid behavior over the days of treatment. The best fit of the model was obtained with a contact rate of 7.89±0.94 and a recovery rate of 2.36±0.21. Conclusion: The SIS model can follow daily structure evolution, making it possible to compare warping conditions and highlighting challenges due to abnormal variations and set-up errors. With an epidemiological approach, organ motion can be assessed and predicted not as an average over the whole ROI but as a voxel-by-voxel deterministic trend.
By identifying the anatomical regions subject to variation, it would be possible to focus clinical controls on a cohort of pre-selected patients eligible for adaptive RT. The research is partially co-funded by the Italian Research Grant: Dose warping methods for IGRT and Adaptive RT: dose accumulation based on organ motion and anatomical variations of the patients during radiation therapy treatments, MoH (GR-2010-2318757) and Tecnologie Avanzate S.r.l. (Italy).
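The deterministic SIS dynamics underlying this model are a single ODE, dI/dt = βSI − γI with S + I = 1, whose endemic equilibrium is I* = 1 − γ/β. A forward-Euler sketch using the abstract's fitted contact rate (7.89) and recovery rate (2.36), with each "individual" standing for a surface voxel whose displacement exceeds the 0.4 cm threshold:

```python
# Deterministic SIS model: dI/dt = beta*(1 - I)*I - gamma*I, S + I = 1.
beta, gamma = 7.89, 2.36   # fitted contact and recovery rates from the abstract
dt, steps = 0.01, 3000     # forward-Euler step and horizon (illustrative)

I = 0.01                   # initial "infected" (displaced) voxel fraction
for _ in range(steps):
    I += dt * (beta * (1 - I) * I - gamma * I)

# Analytic endemic equilibrium for comparison.
I_star = 1 - gamma / beta
```

With β > γ the infected fraction grows from any positive seed and settles at I* ≈ 0.70, i.e. the model predicts a stable fraction of parotid surface voxels in the "warped" state.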
Novel multimodality segmentation using level sets and Jensen-Rényi divergence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markel, Daniel, E-mail: daniel.markel@mail.mcgill.ca; Zaidi, Habib; Geneva Neuroscience Center, Geneva University, CH-1205 Geneva
2013-12-15
Purpose: Positron emission tomography (PET) is playing an increasing role in radiotherapy treatment planning. However, despite progress, robust algorithms for PET and multimodal image segmentation are still lacking, especially if the algorithm were extended to image-guided and adaptive radiotherapy (IGART). This work presents a novel multimodality segmentation algorithm using the Jensen-Rényi divergence (JRD) to evolve the geometric level set contour. The algorithm offers improved noise tolerance which is particularly applicable to segmentation of regions found in PET and cone-beam computed tomography. Methods: A steepest gradient ascent optimization method is used in conjunction with the JRD and a level set active contour to iteratively evolve a contour to partition an image based on statistical divergence of the intensity histograms. The algorithm is evaluated using PET scans of pharyngolaryngeal squamous cell carcinoma with the corresponding histological reference. The multimodality extension of the algorithm is evaluated using 22 PET/CT scans of patients with lung carcinoma and a physical phantom scanned under varying image quality conditions. Results: The average concordance index (CI) of the JRD segmentation of the PET images was 0.56 with an average classification error of 65%. The segmentation of the lung carcinoma images had a maximum diameter relative error of 63%, 19.5%, and 14.8% when using CT, PET, and combined PET/CT images, respectively. The estimated maximal diameters of the gross tumor volume (GTV) showed a high correlation with the macroscopically determined maximal diameters, with an R² value of 0.85 and 0.88 using the PET and PET/CT images, respectively. Results from the physical phantom show that the JRD is more robust to image noise compared to mutual information and region growing. Conclusions: The JRD has shown improved noise tolerance compared to mutual information for the purpose of PET image segmentation.
Presented is a flexible framework for multimodal image segmentation that can incorporate a large number of inputs efficiently for IGART.
Novel multimodality segmentation using level sets and Jensen-Rényi divergence.
Markel, Daniel; Zaidi, Habib; El Naqa, Issam
2013-12-01
Positron emission tomography (PET) is playing an increasing role in radiotherapy treatment planning. However, despite progress, robust algorithms for PET and multimodal image segmentation are still lacking, especially if the algorithm were extended to image-guided and adaptive radiotherapy (IGART). This work presents a novel multimodality segmentation algorithm using the Jensen-Rényi divergence (JRD) to evolve the geometric level set contour. The algorithm offers improved noise tolerance which is particularly applicable to segmentation of regions found in PET and cone-beam computed tomography. A steepest gradient ascent optimization method is used in conjunction with the JRD and a level set active contour to iteratively evolve a contour to partition an image based on statistical divergence of the intensity histograms. The algorithm is evaluated using PET scans of pharyngolaryngeal squamous cell carcinoma with the corresponding histological reference. The multimodality extension of the algorithm is evaluated using 22 PET/CT scans of patients with lung carcinoma and a physical phantom scanned under varying image quality conditions. The average concordance index (CI) of the JRD segmentation of the PET images was 0.56 with an average classification error of 65%. The segmentation of the lung carcinoma images had a maximum diameter relative error of 63%, 19.5%, and 14.8% when using CT, PET, and combined PET/CT images, respectively. The estimated maximal diameters of the gross tumor volume (GTV) showed a high correlation with the macroscopically determined maximal diameters, with an R(2) value of 0.85 and 0.88 using the PET and PET/CT images, respectively. Results from the physical phantom show that the JRD is more robust to image noise compared to mutual information and region growing. The JRD has shown improved noise tolerance compared to mutual information for the purpose of PET image segmentation.
Presented is a flexible framework for multimodal image segmentation that can incorporate a large number of inputs efficiently for IGART.
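The Jensen-Rényi divergence that drives the contour evolution above can be sketched for discrete intensity histograms. This is a minimal illustration of the divergence itself, not the authors' level-set implementation; the order α=2 Rényi entropy and the equal-weight mixture are assumptions of the sketch.

```python
import math

def renyi_entropy(p, alpha=2.0):
    # Rényi entropy of order alpha (alpha != 1) of a discrete distribution
    return math.log(sum(pi ** alpha for pi in p if pi > 0)) / (1.0 - alpha)

def jensen_renyi_divergence(p, q, alpha=2.0):
    # JRD of two histograms: entropy of the equal-weight mixture
    # minus the mean of the individual entropies; zero iff p == q
    m = [(pi + qi) / 2.0 for pi, qi in zip(p, q)]
    return renyi_entropy(m, alpha) - 0.5 * (
        renyi_entropy(p, alpha) + renyi_entropy(q, alpha))
```

In a level-set scheme, p and q would be the intensity histograms inside and outside the evolving contour, and the contour is moved to increase the divergence between them.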
Zeng, Xueqiang; Luo, Gang
2017-12-01
Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
Wang, Wensheng; Nie, Ting; Fu, Tianjiao; Ren, Jianyue; Jin, Longxu
2017-05-06
In target detection for optical remote sensing images, two main obstacles for aircraft target detection are how to extract candidates from complex backgrounds with multiple gray levels, and how to confirm targets whose shapes are deformed, irregular or asymmetric, whether caused by natural conditions (low signal-to-noise ratio, illumination conditions or swaying during photography) or by occlusion by surrounding objects (boarding bridges, equipment). To solve these issues, an improved active contour algorithm, namely region-scalable fitting energy based thresholding (TRSF), and a corner-convex-hull based segmentation algorithm (CCHS) are proposed in this paper. Firstly, the maximal between-cluster variance algorithm (Otsu's method) and the region-scalable fitting energy (RSF) algorithm are combined to address the difficulty of extracting targets from complex, multi-gray-level backgrounds. Secondly, based on their inherent shapes and prominent corners, aircraft are divided into five fragments by utilizing convex hulls and Harris corner points. Furthermore, a series of new structural features, which describe the proportion of the target part within each fragment and the proportion of each fragment within the whole hull, are used to judge whether the targets are true or not. Experimental results show that the TRSF algorithm improves extraction accuracy in complex backgrounds, and that it is faster than some traditional active contour algorithms. The CCHS is effective in suppressing the detection difficulties caused by irregular shapes.
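The Otsu step in the pipeline above can be sketched in a few lines. This is the generic textbook form of Otsu's method (maximal between-class variance over a gray-level histogram), not the paper's combined TRSF algorithm; the 256-level histogram is an illustrative choice.

```python
def otsu_threshold(pixels, levels=256):
    # Otsu's method: choose the threshold t that maximizes the
    # between-class variance of the background/foreground split
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0
    for t in range(levels):
        w0 += hist[t]          # background weight up to level t
        sum0 += t * hist[t]    # background intensity sum
        w1 = n - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = sum0 / w0, (total - sum0) / w1
        between = w0 * w1 * (mu0 - mu1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t
```

For a clearly bimodal image the returned threshold falls between the two modes, which is the property the RSF combination exploits.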
Accuracy Assessment of Crown Delineation Methods for the Individual Trees Using LIDAR Data
NASA Astrophysics Data System (ADS)
Chang, K. T.; Lin, C.; Lin, Y. C.; Liu, J. K.
2016-06-01
Forest canopy density and height are used as variables in a number of environmental applications, including the estimation of biomass, forest extent and condition, and biodiversity. Airborne Light Detection and Ranging (LiDAR) is very useful for estimating forest canopy parameters from the generated canopy height models (CHMs). The purpose of this work is to introduce an algorithm to delineate crown parameters, e.g. tree height and crown radii, based on the generated rasterized CHMs, and to assess the accuracy of the extracted volumetric parameters of single trees against manual measurements from corresponding aerial photo pairs. A LiDAR dataset of a golf course acquired by a Leica ALS70-HP is used in this study. Two algorithms, a traditional one that subtracts a digital elevation model (DEM) from a digital surface model (DSM), and a pit-free approach, are first used to generate the CHMs. Then two algorithms, a multilevel morphological active-contour (MMAC) and a variable window filter (VWF), are implemented and used for individual tree delineation. Finally, the results of the two automatic estimation methods are evaluated against manually measured stand-level parameters, i.e. tree height and crown diameter. The CHM generated by simple subtraction is full of empty pixels ("pits") that have a significant impact on subsequent individual tree delineation. The experimental results indicate that after the pit-free process more individual trees can be extracted and tree crown shapes become more complete in the CHM data.
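The basic CHM step (DSM minus DEM) and a naive pit fill can be sketched as follows. The pit handling here is a toy stand-in, not the pit-free method evaluated in the study; `fill_pits` and its `drop` tolerance are assumptions for illustration.

```python
def canopy_height_model(dsm, dem):
    # Basic CHM: per-pixel subtraction of ground elevation from
    # surface elevation, clamped at zero
    return [[max(s - e, 0.0) for s, e in zip(srow, erow)]
            for srow, erow in zip(dsm, dem)]

def fill_pits(chm, drop=2.0):
    # Naive pit filling: a pixel far below the max of its 4-neighbours
    # is treated as an empty "pit" and replaced by that neighbourhood max
    rows, cols = len(chm), len(chm[0])
    out = [row[:] for row in chm]
    for r in range(rows):
        for c in range(cols):
            nb = [chm[r + dr][c + dc]
                  for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                  if 0 <= r + dr < rows and 0 <= c + dc < cols]
            if nb and chm[r][c] < max(nb) - drop:
                out[r][c] = max(nb)
    return out
```

A single low pixel surrounded by canopy is lifted to its neighbours' height, while undisturbed pixels pass through unchanged.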
Yang Li; Wei Liang; Yinlong Zhang; Haibo An; Jindong Tan
2016-08-01
Automatic and accurate lumbar vertebrae detection is an essential step of image-guided minimally invasive spine surgery (IG-MISS). However, traditional methods still require human intervention due to the similarity of vertebrae, abnormal pathological conditions and uncertain imaging angles. In this paper, we present a novel convolutional neural network (CNN) model to automatically detect lumbar vertebrae in C-arm X-ray images. Training data is augmented by DRR, and automatic segmentation of the ROI reduces the computational complexity. Furthermore, a feature fusion deep learning (FFDL) model is introduced to combine two types of features of lumbar vertebrae X-ray images, using a Sobel kernel and a Gabor kernel to obtain the contour and texture of the lumbar vertebrae, respectively. Comprehensive qualitative and quantitative experiments demonstrate that our proposed model performs more accurately in abnormal cases with pathologies and surgical implants and in multi-angle views.
NASA Technical Reports Server (NTRS)
Korte, John J.
1992-01-01
A new procedure unifying the best of present classical design practices, CFD and optimization procedures, is demonstrated for designing the aerodynamic lines of hypersonic wind tunnel nozzles. This procedure can be employed to design hypersonic wind tunnel nozzles with thick boundary layers, where the classical design procedure has been demonstrated to break down. This procedure allows full utilization of powerful CFD codes in the design process, solves an optimization problem to determine the new contour, may be used to design new nozzles or to improve sections of existing nozzles, and automatically compensates the nozzle contour for viscous effects as part of the unified design procedure.
Gradient Evolution-based Support Vector Machine Algorithm for Classification
NASA Astrophysics Data System (ADS)
Zulvia, Ferani E.; Kuo, R. J.
2018-03-01
This paper proposes a classification algorithm based on support vector machine (SVM) and gradient evolution (GE) algorithms. The SVM algorithm has been widely used in classification. However, its results are significantly influenced by its parameters. Therefore, this paper aims to propose an improvement of the SVM algorithm which can find the best SVM parameters automatically. The proposed algorithm employs a GE algorithm to automatically determine the SVM parameters. The GE algorithm takes the role of a global optimizer in finding the best parameters, which are then used by the SVM algorithm. The proposed GE-SVM algorithm is verified using some benchmark datasets and compared with other metaheuristic-based SVM algorithms. The experimental results show that the proposed GE-SVM algorithm obtains better results than the other algorithms tested in this paper.
Optimization of polymer electrolyte membrane fuel cell flow channels using a genetic algorithm
NASA Astrophysics Data System (ADS)
Catlin, Glenn; Advani, Suresh G.; Prasad, Ajay K.
The design of the flow channels in PEM fuel cells directly impacts the transport of reactant gases to the electrodes and affects cell performance. This paper presents results from a study to optimize the geometry of the flow channels in a PEM fuel cell. The optimization process implements a genetic algorithm to rapidly converge on the channel geometry that provides the highest net power output from the cell. In addition, this work implements a method for the automatic generation of parameterized channel domains that are evaluated for performance using a commercial computational fluid dynamics package from ANSYS. The software package includes GAMBIT as the solid modeling and meshing software, the solver FLUENT, and a PEMFC Add-on Module capable of modeling the relevant physical and electrochemical mechanisms that describe PEM fuel cell operation. The result of the optimization process is a set of optimal channel geometry values for the single-serpentine channel configuration. The performance of the optimal geometry is contrasted with a sub-optimal one by comparing contour plots of current density, oxygen and hydrogen concentration. In addition, the role of convective bypass in bringing fresh reactant to the catalyst layer is examined in detail. The convergence to the optimal geometry is confirmed by a bracketing study which compares the performance of the best individual to those of its neighbors with adjacent parameter values.
[Automatic Sleep Stage Classification Based on an Improved K-means Clustering Algorithm].
Xiao, Shuyuan; Wang, Bei; Zhang, Jian; Zhang, Qunfeng; Zou, Junzhong
2016-10-01
Sleep stage scoring is a hotspot in the fields of medicine and neuroscience. Visual inspection of sleep is laborious and the results may vary between clinicians. An automatic sleep stage classification algorithm can be used to reduce the manual workload. However, there are still limitations when it encounters complicated and changeable clinical cases. The purpose of this paper is to develop an automatic sleep staging algorithm based on the characteristics of actual sleep data. In the proposed improved K-means clustering algorithm, points are selected as the initial centers by using a concept of density, to avoid the randomness of the original K-means algorithm. Meanwhile, the cluster centers are updated according to the ‘Three-Sigma Rule’ during the iteration, to abate the influence of outliers. The proposed method was tested and analyzed on overnight sleep data of healthy persons and of patients with sleep disorders after continuous positive airway pressure (CPAP) treatment. The automatic sleep stage classification results were compared with visual inspection by qualified clinicians and the averaged accuracy reached 76%. With the analysis of the morphological diversity of sleep data, it was shown that the proposed improved K-means algorithm is feasible and valid for clinical practice.
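The two modifications described above, density-based seeding and a three-sigma trim of outliers during the center update, can be sketched for 1-D data. This is an illustrative reconstruction under stated assumptions, not the authors' code; the neighbourhood `radius` and the greedy seeding rule are choices made for the sketch.

```python
import statistics

def density_init(xs, k, radius=1.0):
    # Density-based seeding: prefer points with many neighbours within
    # `radius`, and keep the chosen centers at least `radius` apart
    scored = sorted(xs, key=lambda x: -sum(abs(x - y) < radius for y in xs))
    centers = []
    for x in scored:
        if all(abs(x - c) >= radius for c in centers):
            centers.append(x)
        if len(centers) == k:
            break
    return centers

def kmeans_3sigma(xs, k, iters=20, radius=1.0):
    # 1-D K-means whose center update drops points outside mean +/- 3*sigma
    centers = density_init(xs, k, radius)
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for x in xs:
            i = min(range(len(centers)), key=lambda j: abs(x - centers[j]))
            clusters[i].append(x)
        for i, pts in enumerate(clusters):
            if not pts:
                continue
            mu, sd = statistics.mean(pts), statistics.pstdev(pts)
            kept = [p for p in pts if abs(p - mu) <= 3 * sd] or pts
            centers[i] = statistics.mean(kept)
    return sorted(centers)
```

On two well-separated groups the seeding lands one center in each group, so the iteration converges immediately to the group means.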
DALMATIAN: An Algorithm for Automatic Cell Detection and Counting in 3D.
Shuvaev, Sergey A; Lazutkin, Alexander A; Kedrov, Alexander V; Anokhin, Konstantin V; Enikolopov, Grigori N; Koulakov, Alexei A
2017-01-01
Current 3D imaging methods, including optical projection tomography, light-sheet microscopy, block-face imaging, and serial two-photon tomography, enable visualization of large samples of biological tissue. Large volumes of data obtained at high resolution require the development of automatic image processing techniques, such as algorithms for automatic cell detection or, more generally, point-like object detection. Current approaches to automated cell detection suffer from difficulties in detecting particular cell types, cell populations of different brightness, non-uniformly stained cells, and overlapping cells. In this study, we present a set of algorithms for robust automatic cell detection in 3D. Our algorithms are suitable for, but not limited to, whole brain regions and individual brain sections. We used a watershed procedure to split regional maxima representing overlapping cells. We developed a bootstrap Gaussian fit procedure to evaluate the statistical significance of detected cells. We compared the cell detection quality of our algorithm and other software using 42 samples, representing 6 staining and imaging techniques. The results provided by our algorithm matched manual expert quantification with signal-to-noise dependent confidence, including samples with cells of different brightness, non-uniform staining, and overlapping cells, for whole brain regions and individual tissue sections. Our algorithm provided the best cell detection quality among the tested free and commercial software.
SA-SOM algorithm for detecting communities in complex networks
NASA Astrophysics Data System (ADS)
Chen, Luogeng; Wang, Yanran; Huang, Xiaoming; Hu, Mengyu; Hu, Fang
2017-10-01
Currently, community detection is a hot topic. Based on the self-organizing map (SOM) algorithm, this paper introduces the idea of self-adaptation (SA), by which the number of communities can be identified automatically, and proposes a novel algorithm, SA-SOM, for detecting communities in complex networks. Several representative real-world networks and a set of computer-generated networks produced by the LFR benchmark are utilized to verify the accuracy and the efficiency of this algorithm. The experimental findings demonstrate that this algorithm can identify the communities automatically, accurately and efficiently. Furthermore, this algorithm can also acquire higher values of modularity, NMI and density than the SOM algorithm does.
Fast automated segmentation of multiple objects via spatially weighted shape learning
NASA Astrophysics Data System (ADS)
Chandra, Shekhar S.; Dowling, Jason A.; Greer, Peter B.; Martin, Jarad; Wratten, Chris; Pichler, Peter; Fripp, Jurgen; Crozier, Stuart
2016-11-01
Active shape models (ASMs) have proved successful in automatic segmentation by using shape and appearance priors in a number of areas such as prostate segmentation, where accurate contouring is important in treatment planning for prostate cancer. The ASM approach, however, is heavily reliant on a good initialisation for achieving high segmentation quality. This initialisation often requires algorithms with high computational complexity, such as three dimensional (3D) image registration. In this work, we present a fast, self-initialised ASM approach that simultaneously fits multiple objects hierarchically controlled by spatially weighted shape learning. Prominent objects are targeted initially and spatial weights are progressively adjusted so that the next (more difficult, less visible) object is simultaneously initialised using a series of weighted shape models. The scheme was validated and compared to a multi-atlas approach on 3D magnetic resonance (MR) images of 38 cancer patients and had the same (mean, median, inter-rater) Dice’s similarity coefficients of (0.79, 0.81, 0.85), while having no registration error and a computational time of 12-15 min, nearly an order of magnitude faster than the multi-atlas approach.
Fast automated segmentation of multiple objects via spatially weighted shape learning.
Chandra, Shekhar S; Dowling, Jason A; Greer, Peter B; Martin, Jarad; Wratten, Chris; Pichler, Peter; Fripp, Jurgen; Crozier, Stuart
2016-11-21
Active shape models (ASMs) have proved successful in automatic segmentation by using shape and appearance priors in a number of areas such as prostate segmentation, where accurate contouring is important in treatment planning for prostate cancer. The ASM approach, however, is heavily reliant on a good initialisation for achieving high segmentation quality. This initialisation often requires algorithms with high computational complexity, such as three dimensional (3D) image registration. In this work, we present a fast, self-initialised ASM approach that simultaneously fits multiple objects hierarchically controlled by spatially weighted shape learning. Prominent objects are targeted initially and spatial weights are progressively adjusted so that the next (more difficult, less visible) object is simultaneously initialised using a series of weighted shape models. The scheme was validated and compared to a multi-atlas approach on 3D magnetic resonance (MR) images of 38 cancer patients and had the same (mean, median, inter-rater) Dice's similarity coefficients of (0.79, 0.81, 0.85), while having no registration error and a computational time of 12-15 min, nearly an order of magnitude faster than the multi-atlas approach.
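The Dice similarity coefficient used for validation above is computed directly from two binary masks; a minimal version:

```python
def dice_coefficient(a, b):
    # Dice similarity between two binary masks given as 0/1 sequences:
    # twice the overlap divided by the total foreground in both masks
    inter = sum(1 for x, y in zip(a, b) if x and y)
    total = sum(a) + sum(b)
    return 2.0 * inter / total if total else 1.0
```

It ranges from 0 (no overlap) to 1 (identical masks), which is why values around 0.8 indicate good agreement with the reference contours.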
Interactive lesion segmentation with shape priors from offline and online learning.
Shepherd, Tony; Prince, Simon J D; Alexander, Daniel C
2012-09-01
In medical image segmentation, tumors and other lesions demand the highest levels of accuracy but still call for the highest levels of manual delineation. One factor holding back automatic segmentation is the exemption of pathological regions from shape modelling techniques that rely on high-level shape information not offered by lesions. This paper introduces two new statistical shape models (SSMs) that combine radial shape parameterization with machine learning techniques from the field of nonlinear time series analysis. We then develop two dynamic contour models (DCMs) using the new SSMs as shape priors for tumor and lesion segmentation. From training data, the SSMs learn the lower level shape information of boundary fluctuations, which we prove to be nevertheless highly discriminant. One of the new DCMs also uses online learning to refine the shape prior for the lesion of interest based on user interactions. Classification experiments reveal superior sensitivity and specificity of the new shape priors over those previously used to constrain DCMs. User trials with the new interactive algorithms show that the shape priors are directly responsible for improvements in accuracy and reductions in user demand.
Automatic Whistler Detector and Analyzer system: Implementation of the analyzer algorithm
NASA Astrophysics Data System (ADS)
Lichtenberger, JáNos; Ferencz, Csaba; Hamar, Daniel; Steinbach, Peter; Rodger, Craig J.; Clilverd, Mark A.; Collier, Andrew B.
2010-12-01
The full potential of whistlers for monitoring plasmaspheric electron density variations has not yet been realized. The primary reason is the vast human effort required for the analysis of whistler traces. Recently, the first part of a complete whistler analysis procedure was successfully automated, i.e., the automatic detection of whistler traces from the raw broadband VLF signal was achieved. This study describes a new algorithm developed to determine plasmaspheric electron density measurements from whistler traces, based on a Virtual (Whistler) Trace Transformation, using a 2-D fast Fourier transform. This algorithm can be automated and can thus form the final step to complete an Automatic Whistler Detector and Analyzer (AWDA) system. In this second AWDA paper, the practical implementation of the Automatic Whistler Analyzer (AWA) algorithm is discussed and a feasible solution is presented. The practical implementation of the algorithm is able to track the variations of the plasmasphere in quasi real time on a PC cluster with 100 CPU cores. The electron densities obtained by the AWA method can be used in investigations such as plasmasphere dynamics, ionosphere-plasmasphere coupling, or in space weather models.
Automatic Clustering Using FSDE-Forced Strategy Differential Evolution
NASA Astrophysics Data System (ADS)
Yasid, A.
2018-01-01
Clustering analysis is important in data mining for unsupervised data, because no adequate prior knowledge is available. One of the important tasks is defining the number of clusters without user involvement, known as automatic clustering. This study aims to acquire the cluster number automatically utilizing forced strategy differential evolution (AC-FSDE). Two mutation parameters, namely a constant parameter and a variable parameter, are employed to boost differential evolution performance. Four well-known benchmark datasets were used to evaluate the algorithm. Moreover, the result is compared with other state-of-the-art automatic clustering methods. The experimental results show that AC-FSDE is better than, or competitive with, other existing automatic clustering algorithms.
GASPACHO: a generic automatic solver using proximal algorithms for convex huge optimization problems
NASA Astrophysics Data System (ADS)
Goossens, Bart; Luong, Hiêp; Philips, Wilfried
2017-08-01
Many inverse problems (e.g., demosaicking, deblurring, denoising, image fusion, HDR synthesis) share various similarities: degradation operators are often modeled by a specific data fitting function while image prior knowledge (e.g., sparsity) is incorporated by additional regularization terms. In this paper, we investigate automatic algorithmic techniques for evaluating proximal operators. These algorithmic techniques also enable efficient calculation of adjoints from linear operators in a general matrix-free setting. In particular, we study the simultaneous-direction method of multipliers (SDMM) and the parallel proximal algorithm (PPXA) solvers and show that the automatically derived implementations are well suited for both single-GPU and multi-GPU processing. We demonstrate this approach for an Electron Microscopy (EM) deconvolution problem.
Nurmohamadi, Maryam; Pourghassem, Hossein
2014-05-01
The utilization of antibiotics produced from Clavulanic acid (CA) is an increasing need in medicine and industry. Usually, CA is created from the fermentation of Streptomyces clavuligerus (SC) bacteria. Analysis of the visual and morphological features of SC bacteria is an appropriate way to estimate the growth of CA. In this paper, an automatic and fast CA production level estimation algorithm based on visual and structural features of SC bacteria, instead of statistical methods and experimental evaluation by a microbiologist, is proposed. In this algorithm, structural features such as the number of newborn branches, hyphal thickness and bacterial density, and also color features such as acceptance color levels, are extracted from the SC bacteria. Moreover, the pH and biomass of the medium provided by microbiologists are considered as additional features. The level of CA production is estimated by using a new application of the Self-Organizing Map (SOM) and a hybrid model of a genetic algorithm with a back propagation network (GA-BPN). The proposed algorithm is evaluated on four carbon sources, including malt, starch, wheat flour and glycerol, that were used as different media for bacterial growth. The obtained results are then compared and evaluated against the observations of a specialist. Finally, the Relative Errors (RE) for the SOM and GA-BPN are 14.97% and 16.63%, respectively. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Exudate detection in color retinal images for mass screening of diabetic retinopathy.
Zhang, Xiwei; Thibault, Guillaume; Decencière, Etienne; Marcotegui, Beatriz; Laÿ, Bruno; Danno, Ronan; Cazuguel, Guy; Quellec, Gwénolé; Lamard, Mathieu; Massin, Pascale; Chabouis, Agnès; Victor, Zeynep; Erginay, Ali
2014-10-01
The automatic detection of exudates in color eye fundus images is an important task in applications such as diabetic retinopathy screening. The presented work has been undertaken in the framework of the TeleOphta project, whose main objective is to automatically detect normal exams in a tele-ophthalmology network, thus reducing the burden on the readers. A new clinical database, e-ophtha EX, containing precisely manually contoured exudates, is introduced. As opposed to previously available databases, e-ophtha EX is very heterogeneous. It contains images gathered within the OPHDIAT telemedicine network for diabetic retinopathy screening. Image definition and quality, as well as patient condition or the retinograph used for the acquisition, for example, are subject to important changes between different examinations. The proposed exudate detection method has been designed for this complex situation. We propose new preprocessing methods, which perform not only normalization and denoising tasks, but also detect reflections and artifacts in the image. A new candidate segmentation method, based on mathematical morphology, is proposed. These candidates are characterized using classical features, but also novel contextual features. Finally, a random forest algorithm is used to detect the exudates among the candidates. The method has been validated on the e-ophtha EX database, obtaining an AUC of 0.95. It has also been validated on other databases, obtaining an AUC between 0.93 and 0.95, outperforming state-of-the-art methods. Copyright © 2014 Elsevier B.V. All rights reserved.
Validation of automatic segmentation of ribs for NTCP modeling.
Stam, Barbara; Peulen, Heike; Rossi, Maddalena M G; Belderbos, José S A; Sonke, Jan-Jakob
2016-03-01
Determination of a dose-effect relation for rib fractures in a large patient group has been limited by the time consuming manual delineation of ribs. Automatic segmentation could facilitate such an analysis. We determine the accuracy of automatic rib segmentation in the context of normal tissue complication probability modeling (NTCP). Forty-one patients with stage I/II non-small cell lung cancer treated with SBRT to 54 Gy in 3 fractions were selected. Using the 4DCT derived mid-ventilation planning CT, all ribs were manually contoured and automatically segmented. Accuracy of segmentation was assessed using volumetric, shape and dosimetric measures. Manual and automatic dosimetric parameters Dx and EUD were tested for equivalence using the Two One-Sided T-test (TOST), and assessed for agreement using Bland-Altman analysis. NTCP models based on manual and automatic segmentation were compared. Automatic segmentation was comparable with the manual delineation in radial direction, but larger near the costal cartilage and vertebrae. Manual and automatic Dx and EUD were significantly equivalent. The Bland-Altman analysis showed good agreement. The two NTCP models were very similar. Automatic rib segmentation was significantly equivalent to manual delineation and can be used for NTCP modeling in a large patient group. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Multimedia systems in ultrasound image boundary detection and measurements
NASA Astrophysics Data System (ADS)
Pathak, Sayan D.; Chalana, Vikram; Kim, Yongmin
1997-05-01
Ultrasound as a medical imaging modality offers the clinician a real-time view of the anatomy of internal organs/tissues, their movement, and flow, noninvasively. One of the applications of ultrasound is to monitor fetal growth by measuring biparietal diameter (BPD) and head circumference (HC). We have been working on automatic detection of fetal head boundaries in ultrasound images. These detected boundaries are used to measure BPD and HC. The boundary detection algorithm is based on active contour models and takes 32 seconds on an external high-end workstation, a SUN SparcStation 20/71. Our goal has been to make this tool available within an ultrasound machine and at the same time significantly improve its performance utilizing multimedia technology. With the advent of high-performance programmable digital signal processors (DSPs), a software solution within an ultrasound machine, instead of the traditional hardwired approach or requiring an external computer, is now possible. We have integrated our boundary detection algorithm into a programmable ultrasound image processor (PUIP) that fits into a commercial ultrasound machine. The PUIP provides both the high computing power and flexibility needed to support computationally intensive image processing algorithms within an ultrasound machine. According to our data analysis, BPD/HC measurements made on the PUIP lie within the interobserver variability. Hence, the errors in the automated BPD/HC measurements using the algorithm are of the same order as the average interobserver differences. On the PUIP, it takes 360 ms to measure the values of BPD/HC on one head image. When processing multiple head images in sequence, it takes 185 ms per image, thus enabling 5.4 BPD/HC measurements per second.
Reducing the overall execution time from 32 seconds to a fraction of a second and making this multimedia system available within an ultrasound machine will help this image processing algorithm and other compute-intensive imaging applications become a practical tool for sonographers in the future.
Automatic comparison of striation marks and automatic classification of shoe prints
NASA Astrophysics Data System (ADS)
Geradts, Zeno J.; Keijzer, Jan; Keereweer, Isaac
1995-09-01
A database for toolmarks (named TRAX) and a database for footwear outsole designs (named REBEZO) have been developed on a PC. The databases are filled with video images and administrative data about the toolmarks and the footwear designs. An algorithm for the automatic comparison of the digitized striation patterns has been developed for TRAX. The algorithm appears to work well for deep and complete striation marks and will be implemented in TRAX. For REBEZO, some efforts have been made toward the automatic classification of outsole patterns. The algorithm first segments the shoe profile. Fourier features are selected for the separate elements and are classified with a neural network. In future developments, information on invariant moments of the shape and the rotation angle will be included in the neural network.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, X; Gao, H; Sharp, G
2015-06-15
Purpose: The delineation of targets and organs-at-risk is a critical step during image-guided radiation therapy, for which manual contouring is the gold standard. However, it is often time-consuming and may suffer from intra- and inter-rater variability. The purpose of this work is to investigate automated segmentation. Methods: The automatic segmentation here is based on mutual information (MI), with the atlas from the Public Domain Database for Computational Anatomy (PDDCA) with manually drawn contours. Using the Dice coefficient (DC) as the quantitative measure of segmentation accuracy, we perform leave-one-out cross-validations for all PDDCA images sequentially, during which other images are registered to each chosen image and DC is computed between the registered contour and ground truth. Meanwhile, six strategies, including MI, are selected to measure the image similarity, with MI found to be the best. Then, given a target image to be segmented and an atlas, automatic segmentation consists of: (a) the affine registration step for image positioning; (b) the active demons registration method to register the atlas to the target image; (c) the computation of MI values between the deformed atlas and the target image; (d) the weighted image fusion of the three deformed atlas images with the highest MI values to form the segmented contour. Results: MI was found to be the best among the six studied strategies in the sense that it had the highest positive correlation between the similarity measure (e.g., MI values) and DC. For automated segmentation, the weighted image fusion of the three deformed atlas images with the highest MI values provided the highest DC among the four proposed strategies. Conclusion: MI has the highest correlation with DC, and therefore is an appropriate choice for post-registration atlas selection in atlas-based segmentation.
Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).
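The MI-based atlas ranking at the heart of this method can be illustrated with a simple histogram estimator — a minimal numpy sketch, not the authors' code; the bin count and the toy images are illustrative:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information (in nats) between two images of
    equal size: the KL divergence between the joint histogram and the
    product of its marginals."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)        # marginal of a
    py = p.sum(axis=0, keepdims=True)        # marginal of b
    nz = p > 0                               # avoid log(0)
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

# toy atlas selection: rank candidate (deformed) atlases by MI with the target
rng = np.random.default_rng(0)
target = rng.random((64, 64))
noise = rng.random((64, 64))
```

After deformable registration, the top-ranked atlases by this score would be the ones fused to form the final contour, mirroring step (d) above.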
AI-BL1.0: a program for automatic on-line beamline optimization using the evolutionary algorithm.
Xi, Shibo; Borgna, Lucas Santiago; Zheng, Lirong; Du, Yonghua; Hu, Tiandou
2017-01-01
In this report, AI-BL1.0, an open-source LabVIEW-based program for automatic on-line beamline optimization, is presented. The optimization algorithms used in the program are the genetic algorithm and differential evolution. Efficiency was improved by use of a strategy known as Observer Mode for Evolutionary Algorithm. The program was constructed and validated at the XAFCA beamline of the Singapore Synchrotron Light Source and the 1W1B beamline of the Beijing Synchrotron Radiation Facility.
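A bare-bones DE/rand/1/bin loop of the general kind such programs employ might look like this — a hedged numpy sketch with a toy "flux" objective standing in for a real beamline readback, not the AI-BL1.0 LabVIEW implementation; all names and parameter values are illustrative:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9, iters=200, seed=0):
    """Minimal DE/rand/1/bin minimizer; bounds is a list of (lo, hi) per motor."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # pick three distinct members other than i for the mutant vector
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True          # guarantee one mutated gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft < fit[i]:                          # greedy selection
                pop[i], fit[i] = trial, ft
    best = np.argmin(fit)
    return pop[best], fit[best]

# toy objective: flux peaks at motor positions (1.0, -0.5); we minimize -flux
flux = lambda x: -np.exp(-((x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2))
best_x, best_f = differential_evolution(flux, [(-3, 3), (-3, 3)])
```

On a real beamline each objective evaluation is a slow detector readout, which is exactly why strategies that cut the number of evaluations (such as the paper's Observer Mode) matter.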
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, J; Washington University in St Louis, St Louis, MO; Li, H. Harold
Purpose: In 2D RT patient setup images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g., Mosaiq and ARIA, require manual selection of image processing filters and parameters, and are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on the contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE algorithm block size and clip limiting parameters. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by basic window-level adjustment, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and significantly outperforms the basic CLAHE algorithm and the manual window-level adjustment process currently used in clinical 2D image review software tools.
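The entropy objective driving the parameter search can be illustrated with a simplified numpy-only stand-in — a single-tile, clip-limited histogram equalization rather than full CLAHE, and a brute-force sweep in place of the interior-point optimizer; everything here is an assumption for illustration:

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of the image histogram: the optimization objective."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 1))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def equalize(img, clip=0.01, bins=256):
    """Clip-limited global histogram equalization (a one-tile stand-in for CLAHE)."""
    hist, edges = np.histogram(img, bins=bins, range=(0, 1))
    hist = np.minimum(hist, clip * img.size)      # clip limit tames over-amplification
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    return np.interp(img, edges[:-1], cdf)        # map gray levels through the CDF

def best_clip(img, candidates=(0.002, 0.005, 0.01, 0.02, 0.05)):
    """Pick the clip limit that maximizes entropy of the equalized result."""
    return max(candidates, key=lambda c: entropy(equalize(img, c)))

rng = np.random.default_rng(1)
low_contrast = 0.4 + 0.1 * rng.random((64, 64))   # washed-out toy image
enhanced = equalize(low_contrast, clip=best_clip(low_contrast))
```

The principle is the same as in the abstract: treat the enhancement parameters as free variables and score each candidate result by its entropy, so no human has to pick filters per image.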
Reconstruction and simplification of urban scene models based on oblique images
NASA Astrophysics Data System (ADS)
Liu, J.; Guo, B.
2014-08-01
We describe multi-view stereo reconstruction and simplification algorithms for urban scene models based on oblique images. The complexity, diversity, and density within urban scenes increase the difficulty of building city models from oblique images, but many flat surfaces exist in such scenes. One of our key contributions is a dense matching algorithm based on self-adaptive patches designed for the urban scene. The basic idea of match propagation based on self-adaptive patches is to build patches centred on seed points which are already matched. The extent and shape of the patches adapt to the objects of the urban scene automatically: when the surface is flat, the extent of the patch becomes bigger, while when the surface is very rough, the extent of the patch becomes smaller. The other contribution is that the mesh generated by graph cuts is a 2-manifold surface satisfying the half-edge data structure, achieved by clustering and re-marking tetrahedrons in the s-t graph. The purpose of obtaining a 2-manifold surface is to simplify the mesh by an edge-collapse algorithm which can preserve and highlight the features of buildings.
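The edge-collapse simplification mentioned above can be sketched in its most naive form — one collapse of the shortest edge, without the feature-preserving costs the paper relies on; the function name and midpoint placement are illustrative:

```python
import numpy as np

def collapse_shortest_edge(verts, faces):
    """One step of edge-collapse simplification: merge the two endpoints of
    the shortest edge at their midpoint and drop triangles that degenerate."""
    # collect undirected edges from the triangle list
    edges = {tuple(sorted((f[i], f[(i + 1) % 3]))) for f in faces for i in range(3)}
    u, v = min(edges, key=lambda e: np.linalg.norm(verts[e[0]] - verts[e[1]]))
    verts = verts.copy()
    verts[u] = (verts[u] + verts[v]) / 2.0        # midpoint placement
    new_faces = []
    for f in faces:
        f = [u if idx == v else idx for idx in f]  # redirect v to u
        if len(set(f)) == 3:                       # keep non-degenerate triangles
            new_faces.append(f)
    return verts, new_faces

# demo: collapsing any edge of a tetrahedron removes its two incident faces
verts0 = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
faces0 = [[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]]
verts1, faces1 = collapse_shortest_edge(verts0, faces0)
```

A production simplifier iterates this with a priority queue ordered by a collapse cost (e.g. a quadric error), which is where building features such as roof ridges get protected.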
G-DYN Multibody Dynamics Engine
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Blackmore, James C.; Broderick, Daniel
2011-01-01
G-DYN is a multi-body dynamic simulation software engine that automatically assembles and integrates equations of motion for arbitrarily connected multibody dynamic systems. The algorithm behind G-DYN is based on a primal-dual formulation of the dynamics that captures the position and velocity vectors (primal variables) of each body and the interaction forces (dual variables) between bodies, which are particularly useful for control and estimation analysis and synthesis. It also takes full advantage of the sparse matrix structure resulting from the system dynamics to numerically integrate the equations of motion efficiently. Furthermore, the dynamic model for each body can easily be replaced without re-deriving the overall equations of motion, and the assembly of the equations of motion is done automatically. G-DYN has proved an essential software tool in the simulation of spacecraft systems used for small-celestial-body surface sampling, specifically in simulating touch-and-go (TAG) maneuvers of a robotic sampling system at a comet or asteroid. It has been used extensively in validating mission concepts for small-body sample return, such as the Comet Odyssey and Galahad New Frontiers proposals.
Automatic macroscopic characterization of diesel sprays by means of a new image processing algorithm
NASA Astrophysics Data System (ADS)
Rubio-Gómez, Guillermo; Martínez-Martínez, S.; Rua-Mojica, Luis F.; Gómez-Gordo, Pablo; de la Garza, Oscar A.
2018-05-01
A novel algorithm is proposed for the automatic segmentation of diesel spray images and the calculation of their macroscopic parameters. The algorithm automatically detects each spray present in an image, and is therefore able to work with diesel injectors with different numbers of nozzle holes without any modification. The main characteristic of the algorithm is that it splits each spray into three regions and then segments each one with an individually calculated binarization threshold. Each threshold level is calculated from the analysis of a representative luminosity profile of the corresponding region. This approach makes the algorithm robust to irregular light distribution along a single spray and between different sprays in an image. Once the sprays are segmented, the macroscopic parameters of each one are calculated. The algorithm is tested with two sets of diesel spray images taken under normal and irregular illumination setups.
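The per-region thresholding idea can be sketched as follows — a minimal numpy stand-in that splits the image into axial bands and binarizes each from its own luminosity profile; the split axis, region count, and threshold fraction are assumptions, not the paper's values:

```python
import numpy as np

def segment_spray(img, n_regions=3, frac=0.5):
    """Split the spray image into axial regions and binarize each with its own
    threshold, taken as `frac` of the peak of the region's mean-row luminosity."""
    mask = np.zeros_like(img, dtype=bool)
    for region in np.array_split(np.arange(img.shape[0]), n_regions):
        profile = img[region].mean(axis=1)       # representative luminosity profile
        thr = frac * profile.max()               # region-local threshold
        mask[region] = img[region] > thr
    return mask

# toy spray: a vertical band whose brightness fades along the axis,
# mimicking the irregular illumination a single global threshold would miss
img = np.zeros((60, 40))
img[:, 10:20] = np.linspace(1.0, 0.2, 60)[:, None]
mask = segment_spray(img)
```

Because each region's threshold follows its own brightness level, the dim downstream tip of the spray is kept, whereas a single threshold at half the global maximum would cut it off.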
Chang, Sung-A; Lee, Sang-Chol; Kim, Eun-Young; Hahm, Seung-Hee; Jang, Shin Yi; Park, Sung-Ji; Choi, Jin-Oh; Park, Seung Woo; Choe, Yeon Hyeon; Oh, Jae K
2011-08-01
With recent developments in echocardiographic technology, a new system using real-time three-dimensional echocardiography (RT3DE) that allows single-beat acquisition of the entire volume of the left ventricle and incorporates algorithms for automated border detection has been introduced. Provided that these techniques are acceptably reliable, three-dimensional echocardiography may be much more useful for clinical practice. The aim of this study was to evaluate the feasibility and accuracy of left ventricular (LV) volume measurements by RT3DE using the single-beat full-volume capture technique. One hundred nine consecutive patients scheduled for cardiac magnetic resonance imaging and RT3DE using the single-beat full-volume capture technique on the same day were recruited. LV end-systolic volume, end-diastolic volume, and ejection fraction were measured using an auto-contouring algorithm from data acquired on RT3DE. The data were compared with the same measurements obtained using cardiac magnetic resonance imaging. Volume measurements on RT3DE with single-beat full-volume capture were feasible in 84% of patients. Both interobserver and intraobserver variability of three-dimensional measurements of end-systolic and end-diastolic volumes showed excellent agreement. Pearson's correlation analysis showed a close correlation of end-systolic and end-diastolic volumes between RT3DE and cardiac magnetic resonance imaging (r = 0.94 and r = 0.91, respectively, P < .0001 for both). Bland-Altman analysis showed reasonable limits of agreement. After application of the auto-contouring algorithm, the rate of successful auto-contouring (cases requiring minimal manual corrections) was <50%. RT3DE using single-beat full-volume capture is an easy and reliable technique to assess LV volume and systolic function in clinical practice. 
However, the image quality and low frame rate still limit its application for dilated left ventricles, and the automated volume analysis program needs more development to make it clinically efficacious. Copyright © 2011 American Society of Echocardiography. Published by Mosby, Inc. All rights reserved.
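The agreement statistics reported in this study (Pearson correlation and Bland-Altman limits) are straightforward to reproduce on any paired volume measurements — a short numpy sketch with made-up illustrative numbers, not the study data:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement between two methods."""
    diff = np.asarray(a) - np.asarray(b)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)   # assumes approx. normal differences
    return bias, bias - half_width, bias + half_width

# hypothetical paired LV volumes (mL): method a = RT3DE, method b = CMR
a = np.array([120., 95., 150., 80., 110.])
b = np.array([118., 97., 145., 84., 108.])
bias, lo, hi = bland_altman(a, b)
r = np.corrcoef(a, b)[0, 1]                # Pearson correlation
```

A high correlation alone does not establish agreement, which is why Bland-Altman limits are reported alongside it: two methods can correlate strongly while one systematically over-reads.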
An Improved Snake Model for Refinement of Lidar-Derived Building Roof Contours Using Aerial Images
NASA Astrophysics Data System (ADS)
Chen, Qi; Wang, Shugen; Liu, Xiuguo
2016-06-01
Building roof contours are considered very important geometric data, which have been widely applied in many fields, including but not limited to urban planning, land investigation, change detection and military reconnaissance. Currently, the demand for building contours at a finer scale (especially in urban areas) has been raised in a growing number of studies such as urban environment quality assessment, urban sprawl monitoring and urban air pollution modelling. LiDAR is known as an effective means of acquiring 3D roof points with high elevation accuracy. However, the precision of a building contour obtained from LiDAR data is restricted by the relatively low scanning resolution. With the use of texture information from high-resolution imagery, the precision can be improved. In this study, an improved snake model is proposed to refine initial building contours extracted from LiDAR. First, an improved snake model is constructed with constraints on the deviation angle, image gradient, and area. Then, the nodes of the contour are moved within a certain range to find the best optimized result using a greedy algorithm. Considering both precision and efficiency, the candidate shift positions of the contour nodes are constrained, and the searching strategy for the candidate nodes is explicitly designed. The experiments on three datasets indicate that the proposed method for building contour refinement is effective and feasible. The average quality index is improved from 91.66% to 93.34%. The statistics of the evaluation results for every single building demonstrate that 77.0% of the contours are updated with a higher quality index.
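A greedy snake iteration of the general kind described (a node-wise search over a small window, balancing smoothness against image evidence) can be sketched as follows — a simplified numpy version whose energy terms are illustrative and omit the paper's deviation-angle and area constraints:

```python
import numpy as np

def greedy_snake_step(contour, edge_map, alpha=0.5, radius=1):
    """One greedy pass over a closed contour of integer (row, col) nodes:
    each node moves to the offset (within `radius`) minimizing a smoothness
    energy minus the image edge response at that pixel."""
    n = len(contour)
    out = contour.copy()
    offsets = [(dy, dx) for dy in range(-radius, radius + 1)
                        for dx in range(-radius, radius + 1)]
    for i in range(n):
        prev_pt, next_pt = out[(i - 1) % n], out[(i + 1) % n]
        best, best_e = out[i], np.inf
        for dy, dx in offsets:
            y, x = out[i][0] + dy, out[i][1] + dx
            if not (0 <= y < edge_map.shape[0] and 0 <= x < edge_map.shape[1]):
                continue
            # internal energy: squared distance to the neighbors' midpoint
            internal = np.sum((np.array([y, x]) - (prev_pt + next_pt) / 2.0) ** 2)
            e = alpha * internal - edge_map[y, x]
            if e < best_e:
                best, best_e = np.array([y, x]), e
        out[i] = best
    return out

# demo: nodes one row above a strong horizontal edge snap onto it
edge_map = np.zeros((10, 10))
edge_map[5, :] = 10.0
contour = np.array([[4, 2], [4, 5], [4, 8]])
moved = greedy_snake_step(contour, edge_map, alpha=0.1)
```

Restricting the candidate positions per node, as the paper does, is what keeps this greedy search cheap enough to iterate to convergence over many buildings.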
Contour interpolation: A case study in Modularity of Mind.
Keane, Brian P
2018-05-01
In his monograph Modularity of Mind (1983), philosopher Jerry Fodor argued that mental architecture can be partly decomposed into computational organs termed modules, which were characterized as having nine co-occurring features such as automaticity, domain specificity, and informational encapsulation. Do modules exist? Debates thus far have been framed very generally with few, if any, detailed case studies. The topic is important because it has direct implications for current debates in cognitive science and because it potentially provides a viable framework from which to further understand and make hypotheses about the mind's structure and function. Here, the case is made for the modularity of contour interpolation, which is a perceptual process that represents non-visible edges on the basis of how surrounding visible edges are spatiotemporally configured. There is substantial evidence that interpolation is domain specific, mandatory, fast, and developmentally well-sequenced; that it produces representationally impoverished outputs; that it relies upon a relatively fixed neural architecture that can be selectively impaired; that it is encapsulated from belief and expectation; and that its inner workings cannot be fathomed through conscious introspection. Upon differentiating contour interpolation from a higher-order contour representational ability ("contour abstraction") and upon accommodating seemingly inconsistent experimental results, it is argued that interpolation is modular to the extent that the initiating conditions for interpolation are strong. As interpolated contours become more salient, the modularity features emerge. The empirical data, taken as a whole, show that at least certain parts of the mind are modularly organized. Copyright © 2018 Elsevier B.V. All rights reserved.
Accurate and ergonomic method of registration for image-guided neurosurgery
NASA Astrophysics Data System (ADS)
Henderson, Jaimie M.; Bucholz, Richard D.
1994-05-01
There has been considerable interest in the development of frameless stereotaxy based upon scalp-mounted fiducials. In practice we have experienced difficulty in relating markers to the image data sets in our series of 25 frameless cases, as well as inaccuracy due to scalp movement and the size of the markers. We have developed an alternative system for accurately and conveniently achieving surgical registration for image-guided neurosurgery based on alignment and matching of patient forehead contours. The system consists of a laser contour digitizer which is used in the operating room to acquire forehead contours, editing software for extracting contours from patient image data sets, and a contour-match algorithm for aligning the two contours and performing data set registration. The contour digitizer is tracked by a camera array which relates its position to light-emitting diodes placed on the head clamp. Once registered, surgical instruments can be tracked throughout the procedure. Contours can be extracted from either CT or MRI image datasets. The system has proven to be robust in the laboratory setting. Overall error of registration is 1 to 2 millimeters in routine use. Image-to-patient registration can therefore be achieved quite easily and accurately, without the need for fixation of external markers to the skull, or manually finding markers on the scalp and image datasets. The system is unobtrusive and imposes little additional effort on the neurosurgeon, broadening the appeal of image-guided surgery.
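Once point correspondences between the digitized contour and the image-derived contour are fixed, the rigid alignment reduces to a least-squares (Kabsch/Procrustes) fit — a standard textbook sketch, not the authors' contour-match algorithm, which must also solve the correspondence problem:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ src @ R.T + t,
    for corresponding point sets src and dst of shape (N, d)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# demo: recover a known 2-D rotation and translation from 5 points
rng = np.random.default_rng(2)
src = rng.random((5, 2))
theta = 0.3
R0 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
t0 = np.array([2.0, -1.0])
dst = src @ R0.T + t0
R, t = rigid_align(src, dst)
```

In practice such a closed-form fit is typically embedded in an iterative closest-point style loop that re-estimates correspondences between the two contours at each step.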
Automatic Image Registration of Multimodal Remotely Sensed Data with Global Shearlet Features
NASA Technical Reports Server (NTRS)
Murphy, James M.; Le Moigne, Jacqueline; Harding, David J.
2015-01-01
Automatic image registration is the process of aligning two or more images of approximately the same scene with minimal human assistance. Wavelet-based automatic registration methods are standard, but sometimes are not robust to the choice of initial conditions. That is, if the images to be registered are too far apart relative to the initial guess of the algorithm, the registration algorithm does not converge or has poor accuracy, and is thus not robust. These problems occur because wavelet techniques primarily identify isotropic textural features and are less effective at identifying linear and curvilinear edge features. We integrate the recently developed mathematical construction of shearlets, which is more effective at identifying sparse anisotropic edges, with an existing automatic wavelet-based registration algorithm. Our shearlet features algorithm produces more distinct features than wavelet features algorithms; the separation of edges from textures is even stronger than with wavelets. Our algorithm computes shearlet and wavelet features for the images to be registered, then performs least squares minimization on these features to compute a registration transformation. Our algorithm is two-staged and multiresolution in nature. First, a cascade of shearlet features is used to provide a robust, though approximate, registration. This is then refined by registering with a cascade of wavelet features. Experiments across a variety of image classes show an improved robustness to initial conditions, when compared to wavelet features alone.
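A classic frequency-domain building block for translation registration, useful for intuition about sensitivity to initial conditions, is FFT phase correlation — a minimal numpy sketch unrelated to the specific shearlet/wavelet cascade of this paper, included only to illustrate feature-free alignment:

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer circular shift (dy, dx) that aligns image b to a
    via the normalized cross-power spectrum."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12                    # keep phase, discard magnitude
    corr = np.abs(np.fft.ifft2(F))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size to negative values
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

# demo: recover a known circular shift of a random image
rng = np.random.default_rng(3)
a = rng.random((32, 32))
b = np.roll(a, (3, -5), axis=(0, 1))
dy, dx = phase_correlation(a, b)
```

Feature-based pipelines like the one above are preferred when the transformation includes rotation and scale, but a pure-translation probe like this makes the "basin of convergence" issue concrete: a sharp global correlation peak means the estimate does not depend on an initial guess at all.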
Three dimensional shape measurement of wear particle by iterative volume intersection
NASA Astrophysics Data System (ADS)
Wu, Hongkun; Li, Ruowei; Liu, Shilong; Rahman, Md Arifur; Liu, Sanchi; Kwok, Ngaiming; Peng, Zhongxiao
2018-04-01
The morphology of wear particles is a fundamental indicator from which wear-oriented machine health can be assessed. Previous research has shown that thorough measurement of particle shape allows more reliable explanation of the underlying wear mechanism. However, most current particle measurement techniques focus on extraction of two-dimensional (2-D) morphology, while other critical particle features, including volume and thickness, are not available. As a result, a three-dimensional (3-D) shape measurement method is developed to enable a more comprehensive description of particle features. The developed method is implemented in three steps: (1) particle profiles in multiple views are captured via a camera mounted above a micro fluid channel; (2) a preliminary reconstruction is accomplished by the shape-from-silhouette approach with the collected particle contours; (3) an iterative re-projection process follows to obtain the final 3-D measurement by minimizing the difference between the original and the re-projected contours. Results from real data are presented, demonstrating the feasibility of the proposed method.
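Step (2), shape-from-silhouette, can be illustrated in its simplest voxel-carving form — axis-aligned orthographic projections instead of the calibrated camera views used in the paper; the array shapes and names are assumptions:

```python
import numpy as np

def carve(volume_shape, silhouettes):
    """Naive shape-from-silhouette: a voxel survives only if its projection
    falls inside every binary silhouette. Projections here are axis-aligned
    orthographic views (top drops z, front drops y, side drops x) for brevity."""
    vol = np.ones(volume_shape, dtype=bool)
    top, front, side = silhouettes
    vol &= top[:, :, None]     # (x, y) silhouette carves along z
    vol &= front[:, None, :]   # (x, z) silhouette carves along y
    vol &= side[None, :, :]    # (y, z) silhouette carves along x
    return vol

# demo: three 2x2 square silhouettes carve a 2x2x2 cube out of a 4^3 volume
top = np.zeros((4, 4), bool);   top[1:3, 1:3] = True
front = np.zeros((4, 4), bool); front[1:3, 1:3] = True
side = np.zeros((4, 4), bool);  side[1:3, 1:3] = True
vol = carve((4, 4, 4), (top, front, side))
```

The carved result is only an upper bound on the true shape (concavities invisible in every silhouette survive), which is why the paper follows it with the iterative re-projection refinement of step (3).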
Automatic control algorithm effects on energy production
NASA Technical Reports Server (NTRS)
Mcnerney, G. M.
1981-01-01
A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.
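The effect of start/stop thresholds on captured energy can be illustrated with a toy hysteresis controller — a hedged sketch, not the Sandia model; the power curve, thresholds, and wind series are made up for illustration:

```python
import numpy as np

def simulate_energy(wind, v_start=5.0, v_stop=4.0, power=lambda v: 0.5 * v ** 3):
    """Toy start/stop hysteresis control: the turbine starts when wind exceeds
    v_start and stops when it drops below v_stop; returns total energy captured
    (arbitrary units, one sample = one time step)."""
    running, energy = False, 0.0
    for v in wind:
        if running and v < v_stop:
            running = False
        elif not running and v > v_start:
            running = True
        if running:
            energy += power(v)
    return energy

# toy wind speed series (m/s): two gusts with a lull between them
wind = [3.0, 6.0, 6.0, 4.5, 3.0, 6.0]
e_hyst = simulate_energy(wind)
```

Sweeping `v_start` and `v_stop` over a long measured wind time series is exactly the kind of experiment such a model supports: too aggressive a start threshold forfeits energy during gust onsets, while too timid a threshold incurs start/stop losses a fuller model would penalize.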
Comprehensive eye evaluation algorithm
NASA Astrophysics Data System (ADS)
Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.
2016-03-01
In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM) using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk of age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated on two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula- and optic-disc-centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.
Cone-Beam Computed Tomography for Image-Guided Radiation Therapy of Prostate Cancer
2008-01-01
for exact volumetric image reconstruction. As a consequence, images reconstructed by approximate algorithms, mostly based on the Feldkamp algorithm... patient dose from CBCT. Reverse helical CBCT has been developed for exact reconstruction of volumetric images, region-of-interest (ROI) reconstruction... algorithm with a priori information in few-view CBCT for IGRT. We expect the proposed algorithm can reduce the number of projections needed for volumetric