Science.gov

Sample records for volume segmentation analysis

  1. Economic Analysis. Volume IV. Segments 51-64.

    ERIC Educational Resources Information Center

    Sterling Inst., Washington, DC. Educational Technology Center.

    The fourth volume of the multimedia, individualized course in economic analysis produced for the United States Naval Academy covers segments 51-64 of the course. Included in this volume are discussions of the theory of demand, costs of production in both the short and the long run, and industry equilibrium in a perfectly competitive market. Other…

  2. Automated segmentation and dose-volume analysis with DICOMautomaton

    NASA Astrophysics Data System (ADS)

    Clark, H.; Thomas, S.; Moiseenko, V.; Lee, R.; Gill, B.; Duzenli, C.; Wu, J.

    2014-03-01

Purpose: Exploration of historical data for regional organ dose sensitivity is limited by the effort needed to (sub-)segment large numbers of contours. A system has been developed which can rapidly perform autonomous contour sub-segmentation and generic dose-volume computations, substantially reducing the effort required for exploratory analyses. Methods: A contour-centric approach is taken which enables lossless, reversible segmentation and dramatically reduces computation time compared with voxel-centric approaches. Segmentation can be specified on a per-contour, per-organ, or per-patient basis, and can be performed along either an embedded plane or in terms of the contour's bounds (e.g., splitting an organ into fractional-volume/dose pieces along any 3D unit vector). More complex segmentation techniques are available. Anonymized data from 60 head-and-neck cancer patients were used to compare dose-volume computations with Varian's Eclipse (Varian Medical Systems, Inc.). Results: Computed mean doses and dose-volume histograms agree strongly with those from Eclipse. Contours which have been segmented can be injected back into patient data permanently and in a Digital Imaging and Communications in Medicine (DICOM)-conforming manner. Lossless segmentation persists across such injection, and remains fully reversible. Conclusions: DICOMautomaton allows researchers to rapidly, accurately, and autonomously segment large amounts of data into intricate structures suitable for analyses of regional organ dose sensitivity.
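As a rough illustration of the generic dose-volume computation mentioned above (a minimal sketch, not DICOMautomaton's implementation), a cumulative dose-volume histogram reports the fraction of a structure's volume receiving at least each dose level:

```python
import numpy as np

def cumulative_dvh(voxel_doses, dose_levels):
    """Fraction of structure volume receiving at least each dose level."""
    d = np.asarray(voxel_doses, dtype=float)
    return np.array([(d >= level).mean() for level in dose_levels])

# Illustrative voxel doses only: 7 voxels at 0, 10, ..., 60 Gy
voxel_doses = np.linspace(0.0, 60.0, 7)
dvh = cumulative_dvh(voxel_doses, [0.0, 30.0, 60.0])
# dvh[0] is 1.0: every voxel receives at least 0 Gy
```

A real implementation works from DICOM dose grids and contour masks; the uniform voxel array here is purely a stand-in.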

  3. Volume Segmentation and Ghost Particles

    NASA Astrophysics Data System (ADS)

    Ziskin, Isaac; Adrian, Ronald

    2011-11-01

    Volume Segmentation Tomographic PIV (VS-TPIV) is a type of tomographic PIV in which images of particles in a relatively thick volume are segmented into images on a set of much thinner volumes that may be approximated as planes, as in 2D planar PIV. The planes of images can be analysed by standard mono-PIV, and the volume of flow vectors can be recreated by assembling the planes of vectors. The interrogation process is similar to a Holographic PIV analysis, except that the planes of image data are extracted from two-dimensional camera images of the volume of particles instead of three-dimensional holographic images. Like the tomographic PIV method using the MART algorithm, Volume Segmentation requires at least two cameras and works best with three or four. Unlike the MART method, Volume Segmentation does not require reconstruction of individual particle images one pixel at a time and it does not require an iterative process, so it operates much faster. As in all tomographic reconstruction strategies, ambiguities known as ghost particles are produced in the segmentation process. The effect of these ghost particles on the PIV measurement is discussed. This research was supported by Contract 79419-001-09, Los Alamos National Laboratory.

  4. Direct volume estimation without segmentation

    NASA Astrophysics Data System (ADS)

    Zhen, X.; Wang, Z.; Islam, A.; Bhaduri, M.; Chan, I.; Li, S.

    2015-03-01

Volume estimation plays an important role in clinical diagnosis. For example, cardiac ventricular volumes, including the left ventricle (LV) and right ventricle (RV), are important clinical indicators of cardiac function. Accurate and automatic estimation of the ventricular volumes is essential to the assessment of cardiac function and the diagnosis of heart disease. Conventional methods depend on an intermediate segmentation step obtained either manually or automatically. However, manual segmentation is extremely time-consuming, subjective, and highly non-reproducible, while automatic segmentation is still challenging, computationally expensive, and completely unsolved for the RV. Towards accurate and efficient direct volume estimation, our group has been researching learning-based methods that bypass segmentation by leveraging state-of-the-art machine learning techniques. Our direct estimation methods remove the intermediate segmentation step and can naturally deal with various volume estimation tasks. Moreover, they are flexible enough to be used for volume estimation of either the joint bi-ventricles (LV and RV) or the individual LV/RV. We comparatively study the performance of direct methods on cardiac ventricular volume estimation against segmentation-based methods. Experimental results show that direct estimation methods provide more accurate estimates of cardiac ventricular volumes than segmentation-based methods. This indicates that direct estimation methods not only provide a convenient clinical tool for cardiac volume estimation but also enable diagnosis of cardiac disease to be conducted in a more efficient and reliable way.
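The core idea of direct estimation can be sketched as regression from image-derived features straight to a volume, with no segmentation step. The toy example below uses ridge regression on synthetic data; the feature vectors, noise level, and regularization are assumptions for illustration and not the authors' actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each row holds features extracted from one cardiac image
# (e.g. appearance or histogram features); the target is ventricular volume.
n_cases, n_features = 200, 10
X = rng.normal(size=(n_cases, n_features))
true_w = rng.normal(size=n_features)
volumes = X @ true_w + 0.01 * rng.normal(size=n_cases)

# Ridge regression, w = (X^T X + lam*I)^{-1} X^T y: maps features to volumes
# directly, with no intermediate segmentation step.
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ volumes)
predicted = X @ w
mean_abs_error = np.mean(np.abs(predicted - volumes))
```

In practice the regressor would be a far richer learned model, but the pipeline shape, features in, volume out, is the same.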

  5. Inter-sport variability of muscle volume distribution identified by segmental bioelectrical impedance analysis in four ball sports

    PubMed Central

    Yamada, Yosuke; Masuo, Yoshihisa; Nakamura, Eitaro; Oda, Shingo

    2013-01-01

The aim of this study was to evaluate and quantify differences in muscle distribution in athletes of various ball sports using segmental bioelectrical impedance analysis (SBIA). Participants were 115 male collegiate athletes from four ball sports (baseball, soccer, tennis, and lacrosse). Percent body fat (%BF) and lean body mass were measured, and SBIA was used to measure segmental muscle volume (MV) in bilateral upper arms, forearms, thighs, and lower legs. We calculated the MV ratios of dominant to nondominant, proximal to distal, and upper to lower limbs. The measurements consisted of a total of 31 variables. Cluster and factor analyses were applied to identify redundant variables. The muscle distribution was significantly different among groups, but the %BF was not. The classification procedures of the discriminant analysis could correctly distinguish 84.3% of the athletes. These results suggest that collegiate ball game athletes have adapted their physique to their sport movements very well, and that SBIA, an affordable, noninvasive, easy-to-operate, and fast alternative method in the field, can distinguish ball game athletes according to their specific muscle distribution within a 5-minute measurement. SBIA could thus also be a useful tool for identifying talent for specific sports. PMID:24379714

  6. Early Expansion of the Intracranial CSF Volume After Palliative Whole-Brain Radiotherapy: Results of a Longitudinal CT Segmentation Analysis

    SciTech Connect

    Sanghera, Paul; Gardner, Sandra L.; Scora, Daryl; Davey, Phillip

    2010-03-15

    Purpose: To assess cerebral atrophy after radiotherapy, we measured intracranial cerebrospinal fluid volume (ICSFV) over time after whole-brain radiotherapy (WBRT) and compared it with published normal-population data. Methods and Materials: We identified 9 patients receiving a single course of WBRT (30 Gy in 10 fractions over 2 weeks) for ipsilateral brain metastases with at least 3 years of computed tomography follow-up. Segmentation analysis was confined to the tumor-free hemi-cranium. The technique was semiautomated by use of thresholds based on scanned image intensity. The ICSFV percentage (ratio of ICSFV to brain volume) was used for modeling purposes. Published normal-population ICSFV percentages as a function of age were used as a control. A repeated-measures model with cross-sectional (between individuals) and longitudinal (within individuals) quadratic components was fitted to the collected data. The influence of clinical factors including the use of subependymal plate shielding was studied. Results: The median imaging follow-up was 6.25 years. There was an immediate increase (p < 0.0001) in ICSFV percentage, which decelerated over time. The clinical factors studied had no significant effect on the model. Conclusions: WBRT immediately accelerates the rate of brain atrophy. This longitudinal study in patients with brain metastases provides a baseline against which the potential benefits of more localized radiotherapeutic techniques such as radiosurgery may be compared.

  7. Parallel Mean Shift for Interactive Volume Segmentation

    NASA Astrophysics Data System (ADS)

    Zhou, Fangfang; Zhao, Ying; Ma, Kwan-Liu

In this paper we present a parallel dynamic mean shift algorithm based on path transmission for medical volume data segmentation. The algorithm first translates the volume data into a joint position-color feature space subdivided uniformly by bandwidths, and then clusters points in feature space in parallel by iteratively moving each point toward its density peak. Over iterations it improves the convergence rate by dynamically updating data points via path transmission, and it reduces the number of data points by collapsing overlapping points into one. The GPU implementation of the algorithm can segment a 256×256×256 volume in 6 seconds using an NVIDIA GeForce 8800 GTX card for interactive processing, hundreds of times faster than its CPU implementation. We also introduce an interactive interface for segmenting volume data based on this GPU implementation. This interface not only provides the user with the capability to specify segmentation resolution, but also allows the user to operate on the segmented tissues and create the desired visualization results.
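The basic mean shift step the paper parallelizes can be shown in a few lines: each point repeatedly moves to the mean of its neighbours within one bandwidth until it settles on a local density mode. This serial, flat-kernel sketch omits the paper's path-transmission and GPU machinery:

```python
import numpy as np

def mean_shift(points, bandwidth, n_iter=30):
    """Flat-kernel mean shift: move each point to the mean of its
    neighbours until it settles on a local density peak (mode)."""
    shifted = points.astype(float).copy()
    for _ in range(n_iter):
        for i in range(len(shifted)):
            near = np.linalg.norm(points - shifted[i], axis=1) < bandwidth
            shifted[i] = points[near].mean(axis=0)
    return shifted

# Two well-separated 2D feature clusters collapse to two modes
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
modes = mean_shift(pts, bandwidth=1.0)
```

Points that converge to the same mode belong to the same segment; in the paper this is done per voxel in a joint position-color space.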

  8. NSEG: A segmented mission analysis program for low and high speed aircraft. Volume 2: Program users manual

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Rozendaal, H. L.

    1977-01-01

A rapid mission analysis code based on the use of approximate flight path equations of motion is described. The equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics are specified in tabular form. In addition to its mission performance calculation capabilities, the code also contains extensive flight envelope performance mapping capabilities. Approximate takeoff and landing analyses can be performed. At high speeds, centrifugal lift effects are taken into account. Extensive turbojet and ramjet engine scaling procedures are incorporated in the code.

  9. Glandular segmentation of cone beam breast CT volume images

    NASA Astrophysics Data System (ADS)

    Packard, Nathan; Boone, John M.

    2007-03-01

Cone beam breast CT (CBBCT) has potential as an alternative to mammography for screening breast cancer while limiting the radiation dose to that of a two-view mammogram. A clinical trial of CBBCT has been underway and volumetric breast images have been obtained. Although these images clearly show the 3D structure of the breast, they are limited by quantum noise due to dose limitations. Noise in these images adds to the challenge of glandular/adipose tissue segmentation. In response, an automated method for reducing noise and segmenting glandular tissue in CBBCT images was developed. A histogram-based 2-means clustering algorithm was used in conjunction with a seven-point 3D median filter to reduce quantum noise. Following this, a 2D parabolic correction was applied to flatten the adipose tissue in each slice and reduce system inhomogeneities. Finally, a median smoothing algorithm was applied to further reduce noise for optimal segmentation. The algorithm was tested on actual breast scan volume data sets for subjective analysis and on a 3D mathematical phantom for quantitative evaluation. Subjective comparison of the actual breast scans with the denoised and segmented volumes showed good segmentation with little to no noticeable degradation. On the mathematical phantom, after denoising and segmentation, the measured percent glandularity was within 0.03% of the actual value for the phantom containing larger spherical shapes, but the method was only able to preserve microcalcification-sized spheres of 0.8 and 1.0 mm, and small fibers with diameters of 1.2 and 1.4 mm.
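A 2-means clustering of voxel intensities, as used above for glandular/adipose separation, reduces to alternately thresholding and recomputing the two class means. The sketch below is a generic illustration on simulated data (the intensity values and noise level are made up, and the median filtering stage is omitted):

```python
import numpy as np

def two_means_threshold(intensities, n_iter=20):
    """2-means clustering of intensities; returns the class-separating threshold."""
    x = np.asarray(intensities, dtype=float).ravel()
    c0, c1 = x.min(), x.max()             # initial class centers
    for _ in range(n_iter):
        t = 0.5 * (c0 + c1)               # midpoint threshold
        c0 = x[x <= t].mean()             # update adipose-like center
        c1 = x[x > t].mean()              # update glandular-like center
    return 0.5 * (c0 + c1)

# Simulated noisy slice: adipose-like voxels near 100, glandular-like near 200
rng = np.random.default_rng(1)
slice_ = np.where(rng.random((64, 64)) < 0.5, 100.0, 200.0)
slice_ += rng.normal(0.0, 10.0, slice_.shape)   # quantum-noise stand-in
t = two_means_threshold(slice_)
glandular_fraction = (slice_ > t).mean()
```

The fixed point lands roughly midway between the two tissue intensity modes, which is what makes the subsequent glandularity measurement possible.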

  10. Automatic Video Object Segmentation Using Volume Growing and Hierarchical Clustering

    NASA Astrophysics Data System (ADS)

    Porikli, Fatih; Wang, Yao

    2004-12-01

We introduce an automatic segmentation framework that blends the advantages of color-, texture-, shape-, and motion-based segmentation methods in a computationally feasible way. A spatiotemporal data structure is first constructed for each group of video frames, in which each pixel is assigned a feature vector based on low-level visual information. Then, the smallest homogeneous components, so-called volumes, are expanded from selected marker points using an adaptive, three-dimensional, centroid-linkage method. Self descriptors that characterize each volume and relational descriptors that capture the mutual properties between pairs of volumes are determined by evaluating the boundary, trajectory, and motion of the volumes. These descriptors are used to measure the similarity between volumes, based on which the volumes are further grouped into objects. A fine-to-coarse clustering algorithm yields a multiresolution object-tree representation as the output of the segmentation.
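The centroid-linkage expansion described above can be sketched as region growing where a neighbour joins the volume only if it is close to the region's running mean. This is a simplified, hypothetical version (scalar intensities instead of feature vectors, a fixed tolerance instead of the paper's adaptive criterion):

```python
import numpy as np
from collections import deque

def volume_grow(vol, seed, tol):
    """Centroid-linkage growing: accept a neighbour if it lies within
    `tol` of the running mean of the region grown so far."""
    region = {seed}
    total, count = float(vol[seed]), 1
    frontier = deque([seed])
    while frontier:
        z, y, x = frontier.popleft()
        for dz, dy, dx in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < vol.shape[i] for i in range(3)) and n not in region:
                if abs(vol[n] - total / count) <= tol:   # centroid-linkage test
                    region.add(n)
                    total += float(vol[n])
                    count += 1
                    frontier.append(n)
    return region

vol = np.zeros((4, 4, 4))
vol[:2] = 10.0                               # bright slab inside a dark volume
region = volume_grow(vol, seed=(0, 0, 0), tol=1.0)
```

The grown region stops exactly at the intensity boundary, which is the behaviour the marker-based volume expansion relies on.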

  11. Bioimpedance Measurement of Segmental Fluid Volumes and Hemodynamics

    NASA Technical Reports Server (NTRS)

    Montgomery, Leslie D.; Wu, Yi-Chang; Ku, Yu-Tsuan E.; Gerth, Wayne A.; DeVincenzi, D. (Technical Monitor)

    2000-01-01

Bioimpedance has become a useful tool to measure changes in body fluid compartment volumes. An Electrical Impedance Spectroscopic (EIS) system is described that extends the capabilities of conventional fixed-frequency impedance plethysmographic (IPG) methods to allow examination of the redistribution of fluids between the intracellular and extracellular compartments of body segments. The combination of EIS and IPG techniques was evaluated in the human calf, thigh, and torso segments of eight healthy men during 90 minutes of six-degree head-down tilt (HDT). After 90 minutes of HDT the calf and thigh segments significantly (P < 0.05) lost conductive volume (eight and four percent, respectively) while the torso significantly (P < 0.05) gained volume (approximately three percent). Hemodynamic responses calculated from pulsatile IPG data also showed a segmental pattern consistent with vascular fluid loss from the lower extremities and vascular engorgement in the torso. Lumped-parameter equivalent circuit analyses of EIS data for the calf and thigh indicated that the overall volume decreases in these segments arose from reduced extracellular volume that was not completely balanced by increased intracellular volume. The combined use of IPG and EIS techniques enables noninvasive tracking of multi-segment volumetric and hemodynamic responses to environmental and physiological stresses.
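Segmental conductive volume is commonly related to impedance through the cylindrical-conductor model, V = ρL²/Z, where ρ is the fluid resistivity, L the segment length, and Z the measured impedance. The numbers below are illustrative only, not values from this study:

```python
def segment_volume_ml(rho_ohm_cm, length_cm, impedance_ohm):
    """Cylindrical-conductor model: V = rho * L^2 / Z, in cm^3 (= ml)."""
    return rho_ohm_cm * length_cm ** 2 / impedance_ohm

# Hypothetical calf measurements before and after head-down tilt: a rise
# in impedance implies a loss of conductive volume in the segment.
calf_before = segment_volume_ml(rho_ohm_cm=65.0, length_cm=35.0, impedance_ohm=30.0)
calf_after = segment_volume_ml(rho_ohm_cm=65.0, length_cm=35.0, impedance_ohm=32.5)
percent_change = 100.0 * (calf_after - calf_before) / calf_before
```

Because V is inversely proportional to Z, an ~8% impedance increase maps to an ~8% volume loss, the same order as the calf changes reported above.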

  12. Interobserver variation in clinical target volume and organs at risk segmentation in post-parotidectomy radiotherapy: can segmentation protocols help?

    PubMed Central

    Mukesh, M; Benson, R; Jena, R; Hoole, A; Roques, T; Scrase, C; Martin, C; Whitfield, G A; Gemmill, J; Jefferies, S

    2012-01-01

Objective: A study of interobserver variation in the segmentation of the post-operative clinical target volume (CTV) and organs at risk (OARs) for parotid tumours was undertaken. The segmentation exercise was performed as a baseline, and repeated after 3 months using a segmentation protocol to assess whether CTV conformity improved. Methods: Four head and neck oncologists independently segmented CTVs and OARs (contralateral parotid, spinal cord and brain stem) on CT data sets of five patients post parotidectomy. For each CTV or OAR delineation, total volume was calculated. The conformity level (CL) between different clinicians' outlines was measured using a validated outline analysis tool. The data for CTVs were reanalysed after using the cochlear sparing therapy and conventional radiation segmentation protocol. Results: Significant differences in CTV morphology were observed at baseline, yielding a mean CL of 30% (range 25–39%). The CL improved after using the segmentation protocol, with a mean CL of 54% (range 50–65%). For OARs, the mean CL was 60% (range 53–68%) for the contralateral parotid gland, 23% (range 13–27%) for the brain stem and 25% (range 22–31%) for the spinal cord. Conclusions: There was low conformity for CTVs and OARs between different clinicians. The CL for CTVs improved with use of a segmentation protocol, but the CLs remained lower than expected. This study supports the need for clear guidelines for segmentation of target and OARs to compare and interpret the results of head and neck cancer radiation studies. PMID:22815423

  13. A Ray Casting Accelerated Method of Segmented Regular Volume Data

    NASA Astrophysics Data System (ADS)

    Zhu, Min; Guo, Ming; Wang, Liting; Dai, Yujin

The volume data fields constructed from industrial CT (ICT) images of large-scale defense-industry products are large, and empty voxels occupy only a small fraction of them, so existing ray casting acceleration methods have little effect. In 3D-visualization fault diagnosis of such products, only part of the information in the volume data field helps the inspector locate internal faults, and reconstructing all of the volume data in 3D greatly increases computational cost. A new ray casting acceleration method based on segmented volume data is therefore put forward. A segmented-information volume data field is built from the segmentation result. Following the construction approach of existing hierarchical volume data structures, a hierarchical volume data structure based on the segmented information is constructed. Using this structure, the component parts selected by the user are identified automatically during ray casting; the remaining parts are treated as empty voxels, so the sampling step is adjusted dynamically, the number of sample points is reduced, and volume rendering speed is improved. Experimental results demonstrate the high efficiency and good display quality of the proposed method.
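The dynamic-step idea can be illustrated with a one-dimensional ray march over a labeled volume: sample densely inside user-selected segments and stride quickly through everything else. This is a schematic stand-in for the hierarchical structure, with made-up step sizes:

```python
import numpy as np

def cast_ray(labels, selected, fine=1, coarse=4):
    """March along one axis of a labeled volume, sampling densely only
    inside user-selected segments; other labels are treated as empty."""
    samples = []
    i = 0
    while i < len(labels):
        if labels[i] in selected:
            samples.append(i)
            i += fine        # dense sampling inside structures of interest
        else:
            i += coarse      # skip quickly through "empty" voxels
    return samples

labels = np.array([0] * 20 + [1] * 8 + [0] * 20)   # label 1 = selected part
samples = cast_ray(labels, selected={1})
```

Only 8 of the 48 positions are actually sampled, which is where the rendering speedup comes from.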

  14. Partial volume effect modeling for segmentation and tissue classification of brain magnetic resonance images: A review

    PubMed Central

    Tohka, Jussi

    2014-01-01

Quantitative analysis of magnetic resonance (MR) brain images is facilitated by the development of automated segmentation algorithms. A single image voxel may contain several tissue types due to the finite spatial resolution of the imaging device. This phenomenon, termed the partial volume effect (PVE), complicates the segmentation process, and, due to the complexity of human brain anatomy, the PVE is an important factor for accurate brain structure quantification. Partial volume estimation refers to a generalized segmentation task in which the amount of each tissue type within each voxel is solved. This review aims to provide a systematic, tutorial-like overview and categorization of methods for partial volume estimation in brain MRI. The review concentrates on statistically based approaches for partial volume estimation and also explains the differences from other, similar image segmentation approaches. PMID:25431640

  15. Tumor volume measurement for nasopharyngeal carcinoma using knowledge-based fuzzy clustering MRI segmentation

    NASA Astrophysics Data System (ADS)

    Zhou, Jiayin; Lim, Tuan-Kay; Chong, Vincent

    2002-05-01

A knowledge-based fuzzy clustering (KBFC) MRI segmentation algorithm was proposed to obtain accurate tumor segmentation for tumor volume measurement of nasopharyngeal carcinoma (NPC). An initial segmentation was performed on T1-weighted and contrast-enhanced T1-weighted MR images using a semi-supervised fuzzy c-means (SFCM) algorithm. Then, three types of anatomic and spatial knowledge--symmetry, connectivity and cluster center--were used for image analysis and contributed to the final tumor segmentation. After segmentation, tumor volume was obtained by a multi-planimetry method. Visual and quantitative validations were performed on a phantom model and six data volumes of NPC patients, compared with ground truth (GT) and with results acquired using seed growing (SG) for tumor segmentation. Visually, KBFC produced better tumor segmentations than SG. In quantitative segmentation quality estimation, the matching percent (MP) / correspondence ratio (CR) on the phantom model was 94.1-96.4% / 0.888-0.925 for KBFC and 94.1-96.0% / 0.884-0.918 for SG, while on the patient data volumes it was 92.1 ± 2.6% / 0.884 ± 0.014 for KBFC and 87.4 ± 4.3% / 0.843 ± 0.041 for SG. In tumor volume measurement, the measurement error on the phantom model was 4.2-5.0% for KBFC and 4.8-6.1% for SG, while on the patient data volumes it was 6.6 ± 3.5% for KBFC and 8.8 ± 5.4% for SG. Based on these results, KBFC can provide high-quality MRI tumor segmentation for tumor volume measurement of NPC.
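The fuzzy c-means step underlying SFCM alternates between updating soft memberships and fuzzy-weighted cluster centers. A minimal unsupervised 1D sketch (without the semi-supervision or anatomic knowledge used in the paper) looks like this:

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, n_iter=50):
    """Minimal 1D fuzzy c-means: returns cluster centers and soft memberships."""
    centers = np.linspace(x.min(), x.max(), c)       # spread initial centers
    u = np.full((len(x), c), 1.0 / c)
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        # Membership update: u_ik proportional to d_ik^(-2/(m-1))
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
        # Center update: fuzzy-weighted mean of the data
        w = u ** m
        centers = (w.T @ x) / w.sum(axis=0)
    return centers, u

# Intensities drawn from two tissue-like classes
x = np.array([0.9, 1.0, 1.1, 4.9, 5.0, 5.1])
centers, u = fuzzy_c_means(x)
```

Each voxel keeps a membership in every cluster (rows of `u` sum to 1), which is what lets the knowledge-based post-processing re-weigh ambiguous voxels.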

  16. Amygdalar and hippocampal volume: A comparison between manual segmentation, Freesurfer and VBM.

    PubMed

    Grimm, Oliver; Pohlack, Sebastian; Cacciaglia, Raffaele; Winkelmann, Tobias; Plichta, Michael M; Demirakca, Traute; Flor, Herta

    2015-09-30

Automated segmentation of the amygdala and the hippocampus is of interest for research on large datasets, where manual segmentation of T1-weighted magnetic resonance images is less feasible for morphometric analysis. Manual segmentation remains the gold standard for subcortical structures like the hippocampus and the amygdala. A direct comparison of VBM8 and Freesurfer is rarely done, because VBM8 results are most often used for voxel-based analysis. We used the same region of interest (ROI) for Freesurfer and VBM8 to relate automated and manually derived volumes of the amygdala and the hippocampus. We processed a large manually segmented dataset of n=92 independent samples with two automated segmentation strategies (VBM8 vs. Freesurfer Version 5.0). For statistical analysis, we calculated not only Pearson's correlation coefficients but also measures developed for method comparison, such as Lin's concordance coefficient. The correlation between automatic and manual segmentation was high for the hippocampus [0.58-0.76] and lower for the amygdala [0.45-0.59]. However, the concordance coefficients point to higher concordance for the amygdala [0.46-0.62] than for the hippocampus [0.06-0.12]. VBM8 and Freesurfer segmentation performed at a comparable level relative to manual segmentation. We conclude (1) that correlation alone does not capture systematic differences (e.g. of hippocampal volumes), (2) that calculation of ROI volumes with VBM8 gives measurements comparable to Freesurfer V5.0 when using the same ROI, and (3) that systematic and proportional differences are caused mainly by different definitions of anatomic boundaries and only to a lesser extent by different segmentation strategies. This work underscores the importance of using method comparison techniques and demonstrates that even with high correlation coefficients there can still be large differences in absolute volume. PMID:26057114
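The paper's point that correlation misses systematic offsets is easy to demonstrate with Lin's concordance correlation coefficient, CCC = 2·cov(x,y) / (σx² + σy² + (μx − μy)²). The volumes below are invented for illustration:

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement methods."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)

manual = np.array([1.20, 1.35, 1.10, 1.50, 1.25])  # e.g. amygdala volumes (ml)
auto = manual + 0.30                               # perfectly correlated, but offset
r = np.corrcoef(manual, auto)[0, 1]                # Pearson r is exactly 1.0
ccc = lins_ccc(manual, auto)                       # far below 1.0
```

A constant 0.30 ml bias leaves Pearson's r at 1.0 while the CCC collapses, which is exactly why the authors report both.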

  17. Quantifying brain tissue volume in multiple sclerosis with automated lesion segmentation and filling

    PubMed Central

    Valverde, Sergi; Oliver, Arnau; Roura, Eloy; Pareto, Deborah; Vilanova, Joan C.; Ramió-Torrentà, Lluís; Sastre-Garriga, Jaume; Montalban, Xavier; Rovira, Àlex; Lladó, Xavier

    2015-01-01

Lesion filling has been successfully applied to reduce the effect of hypo-intense T1-weighted multiple sclerosis (MS) lesions on automatic brain tissue segmentation. However, a study of fully automated pipelines incorporating lesion segmentation and lesion filling for tissue volume analysis has not yet been performed. Here, we analyzed the percentage of error introduced by automating the lesion segmentation and filling processes in the tissue segmentation of 70 clinically isolated syndrome patient images. First, images were processed using the LST and SLS toolkits with pipeline combinations that differed in automated versus manual lesion segmentation, and in lesion filling versus masking out lesions. Images processed by each pipeline were then segmented into gray matter (GM) and white matter (WM) using SPM8, and compared with the same images in which expert lesion annotations were filled before segmentation. Our results showed that fully automated lesion segmentation and filling pipelines significantly reduced the percentage of error in GM and WM volume on images of MS patients, and performed similarly to the images where expert lesion annotations were masked before segmentation. In all the pipelines, misclassified lesion voxels were the main cause of the observed error in GM and WM volume. However, the percentage of error was significantly lower when automatically estimated lesions were filled rather than masked before segmentation. These results are relevant and suggest that the LST and SLS toolboxes allow accurate brain tissue volume measurements without any manual intervention, which is convenient not only in terms of time and economic costs, but also for avoiding the inherent intra/inter-rater variability of manual annotations. PMID:26740917

  18. Reproducibility of intracranial volume measurement by unsupervised multispectral brain segmentation.

    PubMed

    Alfano, B; Quarantelli, M; Brunetti, A; Larobina, M; Covelli, E M; Tedeschi, E; Salvatore, M

    1998-03-01

    To assess the inter-study variability of a recently published unsupervised segmentation method (Magn. Reson. Med. 1997;37:84-93), 14 brain MR studies were performed in five normal subjects. Standard deviations for absolute and fractional volumes of intracranial compartments, which reflect the experimental variability, were smaller than 16.5 ml and 1.1%, respectively. By comparing the experimental component of the variability with the variability observed in our reference database, an estimate of the biological variability of the intracranial fractional volumes in the database population was obtained. PMID:9498607

  19. Performance benchmarking of liver CT image segmentation and volume estimation

    NASA Astrophysics Data System (ADS)

    Xiong, Wei; Zhou, Jiayin; Tian, Qi; Liu, Jimmy J.; Qi, Yingyi; Leow, Wee Kheng; Han, Thazin; Wang, Shih-chang

    2008-03-01

In recent years more and more computer aided diagnosis (CAD) systems are being used routinely in hospitals. Image-based knowledge discovery plays important roles in many CAD applications, which have great potential to be integrated into the next-generation picture archiving and communication systems (PACS). Robust medical image segmentation tools are essential for such discovery in many CAD applications. In this paper we present a platform with the necessary tools for performance benchmarking of algorithms for liver segmentation and volume estimation used in liver transplantation planning. It includes an abdominal computed tomography (CT) image database (DB), annotation tools, a ground truth DB, and performance measure protocols. The proposed architecture is generic and can be used for other organs and imaging modalities. In the current study, approximately 70 sets of abdominal CT images with normal livers have been collected, and a user-friendly annotation tool has been developed to generate ground truth data for a variety of organs, including 2D contours of the liver, both kidneys, spleen, aorta and spinal canal. Abdominal organ segmentation algorithms using 2D atlases and 3D probabilistic atlases can be evaluated on the platform. Preliminary benchmark results from liver segmentation algorithms which make use of statistical knowledge extracted from the abdominal CT image DB are also reported. We aim to increase the number of CT scans to about 300 sets in the near future and plan to make the DBs available to the medical imaging research community for performance benchmarking of liver segmentation algorithms.

  20. Tooth segmentation system with intelligent editing for cephalometric analysis

    NASA Astrophysics Data System (ADS)

    Chen, Shoupu

    2015-03-01

Cephalometric analysis is the study of the dental and skeletal relationship in the head, and it is used as an assessment and planning tool for improved orthodontic treatment of a patient. Conventional cephalometric analysis identifies bony and soft-tissue landmarks in 2D cephalometric radiographs, in order to diagnose facial features and abnormalities prior to treatment, or to evaluate the progress of treatment. Recent studies in orthodontics indicate that there are persistent inaccuracies and inconsistencies in the results provided using conventional 2D cephalometric analysis. Obviously, plane geometry is inappropriate for analyzing anatomical volumes and their growth; only a 3D analysis is able to analyze the three-dimensional, anatomical maxillofacial complex, which requires computing inertia systems for individual or groups of digitally segmented teeth from an image volume of a patient's head. For the study of 3D cephalometric analysis, the current paper proposes a system for semi-automatically segmenting teeth from a cone beam computed tomography (CBCT) volume with two distinct features: an intelligent user-input interface for automatic background seed generation, and a graphics processing unit (GPU) acceleration mechanism for three-dimensional GrowCut volume segmentation. Results show a satisfactory average DICE score of 0.92 with the proposed tooth segmentation system, from 15 novice users who segmented a randomly sampled tooth set. The average GrowCut processing time is around one second per tooth, excluding user interaction time.
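The DICE score used to evaluate the tooth segmentations is a standard overlap measure between two binary masks, DICE = 2|A∩B| / (|A|+|B|). A minimal sketch with made-up masks:

```python
import numpy as np

def dice(a, b):
    """DICE overlap between two binary segmentation masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

# Hypothetical reference mask and a segmentation that misses one row
ref = np.zeros((10, 10), bool); ref[2:8, 2:8] = True   # 36 voxels
seg = np.zeros((10, 10), bool); seg[3:8, 2:8] = True   # 30 voxels
score = dice(ref, seg)
```

A DICE of 1.0 means perfect overlap; values around 0.9, as reported above, indicate close but imperfect agreement with the reference.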

  21. Artificial Neural Network-Based System for PET Volume Segmentation

    PubMed Central

    Sharif, Mhd Saeed; Abbod, Maysam; Amira, Abbes; Zaidi, Habib

    2010-01-01

Tumour detection, classification, and quantification in positron emission tomography (PET) imaging at an early stage of disease are important issues for clinical diagnosis, assessment of response to treatment, and radiotherapy planning. Many techniques have been proposed for segmenting medical imaging data; however, some of the approaches have poor performance, large inaccuracy, and require substantial computation time for analysing large medical volumes. Artificial intelligence (AI) approaches can provide improved accuracy and save a considerable amount of time. Artificial neural networks (ANNs), as one of the best AI techniques, have the capability to precisely classify and quantify lesions and model the clinical evaluation for a specific problem. This paper presents a novel application of ANNs in the wavelet domain for PET volume segmentation. An evaluation of ANN performance using different training algorithms, in both the spatial and wavelet domains and with different numbers of neurons in the hidden layer, is also presented. The best number of neurons in the hidden layer was determined from the experimental results, which also identified the Levenberg-Marquardt backpropagation training algorithm as the best training approach for the proposed application. The results of the proposed intelligent system are compared with those obtained using conventional techniques, including thresholding and clustering-based approaches. Experimental and Monte Carlo simulated PET phantom data sets and clinical PET volumes of non-small cell lung cancer patients were used to validate the proposed algorithm, which has demonstrated promising results. PMID:20936152

  2. Fully automated segmentation of oncological PET volumes using a combined multiscale and statistical model

    SciTech Connect

    Montgomery, David W. G.; Amira, Abbes; Zaidi, Habib

    2007-02-15

    The widespread application of positron emission tomography (PET) in clinical oncology has driven this imaging technology into a number of new research and clinical arenas. Increasing numbers of patient scans have led to an urgent need for efficient data handling and the development of new image analysis techniques to aid clinicians in the diagnosis of disease and planning of treatment. Automatic quantitative assessment of metabolic PET data is attractive and will certainly revolutionize the practice of functional imaging since it can lower variability across institutions and may enhance the consistency of image interpretation independent of reader experience. In this paper, a novel automated system for the segmentation of oncological PET data aiming at providing an accurate quantitative analysis tool is proposed. The initial step involves expectation maximization (EM)-based mixture modeling using a k-means clustering procedure, which varies voxel order for initialization. A multiscale Markov model is then used to refine this segmentation by modeling spatial correlations between neighboring image voxels. An experimental study using an anthropomorphic thorax phantom was conducted for quantitative evaluation of the performance of the proposed segmentation algorithm. The comparison of actual tumor volumes to the volumes calculated using different segmentation methodologies including standard k-means, spatial domain Markov Random Field Model (MRFM), and the new multiscale MRFM proposed in this paper showed that the latter dramatically reduces the relative error to less than 8% for small lesions (7 mm radii) and less than 3.5% for larger lesions (9 mm radii). The analysis of the resulting segmentations of clinical oncologic PET data seems to confirm that this methodology shows promise and can successfully segment patient lesions. For problematic images, this technique enables the identification of tumors situated very close to nearby high normal physiologic uptake. 
The use of this technique to estimate tumor volumes for assessment of response to therapy and to delineate treatment volumes for the purpose of combined PET/CT-based radiation therapy treatment planning is also discussed.
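The first stage of the pipeline above, k-means-initialised expectation-maximization (EM) mixture modelling, can be illustrated in one dimension on voxel intensities. This is a generic sketch, not the authors' code; the shuffling stands in for the voxel-order variation the abstract mentions in the initialisation:

```python
import numpy as np

def em_gmm(values, k=2, iters=50, seed=0):
    """EM for a 1D Gaussian mixture, initialised by a crude k-means pass."""
    rng = np.random.default_rng(seed)
    x = np.asarray(values, dtype=float)
    # --- k-means initialisation (voxel order shuffled before seeding) ---
    mu = rng.permutation(x)[:k].copy()
    for _ in range(20):
        assign = np.argmin(np.abs(x[:, None] - mu[None, :]), axis=1)
        for j in range(k):
            if (assign == j).any():
                mu[j] = x[assign == j].mean()
    var = np.full(k, x.var() / k)
    pi = np.full(k, 1.0 / k)
    # --- EM refinement ---
    for _ in range(iters):
        # E-step: responsibilities of each Gaussian component for each voxel
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = np.maximum((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk, 1e-9)
    return mu, var, pi
```

In the paper the resulting voxel-wise class posteriors are then refined by the multiscale Markov model, which this sketch omits.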

  3. Fully automated segmentation of oncological PET volumes using a combined multiscale and statistical model.

    PubMed

    Montgomery, David W G; Amira, Abbes; Zaidi, Habib

    2007-02-01

    The widespread application of positron emission tomography (PET) in clinical oncology has driven this imaging technology into a number of new research and clinical arenas. Increasing numbers of patient scans have led to an urgent need for efficient data handling and the development of new image analysis techniques to aid clinicians in the diagnosis of disease and planning of treatment. Automatic quantitative assessment of metabolic PET data is attractive and will certainly revolutionize the practice of functional imaging since it can lower variability across institutions and may enhance the consistency of image interpretation independent of reader experience. In this paper, a novel automated system for the segmentation of oncological PET data aiming at providing an accurate quantitative analysis tool is proposed. The initial step involves expectation maximization (EM)-based mixture modeling using a k-means clustering procedure, which varies voxel order for initialization. A multiscale Markov model is then used to refine this segmentation by modeling spatial correlations between neighboring image voxels. An experimental study using an anthropomorphic thorax phantom was conducted for quantitative evaluation of the performance of the proposed segmentation algorithm. The comparison of actual tumor volumes to the volumes calculated using different segmentation methodologies including standard k-means, spatial domain Markov Random Field Model (MRFM), and the new multiscale MRFM proposed in this paper showed that the latter dramatically reduces the relative error to less than 8% for small lesions (7 mm radii) and less than 3.5% for larger lesions (9 mm radii). The analysis of the resulting segmentations of clinical oncologic PET data seems to confirm that this methodology shows promise and can successfully segment patient lesions. For problematic images, this technique enables the identification of tumors situated very close to nearby high normal physiologic uptake. 
The use of this technique to estimate tumor volumes for assessment of response to therapy and to delineate treatment volumes for the purpose of combined PET/CT-based radiation therapy treatment planning is also discussed. PMID:17388190

  4. Perceptual analysis for music segmentation

    NASA Astrophysics Data System (ADS)

    Jian, Min-Hong; Lin, Chia-Han; Chen, Arbee L. P.

    2003-12-01

    In this paper, a music segmentation framework is proposed to segment music streams based on human perception. In the proposed framework, three perceptual features corresponding to four perceptual properties are extracted. By analyzing the trajectory of feature values, the cutting points of a music stream can be identified. According to the complementary characteristics of the three features, a ranking algorithm is designed to achieve better accuracy. We perform a series of experiments to evaluate the complementary characteristics and the effectiveness of the proposed framework.

  5. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging

    NASA Astrophysics Data System (ADS)

    Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart

    2015-02-01

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51  ±  1.92) to (97.27  ±  0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.
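STAPLE, used above to fuse the six physicians' contours into a ground-truth estimate, is itself an EM algorithm that weights each rater by estimated sensitivity and specificity. The following is a simplified binary sketch with a global prior and no spatial term (variable names are mine, and this is not the validated implementation used in the paper):

```python
import numpy as np

def staple_binary(D, prior=0.5, iters=30):
    """Simplified binary STAPLE. D is (raters, voxels) with entries in {0, 1}."""
    R, N = D.shape
    p = np.full(R, 0.9)  # per-rater sensitivity estimates
    q = np.full(R, 0.9)  # per-rater specificity estimates
    W = np.full(N, prior)
    for _ in range(iters):
        # E-step: posterior probability that each voxel is truly foreground
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / (a + b)
        # M-step: re-estimate each rater's performance parameters
        p = (D * W).sum(axis=1) / W.sum()
        q = ((1 - D) * (1 - W)).sum(axis=1) / (1 - W).sum()
    return W, p, q
```

Thresholding W at 0.5 gives the consensus segmentation; unreliable raters are automatically down-weighted because their estimated p and q shrink.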

  6. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging.

    PubMed

    Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L; Beauchemin, Steven S; Rodrigues, George; Gaede, Stewart

    2015-02-21

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51 ± 1.92) to (97.27 ± 0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development. PMID:25611494

  7. Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET

    NASA Astrophysics Data System (ADS)

    Hatt, M.; Lamare, F.; Boussion, N.; Turzo, A.; Collet, C.; Salzenstein, F.; Roux, C.; Jarritt, P.; Carson, K.; Cheze-LeRest, C.; Visvikis, D.

    2007-07-01

    Accurate volume of interest (VOI) estimation in PET is crucial in different oncology applications such as response to therapy evaluation and radiotherapy treatment planning. The objective of our study was to compare the performance of the proposed algorithm for automatic lesion volume delineation, namely the fuzzy hidden Markov chains (FHMC), with that of the threshold-based techniques that are the current state of the art in clinical practice. Like the classical hidden Markov chain (HMC) algorithm, FHMC takes into account noise, voxel intensity and spatial correlation in order to classify a voxel as background or functional VOI. However, the novelty of the fuzzy model consists of the inclusion of an estimation of imprecision, which should subsequently lead to a better modelling of the 'fuzzy' nature of the boundaries of the object of interest in emission tomography data. The performance of the algorithms has been assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1) and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm3 and 64 mm3). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold-based methodology for functional volume determination or activity concentration recovery considering a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore, differences between the classification and volume estimation errors evaluated were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithms was less susceptible to image noise levels in comparison to the threshold-based techniques. 
The analysis of both simulated and acquired datasets led to similar results and conclusions as far as the performance of segmentation algorithms under evaluation is concerned.
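The threshold-based comparator referenced throughout this record, and the percentage-classification-error idea it introduces, can be sketched as follows. The fraction-of-maximum value here is illustrative only; the paper's actual thresholds and error definition may differ:

```python
import numpy as np

def threshold_voi(volume, fraction=0.42):
    """Delineate a VOI by keeping voxels above a fixed fraction of the maximum uptake."""
    return volume >= fraction * volume.max()

def classification_errors(segmented, truth):
    """Positive/negative classification errors as a percentage of the true volume."""
    truth = truth.astype(bool)
    fp = np.count_nonzero(segmented & ~truth)  # voxels wrongly included
    fn = np.count_nonzero(~segmented & truth)  # voxels wrongly excluded
    return 100.0 * fp / truth.sum(), 100.0 * fn / truth.sum()
```

Splitting the error into a positive and a negative component, rather than reporting only a volume difference, captures cases where a segmentation has roughly the right size but the wrong location, which is the motivation the abstract gives for the metric.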

  8. Brain tumor target volume determination for radiation therapy treatment planning through the use of automated MRI segmentation

    NASA Astrophysics Data System (ADS)

    Mazzara, Gloria Patrika

    Radiation therapy seeks to effectively irradiate the tumor cells while minimizing the dose to adjacent normal cells. Prior research found that the low success rates for treating brain tumors would be improved with higher radiation doses to the tumor area. This is feasible only if the target volume can be precisely identified. However, the definition of tumor volume is still based on time-intensive, highly subjective manual outlining by radiation oncologists. In this study the effectiveness of two automated Magnetic Resonance Imaging (MRI) segmentation methods, k-Nearest Neighbors (kNN) and Knowledge-Guided (KG), in determining the Gross Tumor Volume (GTV) of brain tumors for use in radiation therapy was assessed. Three criteria were applied: accuracy of the contours; quality of the resulting treatment plan in terms of dose to the tumor; and a novel treatment plan evaluation technique based on post-treatment images. The kNN method was able to segment all cases while the KG method was limited to enhancing tumors and gliomas with clear enhancing edges. Various software applications were developed to create a closed smooth contour that encompassed the tumor pixels from the segmentations and to integrate these results into the treatment planning software. A novel, probabilistic measurement of accuracy was introduced to compare the agreement of the segmentation methods with the weighted average physician volume. Both computer methods under-segment the tumor volume when compared with the physicians but performed within the variability of manual contouring (28% +/- 12% for inter-operator variability). Computer segmentations were modified vertically to compensate for their under-segmentation. When comparing radiation treatment plans designed from physician-defined tumor volumes with treatment plans developed from the modified segmentation results, the reference target volume was irradiated within the same level of conformity. 
Analysis of the plans based on post-treatment MRI showed that the segmentation plans provided similar dose coverage to areas being treated by the original treatment plans. This research demonstrates that computer segmentations provide a feasible route to automatic target volume definition. Because of the lower variability and greater efficiency of the automated techniques, their use could lead to more precise plans and better prognosis for brain tumor patients.

  9. Real-Time Automatic Segmentation of Optical Coherence Tomography Volume Data of the Macular Region

    PubMed Central

    Tian, Jing; Varga, Boglárka; Somfai, Gábor Márk; Lee, Wen-Hsiang; Smiddy, William E.; Cabrera DeBuc, Delia

    2015-01-01

    Optical coherence tomography (OCT) is a high speed, high resolution and non-invasive imaging modality that enables the capturing of the 3D structure of the retina. The fast and automatic analysis of 3D volume OCT data is crucial taking into account the increased amount of patient-specific 3D imaging data. In this work, we have developed an automatic algorithm, OCTRIMA 3D (OCT Retinal IMage Analysis 3D), that could segment OCT volume data in the macular region fast and accurately. The proposed method is implemented using the shortest-path based graph search, which detects the retinal boundaries by searching the shortest-path between two end nodes using Dijkstra’s algorithm. Additional techniques, such as inter-frame flattening, inter-frame search region refinement, masking and biasing were introduced to exploit the spatial dependency between adjacent frames for the reduction of the processing time. Our segmentation algorithm was evaluated by comparing with the manual labelings and three state of the art graph-based segmentation methods. The processing time for the whole OCT volume of 496×644×51 voxels (captured by Spectralis SD-OCT) was 26.15 seconds which is at least a 2-8-fold increase in speed compared to other, similar reference algorithms used in the comparisons. The average unsigned error was about 1 pixel (~4 microns), which was also lower compared to the reference algorithms. We believe that OCTRIMA 3D is a leap forward towards achieving reliable, real-time analysis of 3D OCT retinal data. PMID:26258430
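The shortest-path boundary search at the core of this method can be illustrated on a single B-scan. This sketch assumes a precomputed per-pixel cost image (low cost along the desired layer boundary) and omits the inter-frame flattening, search-region refinement, masking and biasing described in the abstract:

```python
import heapq
import numpy as np

def shortest_path_boundary(cost):
    """Minimum-cost left-to-right path through a 2D cost image via Dijkstra.

    A virtual start node connects to every row of the first column; each step
    advances one column, moving at most one row up or down.
    """
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    heap = []
    for y in range(h):
        dist[y, 0] = cost[y, 0]
        heapq.heappush(heap, (dist[y, 0], (y, 0)))
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist[y, x] or x == w - 1:
            continue  # stale entry, or already at the last column
        for dy in (-1, 0, 1):
            ny, nx = y + dy, x + 1
            if 0 <= ny < h:
                nd = d + cost[ny, nx]
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    prev[(ny, nx)] = (y, x)
                    heapq.heappush(heap, (nd, (ny, nx)))
    # backtrack from the cheapest node in the last column
    node = (int(np.argmin(dist[:, -1])), w - 1)
    path = []
    while node is not None:
        path.append(node)
        node = prev.get(node)
    return path[::-1]
```

In a layer-segmentation setting the cost would typically be derived from the vertical image gradient, so the cheapest path traces the dark-to-bright transition of a retinal boundary.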

  10. Real-Time Automatic Segmentation of Optical Coherence Tomography Volume Data of the Macular Region.

    PubMed

    Tian, Jing; Varga, Boglárka; Somfai, Gábor Márk; Lee, Wen-Hsiang; Smiddy, William E; DeBuc, Delia Cabrera

    2015-01-01

    Optical coherence tomography (OCT) is a high speed, high resolution and non-invasive imaging modality that enables the capturing of the 3D structure of the retina. The fast and automatic analysis of 3D volume OCT data is crucial taking into account the increased amount of patient-specific 3D imaging data. In this work, we have developed an automatic algorithm, OCTRIMA 3D (OCT Retinal IMage Analysis 3D), that could segment OCT volume data in the macular region fast and accurately. The proposed method is implemented using the shortest-path based graph search, which detects the retinal boundaries by searching the shortest-path between two end nodes using Dijkstra's algorithm. Additional techniques, such as inter-frame flattening, inter-frame search region refinement, masking and biasing were introduced to exploit the spatial dependency between adjacent frames for the reduction of the processing time. Our segmentation algorithm was evaluated by comparing with the manual labelings and three state of the art graph-based segmentation methods. The processing time for the whole OCT volume of 496×644×51 voxels (captured by Spectralis SD-OCT) was 26.15 seconds which is at least a 2-8-fold increase in speed compared to other, similar reference algorithms used in the comparisons. The average unsigned error was about 1 pixel (~4 microns), which was also lower compared to the reference algorithms. We believe that OCTRIMA 3D is a leap forward towards achieving reliable, real-time analysis of 3D OCT retinal data. PMID:26258430

  11. Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET

    PubMed Central

    Hatt, Mathieu; Lamare, Frédéric; Boussion, Nicolas; Roux, Christian; Turzo, Alexandre; Cheze-Lerest, Catherine; Jarritt, Peter; Carson, Kathryn; Salzenstein, Fabien; Collet, Christophe; Visvikis, Dimitris

    2007-01-01

    Accurate volume of interest (VOI) estimation in PET is crucial in different oncology applications such as response to therapy evaluation and radiotherapy treatment planning. The objective of our study was to evaluate the performance of the proposed algorithm for automatic lesion volume delineation; namely the Fuzzy Hidden Markov Chains (FHMC), with that of current state of the art in clinical practice threshold based techniques. As the classical Hidden Markov Chain (HMC) algorithm, FHMC takes into account noise, voxel’s intensity and spatial correlation, in order to classify a voxel as background or functional VOI. However the novelty of the fuzzy model consists of the inclusion of an estimation of imprecision, which should subsequently lead to a better modelling of the “fuzzy” nature of the object of interest boundaries in emission tomography data. The performance of the algorithms has been assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1) and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm3 and 64 mm3). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold based methodology for functional volume determination or activity concentration recovery considering a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore differences between classification and volume estimation errors evaluated were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithms was less susceptible to image noise levels in comparison to the threshold based techniques. 
The analysis of both simulated and acquired datasets led to similar results and conclusions as far as the performance of segmentation algorithms under evaluation is concerned. PMID:17664555

  12. Three-dimensional partial volume segmentation of multispectral magnetic resonance images using stochastic relaxation

    NASA Astrophysics Data System (ADS)

    Johnston, Brian; Atkins, M. Stella; Booth, Kellogg S.

    1994-05-01

    An algorithm has been developed which uses stochastic relaxation in three dimensions to segment brain tissues from images acquired using multiple echo sequences from magnetic resonance imaging (MRI). The initial volume data is assumed to represent a locally dependent Markov random field. Partial volume estimates for each voxel are obtained, yielding the fractional composition of multiple tissue types for individual voxels. Minimal user intervention is required: the algorithm is trained by manually outlining regions of interest in a sample image from the volume. Segmentations obtained from multiple echo sequences are determined independently and then combined by forming the product of the probabilities for each tissue type. The implementation has been parallelized using a dataflow programming environment to reduce the computational burden. The algorithm has been used to segment 3D MRI data sets using multiple sclerosis lesions, gray matter, white matter, and cerebrospinal fluid as the partial volumes. Results correspond well with manual segmentations of the same data.

  13. Midbrain volume segmentation using active shape models and LBPs

    NASA Astrophysics Data System (ADS)

    Olveres, Jimena; Nava, Rodrigo; Escalante-Ramírez, Boris; Cristóbal, Gabriel; García-Moreno, Carla María.

    2013-09-01

    In recent years, the use of Magnetic Resonance Imaging (MRI) to detect different brain structures such as midbrain, white matter, gray matter, corpus callosum, and cerebellum has increased. This fact, together with the evidence that the midbrain is associated with Parkinson's disease, has led researchers to consider midbrain segmentation an important issue. Nowadays, Active Shape Models (ASM) are widely used in the literature for organ segmentation where the shape is an important discriminant feature. Nevertheless, this approach is based on the assumption that objects of interest are usually located on strong edges. Such a limitation may lead to a final shape far from the actual shape model. This paper proposes a novel method based on the combined use of ASM and Local Binary Patterns for segmenting the midbrain. Furthermore, we analyzed several LBP methods and evaluated their performance. The joint model considers both global and local statistics to improve final adjustments. The results showed that our proposal performs substantially better than the ASM algorithm and provides better segmentation measurements.

  14. Automated lung tumor segmentation for whole body PET volume based on novel downhill region growing

    NASA Astrophysics Data System (ADS)

    Ballangan, Cherry; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Feng, Dagan

    2010-03-01

    We propose an automated lung tumor segmentation method for whole body PET images based on a novel downhill region growing (DRG) technique, which regards homogeneous tumor hotspots as 3D monotonically decreasing functions. The method has three major steps: thoracic slice extraction with K-means clustering of the slice features; hotspot segmentation with DRG; and decision tree analysis based hotspot classification. To overcome the common problem of leakage into adjacent hotspots in automated lung tumor segmentation, DRG employs the tumors' SUV monotonicity features. DRG also uses gradient magnitude of tumors' SUV to improve tumor boundary definition. We used 14 PET volumes from patients with primary NSCLC for validation. The thoracic region extraction step achieved good and consistent results for all patients despite marked differences in size and shape of the lungs and the presence of large tumors. The DRG technique was able to avoid the problem of leakage into adjacent hotspots and produced a volumetric overlap fraction of 0.61 +/- 0.13 which outperformed four other methods where the overlap fraction varied from 0.40 +/- 0.24 to 0.59 +/- 0.14. Of the 18 tumors in 14 NSCLC studies, 15 lesions were classified correctly, 2 were false negative and 15 were false positive.
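The downhill idea above, treating a homogeneous hotspot as a monotonically decreasing function away from its maximum, can be sketched as a constrained flood fill. This toy 6-connected version uses an assumed background cut-off and omits the gradient-magnitude refinement and classification steps of the actual method:

```python
import numpy as np
from collections import deque

def downhill_region_grow(suv, seed, bg_thresh):
    """Grow from a hotspot maximum; a neighbour joins only if its SUV does not
    exceed that of the voxel it is reached from (no uphill steps), so the
    region cannot climb across a valley into an adjacent hotspot."""
    mask = np.zeros(suv.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < suv.shape[i] for i in range(3)) and not mask[n]:
                if bg_thresh <= suv[n] <= suv[z, y, x]:
                    mask[n] = True
                    queue.append(n)
    return mask
```

The monotonicity constraint is what prevents the leakage into adjacent hotspots that the abstract describes: reaching a neighbouring lesion would require an uphill step out of the valley between them.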

  15. Fast automatic segmentation of the brain in T1-weighted volume MRI data

    NASA Astrophysics Data System (ADS)

    Lemieux, Louis; Hagemann, Georg; Krakow, Karsten; Woermann, Friedrich G.

    1999-05-01

    A fully automated algorithm was developed to segment the brain from T1-weighted volume MR images. Automatic non-uniformity correction is performed prior to segmentation. The segmentation algorithm is based on automatic thresholding and morphological operations. It is fully 3D and therefore independent of scan orientation. The validity and performance of the algorithm were evaluated by comparing the automatically calculated brain volume with semi-automated measurements in 10 subjects. The amount of non-brain tissue included in the automatic segmentation was calculated. To test reproducibility, the brain volume was calculated in repeated scans in another 10 subjects. The mean and standard deviation of the difference between the semi-automated and automated measurements were 0.6% and 2.8% of the mean brain volume, respectively, which is within the inter-observer variability of the semi-automatic method. The mean amount of non-brain tissue contained in the segmented brain mask was 0.3% of the mean brain volume, with a standard deviation of 0.2%. The mean and standard deviation of the difference between the total volumes calculated from repeated scans were 0.4% and 1.2% of the mean brain volume, respectively.

  16. Image Segmentation Analysis for NASA Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2010-01-01

    NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all of the computerized image analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate moving from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects together into region classes. No other practical, operational image segmentation approach has this tight integration of region growing object finding with region classification. This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image data sections are small enough to successfully mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation will provide an overview of the RHSEG algorithm and describe how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites from remotely sensed data.

  17. 3D robust Chan-Vese model for industrial computed tomography volume data segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Linghui; Zeng, Li; Luan, Xiao

    2013-11-01

    Industrial computed tomography (CT) has been widely applied in many areas of non-destructive testing (NDT) and non-destructive evaluation (NDE). In practice, CT volume data to be dealt with may be corrupted by noise. This paper addresses the segmentation of noisy industrial CT volume data. Motivated by the research on the Chan-Vese (CV) model, we present a region-based active contour model that draws upon intensity information in local regions with a controllable scale. In the presence of noise, a local energy is firstly defined according to the intensity difference within a local neighborhood. Then a global energy is defined to integrate local energy with respect to all image points. In a level set formulation, this energy is represented by a variational level set function, where a surface evolution equation is derived for energy minimization. Comparative analysis with the CV model indicates the comparable performance of the 3D robust Chan-Vese (RCV) model. The quantitative evaluation also shows the segmentation accuracy of 3D RCV. In addition, the efficiency of our approach is validated under several types of noise, such as Poisson noise, Gaussian noise, salt-and-pepper noise and speckle noise.
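For orientation, the global data term of the two-phase Chan-Vese model that this paper builds on (without the curve-length penalty or the controllable-scale local energies the 3D RCV model adds) reduces to alternating region-mean updates and nearest-mean reassignment:

```python
import numpy as np

def chan_vese_data_term(volume, iters=50):
    """Two-phase Chan-Vese fitting term only: update the inside/outside means
    c1, c2, then move each voxel to the region whose mean is closer.
    Without the length penalty this is equivalent to 2-means on intensity."""
    inside = volume > volume.mean()  # crude initial partition
    c1 = c2 = 0.0
    for _ in range(iters):
        c1 = volume[inside].mean()
        c2 = volume[~inside].mean()
        new_inside = (volume - c1) ** 2 < (volume - c2) ** 2
        if (new_inside == inside).all():
            break  # partition has converged
        inside = new_inside
    return inside, c1, c2
```

The robust variant in the paper replaces these global means with means over a local neighbourhood of controllable scale, which is what makes the evolution tolerant of the Poisson, Gaussian, salt-and-pepper and speckle noise mentioned in the evaluation.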

  18. Multi-region unstructured volume segmentation using tetrahedron filling

    SciTech Connect

    Williams, Sean Jamerson; Dillard, Scott E; Thoma, Dan J; Hlawitschka, Mario; Hamann, Bernd

    2010-01-01

    Segmentation is one of the most common operations in image processing, and while there are several solutions already present in the literature, they each have their own benefits and drawbacks that make them well-suited for some types of data and not for others. We focus on the problem of breaking an image into multiple regions in a single segmentation pass, while supporting both voxel and scattered point data. To solve this problem, we begin with a set of potential boundary points and use a Delaunay triangulation to complete the boundaries. We use heuristic- and interaction-driven Voronoi clustering to find reasonable groupings of tetrahedra. Apart from the computation of the Delaunay triangulation, our algorithm has linear time complexity with respect to the number of tetrahedra.

  19. LANDSAT-D program. Volume 2: Ground segment

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Raw digital data, as received from the LANDSAT spacecraft, cannot generate images that meet specifications. Radiometric corrections must be made to compensate for aging and for differences in sensitivity among the instrument sensors. Geometric corrections must be made to compensate for off-nadir look angle and for spacecraft drift from its prescribed path. Corrections must also be made for look-angle jitter caused by vibrations induced by spacecraft equipment. The major components of the LANDSAT ground segment and their functions are discussed.

  20. Automated segmentation of mesothelioma volume on CT scan

    NASA Astrophysics Data System (ADS)

    Zhao, Binsheng; Schwartz, Lawrence; Flores, Raja; Liu, Fan; Kijewski, Peter; Krug, Lee; Rusch, Valerie

    2005-04-01

    In mesothelioma, response is usually assessed by computed tomography (CT). In current clinical practice the Response Evaluation Criteria in Solid Tumors (RECIST) or WHO criteria, i.e., uni-dimensional or bi-dimensional measurements, are applied to the assessment of therapy response. However, the shape of the mesothelioma volume is very irregular and its longest dimension is almost never in the axial plane. Furthermore, the sections and the sites where radiologists measure the tumor are rather subjective, resulting in poor reproducibility of tumor size measurements. We are developing an objective three-dimensional (3D) computer algorithm to automatically identify and quantify tumor volumes associated with malignant pleural mesothelioma in order to assess therapy response. The algorithm first extracts the pleural surface of the lung from the volumetric CT images by interpolating the chest ribs over a number of adjacent slices and then forming a volume that includes the thorax. This volume allows a separation of mesothelioma from the chest wall. Subsequently, the structures inside the extracted pleural surface, including the mediastinal area, lung parenchyma, and pleural mesothelioma, can be identified using a multiple-thresholding technique and morphological operations. Preliminary results have shown the potential of this algorithm to automatically detect and quantify tumor volumes on CT scans and thus to assess therapy response for malignant pleural mesothelioma.
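    The multiple-thresholding and morphology step can be sketched with scipy.ndimage on a toy volume. The HU-like values and thresholds below are assumptions for illustration only, not the paper's parameters: a first threshold isolates a soft-tissue band, morphological closing smooths the mask, and a second threshold inside that mask picks out the tumor-like region.

```python
import numpy as np
from scipy import ndimage

# Toy volume with assumed HU-like values: air ~ -1000, soft tissue ~ 40,
# tumor-like region ~ 60 (all values are illustrative, not from the paper)
vol = np.full((40, 40, 40), -1000.0)
vol[5:35, 5:35, 5:35] = 40.0            # "thorax" soft tissue
vol[10:20, 10:20, 10:20] = 60.0         # "tumor-like" region

soft = (vol > 0) & (vol < 100)          # first threshold: soft-tissue band
soft = ndimage.binary_closing(soft, iterations=2)   # smooth the mask
tumor = (vol > 50) & soft               # second threshold inside the mask
labels, n = ndimage.label(tumor)        # connected components
sizes = ndimage.sum(tumor, labels, range(1, n + 1))
```

    Chaining thresholds this way, each restricted to the mask produced by the previous step, is the basic mechanism that lets the algorithm separate mediastinum, parenchyma, and pleural tumor within the extracted pleural surface.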

  1. Hybrid segmentation framework for 3D medical image analysis

    NASA Astrophysics Data System (ADS)

    Chen, Ting; Metaxas, Dimitri N.

    2003-05-01

    Medical image segmentation is the process that defines the region of interest in an image volume. Classical segmentation methods, such as region-based and boundary-based methods, cannot make full use of the information provided by the image. In this paper we propose a general hybrid framework for 3D medical image segmentation. In our approach we combine the Gibbs prior model and the deformable model. First, Gibbs prior models are applied to each slice in a 3D medical image volume and the segmentation results are combined into a 3D binary mask of the object. We then create a deformable mesh based on this 3D binary mask. The deformable model is led to the edge features in the volume with the help of image-derived external forces. The deformable model segmentation result can in turn be used to update the parameters of the Gibbs prior models. These methods then work recursively to reach a global segmentation solution. The hybrid segmentation framework has been applied to images of the lung, heart, colon, jaw, tumors, and brain. The experimental data include MRI (T1, T2, PD), CT, X-ray, and ultrasound images. High-quality results are achieved at relatively low computational cost. We also performed validation using expert manual segmentation as the ground truth; the results show that the hybrid segmentation framework may be suitable for further clinical use.

  2. High volume production trial of mirror segments for the Thirty Meter Telescope

    NASA Astrophysics Data System (ADS)

    Oota, Tetsuji; Negishi, Mahito; Shinonaga, Hirohiko; Gomi, Akihiko; Tanaka, Yutaka; Akutsu, Kotaro; Otsuka, Itaru; Mochizuki, Shun; Iye, Masanori; Yamashita, Takuya

    2014-07-01

    The Thirty Meter Telescope is a next-generation optical/infrared telescope to be constructed on Mauna Kea, Hawaii, toward the end of this decade as an international project. Its 30 m primary mirror consists of 492 off-axis aspheric mirror segments. High-volume production of hundreds of segments started in 2013, based on the contract between the National Astronomical Observatory of Japan and Canon Inc. This paper describes the achievements of the high-volume production trials. The Stressed Mirror Figuring technique established by Keck Telescope engineers was adapted and adopted. To measure the segment surface figure, a novel stitching algorithm was evaluated by experiment, and the integration procedure was checked with a prototype segment.

  3. Comprehensive evaluation of an image segmentation technique for measuring tumor volume from CT images

    NASA Astrophysics Data System (ADS)

    Deng, Xiang; Huang, Haibin; Zhu, Lei; Du, Guangwei; Xu, Xiaodong; Sun, Yiyong; Xu, Chenyang; Jolly, Marie-Pierre; Chen, Jiuhong; Xiao, Jie; Merges, Reto; Suehling, Michael; Rinck, Daniel; Song, Lan; Jin, Zhengyu; Jiang, Zhaoxia; Wu, Bin; Wang, Xiaohong; Zhang, Shuai; Peng, Weijun

    2008-03-01

    Comprehensive quantitative evaluation of tumor segmentation techniques on large-scale clinical data sets is crucial for routine clinical use of CT-based tumor volumetry for cancer diagnosis and treatment response evaluation. In this paper, we present a systematic validation study of a semi-automatic image segmentation technique for measuring tumor volume from CT images. The segmentation algorithm was tested using clinical data of 200 tumors in 107 patients with liver, lung, lymphoma, and other types of cancer. The performance was evaluated in terms of both accuracy and reproducibility. The accuracy was assessed using 7 commonly used metrics that provide complementary information regarding the quality of the segmentation results. The reproducibility was measured by the variation of the volume measurements from 10 independent segmentations. The effects of disease type, lesion size, and slice thickness of the image data on the accuracy measures were also analyzed. Our results demonstrate that the tumor segmentation algorithm showed good correlation with ground truth for all four lesion types (r = 0.97, 0.99, 0.97, 0.98, p < 0.0001 for liver, lung, lymphoma, and other types, respectively). The segmentation algorithm produced relatively reproducible volume measurements on all lesion types (coefficient of variation in the range of 10-20%). Our results show that the algorithm is insensitive to lesion size (coefficient of determination close to 0) and to the slice thickness of the image data (p > 0.90). The validation framework used in this study has the potential to facilitate the development of new tumor segmentation algorithms and to assist large-scale evaluation of segmentation techniques for other clinical applications.
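    Three of the kinds of accuracy and reproducibility measures used in such validation studies can be written down directly; the Dice coefficient, relative volume error, and coefficient of variation below are representative examples only, since the abstract does not list the paper's exact seven metrics, and the masks are synthetic.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def volume_error(seg, truth):
    """Relative volume difference, one of several complementary metrics."""
    return abs(int(seg.sum()) - int(truth.sum())) / truth.sum()

def coeff_of_variation(volumes):
    """Reproducibility across repeated segmentations: SD / mean."""
    v = np.asarray(volumes, dtype=float)
    return v.std(ddof=1) / v.mean()

# Synthetic ground truth and a slightly under-segmented result
truth = np.zeros((20, 20, 20), bool); truth[5:15, 5:15, 5:15] = True
seg = np.zeros_like(truth);           seg[6:15, 5:15, 5:15] = True
```

    Reporting several such metrics together, rather than overlap alone, is what lets a validation framework expose complementary failure modes of a segmentation algorithm.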

  4. Segmentation propagation for the automated quantification of ventricle volume from serial MRI

    NASA Astrophysics Data System (ADS)

    Linguraru, Marius George; Butman, John A.

    2009-02-01

    Accurate ventricle volume estimates could potentially improve the understanding and diagnosis of communicating hydrocephalus. Postoperative communicating hydrocephalus has been recognized in patients with brain tumors, where the changes in ventricle volume can be difficult to identify, particularly over short time intervals. Because of the complex alterations of brain morphology in these patients, the segmentation of brain ventricles is challenging. Our method evaluates ventricle size from serial brain MRI examinations: we (i) combined serial images to increase SNR, (ii) automatically segmented the combined image to generate a ventricle template using fast marching methods and geodesic active contours, and (iii) propagated the segmentation using deformable registration of the original MRI datasets. By applying this deformation to the ventricle template, serial volume estimates were obtained in a robust manner from routine clinical images (0.93 overlap) and their variation analyzed.
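    Step (iii), applying a deformation to the ventricle template, can be illustrated with scipy's `map_coordinates`. The sketch below is a toy 2D version in which the "deformation field" is a known pure translation; a real deformable registration would supply a dense, spatially varying field, and the voxel count is used as a crude volume proxy.

```python
import numpy as np
from scipy import ndimage

# Toy 2D "ventricle template" mask from a baseline scan
template = np.zeros((64, 64))
template[20:40, 20:40] = 1.0

# Assumed deformation: a pure shift standing in for a registration result
yy, xx = np.mgrid[0:64, 0:64].astype(float)
dy, dx = 3.0, -2.0
# Sample the template at the displaced coordinates (nearest-neighbor)
warped = ndimage.map_coordinates(template, [yy - dy, xx - dx], order=0)

volume_estimate = warped.sum()           # voxel count as a volume proxy
```

    Because the template is segmented only once and then warped onto each follow-up scan, volume estimates stay consistent across the series, which is the robustness the abstract reports.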

  5. Comparison of EM-based and level set partial volume segmentations of MR brain images

    NASA Astrophysics Data System (ADS)

    Tagare, Hemant D.; Chen, Yunmei; Fulbright, Robert K.

    2008-03-01

    EM and level set algorithms are competing methods for segmenting MRI brain images. This paper presents a fair comparison of the two techniques using the Montreal Neurological Institute's software phantom. There are many flavors of level set algorithms for segmentation into multiple regions (multi-phase algorithms, multi-layer algorithms). The specific algorithm we evaluate is a variant of the multi-layer level set algorithm; it uses a single level set function for segmenting the image into multiple classes and can be run to completion without restarting. The EM-based algorithm is standard. Both algorithms can model a variable number of partial volume classes as well as image inhomogeneity (bias field). Our evaluation consists of systematically changing the number of partial volume classes, the additive image noise, and the regularization parameters. The results suggest that the performances of the two algorithms are comparable across noise levels, numbers of partial volume classes, and regularization settings. The segmentation errors of both algorithms are around 5-10% for cerebrospinal fluid, gray matter, and white matter. The level set algorithm appears to have a slight advantage for gray matter segmentation, which may be beneficial in studying brain diseases (multiple sclerosis, Alzheimer's disease) where small changes in gray matter volume are significant.
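    The core of the EM-based tissue classifier is ordinary EM for a Gaussian mixture over voxel intensities. The following is a bare-bones 1D, two-class sketch on synthetic data; the bias-field estimation and partial-volume classes of the full algorithm are omitted, and the intensities are invented.

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """Bare-bones EM for a two-class 1D Gaussian mixture: the E-step
    computes posterior responsibilities, the M-step re-estimates the
    mixture weights, means, and variances from them."""
    mu = np.array([x.min(), x.max()], float)   # crude initialization
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each class for each voxel
        lik = pi / np.sqrt(2 * np.pi * var) * \
              np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: weighted re-estimation of the parameters
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return mu, var, pi

# Synthetic two-tissue intensity data (means 0 and 4 are illustrative)
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.0, 0.5, 500), rng.normal(4.0, 0.5, 500)])
mu, var, pi = em_gmm_1d(x)
```

    Adding partial volume classes amounts to inserting extra mixture components whose means lie between the pure-tissue means, which is how both algorithms in the comparison model PVA.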

  6. Pulmonary airways tree segmentation from CT examinations using adaptive volume of interest

    NASA Astrophysics Data System (ADS)

    Park, Sang Cheol; Kim, Won Pil; Zheng, Bin; Leader, Joseph K.; Pu, Jiantao; Tan, Jun; Gur, David

    2009-02-01

    Airways tree segmentation is an important step in quantitatively assessing the severity of and changes in several lung diseases such as chronic obstructive pulmonary disease (COPD), asthma, and cystic fibrosis. It can also be used in guiding bronchoscopy. The purpose of this study is to develop an automated scheme for segmenting the airways tree structure depicted on chest CT examinations. After lung volume segmentation, the scheme defines the first cylinder-like volume of interest (VOI) using a series of images depicting the trachea. The scheme then iteratively defines and adds subsequent VOIs using a region growing algorithm combined with adaptively determined thresholds in order to trace possible sections of airways located inside the combined VOI in question. The airway tree segmentation process is automatically terminated after the scheme assesses all defined VOIs in the iteratively assembled VOI list. In this preliminary study, ten CT examinations with 1.25 mm section thickness and two different CT image reconstruction kernels ("bone" and "standard") were selected and used to test the proposed airways tree segmentation scheme. The experimental results showed that (1) adopting this approach effectively prevented the scheme from infiltrating into the parenchyma, (2) the proposed method segmented the airways trees reasonably accurately, with a lower false-positive identification rate than other previously reported schemes based on 2-D image segmentation and data analyses, and (3) the proposed adaptive, iterative threshold selection method for the region growing step in each identified VOI enabled the scheme to segment the airways trees reliably to the 4th generation in this limited dataset, with successful segmentation up to the 5th generation in a fraction of the airways tree branches.
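    Region growing with an adaptively determined threshold can be sketched in plain numpy with a breadth-first queue. In this toy version a voxel joins the region if it lies within a few standard deviations of the current region mean; the VOI bookkeeping, cylinder fitting, and termination logic of the actual scheme are omitted, and the HU-like values are invented.

```python
import numpy as np
from collections import deque

def grow_region(vol, seed, n_std=3.0):
    """Toy seeded region growing with an adaptive threshold: a voxel is
    accepted if it lies within n_std standard deviations of the mean of
    the voxels accepted so far (6-connectivity)."""
    mask = np.zeros(vol.shape, bool)
    mask[seed] = True
    vals = [vol[seed]]
    q = deque([seed])
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in [(1,0,0),(-1,0,0),(0,1,0),(0,-1,0),(0,0,1),(0,0,-1)]:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < vol.shape[i] for i in range(3)) and not mask[n]:
                mean, sd = np.mean(vals), np.std(vals) + 1e-6
                if abs(vol[n] - mean) <= n_std * sd:   # adaptive acceptance
                    mask[n] = True
                    vals.append(vol[n])
                    q.append(n)
    return mask

# Toy volume: a uniform air-filled tube (illustrative HU values) in "wall"
vol = np.full((20, 20, 20), -100.0)
vol[5:15, 9:12, 9:12] = -950.0
mask = grow_region(vol, (10, 10, 10))
```

    Because the acceptance threshold tracks the statistics of the region grown so far, leakage into the much brighter wall is rejected, which is the same intuition behind the adaptive thresholds that prevent infiltration into the parenchyma.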

  7. Multi-Segment Hemodynamic and Volume Assessment With Impedance Plethysmography: Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Ku, Yu-Tsuan E.; Montgomery, Leslie D.; Webbon, Bruce W. (Technical Monitor)

    1995-01-01

    Definition of multi-segmental circulatory and volume changes in the human body provides an understanding of the physiologic responses to various aerospace conditions. We have developed instrumentation and testing procedures at NASA Ames Research Center that may be useful in biomedical research and clinical diagnosis. Specialized two, four, and six channel impedance systems will be described that have been used to measure calf, thigh, thoracic, arm, and cerebral hemodynamic and volume changes during various experimental investigations.

  8. Sequential Registration-Based Segmentation of the Prostate Gland in MR Image Volumes.

    PubMed

    Khalvati, Farzad; Salmanpour, Aryan; Rahnamayan, Shahryar; Haider, Masoom A; Tizhoosh, H R

    2016-04-01

    Accurate and fast segmentation and volume estimation of the prostate gland in magnetic resonance (MR) images are necessary steps in the diagnosis, treatment, and monitoring of prostate cancer. This paper presents an algorithm for prostate gland volume estimation based on the semi-automated segmentation of individual slices in T2-weighted MR image sequences. The proposed sequential registration-based segmentation (SRS) algorithm, which was inspired by the clinical workflow during medical image contouring, relies on inter-slice image registration and user interaction/correction to segment the prostate gland without the use of an anatomical atlas. It automatically generates contours for each slice using a registration algorithm, provided that the user edits and approves the markings in some previous slices. We conducted comprehensive experiments to measure the performance of the proposed algorithm using three registration methods (i.e., rigid, affine, and nonrigid). Five radiation oncologists participated in the study, contouring the prostate MR (T2-weighted) images of 15 patients both manually and using the SRS algorithm. Compared to manual segmentation, on average, the SRS algorithm reduced the contouring time by 62% (a speedup factor of 2.64×) while maintaining the segmentation accuracy at the same level as the intra-user agreement level (i.e., Dice similarity coefficient of 91% versus 90%). The proposed algorithm exploits the inter-slice similarity of volumetric MR image series to achieve highly accurate results while significantly reducing the contouring time. PMID:26546179

  9. Semi-automatic active contour approach to segmentation of computed tomography volumes

    NASA Astrophysics Data System (ADS)

    Loncaric, Sven; Kovacevic, Domagoj; Sorantin, Erich

    2000-06-01

    In this paper a method for three-dimensional (3-D) semi-automatic segmentation of volumes of medical images is described. The method is semi-automatic in the sense that, in the initial phase, user assistance is required for manual segmentation of a certain number of slices (cross-sections) of the volume. In the second phase, the algorithm for automatic segmentation is started. The segmentation algorithm is based on the active contour approach. A semi-3-D active contour algorithm is used, in the sense that additional inter-slice forces are introduced in order to constrain the obtained solution. The energy function that is minimized is modified to exploit the information provided by the manual segmentation of some of the slices performed by the user. The experiments have been performed using computed tomography (CT) scans of the abdominal region of the human body. In particular, CT images of abdominal aortic aneurysms have been segmented to determine the location of the aorta. The experiments have shown the feasibility of the approach.

  10. Swarm Intelligence Integrated Graph-Cut for Liver Segmentation from 3D-CT Volumes

    PubMed Central

    Eapen, Maya; Korah, Reeba; Geetha, G.

    2015-01-01

    The segmentation of organs in CT volumes is a prerequisite for diagnosis and treatment planning. In this paper, we focus on liver segmentation from contrast-enhanced abdominal CT volumes, a challenging task due to intensity overlapping, blurred edges, large variability in liver shape, and complex background with cluttered features. The algorithm integrates multiple discriminative cues (i.e., prior domain information, an intensity model, and regional characteristics of the liver) in a graph-cut image segmentation framework. The paper proposes a swarm intelligence inspired edge-adaptive weight function for regulating the energy minimization of the traditional graph-cut model. The model is validated both qualitatively (by clinicians and radiologists) and quantitatively on publicly available computed tomography (CT) datasets (MICCAI 2007 liver segmentation challenge, 3D-IRCAD). Quantitative evaluation of segmentation results is performed using liver volume calculations, and mean scores of 80.8% and 82.5% are obtained on the MICCAI and IRCAD datasets, respectively. The experimental results illustrate the efficiency and effectiveness of the proposed method. PMID:26689833

  11. Segmentation of Cerebral Gyri in the Sectioned Images by Referring to Volume Model

    PubMed Central

    Park, Jin Seo; Chung, Min Suk; Chi, Je-Geun; Park, Hyo Seok

    2010-01-01

    The authors previously prepared high-quality sectioned images of a cadaver head. For the delineation of each cerebral gyrus, a three-dimensional model of the same brain was required. The purpose of this study was to develop a segmentation protocol for the cerebral gyri by referring to a three-dimensional model on a personal computer. From the 114 sectioned images (1 mm intervals), a cerebral hemisphere was outlined. Using MRIcro software, sectioned images including only the cerebral hemisphere were volume-reconstructed. The volume model was rotated to capture the lateral, medial, superior, and inferior views of the cerebral hemisphere. On these four views, the areas of 33 cerebral gyri were painted with colors. Based on the painted views, the cerebral gyri in the sectioned images were identified and outlined in Photoshop to prepare segmented images. The segmented images were used for production of volume and surface models of the selected gyri. The segmentation method developed in this research is expected to be applicable to other types of images, such as MRIs. The sectioned and segmented images of the cadaver brain acquired in the present study will hopefully be utilized in medical learning tools for neuroanatomy. PMID:21165283

  12. Semi-automatic tool for segmentation and volumetric analysis of medical images.

    PubMed

    Heinonen, T; Dastidar, P; Kauppinen, P; Malmivuo, J; Eskola, H

    1998-05-01

    Segmentation software is described, developed for medical image processing and run on Windows. The software applies basic image processing techniques through a graphical user interface. For particular applications, such as brain lesion segmentation, the software enables the combination of different segmentation techniques to improve its efficiency. The program is applied for magnetic resonance imaging, computed tomography and optical images of cryosections. The software can be utilised in numerous applications, including pre-processing for three-dimensional presentations, volumetric analysis and construction of volume conductor models. PMID:9747567

  13. Automated segmentation and measurement of global white matter lesion volume in patients with multiple sclerosis.

    PubMed

    Alfano, B; Brunetti, A; Larobina, M; Quarantelli, M; Tedeschi, E; Ciarmiello, A; Covelli, E M; Salvatore, M

    2000-12-01

    A fully automated magnetic resonance (MR) segmentation method for identification and volume measurement of demyelinated white matter has been developed. Spin-echo MR brain scans were performed in 38 patients with multiple sclerosis (MS) and in 46 healthy subjects. Segmentation of normal tissues and white matter lesions (WML) was obtained, based on their relaxation rates and proton density maps. For WML identification, additional criteria included three-dimensional (3D) lesion shape and surrounding tissue composition. Segmented images were generated, and normal brain tissues and WML volumes were obtained. Sensitivity, specificity, and reproducibility of the method were calculated, using the WML identified by two neuroradiologists as the gold standard. The average volume of "abnormal" white matter in normal subjects (false positive) was 0.11 ml (range 0-0.59 ml). In MS patients the average WML volume was 31.0 ml (range 1.1-132.5 ml), with a sensitivity of 87.3%. In the reproducibility study, the mean SD of WML volumes was 2.9 ml. The procedure appears suitable for monitoring disease changes over time. J. Magn. Reson. Imaging 2000;12:799-807. PMID:11105017

  14. Segmentation of organs at risk in CT volumes of head, thorax, abdomen, and pelvis

    NASA Astrophysics Data System (ADS)

    Han, Miaofei; Ma, Jinfeng; Li, Yan; Li, Meiling; Song, Yanli; Li, Qiang

    2015-03-01

    Accurate segmentation of organs at risk (OARs) is a key step in the treatment planning system (TPS) of image-guided radiation therapy. We are developing three classes of methods to segment 17 organs at risk throughout the whole body, including the brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, liver, kidneys, spleen, prostate, rectum, femoral heads, and skin. The three classes of segmentation methods include (1) threshold-based methods for organs of large contrast with adjacent structures, such as the lungs, trachea, and skin; (2) context-driven Generalized Hough Transform-based methods combined with a graph cut algorithm for robust localization and segmentation of the liver, kidneys, and spleen; and (3) atlas- and registration-based methods for segmentation of the heart and all organs in CT volumes of the head and pelvis. The segmentation accuracy for the seventeen organs was subjectively evaluated by two medical experts on a three-level score: 0, poor (unusable in clinical practice); 1, acceptable (minor revision needed); and 2, good (nearly no revision needed). A database was collected from Ruijin Hospital, Huashan Hospital, and Xuhui Central Hospital in Shanghai, China, including 127 head scans, 203 thoracic scans, 154 abdominal scans, and 73 pelvic scans. The percentages of "good" segmentation results were 97.6%, 92.9%, 81.1%, 87.4%, 85.0%, 78.7%, 94.1%, 91.1%, 81.3%, 86.7%, 82.5%, 86.4%, 79.9%, 72.6%, 68.5%, 93.2%, and 96.9% for the brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, liver, kidneys, spleen, prostate, rectum, femoral heads, and skin, respectively. Various organs at risk can be reliably segmented from CT scans by use of the three classes of segmentation methods.

  15. Generalized method for partial volume estimation and tissue segmentation in cerebral magnetic resonance images

    PubMed Central

    Khademi, April; Venetsanopoulos, Anastasios; Moody, Alan R.

    2014-01-01

    An artifact found in magnetic resonance images (MRI) called partial volume averaging (PVA) has received much attention, since accurate segmentation of cerebral anatomy and pathology is impeded by this artifact. Traditional neurological segmentation techniques rely on Gaussian mixture models to handle noise and PVA, or on high-dimensional feature sets that exploit redundancy in multispectral datasets. Unfortunately, model-based techniques may not be optimal for images with non-Gaussian noise distributions and/or pathology, and multispectral techniques model probabilities instead of the partial volume (PV) fraction. For robust segmentation, a PV fraction estimation approach is developed for cerebral MRI that does not depend on predetermined intensity distribution models or multispectral scans. Instead, the PV fraction is estimated directly from each image using an adaptively defined global edge map constructed by exploiting a relationship between edge content and PVA. The final PVA map is used to segment anatomy and pathology with subvoxel accuracy. Validation on simulated and real, pathology-free T1 MRI (Gaussian noise), as well as pathological fluid attenuation inversion recovery MRI (non-Gaussian noise), demonstrates that the PV fraction is accurately estimated and the resultant segmentation is robust. Comparison to model-based methods further highlights the benefits of the current approach. PMID:26158022
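    The underlying two-tissue mixing model is worth writing out. Under the usual linear PVA assumption, an observed voxel intensity is y = f·mu_a + (1 − f)·mu_b, so the PV fraction inverts by linear interpolation between the pure-tissue means. The tissue means below are invented for illustration; the paper itself estimates the fraction from an adaptively defined edge map rather than from fixed intensity models.

```python
import numpy as np

# Assumed pure-tissue mean intensities (illustrative values only)
mu_gm, mu_wm = 80.0, 120.0

# Observed voxel intensities ranging from pure GM to pure WM
y = np.array([80.0, 90.0, 100.0, 120.0])

# Invert the linear mixing model for the WM fraction, clipped to [0, 1]
f_wm = np.clip((y - mu_gm) / (mu_wm - mu_gm), 0.0, 1.0)
```

    A voxel halfway between the two means thus gets a fraction of 0.5 rather than a hard label, which is what "subvoxel accuracy" refers to in the abstract.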

  16. Cell nuclei segmentation for histopathological image analysis

    NASA Astrophysics Data System (ADS)

    Kong, Hui; Belkacem-Boussaid, Kamel; Gurcan, Metin

    2011-03-01

    In this paper, we propose a supervised method for segmenting cell nuclei from background and extra-cellular regions in pathological images. To this end, we separate the cell regions from the other areas by classifying the image pixels into either the cell or the extra-cellular category. Instead of using pixel color intensities, the color-texture extracted at the local neighborhood of each pixel is used as the input to our classification algorithm. The color-texture at each pixel is extracted by a local Fourier transform (LFT) from a new color space, the most discriminant color space (MDC). The MDC color space is optimized as a linear combination of the original RGB color space so that the LFT texture features extracted in the MDC color space achieve the most discrimination in terms of classification (segmentation) performance. To speed up the texture feature extraction process, we develop an efficient LFT extraction algorithm based on image shifting and integral images. For evaluation, our method is compared with state-of-the-art segmentation algorithms (graph-cut, mean-shift, etc.). Empirical results show that our segmentation method achieves better performance than these popular methods.

  17. Automatic segmentation of tumor-laden lung volumes from the LIDC database

    NASA Astrophysics Data System (ADS)

    O'Dell, Walter G.

    2012-03-01

    The segmentation of the lung parenchyma is often a critical pre-processing step prior to the application of computer-aided detection of lung nodules. Segmentation of the lung volume can dramatically decrease computation time and reduce the number of false positive detections by excluding extra-pulmonary tissue from consideration. However, while many algorithms are capable of adequately segmenting the healthy lung, none have been demonstrated to work reliably well on tumor-laden lungs. A particular challenge is to preserve tumorous masses attached to the chest wall, mediastinum, or major vessels. In this role, lung volume segmentation comprises an important computational step that can adversely affect the performance of the overall CAD algorithm. An automated lung volume segmentation algorithm has been developed with the goals of maximally excluding extra-pulmonary tissue while retaining all true nodules. The algorithm comprises a series of tasks including intensity thresholding, 2-D and 3-D morphological operations, 2-D and 3-D flood-filling, and snake-based clipping of nodules attached to the chest wall. It features the ability to (1) exclude the trachea and bowels, (2) snip large attached nodules using snakes, (3) snip small attached nodules using dilation, (4) preserve large masses fully internal to the lung volume, (5) account for basal aspects of the lung, where in a 2-D slice the lower sections appear to be disconnected from the main lung, and (6) achieve separation of the right and left hemi-lungs. The algorithm was developed and trained on the first 100 datasets of the LIDC image database.
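    The thresholding, hole-filling, and hemi-lung separation steps can be sketched with scipy.ndimage on a toy volume. The HU-like values are assumptions for illustration, and the snake-based clipping of wall-attached nodules is omitted; note how 3D hole filling reclaims a tumor-like mass fully internal to a lung, which is feature (4) above.

```python
import numpy as np
from scipy import ndimage

# Toy volume (illustrative HU-like values): soft tissue with two "lungs"
vol = np.full((30, 30, 30), 40.0)        # body soft tissue
vol[5:25, 3:14, 5:25] = -850.0           # left "lung"
vol[5:25, 16:27, 5:25] = -850.0          # right "lung"
vol[10:15, 7:10, 10:15] = 30.0           # tumor-like mass inside left lung

air = vol < -400                         # intensity thresholding
air = ndimage.binary_fill_holes(air)     # 3D fill: re-include the mass
labels, n = ndimage.label(air)           # separate the two hemi-lungs
sizes = ndimage.sum(air, labels, range(1, n + 1))
```

    A plain threshold would carve the internal mass out of the lung mask; filling holes in 3D restores it, so a downstream nodule detector still sees the lesion.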

  18. Automatic large-volume object region segmentation in LiDAR point clouds

    NASA Astrophysics Data System (ADS)

    Varney, Nina M.; Asari, Vijayan K.

    2014-10-01

    LiDAR is a remote sensing method which produces precise point clouds consisting of millions of geo-spatially located 3D data points. Because of the nature of LiDAR point clouds, it can often be difficult for analysts to accurately and efficiently recognize and categorize objects. The goal of this paper is automatic large-volume object region segmentation in LiDAR point clouds. This efficient segmentation technique is intended to be a pre-processing step for the eventual classification of objects within the point cloud. The data is initially segmented into local histogram bins. This local histogram bin representation allows for the efficient consolidation of the point cloud data into voxels without the loss of location information. Additionally, by binning the points, important feature information can be extracted, such as the distribution of points, the density of points, and a local ground. From these local histograms, a 3D automatic seeded region growing technique is applied. This technique performs seed selection based on two criteria: similarity and Euclidean distance to nearest neighbors. The neighbors of selected seeds are then examined and assigned labels based on location and Euclidean distance to a region mean. After the initial segmentation step, region integration is performed to rejoin over-segmented regions. The large number of points in LiDAR data can make other segmentation techniques extremely time consuming. In addition to producing accurate object segmentation results, the proposed local histogram binning process allows for efficient segmentation, covering a point cloud of over 9,000 points in 10 seconds.
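    The binning step itself is a one-liner in numpy: points are assigned to cubic bins by integer division of their coordinates, and per-bin counts give a density feature of the kind used to seed the region growing. The bin size and the synthetic point cloud below are assumptions for illustration.

```python
import numpy as np

# Synthetic stand-in for LiDAR returns in a 10 x 10 x 10 region
rng = np.random.default_rng(3)
pts = rng.random((1000, 3)) * 10.0

bin_size = 1.0                                      # assumed bin edge length
idx = np.floor(pts / bin_size).astype(int)          # voxel index per point
voxels, counts = np.unique(idx, axis=0, return_counts=True)

density = counts / bin_size ** 3         # points per unit volume, per voxel
```

    Every point keeps an exact voxel index, so the consolidation loses no location information beyond the bin resolution, which is why the representation supports both density features and later region growing.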

  19. Trabecular-Iris Circumference Volume in Open Angle Eyes Using Swept-Source Fourier Domain Anterior Segment Optical Coherence Tomography

    PubMed Central

    Rigi, Mohammed; Blieden, Lauren S.; Nguyen, Donna; Chuang, Alice Z.; Baker, Laura A.; Bell, Nicholas P.; Lee, David A.; Mankiewicz, Kimberly A.; Feldman, Robert M.

    2014-01-01

    Purpose. To introduce a new anterior segment optical coherence tomography parameter, trabecular-iris circumference volume (TICV), which measures the integrated volume of the peripheral angle, and establish a reference range in normal, open angle eyes. Methods. One eye of each participant with open angles and a normal anterior segment was imaged using 3D mode by the CASIA SS-1000 (Tomey, Nagoya, Japan). Trabecular-iris space area (TISA) and TICV at 500 and 750 µm were calculated. Analysis of covariance was performed to examine the effect of age and its interaction with spherical equivalent. Results. The study included 100 participants with a mean age of 50 (±15) years (range 20–79). TICV showed a normal distribution with a mean (±SD) value of 4.75 µL (±2.30) for TICV500 and a mean (±SD) value of 8.90 µL (±3.88) for TICV750. Overall, TICV showed an age-related reduction (P = 0.035). In addition, angle volume increased with increased myopia for all age groups, except for those older than 65 years. Conclusions. This study introduces a new parameter to measure peripheral angle volume, TICV, with age-adjusted normal ranges for open angle eyes. Further investigation is warranted to determine the clinical utility of this new parameter. PMID:25210623

  20. Efficient 3D volume segmentation of MR images by a modified deterministic annealing approach

    NASA Astrophysics Data System (ADS)

    Ge, Zhanyu; Mitra, Sunanda

    2001-07-01

    This paper presents the results of applying the deterministic annealing (DA) algorithm to simulated magnetic resonance image segmentation. The applicability of this methodology for 3-D segmentation has been rigorously tested using the simulated MRI volumes of a normal brain at the BrainWeb [8], for all 181 slices and for the whole volume, in different modalities (T1, T2, and PD), without and with various levels of noise and intensity inhomogeneity. With proper thresholding of the clusters formed by the modified DA, almost zero misclassification was achieved in the absence of noise. Even with up to 7% added noise and 40% inhomogeneity, the average misclassification rates of the voxels belonging to white matter, gray matter, and cerebrospinal fluid were found to be less than 5% after median filtering. The accuracy, stability, global optimization, and speed of the DA algorithm for 3-D MR image segmentation could provide a more rigorous tool for identification of diseased brain tissues from 3-D MR images than other existing 3-D segmentation techniques. Further inquiry into the DA algorithm shows that it is a Bayesian classifier under the assumption that the data to be classified follow a multivariate normal distribution. Being a Bayesian classifier guarantees its achievement of global optimization.
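    The annealing idea can be sketched in 1D: cluster assignments are soft Gibbs responsibilities at a temperature T, and cooling T gradually hardens them, which helps avoid the poor local minima that plain k-means or EM can fall into. This is a minimal illustration on synthetic intensities, not the paper's modified DA; the starting temperature, cooling rate, and data are all assumptions.

```python
import numpy as np

def da_cluster(x, k=2, t_start=1.0, t_end=0.01, cooling=0.9, n_inner=20):
    """Minimal deterministic annealing clustering in 1D: soft (Gibbs)
    assignments at temperature t are annealed toward hard ones as t
    cools, with centroid updates at each temperature."""
    centers = np.linspace(x.min(), x.max(), k)
    t = t_start
    while t > t_end:
        for _ in range(n_inner):
            d2 = (x[:, None] - centers) ** 2
            # Gibbs responsibilities (shifted by the row minimum for stability)
            p = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / t)
            p /= p.sum(axis=1, keepdims=True)
            centers = (p * x[:, None]).sum(axis=0) / p.sum(axis=0)
        t *= cooling                     # cool the temperature
    return centers

# Synthetic two-tissue intensities (means 1.0 and 3.0 are illustrative)
rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(1.0, 0.1, 300), rng.normal(3.0, 0.1, 300)])
centers = np.sort(da_cluster(x))
```

    In the limit t → 0 the responsibilities become hard nearest-center assignments, so thresholding the final clusters, as the paper does, recovers a crisp segmentation.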

  1. A local contrast based approach to threshold segmentation for PET target volume delineation

    SciTech Connect

    Drever, Laura; Robinson, Don M.; McEwan, Alexander; Roa, Wilson

    2006-06-15

    Current radiation therapy techniques, such as intensity modulated radiation therapy and three-dimensional conformal radiotherapy, rely on the precise delivery of high doses of radiation to well-defined volumes. CT, the imaging modality most commonly used to determine treatment volumes, cannot, however, easily distinguish between cancerous and normal tissue. The ability of positron emission tomography (PET) to more readily differentiate between malignant and healthy tissues has generated great interest in using PET images to delineate target volumes for radiation treatment planning. At present the accurate geometric delineation of tumor volumes is a subject open to considerable interpretation. The possibility of using a local contrast based approach to threshold segmentation to accurately delineate PET target cross sections is investigated using well-defined cylindrical and spherical volumes. Contrast levels which yield correct volumetric quantification are found to be a function of the activity concentration ratio between target and background, target size, and slice location. Possibilities for clinical implementation are explored along with the limits posed by this form of segmentation.
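A local contrast based threshold is conventionally defined relative to background activity and peak uptake. A generic sketch of such a rule follows; the paper's contribution is determining which contrast level recovers the true cross section as a function of target/background ratio, target size, and slice location, so the fixed 50% level here is only a placeholder:

```python
import numpy as np

def contrast_threshold_mask(img, background, contrast=0.5):
    """Segment a PET cross-section by thresholding at a fixed contrast
    level between background activity and peak uptake."""
    img = np.asarray(img, dtype=float)
    thr = background + contrast * (img.max() - background)
    return img >= thr
```

With a background of 1 and a peak of 9, a 50% contrast places the threshold at 5, so only voxels at or above 5 enter the target cross section.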

  2. Four-chamber heart modeling and automatic segmentation for 3D cardiac CT volumes

    NASA Astrophysics Data System (ADS)

    Zheng, Yefeng; Georgescu, Bogdan; Barbu, Adrian; Scheuering, Michael; Comaniciu, Dorin

    2008-03-01

    Multi-chamber heart segmentation is a prerequisite for quantification of the cardiac function. In this paper, we propose an automatic heart chamber segmentation system. There are two closely related tasks in developing such a system: heart modeling and automatic model fitting to an unseen volume. The heart is a complicated non-rigid organ with four chambers and several major vessel trunks attached. A flexible and accurate model is necessary to capture the heart chamber shape at an appropriate level of detail. In our four-chamber surface mesh model, the following two factors are considered and traded off: 1) accuracy in anatomy and 2) ease of both annotation and automatic detection. Important landmarks such as valves and cusp points on the interventricular septum are explicitly represented in our model. These landmarks can be detected reliably to guide the automatic model fitting process. We also propose two mechanisms, the rotation-axis-based and parallel-slice-based resampling methods, to establish mesh point correspondence, which is necessary to build a statistical shape model to enforce a priori shape constraints in the model fitting procedure. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3D computed tomography (CT) volumes. Our approach is based on recent advances in learning discriminative object models, and we exploit a large database of annotated CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. A novel algorithm, Marginal Space Learning (MSL), is introduced to solve the 9-dimensional similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3D shape through learning-based boundary delineation. Extensive experiments demonstrate the efficiency and robustness of the proposed approach, comparing favorably to the state-of-the-art.
This is the first study reporting stable results on a large cardiac CT dataset with 323 volumes. In addition, we achieve a speed of less than eight seconds for automatic segmentation of all four chambers.

  3. A novel colonic polyp volume segmentation method for computer tomographic colonography

    NASA Astrophysics Data System (ADS)

    Wang, Huafeng; Li, Lihong C.; Han, Hao; Song, Bowen; Peng, Hao; Wang, Yunhong; Wang, Lihua; Liang, Zhengrong

    2014-03-01

    Colorectal cancer is the third most common type of cancer. The disease can, however, be prevented by detection and removal of precursor adenomatous polyps after diagnosis by experts on computer tomographic colonography (CTC). During CTC diagnosis, the radiologist looks for colon polyps and assesses not only their size but also their malignancy. Segmenting polyp volumes from their complicated surrounding environment is therefore of great significance for the CTC-based early diagnosis task. Previously, polyp volumes were mainly obtained from manual or semi-automatic delineation by radiologists. As a result, some deviations cannot be avoided, since the polyps are usually small (6~9 mm) and radiologists' experience and knowledge vary from one to another. To achieve automatic polyp segmentation, we propose a new method based on a colon decomposition strategy. We evaluated our algorithm on both phantom and patient data. Experimental results demonstrate our approach is capable of segmenting small polyps from their complicated background.

  4. Semi-automated segmentation of carotid artery total plaque volume from three dimensional ultrasound carotid imaging

    NASA Astrophysics Data System (ADS)

    Buchanan, D.; Gyacskov, I.; Ukwatta, E.; Lindenmaier, T.; Fenster, A.; Parraga, G.

    2012-03-01

    Carotid artery total plaque volume (TPV) is a three-dimensional (3D) ultrasound (US) imaging measurement of carotid atherosclerosis, providing a direct, non-invasive, and regional estimation of atherosclerotic plaque volume - the direct determinant of carotid stenosis and ischemic stroke. While 3DUS measurements of TPV provide the potential to monitor plaque in individual patients and in populations enrolled in clinical trials, until now such measurements have been performed manually, which is laborious, time-consuming, and prone to intra-observer and inter-observer variability. To address this critical translational limitation, here we describe the development and application of a semi-automated 3DUS plaque volume measurement. This semi-automated TPV measurement incorporates three user-selected boundaries in two views of the 3DUS volume to generate a geometric approximation of TPV for each plaque measured. We compared semi-automated repeated measurements to manual segmentation of 22 individual plaques ranging in volume from 2 mm3 to 151 mm3. Mean plaque volume was 43+/-40 mm3 for semi-automated and 48+/-46 mm3 for manual measurements, and these were not significantly different (p=0.60). Mean coefficient of variation (CV) was 12.0+/-5.1% for the semi-automated measurements.

  5. Segmentation of cerebral MRI scans using a partial volume model, shading correction, and an anatomical prior

    NASA Astrophysics Data System (ADS)

    Noe, Aljaz; Kovacic, Stanislav; Gee, James C.

    2001-07-01

    A mixture-model clustering algorithm is presented for robust MRI brain image segmentation in the presence of partial volume averaging. The method uses additional classes to represent partial volume voxels of mixed tissue type in the image. Probability distributions for partial volume voxels are modeled accordingly. The image model also allows for tissue-dependent variance values and voxel neighborhood information is taken into account in the clustering formulation. Additionally we extend the image model to account for a low frequency intensity inhomogeneity that may be present in an image. This so-called shading effect is modeled as a linear combination of polynomial basis functions, and is estimated within the clustering algorithm. We also investigate the possibility of using additional anatomical prior information obtained by registering tissue class template images to the image to be segmented. The final result is the estimated fractional amount of each tissue type present within a voxel in addition to the label assigned to the voxel. A parallel implementation of the method is evaluated using synthetic and real MRI data.
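The key idea, adding explicit classes for voxels of mixed tissue type, can be illustrated with a stripped-down soft labelling over two pure classes and one 50/50 mixed class whose mean lies halfway between the pure means. This only sketches the extra-class idea; the paper additionally estimates tissue-dependent variances, neighbourhood constraints, and the shading field:

```python
import numpy as np

def pv_soft_labels(intensities, mu_a, mu_b, sigma=1.0):
    """Posterior fractional content of tissue A per voxel under a
    three-class model: pure A, a 50/50 A-B mix, and pure B."""
    x = np.asarray(intensities, dtype=float)
    means = np.array([mu_a, 0.5 * (mu_a + mu_b), mu_b])
    frac_a = np.array([1.0, 0.5, 0.0])        # tissue-A fraction per class
    lik = np.exp(-0.5 * ((x[:, None] - means[None, :]) / sigma) ** 2)
    resp = lik / lik.sum(axis=1, keepdims=True)   # posterior class probs
    return resp @ frac_a                           # expected A-fraction
```

A voxel at the pure-A mean gets fraction ~1, one midway between the means gets 0.5, which is the "fractional amount of each tissue type" output the abstract describes.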

  6. Segmentation and learning in the quantitative analysis of microscopy images

    NASA Astrophysics Data System (ADS)

    Ruggiero, Christy; Ross, Amy; Porter, Reid

    2015-02-01

    In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest to use machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.

  7. An interactive system for volume segmentation in computer-assisted surgery

    NASA Astrophysics Data System (ADS)

    Kunert, Tobias; Heimann, Tobias; Schroter, Andre; Schobinger, Max; Bottger, Thomas; Thorn, Matthias; Wolf, Ivo; Engelmann, Uwe; Meinzer, Hans-Peter

    2004-05-01

    Computer-assisted surgery aims at decreased surgical risk and reduced recovery time for patients. However, its use is still limited to complex cases because of the high effort involved, often caused by the extensive medical image analysis required. In particular, image segmentation requires a great deal of manual work. Surgeons and radiologists suffer from the usability problems of many workstations. In this work, we present a dedicated workplace for interactive segmentation integrated within the CHILI (tele-)radiology system. The software comes with many improvements with respect to its graphical user interface, the segmentation process, and the segmentation methods. We point out important software requirements and give insight into the concepts which were implemented. Further examples and applications illustrate the software system.

  8. Leaf image segmentation method based on multifractal detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Li, Jin-Wei; Shi, Wen; Liao, Gui-Ping

    2013-12-01

    To identify singular regions of crop leaves affected by disease, an image segmentation method based on multifractal detrended fluctuation analysis (MF-DFA) is proposed. In the proposed method, we first define a new texture descriptor, the local generalized Hurst exponent, denoted LHq, based on MF-DFA. Then, the box-counting dimension f(LHq) is calculated for sub-images constituted by the LHq of the pixels in a specific region. Consequently, a series of f(LHq) values for the different regions can be obtained. Finally, the singular regions are segmented according to the corresponding f(LHq). Images of six kinds of diseased corn leaves are tested in our experiments. The proposed method is compared with two other segmentation methods, one based on the multifractal spectrum and one on fuzzy C-means clustering. The comparison results demonstrate that the proposed method can recognize the lesion regions more effectively and provides more robust segmentations.
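The descriptor rests on MF-DFA's generalized Hurst exponent H(q): integrate the signal into a profile, detrend it linearly in windows of several scales, form the q-th order fluctuation function, and take the log-log slope over scale. A bare-bones 1D version (the paper computes this locally per pixel neighbourhood to obtain LHq):

```python
import numpy as np

def generalized_hurst(series, q=2.0, scales=(8, 16, 32, 64)):
    """Minimal MF-DFA estimate of the generalized Hurst exponent H(q):
    profile -> windowed linear detrending -> q-th order fluctuation ->
    log-log slope over the given scales."""
    x = np.asarray(series, dtype=float)
    profile = np.cumsum(x - x.mean())
    fq = []
    for s in scales:
        n = len(profile) // s
        segs = profile[: n * s].reshape(n, s)
        t = np.arange(s)
        # least-squares linear detrend of every window at once
        coeffs = np.polynomial.polynomial.polyfit(t, segs.T, 1)
        resid = segs - np.polynomial.polynomial.polyval(t, coeffs)
        f2 = (resid ** 2).mean(axis=1)             # variance per window
        fq.append((f2 ** (q / 2)).mean() ** (1.0 / q))
    return np.polyfit(np.log(scales), np.log(fq), 1)[0]
```

For uncorrelated noise the estimate sits near 0.5; correlated (smoother) texture pushes it higher, which is what makes H(q) usable as a local singularity descriptor.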

  9. Chest-wall segmentation in automated 3D breast ultrasound images using thoracic volume classification

    NASA Astrophysics Data System (ADS)

    Tan, Tao; van Zelst, Jan; Zhang, Wei; Mann, Ritse M.; Platel, Bram; Karssemeijer, Nico

    2014-03-01

    Computer-aided detection (CAD) systems are expected to improve the effectiveness and efficiency of radiologists in reading automated 3D breast ultrasound (ABUS) images. One challenging task in developing CAD is to reduce the large number of false positives, a large proportion of which originate from acoustic shadowing caused by ribs. Determining the location of the chestwall in ABUS is therefore necessary in CAD systems to remove these false positives. Additionally, it can be used as an anatomical landmark for inter- and intra-modal image registration. In this work, we extended our previously developed chestwall segmentation method, which fits a cylinder to automatically detected rib-surface points, by fitting the cylinder model through minimization of a cost function that adds a region-cost term computed from a thoracic volume classifier to improve segmentation accuracy. We examined the performance on a dataset of 52 images on which our previously developed method fails. Using the region-based cost, the average mean distance of the annotated points to the segmented chest wall decreased from 7.57±2.76 mm to 6.22±2.86 mm.

  10. Partial volume segmentation of brain magnetic resonance images based on maximum a posteriori probability

    SciTech Connect

    Li Xiang; Li Lihong; Lu Hongbing; Liang Zhengrong

    2005-07-15

    Noise, the partial volume (PV) effect, and image-intensity inhomogeneity make segmentation of brain magnetic resonance (MR) images a challenging task. Most of the current MR image segmentation methods focus on only one or two of the above-mentioned effects. The objective of this paper is to propose a unified framework, based on the maximum a posteriori probability principle, by taking all these effects into account simultaneously in order to improve image segmentation performance. Instead of labeling each image voxel with a unique tissue type, the percentage of each voxel belonging to different tissues, which we call a mixture, is considered to address the PV effect. A Markov random field model is used to describe the noise effect by considering the nearby spatial information of the tissue mixture. The inhomogeneity effect is modeled as a bias field characterized by a zero mean Gaussian prior probability. The well-known fuzzy C-mean model is extended to define the likelihood function of the observed image. This framework reduces theoretically, under some assumptions, to the adaptive fuzzy C-mean (AFCM) algorithm proposed by Pham and Prince. Digital phantom and real clinical MR images were used to test the proposed framework. Improved performance over the AFCM algorithm was observed in a clinical environment where the inhomogeneity, noise level, and PV effect are commonly encountered.
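The likelihood the framework extends is the classic fuzzy C-means objective, in which each voxel holds graded memberships in every cluster rather than a hard label. A plain 1D baseline, without the paper's MRF, mixture, and bias-field terms:

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy C-means on a 1D array: alternate weighted-centroid
    updates with the standard inverse-distance membership update."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1
    for _ in range(iters):
        w = u ** m
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    order = np.argsort(centers)
    return centers[order], u[:, order]
```

Each row of the returned membership matrix sums to one; the paper's "mixture" generalizes exactly this kind of graded assignment.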

  11. Automatic delineation of tumor volumes by co-segmentation of combined PET/MR data

    NASA Astrophysics Data System (ADS)

    Leibfarth, S.; Eckert, F.; Welz, S.; Siegel, C.; Schmidt, H.; Schwenzer, N.; Zips, D.; Thorwarth, D.

    2015-07-01

    Combined PET/MRI may be highly beneficial for radiotherapy treatment planning in terms of tumor delineation and characterization. To standardize tumor volume delineation, an automatic algorithm for the co-segmentation of head and neck (HN) tumors based on PET/MR data was developed. Ten HN patient datasets acquired in a combined PET/MR system were available for this study. The proposed algorithm uses both the anatomical T2-weighted MR and FDG-PET data. For both imaging modalities tumor probability maps were derived, assigning each voxel a probability of being cancerous based on its signal intensity. A combination of these maps was subsequently segmented using a threshold level set algorithm. To validate the method, tumor delineations from three radiation oncologists were available. Inter-observer variabilities and variabilities between the algorithm and each observer were quantified by means of the Dice similarity index and a distance measure. Inter-observer variabilities and variabilities between observers and algorithm were found to be comparable, suggesting that the proposed algorithm is adequate for PET/MR co-segmentation. Moreover, taking into account combined PET/MR data resulted in more consistent tumor delineations compared to MR information only.
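The fusion step can be sketched as a weighted combination of the two modality-specific tumour probability maps followed by thresholding. The paper segments the combined map with a threshold level-set algorithm, for which a plain threshold stands in here, and the weight is an assumed free parameter:

```python
import numpy as np

def cosegment(p_pet, p_mr, w=0.5, threshold=0.5):
    """Fuse per-voxel tumour probability maps from PET and MR by a
    weighted average, then threshold the combined map."""
    p = w * np.asarray(p_pet, dtype=float) + (1 - w) * np.asarray(p_mr, dtype=float)
    return p >= threshold
```

A voxel is labelled tumour only when the two modalities jointly support it, which is the consistency benefit over MR-only delineation the abstract reports.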

  12. Influences of skull segmentation inaccuracies on EEG source analysis.

    PubMed

    Lanfer, B; Scherg, M; Dannhauer, M; Knösche, T R; Burger, M; Wolters, C H

    2012-08-01

    The low-conducting human skull is known to have an especially large influence on electroencephalography (EEG) source analysis. Because of difficulties segmenting the complex skull geometry out of magnetic resonance images, volume conductor models for EEG source analysis might contain inaccuracies and simplifications regarding the geometry of the skull. The computer simulation study presented here investigated the influences of a variety of skull geometry deficiencies on EEG forward simulations and source reconstruction from EEG data. Reference EEG data was simulated in a detailed and anatomically plausible reference model. Test models were derived from the reference model representing a variety of skull geometry inaccuracies and simplifications. These included erroneous skull holes, local errors in skull thickness, modeling cavities as bone, downward extension of the model and simplifying the inferior skull or the inferior skull and scalp as layers of constant thickness. The reference EEG data was compared to forward simulations in the test models, and source reconstruction in the test models was performed on the simulated reference data. The finite element method with high-resolution meshes was employed for all forward simulations. It was found that large skull geometry inaccuracies close to the source space, for example, when cutting the model directly below the skull, led to errors of 20mm and more for extended source space regions. Local defects, for example, erroneous skull holes, caused non-negligible errors only in the vicinity of the defect. The study design allowed a comparison of influence size, and guidelines for modeling the skull geometry were concluded. PMID:22584227

  13. Machine learning based vesselness measurement for coronary artery segmentation in cardiac CT volumes

    NASA Astrophysics Data System (ADS)

    Zheng, Yefeng; Loziczonek, Maciej; Georgescu, Bogdan; Zhou, S. Kevin; Vega-Higuera, Fernando; Comaniciu, Dorin

    2011-03-01

    Automatic coronary centerline extraction and lumen segmentation facilitate the diagnosis of coronary artery disease (CAD), which is a leading cause of death in developed countries. Various coronary centerline extraction methods have been proposed and most of them are based on shortest path computation given one or two end points on the artery. The major variation of the shortest path based approaches is in the different vesselness measurements used for the path cost. An empirically designed measurement (e.g., the widely used Hessian vesselness) is by no means optimal in the use of image context information. In this paper, a machine learning based vesselness is proposed by exploiting the rich domain specific knowledge embedded in an expert-annotated dataset. For each voxel, we extract a set of geometric and image features. The probabilistic boosting tree (PBT) is then used to train a classifier, which assigns a high score to voxels inside the artery and a low score to those outside. The detection score can be treated as a vesselness measurement in the computation of the shortest path. Since the detection score measures the probability of a voxel to be inside the vessel lumen, it can also be used for the coronary lumen segmentation. To speed up the computation, we perform classification only for voxels around the heart surface, which is achieved by automatically segmenting the whole heart from the 3D volume in a preprocessing step. An efficient voxel-wise classification strategy is used to further improve the speed. Experiments demonstrate that the proposed learning based vesselness outperforms the conventional Hessian vesselness in both speed and accuracy. On average, it only takes approximately 2.3 seconds to process a large volume with a typical size of 512x512x200 voxels.
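Once a classifier supplies per-voxel vessel probabilities, centerline extraction reduces to a shortest path whose step cost is low where the score is high. A 2D Dijkstra sketch with step cost -log(vesselness); the paper works in 3D with a probabilistic boosting tree classifier:

```python
import heapq
import numpy as np

def shortest_vessel_path(vesselness, start, end):
    """Dijkstra on a 2D grid where entering a cell costs -log of its
    vesselness score, so the path prefers vessel-like cells."""
    v = np.clip(np.asarray(vesselness, dtype=float), 1e-6, 1.0)
    cost = -np.log(v)
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue                              # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[nr, nc] = (r, c)
                heapq.heappush(pq, (dist[nr, nc], (nr, nc)))
    path, node = [], end
    while node != start:                          # walk predecessors back
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]
```

Substituting a learned score for a hand-designed Hessian vesselness changes only the cost map; the path machinery is unchanged, which is why the two are directly comparable.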

  14. Whole-body and segmental muscle volume are associated with ball velocity in high school baseball pitchers

    PubMed Central

    Yamada, Yosuke; Yamashita, Daichi; Yamamoto, Shinji; Matsui, Tomoyuki; Seo, Kazuya; Azuma, Yoshikazu; Kida, Yoshikazu; Morihara, Toru; Kimura, Misaka

    2013-01-01

    The aim of the study was to examine the relationship between pitching ball velocity and segmental (trunk, upper arm, forearm, upper leg, and lower leg) and whole-body muscle volume (MV) in high school baseball pitchers. Forty-seven male high school pitchers (40 right-handers and seven left-handers; age, 16.2 ± 0.7 years; stature, 173.6 ± 4.9 cm; mass, 65.0 ± 6.8 kg, years of baseball experience, 7.5 ± 1.8 years; maximum pitching ball velocity, 119.0 ± 9.0 km/hour) participated in the study. Segmental and whole-body MV were measured using segmental bioelectrical impedance analysis. Maximum ball velocity was measured with a sports radar gun. The MV of the dominant arm was significantly larger than the MV of the non-dominant arm (P < 0.001). There was no difference in MV between the dominant and non-dominant legs. Whole-body MV was significantly correlated with ball velocity (r = 0.412, P < 0.01). Trunk MV was not correlated with ball velocity, but the MV for both lower legs, and the dominant upper leg, upper arm, and forearm were significantly correlated with ball velocity (P < 0.05). The results were not affected by age or years of baseball experience. Whole-body and segmental MV are associated with ball velocity in high school baseball pitchers. However, the contribution of the muscle mass on pitching ball velocity is limited, thus other fundamental factors (ie, pitching skill) are also important. PMID:24379713

  15. MR volume segmentation of gray matter and white matter using manual thresholding: Dependence on image brightness

    SciTech Connect

    Harris, G.J.; Barta, P.E.; Peng, L.W.; Lee, S.; Brettschneider, P.D.; Shah, A.; Henderer, J.D.; Schlaepfer, T.E.; Pearlson, G.D. (Tufts Univ. School of Medicine, Boston, MA)

    1994-02-01

    To describe a quantitative MR imaging segmentation method for determination of the volume of cerebrospinal fluid, gray matter, and white matter in living human brain, and to determine the method's reliability. We developed a computer method that allows rapid, user-friendly determination of cerebrospinal fluid, gray matter, and white matter volumes in a reliable manner, both globally and regionally. This method was applied to a large control population (N = 57). Initially, image brightness had a strong correlation with the gray-white ratio (r = .78). Bright images tended to overestimate, dim images to underestimate gray matter volumes. This artifact was corrected for by offsetting each image to an approximately equal brightness. After brightness correction, gray-white ratio was correlated with age (r = -.35). The age-dependent gray-white ratio was similar to that for the same age range in a prior neuropathology report. Interrater reliability was high (.93 intraclass correlation coefficient). The method described here for gray matter, white matter, and cerebrospinal fluid volume calculation is reliable and valid. A correction method for an artifact related to image brightness was developed. 12 refs., 3 figs.
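The brightness correction amounts to offsetting every image toward a common mean intensity before manual thresholding. A minimal sketch with an arbitrary target value; the study's exact offset procedure is not specified in the abstract:

```python
import numpy as np

def normalize_brightness(img, target_mean=100.0):
    """Additively offset an image so its mean brightness equals a common
    target, leaving relative intensity differences unchanged."""
    img = np.asarray(img, dtype=float)
    return img + (target_mean - img.mean())
```

Because the offset is purely additive, gray-white contrast within each image is preserved while the systematic bright-image/dim-image bias in the gray-white ratio is removed.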

  16. Hierarchical probabilistic Gabor and MRF segmentation of brain tumours in MRI volumes.

    PubMed

    Subbanna, Nagesh K; Precup, Doina; Collins, D Louis; Arbel, Tal

    2013-01-01

    In this paper, we present a fully automated hierarchical probabilistic framework for segmenting brain tumours from multispectral human brain magnetic resonance images (MRIs) using multiwindow Gabor filters and an adapted Markov Random Field (MRF) framework. In the first stage, a customised Gabor decomposition is developed, based on the combined-space characteristics of the two classes (tumour and non-tumour) in multispectral brain MRIs in order to optimally separate tumour (including edema) from healthy brain tissues. A Bayesian framework then provides a coarse probabilistic texture-based segmentation of tumours (including edema) whose boundaries are then refined at the voxel level through a modified MRF framework that carefully separates the edema from the main tumour. This customised MRF is not only built on the voxel intensities and class labels as in traditional MRFs, but also models the intensity differences between neighbouring voxels in the likelihood model, along with employing a prior based on local tissue class transition probabilities. The second inference stage is shown to resolve local inhomogeneities and impose a smoothing constraint, while also maintaining the appropriate boundaries as supported by the local intensity difference observations. The method was trained and tested on the publicly available MICCAI 2012 Brain Tumour Segmentation Challenge (BRATS) Database [1] on both synthetic and clinical volumes (low grade and high grade tumours). Our method performs well compared to state-of-the-art techniques, outperforming the results of the top methods in cases of clinical high grade and low grade tumour core segmentation by 40% and 45% respectively. PMID:24505735
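The texture features come from a bank of Gabor filters at multiple windows and orientations. A single real Gabor kernel can be built as a Gaussian-windowed cosine grating; the paper's customised decomposition chooses the bank parameters from the combined-space statistics of the tumour and non-tumour classes, whereas the parameters here are arbitrary:

```python
import numpy as np

def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    """One real Gabor kernel: a cosine grating of the given wavelength and
    orientation, windowed by an isotropic Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate into grating frame
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)
```

Convolving an MRI slice with several such kernels (varying wavelength and theta) yields the multichannel texture response on which the Bayesian stage operates.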

  17. Segment clustering methodology for unsupervised Holter recordings analysis

    NASA Astrophysics Data System (ADS)

    Rodríguez-Sotelo, Jose Luis; Peluffo-Ordóñez, Diego; Castellanos Dominguez, German

    2015-01-01

    Cardiac arrhythmia analysis on Holter recordings is an important issue in clinical settings; however, it implicitly involves other problems related to the large amount of unlabelled data, which entails a high computational cost. In this work an unsupervised methodology based on a segment framework is presented, which consists of dividing the raw data into a balanced number of segments in order to identify fiducial points, then characterizing and clustering the heartbeats in each segment separately. The resulting clusters are merged or split according to an assumed criterion of homogeneity. This framework compensates for the high computational cost of Holter analysis, making its implementation for future real-time applications possible. The performance of the method is measured on the records of the MIT/BIH arrhythmia database and achieves high values of sensitivity and specificity, taking advantage of the database labels, for the broad range of heartbeat types recommended by the AAMI.

  18. Microscopy image segmentation tool: Robust image data analysis

    NASA Astrophysics Data System (ADS)

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-03-01

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  19. Microscopy image segmentation tool: Robust image data analysis

    SciTech Connect

    Valmianski, Ilya Monton, Carlos; Schuller, Ivan K.

    2014-03-15

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  20. Analysis of recent segmental duplications in the bovine genome

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Duplicated sequences are an important source of gene innovation and structural variation within mammalian genomes. We describe the first systematic and genome-wide analysis of segmental duplications in the modern domesticated cattle (Bos taurus). Using two distinct computational analyses, we estimat...

  1. Atrophy of the Cerebellar Vermis in Essential Tremor: Segmental Volumetric MRI Analysis.

    PubMed

    Shin, Hyeeun; Lee, Dong-Kyun; Lee, Jong-Min; Huh, Young-Eun; Youn, Jinyoung; Louis, Elan D; Cho, Jin Whan

    2016-04-01

    Postmortem studies of essential tremor (ET) have demonstrated the presence of degenerative changes in the cerebellum, and imaging studies have examined related structural changes in the brain. However, their results have not been completely consistent and the number of imaging studies has been limited. We aimed to study cerebellar involvement in ET using MRI segmental volumetric analysis. In addition, a unique feature of this study was that we stratified ET patients into subtypes based on the clinical presence of cerebellar signs and compared their MRI findings. Thirty-nine ET patients and 36 normal healthy controls, matched for age and sex, were enrolled. Cerebellar signs in ET patients were assessed using the clinical tremor rating scale and International Cooperative Ataxia Rating Scale. ET patients were divided into two groups: patients with cerebellar signs (cerebellar-ET) and those without (classic-ET). MRI volumetry was performed using CIVET pipeline software. Data on whole and segmented cerebellar volumes were analyzed using SPSS. While there was a trend for whole cerebellar volume to decrease from controls to classic-ET to cerebellar-ET, this trend was not significant. The volume of several contiguous segments of the cerebellar vermis was reduced in ET patients versus controls. Furthermore, these vermis volumes were reduced in the cerebellar-ET group versus the classic-ET group. The volume of several adjacent segments of the cerebellar vermis was reduced in ET. This effect was more evident in ET patients with clinical signs of cerebellar dysfunction. The presence of tissue atrophy suggests that ET might be a neurodegenerative disease. PMID:26062905

  2. Volume change of segments II and III of the liver after gastrectomy in patients with gastric cancer

    PubMed Central

    Ozutemiz, Can; Obuz, Funda; Taylan, Abdullah; Atila, Koray; Bora, Seymen; Ellidokuz, Hulya

    2016-01-01

    PURPOSE We aimed to evaluate the relationship between gastrectomy and the volume of liver segments II and III in patients with gastric cancer. METHODS Computed tomography images of 54 patients who underwent curative gastrectomy for gastric adenocarcinoma were retrospectively evaluated by two blinded observers. Volumes of the total liver and segments II and III were measured. The difference between preoperative and postoperative volume measurements was compared. RESULTS Total liver volumes measured by both observers in the preoperative and postoperative scans were similar (P > 0.05). High correlation was found between both observers (preoperative r=0.99; postoperative r=0.98). Total liver volumes showed a mean reduction of 13.4% after gastrectomy (P = 0.977). The mean volume of segments II and III showed a similar decrease in the measurements of both observers (38.4% vs. 36.4%, P = 0.363); the correlation between the observers was high (preoperative r=0.97, P < 0.001; postoperative r=0.99, P < 0.001). The volume decrease in the rest of the liver did not differ between the observers (8.2% vs. 9.1%, P = 0.388). Time had poor correlation with the volume change of segments II and III and of the total liver for each observer (observer 1, r(seg II-III)=0.32, r(total)=0.13; observer 2, r(seg II-III)=0.37, r(total)=0.16). CONCLUSION Segments II and III of the liver showed significant atrophy compared with the rest of the liver and the total liver after gastrectomy. Volume reduction had poor correlation with time. PMID:26899148
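The two headline quantities in this abstract, percent volume reduction and inter-observer Pearson correlation, are simple to compute. A minimal sketch in Python; all volume numbers below are invented for illustration, not taken from the study:

```python
import math

def percent_reduction(pre, post):
    """Percent volume loss from the pre- to the post-operative measurement."""
    return 100.0 * (pre - post) / pre

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two observers' measurements."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented segment II+III volumes (mL) for four patients, two observers
obs1_pre = [210.0, 185.0, 240.0, 200.0]
obs1_post = [130.0, 120.0, 150.0, 125.0]
obs2_pre = [208.0, 188.0, 236.0, 203.0]

reductions = [percent_reduction(a, b) for a, b in zip(obs1_pre, obs1_post)]
inter_observer_r = pearson_r(obs1_pre, obs2_pre)
```

A high `inter_observer_r` with similar per-patient `reductions` is the pattern the abstract reports between its two blinded observers.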

  3. Salted and preserved duck eggs: a consumer market segmentation analysis.

    PubMed

    Arthur, Jennifer; Wiseman, Kelleen; Cheng, K M

    2015-08-01

    The combination of increasing ethnic diversity in North America and growing consumer support for local food products may present opportunities for local producers and processors in the ethnic foods product category. Our study examined the ethnic Chinese (pop. 402,000) market for salted and preserved duck eggs in Vancouver, British Columbia (BC), Canada. The objective of the study was to develop a segmentation model using survey data to categorize consumer groups based on their attitudes and the importance they placed on product attributes. We further used post-segmentation acculturation score, demographics and buyer behaviors to define these groups. Data were gathered via a survey of randomly selected Vancouver households with Chinese surnames (n = 410), targeting the adult responsible for grocery shopping. Results from principal component analysis and a 2-step cluster analysis suggest the existence of 4 market segments, described as Enthusiasts, Potentialists, Pragmatists, Health Skeptics (salted duck eggs), and Neutralists (preserved duck eggs). Kruskal-Wallis tests and post hoc Mann-Whitney tests found significant differences between segments in terms of attitudes and the importance placed on product characteristics. Health Skeptics, preserved egg Potentialists, and Pragmatists of both egg products were significantly biased against Chinese imports compared to others. Except for Enthusiasts, segments disagreed that eggs are 'Healthy Products'. Preserved egg Enthusiasts had a significantly lower acculturation score (AS) compared to all others, while salted egg Enthusiasts had a lower AS compared to Health Skeptics. All segments rated "produced in BC, not mainland China" products in the "neutral to very likely" range for increasing their satisfaction with the eggs. Results also indicate that buyers of each egg type are willing to pay an average premium of at least 10% more for BC-produced products versus imports, with all other characteristics equal.
Overall results indicate that opportunities exist for local producers and processors: Chinese Canadians with lower AS form a core part of the potential market. PMID:26089479

  4. Small rural hospitals: an example of market segmentation analysis.

    PubMed

    Mainous, A G; Shelby, R L

    1991-01-01

    In recent years, market segmentation analysis has shown increased popularity among health care marketers, although marketers tend to focus upon hospitals as sellers. The present analysis suggests that there is merit to viewing hospitals as a market of consumers. Employing a random sample of 741 small rural hospitals, the present investigation sought to determine, through the use of segmentation analysis, the variables associated with hospital success (occupancy). The results of a discriminant analysis yielded a model which classifies hospitals with a high degree of predictive accuracy. Successful hospitals have more beds and employees, and are generally larger and have more resources. However, there was no significant relationship between organizational success and number of services offered by the institution. PMID:10111266

  5. Linear test bed. Volume 1: Test bed no. 1. [aerospike test bed with segmented combustor]

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The objective of the Linear Test Bed program was to design, fabricate, and evaluate an advanced aerospike test bed which employed the segmented combustor concept. The system is designated as a linear aerospike system and consists of a thrust chamber assembly, a power package, and a thrust frame. It was designed as an experimental system to demonstrate the feasibility of the linear aerospike-segmented combustor concept. The overall dimensions are 120 inches long by 120 inches wide by 96 inches in height. The propellants are liquid oxygen/liquid hydrogen. The system was designed to operate at a chamber pressure of 1200 psia and a mixture ratio of 5.5. At the design conditions, the sea level thrust is 200,000 pounds. The complete program, including concept selection, design, fabrication, component test, system test, supporting analysis, and posttest hardware inspection, is described.

  6. Documented Safety Analysis for the B695 Segment

    SciTech Connect

    Laycak, D

    2008-09-11

    This Documented Safety Analysis (DSA) was prepared for the Lawrence Livermore National Laboratory (LLNL) Building 695 (B695) Segment of the Decontamination and Waste Treatment Facility (DWTF). The report provides comprehensive information on design and operations, including safety programs and safety structures, systems and components to address the potential process-related hazards, natural phenomena, and external hazards that can affect the public, facility workers, and the environment. Consideration is given to all modes of operation, including the potential for both equipment failure and human error. The facilities known collectively as the DWTF are used by LLNL's Radioactive and Hazardous Waste Management (RHWM) Division to store and treat regulated wastes generated at LLNL. RHWM generally processes low-level radioactive waste with no, or extremely low, concentrations of transuranics (e.g., much less than 100 nCi/g). Wastes processed often contain only depleted uranium and beta- and gamma-emitting nuclides, e.g., ⁹⁰Sr, ¹³⁷Cs, or ³H. The mission of the B695 Segment centers on container storage, lab-packing, repacking, overpacking, bulking, sampling, waste transfer, and waste treatment. The B695 Segment is used for storage of radioactive waste (including transuranic and low-level), hazardous, nonhazardous, mixed, and other waste. Storage of hazardous and mixed waste in B695 Segment facilities is in compliance with the Resource Conservation and Recovery Act (RCRA). LLNL is operated by Lawrence Livermore National Security, LLC, for the Department of Energy (DOE). The B695 Segment is operated by the RHWM Division of LLNL. Many operations in the B695 Segment are performed under an RCRA operation plan, similar to commercial treatment operations with best demonstrated available technologies.
The buildings of the B695 Segment were designed and built considering such operations, using proven building systems, and keeping them as simple as possible while complying with industry standards and institutional requirements. No operations to be performed in the B695 Segment or building system are considered to be complex. No anticipated future change in the facility mission is expected to impact the extent of safety analysis documented in this DSA.

  7. MRI segmentation analysis in temporal lobe and idiopathic generalized epilepsy

    PubMed Central

    2014-01-01

    Background Temporal lobe epilepsy (TLE) and idiopathic generalized epilepsy (IGE) patients have each been associated with extensive brain atrophy findings, yet to date there are no reports of a head-to-head comparison of the two patient groups. Our aim was to assess and compare tissue-specific and structural brain atrophy findings in TLE and IGE patients and in healthy controls (HC). Methods TLE patients were classified as lesional (L-TLE) or non-lesional (NL-TLE) based on the presence or absence of MRI temporal structural abnormalities. High-resolution 3 T MRI with automated segmentation by the SIENAX and FIRST tools was performed in a group of patients with temporal lobe epilepsy (11 L-TLE and 15 NL-TLE), in 15 IGE patients, and in 26 HC. Normalized brain volume (NBV), normalized grey matter volume (NGMV), normalized white matter volume (NWMV), and volumes of subcortical deep grey matter structures were quantified. Using regression analyses, differences between the groups in both volume and left/right asymmetry were evaluated. Additionally, laterality of results was evaluated to separately quantify ipsilateral and contralateral effects in the TLE group. Results All epilepsy groups had significantly lower NBV and NWMV compared to HC (p …) … volume than HC and IGE (p = 0.001), and all epilepsy groups had significantly lower amygdala volume than HC (p …

  8. Motion analysis and segmentation through spatio-temporal slices processing.

    PubMed

    Ngo, Chong-Wah; Pong, Ting-Chuen; Zhang, Hong-Jiang

    2003-01-01

    This paper presents new approaches to characterizing and segmenting the content of video, developed upon the pattern analysis of spatio-temporal slices. While traditional approaches to motion sequence analysis tend to formulate computational methodologies on two or three adjacent frames, spatio-temporal slices provide rich visual patterns along a larger temporal scale. We first describe a motion computation method based on a structure tensor formulation. This method encodes the visual patterns of spatio-temporal slices in a tensor histogram that, on the one hand, characterizes the temporal changes of motion over time and, on the other, describes the motion trajectories of different moving objects. By analyzing the tensor histogram of an image sequence, we can temporally segment the sequence into several motion-coherent subunits and, in addition, spatially segment the sequence into various motion layers. The temporal segmentation of image sequences expeditiously facilitates motion annotation and content representation of a video, while the spatial decomposition leads to a prominent way of reconstructing background panoramic images and computing foreground objects. PMID:18237913
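The structure tensor mentioned above aggregates image gradients to find a dominant local orientation. A minimal pure-Python sketch on a synthetic 2D slice (the array, its size, and the stripe pattern are invented for illustration; the paper's tensor-histogram machinery is not reproduced here):

```python
import math

def structure_tensor_orientation(slice2d):
    """Dominant orientation (radians) of a 2D slice from the structure tensor
    J = [[sum gx*gx, sum gx*gy], [sum gx*gy, sum gy*gy]], using central
    differences for the gradients gx, gy."""
    h, w = len(slice2d), len(slice2d[0])
    jxx = jxy = jyy = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (slice2d[y][x + 1] - slice2d[y][x - 1]) / 2.0
            gy = (slice2d[y + 1][x] - slice2d[y - 1][x]) / 2.0
            jxx += gx * gx
            jxy += gx * gy
            jyy += gy * gy
    # Angle of the dominant eigenvector of the 2x2 symmetric tensor
    return 0.5 * math.atan2(2.0 * jxy, jxx - jyy)

# A slice whose intensity varies only along x (vertical stripes) should
# yield a dominant gradient orientation of ~0 rad.
slab = [[float(x % 4) for x in range(16)] for _ in range(16)]
theta = structure_tensor_orientation(slab)
```

In the paper's setting, orientations like `theta`, computed locally over slice patches, would be binned into the tensor histogram that drives the temporal and spatial segmentation.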

  9. Three-dimensional segmentation of pulmonary artery volume from thoracic computed tomography imaging

    NASA Astrophysics Data System (ADS)

    Lindenmaier, Tamas J.; Sheikh, Khadija; Bluemke, Emma; Gyacskov, Igor; Mura, Marco; Licskai, Christopher; Mielniczuk, Lisa; Fenster, Aaron; Cunningham, Ian A.; Parraga, Grace

    2015-03-01

    Chronic obstructive pulmonary disease (COPD) is a major contributor to hospitalization and healthcare costs in North America. While the hallmark of COPD is airflow limitation, it is also associated with abnormalities of the cardiovascular system. Enlargement of the pulmonary artery (PA) is a morphological marker of pulmonary hypertension, and was previously shown to predict acute exacerbations using a one-dimensional diameter measurement of the main PA. We hypothesized that a three-dimensional (3D) quantification of PA size would be more sensitive than 1D methods and encompass morphological changes along the entire central pulmonary artery. Hence, we developed a 3D measurement of the main (MPA), left (LPA) and right (RPA) pulmonary arteries as well as total PA volume (TPAV) from thoracic CT images. This approach incorporates segmentation of pulmonary vessels in cross-section for the MPA, LPA and RPA to provide an estimate of their volumes. Three observers performed five repeated measurements for 15 ex-smokers with ≥10 pack-years, randomly identified from a larger dataset of 199 patients. There was strong agreement (r² = 0.76) between PA volume and PA diameter measurements, the latter used as the gold standard. Observer measurements were strongly correlated, and coefficients of variation for observer 1 (MPA: 2%, LPA: 3%, RPA: 2%, TPAV: 2%) were not significantly different from the observer 2 and 3 results. In conclusion, we generated manual 3D pulmonary artery volume measurements from thoracic CT images that can be performed with high reproducibility. Future work will involve automation for implementation in clinical workflows.
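The two computations at the core of this abstract, a volume estimate from stacked cross-sections and a coefficient of variation over repeated measurements, can be sketched as follows (a simplified illustration with invented numbers, not the authors' implementation):

```python
def vessel_volume(areas_mm2, spacing_mm):
    """Approximate a vessel's volume by summing per-slice cross-sectional
    areas (mm^2) times the slice spacing (mm); result in mm^3."""
    return sum(areas_mm2) * spacing_mm

def coefficient_of_variation(measurements):
    """CV (%) of repeated measurements: sample SD divided by the mean."""
    n = len(measurements)
    mean = sum(measurements) / n
    var = sum((m - mean) ** 2 for m in measurements) / (n - 1)
    return 100.0 * (var ** 0.5) / mean

# Invented example: five repeated MPA volume measurements (mm^3) by one observer
repeats = [21800.0, 22100.0, 21950.0, 22000.0, 21900.0]
cv_percent = coefficient_of_variation(repeats)
```

A CV around 2%, as the abstract reports per artery segment, indicates that repeated manual 3D measurements vary little around their mean.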

  10. Automated 3D Segmentation of Multiple Surfaces with a Shared Hole: Segmentation of the Neural Canal Opening in SD-OCT Volumes

    PubMed Central

    Antony, Bhavna J.; Miri, Mohammed S.; Abràmoff, Michael D.; Kwon, Young H.; Garvin, Mona K.

    2015-01-01

    The need to segment multiple interacting surfaces is a common problem in medical imaging and it is often assumed that such surfaces are continuous within the confines of the region of interest. However, in some application areas, the surfaces of interest may contain a shared hole in which the surfaces no longer exist and the exact location of the hole boundary is not known a priori. The boundary of the neural canal opening seen in spectral-domain optical coherence tomography volumes is an example of a “hole” embedded within multiple surrounding surfaces. Segmentation approaches that rely on finding the surfaces alone are prone to failure, as deeper structures within the hole can “attract” the surfaces and pull them away from their correct location at the hole boundary. With this application area in mind, we present a graph-theoretic approach for segmenting multiple surfaces with a shared hole. The overall cost function that is optimized consists of both the costs of the surfaces outside the hole and the cost of the boundary of the hole itself. The constraints utilized were appropriately adapted to ensure the smoothness of the hole boundary in addition to the smoothness of the non-overlapping surfaces. Using this approach, a significant improvement was observed over a more traditional two-pass approach in which the surfaces are segmented first (assuming the presence of no hole), followed by segmenting the neural canal opening. PMID:25333185
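The paper's multi-surface, shared-hole optimization is considerably more involved than can be shown here, but the underlying idea of a surface as a minimum-cost path under a smoothness constraint can be sketched in a reduced 2D, single-surface form (dynamic programming over a cost image; the cost matrix and `max_jump` constraint are illustrative assumptions):

```python
def segment_surface(cost, max_jump=1):
    """For each column x, pick a row y(x) minimizing total cost subject to
    the smoothness constraint |y(x+1) - y(x)| <= max_jump.
    cost[y][x] is the per-pixel cost of the surface passing through (x, y)."""
    rows, cols = len(cost), len(cost[0])
    INF = float("inf")
    dp = [[INF] * rows for _ in range(cols)]   # dp[x][y]: best cost up to column x
    back = [[0] * rows for _ in range(cols)]   # backpointers for path recovery
    for y in range(rows):
        dp[0][y] = cost[y][0]
    for x in range(1, cols):
        for y in range(rows):
            for dy in range(-max_jump, max_jump + 1):
                py = y + dy
                if 0 <= py < rows and dp[x - 1][py] + cost[y][x] < dp[x][y]:
                    dp[x][y] = dp[x - 1][py] + cost[y][x]
                    back[x][y] = py
    # Backtrack from the cheapest terminal row
    y = min(range(rows), key=lambda r: dp[cols - 1][r])
    surface = [y]
    for x in range(cols - 1, 0, -1):
        y = back[x][y]
        surface.append(y)
    return surface[::-1]
```

The published method generalizes this to several mutually constrained 3D surfaces and adds a cost term for the hole boundary itself, which is what prevents the surfaces from being "attracted" into the hole.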

  11. Comparing manual and automatic segmentation of hippocampal volumes: reliability and validity issues in younger and older brains.

    PubMed

    Wenger, Elisabeth; Mårtensson, Johan; Noack, Hannes; Bodammer, Nils Christian; Kühn, Simone; Schaefer, Sabine; Heinze, Hans-Jochen; Düzel, Emrah; Bäckman, Lars; Lindenberger, Ulman; Lövdén, Martin

    2014-08-01

    We compared hippocampal volume measures obtained by manual tracing to automatic segmentation with FreeSurfer in 44 younger (20-30 years) and 47 older (60-70 years) adults, each measured with magnetic resonance imaging (MRI) over three successive time points, separated by four months. Retest correlations over time were very high for both manual and FreeSurfer segmentations. With FreeSurfer, correlations over time were significantly lower in the older than in the younger age group, which was not the case with manual segmentation. Pearson correlations between manual and FreeSurfer estimates were sufficiently high, numerically even higher in the younger group, whereas intra-class correlation coefficient (ICC) estimates were lower in the younger than in the older group. FreeSurfer yielded higher volume estimates than manual segmentation, particularly in the younger age group. Importantly, FreeSurfer consistently overestimated hippocampal volumes independently of manually assessed volume in the younger age group, but overestimated larger volumes to a lesser extent in the older age group, introducing a systematic age bias into the data. Age differences in hippocampal volumes were significant with FreeSurfer, but not with manual tracing. Manual tracing resulted in a significant difference between the left and right hippocampus (right > left), whereas this asymmetry effect was considerably smaller with FreeSurfer estimates. We conclude that FreeSurfer constitutes a feasible method to assess differences in hippocampal volume in young adults. FreeSurfer estimates in older age groups should, however, be interpreted with care until the automatic segmentation pipeline has been further optimized to increase validity and reliability in this age group. PMID:24532539
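The abstract's contrast between high Pearson correlation and lower absolute agreement is worth making concrete. The sketch below uses Lin's concordance correlation coefficient, a close relative of the ICC, to show how a constant overestimation (like FreeSurfer's bias described above) leaves Pearson's r untouched but drags agreement down; all volume values are invented:

```python
import math

def pearson(xs, ys):
    """Pearson correlation: insensitive to additive or multiplicative bias."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx2 = sum((x - mx) ** 2 for x in xs) / n
    sy2 = sum((y - my) ** 2 for y in ys) / n
    return sxy / math.sqrt(sx2 * sy2)

def concordance(xs, ys):
    """Lin's concordance correlation: penalizes the mean offset between
    methods, so systematic over- or underestimation lowers it."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx2 = sum((x - mx) ** 2 for x in xs) / n
    sy2 = sum((y - my) ** 2 for y in ys) / n
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

manual = [3.1, 3.4, 2.9, 3.6, 3.2]          # invented manual volumes (mL)
auto = [m + 0.5 for m in manual]            # constant overestimation
```

Here `pearson(manual, auto)` is 1.0 despite the bias, while `concordance(manual, auto)` drops well below it, the same dissociation the abstract reports between Pearson and ICC estimates.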

  12. User's manual for strategic satellite system terminal segment life cycle cost model, volume 1

    NASA Astrophysics Data System (ADS)

    Cox, J. E.; Peters, D. B.

    1981-03-01

    A computerized Life Cycle Cost (LCC) Model has been developed for the Strategic Satellite System (SSS) Terminal Segment Program. The model is tailored for this program in its structuring of terminal equipment and in the particular trade-off analyses which it supports. The model emphasizes system support costs and design related cost drivers such as reliability and installation. A capability to study the cost trade-offs between fault isolation using built-in test equipment and using peculiar support equipment is included. The model also considers three-level maintenance philosophies which include centralized intermediate maintenance facilities. Volume 1 of this User's Manual provides detailed instructions for entering data and running the LCC Model through step-by-step instructions and illustrative runs. All Air Force inputs to the model equations are also provided. Volume 2 provides listings of the FORTRAN source code for the three programs comprising the model. This document is a part of the SSS Request for Proposal Package, and is specifically a supplement to the Statement of Work in that package. The model is intended to be used in SSS Source Selection.

  13. Automatic comic page image understanding based on edge segment analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

    Comic page image understanding aims to analyse the layout of the comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique to produce the digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method is performed on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms the existing methods.

  14. Extracellular and intracellular volume variations during postural change measured by segmental and wrist-ankle bioimpedance spectroscopy.

    PubMed

    Fenech, Marianne; Jaffrin, Michel Y

    2004-01-01

    Extracellular (ECW) and intracellular (ICW) volumes were measured using both segmental and wrist-ankle (W-A) bioimpedance spectroscopy (5-1000 kHz) in 15 healthy subjects (7 men, 8 women). In the first protocol, the subject, after sitting for 30 min, lay supine for at least 30 min. In the second protocol, the subject, who had been supine for 1 hr, sat up in bed for 10 min and returned to the supine position for another hour. Segmental ECW and ICW resistances of the legs, arms, and trunk were measured by placing four voltage electrodes on the wrist, shoulder, top of the thigh, and ankle, and using Hanai's conductivity theory. W-A resistances were found to be very close to the sum of the segmental resistances. When switching from sitting to supine (protocol 1), the mean ECW leg resistance increased by 18.2%, and that of the arm and W-A by 12.4%. Trunk resistance also increased, but not significantly, by 4.8%. Corresponding changes in ICW resistance were smaller for the legs (3.7%) and arm (-0.7%) but larger for the trunk (21.4%). Total body ECW volumes from segmental measurements were in good agreement with the W-A and Watson anthropomorphic correlation. The decrease in total ECW volume when supine calculated from segmental resistances (0.79 l) was less than the W-A one (1.12 l). Total ICW volume reductions were 3.4% (segmental) and 3.8% (W-A). Tests of protocol 2 confirmed that resistance and fluid volume values were not affected by a temporary position change. PMID:14723506
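The abstract's observation that wrist-ankle resistance is close to the sum of segmental resistances follows from a series model of the measurement path. A small sketch (all ohm values are invented; Hanai's mixture theory, which converts these resistances to volumes, is not reproduced here):

```python
def percent_change(before, after):
    """Percent change of a resistance reading between two postures."""
    return 100.0 * (after - before) / before

# Series model: the wrist-ankle current path crosses one arm, the trunk,
# and one leg, so its resistance is approximately the sum of the
# segmental resistances.
arm, trunk, leg = 240.0, 55.0, 260.0   # ohms, illustrative
wrist_ankle = arm + trunk + leg
```

With posture-dependent changes such as the reported +18.2% in leg ECW resistance, `percent_change` applied per segment reproduces the kind of comparison made in the study.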

  15. An automatic method of brain tumor segmentation from MRI volume based on the symmetry of brain and level set method

    NASA Astrophysics Data System (ADS)

    Li, Xiaobing; Qiu, Tianshuang; Lebonvallet, Stephane; Ruan, Su

    2010-02-01

    This paper presents a brain tumor segmentation method which automatically segments tumors from human brain MRI volumes. The model is based on the symmetry of the human brain and the level set method. First, the midsagittal plane of an MRI volume is located, the slices potentially containing tumor are identified according to their symmetry, and an initial boundary of the tumor is determined, by watershed and morphological algorithms, in the slice in which the tumor appears largest. Second, the level set method is applied to the initial boundary to drive the curve's evolution, stopping at the appropriate tumor boundary. Lastly, the tumor boundary is projected slice by slice onto adjacent slices as initial boundaries through the volume, yielding the whole tumor. The experimental results are compared with expert manual tracing and show relatively good agreement.
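The symmetry screening step described above can be illustrated with a toy asymmetry score: mirror each slice about the midline and measure how much it differs from itself. A minimal sketch with invented 4x4 "slices" (the paper's actual symmetry criterion is not specified here):

```python
def asymmetry_score(slice2d):
    """Mean absolute difference between a slice and its left-right mirror.
    A unilateral lesion raises the score; a symmetric slice scores 0."""
    total, count = 0.0, 0
    for row in slice2d:
        mirrored = row[::-1]
        for a, b in zip(row, mirrored):
            total += abs(a - b)
            count += 1
    return total / count

symmetric_slice = [[1, 2, 2, 1]] * 4    # left-right symmetric intensities
lesion_slice = [[1, 2, 9, 1]] * 4       # bright blob on one side
```

Slices whose score exceeds a threshold would be flagged as potentially containing tumor and passed to the watershed/level-set stages.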

  16. Three-dimensional model-guided segmentation and analysis of medical images

    NASA Astrophysics Data System (ADS)

    Arata, Louis K.; Dhawan, Atam P.; Broderick, Joseph; Gaskill, Mary

    1992-06-01

    Automated or semi-automated analysis and labeling of structural brain images, such as magnetic resonance (MR) and computed tomography, is desirable for a number of reasons. Quantification of brain volumes can aid in the study of various diseases and the effect of various drug regimens. A labeled structural image, when registered with a functional image such as positron emission tomography or single photon emission computed tomography, allows the quantification of activity in brain subvolumes such as the major lobes. Because even low-resolution scans (7.5 to 8.0 mm slices) require 15 to 17 slices to image the entire head, hand segmentation of these slices is a very laborious process. However, because of the spatial complexity of many brain structures, notably the ventricles, automatic segmentation is not a simple undertaking. To accurately segment a structure such as the ventricles, we must have a model of equal complexity to guide the segmentation, and one that can incorporate the variability among different subjects from a pre-specified group. Analysis of MR brain scans is accomplished by utilizing the data from T2-weighted and proton density images to isolate the regions of interest. Identification is then done automatically with the aid of a composite model formed from the operator-assisted segmentation of MR scans of subjects from the same group. We describe the construction of the model and demonstrate its use in the segmentation and labeling of the ventricles in the brain.

  17. Influence of cold walls on PET image quantification and volume segmentation: A phantom study

    SciTech Connect

    Berthon, B.; Marshall, C.; Edwards, A.; Spezi, E.; Evans, M.

    2013-08-15

    Purpose: Commercially available fillable plastic inserts used in positron emission tomography phantoms usually have thick plastic walls, separating their content from the background activity. These “cold” walls can modify the intensity values of neighboring active regions due to the partial volume effect, resulting in errors in the estimation of standardized uptake values. Numerous papers suggest that this is an issue for phantom work simulating tumor tissue, quality control, and calibration work. This study aims to investigate the influence of cold plastic wall thickness on the quantification of ¹⁸F-fluorodeoxyglucose, on the image activity recovery, and on the performance of advanced automatic segmentation algorithms for the delineation of active regions delimited by plastic walls. Methods: A commercial set of six spheres of different diameters was replicated using a manufacturing technique which achieves a reduction in plastic wall thickness of up to 90%, while keeping the same internal volume. Both sets of thin- and thick-wall inserts were imaged simultaneously in a custom phantom for six different tumor-to-background ratios. Intensity values were compared in terms of the mean and maximum standardized uptake values (SUVs) in the spheres and the mean SUV of the hottest 1 ml region (SUVmax, SUVmean, and SUVpeak). The recovery coefficient (RC) was also derived for each sphere. The results were compared against the values predicted by a theoretical model of the PET-intensity profiles for the same tumor-to-background ratios (TBRs), sphere sizes, and wall thicknesses. In addition, ten automatic segmentation methods, written in house, were applied to both thin- and thick-wall inserts.
The contours obtained were compared to a computed tomography-derived gold standard (“ground truth”), using five different accuracy metrics. Results: The authors' results showed that thin-wall inserts achieved significantly higher SUVmean, SUVmax, and RC values (up to 25%, 16%, and 25% higher, respectively) compared to thick-wall inserts, which was in agreement with the theory. This effect decreased with increasing sphere size and TBR, and resulted in substantial (>5%) differences between thin- and thick-wall inserts for spheres up to 30 mm diameter and TBR up to 4. Thinner plastic walls were also shown to significantly improve the delineation accuracy for the majority of the segmentation methods tested, by increasing the proportion of lesion voxels detected, although the errors in image quantification remained non-negligible. Conclusions: This study quantified the significant effect of a 90% reduction in the thickness of insert walls on SUV quantification and PET-based boundary detection. Mean SUVs inside the inserts and recovery coefficients were particularly affected by the presence of thick cold walls, as predicted by a theoretical approach. The accuracy of some delineation algorithms was also significantly improved by the introduction of thin-wall inserts instead of thick-wall inserts. This study demonstrates the risk of errors deriving from the use of cold-wall inserts to assess and compare the performance of PET segmentation methods.
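The two metrics central to this phantom study, SUV and the recovery coefficient, have simple standard definitions; a sketch with invented activity values (the common 1 g/mL tissue-density assumption is made explicit in the comment):

```python
def suv(conc_kbq_per_ml, injected_mbq, weight_kg):
    """Standardized uptake value: tissue activity concentration divided by
    injected dose per unit body mass (assumes tissue density of 1 g/mL,
    so the result is dimensionless)."""
    dose_per_gram = injected_mbq * 1000.0 / (weight_kg * 1000.0)  # kBq/g
    return conc_kbq_per_ml / dose_per_gram

def recovery_coefficient(measured_mean, true_concentration):
    """Fraction of the true activity concentration recovered in the image;
    partial volume effects (e.g., cold walls) push this below 1."""
    return measured_mean / true_concentration
```

With thick cold walls, the measured mean inside a small sphere drops, so `recovery_coefficient` falls further below 1, which is exactly the wall-thickness effect the study quantifies.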

  18. High volume data storage architecture analysis

    NASA Technical Reports Server (NTRS)

    Malik, James M.

    1990-01-01

    A High Volume Data Storage Architecture Analysis was conducted. The results, presented in this report, will be applied to problems of high volume data requirements such as those anticipated for the Space Station Control Center. High volume data storage systems at several different sites were analyzed for archive capacity, storage hierarchy and migration philosophy, and retrieval capabilities. Proposed architectures were solicited from the sites selected for in-depth analysis. Model architectures for a hypothetical data archiving system, for a high speed file server, and for high volume data storage are attached.

  19. Study of tracking and data acquisition system for the 1990's. Volume 4: TDAS space segment architecture

    NASA Technical Reports Server (NTRS)

    Orr, R. S.

    1984-01-01

    Tracking and data acquisition system (TDAS) requirements, TDAS architectural goals, enhanced TDAS subsystems, constellation and networking options, TDAS spacecraft options, crosslink implementation, baseline TDAS space segment architecture, and threat model development/security analysis are addressed.

  20. An analysis of segmentation dynamics throughout embryogenesis in the centipede Strigamia maritima

    PubMed Central

    2013-01-01

    Background Most segmented animals add segments sequentially as the animal grows. In vertebrates, segment patterning depends on oscillations of gene expression coordinated as travelling waves in the posterior, unsegmented mesoderm. Recently, waves of segmentation gene expression have been clearly documented in insects. However, it remains unclear whether cyclic gene activity is widespread across arthropods, and possibly ancestral among segmented animals. Previous studies have suggested that a segmentation oscillator may exist in Strigamia, an arthropod only distantly related to insects, but further evidence is needed to document this. Results Using the genes even skipped and Delta as representative of genes involved in segment patterning in insects and in vertebrates, respectively, we have carried out a detailed analysis of the spatio-temporal dynamics of gene expression throughout the process of segment patterning in Strigamia. We show that a segmentation clock is involved in segment formation: most segments are generated by cycles of dynamic gene activity that generate a pattern of double segment periodicity, which is only later resolved to the definitive single segment pattern. However, not all segments are generated by this process. The most posterior segments are added individually from a localized sub-terminal area of the embryo, without prior pair-rule patterning. Conclusions Our data suggest that dynamic patterning of gene expression may be widespread among the arthropods, but that a single network of segmentation genes can generate either oscillatory behavior at pair-rule periodicity or direct single segment patterning, at different stages of embryogenesis. PMID:24289308

  1. Monotone Signal Segments Analysis as a novel method of breath detection and breath-to-breath interval analysis in rat

    PubMed Central

    Bojic, Tijana; Saponjic, Jasna; Radulovacki, Miodrag; Carley, David W.; Kalauzi, Aleksandar

    2008-01-01

    We applied a novel approach to respiratory waveform analysis, Monotone Signal Segments Analysis (MSSA), to 6-h recordings of respiratory signals in rats. To validate MSSA as a respiratory signal analysis tool we tested it by detecting breaths and breath-to-breath intervals, by detecting respiratory timing and volume modes, and by detecting changes in respiratory pattern caused by lesions of monoaminergic systems in rats. MSSA differentiated three respiratory timing modes (tachypneic, eupneic, bradypneic-apneic) and three volume modes (artifacts, normovolemic, hypervolemic-sighs). Lesion-induced respiratory pattern modulation was visible as shifts in the distributions of monotone signal segment amplitudes and of breath-to-breath intervals. Specifically, noradrenergic lesion induced an increase in mean volume (p = 0.03), with no change in mean breath-to-breath interval duration (p = 0.06). MSSA of timing modes detected noradrenergic lesion-induced interdependent changes in the balance of eupneic (decrease; p = 0.02) and tachypneic (increase; p = 0.02) breath intervals with respect to control. In terms of breath durations within each timing mode, there was a tendency toward prolongation of the eupneic (p = 0.08) and bradypneic-apneic (p = 0.06) intervals. These results demonstrate that MSSA is sensitive to subtle shifts in respiratory rhythmogenesis not detectable by simple descriptive statistics of the respiratory pattern. MSSA represents a potentially valuable new tool for investigations of respiratory pattern control. PMID:18420469
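The core of MSSA, splitting a waveform into maximal monotone (rising or falling) segments and recording their amplitudes, can be sketched in a few lines; the test signal below is an invented toy waveform, not rat respiration data:

```python
def monotone_segments(signal):
    """Split a 1D signal into maximal monotone segments, returning
    (start_index, end_index, amplitude) triples. A breath then appears
    as a rising segment (inspiration) followed by a falling one."""
    segments = []
    start = 0
    direction = 0  # +1 rising, -1 falling, 0 undetermined
    for i in range(1, len(signal)):
        step = (signal[i] > signal[i - 1]) - (signal[i] < signal[i - 1])
        if direction == 0:
            direction = step
        elif step != 0 and step != direction:
            # Direction reversal: close the current segment at the turning point
            segments.append((start, i - 1, signal[i - 1] - signal[start]))
            start, direction = i - 1, step
    segments.append((start, len(signal) - 1, signal[-1] - signal[start]))
    return segments

# Toy waveform: rise, fall, rise
segments = monotone_segments([0, 1, 2, 1, 0, 2])
```

Segment amplitudes feed the volume-mode analysis described above, while the spacing between successive rising-segment onsets yields breath-to-breath intervals.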

  2. Non-invasive measurement of choroidal volume change and ocular rigidity through automated segmentation of high-speed OCT imaging

    PubMed Central

    Beaton, L.; Mazzaferri, J.; Lalonde, F.; Hidalgo-Aguirre, M.; Descovich, D.; Lesk, M. R.; Costantino, S.

    2015-01-01

    We have developed a novel optical approach to determine pulsatile ocular volume changes using automated segmentation of the choroid, which, together with Dynamic Contour Tonometry (DCT) measurements of intraocular pressure (IOP), allows estimation of the ocular rigidity (OR) coefficient. Spectral Domain Optical Coherence Tomography (OCT) videos were acquired with Enhanced Depth Imaging (EDI) at 7 Hz for ~50 seconds at the fundus. A novel segmentation algorithm based on graph search with an edge-probability weighting scheme was developed to measure choroidal thickness (CT) at each frame. Global ocular volume fluctuations were derived from frame-to-frame CT variations using an approximate eye model. Immediately after imaging, IOP and ocular pulse amplitude (OPA) were measured using DCT. OR was calculated from these peak pressure and volume changes. Our automated segmentation algorithm provides the first non-invasive method for determining ocular volume change due to pulsatile choroidal filling and for estimating the OR coefficient. Future applications of this method offer an important avenue to understanding the biomechanical basis of ocular pathophysiology. PMID:26137373
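
    The final step, estimating OR from peak pressure and volume changes, can be illustrated with Friedenwald's classical relation K = ln(P2/P1)/ΔV. Whether the cited study uses this exact form is an assumption, and the numbers below are purely illustrative.

```python
import math

def ocular_rigidity(iop_diastolic, pulse_amplitude, delta_volume_ul):
    """K = ln(P2 / P1) / dV (Friedenwald form); pressures in mmHg, volume in uL."""
    p1 = iop_diastolic                    # trough pressure
    p2 = iop_diastolic + pulse_amplitude  # peak pressure (trough + OPA)
    return math.log(p2 / p1) / delta_volume_ul

# Illustrative values: IOP 15 mmHg, OPA 2 mmHg, pulsatile volume change 5 uL.
print(round(ocular_rigidity(15.0, 2.0, 5.0), 4))  # prints 0.025
```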

  3. Breast Density Analysis Using an Automatic Density Segmentation Algorithm.

    PubMed

    Oliver, Arnau; Tortajada, Meritxell; Lladó, Xavier; Freixenet, Jordi; Ganau, Sergi; Tortajada, Lidia; Vilagran, Mariona; Sentís, Melcior; Martí, Robert

    2015-10-01

    Breast density is a strong risk factor for breast cancer. In this paper, we present an automated approach for breast density segmentation in mammographic images based on a supervised pixel-based classification and using textural and morphological features. The objective of the paper is not only to show the feasibility of an automatic algorithm for breast density segmentation but also to prove its potential application to the study of breast density evolution in longitudinal studies. The database used here contains three complete screening examinations, acquired 2 years apart, of 130 different patients. The approach was validated by comparing manual expert annotations with automatically obtained estimations. Transversal analysis of craniocaudal (CC) and mediolateral oblique (MLO) views of both breasts acquired in the same study showed a correlation coefficient of ρ = 0.96 between the mammographic density percentage for left and right breasts, whereas a comparison of both mammographic views showed a correlation of ρ = 0.95. A longitudinal study of breast density confirmed the trend that dense tissue percentage decreases over time, although we noticed that the decrease in the ratio depends on the initial amount of breast density. PMID:25720749

  4. Automated target recognition technique for image segmentation and scene analysis

    NASA Astrophysics Data System (ADS)

    Baumgart, Chris W.; Ciarcia, Christopher A.

    1994-03-01

    Automated target recognition (ATR) software has been designed to perform image segmentation and scene analysis. Specifically, this software was developed as a package for the Army's Minefield and Reconnaissance and Detector (MIRADOR) program. MIRADOR is an on/off road, remote control, multisensor system designed to detect buried and surface-emplaced metallic and nonmetallic antitank mines. The basic requirements for this ATR software were the following: (1) the ability to separate target objects from the background in low signal-to-noise conditions; (2) the ability to handle a relatively high dynamic range in imaging light levels; (3) the ability to compensate for or remove light source effects such as shadows; and (4) the ability to identify target objects as mines. The image segmentation and target evaluation were performed using an integrated and parallel processing approach. Three basic techniques (texture analysis, edge enhancement, and contrast enhancement) were used collectively to extract all potential mine target shapes from the basic image. Target evaluation was then performed using a combination of size, geometrical, and fractal characteristics, which resulted in a calculated probability for each target shape. Overall results with this algorithm were quite good, though there is a tradeoff between detection confidence and the number of false alarms. This technology also has applications in the areas of hazardous waste site remediation, archaeology, and law enforcement.

  5. Improvement of partial volume segmentation for brain tissue on diffusion tensor images using multiple-tensor estimation.

    PubMed

    Kumazawa, Seiji; Yoshiura, Takashi; Honda, Hiroshi; Toyofuku, Fukai

    2013-12-01

    To improve evaluations of cortical and subcortical diffusivity in neurological diseases, it is necessary to improve the accuracy of brain diffusion tensor imaging (DTI) data segmentation. The conventional partial volume segmentation method fails to classify voxels with multiple white matter (WM) fiber orientations such as fiber-crossing regions. Our purpose was to improve the performance of segmentation by taking into account the partial volume effects due to both multiple tissue types and multiple WM fiber orientations. We quantitatively evaluated the overall performance of the proposed method using digital DTI phantom data. Moreover, we applied our method to human DTI data, and compared our results with those of a conventional method. In the phantom experiments, the conventional method and proposed method yielded almost the same root mean square error (RMSE) for gray matter (GM) and cerebrospinal fluid (CSF), while the RMSE in the proposed method was smaller than that in the conventional method for WM. The volume overlap measures between our segmentation results and the ground truth of the digital phantom were more than 0.8 in all three tissue types, and were greater than those in the conventional method. In visual comparisons for human data, the WM/GM/CSF regions obtained using our method were in better agreement with the corresponding regions depicted in the structural image than those obtained using the conventional method. The results of the digital phantom experiment and human data demonstrated that our method improved accuracy in the segmentation of brain tissue data on DTI compared to the conventional method. PMID:23589185

  6. Matching 3D segmented objects using wire frame analysis

    NASA Astrophysics Data System (ADS)

    Allen, Charles R.; O'Brien, Stephan

    1993-08-01

    This paper describes a novel technique in 3D sensory fusion for autonomous mobile vehicles. The primary sensor is a monocular camera mounted on a robot manipulator which pans to up to three positions on a 0.5 m vertical circle while mounted on the mobile vehicle. The passive scene is analyzed using a method of inverse perspective, which is described; the resulting scene analysis comprises 3D wire frames of all detected surfaces. The 3D scene analysis uses a dual T-800 transputer-based multiprocessor which generates primary scene information at a rate of one update per 10 seconds. A PC-based 3D matching algorithm is then used to match the segmented objects to a database of pre-taught 3D wire frames. The matching software is written in Prolog.

  7. Fully Automated Renal Tissue Volumetry in MR Volume Data Using Prior-Shape-Based Segmentation in Subject-Specific Probability Maps.

    PubMed

    Gloger, Oliver; Tönnies, Klaus; Laqua, Rene; Völzke, Henry

    2015-10-01

    Organ segmentation in magnetic resonance (MR) volume data is of increasing interest in epidemiological studies and clinical practice. Especially in large-scale population-based studies, organ volumetry is highly relevant, requiring exact organ segmentation. Since manual segmentation is time consuming and prone to reader variability, large-scale studies need automatic methods to perform organ segmentation. In this paper, we present an automated framework for renal tissue segmentation that computes renal parenchyma, cortex, and medulla volumetry in native MR volume data without any user interaction. We introduce a novel strategy of subject-specific probability map computation for renal tissue types, which takes inter- and intra-MR-intensity variability into account. Several kinds of tissue-related 2-D and 3-D prior-shape knowledge are incorporated in modularized framework parts to segment renal parenchyma in a final level set segmentation strategy. Subject-specific probabilities for medulla and cortex tissue are applied in a fuzzy clustering technique to delineate cortex and medulla tissue inside segmented parenchyma regions. The novel subject-specific computation approach provides clearly improved tissue probability map quality compared with existing methods. The framework likewise provides improved results for parenchyma segmentation compared with existing methods. Furthermore, cortex and medulla segmentation qualities are very promising, but cannot be compared with existing methods, since state-of-the-art methods for automated cortex and medulla segmentation in native MR volume data are still missing. PMID:25915954

  8. 3-D segmentation of retinal blood vessels in spectral-domain OCT volumes of the optic nerve head

    NASA Astrophysics Data System (ADS)

    Lee, Kyungmoo; Abràmoff, Michael D.; Niemeijer, Meindert; Garvin, Mona K.; Sonka, Milan

    2010-03-01

    Segmentation of retinal blood vessels can provide important information for detecting and tracking retinal vascular diseases including diabetic retinopathy, arterial hypertension, arteriosclerosis and retinopathy of prematurity (ROP). Many studies on 2-D segmentation of retinal blood vessels from a variety of medical images have been performed. However, 3-D segmentation of retinal blood vessels from spectral-domain optical coherence tomography (OCT) volumes, which is capable of providing geometrically accurate vessel models, to the best of our knowledge, has not been previously studied. The purpose of this study is to develop and evaluate a method that can automatically detect 3-D retinal blood vessels from spectral-domain OCT scans centered on the optic nerve head (ONH). The proposed method utilized a fast multiscale 3-D graph search to segment retinal surfaces as well as a triangular mesh-based 3-D graph search to detect retinal blood vessels. An experiment on 30 ONH-centered OCT scans (15 right eye scans and 15 left eye scans) from 15 subjects was performed, and the mean unsigned error in 3-D of the computer segmentations compared with the independent standard obtained from a retinal specialist was 3.4 ± 2.5 voxels (0.10 ± 0.07 mm).

  9. Meteorological analysis models, volume 2

    NASA Technical Reports Server (NTRS)

    Langland, R. A.; Stark, D. L.

    1976-01-01

    As part of the SEASAT program, two sets of analysis programs were developed. One set of programs produces 63 x 63 horizontal mesh analyses on a polar stereographic grid. The other set produces 187 x 187 third mesh analyses. The parameters analyzed include sea surface temperature, sea level pressure, and twelve levels of upper air temperature, height, and wind analyses. Both sets use operational data provided by a weather bureau. The analysis output is used to initialize the primitive equation forecast models also included.

  10. Analysis of Retinal Peripapillary Segmentation in Early Alzheimer's Disease Patients

    PubMed Central

    Salobrar-Garcia, Elena; Hoyas, Irene; Leal, Mercedes; de Hoz, Rosa; Rojas, Blanca; Ramirez, Ana I.; Salazar, Juan J.; Yubero, Raquel; Gil, Pedro; Triviño, Alberto; Ramirez, José M.

    2015-01-01

    Decreased thickness of the retinal nerve fiber layer (RNFL) may reflect retinal neuronal-ganglion cell death. A decrease in the RNFL has been demonstrated in Alzheimer's disease (AD) in addition to aging by optical coherence tomography (OCT). Twenty-three mild-AD patients and 28 age-matched control subjects with mean Mini-Mental State Examination 23.3 and 28.2, respectively, with no ocular disease or systemic disorders affecting vision, were considered for study. OCT peripapillary and macular segmentation thicknesses were examined in the right eye of each patient. Compared to controls, eyes of mild-AD patients showed no statistical difference in peripapillary RNFL thickness (P > 0.05); however, sectors 2, 3, 4, 8, 9, and 11 of the papilla showed thinning, while in sectors 1, 5, 6, 7, and 10 there was thickening. Total macular volume and RNFL thickness of the fovea in all four inner quadrants and in the outer temporal quadrants proved to be significantly decreased (P < 0.01). Despite the fact that peripapillary RNFL thickness did not statistically differ in comparison to control eyes, the increase in peripapillary thickness in our mild-AD patients could correspond to an early neurodegeneration stage and may entail the existence of an inflammatory process that could lead to progressive peripapillary fiber damage. PMID:26557684

  11. Recurrence interval analysis of trading volumes

    NASA Astrophysics Data System (ADS)

    Ren, Fei; Zhou, Wei-Xing

    2010-06-01

    We study the statistical properties of the recurrence intervals τ between successive trading volumes exceeding a certain threshold q. The recurrence interval analysis is carried out for 20 liquid Chinese stocks covering a period from January 2000 to May 2009, and two Chinese indices from January 2003 to April 2009. Similar to the recurrence interval distribution of the price returns, the tail of the recurrence interval distribution of the trading volumes follows a power-law scaling, and the results are verified by the goodness-of-fit tests using the Kolmogorov-Smirnov (KS) statistic, the weighted KS statistic and the Cramér-von Mises criterion. The measurements of the conditional probability distribution and the detrended fluctuation function show that both short-term and long-term memory effects exist in the recurrence intervals between trading volumes. We further study the relationship between trading volumes and price returns based on the recurrence interval analysis method. It is found that large trading volumes are more likely to occur following large price returns, and the comovement between trading volumes and price returns is more pronounced for large trading volumes.
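
    The recurrence-interval measurement itself is simple to state in code: the waiting times between successive values of a series that exceed the threshold q. A minimal sketch with toy data, not the authors' code:

```python
def recurrence_intervals(series, q):
    """Waiting times between successive values exceeding threshold q."""
    exceed = [i for i, v in enumerate(series) if v > q]
    return [b - a for a, b in zip(exceed, exceed[1:])]

# Toy daily trading volumes; exceedances of q=10 occur at t = 1, 4, 6, 9.
volumes = [5, 12, 3, 4, 15, 2, 11, 1, 1, 20]
print(recurrence_intervals(volumes, q=10))  # prints [3, 2, 3]
```

    The distribution and autocorrelation of these intervals are what the power-law tail and memory-effect results above are computed on.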

  12. Bifilar analysis study, volume 1

    NASA Technical Reports Server (NTRS)

    Miao, W.; Mouzakis, T.

    1980-01-01

    A coupled rotor/bifilar/airframe analysis was developed and utilized to study the dynamic characteristics of the centrifugally tuned, rotor-hub-mounted, bifilar vibration absorber. The analysis contains the major components that impact the bifilar absorber performance, namely, an elastic rotor with hover aerodynamics, a flexible fuselage, and nonlinear individual degrees of freedom for each bifilar mass. Airspeed, rotor speed, bifilar mass and tuning variations are considered. The performance of the bifilar absorber is shown to be a function of its basic parameters: dynamic mass, damping and tuning, as well as the impedance of the rotor hub. The effect of the dissimilar responses of the individual bifilar masses, which are caused by tolerance-induced mass, damping and tuning variations, is also examined.

  13. Markov random field and Gaussian mixture for segmented MRI-based partial volume correction in PET

    NASA Astrophysics Data System (ADS)

    Bousse, Alexandre; Pedemonte, Stefano; Thomas, Benjamin A.; Erlandsson, Kjell; Ourselin, Sébastien; Arridge, Simon; Hutton, Brian F.

    2012-10-01

    In this paper we propose a segmented magnetic resonance imaging (MRI) prior-based maximum penalized likelihood deconvolution technique for positron emission tomography (PET) images. The model assumes the existence of activity classes that behave like a hidden Markov random field (MRF) driven by the segmented MRI. We utilize a mean field approximation to compute the likelihood of the MRF. We tested our method on both simulated and clinical data (brain PET) and compared our results with PET images corrected with the re-blurred Van Cittert (VC) algorithm, the simplified Guven (SG) algorithm and the region-based voxel-wise (RBV) technique. We demonstrate that our algorithm outperforms the VC algorithm, and that it outperforms the SG and RBV corrections when the segmented MRI is inconsistent with the PET image (e.g., mis-segmentation or lesions).

  14. The development and testing of a digital PET phantom for the evaluation of tumor volume segmentation techniques.

    PubMed

    Aristophanous, Michalis; Penney, Bill C; Pelizzari, Charles A

    2008-07-01

    Methods for accurate tumor volume segmentation of positron emission tomography (PET) images have been under investigation in recent years, partly as a result of the increased use of PET in radiation treatment planning (RTP). None of the developed automated or semiautomated segmentation methods, however, has been shown reliable enough to be regarded as the standard. One reason for this is that there is no source of well characterized and reliable test data for evaluating such techniques. The authors have constructed a digital tumor phantom to address this need. The phantom was created using the Zubal phantom as input to the SimSET software used for PET simulations. Synthetic tumors were placed in the lung of the Zubal phantom to provide the targets for segmentation. The authors concentrated on the lung, since much of the interest in including PET in RTP is for non-small cell lung cancer. Several tests were performed on the phantom to ensure its close resemblance to clinical PET scans. The authors measured statistical quantities to compare image intensity distributions from regions-of-interest (ROIs) placed in the liver, the lungs, and tumors in phantom and clinical reconstructions. Using ROIs, they also made measurements of autocorrelation functions to ensure the image texture is similar in clinical and phantom data. The authors also compared the intensity profile and appearance of real and simulated uniform activity spheres within a uniform background. These measurements, along with visual comparisons of the phantom with clinical scans, indicate that the simulated phantom mimics reality quite well. Finally, they investigated and quantified the relationship between the threshold required to segment a tumor and the inhomogeneity of the tumor's image intensity distribution. The tests and various measurements performed in this study demonstrate how the phantom can offer a reliable way of testing and investigating tumor volume segmentation in PET. PMID:18697557
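
    The threshold-based delineation whose behavior the study quantifies is commonly expressed as keeping voxels above a fixed fraction of the peak uptake. The sketch below is a generic illustration of that idea on a toy 2-D slice, not the phantom study's pipeline.

```python
def threshold_segment(image, fraction):
    """Binary mask of voxels at or above `fraction` of the peak intensity."""
    peak = max(max(row) for row in image)
    cutoff = fraction * peak
    return [[1 if v >= cutoff else 0 for v in row] for v_row in [None] for row in image]

# Simpler, equivalent form without the dummy loop:
def threshold_segment(image, fraction):  # noqa: F811 (redefinition intended)
    peak = max(max(row) for row in image)
    cutoff = fraction * peak
    return [[1 if v >= cutoff else 0 for v in row] for row in image]

# Toy PET-like slice with a bright "tumor" in the center.
pet_slice = [
    [1, 2, 1, 0],
    [2, 9, 8, 1],
    [1, 8, 10, 2],
    [0, 1, 2, 1],
]
mask = threshold_segment(pet_slice, fraction=0.4)
print(sum(sum(row) for row in mask))  # segmented voxel count; prints 4
```

    How the appropriate `fraction` shifts as the lesion's intensity distribution becomes more inhomogeneous is exactly the relationship the abstract says the phantom was used to investigate.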

  15. Semi-automatic segmentation for 3D motion analysis of the tongue with dynamic MRI.

    PubMed

    Lee, Junghoon; Woo, Jonghye; Xing, Fangxu; Murano, Emi Z; Stone, Maureen; Prince, Jerry L

    2014-12-01

    Dynamic MRI has been widely used to track the motion of the tongue and measure its internal deformation during speech and swallowing. Accurate segmentation of the tongue is a prerequisite step to define the target boundary and constrain the tracking to tissue points within the tongue. Segmentation of 2D slices or 3D volumes is challenging because of the large number of slices and time frames involved in the segmentation, as well as the incorporation of numerous local deformations that occur throughout the tongue during motion. In this paper, we propose a semi-automatic approach to segment 3D dynamic MRI of the tongue. The algorithm steps include seeding a few slices at one time frame, propagating seeds to the same slices at different time frames using deformable registration, and random walker segmentation based on these seed positions. This method was validated on the tongue of five normal subjects carrying out the same speech task with multi-slice 2D dynamic cine-MR images obtained at three orthogonal orientations and 26 time frames. The resulting semi-automatic segmentations of a total of 130 volumes showed an average dice similarity coefficient (DSC) score of 0.92 with less segmented volume variability between time frames than in manual segmentations. PMID:25155697

  16. Semi-automatic segmentation for 3D motion analysis of the tongue with dynamic MRI

    PubMed Central

    Lee, Junghoon; Woo, Jonghye; Xing, Fangxu; Murano, Emi Z.; Stone, Maureen; Prince, Jerry L.

    2014-01-01

    Dynamic MRI has been widely used to track the motion of the tongue and measure its internal deformation during speech and swallowing. Accurate segmentation of the tongue is a prerequisite step to define the target boundary and constrain the tracking to tissue points within the tongue. Segmentation of 2D slices or 3D volumes is challenging because of the large number of slices and time frames involved in the segmentation, as well as the incorporation of numerous local deformations that occur throughout the tongue during motion. In this paper, we propose a semi-automatic approach to segment 3D dynamic MRI of the tongue. The algorithm steps include seeding a few slices at one time frame, propagating seeds to the same slices at different time frames using deformable registration, and random walker segmentation based on these seed positions. This method was validated on the tongue of five normal subjects carrying out the same speech task with multi-slice 2D dynamic cine-MR images obtained at three orthogonal orientations and 26 time frames. The resulting semi-automatic segmentations of a total of 130 volumes showed an average dice similarity coefficient (DSC) score of 0.92 with less segmented volume variability between time frames than in manual segmentations. PMID:25155697
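
    The DSC reported in these two records is the standard overlap measure 2|A∩B|/(|A|+|B|) on binary masks. A minimal sketch with toy flattened masks, purely for illustration:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (flat lists)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

auto   = [1, 1, 1, 0, 0, 1]  # toy automatic segmentation
manual = [1, 1, 0, 0, 1, 1]  # toy manual segmentation
print(round(dice(auto, manual), 3))  # prints 0.75
```

    A DSC of 0.92, as reported above, indicates that the semi-automatic and manual tongue masks overlap on the large majority of their voxels.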

  17. Quantitative Analysis of Mouse Retinal Layers Using Automated Segmentation of Spectral Domain Optical Coherence Tomography Images

    PubMed Central

    Dysli, Chantal; Enzmann, Volker; Sznitman, Raphael; Zinkernagel, Martin S.

    2015-01-01

    Purpose Quantification of retinal layers using automated segmentation of optical coherence tomography (OCT) images allows for longitudinal studies of retinal and neurological disorders in mice. The purpose of this study was to compare the performance of automated retinal layer segmentation algorithms with data from manual segmentation in mice using the Spectralis OCT. Methods Spectral domain OCT images from 55 mice of three different strains were analyzed in total. The OCT scans from 22 C57Bl/6, 22 BALBc, and 11 C3A.Cg-Pde6b+Prph2Rd2/J mice were automatically segmented using three commercially available automated retinal segmentation algorithms and compared to manual segmentation. Results Fully automated segmentation performed well in mice and showed coefficients of variation (CV) of below 5% for the total retinal volume. However, all three automated segmentation algorithms yielded much thicker total retinal thickness values compared to manual segmentation data (P < 0.0001) due to segmentation errors in the basement membrane. Conclusions Whereas the automated retinal segmentation algorithms performed well for the inner layers, the retinal pigment epithelium (RPE) was delineated within the sclera, leading to consistently thicker measurements of the photoreceptor layer and the total retina. Translational Relevance The introduction of spectral domain OCT allows for accurate imaging of the mouse retina. Exact quantification of retinal layer thicknesses in mice is important to study layers of interest under various pathological conditions. PMID:26336634

  18. SEMI-AUTOMATIC SEGMENTATION OF THE TONGUE FOR 3D MOTION ANALYSIS WITH DYNAMIC MRI

    PubMed Central

    Lee, Junghoon; Woo, Jonghye; Xing, Fangxu; Murano, Emi Z.; Stone, Maureen; Prince, Jerry L.

    2013-01-01

    Accurate segmentation is an important preprocessing step for measuring the internal deformation of the tongue during speech and swallowing using 3D dynamic MRI. In an MRI stack, manual segmentation of every 2D slice and time frame is time-consuming due to the large number of volumes captured over the entire task cycle. In this paper, we propose a semi-automatic segmentation workflow for processing 3D dynamic MRI of the tongue. The steps comprise seeding a few slices, seed propagation by deformable registration, random walker segmentation of the temporal stack of images and 3D super-resolution volumes. This method was validated on the tongue of two subjects carrying out the same speech task with multi-slice 2D dynamic cine-MR images obtained at three orthogonal orientations and 26 time frames. The resulting semi-automatic segmentations of 52 volumes showed an average dice similarity coefficient (DSC) score of 0.9 with reduced segmented volume variability compared to manual segmentations. PMID:24443699

  19. Automated cerebellar lobule segmentation with application to cerebellar structural analysis in cerebellar disease.

    PubMed

    Yang, Zhen; Ye, Chuyang; Bogovic, John A; Carass, Aaron; Jedynak, Bruno M; Ying, Sarah H; Prince, Jerry L

    2016-02-15

    The cerebellum plays an important role in both motor control and cognitive function. Cerebellar function is topographically organized and diseases that affect specific parts of the cerebellum are associated with specific patterns of symptoms. Accordingly, delineation and quantification of cerebellar sub-regions from magnetic resonance images are important in the study of cerebellar atrophy and associated functional losses. This paper describes an automated cerebellar lobule segmentation method based on a graph cut segmentation framework. Results from multi-atlas labeling and tissue classification contribute to the region terms in the graph cut energy function and boundary classification contributes to the boundary term in the energy function. A cerebellar parcellation is achieved by minimizing the energy function using the α-expansion technique. The proposed method was evaluated using a leave-one-out cross-validation on 15 subjects including both healthy controls and patients with cerebellar diseases. Based on reported Dice coefficients, the proposed method outperforms two state-of-the-art methods. The proposed method was then applied to 77 subjects to study the region-specific cerebellar structural differences in three spinocerebellar ataxia (SCA) genetic subtypes. Quantitative analysis of the lobule volumes shows distinct patterns of volume changes associated with different SCA subtypes consistent with known patterns of atrophy in these genetic subtypes. PMID:26408861
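
    The energy minimized by graph-cut frameworks of the kind described above combines region (unary) terms with boundary (pairwise) terms. The toy evaluation below illustrates an energy of that general form on a 1-D "image"; the costs are made up and do not correspond to the paper's multi-atlas, tissue-classification, or boundary-classifier terms.

```python
def energy(labels, unary, boundary_weight):
    """Region cost of each pixel's label plus a penalty per label boundary."""
    region = sum(unary[i][lab] for i, lab in enumerate(labels))
    boundary = sum(boundary_weight for a, b in zip(labels, labels[1:]) if a != b)
    return region + boundary

unary = [  # cost of assigning each pixel to label 0 or 1 (illustrative)
    {0: 0.1, 1: 0.9},
    {0: 0.2, 1: 0.8},
    {0: 0.7, 1: 0.3},
]
# Labeling [0, 0, 1]: region cost 0.1 + 0.2 + 0.3, one boundary at weight 0.5.
print(round(energy([0, 0, 1], unary, boundary_weight=0.5), 3))  # prints 1.1
```

    Minimizers such as α-expansion search over labelings to drive exactly this kind of objective down; the toy above only evaluates it for a fixed labeling.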

  20. Image segmentation by iterative parallel region growing with application to data compression and image analysis

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    1988-01-01

    Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of the approach on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
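
    The "globally best merges first" idea can be sketched on a 1-D toy: repeatedly merge the adjacent pair of regions whose means differ least, stopping once no pair is within a tolerance. This is an illustrative stand-in, not the MPP implementation; names and data are hypothetical.

```python
def grow_regions(values, tol):
    """Merge adjacent regions, globally best (smallest mean difference) first."""
    regions = [[v] for v in values]
    while len(regions) > 1:
        means = [sum(r) / len(r) for r in regions]
        diffs = [abs(means[i + 1] - means[i]) for i in range(len(means) - 1)]
        best = min(range(len(diffs)), key=diffs.__getitem__)
        if diffs[best] > tol:
            break  # no remaining pair is similar enough to merge
        regions[best] = regions[best] + regions.pop(best + 1)
    return [sum(r) / len(r) for r in regions]

# Three plateaus of pixel intensities collapse to three region means.
print(grow_regions([10, 11, 10, 50, 52, 90], tol=5))
```

    Because the smallest difference is always merged first, the result does not depend on a left-to-right scan order, which is the order-dependence problem the abstract identifies in earlier region-growing approaches.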

  1. Applicability of semi-automatic segmentation for volumetric analysis of brain lesions.

    PubMed

    Heinonen, T; Dastidar, P; Eskola, H; Frey, H; Ryymin, P; Laasonen, E

    1998-01-01

    This project involves the development of a fast semi-automatic segmentation procedure to make an accurate volumetric estimation of brain lesions. This method has been applied in the segmentation of demyelination plaques in Multiple Sclerosis (MS) and right cerebral hemispheric infarctions in patients with neglect. The developed segmentation method includes several image processing techniques, such as image enhancement, amplitude segmentation, and region growing. The entire program operates on a PC-based computer and applies graphical user interfaces. Twenty-three patients with MS and 43 patients with right cerebral hemisphere infarctions were studied on a 0.5 T MRI unit. The MS plaques and cerebral infarctions were thereafter segmented. The volumetric accuracy of the program was demonstrated by segmenting Magnetic Resonance (MR) images of fluid-filled syringes. The relative error of the total volume measurement based on the MR images of syringes was 1.5%. A repeatability test was also carried out as an inter- and intra-observer study in which MS plaques of six randomly selected patients were segmented. These tests indicated 7% variability in the inter-observer study and 4% variability in the intra-observer study. The average time used to segment and calculate the total plaque volumes for one patient was 10 min. This simple segmentation method can be utilized in the quantitation of anatomical structures, such as air cells in the sinonasal and temporal bone area, as well as in different pathological conditions, such as brain tumours, intracerebral haematomas and bony destructions. PMID:9680601
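
    The phantom-style accuracy check described above amounts to comparing the voxel-count volume of a segmentation against a known fluid volume. A minimal sketch with illustrative numbers (not the study's data):

```python
def relative_error(n_voxels, voxel_volume_mm3, true_volume_mm3):
    """Relative error of a segmented volume against a known reference volume."""
    measured = n_voxels * voxel_volume_mm3
    return abs(measured - true_volume_mm3) / true_volume_mm3

# Hypothetical syringe: 10 mL (10000 mm^3) of fluid, 1 mm^3 voxels,
# segmentation found 10150 voxels.
err = relative_error(n_voxels=10150, voxel_volume_mm3=1.0, true_volume_mm3=10000.0)
print(f"{100 * err:.1f}%")  # prints 1.5%
```

    Repeating this over several observers and sessions gives the inter- and intra-observer variability figures quoted above.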

  2. Automated segmentation of chronic stroke lesions using LINDA: Lesion identification with neighborhood data analysis.

    PubMed

    Pustina, Dorian; Coslett, H Branch; Turkeltaub, Peter E; Tustison, Nicholas; Schwartz, Myrna F; Avants, Brian

    2016-04-01

    The gold standard for identifying stroke lesions is manual tracing, a method that is known to be observer dependent and time consuming, thus impractical for big data studies. We propose LINDA (Lesion Identification with Neighborhood Data Analysis), an automated segmentation algorithm capable of learning the relationship between existing manual segmentations and a single T1-weighted MRI. A dataset of 60 left hemispheric chronic stroke patients is used to build the method and test it with k-fold and leave-one-out procedures. With respect to manual tracings, predicted lesion maps showed a mean dice overlap of 0.696 ± 0.16, Hausdorff distance of 17.9 ± 9.8 mm, and average displacement of 2.54 ± 1.38 mm. The manual and predicted lesion volumes correlated at r = 0.961. An additional dataset of 45 patients was utilized to test LINDA with independent data, achieving high accuracy rates and confirming its cross-institutional applicability. To investigate the cost of moving from manual tracings to automated segmentation, we performed comparative lesion-to-symptom mapping (LSM) on five behavioral scores. Predicted and manual lesions produced similar neuro-cognitive maps, albeit with some discrepancies, which are discussed. Of note, region-wise LSM was more robust to the prediction error than voxel-wise LSM. Our results show that, while several limitations exist, our current results compete with or exceed the state-of-the-art, producing consistent predictions, very low failure rates, and transferable knowledge between labs. This work also establishes a new viewpoint on evaluating automated methods not only with segmentation accuracy but also with brain-behavior relationships. LINDA is made available online with trained models from over 100 patients. Hum Brain Mapp 37:1405-1421, 2016. © 2016 Wiley Periodicals, Inc. PMID:26756101
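
    The Hausdorff distance quoted above is the worst-case boundary disagreement between two segmentations: the largest distance from a point in one set to its nearest point in the other. A minimal 2-D sketch on toy point sets (illustrative only):

```python
def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two 2-D point sets."""
    def directed(src, dst):
        # For each source point, find its nearest target; keep the worst case.
        return max(min(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                       for bx, by in dst) for ax, ay in src)
    return max(directed(points_a, points_b), directed(points_b, points_a))

a = [(0, 0), (1, 0), (2, 0)]  # toy boundary of one mask
b = [(0, 1), (1, 1), (5, 1)]  # toy boundary of the other
print(round(hausdorff(a, b), 3))  # driven by the outlier point (5, 1)
```

    Unlike the Dice overlap, a single stray point inflates this metric, which is why the 17.9 mm mean Hausdorff distance above can coexist with a much smaller 2.54 mm average displacement.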

  3. A novel approach for the automated segmentation and volume quantification of cardiac fats on computed tomography.

    PubMed

    Rodrigues, É O; Morais, F F C; Morais, N A O S; Conci, L S; Neto, L V; Conci, A

    2016-01-01

    The deposits of fat on the surroundings of the heart are correlated to several health risk factors such as atherosclerosis, carotid stiffness, coronary artery calcification, atrial fibrillation and many others. These deposits vary independently of obesity, which reinforces the need for their direct segmentation for further quantification. However, manual segmentation of these fats has not been widely deployed in clinical practice due to the required human workload and the consequent high cost of physicians and technicians. In this work, we propose a unified method for autonomous segmentation and quantification of two types of cardiac fat. The segmented fats are termed epicardial and mediastinal, and are separated from each other by the pericardium. Much effort was devoted to achieving minimal user intervention. The proposed methodology mainly comprises registration and classification algorithms to perform the desired segmentation. We compare the performance of several classification algorithms on this task, including neural networks, probabilistic models and decision tree algorithms. Experimental results of the proposed methodology have shown that the mean accuracy regarding both epicardial and mediastinal fats is 98.5% (99.5% if the features are normalized), with a mean true positive rate of 98.0%. On average, the Dice similarity index was equal to 97.6%. PMID:26474835

  4. Cumulative Heat Diffusion Using Volume Gradient Operator for Volume Analysis.

    PubMed

    Gurijala, K C; Wang, Lei; Kaufman, A

    2012-12-01

    We introduce a simple yet powerful method, Cumulative Heat Diffusion, for shape-based volume analysis that drastically reduces the computational cost compared to conventional heat diffusion. Unlike the conventional heat diffusion process, where diffusion is carried out by considering each node separately as the source, we consider all voxels simultaneously as sources and carry out the diffusion; hence the term cumulative heat diffusion. In addition, we introduce a new operator used in the evaluation of cumulative heat diffusion, called the Volume Gradient Operator (VGO). VGO is a combination of the Laplace-Beltrami operator (LBO) and a data-driven operator that is a function of the half gradient, where the half gradient is the absolute value of the difference between voxel intensities. By definition, VGO captures local shape information and is used to assign the initial heat values. Furthermore, VGO is also used as the weighting parameter for the heat diffusion process. We demonstrate that our approach robustly extracts shape-based features and thus forms the basis for improved classification and exploration of features based on shape. PMID:26357113

  5. Fractal Segmentation and Clustering Analysis for Seismic Time Slices

    NASA Astrophysics Data System (ADS)

    Ronquillo, G.; Oleschko, K.; Korvin, G.; Arizabalo, R. D.

    2002-05-01

    Fractal analysis has become part of the standard approach for quantifying texture on gray-tone or colored images. In this research we introduce a multi-stage fractal procedure to segment, classify and measure the clustering patterns on seismic time slices from a 3-D seismic survey. Five fractal classifiers (c1)-(c5) were designed to yield standardized, unbiased and precise measures of the clustering of seismic signals. The classifiers were tested on seismic time slices from the AKAL field, Cantarell Oil Complex, Mexico. The generalized lacunarity (c1), fractal signature (c2), heterogeneity (c3), rugosity of boundaries (c4) and continuity/tortuosity (c5) of the clusters are shown to be efficient measures of the time-space variability of seismic signals. The Local Fractal Analysis (LFA) of time slices has proved to be a powerful edge-detection filter for detecting and enhancing linear features such as faults or buried meandering rivers. The local fractal dimensions of the time slices were also compared with the self-affinity dimensions of the corresponding parts of porosity logs. It is speculated that the spectral dimension of the negative-amplitude parts of the time slice yields a measure of connectivity between the formation's high-porosity zones, and correlates with overall permeability.
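    To give a flavor of the fractal measures involved, a box-counting estimate of the fractal dimension of a binary point pattern can be sketched as follows. The classifiers (c1)-(c5) above are more elaborate; this shows only the basic ingredient they build on, and all names are illustrative:

    ```python
    import math

    def box_counting_dimension(points, sizes):
        """Estimate the fractal dimension of a 2-D point set by box counting.
        points: set of (x, y) integer coordinates; sizes: box side lengths."""
        counts = []
        for s in sizes:
            # number of boxes of side s that contain at least one point
            boxes = {(x // s, y // s) for x, y in points}
            counts.append(len(boxes))
        # least-squares slope of log(count) versus log(1/size)
        xs = [math.log(1.0 / s) for s in sizes]
        ys = [math.log(c) for c in counts]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        return (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
                sum((x - mx) ** 2 for x in xs))
    ```

    A completely filled region recovers the Euclidean dimension 2; clustered seismic amplitudes would yield a fractional value between 1 and 2.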

  6. Analysis of image thresholding segmentation algorithms based on swarm intelligence

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo

    2013-03-01

    Swarm intelligence-based image thresholding segmentation algorithms play an important role in the field of image segmentation. In this paper, we briefly introduce the theory behind four existing swarm intelligence-based segmentation algorithms: the fish swarm algorithm, artificial bee colony, bacterial foraging algorithm and particle swarm optimization. Benchmark images are then tested to show the differences among these four algorithms in segmentation accuracy, time consumption, convergence, and robustness to salt-and-pepper and Gaussian noise. Through these comparisons, the paper gives a qualitative analysis of the performance differences among the four algorithms. These conclusions provide practical guidance for real-world image segmentation.
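    A minimal illustration of swarm-based thresholding, assuming a particle swarm optimizer maximising the classic Otsu between-class variance over a grey-level histogram. The four algorithms compared in the paper are more sophisticated; the parameter values and function names here are arbitrary choices for the sketch:

    ```python
    import random

    def between_class_variance(hist, t):
        """Otsu criterion: weighted between-class variance for threshold t."""
        total = sum(hist)
        w0 = sum(hist[:t + 1])
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            return 0.0
        mu0 = sum(i * hist[i] for i in range(t + 1)) / w0
        mu1 = sum(i * hist[i] for i in range(t + 1, len(hist))) / w1
        return (w0 / total) * (w1 / total) * (mu0 - mu1) ** 2

    def pso_threshold(hist, n_particles=10, iters=40, seed=1):
        """Particle swarm search for the threshold maximising the Otsu criterion."""
        rng = random.Random(seed)
        hi = len(hist) - 1
        pos = [rng.uniform(0, hi) for _ in range(n_particles)]
        vel = [0.0] * n_particles
        pbest = pos[:]
        pval = [between_class_variance(hist, int(p)) for p in pos]
        g = pval.index(max(pval))
        gbest, gval = pbest[g], pval[g]
        for _ in range(iters):
            for i in range(n_particles):
                r1, r2 = rng.random(), rng.random()
                # inertia + pull toward personal and global bests
                vel[i] = (0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                          + 1.5 * r2 * (gbest - pos[i]))
                pos[i] = min(max(pos[i] + vel[i], 0.0), hi)
                v = between_class_variance(hist, int(pos[i]))
                if v > pval[i]:
                    pbest[i], pval[i] = pos[i], v
                    if v > gval:
                        gbest, gval = pos[i], v
        return int(gbest)
    ```

    On a clearly bimodal histogram the swarm converges to a threshold separating the two modes; the comparison criteria in the paper (accuracy, time, convergence, noise robustness) can all be measured on such a loop.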

  7. REACH. Teacher's Guide, Volume III. Task Analysis.

    ERIC Educational Resources Information Center

    Morris, James Lee; And Others

    Designed for use with individualized instructional units (CE 026 345-347, CE 026 349-351) in the electromechanical cluster, this third volume of the postsecondary teacher's guide presents the task analysis which was used in the development of the REACH (Refrigeration, Electro-Mechanical, Air Conditioning, Heating) curriculum. The major blocks of…

  9. Estimation of body composition in Chinese and British men by ultrasonographic assessment of segmental adipose tissue volume.

    PubMed

    Eston, R; Evans, R; Fu, F

    1994-03-01

    It has been shown that ultrasonographic measurements can be used to predict body composition in adults. The purpose of this study was to assess the relationship between ultrasonograph and caliper (SKF) measurements of subcutaneous adipose tissue thickness in athletic Caucasian (English, E) and Asian (Chinese, C) men against estimates of body composition determined from hydrodensitometry (HYD). The usefulness of a proposed ultrasonographic method of estimating lean and fat proportions in the upper and lower limbs was also evaluated as a potential method of predicting body composition. Ultrasonography (US) was used to measure adipose and skin thickness at the following sites: biceps, triceps, subscapular, suprailiac, abdominal, pectoral, thigh and calf. Caliper measurements were also made at the above sites. Subcutaneous fat thickness and segmental radius were measured directly from the display screen of the ultrasonic scanner (Aloka 500 SD). By applying the geometry of a cone, the proximal and distal radii of the upper arm and upper leg were used to calculate the proportionate volumes of adipose tissue. The best correlations for US and SKF were obtained at the quadriceps, subscapular and pectoral sites for E (r = 0.96, 0.93 and 0.90, respectively) and at the quadriceps, calf and abdominal sites for C (r = 0.90, 0.81 and 0.75, respectively). The best ultrasonographic predictor of the percentage fat in both groups was the percentage adipose tissue volume in the upper leg (r = 0.83 and 0.79 for C and E, respectively). Stepwise multiple regression analysis indicated that the prediction of percentage fat was improved by the addition of the ultrasonographic abdomen measurement in both groups: Chinese sample: %fat = 0.491 %fat(leg) + 0.337 US abdomen + 0.95 (R = 0.89, s.e.e. = 1.9%); English sample: %fat = 0.435 %fat(leg) + 0.230 US abdomen - 0.765 (R = 0.80, s.e.e. = 3.6%).
It is concluded that ultrasonographic measurements of subcutaneous adipose tissue and volumetric assessment of percentage adipose tissue in the thigh are useful estimates of body composition in athletic English and Chinese males. PMID:8044501
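    The cone-geometry volume estimate and the Chinese-sample regression quoted above can be sketched as follows. Treating the limb segment as a truncated cone (frustum) whose lean core is obtained by subtracting the ultrasonographic fat thickness from the outer radii is an assumption about how the geometry was applied; the regression coefficients are the published ones:

    ```python
    import math

    def frustum_volume(r_proximal, r_distal, length):
        """Volume of a truncated cone (frustum) from its two end radii."""
        return math.pi * length * (r_proximal ** 2 + r_proximal * r_distal
                                   + r_distal ** 2) / 3.0

    def percent_adipose(r_prox, r_dist, fat_prox, fat_dist, length):
        """Proportionate adipose volume of a limb segment: outer frustum minus
        the inner (lean) frustum whose radii are reduced by the fat thickness."""
        total = frustum_volume(r_prox, r_dist, length)
        lean = frustum_volume(r_prox - fat_prox, r_dist - fat_dist, length)
        return 100.0 * (total - lean) / total

    def percent_fat_chinese(pct_fat_leg, us_abdomen):
        """Published regression for the Chinese sample (R = 0.89, s.e.e. = 1.9%)."""
        return 0.491 * pct_fat_leg + 0.337 * us_abdomen + 0.95
    ```

    For example, a thigh with outer radii 10 and 8 cm and 1 cm of subcutaneous fat at each end yields a percentage adipose volume of about 21%, which would then feed the regression together with the abdominal ultrasound measurement.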

  10. Estimation of body composition in Chinese and British men by ultrasonographic assessment of segmental adipose tissue volume.

    PubMed Central

    Eston, R; Evans, R; Fu, F

    1994-01-01

    It has been shown that ultrasonographic measurements can be used to predict body composition in adults. The purpose of this study was to assess the relationship between ultrasonograph and caliper (SKF) measurements of subcutaneous adipose tissue thickness in athletic Caucasian (English, E) and Asian (Chinese, C) men against estimates of body composition determined from hydrodensitometry (HYD). The usefulness of a proposed ultrasonographic method of estimating lean and fat proportions in the upper and lower limbs was also evaluated as a potential method of predicting body composition. Ultrasonography (US) was used to measure adipose and skin thickness at the following sites: biceps, triceps, subscapular, suprailiac, abdominal, pectoral, thigh and calf. Caliper measurements were also made at the above sites. Subcutaneous fat thickness and segmental radius were measured directly from the display screen of the ultrasonic scanner (Aloka 500 SD). By applying the geometry of a cone, the proximal and distal radii of the upper arm and upper leg were used to calculate the proportionate volumes of adipose tissue. The best correlations for US and SKF were obtained at the quadriceps, subscapular and pectoral sites for E (r = 0.96, 0.93 and 0.90, respectively) and at the quadriceps, calf and abdominal sites for C (r = 0.90, 0.81 and 0.75, respectively). The best ultrasonographic predictor of the percentage fat in both groups was the percentage adipose tissue volume in the upper leg (r = 0.83 and 0.79 for C and E, respectively). Stepwise multiple regression analysis indicated that the prediction of percentage fat was improved by the addition of the ultrasonographic abdomen measurement in both groups: Chinese sample: %fat = 0.491 %fat(leg) + 0.337 US abdomen + 0.95 (R = 0.89, s.e.e. = 1.9%); English sample: %fat = 0.435 %fat(leg) + 0.230 US abdomen - 0.765 (R = 0.80, s.e.e. = 3.6%).
It is concluded that ultrasonographic measurements of subcutaneous adipose tissue and volumetric assessment of percentage adipose tissue in the thigh are useful estimates of body composition in athletic English and Chinese males. PMID:8044501

  11. SU-E-J-238: Monitoring Lymph Node Volumes During Radiotherapy Using Semi-Automatic Segmentation of MRI Images

    SciTech Connect

    Veeraraghavan, H; Tyagi, N; Riaz, N; McBride, S; Lee, N; Deasy, J

    2014-06-01

    Purpose: Identification and image-based monitoring of lymph nodes growing due to disease could be an attractive alternative to prophylactic head and neck irradiation. We evaluated the accuracy of the user-interactive Grow Cut algorithm for volumetric segmentation of radiotherapy-relevant lymph nodes from MRI taken weekly during radiotherapy. Methods: The algorithm employs user-drawn strokes in the image to volumetrically segment multiple structures of interest. We used 3D T2-weighted turbo spin echo images with an isotropic resolution of 1 mm3 and FOV of 492×492×300 mm3 of head and neck cancer patients who underwent weekly MR imaging during the course of radiotherapy. Various lymph node (LN) levels (N2, N3, N4/5) were individually contoured on the weekly MR images by an expert physician and used as ground truth in our study. The segmentation results were compared with the physician-drawn lymph nodes based on the DICE similarity score. Results: Three head and neck patients with 6 weekly MR images each were evaluated. Two patients had level 2 LN drawn and one patient had level N2, N3 and N4/5 drawn on each MR image. The algorithm took an average of a minute to segment the entire volume (512×512×300 mm3). The algorithm achieved an overall DICE similarity score of 0.78. The time taken for initializing and obtaining the volumetric mask was about 5 min for cases with only N2 LN and about 15 min for the case with N2, N3 and N4/5 level nodes. The longer initialization time for the latter case was due to the need for accurate user inputs to separate overlapping portions of the different LN. The standard deviation in segmentation accuracy at different time points was at most 0.05. Conclusions: Our initial evaluation of Grow Cut segmentation shows reasonably accurate and consistent volumetric segmentations of LN with minimal user effort and time.
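    A minimal 2-D sketch of the Grow Cut cellular automaton: labels spread outward from user strokes, and each "attack" on a neighbour is weighted by intensity similarity, so fronts stall at strong edges. The update rule follows the general published scheme, but this toy implementation and its names are illustrative, not the tool evaluated above:

    ```python
    def grow_cut(image, seeds, iterations=50):
        """Minimal GrowCut automaton on a 2-D grid.
        image: 2-D list of intensities in [0, 1]; seeds: {(row, col): label}."""
        rows, cols = len(image), len(image[0])
        label = [[0] * cols for _ in range(rows)]       # 0 = unlabelled
        strength = [[0.0] * cols for _ in range(rows)]
        for (r, c), lab in seeds.items():
            label[r][c], strength[r][c] = lab, 1.0
        for _ in range(iterations):
            changed = False
            new_label = [row[:] for row in label]
            new_strength = [row[:] for row in strength]
            for r in range(rows):
                for c in range(cols):
                    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < rows and 0 <= nc < cols and label[nr][nc]:
                            # attack force decays with intensity difference
                            g = 1.0 - abs(image[r][c] - image[nr][nc])
                            if g * strength[nr][nc] > new_strength[r][c]:
                                new_label[r][c] = label[nr][nc]
                                new_strength[r][c] = g * strength[nr][nc]
                                changed = True
            label, strength = new_label, new_strength
            if not changed:
                break
        return label
    ```

    With one seed stroke per structure, the labels flood their own intensity regions and stop at the boundary, which is why overlapping nodes with similar intensities need the extra user input mentioned above.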

  12. Finite Volume Methods: Foundation and Analysis

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Ohlberger, Mario

    2003-01-01

    Finite volume methods are a class of discretization schemes that have proven highly successful in approximating the solution of a wide variety of conservation law systems. They are extensively used in fluid mechanics, porous media flow, meteorology, electromagnetics, models of biological processes, semi-conductor device simulation and many other engineering areas governed by conservative systems that can be written in integral control volume form. This article reviews elements of the foundation and analysis of modern finite volume methods. The primary advantages of these methods are numerical robustness through the obtention of discrete maximum (minimum) principles, applicability on very general unstructured meshes, and the intrinsic local conservation properties of the resulting schemes. Throughout this article, specific attention is given to scalar nonlinear hyperbolic conservation laws and the development of high order accurate schemes for discretizing them. A key tool in the design and analysis of finite volume schemes suitable for non-oscillatory discontinuity capturing is discrete maximum principle analysis. A number of building blocks used in the development of numerical schemes possessing local discrete maximum principles are reviewed in one and several space dimensions, e.g. monotone fluxes, E-fluxes, TVD discretization, non-oscillatory reconstruction, slope limiters, positive coefficient schemes, etc. When available, theoretical results concerning a priori and a posteriori error estimates are given. Further advanced topics are then considered such as high order time integration, discretization of diffusion terms and the extension to systems of nonlinear conservation laws.
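    A first-order upwind scheme on a periodic 1-D grid illustrates two properties discussed above, local conservation and the discrete maximum principle (upwind is the simplest monotone flux); the function name is illustrative:

    ```python
    def upwind_advection(u, a, dt, dx, steps):
        """First-order upwind finite volume scheme for u_t + a u_x = 0 (a > 0)
        with periodic boundaries. Monotone: creates no new extrema, and the
        flux-difference form conserves the cell-average sum exactly."""
        n = len(u)
        nu = a * dt / dx  # CFL number; stability requires 0 <= nu <= 1
        for _ in range(steps):
            u = [u[i] - nu * (u[i] - u[i - 1]) for i in range(n)]
        return u
    ```

    At CFL number 1 the scheme transports the profile exactly one cell per step; for smaller CFL numbers it diffuses the profile but still obeys the maximum principle, which is what higher-order non-oscillatory reconstructions aim to retain.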

  13. The Analysis of Image Segmentation Hierarchies with a Graph-based Knowledge Discovery System

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Cook, Diane J.; Ketkar, Nikhil; Aksoy, Selim

    2008-01-01

    Currently available pixel-based analysis techniques do not effectively extract the information content from the increasingly available high spatial resolution remotely sensed imagery data. A general consensus is that object-based image analysis (OBIA) is required to effectively analyze this type of data. OBIA is usually a two-stage process: image segmentation followed by an analysis of the segmented objects. We are exploring an approach to OBIA in which hierarchical image segmentations provided by the Recursive Hierarchical Segmentation (RHSEG) software developed at NASA GSFC are analyzed by the Subdue graph-based knowledge discovery system developed by a team at Washington State University. In this paper we discuss our initial approach to representing the RHSEG-produced hierarchical image segmentations in a graphical form understandable by Subdue, and provide results on real and simulated data. We also discuss planned improvements designed to more effectively and completely convey the hierarchical segmentation information to Subdue and to improve processing efficiency.
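    One plausible way to hand a segmentation level to a graph-based discovery system such as Subdue is a region adjacency graph: one node per segment, one edge per pair of touching segments. This sketch is an assumption about the representation, not the paper's actual encoding:

    ```python
    def region_adjacency_graph(labels):
        """Build a region adjacency graph from a 2-D label image: one node per
        segment label, an undirected edge for every pair of 4-adjacent segments."""
        rows, cols = len(labels), len(labels[0])
        nodes, edges = set(), set()
        for r in range(rows):
            for c in range(cols):
                a = labels[r][c]
                nodes.add(a)
                for dr, dc in ((0, 1), (1, 0)):  # right and down neighbours
                    nr, nc = r + dr, c + dc
                    if nr < rows and nc < cols:
                        b = labels[nr][nc]
                        if a != b:
                            edges.add((min(a, b), max(a, b)))
        return nodes, edges
    ```

    Applying this to each level of the RHSEG hierarchy would yield a stack of graphs on which a substructure-discovery system can mine recurring patterns.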

  14. Verifying volume rendering using discretization error analysis.

    PubMed

    Etiene, Tiago; Jönsson, Daniel; Ropinski, Timo; Scheidegger, Carlos; Comba, João L D; Nonato, Luis Gustavo; Kirby, Robert M; Ynnerman, Anders; Silva, Cláudio T

    2014-01-01

    We propose an approach for verification of volume rendering correctness based on an analysis of the volume rendering integral, the basis of most DVR algorithms. With respect to the most common discretization of this continuous model (Riemann summation), we make assumptions about the impact of parameter changes on the rendered results and derive convergence curves describing the expected behavior. Specifically, we progressively refine the number of samples along the ray, the grid size, and the pixel size, and evaluate how the errors observed during refinement compare against the expected approximation errors. We derive the theoretical foundations of our verification approach, explain how to realize it in practice, and discuss its limitations. We also report the errors identified by our approach when applied to two publicly available volume rendering packages. PMID:24201332
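    The core of this verification procedure, comparing observed convergence orders under refinement against the expected approximation order, can be sketched as follows (a generic helper, not the paper's implementation):

    ```python
    import math

    def observed_orders(errors, refinement=2.0):
        """Observed convergence orders from errors measured under successive
        refinement of the discretization (samples per ray, grid, or pixel size)
        by a fixed factor r: p_k = log(e_k / e_{k+1}) / log(r)."""
        return [math.log(errors[k] / errors[k + 1]) / math.log(refinement)
                for k in range(len(errors) - 1)]
    ```

    A correctly implemented first-order Riemann summation should show orders near 1 as the step size halves; flat or erratic observed orders are exactly the kind of discrepancy the authors use to flag bugs in volume rendering packages.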

  15. Verifying Volume Rendering Using Discretization Error Analysis.

    PubMed

    Etiene, Tiago; Jonsson, Daniel; Ropinski, Timo; Scheidegger, Carlos; Comba, Joao; Nonato, L Gustavo; Kirby, Robert M; Ynnerman, Anders; Silva, Claudio T

    2013-06-13

    We propose an approach for verification of volume rendering correctness based on an analysis of the volume rendering integral, the basis of most DVR algorithms. With respect to the most common discretization of this continuous model (Riemann summation), we make assumptions about the impact of parameter changes on the rendered results and derive convergence curves describing the expected behavior. Specifically, we progressively refine the number of samples along the ray, the grid size, and the pixel size, and evaluate how the errors observed during refinement compare against the expected approximation errors. We derive the theoretical foundations of our verification approach, explain how to realize it in practice and discuss its limitations. We also report the errors identified by our approach when applied to two publicly-available volume rendering packages. PMID:23775481

  16. Analysis of segmental phosphate absorption in intact rats. A compartmental analysis approach.

    PubMed Central

    Kayne, L H; D'Argenio, D Z; Meyer, J H; Hu, M S; Jamgotchian, N; Lee, D B

    1993-01-01

    Available information supports the dominance of the proximal intestine in inorganic phosphate (Pi) absorption. However, there has been no strategy for analyzing segmental Pi absorption from a spontaneously propelled meal in an intact animal. We propose a solution using compartmental analysis. After intragastric administration of a 32P-labeled Pi liquid meal containing a nonabsorbable marker, [14C]polyethylene glycol (PEG), rats were killed at 2, 10, 20, 30, 60, 120, and 240 min. The gastrointestinal tract was removed and divided into seven segments, from which 32P and [14C]PEG were recovered. Data were expressed as a percentage of the dose fed, i.e., (32P[in segment] divided by 32P[fed]) and ([14C]PEG[in segment] divided by [14C]PEG[fed]), respectively. A compartmental model was constructed and the rate constants for intersegmental transit and segmental absorption were estimated. The "goodness of fit" between the simulated model and the actual data indicates that the estimated rate constants reflect in vivo events. The duodenum, with the highest transit and absorption rates, accounted for a third of the total absorption. However, the terminal ileum, with a lower absorption rate but a longer transit time, absorbed an equal amount of Pi. This approach allows the analysis of the mechanism and the regulation of Pi absorption under more authentic in vivo conditions. PMID:8450069
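    A forward-Euler sketch of such a compartmental chain: tracer moves from segment i to i+1 at rate k_transit[i] and is absorbed from segment i at rate k_absorb[i]. The structure mirrors the model described above, but the rate constants below are illustrative, not the fitted values:

    ```python
    def simulate_segments(k_transit, k_absorb, t_end=240.0, dt=0.01):
        """Euler integration of a linear compartmental chain.
        Returns (remaining_per_segment, absorbed_per_segment, excreted),
        all as fractions of the administered dose."""
        n = len(k_transit)
        amount = [0.0] * n
        amount[0] = 1.0               # entire dose starts in the first segment
        absorbed = [0.0] * n
        excreted = 0.0                # tracer leaving the last segment
        for _ in range(int(round(t_end / dt))):
            transit = [k_transit[i] * amount[i] for i in range(n)]
            uptake = [k_absorb[i] * amount[i] for i in range(n)]
            for i in range(n):
                inflow = transit[i - 1] if i > 0 else 0.0
                amount[i] += dt * (inflow - transit[i] - uptake[i])
                absorbed[i] += dt * uptake[i]
            excreted += dt * transit[-1]
        return amount, absorbed, excreted
    ```

    Because every unit leaving a compartment lands in another bucket, total mass is conserved; a segment with a low absorption rate but slow transit (like the terminal ileum above) can still accumulate a large absorbed fraction.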

  17. A novel iris segmentation algorithm based on small eigenvalue analysis

    NASA Astrophysics Data System (ADS)

    Harish, B. S.; Aruna Kumar, S. V.; Guru, D. S.; Ngo, Minh Ngoc

    2015-12-01

    In this paper, a simple and robust algorithm is proposed for iris segmentation. The proposed method consists of two steps. In the first step, the iris and pupil are segmented using the Robust Spatial Kernel FCM (RSKFCM) algorithm. RSKFCM is based on the traditional Fuzzy c-Means (FCM) algorithm, incorporating spatial information and using a kernel metric as the distance measure. In the second step, the small eigenvalue transformation is applied to localize the iris boundary. The transformation is based on statistical and geometrical properties of the small eigenvalue of the covariance matrix of a set of edge pixels. Extensive experiments were carried out on standard benchmark iris datasets (viz. CASIA-IrisV4 and UBIRIS.v2), and the proposed method was compared with existing iris segmentation methods. The proposed method has the lowest time complexity, O(n(i+p)). The experimental results emphasize that the proposed algorithm outperforms the existing iris segmentation methods.
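    For reference, plain fuzzy c-means on 1-D intensities; RSKFCM adds the spatial-information term and kernel metric, which are omitted in this sketch, and the initialisation here is a crude assumption for two clusters:

    ```python
    def fuzzy_c_means(data, c=2, m=2.0, iters=100, centers=None):
        """Plain fuzzy c-means on 1-D data. Returns (centers, memberships)."""
        if centers is None:
            centers = [min(data), max(data)][:c]  # crude init, only for c == 2
        u = []
        for _ in range(iters):
            # membership update: u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1))
            u = []
            for x in data:
                d = [abs(x - v) or 1e-12 for v in centers]  # avoid divide-by-zero
                u.append([1.0 / sum((d[i] / d[k]) ** (2.0 / (m - 1.0))
                                    for k in range(c))
                          for i in range(c)])
            # center update: mean of the data weighted by u^m
            centers = [sum((u[j][i] ** m) * data[j] for j in range(len(data))) /
                       sum(u[j][i] ** m for j in range(len(data)))
                       for i in range(c)]
        return centers, u
    ```

    On well-separated pupil/iris-like intensity groups the two centers converge to the cluster means, with soft memberships near 1 for unambiguous pixels.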

  18. Supervised Manifold Distance Segmentation.

    PubMed

    Kniss, J; Wang, Guanyu

    2011-11-01

    We present a simple and robust method for image and volume data segmentation based on manifold distance metrics. This is done by treating the image as a function that maps the 2D (image) or 3D (volume) domain to a 2D or 3D manifold in a higher dimensional feature space. We explore a range of possible feature spaces, including value, gradient, and probabilistic measures, and examine the consequences of including these measures in the feature space. The time and space computational complexity of our segmentation algorithm is O(N), which allows interactive, user-centric segmentation even for large data sets. We show that this method, given an appropriate choice of feature vector, produces results both qualitatively and quantitatively similar to Level Sets, Random Walkers, and others. We validate the robustness of this segmentation scheme with comparisons to standard ground-truth models and a sensitivity analysis of the algorithm. PMID:20855917

  19. Automated detection, 3D segmentation and analysis of high resolution spine MR images using statistical shape models

    NASA Astrophysics Data System (ADS)

    Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.

    2012-12-01

    Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract the 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes with the mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02 and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans and shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.

  20. Glioma grading using apparent diffusion coefficient map: application of histogram analysis based on automatic segmentation.

    PubMed

    Lee, Jeongwon; Choi, Seung Hong; Kim, Ji-Hoon; Sohn, Chul-Ho; Lee, Sooyeul; Jeong, Jaeseung

    2014-09-01

    The accurate diagnosis of glioma subtypes is critical for appropriate treatment, but conventional histopathologic diagnosis often exhibits significant intra-observer variability and sampling error. The aim of this study was to investigate whether histogram analysis using an automatically segmented region of interest (ROI), excluding cystic or necrotic portions, could improve the differentiation between low-grade and high-grade gliomas. Thirty-two patients (nine low-grade and 23 high-grade gliomas) were included in this retrospective investigation. The outer boundaries of the entire tumors were manually drawn in each section of the contrast-enhanced T1-weighted MR images. We excluded cystic or necrotic portions from the entire tumor volume. The histogram analyses were performed within the ROI on normalized apparent diffusion coefficient (ADC) maps. To evaluate the contribution of the proposed method to glioma grading, we compared the areas under the receiver operating characteristic (ROC) curves. We found that an ROI excluding cystic or necrotic portions was more useful for glioma grading than an entire tumor ROI. For the fifth percentile values of the normalized ADC histogram, the area under the ROC curve for the tumor ROIs excluding cystic or necrotic portions was significantly higher than that for the entire tumor ROIs. Automatic segmentation that excludes cystic or necrotic areas therefore probably improves the ability to differentiate between high- and low-grade gliomas on an ADC map. PMID:25042540
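    The fifth-percentile feature of the normalized ADC histogram can be sketched as below; the linear-interpolation convention and the normalization by a reference-tissue mean are assumptions of this sketch, not details taken from the study:

    ```python
    def percentile(values, p):
        """Linear-interpolated percentile (p in [0, 100]) of a sequence."""
        v = sorted(values)
        k = (len(v) - 1) * p / 100.0
        f = int(k)
        c = min(f + 1, len(v) - 1)
        return v[f] + (v[c] - v[f]) * (k - f)

    def normalized_adc(roi_adc, reference_adc):
        """Normalise ROI ADC values by the mean ADC of reference tissue."""
        ref = sum(reference_adc) / len(reference_adc)
        return [x / ref for x in roi_adc]
    ```

    The grading feature would then be `percentile(normalized_adc(roi, ref), 5)`, i.e. the low tail of the diffusion distribution, which is dominated by the most cellular (and thus most diffusion-restricted) part of the tumor.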

  1. Application of taxonomy theory, Volume 1: Computing a Hopf bifurcation-related segment of the feasibility boundary. Final report

    SciTech Connect

    Zaborszky, J.; Venkatasubramanian, V.

    1995-10-01

    Taxonomy Theory is the first precise, comprehensive theory for large power system dynamics modeled in any detail. The motivation for this project is to show that it can be used, practically, for analyzing a disturbance that actually occurred on a large system, which affected a sizable portion of the Midwest with supercritical Hopf type oscillations. This event is well documented and studied. The report first summarizes Taxonomy Theory with an engineering flavor. Various computational approaches are then cited and analyzed for their suitability for use with Taxonomy Theory. Working equations are developed for computing a segment of the feasibility boundary, which bounds the region of (operating) parameters throughout which the operating point can be moved without losing stability. Experimental software incorporating the large EPRI software package PSAPAC is then developed. After a summary of the events during the subject disturbance, numerous large scale computations, up to 7600 buses, are reported. These results are reduced into graphical and tabular forms, which are then analyzed and discussed. The report is divided into two volumes. This volume illustrates the use of Taxonomy Theory for computing the feasibility boundary and presents evidence that the event indeed led to a Hopf type oscillation on the system. Furthermore, it shows that Taxonomy Theory can indeed be used for practical computational work with very large systems. Volume 2, a separate volume, will show that the disturbance led to a supercritical (that is, stable oscillation) Hopf bifurcation.
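    As a toy illustration of detecting a Hopf-type crossing along a parameter segment (not the Taxonomy Theory machinery itself): for a planar system, a complex eigenvalue pair crosses the imaginary axis where the Jacobian's trace changes sign while its determinant stays positive. All names and the example system below are illustrative:

    ```python
    def jacobian_eigen_real(tr, det):
        """Largest real part of the eigenvalues of a 2x2 Jacobian, from its
        trace and determinant. For det > tr^2/4 the pair is complex with
        real part tr/2, the configuration relevant to Hopf bifurcations."""
        disc = tr * tr / 4.0 - det
        if disc < 0:
            return tr / 2.0                       # complex-conjugate pair
        return tr / 2.0 + disc ** 0.5             # real eigenvalues

    def hopf_crossing(trace_of_mu, mus):
        """Scan a parameter segment and report the first mu where the complex
        pair's real part (trace/2) crosses from negative to non-negative,
        i.e. where stability is lost in Hopf fashion (det assumed > tr^2/4)."""
        prev = trace_of_mu(mus[0]) / 2.0
        for mu in mus[1:]:
            cur = trace_of_mu(mu) / 2.0
            if prev < 0 <= cur:
                return mu
            prev = cur
        return None
    ```

    Computing where this crossing occurs as operating parameters vary is, in miniature, what tracing a Hopf-related segment of the feasibility boundary amounts to, except that the real problem involves Jacobians of thousands of state variables.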

  2. Atlas-Based Segmentation Improves Consistency and Decreases Time Required for Contouring Postoperative Endometrial Cancer Nodal Volumes

    SciTech Connect

    Young, Amy V.; Wortham, Angela; Wernick, Iddo; Evans, Andrew; Ennis, Ronald D.

    2011-03-01

    Purpose: Accurate target delineation of the nodal volumes is essential for three-dimensional conformal and intensity-modulated radiotherapy planning for endometrial cancer adjuvant therapy. We hypothesized that atlas-based segmentation ('autocontouring') would lead to time savings and more consistent contours among physicians. Methods and Materials: A reference anatomy atlas was constructed using the data from 15 postoperative endometrial cancer patients by contouring the pelvic nodal clinical target volume on the simulation computed tomography scan according to the Radiation Therapy Oncology Group 0418 trial using commercially available software. On the simulation computed tomography scans from 10 additional endometrial cancer patients, the nodal clinical target volume autocontours were generated. Three radiation oncologists corrected the autocontours and delineated the manual nodal contours under timed conditions while unaware of the other contours. The time difference was determined, and the overlap of the contours was calculated using Dice's coefficient. Results: For all physicians, manual contouring of the pelvic nodal target volumes and editing the autocontours required a mean ± standard deviation of 32 ± 9 vs. 23 ± 7 minutes, respectively (p = .000001), a 26% time savings. For each physician, the time required to delineate the manual contours vs. correcting the autocontours was 30 ± 3 vs. 21 ± 5 min (p = .003), 39 ± 12 vs. 30 ± 5 min (p = .055), and 29 ± 5 vs. 20 ± 5 min (p = .0002). The mean overlap increased from manual contouring (0.77) to correcting the autocontours (0.79; p = .038). 
Conclusion: The results of our study have shown that autocontouring leads to increased consistency and time savings when contouring the nodal target volumes for adjuvant treatment of endometrial cancer, although the autocontours still required careful editing to ensure that the lymph nodes at risk of recurrence are properly included in the target volume.

  3. Adolescents and alcohol: an explorative audience segmentation analysis

    PubMed Central

    2012-01-01

    Background So far, audience segmentation of adolescents with respect to alcohol has been carried out mainly on the basis of socio-demographic characteristics. In this study we examined whether it is possible to segment adolescents according to their values and attitudes towards alcohol to use as guidance for prevention programmes. Methods A random sample of 7,000 adolescents aged 12 to 18 was drawn from the Municipal Basic Administration (MBA) of 29 Local Authorities in the province North-Brabant in the Netherlands. By means of an online questionnaire data were gathered on values and attitudes towards alcohol, alcohol consumption and socio-demographic characteristics. Results We were able to distinguish a total of five segments on the basis of five attitude factors. Moreover, the five segments also differed in drinking behavior independently of socio-demographic variables. Conclusions Our investigation was a first step in the search for possibilities of segmenting by factors other than socio-demographic characteristics. Further research is necessary in order to understand these results for alcohol prevention policy in concrete terms. PMID:22950946
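    Segmenting respondents by attitude factors rather than demographics is, computationally, a clustering exercise. A minimal k-means sketch on two illustrative factor scores; the study's actual segmentation method is not specified here, and all data and names are hypothetical:

    ```python
    def k_means(points, k, iters=20, centers=None):
        """Lloyd's k-means on 2-D points (e.g., two attitude-factor scores)."""
        if centers is None:
            centers = points[:k]                  # first k points as initial centers
        clusters = [[] for _ in range(k)]
        for _ in range(iters):
            # assignment step: nearest center by squared Euclidean distance
            clusters = [[] for _ in range(k)]
            for p in points:
                j = min(range(k), key=lambda i: (p[0] - centers[i][0]) ** 2 +
                                                (p[1] - centers[i][1]) ** 2)
                clusters[j].append(p)
            # update step: move each center to its cluster mean
            centers = [
                (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
                if cl else centers[i]
                for i, cl in enumerate(clusters)
            ]
        return centers, clusters
    ```

    With k set to five, such a procedure would partition respondents into segments by attitude profile, after which drinking behavior can be compared across segments independently of socio-demographics.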

  4. Accurate segmentation for quantitative analysis of vascular trees in 3D micro-CT images

    NASA Astrophysics Data System (ADS)

    Riedel, Christian H.; Chuah, Siang C.; Zamir, Mair; Ritman, Erik L.

    2002-04-01

    Quantitative analysis of the branching geometry of multiple branching-order vascular trees from 3D micro-CT data requires an efficient segmentation algorithm that leads to a consistent, accurate representation of the tree structure. To explore different segmentation techniques, we use isotropic micro-CT images of intact rat coronary, pulmonary and hepatic opacified arterial trees with cubic voxel side length of 5-20 micrometer. We implemented an active topology adaptive surface model for segmentation and compared the results from this algorithm with segmentations of the same image data using conventional segmentation methods. Because of the modulation transfer function of the micro-CT scanner, thresholding and region growing techniques usually underestimate small, or overestimate large, vessel diameters depending on the chosen grayscale thresholds. Furthermore, these approaches lack the robustness needed to overcome the effects of typical imaging artifacts, such as image noise at the vessel surfaces, which tend to propagate errors in the analysis of the tree due to its hierarchical nature. Our adaptable surface models include local gray-scale statistics, object boundary and object size information in the segmentation algorithm, leading to higher stability and accuracy of the segmentation process.
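    A minimal grey-level region grower makes the abstract's point concrete: the extracted region depends directly on the chosen intensity tolerance, which is exactly how vessel diameters get under- or over-estimated near blurred boundaries. The implementation and names are illustrative:

    ```python
    def region_grow(image, seed, tol):
        """Grey-level region growing on a 2-D grid: accept 4-connected pixels
        whose intensity is within tol of the seed's intensity."""
        rows, cols = len(image), len(image[0])
        base = image[seed[0]][seed[1]]
        region = {seed}
        stack = [seed]
        while stack:
            r, c = stack.pop()
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and (nr, nc) not in region
                        and abs(image[nr][nc] - base) <= tol):
                    region.add((nr, nc))
                    stack.append((nr, nc))
        return region
    ```

    Raising `tol` leaks the region across the blurred vessel wall (overestimating diameter); lowering it clips the lumen (underestimating it), which is the instability the adaptive surface model above is designed to avoid.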

  5. Three-dimensional visualization of the craniofacial patient: volume segmentation, data integration and animation.

    PubMed

    Enciso, R; Memon, A; Mah, J

    2003-01-01

    The research goal at the Craniofacial Virtual Reality Laboratory of the School of Dentistry in conjunction with the Integrated Media Systems Center, School of Engineering, University of Southern California, is to develop computer methods to accurately visualize patients in three dimensions using advanced imaging and data acquisition devices such as cone-beam computerized tomography (CT) and mandibular motion capture. Data from these devices were integrated for three-dimensional (3D) patient-specific visualization, modeling and animation. Generic methods are in development that can be used with common CT image format (DICOM), mesh format (STL) and motion data (3D position over time). This paper presents preliminary descriptive studies on: 1) segmentation of the lower and upper jaws with two types of CT data--(a) traditional whole head CT data and (b) the new dental Newtom CT; 2) manual integration of accurate 3D tooth crowns with the segmented lower jaw 3D model; 3) realistic patient-specific 3D animation of the lower jaw. PMID:14606537

  6. Analysis and comparison of space/spatial-frequency and multiscale methods for texture segmentation

    NASA Astrophysics Data System (ADS)

    Zhu, Yue Min; Goutte, Robert

    1995-01-01

    We investigate the use of space/spatial-frequency and multiscale analysis methods for texture segmentation, with emphasis on the 2D Wigner-Ville distribution and the Morlet wavelet transform. For these two methods, the discrete versions necessary for numerical implementation are discussed. Texture segmentation paradigms making use of local spectral measurements from these two types of representations are described. The usefulness of the proposed spatial-frequency- and scale-based segmentation methods is illustrated with examples on both synthesized and natural images, and their segmentation performance is analyzed and compared.
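    A minimal sketch of the idea, assuming a real-valued Gabor filter as a stand-in for the 2D Morlet wavelet and a simple per-pixel energy comparison in place of the paper's discrete implementation:

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def gabor_kernel(freq, theta=0.0, sigma=3.0, size=15):
    """Real-valued Gabor filter: a practical stand-in for a 2D Morlet wavelet."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

# Synthetic two-texture image: coarse stripes on the left, fine on the right.
cols = np.arange(64)
row = np.where(cols < 32, np.sin(2 * np.pi * cols / 16), np.sin(2 * np.pi * cols / 4))
img = np.tile(row, (64, 1))

# Local spectral energy for each filter in the bank.
feats = [uniform_filter(convolve(img, gabor_kernel(f), mode="reflect") ** 2, size=9)
         for f in (1 / 16, 1 / 4)]

# Assign each pixel to the filter with the strongest local response.
seg = np.argmax(np.stack(feats), axis=0)   # 0: coarse texture, 1: fine texture
```

Pixels are labelled purely by their local spectral signature, which is the core of both the Wigner-Ville and wavelet paradigms described above.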

  7. Investigation into the use of market segmentation analysis in transportation energy planning

    SciTech Connect

    Trombly, J.W.

    1985-01-01

    This research explores the application of market-segmentation analysis in transportation energy planning. The study builds on the concepts of market segmentation developed in the marketing literature to suggest a strategy of segmentation analysis for use in transportation planning. Results of the two statewide telephone surveys conducted in 1979 and 1980 for the New York State Department of Transportation are used as the data base for identifying target segments. Subjects in these surveys were asked to indicate which of 18 energy conservation actions had been implemented over the prior year to conserve gasoline. These responses serve as the basis for segmentation. Two alternative methods are pursued in identifying target market segments for purposes of transportation energy planning. The first approach consists of the application of conventional multivariate analysis procedures. The second method exploits the principles of latent trait or modern test theory. Results of the conventional analysis suggest that the data collected can be divided into eight segments. Results of the application of latent trait theory identify three market segments. Results of this study may be used to design future responses to energy shortages in addition to suggesting strategies to be pursued in measuring consumer response.
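    As a sketch of the conventional multivariate route, hypothetical 0/1 responses to 18 conservation actions can be grouped with a plain k-means (Lloyd's algorithm with farthest-first seeding; the segment structure below is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical adoption probabilities for 18 conservation actions in three
# invented latent segments (trip-reduction, mode-shift, vehicle-efficiency).
p = np.array([[0.9] * 6 + [0.1] * 12,
              [0.1] * 6 + [0.9] * 6 + [0.1] * 6,
              [0.1] * 12 + [0.9] * 6])
truth = rng.integers(0, 3, size=300)                 # latent segment per respondent
X = (rng.random((300, 18)) < p[truth]).astype(float) # observed 0/1 responses

def kmeans(X, k, iters=25):
    """Lloyd's algorithm with deterministic farthest-first initialization."""
    idx = [0]
    for _ in range(k - 1):
        d = np.min(((X[:, None] - X[idx][None]) ** 2).sum(-1), axis=1)
        idx.append(int(d.argmax()))                  # farthest point so far
    centers = X[idx].copy()
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

labels = kmeans(X, 3)
```

The recovered clusters correspond to the planted segments; latent trait methods, by contrast, would model the response probabilities themselves.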

  8. Infant Word Segmentation and Childhood Vocabulary Development: A Longitudinal Analysis

    ERIC Educational Resources Information Center

    Singh, Leher; Reznick, J. Steven; Xuehua, Liang

    2012-01-01

    Infants begin to segment novel words from speech by 7.5 months, demonstrating an ability to track, encode and retrieve words in the context of larger units. Although it is presumed that word recognition at this stage is a prerequisite to constructing a vocabulary, the continuity between these stages of development has not yet been empirically…

  9. Health Lifestyles: Audience Segmentation Analysis for Public Health Interventions.

    ERIC Educational Resources Information Center

    Slater, Michael D.; Flora, June A.

    This paper is concerned with the application of market research techniques to segment large populations into homogeneous units in order to improve the reach, utilization, and effectiveness of health programs. The paper identifies seven distinctive patterns of health attitudes, social influences, and behaviors using cluster analytic techniques in a…

  10. Segmentation-based and rule-based spectral mixture analysis for estimating urban imperviousness

    NASA Astrophysics Data System (ADS)

    Li, Miao; Zang, Shuying; Wu, Changshan; Deng, Yingbin

    2015-03-01

    For detailed estimation of urban imperviousness, numerous image processing methods have been developed, and applied to different urban areas with some success. Most of these methods, however, are global techniques. That is, they have been applied to the entire study area without considering spatial and contextual variations. To address this problem, this paper explores whether two spatio-contextual analysis techniques, namely segmentation-based and rule-based analysis, can improve urban imperviousness estimation. These two spatio-contextual techniques were incorporated into a classic urban imperviousness estimation technique, the fully-constrained linear spectral mixture analysis (FCLSMA) method. In particular, image segmentation was applied to divide the image into homogeneous segments, and spatially varying endmembers were chosen for each segment. Then an FCLSMA was applied for each segment to estimate the pixel-wise fractional coverage of high-albedo material, low-albedo material, vegetation, and soil. Finally, a rule-based analysis was carried out to estimate the percent impervious surface area (%ISA). The developed technique was applied to a Landsat TM image acquired over the Milwaukee River Watershed, an urbanized watershed in Wisconsin, United States. Results indicate that the developed segmentation-based and rule-based LSMA (S-R-LSMA) outperforms traditional SMA techniques, with a mean average error (MAE) of 5.44% and R2 of 0.88. Further, a comparative analysis shows that, when compared to segmentation, rule-based analysis plays a more essential role in improving the estimation accuracy.
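    The fully-constrained unmixing step can be sketched as a small constrained least-squares problem (the endmember spectra below are invented for illustration, not taken from the paper):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical endmember spectra (rows: bands; columns: high-albedo,
# low-albedo, vegetation, soil) -- illustrative values only.
E = np.array([[0.60, 0.10, 0.04, 0.20],
              [0.55, 0.04, 0.08, 0.22],
              [0.50, 0.08, 0.30, 0.28],
              [0.45, 0.02, 0.45, 0.33],
              [0.40, 0.09, 0.35, 0.42],
              [0.35, 0.03, 0.20, 0.48]])

def fclsma(pixel, E):
    """Fully-constrained LSMA: fractions non-negative and summing to one."""
    k = E.shape[1]
    res = minimize(
        lambda f: np.sum((E @ f - pixel) ** 2),   # spectral reconstruction error
        x0=np.full(k, 1.0 / k),
        bounds=[(0.0, 1.0)] * k,
        constraints=[{"type": "eq", "fun": lambda f: f.sum() - 1.0}],
        method="SLSQP",
    )
    return res.x

# A pixel mixed from 30% high-albedo, 50% vegetation, 20% soil.
f_true = np.array([0.3, 0.0, 0.5, 0.2])
f_est = fclsma(E @ f_true, E)
```

The rule-based stage would then map these fractions (e.g. high- plus low-albedo shares) to %ISA.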

  11. Segmentation, Recognition and Tracing Analysis for High-Content Cell-Cycle Screening

    NASA Astrophysics Data System (ADS)

    Yu, Donggang; Pham, Tuan D.; Zhou, Xiaobo; Wong, Stephen T. C.

    2007-11-01

    We present in this paper new and efficient algorithms for segmentation, recognition and tracing analysis of cell phases for high-content screening. The conceptual frameworks are based on the morphological structures of cells, where a series of morphological structural points are established. Furthermore, we address the issue of touching cells and propose morphological techniques for cell separation, reconstruction and tracing analysis. The new segmentation method resolves the problem of over-segmentation. The tracing analysis of cell phases is based on cell shape, geometrical features and difference information of corresponding neighboring frames. Experimental results demonstrate the efficiency of the new method.
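    The touching-cell problem lends itself to a small distance-transform sketch (synthetic overlapping disks; this is a generic separation trick, not the authors' morphological-point method):

```python
import numpy as np
from scipy import ndimage

# Synthetic binary image: two touching "cells" (overlapping disks).
yy, xx = np.mgrid[0:60, 0:100]
mask = ((yy - 30) ** 2 + (xx - 35) ** 2 <= 15 ** 2) | \
       ((yy - 30) ** 2 + (xx - 60) ** 2 <= 15 ** 2)

# Plain connected-component labelling sees a single blob.
_, n_naive = ndimage.label(mask)

# Keeping only pixels deep inside a cell body splits the blob at its waist.
dist = ndimage.distance_transform_edt(mask)
_, n_cells = ndimage.label(dist > 0.7 * dist.max())
```

The surviving cores can then seed a marker-based separation of the original blob.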

  12. Kinematic analysis of musculoskeletal structures via volumetric MRI and unsupervised segmentation

    NASA Astrophysics Data System (ADS)

    Tamez-Pena, Jose G.; Totterman, Saara; Parker, Kevin J.

    1999-05-01

    In this work we present a comprehensive approach for the kinematic analysis of musculoskeletal structures based on 4D MRI data sets and unsupervised segmentation. We applied this approach to the kinematic analysis of knee flexion. The unsupervised segmentation algorithm automatically detects the number of spatially independent structures present in the medical image. The motion tracking algorithm propagates the segmentation of all the structures simultaneously, which allows automatic segmentation and tracking of the soft tissue and bone structures of the knee in a series of volumetric images. Our approach requires a minimum of interactivity from the user, eliminating the need for exhaustive tracings and editing of image data. This segmentation approach allowed us to visualize and analyze 3D knee flexion and the local kinematics of the meniscus.

  13. Gross and Segmental Motion Analysis in Dynamic Cardiac Imagery*

    PubMed Central

    Tsotsos, J.; Covvey, H.D.; Mylopoulos, J.; Wigle, E.D.

    1978-01-01

    A knowledge base driven image recognition and description system for the purpose of analyzing cardiac images is under development. Algorithms have been developed to recognize and follow ventricular wall shape features. Toward the goal of summarizing or abstracting wall behaviour, a formalism has been developed for the characterization of cardiac events and the description of segmental motion. This is a progress report on our activities.

  14. Automated compromised right lung segmentation method using a robust atlas-based active volume model with sparse shape composition prior in CT.

    PubMed

    Zhou, Jinghao; Yan, Zhennan; Lasio, Giovanni; Huang, Junzhou; Zhang, Baoshe; Sharma, Navesh; Prado, Karl; D'Souza, Warren

    2015-12-01

    To resolve challenges in image segmentation in oncologic patients with severely compromised lung, we propose an automated right lung segmentation framework that uses a robust, atlas-based active volume model with a sparse shape composition prior. The robust atlas is achieved by combining the atlas with the output of sparse shape composition. Thoracic computed tomography images (n=38) from patients with lung tumors were collected. The right lung in each scan was manually segmented to build a reference training dataset against which the performance of the automated segmentation method was assessed. The quantitative results of this proposed segmentation method with sparse shape composition achieved mean Dice similarity coefficient (DSC) of (0.72, 0.81) with 95% CI, mean accuracy (ACC) of (0.97, 0.98) with 95% CI, and mean relative error (RE) of (0.46, 0.74) with 95% CI. Both qualitative and quantitative comparisons suggest that this proposed method can achieve better segmentation accuracy with less variance than other atlas-based segmentation methods in the compromised lung segmentation. PMID:26256737
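    For reference, the Dice similarity coefficient reported above is a simple overlap measure; a minimal sketch on toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 1D "masks": reference segmentation vs. an automated result.
ref = np.array([0, 1, 1, 1, 1, 0, 0, 0])
auto = np.array([0, 0, 1, 1, 1, 1, 0, 0])
# intersection 3, sizes 4 and 4, so DSC = 2*3/(4+4) = 0.75
```

DSC of 1 means perfect overlap; the paper's (0.72, 0.81) interval reflects how hard compromised-lung boundaries are.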

  15. Segmentation and Classification of Remotely Sensed Images: Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Syed, Abdul Haleem

    Land-use-and-land-cover (LULC) mapping is crucial in precision agriculture, environmental monitoring, disaster response, and military applications. The demand for improved and more accurate LULC maps has led to the emergence of a key methodology known as Geographic Object-Based Image Analysis (GEOBIA). The core idea of the GEOBIA for an object-based classification system (OBC) is to change the unit of analysis from single-pixels to groups-of-pixels called `objects' through segmentation. While this new paradigm solved problems and improved global accuracy, it also raised new challenges such as the loss of accuracy in categories that are less abundant, but potentially important. Although this trade-off may be acceptable in some domains, the consequences of such an accuracy loss could be potentially fatal in others (for instance, landmine detection). This thesis proposes a method to improve OBC performance by eliminating such accuracy losses. Specifically, we examine the two key players of an OBC system: Hierarchical Segmentation and Supervised Classification. Further, we propose a model to understand the source of accuracy errors in minority categories and provide a method called Scale Fusion to eliminate those errors. This proposed fusion method involves two stages. First, the characteristic scale for each category is estimated through a combination of segmentation and supervised classification. Next, these estimated scales (segmentation maps) are fused into one combined-object-map. Classification performance is evaluated by comparing results of the multi-cut-and-fuse approach (proposed) to the traditional single-cut (SC) scale selection strategy. Testing on four different data sets revealed that our proposed algorithm improves accuracy on minority classes while performing just as well on abundant categories. 
Another obstacle presented by today's remotely sensed images is the volume of information produced by modern sensors with high spatial and temporal resolution. For instance, over this decade, 353 earth observation satellites from 41 countries are projected to be launched. Timely production of geo-spatial information from these large volumes is a challenge. This is because in traditional methods the underlying representation and information processing is still primarily pixel-based, which implies that as the number of pixels increases, so does the computational complexity. To overcome this bottleneck created by pixel-based representation, this thesis proposes a dart-based discrete topological representation (DBTR), which differs from pixel-based methods in its use of a reduced, boundary-based representation. Intuitively, the efficiency gains arise from the observation that it is cheaper to represent a region by its boundary (darts) than by its area (pixels). We found that our implementation of DBTR not only improved computational efficiency, but also enhanced our ability to encode and extract spatial information. Overall, this thesis presents solutions to two problems of an object-based classification system: accuracy and efficiency. Our proposed Scale Fusion method demonstrated improvements in accuracy, while our dart-based topological representation (DBTR) showed improved efficiency in the extraction and encoding of spatial information.
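The boundary-versus-area intuition behind DBTR can be checked with a toy region (this sketch uses plain pixel counts, not the dart structure itself):

```python
import numpy as np
from scipy import ndimage

# A region of radius r has ~pi*r^2 area pixels but only ~2*pi*r boundary pixels.
yy, xx = np.mgrid[0:200, 0:200]
region = (yy - 100) ** 2 + (xx - 100) ** 2 <= 80 ** 2

interior = ndimage.binary_erosion(region)
boundary = region & ~interior

area_cost = region.sum()        # pixels needed for an area representation
boundary_cost = boundary.sum()  # pixels needed for a boundary representation
```

The gap widens linearly with region size, which is the source of the efficiency gains claimed above.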

  16. Computed Tomographic Image Analysis Based on FEM Performance Comparison of Segmentation on Knee Joint Reconstruction

    PubMed Central

    Jang, Seong-Wook; Seo, Young-Jin; Yoo, Yon-Sik

    2014-01-01

    The demand for an accurate and accessible image segmentation to generate 3D models from CT scan data has been increasing as such models are required in many areas of orthopedics. In this paper, to find the optimal image segmentation to create a 3D model of the knee CT data, we compared and validated segmentation algorithms based on both objective comparisons and finite element (FE) analysis. For comparison purposes, we used 1 model reconstructed in accordance with the instructions of a clinical professional and 3 models reconstructed using image processing algorithms (Sobel operator, Laplacian of Gaussian operator, and Canny edge detection). Comparison was performed by inspecting intermodel morphological deviations with the iterative closest point (ICP) algorithm, and FE analysis was performed to examine the effects of the segmentation algorithm on the results of the knee joint movement analysis. PMID:25538950

  17. The curve integration method is comparable to manual segmentation for the analysis of bone/scaffold composites using micro-CT.

    PubMed

    Hilldore, Amanda J; Morgan, Abby W; Woodard, Joseph R; Wagoner Johnson, Amy J

    2009-01-01

    Microcomputed tomography (micro-CT) is becoming a more common imaging technique in tissue engineering and has been used to characterize scaffold pore size, pore fraction, and bone ingrowth, among other characteristics. Despite the increasingly widespread use, no standards exist for segmenting images. Manual segmentation, a common segmentation method, is subjective, time consuming, and has been shown to be inaccurate and unreliable. The curve integration method was previously introduced as a method to accurately calculate the volume fraction of constituents in bone scaffolds from micro-CT data. In this article, the curve integration method is compared to manual image segmentation in order to validate the former method. Three cases are presented from two in vivo bone regeneration studies that include cross-sections from a rabbit calvarial defect used to study drug delivery, and cross-sections and small volumes of hydroxyapatite scaffold-bone composites from a porcine intramuscular study. The analysis shows that the curve integration method models the data accurately and can be used to calculate volume fractions of the materials in the sample. Furthermore, the curve integration method is faster and less labor intensive than manual image segmentation. PMID:18683226
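    A hedged sketch of the idea behind the curve integration method: fit the gray-level histogram with one Gaussian per constituent and integrate each fitted peak (the histogram below is synthetic and noiseless; peak positions and amplitudes are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, m1, s1, a2, m2, s2):
    g = lambda a, m, s: a * np.exp(-0.5 * ((x - m) / s) ** 2)
    return g(a1, m1, s1) + g(a2, m2, s2)

# Synthetic gray-value histogram of a two-constituent sample
# (e.g. scaffold and bone): two overlapping Gaussian peaks.
x = np.arange(256, dtype=float)
hist = two_gaussians(x, 40, 80, 12, 25, 170, 18)

popt, _ = curve_fit(two_gaussians, x, hist, p0=(30, 70, 10, 30, 160, 15))

# Each constituent's contribution is the area under its fitted curve.
area = lambda a, s: a * abs(s) * np.sqrt(2.0 * np.pi)
frac1 = area(popt[0], popt[2]) / (area(popt[0], popt[2]) + area(popt[3], popt[5]))
```

Because the fractions come from integrated curves rather than a hard gray-level cut, no subjective threshold is needed.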

  18. A Two-Step Segmentation Method for Breast Ultrasound Masses Based on Multi-resolution Analysis.

    PubMed

    Rodrigues, Rafael; Braz, Rui; Pereira, Manuela; Moutinho, José; Pinheiro, Antonio M G

    2015-06-01

    Breast ultrasound images have several attractive properties that make them an interesting tool in breast cancer detection. However, their intrinsic high noise rate and low contrast turn mass detection and segmentation into a challenging task. In this article, a fully automated two-stage breast mass segmentation approach is proposed. In the initial stage, ultrasound images are segmented using support vector machine or discriminant analysis pixel classification with a multiresolution pixel descriptor. The features are extracted using non-linear diffusion, bandpass filtering and scale-variant mean curvature measures. A set of heuristic rules complement the initial segmentation stage, selecting the region of interest in a fully automated manner. In the second segmentation stage, refined segmentation of the area retrieved in the first stage is attempted, using two different techniques. The AdaBoost algorithm uses a descriptor based on scale-variant curvature measures and non-linear diffusion of the original image at lower scales, to improve the spatial accuracy of the ROI. Active contours use the segmentation results from the first stage as initial contours. Results for both proposed segmentation paths were promising, with normalized Dice similarity coefficients of 0.824 for AdaBoost and 0.813 for active contours. Recall rates were 79.6% for AdaBoost and 77.8% for active contours, whereas the precision rate was 89.3% for both methods. PMID:25736608

  19. Combined texture feature analysis of segmentation and classification of benign and malignant tumour CT slices.

    PubMed

    Padma, A; Sukanesh, R

    2013-01-01

    A computer software system is designed for the segmentation and classification of benign versus malignant tumour slices in brain computed tomography (CT) images. This paper presents a method to find and select both the dominant run-length and co-occurrence texture features of the region of interest (ROI) of the tumour region of each slice, segmented by Fuzzy c-means clustering (FCM), and to evaluate the performance of support vector machine (SVM)-based classifiers in classifying benign and malignant tumour slices. Two hundred and six tumour-confirmed CT slices are considered in this study. A total of 17 texture features are extracted by a feature extraction procedure, and six features are selected using Principal Component Analysis (PCA). This study constructed the SVM-based classifier with the selected features and compared the segmentation results with the experienced radiologist's labelled ground truth (target). Quantitative analysis between ground truth and segmented tumour is presented in terms of segmentation accuracy, segmentation error and overlap similarity measures such as the Jaccard index. The classification performance of the SVM-based classifier with the same selected features is also evaluated using a 10-fold cross-validation method. The results show that some newly found texture features make an important contribution to classifying benign and malignant tumour slices efficiently and accurately with less computational time. The experimental results showed that the proposed system is able to achieve high segmentation and classification effectiveness, as measured by the Jaccard index, sensitivity and specificity. PMID:23094909
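    The PCA feature-reduction step (17 features down to 6) can be sketched as follows; the synthetic 206 × 17 feature matrix is driven by a few latent factors, and all numbers are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical feature matrix: 206 slices x 17 texture features, where a few
# latent factors drive most of the variance (as texture features often do).
latent = rng.normal(size=(206, 3))
mixing = rng.normal(size=(3, 17))
X = latent @ mixing + 0.05 * rng.normal(size=(206, 17))

def pca(X, k):
    """Project standardized features onto the top-k principal components."""
    Xc = (X - X.mean(0)) / X.std(0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = S**2 / np.sum(S**2)          # variance ratio per component
    return Xc @ Vt[:k].T, explained

scores, explained = pca(X, 6)
```

The reduced `scores` matrix would then feed the SVM classifier in place of the raw 17 features.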

  20. Landmine detection using IR image segmentation by means of fractal dimension analysis

    NASA Astrophysics Data System (ADS)

    Abbate, Horacio A.; Gambini, Juliana; Delrieux, Claudio; Castro, Eduardo H.

    2009-05-01

    This work is concerned with the detection of buried landmines from long-wave infrared images obtained during the heating or cooling of the soil, followed by a segmentation of the images. The segmentation is performed by means of local fractal dimension (LFD) analysis as a feature descriptor. We use two different LFD estimators: box-counting dimension (BC) and differential box-counting dimension (DBC). These features are computed on a per-pixel basis, and the set of features is clustered by means of the K-means method. This segmentation technique produces outstanding results with low computational cost.
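    A minimal box-counting estimator of the kind used as an LFD feature (whole-image version; the paper computes it per pixel over local windows):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate fractal dimension from occupied-box counts at several scales."""
    counts = []
    h, w = mask.shape
    for s in sizes:
        # Partition into s x s boxes; count boxes containing any set pixel.
        boxes = mask[: h - h % s, : w - w % s].reshape(h // s, s, w // s, s)
        counts.append(np.any(boxes, axis=(1, 3)).sum())
    # Dimension = slope of log(count) versus log(1 / box size).
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

filled = np.ones((64, 64), dtype=bool)   # a filled square: dimension 2
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True                       # a straight line: dimension 1
```

Natural textures fall between these extremes, which is what makes the estimate discriminative for soil versus mine signatures.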

  1. Segmentation of Moving Objects by Long Term Video Analysis.

    PubMed

    Ochs, Peter; Malik, Jitendra; Brox, Thomas

    2014-06-01

    Motion is a strong cue for unsupervised object-level grouping. In this paper, we demonstrate that motion is exploited most effectively when it is considered over larger time windows. As opposed to classical two-frame optical flow, point trajectories that span hundreds of frames are less susceptible to short-term variations that hinder separating different objects. As a positive side effect, the resulting groupings are temporally consistent over a whole video shot, a property that requires tedious post-processing in the vast majority of existing approaches. We suggest working with a paradigm that starts with semi-dense motion cues first and then fills up textureless areas based on color. This paper also contributes the Freiburg-Berkeley motion segmentation (FBMS) dataset, a large, heterogeneous benchmark with 59 sequences and pixel-accurate ground truth annotation of moving objects. PMID:26353280

  2. Segmentation of Moving Objects by Long Term Video Analysis.

    PubMed

    Ochs, Peter; Malik, Jitendra; Brox, Thomas

    2013-12-11

    Motion is a strong cue for unsupervised object-level grouping. In this paper, we demonstrate that motion is exploited most effectively when it is considered over larger time windows. As opposed to classical two-frame optical flow, point trajectories that span hundreds of frames are less susceptible to short-term variations that hinder separating different objects. As a positive side effect, the resulting groupings are temporally consistent over a whole video shot, a property that requires tedious post-processing in the vast majority of existing approaches. We suggest working with a paradigm that starts with semi-dense motion cues first and then fills up textureless areas based on color. This paper also contributes the Freiburg-Berkeley motion segmentation (FBMS) dataset, a large, heterogeneous benchmark with 59 sequences and pixel-accurate ground truth annotation of moving objects. PMID:24344074

  3. Theoretical analysis and experimental verification on valve-less piezoelectric pump with hemisphere-segment bluff-body

    NASA Astrophysics Data System (ADS)

    Ji, Jing; Zhang, Jianhui; Xia, Qixiao; Wang, Shouyin; Huang, Jun; Zhao, Chunsheng

    2014-05-01

    Existing research on no-moving-part valves in valve-less piezoelectric pumps mainly concentrates on pipeline valves and chamber-bottom valves, which leads to a complex structure and manufacturing process for the pump channel and chamber bottom. Furthermore, valves whose positions are fixed with respect to the inlet and outlet also worsen the adjustability and controllability of the flow rate. In order to overcome these shortcomings, this paper puts forward a novel implantable structure of a valve-less piezoelectric pump with hemisphere-segments in the pump chamber. Based on the theory of flow around a bluff body, the flow resistance on the spherical and round surfaces of a hemisphere-segment differs when fluid flows through, and the macroscopic flow resistance differences thus formed are also different. A novel valve-less piezoelectric pump with a hemisphere-segment bluff-body (HSBB) is presented and designed; the HSBB acts as a no-moving-part valve. By the method of volume and momentum comparison, the stress on the bluff body in the pump chamber is analyzed, the essential reason for unidirectional fluid pumping is expounded, and the flow rate formula is obtained. To verify the theory, a prototype was produced. Using the prototype, experimental research on the relationship between flow rate, pressure difference, voltage, and frequency has been carried out, which confirms the above theory. This prototype has six hemisphere-segments in the chamber, which is filled with water, and the effective diameter of the piezoelectric bimorph is 30 mm. The experimental results show that the flow rate can reach 0.50 mL/s at a frequency of 6 Hz and a voltage of 110 V, and the pressure difference can reach 26.2 mm H2O at a frequency of 6 Hz and a voltage of 160 V. This research proposes a valve-less piezoelectric pump with a hemisphere-segment bluff-body, and its validity and feasibility are verified through theoretical analysis and experiment.

  4. Analysis of radially cracked ring segments subject to forces and couples

    NASA Technical Reports Server (NTRS)

    Gross, B.; Srawley, J. E.

    1977-01-01

    Results of planar boundary collocation analysis are given for ring segment (C-shaped) specimens with radial cracks, subjected to combined forces and couples. Mode I stress intensity factors and crack mouth opening displacements were determined for ratios of outer to inner radius in the range 1.1 to 2.5 and ratios of crack length to segment width in the range 0.1 to 0.8.

  5. Analysis of radially cracked ring segments subject to forces and couples

    NASA Technical Reports Server (NTRS)

    Gross, B.; Srawley, J. E.

    1975-01-01

    Results of planar boundary collocation analysis are given for ring segment (C shaped) specimens with radial cracks, subjected to combined forces and couples. Mode I stress intensity factors and crack mouth opening displacements were determined for ratios of outer to inner radius in the range 1.1 to 2.5, and ratios of crack length to segment width in the range 0.1 to 0.8.

  6. Segmental Musculoskeletal Examinations using Dual-Energy X-Ray Absorptiometry (DXA): Positioning and Analysis Considerations

    PubMed Central

    Hart, Nicolas H.; Nimphius, Sophia; Spiteri, Tania; Cochrane, Jodie L.; Newton, Robert U.

    2015-01-01

    Musculoskeletal examinations provide informative and valuable quantitative insight into muscle and bone health. DXA is one mainstream tool used to accurately and reliably determine body composition components and bone mass characteristics in-vivo. Presently, whole-body scan models separate the body into axial and appendicular regions; however, there is a need for localised appendicular segmentation models to further examine regions of interest within the upper and lower extremities. Similarly, inconsistencies pertaining to patient positioning exist in the literature, which influence measurement precision and analysis outcomes, highlighting a need for a standardised procedure. This paper provides: 1) standardised and reproducible positioning and analysis procedures using DXA; and 2) reliable segmental examinations through descriptive appendicular boundaries. Whole-body scans were performed on forty-six (n = 46) football athletes (age: 22.9 ± 4.3 yrs; height: 1.85 ± 0.07 m; weight: 87.4 ± 10.3 kg; body fat: 11.4 ± 4.5 %) using DXA. All segments across all scans were analysed three times by the main investigator on three separate days, and by three independent investigators a week following the original analysis. To examine intra-rater and inter-rater reliability between days and researchers, coefficients of variation (CV) and intraclass correlation coefficients (ICC) were determined. Positioning and segmental analysis procedures presented in this study produced very high, nearly perfect intra-tester (CV ≤ 2.0%; ICC ≥ 0.988) and inter-tester (CV ≤ 2.4%; ICC ≥ 0.980) reliability, demonstrating excellent reproducibility within and between practitioners. Standardised examinations of axial and appendicular segments are necessary. Future studies aiming to quantify and report segmental analyses of the upper- and lower-body musculoskeletal properties using whole-body DXA scans are encouraged to use the patient positioning and image analysis procedures outlined in this paper.
Key points Musculoskeletal examinations using DXA technology require highly standardised and reproducible patient positioning and image analysis procedures to accurately measure and monitor axial, appendicular and segmental regions of interest. Internal rotation and fixation of the lower-limbs is strongly recommended during whole-body DXA scans to prevent undesired movement, improve frontal mass accessibility and enhance ankle joint visibility during scan performance and analysis. Appendicular segmental analyses using whole-body DXA scans are highly reliable for all regional upper-body and lower-body segmentations, with hard-tissue (CV ≤ 1.5%; R ≥ 0.990) achieving greater reliability and lower error than soft-tissue (CV ≤ 2.4%; R ≥ 0.980) masses when using our appendicular segmental boundaries. PMID:26336349
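The two reliability statistics used above can be computed directly; a small sketch with invented repeated measurements (the ICC form chosen here is ICC(2,1), two-way random effects, absolute agreement, single rater; the study's exact model may differ):

```python
import numpy as np

def cv_percent(values):
    """Coefficient of variation (%) across repeated measurements."""
    return 100.0 * np.std(values, ddof=1) / np.mean(values)

def icc_2_1(Y):
    """ICC(2,1) from the two-way ANOVA mean squares. Y: (subjects, raters)."""
    n, k = Y.shape
    grand = Y.mean()
    msr = k * np.sum((Y.mean(1) - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((Y.mean(0) - grand) ** 2) / (k - 1)   # between raters
    sse = np.sum((Y - grand) ** 2) - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))                        # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical repeated lean-mass readings (kg): 5 athletes x 3 analyses.
Y = np.array([[61.2, 61.5, 61.1],
              [70.3, 70.1, 70.4],
              [55.8, 56.0, 55.7],
              [66.5, 66.4, 66.8],
              [74.9, 75.2, 75.0]])
icc = icc_2_1(Y)
cvs = [cv_percent(row) for row in Y]
```

Small within-subject spread relative to between-subject spread is exactly what drives the near-perfect ICC values reported above.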

  7. Segmental Musculoskeletal Examinations using Dual-Energy X-Ray Absorptiometry (DXA): Positioning and Analysis Considerations.

    PubMed

    Hart, Nicolas H; Nimphius, Sophia; Spiteri, Tania; Cochrane, Jodie L; Newton, Robert U

    2015-09-01

    Musculoskeletal examinations provide informative and valuable quantitative insight into muscle and bone health. DXA is one mainstream tool used to accurately and reliably determine body composition components and bone mass characteristics in-vivo. Presently, whole-body scan models separate the body into axial and appendicular regions; however, there is a need for localised appendicular segmentation models to further examine regions of interest within the upper and lower extremities. Similarly, inconsistencies pertaining to patient positioning exist in the literature, which influence measurement precision and analysis outcomes, highlighting a need for a standardised procedure. This paper provides: 1) standardised and reproducible positioning and analysis procedures using DXA; and 2) reliable segmental examinations through descriptive appendicular boundaries. Whole-body scans were performed on forty-six (n = 46) football athletes (age: 22.9 ± 4.3 yrs; height: 1.85 ± 0.07 m; weight: 87.4 ± 10.3 kg; body fat: 11.4 ± 4.5 %) using DXA. All segments across all scans were analysed three times by the main investigator on three separate days, and by three independent investigators a week following the original analysis. To examine intra-rater and inter-rater reliability between days and researchers, coefficients of variation (CV) and intraclass correlation coefficients (ICC) were determined. Positioning and segmental analysis procedures presented in this study produced very high, nearly perfect intra-tester (CV ≤ 2.0%; ICC ≥ 0.988) and inter-tester (CV ≤ 2.4%; ICC ≥ 0.980) reliability, demonstrating excellent reproducibility within and between practitioners. Standardised examinations of axial and appendicular segments are necessary. Future studies aiming to quantify and report segmental analyses of the upper- and lower-body musculoskeletal properties using whole-body DXA scans are encouraged to use the patient positioning and image analysis procedures outlined in this paper.
Key points Musculoskeletal examinations using DXA technology require highly standardised and reproducible patient positioning and image analysis procedures to accurately measure and monitor axial, appendicular and segmental regions of interest. Internal rotation and fixation of the lower-limbs is strongly recommended during whole-body DXA scans to prevent undesired movement, improve frontal mass accessibility and enhance ankle joint visibility during scan performance and analysis. Appendicular segmental analyses using whole-body DXA scans are highly reliable for all regional upper-body and lower-body segmentations, with hard-tissue (CV ≤ 1.5%; R ≥ 0.990) achieving greater reliability and lower error than soft-tissue (CV ≤ 2.4%; R ≥ 0.980) masses when using our appendicular segmental boundaries. PMID:26336349

  8. Decomposition analysis of differential dose volume histograms

    SciTech Connect

    Heuvel, Frank van den

    2006-02-15

    Dose volume histograms are a common tool to assess the value of a treatment plan for various forms of radiation therapy treatment. The purpose of this work is to introduce, validate, and apply a set of tools to analyze differential dose volume histograms by decomposing them into physically and clinically meaningful normal distributions. A weighted sum of the decomposed normal distributions (e.g., weighted dose) is proposed as a new measure of target dose, rather than the more unstable point dose. The method and its theory are presented and validated using simulated distributions. Additional validation is performed by analyzing simple four-field box techniques encompassing a predefined target, using different treatment energies inside a water phantom. Furthermore, two clinical situations are analyzed using this methodology to illustrate practical usefulness. A treatment plan for a breast patient using a tangential field setup with wedges is compared to a comparable geometry using dose compensators. Finally, a normal tissue complication probability (NTCP) calculation is refined using this decomposition. The NTCP calculation is performed on the liver as organ at risk in the treatment of a mesothelioma patient with involvement of the right lung. The comparison of the wedged breast treatment versus the compensator technique yields comparable classical dose parameters (e.g., conformity index ≈ 1 and equal dose at the ICRU dose point). The methodology proposed here shows a 4% difference in weighted dose, capturing the difference between treatments with a single parameter instead of at least two in a classical analysis (e.g., mean dose and maximal dose, or total dose variance). NTCP calculations for the mesothelioma case are generated automatically and show a 3% decrease with respect to the classical calculation. The decrease is slightly dependent on the fractionation and on the α/β value utilized. 
In conclusion, this method is able to distinguish clinically important differences between treatment plans using a single parameter. This methodology shows promise as an objective tool for analyzing NTCP and doses in larger studies, as the only information needed is the dose volume histogram.
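The decomposition idea above can be sketched numerically. The snippet below builds a simulated differential DVH as a sum of two normal components, recovers the components by least-squares fitting, and forms a weight-averaged component mean as the "weighted dose"; the dose values, weights, and the exact weighted-dose formula are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Simulated differential DVH (volume fraction per dose bin) built as the
# sum of two normal components; all numbers here are illustrative.
dose = np.linspace(40.0, 70.0, 300)       # Gy

def two_gaussians(d, w1, mu1, s1, w2, mu2, s2):
    g = lambda mu, s: np.exp(-0.5 * ((d - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    return w1 * g(mu1, s1) + w2 * g(mu2, s2)

ddvh = two_gaussians(dose, 0.7, 60.0, 1.5, 0.3, 54.0, 3.0)

# Decompose back into normal components, then form a weighted dose
# (weight-averaged component means, one plausible reading of the idea).
p0 = (0.5, 58.0, 2.0, 0.5, 50.0, 2.0)
(w1, mu1, s1, w2, mu2, s2), _ = curve_fit(two_gaussians, dose, ddvh, p0=p0)
weighted_dose = (w1 * mu1 + w2 * mu2) / (w1 + w2)
```

With the synthetic weights above the weighted dose lands near 58 Gy, a single number that summarizes both the target bulk and the lower-dose tail.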

  9. Label-fusion-segmentation and deformation-based shape analysis of deep gray matter in multiple sclerosis: the impact of thalamic subnuclei on disability.

    PubMed

    Magon, Stefano; Chakravarty, M Mallar; Amann, Michael; Weier, Katrin; Naegelin, Yvonne; Andelova, Michaela; Radue, Ernst-Wilhelm; Stippich, Christoph; Lerch, Jason P; Kappos, Ludwig; Sprenger, Till

    2014-08-01

    Deep gray matter (DGM) atrophy has been reported in patients with multiple sclerosis (MS) already at early stages of the disease and progresses throughout the disease course. We studied DGM volume and shape and their relation to disability in a large cohort of clinically well-described MS patients using new subcortical segmentation methods and shape analysis. Structural 3D magnetic resonance images were acquired at 1.5 T in 118 patients with relapsing-remitting MS. Subcortical structures were segmented using a multi-atlas technique that relies on an automatically generated template library. To localize focal morphological changes, shape analysis was performed by estimating the vertex-wise displacements each subject must undergo to deform to a template. Multiple linear regression analysis showed that the volume of specific thalamic nuclei (the ventral nuclear complex), together with normalized gray matter volume, explains a relatively large proportion of expanded disability status scale (EDSS) variability. The deformation-based displacement analysis confirmed the relation between thalamic shape and EDSS scores. Furthermore, white matter lesion volume was found to relate to the shape of all subcortical structures. This novel method for the analysis of subcortical volume and shape allows depicting specific contributions of DGM abnormalities to neurological deficits in MS patients. The results stress the importance of the ventral thalamic nuclei in this respect. PMID:24510715

  10. Segmentation and Image Analysis of Abnormal Lungs at CT: Current Approaches, Challenges, and Future Trends.

    PubMed

    Mansoor, Awais; Bagci, Ulas; Foster, Brent; Xu, Ziyue; Papadakis, Georgios Z; Folio, Les R; Udupa, Jayaram K; Mollura, Daniel J

    2015-01-01

    The computer-based process of identifying the boundaries of the lungs from surrounding thoracic tissue on computed tomographic (CT) images, called segmentation, is a vital first step in radiologic pulmonary image analysis. Many algorithms and software platforms provide image segmentation routines for quantification of lung abnormalities; however, nearly all of the current approaches work well only if the lungs exhibit minimal or no pathologic conditions. When moderate to high amounts of disease, or abnormalities with a challenging shape or appearance, exist in the lungs, computer-aided detection systems are likely to fail to depict those abnormal regions because of inaccurate segmentation. In particular, abnormalities such as pleural effusions, consolidations, and masses often cause inaccurate lung segmentation, which greatly limits the use of image processing methods in clinical and research contexts. In this review, a critical summary of the current methods for lung segmentation on CT images is provided, with special emphasis on the accuracy and performance of the methods in cases with abnormalities and cases with exemplary pathologic findings. The currently available segmentation methods can be divided into five major classes: (a) thresholding-based, (b) region-based, (c) shape-based, (d) neighboring anatomy-guided, and (e) machine learning-based methods. The feasibility of each class and its shortcomings are explained and illustrated with the most common lung abnormalities observed on CT images. Finally, practical applications and evolving technologies combining the presented approaches are detailed for the practicing radiologist. PMID:26172351
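The first class (thresholding-based) can be illustrated with a minimal sketch: threshold air-like Hounsfield values, label connected components, discard the component connected to the volume border (outside air), and keep the two largest remaining components as lungs. The toy volume, HU values, and thresholds below are synthetic assumptions, not from the review.

```python
import numpy as np
from scipy import ndimage

# Toy CT volume in Hounsfield units: body at ~0 HU, two air-filled "lungs"
# at ~-800 HU, outside air at -1000 HU (all values illustrative).
vol = np.zeros((40, 60, 60))
vol[:, :5, :] = -1000.0                      # air outside the body
vol[10:30, 20:40, 10:25] = -800.0            # left "lung"
vol[10:30, 20:40, 35:50] = -800.0            # right "lung"

# 1) Threshold: air-like voxels lie well below soft tissue.
air = vol < -400.0
# 2) Connected components; drop any component touching the y/z borders
#    (outside air), then keep the two largest remaining components.
labels, n = ndimage.label(air)
border_labels = set(labels[:, 0, :].ravel()) | set(labels[:, -1, :].ravel()) \
    | set(labels[:, :, 0].ravel()) | set(labels[:, :, -1].ravel())
sizes = ndimage.sum(air, labels, index=range(1, n + 1))
candidates = [(sz, lab) for lab, sz in zip(range(1, n + 1), sizes)
              if lab not in border_labels]
lungs = [lab for _, lab in sorted(candidates, reverse=True)[:2]]
lung_mask = np.isin(labels, lungs)
```

This works only because the toy "lungs" are clean air cavities; exactly the failure modes the review describes (effusions, consolidations, masses) break the threshold assumption.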

  11. Analysis of wear mechanism and influence factors of drum segment of hot rolling coiler

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Peng, Yan; Liu, Hongmin; Liu, Yunfei

    2013-03-01

    Because the working environment of the segment is complex and wear failures frequently occur, understanding the wear mechanism under the corresponding load is key to solving this problem. Many researchers have investigated segment failure, but have not taken into account the combined influences of the mating materials and the coiling process. To investigate the wear failure of the drum segment of the hot rolling coiler, an MMU-5G abrasion tester is applied to simulate the wear behavior under different temperatures, loads and stages, and the friction coefficients and wear rates are acquired. Scanning electron microscopy (SEM) is used to observe the micro-morphology of the worn surface, and X-ray energy dispersive spectroscopy (EDS) is used to analyze its chemical composition; finally, the wear mechanism of the segment in the working process is identified and the influence patterns of the environmental factors on the material wear behavior are determined. The test and analysis results show that under a given load, the wear of the segment gradually changes from abrasive wear to oxidation wear as the temperature increases, and the degree of wear decreases; at a given temperature, the main wear mechanism changes from abrasive wear to spalling wear as the load increases, and the degree of wear increases slightly. The proposed research provides a theoretical foundation and a practical reference for optimizing the wear behavior and extending the working life of the segment.

  12. Segmental hair analysis can demonstrate external contamination in postmortem cases.

    PubMed

    Kintz, Pascal

    2012-02-10

    Excluding laboratory mistakes, a false positive hair result can be observed in cases of contamination from environmental pollution (external contamination) or after drug incorporation into the hair from the individual's body fluids, such as sweat or putrefactive fluid (a post mortem artifact). From our 20 years' experience of hair testing, it appears that such artifacts cannot be excluded in some post mortem cases, despite a decontamination procedure. As a consequence, interpretation of the results is a challenge that deserves particular attention. Our strategy is reviewed in this paper, based on six cases. In all cases, a decontamination procedure with two washes of 5 ml of dichloromethane for 5 min was performed, and the last dichloromethane wash was negative for each target drug. From the case histories, there was no suspicion of chronic drug use. In all six cases, the concentrations detected were similar along the hair shaft, irrespective of the tested segment. We considered this indicative of external contamination and advised the police forces or the judges that it was not possible to establish exposure before death. In contrast to smoke, contamination due to aqueous matrices (sweat, putrefactive fluid, blood) seems much more difficult to remove. To investigate potential incorporation of 7-aminoflunitrazepam via putrefactive material, the author incubated negative hair strands in blood spiked at 100 ng/ml and stored at +4 °C, room temperature and +40 °C for 7, 14 and 28 days. After routine decontamination, 7-aminoflunitrazepam tested positive in hair, irrespective of the incubation temperature, as early as after 7 days (233-401 pg/mg). For all incubation periods, maximum concentrations were observed after incubation at room temperature. The highest concentration (742 pg/mg) was observed after 28 days of incubation at room temperature. 
It is concluded that a standard decontamination procedure is not able to completely remove external contamination from post mortem specimens. Homogeneous segmental results are probably indicative of external contamination, and therefore a single hair result should not be used to establish long-term exposure to a drug. Nor should the presence of a metabolite be considered a discriminating tool, as it can also be present in putrefactive material. PMID:21354729

  13. Automated abdominal lymph node segmentation based on RST analysis and SVM

    NASA Astrophysics Data System (ADS)

    Nimura, Yukitaka; Hayashi, Yuichiro; Kitasaka, Takayuki; Furukawa, Kazuhiro; Misawa, Kazunari; Mori, Kensaku

    2014-03-01

    This paper describes a segmentation method for abdominal lymph nodes (LNs) using radial structure tensor (RST) analysis and a support vector machine. LN analysis is one of the crucial parts of lymphadenectomy, a surgical procedure in which one or more LNs are removed in order to evaluate them for the presence of cancer. Several methods for automated LN detection and segmentation have been proposed; however, they generate many false positives (FPs). The proposed method consists of LN candidate segmentation and FP reduction. LN candidates are extracted using RST analysis at each voxel of the CT scan. RST analysis can discriminate between different local intensity structures without influence from surrounding structures. In the FP reduction process, we eliminate FPs using a support vector machine with shape and intensity information of the LN candidates. The experimental results reveal that the sensitivity of the proposed method was 82.0% with 21.6 FPs/case.

  14. Imalytics Preclinical: Interactive Analysis of Biomedical Volume Data

    PubMed Central

    Gremse, Felix; Stärk, Marius; Ehling, Josef; Menzel, Jan Robert; Lammers, Twan; Kiessling, Fabian

    2016-01-01

    A software tool is presented for interactive segmentation of volumetric medical data sets. To allow interactive processing of large data sets, segmentation operations and rendering are GPU-accelerated. Special adjustments are provided to overcome GPU-imposed constraints such as limited memory and host-device bandwidth. A general and efficient undo/redo mechanism is implemented using GPU-accelerated compression of the multiclass segmentation state. A broadly applicable set of interactive segmentation operations is provided which can be combined to solve the quantification tasks of many types of imaging studies. A fully GPU-accelerated ray casting method for multiclass segmentation rendering is implemented which is well balanced with respect to delay, frame rate, worst-case memory consumption, scalability, and image quality. Performance of segmentation operations and rendering is measured using high-resolution example data sets, showing that GPU acceleration greatly improves performance. Compared to a reference marching cubes implementation, the rendering was found to be superior with respect to rendering delay and worst-case memory consumption while providing sufficiently high frame rates for interactive visualization and comparable image quality. The fast interactive segmentation operations and the accurate rendering make our tool particularly suitable for efficient analysis of multimodal image data sets, which arise in large amounts in preclinical imaging studies. PMID:26909109
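The undo/redo idea (compressing the multiclass segmentation state into compact snapshots) can be sketched on the CPU with simple run-length encoding; label volumes are highly repetitive, so runs compress well. This is a minimal illustrative sketch, not the tool's GPU-accelerated codec.

```python
import numpy as np

def rle_encode(labels):
    """Run-length encode a label volume into (values, run lengths, shape)."""
    flat = labels.ravel()
    change = np.flatnonzero(np.diff(flat)) + 1       # run boundaries
    starts = np.concatenate(([0], change))
    lengths = np.diff(np.concatenate((starts, [flat.size])))
    return flat[starts].copy(), lengths, labels.shape

def rle_decode(values, lengths, shape):
    # Expand each run back to its full length and restore the array shape.
    return np.repeat(values, lengths).reshape(shape)

# Snapshot a multiclass segmentation, make a destructive edit, then undo.
seg = np.zeros((64, 64), dtype=np.uint8)
seg[10:30, 10:30] = 1                 # class-1 region
snapshot = rle_encode(seg)            # compact undo record
seg[20:40, 20:40] = 2                 # edit to be undone
restored = rle_decode(*snapshot)
```

An undo stack then stores one such snapshot per edit at a fraction of the raw volume's size.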

  15. A robust and fast line segment detector based on top-down smaller eigenvalue analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Lu, Xiaoqing

    2014-01-01

    In this paper, we propose a robust and fast line segment detector, which achieves accurate results with a controlled number of false detections and requires no parameter tuning. It consists of three steps: first, we propose a novel edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input image; second, we propose a top-down scheme based on smaller eigenvalue analysis to extract line segments within each obtained edge segment; third, we employ Desolneux et al.'s method to reject false detections. Experiments demonstrate that it is very efficient and more robust than two state-of-the-art methods, LSD and EDLines.
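The core of smaller eigenvalue analysis is easy to demonstrate: for a set of 2-D edge points, the eigenvector of the covariance matrix with the larger eigenvalue gives the line direction, while the smaller eigenvalue is the mean squared perpendicular deviation, i.e., a direct "line-likeness" score. A minimal sketch (synthetic points, not the paper's chaining or validation steps):

```python
import numpy as np

def fit_line_eig(points):
    """Fit a line to 2-D points via eigen-analysis of their covariance.

    The eigenvector of the larger eigenvalue is the line direction; the
    smaller eigenvalue equals the mean squared perpendicular deviation,
    so it measures how line-like the point set is.
    """
    mean = points.mean(axis=0)
    cov = np.cov((points - mean).T)
    evals, evecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return mean, evecs[:, 1], evals[0]   # centroid, direction, smaller eigenvalue

# Nearly collinear "edge points": the smaller eigenvalue stays near zero.
t = np.linspace(0.0, 10.0, 50)
pts = np.stack([t, 2.0 * t + 1.0], axis=1)
mean, direction, small_eval = fit_line_eig(pts)
```

A top-down splitter would recursively cut an edge segment wherever the smaller eigenvalue exceeds a tolerance, keeping pieces that fit a line well.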

  16. Segmenting Business Students Using Cluster Analysis Applied to Student Satisfaction Survey Results

    ERIC Educational Resources Information Center

    Gibson, Allen

    2009-01-01

    This paper demonstrates a new application of cluster analysis to segment business school students according to their degree of satisfaction with various aspects of the academic program. The resulting clusters provide additional insight into drivers of student satisfaction that are not evident from analysis of the responses of the student body as a…

  17. Control volume based hydrocephalus research; analysis of human data

    NASA Astrophysics Data System (ADS)

    Cohen, Benjamin; Wei, Timothy; Voorhees, Abram; Madsen, Joseph; Anor, Tomer

    2010-11-01

    Hydrocephalus is a neuropathophysiological disorder primarily diagnosed by increased cerebrospinal fluid volume and pressure within the brain. To date, utilization of clinical measurements has been limited to understanding the relative amplitude and timing of flow, volume and pressure waveforms; these are qualitative approaches without a clear framework for meaningful quantitative comparison. Pressure-volume models and electric circuit analogs enforce volume conservation principles in terms of pressure. Control volume analysis, through the integral mass and momentum conservation equations, ensures that pressure and volume are accounted for using first-principles fluid physics. This approach can directly incorporate the diverse measurements obtained by clinicians into a simple, direct and robust mechanics-based framework. Clinical data obtained for analysis are discussed along with the data processing techniques used to extract terms in the conservation equations. Control volume analysis provides a non-invasive, physics-based approach to extracting pressure information from magnetic resonance velocity data that cannot be measured directly by pressure instrumentation.
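One term of the integral mass conservation equation is the volumetric flow through a control surface, Q = ∫∫ v·dA, which is exactly what per-pixel MR velocity maps provide. The sketch below integrates a synthetic Poiseuille-like velocity profile over a circular cross-section; the radius, peak velocity, and profile are illustrative assumptions, not patient data.

```python
import numpy as np

# Hypothetical through-plane velocity map (cm/s) over a circular conduit.
R = 1.0                                   # conduit radius, cm
n = 200
x = np.linspace(-R, R, n)
y = np.linspace(-R, R, n)
X, Y = np.meshgrid(x, y)
r2 = X ** 2 + Y ** 2
v_max = 5.0                               # peak velocity, cm/s
v = np.where(r2 <= R ** 2, v_max * (1.0 - r2 / R ** 2), 0.0)

# Integral mass conservation through the plane: Q = ∫∫ v dA,
# evaluated here with a simple rectangle rule.
dA = (x[1] - x[0]) * (y[1] - y[0])
Q = v.sum() * dA                          # volumetric flow rate, cm^3/s
Q_exact = 0.5 * v_max * np.pi * R ** 2    # analytic value for this profile
```

Repeating this over the cardiac cycle yields the flow waveform whose timing and amplitude the paper feeds into the conservation equations.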

  18. Automatic segmentation of the colon

    NASA Astrophysics Data System (ADS)

    Wyatt, Christopher L.; Ge, Yaorong; Vining, David J.

    1999-05-01

    Virtual colonoscopy is a minimally invasive technique that enables detection of colorectal polyps and cancer. Normally, a patient's bowel is prepared with colonic lavage and gas insufflation prior to computed tomography (CT) scanning. An important step for 3D analysis of the image volume is segmentation of the colon. The high-contrast gas/tissue interface that exists in the colon lumen makes segmentation of the majority of the colon relatively easy; however, two factors inhibit automatic segmentation of the entire colon. First, the colon is not the only gas-filled organ in the data volume: the lungs, small bowel, and stomach also meet this criterion. User-defined seed points placed in the colon lumen have previously been required to spatially isolate only the colon. Second, portions of the colon lumen may be obstructed by peristalsis, large masses, and/or residual feces. These complicating factors require increased user interaction during the segmentation process to isolate additional colon segments. To automate the segmentation of the colon, we have developed a method to locate seed points and segment the gas-filled lumen with no user supervision. We have also developed an automated approach to improve lumen segmentation by digitally removing residual contrast-enhanced fluid resulting from a new bowel preparation that liquefies and opacifies any residual feces.
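The seeded-growing step referred to above can be sketched as a breadth-first region grow that accepts 6-connected voxels below an air threshold. The toy volume, HU values, and threshold are illustrative assumptions; the paper's contribution is locating such seeds automatically and handling residual fluid, which this sketch omits.

```python
import numpy as np
from collections import deque

def region_grow(vol, seed, threshold):
    """Grow a region from `seed`, accepting 6-connected voxels whose
    value is below `threshold` (gas-like in CT)."""
    mask = np.zeros(vol.shape, dtype=bool)
    mask[seed] = vol[seed] < threshold
    queue = deque([seed] if mask[seed] else [])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nb = (z + dz, y + dy, x + dx)
            if all(0 <= c < s for c, s in zip(nb, vol.shape)) \
                    and not mask[nb] and vol[nb] < threshold:
                mask[nb] = True
                queue.append(nb)
    return mask

# Toy volume: a gas-filled "lumen" at -900 HU inside soft tissue at 0 HU.
vol = np.zeros((20, 20, 20))
vol[5:15, 5:15, 5:15] = -900.0
lumen = region_grow(vol, seed=(10, 10, 10), threshold=-400.0)
```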

  19. Segmentation of ECG-gated multidetector row-CT cardiac images for functional analysis

    NASA Astrophysics Data System (ADS)

    Kim, Jin Sung; Na, Yonghum; Bae, Kyongtae T.

    2002-05-01

    Multidetector-row CT (MDCT) gated with an ECG trace allows continuous image acquisition of the heart during a breath-hold with high spatial and temporal resolution. Dynamic segmentation and display of CT images, especially short- and long-axis views, is important in functional analysis of cardiac morphology. The size of a dynamic MDCT cardiac study, however, is typically very large, involving several hundred CT images, and thus manual analysis of these images can be time-consuming and tedious. In this paper, an automatic scheme is proposed to segment and reorient the left ventricular images in MDCT. Two segmentation techniques, deformable model and region-growing methods, were developed and tested. The contour of the ventricular cavity was segmented iteratively from a set of initial coarse boundary points placed on a transaxial CT image and was propagated to adjacent CT images. Segmented transaxial diastolic-phase MDCT images were reoriented along the long and short axes of the left ventricle. The axes were estimated by calculating the principal components of the ventricular boundary points and then confirmed or adjusted by an operator. The reorientation of the coordinates was applied to other transaxial MDCT image sets reconstructed at different cardiac phases. Estimated short axes of the left ventricle were in close agreement with the qualitative assessment of a radiologist. Preliminary results from our methods were promising, with a considerable reduction in analysis time and manual operations.
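The axis-estimation step (principal components of the ventricular boundary points) reduces to a PCA/SVD of the centered point cloud: the first principal direction is the long-axis estimate, and the remaining components span the short-axis plane. A minimal sketch on a synthetic elongated boundary cloud (shape and semi-axes are assumptions for illustration):

```python
import numpy as np

# Hypothetical left-ventricle boundary points: an ellipsoid-like cloud,
# longest along the x direction (semi-axes 10, 4, 4 are illustrative).
rng = np.random.default_rng(0)
sphere = rng.normal(size=(500, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)
points = sphere * np.array([10.0, 4.0, 4.0])

# First principal component of the centered boundary = long-axis estimate.
centered = points - points.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
long_axis = vt[0]          # unit vector of largest variance
```

Stacking `vt`'s rows as a rotation matrix then resamples the transaxial volume into long- and short-axis views, with the operator confirming or adjusting the result as the paper describes.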

  20. Preliminary analysis of effect of random segment errors on coronagraph performance

    NASA Astrophysics Data System (ADS)

    Stahl, Mark T.; Shaklan, Stuart B.; Stahl, H. Philip

    2015-09-01

    "Are we alone in the Universe?" is probably the most compelling science question of our generation. To answer it requires a large aperture telescope with extreme wavefront stability. To image and characterize Earth-like planets requires the ability to block 1010 of the host star's light with a 10-11 stability. For an internal coronagraph, this requires correcting wavefront errors and keeping that correction stable to a few picometers rms for the duration of the science observation. This requirement places severe specifications upon the performance of the observatory, telescope and primary mirror. A key task of the AMTD project (initiated in FY12) is to define telescope level specifications traceable to science requirements and flow those specifications to the primary mirror. From a systems perspective, probably the most important question is: What is the telescope wavefront stability specification? Previously, we suggested this specification should be 10 picometers per 10 minutes; considered issues of how this specification relates to architecture, i.e. monolithic or segmented primary mirror; and asked whether it was better to have few or many segments. This paper reviews the 10 picometers per 10 minutes specification; provides analysis related to the application of this specification to segmented apertures; and suggests that a 3 or 4 ring segmented aperture is more sensitive to segment rigid body motion that an aperture with fewer or more segments.

  1. Analysis of a kinetic multi-segment foot model. Part I: Model repeatability and kinematic validity.

    PubMed

    Bruening, Dustin A; Cooney, Kevin M; Buczek, Frank L

    2012-04-01

    Kinematic multi-segment foot models are still evolving, but have seen increased use in clinical and research settings. The addition of kinetics may increase knowledge of foot and ankle function as well as influence multi-segment foot model evolution; however, previous kinetic models are too complex for clinical use. In this study we present a three-segment kinetic foot model and thorough evaluation of model performance during normal gait. In this first of two companion papers, model reference frames and joint centers are analyzed for repeatability, joint translations are measured, segment rigidity characterized, and sample joint angles presented. Within-tester and between-tester repeatability were first assessed using 10 healthy pediatric participants, while kinematic parameters were subsequently measured on 17 additional healthy pediatric participants. Repeatability errors were generally low for all sagittal plane measures as well as transverse plane Hindfoot and Forefoot segments (median < 3°), while the least repeatable orientations were the Hindfoot coronal plane and Hallux transverse plane. Joint translations were generally less than 2 mm in any one direction, while segment rigidity analysis suggested rigid body behavior for the Shank and Hindfoot, with the Forefoot violating the rigid body assumptions in terminal stance/pre-swing. Joint excursions were consistent with previously published studies. PMID:22421190

  2. Mathematical Analysis of Space Radiator Segmenting for Increased Reliability and Reduced Mass

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.

    2001-01-01

    Spacecraft for long-duration deep space missions will need to be designed to survive micrometeoroid bombardment of their surfaces, some of which may actually be punctured. To avoid loss of the entire mission, the damage due to such punctures must be limited to small, localized areas. This is especially true for power system radiators, which necessarily feature large surface areas to reject heat at relatively low temperature to the space environment by thermal radiation. It may be intuitively obvious that if a space radiator is composed of a large number of independently operating segments, such as heat pipes, a random micrometeoroid puncture will result only in the loss of the punctured segment, and not the entire radiator. Due to the redundancy achieved by independently operating segments, the wall thickness and consequently the weight of such segments can be drastically reduced. Probability theory is used to estimate the magnitude of such weight reductions as the number of segments is increased. An analysis of relevant parameter values required for minimum-mass segmented radiators is also included.
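The redundancy argument is a straightforward binomial calculation: with n independent segments, each punctured with probability p over the mission, the radiator survives if at least k segments remain intact. The numbers below (n = 20, k = 18, p = 0.05) are illustrative assumptions, not values from the paper.

```python
from math import comb

def p_survive(n, k, p):
    """P(at least k of n independent segments survive a mission in which
    each segment is punctured with probability p): binomial model."""
    return sum(comb(n, m) * (1.0 - p) ** m * p ** (n - m)
               for m in range(k, n + 1))

p = 0.05                              # per-segment puncture probability
monolithic = (1.0 - p) ** 20          # one thin-walled radiator: a hit on
                                      # any of the 20 equal areas is fatal
segmented = p_survive(20, 18, p)      # 20 segments, 18 suffice for full load
```

At the same thin wall, the monolithic radiator survives with probability of only about 0.36, while the 2-puncture-tolerant segmented design survives with probability of about 0.92, which is the effect that lets segment walls be made much lighter.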

  3. Fire flame detection using color segmentation and space-time analysis

    NASA Astrophysics Data System (ADS)

    Ruchanurucks, Miti; Saengngoen, Praphin; Sajjawiso, Theeraphat

    2011-10-01

    This paper presents fire flame detection for CCTV cameras based on image processing. The scheme relies on color segmentation and space-time analysis. The segmentation is performed to extract fire-like-color regions in an image. Many methods are benchmarked against each other to find the best one for practical CCTV cameras. After that, space-time analysis is used to recognize fire behavior. A space-time window is generated from the contour of the thresholded image. Feature extraction is done in the Fourier domain of the window. A neural network is used for behavior recognition. The system is shown to be practical and robust.
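The color-segmentation stage can be sketched with one of the simplest fire-color rules in the literature: flame pixels tend to satisfy R > threshold and R ≥ G > B. Both the rule and the threshold below are illustrative assumptions, not the specific methods benchmarked in the paper.

```python
import numpy as np

def fire_like_mask(rgb, r_thresh=180):
    """Flag pixels with a fire-like color ordering (R dominant, then G,
    then B). Rule and threshold are illustrative only."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > r_thresh) & (r >= g) & (g > b)

# Toy frame: an orange "flame" patch on a gray background.
frame = np.full((60, 60, 3), 90, dtype=np.uint8)
frame[20:40, 20:40] = (230, 160, 40)
mask = fire_like_mask(frame)
```

The resulting mask's contour then seeds the space-time window on which the Fourier-domain features are computed.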

  4. Automatic segmentation of solitary pulmonary nodules based on local intensity structure analysis and 3D neighborhood features in 3D chest CT images

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

    2012-03-01

    This paper presents a solitary pulmonary nodule (SPN) segmentation method based on local intensity structure analysis and neighborhood feature analysis in chest CT images. Automated segmentation of SPNs is desirable for a chest computer-aided detection/diagnosis (CAD) system, since an SPN may indicate early-stage lung cancer. Due to the similar intensities of SPNs and other chest structures such as blood vessels, many false positives (FPs) are generated by nodule detection methods. To reduce such FPs, we introduce two features that analyze the relation between each segmented nodule candidate and its neighborhood region. The proposed method utilizes a blob-like structure enhancement (BSE) filter based on Hessian analysis to augment the blob-like structures as initial nodule candidates. Then a fine segmentation is performed to segment a much more accurate region for each nodule candidate. FP reduction is mainly addressed by investigating two neighborhood features based on the volume ratio and the eigenvector of the Hessian, calculated from the neighborhood region of each nodule candidate. We evaluated the proposed method using 40 chest CT images, including 20 standard-dose CT images randomly chosen from a local database and 20 low-dose CT images randomly chosen from a public database (LIDC). The experimental results revealed that the average TP rate of the proposed method was 93.6% with 12.3 FPs/case.
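The Hessian-based blob enhancement idea can be sketched generically: at a bright blob, all three eigenvalues of the Gaussian-scale Hessian are negative, so a simple blobness score rewards voxels where that holds. This is a textbook Hessian blobness sketch on a synthetic nodule, not the paper's exact BSE filter; the scale and scoring rule are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bright_blobness(vol, sigma=2.0):
    """Score bright blob-like structures from Hessian eigenvalues.

    At a bright blob all three eigenvalues are negative; score each voxel
    by -lambda_max (the least negative eigenvalue) where all eigenvalues
    are negative, else 0.
    """
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1
            # Gaussian-smoothed second derivative along axes i and j.
            H[..., i, j] = gaussian_filter(vol, sigma, order=order)
    evals = np.linalg.eigvalsh(H)            # ascending along last axis
    lam_max = evals[..., 2]
    return np.where(lam_max < 0, -lam_max, 0.0)

# Synthetic bright "nodule" on a dark background.
z, y, x = np.mgrid[:32, :32, :32]
vol = np.exp(-((z - 16) ** 2 + (y - 16) ** 2 + (x - 16) ** 2) / (2 * 3.0 ** 2))
score = bright_blobness(vol)
```

Vessels, by contrast, leave one eigenvalue near zero along their axis, which is what lets Hessian analysis separate tube-like from blob-like structures.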

  5. Microreactors with integrated UV/Vis spectroscopic detection for online process analysis under segmented flow.

    PubMed

    Yue, Jun; Falke, Floris H; Schouten, Jaap C; Nijhuis, T Alexander

    2013-12-21

    Combining reaction and detection in multiphase microfluidic flow is becoming increasingly important for accelerating process development in microreactors. We report the coupling of UV/Vis spectroscopy with microreactors for online process analysis under segmented flow conditions. Two integration schemes are presented: one uses a cross-type flow-through cell subsequent to a capillary microreactor for detection in the transmission mode; the other uses embedded waveguides on a microfluidic chip for detection in the evanescent wave field. Model experiments reveal the capabilities of the integrated systems in real-time concentration measurements and segmented flow characterization. The application of such integration for process analysis during gold nanoparticle synthesis is demonstrated, showing its great potential in process monitoring in microreactors operated under segmented flow. PMID:24178763

  6. 3-D segmentation and quantitative analysis of inner and outer walls of thrombotic abdominal aortic aneurysms

    NASA Astrophysics Data System (ADS)

    Lee, Kyungmoo; Yin, Yin; Wahle, Andreas; Olszewski, Mark E.; Sonka, Milan

    2008-03-01

    An abdominal aortic aneurysm (AAA) is an area of localized widening of the abdominal aorta, with a frequent presence of thrombus. A ruptured aneurysm can cause death due to severe internal bleeding. AAA thrombus segmentation and quantitative analysis are of paramount importance for diagnosis, risk assessment, and determination of treatment options. Until now, only a small number of methods for thrombus segmentation and analysis have been presented in the literature, either requiring substantial user interaction or exhibiting insufficient performance. We report a novel method offering minimal user interaction and high accuracy. Our thrombus segmentation method is composed of an initial automated luminal surface segmentation, followed by a cost function-based optimal segmentation of the inner and outer surfaces of the aortic wall. The approach utilizes the power and flexibility of the optimal triangle mesh-based 3-D graph search method, in which cost functions for thrombus inner and outer surfaces are based on gradient magnitudes. Sometimes local failures caused by image ambiguity occur, in which case several control points are used to guide the computer segmentation without the need to trace borders manually. Our method was tested on 9 MDCT image datasets (951 image slices). With the exception of one case in which the thrombus was highly eccentric, visually acceptable aortic lumen and thrombus segmentation results were achieved. No user interaction was needed in 3 of the remaining 8 datasets, and 7.80 ± 2.71 mouse clicks per case / 0.083 ± 0.035 mouse clicks per image slice were required in the other 5 datasets.

  7. Three-dimensional analysis of cervical spine segmental motion in rotation

    PubMed Central

    Zhao, Xiong; Wu, Zi-xiang; Han, Bao-jun; Yan, Ya-bo; Zhang, Yang

    2013-01-01

    Introduction: The movements of the cervical spine during head rotation are too complicated to measure using conventional radiography or computed tomography (CT) techniques. In this study, we measure three-dimensional segmental motion of cervical spine rotation in vivo using a non-invasive measurement technique. Material and methods: Sixteen healthy volunteers underwent three-dimensional CT of the cervical spine during head rotation. Occiput (Oc) – T1 reconstructions were created for volunteers in each of 3 positions: supine, and maximum left and right rotation of the head with respect to the torso. Segmental motions were calculated using Euler angles and volume merge methods in the three major planes. Results: Mean maximum axial rotation of the cervical spine to one side ranged from 1.6° to 38.5° across levels. Coupled lateral bending opposite in direction to the axial rotation was observed in the upper cervical levels, while in the subaxial cervical levels it was observed in the same direction as the axial rotation. Coupled extension was observed at levels C5-T1, while coupled flexion was observed at levels Oc-C5. Conclusions: The three-dimensional cervical segmental motions in rotation were accurately measured with this non-invasive technique. These findings will be helpful as a basis for understanding cervical spine movement in rotation and in abnormal conditions. The presented data also provide baseline segmental motions for the design of prostheses for the cervical spine. PMID:23847675
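The Euler-angle step can be sketched generically: given the pose of each vertebra (e.g., from volume registration), the segmental motion is the relative rotation of one vertebra with respect to its neighbor, decomposed into angles in the three major planes. A minimal sketch using SciPy's rotation utilities; the z-x-y sequence and all angle values are illustrative assumptions, not the study's convention or data.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Assumed poses of two adjacent vertebrae in the scanner frame:
# large axial (z) rotation plus small coupled bending components.
upper = Rotation.from_euler('zxy', [30.0, -4.0, 1.5], degrees=True)
lower = Rotation.from_euler('zxy', [22.0, 3.0, 0.5], degrees=True)

# Segmental motion of the upper vertebra relative to the lower one,
# decomposed back into Euler angles in the three major planes.
segmental = lower.inv() * upper
axial, lateral_bend, flexion = segmental.as_euler('zxy', degrees=True)
```

The relative axial angle comes out near the 8° difference of the two z rotations, with the small off-axis terms appearing as the coupled bending and flexion components.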

  8. Combining multiset resolution and segmentation for hyperspectral image analysis of biological tissues.

    PubMed

    Piqueras, S; Krafft, C; Beleites, C; Egodage, K; von Eggeling, F; Guntinas-Lichius, O; Popp, J; Tauler, R; de Juan, A

    2015-06-30

    Hyperspectral images can provide useful biochemical information about tissue samples. Often, Fourier transform infrared (FTIR) images have been used to distinguish different tissue elements and changes caused by pathological causes. The spectral variation between tissue types and pathological states is very small and multivariate analysis methods are required to describe adequately these subtle changes. In this work, a strategy combining multivariate curve resolution-alternating least squares (MCR-ALS), a resolution (unmixing) method, which recovers distribution maps and pure spectra of image constituents, and K-means clustering, a segmentation method, which identifies groups of similar pixels in an image, is used to provide efficient information on tissue samples. First, multiset MCR-ALS analysis is performed on the set of images related to a particular pathology status to provide basic spectral signatures and distribution maps of the biological contributions needed to describe the tissues. Later on, multiset segmentation analysis is applied to the obtained MCR scores (concentration profiles), used as compressed initial information for segmentation purposes. The multiset idea is transferred to perform image segmentation of different tissue samples. Doing so, a difference can be made between clusters associated with relevant biological parts common to all images, linked to general trends of the type of samples analyzed, and sample-specific clusters, that reflect the natural biological sample-to-sample variability. The last step consists of performing separate multiset MCR-ALS analyses on the pixels of each of the relevant segmentation clusters for the pathology studied to obtain a finer description of the related tissue parts. 
The potential of the strategy combining multiset resolution on complete images, multiset segmentation and multiset local resolution analysis will be shown on a study focused on FTIR images of tissue sections recorded on inflamed and non-inflamed palatine tonsils. PMID:26041517
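
The two-stage strategy (multiset resolution, then segmentation on the recovered concentration profiles) can be sketched compactly. The following is a minimal illustration on synthetic data, not the authors' implementation: MCR-ALS is reduced here to alternating least squares with non-negativity imposed by clipping, and the clustering step is a hand-written k-means on the MCR scores.

```python
import numpy as np

def mcr_als(D, n_components, n_iter=100, seed=0):
    """Alternating least squares unmixing: D (pixels x bands) ~ C @ S.T,
    with non-negativity imposed by clipping after each update."""
    rng = np.random.default_rng(seed)
    C = rng.random((D.shape[0], n_components))
    for _ in range(n_iter):
        S = np.clip(np.linalg.pinv(C) @ D, 0, None).T   # pure spectra (bands x k)
        C = np.clip(D @ np.linalg.pinv(S.T), 0, None)   # scores (pixels x k)
    return C, S

def kmeans(X, k, n_iter=50, seed=0):
    """Plain k-means on the MCR scores, one row per pixel."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Synthetic data: two pure spectra, two homogeneous tissue regions of 50 pixels
true_S = np.array([[1.0, 0.1, 0.0],
                   [0.0, 0.2, 1.0]])
true_C = np.vstack([np.tile([0.9, 0.1], (50, 1)),
                    np.tile([0.1, 0.9], (50, 1))])
D = true_C @ true_S
C, S = mcr_als(D, n_components=2)
labels = kmeans(C, k=2)   # pixels of the two regions fall into two clusters
```

On real FTIR images, D would hold one spectrum per pixel, and the multiset case stacks several images row-wise before factorization.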

  9. Robust Detection and Identification of Sparse Segments in Ultra-High Dimensional Data Analysis

    PubMed Central

    Cai, T. Tony; Jeng, X. Jessie; Li, Hongzhe

    2012-01-01

    Copy number variants (CNVs) are alterations of the DNA of a genome that result in the cell having fewer or more than two copies of segments of the DNA. CNVs correspond to relatively large regions of the genome, ranging from about one kilobase to several megabases, that are deleted or duplicated. Motivated by CNV analysis based on next-generation sequencing data, we consider the problem of detecting and identifying sparse short segments hidden in a long linear sequence of data with an unspecified noise distribution. We propose a computationally efficient method that provides a robust and near-optimal solution for segment identification over a wide range of noise distributions. We theoretically quantify the conditions for detecting the segment signals and show that the method near-optimally estimates the signal segments whenever it is possible to detect their existence. Simulation studies are carried out to demonstrate the efficiency of the method under different noise distributions. We present results from a CNV analysis of a HapMap Yoruban sample to further illustrate the theory and the methods. PMID:23393425
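
The detection problem can be illustrated with a generic robust scan statistic: standardize the sequence with the median and MAD (so no Gaussian assumption is needed), score every short interval by its normalized sum, and keep non-overlapping high-scoring intervals. This is a sketch in the spirit of the paper, not the authors' near-optimal procedure; the threshold of 4.0 is an arbitrary illustrative choice.

```python
import numpy as np

def detect_segments(y, max_len=10, thresh=4.0):
    """Flag short intervals whose standardized sum is large. Median/MAD
    standardization keeps the scan robust to heavy-tailed noise."""
    med = np.median(y)
    mad = 1.4826 * np.median(np.abs(y - med))   # consistent with sigma under normality
    z = (y - med) / mad
    hits = []
    for i in range(len(z)):
        for L in range(1, max_len + 1):
            j = i + L
            if j > len(z):
                break
            stat = z[i:j].sum() / np.sqrt(L)    # standardized interval statistic
            if stat > thresh:
                hits.append((i, j, stat))
    hits.sort(key=lambda t: -t[2])              # greedily keep best non-overlapping
    kept = []
    for i, j, s in hits:
        if all(j <= a or i >= b for a, b, _ in kept):
            kept.append((i, j, s))
    return sorted((i, j) for i, j, _ in kept)

rng = np.random.default_rng(1)
y = rng.standard_t(df=3, size=500)   # heavy-tailed, non-Gaussian noise
y[100:105] += 5.0                    # a hidden duplication-like segment
segs = detect_segments(y)            # should recover a segment near [100, 105)
```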

  10. Segmentation of vascular structures and hematopoietic cells in 3D microscopy images and quantitative analysis

    NASA Astrophysics Data System (ADS)

    Mu, Jian; Yang, Lin; Kamocka, Malgorzata M.; Zollman, Amy L.; Carlesso, Nadia; Chen, Danny Z.

    2015-03-01

    In this paper, we present image processing methods for the quantitative study of changes in the bone marrow microenvironment (characterized by altered vascular structure and hematopoietic cell distribution) caused by diseases or other factors. We develop algorithms that automatically segment vascular structures and hematopoietic cells in 3-D microscopy images, perform quantitative analysis of the properties of the segmented vascular structures and cells, and examine how such properties change. In processing images, we apply local thresholding to segment vessels and add post-processing steps to deal with imaging artifacts. We propose an improved watershed algorithm that relies on both intensity and shape information and can separate multiple overlapping cells better than common watershed methods. We then quantitatively compute various features of the vascular structures and hematopoietic cells, such as the branches and sizes of vessels and the distribution of cells. In analyzing vascular properties, we provide algorithms for pruning spurious vessel segments and branches based on vessel skeletons. Our algorithms can segment vascular structures and hematopoietic cells with good quality. We use our methods to quantitatively examine the changes in the bone marrow microenvironment caused by deletion of the Notch pathway. Our quantitative analysis reveals property changes in samples with the Notch pathway deleted. Our tool is useful for biologists to quantitatively measure changes in the bone marrow microenvironment and for developing possible therapeutic strategies to aid recovery of the bone marrow microenvironment.

  11. Mean platelet volume is associated with infarct size and microvascular obstruction estimated by cardiac magnetic resonance in ST segment elevation myocardial infarction.

    PubMed

    Fabregat-Andrés, Óscar; Cubillos, Andrés; Ferrando-Beltrán, Mónica; Bochard-Villanueva, Bruno; Estornell-Erill, Jordi; Fácila, Lorenzo; Ridocci-Soriano, Francisco; Morell, Salvador

    2013-06-01

    Mean platelet volume (MPV) is an indicator of platelet activation. High MPV has recently been considered an independent risk factor for poor outcomes after ST-segment elevation myocardial infarction (STEMI). We analyzed 128 patients diagnosed with a first STEMI successfully reperfused during three consecutive years. MPV was measured on admission, and a cardiac magnetic resonance (CMR) exam was performed within the first week in all patients. Myocardial necrosis size was estimated by the area of late gadolinium enhancement (LGE), identifying microvascular obstruction (MVO), if present. Clinical outcomes were recorded at 1-year follow-up. High MPV was defined as a value in the third tertile (≥9.5 fl), and low MPV as a value in the lower two. We found a slight but significant correlation between MPV and infarct size (r = 0.287, P = 0.008). Patients with high MPV had a more extensive infarcted area (percentage of necrosis by LGE: 17.6 vs. 12.5%, P = 0.021) and a higher prevalence of MVO (patients with an MVO pattern: 44.4 vs. 25.3%, P = 0.027). In a multivariable analysis, the hazard ratio for major adverse cardiac events was 3.35 [95% confidence interval (CI) 1.1-9.9, P = 0.03] in patients with high MPV. High MPV in patients with a first STEMI is associated with larger infarct size and a higher prevalence of MVO as measured by CMR. PMID:23322274
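
The two quantities reported (a Pearson correlation between MPV and infarct size, and a tertile-based high/low MPV split at 9.5 fl) are simple to compute; the values below are illustrative, not the study's data.

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def high_mpv(mpv, cut=9.5):
    """High MPV = third tertile (>= cut, in fl); low = the lower two tertiles."""
    return [v >= cut for v in mpv]

mpv = [8.0, 8.5, 9.0, 9.2, 9.6, 10.1, 10.4, 11.0]   # fl, illustrative
lge = [10., 11., 12., 14., 15., 18., 19., 22.]      # % necrosis by LGE, illustrative
r = pearson_r(mpv, lge)
flags = high_mpv(mpv)
```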

  12. Predictive value of admission platelet volume indices for in-hospital major adverse cardiovascular events in acute ST-segment elevation myocardial infarction.

    PubMed

    Celik, Turgay; Kaya, Mehmet G; Akpek, Mahmut; Gunebakmaz, Ozgur; Balta, Sevket; Sarli, Bahadir; Duran, Mustafa; Demirkol, Sait; Uysal, Onur Kadir; Oguzhan, Abdurrahman; Gibson, C Michael

    2015-02-01

    Although mean platelet volume (MPV) is an independent correlate of impaired angiographic reperfusion and 6-month mortality in ST-segment elevation myocardial infarction (STEMI) treated with primary percutaneous coronary intervention (pPCI), there are fewer data regarding the association between platelet distribution width (PDW) and in-hospital major adverse cardiovascular events (MACEs). A total of 306 patients with STEMI undergoing pPCI were evaluated. No reflow was defined as a post-PCI thrombolysis in myocardial infarction (TIMI) flow grade of 0, 1, or 2 (group 1). Angiographic success was defined as TIMI flow grade 3 (group 2). The values of MPV and PDW were higher among patients with no reflow. In-stent thrombosis, nonfatal myocardial infarction, in-hospital mortality, and MACEs were significantly more frequent among patients with no reflow. In multivariate analysis, PDW, MPV, high-sensitivity C-reactive protein, and glucose on admission were independent correlates of in-hospital MACEs. Admission PDW and MPV are independent correlates of no reflow and in-hospital MACEs among patients with STEMI undergoing pPCI. PMID:24301422

  13. Morphotectonic Index Analysis as an Indicator of Neotectonic Segmentation of the Nicoya Peninsula, Costa Rica

    NASA Astrophysics Data System (ADS)

    Morrish, S.; Marshall, J. S.

    2013-12-01

    The Nicoya Peninsula lies within the Costa Rican forearc, where the Cocos plate subducts under the Caribbean plate at ~8.5 cm/yr. Rapid plate convergence produces frequent large earthquakes (~50 yr recurrence interval) and pronounced crustal deformation (0.1-2.0 m/ky uplift). Seven uplifted segments have been identified in previous studies using broad geomorphic surfaces (Hare & Gardner 1984) and late Quaternary marine terraces (Marshall et al. 2010). These surfaces suggest long-term net uplift and segmentation of the peninsula in response to contrasting domains of subducting seafloor (EPR, CNS-1, CNS-2). In this study, newer 10 m contour digital topographic data (CENIGA-Terra Project) will be used to characterize and delineate this segmentation using morphotectonic analysis of drainage basins and correlation of fluvial terrace/geomorphic surface elevations. The peninsula has six primary watersheds which drain into the Pacific Ocean: the Río Andamojo, Río Tabaco, Río Nosara, Río Ora, Río Bongo, and Río Ario, which range in area from 200 km² to 350 km². The trunk rivers follow major lineaments that define morphotectonic segment boundaries, and in turn their drainage basins are bisected by them. Morphometric analysis of the lower (1st and 2nd) order drainage basins will provide insight into segmented tectonic uplift and deformation by comparing values of drainage basin asymmetry, stream length gradient, and hypsometry with respect to margin segmentation and subducting seafloor domain. A general geomorphic analysis will be conducted alongside the morphometric analysis to map previously recognized (Morrish et al. 2010) but poorly characterized late Quaternary fluvial terraces. Stream capture and drainage divide migration are common processes throughout the peninsula in response to the ongoing deformation.
Identification and characterization of basin piracy throughout the peninsula will provide insight into the history of landscape evolution in response to differential uplift. Conducting this morphotectonic analysis of the Nicoya Peninsula will provide further constraints on rates of segment uplift, location of segment boundaries, and advance the understanding of the long term deformation of the region in relation to subduction.

  14. Scientific and clinical evidence for the use of fetal ECG ST segment analysis (STAN).

    PubMed

    Steer, Philip J; Hvidman, Lone Egly

    2014-06-01

    Fetal electrocardiogram waveform analysis has been studied for many decades, but it is only in the last 20 years that computerization has made real-time analysis practical for clinical use. Changes in the ST segment have been shown to correlate with fetal condition, in particular with acid-base status. Meta-analysis of randomized trials (five in total, four using the computerized system) has shown that use of computerized ST segment analysis (STAN) reduces the need for fetal blood sampling by about 40%. However, although there are trends to lower rates of low Apgar scores and acidosis, the differences are not statistically significant. There is no effect on cesarean section rates. Disadvantages include the need for amniotic membranes to be ruptured so that a fetal scalp electrode can be applied, and the need for STAN values to be interpreted in conjunction with detailed fetal heart rate pattern analysis. PMID:24597897

  15. Loads analysis and testing of flight configuration solid rocket motor outer boot ring segments

    NASA Technical Reports Server (NTRS)

    Ahmed, Rafiq

    1990-01-01

    Loads testing was performed on in-house-fabricated flight configuration Solid Rocket Motor (SRM) outer boot ring segments. The tests determined the bending strength and bending stiffness of these beams and showed that they compared well with the hand analysis. The bending stiffness test results compared very well with the finite element data.

  16. Phylogenomic analysis reveals ancient segmental duplications in the human genome.

    PubMed

    Hafeez, Madiha; Shabbir, Madiha; Altaf, Fouzia; Abbasi, Amir Ali

    2016-01-01

    Evolution of organismal complexity and the origin of novelties during vertebrate history have been widely explored in the context of both regulation of gene expression and gene duplication events. Ohno (1970) first put forward the idea of two rounds of whole genome duplication as the most plausible explanation for the evolution of the vertebrate lineage (2R hypothesis). To test the validity of the 2R hypothesis, a robust phylogenomic analysis of multigene families with triplicated or quadruplicated representation on human FGFR-bearing chromosomes (4/5/8/10) was performed. A topology comparison approach categorized members of 80 families into five distinct co-duplicated groups. Genes belonging to one co-duplicated group are duplicated concurrently, whereas genes of two different co-duplicated groups do not share their duplication history and have not duplicated in congruency. Our findings contradict the 2R model and are indicative of small-scale duplications and rearrangements that cover the entire span of animal history. PMID:26327327

  17. Effect of ST segment measurement point on performance of exercise ECG analysis.

    PubMed

    Lehtinen, R; Sievänen, H; Turjanmaa, V; Niemelä, K; Malmivuo, J

    1997-10-10

    To evaluate the effect of the ST-segment measurement point on the diagnostic performance of the ST-segment/heart rate (ST/HR) hysteresis, the ST/HR index, and the end-exercise ST-segment depression in the detection of coronary artery disease, we analysed the exercise electrocardiograms of 347 patients using ST-segment depression measured at 0, 20, 40, 60 and 80 ms after the J-point. Of these patients, 127 had significant coronary artery disease according to angiography and 13 did not; 18 had no myocardial perfusion defect according to technetium-99m sestamibi single-photon emission computed tomography; and 189 were clinically 'normal', having a low likelihood of coronary artery disease. Comparison of areas under the receiver operating characteristic curves showed that the discriminative capacity of the above diagnostic variables improved systematically up to the ST-segment measurement point of 60 ms after the J-point. As compared to analysis at the J-point (0 ms), the areas based on the 60-ms point were 89 vs. 84% (p=0.0001) for the ST/HR hysteresis, 83 vs. 76% (p<0.0001) for the ST/HR index, and 76 vs. 61% (p<0.0001) for the end-exercise ST depression. These findings suggest that ST-segment measurement at 60 ms after the J-point is the most reasonable choice in terms of the discriminative capacity of both the simple and the heart rate-adjusted indices of ST depression. Moreover, the ST/HR hysteresis had the best discriminative capacity independently of the ST-segment measurement point, an observation that gives further support to the clinical utility of this new method in the detection of coronary artery disease. PMID:9363740
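
The comparison rests on areas under receiver operating characteristic curves, which for a scalar marker such as ST depression equal the Mann-Whitney probability that a diseased case scores above a non-diseased one. A minimal sketch with illustrative values:

```python
def auc(pos, neg):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a random diseased case scores above a random
    non-diseased case, counting ties as one half."""
    wins = 0.0
    for p in pos:
        for n in neg:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(pos) * len(neg))

# ST depression in mm at J + 60 ms, illustrative values only
st_cad    = [1.2, 1.6, 0.9, 2.0, 1.4]   # patients with coronary artery disease
st_normal = [0.3, 0.8, 0.5, 1.0, 0.2]   # clinically 'normal' patients
a = auc(st_cad, st_normal)
```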

  18. Volume accumulator design analysis computer codes

    NASA Technical Reports Server (NTRS)

    Whitaker, W. D.; Shimazaki, T. T.

    1973-01-01

    The computer codes, VANEP and VANES, were written and used to aid in the design and performance calculation of the volume accumulator units (VAU) for the 5-kwe reactor thermoelectric system. VANEP computes the VAU design which meets the primary coolant loop VAU volume and pressure performance requirements. VANES computes the performance of the VAU design, determined from the VANEP code, at the conditions of the secondary coolant loop. The codes can also compute the performance characteristics of the VAU's under conditions of possible modes of failure which still permit continued system operation.

  19. Concerted Assembly and Cloning of Multiple DNA Segments Using In Vitro Site-Specific Recombination: Functional Analysis of Multi-Segment Expression Clones

    PubMed Central

    Cheo, David L.; Titus, Steven A.; Byrd, Devon R.N.; Hartley, James L.; Temple, Gary F.; Brasch, Michael A.

    2004-01-01

    The ability to clone and manipulate DNA segments is central to molecular methods that enable expression, screening, and functional characterization of genes, proteins, and regulatory elements. We previously described the development of a novel technology that utilizes in vitro site-specific recombination to provide a robust and flexible platform for high-throughput cloning and transfer of DNA segments. By using an expanded repertoire of recombination sites with unique specificities, we have extended the technology to enable the high-efficiency in vitro assembly and concerted cloning of multiple DNA segments into a vector backbone in a predefined order, orientation, and reading frame. The efficiency and flexibility of this approach enables collections of functional elements to be generated and mixed in a combinatorial fashion for the parallel assembly of numerous multi-segment constructs. The assembled constructs can be further manipulated by directing exchange of defined segments with alternate DNA segments. In this report, we demonstrate feasibility of the technology and application to the generation of fusion proteins, the linkage of promoters to genes, and the assembly of multiple protein domains. The technology has broad implications for cell and protein engineering, the expression of multidomain proteins, and gene function analysis. PMID:15489333

  20. Moving cast shadow resistant for foreground segmentation based on shadow properties analysis

    NASA Astrophysics Data System (ADS)

    Zhou, Hao; Gao, Yun; Yuan, Guowu; Ji, Rongbin

    2015-12-01

    Moving object detection is a fundamental task in machine vision applications. However, detection of moving cast shadows is one of the major concerns for accurate video segmentation. Since detected moving object areas often contain shadow points, errors in measurement, localization, segmentation, classification, and tracking may arise. A novel shadow elimination algorithm is proposed in this paper. A set of suspected moving object areas is detected by the adaptive Gaussian approach. A model is established based on analysis of shadow optical properties, and shadow regions are discriminated from the set of moving pixels by using the properties of brightness, chromaticity, and texture in sequence.
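
A common form of the brightness/chromaticity test mentioned above classifies a foreground pixel as cast shadow when it is darker than the background model by a bounded factor while its normalized chromaticity is nearly unchanged. The sketch below uses that standard test with illustrative parameters (alpha, beta, tau are assumptions, not values from the paper) and omits the texture cue:

```python
import numpy as np

def shadow_mask(frame, background, alpha=0.4, beta=0.95, tau=0.05):
    """A pixel is a cast shadow if alpha <= brightness ratio <= beta
    (darker, but not too dark) and its rg chromaticity barely shifts."""
    eps = 1e-6
    b_int = background.sum(axis=-1) + eps          # background intensity
    f_int = frame.sum(axis=-1) + eps               # frame intensity
    ratio = f_int / b_int
    f_chroma = frame[..., :2] / f_int[..., None]   # normalized rg chromaticity
    b_chroma = background[..., :2] / b_int[..., None]
    chroma_shift = np.abs(f_chroma - b_chroma).sum(axis=-1)
    return (ratio >= alpha) & (ratio <= beta) & (chroma_shift < tau)

bg = np.full((4, 4, 3), 120.0)
fr = bg.copy()
fr[0, 0] = [60.0, 60.0, 60.0]    # same chromaticity, half brightness -> shadow
fr[1, 1] = [200.0, 30.0, 30.0]   # different chromaticity -> true object
mask = shadow_mask(fr, bg)
```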

  1. Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity

    NASA Astrophysics Data System (ADS)

    Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

    2012-04-01

    In recent years, music video data has been increasing at an astonishing speed. Shot segmentation and keyframe extraction constitute a fundamental unit in organizing, indexing, and retrieving video content. In this paper a unified framework is proposed to detect shot boundaries and extract the keyframe of a shot. The music video is first segmented into shots using an illumination-invariant chromaticity histogram in an independent component (IC) analysis feature space. Then we present a new metric, image complexity, computed from the ICs, to extract the keyframe of a shot. Experimental results show the framework is effective and has good performance.

  2. Two-dimensional finite-element analysis of tapered segmented structures

    NASA Astrophysics Data System (ADS)

    Rubio Noriega, Ruth; Hernandez-Figueroa, Hugo

    2013-03-01

    We present the results of a theoretical study and two-dimensional frequency-domain finite-element simulation of tapered segmented waveguides. The application that we propose for this device is an adiabatically tapered and chirped PSW transmission, to eliminate higher-order modes that can propagate in a multimode semiconductor waveguide, ensuring single-mode propagation at 1.55 μm. We demonstrate that by reducing the taper functions for the design of a segmented waveguide we can filter higher-order modes at the pump wavelength in WDM systems while at the same time maintaining low coupling losses between the continuous waveguide and the segmented waveguide. We obtained the cutoff wavelength as a function of the duty cycle of the segmented waveguide to show that we can, in fact, guide the 1.55 μm fundamental mode over a silicon-on-insulator platform using both silica and SU-8 as substrate material. For the two-dimensional finite-element analysis a new module on a commercial platform is proposed. Its contribution is the inclusion of the anisotropic perfectly matched layer, which is more suitable for solving periodic segmented structures and other discontinuity problems.

  3. Computer model analysis of the relationship of ST-segment and ST-segment/heart rate slope response to the constituents of the ischemic injury source.

    PubMed

    Hyttinen, J; Viik, J; Lehtinen, R; Plonsey, R; Malmivuo, J

    1997-07-01

    The objective of the study was to investigate a proposed linear relationship between the extent of myocardial ischemic injury and the ST-segment/heart rate (ST/HR) slope by computer simulation of the injury sources arising in exercise electrocardiographic (ECG) tests. The extent and location of the ischemic injury were simulated for both single- and multivessel coronary artery disease by use of an accurate source-volume conductor model which assumes a linear relationship between heart rate and extent of ischemia. The results indicated that in some cases the ST/HR slope in leads II, aVF, and especially V5 may be related to the extent of ischemia. However, the simulations demonstrated that neither the ST-segment deviation nor the ST/HR slope was directly proportional to either the area of the ischemic boundary or the number of vessels occluded. Furthermore, in multivessel coronary artery disease, the temporal and spatial diversity of the generated multiple injury sources distorted the presumed linearity between ST-segment deviation and heart rate. It was concluded that the ST/HR slope and ST-segment deviation of the 12-lead ECG are not able to indicate extent of ischemic injury or number of vessels occluded. PMID:9261724
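
For reference, the ST/HR slope examined by the simulation is simply the least-squares slope of ST-segment depression against heart rate over the course of exercise; a minimal sketch with illustrative values:

```python
def st_hr_slope(heart_rate, st_dep):
    """Least-squares slope of ST depression (mm) against heart rate (bpm);
    this is the quantity whose assumed linearity the simulations question."""
    n = len(heart_rate)
    mx = sum(heart_rate) / n
    my = sum(st_dep) / n
    num = sum((x - mx) * (y - my) for x, y in zip(heart_rate, st_dep))
    den = sum((x - mx) ** 2 for x in heart_rate)
    return num / den

hr = [70, 90, 110, 130, 150]       # bpm during graded exercise (illustrative)
st = [0.0, 0.4, 0.8, 1.2, 1.6]     # mm ST depression (illustrative)
slope = st_hr_slope(hr, st)        # mm per bpm
```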

  4. Comparison of Five Segmentation Tools for {sup 18}F-Fluoro-Deoxy-Glucose-Positron Emission Tomography-Based Target Volume Definition in Head and Neck Cancer

    SciTech Connect

    Schinagl, Dominic A.X. Vogel, Wouter V.; Hoffmann, Aswin L.; Dalen, Jorn A. van; Oyen, Wim J.; Kaanders, Johannes H.A.M.

    2007-11-15

    Purpose: Target-volume delineation for radiation treatment to the head and neck area traditionally is based on physical examination, computed tomography (CT), and magnetic resonance imaging. Additional molecular imaging with {sup 18}F-fluoro-deoxy-glucose (FDG)-positron emission tomography (PET) may improve definition of the gross tumor volume (GTV). In this study, five methods for tumor delineation on FDG-PET are compared with CT-based delineation. Methods and Materials: Seventy-eight patients with Stages II-IV squamous cell carcinoma of the head and neck area underwent coregistered CT and FDG-PET. The primary tumor was delineated on CT, and five PET-based GTVs were obtained: visual interpretation, applying an isocontour of a standardized uptake value of 2.5, using a fixed threshold of 40% and 50% of the maximum signal intensity, and applying an adaptive threshold based on the signal-to-background ratio. Absolute GTV volumes were compared, and overlap analyses were performed. Results: The GTV method of applying an isocontour of a standardized uptake value of 2.5 failed to provide successful delineation in 45% of cases. For the other PET delineation methods, volume and shape of the GTV were influenced heavily by the choice of segmentation tool. On average, all threshold-based PET-GTVs were smaller than on CT. Nevertheless, PET frequently detected significant tumor extension outside the GTV delineated on CT (15-34% of PET volume). Conclusions: The choice of segmentation tool for target-volume definition of head and neck cancer based on FDG-PET images is not trivial because it influences both volume and shape of the resulting GTV. With adequate delineation, PET may add significantly to CT- and physical examination-based GTV definition.
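
Three of the five delineation rules compared (fixed SUV isocontour, percentage-of-maximum threshold, and adaptive signal-to-background threshold) reduce to choosing a cutoff and taking an isocontour. The sketch below is illustrative only: the adaptive rule is simplified to background plus half the signal excess, which stands in for, but is not, the study's calibrated signal-to-background method.

```python
import numpy as np

def threshold_gtv(suv, mode, background=None):
    """Return a binary GTV mask for one of three simple thresholding rules:
    'suv2.5' - isocontour at SUV 2.5
    'max40'  - 40% of the maximum signal intensity
    'sbr'    - illustrative adaptive rule: bg + 50% of (max - bg)."""
    if mode == "suv2.5":
        t = 2.5
    elif mode == "max40":
        t = 0.40 * suv.max()
    elif mode == "sbr":
        t = background + 0.5 * (suv.max() - background)
    else:
        raise ValueError(mode)
    return suv >= t

suv = np.array([[1.0, 1.0, 1.0],
                [1.0, 6.0, 8.0],
                [1.0, 4.0, 1.0]])                     # toy SUV map
gtv_fixed = threshold_gtv(suv, "max40")               # t = 3.2
gtv_suv   = threshold_gtv(suv, "suv2.5")              # t = 2.5
gtv_adapt = threshold_gtv(suv, "sbr", background=1.0) # t = 4.5
```

Note how the adaptive cutoff shrinks the volume relative to the fixed rules, mirroring the paper's point that the choice of segmentation tool changes both volume and shape of the GTV.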

  5. FIELD VALIDATION OF EXPOSURE ASSESSMENT MODELS. VOLUME 2. ANALYSIS

    EPA Science Inventory

    This is the second of two volumes describing a series of dual tracer experiments designed to evaluate the PAL-DS model, a Gaussian diffusion model modified to take into account settling and deposition, as well as three other deposition models. In this volume, an analysis of the d...

  6. Mean-Field Analysis of Recursive Entropic Segmentation of Biological Sequences

    NASA Astrophysics Data System (ADS)

    Cheong, Siew-Ann; Stodghill, Paul; Schneider, David; Myers, Christopher

    2007-03-01

    Horizontal gene transfer in bacteria results in genomic sequences which are mosaic in nature. An important first step in the analysis of a bacterial genome would thus be to model the statistically nonstationary nucleotide or protein sequence with a collection of P stationary Markov chains, and partition the sequence of length N into M statistically stationary segments/domains. This can be done for Markov chains of order K = 0 using a recursive segmentation scheme based on the Jensen-Shannon divergence, where the unknown parameters P and M are estimated from a hypothesis testing/model selection process. In this talk, we describe how the Jensen-Shannon divergence can be generalized to Markov chains of order K > 0, as well as an algorithm optimizing the positions of a fixed number of domain walls. We then describe a mean field analysis of the generalized recursive Jensen-Shannon segmentation scheme, and show how most domain walls appear as local maxima in the divergence spectrum of the sequence, before highlighting the main problem associated with the recursive segmentation scheme, i.e. the strengths of the domain walls selected recursively do not decrease monotonically. This problem is especially severe in repetitive sequences, whose statistical signatures we will also discuss.
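
For order K = 0, one step of the recursive scheme is easy to state: split at the position where the Jensen-Shannon divergence between the compositions of the two halves is maximal. A minimal sketch (the full scheme recurses on each half and stops via the hypothesis testing/model selection step, which is omitted here):

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of a composition given as a Counter."""
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values() if c)

def js_divergence(seq, i):
    """Jensen-Shannon divergence between the nucleotide compositions of
    seq[:i] and seq[i:], weighted by segment length (order K = 0)."""
    left, right = Counter(seq[:i]), Counter(seq[i:])
    n, nl = len(seq), i
    return (entropy(left + right)
            - (nl / n) * entropy(left)
            - ((n - nl) / n) * entropy(right))

def best_split(seq):
    """One step of the recursive scheme: the cut maximizing the divergence."""
    return max(range(1, len(seq)), key=lambda i: js_divergence(seq, i))

seq = "A" * 40 + "G" * 40   # two trivially stationary domains
cut = best_split(seq)       # the domain wall appears as the divergence maximum
```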

  7. Segment and fit thresholding: a new method for image analysis applied to microarray and immunofluorescence data.

    PubMed

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B

    2015-10-01

    Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis. PMID:26339978
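
The core idea, computing statistics over small segments of the image and using segments identified as background to set the threshold, can be caricatured in a few lines. This is a heavily simplified stand-in, not the published SFT algorithm: background tiles are taken to be the lowest-variance quarter, and the trend-fitting step is replaced by a fixed mean-plus-k-sigma rule.

```python
import numpy as np

def segment_stats(img, block=8):
    """Tile the image and collect (mean, std) per tile - the per-segment
    statistics that SFT fits trends between."""
    h, w = img.shape
    stats = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            tile = img[i:i + block, j:j + block]
            stats.append((tile.mean(), tile.std()))
    return np.array(stats)

def sft_like_threshold(img, block=8, k=4.0):
    """Simplified stand-in: call the lowest-variance quarter of tiles the
    background, then threshold at background mean + k * background std."""
    stats = segment_stats(img, block)
    order = np.argsort(stats[:, 1])               # sort tiles by std
    bg = stats[order[: max(1, len(stats) // 4)]]  # quietest quarter = background
    return bg[:, 0].mean() + k * bg[:, 1].mean()

rng = np.random.default_rng(0)
img = rng.normal(10.0, 1.0, (64, 64))   # flat background
img[20:28, 20:28] += 50.0               # one bright signal spot
t = sft_like_threshold(img)
signal = img > t                         # signal pixels
```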

  8. Microscopy Image Browser: A Platform for Segmentation and Analysis of Multidimensional Datasets.

    PubMed

    Belevich, Ilya; Joensuu, Merja; Kumar, Darshan; Vihinen, Helena; Jokitalo, Eija

    2016-01-01

    Understanding the structure-function relationship of cells and organelles in their natural context requires multidimensional imaging. As techniques for multimodal 3-D imaging have become more accessible, effective processing, visualization, and analysis of large datasets are posing a bottleneck for the workflow. Here, we present a new software package for high-performance segmentation and image processing of multidimensional datasets that improves and facilitates the full utilization and quantitative analysis of acquired data, which is freely available from a dedicated website. The open-source environment enables modification and insertion of new plug-ins to customize the program for specific needs. We provide practical examples of program features used for processing, segmentation and analysis of light and electron microscopy datasets, and detailed tutorials to enable users to rapidly and thoroughly learn how to use the program. PMID:26727152

  9. Microscopy Image Browser: A Platform for Segmentation and Analysis of Multidimensional Datasets

    PubMed Central

    Belevich, Ilya; Joensuu, Merja; Kumar, Darshan; Vihinen, Helena; Jokitalo, Eija

    2016-01-01

    Understanding the structure–function relationship of cells and organelles in their natural context requires multidimensional imaging. As techniques for multimodal 3-D imaging have become more accessible, effective processing, visualization, and analysis of large datasets are posing a bottleneck for the workflow. Here, we present a new software package for high-performance segmentation and image processing of multidimensional datasets that improves and facilitates the full utilization and quantitative analysis of acquired data, which is freely available from a dedicated website. The open-source environment enables modification and insertion of new plug-ins to customize the program for specific needs. We provide practical examples of program features used for processing, segmentation and analysis of light and electron microscopy datasets, and detailed tutorials to enable users to rapidly and thoroughly learn how to use the program. PMID:26727152

  10. A new image segmentation method based on multifractal detrended moving average analysis

    NASA Astrophysics Data System (ADS)

    Shi, Wen; Zou, Rui-biao; Wang, Fang; Su, Le

    2015-08-01

    In order to segment and delineate regions of interest in an image, we propose a novel algorithm based on multifractal detrended moving average analysis (MF-DMA). In this method, the generalized Hurst exponent h(q) is first calculated for every pixel and considered as the local feature of a surface. Then a multifractal detrended moving average spectrum (MF-DMS) D(h(q)) is defined using the idea of the box-counting dimension method. We therefore call the new image segmentation method the MF-DMS-based algorithm. The performance of the MF-DMS-based method is tested in two image segmentation experiments on rapeseed leaf images of potassium deficiency and magnesium deficiency under three cases, namely, backward (θ = 0), centered (θ = 0.5), and forward (θ = 1), with different q values. Comparison experiments are conducted between the MF-DMS method and two other multifractal segmentation methods, namely, the popular MFS-based and latest MF-DFS-based methods. The results show that our MF-DMS-based method is superior to the latter two. The best segmentation result for the rapeseed leaf images of potassium deficiency and magnesium deficiency comes from the same parameter combination of θ = 0.5 and D(h(-10)) when using the MF-DMS-based method. An interesting finding is that D(h(-10)) outperforms other parameters for both the MF-DMS-based method with the centered case and the MF-DFS-based algorithm. By comparing the multifractal nature of nutrient-deficient and non-deficient areas determined by the segmentation results, an important finding is that the fluctuation of gray values in nutrient-deficient areas is much more severe than in non-deficient areas.
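
The detrended moving average machinery underneath MF-DMA can be illustrated in one dimension for q = 2 (the paper computes h(q) per pixel over image surfaces, which is omitted here): detrend the profile with a moving average whose alignment is set by θ, and read the Hurst exponent off the log-log slope of the fluctuation function.

```python
import numpy as np

def dma_fluctuation(x, window, theta=0.5):
    """Detrended-moving-average fluctuation of the profile of x for one
    window size; theta = 0.5 gives the centered moving average."""
    y = np.cumsum(x - x.mean())                  # profile
    kernel = np.ones(window) / window
    trend = np.convolve(y, kernel, mode="valid")
    shift = int((window - 1) * (1 - theta))      # align profile with the MA
    resid = y[shift: shift + len(trend)] - trend
    return np.sqrt(np.mean(resid ** 2))

def hurst_dma(x, windows=(4, 8, 16, 32, 64), theta=0.5):
    """Slope of log F(n) vs log n: the Hurst exponent, i.e. h(q) at q = 2."""
    F = [dma_fluctuation(x, n, theta) for n in windows]
    return np.polyfit(np.log(windows), np.log(F), 1)[0]

rng = np.random.default_rng(2)
h = hurst_dma(rng.standard_normal(4096))   # white noise: h should be near 0.5
```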

  11. Automated iterative neutrosophic lung segmentation for image analysis in thoracic computed tomography

    PubMed Central

    Guo, Yanhui; Zhou, Chuan; Chan, Heang-Ping; Chughtai, Aamer; Wei, Jun; Hadjiiski, Lubomir M.; Kazerooni, Ella A.

    2013-01-01

    Purpose: Lung segmentation is a fundamental step in many image analysis applications for lung diseases and abnormalities in thoracic computed tomography (CT). The authors have previously developed a lung segmentation method based on expectation-maximization (EM) analysis and morphological operations (EMM) for our computer-aided detection (CAD) system for pulmonary embolism (PE) in CT pulmonary angiography (CTPA). However, due to the large variations in pathology that may be present in thoracic CT images, it is difficult to extract the lung regions accurately, especially when the lung parenchyma contains extensive lung diseases. The purpose of this study is to develop a new method that can provide accurate lung segmentation, including those affected by lung diseases. Methods: An iterative neutrosophic lung segmentation (INLS) method was developed to improve the EMM segmentation utilizing the anatomic features of the ribs and lungs. The initial lung regions (ILRs) were extracted using our previously developed EMM method, in which the ribs were extracted using 3D hierarchical EM segmentation and the ribcage was constructed using morphological operations. Based on the anatomic features of ribs and lungs, the initial EMM segmentation was refined using INLS to obtain the final lung regions. In the INLS method, the anatomic features were mapped into a neutrosophic domain, and the neutrosophic operation was performed iteratively to refine the ILRs. With IRB approval, 5 and 58 CTPA scans were collected retrospectively and used as training and test sets, of which 2 and 34 cases had lung diseases, respectively. The lung regions manually outlined by an experienced thoracic radiologist were used as reference standard for performance evaluation of the automated lung segmentation. The percentage overlap area (POA), the Hausdorff distance (Hdist), and the average distance (AvgDist) of the lung boundaries relative to the reference standard were used as performance metrics. 
Results: The proposed method achieved larger POAs and smaller distance errors than the EMM method. For the 58 test cases, the average POA, Hdist, and AvgDist were improved from 85.4 ± 18.4%, 22.6 ± 29.4 mm, and 3.5 ± 5.4 mm using EMM to 91.2 ± 6.7%, 16.0 ± 11.3 mm, and 2.5 ± 1.0 mm using INLS, respectively. The improvements were statistically significant (p < 0.05). To evaluate the accuracy of the INLS method in the identification of the lung boundaries affected by lung diseases, the authors separately analyzed the performance of the proposed method on the cases with versus without the lung diseases. The results showed that the cases without lung diseases were segmented more accurately than the cases with lung diseases by both the EMM and the INLS methods, but the INLS method achieved better performance than the EMM method in both cases. Conclusions: The new INLS method utilizing the anatomic features of the rib and lung significantly improved the accuracy of lung segmentation, especially for the cases affected by lung diseases. Improvement in lung segmentation will facilitate many image analysis tasks and CAD applications for lung diseases and abnormalities in thoracic CT, including automated PE detection. PMID:23927326
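The percentage overlap area (POA) used as a performance metric above can be illustrated with a short sketch. This is not the authors' code; it assumes an intersection-over-union convention for the POA (the paper's exact definition may differ) and binary NumPy masks.

```python
import numpy as np

def percentage_overlap_area(auto_mask, ref_mask):
    """Overlap of an automated mask with a reference mask.

    Defined here as intersection area divided by union area, as a
    percentage (one common convention; illustrative only)."""
    auto_mask = np.asarray(auto_mask, dtype=bool)
    ref_mask = np.asarray(ref_mask, dtype=bool)
    inter = np.logical_and(auto_mask, ref_mask).sum()
    union = np.logical_or(auto_mask, ref_mask).sum()
    return 100.0 * inter / union

# toy 1D example: 3 overlapping pixels, union of 5 pixels
auto = [1, 1, 1, 0, 1]
ref  = [0, 1, 1, 1, 1]
print(percentage_overlap_area(auto, ref))  # 60.0
```

The same masks would feed the boundary-distance metrics (Hdist, AvgDist), which compare contour points rather than areas.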

  12. Mimicking human expert interpretation of remotely sensed raster imagery by using a novel segmentation analysis within ArcGIS

    NASA Astrophysics Data System (ADS)

    Le Bas, Tim; Scarth, Anthony; Bunting, Peter

    2015-04-01

Traditional computer-based methods for the interpretation of remotely sensed imagery use each pixel individually, or the average of a small window of pixels, to calculate a class or thematic value, which provides an interpretation. However, when a human expert interprets imagery, the human eye is excellent at finding coherent and homogeneous areas and edge features. It may therefore be advantageous for computer analysis to mimic human interpretation. A new toolbox for ArcGIS 10.x will be presented that segments the data layers into a set of polygons. Each polygon is defined by a K-means clustering and region growing algorithm, thus finding areas, their edges and any lineations in the imagery. Attached to each polygon are the characteristics of the imagery, such as the mean and standard deviation of the pixel values within the polygon. The segmentation of imagery into a jigsaw of polygons also has the advantage that the human interpreter does not need to spend hours digitising the boundaries. The segmentation process has been taken from the RSGIS library of analysis and classification routines (Bunting et al., 2014). These routines are freeware and have been modified to be available in the ArcToolbox under the Windows (v7) operating system. Input to the segmentation process is a multi-layered raster image, for example a Landsat image, or a set of raster datasets made up from derivatives of topography. The size and number of polygons are set by the user and are dependent on the imagery used. Examples will be presented of data from the marine environment utilising bathymetric depth, slope, rugosity and backscatter from a multibeam system. Meaningful classification of the polygons using their numerical characteristics is the next goal. Object based image analysis (OBIA) should help this workflow. Fully calibrated imagery systems will allow numerical classification to be translated into more readily understandable terms. Bunting, P., Clewley, D., Lucas, R. M., and Gillingham, S., 2014. The Remote Sensing and GIS Software Library (RSGISLib). Computers & Geosciences, 62, 216-226. http://dx.doi.org/10.1016/j.cageo.2013.08.007.
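The clustering at the core of the segmentation step can be sketched in a few lines. This is an illustration of Lloyd's K-means algorithm on a single 1D intensity layer, not the RSGISLib implementation, which clusters multi-layer rasters and follows with region growing and polygon export.

```python
import numpy as np

def kmeans_pixels(values, k, iters=20, seed=0):
    """Cluster pixel values into k classes (Lloyd's algorithm).

    Returns (labels, centroids). Region growing and polygonization,
    as in the toolbox described above, would follow as separate
    steps; this sketch covers only the clustering."""
    values = np.asarray(values, dtype=float).reshape(-1, 1)
    rng = np.random.default_rng(seed)
    centroids = values[rng.choice(len(values), k, replace=False)]
    for _ in range(iters):
        # assign each pixel to its nearest centroid
        labels = np.argmin(np.abs(values - centroids.T), axis=1)
        # move each centroid to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = values[labels == j].mean()
    return labels, centroids.ravel()

# two well-separated intensity populations
labels, centroids = kmeans_pixels([0.0, 0.1, 0.2, 5.0, 5.1, 5.2], 2)
```

In the multi-layer case the distance would be computed over all raster bands at once, so pixels cluster on their full spectral signature.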

  13. Effects of slice thickness and head rotation when measuring glioma sizes on MRI: in support of volume segmentation versus two largest diameters methods.

    PubMed

    Schmitt, Pierre; Mandonnet, Emmanuel; Perdreau, Adrien; Angelini, Elsa D

    2013-04-01

    This paper presents a study of the effects of scanning parameters variability when assessing glioma sizes on MRI. A database of lesions of various shapes and sizes, segmented on 3D-SPGR MRI images, was acquired on 65 patients with low-grade glioma. Simulations of large slice thickness and patient's head rotation were performed, allowing us to study their influence on two size indices: the bi-dimensional diameter product index (computed with the two largest diameters method) and the equivalent diameter index (computed with the volume segmentation method). Results show that thick slices and axial plane rotation can induce average (maximal) uncertainties on the bi-dimensional diameter product index between 32 and 6 % (150 %) for small and large tumors (size range 0.5-286 ml). The uncertainty on the equivalent diameter index, for the same categories of tumors, drops below 8 and 0.1 % (23 %). This study shows that the volume segmentation method is subject to less variability inherent to scanning conditions compared to the two largest diameters method. It also emphasizes the need for strict clinical guidelines on the replication of scanning conditions when performing MRI follow-ups on patients harboring small tumors. These implications await confirmation on a series of real patients being re-scanned with FLAIR MRI. PMID:23397270
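The two size indices compared above are simple to compute once the measurements exist. A sketch, assuming the equivalent diameter is that of a sphere having the segmented volume (1 ml = 1 cm³, so d = (6V/π)^(1/3)):

```python
import math

def equivalent_diameter_cm(volume_ml):
    """Diameter (cm) of a sphere whose volume equals the segmented
    tumor volume; computed from the volume segmentation method."""
    return (6.0 * volume_ml / math.pi) ** (1.0 / 3.0)

def bidimensional_product(d1_cm, d2_cm):
    """Product (cm^2) of the two largest perpendicular diameters,
    as in the two largest diameters method."""
    return d1_cm * d2_cm

# the study's size range, 0.5-286 ml, spans roughly 1 cm to 8 cm
# in equivalent diameter
print(equivalent_diameter_cm(0.5))
print(equivalent_diameter_cm(286.0))
```

Because the equivalent diameter integrates the whole segmented volume, it is less sensitive to slice thickness and head rotation than any individual diameter measurement, which is the study's central point.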

  14. The Impact of Policy Guidelines on Hospital Antibiotic Use over a Decade: A Segmented Time Series Analysis

    PubMed Central

    Chandy, Sujith J.; Naik, Girish S.; Charles, Reni; Jeyaseelan, Visalakshi; Naumova, Elena N.; Thomas, Kurien; Lundborg, Cecilia Stalsby

    2014-01-01

Introduction Antibiotic pressure contributes to rising antibiotic resistance. Policy guidelines encourage rational prescribing behavior, but effectiveness in containing antibiotic use needs further assessment. This study therefore assessed the patterns of antibiotic use over a decade and analyzed the impact of different modes of guideline development and dissemination on inpatient antibiotic use. Methods Antibiotic use was calculated monthly as defined daily doses (DDD) per 100 bed days for nine antibiotic groups and overall. This time series analysis compared trends in antibiotic use across five adjacent time periods identified as ‘Segments,’ divided based on differing modes of guideline development and implementation: Segment 1 – Baseline prior to antibiotic guidelines development; Segment 2 – During preparation of guidelines and booklet dissemination; Segment 3 – Dormant period with no guidelines dissemination; Segment 4 – Booklet dissemination of revised guidelines; Segment 5 – Booklet dissemination of revised guidelines with intranet access. Regression analysis adapted for segmented time series and adjusted for seasonality assessed changes in the antibiotic use trend. Results Overall antibiotic use increased at a monthly rate of 0.95 (SE = 0.18), 0.21 (SE = 0.08) and 0.31 (SE = 0.06) for Segments 1, 2 and 3, stabilized in Segment 4 (0.05; SE = 0.10) and declined in Segment 5 (-0.37; SE = 0.11). Segments 1, 2 and 4 exhibited seasonal fluctuations. Pairwise segmented regression adjusted for seasonality revealed a significant drop in monthly antibiotic use of 0.401 (SE = 0.089; p < 0.001) for Segment 5 compared to Segment 4. Most antibiotic groups showed similar trends to overall use. Conclusion Use of overall and specific antibiotic groups showed varied patterns and seasonal fluctuations. Containment of rising overall antibiotic use was possible during periods of active guideline dissemination. Wider access through the intranet facilitated a significant decline in use.
Stakeholders and policy makers are urged to develop guidelines, ensure active dissemination and enable accessibility through computer networks to contain antibiotic use and decrease antibiotic pressure. PMID:24647339
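The segmented-regression idea, fitting a separate monthly trend to each policy period, can be sketched as follows. This is illustrative only: the study's model additionally adjusted for seasonality (e.g., with harmonic terms) and tested segment-to-segment changes formally, and the breakpoint and data below are synthetic.

```python
import numpy as np

def segment_slopes(y, breakpoints):
    """Fit an ordinary least-squares line to each segment of a
    monthly series; return the per-segment slopes (trend/month)."""
    y = np.asarray(y, dtype=float)
    edges = [0] + list(breakpoints) + [len(y)]
    slopes = []
    for a, b in zip(edges[:-1], edges[1:]):
        t = np.arange(a, b)
        slope, _intercept = np.polyfit(t, y[a:b], 1)
        slopes.append(slope)
    return slopes

# synthetic DDD/100 bed days: rising trend for 24 months, then a
# decline after a guideline-dissemination breakpoint
y = np.concatenate([0.9 * np.arange(24), 21.6 - 0.4 * np.arange(24)])
print(segment_slopes(y, [24]))
```

A change in sign or magnitude of the fitted slope across a breakpoint is what the study reports as a trend change between segments.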

  15. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    SciTech Connect

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi

    2010-05-15

Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as the "gold standard." Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold-standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics showed excellent agreement with the gold-standard manual volumetrics (intraclass correlation coefficient 0.95), with no statistically significant difference (F = 0.77; p(F ≤ f) = 0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less completion time than manual segmentation (which averaged 39 min per case). Conclusions: The computerized liver extraction scheme provides an efficient and accurate way of measuring liver volumes in CT.
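Once the refined boundary is voxelized into a binary mask, the volume computation itself is simple: count the voxels inside the contour and multiply by the physical voxel size. A minimal sketch (hypothetical spacing values; not the authors' pipeline):

```python
import numpy as np

def mask_volume_cc(mask, spacing_mm):
    """Volume in cubic centimeters of a binary segmentation mask,
    given the CT voxel spacing (dz, dy, dx) in millimeters."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return np.count_nonzero(mask) * voxel_mm3 / 1000.0

# toy example: 1000 voxels of 1 x 1 x 1 mm -> 1 cc
mask = np.zeros((10, 10, 20), dtype=bool)
mask[:, :, :10] = True
print(mask_volume_cc(mask, (1.0, 1.0, 1.0)))  # 1.0
```

The accuracy of the result therefore rests entirely on the segmentation step; the volumetry itself introduces no error beyond voxel discretization.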

  16. Automated segmentation of muscle and adipose tissue on CT images for human body composition analysis

    NASA Astrophysics Data System (ADS)

    Chung, Howard; Cobzas, Dana; Birdsell, Laura; Lieffers, Jessica; Baracos, Vickie

    2009-02-01

The ability to compute body composition in cancer patients lends itself to determining the specific clinical outcomes associated with fat and lean tissue stores. For example, a wasting syndrome of advanced disease associates with shortened survival. Moreover, certain tissue compartments represent sites for drug distribution and are likely determinants of chemotherapy efficacy and toxicity. CT images are abundant, but these cannot be fully exploited unless there exist practical and fast approaches for tissue quantification. Here we propose a fully automated method for segmenting muscle and visceral and subcutaneous adipose tissue (VAT and SAT), taking the approach of shape modeling for the analysis of skeletal muscle. Muscle shape is represented using PCA-encoded Free Form Deformations with respect to a mean shape. The shape model is learned from manually segmented images and used in conjunction with a tissue appearance prior. VAT and SAT are segmented based on the final deformed muscle shape. In comparing the automatic and manual methods, coefficients of variation (COV) of 1-2% were similar to or smaller than the inter- and intra-observer COVs reported for manual segmentation.
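The PCA shape encoding described above can be sketched with plain NumPy. The actual method applies PCA to Free Form Deformation parameters learned from manually segmented images; in this simplified sketch, each training shape is just a flattened coordinate vector.

```python
import numpy as np

def shape_pca(shapes, n_modes=2):
    """PCA of training shapes (each a flattened vector of landmark
    or control-point coordinates): returns the mean shape and the
    first n_modes principal modes of variation."""
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    # right singular vectors of the centered data are the modes
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_modes]

# a new shape is then encoded compactly by its coefficients along
# the modes: coeffs = modes @ (shape - mean)
```

Restricting deformations to the span of a few modes is what keeps the fitted muscle shape anatomically plausible during segmentation.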

  17. Analysis of gene expression levels in individual bacterial cells without image segmentation

    SciTech Connect

    Kwak, In Hae; Son, Minjun; Hagen, Stephen J.

    2012-05-11

Highlights: • We present a method for extracting gene expression data from images of bacterial cells. • The method does not employ cell segmentation and does not require high magnification. • Fluorescence and phase contrast images of the cells are correlated through the physics of phase contrast. • We demonstrate the method by characterizing noisy expression of comX in Streptococcus mutans. -- Abstract: Studies of stochasticity in gene expression typically make use of fluorescent protein reporters, which permit the measurement of expression levels within individual cells by fluorescence microscopy. Analysis of such microscopy images is almost invariably based on a segmentation algorithm, where the image of a cell or cluster is analyzed mathematically to delineate individual cell boundaries. However, segmentation can be ineffective for studying bacterial cells or clusters, especially at lower magnification, where outlines of individual cells are poorly resolved. Here we demonstrate an alternative method for analyzing such images without segmentation. The method employs a comparison between the pixel brightness in phase contrast vs. fluorescence microscopy images. By fitting the correlation between phase contrast and fluorescence intensity to a physical model, we obtain well-defined estimates for the different levels of gene expression that are present in the cell or cluster. The method reveals the boundaries of the individual cells, even if the source images lack the resolution to show these boundaries clearly.
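At its simplest, the segmentation-free idea reduces to fitting the correlation between the two imaging channels pixel by pixel. The sketch below substitutes a plain linear fit for the paper's physical model of phase contrast, so it is only a schematic of the approach, with synthetic pixel values:

```python
import numpy as np

def expression_level(phase, fluor):
    """Slope of fluorescence vs. phase-contrast pixel intensity.

    Under a crude proportional model (a stand-in for the physical
    model in the paper), pixels containing more cell material vary
    together in both channels, so the fitted slope estimates the
    expression level without delineating any cell boundary."""
    slope, _intercept = np.polyfit(phase, fluor, 1)
    return slope

# synthetic pixels: fluorescence proportional to cell material
phase = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
fluor = 3.0 * phase + 0.5
print(expression_level(phase, fluor))
```

A cluster containing subpopulations with distinct expression levels would show up as distinct branches in the phase-fluorescence scatter rather than a single line, which is what the full model resolves.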

  18. An Improved Level Set for Liver Segmentation and Perfusion Analysis in MRIs.

    PubMed

    Chen, Gang; Gu, Lixu; Qian, Lijun; Xu, Jianrong

    2009-01-01

Determining liver segmentation accurately from MRIs is the primary and crucial step for any automated liver perfusion analysis, which provides important information about the blood supply to the liver. Although implicit contour extraction methods, such as level set methods (LSMs) and active contours, are often used to segment livers, the results are not always satisfactory due to the presence of artifacts and low-gradient response on the liver boundary. In this paper, we propose a multiple-initialization, multiple-step LSM to overcome the leakage and over-segmentation problems. The multiple initialization curves are first evolved separately using fast marching methods and LSMs, and are then combined with a convex hull algorithm to obtain a rough liver contour. Finally, the contour is evolved again using global level set smoothing to determine a precise liver boundary. Experimental results on 12 abdominal MRI series showed that the proposed approach obtained better liver segmentation results, so that a refined liver perfusion curve, free of respiratory effects, can be obtained using a modified chamfer matching algorithm; the resulting perfusion curves were evaluated by radiologists. PMID:19129028

  19. Stress Analysis of Bolted, Segmented Cylindrical Shells Exhibiting Flange Mating-Surface Waviness

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Phillips, Dawn R.; Raju, Ivatury S.

    2009-01-01

Bolted, segmented cylindrical shells are a common structural component in many engineering systems, especially aerospace launch vehicles. Segmented shells are often needed due to limitations of manufacturing capabilities or transportation issues related to very long, large-diameter cylindrical shells. These cylindrical shells typically have a flange or ring welded to opposite ends so that shell segments can be mated together and bolted to form a larger structural system. As the diameter of these shells increases, maintaining strict fabrication tolerances for the flanges to be flat and parallel on a welded structure is an extreme challenge. Local fit-up stresses develop in the structure due to flange mating-surface mismatch (flange waviness). These local stresses need to be considered when predicting a critical initial flaw size. Flange waviness is one contributor to the fit-up stress state. The present paper describes the modeling and analysis effort to simulate fit-up stresses due to flange waviness in a typical bolted, segmented cylindrical shell. Results from parametric studies are presented for various flange mating-surface waviness distributions and amplitudes.

  20. Analysis, design, and test of a graphite/polyimide Shuttle orbiter body flap segment

    NASA Technical Reports Server (NTRS)

    Graves, S. R.; Morita, W. H.

    1982-01-01

For future missions, increases in Space Shuttle orbiter deliverable and recoverable payload weight capability may be needed. Such increases could be obtained by reducing the inert weight of the Shuttle. The application of advanced composites in orbiter structural components would make it possible to achieve such reductions. In 1975, NASA selected the orbiter body flap as a demonstration component for the Composites for Advanced Space Transportation Systems (CASTS) program. The progress made from 1977 through 1980 was integrated into a design of a graphite/polyimide (Gr/Pi) body flap technology demonstration segment (TDS). Aspects of composite body flap design and analysis are discussed, taking into account the direct-bond fibrous refractory composite insulation (FRCI) tile on Gr/Pi structure, Gr/Pi body flap weight savings, the body flap design concept, and composite body flap analysis. Details regarding the Gr/Pi technology demonstration segment are also examined.

  1. Sequence analysis of 12 genome segments of mud crab reovirus (MCRV).

    PubMed

    Deng, Xie-xiong; Lü, Ling; Ou, Yu-jie; Su, Hong-jun; Li, Guang; Guo, Zhi-xun; Zhang, Rui; Zheng, Pei-rui; Chen, Yong-gui; He, Jian-guo; Weng, Shao-ping

    2012-01-20

Mud crab reovirus (MCRV) is the causative agent of a serious disease with high mortality in cultured mud crab (Scylla serrata). This study sequenced and analyzed 12 genome segments of MCRV. The 12 genome segments had a total length of 24.464 kb, with an overall G+C content of 41.29%, and were predicted to contain 15 ORFs. Sequence analysis showed that the majority of MCRV genes shared low homology with the counterpart genes of other reoviruses; e.g., the amino acid identity of the RNA-dependent RNA polymerase (RdRp) was lower than 13.0% compared to the RdRp sequences of other reoviruses. Nucleotide and amino acid sequences of RdRp and the capping enzyme suggested that MCRV forms a distinct group. Further genome-based phylogenetic analysis of conserved termini and the reovirus polymerase motif indicates that MCRV belongs to a new genus of the family Reoviridae, tentatively named Crabreovirus. PMID:22088215
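The reported G+C content is a simple sequence statistic. A minimal sketch in plain Python, with a toy sequence rather than the MCRV segments:

```python
def gc_content(seq):
    """Percentage of G and C bases in a nucleotide sequence."""
    seq = seq.upper()
    gc = sum(seq.count(base) for base in "GC")
    return 100.0 * gc / len(seq)

# toy example: 4 of 6 bases are G or C
print(round(gc_content("ATGCGC"), 2))  # 66.67
```

Applied across all 12 segments concatenated, the same computation yields the overall figure quoted above (41.29% for the 24.464 kb of sequence).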

  2. Screening Analysis : Volume 1, Description and Conclusions.

    SciTech Connect

    Bonneville Power Administration; Corps of Engineers; Bureau of Reclamation

    1992-08-01

The SOR consists of three analytical phases leading to a Draft EIS. The first phase, Pilot Analysis, was performed for the purpose of testing the decision analysis methodology being used in the SOR. The Pilot Analysis is described later in this chapter. The second phase, Screening Analysis, examines all possible operating alternatives using a simplified analytical approach. It is described in detail in this and the next chapter. This document also presents the results of screening. The final phase, Full-Scale Analysis, will be documented in the Draft EIS and is intended to evaluate comprehensively the few best alternatives arising from the screening analysis. The purpose of screening is to analyze a wide variety of differing ways of operating the Columbia River system to test the reaction of the system to change. The many alternatives considered reflect the range of needs and requirements of the various river users and interests in the Columbia River Basin. While some of the alternatives might be viewed as extreme, the information gained from the analysis is useful in highlighting issues and conflicts in meeting operating objectives. Screening is also intended to develop a broad technical basis for evaluation, including input from regional experts, and to begin developing an evaluation capability for each river use that will support full-scale analysis. Finally, screening provides a logical method for examining all possible options and reaching a decision on a few alternatives worthy of full-scale analysis. An organizational structure was developed and staffed to manage and execute the SOR, specifically during the screening phase and the upcoming full-scale analysis phase. The organization involves ten technical work groups, each representing a particular river use. Several other groups exist to oversee or support the efforts of the work groups.

  3. Fetal autonomic brain age scores, segmented heart rate variability analysis, and traditional short term variability

    PubMed Central

    Hoyer, Dirk; Kowalski, Eva-Maria; Schmidt, Alexander; Tetschke, Florian; Nowack, Samuel; Rudolph, Anja; Wallwitz, Ulrike; Kynass, Isabelle; Bode, Franziska; Tegtmeyer, Janine; Kumm, Kathrin; Moraru, Liviu; Götz, Theresa; Haueisen, Jens; Witte, Otto W.; Schleußner, Ekkehard; Schneider, Uwe

    2014-01-01

Disturbances of fetal autonomic brain development can be evaluated from fetal heart rate patterns (HRP) reflecting the activity of the autonomic nervous system. Although HRP analysis from cardiotocographic (CTG) recordings is established for fetal surveillance, temporal resolution is low. Fetal magnetocardiography (MCG), however, provides stable continuous recordings at a higher temporal resolution combined with a more precise heart rate variability (HRV) analysis. A direct comparison of CTG and MCG based HRV analysis is pending. The aims of the present study are: (i) to compare the fetal maturation age predicting value of the MCG based fetal Autonomic Brain Age Score (fABAS) approach with that of CTG based Dawes-Redman methodology; and (ii) to elaborate fABAS methodology by segmentation according to fetal behavioral states and HRP. We investigated MCG recordings from 418 normal fetuses, aged between 21 and 40 weeks of gestation. In linear regression models we obtained an age predicting value of CTG compatible short term variability (STV) of R2 = 0.200 (coefficient of determination) in contrast to MCG/fABAS related multivariate models with R2 = 0.648 in 30 min recordings, R2 = 0.610 in active sleep segments of 10 min, and R2 = 0.626 in quiet sleep segments of 10 min. Additionally, segmented analysis under particular exclusion of accelerations (AC) and decelerations (DC) in quiet sleep resulted in a novel multivariate model with R2 = 0.706. According to our results, fMCG based fABAS may provide a promising tool for the estimation of fetal autonomic brain age. Besides other traditional and novel HRV indices as possible indicators of developmental disturbances, the establishment of a fABAS score nomogram may represent a specific reference. The present results are intended to contribute to further exploration and validation using independent data sets and multicenter research structures. PMID:25505399

  4. Fetal autonomic brain age scores, segmented heart rate variability analysis, and traditional short term variability.

    PubMed

    Hoyer, Dirk; Kowalski, Eva-Maria; Schmidt, Alexander; Tetschke, Florian; Nowack, Samuel; Rudolph, Anja; Wallwitz, Ulrike; Kynass, Isabelle; Bode, Franziska; Tegtmeyer, Janine; Kumm, Kathrin; Moraru, Liviu; Götz, Theresa; Haueisen, Jens; Witte, Otto W; Schleußner, Ekkehard; Schneider, Uwe

    2014-01-01

Disturbances of fetal autonomic brain development can be evaluated from fetal heart rate patterns (HRP) reflecting the activity of the autonomic nervous system. Although HRP analysis from cardiotocographic (CTG) recordings is established for fetal surveillance, temporal resolution is low. Fetal magnetocardiography (MCG), however, provides stable continuous recordings at a higher temporal resolution combined with a more precise heart rate variability (HRV) analysis. A direct comparison of CTG and MCG based HRV analysis is pending. The aims of the present study are: (i) to compare the fetal maturation age predicting value of the MCG based fetal Autonomic Brain Age Score (fABAS) approach with that of CTG based Dawes-Redman methodology; and (ii) to elaborate fABAS methodology by segmentation according to fetal behavioral states and HRP. We investigated MCG recordings from 418 normal fetuses, aged between 21 and 40 weeks of gestation. In linear regression models we obtained an age predicting value of CTG compatible short term variability (STV) of R2 = 0.200 (coefficient of determination) in contrast to MCG/fABAS related multivariate models with R2 = 0.648 in 30 min recordings, R2 = 0.610 in active sleep segments of 10 min, and R2 = 0.626 in quiet sleep segments of 10 min. Additionally, segmented analysis under particular exclusion of accelerations (AC) and decelerations (DC) in quiet sleep resulted in a novel multivariate model with R2 = 0.706. According to our results, fMCG based fABAS may provide a promising tool for the estimation of fetal autonomic brain age. Besides other traditional and novel HRV indices as possible indicators of developmental disturbances, the establishment of a fABAS score nomogram may represent a specific reference. The present results are intended to contribute to further exploration and validation using independent data sets and multicenter research structures. PMID:25505399
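The coefficient of determination R2 quoted throughout both records measures how much of the variance in gestational age a regression model explains. A minimal NumPy sketch of the definition, not of the study's multivariate models:

```python
import numpy as np

def r_squared(y, y_pred):
    """Coefficient of determination: the fraction of variance in y
    explained by the model predictions y_pred."""
    y = np.asarray(y, dtype=float)
    resid = y - np.asarray(y_pred, dtype=float)
    ss_res = np.sum(resid ** 2)              # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)     # total sum of squares
    return 1.0 - ss_res / ss_tot
```

On this scale, the jump from R2 = 0.200 (STV alone) to R2 = 0.706 (segmented multivariate model) means the residual unexplained variance drops from 80% to under 30%.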

  5. Association of mean platelet volume with impaired myocardial reperfusion and short-term mortality in patients with ST-segment elevation myocardial infarction undergoing primary percutaneous coronary intervention.

    PubMed

    Lai, Hong-Mei; Chen, Qing-Jie; Yang, Yi-Ning; Ma, Yi-Tong; Li, Xiao-Mei; Xu, Rui; Zhai, Hui; Liu, Fen; Chen, Bang-Dang; Zhao, Qian

    2016-01-01

Impaired myocardial reperfusion, defined angiographically by myocardial blush grade (MBG) 0 or 1, is associated with adverse clinical outcomes in patients with ST-segment elevation myocardial infarction (STEMI). The aim of this study was to investigate the impact of admission mean platelet volume (MPV) on myocardial reperfusion and 30-day all-cause mortality in patients with STEMI with successful epicardial reperfusion after primary percutaneous coronary intervention (PCI). A total of 453 patients with STEMI who underwent primary PCI within 12 h of symptom onset and achieved thrombolysis in myocardial infarction (TIMI) 3 flow at the infarct-related artery after PCI were enrolled and divided into two groups based on postinterventional MBG: those with MBG 2/3 and those with MBG 0/1. Admission MPV was measured before coronary angiography. The primary endpoint was all-cause mortality at 30 days. MPV was significantly higher in patients with MBG 0/1 than in patients with MBG 2/3 (10.38 ± 0.98 vs. 9.59 ± 0.73). Multivariate analysis demonstrated that MPV was independently associated with postinterventional impaired myocardial reperfusion (odds ratio 2.684, 95% confidence interval 2.010-3.585).

  6. Laser power conversion system analysis, volume 1

    NASA Technical Reports Server (NTRS)

    Jones, W. S.; Morgan, L. L.; Forsyth, J. B.; Skratt, J. P.

    1979-01-01

The orbit-to-orbit laser energy conversion system analysis established a mission model of satellites with various orbital parameters and average electrical power requirements ranging from 1 to 300 kW. The system analysis evaluated various conversion techniques, power system deployment parameters, power system electrical supplies, and other critical subsystems relative to various combinations of the mission model. The analysis showed that the laser power system would not be competitive with current satellite power systems from weight, cost and development risk standpoints.

  7. Buckling Design and Analysis of a Payload Fairing One-Sixth Cylindrical Arc-Segment Panel

    NASA Technical Reports Server (NTRS)

    Kosareo, Daniel N.; Oliver, Stanley T.; Bednarcyk, Brett A.

    2013-01-01

Design and analysis results are reported for a panel that is a one-sixth arc-segment of a full 33-ft diameter cylindrical barrel section of a payload fairing structure. Six such panels could be used to construct the fairing barrel, and, as such, compression buckling testing of a one-sixth arc-segment panel would serve as a validation test of the buckling analyses used to design the fairing panels. In this report, linear and nonlinear buckling analyses have been performed using finite element software for one-sixth arc-segment panels composed of aluminum honeycomb core with graphite/epoxy composite facesheets and an alternative fiber reinforced foam (FRF) composite sandwich design. The cross sections of both concepts were sized to represent realistic Space Launch System (SLS) payload fairing panels. Based on shell-based linear buckling analyses, smaller, more manageable buckling test panel dimensions were determined such that the panel would still be expected to buckle with a circumferential (as opposed to column-like) mode with significant separation between the first and second buckling modes. More detailed nonlinear buckling analyses were then conducted for honeycomb panels of various sizes using both Abaqus and ANSYS finite element codes, and for the smaller size panel, a solid-based finite element analysis was conducted. Finally, for the smaller size FRF panel, nonlinear buckling analysis was performed wherein geometric imperfections measured from an actual manufactured FRF panel were included. It was found that the measured imperfection did not significantly affect the panel's predicted buckling response.

  8. Information architecture. Volume 2, Part 1: Baseline analysis summary

    SciTech Connect

    1996-12-01

The Department of Energy (DOE) Information Architecture, Volume 2, Baseline Analysis, is a collaborative and logical next-step effort in the processes required to produce a Departmentwide information architecture. The baseline analysis serves a diverse audience of program management and technical personnel and provides an organized way to examine the Department's existing or de facto information architecture. A companion document to Volume 1, The Foundations, it furnishes the rationale for establishing a Departmentwide information architecture. This volume, consisting of the Baseline Analysis Summary (part 1), Baseline Analysis (part 2), and Reference Data (part 3), is of interest to readers who wish to understand how the Department's current information architecture technologies are employed. The analysis identifies how and where current technologies support business areas, programs, sites, and corporate systems.

  9. Model-Based Segmentation of Cortical Regions of Interest for Multi-subject Analysis of fMRI Data

    NASA Astrophysics Data System (ADS)

Engel, Karin; Brechmann, André; Toennies, Klaus

    The high inter-subject variability of human neuroanatomy complicates the analysis of functional imaging data across subjects. We propose a method for the correct segmentation of cortical regions of interest based on the cortical surface. First results on the segmentation of Heschl's gyrus indicate the capability of our approach for correct comparison of functional activations in relation to individual cortical patterns.

  10. Texture analysis based on the Hermite transform for image classification and segmentation

    NASA Astrophysics Data System (ADS)

    Estudillo-Romero, Alfonso; Escalante-Ramirez, Boris; Savage-Carmona, Jesus

    2012-06-01

Texture analysis has become an important task in image processing because it is used as a preprocessing stage in different research areas, including medical image analysis, industrial inspection, segmentation of remotely sensed imagery, and multimedia indexing and retrieval. In order to extract visual texture features, a texture image analysis technique based on the Hermite transform is presented. Psychovisual evidence suggests that Gaussian derivatives fit the receptive field profiles of mammalian visual systems. The Hermite transform describes locally basic texture features in terms of Gaussian derivatives. Multiresolution combined with several analysis orders provides detection of patterns that characterize every texture class. The analysis of the local maximum energy direction and steering of the transformation coefficients increases the method's robustness to texture orientation. This method presents an advantage over classical filter bank design because in the latter a fixed number of orientations for the analysis has to be selected. During the training stage, a subset of the Hermite analysis filters is chosen in order to improve inter-class separability, reduce the dimensionality of the feature vectors, and reduce computational cost during the classification stage. We exhaustively evaluated the correct classification rate of real, randomly selected training and testing texture subsets using several kinds of commonly used texture features. A comparison between different distance measurements is also presented. Results of unsupervised real texture segmentation using this approach, and comparison with previous approaches, showed the benefits of our proposal.
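The Gaussian-derivative filters underlying the Hermite transform can be sketched in 1D. This is only an illustration of the filter family and of a crude energy feature; the paper's method is multiresolution, 2D, and steered across orientations.

```python
import numpy as np

def gaussian_derivative_kernel(sigma, order, radius=None):
    """1D Gaussian (order 0) or first-derivative-of-Gaussian
    (order 1) kernel; separable products of such kernels give the
    low-order Gaussian-derivative filters of Hermite analysis."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    g /= g.sum()
    if order == 0:
        return g
    return -x / sigma ** 2 * g  # derivative of the Gaussian

def texture_energy(row, sigma=1.0):
    """Mean squared response of a 1D signal to the first-derivative
    filter: a crude, illustrative texture feature."""
    k = gaussian_derivative_kernel(sigma, 1)
    resp = np.convolve(row, k, mode="valid")
    return float(np.mean(resp ** 2))
```

A flat region yields near-zero derivative energy while an oscillating texture yields a large one, which is the basic discriminative signal that the full multi-order, multi-orientation feature vector generalizes.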

  11. Laser power conversion system analysis, volume 2

    NASA Technical Reports Server (NTRS)

    Jones, W. S.; Morgan, L. L.; Forsyth, J. B.; Skratt, J. P.

    1979-01-01

    The orbit-to-ground laser power conversion system analysis investigated the feasibility and cost effectiveness of converting solar energy into laser energy in space, and transmitting the laser energy to earth for conversion to electrical energy. The analysis included space laser systems with electrical outputs on the ground ranging from 100 to 10,000 MW. The space laser power system was shown to be feasible and a viable alternative to the microwave solar power satellite. The narrow laser beam provides many options and alternatives not attainable with a microwave beam.

  12. Comparative analysis of nonlinear dimensionality reduction techniques for breast MRI segmentation

    SciTech Connect

    Akhbardeh, Alireza; Jacobs, Michael A.

    2012-04-15

    Purpose: Visualization of anatomical structures using radiological imaging is an important tool in medicine for differentiating normal from pathological tissue, but it can generate large amounts of data for a radiologist to read, and integrating these large data sets is difficult and time-consuming. A new approach uses both supervised and unsupervised advanced machine learning techniques to visualize and segment radiological data. This study describes the application of a novel hybrid scheme, based on combining the wavelet transform and nonlinear dimensionality reduction (NLDR) methods, to breast magnetic resonance imaging (MRI) data using three well-established NLDR techniques, namely ISOMAP, local linear embedding (LLE), and diffusion maps (DfM), in order to perform a comparative performance analysis. Methods: Twenty-five breast lesion subjects were scanned using a 3T scanner. The MRI sequences used were T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging. The hybrid scheme consisted of two steps: preprocessing and postprocessing of the data. The preprocessing step applied B1 inhomogeneity correction, image registration, and wavelet-based image compression to match and denoise the data. In the postprocessing step, the MRI parameters were treated as data dimensions and the NLDR-based hybrid approach was applied to integrate them into a single image, termed the embedded image. This was achieved by mapping all pixel intensities from the higher-dimensional space to a lower-dimensional (embedded) space. For validation, the authors compared the hybrid NLDR approach with the linear methods of principal component analysis (PCA) and multidimensional scaling (MDS) using synthetic data. For the clinical application, the authors used breast MRI data; the comparison was performed using the postcontrast DCE MRI image and evaluating the congruence of the segmented lesions.
Results: The NLDR-based hybrid approach was able to define and segment both synthetic and clinical data. On the synthetic data, the authors demonstrated the performance of the NLDR method compared with conventional linear DR methods: the NLDR approach enabled successful segmentation of the structures, whereas, in most cases, PCA and MDS failed. The NLDR approach was able to segment the different breast tissue types with high accuracy, and the embedded image of the breast MRI data demonstrated fuzzy boundaries between the different types of breast tissue, i.e., fatty, glandular, and tissue with lesions (>86%). Conclusions: The proposed hybrid NLDR methods were able to segment clinical breast data with high accuracy and construct an embedded image that visualized the contribution of the different radiological parameters.

  13. Scanning and transmission electron microscopic analysis of ampullary segment of oviduct during estrous cycle in caprines.

    PubMed

    Sharma, R K; Singh, R; Bhardwaj, J K

    2015-01-01

    The ampullary segment of the mammalian oviduct provides a suitable milieu for fertilization and for development of the zygote before implantation into the uterus. In the present study, therefore, the cyclic changes in the morphology of the ampullary segment of the goat oviduct were studied during the follicular and luteal phases using scanning and transmission electron microscopy. Topographical analysis revealed a uniformly ciliated ampullary epithelium concealing the apical processes of non-ciliated cells, along with bulbous secretory cells, during the follicular phase. The luteal phase was marked by a decline in the number of ciliated cells and an increased occurrence of secretory cells. Ultrastructural analysis demonstrated the presence of an indented nuclear membrane, supranuclear cytoplasm, secretory granules, rough endoplasmic reticulum, large lipid droplets, apically located glycogen masses, and oval mitochondria in the secretory cells. The ciliated cells were characterized by elongated nuclei, abundant smooth endoplasmic reticulum, and oval or spherical mitochondria with crescentic cristae during the follicular phase. In the luteal phase, however, the secretory cells possessed a highly indented nucleus with diffuse electron-dense chromatin, hyaline nucleosol, and an increased number of lipid droplets, while the ciliated cells had numerous fibrous granules and basal bodies. The parallel use of scanning and transmission electron microscopy has enabled us to examine the cyclic, hormone-dependent changes occurring in the topography and fine structure of the epithelium of the ampullary segment and its cells during the different reproductive phases, which will be of great help in understanding the major bottlenecks that limit the success rate of in vitro fertilization and embryo transfer technology. PMID:25491952

  14. FEM correlation and shock analysis of a VNC MEMS mirror segment

    NASA Astrophysics Data System (ADS)

    Aguayo, Eduardo J.; Lyon, Richard; Helmbrecht, Michael; Khomusi, Sausan

    2014-08-01

    Microelectromechanical systems (MEMS) are becoming more prevalent in today's advanced space technologies. The Visible Nulling Coronagraph (VNC) instrument, being developed at the NASA Goddard Space Flight Center, uses a MEMS mirror to correct wavefront errors. This MEMS mirror, the Multiple Mirror Array (MMA), is a key component that will enable the VNC instrument to detect Jupiter-sized and ultimately Earth-sized exoplanets. Like other MEMS devices, the MMA faces several challenges associated with spaceflight. Therefore, Finite Element Analysis (FEA) is being used to predict the behavior of a single MMA segment under different spaceflight-related environments. Finite Element Analysis results are used to guide the MMA design and ensure its survival during launch and mission operations. A Finite Element Model (FEM) of the MMA has been developed using COMSOL. This model has been correlated to static loading on test specimens. The correlation was performed in several steps: simple beam models were correlated first, followed by increasingly complex and higher-fidelity models of the MMA mirror segment. Subsequently, the model has been used to predict the dynamic behavior and stresses of the MMA segment in a representative spaceflight mechanical shock environment. The results of the correlation and the stresses associated with a shock event are presented herein.

  15. Breast volume measurement of 248 women using biostereometric analysis.

    PubMed

    Loughry, C W; Sheffer, D B; Price, T E; Lackney, M J; Bartfai, R G; Morek, W M

    1987-10-01

    A study of volumes of the right and left breasts of 248 subjects was undertaken using biostereometric analysis. This measurement technique uses close-range stereophotogrammetry to characterize the shape of the breast and is noncontact, noninvasive, accurate, and rapid with respect to the subject involvement time. Volumes and volumetric differences between breast pairs were compared, using chi-square tests, with handedness, perception of breast size by each subject, age, and menstrual status. No significant relationship was found between the handedness of the subject and the larger breast volume. Several groups of subjects based on age and menstrual status were accurate in their perception of breast size difference. Analysis did not confirm the generally accepted clinical impression of left breast volume dominance. Although a size difference in breast pairs was documented, neither breast predominated. PMID:3659165

  16. Breast volume measurement of 598 women using biostereometric analysis.

    PubMed

    Loughry, C W; Sheffer, D B; Price, T E; Einsporn, R L; Bartfai, R G; Morek, W M; Meli, N M

    1989-05-01

    A study of the volumes of the right and left breasts of 598 subjects was undertaken using biostereometric analysis. This measurement technique uses close-range stereophotogrammetry to characterize the shape of the breast, and is noncontact, noninvasive, accurate, and rapid with respect to the subject involvement time. Using chi-square tests, volumes and volumetric differences between breast pairs were compared with handedness, perception of breast size by each subject, age, and menstrual status. No significant relationship was found between the handedness, age, or menstrual status of the subject and the breast volume. Several groups of subjects were accurate in their perception of breast size difference. Analysis did confirm the generally accepted clinical impression of left-breast volume dominance. These results are shown to be consistent with those of a previous study using 248 women. PMID:2729845
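
    The chi-square comparisons used in both breast-volume studies reduce to the Pearson statistic on a contingency table of observed counts. A minimal sketch (the table contents below are made up for illustration, not the studies' data):

```python
def chi_square(table):
    # Pearson chi-square statistic for an r x c contingency table:
    # sum over cells of (observed - expected)^2 / expected, where the
    # expected count assumes independence of rows and columns.
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, r in enumerate(table):
        for j, obs in enumerate(r):
            exp = rows[i] * cols[j] / total
            stat += (obs - exp) ** 2 / exp
    return stat
```

    A perfectly balanced table gives a statistic of zero (no association); the statistic is then compared against the chi-square distribution with (r-1)(c-1) degrees of freedom.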

  17. Multiresolution analysis using wavelet, ridgelet, and curvelet transforms for medical image segmentation.

    PubMed

    Alzubi, Shadi; Islam, Naveed; Abbod, Maysam

    2011-01-01

    The experimental study presented in this paper is aimed at the development of an automatic image segmentation system for classifying regions of interest (ROIs) in medical images obtained from different medical scanners such as PET, CT, or MRI. Multiresolution analysis (MRA) using wavelet, ridgelet, and curvelet transforms has been used in the proposed segmentation system. Classifying cancers in human organs from scanner output using shape or gray-level information is particularly challenging: organ shapes change through the different slices of a medical stack, and gray-level intensities overlap in soft tissues. The curvelet transform is a recent extension of the wavelet and ridgelet transforms which aims to deal with phenomena occurring along curves. The curvelet transform has been tested on medical data sets, and the results are compared with those obtained from the other transforms. Tests indicate that using curvelets significantly improves the classification of abnormal tissues in the scans and reduces the surrounding noise. PMID:21960988
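
    The simplest instance of the multiresolution analysis used above is the Haar wavelet decomposition; a minimal 1D sketch (Haar rather than the ridgelet or curvelet transforms, for brevity):

```python
def haar_step(signal):
    # One level of the Haar wavelet transform: pairwise averages
    # (approximation) and pairwise differences (detail).
    # The signal length must be even.
    half = len(signal) // 2
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(half)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(half)]
    return approx, detail

def haar_multiresolution(signal, levels):
    # Repeatedly decompose the approximation, keeping each level's
    # detail band: coarse structure ends up in the final approximation,
    # fine structure in the early detail bands.
    details = []
    approx = list(signal)
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return approx, details
```

    Classification systems like the one above typically compute statistics (energy, entropy) per subband and feed them to a classifier; ridgelets and curvelets replace the separable wavelet bands with directional ones.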

  18. Development and analysis of a linearly segmented CPC collector for industrial steam generation

    SciTech Connect

    Figueroa, J.A.A.

    1980-06-01

    This study involves the design, analysis and construction of a modular, non-imaging, trough, concentrating solar collector for generation of process steam in a tropical climate. The most innovative feature of this concentrator is that the mirror surface consists of long and narrow planar segments placed inside sealed low-cost glass tubes. The absorber is a cylindrical fin inside an evacuated glass tube. As an extension of the same study, the optical efficiency of the segmented concentrator has been simulated by means of a Monte-Carlo Ray-Tracing program. Laser Ray-Tracing techniques were also used to evaluate the possibilities of this new concept. A preliminary evaluation of the experimental concentrator was done using a relatively simple method that combines results from two experimental measurements: overall heat loss coefficient and optical efficiency. A transient behaviour test was used to measure the overall heat loss coefficient throughout a wide range of temperatures.

  19. Multiresolution Analysis Using Wavelet, Ridgelet, and Curvelet Transforms for Medical Image Segmentation

    PubMed Central

    AlZubi, Shadi; Islam, Naveed; Abbod, Maysam

    2011-01-01

    The experimental study presented in this paper is aimed at the development of an automatic image segmentation system for classifying regions of interest (ROIs) in medical images obtained from different medical scanners such as PET, CT, or MRI. Multiresolution analysis (MRA) using wavelet, ridgelet, and curvelet transforms has been used in the proposed segmentation system. Classifying cancers in human organs from scanner output using shape or gray-level information is particularly challenging: organ shapes change through the different slices of a medical stack, and gray-level intensities overlap in soft tissues. The curvelet transform is a recent extension of the wavelet and ridgelet transforms which aims to deal with phenomena occurring along curves. The curvelet transform has been tested on medical data sets, and the results are compared with those obtained from the other transforms. Tests indicate that using curvelets significantly improves the classification of abnormal tissues in the scans and reduces the surrounding noise. PMID:21960988

  20. Multiwell experiment: reservoir modeling analysis, Volume II

    SciTech Connect

    Horton, A.I.

    1985-05-01

    This report updates an ongoing analysis by reservoir modelers at the Morgantown Energy Technology Center (METC) of well test data from the Department of Energy's Multiwell Experiment (MWX). Results of previous efforts were presented in a recent METC Technical Note (Horton 1985). Results included in this report pertain to the poststimulation well tests of Zones 3 and 4 of the Paludal Sandstone Interval and the prestimulation well tests of the Red and Yellow Zones of the Coastal Sandstone Interval. The following results were obtained by using a reservoir model and history matching procedures: (1) Post-minifracture analysis indicated that the minifracture stimulation of the Paludal Interval did not produce an induced fracture, and extreme formation damage did occur, since a 65% permeability reduction around the wellbore was estimated. The design for this minifracture was from 200 to 300 feet on each side of the wellbore; (2) Post full-scale stimulation analysis for the Paludal Interval also showed that extreme formation damage occurred during the stimulation as indicated by a 75% permeability reduction 20 feet on each side of the induced fracture. Also, an induced fracture half-length of 100 feet was determined to have occurred, as compared to a designed fracture half-length of 500 to 600 feet; and (3) Analysis of prestimulation well test data from the Coastal Interval agreed with previous well-to-well interference tests that showed extreme permeability anisotropy was not a factor for this zone. This lack of permeability anisotropy was also verified by a nitrogen injection test performed on the Coastal Red and Yellow Zones. 8 refs., 7 figs., 2 tabs.

  1. Multivariate statistical analysis as a tool for the segmentation of 3D spectral data.

    PubMed

    Lucas, G; Burdet, P; Cantoni, M; HĂ©bert, C

    2013-01-01

    Acquisition of three-dimensional (3D) spectral data is nowadays common using many different microanalytical techniques. In order to proceed to the 3D reconstruction, data processing is necessary not only to deal with noisy acquisitions but also to segment the data in terms of chemical composition. In this article, we demonstrate the value of multivariate statistical analysis (MSA) methods for this purpose, allowing fast and reliable results. Using scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX) coupled with a focused ion beam (FIB), a stack of spectrum images was acquired on a sample, produced by laser welding of a nickel-titanium wire and a stainless steel wire, presenting a complex microstructure. These data were analyzed using principal component analysis (PCA) and factor rotations. PCA significantly improves the overall quality of the data but produces abstract components. Here it is shown that rotated components can be used, without prior knowledge of the sample, to help interpret the data, quickly obtaining qualitative mappings representative of the elements or compounds found in the material. Such abundance maps can then be used to plot scatter diagrams and interactively identify the different domains present by defining clusters of voxels with similar compositions. The identified voxels are advantageously overlaid on higher-resolution secondary electron (SE) images in order to refine the segmentation. The 3D reconstruction can then be performed using available commercial software on the basis of the provided segmentation. To assess the quality of the segmentation, the results were compared to an EDX quantification performed on the same data. PMID:24035679
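
    The PCA step above can be sketched with a plain power iteration on the covariance of the spectra; this is an illustrative reduction to the first component only (factor rotation is omitted), not the authors' pipeline:

```python
def first_principal_component(data, iters=200):
    # Power iteration on the sample covariance matrix: converges to the
    # direction of maximal variance, i.e. the first PCA component.
    # `data` is a list of equal-length spectra (rows).
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v, means
```

    Projecting each voxel's spectrum onto the leading components gives the low-dimensional scores used for the scatter-diagram clustering described in the abstract.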

  2. Sampling and Electrophoretic Analysis of Segmented Flow Streams Using Virtual Walls in a Microfluidic Device

    PubMed Central

    Roman, Gregory T.; Wang, Meng; Shultz, Kristin N.; Jennings, Colin; Kennedy, Robert T.

    2008-01-01

    A method for sampling and electrophoretic analysis of aqueous plugs segmented in a stream of immiscible oil is described. In the method, an aqueous buffer and an oil stream flow parallel to each other to form a stable virtual wall in a microfabricated K-shaped fluidic element. As aqueous sample plugs in the oil stream make contact with the virtual wall, coalescence occurs and sample is electrokinetically transferred to the aqueous stream. Using this virtual wall, two methods of injection for channel electrophoresis were developed. In the first, discrete sample zones flow past the inlet of an electrophoresis channel and a portion is injected by electroosmotic flow, termed the "discrete injector". With this approach at least 800 plugs could be injected without interruption from a continuous segmented stream with 5.1% RSD in peak area. This method generated up to 1,050 theoretical plates, although analysis of the injector suggested that improvements may be possible. In a second method, aqueous plugs are sampled in a way that allows them to form a continuous stream that is directed to a microfluidic cross-style injector, termed the "desegmenting injector". This method does not analyze each individual plug but instead allows periodic sampling of a high-frequency stream of plugs. Using this system at least 1000 injections could be performed sequentially with 5.8% RSD in peak area and 53,500 theoretical plates. This method was demonstrated to be useful for monitoring concentration changes from a sampling device with 10 s temporal resolution. Aqueous plugs in segmented flows have been applied to many different chemical manipulations including synthesis, assays, sample processing, and sampling. Nearly all such studies have used optical methods to analyze plug contents. This method offers a new way to analyze such samples and should enable new applications of segmented flow systems. PMID:18831564

  3. A common neural substrate for the analysis of pitch and duration pattern in segmented sound?

    PubMed

    Griffiths, T D; Johnsrude, I; Dean, J L; Green, G G

    1999-12-16

    The analysis of patterns of pitch and duration over time in natural segmented sounds is fundamentally relevant to the analysis of speech, environmental sounds and music. The neural basis for differences between the processing of pitch and duration sequences is not established. We carried out a PET activation study on nine right-handed musically naive subjects, in order to examine the basis for early pitch- and duration-sequence analysis. The input stimuli and output task were closely controlled. We demonstrated a strikingly similar bilateral neural network for both types of analysis. The network is right lateralised and includes the cerebellum, posterior superior temporal cortices, and inferior frontal cortices. These data are consistent with a common initial mechanism for the analysis of pitch and duration patterns within sequences. PMID:10716217

  4. Method 349.0 Determination of Ammonia in Estuarine and Coastal Waters by Gas Segmented Continuous Flow Colorimetric Analysis

    EPA Science Inventory

    This method provides a procedure for the determination of ammonia in estuarine and coastal waters. The method is based upon the indophenol reaction,1-5 here adapted to automated gas-segmented continuous flow analysis.

  5. Bayesian time series analysis of segments of the Rocky Mountain trumpeter swan population

    USGS Publications Warehouse

    Wright, Christopher K.; Sojda, Richard S.; Goodman, Daniel

    2002-01-01

    A Bayesian time series analysis technique, the dynamic linear model, was used to analyze counts of Trumpeter Swans (Cygnus buccinator) summering in Idaho, Montana, and Wyoming from 1931 to 2000. For the Yellowstone National Park segment of white birds (sub-adults and adults combined) the estimated probability of a positive growth rate is 0.01. The estimated probability of achieving the Subcommittee on Rocky Mountain Trumpeter Swans 2002 population goal of 40 white birds for the Yellowstone segment is less than 0.01. Outside of Yellowstone National Park, Wyoming white birds are estimated to have a 0.79 probability of a positive growth rate with a 0.05 probability of achieving the 2002 objective of 120 white birds. In the Centennial Valley in southwest Montana, results indicate a probability of 0.87 that the white bird population is growing at a positive rate with considerable uncertainty. The estimated probability of achieving the 2002 Centennial Valley objective of 160 white birds is 0.14 but under an alternative model falls to 0.04. The estimated probability that the Targhee National Forest segment of white birds has a positive growth rate is 0.03. In Idaho outside of the Targhee National Forest, white birds are estimated to have a 0.97 probability of a positive growth rate with a 0.18 probability of attaining the 2002 goal of 150 white birds.

  6. A unified framework for automatic wound segmentation and analysis with deep convolutional neural networks.

    PubMed

    Changhan Wang; Xinchen Yan; Smith, Max; Kochhar, Kanika; Rubin, Marcie; Warren, Stephen M; Wrobel, James; Honglak Lee

    2015-08-01

    Wound surface area changes over multiple weeks are highly predictive of the wound healing process. Furthermore, the quality and quantity of the tissue in the wound bed also offer important prognostic information. Unfortunately, accurate measurements of wound surface area changes are out of reach in the busy wound practice setting. Currently, clinicians estimate wound size by estimating wound width and length using a scalpel after wound treatment, which is highly inaccurate. To address this problem, we propose an integrated system to automatically segment wound regions and analyze wound conditions in wound images. Different from previous segmentation techniques which rely on handcrafted features or unsupervised approaches, our proposed deep learning method jointly learns task-relevant visual features and performs wound segmentation. Moreover, learned features are applied to further analysis of wounds in two ways: infection detection and healing progress prediction. To the best of our knowledge, this is the first attempt to automate long-term predictions of general wound healing progress. Our method is computationally efficient and takes less than 5 seconds per wound image (480 by 640 pixels) on a typical laptop computer. Our evaluations on a large-scale wound database demonstrate the effectiveness and reliability of the proposed system. PMID:26736781

  7. Advanced finite element analysis of L4-L5 implanted spine segment

    NASA Astrophysics Data System (ADS)

    Pawlikowski, Marek; Domański, Janusz; Suchocki, Cyprian

    2015-09-01

    In the paper finite element (FE) analysis of implanted lumbar spine segment is presented. The segment model consists of two lumbar vertebrae L4 and L5 and the prosthesis. The model of the intervertebral disc prosthesis consists of two metallic plates and a polyurethane core. Bone tissue is modelled as a linear viscoelastic material. The prosthesis core is made of a polyurethane nanocomposite. It is modelled as a non-linear viscoelastic material. The constitutive law of the core, derived in one of the previous papers, is implemented into the FE software Abaqus®. It was done by means of the User-supplied procedure UMAT. The metallic plates are elastic. The most important parts of the paper include: description of the prosthesis geometrical and numerical modelling, mathematical derivation of stiffness tensor and Kirchhoff stress and implementation of the constitutive model of the polyurethane core into Abaqus® software. Two load cases were considered, i.e. compression and stress relaxation under constant displacement. The goal of the paper is to numerically validate the constitutive law, which was previously formulated, and to perform advanced FE analyses of the implanted L4-L5 spine segment in which non-standard constitutive law for one of the model materials, i.e. the prosthesis core, is implemented.

  8. Local multifractal detrended fluctuation analysis for non-stationary image's texture segmentation

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Li, Zong-shou; Li, Jin-wei

    2014-12-01

    Feature extraction plays an important role in image processing and pattern recognition, and multifractal theory has recently been employed as a powerful tool for this job. However, traditional multifractal methods are designed to analyze objects with a stationary measure and cannot handle non-stationary measures. The work of this paper is twofold. First, a definition of the stationary image and 2D image feature detection methods are proposed. Second, a novel feature extraction scheme for non-stationary images is proposed using local multifractal detrended fluctuation analysis (local MF-DFA), which is based on 2D MF-DFA. A set of new multifractal descriptors, called the local generalized Hurst exponent (Lhq), is defined to characterize the local scaling properties of textures. To test the proposed method, the novel texture descriptor and two other multifractal indicators, namely local Hölder coefficients based on a capacity measure and the multifractal dimension Dq based on the multifractal differential box-counting (MDBC) method, are compared in segmentation experiments. The first experiment indicates that the segmentation results obtained by the proposed Lhq are slightly better than those of the MDBC-based Dq and significantly superior to those of the local Hölder coefficients. The results of the second experiment demonstrate that the Lhq distinguishes texture images more effectively and provides more robust segmentations than the MDBC-based Dq.
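
    The monofractal DFA computation that MF-DFA generalizes can be sketched in 1D as follows; the window sizes and order-1 detrending are conventional choices, not the paper's exact local 2D variant:

```python
import math

def dfa_fluctuation(series, window):
    # Detrended fluctuation analysis at one scale: integrate the
    # mean-removed series into a profile, fit and remove a linear trend
    # in each non-overlapping window, return the RMS residual F(s).
    mean = sum(series) / len(series)
    profile, s = [], 0.0
    for x in series:
        s += x - mean
        profile.append(s)
    n = len(profile) // window
    sq = 0.0
    for b in range(n):
        seg = profile[b * window:(b + 1) * window]
        xbar = (window - 1) / 2
        ybar = sum(seg) / window
        denom = sum((i - xbar) ** 2 for i in range(window))
        slope = sum((i - xbar) * (y - ybar) for i, y in enumerate(seg)) / denom
        for i, y in enumerate(seg):
            sq += (y - (ybar + slope * (i - xbar))) ** 2
    return math.sqrt(sq / (n * window))

def hurst_exponent(series, scales=(8, 16, 32, 64)):
    # The slope of log F(s) versus log s estimates the Hurst exponent;
    # the local, order-q version of this quantity is what Lhq generalizes.
    pts = [(math.log(s), math.log(dfa_fluctuation(series, s))) for s in scales]
    xb = sum(p[0] for p in pts) / len(pts)
    yb = sum(p[1] for p in pts) / len(pts)
    return (sum((x - xb) * (y - yb) for x, y in pts)
            / sum((x - xb) ** 2 for x, _ in pts))
```

    Uncorrelated noise yields an exponent near 0.5; persistent (trending) measures yield values above 0.5, which is the property the Lhq texture descriptor exploits locally.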

  9. A marked point process of rectangles and segments for automatic analysis of digital elevation models.

    PubMed

    Ortner, Mathias; Descombe, Xavier; Zerubia, Josiane

    2008-01-01

    This work presents a framework for automatic feature extraction from images using stochastic geometry. Features in images are modeled as realizations of a spatial point process of geometrical shapes. This framework allows the incorporation of a priori knowledge on the spatial distribution of features. More specifically, we present a model based on the superposition of a process of segments and a process of rectangles. The former is dedicated to the detection of linear networks of discontinuities, while the latter aims at segmenting homogeneous areas. An energy is defined, favoring connections of segments, alignments of rectangles, as well as a relevant interaction between both types of objects. The estimation is performed by minimizing the energy using a simulated annealing algorithm. The proposed model is applied to the analysis of Digital Elevation Models (DEMs). These images are raster data representing the altimetry of a dense urban area. We present results on real data provided by the IGN (French National Geographic Institute) consisting of low-quality DEMs of various types. PMID:18000328
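
    The energy-minimization step can be illustrated with a generic Metropolis-style simulated annealing loop; the toy quadratic energy, proposal move, and cooling schedule below are assumptions for illustration, not the paper's rectangle-and-segment energy:

```python
import math
import random

def simulated_annealing(energy, propose, state,
                        t0=1.0, cooling=0.995, steps=2000, seed=0):
    # Generic Metropolis annealing: always accept downhill moves, accept
    # uphill moves with probability exp(-dE / T), and slowly lower T so
    # the search settles into a low-energy configuration.
    rng = random.Random(seed)
    t = t0
    e = energy(state)
    for _ in range(steps):
        cand = propose(state, rng)
        de = energy(cand) - e
        if de <= 0 or rng.random() < math.exp(-de / t):
            state, e = cand, e + de
        t *= cooling
    return state, e
```

    In the paper's setting, `state` would be a configuration of rectangles and segments, `propose` a birth/death/translation move, and `energy` the prior-plus-data term described above.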

  10. MRI-Assessment of Tumor Perfusion and 3D Segmented Volume at Baseline, During Treatment, and at Tumor Progression in Children with Newly Diagnosed Diffuse Intrinsic Pontine Glioma

    PubMed Central

    Sedlacik, J.; Winchell, A.; Kocak, M.; Loeffler, R.B.; Broniscer, A.; Hillenbrand, C.M.

    2014-01-01

    Background and Purpose Diffuse Intrinsic Pontine Glioma (DIPG) is among the most devastating brain tumors in children, necessitating the development of novel treatment strategies and advanced imaging markers, such as perfusion, to adequately monitor clinical trials. This study investigated tumor perfusion and 3D segmented tumor volume as predictive markers for outcome in children with newly diagnosed DIPG. Methods Imaging data were assessed at baseline, during, and after radiation therapy (RT), and every other month thereafter until progression, for 35 patients with newly diagnosed DIPG (age 2–16 years) enrolled on the phase I clinical study NCT00472017. Patients were treated with conformal RT and vandetanib, a vascular endothelial growth factor receptor 2 inhibitor. Results Tumor perfusion increased and tumor volume decreased during combined RT and vandetanib therapy. These changes slowly diminished in follow-up scans until tumor progression. Increased tumor perfusion and decreased tumor volume during combined therapy were associated with longer PFS. Apart from a longer OS for patients who showed elevated tumor perfusion after RT, there was no association of tumor volume or the other perfusion variables with OS. Conclusion Our results suggest that tumor perfusion may be a useful predictive marker for the assessment of treatment response and tumor progression in children with DIPG treated with both RT and vandetanib. The assessment of tumor perfusion yields valuable information about tumor microvascular status and its response to therapy, which may help to better understand the biology of DIPGs and to monitor novel treatment strategies in future clinical trials. PMID:23436052

  11. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    NASA Astrophysics Data System (ADS)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for imaging cancer in vivo. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The proposed PET segmentation strategies have mostly been validated in ideal conditions (e.g., in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV that is feasible in clinical practice, and 2) to develop a strategy for obtaining anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV with a k-means clustering algorithm for the estimation of the background. The method is based on parameters that are always available in clinical studies and was calibrated using the NEMA IQ phantom. Validation of the method was performed both in ideal conditions (e.g., in spherical objects with uniform radioactivity concentration) and in non-ideal conditions (e.g., in non-spherical objects with a non-uniform radioactivity concentration). The strategy for obtaining a phantom with synthetic realistic lesions (e.g., with irregular shape and non-homogeneous uptake) consisted of the combined use of commercially available anthropomorphic phantoms and irregular molds generated using 3D-printing technology and filled with a radioactive chromatic alginate.
The proposed segmentation algorithm was feasible in a clinical context and showed a good accuracy both in ideal and in realistic conditions.
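The two-stage scheme described above can be sketched in Python. This is an illustrative reconstruction, not the authors' calibrated algorithm: the 40% threshold fraction, the simple 1-D two-means background estimate, and the synthetic phantom are all assumptions.

```python
import numpy as np

def estimate_background_kmeans(values, iters=50):
    """1-D 2-means clustering; returns the mean of the lower (background) cluster."""
    c = np.array([values.min(), values.max()], dtype=float)  # initial centroids
    for _ in range(iters):
        labels = np.abs(values[:, None] - c[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                c[k] = values[labels == k].mean()
    return float(c.min())

def segment_mtv(volume, threshold_fraction=0.4):
    """Background-corrected fixed-fraction threshold: voxels above
    bg + f * (max - bg) are labelled as tumor."""
    vals = volume.ravel().astype(float)
    bg = estimate_background_kmeans(vals)
    thr = bg + threshold_fraction * (vals.max() - bg)
    return volume > thr

# synthetic "PET" volume: warm background plus one hot sphere
rng = np.random.default_rng(0)
vol = rng.normal(1.0, 0.05, size=(32, 32, 32))
zz, yy, xx = np.mgrid[:32, :32, :32]
sphere = (zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 <= 6 ** 2
vol[sphere] += 8.0

mask = segment_mtv(vol)
mtv_voxels = int(mask.sum())
```

On this synthetic phantom the recovered MTV mask matches the inserted sphere almost voxel for voxel, since the background cluster anchors the threshold well below the lesion uptake.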

  12. Automated system for ST segment and arrhythmia analysis in exercise radionuclide ventriculography

    SciTech Connect

    Hsia, P.W.; Jenkins, J.M.; Shimoni, Y.; Gage, K.P.; Santinga, J.T.; Pitt, B.

    1986-06-01

A computer-based system for interpretation of the electrocardiogram (ECG) in the diagnosis of arrhythmia and ST segment abnormality in an exercise setting is presented. The system was designed for inclusion in a gamma camera so that the ECG diagnosis could be combined with the diagnostic capability of radionuclide ventriculography. Digitized data are analyzed in a beat-by-beat mode and a contextual diagnosis of the underlying rhythm is provided. Each beat is assigned a beat code based on a combination of waveform analysis and RR interval measurement. The waveform analysis employs a new correlation coefficient formula which corrects for baseline wander. Selective signal averaging, in which only normal beats are included, is performed for an improved signal-to-noise ratio prior to ST segment analysis. Template generation, R wave detection, QRS window sizing, baseline correction, and continuous updating of heart rate have all been automated. ST level and slope measurements are computed on signal-averaged data. Computer arrhythmia analysis of 13 passages of abnormal rhythm was found to be correct for 98.4 percent of all beats. Twenty-five passages of exercise data, 1-5 min in length, were evaluated by a cardiologist, who agreed with 95.8 percent of the ST level measurements and 91.7 percent of the ST slope measurements.
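The abstract does not give the correlation formula itself; the sketch below, which assumes a least-squares linear detrend as the baseline-wander correction, shows why such a correction matters before template matching.

```python
import numpy as np

def detrended_corr(beat, template):
    """Pearson correlation after removing a least-squares linear baseline
    from the beat (a simple stand-in for the paper's unspecified
    baseline-wander correction)."""
    idx = np.arange(beat.size)
    slope, intercept = np.polyfit(idx, beat, 1)   # coefficients: highest degree first
    corrected = beat - (slope * idx + intercept)
    a = corrected - corrected.mean()
    b = template - template.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# a QRS-like template, and the same beat riding on a drifting baseline
t = np.linspace(0.0, 1.0, 200)
template = np.exp(-((t - 0.5) ** 2) / 0.002)
beat = template + 0.8 * t  # linear baseline wander

r_plain = float(np.corrcoef(beat, template)[0, 1])
r_fixed = detrended_corr(beat, template)
```

With the drift removed, the corrected correlation returns to nearly 1, while the plain coefficient is substantially degraded by the wander.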

  13. Improved disparity map analysis through the fusion of monocular image segmentations

    NASA Technical Reports Server (NTRS)

    Perlant, Frederic P.; Mckeown, David M.

    1991-01-01

The focus is to examine how estimates of three-dimensional scene structure, as encoded in a scene disparity map, can be improved by analysis of the original monocular imagery. Surface illumination information is provided by segmentation of the monocular image into fine surface patches of nearly homogeneous intensity, which is used to remove mismatches generated during stereo matching. These patches guide a statistical analysis of the disparity map, based on the assumption that such patches correspond closely to physical surfaces in the scene. The technique is largely independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented for a complex urban scene containing various man-made and natural features. This scene poses a variety of problems, including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. Improvements due to monocular fusion are demonstrated with a set of different region-based image segmentations. The generality of this approach to stereo analysis and its utility in the development of general three-dimensional scene interpretation systems are also discussed.

  14. A Comparison of Amplitude-Based and Phase-Based Positron Emission Tomography Gating Algorithms for Segmentation of Internal Target Volumes of Tumors Subject to Respiratory Motion

    SciTech Connect

    Jani, Shyam S.; Robinson, Clifford G.; Dahlbom, Magnus; White, Benjamin M.; Thomas, David H.; Gaudio, Sergio; Low, Daniel A.; Lamb, James M.

    2013-11-01

Purpose: To quantitatively compare the accuracy of tumor volume segmentation in amplitude-based and phase-based respiratory gating algorithms in respiratory-correlated positron emission tomography (PET). Methods and Materials: List-mode fluorodeoxyglucose-PET data was acquired for 10 patients with a total of 12 fluorodeoxyglucose-avid tumors and 9 lymph nodes. Additionally, a phantom experiment was performed in which 4 plastic butyrate spheres with inner diameters ranging from 1 to 4 cm were imaged as they underwent 1-dimensional motion based on 2 measured patient breathing trajectories. PET list-mode data were gated into 8 bins using 2 amplitude-based (equal amplitude bins [A1] and equal counts per bin [A2]) and 2 temporal phase-based gating algorithms. Gated images were segmented using a commercially available gradient-based technique and a fixed 40% threshold of maximum uptake. Internal target volumes (ITVs) were generated by taking the union of all 8 contours per gated image. Segmented phantom ITVs were compared with their respective ground-truth ITVs, defined as the volume subtended by the tumor model positions covering 99% of breathing amplitude. Superior-inferior distances between sphere centroids in the end-inhale and end-exhale phases were also calculated. Results: Tumor ITVs from amplitude-based methods were significantly larger than those from temporal-based techniques (P=.002). For lymph nodes, A2 resulted in ITVs that were significantly larger than either of the temporal-based techniques (P<.0323). A1 produced the largest and most accurate ITVs for spheres with diameters of ≥2 cm (P=.002). No significant difference was shown between algorithms in the 1-cm sphere data set. For phantom spheres, amplitude-based methods recovered an average of 9.5% more motion displacement than temporal-based methods under regular breathing conditions and an average of 45.7% more in the presence of baseline drift (P<.001). 
Conclusions: Target volumes in images generated from amplitude-based gating are larger and more accurate, at levels that are potentially clinically significant, compared with those from temporal phase-based gating.
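Whatever the gating algorithm, the ITV construction step (the union of the per-bin target volumes) is straightforward; a 1-D toy sketch with binary masks:

```python
import numpy as np

def itv_from_gated_masks(masks):
    """Internal target volume = union of the per-bin target masks."""
    itv = np.zeros_like(masks[0], dtype=bool)
    for m in masks:
        itv |= m.astype(bool)
    return itv

# 1-D toy: a 4-voxel target whose start shifts one voxel per bin
n_bins, n_vox = 8, 40
masks = []
for b in range(n_bins):
    m = np.zeros(n_vox, dtype=bool)
    m[10 + b:10 + b + 4] = True
    masks.append(m)

itv = itv_from_gated_masks(masks)
```

Here the target occupies 4 voxels per bin but the union spans 11 voxels, illustrating how motion inflates the ITV relative to any single gated volume.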

  15. Segmentation of Unstructured Datasets

    NASA Technical Reports Server (NTRS)

    Bhat, Smitha

    1996-01-01

    Datasets generated by computer simulations and experiments in Computational Fluid Dynamics tend to be extremely large and complex. It is difficult to visualize these datasets using standard techniques like Volume Rendering and Ray Casting. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This thesis explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and from Finite Element Analysis.

  16. Texture analysis improves level set segmentation of the anterior abdominal wall

    PubMed Central

    Xu, Zhoubing; Allen, Wade M.; Baucom, Rebeccah B.; Poulose, Benjamin K.; Landman, Bennett A.

    2013-01-01

Purpose: The treatment of ventral hernias (VH) has been a challenging problem for medical care. Repair of these hernias is fraught with failure; recurrence rates ranging from 24% to 43% have been reported, even with the use of biocompatible mesh. Currently, computed tomography (CT) is used to guide intervention through expert, but qualitative, clinical judgments; notably, quantitative metrics based on image processing are not used. The authors propose that image segmentation methods to capture the three-dimensional structure of the abdominal wall and its abnormalities will provide a foundation on which to measure geometric properties of hernias and surrounding tissues and, therefore, to optimize intervention. Methods: In this study with 20 clinically acquired CT scans on postoperative patients, the authors demonstrated a novel approach to geometric classification of the abdominal wall. The authors’ approach uses a texture analysis based on Gabor filters to extract feature vectors and follows a fuzzy c-means clustering method to estimate voxelwise probability memberships for eight clusters. The memberships estimated from the texture analysis are helpful to identify anatomical structures with inhomogeneous intensities. The membership was used to guide the level set evolution, as well as to derive an initial start close to the abdominal wall. Results: Segmentation results on abdominal walls were both quantitatively and qualitatively validated with surface errors based on manually labeled ground truth. Using texture, mean surface errors for the outer surface of the abdominal wall were less than 2 mm, with 91% of the outer surface less than 5 mm away from the manual tracings; errors were significantly greater (2–5 mm) for methods that did not use the texture. Conclusions: The authors’ approach establishes a baseline for characterizing the abdominal wall for improving VH care. 
Inherent texture patterns in CT scans are helpful to the tissue classification, and texture analysis can improve the level set segmentation around the abdominal region. PMID:24320512
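The texture pipeline (Gabor-filter features followed by fuzzy c-means memberships) can be illustrated on a synthetic two-texture image. This toy version with two orientations and two clusters is an assumption-laden sketch, not the authors' eight-cluster CT setup.

```python
import numpy as np

def gabor_magnitude(img, y, x, freq, theta, sigma=3.0, half=7):
    """Magnitude of the complex Gabor response of the patch centred at (y, x)."""
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    kern = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2)) \
        * np.exp(2j * np.pi * freq * xr)
    patch = img[y - half:y + half + 1, x - half:x + half + 1]
    return float(abs(np.sum(patch * kern)))

def fuzzy_cmeans(X, c=2, m=2.0, iters=60, seed=0):
    """Plain fuzzy c-means; returns cluster centres and memberships U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)  # u_ik = d_ik^-p / sum_j d_ij^-p
    return centers, U

# synthetic two-texture image: vertical stripes left, horizontal stripes right
img = np.zeros((40, 80))
img[:, :40] = np.sin(2 * np.pi * np.arange(40) / 4)[None, :]
img[:, 40:] = np.sin(2 * np.pi * np.arange(40) / 4)[:, None]

feats, is_left = [], []
for y in range(10, 30, 2):
    for x in list(range(10, 30, 2)) + list(range(50, 70, 2)):
        f0 = gabor_magnitude(img, y, x, 0.25, 0.0)          # vertical-stripe energy
        f90 = gabor_magnitude(img, y, x, 0.25, np.pi / 2)   # horizontal-stripe energy
        feats.append([f0, f90])
        is_left.append(x < 40)

X, is_left = np.array(feats), np.array(is_left)
centers, U = fuzzy_cmeans(X)
k_left = int(np.argmax(centers[:, 0]))         # cluster with strong theta=0 energy
labels = (U.argmax(axis=1) == k_left)
accuracy = float((labels == is_left).mean())
```

Using the complex Gabor magnitude makes each feature insensitive to the local stripe phase, so the two textures separate cleanly into the two fuzzy clusters.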

  17. Texture analysis improves level set segmentation of the anterior abdominal wall

    SciTech Connect

    Xu, Zhoubing; Allen, Wade M.; Baucom, Rebeccah B.; Poulose, Benjamin K.; Landman, Bennett A.

    2013-12-15

Purpose: The treatment of ventral hernias (VH) has been a challenging problem for medical care. Repair of these hernias is fraught with failure; recurrence rates ranging from 24% to 43% have been reported, even with the use of biocompatible mesh. Currently, computed tomography (CT) is used to guide intervention through expert, but qualitative, clinical judgments; notably, quantitative metrics based on image processing are not used. The authors propose that image segmentation methods to capture the three-dimensional structure of the abdominal wall and its abnormalities will provide a foundation on which to measure geometric properties of hernias and surrounding tissues and, therefore, to optimize intervention. Methods: In this study with 20 clinically acquired CT scans on postoperative patients, the authors demonstrated a novel approach to geometric classification of the abdominal wall. The authors’ approach uses a texture analysis based on Gabor filters to extract feature vectors and follows a fuzzy c-means clustering method to estimate voxelwise probability memberships for eight clusters. The memberships estimated from the texture analysis are helpful to identify anatomical structures with inhomogeneous intensities. The membership was used to guide the level set evolution, as well as to derive an initial start close to the abdominal wall. Results: Segmentation results on abdominal walls were both quantitatively and qualitatively validated with surface errors based on manually labeled ground truth. Using texture, mean surface errors for the outer surface of the abdominal wall were less than 2 mm, with 91% of the outer surface less than 5 mm away from the manual tracings; errors were significantly greater (2–5 mm) for methods that did not use the texture. Conclusions: The authors’ approach establishes a baseline for characterizing the abdominal wall for improving VH care. 
Inherent texture patterns in CT scans are helpful to the tissue classification, and texture analysis can improve the level set segmentation around the abdominal region.

  18. Old document image segmentation using the autocorrelation function and multiresolution analysis

    NASA Astrophysics Data System (ADS)

    Mehri, Maroua; Gomez-Krämer, Petra; Héroux, Pierre; Mullot, Rémy

    2013-01-01

Recent progress in the digitization of heterogeneous collections of ancient documents has rekindled new challenges in information retrieval in digital libraries and document layout analysis. Therefore, in order to control the quality of historical document image digitization and to meet the need for a characterization of their content using intermediate-level metadata (between image and document structure), we propose a fast automatic layout segmentation of old document images based on five descriptors. Those descriptors, based on the autocorrelation function, are obtained by multiresolution analysis and used afterwards in a specific clustering method. The method proposed in this article has the advantage that it is performed without any hypothesis on the document structure, either about the document model (physical structure) or the typographical parameters (logical structure). It is also parameter-free, since it automatically adapts to the image content. In this paper, firstly, we detail our proposal to characterize the content of old documents by extracting the autocorrelation features in the different areas of a page and at several resolutions. Then, we show that it is possible to automatically find the homogeneous regions defined by similar indices of autocorrelation, without knowledge of the number of clusters, using adapted hierarchical ascendant classification and consensus clustering approaches. To assess our method, we apply our algorithm to 316 old document images, which encompass six centuries (1200-1900) of French history, in order to demonstrate the performance of our proposal in terms of segmentation and characterization of heterogeneous corpus content. Moreover, we define a new evaluation metric, the homogeneity measure, which aims at evaluating the segmentation and characterization accuracy of our methodology. We find a mean homogeneity accuracy of 85%. 
Those results help to represent a document by a hierarchy of layout structure and content, and to define one or more signatures for each page, on the basis of a hierarchical representation of homogeneous blocks and their topology.
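The descriptors are built on the autocorrelation function; a minimal sketch computes it via the Wiener-Khinchin theorem, with a single crude directionality ratio standing in for the paper's five multiresolution descriptors.

```python
import numpy as np

def autocorrelation(block):
    """2-D autocorrelation via the Wiener-Khinchin theorem, normalised
    so the zero-lag peak is 1, then centred."""
    b = block - block.mean()
    power = np.abs(np.fft.fft2(b)) ** 2
    ac = np.real(np.fft.ifft2(power))
    ac /= ac.flat[0]                  # zero-lag value before centring
    return np.fft.fftshift(ac)

def directional_descriptor(block):
    """Ratio of autocorrelation mass along the central row vs the central
    column (a crude stand-in for the paper's descriptors)."""
    ac = autocorrelation(block)
    c = ac.shape[0] // 2
    return float(np.abs(ac[c, :]).sum() / np.abs(ac[:, c]).sum())

# a "text-line"-like block (horizontal ruling) vs an isotropic noise block
rng = np.random.default_rng(1)
rows = np.sin(2 * np.pi * np.arange(32) / 8)
text_like = np.tile(rows[:, None], (1, 32))   # intensity varies only by row
noise = rng.normal(size=(32, 32))

d_text = directional_descriptor(text_like)
d_noise = directional_descriptor(noise)
```

The ruled block yields a strongly anisotropic autocorrelation (high mass along horizontal lags), while the noise block stays close to isotropic, which is exactly the contrast a clustering step can exploit.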

  19. Feature-driven model-based segmentation

    NASA Astrophysics Data System (ADS)

    Qazi, Arish A.; Kim, John; Jaffray, David A.; Pekar, Vladimir

    2011-03-01

The accurate delineation of anatomical structures is required in many medical image analysis applications. One example is radiation therapy planning (RTP), where traditional manual delineation is tedious, labor intensive, and can require hours of a clinician's valuable time. The majority of automated segmentation methods in RTP belong to either model-based or atlas-based approaches. One substantial limitation of model-based segmentation is that its accuracy may be restricted by uncertainties in image content, specifically when segmenting low-contrast anatomical structures, e.g. soft tissue organs in computed tomography images. In this paper, we introduce a non-parametric feature enhancement filter which replaces raw intensity image data with a high-level probabilistic map that guides the deformable model to reliably segment low-contrast regions. The method is evaluated by segmenting the submandibular and parotid glands in the head and neck region and comparing the results to manual segmentations in terms of volume overlap. Quantitative results show that we are in overall good agreement with expert segmentations, achieving volume overlap of up to 80%. Qualitatively, we demonstrate that we are able to segment low-contrast regions, which otherwise are difficult to delineate with deformable models relying on distinct object boundaries in the original image data.

  20. Automated 3D Segmentation of Intraretinal Surfaces in SD-OCT Volumes in Normal and Diabetic Mice

    PubMed Central

Antony, Bhavna J.; Jeong, Woojin; Abràmoff, Michael D.; Vance, Joseph; Sohn, Elliott H.; Garvin, Mona K.

    2014-01-01

Purpose To describe an adaptation of an existing graph-theoretic method (initially developed for human optical coherence tomography [OCT] images) for the three-dimensional (3D) automated segmentation of 10 intraretinal surfaces in mice scans, and assess the accuracy of the method and the reproducibility of thickness measurements. Methods Ten intraretinal surfaces were segmented in repeat spectral domain (SD)-OCT volumetric images acquired from normal (n = 8) and diabetic (n = 10) mice. The accuracy of the method was assessed by computing the border position errors of the automated segmentation with respect to manual tracings obtained from two experts. The reproducibility was statistically assessed for four retinal layers within eight predefined regions using the mean and SD of the differences in retinal thickness measured in the repeat scans, the coefficient of variation (CV) and the intraclass correlation coefficients (ICC; with 95% confidence intervals [CIs]). Results The overall mean unsigned border position error for the 10 surfaces computed over 97 B-scans (10 scans, 10 normal mice) was 3.16 ± 0.91 µm. The overall mean differences in retinal thicknesses computed from the normal and diabetic mice were 1.86 ± 0.95 and 2.15 ± 0.86 µm, respectively. The CV of the retinal thicknesses for all the measured layers ranged from 1.04% to 5%. The ICCs for the total retinal thickness in the normal and diabetic mice were 0.78 [0.10, 0.92] and 0.83 [0.31, 0.96], respectively. Conclusion The presented method (publicly available as part of the Iowa Reference Algorithms) has acceptable accuracy and reproducibility and is expected to be useful in the quantitative study of intraretinal layers in mice. Translational Relevance The presented method, initially developed for human OCT, has been adapted for mice, with the potential to be adapted for other animals as well. 
Quantitative in vivo assessment of the retina in mice allows changes to be measured longitudinally, decreasing the need for them. PMID:25346873

  1. Lymph node segmentation using active contours

    NASA Astrophysics Data System (ADS)

    Honea, David M.; Ge, Yaorong; Snyder, Wesley E.; Hemler, Paul F.; Vining, David J.

    1997-04-01

Lymph node volume analysis is of considerable medical importance. An automatic method of segmenting nodes in spiral CT x-ray images is needed to produce accurate, consistent, and efficient volume measurements. The method of active contours (snakes) is proposed here as a good solution to the node segmentation problem. Optimum parameterization and search strategies for using a two-dimensional snake to find node cross-sections are described, and an energy normalization scheme which preserves important spatial variations in energy is introduced. Three-dimensional segmentation is achieved without additional operator interaction by propagating the 2D results to adjacent slices. The method gives promising segmentation results on both simulated and real node images.

  2. SNP discovery and haplotype analysis in the segmentally duplicated DRD5 coding region

    PubMed Central

    HOUSLEY, D. J. E.; NIKOLAS, M.; VENTA, P. J.; JERNIGAN, K. A.; WALDMAN, I. D.; NIGG, J. T.; FRIDERICI, K. H.

    2009-01-01

The dopamine receptor 5 gene (DRD5) holds much promise as a candidate locus for contributing to neuropsychiatric disorders and other diseases influenced by the dopaminergic system, as well as having potential to affect normal behavioral variation. However, detailed analyses of this gene have been complicated by its location within a segmentally duplicated chromosomal region. Microsatellites and SNPs upstream from the coding region have been used for association studies, but we find, using bioinformatics resources, that these markers all lie within a previously unrecognized second segmental duplication (SD). In order to accurately analyze the DRD5 locus for polymorphisms in the absence of contaminating pseudogene sequences, we developed a fast and reliable method for sequence analysis and genotyping within the DRD5 coding region. We employed restriction enzyme digestion of genomic DNA to eliminate the pseudogenes prior to PCR amplification of the functional gene. This approach allowed us to determine the DRD5 haplotype structure using 31 trios and to reveal additional rare variants in 171 unrelated individuals. We clarify the inconsistencies and errors of the recorded SNPs in dbSNP and HapMap and illustrate the importance of using caution when choosing SNPs in regions of suspected duplications. The simple and relatively inexpensive method presented herein allows for convenient analysis of sequence variation in DRD5 and can be easily adapted to other duplicated genomic regions in order to obtain good quality sequence data. PMID:19397556

  3. SNP discovery and haplotype analysis in the segmentally duplicated DRD5 coding region.

    PubMed

    Housley, Donna J E; Nikolas, Molly; Venta, Patrick J; Jernigan, Kathrine A; Waldman, Irwin D; Nigg, Joel T; Friderici, Karen H

    2009-05-01

    The dopamine receptor 5 gene (DRD5) holds much promise as a candidate locus for contributing to neuropsychiatric disorders and other diseases influenced by the dopaminergic system, as well as having potential to affect normal behavioral variation. However, detailed analyses of this gene have been complicated by its location within a segmentally duplicated chromosomal region. Microsatellites and SNPs upstream from the coding region have been used for association studies, but we find, using bioinformatics resources, that these markers all lie within a previously unrecognized second segmental duplication (SD). In order to accurately analyze the DRD5 locus for polymorphisms in the absence of contaminating pseudogene sequences, we developed a fast and reliable method for sequence analysis and genotyping within the DRD5 coding region. We employed restriction enzyme digestion of genomic DNA to eliminate the pseudogenes prior to PCR amplification of the functional gene. This approach allowed us to determine the DRD5 haplotype structure using 31 trios and to reveal additional rare variants in 171 unrelated individuals. We clarify the inconsistencies and errors of the recorded SNPs in dbSNP and HapMap and illustrate the importance of using caution when choosing SNPs in regions of suspected duplications. The simple and relatively inexpensive method presented herein allows for convenient analysis of sequence variation in DRD5 and can be easily adapted to other duplicated genomic regions in order to obtain good quality sequence data. PMID:19397556

  4. An automated target recognition technique for image segmentation and scene analysis

    SciTech Connect

    Baumgart, C.W.; Ciarcia, C.A.

    1994-02-01

Automated target recognition (ATR) software has been designed to perform image segmentation and scene analysis. Specifically, this software was developed as a package for the Army's Minefield Reconnaissance and Detector (MIRADOR) program. MIRADOR is an on-/off-road, remote-control, multi-sensor system designed to detect buried and surface-emplaced metallic and non-metallic anti-tank mines. The basic requirements for this ATR software were: (1) the ability to separate target objects from the background in low signal-to-noise conditions; (2) the ability to handle a relatively high dynamic range in imaging light levels; (3) the ability to compensate for or remove light-source effects such as shadows; and (4) the ability to identify target objects as mines. Image segmentation and target evaluation were performed using an integrated and parallel processing approach. Three basic techniques (texture analysis, edge enhancement, and contrast enhancement) were used collectively to extract all potential mine target shapes from the basic image. Target evaluation was then performed using a combination of size, geometrical, and fractal characteristics, resulting in a calculated probability for each target shape. Overall results with this algorithm were quite good, though there is a trade-off between detection confidence and the number of false alarms. This technology also has applications in hazardous waste site remediation, archaeology, and law enforcement.

  5. Drive system design and error analysis of the 6 degrees of freedom segment erector of shield tunneling machine

    NASA Astrophysics Data System (ADS)

    Shi, Hu; Gong, Guofang; Yang, Huayong

    2011-09-01

Focusing on a segment erector of a shield-tunneling machine with 6 degrees of freedom (DOF), controlled by electro-hydraulic proportional systems, the kinematics of the segment erection process is presented. The perturbation method of error analysis is introduced to establish the position and attitude error model, considering factors such as the hydraulic drive, control accuracy, and tolerances in manufacturing and assembly. Dynamic simulations are carried out to obtain the control precision of the electro-hydraulic drive systems. Formulas for calculating the position and attitude error of the grip hand of the segment erector are derived. The calculation results verify the practicality and effectiveness of the error analysis, providing a foundation for the practical design, manufacture, and assembly of the segment-erecting mechanism.
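The perturbation method amounts to first-order propagation of actuator errors through the kinematic Jacobian. A toy planar 3-DOF stand-in illustrates the step; the real erector has 6 DOFs, and hydraulic dynamics are not modeled here.

```python
import numpy as np

def forward_kinematics(q):
    """Toy planar 3-DOF stand-in for the erector's grip hand:
    two revolute joints plus one prismatic extension -> (x, y)."""
    l1, l2 = 1.5, 1.0
    th1, th2, d3 = q
    x = l1 * np.cos(th1) + (l2 + d3) * np.cos(th1 + th2)
    y = l1 * np.sin(th1) + (l2 + d3) * np.sin(th1 + th2)
    return np.array([x, y])

def position_error(q, dq, h=1e-6):
    """Perturbation method: first-order propagation of actuator errors dq
    through a numerically differentiated Jacobian."""
    J = np.zeros((2, len(q)))
    for j in range(len(q)):
        e = np.zeros(len(q))
        e[j] = h
        J[:, j] = (forward_kinematics(q + e) - forward_kinematics(q - e)) / (2 * h)
    return J @ dq          # linearised position-error vector

q = np.array([0.4, -0.7, 0.05])     # nominal actuator state
dq = np.array([1e-3, 1e-3, 5e-4])   # drive/control/assembly tolerances
err = position_error(q, dq)
true_err = forward_kinematics(q + dq) - forward_kinematics(q)
```

For tolerances of this size the linearised prediction agrees with the exact perturbed kinematics to well below the tolerances themselves, which is what justifies the first-order error model.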

  6. Segmentation and Visual Analysis of Whole-Body Mouse Skeleton microSPECT

    PubMed Central

    Khmelinskii, Artem; Groen, Harald C.; Baiker, Martin; de Jong, Marion; Lelieveldt, Boudewijn P. F.

    2012-01-01

    Whole-body SPECT small animal imaging is used to study cancer, and plays an important role in the development of new drugs. Comparing and exploring whole-body datasets can be a difficult and time-consuming task due to the inherent heterogeneity of the data (high volume/throughput, multi-modality, postural and positioning variability). The goal of this study was to provide a method to align and compare side-by-side multiple whole-body skeleton SPECT datasets in a common reference, thus eliminating acquisition variability that exists between the subjects in cross-sectional and multi-modal studies. Six whole-body SPECT/CT datasets of BALB/c mice injected with bone targeting tracers 99mTc-methylene diphosphonate (99mTc-MDP) and 99mTc-hydroxymethane diphosphonate (99mTc-HDP) were used to evaluate the proposed method. An articulated version of the MOBY whole-body mouse atlas was used as a common reference. Its individual bones were registered one-by-one to the skeleton extracted from the acquired SPECT data following an anatomical hierarchical tree. Sequential registration was used while constraining the local degrees of freedom (DoFs) of each bone in accordance to the type of joint and its range of motion. The Articulated Planar Reformation (APR) algorithm was applied to the segmented data for side-by-side change visualization and comparison of data. To quantitatively evaluate the proposed algorithm, bone segmentations of extracted skeletons from the correspondent CT datasets were used. Euclidean point to surface distances between each dataset and the MOBY atlas were calculated. The obtained results indicate that after registration, the mean Euclidean distance decreased from 11.5±12.1 to 2.6±2.1 voxels. The proposed approach yielded satisfactory segmentation results with minimal user intervention. 
It proved to be robust for “incomplete” data (large chunks of skeleton missing) and for an intuitive exploration and comparison of multi-modal SPECT/CT cross-sectional mouse data. PMID:23152834
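The evaluation metric, Euclidean distance from each dataset to the atlas, can be approximated by point-to-nearest-point distances; a small sketch with a synthetic "atlas" cloud and an idealized, perfect alignment:

```python
import numpy as np

def mean_nn_distance(pts, ref):
    """Mean Euclidean distance from each point to its nearest reference
    point (a point-cloud approximation of point-to-surface distance)."""
    d = np.linalg.norm(pts[:, None, :] - ref[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

rng = np.random.default_rng(2)
atlas = rng.normal(size=(200, 3))     # "atlas" surface samples (voxel units)
offset = np.array([3.0, 0.0, 0.0])
acquired = atlas + offset             # dataset before registration
registered = acquired - offset        # after (here: perfect) alignment

before = mean_nn_distance(acquired, atlas)
after = mean_nn_distance(registered, atlas)
```

In the study this distance dropped from 11.5±12.1 to 2.6±2.1 voxels after articulated registration; the sketch just shows the metric itself shrinking as alignment improves.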

  7. Leukocyte telomere length and hippocampus volume: a meta-analysis

    PubMed Central

Nilsonne, Gustav; Tamm, Sandra; Månsson, Kristoffer N. T.; Åkerstedt, Torbjörn; Lekander, Mats

    2015-01-01

Leukocyte telomere length has been shown to correlate to hippocampus volume, but effect estimates differ in magnitude and are not uniformly positive. This study aimed primarily to investigate the relationship between leukocyte telomere length and hippocampus gray matter volume by meta-analysis and secondarily to investigate possible effect moderators. Five studies were included with a total of 2107 participants, of which 1960 were contributed by one single influential study. A random-effects meta-analysis estimated the effect to r = 0.12 [95% CI -0.13, 0.37] in the presence of heterogeneity and a subjectively estimated moderate to high risk of bias. There was no evidence that apolipoprotein E (APOE) genotype was an effect moderator, nor that the ratio of leukocyte telomerase activity to telomere length was a better predictor than leukocyte telomere length for hippocampus volume. This meta-analysis, while not proving a positive relationship, is also unable to disprove the earlier finding of a positive correlation in the one large study included in the analyses. We propose that a relationship between leukocyte telomere length and hippocampus volume may be mediated by transmigrating monocytes which differentiate into microglia in the brain parenchyma. PMID:26674112
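Random-effects pooling of correlations is conventionally done on Fisher-z values with DerSimonian-Laird weights; a sketch with hypothetical study values (not the five studies analyzed in the paper):

```python
import numpy as np

def random_effects_pooled_r(r, n):
    """DerSimonian-Laird random-effects pooling of correlations on the
    Fisher-z scale; returns the pooled r and its 95% CI."""
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)                       # Fisher z transform
    v = 1.0 / (n - 3.0)                     # within-study variance of z
    w = 1.0 / v
    z_fixed = np.sum(w * z) / np.sum(w)
    Q = np.sum(w * (z - z_fixed) ** 2)      # heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(z) - 1)) / c)  # between-study variance
    w_star = 1.0 / (v + tau2)
    z_pooled = np.sum(w_star * z) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    ci = np.tanh([z_pooled - 1.96 * se, z_pooled + 1.96 * se])
    return float(np.tanh(z_pooled)), ci

# five hypothetical studies, one much larger than the rest
r_obs = [0.25, -0.05, 0.10, 0.30, 0.12]
n_obs = [40, 55, 60, 32, 1960]
pooled, ci = random_effects_pooled_r(r_obs, n_obs)
```

A nonzero tau-squared down-weights the dominant study relative to fixed-effect pooling, which is why one large study can drive the point estimate yet still leave a wide confidence interval.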

  8. Sequence and phylogenetic analysis of M-class genome segments of novel duck reovirus NP03

    PubMed Central

    Wang, Shao; Chen, Shilong; Cheng, Xiaoxia; Chen, Shaoying; Lin, FengQiang; Jiang, Bing; Zhu, Xiaoli; Li, Zhaolong; Wang, Jinxiang

    2015-01-01

    We report the sequence and phylogenetic analysis of the entire M1, M2, and M3 genome segments of the novel duck reovirus (NDRV) NP03. Alignment between the newly determined nucleotide sequences as well as their deduced amino acid sequences and the published sequences of avian reovirus (ARV) was carried out with DNASTAR software. Sequence comparison showed that the M2 gene had the most variability among the M-class genes of DRV. Phylogenetic analysis of the M-class genes of ARV strains revealed different lineages and clusters within DRVs. The 5 NDRV strains used in this study fall into a well-supported lineage that includes chicken ARV strains, whereas Muscovy DRV (MDRV) strains are separate from NDRV strains and form a distinct genetic lineage in the M2 gene tree. However, the MDRV and NDRV strains are closely related and located in a common lineage in the M1 and M3 gene trees, respectively. PMID:25852231

  9. Method for measuring anterior chamber volume by image analysis

    NASA Astrophysics Data System (ADS)

    Zhai, Gaoshou; Zhang, Junhong; Wang, Ruichang; Wang, Bingsong; Wang, Ningli

    2007-12-01

Anterior chamber volume (ACV) is very important for an oculist making a rational pathological diagnosis for patients with eye diseases such as glaucoma, yet it is difficult to measure accurately. In this paper, a method is devised to measure anterior chamber volumes based on JPEG-formatted image files converted from medical images acquired with an anterior-chamber optical coherence tomographer (AC-OCT) and its corresponding image-processing software. The algorithms for image analysis and ACV calculation are implemented in VC++, and a series of anterior chamber images of typical patients are analyzed; the calculated anterior chamber volumes are verified to be in accord with clinical observation. The results show that the measurement method is effective and feasible and has the potential to improve the accuracy of ACV calculation. Meanwhile, measures should be taken to simplify the manual preprocessing of the images.
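Volume estimation from tomographic slices usually reduces to summing segmented cross-sectional areas times the slice spacing; a minimal sketch with synthetic circular sections (all dimensions here are assumptions):

```python
import numpy as np

def volume_from_slices(masks, pixel_area_mm2, slice_spacing_mm):
    """Volume as the sum of segmented cross-sectional areas multiplied
    by the slice spacing."""
    return float(sum(m.sum() for m in masks) * pixel_area_mm2 * slice_spacing_mm)

# synthetic chamber: circular cross-sections with varying radius
px = 0.05                                  # pixel size in mm
radii_mm = [1.0, 2.0, 2.5, 2.0, 1.0]
y, x = np.mgrid[:120, :120]
masks = [((y - 60) ** 2 + (x - 60) ** 2) * px ** 2 <= R ** 2 for R in radii_mm]

vol = volume_from_slices(masks, px * px, slice_spacing_mm=0.5)
analytic = sum(np.pi * R ** 2 for R in radii_mm) * 0.5
```

The pixel-counting estimate lands within a few percent of the analytic value, with the residual error coming from disk discretization at the section boundaries.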

  10. Integration of 3D scale-based pseudo-enhancement correction and partial volume image segmentation for improving electronic colon cleansing in CT colonography.

    PubMed

    Zhang, Hao; Li, Lihong; Zhu, Hongbin; Han, Hao; Song, Bowen; Liang, Zhengrong

    2014-01-01

    Orally administered tagging agents are usually used in CT colonography (CTC) to differentiate residual bowel content from native colonic structures. However, the high-density contrast agents tend to introduce pseudo-enhancement (PE) effect on neighboring soft tissues and elevate their observed CT attenuation value toward that of the tagged materials (TMs), which may result in an excessive electronic colon cleansing (ECC) since the pseudo-enhanced soft tissues are incorrectly identified as TMs. To address this issue, we integrated a 3D scale-based PE correction into our previous ECC pipeline based on the maximum a posteriori expectation-maximization partial volume (PV) segmentation. The newly proposed ECC scheme takes into account both the PE and PV effects that commonly appear in CTC images. We evaluated the new scheme on 40 patient CTC scans, both qualitatively through display of segmentation results, and quantitatively through radiologists' blind scoring (human observer) and computer-aided detection (CAD) of colon polyps (computer observer). Performance of the presented algorithm has shown consistent improvements over our previous ECC pipeline, especially for the detection of small polyps submerged in the contrast agents. The CAD results of polyp detection showed that 4 more submerged polyps were detected for our new ECC scheme over the previous one. PMID:24699352

  11. Semi-automatic segmentation and modeling of the cervical spinal cord for volume quantification in multiple sclerosis patients from magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Sonkova, Pavlina; Evangelou, Iordanis E.; Gallo, Antonio; Cantor, Fredric K.; Ohayon, Joan; McFarland, Henry F.; Bagnato, Francesca

    2008-03-01

    Spinal cord (SC) tissue loss is known to occur in some patients with multiple sclerosis (MS), resulting in SC atrophy. Currently, no measurement tools exist to determine the magnitude of SC atrophy from Magnetic Resonance Images (MRI). We have developed and implemented a novel semi-automatic method, based on level sets, for quantifying the cervical SC volume (CSCV) from MRI. The image dataset consisted of SC MRI exams obtained at 1.5 Tesla from 12 MS patients (10 relapsing-remitting and 2 secondary progressive) and 12 age- and gender-matched healthy volunteers (HVs). 3D high-resolution image data were acquired in the sagittal plane using an IR-FSPGR sequence. The mid-sagittal slice (MSS) was automatically located based on the entropy calculated for each of the consecutive sagittal slices. The image data were then pre-processed by 3D anisotropic diffusion filtering for noise reduction and edge enhancement before segmentation with a level set formulation that did not require re-initialization. The developed method was tested against manual segmentation (considered the ground truth), and intra-observer and inter-observer variability were evaluated.
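
The entropy criterion for locating the mid-sagittal slice can be sketched as below; this is a hedged toy version (whether the MSS corresponds to the minimum- or maximum-entropy slice, and the histogram binning, are assumptions, as the record does not specify them):

```python
import math

# Shannon entropy of a slice's intensity histogram (binning is an assumption)
def shannon_entropy(slice_pixels, bins=16, max_val=256):
    hist = [0] * bins
    for p in slice_pixels:
        hist[min(p * bins // max_val, bins - 1)] += 1
    n = len(slice_pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

# pick the slice with the extremal (here: maximal) entropy
def pick_mss(slices):
    return max(range(len(slices)), key=lambda i: shannon_entropy(slices[i]))

# toy volume: slice 1 has the most varied intensities, hence the highest entropy
slices = [[0] * 64, [i * 4 for i in range(64)], [255] * 64]
print(pick_mss(slices))  # 1
```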

  12. Incorporation of texture-based features in optimal graph-theoretic approach with application to the 3D segmentation of intraretinal surfaces in SD-OCT volumes

    NASA Astrophysics Data System (ADS)

    Antony, Bhavna J.; Abràmoff, Michael D.; Sonka, Milan; Kwon, Young H.; Garvin, Mona K.

    2012-02-01

    While efficient graph-theoretic approaches exist for the optimal (with respect to a cost function) and simultaneous segmentation of multiple surfaces within volumetric medical images, the appropriate design of cost functions remains an important challenge. Previously proposed methods have used simple cost functions or optimized a combination of the same, but little has been done to design cost functions using learned features from a training set, in a less biased fashion. Here, we present a method to design cost functions for the simultaneous segmentation of multiple surfaces using the graph-theoretic approach. Classified texture features were used to create probability maps, which were incorporated into the graph-search approach. The efficiency of such an approach was tested on 10 optic nerve head centered optical coherence tomography (OCT) volumes obtained from 10 subjects that presented with glaucoma. The mean unsigned border position error was computed with respect to the average of manual tracings from two independent observers and compared to our previously reported results. A significant improvement was noted in the overall means, which reduced from 9.25 ± 4.03 μm to 6.73 ± 2.45 μm (p < 0.01), and is also comparable with the inter-observer variability of 8.85 ± 3.85 μm.

  13. Quantitative analysis of peristaltic and segmental motion in vivo in the rat small intestine using dynamic MRI.

    PubMed

    Ailiani, Amit C; Neuberger, Thomas; Brasseur, James G; Banco, Gino; Wang, Yanxing; Smith, Nadine B; Webb, Andrew G

    2009-07-01

    Conventional methods of quantifying segmental and peristaltic motion in animal models are highly invasive, involving, for example, the external isolation of segments of the gastrointestinal (GI) tract from either dead or anesthetized animals. The present study was undertaken to determine the utility of MRI for quantitatively and noninvasively analyzing these motions in the jejunum of anesthetized rats (N = 6). Dynamic images of the GI tract after oral gavage with a Gd contrast agent were acquired at a rate of six frames per second, followed by image segmentation based on a combination of three-dimensional live wire (3D LW) and directional dynamic gradient vector flow snakes (DDGVFS). Quantitative analysis of the variation in diameter at a fixed constricting location showed clear indications of both segmental and peristaltic motions. Quantitative analysis of the frequency response gave results in good agreement with those acquired in previous studies using invasive measurement techniques. Principal component analysis (PCA) of the segmented data using active shape models resulted in three major modes. The individual modes revealed unique spatial patterns for peristaltic and segmental motility. PMID:19353667
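
The frequency quantification step can be illustrated with a small sketch: estimating the contraction period of a diameter-vs-time signal at a fixed gut location via autocorrelation. The signal below is synthetic and the method is an illustrative stand-in, not the study's actual pipeline (which first required 3D LW/DDGVFS segmentation):

```python
import math

# lag (in frames) at which the mean-removed signal best correlates with itself
def period_samples(signal):
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]
    best_lag, best_corr = 1, float("-inf")
    for lag in range(1, n // 2):
        corr = sum(x[t] * x[t + lag] for t in range(n - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

fps = 6.0  # frames per second, as in the study
# synthetic diameter trace with a 12-frame (2 s) contraction period
sig = [1.0 + 0.3 * math.sin(2 * math.pi * t / 12) for t in range(120)]
lag = period_samples(sig)
print(fps / lag)  # contractions per second: 6/12 = 0.5
```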

  14. Fast segmentation and high-quality three-dimensional volume mesh creation from medical images for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jermyn, Michael; Ghadyani, Hamid; Mastanduno, Michael A.; Turner, Wes; Davis, Scott C.; Dehghani, Hamid; Pogue, Brian W.

    2013-08-01

    Multimodal approaches that combine near-infrared (NIR) and conventional imaging modalities have been shown to improve optical parameter estimation dramatically and thus represent a prevailing trend in NIR imaging. These approaches typically involve applying anatomical templates from magnetic resonance imaging/computed tomography/ultrasound images to guide the recovery of optical parameters. However, merging these data sets using current technology requires multiple software packages, substantial expertise, significant time-commitment, and often results in unacceptably poor mesh quality for optical image reconstruction, a reality that represents a significant roadblock for translational research of multimodal NIR imaging. This work addresses these challenges directly by introducing automated digital imaging and communications in medicine image stack segmentation and a new one-click three-dimensional mesh generator optimized for multimodal NIR imaging, and combining these capabilities into a single software package (available for free download) with a streamlined workflow. Image processing time and mesh quality benchmarks were examined for four common multimodal NIR use-cases (breast, brain, pancreas, and small animal) and were compared to a commercial image processing package. Applying these tools resulted in a fivefold decrease in image processing time and 62% improvement in minimum mesh quality, in the absence of extra mesh postprocessing. These capabilities represent a significant step toward enabling translational multimodal NIR research for both expert and nonexpert users in an open-source platform.
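
Mesh quality, as reported above, is typically a per-element scalar in [0, 1]. A hedged sketch of one common tetrahedron quality metric (normalized volume over RMS edge length cubed: 1.0 for a regular tetrahedron, near 0 for a sliver) is below; the record does not give the formula its mesher uses, so this particular metric is an assumption chosen for illustration:

```python
import itertools
import math

def tet_quality(a, b, c, d):
    # signed volume from the scalar triple product of edge vectors at vertex a
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
           - u[1] * (v[0] * w[2] - v[2] * w[0])
           + u[2] * (v[0] * w[1] - v[1] * w[0]))
    volume = abs(det) / 6.0
    # RMS length of the six edges
    sq = [sum((p[i] - q[i]) ** 2 for i in range(3))
          for p, q in itertools.combinations([a, b, c, d], 2)]
    rms = math.sqrt(sum(sq) / 6.0)
    return 6.0 * math.sqrt(2.0) * volume / rms ** 3

regular = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
sliver = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0.5, 0.5, 0.01)]
print(round(tet_quality(*regular), 6), tet_quality(*sliver) < 0.1)  # 1.0 True
```

The "minimum mesh quality" figure in the abstract would then be the minimum of this value over all elements.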

  15. Final safety analysis report for the Galileo Mission: Volume 2, Book 2: Accident model document: Appendices

    SciTech Connect

    Not Available

    1988-12-15

    This section of the Accident Model Document (AMD) presents the appendices which describe the various analyses that have been conducted for use in the Galileo Final Safety Analysis Report II, Volume II. Included in these appendices are the approaches, techniques, conditions and assumptions used in the development of the analytical models plus the detailed results of the analyses. Also included in these appendices are summaries of the accidents and their associated probabilities and environment models taken from the Shuttle Data Book (NSTS-08116), plus summaries of the several segments of the recent GPHS safety test program. The information presented in these appendices is used in Section 3.0 of the AMD to develop the Failure/Abort Sequence Trees (FASTs) and to determine the fuel releases (source terms) resulting from the potential Space Shuttle/IUS accidents throughout the missions.

  16. Computerized analysis of coronary artery disease: Performance evaluation of segmentation and tracking of coronary arteries in CT angiograms

    SciTech Connect

    Zhou, Chuan Chan, Heang-Ping; Chughtai, Aamer; Kuriakose, Jean; Agarwal, Prachi; Kazerooni, Ella A.; Hadjiiski, Lubomir M.; Patel, Smita; Wei, Jun

    2014-08-15

    Purpose: The authors are developing a computer-aided detection system to assist radiologists in the analysis of coronary artery disease in coronary CT angiograms (cCTA). This study evaluated the accuracy of the authors’ coronary artery segmentation and tracking methods, which are the essential steps in defining the search space for the detection of atherosclerotic plaques. Methods: The heart region in cCTA is segmented and the vascular structures are enhanced using the authors’ multiscale coronary artery response (MSCAR) method, which performs 3D multiscale filtering and analysis of the eigenvalues of Hessian matrices. Starting from seed points at the origins of the left and right coronary arteries, a 3D rolling balloon region growing (RBG) method that adapts to the local vessel size segments and tracks each of the coronary arteries and identifies the branches along the tracked vessels. The branches are queued and subsequently tracked until the queue is exhausted. With Institutional Review Board approval, 62 cCTA were collected retrospectively from the authors’ patient files. Three experienced cardiothoracic radiologists manually tracked and marked center points of the coronary arteries as the reference standard, following the 17-segment model that includes clinically significant coronary arteries. Two radiologists visually examined the computer-segmented vessels and marked the mistakenly tracked veins and noisy structures as false positives (FPs). For the 62 cases, the radiologists marked a total of 10191 center points on 865 visible coronary artery segments. Results: The computer-segmented vessels overlapped with 83.6% (8520/10191) of the center points. Relative to the 865 radiologist-marked segments, the sensitivity reached 91.9% (795/865) if a true positive is defined as a computer-segmented vessel that overlapped with at least 10% of the reference center points marked on the segment. When the overlap threshold was increased to 50% and 100%, the sensitivities were 86.2% and 53.4%, respectively. For the 62 test cases, a total of 55 FPs were identified by the radiologists in 23 of the cases. Conclusions: The authors’ MSCAR-RBG method achieved high sensitivity for coronary artery segmentation and tracking. Studies are underway to further improve the accuracy for arterial segments affected by motion artifacts, severe calcified plaques, and noncalcified soft plaques, and to reduce the false tracking of veins and other noisy structures. Methods are also being developed to detect coronary artery disease along the tracked vessels.
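
The Hessian-eigenvalue idea behind vessel enhancement can be sketched in 2D: on a bright ridge, one Hessian eigenvalue is strongly negative and the other is near zero. The toy below uses finite differences at a single scale with closed-form 2x2 eigenvalues; the authors' 3D multiscale MSCAR filter is far more involved, so treat this only as an illustration of the principle:

```python
# Hessian eigenvalues at pixel (y, x) of a 2D intensity image via
# central finite differences; returns (smaller, larger).
def hessian_eigenvalues(img, y, x):
    dyy = img[y + 1][x] - 2 * img[y][x] + img[y - 1][x]
    dxx = img[y][x + 1] - 2 * img[y][x] + img[y][x - 1]
    dxy = (img[y + 1][x + 1] - img[y + 1][x - 1]
           - img[y - 1][x + 1] + img[y - 1][x - 1]) / 4.0
    tr, det = dxx + dyy, dxx * dyy - dxy * dxy
    disc = max(tr * tr / 4.0 - det, 0.0) ** 0.5
    return tr / 2.0 - disc, tr / 2.0 + disc

# vertical bright ridge in column 2 of a 5x5 image
img = [[0, 0, 1, 0, 0] for _ in range(5)]
lam1, lam2 = hessian_eigenvalues(img, 2, 2)
print(lam1, lam2)  # -2.0 0.0  (ridge signature: lambda1 << 0, lambda2 ~ 0)
```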

  17. Investigating materials for breast nodules simulation by using segmentation and similarity analysis of digital images

    NASA Astrophysics Data System (ADS)

    Siqueira, Paula N.; Marcomini, Karem D.; Sousa, Maria A. Z.; Schiabel, Homero

    2015-03-01

    The task of identifying the malignancy of nodular lesions on mammograms is quite complex due to overlapped structures or to granular fibrous tissue, which can cause confusion in classifying mass shapes and lead to unnecessary biopsies. Efforts to develop methods for automatic mass detection in CADe (Computer Aided Detection) schemes have been made with the aim of assisting radiologists and working as a second opinion. The validation of these methods may be accomplished, for instance, by using databases of clinical images or of images acquired from breast phantoms. With this aim, several types of materials were tested in order to produce radiographic phantom images that approximate typical mammograms of actual breast nodules. Different nodule patterns were physically produced and used on a previously developed breast phantom. Their characteristics were assessed from the digital images obtained by exposing the phantom on a LORAD M-IV mammography unit. Two analyses were performed: in the first, regions of interest containing the simulated nodules were segmented both by an automated segmentation technique and by an experienced radiologist, who delineated the contour of each nodule by means of a graphic display digitizer, and the two results were compared using evaluation metrics. The second used the Structural Similarity (SSIM) quality measure to generate quantitative data on the texture produced by each material. Although all the tested materials proved suitable for the study, the PVC film yielded the best results.
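
A hedged sketch of the SSIM index used for the texture comparison is below. This is the single-window (global) form with the standard K1 = 0.01, K2 = 0.03 constants; practical implementations average SSIM over local windows, and the record does not state which variant was used:

```python
# Global SSIM between two flattened grayscale images of equal length;
# L is the dynamic range (255 for 8-bit images).
def ssim(x, y, L=255):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2) /
            ((mx * mx + my * my + c1) * (vx + vy + c2)))

img = [10, 50, 200, 120, 90, 30]
print(ssim(img, img))                          # 1.0 for identical images
print(ssim(img, [255 - v for v in img]) < 1)   # True: inverted image scores lower
```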

  18. Theoretical analysis of segmented Wolter/LSM X-ray telescope systems

    NASA Technical Reports Server (NTRS)

    Shealy, D. L.; Chao, S. H.

    1986-01-01

    The Segmented Wolter I/LSM X-ray Telescope, which consists of a Wolter I telescope with a tilted, off-axis convex spherical Layered Synthetic Microstructure (LSM) optic placed near the primary focus to accommodate multiple off-axis detectors, has been analyzed. The Skylab ATM Experiment S056 Wolter I telescope and the Stanford/MSFC nested Wolter-Schwarzschild X-ray telescope have been considered as the primary optics. A ray trace analysis has been performed to calculate the RMS blur circle radius, the point spread function (PSF), the meridional and sagittal line spread functions (LSF), and the full width at half maximum (FWHM) of the PSF to study the spatial resolution of the system. The effects on resolution of defocusing the image plane and of tilting and decentering the multilayer (LSM) optic have also been investigated to give the mounting and alignment tolerances of the LSM optic. Comparison has been made between the performance of the segmented Wolter/LSM optical system and that of the Spectral Slicing X-ray Telescope (SSXRT) systems.

  19. Sequencing and analysis of chromosome 1 of Eimeria tenella reveals a unique segmental organization

    PubMed Central

    Ling, King-Hwa; Rajandream, Marie-Adele; Rivailler, Pierre; Ivens, Alasdair; Yap, Soon-Joo; Madeira, Alda M.B.N.; Mungall, Karen; Billington, Karen; Yee, Wai-Yan; Bankier, Alan T.; Carroll, Fionnadh; Durham, Alan M.; Peters, Nicholas; Loo, Shu-San; Mat Isa, Mohd Noor; Novaes, Jeniffer; Quail, Michael; Rosli, Rozita; Nor Shamsudin, Mariana; Sobreira, Tiago J.P.; Tivey, Adrian R.; Wai, Siew-Fun; White, Sarah; Wu, Xikun; Kerhornou, Arnaud; Blake, Damer; Mohamed, Rahmah; Shirley, Martin; Gruber, Arthur; Berriman, Matthew; Tomley, Fiona; Dear, Paul H.; Wan, Kiew-Lian

    2007-01-01

    Eimeria tenella is an intracellular protozoan parasite that infects the intestinal tracts of domestic fowl and causes coccidiosis, a serious and sometimes lethal enteritis. Eimeria falls in the same phylum (Apicomplexa) as several human and animal parasites such as Cryptosporidium, Toxoplasma, and the malaria parasite, Plasmodium. Here we report the sequencing and analysis of the first chromosome of E. tenella, a chromosome believed to carry loci associated with drug resistance and known to differ between virulent and attenuated strains of the parasite. The chromosome—which appears to be representative of the genome—is gene-dense and rich in simple-sequence repeats, many of which appear to give rise to repetitive amino acid tracts in the predicted proteins. Most striking is the segmentation of the chromosome into repeat-rich regions peppered with transposon-like elements and telomere-like repeats, alternating with repeat-free regions. Predicted genes differ in character between the two types of segment, and the repeat-rich regions appear to be associated with strain-to-strain variation. PMID:17284678

  20. Evolutionary analysis of the segment from helix 3 through helix 5 in vertebrate progesterone receptors.

    PubMed

    Baker, Michael E; Uh, Kayla Y

    2012-10-01

    The interaction between helix 3 and helix 5 in the human mineralocorticoid receptor [MR], progesterone receptor [PR] and glucocorticoid receptor [GR] influences their response to steroids. For the human PR, mutations at Gly-722 on helix 3 and Met-759 on helix 5 alter responses to progesterone. We analyzed the evolution of these two sites and the rest of a 59 residue segment containing helices 3, 4 and 5 in vertebrate PRs and found that a glycine corresponding to Gly-722 on helix 3 in human PR first appears in platypus, a monotreme. In lamprey, skates, fish, amphibians and birds, cysteine is found at this position in helix 3. This suggests that the cysteine to glycine replacement in helix 3 in the PR was important in the evolution of mammals. Interestingly, our analysis of the rest of the 59 residue segment finds 100% sequence conservation in almost all mammal PRs, substantial conservation in reptile and amphibian PRs and divergence of land vertebrate PR sequences from the fish PR sequences. The differences between fish and land vertebrate PRs may be important in the evolution of different biological progestins in fish and mammalian PR, as well as differences in susceptibility to environmental chemicals that disrupt PR-mediated physiology. PMID:22575083

  1. ANALYSIS OF THE SEGMENTAL IMPACTION OF FEMORAL HEAD FOLLOWING AN ACETABULAR FRACTURE SURGICALLY MANAGED

    PubMed Central

    Guimarães, Rodrigo Pereira; Kaleka, Camila Cohen; Cohen, Carina; Daniachi, Daniel; Keiske Ono, Nelson; Honda, Emerson Kiyoshi; Polesello, Giancarlo Cavalli; Riccioli, Walter

    2015-01-01

    Objective: To correlate the postoperative radiographic evaluation with variables accompanying acetabular fractures in order to determine the predictive factors for segmental impaction of the femoral head. Methods: Retrospective analysis of the medical files of patients submitted to open reduction surgery with internal acetabular fixation. Over approximately 35 years, 596 patients were treated for acetabular fractures; 267 were followed up for at least two years. The others were excluded either because their follow-up was shorter than the minimum time, because the data reported on file were insufficient, or because they had been submitted to non-surgical treatment. The patients were followed up by one of three surgeons of the group using the Merle d'Aubigné and Postel clinical scales as well as radiological studies. Results: Only two of the studied variables, age and quality of postoperative reduction, showed a statistically significant correlation with femoral head impaction. Conclusions: The quality of reduction (anatomical, or with up to 2 mm of residual deviation) is associated with a good radiographic evolution, reducing the potential for segmental impaction of the femoral head, a statistically significant finding.

  2. Quantitative analysis of volume images: electron microscopic tomography of HIV

    NASA Astrophysics Data System (ADS)

    Nyström, Ingela; Bengtsson, Ewert W.; Nordin, Bo G.; Borgefors, Gunilla

    1994-05-01

    Three-dimensional objects should be represented by 3D images. So far, most of the evaluation of images of 3D objects has been done visually, either by looking at slices through the volumes or by looking at 3D graphic representations of the data. In many applications a more quantitative evaluation would be valuable. Our application is the analysis of volume images of the causative agent of the acquired immune deficiency syndrome (AIDS), namely the human immunodeficiency virus (HIV), produced by electron microscopic tomography (EMT). A structural analysis of the virus is of importance. The representation of some of the interesting structural features will depend on the orientation and the position of the object relative to the digitization grid. We describe a method of defining the orientation and position of objects based on the moment of inertia of the objects in the volume image. In addition to a direct quantification of the 3D object, a quantitative description of the convex deficiency may provide valuable information about its geometrical properties. The convex deficiency is the volume object subtracted from its convex hull. We describe an algorithm for creating an enclosing polyhedron approximating the convex hull of an arbitrarily shaped object.
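
The moment-of-inertia idea can be sketched in the planar case: the principal axis of a binary object is the dominant eigenvector of its second-moment matrix, which fixes an orientation independent of the digitization grid. The record works with full 3D inertia tensors; this 2D analogue with the closed-form orientation formula is only an illustration:

```python
import math

# orientation (radians) of the principal axis of a set of (x, y) pixels
def principal_axis_angle(pixels):
    n = len(pixels)
    cx = sum(x for x, y in pixels) / n
    cy = sum(y for x, y in pixels) / n
    mxx = sum((x - cx) ** 2 for x, y in pixels) / n
    myy = sum((y - cy) ** 2 for x, y in pixels) / n
    mxy = sum((x - cx) * (y - cy) for x, y in pixels) / n
    # closed-form orientation of the dominant eigenvector of [[mxx, mxy], [mxy, myy]]
    return 0.5 * math.atan2(2 * mxy, mxx - myy)

# elongated blob along the 45-degree diagonal
blob = [(i, i) for i in range(10)]
print(round(math.degrees(principal_axis_angle(blob)), 1))  # 45.0
```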

  3. Identifying radiotherapy target volumes in brain cancer by image analysis.

    PubMed

    Cheng, Kun; Montgomery, Dean; Feng, Yang; Steel, Robin; Liao, Hanqing; McLaren, Duncan B; Erridge, Sara C; McLaughlin, Stephen; Nailon, William H

    2015-10-01

    To establish the optimal radiotherapy fields for treating brain cancer patients, the tumour volume is often outlined on magnetic resonance (MR) images, where the tumour is clearly visible, and mapped onto the computerised tomography images used for radiotherapy planning. This process requires considerable clinical experience and is time consuming, and the time required will continue to increase as more complex image sequences are used. Here, the potential of image analysis techniques for automatically identifying the radiation target volume on MR images, and thereby assisting clinicians with this difficult task, was investigated. A gradient-based level set approach was applied to the MR images of five patients with grade II, III and IV malignant cerebral glioma. The relationship between the target volumes produced by image analysis and those produced by a radiation oncologist was also investigated. The contours produced by image analysis were compared with the contours produced by an oncologist and used for treatment. In 93% of cases, the Dice similarity coefficient was found to be between 60 and 80%. This feasibility study demonstrates that image analysis has the potential for automatic outlining in the management of brain cancer patients; however, more testing and validation on a much larger patient cohort are required. PMID:26609418
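
The Dice similarity coefficient used above to compare the automatic and clinical contours is DSC = 2|A ∩ B| / (|A| + |B|) over the voxel sets inside each contour; a minimal sketch on flattened binary masks:

```python
# Dice similarity coefficient between two equal-length binary masks
def dice(mask_a, mask_b):
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2.0 * inter / (sum(mask_a) + sum(mask_b))

# toy flattened masks: 4 voxels in each contour, 3 of them overlapping
a = [1, 1, 1, 1, 0]
b = [0, 1, 1, 1, 1]
print(dice(a, b))  # 0.75
```

A DSC of 0.6 to 0.8, as reported for most cases above, therefore means the overlapping volume is substantially smaller than either contour alone.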

  4. Do tumor volume, percent tumor volume predict biochemical recurrence after radical prostatectomy? A meta-analysis

    PubMed Central

    Meng, Yang; Li, He; Xu, Peng; Wang, Jia

    2015-01-01

    The aim of this meta-analysis was to explore the effects of tumor volume (TV) and percent tumor volume (PTV) on biochemical recurrence (BCR) after radical prostatectomy (RP). An electronic search of Medline, Embase and CENTRAL was performed for relevant studies. Studies that evaluated the effects of TV and/or PTV on BCR after RP and provided detailed results of multivariate analyses were included. Combined hazard ratios (HRs) and their corresponding 95% confidence intervals (CIs) were calculated using random-effects or fixed-effects models. A total of 15 studies with 16 datasets were included in the meta-analysis. Our study showed that both TV (HR 1.04, 95% CI: 1.00-1.07; P=0.03) and PTV (HR 1.01, 95% CI: 1.00-1.02; P=0.02) were predictors of BCR after RP. The subgroup analyses revealed that TV predicted BCR in studies from Asia, that PTV correlated significantly with BCR in studies in which PTV was measured by computer planimetry, and that both TV and PTV predicted BCR in studies with small sample sizes (<1000). In conclusion, our meta-analysis demonstrated that both TV and PTV were significantly associated with BCR after RP. Therefore, TV and PTV should be considered when assessing the risk of BCR in RP specimens. PMID:26885209
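
A hedged sketch of the fixed-effect inverse-variance pooling step: each study's log hazard ratio is weighted by 1/SE², where SE is recovered from the reported 95% CI. The study values below are made up for illustration, not taken from the meta-analysis:

```python
import math

def pool_fixed_effect(studies):
    """studies: list of (hr, ci_low, ci_high) tuples; returns pooled HR and its 95% CI."""
    num = den = 0.0
    for hr, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from the 95% CI width
        w = 1.0 / se ** 2                                # inverse-variance weight
        num += w * math.log(hr)
        den += w
    mean = num / den
    half = 1.96 / math.sqrt(den)
    return math.exp(mean), math.exp(mean - half), math.exp(mean + half)

hr, lo, hi = pool_fixed_effect([(1.10, 1.02, 1.19), (1.03, 0.99, 1.07)])
print(round(hr, 3))
```

A random-effects model, also used in the record, would add a between-study variance term to each weight (e.g., DerSimonian-Laird).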

  5. Three-dimensional lung nodule segmentation and shape variance analysis to detect lung cancer with reduced false positives.

    PubMed

    Krishnamurthy, Senthilkumar; Narasimhan, Ganesh; Rengasamy, Umamaheswari

    2016-01-01

    The three-dimensional analysis of lung computed tomography scans was carried out in this study to detect malignant lung nodules. An automatic three-dimensional segmentation algorithm proposed here efficiently segmented the tissue clusters (nodules) inside the lung. However, the automatic morphological region-grow segmentation algorithm that was implemented to segment the well-circumscribed nodules inside the lung did not segment the juxta-pleural nodules present on the inner surface of the lung wall. A novel edge bridge and fill technique is proposed in this article to segment the juxta-pleural and pleural-tail nodules accurately. The centroid shift of each candidate nodule was computed. Nodules with a larger centroid shift in consecutive slices were eliminated, since a malignant nodule's position does not usually deviate. Three-dimensional shape variation and edge sharpness analyses were performed to reduce the false positives and to classify the malignant nodules. The change in area and equivalent diameter was greater for malignant nodules in consecutive slices, and the malignant nodules showed a sharp edge. Segmentation was followed by the three-dimensional centroid, shape and edge analysis, carried out on a lung computed tomography database of 20 patients with 25 malignant nodules. The algorithms proposed in this article precisely detected 22 malignant nodules and failed to detect 3, for a sensitivity of 88%. Furthermore, the algorithm correctly eliminated 216 tissue clusters that had initially been segmented as nodules; however, 41 non-malignant tissue clusters were detected as malignant nodules. The false-positive rate of the algorithm was therefore 2.05 per patient. PMID:26721427
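
The centroid-shift criterion can be sketched as follows: a candidate whose centroid drifts strongly between consecutive slices is discarded as a likely vessel or artifact. The threshold value here is an assumption for illustration; the record does not state one:

```python
# largest centroid displacement between consecutive slices, in pixels
def max_centroid_shift(slice_centroids):
    shifts = []
    for (x1, y1), (x2, y2) in zip(slice_centroids, slice_centroids[1:]):
        shifts.append(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5)
    return max(shifts)

def keep_candidate(slice_centroids, max_shift_px=3.0):  # threshold is hypothetical
    return max_centroid_shift(slice_centroids) <= max_shift_px

nodule = [(10.0, 10.0), (10.5, 10.2), (10.8, 10.1)]   # stable -> kept
vessel = [(10.0, 10.0), (14.0, 13.0), (18.0, 16.0)]   # drifting -> dropped
print(keep_candidate(nodule), keep_candidate(vessel))  # True False
```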

  6. Dynamic analysis and control of mirror segment actuators for the European Extremely Large Telescope

    NASA Astrophysics Data System (ADS)

    Witvoet, Gert; den Breeje, Remco; Nijenhuis, Jan; Hazelebach, René; Doelman, Niek

    2015-01-01

    Segmented primary mirror telescopes require dedicated piston-tip-tilt actuators for optimal optical performance. The Netherlands Organisation for Applied Scientific Research (TNO) has developed various prototypes of such actuators, in particular for the E-ELT. This paper presents the dynamics analysis and feedback control results for a specific two-stage prototype. First, the dynamics of the actuator in interconnection with the to-be-positioned mass has been analyzed, both using frequency response measurements and first-principles modeling, resulting in a detailed understanding of the dynamic behavior of the system. Next, feedback controllers for both the fine and the coarse stage have been designed and implemented. Finally, the feedback-controlled actuator has been subjected to a realistic tracking experiment; the achieved results have demonstrated that the TNO actuator is able to suppress wind force disturbances and ground vibrations by more than a factor of 10³, down to 1.4 nm root mean square, which is compliant with the requirements.
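
The disturbance-suppression idea can be illustrated with a toy integral-feedback loop: integral action drives the steady-state error from a constant force disturbance toward zero. The plant model and gain below are made-up values for illustration, not TNO's actuator or controller:

```python
# toy discrete loop: integrator controller on a static plant with a
# constant additive disturbance
def run_loop(steps, ki=0.5, disturbance=1.0):
    pos, integ, target = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = target - pos
        integ += ki * err              # integral action accumulates the error
        pos = integ + disturbance      # plant: position = command + disturbance
    return abs(target - pos)

# the residual error shrinks geometrically as the integrator winds up
print(run_loop(1), run_loop(50) < 1e-6)  # 1.0 True
```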

  7. Automatic neuron segmentation and neural network analysis method for phase contrast microscopy images

    PubMed Central

    Pang, Jincheng; Ă–zkucur, Nurdan; Ren, Michael; Kaplan, David L.; Levin, Michael; Miller, Eric L.

    2015-01-01

    Phase Contrast Microscopy (PCM) is an important tool for the long term study of living cells. Unlike fluorescence methods which suffer from photobleaching of fluorophore or dye molecules, PCM image contrast is generated by the natural variations in optical index of refraction. Unfortunately, the same physical principles which allow for these studies give rise to complex artifacts in the raw PCM imagery. Of particular interest in this paper are neuron images where these image imperfections manifest in very different ways for the two structures of specific interest: cell bodies (somas) and dendrites. To address these challenges, we introduce a novel parametric image model using the level set framework and an associated variational approach which simultaneously restores and segments this class of images. Using this technique as the basis for an automated image analysis pipeline, results for both the synthetic and real images validate and demonstrate the advantages of our approach. PMID:26601004

  9. Segmentation of fluorescence microscopy images for quantitative analysis of cell nuclear architecture.

    PubMed

    Russell, Richard A; Adams, Niall M; Stephens, David A; Batty, Elizabeth; Jensen, Kirsten; Freemont, Paul S

    2009-04-22

    Considerable advances in microscopy, biophysics, and cell biology have provided a wealth of imaging data describing the functional organization of the cell nucleus. Until recently, cell nuclear architecture has largely been assessed by subjective visual inspection of fluorescently labeled components imaged by the optical microscope. This approach is inadequate to fully quantify spatial associations, especially when the patterns are indistinct, irregular, or highly punctate. Accurate image processing techniques as well as statistical and computational tools are thus necessary to interpret this data if meaningful spatial-function relationships are to be established. Here, we have developed a thresholding algorithm, stable count thresholding (SCT), to segment nuclear compartments in confocal laser scanning microscopy image stacks to facilitate objective and quantitative analysis of the three-dimensional organization of these objects using formal statistical methods. We validate the efficacy and performance of the SCT algorithm using real images of immunofluorescently stained nuclear compartments and fluorescent beads as well as simulated images. In all three cases, the SCT algorithm delivers a segmentation that is far better than standard thresholding methods, and more importantly, is comparable to manual thresholding results. By applying the SCT algorithm and statistical analysis, we quantify the spatial configuration of promyelocytic leukemia nuclear bodies with respect to irregular-shaped SC35 domains. We show that the compartments are closer than expected under a null model for their spatial point distribution, and furthermore that their spatial association varies according to cell state. The methods reported are general and can readily be applied to quantify the spatial interactions of other nuclear compartments. PMID:19383481

  10. A new approach of graph cuts based segmentation for thermal IR image analysis

    NASA Astrophysics Data System (ADS)

    Hu, Xuezhang; Chakravarty, Sumit

    2012-12-01

    Thermal infrared (IR) images are one of the most investigated and popular data modalities, and their usage has grown enormously from humble origins to being one of the most extensively harnessed imaging forms. Instead of capturing radiometry in the visible spectrum, thermal images record the near-to-mid-infrared spectrum, producing a scene structure quite different from their visible-light counterparts. Also, the spatial resolution of infrared images has traditionally been lower than that of conventional color images. These reasons have contributed to the past trend of minimal automated analysis of thermal images, in which intensity (which corresponds to heat content) and, to a lesser extent, spatial location formed the primary features of interest. In this work we extend the automated processing of infrared images by using an advanced image analysis technique, graph cuts. Graph cuts have the unique property of providing a globally optimal segmentation, which has contributed to their popularity. We sidestep the extensive computational requirements of a graph-cut procedure that treats pixels as graph vertices by first performing an initial segmentation to obtain a short list of candidate regions. Features extracted from the candidate regions are then used as input to the graph-cut procedure. Appropriate energy functions are used to combine the traditionally used intensity feature with new salient features such as gradients. The results show the effectiveness of this technique for the automated processing of thermal infrared images, especially when compared with traditional techniques such as intensity thresholding.

  11. Detection and evolution of rhythmic components in ictal EEG using short segment spectra and discriminant analysis.

    PubMed

    Hilfiker, P; Egli, M

    1992-04-01

    An automated method for analysis of ictal EEG is described which aims to reliably detect one or several rhythmic components in short EEG segments (2 sec) and to display their presence, frequency, amplitude, location, and temporal evolution. Spectra were estimated and compared using fast Fourier transform (FFT) and autoregressive modelling (AR). A subsequent linear discriminant analysis decided whether a spectral peak is likely to be caused by rhythmic activity or by the inherent statistical variability. FFT was found to perform better than AR in the detection of rhythmic components, yielding a false-positive rate of 0.825%, a false-negative rate of 2% (signal to noise ratio -4.6 dB), a frequency resolution of 2 Hz, and a temporal resolution of 0.5 sec. Seizure analysis revealed that the ictal scalp EEG of even short seizures can show a complex evolution of rhythmic patterns which are impossible or difficult to recognize by visual inspection or conventional spectral analysis. The following phenomena are demonstrated: superposition of two rhythmic components suggesting two cerebral regions discharging simultaneously and independently with their own pacemakers, sudden and gradual change of frequencies, and gradual development of harmonic frequencies. It is suggested that a more precise correlation between rhythmic generators and seizure symptomatology might allow more predictable pharmacological responses in antiepileptic therapy. PMID:1372547
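A toy version of the spectral step — estimate the spectrum of a short segment and report the dominant rhythmic frequency — can be sketched as follows (naive DFT in pure Python; the paper's FFT/AR estimation and the discriminant test for peak significance are not reproduced here):

```python
import math

def power_spectrum(x, fs):
    """Naive DFT power spectrum of a short segment; returns (frequency, power)
    pairs up to the Nyquist frequency. O(n^2), fine for 2-second segments."""
    n = len(x)
    spec = []
    for k in range(n // 2 + 1):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        spec.append((k * fs / n, (re * re + im * im) / n))
    return spec

def dominant_rhythm(x, fs):
    """Frequency of the largest spectral peak, ignoring the DC bin."""
    spec = power_spectrum(x, fs)
    return max(spec[1:], key=lambda p: p[1])[0]
```

For a clean 6 Hz rhythm sampled over 2 seconds, the detector returns 6.0 Hz; real ictal EEG would of course need the statistical peak-vs-variability decision described in the abstract.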

  12. A novel technique of three-dimensional reconstruction segmentation and analysis for sliced images of biological tissues*

    PubMed Central

    Li, Jing; Zhao, Hai-yan; Ruan, Xing-yun; Xu, Yong-qing; Meng, Wei-zheng; Li, Kun-peng; Zhang, Jing-qiang

    2005-01-01

    A novel technique for the three-dimensional (3D) reconstruction, segmentation, display and analysis of serial slice images — including microscopic wide-field optical sections obtained by deconvolution, cryo-electron microscope slices obtained by Fourier-Bessel synthesis and electron tomography (ET), and computed tomography (CT) series — was developed to perform simultaneous measurement of the structure and function of biomedical samples. The paper presents 3D reconstruction, segmentation, display and analysis results for a pollen spore, chaperonin, a virus, the head, cervical bone, the tibia and the carpus. It also puts forward some potential applications of the new technique in the biomedical realm. PMID:16358381

  13. Parallel runway requirement analysis study. Volume 2: Simulation manual

    NASA Technical Reports Server (NTRS)

    Ebrahimi, Yaghoob S.; Chun, Ken S.

    1993-01-01

    This document is a user manual for operating the PLAND_BLUNDER (PLB) simulation program. This simulation is based on two aircraft approaching parallel runways independently and using parallel Instrument Landing System (ILS) equipment during Instrument Meteorological Conditions (IMC). If an aircraft should deviate from its assigned localizer course toward the opposite runway, this constitutes a blunder which could endanger the aircraft on the adjacent path. The worst case scenario would be if the blundering aircraft were unable to recover and continue toward the adjacent runway. PLAND_BLUNDER is a Monte Carlo-type simulation which employs the events and aircraft positioning during such a blunder situation. The model simulates two aircraft performing parallel ILS approaches using Instrument Flight Rules (IFR) or visual procedures. PLB uses a simple movement model and control law in three dimensions (X, Y, Z). The parameters of the simulation inputs and outputs are defined in this document along with a sample of the statistical analysis. This document is the second volume of a two volume set. Volume 1 is a description of the application of the PLB to the analysis of close parallel runway operations.
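A Monte Carlo blunder study of this kind can be caricatured in a few lines: sample the adjacent aircraft's lateral navigation error, apply a worst-case unrecovered deviation toward it, and count losses of separation. Every parameter value and the 500 ft (152 m) criterion below are illustrative assumptions, not values from the PLB model:

```python
import math
import random

def blunder_miss_rate(runway_sep_m, blunder_angle_deg, response_s, speed_ms,
                      nav_sd_m, n_trials=10000, seed=1):
    """Fraction of trials in which a worst-case blunder (a turn held for
    response_s seconds toward the parallel approach) closes to within 152 m
    of the adjacent aircraft, whose lateral position carries Gaussian
    navigation error. Purely a sketch of the Monte Carlo setup."""
    rng = random.Random(seed)
    # lateral distance the blunderer crosses before anyone reacts
    cross = speed_ms * response_s * math.sin(math.radians(blunder_angle_deg))
    losses = 0
    for _ in range(n_trials):
        other = rng.gauss(0.0, nav_sd_m)      # adjacent aircraft's lateral error
        gap = runway_sep_m + other - cross    # remaining separation
        if gap < 152.0:
            losses += 1
    return losses / n_trials
```

Widely spaced runways give a zero loss rate under these assumptions, while closely spaced ones drive it toward one — the trade-off the full PLB simulation quantifies with its detailed movement model and control law.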

  14. Optimal analysis for segmented mirror capture and alignment in space optics system

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaofang; Yu, Xin; Wang, Xia; Zhao, Lei

    2008-07-01

    A great number of segmented-mirror errors consisting of piston and tip-tilt arise when a large-aperture segmented space optics system deploys. These errors cause the images of the segments to depart from the field of view. A proper scanning function must therefore be adopted to drive the actuators that rotate each segment, so that its image can be brought back into the field of view and placed at the ideal position. In this paper, scanning functions such as the screw-type, rose-type, and helianthus-type are analyzed and discussed, and a principle for choosing the optimal scanning function, based on capturing images at the fastest velocity, is put forward. After capture, each outer segment must be brought into alignment with the central segment. Because the central and outer segments have different figures in the presence of surface errors, a new way to control the alignment accuracy is presented, which can effectively decrease the adverse effects of mirror surface and position errors. As an example, a simulation experiment was carried out to study the characteristics of different scanning functions and the effects of mirror surface and position errors on alignment accuracy. In the simulation, the scales of the piston and tip-tilt errors and the ideal position of the segmented mirror are given; the capture and alignment process is realized using the optics design software ZEMAX; and the optimal scanning function and the achievable alignment accuracy are determined.
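As an illustration of a screw-type scanning function, the sketch below generates an Archimedean spiral of tip/tilt commands with roughly constant arc-length steps — the property that lets such a scan sweep a circular capture field quickly without gaps. The parameterization is an assumption for illustration; the paper's actual scan laws are evaluated in ZEMAX.

```python
import math

def screw_scan(pitch, step, n_points):
    """Archimedean ('screw-type') spiral scan: r = pitch * theta / (2*pi),
    sampled at roughly constant arc-length increments so each annulus of the
    capture field is visited at a similar rate. Returns (x, y) scan offsets."""
    pts, theta = [], 0.0
    for _ in range(n_points):
        r = pitch * theta / (2 * math.pi)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
        # advance by ~'step' of arc length; d(arc) ~ r * d(theta) once r grows
        theta += step / max(r, pitch / (2 * math.pi))
    return pts
```

The scan radius grows monotonically outward from the boresight, so the segment image is guaranteed to be crossed once the spiral reaches its annulus.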

  15. Application of Control Volume Analysis to Cerebrospinal Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Wei, Timothy; Cohen, Benjamin; Anor, Tomer; Madsen, Joseph

    2011-11-01

    Hydrocephalus is among the most common birth defects and may not be prevented nor cured. Afflicted individuals face serious issues, which at present are too complicated and not well enough understood to treat via systematic therapies. This talk outlines the framework and application of a control volume methodology to clinical Phase Contrast MRI data. Specifically, integral control volume analysis utilizes a fundamental, fluid dynamics methodology to quantify intracranial dynamics within a precise, direct, and physically meaningful framework. A chronically shunted, hydrocephalic patient in need of a revision procedure was used as an in vivo case study. Magnetic resonance velocity measurements within the patient's aqueduct were obtained in four biomedical state and were analyzed using the methods presented in this dissertation. Pressure force estimates were obtained, showing distinct differences in amplitude, phase, and waveform shape for different intracranial states within the same individual. Thoughts on the physiological and diagnostic research and development implications/opportunities will be presented.
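The core bookkeeping of an integral control-volume analysis of PC-MRI data — summing pointwise velocities over the aqueduct cross-section to get the volumetric flow rate and the momentum flux crossing the control surface — can be sketched as follows. This is illustrative only; the full analysis also requires the unsteady and pressure terms of the integral momentum equation, and the uniform pixel area and water-like density are assumptions.

```python
def control_volume_terms(u, dA, rho=1000.0):
    """Given pointwise axial velocities u_i (m/s) over a uniform pixel area
    dA (m^2) across a cross-section (e.g. PC-MRI of the aqueduct), return
    the volumetric flow rate Q = sum(u * dA) and the momentum flux
    M = rho * sum(u^2 * dA) crossing that control surface."""
    Q = sum(ui * dA for ui in u)
    M = rho * sum(ui * ui * dA for ui in u)
    return Q, M
```

Differencing such flux terms between the control surfaces (and adding the unsteady term) is what yields the pressure force estimates described in the abstract.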

  16. Quantitative Analysis of the Drosophila Segmentation Regulatory Network Using Pattern Generating Potentials

    PubMed Central

    Richards, Adam; McCutchan, Michael; Wakabayashi-Ito, Noriko; Hammonds, Ann S.; Celniker, Susan E.; Kumar, Sudhir; Wolfe, Scot A.; Brodsky, Michael H.; Sinha, Saurabh

    2010-01-01

    Cis-regulatory modules that drive precise spatial-temporal patterns of gene expression are central to the process of metazoan development. We describe a new computational strategy to annotate genomic sequences based on their “pattern generating potential” and to produce quantitative descriptions of transcriptional regulatory networks at the level of individual protein-module interactions. We use this approach to convert the qualitative understanding of interactions that regulate Drosophila segmentation into a network model in which a confidence value is associated with each transcription factor-module interaction. Sequence information from multiple Drosophila species is integrated with transcription factor binding specificities to determine conserved binding site frequencies across the genome. These binding site profiles are combined with transcription factor expression information to create a model to predict module activity patterns. This model is used to scan genomic sequences for the potential to generate all or part of the expression pattern of a nearby gene, obtained from available gene expression databases. Interactions between individual transcription factors and modules are inferred by a statistical method to quantify a factor's contribution to the module's pattern generating potential. We use these pattern generating potentials to systematically describe the location and function of known and novel cis-regulatory modules in the segmentation network, identifying many examples of modules predicted to have overlapping expression activities. Surprisingly, conserved transcription factor binding site frequencies were as effective as experimental measurements of occupancy in predicting module expression patterns or factor-module interactions. Thus, unlike previous module prediction methods, this method predicts not only the location of modules but also their spatial activity pattern and the factors that directly determine this pattern. 
As databases of transcription factor specificities and in vivo gene expression patterns grow, analysis of pattern generating potentials provides a general method to decode transcriptional regulatory sequences and networks. PMID:20808951

  17. Sequence Analysis of the Segmental Duplication Responsible for Paris Sex-Ratio Drive in Drosophila simulans.

    PubMed

    Fouvry, Lucie; Ogereau, David; Berger, Anne; Gavory, Frederick; Montchamp-Moreau, Catherine

    2011-10-01

    Sex-ratio distorters are X-linked selfish genetic elements that facilitate their own transmission by subverting Mendelian segregation at the expense of the Y chromosome. Naturally occurring cases of sex-linked distorters have been reported in a variety of organisms, including several species of Drosophila; they trigger genetic conflict over the sex ratio, which is an important evolutionary force. However, with a few exceptions, the causal loci are unknown. Here, we molecularly characterize the segmental duplication involved in the Paris sex-ratio system that is still evolving in natural populations of Drosophila simulans. This 37.5 kb tandem duplication spans six genes, from the second intron of the Trf2 gene (TATA box binding protein-related factor 2) to the first intron of the org-1 gene (optomotor-blind-related-gene-1). Sequence analysis showed that the duplication arose through the production of an exact copy on the template chromosome itself. We estimated this event to be less than 500 years old. We also detected specific signatures of the duplication mechanism; these support the Duplication-Dependent Strand Annealing model. The region at the junction between the two duplicated segments contains several copies of an active transposable element, Hosim1, alternating with 687 bp repeats that are noncoding but transcribed. The almost-complete sequence identity between copies made it impossible to complete the sequencing and assembly of this region. These results form the basis for the functional dissection of Paris sex-ratio drive and will be valuable for future studies designed to better understand the dynamics and the evolutionary significance of sex chromosome drive. PMID:22384350

  18. Segmental analysis of amphetamines in hair using a sensitive UHPLC-MS/MS method.

    PubMed

    Jakobsson, Gerd; Kronstrand, Robert

    2014-06-01

    A sensitive and robust ultra high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) method was developed and validated for quantification of amphetamine, methamphetamine, 3,4-methylenedioxyamphetamine and 3,4-methylenedioxy methamphetamine in hair samples. Segmented hair (10 mg) was incubated in 2 M sodium hydroxide (80°C, 10 min) before liquid-liquid extraction with isooctane followed by centrifugation and evaporation of the organic phase to dryness. The residue was reconstituted in methanol:formate buffer pH 3 (20:80). The total run time was 4 min and after optimization of UHPLC-MS/MS-parameters validation included selectivity, matrix effects, recovery, process efficiency, calibration model and range, lower limit of quantification, precision and bias. The calibration curve ranged from 0.02 to 12.5 ng/mg, and the recovery was between 62 and 83%. During validation the bias was less than ±7% and the imprecision was less than 5% for all analytes. In routine analysis, fortified control samples demonstrated an imprecision <13% and control samples made from authentic hair demonstrated an imprecision <26%. The method was applied to samples from a controlled study of amphetamine intake as well as forensic hair samples previously analyzed with an ultra high performance liquid chromatography time of flight mass spectrometry (UHPLC-TOF-MS) screening method. The proposed method was suitable for quantification of these drugs in forensic cases including violent crimes, autopsy cases, drug testing and re-granting of driving licences. This study also demonstrated that if hair samples are divided into several short segments, the time point for intake of a small dose of amphetamine can be estimated, which might be useful when drug facilitated crimes are investigated. PMID:24817045

  19. VascuSynth: simulating vascular trees for generating volumetric image data with ground-truth segmentation and tree analysis.

    PubMed

    Hamarneh, Ghassan; Jassi, Preet

    2010-12-01

    Automated segmentation and analysis of tree-like structures from 3D medical images are important for many medical applications, such as those dealing with blood vasculature or lung airways. However, there is an absence of large databases of expert segmentations and analyses of such 3D medical images, which impedes the validation and training of proposed image analysis algorithms. In this work, we simulate volumetric images of vascular trees and generate the corresponding ground-truth segmentations, bifurcation locations, branch properties, and tree hierarchy. The tree generation is performed by iteratively growing a vascular structure based on a user-defined (possibly spatially varying) oxygen demand map. We describe the details of the algorithm and provide a variety of example results. PMID:20656456
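The iterative, demand-driven growth loop can be illustrated with a toy 2D version: repeatedly attach the highest-demand unserved site to its nearest tree node, then mark nearby demand as satisfied. This is a drastic simplification of VascuSynth's oxygen-demand model (no bifurcation optimization, radii, or flow physics); all names and the serving radius are assumptions.

```python
import math

def grow_tree(demand, root, n_branches, served_radius=1.5):
    """Toy demand-driven vascular growth. demand: dict {(x, y): value};
    root: starting node. Returns (nodes, edges) of the grown tree, where
    edges[k] = (parent, child) records branch k's attachment."""
    demand = dict(demand)               # do not mutate the caller's map
    nodes, edges = [root], []
    for _ in range(n_branches):
        live = {p: d for p, d in demand.items() if d > 0}
        if not live:
            break
        target = max(live, key=live.get)                     # neediest site
        parent = min(nodes, key=lambda n: math.dist(n, target))  # nearest node
        nodes.append(target)
        edges.append((parent, target))
        for p in demand:                # demand near the new branch is served
            if math.dist(p, target) <= served_radius:
                demand[p] = 0
    return nodes, edges
```

Because growth order follows demand, a spatially varying demand map reshapes the tree, which is the mechanism the abstract describes for generating varied ground-truth vasculature.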

  20. Ultratrace LC-MS/MS analysis of segmented calf hair for retrospective assessment of time of clenbuterol administration in Agriforensics.

    PubMed

    Duvivier, Wilco F; van Beek, Teris A; Meijer, Thijs; Peeters, Ruth J P; Groot, Maria J; Sterk, Saskia S; Nielen, Michel W F

    2015-01-21

    In agriforensics, time of administration is often debated when illegal drug residues, such as clenbuterol, are found in frequently traded cattle. In this proof-of-concept work, the feasibility of obtaining retrospective timeline information from segmented calf tail hair analyses has been studied. First, an ultraperformance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) hair analysis method was adapted to accommodate smaller sample sizes and in-house validated. Then, longitudinal 1 cm segments of calf tail hair were analyzed to obtain clenbuterol concentration profiles. The profiles found were in good agreement with calculated, theoretical positions of the clenbuterol residues along the hair. Following assessment of the average growth rate of calf tail hair, time of clenbuterol administration could be retrospectively determined from segmented hair analysis data. The data from the initial animal treatment study (n = 2) suggest that time of treatment can be retrospectively estimated with an error of 3-17 days. PMID:25537490
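The retrospective timing step reduces to simple arithmetic: a segment's distance from the root, divided by the average growth rate, gives a calendar window before sampling. The sketch below is generic; the growth rate must be measured for the matrix at hand (as the authors did for calf tail hair), and the below-skin lag parameter is an assumption, not a value from the paper.

```python
def administration_window(segment_index, seg_len_cm, growth_cm_per_day,
                          lag_days=0.0):
    """Map a drug-positive hair segment to a time window (days before
    sampling). Segment 0 is at the root; growth_cm_per_day is the assumed
    average growth rate; lag_days allows for time spent below the skin."""
    start_cm = segment_index * seg_len_cm
    end_cm = start_cm + seg_len_cm
    return (lag_days + start_cm / growth_cm_per_day,
            lag_days + end_cm / growth_cm_per_day)
```

For example, a positive third 1 cm segment at a growth rate of 0.05 cm/day points to an exposure roughly 40-60 days before sampling, which is the kind of timeline estimate (with its 3-17 day error) reported above.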

  1. Identifying Like-Minded Audiences for Global Warming Public Engagement Campaigns: An Audience Segmentation Analysis and Tool Development

    PubMed Central

    Maibach, Edward W.; Leiserowitz, Anthony; Roser-Renouf, Connie; Mertz, C. K.

    2011-01-01

    Background Achieving national reductions in greenhouse gas emissions will require public support for climate and energy policies and changes in population behaviors. Audience segmentation – a process of identifying coherent groups within a population – can be used to improve the effectiveness of public engagement campaigns. Methodology/Principal Findings In Fall 2008, we conducted a nationally representative survey of American adults (n = 2,164) to identify audience segments for global warming public engagement campaigns. By subjecting multiple measures of global warming beliefs, behaviors, policy preferences, and issue engagement to latent class analysis, we identified six distinct segments ranging in size from 7 to 33% of the population. These six segments formed a continuum, from a segment of people who were highly worried, involved and supportive of policy responses (18%), to a segment of people who were completely unconcerned and strongly opposed to policy responses (7%). Three of the segments (totaling 70%) were to varying degrees concerned about global warming and supportive of policy responses, two (totaling 18%) were unsupportive, and one was largely disengaged (12%), having paid little attention to the issue. Certain behaviors and policy preferences varied greatly across these audiences, while others did not. Using discriminant analysis, we subsequently developed 36-item and 15-item instruments that can be used to categorize respondents with 91% and 84% accuracy, respectively. Conclusions/Significance In late 2008, Americans supported a broad range of policies and personal actions to reduce global warming, although there was wide variation among the six identified audiences. To enhance the impact of campaigns, government agencies, non-profit organizations, and businesses seeking to engage the public can selectively target one or more of these audiences rather than address an undifferentiated general population. 
Our screening instruments are available to assist in that process. PMID:21423743

  2. Precise delineation of clinical target volume for crossing-segments thoracic esophageal squamous cell carcinoma based on the pattern of lymph node metastases

    PubMed Central

    Dong, Yuanli; Guan, Hui; Huang, Wei; Zhang, Zicheng; Zhao, Dongbo; Liu, Yang; Zhou, Tao

    2015-01-01

    Background This work aims to investigate the lymph node metastasis (LNM) pattern of crossing-segments thoracic esophageal squamous cell carcinoma (ESCC) and its significance for clinical target volume (CTV) delineation. Methods From January 2000 to December 2014, 3,587 patients with thoracic ESCC underwent surgery including esophagectomy and lymphadenectomy at Shandong Cancer Hospital and Institute. Information on tumor location based on preoperative endoscopic ultrasonography (EUS) and postoperative pathological results was retrospectively collected. The extent of the irradiation field was determined based on the LNM pattern. Results Among the patients reviewed, 1,501 (41.8%) had crossing-segments thoracic ESCC. The rates of LNM in the neck, upper mediastinum, middle mediastinum, lower mediastinum, and abdominal cavity were 12.1%, 15.2%, 8.0%, 3.0%, and 7.1% for patients with upper-middle thoracic ESCC; 10.3%, 8.2%, 11.0%, 4.8%, and 8.2% for middle-upper thoracic ESCC; 4.8%, 4.8%, 24.1%, 6.3%, and 22.8% for middle-lower thoracic ESCC; and 3.9%, 3.1%, 22.8%, 11.9%, and 25.8% for lower-middle thoracic ESCC, respectively. The top three sites of LNM were 105 (12.1%), 108 (6.1%), and 101 (6.1%) for upper-middle thoracic ESCC; 108 (8.2%), 105 (7.5%), and 106 (6.8%) for middle-upper thoracic ESCC; 1 (18.8%), 108 (17.9%), and 107 (9.6%) for middle-lower thoracic ESCC; and 1 (21.3%), 108 (16.1%), and 107 (10.1%) for lower-middle thoracic ESCC. Conclusions Crossing-segments thoracic ESCC was remarkably common among these patients. When delineating their CTV, tumor location should be seriously taken into consideration. For upper-middle and middle-upper thoracic ESCC, the abdominal cavity may be spared from irradiation. For middle-lower and lower-middle thoracic ESCC, besides irradiation of the relevant mediastinal regions, irradiation of the abdominal cavity cannot be neglected. PMID:26793353

  3. Segmentation with Area Constraints

    PubMed Central

    Niethammer, Marc; Zach, Christopher

    2012-01-01

    Image segmentation approaches typically incorporate weak regularity conditions such as boundary length or curvature terms, or use shape information. High-level information such as a desired area or volume, or a particular topology are only implicitly specified. In this paper we develop a segmentation method with explicit bounds on the segmented area. Area constraints allow for the soft selection of meaningful solutions, and can counteract the shrinking bias of length-based regularization. We analyze the intrinsic problems of convex relaxations proposed in the literature for segmentation with size constraints. Hence, we formulate the area-constrained segmentation task as a mixed integer program, propose a branch and bound method for exact minimization, and use convex relaxations to obtain the required lower energy bounds on candidate solutions. We also provide a numerical scheme to solve the convex subproblems. We demonstrate the method for segmentations of vesicles from electron tomography images. PMID:23084504

  4. Optical granulometric analysis of sedimentary deposits by color segmentation-based software: OPTGRAN-CS

    NASA Astrophysics Data System (ADS)

    Chávez, G. Moreno; Sarocchi, D.; Santana, E. Arce; Borselli, L.

    2015-12-01

    The study of grain size distribution is fundamental for understanding sedimentological environments. Through these analyses, clast erosion, transport and deposition processes can be interpreted and modeled. However, grain size distribution analysis can be difficult in some outcrops due to the number and complexity of the arrangement of clasts and matrix and their physical size. Despite various technological advances, it is almost impossible to obtain the full grain size distribution (from blocks to sand grain size) with a single method or instrument of analysis. For this reason, development in this area continues to be fundamental. In recent years, various methods of particle size analysis by automatic image processing have been developed, owing to their potential advantages over classical ones: speed, and detailed information content (virtually for each analyzed particle). In this framework, we have developed a novel algorithm and software for grain size distribution analysis, based on color image segmentation using an entropy-controlled quadratic Markov measure field algorithm and the Rosiwal method for counting intersections between clasts and linear transects in the images. We tested the novel algorithm on different sedimentary deposit types from 14 varieties of sedimentological environments. The results of the new algorithm were compared with grain counts performed manually by experts using the same Rosiwal method. The new algorithm has the same accuracy as a classical manual count, but the new methodology is much easier to apply and dramatically less time-consuming. The new software thus significantly increases the productivity of clast deposit analysis once field outcrop images have been recorded.
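The Rosiwal traverse step is easy to illustrate: walk along chosen image rows of a labeled (segmented) image and record the run length of every clast crossed. This sketch assumes labels have already been produced by some segmentation (the paper's Markov measure field step is not reproduced); 0 denotes matrix.

```python
def rosiwal_intercepts(labels, rows):
    """Rosiwal-style linear traverse: along each given image row, record the
    run length of every clast crossed (label > 0; 0 = matrix). Returns a
    list of (label, run_length) intercepts, in traverse order."""
    hits = []
    for r in rows:
        row = labels[r]
        j = 0
        while j < len(row):
            if row[j] > 0:
                k = j
                while k < len(row) and row[k] == row[j]:
                    k += 1                      # extend run of the same clast
                hits.append((row[j], k - j))
                j = k
            else:
                j += 1
    return hits
```

The intercept-length distribution collected this way is the raw material for the grain size distribution that the software compares against expert manual counts.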

  5. [Assessment of cardiac function by left heart catheterization: an analysis of left ventricular pressure-volume (length) loops].

    PubMed

    Sasayama, S; Nonogi, H; Sakurai, T; Kawai, C; Fujita, M; Eiho, S; Kuwahara, M

    1984-01-01

    The mechanical properties of cardiac muscle have classically been analyzed in two ways: shortening of the muscle fiber, and development of tension within the muscle. In the ejecting ventricle, left ventricular (LV) function can be analyzed in the analogous two-dimensional framework of pressure-volume loops, which are produced by plotting instantaneous volume against the corresponding LV pressure. Integrating pressure with respect to volume allows assessment of the total external ventricular work during ejection. The diastolic pressure-volume relation reflects the chamber stiffness of the ventricle. Force-velocity relations also provide a useful conceptual framework for understanding how the ventricle contracts under a given afterload, with modification of preload. In the presence of coronary artery disease, the regional nature of left ventricular contractile function should be characterized as well as the global ventricular function described above, because the latter is determined by the complex interaction of dysfunction of the ischemic myocardium and compensatory augmentation of shortening in the normally perfused myocardium. We used a computer technique to analyze the local wall motion of the ischemic heart by cineventriculography. The boundaries of serial ventricular images are automatically traced and superimposed using an external reference system. Radial grids are drawn from the center of gravity of the end-diastolic image. Measuring the length of each radial grid throughout the cardiac cycle allows analysis of the movement of the ventricle at a particular point on the circumference. Using the phasic pressure obtained simultaneously with opacification as the common parameter, segmental pressure-length loops are constructed simultaneously at various segments. In the normal heart the loops are similar over the entire circumference, rectangular in morphology, and synchronous during contraction and relaxation. In the ischemic segments, however, marked distortion of the pressure-length loops, with clockwise rotation or figure-of-eight inscription, is observed. Systolic work of the ischemic segment diminishes dramatically, and the loops exhibit varying degrees of inclination. The control segment loops also incline in the direction opposite to the ischemic loops. These differences are presumably related to local redistribution of myocardial tension during systole in the ischemic ventricle. Thus, the method described should be of particular value in assessing regional myocardial function in the ischemic ventricle and the effects of various interventions that modify ischemia. PMID:6394655
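The work quantity in these loop analyses is just the enclosed area: the integral of pressure with respect to volume (or segment length) around one cardiac cycle. A minimal sketch using the shoelace formula on sampled loop points (illustrative, not the authors' software):

```python
def loop_work(volume, pressure):
    """External work = area enclosed by a pressure-volume (or pressure-length)
    loop, via the shoelace formula. Points trace the loop in order; closure
    back to the first point is implicit. SI units: m^3 and Pa give joules."""
    n = len(volume)
    area = 0.0
    for i in range(n):
        j = (i + 1) % n
        area += volume[i] * pressure[j] - volume[j] * pressure[i]
    return abs(area) / 2.0
```

A distorted ischemic loop (rotated or figure-of-eight) encloses little or no net area, which is why its computed systolic work collapses relative to a normal rectangular loop.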

  6. Understanding the market for geographic information: A market segmentation and characteristics analysis

    NASA Technical Reports Server (NTRS)

    Piper, William S.; Mick, Mark W.

    1994-01-01

    Findings and results from a marketing research study are presented. The report identifies market segments and the product types to satisfy demand in each. An estimate of market size is based on the specific industries in each segment. A sample of ten industries was used in the study. The scientific study covered U.S. firms only.

  7. Unconventional Word Segmentation in Emerging Bilingual Students' Writing: A Longitudinal Analysis

    ERIC Educational Resources Information Center

    Sparrow, Wendy

    2014-01-01

    This study explores cross-language and longitudinal patterns in unconventional word segmentation in 25 emerging bilingual students' (Spanish/English) writing from first through third grade. Spanish and English writing samples were collected annually and analyzed for two basic types of unconventional word segmentation: hyposegmentation, in…

  8. A Theoretical Analysis of How Segmentation of Dynamic Visualizations Optimizes Students' Learning

    ERIC Educational Resources Information Center

    Spanjers, Ingrid A. E.; van Gog, Tamara; van Merrienboer, Jeroen J. G.

    2010-01-01

    This article reviews studies investigating segmentation of dynamic visualizations (i.e., showing dynamic visualizations in pieces with pauses in between) and discusses two not mutually exclusive processes that might underlie the effectiveness of segmentation. First, cognitive activities needed for dealing with the transience of dynamic…

  9. A coronary artery segmentation method based on multiscale analysis and region growing.

    PubMed

    Kerkeni, Asma; Benabdallah, Asma; Manzanera, Antoine; Bedoui, Mohamed Hedi

    2016-03-01

    Accurate coronary artery segmentation is a fundamental step in various medical imaging applications such as stenosis detection, 3D reconstruction and cardiac dynamics assessment. In this paper, a multiscale region growing (MSRG) method for coronary artery segmentation in 2D X-ray angiograms is proposed. First, a region growing rule incorporating both vesselness and direction information in a unique way is introduced. Then an iterative multiscale search based on this criterion is performed. Points selected in each step are considered as seeds for the following step. By combining vesselness and direction information in the growing rule, this method is able to avoid the blockage caused by low vesselness values in vascular regions, which in turn yields a continuous vessel tree. Performing the process in a multiscale fashion helps to extract thin and peripheral vessels often missed by other segmentation methods. Quantitative evaluation performed on real angiography images shows that the proposed method identifies about 80% of the total coronary artery tree in relatively easy images and 70% in challenging cases, with a mean precision of 82%, and outperforms other segmentation methods in terms of sensitivity. The MSRG method was also implemented with different enhancement filters, and the Frangi filter was shown to give the best results. The proposed method has proven to be well suited to coronary artery segmentation: it maintains acceptable performance when dealing with challenging situations such as noise, stenosis and poor contrast. PMID:26748040
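The seeds-carry-over idea can be sketched directly: grow a region at one scale, then use the grown set as the seeds for the next scale's response map. This toy version uses only a vesselness threshold (the paper's direction term and actual growing rule are not reproduced), with precomputed per-scale response grids standing in for filter outputs.

```python
from collections import deque

def region_grow(vesselness, seeds, thresh):
    """4-connected region growing: accept neighbors whose vesselness response
    meets the threshold. vesselness is a 2D grid; seeds is a set of (y, x)."""
    h, w = len(vesselness), len(vesselness[0])
    grown = set(seeds)
    q = deque(seeds)
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in grown
                    and vesselness[ny][nx] >= thresh):
                grown.add((ny, nx))
                q.append((ny, nx))
    return grown

def multiscale_region_grow(responses, seeds, thresh):
    """MSRG-style iteration (a sketch, not the authors' exact rule): the
    region grown at each scale seeds the next, so thin branches detected
    only at fine scales stay connected to the main vessel."""
    for vesselness in responses:
        seeds = region_grow(vesselness, seeds, thresh)
    return seeds
```

In the test below, a thin branch pixel invisible at the coarse scale is picked up at the fine scale precisely because the coarse-scale vessel seeds the fine-scale pass.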

  10. Stereophotogrammetric Mass Distribution Parameter Determination Of The Lower Body Segments For Use In Gait Analysis

    NASA Astrophysics Data System (ADS)

    Sheffer, Daniel B.; Schaer, Alex R.; Baumann, Juerg U.

    1989-04-01

    Inclusion of mass distribution information in the biomechanical analysis of motion is a requirement for accurate calculation of the external moments and forces acting on the segmental joints during locomotion. Regression equations produced from a variety of photogrammetric, anthropometric and cadaveric studies have been developed and espoused in the literature. Because of limitations in the accuracy of inertial properties predicted by regression equations developed on one population and then applied to a different study population, a measurement technique that accurately defines the shape of each individual subject is desirable. This individual data acquisition method is especially needed when analyzing the gait of subjects whose extremity geometry differs greatly from that considered "normal", or who may possess gross asymmetries in shape between their own contralateral limbs. This study presents the photogrammetric acquisition and data analysis methodology used to assess the inertial tensors of two groups of subjects, one with spastic diplegic cerebral palsy and the other considered normal.

  11. Image Segmentation and Analysis of Flexion-Extension Radiographs of Cervical Spines

    PubMed Central

    Enikov, Eniko T.

    2014-01-01

    We present a new analysis tool for cervical flexion-extension radiographs based on machine vision and computerized image processing. The method is based on semiautomatic image segmentation leading to detection of common landmarks such as the spinolaminar (SL) line or contour lines of the implanted anterior cervical plates. The technique allows for visualization of the local curvature of these landmarks during flexion-extension experiments. In addition to changes in the curvature of the SL line, it has been found that the cervical plates also deform during flexion-extension examination. While extension radiographs reveal larger curvature changes in the SL line, flexion radiographs on the other hand tend to generate larger curvature changes in the implanted cervical plates. Furthermore, while some lordosis is always present in the cervical plates by design, it actually decreases during extension and increases during flexion. Possible causes of this unexpected finding are also discussed. The described analysis may lead to a more precise interpretation of flexion-extension radiographs, allowing diagnosis of spinal instability and/or pseudoarthrosis in already seemingly fused spines.

  12. High-throughput histopathological image analysis via robust cell segmentation and hashing.

    PubMed

    Zhang, Xiaofan; Xing, Fuyong; Su, Hai; Yang, Lin; Zhang, Shaoting

    2015-12-01

    Computer-aided diagnosis of histopathological images usually requires examining all cells for an accurate diagnosis. Traditional computational methods may have efficiency issues when performing such cell-level analysis. In this paper, we propose a robust and scalable solution to enable such analysis in a real-time fashion. Specifically, a robust segmentation method is developed to delineate cells accurately using Gaussian-based hierarchical voting and a repulsive balloon model. A large-scale image retrieval approach is also designed to examine and classify each cell of a testing image by comparing it with a massive database, e.g., half a million cells extracted from the training dataset. We evaluate this proposed framework on a challenging and important clinical use case, i.e., differentiation of two types of lung cancers (adenocarcinoma and squamous carcinoma), using thousands of lung microscopic tissue images extracted from hundreds of patients. Our method has achieved promising accuracy and running time by searching among half a million cells. PMID:26599156
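
    The cell-level retrieval step can be sketched as a simple hashing scheme: each cell's feature vector is binarized into a code, and the database is ranked by Hamming distance. This is only a loose illustrative stand-in for the paper's large-scale hashing; `binary_hash`, `retrieve`, and the toy features are assumptions.

```python
def binary_hash(feature, thresholds):
    """Binarize a cell's feature vector into a compact hash code."""
    return tuple(int(f > t) for f, t in zip(feature, thresholds))

def hamming(a, b):
    """Number of differing bits between two hash codes."""
    return sum(x != y for x, y in zip(a, b))

def retrieve(query_code, database, k=3):
    """Return the labels of the k database cells with the closest codes."""
    ranked = sorted(database, key=lambda entry: hamming(query_code, entry[0]))
    return [label for _, label in ranked[:k]]
```

    A majority vote over the retrieved labels then classifies the query cell, and aggregating cell votes classifies the whole image.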

  13. Analysis of volume holographic storage allowing large-angle illumination

    NASA Astrophysics Data System (ADS)

    Shamir, Joseph

    2005-05-01

    Advanced technological developments have stimulated renewed interest in volume holography for applications such as information storage and wavelength multiplexing for communications and laser beam shaping. In these and many other applications, the information-carrying wave fronts usually possess narrow spatial-frequency bands, although they may propagate at large angles with respect to each other or a preferred optical axis. Conventional analytic methods are not capable of properly analyzing the optical architectures involved. For mitigation of the analytic difficulties, a novel approximation is introduced to treat narrow spatial-frequency band wave fronts propagating at large angles. This approximation is incorporated into the analysis of volume holography based on a plane-wave decomposition and Fourier analysis. As a result of the analysis, the recently introduced generalized Bragg selectivity is rederived for this more general case and is shown to provide enhanced performance for the above indicated applications. The power of the new theoretical description is demonstrated with the help of specific examples and computer simulations. The simulations reveal some interesting effects, such as coherent motion blur, that were predicted in an earlier publication.

  14. Synfuel program analysis. Volume I. Procedures-capabilities

    SciTech Connect

    Muddiman, J. B.; Whelan, J. W.

    1980-07-01

    This is the first of two volumes describing the analytic procedures and resulting capabilities developed by Resource Applications (RA) for examining the economic viability, public costs, and national benefits of alternative synfuel projects and integrated programs. This volume is intended for Department of Energy (DOE) and Synthetic Fuel Corporation (SFC) program management personnel and includes a general description of the costing, venture, and portfolio models with enough detail for the reader to be able to specify cases and interpret outputs. It also contains an explicit description (with examples) of the types of results which can be obtained when applied to: the analysis of individual projects; the analysis of input uncertainty, i.e., risk; and the analysis of portfolios of such projects, including varying technology mixes and buildup schedules. In all cases, the objective is to obtain, on the one hand, comparative measures of private investment requirements and expected returns (under differing public policies) as they affect the private decision to proceed, and, on the other, public costs and national benefits as they affect public decisions to participate (in what form, in what areas, and to what extent).

  15. Parallel runway requirement analysis study. Volume 1: The analysis

    NASA Technical Reports Server (NTRS)

    Ebrahimi, Yaghoob S.

    1993-01-01

    The correlation of increased flight delays with the level of aviation activity is well recognized. A main contributor to these flight delays has been the capacity of airports. Though new airport and runway construction would significantly increase airport capacity, few programs of this type are currently underway, let alone planned, because of the high cost associated with such endeavors. Therefore, it is necessary to achieve the most efficient and cost effective use of existing fixed airport resources through better planning and control of traffic flows. In fact, during the past few years the FAA has initiated such an airport capacity program designed to provide additional capacity at existing airports. Some of the improvements that the program has generated thus far have been based on new Air Traffic Control procedures, terminal automation, additional Instrument Landing Systems, improved controller display aids, and improved utilization of multiple runways/Instrument Meteorological Conditions (IMC) approach procedures. A useful element to understanding potential operational capacity enhancements at high demand airports has been the development and use of an analysis tool called The PLAND_BLUNDER (PLB) Simulation Model. The objective for building this simulation was to develop a parametric model that could be used for analysis in determining the minimum safety level of parallel runway operations for various parameters representing the airplane, navigation, surveillance, and ATC system performance.
This simulation is useful as: a quick and economical evaluation of existing environments that are experiencing IMC delays, an efficient way to study and validate proposed procedure modifications, an aid in evaluating requirements for new airports or new runways in old airports, a simple, parametric investigation of a wide range of issues and approaches, an ability to tradeoff air and ground technology and procedures contributions, and a way of considering probable blunder mechanisms and range of blunder scenarios. This study describes the steps of building the simulation and considers the input parameters, assumptions and limitations, and available outputs. Validation results and sensitivity analysis are addressed as well as outlining some IMC and Visual Meteorological Conditions (VMC) approaches to parallel runways. Also, present and future applicable technologies (e.g., Digital Autoland Systems, Traffic Collision and Avoidance System II, Enhanced Situational Awareness System, Global Positioning Systems for Landing, etc.) are assessed and recommendations made.

  16. Integrated segmentation of cellular structures

    NASA Astrophysics Data System (ADS)

    Ajemba, Peter; Al-Kofahi, Yousef; Scott, Richard; Donovan, Michael; Fernandez, Gerardo

    2011-03-01

    Automatic segmentation of cellular structures is an essential step in image cytology and histology. Despite substantial progress, better automation and improvements in accuracy and adaptability to novel applications are needed. In applications utilizing multi-channel immuno-fluorescence images, challenges include misclassification of epithelial and stromal nuclei, irregular nuclei and cytoplasm boundaries, and over- and under-segmentation of clustered nuclei. Variations in image acquisition conditions and artifacts from nuclei and cytoplasm images often confound existing algorithms in practice. In this paper, we present a robust and accurate algorithm for jointly segmenting cell nuclei and cytoplasm using a combination of ideas to reduce the aforementioned problems. First, an adaptive process that includes top-hat filtering, Eigenvalues-of-Hessian blob detection and distance transforms is used to estimate the inverse illumination field and correct for intensity non-uniformity in the nuclei channel. Next, a minimum-error-thresholding based binarization process and seed-detection combining Laplacian-of-Gaussian filtering constrained by a distance-map-based scale selection is used to identify candidate seeds for nuclei segmentation. The initial segmentation using a local maximum clustering algorithm is refined using a minimum-error-thresholding technique. Final refinements include an artifact removal process specifically targeted at lumens and other problematic structures and a systemic decision process to reclassify nuclei objects near the cytoplasm boundary as epithelial or stromal. Segmentation results were evaluated using 48 realistic phantom images with known ground-truth. The overall segmentation accuracy exceeds 94%. The algorithm was further tested on 981 images of actual prostate cancer tissue. The artifact removal process worked in 90% of cases. The algorithm has now been deployed in a high-volume histology analysis application.
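
    The minimum-error-thresholding step mentioned above can be illustrated with the classic Kittler-Illingworth criterion applied to a grey-level histogram; this is a hedged sketch of the general technique, not the deployed implementation.

```python
import math

def min_error_threshold(hist):
    """Kittler-Illingworth minimum-error threshold for a grey-level histogram.

    hist[i] is the pixel count at grey level i; returns the level t that
    minimizes the two-Gaussian misclassification criterion."""
    total = sum(hist)
    best_t, best_j = None, float("inf")
    for t in range(1, len(hist)):
        lo, hi = hist[:t], hist[t:]
        n1, n2 = sum(lo), sum(hi)
        if n1 == 0 or n2 == 0:
            continue
        p1, p2 = n1 / total, n2 / total
        m1 = sum(i * h for i, h in enumerate(lo)) / n1
        m2 = sum((t + i) * h for i, h in enumerate(hi)) / n2
        v1 = sum(h * (i - m1) ** 2 for i, h in enumerate(lo)) / n1
        v2 = sum(h * (t + i - m2) ** 2 for i, h in enumerate(hi)) / n2
        if v1 <= 0 or v2 <= 0:
            continue  # degenerate (zero-variance) class; skip
        # J(t) = 1 + P1*ln(v1) + P2*ln(v2) - 2*(P1*ln(P1) + P2*ln(P2))
        j = (1 + p1 * math.log(v1) + p2 * math.log(v2)
             - 2 * (p1 * math.log(p1) + p2 * math.log(p2)))
        if j < best_j:
            best_t, best_j = t, j
    return best_t
```

    On a clearly bimodal histogram the criterion's minimum falls in the valley between the two modes, which is what makes it a good binarization seed for nuclei.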

  17. User's operating procedures. Volume 2: Scout project financial analysis program

    NASA Technical Reports Server (NTRS)

    Harris, C. G.; Haris, D. K.

    1985-01-01

    A review is presented of the user's operating procedures for the Scout Project Automatic Data System, called SPADS. SPADS is the result of the past seven years of software development on a Prime mini-computer located at the Scout Project Office, NASA Langley Research Center, Hampton, Virginia. SPADS was developed as a single entry, multiple cross-reference data management and information retrieval system for the automation of Project office tasks, including engineering, financial, managerial, and clerical support. This volume, two (2) of three (3), provides the instructions to operate the Scout Project Financial Analysis program in data retrieval and file maintenance via the user-friendly menu drivers.

  18. Bivariate segmentation of SNP-array data for allele-specific copy number analysis in tumour samples

    PubMed Central

    2013-01-01

    Background SNP arrays output two signals that reflect the total genomic copy number (LRR) and the allelic ratio (BAF), which in combination allow the characterisation of allele-specific copy numbers (ASCNs). While methods based on hidden Markov models (HMMs) have been extended from array comparative genomic hybridisation (aCGH) to jointly handle the two signals, only one method based on change-point detection, ASCAT, performs bivariate segmentation. Results In the present work, we introduce a generic framework for bivariate segmentation of SNP array data for ASCN analysis. To this end, we discuss the characteristics of the typically applied BAF transformation and how they affect segmentation, introduce concepts of multivariate time series analysis that are of concern in this field, and discuss the appropriate formulation of the problem. The framework is implemented in a method named CnaStruct, the bivariate form of the structural change model (SCM), which has been successfully applied to transcriptome mapping and aCGH. Conclusions On a comprehensive synthetic dataset, we show that CnaStruct outperforms the segmentation of existing ASCN analysis methods. Furthermore, CnaStruct can be integrated into the workflows of several ASCN analysis tools in order to improve their performance, especially on tumour samples highly contaminated by normal cells. PMID:23497144
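
    The bivariate change-point idea can be sketched with a toy binary segmentation that minimizes a joint within-segment cost over the two signals (LRR and BAF). This is a generic illustration, not CnaStruct's structural change model; `min_gain` and all names are assumptions.

```python
def sse(xs):
    """Sum of squared deviations from the mean."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def seg_cost(lrr, baf, i, j):
    """Joint cost of treating [i, j) as one segment in both signals."""
    return sse(lrr[i:j]) + sse(baf[i:j])

def best_split(lrr, baf, i, j):
    """Breakpoint in [i, j) with the largest joint cost reduction."""
    base = seg_cost(lrr, baf, i, j)
    best_k, best_gain = None, 0.0
    for k in range(i + 1, j):
        gain = base - seg_cost(lrr, baf, i, k) - seg_cost(lrr, baf, k, j)
        if gain > best_gain:
            best_k, best_gain = k, gain
    return best_k, best_gain

def segment(lrr, baf, i=0, j=None, min_gain=1.0):
    """Recursively split while a breakpoint reduces the joint cost enough."""
    if j is None:
        j = len(lrr)
    k, gain = best_split(lrr, baf, i, j)
    if k is None or gain < min_gain:
        return [(i, j)]
    return segment(lrr, baf, i, k, min_gain) + segment(lrr, baf, k, j, min_gain)
```

    Because the cost couples both signals, a breakpoint supported by either LRR or BAF (or both) can be detected, which is the essence of bivariate over univariate segmentation.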

  19. Concept Area Two Objectives and Test Items (Rev.) Part One, Part Two. Economic Analysis Course. Segments 17-49.

    ERIC Educational Resources Information Center

    Sterling Inst., Washington, DC. Educational Technology Center.

    A multimedia course in economic analysis was developed and used in conjunction with the United States Naval Academy. (See ED 043 790 and ED 043 791 for final reports of the project evaluation and development model.) This report deals with the second concept area of the course and focuses on macroeconomics. Segments 17 through 49 are presented,…

  20. Change Detection and Land Use / Land Cover Database Updating Using Image Segmentation, GIS Analysis and Visual Interpretation

    NASA Astrophysics Data System (ADS)

    Mas, J.-F.; González, R.

    2015-08-01

    This article presents a hybrid method that combines image segmentation, GIS analysis, and visual interpretation in order to detect discrepancies between an existing land use/cover map and satellite images, and assess land use/cover changes. It was applied to the elaboration of a multidate land use/cover database of the State of Michoacán, Mexico using SPOT and Landsat imagery. The method was first applied to improve the resolution of an existing 1:250,000 land use/cover map produced through the visual interpretation of 2007 SPOT images. A segmentation of the 2007 SPOT images was carried out to create spectrally homogeneous objects with a minimum area of two hectares. Through an overlay operation with the outdated map, each segment receives the "majority" category from the map. Furthermore, spectral indices of the SPOT image were calculated for each band and each segment; therefore, each segment was characterized from the images (spectral indices) and the map (class label). In order to detect uncertain areas which present a discrepancy between spectral response and class label, a multivariate trimming, which consists of truncating a distribution from its least likely values, was applied. The segments that behave like outliers were detected and labeled as "uncertain" and a probable alternative category was determined by means of a digital classification using a decision tree classification algorithm. Then, the segments were visually inspected in the SPOT image and high resolution imagery to assign a final category. The same procedure was applied to update the map to 2014 using Landsat imagery. As a final step, an accuracy assessment was carried out using verification sites selected from a stratified random sampling and visually interpreted using high resolution imagery and ground truth.
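
    The multivariate trimming step can be illustrated with Mahalanobis distances on two per-segment features, flagging the least likely fraction as "uncertain". A rough sketch under illustrative names; the paper's feature set and trimming fraction are not reproduced.

```python
def mahalanobis2(points):
    """Squared Mahalanobis distance of each 2-D point from the sample mean."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    det = sxx * syy - sxy * sxy  # determinant of the 2x2 covariance
    out = []
    for x, y in points:
        dx, dy = x - mx, y - my
        # [dx dy] * inv(Cov) * [dx dy]^T, with inv(Cov) written out explicitly
        out.append((syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det)
    return out

def trim_outliers(points, frac=0.1):
    """Flag the least likely fraction of points (candidate 'uncertain' segments)."""
    d2 = mahalanobis2(points)
    k = max(1, int(frac * len(points)))
    cutoff = sorted(d2, reverse=True)[k - 1]
    return [d >= cutoff for d in d2]
```

    Segments flagged here would then go on to the decision-tree reclassification and visual inspection described above.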

  1. Using Paleoseismic Trenching and LiDAR Analysis to Evaluate Rupture Propagation Through Segment Boundaries of the Central Wasatch Fault Zone, Utah

    NASA Astrophysics Data System (ADS)

    Bennett, S. E. K.; DuRoss, C. B.; Reitman, N. G.; Devore, J. R.; Hiscock, A.; Gold, R. D.; Briggs, R. W.; Personius, S. F.

    2014-12-01

    Paleoseismic data near fault segment boundaries constrain the extent of past surface ruptures and the persistence of rupture termination at segment boundaries. Paleoseismic evidence for large (M≥7.0) earthquakes on the central Holocene-active fault segments of the 350-km-long Wasatch fault zone (WFZ) generally supports single-segment ruptures but also permits multi-segment rupture scenarios. The extent and frequency of ruptures that span segment boundaries remains poorly known, adding uncertainty to seismic hazard models for this populated region of Utah. To address these uncertainties we conducted four paleoseismic investigations near the Salt Lake City-Provo and Provo-Nephi segment boundaries of the WFZ. We examined an exposure of the WFZ at Maple Canyon (Woodland Hills, UT) and excavated the Flat Canyon trench (Salem, UT), 7 and 11 km, respectively, from the southern tip of the Provo segment. We document evidence for at least five earthquakes at Maple Canyon and four to seven earthquakes that post-date mid-Holocene fan deposits at Flat Canyon. These earthquake chronologies will be compared to seven earthquakes observed in previous trenches on the northern Nephi segment to assess rupture correlation across the Provo-Nephi segment boundary. To assess rupture correlation across the Salt Lake City-Provo segment boundary we excavated the Alpine trench (Alpine, UT), 1 km from the northern tip of the Provo segment, and the Corner Canyon trench (Draper, UT) 1 km from the southern tip of the Salt Lake City segment. We document evidence for six earthquakes at both sites. Ongoing geochronologic analysis (14C, optically stimulated luminescence) will constrain earthquake chronologies and help identify through-going ruptures across these segment boundaries. Analysis of new high-resolution (0.5m) airborne LiDAR along the entire WFZ will quantify latest Quaternary displacements and slip rates and document spatial and temporal slip patterns near fault segment boundaries.

  2. Motion analysis of knee joint using dynamic volume images

    NASA Astrophysics Data System (ADS)

    Haneishi, Hideaki; Kohno, Takahiro; Suzuki, Masahiko; Moriya, Hideshige; Mori, Sin-ichiro; Endo, Masahiro

    2006-03-01

    Acquisition and analysis of the three-dimensional movement of the knee joint is desired in orthopedic surgery. We have developed two methods to obtain dynamic volume images of the knee joint. One is a 2D/3D registration method combining bi-plane dynamic X-ray fluoroscopy and a static three-dimensional CT; the other is a method using so-called 4D-CT, which uses a cone beam and a wide 2D detector. In this paper, we present two analyses of knee joint movement obtained by these methods: (1) transition of the nearest points between femur and tibia, and (2) principal component analysis (PCA) of six parameters representing the three-dimensional movement of the knee. As preprocessing for the analysis, the femur and tibia regions are first extracted from the volume data at each time frame, and then registration of the tibia between different frames by an affine transformation consisting of rotation and translation is performed. The same transformation is applied to the femur as well. Using those image data, the movement of the femur relative to the tibia can be analyzed. Six movement parameters of the femur, consisting of three translation parameters and three rotation parameters, are obtained from those images. In analysis (1), the axis of each bone is first found and then the flexion angle of the knee joint is calculated. For each flexion angle, the minimum distance between femur and tibia and the location giving the minimum distance are found in both the lateral condyle and the medial condyle. As a result, it was observed that the movement of the lateral condyle is larger than that of the medial condyle. In analysis (2), it was found that the movement of the knee can be represented by the first three principal components with a precision of 99.58%, and those three components seem to relate strongly to three major movements of the femur in the knee bend known in orthopedic surgery.
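
    Analysis (2) amounts to an eigendecomposition of the covariance of the six pose parameters across time frames; the explained-variance fractions of the leading components give the 99.58%-style figure. A generic sketch (NumPy assumed available; names are illustrative):

```python
import numpy as np

def explained_variance(X):
    """Fraction of total variance captured by each principal component.

    X: (n_frames, n_params) array, e.g., three translations and three
    rotations of the femur relative to the tibia per time frame."""
    cov = np.cov(X, rowvar=False)
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending eigenvalues
    return evals / evals.sum()
```

    Summing the first three entries of the returned vector gives the cumulative fraction of knee-motion variance captured by three components.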

  3. Interactive 3D segmentation of the prostate in magnetic resonance images using shape and local appearance similarity analysis

    NASA Astrophysics Data System (ADS)

    Shahedi, Maysam; Fenster, Aaron; Cool, Derek W.; Romagnoli, Cesare; Ward, Aaron D.

    2013-03-01

    3D segmentation of the prostate in medical images is useful for prostate cancer diagnosis and therapy guidance, but is time-consuming to perform manually. Clinical translation of computer-assisted segmentation algorithms for this purpose requires a comprehensive and complementary set of evaluation metrics that are informative to the clinical end user. We have developed an interactive 3D prostate segmentation method for 1.5T and 3.0T T2-weighted magnetic resonance imaging (T2W MRI) acquired using an endorectal coil. We evaluated our method against manual segmentations of 36 3D images using complementary boundary-based (mean absolute distance; MAD), regional overlap (Dice similarity coefficient; DSC) and volume difference (ΔV) metrics. Our technique is based on inter-subject prostate shape and local boundary appearance similarity. In the training phase, we calculated a point distribution model (PDM) and a set of local mean intensity patches centered on the prostate border to capture shape and appearance variability. To segment an unseen image, we defined a set of rays - one corresponding to each of the mean intensity patches computed in training - emanating from the prostate centre. We used a radial-based search strategy and translated each mean intensity patch along its corresponding ray, selecting as a candidate the boundary point with the highest normalized cross correlation along each ray. These boundary points were then regularized using the PDM. For the whole gland, we measured a mean ± std MAD of 2.5 ± 0.7 mm, DSC of 80 ± 4%, and ΔV of 1.1 ± 8.8 cc. We also provided an anatomic breakdown of these metrics within the prostatic base, mid-gland, and apex.
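
    The three metric families (MAD, DSC, ΔV) can be written down compactly over voxel index sets and boundary point lists. A schematic version; the voxel size and function names are assumptions, not the authors' code.

```python
import math

def dice(a, b):
    """Dice similarity coefficient between two voxel index sets."""
    return 2.0 * len(a & b) / (len(a) + len(b))

def volume_difference(a, b, voxel_cc=0.001):
    """Signed volume difference (algorithm minus manual), in cc."""
    return (len(a) - len(b)) * voxel_cc

def mean_absolute_distance(surf_a, surf_b):
    """Symmetrized mean of point-to-boundary distances between two surfaces."""
    d_ab = sum(min(math.dist(p, q) for q in surf_b) for p in surf_a) / len(surf_a)
    d_ba = sum(min(math.dist(p, q) for q in surf_a) for p in surf_b) / len(surf_b)
    return 0.5 * (d_ab + d_ba)
```

    The three measures are complementary: DSC rewards regional overlap, MAD penalizes boundary error even when overlap is high, and ΔV exposes systematic over- or under-segmentation.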

  4. Edge preserving smoothing and segmentation of 4-D images via transversely isotropic scale-space processing and fingerprint analysis

    SciTech Connect

    Reutter, Bryan W.; Algazi, V. Ralph; Gullberg, Grant T; Huesman, Ronald H.

    2004-01-19

    Enhancements are described for an approach that unifies edge preserving smoothing with segmentation of time sequences of volumetric images, based on differential edge detection at multiple spatial and temporal scales. Potential applications of these 4-D methods include segmentation of respiratory gated positron emission tomography (PET) transmission images to improve accuracy of attenuation correction for imaging heart and lung lesions, and segmentation of dynamic cardiac single photon emission computed tomography (SPECT) images to facilitate unbiased estimation of time-activity curves and kinetic parameters for left ventricular volumes of interest. Improved segmentation of lung surfaces in simulated respiratory gated cardiac PET transmission images is achieved with a 4-D edge detection operator composed of edge preserving 1-D operators applied in various spatial and temporal directions. Smoothing along the axis of a 1-D operator is driven by structure separation seen in the scale-space fingerprint, rather than by image contrast. Spurious noise structures are reduced with use of small-scale isotropic smoothing in directions transverse to the 1-D operator axis. Analytic expressions are obtained for directional derivatives of the smoothed, edge preserved image, and the expressions are used to compose a 4-D operator that detects edges as zero-crossings in the second derivative in the direction of the image intensity gradient. Additional improvement in segmentation is anticipated with use of multiscale transversely isotropic smoothing and a novel interpolation method that improves the behavior of the directional derivatives. The interpolation method is demonstrated on a simulated 1-D edge and incorporation of the method into the 4-D algorithm is described.

  5. Semisupervised segmentation of MRI stroke studies

    NASA Astrophysics Data System (ADS)

    Soltanian-Zadeh, Hamid; Windham, Joe P.; Robbins, Linda

    1997-04-01

    Fast, accurate, and reproducible image segmentation is vital to the diagnosis, treatment, and evaluation of many medical situations. We present development and application of a semi-supervised method for segmenting normal and abnormal brain tissues from magnetic resonance images (MRI) of stroke patients. The method does not require manual drawing of the tissue boundaries. It is therefore faster and more reproducible than conventional methods. The steps of the new method are as follows: (1) T2- and T1-weighted MR images are co-registered using a head-and-hat approach. (2) Intracranial brain volume is segmented from the skull, scalp, and background using a multi-resolution edge tracking algorithm. (3) Additive noise is suppressed (image is restored) using a non-linear edge-preserving filter which preserves partial volume information on average. (4) Image nonuniformities are corrected using a modified lowpass filtering approach. (5) The resulting images are segmented using a self-organizing data analysis technique which is similar in principle to K-means clustering but includes a set of additional heuristic merging and splitting procedures to generate a meaningful segmentation. (6) Segmented regions are labeled white matter, gray matter, CSF, partial volumes of normal tissues, zones of stroke, or partial volumes between stroke and normal tissues. (7) Previous steps are repeated for each slice of the brain and the volume of each tissue type is estimated from the results. Details and significance of each step are explained. Experimental results using a simulation, a phantom, and selected clinical cases are presented.
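
    The clustering core of step (5) resembles K-means. A bare-bones 1-D Lloyd iteration on voxel intensities, without the heuristic merge/split refinements the abstract mentions (assumes k >= 2; names are illustrative):

```python
def kmeans_1d(values, k, iters=20):
    """Lloyd's k-means on scalar intensities; returns (labels, centers)."""
    sv = sorted(values)
    # spread the initial centers across the intensity range (k >= 2 assumed)
    centers = [sv[(i * (len(sv) - 1)) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: abs(v - centers[j])) for v in values]
        for j in range(k):
            members = [v for v, lab in zip(values, labels) if lab == j]
            if members:
                centers[j] = sum(members) / len(members)
    return labels, centers
```

    The self-organizing (ISODATA-style) variant adds rules such as splitting a cluster whose variance is too large and merging clusters whose centers are too close, which is what makes the final tissue labeling "meaningful" rather than purely distance-driven.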

  6. a New Framework for Object-Based Image Analysis Based on Segmentation Scale Space and Random Forest Classifier

    NASA Astrophysics Data System (ADS)

    Hadavand, A.; Saadatseresht, M.; Homayouni, S.

    2015-12-01

    In this paper a new object-based framework is developed for automated scale selection in image segmentation. The quality of image objects has an important impact on further analyses. Due to the strong dependency of segmentation results on the scale parameter, choosing the best value of this parameter for each class becomes a main challenge in object-based image analysis. We propose a new framework which employs a pixel-based land cover map to estimate the initial scale dedicated to each class. These scales are used to build a segmentation scale space (SSS), a hierarchy of image objects. Optimization of the SSS with respect to NDVI and DSM values in each super object is used to get the best scale in local regions of the image scene. Optimized SSS segmentations are finally classified to produce the final land cover map. The very high resolution aerial image and digital surface model provided by the ISPRS 2D semantic labelling dataset are used in our experiments. The results of our proposed method are comparable to those of the ESP tool, a well-known method to estimate the scale of segmentation, and marginally improved the overall accuracy of classification from 79% to 80%.

  7. Volume analysis of heat-induced cracks in human molars: A preliminary study

    PubMed Central

    Sandholzer, Michael A.; Baron, Katharina; Heimel, Patrick; Metscher, Brian D.

    2014-01-01

    Context: Only a few methods have been published dealing with the visualization of heat-induced cracks inside bones and teeth. Aims: As a novel approach this study used nondestructive X-ray microtomography (micro-CT) for volume analysis of heat-induced cracks to observe the reaction of human molars to various levels of thermal stress. Materials and Methods: Eighteen clinically extracted third molars were rehydrated and burned under controlled temperatures (400, 650, and 800°C) using an electric furnace adjusted with a 25°C increase/min. The subsequent high-resolution scans (voxel-size 17.7 μm) were made with a compact micro-CT scanner (SkyScan 1174). In total, 14 scans were automatically segmented with Definiens XD Developer 1.2 and three-dimensional (3D) models were computed with Visage Imaging Amira 5.2.2. The results of the automated segmentation were analyzed with an analysis of variance (ANOVA) and uncorrected post hoc least significant difference (LSD) tests using Statistical Package for Social Sciences (SPSS) 17. A probability level of P < 0.05 was used as an index of statistical significance. Results: A temperature-dependent increase of heat-induced cracks was observed between the three temperature groups (P < 0.05, ANOVA post hoc LSD). In addition, the distributions and shape of the heat-induced changes could be classified using the computed 3D models. Conclusion: The macroscopic heat-induced changes observed in this preliminary study correspond with previous observations of unrestored human teeth, yet the current observations also take into account the entire microscopic 3D expansions of heat-induced cracks within the dental hard tissues. Using the same experimental conditions proposed in the literature, this study confirms previous results, adds new observations, and offers new perspectives in the investigation of forensic evidence. PMID:25125923
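
    The group comparison described above reduces to a one-way ANOVA F statistic across the three temperature groups. A self-contained sketch of that statistic (the SPSS pipeline and the study's data are not reproduced; the numbers in the test are toy values):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic across independent groups of measurements."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

    The F value is then compared against the F distribution with (k - 1, n - k) degrees of freedom; the post hoc LSD tests in the study are pairwise t-tests run only after a significant omnibus F.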

  8. Comparative analysis of operational forecasts versus actual weather conditions in airline flight planning, volume 4

    NASA Technical Reports Server (NTRS)

    Keitz, J. F.

    1982-01-01

    The impact of more timely and accurate weather data on airline flight planning, with the emphasis on fuel savings, is studied. This volume of the report discusses the results of Task 4 of the four major tasks included in the study. Task 4 uses flight plan segment wind and temperature differences as indicators of dates and geographic areas for which significant forecast errors may have occurred. An in-depth analysis is then conducted for the days identified. The analysis shows that significant errors occur in the operational forecast on 15 of the 33 arbitrarily selected days included in the study. Wind speeds in an area of maximum winds are underestimated by at least 20 to 25 kts. on 14 of these days. The analysis also shows that there is a tendency to repeat the same forecast errors from prog to prog. Also, some perceived forecast errors from the flight plan comparisons could not be verified by visual inspection of the corresponding National Meteorological Center forecast and analysis charts, and it is likely that they are the result of weather data interpolation techniques or some other data processing procedure in the airlines' flight planning systems.

  9. Analysis of an externally radially cracked ring segment subject to three-point radial loading

    NASA Technical Reports Server (NTRS)

    Gross, B.; Srawley, J. E.; Shannon, J. L., Jr.

    1985-01-01

    The boundary collocation method was used to generate Mode I stress intensity and crack mouth opening displacement coefficients for externally radially (through-the-thickness) cracked ring segments subjected to three-point radial loading. Numerical results were obtained for ring segment outer-to-inner radius ratios (Ro/Ri) ranging from 1.10 to 2.50 and crack length to segment width ratios (a/W) ranging from 0.1 to 0.8. Stress intensity and crack mouth displacement coefficients were found to depend on the ratios Ro/Ri and a/W as well as the included angle between the directions of the reaction forces.

  10. phenoVein—A Tool for Leaf Vein Segmentation and Analysis

    PubMed Central

    Pflugfelder, Daniel; Huber, Gregor; Scharr, Hanno; HĂĽlskamp, Martin; Koornneef, Maarten; Jahnke, Siegfried

    2015-01-01

    Precise measurements of leaf vein traits are an important aspect of plant phenotyping for ecological and genetic research. Here, we present a powerful and user-friendly image analysis tool named phenoVein. It is dedicated to automated segmenting and analyzing of leaf veins in images acquired with different imaging modalities (microscope, macrophotography, etc.), including options for comfortable manual correction. Advanced image filtering emphasizes veins from the background and compensates for local brightness inhomogeneities. The most important traits being calculated are total vein length, vein density, piecewise vein lengths and widths, areole area, and skeleton graph statistics, like the number of branching or ending points. For the determination of vein widths, a model-based vein edge estimation approach has been implemented. Validation was performed for the measurement of vein length, vein width, and vein density of Arabidopsis (Arabidopsis thaliana), proving the reliability of phenoVein. We demonstrate the power of phenoVein on a set of previously described vein structure mutants of Arabidopsis (hemivenata, ondulata3, and asymmetric leaves2-101) compared with wild-type accessions Columbia-0 and Landsberg erecta-0. phenoVein is freely available as open-source software. PMID:26468519
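
    Two of the headline traits, total vein length and vein density, can be sketched directly from a skeletonized vein image: sum the lengths of the 8-neighbour links in the one-pixel-wide skeleton, then divide by leaf area. This is a generic skeleton-graph measure, not phenoVein's model-based estimator; the pixel-tuple format and names are assumptions.

```python
import math

def skeleton_length(pixels, spacing=1.0):
    """Approximate total vein length of a 1-pixel-wide skeleton.

    Only the four 'forward' neighbour offsets are checked, so each
    undirected 8-neighbour edge is counted exactly once."""
    pts = set(pixels)
    total = 0.0
    for (r, c) in pts:
        for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
            if (r + dr, c + dc) in pts:
                total += math.hypot(dr, dc)  # 1 for orthogonal, sqrt(2) diagonal
    return total * spacing

def vein_density(pixels, leaf_area_px, spacing=1.0):
    """Vein length per unit leaf area (length / area)."""
    return skeleton_length(pixels, spacing) / (leaf_area_px * spacing ** 2)
```

    With a calibrated pixel spacing (mm/pixel) the same two calls yield length in mm and density in mm per mm^2.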

  11. Magnetic field analysis of Lorentz motors using a novel segmented magnetic equivalent circuit method.

    PubMed

    Qian, Junbing; Chen, Xuedong; Chen, Han; Zeng, Lizhan; Li, Xiaoqing

    2013-01-01

    A simple and accurate method based on the magnetic equivalent circuit (MEC) model is proposed in this paper to predict magnetic flux density (MFD) distribution of the air-gap in a Lorentz motor (LM). In conventional MEC methods, the permanent magnet (PM) is treated as one common source and all branches of MEC are coupled together to become a MEC network. In our proposed method, every PM flux source is divided into three sub-sections (the outer, the middle and the inner). Thus, the MEC of LM is divided correspondingly into three independent sub-loops. As the size of the middle sub-MEC is small enough, it can be treated as an ideal MEC and solved accurately. Combining with decoupled analysis of outer and inner MECs, MFD distribution in the air-gap can be approximated by a quadratic curve, and the complex calculation of reluctances in MECs can be avoided. The segmented magnetic equivalent circuit (SMEC) method is used to analyze a LM, and its effectiveness is demonstrated by comparison with FEA, conventional MEC and experimental results. PMID:23358368
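
    The quadratic air-gap approximation can be recovered from three sampled flux-density values, one per sub-MEC region, via Lagrange interpolation through three points. A sketch with illustrative sample positions (the paper's actual sub-loop solutions are not reproduced):

```python
def quadratic_through(pts):
    """Coefficients (a, b, c) of B(x) = a*x^2 + b*x + c through three points.

    pts: three (x, B) samples, e.g., flux density at the inner, middle and
    outer positions across the air-gap (positions are illustrative)."""
    (x0, y0), (x1, y1), (x2, y2) = pts
    d0 = (x0 - x1) * (x0 - x2)
    d1 = (x1 - x0) * (x1 - x2)
    d2 = (x2 - x0) * (x2 - x1)
    a = y0 / d0 + y1 / d1 + y2 / d2
    b = -y0 * (x1 + x2) / d0 - y1 * (x0 + x2) / d1 - y2 * (x0 + x1) / d2
    c = y0 * x1 * x2 / d0 + y1 * x0 * x2 / d1 + y2 * x0 * x1 / d2
    return a, b, c
```

    Evaluating the returned polynomial between the sample positions gives the smooth MFD profile that the SMEC method uses in place of a full reluctance-network solution.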

  12. Movement Analysis of Flexion and Extension of Honeybee Abdomen Based on an Adaptive Segmented Structure.

    PubMed

    Zhao, Jieliang; Wu, Jianing; Yan, Shaoze

    2015-01-01

    Honeybees (Apis mellifera) curl their abdomens for daily rhythmic activities. Before this was established, it was generally assumed that honeybees could curl their abdomens freely. However, an intriguing but less studied feature is the possible unidirectional abdominal deformation in free-flying honeybees. A high-speed video camera was used to capture the curling and to analyze the changes in the arc length of the honeybee abdomen, not only in free-flying mode but also in fixed samples. Frozen sections and environmental scanning electron microscopy were used to investigate the microstructure and motion principle of the honeybee abdomen and to explore the physical structure restricting its curling. An adaptive segmented structure, especially the folded intersegmental membrane (FIM), plays a dominant role in the flexion and extension of the abdomen. The structural features of the FIM were utilized to mimic and exhibit the movement restriction on the honeybee abdomen. Combining experimental analysis and theoretical demonstration, a unidirectional bending mechanism of the honeybee abdomen was revealed. This finding offers a new perspective for biomimetic aerospace vehicle design. PMID:26223946

  13. Breast cancer risk analysis based on a novel segmentation framework for digital mammograms.

    PubMed

    Chen, Xin; Moschidis, Emmanouil; Taylor, Chris; Astley, Susan

    2014-01-01

    The radiographic appearance of breast tissue has been established as a strong risk factor for breast cancer. Here we present a complete machine learning framework for automatic estimation of mammographic density (MD) and robust feature extraction for breast cancer risk analysis. Our framework is able to simultaneously classify the breast region, fatty tissue, pectoral muscle, glandular tissue and nipple region. Integral to our method is the extraction of measures of breast density (as the fraction of the breast area occupied by glandular tissue) and mammographic pattern. A novel aspect of the segmentation framework is that a probability map associated with the label mask is provided, which indicates the level of confidence of each pixel being classified as the current label. The Pearson correlation coefficient between the estimated MD value and the ground truth is 0.8012 (p-value < 0.0001). We demonstrate the capability of our methods to discriminate between women with and without cancer by analyzing the contralateral mammograms of 50 women with unilateral breast cancer, and 50 controls. Using MD we obtained an area under the ROC curve (AUC) of 0.61; however our texture-based measure of mammographic pattern significantly outperforms the MD discrimination with an AUC of 0.70. PMID:25333160
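The two summary statistics reported above, the Pearson correlation against ground truth and the area under the ROC curve, can be computed as in this minimal sketch (the scores and labels below are toy values, not the study's data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the probability a random positive case outscores a random negative one."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.70 versus 0.61 then means the texture-based measure ranks a cancer case above a control 70% of the time, against 61% for MD alone.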

  14. phenoVein-A Tool for Leaf Vein Segmentation and Analysis.

    PubMed

    Bühler, Jonas; Rishmawi, Louai; Pflugfelder, Daniel; Huber, Gregor; Scharr, Hanno; Hülskamp, Martin; Koornneef, Maarten; Schurr, Ulrich; Jahnke, Siegfried

    2015-12-01

    Precise measurements of leaf vein traits are an important aspect of plant phenotyping for ecological and genetic research. Here, we present a powerful and user-friendly image analysis tool named phenoVein. It is dedicated to the automated segmentation and analysis of leaf veins in images acquired with different imaging modalities (microscope, macrophotography, etc.), including options for comfortable manual correction. Advanced image filtering enhances veins against the background and compensates for local brightness inhomogeneities. The most important traits calculated are total vein length, vein density, piecewise vein lengths and widths, areole area, and skeleton graph statistics, such as the number of branching or ending points. For the determination of vein widths, a model-based vein edge estimation approach has been implemented. Validation was performed for the measurement of vein length, vein width, and vein density of Arabidopsis (Arabidopsis thaliana), confirming the reliability of phenoVein. We demonstrate the power of phenoVein on a set of previously described vein structure mutants of Arabidopsis (hemivenata, ondulata3, and asymmetric leaves2-101) compared with the wild-type accessions Columbia-0 and Landsberg erecta-0. phenoVein is freely available as open-source software. PMID:26468519
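Two of the core traits listed above, total vein length and vein density, can be illustrated with a minimal sketch operating on a binary vein skeleton. This is not phenoVein's implementation; the pixel size and the sqrt(2) diagonal-step heuristic are assumptions.

```python
import math

def vein_length(skeleton, pixel_mm=0.01):
    """Approximate length (mm) of a vein network given as a 2D 0/1 skeleton.
    Each axis-aligned neighbour pair counts one pixel step, each diagonal
    pair sqrt(2) steps -- a common skeleton-length heuristic."""
    rows, cols = len(skeleton), len(skeleton[0])
    length = 0.0
    for r in range(rows):
        for c in range(cols):
            if not skeleton[r][c]:
                continue
            # Count each neighbour pair once: right, down, two down-diagonals.
            if c + 1 < cols and skeleton[r][c + 1]:
                length += 1.0
            if r + 1 < rows and skeleton[r + 1][c]:
                length += 1.0
            if r + 1 < rows and c + 1 < cols and skeleton[r + 1][c + 1]:
                length += math.sqrt(2)
            if r + 1 < rows and c - 1 >= 0 and skeleton[r + 1][c - 1]:
                length += math.sqrt(2)
    return length * pixel_mm

def vein_density(skeleton, leaf_area_mm2, pixel_mm=0.01):
    """Vein density as total vein length per unit leaf area (mm/mm^2)."""
    return vein_length(skeleton, pixel_mm) / leaf_area_mm2
```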

  15. Movement Analysis of Flexion and Extension of Honeybee Abdomen Based on an Adaptive Segmented Structure

    PubMed Central

    Zhao, Jieliang; Wu, Jianing; Yan, Shaoze

    2015-01-01

    Honeybees (Apis mellifera) curl their abdomens for daily rhythmic activities. Before this was established, it was generally assumed that honeybees could curl their abdomens freely. However, an intriguing but less studied feature is the possible unidirectional abdominal deformation in free-flying honeybees. A high-speed video camera was used to capture the curling and to analyze the changes in the arc length of the honeybee abdomen, not only in free-flying mode but also in fixed samples. Frozen sections and environmental scanning electron microscopy were used to investigate the microstructure and motion principle of the honeybee abdomen and to explore the physical structure restricting its curling. An adaptive segmented structure, especially the folded intersegmental membrane (FIM), plays a dominant role in the flexion and extension of the abdomen. The structural features of the FIM were utilized to mimic and exhibit the movement restriction on the honeybee abdomen. Combining experimental analysis and theoretical demonstration, a unidirectional bending mechanism of the honeybee abdomen was revealed. This finding offers a new perspective for biomimetic aerospace vehicle design. PMID:26223946

  16. Magnetic Field Analysis of Lorentz Motors Using a Novel Segmented Magnetic Equivalent Circuit Method

    PubMed Central

    Qian, Junbing; Chen, Xuedong; Chen, Han; Zeng, Lizhan; Li, Xiaoqing

    2013-01-01

    A simple and accurate method based on the magnetic equivalent circuit (MEC) model is proposed in this paper to predict magnetic flux density (MFD) distribution of the air-gap in a Lorentz motor (LM). In conventional MEC methods, the permanent magnet (PM) is treated as one common source and all branches of MEC are coupled together to become a MEC network. In our proposed method, every PM flux source is divided into three sub-sections (the outer, the middle and the inner). Thus, the MEC of LM is divided correspondingly into three independent sub-loops. As the size of the middle sub-MEC is small enough, it can be treated as an ideal MEC and solved accurately. Combining with decoupled analysis of outer and inner MECs, MFD distribution in the air-gap can be approximated by a quadratic curve, and the complex calculation of reluctances in MECs can be avoided. The segmented magnetic equivalent circuit (SMEC) method is used to analyze a LM, and its effectiveness is demonstrated by comparison with FEA, conventional MEC and experimental results. PMID:23358368

  17. Finite element analysis of weightbath hydrotraction treatment of degenerated lumbar spine segments in elastic phase.

    PubMed

    Kurutz, M; Oroszváry, L

    2010-02-10

    3D finite element models of human lumbar functional spinal units (FSU) were used for numerical analysis of weightbath hydrotraction therapy (WHT) applied for treating degenerative diseases of the lumbar spine. Five grades of age-related degeneration were modeled by material properties. Tensile material parameters of discs were obtained by parameter identification based on in vivo measured elongations of lumbar segments during regular WHT, compressive material constants were obtained from the literature. It has been proved numerically that young adults of 40-45 years have the most deformable and vulnerable discs, while the stability of segments increases with further aging. The reasons were found by analyzing the separated contrasting effects of decreasing incompressibility and increasing hardening of nucleus, yielding non-monotonous functions of stresses and deformations in terms of aging and degeneration. WHT consists of indirect and direct traction phases. Discs show a bilinear material behaviour with higher resistance in indirect and smaller in direct traction phase. Consequently, although the direct traction load is only 6% of the indirect one, direct traction deformations are 15-90% of the indirect ones, depending on the grade of degeneration. Moreover, the ratio of direct stress relaxation remains equally about 6-8% only. Consequently, direct traction controlled by extra lead weights influences mostly the deformations being responsible for the nerve release; while the stress relaxation is influenced mainly by the indirect traction load coming from the removal of the compressive body weight and muscle forces in the water. A mildly degenerated disc in WHT shows 0.15mm direct, 0.45mm indirect and 0.6mm total extension; 0.2mm direct, 0.6mm indirect and 0.8mm total posterior contraction. A severely degenerated disc exhibits 0.05mm direct, 0.05mm indirect and 0.1mm total extension; 0.05mm direct, 0.25mm indirect and 0.3mm total posterior contraction. 
These deformations relate to the instantaneous elastic phase of WHT and roughly double during the creep period of the treatment. The beneficial clinical impacts of WHT remain evident even 3 months later. PMID:19883918
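The bilinear behaviour noted above (stiff in the indirect phase, much softer in the direct phase) can be illustrated with a toy load-elongation model. The stiffness and load values are purely hypothetical, chosen only to show how a direct load of about 6% of the indirect one can still contribute a disproportionate share of the elongation.

```python
def traction_elongation(load_n, k_indirect=400.0, k_direct=100.0,
                        indirect_load_n=300.0):
    """Elongation (mm) under a bilinear traction response: stiffness
    k_indirect (N/mm) up to the indirect traction load, k_direct beyond."""
    if load_n <= indirect_load_n:
        return load_n / k_indirect
    return (indirect_load_n / k_indirect
            + (load_n - indirect_load_n) / k_direct)

indirect = traction_elongation(300.0)       # indirect traction phase only
total = traction_elongation(318.0)          # plus a 6% direct traction load
direct_share = (total - indirect) / total   # direct share of total elongation
```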

  18. A comparison between handgrip strength, upper limb fat free mass by segmental bioelectrical impedance analysis (SBIA) and anthropometric measurements in young males

    NASA Astrophysics Data System (ADS)

    Gonzalez-Correa, C. H.; Caicedo-Eraso, J. C.; Varon-Serna, D. R.

    2013-04-01

    The mechanical function and size of a muscle may be closely linked. Handgrip strength (HGS) has been used as a predictor of functional performance. Anthropometric measurements have been made to estimate arm muscle area (AMA) and the physical muscle mass volume of the upper limb (ULMMV). Electrical volume estimation is possible by segmental BIA measurements of fat free mass (SBIA-FFM), mainly muscle mass. The relationship among these variables is not well established. We aimed to determine whether physical and electrical muscle mass estimations relate to each other, and to what extent HGS relates to muscle size measured by both methods, in normal or overweight young males. Regression analysis was used to determine the association between these variables. Subjects showed a decreased HGS (65.5%), FFM (85.5%) and AMA (74.5%). An acceptable association was found between SBIA-FFM and AMA (r2 = 0.60) and a poorer one between physical and electrical volume (r2 = 0.55). However, a paired Student t-test and a Bland-Altman plot showed that the physical and electrical models were not interchangeable (p<0.0001). HGS showed a very weak association with anthropometric (r2 = 0.07) and electrical (r2 = 0.192) ULMMV, indicating that muscle mass quantity does not imply muscle strength. Other factors influencing HGS, such as physical training or nutrition, require more research.
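The statistical point made above, that a reasonably high r2 does not make two measurement methods interchangeable, can be sketched as follows (toy numbers, not the study's measurements):

```python
def r_squared(xs, ys):
    """Coefficient of determination of the least-squares line y ~ a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

def bland_altman_bias(xs, ys):
    """Mean paired difference: the systematic offset a Bland-Altman plot
    reveals even when the correlation is perfect."""
    return sum(x - y for x, y in zip(xs, ys)) / len(xs)

# Perfectly correlated, yet offset by a constant: not interchangeable.
physical = [2.0, 2.5, 3.0, 3.5]
electrical = [1.6, 2.1, 2.6, 3.1]
```

Here r_squared is 1.0 while the paired differences are all 0.4, which is exactly the situation a paired t-test or Bland-Altman analysis flags.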

  19. Relationship between methamphetamine use history and segmental hair analysis findings of MA users.

    PubMed

    Han, Eunyoung; Lee, Sangeun; In, Sanghwan; Park, Meejung; Park, Yonghoon; Cho, Sungnam; Shin, Junguk; Lee, Hunjoo

    2015-09-01

    The aim of this study was to investigate the relationship between methamphetamine (MA) use history and segmental hair analysis (1 and 3cm sections) and whole hair analysis results in Korean MA users in rehabilitation programs. Hair samples were collected from 26 Korean MA users. Eleven of the 26 subjects used cannabis with MA and two used cocaine, opiates, and MDMA with MA. Self-reported single doses of MA from the 26 subjects ranged from 0.03 to 0.5g. Concentrations of MA and its metabolite amphetamine (AP) in hair were determined by gas chromatography mass spectrometry (GC/MS) after derivatization. The method used was well validated. Qualitative analysis of all 1cm sections (n=154) revealed a good correlation between positive or negative results for MA in hair and self-reported MA use (69.48%, n=107). In detail, MA results were positive in 66 hair specimens of MA users who reported administering MA, and negative in 41 hair specimens of MA users who denied MA administration in the corresponding month. Test results were false-negative in 10.39% (n=16) of hair specimens and false-positive in 20.13% (n=31). The false positives likely reflect MA that continued to accumulate in hair after cessation, while the false negatives corresponded to self-reported histories of small MA doses or of use 5-7 months earlier. In terms of quantitative analysis, the concentrations of MA in 1 and 3cm long hair segments and in whole hair samples ranged from 1.03 to 184.98 (mean 22.01), 2.26 to 89.33 (mean 18.71), and 0.91 to 124.49 (mean 15.24)ng/mg, respectively. Ten subjects showed a good correlation between MA use and MA concentration in hair; for 7 of these 10 subjects, the correlation coefficient (r) ranged from 0.71 to 0.98 (mean 0.85). Four subjects showed a low correlation, with correlation coefficients (r) ranging from 0.36 to 0.55. 
Eleven subjects showed a poor correlation between MA use and MA concentration in hair, and for the remaining subject the correlation could not be determined. This study demonstrated the correlation between MA concentrations in hair and accurate MA use histories obtained by psychiatrists and well-trained counselors. It provides objective scientific findings that should considerably aid the interpretation of forensic results and of trials related to MA use. PMID:26197349

  20. Analysis of DNA Sequences through Segmentation: Exploring the Mosaic via Statistical Measures

    NASA Astrophysics Data System (ADS)

    Ramaswamy, Ramakrishna; Azad, Rajeev K.

    The Jensen-Shannon divergence provides a quantitative entropic measure through which genomic DNA can be divided into compositionally distinct domains by a standard recursive segmentation procedure. In this article we show the scaling behaviour observed in domain length distribution and further explore the significance of these domains in the context of gene location, in application to the segmentation of a complete bacterial genome. We also show that this entropic measure has the potential of detecting the horizontally transferred genes in a genome.
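The segmentation step described above can be sketched in a few lines: compute the Jensen-Shannon divergence between the base compositions of the two halves at every candidate cut, and split at the maximizing point. This is one level of the recursive procedure; the margin parameter is an assumption to keep both halves non-trivial.

```python
import math

def base_probs(seq):
    """Composition of the four bases as probabilities."""
    return [seq.count(b) / len(seq) for b in "ACGT"]

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

def jsd(seq, i):
    """Jensen-Shannon divergence between the compositions of seq[:i] and seq[i:]."""
    left, right = seq[:i], seq[i:]
    w1, w2 = len(left) / len(seq), len(right) / len(seq)
    p, q = base_probs(left), base_probs(right)
    mix = [w1 * a + w2 * b for a, b in zip(p, q)]
    return entropy(mix) - w1 * entropy(p) - w2 * entropy(q)

def best_split(seq, margin=4):
    """Cut point maximizing the divergence (one level of the recursion)."""
    return max(range(margin, len(seq) - margin), key=lambda i: jsd(seq, i))
```

Recursing on each half until the maximal divergence falls below a significance threshold yields the compositionally distinct domains.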

  1. Cell segmentation by multi-resolution analysis and maximum likelihood estimation (MAMLE)

    PubMed Central

    2013-01-01

    Background Cell imaging is becoming an indispensable tool for cell and molecular biology research. However, most processes studied are stochastic in nature and require the observation of many cells and events. Ideally, extraction of information from these images ought to rely on automatic methods. Here, we propose a novel segmentation method, MAMLE, for detecting cells within dense clusters. Methods MAMLE executes cell segmentation in two stages. The first relies on state-of-the-art filtering techniques: multi-resolution edge detection with morphological operators and threshold decomposition for adaptive thresholding. From this result, a correction procedure is applied that uses the maximum likelihood estimate as an objective function. It also acquires morphological features from the initial segmentation for constructing the likelihood parameters, after which the final segmentation is obtained. Conclusions We performed an empirical evaluation that includes sample images from different imaging modalities and diverse cell types. The new method attained very high (above 90%) cell segmentation accuracy in all cases. Finally, its accuracy was compared to several existing methods, and in all tests MAMLE outperformed them in segmentation accuracy. PMID:24267594
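This is not MAMLE itself, but a minimal adaptive-threshold sketch in the same spirit as its first stage: choose the intensity cut maximizing the between-class variance (Otsu's criterion), a common starting point before likelihood-based correction.

```python
def otsu_threshold(pixels, levels=256):
    """Intensity cut t (pixels <= t treated as background) maximizing the
    between-class variance, a standard adaptive-threshold criterion."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = s0 = 0
    for t in range(levels):
        w0 += hist[t]            # background pixel count
        if w0 == 0:
            continue
        w1 = total - w0          # foreground pixel count
        if w1 == 0:
            break
        s0 += t * hist[t]
        m0, m1 = s0 / w0, (total_sum - s0) / w1   # class mean intensities
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```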

  2. An approach to a comprehensive test framework for analysis and evaluation of text line segmentation algorithms.

    PubMed

    Brodic, Darko; Milivojevic, Dragan R; Milivojevic, Zoran N

    2011-01-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation is the key step for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms use text databases as reference templates. Because of the mismatch between such templates and real documents, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multiline text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency, based on the obtained error type classification, are proposed. The first is based on the segmentation line error description, while the second incorporates well-known signal detection theory. Each has different capabilities and conveniences, but they can be used as supplements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation line error description has some advantages, being characterized by five measures that describe the measurement procedure. PMID:22164106

  3. Three-dimensional volume analysis of vasculature in engineered tissues

    NASA Astrophysics Data System (ADS)

    YousefHussien, Mohammed; Garvin, Kelley; Dalecki, Diane; Saber, Eli; Helguera, María

    2013-01-01

    Three-dimensional textural and volumetric image analysis holds great potential for understanding the image data produced by multi-photon microscopy. In this paper, an algorithm that quantitatively analyzes the texture and morphology of vasculature in engineered tissues is proposed. The investigated 3D artificial tissues consist of Human Umbilical Vein Endothelial Cells (HUVEC) embedded in collagen and exposed to two regimes of ultrasound standing wave fields under different pressure conditions. Textural features were evaluated using the normalized Gray-Level Co-occurrence Matrix (GLCM) combined with Gray-Level Run Length Matrix (GLRLM) analysis. To minimize error resulting from any possible volume rotation and to provide a comprehensive textural analysis, an averaged version of nine GLCM and GLRLM orientations is used. To evaluate volumetric features, an automatic threshold using the gray-level mean value is utilized. Results show that our analysis is able to differentiate among the exposed samples, due to morphological changes induced by the standing wave fields. Furthermore, we demonstrate that providing more textural parameters than currently reported in the literature enhances the quantitative understanding of the heterogeneity of artificial tissues.
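A minimal 2D, single-offset sketch of the GLCM features mentioned above; the analysis described in the record averages nine 3D orientations and adds run-length (GLRLM) statistics on top of this.

```python
def glcm(img, levels):
    """Normalized gray-level co-occurrence matrix for the horizontal
    (0, 1) offset, over a 2D list of integer gray levels."""
    m = [[0.0] * levels for _ in range(levels)]
    total = 0
    for row in img:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
            total += 1
    return [[v / total for v in row] for row in m]

def contrast(m):
    """Haralick contrast: weights co-occurrences by squared level difference."""
    return sum(p * (i - j) ** 2
               for i, row in enumerate(m) for j, p in enumerate(row))

def energy(m):
    """Haralick energy (angular second moment): sum of squared entries."""
    return sum(p * p for row in m for p in row)
```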

  4. Flow Analysis on a Limited Volume Chilled Water System

    SciTech Connect

    Zheng, Lin

    2012-07-31

    LANL currently has a limited volume chilled water system for use in a glove box, but the system needs to be updated. Before we start building our new system, a flow analysis is needed to ensure that there are no high flow rates, extreme pressures, or any other hazards in the system. In this project the piping system is extremely important to us because it directly affects the overall design of the entire system. The primary components necessary for the chilled water piping system are shown in the design. They include the pipes themselves (perhaps of more than one diameter), the various fittings used to connect the individual pipes to form the desired system, the flow rate control devices (valves), and the pumps that add energy to the fluid. Even the simplest pipe systems are actually quite complex when viewed in terms of rigorous analytical considerations. I used an 'exact' analysis and dimensional analysis considerations combined with experimental results for this project. When 'real-world' effects are important (such as viscous effects in pipe flows), it is often difficult or impossible to use only theoretical methods to obtain the desired results. A judicious combination of experimental data with theoretical considerations and dimensional analysis is needed in order to reduce risks to an acceptable level.
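A sketch of the kind of head-loss estimate such a flow analysis rests on: the Darcy-Weisbach equation h_f = f (L/D) v^2 / (2g). The friction factor here is an assumed constant for illustration, rather than one obtained from a Moody chart or the Colebrook equation.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def velocity(flow_m3s, diameter_m):
    """Mean flow velocity (m/s) in a circular pipe."""
    area = math.pi * diameter_m ** 2 / 4
    return flow_m3s / area

def head_loss(flow_m3s, length_m, diameter_m, friction=0.02):
    """Darcy-Weisbach major head loss (m) with an assumed friction factor."""
    v = velocity(flow_m3s, diameter_m)
    return friction * (length_m / diameter_m) * v ** 2 / (2 * G)
```

Summing such losses over pipes, fittings, and valves against the pump head is what confirms the system stays within safe flow and pressure limits.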

  5. Coal gasification systems engineering and analysis. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Feasibility analyses and systems engineering studies for a 20,000 tons per day medium Btu (MBG) coal gasification plant to be built by TVA in Northern Alabama were conducted. Major objectives were as follows: (1) provide design and cost data to support the selection of a gasifier technology and other major plant design parameters, (2) provide design and cost data to support alternate product evaluation, (3) prepare a technology development plan to address areas of high technical risk, and (4) develop schedules, PERT charts, and a work breakdown structure to aid in preliminary project planning. Volume one contains a summary of gasification system characterizations. Five gasification technologies were selected for evaluation: Koppers-Totzek, Texaco, Lurgi Dry Ash, Slagging Lurgi, and Babcock and Wilcox. A summary of the trade studies and cost sensitivity analysis is included.

  6. On the development of weighting factors for ballast ranking prioritization & development of the relationship and rate of defective segments based on volume of missing ballast

    NASA Astrophysics Data System (ADS)

    Cronin, John

    This thesis explores the effects of missing ballast on track behavior and degradation. As ballast is an integral part of the track structure, the hypothesized effect of missing ballast is that defects will be more common which in turn leads to more derailments. In order to quantify the volume of missing ballast, remote sensing technologies were used to provide an accurate profile of the ballast. When the existing profile is compared to an idealized profile, the area of missing ballast can be computed. The area is then subdivided into zones which represent the area in which the ballast performs a key function in the track structure. These areas are then extrapolated into the volume of missing ballast for each zone based on the distance between collected profiles. In order to emphasize the key functions that the zones previously created perform, weighting factors were developed based on common risk-increasing hazards, such as curves and heavy axle loads, which are commonly found on railways. These weighting factors are applied to the specified zones' missing ballast volume when such a hazard exists in that segment of track. Another set of weighting factors were developed to represent the increased risk, or preference for lower risk, for operational factors such as the transport of hazardous materials or for being a key route. Through these weighting factors, ballast replenishment can be prioritized to focus on the areas that pose a higher risk of derailments and their associated costs. For the special cases where the risk or aversion to risk comes from what is being transported, such as the case with hazardous materials or passengers, an economic risk assessment was completed in order to quantify the risk associated with their transport. This economic risk assessment looks at the increased costs associated with incidents that occur and how they compare to incidents which do not directly involve the special cargos. 
In order to provide support for the use of the previously developed weightings as well as to quantify the actual impact that missing ballast has on the rate of geometry defects, analyses which quantified the risk of missing ballast were performed. In addition to quantifying the rate of defects, analyses were performed which looked at the impact associated with curved track, how the location of missing ballast impacts the rate of geometry defects and how the combination of the two compared with the previous analyses. Through this research, the relationship between the volume of missing ballast and ballast-related defects has been identified and quantified. This relationship is positive for the aggregate of all ballast-related defects but does not always exist for individual defects which occasionally have unique behavior. For the non-ballast defects, a relationship between missing ballast and their rate of occurrence did not always appear to exist. The impact of curves was apparent, showing that the rate of defects was either similar to or exceeded the rate of defects for tangent track. For the analyses which looked at the location of ballast in crib or shoulder, the results were quite similar to the previous analyses. The development, application and improvements of a risk-based ballast maintenance prioritization system provides a relatively low-cost and effective method to improve the operational safety for all railroads.
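The volume extrapolation and weighting scheme described above can be sketched as follows; the zone names, weighting factors, and profile areas are hypothetical placeholders, not values from the thesis.

```python
def missing_volume(areas_m2, spacing_m):
    """Extrapolate per-profile missing cross-sectional areas (ideal minus
    measured) over the distance between successive profiles, via the
    trapezoidal rule."""
    return sum((a + b) / 2 * spacing_m for a, b in zip(areas_m2, areas_m2[1:]))

def weighted_priority(zone_volumes, weights):
    """Apply hazard/operational weighting factors (e.g. curves, heavy axle
    loads, hazmat routes) to per-zone missing-ballast volumes."""
    return sum(v * weights.get(zone, 1.0) for zone, v in zone_volumes.items())
```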

  7. Global Warming’s Six Americas: An Audience Segmentation Analysis (Invited)

    NASA Astrophysics Data System (ADS)

    Roser-Renouf, C.; Maibach, E.; Leiserowitz, A.

    2009-12-01

    One of the first rules of effective communication is to “know thy audience.” People have different psychological, cultural and political reasons for acting - or not acting - to reduce greenhouse gas emissions, and climate change educators can increase their impact by taking these differences into account. In this presentation we will describe six unique audience segments within the American public that each responds to the issue in its own distinct way, and we will discuss methods of engaging each. The six audiences were identified using a nationally representative survey of American adults conducted in the fall of 2008 (N=2,164). In two waves of online data collection, the public’s climate change beliefs, attitudes, risk perceptions, values, policy preferences, conservation, and energy-efficiency behaviors were assessed. The data were subjected to latent class analysis, yielding six groups distinguishable on all the above dimensions. The Alarmed (18%) are fully convinced of the reality and seriousness of climate change and are already taking individual, consumer, and political action to address it. The Concerned (33%) - the largest of the Six Americas - are also convinced that global warming is happening and a serious problem, but have not yet engaged with the issue personally. Three other Americas - the Cautious (19%), the Disengaged (12%) and the Doubtful (11%) - represent different stages of understanding and acceptance of the problem, and none are actively involved. The final America - the Dismissive (7%) - are very sure it is not happening and are actively involved as opponents of a national effort to reduce greenhouse gas emissions. Mitigating climate change will require a diversity of messages, messengers and methods that take into account these differences within the American public. 
The findings from this research can serve as guideposts for educators on the optimal choices for reaching and influencing target groups with varied informational needs, values and beliefs.

  8. Synfuel program analysis. Volume 1: Procedures-capabilities

    NASA Astrophysics Data System (ADS)

    Muddiman, J. B.; Whelan, J. W.

    1980-07-01

    The analytic procedures and capabilities developed by Resource Applications (RA) for examining the economic viability, public costs, and national benefits of alternative synfuel projects are described. This volume is intended for Department of Energy (DOE) and Synthetic Fuel Corporation (SFC) program management personnel and includes a general description of the costing, venture, and portfolio models with enough detail for the reader to be able to specify cases and interpret outputs. It contains an explicit description (with examples) of the types of results which can be obtained when the models are applied to the analysis of individual projects; the analysis of input uncertainty, i.e., risk; and the analysis of portfolios of such projects, including varying technology mixes and buildup schedules. The objective is to obtain, on the one hand, comparative measures of private investment requirements and expected returns (under differing public policies) as they affect the private decision to proceed, and, on the other, public costs and national benefits as they affect public decisions to participate (in what form, in what areas, and to what extent).

  9. Sequence analysis on the information of folding initiation segments in ferredoxin-like fold proteins

    PubMed Central

    2014-01-01

    Background While some studies have shown that 3D protein structures are more conserved than their amino acid sequences, other experimental studies have shown that even if two proteins share the same topology, they may have different folding pathways. Many studies have investigated this issue with molecular dynamics or Go-like model simulations; however, one should be able to obtain the same information by analyzing the proteins’ amino acid sequences, if the sequences contain all the information about the 3D structures. In this study, we use information about protein sequences to predict the location of their folding segments. We focus on proteins with a ferredoxin-like fold, which has a characteristic topology. Some of these proteins have different folding segments. Results Despite the simplicity of our methods, we are able to correctly determine the experimentally identified folding segments by predicting the location of the compact regions considered to play an important role in structural formation. We also apply our sequence analyses to some homologues of each protein and confirm that there are highly conserved folding segments despite the homologues’ sequence diversity. These homologues have similar folding segments even when the sequence homology between two proteins is not high. Conclusion Our analyses have proven useful for investigating the common and distinct folding features of the proteins studied. PMID:24884463

  10. Stress and strain analysis of contractions during ramp distension in partially obstructed guinea pig jejunal segments

    PubMed Central

    Zhao, Jingbo; Liao, Donghua; Yang, Jian; Gregersen, Hans

    2011-01-01

    Previous studies have demonstrated morphological and biomechanical remodeling in the intestine proximal to an obstruction. The present study aimed to obtain stress and strain thresholds to initiate contraction and the maximal contraction stress and strain in partially obstructed guinea pig jejunal segments. Partial obstruction and sham operations were surgically created in mid-jejunum of male guinea pigs. The animals survived 2, 4, 7, and 14 days, respectively. Animals not being operated on served as normal controls. The segments were used for no-load state, zero-stress state and distension analyses. The segment was inflated to 10 cmH2O pressure in an organ bath containing 37°C Krebs solution and the outer diameter change was monitored. The stress and strain at the contraction threshold and at maximum contraction were computed from the diameter, pressure and the zero-stress state data. Young’s modulus was determined at the contraction threshold. The muscle layer thickness in obstructed intestinal segments increased up to 300%. Compared with sham-obstructed and normal groups, the contraction stress threshold, the maximum contraction stress and the Young’s modulus at the contraction threshold increased whereas the strain threshold and maximum contraction strain decreased after 7 days obstruction (P<0.05 and 0.01). In conclusion, in the partially obstructed intestinal segments, a larger distension force was needed to evoke contraction likely due to tissue remodeling. Higher contraction stresses were produced and the contraction deformation (strain) became smaller. PMID:21632056
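The record above computes stress and strain from pressure, diameter, and zero-stress-state data. A minimal sketch using the thin-walled (Laplace) approximation for circumferential wall stress and a Green-strain measure; the study's actual mechanics are more detailed, and the numbers in the usage below are illustrative only.

```python
def circumferential_stress(pressure_kpa, outer_diameter_mm, wall_thickness_mm):
    """Thin-walled (Laplace) estimate of circumferential wall stress (kPa)
    from lumen pressure and wall geometry."""
    inner_radius = outer_diameter_mm / 2 - wall_thickness_mm
    return pressure_kpa * inner_radius / wall_thickness_mm

def green_strain(diameter_mm, zero_stress_diameter_mm):
    """Circumferential Green strain relative to the zero-stress diameter."""
    stretch = diameter_mm / zero_stress_diameter_mm
    return (stretch ** 2 - 1) / 2
```

A thickened muscle layer (larger wall thickness) lowers the stress produced by the same distension pressure, consistent with the higher stress threshold needed to evoke contraction in remodeled segments.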

  11. An analysis of methods for the selection of atlases for use in medical image segmentation

    NASA Astrophysics Data System (ADS)

    Prescott, Jeffrey W.; Best, Thomas M.; Haq, Furqan; Jackson, Rebecca; Gurcan, Metin

    2010-03-01

    The use of atlases has been shown to be a robust method for segmentation of medical images. In this paper we explore different methods of selection of atlases for the segmentation of the quadriceps muscles in magnetic resonance (MR) images, although the results are pertinent for a wide range of applications. The experiments were performed using 103 images from the Osteoarthritis Initiative (OAI). The images were randomly split into a training set consisting of 50 images and a testing set of 53 images. Three different atlas selection methods were systematically compared. First, a set of readers was assigned the task of selecting atlases from a training population of images, which were selected to be representative subgroups of the total population. Second, the same readers were instructed to select atlases from a subset of the training data which was stratified based on population modes. Finally, every image in the training set was employed as an atlas, with no input from the readers, and the atlas which had the best initial registration, judged by an appropriate registration metric, was used in the final segmentation procedure. The segmentation results were quantified using the Zijdenbos similarity index (ZSI). The results show that over all readers the agreement of the segmentation algorithm decreased from 0.76 to 0.74 when using population modes to assist in atlas selection. The use of every image in the training set as an atlas outperformed both manual atlas selection methods, achieving a ZSI of 0.82.
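    The Zijdenbos similarity index used above to quantify agreement is equivalent to the Dice coefficient; a minimal sketch of how it could be computed (names are illustrative, not from the record):

    ```python
    import numpy as np

    def zijdenbos_similarity(a, b):
        """ZSI between two binary masks: 2|A∩B| / (|A| + |B|),
        identical to the Dice overlap coefficient."""
        a = np.asarray(a, dtype=bool)
        b = np.asarray(b, dtype=bool)
        denom = a.sum() + b.sum()
        if denom == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return 2.0 * np.logical_and(a, b).sum() / denom
    ```

    A ZSI of 0.82 for the best-registration strategy versus 0.74-0.76 for the reader-driven strategies is then directly comparable on this 0-1 scale.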

  12. Study of Alternate Space Shuttle Concepts. Volume 2, Part 2: Concept Analysis and Definition

    NASA Technical Reports Server (NTRS)

    1971-01-01

    This is the final report of a Phase A Study of Alternate Space Shuttle Concepts by the Lockheed Missiles & Space Company (LMSC) for the National Aeronautics and Space Administration George C. Marshall Space Flight Center (MSFC). The eleven-month study, which began on 30 June 1970, was to examine the stage-and-one-half and other Space Shuttle configurations and to establish feasibility, performance, cost, and schedules for the selected concepts. This final report consists of four volumes as follows: Volume I - Executive Summary, Volume II - Concept Analysis and Definition, Volume III - Program Planning, and Volume IV - Cost Data. This document is Volume II, Concept Analysis and Definition.

  13. Breast Tissue 3D Segmentation and Visualization on MRI

    PubMed Central

    Cui, Xiangfei; Sun, Feifei

    2013-01-01

    Tissue segmentation and visualization are useful for breast lesion detection and quantitative analysis. In this paper, a 3D segmentation algorithm based on Kernel-based Fuzzy C-Means (KFCM) is proposed to separate the breast MR images into different tissues. Then, an improved volume rendering algorithm based on a new transfer function model is applied to implement 3D breast visualization. Experimental results, shown visually, achieve reasonable consistency. PMID:23983676
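    A compact sketch of the kernel fuzzy c-means idea named above, using a Gaussian kernel and the commonly published KFCM update rules (an assumption; the paper's exact formulation, kernel and initialization may differ):

    ```python
    import numpy as np

    def kfcm(x, c=2, m=2.0, sigma=1.0, iters=50):
        """Kernel fuzzy c-means: cluster samples by minimizing a fuzzy objective
        in the kernel-induced distance d^2 = 1 - K(x, v) with a Gaussian kernel."""
        x = np.asarray(x, dtype=float).reshape(len(x), -1)
        # deterministic spread-out initialization (illustrative choice)
        v = x[np.linspace(0, len(x) - 1, c).astype(int)]
        for _ in range(iters):
            d2 = ((x[:, None, :] - v[None]) ** 2).sum(-1)    # squared Euclidean distances
            k = np.exp(-d2 / (2 * sigma ** 2))               # Gaussian kernel K(x, v)
            dist = np.clip(1.0 - k, 1e-12, None)             # kernel-induced distance
            u = dist ** (-1.0 / (m - 1))
            u /= u.sum(axis=1, keepdims=True)                # fuzzy memberships
            w = (u ** m) * k
            v = (w[:, :, None] * x[:, None, :]).sum(0) / w.sum(0)[:, None]
        return u, v
    ```

    On 1-D intensities, `u.argmax(axis=1)` gives a hard tissue label per sample.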

  14. Airway segmentation and analysis for the study of mouse models of lung disease using micro-CT

    NASA Astrophysics Data System (ADS)

    Artaechevarria, X.; Pérez-Martín, D.; Ceresa, M.; de Biurrun, G.; Blanco, D.; Montuenga, L. M.; van Ginneken, B.; Ortiz-de-Solorzano, C.; Muñoz-Barrutia, A.

    2009-11-01

    Animal models of lung disease are gaining importance in understanding the underlying mechanisms of diseases such as emphysema and lung cancer. Micro-CT allows in vivo imaging of these models, thus permitting the study of the progression of the disease or the effect of therapeutic drugs in longitudinal studies. Automated analysis of micro-CT images can be helpful to understand the physiology of diseased lungs, especially when combined with measurements of respiratory system input impedance. In this work, we present a fast and robust murine airway segmentation and reconstruction algorithm. The algorithm is based on a propagating fast marching wavefront that, as it grows, divides the tree into segments. We devised a number of specific rules to guarantee that the front propagates only inside the airways and to avoid leaking into the parenchyma. The algorithm was tested on normal mice, a mouse model of chronic inflammation and a mouse model of emphysema. A comparison with manual segmentations of two independent observers shows that the specificity and sensitivity values of our method are comparable to the inter-observer variability, and radius measurements of the mainstem bronchi reveal significant differences between healthy and diseased mice. Combining measurements of the automatically segmented airways with the parameters of the constant phase model provides extra information on how disease affects lung function.
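    A highly simplified stand-in for the constrained front propagation described above: a breadth-first flood from a tracheal seed that treats bright voxels as walls. The fast-marching mechanics and the paper's specific leak-avoidance rules are not reproduced; the threshold and HU-like units are illustrative:

    ```python
    from collections import deque
    import numpy as np

    def grow_airway(volume, seed, air_max=-500.0):
        """Flood a connected low-intensity (air-like) region from a seed voxel
        using 26-connected BFS. Voxels brighter than `air_max` act as walls,
        the simplest guard against leaking into parenchyma."""
        vol = np.asarray(volume, dtype=float)
        mask = np.zeros(vol.shape, dtype=bool)
        if vol[seed] > air_max:
            return mask                      # seed is not in an air-like voxel
        mask[seed] = True
        q = deque([seed])
        offs = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
                for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
        while q:
            z, y, x = q.popleft()
            for dz, dy, dx in offs:
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < vol.shape[i] for i in range(3)) \
                        and not mask[n] and vol[n] <= air_max:
                    mask[n] = True
                    q.append(n)
        return mask
    ```

    Radius measurements like those reported for the mainstem bronchi would then be derived from the resulting mask, e.g. per-branch cross-sections.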

  15. Analysis of an internally radially cracked ring segment subject to three-point radial loading

    NASA Technical Reports Server (NTRS)

    Gross, B.; Srawley, J. E.

    1983-01-01

    The boundary collocation method was used to generate Mode 1 stress intensity and crack mouth opening displacement coefficients for internally radially cracked ring segments subjected to three point radial loading. Numerical results were obtained for ring segment outer-to-inner radius ratios (R sub o/R sub i) ranging from 1.10 to 2.50 and crack length to segment width ratios (a/W) ranging from 0.1 to 0.8. Stress intensity and crack mouth displacement coefficients were found to depend on the ratios R sub o/R sub i and a/W as well as the included angle between the directions of the reaction forces. Previously announced in STAR as N83-35413

  16. Analysis of an Externally Radially Cracked Ring Segment Subject to Three-Point Radial Loading

    NASA Technical Reports Server (NTRS)

    Gross, B.; Srawley, J. E.; Shannon, J. L., Jr.

    1983-01-01

    The boundary collocation method was used to generate Mode 1 stress intensity and crack mouth opening displacement coefficients for externally radially cracked ring segments subjected to three point radial loading. Numerical results were obtained for ring segment outer-to-inner radius ratios (R sub o/R sub i) ranging from 1.10 to 2.50 and crack length to segment width ratios (a/W) ranging from 0.1 to 0.8. Stress intensity and crack mouth displacement coefficients were found to depend on the ratios R sub o/R sub i and a/W as well as the included angle between the directions of the reaction forces.

  17. Concepts and analysis for precision segmented reflector and feed support structures

    NASA Technical Reports Server (NTRS)

    Miller, Richard K.; Thomson, Mark W.; Hedgepeth, John M.

    1990-01-01

    Several issues surrounding the design of a large (20-meter diameter) Precision Segmented Reflector are investigated. The concerns include development of a reflector support truss geometry that will permit deployment into the required doubly-curved shape without significant member strains. For deployable and erectable reflector support trusses, the reduction of structural redundancy was analyzed to achieve reduced weight and complexity for the designs. The stiffness and accuracy of such reduced-member trusses, however, were found to be affected to an unexpected degree. The Precision Segmented Reflector designs were developed with performance requirements that represent the Reflector application. A novel deployable sunshade concept was developed, and a detailed parametric study of various feed support structural concepts was performed. The results of the detailed study reveal what may be the most desirable feed support structure geometry for Precision Segmented Reflector/Large Deployable Reflector applications.

  18. Development and analysis of a linearly segmented CPC collector for industrial steam generation

    NASA Astrophysics Data System (ADS)

    Figueroa, J. A. A. F.

    1980-06-01

    The mirror consists of long and narrow planar segments placed inside sealed low-cost glass tubes. The absorber is a cylindrical fin inside an evacuated glass tube. The optical efficiency of the segmented concentrator was simulated by means of a Monte Carlo ray-tracing program. Laser ray-tracing techniques were also used to evaluate the possibilities of this new concept. A preliminary evaluation of the experimental concentrator was done using a relatively simple method that combines results from two experimental measurements: overall heat loss coefficient and optical efficiency. A transient behavior test was used to measure the overall heat loss coefficient throughout a wide range of temperatures.

  19. A fully-automatic caudate nucleus segmentation of brain MRI: Application in volumetric analysis of pediatric attention-deficit/hyperactivity disorder

    PubMed Central

    2011-01-01

    Background Accurate automatic segmentation of the caudate nucleus in magnetic resonance images (MRI) of the brain is of great interest in the analysis of developmental disorders. Segmentation methods based on a single atlas or on multiple atlases have been shown to suitably localize caudate structure. However, the atlas prior information may not represent the structure of interest correctly. It may therefore be useful to introduce a more flexible technique for accurate segmentations. Method We present CaudateCut: a new fully-automatic method of segmenting the caudate nucleus in MRI. CaudateCut combines an atlas-based segmentation strategy with the Graph Cut energy-minimization framework. We adapt the Graph Cut model to make it suitable for segmenting small, low-contrast structures, such as the caudate nucleus, by defining new energy function data and boundary potentials. In particular, we exploit information concerning the intensity and geometry, and we add supervised energies based on contextual brain structures. Furthermore, we reinforce boundary detection using a new multi-scale edgeness measure. Results We apply the novel CaudateCut method to segment the caudate nucleus in a new set of 39 pediatric attention-deficit/hyperactivity disorder (ADHD) patients and 40 control children, as well as in a public database of 18 subjects. We evaluate the quality of the segmentation using several volumetric and voxel-by-voxel measures. Our results show improved performance in terms of segmentation compared to state-of-the-art approaches, obtaining a mean overlap of 80.75%. Moreover, we present a quantitative volumetric analysis of caudate abnormalities in pediatric ADHD, the results of which show strong correlation with expert manual analysis.
Conclusion CaudateCut generates segmentation results that are comparable to gold-standard segmentations and which are reliable in the analysis of differentiating neuroanatomical abnormalities between healthy controls and pediatric ADHD. PMID:22141926

  20. Spinal nerve segmentation in the chick embryo: analysis of distinct axon-repulsive systems.

    PubMed

    Vermeren, M M; Cook, G M; Johnson, A R; Keynes, R J; Tannahill, D

    2000-09-01

    In higher vertebrates, the segmental organization of peripheral spinal nerves is established by a repulsive mechanism whereby sensory and motor axons are excluded from the posterior half-somite. A number of candidate axon repellents have been suggested to mediate this barrier to axon growth, including Sema3A, Ephrin-B, and peanut agglutinin (PNA)-binding proteins. We have tested the candidacy of these factors in vitro by examining their contribution to the growth cone collapse-inducing activity of somite-derived protein extracts on sensory, motor, and retinal axons. We find that Sema3A is unlikely to play a role in the segmentation of sensory or motor axons and that Ephrin-B may contribute to motor but not sensory axon segmentation. We also provide evidence that the only candidate molecule(s) that induces the growth cone collapse of both sensory and motor axons binds to PNA and is not Sema3A or Ephrin-B. By grafting primary sensory, motor, and quail retinal neurons into the chick trunk in vivo, we provide further evidence that the posterior half-somite represents a universal barrier to growing axons. Taken together, these results suggest that the mechanisms of peripheral nerve segmentation should be considered in terms of repellent molecules in addition to the identified molecules. PMID:10964478

  1. Sequence and phylogenetic analysis of the S1 Genome segment of turkey-origin reoviruses

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Based on previous reports characterizing the turkey-origin avian reovirus (TRV) sigma-B (sigma-2) major outer capsid protein gene, the TRVs may represent a new group within the fusogenic orthoreoviruses. However, no sequence data from other TRV genes or genome segments has been reported. The sigma...

  2. Infants' Early Ability to Segment the Conversational Speech Signal Predicts Later Language Development: A Retrospective Analysis

    ERIC Educational Resources Information Center

    Newman, Rochelle; Ratner, Nan Bernstein; Jusczyk, Ann Marie; Jusczyk, Peter W.; Dow, Kathy Ayala

    2006-01-01

    Two studies examined relationships between infants' early speech processing performance and later language and cognitive outcomes. Study 1 found that performance on speech segmentation tasks before 12 months of age related to expressive vocabulary at 24 months. However, performance on other tasks was not related to 2-year vocabulary. Study 2…

  3. Segmental and Positional Effects on Children's Coda Production: Comparing Evidence from Perceptual Judgments and Acoustic Analysis

    ERIC Educational Resources Information Center

    Theodore, Rachel M.; Demuth, Katherine; Shattuck-Hufnagel, Stephanie

    2012-01-01

    Children's early productions are highly variable. Findings from children's early productions of grammatical morphemes indicate that some of the variability is systematically related to segmental and phonological factors. Here, we extend these findings by assessing 2-year-olds' production of non-morphemic codas using both listener decisions and…

  5. Automated Forensic Animal Family Identification by Nested PCR and Melt Curve Analysis on an Off-the-Shelf Thermocycler Augmented with a Centrifugal Microfluidic Disk Segment

    PubMed Central

    Zengerle, Roland; von Stetten, Felix; Schmidt, Ulrike

    2015-01-01

    Nested PCR remains a labor-intensive and error-prone biomolecular analysis. Laboratory workflow automation by precise control of minute liquid volumes in centrifugal microfluidic Lab-on-a-Chip systems holds great potential for such applications. However, the majority of these systems require costly custom-made processing devices. Our idea is to augment a standard laboratory device, here a centrifugal real-time PCR thermocycler, with inbuilt liquid handling capabilities for automation. We have developed a microfluidic disk segment enabling an automated nested real-time PCR assay for identification of common European animal groups adapted to forensic standards. For the first time we utilize a novel combination of fluidic elements, including pre-storage of reagents, to automate the assay at constant rotational frequency of an off-the-shelf thermocycler. It provides a universal duplex pre-amplification of short fragments of the mitochondrial 12S rRNA and cytochrome b genes, animal-group-specific main-amplifications, and melting curve analysis for differentiation. The system was characterized with respect to assay sensitivity, specificity, risk of cross-contamination, and detection of minor components in mixtures. 92.2% of the performed tests showed failure-free fluidic sample handling and were used for evaluation. Altogether, augmentation of the standard real-time thermocycler with a self-contained centrifugal microfluidic disk segment resulted in an accelerated and automated analysis, reducing hands-on time and circumventing the risk of contamination associated with regular nested PCR protocols. PMID:26147196
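    The melting curve analysis used above for differentiation is conventionally read out as the peak of the negative fluorescence derivative, -dF/dT; a minimal sketch on synthetic data (not the assay's real curves):

    ```python
    import numpy as np

    def melting_temperature(temps, fluorescence):
        """Estimate Tm as the temperature at the peak of -dF/dT,
        the standard read-out of a melt curve."""
        t = np.asarray(temps, dtype=float)
        f = np.asarray(fluorescence, dtype=float)
        neg_dfdt = -np.gradient(f, t)        # numerical derivative on the T grid
        return t[np.argmax(neg_dfdt)]
    ```

    Distinct animal groups would then show distinct Tm values, which is what allows differentiation after the group-specific main-amplifications.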

  6. Comparative analysis of operational forecasts versus actual weather conditions in airline flight planning, volume 3

    NASA Technical Reports Server (NTRS)

    Keitz, J. F.

    1982-01-01

    The impact of more timely and accurate weather data on airline flight planning with the emphasis on fuel savings is studied. This volume of the report discusses the results of Task 3 of the four major tasks included in the study. Task 3 compares flight plans developed on the Suitland forecast with actual data observed by the aircraft (and averaged over 10 degree segments). The results show that the average difference between the forecast and observed wind speed is 9 kts. without considering direction, and the average difference in the component of the forecast wind parallel to the direction of the observed wind is 13 kts. - both indicating that the Suitland forecast underestimates the wind speeds. The Root Mean Square (RMS) vector error is 30.1 kts. The average absolute difference in direction between the forecast and observed wind is 26 degrees and the temperature difference is 3 degree Centigrade. These results indicate that the forecast model as well as the verifying analysis used to develop comparison flight plans in Tasks 1 and 2 is a limiting factor and that the average potential fuel savings or penalty are up to 3.6 percent depending on the direction of flight.
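    The forecast-versus-observed wind statistics quoted above (mean absolute speed difference, mean absolute direction difference, RMS vector error) can be sketched as follows; the per-segment averaging and sign conventions here are assumptions, not the study's exact procedure:

    ```python
    import math

    def wind_errors(forecast, observed):
        """Compare (speed_kts, direction_deg) wind pairs over route segments.
        Returns mean |speed difference|, mean |direction difference| on the
        circle, and the RMS vector error of the wind difference."""
        def to_uv(spd, deg):
            rad = math.radians(deg)
            return spd * math.sin(rad), spd * math.cos(rad)
        n = len(forecast)
        spd_err = dir_err = vec_sq = 0.0
        for (fs, fd), (os_, od) in zip(forecast, observed):
            spd_err += abs(fs - os_)
            d = abs(fd - od) % 360.0
            dir_err += min(d, 360.0 - d)     # shortest angular difference
            fu, fv = to_uv(fs, fd)
            ou, ov = to_uv(os_, od)
            vec_sq += (fu - ou) ** 2 + (fv - ov) ** 2
        return spd_err / n, dir_err / n, math.sqrt(vec_sq / n)
    ```

    With a forecast that systematically underestimates speed, as reported, the speed and vector errors grow while the direction error can remain small.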

  7. Automatic partitioning of head CTA for enabling segmentation

    NASA Astrophysics Data System (ADS)

    Suryanarayanan, Srikanth; Mullick, Rakesh; Mallya, Yogish; Kamath, Vidya; Nagaraj, Nithin

    2004-05-01

    Radiologists perform a CT Angiography procedure to examine vascular structures and associated pathologies such as aneurysms. Volume rendering is used to exploit volumetric capabilities of CT that provides complete interactive 3-D visualization. However, bone forms an occluding structure and must be segmented out. The anatomical complexity of the head creates a major challenge in the segmentation of bone and vessel. An analysis of the head volume reveals varying spatial relationships between vessel and bone that can be separated into three sub-volumes: "proximal", "middle", and "distal". The "proximal" and "distal" sub-volumes contain good spatial separation between bone and vessel (carotid referenced here). Bone and vessel appear contiguous in the "middle" partition that remains the most challenging region for segmentation. The partition algorithm is used to automatically identify these partition locations so that different segmentation methods can be developed for each sub-volume. The partition locations are computed using bone, image entropy, and sinus profiles along with a rule-based method. The algorithm is validated on 21 cases (varying volume sizes, resolution, clinical sites, pathologies) using ground truth identified visually. The algorithm is also computationally efficient, processing a 500+ slice volume in 6 seconds (an impressive 0.01 seconds / slice) that makes it an attractive algorithm for pre-processing large volumes. The partition algorithm is integrated into the segmentation workflow. Fast and simple algorithms are implemented for processing the "proximal" and "distal" partitions. Complex methods are restricted to only the "middle" partition. The partition-enabled segmentation has been successfully tested and results are shown from multiple cases.
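    One of the profiles the partition algorithm relies on is image entropy along the scan axis; a hedged sketch of such a profile (the bin count is an assumption, and the bone/sinus profiles and rule-based partitioning themselves are not reproduced):

    ```python
    import numpy as np

    def slice_entropy_profile(volume, bins=64):
        """Shannon entropy (bits) of each axial slice's intensity histogram,
        producing a 1-D profile along the scan axis."""
        profile = []
        for sl in np.asarray(volume, dtype=float):
            hist, _ = np.histogram(sl, bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]                          # drop empty bins before log
            profile.append(float(-(p * np.log2(p)).sum()))
        return profile
    ```

    Transitions in such a profile are the kind of landmark a rule-based method can use to place the "proximal"/"middle"/"distal" cut planes.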

  8. Cerebrospinal fluid volume analysis for hydrocephalus diagnosis and clinical research.

    PubMed

    Lebret, Alain; Hodel, Jérôme; Rahmouni, Alain; Decq, Philippe; Petit, Eric

    2013-04-01

    In this paper we analyze the volumes of the cerebrospinal fluid spaces for the diagnosis of hydrocephalus; these volumes serve as reference values for future studies. We first present an automatic method to estimate those volumes from a new three-dimensional whole body magnetic resonance imaging sequence. This enables us to statistically analyze the fluid volumes, and to show that the ratio of subarachnoid to ventricular volume is approximately constant for healthy adults (10.73), while it lies in the range [0.63, 4.61] for hydrocephalus patients. This indicates that a robust distinction between pathological and healthy cases can be achieved by using this ratio as an index. PMID:23570816
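    The ratio index described above lends itself to a one-line classifier; a sketch using the reported ranges (the placement of the cut-off at the top of the patient range is an assumption):

    ```python
    def csf_ratio_index(subarachnoid_ml, ventricular_ml):
        """Subarachnoid-to-ventricular CSF volume ratio. Per the reported values,
        healthy adults cluster near 10.73 while hydrocephalus patients fall in
        [0.63, 4.61]; ratios at or below 4.61 are flagged as pathological."""
        ratio = subarachnoid_ml / ventricular_ml
        return ratio, ratio <= 4.61   # True -> consistent with hydrocephalus
    ```

    Because the two reported ranges do not overlap, any threshold between 4.61 and 10.73 separates them; 4.61 is simply the conservative end.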

  9. National Evaluation of Family Support Programs. Final Report Volume A: The Meta-Analysis.

    ERIC Educational Resources Information Center

    Layzer, Jean I.; Goodson, Barbara D.; Bernstein, Lawrence; Price, Cristofer

    This volume is part of the final report of the National Evaluation of Family Support Programs and details findings from a meta-analysis of extant research on programs providing family support services. Chapter A1 of this volume provides a rationale for using meta-analysis. Chapter A2 describes the steps of preparation for the meta-analysis.…

  10. Incorporation of learned shape priors into a graph-theoretic approach with application to the 3D segmentation of intraretinal surfaces in SD-OCT volumes of mice

    NASA Astrophysics Data System (ADS)

    Antony, Bhavna J.; Song, Qi; Abràmoff, Michael D.; Sohn, Eliott; Wu, Xiaodong; Garvin, Mona K.

    2014-03-01

    Spectral-domain optical coherence tomography (SD-OCT) finds widespread use clinically for the detection and management of ocular diseases. This non-invasive imaging modality has also begun to find frequent use in research studies involving animals such as mice. Numerous approaches have been proposed for the segmentation of retinal surfaces in SD-OCT images obtained from human subjects; however, the segmentation of retinal surfaces in mice scans is not as well-studied. In this work, we describe a graph-theoretic segmentation approach for the simultaneous segmentation of 10 retinal surfaces in SD-OCT scans of mice that incorporates learned shape priors. We compared the method to a baseline approach that did not incorporate learned shape priors and observed that the overall unsigned border position errors reduced from 3.58 +/- 1.33 μm to 3.20 +/- 0.56 μm.
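    The unsigned border position error reported above can be computed per A-scan as a mean absolute difference in surface depth; a minimal sketch (the axial-spacing parameter is illustrative):

    ```python
    import numpy as np

    def unsigned_border_error(pred_z, true_z, spacing_um=1.0):
        """Mean and SD of the unsigned (absolute) border position error between
        a predicted and a reference surface, each given as per-A-scan z positions
        in voxels, scaled to micrometers by the axial spacing."""
        err = np.abs(np.asarray(pred_z, float) - np.asarray(true_z, float)) * spacing_um
        return float(err.mean()), float(err.std())
    ```

    Reporting "3.20 +/- 0.56 μm" corresponds to the (mean, SD) pair this returns, aggregated over surfaces and scans.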

  11. Value and limitations of segmental analysis of stress thallium myocardial imaging for localization of coronary artery disease

    SciTech Connect

    Rigo, P.; Bailey, I.K.; Griffith, L.S.C.; Pitt, B.; Borow, R.D.; Wagner, H.N.; Becker, L.C.

    1980-05-01

    This study was done to determine the value of thallium-201 myocardial scintigraphic imaging (MSI) for identifying disease in the individual coronary arteries. Segmental analysis of rest and stress MSI was performed in 133 patients with arteriographically proved coronary artery disease (CAD). Certain scintigraphic segments were highly specific (97 to 100%) for the three major coronary arteries: anterior wall and septum for the left anterior descending (LAD) coronary artery; the inferior wall for the right coronary artery (RCA); and the proximal lateral wall for the circumflex (LCX) artery. Perfusion defects located in the anterolateral wall in the anterior view were highly specific for proximal disease in the LAD involving the major diagonal branches, but this was not true for septal defects. The apical segments were not specific for any of the three major vessels. Although MSI was abnormal in 89% of these patients with CAD, it was less sensitive for identifying individual vessel disease: 63% for LAD, 50% for RCA, and 21% for LCX disease (narrowings >= 50%). Sensitivity increased with the severity of stenosis, but even for 100% occlusions was only 87% for LAD, 58% for RCA and 38% for LCX. Sensitivity diminished as the number of vessels involved increased: with single-vessel disease, 80% of LAD, 54% of RCA and 33% of LCX lesions were detected, but in patients with triple-vessel disease, only 50% of LAD, 50% of RCA and 16% of LCX lesions were identified. Thus, although segmental analysis of MSI can identify disease in the individual coronary arteries with high specificity, only moderate sensitivity is achieved, reflecting the tendency of MSI to identify only the most severely ischemic area among several that may be present in a heart. Perfusion scintigrams display relative distributions rather than absolute values for myocardial blood flow.
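    The per-vessel sensitivity and specificity figures above follow the usual definitions; a minimal sketch (the boolean-list input encoding is illustrative):

    ```python
    def sensitivity_specificity(detected, diseased):
        """Per-vessel detection statistics from parallel boolean lists:
        sensitivity = TP / (TP + FN) over diseased vessels,
        specificity = TN / (TN + FP) over disease-free vessels."""
        tp = sum(d and s for d, s in zip(detected, diseased))
        fn = sum((not d) and s for d, s in zip(detected, diseased))
        tn = sum((not d) and (not s) for d, s in zip(detected, diseased))
        fp = sum(d and (not s) for d, s in zip(detected, diseased))
        sens = tp / (tp + fn) if tp + fn else float("nan")
        spec = tn / (tn + fp) if tn + fp else float("nan")
        return sens, spec
    ```

    Computed separately per artery (LAD, RCA, LCX), this reproduces the kind of 63%/50%/21% sensitivity breakdown reported above.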

  12. Volume component analysis for classification of LiDAR data

    NASA Astrophysics Data System (ADS)

    Varney, Nina M.; Asari, Vijayan K.

    2015-03-01

    One of the most difficult challenges of working with LiDAR data is the large amount of data points that are produced. Analysing these large data sets is an extremely time consuming process. For this reason, automatic perception of LiDAR scenes is a growing area of research. Currently, most LiDAR feature extraction relies on geometrical features specific to the point cloud of interest. These geometrical features are scene-specific, and often rely on the scale and orientation of the object for classification. This paper proposes a robust method for reduced dimensionality feature extraction of 3D objects using a volume component analysis (VCA) approach. This VCA approach is based on principal component analysis (PCA). PCA is a method of reduced feature extraction that computes a covariance matrix from the original input vector. The eigenvectors corresponding to the largest eigenvalues of the covariance matrix are used to describe an image. Block-based PCA is an adapted method for feature extraction in facial images because PCA, when performed in local areas of the image, can extract more significant features than can be extracted when the entire image is considered. The image space is split into several of these blocks, and PCA is computed individually for each block. This VCA proposes that a LiDAR point cloud can be represented as a series of voxels whose values correspond to the point density within that relative location. From this voxelized space, block-based PCA is used to analyze sections of the space where the sections, when combined, will represent features of the entire 3-D object. These features are then used as the input to a support vector machine which is trained to identify four classes of objects, vegetation, vehicles, buildings and barriers, with an overall accuracy of 93.8%.
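    The voxelization plus block-based PCA pipeline described above can be sketched as follows (the grid and block sizes are illustrative; the paper's exact VCA feature design and the SVM stage are not reproduced):

    ```python
    import numpy as np

    def vca_features(points, grid=(8, 8, 8), block=(4, 4, 4), n_components=3):
        """Voxelize a point cloud into a point-density grid, split the grid
        into blocks, and keep each block's projection onto the top principal
        components of the flattened-block matrix."""
        pts = np.asarray(points, dtype=float)
        mins, maxs = pts.min(0), pts.max(0)
        idx = ((pts - mins) / (maxs - mins + 1e-9) * grid).astype(int)
        idx = np.minimum(idx, np.array(grid) - 1)
        vox = np.zeros(grid)
        for i in idx:
            vox[tuple(i)] += 1.0                  # point density per voxel
        blocks = []                               # flatten each sub-block
        for z in range(0, grid[0], block[0]):
            for y in range(0, grid[1], block[1]):
                for x in range(0, grid[2], block[2]):
                    blocks.append(vox[z:z+block[0], y:y+block[1], x:x+block[2]].ravel())
        b = np.array(blocks)
        b -= b.mean(0)                            # center before PCA
        _, _, vt = np.linalg.svd(b, full_matrices=False)
        return b @ vt[:n_components].T            # per-block PCA features
    ```

    The per-block features would then be concatenated and fed to a classifier such as an SVM, as the abstract describes.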

  13. A Genetic Analysis of Brain Volumes and IQ in Children

    ERIC Educational Resources Information Center

    van Leeuwen, Marieke; Peper, Jiska S.; van den Berg, Stephanie M.; Brouwer, Rachel M.; Hulshoff Pol, Hilleke E.; Kahn, Rene S.; Boomsma, Dorret I.

    2009-01-01

    In a population-based sample of 112 nine-year old twin pairs, we investigated the association among total brain volume, gray matter and white matter volume, intelligence as assessed by the Raven IQ test, verbal comprehension, perceptual organization and perceptual speed as assessed by the Wechsler Intelligence Scale for Children-III. Phenotypic…

  14. EPA RREL'S MOBILE VOLUME REDUCTION UNIT -- APPLICATIONS ANALYSIS REPORT

    EPA Science Inventory

    The volume reduction unit (VRU) is a pilot-scale, mobile soil washing system designed to remove organic contaminants from the soil through particle size separation and solubilization. The VRU removes contaminants by suspending them in a wash solution and by reducing the volume of...

  15. Yucca Mountain transportation routes: Preliminary characterization and risk analysis; Volume 2, Figures [and] Volume 3, Technical Appendices

    SciTech Connect

    Souleyrette, R.R. II; Sathisan, S.K.; di Bartolo, R.

    1991-05-31

    This report presents appendices related to the preliminary assessment and risk analysis for high-level radioactive waste transportation routes to the proposed Yucca Mountain Project repository. Information includes data on population density, traffic volume, ecologically sensitive areas, and accident history.

  16. Shape-Constrained Segmentation Approach for Arctic Multiyear Sea Ice Floe Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Brucker, Ludovic; Ivanoff, Alvaro; Tilton, James C.

    2013-01-01

    The melting of sea ice is correlated to increases in sea surface temperature and associated climatic changes. Therefore, it is important to investigate how rapidly sea ice floes melt. For this purpose, a new TempoSeg method for multitemporal segmentation of multiyear ice floes is proposed. The microwave radiometer is used to track the position of an ice floe. Then, a time series of MODIS images is created with the ice floe in the image center. The TempoSeg method is performed to segment these images into two regions: Floe and Background. First, morphological feature extraction is applied. Then, the central image pixel is marked as Floe, and shape-constrained best merge region growing is performed. The resulting two-region map is post-filtered by applying morphological operators. We have successfully tested our method on a set of MODIS images and estimated the area of a sea ice floe as a function of time.

  17. Analysis of a Segmented Annular Coplanar Capacitive Tilt Sensor with Increased Sensitivity

    PubMed Central

    Guo, Jiahao; Hu, Pengcheng; Tan, Jiubin

    2016-01-01

    An investigation of a segmented annular coplanar capacitor is presented. We focus on its theoretical model, and a mathematical expression of the capacitance value is derived by solving a Laplace equation with Hankel transform. The finite element method is employed to verify the analytical result. Different control parameters are discussed, and each contribution to the capacitance value of the capacitor is obtained. On this basis, we analyze and optimize the structure parameters of a segmented coplanar capacitive tilt sensor, and three models with different positions of the electrode gap are fabricated and tested. The experimental result shows that the model (whose electrode-gap position is 10 mm from the electrode center) realizes a high sensitivity: 0.129 pF/° with a non-linearity of <0.4% FS (full scale of ±40°). This finding offers plenty of opportunities for various measurement requirements in addition to achieving an optimized structure in practical design. PMID:26805844
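    The reported sensitivity (0.129 pF/°) and non-linearity (<0.4% FS) are standard calibration quantities; a sketch of how they could be derived from calibration data (the straight-line fit is an assumption about the sensor's nominal response):

    ```python
    import numpy as np

    def sensitivity_and_nonlinearity(angles_deg, capacitance_pf):
        """Least-squares line fit of capacitance vs tilt angle: the slope is
        the sensitivity (pF/deg), and the worst residual as a fraction of the
        full-scale output span is the non-linearity (% FS)."""
        a = np.asarray(angles_deg, float)
        c = np.asarray(capacitance_pf, float)
        slope, intercept = np.polyfit(a, c, 1)
        resid = c - (slope * a + intercept)
        full_scale = c.max() - c.min()
        return slope, 100.0 * np.abs(resid).max() / full_scale
    ```

    Applied over the stated ±40° full scale, a perfectly linear sensor yields the fitted slope as its sensitivity and a non-linearity of essentially zero.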

  18. Design and Analysis of Modules for Segmented X-Ray Optics

    NASA Technical Reports Server (NTRS)

    McClelland, Ryan S.; BIskach, Michael P.; Chan, Kai-Wing; Saha, Timo T; Zhang, William W.

    2012-01-01

    Future X-ray astronomy missions demand thin, light, and closely packed optics which lend themselves to segmentation of the annular mirrors and, in turn, a modular approach to the mirror design. The modular approach to X-ray Flight Mirror Assembly (FMA) design allows excellent scalability of the mirror technology to support a variety of mission sizes and science objectives. This paper describes FMA designs using slumped glass mirror segments for several X-ray astrophysics missions studied by NASA and explores the driving requirements and subsequent verification tests necessary to qualify a slumped glass mirror module for space-flight. A rigorous testing program is outlined allowing Technical Development Modules to reach technical readiness for mission implementation while reducing mission cost and schedule risk.

  19. Analysis of a Segmented Annular Coplanar Capacitive Tilt Sensor with Increased Sensitivity.

    PubMed

    Guo, Jiahao; Hu, Pengcheng; Tan, Jiubin

    2016-01-01

    An investigation of a segmented annular coplanar capacitor is presented. We focus on its theoretical model, and a mathematical expression of the capacitance value is derived by solving a Laplace equation with Hankel transform. The finite element method is employed to verify the analytical result. Different control parameters are discussed, and each contribution to the capacitance value of the capacitor is obtained. On this basis, we analyze and optimize the structure parameters of a segmented coplanar capacitive tilt sensor, and three models with different positions of the electrode gap are fabricated and tested. The experimental result shows that the model (whose electrode-gap position is 10 mm from the electrode center) realizes a high sensitivity: 0.129 pF/° with a non-linearity of <0.4% FS (full scale of ±40°). This finding offers plenty of opportunities for various measurement requirements in addition to achieving an optimized structure in practical design. PMID:26805844
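As a rough illustration of how such a linear sensitivity figure is used in practice, the sketch below converts a capacitance reading back to a tilt angle. The zero-tilt capacitance and the sample readings are hypothetical; only the 0.129 pF/° sensitivity and the ±40° full scale come from the abstract.

```python
# Illustrative conversion from measured capacitance to tilt angle, assuming
# the linear sensitivity reported in the abstract (0.129 pF/deg, FS +/-40 deg).
# The zero-tilt capacitance c0_pf is a made-up value for demonstration only.

SENSITIVITY_PF_PER_DEG = 0.129   # reported sensitivity
FULL_SCALE_DEG = 40.0            # reported full scale

def tilt_from_capacitance(c_pf, c0_pf):
    """Estimate tilt angle (deg) from a capacitance reading via the linear model."""
    theta = (c_pf - c0_pf) / SENSITIVITY_PF_PER_DEG
    if abs(theta) > FULL_SCALE_DEG:
        raise ValueError("reading outside the sensor's +/-40 deg range")
    return theta

# A 1.29 pF increase over the (hypothetical) zero-tilt value maps to about +10 deg.
print(tilt_from_capacitance(11.29, 10.0))
```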

  20. An ECG ambulatory system with mobile embedded architecture for ST-segment analysis.

    PubMed

    Miranda-Cid, Alejandro; Alvarado-Serrano, Carlos

    2010-01-01

    A prototype of an ECG ambulatory system for long-term monitoring of the ST segment in 3 leads has been developed, featuring low power consumption, portability, and data storage on solid-state memory cards. The solution is based on a mobile embedded architecture: a portable entertainment device used as a tool for storage and processing of bioelectric signals, and a mid-range RISC microcontroller, the PIC 16F877, which performs the digitization and transmission of the ECG. The ECG amplifier stage runs at low power from a unipolar supply and introduces minimal distortion of the high-pass filter's phase response in the ST segment. We developed an algorithm that manages file access through a FAT32 implementation and displays the ECG on the device screen. The records are stored in TXT format for further processing. After acquisition, the system works as a standard USB mass storage device. PMID:21095640

  1. Style, content and format guide for writing safety analysis documents. Volume 1, Safety analysis reports for DOE nuclear facilities

    SciTech Connect

    Not Available

    1994-06-01

    The purpose of Volume 1 of this 4-volume style guide is to furnish guidelines on writing and publishing Safety Analysis Reports (SARs) for DOE nuclear facilities at Sandia National Laboratories. The scope of Volume 1 encompasses not only the general guidelines for writing and publishing, but also the prescribed topics/appendices contents along with examples from typical SARs for DOE nuclear facilities.

  2. Segmental neurofibromatosis.

    PubMed

    Galhotra, Virat; Sheikh, Soheyl; Jindal, Sanjeev; Singla, Anshu

    2014-07-01

    Segmental neurofibromatosis is a rare disorder characterized by neurofibromas or café-au-lait macules limited to one region of the body. Its occurrence on the face is extremely rare, and only a few cases of segmental neurofibromatosis of the face have been described so far. We present a case of segmental neurofibromatosis involving the buccal mucosa, tongue, cheek, ear, and neck on the right side of the face. PMID:25565748

  3. A New MRI-Based Pediatric Subcortical Segmentation Technique (PSST).

    PubMed

    Loh, Wai Yen; Connelly, Alan; Cheong, Jeanie L Y; Spittle, Alicia J; Chen, Jian; Adamson, Christopher; Ahmadzai, Zohra M; Fam, Lillian Gabra; Rees, Sandra; Lee, Katherine J; Doyle, Lex W; Anderson, Peter J; Thompson, Deanne K

    2016-01-01

    Volumetric and morphometric neuroimaging studies of the basal ganglia and thalamus in pediatric populations have utilized existing automated segmentation tools, including FIRST (Functional Magnetic Resonance Imaging of the Brain's Integrated Registration and Segmentation Tool) and FreeSurfer. These segmentation packages, however, are mostly based on adult training data. Given the marked differences between the pediatric and adult brain, it is likely that an age-specific segmentation technique will produce more accurate results. In this study, we describe a new automated technique for segmenting the basal ganglia and thalamus in 7-year-old children, called the Pediatric Subcortical Segmentation Technique (PSST). PSST consists of a probabilistic 7-year-old subcortical gray matter atlas (accumbens, caudate, pallidum, putamen and thalamus) combined with a customized segmentation pipeline using existing tools: ANTs (Advanced Normalization Tools) and SPM (Statistical Parametric Mapping). The segmentation accuracy of PSST in 7-year-old data was compared against FIRST and FreeSurfer, relative to manual segmentation as the ground truth, using spatial overlap (Dice's coefficient), volume correlation (intraclass correlation coefficient, ICC) and limits of agreement (Bland-Altman plots). PSST achieved spatial overlap scores ≥90% and ICC scores ≥0.77 when compared with manual segmentation, for all structures except the accumbens. Compared with FIRST and FreeSurfer, PSST showed higher spatial overlap (pFDR < 0.05) and ICC scores, with less volumetric bias according to Bland-Altman plots. PSST is a customized segmentation pipeline with an age-specific atlas that accurately segments typical and atypical basal ganglia and thalami at age 7 years, and has the potential to be applied to other pediatric datasets. PMID:26381159
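The Dice coefficient used above to score spatial overlap can be sketched in a few lines; the toy masks below stand in for manual and automatic segmentations and are not PSST output.

```python
import numpy as np

# Minimal sketch of the Dice overlap used to score automatic vs. manual
# segmentations (labels here are toy arrays, not real segmentation output).
def dice(a, b):
    """Dice coefficient between two binary masks: 2|A n B| / (|A| + |B|)."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

manual = np.zeros((10, 10), bool); manual[2:8, 2:8] = True   # 36 voxels
auto   = np.zeros((10, 10), bool); auto[3:8, 2:8]   = True   # 30 voxels
print(dice(manual, auto))   # 2*30/(36+30), about 0.909
```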

  4. Quantitative morphological analysis of curvilinear network for microscopic image based on individual fibre segmentation (IFS).

    PubMed

    Qiu, J; Li, F-F

    2014-12-01

    Microscopic images of curvilinear fibre networks such as the cytoskeleton are traditionally analysed by qualitative observation, which can hardly provide quantitative information about their morphological properties. Such information, however, is crucial to the understanding of important biological events and can even reveal inner relations that are hard to perceive. The individual fibre segmentation (IFS)-based curvilinear structure detector proposed in this study can identify each individual fibre in the network, as well as connections between different fibres. Quantitative information on each individual fibre, including length, orientation and position, can be extracted, as can the connecting modes in the fibre network, such as bifurcation, intersection and overlap. The distribution of fibres with different morphological properties is also presented. No manual intervention or subjective judgment is required in the analysis process. Tests on both synthesized and experimental microscopic images have verified that the detector can segment curvilinear networks at the subcellular level with strong noise immunity. The proposed detector is finally applied to a morphological study of the cytoskeleton. We believe that the IFS-based curvilinear structure detector can greatly enhance our understanding of the biological images generated by the vast body of biological experiments. PMID:25243901

  5. Growth and morphological analysis of segmented AuAg alloy nanowires created by pulsed electrodeposition in ion-track etched membranes

    PubMed Central

    Burr, Loic; Trautmann, Christina; Toimil-Molares, Maria Eugenia

    2015-01-01

    Summary Background: Multicomponent heterostructure nanowires and nanogaps are of great interest for sensing applications. Pulsed electrodeposition in ion-track etched polymer templates is a suitable method for synthesising segmented nanowires whose segments consist of two different materials. For a well-controlled synthesis process, detailed analysis of the deposition parameters and the size distribution of the segmented wires is crucial. Results: The fabrication of electrodeposited AuAg alloy nanowires and segmented Au-rich/Ag-rich/Au-rich nanowires with controlled composition and segment length in ion-track etched polymer templates was developed. Detailed analysis by cyclic voltammetry in ion-track membranes, energy-dispersive X-ray spectroscopy and scanning electron microscopy was performed to determine the dependence of segment composition on the chosen potential. Additionally, we dissolved the middle Ag-rich segments in order to create small nanogaps with controlled gap sizes. Annealing of the created structures allows us to influence their morphology. Conclusion: AuAg alloy nanowires, segmented wires and nanogaps with controlled composition and size can be synthesised by electrodeposition in membranes, and are ideal model systems for the investigation of surface plasmons. PMID:26199830

  6. Analysis of mercury in sequential micrometer segments of single hair strands of fish-eaters.

    PubMed

    Legrand, Melissa; Lam, Rebecca; Passos, Carlos José Sousa; Mergler, Donna; Salin, Eric D; Chan, Hing Man

    2007-01-15

    Although it has been established that mercury (Hg) can be detected in single hair strands using laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS), calibration remains a challenge due to the lack of well-characterized matrix-matched standards. We concurrently evaluated two strategies for quantifying Hg signals in single hair strands using LA-ICP-MS. The main objective was to obtain time-resolved Hg concentrations in single hair strands of fish-eaters that would correspond to the changes of their body burden over time. Experiments were conducted using hair samples collected from 10 individuals. The first experiment involved the construction of a calibration curve with four powdered hair standard reference materials (SRMs) with a range of Hg concentrations (0.573-23.2 mg/kg). An internal standard, sulfur, as 34S, was applied to correct for ablation efficiency for both the hair strands and the SRMs. Results showed a linear relationship (R2 = 0.899) between the ratio of 202Hg to 34S obtained by LA-ICP-MS and the certified total Hg concentration in the SRMs. Using this calibration curve, average Hg concentrations of 10 shots within a 1-cm segment of a hair strand were calculated and then compared to the total Hg concentrations in the matched 1-cm segment as measured by cold vapor atomic absorption spectrometry (CV-AAS). A significant difference (p < 0.05) was observed. The difference could be attributed to the highly variable ablation/sampling process caused by the use of the laser on the hair powder SRM pellets and the difference in the physical properties of the SRMs. An alternative approach was adopted to quantify consecutive 202Hg to 34S ratios by calibrating the signals against the average Hg concentration of the matched hair segment as measured by CV-AAS. Consecutive daily Hg deposition in single hairs of fish eaters was determined. 
    Results showed apparent daily changes in Hg concentrations within hair segments corresponding to 1 month of hair growth. In addition, a significant decreasing or increasing time trend was observed. The difference between the minimum and maximum Hg concentration within each individual corresponded to a change of 26-40%. Our results show that LA-ICP-MS can be used to reconstruct time-resolved Hg exposure in micrometer segments of a single hair strand. PMID:17310727
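The SRM calibration step described above amounts to fitting a line of signal ratio against certified concentration and inverting it to quantify unknowns. In the sketch below the 202Hg/34S ratios are invented for illustration; only the certified concentration range (0.573-23.2 mg/kg) comes from the abstract.

```python
import numpy as np

# Sketch of the SRM calibration step: regress the 202Hg/34S signal ratio
# against certified Hg concentrations, then invert the line to quantify an
# unknown sample. The ratio values below are hypothetical.
conc  = np.array([0.573, 2.0, 8.0, 23.2])      # certified Hg, mg/kg
ratio = np.array([0.010, 0.034, 0.130, 0.370]) # hypothetical Hg/S signal ratios

slope, intercept = np.polyfit(conc, ratio, 1)  # least-squares calibration line
r2 = np.corrcoef(conc, ratio)[0, 1] ** 2       # linearity check (cf. R^2 = 0.899)

def hg_from_ratio(r):
    """Invert the calibration line: concentration = (ratio - b) / m."""
    return (r - intercept) / slope

print(round(r2, 3), round(hg_from_ratio(0.2), 2))
```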

  7. Microstructural analysis of pineal volume using trueFISP imaging

    PubMed Central

    Bumb, Jan M; Brockmann, Marc A; Groden, Christoph; Nolte, Ingo

    2013-01-01

    AIM: To determine the spectrum of pineal microstructures (solid/cystic parts) in a large clinical population using a high-resolution 3D-T2-weighted sequence. METHODS: A total of 347 patients enrolled for cranial magnetic resonance imaging were randomly included in this study. Written informed consent was obtained from all patients. The exclusion criteria were artifacts or mass lesions prohibiting evaluation of the pineal gland in any of the sequences. True-FISP-3D-imaging (1.5-T, isotropic voxel 0.9 mm) was performed in 347 adults (55.4 ± 18.1 years). Pineal gland volume (PGV), cystic volume, and parenchyma volume (cysts excluded) were measured manually. RESULTS: Overall, 40.3% of pineal glands were cystic. The median PGV was 54.6 mm3 (78.33 ± 89.0 mm3), the median cystic volume was 5.4 mm3 (15.8 ± 37.2 mm3), and the median parenchyma volume was 53.6 mm3 (71.9 ± 66.7 mm3). In cystic glands, the standard deviation of the PGV was substantially higher than in solid glands (98% vs 58% of the mean). PGV declined with age (r = -0.130, P = 0.016). CONCLUSION: The high interindividual volume variation is mainly related to cysts. Pineal parenchyma volume decreased slightly with age, whereas gender-related effects appear to be negligible. PMID:23671752

  8. Local analysis of human cortex in MRI brain volume.

    PubMed

    Bourouis, Sami

    2014-01-01

    This paper describes a method for subcortical identification and labeling in 3D medical MRI images. The ability to identify similarities between the most characteristic subcortical structures, such as sulci and gyri, is helpful for human brain mapping studies in general and medical diagnosis in particular. However, these structures vary greatly from one individual to another because they have different geometric properties. For this purpose, we have developed an efficient tool that allows a user to start from brain imaging, segment the gray/white matter border, simplify the obtained cortical surface, and describe this shape locally in order to identify homogeneous features. A segmentation procedure using geometric curvature properties, which provide efficient discrimination of local shape, is implemented on the brain cortical surface. Experimental results demonstrate the effectiveness and validity of our approach. PMID:24688452
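A minimal 1D analogue of curvature-based shape discrimination: on a sampled height profile, the sign of the approximate second derivative separates crest-like from trough-like regions. Real cortical analysis operates on 2D triangulated surfaces; this toy profile only conveys the sign-of-curvature idea.

```python
import numpy as np

# 1D analogue of curvature-based discrimination: concave-down regions of a
# synthetic "folding" profile behave like gyri (crests), concave-up regions
# like sulci (troughs). Purely illustrative, not cortical surface data.
x = np.linspace(0, 2 * np.pi, 200)
height = np.sin(x)

curvature = np.gradient(np.gradient(height, x), x)  # approximate h''(x)
labels = np.where(curvature < 0, "gyrus-like", "sulcus-like")

print(labels[50], labels[150])   # near the crest vs. near the trough
```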

  9. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods

    SciTech Connect

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich

    2012-03-15

    Purpose: Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options such as tumor resection, image-guided radiation therapy (IGRT), and radiofrequency ablation. The purpose of this work was to evaluate a new approach for liver segmentation. Methods: A graph cuts segmentation method was combined with a three-dimensional virtual-reality-based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast-enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Results: Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice generation of the independent standard utilizing a live wire tool took 70.1 min on average. A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of user interaction and resulted in statistically not significantly different segmentation error indices (ANOVA test, significance level of 0.05). Conclusions: All three experts were able to produce liver segmentations with low error rates. User interaction time savings of up to 71% compared to a 2D refinement approach demonstrate the utility and potential of our approach. The system offers a range of different tools to manipulate segmentation results, and some users might benefit from a longer learning phase to develop efficient segmentation refinement strategies. The presented approach is generally applicable and can be applied to many medical image segmentation problems.
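A volumetric overlap error of the kind reported above can be sketched as follows; the study's exact definition may differ, so this Jaccard-style variant on toy masks is illustrative only.

```python
import numpy as np

# Sketch of a relative volumetric overlap error for comparing a segmentation
# against an independent reference (toy 3D masks, not liver CT data).
def overlap_error(seg, ref):
    """Jaccard-style error: 100 * (1 - |A n B| / |A u B|), in percent."""
    seg = np.asarray(seg, bool); ref = np.asarray(ref, bool)
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    return 100.0 * (1.0 - inter / union)

ref = np.zeros((20, 20, 20), bool); ref[5:15, 5:15, 5:15] = True  # 1000 voxels
seg = np.zeros((20, 20, 20), bool); seg[5:15, 5:15, 6:15] = True  # one slab short
print(round(overlap_error(seg, ref), 2))
```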

  10. Multi-resolution Shape Analysis via Non-Euclidean Wavelets: Applications to Mesh Segmentation and Surface Alignment Problems

    PubMed Central

    Kim, Won Hwa; Chung, Moo K.; Singh, Vikas

    2013-01-01

    The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape’s local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis, to derive Non-Euclidean Wavelets based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem. PMID:24390194
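The core construction, wavelets defined through the spectrum of a graph Laplacian rather than a uniformly sampled lattice, can be sketched on a toy graph. The band-pass kernel below is a common generic choice, not the paper's exact filter.

```python
import numpy as np

# Toy spectral construction in the spirit of non-Euclidean wavelets: build the
# Laplacian of a small path graph, then form a wavelet centered at a vertex by
# applying a band-pass filter g(s*lambda) in the Laplacian eigenbasis.
n = 8
A = np.zeros((n, n))
for i in range(n - 1):                 # path graph adjacency
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(1)) - A              # combinatorial graph Laplacian

lam, U = np.linalg.eigh(L)             # eigenvalues and eigenvectors

def wavelet_at(vertex, scale):
    """Wavelet centered at a vertex: U g(s*lam) U^T e_vertex."""
    g = scale * lam * np.exp(-scale * lam)   # generic band-pass kernel x e^{-x}
    delta = np.zeros(n); delta[vertex] = 1.0
    return U @ (g * (U.T @ delta))

# g(0) = 0 kills the constant eigenvector, so the wavelet has zero mean.
w = wavelet_at(4, scale=1.0)
print(w.shape)
```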

  11. Multi-resolution Shape Analysis via Non-Euclidean Wavelets: Applications to Mesh Segmentation and Surface Alignment Problems.

    PubMed

    Kim, Won Hwa; Chung, Moo K; Singh, Vikas

    2013-01-01

    The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis, to derive Non-Euclidean Wavelets based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem. PMID:24390194

  12. COAL CONVERSION CONTROL TECHNOLOGY. VOLUME III. ECONOMIC ANALYSIS; APPENDIX

    EPA Science Inventory

    This volume is the product of an information-gathering effort relating to coal conversion process streams. Available and developing control technology has been evaluated in view of the requirements of present and proposed federal, state, regional, and international environmental ...

  13. A Rapid and Efficient 2D/3D Nuclear Segmentation Method for Analysis of Early Mouse Embryo and Stem Cell Image Data

    PubMed Central

    Lou, Xinghua; Kang, Minjung; Xenopoulos, Panagiotis; Muñoz-Descalzo, Silvia; Hadjantonakis, Anna-Katerina

    2014-01-01

    Summary Segmentation is a fundamental problem that dominates the success of microscopic image analysis. In almost 25 years of cell detection software development, there is still no single piece of commercial software that works well in practice when applied to early mouse embryo or stem cell image data. To address this need, we developed MINS (modular interactive nuclear segmentation) as a MATLAB/C++-based segmentation tool tailored for counting cells and fluorescent intensity measurements of 2D and 3D image data. Our aim was to develop a tool that is accurate and efficient yet straightforward and user friendly. The MINS pipeline comprises three major cascaded modules: detection, segmentation, and cell position classification. An extensive evaluation of MINS on both 2D and 3D images, and comparison to related tools, reveals improvements in segmentation accuracy and usability. Thus, its accuracy and ease of use will allow MINS to be implemented for routine single-cell-level image analyses. PMID:24672759

  14. Texture-based segmentation and analysis of emphysema depicted on CT images

    NASA Astrophysics Data System (ADS)

    Tan, Jun; Zheng, Bin; Wang, Xingwei; Lederman, Dror; Pu, Jiantao; Sciurba, Frank C.; Gur, David; Leader, J. Ken

    2011-03-01

    In this study we present a two-step texture-based method for segmenting emphysema depicted on CT examinations. In step 1, fractal-dimension-based texture feature extraction is used to initially detect base regions of emphysema, and a threshold is applied to the texture result image to obtain the initial base regions. In step 2, the base regions are refined pixel-by-pixel using a method that considers the variance change incurred by adding a pixel to the base region. Visual inspection revealed a reasonable segmentation of the emphysema regions. There was a strong correlation between lung function (FEV1%, FEV1/FVC, and DLCO%) and the fraction of emphysema computed using the texture-based method: -0.433, -0.629, and -0.527, respectively. The texture-based method produced more homogeneous emphysematous regions compared to simple thresholding, especially for large bullae, which can appear as speckled regions in the threshold approach. In the texture-based method, single isolated pixels may be considered emphysema only if neighboring pixels meet certain criteria, supporting the idea that a single isolated pixel is not sufficient evidence that emphysema is present. One strength of our texture-based approach to emphysema segmentation is that it goes beyond existing approaches, which typically extract a single texture feature or groups of features and analyze them individually: we first identify potential regions of emphysema and then refine the boundaries of the detected regions based on texture patterns.
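The step-2 acceptance criterion can be sketched as follows: a candidate pixel joins the base region only if the variance change it induces stays below a tolerance. The intensity values and tolerance here are arbitrary, not calibrated CT numbers.

```python
import numpy as np

# Sketch of the boundary-refinement idea: grow a base region pixel-by-pixel,
# accepting a candidate only if adding it changes the region's intensity
# variance by less than a tolerance. Values and tolerance are arbitrary.
def accept_pixel(region_values, candidate, tol=5.0):
    """Accept candidate if the variance change it causes stays below tol."""
    before = np.var(region_values)
    after = np.var(np.append(region_values, candidate))
    return abs(after - before) < tol

base = np.array([910.0, 905.0, 912.0, 908.0])  # arbitrary region intensities
print(accept_pixel(base, 909.0), accept_pixel(base, 700.0))
```

A similar-valued candidate barely moves the variance and is accepted; an outlier inflates it and is rejected, which keeps the refined boundary homogeneous.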

  15. Functional analysis of centipede development supports roles for Wnt genes in posterior development and segment generation.

    PubMed

    Hayden, Luke; Schlosser, Gerhard; Arthur, Wallace

    2015-01-01

    The genes of the Wnt family play important and highly conserved roles in posterior growth and development in a wide range of animal taxa. Wnt genes also operate in arthropod segmentation, and there has been much recent debate regarding the relationship between arthropod and vertebrate segmentation mechanisms. Due to its phylogenetic position, body form, and possession of many (11) Wnt genes, the centipede Strigamia maritima is a useful system with which to examine these issues. This study takes a functional approach based on treatment with lithium chloride, which causes ubiquitous activation of canonical Wnt signalling. This is the first functional developmental study performed in any of the 15,000 species of the arthropod subphylum Myriapoda. The expression of all 11 Wnt genes in Strigamia was analyzed in relation to posterior development. Three of these genes, Wnt11, Wnt5, and WntA, were strongly expressed in the posterior region and, thus, may play important roles in posterior developmental processes. In support of this hypothesis, LiCl treatment of S. maritima embryos was observed to produce posterior developmental defects and perturbations in AbdB and Delta expression. The effects of LiCl differ depending on the developmental stage treated, with more severe effects elicited by treatment during germband formation than by treatment at later stages. These results support a role for Wnt signalling in conferring posterior identity in Strigamia. In addition, data from this study are consistent with the hypothesis of segmentation based on a "clock and wavefront" mechanism operating in this species. PMID:25627713

  16. Texture analysis of automatic graph cuts segmentations for detection of lung cancer recurrence after stereotactic radiotherapy

    NASA Astrophysics Data System (ADS)

    Mattonen, Sarah A.; Palma, David A.; Haasbeek, Cornelis J. A.; Senan, Suresh; Ward, Aaron D.

    2015-03-01

    Stereotactic ablative radiotherapy (SABR) is a treatment for early-stage lung cancer with local control rates comparable to surgery. After SABR, benign radiation induced lung injury (RILI) results in tumour-mimicking changes on computed tomography (CT) imaging. Distinguishing recurrence from RILI is a critical clinical decision determining the need for potentially life-saving salvage therapies whose high risks in this population dictate their use only for true recurrences. Current approaches do not reliably detect recurrence within a year post-SABR. We measured the detection accuracy of texture features within automatically determined regions of interest, with the only operator input being the single line segment measuring tumour diameter, normally taken during the clinical workflow. Our leave-one-out cross validation on images taken 2-5 months post-SABR showed robustness of the entropy measure, with classification error of 26% and area under the receiver operating characteristic curve (AUC) of 0.77 using automatic segmentation; the results using manual segmentation were 24% and 0.75, respectively. AUCs for this feature increased to 0.82 and 0.93 at 8-14 months and 14-20 months post SABR, respectively, suggesting even better performance nearer to the date of clinical diagnosis of recurrence; thus this system could also be used to support and reinforce the physician's decision at that time. Based on our ongoing validation of this automatic approach on a larger sample, we aim to develop a computer-aided diagnosis system which will support the physician's decision to apply timely salvage therapies and prevent patients with RILI from undergoing invasive and risky procedures.
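The entropy feature whose robustness is reported above is, in its simplest form, the Shannon entropy of a region's grey-level histogram; the sketch below uses synthetic regions of interest, not CT data.

```python
import numpy as np

# Sketch of a histogram-entropy texture feature: heterogeneous regions yield
# high entropy, homogeneous regions low entropy. ROIs here are synthetic.
def region_entropy(values, bins=32):
    """Shannon entropy (bits) of the grey-level histogram of a region."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 log 0 := 0)
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
uniform_roi  = rng.uniform(0, 1, 4096)    # heterogeneous "texture"
constant_roi = np.full(4096, 0.5)         # perfectly homogeneous region

print(region_entropy(uniform_roi), region_entropy(constant_roi))
```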

  17. Segmentation of acute pyelonephritis area on kidney SPECT images using binary shape analysis

    NASA Astrophysics Data System (ADS)

    Wu, Chia-Hsiang; Sun, Yung-Nien; Chiu, Nan-Tsing

    1999-05-01

    Acute pyelonephritis is a serious disease in children that may result in irreversible renal scarring. The ability to localize the site of urinary tract infection and the extent of acute pyelonephritis has considerable clinical importance. In this paper, we segment the acute pyelonephritis area in kidney SPECT images. A two-step algorithm is proposed. First, the original images are converted to binary versions by automatic thresholding. Then the acute pyelonephritis areas are located by finding convex deficiencies in the obtained binary images. This work gives physicians important diagnostic information and improves the quality of medical care for children with acute pyelonephritis.
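A crude sketch of the convex-deficiency idea: fill each row of the binary mask between its extreme foreground pixels (a row-wise convexity proxy, much weaker than a true 2D convex hull) and subtract the mask; the leftover pixels mark the deficiency. The U-shaped toy mask is purely illustrative.

```python
import numpy as np

# Simplified row-wise stand-in for convex-deficiency detection: fill each row
# between its leftmost and rightmost foreground pixels, then subtract the
# original mask. A real implementation would use a true 2D convex hull.
def row_convex_deficiency(mask):
    filled = np.zeros_like(mask)
    for r in range(mask.shape[0]):
        cols = np.flatnonzero(mask[r])
        if cols.size:
            filled[r, cols[0]:cols[-1] + 1] = True
    return filled & ~mask

mask = np.zeros((5, 7), bool)
mask[:, 0:2] = True          # left arm of a "U"
mask[:, 5:7] = True          # right arm
mask[4, :] = True            # bottom bar
print(int(row_convex_deficiency(mask).sum()))  # pixels in the notch
```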

  18. DEVELOPMENT AND APPLICATION OF A WATER SUPPLY COST ANALYSIS SYSTEM. VOLUME II

    EPA Science Inventory

    A cost analysis for system water supply utility management has been developed and implemented in Kenton County, Kentucky, Water District No. 1. This volume contains the program documentation for the cost analysis system.

  19. Oil-spill risk analysis: Cook inlet outer continental shelf lease sale 149. Volume 2: Conditional risk contour maps of seasonal conditional probabilities. Final report

    SciTech Connect

    Johnson, W.R.; Marshall, C.F.; Anderson, C.M.; Lear, E.M.

    1994-08-01

    The Federal Government has proposed to offer Outer Continental Shelf (OCS) lands in Cook Inlet for oil and gas leasing. Because oil spills may occur from activities associated with offshore oil production, the Minerals Management Service conducts a formal risk assessment. In evaluating the significance of accidental oil spills, it is important to remember that the occurrence of such spills is fundamentally probabilistic. The effects of oil spills that could occur during oil and gas production must be considered. This report summarizes the results of an oil-spill risk analysis conducted for the proposed Cook Inlet OCS Lease Sale 149. The objective of this analysis was to estimate the relative risks associated with oil and gas production for the proposed lease sale. To aid the analysis, contour maps of seasonal conditional probabilities of spill contact were generated for each environmental resource or land segment in the study area. This aspect is discussed in this volume of the two-volume report.
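The fundamentally probabilistic character of spill contact can be illustrated with a toy Monte Carlo estimate of a conditional contact probability; the drift model below is entirely hypothetical and bears no relation to the trajectory model actually used in the analysis.

```python
import random

# Toy Monte Carlo in the spirit of an oil-spill risk analysis: estimate the
# conditional probability that a spill, given that it occurs, contacts a land
# segment. The one-dimensional biased random walk is purely hypothetical.
random.seed(42)

def spill_contacts_segment(steps=30, p_toward_shore=0.55, shore_distance=5):
    """Drift toward/away from shore each step; contact if shore is reached."""
    position = 0
    for _ in range(steps):
        position += 1 if random.random() < p_toward_shore else -1
        if position >= shore_distance:
            return True
    return False

n = 10_000
hits = sum(spill_contacts_segment() for _ in range(n))
print(f"conditional contact probability ~ {hits / n:.2f}")
```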

  20. A computer program for comprehensive ST-segment depression/heart rate analysis of the exercise ECG test.

    PubMed

    Lehtinen, R; Vänttinen, H; Sievänen, H; Malmivuo, J

    1996-06-01

    The ST-segment depression/heart rate (ST/HR) analysis has been found to improve the diagnostic accuracy of the exercise ECG test in detecting myocardial ischemia. Recently, three continuous diagnostic variables based on ST/HR analysis have been introduced: the ST/HR slope, the ST/HR index and the ST/HR hysteresis. The last utilises both the exercise and recovery phases of the exercise ECG test, whereas the former two are based on the exercise phase only. This article presents a computer program which not only calculates the above three diagnostic variables but also plots full diagrams of ST-segment depression against heart rate during both exercise and recovery phases for each ECG lead from given ST/HR data. The program can be used in exercise ECG diagnosis in daily clinical practice, provided that the ST/HR data from the ECG measurement system can be linked to the program. At present, the main purpose of the program is to provide clinical and medical researchers with a practical tool for comprehensive clinical evaluation and development of ST/HR analysis. PMID:8835841
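Of the three variables, the ST/HR index has the simplest definition: it is commonly computed as the maximal exercise-induced ST depression divided by the total heart-rate increase over the test. A sketch with invented sample values:

```python
# Sketch of the ST/HR index: maximal ST depression divided by the total
# exercise-induced heart-rate increase. Sample values below are invented.
def st_hr_index(st_depression_uv, heart_rate_bpm):
    """ST/HR index (uV per bpm) from paired stage-by-stage measurements."""
    hr_increase = max(heart_rate_bpm) - heart_rate_bpm[0]
    if hr_increase <= 0:
        raise ValueError("no heart-rate increase during the test")
    return max(st_depression_uv) / hr_increase

st = [0, 20, 60, 110, 160]        # ST depression (uV) at successive stages
hr = [70, 95, 120, 145, 165]      # heart rate (bpm) at the same stages
print(st_hr_index(st, hr))        # 160 uV over a 95 bpm increase
```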

  1. A Posteriori Error Analysis of a Cell-centered Finite Volume Method for Semilinear Elliptic Problems

    SciTech Connect

    Michael Pernice

    2009-11-01

    In this paper, we conduct an a posteriori analysis of the error in a quantity of interest computed from a cell-centered finite volume scheme. The a posteriori error analysis is based on variational analysis, residual errors and the adjoint problem. To carry out the analysis, we use an equivalence between the cell-centered finite volume scheme and a mixed finite element method with a special choice of quadrature.
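For readers unfamiliar with cell-centered schemes, here is a 1D sketch of the primal discretization only (not the paper's adjoint-based estimator) for -u'' = f on (0,1) with homogeneous Dirichlet data, checked against a quantity of interest with a known exact value.

```python
import numpy as np

# 1D cell-centered finite volume sketch for -u'' = f, u(0) = u(1) = 0.
# Unknowns live at cell centers; Dirichlet data enter through half-cell fluxes.
def solve_fv(n, f):
    h = 1.0 / n
    x = (np.arange(n) + 0.5) * h            # cell centers
    A = np.diag(np.full(n, 2.0)) \
        + np.diag(np.full(n - 1, -1.0), 1) \
        + np.diag(np.full(n - 1, -1.0), -1)
    A[0, 0] = A[-1, -1] = 3.0               # half-cell closure at the boundary
    u = np.linalg.solve(A / h, h * f(x))    # flux balance on each cell
    return x, u

# Quantity of interest Q(u) = integral of u; for f = 1 the exact value is 1/12.
n_cells = 64
x, u = solve_fv(n_cells, lambda x: np.ones_like(x))
q = u.sum() / n_cells                       # midpoint quadrature for Q
print(abs(q - 1.0 / 12.0))                  # small discretization error
```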

  2. Three-Dimensional MRI Analysis of Individual Volume of Lacunes in CADASIL

    PubMed Central

    Hervé, Dominique; Godin, Ophélia; Dufouil, Carole; Viswanathan, Anand; Jouvent, Eric; Pachaï, Chahin; Guichard, Jean-Pierre; Bousser, Marie-Germaine; Dichgans, Martin; Chabriat, Hugues

    2011-01-01

    Background and Purpose Three-dimensional MRI segmentation may be useful to better understand the physiopathology of lacunar infarctions. Using this technique, the distribution of lacunar infarction volumes has recently been reported in patients with cerebral autosomal-dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL). Whether the volume of each lacune (individual lacunar volume [ILV]) is associated with a patient's other MRI lesions or vascular risk factors has never been investigated. The purpose of this study was to investigate the impact of age, vascular risk factors, and MRI markers on the ILV in a large cohort of patients with CADASIL. Methods In 113 patients with CADASIL, 1568 lacunes were detected, and the ILV was estimated after automatic segmentation on 3-dimensional T1-weighted imaging. Relationships between ILV and age, blood pressure, cholesterol, diabetes, white matter hyperintensity load, number of cerebral microbleeds, apparent diffusion coefficient, brain parenchymal fraction, and the mean and median of the distribution of lacune volumes at the patient level were investigated. We used random effect models to take intraindividual correlations into account. Results The ILV varied from 4.28 to 1619 mm3. ILV was not significantly correlated with age, vascular risk factors, or the different MRI markers (white matter hyperintensity volume, cerebral microbleed number, mean apparent diffusion coefficient or brain parenchymal fraction). In contrast, ILV was positively correlated with the patient's mean and median of the lacunar volume distribution (P=0.0001). Conclusions These results suggest that the ILV is not related to the associated cerebral lesions or to vascular risk factors in CADASIL, but that an individual predisposition may explain predominating small or predominating large lacunes among patients. Local anatomic factors or genetic factors may be involved in these variations. PMID:18948610

  3. Risk for Adjacent Segment and Same Segment Reoperation After Surgery for Lumbar Stenosis: A subgroup analysis of the Spine Patient Outcomes Research Trial (SPORT)

    PubMed Central

    Radcliff, Kris; Curry, Patrick; Hilibrand, Alan; Kepler, Chris; Lurie, Jon; Zhao, Wenyan; Albert, Todd; Weinstein, James

    2013-01-01

    Study Design Subgroup analysis of a prospective, randomized database. Objective The purpose of this study was to compare surgical and patient characteristics, such as fusion, instrumentation, or obesity, to identify whether these factors were associated with an increased risk of reoperation for spinal stenosis. This prognostic information would be valuable to patients, healthcare professionals, and society as strategies to reduce reoperation, such as motion preservation, are developed. Summary of Background Data Reoperation due to recurrence of index-level pathology or adjacent segment disease is a common clinical problem. Despite multiple studies on the incidence of reoperation, there have been few comparative studies establishing risk factors for reoperation after spinal stenosis surgery. The hypothesis of this subgroup analysis was that lumbar fusion or particular patient characteristics, such as obesity, would render patients with lumbar stenosis more susceptible to reoperation at the index or adjacent levels. Methods The study population combined the randomized and observational cohorts enrolled in SPORT for treatment of spinal stenosis. The surgically treated patients were stratified according to those who had reoperation (n=54) or no reoperation (n=359). Outcome measures were assessed at baseline, 1 year, 2 years, 3 years, and 4 years. The difference in improvement between those who had reoperation and those who did not was determined at each follow-up period. Results Of the 413 patients who underwent surgical treatment for spinal stenosis, 54 patients had a reoperation within four years. At baseline, there were no significant differences in demographic characteristics or clinical outcome scores between the reoperation and non-reoperation groups. Furthermore, between groups there were no differences in the severity of symptoms, obesity, physical examination signs, levels of stenosis, location of stenosis, stenosis severity, levels of fusion, levels of laminectomy, levels decompressed, operation time, or intraoperative or postoperative complications. There was an increased percentage of patients with duration of symptoms greater than 12 months in the reoperation group (56% reoperation vs 36% no reoperation, p<0.008). At final follow-up, there was significantly less improvement in the outcome of the reoperation group in SF36 PF (14.4 vs 22.6, p<0.05), ODI (−12.4 vs −21.1, p<0.01), and Sciatica Bothersomeness Index (−5 vs −8.1, p<0.006). Conclusion Lumbar fusion and instrumentation were not associated with an increased rate of reoperation at index or adjacent levels compared to nonfusion techniques. The only specific risk factor for reoperation after treatment of spinal stenosis was a duration of pretreatment symptoms greater than 12 months. The overall incidence of reoperation after spinal stenosis surgery was 13%, and reoperations were equally distributed between index and adjacent lumbar levels. Reoperation may be related to the natural history of spinal degenerative disease. PMID:23154835

  4. Phylogenetic analysis, genomic diversity and classification of M class gene segments of turkey reoviruses.

    PubMed

    Mor, Sunil K; Marthaler, Douglas; Verma, Harsha; Sharafeldin, Tamer A; Jindal, Naresh; Porter, Robert E; Goyal, Sagar M

    2015-03-23

    From 2011 to 2014, 13 turkey arthritis reoviruses (TARVs) were isolated from cases of swollen hock joints in 2- to 18-week-old turkeys. In addition, two isolates from similar cases of turkey arthritis were received from another laboratory. Eight turkey enteric reoviruses (TERVs) isolated from fecal samples of turkeys were also used for comparison. The aims of this study were to characterize turkey reovirus (TRV) based on the complete M-class genome segments and to determine genetic diversity within TARVs in comparison to TERVs and chicken reoviruses (CRVs). Nucleotide (nt) cut-off values of 84%, 83% and 85% for the M1, M2 and M3 gene segments were proposed and used for genotype classification, generating 5, 7, and 3 genotypes, respectively. Using these nt cut-off values, we propose M-class genotype constellations (GCs) for avian reoviruses. Of the seven GCs, GC1 and GC3 were shared between the TARVs and TERVs, indicating possible reassortment between turkey and chicken reoviruses. The TARVs and TERVs were divided into three GCs, and GC2 was unique to TARVs and TERVs. The proposed new GC approach should be useful in identifying reassortant viruses, which may ultimately be used in the design of a universal vaccine against both chicken and turkey reoviruses. PMID:25655814

  5. Analysis of flexible aircraft longitudinal dynamics and handling qualities. Volume 1: Analysis methods

    NASA Technical Reports Server (NTRS)

    Waszak, M. R.; Schmidt, D. S.

    1985-01-01

    As aircraft become larger and lighter due to design requirements for increased payload and improved fuel efficiency, they will also become more flexible. For highly flexible vehicles, handling qualities may not be accurately predicted by conventional methods. This study applies two analysis methods to a family of flexible aircraft in order to investigate how and when structural (especially dynamic aeroelastic) effects affect the dynamic characteristics of aircraft. The first is an open-loop model analysis technique, which considers the effects of modal residue magnitudes on vehicle handling qualities. The second is a pilot-in-the-loop analysis procedure that considers several closed-loop system characteristics. Volume 1 consists of the development and application of these two analysis methods.

  6. Brain MRI Segmentation with Multiphase Minimal Partitioning: A Comparative Study

    PubMed Central

    Angelini, Elsa D.; Song, Ting; Mensh, Brett D.; Laine, Andrew F.

    2007-01-01

    This paper presents the implementation and quantitative evaluation of a multiphase three-dimensional deformable model in a level set framework for automated segmentation of brain MRIs. The segmentation algorithm performs an optimal partitioning of three-dimensional data based on homogeneity measures that naturally evolves to the extraction of different tissue types in the brain. Random seed initialization was used to minimize the sensitivity of the method to initial conditions while avoiding the need for a priori information. This random initialization ensures robustness of the method with respect to the initialization and the minimization setup. Postprocessing corrections with morphological operators were applied to refine the details of the global segmentation method. A clinical study was performed on a database of 10 adult brain MRI volumes to compare the level set segmentation to three other methods: “idealized” intensity thresholding, fuzzy connectedness, and an expectation-maximization classification using hidden Markov random fields. Quantitative evaluation of segmentation accuracy was performed by comparison to manual segmentation, computing true-positive and false-positive volume fractions. A statistical comparison of the segmentation methods was performed through a Wilcoxon analysis of these error rates, and the results showed very high quality and stability of the multiphase three-dimensional level set method. PMID:18253474

  7. Effects of immersion on visual analysis of volume data.

    PubMed

    Laha, Bireswar; Sensharma, Kriti; Schiffbauer, James D; Bowman, Doug A

    2012-04-01

    Volume visualization has been widely used for decades for analyzing datasets ranging from 3D medical images to seismic data to paleontological data. Many have proposed using immersive virtual reality (VR) systems to view volume visualizations, and there is anecdotal evidence of the benefits of VR for this purpose. However, there has been very little empirical research exploring the effects of higher levels of immersion for volume visualization, and it is not known how various components of immersion influence the effectiveness of visualization in VR. We conducted a controlled experiment in which we studied the independent and combined effects of three components of immersion (head tracking, field of regard, and stereoscopic rendering) on the effectiveness of visualization tasks with two x-ray microscopic computed tomography datasets. We report significant benefits of analyzing volume data in an environment involving those components of immersion. We find that the benefits do not necessarily require all three components simultaneously, and that the components have variable influence on different task categories. The results of our study improve our understanding of the effects of immersion on perceived and actual task performance, and provide guidance on the choice of display systems to designers seeking to maximize the effectiveness of volume visualization applications. PMID:22402687

  8. Fetal brain volumetry through MRI volumetric reconstruction and segmentation

    PubMed Central

    Estroff, Judy A.; Barnewolt, Carol E.; Connolly, Susan A.; Warfield, Simon K.

    2013-01-01

    Purpose Fetal MRI volumetry is a useful technique, but it is limited by a dependency upon motion-free scans, tedious manual segmentation, and spatial inaccuracy due to thick-slice scans. An image processing pipeline that addresses these limitations was developed and tested. Materials and methods The principal sequences acquired in fetal MRI clinical practice are multiple orthogonal single-shot fast spin echo scans. State-of-the-art image processing techniques were used for inter-slice motion correction and super-resolution reconstruction of high-resolution volumetric images from these scans. The reconstructed volume images were processed with intensity non-uniformity correction, and the fetal brain was extracted using supervised automated segmentation. Results Reconstruction, segmentation and volumetry of the fetal brains were performed for a cohort of twenty-five clinically acquired fetal MRI scans. Performance metrics for volume reconstruction, segmentation and volumetry were determined by comparison to manual tracings in five randomly chosen cases. Finally, analysis of the fetal brain and parenchymal volumes was performed based on the gestational age of the fetuses. Conclusion The image processing pipeline developed in this study enables volume rendering and accurate fetal brain volumetry by addressing the limitations of current volumetry techniques, which include dependency on motion-free scans, manual segmentation, and inaccurate thick-slice interpolation. PMID:20625848

  9. Earth Observatory Satellite system definition study. Report 5: System design and specifications. Volume 4: Mission peculiar spacecraft segment and module specifications

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The specifications for the Earth Observatory Satellite (EOS) peculiar spacecraft segment and associated subsystems and modules are presented. The specifications considered include the following: (1) wideband communications subsystem module, (2) mission peculiar software, (3) hydrazine propulsion subsystem module, (4) solar array assembly, and (5) the scanning spectral radiometer.

  10. A combined machine-learning and graph-based framework for the segmentation of retinal surfaces in SD-OCT volumes

    PubMed Central

    Antony, Bhavna J.; Abràmoff, Michael D.; Harper, Matthew M.; Jeong, Woojin; Sohn, Elliott H.; Kwon, Young H.; Kardon, Randy; Garvin, Mona K.

    2013-01-01

    Optical coherence tomography is routinely used clinically for the detection and management of ocular diseases as well as in research where the studies may involve animals. This routine use requires that the developed automated segmentation methods not only be accurate and reliable, but also be adaptable to meet new requirements. We have previously proposed the use of a graph-theoretic approach for the automated 3-D segmentation of multiple retinal surfaces in volumetric human SD-OCT scans. The method ensures the global optimality of the set of surfaces with respect to a cost function. Cost functions have thus far been typically designed by hand by domain experts. This difficult and time-consuming task significantly impacts the adaptability of these methods to new models. Here, we describe a framework for the automated machine-learning based design of the cost function utilized by this graph-theoretic method. The impact of the learned components on the final segmentation accuracy are statistically assessed in order to tailor the method to specific applications. This adaptability is demonstrated by utilizing the method to segment seven, ten and five retinal surfaces from SD-OCT scans obtained from humans, mice and canines, respectively. The overall unsigned border position errors observed when using the recommended configuration of the graph-theoretic method was 6.45 ± 1.87 µm, 3.35 ± 0.62 µm and 9.75 ± 3.18 µm for the human, mouse and canine set of images, respectively. PMID:24409375

  11. A combined machine-learning and graph-based framework for the segmentation of retinal surfaces in SD-OCT volumes.

    PubMed

    Antony, Bhavna J; Abràmoff, Michael D; Harper, Matthew M; Jeong, Woojin; Sohn, Elliott H; Kwon, Young H; Kardon, Randy; Garvin, Mona K

    2013-01-01

    Optical coherence tomography is routinely used clinically for the detection and management of ocular diseases as well as in research where the studies may involve animals. This routine use requires that the developed automated segmentation methods not only be accurate and reliable, but also be adaptable to meet new requirements. We have previously proposed the use of a graph-theoretic approach for the automated 3-D segmentation of multiple retinal surfaces in volumetric human SD-OCT scans. The method ensures the global optimality of the set of surfaces with respect to a cost function. Cost functions have thus far been typically designed by hand by domain experts. This difficult and time-consuming task significantly impacts the adaptability of these methods to new models. Here, we describe a framework for the automated machine-learning based design of the cost function utilized by this graph-theoretic method. The impact of the learned components on the final segmentation accuracy are statistically assessed in order to tailor the method to specific applications. This adaptability is demonstrated by utilizing the method to segment seven, ten and five retinal surfaces from SD-OCT scans obtained from humans, mice and canines, respectively. The overall unsigned border position errors observed when using the recommended configuration of the graph-theoretic method was 6.45 ± 1.87 µm, 3.35 ± 0.62 µm and 9.75 ± 3.18 µm for the human, mouse and canine set of images, respectively. PMID:24409375

  12. Determination of fiber volume in graphite/epoxy materials using computer image analysis

    NASA Technical Reports Server (NTRS)

    Viens, Michael J.

    1990-01-01

    The fiber volume of graphite/epoxy specimens was determined by analyzing optical images of cross sectioned specimens using image analysis software. Test specimens were mounted and polished using standard metallographic techniques and examined at 1000 times magnification. Fiber volume determined using the optical imaging agreed well with values determined using the standard acid digestion technique. The results were found to agree within 5 percent over a fiber volume range of 45 to 70 percent. The error observed is believed to arise from fiber volume variations within the graphite/epoxy panels themselves. The determination of ply orientation using image analysis techniques is also addressed.
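The thresholding step behind such fiber volume measurements can be sketched as follows, assuming fibers image brighter than the matrix; the threshold here is picked with Otsu's method on a synthetic cross-section (all values are illustrative, not the study's data):

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the gray level maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=256, range=(0.0, 1.0))
    p = hist / hist.sum()
    levels = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # class-0 (dark) probability
    mu = np.cumsum(p * levels)        # cumulative mean
    mu_t = mu[-1]
    w1 = 1 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros_like(w0)
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return levels[np.argmax(sigma_b)]

def fiber_volume_fraction(img):
    """Fraction of pixels brighter than the Otsu threshold
    (fibers assumed brighter than the matrix)."""
    return float((img > otsu_threshold(img)).mean())

# synthetic cross-section: dark matrix (~0.2) with bright fiber pixels (~0.8)
rng = np.random.default_rng(0)
img = np.full((100, 100), 0.2) + 0.02 * rng.standard_normal((100, 100))
img[:, :60] = 0.8 + 0.02 * rng.standard_normal((100, 60))  # 60% fibers
print(fiber_volume_fraction(img))  # ~0.6
```

Real micrographs would need the same polishing-artifact and illumination corrections the paper alludes to before thresholding is reliable.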

  13. Fractal Analysis of Laplacian Pyramidal Filters Applied to Segmentation of Soil Images

    PubMed Central

    de Castro, J.; Méndez, A.; Tarquis, A. M.

    2014-01-01

    The Laplacian pyramid is a well-known technique for image processing in which local operators of many scales, but identical shape, serve as the basis functions. The properties required of the pyramidal filter produce a family of filters which is uniparametric in the classical case, when the length of the filter is 5. We pay attention to the Gaussian and fractal behaviour of these basis functions (or filters), and we determine the Gaussian and fractal ranges in the case of a single parameter a. These fractal filters lose less energy in every step of the Laplacian pyramid, and we apply this property to obtain threshold values for segmenting soil images and then evaluate their porosity. We also evaluate our results by comparing them with the Otsu algorithm threshold values, and conclude that our algorithm produces reliable test results. PMID:25114957

  14. Fractal analysis of laplacian pyramidal filters applied to segmentation of soil images.

    PubMed

    de Castro, J; Ballesteros, F; Méndez, A; Tarquis, A M

    2014-01-01

    The Laplacian pyramid is a well-known technique for image processing in which local operators of many scales, but identical shape, serve as the basis functions. The properties required of the pyramidal filter produce a family of filters which is uniparametric in the classical case, when the length of the filter is 5. We pay attention to the Gaussian and fractal behaviour of these basis functions (or filters), and we determine the Gaussian and fractal ranges in the case of a single parameter a. These fractal filters lose less energy in every step of the Laplacian pyramid, and we apply this property to obtain threshold values for segmenting soil images and then evaluate their porosity. We also evaluate our results by comparing them with the Otsu algorithm threshold values, and conclude that our algorithm produces reliable test results. PMID:25114957
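The single-parameter 5-tap filter family mentioned above is the classical Burt-Adelson generating kernel; a minimal 1-D sketch of the pyramid construction (parameter value and test signal are illustrative):

```python
import numpy as np

def pyramid_kernel(a):
    """Burt-Adelson generating kernel: 5 taps, one free parameter a."""
    return np.array([0.25 - a / 2, 0.25, a, 0.25, 0.25 - a / 2])

def reduce_(x, a):
    """Blur with the kernel, then drop every other sample."""
    return np.convolve(x, pyramid_kernel(a), mode="same")[::2]

def expand(x, n, a):
    """Upsample to length n by zero-stuffing, then blur (gain 2)."""
    up = np.zeros(n)
    up[::2] = x
    return np.convolve(up, 2 * pyramid_kernel(a), mode="same")

def laplacian_pyramid(x, levels, a=0.4):
    """Each level stores the detail lost by one reduce/expand round trip."""
    pyr = []
    for _ in range(levels):
        small = reduce_(x, a)
        pyr.append(x - expand(small, len(x), a))
        x = small
    pyr.append(x)          # coarsest residual
    return pyr

sig = np.sin(np.linspace(0, 4 * np.pi, 64))
pyr = laplacian_pyramid(sig, 3)
print([len(p) for p in pyr])  # [64, 32, 16, 8]
```

The kernel sums to 1 for any a; a near 0.4 gives the Gaussian-like member of the family, while other values of a reach the fractal range the paper exploits.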

  15. Sequence analysis of the medium (M) segment of Cache Valley virus, with comparison to other Bunyaviridae.

    PubMed

    Brockus, C L; Grimstad, P R

    1999-01-01

    The complete sequence of the medium (M) segment of Cache Valley virus (CVV), a human neuropathogen, has been determined using a series of overlapping cDNA clones. The viral complementary-sense RNA comprises 4463 nucleotides and encodes a polyprotein precursor of 1435 amino acids, starting at an AUG at bases 49-51 and ending at a UGA stop codon at bases 4351-4353. This polyprotein-encoding sequence is arranged as G2-NSm-G1. The base composition of the segment is 34.9% A, 17.0% C, 19.4% G and 28.7% U. Comparison of the nucleotide sequence to the prototype Bunyamwera virus sequence shows an identity of 63%, indicating that several differences exist within the individual coding regions, most notably within the NSm and G1 coding regions. Based on two presumed cleavage points within the precursor, the G2 glycoprotein, encoded from nt 94-951, is 286 amino acids long and has two sites of potential glycosylation. NSm, encoded from nt 952-1476, is 175 amino acids, while the largest glycoprotein, G1, encoded from nt 1477-4350, consists of 958 amino acids and has five potential glycosylation sites, two of which appear to be unique to CVV. Subsequent study of these glycosylation sites and of potential differences between the sequence of this prototype CVV strain and other geographic isolates may suggest means for improving detection of human infections as well as mapping differences in neurovirulence, neuroinvasiveness and other aspects of pathogenicity. PMID:10499453
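The protein lengths quoted above follow directly from the 1-based inclusive nucleotide coordinates; a quick sanity check:

```python
def orf_length_aa(start_nt, stop_nt):
    """Amino-acid length of a coding region given 1-based inclusive
    nucleotide coordinates (span assumed to contain no stop codon)."""
    n = stop_nt - start_nt + 1
    assert n % 3 == 0, "coding span must be a whole number of codons"
    return n // 3

# coordinates quoted in the abstract: G2-NSm-G1 within the M-segment ORF
regions = {"G2": (94, 951), "NSm": (952, 1476), "G1": (1477, 4350)}
for name, (s, e) in regions.items():
    print(name, orf_length_aa(s, e))  # G2 286, NSm 175, G1 958
```

All three computed lengths match the amino-acid counts stated in the abstract.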

  16. A link-segment model of upright human posture for analysis of head-trunk coordination

    NASA Technical Reports Server (NTRS)

    Nicholas, S. C.; Doxey-Gasway, D. D.; Paloski, W. H.

    1998-01-01

    Sensory-motor control of upright human posture may be organized in a top-down fashion such that certain head-trunk coordination strategies are employed to optimize visual and/or vestibular sensory inputs. Previous quantitative models of the biomechanics of human posture control have examined the simple case of ankle sway strategy, in which an inverted pendulum model is used, and the somewhat more complicated case of hip sway strategy, in which multisegment, articulated models are used. While these models can be used to quantify the gross dynamics of posture control, they are not sufficiently detailed to analyze head-trunk coordination strategies that may be crucial to understanding its underlying mechanisms. In this paper, we present a biomechanical model of upright human posture that extends an existing four mass, sagittal plane, link-segment model to a five mass model including an independent head link. The new model was developed to analyze segmental body movements during dynamic posturography experiments in order to study head-trunk coordination strategies and their influence on sensory inputs to balance control. It was designed specifically to analyze data collected on the EquiTest (NeuroCom International, Clackamas, OR) computerized dynamic posturography system, where the task of maintaining postural equilibrium may be challenged under conditions in which the visual surround, support surface, or both are in motion. The performance of the model was tested by comparing its estimated ground reaction forces to those measured directly by support surface force transducers. We conclude that this model will be a valuable analytical tool in the search for mechanisms of balance control.
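The core computations in such a link-segment model can be sketched as a whole-body centre of mass from segment masses and positions, and the static vertical ground reaction force against which the model's estimates were validated. The segment masses and positions below are illustrative values, not the EquiTest model's parameters:

```python
import numpy as np

def total_com(masses, positions):
    """Whole-body centre of mass of a link-segment model:
    mass-weighted average of the segment COM positions."""
    m = np.asarray(masses, dtype=float)
    p = np.asarray(positions, dtype=float)   # one (x, y) row per segment
    return (m[:, None] * p).sum(axis=0) / m.sum()

def vertical_grf(masses, com_accel_y, g=9.81):
    """Vertical ground reaction force: total mass times
    (gravity + vertical COM acceleration)."""
    return sum(masses) * (g + com_accel_y)

# hypothetical 5-segment sagittal model: shanks, thighs, pelvis, trunk, head
masses = [7.0, 16.0, 12.0, 30.0, 5.0]        # kg (illustrative)
pos = [(0.0, 0.25), (0.0, 0.6), (0.0, 0.95), (0.02, 1.3), (0.03, 1.65)]
print(total_com(masses, pos))    # COM near (0.011, 1.0) m
print(vertical_grf(masses, 0.0)) # static case: body weight in newtons
```

Comparing such model-derived forces with force-plate measurements is exactly the validation step described in the abstract.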

  17. Cargo Logistics Airlift Systems Study (CLASS). Volume 1: Analysis of current air cargo system

    NASA Technical Reports Server (NTRS)

    Burby, R. J.; Kuhlman, W. H.

    1978-01-01

    The material presented in this volume is classified into the following sections: (1) analysis of current routes; (2) air eligibility criteria; (3) current direct support infrastructure; (4) comparative mode analysis; (5) political and economic factors; and (6) future potential market areas. An effort was made to keep the observations and findings relating to the current systems as objective as possible in order not to bias the analysis of future air cargo operations reported in Volume 3 of the CLASS final report.

  18. Analysis of iris structure and iridocorneal angle parameters with anterior segment optical coherence tomography in Fuchs' uveitis syndrome.

    PubMed

    Basarir, Berna; Altan, Cigdem; Pinarci, Eylem Yaman; Celik, Ugur; Satana, Banu; Demirok, Ahmet

    2013-06-01

    To evaluate the differences in the biometric parameters of the iridocorneal angle and iris structure measured by anterior segment optical coherence tomography (AS-OCT) in Fuchs' uveitis syndrome (FUS). Seventy-six eyes of 38 consecutive patients with a diagnosis of unilateral FUS were recruited into this prospective, cross-sectional and comparative study. After a complete ocular examination, anterior segment biometric parameters were measured by Visante® AS-OCT. All parameters were compared statistically between the two eyes of each patient. The mean age of the 38 subjects was 32.5 ± 7.5 years (18 female and 20 male). The mean visual acuity was lower in eyes with FUS (0.55 ± 0.31) than in healthy eyes (0.93 ± 0.17). The central corneal thickness did not differ significantly between eyes. All iridocorneal angle parameters (angle-opening distance 500 and 750, scleral spur angle, trabecular-iris space (TISA) 500 and 750) except TISA 500 in the temporal quadrant were significantly larger in eyes with FUS than in healthy eyes. The anterior chamber was deeper in the eyes with FUS than in the unaffected eyes. With regard to iris measurements, iris thickness in the thickest part, iris bowing and iris shape were all statistically different between the affected eye and the healthy eye in individual patients with FUS. However, no statistically significant differences were evident in iris thickness at 500 µm, thickness in the middle, or iris length. There were significant differences in iris shape between the two eyes of patients with glaucoma. AS-OCT as an imaging method provides many informative results in the analysis of anterior segment parameters in FUS. PMID:23277205

  19. Earth Observatory Satellite system definition study. Report 5: System design and specifications. Volume 3: General purpose spacecraft segment and module specifications

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The specifications for the Earth Observatory Satellite (EOS) general purpose spacecraft segment are presented. The satellite is designed to provide attitude stabilization, electrical power, and a communications data handling subsystem which can support various mission peculiar subsystems. The specifications considered include the following: (1) structures subsystem, (2) thermal control subsystem, (3) communications and data handling subsystem module, (4) attitude control subsystem module, (5) power subsystem module, and (6) electrical integration subsystem.

  20. Industrial process heat data analysis and evaluation. Volume 1

    SciTech Connect

    Lewandowski, A; Gee, R; May, K

    1984-07-01

    The Solar Energy Research Institute (SERI) has modeled seven of the Department of Energy (DOE) sponsored solar Industrial Process Heat (IPH) field experiments and has generated thermal performance predictions for each project. Additionally, these performance predictions have been compared with actual performance measurements taken at the projects. Predictions were generated using SOLIPH, an hour-by-hour computer code with the capability of modeling many types of solar IPH components and system configurations. Comparisons of reported and predicted performance resulted in good agreement when field test reliability and availability were high. Volume I contains the main body of the work: objective, model description, site configurations, model results, data comparisons, and summary. Volume II contains complete performance prediction results (tabular and graphic output) and computer program listings.

  1. Analysis of layered assays and volume microarrays in stratified media.

    PubMed

    Ghafari, Homanaz; Hanley, Quentin S

    2012-12-01

    Changing traditional microarray methods by using both sides of a substrate or stacking microarrays, combined with optical sectioning, enables the detection of more than one assay along the z-axis. Here we demonstrate two-sided substrates, multilayer arrays with up to 5 substrates, and 2- and 3-dimensional antigen microarrays. By replacing standard substrates with multiple 30 µm layers of glass or mica, high density multilayer and 3-dimensional volume arrays were created within a stratified medium. Although a decrease in fluorescence intensity with increasing number of substrate layers was observed, together with a concomitant broadening of the axial resolution, quantitative results were obtained from this stratified system using calibrated intensities. Two- and three-dimensional antigen microarrays were generated via microcontact printing and detected as indirect immunoassays with quantum dot conjugated antibodies. Volume arrays were analysed by confocal laser scanning microscopy, producing clear patterns even when the assays were overlapped spatially. PMID:22911003

  2. Automatic segmentation and identification of solitary pulmonary nodules on follow-up CT scans based on local intensity structure analysis and non-rigid image registration

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Naito, Hideto; Nakamura, Yoshihiko; Kitasaka, Takayuki; Rueckert, Daniel; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

    2011-03-01

    This paper presents a novel method that can automatically segment solitary pulmonary nodules (SPNs) and match the segmented SPNs on follow-up thoracic CT scans. Because of its clinical importance, a physician needs to find SPNs on chest CT and observe their progress over time in order to diagnose whether a nodule is benign or malignant, or to observe the effect of chemotherapy on malignant ones using follow-up data. However, the enormous number of CT images places a large burden on the physician. To lighten this burden, we developed a method for automatic segmentation and assisted observation of SPNs in follow-up CT scans. The SPNs on an input 3D thoracic CT scan are segmented based on local intensity structure analysis and information about the pulmonary blood vessels. To compensate for lung deformation, we co-register follow-up CT scans using an affine and a non-rigid registration. Finally, matches between detected nodules are found across the registered CT scans based on a similarity measure. We applied these methods to three patients comprising 14 thoracic CT scans. Our segmentation method detected 96.7% of SPNs from the whole image set, and the nodule matching method found 83.3% of correspondences from the segmented SPNs. The results also show that our matching method is robust to SPN growth, including integration/separation and appearance/disappearance. These results confirm that our method is feasible for segmenting and identifying SPNs on follow-up CT scans.
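The correspondence-finding step can be sketched as a greedy nearest-centroid matching on registered scans. This is an illustrative stand-in, not the paper's similarity measure; coordinates below are hypothetical, in mm:

```python
import numpy as np

def match_nodules(baseline, followup, max_dist=10.0):
    """Greedy nearest-centroid matching of segmented nodules between two
    registered scans; pairs farther apart than max_dist are left unmatched,
    modelling nodule appearance/disappearance."""
    matches = []
    used = set()
    for i, c in enumerate(baseline):
        d = np.linalg.norm(followup - c, axis=1)
        for j in used:
            d[j] = np.inf          # each follow-up nodule matched at most once
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            matches.append((i, j))
            used.add(j)
    return matches

base = np.array([[10.0, 10, 10], [50, 50, 50], [80, 20, 30]])
follow = np.array([[52.0, 49, 51], [11, 9, 10]])   # third nodule vanished
print(match_nodules(base, follow))  # [(0, 1), (1, 0)]
```

Greedy matching is order-dependent; a globally optimal assignment (e.g. Hungarian algorithm) would be the robust choice in practice.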

  3. Partial volume correction and image analysis methods for intersubject comparison of FDG-PET studies

    NASA Astrophysics Data System (ADS)

    Yang, Jun

    2000-12-01

    Partial volume effect is an artifact mainly due to limited imaging sensor resolution. It biases the measured activity in small structures and around tissue boundaries. In brain FDG-PET studies, especially Alzheimer's disease studies where there is serious gray matter atrophy, accurate estimation of the cerebral metabolic rate of glucose is even more problematic because of the large partial volume effect. In this dissertation, we developed a framework enabling inter-subject comparison of partial-volume-corrected brain FDG-PET studies. The framework is composed of the following image processing steps: (1) MRI segmentation, (2) MR-PET registration, (3) MR-based PVE correction, (4) MR 3D inter-subject elastic mapping. Through simulation studies, we showed that the newly developed partial volume correction methods, whether pixel based or ROI based, performed better than previous methods. By applying this framework to a real Alzheimer's disease study, we demonstrated that the partial-volume-corrected glucose rates vary significantly among the control, at-risk and disease patient groups, and that this framework is a promising tool for assisting early identification of Alzheimer's patients.
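The ROI-based correction can be sketched as a geometric-transfer-matrix-style inversion. This is a hypothetical 2-ROI example with illustrative spill fractions and activities; the dissertation's own pixel- and ROI-based methods may differ in detail:

```python
import numpy as np

def roi_partial_volume_correction(observed, spill):
    """ROI-based partial volume correction: solve t_obs = S @ t_true,
    where S[i, j] is the fraction of true ROI-j activity that the
    scanner's point-spread function places into ROI i."""
    return np.linalg.solve(spill, observed)

# hypothetical 2-ROI case: gray and white matter with 20% mutual spill-over
S = np.array([[0.8, 0.2],
              [0.2, 0.8]])
observed = S @ np.array([10.0, 2.0])   # true GM=10, WM=2, then blurred
print(roi_partial_volume_correction(observed, S))  # [10.  2.]
```

In practice S is estimated by convolving each segmented ROI mask with the scanner PSF, which is where the MRI segmentation and MR-PET registration steps feed in.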

  4. Multiscale remote sensing data segmentation and post-segmentation change detection based on logical modeling: Theoretical exposition and experimental results for forestland cover change analysis

    NASA Astrophysics Data System (ADS)

    Ouma, Yashon O.; Josaphat, S. S.; Tateishi, Ryutaro

    2008-07-01

    Quantification of forestland cover extents, changes, and their causes is currently a regional and global research priority. Remote sensing data (RSD) play a significant role in this exercise. However, supervised classification-based forest mapping from RSD is limited by the lack of ground truth and by reliance on spectral information alone. In this paper, first results of a methodology to detect change/no change based on unsupervised multiresolution image transformation are presented. The technique combines directional wavelet-transform texture and multispectral imagery in an anisotropic diffusion aggregation, or segmentation, algorithm. The segmentation algorithm was implemented in an unsupervised self-organizing feature map neural network. Using Landsat TM (1986) and ETM+ (2001) imagery, logical-operations-based change detection results for part of the Mau forest in Kenya are presented. An overall change detection accuracy of 88.4%, corresponding to a kappa of 0.8265, was obtained. The methodology is able to derive the change information a posteriori, as opposed to conventional methods that require land cover classes a priori for change detection. Most importantly, the approach can be used to predict the existence, location and extent of disturbances within natural environmental systems.
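    The post-segmentation, logical-operations-based change detection step reduces to Boolean algebra on the two classified masks. A sketch, assuming binary forest/non-forest labels for the two dates:

```python
import numpy as np

def change_map(forest_t1, forest_t2):
    """Post-segmentation change detection by logical modelling on two
    binary forest masks: label each pixel no-change, loss, or gain."""
    forest_t1 = np.asarray(forest_t1, bool)
    forest_t2 = np.asarray(forest_t2, bool)
    loss = forest_t1 & ~forest_t2    # forest at t1, gone at t2
    gain = ~forest_t1 & forest_t2    # new forest at t2
    change = loss | gain
    return change, loss, gain
```

    Per-class change areas then follow directly from pixel counts times the pixel ground area.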

  5. Automated segmentation of the lamina cribrosa using Frangi's filter: a novel approach for rapid identification of tissue volume fraction and beam orientation in a trabeculated structure in the eye.

    PubMed

    Campbell, Ian C; Coudrillier, Baptiste; Mensah, Johanne; Abel, Richard L; Ethier, C Ross

    2015-03-01

    The lamina cribrosa (LC) is a tissue in the posterior eye with a complex trabecular microstructure. This tissue is of great research interest, as it is likely the initial site of retinal ganglion cell axonal damage in glaucoma. Unfortunately, the LC is difficult to access experimentally, and thus imaging techniques in tandem with image processing have emerged as powerful tools to study the microstructure and biomechanics of this tissue. Here, we present a staining approach to enhance the contrast of the microstructure in micro-computed tomography (micro-CT) imaging as well as a comparison between tissues imaged with micro-CT and second harmonic generation (SHG) microscopy. We then apply a modified version of Frangi's vesselness filter to automatically segment the connective tissue beams of the LC and determine the orientation of each beam. This approach successfully segmented the beams of a porcine optic nerve head from micro-CT in three dimensions and SHG microscopy in two dimensions. As an application of this filter, we present finite-element modelling of the posterior eye that suggests that connective tissue volume fraction is the major driving factor of LC biomechanics. We conclude that segmentation with Frangi's filter is a powerful tool for future image-driven studies of LC biomechanics. PMID:25589572
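    Frangi-style filtering scores each pixel by the eigenvalues of the local Hessian: ridge-like (beam-like) structures have one small eigenvalue along the beam and one large negative eigenvalue across it. A 2D sketch of this idea (a simplified stand-in for the authors' modified 3D filter) is:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness_2d(img, sigma=2.0, beta=0.5, c=None):
    """Frangi-style 2D vesselness: responds to bright ridge-like structures."""
    img = np.asarray(img, float)
    # Scale-normalised Hessian from Gaussian derivatives
    hxx = sigma ** 2 * gaussian_filter(img, sigma, order=(0, 2))
    hyy = sigma ** 2 * gaussian_filter(img, sigma, order=(2, 0))
    hxy = sigma ** 2 * gaussian_filter(img, sigma, order=(1, 1))
    # Eigenvalues of the symmetric 2x2 Hessian at every pixel
    tmp = np.sqrt((hxx - hyy) ** 2 + 4 * hxy ** 2)
    e1 = 0.5 * (hxx + hyy + tmp)
    e2 = 0.5 * (hxx + hyy - tmp)
    swap = np.abs(e1) > np.abs(e2)            # order so |l1| <= |l2|
    l1 = np.where(swap, e2, e1)
    l2 = np.where(swap, e1, e2)
    rb2 = (l1 / np.where(l2 == 0, 1e-12, l2)) ** 2   # "blobness" ratio
    s2 = l1 ** 2 + l2 ** 2                           # "structureness"
    if c is None:
        c = 0.5 * np.sqrt(s2).max() + 1e-12
    v = np.exp(-rb2 / (2 * beta ** 2)) * (1 - np.exp(-s2 / (2 * c ** 2)))
    v[l2 > 0] = 0.0   # keep bright ridges only (negative principal curvature)
    return v
```

    Running at several sigmas and taking the per-pixel maximum gives the usual multi-scale response; the beam orientation follows from the eigenvector of the small-magnitude eigenvalue.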

  7. A STANDARD PROCEDURE FOR COST ANALYSIS OF POLLUTION CONTROL OPERATIONS. VOLUME II. APPENDICES

    EPA Science Inventory

    Volume I is a user guide for a standard procedure for the engineering cost analysis of pollution abatement operations and processes. The procedure applies to projects in various economic sectors: private, regulated, and public. Volume II, the bulk of the document, contains 11 app...

  8. PREDICTION OF MINERAL QUALITY OF IRRIGATION RETURN FLOW. VOLUME IV. DATA ANALYSIS UTILITY PROGRAMS

    EPA Science Inventory

    This volume of the report contains a description of the data analysis subroutines developed to support the modeling effort described in Volume III. The subroutines were used to evaluate and condition data used in the conjunctive use model. The subroutines include (1) regression a...

  9. Analysis of a segmented q-plate tunable retarder for the generation of first-order vector beams.

    PubMed

    Davis, Jeffrey A; Hashimoto, Nobuyuki; Kurihara, Makoto; Hurtado, Enrique; Pierce, Melanie; Sánchez-López, María M; Badham, Katherine; Moreno, Ignacio

    2015-11-10

    In this work we study a prototype q-plate segmented tunable liquid crystal retarder device. It shows a large modulation range (5π rad for a wavelength of 633 nm and near 2π for 1550 nm) and a large clear aperture of one inch diameter. We analyze the operation of the q-plate in terms of Jones matrices and provide different matrix decompositions useful for its analysis, including the polarization transformations, the effect of the tunable phase shift, and the effect of quantization levels (the device is segmented in 12 angular sectors). We also show a very simple and robust optical system capable of generating all polarization states on the first-order Poincaré sphere. An optical polarization rotator and a linear retarder are used in a geometry that allows the generation of all states in the zero-order Poincaré sphere simply by tuning two retardance parameters. We then use this system with the q-plate device to directly map an input arbitrary state of polarization to a corresponding first-order vectorial beam. This optical system would be more practical for high speed and programmable generation of vector beams than other systems reported so far. Experimental results are presented. PMID:26560790

  10. Practical considerations for the segmented-flow analysis of nitrate and ammonium in seawater and the avoidance of matrix effects

    NASA Astrophysics Data System (ADS)

    Rho, Tae Keun; Coverly, Stephen; Kim, Eun-Soo; Kang, Dong-Jin; Kahng, Sung-Hyun; Na, Tae-Hee; Cho, Sung-Rok; Lee, Jung-Moo; Moon, Cho-Rong

    2015-12-01

    In this study we describe measures taken in our laboratory to improve the long-term precision of nitrate and ammonia analysis in seawater using a microflow segmented-flow analyzer. To improve the nitrate reduction efficiency using a flow-through open tube cadmium reactor (OTCR), we compared alternative buffer formulations and regeneration procedures for an OTCR. We improved long-term stability for nitrate with a modified flow scheme and color reagent formulation and for ammonia by isolating samples from the ambient air and purifying the air used for bubble segmentation. We demonstrate the importance of taking into consideration the residual nutrient content of the artificial seawater used for the preparation of calibration standards. We describe how an operating procedure to eliminate errors from that source as well as from the refractive index of the matrix itself can be modified to include the minimization of dynamic refractive index effects resulting from differences between the matrix of the samples, the calibrants, and the wash solution. We compare the data for long-term measurements of certified reference material under two different conditions, using ultrapure water (UPW) and artificial seawater (ASW) for the sampler wash.

  11. Analysis in ultrasmall volumes: microdispensing of picoliter droplets and analysis without protection from evaporation.

    PubMed

    Neugebauer, Sebastian; Evans, Stephanie R; Aguilar, Zoraida P; Mosbach, Marcus; Fritsch, Ingrid; Schuhmann, Wolfgang

    2004-01-15

    A new approach is reported for analysis of ultrasmall volumes. It takes advantage of the versatile positioning of a dispenser to shoot approximately 150-pL droplets of liquid onto a specific location of a substrate where analysis is performed rapidly, in a fraction of the time that it takes for the droplet to evaporate. In this report, the site where the liquid is dispensed carries out fast-scan cyclic voltammetry (FSCV), although the detection method does not need to be restricted to electrochemistry. The FSCV is performed at a microcavity having individually addressable gold electrodes, where one serves as working electrode and another as counter/pseudoreference electrode. Five or six droplets of 10 mM [Ru(NH3)6]Cl3 in 0.1 M KCl were dispensed and allowed to dry, followed by redissolution of the redox species and electrolyte with one or five droplets of water and immediate FSCV, demonstrating the ability to easily concentrate a sample and the reproducibility of redissolution, respectively. Because this approach does not integrate detection with microfluidics on the same chip, it simplifies fabrication of devices for analysis of ultrasmall volumes. It may be useful for single-step and multistep sample preparation, analyses, and bioassays in microarray formats if dispensing and changing of solutions are automated. However, care must be taken to avoid factors that affect the aim of the dispenser, such as drafts and clogging of the nozzle. PMID:14719897

  12. Who Will More Likely Buy PHEV: A Detailed Market Segmentation Analysis

    SciTech Connect

    Lin, Zhenhong; Greene, David L

    2010-01-01

    Understanding the diverse PHEV purchase behaviors of prospective new car buyers is key to designing efficient and effective policies for promoting new energy vehicle technologies. The ORNL MA3T model developed for the U.S. Department of Energy is described and used to project PHEV purchase probabilities for different consumers. MA3T disaggregates the U.S. household vehicle market into 1458 consumer segments based on region, residential area, driver type, technology attitude, home charging availability and work charging availability, and is calibrated to the EIA's Annual Energy Outlook. Simulation results from MA3T are used to identify the more likely PHEV buyers and provide explanations. It is observed that consumers who have home charging, drive frequently and live in urban areas are more likely to buy a PHEV. Early adopters are projected to be the more likely PHEV buyers in the early market, but the PHEV purchase probability for late-majority consumers can increase over time as the PHEV gradually becomes a familiar product.
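    Segment-level purchase probabilities of the kind MA3T produces can be illustrated with a multinomial logit over vehicle alternatives, aggregated across weighted consumer segments. The utilities and segment weights below are hypothetical placeholders, not MA3T's calibrated values:

```python
import numpy as np

def choice_probabilities(utilities):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j)."""
    u = np.asarray(utilities, float)
    e = np.exp(u - u.max())      # shift by max for numerical stability
    return e / e.sum()

def phev_share(segments):
    """Aggregate PHEV share: weight each consumer segment's PHEV choice
    probability (alternative 0) by the segment's population weight."""
    return sum(w * choice_probabilities(u)[0] for w, u in segments)

# Hypothetical segments: (weight, [V_PHEV, V_HEV, V_conventional]).
# Home charging and frequent driving raise the PHEV utility term.
segments = [(0.3, [1.0, 0.5, 0.0]),     # urban, home charging available
            (0.7, [-0.5, 0.5, 1.0])]    # no home charging
```

    In the real model each segment's utilities would be built from vehicle prices, fuel costs, driving patterns and charging availability.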

  13. Extensive serum biomarker analysis in patients with ST segment elevation myocardial infarction (STEMI).

    PubMed

    Zhang, Yi; Lin, Peiyi; Jiang, Huilin; Xu, Jieling; Luo, Shuhong; Mo, Junrong; Li, Yunmei; Chen, Xiaohui

    2015-12-01

    ST segment elevation myocardial infarction (STEMI) is one of the leading causes of morbidity and mortality, and some characteristics of STEMI are poorly understood. The aim of the present study was to detect protein expression profiles in the serum of STEMI patients and to identify biomarkers for this disease. Cytokine profiles of serum from STEMI patients and healthy controls were analyzed with a semi-quantitative human antibody array for 174 proteins, and the results showed that serum concentrations of 21 cytokines differed considerably between STEMI patients and healthy subjects. In the next phase, eight of the 21 biomarkers identified in the microarray experiments were individually validated with sandwich ELISA kits. Clinical validation demonstrated a significant increase of BDNF, PDGF-AA and MMP-9 in patients with AMI. Moreover, BDNF, PDGF-AA and MMP-9 distinguished AMI patients from healthy controls with mean areas under the receiver operating characteristic (ROC) curve of 0.870, 0.885, and 0.810, respectively, with diagnostic cut-off points of 0.688 ng/mL, 297.86 ng/mL and 690.066 ng/mL. Our study indicated that these three cytokines were up-regulated in STEMI samples and may hold promise for the assessment of STEMI. PMID:26153394
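    AUC values and diagnostic cut-off points of this kind can be computed from raw scores with a rank-based AUC and a Youden-index cut-off. A generic sketch (not the study's statistical software):

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney statistic: the probability that a random
    patient scores higher than a random control, counting ties as half."""
    pos = np.asarray(scores_pos, float)
    neg = np.asarray(scores_neg, float)
    gt = (pos[:, None] > neg[None, :]).sum()
    eq = (pos[:, None] == neg[None, :]).sum()
    return (gt + 0.5 * eq) / (pos.size * neg.size)

def youden_cutoff(scores_pos, scores_neg):
    """Cut-off maximising Youden's J = sensitivity + specificity - 1."""
    pos = np.asarray(scores_pos, float)
    neg = np.asarray(scores_neg, float)
    best_t, best_j = None, -np.inf
    for t in np.unique(np.concatenate([pos, neg])):
        j = (pos >= t).mean() + (neg < t).mean() - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t
```

    The cut-off scan checks each observed score as a candidate threshold, which is sufficient because J only changes at observed values.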

  14. Photogrammetric Digital Outcrop Model analysis of a segment of the Centovalli Line (Trontano, Italy)

    NASA Astrophysics Data System (ADS)

    Consonni, Davide; Pontoglio, Emanuele; Bistacchi, Andrea; Tunesi, Annalisa

    2015-04-01

    The Centovalli Line is a complex network of brittle faults developing between Domodossola (west) and Locarno (east), where it merges with the Canavese Line (the western segment of the Periadriatic Lineament). The Centovalli Line roughly follows the Southern Steep Belt, which characterizes the inner or "root" zone of the Penninic and Austroalpine units and underwent several deformation phases under variable P-T conditions throughout the Alpine orogenic history. The last deformation phases in this area developed under brittle conditions, resulting in an array of dextral-reverse subvertical faults with a general E-W trend that partly reactivates and partly crosscuts the metamorphic foliations and lithological boundaries. Here we report on a quantitative digital outcrop model (DOM) study aimed at quantifying the fault zone architecture in a particularly well exposed outcrop near Trontano, at the western edge of the Centovalli Line. The DOM was reconstructed with photogrammetry and allowed us to perform a complete characterization of the damage zones and multiple fault cores on both point clouds and textured surface models. Fault cores have been characterized in terms of attitude, thickness, and internal distribution of fault rocks (gouge-bearing), including possibly seismogenic localized slip surfaces. In the damage zones, the fracture network has been characterized in terms of fracture intensity (both P10 and P21, on virtual scanlines and scan-areas), fracture attitude, fracture connectivity, etc.
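    The fracture-intensity measures are simple ratios: P10 is the fracture count per unit scanline length, and P21 is the total fracture trace length per unit scan-area. A sketch with hypothetical trace endpoints:

```python
import math

def p10(n_intersections, scanline_length):
    """P10: fracture intersections per unit length of a virtual scanline (1/m)."""
    return n_intersections / scanline_length

def p21(total_trace_length, scan_area):
    """P21: total fracture trace length per unit scan-area (m/m^2)."""
    return total_trace_length / scan_area

def trace_length(traces):
    """Sum of 2D fracture trace lengths from endpoint pairs."""
    return sum(math.dist(a, b) for a, b in traces)

# Hypothetical traces digitised on a scan-area of 20 m^2
traces = [((0, 0), (3, 4)), ((1, 1), (1, 5))]   # lengths 5 m and 4 m
area = 20.0
```

    On a DOM, the endpoints would come from traces digitised on the point cloud or textured surface rather than hand-entered coordinates.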

  15. Comparative analysis of the distribution of segmented filamentous bacteria in humans, mice and chickens.

    PubMed

    Yin, Yeshi; Wang, Yu; Zhu, Liying; Liu, Wei; Liao, Ningbo; Jiang, Mizu; Zhu, Baoli; Yu, Hongwei D; Xiang, Charlie; Wang, Xin

    2013-03-01

    Segmented filamentous bacteria (SFB) are indigenous gut commensal bacteria. They are commonly detected in the gastrointestinal tracts of both vertebrates and invertebrates. Despite the significant role they have in the modulation of the development of host immune systems, little information exists regarding the presence of SFB in humans. The aim of this study was to investigate the distribution and diversity of SFB in humans and to determine their phylogenetic relationships with their hosts. Gut contents from 251 humans, 92 mice and 72 chickens were collected for bacterial genomic DNA extraction and subjected to SFB 16S rRNA-specific PCR detection. The results showed SFB colonization to be age-dependent in humans, with the majority of individuals colonized within the first 2 years of life, but this colonization disappeared by the age of 3 years. Results of 16S rRNA sequencing showed that multiple operational taxonomic units of SFB could exist in the same individuals. Cross-species comparison among human, mouse and chicken samples demonstrated that each host possessed an exclusive predominant SFB sequence. In summary, our results showed that SFB display host specificity, and SFB colonization, which occurs early in human life, declines in an age-dependent manner. PMID:23151642

  16. Segmentation and Tracking of Adherens Junctions in 3D for the Analysis of Epithelial Tissue Morphogenesis

    PubMed Central

    Cilla, Rodrigo; Mechery, Vinodh; Hernandez de Madrid, Beatriz; Del Signore, Steven; Dotu, Ivan; Hatini, Victor

    2015-01-01

    Epithelial morphogenesis generates the shape of tissues, organs and embryos and is fundamental for their proper function. It is a dynamic process that occurs at multiple spatial scales from macromolecular dynamics, to cell deformations, mitosis and apoptosis, to coordinated cell rearrangements that lead to global changes of tissue shape. Using time lapse imaging, it is possible to observe these events at a system level. However, to investigate morphogenetic events it is necessary to develop computational tools to extract quantitative information from the time lapse data. Toward this goal, we developed an image-based computational pipeline to preprocess, segment and track epithelial cells in 4D confocal microscopy data. The computational pipeline we developed, for the first time, detects the adherens junctions of epithelial cells in 3D, without the need to first detect cell nuclei. We accentuate and detect cell outlines in a series of steps, symbolically describe the cells and their connectivity, and employ this information to track the cells. We validated the performance of the pipeline for its ability to detect vertices and cell-cell contacts, track cells, and identify mitosis and apoptosis in surface epithelia of Drosophila imaginal discs. We demonstrate the utility of the pipeline to extract key quantitative features of cell behavior with which to elucidate the dynamics and biomechanical control of epithelial tissue morphogenesis. We have made our methods and data available as an open-source multiplatform software tool called TTT (http://github.com/morganrcu/TTT) PMID:25884654

  18. Analysis of the Vancouver lung nodule malignancy model with respect to manual and automated segmentation

    NASA Astrophysics Data System (ADS)

    Wiemker, Rafael; Boroczky, Lilla; Bergtholdt, Martin; Klinder, Tobias

    2015-03-01

    The recently published Vancouver model for lung nodule malignancy prediction holds great promise as a practically feasible tool to mitigate the clinical decision problem of how to act on a lung nodule detected at baseline screening. It provides a formula to compute a probability of malignancy from only nine clinical and radiologic features. The feature values are provided by user interaction, but in principle they could also be automatically pre-filled by appropriate image processing algorithms and RIS requests. Nodule diameter is a feature with crucial influence on the predicted malignancy, and it carries uncertainty caused by inter-reader variability. The purpose of this paper is to analyze how strongly the malignancy prediction of a lung nodule found with CT screening is affected by inter-reader variation in the nodule diameter estimate. To this end, we estimated the magnitude of the malignancy variability by applying the Vancouver malignancy model to the LIDC-IDRI database, which contains independent delineations from several readers. We show that fully automatic nodule segmentation can significantly lower the variability of the estimated malignancy while demonstrating excellent agreement with the expert readers.
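    The sensitivity of a logistic malignancy model to diameter can be illustrated directly: perturb the diameter by a plausible inter-reader spread and watch the predicted probability move. The coefficients below are hypothetical placeholders, not the published Vancouver model values:

```python
import math

def malignancy_probability(diameter_mm, beta_diameter=0.15, intercept=-4.0):
    """Logistic malignancy model with HYPOTHETICAL coefficients, used only
    to show how diameter variability propagates into the prediction."""
    logit = intercept + beta_diameter * diameter_mm
    return 1.0 / (1.0 + math.exp(-logit))

# Inter-reader spread of +/- 2 mm around a 12 mm nodule:
lo, mid, hi = (malignancy_probability(d) for d in (10, 12, 14))
```

    Because the logistic curve is steepest near its midpoint, the same diameter spread can produce very different probability spreads depending on where the nodule sits on the curve.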

  19. Distinction and quantification of carry-over and sample interaction in gas segmented continuous flow analysis

    PubMed Central

    Zhang, Jia-Zhong

    1997-01-01

    The formulae for calculation of carry-over and sample interaction are derived for the first time in this study. A scheme proposed by Thiers et al. (two samples of low concentration followed by a high concentration sample and a low concentration sample) is verified and recommended for the determination of the carry-over coefficient. The derivation demonstrates that both widely used schemes, a high concentration sample followed by two low concentration samples, and a low concentration sample followed by two high concentration samples, actually measure the sum of the carry-over coefficient and the sample interaction coefficient. A scheme of three low concentration samples followed by a high concentration sample is proposed and verified for determination of the sample interaction coefficient. Experimental results indicate that carry-over is a strong function of cycle time and a weak function of the ratio of sample time to wash time. Sample dispersion is found to be a function of sample time. Fitted equations can be used to predict the carry-over, absorbance and dispersion given the sample and wash times for an analytical system. Results clearly show the important role of intersample air segmentation in reducing carry-over, sample interaction and dispersion. PMID:18924810
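    In the recommended (low, low, high, low) scheme, the carry-over coefficient is commonly computed as the excess measured in the low sample that follows the high sample, normalized by the high-low span. A sketch of that common formulation (the paper's exact derivation is not reproduced here):

```python
def carryover_coefficient(low_before, high, low_after):
    """Carry-over coefficient k for the sequence (low, low, high, low):
    the excess seen in the low sample after the high one, relative to
    the high-low difference. One common formulation, for illustration."""
    return (low_after - low_before) / (high - low_before)
```

    With absorbances of 0.10 for the settled low sample, 1.10 for the high sample and 0.12 for the trailing low sample, k is 2%.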

  1. Corrections for volume hydrogen content in coal analysis by prompt gamma neutron activation analysis

    NASA Astrophysics Data System (ADS)

    Salgado, J.; Oliveira, C.

    1992-05-01

    Prompt gamma neutron activation analysis, PGNAA, is a useful technique for determining the elemental composition of bulk samples in on-line measurements. Monte Carlo simulation studies performed on bulk coals of different compositions for a given sample size and geometry have shown that both the gamma count rate for hydrogen and the gamma count rate per percent by weight for an arbitrary element due to (n, γ) reactions depend on the volume hydrogen content, being independent of coal composition. Experimental results using a 252Cf neutron source surrounded by a lead cylinder were obtained for nine different coal types. These show that the γ-peak originating from (n, n′γ) reactions in the lead shield depends on the sample density. Assuming that the source intensity is constant, this result enables the measurement of the coal bulk density. Taking these results into account, the present paper shows how the γ-peak intensities can be corrected for volume hydrogen content in order to obtain the percent-by-weight contents of the coal. The density is necessary to convert the volume hydrogen content into percent by weight and to calculate the bulk sample weight.
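    The final conversion step, turning a volume hydrogen content into percent by weight via the measured bulk density, is a one-line ratio:

```python
def hydrogen_weight_percent(h_mass_per_cm3, bulk_density_g_per_cm3):
    """Convert a volume hydrogen content (g of H per cm^3 of coal) into
    percent by weight using the measured bulk density (g/cm^3)."""
    return 100.0 * h_mass_per_cm3 / bulk_density_g_per_cm3
```

    For example, 0.05 g of hydrogen per cm^3 in coal of bulk density 1.25 g/cm^3 corresponds to 4 wt% hydrogen.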

  2. A registration-based segmentation method with application to adiposity analysis of mice microCT images

    NASA Astrophysics Data System (ADS)

    Bai, Bing; Joshi, Anand; Brandhorst, Sebastian; Longo, Valter D.; Conti, Peter S.; Leahy, Richard M.

    2014-04-01

    Obesity is a global health problem, particularly in the U.S., where one third of adults are obese. A reliable and accurate method of quantifying obesity is necessary. Visceral adipose tissue (VAT) and subcutaneous adipose tissue (SAT) are two measures of obesity that reflect different associated health risks, but accurate measurements in humans or rodent models are difficult. In this paper we present an automatic, registration-based segmentation method for mouse adiposity studies using microCT images. We co-register the subject CT image and a mouse CT atlas using surface matching, and surface-based elastic volume warping is then used to match the internal anatomy. We acquired a whole body scan of a C57BL6/J mouse injected with contrast agent using microCT and created a whole body mouse atlas by manually delineating the boundaries of the mouse and major organs. For method verification we scanned a C57BL6/J mouse from the base of the skull to the distal tibia and registered the obtained CT image to our atlas. Preliminary results show that we can warp the atlas image to match the posture and shape of the subject CT image, which has significant differences from the atlas. We plan to use this software tool in longitudinal obesity studies using mouse models.

  3. Risk factors for neovascular glaucoma after carbon ion radiotherapy of choroidal melanoma using dose-volume histogram analysis

    SciTech Connect

    Hirasawa, Naoki . E-mail: naoki_h@nirs.go.jp; Tsuji, Hiroshi; Ishikawa, Hitoshi; Koyama-Ito, Hiroko; Kamada, Tadashi; Mizoe, Jun-Etsu; Ito, Yoshiyuki; Naganawa, Shinji; Ohnishi, Yoshitaka; Tsujii, Hirohiko

    2007-02-01

    Purpose: To determine the risk factors for neovascular glaucoma (NVG) after carbon ion radiotherapy (C-ion RT) of choroidal melanoma. Methods and Materials: A total of 55 patients with choroidal melanoma were treated between 2001 and 2005 with C-ion RT based on computed tomography treatment planning. All patients had a tumor of large size or one located close to the optic disk. Univariate and multivariate analyses were performed to identify the risk factors of NVG for the following parameters: gender, age, dose-volumes of the iris-ciliary body and the wall of the eyeball, and irradiation of the optic disk (ODI). Results: Neovascular glaucoma occurred in 23 patients and the 3-year cumulative NVG rate was 42.6 ± 6.8% (standard error), but enucleation from NVG was performed in only three eyes. Multivariate analysis revealed that the significant risk factors for NVG were V50_IC (volume of the iris-ciliary body irradiated to ≥50 GyE) (p = 0.002) and ODI (p = 0.036). The 3-year NVG rates for patients with V50_IC ≥0.127 mL and those with V50_IC <0.127 mL were 71.4 ± 8.5% and 11.5 ± 6.3%, respectively. The corresponding rates for the patients with and without ODI were 62.9 ± 10.4% and 28.4 ± 8.0%, respectively. Conclusion: Dose-volume histogram analysis with computed tomography indicated that V50_IC and ODI were independent risk factors for NVG. An irradiation system that can reduce the dose to both the anterior segment and the optic disk might be worth adopting to investigate whether or not the incidence of NVG can be decreased.
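    A dose-volume metric such as V50 is obtained from the planning dose grid by counting structure voxels at or above the threshold and multiplying by the voxel volume. A minimal sketch (voxel volume assumed known from the planning grid):

```python
import numpy as np

def v_dose(dose_values, threshold, voxel_volume_ml):
    """Absolute dose-volume metric (mL): volume receiving >= threshold.
    `dose_values` is the dose grid restricted to the structure's voxels."""
    dose = np.asarray(dose_values, float)
    return float((dose >= threshold).sum()) * voxel_volume_ml
```

    Comparing the returned volume against the abstract's 0.127 mL cut-point would assign an eye to the high- or low-risk group.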

  4. Segmentation of liver and liver tumor for the Liver-Workbench

    NASA Astrophysics Data System (ADS)

    Zhou, Jiayin; Ding, Feng; Xiong, Wei; Huang, Weimin; Tian, Qi; Wang, Zhimin; Venkatesh, Sudhakar K.; Leow, Wee Kheng

    2011-03-01

    Robust and efficient segmentation tools are important for the quantification of 3D liver and liver tumor volumes, which can greatly help clinicians in clinical decision-making and treatment planning. A two-module image analysis procedure integrating two novel semi-automatic algorithms has been developed to segment 3D liver and liver tumors from multi-detector computed tomography (MDCT) images. The first module segments the liver volume using a flipping-free mesh deformation model. In each iteration, before mesh deformation, the algorithm detects and avoids possible flippings that would cause self-intersection of the mesh and thus undesired segmentation results. After flipping avoidance, Laplacian mesh deformation is performed with various constraints on geometry and shape smoothness. In the second module, the segmented liver volume is used as the ROI and liver tumors are segmented by support vector machine (SVM)-based voxel classification and propagational learning. First, an SVM classifier is trained to extract the tumor region from a single 2D slice in the intermediate part of a tumor by voxel classification. The extracted tumor contour, after some morphological operations, is then projected to its neighboring slices for automated sampling, learning and further voxel classification. This propagation continues until all tumor-containing slices have been processed. The performance of the whole procedure was tested using 20 MDCT data sets and the results were promising: nineteen liver volumes were successfully segmented, with mean relative absolute volume difference (RAVD), volume overlap error (VOE) and average symmetric surface distance (ASSD) relative to the reference segmentation of 7.1%, 12.3% and 2.5 mm, respectively. For liver tumor segmentation, the median RAVD, VOE and ASSD were 7.3%, 18.4% and 1.7 mm, respectively.

  5. Virtual Mastoidectomy Performance Evaluation through Multi-Volume Analysis

    PubMed Central

    Kerwin, Thomas; Stredney, Don; Wiet, Gregory; Shen, Han-Wei

    2012-01-01

    Purpose Development of a visualization system that provides surgical instructors with a method to compare the results of many virtual surgeries (n > 100). Methods A masked distance field models the overlap between expert and resident results. Multiple volume displays are used side-by-side with a 2D point display. Results Performance characteristics were examined by comparing the results of specific residents with those of experts and the entire class. Conclusions The software provides a promising approach for comparing performance between large groups of residents learning mastoidectomy techniques. PMID:22528058
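    The masked distance field that models overlap between expert and resident results can be sketched with a Euclidean distance transform: every voxel the resident removed is scored by its distance to the expert's removed region. This is a simplified stand-in for the paper's multi-volume display pipeline:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def surface_deviation(expert_removed, resident_removed):
    """Score each voxel the resident removed by its Euclidean distance
    to the expert's removed region (0 for voxels inside that region)."""
    expert = np.asarray(expert_removed, bool)
    resident = np.asarray(resident_removed, bool)
    # EDT of the complement gives, per voxel, the distance to the expert region
    dist_to_expert = distance_transform_edt(~expert)
    return dist_to_expert[resident]
```

    Summary statistics of the returned distances (mean, max, histogram) are what a side-by-side comparison across many residents would aggregate.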

  6. Synfuel program analysis. Volume 2: VENVAL users manual

    NASA Astrophysics Data System (ADS)

    Muddiman, J. B.; Whelan, J. W.

    1980-07-01

    This volume is intended for program analysts and serves as a users manual for the VENVAL model. It contains specific explanations of input data requirements and programming procedures for the use of this model. VENVAL is a generalized computer program to aid in the evaluation of prospective private-sector production ventures. The program can project interrelated values of installed capacity, production, sales revenue, operating costs, depreciation, investment, debt, earnings, taxes, return on investment, depletion, and cash flow measures. It can also compute related public-sector and other external costs and revenues if unit costs are furnished.

  7. Multispectral microscopy and cell segmentation for analysis of thyroid fine needle aspiration cytology smears.

    PubMed

    Wu, Xuqing; Thigpen, James; Shah, Shishir K

    2009-01-01

    This paper discusses the needs for automated tools to aid in the diagnosis of thyroid nodules based on analysis of fine needle aspiration cytology smears. While conventional practices rely on the analysis of grey scale or RGB color images, we present a multispectral microscopy system that uses thirty-one spectral bands for analysis. Discussed are methods and results for system calibration and cell delineation. PMID:19964406

  8. Computer Assisted Data Analysis in the Dye Dilution Technique for Plasma Volume Measurement.

    ERIC Educational Resources Information Center

    Bishop, Marvin; Robinson, Gerald D.

    1981-01-01

    Describes a method for undergraduate physiology students to measure plasma volume by the dye dilution technique, in which a computer is used to interpret data. Includes the computer program for the data analysis. (CS)

  9. Bayesian segmentation of MR images using 3D Gibbsian priors

    NASA Astrophysics Data System (ADS)

    Chang, Michael M.; Tekalp, A. Murat; Sezan, M. Ibrahim

    1993-04-01

    A Bayesian approach for segmentation of three-dimensional (3-D) magnetic resonance imaging (MRI) data of the human brain is presented. Connectivity and smoothness constraints are imposed on the segmentation in 3 dimensions. The resulting segmentation is suitable for 3-D display and for volumetric analysis of structures. The algorithm is based on the maximum a posteriori probability (MAP) criterion, where a 3-D Gibbs random field (GRF) is used to model the a priori probability distribution of the segmentation. The proposed method can be applied to a spatial sequence of 2-D images (cross-sections through a volume), as well as 3-D sampled data. We discuss the optimization methods for obtaining the MAP estimate. Experimental results obtained using clinical data are included.
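
A MAP estimate with a Gibbsian smoothness prior can be sketched with iterated conditional modes (ICM) on a toy 2D image — an illustrative 2-class stand-in for the paper's 3-D GRF optimization; the `beta` weight, the squared-error data term, and the threshold initialization are all assumptions:

```python
def icm_segment(obs, beta=0.3, iters=5):
    """MAP-style two-class segmentation via iterated conditional modes:
    a squared-error data term plus an Ising (Gibbs) smoothness prior."""
    h, w = len(obs), len(obs[0])
    # Initialize by thresholding the observations.
    labels = [[1 if obs[y][x] > 0.5 else 0 for x in range(w)] for y in range(h)]
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                best_lab, best_e = labels[y][x], float("inf")
                for lab in (0, 1):
                    e = (obs[y][x] - lab) ** 2  # data fidelity
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            e += beta * (lab != labels[ny][nx])  # smoothness prior
                    if e < best_e:
                        best_lab, best_e = lab, e
                labels[y][x] = best_lab
    return labels

# A noisy observation of a bright right half; the prior corrects the noisy 0.3 pixel.
obs = [[0.1, 0.2, 0.9, 0.8],
       [0.1, 0.2, 0.3, 0.9],
       [0.1, 0.1, 0.8, 0.7]]
seg = icm_segment(obs)
print(seg)  # [[0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 1, 1]]
```

ICM is a deterministic local optimizer of the same posterior energy; the paper's optimization method may differ.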

  10. The effect of lead selection on traditional and heart rate-adjusted ST segment analysis in the detection of coronary artery disease during exercise testing.

    PubMed

    Viik, J; Lehtinen, R; Turjanmaa, V; Niemelä, K; Malmivuo, J

    1997-09-01

    Several methods of heart rate-adjusted ST segment (ST/HR) analysis have been suggested to improve the diagnostic accuracy of exercise electrocardiography in the identification of coronary artery disease compared with traditional ST segment analysis. However, no comprehensive comparison of these methods on a lead-by-lead basis in all 12 electrocardiographic leads has been reported. This article compares the diagnostic performances of ST/HR hysteresis, ST/HR index, ST segment depression 3 minutes after recovery from exercise, and ST segment depression at peak exercise in a study population of 128 patients with angiographically proven coronary artery disease and 189 patients with a low likelihood of the disease. The methods were determined in each lead of the Mason-Likar modification of the standard 12-lead exercise electrocardiogram for each patient. ST/HR hysteresis, ST/HR index, ST segment depression 3 minutes after recovery from exercise, and ST segment depression at peak exercise achieved more than 85% area under the receiver-operating characteristic curve in nine, none, three, and one of the 12 standard leads, respectively. The diagnostic performance of ST/HR hysteresis was significantly superior in each lead, with the exception of leads aVL and V1. Examination of individual leads for each study method revealed the high diagnostic performance of leads I and -aVR, indicating that the importance of these leads has been undervalued. In conclusion, the results indicate that when traditional ST segment analysis is used for the detection of coronary artery disease, more attention should be paid to the leads chosen for analysis, and lead-specific cut points should be applied. On the other hand, ST/HR hysteresis, which integrates the ST/HR depression of the exercise and recovery phases, seems to be relatively insensitive to lead selection and significantly increases the diagnostic performance of exercise electrocardiography in the detection of coronary artery disease. PMID:9327707
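
The ST/HR index referred to above is conventionally the least-squares slope of ST depression against heart rate during exercise; a minimal sketch with illustrative values (not data from the study):

```python
def st_hr_index(hr, st_dep):
    """Least-squares slope of ST depression (e.g., in microvolts)
    versus heart rate (beats/min) over the exercise stages."""
    n = len(hr)
    mh = sum(hr) / n
    ms = sum(st_dep) / n
    num = sum((h - mh) * (s - ms) for h, s in zip(hr, st_dep))
    den = sum((h - mh) ** 2 for h in hr)
    return num / den

# Toy exercise-stage data (illustrative values only).
hr = [70, 90, 110, 130, 150]       # heart rate, bpm
st = [0, 20, 40, 60, 80]           # ST depression, microvolts
print(st_hr_index(hr, st))  # 1.0 (microvolts per bpm)
```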

  11. Corpus Callosum Area and Brain Volume in Autism Spectrum Disorder: Quantitative Analysis of Structural MRI from the ABIDE Database

    ERIC Educational Resources Information Center

    Kucharsky Hiess, R.; Alter, R.; Sojoudi, S.; Ardekani, B. A.; Kuzniecky, R.; Pardoe, H. R.

    2015-01-01

    Reduced corpus callosum area and increased brain volume are two commonly reported findings in autism spectrum disorder (ASD). We investigated these two correlates in ASD and healthy controls using T1-weighted MRI scans from the Autism Brain Imaging Data Exchange (ABIDE). Automated methods were used to segment the corpus callosum and intracranial…

  13. Estimating temperature-dependent anisotropic hydrogen displacements with the invariom database and a new segmented rigid-body analysis program

    PubMed Central

    Lübben, Jens; Bourhis, Luc J.; Dittrich, Birger

    2015-01-01

    Invariom partitioning and notation are used to estimate anisotropic hydrogen displacements for incorporation in crystallographic refinement models. Optimized structures of the generalized invariom database and their frequency computations provide the information required: frequencies are converted to internal atomic displacements and combined with the results of a TLS (translation–libration–screw) fit of experimental non-hydrogen anisotropic displacement parameters to estimate those of H atoms. Comparison with TLS+ONIOM and neutron diffraction results for four example structures where high-resolution X-ray and neutron data are available show that electron density transferability rules established in the invariom approach are also suitable for streamlining the transfer of atomic vibrations. A new segmented-body TLS analysis program called APD-Toolkit has been coded to overcome technical limitations of the established program THMA. The influence of incorporating hydrogen anisotropic displacement parameters on conventional refinement is assessed. PMID:26664341

  14. Linkage disequilibrium analysis by searching for shared segments: Mapping a locus for benign recurrent intrahepatic cholestasis (BRIC)

    SciTech Connect

    Freimer, N.; Baharloo, S.; Blankenship, K.

    1994-09-01

    The lod score method of linkage analysis has two important drawbacks: parameters must be specified for the transmission of the disease (e.g., penetrance), and large numbers of genetically informative individuals must be studied. Although several robust non-parametric methods are available, these also require large sample sizes. The availability of dense genetic maps permits genome screening to be conducted by linkage disequilibrium (LD) mapping methods, which are statistically powerful and non-parametric. Lander & Botstein proposed that LD mapping could be employed to screen the human genome for disease loci; we have now applied this strategy to map a gene for an autosomal recessive disorder, benign recurrent intrahepatic cholestasis (BRIC). Our approach to LD mapping was based on identifying chromosome segments shared between distantly related patients; we used 256 microsatellite markers to genotype three affected individuals, and their parents, from an isolated town in The Netherlands. Because endogamy occurred in this population for several generations, all of the BRIC patients are known to be distantly related to each other, but the pedigree structure and connections could not be established with certainty more than three generations before the present, so lod score analysis was impossible. A 20 cM region on chromosome 18 is shared by 5/6 patient chromosomes; subsequently, we noted that 6/6 chromosomes shared an interval of about 3 cM in this region. Calculations indicate that it is extremely unlikely that such a region could be inherited by chance rather than by descent from a common ancestor. Thus, LD mapping by searching for shared chromosomal segments is an extremely powerful approach for genome screening to identify disease loci.

  15. Validation tools for image segmentation

    NASA Astrophysics Data System (ADS)

    Padfield, Dirk; Ross, James

    2009-02-01

    A large variety of image analysis tasks require the segmentation of various regions in an image. For example, segmentation is required to generate accurate models of brain pathology that are important components of modern diagnosis and therapy. While the manual delineation of such structures gives accurate information, the automatic segmentation of regions such as the brain and tumors from such images greatly enhances the speed and repeatability of quantifying such structures. The ubiquitous need for such algorithms has led to a wide range of image segmentation algorithms with various assumptions, parameters, and degrees of robustness. The evaluation of such algorithms is an important step in determining their effectiveness. Therefore, rather than developing new segmentation algorithms, we here describe validation methods for segmentation algorithms. Using similarity metrics that compare the automatic to manual segmentations, we demonstrate methods for optimizing the parameter settings for individual cases and across a collection of datasets using the Design of Experiments framework. We then employ statistical analysis methods to compare the effectiveness of various algorithms. We investigate several region-growing algorithms from the Insight Toolkit and compare their accuracy to that of a separate statistical segmentation algorithm. The segmentation algorithms are used with their optimized parameters to automatically segment the brain and tumor regions in MRI images of 10 patients. The validation tools indicate that none of the ITK algorithms studied outperforms the statistical segmentation algorithm with statistical significance, although they perform reasonably well considering their simplicity.

  16. STS-1 operational flight profile. Volume 6: Abort analysis

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The abort analysis for the cycle 3 Operational Flight Profile (OFP) for the Space Transportation System 1 Flight (STS-1) is defined, superseding the abort analysis previously presented. Included are the flight description, an abort analysis summary, flight design ground rules and constraints, initialization information, a general abort description and results, abort solid rocket booster and external tank separation and disposal results, abort monitoring displays with discussion of both ground and onboard trajectory monitoring, an abort initialization load summary for the onboard computer, and a list of the key abort powered-flight dispersion analyses.

  17. Image-based segmentation for characterization and quantitative analysis of the spinal cord injuries by using diffusion patterns

    NASA Astrophysics Data System (ADS)

    Hannula, Markus; Olubamiji, Adeola; Kunttu, Iivari; Dastidar, Prasun; Soimakallio, Seppo; Öhman, Juha; Hyttinen, Jari

    2011-03-01

    In medical imaging, magnetic resonance imaging sequences can provide information on damaged brain structure and neuronal connections. The sequences can be analyzed to form 3D models of the geometry, and functional information about the neurons of the specific brain area can be incorporated to develop functional models. Modeling offers a tool for studying brain trauma from patient images and thus provides information to tailor the properties of the transplanted cells. In this paper, we present image-based methods for the analysis of human spinal cord injuries. We use three-dimensional diffusion tensor imaging, which is an effective method for analyzing the response of water molecules. In this way, we study how the injury affects the tissues and how this can be made visible in imaging. We present a spinal cord study of two subjects: one healthy volunteer and one spinal cord injury patient. We have performed segmentations and volumetric analysis for the detection of anatomical differences. The functional differences are analyzed using diffusion tensor imaging. The obtained results show that this kind of analysis is capable of finding differences in spinal cord anatomy and function.

  18. Economic analysis of the space shuttle system, volume 1

    NASA Technical Reports Server (NTRS)

    1972-01-01

    An economic analysis of the space shuttle system is presented. The analysis is based on economic benefits, recurring costs, non-recurring costs, and economic tradeoff functions. The most economic space shuttle configuration is determined on the basis of: (1) the objectives of a reusable space transportation system, (2) the various space transportation systems considered, and (3) alternative space shuttle systems.

  19. Space shuttle navigation analysis. Volume 2: Baseline system navigation

    NASA Technical Reports Server (NTRS)

    Jones, H. L.; Luders, G.; Matchett, G. A.; Rains, R. G.

    1980-01-01

    Studies related to the baseline navigation system for the orbiter are presented. The baseline navigation system studies include a covariance analysis of the Inertial Measurement Unit calibration and alignment procedures, postflight IMU error recovery for the approach and landing phases, on-orbit calibration of IMU instrument biases, and a covariance analysis of entry and prelaunch navigation system performance.

  20. Pressure vessels and piping design, analysis, and severe accidents. PVP-Volume 331

    SciTech Connect

    Dermenjian, A.A.

    1996-12-31

    The primary objective of the Design and Analysis Committee of the ASME Pressure Vessels and Piping Division is to provide a forum for the dissemination of information and the advancement of current theories and practices in the design and analysis of pressure vessels, piping systems, and components. This volume is divided into the following six sections: power plant piping and supports 1--3; applied dynamic response analysis; severe accident analysis; and student papers. Separate abstracts were prepared for 22 papers in this volume.

  1. Price-volume multifractal analysis and its application in Chinese stock markets

    NASA Astrophysics Data System (ADS)

    Yuan, Ying; Zhuang, Xin-tian; Liu, Zhi-ying

    2012-06-01

    An empirical study of Chinese stock markets is conducted using statistical tools. First, the multifractality of the stock price return series, r_t = ln(P_{t+1}) - ln(P_t), and of the trading volume variation series, v_t = ln(V_{t+1}) - ln(V_t), is confirmed using multifractal detrended fluctuation analysis. Furthermore, a multifractal detrended cross-correlation analysis between stock price return and trading volume variation in Chinese stock markets is also conducted; the cross relationship between them is found to be multifractal as well. Second, the cross-correlation between stock price P_t and trading volume V_t is studied empirically using the cross-correlation function and detrended cross-correlation analysis. Both the Shanghai and Shenzhen stock markets show pronounced long-range cross-correlations between stock price and trading volume. Third, a composite index R based on price and trading volume is introduced. Compared with the return series r_t and the volume variation series v_t, the R variation series not only retains the characteristics of the original series but also captures the relative correlation between stock price and trading volume. Finally, we analyze the multifractal characteristics of the R variation series before and after three financial events in China (namely, the Price Limits, the Reform of Non-tradable Shares, and the 2008 financial crisis) over the whole sample period to study changes in stock market fluctuation and financial risk. The empirical results verify the validity of R.

  2. Ventriculogram segmentation using boosted decision trees

    NASA Astrophysics Data System (ADS)

    McDonald, John A.; Sheehan, Florence H.

    2004-05-01

    Left ventricular status, reflected in ejection fraction or end-systolic volume, is a powerful prognostic indicator in heart disease. Quantitative analysis of these and other parameters from ventriculograms (cine x-rays of the left ventricle) is infrequently performed because of the labor required for manual segmentation. None of the many methods developed for automated segmentation has achieved clinical acceptance. We present a method for semi-automatic segmentation of ventriculograms based on a very accurate two-stage boosted decision-tree pixel classifier. The classifier determines which pixels are inside the ventricle at the key ED (end-diastole) and ES (end-systole) frames. The test misclassification rate is about 1%. The classifier is semi-automatic, requiring the user to select three points in each frame: the endpoints of the aortic valve and the apex. The first classifier stage consists of two boosted decision trees, trained using features such as gray-level statistics (e.g., median brightness) and image geometry (e.g., coordinates relative to the three user-supplied points). Second-stage classifiers are trained using the same features as the first, plus the output of the first stage. Border pixels are determined from the segmented images using dilation and erosion. A curve is then fit to the border pixels, minimizing a penalty function that trades off fidelity to the border pixels against smoothness. ED and ES volumes, and ejection fraction, are estimated from the border curves using standard area-length formulas. On independent test data, the differences between automatic and manual volumes (and ejection fractions) are similar in size to the differences between two human observers.
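
The area-length volume estimate and ejection fraction mentioned at the end reduce to simple arithmetic; a sketch using the single-plane area-length formula V = 8A²/(3πL) (the areas and lengths below are illustrative numbers, not study data):

```python
import math

def area_length_volume(area, length):
    """Single-plane area-length estimate: V = 8 * A^2 / (3 * pi * L).
    With area in cm^2 and length in cm, the result is in mL."""
    return 8.0 * area ** 2 / (3.0 * math.pi * length)

def ejection_fraction(edv, esv):
    """Fraction of the end-diastolic volume ejected per beat."""
    return (edv - esv) / edv

edv = area_length_volume(area=40.0, length=8.0)   # end-diastolic volume
esv = area_length_volume(area=25.0, length=7.0)   # end-systolic volume
ef = ejection_fraction(edv, esv)
print(round(edv, 1), round(esv, 1), round(ef, 2))  # 169.8 75.8 0.55
```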

  3. The ACODEA Framework: Developing Segmentation and Classification Schemes for Fully Automatic Analysis of Online Discussions

    ERIC Educational Resources Information Center

    Mu, Jin; Stegmann, Karsten; Mayfield, Elijah; Rose, Carolyn; Fischer, Frank

    2012-01-01

    Research related to online discussions frequently faces the problem of analyzing huge corpora. Natural Language Processing (NLP) technologies may allow automating this analysis. However, the state-of-the-art in machine learning and text mining approaches yields models that do not transfer well between corpora related to different topics. Also,…

  5. SLUDGE TREATMENT PROJECT ALTERNATIVES ANALYSIS SUMMARY REPORT [VOLUME 1

    SciTech Connect

    FREDERICKSON JR; ROURK RJ; HONEYMAN JO; JOHNSON ME; RAYMOND RE

    2009-01-19

    Highly radioactive sludge (containing up to 300,000 curies of actinides and fission products) resulting from the storage of degraded spent nuclear fuel is currently stored in temporary containers located in the 105-K West storage basin near the Columbia River. The background, history, and known characteristics of this sludge are discussed in Section 2 of this report. There are many compelling reasons to remove this sludge from the K-Basin. These reasons are discussed in detail in Section 1, and they include the following: (1) reduce the risk to the public (from a potential release of highly radioactive material as fine respirable particles by airborne or waterborne pathways); (2) reduce the overall risk to the Hanford worker; and (3) reduce the risk to the environment (the K-Basin is situated above a hazardous chemical contaminant plume and hinders remediation of the plume until the sludge is removed). The DOE-RL has stated that a key DOE objective is to remove the sludge from the K-West Basin and River Corridor as soon as possible, which will reduce risks to the environment, allow for remediation of contaminated areas underlying the basins, and support closure of the 100-KR-4 operable unit. The environmental and nuclear safety risks associated with this sludge have resulted in multiple legal and regulatory remedial action decisions, plans, and commitments that are summarized in Table ES-1 and discussed in more detail in Volume 2, Section 9.

  6. Ceramic component development analysis -- Volume 1. Final report

    SciTech Connect

    Boss, D.E.

    1998-06-09

    The development of advanced filtration media for advanced fossil-fueled power generating systems is a critical step in meeting the performance and emissions requirements for these systems. While porous metal and ceramic candle filters have been available for some time, the next generation of filters will include ceramic-matrix composites (CMCs) (Techniweave/Westinghouse, Babcock and Wilcox (B and W), DuPont Lanxide Composites), intermetallic alloys (Pall Corporation), and alternate filter geometries (CeraMem Separations). The goal of this effort was to perform a cursory review of the manufacturing processes used by five companies developing advanced filters, from the perspective of process repeatability and the ability of their processes to be scaled up to production volumes. Given the brief nature of the on-site reviews, only an overview of the processes and systems could be obtained. Each of the five companies had developed some level of manufacturing and quality assurance documentation, with most of the companies leveraging the procedures from other products they manufacture. It was found that all of the filter manufacturers had a solid understanding of the product development path. Given that these filters are largely developmental, significant additional work is necessary to understand the process-performance relationships and to project manufacturing costs.

  7. Viscous wing theory development. Volume 1: Analysis, method and results

    NASA Technical Reports Server (NTRS)

    Chow, R. R.; Melnik, R. E.; Marconi, F.; Steinhoff, J.

    1986-01-01

    Viscous transonic flows at large Reynolds numbers over 3-D wings were analyzed using a zonal viscid-inviscid interaction approach. A new numerical AFZ scheme was developed in conjunction with the finite volume formulation for the solution of the inviscid full-potential equation. A special far-field asymptotic boundary condition was developed and a second-order artificial viscosity included for an improved inviscid solution methodology. The integral method was used for the laminar/turbulent boundary layer and 3-D viscous wake calculation. The interaction calculation included the coupling conditions of the source flux due to the wing surface boundary layer, the flux jump due to the viscous wake, and the wake curvature effect. A method was also devised incorporating the 2-D trailing edge strong interaction solution for the normal pressure correction near the trailing edge region. A fully automated computer program was developed to perform the proposed method with one scalar version to be used on an IBM-3081 and two vectorized versions on Cray-1 and Cyber-205 computers.

  8. Structural analysis of cylindrical thrust chambers, volume 1

    NASA Technical Reports Server (NTRS)

    Armstrong, W. H.

    1979-01-01

    Life predictions of regeneratively cooled rocket thrust chambers are normally derived from classical material fatigue principles. The failures observed in experimental thrust chambers do not appear to be due entirely to material fatigue. The chamber coolant walls in the failed areas exhibit progressive bulging and thinning during cyclic firings until the wall stress finally exceeds the material rupture stress and failure occurs. A preliminary analysis of an oxygen-free high-conductivity (OFHC) copper cylindrical thrust chamber demonstrated that including cumulative cyclic plastic effects enables the observed coolant wall thinout to be predicted. The thinout curve constructed from the reference analysis of 10 firing cycles was extrapolated from the tenth cycle to the 200th cycle. The preliminary OFHC copper chamber 10-cycle analysis was then extended so that the extrapolated thinout curve could be established by performing cyclic analysis of the deformed configurations at 100 and 200 cycles. Thus the original range of extrapolation was reduced, and the thinout curve was adjusted by using calculated thinout rates at 100 and 200 cycles. An analysis of the same undeformed chamber model constructed of half-hard Amzirc, performed to study the effect of material properties on the thinout curve, is included.

  9. Analysis of the genetic information of a DNA segment of a new virus from silkworm.

    PubMed

    Bando, H; Hayakawa, T; Asano, S; Sahara, K; Nakagaki, M; Iizuka, T

    1995-01-01

    In 1983, a parvo-like virus (Yamanashi isolate) was newly isolated from the silkworm. However, unlike parvoviruses, two DNA molecules (VD1 and VD2) were always extracted from purified virions. To investigate the structure and organization of the virus genomes, we determined the complete nucleotide sequence of VD2. The sequence consisted of 6031 nucleotides (nts) and contained a large open reading frame (ORF1) of 3513 nts. A smaller open reading frame (ORF2) of 702 nts was found in the complementary sequence. Computer analysis revealed that neither ORF codes for the major structural proteins (VP1, 2, 3, and 4). These results suggest that VD2 does not carry enough information to produce progeny virions by itself. Further, the structural importance of the terminal sequence (CTS) common to both VD1 and VD2 was also predicted by computer analysis. PMID:7611885
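
Open-reading-frame detection of the kind used above can be illustrated with a minimal forward-strand scan (a simplification: real analyses also scan the complementary strand, as ORF2 here shows; `min_len` is an assumed threshold):

```python
def find_orfs(seq, min_len=9):
    """Return (start, end) index pairs of ORFs (ATG ... in-frame stop codon)
    on the forward strand, across all three reading frames."""
    stops = {"TAA", "TAG", "TGA"}
    orfs = []
    for frame in range(3):
        i = frame
        while i + 3 <= len(seq):
            if seq[i:i + 3] == "ATG":
                j = i + 3
                while j + 3 <= len(seq) and seq[j:j + 3] not in stops:
                    j += 3
                if j + 3 <= len(seq):            # found an in-frame stop codon
                    if j + 3 - i >= min_len:
                        orfs.append((i, j + 3))
                    i = j + 3
                    continue
            i += 3
    return orfs

# ATG, two sense codons, then the stop codon TAA.
print(find_orfs("ATGAAATTTTAA"))  # [(0, 12)]
```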

  10. Finite element analysis of laminated plates and shells, volume 1

    NASA Technical Reports Server (NTRS)

    Seide, P.; Chang, P. N. H.

    1978-01-01

    The finite element method is used to investigate the static behavior of laminated composite flat plates and cylindrical shells. The analysis incorporates the effects of transverse shear deformation in each layer through the assumption that the normals to the undeformed layer midsurface remain straight but need not be normal to the mid-surface after deformation. A digital computer program was developed to perform the required computations. The program includes a very efficient equation solution code which permits the analysis of large size problems. The method is applied to the problem of stretching and bending of a perforated curved plate.

  11. Structural Analysis and Testing of an Erectable Truss for Precision Segmented Reflector Application

    NASA Technical Reports Server (NTRS)

    Collins, Timothy J.; Fichter, W. B.; Adams, Richard R.; Javeed, Mehzad

    1995-01-01

    This paper describes analysis and test results obtained at Langley Research Center (LaRC) on a doubly curved testbed support truss for precision reflector applications. Descriptions of test procedures and experimental results that expand upon previous investigations are presented. A brief description of the truss is given, and finite-element-analysis models are described. Static-load and vibration test procedures are discussed, and experimental results are shown to be repeatable and in generally good agreement with linear finite-element predictions. Truss structural performance (as determined by static deflection and vibration testing) is shown to be predictable and very close to linear. Vibration test results presented herein confirm that an anomalous mode observed during initial testing was due to the flexibility of the truss support system. Photogrammetric surveys with two 131-in. reference scales show that the root-mean-square (rms) truss-surface accuracy is about 0.0025 in. Photogrammetric measurements also indicate that the truss coefficient of thermal expansion (CTE) is in good agreement with that predicted by analysis. A detailed description of the photogrammetric procedures is included as an appendix.

  12. Comparison of CLASS and ITK-SNAP in segmentation of urinary bladder in CT urography

    NASA Astrophysics Data System (ADS)

    Cha, Kenny; Hadjiiski, Lubomir; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.; Zhou, Chuan

    2014-03-01

    We are developing a computerized method for bladder segmentation in CT urography (CTU) for computer-aided diagnosis of bladder cancer. We have developed a Conjoint Level set Analysis and Segmentation System (CLASS) consisting of four stages: preprocessing and initial segmentation, 3D and 2D level set segmentation, and post-processing. In case the bladder contains regions filled with intravenous (IV) contrast and regions without contrast, CLASS segments the non-contrast (NC) region and the contrast-filled (C) region separately and conjoins the contours. In this study, we compared the performance of CLASS to ITK-SNAP 2.4, a publicly available software application for segmentation of structures in 3D medical images. ITK-SNAP performs segmentation by using an edge-based level set on preprocessed images. The level sets were initialized by manually placing a sphere at the boundary between the C and NC parts of bladders with both regions, and in the middle of bladders that had only a C or an NC region. Level set parameters and the number of iterations were chosen after experimentation with bladder cases. Segmentation performances were compared using 30 randomly selected bladders. 3D hand-segmented contours were obtained as the reference standard, and computerized segmentation accuracy was evaluated in terms of the average volume intersection %, average % volume error, average absolute % volume error, average minimum distance, and average Jaccard index. For CLASS, the values of these performance metrics were 79.0±8.2%, 16.1±16.3%, 19.9±11.1%, 3.5±1.3 mm, and 75.7±8.4%, respectively. For ITK-SNAP, the corresponding values were 78.8±8.2%, 8.3±33.1%, 24.2±23.7%, 5.2±2.6 mm, and 71.0±15.4%, respectively. On average, CLASS performed better and exhibited less variation than ITK-SNAP for bladder segmentation.

  13. Space tug economic analysis study. Volume 2: Tug concepts analysis. Part 2: Economic analysis

    NASA Technical Reports Server (NTRS)

    1972-01-01

    An economic analysis of space tug operations is presented. The subjects discussed are: (1) cost uncertainties, (2) scenario analysis, (3) economic sensitivities, (4) mixed integer programming formulation of the space tug problem, and (5) critical parameters in the evaluation of a public expenditure.

  14. Improving image segmentation performance and quantitative analysis via a computer-aided grading methodology for optical coherence tomography retinal image analysis

    NASA Astrophysics Data System (ADS)

    Cabrera Debuc, Delia; Salinas, Harry M.; Ranganathan, Sudarshan; Tátrai, Erika; Gao, Wei; Shen, Meixiao; Wang, Jianhua; Somfai, Gábor M.; Puliafito, Carmen A.

    2010-07-01

We demonstrate quantitative analysis and error correction of optical coherence tomography (OCT) retinal images by using a custom-built, computer-aided grading methodology. A total of 60 Stratus OCT (Carl Zeiss Meditec, Dublin, California) B-scans collected from ten normal healthy eyes are analyzed by two independent graders. The average retinal thickness per macular region is compared with the automated Stratus OCT results. Intergrader and intragrader reproducibility is calculated by Bland-Altman plots of the mean difference between both gradings and by Pearson correlation coefficients. In addition, the correlation between Stratus OCT and our methodology-derived thickness is also presented. The mean thickness difference between Stratus OCT and our methodology is 6.53 μm and 26.71 μm when using the inner segment/outer segment (IS/OS) junction and outer segment/retinal pigment epithelium (OS/RPE) junction as the outer retinal border, respectively. Overall, the median of the thickness differences as a percentage of the mean thickness is less than 1% and 2% for the intragrader and intergrader reproducibility test, respectively. The measurement accuracy range of the OCT retinal image analysis (OCTRIMA) algorithm is between 0.27 and 1.47 μm and 0.6 and 1.76 μm for the intragrader and intergrader reproducibility tests, respectively. Pearson correlation coefficients demonstrate R²>0.98 for all Early Treatment Diabetic Retinopathy Study (ETDRS) regions. Our methodology facilitates a more robust and localized quantification of the retinal structure in normal healthy controls and patients with clinically significant intraretinal features.
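    The Bland-Altman reproducibility statistics used here (bias and 95% limits of agreement between two graders) can be sketched as below. This is a minimal generic implementation, not the authors' grading software; the function name and interface are illustrative:

    ```python
    import numpy as np

    def bland_altman(a, b):
        """Bland-Altman agreement statistics for paired measurements
        (e.g., retinal thickness from two graders): returns the mean
        difference (bias) and the 95% limits of agreement."""
        a = np.asarray(a, dtype=float)
        b = np.asarray(b, dtype=float)
        diff = a - b
        bias = diff.mean()
        sd = diff.std(ddof=1)  # sample standard deviation of the differences
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
    ```

    In the full method, the mean of each pair `(a + b) / 2` is plotted against the difference to reveal any dependence of disagreement on thickness.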

  15. Thermal characterization and analysis of microliter liquid volumes using the three-omega method

    NASA Astrophysics Data System (ADS)

    Roy-Panzer, Shilpi; Kodama, Takashi; Lingamneni, Srilakshmi; Panzer, Matthew A.; Asheghi, Mehdi; Goodson, Kenneth E.

    2015-02-01

Thermal phenomena in many biological systems offer an alternative detection opportunity for quantifying relevant sample properties. While there is substantial prior work on thermal characterization methods for fluids, the push in the biology and biomedical research communities towards analysis of reduced sample volumes drives a need to extend and scale these techniques to these volumes of interest, which can be below 100 pl. This work applies the 3ω technique to measure the temperature-dependent thermal conductivity and heat capacity of de-ionized water, silicone oil, and salt buffer solution droplets from 24 to 80 °C. Heater geometries range in length from 200 to 700 μm and in width from 2 to 5 μm to accommodate the size restrictions imposed by small volume droplets. We use these devices to measure droplet volumes of 2 μl and demonstrate the potential to extend this technique down to pl droplet volumes based on an analysis of the thermally probed volume. Sensitivity and uncertainty analyses provide guidance for relevant design variables for characterizing properties of interest by investigating the tradeoffs between measurement frequency regime, device geometry, and substrate material. Experimental results show that we can extract thermal conductivity and heat capacity with these sample volumes to within less than 1% of thermal properties reported in the literature.
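    In the standard 3ω (Cahill) slope method that this work builds on, the in-phase temperature oscillation ΔT of a line heater varies linearly with ln(2ω), with slope −P/(2πLk), so the thermal conductivity follows from a linear fit. A minimal sketch of that extraction, assuming SI units and illustrative variable names (not the authors' analysis code):

    ```python
    import numpy as np

    def conductivity_from_3omega(omega, dT, power, length):
        """Extract thermal conductivity k from 3-omega data in the
        linear (slope) regime: dT = -P/(2*pi*L*k) * ln(2*omega) + const.

        omega  : angular frequencies of the heating current (rad/s)
        dT     : in-phase temperature oscillation amplitudes (K)
        power  : heating power P dissipated in the line heater (W)
        length : heater length L (m)
        """
        # Linear fit of dT against ln(2*omega); slope = -P / (2*pi*L*k)
        slope = np.polyfit(np.log(2.0 * np.asarray(omega, dtype=float)),
                           np.asarray(dT, dtype=float), 1)[0]
        return -power / (2.0 * np.pi * length * slope)
    ```

    For the microliter droplets in this study the heater senses both substrate and liquid, so in practice a differential or multilayer analysis is needed on top of this single-medium idealization.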

  16. Underground Test Area Subproject Phase I Data Analysis Task. Volume VI - Groundwater Flow Model Documentation Package

    SciTech Connect

    1996-11-01

    Volume VI of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment for the Nevada Test Site Underground Test Area Subproject contains the groundwater flow model data. Because of the size and complexity of the model area, a considerable quantity of data was collected and analyzed in support of the modeling efforts. The data analysis task was consequently broken into eight subtasks, and descriptions of each subtask's activities are contained in one of the eight volumes that comprise the Phase I Data Analysis Documentation.

  17. Underground Test Area Subproject Phase I Data Analysis Task. Volume II - Potentiometric Data Document Package

    SciTech Connect

    1996-12-01

    Volume II of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment fo