Science.gov

Sample records for volume segmentation analysis

  1. Automated segmentation and dose-volume analysis with DICOMautomaton

    NASA Astrophysics Data System (ADS)

    Clark, H.; Thomas, S.; Moiseenko, V.; Lee, R.; Gill, B.; Duzenli, C.; Wu, J.

    2014-03-01

Purpose: Exploration of historical data for regional organ dose sensitivity is limited by the effort needed to (sub-)segment large numbers of contours. A system has been developed which can rapidly perform autonomous contour sub-segmentation and generic dose-volume computations, substantially reducing the effort required for exploratory analyses. Methods: A contour-centric approach is taken which enables lossless, reversible segmentation and dramatically reduces computation time compared with voxel-centric approaches. Segmentation can be specified on a per-contour, per-organ, or per-patient basis, and can be performed along either an embedded plane or in terms of the contour's bounds (e.g., split an organ into fractional-volume/dose pieces along any 3D unit vector). More complex segmentation techniques are available. Anonymized data from 60 head-and-neck cancer patients were used to compare dose-volume computations with Varian's Eclipse (Varian Medical Systems, Inc.). Results: Computed mean doses and dose-volume histograms agree strongly with those of Varian's Eclipse. Contours which have been segmented can be injected back into patient data permanently and in a Digital Imaging and Communications in Medicine (DICOM)-conforming manner. Lossless segmentation persists across such injection, and remains fully reversible. Conclusions: DICOMautomaton allows researchers to rapidly, accurately, and autonomously segment large amounts of data into intricate structures suitable for analyses of regional organ dose sensitivity.
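The generic dose-volume computations mentioned above can be illustrated with a minimal cumulative dose-volume histogram (DVH) sketch. This is illustrative only, not DICOMautomaton's contour-centric implementation; equal-volume voxels and an invented toy dose array are assumed:

```python
import numpy as np

def cumulative_dvh(doses, bin_width=1.0):
    """Cumulative DVH: for each dose level d, the fraction of the
    structure's volume receiving at least d (equal-volume voxels
    assumed, so volume fraction == voxel fraction)."""
    doses = np.asarray(doses, dtype=float)
    edges = np.arange(0.0, doses.max() + bin_width, bin_width)
    frac = np.array([(doses >= d).mean() for d in edges])
    return edges, frac

# Toy structure: half the voxels receive 10 Gy, half receive 30 Gy
edges, frac = cumulative_dvh(np.array([10.0] * 50 + [30.0] * 50), bin_width=10.0)
print(frac)  # fraction of volume >= 0, 10, 20, 30 Gy
```

The mean dose follows directly as `doses.mean()`; sub-segmented structures would each get their own DVH in this scheme.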

  2. Regional analysis of volumes and reproducibilities of automatic and manual hippocampal segmentations

    PubMed Central

    Vrenken, Hugo; Bijma, Fetsje; Barkhof, Frederik; van Herk, Marcel; de Munck, Jan C.

    2017-01-01

Purpose Precise and reproducible hippocampus outlining is important to quantify hippocampal atrophy caused by neurodegenerative diseases and to spare the hippocampus in whole brain radiation therapy when performing prophylactic cranial irradiation or treating brain metastases. This study aimed to quantify systematic differences between methods by comparing regional volume and outline reproducibility of manual, FSL-FIRST and FreeSurfer hippocampus segmentations. Materials and methods This study used a dataset from ADNI (Alzheimer's Disease Neuroimaging Initiative), including 20 healthy controls, 40 patients with mild cognitive impairment (MCI), and 20 patients with Alzheimer's disease (AD). For each subject, back-to-back (BTB) T1-weighted 3D MPRAGE images were acquired at time-point baseline (BL) and 12 months later (M12). Hippocampus segmentations from all methods were converted into triangulated meshes, regional volumes were extracted, and regional Jaccard indices were computed between the hippocampus meshes of paired BTB scans to evaluate reproducibility. Regional volumes and Jaccard indices were modelled as a function of group (G), method (M), hemisphere (H), time-point (T), region (R) and interactions. Results For the volume data the model selection procedure yielded the significant main effects G, M, H, T and R and interaction effects G-R and M-R. The same model was found for the BTB scans. For all methods, volumes reduced with the severity of disease. Significant fixed effects for the regional Jaccard index data were M, R and the interaction M-R. For all methods the middle region was most reproducible, independent of diagnostic group. FSL-FIRST was most and FreeSurfer least reproducible. Discussion/Conclusion A novel method to perform detailed analysis of subtle differences in hippocampus segmentation is proposed. The method showed that hippocampal segmentation reproducibility was best for FSL-FIRST and worst for FreeSurfer. We also found systematic
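The Jaccard index used above to score reproducibility is straightforward to compute on binary masks; a minimal sketch (not the study's mesh-based regional implementation) is:

```python
import numpy as np

def jaccard(a, b):
    """Jaccard index |A ∩ B| / |A ∪ B| between two binary masks:
    1.0 means identical segmentations, 0.0 means no overlap."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both empty: treat as perfectly reproducible
    return np.logical_and(a, b).sum() / union

# Two toy masks: 4 shared voxels out of 6 total
m1 = np.zeros((4, 4), dtype=bool); m1[1:3, 1:3] = True
m2 = np.zeros((4, 4), dtype=bool); m2[1:3, 1:4] = True
print(jaccard(m1, m2))  # 4 / 6
```

Applied per anatomical region to paired back-to-back scans, this yields exactly the kind of regional reproducibility score the study models.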

  3. Evaluation Of Human Segmental Body Volumes And Inertial Properties With Photogrammetry As A Basis For Gait Analysis

    NASA Astrophysics Data System (ADS)

Baumann, J. U.; Schaer, A. R.; Sheffer, D. B.

    1986-07-01

    In order to be practically useful, gait analysis in patients with motor handicaps should provide information on joint loads, forces and moments. For accurate joint force estimates, mass and inertial properties of the limb segment must be known. A program for the determination of segmental mass and inertial properties was therefore set up, using stereophotogrammetry for the evaluation of segmental body volumes. The methodology using two stereo pairs of cameras is described. A 15 segment body model was defined with its segmental boundaries and its segmental anatomical axis systems.

  4. Automated localization and segmentation of lung tumor from PET-CT thorax volumes based on image feature analysis.

    PubMed

    Cui, Hui; Wang, Xiuying; Feng, Dagan

    2012-01-01

Positron emission tomography - computed tomography (PET-CT) plays an essential role in early tumor detection, diagnosis, staging and treatment. Automated, accurate lung tumor detection and delineation from PET-CT remain challenging. In this paper, our method first localizes the lung tumor automatically on the basis of a quantitative analysis of the contrast features of the PET volume in SUV (standardized uptake value). Then, based on an analysis of the surrounding CT features of the initial tumor definition, a decision strategy determines whether the tumor is segmented from CT or from PET. The algorithm has been validated on 20 PET-CT studies involving non-small cell lung cancer (NSCLC). Experimental results demonstrated that our method was able to segment the tumor when it was adjacent to the mediastinum or chest wall, and the algorithm outperformed five other lung segmentation methods in terms of overlapping measure.
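The standardized uptake value underlying the localization step is a simple normalization of measured activity; a sketch of the common body-weight SUV formula (the abstract does not state which SUV variant the authors used, and the numbers below are invented):

```python
def suv(activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Body-weight SUV: tissue activity concentration divided by
    injected dose per gram of body weight. Assumes 1 g/mL tissue
    density and that decay correction has already been applied."""
    dose_kbq = injected_dose_mbq * 1000.0
    weight_g = body_weight_kg * 1000.0
    return activity_kbq_per_ml / (dose_kbq / weight_g)

# 370 MBq injected into a 74 kg patient; tumour uptake 25 kBq/mL
print(suv(25.0, 370.0, 74.0))  # SUV = 5.0
```

High-SUV clusters relative to surrounding uptake are what a contrast-based localization step like the one described would search for.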

  5. NSEG, a segmented mission analysis program for low and high speed aircraft. Volume 1: Theoretical development

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Rozendaal, H. L.

    1977-01-01

A rapid mission analysis code based on the use of approximate flight path equations of motion is presented. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics were specified in tabular form. The code also contains extensive flight envelope performance mapping capabilities. Approximate take off and landing analyses were performed. At high speeds, centrifugal lift effects were accounted for. Extensive turbojet and ramjet engine scaling procedures were incorporated in the code.
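The segment-by-segment integration of approximate flight path equations can be illustrated with a toy climb segment: time to climb follows from integrating dt = dh / ROC over tabulated rate-of-climb data. This is a hedged sketch, not NSEG code, and the altitude/rate-of-climb table is invented:

```python
def time_to_climb(altitudes_ft, roc_fpm):
    """Approximate time (minutes) to climb through tabulated
    altitude bands, integrating dt = dh / ROC with a trapezoidal
    average of the rate of climb over each band."""
    total_min = 0.0
    for h0, h1, r0, r1 in zip(altitudes_ft, altitudes_ft[1:], roc_fpm, roc_fpm[1:]):
        roc_avg = 0.5 * (r0 + r1)        # ft/min, averaged over the band
        total_min += (h1 - h0) / roc_avg
    return total_min

# Rate of climb falling off with altitude, as for a real aircraft
print(time_to_climb([0, 10000, 20000], [2000, 1500, 1000]))  # minutes
```

A full mission analysis would chain many such segments (accelerations, cruises, descents), each with its own approximate equation form, as the abstract describes.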

  6. NSEG: A segmented mission analysis program for low and high speed aircraft. Volume 3: Demonstration problems

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Rozendaal, H. L.

    1977-01-01

    Program NSEG is a rapid mission analysis code based on the use of approximate flight path equations of motion. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics are specified in tabular form. In addition to its mission performance calculation capabilities, the code also contains extensive flight envelope performance mapping capabilities. For example, rate-of-climb, turn rates, and energy maneuverability parameter values may be mapped in the Mach-altitude plane. Approximate take off and landing analyses are also performed. At high speeds, centrifugal lift effects are accounted for. Extensive turbojet and ramjet engine scaling procedures are incorporated in the code.

  7. NSEG: A segmented mission analysis program for low and high speed aircraft. Volume 2: Program users manual

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Rozendaal, H. L.

    1977-01-01

A rapid mission analysis code based on the use of approximate flight path equations of motion is described. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics are specified in tabular form. In addition to its mission performance calculation capabilities, the code also contains extensive flight envelope performance mapping capabilities. Approximate take off and landing analyses can be performed. At high speeds, centrifugal lift effects are taken into account. Extensive turbojet and ramjet engine scaling procedures are incorporated in the code.

  8. SuRVoS: Super-Region Volume Segmentation Workbench.

    PubMed

    Luengo, Imanol; Darrow, Michele C; Spink, Matthew C; Sun, Ying; Dai, Wei; He, Cynthia Y; Chiu, Wah; Pridmore, Tony; Ashton, Alun W; Duke, Elizabeth M H; Basham, Mark; French, Andrew P

    2017-02-25

    Segmentation of biological volumes is a crucial step needed to fully analyse their scientific content. Not having access to convenient tools with which to segment or annotate the data means many biological volumes remain under-utilised. Automatic segmentation of biological volumes is still a very challenging research field, and current methods usually require a large amount of manually-produced training data to deliver a high-quality segmentation. However, the complex appearance of cellular features and the high variance from one sample to another, along with the time-consuming work of manually labelling complete volumes, makes the required training data very scarce or non-existent. Thus, fully automatic approaches are often infeasible for many practical applications. With the aim of unifying the segmentation power of automatic approaches with the user expertise and ability to manually annotate biological samples, we present a new workbench named SuRVoS (Super-Region Volume Segmentation). Within this software, a volume to be segmented is first partitioned into hierarchical segmentation layers (named Super-Regions) and is then interactively segmented with the user's knowledge input in the form of training annotations. SuRVoS first learns from and then extends user inputs to the rest of the volume, while using super-regions for quicker and easier segmentation than when using a voxel grid. These benefits are especially noticeable on noisy, low-dose, biological datasets.
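The super-region idea above — learn from sparse user annotations, then extend to the whole volume — can be sketched with regular grid blocks standing in for SuRVoS's hierarchical super-regions and a nearest-mean-intensity rule standing in for its trained model. Everything here (function names, the toy data, the classifier) is illustrative, not the SuRVoS implementation:

```python
import numpy as np

def grid_superregions(vol, block=4):
    """Partition a 2D slice into regular blocks as a crude stand-in
    for supervoxels; returns an integer label map."""
    h, w = vol.shape
    labels = np.zeros((h, w), dtype=int)
    nb_w = w // block
    for i in range(h):
        for j in range(w):
            labels[i, j] = (i // block) * nb_w + (j // block)
    return labels

def extend_annotations(vol, labels, seeds):
    """Learn a mean-intensity descriptor per annotated class from the
    seed regions, then assign every region to the closest class."""
    means = {c: np.mean([vol[labels == r].mean() for r in regs])
             for c, regs in seeds.items()}
    out = {}
    for r in np.unique(labels):
        d = vol[labels == r].mean()
        out[r] = min(means, key=lambda c: abs(means[c] - d))
    return out

vol = np.zeros((8, 8)); vol[:, 4:] = 1.0        # two homogeneous halves
labels = grid_superregions(vol, block=4)         # 4 regions
assignment = extend_annotations(vol, labels, {"bg": [0], "cell": [1]})
print({int(k): v for k, v in assignment.items()})  # region -> class
```

Classifying a few hundred regions instead of millions of voxels is exactly the speed-up that makes the interactive workflow described above practical.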

  9. Bioimpedance Measurement of Segmental Fluid Volumes and Hemodynamics

    NASA Technical Reports Server (NTRS)

    Montgomery, Leslie D.; Wu, Yi-Chang; Ku, Yu-Tsuan E.; Gerth, Wayne A.; DeVincenzi, D. (Technical Monitor)

    2000-01-01

    Bioimpedance has become a useful tool to measure changes in body fluid compartment volumes. An Electrical Impedance Spectroscopic (EIS) system is described that extends the capabilities of conventional fixed frequency impedance plethysmographic (IPG) methods to allow examination of the redistribution of fluids between the intracellular and extracellular compartments of body segments. The combination of EIS and IPG techniques was evaluated in the human calf, thigh, and torso segments of eight healthy men during 90 minutes of six degree head-down tilt (HDT). After 90 minutes HDT the calf and thigh segments significantly (P < 0.05) lost conductive volume (eight and four percent, respectively) while the torso significantly (P < 0.05) gained volume (approximately three percent). Hemodynamic responses calculated from pulsatile IPG data also showed a segmental pattern consistent with vascular fluid loss from the lower extremities and vascular engorgement in the torso. Lumped-parameter equivalent circuit analyses of EIS data for the calf and thigh indicated that the overall volume decreases in these segments arose from reduced extracellular volume that was not completely balanced by increased intracellular volume. The combined use of IPG and EIS techniques enables noninvasive tracking of multi-segment volumetric and hemodynamic responses to environmental and physiological stresses.
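The conductive-volume estimates referred to above typically come from the classic cylinder model of impedance plethysmography, V = ρL²/Z. A sketch with invented example numbers (not the study's data or its lumped-parameter circuit analysis):

```python
def segmental_volume_ml(resistivity_ohm_cm, length_cm, impedance_ohm):
    """Cylinder-model conductive volume of a body segment:
    V = rho * L^2 / Z, the standard single-frequency IPG relation."""
    return resistivity_ohm_cm * length_cm ** 2 / impedance_ohm

# Calf segment: impedance rises as fluid shifts out of the segment
v_before = segmental_volume_ml(150.0, 40.0, 60.0)   # baseline
v_after = segmental_volume_ml(150.0, 40.0, 65.0)    # during head-down tilt
print(round(100.0 * (v_after - v_before) / v_before, 1))  # % volume change
```

An EIS system extends this by sweeping frequency, since low-frequency current travels mainly extracellularly while high-frequency current also crosses cell membranes — which is how the intracellular/extracellular split described above is recovered.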

  10. Partial volume effect modeling for segmentation and tissue classification of brain magnetic resonance images: A review.

    PubMed

    Tohka, Jussi

    2014-11-28

Quantitative analysis of magnetic resonance (MR) brain images is facilitated by the development of automated segmentation algorithms. A single image voxel may contain several tissue types due to the finite spatial resolution of the imaging device. This phenomenon, termed partial volume effect (PVE), complicates the segmentation process, and, due to the complexity of human brain anatomy, the PVE is an important factor for accurate brain structure quantification. Partial volume estimation refers to a generalized segmentation task where the amount of each tissue type within each voxel is estimated. This review aims to provide a systematic, tutorial-like overview and categorization of methods for partial volume estimation in brain MRI. The review concentrates on the statistically based approaches for partial volume estimation and also explains the differences from other, similar image segmentation approaches.
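In the simplest two-tissue case, partial volume estimation reduces to solving a linear mixing equation per voxel; a minimal sketch (the review covers far more elaborate statistical models, with noise and more tissue classes):

```python
def tissue_fraction(voxel, mu_a, mu_b):
    """Two-tissue mixel model: voxel intensity is modelled as
    v = f*mu_a + (1 - f)*mu_b. Solve for the fraction f of tissue A
    and clip to the physically meaningful range [0, 1]."""
    f = (voxel - mu_b) / (mu_a - mu_b)
    return min(1.0, max(0.0, f))

# Pure-tissue means: GM = 100, CSF = 40. A voxel of 85 is 75% GM.
print(tissue_fraction(85.0, 100.0, 40.0))  # 0.75
```

Statistical PVE methods generalize this by treating the fractions as latent variables with a noise model, rather than inverting the mixing equation directly.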

  11. Amygdalar and hippocampal volume: A comparison between manual segmentation, Freesurfer and VBM.

    PubMed

    Grimm, Oliver; Pohlack, Sebastian; Cacciaglia, Raffaele; Winkelmann, Tobias; Plichta, Michael M; Demirakca, Traute; Flor, Herta

    2015-09-30

Automated segmentation of the amygdala and the hippocampus is of interest for research looking at large datasets where manual segmentation of T1-weighted magnetic resonance tomography images is less feasible for morphometric analysis. Manual segmentation still remains the gold standard for subcortical structures like the hippocampus and the amygdala. A direct comparison of VBM8 and Freesurfer is rarely done, because VBM8 results are most often used for voxel-based analysis. We used the same region-of-interest (ROI) for Freesurfer and VBM8 to relate automated and manually derived volumes of the amygdala and the hippocampus. We processed a large manually segmented dataset of n=92 independent samples with an automated segmentation strategy (VBM8 vs. Freesurfer Version 5.0). For statistical analysis, we not only calculated Pearson's correlation coefficients, but also used methods developed for method comparison, such as Lin's concordance coefficient. The correlation between automatic and manual segmentation was high for the hippocampus [0.58-0.76] and lower for the amygdala [0.45-0.59]. However, concordance coefficients point to higher concordance for the amygdala [0.46-0.62] than for the hippocampus [0.06-0.12]. VBM8 and Freesurfer segmentation performed on a comparable level in comparison to manual segmentation. We conclude (1) that correlation alone does not capture systematic differences (e.g. of hippocampal volumes), (2) calculation of ROI volumes with VBM8 gives measurements comparable to Freesurfer V5.0 when using the same ROI and (3) systematic and proportional differences are caused mainly by different definitions of anatomic boundaries and only to a lesser part by different segmentation strategies. This work underscores the importance of using method comparison techniques and demonstrates that even with high correlation coefficients, there can still be large differences in absolute volume.
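Lin's concordance coefficient differs from Pearson's r exactly as the abstract argues: it penalizes systematic offsets that correlation ignores. A small sketch with invented volumes (not the study's data):

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient: penalizes both
    scatter (like Pearson's r) and systematic location/scale shifts
    away from the identity line y = x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

manual = np.array([3.0, 3.2, 3.5, 3.8, 4.0])   # manual volumes (mL)
auto = manual + 0.8                             # perfectly correlated, but offset
print(np.corrcoef(manual, auto)[0, 1])          # Pearson r = 1.0
print(lin_ccc(manual, auto))                    # CCC well below 1
```

This reproduces the paper's central point: a method can correlate perfectly with manual volumes while systematically over- or under-estimating them.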

  12. Quantifying brain tissue volume in multiple sclerosis with automated lesion segmentation and filling

    PubMed Central

    Valverde, Sergi; Oliver, Arnau; Roura, Eloy; Pareto, Deborah; Vilanova, Joan C.; Ramió-Torrentà, Lluís; Sastre-Garriga, Jaume; Montalban, Xavier; Rovira, Àlex; Lladó, Xavier

    2015-01-01

Lesion filling has been successfully applied to reduce the effect of hypo-intense T1-w Multiple Sclerosis (MS) lesions on automatic brain tissue segmentation. However, a study of fully automated pipelines incorporating lesion segmentation and lesion filling on tissue volume analysis has not yet been performed. Here, we analyzed the % of error introduced by automating the lesion segmentation and filling processes in the tissue segmentation of 70 clinically isolated syndrome patient images. First, images were processed using the LST and SLS toolkits with different pipeline combinations that differed in either automated or manual lesion segmentation, and lesion filling or masking out lesions. Then, images processed following each of the pipelines were segmented into gray matter (GM) and white matter (WM) using SPM8, and compared with the same images where expert lesion annotations were filled before segmentation. Our results showed that fully automated lesion segmentation and filling pipelines significantly reduced the % of error in GM and WM volume on images of MS patients, and performed similarly to the images where expert lesion annotations were masked before segmentation. In all the pipelines, the amount of misclassified lesion voxels was the main cause of the observed error in GM and WM volume. However, the % of error was significantly lower when automatically estimated lesions were filled rather than masked before segmentation. These results are relevant and suggest that the LST and SLS toolboxes allow accurate brain tissue volume measurements without any kind of manual intervention, which can be convenient not only in terms of time and economic costs, but also for avoiding the inherent intra/inter variability between manual annotations. PMID:26740917

  13. Accurate colon residue detection algorithm with partial volume segmentation

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Liang, Zhengrong; Zhang, PengPeng; Kutcher, Gerald J.

    2004-05-01

Colon cancer is the second leading cause of cancer-related death in the United States. Earlier detection and removal of polyps can dramatically reduce the chance of developing a malignant tumor. Due to some limitations of optical colonoscopy used in the clinic, many researchers have developed virtual colonoscopy as an alternative technique, in which accurate colon segmentation is crucial. However, the partial volume effect and the existence of residue make this very challenging. The electronic colon cleaning technique proposed by Chen et al. is a very attractive method, but it is a hard-segmentation method. As mentioned in their paper, some artifacts were produced, which might affect accurate colon reconstruction. In our paper, instead of labeling each voxel with a unique label or tissue type, the percentage of different tissues within each voxel, which we call a mixture, was considered in establishing a maximum a posteriori probability (MAP) image-segmentation framework. A Markov random field (MRF) model was developed to reflect the spatial information for the tissue mixtures. The spatial information based on hard segmentation was used to determine which tissue types are in a specific voxel. Parameters of each tissue class were estimated by the expectation-maximization (EM) algorithm during the MAP tissue-mixture segmentation. Real CT experimental results demonstrated that the partial volume effects between four tissue types have been precisely detected. Meanwhile, the residue has been electronically removed and a very smooth, clean interface along the colon wall has been obtained.
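The EM parameter estimation at the heart of such a MAP framework can be illustrated for the simple case of two 1D Gaussian tissue classes. This is a sketch only: the paper's MRF spatial prior and four-class tissue-mixture labels are omitted, and the data below are synthetic:

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    """EM estimation of two tissue-class Gaussians (means, standard
    deviations, mixing weights) from voxel intensities. This is the
    parameter-estimation step only; no spatial MRF prior."""
    x = np.asarray(x, float)
    mu = np.array([x.min(), x.max()])
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each class for each voxel
        lik = np.stack([pi[k] / sigma[k] *
                        np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                        for k in range(2)])
        resp = lik / lik.sum(axis=0)
        # M-step: update mixing weights, means, and variances
        nk = resp.sum(axis=1)
        pi = nk / len(x)
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk) + 1e-6
    return mu, sigma, pi

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(50, 5, 500), rng.normal(120, 8, 500)])
mu, sigma, pi = em_two_gaussians(x)
print(np.round(mu))  # close to the true class means, ~[50, 120]
```

The per-voxel responsibilities computed in the E-step are soft class fractions, which is precisely why EM pairs naturally with a tissue-mixture rather than a hard-label model.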

  14. Normal brain volume measurements using multispectral MRI segmentation.

    PubMed

    Vaidyanathan, M; Clarke, L P; Heidtman, C; Velthuizen, R P; Hall, L O

    1997-01-01

The performance of a supervised k-nearest neighbor (kNN) classifier and a semisupervised fuzzy c-means (SFCM) clustering segmentation method are evaluated for reproducible measurement of the volumes of normal brain tissues and cerebrospinal fluid. The stability of the two segmentation methods is evaluated for (a) operator selection of training data, (b) reproducibility during repeat imaging sessions to determine any variations in the sensor performance over time, (c) variations in the measured volumes between different subjects, and (d) variability with different imaging parameters. The variations were found to be dependent on the type of measured tissue and the operator performing the segmentations. The variability during repeat imaging sessions for the SFCM method was < 3%. The absolute volumes of the brain matter and cerebrospinal fluid varied considerably between subjects, ranging from 9% to 13%. The intraobserver and interobserver reproducibility for SFCM were < 4% for the soft tissues and 6% for cerebrospinal fluid. The corresponding results for the kNN segmentation method were higher than those for the SFCM method.
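Plain fuzzy c-means, the basis of the SFCM method evaluated above, can be sketched in a few lines. This is the unsupervised variant only: the semisupervised version additionally pins the memberships of operator-selected training voxels, which is omitted here, and the toy intensities are invented:

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=100, seed=0):
    """Fuzzy c-means on 1D intensities: alternate between computing
    fuzzy cluster centers from memberships (weighted by u^m) and
    updating memberships from distances to the centers."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    u = rng.random((c, len(x)))
    u /= u.sum(axis=0)                       # memberships sum to 1 per voxel
    for _ in range(iters):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)  # fuzzy-weighted means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))          # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u

x = np.array([10.0, 11.0, 9.0, 50.0, 52.0, 48.0])   # two tissue clusters
centers, u = fcm(x)
print(np.sort(centers))  # ≈ [10, 50]
```

Unlike a hard classifier, each voxel keeps graded memberships in every class, which is what makes the method's volume estimates robust to partial volume voxels at tissue boundaries.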

  15. Fast global interactive volume segmentation with regional supervoxel descriptors

    NASA Astrophysics Data System (ADS)

    Luengo, Imanol; Basham, Mark; French, Andrew P.

    2016-03-01

In this paper we propose a novel approach towards fast multi-class volume segmentation that exploits supervoxels in order to reduce complexity, time and memory requirements. Current methods for biomedical image segmentation typically require either complex mathematical models with slow convergence, or expensive-to-calculate image features, which makes them infeasible for large volumes with many objects (tens to hundreds) of different classes, as is typical in modern medical and biological datasets. Recently, graphical models such as Markov Random Fields (MRF) or Conditional Random Fields (CRF) have had a huge impact on different computer vision areas (e.g. image parsing, object detection, object recognition) as they provide global regularization for multi-class problems over an energy minimization framework. These models have yet to find impact in biomedical imaging due to complexities in training and to slow inference caused by the very large number of voxels in 3D images. Here, we define an interactive segmentation approach over a supervoxel space by first defining novel, robust and fast regional descriptors for supervoxels. Then, a hierarchical segmentation approach is adopted by training Contextual Extremely Random Forests in a user-defined label hierarchy, where the classification output of the previous layer is used as additional features to train a new classifier to refine more detailed label information. This hierarchical model yields final class likelihoods for supervoxels which are finally refined by an MRF model for 3D segmentation. Results demonstrate the effectiveness on a challenging cryo-soft X-ray tomography dataset by segmenting cell areas with only a few user scribbles as the input for our algorithm. Further results demonstrate the effectiveness of our method to fully extract different organelles from the cell volume with another few seconds of user interaction.

  16. Performance benchmarking of liver CT image segmentation and volume estimation

    NASA Astrophysics Data System (ADS)

    Xiong, Wei; Zhou, Jiayin; Tian, Qi; Liu, Jimmy J.; Qi, Yingyi; Leow, Wee Kheng; Han, Thazin; Wang, Shih-chang

    2008-03-01

In recent years more and more computer aided diagnosis (CAD) systems are being used routinely in hospitals. Image-based knowledge discovery plays an important role in many CAD applications, which have great potential to be integrated into the next-generation picture archiving and communication systems (PACS). Robust medical image segmentation tools are essential for such discovery in many CAD applications. In this paper we present a platform with the necessary tools for performance benchmarking of algorithms for liver segmentation and volume estimation used in liver transplantation planning. It includes an abdominal computed tomography (CT) image database (DB), annotation tools, a ground truth DB, and performance measure protocols. The proposed architecture is generic and can be used for other organs and imaging modalities. In the current study, approximately 70 sets of abdominal CT images with normal livers have been collected, and a user-friendly annotation tool has been developed to generate ground truth data for a variety of organs, including 2D contours of the liver, the two kidneys, spleen, aorta and spinal canal. Abdominal organ segmentation algorithms using 2D atlases and 3D probabilistic atlases can be evaluated on the platform. Preliminary benchmark results from the liver segmentation algorithms which make use of statistical knowledge extracted from the abdominal CT image DB are also reported. We aim to increase the collection to about 300 CT sets in the near future and plan to make the resulting DBs available to the medical imaging research community for performance benchmarking of liver segmentation algorithms.

  17. Tooth segmentation system with intelligent editing for cephalometric analysis

    NASA Astrophysics Data System (ADS)

    Chen, Shoupu

    2015-03-01

    Cephalometric analysis is the study of the dental and skeletal relationship in the head, and it is used as an assessment and planning tool for improved orthodontic treatment of a patient. Conventional cephalometric analysis identifies bony and soft-tissue landmarks in 2D cephalometric radiographs, in order to diagnose facial features and abnormalities prior to treatment, or to evaluate the progress of treatment. Recent studies in orthodontics indicate that there are persistent inaccuracies and inconsistencies in the results provided using conventional 2D cephalometric analysis. Obviously, plane geometry is inappropriate for analyzing anatomical volumes and their growth; only a 3D analysis is able to analyze the three-dimensional, anatomical maxillofacial complex, which requires computing inertia systems for individual or groups of digitally segmented teeth from an image volume of a patient's head. For the study of 3D cephalometric analysis, the current paper proposes a system for semi-automatically segmenting teeth from a cone beam computed tomography (CBCT) volume with two distinct features, including an intelligent user-input interface for automatic background seed generation, and a graphics processing unit (GPU) acceleration mechanism for three-dimensional GrowCut volume segmentation. Results show a satisfying average DICE score of 0.92, with the use of the proposed tooth segmentation system, by 15 novice users who segmented a randomly sampled tooth set. The average GrowCut processing time is around one second per tooth, excluding user interaction time.
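The DICE score reported above is a standard overlap measure between a segmentation and its reference; a minimal implementation on binary masks (illustrative, not the paper's evaluation code):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient 2|A ∩ B| / (|A| + |B|) between two
    binary segmentation masks; 1.0 = identical, 0.0 = disjoint."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two 36-voxel tooth masks, shifted by one row
t1 = np.zeros((10, 10), dtype=bool); t1[2:8, 2:8] = True
t2 = np.zeros((10, 10), dtype=bool); t2[3:9, 2:8] = True
print(round(dice(t1, t2), 2))  # 0.83
```

A reported average of 0.92 for novice users therefore indicates segmentations that overlap the reference almost voxel for voxel.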

  18. Artificial Neural Network-Based System for PET Volume Segmentation

    PubMed Central

    Sharif, Mhd Saeed; Abbod, Maysam; Amira, Abbes; Zaidi, Habib

    2010-01-01

Tumour detection, classification, and quantification in positron emission tomography (PET) imaging at an early stage of disease are important issues for clinical diagnosis, assessment of response to treatment, and radiotherapy planning. Many techniques have been proposed for segmenting medical imaging data; however, some of the approaches have poor performance, large inaccuracy, and require substantial computation time for analysing large medical volumes. Artificial intelligence (AI) approaches can provide improved accuracy and save a considerable amount of time. Artificial neural networks (ANNs), as one of the best AI techniques, have the capability to precisely classify and quantify lesions and to model the clinical evaluation for a specific problem. This paper presents a novel application of ANNs in the wavelet domain for PET volume segmentation. ANN performance evaluation using different training algorithms in both spatial and wavelet domains with different numbers of neurons in the hidden layer is also presented. The best number of neurons in the hidden layer is determined from the experimental results, which also identify the Levenberg-Marquardt backpropagation training algorithm as the best training approach for the proposed application. The proposed intelligent system results are compared with those obtained using conventional techniques including thresholding and clustering based approaches. Experimental and Monte Carlo simulated PET phantom data sets and clinical PET volumes of non-small cell lung cancer patients were utilised to validate the proposed algorithm, which has demonstrated promising results. PMID:20936152

  19. Real-time volume rendering visualization of dual-modality PET/CT images with interactive fuzzy thresholding segmentation.

    PubMed

    Kim, Jinman; Cai, Weidong; Eberl, Stefan; Feng, Dagan

    2007-03-01

Three-dimensional (3-D) visualization has become an essential part of imaging applications, including image-guided surgery, radiotherapy planning, and computer-aided diagnosis. In the visualization of dual-modality positron emission tomography and computed tomography (PET/CT), 3-D volume rendering is often limited to rendering of a single image volume and by high computational demand. Furthermore, incorporation of segmentation in volume rendering is usually restricted to visualizing the presegmented volumes of interest. In this paper, we investigated the integration of interactive segmentation into real-time volume rendering of dual-modality PET/CT images. We present and validate a fuzzy thresholding segmentation technique based on fuzzy cluster analysis, which allows interactive and real-time optimization of the segmentation results. This technique is then incorporated into a real-time multi-volume rendering of PET/CT images. Our method allows a real-time fusion and interchangeability of the segmentation volume with PET or CT volumes, as well as the usual fusion of PET/CT volumes. Volume manipulations such as window/level adjustments and lookup tables can be applied to individual volumes, which are then fused together in real time as adjustments are made. We demonstrate the benefit of our method in integrating segmentation with volume rendering in its application to PET/CT images. Responsive frame rates are achieved by utilizing a texture-based volume rendering algorithm and the rapid transfer capability of the high-memory bandwidth available in low-cost graphics hardware.
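The simplest form of the fusion described — window/level normalization followed by a per-voxel alpha blend — can be sketched as follows. This is illustrative only; the paper's texture-based GPU renderer is far more involved, and the window/level values below are invented:

```python
import numpy as np

def window_level(img, window, level):
    """CT-style window/level: map [level - w/2, level + w/2] to
    [0, 1] display values, clipping outside the window."""
    lo = level - window / 2.0
    return np.clip((img - lo) / window, 0.0, 1.0)

def fuse(ct, pet, alpha=0.5):
    """Per-voxel linear blend of two normalized volumes: the display
    tracks changes to either input immediately, which is the essence
    of real-time interchangeable fusion."""
    return (1.0 - alpha) * ct + alpha * pet

ct = window_level(np.array([[-1000.0, 40.0, 1000.0]]), window=400, level=40)
pet = np.array([[0.0, 1.0, 0.2]])      # already normalized uptake
blended = fuse(ct, pet)
print(blended)  # blended display values in [0, 1]
```

Because the blend is recomputed per frame, adjusting the window/level of one volume (or swapping in a segmentation volume) updates the fused view in real time, as the abstract describes.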

  20. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging

    NASA Astrophysics Data System (ADS)

    Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart

    2015-02-01

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51  ±  1.92) to (97.27  ±  0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.
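The STAPLE algorithm used above for the multi-expert ground-truth estimates can be sketched in a simplified binary form: EM alternates between estimating the hidden true segmentation and each rater's sensitivity/specificity. This is a hedged illustration (flat prior, no spatial model, toy votes), not the validated implementation used in the study:

```python
import numpy as np

def staple(decisions, iters=30):
    """Simplified binary STAPLE. decisions: raters x voxels 0/1
    matrix. Returns per-voxel foreground probability w and each
    rater's estimated sensitivity p and specificity q."""
    D = np.asarray(decisions)
    r, n = D.shape
    p = np.full(r, 0.9)
    q = np.full(r, 0.9)
    prior = D.mean()                     # flat foreground prior
    for _ in range(iters):
        # E-step: posterior probability each voxel is truly foreground
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        w = a / (a + b)
        # M-step: re-estimate each rater's performance parameters
        p = (w * D).sum(axis=1) / w.sum()
        q = ((1 - w) * (1 - D)).sum(axis=1) / (1 - w).sum()
    return w, p, q

votes = np.array([[1, 1, 1, 0, 0, 0],    # rater 1
                  [1, 1, 0, 0, 0, 0],    # rater 2 (under-segments)
                  [1, 1, 1, 1, 0, 0]])   # rater 3 (over-segments)
w, p, q = staple(votes)
print((w > 0.5).astype(int))  # consensus ground-truth estimate
```

Unlike majority voting, the EM weighting discounts raters it estimates to be less reliable, which is why STAPLE-based ground truth mitigates the inter-observer effects noted above.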

  1. Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET.

    PubMed

    Hatt, M; Lamare, F; Boussion, N; Turzo, A; Collet, C; Salzenstein, F; Roux, C; Jarritt, P; Carson, K; Cheze-Le Rest, C; Visvikis, D

    2007-06-21

Accurate volume of interest (VOI) estimation in PET is crucial in different oncology applications such as response to therapy evaluation and radiotherapy treatment planning. The objective of our study was to evaluate the performance of the proposed algorithm for automatic lesion volume delineation, namely the fuzzy hidden Markov chains (FHMC), against that of threshold-based techniques, the current state of the art in clinical practice. Like the classical hidden Markov chain (HMC) algorithm, FHMC takes into account noise, voxel intensity and spatial correlation in order to classify a voxel as background or functional VOI. However, the novelty of the fuzzy model consists of the inclusion of an estimation of imprecision, which should subsequently lead to a better modelling of the 'fuzzy' nature of the object of interest boundaries in emission tomography data. The performance of the algorithms has been assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1) and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm3 and 64 mm3). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold-based methodology for functional volume determination or activity concentration recovery considering a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore, differences between the classification and volume estimation errors evaluated were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithms was less susceptible to image noise levels in comparison to the threshold-based techniques. The analysis of both
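The threshold-based baseline that FHMC is compared against can be sketched as a fixed-fraction-of-maximum rule. The 40% default below is a common clinical choice and an assumption here, not a value taken from the paper:

```python
import numpy as np

def threshold_voi(volume: np.ndarray, fraction: float = 0.40) -> np.ndarray:
    """Boolean VOI mask: keep voxels at or above a fixed fraction
    of the maximum uptake in the volume."""
    return volume >= fraction * volume.max()
```

FHMC replaces this single global cutoff with a statistical classification that also models noise, spatial correlation, and imprecision.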

  2. Semiautomatic regional segmentation to measure orbital fat volumes in thyroid-associated ophthalmopathy. A validation study.

    PubMed

    Comerci, M; Elefante, A; Strianese, D; Senese, R; Bonavolontà, P; Alfano, B; Bonavolontà, B; Brunetti, A

    2013-08-01

    This study was designed to validate a novel semi-automated segmentation method to measure regional intra-orbital fat tissue volume in Graves' ophthalmopathy. Twenty-four orbits from 12 patients with Graves' ophthalmopathy, 24 orbits from 12 controls, ten orbits from five MRI study simulations and two orbits from a digital model were used. Following manual region of interest definition of the orbital volumes performed by two operators with different levels of expertise, an automated procedure calculated intra-orbital fat tissue volumes (global and regional, with automated definition of four quadrants). In patients with Graves' disease, clinical activity score and degree of exophthalmos were measured and correlated with intra-orbital fat volumes. Operator performance was evaluated and statistical analysis of the measurements was performed. Accurate intra-orbital fat volume measurements were obtained with coefficients of variation below 5%. The mean operator difference in total fat volume measurements was 0.56%. Patients had significantly higher intra-orbital fat volumes than controls (p<0.001 using Student's t test). Fat volumes and clinical score were significantly correlated (p<0.001). The semi-automated method described here can provide accurate, reproducible intra-orbital fat measurements with low inter-operator variation and good correlation with clinical data.

  3. Clinical value of prostate segmentation and volume determination on MRI in benign prostatic hyperplasia.

    PubMed

    Garvey, Brian; Türkbey, Barış; Truong, Hong; Bernardo, Marcelino; Periaswamy, Senthil; Choyke, Peter L

    2014-01-01

    Benign prostatic hyperplasia (BPH) is a nonmalignant pathological enlargement of the prostate, which occurs primarily in the transitional zone. BPH is highly prevalent and is a major cause of lower urinary tract symptoms in aging males, although there is no direct relationship between prostate volume and symptom severity. The progression of BPH can be quantified by measuring the volumes of the whole prostate and its zones, based on image segmentation on magnetic resonance imaging. Prostate volume determination via segmentation is a useful measure for patients undergoing therapy for BPH. However, prostate segmentation is not widely used due to the excessive time required for even experts to manually map the margins of the prostate. Here, we review and compare new methods of prostate volume segmentation using both manual and automated methods, including the ellipsoid formula, manual planimetry, and semiautomated and fully automated segmentation approaches. We highlight the utility of prostate segmentation in the clinical context of assessing BPH.
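The ellipsoid formula mentioned above is the standard clinical shortcut: prostate volume is approximated as π/6 times the product of three orthogonal diameters. A minimal sketch (the function name and units are illustrative):

```python
import math

def ellipsoid_volume(length_cm: float, width_cm: float, height_cm: float) -> float:
    """Prolate-ellipsoid approximation of prostate volume (cm^3):
    V ≈ (pi / 6) * length * width * height."""
    return math.pi / 6.0 * length_cm * width_cm * height_cm
```

Planimetry and automated segmentation replace these three hand-measured diameters with per-slice or per-voxel contours, at the cost of more measurement time.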

  4. Real-Time Automatic Segmentation of Optical Coherence Tomography Volume Data of the Macular Region

    PubMed Central

    Tian, Jing; Varga, Boglárka; Somfai, Gábor Márk; Lee, Wen-Hsiang; Smiddy, William E.; Cabrera DeBuc, Delia

    2015-01-01

Optical coherence tomography (OCT) is a high-speed, high-resolution and non-invasive imaging modality that enables the capturing of the 3D structure of the retina. The fast and automatic analysis of 3D volume OCT data is crucial, given the increasing amount of patient-specific 3D imaging data. In this work, we have developed an automatic algorithm, OCTRIMA 3D (OCT Retinal IMage Analysis 3D), that can segment OCT volume data in the macular region quickly and accurately. The proposed method is implemented using shortest-path based graph search, which detects the retinal boundaries by searching for the shortest path between two end nodes using Dijkstra’s algorithm. Additional techniques, such as inter-frame flattening, inter-frame search region refinement, masking and biasing, were introduced to exploit the spatial dependency between adjacent frames and reduce the processing time. Our segmentation algorithm was evaluated by comparison with manual labelings and three state-of-the-art graph-based segmentation methods. The processing time for the whole OCT volume of 496×644×51 voxels (captured by Spectralis SD-OCT) was 26.15 seconds, which is at least a 2- to 8-fold speedup compared to other, similar reference algorithms used in the comparisons. The average unsigned error was about 1 pixel (∼4 microns), which was also lower than that of the reference algorithms. We believe that OCTRIMA 3D is a leap forward towards achieving reliable, real-time analysis of 3D OCT retinal data. PMID:26258430
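The shortest-path search at the core of this approach can be illustrated with a minimal Dijkstra implementation over a toy 2-D cost grid. In practice the edge weights would be derived from image gradients so the cheapest path traces a retinal boundary; everything below is an illustrative sketch, not the OCTRIMA 3D code:

```python
import heapq

def shortest_path_cost(cost, start, goal):
    """Dijkstra over a 4-connected grid; cost[r][c] is the price of
    entering pixel (r, c). Returns the minimal total path cost."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((0, 1), (1, 0), (-1, 0), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")
```

The inter-frame refinements described in the abstract shrink the search region per B-scan, which is where the reported speedup comes from.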

  5. Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET

    PubMed Central

    Hatt, Mathieu; Lamare, Frédéric; Boussion, Nicolas; Roux, Christian; Turzo, Alexandre; Cheze-Lerest, Catherine; Jarritt, Peter; Carson, Kathryn; Salzenstein, Fabien; Collet, Christophe; Visvikis, Dimitris

    2007-01-01

Accurate volume of interest (VOI) estimation in PET is crucial in different oncology applications such as response to therapy evaluation and radiotherapy treatment planning. The objective of our study was to evaluate the performance of the proposed algorithm for automatic lesion volume delineation, namely the Fuzzy Hidden Markov Chains (FHMC), against that of threshold-based techniques, the current state of the art in clinical practice. Like the classical Hidden Markov Chain (HMC) algorithm, FHMC takes into account noise, voxel intensity and spatial correlation in order to classify a voxel as background or functional VOI. However, the novelty of the fuzzy model consists of the inclusion of an estimation of imprecision, which should subsequently lead to a better modelling of the “fuzzy” nature of the object of interest boundaries in emission tomography data. The performance of the algorithms has been assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1) and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm3 and 64 mm3). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold-based methodology for functional volume determination or activity concentration recovery considering a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore, differences between the classification and volume estimation errors evaluated were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithms was less susceptible to image noise levels in comparison to the threshold-based techniques. The analysis of both

  6. Volume Averaging of Spectral-Domain Optical Coherence Tomography Impacts Retinal Segmentation in Children

    PubMed Central

    Trimboli-Heidler, Carmelina; Vogt, Kelly; Avery, Robert A.

    2016-01-01

Purpose To determine the influence of volume averaging on retinal layer thickness measures acquired with spectral-domain optical coherence tomography (SD-OCT) in children. Methods Macular SD-OCT images were acquired using three different volume settings (i.e., 1, 3, and 9 volumes) in children enrolled in a prospective OCT study. Total retinal thickness and five inner layers were measured around an Early Treatment Diabetic Retinopathy Scale (ETDRS) grid using beta-version automated segmentation software for the Spectralis. The magnitude of manual segmentation required to correct the automated segmentation was classified as either minor (<12 lines adjusted), moderate (>12 and <25 lines adjusted), severe (>26 and <48 lines adjusted), or fail (>48 lines adjusted or could not adjust due to poor image quality). The frequency of each edit classification was assessed for each volume setting. Thickness, paired difference, and 95% limits of agreement of each anatomic quadrant were compared across volume density. Results Seventy-five subjects (median age 11.8 years, range 4.3–18.5 years) contributed 75 eyes. Less than 5% of the 9- and 3-volume scans required more than minor manual segmentation corrections, compared with 71% of 1-volume scans. The inner (3 mm) region demonstrated similar measures across all layers, regardless of volume number. The 1-volume scans demonstrated greater variability of the retinal nerve fiber layer (RNFL) thickness, compared with the other volumes in the outer (6 mm) region. Conclusions In children, volume averaging of SD-OCT acquisitions reduces retinal layer segmentation errors. Translational Relevance This study highlights the importance of volume averaging when acquiring macula volumes intended for multilayer segmentation. PMID:27570711

  7. Segmentation of histological structures for fractal analysis

    NASA Astrophysics Data System (ADS)

    Dixon, Vanessa; Kouznetsov, Alexei; Tambasco, Mauro

    2009-02-01

    Pathologists examine histology sections to make diagnostic and prognostic assessments regarding cancer based on deviations in cellular and/or glandular structures. However, these assessments are subjective and exhibit some degree of observer variability. Recent studies have shown that fractal dimension (a quantitative measure of structural complexity) has proven useful for characterizing structural deviations and exhibits great potential for automated cancer diagnosis and prognosis. Computing fractal dimension relies on accurate image segmentation to capture the architectural complexity of the histology specimen. For this purpose, previous studies have used techniques such as intensity histogram analysis and edge detection algorithms. However, care must be taken when segmenting pathologically relevant structures since improper edge detection can result in an inaccurate estimation of fractal dimension. In this study, we established a reliable method for segmenting edges from grayscale images. We used a Koch snowflake, an object of known fractal dimension, to investigate the accuracy of various edge detection algorithms and selected the most appropriate algorithm to extract the outline structures. Next, we created validation objects ranging in fractal dimension from 1.3 to 1.9 imitating the size, structural complexity, and spatial pixel intensity distribution of stained histology section images. We applied increasing intensity thresholds to the validation objects to extract the outline structures and observe the effects on the corresponding segmentation and fractal dimension. The intensity threshold yielding the maximum fractal dimension provided the most accurate fractal dimension and segmentation, indicating that this quantitative method could be used in an automated classification system for histology specimens.
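The fractal dimension discussed above is commonly estimated by box counting: cover the segmented structure with boxes of several sizes, count how many boxes are occupied at each scale, and fit the slope of log(count) versus log(1/size). A sketch for a 2-D binary mask (illustrative, not the authors' implementation; a filled square should come out near dimension 2, a Koch snowflake outline near 1.26):

```python
import numpy as np

def box_count_dimension(mask: np.ndarray, sizes=(1, 2, 4, 8, 16)) -> float:
    """Estimate the box-counting (fractal) dimension of a 2-D binary mask."""
    h, w = mask.shape
    counts = []
    for s in sizes:
        # Trim to a multiple of s, then count boxes of side s that
        # contain at least one foreground pixel.
        boxed = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(boxed.any(axis=(1, 3)).sum())
    # Dimension = slope of log(count) against log(1/size).
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)
```

The study's observation that improper edge detection skews the estimate follows directly: spurious or missing boundary pixels change the occupied-box counts at fine scales, which dominates the fitted slope.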

  8. Design of a Single Segment Conductance Catheter for Measurement of Left Ventricular Volume

    DTIC Science & Technology

    2007-11-02

segment catheter (Cordis Webster, Baldwin Park, Calif.), which measures conductance in five segments, which all are added to form a global volume...Houston, Tex.) and a 7F, 12-pole conductance catheter (Cordis Webster, Baldwin Park, Calif.) with 7 or 9 mm spacing between the electrodes, depending on...segment 1. Electrode 3 will at each sample have the potential V1 plus V2; that is, the potential difference of segment one plus two. Similarly, the

  9. Midbrain volume segmentation using active shape models and LBPs

    NASA Astrophysics Data System (ADS)

    Olveres, Jimena; Nava, Rodrigo; Escalante-Ramírez, Boris; Cristóbal, Gabriel; García-Moreno, Carla María.

    2013-09-01

    In recent years, the use of Magnetic Resonance Imaging (MRI) to detect different brain structures such as midbrain, white matter, gray matter, corpus callosum, and cerebellum has increased. This fact together with the evidence that midbrain is associated with Parkinson's disease has led researchers to consider midbrain segmentation as an important issue. Nowadays, Active Shape Models (ASM) are widely used in literature for organ segmentation where the shape is an important discriminant feature. Nevertheless, this approach is based on the assumption that objects of interest are usually located on strong edges. Such a limitation may lead to a final shape far from the actual shape model. This paper proposes a novel method based on the combined use of ASM and Local Binary Patterns for segmenting midbrain. Furthermore, we analyzed several LBP methods and evaluated their performance. The joint-model considers both global and local statistics to improve final adjustments. The results showed that our proposal performs substantially better than the ASM algorithm and provides better segmentation measurements.

  10. Multi-region unstructured volume segmentation using tetrahedron filling

    SciTech Connect

    Willliams, Sean Jamerson; Dillard, Scott E; Thoma, Dan J; Hlawitschka, Mario; Hamann, Bernd

    2010-01-01

    Segmentation is one of the most common operations in image processing, and while there are several solutions already present in the literature, they each have their own benefits and drawbacks that make them well-suited for some types of data and not for others. We focus on the problem of breaking an image into multiple regions in a single segmentation pass, while supporting both voxel and scattered point data. To solve this problem, we begin with a set of potential boundary points and use a Delaunay triangulation to complete the boundaries. We use heuristic- and interaction-driven Voronoi clustering to find reasonable groupings of tetrahedra. Apart from the computation of the Delaunay triangulation, our algorithm has linear time complexity with respect to the number of tetrahedra.

  11. Image Segmentation Analysis for NASA Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2010-01-01

NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all of the computerized image analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate moving from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects together into region classes. No other practical, operational image segmentation approach has this tight integration of region growing object finding with region classification. This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image data sections are small enough to successfully mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation will provide an overview of the RHSEG algorithm and describe how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites from remotely sensed data.
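The region-merging step at the heart of HSEG-style segmentation can be illustrated with a 1-D toy: repeatedly merge the pair of adjacent regions whose mean intensities are closest, until the desired number of regions remains. This is a simplified sketch of the idea, not the RHSEG implementation (which also groups non-adjacent regions into classes and recursively subdivides the image):

```python
def merge_adjacent_regions(values, n_regions):
    """Greedy region merging on a 1-D signal: each region is tracked as
    (intensity sum, pixel count); adjacency is left-right neighbours.
    Returns the mean intensity of each final region."""
    regions = [(v, 1) for v in values]
    while len(regions) > n_regions:
        means = [s / c for s, c in regions]
        # Find the adjacent pair with the smallest mean-intensity gap.
        i = min(range(len(regions) - 1),
                key=lambda k: abs(means[k] - means[k + 1]))
        (s1, c1), (s2, c2) = regions[i], regions[i + 1]
        regions[i:i + 2] = [(s1 + s2, c1 + c2)]
    return [s / c for s, c in regions]
```

The combinatorial explosion mentioned in the abstract arises when every pair of regions (not just neighbours) must be compared at every step, which is what the recursive subdivision mitigates.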

  12. Static Tests of Segments of Tunnel Linings. Volume II. Data.

    DTIC Science & Technology

    1979-06-30

segments was performed by Rettig Machine Shop, Redlands, California, under the direction of D. F. Rettig. Design and casting of the cellular concrete...

  13. LANDSAT-D program. Volume 2: Ground segment

    NASA Technical Reports Server (NTRS)

    1984-01-01

Raw digital data, as received from the LANDSAT spacecraft, cannot generate images that meet specifications. Radiometric corrections must be made to compensate for aging and for differences in sensitivity among the instrument sensors. Geometric corrections must be made to compensate for off-nadir look angle and for spacecraft drift from its prescribed path. Corrections must also be made for look-angle jitter caused by vibrations induced by spacecraft equipment. The major components of the LANDSAT ground segment and their functions are discussed.

  14. High volume production trial of mirror segments for the Thirty Meter Telescope

    NASA Astrophysics Data System (ADS)

    Oota, Tetsuji; Negishi, Mahito; Shinonaga, Hirohiko; Gomi, Akihiko; Tanaka, Yutaka; Akutsu, Kotaro; Otsuka, Itaru; Mochizuki, Shun; Iye, Masanori; Yamashita, Takuya

    2014-07-01

The Thirty Meter Telescope is a next-generation optical/infrared telescope to be constructed on Mauna Kea, Hawaii, toward the end of this decade as an international project. Its 30 m primary mirror consists of 492 off-axis aspheric segmented mirrors. High-volume production of hundreds of segments started in 2013 based on the contract between the National Astronomical Observatory of Japan and Canon Inc. This paper describes the achievements of the high-volume production trials. The Stressed Mirror Figuring technique established by Keck Telescope engineers has been adapted and adopted. To measure the segment surface figure, a novel stitching algorithm is evaluated by experiment. The integration procedure is checked with a prototype segment.

  15. Segmentation and Quantitative Analysis of Epithelial Tissues.

    PubMed

    Aigouy, Benoit; Umetsu, Daiki; Eaton, Suzanne

    2016-01-01

Epithelia are tissues that regulate exchanges with the environment. They are very dynamic and can acquire virtually any shape; at the cellular level, they are composed of cells tightly connected by junctions. Most often epithelia are amenable to live imaging; however, the large number of cells composing an epithelium and the absence of informatics tools dedicated to epithelial analysis have largely prevented tissue-scale studies. Here we present Tissue Analyzer, a free tool that can be used to segment and analyze epithelial cells and monitor tissue dynamics.

  16. ROC Analysis of IR Segmentation Techniques.

    DTIC Science & Technology

    1994-12-01

AFIT/GE/ENG/94D-15 ROC Analysis of IR Segmentation Techniques THESIS...classification systems was measured by the percentage of correct decisions, but that "percent correct" does not account for the false-positive and...false-negative errors involved [13]. For example, if 5% of people have a particular disease, then a system can be 95% accurate by calling everyone

  17. Segmentation propagation for the automated quantification of ventricle volume from serial MRI

    NASA Astrophysics Data System (ADS)

    Linguraru, Marius George; Butman, John A.

    2009-02-01

    Accurate ventricle volume estimates could potentially improve the understanding and diagnosis of communicating hydrocephalus. Postoperative communicating hydrocephalus has been recognized in patients with brain tumors where the changes in ventricle volume can be difficult to identify, particularly over short time intervals. Because of the complex alterations of brain morphology in these patients, the segmentation of brain ventricles is challenging. Our method evaluates ventricle size from serial brain MRI examinations; we (i) combined serial images to increase SNR, (ii) automatically segmented this image to generate a ventricle template using fast marching methods and geodesic active contours, and (iii) propagated the segmentation using deformable registration of the original MRI datasets. By applying this deformation to the ventricle template, serial volume estimates were obtained in a robust manner from routine clinical images (0.93 overlap) and their variation analyzed.

  18. Similarity enhancement for automatic segmentation of cardiac structures in computed tomography volumes

    PubMed Central

    Vera, Miguel; Bravo, Antonio; Garreau, Mireille; Medina, Rubén

    2011-01-01

The aim of this research is to propose a 3-D similarity-enhancement technique useful for improving the segmentation of cardiac structures in Multi-Slice Computerized Tomography (MSCT) volumes. The similarity enhancement is obtained by subtracting the intensity of the current voxel and the gray levels of its adjacent voxels in two volumes resulting after preprocessing. Such volumes are: (a) a volume obtained after applying a Gaussian distribution and a morphological top-hat filter to the input, and (b) a smoothed volume generated by processing the input with an average filter. Then, the similarity volume is used as input to a region growing algorithm. This algorithm is applied to extract the shape of cardiac structures, such as the left and right ventricles, in MSCT volumes. Qualitative and quantitative results show the good performance of the proposed approach for discrimination of cardiac cavities. PMID:22256220
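The subtraction-based enhancement can be sketched as follows. The separable 3-point moving average below is an illustrative stand-in for the paper's Gaussian/top-hat and average-filter pipeline, assumed here only to keep the sketch dependency-free:

```python
import numpy as np

def similarity_volume(vol: np.ndarray) -> np.ndarray:
    """Subtract a locally smoothed copy of the volume from the raw
    intensities so that homogeneous cavities stand out for region growing.
    Smoothing here is a simple separable 3-point moving average."""
    smooth = np.copy(vol).astype(float)
    for axis in range(vol.ndim):
        smooth = (np.roll(smooth, 1, axis) + smooth
                  + np.roll(smooth, -1, axis)) / 3.0
    return vol - smooth
```

On a perfectly homogeneous region the difference is zero, while voxels near edges deviate, which is the contrast the region-growing step exploits.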

  19. Pulmonary airways tree segmentation from CT examinations using adaptive volume of interest

    NASA Astrophysics Data System (ADS)

    Park, Sang Cheol; Kim, Won Pil; Zheng, Bin; Leader, Joseph K.; Pu, Jiantao; Tan, Jun; Gur, David

    2009-02-01

Airways tree segmentation is an important step in quantitatively assessing the severity of and changes in several lung diseases such as chronic obstructive pulmonary disease (COPD), asthma, and cystic fibrosis. It can also be used in guiding bronchoscopy. The purpose of this study is to develop an automated scheme for segmenting the airways tree structure depicted on chest CT examinations. After lung volume segmentation, the scheme defines the first cylinder-like volume of interest (VOI) using a series of images depicting the trachea. The scheme then iteratively defines and adds subsequent VOIs using a region growing algorithm combined with adaptively determined thresholds in order to trace possible sections of airways located inside the combined VOI in question. The airway tree segmentation process is automatically terminated after the scheme assesses all defined VOIs in the iteratively assembled VOI list. In this preliminary study, ten CT examinations with 1.25 mm section thickness and two different CT image reconstruction kernels ("bone" and "standard") were selected and used to test the proposed airways tree segmentation scheme. The experimental results showed that (1) adopting this approach effectively prevented the scheme from infiltrating into the parenchyma, (2) the proposed method segmented the airways trees reasonably accurately, with a lower false-positive identification rate compared with other previously reported schemes based on 2-D image segmentation and data analyses, and (3) the proposed adaptive, iterative threshold selection method for the region growing step in each identified VOI enables the scheme to segment the airways trees reliably to the 4th generation in this limited dataset, with successful segmentation up to the 5th generation in a fraction of the airways tree branches.
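The region-growing step applied inside each VOI can be sketched with a fixed intensity tolerance; the actual scheme chooses its thresholds adaptively per VOI, so the 2-D, fixed-tolerance version below is purely illustrative:

```python
from collections import deque

def region_grow(img, seed, tol):
    """Breadth-first region growing on a 2-D intensity grid: starting at
    the seed pixel, absorb 4-connected neighbours whose intensity stays
    within tol of the seed value. Returns the set of grown pixels."""
    rows, cols = len(img), len(img[0])
    base = img[seed[0]][seed[1]]
    grown, frontier = {seed}, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in grown
                    and abs(img[nr][nc] - base) <= tol):
                grown.add((nr, nc))
                frontier.append((nr, nc))
    return grown
```

Choosing the tolerance too generously is exactly the parenchyma-leakage failure mode the VOI-by-VOI adaptive thresholds are designed to prevent.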

  20. Multi-Segment Hemodynamic and Volume Assessment With Impedance Plethysmography: Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Ku, Yu-Tsuan E.; Montgomery, Leslie D.; Webbon, Bruce W. (Technical Monitor)

    1995-01-01

    Definition of multi-segmental circulatory and volume changes in the human body provides an understanding of the physiologic responses to various aerospace conditions. We have developed instrumentation and testing procedures at NASA Ames Research Center that may be useful in biomedical research and clinical diagnosis. Specialized two, four, and six channel impedance systems will be described that have been used to measure calf, thigh, thoracic, arm, and cerebral hemodynamic and volume changes during various experimental investigations.

  1. Comparison of supervised MRI segmentation methods for tumor volume determination during therapy.

    PubMed

    Vaidyanathan, M; Clarke, L P; Velthuizen, R P; Phuphanich, S; Bensaid, A M; Hall, L O; Bezdek, J C; Greenberg, H; Trotti, A; Silbiger, M

    1995-01-01

Two different multispectral pattern recognition methods are used to segment magnetic resonance images (MRI) of the brain for quantitative estimation of tumor volume and volume changes with therapy. A supervised k-nearest neighbor (kNN) rule and a semi-supervised fuzzy c-means (SFCM) method are used to segment MRI slice data. Tumor volumes as determined by the kNN and SFCM segmentation methods are compared with two reference methods, based on image grey scale, as a basis for an estimation of ground truth, namely: (a) a commonly used seed growing method that is applied to the contrast enhanced T1-weighted image, and (b) a manual segmentation method using a custom-designed graphical user interface applied to the same raw image (T1-weighted) dataset. Emphasis is placed on measurement of intra- and inter-observer reproducibility using the proposed methods. Intra- and inter-observer variation for the kNN method was 9% and 5%, respectively. The results for the SFCM method were slightly better, at 6% and 4%, respectively. For the seed growing method, the intra-observer variation was 6% and the inter-observer variation was 17%, significantly larger than for the multispectral methods. The absolute tumor volume determined by the multispectral segmentation methods was consistently smaller than that observed for the reference methods. The results of this study are found to be very patient case-dependent. The results for SFCM suggest that it should be useful for relative measurements of tumor volume during therapy, but further studies are required. This work demonstrates the need for minimally supervised or unsupervised methods for tumor volume measurements.
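The supervised kNN rule classifies each voxel's multispectral feature vector (e.g., its T1-, T2-, and PD-weighted intensities) by majority vote among its k nearest labelled training voxels. A minimal sketch with illustrative data, not the authors' implementation:

```python
import numpy as np

def knn_label(train_x: np.ndarray, train_y: np.ndarray,
              x: np.ndarray, k: int = 3) -> int:
    """Classify feature vector x by majority vote of its k nearest
    labelled training vectors (Euclidean distance)."""
    d = np.linalg.norm(train_x - x, axis=1)
    nearest = np.argsort(d)[:k]
    return int(np.bincount(train_y[nearest]).argmax())
```

The semi-supervised fuzzy c-means alternative replaces the hard vote with soft cluster memberships, which is one reason its observer variation came out slightly lower.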

  2. Swarm Intelligence Integrated Graph-Cut for Liver Segmentation from 3D-CT Volumes

    PubMed Central

    Eapen, Maya; Korah, Reeba; Geetha, G.

    2015-01-01

The segmentation of organs in CT volumes is a prerequisite for diagnosis and treatment planning. In this paper, we focus on liver segmentation from contrast-enhanced abdominal CT volumes, a challenging task due to intensity overlapping, blurred edges, large variability in liver shape, and complex background with cluttered features. The algorithm integrates multidiscriminative cues (i.e., prior domain information, intensity model, and regional characteristics of the liver) in a graph-cut image segmentation framework. The paper proposes a swarm intelligence inspired edge-adaptive weight function for regulating the energy minimization of the traditional graph-cut model. The model is validated both qualitatively (by clinicians and radiologists) and quantitatively on publicly available computed tomography (CT) datasets (MICCAI 2007 liver segmentation challenge, 3D-IRCAD). Quantitative evaluation of segmentation results is performed using liver volume calculations, and mean scores of 80.8% and 82.5% are obtained on the MICCAI and IRCAD datasets, respectively. The experimental results illustrate the efficiency and effectiveness of the proposed method. PMID:26689833

  3. Sequential Registration-Based Segmentation of the Prostate Gland in MR Image Volumes.

    PubMed

    Khalvati, Farzad; Salmanpour, Aryan; Rahnamayan, Shahryar; Haider, Masoom A; Tizhoosh, H R

    2016-04-01

Accurate and fast segmentation and volume estimation of the prostate gland in magnetic resonance (MR) images are necessary steps in the diagnosis, treatment, and monitoring of prostate cancer. This paper presents an algorithm for prostate gland volume estimation based on the semi-automated segmentation of individual slices in T2-weighted MR image sequences. The proposed sequential registration-based segmentation (SRS) algorithm, which was inspired by the clinical workflow during medical image contouring, relies on inter-slice image registration and user interaction/correction to segment the prostate gland without the use of an anatomical atlas. It automatically generates contours for each slice using a registration algorithm, provided that the user edits and approves the markings in some previous slices. We conducted comprehensive experiments to measure the performance of the proposed algorithm using three registration methods (i.e., rigid, affine, and nonrigid). Five radiation oncologists participated in the study, contouring the prostate MR (T2-weighted) images of 15 patients both manually and using the SRS algorithm. Compared to manual segmentation, on average, the SRS algorithm reduced the contouring time by 62% (a speedup factor of 2.64×) while maintaining the segmentation accuracy at the same level as the intra-user agreement level (i.e., a Dice similarity coefficient of 91% versus 90%). The proposed algorithm exploits the inter-slice similarity of volumetric MR image series to achieve highly accurate results while significantly reducing the contouring time.

  4. Precise segmentation of multiple organs in CT volumes using learning-based approach and information theory.

    PubMed

    Lu, Chao; Zheng, Yefeng; Birkbeck, Neil; Zhang, Jingdan; Kohlberger, Timo; Tietjen, Christian; Boettger, Thomas; Duncan, James S; Zhou, S Kevin

    2012-01-01

    In this paper, we present a novel method incorporating information theory into a learning-based approach for automatic and accurate pelvic organ segmentation (including the prostate, bladder, and rectum). We target 3D CT volumes that are generated using different scanning protocols (e.g., contrast and non-contrast, with and without an implant in the prostate, various resolutions and positions), and the volumes come from largely diverse sources (e.g., disease in different organs). Three key ingredients are combined to solve this challenging segmentation problem. First, marginal space learning (MSL) is applied to efficiently and effectively localize the multiple organs in the largely diverse CT volumes. Second, learning-based techniques with steerable features are applied for robust boundary detection, enabling the handling of highly heterogeneous texture patterns. Third, a novel information-theoretic scheme is incorporated into the boundary inference process: the Jensen-Shannon divergence further drives the mesh to the best fit of the image, thus improving segmentation performance. The proposed approach is tested on a challenging dataset containing 188 volumes from diverse sources. Our approach not only produces excellent segmentation accuracy, but also runs about eighty times faster than previous state-of-the-art solutions. The proposed method can be applied to CT images to provide visual guidance to physicians during computer-aided diagnosis, treatment planning, and image-guided radiotherapy to treat cancers in the pelvic region.
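The Jensen-Shannon divergence used to score boundary fit can be sketched directly; this is a generic implementation of the divergence itself, not the paper's full boundary-inference scheme (NumPy assumed):

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions:
    symmetric, always finite, and bounded above by ln(2)."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)                 # the mixture distribution
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

print(js_divergence([0.5, 0.5], [0.5, 0.5]))   # 0.0 for identical inputs
print(js_divergence([1.0, 0.0], [0.0, 1.0]))   # approaches ln(2) ~ 0.693
```

In a boundary-inference setting, such a divergence would compare intensity distributions inside and outside a candidate mesh, with lower overlap indicating a better fit.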

  5. Milling Stability Analysis Based on Chebyshev Segmentation

    NASA Astrophysics Data System (ADS)

    HUANG, Jianwei; LI, He; HAN, Ping; Wen, Bangchun

    2016-09-01

    The Chebyshev segmentation method was used to discretize the time period contained in the delay differential equation; the Newton second-order difference quotient method was then used to calculate the cutter motion vector at each time endpoint, and Floquet theory was used to determine the stability of the milling system once its transfer matrix had been obtained. Using these methods, the stability of a two-degree-of-freedom milling system was investigated, and stability lobe diagrams of the system were obtained. The results showed that the proposed methods have the following advantages. First, at the same calculation accuracy, Chebyshev segmentation needs fewer points to represent the time period than uniform (average) segmentation, so its computational efficiency is higher. Second, when the time period is divided into the same number of parts, the stability lobe diagrams obtained by the Chebyshev segmentation method are more accurate than those of uniform segmentation.
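The efficiency advantage of Chebyshev points over uniform spacing comes from their clustering near the interval endpoints, which improves polynomial-interpolation accuracy for a given number of points. A minimal sketch of Chebyshev-Lobatto-style nodes on an interval, assuming NumPy:

```python
import numpy as np

def chebyshev_points(n, a=0.0, b=1.0):
    """n Chebyshev-Lobatto-style points on [a, b]: clustered toward the
    endpoints, unlike uniform spacing, which improves the accuracy of
    polynomial interpolation over the discretized time period."""
    x = np.cos(np.pi * np.arange(n) / (n - 1))   # nodes on [-1, 1], descending
    return 0.5 * (a + b) + 0.5 * (b - a) * x[::-1]

pts = chebyshev_points(9)
# Spacing near the endpoint is much finer than near the middle:
print(pts[1] - pts[0], pts[5] - pts[4])
```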

  6. Analysis of Random Segment Errors on Coronagraph Performance

    NASA Technical Reports Server (NTRS)

    Stahl, Mark T.; Stahl, H. Philip; Shaklan, Stuart B.; N'Diaye, Mamadou

    2016-01-01

    At the 2015 SPIE O&P conference we presented "Preliminary Analysis of Random Segment Errors on Coronagraph Performance." Key findings: contrast leakage for a 4th-order Sinc^2(X) coronagraph is 10X more sensitive to random segment piston than to random tip/tilt; apertures with fewer segments (i.e., 1 ring) or very many segments (>16 rings) have less contrast leakage as a function of piston or tip/tilt than an aperture with 2 to 4 rings of segments. Revised finding: piston is only 2.5X more sensitive than tip/tilt.

  7. Segmentation of organs at risk in CT volumes of head, thorax, abdomen, and pelvis

    NASA Astrophysics Data System (ADS)

    Han, Miaofei; Ma, Jinfeng; Li, Yan; Li, Meiling; Song, Yanli; Li, Qiang

    2015-03-01

    Accurate segmentation of organs at risk (OARs) is a key step in the treatment planning system (TPS) of image guided radiation therapy. We are developing three classes of methods to segment 17 organs at risk throughout the whole body, including brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, liver, kidneys, spleen, prostate, rectum, femoral heads, and skin. The three classes of segmentation methods include (1) threshold-based methods for organs with large contrast relative to adjacent structures such as lungs, trachea, and skin; (2) context-driven Generalized Hough Transform-based methods combined with a graph cut algorithm for robust localization and segmentation of liver, kidneys and spleen; and (3) atlas and registration-based methods for segmentation of heart and all organs in CT volumes of head and pelvis. The segmentation accuracy for the seventeen organs was subjectively evaluated by two medical experts using a three-level score: 0, poor (unusable in clinical practice); 1, acceptable (minor revision needed); and 2, good (nearly no revision needed). A database was collected from Ruijin Hospital, Huashan Hospital, and Xuhui Central Hospital in Shanghai, China, including 127 head scans, 203 thoracic scans, 154 abdominal scans, and 73 pelvic scans. The percentages of "good" segmentation results were 97.6%, 92.9%, 81.1%, 87.4%, 85.0%, 78.7%, 94.1%, 91.1%, 81.3%, 86.7%, 82.5%, 86.4%, 79.9%, 72.6%, 68.5%, 93.2%, 96.9% for brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, liver, kidneys, spleen, prostate, rectum, femoral heads, and skin, respectively. Various organs at risk can be reliably segmented from CT scans by use of the three classes of segmentation methods.

  8. Generalized method for partial volume estimation and tissue segmentation in cerebral magnetic resonance images

    PubMed Central

    Khademi, April; Venetsanopoulos, Anastasios; Moody, Alan R.

    2014-01-01

    An artifact found in magnetic resonance images (MRI) called partial volume averaging (PVA) has received much attention since accurate segmentation of cerebral anatomy and pathology is impeded by this artifact. Traditional neurological segmentation techniques rely on Gaussian mixture models to handle noise and PVA, or high-dimensional feature sets that exploit redundancy in multispectral datasets. Unfortunately, model-based techniques may not be optimal for images with non-Gaussian noise distributions and/or pathology, and multispectral techniques model probabilities instead of the partial volume (PV) fraction. For robust segmentation, a PV fraction estimation approach is developed for cerebral MRI that does not depend on predetermined intensity distribution models or multispectral scans. Instead, the PV fraction is estimated directly from each image using an adaptively defined global edge map constructed by exploiting a relationship between edge content and PVA. The final PVA map is used to segment anatomy and pathology with subvoxel accuracy. Validation on simulated and real, pathology-free T1 MRI (Gaussian noise), as well as pathological fluid attenuation inversion recovery MRI (non-Gaussian noise), demonstrates that the PV fraction is accurately estimated and the resultant segmentation is robust. Comparison to model-based methods further highlights the benefits of the current approach. PMID:26158022
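The underlying partial-volume idea can be illustrated with a simple two-tissue linear mixing relation; this is a generic sketch of the PV-fraction concept, not the paper's edge-based estimator (NumPy assumed):

```python
import numpy as np

def pv_fraction(intensity, mean_a, mean_b):
    """Fraction of tissue A in a voxel under the two-tissue linear
    mixing model I = alpha*mean_a + (1 - alpha)*mean_b, solved for
    alpha and clipped to the physically meaningful range [0, 1]."""
    alpha = (np.asarray(intensity, float) - mean_b) / (mean_a - mean_b)
    return np.clip(alpha, 0.0, 1.0)

# A voxel halfway between pure tissue A (80) and pure tissue B (160)
# is a 50/50 mix:
print(pv_fraction(120.0, mean_a=80.0, mean_b=160.0))  # 0.5
```

The paper's contribution is estimating this fraction without assuming the pure-tissue intensity distributions, using edge content instead.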

  9. Automatic segmentation of tumor-laden lung volumes from the LIDC database

    NASA Astrophysics Data System (ADS)

    O'Dell, Walter G.

    2012-03-01

    The segmentation of the lung parenchyma is often a critical pre-processing step prior to application of computer-aided detection of lung nodules. Segmentation of the lung volume can dramatically decrease computation time and reduce the number of false positive detections by excluding from consideration extra-pulmonary tissue. However, while many algorithms are capable of adequately segmenting the healthy lung, none have been demonstrated to work reliably well on tumor-laden lungs. Of particular challenge is to preserve tumorous masses attached to the chest wall, mediastinum or major vessels. In this role, lung volume segmentation comprises an important computational step that can adversely affect the performance of the overall CAD algorithm. An automated lung volume segmentation algorithm has been developed with the goals to maximally exclude extra-pulmonary tissue while retaining all true nodules. The algorithm comprises a series of tasks including intensity thresholding, 2-D and 3-D morphological operations, 2-D and 3-D floodfilling, and snake-based clipping of nodules attached to the chest wall. It features the ability to (1) exclude trachea and bowels, (2) snip large attached nodules using snakes, (3) snip small attached nodules using dilation, (4) preserve large masses fully internal to lung volume, (5) account for basal aspects of the lung where in a 2-D slice the lower sections appear to be disconnected from the main lung, and (6) achieve separation of the right and left hemi-lungs. The algorithm was developed and trained on the first 100 datasets of the LIDC image database.
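The thresholding and connected-component steps of such a pipeline can be sketched generically; the breadth-first labeling below is a self-contained stand-in for the morphological/floodfill cleanup stages, not the author's implementation (NumPy assumed):

```python
import numpy as np
from collections import deque

def largest_component(mask):
    """Keep only the largest 4-connected foreground component of a
    binary mask -- a common cleanup step after intensity thresholding,
    here labeled by breadth-first search."""
    mask = np.asarray(mask, bool)
    labels = np.zeros(mask.shape, int)
    sizes, cur = {}, 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        cur += 1
        q = deque([seed]); labels[seed] = cur; n = 0
        while q:
            r, c = q.popleft(); n += 1
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1]
                        and mask[rr, cc] and not labels[rr, cc]):
                    labels[rr, cc] = cur; q.append((rr, cc))
        sizes[cur] = n
    if not sizes:
        return np.zeros_like(mask)
    return labels == max(sizes, key=sizes.get)

img = np.array([[0, 1, 1, 0, 0],
                [0, 1, 1, 0, 1],
                [0, 0, 0, 0, 1],
                [0, 0, 0, 0, 0]])
print(largest_component(img > 0).sum())  # 4: the small 2-pixel blob is dropped
```

In practice a library routine such as `scipy.ndimage.label` would replace the hand-rolled BFS; the full pipeline then adds morphology, floodfilling, and snake-based clipping on top.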

  10. Trabecular-iris circumference volume in open angle eyes using swept-source Fourier-domain anterior segment optical coherence tomography.

    PubMed

    Rigi, Mohammed; Blieden, Lauren S; Nguyen, Donna; Chuang, Alice Z; Baker, Laura A; Bell, Nicholas P; Lee, David A; Mankiewicz, Kimberly A; Feldman, Robert M

    2014-01-01

    Purpose. To introduce a new anterior segment optical coherence tomography parameter, trabecular-iris circumference volume (TICV), which measures the integrated volume of the peripheral angle, and establish a reference range in normal, open angle eyes. Methods. One eye of each participant with open angles and a normal anterior segment was imaged using 3D mode by the CASIA SS-1000 (Tomey, Nagoya, Japan). Trabecular-iris space area (TISA) and TICV at 500 and 750 µm were calculated. Analysis of covariance was performed to examine the effect of age and its interaction with spherical equivalent. Results. The study included 100 participants with a mean age of 50 (±15) years (range 20-79). TICV showed a normal distribution with a mean (±SD) value of 4.75 µL (±2.30) for TICV500 and a mean (±SD) value of 8.90 µL (±3.88) for TICV750. Overall, TICV showed an age-related reduction (P = 0.035). In addition, angle volume increased with increased myopia for all age groups, except for those older than 65 years. Conclusions. This study introduces a new parameter to measure peripheral angle volume, TICV, with age-adjusted normal ranges for open angle eyes. Further investigation is warranted to determine the clinical utility of this new parameter.
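A hypothetical sketch of how a circumferential volume such as TICV relates to its cross-sectional area (TISA): integrating area samples taken around the angle circumference yields a volume in mm^3 (i.e., microliters). The function name, radius, and sampling below are illustrative assumptions, not values or methods from the study (NumPy assumed):

```python
import numpy as np

def circumferential_volume(area_samples_mm2, radius_mm):
    """Integrate cross-sectional area samples (mm^2), taken at evenly
    spaced angles around a circle of the given radius (mm), into a
    volume in mm^3 (= microliters): V ~ mean(area) * circumference."""
    areas = np.asarray(area_samples_mm2, float)
    return float(areas.mean() * 2.0 * np.pi * radius_mm)

# Uniform 0.2 mm^2 cross-sections around a 6 mm radius circumference:
print(round(circumferential_volume([0.2] * 128, 6.0), 2))  # 7.54 (uL)
```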

  11. Different approaches to synovial membrane volume determination by magnetic resonance imaging: manual versus automated segmentation.

    PubMed

    Ostergaard, M

    1997-11-01

    Automated fast (5-20 min) synovial membrane volume determination by MRI, based on pre-set post-gadolinium-DTPA enhancement thresholds, was evaluated as a substitute for a time-consuming (45-120 min), previously validated, manual segmentation method. Twenty-nine knees [rheumatoid arthritis (RA) 13, osteoarthritis (OA) 16] and 17 RA wrists were examined. At enhancement thresholds between 30 and 60%, the automated volumes (Syn(x%)) were highly significantly correlated to manual volumes (SynMan) (knees: rho = 0.78-0.91, P < 10^-5 to < 10^-9; wrists: rho = 0.87-0.95, P < 10^-4 to < 10^-6). The absolute values of the automated estimates were extremely dependent on the threshold chosen. At the optimal threshold of 45%, the median numerical difference from SynMan was 7 ml (17%) in knees and 2 ml (25%) in wrists. At this threshold, the difference was not related to diagnosis, clinical inflammation or synovial membrane volume, i.e., no systematic errors were found. The inter-MRI variation, evaluated in three knees and three wrists, was higher than by manual segmentation, particularly due to sensitivity to malalignment artefacts. Examination of test objects proved the high accuracy of the general methodology for volume determinations (maximal error 6.3%). Preceded by the determination of reproducibility and the optimal threshold at the available MR unit, automated 'threshold' segmentation appears to be acceptable when changes rather than absolute values of synovial membrane volumes are most important, e.g. in clinical trials.
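The pre-set enhancement-threshold rule can be sketched directly; `enhancement_mask` and its inputs are illustrative names, with only the 45% threshold taken from the study (NumPy assumed):

```python
import numpy as np

def enhancement_mask(pre, post, threshold=0.45):
    """Label voxels whose post-gadolinium enhancement exceeds a preset
    fraction of the pre-contrast signal; 0.45 (45%) was the optimal
    threshold reported in the study."""
    pre = np.asarray(pre, float)
    post = np.asarray(post, float)
    return (post - pre) / pre > threshold

pre = np.array([100.0, 100.0, 100.0])
post = np.array([150.0, 140.0, 110.0])
print(enhancement_mask(pre, post))  # only the 50%-enhancing voxel is kept
```

The strong threshold dependence reported above follows directly from this rule: shifting `threshold` moves the cut through the continuous enhancement distribution, changing the absolute volume.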

  12. Comparison of three image segmentation techniques for target volume delineation in positron emission tomography.

    PubMed

    Drever, Laura A; Roa, Wilson; McEwan, Alexander; Robinson, Don

    2007-03-09

    Incorporation of positron emission tomography (PET) data into radiotherapy planning is currently under investigation for numerous sites including lung, brain, head and neck, breast, and prostate. Accurate tumor-volume quantification is essential to the proper utilization of the unique information provided by PET. Unfortunately, target delineation within PET currently remains a largely unaddressed problem. We therefore examined the ability of three segmentation methods (thresholding, Sobel edge detection, and the watershed approach) to yield accurate delineation of PET target cross-sections. A phantom study employing well-defined cylindrical and spherical volumes and activity distributions provided an opportunity to assess the relative efficacy with which the three approaches could yield accurate target delineation in PET. Results revealed that threshold segmentation can accurately delineate target cross-sections, but that the Sobel and watershed techniques both consistently fail to correctly identify the size of experimental volumes. The usefulness of threshold-based segmentation is limited, however, by the dependence of the correct threshold (that which returns the correct area at each image slice) on target size.

  13. A fuzzy locally adaptive Bayesian segmentation approach for volume determination in PET.

    PubMed

    Hatt, Mathieu; Cheze le Rest, Catherine; Turzo, Alexandre; Roux, Christian; Visvikis, Dimitris

    2009-06-01

    Accurate volume estimation in positron emission tomography (PET) is crucial for different oncology applications. The objective of our study was to develop a new fuzzy locally adaptive Bayesian (FLAB) segmentation for automatic lesion volume delineation. FLAB was compared with a threshold approach as well as the previously proposed fuzzy hidden Markov chains (FHMC) and the fuzzy C-Means (FCM) algorithms. The performance of the algorithms was assessed on acquired datasets of the IEC phantom, covering a range of spherical lesion sizes (10-37 mm), contrast ratios (4:1 and 8:1), noise levels (1, 2, and 5 min acquisitions), and voxel sizes (8 and 64 mm^3). In addition, the performance of the FLAB model was assessed on realistic nonuniform and nonspherical volumes simulated from patient lesions. Results show that FLAB performs better than the other methodologies, particularly for smaller objects. The volume error was 5%-15% for the different sphere sizes (down to 13 mm), contrast and image qualities considered, with a high reproducibility (variation < 4%). By comparison, the thresholding results were greatly dependent on image contrast and noise, whereas FCM results were less dependent on noise but consistently failed to segment lesions < 2 cm. In addition, FLAB performed consistently better for lesions < 2 cm in comparison to the FHMC algorithm. Finally the FLAB model provided errors less than 10% for nonspherical lesions with inhomogeneous activity distributions. Future developments will concentrate on an extension of FLAB in order to allow the segmentation of separate activity distribution regions within the same functional volume as well as a robustness study with respect to different scanners and reconstruction algorithms.

  14. Segments.

    ERIC Educational Resources Information Center

    Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.

    2001-01-01

    Presents a market taxonomy for higher education, including what it reveals about the structure of the market, the model's technical attributes, and its capacity to explain pricing behavior. Details the identification of the principle seams separating one market segment from another and how student aspirations help to organize the market, making…

  15. Automatic coronary lumen segmentation with partial volume modeling improves lesions' hemodynamic significance assessment

    NASA Astrophysics Data System (ADS)

    Freiman, M.; Lamash, Y.; Gilboa, G.; Nickisch, H.; Prevrhal, S.; Schmitt, H.; Vembar, M.; Goshen, L.

    2016-03-01

    The determination of hemodynamic significance of coronary artery lesions from cardiac computed tomography angiography (CCTA) based on blood flow simulations has the potential to improve CCTA's specificity, thus resulting in improved clinical decision making. Accurate coronary lumen segmentation required for flow simulation is challenging due to several factors. Specifically, the partial-volume effect (PVE) in small-diameter lumina may result in overestimation of the lumen diameter that can lead to an erroneous hemodynamic significance assessment. In this work, we present a coronary artery segmentation algorithm tailored specifically for flow simulations by accounting for the PVE. Our algorithm detects lumen regions that may be subject to the PVE by analyzing the intensity values along the coronary centerline and integrates this information into a machine-learning based graph min-cut segmentation framework to obtain accurate coronary lumen segmentations. We demonstrate the improvement in hemodynamic significance assessment achieved by accounting for the PVE in the automatic segmentation of 91 coronary artery lesions from 85 patients. We compare hemodynamic significance assessments by means of fractional flow reserve (FFR) resulting from simulations on 3D models generated by our segmentation algorithm with and without accounting for the PVE. By accounting for the PVE we improved the area under the ROC curve for detecting hemodynamically significant CAD by 29% (N=91, 0.85 vs. 0.66, p<0.05, DeLong's test) with an invasive FFR threshold of 0.8 as the reference standard. Our algorithm has the potential to facilitate non-invasive hemodynamic significance assessment of coronary lesions.

  16. A novel colonic polyp volume segmentation method for computer tomographic colonography

    NASA Astrophysics Data System (ADS)

    Wang, Huafeng; Li, Lihong C.; Han, Hao; Song, Bowen; Peng, Hao; Wang, Yunhong; Wang, Lihua; Liang, Zhengrong

    2014-03-01

    Colorectal cancer is the third most common type of cancer. The disease can be prevented by detection and removal of precursor adenomatous polyps after diagnosis by experts on computed tomographic colonography (CTC). During CTC diagnosis, the radiologist looks for colon polyps and measures not only their size but also their malignancy. Segmenting polyp volumes from their complicated surroundings is therefore of much significance for CTC-based early diagnosis. Previously, polyp volumes were mainly obtained from manual or semi-automatic drawing by radiologists. As a result, some deviations cannot be avoided, since the polyps are usually small (6~9 mm) and radiologists' experience and knowledge vary from one to another. To achieve automatic polyp segmentation, we propose a new method based on a colon decomposition strategy. We evaluated our algorithm on both phantom and patient data. Experimental results demonstrate that our approach is capable of segmenting small polyps from their complicated background.

  17. A new method for volume segmentation of PET images, based on possibility theory.

    PubMed

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Lopes, Renaud; Huglo, Damien; Stute, Simon; Vermandel, Maximilien

    2011-02-01

    18F-fluorodeoxyglucose positron emission tomography (18FDG PET) has become an essential technique in oncology. Accurate segmentation and uptake quantification are crucial in order to enable objective follow-up, the optimization of radiotherapy planning, and therapeutic evaluation. We have designed and evaluated a new, nearly automatic and operator-independent segmentation approach. This incorporated possibility theory, in order to take into account the uncertainty and inaccuracy inherent in the image. The approach remained independent of PET facilities since it did not require any preliminary calibration. Good results were obtained from phantom images [percent error = 18.38% (mean) ± 9.72% (standard deviation)]. Results on simulated and anatomopathological data sets were quantified using different similarity measures and showed the method was efficient (simulated images: Dice index = 82.18% ± 13.53% for SUV = 2.5). The approach could, therefore, be an efficient and robust tool for uptake volume segmentation, and lead to new indicators for measuring volume of interest activity.

  18. Improvements in analysis techniques for segmented mirror arrays

    NASA Astrophysics Data System (ADS)

    Michels, Gregory J.; Genberg, Victor L.; Bisson, Gary R.

    2016-08-01

    The employment of actively controlled segmented mirror architectures has become increasingly common in the development of current astronomical telescopes. Optomechanical analysis of such hardware presents unique issues compared to that of monolithic mirror designs. The work presented here is a review of current capabilities and improvements in the methodology of the analysis of mechanically induced surface deformation of such systems. The recent improvements include the capability to differentiate surface deformation at the array and segment levels. This differentiation, which allows surface deformation analysis at the individual segment level, offers useful insight into the mechanical behavior of the segments that is unavailable from analysis solely at the parent array level. In addition, the capability to characterize the full displacement-vector deformation of collections of points allows analysis of mechanical disturbance predictions of assembly interfaces relative to other assembly interfaces. This capability, called racking analysis, allows engineers to develop designs for segment-to-segment phasing performance in assembly integration, 0g release, and thermal stability of operation. The performance predicted by racking analysis has the advantage of being comparable to the measurements used in the assembly of hardware. Approaches to all of the above issues are presented and demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.

  19. Hierarchical probabilistic Gabor and MRF segmentation of brain tumours in MRI volumes.

    PubMed

    Subbanna, Nagesh K; Precup, Doina; Collins, D Louis; Arbel, Tal

    2013-01-01

    In this paper, we present a fully automated hierarchical probabilistic framework for segmenting brain tumours from multispectral human brain magnetic resonance images (MRIs) using multiwindow Gabor filters and an adapted Markov Random Field (MRF) framework. In the first stage, a customised Gabor decomposition is developed, based on the combined-space characteristics of the two classes (tumour and non-tumour) in multispectral brain MRIs in order to optimally separate tumour (including edema) from healthy brain tissues. A Bayesian framework then provides a coarse probabilistic texture-based segmentation of tumours (including edema) whose boundaries are then refined at the voxel level through a modified MRF framework that carefully separates the edema from the main tumour. This customised MRF is not only built on the voxel intensities and class labels as in traditional MRFs, but also models the intensity differences between neighbouring voxels in the likelihood model, along with employing a prior based on local tissue class transition probabilities. The second inference stage is shown to resolve local inhomogeneities and impose a smoothing constraint, while also maintaining the appropriate boundaries as supported by the local intensity difference observations. The method was trained and tested on the publicly available MICCAI 2012 Brain Tumour Segmentation Challenge (BRATS) Database [1] on both synthetic and clinical volumes (low grade and high grade tumours). Our method performs well compared to state-of-the-art techniques, outperforming the results of the top methods in cases of clinical high grade and low grade tumour core segmentation by 40% and 45% respectively.

  20. The segmented regional volumes of the cerebrum and cerebellum in boys with Tourette syndrome.

    PubMed Central

    Hong, Kang-E; Ock, Sun-Myeong; Kang, Min-Hee; Kim, Chul-Eung; Bae, Jae-Nam; Lim, Myung-Kwan; Suh, Chang-Hae; Chung, Sun-Ju; Cho, Soo-Churl; Lee, Jeong-Seop

    2002-01-01

    Neuropathological deficits are an etiological factor in Tourette syndrome (TS), and implicate a network linking the basal ganglia and the cerebrum, not a particular single brain region. In this study, the volumes of 20 cerebral and cerebellar regions and their symmetries were measured in normal boys and TS boys by brain magnetic resonance imaging. Brain magnetic resonance images were obtained prospectively in 19 boys with TS and 17 age-matched normal control boys. Cerebral and cerebellar regions were segmented into gray and white fractions using an algorithm for semi-automated fuzzy tissue segmentation. The frontal, parietal, temporal, and the occipital lobes and the cerebellum were defined using the semiautomated Talairach atlas-based parcellation method. Boys with TS had smaller total brain volumes than control subjects. In the gray matter, although the smaller brain volume was taken into account, TS boys had a smaller right frontal lobe and a larger left frontal lobe and increased normal asymmetry (left>right). In addition, TS boys had more frontal lobe white matter. There were no significant differences in regions of interest of the parietal, temporal, or the occipital lobes or the cerebellum. These findings suggest that boys with TS may have neuropathological abnormalities in the gray and the white matter of the frontal lobe. PMID:12172051

  1. Influences of skull segmentation inaccuracies on EEG source analysis.

    PubMed

    Lanfer, B; Scherg, M; Dannhauer, M; Knösche, T R; Burger, M; Wolters, C H

    2012-08-01

    The low-conducting human skull is known to have an especially large influence on electroencephalography (EEG) source analysis. Because of difficulties segmenting the complex skull geometry out of magnetic resonance images, volume conductor models for EEG source analysis might contain inaccuracies and simplifications regarding the geometry of the skull. The computer simulation study presented here investigated the influences of a variety of skull geometry deficiencies on EEG forward simulations and source reconstruction from EEG data. Reference EEG data was simulated in a detailed and anatomically plausible reference model. Test models were derived from the reference model representing a variety of skull geometry inaccuracies and simplifications. These included erroneous skull holes, local errors in skull thickness, modeling cavities as bone, downward extension of the model and simplifying the inferior skull or the inferior skull and scalp as layers of constant thickness. The reference EEG data was compared to forward simulations in the test models, and source reconstruction in the test models was performed on the simulated reference data. The finite element method with high-resolution meshes was employed for all forward simulations. It was found that large skull geometry inaccuracies close to the source space, for example, when cutting the model directly below the skull, led to errors of 20 mm and more for extended source space regions. Local defects, for example, erroneous skull holes, caused non-negligible errors only in the vicinity of the defect. The study design allowed a comparison of influence size, and guidelines for modeling the skull geometry were concluded.

  2. Associations between Family Adversity and Brain Volume in Adolescence: Manual vs. Automated Brain Segmentation Yields Different Results

    PubMed Central

    Lyden, Hannah; Gimbel, Sarah I.; Del Piero, Larissa; Tsai, A. Bryna; Sachs, Matthew E.; Kaplan, Jonas T.; Margolin, Gayla; Saxbe, Darby

    2016-01-01

    Associations between brain structure and early adversity have been inconsistent in the literature. These inconsistencies may be partially due to methodological differences. Different methods of brain segmentation may produce different results, obscuring the relationship between early adversity and brain volume. Moreover, adolescence is a time of significant brain growth and certain brain areas have distinct rates of development, which may compromise the accuracy of automated segmentation approaches. In the current study, 23 adolescents participated in two waves of a longitudinal study. Family aggression was measured when the youths were 12 years old, and structural scans were acquired an average of 4 years later. Bilateral amygdalae and hippocampi were segmented using three different methods (manual tracing, FSL, and NeuroQuant). The segmentation estimates were compared, and linear regressions were run to assess the relationship between early family aggression exposure and all three volume segmentation estimates. Manual tracing results showed a positive relationship between family aggression and right amygdala volume, whereas FSL segmentation showed negative relationships between family aggression and both the left and right hippocampi. However, results indicate poor overlap between methods, and different associations were found between early family aggression exposure and brain volume depending on the segmentation method used. PMID:27656121

  3. Chest-wall segmentation in automated 3D breast ultrasound images using thoracic volume classification

    NASA Astrophysics Data System (ADS)

    Tan, Tao; van Zelst, Jan; Zhang, Wei; Mann, Ritse M.; Platel, Bram; Karssemeijer, Nico

    2014-03-01

    Computer-aided detection (CAD) systems are expected to improve the effectiveness and efficiency of radiologists in reading automated 3D breast ultrasound (ABUS) images. One challenging task in developing CAD is to reduce the large number of false positives, many of which originate from acoustic shadowing caused by ribs. Determining the location of the chest wall in ABUS is therefore necessary in CAD systems to remove these false positives. Additionally, it can be used as an anatomical landmark for inter- and intra-modal image registration. In this work, we extended our previously developed chest-wall segmentation method, which fits a cylinder to automatically detected rib-surface points, by fitting the cylinder model through minimization of a cost function that adds a region cost computed from a thoracic volume classifier to improve segmentation accuracy. We examined the performance on a dataset of 52 images on which our previously developed method fails. Using the region-based cost, the average mean distance of the annotated points to the segmented chest wall decreased from 7.57±2.76 mm to 6.22±2.86 mm.

  4. Automatic delineation of tumor volumes by co-segmentation of combined PET/MR data

    NASA Astrophysics Data System (ADS)

    Leibfarth, S.; Eckert, F.; Welz, S.; Siegel, C.; Schmidt, H.; Schwenzer, N.; Zips, D.; Thorwarth, D.

    2015-07-01

    Combined PET/MRI may be highly beneficial for radiotherapy treatment planning in terms of tumor delineation and characterization. To standardize tumor volume delineation, an automatic algorithm for the co-segmentation of head and neck (HN) tumors based on PET/MR data was developed. Ten HN patient datasets acquired in a combined PET/MR system were available for this study. The proposed algorithm uses both the anatomical T2-weighted MR and FDG-PET data. For both imaging modalities tumor probability maps were derived, assigning each voxel a probability of being cancerous based on its signal intensity. A combination of these maps was subsequently segmented using a threshold level set algorithm. To validate the method, tumor delineations from three radiation oncologists were available. Inter-observer variabilities and variabilities between the algorithm and each observer were quantified by means of the Dice similarity index and a distance measure. Inter-observer variabilities and variabilities between observers and algorithm were found to be comparable, suggesting that the proposed algorithm is adequate for PET/MR co-segmentation. Moreover, taking into account combined PET/MR data resulted in more consistent tumor delineations compared to MR information only.
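The Dice similarity index used here (and in several of the studies above) to quantify observer and algorithm agreement is straightforward to compute; a minimal sketch assuming NumPy:

```python
import numpy as np

def dice(a, b):
    """Dice similarity index between two binary masks:
    2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

observer = np.array([0, 1, 1, 1, 0, 0])
algorithm = np.array([0, 0, 1, 1, 1, 0])
print(dice(observer, algorithm))  # 2*2 / (3+3) = 0.666...
```

Inter-observer variability is then summarized by computing this index pairwise between delineations, and observer-vs-algorithm variability by comparing each observer's mask to the automatic one.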

  5. Multi-atlas segmentation of the cartilage in knee MR images with sequential volume- and bone-mask-based registrations

    NASA Astrophysics Data System (ADS)

    Lee, Han Sang; Kim, Hyeun A.; Kim, Hyeonjin; Hong, Helen; Yoon, Young Cheol; Kim, Junmo

    2016-03-01

    In spite of its clinical importance in the diagnosis of osteoarthritis, segmentation of cartilage in knee MRI remains a challenging task due to its shape variability and low contrast with surrounding soft tissues and synovial fluid. In this paper, we propose a multi-atlas segmentation of cartilage in knee MRI with sequential atlas registrations and locally-weighted voting (LWV). First, bone is segmented by sequential volume- and object-based registrations and LWV. Second, to overcome the shape variability of cartilage, cartilage is segmented by bone-mask-based registration and LWV. In experiments, the proposed method improved bone segmentation by reducing misclassified bone regions, and enhanced cartilage segmentation by preventing cartilage leakage into surrounding regions of similar intensity, with the help of the sequential registrations and LWV.

  6. MR volume segmentation of gray matter and white matter using manual thresholding: Dependence on image brightness

    SciTech Connect

    Harris, G.J.; Barta, P.E.; Peng, L.W.; Lee, S.; Brettschneider, P.D.; Shah, A.; Henderer, J.D.; Schlaepfer, T.E.; Pearlson, G.D. (Tufts Univ. School of Medicine, Boston, MA)

    1994-02-01

    To describe a quantitative MR imaging segmentation method for determination of the volume of cerebrospinal fluid, gray matter, and white matter in living human brain, and to determine the method's reliability. We developed a computer method that allows rapid, user-friendly determination of cerebrospinal fluid, gray matter, and white matter volumes in a reliable manner, both globally and regionally. This method was applied to a large control population (N = 57). Initially, image brightness had a strong correlation with the gray-white ratio (r = .78). Bright images tended to overestimate, dim images to underestimate gray matter volumes. This artifact was corrected for by offsetting each image to an approximately equal brightness. After brightness correction, gray-white ratio was correlated with age (r = -.35). The age-dependent gray-white ratio was similar to that for the same age range in a prior neuropathology report. Interrater reliability was high (.93 intraclass correlation coefficient). The method described here for gray matter, white matter, and cerebrospinal fluid volume calculation is reliable and valid. A correction method for an artifact related to image brightness was developed. 12 refs., 3 figs.
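The brightness-offset correction described above can be illustrated with a minimal sketch: shift each image so its mean intensity matches a common target, then apply a manual threshold to separate gray from white matter. The threshold rule and function names are illustrative, not the authors' exact procedure.

```python
from statistics import mean

def correct_brightness(image, target_mean):
    """Offset all voxel intensities so the image mean equals target_mean,
    mimicking the per-image brightness correction described above."""
    offset = target_mean - mean(image)
    return [v + offset for v in image]

def gray_white_ratio(image, gw_threshold):
    """Classify voxels below the threshold as gray matter and the rest as
    white matter (an illustrative manual-thresholding rule)."""
    gray = sum(1 for v in image if v < gw_threshold)
    white = len(image) - gray
    return gray / white
```

Computing the gray-white ratio before and after `correct_brightness` on a set of scans would reproduce the kind of brightness-dependence check the authors describe.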

  7. Segmentation and learning in the quantitative analysis of microscopy images

    NASA Astrophysics Data System (ADS)

    Ruggiero, Christy; Ross, Amy; Porter, Reid

    2015-02-01

    In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest to use machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.

  8. Machine learning based vesselness measurement for coronary artery segmentation in cardiac CT volumes

    NASA Astrophysics Data System (ADS)

    Zheng, Yefeng; Loziczonek, Maciej; Georgescu, Bogdan; Zhou, S. Kevin; Vega-Higuera, Fernando; Comaniciu, Dorin

    2011-03-01

    Automatic coronary centerline extraction and lumen segmentation facilitate the diagnosis of coronary artery disease (CAD), which is a leading cause of death in developed countries. Various coronary centerline extraction methods have been proposed and most of them are based on shortest path computation given one or two end points on the artery. The major variation of the shortest path based approaches is in the different vesselness measurements used for the path cost. An empirically designed measurement (e.g., the widely used Hessian vesselness) is by no means optimal in the use of image context information. In this paper, a machine learning based vesselness is proposed by exploiting the rich domain specific knowledge embedded in an expert-annotated dataset. For each voxel, we extract a set of geometric and image features. The probabilistic boosting tree (PBT) is then used to train a classifier, which assigns a high score to voxels inside the artery and a low score to those outside. The detection score can be treated as a vesselness measurement in the computation of the shortest path. Since the detection score measures the probability of a voxel to be inside the vessel lumen, it can also be used for the coronary lumen segmentation. To speed up the computation, we perform classification only for voxels around the heart surface, which is achieved by automatically segmenting the whole heart from the 3D volume in a preprocessing step. An efficient voxel-wise classification strategy is used to further improve the speed. Experiments demonstrate that the proposed learning based vesselness outperforms the conventional Hessian vesselness in both speed and accuracy. On average, it only takes approximately 2.3 seconds to process a large volume with a typical size of 512x512x200 voxels.
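The shortest-path use of a vesselness score can be sketched with Dijkstra's algorithm on a toy 2D grid, where each step costs one minus the (assumed, precomputed) vesselness of the voxel entered, so that high-vesselness voxels are cheap to traverse:

```python
import heapq

def centerline(vesselness, start, goal):
    """Dijkstra shortest path on a 2D grid; step cost = 1 - vesselness of
    the cell entered, illustrating vesselness as a path cost."""
    rows, cols = len(vesselness), len(vesselness[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + (1.0 - vesselness[nr][nc])
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # reconstruct the path from goal back to start
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

Replacing an empirical Hessian vesselness with a learned detection score, as the paper proposes, changes only the `vesselness` array fed to this search, not the path computation itself.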

  9. Four-chamber heart modeling and automatic segmentation for 3-D cardiac CT volumes using marginal space learning and steerable features.

    PubMed

    Zheng, Yefeng; Barbu, Adrian; Georgescu, Bogdan; Scheuering, Michael; Comaniciu, Dorin

    2008-11-01

    We propose an automatic four-chamber heart segmentation system for the quantitative functional analysis of the heart from cardiac computed tomography (CT) volumes. Two topics are discussed: heart modeling and automatic model fitting to an unseen volume. Heart modeling is a nontrivial task since the heart is a complex nonrigid organ. The model must be anatomically accurate, allow manual editing, and provide sufficient information to guide automatic detection and segmentation. Unlike previous work, we explicitly represent important landmarks (such as the valves and the ventricular septum cusps) among the control points of the model. The control points can be detected reliably to guide the automatic model fitting process. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3-D CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. In both steps, we exploit the recent advances in learning discriminative models. A novel algorithm, marginal space learning (MSL), is introduced to solve the 9-D similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3-D shape through learning-based boundary delineation. The proposed method has been extensively tested on the largest dataset (with 323 volumes from 137 patients) ever reported in the literature. To the best of our knowledge, our system is the fastest with a speed of 4.0 s per volume (on a dual-core 3.2-GHz processor) for the automatic segmentation of all four chambers.

  10. Volume change of segments II and III of the liver after gastrectomy in patients with gastric cancer

    PubMed Central

    Ozutemiz, Can; Obuz, Funda; Taylan, Abdullah; Atila, Koray; Bora, Seymen; Ellidokuz, Hulya

    2016-01-01

    PURPOSE We aimed to evaluate the relationship between gastrectomy and the volume of liver segments II and III in patients with gastric cancer. METHODS Computed tomography images of 54 patients who underwent curative gastrectomy for gastric adenocarcinoma were retrospectively evaluated by two blinded observers. Volumes of the total liver and segments II and III were measured. The difference between preoperative and postoperative volume measurements was compared. RESULTS Total liver volumes measured by both observers in the preoperative and postoperative scans were similar (P > 0.05). High correlation was found between both observers (preoperative r=0.99; postoperative r=0.98). Total liver volumes showed a mean reduction of 13.4% after gastrectomy (P = 0.977). The mean volume of segments II and III showed a similar decrease in the measurements of both observers (38.4% vs. 36.4%, P = 0.363); the correlation between the observers was high (preoperative r=0.97, P < 0.001; postoperative r=0.99, P < 0.001). The volume decrease in the rest of the liver did not differ between the observers (8.2% vs. 9.1%, P = 0.388). Time had poor correlation with the volume change of segments II and III and of the total liver for each observer (observer 1, rseg2/3=0.32, rtotal=0.13; observer 2, rseg2/3=0.37, rtotal=0.16). CONCLUSION Segments II and III of the liver showed significant atrophy compared with the rest of the liver and the total liver after gastrectomy. The volume reduction had poor correlation with time. PMID:26899148

  11. 4-D segmentation and normalization of 3He MR images for intrasubject assessment of ventilated lung volumes

    NASA Astrophysics Data System (ADS)

    Contrella, Benjamin; Tustison, Nicholas J.; Altes, Talissa A.; Avants, Brian B.; Mugler, John P., III; de Lange, Eduard E.

    2012-03-01

    Although 3He MRI permits compelling visualization of the pulmonary air spaces, quantitation of absolute ventilation is difficult due to confounds such as field inhomogeneity and relative intensity differences between image acquisitions, the latter complicating longitudinal investigations of ventilation variation with respiratory alterations. To address these potential difficulties, we present a 4-D segmentation and normalization approach for intra-subject quantitative analysis of hyperpolarized 3He lung MRI. After normalization, which combines bias correction and relative intensity scaling between longitudinal data, partitioning of the lung volume time series is performed by iterating between modeling of the combined intensity histogram as a Gaussian mixture model and modulating the spatial heterogeneity of tissue class assignments through Markov random field modeling. The algorithm was retrospectively evaluated on a cohort of 10 asthmatics, aged 19-25 years, in whom spirometry and 3He MR ventilation images were acquired both before and after respiratory exacerbation by a bronchoconstricting agent (methacholine). Acquisition was repeated under the same conditions from 7 to 467 days (mean +/- standard deviation: 185 +/- 37.2) later. Several techniques were evaluated for matching intensities between the pre- and post-methacholine images, with 95th-percentile histogram matching demonstrating superior correlations with spirometry measures. Subsequent analysis evaluated segmentation parameters for assessing ventilation change in this cohort. Current findings also support previous research that areas of poor ventilation in response to bronchoconstriction are relatively consistent over time.
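The 95th-percentile intensity matching that performed best can be sketched as a simple rescaling, assuming images are flattened to intensity lists and using a nearest-rank percentile (the study's exact percentile convention is not specified):

```python
def percentile(values, q):
    """Nearest-rank percentile by sorting (q in [0, 100])."""
    s = sorted(values)
    idx = min(len(s) - 1, int(round(q / 100.0 * (len(s) - 1))))
    return s[idx]

def match_95th(image, reference):
    """Scale image intensities so the image's 95th percentile matches
    the reference scan's, normalizing relative intensity between
    longitudinal acquisitions."""
    scale = percentile(reference, 95) / percentile(image, 95)
    return [v * scale for v in image]
```

Matching at a high percentile rather than the maximum makes the scaling robust to a few outlier-bright voxels, which is presumably why this variant correlated best with spirometry.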

  12. Segmentation guided registration of wide field-of-view retinal optical coherence tomography volumes

    PubMed Central

    Lezama, José; Mukherjee, Dibyendu; McNabb, Ryan P.; Sapiro, Guillermo; Kuo, Anthony N.; Farsiu, Sina

    2016-01-01

    Patient motion artifacts are often visible in densely sampled or large wide field-of-view (FOV) retinal optical coherence tomography (OCT) volumes. A popular strategy for reducing motion artifacts is to capture two orthogonally oriented volumetric scans. However, due to larger volume sizes, longer acquisition times, and corresponding larger motion artifacts, the registration of wide FOV scans remains a challenging problem. In particular, gaps in data acquisition due to eye motion, such as saccades, can be significant and their modeling becomes critical for successful registration. In this article, we develop a complete computational pipeline for the automatic motion correction and accurate registration of wide FOV orthogonally scanned OCT images of the human retina. The proposed framework utilizes the retinal boundary segmentation as a guide for registration and requires only a minimal transformation of the acquired data to produce a successful registration. It includes saccade detection and correction, a custom version of the optical flow algorithm for dense lateral registration and a linear optimization approach for axial registration. Utilizing a wide FOV swept source OCT system, we acquired retinal volumes of 12 subjects and we provide qualitative and quantitative experimental results to validate the state-of-the-art effectiveness of the proposed technique. The source code corresponding to the proposed algorithm is available online. PMID:28018709

  13. Issues about axial diffusion during segmental hair analysis.

    PubMed

    Kintz, Pascal

    2013-06-01

    matrix or changes in the hair structure due to cosmetic treatments might mislead the final result of hair analysis. To qualify for a single exposure in hair, the author proposes to consider that the highest drug concentration must be detected in the segment corresponding to the period of the alleged event (calculated with a hair growth rate at 1 cm/mo) and that the measured concentration be at least 3 times higher than those measured in the previous or the following segments. This must only be done using scalp hair after cutting the hair directly close to the scalp.

  14. Linear test bed. Volume 1: Test bed no. 1. [aerospike test bed with segmented combustor

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The Linear Test Bed program was conducted to design, fabricate, and test an advanced aerospike test bed which employed the segmented combustor concept. The system is designated as a linear aerospike system and consists of a thrust chamber assembly, a power package, and a thrust frame. It was designed as an experimental system to demonstrate the feasibility of the linear aerospike-segmented combustor concept. The overall dimensions are 120 inches long by 120 inches wide by 96 inches high. The propellants are liquid oxygen/liquid hydrogen. The system was designed to operate at 1200-psia chamber pressure and a mixture ratio of 5.5. At the design conditions, the sea-level thrust is 200,000 pounds. The complete program, including concept selection, design, fabrication, component test, system test, supporting analysis, and posttest hardware inspection, is described.

  15. Segment clustering methodology for unsupervised Holter recordings analysis

    NASA Astrophysics Data System (ADS)

    Rodríguez-Sotelo, Jose Luis; Peluffo-Ordoñez, Diego; Castellanos Dominguez, German

    2015-01-01

    Cardiac arrhythmia analysis of Holter recordings is an important issue in clinical settings; however, it implicitly involves other problems related to the large amount of unlabelled data, which imposes a high computational cost. In this work an unsupervised methodology based on a segment framework is presented, which consists of dividing the raw data into a balanced number of segments in order to identify fiducial points, then characterizing and clustering the heartbeats in each segment separately. The resulting clusters are merged or split according to an assumed homogeneity criterion. This framework compensates for the high computational cost of Holter analysis, making implementation in future real-time applications possible. The performance of the method is measured on records from the MIT/BIH arrhythmia database and achieves high values of sensitivity and specificity, taking advantage of the database labels, for the broad range of heartbeat types recommended by the AAMI.
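A minimal sketch of the per-segment clustering followed by a homogeneity-based merge might look as follows, assuming each heartbeat is reduced to a single feature value and using a centroid-distance tolerance as the (assumed) homogeneity criterion:

```python
def cluster_segment(beats, tol):
    """Greedy one-pass clustering of heartbeat feature values within one
    segment: each beat joins the first cluster whose centroid lies within
    tol, otherwise it starts a new cluster (an illustrative stand-in for
    the per-segment clustering step)."""
    clusters = []  # list of lists of feature values
    for b in beats:
        for c in clusters:
            if abs(sum(c) / len(c) - b) <= tol:
                c.append(b)
                break
        else:
            clusters.append([b])
    return clusters

def merge_clusters(clusters, tol):
    """Merge clusters (possibly from different segments) whose centroids
    lie within tol, mirroring the homogeneity-based merge step."""
    merged = []
    for c in clusters:
        for m in merged:
            if abs(sum(m) / len(m) - sum(c) / len(c)) <= tol:
                m.extend(c)
                break
        else:
            merged.append(list(c))
    return merged
```

Clustering each segment independently and only then merging keeps the per-step work small, which is the computational advantage the segment framework claims.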

  16. A comprehensive segmentation analysis of crude oil market based on time irreversibility

    NASA Astrophysics Data System (ADS)

    Xia, Jianan; Shang, Pengjian; Lu, Dan; Yin, Yi

    2016-05-01

    In this paper, we perform a comprehensive entropic segmentation analysis of crude oil futures prices from 1983 to 2014, using the Jensen-Shannon divergence as the statistical distance between segments, and analyze the results from the original series S and a series beginning in 1986 (marked as S∗) to find common segments that share the same boundaries. We then apply time-irreversibility analysis to each segment to divide all segments into two groups according to their degree of asymmetry. Based on the temporal distribution of the common segments and the high-asymmetry segments, we find that these two types of segments appear alternately and essentially do not overlap in the daily group, while the common portions are also high-asymmetry segments in the weekly group. In addition, the temporal distribution of the common segments lies fairly close to the times of crises, wars, and other events, because the impact of severe events on the oil price makes these common segments quite different from their adjacent segments. The common segments can be confirmed in the daily-group or weekly-group series owing to the large divergence between common segments and their neighbors, while the identification of high-asymmetry segments helps identify the segments that were not badly affected by the events and can recover to steady states automatically. Finally, we rearrange the segments by merging connected common segments or high-asymmetry segments into a single segment, and conjoin the connected segments that are neither common nor highly asymmetric.
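The Jensen-Shannon divergence used as the statistical distance between segments has a compact definition; the sketch below uses base-2 logarithms so the value is bounded by [0, 1] (the paper's base convention is not stated):

```python
import math

def jensen_shannon(p, q):
    """Jensen-Shannon divergence between two discrete probability
    distributions, JS(p, q) = KL(p || m)/2 + KL(q || m)/2 with
    m = (p + q)/2; base-2 logs keep the value in [0, 1]."""
    def kl(a, b):
        # Kullback-Leibler divergence; 0 * log(0/x) is taken as 0
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

In a segmentation setting, p and q would be the empirical return distributions of two adjacent windows; a segment boundary is placed where the divergence between them peaks.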

  17. Microscopy image segmentation tool: Robust image data analysis

    SciTech Connect

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-03-15

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  18. Microscopy image segmentation tool: robust image data analysis.

    PubMed

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K

    2014-03-01

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  19. Three-dimensional segmentation of pulmonary artery volume from thoracic computed tomography imaging

    NASA Astrophysics Data System (ADS)

    Lindenmaier, Tamas J.; Sheikh, Khadija; Bluemke, Emma; Gyacskov, Igor; Mura, Marco; Licskai, Christopher; Mielniczuk, Lisa; Fenster, Aaron; Cunningham, Ian A.; Parraga, Grace

    2015-03-01

    Chronic obstructive pulmonary disease (COPD) is a major contributor to hospitalization and healthcare costs in North America. While the hallmark of COPD is airflow limitation, it is also associated with abnormalities of the cardiovascular system. Enlargement of the pulmonary artery (PA) is a morphological marker of pulmonary hypertension, and was previously shown to predict acute exacerbations using a one-dimensional diameter measurement of the main PA. We hypothesized that a three-dimensional (3D) quantification of PA size would be more sensitive than 1D methods and would encompass morphological changes along the entire central pulmonary artery. Hence, we developed a 3D measurement of the main (MPA), left (LPA), and right (RPA) pulmonary arteries as well as total PA volume (TPAV) from thoracic CT images. This approach incorporates segmentation of the pulmonary vessels in cross-section for the MPA, LPA, and RPA to provide an estimate of their volumes. Three observers performed five repeated measurements for 15 ex-smokers with ≥10 pack-years, randomly identified from a larger dataset of 199 patients. There was strong agreement (r2=0.76) between PA volume and PA diameter measurements, with the latter used as the gold standard. Observer measurements were strongly correlated, and coefficients of variation for observer 1 (MPA: 2%, LPA: 3%, RPA: 2%, TPAV: 2%) were not significantly different from the observer 2 and 3 results. In conclusion, we generated manual 3D pulmonary artery volume measurements from thoracic CT images that can be performed with high reproducibility. Future work will involve automation for implementation in clinical workflows.
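Two of the computations above are easy to make concrete: estimating a vessel volume by summing cross-sectional areas over slices, and the coefficient of variation used to report observer reproducibility. The disk-summation scheme is an assumption; the study's exact volume integration may differ.

```python
def vessel_volume(areas, slice_thickness):
    """Estimate vessel volume as the sum of cross-sectional areas times
    the slice spacing (simple disk summation)."""
    return sum(areas) * slice_thickness

def coefficient_of_variation(measurements):
    """CV (in percent) of repeated volume measurements, as used to
    quantify observer reproducibility (sample standard deviation)."""
    n = len(measurements)
    m = sum(measurements) / n
    var = sum((x - m) ** 2 for x in measurements) / (n - 1)
    return 100.0 * (var ** 0.5) / m
```

A CV of 2-3%, as reported for each artery, corresponds to repeated measurements whose standard deviation is only a few percent of their mean.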

  20. Comparing manual and automatic segmentation of hippocampal volumes: reliability and validity issues in younger and older brains.

    PubMed

    Wenger, Elisabeth; Mårtensson, Johan; Noack, Hannes; Bodammer, Nils Christian; Kühn, Simone; Schaefer, Sabine; Heinze, Hans-Jochen; Düzel, Emrah; Bäckman, Lars; Lindenberger, Ulman; Lövdén, Martin

    2014-08-01

    We compared hippocampal volume measures obtained by manual tracing to automatic segmentation with FreeSurfer in 44 younger (20-30 years) and 47 older (60-70 years) adults, each measured with magnetic resonance imaging (MRI) over three successive time points separated by four months. Retest correlations over time were very high for both manual and FreeSurfer segmentations. With FreeSurfer, correlations over time were significantly lower in the older than in the younger age group, which was not the case with manual segmentation. Pearson correlations between manual and FreeSurfer estimates were sufficiently high, numerically even higher in the younger group, whereas intra-class correlation coefficient (ICC) estimates were lower in the younger than in the older group. FreeSurfer yielded higher volume estimates than manual segmentation, particularly in the younger age group. Importantly, FreeSurfer consistently overestimated hippocampal volumes independently of manually assessed volume in the younger age group, but overestimated larger volumes to a lesser extent in the older age group, introducing a systematic age bias into the data. Age differences in hippocampal volumes were significant with FreeSurfer, but not with manual tracing. Manual tracing resulted in a significant difference between the left and right hippocampus (right > left), whereas this asymmetry effect was considerably smaller with FreeSurfer estimates. We conclude that FreeSurfer constitutes a feasible method to assess differences in hippocampal volume in young adults. FreeSurfer estimates in older age groups should, however, be interpreted with care until the automatic segmentation pipeline has been further optimized to increase validity and reliability in this age group.

  1. Analysis of recent segmental duplications in the bovine genome

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Duplicated sequences are an important source of gene innovation and structural variation within mammalian genomes. We describe the first systematic and genome-wide analysis of segmental duplications in the modern domesticated cattle (Bos taurus). Using two distinct computational analyses, we estimat...

  2. Accurate airway segmentation based on intensity structure analysis and graph-cut

    NASA Astrophysics Data System (ADS)

    Meng, Qier; Kitsaka, Takayuki; Nimura, Yukitaka; Oda, Masahiro; Mori, Kensaku

    2016-03-01

    This paper presents a novel airway segmentation method based on intensity structure analysis and graph-cut. Airway segmentation is an important step in analyzing chest CT volumes for computerized lung cancer detection, emphysema diagnosis, asthma diagnosis, and pre- and intra-operative bronchoscope navigation. However, obtaining a complete 3-D airway tree structure from a CT volume is quite challenging. Several researchers have proposed automated algorithms based largely on region growing and machine learning techniques; however, these methods fail to detect the peripheral bronchial branches and produce a large amount of leakage. This paper presents a novel approach that permits more accurate extraction of the complex bronchial airway region. Our method is composed of three steps. First, Hessian analysis is utilized to enhance line-like structures in the CT volume, and a multiscale cavity-enhancement filter is then employed to detect cavity-like structures in the enhanced result. In the second step, a support vector machine (SVM) classifier is constructed to remove the false positive (FP) regions generated. Finally, the graph-cut algorithm is utilized to connect all of the candidate voxels into an integrated airway tree. We applied this method to sixteen 3D chest CT volumes. The results showed that the branch detection rate of this method reaches about 77.7% without leakage into the lung parenchyma.
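Region growing, the basis of the earlier methods the paper criticizes for leakage, can be sketched on a toy 3D volume; the `max_voxels` guard below is an illustrative stand-in for leakage control, not the paper's SVM/graph-cut machinery:

```python
from collections import deque

def grow_airway(volume, seed, threshold, max_voxels):
    """6-connected region growing in a 3D intensity volume: collect
    voxels below an airway (air) intensity threshold, aborting if the
    region exceeds max_voxels (a crude leakage guard)."""
    zs, ys, xs = len(volume), len(volume[0]), len(volume[0][0])
    region, queue = {seed}, deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < zs and 0 <= ny < ys and 0 <= nx < xs
                    and (nz, ny, nx) not in region
                    and volume[nz][ny][nx] < threshold):
                region.add((nz, ny, nx))
                queue.append((nz, ny, nx))
                if len(region) > max_voxels:
                    return None  # suspected leakage into parenchyma
    return region
```

The failure mode the paper targets is visible here: a single low-intensity gap in the airway wall lets the queue flood into the parenchyma, which is why the proposed pipeline adds structure enhancement, FP removal, and graph-cut connection instead.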

  3. Salted and preserved duck eggs: a consumer market segmentation analysis.

    PubMed

    Arthur, Jennifer; Wiseman, Kelleen; Cheng, K M

    2015-08-01

    The combination of increasing ethnic diversity in North America and growing consumer support for local food products may present opportunities for local producers and processors in the ethnic foods product category. Our study examined the ethnic Chinese (pop. 402,000) market for salted and preserved duck eggs in Vancouver, British Columbia (BC), Canada. The objective of the study was to develop a segmentation model using survey data to categorize consumer groups based on their attitudes and the importance they placed on product attributes. We further used post-segmentation acculturation score, demographics and buyer behaviors to define these groups. Data were gathered via a survey of randomly selected Vancouver households with Chinese surnames (n = 410), targeting the adult responsible for grocery shopping. Results from principal component analysis and a 2-step cluster analysis suggest the existence of 4 market segments, described as Enthusiasts, Potentialists, Pragmatists, Health Skeptics (salted duck eggs), and Neutralists (preserved duck eggs). Kruskal Wallis tests and post hoc Mann-Whitney tests found significant differences between segments in terms of attitudes and the importance placed on product characteristics. Health Skeptics, preserved egg Potentialists, and Pragmatists of both egg products were significantly biased against Chinese imports compared to others. Except for Enthusiasts, segments disagreed that eggs are 'Healthy Products'. Preserved egg Enthusiasts had a significantly lower acculturation score (AS) compared to all others, while salted egg Enthusiasts had a lower AS compared to Health Skeptics. All segments rated "produced in BC, not mainland China" products in the "neutral to very likely" range for increasing their satisfaction with the eggs. Results also indicate that buyers of each egg type are willing to pay an average premium of at least 10% more for BC produced products versus imports, with all other characteristics equal. Overall

  4. Small rural hospitals: an example of market segmentation analysis.

    PubMed

    Mainous, A G; Shelby, R L

    1991-01-01

    In recent years, market segmentation analysis has shown increased popularity among health care marketers, although marketers tend to focus upon hospitals as sellers. The present analysis suggests that there is merit to viewing hospitals as a market of consumers. Employing a random sample of 741 small rural hospitals, the present investigation sought to determine, through the use of segmentation analysis, the variables associated with hospital success (occupancy). The results of a discriminant analysis yielded a model which classifies hospitals with a high degree of predictive accuracy. Successful hospitals have more beds and employees, and are generally larger and have more resources. However, there was no significant relationship between organizational success and number of services offered by the institution.

  5. Documented Safety Analysis for the B695 Segment

    SciTech Connect

    Laycak, D

    2008-09-11

    This Documented Safety Analysis (DSA) was prepared for the Lawrence Livermore National Laboratory (LLNL) Building 695 (B695) Segment of the Decontamination and Waste Treatment Facility (DWTF). The report provides comprehensive information on design and operations, including safety programs and safety structures, systems and components to address the potential process-related hazards, natural phenomena, and external hazards that can affect the public, facility workers, and the environment. Consideration is given to all modes of operation, including the potential for both equipment failure and human error. The facilities known collectively as the DWTF are used by LLNL's Radioactive and Hazardous Waste Management (RHWM) Division to store and treat regulated wastes generated at LLNL. RHWM generally processes low-level radioactive waste with no, or extremely low, concentrations of transuranics (e.g., much less than 100 nCi/g). Wastes processed often contain only depleted uranium and beta- and gamma-emitting nuclides, e.g., {sup 90}Sr, {sup 137}Cs, or {sup 3}H. The mission of the B695 Segment centers on container storage, lab-packing, repacking, overpacking, bulking, sampling, waste transfer, and waste treatment. The B695 Segment is used for storage of radioactive waste (including transuranic and low-level), hazardous, nonhazardous, mixed, and other waste. Storage of hazardous and mixed waste in B695 Segment facilities is in compliance with the Resource Conservation and Recovery Act (RCRA). LLNL is operated by the Lawrence Livermore National Security, LLC, for the Department of Energy (DOE). The B695 Segment is operated by the RHWM Division of LLNL. Many operations in the B695 Segment are performed under a Resource Conservation and Recovery Act (RCRA) operation plan, similar to commercial treatment operations with best demonstrated available technologies. 
The buildings of the B695 Segment were designed and built with such operations in mind, using proven building systems.

  6. Quantitative estimation of a ratio of intracranial cerebrospinal fluid volume to brain volume based on segmentation of CT images in patients with extra-axial hematoma.

    PubMed

    Nguyen, Ha Son; Patel, Mohit; Li, Luyuan; Kurpad, Shekar; Mueller, Wade

    2017-02-01

    Background Diminishing volume of intracranial cerebrospinal fluid (CSF) in patients with space-occupying masses has been associated with unfavorable outcome, attributed to reduction of cerebral perfusion pressure and subsequent brain ischemia. Objective The objective of this article is to employ a ratio of CSF volume to brain volume for longitudinal assessment of space-volume relationships in patients with extra-axial hematoma and to determine variability of the ratio among patients with different types and stages of hematoma. Patients and methods In our retrospective study, we reviewed 113 patients with surgical extra-axial hematomas. We included 28 patients (age 61.7 ± 17.7 years; 19 males, 9 females) with an acute epidural hematoma (EDH) (n = 5) or a subacute/chronic subdural hematoma (SDH) (n = 23). We excluded 85 patients due to, in order, acute SDH (n = 76), concurrent intraparenchymal pathology (n = 6), and bilateral pathology (n = 3). Noncontrast CT images of the head were obtained using a CT scanner (2004 GE LightSpeed VCT CT system, tube voltage 140 kVp, tube current 310 mA, 5 mm section thickness) preoperatively, postoperatively (3.8 ± 5.8 hours after surgery), and at a follow-up clinic visit (48.2 ± 27.7 days after surgery). Each CT scan was loaded into an OsiriX (Pixmeo, Switzerland) workstation to segment pixels based on radiodensity properties measured in Hounsfield units (HU). Based on HU values from -30 to 100, brain, CSF spaces, vascular structures, hematoma, and/or postsurgical fluid were separated from bony structures, and hematoma and/or postsurgical fluid were then manually selected and removed from the images. The remaining images represented overall brain volume, containing only CSF spaces, vascular structures, and brain parenchyma. Thereafter, the ratio of the total number of voxels representing CSF volume (based on values between 0 and 15 HU) to the total number of voxels
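
The HU-windowing step described above can be sketched in a few lines of NumPy. This is a hedged illustration, not the authors' code: the thresholds (-30 to 100 HU for overall brain content, 0 to 15 HU for CSF) are taken from the abstract, the manual removal of hematoma/postsurgical fluid is omitted, and the toy volume is invented for demonstration.

```python
import numpy as np

def csf_brain_ratio(hu_volume):
    """Estimate the CSF-to-brain volume ratio from a CT volume in
    Hounsfield units, using the thresholds quoted in the abstract:
    -30 to 100 HU for overall brain content (brain, CSF, vessels),
    0 to 15 HU for CSF. Hematoma/postsurgical-fluid removal, a manual
    step in the paper, is omitted here."""
    brain_mask = (hu_volume >= -30) & (hu_volume <= 100)
    csf_mask = (hu_volume >= 0) & (hu_volume <= 15) & brain_mask
    return csf_mask.sum() / brain_mask.sum()

# Toy volume: mostly parenchyma-like voxels (~35 HU) with a CSF-like block (~8 HU)
vol = np.full((10, 10, 10), 35.0)
vol[:2] = 8.0  # 20% of voxels are CSF-like
print(csf_brain_ratio(vol))  # → 0.2
```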

  7. Isointense Infant Brain Segmentation by Stacked Kernel Canonical Correlation Analysis

    PubMed Central

    Wang, Li; Shi, Feng; Gao, Yaozong; Li, Gang; Lin, Weili; Shen, Dinggang

    2016-01-01

    Segmentation of isointense infant brain (~6 months old) MR images is challenging due to the ongoing maturation and myelination process in the first year of life. In particular, the signal contrast between white and gray matter inverts around 6 months of age, when brain tissues appear isointense and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a novel segmentation method that addresses these challenges using stacked kernel canonical correlation analysis (KCCA). Our main idea is to utilize the 12-month-old brain image, which has high tissue contrast, to guide the segmentation of the 6-month-old brain image with extremely low contrast. Specifically, we use KCCA to learn common feature representations for the 6-month-old and subsequent 12-month-old brain images of the same subjects, making their features comparable in a common space. Note that the longitudinal 12-month-old brain images are not required at testing; they are needed only in the KCCA-based training stage, which uses a set of longitudinal 6- and 12-month-old image pairs. Moreover, to optimize the common feature representations, we propose a stacked KCCA mapping instead of a single conventional KCCA mapping. In this way, we can better use the 12-month-old brain images as multiple atlases to guide the segmentation of isointense brain images. Specifically, sparse patch-based multi-atlas labeling is used to propagate tissue labels from the (12-month-old) atlases and segment isointense brain images by measuring patch similarity between testing and atlas images with their learned common features. The proposed method was evaluated on 20 isointense brain images via leave-one-out cross-validation, showing much better performance than state-of-the-art methods.

  8. Image segmentation and registration for the analysis of joint motion from 3D MRI

    NASA Astrophysics Data System (ADS)

    Hu, Yangqiu; Haynor, David R.; Fassbind, Michael; Rohr, Eric; Ledoux, William

    2006-03-01

    We report an image segmentation and registration method for studying joint morphology and kinematics from in vivo MRI scans and its application to the analysis of ankle joint motion. Using an MR-compatible loading device, a foot was scanned in a single neutral and seven dynamic positions, including maximal flexion, rotation, and inversion/eversion. A segmentation method combining graph cuts and level sets was developed which allows a user to interactively delineate 14 bones in the neutral-position volume in less than 30 minutes total, including less than 10 minutes of user interaction. In the subsequent registration step, a separate rigid-body transformation for each bone is obtained by registering the neutral-position dataset to each of the dynamic ones, which produces an accurate description of the motion between them. We have processed six datasets, including three normal and three pathological feet. For validation, our results were compared with those obtained from 3DViewnix, a semi-automatic segmentation program, and achieved good agreement in volume overlap ratios (mean: 91.57%, standard deviation: 3.58%) for all bones. Our tool requires only 1/50 and 1/150 of the user interaction time required by 3DViewnix and NIH Image Plus, respectively, an improvement that has the potential to make joint motion analysis from MRI practical in research and clinical applications.
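
The volume overlap ratio used for validation above is conventionally computed as the Dice similarity coefficient between two binary masks. A minimal NumPy sketch (the masks here are synthetic, not the paper's data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two boolean masks:
    2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Two overlapping 10x10 squares, one shifted by two rows
a = np.zeros((20, 20), bool); a[5:15, 5:15] = True
b = np.zeros((20, 20), bool); b[7:17, 5:15] = True
print(dice(a, b))  # → 0.8
```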

  9. High volume data storage architecture analysis

    NASA Technical Reports Server (NTRS)

    Malik, James M.

    1990-01-01

    A High Volume Data Storage Architecture Analysis was conducted. The results, presented in this report, will be applied to problems of high volume data requirements such as those anticipated for the Space Station Control Center. High volume data storage systems at several different sites were analyzed for archive capacity, storage hierarchy and migration philosophy, and retrieval capabilities. Proposed architectures were solicited from the sites selected for in-depth analysis. Model architectures for a hypothetical data archiving system, for a high speed file server, and for high volume data storage are attached.

  10. Automatic comic page image understanding based on edge segment analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

    Comic page image understanding aims to analyse the layout of comic page images by detecting the storyboards and identifying the reading order automatically. It is a key technique for producing digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and to further identify the reading order of these storyboards. The proposed method was evaluated on a data set of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms existing methods.
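
The first step, chaining edge points into contiguous segments, can be illustrated with a simplified greedy 8-neighbour tracer. This is a stand-in sketch, not the paper's algorithm: the Canny detector that would normally produce the edge mask is not run here, and the mask below is synthetic.

```python
import numpy as np

def chain_edge_points(edge_mask):
    """Greedy 8-neighbour chaining of edge pixels into ordered chains,
    a simplified analogue of the paper's edge point chaining step."""
    remaining = {(int(r), int(c)) for r, c in zip(*np.nonzero(edge_mask))}
    nbrs = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    chains = []
    while remaining:
        chain = [remaining.pop()]
        for _ in range(2):  # grow from both ends of the chain
            while True:
                r, c = chain[-1]
                nxt = next(((r + dr, c + dc) for dr, dc in nbrs
                            if (r + dr, c + dc) in remaining), None)
                if nxt is None:
                    break
                remaining.remove(nxt)
                chain.append(nxt)
            chain.reverse()
        chains.append(chain)
    return chains

# Synthetic edge map: a horizontal run and a separate diagonal run
mask = np.zeros((10, 10), dtype=bool)
mask[1, 1:6] = True                # 5-pixel horizontal segment
for i in range(4):
    mask[5 + i, 5 + i] = True      # 4-pixel diagonal segment
chains = chain_edge_points(mask)
print(sorted(len(c) for c in chains))  # → [4, 5]
```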

  11. Automatic Segmentation and Quantitative Analysis of the Articular Cartilages From Magnetic Resonance Images of the Knee

    PubMed Central

    Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K.; Ourselin, Sébastien

    2010-01-01

    In this paper, we present a segmentation scheme that automatically and accurately segments all the cartilages from magnetic resonance (MR) images of nonpathological knees. Our scheme involves the automatic segmentation of the bones using a three-dimensional active shape model, the extraction of the expected bone-cartilage interface (BCI), and cartilage segmentation from the BCI using a deformable model that utilizes localization, patient-specific tissue estimation, and a model of the thickness variation. The accuracy of this scheme was experimentally validated using leave-one-out experiments on a database of fat-suppressed spoiled gradient recall MR images. The scheme was compared to three state-of-the-art approaches: tissue classification, a modified semi-automatic watershed algorithm, and nonrigid registration (B-spline-based free-form deformation). Our scheme obtained an average Dice similarity coefficient (DSC) of (0.83, 0.83, 0.85) for the (patellar, tibial, femoral) cartilages, while (0.82, 0.81, 0.86) was obtained with a tissue classifier and (0.73, 0.79, 0.76) with nonrigid registration. The average DSC obtained for all the cartilages using the semi-automatic watershed algorithm (0.90) was slightly higher than with our approach (0.89); however, unlike that algorithm, we segment each cartilage as a separate object. The effectiveness of our approach for quantitative analysis was evaluated using volume and thickness measures, with a median volume difference error of (5.92, 4.65, 5.69) and an absolute Laplacian thickness difference of (0.13, 0.24, 0.12) mm. PMID:19520633

  12. Study of tracking and data acquisition system for the 1990's. Volume 4: TDAS space segment architecture

    NASA Technical Reports Server (NTRS)

    Orr, R. S.

    1984-01-01

    Tracking and data acquisition system (TDAS) requirements, TDAS architectural goals, enhanced TDAS subsystems, constellation and networking options, TDAS spacecraft options, crosslink implementation, baseline TDAS space segment architecture, and threat model development/security analysis are addressed.

  13. Three dimensional level set based semiautomatic segmentation of atherosclerotic carotid artery wall volume using 3D ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Hossain, Md. Murad; AlMuhanna, Khalid; Zhao, Limin; Lal, Brajesh K.; Sikdar, Siddhartha

    2014-03-01

    3D segmentation of carotid plaque from ultrasound (US) images is challenging due to image artifacts and poor boundary definition. Semiautomatic segmentation algorithms for calculating vessel wall volume (VWV) have been proposed for the common carotid artery (CCA) but have not been applied to plaques in the internal carotid artery (ICA). In this work, we describe a 3D segmentation algorithm that is robust to shadowing and missing boundaries. Our algorithm uses a distance-regularized level set method with edge- and region-based energy to segment the adventitial wall boundary (AWB) and lumen-intima boundary (LIB) of plaques in the CCA, ICA, and external carotid artery (ECA). The algorithm is initialized by manually placing points on the boundary of a subset of transverse slices with an interslice distance of 4 mm. We propose a novel user-defined stopping-surface-based energy to prevent the evolving surface from leaking across poorly defined boundaries. Validation was performed against manual segmentation using 3D US volumes acquired from five asymptomatic patients with carotid stenosis using a linear 4D probe. A pseudo gold-standard boundary was formed from the manual segmentations of three observers. The Dice similarity coefficient (DSC), Hausdorff distance (HD), and modified HD (MHD) were used to compare the algorithm results against the pseudo gold standard on 1205 cross-sectional slices of the five 3D US image sets. The algorithm showed good agreement with the pseudo gold-standard boundary, with a mean DSC of 93.3% (AWB) and 89.82% (LIB); mean MHD of 0.34 mm (AWB) and 0.24 mm (LIB); and mean HD of 1.27 mm (AWB) and 0.72 mm (LIB). The proposed 3D semiautomatic segmentation is the first step towards full characterization of 3D plaque progression and longitudinal monitoring.
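
The boundary-distance metrics quoted above are standard and easy to state concretely. Below is a hedged NumPy sketch of the Hausdorff distance and the modified Hausdorff distance of Dubuisson and Jain between two 2-D point sets (the point sets are invented; real use would sample the segmented boundary contours):

```python
import numpy as np

def directed_distances(A, B):
    """For each point in A, the distance to its nearest point in B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1)

def hausdorff(A, B):
    """Symmetric Hausdorff distance: worst-case nearest-neighbour distance."""
    return max(directed_distances(A, B).max(), directed_distances(B, A).max())

def modified_hausdorff(A, B):
    """Dubuisson-Jain MHD: max of the two mean nearest-neighbour distances,
    less sensitive to single outlier points than the plain HD."""
    return max(directed_distances(A, B).mean(), directed_distances(B, A).mean())

# Toy boundary samples: one outlier point in B inflates HD but not MHD as much
A = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 3.0]])
print(hausdorff(A, B), modified_hausdorff(A, B))
```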

  14. 3-D segmentation of retinal blood vessels in spectral-domain OCT volumes of the optic nerve head

    NASA Astrophysics Data System (ADS)

    Lee, Kyungmoo; Abràmoff, Michael D.; Niemeijer, Meindert; Garvin, Mona K.; Sonka, Milan

    2010-03-01

    Segmentation of retinal blood vessels can provide important information for detecting and tracking retinal vascular diseases, including diabetic retinopathy, arterial hypertension, arteriosclerosis, and retinopathy of prematurity (ROP). Many studies on 2-D segmentation of retinal blood vessels from a variety of medical images have been performed. However, 3-D segmentation of retinal blood vessels from spectral-domain optical coherence tomography (OCT) volumes, which is capable of providing geometrically accurate vessel models, has, to the best of our knowledge, not been previously studied. The purpose of this study is to develop and evaluate a method that can automatically detect 3-D retinal blood vessels from spectral-domain OCT scans centered on the optic nerve head (ONH). The proposed method utilizes a fast multiscale 3-D graph search to segment retinal surfaces, as well as a triangular-mesh-based 3-D graph search to detect retinal blood vessels. An experiment on 30 ONH-centered OCT scans (15 right-eye scans and 15 left-eye scans) from 15 subjects was performed; the mean unsigned error in 3-D of the computer segmentations compared with the independent standard obtained from a retinal specialist was 3.4 +/- 2.5 voxels (0.10 +/- 0.07 mm).

  15. Pulse shape analysis and position determination in segmented HPGe detectors: The AGATA detector library

    NASA Astrophysics Data System (ADS)

    Bruyneel, B.; Birkenbach, B.; Reiter, P.

    2016-03-01

    The AGATA Detector Library (ADL) was developed for the calculation of signals from highly segmented large-volume high-purity germanium (HPGe) detectors. ADL basis sets comprise a large number of calculated position-dependent detector pulse shapes. A basis set is needed for pulse shape analysis (PSA), by means of which the interaction position of a γ-ray inside the active detector volume is determined. Theoretical concepts of the calculations are introduced, covering the relevant aspects of signal formation in HPGe. The approximations and the realization of the computer code with its input parameters are explained in detail. ADL is a versatile and modular computer code; new detectors can be implemented in this library. Measured position resolutions of the AGATA detectors based on ADL are discussed.

  16. Three-dimensional choroidal segmentation in spectral OCT volumes using optic disc prior information

    NASA Astrophysics Data System (ADS)

    Hu, Zhihong; Girkin, Christopher A.; Hariri, Amirhossein; Sadda, SriniVas R.

    2016-03-01

    Recently, much attention has been focused on determining the role of the peripapillary choroid (the layer between the outer retinal pigment epithelium (RPE)/Bruch's membrane (BM) and the choroid-sclera (C-S) junction), whether primary or secondary, in the pathogenesis of glaucoma. However, automated choroidal segmentation in spectral-domain optical coherence tomography (SD-OCT) images of the optic nerve head (ONH) has not been reported, probably because the presence of the BM opening (BMO, corresponding to the optic disc) can deflect the choroidal segmentation from its correct position. The purpose of this study is to develop a 3D graph-based approach to identify the 3D choroidal layer in ONH-centered SD-OCT images using BMO prior information. More specifically, an initial 3D choroidal segmentation was first performed using a 3D graph search algorithm, with varying surface interaction constraints based on a choroidal morphological model. To assist the choroidal segmentation, two other surfaces, the internal limiting membrane and the inner/outer segment junction, were also segmented. Based on the segmented layer between the RPE/BM and the C-S junction, a 2D projection map was created. The BMO in the projection map was detected by a 2D graph search. The pre-defined BMO information was then incorporated into the surface interaction constraints of the 3D graph search to obtain a more accurate choroidal segmentation. Twenty SD-OCT images from 20 healthy subjects were used. The mean differences of the choroidal borders between the algorithm and manual segmentation were at a sub-voxel level, indicating a high level of segmentation accuracy.

  17. Evaluation of atlas based auto-segmentation for head and neck target volume delineation in adaptive/replan IMRT

    NASA Astrophysics Data System (ADS)

    Speight, R.; Karakaya, E.; Prestwich, R.; Sen, M.; Lindsay, R.; Harding, R.; Sykes, J.

    2014-03-01

    IMRT for head and neck patients requires clinicians to delineate clinical target volumes (CTVs) on a planning CT (>2 hrs/patient). When patients require a replan CT, CTVs must be re-delineated. This work assesses the performance of atlas-based auto-segmentation (ABAS), which uses deformable image registration between planning and replan CTs to auto-segment CTVs on the replan CT based on the planning contours. Fifteen patients with planning and replan CTs were selected. One clinician delineated CTVs on the planning CTs and up to three clinicians delineated CTVs on the replan CTs. Replan CT volumes were auto-segmented using ABAS, with the manual CTVs from the planning CT as an atlas. ABAS CTVs were edited manually to make them clinically acceptable. Clinicians were timed to estimate the savings from using ABAS. CTVs were compared using the dice similarity coefficient (DSC) and mean distance to agreement (MDA). Mean inter-observer variability (DSC>0.79 and MDA<2.1 mm) was found to be greater than intra-observer variability (DSC>0.91 and MDA<1.5 mm). Comparing ABAS to manual CTVs gave DSC=0.86 and MDA=2.07 mm. Once edited, ABAS volumes agreed more closely with the manual CTVs (DSC=0.87 and MDA=1.87 mm). The mean clinician time required to produce CTVs was reduced from 169 min to 57 min when using ABAS. ABAS segments volumes with accuracy close to inter-observer variability; however, the volumes require some editing before clinical use. Using ABAS reduces contouring time by a factor of three.

  18. Open-source algorithm for automatic choroid segmentation of OCT volume reconstructions

    NASA Astrophysics Data System (ADS)

    Mazzaferri, Javier; Beaton, Luke; Hounye, Gisèle; Sayah, Diane N.; Costantino, Santiago

    2017-02-01

    The use of optical coherence tomography (OCT) to study ocular diseases associated with choroidal physiology is sharply limited by the lack of available automated segmentation tools. Current research largely relies on hand-traced, single-B-scan segmentations because commercially available programs require high-quality images, and the existing implementations are closed, scarce, and not freely available. We developed and implemented a robust algorithm for segmenting and quantifying the choroidal layer from 3-dimensional OCT reconstructions. Here, we describe the algorithm, validate and benchmark the results, and provide an open-source implementation under the General Public License for any researcher to use (https://www.mathworks.com/matlabcentral/fileexchange/61275-choroidsegmentation).

  19. Multi-level segment analysis: definition and applications in turbulence

    NASA Astrophysics Data System (ADS)

    Wang, Lipo

    2015-11-01

    The interaction of different scales is among the most interesting and challenging features in turbulence research. Existing approaches to scaling analysis, such as the structure-function and Fourier-spectrum methods, have their respective limitations, for instance scale mixing, i.e. the so-called infrared and ultraviolet effects. For a given function, specifying different window sizes yields different local extremal point sets; this window-size dependence indicates multi-scale statistics. A new method, multi-level segment analysis (MSA), based on local extrema statistics, has been developed. The part of the function between two adjacent extremal points is defined as a segment, which is characterized by the functional difference and the scale difference. The structure function can then be derived in a different way from these characteristic parameters. Data tests show that MSA can successfully reveal different scaling regimes in turbulence systems such as Lagrangian and two-dimensional turbulence, which have remained controversial in turbulence research. In principle, MSA can be extended to various other analyses.
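
The segment construction described above can be sketched directly: find the local extrema of a signal within a chosen window size, then characterize each span between adjacent extrema by its scale difference and functional difference. This is a simplified, invented illustration of the idea (the paper's actual estimator and window-selection procedure are not reproduced):

```python
import numpy as np

def local_extrema(u, w):
    """Indices where u attains the max or min of a window of half-width w;
    the extremal point set depends on w, as noted in the abstract."""
    idx = []
    for i in range(len(u)):
        seg = u[max(0, i - w):i + w + 1]
        if u[i] == seg.max() or u[i] == seg.min():
            idx.append(i)
    return idx

def msa_segments(u, x, w):
    """Each segment between adjacent extrema is characterized by its
    scale difference (extent in x) and functional difference (change in u)."""
    e = local_extrema(u, w)
    return [(x[j] - x[i], u[j] - u[i]) for i, j in zip(e, e[1:])]

x = np.linspace(0.0, 2.0 * np.pi, 200)
u = np.sin(3.0 * x)               # toy signal with six interior extrema
segs = msa_segments(u, x, w=10)
print(len(segs), max(abs(du) for _, du in segs))
```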

  20. Markov random field and Gaussian mixture for segmented MRI-based partial volume correction in PET

    NASA Astrophysics Data System (ADS)

    Bousse, Alexandre; Pedemonte, Stefano; Thomas, Benjamin A.; Erlandsson, Kjell; Ourselin, Sébastien; Arridge, Simon; Hutton, Brian F.

    2012-10-01

    In this paper we propose a segmented magnetic resonance imaging (MRI) prior-based maximum penalized likelihood deconvolution technique for positron emission tomography (PET) images. The model assumes the existence of activity classes that behave like a hidden Markov random field (MRF) driven by the segmented MRI. We utilize a mean-field approximation to compute the likelihood of the MRF. We tested our method on both simulated and clinical data (brain PET) and compared our results with PET images corrected with the re-blurred Van Cittert (VC) algorithm, the simplified Guven (SG) algorithm, and the region-based voxel-wise (RBV) technique. We demonstrate that our algorithm outperforms the VC algorithm, and outperforms the SG and RBV corrections when the segmented MRI is inconsistent (e.g., mis-segmentation, lesions, etc.) with the PET image.

  1. Layout pattern analysis using the Voronoi diagram of line segments

    NASA Astrophysics Data System (ADS)

    Dey, Sandeep Kumar; Cheilaris, Panagiotis; Gabrani, Maria; Papadopoulou, Evanthia

    2016-01-01

    Early identification of problematic patterns in very large scale integration (VLSI) designs is of great value as the lithographic simulation tools face significant timing challenges. To reduce the processing time, such a tool selects only a fraction of possible patterns which have a probable area of failure, with the risk of missing some problematic patterns. We introduce a fast method to automatically extract patterns based on their structure and context, using the Voronoi diagram of line-segments as derived from the edges of VLSI design shapes. Designers put line segments around the problematic locations in patterns called "gauges," along which the critical distance is measured. The gauge center is the midpoint of a gauge. We first use the Voronoi diagram of VLSI shapes to identify possible problematic locations, represented as gauge centers. Then we use the derived locations to extract windows containing the problematic patterns from the design layout. The problematic locations are prioritized by the shape and proximity information of the design polygons. We perform experiments for pattern selection in a portion of a 22-nm random logic design layout. The design layout had 38,584 design polygons (consisting of 199,946 line segments) on layer Mx, and 7079 markers generated by an optical rule checker (ORC) tool. The optical rules specify requirements for printing circuits with minimum dimension. Markers are the locations of some optical rule violations in the layout. We verify our approach by comparing the coverage of our extracted patterns to the ORC-generated markers. We further derive a similarity measure between patterns and between layouts. The similarity measure helps to identify a set of representative gauges that reduces the number of patterns for analysis.

  2. Meteorological analysis models, volume 2

    NASA Technical Reports Server (NTRS)

    Langland, R. A.; Stark, D. L.

    1976-01-01

    As part of the SEASAT program, two sets of analysis programs were developed. One set of programs produces 63 x 63 horizontal-mesh analyses on a polar stereographic grid. The other set produces 187 x 187 third-mesh analyses. The parameters analyzed include sea surface temperature, sea level pressure, and twelve levels of upper-air temperature, height, and wind analyses. Both sets use operational data provided by a weather bureau. The analysis output is used to initialize the primitive equation forecast models, which are also included.

  3. Hippocampus and amygdala volumes from magnetic resonance images in children: Assessing accuracy of FreeSurfer and FSL against manual segmentation.

    PubMed

    Schoemaker, Dorothee; Buss, Claudia; Head, Kevin; Sandman, Curt A; Davis, Elysia P; Chakravarty, M Mallar; Gauthier, Serge; Pruessner, Jens C

    2016-04-01

    The volumetric quantification of brain structures is of great interest in pediatric populations because it allows the investigation of different factors influencing neurodevelopment. FreeSurfer and FSL both provide frequently used packages for automatic segmentation of brain structures. In this study, we examined the accuracy and consistency of those two automated protocols relative to manual segmentation, commonly considered the "gold standard" technique, for estimating hippocampus and amygdala volumes in a sample of preadolescent children aged 6 to 11 years. The volumes obtained with FreeSurfer and FSL-FIRST were evaluated and compared with manual segmentations with respect to volume difference, spatial agreement, and between- and within-method correlations. Results highlighted a tendency for both automated techniques to overestimate hippocampus and amygdala volumes in comparison to manual segmentation. This was more pronounced for FreeSurfer than FSL-FIRST and, for both techniques, the overestimation was more marked for the amygdala than the hippocampus. Pearson correlations support moderate associations between manual tracing and FreeSurfer for hippocampus (right r=0.69, p<0.001; left r=0.77, p<0.001) and amygdala (right r=0.61, p<0.001; left r=0.67, p<0.001) volumes. Correlation coefficients between manual segmentation and FSL-FIRST were statistically significant (right hippocampus r=0.59, p<0.001; left hippocampus r=0.51, p<0.001; right amygdala r=0.35, p<0.001; left amygdala r=0.31, p<0.001) but were significantly weaker for all investigated structures. When computing intraclass correlation coefficients between manual tracing and automatic segmentation, all comparisons, except for left hippocampus volume estimated with FreeSurfer, failed to reach 0.70.
When looking at each method separately, correlations between left and right hemispheric volumes showed strong associations between bilateral hippocampus and bilateral amygdala volumes when
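
The intraclass correlation criterion mentioned above (0.70) is an absolute-agreement statistic, which unlike Pearson's r penalizes systematic over- or underestimation. Below is a hedged NumPy sketch of the two-way random, absolute-agreement, single-measure ICC(2,1) of Shrout and Fleiss, with invented volume data illustrating how a constant overestimation depresses the ICC even when the Pearson correlation is perfect:

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random, absolute-agreement, single-measure ICC(2,1),
    following Shrout & Fleiss. `ratings` is an (n subjects x k raters) array."""
    ratings = np.asarray(ratings, float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between raters
    sse = ((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                        # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical hippocampus volumes (cm^3): the "automatic" method adds a
# constant bias, so Pearson r = 1 but absolute agreement is imperfect.
manual = np.array([3.1, 2.8, 3.5, 3.0, 2.6, 3.3])
auto = manual + 0.4
print(icc_2_1(np.column_stack([manual, auto])))
```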

  4. Computer Aided Segmentation Analysis: New Software for College Admissions Marketing.

    ERIC Educational Resources Information Center

    Lay, Robert S.; Maguire, John J.

    1983-01-01

    Compares segmentation solutions obtained using a binary segmentation algorithm (THAID) and a new chi-square-based procedure (CHAID) that segments the prospective pool of college applicants using application and matriculation as criteria. Results showed a higher number of estimated qualified inquiries and more accurate estimates with CHAID. (JAC)

  5. Semi-automatic segmentation for 3D motion analysis of the tongue with dynamic MRI.

    PubMed

    Lee, Junghoon; Woo, Jonghye; Xing, Fangxu; Murano, Emi Z; Stone, Maureen; Prince, Jerry L

    2014-12-01

    Dynamic MRI has been widely used to track the motion of the tongue and measure its internal deformation during speech and swallowing. Accurate segmentation of the tongue is a prerequisite step to define the target boundary and constrain the tracking to tissue points within the tongue. Segmentation of 2D slices or 3D volumes is challenging because of the large number of slices and time frames involved in the segmentation, as well as the incorporation of numerous local deformations that occur throughout the tongue during motion. In this paper, we propose a semi-automatic approach to segment 3D dynamic MRI of the tongue. The algorithm steps include seeding a few slices at one time frame, propagating seeds to the same slices at different time frames using deformable registration, and random walker segmentation based on these seed positions. This method was validated on the tongue of five normal subjects carrying out the same speech task with multi-slice 2D dynamic cine-MR images obtained at three orthogonal orientations and 26 time frames. The resulting semi-automatic segmentations of a total of 130 volumes showed an average dice similarity coefficient (DSC) score of 0.92 with less segmented volume variability between time frames than in manual segmentations.

  6. Applications of recursive segmentation to the analysis of DNA sequences.

    PubMed

    Li, Wentian; Bernaola-Galván, Pedro; Haghighi, Fatameh; Grosse, Ivo

    2002-07-01

    Recursive segmentation is a procedure that partitions a DNA sequence into domains with a homogeneous composition of the four nucleotides A, C, G, and T. This procedure can also be applied to any sequence converted from a DNA sequence, such as a binary strong (G+C)/weak (A+T) sequence, a binary sequence indicating the presence or absence of the dinucleotide CpG, or a sequence indicating both the base and the codon-position information. We apply various conversion schemes in order to address the following five DNA sequence analysis problems: isochore mapping, CpG island detection, locating the origin and terminus of replication in bacterial genomes, finding complex repeats in telomere sequences, and delineating coding and noncoding regions. We find that the recursive segmentation procedure can successfully detect isochore borders, CpG islands, and the origin and terminus of replication, but it needs improvement for detecting complex repeats as well as borders between coding and noncoding regions.
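
The recursive procedure can be sketched for a binary strong/weak sequence: split at the point that maximizes the Jensen-Shannon divergence between the compositions of the two halves, then recurse on each half. This is a simplified illustration under stated assumptions: real implementations stop using a statistical significance test, whereas this sketch uses a fixed divergence threshold, and the input sequence is invented.

```python
import numpy as np

def entropy(counts):
    """Shannon entropy (bits) of a composition given symbol counts."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def js_divergence(seq, i):
    """Jensen-Shannon divergence between the symbol compositions of
    seq[:i] and seq[i:]."""
    n = len(seq)
    left = np.bincount(seq[:i], minlength=2).astype(float)
    right = np.bincount(seq[i:], minlength=2).astype(float)
    return (entropy(left + right)
            - (i / n) * entropy(left)
            - ((n - i) / n) * entropy(right))

def segment(seq, threshold=0.05, offset=0, borders=None):
    """Recursively split at the maximal-divergence point, stopping when
    the divergence gain falls below `threshold` (a stand-in for the
    significance test used in practice)."""
    if borders is None:
        borders = []
    if len(seq) < 4:
        return borders
    cuts = list(range(2, len(seq) - 1))
    d = [js_divergence(seq, i) for i in cuts]
    best = int(np.argmax(d))
    if d[best] > threshold:
        i = cuts[best]
        segment(seq[:i], threshold, offset, borders)
        borders.append(offset + i)
        segment(seq[i:], threshold, offset + i, borders)
    return borders

# Toy strong/weak (G+C vs. A+T) sequence: a G+C-rich half then an A+T-rich half
seq = np.array([1] * 50 + [0] * 50)
print(segment(seq))  # → [50]
```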

  7. Analysis of Retinal Peripapillary Segmentation in Early Alzheimer's Disease Patients

    PubMed Central

    Salobrar-Garcia, Elena; Hoyas, Irene; Leal, Mercedes; de Hoz, Rosa; Rojas, Blanca; Ramirez, Ana I.; Salazar, Juan J.; Yubero, Raquel; Gil, Pedro; Triviño, Alberto; Ramirez, José M.

    2015-01-01

    Decreased thickness of the retinal nerve fiber layer (RNFL) may reflect retinal neuronal ganglion cell death. A decrease in the RNFL has been demonstrated in Alzheimer's disease (AD), in addition to aging, by optical coherence tomography (OCT). Twenty-three mild-AD patients and 28 age-matched control subjects, with mean Mini-Mental State Examination scores of 23.3 and 28.2, respectively, and with no ocular disease or systemic disorders affecting vision, were considered for study. OCT peripapillary and macular segmentation thicknesses were examined in the right eye of each patient. Compared to controls, eyes of mild-AD patients showed no statistical difference in peripapillary RNFL thickness (P > 0.05); however, sectors 2, 3, 4, 8, 9, and 11 of the papilla showed thinning, while sectors 1, 5, 6, 7, and 10 showed thickening. Total macular volume and RNFL thickness of the fovea in all four inner quadrants and in the outer temporal quadrants proved to be significantly decreased (P < 0.01). Although peripapillary RNFL thickness did not statistically differ from control eyes, the increase in peripapillary thickness in our mild-AD patients could correspond to an early neurodegeneration stage and may indicate an inflammatory process that could lead to progressive peripapillary fiber damage. PMID:26557684

  8. Improving the clinical correlation of multiple sclerosis black hole volume change by paired-scan analysis.

    PubMed

    Tam, Roger C; Traboulsee, Anthony; Riddehough, Andrew; Li, David K B

    2012-01-01

    The change in T1-hypointense lesion ("black hole") volume is an important marker of pathological progression in multiple sclerosis (MS). Black hole boundaries often have low contrast and are difficult to determine accurately, and most (semi-)automated segmentation methods first compute the T2-hyperintense lesions, which are a superset of the black holes and are typically more distinct, to form a search space for the T1w lesions. Two main potential sources of measurement noise in longitudinal black hole volume computation are partial volume and variability in the T2w lesion segmentation. A paired analysis approach is proposed herein that uses registration to equalize partial volume and lesion mask processing to combine T2w lesion segmentations across time. The scans of 247 MS patients are used to compare a selected black hole computation method with an enhanced version incorporating paired analysis, using rank correlation to a clinical variable (MS functional composite) as the primary outcome measure. The comparison is done at nine different levels of intensity, as a previous study suggests that darker black holes may yield stronger correlations. The results demonstrate that paired analysis can strongly improve longitudinal correlation (from -0.148 to -0.303 in this sample) and may produce segmentations that are more sensitive to clinically relevant changes.

  9. Cumulative Heat Diffusion Using Volume Gradient Operator for Volume Analysis.

    PubMed

    Gurijala, K C; Wang, Lei; Kaufman, A

    2012-12-01

    We introduce a simple yet powerful method, called cumulative heat diffusion, for shape-based volume analysis that drastically reduces the computational cost compared to conventional heat diffusion. Unlike the conventional heat diffusion process, where diffusion is carried out by considering each node separately as the source, we simultaneously consider all the voxels as sources and carry out the diffusion, hence the term cumulative heat diffusion. In addition, we introduce a new operator used in the evaluation of cumulative heat diffusion, called the Volume Gradient Operator (VGO). VGO is a combination of the Laplace-Beltrami operator (LBO) and a data-driven operator which is a function of the half gradient, the absolute value of the difference between voxel intensities. The VGO by its definition captures local shape information and is used to assign the initial heat values. Furthermore, VGO is also used as the weighting parameter for the heat diffusion process. We demonstrate that our approach can robustly extract shape-based features and thus forms the basis for improved classification and exploration of features based on shape.

  10. Image segmentation by iterative parallel region growing with application to data compression and image analysis

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    1988-01-01

    Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach and two implementations of it on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
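The order-independence argument can be illustrated with a toy version of best-merge region growing: instead of scanning pixels in a fixed order, each iteration merges the globally most similar pair of adjacent regions. This 1-D sketch is purely illustrative; the MPP implementation described above is far more elaborate.

```python
def best_merge_segmentation(values, max_gap=1.0):
    """Region growing by globally best merges first: repeatedly merge the
    adjacent pair of regions whose mean values are closest, stopping when
    the smallest gap exceeds max_gap.  Returns a list of regions."""
    regions = [[v] for v in values]          # start: one region per pixel
    while len(regions) > 1:
        means = [sum(r) / len(r) for r in regions]
        gaps = [abs(means[i + 1] - means[i]) for i in range(len(means) - 1)]
        i = min(range(len(gaps)), key=gaps.__getitem__)   # global best pair
        if gaps[i] > max_gap:
            break
        regions[i:i + 2] = [regions[i] + regions[i + 1]]  # merge in place
    return regions

row = [10, 11, 10, 50, 51, 50]
print(best_merge_segmentation(row))  # → [[10, 11, 10], [50, 51, 50]]
```

Because the merge chosen at each step is a global optimum, the result does not depend on any pixel traversal order.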

  11. Open-source algorithm for automatic choroid segmentation of OCT volume reconstructions

    PubMed Central

    Mazzaferri, Javier; Beaton, Luke; Hounye, Gisèle; Sayah, Diane N.; Costantino, Santiago

    2017-01-01

    The use of optical coherence tomography (OCT) to study ocular diseases associated with choroidal physiology is sharply limited by the lack of available automated segmentation tools. Current research largely relies on hand-traced, single B-Scan segmentations because commercially available programs require high quality images, and the existing implementations are closed, scarce and not freely available. We developed and implemented a robust algorithm for segmenting and quantifying the choroidal layer from 3-dimensional OCT reconstructions. Here, we describe the algorithm, validate and benchmark the results, and provide an open-source implementation under the General Public License for any researcher to use (https://www.mathworks.com/matlabcentral/fileexchange/61275-choroidsegmentation). PMID:28181546

  12. Automated segmentation of chronic stroke lesions using LINDA: Lesion Identification with Neighborhood Data Analysis

    PubMed Central

    Pustina, Dorian; Coslett, H. Branch; Turkeltaub, Peter E.; Tustison, Nicholas; Schwartz, Myrna F.; Avants, Brian

    2015-01-01

    The gold standard for identifying stroke lesions is manual tracing, a method that is known to be observer dependent and time consuming, thus impractical for big data studies. We propose LINDA (Lesion Identification with Neighborhood Data Analysis), an automated segmentation algorithm capable of learning the relationship between existing manual segmentations and a single T1-weighted MRI. A dataset of 60 left hemispheric chronic stroke patients is used to build the method and test it with k-fold and leave-one-out procedures. With respect to manual tracings, predicted lesion maps showed a mean Dice overlap of 0.696 ± 0.16, Hausdorff distance of 17.9 ± 9.8 mm, and average displacement of 2.54 ± 1.38 mm. The manual and predicted lesion volumes correlated at r = 0.961. An additional dataset of 45 patients was utilized to test LINDA with independent data, achieving high accuracy rates and confirming its cross-institutional applicability. To investigate the cost of moving from manual tracings to automated segmentation, we performed comparative lesion-to-symptom mapping (LSM) on five behavioral scores. Predicted and manual lesions produced similar neuro-cognitive maps, albeit with some discussed discrepancies. Of note, region-wise LSM was more robust to the prediction error than voxel-wise LSM. Our results show that, while several limitations exist, our current results compete with or exceed the state-of-the-art, producing consistent predictions, very low failure rates, and transferable knowledge between labs. This work also establishes a new viewpoint on evaluating automated methods not only with segmentation accuracy but also with brain-behavior relationships. LINDA is made available online with trained models from over 100 patients. PMID:26756101
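The overlap and surface metrics reported above can be computed as follows. This is a generic sketch of the Dice coefficient for binary masks and the symmetric Hausdorff distance for point sets, not LINDA's own evaluation code.

```python
import numpy as np

def dice(a, b):
    """Dice overlap of two boolean masks: 2|A∩B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets (N×D arrays):
    the worst-case nearest-neighbour distance in either direction."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True
true = np.zeros((8, 8), bool); true[3:7, 3:7] = True
print(round(dice(pred, true), 3))  # 9 overlap pixels of 16 + 16 → 0.562
```

For 3-D segmentations the masks are simply boolean volumes and the point sets are the surface voxel coordinates.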

  14. REACH. Teacher's Guide, Volume III. Task Analysis.

    ERIC Educational Resources Information Center

    Morris, James Lee; And Others

    Designed for use with individualized instructional units (CE 026 345-347, CE 026 349-351) in the electromechanical cluster, this third volume of the postsecondary teacher's guide presents the task analysis which was used in the development of the REACH (Refrigeration, Electro-Mechanical, Air Conditioning, Heating) curriculum. The major blocks of…

  15. SU-E-J-238: Monitoring Lymph Node Volumes During Radiotherapy Using Semi-Automatic Segmentation of MRI Images

    SciTech Connect

    Veeraraghavan, H; Tyagi, N; Riaz, N; McBride, S; Lee, N; Deasy, J

    2014-06-01

    Purpose: Identification and image-based monitoring of lymph nodes growing due to disease could be an attractive alternative to prophylactic head and neck irradiation. We evaluated the accuracy of the user-interactive Grow Cut algorithm for volumetric segmentation of radiotherapy-relevant lymph nodes from MRI taken weekly during radiotherapy. Method: The algorithm employs user-drawn strokes in the image to volumetrically segment multiple structures of interest. We used 3D T2-weighted turbo spin echo images with an isotropic resolution of 1 mm3 and FOV of 492×492×300 mm3 of head and neck cancer patients who underwent weekly MR imaging during the course of radiotherapy. Various lymph node (LN) levels (N2, N3, N4/5) were individually contoured on the weekly MR images by an expert physician and used as ground truth in our study. The segmentation results were compared with the physician-drawn lymph nodes based on the DICE similarity score. Results: Three head and neck patients with 6 weekly MR images were evaluated. Two patients had level 2 LN drawn and one patient had level N2, N3 and N4/5 drawn on each MR image. The algorithm took an average of a minute to segment the entire volume (512×512×300 mm3). The algorithm achieved an overall DICE similarity score of 0.78. The time taken for initializing and obtaining the volumetric mask was about 5 mins for cases with only N2 LN and about 15 mins for the case with N2, N3 and N4/5 level nodes. The longer initialization time for the latter case was due to the need for accurate user inputs to separate overlapping portions of the different LN. The standard deviation in segmentation accuracy at different time points was at most 0.05. Conclusions: Our initial evaluation of the Grow Cut segmentation shows reasonably accurate and consistent volumetric segmentations of LN with minimal user effort and time.
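A minimal 1-D sketch of the GrowCut idea, a cellular automaton in which labelled cells "attack" their neighbours, is shown below. The attack rule (a neighbour q conquers p when g(|C_p − C_q|)·strength_q > strength_p, with g decreasing in the intensity difference) follows the published GrowCut formulation, but the image, seeds, and iteration count here are hypothetical.

```python
import numpy as np

def grow_cut(image, seeds, iters=50):
    """Minimal 1-D GrowCut sketch.  seeds: 0 = unlabelled, >0 = class label.
    Each sweep, every labelled neighbour q attacks cell p; the attack wins
    when g(|C_p - C_q|) * strength_q exceeds p's current strength."""
    image = np.asarray(image, float)
    label = np.array(seeds)
    strength = (label > 0).astype(float)      # seeds start at full strength
    g = lambda d: 1.0 - d / (image.max() - image.min() + 1e-9)
    for _ in range(iters):
        for p in range(len(image)):
            for q in (p - 1, p + 1):
                if 0 <= q < len(image) and label[q] > 0:
                    attack = g(abs(image[p] - image[q])) * strength[q]
                    if attack > strength[p]:
                        label[p], strength[p] = label[q], attack
    return label

img = [10, 11, 10, 90, 91, 90]
seeds = [1, 0, 0, 0, 0, 2]   # one stroke per region
print(grow_cut(img, seeds))  # → [1 1 1 2 2 2]
```

Strength decays across large intensity jumps, so each label naturally stops at the region boundary; in the interactive setting the user strokes play the role of the seed array.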

  16. Analysis of Vietnamization: Data Abstract. Volume 3

    DTIC Science & Technology

    1973-07-01

    AD/A-005 361, BSR 4033. Analysis of Vietnamization: Data Abstract. Final Report, Volume III. William G. Prince, Bendix Corporation. Prepared for and sponsored by the Defense Advanced Research Projects Agency, ARPA Order No…

  17. Evaluating and Improving the SAMA (Segmentation Analysis and Market Assessment) Recruiting Model

    DTIC Science & Technology

    2015-06-01

    Evaluating and Improving the SAMA (Segmentation Analysis and Market Assessment) Recruiting Model, by William N. Marmion, June 2015. Master's Thesis; Thesis Advisor: Lyn… Military recruiting for an all-volunteer force requires deliberate planning and market analysis in order to achieve prescribed…

  18. Finite Volume Methods: Foundation and Analysis

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Ohlberger, Mario

    2003-01-01

    Finite volume methods are a class of discretization schemes that have proven highly successful in approximating the solution of a wide variety of conservation law systems. They are extensively used in fluid mechanics, porous media flow, meteorology, electromagnetics, models of biological processes, semiconductor device simulation and many other engineering areas governed by conservative systems that can be written in integral control volume form. This article reviews elements of the foundation and analysis of modern finite volume methods. The primary advantages of these methods are numerical robustness through the attainment of discrete maximum (minimum) principles, applicability on very general unstructured meshes, and the intrinsic local conservation properties of the resulting schemes. Throughout this article, specific attention is given to scalar nonlinear hyperbolic conservation laws and the development of high order accurate schemes for discretizing them. A key tool in the design and analysis of finite volume schemes suitable for non-oscillatory discontinuity capturing is discrete maximum principle analysis. A number of building blocks used in the development of numerical schemes possessing local discrete maximum principles are reviewed in one and several space dimensions, e.g., monotone fluxes, E-fluxes, TVD discretization, non-oscillatory reconstruction, slope limiters, positive coefficient schemes, etc. When available, theoretical results concerning a priori and a posteriori error estimates are given. Further advanced topics are then considered such as high order time integration, discretization of diffusion terms and the extension to systems of nonlinear conservation laws.
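The simplest monotone finite volume scheme is first-order upwind for 1-D linear advection. The sketch below (illustrative, not from the article) updates cell averages by flux differences at cell faces, so the fluxes telescope and the total "mass" is conserved exactly under periodic boundaries.

```python
import numpy as np

def upwind_advection(u, a, dx, dt, steps):
    """First-order upwind finite-volume scheme for u_t + a u_x = 0 (a > 0)
    with periodic boundaries: u_i^{n+1} = u_i - (dt/dx) (F_{i+1/2} - F_{i-1/2}),
    with upwind flux F_{i-1/2} = a u_{i-1}."""
    c = a * dt / dx                      # CFL number; need c <= 1 for stability
    assert 0 < c <= 1
    u = np.asarray(u, float).copy()
    for _ in range(steps):
        flux = a * np.roll(u, 1)         # flux at the left face of each cell
        u += (dt / dx) * (flux - np.roll(flux, -1))
    return u

u0 = np.zeros(10); u0[0] = 1.0
u5 = upwind_advection(u0, a=1.0, dx=0.1, dt=0.1, steps=5)
print(int(np.argmax(u5)))   # with CFL = 1 the profile shifts one cell per step → 5
```

With CFL number exactly 1 the scheme reduces to an exact shift; for CFL < 1 it remains monotone (no new extrema), the property that discrete maximum principle analysis formalizes.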

  19. Three-dimensional freehand ultrasound: image reconstruction and volume analysis.

    PubMed

    Barry, C D; Allott, C P; John, N W; Mellor, P M; Arundel, P A; Thomson, D S; Waterton, J C

    1997-01-01

    A system is described that rapidly produces a regular 3-dimensional (3-D) data block suitable for processing by conventional image analysis and volume measurement software. The system uses electromagnetic spatial location of 2-dimensional (2-D) freehand-scanned ultrasound B-mode images, custom-built signal-conditioning hardware, UNIX-based computer processing and an efficient 3-D reconstruction algorithm. Utilisation of images from multiple angles of insonation, "compounding," reduces speckle contrast, improves structure coherence within the reconstructed grey-scale image and enhances the ability to detect structure boundaries and to segment and quantify features. Volume measurements using a series of water-filled latex and cylindrical foam rubber phantoms with volumes down to 0.7 mL show that a high degree of accuracy, precision and reproducibility can be obtained. Extension of the technique to handle in vivo data sets by allowing physiological criteria to be taken into account in selecting the images used for construction is also illustrated.
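The reconstruction step, binning spatially located 2-D pixel samples into a regular voxel grid and averaging co-located samples (compounding), can be sketched as follows. For brevity the electromagnetic pose handling is reduced to precomputed 3-D point locations; names and grid sizes are illustrative.

```python
import numpy as np

def reconstruct(points, values, shape):
    """Bin located pixel samples (N×3 coordinates) into a voxel grid,
    averaging all samples that fall in the same voxel (compounding)."""
    grid = np.zeros(shape)
    count = np.zeros(shape)
    idx = tuple(np.floor(points).astype(int).T)   # voxel index per sample
    np.add.at(grid, idx, values)                  # accumulate intensities
    np.add.at(count, idx, 1)                      # samples per voxel
    return np.where(count > 0, grid / np.maximum(count, 1), 0.0)

pts = np.array([[0.2, 0.3, 0.4], [0.6, 0.1, 0.9]])   # both land in voxel (0,0,0)
vol = reconstruct(pts, np.array([2.0, 4.0]), (4, 4, 4))
print(vol[0, 0, 0])   # → 3.0, the average of the two co-located samples
```

Averaging samples acquired from different insonation angles is what reduces speckle contrast in the compounded volume; empty voxels would be filled by interpolation in a full system.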

  20. Blood vessel segmentation using line-direction vector based on Hessian analysis

    NASA Astrophysics Data System (ADS)

    Nimura, Yukitaka; Kitasaka, Takayuki; Mori, Kensaku

    2010-03-01

    For decision of the treatment strategy, grading of stenoses is important in the diagnosis of vascular diseases such as arterial occlusive disease or thromboembolism. It is also important to understand the vasculature in minimally invasive surgery such as laparoscopic surgery or natural orifice translumenal endoscopic surgery. Precise segmentation and recognition of blood vessel regions are indispensable tasks in medical image processing systems. Previous methods utilize only a "lineness" measure, which is computed by Hessian analysis. However, the difference in intensity values between a voxel of a thin blood vessel and a voxel of surrounding tissue is generally decreased by the partial volume effect. Therefore, previous methods cannot extract thin blood vessel regions precisely. This paper describes a novel blood vessel segmentation method that can extract thin blood vessels while suppressing false positives. The proposed method utilizes not only the lineness measure but also the line-direction vector corresponding to the largest eigenvalue in Hessian analysis. By introducing line-direction information, it is possible to distinguish between a blood vessel voxel and a voxel having a low lineness measure caused by noise. In addition, we consider the scale of the blood vessel. The proposed method can reduce false positives in some line-like tissues close to blood vessel regions by utilizing iterative region growing with scale information. The experimental results show that thin blood vessels (0.5 mm in diameter, almost the same as the voxel spacing) can be extracted finely by the proposed method.
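A common way to obtain a lineness measure is from the eigenvalues of the Gaussian-smoothed Hessian at each voxel: for a bright tube, the two cross-sectional eigenvalues are strongly negative while the along-line eigenvalue is near zero. The sketch below uses a simplified measure in the spirit of Frangi/Sato-style filters, not the exact measure of this paper; the scale and threshold choices are illustrative.

```python
import numpy as np
from scipy import ndimage

def hessian_eigenvalues(vol, sigma=1.5):
    """Eigenvalues of the Gaussian-smoothed Hessian at every voxel,
    sorted by magnitude (|l1| <= |l2| <= |l3|)."""
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1          # differentiate once along axes i and j
            H[..., i, j] = ndimage.gaussian_filter(vol, sigma, order=order)
    ev = np.linalg.eigvalsh(H)                      # batched, ascending by value
    idx = np.argsort(np.abs(ev), axis=-1)
    return np.take_along_axis(ev, idx, axis=-1)     # re-sort by magnitude

def lineness(vol, sigma=1.5):
    """Bright-line measure: large where l2, l3 << 0 while l1 is near zero."""
    l1, l2, l3 = np.moveaxis(hessian_eigenvalues(vol, sigma), -1, 0)
    return np.where((l2 < 0) & (l3 < 0), np.abs(l3) - np.abs(l1), 0.0)

# Synthetic volume: a bright one-voxel-wide line along the first axis.
vol = np.zeros((21, 21, 21))
vol[:, 10, 10] = 100.0
L = lineness(vol)
print(L[10, 10, 10] > L[10, 2, 2])   # line voxel scores higher than background
```

The eigenvector paired with the smallest-magnitude eigenvalue gives the local line direction, which is the extra information the proposed method exploits to reject noisy low-lineness voxels.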

  1. Application of taxonomy theory, Volume 1: Computing a Hopf bifurcation-related segment of the feasibility boundary. Final report

    SciTech Connect

    Zaborszky, J.; Venkatasubramanian, V.

    1995-10-01

    Taxonomy Theory is the first precise, comprehensive theory for large power system dynamics modeled in any detail. The motivation for this project is to show that it can be used, practically, for analyzing a disturbance that actually occurred on a large system, which affected a sizable portion of the Midwest with supercritical Hopf-type oscillations. This event is well documented and studied. The report first summarizes Taxonomy Theory with an engineering flavor. Various computational approaches are then cited and analyzed for their suitability for use with Taxonomy Theory. Working equations are then developed for computing a segment of the feasibility boundary that bounds the region of (operating) parameters throughout which the operating point can be moved without losing stability. Experimental software incorporating the large EPRI software package PSAPAC is developed. After a summary of the events during the subject disturbance, numerous large-scale computations, up to 7,600 buses, are reported. These results are reduced into graphical and tabular forms, which are then analyzed and discussed. The report is divided into two volumes. This volume illustrates the use of Taxonomy Theory for computing the feasibility boundary and presents evidence that the event indeed led to a Hopf-type oscillation on the system. Furthermore, it demonstrates that the theory can indeed be used for practical computational work with very large systems. Volume 2, a separate volume, will show that the disturbance led to a supercritical (that is, stable-oscillation) Hopf bifurcation.

  2. Mobility out of Low-Paid Occupations: A Segmentation Analysis.

    ERIC Educational Resources Information Center

    Pomer, Marshall I.

    This study analyzes the mobility of workers initially employed in low-paid occupations who moved to moderately paid occupations, based on 18,347 observations of 1970 Census data, compared to 1965 data. The study relies on the concept of labor segment, which provides an antidote to the individualistic perspective. Two broad segments, a low-paid and…

  3. Atlas-Based Segmentation Improves Consistency and Decreases Time Required for Contouring Postoperative Endometrial Cancer Nodal Volumes

    SciTech Connect

    Young, Amy V.; Wortham, Angela; Wernick, Iddo; Evans, Andrew; Ennis, Ronald D.

    2011-03-01

    Purpose: Accurate target delineation of the nodal volumes is essential for three-dimensional conformal and intensity-modulated radiotherapy planning for endometrial cancer adjuvant therapy. We hypothesized that atlas-based segmentation ("autocontouring") would lead to time savings and more consistent contours among physicians. Methods and Materials: A reference anatomy atlas was constructed using the data from 15 postoperative endometrial cancer patients by contouring the pelvic nodal clinical target volume on the simulation computed tomography scan according to the Radiation Therapy Oncology Group 0418 trial using commercially available software. On the simulation computed tomography scans from 10 additional endometrial cancer patients, the nodal clinical target volume autocontours were generated. Three radiation oncologists corrected the autocontours and delineated the manual nodal contours under timed conditions while unaware of the other contours. The time difference was determined, and the overlap of the contours was calculated using Dice's coefficient. Results: For all physicians, manual contouring of the pelvic nodal target volumes and editing the autocontours required a mean ± standard deviation of 32 ± 9 vs. 23 ± 7 minutes, respectively (p = .000001), a 26% time savings. For each physician, the time required to delineate the manual contours vs. correcting the autocontours was 30 ± 3 vs. 21 ± 5 min (p = .003), 39 ± 12 vs. 30 ± 5 min (p = .055), and 29 ± 5 vs. 20 ± 5 min (p = .0002). The mean overlap increased from manual contouring (0.77) to correcting the autocontours (0.79; p = .038). Conclusion: The results of our study have shown that autocontouring leads to increased consistency and time savings when contouring the nodal target volumes for adjuvant treatment of endometrial cancer, although the autocontours still required careful editing to ensure that the lymph nodes at risk of recurrence are properly included in the target

  4. Analysis of image thresholding segmentation algorithms based on swarm intelligence

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo

    2013-03-01

    Swarm intelligence-based image thresholding segmentation algorithms are playing an important role in the research field of image segmentation. In this paper, we briefly introduce the theories of four existing image segmentation algorithms based on swarm intelligence: the fish swarm algorithm, artificial bee colony, bacterial foraging algorithm and particle swarm optimization. Then some image benchmarks are tested in order to show the differences in segmentation accuracy, time consumption, convergence, and robustness to salt-and-pepper noise and Gaussian noise among these four algorithms. Through these comparisons, this paper gives qualitative analyses of the performance variance of the four algorithms. The conclusions in this paper provide useful guidance for practical image segmentation.
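As a concrete example of swarm-intelligence thresholding, the sketch below uses particle swarm optimization to search for the threshold that maximizes Otsu's between-class variance. The swarm parameters, iteration budget, and 1-D search space are illustrative choices, not those of the surveyed algorithms.

```python
import numpy as np

def otsu_score(pixels, t):
    """Between-class variance of a background/foreground split at t."""
    lo, hi = pixels[pixels < t], pixels[pixels >= t]
    if len(lo) == 0 or len(hi) == 0:
        return 0.0
    w0, w1 = len(lo) / len(pixels), len(hi) / len(pixels)
    return w0 * w1 * (lo.mean() - hi.mean()) ** 2

def pso_threshold(pixels, n_particles=12, iters=40, seed=0):
    """Particle swarm search over threshold values in [min, max]."""
    rng = np.random.default_rng(seed)
    lo, hi = pixels.min(), pixels.max()
    x = rng.uniform(lo, hi, n_particles)          # positions (thresholds)
    v = np.zeros(n_particles)                     # velocities
    pbest = x.copy()
    pscore = np.array([otsu_score(pixels, t) for t in x])
    gbest = pbest[pscore.argmax()]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        score = np.array([otsu_score(pixels, t) for t in x])
        better = score > pscore
        pbest[better], pscore[better] = x[better], score[better]
        gbest = pbest[pscore.argmax()]
    return gbest

# Bimodal "image" histogram: dark mode near 50, bright mode near 200.
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(50, 10, 500), rng.normal(200, 10, 500)])
t = pso_threshold(pixels)
print(50 < t < 200)   # the found threshold separates the two modes
```

The other three surveyed algorithms differ in how candidate thresholds are moved and shared between agents, but all optimize a criterion of this kind over the intensity range.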

  5. Level set-based core segmentation of mammographic masses facilitating three stage (core, periphery, spiculation) analysis.

    PubMed

    Ball, John E; Bruce, Lori Mann

    2007-01-01

    We present mammographic mass core segmentation based on the Chan-Vese level set method. The proposed method is analyzed via the resulting feature efficacies. Additionally, the core segmentation method is used to investigate a three-stage segmentation approach, i.e., segment the mass core, periphery, and spiculations (if any exist) and use features from these three segmentations to classify the mass as either benign or malignant. The proposed core segmentation method and a proposed end-to-end computer-aided detection (CAD) system using three-stage segmentation are implemented and experimentally tested with a set of 60 mammographic images from the Digital Database for Screening Mammography. Receiver operating characteristic (ROC) curve Az values for morphological and texture features extracted from the core segmentation are shown to be on par with, or better than, those extracted from a periphery segmentation. The efficacy of the core segmentation features when combined with the periphery and spiculation segmentation features is shown to be feature set dependent. The proposed end-to-end system uses stepwise linear discriminant analysis for feature selection and a maximum likelihood classifier. Using all three stages (core + periphery + spiculations) results in an overall accuracy (OA) of 90% with 2 false negatives (FN). Since many CAD systems only perform a periphery analysis, adding core features could potentially increase OA and reduce FN cases.

  6. Object density-based image segmentation and its applications in biomedical image analysis.

    PubMed

    Yu, Jinhua; Tan, Jinglu

    2009-12-01

    In many applications of medical image analysis, the density of an object is the most important feature for isolating an area of interest (image segmentation). In this research, an object density-based image segmentation methodology is developed, which incorporates intensity-based, edge-based and texture-based segmentation techniques. The proposed method consists of three main stages: preprocessing, object segmentation and final segmentation. Image enhancement, noise reduction and layer-of-interest extraction are several subtasks of preprocessing. Object segmentation utilizes a marker-controlled watershed technique to identify each object of interest (OI) from the background. A marker estimation method is proposed to minimize over-segmentation resulting from the watershed algorithm. Object segmentation provides an accurate density estimation of OI which is used to guide the subsequent segmentation steps. The final stage converts the distribution of OI into textural energy by using fractal dimension analysis. An energy-driven active contour procedure is designed to delineate the area with desired object density. Experimental results show that the proposed method is 98% accurate in segmenting synthetic images. Segmentation of microscopic images and ultrasound images shows the potential utility of the proposed method in different applications of medical image processing.
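The fractal-dimension step in the final stage can be illustrated with a standard box-counting estimator: cover the object with boxes of decreasing size s and take the slope of log N(s) against log(1/s), where N(s) is the number of occupied boxes. This is a generic sketch, not the paper's implementation.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a 2-D binary mask by box counting."""
    mask = np.asarray(mask, bool)
    counts = []
    for s in sizes:
        h = (mask.shape[0] // s) * s          # crop to a multiple of s
        w = (mask.shape[1] // s) * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())   # occupied s×s boxes
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

solid = np.ones((64, 64))                     # a filled square is 2-dimensional
print(round(box_counting_dimension(solid), 2))   # → 2.0
```

Rougher textures fill space less regularly, giving non-integer estimates between 1 and 2; that estimate is what drives the textural-energy map used by the active contour.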

  7. An Analysis of Image Segmentation Time in Beam’s-Eye-View Treatment Planning

    SciTech Connect

    Li, Chun; Spelbring, D.R.; Chen, George T.Y.

    2015-01-15

    In this work we tabulate and histogram the image segmentation times for beam's-eye-view (BEV) treatment planning in our center. The average time needed to generate contours on CT images delineating normal structures and treatment target volumes is calculated using a database containing over 500 patients' BEV plans. The average number of contours and the total image segmentation time needed for BEV plans in three common treatment sites, namely head/neck, lung/chest, and prostate, were estimated.

  8. The Analysis of Image Segmentation Hierarchies with a Graph-based Knowledge Discovery System

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Cooke, Diane J.; Ketkar, Nikhil; Aksoy, Selim

    2008-01-01

    Currently available pixel-based analysis techniques do not effectively extract the information content from the increasingly available high spatial resolution remotely sensed imagery data. A general consensus is that object-based image analysis (OBIA) is required to effectively analyze this type of data. OBIA is usually a two-stage process; image segmentation followed by an analysis of the segmented objects. We are exploring an approach to OBIA in which hierarchical image segmentations provided by the Recursive Hierarchical Segmentation (RHSEG) software developed at NASA GSFC are analyzed by the Subdue graph-based knowledge discovery system developed by a team at Washington State University. In this paper we discuss our initial approach to representing the RHSEG-produced hierarchical image segmentations in a graphical form understandable by Subdue, and provide results on real and simulated data. We also discuss planned improvements designed to more effectively and completely convey the hierarchical segmentation information to Subdue and to improve processing efficiency.

  9. Automated detection, 3D segmentation and analysis of high resolution spine MR images using statistical shape models

    NASA Astrophysics Data System (ADS)

    Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.

    2012-12-01

    Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract the 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes with the mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02 and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans and shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.

  10. Prioritization of brain MRI volumes using medical image perception model and tumor region segmentation.

    PubMed

    Mehmood, Irfan; Ejaz, Naveed; Sajjad, Muhammad; Baik, Sung Wook

    2013-10-01

    The objective of the present study is to explore prioritization methods in diagnostic imaging modalities to automatically determine the contents of medical images. In this paper, we propose an efficient prioritization of brain MRI. First, the visual perception of the radiologists is adapted to identify salient regions. Then this saliency information is used as an automatic label for accurate segmentation of brain lesion to determine the scientific value of that image. The qualitative and quantitative results prove that the rankings generated by the proposed method are closer to the rankings created by radiologists.

  11. Multiresolution mesh segmentation based on surface roughness and wavelet analysis

    NASA Astrophysics Data System (ADS)

    Roudet, Céline; Dupont, Florent; Baskurt, Atilla

    2007-01-01

    During the last decades, three-dimensional objects have begun to compete with traditional multimedia (images, sounds and videos) and have been used by more and more applications. The common model used to represent them is a surface mesh due to its intrinsic simplicity and efficiency. In this paper, we present a new algorithm for the segmentation of semi-regular triangle meshes via multiresolution analysis. Our method uses several measures which reflect the roughness of the surface for all meshes resulting from the decomposition of the initial model into different fine-to-coarse multiresolution meshes. The geometric data decomposition is based on the lifting scheme. Using that formulation, we have compared various interpolant prediction operators, associated or not with an update step. For each resolution level, the resulting approximation mesh is then partitioned into classes having almost constant roughness thanks to a clustering algorithm. The resulting classes gather regions having the same visual appearance in terms of roughness. The last step consists in decomposing the mesh into connected groups of triangles using region growing and merging algorithms. These connected surface patches are of particular interest for adaptive mesh compression, visualisation, indexing or watermarking.
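The lifting-scheme decomposition can be illustrated on a 1-D signal with the Haar predict/update pair: split into even/odd samples, predict each odd sample from its even neighbour (detail = odd − even), then update the evens so the running mean is preserved. The mesh setting generalizes this to irregular connectivity; this sketch only shows the principle.

```python
def haar_lift(signal):
    """One level of Haar lifting: split, predict, update."""
    even, odd = signal[::2], signal[1::2]
    detail = [o - e for e, o in zip(even, odd)]      # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]  # update step
    return approx, detail

def haar_unlift(approx, detail):
    """Invert the lifting steps in reverse order: perfect reconstruction."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

sig = [4.0, 6.0, 5.0, 9.0]
a, d = haar_lift(sig)
print(a, d)                       # → [5.0, 7.0] [2.0, 4.0]
print(haar_unlift(a, d) == sig)   # → True
```

The detail coefficients play the role of the roughness signal in the paper: large details indicate a surface that the coarse level predicts poorly.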

  12. Short Segment versus Long Segment Pedicle Screws Fixation in Management of Thoracolumbar Burst Fractures: Meta-Analysis

    PubMed Central

    2017-01-01

    Posterior pedicle screw fixation has become a popular method for treating thoracolumbar burst fractures. However, it remains unclear whether additional fixation of more segments could improve clinical and radiological outcomes. This meta-analysis was performed to evaluate the effectiveness of fixation levels with pedicle screw fixation for thoracolumbar burst fractures. MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials, Springer, and Google Scholar were searched for relevant randomized and quasirandomized controlled trials that compared the clinical and radiological efficacy of short versus long segment fixation for thoracolumbar burst fractures managed by posterior pedicle screw fixation. Risk of bias in the included studies was assessed using the Cochrane Risk of Bias tool. Based on predefined inclusion criteria, nine eligible trials with a total of 365 patients were included in this meta-analysis. Results were expressed as risk difference for dichotomous outcomes and standard mean difference for continuous outcomes, with 95% confidence intervals. Baseline characteristics were similar between the short and long segment fixation groups. No significant difference was identified between the two groups regarding radiological outcome, functional outcome, neurologic improvement, or implant failure rate. The results of this meta-analysis suggest that extension of fixation is not necessary when a thoracolumbar burst fracture is treated by posterior pedicle screw fixation. More high-quality randomized controlled trials are still needed. PMID:28243383

  13. Simultaneous segmentation of retinal surfaces and microcystic macular edema in SDOCT volumes

    NASA Astrophysics Data System (ADS)

    Antony, Bhavna J.; Lang, Andrew; Swingle, Emily K.; Al-Louzi, Omar; Carass, Aaron; Solomon, Sharon; Calabresi, Peter A.; Saidha, Shiv; Prince, Jerry L.

    2016-03-01

    Optical coherence tomography (OCT) is a noninvasive imaging modality that has begun to find widespread use in retinal imaging for the detection of a variety of ocular diseases. In addition to structural changes in the form of altered retinal layer thicknesses, pathological conditions may also cause the formation of edema within the retina. In multiple sclerosis, for instance, the nerve fiber and ganglion cell layers are known to thin. Additionally, the formation of pseudocysts called microcystic macular edema (MME) has also been observed in the eyes of about 5% of MS patients, and its presence has been shown to be correlated with disease severity. Previously, we proposed separate algorithms for the segmentation of retinal layers and MME, but since MME mainly occurs within specific regions of the retina, a simultaneous approach is advantageous. In this work, we propose an automated globally optimal graph-theoretic approach that simultaneously segments the retinal layers and the MME in volumetric OCT scans. SD-OCT scans from one eye of 12 MS patients with known MME and 8 healthy controls were acquired and the pseudocysts manually traced. The overall precision and recall of the pseudocyst detection were found to be 86.0% and 79.5%, respectively.
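    The precision and recall figures quoted above follow the standard definitions; a minimal sketch over sets of detected versus reference pseudocyst voxels (the helper name and set-based representation are illustrative, not the paper's evaluation code):

    ```python
    def precision_recall(predicted, reference):
        """Precision and recall for detected items (e.g. pseudocyst voxels).

        predicted, reference: iterables of hashable identifiers
        (such as (x, y, z) voxel coordinates).
        """
        predicted, reference = set(predicted), set(reference)
        tp = len(predicted & reference)  # true positives
        precision = tp / len(predicted) if predicted else 0.0
        recall = tp / len(reference) if reference else 0.0
        return precision, recall
    ```

    For example, four detections of which three are correct against five reference voxels gives precision 0.75 and recall 0.6.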

  14. Simultaneous Segmentation of Retinal Surfaces and Microcystic Macular Edema in SDOCT Volumes

    PubMed Central

    Antony, Bhavna J.; Lang, Andrew; Swingle, Emily K.; Al-Louzi, Omar; Carass, Aaron; Solomon, Sharon; Calabresi, Peter A.; Saidha, Shiv; Prince, Jerry L.

    2016-01-01

    Optical coherence tomography (OCT) is a noninvasive imaging modality that has begun to find widespread use in retinal imaging for the detection of a variety of ocular diseases. In addition to structural changes in the form of altered retinal layer thicknesses, pathological conditions may also cause the formation of edema within the retina. In multiple sclerosis, for instance, the nerve fiber and ganglion cell layers are known to thin. Additionally, the formation of pseudocysts called microcystic macular edema (MME) has also been observed in the eyes of about 5% of MS patients, and its presence has been shown to be correlated with disease severity. Previously, we proposed separate algorithms for the segmentation of retinal layers and MME, but since MME mainly occurs within specific regions of the retina, a simultaneous approach is advantageous. In this work, we propose an automated globally optimal graph-theoretic approach that simultaneously segments the retinal layers and the MME in volumetric OCT scans. SD-OCT scans from one eye of 12 MS patients with known MME and 8 healthy controls were acquired and the pseudocysts manually traced. The overall precision and recall of the pseudocyst detection were found to be 86.0% and 79.5%, respectively. PMID:27199502

  15. A novel iris segmentation algorithm based on small eigenvalue analysis

    NASA Astrophysics Data System (ADS)

    Harish, B. S.; Aruna Kumar, S. V.; Guru, D. S.; Ngo, Minh Ngoc

    2015-12-01

    In this paper, a simple and robust algorithm is proposed for iris segmentation. The proposed method consists of two steps. In the first step, the iris and pupil are segmented using the Robust Spatial Kernel FCM (RSKFCM) algorithm. RSKFCM is based on the traditional Fuzzy-c-Means (FCM) algorithm, incorporates spatial information, and uses a kernel metric as the distance measure. In the second step, the small eigenvalue transformation is applied to localize the iris boundary. The transformation is based on statistical and geometrical properties of the small eigenvalue of the covariance matrix of a set of edge pixels. Extensive experiments were carried out on standard benchmark iris datasets (viz. CASIA-IrisV4 and UBIRIS.v2), and we compared our proposed method with existing iris segmentation methods. Our proposed method has the least time complexity, O(n(i+p)). The results of the experiments emphasize that the proposed algorithm outperforms the existing iris segmentation methods.
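    The small-eigenvalue idea can be sketched as follows: for a set of edge-pixel coordinates, the smallest eigenvalue of the 2x2 covariance matrix is near zero when the pixels are nearly collinear and grows as the point set becomes more curved or dispersed (`small_eigenvalue` is a hypothetical helper, not the authors' implementation):

    ```python
    def small_eigenvalue(edge_pixels):
        """Smallest eigenvalue of the 2x2 covariance matrix of (x, y) coords.

        Near zero for collinear edge pixels; larger for curved/spread sets,
        which is the property a boundary-localisation step can exploit.
        """
        n = len(edge_pixels)
        mx = sum(x for x, _ in edge_pixels) / n
        my = sum(y for _, y in edge_pixels) / n
        sxx = sum((x - mx) ** 2 for x, _ in edge_pixels) / n
        syy = sum((y - my) ** 2 for _, y in edge_pixels) / n
        sxy = sum((x - mx) * (y - my) for x, y in edge_pixels) / n
        # Closed-form eigenvalues of [[sxx, sxy], [sxy, syy]].
        tr, det = sxx + syy, sxx * syy - sxy * sxy
        disc = max(tr * tr / 4.0 - det, 0.0) ** 0.5
        return tr / 2.0 - disc
    ```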

  16. Comparison of acute and chronic traumatic brain injury using semi-automatic multimodal segmentation of MR volumes.

    PubMed

    Irimia, Andrei; Chambers, Micah C; Alger, Jeffry R; Filippou, Maria; Prastawa, Marcel W; Wang, Bo; Hovda, David A; Gerig, Guido; Toga, Arthur W; Kikinis, Ron; Vespa, Paul M; Van Horn, John D

    2011-11-01

    Although neuroimaging is essential for prompt and proper management of traumatic brain injury (TBI), there is a regrettable and acute lack of robust methods for the visualization and assessment of TBI pathophysiology, especially for the purpose of improving clinical outcome metrics. Until now, the application of automatic segmentation algorithms to TBI in a clinical setting has remained an elusive goal because existing methods have, for the most part, been insufficiently robust to faithfully capture TBI-related changes in brain anatomy. This article introduces and illustrates the combined use of multimodal TBI segmentation and time point comparison using 3D Slicer, a widely used software environment whose TBI data processing solutions are openly available. For three representative TBI cases, semi-automatic tissue classification and 3D model generation are performed to enable intra-patient time point comparison of TBI using multimodal volumetrics and clinical atrophy measures. Identification and quantitative assessment of extra- and intra-cortical bleeding, lesions, edema, and diffuse axonal injury are demonstrated. The proposed tools allow cross-correlation of multimodal metrics from structural imaging (e.g., structural volume, atrophy measurements) with clinical outcome variables and other potential factors predictive of recovery. In addition, the workflows described are suitable for TBI clinical practice and patient monitoring, particularly for assessing damage extent and for the measurement of neuroanatomical change over time. With knowledge of general location, extent, and degree of change, such metrics can be associated with clinical measures and subsequently used to suggest viable treatment options.

  17. Adolescents and alcohol: an explorative audience segmentation analysis

    PubMed Central

    2012-01-01

    Background So far, audience segmentation of adolescents with respect to alcohol has been carried out mainly on the basis of socio-demographic characteristics. In this study we examined whether it is possible to segment adolescents according to their values and attitudes towards alcohol to use as guidance for prevention programmes. Methods A random sample of 7,000 adolescents aged 12 to 18 was drawn from the Municipal Basic Administration (MBA) of 29 Local Authorities in the province North-Brabant in the Netherlands. By means of an online questionnaire data were gathered on values and attitudes towards alcohol, alcohol consumption and socio-demographic characteristics. Results We were able to distinguish a total of five segments on the basis of five attitude factors. Moreover, the five segments also differed in drinking behavior independently of socio-demographic variables. Conclusions Our investigation was a first step in the search for possibilities of segmenting by factors other than socio-demographic characteristics. Further research is necessary in order to understand these results for alcohol prevention policy in concrete terms. PMID:22950946

  18. Rigid model-based 3D segmentation of the bones of joints in MR and CT images for motion analysis.

    PubMed

    Liu, Jiamin; Udupa, Jayaram K; Saha, Punam K; Odhner, Dewey; Hirsch, Bruce E; Siegler, Sorin; Simon, Scott; Winkelstein, Beth A

    2008-08-01

    There are several medical application areas that require the segmentation and separation of the component bones of joints in a sequence of images of the joint acquired under various loading conditions, our own target area being joint motion analysis. This is a challenging problem due to the proximity of bones at the joint, partial volume effects, and other imaging modality-specific factors that confound boundary contrast. In this article, a two-step model-based segmentation strategy is proposed that utilizes the unique context of the current application, wherein the shape of each individual bone is preserved in all scans of a particular joint while the spatial arrangement of the bones alters significantly among bones and scans. In the first step, a rigid deterministic model of the bone is generated from a segmentation of the bone in the image corresponding to one position of the joint by using the live wire method. Subsequently, in other images of the same joint, this model is used to search for the same bone by minimizing an energy function that utilizes both boundary- and region-based information. An evaluation of the method utilizing a total of 60 data sets of MR and CT images of the ankle complex and cervical spine indicates that the segmentations agree very closely with the live wire segmentations, yielding true positive and false positive volume fractions in the ranges 89%-97% and 0.2%-0.7%, respectively. The method requires 1-2 minutes of operator time and 6-7 minutes of computer time per data set, which makes it significantly more efficient than live wire, the method currently available for the task that can be used routinely.

  19. How Many Templates Does It Take for a Good Segmentation?: Error Analysis in Multiatlas Segmentation as a Function of Database Size.

    PubMed

    Awate, Suyash P; Zhu, Peihong; Whitaker, Ross T

    2012-01-01

    This paper proposes a novel formulation to model and analyze the statistical characteristics of some types of segmentation problems that are based on combining label maps / templates / atlases. Such segmentation-by-example approaches are quite powerful on their own for several clinical applications and they provide prior information, through spatial context, when combined with intensity-based segmentation methods. The proposed formulation models a class of multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of images. The paper presents a systematic analysis of the nonparametric estimation's convergence behavior (i.e. characterizing segmentation error as a function of the size of the multiatlas database) and shows that it has a specific analytic form involving several parameters that are fundamental to the specific segmentation problem (i.e. chosen anatomical structure, imaging modality, registration method, label-fusion algorithm, etc.). We describe how to estimate these parameters and show that several brain anatomical structures exhibit the trends determined analytically. The proposed framework also provides per-voxel confidence measures for the segmentation. We show that the segmentation error for large database sizes can be predicted using small-sized databases. Thus, small databases can be exploited to predict the database sizes required ("how many templates") to achieve "good" segmentations having errors lower than a specified tolerance. Such cost-benefit analysis is crucial for designing and deploying multiatlas segmentation systems.
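    Multiatlas segmentation of the kind analyzed here combines registered label maps by label fusion; a minimal majority-vote sketch with a per-voxel confidence measure (an illustrative baseline only, not the paper's nonparametric regression estimator):

    ```python
    from collections import Counter

    def majority_vote(label_maps):
        """Per-voxel majority-vote fusion of registered atlas label maps.

        label_maps: list of equal-length label sequences (one per atlas).
        Returns the fused labels plus a per-voxel confidence, taken here
        as the fraction of atlases agreeing with the winning label.
        """
        fused, confidence = [], []
        for votes in zip(*label_maps):
            label, count = Counter(votes).most_common(1)[0]
            fused.append(label)
            confidence.append(count / len(votes))
        return fused, confidence
    ```

    As the number of atlases grows, the vote fraction stabilizes, which is one intuition behind studying segmentation error as a function of database size.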

  20. Pressure volume analysis in the mouse

    PubMed Central

    Townsend, DeWayne

    2017-01-01

    SHORT ABSTRACT This manuscript describes a detailed protocol for the collection of pressure-volume data from the mouse. LONG ABSTRACT Understanding the causes and progression of heart disease presents a significant challenge to the biomedical community. The genetic flexibility of the mouse provides great potential to explore cardiac function at the molecular level. The mouse's small size does present some challenges with regard to performing detailed cardiac phenotyping. Miniaturization and other advances in technology have made many methods of cardiac assessment possible in the mouse. Of these, the simultaneous collection of pressure and volume data provides a detailed picture of cardiac function that is not available through any other modality. Here a detailed procedure for the collection of pressure-volume loop data is described, including a discussion of the principles underlying the measurements and the potential sources of error. Anesthetic management and surgical approaches are discussed in great detail, as both are critical to obtaining high-quality hemodynamic measurements from the mouse. The principles of hemodynamic protocol development and relevant aspects of data analysis are also addressed. PMID:27166576

  1. LABCEDE and COCHISE Analysis II. Volume I.

    DTIC Science & Technology

    1980-02-01

    PSI-TR-207A / AFGL-TR-80-0063(I): LABCEDE and COCHISE Analysis II, Volume I. W. T... uniformly across the test chamber (i.e., helium is not cryogenically pumped by the walls). The added reactant gases, N2, O2, etc., are still adsorbed by... infrared (IR) mirror flange is directed through the reaction cell, reflected from a plane MgF2-coated mirror mounted on the IR lens baffle, and viewed by...

  2. Segmentation of hepatic artery in multi-phase liver CT using directional dilation and connectivity analysis

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Schnurr, Alena-Kathrin; Zidowitz, Stephan; Georgii, Joachim; Zhao, Yue; Razavi, Mohammad; Schwier, Michael; Hahn, Horst K.; Hansen, Christian

    2016-03-01

    Segmentation of hepatic arteries in multi-phase computed tomography (CT) images is indispensable in liver surgery planning. During image acquisition, the hepatic artery is enhanced by the injection of contrast agent. The enhanced signals are often not stably acquired due to non-optimal contrast timing. Other vascular structures, such as the hepatic or portal veins, can be enhanced as well in the arterial phase, which can adversely affect the segmentation results. Furthermore, the arteries might suffer from partial volume effects due to their small diameter. To overcome these difficulties, we propose a framework for robust hepatic artery segmentation requiring a minimal amount of user interaction. First, an efficient multi-scale Hessian-based vesselness filter is applied to the arterial-phase CT image, aiming to enhance vessel structures within a specified diameter range. Second, the vesselness response is processed using a Bayesian classifier to identify the most probable vessel structures. Considering that the vesselness filter typically performs poorly on vessel bifurcations or on segments corrupted by noise, two vessel-reconnection techniques are proposed. The first technique uses a directional morphological operator to dilate vessel segments along their centerline directions, attempting to fill the gap between broken vascular segments. The second technique analyzes the connectivity of vessel segments and reconnects disconnected segments and branches. Finally, a 3D vessel tree is reconstructed. The algorithm has been evaluated using 18 CT images of the liver. To quantitatively measure the similarities between segmented and reference vessel trees, the skeleton coverage and mean symmetric distance were calculated to quantify the agreement between reference and segmented vessel skeletons, resulting in averages of 0.55 ± 0.27 and 12.7 ± 7.9 mm (mean ± standard deviation), respectively.
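    The mean symmetric distance used for skeleton comparison can be sketched as the average bidirectional nearest-neighbour distance between two point sets (a brute-force illustration; the paper's exact definition may differ in detail):

    ```python
    def mean_symmetric_distance(points_a, points_b):
        """Mean symmetric distance between two skeletons, each a list of
        3D points: average of the two directed mean nearest-neighbour
        distances (A -> B and B -> A)."""
        def dist(p, q):
            return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

        def mean_nn(src, dst):
            # Mean distance from each source point to its closest target.
            return sum(min(dist(p, q) for q in dst) for p in src) / len(src)

        return (mean_nn(points_a, points_b) + mean_nn(points_b, points_a)) / 2.0
    ```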

  3. Three-dimensional reconstruction of active muscle cell segment volume from two-dimensional optical sections

    NASA Astrophysics Data System (ADS)

    Lake, David S.; Griffiths, P. J.; Cecchi, G.; Taylor, Stuart R.

    1999-06-01

    An ultramicroscope coupled to a square-aspect-ratio sensor was used to image the dynamic geometry of live muscle cells. Skeletal muscle cells, dissected from frogs, were suspended in the optical axis and illuminated from one side by a focused slit of white light. The sensor detected light scattered at 90 degrees to the incident beam. Serial cross-sections were acquired as a motorized stage moved the cell through the slit of light. The axial force at right angles to the cross-sections was recorded simultaneously. Cross-sections were aligned by a least-squares fit of their centroids to a straight line, to correct for misalignments between the axes of the microscope, the stage, and the sensor. Three-dimensional volumes were reconstructed from each series and viewed from all directions to locate regions that remained at matching axial positions. The angle of the principal axis and the cross-sectional area were calculated and associated with force recorded concurrently. The cells adjusted their profile and volume to remain stable against turning as contractile force rose and fell, as predicted by the law of conservation of angular momentum.
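    The centroid-alignment step (least-squares fit of slice centroids to a straight line) can be sketched by fitting linear trends x(z) and y(z) over the slice index and keeping the residuals as per-slice correction offsets (a hypothetical helper, assuming evenly spaced cross-sections):

    ```python
    def align_centroids(centroids):
        """Residual offsets of per-slice centroids (x_i, y_i) relative to a
        least-squares straight line through them, indexed by slice number.
        Shifting each slice by minus its residual aligns the stack."""
        n = len(centroids)
        zs = list(range(n))

        def fit_residuals(vals):
            # Ordinary least squares of vals against slice index.
            mz = sum(zs) / n
            mv = sum(vals) / n
            szz = sum((z - mz) ** 2 for z in zs)
            szv = sum((z - mz) * (v - mv) for z, v in zip(zs, vals))
            b = szv / szz
            a = mv - b * mz
            return [v - (a + b * z) for z, v in zip(zs, vals)]

        rx = fit_residuals([c[0] for c in centroids])
        ry = fit_residuals([c[1] for c in centroids])
        return list(zip(rx, ry))
    ```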

  4. Automated compromised right lung segmentation method using a robust atlas-based active volume model with sparse shape composition prior in CT.

    PubMed

    Zhou, Jinghao; Yan, Zhennan; Lasio, Giovanni; Huang, Junzhou; Zhang, Baoshe; Sharma, Navesh; Prado, Karl; D'Souza, Warren

    2015-12-01

    To resolve challenges in image segmentation in oncologic patients with severely compromised lung, we propose an automated right lung segmentation framework that uses a robust, atlas-based active volume model with a sparse shape composition prior. The robust atlas is achieved by combining the atlas with the output of sparse shape composition. Thoracic computed tomography images (n=38) from patients with lung tumors were collected. The right lung in each scan was manually segmented to build a reference training dataset against which the performance of the automated segmentation method was assessed. The proposed segmentation method with sparse shape composition achieved a mean Dice similarity coefficient (DSC) of (0.72, 0.81), a mean accuracy (ACC) of (0.97, 0.98), and a mean relative error (RE) of (0.46, 0.74), each with 95% CI. Both qualitative and quantitative comparisons suggest that the proposed method achieves better segmentation accuracy with less variance than other atlas-based segmentation methods for compromised lung segmentation.
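    The Dice similarity coefficient reported above is the standard overlap measure between a segmentation and its reference; a minimal sketch over flattened binary masks:

    ```python
    def dice(mask_a, mask_b):
        """Dice similarity coefficient between two binary masks given as
        flattened sequences of 0/1 values: 2|A∩B| / (|A| + |B|)."""
        inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
        total = sum(mask_a) + sum(mask_b)
        # Convention: two empty masks are considered a perfect match.
        return 2.0 * inter / total if total else 1.0
    ```

    For example, masks [1,1,0,0] and [1,0,0,0] overlap in one voxel out of three foreground voxels, giving DSC = 2/3.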

  5. Interactive high-quality visualization of color volume datasets using GPU-based refinements of segmentation data.

    PubMed

    Lee, Byeonghun; Kwon, Koojoo; Shin, Byeong-Seok

    2016-04-24

    Data sets containing colored anatomical images of the human body, such as Visible Human or Visible Korean, show realistic internal organ structures. However, imperfect segmentations of these color images, which are typically generated manually or semi-automatically, produce poor-quality rendering results. We propose an interactive high-quality visualization method using GPU-based refinements to aid in the study of anatomical structures. In order to represent the boundaries of a region-of-interest (ROI) smoothly, we apply Gaussian filtering to the opacity values of the color volume. Morphological grayscale erosion operations are then performed to reduce the region size, which is expanded by the Gaussian filtering. Pseudo-coloring and color blending are also applied to the color volume in order to give more informative rendering results. We implement these operations on GPUs to speed up the refinements. As a result, our method delivered high-quality result images with smooth boundaries and provided considerably faster refinements. The speed of these refinements is sufficient for use with interactive renderings as the ROI changes, especially compared to CPU-based methods. Moreover, the pseudo-coloring methods used presented anatomical structures clearly.
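    The refinement pipeline (smooth the opacity channel, then erode to undo the size expansion) can be sketched in 1D (an illustrative CPU version of what the paper implements on the GPU; the kernel weights and erosion radius are arbitrary choices):

    ```python
    def smooth_opacity(opacity, kernel=(0.25, 0.5, 0.25)):
        """Gaussian-like smoothing of a 1D opacity profile (edges clamped)."""
        n = len(opacity)
        out = []
        for i in range(n):
            acc = 0.0
            for k, w in enumerate(kernel):
                j = min(max(i + k - len(kernel) // 2, 0), n - 1)
                acc += w * opacity[j]
            out.append(acc)
        return out

    def grey_erode(opacity, radius=1):
        """Grayscale erosion: sliding-window minimum, shrinking the region
        that the smoothing step expanded."""
        n = len(opacity)
        return [min(opacity[max(i - radius, 0):min(i + radius + 1, n)])
                for i in range(n)]
    ```

    Running erosion after smoothing keeps the softened boundary while pulling the region back toward its original extent.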

  6. Infant Word Segmentation and Childhood Vocabulary Development: A Longitudinal Analysis

    ERIC Educational Resources Information Center

    Singh, Leher; Reznick, J. Steven; Xuehua, Liang

    2012-01-01

    Infants begin to segment novel words from speech by 7.5 months, demonstrating an ability to track, encode and retrieve words in the context of larger units. Although it is presumed that word recognition at this stage is a prerequisite to constructing a vocabulary, the continuity between these stages of development has not yet been empirically…

  7. Automated segmentation refinement of small lung nodules in CT scans by local shape analysis.

    PubMed

    Diciotti, Stefano; Lombardo, Simone; Falchini, Massimo; Picozzi, Giulia; Mascalchi, Mario

    2011-12-01

    One of the most important problems in the segmentation of lung nodules in CT imaging arises from possible attachments between nodules and other lung structures, such as vessels or pleura. In this report, we address the problem of vessel attachments by proposing an automated correction method applied to an initial rough segmentation of the lung nodule. The method is based on a local shape analysis of the initial segmentation making use of 3-D geodesic distance map representations. The correction method has the advantage that it locally refines the nodule segmentation along recognized vessel attachments only, without modifying the nodule boundary elsewhere. The method was tested using a simple initial rough segmentation obtained by fixed image thresholding. The validation of the complete segmentation algorithm was carried out on small lung nodules identified in the ITALUNG screening trial and on small nodules of the Lung Image Database Consortium (LIDC) dataset. In fully automated mode, 217/256 (84.8%) lung nodules of ITALUNG and 139/157 (88.5%) individual marks of lung nodules of LIDC were correctly outlined, and excellent reproducibility was also observed. By using an additional interactive mode, based on controlled manual interaction, 233/256 (91.0%) lung nodules of ITALUNG and 144/157 (91.7%) individual marks of lung nodules of LIDC were correctly segmented overall. The proposed correction method could also be usefully applied to any existing nodule segmentation algorithm to improve the segmentation quality of juxta-vascular nodules.
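    A geodesic distance map of the kind used for the local shape analysis can be sketched as breadth-first search restricted to the segmented region (here 2D with 4-connectivity and unit steps, a simplification of the 3-D case):

    ```python
    from collections import deque

    def geodesic_distance(mask, seed):
        """Geodesic (in-region) distance map over a binary 2D mask via BFS.

        mask: list of lists of 0/1; seed: (row, col) inside the region.
        Cells outside the mask (or unreachable) stay None. Geodesic
        distances exceed Euclidean ones whenever paths must bend around
        background, which is what exposes narrow vessel attachments.
        """
        rows, cols = len(mask), len(mask[0])
        dist = [[None] * cols for _ in range(rows)]
        dist[seed[0]][seed[1]] = 0
        q = deque([seed])
        while q:
            r, c = q.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and mask[nr][nc] and dist[nr][nc] is None):
                    dist[nr][nc] = dist[r][c] + 1
                    q.append((nr, nc))
        return dist
    ```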

  8. Latent segmentation based count models: Analysis of bicycle safety in Montreal and Toronto.

    PubMed

    Yasmin, Shamsunnahar; Eluru, Naveen

    2016-10-01

    The study contributes to the literature on bicycle safety by building on traditional count regression models to investigate factors affecting bicycle crashes at the Traffic Analysis Zone (TAZ) level. The TAZ is a traffic-related geographic entity that is most frequently used as the spatial unit for macroscopic crash risk analysis. In conventional count models, the impact of exogenous factors is restricted to be the same across the entire region. However, the influence of exogenous factors might vary across different TAZs. To accommodate this potential variation in the impact of exogenous factors, we formulate latent segmentation based count models. Specifically, we formulate and estimate latent segmentation based Poisson (LP) and latent segmentation based Negative Binomial (LNB) models to study bicycle crash counts. In our latent segmentation approach, we allow for more than two segments and also consider a large set of variables in the segmentation and segment-specific models. The formulated models are estimated using bicycle-motor vehicle crash data from the Island of Montreal and City of Toronto for the years 2006 through 2010. The TAZ level variables considered in our analysis include accessibility measures, exposure measures, sociodemographic characteristics, socioeconomic characteristics, road network characteristics and built environment. A policy analysis is also conducted to illustrate the applicability of the proposed model for planning purposes. This macro-level research would assist decision makers, transportation officials and community planners to make informed decisions to proactively improve bicycle safety - a prerequisite to promoting a culture of active transportation.
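    A latent segmentation count model in its simplest form is a finite mixture of Poisson distributions fitted by EM; the sketch below is intercept-only with two segments (the paper's LP/LNB models additionally include covariates in both the segmentation and segment-specific components, which this toy version omits):

    ```python
    import math

    def poisson_mixture_em(counts, iters=200):
        """EM for a two-segment latent-class Poisson model (intercept only):
        each zone belongs to segment k with probability pi[k], and its crash
        count is Poisson with segment-specific mean lam[k]."""
        lam = [min(counts) + 0.5, max(counts) + 0.5]  # spread initial means
        pi = [0.5, 0.5]
        for _ in range(iters):
            # E-step: posterior segment membership for each observation.
            resp = []
            for y in counts:
                w = [pi[k] * math.exp(-lam[k]) * lam[k] ** y / math.factorial(y)
                     for k in range(2)]
                s = sum(w)
                resp.append([wk / s for wk in w])
            # M-step: update mixing weights and segment means.
            for k in range(2):
                nk = sum(r[k] for r in resp)
                pi[k] = nk / len(counts)
                lam[k] = sum(r[k] * y for r, y in zip(resp, counts)) / nk
        return pi, lam
    ```

    On data drawn from a low-count and a high-count regime, the two estimated means separate accordingly.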

  9. Segmentation of biological target volumes on multi-tracer PET images based on information fusion for achieving dose painting in radiotherapy.

    PubMed

    Lelandais, Benoît; Gardin, Isabelle; Mouchard, Laurent; Vera, Pierre; Ruan, Su

    2012-01-01

    Medical imaging plays an important role in radiotherapy. Dose painting consists in the application of a nonuniform dose prescription on a tumoral region, and is based on an efficient segmentation of biological target volumes (BTV). It is derived from PET images, which highlight tumoral regions of enhanced glucose metabolism (FDG), cell proliferation (FLT) and hypoxia (FMiso). In this paper, a framework based on Belief Function Theory is proposed for BTV segmentation and for creating 3D parametric images for dose painting. We propose to take advantage of neighboring voxels for BTV segmentation, and also of multi-tracer PET images, using information fusion to create parametric images. The performance of BTV segmentation was evaluated on an anthropomorphic phantom and compared with two other methods. Quantitative results show the good performance of our method. It has been applied to data from five patients suffering from lung cancer. Parametric images show promising results by highlighting areas where a high frequency or dose escalation could be planned.
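    Information fusion under Belief Function Theory commonly uses Dempster's rule of combination; a minimal sketch for two mass functions over set-valued hypotheses (illustrative only, not the paper's specific fusion operator; the hypothesis labels are made up):

    ```python
    def dempster_combine(m1, m2):
        """Dempster's rule of combination for two mass functions given as
        dicts mapping frozenset hypotheses (e.g. {'tumour'} or the
        ignorance set {'tumour', 'background'}) to masses summing to 1.

        Products of masses with empty intersection are conflict and are
        renormalised away (undefined under total conflict)."""
        combined, conflict = {}, 0.0
        for a, wa in m1.items():
            for b, wb in m2.items():
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb
        norm = 1.0 - conflict
        return {h: w / norm for h, w in combined.items()}
    ```

    Combining two sources that each put some mass on ignorance sharpens the belief in the shared singleton hypothesis.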

  10. Segmentation and Classification of Remotely Sensed Images: Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Syed, Abdul Haleem

    Land-use-and-land-cover (LULC) mapping is crucial in precision agriculture, environmental monitoring, disaster response, and military applications. The demand for improved and more accurate LULC maps has led to the emergence of a key methodology known as Geographic Object-Based Image Analysis (GEOBIA). The core idea of GEOBIA for an object-based classification system (OBC) is to change the unit of analysis from single pixels to groups of pixels, called 'objects', through segmentation. While this new paradigm solved problems and improved global accuracy, it also raised new challenges, such as the loss of accuracy in categories that are less abundant but potentially important. Although this trade-off may be acceptable in some domains, the consequences of such an accuracy loss could be potentially fatal in others (for instance, landmine detection). This thesis proposes a method to improve OBC performance by eliminating such accuracy losses. Specifically, we examine the two key players of an OBC system: hierarchical segmentation and supervised classification. Further, we propose a model to understand the source of accuracy errors in minority categories and provide a method called Scale Fusion to eliminate those errors. This proposed fusion method involves two stages. First, the characteristic scale for each category is estimated through a combination of segmentation and supervised classification. Next, these estimated scales (segmentation maps) are fused into one combined object map. Classification performance is evaluated by comparing results of the multi-cut-and-fuse approach (proposed) to the traditional single-cut (SC) scale selection strategy. Testing on four different data sets revealed that our proposed algorithm improves accuracy on minority classes while performing just as well on abundant categories. Another active obstacle, presented by today's remotely sensed images, is the volume of information produced by our modern sensors with high spatial and

  11. Global Positioning System Control/User Segments. Volume II. System Error Performance.

    DTIC Science & Technology

    RADIO NAVIGATION, *NAVIGATION SATELLITES, *POSITION FINDING, *NAVIGATION COMPUTERS, *IONOSPHERIC PROPAGATION, GLOBAL, EPHEMERIDES, TRADE OFF ANALYSIS, SPACEBORNE, ERRORS, VELOCITY, SYSTEMS ENGINEERING, DIGITAL COMPUTERS, MEMORY DEVICES, TIME SIGNALS, SITE SELECTION, GROUND STATIONS, MOTION, MATHEMATICAL MODELS, ALGORITHMS, PERFORMANCE(ENGINEERING), USER NEEDS, S BAND, L BAND.

  12. Diffractive imaging analysis of large-aperture segmented telescope based on partial Fourier transform

    NASA Astrophysics Data System (ADS)

    Dong, Bing; Qin, Shun; Hu, Xinqi

    2013-09-01

    Large-aperture segmented primary mirrors will be widely used in next-generation space-based and ground-based telescopes. The effects of intersegment gaps, obstructions, and position and figure errors of segments, which are all involved in the pupil plane, on the image quality metric should be analyzed using diffractive imaging theory. The traditional Fast Fourier Transform (FFT) method is very time-consuming and requires a lot of memory, especially when dealing with large pupil-sampling matrices. A Partial Fourier Transform (PFT) method is first proposed to substantially speed up the computation and reduce memory usage for diffractive imaging analysis. Diffraction effects of a 6-meter segmented mirror comprising 18 hexagonal segments are simulated and analyzed using the PFT method. The influence of intersegment gaps and position errors of segments on the Strehl ratio is quantitatively analyzed by computing the Point Spread Function (PSF). Comparison of simulation results with theoretical results confirms the correctness and feasibility of the PFT method.
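    The principle behind a partial Fourier transform, computing only the output samples actually needed (e.g., a small PSF region) rather than the full spectrum, can be sketched by direct evaluation of selected DFT bins (this naive version illustrates the idea only; the paper's PFT is a faster algorithm):

    ```python
    import cmath

    def partial_dft(signal, bins):
        """Evaluate only the requested DFT output bins of a 1D signal.

        Each bin k of an n-point DFT is sum_i x[i] * exp(-2j*pi*k*i/n);
        when only a few bins are needed, skipping the rest saves both
        time and the memory for the full spectrum.
        """
        n = len(signal)
        return {k: sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                       for i, x in enumerate(signal))
                for k in bins}
    ```

    A constant signal concentrates all energy in bin 0, so evaluating only bins 0 and 1 already shows the expected values 4 and 0 for a length-4 input of ones.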

  13. Failure analysis for model-based organ segmentation using outlier detection

    NASA Astrophysics Data System (ADS)

    Saalbach, Axel; Wächter Stehle, Irina; Lorenz, Cristian; Weese, Jürgen

    2014-03-01

    During the last years, Model-Based Segmentation (MBS) techniques have been used in a broad range of medical applications. In clinical practice, such techniques are increasingly employed for diagnostic purposes and treatment decisions. However, it is not guaranteed that a segmentation algorithm will converge towards the desired solution. In specific situations, such as in the presence of rare anatomical variants (which cannot be represented) or for images of extremely low quality, a meaningful segmentation might not be feasible. At the same time, an automated estimation of segmentation reliability is commonly not available. In this paper we present an approach for the identification of segmentation failures using concepts from the field of outlier detection. The approach is validated on a comprehensive set of Computed Tomography Angiography (CTA) images by means of Receiver Operating Characteristic (ROC) analysis. Encouraging results in terms of an Area Under the ROC Curve (AUC) of up to 0.965 were achieved.
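    The AUC used for validation can be computed directly from outlier scores via the Mann-Whitney statistic (a standard sketch; the score values below are hypothetical):

    ```python
    def roc_auc(scores_pos, scores_neg):
        """Area under the ROC curve via the Mann-Whitney statistic:
        the probability that a true failure (positive) receives a higher
        outlier score than a successful segmentation, counting ties as 0.5."""
        wins = 0.0
        for p in scores_pos:
            for n in scores_neg:
                if p > n:
                    wins += 1.0
                elif p == n:
                    wins += 0.5
        return wins / (len(scores_pos) * len(scores_neg))
    ```

    Perfect separation of failure scores from success scores yields AUC 1.0; identical score distributions yield 0.5.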

  14. 3D Building Models Segmentation Based on K-Means++ Cluster Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Mao, B.

    2016-10-01

    3D mesh model segmentation has drawn increasing attention in the digital geometry processing field in recent years. The original 3D mesh model needs to be divided into separate meaningful parts or surface patches according to certain criteria to support reconstruction, compression, texture mapping, model retrieval, etc. Segmentation is therefore a key problem in 3D mesh model processing. In this paper, we propose a method to segment Collada (a type of mesh model) 3D building models into meaningful parts using cluster analysis. Common clustering approaches segment 3D mesh models with K-means, whose performance depends heavily on the randomized initial seed points (i.e., centroids); different randomized centroids can yield quite different results. We therefore improved the existing method by using the K-means++ clustering algorithm to address this problem. Our experiments show that K-means++ improves both the speed and the accuracy of K-means and achieves good, meaningful results.
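    The k-means++ seeding rule the paper adopts can be stated compactly: the first centroid is drawn uniformly at random, and each subsequent one with probability proportional to the squared distance to the nearest centroid already chosen (Arthur & Vassilvitskii, 2007). The sketch below operates on plain 2D points as a stand-in for per-face mesh features; the Collada-specific feature extraction is not shown:

```python
import numpy as np

def kmeans_pp_seeds(points, k, rng=None):
    """k-means++ seeding: spread initial centroids out by sampling each
    new one proportionally to squared distance from the nearest seed."""
    rng = np.random.default_rng(rng)
    n = len(points)
    centroids = [points[rng.integers(n)]]
    for _ in range(k - 1):
        d2 = np.min(
            [np.sum((points - c) ** 2, axis=1) for c in centroids], axis=0)
        centroids.append(points[rng.choice(n, p=d2 / d2.sum())])
    return np.array(centroids)

# Three well-separated clusters of 2D points
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(loc, 0.1, (50, 2))
                 for loc in ([0, 0], [5, 0], [0, 5])])
seeds = kmeans_pp_seeds(pts, 3, rng=0)
```

Regular k-means iterations then proceed from these seeds; the seeding alone is what reduces the sensitivity to random initialization that the abstract describes.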

  15. SU-E-J-123: Assessing Segmentation Accuracy of Internal Volumes and Sub-Volumes in 4D PET/CT of Lung Tumors Using a Novel 3D Printed Phantom

    SciTech Connect

    Soultan, D; Murphy, J; James, C; Hoh, C; Moiseenko, V; Cervino, L; Gill, B

    2015-06-15

    Purpose: To assess the accuracy of internal target volume (ITV) segmentation of lung tumors for treatment planning of simultaneous integrated boost (SIB) radiotherapy as seen in 4D PET/CT images, using a novel 3D-printed phantom. Methods: The insert mimics high PET tracer uptake in the core and 50% uptake in the periphery by using a porous design at the periphery. A lung phantom with the insert was placed on a programmable moving platform. Seven breathing waveforms of ideal and patient-specific respiratory motion patterns were fed to the platform, and a 4D PET/CT scan was acquired for each. CT images were binned into 10 phases and PET images into 5 phases following the clinical protocol. Two segmentation scenarios were investigated: a gate 30–70 window, and no gating. The radiation oncologist contoured the outer ITV of the porous insert on CT images, while the internal void volume with 100% uptake was contoured on PET images, since it is indistinguishable from the outer volume on CT images. Segmented ITVs were compared to the expected volumes based on known target size and motion. Results: 3 ideal breathing patterns, 2 regular-breathing patient waveforms, and 2 irregular-breathing patient waveforms were used for this study. 18F-FDG was used as the PET tracer. The segmented ITVs from CT closely matched the expected motion for both no gating and the gate 30–70 window, with disagreement of the contoured ITV with respect to the expected volume not exceeding 13%. PET contours were seen to overestimate volumes in all cases, by up to more than 40%. Conclusion: 4D PET images of a novel 3D-printed phantom designed to mimic different uptake values were obtained. 4D PET contours overestimated ITV volumes in all cases, while 4D CT contours matched expected ITV volume values. Investigation of the cause and effects of the discrepancies is ongoing.

  16. Computed Tomographic Image Analysis Based on FEM Performance Comparison of Segmentation on Knee Joint Reconstruction

    PubMed Central

    Jang, Seong-Wook; Seo, Young-Jin; Yoo, Yon-Sik

    2014-01-01

    The demand for accurate and accessible image segmentation to generate 3D models from CT scan data has been increasing, as such models are required in many areas of orthopedics. In this paper, to find the optimal image segmentation for creating a 3D model from knee CT data, we compared and validated segmentation algorithms based on both objective comparisons and finite element (FE) analysis. For comparison purposes, we used 1 model reconstructed in accordance with the instructions of a clinical professional and 3 models reconstructed using image processing algorithms (Sobel operator, Laplacian of Gaussian operator, and Canny edge detection). Comparison was performed by inspecting intermodel morphological deviations with the iterative closest point (ICP) algorithm, and FE analysis was performed to examine the effects of the segmentation algorithm on the results of the knee joint movement analysis. PMID:25538950

  17. Computed tomographic image analysis based on FEM performance comparison of segmentation on knee joint reconstruction.

    PubMed

    Jang, Seong-Wook; Seo, Young-Jin; Yoo, Yon-Sik; Kim, Yoon Sang

    2014-01-01

    The demand for accurate and accessible image segmentation to generate 3D models from CT scan data has been increasing, as such models are required in many areas of orthopedics. In this paper, to find the optimal image segmentation for creating a 3D model from knee CT data, we compared and validated segmentation algorithms based on both objective comparisons and finite element (FE) analysis. For comparison purposes, we used 1 model reconstructed in accordance with the instructions of a clinical professional and 3 models reconstructed using image processing algorithms (Sobel operator, Laplacian of Gaussian operator, and Canny edge detection). Comparison was performed by inspecting intermodel morphological deviations with the iterative closest point (ICP) algorithm, and FE analysis was performed to examine the effects of the segmentation algorithm on the results of the knee joint movement analysis.
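    Of the three edge detectors compared, the Sobel operator is the simplest to sketch; below is a minimal NumPy version computing the gradient magnitude (a generic illustration, not the authors' reconstruction pipeline):

```python
import numpy as np

def sobel_magnitude(image):
    """Gradient magnitude using the 3x3 Sobel operator.
    (Computed as correlation; the kernel flip only changes the sign,
    which the magnitude discards.) Borders are zero-padded."""
    img = np.asarray(image, float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    padded = np.pad(img, 1)
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            patch = padded[dy:dy + h, dx:dx + w]
            gx += kx[dy, dx] * patch
            gy += ky[dy, dx] * patch
    return np.hypot(gx, gy)

# Vertical step edge: the magnitude peaks along the boundary
step = np.zeros((5, 5))
step[:, 3:] = 1.0
mag = sobel_magnitude(step)
```

The Laplacian of Gaussian and Canny detectors add smoothing, second derivatives, and hysteresis on top of this same gradient computation.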

  18. Who avoids going to the doctor and why? Audience segmentation analysis for application of message development.

    PubMed

    Kannan, Viji Diane; Veazie, Peter J

    2015-01-01

    This exploratory study examines the prevalent and detrimental health care phenomenon of patient delay in order to inform formative research leading to the design of communication strategies. Delayed medical care diminishes optimal treatment choices, negatively impacts prognosis, and increases medical costs. Various communication strategies have been employed to combat patient delay, with limited success. This study fills a gap in research informing those interventions by focusing on the portion of patient delay occurring after symptoms have been assessed as a sign of illness and the need for medical care has been determined. We used CHAID segmentation analysis to produce homogeneous segments from the sample according to the propensity to avoid medical care. CHAID is a criterion-based predictive cluster analysis technique. CHAID examines a variety of characteristics to find the one most strongly associated with avoiding doctor visits through a chi-squared test and assessment of statistical significance. The characteristics identified then define the segments. Fourteen segments were produced. Age was the first delineating characteristic, with younger age groups comprising a greater proportion of avoiders. Other segments containing a comparatively larger percentage of avoiders were characterized by lower income, lower education, being uninsured, and being male. Each segment was assessed for psychographic properties associated with avoiding care, reasons for avoiding care, and trust in health information sources. While the segments display distinct profiles, having had positive provider experiences, having high health self-efficacy, and having an internal rather than external or chance locus of control were associated with low avoidance among several segments. Several segments were either more or less likely to cite time or money as the reason for avoiding care, and several older-aged segments were less likely than the remaining sample to trust the government as a source.
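    CHAID's core step, picking the predictor most strongly associated with the outcome via a chi-squared test, can be sketched as follows. This is a simplification: real CHAID also merges categories, uses p-values with Bonferroni adjustment, and recurses to build a tree. The synthetic data mimic the age effect reported above and are not the study's data:

```python
import numpy as np

def chi2_stat(x, y):
    """Pearson chi-squared statistic for two categorical arrays."""
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    obs = np.zeros((len(xs), len(ys)))
    np.add.at(obs, (xi, yi), 1)
    expected = obs.sum(1, keepdims=True) * obs.sum(0, keepdims=True) / obs.sum()
    return ((obs - expected) ** 2 / expected).sum()

def best_split(predictors, outcome):
    """Predictor name most strongly associated with the outcome
    (largest chi-squared statistic), CHAID-style."""
    return max(predictors, key=lambda name: chi2_stat(predictors[name], outcome))

# Synthetic sample: avoidance depends strongly on age, not on sex
rng = np.random.default_rng(1)
n = 1000
age = rng.choice(["18-34", "35-64", "65+"], n)
sex = rng.choice(["M", "F"], n)
avoid = rng.random(n) < np.where(age == "18-34", 0.6, 0.2)
first = best_split({"age": age, "sex": sex}, avoid)   # age wins the split
```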

  19. Tracking and data acquisition system for the 1990's. Volume 5: TDAS ground segment architecture and operations concept

    NASA Technical Reports Server (NTRS)

    Daly, R.

    1983-01-01

    Tracking and data acquisition system (TDAS) ground segment and operational requirements, TDAS RF terminal configurations, TDAS ground segment elements, the TDAS network, and the TDAS ground terminal hardware are discussed.

  20. Development and Evaluation of a Semi-automated Segmentation Tool and a Modified Ellipsoid Formula for Volumetric Analysis of the Kidney in Non-contrast T2-Weighted MR Images.

    PubMed

    Seuss, Hannes; Janka, Rolf; Prümmer, Marcus; Cavallaro, Alexander; Hammon, Rebecca; Theis, Ragnar; Sandmair, Martin; Amann, Kerstin; Bäuerle, Tobias; Uder, Michael; Hammon, Matthias

    2017-04-01

    Volumetric analysis of the kidney parenchyma provides additional information for the detection and monitoring of various renal diseases. The purposes of this study were therefore to develop and evaluate a semi-automated segmentation tool and a modified ellipsoid formula for volumetric analysis of the kidney in non-contrast T2-weighted magnetic resonance (MR) images. Three readers performed semi-automated segmentation of the total kidney volume (TKV) in axial, non-contrast-enhanced T2-weighted MR images of 24 healthy volunteers (48 kidneys) twice. A semi-automated threshold-based segmentation tool was developed to segment the kidney parenchyma. Furthermore, the three readers measured renal dimensions (length, width, depth) and applied different formulas to calculate the TKV. Manual segmentation served as the reference volume. Volumes obtained with the different methods were compared, and the time required was recorded. There was no significant difference between the semi-automatically and manually segmented TKV (p = 0.31). The difference in mean volumes was 0.3 ml (95% confidence interval (CI), -10.1 to 10.7 ml). Semi-automated segmentation was significantly faster than manual segmentation, with a mean difference of 188 s (220 vs. 408 s; p < 0.05). Volumes did not differ significantly between readers. Calculation of TKV with a modified ellipsoid formula (ellipsoid volume × 0.85) did not differ significantly from the reference volume; however, the mean error was three times higher (difference of mean volumes -0.1 ml; CI, -31.1 to 30.9 ml; p = 0.95). Applying the modified ellipsoid formula was the fastest way to estimate renal volume (41 s). Semi-automated segmentation and volumetric analysis of the kidney in native T2-weighted MR data deliver accurate and reproducible results and were significantly faster than manual segmentation. Applying a modified ellipsoid formula quickly provides an accurate kidney volume.
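    The modified ellipsoid formula reported above is simple enough to state as code: the standard ellipsoid volume pi/6 x L x W x D, scaled by the study's correction factor of 0.85 (the example dimensions are illustrative, not from the study):

```python
import math

def kidney_volume_ellipsoid(length_cm, width_cm, depth_cm, correction=0.85):
    """Estimate total kidney volume (mL) from three dimensions using the
    ellipsoid formula pi/6 * L * W * D (1 cm^3 = 1 mL), scaled by the
    study's correction factor of 0.85."""
    return correction * math.pi / 6 * length_cm * width_cm * depth_cm

# e.g., an 11 x 5 x 4 cm kidney
vol = kidney_volume_ellipsoid(11, 5, 4)   # ~97.9 mL
```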

  1. Theoretical analysis and experimental verification on valve-less piezoelectric pump with hemisphere-segment bluff-body

    NASA Astrophysics Data System (ADS)

    Ji, Jing; Zhang, Jianhui; Xia, Qixiao; Wang, Shouyin; Huang, Jun; Zhao, Chunsheng

    2014-05-01

    Existing research on no-moving-part valves in valve-less piezoelectric pumps has mainly concentrated on pipeline valves and chamber-bottom valves, which leads to complex structures and manufacturing processes for the pump channel and chamber bottom. Furthermore, valves whose positions are fixed with respect to the inlet and outlet worsen the adjustability and controllability of the flow rate. To overcome these shortcomings, this paper puts forward a novel implantable structure of valve-less piezoelectric pump with hemisphere-segments in the pump chamber. Based on the theory of flow around a bluff body, the flow resistance on the spherical surface and the flat face of a hemisphere-segment differs when fluid flows past, and a macroscopic flow-resistance difference is thus formed. A novel valve-less piezoelectric pump with hemisphere-segment bluff-body (HSBB) is presented and designed; the HSBB serves as the no-moving-part valve. By the method of volume and momentum comparison, the stress on the bluff-body in the pump chamber is analyzed, the essential reason for unidirectional fluid pumping is expounded, and the flow rate formula is obtained. To verify the theory, a prototype was produced and used for experimental research on the relationship between flow rate, pressure difference, voltage, and frequency, which confirms the above theory. This prototype has six hemisphere-segments in the chamber filled with water, and the effective diameter of the piezoelectric bimorph is 30 mm. The experimental results show that the flow rate can reach 0.50 mL/s at a frequency of 6 Hz and a voltage of 110 V, and the pressure difference can reach 26.2 mm H2O at a frequency of 6 Hz and a voltage of 160 V. This research proposes a valve-less piezoelectric pump with hemisphere-segment bluff-body, and its validity and feasibility are verified through theoretical analysis and experiment.

  2. An integrated segmentation and analysis approach for QCT of the knee to determine subchondral bone mineral density and texture.

    PubMed

    Zerfass, P; Lowitz, T; Museyko, O; Bousson, V; Laouisset, L; Kalender, W A; Laredo, J-D; Engelke, K

    2012-09-01

    We have developed a new integrated approach for quantitative computed tomography of the knee in order to quantify bone mineral density (BMD) and subchondral bone structure. The present framework consists of image acquisition and reconstruction, 3-D segmentation, determination of anatomic coordinate systems, and reproducible positioning of analysis volumes of interest (VOI). Novel segmentation algorithms were developed to identify growth plates of the tibia and femur and the joint space with high reproducibility. Five different VOIs with varying distance to the articular surface are defined in the epiphysis. Each VOI is further subdivided into a medial and a lateral part. In each VOI, BMD is determined. In addition, a texture analysis is performed on a high-resolution computed tomography (CT) reconstruction of the same CT scan in order to quantify subchondral bone structure. Local and global homogeneity, as well as local and global anisotropy were measured in all VOIs. Overall short-term precision of the technique was evaluated using double measurements of 20 osteoarthritic cadaveric human knees. Precision errors for volume were about 2-3% in the femur and 3-5% in the tibia. Precision errors for BMD were about 1-2% lower. Homogeneity parameters showed precision errors up to about 2% and anisotropy parameters up to about 4%.
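    The short-term precision errors quoted above come from double measurements. A standard estimator for such paired repeat measurements (an assumption here; the abstract does not spell out its formula) is the root-mean-square coefficient of variation:

```python
import numpy as np

def rmscv_percent(m1, m2):
    """Root-mean-square coefficient of variation (%) from paired repeat
    measurements: per-pair SD (|a - b| / sqrt(2)) divided by the pair
    mean, pooled across subjects as an RMS average."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    sd = np.abs(m1 - m2) / np.sqrt(2)
    cv = sd / ((m1 + m2) / 2)
    return 100 * np.sqrt(np.mean(cv ** 2))

vols_1 = [150.2, 162.0, 148.5]   # first measurement per specimen (e.g., mL)
vols_2 = [151.0, 160.8, 149.3]   # repeat measurement
cv_pct = rmscv_percent(vols_1, vols_2)
```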

  3. A new partial volume segmentation approach to extract bladder wall for computer-aided detection in virtual cystoscopy

    NASA Astrophysics Data System (ADS)

    Li, Lihong; Wang, Zigang; Li, Xiang; Wei, Xinzhou; Adler, Howard L.; Huang, Wei; Rizvi, Syed A.; Meng, Hong; Harrington, Donald P.; Liang, Zhengrong

    2004-04-01

    We propose a new partial volume (PV) segmentation scheme to extract bladder wall for computer aided detection (CAD) of bladder lesions using multispectral MR images. Compared with CT images, MR images provide not only a better tissue contrast between bladder wall and bladder lumen, but also the multispectral information. As multispectral images are spatially registered over three-dimensional space, information extracted from them is more valuable than that extracted from each image individually. Furthermore, the intrinsic T1 and T2 contrast of the urine against the bladder wall eliminates the invasive air insufflation procedure. Because the earliest stages of bladder lesion growth tend to develop gradually and migrate slowly from the mucosa into the bladder wall, our proposed PV algorithm quantifies images as percentages of tissues inside each voxel. It preserves both morphology and texture information and provides tissue growth tendency in addition to the anatomical structure. Our CAD system utilizes a multi-scan protocol on dual (full and empty of urine) states of the bladder to extract both geometrical and texture information. Moreover, multi-scan of transverse and coronal MR images eliminates motion artifacts. Experimental results indicate that the presented scheme is feasible towards mass screening and lesion detection for virtual cystoscopy (VC).

  4. Combined texture feature analysis of segmentation and classification of benign and malignant tumour CT slices.

    PubMed

    Padma, A; Sukanesh, R

    2013-01-01

    A computer software system is designed for the segmentation and classification of benign and malignant tumour slices in brain computed tomography (CT) images. This paper presents a method to find and select the dominant run-length and co-occurrence texture features of the region of interest (ROI) of the tumour region of each slice, segmented by fuzzy c-means (FCM) clustering, and to evaluate the performance of support vector machine (SVM)-based classifiers in classifying benign and malignant tumour slices. Two hundred and six tumour-confirmed CT slices are considered in this study. A total of 17 texture features are extracted by a feature extraction procedure, and six features are selected using Principal Component Analysis (PCA). The SVM-based classifier was constructed with the selected features, and the segmentation results were compared with ground truth labelled by an experienced radiologist (target). Quantitative analysis between ground truth and segmented tumour is presented in terms of segmentation accuracy, segmentation error and overlap similarity measures such as the Jaccard index. The classification performance of the SVM-based classifier with the same selected features is also evaluated using a 10-fold cross-validation method. The proposed system identified some newly found texture features that make an important contribution to classifying benign and malignant tumour slices efficiently and accurately with less computational time. The experimental results showed that the proposed system achieves high segmentation and classification accuracy, as measured by the Jaccard index, sensitivity, and specificity.
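    The Jaccard index used above as the overlap measure is a one-liner over binary masks: |A ∩ B| / |A ∪ B|. A minimal sketch (the toy masks are illustrative):

```python
import numpy as np

def jaccard_index(mask_a, mask_b):
    """Jaccard (overlap) index between two binary segmentation masks."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0   # both masks empty: define as perfect agreement
    return np.logical_and(a, b).sum() / union

truth = np.zeros((8, 8), bool); truth[2:6, 2:6] = True   # 4x4 square
pred = np.zeros((8, 8), bool); pred[3:7, 3:7] = True     # same square, shifted
j = jaccard_index(truth, pred)   # intersection 9, union 23 -> 9/23
```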

  5. Automated vessel segmentation using cross-correlation and pooled covariance matrix analysis.

    PubMed

    Du, Jiang; Karimi, Afshin; Wu, Yijing; Korosec, Frank R; Grist, Thomas M; Mistretta, Charles A

    2011-04-01

    Time-resolved contrast-enhanced magnetic resonance angiography (CE-MRA) provides contrast dynamics in the vasculature and allows vessel segmentation based on temporal correlation analysis. Here we present an automated vessel segmentation algorithm including automated generation of regions of interest (ROIs), cross-correlation and pooled sample covariance matrix analysis. The dynamic images are divided into multiple equal-sized regions. In each region, ROIs for artery, vein and background are generated using an iterative thresholding algorithm based on the contrast arrival time map and contrast enhancement map. Region-specific multi-feature cross-correlation analysis and pooled covariance matrix analysis are performed to calculate the Mahalanobis distances (MDs), which are used to automatically separate arteries from veins. This segmentation algorithm is applied to a dual-phase dynamic imaging acquisition scheme where low-resolution time-resolved images are acquired during the dynamic phase followed by high-frequency data acquisition at the steady-state phase. The segmented low-resolution arterial and venous images are then combined with the high-frequency data in k-space and inverse Fourier transformed to form the final segmented arterial and venous images. Results from volunteer and patient studies demonstrate the advantages of this automated vessel segmentation and dual phase data acquisition technique.
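    The pooled-covariance Mahalanobis distance at the heart of the artery/vein separation can be sketched directly. The feature vectors below are invented two-feature stand-ins (e.g., contrast arrival time and enhancement), not the paper's data:

```python
import numpy as np

def pooled_covariance(groups):
    """Pooled sample covariance of several groups of feature vectors
    (rows = samples), weighted by each group's degrees of freedom."""
    dfs = [len(g) - 1 for g in groups]
    covs = [np.cov(g, rowvar=False) for g in groups]
    return sum(d * c for d, c in zip(dfs, covs)) / sum(dfs)

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of feature vector x from a class mean."""
    d = np.asarray(x, float) - np.asarray(mean, float)
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

# Hypothetical ROI feature vectors for the two classes
arteries = np.array([[1.0, 0.2], [1.1, 0.3], [0.9, 0.1], [1.2, 0.2]])
veins = np.array([[0.3, 1.0], [0.2, 1.1], [0.4, 0.9], [0.3, 1.2]])
cov = pooled_covariance([arteries, veins])

x = [1.05, 0.25]   # features of an unlabeled voxel
is_artery = mahalanobis(x, arteries.mean(0), cov) < mahalanobis(x, veins.mean(0), cov)
```

Assigning each voxel to the class with the smaller distance is the separation step; the ROI generation and k-space recombination described above are not sketched here.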

  6. Teeth segmentation of dental periapical radiographs based on local singularity analysis.

    PubMed

    Lin, P L; Huang, P Y; Huang, P W; Hsu, H C; Chen, C C

    2014-02-01

    Teeth segmentation for periapical radiographs is one of the most critical tasks for effective periapical lesion or periodontitis detection, as both types of anomalies usually occur around tooth boundaries and dental radiographs are often subject to noise, low contrast, and uneven illumination. In this paper, we propose an effective scheme to segment each tooth in periapical radiographs. The method consists of four stages: image enhancement using adaptive power-law transformation, local singularity analysis using the Hölder exponent, tooth recognition using Otsu's thresholding and connected-component analysis, and tooth delineation using snake boundary tracking and morphological operations. Experimental results on 28 periapical radiographs containing 106 teeth in total, 75 of them useful for dental examination, demonstrate that 105 teeth are successfully isolated and segmented, and that the overall mean segmentation accuracy over the 75 useful teeth in terms of (TP, FP) is (0.8959, 0.0093), with standard deviations (0.0737, 0.0096), respectively.
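    The tooth-recognition stage relies on Otsu's thresholding, which picks the grey level that maximizes the between-class variance of the histogram. A compact NumPy version (the 256-bin grid is an assumption):

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Otsu's method: the threshold maximizing between-class variance
    sigma_b^2 = (mu_T * w0 - mu)^2 / (w0 * w1) over histogram splits."""
    hist, edges = np.histogram(image, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    w = hist / hist.sum()
    w0 = np.cumsum(w)           # class-0 weight at each split
    w1 = 1 - w0                 # class-1 weight
    mu = np.cumsum(w * centers) # cumulative weighted mean
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_total * w0 - mu) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0
    return centers[np.argmax(between)]

# Bimodal test image: two grey levels, 10 and 200
img = np.concatenate([np.full(100, 10.0), np.full(100, 200.0)])
t = otsu_threshold(img)   # lands between the two modes
```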

  7. Control volume based hydrocephalus research; analysis of human data

    NASA Astrophysics Data System (ADS)

    Cohen, Benjamin; Wei, Timothy; Voorhees, Abram; Madsen, Joseph; Anor, Tomer

    2010-11-01

    Hydrocephalus is a neuropathophysiological disorder primarily diagnosed by increased cerebrospinal fluid volume and pressure within the brain. To date, utilization of clinical measurements has been limited to understanding the relative amplitude and timing of flow, volume and pressure waveforms; these are qualitative approaches without a clear framework for meaningful quantitative comparison. Pressure-volume models and electric circuit analogs enforce volume conservation principles in terms of pressure. Control volume analysis, through the integral mass and momentum conservation equations, ensures that pressure and volume are accounted for using first-principles fluid physics. This approach is able to directly incorporate the diverse measurements obtained by clinicians into a simple, direct and robust mechanics-based framework. Clinical data obtained for analysis are discussed along with the data processing techniques used to extract terms in the conservation equations. Control volume analysis provides a non-invasive, physics-based approach to extracting pressure information from magnetic resonance velocity data that cannot be measured directly by pressure instrumentation.

  8. Automatic Segmentation of Invasive Breast Carcinomas from DCE-MRI using Time Series Analysis

    PubMed Central

    Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A.; Gombos, Eva

    2013-01-01

    Purpose: Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise and fitting algorithms. The purpose of this work was to accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Methods: We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist’s segmentation and the output of a commercial software package, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured by the Dice similarity coefficient (DSC). Results: The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared to the radiologist’s segmentation, and 82.1% accuracy and 100% sensitivity when compared to the CADstream output. The overlap of the algorithm output with the radiologist’s segmentation and the CADstream output, computed as the DSC, was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise: when simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series, the overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC = 0.95. Conclusion: The time-series-analysis-based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor in DCE-MRI. PMID:24115175
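    The Dice similarity coefficient (DSC) reported above compares two binary masks as 2|A ∩ B| / (|A| + |B|), and relates to the Jaccard index J by DSC = 2J/(1 + J). A minimal sketch (toy masks, not study data):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0   # both masks empty: define as perfect agreement
    return 2 * np.logical_and(a, b).sum() / total

truth = np.zeros((8, 8), bool); truth[2:6, 2:6] = True   # 4x4 square
pred = np.zeros((8, 8), bool); pred[3:7, 3:7] = True     # same square, shifted
d = dice_coefficient(truth, pred)   # 2*9 / (16+16) = 0.5625
```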

  9. Imalytics Preclinical: Interactive Analysis of Biomedical Volume Data

    PubMed Central

    Gremse, Felix; Stärk, Marius; Ehling, Josef; Menzel, Jan Robert; Lammers, Twan; Kiessling, Fabian

    2016-01-01

    A software tool is presented for interactive segmentation of volumetric medical data sets. To allow interactive processing of large data sets, segmentation operations and rendering are GPU-accelerated. Special adjustments are provided to overcome GPU-imposed constraints such as limited memory and host-device bandwidth. A general and efficient undo/redo mechanism is implemented using GPU-accelerated compression of the multiclass segmentation state. A broadly applicable set of interactive segmentation operations is provided which can be combined to solve the quantification task of many types of imaging studies. A fully GPU-accelerated ray casting method for multiclass segmentation rendering is implemented which is well-balanced with respect to delay, frame rate, worst-case memory consumption, scalability, and image quality. Performance of segmentation operations and rendering are measured using high-resolution example data sets showing that GPU-acceleration greatly improves the performance. Compared to a reference marching cubes implementation, the rendering was found to be superior with respect to rendering delay and worst-case memory consumption while providing sufficiently high frame rates for interactive visualization and comparable image quality. The fast interactive segmentation operations and the accurate rendering make our tool particularly suitable for efficient analysis of multimodal image data sets which arise in large amounts in preclinical imaging studies. PMID:26909109

  10. Segmentation of Moving Objects by Long Term Video Analysis.

    PubMed

    Ochs, Peter; Malik, Jitendra; Brox, Thomas

    2014-06-01

    Motion is a strong cue for unsupervised object-level grouping. In this paper, we demonstrate that motion is exploited most effectively if it is regarded over larger time windows. As opposed to classical two-frame optical flow, point trajectories that span hundreds of frames are less susceptible to the short-term variations that hinder separating different objects. As a positive side effect, the resulting groupings are temporally consistent over a whole video shot, a property that requires tedious post-processing in the vast majority of existing approaches. We suggest working with a paradigm that starts with semi-dense motion cues first and then fills in textureless areas based on color. This paper also contributes the Freiburg-Berkeley motion segmentation (FBMS) dataset, a large, heterogeneous benchmark with 59 sequences and pixel-accurate ground truth annotation of moving objects.

  11. Segmentation and Image Analysis of Abnormal Lungs at CT: Current Approaches, Challenges, and Future Trends.

    PubMed

    Mansoor, Awais; Bagci, Ulas; Foster, Brent; Xu, Ziyue; Papadakis, Georgios Z; Folio, Les R; Udupa, Jayaram K; Mollura, Daniel J

    2015-01-01

    The computer-based process of identifying the boundaries of lung from surrounding thoracic tissue on computed tomographic (CT) images, which is called segmentation, is a vital first step in radiologic pulmonary image analysis. Many algorithms and software platforms provide image segmentation routines for quantification of lung abnormalities; however, nearly all of the current image segmentation approaches apply well only if the lungs exhibit minimal or no pathologic conditions. When moderate to high amounts of disease or abnormalities with a challenging shape or appearance exist in the lungs, computer-aided detection systems are highly likely to fail to depict those abnormal regions because of inaccurate segmentation methods. In particular, abnormalities such as pleural effusions, consolidations, and masses often cause inaccurate lung segmentation, which greatly limits the use of image processing methods in clinical and research contexts. In this review, a critical summary of the current methods for lung segmentation on CT images is provided, with special emphasis on the accuracy and performance of the methods in cases with abnormalities and cases with exemplary pathologic findings. The currently available segmentation methods can be divided into five major classes: (a) thresholding-based, (b) region-based, (c) shape-based, (d) neighboring anatomy-guided, and (e) machine learning-based methods. The feasibility of each class and its shortcomings are explained and illustrated with the most common lung abnormalities observed on CT images. In an overview, practical applications and evolving technologies combining the presented approaches for the practicing radiologist are detailed.

  12. 3D segmentation of lung CT data with graph-cuts: analysis of parameter sensitivities

    NASA Astrophysics Data System (ADS)

    Cha, Jung won; Dunlap, Neal; Wang, Brian; Amini, Amir

    2016-03-01

    Lung boundary image segmentation is important for many tasks, for example in the development of radiation treatment plans for subjects with thoracic malignancies. In this paper, we describe a method and parameter settings for accurate 3D lung boundary segmentation based on graph-cuts from X-ray CT data. Even though several researchers have previously used graph-cuts for image segmentation, to date no systematic studies have been performed regarding the range of parameters that give accurate results. The energy function in the graph-cuts algorithm requires three suitable parameter settings: K, a large constant for assigning seed points; c, the similarity coefficient for n-links; and λ, the terminal coefficient for t-links. We analyzed the parameter sensitivity with four lung data sets from subjects with lung cancer using error metrics. Large values of K created artifacts on segmented images, and a value of c much larger than λ upset the balance between the boundary term and the data term in the energy function, leading to unacceptable segmentation results. For a range of parameter settings, we performed 3D image segmentation, and in each case compared the results with the expert-delineated lung boundaries. We used a simple 6-neighborhood system for n-links in 3D. The 3D image segmentation took 10 minutes for a 512x512x118 ~ 512x512x190 lung CT image volume. Our results indicate that the graph-cuts algorithm was more sensitive to the K and λ parameter settings than to the c parameter, and furthermore that, among the range of parameters tested, K=5 and λ=0.5 yielded good results.

  13. Design and validation of Segment - freely available software for cardiovascular image analysis

    PubMed Central

    2010-01-01

    Background Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Results Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http://segment

  14. LOGAM (Logistic Analysis Model). Volume 3. Technical/Programmer Manual.

    DTIC Science & Technology

    1982-08-01

MANUAL VOLUME III Systems Analysis Division, Systems Analysis and Evaluation Office, US Army Missile Command, Redstone Arsenal, Alabama 35898, August 1982...hardware or software. FOREWORD The Logistic Analysis Model (LOGAM) Technical/Programmer Manual Volume III was written under Contract DAAH)I-82-C-A...LOGAM Users Manual Volume II and LOGAM Executive Summary Volume I.

  15. High-throughput microcoil NMR of compound libraries using zero-dispersion segmented flow analysis.

    PubMed

    Kautz, Roger A; Goetzinger, Wolfgang K; Karger, Barry L

    2005-01-01

An automated system for loading samples into a microcoil NMR probe has been developed using segmented flow analysis. This approach doubled the throughput of the published direct injection and flow injection methods, improved sample utilization 3-fold, and was applicable to high-field NMR facilities with long transfer lines between the sample handler and the NMR magnet. Sample volumes of 2 microL (10-30 mM, approximately 10 microg) were drawn from a 96-well microtiter plate by a sample handler, then pumped to a 0.5-microL microcoil NMR probe as a queue of closely spaced "plugs" separated by an immiscible fluorocarbon fluid. Individual sample plugs were detected by their NMR signal and automatically positioned for stopped-flow data acquisition. The sample in the NMR coil could be changed within 35 s by advancing the queue. The fluorocarbon liquid wetted the wall of the Teflon transfer line, preventing the DMSO samples from contacting the capillary wall and thus reducing sample losses to below 5% after passage through the 3-m transfer line. With a wash plug of solvent between samples, sample-to-sample carryover was <1%. Significantly, the samples did not disperse into the carrier liquid during loading or during acquisitions of several days for trace analysis. For automated high-throughput analysis using a 16-second acquisition time, spectra were recorded at a rate of 1.5 min/sample and total deuterated solvent consumption was <0.5 mL (1 US dollar) per 96-well plate.

  16. An automatic variational level set segmentation framework for computer aided dental X-rays analysis in clinical environments.

    PubMed

    Li, Shuo; Fevens, Thomas; Krzyzak, Adam; Li, Song

    2006-03-01

    An automatic variational level set segmentation framework for Computer Aided Dental X-rays Analysis (CADXA) in clinical environments is proposed. Designed for clinical environments, the segmentation contains two stages: a training stage and a segmentation stage. During the training stage, first, manually chosen representative images are segmented using hierarchical level set region detection. Then the window based feature extraction followed by principal component analysis (PCA) is applied and results are used to train a support vector machine (SVM) classifier. During the segmentation stage, dental X-rays are classified first by the trained SVM. The classifier provides initial contours which are close to correct boundaries for three coupled level sets driven by a proposed pathologically variational modeling which greatly accelerates the level set segmentation. Based on the segmentation results and uncertainty maps that are built based on a proposed uncertainty measurement, a computer aided analysis scheme is applied. The experimental results show that the proposed method is able to provide an automatic pathological segmentation which naturally segments those problem areas. Based on the segmentation results, the analysis scheme is able to provide indications of possible problem areas of bone loss and decay to the dentists. As well, the experimental results show that the proposed segmentation framework is able to speed up the level set segmentation in clinical environments.
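The training stage described above (window-based feature extraction followed by PCA, before SVM training) can be sketched as follows. The window size, component count, and synthetic image are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def window_features(img, size=8):
    """Extract flattened intensity windows (a stand-in for the paper's
    window-based features) from a 2-D image array."""
    h, w = img.shape
    feats = []
    for i in range(0, h - size + 1, size):
        for j in range(0, w - size + 1, size):
            feats.append(img[i:i + size, j:j + size].ravel())
    return np.array(feats)

def pca_project(X, n_components=4):
    """Project features onto the top principal components via SVD
    of the centered data matrix (rows of Vt are the principal axes)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
img = rng.random((64, 64))          # synthetic stand-in for an X-ray
X = window_features(img)            # one feature vector per window
Z = pca_project(X, n_components=4)  # compressed descriptors for the SVM
print(Z.shape)
```

The reduced descriptors `Z` would then be fed, with manual labels, to an SVM classifier to complete the training stage.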

  17. [Singularity spectra analysis of the ST segments of 12-lead electrocardiogram].

    PubMed

    Wang, Jun; Ning, Xinbao; Xu, Yinlin; Ma, Qianli; Chen, Ying; Li, Dehua

    2007-12-01

By analysing the f(alpha) singularity spectra of the ST segments of the synchronous 12-lead ECG, we have found that the singularity spectrum is close to monofractality and its area is only half the area of the synchronous 12-lead ECG f(alpha) singularity spectrum. The ST segments of the synchronous 12-lead ECG signal also have an f(alpha) singularity spectra distribution with a reasonable varying scope. We have also found that, for adults with coronary heart disease, the number of leads whose ST-segment f(alpha) singularity spectra overstep the reasonable scope tends to be larger than the corresponding number for the ECG f(alpha) singularity spectra. These findings show that the ST-segment f(alpha) singularity spectra distribution of the synchronous 12-lead ECG is more effective for clinical analysis than that of the synchronous 12-lead ECG itself.

  18. A robust and fast line segment detector based on top-down smaller eigenvalue analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Lu, Xiaoqing

    2014-01-01

In this paper, we propose a robust and fast line segment detector, which achieves accurate results with a controlled number of false detections and requires no parameter tuning. It consists of three steps: first, we propose a novel edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input image; second, we propose a top-down scheme based on smaller eigenvalue analysis to extract line segments within each obtained edge segment; third, we employ Desolneux et al.'s method to reject false detections. Experiments demonstrate that it is very efficient and more robust than two state-of-the-art methods, LSD and EDLines.
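The core of the smaller-eigenvalue test can be shown in a few lines: for a chain of edge points, the smaller eigenvalue of the 2x2 scatter (covariance) matrix is near zero exactly when the chain is straight, so thresholding it decides whether a chain is accepted as a line segment. The point sets below are synthetic.

```python
import numpy as np

def smaller_eigenvalue(points):
    """Smaller eigenvalue of the 2x2 covariance matrix of edge points;
    near zero when the points are collinear."""
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T)
    return np.linalg.eigvalsh(cov)[0]  # eigvalsh returns ascending order

line = [(t, 2.0 * t + 1.0) for t in range(10)]   # collinear chain
arc = [(t, 0.1 * t * t) for t in range(10)]      # curved chain
print(smaller_eigenvalue(line))   # ~0 for a straight chain
print(smaller_eigenvalue(arc))    # clearly larger for a curved chain
```

A top-down scheme recursively splits a chain whose smaller eigenvalue exceeds the threshold and retests the halves.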

  19. Analysis of reassortment of genome segments in mice mixedly infected with rotaviruses SA11 and RRV.

    PubMed Central

    Gombold, J L; Ramig, R F

    1986-01-01

    Seven-day-old CD-1 mice born to seronegative dams were orally inoculated with a mixture of wild-type simian rotavirus SA11 and wild-type rhesus rotavirus RRV. At various times postinfection, progeny clones were randomly isolated from intestinal homogenates by limiting dilution. Analysis of genome RNAs by polyacrylamide gel electrophoresis was used to identify and genotype reassortant progeny. Reassortment of genome segments was observed in 252 of 662 (38%) clones analyzed from in vivo mixed infections. Kinetic studies indicated that reassortment was an early event in the in vivo infectious cycle; more than 25% of the progeny clones were reassortant by 12 h postinfection. The frequency of reassortant progeny increased to 80 to 100% by 72 to 96 h postinfection. A few reassortants with specific constellations of SA11 and RRV genome segments were repeatedly isolated from different litters or different animals within single litters, suggesting that these genotypes were independently and specifically selected in vivo. Analysis of segregation of individual genome segments among the 252 reassortant progeny revealed that, although most segments segregated randomly, segments 3 and 5 nonrandomly segregated from the SA11 parent. The possible selective pressures active during in vivo reassortment of rotavirus genome segments are discussed. Images PMID:3001336

  20. Phase Segmentation Methods for an Automatic Surgical Workflow Analysis

    PubMed Central

    Sakurai, Ryuhei; Yamazoe, Hirotake

    2017-01-01

In this paper, we present robust methods for automatically segmenting phases in a specified surgical workflow by using latent Dirichlet allocation (LDA) and hidden Markov model (HMM) approaches. More specifically, our goal is to output an appropriate phase label for each given time point of a surgical workflow in an operating room. The fundamental idea behind our work lies in constructing an HMM based on observed values obtained via an LDA topic model covering optical flow motion features of general working contexts, including medical staff, equipment, and materials. We obtain awareness of such working contexts by using multiple synchronized cameras to capture the surgical workflow. Further, we validate the robustness of our methods by conducting experiments involving up to 12 phases of surgical workflows with the average length of each surgical workflow being 12.8 minutes. The maximum average accuracy achieved after applying leave-one-out cross-validation was 84.4%, which we found to be a very promising result.
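The HMM decoding step can be sketched with a standard Viterbi pass, assuming per-frame emission log-likelihoods (which in the paper come from the LDA topic model) and a sticky phase-transition matrix; all probabilities below are invented for illustration.

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """Most likely phase sequence given per-frame emission
    log-likelihoods (T x K) and transition log-probabilities (K x K)."""
    T, K = log_emit.shape
    dp = log_init + log_emit[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = dp[:, None] + log_trans  # scores[i, j]: from phase i to j
        back[t] = scores.argmax(axis=0)
        dp = scores.max(axis=0) + log_emit[t]
    path = [int(dp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# toy example: two phases, sticky transitions, emissions favoring 0,0,0,1,1
log_emit = np.log(np.array([[.9, .1], [.9, .1], [.9, .1], [.1, .9], [.1, .9]]))
log_trans = np.log(np.array([[.9, .1], [.1, .9]]))
log_init = np.log(np.array([.5, .5]))
path = viterbi(log_emit, log_trans, log_init)
print(path)  # [0, 0, 0, 1, 1]
```

The sticky diagonal of the transition matrix is what smooths frame-level LDA evidence into contiguous phase labels.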

  1. Second law analysis of an infinitely segmented magnetohydrodynamic generator

    NASA Astrophysics Data System (ADS)

    Arash, Ardeshir; Saidi, Mohammad Hassan; Najafi, Mohammad

    2017-03-01

The performance of an infinitely segmented magnetohydrodynamic generator is analyzed using the second law of thermodynamics entropy generation criterion. The exact analytical solution of the velocity and temperature fields are provided by applying the modified Hartmann flow model, taking into account the occurrence of the Hall effect in the considered generator. Contributions of heat transfer, fluid friction, and ohmic dissipation to the destruction of useful available work are found, and the nature of irreversibilities in the considered generator is determined. In addition, the electrical isotropic efficiency scheme is used to evaluate the generator performance. Finally, the effects of the Hall parameter, Hartmann number, and load factor on entropy generation and generator performance are studied, and the optimal operating conditions are determined. The results show that the heat transfer has the smallest contribution to the entropy generation compared to that of the friction and ohmic dissipation. The application of the Hall effect on the system showed an appreciable augmentation of the entropy generation rate, consistent with physical expectations. A parametric study is conducted, and its results provide entropy generation and efficiency diagrams that show the influence of the Hall effect on the considered generator.
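The three irreversibility contributions named above are commonly written as a volumetric entropy generation rate; the following is the standard textbook form for a conducting fluid, not necessarily the exact expression used in the paper:

```latex
\dot{S}'''_{\mathrm{gen}}
  = \underbrace{\frac{k}{T^{2}}\left(\nabla T\right)^{2}}_{\text{heat transfer}}
  + \underbrace{\frac{\mu}{T}\,\Phi}_{\text{fluid friction}}
  + \underbrace{\frac{J^{2}}{\sigma T}}_{\text{ohmic dissipation}}
```

where $k$ is the thermal conductivity, $\mu$ the dynamic viscosity, $\Phi$ the viscous dissipation function, $J$ the current density, and $\sigma$ the electrical conductivity. Integrating this rate over the channel volume gives the total destruction of useful available work.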

  2. Automatic segmentation of solitary pulmonary nodules based on local intensity structure analysis and 3D neighborhood features in 3D chest CT images

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

    2012-03-01

This paper presents a solitary pulmonary nodule (SPN) segmentation method based on local intensity structure analysis and neighborhood feature analysis in chest CT images. Automated segmentation of SPNs is desirable for a chest computer-aided detection/diagnosis (CAD) system since an SPN may indicate an early stage of lung cancer. Due to the similar intensities of SPNs and other chest structures such as blood vessels, many false positives (FPs) are generated by nodule detection methods. To reduce such FPs, we introduce two features that analyze the relation between each segmented nodule candidate and its neighborhood region. The proposed method utilizes a blob-like structure enhancement (BSE) filter based on Hessian analysis to augment the blob-like structures as initial nodule candidates. Then a fine segmentation is performed to segment a much more accurate region of each nodule candidate. FP reduction is mainly addressed by investigating two neighborhood features based on volume ratio and eigenvector of Hessian that are calculated from the neighborhood region of each nodule candidate. We evaluated the proposed method by using 40 chest CT images, including 20 standard-dose CT images randomly chosen from a local database and 20 low-dose CT images randomly chosen from a public database, LIDC. The experimental results revealed that the average TP rate of the proposed method was 93.6% with 12.3 FPs/case.
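The Hessian-based blob enhancement can be illustrated in 2-D (the paper works in 3-D): at a bright blob, all Hessian eigenvalues are negative. The filter below and the synthetic Gaussian blob are a simplified stand-in for the BSE filter, not the paper's implementation.

```python
import numpy as np

def hessian_blob_response(img):
    """Score each pixel by sqrt(l1*l2) when both Hessian eigenvalues
    are negative (bright blob), else 0. Derivatives via finite differences."""
    Iy, Ix = np.gradient(img)
    Iyy, Iyx = np.gradient(Iy)
    Ixy, Ixx = np.gradient(Ix)
    resp = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            H = np.array([[Ixx[i, j], Ixy[i, j]],
                          [Ixy[i, j], Iyy[i, j]]])
            l1, l2 = np.linalg.eigvalsh(H)
            if l1 < 0 and l2 < 0:
                resp[i, j] = np.sqrt(l1 * l2)
    return resp

# synthetic bright blob on a dark background
y, x = np.mgrid[-15:16, -15:16]
img = np.exp(-(x**2 + y**2) / 20.0)
resp = hessian_blob_response(img)
print(np.unravel_index(resp.argmax(), resp.shape))  # peak at the blob centre
```

In 3-D the same idea uses three eigenvalues, and vessels (one near-zero eigenvalue) score low while blobs score high, which is what suppresses vessel false positives.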

  3. Segmenting Business Students Using Cluster Analysis Applied to Student Satisfaction Survey Results

    ERIC Educational Resources Information Center

    Gibson, Allen

    2009-01-01

    This paper demonstrates a new application of cluster analysis to segment business school students according to their degree of satisfaction with various aspects of the academic program. The resulting clusters provide additional insight into drivers of student satisfaction that are not evident from analysis of the responses of the student body as a…
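A minimal k-means clustering of survey respondents, in the spirit of the segmentation described above; the two-dimensional satisfaction ratings and the number of clusters are invented for illustration.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Bare-bones k-means: each point is a tuple of survey ratings;
    returns the final cluster centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

# hypothetical (teaching, facilities) satisfaction ratings on a 1-5 scale
pts = [(4.5, 4.0), (4.8, 4.2), (1.5, 2.0), (1.2, 1.8)]
print(sorted(kmeans(pts, 2)))  # one satisfied and one dissatisfied segment
```

Each resulting center summarizes a student segment, which is the "additional insight" a whole-population mean would hide.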

  4. A normative spatiotemporal MRI atlas of the fetal brain for automatic segmentation and analysis of early brain growth.

    PubMed

    Gholipour, Ali; Rollins, Caitlin K; Velasco-Annis, Clemente; Ouaalam, Abdelhakim; Akhondi-Asl, Alireza; Afacan, Onur; Ortinau, Cynthia M; Clancy, Sean; Limperopoulos, Catherine; Yang, Edward; Estroff, Judy A; Warfield, Simon K

    2017-03-28

    Longitudinal characterization of early brain growth in-utero has been limited by a number of challenges in fetal imaging, the rapid change in size, shape and volume of the developing brain, and the consequent lack of suitable algorithms for fetal brain image analysis. There is a need for an improved digital brain atlas of the spatiotemporal maturation of the fetal brain extending over the key developmental periods. We have developed an algorithm for construction of an unbiased four-dimensional atlas of the developing fetal brain by integrating symmetric diffeomorphic deformable registration in space with kernel regression in age. We applied this new algorithm to construct a spatiotemporal atlas from MRI of 81 normal fetuses scanned between 19 and 39 weeks of gestation and labeled the structures of the developing brain. We evaluated the use of this atlas and additional individual fetal brain MRI atlases for completely automatic multi-atlas segmentation of fetal brain MRI. The atlas is available online as a reference for anatomy and for registration and segmentation, to aid in connectivity analysis, and for groupwise and longitudinal analysis of early brain growth.
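The kernel-regression-in-age idea can be shown on scalars (the atlas applies it to registered images and deformations): the atlas at a query age is a Gaussian-weighted average of the subjects near that age. The ages, values, and bandwidth below are hypothetical.

```python
import math

def kernel_regress(ages, values, query_age, bandwidth=2.0):
    """Nadaraya-Watson kernel regression with a Gaussian kernel."""
    w = [math.exp(-0.5 * ((a - query_age) / bandwidth) ** 2) for a in ages]
    return sum(wi * v for wi, v in zip(w, values)) / sum(w)

ages = [20, 25, 30, 35]            # gestational ages (weeks), hypothetical
vol = [100.0, 150.0, 200.0, 250.0]  # hypothetical brain volumes (mL)
print(kernel_regress(ages, vol, 27.5))  # smooth age-specific estimate
```

Sliding the query age across the gestational range yields a continuous, unbiased four-dimensional trajectory rather than discrete per-week templates.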

  5. Preliminary analysis of effect of random segment errors on coronagraph performance

    NASA Astrophysics Data System (ADS)

    Stahl, Mark T.; Shaklan, Stuart B.; Stahl, H. Philip

    2015-09-01

"Are we alone in the Universe?" is probably the most compelling science question of our generation. To answer it requires a large aperture telescope with extreme wavefront stability. To image and characterize Earth-like planets requires the ability to block 10^10 of the host star's light with a 10^-11 stability. For an internal coronagraph, this requires correcting wavefront errors and keeping that correction stable to a few picometers rms for the duration of the science observation. This requirement places severe specifications upon the performance of the observatory, telescope and primary mirror. A key task of the AMTD project (initiated in FY12) is to define telescope level specifications traceable to science requirements and flow those specifications to the primary mirror. From a systems perspective, probably the most important question is: What is the telescope wavefront stability specification? Previously, we suggested this specification should be 10 picometers per 10 minutes; considered issues of how this specification relates to architecture, i.e. monolithic or segmented primary mirror; and asked whether it was better to have few or many segments. This paper reviews the 10 picometers per 10 minutes specification; provides analysis related to the application of this specification to segmented apertures; and suggests that a 3 or 4 ring segmented aperture is more sensitive to segment rigid body motion than an aperture with fewer or more segments.

  6. Mathematical Analysis of Space Radiator Segmenting for Increased Reliability and Reduced Mass

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.

    2001-01-01

    Spacecraft for long duration deep space missions will need to be designed to survive micrometeoroid bombardment of their surfaces some of which may actually be punctured. To avoid loss of the entire mission the damage due to such punctures must be limited to small, localized areas. This is especially true for power system radiators, which necessarily feature large surface areas to reject heat at relatively low temperature to the space environment by thermal radiation. It may be intuitively obvious that if a space radiator is composed of a large number of independently operating segments, such as heat pipes, a random micrometeoroid puncture will result only in the loss of the punctured segment, and not the entire radiator. Due to the redundancy achieved by independently operating segments, the wall thickness and consequently the weight of such segments can be drastically reduced. Probability theory is used to estimate the magnitude of such weight reductions as the number of segments is increased. An analysis of relevant parameter values required for minimum mass segmented radiators is also included.
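The redundancy argument above can be made concrete with a binomial reliability model: if segments fail independently under micrometeoroid punctures, the probability that enough segments survive rises sharply once spares are available. The per-segment survival probability and segment counts below are illustrative, not from the paper's analysis.

```python
import math

def survival_probability(n_segments, p_segment, n_required):
    """Probability that at least n_required of n_segments survive,
    with independent per-segment survival probability p_segment."""
    return sum(math.comb(n_segments, k)
               * p_segment**k * (1 - p_segment)**(n_segments - k)
               for k in range(n_required, n_segments + 1))

# a monolithic radiator must survive intact...
print(survival_probability(1, 0.95, 1))
# ...while a radiator sized for 16 segments, built with 20, tolerates 4 losses
print(survival_probability(20, 0.95, 16))
```

Because the mission only needs most segments alive, each segment's wall can be thinned (lower p_segment per unit mass) while system reliability still improves, which is the source of the mass savings.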

  7. Gene expression analysis reveals that Delta/Notch signalling is not involved in onychophoran segmentation.

    PubMed

    Janssen, Ralf; Budd, Graham E

    2016-03-01

Delta/Notch (Dl/N) signalling is involved in the gene regulatory network underlying the segmentation process in vertebrates and possibly also in annelids and arthropods, leading to the hypothesis that segmentation may have evolved in the last common ancestor of bilaterian animals. Because of seemingly contradicting results within the well-studied arthropods, however, the role and origin of Dl/N signalling in segmentation generally is still unclear. In this study, we investigate core components of Dl/N signalling by means of gene expression analysis in the onychophoran Euperipatoides kanangrensis, a close relative to the arthropods. We find that neither Delta nor Notch nor any other investigated components of its signalling pathway are likely to be involved in segment addition in onychophorans. We instead suggest that Dl/N signalling may be involved in posterior elongation, another conserved function of these genes. We suggest further that the posterior elongation network, rather than classic Dl/N signalling, may be in the control of the highly conserved segment polarity gene network and the lower-level pair-rule gene network in onychophorans. Consequently, we believe that the pair-rule gene network and its interaction with Dl/N signalling may have evolved within the arthropod lineage and that Dl/N signalling has thus likely been recruited independently for segment addition in different phyla.

  8. AxonSeg: Open Source Software for Axon and Myelin Segmentation and Morphometric Analysis

    PubMed Central

    Zaimi, Aldo; Duval, Tanguy; Gasecka, Alicja; Côté, Daniel; Stikov, Nikola; Cohen-Adad, Julien

    2016-01-01

Segmenting axon and myelin from microscopic images is relevant for studying the peripheral and central nervous system and for validating new MRI techniques that aim at quantifying tissue microstructure. While several software packages have been proposed, their interface is sometimes limited and/or they are designed to work with a specific modality (e.g., scanning electron microscopy (SEM) only). Here we introduce AxonSeg, which allows users to perform automatic axon and myelin segmentation on histology images and to extract relevant morphometric information, such as axon diameter distribution, axon density and the myelin g-ratio. AxonSeg includes a simple and intuitive MATLAB-based graphical user interface (GUI) and can easily be adapted to a variety of imaging modalities. The main steps of AxonSeg consist of: (i) image pre-processing; (ii) pre-segmentation of axons over a cropped image and discriminant analysis (DA) to select the best parameters based on axon shape and intensity information; (iii) automatic axon and myelin segmentation over the full image; and (iv) atlas-based statistics to extract morphometric information. Segmentation results from standard optical microscopy (OM), SEM and coherent anti-Stokes Raman scattering (CARS) microscopy are presented, along with validation against manual segmentations. Because it is fully automatic after a quick manual intervention on a cropped image, we believe AxonSeg will be useful to researchers interested in large-throughput histology. AxonSeg is open source and freely available at: https://github.com/neuropoly/axonseg. PMID:27594833
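Of the morphometrics listed above, the myelin g-ratio has a simple closed form once the axon and fiber (axon plus myelin) regions are segmented: the ratio of inner to outer equivalent-circle diameters. The areas below are hypothetical.

```python
import math

def g_ratio(axon_area, fiber_area):
    """Myelin g-ratio from segmented areas: ratio of the inner (axon)
    to outer (axon + myelin) equivalent-circle diameters,
    i.e. sqrt(axon_area / fiber_area)."""
    return math.sqrt(axon_area / fiber_area)

# hypothetical segmented areas in um^2 for one fiber
print(round(g_ratio(4.0, 6.25), 2))  # 0.8
```

Aggregating this per-fiber value over a whole slide is what the atlas-based statistics step reports.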

  9. Preliminary Analysis of Effect of Random Segment Errors on Coronagraph Performance

    NASA Technical Reports Server (NTRS)

    Stahl, Mark T.; Shaklan, Stuart B.; Stahl, H. Philip

    2015-01-01

    Are we alone in the Universe is probably the most compelling science question of our generation. To answer it requires a large aperture telescope with extreme wavefront stability. To image and characterize Earth-like planets requires the ability to block 10(exp 10) of the host stars light with a 10(exp -11) stability. For an internal coronagraph, this requires correcting wavefront errors and keeping that correction stable to a few picometers rms for the duration of the science observation. This requirement places severe specifications upon the performance of the observatory, telescope and primary mirror. A key task of the AMTD project (initiated in FY12) is to define telescope level specifications traceable to science requirements and flow those specifications to the primary mirror. From a systems perspective, probably the most important question is: What is the telescope wavefront stability specification? Previously, we suggested this specification should be 10 picometers per 10 minutes; considered issues of how this specification relates to architecture, i.e. monolithic or segmented primary mirror; and asked whether it was better to have few or many segmented. This paper reviews the 10 picometers per 10 minutes specification; provides analysis related to the application of this specification to segmented apertures; and suggests that a 3 or 4 ring segmented aperture is more sensitive to segment rigid body motion that an aperture with fewer or more segments.

  10. Total variation based edge enhancement for level set segmentation and asymmetry analysis in breast thermograms.

    PubMed

    Prabha, S; Anandh, K R; Sujatha, C M; Ramakrishnan, S

    2014-01-01

    In this work, an attempt has been made to perform asymmetry analysis in breast thermograms using non-linear total variation diffusion filter and reaction diffusion based level set method. Breast images used in this study are obtained from online database of the project PROENG. Initially the images are subjected to total variation (TV) diffusion filter to generate the edge map. Reaction diffusion based level set method is employed to segment the breast tissues using TV edge map as stopping boundary function. Asymmetry analysis is performed on the segmented breast tissues using wavelet based structural texture features. The results show that nonlinear total variation based reaction diffusion level set method could efficiently segment the breast tissues. This method yields high correlation between the segmented output and the ground truth than the conventional level set. Structural texture features extracted from the wavelet coefficients are found to be significant in demarcating normal and abnormal tissues. Hence, it appears that the asymmetry analysis on segmented breast tissues extracted using total variation edge map can be used efficiently to identify the pathological conditions of breast thermograms.
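An edge-stopping function of the kind used to halt a level set at the edge map can be sketched as g = 1/(1 + |∇I|²), which is close to 1 in flat regions and small at edges; here a plain gradient magnitude stands in for the paper's TV edge map, and the step image is synthetic.

```python
import numpy as np

def edge_stopping(img):
    """Edge indicator g = 1/(1 + |grad I|^2): near 1 where the image is
    flat (level set keeps moving), near 0 at edges (level set stops)."""
    gy, gx = np.gradient(img.astype(float))
    return 1.0 / (1.0 + gx**2 + gy**2)

img = np.zeros((5, 5))
img[:, 3:] = 10.0           # a vertical step edge
g = edge_stopping(img)
print(g[2, 0], g[2, 3])     # ~1 in the flat region, small at the edge
```

In the reaction-diffusion level set formulation, multiplying the front speed by g is what anchors the evolving contour to the breast boundary.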

  11. Effect of different segmentation algorithms on metabolic tumor volume measured on 18F-FDG PET/CT of cervical primary squamous cell carcinoma

    PubMed Central

    Xu, Weina; Yu, Shupeng; Ma, Ying; Liu, Changping

    2017-01-01

Background and purpose It is known that fluorine-18 fluorodeoxyglucose PET/computed tomography (CT) segmentation algorithms have an impact on the metabolic tumor volume (MTV). This leads to some uncertainties in PET/CT guidance of tumor radiotherapy. The aim of this study was to investigate the effect of segmentation algorithms on the PET/CT-based MTV and their correlations with the gross tumor volumes (GTVs) of cervical primary squamous cell carcinoma. Materials and methods Fifty-five patients with International Federation of Gynecology and Obstetrics stage Ia∼IIb and histologically proven cervical squamous cell carcinoma were enrolled. A fluorine-18 fluorodeoxyglucose PET/CT scan was performed before definitive surgery. GTV was measured on surgical specimens. MTVs were estimated on PET/CT scans using different segmentation algorithms, including a fixed percentage of the maximum standardized uptake value (20∼60% SUVmax) threshold and iterative adaptive algorithm. We divided all patients into four different groups according to the SUVmax within target volume. The comparisons of absolute values and percentage differences between MTVs by segmentation and GTV were performed in different SUVmax subgroups. The optimal threshold percentage was determined from MTV20%∼MTV60%, and was correlated with SUVmax. The correlation of MTViterative adaptive with GTV was also investigated. Results MTV50% and MTV60% were similar to GTV in the SUVmax up to 5 group (P>0.05). MTV30%∼MTV60% were similar to GTV in the 5<SUVmax≤10 group (P>0.05), in the 10<SUVmax≤15 group (P>0.05), and in the SUVmax of at least 15 group (P>0.05). MTViterative adaptive was similar to GTV in both total and different SUVmax groups (P>0.05). Significant differences were observed among the fixed percentage method and the optimal threshold percentage was inversely correlated with SUVmax. The iterative adaptive segmentation algorithm led
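The fixed-percentage-of-SUVmax algorithm compared above reduces to thresholding the SUV map at a fraction of its maximum and counting voxels; as the fraction rises, the measured MTV shrinks, which is why the choice of threshold matters. The voxel volume and the synthetic SUV map below are illustrative assumptions.

```python
import numpy as np

def mtv_fixed_threshold(suv, fraction, voxel_volume_ml=0.1):
    """Metabolic tumor volume (mL) with a fixed-percentage-of-SUVmax
    threshold: count voxels with SUV >= fraction * SUVmax."""
    thr = fraction * suv.max()
    return int((suv >= thr).sum()) * voxel_volume_ml

rng = np.random.default_rng(1)
suv = rng.random((10, 10, 10)) * 12.0   # synthetic SUV map
for f in (0.2, 0.4, 0.6):
    print(f, mtv_fixed_threshold(suv, f))  # MTV shrinks as f grows
```

An iterative adaptive algorithm instead updates the threshold from the mean uptake of the current segmentation until it converges, which is why it tracks GTV more consistently across SUVmax groups.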

  12. Microreactors with integrated UV/Vis spectroscopic detection for online process analysis under segmented flow.

    PubMed

    Yue, Jun; Falke, Floris H; Schouten, Jaap C; Nijhuis, T Alexander

    2013-12-21

    Combining reaction and detection in multiphase microfluidic flow is becoming increasingly important for accelerating process development in microreactors. We report the coupling of UV/Vis spectroscopy with microreactors for online process analysis under segmented flow conditions. Two integration schemes are presented: one uses a cross-type flow-through cell subsequent to a capillary microreactor for detection in the transmission mode; the other uses embedded waveguides on a microfluidic chip for detection in the evanescent wave field. Model experiments reveal the capabilities of the integrated systems in real-time concentration measurements and segmented flow characterization. The application of such integration for process analysis during gold nanoparticle synthesis is demonstrated, showing its great potential in process monitoring in microreactors operated under segmented flow.

  13. Multi-object segmentation framework using deformable models for medical imaging analysis.

    PubMed

    Namías, Rafael; D'Amato, Juan Pablo; Del Fresno, Mariana; Vénere, Marcelo; Pirró, Nicola; Bellemare, Marc-Emmanuel

    2016-08-01

Segmenting structures of interest in medical images is an important step in different tasks such as visualization, quantitative analysis, simulation, and image-guided surgery, among several other clinical applications. Numerous segmentation methods have been developed in the past three decades for extraction of anatomical or functional structures on medical imaging. Deformable models, which include the active contour models or snakes, are among the most popular methods for image segmentation combining several desirable features such as inherent connectivity and smoothness. Even though different approaches have been proposed and significant work has been dedicated to the improvement of such algorithms, there are still challenging research directions, such as the simultaneous extraction of multiple objects and the integration of individual techniques. This paper presents a novel open-source framework called deformable model array (DMA) for the segmentation of multiple and complex structures of interest in different imaging modalities. While most active contour algorithms can extract one region at a time, DMA allows integrating several deformable models to deal with multiple segmentation scenarios. Moreover, it is possible to consider any existing explicit deformable model formulation and even to incorporate new active contour methods, allowing to select a suitable combination in different conditions. The framework also introduces a control module that coordinates the cooperative evolution of the snakes and is able to solve interaction issues toward the segmentation goal. Thus, DMA can implement complex object and multi-object segmentations in both 2D and 3D using the contextual information derived from the model interaction. These are important features for several medical image analysis tasks in which different but related objects need to be simultaneously extracted. Experimental results on both computed tomography and magnetic resonance imaging show that the proposed

  14. 3-D segmentation and quantitative analysis of inner and outer walls of thrombotic abdominal aortic aneurysms

    NASA Astrophysics Data System (ADS)

    Lee, Kyungmoo; Yin, Yin; Wahle, Andreas; Olszewski, Mark E.; Sonka, Milan

    2008-03-01

    An abdominal aortic aneurysm (AAA) is an area of a localized widening of the abdominal aorta, with a frequent presence of thrombus. A ruptured aneurysm can cause death due to severe internal bleeding. AAA thrombus segmentation and quantitative analysis are of paramount importance for diagnosis, risk assessment, and determination of treatment options. Until now, only a small number of methods for thrombus segmentation and analysis have been presented in the literature, either requiring substantial user interaction or exhibiting insufficient performance. We report a novel method offering minimal user interaction and high accuracy. Our thrombus segmentation method is composed of an initial automated luminal surface segmentation, followed by a cost function-based optimal segmentation of the inner and outer surfaces of the aortic wall. The approach utilizes the power and flexibility of the optimal triangle mesh-based 3-D graph search method, in which cost functions for thrombus inner and outer surfaces are based on gradient magnitudes. Sometimes local failures caused by image ambiguity occur, in which case several control points are used to guide the computer segmentation without the need to trace borders manually. Our method was tested in 9 MDCT image datasets (951 image slices). With the exception of a case in which the thrombus was highly eccentric, visually acceptable aortic lumen and thrombus segmentation results were achieved. No user interaction was used in 3 out of 8 datasets, and 7.80 +/- 2.71 mouse clicks per case / 0.083 +/- 0.035 mouse clicks per image slice were required in the remaining 5 datasets.

  15. The Influence of Segmental Impedance Analysis in Predicting Validity of Consumer Grade Bioelectrical Impedance Analysis Devices

    NASA Astrophysics Data System (ADS)

    Sharp, Andy; Heath, Jennifer; Peterson, Janet

    2008-05-01

Consumer grade bioelectric impedance analysis (BIA) instruments measure the body's impedance at 50 kHz, and yield a quick estimate of percent body fat. The frequency dependence of the impedance gives more information about the current pathway and the response of different tissues. This study explores the impedance response of human tissue at a range of frequencies from 0.2 to 102 kHz using a four probe method and probe locations standard for segmental BIA research of the arm. The data at 50 kHz, for a 21 year old healthy Caucasian male (resistance of 180 ± 10 Ω and reactance of 33 ± 2 Ω) is in agreement with previously reported values [1]. The frequency dependence is not consistent with simple circuit models commonly used in evaluating BIA data, and repeatability of measurements is problematic. This research will contribute to a better understanding of the inherent difficulties in estimating body fat using consumer grade BIA devices. [1] Chumlea, William C., Richard N. Baumgartner, and Alex F. Roche. ``Specific resistivity used to estimate fat-free mass from segmental body measures of bioelectrical impedance.'' Am J Clin Nutr 48 (1988): 7-15.
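The "simple circuit models commonly used in evaluating BIA data" are typically Fricke-type: extracellular resistance in parallel with intracellular resistance in series with the cell-membrane capacitance. The sketch below computes the resistance and reactance such a model predicts across frequency; the component values are illustrative, not fitted to the measurements above.

```python
import math

def tissue_impedance(f_hz, r_e=300.0, r_i=600.0, c_m=2e-9):
    """Fricke-type tissue model: R_e in parallel with (R_i in series
    with membrane capacitance C_m). Returns (resistance, reactance)."""
    omega = 2 * math.pi * f_hz
    z_branch = complex(r_i, -1.0 / (omega * c_m))  # R_i + capacitor
    z = (r_e * z_branch) / (r_e + z_branch)        # parallel combination
    return z.real, -z.imag                          # BIA sign convention

for f in (1e3, 50e3, 1e6):
    r, x = tissue_impedance(f)
    print(f, round(r, 1), round(x, 1))
```

At low frequency the capacitor blocks the intracellular path (resistance approaches R_e); at high frequency it shorts, giving R_e parallel R_i; the reactance peaks in between. Deviations from this single-dispersion shape are what the study reports as inconsistency with the simple model.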

  16. Volume accumulator design analysis computer codes

    NASA Technical Reports Server (NTRS)

    Whitaker, W. D.; Shimazaki, T. T.

    1973-01-01

    The computer codes, VANEP and VANES, were written and used to aid in the design and performance calculation of the volume accumulator units (VAU) for the 5-kWe reactor thermoelectric system. VANEP computes the VAU design which meets the primary coolant loop VAU volume and pressure performance requirements. VANES computes the performance of the VAU design, determined from the VANEP code, at the conditions of the secondary coolant loop. The codes can also compute the performance characteristics of the VAUs under conditions of possible modes of failure which still permit continued system operation.

  17. Combining multiset resolution and segmentation for hyperspectral image analysis of biological tissues.

    PubMed

    Piqueras, S; Krafft, C; Beleites, C; Egodage, K; von Eggeling, F; Guntinas-Lichius, O; Popp, J; Tauler, R; de Juan, A

    2015-06-30

    Hyperspectral images can provide useful biochemical information about tissue samples. Fourier transform infrared (FTIR) images have often been used to distinguish different tissue elements and changes caused by pathology. The spectral variation between tissue types and pathological states is very small, and multivariate analysis methods are required to adequately describe these subtle changes. In this work, a strategy combining multivariate curve resolution-alternating least squares (MCR-ALS), a resolution (unmixing) method that recovers distribution maps and pure spectra of image constituents, and K-means clustering, a segmentation method that identifies groups of similar pixels in an image, is used to provide efficient information on tissue samples. First, multiset MCR-ALS analysis is performed on the set of images related to a particular pathology status to provide the basic spectral signatures and distribution maps of the biological contributions needed to describe the tissues. Then, multiset segmentation analysis is applied to the obtained MCR scores (concentration profiles), used as compressed initial information for segmentation purposes. The multiset idea is transferred to perform image segmentation across different tissue samples. In this way, a distinction can be made between clusters associated with relevant biological parts common to all images, linked to general trends of the type of samples analyzed, and sample-specific clusters that reflect the natural biological sample-to-sample variability. The last step consists of performing separate multiset MCR-ALS analyses on the pixels of each of the relevant segmentation clusters for the pathology studied to obtain a finer description of the related tissue parts. The potential of the strategy combining multiset resolution on complete images, multiset segmentation and multiset local resolution analysis will be shown on a study focused on FTIR images of tissue sections recorded on inflamed and non

  18. Segmented K-mer and its application on similarity analysis of mitochondrial genome sequences.

    PubMed

    Yu, Hong-Jie

    2013-04-15

    K-mer-based approaches have been widely used in similarity analyses to discover similarity/dissimilarity among different biological sequences. In this study, we improve the traditional K-mer method and introduce a segmented K-mer approach (s-K-mer). After each primary sequence is divided into several segments, we simultaneously transform all these segments into corresponding K-mer-based vectors. In this approach, it is vital to determine the optimal combination of distance metric, value of K, and number of segments, i.e., (K*, s*, d*). Based on the cascaded feature vectors transformed from s* segmented sequences, we analyze 34 mammalian genome sequences using the proposed s-K-mer approach and compare the results with those of the traditional K-mer method. The contrastive analysis demonstrates that the s-K-mer approach outperforms the traditional K-mer method in similarity analysis among different species.
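The s-K-mer construction, splitting a sequence into s segments and cascading the per-segment K-mer frequency vectors, can be sketched as follows. The equal-length segment boundaries and Euclidean distance used here are simplifying assumptions, not necessarily the paper's optimal (K*, s*, d*) choices:

```python
from itertools import product

def kmer_vector(seq, k):
    """Normalized K-mer frequency vector over the DNA alphabet."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        sub = seq[i:i + k]
        if sub in counts:
            counts[sub] += 1
    total = max(sum(counts.values()), 1)
    return [counts[km] / total for km in kmers]

def segmented_kmer_vector(seq, k, s):
    """Split seq into s near-equal segments and concatenate (cascade)
    the per-segment K-mer vectors, as in the s-K-mer approach."""
    n = len(seq)
    bounds = [round(i * n / s) for i in range(s + 1)]
    vec = []
    for i in range(s):
        vec.extend(kmer_vector(seq[bounds[i]:bounds[i + 1]], k))
    return vec

def euclidean(u, v):
    """Distance between two cascaded feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
```

With s = 1 this reduces exactly to the traditional K-mer method, which is why s-K-mer is a strict generalization.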

  19. Segmentation of vascular structures and hematopoietic cells in 3D microscopy images and quantitative analysis

    NASA Astrophysics Data System (ADS)

    Mu, Jian; Yang, Lin; Kamocka, Malgorzata M.; Zollman, Amy L.; Carlesso, Nadia; Chen, Danny Z.

    2015-03-01

    In this paper, we present image processing methods for quantitative study of how the bone marrow microenvironment (characterized by vascular structure and hematopoietic cell distribution) is changed by diseases or other factors. We develop algorithms that automatically segment vascular structures and hematopoietic cells in 3-D microscopy images, perform quantitative analysis of the properties of the segmented vascular structures and cells, and examine how such properties change. In processing images, we apply local thresholding to segment vessels and add post-processing steps to deal with imaging artifacts. We propose an improved watershed algorithm that relies on both intensity and shape information and can separate multiple overlapping cells better than common watershed methods. We then quantitatively compute various features of the vascular structures and hematopoietic cells, such as the branches and sizes of vessels and the distribution of cells. In analyzing vascular properties, we provide algorithms for pruning spurious vessel segments and branches based on vessel skeletons. Our algorithms can segment vascular structures and hematopoietic cells with good quality. We use our methods to quantitatively examine changes in the bone marrow microenvironment caused by deletion of the Notch pathway. Our quantitative analysis reveals property changes in samples with the Notch pathway deleted. Our tool is useful for biologists to quantitatively measure changes in the bone marrow microenvironment and to develop possible therapeutic strategies that help the bone marrow microenvironment recover.
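As a hedged illustration of the local-thresholding step used for vessel segmentation, a voxel can be kept when it exceeds the mean of its surrounding window; the window radius and offset below are arbitrary placeholders, not the authors' tuned values, and the example is 2-D for brevity:

```python
def local_threshold(img, radius=1, offset=0.0):
    """Adaptive (local mean) thresholding: a pixel is foreground when it
    exceeds the mean of its (2*radius+1)^2 neighborhood by `offset`."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - radius), min(h, y + radius + 1))
                    for xx in range(max(0, x - radius), min(w, x + radius + 1))]
            if img[y][x] > sum(vals) / len(vals) + offset:
                out[y][x] = 1
    return out
```

Because the threshold adapts to each neighborhood, bright vessels are recovered even when illumination varies across the volume, which a single global threshold would miss.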

  20. Stochastic segmentation models for array-based comparative genomic hybridization data analysis.

    PubMed

    Lai, Tze Leung; Xing, Haipeng; Zhang, Nancy

    2008-04-01

    Array-based comparative genomic hybridization (array-CGH) is a high throughput, high resolution technique for studying the genetics of cancer. Analysis of array-CGH data typically involves estimation of the underlying chromosome copy numbers from the log fluorescence ratios and segmenting the chromosome into regions with the same copy number at each location. We propose, for the analysis of array-CGH data, a new stochastic segmentation model and an associated estimation procedure that has attractive statistical and computational properties. An important benefit of this Bayesian segmentation model is that it yields explicit formulas for posterior means, which can be used to estimate the signal directly without performing segmentation. Other quantities relating to the posterior distribution that are useful for providing confidence assessments of any given segmentation can also be estimated by using our method. We propose an approximation method whose computation time is linear in sequence length, which makes our method practically applicable to the new higher density arrays. Simulation studies and applications to real array-CGH data illustrate the advantages of the proposed approach.

  1. Segmental analysis of molecular surface electrostatic potentials: application to enzyme inhibition.

    PubMed

    Brinck, Tore; Jin, Ping; Ma, Yuguang; Murray, Jane S; Politzer, Peter

    2003-04-01

    We have recently shown that the anti-HIV activities of reverse transcriptase inhibitors can be related quantitatively to properties of the electrostatic potentials on their molecular surfaces. We now introduce the technique of using only segments of the drug molecules in developing such expressions. If an improved correlation is obtained for a given family of compounds, it would suggest that the segment being used plays a key role in the interaction. We demonstrate the procedure for three groups of drugs, two acting on reverse transcriptase and one on HIV protease. Segmental analysis is found to be definitely beneficial in one case, less markedly so in another, and to have a negative effect in the third. The last result indicates that major portions of the molecular surfaces are involved in the interactions and that the entire molecules need to be considered, in contrast to the first two examples, in which certain segments appear to be of primary importance. This initial exploratory study shows that segmental analysis can provide insight into the nature of the process being investigated, as well as possibly enhancing the predictive capability.

  2. Morphotectonic Index Analysis as an Indicator of Neotectonic Segmentation of the Nicoya Peninsula, Costa Rica

    NASA Astrophysics Data System (ADS)

    Morrish, S.; Marshall, J. S.

    2013-12-01

    The Nicoya Peninsula lies within the Costa Rican forearc, where the Cocos plate subducts under the Caribbean plate at ~8.5 cm/yr. Rapid plate convergence produces frequent large earthquakes (~50 yr recurrence interval) and pronounced crustal deformation (0.1-2.0 m/ky uplift). Seven uplifted segments have been identified in previous studies using broad geomorphic surfaces (Hare & Gardner 1984) and late Quaternary marine terraces (Marshall et al. 2010). These surfaces suggest long-term net uplift and segmentation of the peninsula in response to contrasting domains of subducting seafloor (EPR, CNS-1, CNS-2). In this study, newer 10 m contour digital topographic data (CENIGA-Terra Project) will be used to characterize and delineate this segmentation using morphotectonic analysis of drainage basins and correlation of fluvial terrace/geomorphic surface elevations. The peninsula has six primary watersheds which drain into the Pacific Ocean: the Río Andamojo, Río Tabaco, Río Nosara, Río Ora, Río Bongo, and Río Ario, which range in area from 200 km² to 350 km². The trunk rivers follow major lineaments that define morphotectonic segment boundaries, and in turn their drainage basins are bisected by them. Morphometric analysis of the lower (1st and 2nd) order drainage basins will provide insight into segmented tectonic uplift and deformation by comparing values of drainage basin asymmetry, stream length gradient, and hypsometry with respect to margin segmentation and subducting seafloor domain. A general geomorphic analysis will be conducted alongside the morphometric analysis to map previously recognized (Morrish et al. 2010) but poorly characterized late Quaternary fluvial terraces. Stream capture and drainage divide migration are common processes throughout the peninsula in response to the ongoing deformation. Identification and characterization of basin piracy throughout the peninsula will provide insight into the history of landscape evolution in response to
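The three morphometric indices named (basin asymmetry, stream length gradient, hypsometry) have standard textbook definitions; a sketch using those general forms, not values from this study:

```python
def asymmetry_factor(area_right, area_total):
    """Drainage basin asymmetry AF = 100 * (A_right / A_total);
    values far from 50 suggest tectonic tilting of the basin."""
    return 100.0 * area_right / area_total

def stream_length_gradient(elev_drop, reach_length, dist_from_divide):
    """Hack's stream length-gradient index SL = (dH/dL) * L, where L is
    distance from the drainage divide to the reach midpoint."""
    return (elev_drop / reach_length) * dist_from_divide

def hypsometric_integral(elevations):
    """HI = (mean - min) / (max - min); high values indicate youthful,
    actively uplifting terrain."""
    lo, hi = min(elevations), max(elevations)
    return (sum(elevations) / len(elevations) - lo) / (hi - lo)
```

Comparing these indices across basins that straddle the proposed segment boundaries is what lets the contrasting seafloor domains be read out of the topography.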

  3. Segmentation of the Knee for Analysis of Osteoarthritis

    NASA Astrophysics Data System (ADS)

    Zerfass, Peter; Museyko, Oleg; Bousson, Valérie; Laredo, Jean-Denis; Kalender, Willi A.; Engelke, Klaus

    Osteoarthritis changes the load distribution within joints and also changes bone density and structure. Within the typical timelines of clinical studies these changes can be very small. Therefore, precise definition of evaluation regions that are highly robust and show little to no inter- and intra-operator variance is essential for high-quality quantitative analysis. To achieve this goal we have developed a system for the definition of such regions with minimal user input.

  4. The number of circulating CD14+ cells is related to infarct size and postinfarct volumes in ST segment elevation myocardial infarction but not non-ST segment elevation myocardial infarction

    PubMed Central

    Montange, Damien; Davani, Siamak; Deschaseaux, Frédéric; Séronde, Marie France; Chopard, Romain; Schiele, François; Jehl, Jérome; Bassand, Jean Pierre; Kantelip, Jean-Pierre; Meneveau, Nicolas

    2012-01-01

    OBJECTIVE: To determine the relationship between the number of CD14+ cells, myocardial infarct (MI) size and left ventricular (LV) volumes in ST segment elevation MI (STEMI) and non-ST segment elevation MI (NSTEMI) patients. METHODS: A total of 62 patients with STEMI (n=34) or NSTEMI (n=28) were enrolled. The number of CD14+ cells was assessed at admission. Infarct size, left ventricular ejection fraction (LVEF) and LV volumes were measured using magnetic resonance imaging five days after MI and six months after MI. RESULTS: In STEMI patients, the number of CD14+ cells was positively and significantly correlated with infarct size at day 5 (r=0.40; P=0.016) and after six months (r=0.34; P=0.047), negatively correlated with LVEF at day 5 (r=−0.50; P=0.002) and after six months (r=−0.46; P=0.005) and positively correlated with end-diastolic (r=0.38; P=0.02) and end-systolic (r=0.49; P=0.002) volumes after six months. In NSTEMI patients, no significant correlation was found between the number of CD14+ cells and infarct size, LVEF or LV volumes at day 5 or after six months. CONCLUSIONS: The number of CD14+ cells at admission was associated with infarct size and LV remodelling in STEMI patients with large infarct size, whereas in NSTEMI patients, no relationship was observed between numbers of CD14+ cells and LV remodelling. PMID:23620701

  5. 3D-segmentation of the 18F-choline PET signal for target volume definition in radiation therapy of the prostate.

    PubMed

    Ciernik, I Frank; Brown, Derek W; Schmid, Daniel; Hany, Thomas; Egli, Peter; Davis, J Bernard

    2007-02-01

    Volumetric assessment of PET signals is becoming increasingly relevant for radiotherapy (RT) planning. Here, we investigate the utility of 18F-choline PET signals to serve as a structure for semi-automatic segmentation in forward treatment planning of prostate cancer. 18F-choline PET and CT scans of ten patients with histologically proven prostate cancer without extracapsular growth were acquired using a combined PET/CT scanner. Target volumes were manually delineated on CT images using standard software. Volumes were also obtained from 18F-choline PET images using an asymmetrical segmentation algorithm. Planning target volumes (PTVs) were derived from CT- and 18F-choline PET-based clinical target volumes (CTVs) by automatic expansion, and comparative planning was performed. As a read-out for dose given to non-target structures, dose to the rectal wall was assessed. PTVs derived from CT and 18F-choline PET yielded comparable results. Optimal matching of CT- and 18F-choline PET-derived volumes in the lateral and cranial-caudal directions was obtained using a background-subtracted signal threshold of 23.0+/-2.6%. In the antero-posterior direction, where adaptation compensating for rectal signal overflow was required, optimal matching was achieved with a threshold of 49.5+/-4.6%. 3D-conformal planning with CT or 18F-choline PET resulted in comparable doses to the rectal wall. Choline PET signals of the prostate provide adequate spatial information amenable to standardized asymmetrical region-growing algorithms for PET-based target volume definition for external beam RT.
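The background-subtracted percentage threshold described above (e.g. keeping voxels above 23% of the background-subtracted peak) can be sketched as follows; this omits the region-growing and asymmetry handling, so it is a simplified illustration, not the study's algorithm:

```python
def threshold_segment(values, background, fraction):
    """Keep voxels whose intensity reaches `background` plus `fraction`
    of the background-subtracted peak, as in fixed-percentage
    thresholding of PET uptake values."""
    peak = max(values) - background
    cutoff = background + fraction * peak
    return [v >= cutoff for v in values]
```

Using a lower fraction (23%) laterally and a higher one (~50%) toward the rectum is how the study compensates for tracer signal spilling over from adjacent structures.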

  6. Segmentation and Image Analysis of Abnormal Lungs at CT: Current Approaches, Challenges, and Future Trends

    PubMed Central

    Mansoor, Awais; Foster, Brent; Xu, Ziyue; Papadakis, Georgios Z.; Folio, Les R.; Udupa, Jayaram K.; Mollura, Daniel J.

    2015-01-01

    The computer-based process of identifying the boundaries of the lung from surrounding thoracic tissue on computed tomographic (CT) images, which is called segmentation, is a vital first step in radiologic pulmonary image analysis. Many algorithms and software platforms provide image segmentation routines for quantification of lung abnormalities; however, nearly all of the current image segmentation approaches apply well only if the lungs exhibit minimal or no pathologic conditions. When moderate to high amounts of disease or abnormalities with a challenging shape or appearance exist in the lungs, computer-aided detection systems are highly likely to fail to depict those abnormal regions because of inaccurate segmentation methods. In particular, abnormalities such as pleural effusions, consolidations, and masses often cause inaccurate lung segmentation, which greatly limits the use of image processing methods in clinical and research contexts. In this review, a critical summary of the current methods for lung segmentation on CT images is provided, with special emphasis on the accuracy and performance of the methods in cases with abnormalities and cases with exemplary pathologic findings. The currently available segmentation methods can be divided into five major classes: (a) thresholding-based, (b) region-based, (c) shape-based, (d) neighboring anatomy–guided, and (e) machine learning–based methods. The feasibility of each class and its shortcomings are explained and illustrated with the most common lung abnormalities observed on CT images. Finally, practical applications and evolving technologies that combine the presented approaches are detailed for the practicing radiologist. ©RSNA, 2015 PMID:26172351

  7. Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity

    NASA Astrophysics Data System (ADS)

    Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

    2012-04-01

    In recent years, music video data has been increasing at an astonishing speed. Shot segmentation and keyframe extraction constitute a fundamental unit in organizing, indexing, and retrieving video content. In this paper a unified framework is proposed to detect shot boundaries and extract the keyframe of each shot. A music video is first segmented into shots using an illumination-invariant chromaticity histogram in the independent component (IC) analysis feature space. We then present a new metric, image complexity, computed from the ICs, to extract the keyframe of a shot. Experimental results show the framework is effective and performs well.
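A minimal sketch of histogram-difference shot-boundary detection; the 8-bin grayscale histogram and L1 distance used here are deliberate simplifications of the paper's illumination-invariant chromaticity histogram in IC feature space, and the cut threshold is an arbitrary placeholder:

```python
def histogram(frame, bins=8, max_val=256):
    """Normalized intensity histogram of a flat list of pixel values."""
    h = [0] * bins
    for v in frame:
        h[v * bins // max_val] += 1
    n = len(frame)
    return [c / n for c in h]

def shot_boundaries(frames, cut_threshold=0.5):
    """Declare a shot cut wherever the L1 distance between successive
    frame histograms exceeds cut_threshold."""
    cuts = []
    prev = histogram(frames[0])
    for i in range(1, len(frames)):
        cur = histogram(frames[i])
        if sum(abs(a - b) for a, b in zip(prev, cur)) > cut_threshold:
            cuts.append(i)
        prev = cur
    return cuts
```

Histogram comparison is robust to motion within a shot, which is why some form of it underlies most shot-segmentation pipelines; the IC-space chromaticity variant adds robustness to the rapid lighting changes typical of music videos.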

  8. Segmentation, statistical analysis, and modelling of the wall system in ceramic foams

    SciTech Connect

    Kampf, Jürgen; Schlachter, Anna-Lena; Redenbach, Claudia; Liebscher, André

    2015-01-15

    Closed walls in otherwise open foam structures may have a great impact on macroscopic properties of the materials. In this paper, we present two algorithms for the segmentation of such closed walls from micro-computed tomography images of the foam structure. The techniques are compared on simulated data and applied to tomographic images of ceramic filters. This allows for a detailed statistical analysis of the normal directions and sizes of the walls. Finally, we explain how the information derived from the segmented wall system can be included in a stochastic microstructure model for the foam.

  9. Moving cast shadow resistant for foreground segmentation based on shadow properties analysis

    NASA Astrophysics Data System (ADS)

    Zhou, Hao; Gao, Yun; Yuan, Guowu; Ji, Rongbin

    2015-12-01

    Moving object detection is a fundamental task in machine vision applications. However, detection of moving cast shadows is one of the major concerns for accurate video segmentation, since detected moving object areas often contain shadow points, giving rise to errors in measurement, localization, segmentation, classification, and tracking. A novel shadow elimination algorithm is proposed in this paper. A set of suspected moving object areas is detected by the adaptive Gaussian approach. A model is established based on analysis of shadow optical properties, and shadow regions are discriminated from the set of moving pixels by using the properties of brightness, chromaticity, and texture in sequence.
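The brightness and chromaticity tests for shadow pixels can be sketched as below: a shadow darkens the background roughly uniformly across color channels. The thresholds `alpha`, `beta`, and `tau_c` are illustrative assumptions, and the paper's texture cue is omitted:

```python
def is_shadow(bg_rgb, fg_rgb, alpha=0.4, beta=0.93, tau_c=0.03):
    """A foreground pixel is labelled shadow when it is a darkened
    version of the background (alpha <= brightness ratio <= beta)
    with nearly unchanged chromaticity (r = R/(R+G+B), etc.)."""
    bg_sum, fg_sum = sum(bg_rgb), sum(fg_rgb)
    if bg_sum == 0 or fg_sum == 0:
        return False
    ratio = fg_sum / bg_sum
    if not (alpha <= ratio <= beta):
        return False
    return all(abs(f / fg_sum - b / bg_sum) < tau_c
               for f, b in zip(fg_rgb, bg_rgb))
```

The lower bound `alpha` prevents very dark objects from being misread as shadow, while the upper bound `beta` keeps genuinely unchanged background pixels out of the shadow class.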

  10. CADDIS Volume 4. Data Analysis: Basic Analyses

    EPA Pesticide Factsheets

    Use of statistical tests to determine if an observation is outside the normal range of expected values. Details of CART, regression analysis, use of quantile regression analysis, CART in causal analysis, simplifying or pruning resulting trees.

  11. Control-Volume Analysis Of Thrust-Augmenting Ejectors

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.

    1990-01-01

    New method of analysis of transient flow in thrust-augmenting ejector based on control-volume formulation of governing equations. Considered as potential elements of propulsion subsystems of short-takeoff/vertical-landing airplanes.

  12. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    SciTech Connect

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi

    2010-05-15

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as "gold standard." Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold-standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics showed excellent agreement with the gold-standard manual volumetrics (intraclass correlation coefficient, 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less completion time
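Once the boundaries are refined, the volume computation itself reduces to voxel counting; a sketch of that final step and of the percent-volume-error metric reported above (the spacing values are hypothetical):

```python
def volume_cc(num_voxels, spacing_mm):
    """Volume of a binary segmentation: voxel count times the physical
    voxel volume, converted from mm^3 to cc."""
    dx, dy, dz = spacing_mm
    return num_voxels * dx * dy * dz / 1000.0

def percent_volume_error(auto_cc, manual_cc):
    """Percent volume error of the automatic result against the
    manually traced gold standard."""
    return 100.0 * abs(auto_cc - manual_cc) / manual_cc
```

For example, one million 1 mm isotropic voxels correspond to 1000 cc, and an automatic volume of 1080 cc against a manual 1000 cc gives an 8% error.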

  13. Phantom-based ground-truth generation for cerebral vessel segmentation and pulsatile deformation analysis

    NASA Astrophysics Data System (ADS)

    Schetelig, Daniel; Säring, Dennis; Illies, Till; Sedlacik, Jan; Kording, Fabian; Werner, René

    2016-03-01

    Hemodynamic and mechanical factors of the vascular system are assumed to play a major role in understanding, e.g., initiation, growth and rupture of cerebral aneurysms. Among those factors, cardiac cycle-related pulsatile motion and deformation of cerebral vessels currently attract much interest. However, imaging of those effects requires high spatial and temporal resolution and remains challenging, and so does the analysis of the acquired images: flow velocity changes and contrast media inflow cause vessel intensity variations in related temporally resolved computed tomography and magnetic resonance angiography data over the cardiac cycle and impede application of intensity threshold-based segmentation and subsequent motion analysis. In this work, a flow phantom for generation of ground-truth images for evaluation of appropriate segmentation and motion analysis algorithms is developed. The acquired ground-truth data is used to illustrate the interplay between intensity fluctuations and (erroneous) motion quantification by standard threshold-based segmentation, and an adaptive threshold-based segmentation approach is proposed that alleviates respective issues. The results of the phantom study are further demonstrated to be transferable to patient data.

  14. Segmental analysis of indocyanine green pharmacokinetics for the reliable diagnosis of functional vascular insufficiency

    NASA Astrophysics Data System (ADS)

    Kang, Yujung; Lee, Jungsul; An, Yuri; Jeon, Jongwook; Choi, Chulhee

    2011-03-01

    Accurate and reliable diagnosis of functional insufficiency of the peripheral vasculature is essential, since Raynaud phenomenon (RP), the most common form of peripheral vascular insufficiency, is commonly associated with systemic vascular disorders. We have previously demonstrated that dynamic imaging of the near-infrared fluorophore indocyanine green (ICG) can be a noninvasive and sensitive tool to measure tissue perfusion. In the present study, we demonstrate that combined analysis of multiple parameters, especially onset time and modified Tmax (the time from onset of ICG fluorescence to Tmax), can be used as a reliable diagnostic tool for RP. To validate the method, we performed conventional thermographic analysis combined with cold challenge and rewarming along with ICG dynamic imaging and segmental analysis. A case-control analysis demonstrated that the segmental pattern of ICG dynamics in both hands differed significantly between normal and RP cases, suggesting the possibility of clinical application of this novel method for the convenient and reliable diagnosis of RP.
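The two parameters highlighted, onset time and modified Tmax, can be read off a per-segment time-intensity curve as sketched below; the 10%-of-peak onset criterion is an assumption for illustration, not the study's definition:

```python
def icg_parameters(times, intensities, onset_fraction=0.1):
    """Extract onset time (first sample exceeding onset_fraction of the
    peak), Tmax (time of peak fluorescence) and modified Tmax
    (Tmax minus onset) from an ICG time-intensity curve."""
    peak = max(intensities)
    t_max = times[intensities.index(peak)]
    onset = next(t for t, v in zip(times, intensities)
                 if v >= onset_fraction * peak)
    return {"onset": onset, "tmax": t_max, "modified_tmax": t_max - onset}
```

Computing these per finger segment and comparing the spatial pattern across both hands is what turns a single perfusion curve into the segmental diagnostic described above.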

  15. New Software for Market Segmentation Analysis: A Chi-Square Interaction Detector. AIR 1983 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Lay, Robert S.

    The advantages and disadvantages of new software for market segmentation analysis are discussed, and the application of this new chi-square based procedure (CHAID) is illustrated. A comparison is presented of an earlier binary segmentation technique (THAID) and a multiple discriminant analysis. It is suggested that CHAID is superior to earlier…

  16. 3D Segmentation with an application of level set-method using MRI volumes for image guided surgery.

    PubMed

    Bosnjak, A; Montilla, G; Villegas, R; Jara, I

    2007-01-01

    This paper proposes an innovation for image-guided surgery based on a comparative study of three different segmentation methods. These methods are faster than manual segmentation of images, with the advantage that the same patient serves as the anatomical reference, which is more precise than a generic atlas. The new methodology for 3D information extraction is based on a processing chain structured in the following modules: 1) 3D filtering: the purpose is to preserve the contours of the structures and to smooth the homogeneous areas; several filters were tested, and finally an anisotropic diffusion filter was used. 2) 3D segmentation: this module compares three different methods: a region growing algorithm, a hand-assisted cubic spline, and a level set method. It then proposes a level set method based on front propagation that allows reconstruction of the internal walls of the anatomical structures of the brain. 3) 3D visualization: the new contribution of this work consists of the visualization of the segmented model and its use in pre-surgical planning.

  17. Market segmentation for multiple option healthcare delivery systems--an application of cluster analysis.

    PubMed

    Jarboe, G R; Gates, R H; McDaniel, C D

    1990-01-01

    Healthcare providers of multiple option plans may be confronted with special market segmentation problems. This study demonstrates how cluster analysis may be used for discovering distinct patterns of preference for multiple option plans. The availability of metric, as opposed to categorical or ordinal, data provides the ability to use sophisticated analysis techniques which may be superior to frequency distributions and cross-tabulations in revealing preference patterns.

  18. Comparison between Brain Atrophy and Subdural Volume to Predict Chronic Subdural Hematoma: Volumetric CT Imaging Analysis

    PubMed Central

    Ju, Min-Wook; Kwon, Hyon-Jo; Choi, Seung-Won; Koh, Hyeon-Song; Youm, Jin-Young; Song, Shi-Hun

    2015-01-01

    Objective Brain atrophy and subdural hygroma are well-known factors that enlarge the subdural space and thereby induce formation of chronic subdural hematoma (CSDH). We therefore sought a subdural volume measure that could be used to predict future CSDH after head trauma, using computed tomography (CT) volumetric analysis. Methods A single-institution case-control study was conducted involving 1,186 patients who visited our hospital after head trauma from January 1, 2010 to December 31, 2014. Fifty-one patients with delayed CSDH were identified, and 50 age- and sex-matched patients served as controls. Intracranial volume (ICV), the brain parenchyma, and the subdural space were segmented using CT image-based software. To adjust for variations in head size, volumes were assessed as a percentage of ICV [brain volume index (BVI), subdural volume index (SVI)]. The maximum depth of the subdural space on both sides was used to estimate the SVI. Results Before adjusting for cranium size, brain volume tended to be smaller, and subdural space volume was significantly larger, in the CSDH group (p=0.138, p=0.021, respectively). The BVI and SVI were significantly different (p=0.003, p=0.001, respectively). SVI [area under the curve (AUC), 77.3%; p=0.008] was more reliable for predicting CSDH than BVI (AUC, 68.1%; p=0.001). Bilateral subdural depth (the sum of the subdural depth on both sides) increased linearly with SVI (p<0.0001). Conclusion Subdural space volume was significantly larger in the CSDH group. SVI was the more reliable predictor of CSDH, and bilateral subdural depth was useful for estimating SVI. PMID:27169071
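The head-size adjustment is a simple ratio; a sketch of the index computation (the volumes in the usage line are hypothetical, not data from the study):

```python
def volume_index(structure_cc, icv_cc):
    """Volume expressed as a percentage of intracranial volume (ICV),
    normalizing for head size: BVI for brain parenchyma, SVI for the
    subdural space."""
    return 100.0 * structure_cc / icv_cc

# Hypothetical example: 1200 cc brain and 90 cc subdural space in a 1500 cc ICV
bvi = volume_index(1200.0, 1500.0)
svi = volume_index(90.0, 1500.0)
```

Normalizing by ICV is what makes the indices comparable between patients with differently sized crania, which raw volumes are not.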

  19. Mimicking human expert interpretation of remotely sensed raster imagery by using a novel segmentation analysis within ArcGIS

    NASA Astrophysics Data System (ADS)

    Le Bas, Tim; Scarth, Anthony; Bunting, Peter

    2015-04-01

    Traditional computer-based methods for the interpretation of remotely sensed imagery use each pixel individually, or the average of a small window of pixels, to calculate a class or thematic value, which provides an interpretation. However, when a human expert interprets imagery, the eye is excellent at finding coherent and homogeneous areas and edge features. It may therefore be advantageous for computer analysis to mimic human interpretation. A new toolbox for ArcGIS 10.x will be presented that segments the data layers into a set of polygons. Each polygon is defined by a K-means clustering and region growing algorithm, thus finding areas, their edges and any lineations in the imagery. Attached to each polygon are the characteristics of the imagery, such as the mean and standard deviation of the pixel values within the polygon. The segmentation of imagery into a jigsaw of polygons also has the advantage that the human interpreter does not need to spend hours digitising the boundaries. The segmentation process has been taken from the RSGIS library of analysis and classification routines (Bunting et al., 2014). These routines are freeware and have been modified to be available in the ArcToolbox under the Windows (v7) operating system. Input to the segmentation process is a multi-layered raster image, for example a Landsat image, or a set of raster datasets made up from derivatives of topography. The size and number of polygons are set by the user and depend on the imagery used. Examples will be presented of data from the marine environment utilising bathymetric depth, slope, rugosity and backscatter from a multibeam system. Meaningful classification of the polygons using their numerical characteristics is the next goal. Object based image analysis (OBIA) should help this workflow. Fully calibrated imagery systems will allow numerical classification to be translated into more readily understandable terms. Peter Bunting, Daniel Clewley, Richard M. Lucas and Sam
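    The abstract above pairs K-means clustering with region growing to build polygons. A minimal, illustrative sketch of the K-means labelling stage (not the RSGISLib implementation; the deterministic quantile initialisation is an assumption made here for reproducibility) might look like:

```python
import numpy as np

def kmeans_labels(pixels, k=2, iters=20):
    """Tiny k-means over pixel vectors (n_pixels x n_bands), with
    deterministic quantile initialisation of the cluster centres."""
    centers = np.quantile(pixels, np.linspace(0, 1, k), axis=0)
    for _ in range(iters):
        # assign each pixel to its nearest cluster centre
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels

# toy 2-band "image": a dark left half and a bright right half
img = np.zeros((8, 8, 2))
img[:, 4:, :] = 10.0
flat = img.reshape(-1, 2)
label_img = kmeans_labels(flat).reshape(8, 8)

# per-segment statistics, like those attached to each polygon
for j in (0, 1):
    region = flat[label_img.ravel() == j]
    print(j, region.mean(), region.std())
```

    A region-growing pass would then merge connected pixels sharing a label into polygons and attach the per-polygon statistics shown in the loop.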

  20. CADDIS Volume 4. Data Analysis: Getting Started

    EPA Pesticide Factsheets

    Assembling data for an ecological causal analysis, matching biological and environmental samples in time and space, organizing data along conceptual causal pathways, data quality and quantity requirements, Data Analysis references.

  1. Segment and fit thresholding: a new method for image analysis applied to microarray and immunofluorescence data.

    PubMed

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B

    2015-10-06

    Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis.
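    As a loose sketch of the segment-statistics idea (not the published SFT algorithm, which fits trends between the segment statistics rather than using the simple median split assumed here):

```python
import numpy as np

def segment_stats_threshold(img, tile=4, n_sigma=3.0):
    """Tile the image, take low-mean tiles as background, and derive a
    global signal threshold from the background statistics."""
    h, w = img.shape
    stats = []
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            seg = img[r:r + tile, c:c + tile]
            stats.append((seg.mean(), seg.std()))
    stats = np.array(stats)
    # crude background selection: tiles whose mean is at or below the median
    bg = stats[stats[:, 0] <= np.median(stats[:, 0])]
    return bg[:, 0].mean() + n_sigma * bg[:, 1].mean()

# flat background with one bright 4x4 spot
img = np.full((16, 16), 10.0)
img[4:8, 4:8] = 100.0
t = segment_stats_threshold(img)
mask = img > t
print(t, int(mask.sum()))
```

    The appeal of the segment-wise approach is that the threshold adapts to the background actually present in each image, rather than being fixed in advance.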

  2. A new image segmentation method based on multifractal detrended moving average analysis

    NASA Astrophysics Data System (ADS)

    Shi, Wen; Zou, Rui-biao; Wang, Fang; Su, Le

    2015-08-01

    In order to segment and delineate regions of interest in an image, we propose a novel algorithm based on multifractal detrended moving average analysis (MF-DMA). In this method, the generalized Hurst exponent h(q) is first calculated for every pixel and taken as the local feature of a surface. A multifractal detrended moving average spectrum (MF-DMS) D(h(q)) is then defined following the idea of the box-counting dimension method; we therefore call the new image segmentation method the MF-DMS-based algorithm. The performance of the MF-DMS-based method is tested in two segmentation experiments on rapeseed leaf images showing potassium deficiency and magnesium deficiency, under three cases, namely backward (θ = 0), centered (θ = 0.5) and forward (θ = 1), with different q values. Comparison experiments are conducted between the MF-DMS method and two other multifractal segmentation methods, namely the popular MFS-based and the latest MF-DFS-based methods. The results show that our MF-DMS-based method is superior to the latter two. The best segmentation result for the rapeseed leaf images of potassium and magnesium deficiency comes from the same parameter combination of θ = 0.5 and D(h(- 10)) when using the MF-DMS-based method. An interesting finding is that D(h(- 10)) outperforms other parameters for both the MF-DMS-based method with the centered case and the MF-DFS-based algorithms. By comparing the multifractal nature of nutrient-deficient and non-deficient areas determined by the segmentation results, an important finding is that the gray-value fluctuation in nutrient-deficient areas is much more severe than in non-deficient areas.
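    For intuition, the backward (θ = 0) detrended moving average fluctuation underlying h(q) can be sketched for a 1D series; this illustrates only the q = 2 case on a synthetic signal, whereas the paper computes the full h(q) spectrum per pixel over image surfaces:

```python
import numpy as np

def dma_hurst(x, scales=(4, 8, 16, 32, 64)):
    """Estimate the Hurst exponent of a 1D series with the backward
    (theta = 0) detrended moving average: F(n) ~ n^H."""
    y = np.cumsum(x - x.mean())  # profile of the series
    fs = []
    for n in scales:
        # backward moving average of the profile over a window of n points
        kernel = np.ones(n) / n
        ma = np.convolve(y, kernel, mode="valid")
        resid = y[n - 1:] - ma
        fs.append(np.sqrt(np.mean(resid ** 2)))
    # slope of log F(n) vs. log n gives the Hurst exponent
    h, _ = np.polyfit(np.log(scales), np.log(fs), 1)
    return h

rng = np.random.default_rng(1)
h = dma_hurst(rng.standard_normal(10000))
print(round(h, 2))  # white noise should give H near 0.5
```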

  3. Chromosome-specific segmentation revealed by structural analysis of individually isolated chromosomes.

    PubMed

    Kitada, Kunio; Taima, Akira; Ogasawara, Kiyomoto; Metsugi, Shouichi; Aikawa, Satoko

    2011-04-01

    Analysis of structural rearrangements at the individual chromosomal level is still technologically challenging. Here we optimized a chromosome isolation method using fluorescent marker-assisted laser-capture and laser-beam microdissection and applied it to structural analysis of two aberrant chromosomes found in a lung cancer cell line. A high-density array-comparative genomic hybridization (array-CGH) analysis of DNA samples prepared from each of the chromosomes revealed that these two chromosomes contained 296 and 263 segments, respectively, ranging from 1.5 kb to 784.3 kb in size, derived from different portions of chromosome 8. Among these segments, 242 were common in both aberrant chromosomes, but 75 were found to be chromosome-specific. Sequences of 263 junction sites connecting the ends of segments were determined using a PCR/Sanger-sequencing procedure. Overlapping microhomologies were found at 169 junction sites. Junction partners came from various portions of chromosome 8 and no biased pattern in the positional distribution of junction partners was detected. These structural characteristics suggested the occurrence of random fragmentation of the entire chromosome 8 followed by random rejoining of these fragments. Based on that, we proposed a model to explain how these aberrant chromosomes are formed. Through these structural analyses, it was demonstrated that the optimized chromosome isolation method described here can provide high-quality chromosomal DNA for high resolution array-CGH analysis and probably for massively parallel sequencing analysis.

  4. Segmental bronchi collapsibility: computed tomography-based quantification in patients with chronic obstructive pulmonary disease and correlation with emphysema phenotype, corresponding lung volume changes and clinical parameters

    PubMed Central

    Thaiss, Wolfgang Maximilian; Ditt, Hendrik; Hetzel, Jürgen; Schülen, Eva; Nikolaou, Konstantin; Horger, Marius

    2016-01-01

    Background Global pulmonary function tests lack the region-specific differentiation that might influence therapy in severe chronic obstructive pulmonary disease (COPD) patients. Therefore, the aim of this work was to assess the degree of expiratory 3rd generation bronchial lumen collapsibility in patients with severe COPD using chest computed tomography (CT), to evaluate emphysema phenotype and lobar volumes, and to correlate results with pulmonary function tests. Methods Thin-slice chest CTs acquired at end-inspiration and end-expiration in 42 COPD GOLD IV patients (19 females, median age: 65.9 y) from November 2011 to July 2014 were re-evaluated. The cross-sectional area of all segmental bronchi was measured 5 mm below the bronchial origin in both examinations. Lung lobes were semi-automatically segmented, volumes were calculated at the end-inspiratory and end-expiratory phases, and emphysema phenotypes were defined visually. Results of CT densitometry were compared with lung function tests including forced expiratory volume in 1 s (FEV1), total lung capacity (TLC), vital capacity (VC), residual volume (RV), diffusion capacity parameters and the maximal expiratory flow rates (MEFs). Results Mean expiratory bronchial collapse was 31%, stronger in lobes with a homogeneous (38.5%) vs. heterogeneous emphysema phenotype (27.8%, P=0.014). The mean lobar expiratory volume reduction was comparable in both emphysema phenotypes (volume reduction 18.6%±8.3% in the homogeneous vs. 17.6%±16.5% in the heterogeneous phenotype). The degree of bronchial lumen collapsibility did not correlate with expiratory volume reduction. MEF25 correlated weakly with 3rd generation airway collapsibility (r=0.339, P=0.03). All patients showed a concentric expiratory reduction of bronchial cross-sectional area. Conclusions The collapsibility of 3rd generation bronchi in COPD grade IV patients is significantly lower than that of the trachea and the main bronchi. Collapsibility did not correlate with the reduction in

  5. Information architecture. Volume 2, Part 1: Baseline analysis summary

    SciTech Connect

    1996-12-01

    The Department of Energy (DOE) Information Architecture, Volume 2, Baseline Analysis, is a collaborative and logical next-step effort in the processes required to produce a Departmentwide information architecture. The baseline analysis serves a diverse audience of program management and technical personnel and provides an organized way to examine the Department's existing or de facto information architecture. A companion document to Volume 1, The Foundations, it furnishes the rationale for establishing a Departmentwide information architecture. This volume, consisting of the Baseline Analysis Summary (part 1), Baseline Analysis (part 2), and Reference Data (part 3), is of interest to readers who wish to understand how the Department's current information architecture technologies are employed. The analysis identifies how and where current technologies support business areas, programs, sites, and corporate systems.

  6. Multilevel segmentation and integrated bayesian model classification with an application to brain tumor segmentation.

    PubMed

    Corso, Jason J; Sharon, Eitan; Yuille, Alan

    2006-01-01

    We present a new method for automatic segmentation of heterogeneous image data, which is very common in medical image analysis. The main contribution of the paper is a mathematical formulation for incorporating soft model assignments into the calculation of affinities, which are traditionally model free. We integrate the resulting model-aware affinities into the multilevel segmentation by weighted aggregation algorithm. We apply the technique to the task of detecting and segmenting brain tumor and edema in multimodal MR volumes. Our results indicate the benefit of incorporating model-aware affinities into the segmentation process for the difficult case of brain tumor.

  7. Screening Analysis : Volume 1, Description and Conclusions.

    SciTech Connect

    Bonneville Power Administration; Corps of Engineers; Bureau of Reclamation

    1992-08-01

    The SOR consists of three analytical phases leading to a Draft EIS. The first phase, Pilot Analysis, was performed for the purpose of testing the decision analysis methodology being used in the SOR. The Pilot Analysis is described later in this chapter. The second phase, Screening Analysis, examines all possible operating alternatives using a simplified analytical approach. It is described in detail in this and the next chapter. This document also presents the results of screening. The final phase, Full-Scale Analysis, will be documented in the Draft EIS and is intended to evaluate comprehensively the few best alternatives arising from the screening analysis. The purpose of screening is to analyze a wide variety of differing ways of operating the Columbia River system to test the reaction of the system to change. The many alternatives considered reflect the range of needs and requirements of the various river users and interests in the Columbia River Basin. While some of the alternatives might be viewed as extreme, the information gained from the analysis is useful in highlighting issues and conflicts in meeting operating objectives. Screening is also intended to develop a broad technical basis for evaluation, including regional experts, and to begin developing an evaluation capability for each river use that will support full-scale analysis. Finally, screening provides a logical method for examining all possible options and reaching a decision on a few alternatives worthy of full-scale analysis. An organizational structure was developed and staffed to manage and execute the SOR, specifically during the screening phase and the upcoming full-scale analysis phase. The organization involves ten technical work groups, each representing a particular river use. Several other groups exist to oversee or support the efforts of the work groups.

  8. Breast volume measurement of 598 women using biostereometric analysis.

    PubMed

    Loughry, C W; Sheffer, D B; Price, T E; Einsporn, R L; Bartfai, R G; Morek, W M; Meli, N M

    1989-05-01

    A study of the volumes of the right and left breasts of 598 subjects was undertaken using biostereometric analysis. This measurement uses close-range stereophotogrammetry to characterize the shape of the breast, and is noncontact, noninvasive, accurate, and rapid with respect to the subject involvement time. Using chi-square tests, volumes and volumetric differences between breast pairs were compared with handedness, perception of breast size by each subject, age, and menstrual status. No significant relationship was found between the handedness, age, or menstrual status of the subject and the breast volume. Several groups of subjects were accurate in their perception of breast size difference. Analysis did confirm the generally accepted clinical impression of left-breast volume dominance. These results are shown to be consistent with those of a previous study using 248 women.

  9. Breast volume measurement of 248 women using biostereometric analysis.

    PubMed

    Loughry, C W; Sheffer, D B; Price, T E; Lackney, M J; Bartfai, R G; Morek, W M

    1987-10-01

    A study of volumes of the right and left breasts of 248 subjects was undertaken using biostereometric analysis. This measurement technique uses close-range stereophotogrammetry to characterize the shape of the breast and is noncontact, noninvasive, accurate, and rapid with respect to the subject involvement time. Volumes and volumetric differences between breast pairs were compared, using chi-square tests, with handedness, perception of breast size by each subject, age, and menstrual status. No significant relationship was found between the handedness of the subject and the larger breast volume. Several groups of subjects based on age and menstrual status were accurate in their perception of breast size difference. Analysis did not confirm the generally accepted clinical impression of left breast volume dominance. Although a size difference in breast pairs was documented, neither breast predominated.

  10. Laser power conversion system analysis, volume 1

    NASA Technical Reports Server (NTRS)

    Jones, W. S.; Morgan, L. L.; Forsyth, J. B.; Skratt, J. P.

    1979-01-01

    The orbit-to-orbit laser energy conversion system analysis established a mission model of satellites with various orbital parameters and average electrical power requirements ranging from 1 to 300 kW. The system analysis evaluated various conversion techniques, power system deployment parameters, power system electrical supplies, and other critical subsystems relative to various combinations of the mission model. The analysis showed that the laser power system would not be competitive with current satellite power systems from weight, cost and development risk standpoints.

  11. Power Loss Analysis and Comparison of Segmented and Unsegmented Energy Coupling Coils for Wireless Energy Transfer

    PubMed Central

    Tang, Sai Chun; McDannold, Nathan J.

    2015-01-01

    This paper investigated the power losses of unsegmented and segmented energy coupling coils for wireless energy transfer. Four 30-cm energy coupling coils with different winding separations, conductor cross-sectional areas, and number of turns were developed. The four coils were tested in both unsegmented and segmented configurations. The winding conduction and intrawinding dielectric losses of the coils were evaluated individually based on a well-established lumped circuit model. We found that the intrawinding dielectric loss can be as much as seven times higher than the winding conduction loss at 6.78 MHz when the unsegmented coil is tightly wound. The dielectric loss of an unsegmented coil can be reduced by increasing the winding separation or reducing the number of turns, but the power transfer capability is reduced because of the reduced magnetomotive force. Coil segmentation using resonant capacitors has recently been proposed to significantly reduce the operating voltage of a coil to a safe level in wireless energy transfer for medical implants. Here, we found that it can naturally eliminate the dielectric loss. The coil segmentation method and the power loss analysis used in this paper could be applied to the transmitting, receiving, and resonant coils in two- and four-coil energy transfer systems. PMID:26640745

  12. Power Loss Analysis and Comparison of Segmented and Unsegmented Energy Coupling Coils for Wireless Energy Transfer.

    PubMed

    Tang, Sai Chun; McDannold, Nathan J

    2015-03-01

    This paper investigated the power losses of unsegmented and segmented energy coupling coils for wireless energy transfer. Four 30-cm energy coupling coils with different winding separations, conductor cross-sectional areas, and number of turns were developed. The four coils were tested in both unsegmented and segmented configurations. The winding conduction and intrawinding dielectric losses of the coils were evaluated individually based on a well-established lumped circuit model. We found that the intrawinding dielectric loss can be as much as seven times higher than the winding conduction loss at 6.78 MHz when the unsegmented coil is tightly wound. The dielectric loss of an unsegmented coil can be reduced by increasing the winding separation or reducing the number of turns, but the power transfer capability is reduced because of the reduced magnetomotive force. Coil segmentation using resonant capacitors has recently been proposed to significantly reduce the operating voltage of a coil to a safe level in wireless energy transfer for medical implants. Here, we found that it can naturally eliminate the dielectric loss. The coil segmentation method and the power loss analysis used in this paper could be applied to the transmitting, receiving, and resonant coils in two- and four-coil energy transfer systems.

  13. Analysis of gene expression levels in individual bacterial cells without image segmentation

    SciTech Connect

    Kwak, In Hae; Son, Minjun; Hagen, Stephen J.

    2012-05-11

    Highlights: • We present a method for extracting gene expression data from images of bacterial cells. • The method does not employ cell segmentation and does not require high magnification. • Fluorescence and phase contrast images of the cells are correlated through the physics of phase contrast. • We demonstrate the method by characterizing noisy expression of comX in Streptococcus mutans. -- Abstract: Studies of stochasticity in gene expression typically make use of fluorescent protein reporters, which permit the measurement of expression levels within individual cells by fluorescence microscopy. Analysis of such microscopy images is almost invariably based on a segmentation algorithm, where the image of a cell or cluster is analyzed mathematically to delineate individual cell boundaries. However, segmentation can be ineffective for studying bacterial cells or clusters, especially at lower magnification, where outlines of individual cells are poorly resolved. Here we demonstrate an alternative method for analyzing such images without segmentation. The method employs a comparison between pixel brightness in phase contrast vs. fluorescence microscopy images. By fitting the correlation between phase contrast and fluorescence intensity to a physical model, we obtain well-defined estimates for the different levels of gene expression that are present in the cell or cluster. The method reveals the boundaries of the individual cells, even if the source images lack the resolution to show these boundaries clearly.
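    The pixel-wise correlation idea can be illustrated on synthetic data. This is a toy linear model, not the physical phase-contrast model the authors fit, and every number below is invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic images: cells darken phase contrast and add fluorescence in
# proportion to a per-pixel "cell thickness" map (all values invented)
thickness = np.zeros((32, 32))
thickness[8:24, 8:24] = rng.uniform(0.5, 1.0, (16, 16))
true_expression = 50.0  # fluorescence per unit thickness (ground truth)

phase = 200.0 - 80.0 * thickness + rng.normal(0.0, 1.0, thickness.shape)
fluor = true_expression * thickness + rng.normal(0.0, 1.0, thickness.shape)

# correlate fluorescence with phase-contrast darkening pixel by pixel,
# with no cell segmentation involved
darkening = 200.0 - phase
slope, intercept = np.polyfit(darkening.ravel(), fluor.ravel(), 1)
est_expression = slope * 80.0  # convert the slope back to per-unit-thickness
print(round(est_expression, 1))
```

    The fitted slope recovers the expression level without ever delineating a cell boundary, which is the core of the segmentation-free approach.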

  14. Stress Analysis of Bolted, Segmented Cylindrical Shells Exhibiting Flange Mating-Surface Waviness

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Phillips, Dawn R.; Raju, Ivatury S.

    2009-01-01

    Bolted, segmented cylindrical shells are a common structural component in many engineering systems especially for aerospace launch vehicles. Segmented shells are often needed due to limitations of manufacturing capabilities or transportation issues related to very long, large-diameter cylindrical shells. These cylindrical shells typically have a flange or ring welded to opposite ends so that shell segments can be mated together and bolted to form a larger structural system. As the diameter of these shells increases, maintaining strict fabrication tolerances for the flanges to be flat and parallel on a welded structure is an extreme challenge. Local fit-up stresses develop in the structure due to flange mating-surface mismatch (flange waviness). These local stresses need to be considered when predicting a critical initial flaw size. Flange waviness is one contributor to the fit-up stress state. The present paper describes the modeling and analysis effort to simulate fit-up stresses due to flange waviness in a typical bolted, segmented cylindrical shell. Results from parametric studies are presented for various flange mating-surface waviness distributions and amplitudes.

  15. CADDIS Volume 4. Data Analysis: Download Software

    EPA Pesticide Factsheets

    Overview of the data analysis tools available for download on CADDIS. Provides instructions for downloading and installing CADStat, access to a Microsoft Excel macro for computing SSDs, and a brief overview of command-line use of R, a statistical software package.

  16. Analysis of muscle activation in each body segment in response to the stimulation intensity of whole-body vibration

    PubMed Central

    Lee, Dae-Yeon

    2017-01-01

    [Purpose] The purpose of this study was to investigate the effects of whole-body vibration exercise and to discuss the scientific basis for establishing optimal intensity by analyzing differences between muscle activations in each body part according to the stimulation intensity of the whole-body vibration. [Subjects and Methods] The study subjects included 10 healthy men in their 20s without orthopedic disease. Representative muscles from the subjects’ primary body segments were selected while the subjects stood upright on the exercise machines; electromyography electrodes were attached to the selected muscles. The muscle activities of each part were then measured at different intensities: no vibration, volumes of 50/80, and frequencies of 10/25/40 Hz were combined and applied while the subjects stood upright on the whole-body vibration exercise machines. Electromyographic signals were then collected and analyzed using the root mean square of muscular activation. [Results] The analysis found statistically meaningful differences in muscle activation with changes in exercise intensity in all 8 muscles. When the no-vibration status was standardized and analyzed as 1, the muscle effect became lower at higher frequencies but higher at larger volumes. [Conclusion] In conclusion, whole-body vibration stimulation promoted muscle activation across the entire body, and the exercise effects in each muscle varied depending on the exercise intensity. PMID:28265155
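    The root-mean-square (RMS) summary used for the electromyographic signals can be sketched as follows; the traces are synthetic noise standing in for real EMG, and the 2.5× amplitude difference is an arbitrary assumption:

```python
import numpy as np

def rms(signal):
    """Root mean square, a standard summary of EMG muscle activation."""
    return np.sqrt(np.mean(np.square(signal)))

# synthetic noise traces standing in for EMG: vibration vs. no-vibration
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 0.1, 2000)
vibration = rng.normal(0.0, 0.25, 2000)

# standardize the no-vibration condition to 1, as in the study's analysis
ratio = rms(vibration) / rms(baseline)
print(round(ratio, 2))
```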

  17. Fetal autonomic brain age scores, segmented heart rate variability analysis, and traditional short term variability

    PubMed Central

    Hoyer, Dirk; Kowalski, Eva-Maria; Schmidt, Alexander; Tetschke, Florian; Nowack, Samuel; Rudolph, Anja; Wallwitz, Ulrike; Kynass, Isabelle; Bode, Franziska; Tegtmeyer, Janine; Kumm, Kathrin; Moraru, Liviu; Götz, Theresa; Haueisen, Jens; Witte, Otto W.; Schleußner, Ekkehard; Schneider, Uwe

    2014-01-01

    Disturbances of fetal autonomic brain development can be evaluated from fetal heart rate patterns (HRP) reflecting the activity of the autonomic nervous system. Although HRP analysis from cardiotocographic (CTG) recordings is established for fetal surveillance, its temporal resolution is low. Fetal magnetocardiography (MCG), however, provides stable continuous recordings at a higher temporal resolution combined with a more precise heart rate variability (HRV) analysis. A direct comparison of CTG and MCG based HRV analysis is pending. The aims of the present study are: (i) to compare the fetal maturation age predicting value of the MCG based fetal Autonomic Brain Age Score (fABAS) approach with that of the CTG based Dawes-Redman methodology; and (ii) to elaborate the fABAS methodology by segmentation according to fetal behavioral states and HRP. We investigated MCG recordings from 418 normal fetuses, aged between 21 and 40 weeks of gestation. In linear regression models we obtained an age predicting value of CTG compatible short term variability (STV) of R2 = 0.200 (coefficient of determination), in contrast to MCG/fABAS related multivariate models with R2 = 0.648 in 30 min recordings, R2 = 0.610 in active sleep segments of 10 min, and R2 = 0.626 in quiet sleep segments of 10 min. Additionally, segmented analysis under particular exclusion of accelerations (AC) and decelerations (DC) in quiet sleep resulted in a novel multivariate model with R2 = 0.706. According to our results, fMCG based fABAS may provide a promising tool for the estimation of fetal autonomic brain age. Besides other traditional and novel HRV indices as possible indicators of developmental disturbances, the establishment of a fABAS score nomogram may represent a specific reference. The present results are intended to contribute to further exploration and validation using independent data sets and multicenter research structures. PMID:25505399
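    The R2 values quoted above are coefficients of determination from linear regression models. A small sketch of how such a value is computed, on synthetic data that is analogous in spirit only and unrelated to the fABAS dataset:

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a simple linear regression of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    pred = slope * x + intercept
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# synthetic "HRV index vs. gestational age" data (all numbers invented)
rng = np.random.default_rng(2)
age = rng.uniform(21.0, 40.0, 418)          # weeks of gestation
index = 0.5 * age + rng.normal(0.0, 2.5, 418)
r2 = r_squared(index, age)
print(round(r2, 2))
```

    The multivariate models in the study extend this to several HRV predictors at once, but the R2 interpretation (fraction of age variance explained) is the same.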

  18. Model-Based Segmentation of Cortical Regions of Interest for Multi-subject Analysis of fMRI Data

    NASA Astrophysics Data System (ADS)

    Engel, Karin; Brechmann, André; Toennies, Klaus

    The high inter-subject variability of human neuroanatomy complicates the analysis of functional imaging data across subjects. We propose a method for the correct segmentation of cortical regions of interest based on the cortical surface. First results on the segmentation of Heschl's gyrus indicate that our approach enables correct comparison of functional activations in relation to individual cortical patterns.

  19. National Aviation Fuel Scenario Analysis Program (NAFSAP). Volume I. Model Description. Volume II. User Manual.

    DTIC Science & Technology

    1980-03-01

    [OCR-garbled excerpt; only fragments are recoverable] National Aviation Fuel Scenario Analysis Program (NAFSAP), Volume I: Model Description; Volume II: User Manual. ... The second major class of inputs to NAFSAP is the ... this option determines the specific form of the number of new purchases (NOBUYS) computation.

  20. Fully Bayesian inference for structural MRI: application to segmentation and statistical analysis of T2-hypointensities.

    PubMed

    Schmidt, Paul; Schmid, Volker J; Gaser, Christian; Buck, Dorothea; Bührlen, Susanne; Förschler, Annette; Mühlau, Mark

    2013-01-01

    Aiming at iron-related T2-hypointensity, which is related to normal aging and neurodegenerative processes, we here present two practicable approaches, based on Bayesian inference, for preprocessing and statistical analysis of a complex set of structural MRI data. In particular, Markov Chain Monte Carlo methods were used to simulate posterior distributions. First, we rendered a segmentation algorithm that uses outlier detection based on model checking techniques within a Bayesian mixture model. Second, we rendered an analytical tool comprising a Bayesian regression model with smoothness priors (in the form of Gaussian Markov random fields) mitigating the necessity to smooth data prior to statistical analysis. For validation, we used simulated data and MRI data of 27 healthy controls (age: [Formula: see text]; range, [Formula: see text]). We first observed robust segmentation of both simulated T2-hypointensities and gray-matter regions known to be T2-hypointense. Second, simulated data and images of segmented T2-hypointensity were analyzed. We found not only robust identification of simulated effects but also a biologically plausible age-related increase of T2-hypointensity primarily within the dentate nucleus but also within the globus pallidus, substantia nigra, and red nucleus. Our results indicate that fully Bayesian inference can successfully be applied for preprocessing and statistical analysis of structural MRI data.

  1. Buckling Design and Analysis of a Payload Fairing One-Sixth Cylindrical Arc-Segment Panel

    NASA Technical Reports Server (NTRS)

    Kosareo, Daniel N.; Oliver, Stanley T.; Bednarcyk, Brett A.

    2013-01-01

    Design and analysis results are reported for a panel that is a one-sixth arc-segment of a full 33-ft diameter cylindrical barrel section of a payload fairing structure. Six such panels could be used to construct the fairing barrel, and, as such, compression buckling testing of a one-sixth arc-segment panel would serve as a validation test of the buckling analyses used to design the fairing panels. In this report, linear and nonlinear buckling analyses have been performed using finite element software for one-sixth arc-segment panels composed of aluminum honeycomb core with graphite-epoxy composite facesheets and an alternative fiber reinforced foam (FRF) composite sandwich design. The cross sections of both concepts were sized to represent realistic Space Launch System (SLS) payload fairing panels. Based on shell-based linear buckling analyses, smaller, more manageable buckling test panel dimensions were determined such that the panel would still be expected to buckle with a circumferential (as opposed to column-like) mode with significant separation between the first and second buckling modes. More detailed nonlinear buckling analyses were then conducted for honeycomb panels of various sizes using both Abaqus and ANSYS finite element codes, and for the smaller size panel, a solid-based finite element analysis was conducted. Finally, for the smaller size FRF panel, nonlinear buckling analysis was performed wherein geometric imperfections measured from an actual manufactured FRF panel were included. It was found that the measured imperfection did not significantly affect the panel's predicted buckling response.

  2. Laser power conversion system analysis, volume 2

    NASA Technical Reports Server (NTRS)

    Jones, W. S.; Morgan, L. L.; Forsyth, J. B.; Skratt, J. P.

    1979-01-01

    The orbit-to-ground laser power conversion system analysis investigated the feasibility and cost effectiveness of converting solar energy into laser energy in space, and transmitting the laser energy to earth for conversion to electrical energy. The analysis included space laser systems with electrical outputs on the ground ranging from 100 to 10,000 MW. The space laser power system was shown to be feasible and a viable alternate to the microwave solar power satellite. The narrow laser beam provides many options and alternatives not attainable with a microwave beam.

  3. Advanced Durability Analysis. Volume 1. Analytical Methods

    DTIC Science & Technology

    1987-07-31

    ...for microstructural behavior. This approach for representing the IFQ, when properly used, can provide reasonable durability analysis results for ... equivalent initial flaw size distribution (EIFSD) function. Engineering principles rather than mechanistic-based theories for microstructural behavior are ... accurate EIFS distribution and a service crack growth behavior. The determinations of EIFS distribution have been described in detail previously. In this

  4. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    NASA Astrophysics Data System (ADS)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for in vivo cancer imaging. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The different proposed PET segmentation strategies were validated in ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice; and 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and a non-homogeneous uptake) consisted of combining commercially available standard anthropomorphic phantoms with irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
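
The threshold-plus-clustering idea can be sketched in a few lines of numpy. This is not the calibrated algorithm from the paper: the two-cluster background estimate and the 40%-of-contrast threshold below are illustrative assumptions only, applied to a synthetic slice:

```python
import numpy as np

def kmeans_1d(vals, iters=50):
    """Two-cluster 1-D k-means; returns the lower (background) mean."""
    c = np.array([vals.min(), vals.max()], dtype=float)
    for _ in range(iters):
        lab = np.abs(vals[:, None] - c[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(lab == k):
                c[k] = vals[lab == k].mean()
    return float(c.min())

def segment_mtv(pet, frac=0.40):
    """Binary MTV mask: keep voxels whose uptake exceeds
    background + frac * (peak - background)."""
    bg = kmeans_1d(pet.ravel())
    thr = bg + frac * (pet.max() - bg)
    return pet >= thr

# Synthetic slice: warm background with a hot 10x10 'lesion'
img = np.full((32, 32), 1.0)
img[10:20, 10:20] = 8.0
mask = segment_mtv(img)
print(mask.sum())   # 100
```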

  5. Texture analysis based on the Hermite transform for image classification and segmentation

    NASA Astrophysics Data System (ADS)

    Estudillo-Romero, Alfonso; Escalante-Ramirez, Boris; Savage-Carmona, Jesus

    2012-06-01

    Texture analysis has become an important task in image processing because it is used as a preprocessing stage in different research areas, including medical image analysis, industrial inspection, segmentation of remotely sensed imagery, and multimedia indexing and retrieval. In order to extract visual texture features, a texture image analysis technique is presented based on the Hermite transform. Psychovisual evidence suggests that Gaussian derivatives fit the receptive field profiles of mammalian visual systems. The Hermite transform describes local basic texture features in terms of Gaussian derivatives. Multiresolution analysis combined with several analysis orders provides detection of the patterns that characterize every texture class. Analysis of the local maximum energy direction and steering of the transformation coefficients increase the method's robustness against texture orientation. This method presents an advantage over classical filter bank design because in the latter a fixed number of orientations for the analysis has to be selected. During the training stage, a subset of the Hermite analysis filters is chosen in order to improve inter-class separability, reduce the dimensionality of the feature vectors, and lower the computational cost of the classification stage. We exhaustively evaluated the correct classification rate on randomly selected training and testing subsets of real textures using several kinds of commonly used texture features. A comparison between different distance measures is also presented. Results of unsupervised real-texture segmentation using this approach, and comparison with previous approaches, showed the benefits of our proposal.
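
Since the Hermite transform analyzes images with Gaussian-derivative kernels, a toy filter-bank sketch (illustrative only, not the paper's multiresolution, steered implementation) shows how derivative-energy features separate differently oriented textures:

```python
import numpy as np

def gauss_deriv_kernels(sigma=1.5, radius=4):
    """Sampled 1-D Gaussian and its first derivative."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    dg = -x / sigma**2 * g
    return g, dg

def conv_sep(img, kx, ky):
    """Separable 2-D convolution: kx along rows, ky along columns."""
    out = np.apply_along_axis(lambda r: np.convolve(r, kx, 'same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, ky, 'same'), 0, out)

def texture_energy(img):
    """Mean energies of the x- and y-derivative responses: a tiny
    2-feature descriptor that separates oriented textures."""
    g, dg = gauss_deriv_kernels()
    ex = conv_sep(img, dg, g)   # d/dx (variation across columns)
    ey = conv_sep(img, g, dg)   # d/dy (variation across rows)
    return float((ex**2).mean()), float((ey**2).mean())

# Vertical stripes respond in x; their transpose responds in y
v = np.tile((np.arange(32) // 4) % 2, (32, 1)).astype(float)
ex_v, ey_v = texture_energy(v)
ex_h, ey_h = texture_energy(v.T)
print(ex_v > ey_v, ey_h > ex_h)   # True True
```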

  6. Segmentation of Unstructured Datasets

    NASA Technical Reports Server (NTRS)

    Bhat, Smitha

    1996-01-01

    Datasets generated by computer simulations and experiments in Computational Fluid Dynamics tend to be extremely large and complex. It is difficult to visualize these datasets using standard techniques like Volume Rendering and Ray Casting. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This thesis explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and from Finite Element Analysis.

  7. Multiwell experiment: reservoir modeling analysis, Volume II

    SciTech Connect

    Horton, A.I.

    1985-05-01

    This report updates an ongoing analysis by reservoir modelers at the Morgantown Energy Technology Center (METC) of well test data from the Department of Energy's Multiwell Experiment (MWX). Results of previous efforts were presented in a recent METC Technical Note (Horton 1985). Results included in this report pertain to the poststimulation well tests of Zones 3 and 4 of the Paludal Sandstone Interval and the prestimulation well tests of the Red and Yellow Zones of the Coastal Sandstone Interval. The following results were obtained by using a reservoir model and history matching procedures: (1) Post-minifracture analysis indicated that the minifracture stimulation of the Paludal Interval did not produce an induced fracture, and extreme formation damage did occur, since a 65% permeability reduction around the wellbore was estimated. The design for this minifracture was from 200 to 300 feet on each side of the wellbore; (2) Post full-scale stimulation analysis for the Paludal Interval also showed that extreme formation damage occurred during the stimulation as indicated by a 75% permeability reduction 20 feet on each side of the induced fracture. Also, an induced fracture half-length of 100 feet was determined to have occurred, as compared to a designed fracture half-length of 500 to 600 feet; and (3) Analysis of prestimulation well test data from the Coastal Interval agreed with previous well-to-well interference tests that showed extreme permeability anisotropy was not a factor for this zone. This lack of permeability anisotropy was also verified by a nitrogen injection test performed on the Coastal Red and Yellow Zones. 8 refs., 7 figs., 2 tabs.

  8. Comparative analysis of nonlinear dimensionality reduction techniques for breast MRI segmentation

    SciTech Connect

    Akhbardeh, Alireza; Jacobs, Michael A.

    2012-04-15

    Purpose: Visualization of anatomical structures using radiological imaging methods is an important tool in medicine to differentiate normal from pathological tissue and can generate large amounts of data for a radiologist to read. Integrating these large data sets is difficult and time-consuming. A new approach uses both supervised and unsupervised advanced machine learning techniques to visualize and segment radiological data. This study describes the application of a novel hybrid scheme, based on combining wavelet transform and nonlinear dimensionality reduction (NLDR) methods, to breast magnetic resonance imaging (MRI) data using three well-established NLDR techniques, namely, ISOMAP, local linear embedding (LLE), and diffusion maps (DfM), to perform a comparative performance analysis. Methods: Twenty-five breast lesion subjects were scanned using a 3T scanner. MRI sequences used were T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging. The hybrid scheme consisted of two steps: preprocessing and postprocessing of the data. The preprocessing step was applied for B1 inhomogeneity correction, image registration, and wavelet-based image compression to match and denoise the data. In the postprocessing step, MRI parameters were considered data dimensions and the NLDR-based hybrid approach was applied to integrate the MRI parameters into a single image, termed the embedded image. This was achieved by mapping all pixel intensities from the higher dimension to a lower dimensional (embedded) space. For validation, the authors compared the hybrid NLDR with the linear methods of principal component analysis (PCA) and multidimensional scaling (MDS) using synthetic data. For the clinical application, the authors used breast MRI data; comparison was performed using the postcontrast DCE MRI image and evaluating the congruence of the segmented lesions. Results: The NLDR-based hybrid approach was able to define and segment
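
The paper validates its hybrid NLDR scheme against linear baselines (PCA and MDS). A numpy sketch of the PCA baseline below collapses a stack of MRI parameter maps into a single "embedded image"; the data are synthetic and the function name is ours, not the authors':

```python
import numpy as np

def pca_embedded_image(stack):
    """Project each pixel's parameter vector onto the first principal
    component, giving a single 'embedded' image.
    stack: (n_params, H, W), e.g. T1 / T2 / DWI / DCE maps."""
    n, h, w = stack.shape
    X = stack.reshape(n, -1).T            # pixels x parameters
    X = X - X.mean(axis=0)
    cov = X.T @ X / (X.shape[0] - 1)
    vals, vecs = np.linalg.eigh(cov)      # ascending eigenvalues
    pc1 = vecs[:, -1]                     # leading principal axis
    return (X @ pc1).reshape(h, w)

# Toy 3-parameter stack with a 'lesion' elevated in every map
rng = np.random.default_rng(0)
stack = rng.normal(0.0, 0.1, (3, 16, 16))
stack[:, 5:10, 5:10] += 2.0
emb = pca_embedded_image(stack)
inside = np.abs(emb[5:10, 5:10]).mean()
outside = np.abs(emb[0:4, 0:4]).mean()
print(inside > outside)   # True: lesion contrast survives the embedding
```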

  9. Segmentation of Breast Lesions in Ultrasound Images through Multiresolution Analysis Using Undecimated Discrete Wavelet Transform.

    PubMed

    Prabusankarlal, K M; Thirumoorthy, P; Manavalan, R

    2016-11-01

    Early detection and diagnosis of breast cancer reduce the mortality rate of patients by increasing the treatment options. A novel method for the segmentation of breast ultrasound images is proposed in this work. The proposed method utilizes the undecimated discrete wavelet transform to perform multiresolution analysis of the input ultrasound image. As the resolution level increases, although the effect of noise reduces, the details of the image also dilute. The appropriate resolution level, which contains the essential details of the tumor, is automatically selected through mean structural similarity. The feature vector for each pixel is constructed by sampling intra-resolution and inter-resolution data of the image. The dimensionality of the feature vectors is reduced using principal components analysis. The reduced set of feature vectors is segmented into two disjoint clusters using a spatially regularized fuzzy c-means algorithm. The proposed algorithm is evaluated using four validation metrics on a breast ultrasound database of 150 images, including 90 benign and 60 malignant cases. The algorithm produced significantly better segmentation results (Dice coefficient = 0.8595, boundary displacement error = 9.796, dvi = 1.744, and global consistency error = 0.1835) than the other three state-of-the-art methods.
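
The clustering step can be illustrated with plain (unregularized) fuzzy c-means; the paper's spatially regularized variant adds a neighborhood term omitted here. A minimal numpy sketch on synthetic 1-D intensity features:

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=50):
    """Plain fuzzy c-means. X: (n_samples, n_features).
    Returns (memberships U of shape (n, c), centers of shape (c, f))."""
    # Deterministic init: centers spread across the data quantiles
    centers = np.quantile(X, np.linspace(0.1, 0.9, c), axis=0)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)        # memberships sum to 1
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, centers

# Two well-separated intensity clusters (1-D feature vectors)
rng = np.random.default_rng(1)
X = np.concatenate([np.full(50, 0.2), np.full(50, 0.9)])[:, None]
X = X + rng.normal(0, 0.02, X.shape)
U, centers = fuzzy_cmeans(X)
labels = U.argmax(axis=1)
print(sorted(centers.ravel().round(2)))   # centers near 0.2 and 0.9
```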

  10. Scanning and transmission electron microscopic analysis of ampullary segment of oviduct during estrous cycle in caprines.

    PubMed

    Sharma, R K; Singh, R; Bhardwaj, J K

    2015-01-01

    The ampullary segment of the mammalian oviduct provides a suitable milieu for fertilization and development of the zygote before implantation into the uterus. Therefore, in the present study, the cyclic changes in the morphology of the ampullary segment of the goat oviduct were studied during the follicular and luteal phases using scanning and transmission electron microscopy techniques. Topographical analysis revealed the presence of uniformly ciliated ampullary epithelia, concealing the apical processes of non-ciliated cells along with bulbous secretory cells, during the follicular phase. The luteal phase was marked by a decline in the number of ciliated cells and an increased occurrence of secretory cells. Ultrastructural analysis demonstrated the presence of an indented nuclear membrane, supranuclear cytoplasm, secretory granules, rough endoplasmic reticulum, large lipid droplets, apically located glycogen masses, and oval-shaped mitochondria in the secretory cells. The ciliated cells were characterized by elongated nuclei, abundant smooth endoplasmic reticulum, and oval or spherical mitochondria with crescentic cristae during the follicular phase. In the luteal phase, however, secretory cells possessed a highly indented nucleus with diffuse electron-dense chromatin, a hyaline nucleosol, and an increased number of lipid droplets, while the ciliated cells had numerous fibrous granules and basal bodies. The parallel use of scanning and transmission electron microscopy techniques has enabled us to examine the cyclic, hormone-dependent changes occurring in the topography and fine structure of the epithelium of the ampullary segment and its cells during different reproductive phases, which will be of great help in understanding the major bottleneck that limits the success rate of in vitro fertilization and embryo transfer technology.

  11. A de-noising algorithm to improve SNR of segmented gamma scanner for spectrum analysis

    NASA Astrophysics Data System (ADS)

    Li, Huailiang; Tuo, Xianguo; Shi, Rui; Zhang, Jinzhao; Henderson, Mark Julian; Courtois, Jérémie; Yan, Minhao

    2016-05-01

    An improved-threshold, shift-invariant wavelet transform de-noising algorithm for high-resolution gamma-ray spectroscopy is proposed to optimize the threshold function of the wavelet transform and suppress the pseudo-Gibbs artificial fluctuations in the signal. This algorithm was applied to a segmented gamma scanning system for large samples, in which high continuum levels caused by Compton scattering are routinely encountered. De-noising of gamma-ray spectra measured by the segmented gamma scanning system was evaluated for the improved, shift-invariant, and traditional wavelet transform algorithms. The improved wavelet transform method generated significantly enhanced performance in terms of the figure of merit, the root mean square error, the peak area, and the sample attenuation correction in the segmented gamma scanning system assays. We also found through spectrum analysis that the gamma energy spectrum can be viewed as a superposition of a low-frequency signal and high-frequency noise. Moreover, a smoothed spectrum can be appropriate for straightforward automated quantitative analysis.
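
Translation-invariant ("cycle spinning") wavelet denoising can be sketched with a single-level Haar transform in numpy. The paper's method uses an improved threshold function and a full shift-invariant transform; this toy keeps only the averaging-over-shifts idea that suppresses pseudo-Gibbs oscillations near peaks:

```python
import numpy as np

def haar_denoise(x, thr):
    """One-level Haar analysis, soft-threshold the detail band,
    then exact reconstruction (x must have even length)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)              # approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2)              # detail
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0)   # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

def cycle_spin_denoise(x, thr, shifts=8):
    """Average the denoised signal over circular shifts; averaging
    suppresses the shift-dependent pseudo-Gibbs ripples."""
    acc = np.zeros_like(x)
    for s in range(shifts):
        acc += np.roll(haar_denoise(np.roll(x, s), thr), -s)
    return acc / shifts

# Synthetic 'spectrum': continuum plus a photopeak, plus noise
n = 256
t = np.arange(n)
clean = 50.0 * np.exp(-0.5 * ((t - 128) / 4.0) ** 2) + 10.0
noisy = clean + np.random.default_rng(0).normal(0.0, 2.0, n)
den = cycle_spin_denoise(noisy, thr=4.0)
print(np.mean((den - clean) ** 2) < np.mean((noisy - clean) ** 2))   # True
```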

  12. Development and analysis of a linearly segmented CPC collector for industrial steam generation

    SciTech Connect

    Figueroa, J.A.A.

    1980-06-01

    This study involves the design, analysis and construction of a modular, non-imaging, trough, concentrating solar collector for generation of process steam in a tropical climate. The most innovative feature of this concentrator is that the mirror surface consists of long and narrow planar segments placed inside sealed low-cost glass tubes. The absorber is a cylindrical fin inside an evacuated glass tube. As an extension of the same study, the optical efficiency of the segmented concentrator has been simulated by means of a Monte-Carlo Ray-Tracing program. Laser Ray-Tracing techniques were also used to evaluate the possibilities of this new concept. A preliminary evaluation of the experimental concentrator was done using a relatively simple method that combines results from two experimental measurements: overall heat loss coefficient and optical efficiency. A transient behaviour test was used to measure the overall heat loss coefficient throughout a wide range of temperatures.

  13. AAV Vectors for FRET-Based Analysis of Protein-Protein Interactions in Photoreceptor Outer Segments

    PubMed Central

    Becirovic, Elvir; Böhm, Sybille; Nguyen, Ong N. P.; Riedmayr, Lisa M.; Hammelmann, Verena; Schön, Christian; Butz, Elisabeth S.; Wahl-Schott, Christian; Biel, Martin; Michalakis, Stylianos

    2016-01-01

    Fluorescence resonance energy transfer (FRET) is a powerful method for the detection and quantification of stationary and dynamic protein-protein interactions. Technical limitations have hampered systematic in vivo FRET experiments to study protein-protein interactions in their native environment. Here, we describe a rapid and robust protocol that combines adeno-associated virus (AAV) vector-mediated in vivo delivery of genetically encoded FRET partners with ex vivo FRET measurements. The method was established on acutely isolated outer segments of murine rod and cone photoreceptors and relies on the high co-transduction efficiency of retinal photoreceptors by co-delivered AAV vectors. The procedure can be used for the systematic analysis of protein-protein interactions of wild type or mutant outer segment proteins in their native environment. In conclusion, our protocol can help to characterize the physiological and pathophysiological relevance of photoreceptor specific proteins and, in principle, should also be transferable to other cell types. PMID:27516733

  14. Phylogenetic and recombination analysis of rice black-streaked dwarf virus segment 9 in China.

    PubMed

    Zhou, Yu; Weng, Jian-Feng; Chen, Yan-Ping; Liu, Chang-Lin; Han, Xiao-Hua; Hao, Zhuan-Fang; Li, Ming-Shun; Yong, Hong-Jun; Zhang, De-Gui; Zhang, Shi-Huang; Li, Xin-Hai

    2015-04-01

    Rice black-streaked dwarf virus (RBSDV) is an economically important virus that causes maize rough dwarf disease and rice black-streaked dwarf disease in East Asia. To study RBSDV variation and recombination, we examined the segment 9 (S9) sequences of 49 RBSDV isolates from maize and rice in China. Three S9 recombinants were detected in Baoding, Jinan, and Jining, China. Phylogenetic analysis showed that Chinese RBSDV isolates could be classified into two groups based on their S9 sequences, regardless of host or geographical origin. Further analysis suggested that S9 has undergone negative and purifying selection.

  15. GGO nodule volume-preserving nonrigid lung registration using GLCM texture analysis.

    PubMed

    Park, Seongjin; Kim, Bohyoung; Lee, Jeongjin; Goo, Jin Mo; Shin, Yeong-Gil

    2011-10-01

    In lung cancer screening, benign and malignant nodules can be classified through nodule growth assessment by registration and subsequent subtraction of follow-up computed tomography scans. During the registration, the volume of nodule regions in the floating image should be preserved, whereas the volume of other regions in the floating image should be aligned to that in the reference image. However, ground glass opacity (GGO) nodules are very elusive to segment automatically due to their inhomogeneous interior. In other words, it is difficult to automatically define the volume-preserving regions of GGO nodules. In this paper, we propose an accurate and fast nonrigid registration method. It applies the volume-preserving constraint to candidate regions of GGO nodules, which are automatically detected by gray-level co-occurrence matrix (GLCM) texture analysis. Considering that GGO nodules can be characterized by their inner inhomogeneity and high intensity, we identify the candidate regions of GGO nodules based on the homogeneity values calculated from the GLCM and the intensity values. Furthermore, we accelerate our nonrigid registration by using the Compute Unified Device Architecture (CUDA). In the nonrigid registration process, the computationally expensive procedures of floating-image transformation and cost-function calculation are accelerated with CUDA. The experimental results demonstrated that our method almost perfectly preserves the volume of GGO nodules in the floating image as well as effectively aligns the lung between the reference and floating images. Regarding computational performance, our CUDA-based method delivers about 20x faster registration than the conventional method. Our method can be successfully applied to GGO nodule follow-up studies and can be extended to the volume-preserving registration and subtraction of specific diseases in other organs (e.g., liver cancer).
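
The GLCM homogeneity feature used to flag candidate GGO regions is simple to compute directly. A minimal single-offset numpy sketch (assumed 8 gray levels and offset (0, 1); the paper's detection pipeline combines this with intensity criteria):

```python
import numpy as np

def glcm_homogeneity(img, levels=8, dx=1, dy=0):
    """Homogeneity sum_ij P(i, j) / (1 + |i - j|) of the gray-level
    co-occurrence matrix for a single pixel offset (dy, dx)."""
    q = (img / max(img.max(), 1e-12) * (levels - 1)).astype(int)
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]
    b = q[dy:, dx:]
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1.0)   # count pixel pairs
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return float((glcm / (1.0 + np.abs(i - j))).sum())

rng = np.random.default_rng(0)
flat = np.full((32, 32), 100.0)           # homogeneous tissue patch
rough = rng.uniform(0, 255, (32, 32))     # inhomogeneous, GGO-like patch
print(glcm_homogeneity(flat), glcm_homogeneity(rough))
# homogeneous texture scores 1.0; the noisy texture scores much lower
```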

  16. Feature-driven model-based segmentation

    NASA Astrophysics Data System (ADS)

    Qazi, Arish A.; Kim, John; Jaffray, David A.; Pekar, Vladimir

    2011-03-01

    The accurate delineation of anatomical structures is required in many medical image analysis applications. One example is radiation therapy planning (RTP), where traditional manual delineation is tedious, labor intensive, and can require hours of a clinician's valuable time. The majority of automated segmentation methods in RTP belong to either model-based or atlas-based approaches. One substantial limitation of model-based segmentation is that its accuracy may be restricted by uncertainties in image content, specifically when segmenting low-contrast anatomical structures, e.g. soft tissue organs in computed tomography images. In this paper, we introduce a non-parametric feature enhancement filter which replaces raw intensity image data with a high-level probabilistic map that guides the deformable model to reliably segment low-contrast regions. The method is evaluated by segmenting the submandibular and parotid glands in the head and neck region and comparing the results to manual segmentations in terms of volume overlap. Quantitative results show that we are in overall good agreement with expert segmentations, achieving volume overlap of up to 80%. Qualitatively, we demonstrate that we are able to segment low-contrast regions, which otherwise are difficult to delineate with deformable models relying on distinct object boundaries in the original image data.

  17. Method 349.0 Determination of Ammonia in Estuarine and Coastal Waters by Gas Segmented Continuous Flow Colorimetric Analysis

    EPA Science Inventory

    This method provides a procedure for the determination of ammonia in estuarine and coastal waters. The method is based upon the indophenol reaction [1-5], here adapted to automated gas-segmented continuous flow analysis.

  18. Gait analysis and cerebral volumes in Down's syndrome.

    PubMed

    Rigoldi, C; Galli, M; Condoluci, C; Carducci, F; Onorati, P; Albertini, G

    2009-01-01

    The aim of this study was to look for a relationship between cerebral volumes computed using a voxel-based morphometry algorithm and walking patterns in individuals with Down's syndrome (DS), in order to investigate the origin of the motor problems in these subjects with a view to developing appropriate rehabilitation programmes. Nine children with DS underwent a gait analysis (GA) protocol that used a 3D motion analysis system, force plates and a video system, and magnetic resonance imaging (MRI). Analysis of GA graphs allowed a series of parameters to be defined and computed in order to quantify gait patterns. By combining some of the parameters it was possible to obtain a 3D description of gait in terms of distance from normal values. Finally, the results of cerebral volume analysis were compared with the gait patterns found. A strong relationship emerged between cerebellar vermis volume reduction and quality of gait and also between grey matter volume reduction of some cerebral areas and asymmetrical gait. An evaluation of high-level motor deficits, reflected in a lack or partial lack of proximal functions, is important in order to define a correct rehabilitation programme.

  19. Dense nuclei segmentation based on graph cut and convexity-concavity analysis.

    PubMed

    Qi, J

    2014-01-01

    With the rapid advancement of 3D confocal imaging technology, more and more 3D cellular images will be available. However, robust and automatic extraction of nuclei shape may be hindered by a highly cluttered environment, as for example, in fly eye tissues. In this paper, we present a novel and efficient nuclei segmentation algorithm based on the combination of graph cut and convex shape assumption. The main characteristic of the algorithm is that it segments nuclei foreground using a graph-cut algorithm with our proposed new initialization method and splits overlapping or touching cell nuclei by simple convexity and concavity analysis. Experimental results show that the proposed algorithm can segment complicated nuclei clumps effectively in our fluorescent fruit fly eye images. Evaluation on a public hand-labelled 2D benchmark demonstrates substantial quantitative improvement over other methods. For example, the proposed method achieves a 3.2 Hausdorff distance decrease and a 1.8 decrease in the merged nuclei error per slice.
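
The splitting step hinges on locating concave points on the clump boundary. For a polygonal contour with vertices in counter-clockwise order, a vertex is concave exactly when the cross product of its adjacent edges is negative; a numpy sketch (a simplification of the paper's 3-D analysis, on a toy 2-D outline):

```python
import numpy as np

def concave_vertices(poly):
    """Indices of concave vertices of a simple polygon whose vertices
    are listed counter-clockwise. A vertex is concave when the turn
    there (cross product of the incoming and outgoing edges) is negative."""
    p = np.asarray(poly, dtype=float)
    prev = np.roll(p, 1, axis=0)
    nxt = np.roll(p, -1, axis=0)
    e1 = p - prev                    # incoming edge
    e2 = nxt - p                     # outgoing edge
    cross = e1[:, 0] * e2[:, 1] - e1[:, 1] * e2[:, 0]
    return np.where(cross < 0)[0]

# An arrow-head outline: one concave notch where two 'nuclei' touch
poly = [(0, 0), (4, 0), (4, 3), (2, 1), (0, 3)]
print(concave_vertices(poly))   # [3] -> the notch at (2, 1)
```

Pairs of such concave points are natural endpoints for the cut lines that separate touching nuclei.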

  20. Automatic segmentation and analysis of fibrin networks in 3D confocal microscopy images

    NASA Astrophysics Data System (ADS)

    Liu, Xiaomin; Mu, Jian; Machlus, Kellie R.; Wolberg, Alisa S.; Rosen, Elliot D.; Xu, Zhiliang; Alber, Mark S.; Chen, Danny Z.

    2012-02-01

    Fibrin networks are a major component of blood clots that provides structural support to the formation of growing clots. Abnormal fibrin networks that are too rigid or too unstable can promote cardiovascular problems and/or bleeding. However, current biological studies of fibrin networks rarely perform quantitative analysis of their structural properties (e.g., the density of branch points) due to the massive branching structures of the networks. In this paper, we present a new approach for segmenting and analyzing fibrin networks in 3D confocal microscopy images. We first identify the target fibrin network by applying the 3D region growing method with global thresholding. We then produce a one-voxel wide centerline for each fiber segment along which the branch points and other structural information of the network can be obtained. Branch points are identified by a novel approach based on the outer medial axis. Cells within the fibrin network are segmented by a new algorithm that combines cluster detection and surface reconstruction based on the α-shape approach. Our algorithm has been evaluated on computer phantom images of fibrin networks for identifying branch points. Experiments on z-stack images of different types of fibrin networks yielded results that are consistent with biological observations.
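
The first steps, thresholded region growing and branch-point detection on a one-voxel-wide centerline, can be sketched in 2-D numpy. This is a simplification: the paper works in 3-D and detects branch points via the outer medial axis, whereas the toy rule below just counts skeleton neighbors:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, thr):
    """4-connected region growing from a seed over pixels >= thr
    (a global-threshold version of the paper's first step, in 2-D)."""
    mask = np.zeros(img.shape, dtype=bool)
    q = deque()
    if img[seed] >= thr:
        mask[seed] = True
        q.append(seed)
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not mask[ny, nx] and img[ny, nx] >= thr):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

def branch_points(skel):
    """Centerline pixels with 3+ skeleton neighbors (8-connectivity)."""
    s = skel.astype(int)
    nb = sum(np.roll(np.roll(s, dy, 0), dx, 1)
             for dy in (-1, 0, 1) for dx in (-1, 0, 1)
             if (dy, dx) != (0, 0))
    return skel & (nb >= 3)

# Grow a square 'fiber cross-section' from a seed
img = np.zeros((8, 8))
img[2:6, 2:6] = 5.0
mask = region_grow(img, (3, 3), thr=1.0)
print(mask.sum())                        # 16

# A one-pixel-wide 'Y' centerline has exactly one branch point
skel = np.zeros((7, 7), dtype=bool)
skel[3, 0:4] = True                      # left arm ending at (3, 3)
skel[2, 4] = skel[1, 5] = True           # upper-right arm
skel[4, 4] = skel[5, 5] = True           # lower-right arm
print(np.argwhere(branch_points(skel)))  # [[3 3]]
```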

  1. Local multifractal detrended fluctuation analysis for non-stationary image's texture segmentation

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Li, Zong-shou; Li, Jin-wei

    2014-12-01

    Feature extraction plays an important role in image processing and pattern recognition. As a powerful tool, multifractal theory has recently been employed for this job. However, traditional multifractal methods are designed to analyze objects with a stationary measure and cannot handle a non-stationary measure. The work of this paper is twofold. First, the definition of a stationary image and 2D image feature detection methods are proposed. Second, a novel feature extraction scheme for non-stationary images is proposed using local multifractal detrended fluctuation analysis (Local MF-DFA), which is based on 2D MF-DFA. A set of new multifractal descriptors, called the local generalized Hurst exponent (Lhq), is defined to characterize the local scaling properties of textures. To test the proposed method, both the novel texture descriptor and two other multifractal indicators, namely, local Hölder coefficients based on a capacity measure and the multifractal dimension Dq based on the multifractal differential box-counting (MDBC) method, are compared in segmentation experiments. The first experiment indicates that the segmentation results obtained by the proposed Lhq are slightly better than those of the MDBC-based Dq and significantly superior to those of the local Hölder coefficients. The results of the second experiment demonstrate that the Lhq can distinguish texture images more effectively and provides significantly more robust segmentations than the MDBC-based Dq.
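
Ordinary one-dimensional DFA, the building block that Local MF-DFA generalizes to a local, 2-D, multifractal form, fits a power law to detrended fluctuation magnitudes across window sizes. A minimal numpy sketch (not the paper's 2-D local estimator):

```python
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64)):
    """Ordinary 1-D DFA: returns the slope of log F(s) vs log s, where
    F(s) is the RMS linearly-detrended fluctuation of the integrated
    profile over windows of length s."""
    y = np.cumsum(x - np.mean(x))               # integrated profile
    logs, logF = [], []
    for s in scales:
        n = len(y) // s
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        var = []
        for seg in segs:                        # detrend each window
            a, b = np.polyfit(t, seg, 1)
            var.append(np.mean((seg - (a * t + b)) ** 2))
        logs.append(np.log(s))
        logF.append(0.5 * np.log(np.mean(var)))
    slope, _ = np.polyfit(logs, logF, 1)
    return float(slope)

# White noise is uncorrelated: its DFA exponent is near 0.5
x = np.random.default_rng(0).normal(size=4096)
h = dfa_exponent(x)
print(round(h, 2))   # ~0.5
```

The generalized Hurst exponents h(q) of MF-DFA replace the mean-square fluctuation with q-th order moments; the Lhq descriptor above additionally computes them in a local window around each pixel.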

  2. Ground truth delineation for medical image segmentation based on Local Consistency and Distribution Map analysis.

    PubMed

    Cheng, Irene; Sun, Xinyao; Alsufyani, Noura; Xiong, Zhihui; Major, Paul; Basu, Anup

    2015-01-01

    Computer-aided detection (CAD) systems are being increasingly deployed for medical applications in recent years with the goal of speeding up tedious tasks and improving precision. Among others, segmentation is an important component in CAD systems as a preprocessing step to help recognize patterns in medical images. In order to assess the accuracy of a CAD segmentation algorithm, comparison with ground truth data is necessary. To date, ground truth delineation relies mainly on contours that are either manually defined by clinical experts or automatically generated by software. In this paper, we propose a systematic ground truth delineation method based on a Local Consistency Set Analysis approach, which can be used to establish an accurate ground truth representation or, if ground truth is available, to assess the accuracy of a CAD-generated segmentation algorithm. We validate our computational model using medical data. Experimental results demonstrate the robustness of our approach. In contrast to current methods, our model also provides consistency information at the distributed boundary pixel level, and is thus invariant to global compensation error.

  3. Bayesian time series analysis of segments of the Rocky Mountain trumpeter swan population

    USGS Publications Warehouse

    Wright, Christopher K.; Sojda, Richard S.; Goodman, Daniel

    2002-01-01

    A Bayesian time series analysis technique, the dynamic linear model, was used to analyze counts of Trumpeter Swans (Cygnus buccinator) summering in Idaho, Montana, and Wyoming from 1931 to 2000. For the Yellowstone National Park segment of white birds (sub-adults and adults combined) the estimated probability of a positive growth rate is 0.01. The estimated probability of achieving the Subcommittee on Rocky Mountain Trumpeter Swans 2002 population goal of 40 white birds for the Yellowstone segment is less than 0.01. Outside of Yellowstone National Park, Wyoming white birds are estimated to have a 0.79 probability of a positive growth rate with a 0.05 probability of achieving the 2002 objective of 120 white birds. In the Centennial Valley in southwest Montana, results indicate a probability of 0.87 that the white bird population is growing at a positive rate with considerable uncertainty. The estimated probability of achieving the 2002 Centennial Valley objective of 160 white birds is 0.14 but under an alternative model falls to 0.04. The estimated probability that the Targhee National Forest segment of white birds has a positive growth rate is 0.03. In Idaho outside of the Targhee National Forest, white birds are estimated to have a 0.97 probability of a positive growth rate with a 0.18 probability of attaining the 2002 goal of 150 white birds.
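
A dynamic linear model is beyond a short snippet, but the core quantity reported, the posterior probability of a positive growth rate, can be illustrated with a conjugate log-linear trend fit on synthetic counts. This is a flat-prior normal approximation, not the authors' model, and the count series below are invented for the example:

```python
import numpy as np
from math import erf, sqrt

def prob_positive_growth(counts):
    """Posterior P(slope > 0) for a log-linear trend under a flat
    prior: slope | data ~ Normal(b_hat, se^2), approximately."""
    y = np.log(np.asarray(counts, dtype=float))
    t = np.arange(len(y), dtype=float)
    t = t - t.mean()
    b = (t @ (y - y.mean())) / (t @ t)          # OLS slope on log counts
    resid = (y - y.mean()) - b * t
    s2 = (resid @ resid) / (len(y) - 2)         # residual variance
    se = sqrt(s2 / (t @ t))                     # standard error of slope
    z = b / se
    return 0.5 * (1 + erf(z / sqrt(2)))         # Phi(z)

growing = [40, 44, 43, 50, 55, 54, 62, 66]      # hypothetical counts
declining = [66, 62, 54, 55, 50, 43, 44, 40]
print(prob_positive_growth(growing) > 0.95)     # True
print(prob_positive_growth(declining) < 0.05)   # True
```

The dynamic linear model used in the paper additionally lets the level and growth rate evolve over time, which widens these posterior probabilities toward 0.5 when the data are noisy.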

  4. A unified framework for automatic wound segmentation and analysis with deep convolutional neural networks.

    PubMed

    Wang, Changhan; Yan, Xinchen; Smith, Max; Kochhar, Kanika; Rubin, Marcie; Warren, Stephen M; Wrobel, James; Lee, Honglak

    2015-01-01

    Wound surface area changes over multiple weeks are highly predictive of the wound healing process. Furthermore, the quality and quantity of the tissue in the wound bed also offer important prognostic information. Unfortunately, accurate measurements of wound surface area changes are out of reach in the busy wound practice setting. Currently, clinicians estimate wound size by estimating wound width and length using a scalpel after wound treatment, which is highly inaccurate. To address this problem, we propose an integrated system to automatically segment wound regions and analyze wound conditions in wound images. Different from previous segmentation techniques which rely on handcrafted features or unsupervised approaches, our proposed deep learning method jointly learns task-relevant visual features and performs wound segmentation. Moreover, learned features are applied to further analysis of wounds in two ways: infection detection and healing progress prediction. To the best of our knowledge, this is the first attempt to automate long-term predictions of general wound healing progress. Our method is computationally efficient and takes less than 5 seconds per wound image (480 by 640 pixels) on a typical laptop computer. Our evaluations on a large-scale wound database demonstrate the effectiveness and reliability of the proposed system.

  5. Advanced finite element analysis of L4-L5 implanted spine segment

    NASA Astrophysics Data System (ADS)

    Pawlikowski, Marek; Domański, Janusz; Suchocki, Cyprian

    2015-09-01

    In this paper a finite element (FE) analysis of an implanted lumbar spine segment is presented. The segment model consists of two lumbar vertebrae, L4 and L5, and the prosthesis. The model of the intervertebral disc prosthesis consists of two metallic plates and a polyurethane core. Bone tissue is modelled as a linear viscoelastic material. The prosthesis core is made of a polyurethane nanocomposite and is modelled as a non-linear viscoelastic material. The constitutive law of the core, derived in one of the previous papers, is implemented into the FE software Abaqus® by means of the user-supplied subroutine UMAT. The metallic plates are modelled as elastic. The most important parts of the paper include: description of the geometrical and numerical modelling of the prosthesis, mathematical derivation of the stiffness tensor and Kirchhoff stress, and implementation of the constitutive model of the polyurethane core into the Abaqus® software. Two load cases were considered, i.e. compression and stress relaxation under constant displacement. The goal of the paper is to numerically validate the previously formulated constitutive law and to perform advanced FE analyses of the implanted L4-L5 spine segment, in which a non-standard constitutive law for one of the model materials, i.e. the prosthesis core, is implemented.

  6. A marked point process of rectangles and segments for automatic analysis of digital elevation models.

    PubMed

    Ortner, Mathias; Descombes, Xavier; Zerubia, Josiane

    2008-01-01

    This work presents a framework for automatic feature extraction from images using stochastic geometry. Features in images are modeled as realizations of a spatial point process of geometrical shapes. This framework allows the incorporation of a priori knowledge about the spatial distribution of features. More specifically, we present a model based on the superposition of a process of segments and a process of rectangles. The former is dedicated to the detection of linear networks of discontinuities, while the latter aims at segmenting homogeneous areas. An energy is defined, favoring connections of segments and alignments of rectangles, as well as a relevant interaction between both types of objects. The estimation is performed by minimizing the energy using a simulated annealing algorithm. The proposed model is applied to the analysis of Digital Elevation Models (DEMs). These images are raster data representing the altimetry of a dense urban area. We present results on real data provided by the IGN (French National Geographic Institute), consisting of low-quality DEMs of various types.
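The energy-minimization step can be illustrated with a generic simulated-annealing loop. The paper's actual sampler operates on a marked point process of rectangles and segments with birth/death and transformation moves; the toy below anneals a simple one-dimensional energy only to show the accept/reject rule and geometric cooling:

```python
import math
import random

def energy(x: float) -> float:
    # Non-convex toy energy with its global minimum near x = 2.2.
    return (x - 2.0) ** 2 + math.sin(5.0 * x)

def anneal(x0: float, t0: float = 1.0, cooling: float = 0.999,
           steps: int = 20_000) -> float:
    random.seed(42)
    x, t = x0, t0
    for _ in range(steps):
        cand = x + random.gauss(0.0, 0.5)        # propose a local perturbation
        delta = energy(cand) - energy(x)
        # Accept downhill moves always; uphill moves with prob exp(-delta/T).
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand
        t *= cooling                             # geometric cooling schedule
    return x

x_min = anneal(x0=-5.0)
print(round(x_min, 2))
```

In the paper the state is a whole configuration of geometric objects and `energy` scores connections, alignments, and object interactions, but the annealing mechanics are the same.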

  7. Automated 3D Segmentation of Intraretinal Surfaces in SD-OCT Volumes in Normal and Diabetic Mice

    PubMed Central

    Antony, Bhavna J.; Jeong, Woojin; Abràmoff, Michael D.; Vance, Joseph; Sohn, Elliott H.; Garvin, Mona K.

    2014-01-01

    Purpose To describe an adaptation of an existing graph-theoretic method (initially developed for human optical coherence tomography [OCT] images) for the three-dimensional (3D) automated segmentation of 10 intraretinal surfaces in mouse scans, and to assess the accuracy of the method and the reproducibility of thickness measurements. Methods Ten intraretinal surfaces were segmented in repeat spectral domain (SD)-OCT volumetric images acquired from normal (n = 8) and diabetic (n = 10) mice. The accuracy of the method was assessed by computing the border position errors of the automated segmentation with respect to manual tracings obtained from two experts. The reproducibility was statistically assessed for four retinal layers within eight predefined regions using the mean and SD of the differences in retinal thickness measured in the repeat scans, the coefficient of variation (CV) and the intraclass correlation coefficients (ICC; with 95% confidence intervals [CIs]). Results The overall mean unsigned border position error for the 10 surfaces computed over 97 B-scans (10 scans, 10 normal mice) was 3.16 ± 0.91 μm. The overall mean differences in retinal thicknesses computed from the normal and diabetic mice were 1.86 ± 0.95 and 2.15 ± 0.86 μm, respectively. The CV of the retinal thicknesses for all the measured layers ranged from 1.04% to 5%. The ICCs for the total retinal thickness in the normal and diabetic mice were 0.78 [0.10, 0.92] and 0.83 [0.31, 0.96], respectively. Conclusion The presented method (publicly available as part of the Iowa Reference Algorithms) has acceptable accuracy and reproducibility and is expected to be useful in the quantitative study of intraretinal layers in mice. Translational Relevance The presented method, initially developed for human OCT, has been adapted for mice, with the potential to be adapted for other animals as well. Quantitative in vivo assessment of the retina in mice allows changes to be measured longitudinally, decreasing
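The reproducibility statistics reported here (mean and SD of repeat-scan differences, coefficient of variation) are straightforward to compute. A sketch on invented repeat-scan thickness values; the per-subject CV convention shown is one common choice, not necessarily the paper's exact formula:

```python
import numpy as np

# Made-up repeat-scan thickness measurements (micrometres) for one retinal
# layer in five mice; scan1 and scan2 are the two acquisitions per animal.
scan1 = np.array([210.0, 198.5, 205.2, 201.7, 208.9])
scan2 = np.array([212.1, 196.9, 207.0, 203.5, 206.4])

# Mean and SD of the between-scan differences.
diffs = scan2 - scan1
mean_diff, sd_diff = diffs.mean(), diffs.std(ddof=1)

# Per-subject CV: sample SD of the two repeats divided by their mean,
# averaged over subjects and expressed as a percentage.
per_subject_cv = (np.std([scan1, scan2], axis=0, ddof=1)
                  / np.mean([scan1, scan2], axis=0))
cv_percent = 100.0 * per_subject_cv.mean()
print(round(mean_diff, 2), round(cv_percent, 2))
```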

  8. Improved disparity map analysis through the fusion of monocular image segmentations

    NASA Technical Reports Server (NTRS)

    Perlant, Frederic P.; Mckeown, David M.

    1991-01-01

    The focus is on examining how estimates of three-dimensional scene structure, as encoded in a scene disparity map, can be improved by analysis of the original monocular imagery. Surface illumination information is exploited by segmenting the monocular image into fine surface patches of nearly homogeneous intensity, which are used to remove mismatches generated during stereo matching. These patches guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely with physical surfaces in the scene. The technique is largely independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented on a complex urban scene containing various man-made and natural features. This scene poses a variety of problems, including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. The improvements due to monocular fusion are demonstrated with a set of different region-based image segmentations. The generality of this approach to stereo analysis and its utility in the development of general three-dimensional scene interpretation systems are also discussed.

  9. Automated system for ST segment and arrhythmia analysis in exercise radionuclide ventriculography

    SciTech Connect

    Hsia, P.W.; Jenkins, J.M.; Shimoni, Y.; Gage, K.P.; Santinga, J.T.; Pitt, B.

    1986-06-01

    A computer-based system for interpretation of the electrocardiogram (ECG) in the diagnosis of arrhythmia and ST segment abnormality during exercise is presented. The system was designed for inclusion in a gamma camera so the ECG diagnosis could be combined with the diagnostic capability of radionuclide ventriculography. Digitized data are analyzed in a beat-by-beat mode and a contextual diagnosis of the underlying rhythm is provided. Each beat is assigned a beat code based on a combination of waveform analysis and RR interval measurement. The waveform analysis employs a new correlation coefficient formula which corrects for baseline wander. Selective signal averaging, in which only normal beats are included, is performed for an improved signal-to-noise ratio prior to ST segment analysis. Template generation, R wave detection, QRS window sizing, baseline correction, and continuous updating of heart rate have all been automated. ST level and slope measurements are computed on signal-averaged data. Computer arrhythmia analysis of 13 passages of abnormal rhythm was correct for 98.4 percent of all beats. Twenty-five passages of exercise data, 1-5 min in length, were evaluated by a cardiologist; the computer agreed in 95.8 percent of ST level measurements and 91.7 percent of ST slope measurements.
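Selective signal averaging, as described, excludes abnormal beats before averaging so that noise shrinks roughly as 1/sqrt(n) without ectopic morphology contaminating the ST-segment template. A sketch with synthetic beats and labels (not the system's actual beat coder):

```python
import numpy as np

rng = np.random.default_rng(1)
template = np.sin(np.linspace(0, np.pi, 100))            # idealized beat shape
beats = template + rng.normal(0, 0.3, size=(50, 100))    # 50 noisy beats
labels = np.array(["N"] * 45 + ["V"] * 5)                # 5 beats coded ectopic

# Average only beats coded normal ("N"); ectopic ("V") beats are excluded so
# they cannot distort the averaged waveform used for ST measurement.
averaged = beats[labels == "N"].mean(axis=0)

# Residual noise versus a single raw beat.
noise_single = np.abs(beats[0] - template).mean()
noise_avg = np.abs(averaged - template).mean()
print(noise_avg < noise_single)
```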

  10. [Whole body versus segmental bioimpedance measurements (BIS) of electrical resistance (Re) and extracellular volume (ECV) for assessment of dry weight in end-stage renal patients treated by hemodialysis].

    PubMed

    Załuska, Wojciech; Małecka, Teresa; Mozul, Sławomir; Ksiazek, Andrzej

    2004-01-01

    The precise estimation of the hydration status of the human body is of great importance in the assessment of dry weight in end-stage renal disease patients treated by hemodialysis. The bioimpedance technique (BIS) is postulated as an easy-to-use, non-invasive method for monitoring the size of hydration compartments such as total body water (TBW) and extracellular volume (ECV). However, the precision of the whole-body bioimpedance technique has been questioned in several research papers. One of the problems lies in fluid transfer from peripheral spaces (limbs) to the central space (trunk) when the position of the body changes (orthostatic effect). This phenomenon can be eliminated using the segmental bioimpedance technique (4200 Hydra Analyzer, Xitron, San Diego, CA, U.S.A.). The purpose of the study was to estimate the changes in electrical resistance (Re) and extracellular volume (ECV) before and after 10 hemodialysis sessions using the whole-body bioimpedance technique (WBIS) in comparison to BIS measurements in specific segments of the body: arm (ECVarm), leg (ECVleg), and trunk (ECVtrunk). The sum of extracellular volumes in segments (2ECVarm + ECVtrunk + 2ECVleg) was 13.26 +/- 1.861 L, in comparison to 17.29 +/- 2.07 L (p < 0.01) as measured by the WBIS technique before HD. Electrical resistance Re was 558 +/- 68 Ω as calculated from the sum of segments versus 560 +/- 70 Ω (p < 0.05) as measured by WBIS. After hemodialysis, the sum of segmental ECV measurements was 11.42 +/- 1.28 L in comparison to 14.84 +/- 1.31 L (p < 0.001) from the whole-body technique (WBIS), and electrical resistance Re was 674 +/- 67 Ω as calculated from the sum of segments versus 677 +/- 64 Ω (p < 0.05), respectively. 
The observed difference between the nearly identical electrical resistance Re as measured by WBIS in comparison to the sum of segment measurements, and the important difference between the ECV volumes as measured
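The segmental comparison rests on simple arithmetic: whole-body ECV is approximated by counting each arm and leg twice plus the trunk. Only the two totals below come from the abstract; the per-segment split is invented for illustration:

```python
# Illustrative per-segment extracellular volumes in litres (the abstract only
# reports the sum, 13.26 L pre-dialysis; this split is hypothetical).
ecv_arm, ecv_leg, ecv_trunk = 1.05, 2.80, 5.56

# Segmental estimate: 2*ECVarm + ECVtrunk + 2*ECVleg, as in the abstract.
ecv_segmental = 2 * ecv_arm + ecv_trunk + 2 * ecv_leg

# Whole-body (WBIS) pre-dialysis measurement reported in the abstract.
ecv_whole_body = 17.29
difference = ecv_whole_body - ecv_segmental
print(round(ecv_segmental, 2), round(difference, 2))
```

The roughly 4 L gap between the two estimates is the discrepancy the study highlights.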

  11. Texture analysis improves level set segmentation of the anterior abdominal wall

    PubMed Central

    Xu, Zhoubing; Allen, Wade M.; Baucom, Rebeccah B.; Poulose, Benjamin K.; Landman, Bennett A.

    2013-01-01

    Purpose: The treatment of ventral hernias (VH) has been a challenging problem for medical care. Repair of these hernias is fraught with failure; recurrence rates ranging from 24% to 43% have been reported, even with the use of biocompatible mesh. Currently, computed tomography (CT) is used to guide intervention through expert, but qualitative, clinical judgments; notably, quantitative metrics based on image-processing are not used. The authors propose that image segmentation methods to capture the three-dimensional structure of the abdominal wall and its abnormalities will provide a foundation on which to measure geometric properties of hernias and surrounding tissues and, therefore, to optimize intervention. Methods: In this study with 20 clinically acquired CT scans on postoperative patients, the authors demonstrated a novel approach to geometric classification of the abdominal wall. The authors’ approach uses a texture analysis based on Gabor filters to extract feature vectors and follows a fuzzy c-means clustering method to estimate voxelwise probability memberships for eight clusters. The memberships estimated from the texture analysis are helpful to identify anatomical structures with inhomogeneous intensities. The membership was used to guide the level set evolution, as well as to derive an initial start close to the abdominal wall. Results: Segmentation results on abdominal walls were both quantitatively and qualitatively validated with surface errors based on manually labeled ground truth. Using texture, mean surface errors for the outer surface of the abdominal wall were less than 2 mm, with 91% of the outer surface less than 5 mm away from the manual tracings; errors were significantly greater (2–5 mm) for methods that did not use the texture. Conclusions: The authors’ approach establishes a baseline for characterizing the abdominal wall for improving VH care. Inherent texture patterns in CT scans are helpful to the tissue classification, and texture

  12. Texture analysis improves level set segmentation of the anterior abdominal wall

    SciTech Connect

    Xu, Zhoubing; Allen, Wade M.; Baucom, Rebeccah B.; Poulose, Benjamin K.; Landman, Bennett A.

    2013-12-15

    Purpose: The treatment of ventral hernias (VH) has been a challenging problem for medical care. Repair of these hernias is fraught with failure; recurrence rates ranging from 24% to 43% have been reported, even with the use of biocompatible mesh. Currently, computed tomography (CT) is used to guide intervention through expert, but qualitative, clinical judgments; notably, quantitative metrics based on image-processing are not used. The authors propose that image segmentation methods to capture the three-dimensional structure of the abdominal wall and its abnormalities will provide a foundation on which to measure geometric properties of hernias and surrounding tissues and, therefore, to optimize intervention. Methods: In this study with 20 clinically acquired CT scans on postoperative patients, the authors demonstrated a novel approach to geometric classification of the abdominal wall. The authors’ approach uses a texture analysis based on Gabor filters to extract feature vectors and follows a fuzzy c-means clustering method to estimate voxelwise probability memberships for eight clusters. The memberships estimated from the texture analysis are helpful to identify anatomical structures with inhomogeneous intensities. The membership was used to guide the level set evolution, as well as to derive an initial start close to the abdominal wall. Results: Segmentation results on abdominal walls were both quantitatively and qualitatively validated with surface errors based on manually labeled ground truth. Using texture, mean surface errors for the outer surface of the abdominal wall were less than 2 mm, with 91% of the outer surface less than 5 mm away from the manual tracings; errors were significantly greater (2–5 mm) for methods that did not use the texture. Conclusions: The authors’ approach establishes a baseline for characterizing the abdominal wall for improving VH care. Inherent texture patterns in CT scans are helpful to the tissue classification, and texture

  13. Minimization of sample volume with air-segmented sample injection and the simultaneous determination of trace elements by ICP-MS.

    PubMed

    Noguchi, Osamu; Oshima, Mitsuko; Motomizu, Shoji

    2008-05-01

    The application of inductively coupled plasma mass spectrometry (ICP-MS) to forensic chemistry was studied. The developed method, air-segmented sample injection (ASSI) coupled with ICP-MS, allowed the determination of about 25 elements at the sub-ppb level with only 0.2 ml of a sample solution. The optimum sample flow rate was found to be 0.4 ml min(-1), along with a sample suction time of 30 s. The proposed method was validated by determining trace elements in river-water certified reference material (SLRS-4) issued by National Research Council Canada. The analytical results of the proposed method were in good agreement with the certified values. This method was successfully applied to a human hair sample, the volume of which was 3 ml.

  14. Semi-automatic segmentation and modeling of the cervical spinal cord for volume quantification in multiple sclerosis patients from magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Sonkova, Pavlina; Evangelou, Iordanis E.; Gallo, Antonio; Cantor, Fredric K.; Ohayon, Joan; McFarland, Henry F.; Bagnato, Francesca

    2008-03-01

    Spinal cord (SC) tissue loss is known to occur in some patients with multiple sclerosis (MS), resulting in SC atrophy. Currently, no measurement tools exist to determine the magnitude of SC atrophy from Magnetic Resonance Images (MRI). We have developed and implemented a novel semi-automatic method for quantifying the cervical SC volume (CSCV) from MRI based on level sets. The image dataset consisted of SC MRI exams obtained at 1.5 Tesla from 12 MS patients (10 relapsing-remitting and 2 secondary progressive) and 12 age- and gender-matched healthy volunteers (HVs). 3D high-resolution image data were acquired using an IR-FSPGR sequence acquired in the sagittal plane. The mid-sagittal slice (MSS) was automatically located based on the entropy calculation for each of the consecutive sagittal slices. The image data were then pre-processed by 3D anisotropic diffusion filtering for noise reduction and edge enhancement before segmentation with a level set formulation which did not require re-initialization. The developed method was tested against manual segmentation (considered ground truth), and intra-observer and inter-observer variability were evaluated.

  15. Using semi-automated segmentation of computed tomography datasets for three-dimensional visualization and volume measurements of equine paranasal sinuses.

    PubMed

    Brinkschulte, Markus; Bienert-Zeit, Astrid; Lüpke, Matthias; Hellige, Maren; Staszyk, Carsten; Ohnesorge, Bernhard

    2013-01-01

    The system of the paranasal sinuses morphologically represents one of the most complex parts of the equine body. A clear understanding of spatial relationships is needed for correct diagnosis and treatment. The purpose of this study was to describe the anatomy and volume of equine paranasal sinuses using three-dimensional (3D) reformatted renderings of computed tomography (CT) slices. Heads of 18 cadaver horses, aged 2-25 years, were analyzed by the use of separate semi-automated segmentation of the following bilateral paranasal sinus compartments: rostral maxillary sinus (Sinus maxillaris rostralis), ventral conchal sinus (Sinus conchae ventralis), caudal maxillary sinus (Sinus maxillaris caudalis), dorsal conchal sinus (Sinus conchae dorsalis), frontal sinus (Sinus frontalis), sphenopalatine sinus (Sinus sphenopalatinus), and middle conchal sinus (Sinus conchae mediae). Reconstructed structures were displayed separately, grouped, or altogether as transparent or solid elements to visualize individual paranasal sinus morphology. The paranasal sinuses appeared to be divided into two systems by the maxillary septum (Septum sinuum maxillarium). The first or rostral system included the rostral maxillary and ventral conchal sinus. The second or caudal system included the caudal maxillary, dorsal conchal, frontal, sphenopalatine, and middle conchal sinuses. These two systems overlapped and were interlocked due to the oblique orientation of the maxillary septum. Total volumes of the paranasal sinuses ranged from 911.50 to 1502.00 ml (mean ± SD, 1151.00 ± 186.30 ml). 3D renderings of equine paranasal sinuses by use of semi-automated segmentation of CT-datasets improved understanding of this anatomically challenging region.
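Volume quantification from a voxel segmentation like this reduces to counting labelled voxels and multiplying by the physical voxel volume from the image header. A sketch with invented voxel spacing and a synthetic label volume (not the study's CT data):

```python
import numpy as np

# Assumed CT voxel spacing in mm: in-plane x, y and slice thickness.
spacing_mm = (0.5, 0.5, 1.0)
voxel_volume_ml = np.prod(spacing_mm) / 1000.0   # mm^3 -> ml

# Synthetic label volume: a 60 x 80 x 100-voxel block marked as one sinus
# compartment (label 1), everything else background (label 0).
labels = np.zeros((120, 160, 200), dtype=np.uint8)
labels[30:90, 40:120, 50:150] = 1

sinus_ml = np.count_nonzero(labels == 1) * voxel_volume_ml
print(round(sinus_ml, 1))
```

Summing such per-compartment volumes over the seven segmented sinuses gives totals comparable to the 911.50-1502.00 ml range reported.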

  16. Method for measuring anterior chamber volume by image analysis

    NASA Astrophysics Data System (ADS)

    Zhai, Gaoshou; Zhang, Junhong; Wang, Ruichang; Wang, Bingsong; Wang, Ningli

    2007-12-01

    Anterior chamber volume (ACV) is very important for an ophthalmologist in making a rational pathological diagnosis for patients who have optic diseases such as glaucoma, yet it is difficult to measure accurately. In this paper, a method is devised to measure anterior chamber volumes based on JPEG-formatted image files that have been converted from medical images acquired with the anterior-chamber optical coherence tomographer (AC-OCT) and corresponding image-processing software. The corresponding algorithms for image analysis and ACV calculation are implemented in VC++, and a series of anterior chamber images of typical patients are analyzed; the calculated anterior chamber volumes are verified to be in accord with clinical observation. The results show that the measurement method is effective and feasible and has the potential to improve the accuracy of ACV calculation. Meanwhile, some measures should be taken to simplify the manual preprocessing of the images.

  17. SuperSegger: robust image segmentation, analysis and lineage tracking of bacterial cells.

    PubMed

    Stylianidou, Stella; Brennan, Connor; Nissen, Silas B; Kuwada, Nathan J; Wiggins, Paul A

    2016-11-01

    Many quantitative cell biology questions require fast yet reliable automated image segmentation to identify and link cells from frame-to-frame, and characterize the cell morphology and fluorescence. We present SuperSegger, an automated MATLAB-based image processing package well-suited to quantitative analysis of high-throughput live-cell fluorescence microscopy of bacterial cells. SuperSegger incorporates machine-learning algorithms to optimize cellular boundaries and automated error resolution to reliably link cells from frame-to-frame. Unlike existing packages, it can reliably segment microcolonies with many cells, facilitating the analysis of cell-cycle dynamics in bacteria as well as cell-contact mediated phenomena. This package has a range of built-in capabilities for characterizing bacterial cells, including the identification of cell division events, mother, daughter and neighbouring cells, and computing statistics on cellular fluorescence, the location and intensity of fluorescent foci. SuperSegger provides a variety of postprocessing data visualization tools for single cell and population level analysis, such as histograms, kymographs, frame mosaics, movies and consensus images. Finally, we demonstrate the power of the package by analyzing lag phase growth with single cell resolution.

  18. Final safety analysis report for the Galileo Mission: Volume 2, Book 2: Accident model document: Appendices

    SciTech Connect

    Not Available

    1988-12-15

    This section of the Accident Model Document (AMD) presents the appendices which describe the various analyses that have been conducted for use in the Galileo Final Safety Analysis Report II, Volume II. Included in these appendices are the approaches, techniques, conditions and assumptions used in the development of the analytical models plus the detailed results of the analyses. Also included in these appendices are summaries of the accidents and their associated probabilities and environment models taken from the Shuttle Data Book (NSTS-08116), plus summaries of the several segments of the recent GPHS safety test program. The information presented in these appendices is used in Section 3.0 of the AMD to develop the Failure/Abort Sequence Trees (FASTs) and to determine the fuel releases (source terms) resulting from the potential Space Shuttle/IUS accidents throughout the missions.

  19. Fast segmentation and high-quality three-dimensional volume mesh creation from medical images for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jermyn, Michael; Ghadyani, Hamid; Mastanduno, Michael A.; Turner, Wes; Davis, Scott C.; Dehghani, Hamid; Pogue, Brian W.

    2013-08-01

    Multimodal approaches that combine near-infrared (NIR) and conventional imaging modalities have been shown to improve optical parameter estimation dramatically and thus represent a prevailing trend in NIR imaging. These approaches typically involve applying anatomical templates from magnetic resonance imaging/computed tomography/ultrasound images to guide the recovery of optical parameters. However, merging these data sets using current technology requires multiple software packages, substantial expertise, significant time-commitment, and often results in unacceptably poor mesh quality for optical image reconstruction, a reality that represents a significant roadblock for translational research of multimodal NIR imaging. This work addresses these challenges directly by introducing automated digital imaging and communications in medicine image stack segmentation and a new one-click three-dimensional mesh generator optimized for multimodal NIR imaging, and combining these capabilities into a single software package (available for free download) with a streamlined workflow. Image processing time and mesh quality benchmarks were examined for four common multimodal NIR use-cases (breast, brain, pancreas, and small animal) and were compared to a commercial image processing package. Applying these tools resulted in a fivefold decrease in image processing time and 62% improvement in minimum mesh quality, in the absence of extra mesh postprocessing. These capabilities represent a significant step toward enabling translational multimodal NIR research for both expert and nonexpert users in an open-source platform.

  20. Identifying radiotherapy target volumes in brain cancer by image analysis.

    PubMed

    Cheng, Kun; Montgomery, Dean; Feng, Yang; Steel, Robin; Liao, Hanqing; McLaren, Duncan B; Erridge, Sara C; McLaughlin, Stephen; Nailon, William H

    2015-10-01

    To establish the optimal radiotherapy fields for treating brain cancer patients, the tumour volume is often outlined on magnetic resonance (MR) images, where the tumour is clearly visible, and mapped onto computerised tomography images used for radiotherapy planning. This process requires considerable clinical experience and is time consuming, which will continue to increase as more complex image sequences are used in this process. Here, the potential of image analysis techniques for automatically identifying the radiation target volume on MR images, and thereby assisting clinicians with this difficult task, was investigated. A gradient-based level set approach was applied on the MR images of five patients with grades II, III and IV malignant cerebral glioma. The relationship between the target volumes produced by image analysis and those produced by a radiation oncologist was also investigated. The contours produced by image analysis were compared with the contours produced by an oncologist and used for treatment. In 93% of cases, the Dice similarity coefficient was found to be between 60 and 80%. This feasibility study demonstrates that image analysis has the potential for automatic outlining in the management of brain cancer patients, however, more testing and validation on a much larger patient cohort is required.
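The Dice similarity coefficient used to compare the automatic and clinician contours is defined as DSC = 2|A ∩ B| / (|A| + |B|) over binary masks. A self-contained sketch on toy masks (the shapes are invented, not patient contours):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A n B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy target volumes: a 20x20-pixel square versus the same square shifted
# by 4 pixels, mimicking an automatic vs. manual contour mismatch.
auto = np.zeros((64, 64), dtype=bool)
auto[10:30, 10:30] = True
manual = np.zeros((64, 64), dtype=bool)
manual[14:34, 10:30] = True

print(round(dice(auto, manual), 2))
```

A DSC between 0.6 and 0.8, as reported for 93% of cases here, corresponds to moderate overlap of this kind.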

  1. Identifying radiotherapy target volumes in brain cancer by image analysis

    PubMed Central

    Cheng, Kun; Montgomery, Dean; Feng, Yang; Steel, Robin; Liao, Hanqing; McLaren, Duncan B.; Erridge, Sara C.; McLaughlin, Stephen

    2015-01-01

    To establish the optimal radiotherapy fields for treating brain cancer patients, the tumour volume is often outlined on magnetic resonance (MR) images, where the tumour is clearly visible, and mapped onto computerised tomography images used for radiotherapy planning. This process requires considerable clinical experience and is time consuming, which will continue to increase as more complex image sequences are used in this process. Here, the potential of image analysis techniques for automatically identifying the radiation target volume on MR images, and thereby assisting clinicians with this difficult task, was investigated. A gradient-based level set approach was applied on the MR images of five patients with grades II, III and IV malignant cerebral glioma. The relationship between the target volumes produced by image analysis and those produced by a radiation oncologist was also investigated. The contours produced by image analysis were compared with the contours produced by an oncologist and used for treatment. In 93% of cases, the Dice similarity coefficient was found to be between 60 and 80%. This feasibility study demonstrates that image analysis has the potential for automatic outlining in the management of brain cancer patients, however, more testing and validation on a much larger patient cohort is required. PMID:26609418

  2. Comparative analysis of methods for estimating arm segment parameters and joint torques from inverse dynamics.

    PubMed

    Piovesan, Davide; Pierobon, Alberto; Dizio, Paul; Lackner, James R

    2011-03-01

    A common problem in the analyses of upper limb unfettered reaching movements is the estimation of joint torques using inverse dynamics. The inaccuracy in the estimation of joint torques can be caused by the inaccuracy in the acquisition of kinematic variables, body segment parameters (BSPs), and approximation in the biomechanical models. The effect of uncertainty in the estimation of body segment parameters can be especially important in the analysis of movements with high acceleration. A sensitivity analysis was performed to assess the relevance of different sources of inaccuracy in inverse dynamics analysis of a planar arm movement. Eight regression models and one water immersion method for the estimation of BSPs were used to quantify the influence of inertial models on the calculation of joint torques during numerical analysis of unfettered forward arm reaching movements. Thirteen subjects performed 72 forward planar reaches between two targets located on the horizontal plane and aligned with the median plane. Using a planar, double link model for the arm with a floating shoulder, we calculated the normalized joint torque peak and a normalized root mean square (rms) of torque at the shoulder and elbow joints. Statistical analyses quantified the influence of different BSP models on the kinetic variable variance for given uncertainty on the estimation of joint kinematics and biomechanical modeling errors. Our analysis revealed that the choice of BSP estimation method had a particular influence on the normalized rms of joint torques. Moreover, the normalization of kinetic variables to BSPs for a comparison among subjects showed that the interaction between the BSP estimation method and the subject specific somatotype and movement kinematics was a significant source of variance in the kinetic variables. 
The normalized joint torque peak and the normalized root mean square of joint torque represented valuable parameters to compare the effect of BSP estimation methods

  3. Segmentation and Visual Analysis of Whole-Body Mouse Skeleton microSPECT

    PubMed Central

    Khmelinskii, Artem; Groen, Harald C.; Baiker, Martin; de Jong, Marion; Lelieveldt, Boudewijn P. F.

    2012-01-01

    Whole-body SPECT small animal imaging is used to study cancer, and plays an important role in the development of new drugs. Comparing and exploring whole-body datasets can be a difficult and time-consuming task due to the inherent heterogeneity of the data (high volume/throughput, multi-modality, postural and positioning variability). The goal of this study was to provide a method to align and compare side-by-side multiple whole-body skeleton SPECT datasets in a common reference, thus eliminating acquisition variability that exists between the subjects in cross-sectional and multi-modal studies. Six whole-body SPECT/CT datasets of BALB/c mice injected with bone targeting tracers 99mTc-methylene diphosphonate (99mTc-MDP) and 99mTc-hydroxymethane diphosphonate (99mTc-HDP) were used to evaluate the proposed method. An articulated version of the MOBY whole-body mouse atlas was used as a common reference. Its individual bones were registered one-by-one to the skeleton extracted from the acquired SPECT data following an anatomical hierarchical tree. Sequential registration was used while constraining the local degrees of freedom (DoFs) of each bone in accordance to the type of joint and its range of motion. The Articulated Planar Reformation (APR) algorithm was applied to the segmented data for side-by-side change visualization and comparison of data. To quantitatively evaluate the proposed algorithm, bone segmentations of extracted skeletons from the correspondent CT datasets were used. Euclidean point to surface distances between each dataset and the MOBY atlas were calculated. The obtained results indicate that after registration, the mean Euclidean distance decreased from 11.5±12.1 to 2.6±2.1 voxels. The proposed approach yielded satisfactory segmentation results with minimal user intervention. It proved to be robust for “incomplete” data (large chunks of skeleton missing) and for an intuitive exploration and comparison of multi-modal SPECT/CT cross
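The evaluation metric, a mean Euclidean distance between each dataset point and the atlas, can be sketched with a brute-force nearest-neighbour computation. The paper uses point-to-surface distances in voxel units; point-to-point is a simplified stand-in here, and the point sets below are random:

```python
import numpy as np

rng = np.random.default_rng(3)
atlas = rng.uniform(0, 100, size=(500, 3))               # atlas point cloud
dataset = atlas + rng.normal(0, 1.0, size=atlas.shape)   # noisy registered copy

# Pairwise distance matrix (N x M), then the minimum over atlas points for
# each dataset point; the mean of those minima is the reported statistic.
d = np.linalg.norm(dataset[:, None, :] - atlas[None, :, :], axis=2)
mean_dist = d.min(axis=1).mean()
print(mean_dist < 2.0)
```

For large point clouds a KD-tree would replace the O(N·M) distance matrix, but the statistic being computed is the same.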

  4. Segmentation and visual analysis of whole-body mouse skeleton microSPECT.

    PubMed

    Khmelinskii, Artem; Groen, Harald C; Baiker, Martin; de Jong, Marion; Lelieveldt, Boudewijn P F

    2012-01-01

    Whole-body SPECT small animal imaging is used to study cancer, and plays an important role in the development of new drugs. Comparing and exploring whole-body datasets can be a difficult and time-consuming task due to the inherent heterogeneity of the data (high volume/throughput, multi-modality, postural and positioning variability). The goal of this study was to provide a method to align and compare side-by-side multiple whole-body skeleton SPECT datasets in a common reference, thus eliminating acquisition variability that exists between the subjects in cross-sectional and multi-modal studies. Six whole-body SPECT/CT datasets of BALB/c mice injected with bone targeting tracers (99m)Tc-methylene diphosphonate ((99m)Tc-MDP) and (99m)Tc-hydroxymethane diphosphonate ((99m)Tc-HDP) were used to evaluate the proposed method. An articulated version of the MOBY whole-body mouse atlas was used as a common reference. Its individual bones were registered one-by-one to the skeleton extracted from the acquired SPECT data following an anatomical hierarchical tree. Sequential registration was used while constraining the local degrees of freedom (DoFs) of each bone in accordance to the type of joint and its range of motion. The Articulated Planar Reformation (APR) algorithm was applied to the segmented data for side-by-side change visualization and comparison of data. To quantitatively evaluate the proposed algorithm, bone segmentations of extracted skeletons from the correspondent CT datasets were used. Euclidean point to surface distances between each dataset and the MOBY atlas were calculated. The obtained results indicate that after registration, the mean Euclidean distance decreased from 11.5±12.1 to 2.6±2.1 voxels. The proposed approach yielded satisfactory segmentation results with minimal user intervention. It proved to be robust for "incomplete" data (large chunks of skeleton missing) and for an intuitive exploration and comparison of multi-modal SPECT/CT cross
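The evaluation metric used above, the mean Euclidean point-to-surface distance between each dataset and the atlas, can be approximated by a nearest-neighbor computation against sampled surface points. This is an illustrative sketch only, not the authors' implementation; the function name `mean_nearest_distance` and the brute-force pairing are assumptions.

```python
import numpy as np

def mean_nearest_distance(points, surface_pts):
    """Mean Euclidean distance from each point to its nearest surface sample.

    Brute-force stand-in for a point-to-surface metric; both inputs are
    (N, 3) arrays of coordinates (e.g., in voxel units).
    """
    points = np.asarray(points, float)
    surface_pts = np.asarray(surface_pts, float)
    # Pairwise distance matrix of shape (n_points, n_surface_samples).
    d = np.linalg.norm(points[:, None, :] - surface_pts[None, :, :], axis=2)
    # For each point, keep the distance to its closest surface sample.
    return d.min(axis=1).mean()
```

In practice a spatial index (k-d tree) would replace the quadratic distance matrix for large skeleton point clouds.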

  5. An automated target recognition technique for image segmentation and scene analysis

    SciTech Connect

    Baumgart, C.W.; Ciarcia, C.A.

    1994-02-01

Automated target recognition software has been designed to perform image segmentation and scene analysis. Specifically, this software was developed as a package for the Army's Minefield and Reconnaissance and Detector (MIRADOR) program. MIRADOR is an on/off road, remote control, multi-sensor system designed to detect buried and surface-emplaced metallic and non-metallic anti-tank mines. The basic requirements for this ATR software were: (1) an ability to separate target objects from the background in low S/N conditions; (2) an ability to handle a relatively high dynamic range in imaging light levels; (3) the ability to compensate for or remove light source effects such as shadows; and (4) the ability to identify target objects as mines. The image segmentation and target evaluation were performed utilizing an integrated and parallel processing approach. Three basic techniques (texture analysis, edge enhancement, and contrast enhancement) were used collectively to extract all potential mine target shapes from the basic image. Target evaluation was then performed using a combination of size, geometrical, and fractal characteristics, which resulted in a calculated probability for each target shape. Overall results with this algorithm were quite good, though there is a trade-off between detection confidence and the number of false alarms. This technology also has applications in the areas of hazardous waste site remediation, archaeology, and law enforcement.

  6. A novel method for the measurement of linear body segment parameters during clinical gait analysis.

    PubMed

    Geil, Mark D

    2013-09-01

Clinical gait analysis is a valuable tool for the understanding of motion disorders and treatment outcomes. Most standard models used in gait analysis rely on predefined sets of body segment parameters (BSPs) that must be measured on each individual. Traditionally, these parameters are measured using calipers and tape measures. The process can be time consuming and is prone to several sources of error. This investigation explored a novel method for rapid recording of linear body segment parameters using magnetic-field based digital calipers commonly used for a different purpose in prosthetics and orthotics. The digital method was found to be comparable to the traditional one in all linear measures, and data capture was significantly faster, with a mean time saving of 2.5 min for 10 measurements. Digital calipers only record linear distances, and were less accurate when diameters were used to approximate limb circumferences. Experience in measuring BSPs is important, as an experienced measurer was significantly faster than a graduate student and showed less difference between methods. Comparing measurement of adults vs. children showed greater differences with adults, and some method-dependence. If the hardware is available, digital caliper measurement of linear BSPs is accurate and rapid.

  7. SNP discovery and haplotype analysis in the segmentally duplicated DRD5 coding region

    PubMed Central

    HOUSLEY, D. J. E.; NIKOLAS, M.; VENTA, P. J.; JERNIGAN, K. A.; WALDMAN, I. D.; NIGG, J. T.; FRIDERICI, K. H.

    2009-01-01

    SUMMARY The dopamine receptor 5 gene (DRD5) holds much promise as a candidate locus for contributing to neuropsychiatric disorders and other diseases influenced by the dopaminergic system, as well as having potential to affect normal behavioral variation. However, detailed analyses of this gene have been complicated by its location within a segmentally duplicated chromosomal region. Microsatellites and SNPs upstream from the coding region have been used for association studies, but we find, using bioinformatics resources, that these markers all lie within a previously unrecognized second segmental duplication (SD). In order to accurately analyze the DRD5 locus for polymorphisms in the absence of contaminating pseudogene sequences, we developed a fast and reliable method for sequence analysis and genotyping within the DRD5 coding region. We employed restriction enzyme digestion of genomic DNA to eliminate the pseudogenes prior to PCR amplification of the functional gene. This approach allowed us to determine the DRD5 haplotype structure using 31 trios and to reveal additional rare variants in 171 unrelated individuals. We clarify the inconsistencies and errors of the recorded SNPs in dbSNP and HapMap and illustrate the importance of using caution when choosing SNPs in regions of suspected duplications. The simple and relatively inexpensive method presented herein allows for convenient analysis of sequence variation in DRD5 and can be easily adapted to other duplicated genomic regions in order to obtain good quality sequence data. PMID:19397556

  8. Global fractional anisotropy and mean diffusivity together with segmented brain volumes assemble a predictive discriminant model for young and elderly healthy brains: a pilot study at 3T

    PubMed Central

    Garcia-Lazaro, Haydee Guadalupe; Becerra-Laparra, Ivonne; Cortez-Conradis, David; Roldan-Valadez, Ernesto

    2016-01-01

    Summary Several parameters of brain integrity can be derived from diffusion tensor imaging. These include fractional anisotropy (FA) and mean diffusivity (MD). Combination of these variables using multivariate analysis might result in a predictive model able to detect the structural changes of human brain aging. Our aim was to discriminate between young and older healthy brains by combining structural and volumetric variables from brain MRI: FA, MD, and white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) volumes. This was a cross-sectional study in 21 young (mean age, 25.71±3.04 years; range, 21–34 years) and 10 elderly (mean age, 70.20±4.02 years; range, 66–80 years) healthy volunteers. Multivariate discriminant analysis, with age as the dependent variable and WM, GM and CSF volumes, global FA and MD, and gender as the independent variables, was used to assemble a predictive model. The resulting model was able to differentiate between young and older brains: Wilks’ λ = 0.235, χ2 (6) = 37.603, p = .000001. Only global FA, WM volume and CSF volume significantly discriminated between groups. The total accuracy was 93.5%; the sensitivity, specificity and positive and negative predictive values were 91.30%, 100%, 100% and 80%, respectively. Global FA, WM volume and CSF volume are parameters that, when combined, reliably discriminate between young and older brains. A decrease in FA is the strongest predictor of membership of the older brain group, followed by an increase in WM and CSF volumes. Brain assessment using a predictive model might allow the follow-up of selected cases that deviate from normal aging. PMID:27027893
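The separation statistic reported above, Wilks' lambda, is the ratio of within-group to total scatter. The sketch below shows the textbook computation under the assumption of the standard definition; it is not the authors' statistical pipeline, and `wilks_lambda` is an illustrative name.

```python
import numpy as np

def wilks_lambda(X, y):
    """Wilks' lambda for a discriminant model: det(W) / det(T), where W is
    the pooled within-group scatter matrix and T the total scatter matrix.
    Values near 0 indicate well-separated groups; values near 1, no separation."""
    X, y = np.asarray(X, float), np.asarray(y)
    # Total scatter: covariance rescaled back to a sum of squares.
    T = np.atleast_2d(np.cov(X, rowvar=False) * (len(X) - 1))
    # Pooled within-group scatter, summed over the group labels in y.
    W = np.atleast_2d(sum(np.cov(X[y == g], rowvar=False) * (np.sum(y == g) - 1)
                          for g in np.unique(y)))
    return np.linalg.det(W) / np.linalg.det(T)
```

A small lambda (as in the study's 0.235) is what licenses the discriminant model's group separation.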

  9. Optical coherence tomography segmentation analysis in relapsing remitting versus progressive multiple sclerosis

    PubMed Central

    Behbehani, Raed; Abu Al-Hassan, Abdullah; Al-Salahat, Ali; Sriraman, Devarajan; Oakley, J. D.; Alroughani, Raed

    2017-01-01

Introduction Optical coherence tomography (OCT) with retinal segmentation analysis is a valuable tool for assessing axonal loss and neuro-degeneration in multiple sclerosis (MS) through in-vivo imaging, delineation and quantification of retinal layers. There is evidence of deep retinal involvement in MS beyond the inner retinal layers. The ultra-structural retinal changes in MS in different MS phenotypes can reflect differences in the pathophysiologic mechanisms. There is limited data on the pattern of deeper retinal layer involvement in progressive MS (PMS) versus relapsing remitting MS (RRMS). We have compared the OCT segmentation analysis in patients with relapsing-remitting MS and progressive MS. Methods Cross-sectional study of 113 MS patients (226 eyes) (29 PMS, 84 RRMS) and 38 healthy controls (72 eyes). Spectral domain OCT (SDOCT) was acquired using the macular cube protocol (Cirrus HDOCT 5000; Carl Zeiss Meditec), and segmentation of the retinal layers was carried out utilizing Orion software (Voxeleron, USA) to quantify the thicknesses of the individual retinal layers. Results The retinal nerve fiber layer (RNFL) (p = 0.023), the ganglion-cell/inner plexiform layer (GCIPL) (p = 0.006) and the outer plexiform layer (OPL) (p = 0.033) were significantly thinner in PMS compared to RRMS. There was significant negative correlation between the outer nuclear layer (ONL) and EDSS (r = -0.554, p = 0.02) in PMS patients. In RRMS patients with prior optic neuritis, the GCIPL correlated negatively (r = -0.317; p = 0.046), while the photoreceptor layer (PR) correlated positively with EDSS (r = 0.478; p = 0.003). Conclusions Patients with PMS exhibit more atrophy of both the inner and outer retinal layers than RRMS. The ONL in PMS and the GCIPL and PR in RRMS can serve as potential surrogates of disease burden and progression (EDSS). The specific retinal layer predilection and its

  10. Dose-Volume Differences for Computed Tomography and Magnetic Resonance Imaging Segmentation and Planning for Proton Prostate Cancer Therapy

    SciTech Connect

Yeung, Anamaria R.; Vargas, Carlos E.; Falchook, Aaron; Louis, Debbie C.; Olivier, Kenneth; Keole, Sameer; Yeung, Daniel; Mendenhall, Nancy P.; Li, Zuofeng

    2008-12-01

Purpose: To determine the influence of magnetic resonance imaging (MRI)- vs. computed tomography (CT)-based prostate and normal structure delineation on the dose to the target and organs at risk during proton therapy. Methods and Materials: Fourteen patients were simulated in the supine position using both CT and T2 MRI. The prostate, rectum, and bladder were delineated on both imaging modalities. The planning target volume (PTV) was generated from the delineated prostates with a 5-mm axial and 8-mm superior and inferior margin. Two plans were generated and analyzed for each patient: an MRI plan based on the MRI-delineated PTV, and a CT plan based on the CT-delineated PTV. Doses of 78 Gy equivalents (GE) were prescribed to the PTV. Results: Doses to normal structures were lower when MRI was used to delineate the rectum and bladder compared with CT: bladder V50 was 15.3% lower (p = 0.04), and rectum V50 was 23.9% lower (p = 0.003). Poor agreement on the definition of the prostate apex was seen between CT and MRI (p = 0.007). The CT-defined prostate apex was within 2 mm of the apex on MRI only 35.7% of the time. Coverage of the MRI-delineated PTV was significantly decreased with the CT-based plan: the minimum dose to the PTV was reduced by 43% (p < 0.001), and the PTV V99% was reduced by 11% (p < 0.001). Conclusions: Using MRI to delineate the prostate results in more accurate target definition and a smaller target volume compared with CT, allowing for improved target coverage and decreased doses to critical normal structures.
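The V50 figures quoted above follow the standard dose-volume metric: Vx is the percentage of a structure's volume receiving at least x dose. A minimal sketch of that definition over a voxelized dose grid (the function name `v_dose` is an illustrative choice, not from the paper):

```python
import numpy as np

def v_dose(dose_voxels, threshold):
    """V_x dose-volume metric: percent of the structure's voxels receiving
    at least `threshold` dose (e.g., V50 with threshold=50, in Gy or GE).
    Assumes uniform voxel volume within the structure."""
    dose_voxels = np.asarray(dose_voxels, float)
    return 100.0 * np.mean(dose_voxels >= threshold)
```

For example, a bladder whose voxel doses are [10, 60, 70, 40] GE has V50 = 50%.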

  11. Three-Dimensional Blood Vessel Segmentation and Centerline Extraction based on Two-Dimensional Cross-Section Analysis.

    PubMed

    Kumar, Rahul Prasanna; Albregtsen, Fritz; Reimers, Martin; Edwin, Bjørn; Langø, Thomas; Elle, Ole Jakob

    2015-05-01

    The segmentation of tubular tree structures like vessel systems in volumetric datasets is of vital interest for many medical applications. In this paper we present a novel, semi-automatic method for blood vessel segmentation and centerline extraction, by tracking the blood vessel tree from a user-initiated seed point to the ends of the blood vessel tree. The novelty of our method is in performing only two-dimensional cross-section analysis for segmentation of the connected blood vessels. The cross-section analysis is done by our novel single-scale or multi-scale circle enhancement filter, used at the blood vessel trunk or bifurcation, respectively. The method was validated for both synthetic and medical images. Our validation has shown that the cross-sectional centerline error for our method is below 0.8 pixels and the Dice coefficient for our segmentation is 80% ± 2.7%. On combining our method with an optional active contour post-processing, the Dice coefficient for the resulting segmentation is found to be 94% ± 2.4%. Furthermore, by restricting the image analysis to the regions of interest and converting most of the three-dimensional calculations to two-dimensional calculations, the processing was found to be more than 18 times faster than Frangi vesselness with thinning, 8 times faster than user-initiated active contour segmentation with thinning and 7 times faster than our previous method.
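The Dice coefficients quoted above use the standard overlap measure between a computed segmentation and a reference mask. A minimal sketch of that definition (illustrative function name, not the authors' code):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    denom = a.sum() + b.sum()
    # Convention: two empty masks are considered a perfect match.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```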

  12. M1A2 Adjunct Analysis (POSNOV Volume)

    DTIC Science & Technology

    1989-12-01

located a 200 meter gap in the obstacle belt. The MRR is 6 to 12 kilometers from the main defensive lines and must maneuver (navigate) to the 200... Department of the Army, United States Army Armor Center. M1A2 Adjunct Analysis (POSNAV Volume), ACN: 70670. Prepared by: Edward A. Bryla ... Colonel, Armor, Director, Combat Developments; Commander, USAARMC. Certified by: ... Approved by: Robert T. Howard, Brigadier General, U.S.

  13. Fused silica capillaries with two segments of different internal diameters and inner surface roughnesses prepared by etching with supercritical water and used for volume coupling electrophoresis.

    PubMed

    Horká, Marie; Karásek, Pavel; Roth, Michal; Šlais, Karel

    2017-02-22

In this work, single-piece fused silica capillaries with two different internal diameter segments featuring different inner surface roughness were prepared by a new etching technology with supercritical water and used for volume coupling electrophoresis. The concept of separation and online pre-concentration of analytes in a high conductivity matrix is based on online large-volume sample pre-concentration by the combination of transient isotachophoretic stacking and sweeping of charged proteins in micellar electrokinetic chromatography using a non-ionogenic surfactant. The modified surface roughness step contributed to a significant narrowing of the zones of the examined analytes. The sweeping and separating steps were accomplished simultaneously by the use of phosphate buffer (pH 7) containing ethanol, non-ionogenic surfactant Brij 35, and polyethylene glycol (PEG 10000) after sample injection. A sample solution of large volume (maximum 3.7 μL) dissolved in physiological saline solution was injected into the wider end of the capillary, with an inlet inner diameter of 150, 185 or 218 μm. The calibration plots were linear (R(2) ∼ 0.9993) over a 0.060-1 μg/mL range for the proteins used, albumin and cytochrome c. The peak area RSDs from at least 20 independent measurements were below 3.2%. This online pre-concentration technique produced a more than 196-fold increase in sensitivity, and it can be applied for detection of, e.g., the presence of albumin in urine (0.060 μg/mL).
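The linearity claim (R² ≈ 0.9993 over the calibration range) is the coefficient of determination of a least-squares line through concentration vs. signal. A hedged sketch of that check, assuming a simple linear calibration model (function name is illustrative):

```python
import numpy as np

def calibration_r2(conc, signal):
    """Coefficient of determination R^2 for a linear calibration fit
    signal = a*conc + b, as used to verify calibration-plot linearity."""
    conc = np.asarray(conc, float)
    signal = np.asarray(signal, float)
    a, b = np.polyfit(conc, signal, 1)          # least-squares line
    resid = signal - (a * conc + b)
    ss_res = np.sum(resid ** 2)                  # residual sum of squares
    ss_tot = np.sum((signal - signal.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```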

  14. Sequence and phylogenetic analysis of M-class genome segments of novel duck reovirus NP03

    PubMed Central

    Wang, Shao; Chen, Shilong; Cheng, Xiaoxia; Chen, Shaoying; Lin, FengQiang; Jiang, Bing; Zhu, Xiaoli; Li, Zhaolong; Wang, Jinxiang

    2015-01-01

    We report the sequence and phylogenetic analysis of the entire M1, M2, and M3 genome segments of the novel duck reovirus (NDRV) NP03. Alignment between the newly determined nucleotide sequences as well as their deduced amino acid sequences and the published sequences of avian reovirus (ARV) was carried out with DNASTAR software. Sequence comparison showed that the M2 gene had the most variability among the M-class genes of DRV. Phylogenetic analysis of the M-class genes of ARV strains revealed different lineages and clusters within DRVs. The 5 NDRV strains used in this study fall into a well-supported lineage that includes chicken ARV strains, whereas Muscovy DRV (MDRV) strains are separate from NDRV strains and form a distinct genetic lineage in the M2 gene tree. However, the MDRV and NDRV strains are closely related and located in a common lineage in the M1 and M3 gene trees, respectively. PMID:25852231

  15. CADDIS Volume 4. Data Analysis: Exploratory Data Analysis

    EPA Pesticide Factsheets

    Intro to exploratory data analysis. Overview of variable distributions, scatter plots, correlation analysis, GIS datasets. Use of conditional probability to examine stressor levels and impairment. Exploring correlations among multiple stressors.
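The conditional-probability analysis mentioned above asks, in its simplest form: among sites where a stressor exceeds some level, what fraction are impaired? A minimal sketch of that computation (the function name and data layout are assumptions for illustration, not CADDIS code):

```python
def conditional_impairment_prob(stressor, impaired, threshold):
    """Estimate P(impaired | stressor > threshold) from paired observations:
    `stressor` holds measured stressor levels, `impaired` holds 0/1 flags."""
    selected = [imp for s, imp in zip(stressor, impaired) if s > threshold]
    # Fraction of the high-stressor sites that are impaired.
    return sum(selected) / len(selected)
```

Sweeping `threshold` over the observed stressor range yields the conditional-probability curve used to explore stressor-impairment associations.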

  16. Computerized analysis of coronary artery disease: Performance evaluation of segmentation and tracking of coronary arteries in CT angiograms

    SciTech Connect

    Zhou, Chuan Chan, Heang-Ping; Chughtai, Aamer; Kuriakose, Jean; Agarwal, Prachi; Kazerooni, Ella A.; Hadjiiski, Lubomir M.; Patel, Smita; Wei, Jun

    2014-08-15

Purpose: The authors are developing a computer-aided detection system to assist radiologists in analysis of coronary artery disease in coronary CT angiograms (cCTA). This study evaluated the accuracy of the authors’ coronary artery segmentation and tracking methods, which are the essential steps to define the search space for the detection of atherosclerotic plaques. Methods: The heart region in cCTA is segmented and the vascular structures are enhanced using the authors’ multiscale coronary artery response (MSCAR) method that performed 3D multiscale filtering and analysis of the eigenvalues of Hessian matrices. Starting from seed points at the origins of the left and right coronary arteries, a 3D rolling balloon region growing (RBG) method that adapts to the local vessel size segmented and tracked each of the coronary arteries and identifies the branches along the tracked vessels. The branches are queued and subsequently tracked until the queue is exhausted. With Institutional Review Board approval, 62 cCTA were collected retrospectively from the authors’ patient files. Three experienced cardiothoracic radiologists manually tracked and marked center points of the coronary arteries as reference standard following the 17-segment model that includes clinically significant coronary arteries. Two radiologists visually examined the computer-segmented vessels and marked the mistakenly tracked veins and noisy structures as false positives (FPs). For the 62 cases, the radiologists marked a total of 10191 center points on 865 visible coronary artery segments. Results: The computer-segmented vessels overlapped with 83.6% (8520/10191) of the center points. Relative to the 865 radiologist-marked segments, the sensitivity reached 91.9% (795/865) if a true positive is defined as a computer-segmented vessel that overlapped with at least 10% of the reference center points marked on the segment. When the overlap threshold is increased to 50% and 100%, the sensitivities were 86
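The segment-level sensitivity defined above counts a reference segment as detected when the computed vessel overlaps at least a threshold fraction of its marked center points. A minimal sketch of that bookkeeping (illustrative function name; inputs are per-segment overlap fractions, not the study's raw data):

```python
def segment_sensitivity(overlap_fractions, threshold):
    """Sensitivity under an overlap-threshold true-positive rule: a reference
    segment counts as detected if the segmented vessel covers at least
    `threshold` (fraction) of its marked center points."""
    hits = sum(f >= threshold for f in overlap_fractions)
    return hits / len(overlap_fractions)
```

Raising the threshold (10% → 50% → 100%) can only keep or lower the sensitivity, which matches the trend reported in the results.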

  17. Segmentation and segment connection of obstructed colon

    NASA Astrophysics Data System (ADS)

    Medved, Mario; Truyen, Roel; Likar, Bostjan; Pernus, Franjo

    2004-05-01

    Segmentation of colon CT images is the main factor that inhibits automation of virtual colonoscopy. There are two main reasons that make efficient colon segmentation difficult. First, besides the colon, the small bowel, lungs, and stomach are also gas-filled organs in the abdomen. Second, peristalsis or residual feces often obstruct the colon, so that it consists of multiple gas-filled segments. In virtual colonoscopy, it is very useful to automatically connect the centerlines of these segments into a single colon centerline. Unfortunately, in some cases this is a difficult task. In this study a novel method for automated colon segmentation and connection of colon segments' centerlines is proposed. The method successfully combines features of segments, such as centerline and thickness, with information on main colon segments. The results on twenty colon cases show that the method performs well in cases of small obstructions of the colon. Larger obstructions are mostly also resolved properly, especially if they do not appear in the sigmoid part of the colon. Obstructions in the sigmoid part of the colon sometimes cause improper classification of the small bowel segments. If a segment is too small, it is classified as the small bowel segment. However, such misclassifications have little impact on colon analysis.

  18. MADAM: Multiple-Attribute Decision Analysis Model. Volume 2

    DTIC Science & Technology

    1981-12-01

CONTAINED A SIGNIFICANT NUMBER OF PAGES WHICH DO NOT REPRODUCE LEGIBLY. AFIT/GOR/AA/81D-1. MADAM: Multiple-Attribute Decision Analysis Model, Volume II. Thesis, Wayne A. Stimpson, 2Lt USAFR, Feb 19 1982. ... objectives to be satisfied. The program is MADAM: Multiple-Attribute Decision Analysis Model; it is written in FORTRAN V and is implemented on the

  19. Interfacial energetics approach for analysis of endothelial cell and segmental polyurethane interactions.

    PubMed

    Hill, Michael J; Cheah, Calvin; Sarkar, Debanjan

    2016-08-01

Understanding the physicochemical interactions between endothelial cells and biomaterials is vital for regenerative medicine applications. Particularly, physical interactions between the substratum interface and spontaneously deposited biomacromolecules, as well as between the induced biomolecular interface and the cell, in terms of surface energetics, are important factors regulating cellular functions. In this study, we examined the physical interactions between endothelial cells and segmental polyurethanes (PUs), using l-tyrosine based PUs to examine the structure-property relations in terms of PU surface energies and endothelial cell organization. Since contact angle analysis used to probe surface energetics provides an incomplete interpretation and understanding of the physical interactions, we sought a combinatorial surface energetics approach utilizing water contact angle, Zisman's critical surface tension (CST), Kaelble's numerical method, and van Oss-Good-Chaudhury theory (vOGCT), and applied it to both substrata and serum-adsorbed matrix to correlate human umbilical vein endothelial cell (HUVEC) behavior with the surface energetics of l-tyrosine based PU surfaces. We determined that, while the water contact angle of the substratum or adsorbed matrix did not correlate well with HUVEC behavior, overall higher polarity according to the numerical method as well as Lewis base character of the substratum explained increased HUVEC interaction and monolayer formation as opposed to organization into networks. Cell interaction was also interpreted in terms of the combined effects of substratum and adsorbed matrix polarity and Lewis acid-base character to determine the effect of PU segments.
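One of the methods named above, Zisman's critical surface tension, fits cos(θ) of a series of probe liquids against their surface tensions and extrapolates to cos(θ) = 1 (complete wetting). A hedged sketch of that extrapolation, assuming a simple linear Zisman plot (function name is illustrative):

```python
import numpy as np

def zisman_cst(surface_tensions, contact_angles_deg):
    """Zisman critical surface tension: fit a line to cos(theta) vs. liquid
    surface tension (the Zisman plot) and extrapolate to cos(theta) = 1,
    i.e., the surface tension at which complete wetting occurs."""
    gamma = np.asarray(surface_tensions, float)
    cos_t = np.cos(np.radians(np.asarray(contact_angles_deg, float)))
    slope, intercept = np.polyfit(gamma, cos_t, 1)
    # Solve 1 = slope * gamma_c + intercept for gamma_c.
    return (1.0 - intercept) / slope
```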

  20. Investigating materials for breast nodules simulation by using segmentation and similarity analysis of digital images

    NASA Astrophysics Data System (ADS)

    Siqueira, Paula N.; Marcomini, Karem D.; Sousa, Maria A. Z.; Schiabel, Homero

    2015-03-01

The task of identifying the malignancy of nodular lesions on mammograms becomes quite complex due to overlapped structures or even to the granular fibrous tissue which can cause confusion in classifying masses shape, leading to unnecessary biopsies. Efforts to develop methods for automatic masses detection in CADe (Computer Aided Detection) schemes have been made with the aim of assisting radiologists and working as a second opinion. The validation of these methods may be accomplished, for instance, by using databases with clinical images or images acquired through breast phantoms. With this aim, some types of materials were tested in order to produce radiographic phantom images which could characterize a good enough approach to the typical mammograms corresponding to actual breast nodules. Therefore, different nodule patterns were physically produced and used on a previously developed breast phantom. Their characteristics were tested according to the digital images obtained from phantom exposures at a LORAD M-IV mammography unit. Two analyses were performed: the first was the segmentation of regions of interest containing the simulated nodules, both by an automated segmentation technique and by an experienced radiologist who delineated the contour of each nodule by means of a graphic display digitizer; both results were compared using evaluation metrics. The second used the Structural Similarity (SSIM) quality measure to generate quantitative data related to the texture produced by each material. Although all the tested materials proved to be suitable for the study, the PVC film yielded the best results.
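The SSIM measure used above compares two images through their means, variances, and covariance. The sketch below shows the single-window (global) form of the formula for clarity; the usual practice, and likely the study's, is a local sliding window, so this is an illustration rather than the authors' implementation.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Global (single-window) Structural Similarity between two images,
    using the standard constants C1 = (0.01 L)^2 and C2 = (0.03 L)^2."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()   # covariance of the two images
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score 1.0; structurally unrelated ones score near 0.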

  1. ANALYSIS OF THE SEGMENTAL IMPACTION OF FEMORAL HEAD FOLLOWING AN ACETABULAR FRACTURE SURGICALLY MANAGED

    PubMed Central

    Guimarães, Rodrigo Pereira; Kaleka, Camila Cohen; Cohen, Carina; Daniachi, Daniel; Keiske Ono, Nelson; Honda, Emerson Kiyoshi; Polesello, Giancarlo Cavalli; Riccioli, Walter

    2015-01-01

Objective: To correlate the postoperative radiographic evaluation with variables accompanying acetabular fractures in order to determine the predictive factors for segmental impaction of the femoral head. Methods: Retrospective analysis of the medical files of patients submitted to open reduction surgery with internal acetabular fixation. Over approximately 35 years, 596 patients were treated for acetabular fractures; 267 were followed up for at least two years. The others were excluded either because their follow-up was shorter than the minimum time, because of the lack of sufficient data reported in the files, or because they had been submitted to non-surgical treatment. The patients were followed up by one of three surgeons of the group using the Merle d'Aubigné and Postel clinical scales as well as radiological studies. Results: Only two of the studied variables, age and amount of postoperative reduction, showed statistically significant correlation with femoral head impaction. Conclusions: The quality of reduction (anatomical or with up to 2 mm residual deviation) presents a good radiographic evolution, reducing the potential for segmental impaction of the femoral head, a statistically significant finding. PMID:27004191

  2. Robust Anisotropic Diffusion Based Edge Enhancement for Level Set Segmentation and Asymmetry Analysis of Breast Thermograms using Zernike Moments.

    PubMed

    Prabha, S; Sujatha, C M; Ramakrishnan, S

    2015-01-01

Breast thermography plays a major role in early detection of breast cancer, in which the thermal variations are associated with the precancerous state of the breast. The distribution of asymmetrical thermal patterns indicates the pathological condition in breast thermal images. In this work, asymmetry analysis of breast thermal images is carried out using level set segmentation and Zernike moments. The breast tissues are subjected to Tukey’s biweight robust anisotropic diffusion filtering (TBRAD) for the generation of an edge map. A reaction diffusion level set method is employed for segmentation, in which the TBRAD edge map is used as the stopping criterion during the level set evolution. Zernike moments are extracted from the segmented breast tissues to perform asymmetry analysis. Results show that the TBRAD filter is able to enhance the edges near infra mammary folds and lower breast boundaries effectively. The segmented breast tissues are observed to be continuous, with sharper boundaries. This method yields a high degree of correlation (98%) between the segmented output and the ground truth images. Among the extracted Zernike features, higher order moments are found to be significant in demarcating normal and carcinoma breast tissues by 9%. It appears that the methodology adopted here is useful in accurate segmentation and differentiation of normal and carcinoma breast tissues for automated diagnosis of breast abnormalities.
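The robust filter named above relies on Tukey's biweight edge-stopping function, which drives the diffusion coefficient to exactly zero at strong gradients so edges are preserved. A minimal sketch of that function under the standard robust anisotropic diffusion formulation (not the paper's exact code; `tukey_g` is an illustrative name):

```python
import numpy as np

def tukey_g(grad, sigma):
    """Tukey's biweight edge-stopping function for robust anisotropic
    diffusion: g = [1 - (|grad|/sigma)^2]^2 for |grad| <= sigma, else 0.
    Gradients beyond the scale sigma are treated as edges and not diffused."""
    s = np.abs(grad) / sigma
    g = (1.0 - s ** 2) ** 2
    # Hard cutoff: outside the biweight support the coefficient is zero.
    return np.where(s <= 1.0, g, 0.0)
```

In a Perona-Malik-style iteration, each pixel's flux is weighted by `tukey_g` of the local gradient, so smoothing happens inside regions but stops at boundaries.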

  3. Image segmentation for uranium isotopic analysis by SIMS: Combined adaptive thresholding and marker controlled watershed approach

    SciTech Connect

    Willingham, David G.; Naes, Benjamin E.; Heasler, Patrick G.; Zimmer, Mindy M.; Barrett, Christopher A.; Addleman, Raymond S.

    2016-05-31

    A novel approach to particle identification and particle isotope ratio determination has been developed for nuclear safeguard applications. This particle search approach combines an adaptive thresholding algorithm and marker-controlled watershed segmentation (MCWS) transform, which improves the secondary ion mass spectrometry (SIMS) isotopic analysis of uranium containing particle populations for nuclear safeguards applications. The Niblack assisted MCWS approach (a.k.a. SEEKER) developed for this work has improved the identification of isotopically unique uranium particles under conditions that have historically presented significant challenges for SIMS image data processing techniques. Particles obtained from five NIST uranium certified reference materials (CRM U129A, U015, U150, U500 and U850) were successfully identified in regions of SIMS image data 1) where a high variability in image intensity existed, 2) where particles were touching or were in close proximity to one another and/or 3) where the magnitude of ion signal for a given region was count limited. Analysis of the isotopic distributions of uranium containing particles identified by SEEKER showed four distinct, accurately identified 235U enrichment distributions, corresponding to the NIST certified 235U/238U isotope ratios for CRM U129A/U015 (not statistically differentiated), U150, U500 and U850. Additionally, comparison of the minor uranium isotope (234U, 235U and 236U) atom percent values verified that, even in the absence of high precision isotope ratio measurements, SEEKER could be used to segment isotopically unique uranium particles from SIMS image data. Although demonstrated specifically for SIMS analysis of uranium containing particles for nuclear safeguards, SEEKER has application in addressing a broad set of image processing challenges.
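The adaptive thresholding stage referenced above builds on Niblack's method, where the local threshold is the window mean plus k times the window standard deviation. This is a deliberately simple, loop-based sketch of that rule for clarity, not the SEEKER implementation:

```python
import numpy as np

def niblack_threshold(image, window=15, k=-0.2):
    """Niblack local thresholding: T(i,j) = mean + k*std over a sliding
    window centered on each pixel; returns the binary mask image > T.
    Slow O(N * window^2) loop kept for readability."""
    img = np.asarray(image, float)
    half = window // 2
    padded = np.pad(img, half, mode='reflect')
    T = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + window, j:j + window]
            T[i, j] = patch.mean() + k * patch.std()
    return img > T
```

Production code would use integral images (summed-area tables) to compute the local mean and variance in constant time per pixel.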

  4. A novel approach to segmentation and measurement of medical image using level set methods.

    PubMed

    Chen, Yao-Tien

    2017-02-17

    The study proposes a novel approach to segmentation and visualization, together with surface area and volume measurements, for brain medical image analysis. The proposed method contains edge detection and Bayesian-based level set segmentation, surface and volume rendering, and surface area and volume measurements for 3D objects of interest (i.e., brain tumor, brain tissue, or whole brain). Two extensions based on edge detection and the Bayesian level set are first used to segment 3D objects. Ray casting and a modified marching cubes algorithm are then adopted to facilitate volume and surface visualization of the medical-image dataset. To provide physicians with more useful information for diagnosis, the surface area and volume of an examined 3D object are calculated by the techniques of linear algebra and surface integration. Experimental results are reported in terms of 3D object extraction, surface and volume rendering, and surface area and volume measurements for medical image analysis.
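    The volume and surface-area measurements on a segmented 3D object can be illustrated with a much cruder stand-in for the paper's marching-cubes surface integration: count voxels for volume and exposed voxel faces for area. The mask and spacing below are toy assumptions.

```python
import numpy as np

def voxel_measurements(mask, spacing=(1.0, 1.0, 1.0)):
    """Volume by voxel counting and surface area by summing exposed voxel
    faces. A crude stand-in for marching-cubes surface integration, shown
    only to make the measured quantities concrete."""
    dz, dy, dx = spacing
    volume = mask.sum() * dz * dy * dx
    padded = np.pad(mask.astype(np.int8), 1)  # zero border so edge faces count
    face_area = {0: dy * dx, 1: dz * dx, 2: dz * dy}
    area = sum(np.abs(np.diff(padded, axis=a)).sum() * face_area[a]
               for a in range(3))
    return float(volume), float(area)

# A 2x2x2 voxel cube: volume 8 and 24 exposed unit faces.
mask = np.zeros((4, 4, 4), dtype=bool)
mask[1:3, 1:3, 1:3] = True
vol, area = voxel_measurements(mask)
```

    The voxel-face estimate overestimates the area of smooth surfaces, which is why the paper integrates over a marching-cubes mesh instead.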

  5. Evaluation of poly-drug use in methadone-related fatalities using segmental hair analysis.

    PubMed

    Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian

    2015-03-01

    In Denmark, fatal poisoning among drug addicts is often related to methadone. The primary mechanism contributing to fatal methadone overdose is respiratory depression. Concurrent use of other central nervous system (CNS) depressants is suggested to heighten the potential for fatal methadone toxicity. Reduced tolerance due to a short abstinence period is also proposed as a risk factor for fatal overdose. The primary aims of this study were to investigate whether concurrent use of CNS depressants or reduced tolerance were significant risk factors in methadone-related fatalities, using segmental hair analysis. The study included 99 methadone-related fatalities collected in Denmark from 2008 to 2011, where both blood and hair were available. The cases were divided into three subgroups based on the cause of death: methadone poisoning (N=64), poly-drug poisoning (N=28), or methadone poisoning combined with fatal diseases (N=7). No significant differences in methadone concentrations between the subgroups were found in either blood or hair. The methadone blood concentrations were highly variable (0.015-5.3 mg/kg, median: 0.52 mg/kg) and mainly within the concentration range detected in living methadone users. In hair, methadone was detected in 97 fatalities, with concentrations ranging from 0.061 to 211 ng/mg (median: 11 ng/mg). In the remaining two cases, methadone was detected in blood but absent in hair specimens, suggesting that these two subjects were methadone-naive users. Extensive poly-drug use was observed in all three subgroups, both recently and within the last months prior to death. In particular, concurrent use of multiple benzodiazepines was prevalent among the deceased, followed by the abuse of morphine, codeine, amphetamine, cannabis, cocaine and ethanol. By including quantitative segmental hair analysis, additional information on poly-drug use was obtained. In particular, 6-acetylmorphine was detected more frequently in hair specimens, indicating that regular abuse of
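    The segmental reading of a hair sample can be made concrete with a small sketch: each segment covers roughly one month of growth, so a per-segment concentration profile maps onto a drug-use timeline. Segment length, growth rate, and all concentrations below are illustrative assumptions, not the study's data.

```python
def segment_timeline(concentrations_ng_mg, segment_cm=1.0, growth_cm_month=1.0):
    """Map proximal-to-distal segment concentrations onto months before
    sampling, assuming ~1 cm/month scalp hair growth (a common approximation).
    Returns (months_ago_range, concentration) per segment."""
    timeline = []
    for i, conc in enumerate(concentrations_ng_mg):
        start = i * segment_cm / growth_cm_month
        end = (i + 1) * segment_cm / growth_cm_month
        timeline.append(((start, end), conc))
    return timeline

# Toy methadone profile: recent heavy use tapering off in older segments.
profile = segment_timeline([15.0, 12.0, 4.0, 0.5])
```

    A profile like this is what lets the study distinguish recent poly-drug use (proximal segments) from use in the last months before death (distal segments).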

  6. Parallel runway requirement analysis study. Volume 2: Simulation manual

    NASA Technical Reports Server (NTRS)

    Ebrahimi, Yaghoob S.; Chun, Ken S.

    1993-01-01

    This document is a user manual for operating the PLAND_BLUNDER (PLB) simulation program. This simulation is based on two aircraft approaching parallel runways independently and using parallel Instrument Landing System (ILS) equipment during Instrument Meteorological Conditions (IMC). If an aircraft should deviate from its assigned localizer course toward the opposite runway, this constitutes a blunder which could endanger the aircraft on the adjacent path. The worst case scenario would be if the blundering aircraft were unable to recover and continue toward the adjacent runway. PLAND_BLUNDER is a Monte Carlo-type simulation which employs the events and aircraft positioning during such a blunder situation. The model simulates two aircraft performing parallel ILS approaches using Instrument Flight Rules (IFR) or visual procedures. PLB uses a simple movement model and control law in three dimensions (X, Y, Z). The parameters of the simulation inputs and outputs are defined in this document along with a sample of the statistical analysis. This document is the second volume of a two volume set. Volume 1 is a description of the application of the PLB to the analysis of close parallel runway operations.

  7. Automated identification of best-quality coronary artery segments from multiple-phase coronary CT angiography (cCTA) for vessel analysis

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Chughtai, Aamer; Wei, Jun; Kazerooni, Ella A.

    2016-03-01

    We are developing an automated method to identify the best-quality segment among the corresponding segments in multiple-phase cCTA. The coronary artery trees are automatically extracted from different cCTA phases using our multi-scale vessel segmentation and tracking method. An automated registration method is then used to align the multiple-phase artery trees. The corresponding coronary artery segments are identified in the registered vessel trees and are straightened by curved planar reformation (CPR). Four features are extracted from each segment in each phase as quality indicators in the original CT volume and the straightened CPR volume. Each quality indicator is used as a voting classifier to vote on the corresponding segments. A newly designed weighted voting ensemble (WVE) classifier is finally used to determine the best-quality coronary segment. An observer preference study was conducted in which three readers visually rated vessel quality on a 1-to-6 ranking scale. Six and 10 cCTA cases were used as the training and test sets, respectively, in this preliminary study. For the 10 test cases, the agreement between automatically identified best-quality (AI-BQ) segments and the radiologist's top-2 rankings was 79.7%, and the agreements between AI-BQ segments and the other two readers were 74.8% and 83.7%, respectively. The results demonstrated that the performance of our automated method was comparable to those of experienced readers for identification of the best-quality coronary segments.
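    The weighted voting ensemble described above can be sketched in a few lines: each quality indicator votes for the phase where it is best, and the votes are combined with per-indicator weights. The indicator names, values, and weights below are illustrative assumptions, not the paper's trained classifier.

```python
def best_quality_phase(features, weights):
    """Weighted voting ensemble: each quality indicator votes for the phase
    where its value is highest (higher = better here); the phase with the
    largest weighted vote total wins."""
    n_phases = len(next(iter(features.values())))
    votes = [0.0] * n_phases
    for name, values in features.items():
        best = max(range(n_phases), key=lambda p: values[p])
        votes[best] += weights[name]
    return max(range(n_phases), key=lambda p: votes[p])

# Four hypothetical quality indicators measured on three cCTA phases.
features = {
    "contrast":   [0.4, 0.9, 0.6],
    "sharpness":  [0.5, 0.7, 0.8],
    "continuity": [0.3, 0.8, 0.6],
    "uniformity": [0.9, 0.4, 0.5],
}
weights = {"contrast": 0.3, "sharpness": 0.2, "continuity": 0.3, "uniformity": 0.2}
phase = best_quality_phase(features, weights)
```

    Here phase 1 wins because the two heavily weighted indicators both prefer it, even though the other two indicators vote elsewhere.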

  8. Application of Control Volume Analysis to Cerebrospinal Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Wei, Timothy; Cohen, Benjamin; Anor, Tomer; Madsen, Joseph

    2011-11-01

    Hydrocephalus is among the most common birth defects and at present can be neither prevented nor cured. Afflicted individuals face serious issues, which are currently too complicated and not well enough understood to treat via systematic therapies. This talk outlines the framework and application of a control volume methodology to clinical Phase Contrast MRI data. Specifically, integral control volume analysis utilizes a fundamental fluid dynamics methodology to quantify intracranial dynamics within a precise, direct, and physically meaningful framework. A chronically shunted, hydrocephalic patient in need of a revision procedure was used as an in vivo case study. Magnetic resonance velocity measurements within the patient's aqueduct were obtained in four biomedical states and were analyzed using the methods presented here. Pressure force estimates were obtained, showing distinct differences in amplitude, phase, and waveform shape for different intracranial states within the same individual. Thoughts on the physiological and diagnostic research and development implications/opportunities will be presented.
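    For a uniform duct with equal inlet and outlet areas, the flux terms of the integral momentum balance cancel and the unsteady term dominates, F(t) ≈ ρ V du/dt. The sketch below evaluates that term from a sampled velocity waveform; the sinusoidal waveform and aqueduct geometry are illustrative assumptions, not the patient data.

```python
import math

def unsteady_force(velocities, dt, rho, area, length):
    """Unsteady term of the integral momentum balance for a uniform duct:
    F(t) ~ rho * (area * length) * du/dt, with du/dt by central differences
    over a periodic (cardiac) cycle. Geometry and waveform are illustrative."""
    n = len(velocities)
    forces = []
    for i in range(n):
        du = velocities[(i + 1) % n] - velocities[(i - 1) % n]
        forces.append(rho * area * length * du / (2 * dt))
    return forces

# One cardiac cycle of sinusoidal aqueductal velocity (m/s), toy numbers:
# 3 mm^2 aqueduct cross-section, 1 cm control volume length, CSF ~ water.
dt = 0.01
u = [0.05 * math.sin(2 * math.pi * i / 100) for i in range(100)]
f = unsteady_force(u, dt, rho=1000.0, area=3e-6, length=0.01)
```

    Over a full cycle the net force integrates to zero; it is the amplitude, phase, and waveform shape of F(t) that differ between intracranial states, as reported above.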

  9. Analysis of Nuclear Mitochondrial DNA Segments of Nine Plant Species: Size, Distribution, and Insertion Loci

    PubMed Central

    Ko, Young-Joon

    2016-01-01

    Nuclear mitochondrial DNA segment (Numt) insertion describes a well-known phenomenon of mitochondrial DNA transfer into a eukaryotic nuclear genome. However, it has not been well understood, especially in plants. Numt insertion patterns vary from species to species in different kingdoms. In this study, these patterns were surveyed in nine plant species, yielding several clues. First, when the mitochondrial genome is relatively large, the proportion of longer Numts is also larger than that of shorter ones. Second, whole-genome duplication events increase the share of shorter Numts in the size distribution. Third, Numt insertions are enriched in exon regions. This analysis may be helpful for understanding plant evolution. PMID:27729838
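    The exon-enrichment claim rests on an interval-overlap computation: for each Numt insertion span, test whether it intersects any annotated exon. A minimal sketch with made-up coordinates (not the surveyed genomes):

```python
def exon_overlap_fraction(insertions, exons):
    """Fraction of Numt insertion spans that overlap at least one exon.
    `insertions` and `exons` are half-open (start, end) intervals; all
    coordinates below are toy values, not real annotations."""
    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]
    hits = sum(any(overlaps(ins, ex) for ex in exons) for ins in insertions)
    return hits / len(insertions)

exons = [(100, 200), (500, 650)]
insertions = [(150, 160), (300, 320), (640, 700), (800, 820)]
frac = exon_overlap_fraction(insertions, exons)
```

    Comparing such an observed fraction against the fraction expected if insertions were placed uniformly at random is what supports an enrichment statement.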

  10. Multi-level segment analysis: definition and application in turbulent systems

    NASA Astrophysics Data System (ADS)

    Wang, L. P.; Huang, Y. X.

    2015-06-01

    For many complex systems the interaction of different scales is among the most interesting and challenging features. Existing approaches, such as structure functions and the Fourier spectrum, have not been very successful at extracting the physical properties of different scale regimes. Fundamentally, these methods have their respective limitations, for instance scale mixing, i.e. the so-called infrared and ultraviolet effects. To make improvements in this regard, a new method, multi-level segment analysis (MSA) based on local extrema statistics, has been developed. Benchmark verifications (fractional Brownian motion) and important test cases (Lagrangian and two-dimensional turbulence) show that MSA can successfully reveal different scaling regimes which have remained quite controversial in turbulence research. In general, the MSA method proposed here can be applied to different dynamic systems in which the concepts of multiscale and multifractality are relevant.
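    The extrema-conditioned idea at the core of MSA can be sketched roughly: split a series at its local extrema and collect each segment's length and increment, the raw ingredients of a structure-function-like statistic that avoids mixing scales. This is a minimal sketch of the idea only, not the full multi-level MSA method.

```python
def extremum_segments(series):
    """Split a series at its local extrema and return (length, |increment|)
    per segment. A segment runs between consecutive sign changes of the
    local slope; endpoints are included as segment boundaries."""
    extrema = [0]
    for i in range(1, len(series) - 1):
        if (series[i] - series[i - 1]) * (series[i + 1] - series[i]) < 0:
            extrema.append(i)
    extrema.append(len(series) - 1)
    return [(b - a, abs(series[b] - series[a]))
            for a, b in zip(extrema, extrema[1:])]

# Toy signal with three interior extrema (a max, a min, a max).
series = [0.0, 1.0, 3.0, 2.0, 0.5, 1.5, 2.5, 2.0]
segs = extremum_segments(series)
```

    Statistics of the increment conditioned on the segment length then play the role that fixed-separation increments play in conventional structure functions.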

  11. CADDIS Volume 4. Data Analysis: Selecting an Analysis Approach

    EPA Pesticide Factsheets

    An approach for selecting statistical analyses to inform causal analysis. Describes methods for determining whether test site conditions differ from reference expectations. Describes an approach for estimating stressor-response relationships.

  12. In-Depth Analysis on Influencing Factors of Adjacent Segment Degeneration After Cervical Fusion

    PubMed Central

    Yu, Chaojie; Mu, Xiaoping; Wei, Jianxun; Chu, Ye; Liang, Bin

    2016-01-01

    Background To explore the related influencing factors of adjacent segment degeneration (ASD) after cervical discectomy and fusion (ACDF). Material/Methods A retrospective analysis of 263 patients who underwent ACDF was carried out. Cervical x-ray and magnetic resonance imaging (MRI) were required before operation, after operation, and at the last follow-up. General information and some radiographic parameters of all patients were measured and recorded. According to the imaging data, patients were put into one of two groups: non-ASD group and ASD group. The differences between the two groups were compared by t-test and χ2-test, and the related influencing factors of ASD were analyzed by logistic regression. Results In all, 138 patients had imaging ASD. Comparing the age, the postoperative cervical arc chord distance (po-CACD), and the plate to disc distance (PDD) of the two groups, differences were statistically significant (p<0.05). The gender, the fusion segment number, the pre-CACD, the pre-and-po CACD, the preoperative cervical spinal canal ratio, and the upper and lower disc height (DH) showed no statistical difference between the two groups (p>0.05). The results of logistic regression analysis showed that there were significant correlations in the following characteristics: age, postoperative po-CACD, and the PDD (p<0.05). Of all these characteristics, the correlation of age was the highest (R=1.820). Conclusions Age, po-CACD, and PDD were risk factors for ASD after ACDF. The older the operation age, the worse the recovery was of postoperative physiological curvature of cervical spine, and a PDD < 5 mm was more likely to lead to ASD. PMID:27965512
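    The study's risk-factor step is a logistic regression; a minimal gradient-descent fit can be sketched on synthetic (age, po-CACD) rows. The data and hyperparameters below are assumptions for illustration, not the 263-patient dataset or the study's fitted model.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=5000):
    """Plain gradient-descent logistic regression (the model class used for
    the ASD risk-factor analysis). X should already include no intercept;
    one is prepended here."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

# Synthetic rows of (age, po-CACD) with ASD label; older age raises risk here.
X = np.array([[45, 8.0], [50, 7.5], [62, 5.0],
              [68, 4.5], [55, 6.0], [70, 4.0]], float)
y = np.array([0, 0, 1, 1, 0, 1], float)
X = (X - X.mean(0)) / X.std(0)   # standardize so plain GD behaves
w = fit_logistic(X, y)
p = 1.0 / (1.0 + np.exp(-(w[0] + X @ w[1:])))
```

    The sign and magnitude of each fitted coefficient are what translate into the "risk factor" statements in the conclusions.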

  13. In-Depth Analysis on Influencing Factors of Adjacent Segment Degeneration After Cervical Fusion.

    PubMed

    Yu, Chaojie; Mu, Xiaoping; Wei, Jianxun; Chu, Ye; Liang, Bin

    2016-12-14

    BACKGROUND To explore the related influencing factors of adjacent segment degeneration (ASD) after cervical discectomy and fusion (ACDF). MATERIAL AND METHODS A retrospective analysis of 263 patients who underwent ACDF was carried out. Cervical x-ray and magnetic resonance imaging (MRI) were required before operation, after operation, and at the last follow-up. General information and some radiographic parameters of all patients were measured and recorded. According to the imaging data, patients were put into one of two groups: non-ASD group and ASD group. The differences between the two groups were compared by t-test and χ²-test, and the related influencing factors of ASD were analyzed by logistic regression. RESULTS In all, 138 patients had imaging ASD. Comparing the age, the postoperative cervical arc chord distance (po-CACD), and the plate to disc distance (PDD) of the two groups, differences were statistically significant (p<0.05). The gender, the fusion segment number, the pre-CACD, the pre-and-po CACD, the preoperative cervical spinal canal ratio, and the upper and lower disc height (DH) showed no statistical difference between the two groups (p>0.05). The results of logistic regression analysis showed that there were significant correlations in the following characteristics: age, postoperative po-CACD, and the PDD (p<0.05). Of all these characteristics, the correlation of age was the highest (R=1.820). CONCLUSIONS Age, po-CACD, and PDD were risk factors for ASD after ACDF. The older the operation age, the worse the recovery was of postoperative physiological curvature of cervical spine, and a PDD < 5 mm was more likely to lead to ASD.

  14. Semisupervised segmentation of MRI stroke studies

    NASA Astrophysics Data System (ADS)

    Soltanian-Zadeh, Hamid; Windham, Joe P.; Robbins, Linda

    1997-04-01

    Fast, accurate, and reproducible image segmentation is vital to the diagnosis, treatment, and evaluation of many medical situations. We present development and application of a semi-supervised method for segmenting normal and abnormal brain tissues from magnetic resonance images (MRI) of stroke patients. The method does not require manual drawing of the tissue boundaries. It is therefore faster and more reproducible than conventional methods. The steps of the new method are as follows: (1) T2- and T1-weighted MR images are co-registered using a head and hat approach. (2) Intracranial brain volume is segmented from the skull, scalp, and background using a multi-resolution edge tracking algorithm. (3) Additive noise is suppressed (image is restored) using a non-linear edge-preserving filter which preserves partial volume information on average. (4) Image nonuniformities are corrected using a modified lowpass filtering approach. (5) The resulting images are segmented using a self organizing data analysis technique which is similar in principle to the K-means clustering but includes a set of additional heuristic merging and splitting procedures to generate a meaningful segmentation. (6) Segmented regions are labeled white matter, gray matter, CSF, partial volumes of normal tissues, zones of stroke, or partial volumes between stroke and normal tissues. (7) Previous steps are repeated for each slice of the brain and the volume of each tissue type is estimated from the results. Details and significance of each step are explained. Experimental results using a simulation, a phantom, and selected clinical cases are presented.
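    The clustering core of step (5) is K-means-like; a 1-D version on voxel intensities can be sketched as below. The real method adds ISODATA-style merge/split heuristics on top, which are omitted here, and the three tissue intensities are toy assumptions.

```python
import numpy as np

def kmeans_1d(values, k, iters=20):
    """1-D K-means on voxel intensities: assign each value to the nearest
    center, then move each center to the mean of its members."""
    centers = np.linspace(values.min(), values.max(), k)
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    return centers, labels

# Toy intensities for three tissue classes (e.g., CSF ~10, gray ~60, white ~100).
vals = np.array([9., 11., 10., 58., 62., 60., 99., 101., 100.])
centers, labels = kmeans_1d(vals, 3)
```

    In the pipeline above, the resulting clusters are then labeled as white matter, gray matter, CSF, stroke, or partial-volume classes in step (6).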

  15. Validity of segmental bioelectrical impedance analysis for estimating fat-free mass in children including overweight individuals.

    PubMed

    Ohta, Megumi; Midorikawa, Taishi; Hikihara, Yuki; Masuo, Yoshihisa; Sakamoto, Shizuo; Torii, Suguru; Kawakami, Yasuo; Fukunaga, Tetsuo; Kanehisa, Hiroaki

    2017-02-01

    This study examined the validity of segmental bioelectrical impedance (BI) analysis for predicting the fat-free masses (FFMs) of the whole body and body segments in children, including overweight individuals. The FFM and impedance (Z) values of the arms, trunk, legs, and whole body were determined using dual-energy X-ray absorptiometry and segmental BI analyses, respectively, in 149 boys and girls aged 6 to 12 years, who were divided into model-development (n = 74), cross-validation (n = 35), and overweight (n = 40) groups. Simple regression analysis was applied to (length)²/Z (the BI index) for each of the whole body and the 3 segments to develop prediction equations for the measured FFM of the related body part. In the model-development group, the BI index of each of the 3 segments and the whole body was significantly correlated with the measured FFM (R² = 0.867-0.932, standard error of estimation = 0.18-1.44 kg (5.9%-8.7%)). There was no significant difference between the measured and predicted FFM values, without systematic error. The application of each equation derived in the model-development group to the cross-validation and overweight groups did not produce significant differences between the measured and predicted FFM values or systematic errors, with the exception that the arm FFM in the overweight group was overestimated. Segmental bioelectrical impedance analysis is useful for predicting the FFM of the whole body and of each body segment in children, including overweight individuals, although its application for estimating arm FFM in overweight individuals requires a certain modification.
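    The prediction model is a simple linear regression of FFM on the BI index, FFM = a·(length²/Z) + b. A sketch with synthetic numbers (not the study's children or fitted coefficients):

```python
import numpy as np

def fit_bi_index(length_cm, impedance_ohm, ffm_kg):
    """Least-squares fit of FFM = a * (length^2 / Z) + b, the segmental BI
    prediction model. All numbers passed in below are synthetic."""
    bi_index = length_cm ** 2 / impedance_ohm
    a, b = np.polyfit(bi_index, ffm_kg, 1)   # slope, intercept
    return a, b, bi_index

length = np.array([120., 130., 140., 150.])   # cm
z = np.array([600., 550., 500., 450.])        # ohm
ffm = np.array([20., 24., 29., 35.])          # kg
a, b, bi = fit_bi_index(length, z, ffm)
pred = a * bi + b
```

    Validation then amounts to applying (a, b) from the model-development group to the other groups and checking for systematic error, as the abstract describes.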

  16. Understanding coastal change using shoreline trend analysis supported by cluster-based segmentation

    NASA Astrophysics Data System (ADS)

    Burningham, Helene; French, Jon

    2017-04-01

    Shoreline change analysis is a well defined and widely adopted approach for the examination of trends in coastal position over different timescales. Conventional shoreline change metrics are best suited to resolving progressive quasi-linear trends. However, coastal change is often highly non-linear and may exhibit complex behaviour including trend-reversals. This paper advocates a secondary level of investigation based on a cluster analysis to resolve a more complete range of coastal behaviours. Cluster-based segmentation of shoreline behaviour is demonstrated with reference to a regional-scale case study of the Suffolk coast, eastern UK. An exceptionally comprehensive suite of shoreline datasets covering the period 1881 to 2015 is used to examine both centennial- and intra-decadal scale change in shoreline position. Analysis of shoreline position changes at a 100 m alongshore interval along 74 km of coastline reveals a number of distinct behaviours. The suite of behaviours varies with the timescale of analysis. There is little evidence of regionally coherent shoreline change. Rather, the analyses reveal a complex interaction between met-ocean forcing, inherited geological and geomorphological controls, and evolving anthropogenic intervention that drives changing foci of erosion and deposition.

  17. Analysis of automated highway system risks and uncertainties. Volume 5

    SciTech Connect

    Sicherman, A.

    1994-10-01

    This volume describes a risk analysis performed to help identify important Automated Highway System (AHS) deployment uncertainties and quantify their effect on costs and benefits for a range of AHS deployment scenarios. The analysis identified a suite of key factors affecting vehicle and roadway costs, capacities and market penetrations for alternative AHS deployment scenarios. A systematic protocol was utilized for obtaining expert judgments of key factor uncertainties in the form of subjective probability percentile assessments. Based on these assessments, probability distributions on vehicle and roadway costs, capacity and market penetration were developed for the different scenarios. The cost/benefit risk methodology and analysis provide insights by showing how uncertainties in key factors translate into uncertainties in summary cost/benefit indices.

  18. User's operating procedures. Volume 2: Scout project financial analysis program

    NASA Technical Reports Server (NTRS)

    Harris, C. G.; Haris, D. K.

    1985-01-01

    A review is presented of the user's operating procedures for the Scout Project Automatic Data system, called SPADS. SPADS is the result of the past seven years of software development on a Prime mini-computer located at the Scout Project Office, NASA Langley Research Center, Hampton, Virginia. SPADS was developed as a single entry, multiple cross-reference data management and information retrieval system for the automation of Project office tasks, including engineering, financial, managerial, and clerical support. This volume, two (2) of three (3), provides the instructions to operate the Scout Project Financial Analysis program in data retrieval and file maintenance via the user friendly menu drivers.

  19. Service Vessel Analysis. Volume 2. Detailed District Plots.

    DTIC Science & Technology

    1987-09-01

    Final report, September 1987. George J. Skaliotis, Ph.D., Transportation Systems Center, Cambridge, MA 02142. Approved for public release.

  20. Automatic cell segmentation and nuclear-to-cytoplasmic ratio analysis for third harmonic generated microscopy medical images.

    PubMed

    Lee, Gwo Giun; Lin, Huan-Hsiang; Tsai, Ming-Rung; Chou, Sin-Yo; Lee, Wen-Jeng; Liao, Yi-Hua; Sun, Chi-Kuang; Chen, Chun-Fu

    2013-04-01

    Traditional biopsy procedures require invasive tissue removal from a living subject, followed by time-consuming and complicated processes, so noninvasive in vivo virtual biopsy, which possesses the ability to obtain exhaustive tissue images without removing tissues, is highly desired. Some sets of in vivo virtual biopsy images provided by healthy volunteers were processed by the proposed cell segmentation approach, which is based on the watershed-based approach and the concept of convergence index filter for automatic cell segmentation. Experimental results suggest that the proposed algorithm not only reveals high accuracy for cell segmentation but also has dramatic potential for noninvasive analysis of cell nuclear-to-cytoplasmic ratio (NC ratio), which is important in identifying or detecting early symptoms of diseases with abnormal NC ratios, such as skin cancers during clinical diagnosis via medical imaging analysis.
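    Once cell and nucleus masks are segmented, the NC ratio itself is a simple pixel count: nucleus area over cytoplasm (cell minus nucleus) area. The masks below are toy examples, not third-harmonic microscopy data.

```python
import numpy as np

def nc_ratio(cell_mask, nucleus_mask):
    """Nuclear-to-cytoplasmic area ratio from binary segmentation masks:
    nucleus pixels divided by cytoplasm (cell-minus-nucleus) pixels."""
    nucleus = np.count_nonzero(nucleus_mask)
    cytoplasm = np.count_nonzero(cell_mask & ~nucleus_mask)
    return nucleus / cytoplasm

cell = np.zeros((8, 8), dtype=bool)
cell[1:7, 1:7] = True        # 36-pixel cell
nucleus = np.zeros((8, 8), dtype=bool)
nucleus[3:5, 3:5] = True     # 4-pixel nucleus
ratio = nc_ratio(cell, nucleus)
```

    Accurate segmentation matters precisely because small boundary errors in either mask shift this ratio, the quantity used to flag abnormal cells.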

  1. Identifying Like-Minded Audiences for Global Warming Public Engagement Campaigns: An Audience Segmentation Analysis and Tool Development

    PubMed Central

    Maibach, Edward W.; Leiserowitz, Anthony; Roser-Renouf, Connie; Mertz, C. K.

    2011-01-01

    Background Achieving national reductions in greenhouse gas emissions will require public support for climate and energy policies and changes in population behaviors. Audience segmentation – a process of identifying coherent groups within a population – can be used to improve the effectiveness of public engagement campaigns. Methodology/Principal Findings In Fall 2008, we conducted a nationally representative survey of American adults (n = 2,164) to identify audience segments for global warming public engagement campaigns. By subjecting multiple measures of global warming beliefs, behaviors, policy preferences, and issue engagement to latent class analysis, we identified six distinct segments ranging in size from 7 to 33% of the population. These six segments formed a continuum, from a segment of people who were highly worried, involved and supportive of policy responses (18%), to a segment of people who were completely unconcerned and strongly opposed to policy responses (7%). Three of the segments (totaling 70%) were to varying degrees concerned about global warming and supportive of policy responses, two (totaling 18%) were unsupportive, and one was largely disengaged (12%), having paid little attention to the issue. Certain behaviors and policy preferences varied greatly across these audiences, while others did not. Using discriminant analysis, we subsequently developed 36-item and 15-item instruments that can be used to categorize respondents with 91% and 84% accuracy, respectively. Conclusions/Significance In late 2008, Americans supported a broad range of policies and personal actions to reduce global warming, although there was wide variation among the six identified audiences. To enhance the impact of campaigns, government agencies, non-profit organizations, and businesses seeking to engage the public can selectively target one or more of these audiences rather than address an undifferentiated general population. Our screening instruments are

  2. Shared genomic segment analysis: the power to find rare disease variants.

    PubMed

    Knight, Stacey; Abo, Ryan P; Abel, Haley J; Neklason, Deborah W; Tuohy, Therese M; Burt, Randall W; Thomas, Alun; Camp, Nicola J

    2012-11-01

    Shared genomic segment (SGS) analysis uses dense single nucleotide polymorphism genotyping in high-risk (HR) pedigrees to identify regions of sharing between cases. Here, we illustrate the power of SGS to identify dominant rare risk variants. Using simulated pedigrees, we consider 12 disease models based on disease prevalence, minor allele frequency and penetrance to represent disease loci that explain 0.2-99.8% of total disease risk. Pedigrees were required to contain ≥ 15 meioses between all cases and to be HR based on significant excess of disease (P < 0.001 or P < 0.00001). Across these scenarios, the power for a single pedigree ranged widely. Nonetheless, fewer than 10 pedigrees were sufficient for excellent power in the majority of models. Power increased with the risk attributable to the disease locus, penetrance and the excess of disease in the pedigree. Sharing allowing for one sporadic case was uniformly more powerful than sharing using all cases. Furthermore, an SGS analysis using a large attenuated familial adenomatous polyposis pedigree identified a 1.96 Mb region containing the known causal APC gene with genome-wide significance. SGS is a powerful method for detecting rare variants and a valuable complement to genome-wide association studies and linkage analysis.
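    The core sharing computation behind SGS can be sketched roughly: at each SNP, check whether all cases can share an allele, then report runs of consecutive sharing above a minimum length. The toy diploid genotypes are assumptions, and the method's significance assessment of sharing is omitted.

```python
def shared_segments(genotypes, min_len=3):
    """Find runs of consecutive SNPs at which all cases share an allele.
    `genotypes[case][snp]` is the set of alleles carried at that SNP;
    returns half-open (start, end) index runs of length >= min_len."""
    n_snps = len(genotypes[0])
    shared = [bool(set.intersection(*[g[i] for g in genotypes]))
              for i in range(n_snps)]
    runs, start = [], None
    for i, s in enumerate(shared + [False]):   # sentinel closes a final run
        if s and start is None:
            start = i
        elif not s and start is not None:
            if i - start >= min_len:
                runs.append((start, i))
            start = None
    return runs

# Three cases sharing allele at SNPs 0-3 but not at SNPs 4-5 (toy data).
cases = [
    [{"A"}, {"A", "G"}, {"A"},      {"A"}, {"C"},      {"T"}],
    [{"A"}, {"G"},      {"A", "C"}, {"A"}, {"G"},      {"T"}],
    [{"A", "T"}, {"G"}, {"A"},      {"A"}, {"C", "G"}, {"C"}],
]
runs = shared_segments(cases, min_len=3)
```

    Allowing one sporadic case, which the abstract reports as uniformly more powerful, would correspond to requiring sharing among all but one of the case genotypes.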

  3. Differential gene expression profiling and biological process analysis in proximal nerve segments after sciatic nerve transection.

    PubMed

    Li, Shiying; Liu, Qianqian; Wang, Yongjun; Gu, Yun; Liu, Dong; Wang, Chunming; Ding, Guohui; Chen, Jianping; Liu, Jie; Gu, Xiaosong

    2013-01-01

    After traumatic injury, peripheral nerves can spontaneously regenerate through highly sophisticated and dynamic processes that are regulated by multiple cellular elements and molecular factors. Despite evidence of morphological changes and of expression changes of a few regulatory genes, global knowledge of gene expression changes and related biological processes during peripheral nerve injury and regeneration is still lacking. Here we aimed to profile global mRNA expression changes in proximal nerve segments of adult rats after sciatic nerve transection. According to DNA microarray analysis, a large number of genes were differentially expressed at different time points (0.5 h-14 d) after nerve transection, exhibiting multiple distinct temporal expression patterns. The expression changes of several genes were further validated by quantitative real-time RT-PCR analysis. Gene ontology enrichment analysis was performed to decipher the biological processes involving the differentially expressed genes. Collectively, our results highlighted the dynamic change of the important biological processes and the time-dependent expression of key regulatory genes after peripheral nerve injury. Interestingly, we, for the first time, reported the presence of olfactory receptors in sciatic nerves. Hopefully, this study may provide a useful platform for deeply studying peripheral nerve injury and regeneration from a molecular-level perspective.

  4. Segmental analysis of amphetamines in hair using a sensitive UHPLC-MS/MS method.

    PubMed

    Jakobsson, Gerd; Kronstrand, Robert

    2014-06-01

    A sensitive and robust ultra high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) method was developed and validated for quantification of amphetamine, methamphetamine, 3,4-methylenedioxyamphetamine and 3,4-methylenedioxy methamphetamine in hair samples. Segmented hair (10 mg) was incubated in 2M sodium hydroxide (80°C, 10 min) before liquid-liquid extraction with isooctane followed by centrifugation and evaporation of the organic phase to dryness. The residue was reconstituted in methanol:formate buffer pH 3 (20:80). The total run time was 4 min and after optimization of UHPLC-MS/MS-parameters validation included selectivity, matrix effects, recovery, process efficiency, calibration model and range, lower limit of quantification, precision and bias. The calibration curve ranged from 0.02 to 12.5 ng/mg, and the recovery was between 62 and 83%. During validation the bias was less than ±7% and the imprecision was less than 5% for all analytes. In routine analysis, fortified control samples demonstrated an imprecision <13% and control samples made from authentic hair demonstrated an imprecision <26%. The method was applied to samples from a controlled study of amphetamine intake as well as forensic hair samples previously analyzed with an ultra high performance liquid chromatography time of flight mass spectrometry (UHPLC-TOF-MS) screening method. The proposed method was suitable for quantification of these drugs in forensic cases including violent crimes, autopsy cases, drug testing and re-granting of driving licences. This study also demonstrated that if hair samples are divided into several short segments, the time point for intake of a small dose of amphetamine can be estimated, which might be useful when drug facilitated crimes are investigated.

  5. Competition Analysis Model (CAM). Analysis Guide. Volume 1.

    DTIC Science & Technology

    1987-06-12

    Contents include: Second Sources; Technical Data Package; Leader/Follower; Contractor Teaming; Directed Licensing; Form, Fit and Function; Breakout. Legible fragments discuss establishing second sources for production through methods such as leader/follower or licensing, which allow the qualification process to begin concurrently; note that leader/follower and licensing arrangements might be best for complex projects; and reference Checklist II: Cost-Benefit Analysis.

  6. Multi-atlas multi-shape segmentation of fetal brain MRI for volumetric and morphometric analysis of ventriculomegaly.

    PubMed

    Gholipour, Ali; Akhondi-Asl, Alireza; Estroff, Judy A; Warfield, Simon K

    2012-04-15

    The recent development of motion-robust super-resolution fetal brain MRI holds out the potential for dramatic new advances in volumetric and morphometric analysis. Quantitative analysis based on volumetric and morphometric biomarkers of the developing fetal brain requires segmentation. Automatic segmentation of fetal brain MRI is challenging, however, due to the highly variable size and shape of the developing brain; possible structural abnormalities; and the relatively poor resolution of fetal MRI scans. To overcome these limitations, we present a novel, constrained, multi-atlas, multi-shape automatic segmentation method that specifically addresses the challenge of segmenting multiple structures with similar intensity values in subjects with strong anatomic variability. Accordingly, we have applied this method to shape segmentation of normal, dilated, or fused lateral ventricles for quantitative analysis of ventriculomegaly (VM), which is a pivotal finding in the earliest stages of fetal brain development and warrants further investigation. Utilizing these techniques, we introduce novel volumetric and morphometric biomarkers of VM and compare these values to those generated by standard methods of VM analysis, i.e., by measuring the ventricular atrial diameter (AD) on manually selected sections of 2D ultrasound or 2D MRI. To this end, we studied 25 normal and abnormal fetuses in the gestational age (GA) range of 19 to 39 weeks (mean=28.26, stdev=6.56). This heterogeneous dataset was used to 1) validate our segmentation method for normal and abnormal ventricles; and 2) show that the proposed biomarkers may provide improved detection of VM as compared to the AD measurement.
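    A baseline version of multi-atlas segmentation is per-voxel majority voting over registered atlas label maps; the method above adds shape constraints on top of such fusion. The 1-D label maps below are toy assumptions.

```python
import numpy as np

def majority_vote_fusion(atlas_labels):
    """Fuse labels from several registered atlases by per-voxel majority
    vote. Labels are small non-negative integers; ties go to the lower
    label via argmax. A baseline, not the paper's constrained method."""
    stacked = np.stack(atlas_labels)
    n_labels = int(stacked.max()) + 1
    counts = np.stack([(stacked == l).sum(axis=0) for l in range(n_labels)])
    return counts.argmax(axis=0)

# Three toy atlases propagating labels {0: background, 1: brain, 2: ventricle}.
atlases = [
    np.array([0, 1, 1, 2, 2]),
    np.array([0, 1, 2, 2, 2]),
    np.array([1, 1, 1, 2, 0]),
]
fused = majority_vote_fusion(atlases)
```

    Plain voting fails exactly where the abstract says fetal segmentation is hard (similar intensities, strong anatomic variability), which motivates adding multi-shape constraints.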

  7. Volume analysis of heat-induced cracks in human molars: A preliminary study

    PubMed Central

    Sandholzer, Michael A.; Baron, Katharina; Heimel, Patrick; Metscher, Brian D.

    2014-01-01

    Context: Only a few methods have been published dealing with the visualization of heat-induced cracks inside bones and teeth. Aims: As a novel approach this study used nondestructive X-ray microtomography (micro-CT) for volume analysis of heat-induced cracks to observe the reaction of human molars to various levels of thermal stress. Materials and Methods: Eighteen clinically extracted third molars were rehydrated and burned under controlled temperatures (400, 650, and 800°C) using an electric furnace at a heating rate of 25°C/min. The subsequent high-resolution scans (voxel size 17.7 μm) were made with a compact micro-CT scanner (SkyScan 1174). In total, 14 scans were automatically segmented with Definiens XD Developer 1.2 and three-dimensional (3D) models were computed with Visage Imaging Amira 5.2.2. The results of the automated segmentation were analyzed with an analysis of variance (ANOVA) and uncorrected post hoc least significant difference (LSD) tests using Statistical Package for Social Sciences (SPSS) 17. A probability level of P < 0.05 was used as an index of statistical significance. Results: A temperature-dependent increase of heat-induced cracks was observed between the three temperature groups (P < 0.05, ANOVA post hoc LSD). In addition, the distributions and shape of the heat-induced changes could be classified using the computed 3D models. Conclusion: The macroscopic heat-induced changes observed in this preliminary study correspond with previous observations of unrestored human teeth, yet the current observations also take into account the entire microscopic 3D expansion of heat-induced cracks within the dental hard tissues. Using the same experimental conditions proposed in the literature, this study confirms previous results, adds new observations, and offers new perspectives in the investigation of forensic evidence. PMID:25125923
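    The statistical step described above (a one-way ANOVA followed by uncorrected post hoc LSD comparisons across the three temperature groups) can be sketched in Python. The crack-volume values below are invented for illustration, and plain pairwise t-tests without multiplicity correction stand in for SPSS's LSD procedure.

    ```python
    from itertools import combinations
    from scipy.stats import f_oneway, ttest_ind

    # Hypothetical crack-volume measurements (mm^3) per temperature group.
    groups = {
        400: [0.8, 1.1, 0.9, 1.2],
        650: [2.0, 2.4, 1.9, 2.2],
        800: [3.5, 3.9, 3.6, 4.1],
    }

    # One-way ANOVA across the three temperature groups.
    f_stat, p_anova = f_oneway(*groups.values())

    # Uncorrected post hoc LSD: plain pairwise t-tests, no multiplicity correction.
    lsd = {
        (a, b): ttest_ind(groups[a], groups[b]).pvalue
        for a, b in combinations(groups, 2)
    }
    ```

    With well-separated groups like these, both the omnibus ANOVA and all three pairwise comparisons come out significant at P < 0.05.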

  8. Comparative analysis of operational forecasts versus actual weather conditions in airline flight planning, volume 4

    NASA Technical Reports Server (NTRS)

    Keitz, J. F.

    1982-01-01

    The impact of more timely and accurate weather data on airline flight planning, with the emphasis on fuel savings, is studied. This volume of the report discusses the results of Task 4 of the four major tasks included in the study. Task 4 uses flight plan segment wind and temperature differences as indicators of dates and geographic areas for which significant forecast errors may have occurred. An in-depth analysis is then conducted for the days identified. The analysis shows that significant errors occur in the operational forecast on 15 of the 33 arbitrarily selected days included in the study. Wind speeds in an area of maximum winds are underestimated by at least 20 to 25 kts. on 14 of these days. The analysis also shows that there is a tendency to repeat the same forecast errors from prog to prog. Also, some perceived forecast errors from the flight plan comparisons could not be verified by visual inspection of the corresponding National Meteorological Center forecast and analysis charts, and it is likely that they are the result of weather data interpolation techniques or some other data processing procedure in the airlines' flight planning systems.

  9. Vessel segmentation in 3D spectral OCT scans of the retina

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; Garvin, Mona K.; van Ginneken, Bram; Sonka, Milan; Abràmoff, Michael D.

    2008-03-01

    The latest generation of spectral optical coherence tomography (OCT) scanners is able to image 3D cross-sectional volumes of the retina at a high resolution and high speed. These scans offer a detailed view of the structure of the retina. Automated segmentation of the vessels in these volumes may lead to more objective diagnosis of retinal vascular diseases such as hypertensive retinopathy and retinopathy of prematurity. Additionally, vessel segmentation can allow color fundus images to be registered to these 3D volumes, possibly leading to a better understanding of the structure and localization of retinal structures and lesions. In this paper we present a method for automatically segmenting the vessels in a 3D OCT volume. First, the retina is automatically segmented into multiple layers, using simultaneous segmentation of their boundary surfaces in 3D. Next, a 2D projection of the vessels is produced using information only from certain segmented layers. Finally, a supervised, pixel classification based vessel segmentation approach is applied to the projection image. We compared the influence of two methods for the projection on the performance of the vessel segmentation on 10 optic nerve head centered 3D OCT scans. The method was trained on 5 independent scans. Using ROC analysis, our proposed vessel segmentation system obtains an area under the curve of 0.970 when compared with the segmentation of a human observer.
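    The reported area under the ROC curve can be computed directly from per-pixel classifier scores and observer labels via the rank (Mann-Whitney) identity; a minimal sketch with made-up scores:

    ```python
    def roc_auc(scores, labels):
        """Area under the ROC curve via the Mann-Whitney identity: the
        probability that a random vessel pixel outscores a random
        background pixel (ties count half)."""
        pos = [s for s, y in zip(scores, labels) if y == 1]
        neg = [s for s, y in zip(scores, labels) if y == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    # Hypothetical per-pixel vesselness scores with observer labels
    # (1 = vessel, 0 = background).
    scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
    labels = [1,   1,   0,   1,   0,   1,   0,   0]
    auc = roc_auc(scores, labels)
    ```

    A perfect ranking gives an AUC of 1.0; chance performance gives 0.5.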

  10. A Comparison of the Segmental Analysis of Sodium Reabsorption during Ringer's and Hyperoncotic Albumin Infusion in the Rat

    PubMed Central

    Stein, Jay H.; Osgood, Richard W.; Boonjarern, Sampanta; Ferris, Thomas F.

    1973-01-01

    Studies were designed to compare the segmental analysis of sodium reabsorption along the nephron during volume expansion with either 10% body weight Ringer's or 0.6% body weight hyperoncotic albumin. Total kidney and nephron glomerular filtration rate increased similarly with both, but urinary sodium excretion (12.7 vs. 4.0 μeq/min, P < 0.001) and fractional sodium excretion (5.0 vs. 1.6%, P < 0.001) increased to a greater extent with Ringer's. Fractional reabsorption of sodium in the proximal tubule was diminished in both groups but to a significantly greater extent during Ringer's (P < 0.005). Absolute reabsorption was inhibited only in the Ringer's group. Delivery of filtrate out of the proximal tubule was greater in the Ringer's studies, 45 vs. 37 nl/min (P < 0.001). However, both fractional and absolute sodium delivery to the early and late distal tubule were not significantly different in the two groups. Fractional reabsorption in the collecting duct decreased from 96% in hydropenia to 31% during Ringer's but fell only slightly to 80% in the albumin studies. Absolute collecting duct reabsorption was also greater in the albumin studies, 0.55 vs. 0.21 neq/min (P < 0.001), which could totally account for the difference in urinary sodium excretion between the two groups. 22Na recovery in the final urine after end distal microinjections was 71% during Ringer's infusion and 34% during albumin (P < 0.001). From these data we conclude that: (a) Ringer's solution has a greater inhibitory effect on proximal tubular sodium reabsorption, and (b) in spite of this effect, differences in mucosal to serosal collecting duct sodium transport are primarily responsible for the greater natriuresis during Ringer's infusion. PMID:4727461

  11. Optical granulometric analysis of sedimentary deposits by color segmentation-based software: OPTGRAN-CS

    NASA Astrophysics Data System (ADS)

    Chávez, G. Moreno; Sarocchi, D.; Santana, E. Arce; Borselli, L.

    2015-12-01

    The study of grain size distribution is fundamental for understanding sedimentological environments. Through these analyses, clast erosion, transport and deposition processes can be interpreted and modeled. However, grain size distribution analysis can be difficult in some outcrops due to the number and complexity of the arrangement of clasts and matrix and their physical size. Despite various technological advances, it is almost impossible to get the full grain size distribution (blocks to sand grain size) with a single method or instrument of analysis. For this reason development in this area continues to be fundamental. In recent years, various methods of particle size analysis by automatic image processing have been developed, due to their potential advantages with respect to classical ones; speed and final detailed content of information (virtually for each analyzed particle). In this framework, we have developed a novel algorithm and software for grain size distribution analysis, based on color image segmentation using an entropy-controlled quadratic Markov measure field algorithm and the Rosiwal method for counting intersections between clast and linear transects in the images. We test the novel algorithm in different sedimentary deposit types from 14 varieties of sedimentological environments. The results of the new algorithm were compared with grain counts performed manually by the same Rosiwal methods applied by experts. The new algorithm has the same accuracy as a classical manual count process, but the application of this innovative methodology is much easier and dramatically less time-consuming. The final productivity of the new software for analysis of clasts deposits after recording field outcrop images can be increased significantly.
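    The Rosiwal counting step, intersections between clasts and linear transects, reduces for a binary clast mask to measuring runs of clast pixels along each transect line. A sketch with a toy mask (the function name and data are illustrative, not taken from OPTGRAN-CS):

    ```python
    import numpy as np

    def transect_intercepts(mask, row):
        """Rosiwal-style line-intercept measurement: lengths (in pixels)
        of the clast runs crossed by one horizontal transect of a binary
        clast mask."""
        line = mask[row].astype(int)
        edges = np.diff(np.concatenate(([0], line, [0])))  # +1 run start, -1 run end
        starts = np.flatnonzero(edges == 1)
        ends = np.flatnonzero(edges == -1)
        return [int(n) for n in ends - starts]

    # Toy segmented image: True marks clast pixels, False marks matrix.
    mask = np.array([
        [0, 1, 1, 0, 0, 1, 0, 1, 1, 1],
        [1, 1, 0, 0, 0, 0, 0, 0, 0, 1],
    ], dtype=bool)
    runs = transect_intercepts(mask, row=0)
    ```

    Aggregating such run lengths over many transects (and converting pixels to physical units) yields the grain size distribution.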

  12. A comparison between handgrip strength, upper limb fat free mass by segmental bioelectrical impedance analysis (SBIA) and anthropometric measurements in young males

    NASA Astrophysics Data System (ADS)

    Gonzalez-Correa, C. H.; Caicedo-Eraso, J. C.; Varon-Serna, D. R.

    2013-04-01

    The mechanical function and size of a muscle may be closely linked. Handgrip strength (HGS) has been used as a predictor of functional performance. Anthropometric measurements have been made to estimate arm muscle area (AMA) and physical muscle mass volume of the upper limb (ULMMV). Electrical volume estimation is possible by segmental BIA measurements of fat free mass (SBIA-FFM), mainly muscle mass. The relationship among these variables is not well established. We aimed to determine whether physical and electrical muscle mass estimations relate to each other and to what extent HGS is related to muscle size measured by both methods in normal or overweight young males. Regression analysis was used to determine association between these variables. Subjects showed a decreased HGS (65.5%), FFM (85.5%), and AMA (74.5%). An acceptable association was found between SBIA-FFM and AMA (r2 = 0.60) and a poorer one between physical and electrical volume (r2 = 0.55). However, a paired Student t-test and Bland and Altman plot showed that physical and electrical models were not interchangeable (pt<0.0001). HGS showed a very weak association with anthropometric (r2 = 0.07) and electrical (r2 = 0.192) ULMMV, showing that muscle mass quantity does not mean muscle strength. Other factors influencing HGS, such as physical training or nutrition, require more research.
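    The two statistics used above, r2 from a simple linear association and a Bland-Altman agreement analysis (bias with 95% limits of agreement), can be sketched as follows; the paired volume estimates are invented for illustration:

    ```python
    import numpy as np

    def r_squared(x, y):
        """Coefficient of determination of the simple linear fit y ~ x."""
        r = np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1]
        return r * r

    def bland_altman(x, y):
        """Mean difference (bias) and 95% limits of agreement."""
        d = np.asarray(x, float) - np.asarray(y, float)
        bias = d.mean()
        spread = 1.96 * d.std(ddof=1)
        return bias, (bias - spread, bias + spread)

    # Hypothetical paired muscle-volume estimates (physical vs. electrical).
    physical = [1.90, 2.10, 2.30, 2.00, 2.20]
    electrical = [1.70, 1.95, 2.05, 1.80, 2.00]
    r2 = r_squared(physical, electrical)
    bias, (lo, hi) = bland_altman(physical, electrical)
    ```

    Note that a high r2 does not imply agreement: here the methods correlate strongly yet show a systematic bias, which is exactly what the Bland-Altman analysis exposes.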

  13. Parallel runway requirement analysis study. Volume 1: The analysis

    NASA Technical Reports Server (NTRS)

    Ebrahimi, Yaghoob S.

    1993-01-01

    The correlation of increased flight delays with the level of aviation activity is well recognized. A main contributor to these flight delays has been the capacity of airports. Though new airport and runway construction would significantly increase airport capacity, few programs of this type are currently underway, let alone planned, because of the high cost associated with such endeavors. Therefore, it is necessary to achieve the most efficient and cost effective use of existing fixed airport resources through better planning and control of traffic flows. In fact, during the past few years the FAA has initiated such an airport capacity program designed to provide additional capacity at existing airports. Some of the improvements that program has generated thus far have been based on new Air Traffic Control procedures, terminal automation, additional Instrument Landing Systems, improved controller display aids, and improved utilization of multiple runways/Instrument Meteorological Conditions (IMC) approach procedures. A useful element to understanding potential operational capacity enhancements at high demand airports has been the development and use of an analysis tool called The PLAND_BLUNDER (PLB) Simulation Model. The objective for building this simulation was to develop a parametric model that could be used for analysis in determining the minimum safety level of parallel runway operations for various parameters representing the airplane, navigation, surveillance, and ATC system performance. This simulation is useful as: a quick and economical evaluation of existing environments that are experiencing IMC delays, an efficient way to study and validate proposed procedure modifications, an aid in evaluating requirements for new airports or new runways in old airports, a simple, parametric investigation of a wide range of issues and approaches, an ability to tradeoff air and ground technology and procedures contributions, and a way of considering probable

  14. Cache la Poudre River Basin, Larimer - Weld Counties, Colorado. Volume 4. Flood Plain Analysis, Fossil Creek.

    DTIC Science & Technology

    1981-10-01

    BASIN LARIMER-WELD COUNTIES COLORADO VOLUME I FLOOD HAZARD, DAM SAFETY, AND FLOOD WARNING VOLUME II HYDROLOGY VOLUME III FLOOD PLAIN ANALYSIS, SHEEP...presented in four separate volumes. Vol- ume I considers basin flood hazards, dam safety, and flood warning. Volume II presents the detailed...has its source near the south end of Horsetooth Reservoir and flows in a generally eastward direction to its confluence with the Cache la Poudre

  15. A coronary artery segmentation method based on multiscale analysis and region growing.

    PubMed

    Kerkeni, Asma; Benabdallah, Asma; Manzanera, Antoine; Bedoui, Mohamed Hedi

    2016-03-01

    Accurate coronary artery segmentation is a fundamental step in various medical imaging applications such as stenosis detection, 3D reconstruction and assessment of cardiac dynamics. In this paper, a multiscale region growing (MSRG) method for coronary artery segmentation in 2D X-ray angiograms is proposed. First, a region growing rule incorporating both vesselness and direction information in a unique way is introduced. Then an iterative multiscale search based on this criterion is performed. Selected points in each step are considered as seeds for the following step. By combining vesselness and direction information in the growing rule, this method is able to avoid blockage caused by low vesselness values in vascular regions, which, in turn, yields a continuous vessel tree. Performing the process in a multiscale fashion helps to extract thin and peripheral vessels often missed by other segmentation methods. Quantitative evaluation performed on real angiography images shows that the proposed segmentation method identifies about 80% of the total coronary artery tree in relatively easy images and 70% in challenging cases, with a mean precision of 82%, and outperforms other segmentation methods in terms of sensitivity. The MSRG segmentation method was also implemented with different enhancement filters and it has been shown that the Frangi filter gives better results. The proposed segmentation method has proven well suited to coronary artery segmentation. It keeps an acceptable performance when dealing with challenging situations such as noise, stenosis and poor contrast.
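    The iterative multiscale growing idea, grow at one scale and reuse the accepted points as seeds for the next, can be sketched with a simplified rule that thresholds vesselness only (the paper's MSRG rule also incorporates direction information, which is omitted here; the vesselness map and seed are toy values):

    ```python
    from collections import deque

    def grow(vesselness, seeds, thresh):
        """4-connected region growing: accept neighbours whose vesselness
        meets the threshold. (A simplification of the paper's rule,
        which also uses local vessel direction.)"""
        h, w = len(vesselness), len(vesselness[0])
        region, queue = set(seeds), deque(seeds)
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                        and vesselness[nr][nc] >= thresh):
                    region.add((nr, nc))
                    queue.append((nr, nc))
        return region

    def multiscale_grow(vesselness, seeds, thresholds):
        """Iterate coarse-to-fine: the region grown at each threshold
        seeds the next pass, so thin low-response branches are reached."""
        region = set(seeds)
        for t in thresholds:
            region = grow(vesselness, region, t)
        return region

    # Toy vesselness map: a bright trunk fading into a thin branch.
    vmap = [
        [0.9, 0.8, 0.1, 0.1],
        [0.1, 0.7, 0.4, 0.1],
        [0.1, 0.1, 0.3, 0.2],
    ]
    tree = multiscale_grow(vmap, [(0, 0)], thresholds=[0.6, 0.3])
    ```

    A single pass at the strict threshold would stop at the bright trunk; the second, looser pass seeded from it picks up the faint branch, which is the motivation for the multiscale scheme.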

  16. Unconventional Word Segmentation in Emerging Bilingual Students' Writing: A Longitudinal Analysis

    ERIC Educational Resources Information Center

    Sparrow, Wendy

    2014-01-01

    This study explores cross-language and longitudinal patterns in unconventional word segmentation in 25 emerging bilingual students' (Spanish/English) writing from first through third grade. Spanish and English writing samples were collected annually and analyzed for two basic types of unconventional word segmentation: hyposegmentation, in…

  17. Understanding the market for geographic information: A market segmentation and characteristics analysis

    NASA Technical Reports Server (NTRS)

    Piper, William S.; Mick, Mark W.

    1994-01-01

    Findings and results from a marketing research study are presented. The report identifies market segments and the product types to satisfy demand in each. An estimate of market size is based on the specific industries in each segment. A sample of ten industries was used in the study. The scientific study covered U.S. firms only.

  18. A Theoretical Analysis of How Segmentation of Dynamic Visualizations Optimizes Students' Learning

    ERIC Educational Resources Information Center

    Spanjers, Ingrid A. E.; van Gog, Tamara; van Merrienboer, Jeroen J. G.

    2010-01-01

    This article reviews studies investigating segmentation of dynamic visualizations (i.e., showing dynamic visualizations in pieces with pauses in between) and discusses two not mutually exclusive processes that might underlie the effectiveness of segmentation. First, cognitive activities needed for dealing with the transience of dynamic…

  19. Bivariate segmentation of SNP-array data for allele-specific copy number analysis in tumour samples

    PubMed Central

    2013-01-01

    Background SNP arrays output two signals that reflect the total genomic copy number (LRR) and the allelic ratio (BAF), which in combination allow the characterisation of allele-specific copy numbers (ASCNs). While methods based on hidden Markov models (HMMs) have been extended from array comparative genomic hybridisation (aCGH) to jointly handle the two signals, only one method based on change-point detection, ASCAT, performs bivariate segmentation. Results In the present work, we introduce a generic framework for bivariate segmentation of SNP array data for ASCN analysis. To this end, we discuss the characteristics of the typically applied BAF transformation and how they affect segmentation, introduce concepts of multivariate time series analysis that are of concern in this field and discuss the appropriate formulation of the problem. The framework is implemented in a method named CnaStruct, the bivariate form of the structural change model (SCM), which has been successfully applied to transcriptome mapping and aCGH. Conclusions On a comprehensive synthetic dataset, we show that CnaStruct outperforms the segmentation of existing ASCN analysis methods. Furthermore, CnaStruct can be integrated into the workflows of several ASCN analysis tools in order to improve their performance, especially on tumour samples highly contaminated by normal cells. PMID:23497144
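    As a toy illustration of bivariate change-point detection on the two SNP-array signals, the sketch below finds the single split that minimises the pooled within-segment squared error of LRR and BAF jointly (CnaStruct itself fits a structural change model; this shows only the underlying idea, on invented probe values):

    ```python
    import numpy as np

    def joint_changepoint(lrr, baf):
        """Pick the single split minimising the pooled within-segment
        squared error of both signals together -- a toy stand-in for
        bivariate change-point segmentation of (LRR, BAF)."""
        x = np.column_stack([lrr, baf]).astype(float)

        def sse(seg):
            return ((seg - seg.mean(axis=0)) ** 2).sum()

        costs = [sse(x[:t]) + sse(x[t:]) for t in range(1, len(x))]
        return 1 + int(np.argmin(costs))

    # Hypothetical probe signals: the copy number changes at index 4.
    lrr = [0.0, 0.1, -0.1, 0.0, 0.8, 0.9, 0.7, 0.8]
    baf = [0.5, 0.5, 0.5, 0.5, 0.2, 0.2, 0.3, 0.2]
    cp = joint_changepoint(lrr, baf)
    ```

    Using both signals jointly is the point: a breakpoint that is weak in LRR alone can still be recovered when the BAF shift reinforces it.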

  20. Concept Area Two Objectives and Test Items (Rev.) Part One, Part Two. Economic Analysis Course. Segments 17-49.

    ERIC Educational Resources Information Center

    Sterling Inst., Washington, DC. Educational Technology Center.

    A multimedia course in economic analysis was developed and used in conjunction with the United States Naval Academy. (See ED 043 790 and ED 043 791 for final reports of the project evaluation and development model.) This report deals with the second concept area of the course and focuses on macroeconomics. Segments 17 through 49 are presented,…

  1. Using Paleoseismic Trenching and LiDAR Analysis to Evaluate Rupture Propagation Through Segment Boundaries of the Central Wasatch Fault Zone, Utah

    NASA Astrophysics Data System (ADS)

    Bennett, S. E. K.; DuRoss, C. B.; Reitman, N. G.; Devore, J. R.; Hiscock, A.; Gold, R. D.; Briggs, R. W.; Personius, S. F.

    2014-12-01

    Paleoseismic data near fault segment boundaries constrain the extent of past surface ruptures and the persistence of rupture termination at segment boundaries. Paleoseismic evidence for large (M≥7.0) earthquakes on the central Holocene-active fault segments of the 350-km-long Wasatch fault zone (WFZ) generally supports single-segment ruptures but also permits multi-segment rupture scenarios. The extent and frequency of ruptures that span segment boundaries remains poorly known, adding uncertainty to seismic hazard models for this populated region of Utah. To address these uncertainties we conducted four paleoseismic investigations near the Salt Lake City-Provo and Provo-Nephi segment boundaries of the WFZ. We examined an exposure of the WFZ at Maple Canyon (Woodland Hills, UT) and excavated the Flat Canyon trench (Salem, UT), 7 and 11 km, respectively, from the southern tip of the Provo segment. We document evidence for at least five earthquakes at Maple Canyon and four to seven earthquakes that post-date mid-Holocene fan deposits at Flat Canyon. These earthquake chronologies will be compared to seven earthquakes observed in previous trenches on the northern Nephi segment to assess rupture correlation across the Provo-Nephi segment boundary. To assess rupture correlation across the Salt Lake City-Provo segment boundary we excavated the Alpine trench (Alpine, UT), 1 km from the northern tip of the Provo segment, and the Corner Canyon trench (Draper, UT) 1 km from the southern tip of the Salt Lake City segment. We document evidence for six earthquakes at both sites. Ongoing geochronologic analysis (14C, optically stimulated luminescence) will constrain earthquake chronologies and help identify through-going ruptures across these segment boundaries. Analysis of new high-resolution (0.5m) airborne LiDAR along the entire WFZ will quantify latest Quaternary displacements and slip rates and document spatial and temporal slip patterns near fault segment boundaries.

  2. High-throughput histopathological image analysis via robust cell segmentation and hashing.

    PubMed

    Zhang, Xiaofan; Xing, Fuyong; Su, Hai; Yang, Lin; Zhang, Shaoting

    2015-12-01

    Computer-aided diagnosis of histopathological images usually requires examining all cells for accurate diagnosis. Traditional computational methods may have efficiency issues when performing cell-level analysis. In this paper, we propose a robust and scalable solution to enable such analysis in a real-time fashion. Specifically, a robust segmentation method is developed to delineate cells accurately using Gaussian-based hierarchical voting and a repulsive balloon model. A large-scale image retrieval approach is also designed to examine and classify each cell of a testing image by comparing it with a massive database, e.g., half a million cells extracted from the training dataset. We evaluate this proposed framework on a challenging and important clinical use case, i.e., differentiation of two types of lung cancers (the adenocarcinoma and squamous carcinoma), using thousands of lung microscopic tissue images extracted from hundreds of patients. Our method has achieved promising accuracy and running time by searching among half a million cells.

  3. Image Segmentation and Analysis of Flexion-Extension Radiographs of Cervical Spines

    PubMed Central

    Enikov, Eniko T.

    2014-01-01

    We present a new analysis tool for cervical flexion-extension radiographs based on machine vision and computerized image processing. The method is based on semiautomatic image segmentation leading to detection of common landmarks such as the spinolaminar (SL) line or contour lines of the implanted anterior cervical plates. The technique allows for visualization of the local curvature of these landmarks during flexion-extension experiments. In addition to changes in the curvature of the SL line, it has been found that the cervical plates also deform during flexion-extension examination. While extension radiographs reveal larger curvature changes in the SL line, flexion radiographs on the other hand tend to generate larger curvature changes in the implanted cervical plates. Furthermore, while some lordosis is always present in the cervical plates by design, it actually decreases during extension and increases during flexion. Possible causes of this unexpected finding are also discussed. The described analysis may lead to a more precise interpretation of flexion-extension radiographs, allowing diagnosis of spinal instability and/or pseudoarthrosis in already seemingly fused spines. PMID:27006937

  4. Quantitative Analysis of Ligand-Induced Endocytosis of FLAGELLIN-SENSING 2 Using Automated Image Segmentation.

    PubMed

    Leslie, Michelle E; Heese, Antje

    2017-01-01

    Plants are equipped with a suite of plant pattern recognition receptors (PRRs) that must be properly trafficked to and from the plasma membrane (PM), which serves as the host-pathogen interface, for robust detection of invading pathogenic microbes. Recognition of bacterial flagellin, or the derived peptide flg22, is facilitated by the PM-localized PRR, FLAGELLIN SENSING 2 (FLS2). Upon flg22 binding, FLS2 is rapidly internalized from the PM into endosomal compartments and subsequently degraded. To understand better the integration of FLS2 endocytosis and signaling outputs, we developed methods for the quantitative analysis of FLS2 trafficking using freely available bioimage informatic tools. Emphasis was placed on robust recognition of features and ease of access for users. Using the free and open-source software Fiji (Fiji is just ImageJ) and Trainable Weka Segmentation (TWS) plug-in, we developed a workflow for the automated identification of green fluorescent protein (GFP)-tagged FLS2 in endosomal puncta. Fiji-TWS methods can be adapted with ease for the analysis of FLS2 trafficking in various genetic backgrounds as well as for the endocytic regulation of diverse plant PRRs.

  5. Stereophotogrammetric Mass Distribution Parameter Determination Of The Lower Body Segments For Use In Gait Analysis

    NASA Astrophysics Data System (ADS)

    Sheffer, Daniel B.; Schaer, Alex R.; Baumann, Juerg U.

    1989-04-01

    Inclusion of mass distribution information in biomechanical analysis of motion is a requirement for the accurate calculation of external moments and forces acting on the segmental joints during locomotion. Regression equations produced from a variety of photogrammetric, anthropometric and cadaveric studies have been developed and espoused in the literature. Because of limitations in the accuracy of inertial properties predicted by regression equations developed on one population and then applied to a different study population, the employment of a measurement technique that accurately defines the shape of each individual subject measured is desirable. This individual data acquisition method is especially needed when analyzing the gait of subjects whose extremity geometry differs greatly from that considered "normal", or who may possess gross asymmetries in shape in their own contralateral limbs. This study presents the photogrammetric acquisition and data analysis methodology used to assess the inertial tensors of two groups of subjects, one with spastic diplegic cerebral palsy and the other considered normal.

  6. High-Throughput Histopathological Image Analysis via Robust Cell Segmentation and Hashing

    PubMed Central

    Zhang, Xiaofan; Xing, Fuyong; Su, Hai; Yang, Lin; Zhang, Shaoting

    2015-01-01

    Computer-aided diagnosis of histopathological images usually requires examining all cells for accurate diagnosis. Traditional computational methods may have efficiency issues when performing cell-level analysis. In this paper, we propose a robust and scalable solution to enable such analysis in a real-time fashion. Specifically, a robust segmentation method is developed to delineate cells accurately using Gaussian-based hierarchical voting and a repulsive balloon model. A large-scale image retrieval approach is also designed to examine and classify each cell of a testing image by comparing it with a massive database, e.g., half a million cells extracted from the training dataset. We evaluate this proposed framework on a challenging and important clinical use case, i.e., differentiation of two types of lung cancers (the adenocarcinoma and squamous carcinoma), using thousands of lung microscopic tissue images extracted from hundreds of patients. Our method has achieved promising accuracy and running time by searching among half a million cells. PMID:26599156

  7. Interactive 3D segmentation of the prostate in magnetic resonance images using shape and local appearance similarity analysis

    NASA Astrophysics Data System (ADS)

    Shahedi, Maysam; Fenster, Aaron; Cool, Derek W.; Romagnoli, Cesare; Ward, Aaron D.

    2013-03-01

    3D segmentation of the prostate in medical images is useful to prostate cancer diagnosis and therapy guidance, but is time-consuming to perform manually. Clinical translation of computer-assisted segmentation algorithms for this purpose requires a comprehensive and complementary set of evaluation metrics that are informative to the clinical end user. We have developed an interactive 3D prostate segmentation method for 1.5T and 3.0T T2-weighted magnetic resonance imaging (T2W MRI) acquired using an endorectal coil. We evaluated our method against manual segmentations of 36 3D images using complementary boundary-based (mean absolute distance; MAD), regional overlap (Dice similarity coefficient; DSC) and volume difference (ΔV) metrics. Our technique is based on inter-subject prostate shape and local boundary appearance similarity. In the training phase, we calculated a point distribution model (PDM) and a set of local mean intensity patches centered on the prostate border to capture shape and appearance variability. To segment an unseen image, we defined a set of rays - one corresponding to each of the mean intensity patches computed in training - emanating from the prostate centre. We used a radial-based search strategy and translated each mean intensity patch along its corresponding ray, selecting as a candidate the boundary point with the highest normalized cross correlation along each ray. These boundary points were then regularized using the PDM. For the whole gland, we measured a mean±std MAD of 2.5±0.7 mm, DSC of 80±4%, and ΔV of 1.1±8.8 cc. We also provided an anatomic breakdown of these metrics within the prostatic base, mid-gland, and apex.
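    The radial search can be illustrated in 1-D: sample the image intensity along one ray, slide a mean-intensity template over it, and keep the position with the highest normalized cross-correlation. The profile and template values below are hypothetical:

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation of two equal-length 1-D patches."""
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float((a * b).mean())

    def best_boundary(profile, template):
        """Slide a mean-intensity template along an intensity profile
        sampled on one ray; return the centre index of the window with
        the highest NCC -- the candidate boundary point on that ray."""
        k = len(template)
        scores = [ncc(profile[i:i + k], template)
                  for i in range(len(profile) - k + 1)]
        return int(np.argmax(scores)) + k // 2

    # Hypothetical ray profile crossing a dark-to-bright boundary,
    # and a template modelling that step edge.
    profile = np.array([10, 11, 10, 12, 40, 80, 82, 81], float)
    template = np.array([0, 50, 100], float)
    idx = best_boundary(profile, template)
    ```

    Because NCC normalizes out local mean and contrast, the template matches the step's shape rather than its absolute intensities; in the full method the candidates from all rays are then regularized by the PDM.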

  8. Edge preserving smoothing and segmentation of 4-D images via transversely isotropic scale-space processing and fingerprint analysis

    SciTech Connect

    Reutter, Bryan W.; Algazi, V. Ralph; Gullberg, Grant T; Huesman, Ronald H.

    2004-01-19

    Enhancements are described for an approach that unifies edge preserving smoothing with segmentation of time sequences of volumetric images, based on differential edge detection at multiple spatial and temporal scales. Potential applications of these 4-D methods include segmentation of respiratory gated positron emission tomography (PET) transmission images to improve accuracy of attenuation correction for imaging heart and lung lesions, and segmentation of dynamic cardiac single photon emission computed tomography (SPECT) images to facilitate unbiased estimation of time-activity curves and kinetic parameters for left ventricular volumes of interest. Improved segmentation of lung surfaces in simulated respiratory gated cardiac PET transmission images is achieved with a 4-D edge detection operator composed of edge preserving 1-D operators applied in various spatial and temporal directions. Smoothing along the axis of a 1-D operator is driven by structure separation seen in the scale-space fingerprint, rather than by image contrast. Spurious noise structures are reduced with use of small-scale isotropic smoothing in directions transverse to the 1-D operator axis. Analytic expressions are obtained for directional derivatives of the smoothed, edge preserved image, and the expressions are used to compose a 4-D operator that detects edges as zero-crossings in the second derivative in the direction of the image intensity gradient. Additional improvement in segmentation is anticipated with use of multiscale transversely isotropic smoothing and a novel interpolation method that improves the behavior of the directional derivatives. The interpolation method is demonstrated on a simulated 1-D edge and incorporation of the method into the 4-D algorithm is described.
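    A 1-D analogue of the edge operator, smooth, differentiate twice, and mark edges at sign changes of the second derivative, can be sketched as follows (the kernel and intensity profile are illustrative, not the paper's multiscale operator):

    ```python
    import numpy as np

    def zero_crossing_edges(profile, kernel=(1, 2, 1)):
        """Smooth a 1-D intensity profile, take its discrete second
        derivative, and mark an edge wherever that derivative changes
        sign -- a 1-D analogue of detecting zero-crossings of the second
        derivative along the intensity gradient."""
        k = np.array(kernel, float)
        s = np.convolve(profile, k / k.sum(), mode="same")
        d2 = np.diff(s, n=2)
        return [i + 1 for i in range(len(d2) - 1) if d2[i] * d2[i + 1] < 0]

    # Hypothetical profile stepping from a dark region to a bright one.
    profile = np.array([0, 0, 0, 1, 3, 7, 9, 10, 10, 10], float)
    edges = zero_crossing_edges(profile)
    ```

    Smoothing before differentiation is what keeps the second derivative from flipping sign on every noise sample; the full 4-D operator applies edge-preserving 1-D smoothing in several spatial and temporal directions before this zero-crossing test.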

  9. CENTS: Cortical Enhanced Neonatal Tissue Segmentation

    PubMed Central

    Shi, Feng; Shen, Dinggang; Yap, Pew-Thian; Fan, Yong; Cheng, Jie-Zhi; An, Hongyu; Wald, Lawrence L.; Gerig, Guido; Gilmore, John H.; Lin, Weili

    2010-01-01

    The acquisition of high-quality magnetic resonance (MR) images of neonatal brains is largely hampered by their characteristically small head size and insufficient tissue contrast. As a result, subsequent image processing and analysis, especially brain tissue segmentation, are often affected. To overcome this problem, a dedicated phased array neonatal head coil is utilized to improve MR image quality by augmenting signal-to-noise ratio and spatial resolution without lengthening data acquisition time. In addition, a specialized hybrid atlas-based tissue segmentation algorithm is developed for the delineation of fine structures in the acquired neonatal brain MR images. The proposed tissue segmentation method first enhances the sheet-like cortical gray matter (GM) structures in the to-be-segmented neonatal image with a Hessian filter for generation of a cortical GM confidence map. A neonatal population atlas is then generated by averaging the presegmented images of a population, weighted by their cortical GM similarity with respect to the to-be-segmented image. Finally, the neonatal population atlas is combined with the GM confidence map, and the resulting enhanced tissue probability maps for each tissue form a hybrid atlas that is used for atlas-based segmentation. Various experiments are conducted to compare the segmentations of the proposed method with manual segmentation (on both images acquired with a dedicated phased array coil and a conventional volume coil), as well as with the segmentations of two population-atlas-based methods. Results show the proposed method is capable of segmenting the neonatal brain with the best accuracy, and also preserving the most structural details in the cortical regions. PMID:20690143
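The cortical enhancement step exploits the fact that, at a voxel on a sheet-like structure, the local Hessian has one dominant eigenvalue and two near-zero ones. Below is a minimal sketch of such a Hessian-based sheetness measure; the response formula and the synthetic test volume are illustrative assumptions, not the filter used in CENTS:

```python
import numpy as np

def sheetness(volume):
    # Assemble the Hessian from repeated finite-difference gradients,
    # then score each voxel by the dominant eigenvalue magnitude,
    # damped when the second-largest eigenvalue is also large
    # (i.e., the structure is tube- or blob-like, not sheet-like).
    gz, gy, gx = np.gradient(volume.astype(float))
    H = np.empty(volume.shape + (3, 3))
    for i, g in enumerate((gz, gy, gx)):
        for j, gg in enumerate(np.gradient(g)):
            H[..., i, j] = gg
    eig = np.linalg.eigvalsh(H)
    order = np.argsort(np.abs(eig), axis=-1)
    lam = np.take_along_axis(eig, order, axis=-1)  # sorted by |lambda|
    l2, l3 = lam[..., 1], lam[..., 2]
    mag = np.abs(l3)
    ratio = np.abs(l2) / (mag + 1e-9)
    return mag * np.exp(-(ratio ** 2) / 0.25)

# Synthetic volume containing a smooth sheet centered at z = 10.
z = np.arange(20)[:, None, None]
vol = np.exp(-0.5 * ((z - 10) / 2.0) ** 2) * np.ones((20, 20, 20))
resp = sheetness(vol)
```

The response peaks on the sheet's central plane, which is the behavior a cortical GM confidence map needs.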

  10. Study of Alternate Space Shuttle Concepts. Volume 2, Part 2: Concept Analysis and Definition

    NASA Technical Reports Server (NTRS)

    1971-01-01

    This is the final report of a Phase A Study of Alternate Space Shuttle Concepts by the Lockheed Missiles & Space Company (LMSC) for the National Aeronautics and Space Administration George C. Marshall Space Flight Center (MSFC). The eleven-month study, which began on 30 June 1970, examines the stage-and-one-half and other Space Shuttle configurations and establishes feasibility, performance, cost, and schedules for the selected concepts. This final report consists of four volumes as follows: Volume I - Executive Summary, Volume II - Concept Analysis and Definition, Volume III - Program Planning, and Volume IV - Cost Data. This document is Volume II, Concept Analysis and Definition.

  11. a New Framework for Object-Based Image Analysis Based on Segmentation Scale Space and Random Forest Classifier

    NASA Astrophysics Data System (ADS)

    Hadavand, A.; Saadatseresht, M.; Homayouni, S.

    2015-12-01

    In this paper a new object-based framework is developed for automated scale selection in image segmentation. The quality of image objects has an important impact on further analyses. Because segmentation results depend strongly on the scale parameter, choosing the best value of this parameter for each class becomes a main challenge in object-based image analysis. We propose a new framework which employs a pixel-based land cover map to estimate the initial scale dedicated to each class. These scales are used to build a segmentation scale space (SSS), a hierarchy of image objects. Optimizing the SSS with respect to the NDVI and DSM values in each super-object yields the best scale in local regions of the image scene. The optimized SSS segmentations are finally classified to produce the final land cover map. A very high resolution aerial image and a digital surface model provided by the ISPRS 2D semantic labelling dataset are used in our experiments. The result of our proposed method is comparable to that of the ESP tool, a well-known method for estimating the segmentation scale, and marginally improved the overall accuracy of classification from 79% to 80%.

  12. Microfluidic Chip for High Efficiency Electrophoretic Analysis of Segmented Flow from a Microdialysis Probe and in Vivo Chemical Monitoring

    PubMed Central

    Wang, Meng; Roman, Gregory T.; Perry, Maura L.; Kennedy, Robert T.

    2009-01-01

    An effective method for in vivo chemical monitoring is to couple sampling probes, such as microdialysis, to on-line analytical methods. A limitation of this approach is that in vivo chemical dynamics may be distorted by flow and diffusion broadening during transfer from sampling probe to analytical system. Converting a homogeneous sample stream to segmented flow can prevent such broadening. We have developed a system for coupling segmented microdialysis flow with chip-based electrophoresis. In this system, the dialysis probe is integrated with a PDMS chip that merges dialysate with fluorogenic reagent and segments the flow into 8–10 nL plugs at 0.3–0.5 Hz separated by perfluorodecalin. The plugs flow to a glass chip where they are extracted to an aqueous stream and analyzed by electrophoresis with fluorescence detection. The novel extraction system connects the segmented flow to an electrophoresis sampling channel by a shallow and hydrophilic extraction bridge that removes the entire aqueous droplet from the oil stream. With this approach, temporal resolution was 35 s and independent of distance between sampling and analysis. Electrophoretic analysis produced separation with 223,000 ± 21,000 theoretical plates, 4.4% RSD in peak height, and detection limits of 90–180 nM for six amino acids. This performance was made possible by three key elements: 1) reliable transfer of plug flow to a glass chip; 2) efficient extraction of aqueous plugs from segmented flow; and 3) electrophoretic injection suitable for high efficiency separation with minimal dilution of sample. The system was used to detect rapid concentration changes evoked by infusing glutamate uptake inhibitor into the striatum of anesthetized rats. These results demonstrate the potential of incorporating segmented flow into separations-based sensing schemes for studying chemical dynamics in vivo with improved temporal resolution. PMID:19803495

  13. Analysis and segmentation of images in case of solving problems of detecting and tracing objects on real-time video

    NASA Astrophysics Data System (ADS)

    Ezhova, Kseniia; Fedorenko, Dmitriy; Chuhlamov, Anton

    2016-04-01

    The article deals with methods of image segmentation based on color space conversion, which allow efficient detection of a single color against a complex background under varying lighting, as well as detection of objects on a homogeneous background. The results of an analysis of segmentation algorithms of this type are presented, along with the possibility of implementing them in software. The implemented algorithm is computationally expensive, which limits its application to video analysis; however, it solves the problem of analyzing objects in an image when no image dictionary or knowledge base is available, as well as the problem of choosing optimal frame quantization parameters for video analysis.
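A minimal sketch of the color-space-conversion idea: converting RGB to HSV separates chromatic content (hue) from lighting (value), so a single color can be thresholded robustly under brightness changes. The tolerance and saturation/value cutoffs below are illustrative assumptions:

```python
import colorsys

def segment_by_hue(pixels, target_hue, tol=0.05, min_sat=0.3, min_val=0.2):
    # pixels: iterable of (r, g, b) tuples in [0, 1]. A pixel matches if
    # its hue is within `tol` of the target (hue is circular in [0, 1))
    # and it is saturated/bright enough to have a meaningful hue.
    mask = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        d = min(abs(h - target_hue), 1 - abs(h - target_hue))
        mask.append(d < tol and s >= min_sat and v >= min_val)
    return mask

# Pure red at two brightness levels, plus a green and a gray pixel:
# hue thresholding keeps both reds despite the lighting difference.
pixels = [(1.0, 0.0, 0.0), (0.5, 0.0, 0.0), (0.0, 1.0, 0.0), (0.5, 0.5, 0.5)]
mask = segment_by_hue(pixels, target_hue=0.0)
# → [True, True, False, False]
```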

  14. Three-dimensional volume analysis of vasculature in engineered tissues

    NASA Astrophysics Data System (ADS)

    YousefHussien, Mohammed; Garvin, Kelley; Dalecki, Diane; Saber, Eli; Helguera, María.

    2013-01-01

    Three-dimensional textural and volumetric image analysis holds great potential for understanding the image data produced by multi-photon microscopy. In this paper, an algorithm that quantitatively analyzes the texture and the morphology of vasculature in engineered tissues is proposed. The investigated 3D artificial tissues consist of Human Umbilical Vein Endothelial Cells (HUVEC) embedded in collagen exposed to two regimes of ultrasound standing wave fields under different pressure conditions. Textural features were evaluated using the normalized Gray-Level Co-occurrence Matrix (GLCM) combined with Gray-Level Run Length Matrix (GLRLM) analysis. To minimize error resulting from any possible volume rotation and to provide a comprehensive textural analysis, an averaged version of nine GLCM and GLRLM orientations is used. To evaluate volumetric features, an automatic threshold using the gray level mean value is utilized. Results show that our analysis is able to differentiate among the exposed samples, due to morphological changes induced by the standing wave fields. Furthermore, we demonstrate that providing more textural parameters than are currently reported in the literature enhances the quantitative understanding of the heterogeneity of artificial tissues.
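As a small illustration of the GLCM features mentioned above, the sketch below builds a co-occurrence matrix for a single offset and computes two common descriptors, contrast and homogeneity. The paper averages nine GLCM/GLRLM orientations in 3D; this toy 2D version uses one orientation only:

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=4):
    # Count co-occurrences of gray levels at offset (dy, dx),
    # then normalize to a joint probability matrix.
    P = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[image[y, x], image[y + dy, x + dx]] += 1
    return P / P.sum()

def contrast(P):
    # Large when co-occurring levels differ strongly.
    i, j = np.indices(P.shape)
    return float((P * (i - j) ** 2).sum())

def homogeneity(P):
    # Large when co-occurring levels are similar.
    i, j = np.indices(P.shape)
    return float((P / (1.0 + np.abs(i - j))).sum())

# A uniform patch has zero contrast; a checkerboard maximizes it.
flat = np.zeros((4, 4), dtype=int)
checker = np.indices((4, 4)).sum(axis=0) % 2
```

GLRLM analysis is analogous but counts runs of identical gray levels rather than pixel pairs.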

  15. Flow Analysis on a Limited Volume Chilled Water System

    SciTech Connect

    Zheng, Lin

    2012-07-31

    LANL currently has a limited volume chilled water system for use in a glove box, but the system needs to be updated. Before we start building our new system, a flow analysis is needed to ensure that there are no high flow rates, extreme pressures, or other hazards in the system. In this project the piping system is extremely important to us because it directly affects the overall design of the entire system. The primary components of the chilled water piping system are shown in the design. They include the pipes themselves (perhaps of more than one diameter), the various fittings used to connect the individual pipes into the desired system, the flow rate control devices (valves), and the pumps that add energy to the fluid. Even the simplest pipe systems are actually quite complex when viewed in terms of rigorous analytical considerations. I used an 'exact' analysis and dimensional analysis considerations combined with experimental results for this project. When 'real-world' effects are important (such as viscous effects in pipe flows), it is often difficult or impossible to use only theoretical methods to obtain the desired results. A judicious combination of experimental data with theoretical considerations and dimensional analysis is needed in order to reduce risks to an acceptable level.
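For the 'exact' part of such a pipe-flow analysis, the head loss in each pipe run is typically computed with the Darcy-Weisbach equation and an explicit friction-factor correlation. The sketch below uses the Swamee-Jain approximation to the Colebrook equation; the pipe dimensions and flow rate are illustrative assumptions, not the LANL system's values:

```python
import math

def friction_factor(re, rel_roughness):
    # Swamee-Jain explicit approximation to the Colebrook equation,
    # valid for turbulent flow (Re > ~4000).
    return 0.25 / math.log10(rel_roughness / 3.7 + 5.74 / re ** 0.9) ** 2

def head_loss(q, d, length, roughness=1.5e-6, nu=1.0e-6, g=9.81):
    # Darcy-Weisbach head loss (m) for flow rate q (m^3/s) through a
    # pipe of diameter d (m) and the given length (m); nu is the
    # kinematic viscosity of water (m^2/s), roughness is absolute (m).
    area = math.pi * d ** 2 / 4
    v = q / area
    re = v * d / nu
    f = friction_factor(re, roughness / d)
    return f * (length / d) * v ** 2 / (2 * g)

# 1 L/s of chilled water through 10 m of 25 mm smooth drawn tubing.
hl = head_loss(q=1e-3, d=0.025, length=10.0)
```

For a full system analysis, the minor losses of fittings and valves would be added via loss coefficients, and the pump curve matched against the total system curve.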

  16. Structural and Functional Analysis of Transmembrane Segment IV of the Salt Tolerance Protein Sod2*

    PubMed Central

    Ullah, Asad; Kemp, Grant; Lee, Brian; Alves, Claudia; Young, Howard; Sykes, Brian D.; Fliegel, Larry

    2013-01-01

    Sod2 is the plasma membrane Na+/H+ exchanger of the fission yeast Schizosaccharomyces pombe. It provides salt tolerance by removing excess intracellular sodium (or lithium) in exchange for protons. We examined the role of amino acid residues of transmembrane segment IV (TM IV) (126FPQINFLGSLLIAGCITSTDPVLSALI152) in activity by using alanine scanning mutagenesis and examining salt tolerance in sod2-deficient S. pombe. Two amino acids were critical for function. Mutations T144A and V147A resulted in defective proteins that did not confer salt tolerance when reintroduced into S. pombe. Sod2 protein with other alanine mutations in TM IV had little or no effect. T144D and T144K mutant proteins were inactive; however, a T144S protein was functional and provided lithium, but not sodium, tolerance and transport. Analysis of sensitivity to trypsin indicated that the mutations caused a conformational change in the Sod2 protein. We expressed and purified TM IV (amino acids 125–154). NMR analysis yielded a model with two helical regions (amino acids 128–142 and 147–154) separated by an unwound region (amino acids 143–146). Molecular modeling of the entire Sod2 protein suggested that TM IV has a structure similar to that deduced by NMR analysis and an overall structure similar to that of Escherichia coli NhaA. TM IV of Sod2 has similarities to TM V of the Zygosaccharomyces rouxii Na+/H+ exchanger and TM VI of isoform 1 of mammalian Na+/H+ exchanger. TM IV of Sod2 is critical to transport and may be involved in cation binding or conformational changes of the protein. PMID:23836910

  17. Movement Analysis of Flexion and Extension of Honeybee Abdomen Based on an Adaptive Segmented Structure.

    PubMed

    Zhao, Jieliang; Wu, Jianing; Yan, Shaoze

    2015-01-01

    Honeybees (Apis mellifera) curl their abdomens during daily rhythmic activities. Before this was established, it was assumed that honeybees could curl their abdomens freely. An intriguing but little-studied feature, however, is the possibly unidirectional abdominal deformation in free-flying honeybees. A high-speed video camera was used to capture the curling and to analyze changes in the arc length of the honeybee abdomen, both in free flight and in fixed samples. Frozen sections and environmental scanning electron microscopy were used to investigate the microstructure and motion principle of the honeybee abdomen and to explore the physical structure restricting its curling. An adaptive segmented structure, in particular the folded intersegmental membrane (FIM), plays a dominant role in the flexion and extension of the abdomen. The structural features of the FIM were used to mimic and demonstrate the movement restriction on the honeybee abdomen. Combining experimental analysis and theoretical demonstration, a unidirectional bending mechanism of the honeybee abdomen was revealed. This finding offers a new perspective that can be imitated in aerospace vehicle design.

  18. Magnetic Field Analysis of Lorentz Motors Using a Novel Segmented Magnetic Equivalent Circuit Method

    PubMed Central

    Qian, Junbing; Chen, Xuedong; Chen, Han; Zeng, Lizhan; Li, Xiaoqing

    2013-01-01

    A simple and accurate method based on the magnetic equivalent circuit (MEC) model is proposed in this paper to predict magnetic flux density (MFD) distribution of the air-gap in a Lorentz motor (LM). In conventional MEC methods, the permanent magnet (PM) is treated as one common source and all branches of MEC are coupled together to become a MEC network. In our proposed method, every PM flux source is divided into three sub-sections (the outer, the middle and the inner). Thus, the MEC of LM is divided correspondingly into three independent sub-loops. As the size of the middle sub-MEC is small enough, it can be treated as an ideal MEC and solved accurately. Combining with decoupled analysis of outer and inner MECs, MFD distribution in the air-gap can be approximated by a quadratic curve, and the complex calculation of reluctances in MECs can be avoided. The segmented magnetic equivalent circuit (SMEC) method is used to analyze a LM, and its effectiveness is demonstrated by comparison with FEA, conventional MEC and experimental results. PMID:23358368

  19. Uncontrolled manifold analysis of segmental angle variability during walking: preadolescents with and without Down syndrome.

    PubMed

    Black, David P; Smith, Beth A; Wu, Jianhua; Ulrich, Beverly D

    2007-12-01

    The uncontrolled manifold (UCM) approach allows us to address issues concerning the nature of variability. In this study we applied the UCM analysis to gait and to a population known for exhibiting high levels of performance variability, Down syndrome (DS). We wanted to determine whether preadolescents (ages 8 to 10) with DS partition goal-equivalent variability (UCM∥) and non-goal-equivalent variability differently than peers with typical development (TD), and whether treadmill practice would result in utilizing greater amounts of functional, task-specific variability to accomplish the task goal. We also wanted to determine how variance is structured with respect to two important performance variables: center of mass (COM) and head trajectory at one specific event (i.e., heel contact) for both groups during gait. Preadolescents with and without DS walked on a treadmill below, at, and above their preferred overground speed. We tested both groups before and after four visits of treadmill practice. We found that children with DS partition more UCM∥ variance than children with TD across all speeds, both pre and post practice. The results also suggest that more segmental configuration variance was structured such that less motion of the COM than of the head position was exhibited at heel contact. Overall, we believe children with DS employ a different control strategy to compensate for their inherent limitations by exploiting the variability that corresponds to successfully performing the task.
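The UCM computation itself partitions trial-to-trial variance of the segment configuration into a component lying in the null space of a Jacobian (deviations that leave the performance variable, e.g., COM position, unchanged) and an orthogonal component. A minimal sketch for a hypothetical two-segment planar model follows; the Jacobian and simulated trials are assumptions for illustration, not the study's gait data:

```python
import numpy as np

def ucm_partition(angles, jacobian):
    # Split trial-to-trial deviations of the segment configuration into
    # variance parallel to the UCM (null space of the Jacobian, where
    # deviations leave the performance variable unchanged) and variance
    # orthogonal to it, each normalized per DOF and per trial.
    dev = angles - angles.mean(axis=0)
    _, s, vt = np.linalg.svd(jacobian)
    rank = int((s > 1e-10).sum())
    null_basis = vt[rank:].T                 # columns span the UCM
    par = dev @ null_basis @ null_basis.T    # projection onto the UCM
    ort = dev - par
    n_trials, dof = angles.shape
    v_par = (par ** 2).sum() / ((dof - rank) * n_trials)
    v_ort = (ort ** 2).sum() / (rank * n_trials)
    return v_par, v_ort

# Hypothetical two-segment planar model; performance variable: endpoint
# height y = sin(q1) + sin(q1 + q2), so J = [2, 1] at q1 = q2 = 0.
J = np.array([[2.0, 1.0]])
rng = np.random.default_rng(1)
ucm_dir = np.array([1.0, -2.0]) / np.sqrt(5.0)   # satisfies 2*dq1 + dq2 = 0
trials = 0.1 * rng.standard_normal((200, 1)) * ucm_dir \
         + 0.01 * rng.standard_normal((200, 2))
v_par, v_ort = ucm_partition(trials, J)
```

A v_par much greater than v_ort, as in these simulated trials, is the signature of compensated, goal-equivalent variability.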

  20. Segmented independent component analysis for improved separation of fetal cardiac signals from nonstationary fetal magnetocardiograms

    PubMed Central

    Murta, Luiz O.; Guzo, Mauro G.; Moraes, Eder R.; Baffa, Oswaldo; Wakai, Ronald T.; Comani, Silvia

    2015-01-01

    Fetal magnetocardiograms (fMCGs) have been successfully processed with independent component analysis (ICA) to separate the fetal cardiac signals, but ICA effectiveness can be limited by signal nonstationarities due to fetal movements. We propose an ICA-based method to improve the quality of fetal signals separated from fMCG affected by fetal movements. This technique (SegICA) includes a procedure to detect signal nonstationarities, according to which the fMCG recordings are divided into stationary segments that are then processed with ICA. The first and second statistical moments and the signal polarity reversal were used at different threshold levels to detect signal transients. SegICA effectiveness was assessed in two fMCG datasets (with and without fetal movements) by comparing the signal-to-noise ratio (SNR) of the signals extracted with ICA and with SegICA. Results showed that the SNR of fetal signals affected by fetal movements improved with SegICA, whereas the SNR gain was negligible elsewhere. The best measure for detecting signal nonstationarities of physiological origin was signal polarity reversal at threshold level 0.9. The first statistical moment also provided good results at threshold level 0.6. SegICA seems to be a promising method for separating fetal cardiac signals of improved quality from nonstationary fMCG recordings affected by fetal movements. PMID:25781658
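The segmentation step of such an approach can be sketched as a simple change detector on the first statistical moment: cut the recording wherever the means of adjacent windows differ by more than a few pooled standard errors, then process each stationary piece with ICA separately. The window size and threshold below are illustrative assumptions, not SegICA's settings:

```python
import numpy as np

def stationary_segments(x, win=50, thresh=3.0):
    # Compare the means of adjacent windows and place a segment
    # boundary wherever the jump exceeds `thresh` pooled standard
    # errors, flagging a shift in the first statistical moment.
    bounds = [0]
    n = len(x)
    for start in range(win, n - win, win):
        a, b = x[start - win:start], x[start:start + win]
        pooled = np.sqrt((a.var() + b.var()) / win) + 1e-12
        if abs(a.mean() - b.mean()) / pooled > thresh:
            bounds.append(start)
    bounds.append(n)
    return list(zip(bounds[:-1], bounds[1:]))

# A recording whose baseline jumps at sample 300 is cut there, so
# each stationary piece can then be handed to ICA on its own.
rng = np.random.default_rng(2)
x = np.concatenate([rng.standard_normal(300), 5 + rng.standard_normal(300)])
segs = stationary_segments(x, win=50)
```

SegICA additionally uses the second moment and signal polarity reversal as change measures; this sketch shows only the first-moment criterion.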

  1. Movement Analysis of Flexion and Extension of Honeybee Abdomen Based on an Adaptive Segmented Structure

    PubMed Central

    Zhao, Jieliang; Wu, Jianing; Yan, Shaoze

    2015-01-01

    Honeybees (Apis mellifera) curl their abdomens during daily rhythmic activities. Before this was established, it was assumed that honeybees could curl their abdomens freely. An intriguing but little-studied feature, however, is the possibly unidirectional abdominal deformation in free-flying honeybees. A high-speed video camera was used to capture the curling and to analyze changes in the arc length of the honeybee abdomen, both in free flight and in fixed samples. Frozen sections and environmental scanning electron microscopy were used to investigate the microstructure and motion principle of the honeybee abdomen and to explore the physical structure restricting its curling. An adaptive segmented structure, in particular the folded intersegmental membrane (FIM), plays a dominant role in the flexion and extension of the abdomen. The structural features of the FIM were used to mimic and demonstrate the movement restriction on the honeybee abdomen. Combining experimental analysis and theoretical demonstration, a unidirectional bending mechanism of the honeybee abdomen was revealed. This finding offers a new perspective that can be imitated in aerospace vehicle design. PMID:26223946

  2. phenoVein-A Tool for Leaf Vein Segmentation and Analysis.

    PubMed

    Bühler, Jonas; Rishmawi, Louai; Pflugfelder, Daniel; Huber, Gregor; Scharr, Hanno; Hülskamp, Martin; Koornneef, Maarten; Schurr, Ulrich; Jahnke, Siegfried

    2015-12-01

    Precise measurements of leaf vein traits are an important aspect of plant phenotyping for ecological and genetic research. Here, we present a powerful and user-friendly image analysis tool named phenoVein. It is dedicated to automated segmentation and analysis of leaf veins in images acquired with different imaging modalities (microscope, macrophotography, etc.), including options for comfortable manual correction. Advanced image filtering enhances veins against the background and compensates for local brightness inhomogeneities. The most important traits calculated are total vein length, vein density, piecewise vein lengths and widths, areole area, and skeleton graph statistics such as the number of branching or ending points. For the determination of vein widths, a model-based vein edge estimation approach has been implemented. Validation was performed for the measurement of vein length, vein width, and vein density of Arabidopsis (Arabidopsis thaliana), proving the reliability of phenoVein. We demonstrate the power of phenoVein on a set of previously described vein structure mutants of Arabidopsis (hemivenata, ondulata3, and asymmetric leaves2-101) compared with the wild-type accessions Columbia-0 and Landsberg erecta-0. phenoVein is freely available as open-source software.
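Of the traits listed, total vein length is commonly estimated from a one-pixel-wide skeleton of the segmented vein network. A minimal sketch (not phenoVein's implementation) counts axis-aligned neighbor pairs as one pixel unit and diagonal pairs as sqrt(2):

```python
import numpy as np

def skeleton_length(skel, pixel_size=1.0):
    # Total length of a one-pixel-wide binary skeleton: each
    # horizontal/vertical neighbor pair contributes 1 pixel unit,
    # each diagonal pair sqrt(2). Junction pixels are slightly
    # overcounted; adequate for a sketch, not phenoVein-grade.
    s = skel.astype(bool)
    straight = (s[:, :-1] & s[:, 1:]).sum() + (s[:-1, :] & s[1:, :]).sum()
    diagonal = (s[:-1, :-1] & s[1:, 1:]).sum() + (s[:-1, 1:] & s[1:, :-1]).sum()
    return float(straight + np.sqrt(2) * diagonal) * pixel_size

# A straight 10-pixel horizontal vein (9 unit steps) and a 5-pixel
# diagonal vein (4 steps of sqrt(2)).
vein = np.zeros((5, 10), dtype=bool)
vein[2, :] = True
diag = np.eye(5, dtype=bool)
length_h = skeleton_length(vein)   # → 9.0
length_d = skeleton_length(diag)   # → 4 * sqrt(2) ≈ 5.657
```

Vein density then follows directly as total length divided by leaf area.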

  3. Interactive Exploration and Analysis of Large-Scale Simulations Using Topology-Based Data Segmentation.

    PubMed

    Bremer, Peer-Timo; Weber, Gunther; Tierny, Julien; Pascucci, Valerio; Day, Marcus S; Bell, John B

    2011-09-01

    Large-scale simulations are increasingly being used to study complex scientific and engineering phenomena. As a result, advanced visualization and data analysis are also becoming an integral part of the scientific process. Often, a key step in extracting insight from these large simulations involves the definition, extraction, and evaluation of features in the space and time coordinates of the solution. However, in many applications, these features involve a range of parameters and decisions that will affect the quality and direction of the analysis. Examples include particular level sets of a specific scalar field, or local inequalities between derived quantities. A critical step in the analysis is to understand how these arbitrary parameters/decisions impact the statistical properties of the features, since such a characterization will help to evaluate the conclusions of the analysis as a whole. We present a new topological framework that in a single pass extracts and encodes entire families of possible feature definitions as well as their statistical properties. For each time step we construct a hierarchical merge tree, a highly compact yet flexible feature representation. While this data structure is more than two orders of magnitude smaller than the raw simulation data, it allows us to extract a set of features for any given parameter selection in a postprocessing step. Furthermore, we augment the trees with additional attributes, making it possible to gather a large number of useful global, local, and conditional statistics that would otherwise be extremely difficult to compile. We also use this representation to create tracking graphs that describe the temporal evolution of the features over time. Our system provides a linked-view interface to explore the time-evolution of the graph interactively alongside the segmentation, thus making it possible to perform extensive data analysis in a very efficient manner. We demonstrate our framework by extracting …

  4. Coal gasification systems engineering and analysis. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Feasibility analyses and systems engineering studies for a 20,000 tons per day medium Btu (MBG) coal gasification plant to be built by TVA in Northern Alabama were conducted. Major objectives were as follows: (1) provide design and cost data to support the selection of a gasifier technology and other major plant design parameters, (2) provide design and cost data to support alternate product evaluation, (3) prepare a technology development plan to address areas of high technical risk, and (4) develop schedules, PERT charts, and a work breakdown structure to aid in preliminary project planning. Volume one contains a summary of gasification system characterizations. Five gasification technologies were selected for evaluation: Koppers-Totzek, Texaco, Lurgi Dry Ash, Slagging Lurgi, and Babcock and Wilcox. A summary of the trade studies and cost sensitivity analysis is included.

  5. Comparison between manual and semi-automatic segmentation of nasal cavity and paranasal sinuses from CT images.

    PubMed

    Tingelhoff, K; Moral, A I; Kunkel, M E; Rilk, M; Wagner, I; Eichhorn, K G; Wahl, F M; Bootz, F

    2007-01-01

    Segmentation of medical image data has become increasingly important in recent years. The results are used for diagnosis, surgical planning, or workspace definition of robot-assisted systems. The purpose of this paper is to find out whether manual or semi-automatic segmentation is adequate for the ENT surgical workflow, or whether fully automatic segmentation of the paranasal sinuses and nasal cavity is needed. We present a comparison of manual and semi-automatic segmentation of the paranasal sinuses and the nasal cavity. Manual segmentation is performed by custom software, whereas semi-automatic segmentation is realized by a commercial product (Amira). For this study we used a CT dataset of the paranasal sinuses which consists of 98 transversal slices, each 1.0 mm thick, with a resolution of 512 x 512 pixels. For the analysis of both segmentation procedures we used volume, extension (width, length, and height), segmentation time, and 3D reconstruction. The segmentation time was reduced from 960 minutes with manual segmentation to 215 minutes with semi-automatic segmentation. We found the highest variances when segmenting the nasal cavity. For the paranasal sinuses, the volume differences between manual and semi-automatic segmentation are not significant. Depending on the required segmentation accuracy, both approaches deliver useful results and could be used, e.g., for robot-assisted systems. Nevertheless, both procedures are too time-consuming for the everyday surgical workflow. Fully automatic and reproducible segmentation algorithms are needed for segmentation of the paranasal sinuses and nasal cavity.
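The volume and extension measures used in such a comparison can be computed directly from a binary segmentation mask and the voxel spacing. A minimal sketch, with a toy mask rather than the study's CT data (the spacing values are illustrative):

```python
import numpy as np

def mask_volume_and_extent(mask, spacing):
    # Volume = voxel count x voxel volume; extent = bounding box of the
    # segmented voxels, both in mm. `spacing` is the (z, y, x) voxel
    # size in mm.
    voxels = int(mask.sum())
    volume = voxels * float(np.prod(spacing))
    idx = np.nonzero(mask)
    extent = tuple((i.max() - i.min() + 1) * s for i, s in zip(idx, spacing))
    return volume, extent

# Toy sinus mask inside a small stack of 1.0 mm slices with an
# assumed 0.4 mm in-plane resolution.
mask = np.zeros((10, 20, 20), dtype=bool)
mask[2:8, 5:15, 5:15] = True
volume, extent = mask_volume_and_extent(mask, spacing=(1.0, 0.4, 0.4))
```

Here the mask covers 600 voxels of 0.16 mm^3 each, giving 96 mm^3 and a 6.0 x 4.0 x 4.0 mm bounding box.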

  6. WE-G-207-05: Relationship Between CT Image Quality, Segmentation Performance, and Quantitative Image Feature Analysis

    SciTech Connect

    Lee, J; Nishikawa, R; Reiser, I; Boone, J

    2015-06-15

    Purpose: Segmentation quality can affect quantitative image feature analysis. The objective of this study is to examine the relationship between computed tomography (CT) image quality, segmentation performance, and quantitative image feature analysis. Methods: A total of 90 pathology-proven breast lesions in 87 dedicated breast CT images were considered. An iterative image reconstruction (IIR) algorithm was used to obtain CT images of different quality. With different combinations of 4 variables in the algorithm, this study obtained a total of 28 different qualities of CT images. Two imaging tasks/objectives were considered: 1) segmentation and 2) classification of the lesion as benign or malignant. Twenty-three image features were extracted after segmentation using a semi-automated algorithm, and 5 of them were selected via a feature selection technique. Logistic regression was trained and tested using leave-one-out cross-validation and its area under the ROC curve (AUC) was recorded. The standard deviation of a homogeneous portion and the gradient of a parenchymal portion of an example breast were used as estimates of image noise and sharpness. The DICE coefficient was computed using a radiologist's drawing on the lesion. Mean DICE and AUC were used as performance metrics for each of the 28 reconstructions. The relationship between segmentation and classification performance under the different reconstructions was compared. Distributions (median, 95% confidence interval) of DICE and AUC for each reconstruction were also compared. Results: A moderate correlation (Pearson's rho = 0.43, p-value = 0.02) between DICE and AUC values was found. However, the variation between DICE and AUC values for each reconstruction increased as the image sharpness increased. There was a combination of IIR parameters that resulted in the best segmentation with the worst classification performance. Conclusion: There are certain images that yield better segmentation or classification …
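The DICE coefficient used here measures the overlap between the algorithm's segmentation and the radiologist's drawing. A minimal sketch with toy masks:

```python
import numpy as np

def dice(a, b):
    # DICE coefficient: 2|A ∩ B| / (|A| + |B|); 1.0 for identical
    # masks, 0.0 for disjoint ones.
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom else 1.0

# Algorithm mask vs. reference mask shifted by two columns: the
# 8x8 squares overlap in a 8x6 region.
ref = np.zeros((16, 16), dtype=bool)
ref[4:12, 4:12] = True
seg = np.zeros((16, 16), dtype=bool)
seg[4:12, 6:14] = True
score = dice(ref, seg)  # → 0.75
```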

  7. A geometric analysis of mastectomy incisions: Optimizing intraoperative breast volume

    PubMed Central

    Chopp, David; Rawlani, Vinay; Ellis, Marco; Johnson, Sarah A; Buck, Donald W; Khan, Seema; Bethke, Kevin; Hansen, Nora; Kim, John YS

    2011-01-01

    INTRODUCTION: The advent of acellular dermis-based tissue expander breast reconstruction has placed an increased emphasis on optimizing intraoperative volume. Because skin preservation is a critical determinant of intraoperative volume expansion, a mathematical model was developed to capture the influence of incision dimension on subsequent tissue expander volumes. METHODS: A mathematical equation was developed to calculate breast volume via integration of a geometrically modelled breast cross-section. The equation calculates the volume changes associated with skin excised during the mastectomy incision by reducing the arc length of the cross-section. The degree of volume loss is subsequently calculated for excision dimensions ranging from 35 mm to 60 mm. RESULTS: A quadratic relationship between breast volume and the vertical dimension of the mastectomy incision exists, such that incrementally larger incisions lead to a disproportionately greater amount of volume loss. The vertical dimension of the mastectomy incision – more so than the horizontal dimension – is of critical importance to maintaining breast volume. Moreover, the predicted volume loss is more profound in smaller breasts and primarily occurs in areas that affect breast projection and ptosis. CONCLUSIONS: The present study is the first to model the relationship between the vertical dimensions of the mastectomy incision and subsequent volume loss. These geometric principles will aid in optimizing intraoperative volume expansion during expander-based breast reconstruction. PMID:22654531
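The qualitative behavior (volume loss driven by a reduced skin arc length) can be sketched with a much simpler stand-in for the paper's integral model: treat the breast as a spherical cap with a fixed chest-wall footprint, shorten the meridian arc by the vertical excision dimension, and compare cap volumes. All dimensions and the cap geometry itself are illustrative assumptions:

```python
import math

def cap_volume(arc_len, base_radius):
    # Spherical-cap model: meridian arc s = R*theta and base radius
    # a = R*sin(theta). Solve theta/sin(theta) = s/a by bisection
    # (the ratio increases monotonically on (0, pi); requires s > a),
    # then return the cap volume pi*h^2*(3R - h)/3.
    target = arc_len / base_radius
    lo, hi = 1e-6, math.pi - 1e-6
    for _ in range(100):
        mid = (lo + hi) / 2
        if mid / math.sin(mid) < target:
            lo = mid
        else:
            hi = mid
    theta = (lo + hi) / 2
    r = base_radius / math.sin(theta)
    h = r * (1 - math.cos(theta))
    return math.pi * h * h * (3 * r - h) / 3

def volume_loss(vertical_excision, arc_len=150.0, base_radius=70.0):
    # Excising a skin strip of the given vertical dimension (mm)
    # shortens the meridian arc by that amount.
    return (cap_volume(arc_len, base_radius)
            - cap_volume(arc_len - vertical_excision, base_radius))

# Volume loss (mm^3) for 35, 45, and 55 mm vertical excisions.
losses = [volume_loss(e) for e in (35.0, 45.0, 55.0)]
```

The toy model reproduces the monotone dependence of volume loss on the vertical excision dimension; the paper's integrated cross-section model is what yields the reported quadratic relationship.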

  8. Relationship between methamphetamine use history and segmental hair analysis findings of MA users.

    PubMed

    Han, Eunyoung; Lee, Sangeun; In, Sanghwan; Park, Meejung; Park, Yonghoon; Cho, Sungnam; Shin, Junguk; Lee, Hunjoo

    2015-09-01

    The aim of this study was to investigate the relationship between methamphetamine (MA) use history and the results of segmental hair analysis (1 and 3 cm sections) and whole hair analysis in Korean MA users in rehabilitation programs. Hair samples were collected from 26 Korean MA users. Eleven of the 26 subjects used cannabis with MA, and two used cocaine, opiates, and MDMA with MA. Self-reported single doses of MA for the 26 subjects ranged from 0.03 to 0.5 g. Concentrations of MA and its metabolite amphetamine (AP) in hair were determined by gas chromatography mass spectrometry (GC/MS) after derivatization. The method used was well validated. Qualitative analysis of all 1 cm sections (n=154) revealed a good correlation between positive or negative results for MA in hair and self-reported MA use (69.48%, n=107). In detail, MA results were positive in 66 hair specimens of MA users who reported administering MA, and negative in 41 hair specimens of MA users who denied MA administration in the corresponding month. Test results were false-negative in 10.39% (n=16) of hair specimens and false-positive in 20.13% (n=31) of hair specimens. In the false-positive cases, MA likely continued to accumulate in hair after cessation, while in the false-negative cases the self-reported histories indicated either a small amount of MA use or MA use 5-7 months previously. In terms of quantitative analysis, the concentrations of MA in 1 and 3 cm long hair segments and in whole hair samples ranged from 1.03 to 184.98 (mean 22.01), 2.26 to 89.33 (mean 18.71), and 0.91 to 124.49 (mean 15.24) ng/mg, respectively. Ten subjects showed a good correlation between MA use and MA concentration in hair; the correlation coefficient (r) for 7 of these 10 subjects ranged from 0.71 to 0.98 (mean 0.85). Four subjects showed a low correlation between MA use and MA concentration in hair, with correlation coefficients (r) ranging from 0.36 to 0.55. Eleven subjects showed a poor …
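The per-subject agreement between use history and segmental concentrations is a plain Pearson correlation coefficient. A minimal sketch with invented illustrative numbers (not data from the study):

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation coefficient between two paired samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical monthly dose estimates (g) vs. MA concentration in the
# matching 1 cm hair segments (ng/mg) for one subject.
doses = [0.05, 0.10, 0.20, 0.30, 0.45]
concs = [2.1, 5.0, 9.8, 14.5, 22.3]
r = pearson_r(doses, concs)
```

In this fabricated, well-behaved case r is close to 1; the study's subjects spanned r values from 0.36 to 0.98.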

  9. Synfuel program analysis. Volume 1: Procedures-capabilities

    NASA Astrophysics Data System (ADS)

    Muddiman, J. B.; Whelan, J. W.

    1980-07-01

The analytic procedures and capabilities developed by Resource Applications (RA) for examining the economic viability, public costs, and national benefits of alternatives are described. This volume is intended for Department of Energy (DOE) and Synthetic Fuel Corporation (SFC) program management personnel and includes a general description of the costing, venture, and portfolio models with enough detail for the reader to be able to specify cases and interpret outputs. It contains an explicit description (with examples) of the types of results which can be obtained when applied to the analysis of individual projects; the analysis of input uncertainty, i.e., risk; and the analysis of portfolios of such projects, including varying technology mixes and buildup schedules. The objective is to obtain, on the one hand, comparative measures of private investment requirements and expected returns (under differing public policies) as they affect the private decision to proceed, and, on the other, public costs and national benefits as they affect public decisions to participate (in what form, in what areas, and to what extent).

  10. 3-D volume reconstruction of skin lesions for melanin and blood volume estimation and lesion severity analysis.

    PubMed

    D'Alessandro, Brian; Dhawan, Atam P

    2012-11-01

    Subsurface information about skin lesions, such as the blood volume beneath the lesion, is important for the analysis of lesion severity towards early detection of skin cancer such as malignant melanoma. Depth information can be obtained from diffuse reflectance based multispectral transillumination images of the skin. An inverse volume reconstruction method is presented which uses a genetic algorithm optimization procedure with a novel population initialization routine and nudge operator based on the multispectral images to reconstruct the melanin and blood layer volume components. Forward model evaluation for fitness calculation is performed using a parallel processing voxel-based Monte Carlo simulation of light in skin. Reconstruction results for simulated lesions show excellent volume accuracy. Preliminary validation is also done using a set of 14 clinical lesions, categorized into lesion severity by an expert dermatologist. Using two features, the average blood layer thickness and the ratio of blood volume to total lesion volume, the lesions can be classified into mild and moderate/severe classes with 100% accuracy. The method therefore has excellent potential for detection and analysis of pre-malignant lesions.

  11. Joint High Speed Sealift (JHSS) Segmented Model Test Data Analysis and Validation of Numerical Simulations

    DTIC Science & Technology

    2012-12-01

Piro, Dominic; Brucker, Kyle A.; O’Shea, Thomas T.; Wyatt, Donald; Dommermuth, Douglas; Story, William R.; Devine, Edward A.; Powers, Ann Marie; Fu, Thomas C.; Fullerton, Anne M. The mass distribution of the JHSS segmented model is summarized in the report's Table 4 (mass properties of JHSS segments).

  12. Real-Time Ultrasound Segmentation, Analysis and Visualisation of Deep Cervical Muscle Structure.

    PubMed

    Cunningham, Ryan J; Harding, Peter J; Loram, Ian D

    2017-02-01

    Despite widespread availability of ultrasound and a need for personalised muscle diagnosis (neck/back pain-injury, work related disorder, myopathies, neuropathies), robust, online segmentation of muscles within complex groups remains unsolved by existing methods. For example, Cervical Dystonia (CD) is a prevalent neurological condition causing painful spasticity in one or multiple muscles in the cervical muscle system. Clinicians currently have no method for targeting/monitoring treatment of deep muscles. Automated methods of muscle segmentation would enable clinicians to study, target, and monitor the deep cervical muscles via ultrasound. We have developed a method for segmenting five bilateral cervical muscles and the spine via ultrasound alone, in real-time. Magnetic Resonance Imaging (MRI) and ultrasound data were collected from 22 participants (age: 29.0±6.6, male: 12). To acquire ultrasound muscle segment labels, a novel multimodal registration method was developed, involving MRI image annotation, and shape registration to MRI-matched ultrasound images, via approximation of the tissue deformation. We then applied polynomial regression to transform our annotations and textures into a mean space, before using shape statistics to generate a texture-to-shape dictionary. For segmentation, test images were compared to dictionary textures giving an initial segmentation, and then we used a customized Active Shape Model to refine the fit. Using ultrasound alone, on unseen participants, our technique currently segments a single image in [Formula: see text] to over 86% accuracy (Jaccard index). We propose this approach is applicable generally to segment, extrapolate and visualise deep muscle structure, and analyse statistical features online.
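    The Jaccard index used as the accuracy measure above is the intersection of the predicted and reference masks divided by their union. A minimal sketch on toy binary masks (hypothetical shapes, not the paper's ultrasound data):

```python
import numpy as np

def jaccard(a, b):
    """Jaccard index (intersection over union) of two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# Two overlapping 6x6 squares inside a 10x10 image, each 36 pixels.
auto = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:8] = True
manual = np.zeros((10, 10), dtype=bool)
manual[3:9, 2:8] = True
print(round(jaccard(auto, manual), 3))  # 30 shared pixels, union of 42
```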

  13. Biomechanical Analysis of Fusion Segment Rigidity Upon Stress at Both the Fusion and Adjacent Segments: A Comparison between Unilateral and Bilateral Pedicle Screw Fixation

    PubMed Central

    Kim, Ho-Joong; Kang, Kyoung-Tak; Chang, Bong-Soon; Lee, Choon-Ki; Kim, Jang-Woo

    2014-01-01

    Purpose The purpose of this study was to investigate the effects of unilateral pedicle screw fixation on the fusion segment and the superior adjacent segment after one segment lumbar fusion using validated finite element models. Materials and Methods Four L3-4 fusion models were simulated according to the extent of decompression and the method of pedicle screws fixation in L3-4 lumbar fusion. These models included hemi-laminectomy with bilateral pedicle screw fixation in the L3-4 segment (BF-HL model), total laminectomy with bilateral pedicle screw fixation (BF-TL model), hemi-laminectomy with unilateral pedicle screw fixation (UF-HL model), and total laminectomy with unilateral pedicle screw fixation (UF-TL model). In each scenario, intradiscal pressures, annulus stress, and range of motion at the L2-3 and L3-4 segments were analyzed under flexion, extension, lateral bending, and torsional moments. Results Under four pure moments, the unilateral fixation leads to a reduction in increment of range of motion at the adjacent segment, but larger motions were noted at the fusion segment (L3-4) in the unilateral fixation (UF-HL and UF-TL) models when compared to bilateral fixation. The maximal von Mises stress showed similar patterns to range of motion at both superior adjacent L2-3 segments and fusion segment. Conclusion The current study suggests that unilateral pedicle screw fixation seems to be unable to afford sufficient biomechanical stability in case of bilateral total laminectomy. Conversely, in the case of hemi-laminectomy, unilateral fixation could be an alternative option, which also has potential benefit to reduce the stress of the adjacent segment. PMID:25048501

  14. An approach to a comprehensive test framework for analysis and evaluation of text line segmentation algorithms.

    PubMed

    Brodic, Darko; Milivojevic, Dragan R; Milivojevic, Zoran N

    2011-01-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key action for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms deal with text databases as reference templates. Because of the mismatch, the reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multi-like text samples and real handwritten text as well. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency based on the obtained error type classification are proposed. The first is based on the segmentation line error description, while the second one incorporates well-known signal detection theory. Each of them has different capabilities and convenience, but they can be used as supplements to make the evaluation process efficient. Overall the proposed procedure based on the segmentation line error description has some advantages, characterized by five measures that describe measurement procedures.

  15. Light Helicopter Family Trade-Off Analysis. Volume 8. Appendices S and T

    DTIC Science & Technology

    2014-01-02

UNITED STATES ARMY MATERIEL COMMAND, LIGHT HELICOPTER FAMILY TRADE-OFF ANALYSIS, APPENDICES S AND T, VOLUME VIII, ACN: 69396.

  16. Comparative analysis of operational forecasts versus actual weather conditions in airline flight planning, volume 3

    NASA Technical Reports Server (NTRS)

    Keitz, J. F.

    1982-01-01

The impact of more timely and accurate weather data on airline flight planning, with emphasis on fuel savings, is studied. This volume of the report discusses the results of Task 3 of the four major tasks included in the study. Task 3 compares flight plans developed on the Suitland forecast with actual data observed by the aircraft (and averaged over 10 degree segments). The results show that the average difference between the forecast and observed wind speed is 9 kts. without considering direction, and the average difference in the component of the forecast wind parallel to the direction of the observed wind is 13 kts., both indicating that the Suitland forecast underestimates the wind speeds. The Root Mean Square (RMS) vector error is 30.1 kts. The average absolute difference in direction between the forecast and observed wind is 26 degrees, and the temperature difference is 3 degrees Celsius. These results indicate that the forecast model, as well as the verifying analysis used to develop comparison flight plans in Tasks 1 and 2, is a limiting factor, and that the average potential fuel savings or penalty is up to 3.6 percent depending on the direction of flight.
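    The RMS vector error quoted above combines speed and direction differences into a single wind-vector error magnitude. A minimal sketch with hypothetical forecast/observed pairs (the report's per-segment data are not reproduced here):

```python
import math

def wind_vector(speed_kts, direction_deg):
    """u/v components; meteorological direction is where the wind blows FROM."""
    rad = math.radians(direction_deg)
    return (-speed_kts * math.sin(rad), -speed_kts * math.cos(rad))

def rms_vector_error(forecast, observed):
    """RMS magnitude of the forecast-minus-observed wind vector."""
    sq = []
    for (fs, fd), (obs_s, obs_d) in zip(forecast, observed):
        fu, fv = wind_vector(fs, fd)
        ou, ov = wind_vector(obs_s, obs_d)
        sq.append((fu - ou) ** 2 + (fv - ov) ** 2)
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical (speed in kts, direction in degrees) per 10-degree segment.
forecast = [(50, 270), (40, 250), (60, 280)]
observed = [(62, 265), (48, 270), (71, 290)]
print(round(rms_vector_error(forecast, observed), 1))
```

    Note how a forecast that consistently underestimates speed, as reported above, inflates this measure even when directions agree closely.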

  17. Analysis of hantavirus genetic diversity in Argentina: S segment-derived phylogeny.

    PubMed

    Bohlman, Marlene C; Morzunov, Sergey P; Meissner, John; Taylor, Mary Beth; Ishibashi, Kimiko; Rowe, Joan; Levis, Silvana; Enria, Delia; St Jeor, Stephen C

    2002-04-01

    Nucleotide sequences were determined for the complete S genome segments of the six distinct hantavirus genotypes from Argentina and for two cell culture-isolated Andes virus strains from Chile. Phylogenetic analysis indicates that, although divergent from each other, all Argentinian hantavirus genotypes group together and form a novel phylogenetic clade with the Andes virus. The previously characterized South American hantaviruses Laguna Negra virus and Rio Mamore virus make up another clade that originates from the same ancestral node as the Argentinian/Chilean viruses. Within the clade of Argentinian/Chilean viruses, three subclades can be defined, although the branching order is somewhat obscure. These are made of (i) "Lechiguanas-like" virus genotypes, (ii) Maciel virus and Pergamino virus genotypes, and (iii) strains of the Andes virus. Two hantavirus genotypes from Brazil, Araraquara and Castello dos Sonhos, were found to group with Maciel virus and Andes virus, respectively. The nucleocapsid protein amino acid sequence variability among the members of the Argentinian/Chilean clade does not exceed 5.8%. It is especially low (3.5%) among oryzomyine species-associated virus genotypes, suggesting recent divergence from the common ancestor. Interestingly, the Maciel and Pergamino viruses fit well with the rest of the clade although their hosts are akodontine rodents. Taken together, these data suggest that under conditions in which potential hosts display a high level of genetic diversity and are sympatric, host switching may play a prominent role in establishing hantavirus genetic diversity. However, cospeciation still remains the dominant factor in the evolution of hantaviruses.

  18. Global Warming’s Six Americas: An Audience Segmentation Analysis (Invited)

    NASA Astrophysics Data System (ADS)

    Roser-Renouf, C.; Maibach, E.; Leiserowitz, A.

    2009-12-01

    One of the first rules of effective communication is to “know thy audience.” People have different psychological, cultural and political reasons for acting - or not acting - to reduce greenhouse gas emissions, and climate change educators can increase their impact by taking these differences into account. In this presentation we will describe six unique audience segments within the American public that each responds to the issue in its own distinct way, and we will discuss methods of engaging each. The six audiences were identified using a nationally representative survey of American adults conducted in the fall of 2008 (N=2,164). In two waves of online data collection, the public’s climate change beliefs, attitudes, risk perceptions, values, policy preferences, conservation, and energy-efficiency behaviors were assessed. The data were subjected to latent class analysis, yielding six groups distinguishable on all the above dimensions. The Alarmed (18%) are fully convinced of the reality and seriousness of climate change and are already taking individual, consumer, and political action to address it. The Concerned (33%) - the largest of the Six Americas - are also convinced that global warming is happening and a serious problem, but have not yet engaged with the issue personally. Three other Americas - the Cautious (19%), the Disengaged (12%) and the Doubtful (11%) - represent different stages of understanding and acceptance of the problem, and none are actively involved. The final America - the Dismissive (7%) - are very sure it is not happening and are actively involved as opponents of a national effort to reduce greenhouse gas emissions. Mitigating climate change will require a diversity of messages, messengers and methods that take into account these differences within the American public. The findings from this research can serve as guideposts for educators on the optimal choices for reaching and influencing target groups with varied informational needs

  19. Automated 3D renal segmentation based on image partitioning

    NASA Astrophysics Data System (ADS)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context and still demands a great deal of user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and the true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
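    The volume-based and size-based measures named above reduce to simple voxel counts. A minimal sketch of the Dice coefficient and the relative volume difference on toy 3D masks (hypothetical shapes, not the CT data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient of two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def relative_volume_difference(auto, manual):
    """Signed size-based measure: 0 means equal volumes."""
    return (auto.sum() - manual.sum()) / manual.sum()

# Toy kidney masks: automated result is one slice larger than manual.
auto = np.zeros((10, 10, 10), dtype=bool)
auto[1:9, 1:9, 1:9] = True      # 512 voxels
manual = np.zeros((10, 10, 10), dtype=bool)
manual[2:9, 1:9, 1:9] = True    # 448 voxels
print(round(dice(auto, manual), 3), round(relative_volume_difference(auto, manual), 3))
```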

  20. Airway segmentation and analysis for the study of mouse models of lung disease using micro-CT

    NASA Astrophysics Data System (ADS)

    Artaechevarria, X.; Pérez-Martín, D.; Ceresa, M.; de Biurrun, G.; Blanco, D.; Montuenga, L. M.; van Ginneken, B.; Ortiz-de-Solorzano, C.; Muñoz-Barrutia, A.

    2009-11-01

    Animal models of lung disease are gaining importance in understanding the underlying mechanisms of diseases such as emphysema and lung cancer. Micro-CT allows in vivo imaging of these models, thus permitting the study of the progression of the disease or the effect of therapeutic drugs in longitudinal studies. Automated analysis of micro-CT images can be helpful to understand the physiology of diseased lungs, especially when combined with measurements of respiratory system input impedance. In this work, we present a fast and robust murine airway segmentation and reconstruction algorithm. The algorithm is based on a propagating fast marching wavefront that, as it grows, divides the tree into segments. We devised a number of specific rules to guarantee that the front propagates only inside the airways and to avoid leaking into the parenchyma. The algorithm was tested on normal mice, a mouse model of chronic inflammation and a mouse model of emphysema. A comparison with manual segmentations of two independent observers shows that the specificity and sensitivity values of our method are comparable to the inter-observer variability, and radius measurements of the mainstem bronchi reveal significant differences between healthy and diseased mice. Combining measurements of the automatically segmented airways with the parameters of the constant phase model provides extra information on how disease affects lung function.
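    The propagating-wavefront idea can be sketched as breadth-first region growth with an intensity rule standing in for the authors' leakage-avoidance rules; everything below (the HU threshold, the toy volume) is illustrative, not the paper's implementation:

```python
from collections import deque
import numpy as np

def grow_airway(volume, seed, air_max=-500, max_voxels=10000):
    """Breadth-first wavefront growth from a seed voxel; the intensity rule
    (HU below air_max) keeps the front inside the dark airway lumen."""
    grown = np.zeros(volume.shape, dtype=bool)
    grown[seed] = True
    front = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while front and grown.sum() < max_voxels:
        z, y, x = front.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
               and not grown[n] and volume[n] < air_max:
                grown[n] = True
                front.append(n)
    return grown

# Toy volume: an air-filled tube (-1000 HU) inside soft tissue (+40 HU).
vol = np.full((20, 5, 5), 40, dtype=np.int16)
vol[:, 2, 2] = -1000
print(int(grow_airway(vol, (0, 2, 2)).sum()))
```

    The front fills exactly the 20 tube voxels and never leaks into the surrounding tissue; the paper's per-segment bookkeeping and parenchyma rules would sit on top of a loop like this.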

  1. Stress and strain analysis of contractions during ramp distension in partially obstructed guinea pig jejunal segments.

    PubMed

    Zhao, Jingbo; Liao, Donghua; Yang, Jian; Gregersen, Hans

    2011-07-28

    Previous studies have demonstrated morphological and biomechanical remodeling in the intestine proximal to an obstruction. The present study aimed to obtain stress and strain thresholds to initiate contraction and the maximal contraction stress and strain in partially obstructed guinea pig jejunal segments. Partial obstruction and sham operations were surgically created in mid-jejunum of male guinea pigs. The animals survived 2, 4, 7 and 14 days. Animals not being operated on served as normal controls. The segments were used for no-load state, zero-stress state and distension analyses. The segment was inflated to 10 cmH(2)O pressure in an organ bath containing 37°C Krebs solution and the outer diameter change was monitored. The stress and strain at the contraction threshold and at maximum contraction were computed from the diameter, pressure and the zero-stress state data. Young's modulus was determined at the contraction threshold. The muscle layer thickness in obstructed intestinal segments increased up to 300%. Compared with sham-obstructed and normal groups, the contraction stress threshold, the maximum contraction stress and the Young's modulus at the contraction threshold increased whereas the strain threshold and maximum contraction strain decreased after 7 days obstruction (P<0.05 and 0.01). In conclusion, in the partially obstructed intestinal segments, a larger distension force was needed to evoke contraction likely due to tissue remodeling. Higher contraction stresses were produced and the contraction deformation (strain) became smaller.
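    The stress and strain computed above from diameter, pressure, and zero-stress-state data can be approximated, for a thin-walled cylinder, by Laplace's law and Green's strain. A minimal sketch with hypothetical geometry (the study used full thick-walled mechanics, so this is only illustrative):

```python
def circumferential_stress(pressure_kpa, inner_radius_mm, wall_thickness_mm):
    """Laplace's law for a thin-walled cylinder: sigma = P * r / h (kPa)."""
    return pressure_kpa * inner_radius_mm / wall_thickness_mm

def green_strain(radius_mm, zero_stress_radius_mm):
    """Green's strain of the wall relative to the zero-stress-state radius."""
    stretch = radius_mm / zero_stress_radius_mm
    return (stretch ** 2 - 1) / 2

# 10 cmH2O is about 0.98 kPa; hypothetical radii/thickness for a segment.
p_kpa = 0.98
print(round(circumferential_stress(p_kpa, 3.0, 0.6), 2))
print(round(green_strain(3.0, 2.5), 3))
```

    The remodeled (thicker) wall of an obstructed segment lowers the stress at a given pressure, which is consistent with the higher distension force needed to reach the contraction threshold.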

  2. An analysis of methods for the selection of atlases for use in medical image segmentation

    NASA Astrophysics Data System (ADS)

    Prescott, Jeffrey W.; Best, Thomas M.; Haq, Furqan; Jackson, Rebecca; Gurcan, Metin

    2010-03-01

    The use of atlases has been shown to be a robust method for segmentation of medical images. In this paper we explore different methods of selection of atlases for the segmentation of the quadriceps muscles in magnetic resonance (MR) images, although the results are pertinent for a wide range of applications. The experiments were performed using 103 images from the Osteoarthritis Initiative (OAI). The images were randomly split into a training set consisting of 50 images and a testing set of 53 images. Three different atlas selection methods were systematically compared. First, a set of readers was assigned the task of selecting atlases from a training population of images, which were selected to be representative subgroups of the total population. Second, the same readers were instructed to select atlases from a subset of the training data which was stratified based on population modes. Finally, every image in the training set was employed as an atlas, with no input from the readers, and the atlas which had the best initial registration, judged by an appropriate registration metric, was used in the final segmentation procedure. The segmentation results were quantified using the Zijdenbos similarity index (ZSI). The results show that over all readers the agreement of the segmentation algorithm decreased from 0.76 to 0.74 when using population modes to assist in atlas selection. The use of every image in the training set as an atlas outperformed both manual atlas selection methods, achieving a ZSI of 0.82.

  3. Stress and strain analysis of contractions during ramp distension in partially obstructed guinea pig jejunal segments

    PubMed Central

    Zhao, Jingbo; Liao, Donghua; Yang, Jian; Gregersen, Hans

    2011-01-01

    Previous studies have demonstrated morphological and biomechanical remodeling in the intestine proximal to an obstruction. The present study aimed to obtain stress and strain thresholds to initiate contraction and the maximal contraction stress and strain in partially obstructed guinea pig jejunal segments. Partial obstruction and sham operations were surgically created in mid-jejunum of male guinea pigs. The animals survived 2, 4, 7, and 14 days, respectively. Animals not being operated on served as normal controls. The segments were used for no-load state, zero-stress state and distension analyses. The segment was inflated to 10 cmH2O pressure in an organ bath containing 37°C Krebs solution and the outer diameter change was monitored. The stress and strain at the contraction threshold and at maximum contraction were computed from the diameter, pressure and the zero-stress state data. Young’s modulus was determined at the contraction threshold. The muscle layer thickness in obstructed intestinal segments increased up to 300%. Compared with sham-obstructed and normal groups, the contraction stress threshold, the maximum contraction stress and the Young’s modulus at the contraction threshold increased whereas the strain threshold and maximum contraction strain decreased after 7 days obstruction (P<0.05 and 0.01). In conclusion, in the partially obstructed intestinal segments, a larger distension force was needed to evoke contraction likely due to tissue remodeling. Higher contraction stresses were produced and the contraction deformation (strain) became smaller. PMID:21632056

  4. Biomechanical Evaluation of Different Fixation Methods for Mandibular Anterior Segmental Osteotomy Using Finite Element Analysis, Part One: Superior Repositioning Surgery.

    PubMed

    Kilinç, Yeliz; Erkmen, Erkan; Kurt, Ahmet

    2016-01-01

The aim of the current study was to comparatively evaluate the mechanical behavior of 3 different fixation methods following various amounts of superior repositioning of the mandibular anterior segment. In this study, 3 different rigid fixation configurations comprising double right L, double left L, or double I miniplates with monocortical screws were compared under vertical, horizontal, and oblique load conditions by means of finite element analysis. A three-dimensional finite element model of a fully dentate mandible was generated. Superior repositioning of the mandibular anterior segmental osteotomy by 3 and 5 mm was simulated. Three different finite element models corresponding to different fixation configurations were created for each superior repositioning. The von Mises stress values on fixation appliances and principal maximum stresses (Pmax) on bony structures were predicted by finite element analysis. The results demonstrated that the double right L configuration provides better stability, with smaller stress fields, than the other fixation configurations used in this study.
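    The von Mises stress reported by such finite element models is the standard equivalent stress derived from the three principal stresses. A minimal sketch of the formula with hypothetical values (not the study's results):

```python
import math

def von_mises(s1, s2, s3):
    """Von Mises equivalent stress from the three principal stresses."""
    return math.sqrt(((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2) / 2)

# Hypothetical principal stresses at a miniplate node, in MPa.
print(round(von_mises(120.0, 40.0, -10.0), 1))
```

    A uniaxial state (s2 = s3 = 0) reduces the formula to the applied stress itself, which is a quick sanity check on any implementation.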

  5. Automated Forensic Animal Family Identification by Nested PCR and Melt Curve Analysis on an Off-the-Shelf Thermocycler Augmented with a Centrifugal Microfluidic Disk Segment

    PubMed Central

    Zengerle, Roland; von Stetten, Felix; Schmidt, Ulrike

    2015-01-01

Nested PCR remains a labor-intensive and error-prone biomolecular analysis. Laboratory workflow automation by precise control of minute liquid volumes in centrifugal microfluidic Lab-on-a-Chip systems holds great potential for such applications. However, the majority of these systems require costly custom-made processing devices. Our idea is to augment a standard laboratory device, here a centrifugal real-time PCR thermocycler, with inbuilt liquid handling capabilities for automation. We have developed a microfluidic disk segment enabling an automated nested real-time PCR assay for identification of common European animal groups adapted to forensic standards. For the first time we utilize a novel combination of fluidic elements, including pre-storage of reagents, to automate the assay at constant rotational frequency of an off-the-shelf thermocycler. It provides a universal duplex pre-amplification of short fragments of the mitochondrial 12S rRNA and cytochrome b genes, animal-group-specific main-amplifications, and melting curve analysis for differentiation. The system was characterized with respect to assay sensitivity, specificity, risk of cross-contamination, and detection of minor components in mixtures. 92.2% of the performed tests showed fluidically failure-free sample handling and were used for evaluation. Altogether, augmentation of the standard real-time thermocycler with a self-contained centrifugal microfluidic disk segment resulted in an accelerated and automated analysis reducing hands-on time, and circumventing the risk of contamination associated with regular nested PCR protocols. PMID:26147196

  6. Hospital benefit segmentation.

    PubMed

    Finn, D W; Lamb, C W

    1986-12-01

    Market segmentation is an important topic to both health care practitioners and researchers. The authors explore the relative importance that health care consumers attach to various benefits available in a major metropolitan area hospital. The purposes of the study are to test, and provide data to illustrate, the efficacy of one approach to hospital benefit segmentation analysis.

  7. Yucca Mountain transportation routes: Preliminary characterization and risk analysis; Volume 2, Figures [and] Volume 3, Technical Appendices

    SciTech Connect

    Souleyrette, R.R. II; Sathisan, S.K.; di Bartolo, R.

    1991-05-31

    This report presents appendices related to the preliminary assessment and risk analysis for high-level radioactive waste transportation routes to the proposed Yucca Mountain Project repository. Information includes data on population density, traffic volume, ecologically sensitive areas, and accident history.

  8. A New MRI-Based Pediatric Subcortical Segmentation Technique (PSST).

    PubMed

    Loh, Wai Yen; Connelly, Alan; Cheong, Jeanie L Y; Spittle, Alicia J; Chen, Jian; Adamson, Christopher; Ahmadzai, Zohra M; Fam, Lillian Gabra; Rees, Sandra; Lee, Katherine J; Doyle, Lex W; Anderson, Peter J; Thompson, Deanne K

    2016-01-01

Volumetric and morphometric neuroimaging studies of the basal ganglia and thalamus in pediatric populations have utilized existing automated segmentation tools including FIRST (Functional Magnetic Resonance Imaging of the Brain's Integrated Registration and Segmentation Tool) and FreeSurfer. These segmentation packages, however, are mostly based on adult training data. Given that there are marked differences between the pediatric and adult brain, it is likely an age-specific segmentation technique will produce more accurate segmentation results. In this study, we describe a new automated segmentation technique for analysis of 7-year-old basal ganglia and thalamus, called Pediatric Subcortical Segmentation Technique (PSST). PSST consists of a probabilistic 7-year-old subcortical gray matter atlas (accumbens, caudate, pallidum, putamen and thalamus) combined with a customized segmentation pipeline using existing tools: ANTs (Advanced Normalization Tools) and SPM (Statistical Parametric Mapping). The segmentation accuracy of PSST in 7-year-old data was compared against FIRST and FreeSurfer, relative to manual segmentation as the ground truth, utilizing spatial overlap (Dice's coefficient), volume correlation (intraclass correlation coefficient, ICC) and limits of agreement (Bland-Altman plots). PSST achieved spatial overlap scores ≥90% and ICC scores ≥0.77 when compared with manual segmentation, for all structures except the accumbens. Compared with FIRST and FreeSurfer, PSST showed higher spatial overlap (pFDR < 0.05) and ICC scores, with less volumetric bias according to Bland-Altman plots. PSST is a customized segmentation pipeline with an age-specific atlas that accurately segments typical and atypical basal ganglia and thalami at age 7 years, and has the potential to be applied to other pediatric datasets.
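    The Bland-Altman limits of agreement used above to assess volumetric bias are simply the mean paired difference ± 1.96 standard deviations. A minimal sketch with hypothetical automated vs. manual volumes (not the study's measurements):

```python
import statistics

def bland_altman_limits(method_a, method_b):
    """Mean difference (bias) and 95% limits of agreement between methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical thalamus volumes (mm^3): automated vs. manual segmentation.
auto = [7020, 6830, 7110, 6950, 7200, 6880]
manual = [6900, 6800, 7000, 7010, 7150, 6850]
bias, lo, hi = bland_altman_limits(auto, manual)
print(round(bias, 1), round(lo, 1), round(hi, 1))
```

    A method with little volumetric bias has a mean difference near zero and narrow limits; a systematic over-segmentation shifts the whole interval upward.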

  9. An efficient technique for nuclei segmentation based on ellipse descriptor analysis and improved seed detection algorithm.

    PubMed

    Xu, Hongming; Lu, Cheng; Mandal, Mrinal

    2014-09-01

    In this paper, we propose an efficient method for segmenting cell nuclei in the skin histopathological images. The proposed technique consists of four modules. First, it separates the nuclei regions from the background with an adaptive threshold technique. Next, an elliptical descriptor is used to detect the isolated nuclei with elliptical shapes. This descriptor classifies the nuclei regions based on two ellipticity parameters. Nuclei clumps and nuclei with irregular shapes are then localized by an improved seed detection technique based on voting in the eroded nuclei regions. Finally, undivided nuclei regions are segmented by a marked watershed algorithm. Experimental results on 114 different image patches indicate that the proposed technique provides a superior performance in nuclei detection and segmentation.

  10. Concepts and analysis for precision segmented reflector and feed support structures

    NASA Technical Reports Server (NTRS)

    Miller, Richard K.; Thomson, Mark W.; Hedgepeth, John M.

    1990-01-01

Several issues surrounding the design of a large (20-meter diameter) Precision Segmented Reflector are investigated. The concerns include development of a reflector support truss geometry that will permit deployment into the required doubly-curved shape without significant member strains. For deployable and erectable reflector support trusses, the reduction of structural redundancy was analyzed to achieve reduced weight and complexity for the designs. The stiffness and accuracy of such reduced-member trusses, however, were found to be affected to an unexpected degree. The Precision Segmented Reflector designs were developed with performance requirements that represent the Reflector application. A novel deployable sunshade concept was developed, and a detailed parametric study of various feed support structural concepts was performed. The results of the detailed study reveal what may be the most desirable feed support structure geometry for Precision Segmented Reflector/Large Deployable Reflector applications.

  11. Incorporation of learned shape priors into a graph-theoretic approach with application to the 3D segmentation of intraretinal surfaces in SD-OCT volumes of mice

    NASA Astrophysics Data System (ADS)

    Antony, Bhavna J.; Song, Qi; Abràmoff, Michael D.; Sohn, Eliott; Wu, Xiaodong; Garvin, Mona K.

    2014-03-01

    Spectral-domain optical coherence tomography (SD-OCT) finds widespread use clinically for the detection and management of ocular diseases. This non-invasive imaging modality has also begun to find frequent use in research studies involving animals such as mice. Numerous approaches have been proposed for the segmentation of retinal surfaces in SD-OCT images obtained from human subjects; however, the segmentation of retinal surfaces in mice scans is not as well-studied. In this work, we describe a graph-theoretic segmentation approach for the simultaneous segmentation of 10 retinal surfaces in SD-OCT scans of mice that incorporates learned shape priors. We compared the method to a baseline approach that did not incorporate learned shape priors and observed that the overall unsigned border position errors reduced from 3.58 +/- 1.33 μm to 3.20 +/- 0.56 μm.

  12. Style, content and format guide for writing safety analysis documents. Volume 1, Safety analysis reports for DOE nuclear facilities

    SciTech Connect

    Not Available

    1994-06-01

    The purpose of Volume 1 of this 4-volume style guide is to furnish guidelines on writing and publishing Safety Analysis Reports (SARs) for DOE nuclear facilities at Sandia National Laboratories. The scope of Volume 1 encompasses not only the general guidelines for writing and publishing, but also the prescribed topics/appendices contents along with examples from typical SARs for DOE nuclear facilities.

  13. A Genetic Analysis of Brain Volumes and IQ in Children

    ERIC Educational Resources Information Center

    van Leeuwen, Marieke; Peper, Jiska S.; van den Berg, Stephanie M.; Brouwer, Rachel M.; Hulshoff Pol, Hilleke E.; Kahn, Rene S.; Boomsma, Dorret I.

    2009-01-01

    In a population-based sample of 112 nine-year old twin pairs, we investigated the association among total brain volume, gray matter and white matter volume, intelligence as assessed by the Raven IQ test, verbal comprehension, perceptual organization and perceptual speed as assessed by the Wechsler Intelligence Scale for Children-III. Phenotypic…

  14. EPA RREL'S MOBILE VOLUME REDUCTION UNIT -- APPLICATIONS ANALYSIS REPORT

    EPA Science Inventory

    The volume reduction unit (VRU) is a pilot-scale, mobile soil washing system designed to remove organic contaminants from the soil through particle size separation and solubilization. The VRU removes contaminants by suspending them in a wash solution and by reducing the volume of...

  15. Probabilistic Analysis of Activation Volumes Generated During Deep Brain Stimulation

    PubMed Central

    Butson, Christopher R.; Cooper, Scott E.; Henderson, Jaimie M.; Wolgamuth, Barbara; McIntyre, Cameron C.

    2010-01-01

    Deep brain stimulation (DBS) is an established therapy for the treatment of Parkinson’s disease (PD) and shows great promise for the treatment of several other disorders. However, while the clinical analysis of DBS has received great attention, a relative paucity of quantitative techniques exists to define the optimal surgical target and most effective stimulation protocol for a given disorder. In this study we describe a methodology that represents an evolutionary addition to the concept of a probabilistic brain atlas, which we call a probabilistic stimulation atlas (PSA). We outline steps to combine quantitative clinical outcome measures with advanced computational models of DBS to identify regions where stimulation-induced activation could provide the best therapeutic improvement on a per-symptom basis. While this methodology is relevant to any form of DBS, we present example results from subthalamic nucleus (STN) DBS for PD. We constructed patient-specific computer models of the volume of tissue activated (VTA) for 163 different stimulation parameter settings which were tested in six patients. We then assigned clinical outcome scores to each VTA and compiled all of the VTAs into a PSA to identify stimulation-induced activation targets that maximized therapeutic response with minimal side effects. The results suggest that selection of both electrode placement and clinical stimulation parameter settings could be tailored to the patient’s primary symptoms using patient-specific models and PSAs. PMID:20974269
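    The PSA compilation step described above can be sketched as voxelwise averaging of the clinical outcome scores over all VTAs that activate each voxel. The data structures and scores below are illustrative only, not taken from the study.

```python
from collections import defaultdict

def build_psa(vtas):
    """Compile (voxel-set, outcome-score) pairs into a probabilistic
    stimulation atlas: each voxel maps to the mean outcome score of all
    VTAs that activated it."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for voxels, score in vtas:
        for v in voxels:
            sums[v] += score
            counts[v] += 1
    return {v: sums[v] / counts[v] for v in sums}

# Two toy VTAs with hypothetical percent-improvement scores.
vtas = [({(0, 0, 0), (1, 0, 0)}, 40.0),
        ({(1, 0, 0), (2, 0, 0)}, 60.0)]
atlas = build_psa(vtas)
```

    Voxels activated by both settings average the two scores; target selection would then favor voxels with the highest mean improvement.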

  17. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods

    SciTech Connect

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich

    2012-03-15

    Purpose: Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options such as tumor resection, image-guided radiation therapy (IGRT), and radiofrequency ablation. The purpose of this work was to evaluate a new approach for liver segmentation. Methods: A graph cuts segmentation method was combined with a three-dimensional virtual-reality-based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast-enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Results: Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice generation of the independent standard utilizing a live wire tool took 70.1 min on average. A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of
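    One common definition of the relative volumetric overlap error reported above is one minus the Jaccard overlap of the two voxel sets, expressed in percent. The study's exact formula may differ, so treat this as a hedged sketch.

```python
def relative_overlap_error(seg, ref):
    """Relative volumetric overlap error, 100 * (1 - |A∩B| / |A∪B|), in %.
    One common definition; the paper's exact metric may differ."""
    inter = len(seg & ref)
    union = len(seg | ref)
    return 100.0 * (1.0 - inter / union)

# Toy voxel sets: the segmentation misses one of four reference voxels.
ref = {(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0)}
seg = {(0, 0, 0), (1, 0, 0), (2, 0, 0)}
err = relative_overlap_error(seg, ref)  # 25.0
```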

  18. Volume component analysis for classification of LiDAR data

    NASA Astrophysics Data System (ADS)

    Varney, Nina M.; Asari, Vijayan K.

    2015-03-01

    One of the most difficult challenges of working with LiDAR data is the large number of data points that are produced. Analysing these large data sets is an extremely time-consuming process. For this reason, automatic perception of LiDAR scenes is a growing area of research. Currently, most LiDAR feature extraction relies on geometrical features specific to the point cloud of interest. These geometrical features are scene-specific, and often rely on the scale and orientation of the object for classification. This paper proposes a robust method for reduced-dimensionality feature extraction of 3D objects using a volume component analysis (VCA) approach. This VCA approach is based on principal component analysis (PCA). PCA is a method of reduced feature extraction that computes a covariance matrix from the original input vector. The eigenvectors corresponding to the largest eigenvalues of the covariance matrix are used to describe an image. Block-based PCA is an adapted method for feature extraction in facial images because PCA, when performed in local areas of the image, can extract more significant features than when the entire image is considered. The image space is split into several of these blocks, and PCA is computed individually for each block. This VCA proposes that a LiDAR point cloud can be represented as a series of voxels whose values correspond to the point density within that relative location. From this voxelized space, block-based PCA is used to analyze sections of the space which, when combined, represent features of the entire 3D object. These features are then used as the input to a support vector machine that is trained to identify four classes of objects (vegetation, vehicles, buildings, and barriers) with an overall accuracy of 93.8%.
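    The voxelization step that VCA builds on, mapping a point cloud to a grid of point-density values, can be sketched as follows; the voxel size and point coordinates are made up for illustration.

```python
from collections import Counter

def voxelize(points, voxel_size):
    """Map 3D points to integer voxel indices and count the point density
    per voxel -- the voxelized representation on which block-based PCA
    then operates."""
    counts = Counter()
    for x, y, z in points:
        counts[(int(x // voxel_size),
                int(y // voxel_size),
                int(z // voxel_size))] += 1
    return counts

# Three toy points; two fall in the same unit voxel.
points = [(0.1, 0.2, 0.3), (0.4, 0.1, 0.2), (1.2, 0.0, 0.0)]
grid = voxelize(points, voxel_size=1.0)
```

    Each block of this grid would then be flattened into a vector and fed to PCA to extract scale- and orientation-robust features.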

  19. Segmental and Positional Effects on Children's Coda Production: Comparing Evidence from Perceptual Judgments and Acoustic Analysis

    ERIC Educational Resources Information Center

    Theodore, Rachel M.; Demuth, Katherine; Shattuck-Hufnagel, Stephanie

    2012-01-01

    Children's early productions are highly variable. Findings from children's early productions of grammatical morphemes indicate that some of the variability is systematically related to segmental and phonological factors. Here, we extend these findings by assessing 2-year-olds' production of non-morphemic codas using both listener decisions and…

  20. Very High Resolution Classification of Sentinel-1A Data Using Segmentation and Texture Analysis

    NASA Astrophysics Data System (ADS)

    Korosov, Anton A.; Park, Jeong-Won

    2016-08-01

    An algorithm for classification of sea ice, water and other types on Sentinel-1A SAR data has been developed based on thermal noise correction, segmentation, texture features and support vector machines. The algorithm was tested on several SAR images and proves to be accurate (95% true positive hits) and to have very high resolution (100 m pixel size).

  1. 3D CT spine data segmentation and analysis of vertebrae bone lesions.

    PubMed

    Peter, R; Malinsky, M; Ourednicek, P; Jan, J

    2013-01-01

    A method is presented aiming at detecting and classifying bone lesions in 3D CT data of the human spine, via a Bayesian approach utilizing Markov random fields. An algorithm for the necessary segmentation of individual, possibly heavily distorted vertebrae, based on 3D intensity modeling of vertebra types, is presented as well.
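    The abstract names a Bayesian approach with Markov random fields but does not spell out the inference scheme; iterated conditional modes (ICM) is one standard optimizer for such models, sketched here on a toy binary label field rather than the paper's actual lesion model.

```python
def icm(obs, beta=1.0, iters=5):
    """Iterated conditional modes for a binary MRF on a 2D grid: a data
    term (obs - label)^2 plus a Potts smoothness term charging `beta`
    for each disagreeing 4-neighbour. One standard MRF optimizer; the
    paper's exact model is not reproduced here."""
    h, w = len(obs), len(obs[0])
    labels = [[1 if obs[y][x] > 0.5 else 0 for x in range(w)] for y in range(h)]
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                best, best_cost = labels[y][x], float("inf")
                for lab in (0, 1):
                    cost = (obs[y][x] - lab) ** 2
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] != lab:
                            cost += beta
                    if cost < best_cost:
                        best, best_cost = lab, cost
                labels[y][x] = best
    return labels

# A noisy 3x3 "lesion" map: one flipped pixel inside an all-lesion block.
obs = [[0.9, 0.9, 0.9],
       [0.9, 0.1, 0.9],
       [0.9, 0.9, 0.9]]
labels = icm(obs, beta=0.5)
```

    The smoothness prior pulls the isolated outlier pixel back to the surrounding label, which is exactly why MRF models cope well with distorted or noisy vertebra boundaries.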

  2. Kinematic analysis of dynamic lumbar motion in patients with lumbar segmental instability using digital videofluoroscopy

    PubMed Central

    Maroufi, Nader; Behtash, Hamid; Zekavat, Hajar; Parnianpour, Mohamad

    2009-01-01

    The study design is a prospective, case–control. The aim of this study was to develop a reliable measurement technique for the assessment of lumbar spine kinematics using digital video fluoroscopy in a group of patients with low back pain (LBP) and a control group. Lumbar segmental instability (LSI) is one subgroup of nonspecific LBP the diagnosis of which has not been clarified. The diagnosis of LSI has traditionally relied on the use of lateral functional (flexion–extension) radiographs but use of this method has proven unsatisfactory. Fifteen patients with chronic low back pain suspected to have LSI and 15 matched healthy subjects were recruited. Pulsed digital videofluoroscopy was used to investigate kinematics of lumbar motion segments during flexion and extension movements in vivo. Intersegmental linear translation and angular displacement, and pathway of instantaneous center of rotation (PICR) were calculated for each lumbar motion segment. Movement pattern of lumbar spine between two groups and during the full sagittal plane range of motion were analyzed using ANOVA with repeated measures design. Intersegmental linear translation was significantly higher in patients during both flexion and extension movements at L5–S1 segment (p < 0.05). Arc length of PICR was significantly higher in patients for L1–L2 and L5–S1 motion segments during extension movement (p < 0.05). This study determined some kinematic differences between two groups during the full range of lumbar spine. Devices, such as digital videofluoroscopy can assist in identifying better criteria for diagnosis of LSI in otherwise nonspecific low back pain patients in hope of providing more specific treatment. PMID:19727854
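    The intersegmental angular displacement measured above reduces to the difference between the orientations of adjacent segments digitized from a fluoroscopy frame. The landmark coordinates and segment names below are hypothetical.

```python
import math

def segment_angle_deg(p1, p2):
    """Orientation of a segment defined by two (x, y) landmarks, in degrees."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def angular_displacement(upper, lower):
    """Relative (intersegmental) angle between two adjacent segments."""
    return segment_angle_deg(*upper) - segment_angle_deg(*lower)

# Hypothetical landmark pairs digitized from one videofluoroscopy frame.
upper_segment = ((0.0, 0.0), (10.0, 2.0))   # tilted endplate
lower_segment = ((0.0, 0.0), (10.0, 0.0))   # horizontal endplate
theta = angular_displacement(upper_segment, lower_segment)
```

    Tracking theta (and the corresponding linear translation) frame by frame over a flexion-extension cycle yields the kinematic curves compared between groups.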

  3. Lung extraction, lobe segmentation and hierarchical region assessment for quantitative analysis on high resolution computed tomography images.

    PubMed

    Ross, James C; Estépar, Raúl San José; Díaz, Alejandro; Westin, Carl-Fredrik; Kikinis, Ron; Silverman, Edwin K; Washko, George R

    2009-01-01

    Regional assessment of lung disease (such as chronic obstructive pulmonary disease) is a critical component of accurate patient diagnosis. Software tools that enable such analysis are also important for clinical research studies. In this work, we present an image segmentation and data representation framework that enables quantitative analysis specific to different lung regions on high resolution computed tomography (HRCT) datasets. We present an offline, fully automatic image processing chain that generates airway, vessel, and lung mask segmentations in which the left and right lung are delineated. We describe a novel lung lobe segmentation tool that produces reproducible results with minimal user interaction. A usability study performed across twenty datasets (inspiratory and expiratory exams including a range of disease states) demonstrates the tool's ability to generate results within five to seven minutes on average. We also describe a data representation scheme that involves compact encoding of label maps such that both "regions" (such as lung lobes) and "types" (such as emphysematous parenchyma) can be simultaneously represented at a given location in the HRCT.

  4. Analysis of segmental duplications reveals a distinct pattern of continuation-of-synteny between human and mouse genomes.

    PubMed

    Mehan, Michael R; Almonte, Maricel; Slaten, Erin; Freimer, Nelson B; Rao, P Nagesh; Ophoff, Roel A

    2007-03-01

    About 5% of the human genome consists of large-scale duplicated segments of almost identical sequences. Segmental duplications (SDs) have been proposed to be involved in non-allelic homologous recombination leading to recurrent genomic variation and disease. It has also been suggested that these SDs are associated with syntenic rearrangements that have shaped the human genome. We have analyzed 14 members of a single family of closely related SDs in the human genome, some of which are associated with common inversion polymorphisms at chromosomes 8p23 and 4p16. Comparative analysis with the mouse genome revealed syntenic inversions for these two human polymorphic loci. In addition, 12 of the 14 SDs, while absent in the mouse genome, occur at the breaks of synteny, suggesting a non-random involvement of these sequences in genome evolution. Furthermore, we observed a syntenic familial relationship between 8 and 12 breakpoint-loci, where broken synteny that ends at one family member resumes at another, even across different chromosomes. Subsequent genome-wide assessment revealed that this relationship, which we named continuation-of-synteny, is not limited to the 8p23 family and occurs 46 times in the human genome with high frequency at specific chromosomes. Our analysis supports a non-random breakage model of genomic evolution with an active involvement of segmental duplications for specific regions of the human genome.

  5. Electromechanical modeling and power performance analysis of a piezoelectric energy harvester having an attached mass and a segmented piezoelectric layer

    NASA Astrophysics Data System (ADS)

    Jeong, Sinwoo; Cho, Jae Yong; Sung, Tae Hyun; Yoo, Hong Hee

    2017-03-01

    Conventional vibration-based piezoelectric energy harvesters (PEHs) have advantages including the ubiquity of their energy source and their ease of manufacturing. However, they have a critical disadvantage as well: they can produce a reasonable amount of power only if the excitation frequency is concentrated near a natural frequency of the PEH. Because the excitation frequency is often spread and/or variable, it is very difficult to successfully design a conventional PEH. In this paper, we propose a new cantilevered PEH whose design includes an attached mass and a segmented piezoelectric layer. By choosing a proper size and location for the attached mass, the gap between the first and second natural frequencies of the PEH can be decreased in order to broaden the effective excitation frequency range and thus to allow reasonable power generation. In particular, the output power performance improves significantly around the second natural frequency of the PEH, since the voltage cancellation effect can be made very weak by segmenting the piezoelectric layer at an appropriate location. To investigate the power performance of the new PEH, a reduced-order electromechanical analysis model is proposed herein and its accuracy is validated experimentally. The effects of variable load resistance and piezoelectric layer segmentation location upon the power performance of the new PEH are investigated by means of the reduced-order analysis model.

  6. Microstructural analysis of pineal volume using trueFISP imaging

    PubMed Central

    Bumb, Jan M; Brockmann, Marc A; Groden, Christoph; Nolte, Ingo

    2013-01-01

    AIM: To determine the spectrum of pineal microstructures (solid/cystic parts) in a large clinical population using a high-resolution 3D-T2-weighted sequence. METHODS: A total of 347 patients enrolled for cranial magnetic resonance imaging were randomly included in this study. Written informed consent was obtained from all patients. The exclusion criteria were artifacts or mass lesions prohibiting evaluation of the pineal gland in any of the sequences. True-FISP-3D-imaging (1.5-T, isotropic voxel 0.9 mm) was performed in 347 adults (55.4 ± 18.1 years). Pineal gland volume (PGV), cystic volume, and parenchyma volume (cysts excluded) were measured manually. RESULTS: Overall, 40.3% of pineal glands were cystic. The median PGV was 54.6 mm3 (78.33 ± 89.0 mm3), the median cystic volume was 5.4 mm3 (15.8 ± 37.2 mm3), and the median parenchyma volume was 53.6 mm3 (71.9 ± 66.7 mm3). In cystic glands, the standard deviation of the PGV was substantially higher than in solid glands (98% vs 58% of the mean). PGV declined with age (r = -0.130, P = 0.016). CONCLUSION: The high interindividual volume variation is mainly related to cysts. Pineal parenchyma volume decreased slightly with age, whereas gender-related effects appear to be negligible. PMID:23671752

  7. Multimodal retinal vessel segmentation from spectral-domain optical coherence tomography and fundus photography.

    PubMed

    Hu, Zhihong; Niemeijer, Meindert; Abràmoff, Michael D; Garvin, Mona K

    2012-10-01

    Segmenting retinal vessels in optic nerve head (ONH) centered spectral-domain optical coherence tomography (SD-OCT) volumes is particularly challenging due to the projected neural canal opening (NCO) and relatively low visibility in the ONH center. Color fundus photographs provide a relatively high vessel contrast in the region inside the NCO, but have not been previously used to aid the SD-OCT vessel segmentation process. Thus, in this paper, we present two approaches for the segmentation of retinal vessels in SD-OCT volumes that each take advantage of complementary information from fundus photographs. In the first approach (referred to as the registered-fundus vessel segmentation approach), vessels are first segmented on the fundus photograph directly (using a k-NN pixel classifier) and this vessel segmentation result is mapped to the SD-OCT volume through the registration of the fundus photograph to the SD-OCT volume. In the second approach (referred to as the multimodal vessel segmentation approach), after fundus-to-SD-OCT registration, vessels are simultaneously segmented with a k-NN classifier using features from both modalities. Three-dimensional structural information from the intraretinal layers and neural canal opening obtained through graph-theoretic segmentation approaches of the SD-OCT volume are used in combination with Gaussian filter banks and Gabor wavelets to generate the features. The approach is trained on 15 and tested on 19 randomly chosen independent image pairs of SD-OCT volumes and fundus images from 34 subjects with glaucoma. Based on a receiver operating characteristic (ROC) curve analysis, the present registered-fundus and multimodal vessel segmentation approaches [area under the curve (AUC) of 0.85 and 0.89, respectively] both perform significantly better than the two previous OCT-based approaches (AUC of 0.78 and 0.83, p < 0.05).
The multimodal approach overall performs significantly better than the other three approaches (p < 0.05).
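    The k-NN pixel classifier at the core of both approaches can be sketched in a few lines; the two-dimensional feature vectors and labels below are invented for illustration, not the Gaussian/Gabor features of the paper.

```python
def knn_classify(features, train, k=3):
    """Classify one pixel's feature vector by majority vote among its k
    nearest training samples (squared Euclidean distance)."""
    dists = sorted(
        (sum((f - t) ** 2 for f, t in zip(features, feat)), label)
        for feat, label in train
    )
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Toy training set: (feature vector, label), with 1 = vessel, 0 = background.
train = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.1, 0.2), 0),
         ((0.2, 0.1), 0), ((0.15, 0.15), 0)]
label = knn_classify((0.85, 0.85), train, k=3)
```

    In the multimodal variant, the feature vector would simply concatenate fundus-derived and SD-OCT-derived responses for the same registered pixel.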

  8. Analysis on volume grating induced by femtosecond laser pulses.

    PubMed

    Zhou, Keya; Guo, Zhongyi; Ding, Weiqiang; Liu, Shutian

    2010-06-21

    We report on a kind of self-assembled volume grating in silica glass induced by tightly focused femtosecond laser pulses. The formation of the volume grating is attributed to multiple microexplosions in the transparent material induced by the femtosecond pulses. The first-order diffraction efficiency depends strongly on the pulse energy and the laser scanning velocity, and reaches as high as 30%. The diffraction pattern of the fabricated grating is numerically simulated and analyzed by a two-dimensional FDTD method and the Fresnel diffraction integral. The numerical results confirm our prediction about the formation of the volume grating and agree well with our experimental results.

  9. Determination of fiber volume in graphite/epoxy materials using computer image analysis

    NASA Technical Reports Server (NTRS)

    Viens, Michael J.

    1990-01-01

    The fiber volume of graphite/epoxy specimens was determined by analyzing optical images of cross sectioned specimens using image analysis software. Test specimens were mounted and polished using standard metallographic techniques and examined at 1000 times magnification. Fiber volume determined using the optical imaging agreed well with values determined using the standard acid digestion technique. The results were found to agree within 5 percent over a fiber volume range of 45 to 70 percent. The error observed is believed to arise from fiber volume variations within the graphite/epoxy panels themselves. The determination of ply orientation using image analysis techniques is also addressed.
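    The underlying area-fraction computation, counting fiber pixels against total pixels in a thresholded cross-section image, can be sketched as follows. The intensity values and threshold are assumptions, not taken from the study's software.

```python
def fiber_volume_fraction(image, threshold):
    """Fraction of pixels at or above `threshold`, taken as fiber.
    By stereology, the area fraction of a random cross-section
    estimates the volume fraction."""
    fiber = total = 0
    for row in image:
        for px in row:
            total += 1
            if px >= threshold:
                fiber += 1
    return fiber / total

# Toy cross-section patch: bright fibers (220) in dark epoxy matrix (40).
patch = [[220, 40, 220, 220],
         [40, 220, 220, 40],
         [220, 220, 40, 220]]
vf = fiber_volume_fraction(patch, threshold=128)  # 8/12
```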

  10. Shape-Constrained Segmentation Approach for Arctic Multiyear Sea Ice Floe Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Brucker, Ludovic; Ivanoff, Alvaro; Tilton, James C.

    2013-01-01

    The melting of sea ice is correlated to increases in sea surface temperature and associated climatic changes. Therefore, it is important to investigate how rapidly sea ice floes melt. For this purpose, a new TempoSeg method for multitemporal segmentation of multiyear ice floes is proposed. The microwave radiometer is used to track the position of an ice floe. Then, a time series of MODIS images is created with the ice floe in the image center. The TempoSeg method segments these images into two regions: Floe and Background. First, morphological feature extraction is applied. Then, the central image pixel is marked as Floe, and shape-constrained best merge region growing is performed. The resulting two-region map is post-filtered by applying morphological operators. We have successfully tested our method on a set of MODIS images and estimated the area of a sea ice floe as a function of time.
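    The "central pixel marked as Floe, then region growing" step can be sketched as plain 4-connected growth from the image center; the shape constraint of the actual best-merge algorithm is omitted here, and the intensities and tolerance are assumptions.

```python
from collections import deque

def grow_floe(image, seed, tol):
    """Region growing from the seed pixel: absorb 4-connected neighbours
    whose intensity lies within `tol` of the seed value. A simplified
    stand-in for shape-constrained best merge region growing."""
    h, w = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    floe = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in floe
                    and abs(image[ny][nx] - seed_val) <= tol):
                floe.add((ny, nx))
                queue.append((ny, nx))
    return floe  # 'Floe' pixels; everything else is 'Background'

# Bright floe (around 200) on dark ocean (20); the image centre is on the floe.
img = [[20, 20, 20, 20],
       [20, 200, 210, 20],
       [20, 205, 200, 20],
       [20, 20, 20, 20]]
floe = grow_floe(img, seed=(1, 1), tol=30)
```

    The floe area as a function of time then follows from the pixel count scaled by the ground resolution of each MODIS frame.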

  11. An ECG ambulatory system with mobile embedded architecture for ST-segment analysis.

    PubMed

    Miranda-Cid, Alejandro; Alvarado-Serrano, Carlos

    2010-01-01

    A prototype of an ECG ambulatory system for long-term monitoring of the ST segment in 3 leads has been developed, featuring low power consumption, portability, and data storage on solid-state memory cards. The solution presented is based on the mobile embedded architecture of a portable entertainment device, used as a tool for storage and processing of bioelectric signals, and a mid-range RISC microcontroller, the PIC 16F877, which performs the digitization and transmission of the ECG. The ECG amplifier stage operates at low power from a unipolar supply and introduces minimal distortion of the high-pass filter's phase response in the ST segment. We developed an algorithm that manages access to files through a FAT32 implementation, as well as ECG display on the device screen. The records are stored in TXT format for further processing. After acquisition, the system works as a standard USB mass storage device.

  12. Design and Analysis of Modules for Segmented X-Ray Optics

    NASA Technical Reports Server (NTRS)

    McClelland, Ryan S.; Biskach, Michael P.; Chan, Kai-Wing; Saha, Timo T.; Zhang, William W.

    2012-01-01

    Future X-ray astronomy missions demand thin, light, and closely packed optics which lend themselves to segmentation of the annular mirrors and, in turn, a modular approach to the mirror design. The modular approach to X-ray Flight Mirror Assembly (FMA) design allows excellent scalability of the mirror technology to support a variety of mission sizes and science objectives. This paper describes FMA designs using slumped glass mirror segments for several X-ray astrophysics missions studied by NASA and explores the driving requirements and subsequent verification tests necessary to qualify a slumped glass mirror module for space-flight. A rigorous testing program is outlined allowing Technical Development Modules to reach technical readiness for mission implementation while reducing mission cost and schedule risk.

  13. Global segmentation and curvature analysis of volumetric data sets using trivariate B-spline functions.

    PubMed

    Soldea, Octavian; Elber, Gershon; Rivlin, Ehud

    2006-02-01

    This paper presents a method to globally segment volumetric images into regions that contain convex or concave (elliptic) iso-surfaces, planar or cylindrical (parabolic) iso-surfaces, and volumetric regions with saddle-like (hyperbolic) iso-surfaces, regardless of the value of the iso-surface level. The proposed scheme relies on a novel approach to globally compute, bound, and analyze the Gaussian and mean curvatures of an entire volumetric data set, using a trivariate B-spline volumetric representation. This scheme derives a new differential scalar field for a given volumetric scalar field, which could easily be adapted to other differential properties. Moreover, this scheme can set the basis for more precise and accurate segmentation of data sets targeting the identification of primitive parts. Since the proposed scheme employs piecewise continuous functions, it is precise and insensitive to aliasing.
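    For an iso-surface of a scalar field F, the Gaussian and mean curvatures that drive the elliptic/parabolic/hyperbolic classification have standard closed forms in terms of the gradient and Hessian of F (notation ours; the paper evaluates such differential quantities on the trivariate B-spline representation):

```latex
% Curvatures of the iso-surface F(x,y,z) = c, with gradient \nabla F (row
% vector), Hessian H(F), and H^{*}(F) the adjoint of the Hessian:
K_{G} \;=\; \frac{\nabla F \, H^{*}(F) \, \nabla F^{T}}{\lVert \nabla F \rVert^{4}},
\qquad
K_{M} \;=\; \frac{\nabla F \, H(F) \, \nabla F^{T} \;-\; \lVert \nabla F \rVert^{2}\,\operatorname{tr} H(F)}{2\,\lVert \nabla F \rVert^{3}}.
```

    Points are then labeled elliptic where the Gaussian curvature is positive, parabolic where it vanishes, and hyperbolic where it is negative, independently of the iso-surface level.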

  14. Sensitivity field distributions for segmental bioelectrical impedance analysis based on real human anatomy

    NASA Astrophysics Data System (ADS)

    Danilov, A. A.; Kramarenko, V. K.; Nikolaev, D. V.; Rudnev, S. G.; Salamatova, V. Yu; Smirnov, A. V.; Vassilevski, Yu V.

    2013-04-01

    In this work, an adaptive unstructured tetrahedral mesh generation technology is applied for simulation of segmental bioimpedance measurements using high-resolution whole-body model of the Visible Human Project man. Sensitivity field distributions for a conventional tetrapolar, as well as eight- and ten-electrode measurement configurations are obtained. Based on the ten-electrode configuration, we suggest an algorithm for monitoring changes in the upper lung area.

  15. A new high-resolution computed tomography (CT) segmentation method for trabecular bone architectural analysis.

    PubMed

    Scherf, Heike; Tilgner, Rico

    2009-09-01

    In the last decade, high-resolution computed tomography (CT) and microcomputed tomography (micro-CT) have been increasingly used in anthropological studies and as a complement to traditional histological techniques. This is due in large part to the ability of CT techniques to nondestructively extract three-dimensional representations of bone structures. Despite prior studies employing CT techniques, no completely reliable method of bone segmentation has been established. Accurate preprocessing of digital data is crucial for measurement accuracy, especially when subtle structures such as trabecular bone are investigated. The research presented here is a new, reproducible, accurate, and fully automated computerized segmentation method for high-resolution CT datasets of fossil and recent cancellous bone: the Ray Casting Algorithm (RCA). We compare this technique with commonly used methods of image thresholding (i.e., the half-maximum height protocol and the automatic, adaptive iterative thresholding procedure). While the quality of the input images is crucial for conventional image segmentation, the RCA method is robust regarding the signal to noise ratio, beam hardening, ring artifacts, and blurriness. Tests with data of extant and fossil material demonstrate the superior quality of RCA compared with conventional thresholding procedures, and emphasize the need for careful consideration of optimal CT scanning parameters.
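    For contrast with the Ray Casting Algorithm (not reproduced here), the half-maximum height protocol mentioned above amounts to thresholding at the midpoint between the two characteristic intensities; the toy CT values below are assumptions.

```python
def half_maximum_height(bone_peak, background_peak):
    """Half-maximum-height (HMH) threshold: the midpoint between the
    characteristic bone and background intensities."""
    return (bone_peak + background_peak) / 2.0

def segment(image, threshold):
    """Binary segmentation: 1 = bone, 0 = background."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

# Toy CT line profile: trabecular struts (1800) against marrow space (200).
profile = [[200, 1800, 1800, 200, 1800]]
t = half_maximum_height(1800, 200)   # 1000.0
mask = segment(profile, t)
```

    Because this rule depends directly on the measured peak intensities, it inherits any beam hardening or blurring in the input, which is the sensitivity the RCA method is designed to avoid.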

  16. Automatic nevi segmentation using adaptive mean shift filters and feature analysis

    NASA Astrophysics Data System (ADS)

    King, Michael A.; Lee, Tim K.; Atkins, M. Stella; McLean, David I.

    2004-05-01

    A novel automatic method of segmenting nevi is explained and analyzed in this paper. The first step in nevi segmentation is to iteratively apply an adaptive mean shift filter to form clusters in the image and to remove noise. The goal of this step is to remove differences in skin intensity and hairs from the image, while still preserving the shape of nevi present on the skin. Each iteration of the mean shift filter changes pixel values to be a weighted average of the pixels in its neighborhood. Some new extensions to the mean shift filter are proposed to allow for better segmentation of nevi from the skin. The kernel, which describes how the pixels in its neighborhood will be averaged, is adaptive; the shape of the kernel is a function of the local histogram. After initial clustering, a simple merging of clusters is done. Finally, clusters that are local minima are found and analyzed to determine which clusters are nevi. When this algorithm was compared to an assessment by an expert dermatologist, it showed a sensitivity rate and diagnostic accuracy of over 95% on the test set for nevi larger than 1.5 mm.
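    The basic mean shift iteration on intensities can be sketched with a flat (non-adaptive) kernel for simplicity; the adaptive, histogram-shaped kernel proposed in the paper is not reproduced, and the sample values are invented.

```python
def mean_shift_value(values, start, bandwidth, iters=20):
    """Iteratively move an intensity toward the mean of all samples within
    `bandwidth` (a flat kernel), converging to a local mode of the
    intensity distribution."""
    v = start
    for _ in range(iters):
        window = [u for u in values if abs(u - v) <= bandwidth]
        new_v = sum(window) / len(window)
        if abs(new_v - v) < 1e-9:      # converged to a mode
            break
        v = new_v
    return v

# Skin pixels cluster near 180, a nevus near 60; start from a noisy skin pixel.
values = [58, 60, 62, 178, 180, 182, 184]
mode = mean_shift_value(values, start=170, bandwidth=20)
```

    Applying this per pixel flattens skin-tone variation while pixels inside the nevus converge to the darker mode, which is what forms the clusters the later merging stage operates on.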

  17. Design and analysis of a modified segmented cladding fiber with large mode area

    NASA Astrophysics Data System (ADS)

    Ma, Shaoshuo; Ning, Tigang; Li, Jing; Zheng, Jingjing; Wen, Xiaodong; Pei, Li

    2017-02-01

    This paper proposes a novel segmented cladding fiber structure for large mode area properties. In this structure a thin ring is placed between the high index core and nonuniform cladding. It is called Single-Ring Segmented-Cladding Fiber (SR-SCF). The novel fiber offers the possibility of single-mode(SM) operation from 1 μm to 1.7 μm with a large core diameter. With illustrations, the fiber has a better SM operation than segmented-cladding fiber (SCF) is demonstrated. A large effective area of 1000 μm2 is achieved. The SM operation with very high suppression of the higher order modes can arise by 76%. Moreover, mode spacing between the adjacent modes (LP01 and LP11) is also improved significantly. Besides, the bending property is analyzed. It is found that the fiber is insensitive to bending angle ranging from -180° to 180° at bending radius of 30 cm. The proposed fiber will play an important role in developing high power fiber laser, fiber amplifier and high power delivery application.

  18. Challenges in the segmentation and analysis of X-ray Micro-CT image data

    NASA Astrophysics Data System (ADS)

    Larsen, J. D.; Schaap, M. G.; Tuller, M.; Kulkarni, R.; Guber, A.

    2014-12-01

    Pore scale modeling of fluid flow is becoming increasing popular among scientific disciplines. With increased computational power, and technological advancements it is now possible to create realistic models of fluid flow through highly complex porous media by using a number of fluid dynamic techniques. One such technique that has gained popularity is lattice Boltzmann for its relative ease of programming and ability to capture and represent complex geometries with simple boundary conditions. In this study lattice Boltzmann fluid models are used on macro-porous silt loam soil imagery that was obtained using an industrial CT scanner. The soil imagery was segmented with six separate automated segmentation standards to reduce operator bias and provide distinction between phases. The permeability of the reconstructed samples was calculated, with Darcy's Law, from lattice Boltzmann simulations of fluid flow in the samples. We attempt to validate simulated permeability from differing segmentation algorithms to experimental findings. Limitations arise with X-ray micro-CT image data. Polychromatic X-ray CT has the potential to produce low image contrast and image artifacts. In this case, we find that the data is unsegmentable and unable to be modeled in a realistic and unbiased fashion.

  19. HARDMAN Comparability Analysis Methodology Guide. Volume 2. Problem Definition. Step 1 - Systems Analysis

    DTIC Science & Technology

    1985-04-01

    performance measure and a performance standard. Functions and sub-function provided earlier are at a high level of abstraction, while performance... Measures and Standards 1C-1 Distinctions Between the Predecessor, Baseline Comparison, and Proposed Systems 1.4-1 EIC Assignment Example 1.4-2 TB 750...analysis flow diagrams depict, at a high level, the general flow of data and the interrelationship of the individual HAROMAN substeps (see Volume I

  20. ATHOS: a computer program for thermal-hydraulic analysis of steam generators. Volume 2. Programmer's manual

    SciTech Connect

    Singhal, A.K.; Keeton, L.W.; Przekwas, A.J.; Weems, J.S.

    1982-10-01

    ATHOS (Analysis of the Thermal Hydraulics of Steam Generators) is a computer code developed by CHAM of North America Incorporated, under the contract RP 1066-1 from the Electric Power Research Institute, Palo Alto, California. ATHOS supercedes the earlier code URSULA2. ATHOS is designed for three-dimensional, steady-state and transient analyses of PWR steam generators. The current version of the code has been checked out for: three different configurations of the recirculating-type U-tube steam generators; the homogeneous and algebraic-slip flow models; and full and part load operating conditions. The description of ATHOS is divided into the following four volumes: Volume 1, Mathematical and Physical Models and Methods of Solution; Volume 2, Programmer's Manual; Volume 3, User's Manual; and Volume 4, Applications. The code's possible uses, capabilities and limitations are described in Volume 1 as well as in Volume 3.

  1. EVENT SEGMENTATION

    PubMed Central

    Zacks, Jeffrey M.; Swallow, Khena M.

    2012-01-01

    One way to understand something is to break it up into parts. New research indicates that segmenting ongoing activity into meaningful events is a core component of ongoing perception, with consequences for memory and learning. Behavioral and neuroimaging data suggest that event segmentation is automatic and that people spontaneously segment activity into hierarchically organized parts and sub-parts. This segmentation depends on the bottom-up processing of sensory features such as movement, and on the top-down processing of conceptual features such as actors’ goals. How people segment activity affects what they remember later; as a result, those who identify appropriate event boundaries during perception tend to remember more and learn more proficiently. PMID:22468032

  2. Genomic sequence analysis of the MHC class I G/F segment in common marmoset (Callithrix jacchus).

    PubMed

    Kono, Azumi; Brameier, Markus; Roos, Christian; Suzuki, Shingo; Shigenari, Atsuko; Kametani, Yoshie; Kitaura, Kazutaka; Matsutani, Takaji; Suzuki, Ryuji; Inoko, Hidetoshi; Walter, Lutz; Shiina, Takashi

    2014-04-01

    The common marmoset (Callithrix jacchus) is a New World monkey that is used frequently as a model for various human diseases. However, detailed knowledge about the MHC is still lacking. In this study, we sequenced and annotated a total of 854 kb of the common marmoset MHC region that corresponds to the HLA-A/G/F segment (Caja-G/F) between the Caja-G1 and RNF39 genes. The sequenced region contains 19 MHC class I genes, of which 14 are of the MHC-G (Caja-G) type, and 5 are of the MHC-F (Caja-F) type. Six putatively functional Caja-G and Caja-F genes (Caja-G1, Caja-G3, Caja-G7, Caja-G12, Caja-G13, and Caja-F4), 13 pseudogenes related either to Caja-G or Caja-F, three non-MHC genes (ZNRD1, PPPIR11, and RNF39), two miscRNA genes (ZNRD1-AS1 and HCG8), and one non-MHC pseudogene (ETF1P1) were identified. Phylogenetic analysis suggests segmental duplications of units consisting of basically five (four Caja-G and one Caja-F) MHC class I genes, with subsequent expansion/deletion of genes. A similar genomic organization of the Caja-G/F segment has not been observed in catarrhine primates, indicating that this genomic segment was formed in New World monkeys after the split of New World and Old World monkeys.

  3. Prevalence and Distribution of Segmentation Errors in Macular Ganglion Cell Analysis of Healthy Eyes Using Cirrus HD-OCT

    PubMed Central

    Alshareef, Rayan A.; Dumpala, Sunila; Rapole, Shruthi; Januwada, Manideepak; Goud, Abhilash; Peguda, Hari Kumar; Chhablani, Jay

    2016-01-01

    Purpose To determine the frequency of different types of spectral domain optical coherence tomography (SD-OCT) scan artifacts and errors in ganglion cell algorithm (GCA) in healthy eyes. Methods Infrared image, color-coded map and each of the 128 horizontal b-scans acquired in the macular ganglion cell-inner plexiform layer scans using the Cirrus HD-OCT (Carl Zeiss Meditec, Dublin, CA) macular cube 512 × 128 protocol in 30 healthy normal eyes were evaluated. The frequency and pattern of each artifact was determined. Deviation of the segmentation line was classified into mild (less than 10 microns), moderate (10–50 microns) and severe (more than 50 microns). Each deviation, if present, was noted as upward or downward deviation. Each artifact was further described as per location on the scan and zones in the total scan area. Results A total of 1029 (26.8%) out of total 3840 scans had scan errors. The most common scan error was segmentation error (100%), followed by degraded images (6.70%), blink artifacts (0.09%) and out of register artifacts (3.3%). Misidentification of the inner retinal layers was most frequent (62%). Upward Deviation of the segmentation line (47.91%) and severe deviation (40.3%) were more often noted. Artifacts were mostly located in the central scan area (16.8%). The average number of scans with artifacts per eye was 34.3% and was not related to signal strength on Spearman correlation (p = 0.36). Conclusions This study reveals that image artifacts and scan errors in SD-OCT GCA analysis are common and frequently involve segmentation errors. These errors may affect inner retinal thickness measurements in a clinically significant manner. Careful review of scans for artifacts is important when using this feature of SD-OCT device. PMID:27191396

  4. Multi-resolution Shape Analysis via Non-Euclidean Wavelets: Applications to Mesh Segmentation and Surface Alignment Problems.

    PubMed

    Kim, Won Hwa; Chung, Moo K; Singh, Vikas

    2013-01-01

    The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis, to derive Non-Euclidean Wavelets based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.

  5. Earth Observatory Satellite system definition study. Report 5: System design and specifications. Volume 4: Mission peculiar spacecraft segment and module specifications

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The specifications for the Earth Observatory Satellite (EOS) peculiar spacecraft segment and associated subsystems and modules are presented. The specifications considered include the following: (1) wideband communications subsystem module, (2) mission peculiar software, (3) hydrazine propulsion subsystem module, (4) solar array assembly, and (5) the scanning spectral radiometer.

  6. Effects of immersion on visual analysis of volume data.

    PubMed

    Laha, Bireswar; Sensharma, Kriti; Schiffbauer, James D; Bowman, Doug A

    2012-04-01

    Volume visualization has been widely used for decades for analyzing datasets ranging from 3D medical images to seismic data to paleontological data. Many have proposed using immersive virtual reality (VR) systems to view volume visualizations, and there is anecdotal evidence of the benefits of VR for this purpose. However, there has been very little empirical research exploring the effects of higher levels of immersion for volume visualization, and it is not known how various components of immersion influence the effectiveness of visualization in VR. We conducted a controlled experiment in which we studied the independent and combined effects of three components of immersion (head tracking, field of regard, and stereoscopic rendering) on the effectiveness of visualization tasks with two x-ray microscopic computed tomography datasets. We report significant benefits of analyzing volume data in an environment involving those components of immersion. We find that the benefits do not necessarily require all three components simultaneously, and that the components have variable influence on different task categories. The results of our study improve our understanding of the effects of immersion on perceived and actual task performance, and provide guidance on the choice of display systems to designers seeking to maximize the effectiveness of volume visualization applications.

  7. A rapid and efficient 2D/3D nuclear segmentation method for analysis of early mouse embryo and stem cell image data.

    PubMed

    Lou, Xinghua; Kang, Minjung; Xenopoulos, Panagiotis; Muñoz-Descalzo, Silvia; Hadjantonakis, Anna-Katerina

    2014-03-11

    Segmentation is a fundamental problem that dominates the success of microscopic image analysis. In almost 25 years of cell detection software development, there is still no single piece of commercial software that works well in practice when applied to early mouse embryo or stem cell image data. To address this need, we developed MINS (modular interactive nuclear segmentation) as a MATLAB/C++-based segmentation tool tailored for counting cells and fluorescent intensity measurements of 2D and 3D image data. Our aim was to develop a tool that is accurate and efficient yet straightforward and user friendly. The MINS pipeline comprises three major cascaded modules: detection, segmentation, and cell position classification. An extensive evaluation of MINS on both 2D and 3D images, and comparison to related tools, reveals improvements in segmentation accuracy and usability. Thus, its accuracy and ease of use will allow MINS to be implemented for routine single-cell-level image analyses.

  8. Analysis of flexible aircraft longitudinal dynamics and handling qualities. Volume 1: Analysis methods

    NASA Technical Reports Server (NTRS)

    Waszak, M. R.; Schmidt, D. S.

    1985-01-01

    As aircraft become larger and lighter due to design requirements for increased payload and improved fuel efficiency, they will also become more flexible. For highly flexible vehicles, the handling qualities may not be accurately predicted by conventional methods. This study applies two analysis methods to a family of flexible aircraft in order to investigate how and when structural (especially dynamic aeroelastic) effects affect the dynamic characteristics of aircraft. The first type of analysis is an open loop model analysis technique. This method considers the effects of modal residue magnitudes on determining vehicle handling qualities. The second method is a pilot in the loop analysis procedure that considers several closed loop system characteristics. Volume 1 consists of the development and application of the two analysis methods described above.

  9. Computer-aided segmentation and 3D analysis of in vivo MRI examinations of the human vocal tract during phonation

    NASA Astrophysics Data System (ADS)

    Wismüller, Axel; Behrends, Johannes; Hoole, Phil; Leinsinger, Gerda L.; Meyer-Baese, Anke; Reiser, Maximilian F.

    2008-03-01

    We developed, tested, and evaluated a 3D segmentation and analysis system for in vivo MRI examinations of the human vocal tract during phonation. For this purpose, six professionally trained speakers, age 22-34y, were examined using a standardized MRI protocol (1.5 T, T1w FLASH, ST 4mm, 23 slices, acq. time 21s). The volunteers performed a prolonged (>=21s) emission of sounds of the German phonemic inventory. Simultaneous audio tape recording was obtained to control correct utterance. Scans were made in axial, coronal, and sagittal planes each. Computer-aided quantitative 3D evaluation included (i) automated registration of the phoneme-specific data acquired in different slice orientations, (ii) semi-automated segmentation of oropharyngeal structures, (iii) computation of a curvilinear vocal tract midline in 3D by nonlinear PCA, (iv) computation of cross-sectional areas of the vocal tract perpendicular to this midline. For the vowels /a/,/e/,/i/,/o/,/ø/,/u/,/y/, the extracted area functions were used to synthesize phoneme sounds based on an articulatory-acoustic model. For quantitative analysis, recorded and synthesized phonemes were compared, where area functions extracted from 2D midsagittal slices were used as a reference. All vowels could be identified correctly based on the synthesized phoneme sounds. The comparison between synthesized and recorded vowel phonemes revealed that the quality of phoneme sound synthesis was improved for phonemes /a/ and /y/, if 3D instead of 2D data were used, as measured by the average relative frequency shift between recorded and synthesized vowel formants (p<0.05, one-sided Wilcoxon rank sum test). In summary, the combination of fast MRI followed by subsequent 3D segmentation and analysis is a novel approach to examine human phonation in vivo. It unveils functional anatomical findings that may be essential for realistic modelling of the human vocal tract during speech production.

  10. Dimensionality reduction of hyperspectral imagery based on spectral analysis of homogeneous segments: distortion measurements and classification scores

    NASA Astrophysics Data System (ADS)

    Alparone, Luciano; Argenti, Fabrizio; Dionisio, Michele; Santurri, Leonardo

    2004-02-01

    In this work, a new strategy for the analysis of hyperspectral image data is described and assessed. Firstly, the image is segmented into areas based on a spatial homogeneity criterion of pixel spectra. Then, a reduced data set (RDS) is produced by applying the projection pursuit (PP) algorithm to each of the segments in which the original hyperspectral image has been partitioned. Few significant spectral pixels are extracted from each segment. This operation allows the size of the data set to be dramatically reduced; nevertheless, most of the spectral information relative to the whole image is retained by RDS. In fact, RDS constitutes a good approximation of the most representative elements that would be found for the whole image, as the spectral features of RDS are very similar to the features of the original hyperspectral data. Therefore, the elements of a basis, either orthogonal or nonorthogonal, that best represents RDS, are searched for. Algorithms that can be used for this task are principal component analysis (PCA), independent component analysis (ICA), PP, or matching pursuit (MP). Once the basis has been calculated from RDS, the whole hyperspectral data set is decomposed on such a basis to yield a sequence of components, or features, whose (statistical) significance decreases with the index. Hence, minor components may be discarded without compromising the results of application tasks. Experiments carried out on AVIRIS data, whose ground truth was available, show that PCA based on RDS, even if suboptimal in the MMSE sense with respect to standard PCA, increases the separability of thematic classes, which is favored when pixel vectors in the transformed domain are homogeneously spread around their class centers.

  11. Fuzzy pulmonary vessel segmentation in contrast enhanced CT data

    NASA Astrophysics Data System (ADS)

    Kaftan, Jens N.; Kiraly, Atilla P.; Bakai, Annemarie; Das, Marco; Novak, Carol L.; Aach, Til

    2008-03-01

    Pulmonary vascular tree segmentation has numerous applications in medical imaging and computer-aided diagnosis (CAD), including detection and visualization of pulmonary emboli (PE), improved lung nodule detection, and quantitative vessel analysis. We present a novel approach to pulmonary vessel segmentation based on a fuzzy segmentation concept, combining the strengths of both threshold and seed point based methods. The lungs of the original image are first segmented and a threshold-based approach identifies core vessel components with a high specificity. These components are then used to automatically identify reliable seed points for a fuzzy seed point based segmentation method, namely fuzzy connectedness. The output of the method consists of the probability of each voxel belonging to the vascular tree. Hence, our method provides the possibility to adjust the sensitivity/specificity of the segmentation result a posteriori according to application-specific requirements, through definition of a minimum vessel-probability required to classify a voxel as belonging to the vascular tree. The method has been evaluated on contrast-enhanced thoracic CT scans from clinical PE cases and demonstrates overall promising results. For quantitative validation we compare the segmentation results to randomly selected, semi-automatically segmented sub-volumes and present the resulting receiver operating characteristic (ROC) curves. Although we focus on contrast enhanced chest CT data, the method can be generalized to other regions of the body as well as to different imaging modalities.

  12. Cargo Logistics Airlift Systems Study (CLASS). Volume 1: Analysis of current air cargo system

    NASA Technical Reports Server (NTRS)

    Burby, R. J.; Kuhlman, W. H.

    1978-01-01

    The material presented in this volume is classified into the following sections; (1) analysis of current routes; (2) air eligibility criteria; (3) current direct support infrastructure; (4) comparative mode analysis; (5) political and economic factors; and (6) future potential market areas. An effort was made to keep the observations and findings relating to the current systems as objective as possible in order not to bias the analysis of future air cargo operations reported in Volume 3 of the CLASS final report.

  13. Texture analysis of automatic graph cuts segmentations for detection of lung cancer recurrence after stereotactic radiotherapy

    NASA Astrophysics Data System (ADS)

    Mattonen, Sarah A.; Palma, David A.; Haasbeek, Cornelis J. A.; Senan, Suresh; Ward, Aaron D.

    2015-03-01

    Stereotactic ablative radiotherapy (SABR) is a treatment for early-stage lung cancer with local control rates comparable to surgery. After SABR, benign radiation induced lung injury (RILI) results in tumour-mimicking changes on computed tomography (CT) imaging. Distinguishing recurrence from RILI is a critical clinical decision determining the need for potentially life-saving salvage therapies whose high risks in this population dictate their use only for true recurrences. Current approaches do not reliably detect recurrence within a year post-SABR. We measured the detection accuracy of texture features within automatically determined regions of interest, with the only operator input being the single line segment measuring tumour diameter, normally taken during the clinical workflow. Our leave-one-out cross validation on images taken 2-5 months post-SABR showed robustness of the entropy measure, with classification error of 26% and area under the receiver operating characteristic curve (AUC) of 0.77 using automatic segmentation; the results using manual segmentation were 24% and 0.75, respectively. AUCs for this feature increased to 0.82 and 0.93 at 8-14 months and 14-20 months post SABR, respectively, suggesting even better performance nearer to the date of clinical diagnosis of recurrence; thus this system could also be used to support and reinforce the physician's decision at that time. Based on our ongoing validation of this automatic approach on a larger sample, we aim to develop a computer-aided diagnosis system which will support the physician's decision to apply timely salvage therapies and prevent patients with RILI from undergoing invasive and risky procedures.

  14. Texture-based segmentation and analysis of emphysema depicted on CT images

    NASA Astrophysics Data System (ADS)

    Tan, Jun; Zheng, Bin; Wang, Xingwei; Lederman, Dror; Pu, Jiantao; Sciurba, Frank C.; Gur, David; Leader, J. Ken

    2011-03-01

    In this study we present a texture-based method of emphysema segmentation depicted on CT examination consisting of two steps. Step 1, a fractal dimension based texture feature extraction is used to initially detect base regions of emphysema. A threshold is applied to the texture result image to obtain initial base regions. Step 2, the base regions are evaluated pixel-by-pixel using a method that considers the variance change incurred by adding a pixel to the base in an effort to refine the boundary of the base regions. Visual inspection revealed a reasonable segmentation of the emphysema regions. There was a strong correlation between lung function (FEV1%, FEV1/FVC, and DLCO%) and fraction of emphysema computed using the texture based method, which were -0.433, -.629, and -0.527, respectively. The texture-based method produced more homogeneous emphysematous regions compared to simple thresholding, especially for large bulla, which can appear as speckled regions in the threshold approach. In the texture-based method, single isolated pixels may be considered as emphysema only if neighboring pixels meet certain criteria, which support the idea that single isolated pixels may not be sufficient evidence that emphysema is present. One of the strength of our complex texture-based approach to emphysema segmentation is that it goes beyond existing approaches that typically extract a single or groups texture features and individually analyze the features. We focus on first identifying potential regions of emphysema and then refining the boundary of the detected regions based on texture patterns.

  15. Functional analysis of centipede development supports roles for Wnt genes in posterior development and segment generation.

    PubMed

    Hayden, Luke; Schlosser, Gerhard; Arthur, Wallace

    2015-01-01

    The genes of the Wnt family play important and highly conserved roles in posterior growth and development in a wide range of animal taxa. Wnt genes also operate in arthropod segmentation, and there has been much recent debate regarding the relationship between arthropod and vertebrate segmentation mechanisms. Due to its phylogenetic position, body form, and possession of many (11) Wnt genes, the centipede Strigamia maritima is a useful system with which to examine these issues. This study takes a functional approach based on treatment with lithium chloride, which causes ubiquitous activation of canonical Wnt signalling. This is the first functional developmental study performed in any of the 15,000 species of the arthropod subphylum Myriapoda. The expression of all 11 Wnt genes in Strigamia was analyzed in relation to posterior development. Three of these genes, Wnt11, Wnt5, and WntA, were strongly expressed in the posterior region and, thus, may play important roles in posterior developmental processes. In support of this hypothesis, LiCl treatment of S. maritima embryos was observed to produce posterior developmental defects and perturbations in AbdB and Delta expression. The effects of LiCl differ depending on the developmental stage treated, with more severe effects elicited by treatment during germband formation than by treatment at later stages. These results support a role for Wnt signalling in conferring posterior identity in Strigamia. In addition, data from this study are consistent with the hypothesis of segmentation based on a "clock and wavefront" mechanism operating in this species.

  16. Segmentation of liver and liver tumor for the Liver-Workbench

    NASA Astrophysics Data System (ADS)

    Zhou, Jiayin; Ding, Feng; Xiong, Wei; Huang, Weimin; Tian, Qi; Wang, Zhimin; Venkatesh, Sudhakar K.; Leow, Wee Kheng

    2011-03-01

    Robust and efficient segmentation tools are important for the quantification of 3D liver and liver tumor volumes which can greatly help clinicians in clinical decision-making and treatment planning. A two-module image analysis procedure which integrates two novel semi-automatic algorithms has been developed to segment 3D liver and liver tumors from multi-detector computed tomography (MDCT) images. The first module is to segment the liver volume using a flippingfree mesh deformation model. In each iteration, before mesh deformation, the algorithm detects and avoids possible flippings which will cause the self-intersection of the mesh and then the undesired segmentation results. After flipping avoidance, Laplacian mesh deformation is performed with various constraints in geometry and shape smoothness. In the second module, the segmented liver volume is used as the ROI and liver tumors are segmented by using support vector machines (SVMs)-based voxel classification and propagational learning. First a SVM classifier was trained to extract tumor region from one single 2D slice in the intermediate part of a tumor by voxel classification. Then the extracted tumor contour, after some morphological operations, was projected to its neighboring slices for automated sampling, learning and further voxel classification in neighboring slices. This propagation procedure continued till all tumorcontaining slices were processed. The performance of the whole procedure was tested using 20 MDCT data sets and the results were promising: Nineteen liver volumes were successfully segmented out, with the mean relative absolute volume difference (RAVD), volume overlap error (VOE) and average symmetric surface distance (ASSD) to reference segmentation of 7.1%, 12.3% and 2.5 mm, respectively. For live tumors segmentation, the median RAVD, VOE and ASSD were 7.3%, 18.4%, 1.7 mm, respectively.

  17. Segmentation of acute pyelonephritis area on kidney SPECT images using binary shape analysis

    NASA Astrophysics Data System (ADS)

    Wu, Chia-Hsiang; Sun, Yung-Nien; Chiu, Nan-Tsing

    1999-05-01

    Acute pyelonephritis is a serious disease in children that may result in irreversible renal scarring. The ability to localize the site of urinary tract infection and the extent of acute pyelonephritis has considerable clinical importance. In this paper, we are devoted to segment the acute pyelonephritis area from kidney SPECT images. A two-step algorithm is proposed. First, the original images are translated into binary versions by automatic thresholding. Then the acute pyelonephritis areas are located by finding convex deficiencies in the obtained binary images. This work gives important diagnosis information for physicians and improves the quality of medical care for children acute pyelonephritis disease.

  18. Robust demarcation of basal cell carcinoma by dependent component analysis-based segmentation of multi-spectral fluorescence images.

    PubMed

    Kopriva, Ivica; Persin, Antun; Puizina-Ivić, Neira; Mirić, Lina

    2010-07-02

    This study was designed to demonstrate robust performance of the novel dependent component analysis (DCA)-based approach to demarcation of the basal cell carcinoma (BCC) through unsupervised decomposition of the red-green-blue (RGB) fluorescent image of the BCC. Robustness to intensity fluctuation is due to the scale invariance property of DCA algorithms, which exploit spectral and spatial diversities between the BCC and the surrounding tissue. Used filtering-based DCA approach represents an extension of the independent component analysis (ICA) and is necessary in order to account for statistical dependence that is induced by spectral similarity between the BCC and surrounding tissue. This generates weak edges what represents a challenge for other segmentation methods as well. By comparative performance analysis with state-of-the-art image segmentation methods such as active contours (level set), K-means clustering, non-negative matrix factorization, ICA and ratio imaging we experimentally demonstrate good performance of DCA-based BCC demarcation in two demanding scenarios where intensity of the fluorescent image has been varied almost two orders of magnitude.

  19. A computer program for comprehensive ST-segment depression/heart rate analysis of the exercise ECG test.

    PubMed

    Lehtinen, R; Vänttinen, H; Sievänen, H; Malmivuo, J

    1996-06-01

    The ST-segment depression/heart rate (ST/HR) analysis has been found to improve the diagnostic accuracy of the exercise ECG test in detecting myocardial ischemia. Recently, three continuous diagnostic variables based on the ST/HR analysis have been introduced: the ST/HR slope, the ST/HR index and the ST/HR hysteresis. The last utilises both the exercise and recovery phases of the exercise ECG test, whereas the two former are based on the exercise phase only. This article presents a computer program which not only calculates the above three diagnostic variables but also plots full diagrams of ST-segment depression against heart rate during both the exercise and recovery phases for each ECG lead from given ST/HR data. The program can be used in daily clinical exercise ECG diagnosis provided that the ST/HR data from the ECG measurement system can be linked to the program. At present, its main purpose is to provide clinical and medical researchers with a practical tool for comprehensive clinical evaluation and development of the ST/HR analysis.
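
    Two of the three ST/HR variables can be illustrated with a short NumPy sketch. The definitions below (maximal ST depression divided by the total exercise HR increase for the index; a least-squares regression slope for the ST/HR slope) are simplified textbook forms, not the program described in the article, and the ST/HR hysteresis, which needs matched exercise and recovery data, is omitted.

```python
import numpy as np

def st_hr_index(st_depression, heart_rate):
    """ST/HR index: maximal ST depression divided by total exercise HR increase."""
    delta_hr = heart_rate.max() - heart_rate[0]
    return st_depression.max() / delta_hr

def st_hr_slope(st_depression, heart_rate):
    """ST/HR slope (simplified): least-squares slope of ST depression vs. HR."""
    slope, _intercept = np.polyfit(heart_rate, st_depression, 1)
    return slope
```

    For perfectly linear synthetic data the two measures coincide; on real lead-wise data they generally differ, which is why the program plots the full ST-vs-HR diagrams.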

  20. Automated segmentation of the lamina cribrosa using Frangi's filter: a novel approach for rapid identification of tissue volume fraction and beam orientation in a trabeculated structure in the eye.

    PubMed

    Campbell, Ian C; Coudrillier, Baptiste; Mensah, Johanne; Abel, Richard L; Ethier, C Ross

    2015-03-06

    The lamina cribrosa (LC) is a tissue in the posterior eye with a complex trabecular microstructure. This tissue is of great research interest, as it is likely the initial site of retinal ganglion cell axonal damage in glaucoma. Unfortunately, the LC is difficult to access experimentally, and thus imaging techniques in tandem with image processing have emerged as powerful tools to study the microstructure and biomechanics of this tissue. Here, we present a staining approach to enhance the contrast of the microstructure in micro-computed tomography (micro-CT) imaging as well as a comparison between tissues imaged with micro-CT and second harmonic generation (SHG) microscopy. We then apply a modified version of Frangi's vesselness filter to automatically segment the connective tissue beams of the LC and determine the orientation of each beam. This approach successfully segmented the beams of a porcine optic nerve head from micro-CT in three dimensions and SHG microscopy in two dimensions. As an application of this filter, we present finite-element modelling of the posterior eye that suggests that connective tissue volume fraction is the major driving factor of LC biomechanics. We conclude that segmentation with Frangi's filter is a powerful tool for future image-driven studies of LC biomechanics.
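
    Frangi's vesselness filter, which the authors modify for the LC, can be sketched in 2D with plain NumPy (a production alternative is skimage.filters.frangi). The Hessian is built from repeated np.gradient calls, its eigenvalues are obtained in closed form, and the parameters beta and c below are illustrative defaults, not the values used in the paper.

```python
import numpy as np

def frangi_2d(img, beta=0.5, c=0.5):
    """Minimal 2D Frangi vesselness for bright ridges on a dark background."""
    img = np.asarray(img, dtype=float)
    gr, gc = np.gradient(img)            # first derivatives (rows, cols)
    hrr, hrc1 = np.gradient(gr)          # second derivatives
    hrc2, hcc = np.gradient(gc)
    hrc = (hrc1 + hrc2) / 2.0            # symmetrize the mixed term
    # closed-form eigenvalues of the 2x2 symmetric Hessian
    mean = (hrr + hcc) / 2.0
    dev = np.sqrt(((hrr - hcc) / 2.0) ** 2 + hrc ** 2)
    l1, l2 = mean + dev, mean - dev
    # order so that |l1| <= |l2|
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    rb2 = (l1 / np.where(l2 == 0, 1e-12, l2)) ** 2   # blob-vs-ridge measure
    s2 = l1 ** 2 + l2 ** 2                           # second-order structureness
    v = np.exp(-rb2 / (2 * beta ** 2)) * (1.0 - np.exp(-s2 / (2 * c ** 2)))
    v[l2 > 0] = 0.0                      # keep only bright-ridge responses
    return v
```

    A bright line in a synthetic image produces a strong response along the line and none in the flat background, which is the property the authors exploit to isolate LC beams.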

  2. Molecular phylogeny of grey mullets (Teleostei: Mugilidae) in Greece: evidence from sequence analysis of mtDNA segments.

    PubMed

    Papasotiropoulos, Vasilis; Klossa-Kilia, Elena; Alahiotis, Stamatis N; Kilias, George

    2007-08-01

    Mitochondrial DNA sequence analysis has been used to explore genetic differentiation and phylogenetic relationships among five species of the Mugilidae family, Mugil cephalus, Chelon labrosus, Liza aurata, Liza ramada, and Liza saliens. DNA was isolated from samples originating from the Messolongi Lagoon in Greece. Three mtDNA segments (12s rRNA, 16s rRNA, and CO I) were PCR amplified and sequenced. Sequencing analysis revealed that the greatest genetic differentiation was observed between M. cephalus and all the other species studied, while C. labrosus and L. aurata were the closest taxa. Dendrograms obtained by the neighbor-joining method and Bayesian inference analysis exhibited the same topology. According to this topology, M. cephalus is the most distinct species and the remaining taxa are clustered together, with C. labrosus and L. aurata forming a single group. The latter result brings into question the monophyletic origin of the genus Liza.

  3. Analysis of offset error for segmented micro-structure optical element based on optical diffraction theory

    NASA Astrophysics Data System (ADS)

    Su, Jinyan; Wu, Shibin; Yang, Wei; Wang, Lihua

    2016-10-01

    Micro-structure optical elements are increasingly applied in modern optical systems due to characteristics such as light weight, ease of replication, high diffraction efficiency and many design variables. The Fresnel lens is a typical micro-structure optical element, so in this paper we take it as the basis of our study. An analytic solution for the Point Spread Function (PSF) of the segmented Fresnel lens is derived from the theory of optical diffraction, and a mathematical simulation model is established. We then take a segmented Fresnel lens with 5 sub-mirrors as an example. To analyze the influence of different offset errors on the system's far-field image quality, we obtain the analytic solution for the PSF of the system under different offset errors by Fourier transform. The results show that translation errors along the X, Y, and Z axes and tilt errors around the X and Y axes introduce phase errors that degrade the imaging quality of the system. The translation errors along the X, Y, and Z axes are linearly related to the corresponding phase errors, while the tilt errors around the X and Y axes are related to them through trigonometric functions. In addition, the standard deviations of the translation errors along the X and Y axes have a quadratic nonlinear relationship with the system's Strehl ratio. Finally, the tolerances of the different offset errors are obtained according to the Strehl criterion.

  4. Earth Observatory Satellite system definition study. Report 5: System design and specifications. Volume 3: General purpose spacecraft segment and module specifications

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The specifications for the Earth Observatory Satellite (EOS) general purpose spacecraft segment are presented. The satellite is designed to provide attitude stabilization, electrical power, and a communications data handling subsystem which can support various mission-peculiar subsystems. The specifications considered include the following: (1) structures subsystem, (2) thermal control subsystem, (3) communications and data handling subsystem module, (4) attitude control subsystem module, (5) power subsystem module, and (6) electrical integration subsystem.

  5. Quantitative Analysis of Change in Intracranial Volume After Posterior Cranial Vault Distraction.

    PubMed

    Shimizu, Azusa; Komuro, Yuzo; Shimoji, Kazuaki; Miyajima, Masakazu; Arai, Hajime

    2016-07-01

    Posterior cranial vault distraction is considered more effective for increasing intracranial volume than fronto-orbital advancement or anterior cranial vault expansion, but the changes in intracranial volume after posterior cranial vault distraction remain unclear. These changes were investigated in patients with craniosynostosis treated by this technique. Seven patients with craniosynostosis, 3 boys and 4 girls aged from 5 months to 3 years 3 months (mean 23 months) at operation, underwent posterior cranial vault distraction at Juntendo University Hospital from 2011 to 2014. Patient characteristics, length of distraction, and pre- and postoperative computed tomography findings were reviewed. Total intracranial volume, including the supratentorial space and posterior cranial fossa, was measured using workstation functions on three-dimensional computed tomography scans. Posterior distraction was performed without severe complications except in 2 patients requiring additional surgeries. The distraction length was 22.3 to 39 mm (mean 31 mm), the intracranial volume change was 144 to 281 mL (mean 192 mL), and the enlargement ratio of intracranial volume was 113% to 134% (mean 121%). The present quantitative analysis of intracranial volume change after posterior distraction showed greater increases in intracranial volume compared with previous reports. Furthermore, intracranial volumes in our patients became nearly normal and were maintained for the follow-up period (maximum 13 months). Posterior cranial vault distraction is very effective for increasing cranial volume and so may be the first choice of treatment in patients with craniosynostosis.

  6. Risk factors for neovascular glaucoma after carbon ion radiotherapy of choroidal melanoma using dose-volume histogram analysis

    SciTech Connect

    Hirasawa, Naoki . E-mail: naoki_h@nirs.go.jp; Tsuji, Hiroshi; Ishikawa, Hitoshi; Koyama-Ito, Hiroko; Kamada, Tadashi; Mizoe, Jun-Etsu; Ito, Yoshiyuki; Naganawa, Shinji; Ohnishi, Yoshitaka; Tsujii, Hirohiko

    2007-02-01

    Purpose: To determine the risk factors for neovascular glaucoma (NVG) after carbon ion radiotherapy (C-ion RT) of choroidal melanoma. Methods and Materials: A total of 55 patients with choroidal melanoma were treated between 2001 and 2005 with C-ion RT based on computed tomography treatment planning. All patients had a tumor of large size or one located close to the optic disk. Univariate and multivariate analyses were performed to identify the risk factors of NVG for the following parameters: gender, age, dose-volumes of the iris-ciliary body and the wall of the eyeball, and irradiation of the optic disk (ODI). Results: Neovascular glaucoma occurred in 23 patients and the 3-year cumulative NVG rate was 42.6 ± 6.8% (standard error), but enucleation from NVG was performed in only three eyes. Multivariate analysis revealed that the significant risk factors for NVG were V50IC (volume of the iris-ciliary body irradiated to ≥50 GyE) (p = 0.002) and ODI (p = 0.036). The 3-year NVG rates for patients with V50IC ≥0.127 mL and those with V50IC <0.127 mL were 71.4 ± 8.5% and 11.5 ± 6.3%, respectively. The corresponding rates for the patients with and without ODI were 62.9 ± 10.4% and 28.4 ± 8.0%, respectively. Conclusion: Dose-volume histogram analysis with computed tomography indicated that V50IC and ODI were independent risk factors for NVG. An irradiation system that can reduce the dose to both the anterior segment and the optic disk might be worth adopting to investigate whether or not the incidence of NVG can be decreased with it.
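
    Dose-volume quantities such as V50IC are threshold counts over the structure's dose grid. A minimal sketch, assuming the structure's voxel doses are already extracted into an array and the voxel volume is known (the function names are illustrative, not from the study's planning system):

```python
import numpy as np

def volume_at_dose(dose, threshold, voxel_volume_ml):
    """Absolute volume (mL) of a structure receiving at least `threshold` dose."""
    return np.count_nonzero(dose >= threshold) * voxel_volume_ml

def cumulative_dvh(dose, voxel_volume_ml, step=1.0):
    """Cumulative dose-volume histogram: volume receiving >= each dose level."""
    levels = np.arange(0.0, dose.max() + step, step)
    volumes = np.array([volume_at_dose(dose, d, voxel_volume_ml) for d in levels])
    return levels, volumes
```

    With the iris-ciliary body dose array, `volume_at_dose(dose, 50.0, voxel_volume_ml)` would give the V50IC value dichotomized at 0.127 mL in the analysis.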

  7. SEGMENTATION OF MITOCHONDRIA IN ELECTRON MICROSCOPY IMAGES USING ALGEBRAIC CURVES

    PubMed Central

    Seyedhosseini, Mojtaba; Ellisman, Mark H.; Tasdizen, Tolga

    2014-01-01

    High-resolution microscopy techniques have been used to generate large volumes of data with enough details for understanding the complex structure of the nervous system. However, automatic techniques are required to segment cells and intracellular structures in these multi-terabyte datasets and make anatomical analysis possible on a large scale. We propose a fully automated method that exploits both shape information and regional statistics to segment irregularly shaped intracellular structures such as mitochondria in electron microscopy (EM) images. The main idea is to use algebraic curves to extract shape features together with texture features from image patches. Then, these powerful features are used to learn a random forest classifier, which can predict mitochondria locations precisely. Finally, the algebraic curves together with regional information are used to segment the mitochondria at the predicted locations. We demonstrate that our method outperforms the state-of-the-art algorithms in segmentation of mitochondria in EM images. PMID:25132915

  8. Industrial process heat data analysis and evaluation. Volume 2

    SciTech Connect

    Lewandowski, A; Gee, R; May, K

    1984-07-01

    The Solar Energy Research Institute (SERI) has modeled seven of the Department of Energy (DOE) sponsored solar Industrial Process Heat (IPH) field experiments and has generated thermal performance predictions for each project. Additionally, these performance predictions have been compared with actual performance measurements taken at the projects. Predictions were generated using SOLIPH, an hour-by-hour computer code with the capability for modeling many types of solar IPH components and system configurations. Comparisons of reported and predicted performance resulted in good agreement when the field test reliability and availability was high. Volume I contains the main body of the work; objective model description, site configurations, model results, data comparisons, and summary. Volume II contains complete performance prediction results (tabular and graphic output) and computer program listings.

  9. Analysis of Optical Imaging from the Local Volume Legacy Survey

    NASA Astrophysics Data System (ADS)

    Friberg, Sarah; Snyder, E.; van Zee, L.; Croxall, K. V.; Funes, J. G.; Warren, S. R.; Lee, H.; Lee, J. C.; LVL Team

    2010-01-01

    We report the results of Hα and broadband optical imaging of 96 galaxies in the Local Volume Legacy Survey (LVL). The majority of galaxies in the volume limited (D < 11 Mpc) parent sample are low luminosity dwarf galaxies. We examine optical colors, star formation rates, and Hα equivalent widths both as global values and as a function of radius. As expected, the majority of galaxies in this sample have blue colors and modest star formation activity. While the majority of galaxies have negligible color gradients in their stellar disks, a handful of galaxies have strong red color gradients (greater than 0.4 mag/kpc in U-B). These galaxies tend to be smaller and more compact than other galaxies in the sample with comparable luminosities and are likely starbursting or post-burst systems.

  10. Study of the Utah uranium milling industry. Volume I. A policy analysis

    SciTech Connect

    Turley, R.E.

    1981-01-01

    Volume I is an analysis of the major problems raised by milling operators - primarily the issue of whether the federal government or the state should be responsible for the perpetual surveillance, monitoring, and maintenance of uranium tailings. (DMC)

  11. Method for Determining Language Objectives and Criteria. Volume II. Methodological Tools: Computer Analysis, Data Collection Instruments.

    DTIC Science & Technology

    1979-05-25

    This volume presents (1) methods for computer and hand analysis of numerical language performance data (with examples), and (2) samples of interview, observation, and survey instruments used in collecting language data. (Author)

  12. Corpus Callosum Area and Brain Volume in Autism Spectrum Disorder: Quantitative Analysis of Structural MRI from the ABIDE Database

    ERIC Educational Resources Information Center

    Kucharsky Hiess, R.; Alter, R.; Sojoudi, S.; Ardekani, B. A.; Kuzniecky, R.; Pardoe, H. R.

    2015-01-01

    Reduced corpus callosum area and increased brain volume are two commonly reported findings in autism spectrum disorder (ASD). We investigated these two correlates in ASD and healthy controls using T1-weighted MRI scans from the Autism Brain Imaging Data Exchange (ABIDE). Automated methods were used to segment the corpus callosum and intracranial…

  13. Interactive algorithms for the segmentation and quantitation of 3-D MRI brain scans.

    PubMed

    Freeborough, P A; Fox, N C; Kitney, R I

    1997-05-01

    Interactive algorithms are an attractive approach to the accurate segmentation of 3D brain scans as they potentially improve the reliability of fully automated segmentation while avoiding the labour intensiveness and inaccuracies of manual segmentation. We present a 3D image analysis package (MIDAS) with a novel architecture enabling highly interactive segmentation algorithms to be implemented as add-on modules. Interactive methods based on intensity thresholding, region growing and the constrained application of morphological operators are also presented. The methods couple constraints and freedoms on the algorithms with real-time visualisation of the effect. This methodology has been applied to the segmentation, visualisation and measurement of the whole brain and a small irregular neuroanatomical structure, the hippocampus. We demonstrate reproducible and anatomically accurate segmentations of these structures. The efficacy of one method in measuring volume loss (atrophy) of the hippocampus in Alzheimer's disease is shown and compared to conventional methods.

  14. Semantic segmentation of 3D textured meshes for urban scene analysis

    NASA Astrophysics Data System (ADS)

    Rouhani, Mohammad; Lafarge, Florent; Alliez, Pierre

    2017-01-01

    Classifying 3D measurement data has become a core problem in photogrammetry and 3D computer vision, since the rise of modern multiview geometry techniques, combined with affordable range sensors. We introduce a Markov Random Field-based approach for segmenting textured meshes generated via multi-view stereo into urban classes of interest. The input mesh is first partitioned into small clusters, referred to as superfacets, from which geometric and photometric features are computed. A random forest is then trained to predict the class of each superfacet as well as its similarity with the neighboring superfacets. Similarity is used to assign the weights of the Markov Random Field pairwise-potential and to account for contextual information between the classes. The experimental results illustrate the efficacy and accuracy of the proposed framework.

  15. Fractal Analysis of Laplacian Pyramidal Filters Applied to Segmentation of Soil Images

    PubMed Central

    de Castro, J.; Méndez, A.; Tarquis, A. M.

    2014-01-01

    The Laplacian pyramid is a well-known technique for image processing in which local operators of many scales, but identical shape, serve as the basis functions. The properties required of the pyramidal filter produce a family of filters which is uniparametrical in the classical case, when the length of the filter is 5. We pay attention to the Gaussian and fractal behaviour of these basis functions (or filters), and we determine the Gaussian and fractal ranges in the case of a single parameter a. These fractal filters lose less energy in every step of the Laplacian pyramid, and we apply this property to obtain threshold values for segmenting soil images and then evaluate their porosity. We also evaluate our results by comparing them with the Otsu algorithm threshold values, and conclude that our algorithm produces reliable results. PMID:25114957

  16. Support trusses for large precision segmented reflectors - Preliminary design and analysis

    NASA Technical Reports Server (NTRS)

    Collins, Timothy J.; Fichter, W. B.

    1989-01-01

    The Precision Segmented Reflector (PSR) primary structures plan is outlined. Geometries and design considerations for erectable and deployable reflector support structures are discussed. Support truss requirements and goals for the PSR are given, and the results of static and dynamic analyses of a prototype four meter diameter structure are presented. In addition, similar results are presented for two 20-meter diameter support trusses. Implications of the analyses for the PSR program are considered and the formulation and limitations of current PSR finite element models are discussed. It is shown that if the secondary optical system is supported by a simple tripod design, the first six vibration modes are likely to be dominated by the secondary system. The 20-meter diameter support trusses are found to be quite stiff for structures of such large size.

  17. Support trusses for large precision segmented reflectors: Preliminary design and analysis

    NASA Technical Reports Server (NTRS)

    Collins, Timothy J.; Fichter, W. B.

    1989-01-01

    Precision Segmented Reflector (PSR) technology is currently being developed for a range of future applications such as the Large Deployable Reflector. The structures activities at NASA-Langley are outlined in support of the PSR program. Design concepts are explored for erectable and deployable support structures which are envisioned to be the backbone of these precision reflectors. Important functional requirements for the support trusses related to stiffness, mass, and surface accuracy are reviewed. Proposed geometries for these structures and factors motivating the erectable and deployable designs are discussed. Analytical results related to stiffness, dynamic behavior, and surface accuracy are presented and considered in light of the functional requirements. Results are included for both a 4-meter-diameter prototype support truss which is currently being designed as the Test Bed for the PSR technology development program, and for two 20-meter support structures.

  18. The EM/MPM algorithm for segmentation of textured images: analysis and further experimental results.

    PubMed

    Comer, M L; Delp, E J

    2000-01-01

    In this paper we present new results relative to the "expectation-maximization/maximization of the posterior marginals" (EM/MPM) algorithm for simultaneous parameter estimation and segmentation of textured images. The EM/MPM algorithm uses a Markov random field model for the pixel class labels and alternately approximates the MPM estimate of the pixel class labels and estimates parameters of the observed image model. The goal of the EM/MPM algorithm is to minimize the expected value of the number of misclassified pixels. We present new theoretical results in this paper which show that the algorithm can be expected to achieve this goal, to the extent that the EM estimates of the model parameters are close to the true values of the model parameters. We also present new experimental results demonstrating the performance of the EM/MPM algorithm.

  19. The power-proportion method for intracranial volume correction in volumetric imaging analysis.

    PubMed

    Liu, Dawei; Johnson, Hans J; Long, Jeffrey D; Magnotta, Vincent A; Paulsen, Jane S

    2014-01-01

    In volumetric brain imaging analysis, volumes of brain structures are typically assumed to be proportional or linearly related to intracranial volume (ICV). However, evidence abounds that many brain structures have power law relationships with ICV. To take this relationship into account in volumetric imaging analysis, we propose a power law based method-the power-proportion method-for ICV correction. The performance of the new method is demonstrated using data from the PREDICT-HD study.
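
    The power-proportion correction amounts to fitting the exponent b of V = a·ICV^b in log-log space and then dividing each structure volume by ICV^b. A minimal NumPy sketch (the function names are illustrative, not from the PREDICT-HD pipeline):

```python
import numpy as np

def fit_power_law(volumes, icvs):
    """Estimate a and b in V = a * ICV**b via least squares in log-log space."""
    b, log_a = np.polyfit(np.log(icvs), np.log(volumes), 1)
    return np.exp(log_a), b

def power_proportion_correct(volumes, icvs, b):
    """Remove the ICV dependence by dividing out ICV**b."""
    return np.asarray(volumes) / np.asarray(icvs) ** b
```

    When b is close to 1 this reduces to the usual proportional (V/ICV) correction, which is why the power-law form generalizes rather than replaces it.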

  20. Minimally invasive procedure reduces adjacent segment degeneration and disease: New benefit-based global meta-analysis

    PubMed Central

    Li, Xiao-Chuan; Huang, Chun-Ming; Zhong, Cheng-Fan; Liang, Rong-Wei; Luo, Shao-Jian

    2017-01-01

    Objective Adjacent segment pathology (ASP) is a common complication presenting in patients with axial pain and dysfunction, requiring treatment or follow-up surgery. However, whether minimally invasive surgery (MIS), including MIS transforaminal/posterior lumbar interbody fusion (MIS-TLIF/PLIF), decreases the incidence rate of ASP remains unknown. The aim of this meta-analysis was to compare the incidence rate of ASP in patients undergoing MIS versus open procedures. Methods This systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Statement. We searched electronic databases, including PubMed, EMBASE, SinoMed, and the Cochrane Library, without language restrictions, to identify clinical trials comparing MIS to open procedures. The retrieved results were last updated on June 15, 2016. Results Overall, 9 trials comprising 770 patients were included; in terms of quality, 4 studies were of moderate and 5 of low quality. The pooled data analysis demonstrated low heterogeneity between the trials and a significantly lower ASP incidence rate in patients who underwent the MIS procedure compared with those who underwent the open procedure (p = 0.0001). Single-level lumbar interbody fusion was performed in 6 trials of 408 patients, and we found a lower ASP incidence rate in the MIS group compared with those who underwent open surgery (p = 0.002). Moreover, the pooled data analysis showed a significant reduction in the incidence rates of adjacent segment disease (ASDis) (p = 0.0003) and adjacent segment degeneration (ASDeg) (p = 0.0002), favoring the MIS procedure. Subgroup analyses showed no difference in follow-up durations between the procedures (p = 0.93). Conclusion Therefore, we conclude that MIS-TLIF/PLIF can reduce the incidence rate of ASDis and ASDeg, compared with open surgery. Although the subgroup analysis did not indicate a difference in follow-up duration between the two

  1. A link-segment model of upright human posture for analysis of head-trunk coordination

    NASA Technical Reports Server (NTRS)

    Nicholas, S. C.; Doxey-Gasway, D. D.; Paloski, W. H.

    1998-01-01

    Sensory-motor control of upright human posture may be organized in a top-down fashion such that certain head-trunk coordination strategies are employed to optimize visual and/or vestibular sensory inputs. Previous quantitative models of the biomechanics of human posture control have examined the simple case of ankle sway strategy, in which an inverted pendulum model is used, and the somewhat more complicated case of hip sway strategy, in which multisegment, articulated models are used. While these models can be used to quantify the gross dynamics of posture control, they are not sufficiently detailed to analyze head-trunk coordination strategies that may be crucial to understanding its underlying mechanisms. In this paper, we present a biomechanical model of upright human posture that extends an existing four mass, sagittal plane, link-segment model to a five mass model including an independent head link. The new model was developed to analyze segmental body movements during dynamic posturography experiments in order to study head-trunk coordination strategies and their influence on sensory inputs to balance control. It was designed specifically to analyze data collected on the EquiTest (NeuroCom International, Clackamas, OR) computerized dynamic posturography system, where the task of maintaining postural equilibrium may be challenged under conditions in which the visual surround, support surface, or both are in motion. The performance of the model was tested by comparing its estimated ground reaction forces to those measured directly by support surface force transducers. We conclude that this model will be a valuable analytical tool in the search for mechanisms of balance control.
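
    The validation step described here compares the model's predicted ground reaction force with force-plate measurements. A minimal sketch of that prediction for the vertical component via Newton's second law, summed over segment masses and centre-of-mass accelerations (the five segment masses in the test are hypothetical, not the model's anthropometric values):

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def vertical_ground_reaction(masses, com_accelerations):
    """Vertical GRF of a link-segment model: sum over segments of m * (a_z + g).

    masses            -- per-segment masses (kg)
    com_accelerations -- per-segment vertical COM accelerations (m/s^2, up positive)
    """
    masses = np.asarray(masses, dtype=float)
    az = np.asarray(com_accelerations, dtype=float)
    return np.sum(masses * (az + G))
```

    In quiet standing (all accelerations near zero) the prediction reduces to body weight, which is the sanity check before comparing dynamic trials against the EquiTest force transducers.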

  2. Purification of cone outer segment for proteomic analysis on its membrane proteins in carp retina

    PubMed Central

    Fukagawa, Takashi; Takafuji, Kazuaki; Tachibanaki, Shuji

    2017-01-01

    Rods and cones are both photoreceptors in the retina, but they differ in many aspects, including their light response characteristics and, for example, cell morphology and metabolism. These differences presumably arise from differences in the proteins expressed in rods and cones. One way to understand the molecular bases of these differences is to compare the proteins expressed in rods and cones and to find those expressed specifically or dominantly in one cell type. In the present study, we are interested in proteins in the outer segment (OS), the site responsible for generation of rod- or cone-characteristic light responses and also the site showing different morphology between rods and cones. For this, we established a method to purify the OS and the inner segment (IS) of rods and of cones from purified carp rods and cones, respectively, using a sucrose density gradient. In particular, we were interested in proteins tightly bound to the membranes of the cone OS. To identify these proteins, we analyzed proteins in selected regions of an SDS-gel of washed membranes of the OS and the IS obtained from both rods and cones with Liquid Chromatography-tandem Mass Spectrometry (LC-MS/MS), using a protein database constructed from carp retina. By comparing the lists of the proteins found in the OS and the IS of both rods and cones, we found some proteins present specifically or dominantly in cone OS membranes, in addition to the proteins already known to be present specifically in the cone OS. PMID:28291804

  3. Segmental hair analysis for 11-nor-Δ⁹-tetrahydrocannabinol-9-carboxylic acid and the patterns of cannabis use.

    PubMed

    Han, Eunyoung; Chung, Heesun; Song, Joon Myong

    2012-04-01

    Cannabis is the most widely abused drug in the world. The purpose of this study is to detect 11-nor-9-carboxy-Δ⁹-tetrahydrocannabinol (THCCOOH) in segmental hair and to evaluate the patterns of cannabis use. We investigated the relationship between the concentrations of THCCOOH in hair and the self-reported use data and the route of administration. For this purpose, the hair samples were washed, digested with 1 mL of 1 M NaOH at 85°C for 30 min along with the internal standard, THCCOOH-d₃ (2.5 pg/mg) and extracted in 2 mL of n-hexane-ethyl acetate (9:1) twice after adding 1 mL of 0.1N sodium acetate buffer (pH = 4.5) and 200 µL of acetic acid. The organic extract was transferred and evaporated and the mixture was derivatized with 50 µL of pentafluoropropionic anhydride and 25 µL of pentafluoropropanol for 30 min at 70°C. Reconstituted final extract was injected into the gas chromatography-tandem mass spectrometer operating in the negative chemical ionization mode. In segmental hair analysis, the concentrations of THCCOOH decreased from the proximal to distal segments. The concentrations of THCCOOH in hair and the self-reported dose and frequency of administration from cannabis users were not well correlated because of the low accuracy and reliability of the self-reported data. However, this study provides preliminary information on the dose and frequency of administration among cannabis users in our country.

  4. Analysis of iris structure and iridocorneal angle parameters with anterior segment optical coherence tomography in Fuchs' uveitis syndrome.

    PubMed

    Basarir, Berna; Altan, Cigdem; Pinarci, Eylem Yaman; Celik, Ugur; Satana, Banu; Demirok, Ahmet

    2013-06-01

    To evaluate the differences in the biometric parameters of the iridocorneal angle and iris structure measured by anterior segment optical coherence tomography (AS-OCT) in Fuchs' uveitis syndrome (FUS). Seventy-six eyes of 38 consecutive patients with a diagnosis of unilateral FUS were recruited into this prospective, cross-sectional, comparative study. After a complete ocular examination, anterior segment biometric parameters were measured by Visante(®) AS-OCT. All parameters were compared statistically between the two eyes of each patient. The mean age of the 38 subjects was 32.5 ± 7.5 years (18 female, 20 male). The mean visual acuity was lower in eyes with FUS (0.55 ± 0.31) than in healthy eyes (0.93 ± 0.17). Central corneal thickness did not differ significantly between eyes. All iridocorneal angle parameters (angle-opening distance 500 and 750, scleral spur angle, trabecular-iris space (TISA) 500 and 750) except TISA 500 in the temporal quadrant were significantly larger in eyes with FUS than in healthy eyes. The anterior chamber was deeper in eyes with FUS than in the unaffected eyes. With regard to iris measurements, iris thickness at the thickest part, iris bowing, and iris shape all differed statistically between the affected and healthy eyes of individual patients with FUS. However, no statistically significant differences were evident in iris thickness at 500 μm, thickness in the middle, or iris length. There was also a significant difference in iris shape between the two eyes of patients with glaucoma. AS-OCT as an imaging method provides many informative results in the analysis of anterior segment parameters in FUS.

  5. Tandem configurations of variably duplicated segments of 22q11.2 confirmed by fiber-FISH analysis.

    PubMed

    Shimojima, Keiko; Okamoto, Nobuhiko; Inazu, Tetsuya; Yamamoto, Toshiyuki

    2011-11-01

    22q11.2 duplication syndrome has recently been established as a new syndrome manifesting broad clinical phenotypes including mental retardation. It is reciprocal to DiGeorge (DGS)/velo-cardio-facial syndrome (VCFS), in which the same portion of the chromosome is hemizygously deleted. Deletions and duplications of the 22q11.2 region are facilitated by the low-copy repeats (LCRs) flanking this region. In this study, we aimed to identify the directions of the duplicated segments of 22q11.2 to better understand the mechanism of chromosomal duplication. To achieve this aim, we accumulated samples from four patients with 22q11.2 duplications. One of the patients had an atypically small (741 kb) duplication of 22q11.2. The centromeric end of the breakpoint was on LCR22A, but the telomeric end was between LCR22A and B. Therefore, the duplicated segment did not include T-box 1 gene (TBX1), the gene primarily responsible for the DGS/VCFS. As this duplication was shared by the patient's healthy mother, this appears to be a benign copy-number variation rather than a disease-causing alteration. The other three patients showed 3.0 or 4.0 Mb duplications flanked by LCRs. The directions of the duplicated segments were investigated by fiber-fluorescence in situ hybridization analysis. All samples showed tandem configurations. These results support the hypothesized mechanism of non-allelic homologous recombination with flanking LCRs and add additional evidence that many interstitial duplications are aligned as tandem configurations.

  6. Segment-interaction in sprint start: Analysis of 3D angular velocity and kinetic energy in elite sprinters.

    PubMed

    Slawinski, J; Bonnefoy, A; Ontanon, G; Leveque, J M; Miller, C; Riquet, A; Chèze, L; Dumas, R

    2010-05-28

    The aim of the present study was to measure the joint angular velocities and the kinetic energy of the different body segments of elite sprinters during the sprint start, using a 3D kinematic analysis of the whole body. Eight elite sprinters (100 m time 10.30 ± 0.14 s), equipped with 63 passive reflective markers, performed four maximal 10 m sprint starts on an indoor track. An opto-electronic Motion Analysis system consisting of 12 digital cameras (250 Hz) was used to collect the 3D marker trajectories. During the pushing phase on the blocks, the 3D angular velocity vector and its norm were calculated for each joint. The kinetic energy of 16 segments of the lower and upper limbs and of the total body was calculated. The 3D whole-body kinematic analysis demonstrated that joints such as the shoulders, thorax, and hips did not reach their maximal angular velocity through flexion-extension alone, but through a combination of flexion-extension, abduction-adduction, and internal-external rotation. The maximal kinetic energy of the total body was reached before block clearing (537 ± 59.3 J vs. 514.9 ± 66.0 J, respectively; p ≤ 0.01). These results suggest that better synchronization between the upper and lower limbs could increase the efficiency of the pushing phase on the blocks. Moreover, to understand the low interindividual variance in sprint-start performance among elite athletes, a complete 3D whole-body kinematic analysis should be used.
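    The kinetic energy summed over the 16 segments follows the standard rigid-body decomposition into a translational and a rotational term. A minimal sketch, with hypothetical segment parameters (the study's anthropometric values are not given in the abstract):

```python
import numpy as np

def segment_kinetic_energy(mass, v_com, inertia, omega):
    """Kinetic energy of one rigid body segment:
    translational 1/2*m*|v|^2 plus rotational 1/2*w^T*I*w."""
    translational = 0.5 * mass * np.dot(v_com, v_com)
    rotational = 0.5 * omega @ inertia @ omega
    return translational + rotational

# Hypothetical thigh segment during the block push
mass = 10.0                            # kg
v_com = np.array([2.0, 0.5, 0.1])      # m/s, segment COM velocity
inertia = np.diag([0.15, 0.15, 0.03])  # kg*m^2, inertia tensor about the COM
omega = np.array([1.0, 4.0, 0.5])      # rad/s, 3D angular velocity vector

ke = segment_kinetic_energy(mass, v_com, inertia, omega)
```

    Total-body kinetic energy is then the sum of this quantity over all 16 segments.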

  7. Virtual Mastoidectomy Performance Evaluation through Multi-Volume Analysis

    PubMed Central

    Kerwin, Thomas; Stredney, Don; Wiet, Gregory; Shen, Han-Wei

    2012-01-01

    Purpose Development of a visualization system that provides surgical instructors with a method to compare the results of many virtual surgeries (n > 100). Methods A masked distance field models the overlap between expert and resident results. Multiple volume displays are used side-by-side with a 2D point display. Results Performance characteristics were examined by comparing the results of specific residents with those of experts and the entire class. Conclusions The software provides a promising approach for comparing performance between large groups of residents learning mastoidectomy techniques. PMID:22528058

  8. Glacier volume estimation of Cascade Volcanoes—an analysis and comparison with other methods

    USGS Publications Warehouse

    Driedger, Carolyn L.; Kennard, P.M.

    1986-01-01

    During the 1980 eruption of Mount St. Helens, the occurrence of floods and mudflows made apparent a need to assess mudflow hazards on other Cascade volcanoes. A basic requirement for such analysis is information about the volume and distribution of snow and ice on these volcanoes. An analysis was made of the volume-estimation methods developed by previous authors and a volume estimation method was developed for use in the Cascade Range. A radio echo-sounder, carried in a backpack, was used to make point measurements of ice thickness on major glaciers of four Cascade volcanoes (Mount Rainier, Washington; Mount Hood and the Three Sisters, Oregon; and Mount Shasta, California). These data were used to generate ice-thickness maps and bedrock topographic maps for developing and testing volume-estimation methods. Subsequently, the methods were applied to the unmeasured glaciers on those mountains and, as a test of the geographical extent of applicability, to glaciers beyond the Cascades having measured volumes. Two empirical relationships were required in order to predict volumes for all the glaciers. Generally, for glaciers less than 2.6 km in length, volume was found to be estimated best by using glacier area, raised to a power. For longer glaciers, volume was found to be estimated best by using a power law relationship, including slope and shear stress. The necessary variables can be estimated from topographic maps and aerial photographs.
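    The short-glacier relationship above, volume estimated from glacier area raised to a power, can be sketched as follows. The coefficients and units are illustrative placeholders, not the values fitted by Driedger and Kennard:

```python
def glacier_volume_short(area_km2, c=3.93, gamma=1.124):
    """Area-based power-law volume estimate, V = c * A**gamma, for
    glaciers shorter than ~2.6 km. c and gamma are hypothetical
    coefficients, not the published Driedger-Kennard values."""
    return c * area_km2 ** gamma

v1 = glacier_volume_short(1.0)  # estimate for a 1 km^2 glacier (arbitrary units)
v2 = glacier_volume_short(2.0)  # larger area -> larger estimated volume
```

    For glaciers longer than 2.6 km, the study instead uses a power law involving slope and basal shear stress, both of which can be estimated from topographic maps.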

  9. CADDIS Volume 4. Data Analysis: Basic Principles & Issues

    EPA Pesticide Factsheets

    Use of inferential statistics in causal analysis, introduction to data independence and autocorrelation, methods to identify and control for confounding variables, and references for the Basic Principles section of Data Analysis.

  10. SUPERSONIC TRANSPORT DEVELOPMENT AND PRODUCTION. VOLUME I. COST ANALYSIS PROGRAM.

    DTIC Science & Technology

    SUPERSONIC AIRCRAFT; COSTS; AIRCRAFT INDUSTRY; INDUSTRIAL PRODUCTION; MANAGEMENT ENGINEERING; AIRFRAMES; ECONOMICS; COMPUTER PROGRAMS; STATISTICAL ANALYSIS; MONEY; AIRCRAFT ENGINES; FEASIBILITY STUDIES

  11. A spherical harmonics intensity model for 3D segmentation and 3D shape analysis of heterochromatin foci.

    PubMed

    Eck, Simon; Wörz, Stefan; Müller-Ott, Katharina; Hahn, Matthias; Biesdorf, Andreas; Schotta, Gunnar; Rippe, Karsten; Rohr, Karl

    2016-08-01

    The genome is partitioned into regions of euchromatin and heterochromatin. The organization of heterochromatin is important for the regulation of cellular processes such as chromosome segregation and gene silencing, and their misregulation is linked to cancer and other diseases. We present a model-based approach for automatic 3D segmentation and 3D shape analysis of heterochromatin foci from 3D confocal light microscopy images. Our approach employs a novel 3D intensity model based on spherical harmonics, which analytically describes the shape and intensities of the foci. The model parameters are determined by fitting the model to the image intensities using least-squares minimization. To characterize the 3D shape of the foci, we exploit the computed spherical harmonics coefficients and determine a shape descriptor. We applied our approach to 3D synthetic image data as well as real 3D static and real 3D time-lapse microscopy images, and compared the performance with that of previous approaches. It turned out that our approach yields accurate 3D segmentation results and performs better than previous approaches. We also show that our approach can be used for quantifying 3D shape differences of heterochromatin foci.
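    A common way to turn spherical harmonics coefficients into a rotation-invariant shape descriptor is a per-degree energy sum. The abstract does not specify the exact descriptor, so both the descriptor choice and the coefficient indexing convention below are assumptions for illustration:

```python
import numpy as np

def sh_energy_descriptor(coeffs, l_max):
    """Rotation-invariant descriptor from spherical harmonics
    coefficients: per-degree energy E_l = sum_m |c_lm|^2.
    coeffs[l][m + l] is assumed to hold c_lm for degree l,
    order m in [-l, l]."""
    return np.array([
        sum(abs(coeffs[l][m + l]) ** 2 for m in range(-l, l + 1))
        for l in range(l_max + 1)
    ])

# A perfectly spherical focus: all energy in the l = 0 (constant) term
coeffs = [[1.0], [0.0, 0.0, 0.0]]
desc = sh_energy_descriptor(coeffs, l_max=1)
```

    Because each E_l is unchanged by 3D rotations, foci can be compared by shape regardless of their orientation in the nucleus.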

  12. The genetic architecture of Down syndrome phenotypes revealed by high-resolution analysis of human segmental trisomies.

    PubMed

    Korbel, Jan O; Tirosh-Wagner, Tal; Urban, Alexander Eckehart; Chen, Xiao-Ning; Kasowski, Maya; Dai, Li; Grubert, Fabian; Erdman, Chandra; Gao, Michael C; Lange, Ken; Sobel, Eric M; Barlow, Gillian M; Aylsworth, Arthur S; Carpenter, Nancy J; Clark, Robin Dawn; Cohen, Monika Y; Doran, Eric; Falik-Zaccai, Tzipora; Lewin, Susan O; Lott, Ira T; McGillivray, Barbara C; Moeschler, John B; Pettenati, Mark J; Pueschel, Siegfried M; Rao, Kathleen W; Shaffer, Lisa G; Shohat, Mordechai; Van Riper, Alexander J; Warburton, Dorothy; Weissman, Sherman; Gerstein, Mark B; Snyder, Michael; Korenberg, Julie R

    2009-07-21

    Down syndrome (DS), or trisomy 21, is a common disorder associated with several complex clinical phenotypes. Although several hypotheses have been put forward, it is unclear as to whether particular gene loci on chromosome 21 (HSA21) are sufficient to cause DS and its associated features. Here we present a high-resolution genetic map of DS phenotypes based on an analysis of 30 subjects carrying rare segmental trisomies of various regions of HSA21. By using state-of-the-art genomics technologies we mapped segmental trisomies at exon-level resolution and identified discrete regions of 1.8-16.3 Mb likely to be involved in the development of 8 DS phenotypes, 4 of which are congenital malformations, including acute megakaryocytic leukemia, transient myeloproliferative disorder, Hirschsprung disease, duodenal stenosis, imperforate anus, severe mental retardation, DS-Alzheimer Disease, and DS-specific congenital heart disease (DSCHD). Our DS-phenotypic maps located DSCHD to a <2-Mb interval. Furthermore, the map enabled us to present evidence against the necessary involvement of other loci as well as specific hypotheses that have been put forward in relation to the etiology of DS-i.e., the presence of a single DS consensus region and the sufficiency of DSCR1 and DYRK1A, or APP, in causing several severe DS phenotypes. Our study demonstrates the value of combining advanced genomics with cohorts of rare patients for studying DS, a prototype for the role of copy-number variation in complex disease.

  13. Reliability and reproducibility of macular segmentation using a custom-built optical coherence tomography retinal image analysis software

    NASA Astrophysics Data System (ADS)

    Cabrera Debuc, Delia; Somfai, Gábor Márk; Ranganathan, Sudarshan; Tátrai, Erika; Ferencz, Mária; Puliafito, Carmen A.

    2009-11-01

    We determine the reliability and reproducibility of retinal thickness measurements with a custom-built OCT retinal image analysis software (OCTRIMA). Ten eyes of five healthy subjects underwent repeated standard macular thickness map scan sessions by two experienced examiners using a Stratus OCT device. Automatic/semi-automatic thickness quantification of the macula and intraretinal layers was performed using the OCTRIMA software. Intraobserver, interobserver, and intervisit repeatability and reproducibility coefficients, and intraclass correlation coefficients (ICCs) per scan, were calculated. Intraobserver, interobserver, and intervisit variability combined account for less than 5% of total variability for total retinal thickness measurements and less than 7% for the intraretinal layers, except the outer segment/retinal pigment epithelium (RPE) junction. There was no significant difference between scans acquired by different observers or during different visits. The ICCs obtained for the intraobserver and intervisit variability tests were greater than 0.75 for the total retina and all intraretinal layers, except for the inner nuclear layer (intraobserver and interobserver tests) and the outer plexiform layer (intraobserver, interobserver, and intervisit tests). Our results indicate that thickness measurements for the total retina and all intraretinal layers (except the outer segment/RPE junction) performed using OCTRIMA are highly repeatable and reproducible.
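    The intraclass correlation coefficients reported above can be computed from a standard two-way ANOVA decomposition. This is a generic ICC(2,1) sketch (two-way random effects, absolute agreement, single measurement), not OCTRIMA code, and the example thickness values are hypothetical:

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1) from an (n subjects x k observers) matrix."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
    msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # observers
    sse = ((x - grand) ** 2).sum() - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# two observers measuring retinal thickness (micrometers) in five eyes
scans = [[250, 252], [261, 259], [270, 271], [240, 241], [255, 254]]
icc = icc_2_1(scans)  # close to 1: high interobserver reproducibility
```

    With perfect agreement between observers the statistic equals 1; values above 0.75 are conventionally read as good reliability, matching the threshold used in the abstract.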

  14. Practical considerations for the segmented-flow analysis of nitrate and ammonium in seawater and the avoidance of matrix effects

    NASA Astrophysics Data System (ADS)

    Rho, Tae Keun; Coverly, Stephen; Kim, Eun-Soo; Kang, Dong-Jin; Kahng, Sung-Hyun; Na, Tae-Hee; Cho, Sung-Rok; Lee, Jung-Moo; Moon, Cho-Rong

    2015-12-01

    In this study we describe measures taken in our laboratory to improve the long-term precision of nitrate and ammonia analysis in seawater using a microflow segmented-flow analyzer. To improve the nitrate reduction efficiency using a flow-through open tube cadmium reactor (OTCR), we compared alternative buffer formulations and regeneration procedures for an OTCR. We improved long-term stability for nitrate with a modified flow scheme and color reagent formulation and for ammonia by isolating samples from the ambient air and purifying the air used for bubble segmentation. We demonstrate the importance of taking into consideration the residual nutrient content of the artificial seawater used for the preparation of calibration standards. We describe how an operating procedure to eliminate errors from that source as well as from the refractive index of the matrix itself can be modified to include the minimization of dynamic refractive index effects resulting from differences between the matrix of the samples, the calibrants, and the wash solution. We compare the data for long-term measurements of certified reference material under two different conditions, using ultrapure water (UPW) and artificial seawater (ASW) for the sampler wash.

  15. Analysis of slope slip surface case study landslide road segment Purwantoro-Nawangan/Bts Jatim Km 89+400

    NASA Astrophysics Data System (ADS)

    Sidik Purnomo, Joko; Muslih Purwana, Yusep; Silmi Surjandari, Niken

    2017-01-01

    Wonogiri is a region in the southeastern part of Central Java province, bordering East Java and Yogyakarta provinces. Physiographically it consists mostly of undulating hills, so landslides occur frequently, especially during the rainy season. One such landslide occurred on the road segment Purwantoro-Nawangan/Bts Jatim Km 89+400, which falls under the authority of the Highways Department of Central Java Province. Errors in slope stability analysis are often caused not by the assumed shape of the slip surface but by errors in determining the location of the critical slip surface. This study aims to find the shape and location of the landslide slip surface on the Purwantoro-Nawangan Km 89+400 segment through interpretation of soil test results. The method combines interpretation of CPT and borehole tests with slope modeling using the limit equilibrium method and the finite element method. Processing the slope contours in the landslide area yielded three cross sections, A-A, B-B, and C-C, which were modeled under both dry and wet conditions. The slip surface was found to be composite, at a depth of 1.5-2 m, with safety factors above 1.2 (stable) under dry conditions; under wet conditions, however, the slope failed, with a factor of safety < 0.44.
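    The dry-stable versus wet-failure contrast can be illustrated with a simplified infinite-slope limit-equilibrium calculation. This is a stand-in for the study's LEM/FEM models on the full cross sections, and the soil parameters below are hypothetical, not the measured CPT/borehole values:

```python
import math

def factor_of_safety(c_kpa, phi_deg, gamma, z, beta_deg,
                     gamma_w=9.81, saturated=False):
    """Infinite-slope factor of safety: resisting shear strength
    (cohesion + frictional strength on the slip plane) divided by
    the driving shear stress. Units: kPa, kN/m^3, m, degrees."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    # pore pressure for full saturation with seepage parallel to the slope
    u = gamma_w * z * math.cos(beta) ** 2 if saturated else 0.0
    resisting = c_kpa + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

fs_dry = factor_of_safety(c_kpa=8.0, phi_deg=28.0, gamma=18.0,
                          z=2.0, beta_deg=30.0)
fs_wet = factor_of_safety(c_kpa=8.0, phi_deg=28.0, gamma=18.0,
                          z=2.0, beta_deg=30.0, saturated=True)
# saturation sharply reduces the factor of safety
```

    Even with these illustrative parameters, saturating the slope drops the factor of safety from above the 1.2 stability threshold to below 1, reproducing the qualitative dry/wet behavior described above.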

  16. Ventriculogram segmentation using boosted decision trees

    NASA Astrophysics Data System (ADS)

    McDonald, John A.; Sheehan, Flo