Science.gov

Sample records for volume segmentation analysis

  1. Economic Analysis. Volume V. Course Segments 65-79.

    ERIC Educational Resources Information Center

    Sterling Inst., Washington, DC. Educational Technology Center.

    The fifth volume of the multimedia, individualized course in economic analysis produced for the United States Naval Academy covers segments 65-79 of the course. Included in the volume are discussions of monopoly markets, monopolistic competition, oligopoly markets, and the theory of factor demand and supply. Other segments of the course, the…

  2. Automated segmentation and dose-volume analysis with DICOMautomaton

    NASA Astrophysics Data System (ADS)

    Clark, H.; Thomas, S.; Moiseenko, V.; Lee, R.; Gill, B.; Duzenli, C.; Wu, J.

    2014-03-01

    Purpose: Exploration of historical data for regional organ dose sensitivity is limited by the effort needed to (sub-)segment large numbers of contours. A system has been developed which can rapidly perform autonomous contour sub-segmentation and generic dose-volume computations, substantially reducing the effort required for exploratory analyses. Methods: A contour-centric approach is taken which enables lossless, reversible segmentation and dramatically reduces computation time compared with voxel-centric approaches. Segmentation can be specified on a per-contour, per-organ, or per-patient basis, and can be performed along either an embedded plane or in terms of the contour's bounds (e.g., split organ into fractional-volume/dose pieces along any 3D unit vector). More complex segmentation techniques are available. Anonymized data from 60 head-and-neck cancer patients were used to compare dose-volume computations with Varian's Eclipse™ (Varian Medical Systems, Inc.). Results: Computed mean doses and dose-volume histograms agree strongly with those from Varian's Eclipse™. Contours which have been segmented can be injected back into patient data permanently and in a Digital Imaging and Communications in Medicine (DICOM)-conforming manner. Lossless segmentation persists across such injection, and remains fully reversible. Conclusions: DICOMautomaton allows researchers to rapidly, accurately, and autonomously segment large amounts of data into intricate structures suitable for analyses of regional organ dose sensitivity.
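The plane-based sub-segmentation described above can be sketched with a simple vertex-side test (the function name and details below are hypothetical, not DICOMautomaton's API; a truly lossless splitter would also insert the edge/plane intersection points so both halves close exactly):

```python
import numpy as np

def split_contour_by_plane(points, plane_point, plane_normal):
    # Signed distance of each contour vertex to the plane; partition
    # vertices into the positive and negative half-spaces.
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    d = (np.asarray(points, dtype=float) - plane_point) @ n
    return points[d >= 0], points[d < 0]

# Split a unit-square contour along the plane x = 0.5
square = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
above, below = split_contour_by_plane(square, [0.5, 0.0, 0.0], [1.0, 0.0, 0.0])
```

Splitting along an arbitrary 3D unit vector, as in the abstract, amounts to sweeping such a plane along that vector until each piece holds the desired fractional volume.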

  3. Direct volume estimation without segmentation

    NASA Astrophysics Data System (ADS)

    Zhen, X.; Wang, Z.; Islam, A.; Bhaduri, M.; Chan, I.; Li, S.

    2015-03-01

    Volume estimation plays an important role in clinical diagnosis. For example, cardiac ventricular volumes, including the left ventricle (LV) and right ventricle (RV), are important clinical indicators of cardiac function. Accurate and automatic estimation of the ventricular volumes is essential to the assessment of cardiac function and the diagnosis of heart disease. Conventional methods depend on an intermediate segmentation step, obtained either manually or automatically. However, manual segmentation is extremely time-consuming, subjective and highly non-reproducible, while automatic segmentation is still challenging, computationally expensive, and completely unsolved for the RV. Towards accurate and efficient direct volume estimation, our group has been investigating segmentation-free, learning-based methods that leverage state-of-the-art machine learning techniques. Our direct estimation methods remove the intermediate segmentation step and can naturally handle a variety of volume estimation tasks. Moreover, they are flexible enough to be used for volume estimation of either the joint bi-ventricles (LV and RV) or the individual LV/RV. We comparatively study the performance of direct methods on cardiac ventricular volume estimation against segmentation-based methods. Experimental results show that direct estimation methods provide more accurate estimates of cardiac ventricular volumes than segmentation-based methods. This indicates that direct estimation methods not only provide a convenient and mature clinical tool for cardiac volume estimation but also enable the diagnosis of cardiac disease to be conducted in a more efficient and reliable way.
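The direct-estimation idea, learning a mapping from image-derived features straight to a volume with no segmentation step, can be sketched as a ridge regression on synthetic data (everything here is illustrative; the actual methods use far richer image features and learners than a linear model):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for image-derived feature vectors (e.g., intensity
# statistics per scan) and reference ventricular volumes.
X = rng.normal(size=(100, 8))
true_w = rng.normal(size=8)
y = X @ true_w + 0.01 * rng.normal(size=100)  # "volumes" with small noise

# Closed-form ridge regression: w = (X^T X + lam I)^-1 X^T y
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(8), X.T @ y)
pred = X @ w
err = np.abs(pred - y).mean()  # mean absolute "volume" error
```

The appeal is exactly what the abstract claims: once trained, prediction is a single feature extraction plus regression, with no per-image contouring.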

  4. Automated localization and segmentation of lung tumor from PET-CT thorax volumes based on image feature analysis.

    PubMed

    Cui, Hui; Wang, Xiuying; Feng, Dagan

    2012-01-01

    Positron emission tomography-computed tomography (PET-CT) plays an essential role in early tumor detection, diagnosis, staging and treatment. Automated, accurate lung tumor detection and delineation from PET-CT remains challenging. In this paper, our method first localizes the lung tumor automatically, on the basis of a quantitative analysis of the contrast features of the PET volume in terms of standardized uptake value (SUV). Then, by analysing the CT features surrounding the initial tumor definition, a decision strategy determines whether the tumor is segmented from CT or from PET. The algorithm has been validated on 20 PET-CT studies involving non-small cell lung cancer (NSCLC). Experimental results demonstrated that our method was able to segment the tumor even when it was adjacent to the mediastinum or chest wall, and that the algorithm outperformed five other lung segmentation methods in terms of overlapping measure. PMID:23367146
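A minimal sketch of SUV-based localization (the fixed 2.5 cutoff and the function name are assumptions for illustration; the paper's contrast-based criterion is more elaborate than a plain threshold):

```python
import numpy as np

def localize_hot_spot(suv_volume, threshold=2.5):
    # Naive localization: mask every voxel whose SUV exceeds the cutoff
    # (2.5 is a commonly quoted clinical value) and return the mask
    # together with the centroid of the supra-threshold voxels.
    mask = suv_volume > threshold
    centroid = np.argwhere(mask).mean(axis=0)
    return mask, centroid

vol = np.zeros((16, 16, 16))
vol[8:11, 8:11, 8:11] = 6.0  # synthetic hot "lesion"
mask, centroid = localize_hot_spot(vol)
```

In a real pipeline the mask would seed the subsequent PET-vs-CT decision step rather than serve as the final delineation.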

  5. NSEG, a segmented mission analysis program for low and high speed aircraft. Volume 1: Theoretical development

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Rozendaal, H. L.

    1977-01-01

    A rapid mission analysis code based on the use of approximate flight path equations of motion is presented. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics are specified in tabular form. The code also contains extensive flight envelope performance mapping capabilities. Approximate take off and landing analyses are performed. At high speeds, centrifugal lift effects are accounted for. Extensive turbojet and ramjet engine scaling procedures are incorporated in the code.

  6. NSEG: A segmented mission analysis program for low and high speed aircraft. Volume 3: Demonstration problems

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Rozendaal, H. L.

    1977-01-01

    Program NSEG is a rapid mission analysis code based on the use of approximate flight path equations of motion. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics are specified in tabular form. In addition to its mission performance calculation capabilities, the code also contains extensive flight envelope performance mapping capabilities. For example, rate-of-climb, turn rates, and energy maneuverability parameter values may be mapped in the Mach-altitude plane. Approximate take off and landing analyses are also performed. At high speeds, centrifugal lift effects are accounted for. Extensive turbojet and ramjet engine scaling procedures are incorporated in the code.
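As an example of the closed-form segment equations a mission code like this chains together, a jet cruise segment can be evaluated with the classical Breguet range equation (a generic illustration, not NSEG's tabulated, segment-specific formulation):

```python
import math

def breguet_cruise_range(V, tsfc, L_over_D, W_initial, W_final):
    # Breguet range for a constant-speed, constant-L/D jet cruise:
    #   R = (V / c) * (L/D) * ln(W_initial / W_final)
    # with V in m/s, thrust-specific fuel consumption c in 1/s, and
    # weights as a ratio (units cancel). Result is in meters.
    return (V / tsfc) * L_over_D * math.log(W_initial / W_final)

# 230 m/s cruise, TSFC of 0.6/hr converted to per-second, L/D = 16,
# 5% of weight burned as fuel over the segment.
R = breguet_cruise_range(V=230.0, tsfc=0.6 / 3600.0, L_over_D=16.0,
                         W_initial=1.0, W_final=0.95)  # ~1.1e6 m
```

A segmented mission analysis strings together many such segment solutions, carrying the end weight of one segment into the next.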

  7. Inter-sport variability of muscle volume distribution identified by segmental bioelectrical impedance analysis in four ball sports

    PubMed Central

    Yamada, Yosuke; Masuo, Yoshihisa; Nakamura, Eitaro; Oda, Shingo

    2013-01-01

    The aim of this study was to evaluate and quantify differences in muscle distribution in athletes of various ball sports using segmental bioelectrical impedance analysis (SBIA). Participants were 115 male collegiate athletes from four ball sports (baseball, soccer, tennis, and lacrosse). Percent body fat (%BF) and lean body mass were measured, and SBIA was used to measure segmental muscle volume (MV) in bilateral upper arms, forearms, thighs, and lower legs. We calculated the MV ratios of dominant to nondominant, proximal to distal, and upper to lower limbs. The measurements consisted of a total of 31 variables. Cluster and factor analyses were applied to identify redundant variables. The muscle distribution was significantly different among groups, but the %BF was not. The classification procedures of the discriminant analysis could correctly distinguish 84.3% of the athletes. These results suggest that collegiate ball game athletes have adapted their physique to their sport movements very well, and the SBIA, which is an affordable, noninvasive, easy-to-operate, and fast alternative method in the field, can distinguish ball game athletes according to their specific muscle distribution within a 5-minute measurement. The SBIA could be a useful, affordable, and fast tool for identifying talents for specific sports. PMID:24379714

  8. Early Expansion of the Intracranial CSF Volume After Palliative Whole-Brain Radiotherapy: Results of a Longitudinal CT Segmentation Analysis

    SciTech Connect

    Sanghera, Paul; Gardner, Sandra L.; Scora, Daryl; Davey, Phillip

    2010-03-15

    Purpose: To assess cerebral atrophy after radiotherapy, we measured intracranial cerebrospinal fluid volume (ICSFV) over time after whole-brain radiotherapy (WBRT) and compared it with published normal-population data. Methods and Materials: We identified 9 patients receiving a single course of WBRT (30 Gy in 10 fractions over 2 weeks) for ipsilateral brain metastases with at least 3 years of computed tomography follow-up. Segmentation analysis was confined to the tumor-free hemi-cranium. The technique was semiautomated by use of thresholds based on scanned image intensity. The ICSFV percentage (ratio of ICSFV to brain volume) was used for modeling purposes. Published normal-population ICSFV percentages as a function of age were used as a control. A repeated-measures model with cross-sectional (between individuals) and longitudinal (within individuals) quadratic components was fitted to the collected data. The influence of clinical factors including the use of subependymal plate shielding was studied. Results: The median imaging follow-up was 6.25 years. There was an immediate increase (p < 0.0001) in ICSFV percentage, which decelerated over time. The clinical factors studied had no significant effect on the model. Conclusions: WBRT immediately accelerates the rate of brain atrophy. This longitudinal study in patients with brain metastases provides a baseline against which the potential benefits of more localized radiotherapeutic techniques such as radiosurgery may be compared.

  9. Parallel Mean Shift for Interactive Volume Segmentation

    NASA Astrophysics Data System (ADS)

    Zhou, Fangfang; Zhao, Ying; Ma, Kwan-Liu

    In this paper we present a parallel dynamic mean shift algorithm based on path transmission for medical volume data segmentation. The algorithm first translates the volume data into a joint position-color feature space subdivided uniformly by bandwidths, and then clusters points in the feature space in parallel by iteratively finding each point's peak. Over the iterations it improves the convergence rate by dynamically updating data points via path transmission, and reduces the number of data points by collapsing overlapping points into one. The GPU implementation of the algorithm can segment a 256×256×256 volume in 6 seconds using an NVIDIA GeForce 8800 GTX card for interactive processing, hundreds of times faster than its CPU counterpart. We also introduce an interactive interface for segmenting volume data based on this GPU implementation. This interface not only provides the user with the capability to specify the segmentation resolution, but also allows the user to operate on the segmented tissues and create the desired visualization results.
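The core mean shift update that the paper parallelizes can be sketched in one dimension with a flat kernel (the dynamic path transmission, point collapsing, and GPU mapping are omitted; this shows only the hill-climbing step each point performs):

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=50):
    # Each point repeatedly moves to the mean of the original data points
    # lying within the bandwidth of its current position, until it settles
    # at a local density peak (mode). Points sharing a mode form a cluster.
    shifted = points.astype(float).copy()
    for _ in range(iters):
        for i, p in enumerate(shifted):
            neighbours = points[np.abs(points - p) < bandwidth]
            shifted[i] = neighbours.mean()
    return shifted

data = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])  # two clear modes
modes = mean_shift(data, bandwidth=1.0)
```

In the paper's setting each "point" is a voxel embedded in a joint position-color feature space, and the per-point loop is what maps naturally onto GPU threads.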

  10. NSEG: A segmented mission analysis program for low and high speed aircraft. Volume 2: Program users manual

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Rozendaal, H. L.

    1977-01-01

    A rapid mission analysis code based on the use of approximate flight path equations of motion is described. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics are specified in tabular form. In addition to its mission performance calculation capabilities, the code also contains extensive flight envelope performance mapping capabilities. Approximate take off and landing analyses can be performed. At high speeds, centrifugal lift effects are taken into account. Extensive turbojet and ramjet engine scaling procedures are incorporated in the code.

  11. Automated volume analysis of head and neck lesions on CT scans using 3D level set segmentation

    SciTech Connect

    Street, Ethan; Hadjiiski, Lubomir; Sahiner, Berkman; Gujar, Sachin; Ibrahim, Mohannad; Mukherji, Suresh K.; Chan, Heang-Ping

    2007-11-15

    The authors have developed a semiautomatic system for segmentation of a diverse set of lesions in head and neck CT scans. The system takes as input an approximate bounding box, and uses a multistage level set to perform the final segmentation. A data set consisting of 69 lesions marked on 33 scans from 23 patients was used to evaluate the performance of the system. The contours from automatic segmentation were compared to both 2D and 3D gold standard contours manually drawn by three experienced radiologists. Three performance metrics were used for the comparison. In addition, a radiologist provided quality ratings on a 1 to 10 scale for all of the automatic segmentations. For this pilot study, the authors observed that the differences between the automatic and gold standard contours were larger than the interobserver differences. However, the system performed comparably to the radiologists, achieving an average area intersection ratio of 85.4% compared to an average of 91.2% between two radiologists. The average absolute area error was 21.1% compared to 10.8%, and the average 2D distance was 1.38 mm compared to 0.84 mm between the radiologists. In addition, the quality rating data showed that, despite the very lax assumptions made on the lesion characteristics in designing the system, the automatic contours approximated many of the lesions very well.

  12. Bioimpedance Measurement of Segmental Fluid Volumes and Hemodynamics

    NASA Technical Reports Server (NTRS)

    Montgomery, Leslie D.; Wu, Yi-Chang; Ku, Yu-Tsuan E.; Gerth, Wayne A.; DeVincenzi, D. (Technical Monitor)

    2000-01-01

    Bioimpedance has become a useful tool to measure changes in body fluid compartment volumes. An Electrical Impedance Spectroscopic (EIS) system is described that extends the capabilities of conventional fixed frequency impedance plethysmographic (IPG) methods to allow examination of the redistribution of fluids between the intracellular and extracellular compartments of body segments. The combination of EIS and IPG techniques was evaluated in the human calf, thigh, and torso segments of eight healthy men during 90 minutes of six degree head-down tilt (HDT). After 90 minutes HDT the calf and thigh segments significantly (P < 0.05) lost conductive volume (eight and four percent, respectively) while the torso significantly (P < 0.05) gained volume (approximately three percent). Hemodynamic responses calculated from pulsatile IPG data also showed a segmental pattern consistent with vascular fluid loss from the lower extremities and vascular engorgement in the torso. Lumped-parameter equivalent circuit analyses of EIS data for the calf and thigh indicated that the overall volume decreases in these segments arose from reduced extracellular volume that was not completely balanced by increased intracellular volume. The combined use of IPG and EIS techniques enables noninvasive tracking of multi-segment volumetric and hemodynamic responses to environmental and physiological stresses.
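Conventional IPG volume estimates rest on a single-cylinder model of a body segment, V = ρL²/Z, where ρ is an assumed tissue resistivity, L the segment length, and Z the measured impedance. A minimal sketch:

```python
def conductive_volume(rho_ohm_cm, length_cm, impedance_ohm):
    # Single-cylinder segment model: V = rho * L^2 / Z.
    # With rho in ohm*cm, L in cm, and Z in ohm, the result is in
    # cm^3 (= ml). rho is an assumed constant, not a measurement.
    return rho_ohm_cm * length_cm ** 2 / impedance_ohm

# Calf-like segment: rho ~ 100 ohm*cm, length 35 cm, impedance 40 ohm
v_ml = conductive_volume(100.0, 35.0, 40.0)
```

Because L and ρ are fixed for a given segment, a rise in Z directly signals a loss of conductive fluid volume, which is how the tilt-induced shifts above are tracked; the multi-frequency EIS analysis further splits that volume into intra- and extracellular compartments via an equivalent-circuit fit.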

  13. Interobserver variation in clinical target volume and organs at risk segmentation in post-parotidectomy radiotherapy: can segmentation protocols help?

    PubMed Central

    Mukesh, M; Benson, R; Jena, R; Hoole, A; Roques, T; Scrase, C; Martin, C; Whitfield, G A; Gemmill, J; Jefferies, S

    2012-01-01

    Objective: A study of interobserver variation in the segmentation of the post-operative clinical target volume (CTV) and organs at risk (OARs) for parotid tumours was undertaken. The segmentation exercise was performed as a baseline, and repeated after 3 months using a segmentation protocol to assess whether CTV conformity improved. Methods: Four head and neck oncologists independently segmented CTVs and OARs (contralateral parotid, spinal cord and brain stem) on CT data sets of five patients post parotidectomy. For each CTV or OAR delineation, the total volume was calculated. The conformity level (CL) between different clinicians' outlines was measured using a validated outline analysis tool. The data for CTVs were reanalysed after using the cochlear sparing therapy and conventional radiation segmentation protocol. Results: Significant differences in CTV morphology were observed at baseline, yielding a mean CL of 30% (range 25–39%). The CL improved after using the segmentation protocol, with a mean CL of 54% (range 50–65%). For OARs, the mean CL was 60% (range 53–68%) for the contralateral parotid gland, 23% (range 13–27%) for the brain stem and 25% (range 22–31%) for the spinal cord. Conclusions: There was low conformity for CTVs and OARs between different clinicians. The CL for CTVs improved with use of a segmentation protocol, but the CLs remained lower than expected. This study supports the need for clear guidelines for segmentation of target volumes and OARs to compare and interpret the results of head and neck cancer radiation studies. PMID:22815423
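One common definition of a conformity level is the volume common to all observers' contours divided by the volume enclosed by any of them, i.e. a generalized Jaccard index (whether the validated tool in this study used this form or a pairwise variant is an assumption of this sketch):

```python
import numpy as np

def conformity_level(masks):
    # CL = |intersection of all masks| / |union of all masks|.
    # For two observers this reduces to the ordinary Jaccard index.
    stack = np.stack([np.asarray(m, dtype=bool) for m in masks])
    intersection = stack.all(axis=0).sum()
    union = stack.any(axis=0).sum()
    return intersection / union

a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[3:9, 3:9] = True
cl = conformity_level([a, b])  # 25 shared voxels / 47 covered voxels
```

Because the intersection shrinks with every additional observer while the union grows, multi-observer CL values such as the 30% baseline above are naturally much lower than pairwise overlap figures.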

  14. Partial volume effect modeling for segmentation and tissue classification of brain magnetic resonance images: A review

    PubMed Central

    Tohka, Jussi

    2014-01-01

    Quantitative analysis of magnetic resonance (MR) brain images is facilitated by the development of automated segmentation algorithms. A single image voxel may contain several tissue types due to the finite spatial resolution of the imaging device. This phenomenon, termed the partial volume effect (PVE), complicates the segmentation process, and, due to the complexity of human brain anatomy, the PVE is an important factor for accurate brain structure quantification. Partial volume estimation refers to a generalized segmentation task in which the amount of each tissue type within each voxel is solved. This review aims to provide a systematic, tutorial-like overview and categorization of methods for partial volume estimation in brain MRI. The review concentrates on statistically based approaches for partial volume estimation and also explains their differences from other, similar image segmentation approaches. PMID:25431640
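The simplest mixel model behind partial volume estimation treats a voxel intensity as a convex combination of two tissue means, v = f·μ_a + (1 − f)·μ_b, so the tissue-A fraction follows directly; a noise-free sketch (the statistical estimators the review surveys generalize this with noise models and spatial priors):

```python
def pv_fraction(v, mu_a, mu_b):
    # Invert v = f*mu_a + (1 - f)*mu_b for the fraction of tissue A:
    #   f = (v - mu_b) / (mu_a - mu_b), clipped to the valid range [0, 1].
    f = (v - mu_b) / (mu_a - mu_b)
    return min(max(f, 0.0), 1.0)

# Illustrative tissue means: GM mean 100, WM mean 150.
# A voxel measuring 120 is then 60% GM under this model.
f_gm = pv_fraction(120.0, mu_a=100.0, mu_b=150.0)
```

Summing such per-voxel fractions over the brain is what turns partial volume estimation into a more accurate tissue volumetry than hard (one-label-per-voxel) segmentation.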

  15. Quantifying brain tissue volume in multiple sclerosis with automated lesion segmentation and filling

    PubMed Central

    Valverde, Sergi; Oliver, Arnau; Roura, Eloy; Pareto, Deborah; Vilanova, Joan C.; Ramió-Torrentà, Lluís; Sastre-Garriga, Jaume; Montalban, Xavier; Rovira, Àlex; Lladó, Xavier

    2015-01-01

    Lesion filling has been successfully applied to reduce the effect of hypo-intense T1-w multiple sclerosis (MS) lesions on automatic brain tissue segmentation. However, a study of fully automated pipelines incorporating lesion segmentation and lesion filling on tissue volume analysis has not yet been performed. Here, we analyzed the percentage of error introduced by automating the lesion segmentation and filling processes in the tissue segmentation of 70 clinically isolated syndrome patient images. First, images were processed using the LST and SLS toolkits with different pipeline combinations that differed in either automated or manual lesion segmentation, and lesion filling or masking out lesions. Then, images processed following each of the pipelines were segmented into gray matter (GM) and white matter (WM) using SPM8, and compared with the same images where expert lesion annotations were filled before segmentation. Our results showed that fully automated lesion segmentation and filling pipelines significantly reduced the percentage of error in GM and WM volume on images of MS patients, and performed similarly to images where expert lesion annotations were masked before segmentation. In all the pipelines, misclassified lesion voxels were the main cause of the observed error in GM and WM volume. However, the percentage of error was significantly lower when automatically estimated lesions were filled rather than masked before segmentation. These results suggest that the LST and SLS toolboxes allow accurate brain tissue volume measurements without any kind of manual intervention, which can be convenient not only in terms of time and economic costs, but also for avoiding the inherent intra-/inter-observer variability between manual annotations. PMID:26740917

  16. Reproducibility of intracranial volume measurement by unsupervised multispectral brain segmentation.

    PubMed

    Alfano, B; Quarantelli, M; Brunetti, A; Larobina, M; Covelli, E M; Tedeschi, E; Salvatore, M

    1998-03-01

    To assess the inter-study variability of a recently published unsupervised segmentation method (Magn. Reson. Med. 1997;37:84-93), 14 brain MR studies were performed in five normal subjects. Standard deviations for absolute and fractional volumes of intracranial compartments, which reflect the experimental variability, were smaller than 16.5 ml and 1.1%, respectively. By comparing the experimental component of the variability with the variability observed in our reference database, an estimate of the biological variability of the intracranial fractional volumes in the database population was obtained. PMID:9498607

  17. Performance benchmarking of liver CT image segmentation and volume estimation

    NASA Astrophysics Data System (ADS)

    Xiong, Wei; Zhou, Jiayin; Tian, Qi; Liu, Jimmy J.; Qi, Yingyi; Leow, Wee Kheng; Han, Thazin; Wang, Shih-chang

    2008-03-01

    In recent years, more and more computer-aided diagnosis (CAD) systems are being used routinely in hospitals. Image-based knowledge discovery plays an important role in many CAD applications, which have great potential to be integrated into next-generation picture archiving and communication systems (PACS). Robust medical image segmentation tools are essential for such discovery in many CAD applications. In this paper we present a platform with the necessary tools for performance benchmarking of algorithms for liver segmentation and volume estimation used in liver transplantation planning. It includes an abdominal computed tomography (CT) image database (DB), annotation tools, a ground truth DB, and performance measure protocols. The proposed architecture is generic and can be used for other organs and imaging modalities. In the current study, approximately 70 sets of abdominal CT images with normal livers have been collected, and a user-friendly annotation tool has been developed to generate ground truth data for a variety of organs, including 2D contours of the liver, the two kidneys, the spleen, the aorta and the spinal canal. Abdominal organ segmentation algorithms using 2D atlases and 3D probabilistic atlases can be evaluated on the platform. Preliminary benchmark results from liver segmentation algorithms which make use of statistical knowledge extracted from the abdominal CT image DB are also reported. We aim to increase the collection to about 300 CT sets in the near future and plan to make the resulting DBs available to the medical imaging research community for performance benchmarking of liver segmentation algorithms.

  18. Tooth segmentation system with intelligent editing for cephalometric analysis

    NASA Astrophysics Data System (ADS)

    Chen, Shoupu

    2015-03-01

    Cephalometric analysis is the study of the dental and skeletal relationship in the head, and it is used as an assessment and planning tool for improved orthodontic treatment of a patient. Conventional cephalometric analysis identifies bony and soft-tissue landmarks in 2D cephalometric radiographs, in order to diagnose facial features and abnormalities prior to treatment, or to evaluate the progress of treatment. Recent studies in orthodontics indicate that there are persistent inaccuracies and inconsistencies in the results provided using conventional 2D cephalometric analysis. Obviously, plane geometry is inappropriate for analyzing anatomical volumes and their growth; only a 3D analysis is able to analyze the three-dimensional, anatomical maxillofacial complex, which requires computing inertia systems for individual or groups of digitally segmented teeth from an image volume of a patient's head. For the study of 3D cephalometric analysis, the current paper proposes a system for semi-automatically segmenting teeth from a cone beam computed tomography (CBCT) volume with two distinct features, including an intelligent user-input interface for automatic background seed generation, and a graphics processing unit (GPU) acceleration mechanism for three-dimensional GrowCut volume segmentation. Results show a satisfying average DICE score of 0.92, with the use of the proposed tooth segmentation system, by 15 novice users who segmented a randomly sampled tooth set. The average GrowCut processing time is around one second per tooth, excluding user interaction time.

  19. Fully automated segmentation of oncological PET volumes using a combined multiscale and statistical model

    SciTech Connect

    Montgomery, David W. G.; Amira, Abbes; Zaidi, Habib

    2007-02-15

    The widespread application of positron emission tomography (PET) in clinical oncology has driven this imaging technology into a number of new research and clinical arenas. Increasing numbers of patient scans have led to an urgent need for efficient data handling and the development of new image analysis techniques to aid clinicians in the diagnosis of disease and planning of treatment. Automatic quantitative assessment of metabolic PET data is attractive and will certainly revolutionize the practice of functional imaging since it can lower variability across institutions and may enhance the consistency of image interpretation independent of reader experience. In this paper, a novel automated system for the segmentation of oncological PET data aiming at providing an accurate quantitative analysis tool is proposed. The initial step involves expectation maximization (EM)-based mixture modeling using a k-means clustering procedure, which varies voxel order for initialization. A multiscale Markov model is then used to refine this segmentation by modeling spatial correlations between neighboring image voxels. An experimental study using an anthropomorphic thorax phantom was conducted for quantitative evaluation of the performance of the proposed segmentation algorithm. The comparison of actual tumor volumes to the volumes calculated using different segmentation methodologies including standard k-means, spatial domain Markov Random Field Model (MRFM), and the new multiscale MRFM proposed in this paper showed that the latter dramatically reduces the relative error to less than 8% for small lesions (7 mm radii) and less than 3.5% for larger lesions (9 mm radii). The analysis of the resulting segmentations of clinical oncologic PET data seems to confirm that this methodology shows promise and can successfully segment patient lesions. For problematic images, this technique enables the identification of tumors situated very close to nearby high normal physiologic uptake. The use of this technique to estimate tumor volumes for assessment of response to therapy and to delineate treatment volumes for the purpose of combined PET/CT-based radiation therapy treatment planning is also discussed.
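The initial EM mixture-modeling stage can be sketched as a two-component 1D Gaussian mixture with crude k-means-style seeding (the multiscale Markov refinement is omitted, and all details here are illustrative rather than the paper's exact procedure):

```python
import numpy as np

def fit_gmm_1d(x, iters=50):
    # Two-component Gaussian mixture fitted by expectation maximization.
    # Crude k-means-like initialization: seed the means at the data extremes.
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: per-point responsibilities under each Gaussian.
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations.
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
    return pi, mu, sigma

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 0.3, 300), rng.normal(4.0, 0.3, 300)])
pi, mu, sigma = fit_gmm_1d(x)  # mu converges near the two class means
```

In the PET setting each sample is a voxel intensity and the fitted components model tissue classes (e.g., background vs. uptake); the Markov refinement then enforces spatial coherence that a per-voxel mixture ignores.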

  20. Active surface segmentation analysis of CCAT

    NASA Astrophysics Data System (ADS)

    Cortés-Medellín, Germán

    2006-06-01

    The Cornell Caltech Atacama Sub-millimeter Telescope (CCAT) is proposed to have a 25 m diameter segmented, active primary surface capable of diffraction-limited operation in the wavelength range between 200 microns and 1 mm. The active surface design layout is composed of 162 "pie-shaped" segments, each fitted with three actuators that provide piston and tilt/tip control for segment positioning and orientation. We present a performance analysis for five types of segment positioning errors: piston, tilt/tip, radial and azimuth displacements, and twist errors. Of these, only the first two, segment piston and tilt/tip errors, are directly controllable by the actuator system. Segment tilt/tip motions may indirectly compensate radial and azimuth segment positioning errors. Residual segment twists introduce quadratic phase error distributions across the face of the segments that cannot be compensated by a simple 3-actuator-per-segment active surface control system. We have obtained Ruze coefficients that relate the standard deviation of each segment positioning error type to the overall Strehl ratio of the telescope at 200 microns.
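The error budget such an analysis produces has the general Ruze form (a sketch of the standard relation; the per-error-type coefficients are the fitted quantities, and the effective RMS surface error and wavelength enter as shown):

```latex
% Strehl ratio S at wavelength \lambda for effective RMS surface error \epsilon:
S \;\approx\; \exp\!\left[-\left(\frac{4\pi\epsilon}{\lambda}\right)^{2}\right],
\qquad
\epsilon^{2} \;=\; \sum_{i} c_{i}\,\sigma_{i}^{2}
```

where σ_i is the standard deviation of each positioning-error type (piston, tilt/tip, radial, azimuth, twist) and c_i the corresponding Ruze coefficient; at λ = 200 microns even small residual errors are heavily penalized by the exponential.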

  21. Synthesis of intensity gradient and texture information for efficient three-dimensional segmentation of medical volumes

    PubMed Central

    Vantaram, Sreenath Rao; Saber, Eli; Dianat, Sohail A.; Hu, Yang

    2015-01-01

    We propose a framework that efficiently employs intensity, gradient, and textural features for three-dimensional (3-D) segmentation of medical (MRI/CT) volumes. Our methodology commences by determining the magnitude of intensity variations across the input volume using a 3-D gradient detection scheme. The resultant gradient volume is utilized in a dynamic volume growing/formation process that is initiated in voxel locations with small gradient magnitudes and concluded at sites with large gradient magnitudes, yielding a map comprising an initial set of partitions (or subvolumes). This partition map is combined with an entropy-based texture descriptor along with intensity and gradient attributes in a multivariate analysis-based volume merging procedure that fuses subvolumes with similar characteristics to yield a final/refined segmentation output. Additionally, a semiautomated version of the aforementioned algorithm that allows a user to interactively segment a desired subvolume of interest, as opposed to the entire volume, is also discussed. Our approach was tested on several MRI and CT datasets and the results show favorable performance in comparison to the state-of-the-art ITK-SNAP technique. PMID:26158098

  22. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging

    NASA Astrophysics Data System (ADS)

    Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart

    2015-02-01

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51 ± 1.92)% to (97.27 ± 0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.
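Volumetric overlap between an auto-segmented volume and a ground-truth estimate can be computed as, for example, the Dice coefficient (whether this study used Dice or a containment ratio is an assumption of this sketch):

```python
import numpy as np

def volumetric_overlap(auto_mask, gt_mask):
    # Dice coefficient: 2 * |A ∩ B| / (|A| + |B|), in [0, 1].
    a = np.asarray(auto_mask, dtype=bool)
    b = np.asarray(gt_mask, dtype=bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

gt = np.zeros((20, 20, 20), dtype=bool); gt[5:15, 5:15, 5:15] = True
auto = np.zeros_like(gt); auto[6:15, 5:15, 5:15] = True  # one slab missing
overlap_pct = 100 * volumetric_overlap(auto, gt)
```

Percent overlaps in the 80-98% range, as reported above, correspond to Dice values of 0.8-0.98 against the STAPLE consensus volume.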

  3. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging.

    PubMed

    Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L; Beauchemin, Steven S; Rodrigues, George; Gaede, Stewart

    2015-02-21

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51  ±  1.92) to (97.27  ±  0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development. PMID:25611494
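    STAPLE itself is an EM algorithm that weights each rater by estimated sensitivity and specificity; as a deliberately simplified stand-in for the consensus idea, a per-voxel majority vote over binary expert masks can be sketched as follows (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def majority_vote(masks):
    """Per-voxel majority vote over equally shaped binary expert masks.
    A crude consensus baseline; STAPLE additionally weights raters via EM."""
    stack = np.stack([np.asarray(m, dtype=bool) for m in masks])
    return stack.sum(axis=0) > len(masks) / 2.0

# three "experts" contouring a 1D strip of 5 voxels
a = np.array([1, 1, 0, 0, 1], dtype=bool)
b = np.array([1, 0, 0, 1, 1], dtype=bool)
c = np.array([1, 1, 1, 0, 0], dtype=bool)
consensus = majority_vote([a, b, c])  # votes [3, 2, 1, 1, 2] -> [T, T, F, F, T]
```

    Unlike STAPLE, a majority vote treats all raters as equally reliable, which is why the paper's EM-based estimate is preferred when rater quality varies.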

  4. Applicability of single muscle CSA for predicting segmental muscle volume in young men.

    PubMed

    Tanaka, N I; Kanehisa, H

    2014-06-01

    The present study aimed to evaluate the applicability of using a single slice cross-sectional area (CSA) of the skeletal muscle for estimating segmental skeletal muscle volume (SMV). By using MRI, the SMV of each of the upper arm, lower arm, upper leg, lower leg, and trunk was determined in 29 males. First, step-wise multiple regression analysis was applied to develop the equation for each segmental SMV in which the CSAs at intervals of 10% of segment length (SL) were used as independent variables. Second, simple linear regression analysis with every CSA selected in the first step was applied to predict SMV in each body segment. In each segment, the standard error of estimate (SEE) in the simple linear regression equation was greater than that in the multiple regression one. The most appropriate slice level for measuring a single CSA to estimate SMV was 30% of the upper arm SL (R2=0.800, SEE=7.4%), 60% of the lower arm SL (0.788, 10.3%), 50% of the upper leg SL (0.795, 7.0%), and 20% of the trunk SL (0.813, 6.1%). For the lower leg, muscle CSAs at multiple slice levels are required to estimate SMV without the systematic error. PMID:24408770
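    The second step described above, a simple linear regression from one CSA to segmental muscle volume, is easy to reproduce; the numbers below are synthetic, chosen only to mimic a roughly linear CSA-volume relationship, and are not the study's data:

```python
import numpy as np

def fit_csa_to_volume(csa, volume):
    """Least-squares fit: volume ~ slope*CSA + intercept.
    Returns (slope, intercept, R^2), mirroring the R^2/SEE style of reporting."""
    slope, intercept = np.polyfit(csa, volume, 1)
    pred = slope * csa + intercept
    ss_res = np.sum((volume - pred) ** 2)
    ss_tot = np.sum((volume - volume.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot

# synthetic: CSA (cm^2) at one slice level vs segment volume (cm^3)
csa = np.array([20.0, 25.0, 30.0, 35.0, 40.0])
vol = np.array([410.0, 500.0, 615.0, 690.0, 800.0])
slope, intercept, r2 = fit_csa_to_volume(csa, vol)
```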

  5. Clinical value of prostate segmentation and volume determination on MRI in benign prostatic hyperplasia

    PubMed Central

    Garvey, Brian; Türkbey, Barış; Truong, Hong; Bernardo, Marcelino; Periaswamy, Senthil; Choyke, Peter L.

    2014-01-01

    Benign prostatic hyperplasia (BPH) is a nonmalignant pathological enlargement of the prostate, which occurs primarily in the transitional zone. BPH is highly prevalent and is a major cause of lower urinary tract symptoms in aging males, although there is no direct relationship between prostate volume and symptom severity. The progression of BPH can be quantified by measuring the volumes of the whole prostate and its zones, based on image segmentation on magnetic resonance imaging. Prostate volume determination via segmentation is a useful measure for patients undergoing therapy for BPH. However, prostate segmentation is not widely used due to the excessive time required for even experts to manually map the margins of the prostate. Here, we review and compare new methods of prostate volume segmentation using both manual and automated methods, including the ellipsoid formula, manual planimetry, and semiautomated and fully automated segmentation approaches. We highlight the utility of prostate segmentation in the clinical context of assessing BPH. PMID:24675166

  6. Real-Time Automatic Segmentation of Optical Coherence Tomography Volume Data of the Macular Region.

    PubMed

    Tian, Jing; Varga, Boglárka; Somfai, Gábor Márk; Lee, Wen-Hsiang; Smiddy, William E; DeBuc, Delia Cabrera

    2015-01-01

    Optical coherence tomography (OCT) is a high speed, high resolution and non-invasive imaging modality that enables the capturing of the 3D structure of the retina. The fast and automatic analysis of 3D volume OCT data is crucial taking into account the increased amount of patient-specific 3D imaging data. In this work, we have developed an automatic algorithm, OCTRIMA 3D (OCT Retinal IMage Analysis 3D), that could segment OCT volume data in the macular region fast and accurately. The proposed method is implemented using the shortest-path based graph search, which detects the retinal boundaries by searching the shortest-path between two end nodes using Dijkstra's algorithm. Additional techniques, such as inter-frame flattening, inter-frame search region refinement, masking and biasing were introduced to exploit the spatial dependency between adjacent frames for the reduction of the processing time. Our segmentation algorithm was evaluated by comparing with the manual labelings and three state of the art graph-based segmentation methods. The processing time for the whole OCT volume of 496×644×51 voxels (captured by Spectralis SD-OCT) was 26.15 seconds which is at least a 2-8-fold increase in speed compared to other, similar reference algorithms used in the comparisons. The average unsigned error was about 1 pixel (∼ 4 microns), which was also lower compared to the reference algorithms. We believe that OCTRIMA 3D is a leap forward towards achieving reliable, real-time analysis of 3D OCT retinal data. PMID:26258430

  7. Real-Time Automatic Segmentation of Optical Coherence Tomography Volume Data of the Macular Region

    PubMed Central

    Tian, Jing; Varga, Boglárka; Somfai, Gábor Márk; Lee, Wen-Hsiang; Smiddy, William E.; Cabrera DeBuc, Delia

    2015-01-01

    Optical coherence tomography (OCT) is a high speed, high resolution and non-invasive imaging modality that enables the capturing of the 3D structure of the retina. The fast and automatic analysis of 3D volume OCT data is crucial taking into account the increased amount of patient-specific 3D imaging data. In this work, we have developed an automatic algorithm, OCTRIMA 3D (OCT Retinal IMage Analysis 3D), that could segment OCT volume data in the macular region fast and accurately. The proposed method is implemented using the shortest-path based graph search, which detects the retinal boundaries by searching the shortest-path between two end nodes using Dijkstra’s algorithm. Additional techniques, such as inter-frame flattening, inter-frame search region refinement, masking and biasing were introduced to exploit the spatial dependency between adjacent frames for the reduction of the processing time. Our segmentation algorithm was evaluated by comparing with the manual labelings and three state of the art graph-based segmentation methods. The processing time for the whole OCT volume of 496×644×51 voxels (captured by Spectralis SD-OCT) was 26.15 seconds which is at least a 2-8-fold increase in speed compared to other, similar reference algorithms used in the comparisons. The average unsigned error was about 1 pixel (∼ 4 microns), which was also lower compared to the reference algorithms. We believe that OCTRIMA 3D is a leap forward towards achieving reliable, real-time analysis of 3D OCT retinal data. PMID:26258430
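    The core of the boundary search described above, a minimum-cost path found with Dijkstra's algorithm between the two ends of a B-scan, can be sketched as follows. The cost image and the one-row-per-column move set are simplifying assumptions, not OCTRIMA 3D's actual graph construction:

```python
import heapq
import numpy as np

def min_cost_boundary(cost):
    """Minimum-cost path across a 2D cost image via Dijkstra's algorithm.
    The path advances one column per step and moves at most one row up or
    down, mimicking a retinal-layer boundary; returns one row per column."""
    rows, cols = cost.shape
    # heap entries: (accumulated cost, column, row, parent row in previous column)
    pq = [(float(cost[r, 0]), 0, r, -1) for r in range(rows)]
    heapq.heapify(pq)
    parent = {}  # (col, row) -> parent row, recorded when the node is finalized
    while pq:
        c, col, row, prow = heapq.heappop(pq)
        if (col, row) in parent:      # lazy deletion: already finalized
            continue
        parent[(col, row)] = prow
        if col == cols - 1:           # first finalized node in last column wins
            path = [row]
            for cc in range(col, 0, -1):
                path.append(parent[(cc, path[-1])])
            return path[::-1]
        for dr in (-1, 0, 1):
            r2 = row + dr
            if 0 <= r2 < rows and (col + 1, r2) not in parent:
                heapq.heappush(pq, (c + float(cost[r2, col + 1]), col + 1, r2, row))
    return []

# toy cost image: the cheap pixels zig-zag through rows 0, 1, 0
boundary = min_cost_boundary(np.array([[1, 9, 1], [9, 1, 9]], dtype=float))  # -> [0, 1, 0]
```

    In practice the cost image would be derived from the vertical intensity gradient, so that boundary pixels are cheap to traverse.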

  8. Hitchhiker's Guide to Voxel Segmentation for Partial Volume Correction of In Vivo Magnetic Resonance Spectroscopy.

    PubMed

    Quadrelli, Scott; Mountford, Carolyn; Ramadan, Saadallah

    2016-01-01

    Partial volume effects have the potential to cause inaccuracies when quantifying metabolites using proton magnetic resonance spectroscopy (MRS). In order to correct for cerebrospinal fluid content, a spectroscopic voxel needs to be segmented according to different tissue contents. This article aims to detail how automated partial volume segmentation can be undertaken and provides a software framework for researchers to develop their own tools. While many studies have detailed the impact of partial volume correction on proton magnetic resonance spectroscopy quantification, there is a paucity of literature explaining how voxel segmentation can be achieved using freely available neuroimaging packages. PMID:27147822

  9. Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET

    PubMed Central

    Hatt, Mathieu; Lamare, Frédéric; Boussion, Nicolas; Roux, Christian; Turzo, Alexandre; Cheze-Lerest, Catherine; Jarritt, Peter; Carson, Kathryn; Salzenstein, Fabien; Collet, Christophe; Visvikis, Dimitris

    2007-01-01

    Accurate volume of interest (VOI) estimation in PET is crucial in different oncology applications such as response to therapy evaluation and radiotherapy treatment planning. The objective of our study was to compare the performance of the proposed algorithm for automatic lesion volume delineation, namely the Fuzzy Hidden Markov Chains (FHMC), with that of the threshold-based techniques that are the current state of the art in clinical practice. As the classical Hidden Markov Chain (HMC) algorithm, FHMC takes into account noise, voxel intensity and spatial correlation in order to classify a voxel as background or functional VOI. However, the novelty of the fuzzy model consists of the inclusion of an estimation of imprecision, which should subsequently lead to a better modelling of the "fuzzy" nature of the boundaries of the object of interest in emission tomography data. The performance of the algorithms has been assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1) and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm³ and 64 mm³). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold-based methodology for functional volume determination or activity concentration recovery considering a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore, differences between the classification and volume estimation errors were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithms was less susceptible to image noise levels than that of the threshold-based techniques. The analysis of both simulated and acquired datasets led to similar results and conclusions as far as the performance of the segmentation algorithms under evaluation is concerned. PMID:17664555
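    The clinical baseline FHMC is compared against, thresholding at a fixed fraction of the maximum uptake, is essentially a one-liner; the 40% fraction below is a commonly quoted choice, not a value taken from this study:

```python
import numpy as np

def threshold_voi(suv, fraction=0.40):
    """Fixed-threshold VOI: voxels at or above fraction * max(SUV).
    A common clinical baseline; the fraction is tunable and contrast-dependent."""
    return suv >= fraction * suv.max()

# toy 1D lesion profile; threshold = 0.40 * 10.0 = 4.0
suv = np.array([0.5, 1.0, 4.0, 10.0, 4.0, 1.0, 0.5])
mask = threshold_voi(suv)  # [False, False, True, True, True, False, False]
```

    The abstract's point is precisely that such a fixed fraction degrades at low contrast (4:1) and for small lesions, where FHMC's noise and imprecision modelling pays off.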

  10. Midbrain volume segmentation using active shape models and LBPs

    NASA Astrophysics Data System (ADS)

    Olveres, Jimena; Nava, Rodrigo; Escalante-Ramírez, Boris; Cristóbal, Gabriel; García-Moreno, Carla María.

    2013-09-01

    In recent years, the use of Magnetic Resonance Imaging (MRI) to detect different brain structures such as the midbrain, white matter, gray matter, corpus callosum, and cerebellum has increased. This fact, together with the evidence that the midbrain is associated with Parkinson's disease, has led researchers to consider midbrain segmentation an important issue. Nowadays, Active Shape Models (ASM) are widely used in the literature for organ segmentation where shape is an important discriminant feature. Nevertheless, this approach is based on the assumption that objects of interest are usually located on strong edges. Such a limitation may lead to a final shape far from the actual shape model. This paper proposes a novel method based on the combined use of ASM and Local Binary Patterns (LBP) for segmenting the midbrain. Furthermore, we analyzed several LBP methods and evaluated their performance. The joint model considers both global and local statistics to improve final adjustments. The results showed that our proposal performs substantially better than the ASM algorithm and provides better segmentation measurements.

  11. Automated lung tumor segmentation for whole body PET volume based on novel downhill region growing

    NASA Astrophysics Data System (ADS)

    Ballangan, Cherry; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Feng, Dagan

    2010-03-01

    We propose an automated lung tumor segmentation method for whole body PET images based on a novel downhill region growing (DRG) technique, which regards homogeneous tumor hotspots as 3D monotonically decreasing functions. The method has three major steps: thoracic slice extraction with K-means clustering of the slice features; hotspot segmentation with DRG; and decision tree analysis based hotspot classification. To overcome the common problem of leakage into adjacent hotspots in automated lung tumor segmentation, DRG employs the tumors' SUV monotonicity features. DRG also uses gradient magnitude of tumors' SUV to improve tumor boundary definition. We used 14 PET volumes from patients with primary NSCLC for validation. The thoracic region extraction step achieved good and consistent results for all patients despite marked differences in size and shape of the lungs and the presence of large tumors. The DRG technique was able to avoid the problem of leakage into adjacent hotspots and produced a volumetric overlap fraction of 0.61 +/- 0.13 which outperformed four other methods where the overlap fraction varied from 0.40 +/- 0.24 to 0.59 +/- 0.14. Of the 18 tumors in 14 NSCLC studies, 15 lesions were classified correctly, 2 were false negative and 15 were false positive.
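    The "downhill" constraint described above, accepting a neighbor only if its uptake does not exceed the voxel it was reached from, can be sketched as breadth-first region growing in 2D. Details such as 4-connectivity and the stop level are our assumptions, not the paper's exact DRG formulation:

```python
import numpy as np
from collections import deque

def downhill_grow(img, stop=0.0):
    """Grow a region from the global maximum, accepting a neighbor only if
    its value is above `stop` and does not exceed the voxel it is reached
    from (monotonically decreasing, i.e. 'downhill'). 2D sketch."""
    seed = np.unravel_index(np.argmax(img), img.shape)
    mask = np.zeros(img.shape, dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not mask[ny, nx]
                    and stop < img[ny, nx] <= img[y, x]):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

# two hotspots separated by a zero-valued gap; growth from the 5-peak
# stays in the left block and never climbs into the right hotspot
img = np.array([[1, 2, 1, 0, 3],
                [2, 5, 2, 0, 4],
                [1, 2, 1, 0, 3]], dtype=float)
mask = downhill_grow(img)  # covers the 3x3 left block only
```

    This monotonicity rule is what prevents the leakage into adjacent hotspots that plagues plain threshold-connected region growing.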

  12. Image Segmentation Analysis for NASA Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2010-01-01

    NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all of the computerized image analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that s now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate moving from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects together into region classes. No other practical, operational image segmentation approach has this tight integration of region growing object finding with region classification This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image data sections are small enough to successfully mitigat the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation will provide an overview of the RHSEG algorithm and describe how it is currently being used to support OBIA or Earth Science applications such as snow/ice mapping and finding archaeological sites from remotely sensed data.

  13. 3D robust Chan-Vese model for industrial computed tomography volume data segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Linghui; Zeng, Li; Luan, Xiao

    2013-11-01

    Industrial computed tomography (CT) has been widely applied in many areas of non-destructive testing (NDT) and non-destructive evaluation (NDE). In practice, CT volume data to be dealt with may be corrupted by noise. This paper addresses the segmentation of noisy industrial CT volume data. Motivated by the research on the Chan-Vese (CV) model, we present a region-based active contour model that draws upon intensity information in local regions with a controllable scale. In the presence of noise, a local energy is firstly defined according to the intensity difference within a local neighborhood. Then a global energy is defined to integrate local energy with respect to all image points. In a level set formulation, this energy is represented by a variational level set function, where a surface evolution equation is derived for energy minimization. Comparative analysis with the CV model indicates the comparable performance of the 3D robust Chan-Vese (RCV) model. The quantitative evaluation also shows the segmentation accuracy of 3D RCV. In addition, the efficiency of our approach is validated under several types of noise, such as Poisson noise, Gaussian noise, salt-and-pepper noise and speckle noise.

  14. Multi-region unstructured volume segmentation using tetrahedron filling

    SciTech Connect

    Williams, Sean Jamerson; Dillard, Scott E; Thoma, Dan J; Hlawitschka, Mario; Hamann, Bernd

    2010-01-01

    Segmentation is one of the most common operations in image processing, and while there are several solutions already present in the literature, they each have their own benefits and drawbacks that make them well-suited for some types of data and not for others. We focus on the problem of breaking an image into multiple regions in a single segmentation pass, while supporting both voxel and scattered point data. To solve this problem, we begin with a set of potential boundary points and use a Delaunay triangulation to complete the boundaries. We use heuristic- and interaction-driven Voronoi clustering to find reasonable groupings of tetrahedra. Apart from the computation of the Delaunay triangulation, our algorithm has linear time complexity with respect to the number of tetrahedra.

  15. Automated segmentation of mesothelioma volume on CT scan

    NASA Astrophysics Data System (ADS)

    Zhao, Binsheng; Schwartz, Lawrence; Flores, Raja; Liu, Fan; Kijewski, Peter; Krug, Lee; Rusch, Valerie

    2005-04-01

    In mesothelioma, response is usually assessed by computed tomography (CT). In current clinical practice the Response Evaluation Criteria in Solid Tumors (RECIST) or the WHO criteria, i.e., uni-dimensional or bi-dimensional measurements, are applied to the assessment of therapy response. However, the shape of the mesothelioma volume is very irregular and its longest dimension is almost never in the axial plane. Furthermore, the sections and the sites where radiologists measure the tumor are rather subjective, resulting in poor reproducibility of tumor size measurements. We are developing an objective three-dimensional (3D) computer algorithm to automatically identify and quantify tumor volumes associated with malignant pleural mesothelioma to assess therapy response. The algorithm first extracts the lung pleural surface from the volumetric CT images by interpolating the chest ribs over a number of adjacent slices and then forming a volume that includes the thorax. This volume allows a separation of mesothelioma from the chest wall. Subsequently, the structures inside the extracted pleural lung surface, including the mediastinal area, lung parenchyma, and pleural mesothelioma, can be identified using a multiple thresholding technique and morphological operations. Preliminary results have shown the potential of utilizing this algorithm to automatically detect and quantify tumor volumes on CT scans and thus to assess therapy response for malignant pleural mesothelioma.

  16. LANDSAT-D program. Volume 2: Ground segment

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Raw digital data, as received from the LANDSAT spacecraft, cannot generate images that meet specifications. Radiometric corrections must be made to compensate for aging and for differences in sensitivity among the instrument sensors. Geometric corrections must be made to compensate for off-nadir look angle, and to calculate spacecraft drift from its prescribed path. Corrections must also be made for look-angle jitter caused by vibrations induced by spacecraft equipment. The major components of the LANDSAT ground segment and their functions are discussed.

  17. Knowledge-based segmentation of pediatric kidneys in CT for measuring parenchymal volume

    NASA Astrophysics Data System (ADS)

    Brown, Matthew S.; Feng, Waldo C.; Hall, Theodore R.; McNitt-Gray, Michael F.; Churchill, Bernard M.

    2000-06-01

    The purpose of this work was to develop an automated method for segmenting pediatric kidneys in contrast-enhanced helical CT images and measuring the volume of the renal parenchyma. An automated system was developed to segment the abdomen, spine, aorta and kidneys. The expected size, shape, topology, and X-ray attenuation of anatomical structures are stored as features in an anatomical model. These features guide 3-D threshold-based segmentation and then matching of extracted image regions to anatomical structures in the model. Following segmentation, the kidney volumes are calculated by summing included voxels. To validate the system, the kidney volumes of 4 swine were calculated using our approach and compared to the 'true' volumes measured after harvesting the kidneys. Automated volume calculations were also performed retrospectively in a cohort of 10 children. The mean difference between the calculated and measured values in the swine kidneys was 1.38 (S.D. plus or minus 0.44) cc. For the pediatric cases, calculated volumes ranged from 41.7 - 252.1 cc/kidney, and the mean ratio of right to left kidney volume was 0.96 (S.D. plus or minus 0.07). These results demonstrate the accuracy of the volumetric technique, which may in the future provide an objective assessment of renal damage.
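    The final volume computation, summing included voxels and scaling by the voxel size, is straightforward; the spacing and mask below are illustrative only:

```python
import numpy as np

def segmented_volume_cc(mask, spacing_mm):
    """Volume of a binary segmentation in cubic centimetres.
    spacing_mm: (dz, dy, dx) voxel spacing in millimetres."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0  # 1 cc = 1000 mm^3

# e.g. 50,000 segmented voxels at 1.0 x 0.5 x 0.5 mm spacing
mask = np.ones((50, 100, 10), dtype=bool)
vol_cc = segmented_volume_cc(mask, (1.0, 0.5, 0.5))
print(vol_cc)  # -> 12.5
```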

  18. Comprehensive evaluation of an image segmentation technique for measuring tumor volume from CT images

    NASA Astrophysics Data System (ADS)

    Deng, Xiang; Huang, Haibin; Zhu, Lei; Du, Guangwei; Xu, Xiaodong; Sun, Yiyong; Xu, Chenyang; Jolly, Marie-Pierre; Chen, Jiuhong; Xiao, Jie; Merges, Reto; Suehling, Michael; Rinck, Daniel; Song, Lan; Jin, Zhengyu; Jiang, Zhaoxia; Wu, Bin; Wang, Xiaohong; Zhang, Shuai; Peng, Weijun

    2008-03-01

    Comprehensive quantitative evaluation of tumor segmentation technique on large scale clinical data sets is crucial for routine clinical use of CT based tumor volumetry for cancer diagnosis and treatment response evaluation. In this paper, we present a systematic validation study of a semi-automatic image segmentation technique for measuring tumor volume from CT images. The segmentation algorithm was tested using clinical data of 200 tumors in 107 patients with liver, lung, lymphoma and other types of cancer. The performance was evaluated using both accuracy and reproducibility. The accuracy was assessed using 7 commonly used metrics that can provide complementary information regarding the quality of the segmentation results. The reproducibility was measured by the variation of the volume measurements from 10 independent segmentations. The effect of disease type, lesion size and slice thickness of image data on the accuracy measures were also analyzed. Our results demonstrate that the tumor segmentation algorithm showed good correlation with ground truth for all four lesion types (r = 0.97, 0.99, 0.97, 0.98, p < 0.0001 for liver, lung, lymphoma and other respectively). The segmentation algorithm can produce relatively reproducible volume measurements on all lesion types (coefficient of variation in the range of 10-20%). Our results show that the algorithm is insensitive to lesion size (coefficient of determination close to 0) and slice thickness of image data(p > 0.90). The validation framework used in this study has the potential to facilitate the development of new tumor segmentation algorithms and assist large scale evaluation of segmentation techniques for other clinical applications.
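    Two representative measures used in validations like the one above, a volumetric overlap (Dice) score for accuracy and a coefficient of variation for reproducibility, can be sketched as follows; the paper's full set of 7 metrics is not reproduced here:

```python
import numpy as np

def dice(a, b):
    """Dice overlap of two binary masks (1.0 = identical, 0.0 = disjoint)."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def coefficient_of_variation(volumes):
    """Reproducibility of repeated volume measurements: sample std / mean."""
    v = np.asarray(volumes, dtype=float)
    return v.std(ddof=1) / v.mean()

a = np.array([1, 1, 1, 0], dtype=bool)
b = np.array([1, 1, 0, 0], dtype=bool)
print(dice(a, b))  # 2*2/(3+2) -> 0.8
cov = coefficient_of_variation([90.0, 100.0, 110.0])  # std 10 / mean 100 = 0.1
```

    A 10-20% coefficient of variation, as reported above, corresponds to cov values between 0.1 and 0.2 over the 10 repeated segmentations.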

  19. Segmentation propagation for the automated quantification of ventricle volume from serial MRI

    NASA Astrophysics Data System (ADS)

    Linguraru, Marius George; Butman, John A.

    2009-02-01

    Accurate ventricle volume estimates could potentially improve the understanding and diagnosis of communicating hydrocephalus. Postoperative communicating hydrocephalus has been recognized in patients with brain tumors where the changes in ventricle volume can be difficult to identify, particularly over short time intervals. Because of the complex alterations of brain morphology in these patients, the segmentation of brain ventricles is challenging. Our method evaluates ventricle size from serial brain MRI examinations; we (i) combined serial images to increase SNR, (ii) automatically segmented this image to generate a ventricle template using fast marching methods and geodesic active contours, and (iii) propagated the segmentation using deformable registration of the original MRI datasets. By applying this deformation to the ventricle template, serial volume estimates were obtained in a robust manner from routine clinical images (0.93 overlap) and their variation analyzed.

  20. Comparison of EM-based and level set partial volume segmentations of MR brain images

    NASA Astrophysics Data System (ADS)

    Tagare, Hemant D.; Chen, Yunmei; Fulbright, Robert K.

    2008-03-01

    EM and level set algorithms are competing methods for segmenting MRI brain images. This paper presents a fair comparison of the two techniques using the Montreal Neurological Institute's software phantom. There are many flavors of level set algorithms for segmentation into multiple regions (multi-phase algorithms, multi-layer algorithms). The specific algorithm evaluated by us is a variant of the multi-layer level set algorithm. It uses a single level set function for segmenting the image into multiple classes and can be run to completion without restarting. The EM-based algorithm is standard. Both algorithms have the capacity to model a variable number of partial volume classes as well as image inhomogeneity (bias field). Our evaluation consists of systematically changing the number of partial volume classes, additive image noise, and regularization parameters. The results suggest that the performances of both algorithms are comparable across noise, number of partial volume classes, and regularization. The segmentation errors of both algorithms are around 5 - 10% for cerebrospinal fluid, gray and white matter. The level set algorithm appears to have a slight advantage for gray matter segmentation. This may be beneficial in studying certain brain diseases (Multiple Sclerosis or Alzheimer's disease) where small changes in gray matter volume are significant.
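    The EM side of the comparison rests on fitting a Gaussian mixture to voxel intensities. A minimal 1-D, two-class version, without the bias field or partial-volume classes the paper models, looks like this (synthetic data; all parameters illustrative):

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    """Minimal 1-D EM for a two-component Gaussian mixture, the core of
    EM-based tissue classification (no bias field, no partial-volume classes)."""
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: per-voxel class responsibilities
        lik = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate means, spreads and mixing weights
        n = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
        pi = n / len(x)
    return mu, sigma, pi

# two synthetic "tissue" intensity clusters around 0 and 10
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(10, 1, 500)])
mu, sigma, pi = em_two_gaussians(x)  # mu converges near [0, 10]
```

    Partial-volume classes are typically added as extra mixture components between the pure-tissue means, which is where the two algorithms' behavior under noise starts to differ.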

  1. Similarity enhancement for automatic segmentation of cardiac structures in computed tomography volumes

    PubMed Central

    Vera, Miguel; Bravo, Antonio; Garreau, Mireille; Medina, Rubén

    2011-01-01

    The aim of this research is to propose a 3-D similarity enhancement technique useful for improving the segmentation of cardiac structures in Multi-Slice Computerized Tomography (MSCT) volumes. The similarity enhancement is obtained by subtracting the intensity of the current voxel and the gray levels of its adjacent voxels in two volumes resulting after preprocessing. Such volumes are: (a) a volume obtained after applying a Gaussian distribution and a morphological top-hat filter to the input, and (b) a smoothed volume generated by processing the input with an average filter. Then, the similarity volume is used as input to a region growing algorithm. This algorithm is applied to extract the shape of cardiac structures, such as the left and right ventricles, in MSCT volumes. Qualitative and quantitative results show the good performance of the proposed approach for discrimination of cardiac cavities. PMID:22256220

  2. Similarity enhancement for automatic segmentation of cardiac structures in computed tomography volumes.

    PubMed

    Vera, Miguel; Bravo, Antonio; Garreau, Mireille; Medina, Rubén

    2011-01-01

    The aim of this research is to propose a 3-D similarity enhancement technique useful for improving the segmentation of cardiac structures in Multi-Slice Computerized Tomography (MSCT) volumes. The similarity enhancement is obtained by subtracting the intensity of the current voxel and the gray levels of its adjacent voxels in two volumes resulting after preprocessing. Such volumes are: (a) a volume obtained after applying a Gaussian distribution and a morphological top-hat filter to the input, and (b) a smoothed volume generated by processing the input with an average filter. Then, the similarity volume is used as input to a region growing algorithm. This algorithm is applied to extract the shape of cardiac structures, such as the left and right ventricles, in MSCT volumes. Qualitative and quantitative results show the good performance of the proposed approach for discrimination of cardiac cavities. PMID:22256220
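    The preprocessing in this abstract is only loosely specified, so the sketch below substitutes a plain 3×3 box mean for the average filter and a crude bright-detail residue for the Gaussian/top-hat volume; it illustrates only the voxel-wise subtraction idea in 2D, not the paper's actual pipeline:

```python
import numpy as np

def box_mean(a):
    """3x3 mean filter (edge-padded), pure NumPy; stands in for the average filter."""
    p = np.pad(a, 1, mode="edge")
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def similarity_enhance(img):
    """Voxel-wise subtraction of two preprocessed volumes (2D sketch).
    `detail` is a crude stand-in for the paper's Gaussian + top-hat volume."""
    smoothed = box_mean(img)                      # (b) average-filtered volume
    detail = np.clip(img - smoothed, 0.0, None)   # (a) bright-detail residue (assumption)
    return np.abs(detail - smoothed)              # similarity volume for region growing

rng = np.random.default_rng(1)
img = rng.random((16, 16))
out = similarity_enhance(img)
```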

  3. Pulmonary airways tree segmentation from CT examinations using adaptive volume of interest

    NASA Astrophysics Data System (ADS)

    Park, Sang Cheol; Kim, Won Pil; Zheng, Bin; Leader, Joseph K.; Pu, Jiantao; Tan, Jun; Gur, David

    2009-02-01

    Airways tree segmentation is an important step in quantitatively assessing the severity of and changes in several lung diseases such as chronic obstructive pulmonary disease (COPD), asthma, and cystic fibrosis. It can also be used in guiding bronchoscopy. The purpose of this study is to develop an automated scheme for segmenting the airways tree structure depicted on chest CT examinations. After lung volume segmentation, the scheme defines the first cylinder-like volume of interest (VOI) using a series of images depicting the trachea. The scheme then iteratively defines and adds subsequent VOIs using a region growing algorithm combined with adaptively determined thresholds in order to trace possible sections of airways located inside the combined VOI in question. The airway tree segmentation process is automatically terminated after the scheme assesses all defined VOIs in the iteratively assembled VOI list. In this preliminary study, ten CT examinations with 1.25 mm section thickness and two different CT image reconstruction kernels ("bone" and "standard") were selected and used to test the proposed airways tree segmentation scheme. The experimental results showed that (1) adopting this approach effectively prevented the scheme from infiltrating into the parenchyma, (2) the proposed method segmented the airways trees reasonably accurately, with a lower false-positive identification rate compared with other previously reported schemes based on 2-D image segmentation and data analyses, and (3) the proposed adaptive, iterative threshold selection method for the region growing step in each identified VOI enabled the scheme to segment the airways trees reliably to the 4th generation in this limited dataset, with successful segmentation up to the 5th generation in a fraction of the airways tree branches.

  4. Analysis of Random Segment Errors on Coronagraph Performance

    NASA Technical Reports Server (NTRS)

    Shaklan, Stuart B.; N'Diaye, Mamadou; Stahl, Mark T.; Stahl, H. Philip

    2016-01-01

    At the 2015 SPIE O&P conference we presented "Preliminary Analysis of Random Segment Errors on Coronagraph Performance." Key findings: contrast leakage for a 4th-order Sinc^2(X) coronagraph is 10X more sensitive to random segment piston than to random tip/tilt, and apertures with few segments (i.e., 1 ring) or very many segments (>16 rings) have less contrast leakage as a function of piston or tip/tilt than an aperture with 2 to 4 rings of segments. Revised finding: piston is only 2.5X more sensitive than tip/tilt.

  5. Multi-Segment Hemodynamic and Volume Assessment With Impedance Plethysmography: Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Ku, Yu-Tsuan E.; Montgomery, Leslie D.; Webbon, Bruce W. (Technical Monitor)

    1995-01-01

    Definition of multi-segmental circulatory and volume changes in the human body provides an understanding of the physiologic responses to various aerospace conditions. We have developed instrumentation and testing procedures at NASA Ames Research Center that may be useful in biomedical research and clinical diagnosis. Specialized two, four, and six channel impedance systems will be described that have been used to measure calf, thigh, thoracic, arm, and cerebral hemodynamic and volume changes during various experimental investigations.

  6. Swarm Intelligence Integrated Graph-Cut for Liver Segmentation from 3D-CT Volumes

    PubMed Central

    Eapen, Maya; Korah, Reeba; Geetha, G.

    2015-01-01

    The segmentation of organs in CT volumes is a prerequisite for diagnosis and treatment planning. In this paper, we focus on liver segmentation from contrast-enhanced abdominal CT volumes, a challenging task due to intensity overlapping, blurred edges, large variability in liver shape, and complex background with cluttered features. The algorithm integrates multiple discriminative cues (i.e., prior domain information, an intensity model, and regional characteristics of the liver) in a graph-cut image segmentation framework. The paper proposes a swarm-intelligence-inspired, edge-adaptive weight function for regulating the energy minimization of the traditional graph-cut model. The model is validated both qualitatively (by clinicians and radiologists) and quantitatively on publicly available computed tomography (CT) datasets (MICCAI 2007 liver segmentation challenge, 3D-IRCAD). Quantitative evaluation of the segmentation results is performed using liver volume calculations, and mean scores of 80.8% and 82.5% are obtained on the MICCAI and IRCAD datasets, respectively. The experimental results illustrate the efficiency and effectiveness of the proposed method. PMID:26689833
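    The paper's swarm-optimized, edge-adaptive weight is not given in the abstract; what it regulates is the conventional graph-cut boundary (n-link) weight, which is typically of the following form (`sigma` here is a hypothetical smoothing parameter, not a value from the paper):

```python
import numpy as np

def boundary_weight(ia, ib, sigma):
    """Conventional graph-cut n-link weight: close to 1 between voxels
    of similar intensity, near 0 across strong edges, so the minimum
    cut prefers to follow intensity boundaries. The paper replaces the
    fixed sigma with a swarm-learned, spatially adaptive term (not
    shown here)."""
    return np.exp(-((ia - ib) ** 2) / (2.0 * sigma ** 2))
```

    A min-cut over a graph with these n-links plus intensity-model t-links yields the segmentation; the cited work's contribution is in how the weights adapt along object edges.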

  7. Segmentation of Cerebral Gyri in the Sectioned Images by Referring to Volume Model

    PubMed Central

    Park, Jin Seo; Chung, Min Suk; Chi, Je-Geun; Park, Hyo Seok

    2010-01-01

    The authors had previously prepared high-quality sectioned images of a cadaver head. For the delineation of each cerebral gyrus, a three-dimensional model of the same brain was required. The purpose of this study was to develop a segmentation protocol for the cerebral gyri by referring to a three-dimensional model on a personal computer. From the 114 sectioned images (1 mm intervals), a cerebral hemisphere was outlined. Using MRIcro software, the sectioned images including only the cerebral hemisphere were volume reconstructed. The volume model was rotated to capture the lateral, medial, superior, and inferior views of the cerebral hemisphere. On these four views, the areas of 33 cerebral gyri were painted with colors. Guided by the painted views, the cerebral gyri in the sectioned images were identified and outlined in Photoshop to prepare segmented images. The segmented images were used to produce volume and surface models of the selected gyri. The segmentation method developed in this research is expected to be applicable to other types of images, such as MRIs. The sectioned and segmented images of the cadaver brain acquired in the present study will hopefully be utilized in medical learning tools for neuroanatomy. PMID:21165283

  8. Lung Segmentation in 4D CT Volumes Based on Robust Active Shape Model Matching

    PubMed Central

    Gill, Gurman; Beichel, Reinhard R.

    2015-01-01

    Dynamic and longitudinal lung CT imaging produce 4D lung image data sets, enabling applications like radiation treatment planning or assessment of response to treatment of lung diseases. In this paper, we present a 4D lung segmentation method that mutually utilizes all individual CT volumes to derive segmentations for each CT data set. Our approach is based on a 3D robust active shape model and extends it to fully utilize 4D lung image data sets. This yields an initial segmentation for the 4D volume, which is then refined by using a 4D optimal surface finding algorithm. The approach was evaluated on a diverse set of 152 CT scans of normal and diseased lungs, consisting of total lung capacity and functional residual capacity scan pairs. In addition, a comparison to a 3D segmentation method and a registration based 4D lung segmentation approach was performed. The proposed 4D method obtained an average Dice coefficient of 0.9773 ± 0.0254, which was statistically significantly better (p value ≪0.001) than the 3D method (0.9659 ± 0.0517). Compared to the registration based 4D method, our method obtained better or similar performance, but was 58.6% faster. Also, the method can be easily expanded to process 4D CT data sets consisting of several volumes. PMID:26557844
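    The Dice coefficient used for evaluation above is a standard overlap measure between a computed mask and a reference mask; a minimal implementation:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

    The reported 0.9773 vs. 0.9659 difference is small in absolute terms, which is typical for lungs: large, mostly convex organs inflate the denominator, so even modest boundary errors change Dice only slightly.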

  9. Sequential Registration-Based Segmentation of the Prostate Gland in MR Image Volumes.

    PubMed

    Khalvati, Farzad; Salmanpour, Aryan; Rahnamayan, Shahryar; Haider, Masoom A; Tizhoosh, H R

    2016-04-01

    Accurate and fast segmentation and volume estimation of the prostate gland in magnetic resonance (MR) images are necessary steps in the diagnosis, treatment, and monitoring of prostate cancer. This paper presents an algorithm for prostate gland volume estimation based on the semi-automated segmentation of individual slices in T2-weighted MR image sequences. The proposed sequential registration-based segmentation (SRS) algorithm, which was inspired by the clinical workflow during medical image contouring, relies on inter-slice image registration and user interaction/correction to segment the prostate gland without the use of an anatomical atlas. It automatically generates contours for each slice using a registration algorithm, provided that the user edits and approves the markings in some previous slices. We conducted comprehensive experiments to measure the performance of the proposed algorithm using three registration methods (i.e., rigid, affine, and nonrigid). Five radiation oncologists participated in the study, contouring the prostate MR (T2-weighted) images of 15 patients both manually and using the SRS algorithm. Compared to manual segmentation, on average, the SRS algorithm reduced the contouring time by 62% (a speedup factor of 2.64×) while maintaining the segmentation accuracy at the same level as the intra-user agreement level (i.e., Dice similarity coefficient of 91% versus 90%). The proposed algorithm exploits the inter-slice similarity of volumetric MR image series to achieve highly accurate results while significantly reducing the contouring time. PMID:26546179

  10. Semi-automatic active contour approach to segmentation of computed tomography volumes

    NASA Astrophysics Data System (ADS)

    Loncaric, Sven; Kovacevic, Domagoj; Sorantin, Erich

    2000-06-01

    In this paper, a method for semi-automatic three-dimensional (3-D) segmentation of medical image volumes is described. The method is semi-automatic in the sense that, in the initial phase, user assistance is required for manual segmentation of a certain number of slices (cross-sections) of the volume. In the second phase, the algorithm for automatic segmentation is started. The segmentation algorithm is based on the active contour approach. A semi-3-D active contour algorithm is used, in the sense that additional inter-slice forces are introduced to constrain the obtained solution. The energy function being minimized is modified to exploit the information provided by the user's manual segmentation of some of the slices. The experiments were performed using computed tomography (CT) scans of the abdominal region of the human body. In particular, CT images of abdominal aortic aneurysms were segmented to determine the location of the aorta. The experiments have shown the feasibility of the approach.

  11. Precise segmentation of multiple organs in CT volumes using learning-based approach and information theory.

    PubMed

    Lu, Chao; Zheng, Yefeng; Birkbeck, Neil; Zhang, Jingdan; Kohlberger, Timo; Tietjen, Christian; Boettger, Thomas; Duncan, James S; Zhou, S Kevin

    2012-01-01

    In this paper, we present a novel method that incorporates information theory into a learning-based approach for automatic and accurate pelvic organ segmentation (including the prostate, bladder, and rectum). We target 3D CT volumes that are generated using different scanning protocols (e.g., contrast and non-contrast, with and without an implant in the prostate, various resolutions and positions), and the volumes come from largely diverse sources (e.g., disease in different organs). Three key ingredients are combined to solve this challenging segmentation problem. First, marginal space learning (MSL) is applied to efficiently and effectively localize the multiple organs in the largely diverse CT volumes. Second, learning techniques based on steerable features are applied for robust boundary detection. This enables the handling of highly heterogeneous texture patterns. Third, a novel information-theoretic scheme is incorporated into the boundary inference process. The incorporation of the Jensen-Shannon divergence further drives the mesh to the best fit of the image, thus improving the segmentation performance. The proposed approach is tested on a challenging dataset containing 188 volumes from diverse sources. Our approach not only produces excellent segmentation accuracy, but also runs about eighty times faster than previous state-of-the-art solutions. The proposed method can be applied to CT images to provide visual guidance to physicians during computer-aided diagnosis, treatment planning, and image-guided radiotherapy to treat cancers in the pelvic region. PMID:23286081

  12. Hierarchical Exploration of Volumes Using Multilevel Segmentation of the Intensity-Gradient Histograms.

    PubMed

    Ip, Cheuk Yiu; Varshney, A; JaJa, J

    2012-12-01

    Visual exploration of volumetric datasets to discover the embedded features and spatial structures is a challenging and tedious task. In this paper we present a semi-automatic approach to this problem that works by visually segmenting the intensity-gradient 2D histogram of a volumetric dataset into an exploration hierarchy. Our approach mimics user exploration behavior by analyzing the histogram with the normalized-cut multilevel segmentation technique. Unlike previous work in this area, our technique segments the histogram into a reasonable set of intuitive components that are mutually exclusive and collectively exhaustive. We use information-theoretic measures of the volumetric data segments to guide the exploration. This provides a data-driven coarse-to-fine hierarchy for a user to interactively navigate the volume in a meaningful manner. PMID:26357143
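    The intensity-gradient 2D histogram that the exploration hierarchy is built on can be computed as below. This is a sketch of the feature space only; the normalized-cut multilevel segmentation of the histogram is not shown.

```python
import numpy as np

def intensity_gradient_histogram(volume, bins=64):
    """Joint 2D histogram of voxel intensity vs. gradient magnitude.
    Rows index intensity bins, columns index gradient-magnitude bins;
    each voxel contributes one count."""
    gz, gy, gx = np.gradient(volume.astype(float))
    grad_mag = np.sqrt(gz ** 2 + gy ** 2 + gx ** 2)
    hist, i_edges, g_edges = np.histogram2d(
        volume.ravel(), grad_mag.ravel(), bins=bins)
    return hist, i_edges, g_edges
```

    Material boundaries show up as arches in this histogram (low gradient at pure materials, high gradient between them), which is what makes it a useful space to partition for transfer-function design.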

  13. Texture segmentation and analysis for tissue characterization

    NASA Astrophysics Data System (ADS)

    Redondo, Rafael; Fischer, Sylvain; Cristobal, Gabriel; Forero, Manuel; Santos, Andres; Hormigo, Javier; Gabarda, Salvador

    2004-10-01

    Early detection of tissue changes in a disease process is of utmost interest and a challenge for non-invasive imaging techniques. Texture is an important property of image regions, and many texture descriptors have been proposed in the literature. In this paper we introduce a new approach to texture description and texture grouping. Some applications, e.g. shape from texture, require the denser sampling provided by the pseudo-Wigner distribution (PWD). The first approach to the problem is therefore modular pattern detection in textured images based on the PWD followed by a PCA stage. The second scheme is a direct local frequency analysis obtained by splitting the PWD spectra following a "cortex-like" structure. As an alternative technique, a Gabor multiresolution approach was considered. Gabor functions constitute a family of band-pass filters that gather the most salient properties of spatial frequency and orientation selectivity. This paper presents a comparison of time-frequency methods based on the PWD with sparse filtering approaches using a Gabor-based multiresolution representation. The performance of the current methods is evaluated on the segmentation of synthetic texture mosaics and of osteoporosis images.

  14. Semi-automatic tool for segmentation and volumetric analysis of medical images.

    PubMed

    Heinonen, T; Dastidar, P; Kauppinen, P; Malmivuo, J; Eskola, H

    1998-05-01

    Segmentation software developed for medical image processing and running on Windows is described. The software applies basic image processing techniques through a graphical user interface. For particular applications, such as brain lesion segmentation, the software enables the combination of different segmentation techniques to improve its efficiency. The program has been applied to magnetic resonance imaging, computed tomography, and optical images of cryosections. The software can be utilised in numerous applications, including pre-processing for three-dimensional presentations, volumetric analysis, and construction of volume conductor models. PMID:9747567

  15. Automated segmentation and measurement of global white matter lesion volume in patients with multiple sclerosis.

    PubMed

    Alfano, B; Brunetti, A; Larobina, M; Quarantelli, M; Tedeschi, E; Ciarmiello, A; Covelli, E M; Salvatore, M

    2000-12-01

    A fully automated magnetic resonance (MR) segmentation method for identification and volume measurement of demyelinated white matter has been developed. Spin-echo MR brain scans were performed in 38 patients with multiple sclerosis (MS) and in 46 healthy subjects. Segmentation of normal tissues and white matter lesions (WML) was obtained, based on their relaxation rates and proton density maps. For WML identification, additional criteria included three-dimensional (3D) lesion shape and surrounding tissue composition. Segmented images were generated, and normal brain tissues and WML volumes were obtained. Sensitivity, specificity, and reproducibility of the method were calculated, using the WML identified by two neuroradiologists as the gold standard. The average volume of "abnormal" white matter in normal subjects (false positive) was 0.11 ml (range 0-0.59 ml). In MS patients the average WML volume was 31.0 ml (range 1.1-132.5 ml), with a sensitivity of 87.3%. In the reproducibility study, the mean SD of WML volumes was 2.9 ml. The procedure appears suitable for monitoring disease changes over time. J. Magn. Reson. Imaging 2000;12:799-807. PMID:11105017

  16. Segmentation of organs at risk in CT volumes of head, thorax, abdomen, and pelvis

    NASA Astrophysics Data System (ADS)

    Han, Miaofei; Ma, Jinfeng; Li, Yan; Li, Meiling; Song, Yanli; Li, Qiang

    2015-03-01

    Accurate segmentation of organs at risk (OARs) is a key step in the treatment planning system (TPS) of image-guided radiation therapy. We are developing three classes of methods to segment 17 organs at risk throughout the whole body, including brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, liver, kidneys, spleen, prostate, rectum, femoral heads, and skin. The three classes of segmentation methods include (1) threshold-based methods for organs of large contrast with adjacent structures, such as the lungs, trachea, and skin; (2) context-driven Generalized Hough Transform-based methods combined with a graph cut algorithm for robust localization and segmentation of the liver, kidneys, and spleen; and (3) atlas- and registration-based methods for segmentation of the heart and all organs in CT volumes of the head and pelvis. The segmentation accuracy for the seventeen organs was subjectively evaluated by two medical experts on a three-level score: 0, poor (unusable in clinical practice); 1, acceptable (minor revision needed); and 2, good (nearly no revision needed). A database was collected from Ruijin Hospital, Huashan Hospital, and Xuhui Central Hospital in Shanghai, China, including 127 head scans, 203 thoracic scans, 154 abdominal scans, and 73 pelvic scans. The percentages of "good" segmentation results were 97.6%, 92.9%, 81.1%, 87.4%, 85.0%, 78.7%, 94.1%, 91.1%, 81.3%, 86.7%, 82.5%, 86.4%, 79.9%, 72.6%, 68.5%, 93.2%, and 96.9% for the brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, liver, kidneys, spleen, prostate, rectum, femoral heads, and skin, respectively. Various organs at risk can be reliably segmented from CT scans by use of the three classes of segmentation methods.

  17. Generalized method for partial volume estimation and tissue segmentation in cerebral magnetic resonance images

    PubMed Central

    Khademi, April; Venetsanopoulos, Anastasios; Moody, Alan R.

    2014-01-01

    An artifact found in magnetic resonance images (MRI) called partial volume averaging (PVA) has received much attention, since accurate segmentation of cerebral anatomy and pathology is impeded by this artifact. Traditional neurological segmentation techniques rely on Gaussian mixture models to handle noise and PVA, or on high-dimensional feature sets that exploit redundancy in multispectral datasets. Unfortunately, model-based techniques may not be optimal for images with non-Gaussian noise distributions and/or pathology, and multispectral techniques model probabilities instead of the partial volume (PV) fraction. For robust segmentation, a PV fraction estimation approach is developed for cerebral MRI that does not depend on predetermined intensity distribution models or multispectral scans. Instead, the PV fraction is estimated directly from each image using an adaptively defined global edge map constructed by exploiting a relationship between edge content and PVA. The final PVA map is used to segment anatomy and pathology with subvoxel accuracy. Validation on simulated and real, pathology-free T1 MRI (Gaussian noise), as well as pathological fluid attenuation inversion recovery MRI (non-Gaussian noise), demonstrates that the PV fraction is accurately estimated and the resultant segmentation is robust. Comparison to model-based methods further highlights the benefits of the current approach. PMID:26158022

  18. Exploratory analysis of genomic segmentations with Segtools

    PubMed Central

    2011-01-01

    Background As genome-wide experiments and annotations become more prevalent, researchers increasingly require tools to help interpret data at this scale. Many functional genomics experiments involve partitioning the genome into labeled segments, such that segments sharing the same label exhibit one or more biochemical or functional traits. For example, a collection of ChIP-seq experiments yields a compendium of peaks, each labeled with one or more associated DNA-binding proteins. Similarly, manually or automatically generated annotations of functional genomic elements, including cis-regulatory modules and protein-coding or RNA genes, can also be summarized as genomic segmentations. Results We present a software toolkit called Segtools that simplifies and automates the exploration of genomic segmentations. The software operates as a series of interacting tools, each of which provides one mode of summarization. These various tools can be pipelined and summarized in a single HTML page. We describe the Segtools toolkit and demonstrate its use in interpreting a collection of human histone modification data sets and Plasmodium falciparum local chromatin structure data sets. Conclusions Segtools provides a convenient, powerful means of interpreting a genomic segmentation. PMID:22029426

  19. Cell nuclei segmentation for histopathological image analysis

    NASA Astrophysics Data System (ADS)

    Kong, Hui; Belkacem-Boussaid, Kamel; Gurcan, Metin

    2011-03-01

    In this paper, we propose a supervised method for segmenting cell nuclei from background and extra-cellular regions in pathological images. To this end, we segment the cell regions from the other areas by classifying the image pixels into either cell or extra-cellular category. Instead of using pixel color intensities, the color-texture extracted at the local neighborhood of each pixel is utilized as the input to our classification algorithm. The color-texture at each pixel is extracted by local Fourier transform (LFT) from a new color space, the most discriminant color space (MDC). The MDC color space is optimized to be a linear combination of the original RGB color space so that the extracted LFT texture features in the MDC color space can achieve the most discrimination in terms of classification (segmentation) performance. To speed up the texture feature extraction process, we develop an efficient LFT extraction algorithm based on image shifting and image integral. For evaluation, our method is compared with the state-of-the-art segmentation algorithms (Graph-cut, Mean-shift, etc.). Empirical results show that our segmentation method achieves better performance than these popular methods.

  20. Automatic segmentation of tumor-laden lung volumes from the LIDC database

    NASA Astrophysics Data System (ADS)

    O'Dell, Walter G.

    2012-03-01

    The segmentation of the lung parenchyma is often a critical pre-processing step prior to application of computer-aided detection of lung nodules. Segmentation of the lung volume can dramatically decrease computation time and reduce the number of false positive detections by excluding extra-pulmonary tissue from consideration. However, while many algorithms are capable of adequately segmenting the healthy lung, none have been demonstrated to work reliably well on tumor-laden lungs. Of particular challenge is to preserve tumorous masses attached to the chest wall, mediastinum, or major vessels. In this role, lung volume segmentation comprises an important computational step that can adversely affect the performance of the overall CAD algorithm. An automated lung volume segmentation algorithm has been developed with the goals of maximally excluding extra-pulmonary tissue while retaining all true nodules. The algorithm comprises a series of tasks including intensity thresholding, 2-D and 3-D morphological operations, 2-D and 3-D flood-filling, and snake-based clipping of nodules attached to the chest wall. It features the ability to (1) exclude the trachea and bowels, (2) snip large attached nodules using snakes, (3) snip small attached nodules using dilation, (4) preserve large masses fully internal to the lung volume, (5) account for basal aspects of the lung where, in a 2-D slice, the lower sections appear to be disconnected from the main lung, and (6) achieve separation of the right and left hemi-lungs. The algorithm was developed and trained on the first 100 datasets of the LIDC image database.
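    The full pipeline (snake-based clipping, hemi-lung separation) is beyond a short sketch, but the initial thresholding and connected-component steps common to such algorithms could look like this. It assumes a -400 HU air threshold and scipy.ndimage; these choices are illustrative, not taken from the paper.

```python
import numpy as np
from scipy import ndimage

def coarse_lung_mask(ct, air_hu=-400):
    """Coarse lung mask: threshold air-like voxels, discard the air
    component connected to the volume border (air outside the body),
    then keep the (up to) two largest remaining components."""
    air = ct < air_hu
    labels, n = ndimage.label(air)
    # labels appearing on any face of the volume belong to exterior air
    border = set(np.unique(labels[0])) | set(np.unique(labels[-1])) | \
             set(np.unique(labels[:, 0])) | set(np.unique(labels[:, -1])) | \
             set(np.unique(labels[:, :, 0])) | set(np.unique(labels[:, :, -1]))
    sizes = ndimage.sum(air, labels, range(1, n + 1))
    keep = [i + 1 for i in np.argsort(sizes)[::-1]
            if (i + 1) not in border][:2]  # up to two lungs
    mask = np.isin(labels, keep)
    return ndimage.binary_closing(mask, iterations=1)  # smooth boundary
```

    The closing at the end is a stand-in for the 2-D/3-D morphological operations mentioned in the abstract; preserving wall-attached masses requires the snake-based clipping step, which this sketch omits.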

  1. Influence of leg position and environmental temperature on segmental volume expansion during venous occlusion plethysmography.

    PubMed

    Jorfeldt, Lennart; Vedung, Torbjörn; Forsström, Elisabeth; Henriksson, Jan

    2003-06-01

    Blood flow determinations by venous occlusion plethysmography applying the strain-gauge technique are frequently used. A problem with the strain-gauge technique is that the relationship between venous volume and transmural pressure is not linear and, furthermore, changes with the sympathetic tone. The present study tests the hypothesis that these factors lead to a redistribution of venous blood, which may impair the accuracy of the technique. The relative volume expansion rates of four leg segments were studied with the leg in different positions and at disparate temperatures, thereby inducing varying venous pressures and sympathetic tone (n=6). With the leg elevated and the veins relaxed (at 50°C), the distal thigh showed a relatively low expansion rate (25.8±4.5 ml·min⁻¹·l⁻¹), whereas values in the calf segments were higher (34.5-39.0 ml·min⁻¹·l⁻¹). With a lower initial transmural pressure, calf segments can increase their volume much more during occlusion than the distal thigh. In a higher transmural pressure region (lowered leg), the difference in compliance between limb segments is smaller. In this case, compliance and volume expansion rate were higher in the distal thigh (14.2, 13.5 and 22.2 ml·min⁻¹·l⁻¹ at 10, 20 and 50°C, respectively) than in the calf segments (for the distal calf: 6.4, 7.7 and 16.2 ml·min⁻¹·l⁻¹, respectively). There was a significant interaction (P<0.001) between temperature and leg position, indicating a higher degree of sympathetic vasoactivity in the calf. It is concluded that blood flow determination by strain-gauge plethysmography is less accurate due to a potential redistribution of the venous blood. Therefore, the possible influence of variations in sympathetic tone and venous pressure must be considered even in intra-individual comparisons, especially in interventional studies. PMID:12529168
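    For orientation on the units ml·min⁻¹·l⁻¹: the expansion rate is the volume inflow rate normalized by segment volume. Under the common cylindrical-segment approximation for a strain gauge, dV/V ≈ 2·dC/C (this relation is a standard assumption in the field, not stated in the abstract):

```python
def expansion_rate(delta_c_per_min, circumference):
    """Relative volume expansion rate in ml·min⁻¹ per litre of tissue
    from a strain-gauge circumference change rate (same length units
    for both arguments), using dV/V ≈ 2·dC/C for a cylindrical limb
    segment. The factor 1000 converts l to ml."""
    return 2.0 * delta_c_per_min / circumference * 1000.0
```

    For example, a circumference increase of 0.5 cm/min on a 40 cm calf gives 25 ml·min⁻¹·l⁻¹, the same order as the thigh and calf values reported above.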

  2. Automatic large-volume object region segmentation in LiDAR point clouds

    NASA Astrophysics Data System (ADS)

    Varney, Nina M.; Asari, Vijayan K.

    2014-10-01

    LiDAR is a remote sensing method which produces precise point clouds consisting of millions of geo-spatially located 3D data points. Because of the nature of LiDAR point clouds, it can often be difficult for analysts to accurately and efficiently recognize and categorize objects. The goal of this paper is automatic large-volume object region segmentation in LiDAR point clouds. This efficient segmentation technique is intended as a pre-processing step for the eventual classification of objects within the point cloud. The data is initially segmented into local histogram bins. This local histogram bin representation allows for the efficient consolidation of the point cloud data into voxels without the loss of location information. Additionally, by binning the points, important feature information can be extracted, such as the distribution of points, the density of points, and a local ground. From these local histograms, a 3D automatic seeded region growing technique is applied. This technique performs seed selection based on two criteria: similarity and Euclidean distance to nearest neighbors. The neighbors of selected seeds are then examined and assigned labels based on location and Euclidean distance to a region mean. After the initial segmentation step, region integration is performed to rejoin over-segmented regions. The large number of points in LiDAR data can make other segmentation techniques extremely time consuming. In addition to producing accurate object segmentation results, the proposed local histogram binning process allows for efficient segmentation, covering a point cloud of over 9,000 points in 10 seconds.
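    A minimal sketch of the local histogram binning step, assuming axis-aligned cubic voxels and using the per-column minimum height as a crude stand-in for the local-ground feature mentioned above (the paper's exact features are not specified in the abstract):

```python
import numpy as np

def voxel_bin(points, voxel_size):
    """Consolidate an (N, 3) point cloud into a voxel grid of per-voxel
    point counts (a density feature), plus the minimum z per (x, y)
    column as a crude local-ground estimate."""
    mins = points.min(axis=0)
    idx = np.floor((points - mins) / voxel_size).astype(int)
    dims = idx.max(axis=0) + 1
    counts = np.zeros(dims, dtype=int)
    np.add.at(counts, tuple(idx.T), 1)   # scatter-add one count per point
    ground = np.full(tuple(dims[:2]), np.inf)
    np.minimum.at(ground, (idx[:, 0], idx[:, 1]), points[:, 2])
    return counts, ground
```

    Seeded region growing then operates on voxels rather than raw points, which is what makes the quoted 10-second runtime on a 9,000-point cloud plausible.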

  3. Robust semi-automatic segmentation of single- and multichannel MRI volumes through adaptable class-specific representation

    NASA Astrophysics Data System (ADS)

    Nielsen, Casper F.; Passmore, Peter J.

    2002-05-01

    Segmentation of MRI volumes is complicated by noise, inhomogeneity, and partial volume artefacts. Fully or semi-automatic methods often require time-consuming or unintuitive initialization. Adaptable Class-Specific Representation (ACSR) is a semi-automatic segmentation framework implemented by the Path Growing Algorithm (PGA), which reduces artefacts near segment boundaries. The user visually defines the desired segment classes through the selection of class templates, and the following segmentation process is fully automatic. Good results have previously been achieved with color cryo-section segmentation, and ACSR has been developed further for the MRI modality. In this paper we present two optimizations for robust ACSR segmentation of MRI volumes. Automatic template creation based on an initial segmentation step using Learning Vector Quantization is applied for higher robustness to noise. Inhomogeneity correction is added as a pre-processing step, comparing the EQ and N3 algorithms. Results based on simulated T1-weighted and multispectral (T1 and T2) MRI data from the BrainWeb database and real data from the Internet Brain Segmentation Repository are presented. We show that ACSR segmentation compares favorably to previously published results on the same volumes, and discuss the pros and cons of quantitative ground truth evaluation compared with qualitative visual assessment.

  4. A local contrast based approach to threshold segmentation for PET target volume delineation

    SciTech Connect

    Drever, Laura; Robinson, Don M.; McEwan, Alexander; Roa, Wilson

    2006-06-15

    Current radiation therapy techniques, such as intensity-modulated radiation therapy and three-dimensional conformal radiotherapy, rely on the precise delivery of high doses of radiation to well-defined volumes. CT, the imaging modality most commonly used to determine treatment volumes, cannot, however, easily distinguish between cancerous and normal tissue. The ability of positron emission tomography (PET) to more readily differentiate between malignant and healthy tissues has generated great interest in using PET images to delineate target volumes for radiation treatment planning. At present, the accurate geometric delineation of tumor volumes is a subject open to considerable interpretation. The possibility of using a local-contrast-based approach to threshold segmentation to accurately delineate PET target cross sections is investigated using well-defined cylindrical and spherical volumes. Contrast levels that yield correct volumetric quantification are found to be a function of the activity concentration ratio between target and background, the target size, and the slice location. Possibilities for clinical implementation are explored, along with the limits posed by this form of segmentation.
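    One hypothetical parameterization of such a threshold places it at a fixed local-contrast fraction between the background activity and the target maximum; the study's finding is precisely that the correct fraction varies with target-to-background ratio, size, and slice location, so the constant `contrast_level` below is an illustrative assumption:

```python
def contrast_threshold(target_max, background, contrast_level):
    """Segmentation threshold at a local-contrast fraction between
    background activity and the target maximum. contrast_level=0.0
    returns the background level; 1.0 returns the target maximum."""
    return background + contrast_level * (target_max - background)
```

    Voxels at or above the returned activity value would be included in the delineated cross section; calibrating `contrast_level` against phantoms of known volume is what the cited work investigates.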

  5. Automatic segmentation of blood vessels from MR angiography volume data by using fuzzy logic technique

    NASA Astrophysics Data System (ADS)

    Kobashi, Syoji; Hata, Yutaka; Tokimoto, Yasuhiro; Ishikawa, Makato

    1999-05-01

    This paper presents a novel medical image segmentation method applied to blood vessel segmentation from magnetic resonance angiography (MRA) volume data. The principal idea of the method is the fuzzy information granulation concept. The method consists of two parts: (1) quantization and feature extraction, and (2) iterative fuzzy synthesis. In the first part, volume quantization is performed with the watershed segmentation technique. Each quantum is represented by three features: vascularity, narrowness, and histogram consistency. Using these features, we estimate the fuzzy degrees of each quantum with respect to knowledge models of MRA volume data. In the second part, the method increases the fuzzy degrees by selectively synthesizing neighboring quanta. As a result, we obtain synthesized quanta, which we regard as fuzzy granules and classify into blood vessel or fat by evaluating their fuzzy degrees. In the experiments, three-dimensional images were generated using target maximum intensity projection (MIP) and surface shaded display. A comparison with conventional MIP images shows that regions that are unclear in the conventional images are clearly depicted in ours. A qualitative evaluation by a physician shows that our method can extract blood vessel regions and that the results are useful for diagnosing cerebral diseases.
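    The target MIP rendering mentioned above restricts the projection to the segmented vessels; a minimal sketch, assuming a binary vessel mask and suppressing background to the volume minimum:

```python
import numpy as np

def targeted_mip(volume, mask, axis=0):
    """Target maximum intensity projection: project only voxels inside
    the segmented vessel mask along `axis`; all other voxels are set
    to the volume minimum so they never win the maximum."""
    suppressed = np.where(mask, volume, volume.min())
    return suppressed.max(axis=axis)
```

    A conventional MIP is simply `volume.max(axis=axis)`; masking first is what removes the overlapping bright fat that otherwise obscures vessels.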

  6. Hitchhiker’s Guide to Voxel Segmentation for Partial Volume Correction of In Vivo Magnetic Resonance Spectroscopy

    PubMed Central

    Quadrelli, Scott; Mountford, Carolyn; Ramadan, Saadallah

    2016-01-01

    Partial volume effects have the potential to cause inaccuracies when quantifying metabolites using proton magnetic resonance spectroscopy (MRS). In order to correct for cerebrospinal fluid content, a spectroscopic voxel needs to be segmented according to its different tissue contents. This article aims to detail how automated partial volume segmentation can be undertaken and provides a software framework for researchers to develop their own tools. While many studies have detailed the impact of partial volume correction on MRS quantification, there is a paucity of literature explaining how voxel segmentation can be achieved using freely available neuroimaging packages. PMID:27147822
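
    Once the voxel is segmented into tissue fractions, a first-order CSF correction can be sketched as below. This is an assumed, simplified form: metabolites are taken to be absent from CSF, so the measured concentration is rescaled by the non-CSF fraction; full corrections also account for tissue water content and relaxation, which are omitted here.

```python
def csf_corrected_concentration(measured, f_gm, f_wm, f_csf):
    """First-order partial volume correction of an MRS metabolite level.

    f_gm, f_wm, f_csf are the gray matter, white matter, and CSF volume
    fractions of the spectroscopic voxel obtained from segmentation.
    """
    assert abs(f_gm + f_wm + f_csf - 1.0) < 1e-6, "fractions must sum to 1"
    # Metabolites are assumed absent from CSF, so rescale the measured
    # value by the remaining (MR-visible) tissue fraction.
    return measured / (1.0 - f_csf)

# Hypothetical voxel: 60% GM, 20% WM, 20% CSF.
corrected = csf_corrected_concentration(8.0, 0.6, 0.2, 0.2)
print(round(corrected, 3))
```
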

  7. Four-chamber heart modeling and automatic segmentation for 3D cardiac CT volumes

    NASA Astrophysics Data System (ADS)

    Zheng, Yefeng; Georgescu, Bogdan; Barbu, Adrian; Scheuering, Michael; Comaniciu, Dorin

    2008-03-01

    Multi-chamber heart segmentation is a prerequisite for quantification of the cardiac function. In this paper, we propose an automatic heart chamber segmentation system. There are two closely related tasks in developing such a system: heart modeling and automatic model fitting to an unseen volume. The heart is a complicated non-rigid organ with four chambers and several major vessel trunks attached. A flexible and accurate model is necessary to capture the heart chamber shape at an appropriate level of detail. In our four-chamber surface mesh model, the following two factors are considered and traded off: 1) accuracy in anatomy and 2) ease of both annotation and automatic detection. Important landmarks such as valves and cusp points on the interventricular septum are explicitly represented in our model. These landmarks can be detected reliably to guide the automatic model fitting process. We also propose two mechanisms, the rotation-axis based and parallel-slice based resampling methods, to establish mesh point correspondence, which is necessary to build a statistical shape model that enforces prior shape constraints in the model fitting procedure. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3D computed tomography (CT) volumes. Our approach is based on recent advances in learning discriminative object models and we exploit a large database of annotated CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. A novel algorithm, Marginal Space Learning (MSL), is introduced to solve the 9-dimensional similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3D shape through learning-based boundary delineation. Extensive experiments demonstrate the efficiency and robustness of the proposed approach, comparing favorably to the state-of-the-art.
This is the first study reporting stable results on a large cardiac CT dataset with 323 volumes. In addition, we achieve a speed of less than eight seconds for automatic segmentation of all four chambers.

  8. A novel colonic polyp volume segmentation method for computer tomographic colonography

    NASA Astrophysics Data System (ADS)

    Wang, Huafeng; Li, Lihong C.; Han, Hao; Song, Bowen; Peng, Hao; Wang, Yunhong; Wang, Lihua; Liang, Zhengrong

    2014-03-01

    Colorectal cancer is the third most common type of cancer. However, this disease can be prevented by detection and removal of precursor adenomatous polyps after diagnosis by experts on computer tomographic colonography (CTC). During CTC diagnosis, the radiologist looks for colon polyps and assesses not only their size but also their malignancy. Segmenting polyp volumes from their complicated growing environment is of great significance for the CTC-based early diagnosis task. Previously, polyp volumes were mainly obtained from manual or semi-automatic delineation by radiologists. As a result, some deviations cannot be avoided, since the polyps are usually small (6~9 mm) and radiologists' experience and knowledge vary from one to another. In order to achieve automatic polyp segmentation, we proposed a new method based on a colon decomposition strategy. We evaluated our algorithm on both phantom and patient data. Experimental results demonstrate our approach is capable of segmenting small polyps from their complicated growing background.

  9. Semi-automated segmentation of carotid artery total plaque volume from three dimensional ultrasound carotid imaging

    NASA Astrophysics Data System (ADS)

    Buchanan, D.; Gyacskov, I.; Ukwatta, E.; Lindenmaier, T.; Fenster, A.; Parraga, G.

    2012-03-01

    Carotid artery total plaque volume (TPV) is a three-dimensional (3D) ultrasound (US) imaging measurement of carotid atherosclerosis, providing a direct, non-invasive, and regional estimation of atherosclerotic plaque volume - the direct determinant of carotid stenosis and ischemic stroke. While 3DUS measurements of TPV provide the potential to monitor plaque in individual patients and in populations enrolled in clinical trials, until now such measurements have been performed manually, which is laborious, time-consuming, and prone to intra-observer and inter-observer variability. To address this critical translational limitation, here we describe the development and application of a semi-automated 3DUS plaque volume measurement. This semi-automated TPV measurement incorporates three user-selected boundaries in two views of the 3DUS volume to generate a geometric approximation of TPV for each plaque measured. We compared semi-automated repeated measurements to manual segmentation of 22 individual plaques ranging in volume from 2 mm3 to 151 mm3. Mean plaque volume was 43+/-40 mm3 for semi-automated and 48+/-46 mm3 for manual measurements and these were not significantly different (p=0.60). Mean coefficient of variation (CV) was 12.0+/-5.1% for the semi-automated measurements.
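
    The reported coefficient of variation is simply the standard deviation of repeated measurements relative to their mean, expressed as a percentage. A minimal sketch with hypothetical repeat values (not the study's data):

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) of repeated measurements: sample SD divided by the mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical repeated TPV measurements (mm3) of a single plaque.
repeats = [40.0, 44.0, 48.0]
print(round(coefficient_of_variation(repeats), 1))  # → 9.1
```
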

  10. AMASS: algorithm for MSI analysis by semi-supervised segmentation.

    PubMed

    Bruand, Jocelyne; Alexandrov, Theodore; Sistla, Srinivas; Wisztorski, Maxence; Meriaux, Céline; Becker, Michael; Salzet, Michel; Fournier, Isabelle; Macagno, Eduardo; Bafna, Vineet

    2011-10-01

    Mass Spectrometric Imaging (MSI) is a molecular imaging technique that allows the generation of 2D ion density maps for a large complement of the active molecules present in cells and sectioned tissues. Automatic segmentation of such maps according to patterns of co-expression of individual molecules can be used for discovery of novel molecular signatures (molecules that are specifically expressed in particular spatial regions). However, current segmentation techniques are biased toward the discovery of higher abundance molecules and large segments; they allow limited opportunity for user interaction, and validation is usually performed by similarity to known anatomical features. We describe here a novel method, AMASS (Algorithm for MSI Analysis by Semi-supervised Segmentation). AMASS relies on the discriminating power of a molecular signal instead of its intensity as a key feature, uses an internal consistency measure for validation, and allows significant user interaction and supervision as options. An automated segmentation of entire leech embryo data images resulted in segmentation domains congruent with many known organs, including heart, CNS ganglia, nephridia, nephridiopores, and lateral and ventral regions, each with a distinct molecular signature. Likewise, segmentation of a rat brain MSI slice data set yielded known brain features and provided interesting examples of co-expression between distinct brain regions. AMASS represents a new approach for the discovery of peptide masses with distinct spatial features of expression. Software source code and installation and usage guide are available at http://bix.ucsd.edu/AMASS/ . PMID:21800894

  11. Segmentation and learning in the quantitative analysis of microscopy images

    NASA Astrophysics Data System (ADS)

    Ruggiero, Christy; Ross, Amy; Porter, Reid

    2015-02-01

    In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest to use machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.

  12. Effect of cellular acidosis on cell volume in S2 segments of renal proximal tubules.

    PubMed

    Sullivan, L P; Wallace, D P; Clancy, R L; Grantham, J J

    1990-04-01

    The presence of pH-sensitive transport mechanisms in the basolateral membrane of proximal tubular cells suggests that cell volume and its regulation may be sensitive to changes in cell pH. We have measured the response of cell pH and cell volume to changes in the acid-base composition of solutions bathing isolated, lumen-collapsed, proximal S2 tubular segments taken from the rabbit kidney. Cell pH was determined by measurement of the fluorescence emission of 2',7'-bis(carboxyethyl)-5(6)-carboxyfluorescein. Cell volume was calculated from measurements of tubular diameter. An increase in CO2 from 5 to 15% reduced cell pH 0.30 units and raised cell bicarbonate concentration ([HCO3]) 10 mM. Cell volume rose to 108.6% of control in 4 min. A decrease in bath [HCO3] from 25 to 5 mM reduced cell pH 0.41 units and cell [HCO3] by 15 mM. Cell volume gradually increased to 105.7% at 8 min. The rate of the regulatory volume decrease after cell swelling on exposure to a 160 mosM solution was determined in the presence of 5 and 15% CO2. The latter reduced the maximum fractional rate of recovery of volume from 0.18 to 0.11 min-1 but did not affect the extent of regulation. We conclude that acidosis causes cell swelling and reduces the rate of volume regulation in response to hypotonic media. PMID:2109935
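
    The volume-from-diameter calculation described above can be sketched as follows, under the stated assumption of a cylindrical, lumen-collapsed segment of fixed length; the 4.2% diameter change is a hypothetical input chosen to illustrate the reported 108.6%-of-control figure, not a measured value.

```python
def relative_cell_volume(diameter, baseline_diameter):
    """Percent of control volume for a cylindrical tubule segment.

    Assuming constant segment length, volume scales with the square of
    the diameter (V = pi * (d/2)**2 * L), so relative volume is
    (d/d0)**2.
    """
    return 100.0 * (diameter / baseline_diameter) ** 2

# A ~4.2% diameter increase corresponds to ~108.6% of control volume.
print(round(relative_cell_volume(1.042, 1.0), 1))  # → 108.6
```
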

  13. Leaf image segmentation method based on multifractal detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Li, Jin-Wei; Shi, Wen; Liao, Gui-Ping

    2013-12-01

    To identify singular regions of crop leaves affected by diseases, an image segmentation method based on multifractal detrended fluctuation analysis (MF-DFA) is proposed. In the proposed method, we first define a new texture descriptor based on MF-DFA: the local generalized Hurst exponent, denoted LHq. Then, the box-counting dimension f(LHq) is calculated for sub-images constituted by the LHq values of pixels from a specific region. Consequently, a series of f(LHq) values for the different regions can be obtained. Finally, the singular regions are segmented according to the corresponding f(LHq). Images of corn leaves with six kinds of disease are tested in our experiments. The proposed method is compared with two other segmentation methods: multifractal-spectrum-based segmentation and fuzzy C-means clustering. The comparison results demonstrate that the proposed method can recognize the lesion regions more effectively and provides more robust segmentations.
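
    The DFA step underlying the LHq descriptor can be sketched as an ordinary (q = 2) detrended fluctuation analysis on a 1-D signal; the paper's multifractal, image-based version generalizes this over q and pixel neighborhoods. The scale set and signal lengths below are illustrative choices.

```python
import numpy as np

def dfa_exponent(signal, scales=(8, 16, 32, 64)):
    """Estimate the (q = 2) scaling exponent by detrended fluctuation analysis.

    Build the cumulative profile, split it into non-overlapping windows
    of each scale, remove a linear trend per window, and fit the
    log-log slope of the RMS fluctuation against scale.
    """
    profile = np.cumsum(signal - np.mean(signal))
    flucts = []
    for s in scales:
        n = len(profile) // s
        rms = []
        for i in range(n):
            seg = profile[i * s:(i + 1) * s]
            x = np.arange(s)
            coef = np.polyfit(x, seg, 1)            # linear detrend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, x)) ** 2)))
        flucts.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(0)
h_noise = dfa_exponent(rng.standard_normal(4096))            # ~0.5, white noise
h_walk = dfa_exponent(np.cumsum(rng.standard_normal(4096)))  # ~1.5, random walk
print(h_noise < h_walk)
```
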

  14. Segmental Kidney Volumes Measured by Dynamic Contrast-Enhanced Magnetic Resonance Imaging and Their Association With CKD in Older People

    PubMed Central

    Woodard, Todd; Sigurdsson, Sigurdur; Gotal, John D.; Torjesen, Alyssa A.; Inker, Lesley A.; Aspelund, Thor; Eiriksdottir, Gudny; Gudnason, Vilmundur; Harris, Tamara; Launer, Lenore J.; Levey, Andrew S.; Mitchell, Gary F.

    2014-01-01

    Background Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a potentially powerful tool for analysis of kidney structure and function. The ability to measure functional and hypofunctional tissues could provide important information in groups at risk for chronic kidney disease (CKD) like the elderly. Study Design Observational study with a cross-sectional design. Setting & Participants 493 volunteers (72–94 years old; 278 women; mean estimated glomerular filtration rate [eGFR], 67±15 ml/min/1.73 m2; 40% with CKD) in the Age, Gene/Environment Susceptibility (AGES)-Reykjavik study. Predictor DCE-MRI kidney segmentation data. Outcomes & Measurements eGFR, urine albumin-creatinine ratio (ACR), and risk factors for and complications of CKD. Results After adjustment for age, sex and height, eGFR was related to kidney volume (ΔR2=0.19; P<0.001), cortex volume (ΔR2=0.14; P<0.001), medulla volume (ΔR2=0.18; P<0.001) and volume percentages of fibrosis (ΔR2=0.03; P<0.001) and fat (ΔR2=0.01; P=0.03). In similarly adjusted models, log(ACR) was related to kidney volume (ΔR2=0.02; P<0.001) and fibrosis volume percentage (ΔR2=0.03; P<0.001). Using multivariable regression models adjusted for eGFR, ACR, age, sex, and height, kidney volume was related positively to body mass index (β=29.9±2.1 ml [SE]; P<0.001), smoking (β=19.7±7.7 ml; P=0.01) and diabetes mellitus (β=14.8±7.1 ml; P=0.04) and negatively to hematocrit (β=−4.4±2.1 ml; P=0.04 [model R2=0.72; P<0.001]); relations were per 1-SD greater value of the variable. Fibrosis volume percentage was associated positively with body mass index (β=0.28±0.03; P<0.001), cardiac output (β=0.15±0.03; P<0.001), and heart rate (β=0.08±0.03; P=0.01) and negatively with hematocrit (β=−0.07±0.3; P=0.02) and augmentation index (β=−0.06±0.03; P=0.04 [model R2=0.49; P<0.001]); again, relations are per 1-SD greater value of the variable. Limitations Automatic segmentations were not validated by histology. 
The limited age range prevented meaningful interpretation of age effects on measured data or the automatic segmentation procedure. Conclusions Kidney volume, cortex volume, and hypofunctional volume fraction assessed by DCE-MRI may provide information about CKD risk and prognosis beyond that provided by eGFR and urine ACR. PMID:25022339

  15. Automatic delineation of tumor volumes by co-segmentation of combined PET/MR data

    NASA Astrophysics Data System (ADS)

    Leibfarth, S.; Eckert, F.; Welz, S.; Siegel, C.; Schmidt, H.; Schwenzer, N.; Zips, D.; Thorwarth, D.

    2015-07-01

    Combined PET/MRI may be highly beneficial for radiotherapy treatment planning in terms of tumor delineation and characterization. To standardize tumor volume delineation, an automatic algorithm for the co-segmentation of head and neck (HN) tumors based on PET/MR data was developed. Ten HN patient datasets acquired in a combined PET/MR system were available for this study. The proposed algorithm uses both the anatomical T2-weighted MR and FDG-PET data. For both imaging modalities tumor probability maps were derived, assigning each voxel a probability of being cancerous based on its signal intensity. A combination of these maps was subsequently segmented using a threshold level set algorithm. To validate the method, tumor delineations from three radiation oncologists were available. Inter-observer variabilities and variabilities between the algorithm and each observer were quantified by means of the Dice similarity index and a distance measure. Inter-observer variabilities and variabilities between observers and algorithm were found to be comparable, suggesting that the proposed algorithm is adequate for PET/MR co-segmentation. Moreover, taking into account combined PET/MR data resulted in more consistent tumor delineations compared to MR information only.
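
    The Dice similarity index used above to quantify observer-observer and observer-algorithm variability is twice the overlap divided by the summed delineation sizes. A minimal sketch over voxel sets (the coordinates are hypothetical):

```python
def dice_index(mask_a, mask_b):
    """Dice similarity between two binary delineations given as voxel sets."""
    a, b = set(mask_a), set(mask_b)
    return 2.0 * len(a & b) / (len(a) + len(b))

# Two toy delineations sharing 3 of their 4 voxels each.
obs = {(1, 1), (1, 2), (2, 1), (2, 2)}
alg = {(1, 2), (2, 1), (2, 2), (3, 2)}
print(dice_index(obs, alg))  # → 0.75
```

    A Dice value of 1.0 means identical delineations, 0.0 means no overlap; comparable observer-observer and observer-algorithm values, as reported, suggest the algorithm sits within inter-observer variability.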

  16. Partial volume segmentation of brain magnetic resonance images based on maximum a posteriori probability

    SciTech Connect

    Li Xiang; Li Lihong; Lu Hongbing; Liang Zhengrong

    2005-07-15

    Noise, partial volume (PV) effect, and image-intensity inhomogeneity render segmentation of brain magnetic resonance (MR) images a challenging task. Most current MR image segmentation methods focus on only one or two of the above-mentioned effects. The objective of this paper is to propose a unified framework, based on the maximum a posteriori probability principle, that takes all these effects into account simultaneously in order to improve image segmentation performance. Instead of labeling each image voxel with a unique tissue type, the percentage of each voxel belonging to different tissues, which we call a mixture, is considered to address the PV effect. A Markov random field model is used to describe the noise effect by considering the nearby spatial information of the tissue mixture. The inhomogeneity effect is modeled as a bias field characterized by a zero-mean Gaussian prior probability. The well-known fuzzy C-means model is extended to define the likelihood function of the observed image. This framework reduces theoretically, under some assumptions, to the adaptive fuzzy C-means (AFCM) algorithm proposed by Pham and Prince. Digital phantom and real clinical MR images were used to test the proposed framework. Improved performance over the AFCM algorithm was observed in a clinical environment where the inhomogeneity, noise level, and PV effect are commonly encountered.
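
    The fuzzy C-means model that the framework extends can be sketched in its standard form. This is a minimal 1-D illustration only, without the paper's MRF, bias-field, or PV-mixture terms; the data, cluster count, and fuzzifier m = 2 are assumptions.

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=50, seed=0):
    """Standard fuzzy C-means on 1-D intensities."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                       # memberships sum to 1 per point
    for _ in range(iters):
        # Centers: membership-weighted means.
        centers = (u ** m @ x) / (u ** m).sum(axis=1)
        # Memberships: inverse-distance update, u_ik ∝ d_ik^(-2/(m-1)).
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=0)
    return np.sort(centers), u

# Two well-separated synthetic "tissue" intensity groups.
x = np.array([0.0, 0.1, 0.2, 9.8, 10.0, 10.2])
centers, u = fuzzy_c_means(x)
print(centers.round(2))
```

    Each voxel gets a graded membership in every class rather than a hard label, which is what makes the model a natural starting point for representing partial volume mixtures.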

  17. Chest-wall segmentation in automated 3D breast ultrasound images using thoracic volume classification

    NASA Astrophysics Data System (ADS)

    Tan, Tao; van Zelst, Jan; Zhang, Wei; Mann, Ritse M.; Platel, Bram; Karssemeijer, Nico

    2014-03-01

    Computer-aided detection (CAD) systems are expected to improve the effectiveness and efficiency of radiologists in reading automated 3D breast ultrasound (ABUS) images. One challenging task in developing CAD is to reduce the large number of false positives, many of which originate from acoustic shadowing caused by ribs. Determining the location of the chest wall in ABUS is therefore necessary in CAD systems to remove these false positives. Additionally, it can be used as an anatomical landmark for inter- and intra-modal image registration. In this work, we extended our previously developed chest-wall segmentation method, which fits a cylinder to automatically detected rib-surface points, by minimizing a cost function that incorporates a region cost term computed from a thoracic volume classifier to improve segmentation accuracy. We examined the performance on a dataset of 52 images where our previously developed method fails. Using the region-based cost, the average mean distance of the annotated points to the segmented chest wall decreased from 7.57±2.76 mm to 6.22±2.86 mm.

  18. Machine learning based vesselness measurement for coronary artery segmentation in cardiac CT volumes

    NASA Astrophysics Data System (ADS)

    Zheng, Yefeng; Loziczonek, Maciej; Georgescu, Bogdan; Zhou, S. Kevin; Vega-Higuera, Fernando; Comaniciu, Dorin

    2011-03-01

    Automatic coronary centerline extraction and lumen segmentation facilitate the diagnosis of coronary artery disease (CAD), which is a leading cause of death in developed countries. Various coronary centerline extraction methods have been proposed and most of them are based on shortest path computation given one or two end points on the artery. The major variation of the shortest path based approaches is in the different vesselness measurements used for the path cost. An empirically designed measurement (e.g., the widely used Hessian vesselness) is by no means optimal in the use of image context information. In this paper, a machine learning based vesselness is proposed by exploiting the rich domain specific knowledge embedded in an expert-annotated dataset. For each voxel, we extract a set of geometric and image features. The probabilistic boosting tree (PBT) is then used to train a classifier, which assigns a high score to voxels inside the artery and a low score to those outside. The detection score can be treated as a vesselness measurement in the computation of the shortest path. Since the detection score measures the probability of a voxel to be inside the vessel lumen, it can also be used for the coronary lumen segmentation. To speed up the computation, we perform classification only for voxels around the heart surface, which is achieved by automatically segmenting the whole heart from the 3D volume in a preprocessing step. An efficient voxel-wise classification strategy is used to further improve the speed. Experiments demonstrate that the proposed learning based vesselness outperforms the conventional Hessian vesselness in both speed and accuracy. On average, it only takes approximately 2.3 seconds to process a large volume with a typical size of 512x512x200 voxels.
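
    The shortest-path use of a vesselness score can be sketched as below. This is an illustrative 2-D Dijkstra search, not the paper's PBT classifier or 3D pipeline; the step cost 1 − vesselness and the toy map are assumed for the example.

```python
import heapq

def shortest_path(vesselness, start, end):
    """Centerline extraction as a shortest path over a vesselness map.

    Each voxel's step cost is (1 - vesselness), so the path prefers
    voxels scored as likely lumen. 2-D grid and 4-connectivity for brevity.
    """
    rows, cols = len(vesselness), len(vesselness[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == end:
            break
        if d > dist.get(node, float("inf")):
            continue                         # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1.0 - vesselness[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [end], end                  # walk back from the end point
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy map: a high-vesselness ridge along the top row and right column.
v = [[0.9, 0.9, 0.9],
     [0.1, 0.1, 0.9],
     [0.1, 0.1, 0.1]]
print(shortest_path(v, (0, 0), (2, 2)))
```

    Swapping the hand-crafted Hessian vesselness for a learned score changes only the cost map in this formulation, which is why the two are directly comparable in speed and accuracy.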

  19. Whole-body and segmental muscle volume are associated with ball velocity in high school baseball pitchers

    PubMed Central

    Yamada, Yosuke; Yamashita, Daichi; Yamamoto, Shinji; Matsui, Tomoyuki; Seo, Kazuya; Azuma, Yoshikazu; Kida, Yoshikazu; Morihara, Toru; Kimura, Misaka

    2013-01-01

    The aim of the study was to examine the relationship between pitching ball velocity and segmental (trunk, upper arm, forearm, upper leg, and lower leg) and whole-body muscle volume (MV) in high school baseball pitchers. Forty-seven male high school pitchers (40 right-handers and seven left-handers; age, 16.2 ± 0.7 years; stature, 173.6 ± 4.9 cm; mass, 65.0 ± 6.8 kg, years of baseball experience, 7.5 ± 1.8 years; maximum pitching ball velocity, 119.0 ± 9.0 km/hour) participated in the study. Segmental and whole-body MV were measured using segmental bioelectrical impedance analysis. Maximum ball velocity was measured with a sports radar gun. The MV of the dominant arm was significantly larger than the MV of the non-dominant arm (P < 0.001). There was no difference in MV between the dominant and non-dominant legs. Whole-body MV was significantly correlated with ball velocity (r = 0.412, P < 0.01). Trunk MV was not correlated with ball velocity, but the MV for both lower legs, and the dominant upper leg, upper arm, and forearm were significantly correlated with ball velocity (P < 0.05). The results were not affected by age or years of baseball experience. Whole-body and segmental MV are associated with ball velocity in high school baseball pitchers. However, the contribution of the muscle mass on pitching ball velocity is limited, thus other fundamental factors (ie, pitching skill) are also important. PMID:24379713
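
    The reported associations are Pearson correlation coefficients. A minimal sketch with hypothetical muscle-volume and ball-velocity pairs (illustrative values, not the study's measurements):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical whole-body muscle volumes (L) and ball velocities (km/h).
mv = [20.1, 21.5, 22.0, 23.4, 24.8, 25.9]
velocity = [108.0, 113.0, 112.0, 118.0, 121.0, 126.0]
r = pearson_r(mv, velocity)
print(round(r, 2))
```
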

  20. Segment clustering methodology for unsupervised Holter recordings analysis

    NASA Astrophysics Data System (ADS)

    Rodríguez-Sotelo, Jose Luis; Peluffo-Ordoñez, Diego; Castellanos Dominguez, German

    2015-01-01

    Cardiac arrhythmia analysis on Holter recordings is an important issue in clinical settings; however, it implicitly involves other problems related to the large amount of unlabelled data, which entails a high computational cost. In this work an unsupervised methodology based on a segment framework is presented, which consists of dividing the raw data into a balanced number of segments in order to identify fiducial points and to characterize and cluster the heartbeats in each segment separately. The resulting clusters are merged or split according to an assumed criterion of homogeneity. This framework compensates for the high computational cost of Holter analysis, making its implementation for real-time applications possible. The performance of the method is measured on the records of the MIT/BIH arrhythmia database and achieves high values of sensitivity and specificity, taking advantage of the database labels, for a broad range of heartbeat types recommended by the AAMI.

  1. Microscopy image segmentation tool: Robust image data analysis

    SciTech Connect

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-03-15

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible, allowing incorporation of specialized user-developed analyses. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  2. A comprehensive segmentation analysis of crude oil market based on time irreversibility

    NASA Astrophysics Data System (ADS)

    Xia, Jianan; Shang, Pengjian; Lu, Dan; Yin, Yi

    2016-05-01

    In this paper, we perform a comprehensive entropic segmentation analysis of crude oil futures prices from 1983 to 2014, using the Jensen-Shannon divergence as the statistical distance between segments, and analyze the results from the original series S and the series beginning in 1986 (marked as S∗) to find common segments which have the same boundaries. We then apply time irreversibility analysis to each segment to divide all segments into two groups according to their degree of asymmetry. Based on the temporal distribution of the common segments and the high-asymmetry segments, we find that these two types of segments appear alternately and essentially do not overlap in the daily group, while the common portions are also high-asymmetry segments in the weekly group. In addition, the temporal distribution of the common segments is fairly close to the times of crises, wars, and other events, because the shock from severe events to the oil price makes these common segments quite different from their adjacent segments. The common segments can be confirmed in the daily or weekly group series due to the large divergence between common segments and their neighbors, while the identification of high-asymmetry segments helps to identify the segments which were not badly affected by the events and could recover to steady states automatically. Finally, we rearrange the segments by merging connected common segments or high-asymmetry segments into single segments, and conjoin the connected segments which are neither common nor highly asymmetric.
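
    The Jensen-Shannon divergence used as the statistical distance between segments can be computed as below; a minimal sketch over normalized histograms (the toy distributions are illustrative):

```python
import math

def jensen_shannon(p, q):
    """JS divergence (base-2, in bits) between two normalized histograms."""
    def kl(a, b):
        # Kullback-Leibler divergence; zero-probability bins contribute 0.
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.5, 0.5, 0.0]
q = [0.0, 0.5, 0.5]
print(jensen_shannon(p, p))            # identical segments diverge by 0
print(round(jensen_shannon(p, q), 3))  # → 0.5
```

    Unlike the raw KL divergence, the JS divergence is symmetric and bounded (between 0 and 1 bit), which makes it usable as a distance when deciding where one price regime ends and the next begins.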

  3. Analysis of recent segmental duplications in the bovine genome

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Duplicated sequences are an important source of gene innovation and structural variation within mammalian genomes. We describe the first systematic and genome-wide analysis of segmental duplications in the modern domesticated cattle (Bos taurus). Using two distinct computational analyses, we estimat...

  4. Atrophy of the Cerebellar Vermis in Essential Tremor: Segmental Volumetric MRI Analysis.

    PubMed

    Shin, Hyeeun; Lee, Dong-Kyun; Lee, Jong-Min; Huh, Young-Eun; Youn, Jinyoung; Louis, Elan D; Cho, Jin Whan

    2016-04-01

    Postmortem studies of essential tremor (ET) have demonstrated the presence of degenerative changes in the cerebellum, and imaging studies have examined related structural changes in the brain. However, their results have not been completely consistent and the number of imaging studies has been limited. We aimed to study cerebellar involvement in ET using MRI segmental volumetric analysis. In addition, a unique feature of this study was that we stratified ET patients into subtypes based on the clinical presence of cerebellar signs and compared their MRI findings. Thirty-nine ET patients and 36 normal healthy controls, matched for age and sex, were enrolled. Cerebellar signs in ET patients were assessed using the clinical tremor rating scale and International Cooperative Ataxia Rating Scale. ET patients were divided into two groups: patients with cerebellar signs (cerebellar-ET) and those without (classic-ET). MRI volumetry was performed using CIVET pipeline software. Data on whole and segmented cerebellar volumes were analyzed using SPSS. While there was a trend for whole cerebellar volume to decrease from controls to classic-ET to cerebellar-ET, this trend was not significant. The volume of several contiguous segments of the cerebellar vermis was reduced in ET patients versus controls. Furthermore, these vermis volumes were reduced in the cerebellar-ET group versus the classic-ET group. The volume of several adjacent segments of the cerebellar vermis was reduced in ET. This effect was more evident in ET patients with clinical signs of cerebellar dysfunction. The presence of tissue atrophy suggests that ET might be a neurodegenerative disease. PMID:26062905

  5. 4-D segmentation and normalization of 3He MR images for intrasubject assessment of ventilated lung volumes

    NASA Astrophysics Data System (ADS)

    Contrella, Benjamin; Tustison, Nicholas J.; Altes, Talissa A.; Avants, Brian B.; Mugler, John P., III; de Lange, Eduard E.

    2012-03-01

    Although 3He MRI permits compelling visualization of the pulmonary air spaces, quantitation of absolute ventilation is difficult due to confounds such as field inhomogeneity and relative intensity differences between image acquisitions; the latter complicates longitudinal investigations of ventilation variation with respiratory alterations. To address these potential difficulties, we present a 4-D segmentation and normalization approach for intra-subject quantitative analysis of hyperpolarized 3He lung MRI. After normalization, which combines bias correction and relative intensity scaling between longitudinal data, partitioning of the lung volume time series is performed by iterating between modeling of the combined intensity histogram as a Gaussian mixture model and modulating the spatially heterogeneous tissue class assignments through Markov random field modeling. The algorithm was retrospectively evaluated on a cohort of 10 asthmatics aged 19-25 years in which spirometry and 3He MR ventilation images were acquired both before and after respiratory exacerbation by a bronchoconstricting agent (methacholine). Acquisition was repeated under the same conditions 7 to 467 days (mean +/- standard deviation: 185 +/- 37.2) later. Several techniques were evaluated for matching intensities between the pre- and post-methacholine images, with 95th-percentile histogram matching demonstrating superior correlations with spirometry measures. Subsequent analysis evaluated segmentation parameters for assessing ventilation change in this cohort. Current findings also support previous research that areas of poor ventilation in response to bronchoconstriction are relatively consistent over time.
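
    The 95th-percentile intensity matching reported to perform best can be sketched as a simple rescaling of one acquisition to the other. The synthetic ramp images are illustrative stand-ins for the pre/post volumes.

```python
import numpy as np

def match_p95(moving, reference):
    """Scale `moving` so its 95th-percentile intensity matches `reference`.

    A sketch of percentile-based relative intensity normalization for
    longitudinal comparison; using the 95th percentile rather than the
    maximum makes the scaling robust to a few hot voxels.
    """
    scale = np.percentile(reference, 95) / np.percentile(moving, 95)
    return moving * scale

ref = np.linspace(0, 100, 1001)
mov = np.linspace(0, 50, 1001)   # same "anatomy", half the signal scale
out = match_p95(mov, ref)
print(round(float(np.percentile(out, 95)), 1))  # → 95.0
```
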

  6. Volume change of segments II and III of the liver after gastrectomy in patients with gastric cancer

    PubMed Central

    Ozutemiz, Can; Obuz, Funda; Taylan, Abdullah; Atila, Koray; Bora, Seymen; Ellidokuz, Hulya

    2016-01-01

PURPOSE We aimed to evaluate the relationship between gastrectomy and the volume of liver segments II and III in patients with gastric cancer. METHODS Computed tomography images of 54 patients who underwent curative gastrectomy for gastric adenocarcinoma were retrospectively evaluated by two blinded observers. Volumes of the total liver and of segments II and III were measured, and preoperative and postoperative volume measurements were compared. RESULTS Total liver volumes measured by both observers in the preoperative and postoperative scans were similar (P > 0.05), with high inter-observer correlation (preoperative r=0.99; postoperative r=0.98). Total liver volumes showed a mean reduction of 13.4% after gastrectomy (P = 0.977). The mean volume of segments II and III showed a similar decrease in the measurements of both observers (38.4% vs. 36.4%, P = 0.363); the correlation between the observers was high (preoperative r=0.97, P < 0.001; postoperative r=0.99, P < 0.001). The volume decrease in the rest of the liver did not differ between the observers (8.2% vs. 9.1%, P = 0.388). Time had poor correlation with the volume change of segments II and III and of the total liver for each observer (observer 1, rseg2/3=0.32, rtotal=0.13; observer 2, rseg2/3=0.37, rtotal=0.16). CONCLUSION Segments II and III of the liver showed significant atrophy after gastrectomy compared with the rest of the liver and the total liver. Volume reduction had poor correlation with time. PMID:26899148

  7. Linear test bed. Volume 1: Test bed no. 1. [aerospike test bed with segmented combustor]

    NASA Technical Reports Server (NTRS)

    1972-01-01

The objective of the Linear Test Bed program was to design, fabricate, and evaluation-test an advanced aerospike test bed employing the segmented combustor concept. The system, designated a linear aerospike system, consists of a thrust chamber assembly, a power package, and a thrust frame. It was designed as an experimental system to demonstrate the feasibility of the linear aerospike-segmented combustor concept. The overall dimensions are 120 inches long by 120 inches wide by 96 inches high. The propellants are liquid oxygen/liquid hydrogen. The system was designed to operate at a chamber pressure of 1200 psia and a mixture ratio of 5.5; at the design conditions, the sea level thrust is 200,000 pounds. The complete program, including concept selection, design, fabrication, component test, system test, supporting analysis, and posttest hardware inspection, is described.

  8. Salted and preserved duck eggs: a consumer market segmentation analysis.

    PubMed

    Arthur, Jennifer; Wiseman, Kelleen; Cheng, K M

    2015-08-01

The combination of increasing ethnic diversity in North America and growing consumer support for local food products may present opportunities for local producers and processors in the ethnic foods product category. Our study examined the ethnic Chinese (pop. 402,000) market for salted and preserved duck eggs in Vancouver, British Columbia (BC), Canada. The objective of the study was to develop a segmentation model using survey data to categorize consumer groups based on their attitudes and the importance they placed on product attributes. We further used post-segmentation acculturation score, demographics and buyer behaviors to define these groups. Data were gathered via a survey of randomly selected Vancouver households with Chinese surnames (n = 410), targeting the adult responsible for grocery shopping. Results from principal component analysis and a 2-step cluster analysis suggest the existence of 4 market segments, described as Enthusiasts, Potentialists, Pragmatists, Health Skeptics (salted duck eggs), and Neutralists (preserved duck eggs). Kruskal-Wallis tests and post hoc Mann-Whitney tests found significant differences between segments in terms of attitudes and the importance placed on product characteristics. Health Skeptics, preserved egg Potentialists, and Pragmatists of both egg products were significantly biased against Chinese imports compared to others. Except for Enthusiasts, segments disagreed that eggs are 'Healthy Products'. Preserved egg Enthusiasts had a significantly lower acculturation score (AS) compared to all others, while salted egg Enthusiasts had a lower AS compared to Health Skeptics. All segments rated "produced in BC, not mainland China" products in the "neutral to very likely" range for increasing their satisfaction with the eggs. Results also indicate that buyers of each egg type are willing to pay an average premium of at least 10% more for BC produced products versus imports, with all other characteristics equal.
Overall results indicate that opportunities exist for local producers and processors: Chinese Canadians with lower AS form a core part of the potential market. PMID:26089479

  9. Small rural hospitals: an example of market segmentation analysis.

    PubMed

    Mainous, A G; Shelby, R L

    1991-01-01

    In recent years, market segmentation analysis has shown increased popularity among health care marketers, although marketers tend to focus upon hospitals as sellers. The present analysis suggests that there is merit to viewing hospitals as a market of consumers. Employing a random sample of 741 small rural hospitals, the present investigation sought to determine, through the use of segmentation analysis, the variables associated with hospital success (occupancy). The results of a discriminant analysis yielded a model which classifies hospitals with a high degree of predictive accuracy. Successful hospitals have more beds and employees, and are generally larger and have more resources. However, there was no significant relationship between organizational success and number of services offered by the institution. PMID:10111266
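The discriminant-analysis idea above (classifying hospitals as successful or not from size-related variables) can be sketched with a two-class Fisher discriminant on synthetic data; the feature names, means, and sample sizes below are invented for illustration, not taken from the study:

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher discriminant: returns (w, t) such that a sample
    x is assigned to class 1 ("successful") when w @ x > t."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # pooled scatter
    w = np.linalg.solve(Sw, m1 - m0)
    t = float(w @ (m0 + m1) / 2.0)  # threshold at the midpoint projection
    return w, t

# Hypothetical (beds, employees) features for low- and high-occupancy hospitals.
rng = np.random.default_rng(1)
low = rng.normal(loc=[30.0, 60.0], scale=8.0, size=(50, 2))
high = rng.normal(loc=[110.0, 280.0], scale=25.0, size=(50, 2))
w, t = fisher_lda(low, high)
```

With well-separated groups the midpoint threshold classifies nearly all samples correctly, mirroring the "high degree of predictive accuracy" reported for the discriminant model.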

  10. Documented Safety Analysis for the B695 Segment

    SciTech Connect

    Laycak, D

    2008-09-11

This Documented Safety Analysis (DSA) was prepared for the Lawrence Livermore National Laboratory (LLNL) Building 695 (B695) Segment of the Decontamination and Waste Treatment Facility (DWTF). The report provides comprehensive information on design and operations, including safety programs and safety structures, systems, and components, to address the potential process-related hazards, natural phenomena, and external hazards that can affect the public, facility workers, and the environment. Consideration is given to all modes of operation, including the potential for both equipment failure and human error. The facilities known collectively as the DWTF are used by LLNL's Radioactive and Hazardous Waste Management (RHWM) Division to store and treat regulated wastes generated at LLNL. RHWM generally processes low-level radioactive waste with no, or extremely low, concentrations of transuranics (e.g., much less than 100 nCi/g). Wastes processed often contain only depleted uranium and beta- and gamma-emitting nuclides, e.g., ⁹⁰Sr, ¹³⁷Cs, or ³H. The mission of the B695 Segment centers on container storage, lab-packing, repacking, overpacking, bulking, sampling, waste transfer, and waste treatment. The B695 Segment is used for storage of radioactive waste (including transuranic and low-level), hazardous, nonhazardous, mixed, and other waste. Storage of hazardous and mixed waste in B695 Segment facilities is in compliance with the Resource Conservation and Recovery Act (RCRA). LLNL is operated by Lawrence Livermore National Security, LLC, for the Department of Energy (DOE), and the B695 Segment is operated by the RHWM Division of LLNL. Many operations in the B695 Segment are performed under an RCRA operation plan, similar to commercial treatment operations with best demonstrated available technologies.
The buildings of the B695 Segment were designed and built considering such operations, using proven building systems, and keeping them as simple as possible while complying with industry standards and institutional requirements. No operations to be performed in the B695 Segment or building system are considered to be complex. No anticipated future change in the facility mission is expected to impact the extent of safety analysis documented in this DSA.

  11. Three-dimensional segmentation of pulmonary artery volume from thoracic computed tomography imaging

    NASA Astrophysics Data System (ADS)

    Lindenmaier, Tamas J.; Sheikh, Khadija; Bluemke, Emma; Gyacskov, Igor; Mura, Marco; Licskai, Christopher; Mielniczuk, Lisa; Fenster, Aaron; Cunningham, Ian A.; Parraga, Grace

    2015-03-01

Chronic obstructive pulmonary disease (COPD) is a major contributor to hospitalization and healthcare costs in North America. While the hallmark of COPD is airflow limitation, it is also associated with abnormalities of the cardiovascular system. Enlargement of the pulmonary artery (PA) is a morphological marker of pulmonary hypertension and was previously shown to predict acute exacerbations using a one-dimensional diameter measurement of the main PA. We hypothesized that a three-dimensional (3D) quantification of PA size would be more sensitive than 1D methods and would encompass morphological changes along the entire central pulmonary artery. Hence, we developed a 3D measurement of the main (MPA), left (LPA), and right (RPA) pulmonary arteries as well as total PA volume (TPAV) from thoracic CT images. This approach incorporates segmentation of the pulmonary vessels in cross-section for the MPA, LPA, and RPA to provide an estimate of their volumes. Three observers performed five repeated measurements for 15 ex-smokers with ≥10 pack-years, randomly identified from a larger dataset of 199 patients. There was strong agreement (r2=0.76) between PA volume and PA diameter measurements, the latter used as a gold standard. Observer measurements were strongly correlated, and coefficients of variation for observer 1 (MPA: 2%, LPA: 3%, RPA: 2%, TPAV: 2%) were not significantly different from those of observers 2 and 3. In conclusion, we generated manual 3D pulmonary artery volume measurements from thoracic CT images that can be performed with high reproducibility. Future work will involve automation for implementation in clinical workflows.
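The volume estimate from segmented cross-sections reduces to a Riemann sum of per-slice areas times the slice spacing; a minimal sketch (the cylinder dimensions below are hypothetical):

```python
import numpy as np

def vessel_volume_mm3(areas_mm2, spacing_mm):
    """Estimate vessel volume by summing per-slice cross-sectional
    areas multiplied by the slice spacing (a simple Riemann sum)."""
    return float(np.sum(areas_mm2) * spacing_mm)

# Hypothetical vessel: a 50 mm long cylinder of radius 10 mm on 1 mm slices.
areas = np.full(50, np.pi * 10.0**2)
vol = vessel_volume_mm3(areas, 1.0)
```

For a true cylinder this recovers pi * r^2 * length exactly; for a real artery the per-slice areas come from the manual cross-sectional segmentations.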

  12. Automatic comic page image understanding based on edge segment analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

    Comic page image understanding aims to analyse the layout of the comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique to produce the digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method is performed on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms the existing methods.

  13. An automatic method of brain tumor segmentation from MRI volume based on the symmetry of brain and level set method

    NASA Astrophysics Data System (ADS)

    Li, Xiaobing; Qiu, Tianshuang; Lebonvallet, Stephane; Ruan, Su

    2010-02-01

This paper presents a brain tumor segmentation method which automatically segments tumors from human brain MRI volumes. The model is based on the symmetry of the human brain and the level set method. First, the midsagittal plane of the MRI volume is located, slices potentially containing tumor are identified according to their symmetry, and an initial tumor boundary is determined in the slice where the tumor appears largest, using watershed and morphological algorithms. Second, the level set method is applied to this initial boundary to drive the curve to evolve and stop at the appropriate tumor boundary. Finally, the tumor boundary is propagated slice by slice, serving as the initial boundary for each adjacent slice, until the whole tumor is segmented through the volume. The experimental results were compared with expert manual tracings and show relatively good agreement.

  14. Asymmetry analysis of breast thermograms with morphological image segmentation.

    PubMed

    Tang, Xianwu; Ding, Haishu

    2005-01-01

Breast thermography is considered particularly valuable for the early detection of breast tumors. A fast-growing tumor has a higher metabolic rate and an associated increase in local vascularization, which causes asymmetric heat patterns. Clinical interpretation of a breast thermogram is primarily based on visual, subjective asymmetry analysis of these heat patterns. In this paper, a new approach to asymmetry analysis of breast thermograms is proposed. The heat patterns are first segmented with mathematical morphology, and the asymmetry analysis is then performed both qualitatively and quantitatively on the extracted features. The abnormality of a breast thermogram is clearly indicated by these features. PMID:17282535
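The left-right comparison underlying asymmetry analysis can be sketched as a mirror-difference score; the toy 4x4 "thermogram" below is illustrative, and the paper's morphological segmentation step is omitted:

```python
import numpy as np

def asymmetry_index(image):
    """Quantify left-right asymmetry of a thermogram as the mean
    absolute difference between the image and its horizontal mirror,
    normalized by the overall mean intensity."""
    mirrored = image[:, ::-1]
    return float(np.abs(image - mirrored).mean() / image.mean())

symmetric = np.ones((4, 4))
hot_spot = symmetric.copy()
hot_spot[1, 0] = 3.0  # an asymmetric "heat pattern" on one side
```

A perfectly symmetric image scores zero; a one-sided hot region raises the index, which is the qualitative behavior the extracted features are meant to capture.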

  15. Three-dimensional model-guided segmentation and analysis of medical images

    NASA Astrophysics Data System (ADS)

    Arata, Louis K.; Dhawan, Atam P.; Broderick, Joseph; Gaskill, Mary

    1992-06-01

Automated or semi-automated analysis and labeling of structural brain images, such as magnetic resonance (MR) and computed tomography, is desirable for a number of reasons. Quantification of brain volumes can aid in the study of various diseases and of the effect of various drug regimens. A labeled structural image, when registered with a functional image such as positron emission tomography or single photon emission computed tomography, allows the quantification of activity in brain subvolumes such as the major lobes. Because even low-resolution scans (7.5 to 8.0 mm slices) require 15 to 17 slices to image the entire head of the subject, hand segmentation of these slices is very laborious. However, because of the spatial complexity of many brain structures, notably the ventricles, automatic segmentation is not a simple undertaking. To accurately segment a structure such as the ventricles, we must have a model of equal complexity to guide the segmentation, and one that can incorporate the variability among different subjects from a pre-specified group. Analysis of MR brain scans is accomplished by utilizing the data from T2-weighted and proton density images to isolate the regions of interest. Identification is then done automatically with the aid of a composite model formed from the operator-assisted segmentation of MR scans of subjects from the same group. We describe the construction of the model and demonstrate its use in the segmentation and labeling of the ventricles in the brain.

  16. Influence of cold walls on PET image quantification and volume segmentation: A phantom study

    SciTech Connect

    Berthon, B.; Marshall, C.; Edwards, A.; Spezi, E.; Evans, M.

    2013-08-15

Purpose: Commercially available fillable plastic inserts used in positron emission tomography phantoms usually have thick plastic walls separating their content from the background activity. These “cold” walls can modify the intensity values of neighboring active regions due to the partial volume effect, resulting in errors in the estimation of standardized uptake values. Numerous papers suggest that this is an issue for phantom work simulating tumor tissue, quality control, and calibration work. This study aims to investigate the influence of the cold plastic wall thickness on the quantification of 18F-fluorodeoxyglucose, on the image activity recovery, and on the performance of advanced automatic segmentation algorithms for the delineation of active regions delimited by plastic walls. Methods: A commercial set of six spheres of different diameters was replicated using a manufacturing technique which achieves a reduction in plastic wall thickness of up to 90% while keeping the same internal volume. Both sets of thin- and thick-wall inserts were imaged simultaneously in a custom phantom for six different tumor-to-background ratios (TBRs). Intensity values were compared in terms of the mean and maximum standardized uptake values in the spheres and the mean SUV of the hottest 1 ml region (SUVmean, SUVmax, and SUVpeak). The recovery coefficient (RC) was also derived for each sphere. The results were compared against the values predicted by a theoretical model of the PET-intensity profiles for the same TBRs, sphere sizes, and wall thicknesses. In addition, ten automatic segmentation methods, written in house, were applied to both thin- and thick-wall inserts.
The contours obtained were compared to a computed tomography derived gold standard (“ground truth”) using five different accuracy metrics. Results: The authors' results showed that thin-wall inserts achieved significantly higher SUVmean, SUVmax, and RC values (up to 25%, 16%, and 25% higher, respectively) compared to thick-wall inserts, in agreement with the theory. This effect decreased with increasing sphere size and TBR, and resulted in substantial (>5%) differences between thin- and thick-wall inserts for spheres up to 30 mm in diameter and TBRs up to 4. Thinner plastic walls were also shown to significantly improve the delineation accuracy for the majority of the segmentation methods tested, by increasing the proportion of lesion voxels detected, although the errors in image quantification remained non-negligible. Conclusions: This study quantified the significant effect of a 90% reduction in the thickness of insert walls on SUV quantification and PET-based boundary detection. Mean SUVs inside the inserts and recovery coefficients were particularly affected by the presence of thick cold walls, as predicted by a theoretical approach. The accuracy of some delineation algorithms was also significantly improved by the introduction of thin-wall inserts instead of thick-wall inserts. This study demonstrates the risk of errors deriving from the use of cold-wall inserts to assess and compare the performance of PET segmentation methods.
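The recovery coefficient reported above measures how much of the true activity survives the scanner's partial volume effect. A hedged 1-D sketch, modeling the point-spread function as a Gaussian blur (the sphere widths, TBR, and PSF width below are invented for illustration):

```python
import numpy as np

def recovery_coefficient(true_profile, psf_sigma_px):
    """RC of a hot region after Gaussian point-spread blurring:
    measured maximum divided by true maximum."""
    x = np.arange(-25, 26)
    psf = np.exp(-0.5 * (x / psf_sigma_px) ** 2)
    psf /= psf.sum()  # normalize so the blur conserves total activity
    measured = np.convolve(true_profile, psf, mode="same")
    return float(measured.max() / true_profile.max())

background, hot = 1.0, 4.0                               # TBR = 4
i = np.arange(200)
small = np.where(np.abs(i - 100) < 5, hot, background)   # narrow "sphere"
large = np.where(np.abs(i - 100) < 30, hot, background)  # wide "sphere"
rc_small = recovery_coefficient(small, 4.0)
rc_large = recovery_coefficient(large, 4.0)
```

As in the phantom study, the smaller region recovers a smaller fraction of its true activity, and RC rises toward 1 as the region grows relative to the PSF.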

  17. Education, Work and Employment--Volume II. Segmented Labour Markets, Workplace Democracy and Educational Planning, Education and Self-Employment.

    ERIC Educational Resources Information Center

    Carnoy, Martin; And Others

    This volume contains three studies covering separate yet complementary aspects of the problem of the relationships between the educational system and the production system as manpower user. The first monograph on the theories of the markets seeks to answer two questions: what can be learned from the work done on the segmentation of the labor…

  18. High volume data storage architecture analysis

    NASA Technical Reports Server (NTRS)

    Malik, James M.

    1990-01-01

    A High Volume Data Storage Architecture Analysis was conducted. The results, presented in this report, will be applied to problems of high volume data requirements such as those anticipated for the Space Station Control Center. High volume data storage systems at several different sites were analyzed for archive capacity, storage hierarchy and migration philosophy, and retrieval capabilities. Proposed architectures were solicited from the sites selected for in-depth analysis. Model architectures for a hypothetical data archiving system, for a high speed file server, and for high volume data storage are attached.

  19. Study of tracking and data acquisition system for the 1990's. Volume 4: TDAS space segment architecture

    NASA Technical Reports Server (NTRS)

    Orr, R. S.

    1984-01-01

Tracking and data acquisition system (TDAS) requirements, TDAS architectural goals, enhanced TDAS subsystems, constellation and networking options, TDAS spacecraft options, crosslink implementation, baseline TDAS space segment architecture, and threat model development/security analysis are addressed.

  20. Three dimensional level set based semiautomatic segmentation of atherosclerotic carotid artery wall volume using 3D ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Hossain, Md. Murad; AlMuhanna, Khalid; Zhao, Limin; Lal, Brajesh K.; Sikdar, Siddhartha

    2014-03-01

3D segmentation of carotid plaque from ultrasound (US) images is challenging due to image artifacts and poor boundary definition. Semiautomatic segmentation algorithms for calculating vessel wall volume (VWV) have been proposed for the common carotid artery (CCA) but have not been applied to plaques in the internal carotid artery (ICA). In this work, we describe a 3D segmentation algorithm that is robust to shadowing and missing boundaries. Our algorithm uses a distance-regularized level set method with edge- and region-based energy to segment the adventitial wall boundary (AWB) and lumen-intima boundary (LIB) of plaques in the CCA, ICA, and external carotid artery (ECA). The algorithm is initialized by manually placing points on the boundary of a subset of transverse slices with an interslice distance of 4 mm. We propose a novel user-defined stopping-surface-based energy to prevent leaking of the evolving surface across poorly defined boundaries. Validation was performed against manual segmentation using 3D US volumes acquired from five asymptomatic patients with carotid stenosis using a linear 4D probe. A pseudo gold-standard boundary was formed from manual segmentation by three observers. The Dice similarity coefficient (DSC), Hausdorff distance (HD), and modified HD (MHD) were used to compare the algorithm results against the pseudo gold standard on 1205 cross-sectional slices of the five 3D US image sets. The algorithm showed good agreement with the pseudo gold-standard boundary, with a mean DSC of 93.3% (AWB) and 89.82% (LIB); mean MHD of 0.34 mm (AWB) and 0.24 mm (LIB); and mean HD of 1.27 mm (AWB) and 0.72 mm (LIB). The proposed 3D semiautomatic segmentation is the first step towards full characterization of 3D plaque progression and longitudinal monitoring.
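The Dice similarity coefficient used for validation above has a compact definition; a minimal sketch with hypothetical 10x10 masks standing in for algorithm and manual contours:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2*|A intersect B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

algorithm = np.zeros((10, 10), dtype=int)
algorithm[2:8, 2:8] = 1   # 36 pixels
manual = np.zeros((10, 10), dtype=int)
manual[3:8, 2:8] = 1      # 30 pixels, entirely inside the first mask
```

Here the overlap is 30 pixels, so DSC = 2*30 / (36 + 30) ≈ 0.91, in the same range as the AWB/LIB agreement reported above.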

  1. Non-invasive measurement of choroidal volume change and ocular rigidity through automated segmentation of high-speed OCT imaging

    PubMed Central

    Beaton, L.; Mazzaferri, J.; Lalonde, F.; Hidalgo-Aguirre, M.; Descovich, D.; Lesk, M. R.; Costantino, S.

    2015-01-01

We have developed a novel optical approach to determine pulsatile ocular volume changes using automated segmentation of the choroid, which, together with Dynamic Contour Tonometry (DCT) measurements of intraocular pressure (IOP), allows estimation of the ocular rigidity (OR) coefficient. Spectral Domain Optical Coherence Tomography (OCT) videos of the fundus were acquired with Enhanced Depth Imaging (EDI) at 7 Hz for ~50 seconds. A novel segmentation algorithm based on graph search with an edge-probability weighting scheme was developed to measure choroidal thickness (CT) in each frame. Global ocular volume fluctuations were derived from frame-to-frame CT variations using an approximate eye model. Immediately after imaging, IOP and ocular pulse amplitude (OPA) were measured using DCT. OR was calculated from these peak pressure and volume changes. Our automated segmentation algorithm provides the first non-invasive method for determining the ocular volume change due to pulsatile choroidal filling, and for estimating the OR constant. Future applications of this method offer an important avenue to understanding the biomechanical basis of ocular pathophysiology. PMID:26137373
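The abstract does not spell out the estimator used for OR, but a common formulation is Friedenwald's pressure-volume relation, ln(P2/P1) = K * dV. A sketch under that assumption, with all numbers hypothetical:

```python
import math

def ocular_rigidity(iop_mmhg, opa_mmhg, delta_volume_ul):
    """Friedenwald rigidity coefficient K = ln(P2/P1) / dV, where
    P1 is baseline IOP and P2 = P1 + ocular pulse amplitude (OPA).
    (One common formulation; the paper's exact estimator may differ.)"""
    return math.log((iop_mmhg + opa_mmhg) / iop_mmhg) / delta_volume_ul

# Hypothetical reading: IOP 15 mmHg, OPA 3 mmHg, pulsatile volume 5 uL.
k = ocular_rigidity(15.0, 3.0, 5.0)
```

The OCT segmentation supplies dV (the pulsatile choroidal volume change) and DCT supplies P1 and the OPA, which is what makes the coefficient computable non-invasively.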

  2. Breast Density Analysis Using an Automatic Density Segmentation Algorithm.

    PubMed

    Oliver, Arnau; Tortajada, Meritxell; Lladó, Xavier; Freixenet, Jordi; Ganau, Sergi; Tortajada, Lidia; Vilagran, Mariona; Sentís, Melcior; Martí, Robert

    2015-10-01

    Breast density is a strong risk factor for breast cancer. In this paper, we present an automated approach for breast density segmentation in mammographic images based on a supervised pixel-based classification and using textural and morphological features. The objective of the paper is not only to show the feasibility of an automatic algorithm for breast density segmentation but also to prove its potential application to the study of breast density evolution in longitudinal studies. The database used here contains three complete screening examinations, acquired 2 years apart, of 130 different patients. The approach was validated by comparing manual expert annotations with automatically obtained estimations. Transversal analysis of the breast density analysis of craniocaudal (CC) and mediolateral oblique (MLO) views of both breasts acquired in the same study showed a correlation coefficient of ρ = 0.96 between the mammographic density percentage for left and right breasts, whereas a comparison of both mammographic views showed a correlation of ρ = 0.95. A longitudinal study of breast density confirmed the trend that dense tissue percentage decreases over time, although we noticed that the decrease in the ratio depends on the initial amount of breast density. PMID:25720749

  3. Automated target recognition technique for image segmentation and scene analysis

    NASA Astrophysics Data System (ADS)

    Baumgart, Chris W.; Ciarcia, Christopher A.

    1994-03-01

Automated target recognition (ATR) software has been designed to perform image segmentation and scene analysis. Specifically, this software was developed as a package for the Army's Minefield and Reconnaissance and Detector (MIRADOR) program. MIRADOR is an on/off-road, remote-control, multisensor system designed to detect buried and surface-emplaced metallic and nonmetallic antitank mines. The basic requirements for this ATR software were: (1) an ability to separate target objects from the background in low signal-to-noise conditions; (2) an ability to handle a relatively high dynamic range in imaging light levels; (3) the ability to compensate for or remove light-source effects such as shadows; and (4) the ability to identify target objects as mines. Image segmentation and target evaluation were performed using an integrated and parallel processing approach. Three basic techniques (texture analysis, edge enhancement, and contrast enhancement) were used collectively to extract all potential mine target shapes from the basic image. Target evaluation was then performed using a combination of size, geometrical, and fractal characteristics, which resulted in a calculated probability for each target shape. Overall results with this algorithm were quite good, though there is a tradeoff between detection confidence and the number of false alarms. This technology also has applications in the areas of hazardous waste site remediation, archaeology, and law enforcement.

  4. Multi-level segment analysis: definition and applications in turbulence

    NASA Astrophysics Data System (ADS)

    Wang, Lipo

    2015-11-01

The interaction of different scales is among the most interesting and challenging features in turbulence research. Existing approaches to scaling analysis, such as the structure-function and Fourier-spectrum methods, have their respective limitations, for instance scale mixing, i.e. the so-called infrared and ultraviolet effects. For a given function, specifying different window sizes yields different local extremal point sets; this window-size dependence indicates multi-scale statistics. A new method, multi-level segment analysis (MSA), based on the statistics of local extrema, has been developed. The part of the function between two adjacent extremal points is defined as a segment, which is characterized by its functional difference and scale difference. The structure function can be derived from these characteristic parameters in a different manner. Data tests show that MSA can successfully reveal different scaling regimes in turbulence systems, such as Lagrangian and two-dimensional turbulence, which have remained controversial in turbulence research. In principle, MSA can be extended to various other analyses.
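The segment construction at the heart of MSA, splitting a signal at adjacent local extrema and recording each segment's scale and value differences, can be sketched as follows (the toy signal is illustrative):

```python
import numpy as np

def segments_between_extrema(y, x=None):
    """Split a 1-D signal into segments bounded by adjacent local
    extrema; return a (scale difference, value difference) pair per
    segment, the two characteristic parameters MSA statistics use."""
    if x is None:
        x = np.arange(len(y))
    dy = np.diff(y)
    # Interior local extrema are where the slope changes sign.
    ext = np.where(np.sign(dy[1:]) != np.sign(dy[:-1]))[0] + 1
    idx = np.concatenate(([0], ext, [len(y) - 1]))
    return [(x[j] - x[i], y[j] - y[i]) for i, j in zip(idx[:-1], idx[1:])]

y = np.array([0.0, 2.0, 1.0, 4.0])  # local max at index 1, local min at 2
segs = segments_between_extrema(y)
```

Each tuple is one segment's (scale, value) increment; the MSA structure function is then built from the statistics of these pairs across window sizes.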

  5. Pulse shape analysis and position determination in segmented HPGe detectors: The AGATA detector library

    NASA Astrophysics Data System (ADS)

    Bruyneel, B.; Birkenbach, B.; Reiter, P.

    2016-03-01

    The AGATA Detector Library (ADL) was developed for the calculation of signals from highly segmented large volume high-purity germanium (HPGe) detectors. ADL basis sets comprise a huge amount of calculated position-dependent detector pulse shapes. A basis set is needed for Pulse Shape Analysis (PSA). By means of PSA the interaction position of a γ-ray inside the active detector volume is determined. Theoretical concepts of the calculations are introduced and cover the relevant aspects of signal formation in HPGe. The approximations and the realization of the computer code with its input parameters are explained in detail. ADL is a versatile and modular computer code; new detectors can be implemented in this library. Measured position resolutions of the AGATA detectors based on ADL are discussed.

  6. Layout pattern analysis using the Voronoi diagram of line segments

    NASA Astrophysics Data System (ADS)

    Dey, Sandeep Kumar; Cheilaris, Panagiotis; Gabrani, Maria; Papadopoulou, Evanthia

    2016-01-01

    Early identification of problematic patterns in very large scale integration (VLSI) designs is of great value as the lithographic simulation tools face significant timing challenges. To reduce the processing time, such a tool selects only a fraction of possible patterns which have a probable area of failure, with the risk of missing some problematic patterns. We introduce a fast method to automatically extract patterns based on their structure and context, using the Voronoi diagram of line-segments as derived from the edges of VLSI design shapes. Designers put line segments around the problematic locations in patterns called "gauges," along which the critical distance is measured. The gauge center is the midpoint of a gauge. We first use the Voronoi diagram of VLSI shapes to identify possible problematic locations, represented as gauge centers. Then we use the derived locations to extract windows containing the problematic patterns from the design layout. The problematic locations are prioritized by the shape and proximity information of the design polygons. We perform experiments for pattern selection in a portion of a 22-nm random logic design layout. The design layout had 38,584 design polygons (consisting of 199,946 line segments) on layer Mx, and 7079 markers generated by an optical rule checker (ORC) tool. The optical rules specify requirements for printing circuits with minimum dimension. Markers are the locations of some optical rule violations in the layout. We verify our approach by comparing the coverage of our extracted patterns to the ORC-generated markers. We further derive a similarity measure between patterns and between layouts. The similarity measure helps to identify a set of representative gauges that reduces the number of patterns for analysis.

  7. Matching 3D segmented objects using wire frame analysis

    NASA Astrophysics Data System (ADS)

    Allen, Charles R.; O'Brien, Stephan

    1993-08-01

    This paper describes a novel technique for 3D sensory fusion on autonomous mobile vehicles. The primary sensor is a monocular camera, mounted on a robot manipulator carried by the vehicle, which pans to up to three positions on a 0.5 m vertical circle. The passive scene is analyzed using a method of inverse perspective, which is described; the resulting scene analysis comprises 3D wire frames of all detected surfaces. The analysis runs on a dual T-800 transputer-based multiprocessor, which generates primary scene information at a rate of one update per 10 seconds. A PC-based 3D matching algorithm, written in Prolog, then matches the segmented objects against a database of pre-taught 3D wire frames.

  8. Segmentation of the Manila subduction system from migrated multichannel seismics and wedge taper analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Junjiang; Sun, Zongxun; Kopp, Heidrun; Qiu, Xuelin; Xu, Huilong; Li, Sanzhong; Zhan, Wenhuan

    2013-12-01

    Based on bathymetric and multichannel seismic data, the Manila subduction system is divided into three segments: the North Luzon segment, the seamount chain segment, and the West Luzon segment, extending from Southwest Taiwan as far as Mindoro. Variations in the volume of the accretionary prism, the forearc slope angle, and the taper angle support this segmentation. The accretionary prism is composed of an outer wedge and an inner wedge separated by the slope break. A backstop structure and a 0.5-1 km thick subduction channel are interpreted in seismic Line 973, located in the northeastern South China Sea. The clear décollement horizon reveals that oceanic sediment has been subducted beneath the accretionary prism. A number of splay faults occur in the active outer wedge. Taper angles vary from 8.0° ± 1° in the North Luzon segment and 9.9° ± 1° in the seamount segment to 11° ± 1° in the West Luzon segment. When compared against the global relationship between taper angle and orthogonal convergence rate at continental margins, the segments of the Manila subduction system fit the global pattern well. This suggests that subduction accretion dominates the North Luzon and seamount chain segments, whereas the steep slope of the West Luzon segment implies that tectonic erosion could dominate there.
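    In the critical-taper convention the wedge taper is the sum of the seafloor surface slope and the dip of the basal décollement. A minimal sketch; the slope/dip splits below are illustrative values, and only the totals (~8.0°, 9.9°, 11°) correspond to the segments reported above.

```python
def wedge_taper(surface_slope_deg, decollement_dip_deg):
    """Critical-taper convention: total taper = seafloor surface
    slope plus the dip of the basal decollement."""
    return surface_slope_deg + decollement_dip_deg

# Illustrative slope/dip decompositions; only the totals are from the paper.
segments = {
    "North Luzon": wedge_taper(3.0, 5.0),
    "Seamount chain": wedge_taper(4.0, 5.9),
    "West Luzon": wedge_taper(5.0, 6.0),
}
for name, taper in segments.items():
    print(f"{name}: taper = {taper:.1f} deg")
```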

  9. 3-D segmentation of retinal blood vessels in spectral-domain OCT volumes of the optic nerve head

    NASA Astrophysics Data System (ADS)

    Lee, Kyungmoo; Abràmoff, Michael D.; Niemeijer, Meindert; Garvin, Mona K.; Sonka, Milan

    2010-03-01

    Segmentation of retinal blood vessels can provide important information for detecting and tracking retinal vascular diseases including diabetic retinopathy, arterial hypertension, arteriosclerosis and retinopathy of prematurity (ROP). Many studies on 2-D segmentation of retinal blood vessels from a variety of medical images have been performed. However, 3-D segmentation of retinal blood vessels from spectral-domain optical coherence tomography (OCT) volumes, which is capable of providing geometrically accurate vessel models, to the best of our knowledge, has not been previously studied. The purpose of this study is to develop and evaluate a method that can automatically detect 3-D retinal blood vessels from spectral-domain OCT scans centered on the optic nerve head (ONH). The proposed method utilized a fast multiscale 3-D graph search to segment retinal surfaces as well as a triangular mesh-based 3-D graph search to detect retinal blood vessels. An experiment on 30 ONH-centered OCT scans (15 right eye scans and 15 left eye scans) from 15 subjects was performed, and the mean unsigned error in 3-D of the computer segmentations compared with the independent standard obtained from a retinal specialist was 3.4 +/- 2.5 voxels (0.10 +/- 0.07 mm).

  10. Computer Aided Segmentation Analysis: New Software for College Admissions Marketing.

    ERIC Educational Resources Information Center

    Lay, Robert S.; Maguire, John J.

    1983-01-01

    Compares segmentation solutions obtained using a binary segmentation algorithm (THAID) and a new chi-square-based procedure (CHAID) that segments the prospective pool of college applicants using application and matriculation as criteria. Results showed a higher number of estimated qualified inquiries and more accurate estimates with CHAID. (JAC)
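    A CHAID-style procedure chooses each split by testing candidate predictors against the outcome with a chi-square statistic and splitting on the strongest. A minimal sketch with a hypothetical inquiry pool; the predictors and counts are invented for illustration.

```python
import numpy as np

def chi2_stat(table):
    """Pearson chi-square statistic for an observed contingency table."""
    table = np.asarray(table, dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row * col / table.sum()
    return ((table - expected) ** 2 / expected).sum()

# Hypothetical pool: rows = predictor category, cols = (applied, did not).
by_region = [[40, 60], [20, 80]]   # region splits the pool unevenly
by_gender = [[31, 69], [29, 71]]   # gender barely discriminates

best = max([("region", by_region), ("gender", by_gender)],
           key=lambda kv: chi2_stat(kv[1]))
print("CHAID-style first split on:", best[0])
```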

  11. Analysis of Retinal Peripapillary Segmentation in Early Alzheimer's Disease Patients

    PubMed Central

    Salobrar-Garcia, Elena; Hoyas, Irene; Leal, Mercedes; de Hoz, Rosa; Rojas, Blanca; Ramirez, Ana I.; Salazar, Juan J.; Yubero, Raquel; Gil, Pedro; Triviño, Alberto; Ramirez, José M.

    2015-01-01

    Decreased thickness of the retinal nerve fiber layer (RNFL) may reflect retinal neuronal-ganglion cell death. A decrease in the RNFL has been demonstrated by optical coherence tomography (OCT) in Alzheimer's disease (AD) as well as in aging. Twenty-three mild-AD patients and 28 age-matched control subjects, with mean Mini-Mental State Examination scores of 23.3 and 28.2 respectively and no ocular disease or systemic disorders affecting vision, were considered for the study. OCT peripapillary and macular segmentation thicknesses were examined in the right eye of each patient. Compared to controls, eyes of patients with mild AD showed no statistically significant difference in peripapillary RNFL thickness (P > 0.05); however, sectors 2, 3, 4, 8, 9, and 11 of the papilla showed thinning, while sectors 1, 5, 6, 7, and 10 showed thickening. Total macular volume and RNFL thickness of the fovea, in all four inner quadrants and in the outer temporal quadrants, proved to be significantly decreased (P < 0.01). Although peripapillary RNFL thickness did not differ statistically from control eyes, the increase in peripapillary thickness in our mild-AD patients could correspond to an early neurodegeneration stage and may indicate an inflammatory process that could lead to progressive peripapillary fiber damage. PMID:26557684

  12. Recurrence interval analysis of trading volumes.

    PubMed

    Ren, Fei; Zhou, Wei-Xing

    2010-06-01

    We study the statistical properties of the recurrence intervals τ between successive trading volumes exceeding a certain threshold q. The recurrence interval analysis is carried out for the 20 liquid Chinese stocks covering a period from January 2000 to May 2009, and two Chinese indices from January 2003 to April 2009. Similar to the recurrence interval distribution of the price returns, the tail of the recurrence interval distribution of the trading volumes follows a power-law scaling, and the results are verified by the goodness-of-fit tests using the Kolmogorov-Smirnov (KS) statistic, the weighted KS statistic and the Cramér-von Mises criterion. The measurements of the conditional probability distribution and the detrended fluctuation function show that both short-term and long-term memory effects exist in the recurrence intervals between trading volumes. We further study the relationship between trading volumes and price returns based on the recurrence interval analysis method. It is found that large trading volumes are more likely to occur following large price returns, and the comovement between trading volumes and price returns is more pronounced for large trading volumes. PMID:20866478
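    The basic quantity in this analysis, the recurrence interval τ between successive observations exceeding a threshold q, can be computed directly. A small sketch with synthetic heavy-tailed data standing in for trading volumes; the Pareto generator and 95th-percentile threshold are illustrative choices, not the paper's.

```python
import numpy as np

def recurrence_intervals(volumes, q):
    """Intervals (in samples) between successive observations
    exceeding the threshold q."""
    idx = np.flatnonzero(np.asarray(volumes) > q)
    return np.diff(idx)

rng = np.random.default_rng(0)
vols = rng.pareto(3.0, size=10_000)   # heavy-tailed stand-in for volumes
q = np.quantile(vols, 0.95)           # threshold at the 95th percentile
tau = recurrence_intervals(vols, q)
print("mean interval:", tau.mean())   # ~1/0.05 = 20 for i.i.d. data
```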

  13. Bifilar analysis study, volume 1

    NASA Technical Reports Server (NTRS)

    Miao, W.; Mouzakis, T.

    1980-01-01

    A coupled rotor/bifilar/airframe analysis was developed and utilized to study the dynamic characteristics of the centrifugally tuned, rotor-hub-mounted, bifilar vibration absorber. The analysis contains the major components that impact the bifilar absorber performance, namely, an elastic rotor with hover aerodynamics, a flexible fuselage, and nonlinear individual degrees of freedom for each bifilar mass. Airspeed, rotor speed, bifilar mass and tuning variations are considered. The performance of the bifilar absorber is shown to be a function of its basic parameters: dynamic mass, damping and tuning, as well as the impedance of the rotor hub. The effect of the dissimilar responses of the individual bifilar masses which are caused by tolerance induced mass, damping and tuning variations is also examined.

  14. Markov random field and Gaussian mixture for segmented MRI-based partial volume correction in PET

    NASA Astrophysics Data System (ADS)

    Bousse, Alexandre; Pedemonte, Stefano; Thomas, Benjamin A.; Erlandsson, Kjell; Ourselin, Sébastien; Arridge, Simon; Hutton, Brian F.

    2012-10-01

    In this paper we propose a segmented magnetic resonance imaging (MRI) prior-based maximum penalized likelihood deconvolution technique for positron emission tomography (PET) images. The model assumes the existence of activity classes that behave like a hidden Markov random field (MRF) driven by the segmented MRI, and a mean-field approximation is used to compute the likelihood of the MRF. We tested our method on both simulated and clinical (brain PET) data and compared the results with PET images corrected with the re-blurred Van Cittert (VC) algorithm, the simplified Guven (SG) algorithm and the region-based voxel-wise (RBV) technique. We demonstrate that our algorithm outperforms the VC algorithm, and that it outperforms the SG and RBV corrections when the segmented MRI is inconsistent with the PET image (e.g., due to mis-segmentation or lesions).

  15. Quantitative Analysis of Mouse Retinal Layers Using Automated Segmentation of Spectral Domain Optical Coherence Tomography Images

    PubMed Central

    Dysli, Chantal; Enzmann, Volker; Sznitman, Raphael; Zinkernagel, Martin S.

    2015-01-01

    Purpose Quantification of retinal layers using automated segmentation of optical coherence tomography (OCT) images allows for longitudinal studies of retinal and neurological disorders in mice. The purpose of this study was to compare the performance of automated retinal layer segmentation algorithms with data from manual segmentation in mice using the Spectralis OCT. Methods Spectral domain OCT images from 55 mice from three different mouse strains were analyzed in total. The OCT scans from 22 C57Bl/6, 22 BALBc, and 11 C3A.Cg-Pde6b+Prph2Rd2/J mice were automatically segmented using three commercially available automated retinal segmentation algorithms and compared to manual segmentation. Results Fully automated segmentation performed well in mice, with coefficients of variation (CV) below 5% for total retinal volume. However, all three automated segmentation algorithms yielded significantly thicker total retinal thickness values than manual segmentation (P < 0.0001) due to segmentation errors at the basement membrane. Conclusions Whereas the automated retinal segmentation algorithms performed well for the inner layers, the retinal pigment epithelium (RPE) was delineated within the sclera, leading to consistently thicker measurements of the photoreceptor layer and the total retina. Translational Relevance The introduction of spectral domain OCT allows for accurate imaging of the mouse retina. Exact quantification of retinal layer thicknesses in mice is important to study layers of interest under various pathological conditions. PMID:26336634

  16. Automated cerebellar lobule segmentation with application to cerebellar structural analysis in cerebellar disease.

    PubMed

    Yang, Zhen; Ye, Chuyang; Bogovic, John A; Carass, Aaron; Jedynak, Bruno M; Ying, Sarah H; Prince, Jerry L

    2016-02-15

    The cerebellum plays an important role in both motor control and cognitive function. Cerebellar function is topographically organized and diseases that affect specific parts of the cerebellum are associated with specific patterns of symptoms. Accordingly, delineation and quantification of cerebellar sub-regions from magnetic resonance images are important in the study of cerebellar atrophy and associated functional losses. This paper describes an automated cerebellar lobule segmentation method based on a graph cut segmentation framework. Results from multi-atlas labeling and tissue classification contribute to the region terms in the graph cut energy function and boundary classification contributes to the boundary term in the energy function. A cerebellar parcellation is achieved by minimizing the energy function using the α-expansion technique. The proposed method was evaluated using a leave-one-out cross-validation on 15 subjects including both healthy controls and patients with cerebellar diseases. Based on reported Dice coefficients, the proposed method outperforms two state-of-the-art methods. The proposed method was then applied to 77 subjects to study the region-specific cerebellar structural differences in three spinocerebellar ataxia (SCA) genetic subtypes. Quantitative analysis of the lobule volumes shows distinct patterns of volume changes associated with different SCA subtypes consistent with known patterns of atrophy in these genetic subtypes. PMID:26408861

  17. Image segmentation by iterative parallel region growing with application to data compression and image analysis

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    1988-01-01

    Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of it on NASA's Massively Parallel Processor (MPP), are described. Application of the approach to data compression and image analysis is then described, with results given for a LANDSAT Thematic Mapper image.
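    The key idea, performing the globally best merge first rather than processing regions in scan order, can be illustrated in one dimension. A toy sketch only: the actual MPP implementation operates on 2-D imagery with more elaborate merge criteria.

```python
import numpy as np

def best_merge_segmentation(values, n_regions):
    """1-D sketch of best-merge region growing: start from singleton
    regions and repeatedly merge the adjacent pair whose means are
    closest, i.e. always take the globally best merge first."""
    regions = [[v] for v in values]
    while len(regions) > n_regions:
        means = [np.mean(r) for r in regions]
        diffs = [abs(means[i + 1] - means[i]) for i in range(len(means) - 1)]
        i = int(np.argmin(diffs))            # globally best merge
        regions[i:i + 2] = [regions[i] + regions[i + 1]]
    return regions

print(best_merge_segmentation([1, 2, 1, 9, 8, 9], 2))
# low values merge together, high values merge together
```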

  18. Applicability of semi-automatic segmentation for volumetric analysis of brain lesions.

    PubMed

    Heinonen, T; Dastidar, P; Eskola, H; Frey, H; Ryymin, P; Laasonen, E

    1998-01-01

    This project involves the development of a fast semi-automatic segmentation procedure for making accurate volumetric estimates of brain lesions. The method has been applied to the segmentation of demyelination plaques in Multiple Sclerosis (MS) and of right cerebral hemispheric infarctions in patients with neglect. The segmentation method includes several image processing techniques, such as image enhancement, amplitude segmentation, and region growing. The entire program runs on a PC-based computer and uses a graphical user interface. Twenty-three patients with MS and 43 patients with right cerebral hemisphere infarctions were studied on a 0.5 T MRI unit, and the MS plaques and cerebral infarctions were then segmented. The volumetric accuracy of the program was demonstrated by segmenting Magnetic Resonance (MR) images of fluid-filled syringes; the relative error of the total volume measurement based on these images was 1.5%. A repeatability test was also carried out as an inter- and intra-observer study in which the MS plaques of six randomly selected patients were segmented, indicating 7% variability between observers and 4% variability within observers. The average time needed to segment and calculate the total plaque volume for one patient was 10 min. This simple segmentation method can be used to quantify anatomical structures, such as air cells in the sinonasal and temporal bone area, as well as different pathological conditions, such as brain tumours, intracerebral haematomas and bony destruction. PMID:9680601

  19. A novel approach for the automated segmentation and volume quantification of cardiac fats on computed tomography.

    PubMed

    Rodrigues, É O; Morais, F F C; Morais, N A O S; Conci, L S; Neto, L V; Conci, A

    2016-01-01

    Deposits of fat surrounding the heart are correlated with several health risk factors such as atherosclerosis, carotid stiffness, coronary artery calcification, atrial fibrillation and many others. These deposits vary independently of obesity, which reinforces the case for segmenting them directly for quantification. However, manual segmentation of these fats has not been widely deployed in clinical practice due to the required human workload and consequent high cost of physicians and technicians. In this work, we propose a unified method for autonomous segmentation and quantification of two types of cardiac fat. The segmented fats, termed epicardial and mediastinal, are separated from each other by the pericardium. Much effort was devoted to achieving minimal user intervention. The proposed methodology mainly comprises registration and classification algorithms to perform the desired segmentation. We compare the performance of several classification algorithms on this task, including neural networks, probabilistic models and decision tree algorithms. Experimental results show that the mean accuracy for both epicardial and mediastinal fats is 98.5% (99.5% if the features are normalized), with a mean true positive rate of 98.0%. On average, the Dice similarity index was 97.6%. PMID:26474835

  20. Automated segmentation of chronic stroke lesions using LINDA: Lesion identification with neighborhood data analysis.

    PubMed

    Pustina, Dorian; Coslett, H Branch; Turkeltaub, Peter E; Tustison, Nicholas; Schwartz, Myrna F; Avants, Brian

    2016-04-01

    The gold standard for identifying stroke lesions is manual tracing, a method that is known to be observer dependent and time consuming, thus impractical for big data studies. We propose LINDA (Lesion Identification with Neighborhood Data Analysis), an automated segmentation algorithm capable of learning the relationship between existing manual segmentations and a single T1-weighted MRI. A dataset of 60 left hemispheric chronic stroke patients is used to build the method and test it with k-fold and leave-one-out procedures. With respect to manual tracings, predicted lesion maps showed a mean dice overlap of 0.696 ± 0.16, Hausdorff distance of 17.9 ± 9.8 mm, and average displacement of 2.54 ± 1.38 mm. The manual and predicted lesion volumes correlated at r = 0.961. An additional dataset of 45 patients was utilized to test LINDA with independent data, achieving high accuracy rates and confirming its cross-institutional applicability. To investigate the cost of moving from manual tracings to automated segmentation, we performed comparative lesion-to-symptom mapping (LSM) on five behavioral scores. Predicted and manual lesions produced similar neuro-cognitive maps, albeit with some discussed discrepancies. Of note, region-wise LSM was more robust to the prediction error than voxel-wise LSM. Our results show that, while several limitations exist, our current results compete with or exceed the state-of-the-art, producing consistent predictions, very low failure rates, and transferable knowledge between labs. This work also establishes a new viewpoint on evaluating automated methods not only with segmentation accuracy but also with brain-behavior relationships. LINDA is made available online with trained models from over 100 patients. Hum Brain Mapp 37:1405-1421, 2016. © 2016 Wiley Periodicals, Inc. PMID:26756101
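    The Dice overlap used above to score predicted lesion maps against manual tracings is straightforward to compute from binary masks; a minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

manual    = np.array([[0, 1, 1], [0, 1, 0]])
predicted = np.array([[0, 1, 0], [0, 1, 1]])
print(round(dice(manual, predicted), 3))   # 2*2 / (3+3) -> 0.667
```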

  1. Hippocampus and amygdala volumes from magnetic resonance images in children: Assessing accuracy of FreeSurfer and FSL against manual segmentation.

    PubMed

    Schoemaker, Dorothee; Buss, Claudia; Head, Kevin; Sandman, Curt A; Davis, Elysia P; Chakravarty, M Mallar; Gauthier, Serge; Pruessner, Jens C

    2016-04-01

    The volumetric quantification of brain structures is of great interest in pediatric populations because it allows the investigation of different factors influencing neurodevelopment. FreeSurfer and FSL both provide frequently used packages for automatic segmentation of brain structures. In this study, we examined the accuracy and consistency of those two automated protocols relative to manual segmentation, commonly considered the "gold standard" technique, for estimating hippocampus and amygdala volumes in a sample of preadolescent children aged 6 to 11 years. The volumes obtained with FreeSurfer and FSL-FIRST were evaluated and compared with manual segmentations with respect to volume difference, spatial agreement and between- and within-method correlations. Results highlighted a tendency for both automated techniques to overestimate hippocampus and amygdala volumes, in comparison to manual segmentation. This was more pronounced when using FreeSurfer than FSL-FIRST and, for both techniques, the overestimation was more marked for the amygdala than the hippocampus. Pearson correlations support moderate associations between manual tracing and FreeSurfer for hippocampus (right r=0.69, p<0.001; left r=0.77, p<0.001) and amygdala (right r=0.61, p<0.001; left r=0.67, p<0.001) volumes. Correlation coefficients between manual segmentation and FSL-FIRST were statistically significant (right hippocampus r=0.59, p<0.001; left hippocampus r=0.51, p<0.001; right amygdala r=0.35, p<0.001; left amygdala r=0.31, p<0.001) but were significantly weaker for all investigated structures. When computing intraclass correlation coefficients between manual tracing and automatic segmentation, all comparisons, except for left hippocampus volume estimated with FreeSurfer, failed to reach 0.70.
When looking at each method separately, correlations between left and right hemispheric volumes showed strong associations between bilateral hippocampus and bilateral amygdala volumes when assessed using manual segmentation or FreeSurfer. These correlations were significantly weaker when volumes were assessed with FSL-FIRST. Finally, Bland-Altman plots suggest that the difference between manual and automatic segmentation might be influenced by the volume of the structure, because smaller volumes were associated with larger volume differences between techniques. These results demonstrate that, at least in a pediatric population, the agreement between amygdala and hippocampus volumes obtained with automated FSL-FIRST and FreeSurfer protocols and those obtained with manual segmentation is not strong. Visual inspection by an informed individual and, if necessary, manual correction of automated segmentation outputs are important to ensure validity of volumetric results and interpretation of related findings. PMID:26824403
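    The Bland-Altman analysis mentioned above summarizes agreement between two methods as the mean difference (bias) and 95% limits of agreement. A small sketch with hypothetical volumes (the numbers below are invented for illustration):

```python
import numpy as np

def bland_altman(vol_a, vol_b):
    """Bland-Altman agreement summary: mean difference (bias) and
    95% limits of agreement between two measurement methods."""
    a, b = np.asarray(vol_a, float), np.asarray(vol_b, float)
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

manual = np.array([3.1, 2.8, 3.4, 2.5, 3.0])   # hypothetical volumes, cm^3
auto   = np.array([3.4, 3.1, 3.9, 2.6, 3.2])   # automated method overestimates
bias, (lo, hi) = bland_altman(manual, auto)
print(f"bias = {bias:.2f} cm^3, LoA = [{lo:.2f}, {hi:.2f}]")
```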

  2. Segmentation and classification of capnograms: application in respiratory variability analysis.

    PubMed

    Herry, C L; Townsend, D; Green, G C; Bravi, A; Seely, A J E

    2014-12-01

    Variability analysis of respiratory waveforms has been shown to provide key insights into respiratory physiology and has been used successfully to predict clinical outcomes. The current standard for quality assessment of the capnogram signal relies on a visual analysis performed by an expert in order to identify waveform artifacts. Automated processing of capnograms is desirable in order to extract clinically useful features over extended periods of time in a patient monitoring environment. However, the proper interpretation of capnogram derived features depends upon the quality of the underlying waveform. In addition, the comparison of capnogram datasets across studies requires a more practical approach than a visual analysis and selection of high-quality breath data. This paper describes a system that automatically extracts breath-by-breath features from capnograms and estimates the quality of individual breaths derived from them. Segmented capnogram breaths were presented to expert annotators, who labeled the individual physiological breaths into normal and multiple abnormal breath types. All abnormal breath types were aggregated into the abnormal class for the purpose of this manuscript, with respiratory variability analysis as the end-application. A database of 11,526 breaths from over 300 patients was created, comprising around 35% abnormal breaths. Several simple classifiers were trained through a stratified repeated ten-fold cross-validation and tested on an unseen portion of the labeled breath database, using a subset of 15 features derived from each breath curve. Decision Tree, K-Nearest Neighbors (KNN) and Naive Bayes classifiers were close in terms of performance (AUC of 90%, 89% and 88% respectively), while using 7, 4 and 5 breath features, respectively. When compared to airflow derived timings, the 95% confidence interval on the mean difference in interbreath intervals was ± 0.18 s. 
This breath classification system provides a fast and robust pre-processing of continuous respiratory waveforms, thereby ensuring reliable variability analysis of breath-by-breath parameter time series. PMID:25389703

  3. Fractal Segmentation and Clustering Analysis for Seismic Time Slices

    NASA Astrophysics Data System (ADS)

    Ronquillo, G.; Oleschko, K.; Korvin, G.; Arizabalo, R. D.

    2002-05-01

    Fractal analysis has become part of the standard approach for quantifying texture on gray-tone or colored images. In this research we introduce a multi-stage fractal procedure to segment, classify and measure the clustering patterns on seismic time slices from a 3-D seismic survey. Five fractal classifiers (c1)-(c5) were designed to yield standardized, unbiased and precise measures of the clustering of seismic signals. The classifiers were tested on seismic time slices from the AKAL field, Cantarell Oil Complex, Mexico. The generalized lacunarity (c1), fractal signature (c2), heterogeneity (c3), rugosity of boundaries (c4) and continuity/tortuosity (c5) of the clusters are shown to be efficient measures of the time-space variability of seismic signals. Local Fractal Analysis (LFA) of time slices has proved to be a powerful edge-detection filter for detecting and enhancing linear features such as faults or buried meandering rivers. The local fractal dimensions of the time slices were also compared with the self-affinity dimensions of the corresponding parts of porosity logs. It is speculated that the spectral dimension of the negative-amplitude parts of the time slice yields a measure of connectivity between the formation's high-porosity zones, and correlates with overall permeability.
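    Fractal dimensions of the kind discussed above are commonly estimated by box counting, one standard estimator (the paper's classifiers are more elaborate). A minimal sketch on a binary image standing in for a thresholded time slice:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting dimension of a binary image by
    regressing log N(s) on log(1/s), where N(s) is the number of
    occupied s-by-s boxes."""
    mask = np.asarray(mask, bool)
    counts = []
    for s in sizes:
        h = (mask.shape[0] // s) * s        # trim to a multiple of s
        w = (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

filled = np.ones((64, 64), bool)            # a filled square is 2-D
print(round(box_counting_dimension(filled), 2))
```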

  4. REACH. Teacher's Guide, Volume III. Task Analysis.

    ERIC Educational Resources Information Center

    Morris, James Lee; And Others

    Designed for use with individualized instructional units (CE 026 345-347, CE 026 349-351) in the electromechanical cluster, this third volume of the postsecondary teacher's guide presents the task analysis which was used in the development of the REACH (Refrigeration, Electro-Mechanical, Air Conditioning, Heating) curriculum. The major blocks of…

  5. Analysis of image thresholding segmentation algorithms based on swarm intelligence

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo

    2013-03-01

    Swarm intelligence-based image thresholding segmentation algorithms play an important role in image segmentation research. In this paper, we briefly introduce the theory behind four existing swarm intelligence-based segmentation algorithms: the fish swarm algorithm, artificial bee colony, bacterial foraging, and particle swarm optimization. Benchmark images are then tested to show the differences among the four algorithms in segmentation accuracy, time consumption, convergence, and robustness to salt-and-pepper and Gaussian noise. Through these comparisons, the paper gives a qualitative analysis of the performance differences among the four algorithms. The conclusions should provide useful guidance for practical image segmentation.
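    As one concrete instance of this family, particle swarm optimization can search for the threshold maximizing Otsu's between-class variance. A minimal sketch; the PSO constants and the bimodal test data are conventional illustrative choices, not taken from the paper.

```python
import numpy as np

def between_class_variance(pixels, t):
    """Otsu's criterion: weighted between-class variance of the two
    classes produced by threshold t (higher is better)."""
    lo, hi = pixels[pixels <= t], pixels[pixels > t]
    if lo.size == 0 or hi.size == 0:
        return 0.0
    w0, w1 = lo.size / pixels.size, hi.size / pixels.size
    return w0 * w1 * (lo.mean() - hi.mean()) ** 2

def pso_threshold(pixels, n_particles=10, iters=30, seed=0):
    """Minimal particle swarm search for the Otsu-optimal threshold."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(pixels.min(), pixels.max(), n_particles)
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pbest_val = np.array([between_class_variance(pixels, t) for t in pos])
    for _ in range(iters):
        gbest = pbest[pbest_val.argmax()]
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = 0.5 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        val = np.array([between_class_variance(pixels, t) for t in pos])
        better = val > pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
    return pbest[pbest_val.argmax()]

rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(60, 10, 500), rng.normal(180, 10, 500)])
print(pso_threshold(pixels))   # lands between the two intensity modes
```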

  6. Automated Segmentation of Cerebellum Using Brain Mask and Partial Volume Estimation Map.

    PubMed

    Lee, Dong-Kyun; Yoon, Uicheul; Kwak, Kichang; Lee, Jong-Min

    2015-01-01

    While segmentation of the cerebellum is an indispensable step in many studies, its contrast is not clear because of the adjacent cerebrospinal fluid, meninges, and cerebral peduncle. Thus, various cerebellar segmentation methods, such as deformable-model or template-based algorithms, can incorrectly segment the venous sinuses and the cerebellar peduncle. In this study, we propose a fully automated procedure combining cerebellar tissue classification, a template-based approach, and morphological operations sequentially. The cerebellar region was defined approximately by removing the cerebral region from the brain mask. Then, the noncerebellar region was trimmed using a morphological operator, and the brain-stem atlas was aligned to the individual brain to define the brain-stem area. The proposed method was validated against the well-known FreeSurfer and ITK-SNAP packages using the Dice similarity index and recall and precision scores. The proposed method was significantly better than the other methods for the Dice similarity index (0.93; FreeSurfer: 0.92, ITK-SNAP: 0.87) and precision (0.95; FreeSurfer: 0.90, ITK-SNAP: 0.93), and therefore yielded a robust and accurate segmentation result. Moreover, additional postprocessing with the brain-stem atlas could improve its result. PMID:26060504
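    The recall and precision scores used for validation are simple voxel-overlap ratios; a minimal sketch on binary masks:

```python
import numpy as np

def recall_precision(pred, truth):
    """Voxel-overlap scores used to validate a segmentation:
    recall = TP/(TP+FN), precision = TP/(TP+FP)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.logical_and(pred, truth).sum()
    recall = tp / truth.sum()
    precision = tp / pred.sum()
    return recall, precision

truth = np.array([0, 1, 1, 1, 0], bool)
pred  = np.array([0, 1, 1, 0, 1], bool)
print(recall_precision(pred, truth))   # both equal 2/3 here
```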

  7. Performance evaluation of automated segmentation software on optical coherence tomography volume data.

    PubMed

    Tian, Jing; Varga, Boglarka; Tatrai, Erika; Fanni, Palya; Somfai, Gabor Mark; Smiddy, William E; Debuc, Delia Cabrera

    2016-05-01

    Over the past two decades a significant number of OCT segmentation approaches have been proposed in the literature. Each methodology has been conceived for and/or evaluated using specific datasets that do not reflect the complexities of the majority of widely available retinal features observed in clinical settings. Moreover, there is no appropriate OCT dataset with ground truth that reflects the realities of everyday retinal features observed in clinical settings. While the need for unbiased performance evaluation of automated segmentation algorithms is obvious, validation of segmentation algorithms has usually been performed by comparison with each study's own manual labelings, with no common ground truth. As a result, a performance comparison of different algorithms using the same ground truth has never been performed. This paper reviews research-oriented tools for automated segmentation of retinal tissue in OCT images, and evaluates and compares the performance of these software tools against a common ground truth. PMID:27159849

  8. Estimation of body composition in Chinese and British men by ultrasonographic assessment of segmental adipose tissue volume.

    PubMed

    Eston, R; Evans, R; Fu, F

    1994-03-01

    It has been shown that ultrasonographic measurements can be used to predict body composition in adults. The purpose of this study was to assess the relationship between ultrasonographic and caliper (SKF) measurements of subcutaneous adipose tissue thickness in athletic Caucasian (English, E) and Asian (Chinese, C) men against estimates of body composition determined from hydrodensitometry (HYD). The usefulness of a proposed ultrasonographic method of estimating lean and fat proportions in the upper and lower limbs was also evaluated as a potential method of predicting body composition. Ultrasonography (US) was used to measure adipose and skin thickness at the following sites: biceps, triceps, subscapular, suprailiac, abdominal, pectoral, thigh and calf. Caliper measurements were also made at the above sites. Subcutaneous fat thickness and segmental radius were measured directly from the display screen of the ultrasonic scanner (Aloka 500 SD). By applying the geometry of a cone, the proximal and distal radii of the upper arm and upper leg were used to calculate the proportionate volumes of adipose tissue. The best correlations between US and SKF were obtained at the quadriceps, subscapular and pectoral sites for E (r = 0.96, 0.93 and 0.90, respectively) and at the quadriceps, calf and abdominal sites for C (r = 0.90, 0.81 and 0.75, respectively). The best ultrasonographic predictor of percentage fat in both groups was the percentage adipose tissue volume in the upper leg (r = 0.83 and 0.79 for C and E, respectively). Stepwise multiple regression analysis indicated that the prediction of percentage fat was improved by the addition of the ultrasonographic abdomen measurement in both groups: Chinese sample: %fat = %fat(leg) (0.491) + US abdomen (0.337) + 0.95 (R = 0.89, s.e.e. = 1.9%); English sample: %fat = %fat(leg) (0.435) + US abdomen (0.230) - 0.765 (R = 0.80, s.e.e. = 3.6%).
It is concluded that ultrasonographic measurements of subcutaneous adipose tissue and volumetric assessment of percentage adipose tissue in the thigh are useful estimates of body composition in athletic English and Chinese males. PMID:8044501
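The cone-geometry volume estimate and the reported prediction equations can be sketched as follows (a minimal illustration using the coefficients quoted in the abstract; function names and input units are assumptions, not taken from the paper):

```python
import math

def frustum_volume(r_proximal, r_distal, length):
    """Volume of a truncated cone (frustum) given its two end radii and length,
    as used to approximate segmental limb volumes from proximal/distal radii."""
    return (math.pi * length / 3.0) * (
        r_proximal ** 2 + r_proximal * r_distal + r_distal ** 2
    )

def percent_fat_chinese(pct_fat_leg, us_abdomen):
    """Reported regression for the Chinese sample (R = 0.89, s.e.e. = 1.9%)."""
    return 0.491 * pct_fat_leg + 0.337 * us_abdomen + 0.95

def percent_fat_english(pct_fat_leg, us_abdomen):
    """Reported regression for the English sample (R = 0.80, s.e.e. = 3.6%)."""
    return 0.435 * pct_fat_leg + 0.230 * us_abdomen - 0.765
```

With equal end radii the frustum reduces to a cylinder, which is a quick sanity check on the formula.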

  9. Finite Volume Methods: Foundation and Analysis

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Ohlberger, Mario

    2003-01-01

    Finite volume methods are a class of discretization schemes that have proven highly successful in approximating the solution of a wide variety of conservation law systems. They are extensively used in fluid mechanics, porous media flow, meteorology, electromagnetics, models of biological processes, semiconductor device simulation and many other engineering areas governed by conservative systems that can be written in integral control volume form. This article reviews elements of the foundation and analysis of modern finite volume methods. The primary advantages of these methods are numerical robustness through the attainment of discrete maximum (minimum) principles, applicability on very general unstructured meshes, and the intrinsic local conservation properties of the resulting schemes. Throughout this article, specific attention is given to scalar nonlinear hyperbolic conservation laws and the development of high-order accurate schemes for discretizing them. A key tool in the design and analysis of finite volume schemes suitable for non-oscillatory discontinuity capturing is discrete maximum principle analysis. A number of building blocks used in the development of numerical schemes possessing local discrete maximum principles are reviewed in one and several space dimensions, e.g. monotone fluxes, E-fluxes, TVD discretization, non-oscillatory reconstruction, slope limiters, positive coefficient schemes, etc. When available, theoretical results concerning a priori and a posteriori error estimates are given. Further advanced topics are then considered, such as high-order time integration, discretization of diffusion terms and the extension to systems of nonlinear conservation laws.
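The monotone-flux building block mentioned in the abstract can be illustrated with a minimal 1D finite volume update for linear advection using the upwind flux (a toy sketch, not from the article; the grid size, CFL number and initial data are arbitrary choices):

```python
# Explicit finite volume update for u_t + a*u_x = 0 with the upwind
# (monotone) numerical flux, on a periodic 1D grid.

def upwind_step(u, a, dt, dx):
    """One time step: cell average change equals the net flux difference."""
    n = len(u)
    # For a > 0 the upwind numerical flux at interface i+1/2 is a*u[i].
    flux = [a * u[i] for i in range(n)]
    u_new = u[:]
    for i in range(n):
        # flux[i-1] wraps around at i = 0 (periodic boundary).
        u_new[i] = u[i] - dt / dx * (flux[i] - flux[i - 1])
    return u_new

# Advect a step profile one cell to the right with CFL number a*dt/dx = 1.
dx, a = 0.1, 1.0
dt = dx / a
u = [1.0] * 5 + [0.0] * 5
u = upwind_step(u, a, dt, dx)
```

With CFL = 1 the upwind update reduces to an exact one-cell shift, and the discrete maximum principle holds: the update creates no new maxima or minima.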

  10. SU-E-J-238: Monitoring Lymph Node Volumes During Radiotherapy Using Semi-Automatic Segmentation of MRI Images

    SciTech Connect

    Veeraraghavan, H; Tyagi, N; Riaz, N; McBride, S; Lee, N; Deasy, J

    2014-06-01

    Purpose: Identification and image-based monitoring of lymph nodes growing due to disease could be an attractive alternative to prophylactic head and neck irradiation. We evaluated the accuracy of the user-interactive Grow Cut algorithm for volumetric segmentation of radiotherapy-relevant lymph nodes from MRI taken weekly during radiotherapy. Methods: The algorithm employs user-drawn strokes in the image to volumetrically segment multiple structures of interest. We used 3D T2-weighted turbo spin echo images with an isotropic resolution of 1 mm3 and FOV of 492×492×300 mm3 of head and neck cancer patients who underwent weekly MR imaging during the course of radiotherapy. Various lymph node (LN) levels (N2, N3, N4/5) were individually contoured on the weekly MR images by an expert physician and used as ground truth in our study. The segmentation results were compared with the physician-drawn lymph nodes based on the DICE similarity score. Results: Three head and neck patients with 6 weekly MR images each were evaluated. Two patients had level N2 LN drawn and one patient had level N2, N3 and N4/5 LN drawn on each MR image. The algorithm took an average of a minute to segment the entire volume (512×512×300 mm3). The algorithm achieved an overall DICE similarity score of 0.78. The time taken for initializing and obtaining the volumetric mask was about 5 mins for cases with only N2 LN and about 15 mins for the case with N2, N3 and N4/5 level nodes. The longer initialization time for the latter case was due to the need for accurate user inputs to separate overlapping portions of the different LN. The standard deviation in segmentation accuracy at different time points was at most 0.05. Conclusions: Our initial evaluation of the Grow Cut segmentation shows reasonably accurate and consistent volumetric segmentations of LN with minimal user effort and time.
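The DICE similarity score used above to compare segmentations against ground truth is straightforward to compute; a minimal sketch for binary masks (illustrative, not the authors' code):

```python
def dice_score(mask_a, mask_b):
    """DICE similarity 2*|A∩B| / (|A| + |B|) for two flattened binary masks
    of equal length; 1.0 means perfect overlap, 0.0 means none."""
    intersection = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(map(bool, mask_a)) + sum(map(bool, mask_b))
    # Two empty masks are conventionally treated as a perfect match.
    return 2.0 * intersection / total if total else 1.0
```

For masks [1, 1, 0, 0] and [1, 0, 1, 0] the score is 0.5: one shared voxel out of two in each mask.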

  11. The Analysis of Image Segmentation Hierarchies with a Graph-based Knowledge Discovery System

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Cooke, Diane J.; Ketkar, Nikhil; Aksoy, Selim

    2008-01-01

    Currently available pixel-based analysis techniques do not effectively extract the information content from the increasingly available high spatial resolution remotely sensed imagery data. A general consensus is that object-based image analysis (OBIA) is required to effectively analyze this type of data. OBIA is usually a two-stage process: image segmentation followed by an analysis of the segmented objects. We are exploring an approach to OBIA in which hierarchical image segmentations provided by the Recursive Hierarchical Segmentation (RHSEG) software developed at NASA GSFC are analyzed by the Subdue graph-based knowledge discovery system developed by a team at Washington State University. In this paper we discuss our initial approach to representing the RHSEG-produced hierarchical image segmentations in a graphical form understandable by Subdue, and provide results on real and simulated data. We also discuss planned improvements designed to more effectively and completely convey the hierarchical segmentation information to Subdue and to improve processing efficiency.

  12. A novel iris segmentation algorithm based on small eigenvalue analysis

    NASA Astrophysics Data System (ADS)

    Harish, B. S.; Aruna Kumar, S. V.; Guru, D. S.; Ngo, Minh Ngoc

    2015-12-01

    In this paper, a simple and robust algorithm is proposed for iris segmentation. The proposed method consists of two steps. In the first step, the iris and pupil are segmented using the Robust Spatial Kernel FCM (RSKFCM) algorithm. RSKFCM is based on the traditional Fuzzy c-Means (FCM) algorithm; it incorporates spatial information and uses a kernel metric as the distance measure. In the second step, the small eigenvalue transformation is applied to localize the iris boundary. The transformation is based on statistical and geometrical properties of the small eigenvalue of the covariance matrix of a set of edge pixels. Extensive experiments are carried out on standard benchmark iris datasets (viz. CASIA-IrisV4 and UBIRIS.v2), and the proposed method is compared with existing iris segmentation methods. The proposed method has the least time complexity of O(n(i+p)). The results of the experiments emphasize that the proposed algorithm outperforms existing iris segmentation methods.
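The small eigenvalue of the covariance matrix of edge pixels, on which the boundary localization step relies, has a closed form in 2D; a minimal sketch (illustrative only, not the authors' implementation):

```python
import math

def small_eigenvalue(points):
    """Smaller eigenvalue of the 2x2 covariance matrix of (x, y) edge pixels.

    It is close to zero when the points are nearly collinear, which is what
    makes it useful as a measure of local straightness of an edge."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # Closed-form smaller eigenvalue of [[sxx, sxy], [sxy, syy]].
    return ((sxx + syy) - math.sqrt((sxx - syy) ** 2 + 4 * sxy ** 2)) / 2
```

Three collinear points give a small eigenvalue of exactly zero, while spread-out points give a strictly positive value.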

  13. Infant Word Segmentation and Childhood Vocabulary Development: A Longitudinal Analysis

    PubMed Central

    Singh, Leher; Reznick, J. Steven; Xuehua, Liang

    2012-01-01

    Infants begin to segment novel words from speech by 7.5 months, demonstrating an ability to track, encode and retrieve words in the context of larger units. Although it is presumed that word recognition at this stage is a prerequisite to constructing a vocabulary, the continuity between these stages of development has not yet been empirically demonstrated. The goal of the present study is to investigate whether infant word segmentation skills are indeed related to later lexical development. Two word segmentation tasks, varying in complexity, were administered in infancy and related to childhood outcome measures. Outcome measures consisted of age-normed productive vocabulary percentiles and a measure of cognitive development. Results demonstrated a strong degree of association between infant word segmentation abilities at 7 months and productive vocabulary size at 24 months. In addition, outcome groups, as defined by median vocabulary size and growth trajectories at 24 months, showed distinct word segmentation abilities as infants. These findings provide the first prospective evidence supporting the predictive validity of infant word segmentation tasks and suggest that they are indeed associated with mature word knowledge. PMID:22709398

  14. An efficient image segmentation algorithm for landscape analysis

    NASA Astrophysics Data System (ADS)

    Devereux, B. J.; Amable, G. S.; Posada, C. Costa

    2004-11-01

    Widespread development and use of object-based GIS in the environmental sciences has stimulated rapid growth in demand for parcel-based land cover data. Although image segmentation techniques applied to remotely sensed data offer the most effective and direct approach to generating such data, their use is still restricted to specialist applications. This paper describes a general-purpose segmentation algorithm capable of creating parcel boundaries from a wide range of image types. A brief review of image segmentation in a range of disciplines identifies key elements of a successful segmentation algorithm. The structure and implementation of the algorithm are then described, and its performance is illustrated using Landsat ETM imagery of Eastern England. Comparison of the segmentation product generated by the algorithm with those generated by independent human analysts demonstrates just under eighty percent correspondence between the computer algorithm and the manually derived products. Most of the differences stem from the more detailed results achieved by the segmentation algorithm.

  15. Automated detection, 3D segmentation and analysis of high resolution spine MR images using statistical shape models

    NASA Astrophysics Data System (ADS)

    Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.

    2012-12-01

    Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract the 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes, with mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02 and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm, respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans, and the shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.

  16. Application of taxonomy theory, Volume 1: Computing a Hopf bifurcation-related segment of the feasibility boundary. Final report

    SciTech Connect

    Zaborszky, J.; Venkatasubramanian, V.

    1995-10-01

    Taxonomy Theory is the first precise comprehensive theory for large power system dynamics modeled in any detail. The motivation for this project is to show that it can be used, practically, for analyzing a disturbance that actually occurred on a large system, which affected a sizable portion of the Midwest with supercritical Hopf-type oscillations. This event is well documented and studied. The report first summarizes Taxonomy Theory with an engineering flavor. Various computational approaches are then cited and analyzed for their suitability for use with Taxonomy Theory. Working equations are developed for computing a segment of the feasibility boundary that bounds the region of (operating) parameters throughout which the operating point can be moved without losing stability. Experimental software incorporating the large EPRI software package PSAPAC is then developed. After a summary of the events during the subject disturbance, numerous large-scale computations, up to 7,600 buses, are reported. These results are reduced into graphical and tabular forms, which are then analyzed and discussed. The report is divided into two volumes. This volume illustrates the use of Taxonomy Theory for computing the feasibility boundary and presents evidence that the event indeed led to a Hopf-type oscillation on the system. Furthermore, it shows that the theory can indeed be used for practical computational work with very large systems. Volume 2, a separate volume, will show that the disturbance led to a supercritical (that is, stable-oscillation) Hopf bifurcation.

  17. Atlas-Based Segmentation Improves Consistency and Decreases Time Required for Contouring Postoperative Endometrial Cancer Nodal Volumes

    SciTech Connect

    Young, Amy V.; Wortham, Angela; Wernick, Iddo; Evans, Andrew; Ennis, Ronald D.

    2011-03-01

    Purpose: Accurate target delineation of the nodal volumes is essential for three-dimensional conformal and intensity-modulated radiotherapy planning for endometrial cancer adjuvant therapy. We hypothesized that atlas-based segmentation ('autocontouring') would lead to time savings and more consistent contours among physicians. Methods and Materials: A reference anatomy atlas was constructed using the data from 15 postoperative endometrial cancer patients by contouring the pelvic nodal clinical target volume on the simulation computed tomography scan according to the Radiation Therapy Oncology Group 0418 trial using commercially available software. On the simulation computed tomography scans from 10 additional endometrial cancer patients, the nodal clinical target volume autocontours were generated. Three radiation oncologists corrected the autocontours and delineated the manual nodal contours under timed conditions while unaware of the other contours. The time difference was determined, and the overlap of the contours was calculated using Dice's coefficient. Results: For all physicians, manual contouring of the pelvic nodal target volumes and editing the autocontours required a mean ± standard deviation of 32 ± 9 vs. 23 ± 7 minutes, respectively (p = .000001), a 26% time savings. For each physician, the time required to delineate the manual contours vs. correcting the autocontours was 30 ± 3 vs. 21 ± 5 min (p = .003), 39 ± 12 vs. 30 ± 5 min (p = .055), and 29 ± 5 vs. 20 ± 5 min (p = .0002). The mean overlap increased from manual contouring (0.77) to correcting the autocontours (0.79; p = .038). Conclusion: The results of our study have shown that autocontouring leads to increased consistency and time savings when contouring the nodal target volumes for adjuvant treatment of endometrial cancer, although the autocontours still required careful editing to ensure that the lymph nodes at risk of recurrence are properly included in the target volume.

  18. Simultaneous Segmentation of Retinal Surfaces and Microcystic Macular Edema in SDOCT Volumes

    PubMed Central

    Antony, Bhavna J.; Lang, Andrew; Swingle, Emily K.; Al-Louzi, Omar; Carass, Aaron; Solomon, Sharon; Calabresi, Peter A.; Saidha, Shiv; Prince, Jerry L.

    2016-01-01

    Optical coherence tomography (OCT) is a noninvasive imaging modality that has begun to find widespread use in retinal imaging for the detection of a variety of ocular diseases. In addition to structural changes in the form of altered retinal layer thicknesses, pathological conditions may also cause the formation of edema within the retina. In multiple sclerosis, for instance, the nerve fiber and ganglion cell layers are known to thin. Additionally, the formation of pseudocysts called microcystic macular edema (MME) has also been observed in the eyes of about 5% of MS patients, and its presence has been shown to be correlated with disease severity. Previously, we proposed separate algorithms for the segmentation of retinal layers and MME, but since MME mainly occurs within specific regions of the retina, a simultaneous approach is advantageous. In this work, we propose an automated globally optimal graph-theoretic approach that simultaneously segments the retinal layers and the MME in volumetric OCT scans. SD-OCT scans from one eye of 12 MS patients with known MME and 8 healthy controls were acquired and the pseudocysts manually traced. The overall precision and recall of the pseudocyst detection was found to be 86.0% and 79.5%, respectively. PMID:27199502
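The precision and recall figures quoted for pseudocyst detection follow the standard definitions from the detection counts; a minimal sketch (illustrative, not the authors' code):

```python
def precision_recall(true_positive, false_positive, false_negative):
    """Precision = TP/(TP+FP): fraction of detections that are real.
    Recall = TP/(TP+FN): fraction of real pseudocysts that are detected."""
    precision = true_positive / (true_positive + false_positive)
    recall = true_positive / (true_positive + false_negative)
    return precision, recall
```

For example, 8 correct detections with 2 false alarms and 2 misses gives precision 0.8 and recall 0.8.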

  19. Three-dimensional visualization of the craniofacial patient: volume segmentation, data integration and animation.

    PubMed

    Enciso, R; Memon, A; Mah, J

    2003-01-01

    The research goal at the Craniofacial Virtual Reality Laboratory of the School of Dentistry in conjunction with the Integrated Media Systems Center, School of Engineering, University of Southern California, is to develop computer methods to accurately visualize patients in three dimensions using advanced imaging and data acquisition devices such as cone-beam computerized tomography (CT) and mandibular motion capture. Data from these devices were integrated for three-dimensional (3D) patient-specific visualization, modeling and animation. Generic methods are in development that can be used with common CT image format (DICOM), mesh format (STL) and motion data (3D position over time). This paper presents preliminary descriptive studies on: 1) segmentation of the lower and upper jaws with two types of CT data--(a) traditional whole head CT data and (b) the new dental Newtom CT; 2) manual integration of accurate 3D tooth crowns with the segmented lower jaw 3D model; 3) realistic patient-specific 3D animation of the lower jaw. PMID:14606537

  20. Accurate segmentation for quantitative analysis of vascular trees in 3D micro-CT images

    NASA Astrophysics Data System (ADS)

    Riedel, Christian H.; Chuah, Siang C.; Zamir, Mair; Ritman, Erik L.

    2002-04-01

    Quantitative analysis of the branching geometry of multiple branching-order vascular trees from 3D micro-CT data requires an efficient segmentation algorithm that leads to a consistent, accurate representation of the tree structure. To explore different segmentation techniques, we use isotropic micro-CT images of intact rat coronary, pulmonary and hepatic opacified arterial trees with cubic voxel-side length of 5-20 micrometer. We implemented an active topology adaptive surface model for segmentation and compared the results from this algorithm with segmentations of the same image data using conventional segmentation methods. Because of the modulation transfer function of the micro-CT scanner, thresholding and region growing techniques usually underestimate small, or overestimate large, vessel diameters depending on the chosen grayscale thresholds. Furthermore, these approaches lack the robustness needed to overcome the effects of typical imaging artifacts, such as image noise at the vessel surfaces, which tend to propagate errors in the analysis of the tree due to its hierarchical nature. Our adaptable surface models include local gray-scale statistics, object boundary and object size information into the segmentation algorithm, thus leading to a higher stability and accuracy of the segmentation process.

  1. Normative Data for Body Segment Weights, Volumes, and Densities in Cadaver and Living Subjects

    ERIC Educational Resources Information Center

    Gold, Ellen; Katch, Victor

    1976-01-01

    Applying Dempster's data alone to problems in human motion studies of living subjects is at best a rough approximation, in light of apparent differences between Dempster's data and the grand mean calculated for all data with respect to volume and weight. (MB)

  2. Analysis and comparison of space/spatial-frequency and multiscale methods for texture segmentation

    NASA Astrophysics Data System (ADS)

    Zhu, Yue Min; Goutte, Robert

    1995-01-01

    We investigate the use of space/spatial-frequency and multiscale analysis methods for texture segmentation, with emphasis on the 2D Wigner-Ville distribution and Morlet wavelet transform. For these two methods, the discrete versions that are necessary for numerical implementations are discussed. Texture segmentation paradigms making use of local spectral measurements from these two types of representations are described. The interest of the proposed spatial-frequency- and scale-based segmentation methods is illustrated with the aid of examples on both synthesized and natural images, and their segmentation performance is analyzed and compared.

  3. Infant Word Segmentation and Childhood Vocabulary Development: A Longitudinal Analysis

    ERIC Educational Resources Information Center

    Singh, Leher; Reznick, J. Steven; Xuehua, Liang

    2012-01-01

    Infants begin to segment novel words from speech by 7.5 months, demonstrating an ability to track, encode and retrieve words in the context of larger units. Although it is presumed that word recognition at this stage is a prerequisite to constructing a vocabulary, the continuity between these stages of development has not yet been empirically…

  4. Health Lifestyles: Audience Segmentation Analysis for Public Health Interventions.

    ERIC Educational Resources Information Center

    Slater, Michael D.; Flora, June A.

    This paper is concerned with the application of market research techniques to segment large populations into homogeneous units in order to improve the reach, utilization, and effectiveness of health programs. The paper identifies seven distinctive patterns of health attitudes, social influences, and behaviors using cluster analytic techniques in a…

  5. An Experimental Analysis of Phoneme Blending and Segmenting Skills

    ERIC Educational Resources Information Center

    Daly, Edward J., III; Johnson, Sarah; LeClair, Courtney

    2009-01-01

    In this 2-experiment study, experimental analyses of phoneme blending and segmenting skills were conducted with four-first grade students. Intraindividual analyses were conducted to identify the effects of classroom-based instruction on blending phonemes in Experiment 1. In Experiment 2, the effects of an individualized intervention for the…

  6. Fast Hough transform analysis: pattern deviation from line segment

    NASA Astrophysics Data System (ADS)

    Ershov, E.; Terekhin, A.; Nikolaev, D.; Postnikov, V.; Karpenko, S.

    2015-12-01

    In this paper, we analyze properties of dyadic patterns. These patterns were proposed to approximate line segments in the fast Hough transform (FHT). Initially, these patterns had only a recursive computational scheme. We provide a simple closed-form expression for calculating point coordinates and their deviation from the corresponding ideal lines.

  7. Extended fractal analysis for texture classification and segmentation.

    PubMed

    Kaplan, L M

    1999-01-01

    The Hurst parameter for two-dimensional (2-D) fractional Brownian motion (fBm) provides a single number that completely characterizes isotropic textured surfaces whose roughness is scale-invariant. Extended self-similar (ESS) processes were previously introduced in order to provide a generalization of fBm. These new processes are described by a number of multiscale Hurst parameters. In contrast to the single Hurst parameter, the extended parameters are able to characterize a greater variety of natural textures where the roughness of these textures is not necessarily scale-invariant. In this work, we evaluate the effectiveness of multiscale Hurst parameters as features for texture classification and segmentation. For texture classification, the performance of the generalized Hurst features is compared to traditional Hurst and Gabor features. Our experiments show that classification accuracy for the generalized Hurst and Gabor features is comparable even though the generalized Hurst features lower the dimensionality by a factor of five. Next, the segmentation accuracy using generalized and standard Hurst features is evaluated on images of texture mosaics. For these experiments, the performance is evaluated with and without supplemental contrast and average grayscale features. Finally, we investigate the effectiveness of the Hurst features for segmenting real synthetic aperture radar (SAR) imagery. PMID:18267432

  8. Three-dimensional reconstruction of active muscle cell segment volume from two-dimensional optical sections

    NASA Astrophysics Data System (ADS)

    Lake, David S.; Griffiths, P. J.; Cecchi, G.; Taylor, Stuart R.

    1999-06-01

    An ultramicroscope coupled to a square-aspect-ratio sensor was used to image the dynamic geometry of live muscle cells. Skeletal muscle cells, dissected from frogs, were suspended in the optical axis and illuminated from one side by a focused slit of white light. The sensor detected light scattered at 90 degrees to the incident beam. Serial cross-sections were acquired as a motorized stage moved the cell through the slit of light. The axial force at right angles to the cross-sections was recorded simultaneously. Cross-sections were aligned by a least-squares fit of their centroids to a straight line, to correct for misalignments between the axes of the microscope, the stage, and the sensor. Three-dimensional volumes were reconstructed from each series and viewed from all directions to locate regions that remained at matching axial positions. The angle of the principal axis and the cross-sectional area were calculated and associated with force recorded concurrently. The cells adjusted their profile and volume to remain stable against turning as contractile force rose and fell, as predicted by the law of conservation of angular momentum.

  9. Fetal brain MRI: segmentation and biometric analysis of the posterior fossa.

    PubMed

    Claude, Isabelle; Daire, Jean-Luc; Sebag, Guy

    2004-04-01

    This paper presents a novel approach to fetal magnetic resonance image segmentation and biometric analysis of the posterior fossa's midline structures. We developed a semi-automatic segmentation method (based on a region growing technique) and tested the algorithm on images of 104 normal fetuses. Using the segmented regions of interest (posterior fossa, vermis, and brainstem), we computed four relative area ratios. Statistical and clinical analysis of our results showed that the relative development of these structures appears to be independent of pregnancy term. In an additional study of 23 pathological cases, one of the four measurements was always significantly different from the corresponding value observed in normal cases. PMID:15072216

  10. Segmentation and Classification of Remotely Sensed Images: Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Syed, Abdul Haleem

    Land-use-and-land-cover (LULC) mapping is crucial in precision agriculture, environmental monitoring, disaster response, and military applications. The demand for improved and more accurate LULC maps has led to the emergence of a key methodology known as Geographic Object-Based Image Analysis (GEOBIA). The core idea of the GEOBIA for an object-based classification system (OBC) is to change the unit of analysis from single-pixels to groups-of-pixels called `objects' through segmentation. While this new paradigm solved problems and improved global accuracy, it also raised new challenges such as the loss of accuracy in categories that are less abundant, but potentially important. Although this trade-off may be acceptable in some domains, the consequences of such an accuracy loss could be potentially fatal in others (for instance, landmine detection). This thesis proposes a method to improve OBC performance by eliminating such accuracy losses. Specifically, we examine the two key players of an OBC system: Hierarchical Segmentation and Supervised Classification. Further, we propose a model to understand the source of accuracy errors in minority categories and provide a method called Scale Fusion to eliminate those errors. This proposed fusion method involves two stages. First, the characteristic scale for each category is estimated through a combination of segmentation and supervised classification. Next, these estimated scales (segmentation maps) are fused into one combined-object-map. Classification performance is evaluated by comparing results of the multi-cut-and-fuse approach (proposed) to the traditional single-cut (SC) scale selection strategy. Testing on four different data sets revealed that our proposed algorithm improves accuracy on minority classes while performing just as well on abundant categories. 
Another obstacle presented by today's remotely sensed images is the volume of information produced by modern sensors with high spatial and temporal resolution. For instance, over this decade, it is projected that 353 earth observation satellites from 41 countries will be launched. Timely production of geo-spatial information from these large volumes is a challenge. This is because in traditional methods the underlying representation and information processing is still primarily pixel-based, which implies that as the number of pixels increases, so does the computational complexity. To overcome this bottleneck created by pixel-based representation, this thesis proposes a dart-based discrete topological representation (DBTR), which differs from pixel-based methods in its use of a reduced, boundary-based representation. Intuitively, the efficiency gains arise from the observation that it is cheaper to represent a region by its boundary (darts) than by its area (pixels). We found that our implementation of DBTR not only improved computational efficiency but also enhanced our ability to encode and extract spatial information. Overall, this thesis presents solutions to two problems of an object-based classification system: accuracy and efficiency. Our proposed Scale Fusion method demonstrated improvements in accuracy, while our dart-based topological representation (DBTR) showed improved efficiency in the extraction and encoding of spatial information.

  11. Who avoids going to the doctor and why? Audience segmentation analysis for application of message development.

    PubMed

    Kannan, Viji Diane; Veazie, Peter J

    2015-01-01

    This exploratory study examines the prevalent and detrimental health care phenomenon of patient delay in order to inform formative research leading to the design of communication strategies. Delayed medical care diminishes optimal treatment choices, negatively impacts prognosis, and increases medical costs. Various communication strategies have been employed to combat patient delay, with limited success. This study fills a gap in research informing those interventions by focusing on the portion of patient delay occurring after symptoms have been assessed as a sign of illness and the need for medical care has been determined. We used CHAID segmentation analysis to produce homogeneous segments from the sample according to the propensity to avoid medical care. CHAID is a criterion-based predictive cluster analysis technique. CHAID examines a variety of characteristics to find the one most strongly associated with avoiding doctor visits through a chi-squared test and assessment of statistical significance. The characteristics identified then define the segments. Fourteen segments were produced. Age was the first delineating characteristic, with younger age groups comprising a greater proportion of avoiders. Other segments containing a comparatively larger percent of avoiders were characterized by lower income, lower education, being uninsured, and being male. Each segment was assessed for psychographic properties associated with avoiding care, reasons for avoiding care, and trust in health information sources. While the segments display distinct profiles, having had positive provider experiences, having high health self-efficacy, and having an internal rather than external or chance locus of control were associated with low avoidance among several segments. Several segments were either more or less likely to cite time or money as the reason for avoiding care. 
And several older aged segments were less likely than the remaining sample to trust the government as a source for health information. Implications for future research are discussed. PMID:25062466
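The node-splitting step at the heart of CHAID can be sketched as follows. This is an illustrative reconstruction, not the authors' code: at each node, the characteristic whose cross-tabulation with the outcome (avoider vs. non-avoider) yields the strongest chi-squared association defines the next segments. Real CHAID additionally merges similar categories and compares Bonferroni-adjusted p-values; this sketch compares raw statistics, and the survey records below are hypothetical.

```python
def chi_squared(table):
    """Pearson chi-squared statistic for a contingency table (list of rows)."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    return sum((table[i][j] - rows[i] * cols[j] / n) ** 2 / (rows[i] * cols[j] / n)
               for i in range(len(rows)) for j in range(len(cols)))

def best_split(records, characteristics, outcome="avoider"):
    """Pick the characteristic most strongly associated with the outcome."""
    def table_for(char):
        levels = sorted({r[char] for r in records})
        outcomes = sorted({r[outcome] for r in records})
        return [[sum(1 for r in records if r[char] == lv and r[outcome] == oc)
                 for oc in outcomes] for lv in levels]
    return max(characteristics, key=lambda c: chi_squared(table_for(c)))

# Hypothetical records using two of the characteristics the study reports.
records = (
    [{"age": "younger", "insured": "no", "avoider": 1}] * 30
  + [{"age": "younger", "insured": "yes", "avoider": 0}] * 10
  + [{"age": "older", "insured": "yes", "avoider": 0}] * 40
  + [{"age": "older", "insured": "no", "avoider": 1}] * 5
)
```

With these toy counts, insurance status separates avoiders from non-avoiders perfectly, so `best_split` selects it over age.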

  12. Combined texture feature analysis of segmentation and classification of benign and malignant tumour CT slices.

    PubMed

    Padma, A; Sukanesh, R

    2013-01-01

A computer software system is designed for the segmentation and classification of benign and malignant tumour slices in brain computed tomography (CT) images. This paper presents a method to find and select both the dominant run-length and co-occurrence texture features of the region of interest (ROI) of the tumour region of each slice, to segment the ROI by fuzzy c-means (FCM) clustering, and to evaluate the performance of support vector machine (SVM)-based classifiers in classifying benign and malignant tumour slices. Two hundred and six tumour-confirmed CT slices are considered in this study. A total of 17 texture features are extracted by a feature extraction procedure, and six features are selected using principal component analysis (PCA). This study constructed the SVM-based classifier with the selected features and compared the segmentation results with the experienced radiologist-labelled ground truth (target). Quantitative analysis between ground truth and segmented tumour is presented in terms of segmentation accuracy, segmentation error and overlap similarity measures such as the Jaccard index. The classification performance of the SVM-based classifier with the same selected features is also evaluated using a 10-fold cross-validation method. The proposed system shows that the newly found texture features make an important contribution to classifying benign and malignant tumour slices efficiently and accurately with less computational time. The experimental results showed that the proposed system achieves high segmentation and classification accuracy as measured by the Jaccard index, sensitivity and specificity. PMID:23094909
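The overlap measure named above can be made concrete. This is a generic sketch of the Jaccard index between a segmented mask and a radiologist-labelled ground truth, not the authors' implementation; the two small masks are hypothetical.

```python
import numpy as np

def jaccard_index(segmented, ground_truth):
    """Jaccard index |A intersect B| / |A union B| between two binary masks."""
    a = np.asarray(segmented, dtype=bool)
    b = np.asarray(ground_truth, dtype=bool)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 1.0

seg   = [[1, 1, 0], [0, 1, 0]]  # hypothetical FCM segmentation output
truth = [[1, 0, 0], [0, 1, 1]]  # hypothetical radiologist labels
```

Here the masks share 2 pixels out of 4 marked in either, giving a Jaccard index of 0.5; identical masks give 1.0.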

  13. Comparison of Acute and Chronic Traumatic Brain Injury Using Semi-Automatic Multimodal Segmentation of MR Volumes

    PubMed Central

    Chambers, Micah C.; Alger, Jeffry R.; Filippou, Maria; Prastawa, Marcel W.; Wang, Bo; Hovda, David A.; Gerig, Guido; Toga, Arthur W.; Kikinis, Ron; Vespa, Paul M.; Van Horn, John D.

    2011-01-01

Abstract Although neuroimaging is essential for prompt and proper management of traumatic brain injury (TBI), there is a regrettable and acute lack of robust methods for the visualization and assessment of TBI pathophysiology, especially for the purpose of improving clinical outcome metrics. Until now, the application of automatic segmentation algorithms to TBI in a clinical setting has remained an elusive goal because existing methods have, for the most part, been insufficiently robust to faithfully capture TBI-related changes in brain anatomy. This article introduces and illustrates the combined use of multimodal TBI segmentation and time point comparison using 3D Slicer, a widely used software environment whose TBI data processing solutions are openly available. For three representative TBI cases, semi-automatic tissue classification and 3D model generation are performed to enable intra-patient time point comparison of TBI using multimodal volumetrics and clinical atrophy measures. Identification and quantitative assessment of extra- and intra-cortical bleeding, lesions, edema, and diffuse axonal injury are demonstrated. The proposed tools allow cross-correlation of multimodal metrics from structural imaging (e.g., structural volume, atrophy measurements) with clinical outcome variables and other potential factors predictive of recovery. In addition, the workflows described are suitable for TBI clinical practice and patient monitoring, particularly for assessing damage extent and for the measurement of neuroanatomical change over time. With knowledge of general location, extent, and degree of change, such metrics can be associated with clinical measures and subsequently used to suggest viable treatment options. PMID:21787171

  14. Landmine detection using IR image segmentation by means of fractal dimension analysis

    NASA Astrophysics Data System (ADS)

    Abbate, Horacio A.; Gambini, Juliana; Delrieux, Claudio; Castro, Eduardo H.

    2009-05-01

This work is concerned with the detection of buried landmines using long-wave infrared images obtained during the heating or cooling of the soil, followed by a segmentation process applied to the images. The segmentation is performed by means of local fractal dimension (LFD) analysis as a feature descriptor. We use two different LFD estimators: box-counting dimension (BC) and differential box-counting dimension (DBC). These features are computed on a per-pixel basis, and the set of features is clustered by means of the K-means method. This segmentation technique produces outstanding results with low computational cost.
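The box-counting estimator named above can be sketched in a few lines. Note the hedge: the paper computes the dimension in local windows to obtain a per-pixel feature for K-means, whereas this illustrative version estimates a single global dimension for a binary image by counting occupied boxes at several scales and fitting the log-log slope.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting dimension of a binary image: count the
    occupied s x s boxes N(s) at each scale s, then fit log N(s) ~ -D log s."""
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in sizes:
        h, w = mask.shape
        # Trim so the image tiles exactly into s x s boxes.
        trimmed = mask[: h - h % s, : w - w % s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

As a sanity check, a completely filled 64 x 64 image is a 2-D region, so the estimator returns 2.0.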

  15. Theoretical analysis and experimental verification on valve-less piezoelectric pump with hemisphere-segment bluff-body

    NASA Astrophysics Data System (ADS)

    Ji, Jing; Zhang, Jianhui; Xia, Qixiao; Wang, Shouyin; Huang, Jun; Zhao, Chunsheng

    2014-05-01

Existing research on no-moving-part valves in valve-less piezoelectric pumps mainly concentrates on pipeline valves and chamber-bottom valves, which complicates the structure and manufacturing process of the pump channel and chamber bottom. Furthermore, valves whose positions are fixed with respect to the inlet and outlet also worsen the adjustability and controllability of flow rate. In order to overcome these shortcomings, this paper puts forward a novel implantable structure of a valve-less piezoelectric pump with hemisphere-segments in the pump chamber. Based on the theory of flow around a bluff-body, the flow resistance differs between the spherical and flat surfaces of a hemisphere-segment when fluid flows past it, and a macroscopic flow resistance difference is thus formed. A novel valve-less piezoelectric pump with hemisphere-segment bluff-body (HSBB) is presented and designed; the HSBB acts as the no-moving-part valve. By the method of volume and momentum comparison, the stress on the bluff-body in the pump chamber is analyzed, the essential reason for unidirectional fluid pumping is expounded, and the flow rate formula is obtained. To verify the theory, a prototype was produced and used for experimental research on the relationship between flow rate, pressure difference, voltage, and frequency, which confirms the above theory. The prototype has six hemisphere-segments in a water-filled chamber, and the effective diameter of the piezoelectric bimorph is 30 mm. The experimental results show that the flow rate reaches 0.50 mL/s at a frequency of 6 Hz and a voltage of 110 V, and the pressure difference reaches 26.2 mm H2O at a frequency of 6 Hz and a voltage of 160 V. This research proposes a valve-less piezoelectric pump with hemisphere-segment bluff-body, and its validity and feasibility are verified through theoretical analysis and experiment.

  16. Tracking and data acquisition system for the 1990's. Volume 5: TDAS ground segment architecture and operations concept

    NASA Technical Reports Server (NTRS)

    Daly, R.

    1983-01-01

    Tracking and data acquisition system (TDAS) ground segment and operational requirements, TDAS RF terminal configurations, TDAS ground segment elements, the TDAS network, and the TDAS ground terminal hardware are discussed.

  17. Analysis of radially cracked ring segments subject to forces and couples

    NASA Technical Reports Server (NTRS)

Gross, B.; Srawley, J. E.

    1975-01-01

    Results of planar boundary collocation analysis are given for ring segment (C shaped) specimens with radial cracks, subjected to combined forces and couples. Mode I stress intensity factors and crack mouth opening displacements were determined for ratios of outer to inner radius in the range 1.1 to 2.5, and ratios of crack length to segment width in the range 0.1 to 0.8.

  18. Vessel segmentation analysis of ischemic stroke images acquired with photoacoustic microscopy

    NASA Astrophysics Data System (ADS)

    Soetikno, Brian; Hu, Song; Gonzales, Ernie; Zhong, Qiaonan; Maslov, Konstantin; Lee, Jin-Moo; Wang, Lihong V.

    2012-02-01

    We have applied optical-resolution photoacoustic microscopy (OR-PAM) for longitudinal monitoring of cerebral metabolism through the intact skull of mice before, during, and up to 72 hours after a 1-hour transient middle cerebral artery occlusion (tMCAO). The high spatial resolution of OR-PAM enabled us to develop vessel segmentation techniques for segment-wise analysis of cerebrovascular responses.

  19. Decomposition analysis of differential dose volume histograms.

    PubMed

Van den Heuvel, Frank

    2006-02-01

Dose volume histograms are a common tool to assess the value of a treatment plan for various forms of radiation therapy treatment. The purpose of this work is to introduce, validate, and apply a set of tools to analyze differential dose volume histograms by decomposing them into physically and clinically meaningful normal distributions. A weighted sum of the decomposed normal distributions (e.g., weighted dose) is proposed as a new measure of target dose, rather than the more unstable point dose. The method and its theory are presented and validated using simulated distributions. Additional validation is performed by analyzing simple four field box techniques encompassing a predefined target, using different treatment energies inside a water phantom. Furthermore, two clinical situations are analyzed using this methodology to illustrate practical usefulness. A treatment plan for a breast patient using a tangential field setup with wedges is compared to a comparable geometry using dose compensators. Finally, a normal tissue complication probability (NTCP) calculation is refined using this decomposition. The NTCP calculation is performed on a liver as organ at risk in a treatment of a mesothelioma patient with involvement of the right lung. The comparison of the wedged breast treatment versus the compensator technique yields comparable classical dose parameters (e.g., conformity index ≈ 1 and equal dose at the ICRU dose point). The methodology proposed here shows a 4% difference in weighted dose outlining the difference in treatment using a single parameter instead of at least two in a classical analysis (e.g., mean dose, and maximal dose, or total dose variance). NTCP calculations for the mesothelioma case are generated automatically and show a 3% decrease with respect to the classical calculation. The decrease is slightly dependent on the fractionation and on the α/β value utilized. 
In conclusion, this method is able to distinguish clinically important differences between treatment plans using a single parameter. This methodology shows promise as an objective tool for analyzing NTCP and doses in larger studies, as the only information needed is the dose volume histogram. PMID:16532934
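Once a differential DVH has been decomposed, the proposed "weighted dose" reduces to a simple computation over the fitted components. The sketch below is illustrative, not the author's code: the two-component decomposition (80% of the volume around 60 Gy, a cooler 20% shoulder around 54 Gy) is hypothetical, and the fitting of the normal components is assumed to have been done already.

```python
import numpy as np

def weighted_dose(weights, means):
    """Weighted dose: the volume-fraction-weighted sum of the fitted normal means."""
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w * np.asarray(means, dtype=float)) / w.sum())

def normal(x, mu, sigma):
    """Normal probability density used as a DVH component."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Hypothetical differential DVH built from two normal components.
dose_axis = np.linspace(40.0, 70.0, 601)  # dose in Gy, 0.05 Gy bins
ddvh = 0.8 * normal(dose_axis, 60.0, 1.5) + 0.2 * normal(dose_axis, 54.0, 2.0)

wd = weighted_dose([0.8, 0.2], [60.0, 54.0])
```

For these numbers the weighted dose is 0.8 x 60 + 0.2 x 54 = 58.8 Gy, a single parameter summarizing a distribution that a point dose would describe poorly.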

  20. Decomposition analysis of differential dose volume histograms

    SciTech Connect

    Heuvel, Frank van den

    2006-02-15

Dose volume histograms are a common tool to assess the value of a treatment plan for various forms of radiation therapy treatment. The purpose of this work is to introduce, validate, and apply a set of tools to analyze differential dose volume histograms by decomposing them into physically and clinically meaningful normal distributions. A weighted sum of the decomposed normal distributions (e.g., weighted dose) is proposed as a new measure of target dose, rather than the more unstable point dose. The method and its theory are presented and validated using simulated distributions. Additional validation is performed by analyzing simple four field box techniques encompassing a predefined target, using different treatment energies inside a water phantom. Furthermore, two clinical situations are analyzed using this methodology to illustrate practical usefulness. A treatment plan for a breast patient using a tangential field setup with wedges is compared to a comparable geometry using dose compensators. Finally, a normal tissue complication probability (NTCP) calculation is refined using this decomposition. The NTCP calculation is performed on a liver as organ at risk in a treatment of a mesothelioma patient with involvement of the right lung. The comparison of the wedged breast treatment versus the compensator technique yields comparable classical dose parameters (e.g., conformity index ≈ 1 and equal dose at the ICRU dose point). The methodology proposed here shows a 4% difference in weighted dose outlining the difference in treatment using a single parameter instead of at least two in a classical analysis (e.g., mean dose, and maximal dose, or total dose variance). NTCP calculations for the mesothelioma case are generated automatically and show a 3% decrease with respect to the classical calculation. The decrease is slightly dependent on the fractionation and on the α/β value utilized. 
In conclusion, this method is able to distinguish clinically important differences between treatment plans using a single parameter. This methodology shows promise as an objective tool for analyzing NTCP and doses in larger studies, as the only information needed is the dose volume histogram.

  1. Label-fusion-segmentation and deformation-based shape analysis of deep gray matter in multiple sclerosis: the impact of thalamic subnuclei on disability.

    PubMed

    Magon, Stefano; Chakravarty, M Mallar; Amann, Michael; Weier, Katrin; Naegelin, Yvonne; Andelova, Michaela; Radue, Ernst-Wilhelm; Stippich, Christoph; Lerch, Jason P; Kappos, Ludwig; Sprenger, Till

    2014-08-01

    Deep gray matter (DGM) atrophy has been reported in patients with multiple sclerosis (MS) already at early stages of the disease and progresses throughout the disease course. We studied DGM volume and shape and their relation to disability in a large cohort of clinically well-described MS patients using new subcortical segmentation methods and shape analysis. Structural 3D magnetic resonance images were acquired at 1.5 T in 118 patients with relapsing remitting MS. Subcortical structures were segmented using a multiatlas technique that relies on the generation of an automatically generated template library. To localize focal morphological changes, shape analysis was performed by estimating the vertex-wise displacements each subject must undergo to deform to a template. Multiple linear regression analysis showed that the volume of specific thalamic nuclei (the ventral nuclear complex) together with normalized gray matter volume explains a relatively large proportion of expanded disability status scale (EDSS) variability. The deformation-based displacement analysis confirmed the relation between thalamic shape and EDSS scores. Furthermore, white matter lesion volume was found to relate to the shape of all subcortical structures. This novel method for the analysis of subcortical volume and shape allows depicting specific contributions of DGM abnormalities to neurological deficits in MS patients. The results stress the importance of ventral thalamic nuclei in this respect. PMID:24510715

  2. Segmental Musculoskeletal Examinations using Dual-Energy X-Ray Absorptiometry (DXA): Positioning and Analysis Considerations

    PubMed Central

    Hart, Nicolas H.; Nimphius, Sophia; Spiteri, Tania; Cochrane, Jodie L.; Newton, Robert U.

    2015-01-01

Musculoskeletal examinations provide informative and valuable quantitative insight into muscle and bone health. DXA is one mainstream tool used to accurately and reliably determine body composition components and bone mass characteristics in-vivo. Presently, whole body scan models separate the body into axial and appendicular regions; however, there is a need for localised appendicular segmentation models to further examine regions of interest within the upper and lower extremities. Similarly, inconsistencies pertaining to patient positioning exist in the literature which influence measurement precision and analysis outcomes, highlighting a need for a standardised procedure. This paper provides standardised and reproducible: 1) positioning and analysis procedures using DXA and 2) reliable segmental examinations through descriptive appendicular boundaries. Whole-body scans were performed on forty-six (n = 46) football athletes (age: 22.9 ± 4.3 yrs; height: 1.85 ± 0.07 m; weight: 87.4 ± 10.3 kg; body fat: 11.4 ± 4.5 %) using DXA. All segments across all scans were analysed three times by the main investigator on three separate days, and by three independent investigators a week following the original analysis. To examine intra-rater and inter-rater (between-day and between-researcher) reliability, coefficients of variation (CV) and intraclass correlation coefficients (ICC) were determined. Positioning and segmental analysis procedures presented in this study produced very high, nearly perfect intra-tester (CV ≤ 2.0%; ICC ≥ 0.988) and inter-tester (CV ≤ 2.4%; ICC ≥ 0.980) reliability, demonstrating excellent reproducibility within and between practitioners. Standardised examinations of axial and appendicular segments are necessary. Future studies aiming to quantify and report segmental analyses of the upper- and lower-body musculoskeletal properties using whole-body DXA scans are encouraged to use the patient positioning and image analysis procedures outlined in this paper. 
Key points Musculoskeletal examinations using DXA technology require highly standardised and reproducible patient positioning and image analysis procedures to accurately measure and monitor axial, appendicular and segmental regions of interest. Internal rotation and fixation of the lower-limbs is strongly recommended during whole-body DXA scans to prevent undesired movement, improve frontal mass accessibility and enhance ankle joint visibility during scan performance and analysis. Appendicular segmental analyses using whole-body DXA scans are highly reliable for all regional upper-body and lower-body segmentations, with hard-tissue (CV ≤ 1.5%; R ≥ 0.990) achieving greater reliability and lower error than soft-tissue (CV ≤ 2.4%; R ≥ 0.980) masses when using our appendicular segmental boundaries. PMID:26336349
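Of the two reliability statistics reported, the coefficient of variation is straightforward to reproduce. This is a generic sketch with hypothetical measurements, not the authors' analysis code; the ICC would additionally require a two-way ANOVA variance decomposition, which is omitted here.

```python
import numpy as np

def cv_percent(measurements):
    """Within-subject coefficient of variation (%): per-subject SD over mean,
    averaged across subjects. `measurements` has one row per subject and one
    column per repeated analysis of the same scan."""
    m = np.asarray(measurements, dtype=float)
    return float(np.mean(m.std(axis=1, ddof=1) / m.mean(axis=1)) * 100)

# Hypothetical segmental lean-mass readings (kg), three analyses per subject.
repeats = [[10.0, 10.0, 10.0],
           [9.0, 10.0, 11.0]]
```

The first subject's three analyses agree exactly (CV 0%) and the second varies by 1 kg around 10 kg (CV 10%), so the averaged within-subject CV is 5%.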

  3. Segmental Musculoskeletal Examinations using Dual-Energy X-Ray Absorptiometry (DXA): Positioning and Analysis Considerations.

    PubMed

    Hart, Nicolas H; Nimphius, Sophia; Spiteri, Tania; Cochrane, Jodie L; Newton, Robert U

    2015-09-01

Musculoskeletal examinations provide informative and valuable quantitative insight into muscle and bone health. DXA is one mainstream tool used to accurately and reliably determine body composition components and bone mass characteristics in-vivo. Presently, whole body scan models separate the body into axial and appendicular regions; however, there is a need for localised appendicular segmentation models to further examine regions of interest within the upper and lower extremities. Similarly, inconsistencies pertaining to patient positioning exist in the literature which influence measurement precision and analysis outcomes, highlighting a need for a standardised procedure. This paper provides standardised and reproducible: 1) positioning and analysis procedures using DXA and 2) reliable segmental examinations through descriptive appendicular boundaries. Whole-body scans were performed on forty-six (n = 46) football athletes (age: 22.9 ± 4.3 yrs; height: 1.85 ± 0.07 m; weight: 87.4 ± 10.3 kg; body fat: 11.4 ± 4.5 %) using DXA. All segments across all scans were analysed three times by the main investigator on three separate days, and by three independent investigators a week following the original analysis. To examine intra-rater and inter-rater (between-day and between-researcher) reliability, coefficients of variation (CV) and intraclass correlation coefficients (ICC) were determined. Positioning and segmental analysis procedures presented in this study produced very high, nearly perfect intra-tester (CV ≤ 2.0%; ICC ≥ 0.988) and inter-tester (CV ≤ 2.4%; ICC ≥ 0.980) reliability, demonstrating excellent reproducibility within and between practitioners. Standardised examinations of axial and appendicular segments are necessary. Future studies aiming to quantify and report segmental analyses of the upper- and lower-body musculoskeletal properties using whole-body DXA scans are encouraged to use the patient positioning and image analysis procedures outlined in this paper. 
Key points: Musculoskeletal examinations using DXA technology require highly standardised and reproducible patient positioning and image analysis procedures to accurately measure and monitor axial, appendicular and segmental regions of interest. Internal rotation and fixation of the lower-limbs is strongly recommended during whole-body DXA scans to prevent undesired movement, improve frontal mass accessibility and enhance ankle joint visibility during scan performance and analysis. Appendicular segmental analyses using whole-body DXA scans are highly reliable for all regional upper-body and lower-body segmentations, with hard-tissue (CV ≤ 1.5%; R ≥ 0.990) achieving greater reliability and lower error than soft-tissue (CV ≤ 2.4%; R ≥ 0.980) masses when using our appendicular segmental boundaries. PMID:26336349

  4. Segmentation and Image Analysis of Abnormal Lungs at CT: Current Approaches, Challenges, and Future Trends.

    PubMed

    Mansoor, Awais; Bagci, Ulas; Foster, Brent; Xu, Ziyue; Papadakis, Georgios Z; Folio, Les R; Udupa, Jayaram K; Mollura, Daniel J

    2015-01-01

    The computer-based process of identifying the boundaries of lung from surrounding thoracic tissue on computed tomographic (CT) images, which is called segmentation, is a vital first step in radiologic pulmonary image analysis. Many algorithms and software platforms provide image segmentation routines for quantification of lung abnormalities; however, nearly all of the current image segmentation approaches apply well only if the lungs exhibit minimal or no pathologic conditions. When moderate to high amounts of disease or abnormalities with a challenging shape or appearance exist in the lungs, computer-aided detection systems may be highly likely to fail to depict those abnormal regions because of inaccurate segmentation methods. In particular, abnormalities such as pleural effusions, consolidations, and masses often cause inaccurate lung segmentation, which greatly limits the use of image processing methods in clinical and research contexts. In this review, a critical summary of the current methods for lung segmentation on CT images is provided, with special emphasis on the accuracy and performance of the methods in cases with abnormalities and cases with exemplary pathologic findings. The currently available segmentation methods can be divided into five major classes: (a) thresholding-based, (b) region-based, (c) shape-based, (d) neighboring anatomy-guided, and (e) machine learning-based methods. The feasibility of each class and its shortcomings are explained and illustrated with the most common lung abnormalities observed on CT images. In an overview, practical applications and evolving technologies combining the presented approaches for the practicing radiologist are detailed. PMID:26172351

  5. Influence of segmented vessel size due to limited imaging resolution on coronary hyperemic flow prediction from arterial crown volume.

    PubMed

    van Horssen, P; van Lier, M G J T B; van den Wijngaard, J P H M; VanBavel, E; Hoefer, I E; Spaan, J A E; Siebes, M

    2016-04-01

Computational predictions of the functional stenosis severity from coronary imaging data use an allometric scaling law to derive hyperemic blood flow (Q) from coronary arterial volume (V), Q = αV^β. Reliable estimates of α and β are essential for meaningful flow estimations. We hypothesize that the relation between Q and V depends on imaging resolution. In five canine hearts, fluorescent microspheres were injected into the left anterior descending coronary artery during maximal hyperemia. The coronary arteries of the excised heart were filled with fluorescent cast material, frozen, and processed with an imaging cryomicrotome to yield a three-dimensional representation of the coronary arterial network. The effect of limited image resolution was simulated by assessing scaling law parameters from the virtual arterial network at 11 truncation levels ranging from 50 to 1,000 μm segment radius. Mapped microsphere locations were used to derive the corresponding relative Q using a reference truncation level of 200 μm. The scaling law factor α did not change with truncation level, despite considerable intersubject variability. In contrast, the scaling law exponent β decreased from 0.79 to 0.55 with increasing truncation radius and was significantly lower for truncation radii above 500 μm vs. 50 μm (P < 0.05). Hyperemic Q was underestimated for vessel truncation above the reference level. In conclusion, flow-crown volume relations confirmed overall power law behavior; however, this relation depends on the terminal vessel radius that can be visualized. The scaling law exponent β should therefore be adapted to the resolution of the imaging modality. PMID:26825519
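The scaling-law parameters α and β can be estimated by ordinary least squares in log-log space, since Q = αV^β implies log Q = log α + β log V. This is a generic sketch with synthetic data, not the study's fitting code; the values α = 2 and β = 0.75 are arbitrary illustration choices.

```python
import numpy as np

def fit_scaling_law(volumes, flows):
    """Fit Q = alpha * V**beta by linear regression in log-log space."""
    beta, log_alpha = np.polyfit(np.log(volumes), np.log(flows), 1)
    return float(np.exp(log_alpha)), float(beta)

# Synthetic crown volumes and hyperemic flows following Q = 2 * V**0.75 exactly.
V = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
Q = 2.0 * V ** 0.75
alpha, beta = fit_scaling_law(V, Q)
```

On noise-free data the fit recovers α and β exactly; the study's point is that truncating the network at larger terminal radii biases the recovered β downward.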

  6. Design and validation of Segment - freely available software for cardiovascular image analysis

    PubMed Central

    2010-01-01

    Background Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Results Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. 
The software has been made freely available for research purposes in a source code format on the project home page http://segment.heiberg.se. Conclusions Segment is a well-validated comprehensive software package for cardiovascular image analysis. It is freely available for research purposes provided that relevant original research publications related to the software are cited. PMID:20064248

  7. A new partial volume segmentation approach to extract bladder wall for computer-aided detection in virtual cystoscopy

    NASA Astrophysics Data System (ADS)

    Li, Lihong; Wang, Zigang; Li, Xiang; Wei, Xinzhou; Adler, Howard L.; Huang, Wei; Rizvi, Syed A.; Meng, Hong; Harrington, Donald P.; Liang, Zhengrong

    2004-04-01

We propose a new partial volume (PV) segmentation scheme to extract the bladder wall for computer-aided detection (CAD) of bladder lesions using multispectral MR images. Compared with CT images, MR images provide not only a better tissue contrast between bladder wall and bladder lumen, but also the multispectral information. As multispectral images are spatially registered over three-dimensional space, information extracted from them is more valuable than that extracted from each image individually. Furthermore, the intrinsic T1 and T2 contrast of the urine against the bladder wall eliminates the invasive air insufflation procedure. Because the earliest stages of bladder lesion growth tend to develop gradually and migrate slowly from the mucosa into the bladder wall, our proposed PV algorithm quantifies images as percentages of tissues inside each voxel. It preserves both morphology and texture information and provides tissue growth tendency in addition to the anatomical structure. Our CAD system utilizes a multi-scan protocol on dual (full and empty of urine) states of the bladder to extract both geometrical and texture information. Moreover, multi-scan of transverse and coronal MR images eliminates motion artifacts. Experimental results indicate that the presented scheme is feasible for mass screening and lesion detection in virtual cystoscopy (VC).

  8. Automated abdominal lymph node segmentation based on RST analysis and SVM

    NASA Astrophysics Data System (ADS)

    Nimura, Yukitaka; Hayashi, Yuichiro; Kitasaka, Takayuki; Furukawa, Kazuhiro; Misawa, Kazunari; Mori, Kensaku

    2014-03-01

This paper describes a segmentation method for abdominal lymph nodes (LNs) using radial structure tensor (RST) analysis and a support vector machine. LN analysis is a crucial part of lymphadenectomy, a surgical procedure that removes one or more LNs in order to evaluate them for the presence of cancer. Several methods for automated LN detection and segmentation have been proposed; however, they produce many false positives (FPs). The proposed method consists of LN candidate segmentation and FP reduction. LN candidates are extracted using RST analysis at each voxel of the CT scan. RST analysis can discriminate between different local intensity structures without influence from surrounding structures. In the FP reduction process, we eliminate FPs using a support vector machine with shape and intensity information of the LN candidates. The experimental results reveal that the sensitivity of the proposed method was 82.0% with 21.6 FPs/case.

  9. Imalytics Preclinical: Interactive Analysis of Biomedical Volume Data

    PubMed Central

    Gremse, Felix; Stärk, Marius; Ehling, Josef; Menzel, Jan Robert; Lammers, Twan; Kiessling, Fabian

    2016-01-01

    A software tool is presented for interactive segmentation of volumetric medical data sets. To allow interactive processing of large data sets, segmentation operations, and rendering are GPU-accelerated. Special adjustments are provided to overcome GPU-imposed constraints such as limited memory and host-device bandwidth. A general and efficient undo/redo mechanism is implemented using GPU-accelerated compression of the multiclass segmentation state. A broadly applicable set of interactive segmentation operations is provided which can be combined to solve the quantification task of many types of imaging studies. A fully GPU-accelerated ray casting method for multiclass segmentation rendering is implemented which is well-balanced with respect to delay, frame rate, worst-case memory consumption, scalability, and image quality. Performance of segmentation operations and rendering are measured using high-resolution example data sets showing that GPU-acceleration greatly improves the performance. Compared to a reference marching cubes implementation, the rendering was found to be superior with respect to rendering delay and worst-case memory consumption while providing sufficiently high frame rates for interactive visualization and comparable image quality. The fast interactive segmentation operations and the accurate rendering make our tool particularly suitable for efficient analysis of multimodal image data sets which arise in large amounts in preclinical imaging studies. PMID:26909109

  11. A robust and fast line segment detector based on top-down smaller eigenvalue analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Lu, Xiaoqing

    2014-01-01

    In this paper, we propose a robust and fast line segment detector that achieves accurate results with a controlled number of false detections and requires no parameter tuning. It consists of three steps: first, we propose a novel edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input image; second, we propose a top-down scheme based on smaller eigenvalue analysis to extract line segments within each obtained edge segment; third, we employ Desolneux et al.'s method to reject false detections. Experiments demonstrate that it is very efficient and more robust than two state-of-the-art methods, LSD and EDLines.
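
    The smaller-eigenvalue test at the heart of the second step can be sketched as follows: for a chain of edge points, the smaller eigenvalue of their 2x2 covariance matrix is near zero exactly when the points are nearly collinear (a minimal sketch, not the authors' implementation):

```python
import math

def smaller_eigenvalue(points):
    """Smaller eigenvalue of the 2x2 covariance matrix of a 2-D point set.
    A value near zero indicates the points are nearly collinear, which is
    the line-ness criterion used in smaller eigenvalue analysis."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Eigenvalues of [[sxx, sxy], [sxy, syy]] via the quadratic formula.
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    return tr / 2.0 - disc
```

A top-down splitter would recursively cut an edge segment wherever this value exceeds a tolerance.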

  12. Segmenting Business Students Using Cluster Analysis Applied to Student Satisfaction Survey Results

    ERIC Educational Resources Information Center

    Gibson, Allen

    2009-01-01

    This paper demonstrates a new application of cluster analysis to segment business school students according to their degree of satisfaction with various aspects of the academic program. The resulting clusters provide additional insight into drivers of student satisfaction that are not evident from analysis of the responses of the student body as a whole.
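
    As a rough illustration of the clustering step, here is a plain k-means on one-dimensional satisfaction scores; the paper's actual features, algorithm settings, and cluster count are not specified here, so everything below is an assumption for illustration:

```python
import random

def kmeans_1d(values, k, iters=50, seed=0):
    """Plain k-means on 1-D scores: assign each value to its nearest
    center, then move each center to the mean of its group."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            groups[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        # Keep an old center if its group emptied out.
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers)
```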

  13. Control volume based hydrocephalus research; analysis of human data

    NASA Astrophysics Data System (ADS)

    Cohen, Benjamin; Wei, Timothy; Voorhees, Abram; Madsen, Joseph; Anor, Tomer

    2010-11-01

    Hydrocephalus is a neuropathophysiological disorder primarily diagnosed by increased cerebrospinal fluid volume and pressure within the brain. To date, utilization of clinical measurements has been limited to understanding the relative amplitude and timing of flow, volume, and pressure waveforms; these are qualitative approaches without a clear framework for meaningful quantitative comparison. Pressure-volume models and electric circuit analogs enforce volume conservation principles in terms of pressure. Control volume analysis, through the integral mass and momentum conservation equations, ensures that pressure and volume are accounted for using first-principles fluid physics. This approach can directly incorporate the diverse measurements obtained by clinicians into a simple, direct, and robust mechanics-based framework. Clinical data obtained for analysis are discussed, along with the data processing techniques used to extract terms in the conservation equations. Control volume analysis provides a non-invasive, physics-based approach to extracting pressure information from magnetic resonance velocity data that cannot be measured directly by pressure instrumentation.
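
    The core bookkeeping of control volume analysis, recovering volume change from measured inflow and outflow waveforms via mass conservation (dV/dt = Q_in - Q_out for an incompressible fluid), can be sketched with trapezoidal integration; the variable names and uniform sampling are assumptions:

```python
def volume_from_flow(q_in, q_out, dt, v0=0.0):
    """Integrate dV/dt = Q_in - Q_out with the trapezoidal rule to recover
    the volume history from flow waveforms sampled every dt seconds."""
    v = [v0]
    for k in range(1, len(q_in)):
        net_prev = q_in[k - 1] - q_out[k - 1]
        net_curr = q_in[k] - q_out[k]
        v.append(v[-1] + 0.5 * (net_prev + net_curr) * dt)
    return v
```

In practice the flows would come from MR velocity data integrated over vessel cross-sections.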

  14. Automatic segmentation of the colon

    NASA Astrophysics Data System (ADS)

    Wyatt, Christopher L.; Ge, Yaorong; Vining, David J.

    1999-05-01

    Virtual colonoscopy is a minimally invasive technique that enables detection of colorectal polyps and cancer. Normally, a patient's bowel is prepared with colonic lavage and gas insufflation prior to computed tomography (CT) scanning. An important step for 3D analysis of the image volume is segmentation of the colon. The high-contrast gas/tissue interface that exists in the colon lumen makes segmentation of the majority of the colon relatively easy; however, two factors inhibit automatic segmentation of the entire colon. First, the colon is not the only gas-filled organ in the data volume: the lungs, small bowel, and stomach also meet this criterion. User-defined seed points placed in the colon lumen have previously been required to spatially isolate only the colon. Second, portions of the colon lumen may be obstructed by peristalsis, large masses, and/or residual feces. These complicating factors require increased user interaction during the segmentation process to isolate additional colon segments. To automate the segmentation of the colon, we have developed a method to locate seed points and segment the gas-filled lumen with no user supervision. We have also developed an automated approach that improves lumen segmentation by digitally removing residual contrast-enhanced fluid resulting from a new bowel preparation that liquefies and opacifies any residual feces.
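
    A minimal sketch of growing the gas-filled lumen from a seed point, assuming 6-connected flood fill over a voxel grid and an air-like Hounsfield threshold; the threshold value and connectivity are assumptions, not the authors' parameters:

```python
from collections import deque

def region_grow(volume, seed, threshold=-800):
    """Flood-fill a gas-filled region from a seed voxel, keeping voxels
    at or below an air-like HU threshold. volume is indexed [z][y][x]."""
    z0, y0, x0 = seed
    if volume[z0][y0][x0] > threshold:
        return set()  # seed is not in a gas-filled region
    seen, q = {seed}, deque([seed])
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < len(volume) and 0 <= ny < len(volume[0])
                    and 0 <= nx < len(volume[0][0])
                    and (nz, ny, nx) not in seen
                    and volume[nz][ny][nx] <= threshold):
                seen.add((nz, ny, nx))
                q.append((nz, ny, nx))
    return seen
```

Automatic seed finding would then amount to scoring candidate gas regions (size, position, shape) to pick out the colon among the lungs, stomach, and small bowel.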

  15. Segmentation of ECG-gated multidetector row-CT cardiac images for functional analysis

    NASA Astrophysics Data System (ADS)

    Kim, Jin Sung; Na, Yonghum; Bae, Kyongtae T.

    2002-05-01

    Multi-detector row CT (MDCT) gated with ECG tracing allows continuous image acquisition of the heart during a breath-hold with high spatial and temporal resolution. Dynamic segmentation and display of CT images, especially short- and long-axis views, is important in functional analysis of cardiac morphology. The size of a dynamic MDCT cardiac study, however, is typically very large, involving several hundred CT images, and thus manual analysis of these images can be time-consuming and tedious. In this paper, an automatic scheme was proposed to segment and reorient the left ventricular images in MDCT. Two segmentation techniques, deformable model and region-growing methods, were developed and tested. The contour of the ventricular cavity was segmented iteratively from a set of initial coarse boundary points placed on a transaxial CT image and was propagated to adjacent CT images. Segmented transaxial diastolic-phase MDCT images were reoriented along the long and short axes of the left ventricle. The axes were estimated by calculating the principal components of the ventricular boundary points and then confirmed or adjusted by an operator. The reorientation of the coordinates was applied to other transaxial MDCT image sets reconstructed at different cardiac phases. Estimated short axes of the left ventricle were in close agreement with the qualitative assessment by a radiologist. Preliminary results from our methods were promising, with a considerable reduction in analysis time and manual operations.
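
    The axis-estimation step, taking principal components of the ventricular boundary points, can be sketched in 2-D: the dominant eigenvector of the point covariance matrix gives the long-axis direction. This is a simplified 2-D sketch of the 3-D procedure described above:

```python
import math

def long_axis_direction(points):
    """Unit vector along the principal component (dominant eigenvector)
    of a 2-D point cloud, i.e. the direction of greatest spread."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # Larger eigenvalue of [[sxx, sxy], [sxy, syy]].
    tr = sxx + syy
    disc = math.sqrt(max(tr * tr / 4.0 - (sxx * syy - sxy * sxy), 0.0))
    lam = tr / 2.0 + disc
    if abs(sxy) > 1e-12:
        vx, vy = sxy, lam - sxx  # eigenvector for eigenvalue lam
    else:
        vx, vy = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm
```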

  16. Total variation based edge enhancement for level set segmentation and asymmetry analysis in breast thermograms.

    PubMed

    Prabha, S; Anandh, K R; Sujatha, C M; Ramakrishnan, S

    2014-01-01

    In this work, an attempt has been made to perform asymmetry analysis in breast thermograms using a non-linear total variation diffusion filter and a reaction diffusion based level set method. The breast images used in this study were obtained from the online database of the project PROENG. Initially the images are subjected to a total variation (TV) diffusion filter to generate the edge map. The reaction diffusion based level set method is employed to segment the breast tissues using the TV edge map as the stopping boundary function. Asymmetry analysis is performed on the segmented breast tissues using wavelet based structural texture features. The results show that the nonlinear total variation based reaction diffusion level set method could efficiently segment the breast tissues. This method yields a higher correlation between the segmented output and the ground truth than the conventional level set method. Structural texture features extracted from the wavelet coefficients are found to be significant in demarcating normal and abnormal tissues. Hence, it appears that asymmetry analysis on segmented breast tissues extracted using the total variation edge map can be used efficiently to identify pathological conditions in breast thermograms. PMID:25571470

  17. Fire flame detection using color segmentation and space-time analysis

    NASA Astrophysics Data System (ADS)

    Ruchanurucks, Miti; Saengngoen, Praphin; Sajjawiso, Theeraphat

    2011-10-01

    This paper presents a fire flame detection method for CCTV cameras based on image processing. The scheme relies on color segmentation and space-time analysis. The segmentation is performed to extract fire-like-color regions in an image; several methods are benchmarked against each other to find the best one for practical CCTV cameras. After that, space-time analysis is used to recognize fire behavior: a space-time window is generated from the contour of the thresholded image, feature extraction is performed in the Fourier domain of the window, and a neural network is used for behavior recognition. The system is shown to be practical and robust.
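
    A common fire-like-color rule of the kind benchmarked in such work is R > G > B with R above a brightness threshold. The rule and the threshold value below are assumptions for illustration, not the method the paper selected:

```python
def is_fire_color(r, g, b, r_min=190):
    """Assumed fire-color heuristic: fire pixels tend to satisfy
    R > G > B with the red channel above a brightness floor."""
    return r > g > b and r >= r_min

def segment_fire(pixels):
    """Boolean mask of fire-like pixels for a flat list of (r, g, b)."""
    return [is_fire_color(*p) for p in pixels]
```

The space-time analysis would then operate on the contours of the resulting mask over consecutive frames.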

  18. Gene expression analysis reveals that Delta/Notch signalling is not involved in onychophoran segmentation.

    PubMed

    Janssen, Ralf; Budd, Graham E

    2016-03-01

    Delta/Notch (Dl/N) signalling is involved in the gene regulatory network underlying the segmentation process in vertebrates and possibly also in annelids and arthropods, leading to the hypothesis that segmentation may have evolved in the last common ancestor of bilaterian animals. Because of seemingly contradictory results within the well-studied arthropods, however, the role and origin of Dl/N signalling in segmentation generally is still unclear. In this study, we investigate core components of Dl/N signalling by means of gene expression analysis in the onychophoran Euperipatoides kanangrensis, a close relative of the arthropods. We find that neither Delta nor Notch, nor any other investigated component of the signalling pathway, is likely to be involved in segment addition in onychophorans. We instead suggest that Dl/N signalling may be involved in posterior elongation, another conserved function of these genes. We suggest further that the posterior elongation network, rather than classic Dl/N signalling, may be in control of the highly conserved segment polarity gene network and the lower-level pair-rule gene network in onychophorans. Consequently, we believe that the pair-rule gene network and its interaction with Dl/N signalling may have evolved within the arthropod lineage and that Dl/N signalling has thus likely been recruited independently for segment addition in different phyla. PMID:26935716

  19. Finite difference based vibration simulation analysis of a segmented distributed piezoelectric structronic plate system

    NASA Astrophysics Data System (ADS)

    Ren, B. Y.; Wang, L.; Tzou, H. S.; Yue, H. H.

    2010-08-01

    Electrical modeling of piezoelectric structronic systems by analog circuits has the disadvantages of a huge circuit structure and low precision. However, studies of electrical simulation of segmented distributed piezoelectric structronic plate systems (PSPSs) by using the output voltage signals of high-speed digital circuits to evaluate real-time dynamic displacements are scarce in the literature. Therefore, an equivalent dynamic model based on the finite difference method (FDM) is presented to simulate the actual physical model of the segmented distributed PSPS with simply supported boundary conditions. By means of the FDM, the fourth-order dynamic partial differential equations (PDEs) of the main structure, the segmented distributed sensor signals, and the control moments of the segmented distributed actuator of the PSPS are transformed into finite difference equations. A dynamics matrix model based on the Newmark-β integration method is established. The output voltage signal characteristics of the lower modes (m <= 3, n <= 3) with different finite difference mesh dimensions and different integration time steps are analyzed by digital signal processing (DSP) circuit simulation software. The control effects of segmented distributed actuators with different effective areas are consistent with the results of the analytical model in relevant references. Therefore, the method of digital simulation for vibration analysis of segmented distributed PSPSs presented in this paper can provide a reference for further research into the electrical simulation of PSPSs.
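
    The Newmark-β time stepping underlying the dynamics matrix model can be illustrated for a single degree of freedom (the plate model couples many such equations through the finite-difference mesh). This is the standard average-acceleration variant (β = 1/4, γ = 1/2), not the paper's full matrix formulation:

```python
def newmark_beta(m, c, k, f, dt, beta=0.25, gamma=0.5, u0=0.0, v0=0.0):
    """Newmark-beta integration of m*u'' + c*u' + k*u = f(t) for one
    degree of freedom. f is the load sampled every dt seconds;
    returns the displacement history."""
    a = (f[0] - c * v0 - k * u0) / m  # initial acceleration
    u, v = [u0], v0
    keff = k + gamma * c / (beta * dt) + m / (beta * dt * dt)
    for n in range(1, len(f)):
        # Effective load assembled from the previous step's state.
        peff = (f[n]
                + m * (u[-1] / (beta * dt * dt) + v / (beta * dt)
                       + (0.5 / beta - 1.0) * a)
                + c * (gamma * u[-1] / (beta * dt) + (gamma / beta - 1.0) * v
                       + dt * (gamma / (2.0 * beta) - 1.0) * a))
        u_new = peff / keff
        v_new = ((gamma / (beta * dt)) * (u_new - u[-1])
                 + (1.0 - gamma / beta) * v
                 + dt * (1.0 - gamma / (2.0 * beta)) * a)
        a = ((u_new - u[-1]) / (beta * dt * dt) - v / (beta * dt)
             - (0.5 / beta - 1.0) * a)
        u.append(u_new)
        v = v_new
    return u
```

With these default coefficients the scheme is unconditionally stable and introduces no numerical damping, which is why it is a common choice for structural dynamics.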

  20. Mathematical Analysis of Space Radiator Segmenting for Increased Reliability and Reduced Mass

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.

    2001-01-01

    Spacecraft for long duration deep space missions will need to be designed to survive micrometeoroid bombardment of their surfaces, some of which may actually be punctured. To avoid loss of the entire mission, the damage due to such punctures must be limited to small, localized areas. This is especially true for power system radiators, which necessarily feature large surface areas to reject heat at relatively low temperature to the space environment by thermal radiation. It may be intuitively obvious that if a space radiator is composed of a large number of independently operating segments, such as heat pipes, a random micrometeoroid puncture will result only in the loss of the punctured segment, and not the entire radiator. Due to the redundancy achieved by independently operating segments, the wall thickness, and consequently the weight, of such segments can be drastically reduced. Probability theory is used to estimate the magnitude of such weight reductions as the number of segments is increased. An analysis of relevant parameter values required for minimum-mass segmented radiators is also included.
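
    The probability argument can be made concrete with a binomial model: if punctures strike segments independently and the radiator still functions when at least k of its n segments survive, reliability follows directly. This is a sketch of the reasoning, not the paper's exact analysis:

```python
from math import comb

def survival_probability(n_segments, p_segment, k_required):
    """Probability that at least k_required of n_segments survive,
    assuming independent puncture events (binomial model)."""
    return sum(comb(n_segments, k)
               * p_segment ** k * (1 - p_segment) ** (n_segments - k)
               for k in range(k_required, n_segments + 1))
```

For example, ten segments of which eight must survive beat a single monolithic radiator with the same per-segment survival probability, which is what lets the segment walls be thinned.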

  1. Proteomic Analysis of the Retina: Removal of RPE Alters Outer Segment Assembly and Retinal Protein Expression

    PubMed Central

    Wang, XiaoFei; Nookala, Suba; Narayanan, Chidambarathanu; Giorgianni, Francesco; Beranova-Giorgianni, Sarka; McCollum, Gary; Gerling, Ivan; Penn, John S.; Jablonski, Monica M.

    2008-01-01

    The mechanisms that regulate the complex physiologic task of photoreceptor outer segment assembly remain an enigma. One limiting factor in revealing the mechanism(s) by which this process is modulated is that not all of the role players that participate in this process are known. The purpose of this study was to determine some of the retinal proteins that likely play a critical role in regulating photoreceptor outer segment assembly. To do so, we analyzed and compared the proteome map of tadpole Xenopus laevis retinal pigment epithelium (RPE)-supported retinas containing organized outer segments with that of RPE-deprived retinas containing disorganized outer segments. Solubilized proteins were labeled with CyDye fluors followed by multiplexed two-dimensional separation. The intensity of protein spots and comparison of proteome maps was performed using DeCyder software. Identification of differentially regulated proteins was determined using nanoLC-ESI-MS/MS analysis. We found a total of 27 protein spots, 21 of which were unique proteins, which were differentially expressed in retinas with disorganized outer segments. We predict that in the absence of the RPE, oxidative stress initiates an unfolded protein response. Subsequently, downregulation of several candidate Müller glial cell proteins may explain the inability of photoreceptors to properly fold their outer segment membranes. In this study we have used identification and bioinformatics assessment of proteins that are differentially expressed in retinas with disorganized outer segments as a first step in determining probable key molecules involved in regulating photoreceptor outer segment assembly. PMID:18803304

  2. Automatic segmentation of solitary pulmonary nodules based on local intensity structure analysis and 3D neighborhood features in 3D chest CT images

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

    2012-03-01

    This paper presents a solitary pulmonary nodule (SPN) segmentation method based on local intensity structure analysis and neighborhood feature analysis in chest CT images. Automated segmentation of SPNs is desirable for a chest computer-aided detection/diagnosis (CAD) system, since an SPN may indicate an early stage of lung cancer. Because SPNs and other chest structures such as blood vessels have similar intensities, nodule detection methods generate many false positives (FPs). To reduce such FPs, we introduce two features that analyze the relation between each segmented nodule candidate and its neighborhood region. The proposed method utilizes a blob-like structure enhancement (BSE) filter based on Hessian analysis to augment the blob-like structures as initial nodule candidates. A fine segmentation is then performed to obtain a more accurate region for each nodule candidate. FP reduction is mainly addressed by investigating two neighborhood features, based on the volume ratio and the eigenvectors of the Hessian, that are calculated from the neighborhood region of each nodule candidate. We evaluated the proposed method using 40 chest CT scans: 20 standard-dose scans randomly chosen from a local database and 20 low-dose scans randomly chosen from a public database (LIDC). The experimental results revealed that the average TP rate of the proposed method was 93.6% with 12.3 FPs/case.
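
    The Hessian-based blob enhancement can be illustrated by how the eigenvalues are combined: for a bright blob all three Hessian eigenvalues are large and negative with similar magnitude, while a vessel (tube) has one near-zero eigenvalue. The score below is an illustrative measure of this idea, not the paper's exact BSE filter:

```python
def blobness(l1, l2, l3):
    """Toy blob-likeness score from Hessian eigenvalues sorted by
    magnitude (|l1| <= |l2| <= |l3|). Near 1 for blobs, near 0 for
    tubes and sheets, 0 when the structure is not bright-on-dark."""
    if l2 >= 0 or l3 >= 0:
        return 0.0  # not a bright blob
    return abs(l1) / abs(l3)
```

Applying such a score voxelwise, after computing the Hessian at an appropriate scale, highlights nodule-like structures while suppressing vessels.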

  3. Microreactors with integrated UV/Vis spectroscopic detection for online process analysis under segmented flow.

    PubMed

    Yue, Jun; Falke, Floris H; Schouten, Jaap C; Nijhuis, T Alexander

    2013-12-21

    Combining reaction and detection in multiphase microfluidic flow is becoming increasingly important for accelerating process development in microreactors. We report the coupling of UV/Vis spectroscopy with microreactors for online process analysis under segmented flow conditions. Two integration schemes are presented: one uses a cross-type flow-through cell subsequent to a capillary microreactor for detection in the transmission mode; the other uses embedded waveguides on a microfluidic chip for detection in the evanescent wave field. Model experiments reveal the capabilities of the integrated systems in real-time concentration measurements and segmented flow characterization. The application of such integration for process analysis during gold nanoparticle synthesis is demonstrated, showing its great potential in process monitoring in microreactors operated under segmented flow. PMID:24178763
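
    The real-time concentration measurement rests on the Beer-Lambert law A = ε·l·c, relating absorbance to molar absorptivity, optical path length, and concentration. A trivial sketch of this textbook relation (not specific to this paper's setup):

```python
def concentration(absorbance, epsilon, path_length):
    """Concentration from the Beer-Lambert law A = epsilon * l * c,
    with epsilon in L/(mol*cm) and path_length in cm."""
    return absorbance / (epsilon * path_length)
```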

  4. 3-D segmentation and quantitative analysis of inner and outer walls of thrombotic abdominal aortic aneurysms

    NASA Astrophysics Data System (ADS)

    Lee, Kyungmoo; Yin, Yin; Wahle, Andreas; Olszewski, Mark E.; Sonka, Milan

    2008-03-01

    An abdominal aortic aneurysm (AAA) is an area of a localized widening of the abdominal aorta, with a frequent presence of thrombus. A ruptured aneurysm can cause death due to severe internal bleeding. AAA thrombus segmentation and quantitative analysis are of paramount importance for diagnosis, risk assessment, and determination of treatment options. Until now, only a small number of methods for thrombus segmentation and analysis have been presented in the literature, either requiring substantial user interaction or exhibiting insufficient performance. We report a novel method offering minimal user interaction and high accuracy. Our thrombus segmentation method is composed of an initial automated luminal surface segmentation, followed by a cost function-based optimal segmentation of the inner and outer surfaces of the aortic wall. The approach utilizes the power and flexibility of the optimal triangle mesh-based 3-D graph search method, in which cost functions for thrombus inner and outer surfaces are based on gradient magnitudes. Sometimes local failures caused by image ambiguity occur, in which case several control points are used to guide the computer segmentation without the need to trace borders manually. Our method was tested in 9 MDCT image datasets (951 image slices). With the exception of a case in which the thrombus was highly eccentric, visually acceptable aortic lumen and thrombus segmentation results were achieved. No user interaction was used in 3 out of 8 datasets, and 7.80 +/- 2.71 mouse clicks per case / 0.083 +/- 0.035 mouse clicks per image slice were required in the remaining 5 datasets.

  5. Three-dimensional analysis of cervical spine segmental motion in rotation

    PubMed Central

    Zhao, Xiong; Wu, Zi-xiang; Han, Bao-jun; Yan, Ya-bo; Zhang, Yang

    2013-01-01

    Introduction The movements of the cervical spine during head rotation are too complicated to measure using conventional radiography or computed tomography (CT) techniques. In this study, we measure three-dimensional segmental motion of cervical spine rotation in vivo using a non-invasive measurement technique. Material and methods Sixteen healthy volunteers underwent three-dimensional CT of the cervical spine during head rotation. Occiput (Oc) – T1 reconstructions were created for each volunteer in each of three positions: supine, and maximum left and right rotation of the head with respect to the torso. Segmental motions were calculated using Euler angles and volume merge methods in the three major planes. Results Mean maximum axial rotation of the cervical spine to one side ranged from 1.6° to 38.5° across levels. Coupled lateral bending opposite to the axial rotation was observed in the upper cervical levels, while in the subaxial cervical levels it was in the same direction as the axial rotation. Coupled extension was observed at the C5-T1 levels, while coupled flexion was observed at the Oc-C5 levels. Conclusions The three-dimensional cervical segmental motions in rotation were accurately measured with this non-invasive technique. These findings will be helpful as a basis for understanding cervical spine movement in rotation and in abnormal conditions. The presented data also provide baseline segmental motions for the design of prostheses for the cervical spine. PMID:23847675
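
    Extracting segmental angles from a rotation matrix with Euler angles can be sketched as follows. One common convention (intrinsic Z-Y-X, i.e. yaw-pitch-roll) is assumed here; the paper's exact convention is not stated in the abstract:

```python
import math

def euler_zyx(R):
    """Intrinsic Z-Y-X Euler angles (yaw, pitch, roll) in degrees from a
    3x3 rotation matrix given as nested lists, ignoring gimbal lock."""
    pitch = math.asin(max(-1.0, min(1.0, -R[2][0])))
    yaw = math.atan2(R[1][0], R[0][0])
    roll = math.atan2(R[2][1], R[2][2])
    return tuple(math.degrees(a) for a in (yaw, pitch, roll))
```

Relative segmental motion would be computed from the rotation between two vertebrae's coordinate frames, then decomposed this way into the three major planes.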

  6. Segmentation of vascular structures and hematopoietic cells in 3D microscopy images and quantitative analysis

    NASA Astrophysics Data System (ADS)

    Mu, Jian; Yang, Lin; Kamocka, Malgorzata M.; Zollman, Amy L.; Carlesso, Nadia; Chen, Danny Z.

    2015-03-01

    In this paper, we present image processing methods for the quantitative study of bone marrow microenvironment changes (characterized by altered vascular structure and hematopoietic cell distribution) caused by diseases or other factors. We develop algorithms that automatically segment vascular structures and hematopoietic cells in 3-D microscopy images, perform quantitative analysis of the properties of the segmented vascular structures and cells, and examine how such properties change. In processing images, we apply local thresholding to segment vessels and add post-processing steps to deal with imaging artifacts. We propose an improved watershed algorithm that relies on both intensity and shape information and can separate multiple overlapping cells better than common watershed methods. We then quantitatively compute various features of the vascular structures and hematopoietic cells, such as the branches and sizes of vessels and the distribution of cells. In analyzing vascular properties, we provide algorithms for pruning spurious vessel segments and branches based on vessel skeletons. Our algorithms can segment vascular structures and hematopoietic cells with good quality. We use our methods to quantitatively examine the changes in the bone marrow microenvironment caused by deletion of the Notch pathway. Our quantitative analysis reveals property changes in samples with the Notch pathway deleted. Our tool is useful for biologists to quantitatively measure changes in the bone marrow microenvironment and to develop possible therapeutic strategies that help the bone marrow microenvironment recover.
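
    The local-thresholding step for vessel segmentation can be sketched as an adaptive mean threshold over a sliding neighborhood; the window radius and offset are assumptions, and a real implementation would work in 3-D with the paper's post-processing:

```python
def local_threshold(img, radius=1, offset=0):
    """Adaptive thresholding of a 2-D image (list of lists): a pixel is
    foreground when it exceeds the mean of its (2r+1)^2 neighborhood
    plus an offset. Borders use the clipped neighborhood."""
    h, w = len(img), len(img[0])
    out = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - radius), min(h, y + radius + 1))
                    for nx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = img[y][x] > sum(vals) / len(vals) + offset
    return out
```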

  7. Robust Detection and Identification of Sparse Segments in Ultra-High Dimensional Data Analysis

    PubMed Central

    Cai, T. Tony; Jeng, X. Jessie; Li, Hongzhe

    2012-01-01

    Summary Copy number variants (CNVs) are alterations of the DNA of a genome that result in a cell having fewer or more than two copies of segments of the DNA. CNVs correspond to relatively large regions of the genome, ranging from about one kilobase to several megabases, that are deleted or duplicated. Motivated by CNV analysis based on next generation sequencing data, we consider the problem of detecting and identifying sparse short segments hidden in a long linear sequence of data with an unspecified noise distribution. We propose a computationally efficient method that provides a robust and near-optimal solution for segment identification over a wide range of noise distributions. We theoretically quantify the conditions for detecting the segment signals and show that the method near-optimally estimates the signal segments whenever it is possible to detect their existence. Simulation studies are carried out to demonstrate the efficiency of the method under different noise distributions. We present results from a CNV analysis of a HapMap Yoruban sample to further illustrate the theory and the methods. PMID:23393425
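
    A minimal stand-in for this kind of segment detection is a scan statistic: slide over all short intervals and score each by its standardized sum. The toy version below assumes unit-variance noise and a known maximum segment length, and is far simpler than the robust method the paper proposes:

```python
import math

def best_segment(x, max_len=10):
    """Return the (start, end) interval (inclusive) maximizing the scan
    statistic sum(x[i..j]) / sqrt(j - i + 1), plus its score."""
    best, best_score = None, float("-inf")
    for i in range(len(x)):
        s = 0.0
        for j in range(i, min(len(x), i + max_len)):
            s += x[j]
            score = s / math.sqrt(j - i + 1)
            if score > best_score:
                best_score, best = score, (i, j)
    return best, best_score
```

A detection rule would then compare the score against a threshold calibrated to the noise distribution, which is exactly where the paper's robustness contribution lies.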

  8. Morphotectonic Index Analysis as an Indicator of Neotectonic Segmentation of the Nicoya Peninsula, Costa Rica

    NASA Astrophysics Data System (ADS)

    Morrish, S.; Marshall, J. S.

    2013-12-01

    The Nicoya Peninsula lies within the Costa Rican forearc, where the Cocos plate subducts under the Caribbean plate at ~8.5 cm/yr. Rapid plate convergence produces frequent large earthquakes (~50 yr recurrence interval) and pronounced crustal deformation (0.1-2.0 m/ky uplift). Seven uplifted segments have been identified in previous studies using broad geomorphic surfaces (Hare & Gardner 1984) and late Quaternary marine terraces (Marshall et al. 2010). These surfaces suggest long-term net uplift and segmentation of the peninsula in response to contrasting domains of subducting seafloor (EPR, CNS-1, CNS-2). In this study, newer 10 m contour digital topographic data (CENIGA Terra Project) will be used to characterize and delineate this segmentation using morphotectonic analysis of drainage basins and correlation of fluvial terrace/geomorphic surface elevations. The peninsula has six primary watersheds which drain into the Pacific Ocean: the Río Andamojo, Río Tabaco, Río Nosara, Río Ora, Río Bongo, and Río Ario, which range in area from 200 km2 to 350 km2. The trunk rivers follow major lineaments that define morphotectonic segment boundaries, and their drainage basins are in turn bisected by these boundaries. Morphometric analysis of the lower (1st and 2nd) order drainage basins will provide insight into segmented tectonic uplift and deformation by comparing values of drainage basin asymmetry, stream length gradient, and hypsometry with respect to margin segmentation and subducting seafloor domain. A general geomorphic analysis will be conducted alongside the morphometric analysis to map previously recognized (Morrish et al. 2010) but poorly characterized late Quaternary fluvial terraces. Stream capture and drainage divide migration are common processes throughout the peninsula in response to the ongoing deformation. Identification and characterization of basin piracy throughout the peninsula will provide insight into the history of landscape evolution in response to differential uplift. Conducting this morphotectonic analysis of the Nicoya Peninsula will provide further constraints on rates of segment uplift and locations of segment boundaries, and advance the understanding of the long-term deformation of the region in relation to subduction.

  9. Simplex volume analysis for finding endmembers in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Li, Hsiao-Chi; Song, Meiping; Chang, Chein-I.

    2015-05-01

    Using maximal simplex volume as an optimal criterion for finding endmembers is a common approach and has been widely studied in the literature. Interestingly, very little work has been reported on how simplex volume is calculated. It turns out that the issue of calculating simplex volume is much more complicated and involved than we might think. This paper investigates the issue from two different aspects: geometric structure and eigen-analysis. The geometric approach derives from the simplex structure itself, whose volume can be calculated by multiplying its base by its height. Eigen-analysis, on the other hand, takes advantage of the Cayley-Menger determinant to calculate the simplex volume. The major issue with this approach arises when the matrix whose determinant is desired is rank-deficient. To deal with this problem, two methods are generally considered. One is to perform data dimensionality reduction to make the matrix full rank; the drawback is that the original volume is shrunk, so the volume found for a dimensionality-reduced simplex is not the true original simplex volume. The other is to use singular value decomposition (SVD) to find singular values for calculating the simplex volume; the dilemma of this method is its instability in numerical calculations. This paper explores all three of these methods for simplex volume calculation. Experimental results show that the geometric structure-based method yields the most reliable simplex volume.
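
    The Cayley-Menger route can be illustrated for the smallest interesting case, a 2-simplex (triangle), where 16·Area² = −det(CM) with CM built from pairwise squared distances (a standard identity, shown here with a naive determinant for clarity):

```python
import math

def det(m):
    """Determinant by Laplace expansion along the first row
    (fine for the small matrices used here)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def triangle_area_cayley_menger(p1, p2, p3):
    """Area of a triangle from pairwise squared distances via the
    Cayley-Menger determinant: 16 * Area**2 = -det(CM)."""
    d2 = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    a, b, c = d2(p1, p2), d2(p1, p3), d2(p2, p3)
    cm = [[0, 1, 1, 1],
          [1, 0, a, b],
          [1, a, 0, c],
          [1, b, c, 0]]
    return math.sqrt(max(-det(cm) / 16.0, 0.0))
```

The general n-simplex formula has the same shape with a larger distance matrix, and the rank-deficiency issue discussed above appears when the points do not span the full ambient dimension.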

  10. Texture analysis and segmentation using modulation features, generative models, and weighted curve evolution.

    PubMed

    Kokkinos, Iasonas; Evangelopoulos, Georgios; Maragos, Petros

    2009-01-01

    In this work, we approach the analysis and segmentation of natural textured images by combining ideas from image analysis and probabilistic modeling. We rely on AM-FM texture models and specifically on the Dominant Component Analysis (DCA) paradigm for feature extraction. This method provides a low-dimensional, dense, and smooth descriptor, capturing essential aspects of texture, namely scale, orientation, and contrast. Our contributions are at three levels of the texture analysis and segmentation problems. First, at the feature extraction stage, we propose a Regularized Demodulation Algorithm that provides more robust texture features and explore the merits of modifying the channel selection criterion of DCA. Second, we propose a probabilistic interpretation of DCA and Gabor filtering in general, in terms of Local Generative Models; extending this point of view to edge detection facilitates the estimation of posterior probabilities for the edge and texture classes. Third, we propose the Weighted Curve Evolution scheme, which enhances the Region Competition/Geodesic Active Regions methods by allowing for the locally adaptive fusion of heterogeneous cues. Our segmentation results are evaluated on the Berkeley Segmentation Benchmark and compare favorably to current state-of-the-art methods. PMID:19029552
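
    The demodulation machinery behind Dominant Component Analysis can be glimpsed in one dimension with the Teager-Kaiser energy operator, which for a pure cosine A·cos(ωn) returns the constant A²·sin²(ω), separating amplitude and frequency information. This is a 1-D sketch of the operator, not the full 2-D DCA pipeline:

```python
import math

def teager(x):
    """Discrete Teager-Kaiser energy operator
    psi[n] = x[n]**2 - x[n-1]*x[n+1], dropping the two boundary samples."""
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]
```

DCA applies this kind of energy operator to the outputs of a Gabor filterbank and keeps, per pixel, the channel with the strongest response.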

  11. Analysis of the ISS Russian Segment Outer Surface Materials Installed on the CKK Detachable Cassette

    NASA Astrophysics Data System (ADS)

    Naumov, S. F.; Borisov, V. A.; Plotnikov, A. D.; Sokolova, S. P.; Kurilenok, A. O.; Skurat, V. E.; Leipunsky, I. O.; Pshechenkov, P. A.; Beryozkina, N. G.; Volkov, I. O.

    2009-01-01

    This report presents an analysis of the effects caused by space environmental factors (SEF) and the International Space Station's (ISS) outer environment on operational parameters of the outer surface materials of the ISS Russian Segment (RS). The tests were performed using detachable container cassettes (CKK) that serve as a part of the ISS RS contamination control system.

  12. Loads analysis and testing of flight configuration solid rocket motor outer boot ring segments

    NASA Technical Reports Server (NTRS)

    Ahmed, Rafiq

    1990-01-01

    Loads testing was performed on in-house-fabricated, flight-configuration Solid Rocket Motor (SRM) outer boot ring segments. The tests determined the bending strength and bending stiffness of these beams and showed that the results compared well with the hand analysis. The bending stiffness test results also compared very well with the finite element data.

  13. Phylogenomic analysis reveals ancient segmental duplications in the human genome.

    PubMed

    Hafeez, Madiha; Shabbir, Madiha; Altaf, Fouzia; Abbasi, Amir Ali

    2016-01-01

    The evolution of organismal complexity and the origin of novelties during vertebrate history have been widely explored in the context of both regulation of gene expression and gene duplication events. Ohno (1970) first put forward the idea of two rounds of whole-genome duplication as the most plausible explanation for the evolution of the vertebrate lineage (the 2R hypothesis). To test the validity of the 2R hypothesis, a robust phylogenomic analysis of multigene families with triplicated or quadruplicated representation on human FGFR-bearing chromosomes (4/5/8/10) was performed. A topology comparison approach categorized members of 80 families into five distinct co-duplicated groups. Genes belonging to one co-duplicated group are duplicated concurrently, whereas genes of two different co-duplicated groups do not share their duplication history and have not duplicated in congruence. Our findings contradict the 2R model and are indicative of small-scale duplications and rearrangements that cover the entire span of animal history. PMID:26327327

  14. Volume accumulator design analysis computer codes

    NASA Technical Reports Server (NTRS)

    Whitaker, W. D.; Shimazaki, T. T.

    1973-01-01

    The computer codes, VANEP and VANES, were written and used to aid in the design and performance calculation of the volume accumulator units (VAU) for the 5-kwe reactor thermoelectric system. VANEP computes the VAU design which meets the primary coolant loop VAU volume and pressure performance requirements. VANES computes the performance of the VAU design, determined from the VANEP code, at the conditions of the secondary coolant loop. The codes can also compute the performance characteristics of the VAU's under conditions of possible modes of failure which still permit continued system operation.

  15. Effect of ST segment measurement point on performance of exercise ECG analysis.

    PubMed

    Lehtinen, R; Sievänen, H; Turjanmaa, V; Niemelä, K; Malmivuo, J

    1997-10-10

    To evaluate the effect of the ST-segment measurement point on the diagnostic performance of the ST-segment/heart rate (ST/HR) hysteresis, the ST/HR index, and the end-exercise ST-segment depression in the detection of coronary artery disease, we analysed the exercise electrocardiograms of 347 patients using ST-segment depression measured at 0, 20, 40, 60 and 80 ms after the J point. Of these patients, 127 had significant coronary artery disease according to angiography and 13 did not; 18 had no myocardial perfusion defect according to technetium-99m sestamibi single-photon emission computed tomography; and 189 were clinically 'normal', with a low likelihood of coronary artery disease. Comparison of the areas under the receiver operating characteristic curves showed that the discriminative capacity of the above diagnostic variables improved systematically up to the ST-segment measurement point of 60 ms after the J point. Compared with analysis at the J point (0 ms), the areas based on the 60-ms point were 89 vs. 84% (p=0.0001) for the ST/HR hysteresis, 83 vs. 76% (p<0.0001) for the ST/HR index, and 76 vs. 61% (p<0.0001) for the end-exercise ST depression. These findings suggest that ST-segment measurement at 60 ms after the J point is the most reasonable choice in terms of the discriminative capacity of both the simple and the heart rate-adjusted indices of ST depression. Moreover, the ST/HR hysteresis had the best discriminative capacity independently of the ST-segment measurement point, an observation that gives further support to the clinical utility of this new method in the detection of coronary artery disease. PMID:9363740
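    The two measurements the study varies can be sketched in a few lines: sampling the ST amplitude at a fixed offset after the J point, and a simple heart-rate-adjusted index (ST change per bpm). Sampling rate, J-point time, and units here are illustrative assumptions, not values from the paper:

```python
import numpy as np

def st_depression(beat, fs, j_point_s, offset_ms=60, baseline=0.0):
    """ST amplitude measured `offset_ms` after the J point.
    `beat` is one averaged ECG beat (mV), `fs` the sampling rate (Hz).
    Returns a positive number for depression below baseline."""
    idx = int(round((j_point_s + offset_ms / 1000.0) * fs))
    return baseline - beat[idx]

def st_hr_index(st_rest, st_peak, hr_rest, hr_peak):
    """Simple heart-rate adjusted index: ST change per bpm (microV/bpm)."""
    return (st_peak - st_rest) * 1000.0 / (hr_peak - hr_rest)

# Toy beat sampled at 250 Hz with a -0.15 mV ST segment after J = 0.4 s.
beat = np.zeros(500)
beat[110:130] = -0.15
depression = st_depression(beat, fs=250, j_point_s=0.4)  # measured at J+60 ms
```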

  16. Concerted Assembly and Cloning of Multiple DNA Segments Using In Vitro Site-Specific Recombination: Functional Analysis of Multi-Segment Expression Clones

    PubMed Central

    Cheo, David L.; Titus, Steven A.; Byrd, Devon R.N.; Hartley, James L.; Temple, Gary F.; Brasch, Michael A.

    2004-01-01

    The ability to clone and manipulate DNA segments is central to molecular methods that enable expression, screening, and functional characterization of genes, proteins, and regulatory elements. We previously described the development of a novel technology that utilizes in vitro site-specific recombination to provide a robust and flexible platform for high-throughput cloning and transfer of DNA segments. By using an expanded repertoire of recombination sites with unique specificities, we have extended the technology to enable the high-efficiency in vitro assembly and concerted cloning of multiple DNA segments into a vector backbone in a predefined order, orientation, and reading frame. The efficiency and flexibility of this approach enables collections of functional elements to be generated and mixed in a combinatorial fashion for the parallel assembly of numerous multi-segment constructs. The assembled constructs can be further manipulated by directing exchange of defined segments with alternate DNA segments. In this report, we demonstrate feasibility of the technology and application to the generation of fusion proteins, the linkage of promoters to genes, and the assembly of multiple protein domains. The technology has broad implications for cell and protein engineering, the expression of multidomain proteins, and gene function analysis. PMID:15489333

  17. Segmentation, statistical analysis, and modelling of the wall system in ceramic foams

    SciTech Connect

    Kampf, Jürgen; Schlachter, Anna-Lena; Redenbach, Claudia; Liebscher, André

    2015-01-15

    Closed walls in otherwise open foam structures may have a great impact on macroscopic properties of the materials. In this paper, we present two algorithms for the segmentation of such closed walls from micro-computed tomography images of the foam structure. The techniques are compared on simulated data and applied to tomographic images of ceramic filters. This allows for a detailed statistical analysis of the normal directions and sizes of the walls. Finally, we explain how the information derived from the segmented wall system can be included in a stochastic microstructure model for the foam.

  18. Moving cast shadow resistant for foreground segmentation based on shadow properties analysis

    NASA Astrophysics Data System (ADS)

    Zhou, Hao; Gao, Yun; Yuan, Guowu; Ji, Rongbin

    2015-12-01

    Moving object detection is a fundamental task in machine vision applications. However, moving cast shadow detection is one of the major concerns for accurate video segmentation: since detected moving object areas often contain shadow points, errors in measurement, localization, segmentation, classification and tracking may arise. A novel shadow elimination algorithm is proposed in this paper. A set of suspected moving object areas is detected by an adaptive Gaussian approach. A model is established based on analysis of shadow optical properties, and shadow regions are discriminated from the set of moving pixels using brightness, chromaticity and texture properties in sequence.
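    The brightness/chromaticity part of such a shadow model can be sketched directly: a cast shadow darkens a pixel by a bounded factor while leaving its chromaticity close to the background's. The ratio bounds and tolerance below are illustrative assumptions, not the paper's values, and the texture cue is omitted:

```python
import numpy as np

def shadow_mask(frame, background, low=0.4, high=0.9, chroma_tol=0.1):
    """Flag pixels whose brightness drops by a bounded factor while
    chromaticity stays close to the background's, the classic optical
    signature of a cast shadow. Inputs are HxWx3 RGB arrays."""
    f = frame.astype(float)
    b = background.astype(float)
    fb = f.mean(axis=-1) + 1e-9   # per-pixel brightness
    bb = b.mean(axis=-1) + 1e-9
    ratio = fb / bb               # < 1 where the scene got darker
    chroma_f = f / fb[..., None]  # brightness-normalized color
    chroma_b = b / bb[..., None]
    chroma_dist = np.abs(chroma_f - chroma_b).max(axis=-1)
    return (ratio > low) & (ratio < high) & (chroma_dist < chroma_tol)

# Toy scene: uniform gray background, one shadowed pixel, one red object.
bg = np.full((4, 4, 3), 120.0)
frame = bg.copy()
frame[0, 0] = [72, 72, 72]    # 0.6x brightness, same chromaticity: shadow
frame[1, 1] = [200, 40, 40]   # different chromaticity: real object
mask = shadow_mask(frame, bg)
```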

  19. Patient Segmentation Analysis Offers Significant Benefits For Integrated Care And Support.

    PubMed

    Vuik, Sabine I; Mayer, Erik K; Darzi, Ara

    2016-05-01

    Integrated care aims to organize care around the patient instead of the provider. It is therefore crucial to understand differences across patients and their needs. Segmentation analysis that uses big data can help divide a patient population into distinct groups, which can then be targeted with care models and intervention programs tailored to their needs. In this article we explore the potential applications of patient segmentation in integrated care. We propose a framework for population strategies in integrated care-whole populations, subpopulations, and high-risk populations-and show how patient segmentation can support these strategies. Through international case examples, we illustrate practical considerations such as choosing a segmentation logic, accessing data, and tailoring care models. Important issues for policy makers to consider are trade-offs between simplicity and precision, trade-offs between customized and off-the-shelf solutions, and the availability of linked data sets. We conclude that segmentation can provide many benefits to integrated care, and we encourage policy makers to support its use. PMID:27140981

  20. Two-dimensional finite-element analysis of tapered segmented structures

    NASA Astrophysics Data System (ADS)

    Rubio Noriega, Ruth; Hernandez-Figueroa, Hugo

    2013-03-01

    We present the results of a theoretical study and two-dimensional frequency-domain finite-element simulation of tapered segmented waveguides. The application we propose for this device is an adiabatically tapered and chirped PSW transmission that eliminates the higher-order modes that can propagate in a multimode semiconductor waveguide, ensuring single-mode propagation at 1.55 μm. We demonstrate that by reducing the taper functions in the design of a segmented waveguide we can filter higher-order modes at the pump wavelength in WDM systems while keeping coupling losses between the continuous waveguide and the segmented waveguide low. We obtained the cutoff wavelength as a function of the duty cycle of the segmented waveguide to show that we can, in fact, guide the 1.55-μm fundamental mode on a silicon-on-insulator platform using both silica and SU-8 as substrate materials. For the two-dimensional finite-element analysis, a new module built on a commercial platform is proposed. Its contribution is the inclusion of an anisotropic perfectly matched layer, which is better suited to solving periodic segmented structures and other discontinuity problems.

  1. Segmental analysis of indocyanine green pharmacokinetics for the reliable diagnosis of functional vascular insufficiency

    NASA Astrophysics Data System (ADS)

    Kang, Yujung; Lee, Jungsul; An, Yuri; Jeon, Jongwook; Choi, Chulhee

    2011-03-01

    Accurate and reliable diagnosis of functional insufficiency of the peripheral vasculature is essential, since Raynaud phenomenon (RP), the most common form of peripheral vascular insufficiency, is commonly associated with systemic vascular disorders. We have previously demonstrated that dynamic imaging of the near-infrared fluorophore indocyanine green (ICG) can be a noninvasive and sensitive tool to measure tissue perfusion. In the present study, we demonstrated that combined analysis of multiple parameters, especially onset time and modified Tmax (the time from onset of ICG fluorescence to Tmax), can be used as a reliable diagnostic tool for RP. To validate the method, we performed conventional thermographic analysis combined with cold challenge and rewarming, along with ICG dynamic imaging and segmental analysis. A case-control analysis demonstrated that the segmental pattern of ICG dynamics in both hands was significantly different between normal and RP cases, suggesting the possibility of clinical application of this novel method for the convenient and reliable diagnosis of RP.
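    Extracting the two parameters named above from one fluorescence time course is straightforward to sketch. The 10%-of-peak onset definition and the synthetic curve are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def icg_parameters(curve, fs, onset_frac=0.1):
    """Onset time, Tmax, and 'modified Tmax' (Tmax minus onset) from one
    ICG fluorescence time course sampled at `fs` frames per second.
    Onset is taken as the first crossing of `onset_frac` * peak
    (an assumed definition)."""
    curve = np.asarray(curve, float)
    peak = curve.max()
    onset_idx = int(np.argmax(curve >= onset_frac * peak))
    tmax_idx = int(np.argmax(curve))
    onset = onset_idx / fs
    tmax = tmax_idx / fs
    return onset, tmax, tmax - onset

# Toy course at 1 frame/s: dark for 5 s, linear wash-in to a peak at
# 20 s, then slow exponential wash-out.
t = np.arange(60.0)
curve = np.where(t < 5, 0.0,
                 np.where(t <= 20, (t - 5) / 15, np.exp(-(t - 20) / 30)))
onset, tmax, modified_tmax = icg_parameters(curve, fs=1.0)
```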

  2. Computer model analysis of the relationship of ST-segment and ST-segment/heart rate slope response to the constituents of the ischemic injury source.

    PubMed

    Hyttinen, J; Viik, J; Lehtinen, R; Plonsey, R; Malmivuo, J

    1997-07-01

    The objective of the study was to investigate a proposed linear relationship between the extent of myocardial ischemic injury and the ST-segment/heart rate (ST/HR) slope by computer simulation of the injury sources arising in exercise electrocardiographic (ECG) tests. The extent and location of the ischemic injury were simulated for both single- and multivessel coronary artery disease by use of an accurate source-volume conductor model which assumes a linear relationship between heart rate and extent of ischemia. The results indicated that in some cases the ST/HR slope in leads II, aVF, and especially V5 may be related to the extent of ischemia. However, the simulations demonstrated that neither the ST-segment deviation nor the ST/HR slope was directly proportional to either the area of the ischemic boundary or the number of vessels occluded. Furthermore, in multivessel coronary artery disease, the temporal and spatial diversity of the generated multiple injury sources distorted the presumed linearity between ST-segment deviation and heart rate. It was concluded that the ST/HR slope and ST-segment deviation of the 12-lead ECG are not able to indicate extent of ischemic injury or number of vessels occluded. PMID:9261724

  3. New Software for Market Segmentation Analysis: A Chi-Square Interaction Detector. AIR 1983 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Lay, Robert S.

    The advantages and disadvantages of new software for market segmentation analysis are discussed, and the application of this new, chi-square based procedure (CHAID), is illustrated. A comparison is presented of an earlier, binary segmentation technique (THAID) and a multiple discriminant analysis. It is suggested that CHAID is superior to earlier…

  4. FIELD VALIDATION OF EXPOSURE ASSESSMENT MODELS. VOLUME 2. ANALYSIS

    EPA Science Inventory

    This is the second of two volumes describing a series of dual tracer experiments designed to evaluate the PAL-DS model, a Gaussian diffusion model modified to take into account settling and deposition, as well as three other deposition models. In this volume, an analysis of the d...

  5. Segment and fit thresholding: a new method for image analysis applied to microarray and immunofluorescence data.

    PubMed

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B

    2015-10-01

    Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis. PMID:26339978
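    A very loose caricature of the SFT pipeline described above (tile statistics, background identification, threshold from background statistics) can be written in a few lines. The median-based background selection and the `mean + k*std` threshold are stand-in simplifications, not the published best-fit-trend procedure:

```python
import numpy as np

def sft_like_threshold(img, tile=8, k=4.0):
    """Sketch of segment-and-fit thresholding: split the image into
    tiles, take the lower-mean half of the tiles as background, and set
    the signal threshold from the pooled background statistics."""
    h, w = img.shape
    tiles, means = [], []
    for i in range(0, h - tile + 1, tile):
        for j in range(0, w - tile + 1, tile):
            seg = img[i:i + tile, j:j + tile]
            tiles.append(seg)
            means.append(seg.mean())
    means = np.array(means)
    is_bg = means <= np.median(means)       # crude background selection
    bg = np.concatenate([t.ravel() for t, b in zip(tiles, is_bg) if b])
    thresh = bg.mean() + k * bg.std()       # signal = well above background
    return img > thresh, thresh

# Toy image: textured background around 10-12, two bright signal spots.
ii, jj = np.mgrid[0:32, 0:32]
img = (10 + (ii + jj) % 3).astype(float)
img[5, 5] = img[20, 20] = 100.0
mask, thresh = sft_like_threshold(img)
```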

  6. Segmental increases in force application during colonoscope insertion: quantitative analysis using force monitoring technology

    PubMed Central

    Korman, Louis Y.; Brandt, Lawrence J.; Metz, David C.; Haddad, Nadim G.; Benjamin, Stanley B.; Lazerow, Susan K.; Miller, Hannah L.; Greenwald, David A.; Desale, Sameer; Patel, Milind; Sarvazyan, Armen

    2012-01-01

    Background Colonoscopy is a frequently performed procedure that requires extensive training and a high skill level. Objective Quantification of forces applied to the external portion of the colonoscope insertion tube during the insertion phase of colonoscopy. Design Observational cohort study of 7 expert and 9 trainee endoscopists for analysis of colonic segment force application in 49 patients. Forces were measured by using the colonoscopy force monitor, which is a wireless, handheld device that attaches to the insertion tube of the colonoscope. Setting Academic gastroenterology training programs. Patients Patients undergoing routine screening or diagnostic colonoscopy with complete segment force recordings. Main Outcome Measurements Axial and radial force and examination time. Results Both axial and radial force increased significantly as the colonoscope was advanced from the rectum to the cecum. Analysis of variance demonstrated highly significant operator-independent differences between segments of the colon (zones) in all axial and radial forces except average torque. Expert and trainee endoscopists differed only in the magnitude of counterclockwise force, average push/pull force rate used, and examination time. Limitations Small study, observational design, effect of prototype device on insertion tube manipulation. Conclusion Axial and radial forces used to advance the colonoscope increase through the segments of the colon and are operator independent. PMID:22840291

  7. Mean-Field Analysis of Recursive Entropic Segmentation of Biological Sequences

    NASA Astrophysics Data System (ADS)

    Cheong, Siew-Ann; Stodghill, Paul; Schneider, David; Myers, Christopher

    2007-03-01

    Horizontal gene transfer in bacteria results in genomic sequences which are mosaic in nature. An important first step in the analysis of a bacterial genome would thus be to model the statistically nonstationary nucleotide or protein sequence with a collection of P stationary Markov chains, and partition the sequence of length N into M statistically stationary segments/domains. This can be done for Markov chains of order K = 0 using a recursive segmentation scheme based on the Jensen-Shannon divergence, where the unknown parameters P and M are estimated from a hypothesis testing/model selection process. In this talk, we describe how the Jensen-Shannon divergence can be generalized to Markov chains of order K > 0, as well as an algorithm optimizing the positions of a fixed number of domain walls. We then describe a mean field analysis of the generalized recursive Jensen-Shannon segmentation scheme, and show how most domain walls appear as local maxima in the divergence spectrum of the sequence, before highlighting the main problem associated with the recursive segmentation scheme, i.e. the strengths of the domain walls selected recursively do not decrease monotonically. This problem is especially severe in repetitive sequences, whose statistical signatures we will also discuss.
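    For the K = 0 case, one level of the recursive Jensen-Shannon segmentation reduces to scanning for the split that maximizes the divergence between left and right symbol distributions. A minimal sketch (single split, O(n^2); a real implementation would recurse and apply the hypothesis test mentioned above):

```python
import numpy as np
from collections import Counter

def entropy(seq):
    """Shannon entropy (bits) of the symbol frequencies in `seq`."""
    counts = np.array(list(Counter(seq).values()), float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def best_js_split(seq):
    """Split point maximizing the Jensen-Shannon divergence between the
    left and right segments (the order K = 0 criterion)."""
    n = len(seq)
    h_all = entropy(seq)
    best_l, best_js = 1, -1.0
    for l in range(1, n):
        js = (h_all
              - (l / n) * entropy(seq[:l])
              - ((n - l) / n) * entropy(seq[l:]))
        if js > best_js:
            best_js, best_l = js, l
    return best_l, best_js
```

    On a perfectly mosaic toy sequence the maximum sits exactly at the domain wall, which is how domain walls appear as local maxima in the divergence spectrum.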

  8. Segment and Fit Thresholding: A New Method for Image Analysis Applied to Microarray and Immunofluorescence Data

    PubMed Central

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M.; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E.; Allen, Peter J.; Sempere, Lorenzo F.; Haab, Brian B.

    2016-01-01

    Certain experiments involve the high-throughput quantification of image data, thus requiring algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multi-color, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu’s method for selected images. SFT promises to advance the goal of full automation in image analysis. PMID:26339978

  9. An approach to multi-temporal MODIS image analysis using image classification and segmentation

    NASA Astrophysics Data System (ADS)

    Senthilnath, J.; Bajpai, Shivesh; Omkar, S. N.; Diwakar, P. G.; Mani, V.

    2012-11-01

    This paper discusses an approach for river mapping and flood evaluation based on multi-temporal time series analysis of satellite images utilizing pixel spectral information for image classification and region-based segmentation for extracting water-covered regions. Analysis of MODIS satellite images is applied in three stages: before flood, during flood and after flood. Water regions are extracted from the MODIS images using image classification (based on spectral information) and image segmentation (based on spatial information). Multi-temporal MODIS images from "normal" (non-flood) and flood time-periods are processed in two steps. In the first step, image classifiers such as Support Vector Machines (SVM) and Artificial Neural Networks (ANN) separate the image pixels into water and non-water groups based on their spectral features. The classified image is then segmented using spatial features of the water pixels to remove the misclassified water. From the results obtained, we evaluate the performance of the method and conclude that the use of image classification (SVM and ANN) and region-based image segmentation is an accurate and reliable approach for the extraction of water-covered regions.
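    The two-step structure (per-pixel spectral classification, then spatial clean-up) can be sketched with a stand-in threshold rule in place of the SVM/ANN stage and a connected-component filter as the region-based step. Everything here is an illustrative simplification of the workflow, not the paper's classifiers:

```python
import numpy as np

def classify_water(band, thresh=0.2):
    """Step 1 stand-in for the SVM/ANN stage: per-pixel spectral rule
    (water is dark in this toy band)."""
    return band < thresh

def remove_small_regions(mask, min_size=5):
    """Step 2: region-based clean-up. Drop 4-connected components
    smaller than `min_size`, i.e. likely misclassified pixels."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    out = np.zeros_like(mask)
    for si in range(h):
        for sj in range(w):
            if mask[si, sj] and not seen[si, sj]:
                stack, comp = [(si, sj)], []
                seen[si, sj] = True
                while stack:                      # flood fill one component
                    i, j = stack.pop()
                    comp.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < h and 0 <= nj < w
                                and mask[ni, nj] and not seen[ni, nj]):
                            seen[ni, nj] = True
                            stack.append((ni, nj))
                if len(comp) >= min_size:         # keep only large regions
                    for i, j in comp:
                        out[i, j] = True
    return out

# Toy band: bright land, one dark lake, one dark noise pixel.
band = np.ones((10, 10))
band[2:8, 2:8] = 0.1
band[0, 9] = 0.1
cleaned = remove_small_regions(classify_water(band))
```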

  10. Microscopy Image Browser: A Platform for Segmentation and Analysis of Multidimensional Datasets

    PubMed Central

    Belevich, Ilya; Joensuu, Merja; Kumar, Darshan; Vihinen, Helena; Jokitalo, Eija

    2016-01-01

    Understanding the structure–function relationship of cells and organelles in their natural context requires multidimensional imaging. As techniques for multimodal 3-D imaging have become more accessible, effective processing, visualization, and analysis of large datasets are posing a bottleneck for the workflow. Here, we present a new software package for high-performance segmentation and image processing of multidimensional datasets that improves and facilitates the full utilization and quantitative analysis of acquired data, which is freely available from a dedicated website. The open-source environment enables modification and insertion of new plug-ins to customize the program for specific needs. We provide practical examples of program features used for processing, segmentation and analysis of light and electron microscopy datasets, and detailed tutorials to enable users to rapidly and thoroughly learn how to use the program. PMID:26727152

  11. Microscopy Image Browser: A Platform for Segmentation and Analysis of Multidimensional Datasets.

    PubMed

    Belevich, Ilya; Joensuu, Merja; Kumar, Darshan; Vihinen, Helena; Jokitalo, Eija

    2016-01-01

    Understanding the structure-function relationship of cells and organelles in their natural context requires multidimensional imaging. As techniques for multimodal 3-D imaging have become more accessible, effective processing, visualization, and analysis of large datasets are posing a bottleneck for the workflow. Here, we present a new software package for high-performance segmentation and image processing of multidimensional datasets that improves and facilitates the full utilization and quantitative analysis of acquired data, which is freely available from a dedicated website. The open-source environment enables modification and insertion of new plug-ins to customize the program for specific needs. We provide practical examples of program features used for processing, segmentation and analysis of light and electron microscopy datasets, and detailed tutorials to enable users to rapidly and thoroughly learn how to use the program. PMID:26727152

  12. A new image segmentation method based on multifractal detrended moving average analysis

    NASA Astrophysics Data System (ADS)

    Shi, Wen; Zou, Rui-biao; Wang, Fang; Su, Le

    2015-08-01

    In order to segment and delineate regions of interest in an image, we propose a novel algorithm based on multifractal detrended moving average analysis (MF-DMA). In this method, the generalized Hurst exponent h(q) is first calculated for every pixel and treated as the local feature of a surface. A multifractal detrended moving average spectrum (MF-DMS) D(h(q)) is then defined using the idea of the box-counting dimension method. We therefore call the new image segmentation method the MF-DMS-based algorithm. The performance of the MF-DMS-based method is tested in two image segmentation experiments on rapeseed leaf images of potassium deficiency and magnesium deficiency under three cases, namely, backward (θ = 0), centered (θ = 0.5) and forward (θ = 1), with different q values. Comparison experiments are conducted between the MF-DMS method and two other multifractal segmentation methods, namely, the popular MFS-based and the latest MF-DFS-based methods. The results show that our MF-DMS-based method is superior to the latter two. The best segmentation result for the rapeseed leaf images of potassium deficiency and magnesium deficiency comes from the same parameter combination of θ = 0.5 and D(h(-10)) when using the MF-DMS-based method. An interesting finding is that D(h(-10)) outperforms the other parameters for both the MF-DMS-based method with the centered case and the MF-DFS-based algorithm. By comparing the multifractal nature of nutrient-deficient and non-deficient areas determined by the segmentation results, an important finding is that the fluctuation of gray values in nutrient-deficient areas is much more severe than in non-deficient areas.
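    The one-dimensional backbone of DMA (here the backward, θ = 0 variant) is easy to sketch: integrate the series, detrend with a moving average, and read the Hurst exponent off the log-log slope of the fluctuation function. The 2D, q-dependent version used in the paper builds on the same idea; window sizes below are illustrative:

```python
import numpy as np

def dma_hurst(x, windows=(4, 8, 16, 32, 64)):
    """Backward (theta = 0) detrended moving average estimate of the
    Hurst exponent of a 1D series `x`."""
    y = np.cumsum(np.asarray(x, float) - np.mean(x))   # profile
    flucts = []
    for n in windows:
        kernel = np.ones(n) / n
        # backward moving average of the profile
        ma = np.convolve(y, kernel, mode="full")[:len(y)]
        resid = (y - ma)[n - 1:]                       # skip warm-up region
        flucts.append(np.sqrt(np.mean(resid ** 2)))
    slope, _ = np.polyfit(np.log(windows), np.log(flucts), 1)
    return slope
```

    For uncorrelated noise the estimate should sit near the theoretical value H = 0.5; the centered (θ = 0.5) and forward (θ = 1) variants shift the averaging window relative to the current point.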

  13. Automated iterative neutrosophic lung segmentation for image analysis in thoracic computed tomography

    PubMed Central

    Guo, Yanhui; Zhou, Chuan; Chan, Heang-Ping; Chughtai, Aamer; Wei, Jun; Hadjiiski, Lubomir M.; Kazerooni, Ella A.

    2013-01-01

    Purpose: Lung segmentation is a fundamental step in many image analysis applications for lung diseases and abnormalities in thoracic computed tomography (CT). The authors have previously developed a lung segmentation method based on expectation-maximization (EM) analysis and morphological operations (EMM) for our computer-aided detection (CAD) system for pulmonary embolism (PE) in CT pulmonary angiography (CTPA). However, due to the large variations in pathology that may be present in thoracic CT images, it is difficult to extract the lung regions accurately, especially when the lung parenchyma contains extensive lung diseases. The purpose of this study is to develop a new method that can provide accurate lung segmentation, including those affected by lung diseases. Methods: An iterative neutrosophic lung segmentation (INLS) method was developed to improve the EMM segmentation utilizing the anatomic features of the ribs and lungs. The initial lung regions (ILRs) were extracted using our previously developed EMM method, in which the ribs were extracted using 3D hierarchical EM segmentation and the ribcage was constructed using morphological operations. Based on the anatomic features of ribs and lungs, the initial EMM segmentation was refined using INLS to obtain the final lung regions. In the INLS method, the anatomic features were mapped into a neutrosophic domain, and the neutrosophic operation was performed iteratively to refine the ILRs. With IRB approval, 5 and 58 CTPA scans were collected retrospectively and used as training and test sets, of which 2 and 34 cases had lung diseases, respectively. The lung regions manually outlined by an experienced thoracic radiologist were used as reference standard for performance evaluation of the automated lung segmentation. The percentage overlap area (POA), the Hausdorff distance (Hdist), and the average distance (AvgDist) of the lung boundaries relative to the reference standard were used as performance metrics. 
Results: The proposed method achieved larger POAs and smaller distance errors than the EMM method. For the 58 test cases, the average POA, Hdist, and AvgDist were improved from 85.4 ± 18.4%, 22.6 ± 29.4 mm, and 3.5 ± 5.4 mm using EMM to 91.2 ± 6.7%, 16.0 ± 11.3 mm, and 2.5 ± 1.0 mm using INLS, respectively. The improvements were statistically significant (p < 0.05). To evaluate the accuracy of the INLS method in the identification of the lung boundaries affected by lung diseases, the authors separately analyzed the performance of the proposed method on the cases with versus without the lung diseases. The results showed that the cases without lung diseases were segmented more accurately than the cases with lung diseases by both the EMM and the INLS methods, but the INLS method achieved better performance than the EMM method in both cases. Conclusions: The new INLS method utilizing the anatomic features of the rib and lung significantly improved the accuracy of lung segmentation, especially for the cases affected by lung diseases. Improvement in lung segmentation will facilitate many image analysis tasks and CAD applications for lung diseases and abnormalities in thoracic CT, including automated PE detection. PMID:23927326
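    The evaluation metrics named above are simple to state in code. POA is taken here as the Dice-style overlap in percent (the paper's exact definition may differ), and the Hausdorff distance is computed by brute force over foreground pixels with unit spacing:

```python
import numpy as np

def percent_overlap(a, b):
    """Dice-style overlap of two binary masks, in percent (one common
    reading of 'percentage overlap area')."""
    inter = np.logical_and(a, b).sum()
    return 200.0 * inter / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the foreground point sets
    of two binary masks (brute force; pixels as unit spacing)."""
    pa = np.argwhere(a).astype(float)
    pb = np.argwhere(b).astype(float)
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```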

  14. Mimicking human expert interpretation of remotely sensed raster imagery by using a novel segmentation analysis within ArcGIS

    NASA Astrophysics Data System (ADS)

    Le Bas, Tim; Scarth, Anthony; Bunting, Peter

    2015-04-01

    Traditional computer based methods for the interpretation of remotely sensed imagery use each pixel individually or the average of a small window of pixels to calculate a class or thematic value, which provides an interpretation. However when a human expert interprets imagery, the human eye is excellent at finding coherent and homogenous areas and edge features. It may therefore be advantageous for computer analysis to mimic human interpretation. A new toolbox for ArcGIS 10.x will be presented that segments the data layers into a set of polygons. Each polygon is defined by a K-means clustering and region growing algorithm, thus finding areas, their edges and any lineations in the imagery. Attached to each polygon are the characteristics of the imagery such as mean and standard deviation of the pixel values, within the polygon. The segmentation of imagery into a jigsaw of polygons also has the advantage that the human interpreter does not need to spend hours digitising the boundaries. The segmentation process has been taken from the RSGIS library of analysis and classification routines (Bunting et al., 2014). These routines are freeware and have been modified to be available in the ArcToolbox under the Windows (v7) operating system. Input to the segmentation process is a multi-layered raster image, for example; a Landsat image, or a set of raster datasets made up from derivatives of topography. The size and number of polygons are set by the user and are dependent on the imagery used. Examples will be presented of data from the marine environment utilising bathymetric depth, slope, rugosity and backscatter from a multibeam system. Meaningful classification of the polygons using their numerical characteristics is the next goal. Object based image analysis (OBIA) should help this workflow. Fully calibrated imagery systems will allow numerical classification to be translated into more readily understandable terms. Peter Bunting, Daniel Clewley, Richard M. 
Lucas and Sam Gillingham. 2014. The Remote Sensing and GIS Software Library (RSGISLib), Computers & Geosciences. Volume 62, Pages 216-226 http://dx.doi.org/10.1016/j.cageo.2013.08.007.
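
    As a toy illustration of the segment-then-attribute idea described above (not the RSGISLib implementation), the sketch below clusters single-band pixel values with a tiny 1-D K-means and attaches per-segment mean and standard deviation, the kind of statistics the toolbox attaches to each polygon; all pixel values are invented.

```python
# Illustrative sketch only: 1-D K-means over pixel values, then
# per-segment (mean, std) attributes, as the toolbox attaches to polygons.
from statistics import mean, pstdev

def kmeans_1d(values, k, iters=20):
    # Initialise centroids spread evenly across the value range.
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        centroids = [mean(c) if c else centroids[i] for i, c in enumerate(clusters)]
    return centroids, clusters

# Fake single-band "imagery": two homogeneous areas separated by an edge.
pixels = [10, 11, 9, 10, 52, 50, 51, 49, 48, 12]
centroids, clusters = kmeans_1d(pixels, k=2)
stats = [(round(mean(c), 1), round(pstdev(c), 2)) for c in clusters if c]
print(stats)  # per-segment (mean, std) attributes
```

    A region-growing step would additionally require cluster members to be spatially contiguous; here only the attribute computation is shown.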

  15. The Impact of Policy Guidelines on Hospital Antibiotic Use over a Decade: A Segmented Time Series Analysis

    PubMed Central

    Chandy, Sujith J.; Naik, Girish S.; Charles, Reni; Jeyaseelan, Visalakshi; Naumova, Elena N.; Thomas, Kurien; Lundborg, Cecilia Stalsby

    2014-01-01

    Introduction Antibiotic pressure contributes to rising antibiotic resistance. Policy guidelines encourage rational prescribing behavior, but effectiveness in containing antibiotic use needs further assessment. This study therefore assessed the patterns of antibiotic use over a decade and analyzed the impact of different modes of guideline development and dissemination on inpatient antibiotic use. Methods Antibiotic use was calculated monthly as defined daily doses (DDD) per 100 bed days for nine antibiotic groups and overall. This time series compared trends in antibiotic use in five adjacent time periods identified as Segments, divided based on differing modes of guideline development and implementation: Segment 1, baseline prior to antibiotic guidelines development; Segment 2, during preparation of guidelines and booklet dissemination; Segment 3, dormant period with no guidelines dissemination; Segment 4, booklet dissemination of revised guidelines; Segment 5, booklet dissemination of revised guidelines with intranet access. Regression analysis adapted for segmented time series and adjusted for seasonality assessed changes in antibiotic use trend. Results Overall antibiotic use increased at a monthly rate of 0.95 (SE = 0.18), 0.21 (SE = 0.08) and 0.31 (SE = 0.06) for Segments 1, 2 and 3, stabilized in Segment 4 (0.05; SE = 0.10) and declined in Segment 5 (-0.37; SE = 0.11). Segments 1, 2 and 4 exhibited seasonal fluctuations. Pairwise segmented regression adjusted for seasonality revealed a significant drop in monthly antibiotic use of 0.401 (SE = 0.089; p<0.001) for Segment 5 compared to Segment 4. Most antibiotic groups showed trends similar to overall use. Conclusion Use of overall and specific antibiotic groups showed varied patterns and seasonal fluctuations. Containment of rising overall antibiotic use was possible during periods of active guideline dissemination. Wider access through the intranet facilitated a significant decline in use. 
Stakeholders and policy makers are urged to develop guidelines, ensure active dissemination and enable accessibility through computer networks to contain antibiotic use and decrease antibiotic pressure. PMID:24647339
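
    The segmented-regression idea above can be sketched as fitting a separate linear trend to each policy period by ordinary least squares. The monthly-use data and segment boundaries below are invented, and the study's seasonal adjustment and standard errors are omitted.

```python
# Minimal segmented (interrupted) time-series sketch: one OLS slope per segment.
def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Toy monthly DDD/100 bed-days: rising in segment 1, declining in segment 2.
months = list(range(24))
use = [30 + 0.9 * t for t in range(12)] + [40.8 - 0.4 * t for t in range(12)]
seg1 = ols_slope(months[:12], use[:12])   # trend before the intervention
seg2 = ols_slope(months[12:], use[12:])   # trend after the intervention
print(round(seg1, 2), round(seg2, 2))  # → 0.9 -0.4
```

    Comparing the two slopes (and the level change at the boundary) is the core of the pairwise segmented comparison described in the abstract.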

  16. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    SciTech Connect

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi

    2010-05-15

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as the "gold standard." Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. 
Computerized CT liver volumetry would require substantially less completion time (compared to an average of 39 min per case by manual segmentation). Conclusions: The computerized liver extraction scheme provides an efficient and accurate way of measuring liver volumes in CT.
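
    Once a segmentation scheme has produced a binary liver mask, the volumetry step itself reduces to counting labelled voxels and scaling by the voxel size. A sketch with hypothetical spacing and voxel counts (only the 1457 cc manual volume echoes a figure reported above; everything else is invented for illustration):

```python
# Hypothetical numbers: volume from a binary mask = voxel count x voxel size.
voxel_mm3 = 0.7 * 0.7 * 5.0                  # in-plane spacing x slice thickness (assumed)
auto_voxels = 614_000                        # voxels labelled "liver" (made up)
auto_cc = auto_voxels * voxel_mm3 / 1000.0   # mm^3 -> cc
manual_cc = 1457.0                           # manual gold-standard volume
pct_error = 100.0 * abs(auto_cc - manual_cc) / manual_cc
print(round(auto_cc, 1), round(pct_error, 1))
```

    The percent volume error reported in the abstract is this quantity averaged over cases.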

  17. Stress Analysis of Bolted, Segmented Cylindrical Shells Exhibiting Flange Mating-Surface Waviness

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Phillips, Dawn R.; Raju, Ivatury S.

    2009-01-01

    Bolted, segmented cylindrical shells are a common structural component in many engineering systems especially for aerospace launch vehicles. Segmented shells are often needed due to limitations of manufacturing capabilities or transportation issues related to very long, large-diameter cylindrical shells. These cylindrical shells typically have a flange or ring welded to opposite ends so that shell segments can be mated together and bolted to form a larger structural system. As the diameter of these shells increases, maintaining strict fabrication tolerances for the flanges to be flat and parallel on a welded structure is an extreme challenge. Local fit-up stresses develop in the structure due to flange mating-surface mismatch (flange waviness). These local stresses need to be considered when predicting a critical initial flaw size. Flange waviness is one contributor to the fit-up stress state. The present paper describes the modeling and analysis effort to simulate fit-up stresses due to flange waviness in a typical bolted, segmented cylindrical shell. Results from parametric studies are presented for various flange mating-surface waviness distributions and amplitudes.

  18. Automated segmentation of muscle and adipose tissue on CT images for human body composition analysis

    NASA Astrophysics Data System (ADS)

    Chung, Howard; Cobzas, Dana; Birdsell, Laura; Lieffers, Jessica; Baracos, Vickie

    2009-02-01

    The ability to compute body composition in cancer patients lends itself to determining the specific clinical outcomes associated with fat and lean tissue stores. For example, a wasting syndrome of advanced disease associates with shortened survival. Moreover, certain tissue compartments represent sites for drug distribution and are likely determinants of chemotherapy efficacy and toxicity. CT images are abundant, but these cannot be fully exploited unless there exist practical and fast approaches for tissue quantification. Here we propose a fully automated method for segmenting muscle, visceral adipose tissue (VAT), and subcutaneous adipose tissue (SAT), taking the approach of shape modeling for the analysis of skeletal muscle. Muscle shape is represented using PCA-encoded free-form deformations with respect to a mean shape. The shape model is learned from manually segmented images and used in conjunction with a tissue appearance prior. VAT and SAT are segmented based on the final deformed muscle shape. In comparing the automatic and manual methods, coefficients of variation (COV) (1-2%) were similar to or smaller than the inter- and intra-observer COVs reported for manual segmentation.
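
    The PCA shape-model idea can be sketched in miniature: stack training shapes, compute the covariance of their coordinates, and take the leading eigenvector as the dominant mode of variation. The coordinates below are invented and only two dimensions vary (so the 2x2 eigenproblem has a closed form); real PCA-encoded free-form deformations live in a far higher-dimensional space.

```python
# Toy PCA shape model: leading mode of variation from a 2x2 covariance.
import math

# Each row: one training shape reduced to a single landmark offset (dx, dy).
shapes = [(1.0, 0.5), (2.0, 1.1), (3.0, 1.4), (4.0, 2.1), (5.0, 2.4)]
n = len(shapes)
mx = sum(p[0] for p in shapes) / n
my = sum(p[1] for p in shapes) / n
sxx = sum((p[0] - mx) ** 2 for p in shapes) / n
syy = sum((p[1] - my) ** 2 for p in shapes) / n
sxy = sum((p[0] - mx) * (p[1] - my) for p in shapes) / n
# Leading eigenvalue/eigenvector of [[sxx, sxy], [sxy, syy]].
lam = 0.5 * (sxx + syy + math.sqrt((sxx - syy) ** 2 + 4 * sxy ** 2))
vx, vy = lam - syy, sxy               # unnormalised principal direction
norm = math.hypot(vx, vy)
mode = (vx / norm, vy / norm)
print([round(c, 3) for c in mode])    # dominant mode of shape variation
```

    New shapes are then encoded by their coefficients along such modes, which is what makes the deformation space low-dimensional and learnable from few manual segmentations.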

  19. Analysis of gene expression levels in individual bacterial cells without image segmentation

    SciTech Connect

    Kwak, In Hae; Son, Minjun; Hagen, Stephen J.

    2012-05-11

    Highlights: • We present a method for extracting gene expression data from images of bacterial cells. • The method does not employ cell segmentation and does not require high magnification. • Fluorescence and phase contrast images of the cells are correlated through the physics of phase contrast. • We demonstrate the method by characterizing noisy expression of comX in Streptococcus mutans. -- Abstract: Studies of stochasticity in gene expression typically make use of fluorescent protein reporters, which permit the measurement of expression levels within individual cells by fluorescence microscopy. Analysis of such microscopy images is almost invariably based on a segmentation algorithm, where the image of a cell or cluster is analyzed mathematically to delineate individual cell boundaries. However, segmentation can be ineffective for studying bacterial cells or clusters, especially at lower magnification, where outlines of individual cells are poorly resolved. Here we demonstrate an alternative method for analyzing such images without segmentation. The method employs a comparison between the pixel brightness in phase contrast vs fluorescence microscopy images. By fitting the correlation between phase contrast and fluorescence intensity to a physical model, we obtain well-defined estimates for the different levels of gene expression that are present in the cell or cluster. The method reveals the boundaries of the individual cells, even if the source images lack the resolution to show these boundaries clearly.
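
    The segmentation-free idea can be sketched as a regression of fluorescence pixel brightness against phase-contrast brightness: the fitted relation estimates expression level without ever outlining a cell. The pixel values below are synthetic, and the paper fits a physical phase-contrast model rather than the plain line used here.

```python
# Hedged sketch: correlate fluorescence vs phase-contrast pixel intensities.
def linfit(xs, ys):
    # Ordinary least-squares line fit.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx             # slope ~ expression per unit cell material

phase = [5, 12, 20, 33, 41, 55]       # toy phase-contrast brightness
fluor = [11, 25, 41, 67, 83, 111]     # toy reporter fluorescence
slope, intercept = linfit(phase, fluor)
print(round(slope, 2), round(intercept, 2))  # → 2.0 1.0
```

    In practice a cluster with several distinct expression levels would show several branches in this correlation, which is what the model fit disentangles.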

  20. Power Loss Analysis and Comparison of Segmented and Unsegmented Energy Coupling Coils for Wireless Energy Transfer

    PubMed Central

    Tang, Sai Chun; McDannold, Nathan J.

    2015-01-01

    This paper investigated the power losses of unsegmented and segmented energy coupling coils for wireless energy transfer. Four 30-cm energy coupling coils with different winding separations, conductor cross-sectional areas, and number of turns were developed. The four coils were tested in both unsegmented and segmented configurations. The winding conduction and intrawinding dielectric losses of the coils were evaluated individually based on a well-established lumped circuit model. We found that the intrawinding dielectric loss can be as much as seven times higher than the winding conduction loss at 6.78 MHz when the unsegmented coil is tightly wound. The dielectric loss of an unsegmented coil can be reduced by increasing the winding separation or reducing the number of turns, but the power transfer capability is reduced because of the reduced magnetomotive force. Coil segmentation using resonant capacitors has recently been proposed to significantly reduce the operating voltage of a coil to a safe level in wireless energy transfer for medical implants. Here, we found that it can naturally eliminate the dielectric loss. The coil segmentation method and the power loss analysis used in this paper could be applied to the transmitting, receiving, and resonant coils in two- and four-coil energy transfer systems. PMID:26640745
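
    The safety rationale for coil segmentation can be illustrated with back-of-envelope numbers: n series resonant segments divide the coil's reactive voltage by n, and each segment needs a capacitor resonant with its share of the inductance. The inductance and current below are assumed for illustration and are not values from the paper.

```python
# Back-of-envelope sketch (illustrative values only, not from the paper).
import math

f = 6.78e6                   # operating frequency, Hz (ISM band)
L = 20e-6                    # assumed total coil inductance, H
I = 1.0                      # assumed coil current, A (rms)
v_unsegmented = 2 * math.pi * f * L * I      # reactive voltage across whole coil
for n in (1, 2, 4):
    # Resonant capacitor for one segment of inductance L/n.
    c_seg = 1 / ((2 * math.pi * f) ** 2 * (L / n))
    print(n, round(v_unsegmented / n, 1), f"{c_seg:.2e}")
```

    Lower voltage per segment also means a weaker electric field across the winding insulation, which is consistent with the reduced intrawinding dielectric loss reported above.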

  1. Analysis, design, and test of a graphite/polyimide Shuttle orbiter body flap segment

    NASA Technical Reports Server (NTRS)

    Graves, S. R.; Morita, W. H.

    1982-01-01

    For future missions, increases in Space Shuttle orbiter deliverable and recoverable payload weight capability may be needed. Such increases could be obtained by reducing the inert weight of the Shuttle. The application of advanced composites in orbiter structural components would make it possible to achieve such reductions. In 1975, NASA selected the orbiter body flap as a demonstration component for the Composite for Advanced Space Transportation Systems (CASTS) program. The progress made in 1977 through 1980 was integrated into a design of a graphite/polyimide (Gr/Pi) body flap technology demonstration segment (TDS). Aspects of composite body flap design and analysis are discussed, taking into account the direct-bond fibrous refractory composite insulation (FRCI) tile on Gr/Pi structure, Gr/Pi body flap weight savings, the body flap design concept, and composite body flap analysis. Details regarding the Gr/Pi technology demonstration segment are also examined.

  2. Screening Analysis : Volume 1, Description and Conclusions.

    SciTech Connect

    Bonneville Power Administration; Corps of Engineers; Bureau of Reclamation

    1992-08-01

    The SOR consists of three analytical phases leading to a Draft EIS. The first phase, Pilot Analysis, was performed for the purpose of testing the decision analysis methodology being used in the SOR. The Pilot Analysis is described later in this chapter. The second phase, Screening Analysis, examines all possible operating alternatives using a simplified analytical approach. It is described in detail in this and the next chapter. This document also presents the results of screening. The final phase, Full-Scale Analysis, will be documented in the Draft EIS and is intended to evaluate comprehensively the few best alternatives arising from the screening analysis. The purpose of screening is to analyze a wide variety of differing ways of operating the Columbia River system to test the reaction of the system to change. The many alternatives considered reflect the range of needs and requirements of the various river users and interests in the Columbia River Basin. While some of the alternatives might be viewed as extreme, the information gained from the analysis is useful in highlighting issues and conflicts in meeting operating objectives. Screening is also intended to develop a broad technical basis for evaluation, including regional experts, and to begin developing an evaluation capability for each river use that will support full-scale analysis. Finally, screening provides a logical method for examining all possible options and reaching a decision on a few alternatives worthy of full-scale analysis. An organizational structure was developed and staffed to manage and execute the SOR, specifically during the screening phase and the upcoming full-scale analysis phase. The organization involves ten technical work groups, each representing a particular river use. Several other groups exist to oversee or support the efforts of the work groups.

  3. Laser power conversion system analysis, volume 1

    NASA Technical Reports Server (NTRS)

    Jones, W. S.; Morgan, L. L.; Forsyth, J. B.; Skratt, J. P.

    1979-01-01

    The orbit-to-orbit laser energy conversion system analysis established a mission model of satellites with various orbital parameters and average electrical power requirements ranging from 1 to 300 kW. The system analysis evaluated various conversion techniques, power system deployment parameters, power system electrical supplies, and other critical subsystems relative to various combinations of the mission model. The analysis showed that the laser power system would not be competitive with current satellite power systems from weight, cost, and development risk standpoints.

  4. Fully Bayesian Inference for Structural MRI: Application to Segmentation and Statistical Analysis of T2-Hypointensities

    PubMed Central

    Schmidt, Paul; Schmid, Volker J.; Gaser, Christian; Buck, Dorothea; Bührlen, Susanne; Förschler, Annette; Mühlau, Mark

    2013-01-01

    Aiming at iron-related T2-hypointensity, which is related to normal aging and neurodegenerative processes, we here present two practicable approaches, based on Bayesian inference, for preprocessing and statistical analysis of a complex set of structural MRI data. In particular, Markov Chain Monte Carlo methods were used to simulate posterior distributions. First, we rendered a segmentation algorithm that uses outlier detection based on model checking techniques within a Bayesian mixture model. Second, we rendered an analytical tool comprising a Bayesian regression model with smoothness priors (in the form of Gaussian Markov random fields) mitigating the necessity to smooth data prior to statistical analysis. For validation, we used simulated data and MRI data of 27 healthy controls (age: ; range, ). We first observed robust segmentation of both simulated T2-hypointensities and gray-matter regions known to be T2-hypointense. Second, simulated data and images of segmented T2-hypointensity were analyzed. We found not only robust identification of simulated effects but also a biologically plausible age-related increase of T2-hypointensity primarily within the dentate nucleus but also within the globus pallidus, substantia nigra, and red nucleus. Our results indicate that fully Bayesian inference can successfully be applied for preprocessing and statistical analysis of structural MRI data. PMID:23874537
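
    The MCMC posterior simulation mentioned above can be sketched with a minimal Metropolis sampler on a toy model (a normal mean with known variance and a weak normal prior), not the paper's Bayesian mixture model; the data are invented.

```python
# Tiny Metropolis sampler: simulate the posterior of a normal mean.
import math
import random
import statistics

random.seed(1)
data = [4.8, 5.1, 5.3, 4.9, 5.2, 5.0]         # toy intensity values

def log_post(mu):
    # N(0, 10^2) prior on the mean; unit-variance normal likelihood.
    prior = -0.5 * (mu / 10.0) ** 2
    like = sum(-0.5 * (x - mu) ** 2 for x in data)
    return prior + like

mu, chain = 0.0, []
for _ in range(5000):
    prop = mu + random.gauss(0, 0.5)           # random-walk proposal
    if log_post(prop) - log_post(mu) > math.log(random.random()):
        mu = prop                              # accept the move
    chain.append(mu)
post_mean = statistics.mean(chain[1000:])      # discard burn-in
print(round(post_mean, 2))                     # near the sample mean
```

    The paper's mixture model replaces this single mean with per-class parameters and adds model-checking-based outlier detection, but the sampling mechanics are the same.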

  5. Buckling Design and Analysis of a Payload Fairing One-Sixth Cylindrical Arc-Segment Panel

    NASA Technical Reports Server (NTRS)

    Kosareo, Daniel N.; Oliver, Stanley T.; Bednarcyk, Brett A.

    2013-01-01

    Design and analysis results are reported for a panel that is a one-sixth arc-segment of a full 33-ft diameter cylindrical barrel section of a payload fairing structure. Six such panels could be used to construct the fairing barrel, and, as such, compression buckling testing of a one-sixth arc-segment panel would serve as a validation test of the buckling analyses used to design the fairing panels. In this report, linear and nonlinear buckling analyses have been performed using finite element software for one-sixth arc-segment panels composed of aluminum honeycomb core with graphite/epoxy composite facesheets and an alternative fiber reinforced foam (FRF) composite sandwich design. The cross sections of both concepts were sized to represent realistic Space Launch System (SLS) payload fairing panels. Based on shell-based linear buckling analyses, smaller, more manageable buckling test panel dimensions were determined such that the panel would still be expected to buckle with a circumferential (as opposed to column-like) mode with significant separation between the first and second buckling modes. More detailed nonlinear buckling analyses were then conducted for honeycomb panels of various sizes using both Abaqus and ANSYS finite element codes, and for the smaller size panel, a solid-based finite element analysis was conducted. Finally, for the smaller size FRF panel, nonlinear buckling analysis was performed wherein geometric imperfections measured from an actual manufactured FRF panel were included. It was found that the measured imperfection did not significantly affect the panel's predicted buckling response.

  6. Fully Bayesian inference for structural MRI: application to segmentation and statistical analysis of T2-hypointensities.

    PubMed

    Schmidt, Paul; Schmid, Volker J; Gaser, Christian; Buck, Dorothea; Bührlen, Susanne; Förschler, Annette; Mühlau, Mark

    2013-01-01

    Aiming at iron-related T2-hypointensity, which is related to normal aging and neurodegenerative processes, we here present two practicable approaches, based on Bayesian inference, for preprocessing and statistical analysis of a complex set of structural MRI data. In particular, Markov Chain Monte Carlo methods were used to simulate posterior distributions. First, we rendered a segmentation algorithm that uses outlier detection based on model checking techniques within a Bayesian mixture model. Second, we rendered an analytical tool comprising a Bayesian regression model with smoothness priors (in the form of Gaussian Markov random fields) mitigating the necessity to smooth data prior to statistical analysis. For validation, we used simulated data and MRI data of 27 healthy controls (age: [Formula: see text]; range, [Formula: see text]). We first observed robust segmentation of both simulated T2-hypointensities and gray-matter regions known to be T2-hypointense. Second, simulated data and images of segmented T2-hypointensity were analyzed. We found not only robust identification of simulated effects but also a biologically plausible age-related increase of T2-hypointensity primarily within the dentate nucleus but also within the globus pallidus, substantia nigra, and red nucleus. Our results indicate that fully Bayesian inference can successfully be applied for preprocessing and statistical analysis of structural MRI data. PMID:23874537

  7. Comparison between Brain Atrophy and Subdural Volume to Predict Chronic Subdural Hematoma: Volumetric CT Imaging Analysis

    PubMed Central

    Ju, Min-Wook; Kwon, Hyon-Jo; Choi, Seung-Won; Koh, Hyeon-Song; Youm, Jin-Young; Song, Shi-Hun

    2015-01-01

    Objective Brain atrophy and subdural hygroma are well-known factors that enlarge the subdural space, which induces formation of chronic subdural hematoma (CSDH). Thus, we identified the subdural volume that could be used to predict the rate of future CSDH after head trauma using computed tomography (CT) volumetric analysis. Methods A single-institution case-control study was conducted involving 1,186 patients who visited our hospital after head trauma from January 1, 2010 to December 31, 2014. Fifty-one patients with delayed CSDH were identified, along with 50 age- and sex-matched patients as controls. Intracranial volume (ICV), the brain parenchyma, and the subdural space were segmented using CT image-based software. To adjust for variations in head size, volume ratios were assessed as a percentage of ICV [brain volume index (BVI), subdural volume index (SVI)]. The maximum depth of the subdural space on both sides was used to estimate the SVI. Results Before adjusting for cranium size, brain volume tended to be smaller, and subdural space volume was significantly larger, in the CSDH group (p=0.138, p=0.021, respectively). The BVI and SVI were significantly different (p=0.003, p=0.001, respectively). SVI [area under the curve (AUC), 77.3%; p=0.008] was a more reliable predictor of CSDH than BVI (AUC, 68.1%; p=0.001). Bilateral subdural depth (the sum of the subdural depth on both sides) increased linearly with SVI (p<0.0001). Conclusion Subdural space volume was significantly larger in the CSDH group. SVI was a more reliable predictor of CSDH, and bilateral subdural depth was useful for estimating SVI. PMID:27169071
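
    The volume indices defined above are simple ratios of compartment volume to intracranial volume; a sketch with hypothetical volumes in cc:

```python
# Hypothetical volumes: express each compartment as a percentage of ICV
# to adjust for head size, as the BVI/SVI indices above do.
icv = 1450.0                     # intracranial volume, cc (assumed)
brain = 1180.0                   # brain parenchyma volume, cc (assumed)
subdural = 95.0                  # subdural space volume, cc (assumed)
bvi = 100.0 * brain / icv        # brain volume index, % of ICV
svi = 100.0 * subdural / icv     # subdural volume index, % of ICV
print(round(bvi, 1), round(svi, 1))  # → 81.4 6.6
```

    Normalising by ICV is what lets patients with different head sizes be compared on a single ROC curve.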

  8. Information architecture. Volume 2, Part 1: Baseline analysis summary

    SciTech Connect

    1996-12-01

    The Department of Energy (DOE) Information Architecture, Volume 2, Baseline Analysis, is a collaborative and logical next-step effort in the processes required to produce a Departmentwide information architecture. The baseline analysis serves a diverse audience of program management and technical personnel and provides an organized way to examine the Department's existing or de facto information architecture. A companion document to Volume 1, The Foundations, it furnishes the rationale for establishing a Departmentwide information architecture. This volume, consisting of the Baseline Analysis Summary (part 1), Baseline Analysis (part 2), and Reference Data (part 3), is of interest to readers who wish to understand how the Department's current information architecture technologies are employed. The analysis identifies how and where current technologies support business areas, programs, sites, and corporate systems.

  9. Concept Area One Objectives (Rev), Test Items (Rev), and Instructional Events. Economic Analysis Course. Segments 1 - 16.

    ERIC Educational Resources Information Center

    Sterling Inst., Washington, DC. Educational Technology Center.

    A multimedia course in economic analysis was prepared and used in conjunction with the United States Naval Academy. (See ED 043 790 and ED 043 791 for final reports of the project evaluation and the development model.) This report presents the first concept area--basic principles--in 24 segments, of which eight are "enrichment segments." The…

  10. Model-Based Segmentation of Cortical Regions of Interest for Multi-subject Analysis of fMRI Data

    NASA Astrophysics Data System (ADS)

    Engel, Karin; Brechmann, André; Toennies, Klaus

    The high inter-subject variability of human neuroanatomy complicates the analysis of functional imaging data across subjects. We propose a method for the correct segmentation of cortical regions of interest based on the cortical surface. First results on the segmentation of Heschl's gyrus indicate the capability of our approach for correct comparison of functional activations in relation to individual cortical patterns.

  11. Laser power conversion system analysis, volume 2

    NASA Technical Reports Server (NTRS)

    Jones, W. S.; Morgan, L. L.; Forsyth, J. B.; Skratt, J. P.

    1979-01-01

    The orbit-to-ground laser power conversion system analysis investigated the feasibility and cost effectiveness of converting solar energy into laser energy in space, and transmitting the laser energy to earth for conversion to electrical energy. The analysis included space laser systems with electrical outputs on the ground ranging from 100 to 10,000 MW. The space laser power system was shown to be feasible and a viable alternative to the microwave solar power satellite. The narrow laser beam provides many options and alternatives not attainable with a microwave beam.

  12. Breast volume measurement of 248 women using biostereometric analysis.

    PubMed

    Loughry, C W; Sheffer, D B; Price, T E; Lackney, M J; Bartfai, R G; Morek, W M

    1987-10-01

    A study of volumes of the right and left breasts of 248 subjects was undertaken using biostereometric analysis. This measurement technique uses close-range stereophotogrammetry to characterize the shape of the breast and is noncontact, noninvasive, accurate, and rapid with respect to the subject involvement time. Volumes and volumetric differences between breast pairs were compared, using chi-square tests, with handedness, perception of breast size by each subject, age, and menstrual status. No significant relationship was found between the handedness of the subject and the larger breast volume. Several groups of subjects based on age and menstrual status were accurate in their perception of breast size difference. Analysis did not confirm the generally accepted clinical impression of left breast volume dominance. Although a size difference in breast pairs was documented, neither breast predominated. PMID:3659165
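
    The chi-square comparisons used above can be sketched with a 2x2 contingency table; the counts below are invented, and the small resulting statistic mirrors the "no significant relationship" finding for handedness:

```python
# Chi-square test of independence on an invented 2x2 table:
# handedness vs which breast has the larger volume.
#            left larger   right larger
table = [[70, 55],        # right-handed
         [12, 11]]        # left-handed
rows = [sum(r) for r in table]
cols = [sum(c) for c in zip(*table)]
total = sum(rows)
# Sum of (observed - expected)^2 / expected over all cells.
chi2 = sum((table[i][j] - rows[i] * cols[j] / total) ** 2
           / (rows[i] * cols[j] / total)
           for i in range(2) for j in range(2))
print(round(chi2, 3))  # compare with the df=1 critical value, 3.841
```

    A statistic below 3.841 fails to reject independence at the 5% level, i.e. no association between handedness and the larger breast.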

  13. Breast volume measurement of 598 women using biostereometric analysis.

    PubMed

    Loughry, C W; Sheffer, D B; Price, T E; Einsporn, R L; Bartfai, R G; Morek, W M; Meli, N M

    1989-05-01

    A study of the volumes of the right and left breasts of 598 subjects was undertaken using biostereometric analysis. This measurement uses close-range stereophotogrammetry to characterize the shape of the breast, and is noncontact, noninvasive, accurate, and rapid with respect to the subject involvement time. Using chi-square tests, volumes and volumetric differences between breast pairs were compared with handedness, perception of breast size by each subject, age, and menstrual status. No significant relationship was found between the handedness, age, or menstrual status of the subject and the breast volume. Several groups of subjects were accurate in their perception of breast size difference. Analysis did confirm the generally accepted clinical impression of left-breast volume dominance. These results are shown to be consistent with those of a previous study using 248 women. PMID:2729845

  14. Comparative analysis of nonlinear dimensionality reduction techniques for breast MRI segmentation

    SciTech Connect

    Akhbardeh, Alireza; Jacobs, Michael A.

    2012-04-15

    Purpose: Visualization of anatomical structures using radiological imaging methods is an important tool in medicine to differentiate normal from pathological tissue and can generate large amounts of data for a radiologist to read. Integrating these large data sets is difficult and time-consuming. A new approach uses both supervised and unsupervised advanced machine learning techniques to visualize and segment radiological data. This study describes the application of a novel hybrid scheme, based on combining wavelet transform and nonlinear dimensionality reduction (NLDR) methods, to breast magnetic resonance imaging (MRI) data using three well-established NLDR techniques, namely, ISOMAP, local linear embedding (LLE), and diffusion maps (DfM), to perform a comparative performance analysis. Methods: Twenty-five breast lesion subjects were scanned using a 3T scanner. MRI sequences used were T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging. The hybrid scheme consisted of two steps: preprocessing and postprocessing of the data. The preprocessing step was applied for B1 inhomogeneity correction, image registration, and wavelet-based image compression to match and denoise the data. In the postprocessing step, MRI parameters were considered data dimensions and the NLDR-based hybrid approach was applied to integrate the MRI parameters into a single image, termed the embedded image. This was achieved by mapping all pixel intensities from the higher dimension to a lower dimensional (embedded) space. For validation, the authors compared the hybrid NLDR with linear methods of principal component analysis (PCA) and multidimensional scaling (MDS) using synthetic data. For the clinical application, the authors used breast MRI data; comparison was performed using the postcontrast DCE MRI image and by evaluating the congruence of the segmented lesions. 
Results: The NLDR-based hybrid approach was able to define and segment both synthetic and clinical data. In the synthetic data, the authors demonstrated the performance of the NLDR method compared with conventional linear DR methods. The NLDR approach enabled successful segmentation of the structures, whereas, in most cases, PCA and MDS failed. The NLDR approach was able to segment different breast tissue types with a high accuracy and the embedded image of the breast MRI data demonstrated fuzzy boundaries between the different types of breast tissue, i.e., fatty, glandular, and tissue with lesions (>86%). Conclusions: The proposed hybrid NLDR methods were able to segment clinical breast data with a high accuracy and construct an embedded image that visualized the contribution of different radiological parameters.

  15. A de-noising algorithm to improve SNR of segmented gamma scanner for spectrum analysis

    NASA Astrophysics Data System (ADS)

    Li, Huailiang; Tuo, Xianguo; Shi, Rui; Zhang, Jinzhao; Henderson, Mark Julian; Courtois, Jérémie; Yan, Minhao

    2016-05-01

    An improved-threshold, shift-invariant wavelet transform de-noising algorithm for high-resolution gamma-ray spectroscopy is proposed to optimize the threshold function of the wavelet transform and to suppress pseudo-Gibbs artifact fluctuations. This algorithm was applied to a segmented gamma scanning system for large samples, in which high continuum levels caused by Compton scattering are routinely encountered. De-noising of gamma-ray spectra measured by the segmented gamma scanning system was evaluated for the improved, shift-invariant, and traditional wavelet transform algorithms. The improved wavelet transform method significantly enhanced performance in terms of the figure of merit, the root mean square error, the peak area, and the sample attenuation correction in the segmented gamma scanning assays. Spectrum analysis also showed that the gamma energy spectrum can be viewed as a superposition of a low-frequency signal and high-frequency noise. Moreover, the smoothed spectrum is suitable for straightforward automated quantitative analysis.
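
    The mechanics of wavelet-threshold de-noising can be sketched with one level of the Haar transform and soft thresholding on a toy 1-D "spectrum". The paper's algorithm is shift-invariant with an improved threshold function; this sketch shows only the basic transform-threshold-reconstruct loop.

```python
# One-level Haar wavelet de-noising with soft thresholding (toy spectrum).
def haar(sig):
    # Averages (approximation) and half-differences (detail) of pairs.
    a = [(sig[i] + sig[i + 1]) / 2 for i in range(0, len(sig), 2)]
    d = [(sig[i] - sig[i + 1]) / 2 for i in range(0, len(sig), 2)]
    return a, d

def soft(x, t):
    # Soft threshold: shrink toward zero, zero out small coefficients.
    return 0.0 if abs(x) <= t else (x - t if x > 0 else x + t)

noisy = [10, 12, 11, 9, 50, 52, 14, 10]   # peak riding on noisy baseline
approx, detail = haar(noisy)
detail = [soft(d, 1.5) for d in detail]   # suppress small (noise) coefficients
denoised = []
for a, d in zip(approx, detail):          # inverse Haar transform
    denoised += [a + d, a - d]
print(denoised)
```

    Shift-invariant variants average this procedure over circular shifts of the input, which is what suppresses the pseudo-Gibbs fluctuations near sharp peaks.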

  16. FEM correlation and shock analysis of a VNC MEMS mirror segment

    NASA Astrophysics Data System (ADS)

    Aguayo, Eduardo J.; Lyon, Richard; Helmbrecht, Michael; Khomusi, Sausan

    2014-08-01

    Microelectromechanical systems (MEMS) are becoming more prevalent in today's advanced space technologies. The Visible Nulling Coronagraph (VNC) instrument, being developed at the NASA Goddard Space Flight Center, uses a MEMS mirror to correct wavefront errors. This MEMS mirror, the Multiple Mirror Array (MMA), is a key component that will enable the VNC instrument to detect Jupiter-size and ultimately Earth-size exoplanets. Like other MEMS devices, the MMA faces several challenges associated with spaceflight. Therefore, Finite Element Analysis (FEA) is being used to predict the behavior of a single MMA segment under different spaceflight-related environments. Finite Element Analysis results are used to guide the MMA design and ensure its survival during launch and mission operations. A Finite Element Model (FEM) of the MMA has been developed using COMSOL. This model has been correlated to static loading on test specimens. The correlation was performed in several steps: simple beam models were correlated initially, followed by increasingly complex and higher-fidelity models of the MMA mirror segment. Subsequently, the model has been used to predict the dynamic behavior and stresses of the MMA segment in a representative spaceflight mechanical shock environment. The results of the correlation and the stresses associated with a shock event are presented herein.

  17. Scanning and transmission electron microscopic analysis of ampullary segment of oviduct during estrous cycle in caprines.

    PubMed

    Sharma, R K; Singh, R; Bhardwaj, J K

    2015-01-01

    The ampullary segment of the mammalian oviduct provides a suitable milieu for fertilization and for development of the zygote before implantation into the uterus. In the present study, therefore, the cyclic changes in the morphology of the ampullary segment of the goat oviduct were studied during the follicular and luteal phases using scanning and transmission electron microscopy. Topographical analysis revealed a uniformly ciliated ampullary epithelium, concealing the apical processes of non-ciliated cells along with bulbous secretory cells, during the follicular phase. The luteal phase was marked by a decline in the number of ciliated cells and an increased occurrence of secretory cells. Ultrastructural analysis demonstrated the presence of an indented nuclear membrane, supranuclear cytoplasm, secretory granules, rough endoplasmic reticulum, large lipid droplets, apically located glycogen masses, and oval mitochondria in the secretory cells. During the follicular phase, the ciliated cells were characterized by elongated nuclei, abundant smooth endoplasmic reticulum, and oval or spherical mitochondria with crescentic cristae. In the luteal phase, however, the secretory cells possessed a highly indented nucleus with diffuse electron-dense chromatin, a hyaline nucleosol, and an increased number of lipid droplets, while the ciliated cells had numerous fibrous granules and basal bodies. The parallel use of scanning and transmission electron microscopy enabled us to examine the cyclic, hormone-dependent changes occurring in the topography and fine structure of the epithelium of the ampullary segment during the different reproductive phases; these observations will be of great help in understanding the major bottlenecks that limit the success rates of in vitro fertilization and embryo transfer technology. PMID:25491952

  18. Multiwell experiment: reservoir modeling analysis, Volume II

    SciTech Connect

    Horton, A.I.

    1985-05-01

    This report updates an ongoing analysis by reservoir modelers at the Morgantown Energy Technology Center (METC) of well test data from the Department of Energy's Multiwell Experiment (MWX). Results of previous efforts were presented in a recent METC Technical Note (Horton 1985). The results in this report pertain to the poststimulation well tests of Zones 3 and 4 of the Paludal Sandstone Interval and the prestimulation well tests of the Red and Yellow Zones of the Coastal Sandstone Interval. The following results were obtained using a reservoir model and history-matching procedures: (1) Post-minifracture analysis indicated that the minifracture stimulation of the Paludal Interval did not produce an induced fracture and that extreme formation damage occurred, since a 65% permeability reduction around the wellbore was estimated; the minifracture had been designed to extend 200 to 300 feet on each side of the wellbore. (2) Post-full-scale-stimulation analysis for the Paludal Interval also showed that extreme formation damage occurred during the stimulation, as indicated by a 75% permeability reduction 20 feet on each side of the induced fracture; an induced fracture half-length of 100 feet was determined, compared with a designed fracture half-length of 500 to 600 feet. (3) Analysis of prestimulation well test data from the Coastal Interval agreed with previous well-to-well interference tests showing that extreme permeability anisotropy was not a factor for this zone; this lack of permeability anisotropy was also verified by a nitrogen injection test performed on the Coastal Red and Yellow Zones. 8 refs., 7 figs., 2 tabs.

  19. Phylogenetic and recombination analysis of rice black-streaked dwarf virus segment 9 in China.

    PubMed

    Zhou, Yu; Weng, Jian-Feng; Chen, Yan-Ping; Liu, Chang-Lin; Han, Xiao-Hua; Hao, Zhuan-Fang; Li, Ming-Shun; Yong, Hong-Jun; Zhang, De-Gui; Zhang, Shi-Huang; Li, Xin-Hai

    2015-04-01

    Rice black-streaked dwarf virus (RBSDV) is an economically important virus that causes maize rough dwarf disease and rice black-streaked dwarf disease in East Asia. To study RBSDV variation and recombination, we examined the segment 9 (S9) sequences of 49 RBSDV isolates from maize and rice in China. Three S9 recombinants were detected in Baoding, Jinan, and Jining, China. Phylogenetic analysis showed that Chinese RBSDV isolates could be classified into two groups based on their S9 sequences, regardless of host or geographical origin. Further analysis suggested that S9 has undergone negative and purifying selection. PMID:25633210

  20. Sampling and Electrophoretic Analysis of Segmented Flow Streams Using Virtual Walls in a Microfluidic Device

    PubMed Central

    Roman, Gregory T.; Wang, Meng; Shultz, Kristin N.; Jennings, Colin; Kennedy, Robert T.

    2008-01-01

    A method for sampling and electrophoretic analysis of aqueous plugs segmented in a stream of immiscible oil is described. In the method, an aqueous buffer and an oil stream flow parallel to each other to form a stable virtual wall in a microfabricated K-shaped fluidic element. As aqueous sample plugs in the oil stream make contact with the virtual wall, coalescence occurs and sample is electrokinetically transferred to the aqueous stream. Using this virtual wall, two methods of injection for channel electrophoresis were developed. In the first, termed the discrete injector, discrete sample zones flow past the inlet of an electrophoresis channel and a portion is injected by electroosmotic flow. With this approach, at least 800 plugs could be injected without interruption from a continuous segmented stream, with 5.1% RSD in peak area. This method generated up to 1,050 theoretical plates, although analysis of the injector suggested that improvements may be possible. In the second method, termed the desegmenting injector, aqueous plugs are sampled in a way that allows them to form a continuous stream that is directed to a microfluidic cross-style injector. This method does not analyze each individual plug but instead allows periodic sampling of a high-frequency stream of plugs. Using this system, at least 1,000 injections could be performed sequentially, with 5.8% RSD in peak area and 53,500 theoretical plates. The method was demonstrated to be useful for monitoring concentration changes from a sampling device with 10 s temporal resolution. Aqueous plugs in segmented flows have been applied to many different chemical manipulations, including synthesis, assays, sample processing, and sampling. Nearly all such studies have used optical methods to analyze plug contents. This method offers a new way to analyze such samples and should enable new applications of segmented flow systems. PMID:18831564

  1. Multivariate statistical analysis as a tool for the segmentation of 3D spectral data.

    PubMed

    Lucas, G; Burdet, P; Cantoni, M; Hébert, C

    2013-01-01

    Acquisition of three-dimensional (3D) spectral data is nowadays common with many different microanalytical techniques. In order to proceed to 3D reconstruction, data processing is necessary not only to deal with noisy acquisitions but also to segment the data in terms of chemical composition. In this article, we demonstrate the value of multivariate statistical analysis (MSA) methods for this purpose, which allow fast and reliable results. Using scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX) coupled with a focused ion beam (FIB), a stack of spectrum images was acquired on a sample produced by laser welding of a nickel-titanium wire and a stainless steel wire, presenting a complex microstructure. These data were analyzed using principal component analysis (PCA) and factor rotations. PCA significantly improves the overall quality of the data but produces abstract components. Here it is shown that rotated components can be used, without prior knowledge of the sample, to help interpret the data, quickly obtaining qualitative mappings representative of the elements or compounds found in the material. Such abundance maps can then be used to plot scatter diagrams and interactively identify the different domains present by defining clusters of voxels with similar compositions. The identified voxels are advantageously overlaid on higher-resolution secondary electron (SE) images in order to refine the segmentation. The 3D reconstruction can then be performed with available commercial software on the basis of the provided segmentation. To assess the quality of the segmentation, the results were compared to an EDX quantification performed on the same data. PMID:24035679
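A toy version of the PCA step can be sketched as follows. The synthetic spectra, peak positions, and the sign-threshold "clustering" are all invented for illustration and stand in for the interactive scatter-diagram clustering the authors describe:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stack: 200 voxels x 16 spectral channels, two "phases"
# with Gaussian peaks at hypothetical channel positions 4 and 11.
channels = np.arange(16)
peak_a = np.exp(-0.5 * ((channels - 4) / 1.5) ** 2)
peak_b = np.exp(-0.5 * ((channels - 11) / 1.5) ** 2)
labels = rng.integers(0, 2, 200)
spectra = np.where(labels[:, None] == 0, peak_a, peak_b)
spectra = spectra + 0.05 * rng.standard_normal((200, 16))

# PCA by SVD of the mean-centred data matrix.
centred = spectra - spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ Vt[:2].T          # project voxels onto first two components

# Segment by the sign of the first score: a crude stand-in for defining
# clusters of voxels with similar compositions in a scatter diagram.
predicted = (scores[:, 0] > 0).astype(int)
agreement = max((predicted == labels).mean(), (predicted != labels).mean())
```

Because the two synthetic phases are well separated relative to the noise, `agreement` with the ground-truth labels comes out close to 1.0.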

  2. A common neural substrate for the analysis of pitch and duration pattern in segmented sound?

    PubMed

    Griffiths, T D; Johnsrude, I; Dean, J L; Green, G G

    1999-12-16

    The analysis of patterns of pitch and duration over time in natural segmented sounds is fundamentally relevant to the analysis of speech, environmental sounds and music. The neural basis for differences between the processing of pitch and duration sequences is not established. We carried out a PET activation study on nine right-handed musically naive subjects, in order to examine the basis for early pitch- and duration-sequence analysis. The input stimuli and output task were closely controlled. We demonstrated a strikingly similar bilateral neural network for both types of analysis. The network is right lateralised and includes the cerebellum, posterior superior temporal cortices, and inferior frontal cortices. These data are consistent with a common initial mechanism for the analysis of pitch and duration patterns within sequences. PMID:10716217

  3. AN ANALYSIS OF THE SEGMENTATION THRESHOLD USED IN AXIAL-SHEAR STRAIN ELASTOGRAPHY

    PubMed Central

    Thittai, Arun K.; Xia, Rongmin

    2014-01-01

    Axial-shear strain elastography was introduced recently to image the bonding characteristics of the tumor-host tissue boundary. The image depicting the axial-shear strain distribution in a tissue under axial compression was termed the axial-shear strain elastogram (ASSE). It has been demonstrated through simulation, tissue-mimicking phantom experiments, and retrospective analysis of in vivo breast lesion data that metrics quantifying the pattern of the axial-shear strain distribution on ASSE can be used as features for classifying the lesion boundary condition as loosely bonded or firmly bonded. Consequently, features from ASSE have been shown to have potential for non-invasive classification of breast lesions as benign versus malignant. Although there appears to be broad concurrence in the results reported by different groups, important details pertaining to the appropriate segmentation threshold needed for (1) displaying the ASSE as a color overlay on top of the corresponding axial strain elastogram (ASE) and/or sonogram for feature visualization and (2) ASSE feature extraction have not yet been fully addressed. In this study, we utilize ASSE from tissue-mimicking phantom experiments (with loosely-bonded and firmly-bonded inclusions) and freehand-acquired in vivo breast lesion data (7 benign and 9 malignant) to analyze the effect of the segmentation threshold on the ASSE feature value, specifically the recently introduced “fill-in” feature. We varied the segmentation threshold from 20% to 70% of the maximum ASSE value for each frame of the acquisition cine-loop of every data set and computed the number of ASSE pixels within the lesion greater than or equal to this threshold value. If at least 40% of the pixels within the lesion area crossed this segmentation threshold, the ASSE frame was considered to demonstrate a “fill-in,” indicating a loosely-bonded lesion boundary condition (suggestive of a benign lesion).
Otherwise, the ASSE frame was considered not to demonstrate a “fill-in,” indicating a firmly-bonded lesion boundary condition (suggestive of a malignant lesion). The results demonstrate that for the in vivo breast lesion data the appropriate range for the segmentation threshold value appears to be 40% to 60%. For a segmentation threshold within this range (for example, at 50%), all of the analyzed breast lesion cases could be correctly classified as benign or malignant based on the percentage of frames within the acquisition cine-loop demonstrating a “fill-in.” PMID:25173068
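The fill-in rule described above (pixels at or above a fraction of the frame maximum, with the lesion counted as filled when at least 40% of its pixels qualify) is simple enough to sketch directly; the frames and lesion mask here are toy data, not ASSE output:

```python
import numpy as np

def frame_has_fill_in(asse_frame, lesion_mask, seg_thresh=0.5, fill_frac=0.4):
    """One ASSE frame shows 'fill-in' when at least fill_frac of the lesion
    pixels are >= seg_thresh times the frame's maximum axial-shear strain."""
    cutoff = seg_thresh * asse_frame.max()
    return (asse_frame[lesion_mask] >= cutoff).mean() >= fill_frac

# Toy frames: high strain inside the lesion (loosely bonded case)
# versus high strain only outside it (firmly bonded case).
lesion = np.zeros((8, 8), bool); lesion[2:6, 2:6] = True
loose = np.zeros((8, 8)); loose[2:6, 2:6] = 1.0
firm = np.zeros((8, 8)); firm[0, 0] = 1.0
```

Per the abstract, the per-case decision would then count the fraction of frames in the cine-loop for which this function returns true.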

  4. Method 349.0 Determination of Ammonia in Estuarine and Coastal Waters by Gas Segmented Continuous Flow Colorimetric Analysis

    EPA Science Inventory

    This method provides a procedure for the determination of ammonia in estuarine and coastal waters. The method is based upon the indophenol reaction, here adapted to automated gas-segmented continuous flow analysis.

  5. Bayesian time series analysis of segments of the Rocky Mountain trumpeter swan population

    USGS Publications Warehouse

    Wright, Christopher K.; Sojda, Richard S.; Goodman, Daniel

    2002-01-01

    A Bayesian time series analysis technique, the dynamic linear model, was used to analyze counts of Trumpeter Swans (Cygnus buccinator) summering in Idaho, Montana, and Wyoming from 1931 to 2000. For the Yellowstone National Park segment of white birds (sub-adults and adults combined) the estimated probability of a positive growth rate is 0.01. The estimated probability of achieving the Subcommittee on Rocky Mountain Trumpeter Swans 2002 population goal of 40 white birds for the Yellowstone segment is less than 0.01. Outside of Yellowstone National Park, Wyoming white birds are estimated to have a 0.79 probability of a positive growth rate with a 0.05 probability of achieving the 2002 objective of 120 white birds. In the Centennial Valley in southwest Montana, results indicate a probability of 0.87 that the white bird population is growing at a positive rate, albeit with considerable uncertainty. The estimated probability of achieving the 2002 Centennial Valley objective of 160 white birds is 0.14 but under an alternative model falls to 0.04. The estimated probability that the Targhee National Forest segment of white birds has a positive growth rate is 0.03. In Idaho outside of the Targhee National Forest, white birds are estimated to have a 0.97 probability of a positive growth rate with a 0.18 probability of attaining the 2002 goal of 150 white birds.
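The abstract does not specify the authors' dynamic linear model. A minimal local-level DLM, filtered with the standard Kalman recursions and entirely invented variances, illustrates the machinery such an analysis rests on:

```python
def local_level_filter(y, q=25.0, r=100.0, m0=0.0, c0=1e6):
    """Kalman filter for the local-level DLM
        y_t = mu_t + v_t,   v_t ~ N(0, r)
        mu_t = mu_{t-1} + w_t,   w_t ~ N(0, q)
    q, r, m0, c0 are illustrative values, not estimates from the swan data."""
    m, c = m0, c0
    means = []
    for obs in y:
        c_pred = c + q                    # predict: evolution adds variance
        k = c_pred / (c_pred + r)         # Kalman gain
        m = m + k * (obs - m)             # update filtered mean
        c = (1 - k) * c_pred              # update filtered variance
        means.append(m)
    return means

# A flat series of simulated counts: the filtered mean settles on the level.
counts = [100.0] * 20
means = local_level_filter(counts)
```

In a full Bayesian treatment, growth-rate probabilities like those quoted above would come from the posterior over a trend component rather than from this filtered level alone.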

  6. Advanced finite element analysis of L4-L5 implanted spine segment

    NASA Astrophysics Data System (ADS)

    Pawlikowski, Marek; Domański, Janusz; Suchocki, Cyprian

    2015-09-01

    In this paper, a finite element (FE) analysis of an implanted lumbar spine segment is presented. The segment model consists of two lumbar vertebrae, L4 and L5, and the prosthesis. The model of the intervertebral disc prosthesis consists of two metallic plates and a polyurethane core. Bone tissue is modelled as a linear viscoelastic material. The prosthesis core is made of a polyurethane nanocomposite and is modelled as a non-linear viscoelastic material. The constitutive law of the core, derived in one of the previous papers, is implemented into the FE software Abaqus® by means of the user-supplied procedure UMAT. The metallic plates are elastic. The most important parts of the paper include the description of the geometrical and numerical modelling of the prosthesis, the mathematical derivation of the stiffness tensor and the Kirchhoff stress, and the implementation of the constitutive model of the polyurethane core into the Abaqus® software. Two load cases were considered: compression and stress relaxation under constant displacement. The goal of the paper is to numerically validate the previously formulated constitutive law and to perform advanced FE analyses of the implanted L4-L5 spine segment in which a non-standard constitutive law for one of the model materials, the prosthesis core, is implemented.
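The paper's non-linear polyurethane law is derived elsewhere and is not reproduced in the abstract. As a hedged illustration of the second load case (stress relaxation under constant displacement), here is the closed-form response of a one-term Prony-series linear viscoelastic solid, with made-up moduli and relaxation time:

```python
import math

def relaxation_stress(strain, times, e_inf=5.0, e_1=3.0, tau=2.0):
    """Stress relaxation of a one-term Prony-series linear viscoelastic solid
    held at constant strain (all material constants are illustrative):
        sigma(t) = (E_inf + E_1 * exp(-t / tau)) * strain
    The stress decays from (E_inf + E_1)*strain toward E_inf*strain."""
    return [(e_inf + e_1 * math.exp(-t / tau)) * strain for t in times]

# Hold 1% strain and sample the decaying stress at three times.
stresses = relaxation_stress(0.01, [0.0, 2.0, 20.0])
```

An actual UMAT implements the incremental form of such a law (updating internal state variables each time step) rather than this closed-form single-step expression.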

  7. Tissue color image segmentation and analysis for automated diagnostics of adenocarcinoma of the lung

    NASA Astrophysics Data System (ADS)

    Sammouda, Mohamed; Niki, Noboru; Niki, Toshiro; Yamaguchi, Naohito

    2001-07-01

    Designing and developing computer-assisted image processing techniques to help doctors improve their diagnoses has received considerable interest over the past years. In this paper, we present a method for segmentation and analysis of lung tissue that can assist in the diagnosis of adenocarcinoma of the lung. The segmentation problem is formulated as the minimization of an energy function analogous to that of a Hopfield Neural Network (HNN) for optimization. We modify the HNN to reach a state close to the global minimum within a pre-specified convergence time. The energy function is constructed with two terms: a cost term, the sum of squared errors, and a temporary noise term added to the network as an excitation to escape certain local minima and approach the global minimum. Each lung color image is represented in the RGB and HSV color spaces, and the segmentation results are presented comparatively. Furthermore, the nuclei are automatically extracted based on a green-channel histogram threshold. The radius of each nucleus is then computed as that of the maximum circle drawable inside the object. Finally, all nuclei of abnormal size are extracted, and their morphology is drawn automatically on the raw tissue image. These results can provide pathologists with more accurate quantitative information that can greatly help the final decision.
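The maximum-drawable-circle radius mentioned above can be computed by brute force on a binary nucleus mask: the best centre is the object pixel farthest from any background pixel. The mask here is synthetic:

```python
import numpy as np

def max_inscribed_radius(mask):
    """Radius of the largest circle drawable inside a binary object,
    by brute force (fine for the small nucleus masks assumed here)."""
    obj = np.argwhere(mask)
    bg = np.argwhere(~mask)
    # For each object pixel, distance to the nearest background pixel;
    # the maximum of these is the inscribed-circle radius.
    d = np.sqrt(((obj[:, None, :] - bg[None, :, :]) ** 2).sum(-1))
    return d.min(axis=1).max()

nucleus = np.zeros((9, 9), bool)
nucleus[2:7, 2:7] = True   # a 5x5 square "nucleus"
```

For large images a distance transform would replace the quadratic pairwise computation, but the definition is the same.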

  8. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    NASA Astrophysics Data System (ADS)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for imaging cancer in vivo. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, particularly considering the development of patient-personalized radiotherapy treatment with non-homogeneous dose delivery. Different image processing methods have been developed to define the MTV. The proposed PET segmentation strategies have mostly been validated under ideal conditions (e.g., in spherical objects with uniform radioactivity concentration), whereas the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: (1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV that is feasible in clinical practice, and (2) to develop a strategy for obtaining anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of the MTV with a k-means clustering algorithm for the estimation of the background. The method is based on parameters that are always available in clinical studies and was calibrated using the NEMA IQ phantom. The method was validated under both ideal conditions (e.g., spherical objects with uniform radioactivity concentration) and non-ideal conditions (e.g., non-spherical objects with non-uniform radioactivity concentration). The strategy for obtaining a phantom with synthetic realistic lesions (e.g., with irregular shapes and non-homogeneous uptake) consisted of the combined use of commercially available anthropomorphic phantoms and irregular molds generated using 3D-printer technology and filled with a radioactive chromatic alginate.
The proposed segmentation algorithm was feasible in a clinical context and showed good accuracy under both ideal and realistic conditions.
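A rough sketch of the combination described above, a threshold-based MTV definition with a clustering-based background estimate. The 2-means-on-intensities background step and the 40% fraction are assumptions for illustration, not the authors' calibrated algorithm:

```python
import numpy as np

def segment_mtv(img, frac=0.4, iters=20):
    """Illustrative adaptive threshold: estimate the background level by a
    1-D 2-means clustering of voxel intensities, then keep voxels above
    bg + frac * (max - bg). `frac` is a guess, not a calibrated value."""
    lo, hi = img.min(), img.max()
    for _ in range(iters):                      # 2-means on intensities
        assign_hi = img > (lo + hi) / 2
        hi = img[assign_hi].mean()
        lo = img[~assign_hi].mean()
    threshold = lo + frac * (img.max() - lo)
    return img >= threshold

# Toy "phantom": uniform background with one hot 4x4 lesion.
phantom = np.full((16, 16), 1.0)
phantom[5:9, 5:9] = 10.0
mask = segment_mtv(phantom)
```

Making the threshold relative to an estimated background, rather than to the maximum alone, is what lets such methods cope with the high background uptake of clinical images.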

  9. Automated system for ST segment and arrhythmia analysis in exercise radionuclide ventriculography

    SciTech Connect

    Hsia, P.W.; Jenkins, J.M.; Shimoni, Y.; Gage, K.P.; Santinga, J.T.; Pitt, B.

    1986-06-01

    A computer-based system for interpretation of the electrocardiogram (ECG) in the diagnosis of arrhythmia and ST segment abnormality during exercise is presented. The system was designed for inclusion in a gamma camera so that the ECG diagnosis could be combined with the diagnostic capability of radionuclide ventriculography. Digitized data are analyzed in a beat-by-beat mode, and a contextual diagnosis of the underlying rhythm is provided. Each beat is assigned a beat code based on a combination of waveform analysis and RR-interval measurement. The waveform analysis employs a new correlation coefficient formula that corrects for baseline wander. Selective signal averaging, in which only normal beats are included, is performed for an improved signal-to-noise ratio prior to ST segment analysis. Template generation, R-wave detection, QRS window sizing, baseline correction, and continuous updating of heart rate have all been automated. ST level and slope measurements are computed on the signal-averaged data. Computer arrhythmia analysis of 13 passages of abnormal rhythm was correct for 98.4 percent of all beats. Twenty-five passages of exercise data, 1-5 min in length, were evaluated by a cardiologist; the system agreed in 95.8 percent of ST level measurements and 91.7 percent of ST slope measurements.
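The paper's baseline-corrected correlation formula is not reproduced in the abstract. One plausible reading, removing a least-squares linear baseline from each waveform before computing an ordinary Pearson correlation, can be sketched as:

```python
import numpy as np

def baseline_corrected_corr(beat, template):
    """Pearson correlation after subtracting a least-squares linear baseline
    from each waveform: one plausible way to correct for baseline wander,
    not necessarily the paper's exact formula."""
    t = np.arange(len(beat))

    def detrend(x):
        slope, intercept = np.polyfit(t, x, 1)
        return x - (slope * t + intercept)

    a = detrend(np.asarray(beat, float))
    b = detrend(np.asarray(template, float))
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# A beat identical to the template but riding on a linear baseline drift
# still correlates almost perfectly after the correction.
template = np.sin(np.linspace(0, 2 * np.pi, 50))
beat = template + np.linspace(0.0, 5.0, 50)
r = baseline_corrected_corr(beat, template)
```

Without the detrending step, the drift term would drag the raw correlation well below 1 and could push a normal beat past a template-matching threshold.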

  10. Improved disparity map analysis through the fusion of monocular image segmentations

    NASA Technical Reports Server (NTRS)

    Perlant, Frederic P.; Mckeown, David M.

    1991-01-01

    The focus is to examine how estimates of three-dimensional scene structure, as encoded in a scene disparity map, can be improved by analysis of the original monocular imagery. Surface illumination information is exploited by segmenting the monocular image into fine surface patches of nearly homogeneous intensity, which are used to remove mismatches generated during stereo matching. These patches guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely to physical surfaces in the scene. The technique is largely independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented for a complex urban scene containing various man-made and natural features. This scene poses a variety of problems, including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. Improvements due to monocular fusion with a set of different region-based image segmentations are demonstrated. The generality of this approach to stereo analysis and its utility in the development of general three-dimensional scene interpretation systems are also discussed.

  11. Profiling the different needs and expectations of patients for population-based medicine: a case study using segmentation analysis

    PubMed Central

    2012-01-01

    Background This study illustrates an evidence-based method for the segmentation analysis of patients that could greatly improve the approach to population-based medicine, by filling a gap in the empirical analysis of this topic. Segmentation facilitates individual patient care in the context of the culture, health status, and the health needs of the entire population to which that patient belongs. Because many health systems are engaged in developing better chronic care management initiatives, patient profiles are critical to understanding whether some patients can move toward effective self-management and can play a central role in determining their own care, which fosters a sense of responsibility for their own health. A review of the literature on patient segmentation provided the background for this research. Method First, we conducted a literature review on patient satisfaction and segmentation to build a survey. Then, we surveyed 3,461 users of outpatient services. The key structures on which the subjects’ perception of outpatient services was based were extrapolated using principal component factor analysis with varimax rotation. After the factor analysis, segmentation was performed through cluster analysis to better analyze the influence of individual attitudes on the results. Results Four segments were identified through factor and cluster analysis: the “unpretentious,” the “informed and supported,” the “experts” and the “advanced” patients. Their policy and managerial implications are outlined. Conclusions With this research, we provide the following: – a method for profiling patients based on common patient satisfaction surveys that is easily replicable in all health systems and contexts; – a proposal for segments based on the results of a broad-based analysis conducted in the Italian National Health System (INHS). Segments represent profiles of patients requiring different strategies for delivering health services. 
Their knowledge and analysis might support an effort to build an effective population-based medicine approach. PMID:23256543

  12. A Comparison of Amplitude-Based and Phase-Based Positron Emission Tomography Gating Algorithms for Segmentation of Internal Target Volumes of Tumors Subject to Respiratory Motion

    SciTech Connect

    Jani, Shyam S.; Robinson, Clifford G.; Dahlbom, Magnus; White, Benjamin M.; Thomas, David H.; Gaudio, Sergio; Low, Daniel A.; Lamb, James M.

    2013-11-01

    Purpose: To quantitatively compare the accuracy of tumor volume segmentation in amplitude-based and phase-based respiratory gating algorithms in respiratory-correlated positron emission tomography (PET). Methods and Materials: List-mode fluorodeoxyglucose-PET data was acquired for 10 patients with a total of 12 fluorodeoxyglucose-avid tumors and 9 lymph nodes. Additionally, a phantom experiment was performed in which 4 plastic butyrate spheres with inner diameters ranging from 1 to 4 cm were imaged as they underwent 1-dimensional motion based on 2 measured patient breathing trajectories. PET list-mode data were gated into 8 bins using 2 amplitude-based (equal amplitude bins [A1] and equal counts per bin [A2]) and 2 temporal phase-based gating algorithms. Gated images were segmented using a commercially available gradient-based technique and a fixed 40% threshold of maximum uptake. Internal target volumes (ITVs) were generated by taking the union of all 8 contours per gated image. Segmented phantom ITVs were compared with their respective ground-truth ITVs, defined as the volume subtended by the tumor model positions covering 99% of breathing amplitude. Superior-inferior distances between sphere centroids in the end-inhale and end-exhale phases were also calculated. Results: Tumor ITVs from amplitude-based methods were significantly larger than those from temporal-based techniques (P=.002). For lymph nodes, A2 resulted in ITVs that were significantly larger than either of the temporal-based techniques (P<.0323). A1 produced the largest and most accurate ITVs for spheres with diameters of ≥2 cm (P=.002). No significant difference was shown between algorithms in the 1-cm sphere data set. For phantom spheres, amplitude-based methods recovered an average of 9.5% more motion displacement than temporal-based methods under regular breathing conditions and an average of 45.7% more in the presence of baseline drift (P<.001). 
Conclusions: Target volumes in images generated from amplitude-based gating are larger and more accurate, at levels that are potentially clinically significant, compared with those from temporal phase-based gating.
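The two gating families compared above can be illustrated on a toy breathing trace. The trace, the bin count of 8, and the single-cycle phase assumption are all illustrative, not the study's acquisition parameters:

```python
import numpy as np

def amplitude_bins(trace, n_bins=8):
    """Equal-amplitude gating: slice the displacement range into n_bins
    equal-height bands and assign each sample to a band."""
    edges = np.linspace(trace.min(), trace.max(), n_bins + 1)
    return np.clip(np.digitize(trace, edges) - 1, 0, n_bins - 1)

def phase_bins(trace, n_bins=8):
    """Temporal phase gating: divide the breathing cycle evenly in time.
    A single cycle is assumed here, so phase is the elapsed-time fraction."""
    phase = np.arange(len(trace)) / len(trace)
    return (phase * n_bins).astype(int)

# One breathing cycle with baseline drift: the situation in which the
# abstract reports the largest difference between the two schemes.
t = np.linspace(0, 1, 200, endpoint=False)
trace = np.cos(2 * np.pi * t) + 0.5 * t
```

With drift present, amplitude binning keeps grouping samples by actual displacement while phase binning mixes displacements within a bin, which is consistent with the abstract's finding that amplitude-based gating recovers more of the true motion.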

  13. Segmentation of Unstructured Datasets

    NASA Technical Reports Server (NTRS)

    Bhat, Smitha

    1996-01-01

    Datasets generated by computer simulations and experiments in Computational Fluid Dynamics tend to be extremely large and complex. It is difficult to visualize these datasets using standard techniques like Volume Rendering and Ray Casting. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This thesis explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and from Finite Element Analysis.
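Threshold-plus-connected-components is one basic way to extract coherent regions from a scalar unstructured grid, sketched here with union-find over an arbitrary node adjacency list (a simplification of the thesis's algorithms, which are not detailed in the abstract):

```python
def extract_regions(values, edges, threshold):
    """Segment an unstructured grid: keep nodes whose scalar value exceeds
    `threshold`, then group kept nodes into connected components using
    union-find over the grid's edge list."""
    keep = {i for i, v in enumerate(values) if v > threshold}
    parent = {i: i for i in keep}

    def find(i):
        # Path-halving union-find root lookup.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for a, b in edges:
        if a in keep and b in keep:
            parent[find(a)] = find(b)

    regions = {}
    for i in keep:
        regions.setdefault(find(i), set()).add(i)
    return list(regions.values())

# Tiny 1-D "grid": nodes 1-2 form one hot region, node 4 another.
values = [0.0, 9.0, 9.0, 0.0, 9.0]
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
regions = extract_regions(values, edges, threshold=5.0)
```

Because only the edge list encodes connectivity, the same routine works unchanged on structured and unstructured meshes alike.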

  14. Texture analysis improves level set segmentation of the anterior abdominal wall

    SciTech Connect

    Xu, Zhoubing; Allen, Wade M.; Baucom, Rebeccah B.; Poulose, Benjamin K.; Landman, Bennett A.

    2013-12-15

    Purpose: The treatment of ventral hernias (VH) has been a challenging problem for medical care. Repair of these hernias is fraught with failure; recurrence rates ranging from 24% to 43% have been reported, even with the use of biocompatible mesh. Currently, computed tomography (CT) is used to guide intervention through expert, but qualitative, clinical judgments; notably, quantitative metrics based on image processing are not used. The authors propose that image segmentation methods to capture the three-dimensional structure of the abdominal wall and its abnormalities will provide a foundation on which to measure geometric properties of hernias and surrounding tissues and, therefore, to optimize intervention. Methods: In this study with 20 clinically acquired CT scans on postoperative patients, the authors demonstrated a novel approach to geometric classification of the abdominal wall. The authors’ approach uses a texture analysis based on Gabor filters to extract feature vectors and follows a fuzzy c-means clustering method to estimate voxelwise probability memberships for eight clusters. The memberships estimated from the texture analysis are helpful to identify anatomical structures with inhomogeneous intensities. The membership was used to guide the level set evolution, as well as to derive an initial start close to the abdominal wall. Results: Segmentation results on abdominal walls were both quantitatively and qualitatively validated with surface errors based on manually labeled ground truth. Using texture, mean surface errors for the outer surface of the abdominal wall were less than 2 mm, with 91% of the outer surface less than 5 mm away from the manual tracings; errors were significantly greater (2–5 mm) for methods that did not use the texture. Conclusions: The authors’ approach establishes a baseline for characterizing the abdominal wall for improving VH care.
Inherent texture patterns in CT scans are helpful to the tissue classification, and texture analysis can improve the level set segmentation around the abdominal region.

  15. Texture analysis improves level set segmentation of the anterior abdominal wall

    PubMed Central

    Xu, Zhoubing; Allen, Wade M.; Baucom, Rebeccah B.; Poulose, Benjamin K.; Landman, Bennett A.

    2013-01-01

Purpose: The treatment of ventral hernias (VH) has been a challenging problem for medical care. Repair of these hernias is fraught with failure; recurrence rates ranging from 24% to 43% have been reported, even with the use of biocompatible mesh. Currently, computed tomography (CT) is used to guide intervention through expert, but qualitative, clinical judgments; notably, quantitative metrics based on image processing are not used. The authors propose that image segmentation methods to capture the three-dimensional structure of the abdominal wall and its abnormalities will provide a foundation on which to measure geometric properties of hernias and surrounding tissues and, therefore, to optimize intervention. Methods: In this study of 20 clinically acquired CT scans of postoperative patients, the authors demonstrated a novel approach to geometric classification of the abdominal wall. The approach uses texture analysis based on Gabor filters to extract feature vectors, followed by fuzzy c-means clustering to estimate voxelwise probability memberships for eight clusters. The memberships estimated from the texture analysis help identify anatomical structures with inhomogeneous intensities, and were used both to guide the level set evolution and to derive an initial contour close to the abdominal wall. Results: Segmentation results on abdominal walls were quantitatively and qualitatively validated with surface errors based on manually labeled ground truth. Using texture, mean surface errors for the outer surface of the abdominal wall were less than 2 mm, with 91% of the outer surface less than 5 mm away from the manual tracings; errors were significantly greater (2–5 mm) for methods that did not use texture. Conclusions: The authors' approach establishes a baseline for characterizing the abdominal wall for improving VH care. Inherent texture patterns in CT scans aid tissue classification, and texture analysis can improve level set segmentation around the abdominal region. PMID:24320512

  16. An automated target recognition technique for image segmentation and scene analysis

    SciTech Connect

    Baumgart, C.W.; Ciarcia, C.A.

    1994-02-01

Automated target recognition (ATR) software has been designed to perform image segmentation and scene analysis. Specifically, this software was developed as a package for the Army's Minefield Reconnaissance and Detector (MIRADOR) program. MIRADOR is an on/off-road, remote-controlled, multi-sensor system designed to detect buried and surface-emplaced metallic and non-metallic anti-tank mines. The basic requirements for this ATR software were: (1) an ability to separate target objects from the background in low S/N conditions; (2) an ability to handle a relatively high dynamic range in imaging light levels; (3) the ability to compensate for or remove light-source effects such as shadows; and (4) the ability to identify target objects as mines. The image segmentation and target evaluation were performed using an integrated, parallel-processing approach. Three basic techniques (texture analysis, edge enhancement, and contrast enhancement) were used collectively to extract all potential mine target shapes from the basic image. Target evaluation was then performed using a combination of size, geometrical, and fractal characteristics, which resulted in a calculated probability for each target shape. Overall results with this algorithm were quite good, though there is a trade-off between detection confidence and the number of false alarms. This technology also has applications in the areas of hazardous waste site remediation, archaeology, and law enforcement.
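The idea of fusing texture, edge, and contrast cues to extract candidate target shapes can be illustrated with a toy example. This is not the MIRADOR code: the scene, the three simple cues, and the majority-vote fusion are all assumptions standing in for the integrated approach the abstract describes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy low-S/N scene: noisy ground with one smooth, brighter disc (the "mine").
img = rng.normal(0.0, 1.0, (64, 64))
yy, xx = np.mgrid[:64, :64]
target = (yy - 32) ** 2 + (xx - 32) ** 2 < 8 ** 2
img[target] = 2.0 + rng.normal(0.0, 0.2, int(target.sum()))

def local_mean(a, r=2):
    """Box filter via shifted sums (periodic borders, acceptable for a sketch)."""
    acc = np.zeros_like(a)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += np.roll(np.roll(a, dy, 0), dx, 1)
    return acc / (2 * r + 1) ** 2

smooth = local_mean(img)
# Cue 1, contrast: locally brighter than the scene average.
contrast = smooth > smooth.mean() + smooth.std()
# Cue 2, edges: strong gradient of the smoothed image.
gy, gx = np.gradient(smooth)
gmag = np.hypot(gy, gx)
edges = gmag > 2 * gmag.mean()
# Cue 3, texture: low local variance (the target is smoother than the clutter).
texture = local_mean((img - smooth) ** 2) < 0.5
# Fuse the three cues by majority vote to get candidate target pixels.
votes = contrast.astype(int) + edges.astype(int) + texture.astype(int)
candidates = votes >= 2
```

The vote requires two independent cues to agree, which is one simple way to trade detection confidence against false alarms, as the abstract notes.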

  17. Micro analysis of fringe field formed inside LDA measuring volume

    NASA Astrophysics Data System (ADS)

    Ghosh, Abhijit; Nirala, A. K.

    2016-05-01

In the present study we propose a technique for micro analysis of the fringe field formed inside the laser Doppler anemometry (LDA) measuring volume. Detailed knowledge of the fringe field obtained by this technique allows beam quality, alignment, and fringe uniformity to be evaluated with greater precision, and may help in selecting an appropriate optical element for LDA system operation. A complete characterization of the fringes formed at the measurement volume using conventional as well as holographic optical elements is presented. Results indicate qualitative as well as quantitative improvement of the fringes formed at the measurement volume by holographic optical elements. Hence, the use of holographic optical elements in LDA systems may be advantageous for improving measurement accuracy.
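For context, the fringe spacing inside an LDA measurement volume follows directly from the beam geometry: d = λ / (2·sin(θ/2)), where θ is the full angle between the two crossing beams, and a particle crossing the fringes at velocity v produces a Doppler burst at f_D = v / d. The wavelength, crossing angle, and burst frequency below are illustrative assumptions:

```python
import numpy as np

# Fringe spacing in an LDA measurement volume: d = lambda / (2 sin(theta/2)).
wavelength = 632.8e-9          # He-Ne laser wavelength in metres (example)
theta = np.deg2rad(10.0)       # full beam-intersection angle (assumed)

fringe_spacing = wavelength / (2 * np.sin(theta / 2))

# A particle crossing the fringes yields a Doppler burst at f_D = v / d,
# so the measured velocity is recovered as v = f_D * d.
f_doppler = 1.0e6              # burst frequency in Hz (assumed)
velocity = f_doppler * fringe_spacing
```

This is why fringe uniformity matters: any local deviation of d from its nominal value translates linearly into velocity measurement error.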

  18. Improved execution efficiency of model-based roentgen stereophotogrammetric analysis: simplification and segmentation of model meshes.

    PubMed

    Syu, Ci-Bin; Lin, Shang-Chih; Huang, Chung-Yi; Lai, Jiing-Yih; Shih, Kao-Shang; Chen, Kuo-Jen

    2012-01-01

Recently, the model-based roentgen stereophotogrammetric analysis (RSA) method has been developed as an in vivo tool to estimate the static pose and dynamic motion of instrumented prostheses. The two essential inputs to the RSA method are prosthetic models and roentgen images. During RSA calculation, the implants are typically reverse-engineered by scanning and input in the form of meshes to estimate the outline error between the prosthetic projection and the roentgen images. However, the execution efficiency of the iterative RSA calculation may limit its clinical practicability, and one reason for the inefficiency is the very large number of elements in the model meshes. This study uses two methods of mesh manipulation to improve the execution efficiency of the RSA calculation. The first is to simplify the model meshes; the other is to segment and delete the meshes of insignificant regions. An index (i.e., critical percentage) of the optimal element number is defined as the trade-off between execution efficiency and result accuracy. The predicted results are numerically validated on a total knee prosthesis system. The outcome shows that the optimal strategy of mesh manipulation is simplification followed by segmentation. On average, the element number can be reduced to 1% of the original models. After the mesh manipulation, execution efficiency increases by about 75% without compromising the accuracy of the predicted RSA results (increases in rotation and translation error: 0.06° and 0.02 mm). In conclusion, prosthetic models should be manipulated by simplification and segmentation prior to the RSA calculation to increase execution efficiency and thereby improve the clinical applicability of the RSA method. PMID:22401491
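Mesh simplification of the kind described can be sketched with a generic vertex-clustering decimation (the paper does not specify its simplification algorithm in the abstract; edge-collapse schemes are another common choice). The toy sphere mesh, the grid cell size, and the clustering approach are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy surface mesh: random vertices on a unit sphere, random triangles.
v = rng.normal(size=(500, 3))
verts = v / np.linalg.norm(v, axis=1, keepdims=True)
tris = rng.integers(0, 500, size=(1000, 3))

def cluster_decimate(verts, tris, cell=0.5):
    """Snap vertices to a voxel grid, merge vertices sharing a cell,
    and drop triangles that became degenerate after merging."""
    keys = np.floor(verts / cell).astype(int)
    _, remap, inv = np.unique(keys, axis=0,
                              return_index=True, return_inverse=True)
    inv = inv.ravel()                 # flat inverse map, robust across numpy versions
    new_verts = verts[remap]          # one representative vertex per occupied cell
    new_tris = inv[tris]              # re-index faces into the merged vertex set
    keep = ((new_tris[:, 0] != new_tris[:, 1]) &
            (new_tris[:, 1] != new_tris[:, 2]) &
            (new_tris[:, 0] != new_tris[:, 2]))
    return new_verts, new_tris[keep]

sv, st = cluster_decimate(verts, tris)
```

The cell size plays the role of the paper's "critical percentage": a coarser grid merges more vertices, trading geometric fidelity (and hence pose accuracy) for a smaller element count and faster iterations.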

  19. Sequence and phylogenetic analysis of M-class genome segments of novel duck reovirus NP03

    PubMed Central

    Wang, Shao; Chen, Shilong; Cheng, Xiaoxia; Chen, Shaoying; Lin, FengQiang; Jiang, Bing; Zhu, Xiaoli; Li, Zhaolong; Wang, Jinxiang

    2015-01-01

We report the sequence and phylogenetic analysis of the entire M1, M2, and M3 genome segments of the novel duck reovirus (NDRV) NP03. Alignment between the newly determined nucleotide sequences, as well as their deduced amino acid sequences, and the published sequences of avian reovirus (ARV) was carried out with DNASTAR software. Sequence comparison showed that the M2 gene is the most variable among the M-class genes of DRV. Phylogenetic analysis of the M-class genes of ARV strains revealed different lineages and clusters within DRVs. The 5 NDRV strains used in this study fall into a well-supported lineage that includes chicken ARV strains, whereas Muscovy DRV (MDRV) strains are separate from NDRV strains and form a distinct genetic lineage in the M2 gene tree. However, the MDRV and NDRV strains are closely related and located in a common lineage in both the M1 and M3 gene trees. PMID:25852231

  20. Semi-automatic segmentation and modeling of the cervical spinal cord for volume quantification in multiple sclerosis patients from magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Sonkova, Pavlina; Evangelou, Iordanis E.; Gallo, Antonio; Cantor, Fredric K.; Ohayon, Joan; McFarland, Henry F.; Bagnato, Francesca

    2008-03-01

Spinal cord (SC) tissue loss is known to occur in some patients with multiple sclerosis (MS), resulting in SC atrophy. Currently, no measurement tools exist to determine the magnitude of SC atrophy from Magnetic Resonance Images (MRI). We have developed and implemented a novel semi-automatic, level-set-based method for quantifying the cervical SC volume (CSCV) from MRI. The image dataset consisted of SC MRI exams obtained at 1.5 Tesla from 12 MS patients (10 relapsing-remitting and 2 secondary progressive) and 12 age- and gender-matched healthy volunteers (HVs). 3D high-resolution image data were acquired using an IR-FSPGR sequence in the sagittal plane. The mid-sagittal slice (MSS) was automatically located based on the entropy calculated for each of the consecutive sagittal slices. The image data were then pre-processed by 3D anisotropic diffusion filtering, for noise reduction and edge enhancement, before segmentation with a level set formulation that did not require re-initialization. The developed method was tested against manual segmentation (considered ground truth), and intra-observer and inter-observer variability were evaluated.
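The entropy-based mid-sagittal slice selection can be sketched simply: compute a histogram entropy per sagittal slice and pick the maximum, on the premise that the mid-sagittal slice shows the most anatomical structure. The toy volume, bin count, and intensity range below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy sagittal stack: slice 7 carries the most structure (a bright,
# cord-like stripe); all other slices are low-amplitude noise.
vol = rng.integers(0, 8, size=(15, 64, 64)).astype(float)
vol[7, 20:44, 28:36] = rng.integers(120, 256, size=(24, 8))

def slice_entropy(sl, bins=32):
    """Shannon entropy of the slice's grey-level histogram, in bits."""
    hist, _ = np.histogram(sl, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

entropies = np.array([slice_entropy(s) for s in vol])
mss = int(np.argmax(entropies))     # index of the selected mid-sagittal slice
```

A nearly uniform slice concentrates its histogram in few bins (entropy near zero), while a structured slice spreads intensity across many bins, so the argmax lands on the structured slice.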

  1. Incorporation of texture-based features in optimal graph-theoretic approach with application to the 3D segmentation of intraretinal surfaces in SD-OCT volumes

    NASA Astrophysics Data System (ADS)

    Antony, Bhavna J.; Abràmoff, Michael D.; Sonka, Milan; Kwon, Young H.; Garvin, Mona K.

    2012-02-01

While efficient graph-theoretic approaches exist for the optimal (with respect to a cost function) and simultaneous segmentation of multiple surfaces within volumetric medical images, the appropriate design of cost functions remains an important challenge. Previously proposed methods have used simple cost functions, or optimized combinations thereof, but little has been done to design cost functions from features learned from a training set, in a less biased fashion. Here, we present a method to design cost functions for the simultaneous segmentation of multiple surfaces using the graph-theoretic approach. Classified texture features were used to create probability maps, which were incorporated into the graph-search approach. The approach was tested on 10 optic-nerve-head-centered optical coherence tomography (OCT) volumes obtained from 10 subjects who presented with glaucoma. The mean unsigned border position error was computed with respect to the average of manual tracings from two independent observers and compared to our previously reported results. A significant improvement was noted in the overall mean error, which decreased from 9.25 ± 4.03 μm to 6.73 ± 2.45 μm (p < 0.01) and is comparable with the inter-observer variability of 8.85 ± 3.85 μm.
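The core idea, turning classifier probability maps into surface costs and extracting the minimum-cost surface, can be shown in a reduced form. This sketch segments a single surface in a 2-D toy B-scan with a dynamic program (the paper solves multiple coupled surfaces in 3-D via a graph min-cut, and derives its probability maps from a trained texture classifier rather than raw brightness); image, smoothness constraint, and cost are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy B-scan: one bright boundary at row 20, plus speckle-like noise.
H, W = 40, 30
img = rng.normal(0, 0.3, (H, W))
img[20, :] += 3.0

# Stand-in "probability map": brightness normalised to [0, 1].
prob = (img - img.min()) / (img.max() - img.min())
cost = 1.0 - prob                       # on-surface cost: low where boundary is likely

# Dynamic programming, column by column, with smoothness |delta row| <= 1
# (np.roll wraps at the top/bottom rows, harmless in this toy).
acc = cost.copy()
for c in range(1, W):
    prev = acc[:, c - 1]
    best = np.minimum(np.minimum(prev, np.roll(prev, 1)), np.roll(prev, -1))
    acc[:, c] += best

# Backtrack the minimum-cost path from the last column.
surface = np.empty(W, dtype=int)
surface[-1] = int(np.argmin(acc[:, -1]))
for c in range(W - 2, -1, -1):
    r = surface[c + 1]
    cand = [max(r - 1, 0), r, min(r + 1, H - 1)]
    surface[c] = cand[int(np.argmin(acc[cand, c]))]
```

Swapping the brightness-based `prob` for a learned texture-probability map changes only the cost image; the optimization machinery stays the same, which is exactly the modularity the paper exploits.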

  2. Quantitative analysis of peristaltic and segmental motion in vivo in the rat small intestine using dynamic MRI.

    PubMed

    Ailiani, Amit C; Neuberger, Thomas; Brasseur, James G; Banco, Gino; Wang, Yanxing; Smith, Nadine B; Webb, Andrew G

    2009-07-01

Conventional methods of quantifying segmental and peristaltic motion in animal models are highly invasive, involving, for example, the external isolation of segments of the gastrointestinal (GI) tract from dead or anesthetized animals. The present study was undertaken to determine the utility of MRI for noninvasive quantitative analysis of these motions in the jejunum of anesthetized rats (N = 6). Dynamic images of the GI tract after oral gavage with a Gd contrast agent were acquired at a rate of six frames per second, followed by image segmentation based on a combination of three-dimensional live wire (3D LW) and directional dynamic gradient vector flow snakes (DDGVFS). Quantitative analysis of the variation in diameter at a fixed constricting location showed clear indications of both segmental and peristaltic motion. Quantitative analysis of the frequency response gave results in good agreement with those of previous studies using invasive measurement techniques. Principal component analysis (PCA) of the segmented data using active shape models resulted in three major modes. The individual modes revealed unique spatial patterns for peristaltic and segmental motility. PMID:19353667
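The PCA-of-shapes step (the statistical core of an active shape model) can be sketched with synthetic outlines. The landmark parameterization, the two planted "motion modes", and all sizes below are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy gut outlines: 20 landmark radii per shape, generated from two latent
# modes (a cos(t) "peristaltic" mode and a cos(2t) "segmental" mode) + noise.
t = np.linspace(0, 2 * np.pi, 20, endpoint=False)
shapes = []
for _ in range(200):
    a, b = rng.normal(size=2)
    shapes.append(1.0 + 0.3 * a * np.cos(t) + 0.3 * b * np.cos(2 * t)
                  + 0.01 * rng.normal(size=20))
X = np.array(shapes)                          # (n_shapes, n_landmarks)

# PCA: eigen-decomposition of the landmark covariance; eigenvectors are
# the shape "modes", eigenvalues the variance each mode explains.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(X) - 1)
evals, evecs = np.linalg.eigh(cov)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

explained = evals[:2].sum() / evals.sum()     # first two modes dominate
```

Because the data were built from two latent deformations, PCA recovers essentially all variance in the first two modes; in the paper, inspecting such modes is what separates peristaltic from segmental motility patterns.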

  3. Final safety analysis report for the Galileo Mission: Volume 2, Book 2: Accident model document: Appendices

    SciTech Connect

    Not Available

    1988-12-15

    This section of the Accident Model Document (AMD) presents the appendices which describe the various analyses that have been conducted for use in the Galileo Final Safety Analysis Report II, Volume II. Included in these appendices are the approaches, techniques, conditions and assumptions used in the development of the analytical models plus the detailed results of the analyses. Also included in these appendices are summaries of the accidents and their associated probabilities and environment models taken from the Shuttle Data Book (NSTS-08116), plus summaries of the several segments of the recent GPHS safety test program. The information presented in these appendices is used in Section 3.0 of the AMD to develop the Failure/Abort Sequence Trees (FASTs) and to determine the fuel releases (source terms) resulting from the potential Space Shuttle/IUS accidents throughout the missions.

  4. Fast segmentation and high-quality three-dimensional volume mesh creation from medical images for diffuse optical tomography

    PubMed Central

    Ghadyani, Hamid; Mastanduno, Michael A.; Turner, Wes; Davis, Scott C.; Dehghani, Hamid; Pogue, Brian W.

    2013-01-01

Multimodal approaches that combine near-infrared (NIR) and conventional imaging modalities have been shown to improve optical parameter estimation dramatically and thus represent a prevailing trend in NIR imaging. These approaches typically involve applying anatomical templates from magnetic resonance imaging/computed tomography/ultrasound images to guide the recovery of optical parameters. However, merging these data sets using current technology requires multiple software packages, substantial expertise, and a significant time commitment, and often results in unacceptably poor mesh quality for optical image reconstruction, a reality that represents a significant roadblock for translational research of multimodal NIR imaging. This work addresses these challenges directly by introducing automated Digital Imaging and Communications in Medicine (DICOM) image stack segmentation and a new one-click three-dimensional mesh generator optimized for multimodal NIR imaging, and combining these capabilities into a single software package (available for free download) with a streamlined workflow. Image processing time and mesh quality benchmarks were examined for four common multimodal NIR use-cases (breast, brain, pancreas, and small animal) and were compared to a commercial image processing package. Applying these tools resulted in a fivefold decrease in image processing time and 62% improvement in minimum mesh quality, in the absence of extra mesh postprocessing. These capabilities represent a significant step toward enabling translational multimodal NIR research for both expert and nonexpert users in an open-source platform. PMID:23942632
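The "minimum mesh quality" figure of merit the abstract benchmarks is not defined there; one common normalised measure for tetrahedral elements compares element volume against the longest edge, giving 1 for a regular tetrahedron and tending to 0 for degenerate slivers. The specific formula below is that generic measure, chosen for illustration:

```python
import numpy as np

def tet_quality(p0, p1, p2, p3):
    """Normalised tetrahedron quality in (0, 1]:
    q = 6*sqrt(2)*V / L_max^3, where V is the volume and L_max the
    longest edge; q = 1 for a regular tet, q -> 0 for slivers."""
    pts = np.array([p0, p1, p2, p3], dtype=float)
    vol = abs(np.linalg.det(pts[1:] - pts[0])) / 6.0
    edges = [np.linalg.norm(pts[i] - pts[j])
             for i in range(4) for j in range(i + 1, 4)]
    return 6 * np.sqrt(2) * vol / max(edges) ** 3

# A regular tetrahedron scores exactly 1; a nearly flat one scores near 0.
q_reg = tet_quality((1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1))
q_flat = tet_quality((0, 0, 0), (1, 0, 0), (0, 1, 0), (0.3, 0.3, 0.01))
```

In a diffuse-optical forward solver, the worst element in the mesh bounds the conditioning of the system matrix, which is why the paper reports the minimum rather than the mean quality.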

  5. Fast segmentation and high-quality three-dimensional volume mesh creation from medical images for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jermyn, Michael; Ghadyani, Hamid; Mastanduno, Michael A.; Turner, Wes; Davis, Scott C.; Dehghani, Hamid; Pogue, Brian W.

    2013-08-01

Multimodal approaches that combine near-infrared (NIR) and conventional imaging modalities have been shown to improve optical parameter estimation dramatically and thus represent a prevailing trend in NIR imaging. These approaches typically involve applying anatomical templates from magnetic resonance imaging/computed tomography/ultrasound images to guide the recovery of optical parameters. However, merging these data sets using current technology requires multiple software packages, substantial expertise, and a significant time commitment, and often results in unacceptably poor mesh quality for optical image reconstruction, a reality that represents a significant roadblock for translational research of multimodal NIR imaging. This work addresses these challenges directly by introducing automated Digital Imaging and Communications in Medicine (DICOM) image stack segmentation and a new one-click three-dimensional mesh generator optimized for multimodal NIR imaging, and combining these capabilities into a single software package (available for free download) with a streamlined workflow. Image processing time and mesh quality benchmarks were examined for four common multimodal NIR use-cases (breast, brain, pancreas, and small animal) and were compared to a commercial image processing package. Applying these tools resulted in a fivefold decrease in image processing time and 62% improvement in minimum mesh quality, in the absence of extra mesh postprocessing. These capabilities represent a significant step toward enabling translational multimodal NIR research for both expert and nonexpert users in an open-source platform.

  6. Quantitative analysis of volume images: electron microscopic tomography of HIV

    NASA Astrophysics Data System (ADS)

    Nystroem, Ingela; Bengtsson, Ewert W.; Nordin, Bo G.; Borgefors, Gunilla

    1994-05-01

Three-dimensional objects should be represented by 3D images. So far, most evaluation of images of 3D objects has been done visually, either by looking at slices through the volumes or by looking at 3D graphic representations of the data. In many applications a more quantitative evaluation would be valuable. Our application is the analysis of volume images of the causative agent of the acquired immune deficiency syndrome (AIDS), namely human immunodeficiency virus (HIV), produced by electron microscopic tomography (EMT). A structural analysis of the virus is of importance. The representation of some of the interesting structural features will depend on the orientation and position of the object relative to the digitization grid. We describe a method of defining the orientation and position of objects based on their moments of inertia in the volume image. In addition to direct quantification of the 3D object, a quantitative description of the convex deficiency may provide valuable information about its geometrical properties. The convex deficiency is the object subtracted from its convex hull. We describe an algorithm for creating an enclosing polyhedron approximating the convex hull of an arbitrarily shaped object.
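Defining object orientation from moments of inertia reduces to an eigen-decomposition of the second-moment matrix of the object's voxel coordinates. The sketch below uses a synthetic point-cloud "blob" with a known long axis rather than EMT data; the shape, rotation, and sample count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy object: an elongated blob (long axis along x), then rotated 30
# degrees about z so its principal axis is no longer grid-aligned.
pts = rng.normal(size=(5000, 3)) * np.array([5.0, 1.0, 1.0])
a = np.deg2rad(30)
R = np.array([[np.cos(a), -np.sin(a), 0],
              [np.sin(a),  np.cos(a), 0],
              [0,          0,         1]])
pts = pts @ R.T

# Position = centroid; orientation = eigenvectors of the second-moment
# (inertia-like) matrix of the centred coordinates.
centroid = pts.mean(axis=0)
centered = pts - centroid
M = centered.T @ centered / len(pts)
evals, evecs = np.linalg.eigh(M)
principal = evecs[:, np.argmax(evals)]     # recovered long axis of the object

expected = np.array([np.cos(a), np.sin(a), 0.0])
align = abs(principal @ expected)          # close to 1 if orientation recovered
```

Reporting features in this eigenvector frame makes the measurements independent of how the object happened to sit in the digitization grid, which is the point the abstract raises.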

  7. Computerized analysis of coronary artery disease: Performance evaluation of segmentation and tracking of coronary arteries in CT angiograms

    SciTech Connect

Zhou, Chuan; Chan, Heang-Ping; Chughtai, Aamer; Kuriakose, Jean; Agarwal, Prachi; Kazerooni, Ella A.; Hadjiiski, Lubomir M.; Patel, Smita; Wei, Jun

    2014-08-15

Purpose: The authors are developing a computer-aided detection system to assist radiologists in the analysis of coronary artery disease in coronary CT angiograms (cCTA). This study evaluated the accuracy of the authors' coronary artery segmentation and tracking method, which provides the essential steps to define the search space for the detection of atherosclerotic plaques. Methods: The heart region in cCTA was segmented and the vascular structures were enhanced using the authors' multiscale coronary artery response (MSCAR) method, which performs 3D multiscale filtering and analysis of the eigenvalues of Hessian matrices. Starting from seed points at the origins of the left and right coronary arteries, a 3D rolling balloon region growing (RBG) method that adapts to the local vessel size segmented and tracked each coronary artery and identified the branches along the tracked vessels. The branches were queued and subsequently tracked until the queue was exhausted. With Institutional Review Board approval, 62 cCTA were collected retrospectively from the authors' patient files. Three experienced cardiothoracic radiologists manually tracked and marked center points of the coronary arteries as the reference standard, following the 17-segment model that includes clinically significant coronary arteries. Two radiologists visually examined the computer-segmented vessels and marked mistakenly tracked veins and noisy structures as false positives (FPs). For the 62 cases, the radiologists marked a total of 10191 center points on 865 visible coronary artery segments. Results: The computer-segmented vessels overlapped with 83.6% (8520/10191) of the center points. Relative to the 865 radiologist-marked segments, the sensitivity reached 91.9% (795/865) if a true positive is defined as a computer-segmented vessel that overlapped with at least 10% of the reference center points marked on the segment. When the overlap threshold was increased to 50% and 100%, the sensitivities were 86.2% and 53.4%, respectively. For the 62 test cases, a total of 55 FPs were identified by the radiologists in 23 of the cases. Conclusions: The authors' MSCAR-RBG method achieved high sensitivity for coronary artery segmentation and tracking. Studies are underway to further improve accuracy for arterial segments affected by motion artifacts and by severe calcified and noncalcified soft plaques, and to reduce false tracking of veins and other noisy structures. Methods are also being developed to detect coronary artery disease along the tracked vessels.
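The Hessian-eigenvalue vessel enhancement in the Methods can be illustrated in miniature. The sketch below is a 2-D, single-scale toy (MSCAR is 3-D and multiscale, with Gaussian smoothing at several widths before differentiation); the synthetic "vessel" image and the simple ridge-response formula are illustrative assumptions:

```python
import numpy as np

# Toy slice: a bright horizontal "vessel" with a Gaussian cross-section.
H, W = 40, 40
y = np.arange(H)[:, None]
img = np.exp(-((y - 20.0) ** 2) / (2 * 2.0 ** 2)) * np.ones((1, W))

# Hessian entries from finite differences.
gy, gx = np.gradient(img)
gyy, gyx = np.gradient(gy)
gxy, gxx = np.gradient(gx)

# Ridge response from the eigenvalues: a bright tube has one strongly
# negative eigenvalue (curvature across) and one near zero (flat along).
response = np.zeros_like(img)
for i in range(H):
    for j in range(W):
        Hm = np.array([[gyy[i, j], gyx[i, j]],
                       [gxy[i, j], gxx[i, j]]])
        l1, l2 = np.sort(np.linalg.eigvalsh(Hm))   # l1 <= l2
        if l1 < 0:
            response[i, j] = -l1 - abs(l2)

ridge_rows = response.argmax(axis=0)   # strongest response per column
```

The response peaks on the vessel centerline, which is what lets a tracker like the rolling-balloon region growing follow the enhanced structure from a seed point.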

  8. Theoretical analysis of segmented Wolter/LSM X-ray telescope systems

    NASA Technical Reports Server (NTRS)

    Shealy, D. L.; Chao, S. H.

    1986-01-01

The Segmented Wolter I/LSM X-ray Telescope, which consists of a Wolter I telescope with a tilted, off-axis convex spherical Layered Synthetic Microstructure (LSM) optic placed near the primary focus to accommodate multiple off-axis detectors, has been analyzed. The Skylab ATM Experiment S056 Wolter I telescope and the Stanford/MSFC nested Wolter-Schwarzschild X-ray telescope have been considered as the primary optics. A ray-trace analysis has been performed to calculate the RMS blur circle radius, the point spread function (PSF), the meridional and sagittal line spread functions (LSF), and the full width at half maximum (FWHM) of the PSF to study the spatial resolution of the system. The effects on resolution of defocusing the image plane and of tilting and decentering the multilayer (LSM) optic have also been investigated to give the mounting and alignment tolerances of the LSM optic. Comparison has been made between the performance of the segmented Wolter/LSM optical system and that of the Spectral Slicing X-ray Telescope (SSXRT) systems.
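Two of the figures of merit named above, RMS blur circle radius and FWHM, are simple statistics of a ray-trace spot diagram. The synthetic ray intercepts below (a Gaussian cloud with a small offset) are an assumption standing in for real trace output:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy spot diagram: ray intercepts at the image plane, in mm.
x = rng.normal(0.0, 0.010, 2000) + 0.002
y = rng.normal(0.0, 0.010, 2000)

# RMS blur circle radius: RMS radial distance of rays from the spot centroid.
cx, cy = x.mean(), y.mean()
r = np.hypot(x - cx, y - cy)
rms_blur = np.sqrt((r ** 2).mean())

# FWHM of a Gaussian-like PSF, estimated from the marginal standard
# deviation via FWHM = 2*sqrt(2*ln 2)*sigma (valid only for this toy
# Gaussian spot; a real PSF's FWHM is read off the profile directly).
fwhm = 2 * np.sqrt(2 * np.log(2)) * x.std()
```

For a symmetric Gaussian spot of per-axis sigma, the RMS blur radius is sigma times sqrt(2), which the test below checks against the sampled cloud.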

  9. ANALYSIS OF THE SEGMENTAL IMPACTION OF FEMORAL HEAD FOLLOWING AN ACETABULAR FRACTURE SURGICALLY MANAGED

    PubMed Central

    Guimarães, Rodrigo Pereira; Kaleka, Camila Cohen; Cohen, Carina; Daniachi, Daniel; Keiske Ono, Nelson; Honda, Emerson Kiyoshi; Polesello, Giancarlo Cavalli; Riccioli, Walter

    2015-01-01

Objective: To correlate the postoperative radiographic evaluation with variables accompanying acetabular fractures in order to determine the predictive factors for segmental impaction of the femoral head. Methods: Retrospective analysis of the medical files of patients submitted to open reduction surgery with internal acetabular fixation. Over approximately 35 years, 596 patients were treated for acetabular fractures; 267 were followed up for at least two years. The others were excluded because their follow-up was shorter than the minimum time, because of insufficient data in their files, or because they had undergone non-surgical treatment. The patients were followed up by one of three surgeons of the group using the Merle d'Aubigné and Postel clinical scales as well as radiological studies. Results: Only two of the studied variables, age and quality of postoperative reduction, showed a statistically significant correlation with femoral head impaction. Conclusions: Good quality of reduction (anatomical, or with up to 2 mm residual deviation) is associated with good radiographic evolution, reducing the potential for segmental impaction of the femoral head, a statistically significant finding. PMID:27004191

  10. Interfacial energetics approach for analysis of endothelial cell and segmental polyurethane interactions.

    PubMed

    Hill, Michael J; Cheah, Calvin; Sarkar, Debanjan

    2016-08-01

Understanding the physicochemical interactions between endothelial cells and biomaterials is vital for regenerative medicine applications. In particular, physical interactions between the substratum interface and spontaneously deposited biomacromolecules, as well as between the induced biomolecular interface and the cell, in terms of surface energetics, are important factors regulating cellular functions. In this study, we examined the physical interactions between endothelial cells and segmental polyurethanes (PUs), using l-tyrosine based PUs to examine the structure-property relations in terms of PU surface energies and endothelial cell organization. Since contact angle analysis alone provides an incomplete interpretation and understanding of the physical interactions, we sought a combinatorial surface energetics approach utilizing the water contact angle, Zisman's critical surface tension (CST), Kaelble's numerical method, and van Oss-Good-Chaudhury theory (vOGCT), applied to both the substrata and the serum-adsorbed matrix, to correlate human umbilical vein endothelial cell (HUVEC) behavior with the surface energetics of l-tyrosine based PU surfaces. We determined that, while the water contact angle of the substratum or adsorbed matrix did not correlate well with HUVEC behavior, overall higher polarity according to the numerical method, as well as Lewis base character of the substratum, explained increased HUVEC interaction and monolayer formation as opposed to organization into networks. Cell interaction was also interpreted in terms of the combined effects of substratum and adsorbed matrix polarity and Lewis acid-base character to determine the effect of PU segments. PMID:27065449
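Kaelble's numerical method referenced above rests on the Owens-Wendt relation: for each probe liquid, gamma_L(1 + cos theta) = 2(sqrt(gs_d*gl_d) + sqrt(gs_p*gl_p)), so two liquids give a linear system in the square roots of the solid's dispersive and polar components. The probe-liquid component values below are standard literature figures for water and diiodomethane, but the contact angles are made-up inputs for illustration:

```python
import numpy as np

# Probe liquids: (total surface tension, dispersive, polar), in mJ/m^2.
liquids = {
    "water": (72.8, 21.8, 51.0),
    "diiodomethane": (50.8, 50.8, 0.0),
}
angles = {"water": 75.0, "diiodomethane": 40.0}   # assumed contact angles, degrees

# Owens-Wendt: gamma_L (1 + cos theta) = 2 (sqrt(gs_d gl_d) + sqrt(gs_p gl_p)),
# linear in x = [sqrt(gs_d), sqrt(gs_p)].
A, b = [], []
for name, (g, gd, gp) in liquids.items():
    theta = np.deg2rad(angles[name])
    A.append([2 * np.sqrt(gd), 2 * np.sqrt(gp)])
    b.append(g * (1 + np.cos(theta)))
roots = np.linalg.solve(np.array(A), np.array(b))

gs_d, gs_p = roots ** 2          # dispersive and polar components of the solid
gamma_s = gs_d + gs_p            # total solid surface energy
polarity = gs_p / gamma_s        # the "overall polarity" used in the paper
```

The polarity fraction computed this way, rather than the raw water contact angle, is the quantity the study found to track HUVEC monolayer formation.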

  11. Facial expression during emotional monologues in unilateral stroke: an analysis of monologue segments.

    PubMed

    Kazandjian, Seta; Borod, Joan C; Brickman, Adam M

    2007-01-01

    Emotional monologues of brain-damaged subjects were examined to determine whether interhemispheric or intrahemispheric differences exist for facial emotional expression. A special feature was the comparison of expressions produced during the initial, middle, and last segments of the monologues. Videotaped emotional and non-emotional monologues from the New York Emotion Battery (Borod, Welkowitz, & Obler, 1992) of eight right brain-damaged (RBD), eight left brain-damaged (LBD), and eight normal control (NC) subjects, with matching for demographics and lesion location, were rated. Five raters were trained to evaluate the emotional intensity and category accuracy of the facial expressions produced during these monologues. Results revealed some support for a reversed valence effect, with RBDs showing relatively less accurate performance during positive monologues. Intrahemispheric results revealed that, overall, RBDs with frontal lobe lesions showed the least intense facial expressions. Segment analysis found that individuals produced facial expressions with significantly more emotional intensity during the middle and last thirds of the monologues than during the initial third of the monologues. Findings indicate intrahemispheric as well as interhemispheric differences in facial emotional expression and suggest the utilization of the latter parts of monologues in the evaluation of emotional expression, which has potential clinical implications. PMID:18067419

  12. Investigating materials for breast nodules simulation by using segmentation and similarity analysis of digital images

    NASA Astrophysics Data System (ADS)

    Siqueira, Paula N.; Marcomini, Karem D.; Sousa, Maria A. Z.; Schiabel, Homero

    2015-03-01

The task of identifying the malignancy of nodular lesions on mammograms is quite complex, because overlapped structures or granular fibrous tissue can cause confusion in classifying mass shapes, leading to unnecessary biopsies. Efforts to develop methods for automatic mass detection in CADe (Computer-Aided Detection) schemes have been made with the aim of assisting radiologists and working as a second opinion. The validation of these methods may be accomplished, for instance, by using databases of clinical images or of images acquired from breast phantoms. With this aim, several materials were tested for producing radiographic phantom images that approximate closely enough the typical mammographic appearance of actual breast nodules. Different nodule patterns were physically produced and used on a previously developed breast phantom, and their characteristics were assessed from the digital images obtained by exposing the phantom on a LORAD M-IV mammography unit. Two analyses were performed. In the first, regions of interest containing the simulated nodules were segmented both by an automated segmentation technique and by an experienced radiologist, who delineated the contour of each nodule by means of a graphic display digitizer; both results were compared using evaluation metrics. The second used the Structural Similarity (SSIM) quality measure to generate quantitative data on the texture produced by each material. Although all the tested materials proved suitable for the study, the PVC film yielded the best results.
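The SSIM measure used in the second analysis combines luminance, contrast, and structure terms. The sketch below implements a single-window (global) variant on synthetic images; the standard measure averages a locally windowed version over the image, and the test images here are random stand-ins, not phantom data:

```python
import numpy as np

rng = np.random.default_rng(8)

def ssim_global(a, b, L=255.0):
    """Single-window SSIM with the usual stabilisers c1, c2.
    The standard index averages this over local Gaussian windows."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return (((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
            ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2)))

img = rng.uniform(0, 255, (64, 64))
noisy = np.clip(img + rng.normal(0, 25, img.shape), 0, 255)

s_same = ssim_global(img, img)      # identical images score 1
s_noisy = ssim_global(img, noisy)   # degradation lowers the score
```

Because SSIM compares local structure rather than raw pixel differences, it is better suited than mean squared error for judging whether a phantom material reproduces the texture of real nodules.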

  13. Evolutionary analysis of the segment from helix 3 through helix 5 in vertebrate progesterone receptors.

    PubMed

    Baker, Michael E; Uh, Kayla Y

    2012-10-01

The interaction between helix 3 and helix 5 in the human mineralocorticoid receptor [MR], progesterone receptor [PR] and glucocorticoid receptor [GR] influences their response to steroids. For the human PR, mutations at Gly-722 on helix 3 and Met-759 on helix 5 alter responses to progesterone. We analyzed the evolution of these two sites and the rest of a 59-residue segment containing helices 3, 4 and 5 in vertebrate PRs, and found that a glycine corresponding to Gly-722 on helix 3 in human PR first appears in platypus, a monotreme. In lamprey, skates, fish, amphibians and birds, cysteine is found at this position in helix 3. This suggests that the cysteine-to-glycine replacement in helix 3 of the PR was important in the evolution of mammals. Interestingly, our analysis of the rest of the 59-residue segment finds 100% sequence conservation in almost all mammalian PRs, substantial conservation in reptile and amphibian PRs, and divergence of land-vertebrate PR sequences from fish PR sequences. The differences between fish and land-vertebrate PRs may be important in the evolution of different biological progestins in fish and mammalian PRs, as well as in differences in susceptibility to environmental chemicals that disrupt PR-mediated physiology. PMID:22575083

  14. Evaluation of poly-drug use in methadone-related fatalities using segmental hair analysis.

    PubMed

    Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian

    2015-03-01

    In Denmark, fatal poisoning among drug addicts is often related to methadone. The primary mechanism contributing to fatal methadone overdose is respiratory depression. Concurrent use of other central nervous system (CNS) depressants is thought to heighten the potential for fatal methadone toxicity, and reduced tolerance due to a short period of abstinence has also been proposed as a risk factor for fatal overdose. The primary aim of this study was to investigate, using segmental hair analysis, whether concurrent use of CNS depressants or reduced tolerance were significant risk factors in methadone-related fatalities. The study included 99 methadone-related fatalities collected in Denmark from 2008 to 2011 for which both blood and hair were available. The cases were divided into three subgroups based on the cause of death: methadone poisoning (N=64), poly-drug poisoning (N=28), or methadone poisoning combined with fatal disease (N=7). No significant differences in methadone concentration between the subgroups were found in either blood or hair. The methadone blood concentrations were highly variable (0.015-5.3, median: 0.52 mg/kg) and mainly within the concentration range seen in living methadone users. In hair, methadone was detected in 97 fatalities, with concentrations ranging from 0.061 to 211 ng/mg (median: 11 ng/mg). In the remaining two cases, methadone was detected in blood but absent from the hair specimens, suggesting that these two subjects were methadone-naive users. Extensive poly-drug use was observed in all three subgroups, both recently and within the last months prior to death. Concurrent use of multiple benzodiazepines was especially prevalent among the deceased, followed by abuse of morphine, codeine, amphetamine, cannabis, cocaine and ethanol. Quantitative segmental hair analysis provided additional information on poly-drug use. 
In particular, 6-acetylmorphine was detected more frequently in hair specimens, indicating that regular abuse of heroin was common among the deceased. In conclusion, the continuous methadone exposure revealed by segmental hair analysis suggested that reduced tolerance to methadone was not a critical factor in these methadone-related fatalities. In contrast, the high abundance of co-ingested CNS depressants suggested that adverse effects from drug-drug interactions were the more important risk factors for a fatal outcome. PMID:25622032

  15. A Randomized Trial of Intrapartum Fetal ECG ST-Segment Analysis

    PubMed Central

    Belfort, Michael A.; Saade, George R.; Thom, Elizabeth; Blackwell, Sean C.; Reddy, Uma M.; Thorp, John M.; Tita, Alan T.N.; Miller, Russell S.; Peaceman, Alan M.; McKenna, David S.; Chien, Edward K.S.; Rouse, Dwight J.; Gibbs, Ronald S.; El-Sayed, Yasser Y.; Sorokin, Yoram; Caritis, Steve N.; VanDorsten, J. Peter

    2015-01-01

    BACKGROUND It is unclear whether using fetal electrocardiographic (ECG) ST-segment analysis as an adjunct to conventional intrapartum electronic fetal heart-rate monitoring modifies intrapartum and neonatal outcomes. METHODS We performed a multicenter trial in which women with a singleton fetus who were attempting vaginal delivery at more than 36 weeks of gestation and who had cervical dilation of 2 to 7 cm were randomly assigned to “open” or “masked” monitoring with fetal ST-segment analysis. The masked system functioned as a normal fetal heart-rate monitor. The open system displayed additional information for use when uncertain fetal heart-rate patterns were detected. The primary outcome was a composite of intrapartum fetal death, neonatal death, an Apgar score of 3 or less at 5 minutes, neonatal seizure, an umbilical-artery blood pH of 7.05 or less with a base deficit of 12 mmol per liter or more, intubation for ventilation at delivery, or neonatal encephalopathy. RESULTS A total of 11,108 patients underwent randomization; 5532 were assigned to the open group, and 5576 to the masked group. The primary outcome occurred in 52 fetuses or neonates of women in the open group (0.9%) and 40 fetuses or neonates of women in the masked group (0.7%) (relative risk, 1.31; 95% confidence interval, 0.87 to 1.98; P = 0.20). Among the individual components of the primary outcome, only the frequency of a 5-minute Apgar score of 3 or less differed significantly between neonates of women in the open group and those in the masked group (0.3% vs. 0.1%, P = 0.02). There were no significant between-group differences in the rate of cesarean delivery (16.9% and 16.2%, respectively; P = 0.30) or any operative delivery (22.8% and 22.0%, respectively; P = 0.31). Adverse events were rare and occurred with similar frequency in the two groups. 
CONCLUSIONS Fetal ECG ST-segment analysis used as an adjunct to conventional intrapartum electronic fetal heart-rate monitoring did not improve perinatal outcomes or decrease operative-delivery rates. (Funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development and Neoventa Medical; ClinicalTrials.gov number, NCT01131260.) PMID:26267623
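    The reported relative risk and confidence interval for the primary outcome can be reproduced from the event counts with the standard log-normal approximation; a minimal sketch:

```python
import math

def relative_risk(a, n1, b, n2, z=1.96):
    """Relative risk of two event proportions with a log-normal 95% CI.

    a/n1 and b/n2 are the event proportions in the two groups; the CI is
    computed on the log scale and exponentiated.
    """
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Primary outcome: 52/5532 (open group) vs. 40/5576 (masked group).
rr, lo, hi = relative_risk(52, 5532, 40, 5576)
print(round(rr, 2), round(lo, 2), round(hi, 2))  # 1.31 0.87 1.98
```

This matches the abstract's reported relative risk of 1.31 (95% CI, 0.87 to 1.98).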

  16. Robust Anisotropic Diffusion Based Edge Enhancement for Level Set Segmentation and Asymmetry Analysis of Breast Thermograms using Zernike Moments.

    PubMed

    Prabha, S; Sujatha, C M; Ramakrishnan, S

    2015-01-01

    Breast thermography plays a major role in the early detection of breast cancer, in which thermal variations are associated with the precancerous state of the breast. The distribution of asymmetrical thermal patterns indicates pathological conditions in breast thermal images. In this work, asymmetry analysis of breast thermal images is carried out using level set segmentation and Zernike moments. The breast tissues are subjected to Tukey’s biweight robust anisotropic diffusion filtering (TBRAD) to generate an edge map. A reaction-diffusion level set method is employed for segmentation, with the TBRAD edge map used as the stopping criterion during level set evolution. Zernike moments are extracted from the segmented breast tissues to perform asymmetry analysis. Results show that the TBRAD filter effectively enhances the edges near the inframammary folds and lower breast boundaries. The segmented breast tissues are found to be continuous and to have sharper boundaries. The method yields a high degree of correlation (98%) between the segmented output and the ground truth images. Among the extracted Zernike features, higher-order moments are found to be significant, demarcating normal and carcinoma breast tissues by 9%. The methodology adopted here thus appears useful for accurate segmentation and differentiation of normal and carcinoma breast tissues in the automated diagnosis of breast abnormalities. PMID:25996737
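    The robust edge-preserving smoothing step can be illustrated with a generic Perona-Malik-style diffusion that uses the Tukey biweight edge-stopping function (which shuts off diffusion across gradients larger than a scale sigma). This is a minimal single-channel sketch of the general technique, not the authors' exact TBRAD filter:

```python
import numpy as np

def tukey_diffuse(img, sigma=30.0, n_iter=20, lam=0.2):
    """Robust anisotropic diffusion with the Tukey biweight edge-stopping
    function: gradients with magnitude above sigma receive zero weight, so
    strong edges are preserved while flat regions are smoothed."""
    u = img.astype(np.float64).copy()

    def g(d):
        w = np.zeros_like(d)
        m = np.abs(d) <= sigma
        w[m] = (1.0 - (d[m] / sigma) ** 2) ** 2
        return w

    for _ in range(n_iter):
        # Differences toward the four neighbours (edge rows/cols clamped).
        dn = np.roll(u, -1, 0) - u; dn[-1] = 0
        ds = np.roll(u, 1, 0) - u;  ds[0] = 0
        de = np.roll(u, -1, 1) - u; de[:, -1] = 0
        dw = np.roll(u, 1, 1) - u;  dw[:, 0] = 0
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Demo: a noisy step image; the step (80 units > sigma) survives smoothing.
rng = np.random.default_rng(0)
step = np.full((32, 32), 100.0)
step[:, 16:] = 180.0
noisy = step + rng.normal(0, 5, step.shape)
smoothed = tukey_diffuse(noisy, sigma=30.0)
```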

  17. Do tumor volume, percent tumor volume predict biochemical recurrence after radical prostatectomy? A meta-analysis

    PubMed Central

    Meng, Yang; Li, He; Xu, Peng; Wang, Jia

    2015-01-01

    The aim of this meta-analysis was to explore the effects of tumor volume (TV) and percent tumor volume (PTV) on biochemical recurrence (BCR) after radical prostatectomy (RP). An electronic search of Medline, Embase and CENTRAL was performed for relevant studies. Studies that evaluated the effects of TV and/or PTV on BCR after RP and provided detailed results of multivariate analyses were included. Combined hazard ratios (HRs) and their corresponding 95% confidence intervals (CIs) were calculated using random-effects or fixed-effects models. A total of 15 studies with 16 datasets were included in the meta-analysis. Our study showed that both TV (HR 1.04, 95% CI: 1.00-1.07; P=0.03) and PTV (HR 1.01, 95% CI: 1.00-1.02; P=0.02) were predictors of BCR after RP. The subgroup analyses revealed that TV predicted BCR in studies from Asia, PTV was significantly correlated with BCR in studies in which PTV was measured by computer planimetry, and both TV and PTV predicted BCR in studies with small sample sizes (<1000). In conclusion, our meta-analysis demonstrated that both TV and PTV were significantly associated with BCR after RP. Therefore, TV and PTV should be considered when assessing the risk of BCR in RP specimens. PMID:26885209
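    The fixed-effects pooling of hazard ratios used in such meta-analyses works by inverse-variance weighting of log HRs, with each study's standard error recovered from its reported confidence interval. A minimal sketch; the study tuples below are hypothetical placeholders, not data from this meta-analysis:

```python
import math

def fixed_effect_pool(studies):
    """Inverse-variance fixed-effect pooling of hazard ratios.

    Each study is (HR, CI_low, CI_high); the SE of log HR is recovered from
    the 95% CI width, since CI = exp(log HR +/- 1.96 * SE).
    """
    wsum = wlsum = 0.0
    for hr, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2           # weight = 1 / variance of log HR
        wsum += w
        wlsum += w * math.log(hr)
    log_pooled = wlsum / wsum
    se_pooled = math.sqrt(1.0 / wsum)
    return (math.exp(log_pooled),
            (math.exp(log_pooled - 1.96 * se_pooled),
             math.exp(log_pooled + 1.96 * se_pooled)))

# Hypothetical per-study (HR, 95% CI) values for illustration only.
pooled_hr, ci = fixed_effect_pool([(1.05, 1.01, 1.09),
                                   (1.02, 0.99, 1.06),
                                   (1.08, 1.00, 1.16)])
```

A random-effects model would additionally estimate between-study variance (e.g. DerSimonian-Laird) and add it to each study's variance before weighting.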

  18. Automatic neuron segmentation and neural network analysis method for phase contrast microscopy images

    PubMed Central

    Pang, Jincheng; Özkucur, Nurdan; Ren, Michael; Kaplan, David L.; Levin, Michael; Miller, Eric L.

    2015-01-01

    Phase Contrast Microscopy (PCM) is an important tool for the long-term study of living cells. Unlike fluorescence methods, which suffer from photobleaching of fluorophore or dye molecules, PCM image contrast is generated by natural variations in the optical index of refraction. Unfortunately, the same physical principles which allow for these studies give rise to complex artifacts in the raw PCM imagery. Of particular interest in this paper are neuron images, where these image imperfections manifest in very different ways for the two structures of specific interest: cell bodies (somas) and dendrites. To address these challenges, we introduce a novel parametric image model using the level set framework and an associated variational approach which simultaneously restores and segments this class of images. Using this technique as the basis for an automated image analysis pipeline, we obtain results on both synthetic and real images that validate and demonstrate the advantages of our approach. PMID:26601004

  19. Multi-level segment analysis: definition and application in turbulent systems

    NASA Astrophysics Data System (ADS)

    Wang, L. P.; Huang, Y. X.

    2015-06-01

    For many complex systems, the interaction of different scales is among the most interesting and challenging features. Existing approaches, such as the structure-function and Fourier spectrum methods, have not been very successful at extracting the physical properties of different scale regimes. Fundamentally, these methods have their respective limitations, for instance scale mixing, i.e. the so-called infrared and ultraviolet effects. To make improvements in this regard, a new method, multi-level segment analysis (MSA), based on local extrema statistics, has been developed. Benchmark verifications (fractional Brownian motion) and important case tests (Lagrangian and two-dimensional turbulence) show that MSA can successfully reveal different scaling regimes that have remained quite controversial in turbulence research. In general, the MSA method proposed here can be applied to different dynamical systems in which the concepts of multiscale and multifractality are relevant.
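    The basic building block of a local-extrema-based segment analysis, splitting a series at successive local extrema and collecting per-segment lengths and amplitudes, can be sketched as follows. This is a simplified illustration of the idea, not the authors' full multi-level MSA algorithm:

```python
import numpy as np

def extrema_segments(signal):
    """Split a 1-D series into segments bounded by successive local extrema
    and return (length, amplitude) per segment: the statistics from which
    segment-based scaling analyses are built."""
    s = np.asarray(signal, dtype=float)
    d = np.diff(s)
    # Indices where the slope changes sign -> local maxima/minima.
    idx = np.where(d[:-1] * d[1:] < 0)[0] + 1
    bounds = np.concatenate(([0], idx, [len(s) - 1]))
    lengths = np.diff(bounds)              # segment lengths (in samples)
    amps = np.abs(np.diff(s[bounds]))      # absolute value change per segment
    return lengths, amps

# Demo on a random walk: segment statistics over many scales.
sig = np.cumsum(np.random.default_rng(2).normal(size=500))
lengths, amps = extrema_segments(sig)
```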

  20. Global fractional anisotropy and mean diffusivity together with segmented brain volumes assemble a predictive discriminant model for young and elderly healthy brains: a pilot study at 3T

    PubMed Central

    Garcia-Lazaro, Haydee Guadalupe; Becerra-Laparra, Ivonne; Cortez-Conradis, David; Roldan-Valadez, Ernesto

    2016-01-01

    Summary Several parameters of brain integrity can be derived from diffusion tensor imaging. These include fractional anisotropy (FA) and mean diffusivity (MD). Combination of these variables using multivariate analysis might result in a predictive model able to detect the structural changes of human brain aging. Our aim was to discriminate between young and older healthy brains by combining structural and volumetric variables from brain MRI: FA, MD, and white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) volumes. This was a cross-sectional study in 21 young (mean age, 25.71±3.04 years; range, 21–34 years) and 10 elderly (mean age, 70.20±4.02 years; range, 66–80 years) healthy volunteers. Multivariate discriminant analysis, with age as the dependent variable and WM, GM and CSF volumes, global FA and MD, and gender as the independent variables, was used to assemble a predictive model. The resulting model was able to differentiate between young and older brains: Wilks’ λ = 0.235, χ2 (6) = 37.603, p = .000001. Only global FA, WM volume and CSF volume significantly discriminated between groups. The total accuracy was 93.5%; the sensitivity, specificity and positive and negative predictive values were 91.30%, 100%, 100% and 80%, respectively. Global FA, WM volume and CSF volume are parameters that, when combined, reliably discriminate between young and older brains. A decrease in FA is the strongest predictor of membership of the older brain group, followed by an increase in WM and CSF volumes. Brain assessment using a predictive model might allow the follow-up of selected cases that deviate from normal aging. PMID:27027893
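    A two-group discriminant model of this kind can be sketched with a plain Fisher linear discriminant. The feature values below are hypothetical stand-ins for (FA, WM volume, CSF volume), not the study's data, and the study itself used a multivariate discriminant analysis that also included GM volume, MD and gender:

```python
import numpy as np

def fisher_discriminant(X0, X1):
    """Two-class Fisher linear discriminant: w maximises between-class
    separation over within-class scatter; threshold at the midpoint of the
    projected class means."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix.
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw, m1 - m0)
    thr = 0.5 * ((X0 @ w).mean() + (X1 @ w).mean())
    return w, thr

# Hypothetical 3-feature samples: 21 "young" and 10 "elderly" subjects.
rng = np.random.default_rng(1)
young = rng.normal([0.45, 550.0, 150.0], [0.02, 30.0, 15.0], size=(21, 3))
old = rng.normal([0.40, 580.0, 190.0], [0.02, 30.0, 15.0], size=(10, 3))
w, thr = fisher_discriminant(young, old)
# Class 1 ("old") projects above the threshold by construction of w.
accuracy = (np.sum(young @ w < thr) + np.sum(old @ w > thr)) / 31.0
```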

  1. Segmentation and volumetric measurement of renal cysts and parenchyma from MR images of polycystic kidneys using multi-spectral analysis method

    NASA Astrophysics Data System (ADS)

    Bae, K. T.; Commean, P. K.; Brunsden, B. S.; Baumgarten, D. A.; King, B. F., Jr.; Wetzel, L. H.; Kenney, P. J.; Chapman, A. B.; Torres, V. E.; Grantham, J. J.; Guay-Woodford, L. M.; Tao, C.; Miller, J. P.; Meyers, C. M.; Bennett, W. M.

    2008-03-01

    For segmentation and volume measurement of renal cysts and parenchyma from kidney MR images of subjects with autosomal dominant polycystic kidney disease (ADPKD), a semi-automated multi-spectral analysis (MSA) method was developed and applied to T1- and T2-weighted MR images. In this method, renal cysts and parenchyma were characterized and segmented according to their characteristic T1 and T2 signal intensity differences. The performance of the MSA segmentation method was tested on ADPKD phantoms and patients. Segmented renal cyst and parenchyma volumes were measured and compared with reference standard measurements: the fluid displacement method in the phantoms, and stereology and region-based thresholding methods in patients. Renal cysts and parenchyma were segmented successfully with the MSA method, and the volume measurements obtained were in good agreement with those of the other segmentation methods for both phantoms and subjects. The MSA method, however, was more time-consuming than the other segmentation methods because it required pre-segmentation, image registration, and tissue classification-determination steps.
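    The tissue-classification step of a multi-spectral analysis can be illustrated by nearest-centroid assignment in the joint (T1, T2) intensity space: each voxel is labeled with the tissue class whose centroid is closest in feature space. The centroid values below are hypothetical, and the authors' actual classifier may differ:

```python
import numpy as np

def classify_multispectral(t1, t2, centroids):
    """Per-voxel nearest-centroid classification in joint (T1, T2) intensity
    space: each voxel is assigned the index of the closest class centroid."""
    feats = np.stack([t1.ravel(), t2.ravel()], axis=1).astype(float)
    c = np.asarray(centroids, dtype=float)              # (n_classes, 2)
    d = np.linalg.norm(feats[:, None, :] - c[None], axis=2)
    return d.argmin(axis=1).reshape(t1.shape)

# Hypothetical centroids: class 0 = cyst fluid (T1-dark, T2-bright),
# class 1 = fat-like (T1-bright, T2-dark), class 2 = parenchyma (mid-range).
labels = classify_multispectral(
    np.array([[20, 200], [120, 110]]),      # toy 2x2 T1 image
    np.array([[240, 30], [100, 105]]),      # toy 2x2 T2 image
    centroids=[(20, 240), (200, 30), (110, 100)])
```

In practice this assignment step would be preceded by the registration of the T1 and T2 volumes and followed by per-class volume computation (voxel count times voxel volume).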

  2. Parallel runway requirement analysis study. Volume 2: Simulation manual

    NASA Technical Reports Server (NTRS)

    Ebrahimi, Yaghoob S.; Chun, Ken S.

    1993-01-01

    This document is a user manual for operating the PLAND_BLUNDER (PLB) simulation program. The simulation is based on two aircraft approaching parallel runways independently, using parallel Instrument Landing System (ILS) equipment during Instrument Meteorological Conditions (IMC). If an aircraft deviates from its assigned localizer course toward the opposite runway, this constitutes a blunder which could endanger the aircraft on the adjacent path. The worst-case scenario is a blundering aircraft that is unable to recover and continues toward the adjacent runway. PLAND_BLUNDER is a Monte Carlo-type simulation which models the events and aircraft positioning during such a blunder situation. The model simulates two aircraft performing parallel ILS approaches using Instrument Flight Rules (IFR) or visual procedures. PLB uses a simple movement model and control law in three dimensions (X, Y, Z). The simulation's input and output parameters are defined in this document, along with a sample of the statistical analysis. This document is the second volume of a two-volume set; Volume 1 describes the application of the PLB to the analysis of close parallel runway operations.

  3. Application of Control Volume Analysis to Cerebrospinal Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Wei, Timothy; Cohen, Benjamin; Anor, Tomer; Madsen, Joseph

    2011-11-01

    Hydrocephalus is among the most common birth defects and currently can be neither prevented nor cured. Afflicted individuals face serious issues which at present are too complicated and not well enough understood to treat via systematic therapies. This talk outlines the framework and application of a control volume methodology to clinical phase contrast MRI data. Specifically, integral control volume analysis uses a fundamental fluid dynamics methodology to quantify intracranial dynamics within a precise, direct, and physically meaningful framework. A chronically shunted hydrocephalic patient in need of a revision procedure was used as an in vivo case study. Magnetic resonance velocity measurements within the patient's aqueduct were obtained in four biomedical states and analyzed using the methods presented here. Pressure force estimates were obtained, showing distinct differences in amplitude, phase, and waveform shape for different intracranial states within the same individual. Thoughts on the implications and opportunities for physiological and diagnostic research and development will be presented.

  4. Optimal analysis for segmented mirror capture and alignment in space optics system

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaofang; Yu, Xin; Wang, Xia; Zhao, Lei

    2008-07-01

    A great many segmented mirror errors, consisting of piston and tip-tilt components, exist when a large-aperture segmented space optics system deploys. These errors cause the images of the segmented mirrors to depart from the field of view. A proper scanning function should therefore be adopted to control the actuators rotating each segmented mirror, so that its image can be brought into the view and placed in the ideal position. In this paper, scanning functions such as the screw type, rose type, and helianthus type are analyzed and discussed, and an optimal scanning-function principle based on capturing images at the greatest speed is put forward. After capture, each outer segment must be brought into alignment with the central segment. Because central and outer segments with surface errors have different figures, a new way to control the alignment accuracy is presented, which can effectively decrease the adverse effects of mirror surface and position errors. As an example, a simulation experiment was carried out to study the characteristics of different scanning functions and the effects of mirror surface and position errors on alignment accuracy. In the simulation, the piston and tip-tilt error scales and the ideal position of the segmented mirror are given, the capture and alignment process is realized using the optics design software ZEMAX, and the optimal scanning function and alignment accuracy are determined.

  5. Quantitative Analysis of the Drosophila Segmentation Regulatory Network Using Pattern Generating Potentials

    PubMed Central

    Richards, Adam; McCutchan, Michael; Wakabayashi-Ito, Noriko; Hammonds, Ann S.; Celniker, Susan E.; Kumar, Sudhir; Wolfe, Scot A.; Brodsky, Michael H.; Sinha, Saurabh

    2010-01-01

    Cis-regulatory modules that drive precise spatial-temporal patterns of gene expression are central to the process of metazoan development. We describe a new computational strategy to annotate genomic sequences based on their “pattern generating potential” and to produce quantitative descriptions of transcriptional regulatory networks at the level of individual protein-module interactions. We use this approach to convert the qualitative understanding of interactions that regulate Drosophila segmentation into a network model in which a confidence value is associated with each transcription factor-module interaction. Sequence information from multiple Drosophila species is integrated with transcription factor binding specificities to determine conserved binding site frequencies across the genome. These binding site profiles are combined with transcription factor expression information to create a model to predict module activity patterns. This model is used to scan genomic sequences for the potential to generate all or part of the expression pattern of a nearby gene, obtained from available gene expression databases. Interactions between individual transcription factors and modules are inferred by a statistical method to quantify a factor's contribution to the module's pattern generating potential. We use these pattern generating potentials to systematically describe the location and function of known and novel cis-regulatory modules in the segmentation network, identifying many examples of modules predicted to have overlapping expression activities. Surprisingly, conserved transcription factor binding site frequencies were as effective as experimental measurements of occupancy in predicting module expression patterns or factor-module interactions. Thus, unlike previous module prediction methods, this method predicts not only the location of modules but also their spatial activity pattern and the factors that directly determine this pattern. 
As databases of transcription factor specificities and in vivo gene expression patterns grow, analysis of pattern generating potentials provides a general method to decode transcriptional regulatory sequences and networks. PMID:20808951

  6. Sequence Analysis of the Segmental Duplication Responsible for Paris Sex-Ratio Drive in Drosophila simulans.

    PubMed

    Fouvry, Lucie; Ogereau, David; Berger, Anne; Gavory, Frederick; Montchamp-Moreau, Catherine

    2011-10-01

    Sex-ratio distorters are X-linked selfish genetic elements that facilitate their own transmission by subverting Mendelian segregation at the expense of the Y chromosome. Naturally occurring cases of sex-linked distorters have been reported in a variety of organisms, including several species of Drosophila; they trigger genetic conflict over the sex ratio, which is an important evolutionary force. However, with a few exceptions, the causal loci are unknown. Here, we molecularly characterize the segmental duplication involved in the Paris sex-ratio system that is still evolving in natural populations of Drosophila simulans. This 37.5 kb tandem duplication spans six genes, from the second intron of the Trf2 gene (TATA box binding protein-related factor 2) to the first intron of the org-1 gene (optomotor-blind-related-gene-1). Sequence analysis showed that the duplication arose through the production of an exact copy on the template chromosome itself. We estimated this event to be less than 500 years old. We also detected specific signatures of the duplication mechanism; these support the Duplication-Dependent Strand Annealing model. The region at the junction between the two duplicated segments contains several copies of an active transposable element, Hosim1, alternating with 687 bp repeats that are noncoding but transcribed. The almost-complete sequence identity between copies made it impossible to complete the sequencing and assembly of this region. These results form the basis for the functional dissection of Paris sex-ratio drive and will be valuable for future studies designed to better understand the dynamics and the evolutionary significance of sex chromosome drive. PMID:22384350

  7. Sequence Analysis of the Segmental Duplication Responsible for Paris Sex-Ratio Drive in Drosophila simulans

    PubMed Central

    Fouvry, Lucie; Ogereau, David; Berger, Anne; Gavory, Frederick; Montchamp-Moreau, Catherine

    2011-01-01

    Sex-ratio distorters are X-linked selfish genetic elements that facilitate their own transmission by subverting Mendelian segregation at the expense of the Y chromosome. Naturally occurring cases of sex-linked distorters have been reported in a variety of organisms, including several species of Drosophila; they trigger genetic conflict over the sex ratio, which is an important evolutionary force. However, with a few exceptions, the causal loci are unknown. Here, we molecularly characterize the segmental duplication involved in the Paris sex-ratio system that is still evolving in natural populations of Drosophila simulans. This 37.5 kb tandem duplication spans six genes, from the second intron of the Trf2 gene (TATA box binding protein-related factor 2) to the first intron of the org-1 gene (optomotor-blind-related-gene-1). Sequence analysis showed that the duplication arose through the production of an exact copy on the template chromosome itself. We estimated this event to be less than 500 years old. We also detected specific signatures of the duplication mechanism; these support the Duplication-Dependent Strand Annealing model. The region at the junction between the two duplicated segments contains several copies of an active transposable element, Hosim1, alternating with 687 bp repeats that are noncoding but transcribed. The almost-complete sequence identity between copies made it impossible to complete the sequencing and assembly of this region. These results form the basis for the functional dissection of Paris sex-ratio drive and will be valuable for future studies designed to better understand the dynamics and the evolutionary significance of sex chromosome drive. PMID:22384350

  8. Precise delineation of clinical target volume for crossing-segments thoracic esophageal squamous cell carcinoma based on the pattern of lymph node metastases

    PubMed Central

    Dong, Yuanli; Guan, Hui; Huang, Wei; Zhang, Zicheng; Zhao, Dongbo; Liu, Yang; Zhou, Tao

    2015-01-01

    Background This work aims to investigate the lymph node metastasis (LNM) pattern of crossing-segments thoracic esophageal squamous cell carcinoma (ESCC) and its significance for clinical target volume (CTV) delineation. Methods From January 2000 to December 2014, 3,587 patients with thoracic ESCC underwent surgery, including esophagectomy and lymphadenectomy, at Shandong Cancer Hospital and Institute. Information on tumor location based on preoperative endoscopic ultrasonography (EUS) and postoperative pathological results was retrospectively collected. The extent of the irradiation field was determined based on the LNM pattern. Results Among the patients reviewed, 1,501 (41.8%) had crossing-segments thoracic ESCC. The rates of LNM were 12.1%, 15.2%, 8.0%, 3.0%, and 7.1% in the neck, upper mediastinum, middle mediastinum, lower mediastinum, and abdominal cavity for patients with upper-middle thoracic ESCC; 10.3%, 8.2%, 11.0%, 4.8%, and 8.2% for middle-upper thoracic ESCC; 4.8%, 4.8%, 24.1%, 6.3%, and 22.8% for middle-lower thoracic ESCC; and 3.9%, 3.1%, 22.8%, 11.9%, and 25.8% for lower-middle thoracic ESCC, respectively. The top three sites of LNM were 105 (12.1%), 108 (6.1%), and 101 (6.1%) for upper-middle thoracic ESCC; 108 (8.2%), 105 (7.5%), and 106 (6.8%) for middle-upper thoracic ESCC; 1 (18.8%), 108 (17.9%), and 107 (9.6%) for middle-lower thoracic ESCC; and 1 (21.3%), 108 (16.1%), and 107 (10.1%) for lower-middle thoracic ESCC. Conclusions Crossing-segments thoracic ESCC was remarkably common. When delineating the CTV, tumor location should be seriously taken into consideration. For upper-middle and middle-upper thoracic ESCC, the abdominal cavity may be spared from irradiation. For middle-lower and lower-middle thoracic ESCC, besides irradiation of the relevant mediastinal regions, irradiation of the abdominal cavity cannot be neglected. PMID:26793353

  9. Ultratrace LC-MS/MS analysis of segmented calf hair for retrospective assessment of time of clenbuterol administration in Agriforensics.

    PubMed

    Duvivier, Wilco F; van Beek, Teris A; Meijer, Thijs; Peeters, Ruth J P; Groot, Maria J; Sterk, Saskia S; Nielen, Michel W F

    2015-01-21

    In agriforensics, time of administration is often debated when illegal drug residues, such as clenbuterol, are found in frequently traded cattle. In this proof-of-concept work, the feasibility of obtaining retrospective timeline information from segmented calf tail hair analyses has been studied. First, an ultraperformance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) hair analysis method was adapted to accommodate smaller sample sizes and in-house validated. Then, longitudinal 1 cm segments of calf tail hair were analyzed to obtain clenbuterol concentration profiles. The profiles found were in good agreement with calculated, theoretical positions of the clenbuterol residues along the hair. Following assessment of the average growth rate of calf tail hair, time of clenbuterol administration could be retrospectively determined from segmented hair analysis data. The data from the initial animal treatment study (n = 2) suggest that time of treatment can be retrospectively estimated with an error of 3-17 days. PMID:25537490
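    Once a growth rate is assumed, the retrospective dating itself is simple arithmetic: a segment's distance from the hair root divided by the growth rate gives the time window in which the drug was incorporated. A minimal sketch; the 0.07 cm/day rate and the zero lag below are assumed placeholder values, not the study's measured parameters:

```python
def days_since_administration(segment_start_cm, segment_end_cm,
                              growth_rate_cm_per_day, lag_days=0.0):
    """Map a drug-positive hair segment (distances measured from the root,
    in cm) back to a window of days before sampling. The growth rate and
    the lag between drug incorporation and hair emergence are assumptions
    that must be calibrated per species and body site."""
    lo = segment_start_cm / growth_rate_cm_per_day + lag_days
    hi = segment_end_cm / growth_rate_cm_per_day + lag_days
    return lo, hi

# With an assumed 0.07 cm/day tail-hair growth rate, a positive 2-3 cm
# segment maps to roughly 29-43 days before sampling.
lo, hi = days_since_administration(2.0, 3.0, 0.07)
```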

  10. Identifying Like-Minded Audiences for Global Warming Public Engagement Campaigns: An Audience Segmentation Analysis and Tool Development

    PubMed Central

    Maibach, Edward W.; Leiserowitz, Anthony; Roser-Renouf, Connie; Mertz, C. K.

    2011-01-01

    Background Achieving national reductions in greenhouse gas emissions will require public support for climate and energy policies and changes in population behaviors. Audience segmentation – a process of identifying coherent groups within a population – can be used to improve the effectiveness of public engagement campaigns. Methodology/Principal Findings In Fall 2008, we conducted a nationally representative survey of American adults (n = 2,164) to identify audience segments for global warming public engagement campaigns. By subjecting multiple measures of global warming beliefs, behaviors, policy preferences, and issue engagement to latent class analysis, we identified six distinct segments ranging in size from 7 to 33% of the population. These six segments formed a continuum, from a segment of people who were highly worried, involved and supportive of policy responses (18%), to a segment of people who were completely unconcerned and strongly opposed to policy responses (7%). Three of the segments (totaling 70%) were to varying degrees concerned about global warming and supportive of policy responses, two (totaling 18%) were unsupportive, and one was largely disengaged (12%), having paid little attention to the issue. Certain behaviors and policy preferences varied greatly across these audiences, while others did not. Using discriminant analysis, we subsequently developed 36-item and 15-item instruments that can be used to categorize respondents with 91% and 84% accuracy, respectively. Conclusions/Significance In late 2008, Americans supported a broad range of policies and personal actions to reduce global warming, although there was wide variation among the six identified audiences. To enhance the impact of campaigns, government agencies, non-profit organizations, and businesses seeking to engage the public can selectively target one or more of these audiences rather than address an undifferentiated general population. 
Our screening instruments are available to assist in that process. PMID:21423743

  11. Multi-atlas multi-shape segmentation of fetal brain MRI for volumetric and morphometric analysis of ventriculomegaly.

    PubMed

    Gholipour, Ali; Akhondi-Asl, Alireza; Estroff, Judy A; Warfield, Simon K

    2012-04-15

    The recent development of motion robust super-resolution fetal brain MRI holds out the potential for dramatic new advances in volumetric and morphometric analysis. Volumetric analysis based on volumetric and morphometric biomarkers of the developing fetal brain must include segmentation. Automatic segmentation of fetal brain MRI is challenging, however, due to the highly variable size and shape of the developing brain; possible structural abnormalities; and the relatively poor resolution of fetal MRI scans. To overcome these limitations, we present a novel, constrained, multi-atlas, multi-shape automatic segmentation method that specifically addresses the challenge of segmenting multiple structures with similar intensity values in subjects with strong anatomic variability. Accordingly, we have applied this method to shape segmentation of normal, dilated, or fused lateral ventricles for quantitative analysis of ventriculomegaly (VM), which is a pivotal finding in the earliest stages of fetal brain development, and warrants further investigation. Utilizing these innovative techniques, we introduce novel volumetric and morphometric biomarkers of VM comparing these values to those that are generated by standard methods of VM analysis, i.e., by measuring the ventricular atrial diameter (AD) on manually selected sections of 2D ultrasound or 2D MRI. To this end, we studied 25 normal and abnormal fetuses in the gestation age (GA) range of 19 to 39 weeks (mean=28.26, stdev=6.56). This heterogeneous dataset was essentially used to 1) validate our segmentation method for normal and abnormal ventricles; and 2) show that the proposed biomarkers may provide improved detection of VM as compared to the AD measurement. PMID:22500924

  12. Multi-Atlas Multi-Shape Segmentation of Fetal Brain MRI for Volumetric and Morphometric Analysis of Ventriculomegaly

    PubMed Central

    Gholipour, Ali; Akhondi-Asl, Alireza; Estroff, Judy A.; Warfield, Simon K.

    2012-01-01

    The recent development of motion robust super-resolution fetal brain MRI holds out the potential for dramatic new advances in volumetric and morphometric analysis. Volumetric analysis based on volumetric and morphometric biomarkers of the developing fetal brain must include segmentation. Automatic segmentation of fetal brain MRI is challenging, however, due to the highly variable size and shape of the developing brain; possible structural abnormalities; and the relatively poor resolution of fetal MRI scans. To overcome these limitations, we present a novel, constrained, multi-atlas, multi-shape automatic segmentation method that specifically addresses the challenge of segmenting multiple structures with similar intensity values in subjects with strong anatomic variability. Accordingly, we have applied this method to shape segmentation of normal, dilated, or fused lateral ventricles for quantitative analysis of ventriculomegaly (VM), which is a pivotal finding in the earliest stages of fetal brain development, and warrants further investigation. Utilizing these innovative techniques, we introduce novel volumetric and morphometric biomarkers of VM comparing these values to those that are generated by standard methods of VM analysis, i.e., by measuring the ventricular atrial diameter (AD) on manually selected sections of 2D ultrasound or 2D MRI. To this end, we studied 25 normal and abnormal fetuses in the gestation age (GA) range of 19 to 39 weeks (mean=28.26, stdev=6.56). This heterogeneous dataset was essentially used to 1) validate our segmentation method for normal and abnormal ventricles; and 2) show that the proposed biomarkers may provide improved detection of VM as compared to the AD measurement. PMID:22500924
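
    The simplest baseline for fusing propagated atlas labels (much simpler than the constrained multi-atlas, multi-shape method described above) is per-voxel majority voting. The sketch below is illustrative only; the array shapes, label conventions, and function name are assumptions, not the authors' algorithm.

```python
import numpy as np

def majority_vote_fusion(atlas_labels):
    """Fuse propagated atlas label maps by per-voxel majority vote.
    atlas_labels: (n_atlases, ...) integer array; ties go to the lower label."""
    stack = np.asarray(atlas_labels)
    n_labels = int(stack.max()) + 1
    # Count votes for each label at every voxel, then take the argmax.
    votes = np.stack([(stack == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

# Three toy "atlas" segmentations of a 2x2 slice (0 = background, 1 = ventricle).
a = np.array([[1, 0], [1, 1]])
b = np.array([[1, 1], [0, 1]])
c = np.array([[0, 0], [1, 1]])
fused = majority_vote_fusion([a, b, c])   # per-voxel majority of a, b, c
```

    In practice, shape-constrained or locally weighted fusion (as in the paper) outperforms plain voting when structures are dilated or fused.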

  13. Optical granulometric analysis of sedimentary deposits by color segmentation-based software: OPTGRAN-CS

    NASA Astrophysics Data System (ADS)

    Chávez, G. Moreno; Sarocchi, D.; Santana, E. Arce; Borselli, L.

    2015-12-01

    The study of grain size distribution is fundamental for understanding sedimentological environments. Through these analyses, clast erosion, transport and deposition processes can be interpreted and modeled. However, grain size distribution analysis can be difficult in some outcrops due to the number and complexity of the arrangement of clasts and matrix and their physical size. Despite various technological advances, it is almost impossible to obtain the full grain size distribution (from blocks to sand grains) with a single method or instrument of analysis. For this reason, development in this area continues to be fundamental. In recent years, various methods of particle size analysis by automatic image processing have been developed, owing to their potential advantages over classical ones: speed and the detailed information they yield (virtually for each analyzed particle). In this framework, we have developed a novel algorithm and software for grain size distribution analysis, based on color image segmentation using an entropy-controlled quadratic Markov measure field algorithm and the Rosiwal method for counting intersections between clasts and linear transects in the images. We tested the novel algorithm on different sedimentary deposit types from 14 sedimentological environments. The results of the new algorithm were compared with grain counts performed manually with the same Rosiwal method by experts. The new algorithm has the same accuracy as a classical manual count, but the methodology is much easier to apply and dramatically less time-consuming. Once field outcrop images have been recorded, the new software therefore increases the productivity of clast-deposit analysis significantly.
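
    The Rosiwal intercept count mentioned above can be sketched as a run-length scan along one transect of a labeled (segmented) image. This is a minimal illustration assuming integer clast labels on a background matrix of zeros; it is not the published OPTGRAN-CS implementation.

```python
import numpy as np

def rosiwal_intercepts(labels, row):
    """Collect intercept lengths of each clast along one horizontal transect.
    labels: 2D integer array (0 = matrix, >0 = clast id) from a segmentation.
    Returns {clast_id: [run lengths in pixels crossed by the transect]}."""
    line = labels[row]
    intercepts = {}
    start = 0
    for i in range(1, len(line) + 1):
        # Close the current run at the end of the line or on a label change.
        if i == len(line) or line[i] != line[start]:
            if line[start] != 0:  # ignore matrix runs
                intercepts.setdefault(int(line[start]), []).append(i - start)
            start = i
    return intercepts

# Toy segmented outcrop image: two clasts (1 and 2) in a matrix of zeros.
img = np.array([
    [0, 1, 1, 1, 0, 2, 2, 0],
    [0, 0, 1, 0, 0, 2, 2, 2],
])
counts = rosiwal_intercepts(img, 0)   # -> {1: [3], 2: [2]}
```

    Summing intercept lengths per clast over many transects approximates the areal (and hence volumetric) grain size distribution.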

  14. Analysis of volume holographic storage allowing large-angle illumination

    NASA Astrophysics Data System (ADS)

    Shamir, Joseph

    2005-05-01

    Advanced technological developments have stimulated renewed interest in volume holography for applications such as information storage and wavelength multiplexing for communications and laser beam shaping. In these and many other applications, the information-carrying wave fronts usually possess narrow spatial-frequency bands, although they may propagate at large angles with respect to each other or a preferred optical axis. Conventional analytic methods are not capable of properly analyzing the optical architectures involved. For mitigation of the analytic difficulties, a novel approximation is introduced to treat narrow spatial-frequency band wave fronts propagating at large angles. This approximation is incorporated into the analysis of volume holography based on a plane-wave decomposition and Fourier analysis. As a result of the analysis, the recently introduced generalized Bragg selectivity is rederived for this more general case and is shown to provide enhanced performance for the above indicated applications. The power of the new theoretical description is demonstrated with the help of specific examples and computer simulations. The simulations reveal some interesting effects, such as coherent motion blur, that were predicted in an earlier publication.

  15. Stereophotogrammetric Mass Distribution Parameter Determination Of The Lower Body Segments For Use In Gait Analysis

    NASA Astrophysics Data System (ADS)

    Sheffer, Daniel B.; Schaer, Alex R.; Baumann, Juerg U.

    1989-04-01

    Inclusion of mass distribution information in biomechanical analysis of motion is a requirement for the accurate calculation of external moments and forces acting on the segmental joints during locomotion. Regression equations produced from a variety of photogrammetric, anthropometric and cadaveric studies have been developed and espoused in the literature. Because the accuracy of inertial properties predicted by regression equations developed on one population is limited when they are applied to a different study population, a measurement technique that accurately defines the shape of each individual subject is desirable. This individual data acquisition method is especially needed when analyzing the gait of subjects whose extremity geometry differs greatly from that considered "normal", or who may possess gross asymmetries in shape between their own contralateral limbs. This study presents the photogrammetric acquisition and data analysis methodology used to assess the inertial tensors of two groups of subjects, one with spastic diplegic cerebral palsy and the other considered normal.

  16. Impact of BAC limit reduction on different population segments: a Poisson fixed effect analysis.

    PubMed

    Kaplan, Sigal; Prato, Carlo Giacomo

    2007-11-01

    Over the past few decades, several countries enacted the reduction of the legal blood alcohol concentration (BAC) limit, often alongside administrative license revocation or suspension, to combat drinking-and-driving behavior. Several researchers investigated the effectiveness of these policies by applying different analysis procedures, while assuming population homogeneity in responding to these laws. The present analysis focuses on evaluating the impact of BAC limit reduction on different population segments. Poisson regression models, adapted to account for possible observation dependence over time and state-specific effects, are estimated to measure the reduction in the number of alcohol-related accidents and fatalities for single-vehicle accidents in 22 U.S. jurisdictions over a period of 15 years starting in 1990. Model estimates demonstrate that, for alcohol-related single-vehicle crashes, (i) BAC laws are more effective in reducing the number of casualties than the number of accidents, (ii) women and the elderly exhibit higher compliance with the law than men and than young adults and adults, respectively, and (iii) the presence of passengers in the vehicle enhances the driver's sense of responsibility. PMID:17920837
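
    As a minimal illustration of the modeling idea (not the paper's fixed-effects specification, which additionally handles observation dependence over time and state effects), a Poisson regression with a log link can be fit by Newton's method. The data below are synthetic and purely illustrative.

```python
import numpy as np

def poisson_fit(X, y, iters=25):
    """Maximum-likelihood Poisson regression (log link) via Newton's method.
    Assumes the first column of X is an intercept."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean())          # start near the overall rate
    for _ in range(iters):
        mu = np.exp(X @ beta)           # expected counts
        grad = X.T @ (y - mu)           # score vector
        hess = X.T @ (X * mu[:, None])  # Fisher information (canonical link)
        beta += np.linalg.solve(hess, grad)
    return beta

# Synthetic, illustrative data: crash counts before/after a hypothetical
# BAC limit reduction (second column is the "after law" indicator).
y = np.array([20., 22., 10., 11.])
X = np.array([[1., 0.], [1., 0.], [1., 1.], [1., 1.]])
beta = poisson_fit(X, y)
rate_ratio = np.exp(beta[1])   # multiplicative effect of the law on crash rates
```

    A rate ratio below 1 would indicate fewer crashes after the limit reduction; segment-specific effects are obtained by adding interaction columns to X.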

  17. High-throughput histopathological image analysis via robust cell segmentation and hashing.

    PubMed

    Zhang, Xiaofan; Xing, Fuyong; Su, Hai; Yang, Lin; Zhang, Shaoting

    2015-12-01

    Computer-aided diagnosis of histopathological images usually requires examining all cells for accurate diagnosis. Traditional computational methods may have efficiency issues when performing cell-level analysis. In this paper, we propose a robust and scalable solution to enable such analysis in a real-time fashion. Specifically, a robust segmentation method is developed to delineate cells accurately using Gaussian-based hierarchical voting and a repulsive balloon model. A large-scale image retrieval approach is also designed to examine and classify each cell of a testing image by comparing it with a massive database, e.g., a half-million cells extracted from the training dataset. We evaluate this proposed framework on a challenging and important clinical use case, i.e., differentiation of two types of lung cancers (adenocarcinoma and squamous carcinoma), using thousands of lung microscopic tissue images extracted from hundreds of patients. Our method has achieved promising accuracy and running time by searching among a half-million cells. PMID:26599156

  18. Image Segmentation and Analysis of Flexion-Extension Radiographs of Cervical Spines

    PubMed Central

    Enikov, Eniko T.

    2014-01-01

    We present a new analysis tool for cervical flexion-extension radiographs based on machine vision and computerized image processing. The method is based on semiautomatic image segmentation leading to detection of common landmarks such as the spinolaminar (SL) line or contour lines of the implanted anterior cervical plates. The technique allows for visualization of the local curvature of these landmarks during flexion-extension experiments. In addition to changes in the curvature of the SL line, it has been found that the cervical plates also deform during flexion-extension examination. While extension radiographs reveal larger curvature changes in the SL line, flexion radiographs on the other hand tend to generate larger curvature changes in the implanted cervical plates. Furthermore, while some lordosis is always present in the cervical plates by design, it actually decreases during extension and increases during flexion. Possible causes of this unexpected finding are also discussed. The described analysis may lead to a more precise interpretation of flexion-extension radiographs, allowing diagnosis of spinal instability and/or pseudoarthrosis in already seemingly fused spines. PMID:27006937
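
    To illustrate how the local curvature of a digitized landmark such as the spinolaminar line can be quantified (the abstract does not specify the authors' exact estimator, so this is an assumed, generic approach), each triplet of consecutive points defines a circumscribed circle whose inverse radius is the discrete curvature.

```python
import numpy as np

def local_curvature(pts):
    """Discrete curvature at interior vertices of a 2D polyline
    (e.g. a digitized spinolaminar line): each point triplet defines a
    circumscribed circle, and k = 4*area / (|ab| * |bc| * |ca|)."""
    pts = np.asarray(pts, dtype=float)
    k = np.zeros(len(pts))
    for i in range(1, len(pts) - 1):
        a, b, c = pts[i - 1], pts[i], pts[i + 1]
        ab, ac = b - a, c - a
        area2 = ab[0] * ac[1] - ab[1] * ac[0]   # twice the signed triangle area
        la = np.linalg.norm(b - a)
        lb = np.linalg.norm(c - b)
        lc = np.linalg.norm(c - a)
        denom = la * lb * lc
        k[i] = 2 * area2 / denom if denom else 0.0
    return k

# Points sampled on a circle of radius 10: curvature is exactly 1/10.
theta = np.linspace(0, np.pi / 2, 20)
circle = np.column_stack([10 * np.cos(theta), 10 * np.sin(theta)])
k = local_curvature(circle)
```

    The sign of k distinguishes lordotic from kyphotic bending along the landmark, which is the quantity compared between flexion and extension radiographs.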

  19. A coronary artery segmentation method based on multiscale analysis and region growing.

    PubMed

    Kerkeni, Asma; Benabdallah, Asma; Manzanera, Antoine; Bedoui, Mohamed Hedi

    2016-03-01

    Accurate coronary artery segmentation is a fundamental step in various medical imaging applications such as stenosis detection, 3D reconstruction and cardiac dynamics assessment. In this paper, a multiscale region growing (MSRG) method for coronary artery segmentation in 2D X-ray angiograms is proposed. First, a region growing rule incorporating both vesselness and direction information in a unique way is introduced. Then an iterative multiscale search based on this criterion is performed. Selected points in each step are considered as seeds for the following step. By combining vesselness and direction information in the growing rule, this method is able to avoid blockage caused by low vesselness values in vascular regions, which, in turn, yields a continuous vessel tree. Performing the process in a multiscale fashion helps to extract thin and peripheral vessels often missed by other segmentation methods. Quantitative evaluation performed on real angiography images shows that the proposed segmentation method identifies about 80% of the total coronary artery tree in relatively easy images and 70% in challenging cases with a mean precision of 82%, and outperforms other segmentation methods in terms of sensitivity. The MSRG segmentation method was also implemented with different enhancement filters, and the Frangi filter was shown to give the best results. The proposed segmentation method has proven to be well suited to coronary artery segmentation. It keeps an acceptable performance when dealing with challenging situations such as noise, stenosis and poor contrast. PMID:26748040
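
    The basic region-growing step can be sketched as a breadth-first flood from seed pixels over a vesselness map. This single-scale, intensity-only version is a simplification: the MSRG method described above additionally uses vessel direction in the growing rule and iterates the search over scales.

```python
import numpy as np
from collections import deque

def region_grow(vesselness, seeds, thresh):
    """Grow a segmentation from seed pixels, accepting 4-connected
    neighbours whose vesselness response meets `thresh`."""
    h, w = vesselness.shape
    grown = np.zeros((h, w), dtype=bool)
    queue = deque(seeds)
    for s in seeds:
        grown[s] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not grown[ny, nx]
                    and vesselness[ny, nx] >= thresh):
                grown[ny, nx] = True
                queue.append((ny, nx))
    return grown

# Toy vesselness map with a bright horizontal "vessel" in row 1.
v = np.array([[0.1, 0.1, 0.1, 0.1],
              [0.9, 0.8, 0.7, 0.9],
              [0.1, 0.1, 0.1, 0.1]])
mask = region_grow(v, [(1, 0)], thresh=0.5)   # grows along the bright row
```

    Thresholding raw vesselness alone blocks growth wherever the filter response dips, which is exactly the failure mode the direction-aware rule in the paper is designed to avoid.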

  20. A Unified Set of Analysis Tools for Uterine Cervix Image Segmentation

    PubMed Central

    Xue, Zhiyun; Long, Rodney; Antani, Sameer; Neve, Leif; Zhu, Yaoyao; Thoma, George

    2010-01-01

    Segmentation is a fundamental component of many medical image processing applications, and it has long been recognized as a challenging problem. In this paper, we report our research and development efforts on analyzing and extracting clinically meaningful regions from uterine cervix images in a large database created for the study of cervical cancer. In addition to proposing new algorithms, we also focus on developing open source tools which are in synchrony with the research objectives. These efforts have resulted in three Web-accessible tools which address three important and interrelated sub-topics in medical image segmentation, respectively: the BMT (Boundary Marking Tool), CST (Cervigram Segmentation Tool), and MOSES (Multi-Observer Segmentation Evaluation System). The BMT is for manual segmentation, typically to collect “ground truth” image regions from medical experts. The CST is for automatic segmentation, and MOSES is for segmentation evaluation. These tools are designed to be a unified set in which data can be conveniently exchanged. They have value not only for improving the reliability and accuracy of algorithms for uterine cervix image segmentation, but also for promoting collaboration between biomedical experts and engineers, which is crucial to medical image processing applications. Although the CST is designed for the unique characteristics of cervigrams, the BMT and MOSES are very general and extensible, and can be easily adapted to other biomedical image collections. PMID:20510585

  1. A Theoretical Analysis of How Segmentation of Dynamic Visualizations Optimizes Students' Learning

    ERIC Educational Resources Information Center

    Spanjers, Ingrid A. E.; van Gog, Tamara; van Merrienboer, Jeroen J. G.

    2010-01-01

    This article reviews studies investigating segmentation of dynamic visualizations (i.e., showing dynamic visualizations in pieces with pauses in between) and discusses two not mutually exclusive processes that might underlie the effectiveness of segmentation. First, cognitive activities needed for dealing with the transience of dynamic…

  2. Unconventional Word Segmentation in Emerging Bilingual Students' Writing: A Longitudinal Analysis

    ERIC Educational Resources Information Center

    Sparrow, Wendy

    2014-01-01

    This study explores cross-language and longitudinal patterns in unconventional word segmentation in 25 emerging bilingual students' (Spanish/English) writing from first through third grade. Spanish and English writing samples were collected annually and analyzed for two basic types of unconventional word segmentation: hyposegmentation, in…

  3. Understanding the market for geographic information: A market segmentation and characteristics analysis

    NASA Technical Reports Server (NTRS)

    Piper, William S.; Mick, Mark W.

    1994-01-01

    Findings and results from a marketing research study are presented. The report identifies market segments and the product types to satisfy demand in each. An estimate of market size is based on the specific industries in each segment. A sample of ten industries was used in the study. The scientific study covered U.S. firms only.

  4. Analysis of Cavity Volumes in Proteins Using Percolation Theory

    NASA Astrophysics Data System (ADS)

    Green, Sheridan; Jacobs, Donald; Farmer, Jenny

    Molecular packing is studied in a diverse set of globular proteins in their native state, ranging in size from 34 to 839 residues. A new algorithm has been developed that builds upon the classic Hoshen-Kopelman algorithm for site percolation, combined with a local connection criterion that classifies empty space within a protein as a cavity when it is large enough to hold a spherical probe of radius R, and as a microvoid otherwise. Although a microvoid cannot fit an object (e.g. molecule or ion) that is the size of the probe or larger, total microvoid volume is a major contribution to protein volume. Importantly, the cavity and microvoid classification depends on probe radius. As probe size decreases, less microvoid forms in favor of more cavities. As probe size is varied from large to small, many disconnected cavities merge to form a percolating path. For fixed probe size, microvoid, cavity and solvent-accessible boundary volume properties reflect conformational fluctuations. These results are visualized on three-dimensional structures. Analysis of the cluster statistics within the framework of percolation theory suggests that interconversion between microvoid and cavity pathways regulates the dynamics of solvent penetration during partial unfolding events important to protein function.
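
    The cluster-labeling core referred to above, the classic Hoshen-Kopelman algorithm with union-find label merging, can be sketched in 2D as follows. This is a generic site-percolation version under assumed conventions (1 = occupied site), not the authors' 3D cavity/microvoid classifier.

```python
import numpy as np

def find(parent, i):
    """Union-find root lookup with path halving."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def hoshen_kopelman(grid):
    """Label 4-connected clusters of occupied sites (1s) in a 2D grid,
    scanning row by row and merging provisional labels with union-find."""
    h, w = grid.shape
    labels = np.zeros((h, w), dtype=int)
    parent = [0]          # parent[k] = representative of provisional label k
    nxt = 1
    for y in range(h):
        for x in range(w):
            if not grid[y, x]:
                continue
            up = labels[y - 1, x] if y > 0 else 0
            left = labels[y, x - 1] if x > 0 else 0
            if up == 0 and left == 0:       # new cluster
                parent.append(nxt)
                labels[y, x] = nxt
                nxt += 1
            elif up and left:               # bridge site: merge two clusters
                ru, rl = find(parent, up), find(parent, left)
                parent[max(ru, rl)] = min(ru, rl)
                labels[y, x] = min(ru, rl)
            else:                           # extend the one neighbouring cluster
                labels[y, x] = up or left
    for y in range(h):                      # flatten labels to cluster roots
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(parent, labels[y, x])
    return labels

grid = np.array([[1, 1, 0, 1],
                 [0, 1, 0, 1],
                 [1, 0, 0, 1]])
lab = hoshen_kopelman(grid)
n_clusters = len(set(lab[grid == 1]))   # -> 3
```

    Cluster-size statistics from such labelings are the inputs to the percolation-theory analysis described in the abstract.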

  5. Volume measurements of normal orbital structures by computed tomographic analysis

    SciTech Connect

    Forbes, G.; Gehring, D.G.; Gorman, C.A.; Brennan, M.D.; Jackson, I.T.

    1985-07-01

    Computed tomographic digital data and special off-line computer graphic analysis were used to measure volumes of normal orbital soft tissue, extraocular muscle, orbital fat, and total bony orbit in vivo in 29 patients (58 orbits). The upper limits of normal for adult bony orbit, soft tissue exclusive of the globe, orbital fat, and muscle are 30.1 cm³, 20.0 cm³, 14.4 cm³, and 6.5 cm³, respectively. There are small differences in men as a group compared with women but minimal difference between right and left orbits in the same person. The accuracy of the techniques was established at 7%-8% for these orbit structural volumes in physical phantoms and in simulated silicone orbit phantoms in dry skulls. Mean values and upper limits of normal for volumes were determined in adult orbital structures for future comparison with changes due to endocrine ophthalmopathy, trauma, and congenital deformity.

  6. Parallel runway requirement analysis study. Volume 1: The analysis

    NASA Technical Reports Server (NTRS)

    Ebrahimi, Yaghoob S.

    1993-01-01

    The correlation of increased flight delays with the level of aviation activity is well recognized. A main contributor to these flight delays has been the capacity of airports. Though new airport and runway construction would significantly increase airport capacity, few programs of this type are currently underway or even planned, because of the high cost associated with such endeavors. Therefore, it is necessary to achieve the most efficient and cost effective use of existing fixed airport resources through better planning and control of traffic flows. In fact, during the past few years the FAA has initiated such an airport capacity program designed to provide additional capacity at existing airports. Some of the improvements that this program has generated thus far have been based on new Air Traffic Control procedures, terminal automation, additional Instrument Landing Systems, improved controller display aids, and improved utilization of multiple runways/Instrument Meteorological Conditions (IMC) approach procedures. A useful element to understanding potential operational capacity enhancements at high demand airports has been the development and use of an analysis tool called The PLAND_BLUNDER (PLB) Simulation Model. The objective for building this simulation was to develop a parametric model that could be used for analysis in determining the minimum safety level of parallel runway operations for various parameters representing the airplane, navigation, surveillance, and ATC system performance. 
This simulation is useful as: a quick and economical evaluation of existing environments that are experiencing IMC delays; an efficient way to study and validate proposed procedure modifications; an aid in evaluating requirements for new airports or new runways at old airports; a simple, parametric investigation of a wide range of issues and approaches; a way to trade off the contributions of air and ground technologies and procedures; and a way of considering probable blunder mechanisms and a range of blunder scenarios. This study describes the steps of building the simulation and considers the input parameters, assumptions and limitations, and available outputs. Validation results and sensitivity analysis are addressed as well as outlining some IMC and Visual Meteorological Conditions (VMC) approaches to parallel runways. Also, present and future applicable technologies (e.g., Digital Autoland Systems, Traffic Collision and Avoidance System II, Enhanced Situational Awareness System, Global Positioning Systems for Landing, etc.) are assessed and recommendations made.

  7. Integrated segmentation of cellular structures

    NASA Astrophysics Data System (ADS)

    Ajemba, Peter; Al-Kofahi, Yousef; Scott, Richard; Donovan, Michael; Fernandez, Gerardo

    2011-03-01

    Automatic segmentation of cellular structures is an essential step in image cytology and histology. Despite substantial progress, better automation and improvements in accuracy and adaptability to novel applications are needed. In applications utilizing multi-channel immuno-fluorescence images, challenges include misclassification of epithelial and stromal nuclei, irregular nuclei and cytoplasm boundaries, and over- and under-segmentation of clustered nuclei. Variations in image acquisition conditions and artifacts from nuclei and cytoplasm images often confound existing algorithms in practice. In this paper, we present a robust and accurate algorithm for jointly segmenting cell nuclei and cytoplasm using a combination of ideas to reduce the aforementioned problems. First, an adaptive process that includes top-hat filtering, Eigenvalues-of-Hessian blob detection and distance transforms is used to estimate the inverse illumination field and correct for intensity non-uniformity in the nuclei channel. Next, a minimum-error-thresholding-based binarization process and seed detection combining Laplacian-of-Gaussian filtering constrained by distance-map-based scale selection are used to identify candidate seeds for nuclei segmentation. The initial segmentation using a local maximum clustering algorithm is refined using a minimum-error-thresholding technique. Final refinements include an artifact removal process specifically targeted at lumens and other problematic structures and a systemic decision process to reclassify nuclei objects near the cytoplasm boundary as epithelial or stromal. Segmentation results were evaluated using 48 realistic phantom images with known ground-truth. The overall segmentation accuracy exceeds 94%. The algorithm was further tested on 981 images of actual prostate cancer tissue. The artifact removal process worked in 90% of cases. The algorithm has now been deployed in a high-volume histology analysis application.
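
    The minimum-error thresholding step named above conventionally refers to the Kittler-Illingworth criterion, which models the intensity histogram as two Gaussian classes and picks the cut minimizing the classification-error objective. The histogram-based sketch below assumes that criterion; the synthetic intensities are illustrative only.

```python
import numpy as np

def min_error_threshold(values, nbins=64):
    """Kittler-Illingworth minimum-error threshold: choose the cut that
    minimizes the two-Gaussian classification-error criterion J(T)."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist / hist.sum()
    x = (edges[:-1] + edges[1:]) / 2          # bin centres
    best_t, best_j = 1, np.inf
    for t in range(1, nbins - 1):
        p1, p2 = p[:t].sum(), p[t:].sum()
        if p1 == 0 or p2 == 0:
            continue
        m1 = (p[:t] * x[:t]).sum() / p1       # class means
        m2 = (p[t:] * x[t:]).sum() / p2
        v1 = (p[:t] * (x[:t] - m1) ** 2).sum() / p1   # class variances
        v2 = (p[t:] * (x[t:] - m2) ** 2).sum() / p2
        if v1 <= 0 or v2 <= 0:
            continue
        j = 1 + p1 * np.log(v1) + p2 * np.log(v2) \
              - 2 * (p1 * np.log(p1) + p2 * np.log(p2))
        if j < best_j:
            best_t, best_j = t, j
    return edges[best_t]

# Synthetic bimodal intensities: dark nuclei vs bright background.
rng = np.random.default_rng(1)
vals = np.r_[rng.normal(0.2, 0.05, 500), rng.normal(0.8, 0.05, 500)]
thresh = min_error_threshold(vals)   # falls between the two modes
```

    Unlike Otsu's method, the minimum-error criterion remains well behaved when the two classes have very unequal sizes, which suits sparse nuclei on a large background.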

  8. User's operating procedures. Volume 2: Scout project financial analysis program

    NASA Technical Reports Server (NTRS)

    Harris, C. G.; Haris, D. K.

    1985-01-01

    A review is presented of the user's operating procedures for the Scout Project Automatic Data system, called SPADS. SPADS is the result of the past seven years of software development on a Prime mini-computer located at the Scout Project Office, NASA Langley Research Center, Hampton, Virginia. SPADS was developed as a single entry, multiple cross-reference data management and information retrieval system for the automation of Project office tasks, including engineering, financial, managerial, and clerical support. This volume, the second of three, provides the instructions to operate the Scout Project Financial Analysis program in data retrieval and file maintenance via the user-friendly menu drivers.

  9. Automated condition-invariable neurite segmentation and synapse classification using textural analysis-based machine-learning algorithms

    PubMed Central

    Kandaswamy, Umasankar; Rotman, Ziv; Watt, Dana; Schillebeeckx, Ian; Cavalli, Valeria; Klyachko, Vitaly

    2013-01-01

    High-resolution live-cell imaging studies of neuronal structure and function are characterized by large variability in image acquisition conditions due to background and sample variations as well as low signal-to-noise ratio. The lack of automated image analysis tools that can be generalized for varying image acquisition conditions represents one of the main challenges in the field of biomedical image analysis. Specifically, segmentation of the axonal/dendritic arborizations in brightfield or fluorescence imaging studies is extremely labor-intensive and still performed mostly manually. Here we describe a fully automated machine-learning approach based on textural analysis algorithms for segmenting neuronal arborizations in high-resolution brightfield images of live cultured neurons. We compare the performance of our algorithm to manual segmentation and show that it achieves 90% accuracy with similarly high levels of specificity and sensitivity. Moreover, the algorithm maintains high performance levels under a wide range of image acquisition conditions, indicating that it is largely condition-invariable. We further describe an application of this algorithm to fully automated synapse localization and classification in fluorescence imaging studies based on synaptic activity. This textural analysis-based machine-learning approach thus offers a high-performance, condition-invariable tool for automated neurite segmentation. PMID:23261652

  10. Concept Area Two Objectives and Test Items (Rev.) Part One, Part Two. Economic Analysis Course. Segments 17-49.

    ERIC Educational Resources Information Center

    Sterling Inst., Washington, DC. Educational Technology Center.

    A multimedia course in economic analysis was developed and used in conjunction with the United States Naval Academy. (See ED 043 790 and ED 043 791 for final reports of the project evaluation and development model.) This report deals with the second concept area of the course and focuses on macroeconomics. Segments 17 through 49 are presented,…

  11. Bivariate segmentation of SNP-array data for allele-specific copy number analysis in tumour samples

    PubMed Central

    2013-01-01

    Background SNP arrays output two signals that reflect the total genomic copy number (LRR) and the allelic ratio (BAF), which in combination allow the characterisation of allele-specific copy numbers (ASCNs). While methods based on hidden Markov models (HMMs) have been extended from array comparative genomic hybridisation (aCGH) to jointly handle the two signals, only one method based on change-point detection, ASCAT, performs bivariate segmentation. Results In the present work, we introduce a generic framework for bivariate segmentation of SNP array data for ASCN analysis. To this end, we discuss the characteristics of the typically applied BAF transformation and how they affect segmentation, introduce concepts of multivariate time series analysis that are of concern in this field and discuss the appropriate formulation of the problem. The framework is implemented in a method named CnaStruct, the bivariate form of the structural change model (SCM), which has been successfully applied to transcriptome mapping and aCGH. Conclusions On a comprehensive synthetic dataset, we show that CnaStruct outperforms the segmentation of existing ASCN analysis methods. Furthermore, CnaStruct can be integrated into the workflows of several ASCN analysis tools in order to improve their performance, especially on tumour samples highly contaminated by normal cells. PMID:23497144
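
    The essence of bivariate change-point segmentation is that one breakpoint must explain shifts in both signals at once. Reduced to a single breakpoint with a pooled least-squares cost, the idea can be sketched as below; this is an illustrative simplification, not the CnaStruct/SCM estimator, and the data are synthetic and noise-free.

```python
import numpy as np

def best_changepoint(lrr, baf):
    """Single change point minimizing the pooled residual sum of squares
    of both signals -- the bivariate idea behind joint LRR/BAF
    segmentation, reduced to one breakpoint for illustration."""
    def sse(x):
        return float(((x - x.mean()) ** 2).sum()) if len(x) else 0.0
    n = len(lrr)
    costs = [sse(lrr[:k]) + sse(lrr[k:]) + sse(baf[:k]) + sse(baf[k:])
             for k in range(1, n)]
    return 1 + int(np.argmin(costs))

# Synthetic probes: total copy number (LRR) and allelic ratio (BAF)
# both shift at probe index 50.
lrr = np.r_[np.zeros(50), 0.6 * np.ones(50)]
baf = np.r_[0.5 * np.ones(50), 0.8 * np.ones(50)]
bp = best_changepoint(lrr, baf)   # -> 50
```

    Recursing this search on each resulting half (binary segmentation) extends it to multiple breakpoints; weighting the two cost terms controls the relative influence of LRR and BAF.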

  12. Change Detection and Land Use / Land Cover Database Updating Using Image Segmentation, GIS Analysis and Visual Interpretation

    NASA Astrophysics Data System (ADS)

    Mas, J.-F.; González, R.

    2015-08-01

    This article presents a hybrid method that combines image segmentation, GIS analysis, and visual interpretation in order to detect discrepancies between an existing land use/cover map and satellite images, and assess land use/cover changes. It was applied to the elaboration of a multidate land use/cover database of the State of Michoacán, Mexico using SPOT and Landsat imagery. The method was first applied to improve the resolution of an existing 1:250,000 land use/cover map produced through the visual interpretation of 2007 SPOT images. A segmentation of the 2007 SPOT images was carried out to create spectrally homogeneous objects with a minimum area of two hectares. Through an overlay operation with the outdated map, each segment receives the "majority" category from the map. Furthermore, spectral indices of the SPOT image were calculated for each band and each segment; therefore, each segment was characterized from the images (spectral indices) and the map (class label). In order to detect uncertain areas which present a discrepancy between spectral response and class label, a multivariate trimming, which consists of truncating a distribution from its least likely values, was applied. The segments that behave like outliers were detected and labeled as "uncertain" and a probable alternative category was determined by means of a digital classification using a decision tree classification algorithm. Then, the segments were visually inspected in the SPOT image and high resolution imagery to assign a final category. The same procedure was applied to update the map to 2014 using Landsat imagery. As a final step, an accuracy assessment was carried out using verification sites selected from a stratified random sampling and visually interpreted using high resolution imagery and ground truth.
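
    One common way to realize the multivariate trimming described above is to rank observations by squared Mahalanobis distance to the sample mean and flag the least likely fraction. This is a simplified stand-in for the paper's procedure, with synthetic "spectral index" data; the function name and trimming fraction are assumptions.

```python
import numpy as np

def multivariate_trim(X, frac=0.1):
    """Flag the `frac` least likely rows of X as uncertain, using the
    squared Mahalanobis distance to the sample mean as the criterion."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    inv = np.linalg.inv(cov)
    # Squared Mahalanobis distance of every row to the mean.
    d2 = np.einsum('ij,jk,ik->i', X - mu, inv, X - mu)
    cutoff = np.quantile(d2, 1 - frac)
    return d2 > cutoff      # True = outlier / "uncertain" segment

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))   # spectral indices per segment (synthetic)
X[:5] += 8.0                    # five segments with discrepant spectra
flags = multivariate_trim(X, frac=0.05)
```

    Segments flagged this way would then be passed to the decision-tree classifier and visual inspection described in the abstract.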

  13. Using Paleoseismic Trenching and LiDAR Analysis to Evaluate Rupture Propagation Through Segment Boundaries of the Central Wasatch Fault Zone, Utah

    NASA Astrophysics Data System (ADS)

    Bennett, S. E. K.; DuRoss, C. B.; Reitman, N. G.; Devore, J. R.; Hiscock, A.; Gold, R. D.; Briggs, R. W.; Personius, S. F.

    2014-12-01

    Paleoseismic data near fault segment boundaries constrain the extent of past surface ruptures and the persistence of rupture termination at segment boundaries. Paleoseismic evidence for large (M≥7.0) earthquakes on the central Holocene-active fault segments of the 350-km-long Wasatch fault zone (WFZ) generally supports single-segment ruptures but also permits multi-segment rupture scenarios. The extent and frequency of ruptures that span segment boundaries remains poorly known, adding uncertainty to seismic hazard models for this populated region of Utah. To address these uncertainties we conducted four paleoseismic investigations near the Salt Lake City-Provo and Provo-Nephi segment boundaries of the WFZ. We examined an exposure of the WFZ at Maple Canyon (Woodland Hills, UT) and excavated the Flat Canyon trench (Salem, UT), 7 and 11 km, respectively, from the southern tip of the Provo segment. We document evidence for at least five earthquakes at Maple Canyon and four to seven earthquakes that post-date mid-Holocene fan deposits at Flat Canyon. These earthquake chronologies will be compared to seven earthquakes observed in previous trenches on the northern Nephi segment to assess rupture correlation across the Provo-Nephi segment boundary. To assess rupture correlation across the Salt Lake City-Provo segment boundary we excavated the Alpine trench (Alpine, UT), 1 km from the northern tip of the Provo segment, and the Corner Canyon trench (Draper, UT) 1 km from the southern tip of the Salt Lake City segment. We document evidence for six earthquakes at both sites. Ongoing geochronologic analysis (14C, optically stimulated luminescence) will constrain earthquake chronologies and help identify through-going ruptures across these segment boundaries. Analysis of new high-resolution (0.5m) airborne LiDAR along the entire WFZ will quantify latest Quaternary displacements and slip rates and document spatial and temporal slip patterns near fault segment boundaries.

  14. Interactive 3D segmentation of the prostate in magnetic resonance images using shape and local appearance similarity analysis

    NASA Astrophysics Data System (ADS)

    Shahedi, Maysam; Fenster, Aaron; Cool, Derek W.; Romagnoli, Cesare; Ward, Aaron D.

    2013-03-01

    3D segmentation of the prostate in medical images is useful for prostate cancer diagnosis and therapy guidance, but is time-consuming to perform manually. Clinical translation of computer-assisted segmentation algorithms for this purpose requires a comprehensive and complementary set of evaluation metrics that are informative to the clinical end user. We have developed an interactive 3D prostate segmentation method for 1.5T and 3.0T T2-weighted magnetic resonance imaging (T2W MRI) acquired using an endorectal coil. We evaluated our method against manual segmentations of 36 3D images using complementary boundary-based (mean absolute distance; MAD), regional overlap (Dice similarity coefficient; DSC) and volume difference (ΔV) metrics. Our technique is based on inter-subject prostate shape and local boundary appearance similarity. In the training phase, we calculated a point distribution model (PDM) and a set of local mean intensity patches centered on the prostate border to capture shape and appearance variability. To segment an unseen image, we defined a set of rays - one corresponding to each of the mean intensity patches computed in training - emanating from the prostate centre. We used a radial-based search strategy and translated each mean intensity patch along its corresponding ray, selecting as a candidate the boundary point with the highest normalized cross correlation along each ray. These boundary points were then regularized using the PDM. For the whole gland, we measured a mean ± std MAD of 2.5 ± 0.7 mm, DSC of 80 ± 4%, and ΔV of 1.1 ± 8.8 cc. We also provided an anatomic breakdown of these metrics within the prostatic base, mid-gland, and apex.
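
    The radial search described above can be sketched in a simplified 1-D form: slide a mean-intensity template along the intensity profile sampled along a ray and keep the offset with the highest normalized cross correlation. This is an illustrative reconstruction, not the authors' implementation; `radial_boundary_search` and the synthetic profile are hypothetical, and the real method works on 2-D patches later regularized by the PDM.

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """NCC between two equal-length 1-D patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def radial_boundary_search(profile, template):
    """Slide a mean-intensity template along an intensity profile
    sampled along a ray; return the offset with the highest NCC."""
    w = len(template)
    scores = [normalized_cross_correlation(profile[i:i + w], template)
              for i in range(len(profile) - w + 1)]
    return int(np.argmax(scores))

# Synthetic ray profile: dark gland interior, bright exterior,
# with a step edge at index 10.
profile = np.array([10.0] * 10 + [100.0] * 10)
template = np.array([10.0, 10.0, 100.0, 100.0])  # edge-centred patch
offset = radial_boundary_search(profile, template)
print(offset)  # → 8 (window [8:12] straddles the step exactly)
```

In the paper's setting each candidate found this way is only provisional; the PDM then pulls outlier boundary points back toward plausible prostate shapes.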

  15. Edge preserving smoothing and segmentation of 4-D images via transversely isotropic scale-space processing and fingerprint analysis

    SciTech Connect

    Reutter, Bryan W.; Algazi, V. Ralph; Gullberg, Grant T; Huesman, Ronald H.

    2004-01-19

    Enhancements are described for an approach that unifies edge preserving smoothing with segmentation of time sequences of volumetric images, based on differential edge detection at multiple spatial and temporal scales. Potential applications of these 4-D methods include segmentation of respiratory gated positron emission tomography (PET) transmission images to improve accuracy of attenuation correction for imaging heart and lung lesions, and segmentation of dynamic cardiac single photon emission computed tomography (SPECT) images to facilitate unbiased estimation of time-activity curves and kinetic parameters for left ventricular volumes of interest. Improved segmentation of lung surfaces in simulated respiratory gated cardiac PET transmission images is achieved with a 4-D edge detection operator composed of edge preserving 1-D operators applied in various spatial and temporal directions. Smoothing along the axis of a 1-D operator is driven by structure separation seen in the scale-space fingerprint, rather than by image contrast. Spurious noise structures are reduced with use of small-scale isotropic smoothing in directions transverse to the 1-D operator axis. Analytic expressions are obtained for directional derivatives of the smoothed, edge preserved image, and the expressions are used to compose a 4-D operator that detects edges as zero-crossings in the second derivative in the direction of the image intensity gradient. Additional improvement in segmentation is anticipated with use of multiscale transversely isotropic smoothing and a novel interpolation method that improves the behavior of the directional derivatives. The interpolation method is demonstrated on a simulated 1-D edge and incorporation of the method into the 4-D algorithm is described.
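
    The core idea of detecting edges as zero-crossings of the second derivative of a smoothed signal can be illustrated in 1-D. This is a minimal sketch, not the paper's 4-D operator: `zero_crossings_second_derivative` is a hypothetical helper, and a plain Gaussian stands in for the edge-preserving scale-space smoothing.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def zero_crossings_second_derivative(signal, sigma=2.0):
    """Detect edges in a 1-D signal as zero-crossings of the second
    derivative of the Gaussian-smoothed signal."""
    smoothed = gaussian_filter1d(signal.astype(float), sigma)
    d2 = np.diff(smoothed, n=2)
    # Indices (in second-derivative coordinates) where the sign flips.
    signs = np.sign(d2)
    crossings = np.where(signs[:-1] * signs[1:] < 0)[0] + 1
    return crossings

# Step edge in the middle of a 50-sample signal.
signal = np.concatenate([np.zeros(25), np.ones(25)])
edges = zero_crossings_second_derivative(signal, sigma=2.0)
print(edges)  # a single crossing near the step at index 25
```

The smoothed step has positive curvature on the rising side and negative curvature past the midpoint, so exactly one sign change survives; in the 4-D method the same test is applied along the image-intensity gradient direction.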

  16. a New Framework for Object-Based Image Analysis Based on Segmentation Scale Space and Random Forest Classifier

    NASA Astrophysics Data System (ADS)

    Hadavand, A.; Saadatseresht, M.; Homayouni, S.

    2015-12-01

    In this paper, a new object-based framework is developed for automated scale selection in image segmentation. The quality of image objects has an important impact on further analyses. Due to the strong dependency of segmentation results on the scale parameter, choosing the best value of this parameter for each class is a main challenge in object-based image analysis. We propose a framework that employs a pixel-based land cover map to estimate the initial scale dedicated to each class. These scales are used to build a segmentation scale space (SSS), a hierarchy of image objects. Optimizing the SSS with respect to the NDVI and DSM values in each super-object yields the best scale in local regions of the image scene. The optimized SSS segmentations are finally classified to produce the final land cover map. A very high resolution aerial image and a digital surface model provided by the ISPRS 2D semantic labelling dataset are used in our experiments. The results of the proposed method are comparable to those of the ESP tool, a well-known method for estimating the segmentation scale, and marginally improve the overall accuracy of classification from 79% to 80%.
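
    The final classification step, a random forest over per-object features, can be sketched as follows. The two features (mean NDVI and mean DSM height) and the vegetation/building setup are hypothetical stand-ins, not the paper's pipeline; scikit-learn's `RandomForestClassifier` is assumed to be available.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-object features: [mean NDVI, mean DSM height (m)].
# Vegetation: high NDVI, low height; buildings: low NDVI, high height.
veg = np.column_stack([rng.normal(0.7, 0.05, 100), rng.normal(1.0, 0.5, 100)])
bld = np.column_stack([rng.normal(0.1, 0.05, 100), rng.normal(8.0, 1.0, 100)])
X = np.vstack([veg, bld])
y = np.array([0] * 100 + [1] * 100)  # 0 = vegetation, 1 = building

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[0.65, 1.2], [0.05, 7.5]]))  # expect [0 1]
```

In the actual framework each sample would be an image object from the optimized SSS rather than a synthetic feature vector.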

  17. Volume analysis of heat-induced cracks in human molars: A preliminary study

    PubMed Central

    Sandholzer, Michael A.; Baron, Katharina; Heimel, Patrick; Metscher, Brian D.

    2014-01-01

    Context: Only a few methods have been published dealing with the visualization of heat-induced cracks inside bones and teeth. Aims: As a novel approach, this study used nondestructive X-ray microtomography (micro-CT) for volume analysis of heat-induced cracks to observe the reaction of human molars to various levels of thermal stress. Materials and Methods: Eighteen clinically extracted third molars were rehydrated and burned under controlled temperatures (400, 650, and 800°C) using an electric furnace with a heating rate of 25°C/min. The subsequent high-resolution scans (voxel size 17.7 μm) were made with a compact micro-CT scanner (SkyScan 1174). In total, 14 scans were automatically segmented with Definiens XD Developer 1.2 and three-dimensional (3D) models were computed with Visage Imaging Amira 5.2.2. The results of the automated segmentation were analyzed with an analysis of variance (ANOVA) and uncorrected post hoc least significant difference (LSD) tests using the Statistical Package for the Social Sciences (SPSS) 17. A probability level of P < 0.05 was used as an index of statistical significance. Results: A temperature-dependent increase of heat-induced cracks was observed between the three temperature groups (P < 0.05, ANOVA post hoc LSD). In addition, the distributions and shape of the heat-induced changes could be classified using the computed 3D models. Conclusion: The macroscopic heat-induced changes observed in this preliminary study correspond with previous observations of unrestored human teeth, yet the current observations also take into account the entire microscopic 3D expansions of heat-induced cracks within the dental hard tissues. Using the same experimental conditions proposed in the literature, this study confirms previous results, adds new observations, and offers new perspectives in the investigation of forensic evidence. PMID:25125923
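
    The statistical design, a one-way ANOVA across the three temperature groups followed by uncorrected pairwise comparisons in the spirit of Fisher's LSD, can be sketched with SciPy. The crack-volume values below are invented for illustration, not the study's data, and plain pairwise t-tests stand in for the pooled-variance LSD test.

```python
import numpy as np
from scipy.stats import f_oneway, ttest_ind

# Hypothetical crack-volume fractions (%) per temperature group.
g400 = np.array([0.8, 1.0, 1.1, 0.9, 1.2])
g650 = np.array([2.1, 2.4, 2.0, 2.6, 2.3])
g800 = np.array([4.0, 4.5, 3.8, 4.7, 4.2])

# Omnibus one-way ANOVA across the three groups.
f_stat, p = f_oneway(g400, g650, g800)
print(f"ANOVA: F={f_stat:.1f}, p={p:.2g}")

# Uncorrected pairwise comparisons (LSD-style follow-up).
for a, b, name in [(g400, g650, "400 vs 650"), (g650, g800, "650 vs 800")]:
    t, p_pair = ttest_ind(a, b)
    print(f"{name}: p={p_pair:.2g}")
```

A textbook LSD test would pool the error variance from the ANOVA across all groups; the per-pair t-tests here are a close, simpler approximation.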

  18. Structural and functional analysis of transmembrane segment IV of the salt tolerance protein Sod2.

    PubMed

    Ullah, Asad; Kemp, Grant; Lee, Brian; Alves, Claudia; Young, Howard; Sykes, Brian D; Fliegel, Larry

    2013-08-23

    Sod2 is the plasma membrane Na(+)/H(+) exchanger of the fission yeast Schizosaccharomyces pombe. It provides salt tolerance by removing excess intracellular sodium (or lithium) in exchange for protons. We examined the role of amino acid residues of transmembrane segment IV (TM IV) ((126)FPQINFLGSLLIAGCITSTDPVLSALI(152)) in activity by using alanine scanning mutagenesis and examining salt tolerance in sod2-deficient S. pombe. Two amino acids were critical for function. Mutations T144A and V147A resulted in defective proteins that did not confer salt tolerance when reintroduced into S. pombe. Sod2 protein with other alanine mutations in TM IV had little or no effect. T144D and T144K mutant proteins were inactive; however, a T144S protein was functional and provided lithium, but not sodium, tolerance and transport. Analysis of sensitivity to trypsin indicated that the mutations caused a conformational change in the Sod2 protein. We expressed and purified TM IV (amino acids 125-154). NMR analysis yielded a model with two helical regions (amino acids 128-142 and 147-154) separated by an unwound region (amino acids 143-146). Molecular modeling of the entire Sod2 protein suggested that TM IV has a structure similar to that deduced by NMR analysis and an overall structure similar to that of Escherichia coli NhaA. TM IV of Sod2 has similarities to TM V of the Zygosaccharomyces rouxii Na(+)/H(+) exchanger and TM VI of isoform 1 of mammalian Na(+)/H(+) exchanger. TM IV of Sod2 is critical to transport and may be involved in cation binding or conformational changes of the protein. PMID:23836910

  19. Comparative analysis of operational forecasts versus actual weather conditions in airline flight planning, volume 4

    NASA Technical Reports Server (NTRS)

    Keitz, J. F.

    1982-01-01

    The impact of more timely and accurate weather data on airline flight planning, with emphasis on fuel savings, is studied. This volume of the report discusses the results of Task 4 of the four major tasks included in the study. Task 4 uses flight plan segment wind and temperature differences as indicators of dates and geographic areas for which significant forecast errors may have occurred. An in-depth analysis is then conducted for the days identified. The analysis shows that significant errors occur in the operational forecast on 15 of the 33 arbitrarily selected days included in the study. Wind speeds in an area of maximum winds are underestimated by at least 20 to 25 kts on 14 of these days. The analysis also shows that there is a tendency to repeat the same forecast errors from prog to prog. Also, some perceived forecast errors from the flight plan comparisons could not be verified by visual inspection of the corresponding National Meteorological Center forecast and analyses charts, and it is likely that they are the result of weather data interpolation techniques or some other data processing procedure in the airlines' flight planning systems.

  20. Influence of staining on fast automated cell segmentation, feature extraction and cell image analysis.

    PubMed

    Wittekind, D; Reinhardt, E R; Kretschmer, V; Zipfel, E

    1983-03-01

    FAZYTAN, a system for fast automated cell segmentation, cell image analysis and extraction of nuclear features, was used to analyze cervical cell images variously stained by the conventional Papanicolaou stain, the new Papanicolaou stain and hematoxylin and thionin only; the last two dyes are used as the nuclear stains in the two versions of the Papanicolaou stain. Other dyes were also tried in cell classification experiments. All cell images in the variously stained samples could be described by the same nuclear features as had been adapted for the discrimination of conventional-Papanicolaou-stained cells. Variances were lower for thionin-stained cells as compared with hematoxylin-stained cells. By application of spectrophotometry, it was confirmed that the spectra of the cytoplasmic counterstains are superimposed on those of the nuclear stains. It appears that a variety of dyes are suitable as cytologic stains for cell classification by the FAZYTAN system, provided that they achieve sufficiently strong nuclear-cytoplasmic contrast by precisely delineating the chromatin texture. PMID:6189436

  1. Magnetic Field Analysis of Lorentz Motors Using a Novel Segmented Magnetic Equivalent Circuit Method

    PubMed Central

    Qian, Junbing; Chen, Xuedong; Chen, Han; Zeng, Lizhan; Li, Xiaoqing

    2013-01-01

    A simple and accurate method based on the magnetic equivalent circuit (MEC) model is proposed in this paper to predict the magnetic flux density (MFD) distribution of the air-gap in a Lorentz motor (LM). In conventional MEC methods, the permanent magnet (PM) is treated as one common source and all branches of the MEC are coupled together into one MEC network. In our proposed method, every PM flux source is divided into three sub-sections (the outer, the middle and the inner). Thus, the MEC of the LM is divided correspondingly into three independent sub-loops. As the size of the middle sub-MEC is small enough, it can be treated as an ideal MEC and solved accurately. Combined with a decoupled analysis of the outer and inner MECs, the MFD distribution in the air-gap can be approximated by a quadratic curve, and the complex calculation of reluctances in the MECs can be avoided. The segmented magnetic equivalent circuit (SMEC) method is used to analyze an LM, and its effectiveness is demonstrated by comparison with FEA, conventional MEC, and experimental results. PMID:23358368

  2. Fitness effect analysis of a heterochromatic supernumerary segment in the grasshopper Eyprepocnemis plorans.

    PubMed

    Perfectti, F; Cabrero, J; López-León, M D; Muñoz, E; Pardo, M C; Camacho, J P

    2000-01-01

    Several components of fitness were analysed in relation to the presence of a supernumerary chromosome segment (SCS) in two natural populations of the grasshopper Eyprepocnemis plorans, including clutch size, egg fertility, egg and embryo productivity and survivability from embryo to adult, and SCS transmission through males. The results have shown the absence of a significant relationship between SCS presence and these fitness components, with the single exception of egg fertility which decreases significantly in SCS females with mating shortage. This fertility decrease is thus expected to be relevant for the population dynamics of the SCS only in low-density populations, those in which it is difficult for females to find a male to copulate with before each egg-batch is ready to be laid. The analysis of the SCS transmission through males showed no significant differences between expected and observed SCS frequencies. The SCS polymorphism seems to be at a status close to neutrality in respect to fitness, but its slight disadvantage in transmission through females carrying B chromosomes predicts that the polymorphism should tend to disappear, unless SCS recurrent amplification, or another undiscovered force, counteracts this tendency. PMID:10997782

  3. Movement Analysis of Flexion and Extension of Honeybee Abdomen Based on an Adaptive Segmented Structure

    PubMed Central

    Zhao, Jieliang; Wu, Jianing; Yan, Shaoze

    2015-01-01

    Honeybees (Apis mellifera) curl their abdomens during daily rhythmic activities. It had previously been assumed that honeybees can curl their abdomens freely; however, an intriguing but less studied feature is the possible unidirectional abdominal deformation in free-flying honeybees. A high-speed video camera was used to capture the curling and to analyze the changes in arc length of the honeybee abdomen, both in free flight and in fixed samples. Frozen sections and environmental scanning electron microscopy were used to investigate the microstructure and motion principle of the honeybee abdomen and to identify the physical structure restricting its curling. An adaptive segmented structure, especially the folded intersegmental membrane (FIM), plays a dominant role in the flexion and extension of the abdomen. The structural features of the FIM were used to model and explain the movement restriction of the honeybee abdomen. Combining experimental analysis and theoretical demonstration, a unidirectional bending mechanism of the honeybee abdomen was revealed. This finding offers a new perspective for biomimetic aerospace vehicle design. PMID:26223946

  4. phenoVein—A Tool for Leaf Vein Segmentation and Analysis

    PubMed Central

    Pflugfelder, Daniel; Huber, Gregor; Scharr, Hanno; Hülskamp, Martin; Koornneef, Maarten; Jahnke, Siegfried

    2015-01-01

    Precise measurements of leaf vein traits are an important aspect of plant phenotyping for ecological and genetic research. Here, we present a powerful and user-friendly image analysis tool named phenoVein. It is dedicated to automated segmenting and analyzing of leaf veins in images acquired with different imaging modalities (microscope, macrophotography, etc.), including options for comfortable manual correction. Advanced image filtering emphasizes veins from the background and compensates for local brightness inhomogeneities. The most important traits being calculated are total vein length, vein density, piecewise vein lengths and widths, areole area, and skeleton graph statistics, like the number of branching or ending points. For the determination of vein widths, a model-based vein edge estimation approach has been implemented. Validation was performed for the measurement of vein length, vein width, and vein density of Arabidopsis (Arabidopsis thaliana), proving the reliability of phenoVein. We demonstrate the power of phenoVein on a set of previously described vein structure mutants of Arabidopsis (hemivenata, ondulata3, and asymmetric leaves2-101) compared with wild-type accessions Columbia-0 and Landsberg erecta-0. phenoVein is freely available as open-source software. PMID:26468519

  5. A comparison between handgrip strength, upper limb fat free mass by segmental bioelectrical impedance analysis (SBIA) and anthropometric measurements in young males

    NASA Astrophysics Data System (ADS)

    Gonzalez-Correa, C. H.; Caicedo-Eraso, J. C.; Varon-Serna, D. R.

    2013-04-01

    The mechanical function and size of a muscle may be closely linked. Handgrip strength (HGS) has been used as a predictor of functional performance. Anthropometric measurements have been used to estimate arm muscle area (AMA) and the physical muscle mass volume of the upper limb (ULMMV). Electrical volume estimation is possible from segmental BIA measurements of fat-free mass (SBIA-FFM), mainly muscle mass. The relationship among these variables is not well established. We aimed to determine whether physical and electrical muscle mass estimates relate to each other, and to what extent HGS is related to muscle size measured by both methods, in normal or overweight young males. Regression analysis was used to determine the association between these variables. Subjects showed decreased HGS (65.5%), FFM (85.5%), and AMA (74.5%). An acceptable association was found between SBIA-FFM and AMA (r² = 0.60) and a poorer one between physical and electrical volume (r² = 0.55). However, a paired Student's t-test and a Bland-Altman plot showed that the physical and electrical models were not interchangeable (p < 0.0001). HGS showed a very weak association with anthropometric (r² = 0.07) and electrical (r² = 0.192) ULMMV, showing that muscle mass quantity does not imply muscle strength. Other factors influencing HGS, such as physical training or nutrition, require more research.

  6. Finite element analysis of weightbath hydrotraction treatment of degenerated lumbar spine segments in elastic phase.

    PubMed

    Kurutz, M; Oroszváry, L

    2010-02-10

    3D finite element models of human lumbar functional spinal units (FSU) were used for numerical analysis of weightbath hydrotraction therapy (WHT) applied for treating degenerative diseases of the lumbar spine. Five grades of age-related degeneration were modeled by material properties. Tensile material parameters of discs were obtained by parameter identification based on in vivo measured elongations of lumbar segments during regular WHT, compressive material constants were obtained from the literature. It has been proved numerically that young adults of 40-45 years have the most deformable and vulnerable discs, while the stability of segments increases with further aging. The reasons were found by analyzing the separated contrasting effects of decreasing incompressibility and increasing hardening of nucleus, yielding non-monotonous functions of stresses and deformations in terms of aging and degeneration. WHT consists of indirect and direct traction phases. Discs show a bilinear material behaviour with higher resistance in indirect and smaller in direct traction phase. Consequently, although the direct traction load is only 6% of the indirect one, direct traction deformations are 15-90% of the indirect ones, depending on the grade of degeneration. Moreover, the ratio of direct stress relaxation remains equally about 6-8% only. Consequently, direct traction controlled by extra lead weights influences mostly the deformations being responsible for the nerve release; while the stress relaxation is influenced mainly by the indirect traction load coming from the removal of the compressive body weight and muscle forces in the water. A mildly degenerated disc in WHT shows 0.15mm direct, 0.45mm indirect and 0.6mm total extension; 0.2mm direct, 0.6mm indirect and 0.8mm total posterior contraction. A severely degenerated disc exhibits 0.05mm direct, 0.05mm indirect and 0.1mm total extension; 0.05mm direct, 0.25mm indirect and 0.3mm total posterior contraction. 
These deformations are related to the instant elastic phase of WHT that are doubled during the creep period of the treatment. The beneficial clinical impacts of WHT are still evident even 3 months later. PMID:19883918

  7. Relationship between methamphetamine use history and segmental hair analysis findings of MA users.

    PubMed

    Han, Eunyoung; Lee, Sangeun; In, Sanghwan; Park, Meejung; Park, Yonghoon; Cho, Sungnam; Shin, Junguk; Lee, Hunjoo

    2015-09-01

    The aim of this study was to investigate the relationship between methamphetamine (MA) use history and segmental hair analysis (1 and 3 cm sections) and whole hair analysis results in Korean MA users in rehabilitation programs. Hair samples were collected from 26 Korean MA users. Eleven of the 26 subjects used cannabis with MA and two used cocaine, opiates, and MDMA with MA. Self-reported single doses of MA from the 26 subjects ranged from 0.03 to 0.5 g. Concentrations of MA and its metabolite amphetamine (AP) in hair were determined by gas chromatography mass spectrometry (GC/MS) after derivatization. The method used was well validated. Qualitative analysis from all 1 cm sections (n=154) revealed a good correlation between positive or negative results for MA in hair and self-reported MA use (69.48%, n=107). In detail, MA results were positive in 66 hair specimens of MA users who reported administering MA, and MA results were negative in 41 hair specimens of MA users who denied MA administration in the corresponding month. Test results were false-negative in 10.39% (n=16) of hair specimens and false-positive in 20.13% (n=31) of hair specimens. In false-positive cases, MA presumably continued to accumulate in hair after cessation, while in false-negative cases, the self-reported histories indicated either minimal MA use or use 5-7 months earlier. In terms of quantitative analysis, the concentrations of MA in 1 and 3 cm long hair segments and in whole hair samples ranged from 1.03 to 184.98 (mean 22.01), 2.26 to 89.33 (mean 18.71), and 0.91 to 124.49 (mean 15.24) ng/mg, respectively. Ten subjects showed a good correlation between MA use and MA concentration in hair; the correlation coefficients (r) of 7 of these 10 subjects ranged from 0.71 to 0.98 (mean 0.85). Four subjects showed a low correlation between MA use and MA concentration in hair, with correlation coefficients (r) ranging from 0.36 to 0.55.
Eleven subjects showed a poor correlation between MA use and MA concentration in hair, and for the remaining subject the correlation could not be determined. This study demonstrates the correlation between MA concentrations in hair and accurate MA use histories obtained by psychiatrists and well-trained counselors. It provides objective scientific findings that should considerably aid the interpretation of forensic results and of trials related to MA use. PMID:26197349

  8. An Approach to a Comprehensive Test Framework for Analysis and Evaluation of Text Line Segmentation Algorithms

    PubMed Central

    Brodic, Darko; Milivojevic, Dragan R.; Milivojevic, Zoran N.

    2011-01-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation is a key step for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms use text databases as reference templates. Because of this mismatch, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multiline text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for evaluating algorithm efficiency, based on the obtained error-type classification, are proposed. The first is based on the segmentation-line error description, while the second incorporates well-known signal detection theory. Each has different capabilities and conveniences, but they can be used as complements to make the evaluation process efficient. Overall, the procedure based on the segmentation-line error description has some advantages, being characterized by five measures that describe the measurement procedure. PMID:22164106

  9. Flow Analysis on a Limited Volume Chilled Water System

    SciTech Connect

    Zheng, Lin

    2012-07-31

    LANL currently has a limited volume chilled water system for use in a glove box, but the system needs to be updated. Before we start building our new system, a flow analysis is needed to ensure that there are no high flow rates, extreme pressures, or any other hazards involved in the system. In this project, the piping system is extremely important to us because it directly affects the overall design of the entire system. The primary components necessary for the chilled water piping system are shown in the design. They include the pipes themselves (perhaps of more than one diameter), the various fittings used to connect the individual pipes to form the desired system, the flow rate control devices (valves), and the pumps that add energy to the fluid. Even the simplest pipe systems are actually quite complex when they are viewed in terms of rigorous analytical considerations. I used an 'exact' analysis and dimensional analysis considerations combined with experimental results for this project. When 'real-world' effects are important (such as viscous effects in pipe flows), it is often difficult or impossible to use only theoretical methods to obtain the desired results. A judicious combination of experimental data with theoretical considerations and dimensional analysis is needed in order to reduce risks to an acceptable level.
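
    For a pipe system like this, the standard starting point for combining theory with experimental friction data is the Darcy-Weisbach equation with an empirical friction-factor correlation. The pipe dimensions, flow velocity, and the use of the Blasius correlation below are illustrative assumptions, not details of the LANL system.

```python
import math

def reynolds_number(velocity, diameter, nu=1.0e-6):
    """Re = v*D/nu, with nu defaulting to water at ~20 °C (m^2/s)."""
    return velocity * diameter / nu

def darcy_head_loss(f, length, diameter, velocity, g=9.81):
    """Darcy-Weisbach head loss: h_f = f * (L/D) * v^2 / (2g), in metres."""
    return f * (length / diameter) * velocity**2 / (2 * g)

# Hypothetical chilled-water line: 30 m of 25 mm pipe at 1.5 m/s.
v, D, L = 1.5, 0.025, 30.0
re = reynolds_number(v, D)
# Blasius correlation for smooth turbulent pipes (valid up to Re ~ 1e5);
# the friction factor is where experimental data enters the calculation.
f = 0.316 / re**0.25
print(f"Re = {re:.0f}, f = {f:.4f}, head loss = {darcy_head_loss(f, L, D, v):.2f} m")
```

Fittings and valves would add minor losses (sum of K * v^2 / 2g terms) on top of this straight-pipe estimate.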

  10. Three-dimensional volume analysis of vasculature in engineered tissues

    NASA Astrophysics Data System (ADS)

    YousefHussien, Mohammed; Garvin, Kelley; Dalecki, Diane; Saber, Eli; Helguera, María.

    2013-01-01

    Three-dimensional textural and volumetric image analysis holds great potential in understanding the image data produced by multi-photon microscopy. In this paper, an algorithm that quantitatively analyzes the texture and the morphology of vasculature in engineered tissues is proposed. The investigated 3D artificial tissues consist of Human Umbilical Vein Endothelial Cells (HUVECs) embedded in collagen and exposed to two regimes of ultrasound standing wave fields under different pressure conditions. Textural features were evaluated using the normalized Gray-Level Co-occurrence Matrix (GLCM) combined with Gray-Level Run Length Matrix (GLRLM) analysis. To minimize error resulting from any possible volume rotation and to provide a comprehensive textural analysis, an averaged version of nine GLCM and GLRLM orientations is used. To evaluate volumetric features, an automatic threshold using the gray-level mean value is utilized. Results show that our analysis is able to differentiate among the exposed samples, due to morphological changes induced by the standing wave fields. Furthermore, we demonstrate that providing more textural parameters than are currently reported in the literature enhances the quantitative understanding of the heterogeneity of artificial tissues.
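
    A minimal GLCM computation and one derived texture feature (contrast) can be sketched in plain NumPy. This is a toy 2-D, single-offset version of the averaged 3-D, nine-orientation analysis described above; `glcm` and `contrast` are hypothetical helpers, not the paper's code.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=4):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    m = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast: sum over (i, j) of (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

# Two toy 4-level images: a uniform one and a checkerboard.
flat = np.zeros((8, 8), dtype=int)
checker = np.indices((8, 8)).sum(axis=0) % 2 * 3  # alternating 0 and 3

print(contrast(glcm(flat)))     # → 0.0: no gray-level transitions
print(contrast(glcm(checker)))  # → 9.0: every horizontal pair differs by 3
```

Averaging such matrices over several offsets and angles, as the abstract describes, reduces the sensitivity of the features to volume rotation.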

  11. Biomechanical Analysis of Fusion Segment Rigidity Upon Stress at Both the Fusion and Adjacent Segments: A Comparison between Unilateral and Bilateral Pedicle Screw Fixation

    PubMed Central

    Kim, Ho-Joong; Kang, Kyoung-Tak; Chang, Bong-Soon; Lee, Choon-Ki; Kim, Jang-Woo

    2014-01-01

    Purpose The purpose of this study was to investigate the effects of unilateral pedicle screw fixation on the fusion segment and the superior adjacent segment after one-segment lumbar fusion using validated finite element models. Materials and Methods Four L3-4 fusion models were simulated according to the extent of decompression and the method of pedicle screw fixation in L3-4 lumbar fusion. These models included hemi-laminectomy with bilateral pedicle screw fixation in the L3-4 segment (BF-HL model), total laminectomy with bilateral pedicle screw fixation (BF-TL model), hemi-laminectomy with unilateral pedicle screw fixation (UF-HL model), and total laminectomy with unilateral pedicle screw fixation (UF-TL model). In each scenario, intradiscal pressures, annulus stress, and range of motion at the L2-3 and L3-4 segments were analyzed under flexion, extension, lateral bending, and torsional moments. Results Under the four pure moments, unilateral fixation reduced the increase in range of motion at the adjacent segment, but larger motions were noted at the fusion segment (L3-4) in the unilateral fixation (UF-HL and UF-TL) models compared to bilateral fixation. The maximal von Mises stress showed patterns similar to the range of motion at both the superior adjacent L2-3 segment and the fusion segment. Conclusion The current study suggests that unilateral pedicle screw fixation is unable to afford sufficient biomechanical stability in the case of bilateral total laminectomy. Conversely, in the case of hemi-laminectomy, unilateral fixation could be an alternative option, which also has the potential benefit of reducing stress at the adjacent segment. PMID:25048501

  12. Analysis of human hair to assess exposure to organophosphate flame retardants: Influence of hair segments and gender differences.

    PubMed

    Qiao, Lin; Zheng, Xiao-Bo; Zheng, Jing; Lei, Wei-Xiang; Li, Hong-Fang; Wang, Mei-Huan; He, Chun-Tao; Chen, She-Jun; Yuan, Jian-Gang; Luo, Xiao-Jun; Yu, Yun-Jiang; Yang, Zhong-Yi; Mai, Bi-Xian

    2016-07-01

    Hair is a promising, non-invasive, human biomonitoring matrix that can provide insight into retrospective and integral exposure to organic pollutants. In the present study, we measured the concentrations of organophosphate flame retardants (PFRs) in hair and serum samples from university students in Guangzhou, China, and compared the PFR concentrations in the female hair segments using paired distal (5-10 cm from the root) and proximal (0-5 cm from the root) samples. PFRs were not detected in the serum samples. All PFRs except tricresyl phosphate (TMPP) and tri-n-propyl phosphate (TPP) were detected in more than half of all hair samples. The concentrations of total PFRs varied from 10.1 to 604 ng/g, with a median of 148 ng/g. Tris(chloroisopropyl) phosphate (TCIPP) and tri(2-ethylexyl) phosphate (TEHP) were the predominant PFRs in hair. The concentrations of most PFRs in the distal segments were 1.5-8.6 times higher than those in the proximal segments of the hair (t-test, p<0.05), which may be due to the longer exposure time of the distal segments to external sources. The values of log(distal PFR concentration/proximal PFR concentration) were significantly positively correlated with the log KOA of the PFRs (p<0.05, r=0.68), indicating that PFRs with a higher log KOA tend to accumulate in hair at a higher rate than PFRs with a lower log KOA. Using combined segments of female hair, significantly higher PFR concentrations were observed in female hair than in male hair. In contrast, female hair exhibited significantly lower PFR concentrations than male hair when the same hair position was used for both genders (0-5 cm from the scalp). The conflicting results regarding gender differences in PFRs in hair highlight the importance of segmental analysis when using hair as an indicator of human exposure to PFRs. PMID:27078091
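
    The reported relationship between log KOA and the distal/proximal concentration ratio is a simple Pearson correlation, which can be sketched as follows. The per-compound values below are invented for illustration, not the study's measurements.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-compound values: octanol-air partition coefficient
# (log KOA) and log of the distal/proximal hair concentration ratio.
log_koa = np.array([8.2, 9.1, 9.8, 10.5, 11.2, 12.0])
log_ratio = np.array([0.10, 0.25, 0.30, 0.55, 0.60, 0.85])

r, p = pearsonr(log_koa, log_ratio)
print(f"r = {r:.2f}, p = {p:.3f}")  # strong positive association
```

A positive r, as in the study, means compounds with higher log KOA accumulate preferentially in the older (distal) hair segments.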

  13. Coal gasification systems engineering and analysis. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1980-01-01

Feasibility analyses and systems engineering studies for a 20,000 tons per day medium-Btu gas (MBG) coal gasification plant to be built by TVA in Northern Alabama were conducted. Major objectives were as follows: (1) provide design and cost data to support the selection of a gasifier technology and other major plant design parameters, (2) provide design and cost data to support alternate product evaluation, (3) prepare a technology development plan to address areas of high technical risk, and (4) develop schedules, PERT charts, and a work breakdown structure to aid in preliminary project planning. Volume 1 contains a summary of gasification system characterizations. Five gasification technologies were selected for evaluation: Koppers-Totzek, Texaco, Lurgi Dry Ash, Slagging Lurgi, and Babcock and Wilcox. A summary of the trade studies and cost sensitivity analysis is included.

  14. Efficacy of bronchoscopic lung volume reduction: a meta-analysis

    PubMed Central

    Iftikhar, Imran H; McGuire, Franklin R; Musani, Ali I

    2014-01-01

Background Over the last several years, the morbidity, mortality, and high costs associated with lung volume reduction (LVR) surgery have fuelled the development of different methods for bronchoscopic LVR (BLVR) in patients with emphysema. In this meta-analysis, we sought to study and compare the efficacy of most of these methods. Methods Eligible studies were retrieved from PubMed and Embase for the following BLVR methods: one-way valves, sealants (BioLVR), LVR coils, airway bypass stents, and bronchial thermal vapor ablation. Primary study outcomes included the mean change post-intervention in the lung function tests, the 6-minute walk distance, and the St George’s Respiratory Questionnaire. Secondary outcomes included treatment-related complications. Results Except for the airway bypass stents, all other methods of BLVR showed efficacy in primary outcomes. However, in comparison, the BioLVR method showed the most significant findings and was the least associated with major treatment-related complications. For the BioLVR method, the mean change in forced expiratory volume (in first second) was 0.18 L (95% confidence interval [CI]: 0.09 to 0.26; P<0.001); in 6-minute walk distance was 23.98 m (95% CI: 12.08 to 35.88; P<0.01); and in St George’s Respiratory Questionnaire was −8.88 points (95% CI: −12.12 to −5.64; P<0.001). Conclusion The preliminary findings of our meta-analysis signify the importance of most methods of BLVR. The magnitude of the effect on selected primary outcomes shows noninferiority, if not equivalence, when compared to what is known for surgical LVR. PMID:24868153
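Pooled estimates like the FEV1 change above are typically obtained by inverse-variance weighting of per-study mean changes. A fixed-effect sketch with hypothetical study values (not the trials in this meta-analysis):

```python
import math

# Hypothetical per-study mean changes in FEV1 (litres) with standard
# errors; illustrative numbers only.
studies = [(0.15, 0.06), (0.22, 0.05), (0.12, 0.08)]  # (mean change, SE)

# Inverse-variance (fixed-effect) pooling
weights = [1.0 / se ** 2 for _, se in studies]
pooled = sum(w * m for (m, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval using the normal approximation
ci_low = pooled - 1.96 * pooled_se
ci_high = pooled + 1.96 * pooled_se
```

A random-effects model (e.g. DerSimonian-Laird) would additionally widen the interval to account for between-study heterogeneity.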

  15. On the development of weighting factors for ballast ranking prioritization & development of the relationship and rate of defective segments based on volume of missing ballast

    NASA Astrophysics Data System (ADS)

    Cronin, John

This thesis explores the effects of missing ballast on track behavior and degradation. As ballast is an integral part of the track structure, the hypothesized effect of missing ballast is that defects will be more common, which in turn leads to more derailments. In order to quantify the volume of missing ballast, remote sensing technologies were used to provide an accurate profile of the ballast. When the existing profile is compared to an idealized profile, the area of missing ballast can be computed. The area is then subdivided into zones which represent the area in which the ballast performs a key function in the track structure. These areas are then extrapolated into the volume of missing ballast for each zone based on the distance between collected profiles. In order to emphasize the key functions that the zones previously created perform, weighting factors were developed based on common risk-increasing hazards, such as curves and heavy axle loads, which are commonly found on railways. These weighting factors are applied to the specified zones' missing ballast volume when such a hazard exists in that segment of track. Another set of weighting factors was developed to represent the increased risk, or preference for lower risk, for operational factors such as the transport of hazardous materials or for being a key route. Through these weighting factors, ballast replenishment can be prioritized to focus on the areas that pose a higher risk of derailments and their associated costs. For the special cases where the risk or aversion to risk comes from what is being transported, as with hazardous materials or passengers, an economic risk assessment was completed in order to quantify the risk associated with their transport. This economic risk assessment looks at the increased costs associated with incidents that occur and how they compare to incidents which do not directly involve the special cargos.
In order to provide support for the use of the previously developed weightings as well as to quantify the actual impact that missing ballast has on the rate of geometry defects, analyses which quantified the risk of missing ballast were performed. In addition to quantifying the rate of defects, analyses were performed which looked at the impact associated with curved track, how the location of missing ballast impacts the rate of geometry defects and how the combination of the two compared with the previous analyses. Through this research, the relationship between the volume of missing ballast and ballast-related defects has been identified and quantified. This relationship is positive for the aggregate of all ballast-related defects but does not always exist for individual defects which occasionally have unique behavior. For the non-ballast defects, a relationship between missing ballast and their rate of occurrence did not always appear to exist. The impact of curves was apparent, showing that the rate of defects was either similar to or exceeded the rate of defects for tangent track. For the analyses which looked at the location of ballast in crib or shoulder, the results were quite similar to the previous analyses. The development, application and improvements of a risk-based ballast maintenance prioritization system provides a relatively low-cost and effective method to improve the operational safety for all railroads.
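The volume-and-weighting scheme described above can be sketched as follows; the zone geometry, profile spacing, and hazard weights here are hypothetical placeholders, not the thesis's calibrated values:

```python
# Missing-ballast area per zone at consecutive scanned profiles is
# extrapolated to volume by the spacing between profiles, then scaled
# by weighting factors for hazards present on the segment.
profile_spacing_ft = 10.0  # assumed distance between scanned profiles

# Missing cross-sectional area (sq ft) per zone at consecutive profiles
# (idealized profile minus measured profile).
zone_areas = {
    "crib":     [1.2, 1.5, 1.1],
    "shoulder": [0.8, 0.6, 0.9],
}

# Hypothetical multiplicative weights for hazards on this segment.
hazard_weights = {"curve": 1.5, "heavy_axle": 1.2}

zone_volumes = {z: sum(a) * profile_spacing_ft for z, a in zone_areas.items()}

weight = 1.0
for w in hazard_weights.values():
    weight *= w

# Higher score -> higher replenishment priority
priority_score = weight * sum(zone_volumes.values())
```

Segments are then ranked by `priority_score`, so curved, heavily loaded track with large missing volumes rises to the top of the replenishment list.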

  16. Biomechanical comparison of mono-segment transpedicular fixation with short-segment fixation for treatment of thoracolumbar fractures: a finite element analysis.

    PubMed

    Xu, Guijun; Fu, Xin; Du, Changling; Ma, Jianxiong; Li, Zhijun; Tian, Peng; Zhang, Tao; Ma, Xinlong

    2014-10-01

Mono-segment transpedicular fixation is a method for the treatment of certain types of thoracolumbar spinal fracture. Finite element models were constructed to evaluate the biomechanics of mono-segment transpedicular fixation of thoracolumbar fracture. A spinal motion segment (T10-L2) was scanned and used to establish the models. The superior half of the cortical bone of T12 was removed and the superior half of the cancellous bone of the T12 body was assigned the material properties of injured bone to mimic vertebral fracture. Transpedicular fixation of T11 and T12 was performed to produce a mono-segment fixation model; T11 and L1 were fixed to produce a short-segment fixation model. Motion differences between functional units and von Mises stress on the spine and implants were measured under axial compression, anterior bending, extensional bending, lateral bending and axial rotation. We found no significant difference between mono- and short-segment fixations in the motion of any functional unit. Stress on the T10/T11 nucleus pulposus and the T10/T11 and L1/L2 annulus fibrosus increased significantly, by about 75%, on anterior bending, extensional bending and lateral bending. In the fracture model, stress was increased by 24% at the inferior endplate of T10 and by 43% at the superior endplate of L2. All increased stresses were reduced after fixation, and lower stress was observed with mono-segment fixation. In summary, the biomechanics of mono-segment pedicle screw instrumentation was similar to that of conventional short-segment fixation. As a minimally invasive treatment, mono-segment fixation would be appropriate for the treatment of selected thoracolumbar spinal fractures. PMID:25267283

  17. Synfuel program analysis. Volume 1: Procedures-capabilities

    NASA Astrophysics Data System (ADS)

    Muddiman, J. B.; Whelan, J. W.

    1980-07-01

The analytic procedures and capabilities developed by Resource Applications (RA) for examining the economic viability, public costs, and national benefits of alternative synfuel projects are described. This volume is intended for Department of Energy (DOE) and Synthetic Fuel Corporation (SFC) program management personnel and includes a general description of the costing, venture, and portfolio models with enough detail for the reader to be able to specify cases and interpret outputs. It contains an explicit description (with examples) of the types of results which can be obtained when the models are applied to the analysis of individual projects; the analysis of input uncertainty, i.e., risk; and the analysis of portfolios of such projects, including varying technology mixes and buildup schedules. The objective is to obtain, on the one hand, comparative measures of private investment requirements and expected returns (under differing public policies) as they affect the private decision to proceed, and, on the other, public costs and national benefits as they affect public decisions to participate (in what form, in what areas, and to what extent).

  18. Study of Alternate Space Shuttle Concepts. Volume 2, Part 2: Concept Analysis and Definition

    NASA Technical Reports Server (NTRS)

    1971-01-01

This is the final report of a Phase A Study of Alternate Space Shuttle Concepts by the Lockheed Missiles & Space Company (LMSC) for the National Aeronautics and Space Administration George C. Marshall Space Flight Center (MSFC). The purpose of the eleven-month study, which began on 30 June 1970, was to examine the stage-and-one-half and other Space Shuttle configurations and to establish feasibility, performance, cost, and schedules for the selected concepts. This final report consists of four volumes as follows: Volume I - Executive Summary, Volume II - Concept Analysis and Definition, Volume III - Program Planning, and Volume IV - Cost Data. This document is Volume II, Concept Analysis and Definition.

  19. Sequence analysis of both genome segments of three Croatian infectious bursal disease field viruses.

    PubMed

    Lojkić, I; Bidin, Z; Pokrić, B

    2008-09-01

In order to determine the mutations responsible for virulence, three Croatian field infectious bursal disease viruses (IBDV), designated Cro-Ig/02, Cro-Po/00, and Cro-Pa/98, were characterized. Coding regions of both genomic segments were sequenced, and the nucleotide and deduced amino acid sequences were compared with previously reported full-length sequenced IBDV strains. Phylogenetic analysis, based on the nucleotide and deduced amino acid sequences of the polyprotein and VP1, was performed. Eight characteristic amino acid residues common to very virulent (vv) IBDV were detected on the polyprotein: 222A, 256I, 294I, 451L, 685N, 715S, 751D, and 1005A. All eight were found in Cro-Ig/02 and Cro-Po/00. Cro-Pa/98 had all the characteristics of an attenuated strain, except for glutamine at residue 253, which is common to vv, classical virulent, and variant strains. Between less virulent and vvIBDV, three substitutions were found on VP5: 49 G --> R, 79 --> F, and 137 R --> W. In VP1, there were nine characteristic amino acid residues common to vvIBDV: 146D, 147N, 242E, 390M, 393D, 511S, 562P, 687P, and 695R. All nine residues were found in Cro-Ig/02, and eight were found in Cro-Po/00, which had isoleucine at residue 390. Based on our analyses, isolates Cro-Ig/02 and Cro-Po/00 were classified with vvIBDV strains. Cro-Pa/98 shared all characteristic amino acid residues with attenuated and classical virulence strains, and was classified accordingly. PMID:18939645

  20. Global Warming’s Six Americas: An Audience Segmentation Analysis (Invited)

    NASA Astrophysics Data System (ADS)

    Roser-Renouf, C.; Maibach, E.; Leiserowitz, A.

    2009-12-01

    One of the first rules of effective communication is to “know thy audience.” People have different psychological, cultural and political reasons for acting - or not acting - to reduce greenhouse gas emissions, and climate change educators can increase their impact by taking these differences into account. In this presentation we will describe six unique audience segments within the American public that each responds to the issue in its own distinct way, and we will discuss methods of engaging each. The six audiences were identified using a nationally representative survey of American adults conducted in the fall of 2008 (N=2,164). In two waves of online data collection, the public’s climate change beliefs, attitudes, risk perceptions, values, policy preferences, conservation, and energy-efficiency behaviors were assessed. The data were subjected to latent class analysis, yielding six groups distinguishable on all the above dimensions. The Alarmed (18%) are fully convinced of the reality and seriousness of climate change and are already taking individual, consumer, and political action to address it. The Concerned (33%) - the largest of the Six Americas - are also convinced that global warming is happening and a serious problem, but have not yet engaged with the issue personally. Three other Americas - the Cautious (19%), the Disengaged (12%) and the Doubtful (11%) - represent different stages of understanding and acceptance of the problem, and none are actively involved. The final America - the Dismissive (7%) - are very sure it is not happening and are actively involved as opponents of a national effort to reduce greenhouse gas emissions. Mitigating climate change will require a diversity of messages, messengers and methods that take into account these differences within the American public. 
The findings from this research can serve as guideposts for educators on the optimal choices for reaching and influencing target groups with varied informational needs, values and beliefs.

  1. Breast Tissue 3D Segmentation and Visualization on MRI

    PubMed Central

    Cui, Xiangfei; Sun, Feifei

    2013-01-01

    Tissue segmentation and visualization are useful for breast lesion detection and quantitative analysis. In this paper, a 3D segmentation algorithm based on Kernel-based Fuzzy C-Means (KFCM) is proposed to separate the breast MR images into different tissues. Then, an improved volume rendering algorithm based on a new transfer function model is applied to implement 3D breast visualization. Experimental results have been shown visually and have achieved reasonable consistency. PMID:23983676
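A kernel fuzzy C-means classifier of the kind the paper builds on can be sketched in one dimension; the 3D, image-based version adds neighborhood handling and the rendering step. All parameters and data below are illustrative assumptions:

```python
import math

# Minimal kernel fuzzy c-means (KFCM) on 1D intensities with a Gaussian
# kernel. Fuzzy memberships replace hard cluster assignments.
def kfcm(xs, c=2, m=2.0, sigma=2.0, iters=50):
    kern = lambda a, b: math.exp(-((a - b) ** 2) / (sigma ** 2))
    centers = [min(xs), max(xs)]  # crude init, valid for c == 2
    u = [[0.0] * len(xs) for _ in range(c)]
    for _ in range(iters):
        # Update memberships from kernel-induced distances 1 - K(x, v)
        for k, x in enumerate(xs):
            d = [max(1.0 - kern(x, v), 1e-12) for v in centers]
            tot = sum((1.0 / dj) ** (1.0 / (m - 1)) for dj in d)
            for i in range(c):
                u[i][k] = (1.0 / d[i]) ** (1.0 / (m - 1)) / tot
        # Update centers as kernel-weighted means
        for i in range(c):
            w = [u[i][k] ** m * kern(xs[k], centers[i])
                 for k in range(len(xs))]
            centers[i] = sum(wk * xk for wk, xk in zip(w, xs)) / sum(w)
    return centers, u

# Two intensity "tissues" clustered around 10 and 30
intensities = [9, 10, 11, 10, 29, 30, 31, 30]
centers, memberships = kfcm(intensities)
```

Each voxel's tissue label is taken as the cluster with the highest membership, and the per-tissue labels then drive the transfer function for volume rendering.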

  2. Tumor Burden Analysis on Computed Tomography by Automated Liver and Tumor Segmentation

    PubMed Central

    Linguraru, Marius George; Richbourg, William J.; Liu, Jianfei; Watt, Jeremy M.; Pamulapati, Vivek; Wang, Shijun; Summers, Ronald M.

    2013-01-01

The paper presents the automated computation of hepatic tumor burden from abdominal CT images of diseased populations with inconsistent image enhancement. The automated segmentation of livers is addressed first. A novel three-dimensional (3D) affine invariant shape parameterization is employed to compare local shape across organs. By generating a regular sampling of the organ's surface, this parameterization can be effectively used to compare features of a set of closed 3D surfaces point-to-point, while avoiding common problems with the parameterization of concave surfaces. From an initial segmentation of the livers, the areas of atypical local shape are determined using training sets. A geodesic active contour locally corrects the segmentations of the livers in abnormal images. Graph cuts segment the hepatic tumors using shape and enhancement constraints. Liver segmentation errors are reduced significantly and all tumors are detected. Finally, support vector machines and feature selection are employed to reduce the number of false tumor detections. A tumor detection true positive fraction of 100% is achieved at 2.3 false positives/case and the tumor burden is estimated with 0.9% error. Results from the test data demonstrate the method's robustness in analyzing livers from difficult clinical cases to allow the temporal monitoring of patients with hepatic cancer. PMID:22893379

  3. An analysis of methods for the selection of atlases for use in medical image segmentation

    NASA Astrophysics Data System (ADS)

    Prescott, Jeffrey W.; Best, Thomas M.; Haq, Furqan; Jackson, Rebecca; Gurcan, Metin

    2010-03-01

    The use of atlases has been shown to be a robust method for segmentation of medical images. In this paper we explore different methods of selection of atlases for the segmentation of the quadriceps muscles in magnetic resonance (MR) images, although the results are pertinent for a wide range of applications. The experiments were performed using 103 images from the Osteoarthritis Initiative (OAI). The images were randomly split into a training set consisting of 50 images and a testing set of 53 images. Three different atlas selection methods were systematically compared. First, a set of readers was assigned the task of selecting atlases from a training population of images, which were selected to be representative subgroups of the total population. Second, the same readers were instructed to select atlases from a subset of the training data which was stratified based on population modes. Finally, every image in the training set was employed as an atlas, with no input from the readers, and the atlas which had the best initial registration, judged by an appropriate registration metric, was used in the final segmentation procedure. The segmentation results were quantified using the Zijdenbos similarity index (ZSI). The results show that over all readers the agreement of the segmentation algorithm decreased from 0.76 to 0.74 when using population modes to assist in atlas selection. The use of every image in the training set as an atlas outperformed both manual atlas selection methods, achieving a ZSI of 0.82.
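The Zijdenbos similarity index used to score agreement is the Dice overlap, 2|A ∩ B| / (|A| + |B|). A minimal sketch on toy binary masks (nested lists standing in for label images):

```python
# ZSI / Dice coefficient between two binary segmentation masks.
def zsi(mask_a, mask_b):
    inter = sum(a and b for ra, rb in zip(mask_a, mask_b)
                for a, b in zip(ra, rb))
    size_a = sum(map(sum, mask_a))
    size_b = sum(map(sum, mask_b))
    return 2.0 * inter / (size_a + size_b)

# Toy automatic vs. manual segmentations of a 3x3 region
auto   = [[0, 1, 1], [0, 1, 1], [0, 0, 0]]
manual = [[0, 1, 1], [0, 1, 0], [0, 0, 0]]
score = zsi(auto, manual)
```

A ZSI of 1.0 means perfect overlap; the 0.74-0.82 range reported above is typical for soft-tissue structures such as the quadriceps.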

  4. Sequence analysis on the information of folding initiation segments in ferredoxin-like fold proteins

    PubMed Central

    2014-01-01

    Background While some studies have shown that the 3D protein structures are more conservative than their amino acid sequences, other experimental studies have shown that even if two proteins share the same topology, they may have different folding pathways. There are many studies investigating this issue with molecular dynamics or Go-like model simulations, however, one should be able to obtain the same information by analyzing the proteins’ amino acid sequences, if the sequences contain all the information about the 3D structures. In this study, we use information about protein sequences to predict the location of their folding segments. We focus on proteins with a ferredoxin-like fold, which has a characteristic topology. Some of these proteins have different folding segments. Results Despite the simplicity of our methods, we are able to correctly determine the experimentally identified folding segments by predicting the location of the compact regions considered to play an important role in structural formation. We also apply our sequence analyses to some homologues of each protein and confirm that there are highly conserved folding segments despite the homologues’ sequence diversity. These homologues have similar folding segments even though the homology of two proteins’ sequences is not so high. Conclusion Our analyses have proven useful for investigating the common or different folding features of the proteins studied. PMID:24884463

  5. Stress and strain analysis of contractions during ramp distension in partially obstructed guinea pig jejunal segments

    PubMed Central

    Zhao, Jingbo; Liao, Donghua; Yang, Jian; Gregersen, Hans

    2011-01-01

    Previous studies have demonstrated morphological and biomechanical remodeling in the intestine proximal to an obstruction. The present study aimed to obtain stress and strain thresholds to initiate contraction and the maximal contraction stress and strain in partially obstructed guinea pig jejunal segments. Partial obstruction and sham operations were surgically created in mid-jejunum of male guinea pigs. The animals survived 2, 4, 7, and 14 days, respectively. Animals not being operated on served as normal controls. The segments were used for no-load state, zero-stress state and distension analyses. The segment was inflated to 10 cmH2O pressure in an organ bath containing 37°C Krebs solution and the outer diameter change was monitored. The stress and strain at the contraction threshold and at maximum contraction were computed from the diameter, pressure and the zero-stress state data. Young’s modulus was determined at the contraction threshold. The muscle layer thickness in obstructed intestinal segments increased up to 300%. Compared with sham-obstructed and normal groups, the contraction stress threshold, the maximum contraction stress and the Young’s modulus at the contraction threshold increased whereas the strain threshold and maximum contraction strain decreased after 7 days obstruction (P<0.05 and 0.01). In conclusion, in the partially obstructed intestinal segments, a larger distension force was needed to evoke contraction likely due to tissue remodeling. Higher contraction stresses were produced and the contraction deformation (strain) became smaller. PMID:21632056
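The stress and strain computed from pressure, diameter, and the zero-stress state follow thin-wall mechanics of the kind sketched below; the numbers and the simplified Laplace/Green formulation are illustrative assumptions, not the study's exact analysis:

```python
# Thin-wall circumferential stress (Laplace law) and Green strain from
# the circumferential stretch ratio. Illustrative dimensions only.
pressure_cmH2O = 10.0
pressure_kpa = pressure_cmH2O * 0.0980665  # 1 cmH2O = 0.0980665 kPa

outer_diameter_mm = 6.0
wall_thickness_mm = 0.5
zero_stress_mid_diameter_mm = 4.0  # reference from the zero-stress state

inner_radius_mm = outer_diameter_mm / 2 - wall_thickness_mm
mid_diameter_mm = outer_diameter_mm - wall_thickness_mm

# Laplace law: sigma = P * r_i / h
stress_kpa = pressure_kpa * inner_radius_mm / wall_thickness_mm

# Green strain from the circumferential stretch ratio
stretch = mid_diameter_mm / zero_stress_mid_diameter_mm
green_strain = 0.5 * (stretch ** 2 - 1.0)
```

Tracking the outer diameter during ramp distension and evaluating these quantities at the onset of contraction yields the stress and strain thresholds; the obstruction-induced wall thickening raises the stress threshold while lowering the strain threshold.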

  6. Airway segmentation and analysis for the study of mouse models of lung disease using micro-CT

    NASA Astrophysics Data System (ADS)

    Artaechevarria, X.; Pérez-Martín, D.; Ceresa, M.; de Biurrun, G.; Blanco, D.; Montuenga, L. M.; van Ginneken, B.; Ortiz-de-Solorzano, C.; Muñoz-Barrutia, A.

    2009-11-01

    Animal models of lung disease are gaining importance in understanding the underlying mechanisms of diseases such as emphysema and lung cancer. Micro-CT allows in vivo imaging of these models, thus permitting the study of the progression of the disease or the effect of therapeutic drugs in longitudinal studies. Automated analysis of micro-CT images can be helpful to understand the physiology of diseased lungs, especially when combined with measurements of respiratory system input impedance. In this work, we present a fast and robust murine airway segmentation and reconstruction algorithm. The algorithm is based on a propagating fast marching wavefront that, as it grows, divides the tree into segments. We devised a number of specific rules to guarantee that the front propagates only inside the airways and to avoid leaking into the parenchyma. The algorithm was tested on normal mice, a mouse model of chronic inflammation and a mouse model of emphysema. A comparison with manual segmentations of two independent observers shows that the specificity and sensitivity values of our method are comparable to the inter-observer variability, and radius measurements of the mainstem bronchi reveal significant differences between healthy and diseased mice. Combining measurements of the automatically segmented airways with the parameters of the constant phase model provides extra information on how disease affects lung function.
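The propagating-wavefront idea, with a rule to abort when the front suddenly widens (a leak into the parenchyma), can be illustrated on a toy 2D image; the real algorithm is 3D, fast-marching based, and uses more elaborate rules:

```python
from collections import deque

# Toy 2D "CT": 0 = air (dark), 9 = tissue (bright).
image = [
    [9, 9, 9, 9, 9, 9],
    [9, 0, 0, 0, 9, 9],
    [9, 9, 0, 9, 9, 9],
    [9, 9, 0, 0, 0, 9],
    [9, 9, 9, 9, 9, 9],
]

def wavefront_segment(img, seed, threshold=5, max_front=4):
    """Grow a front from `seed` through sub-threshold voxels; abort
    if the front exceeds `max_front` cells (crude leak detection)."""
    rows, cols = len(img), len(img[0])
    segmented = {seed}
    front = deque([seed])
    while front:
        if len(front) > max_front:  # front widened abruptly: likely a leak
            break
        nxt = deque()
        for r, c in front:
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and (nr, nc) not in segmented
                        and img[nr][nc] < threshold):
                    segmented.add((nr, nc))
                    nxt.append((nr, nc))
        front = nxt
    return segmented

airway = wavefront_segment(image, (1, 1))
```

Branch points in the real method are detected when the front splits into disconnected components, which is what divides the tree into labeled segments for radius measurement.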

  7. Concepts and analysis for precision segmented reflector and feed support structures

    NASA Technical Reports Server (NTRS)

    Miller, Richard K.; Thomson, Mark W.; Hedgepeth, John M.

    1990-01-01

Several issues surrounding the design of a large (20-meter diameter) Precision Segmented Reflector are investigated. The concerns include development of a reflector support truss geometry that will permit deployment into the required doubly-curved shape without significant member strains. For deployable and erectable reflector support trusses, the reduction of structural redundancy was analyzed to achieve reduced weight and complexity for the designs. The stiffness and accuracy of such reduced-member trusses, however, were found to be affected to an unexpected degree. The Precision Segmented Reflector designs were developed with performance requirements that represent the Reflector application. A novel deployable sunshade concept was developed, and a detailed parametric study of various feed support structural concepts was performed. The results of the detailed study reveal what may be the most desirable feed support structure geometry for Precision Segmented Reflector/Large Deployable Reflector applications.

  8. Analysis of an Externally Radially Cracked Ring Segment Subject to Three-Point Radial Loading

    NASA Technical Reports Server (NTRS)

Gross, B.; Srawley, J. E.; Shannon, J. L., Jr.

    1983-01-01

    The boundary collocation method was used to generate Mode 1 stress intensity and crack mouth opening displacement coefficients for externally radially cracked ring segments subjected to three point radial loading. Numerical results were obtained for ring segment outer-to-inner radius ratios (R sub o/R sub i) ranging from 1.10 to 2.50 and crack length to segment width ratios (a/W) ranging from 0.1 to 0.8. Stress intensity and crack mouth displacement coefficients were found to depend on the ratios R sub o/R sub i and a/W as well as the included angle between the directions of the reaction forces.

  9. Development and analysis of a linearly segmented CPC collector for industrial steam generation

    NASA Astrophysics Data System (ADS)

    Figueroa, J. A. A. F.

    1980-06-01

The mirror consists of long, narrow planar segments placed inside sealed low-cost glass tubes. The absorber is a cylindrical fin inside an evacuated glass tube. The optical efficiency of the segmented concentrator was simulated by means of a Monte Carlo ray-tracing program. Laser ray-tracing techniques were also used to evaluate the possibilities of this new concept. A preliminary evaluation of the experimental concentrator was done using a relatively simple method that combines results from two experimental measurements: overall heat loss coefficient and optical efficiency. A transient behavior test was used to measure the overall heat loss coefficient throughout a wide range of temperatures.

  10. A fully-automatic caudate nucleus segmentation of brain MRI: Application in volumetric analysis of pediatric attention-deficit/hyperactivity disorder

    PubMed Central

    2011-01-01

Background Accurate automatic segmentation of the caudate nucleus in magnetic resonance images (MRI) of the brain is of great interest in the analysis of developmental disorders. Segmentation methods based on a single atlas or on multiple atlases have been shown to suitably localize the caudate structure. However, the atlas prior information may not represent the structure of interest correctly. It may therefore be useful to introduce a more flexible technique for accurate segmentation. Method We present CaudateCut: a new fully-automatic method of segmenting the caudate nucleus in MRI. CaudateCut combines an atlas-based segmentation strategy with the Graph Cut energy-minimization framework. We adapt the Graph Cut model to make it suitable for segmenting small, low-contrast structures, such as the caudate nucleus, by defining new energy function data and boundary potentials. In particular, we exploit information concerning intensity and geometry, and we add supervised energies based on contextual brain structures. Furthermore, we reinforce boundary detection using a new multi-scale edgeness measure. Results We apply the novel CaudateCut method to segment the caudate nucleus in a new set of 39 pediatric attention-deficit/hyperactivity disorder (ADHD) patients and 40 control children, as well as in a public database of 18 subjects. We evaluate the quality of the segmentation using several volumetric and voxel-by-voxel measures. Our results show improved segmentation performance compared to state-of-the-art approaches, obtaining a mean overlap of 80.75%. Moreover, we present a quantitative volumetric analysis of caudate abnormalities in pediatric ADHD, the results of which show strong correlation with expert manual analysis.
Conclusion CaudateCut generates segmentation results that are comparable to gold-standard segmentations and which are reliable in the analysis of differentiating neuroanatomical abnormalities between healthy controls and pediatric ADHD. PMID:22141926

  11. Analysis of RapidArc optimization strategies using objective function values and dose-volume histograms.

    PubMed

    Oliver, Michael; Gagne, Isabelle; Popescu, Carmen; Ansbacher, Will; Beckham, Wayne A

    2010-01-01

RapidArc is a novel treatment planning and delivery system that has recently been made available for clinical use. Included within the Eclipse treatment planning system are a number of different optimization strategies that can be employed to improve the quality of the final treatment plan. The purpose of this study is to systematically assess three categories of strategies for four phantoms, and then apply proven strategies to clinical head and neck cases. Four phantoms were created within Eclipse with varying shapes and locations for the planning target volumes and organs at risk. A baseline optimization consisting of a single 359.8 degree arc with collimator at 45 degrees was applied to all phantoms. Three categories of strategies were assessed and compared to the baseline strategy. They include changing the initialization parameters, increasing the total number of control points, and increasing the total optimization time. Optimization log files were extracted from the treatment planning system along with final dose-volume histograms for plan assessment. Treatment plans were also generated for four head and neck patients to determine whether the results for phantom plans can be extended to clinical plans. The strategies that resulted in a significant difference from baseline were: changing the maximum leaf speed prior to optimization (p < 0.05), increasing the total number of segments by adding an arc (p < 0.05), and increasing the total optimization time by either continuing the optimization (p < 0.01) or adding time to the optimization by pausing the optimization (p < 0.01). The reductions in objective function values correlated with improvements in the dose-volume histogram (DVH). The addition of arcs and pausing strategies were applied to head and neck cancer cases, which demonstrated similar benefits with respect to the final objective function value and DVH.
Analysis of the optimization log files is a useful way to intercompare treatment plans that have the same dose-volume objectives and importance values. The results for clinical head and neck plans were consistent with phantom plans. PMID:20160684
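A cumulative DVH of the kind compared here reports, for each dose level, the fraction of structure volume receiving at least that dose. A minimal sketch on toy per-voxel doses:

```python
# Toy dose samples (Gy), one per voxel of a structure.
doses = [10.0, 22.5, 30.0, 45.0, 50.0, 52.0, 55.0, 60.0]

def cumulative_dvh(dose_samples, bin_width=10.0):
    """Cumulative DVH: fraction of volume receiving >= each dose level."""
    n = len(dose_samples)
    top = max(dose_samples)
    levels, volume_fraction = [], []
    d = 0.0
    while d <= top:
        levels.append(d)
        volume_fraction.append(sum(x >= d for x in dose_samples) / n)
        d += bin_width
    return levels, volume_fraction

levels, vol = cumulative_dvh(doses)
mean_dose = sum(doses) / len(doses)
```

Plans with identical dose-volume objectives can then be intercompared by their DVH curves alongside the final objective function values from the optimization logs.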

  12. Automated Forensic Animal Family Identification by Nested PCR and Melt Curve Analysis on an Off-the-Shelf Thermocycler Augmented with a Centrifugal Microfluidic Disk Segment

    PubMed Central

    Zengerle, Roland; von Stetten, Felix; Schmidt, Ulrike

    2015-01-01

Nested PCR remains a labor-intensive and error-prone biomolecular analysis. Laboratory workflow automation by precise control of minute liquid volumes in centrifugal microfluidic Lab-on-a-Chip systems holds great potential for such applications. However, the majority of these systems require costly custom-made processing devices. Our idea is to augment a standard laboratory device, here a centrifugal real-time PCR thermocycler, with inbuilt liquid handling capabilities for automation. We have developed a microfluidic disk segment enabling an automated nested real-time PCR assay for identification of common European animal groups adapted to forensic standards. For the first time we utilize a novel combination of fluidic elements, including pre-storage of reagents, to automate the assay at constant rotational frequency of an off-the-shelf thermocycler. It provides a universal duplex pre-amplification of short fragments of the mitochondrial 12S rRNA and cytochrome b genes, animal-group-specific main amplifications, and melting curve analysis for differentiation. The system was characterized with respect to assay sensitivity, specificity, risk of cross-contamination, and detection of minor components in mixtures. 92.2% of the tests performed showed fluidically failure-free sample handling and were used for evaluation. Altogether, augmentation of the standard real-time thermocycler with a self-contained centrifugal microfluidic disk segment resulted in an accelerated and automated analysis, reducing hands-on time and circumventing the risk of contamination associated with regular nested PCR protocols. PMID:26147196
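Melt curve differentiation reads the melting temperature off the peak of −dF/dT. A sketch on synthetic sigmoidal fluorescence data (not instrument output):

```python
import math

# Synthetic melt curve: fluorescence F drops sigmoidally around the
# melting temperature Tm as the duplex denatures.
temps = [70 + 0.5 * i for i in range(41)]  # 70-90 degrees C
tm_true = 81.0
fluor = [1.0 / (1.0 + math.exp((t - tm_true) / 0.8)) for t in temps]

# Negative derivative -dF/dT by central differences; its peak is Tm.
neg_dfdt = [-(fluor[i + 1] - fluor[i - 1]) / (temps[i + 1] - temps[i - 1])
            for i in range(1, len(temps) - 1)]
tm_est = temps[1 + max(range(len(neg_dfdt)), key=neg_dfdt.__getitem__)]
```

Animal groups are then distinguished by comparing the estimated Tm of the group-specific amplicon against its expected range.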

  13. Segmental and Positional Effects on Children's Coda Production: Comparing Evidence from Perceptual Judgments and Acoustic Analysis

    ERIC Educational Resources Information Center

    Theodore, Rachel M.; Demuth, Katherine; Shattuck-Hufnagel, Stephanie

    2012-01-01

    Children's early productions are highly variable. Findings from children's early productions of grammatical morphemes indicate that some of the variability is systematically related to segmental and phonological factors. Here, we extend these findings by assessing 2-year-olds' production of non-morphemic codas using both listener decisions and…

  14. Segmentation of hyper-pigmented spots in human skin using automated cluster analysis

    NASA Astrophysics Data System (ADS)

    Gossage, Kirk W.; Weissman, Jesse; Velthuizen, Robert

    2009-02-01

    The appearance and color distribution of skin are important characteristics that affect the human perception of health and vitality. Dermatologists and other skin researchers often use color and appearance to diagnose skin conditions and monitor the efficacy of procedures and treatments. Historically, most skin color and chromophore measurements have been performed using reflectance spectrometers and colorimeters. These devices acquire a single measurement over an integrated area defined by an aperture, and are therefore poorly suited to measure the color of pigmented lesions or other blemishes. Measurements of spots smaller than the aperture will be washed out with background, and spots that are larger may not be adequately sampled unless the blemish is homogeneous. Recently, multispectral imaging devices have become available for skin imaging. These devices are designed to image regions of skin and provide information about the levels of endogenous chromophores present in the image field of view. These data are presented as four images at each measurement site: RGB color, melanin, collagen, and blood. We developed a robust segmentation technique that can segment skin blemishes in these images and provide more precise values of melanin, blood, and collagen by only analyzing the segmented region of interest. Results from hundreds of skin images show this to be a robust automated segmentation technique over a range of skin tones and shades.
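    The aperture-versus-spot problem described above can be illustrated with a toy cluster analysis. Below is a minimal sketch, a plain two-cluster 1-D k-means on hypothetical melanin-channel values, not the authors' segmentation technique, showing how restricting the estimate to the segmented spot avoids washing it out with background:

```python
# Illustrative sketch (not the paper's algorithm): two-cluster 1-D k-means
# on melanin intensities, then a mean-chromophore estimate over the spot only.

def kmeans_1d(values, iters=20):
    """Two-cluster 1-D k-means; returns (centers, labels)."""
    centers = [min(values), max(values)]  # initialize at the extremes
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
                  for v in values]
        for k in (0, 1):
            members = [v for v, lab in zip(values, labels) if lab == k]
            if members:
                centers[k] = sum(members) / len(members)
    return centers, labels

# Hypothetical melanin image flattened to a list: background ~0.2, spot ~0.8.
pixels = [0.18, 0.22, 0.21, 0.79, 0.83, 0.20, 0.81, 0.19]
centers, labels = kmeans_1d(pixels)
spot = [v for v, lab in zip(pixels, labels) if lab == 1]
mean_melanin = sum(spot) / len(spot)  # averaged over the spot, not the aperture
```

An aperture-style average over all eight pixels would land near 0.43, telling us little about either the spot or the background.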

  15. 3D CT spine data segmentation and analysis of vertebrae bone lesions.

    PubMed

    Peter, R; Malinsky, M; Ourednicek, P; Jan, J

    2013-01-01

    A method is presented aiming at detecting and classifying bone lesions in 3D CT data of human spine, via Bayesian approach utilizing Markov random fields. A developed algorithm for necessary segmentation of individual possibly heavily distorted vertebrae based on 3D intensity modeling of vertebra types is presented as well. PMID:24110203

  17. Sequence and phylogenetic analysis of the S1 Genome segment of turkey-origin reoviruses

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Based on previous reports characterizing the turkey-origin avian reovirus (TRV) sigma-B (sigma-2) major outer capsid protein gene, the TRVs may represent a new group within the fusogenic orthoreoviruses. However, no sequence data from other TRV genes or genome segments has been reported. The sigma...

  18. Automated Segmentability Index for Layer Segmentation of Macular SD-OCT Images

    PubMed Central

    Lee, Kyungmoo; Buitendijk, Gabriëlle H.S.; Bogunovic, Hrvoje; Springelkamp, Henriët; Hofman, Albert; Wahle, Andreas; Sonka, Milan; Vingerling, Johannes R.; Klaver, Caroline C.W.; Abràmoff, Michael D.

    2016-01-01

    Purpose To automatically identify which spectral-domain optical coherence tomography (SD-OCT) scans will provide reliable automated layer segmentations for more accurate layer thickness analyses in population studies. Methods Six hundred ninety macular SD-OCT image volumes (6.0 × 6.0 × 2.3 mm3) were obtained from one eye of each of 690 subjects (74.6 ± 9.7 [mean ± SD] years; 37.8% male) randomly selected from the population-based Rotterdam Study. The dataset consisted of 420 OCT volumes with successful automated retinal nerve fiber layer (RNFL) segmentations obtained from our previously reported graph-based segmentation method and 270 volumes with failed segmentations. To evaluate the reliability of the layer segmentations, we have developed a new metric, the segmentability index SI, which is obtained from a random forest regressor based on 12 features using OCT voxel intensities, edge-based costs, and on-surface costs. The SI was compared with two well-known quality indices, the quality index (QI) and the maximum tissue contrast index (mTCI), using receiver operating characteristic (ROC) analysis. Results The AUC (95% confidence interval) was 0.713 (0.621-0.805) for the QI, 0.756 (0.673-0.838) for the mTCI, and 0.852 (0.784-0.920) for the SI. The SI AUC is significantly larger than either the QI or mTCI AUC (P < 0.01). Conclusions The segmentability index SI is well suited to identify SD-OCT scans for which successful automated intraretinal layer segmentations can be expected. Translational Relevance Interpreting the quantification of SD-OCT images requires the underlying segmentation to be reliable, but standard SD-OCT quality metrics do not predict which segmentations are reliable and which are not. The segmentability index SI presented in this study does allow reliable segmentations to be identified, which is important for more accurate layer thickness analyses in research and population studies. PMID:27066311
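    The AUC figures compared above can be computed without plotting an ROC curve at all, via the Mann-Whitney formulation: the AUC is the probability that a randomly chosen "successful" scan scores higher than a randomly chosen "failed" one. A minimal sketch with hypothetical index scores (not the study's data or code):

```python
# Sketch: ROC AUC as a pairwise rank statistic (Mann-Whitney U / n_pos*n_neg).

def roc_auc(scores_pos, scores_neg):
    """Fraction of (positive, negative) pairs ranked correctly; ties count half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical segmentability-index scores for successful vs. failed scans.
good = [0.9, 0.8, 0.75, 0.6]
failed = [0.5, 0.65, 0.4]
auc = roc_auc(good, failed)  # 11 of 12 pairs ordered correctly
```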

  19. Comparative analysis of operational forecasts versus actual weather conditions in airline flight planning, volume 3

    NASA Technical Reports Server (NTRS)

    Keitz, J. F.

    1982-01-01

    The impact of more timely and accurate weather data on airline flight planning with the emphasis on fuel savings is studied. This volume of the report discusses the results of Task 3 of the four major tasks included in the study. Task 3 compares flight plans developed on the Suitland forecast with actual data observed by the aircraft (and averaged over 10 degree segments). The results show that the average difference between the forecast and observed wind speed is 9 kts. without considering direction, and the average difference in the component of the forecast wind parallel to the direction of the observed wind is 13 kts. - both indicating that the Suitland forecast underestimates the wind speeds. The Root Mean Square (RMS) vector error is 30.1 kts. The average absolute difference in direction between the forecast and observed wind is 26 degrees and the temperature difference is 3 degree Centigrade. These results indicate that the forecast model as well as the verifying analysis used to develop comparison flight plans in Tasks 1 and 2 is a limiting factor and that the average potential fuel savings or penalty are up to 3.6 percent depending on the direction of flight.
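    A hedged sketch of how the RMS vector error quoted above can be computed: the root-mean-square magnitude of the forecast-minus-observed wind vector. The component convention and all values here are illustrative assumptions, not the report's data:

```python
import math

def wind_to_uv(speed, direction_deg):
    """Convert (speed, direction in degrees) to u/v components (convention assumed)."""
    rad = math.radians(direction_deg)
    return (speed * math.sin(rad), speed * math.cos(rad))

def rms_vector_error(forecast, observed):
    """RMS magnitude of the vector difference between paired wind reports."""
    total = 0.0
    for (fs, fd), (obs_s, obs_d) in zip(forecast, observed):
        fu, fv = wind_to_uv(fs, fd)
        ou, ov = wind_to_uv(obs_s, obs_d)
        total += (fu - ou) ** 2 + (fv - ov) ** 2
    return math.sqrt(total / len(forecast))

# Hypothetical (speed in kts, direction in degrees) pairs per 10-degree segment.
forecast = [(50, 270), (60, 280), (40, 300)]
observed = [(58, 265), (70, 285), (45, 310)]
err = rms_vector_error(forecast, observed)
```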

  20. Automatic partitioning of head CTA for enabling segmentation

    NASA Astrophysics Data System (ADS)

    Suryanarayanan, Srikanth; Mullick, Rakesh; Mallya, Yogish; Kamath, Vidya; Nagaraj, Nithin

    2004-05-01

    Radiologists perform a CT Angiography procedure to examine vascular structures and associated pathologies such as aneurysms. Volume rendering is used to exploit volumetric capabilities of CT that provides complete interactive 3-D visualization. However, bone forms an occluding structure and must be segmented out. The anatomical complexity of the head creates a major challenge in the segmentation of bone and vessel. An analysis of the head volume reveals varying spatial relationships between vessel and bone that can be separated into three sub-volumes: "proximal", "middle", and "distal". The "proximal" and "distal" sub-volumes contain good spatial separation between bone and vessel (carotid referenced here). Bone and vessel appear contiguous in the "middle" partition that remains the most challenging region for segmentation. The partition algorithm is used to automatically identify these partition locations so that different segmentation methods can be developed for each sub-volume. The partition locations are computed using bone, image entropy, and sinus profiles along with a rule-based method. The algorithm is validated on 21 cases (varying volume sizes, resolution, clinical sites, pathologies) using ground truth identified visually. The algorithm is also computationally efficient, processing a 500+ slice volume in 6 seconds (an impressive 0.01 seconds per slice), which makes it an attractive algorithm for pre-processing large volumes. The partition algorithm is integrated into the segmentation workflow. Fast and simple algorithms are implemented for processing the "proximal" and "distal" partitions. Complex methods are restricted to only the "middle" partition. The partition-enabled segmentation has been successfully tested and results are shown from multiple cases.

  1. National Evaluation of Family Support Programs. Final Report Volume A: The Meta-Analysis.

    ERIC Educational Resources Information Center

    Layzer, Jean I.; Goodson, Barbara D.; Bernstein, Lawrence; Price, Cristofer

    This volume is part of the final report of the National Evaluation of Family Support Programs and details findings from a meta-analysis of extant research on programs providing family support services. Chapter A1 of this volume provides a rationale for using meta-analysis. Chapter A2 describes the steps of preparation for the meta-analysis.…

  2. Kohonen map as a visualization tool for the analysis of protein sequences: multiple alignments, domains and segments of secondary structures.

    PubMed

    Hanke, J; Reich, J G

    1996-12-01

    The method of Kohonen maps, a special form of neural networks, was applied as a visualization tool for the analysis of protein sequence similarity. The procedure converts a sequence (domains, aligned sequences, segments of secondary structure) into a characteristic signal matrix. This conversion depends on the property or replacement score vector selected by the user. Similar sequences have a small distance in the signal space. The trained Kohonen network is functionally equivalent to an unsupervised non-linear cluster analyzer. Protein families, or aligned sequences, or segments of similar secondary structure, aggregate as clusters, and their proximity may be inspected on a color screen or on paper. Pull-down menus permit access to background information in the established text-oriented way. PMID:9021261
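    The clustering behavior described can be illustrated with a bare-bones Kohonen map. This sketch (deterministic initialization, toy 2-D "signal" vectors, not the authors' software) shows how similar inputs end up mapped to nearby nodes on the 1-D node line:

```python
# Minimal illustrative 1-D Kohonen (self-organizing) map: the best-matching
# node and its grid neighbors are pulled toward each input, so similar inputs
# aggregate on nearby nodes.

def train_som(data, n_nodes=4, epochs=50, lr=0.3, radius=1):
    dim = len(data[0])
    # deterministic initialization: nodes evenly spaced along the diagonal
    nodes = [[i / (n_nodes - 1)] * dim for i in range(n_nodes)]
    for _ in range(epochs):
        for x in data:
            # best-matching unit by squared Euclidean distance
            bmu = min(range(n_nodes),
                      key=lambda i: sum((w - v) ** 2 for w, v in zip(nodes[i], x)))
            for i in range(n_nodes):
                if abs(i - bmu) <= radius:  # update the grid neighborhood
                    nodes[i] = [w + lr * (v - w) for w, v in zip(nodes[i], x)]
    return nodes

def best_node(nodes, x):
    return min(range(len(nodes)),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(nodes[i], x)))

# Two toy clusters of "signal" vectors; each cluster claims one end of the map.
data = [[0.1, 0.1], [0.15, 0.05], [0.9, 0.9], [0.85, 0.95]]
som = train_som(data)
```

After training, inputs from the first cluster map to one end of the node line and inputs from the second cluster to the other, which is the proximity structure one would inspect visually.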

  3. Incorporation of learned shape priors into a graph-theoretic approach with application to the 3D segmentation of intraretinal surfaces in SD-OCT volumes of mice

    NASA Astrophysics Data System (ADS)

    Antony, Bhavna J.; Song, Qi; Abràmoff, Michael D.; Sohn, Eliott; Wu, Xiaodong; Garvin, Mona K.

    2014-03-01

    Spectral-domain optical coherence tomography (SD-OCT) finds widespread use clinically for the detection and management of ocular diseases. This non-invasive imaging modality has also begun to find frequent use in research studies involving animals such as mice. Numerous approaches have been proposed for the segmentation of retinal surfaces in SD-OCT images obtained from human subjects; however, the segmentation of retinal surfaces in mice scans is not as well-studied. In this work, we describe a graph-theoretic segmentation approach for the simultaneous segmentation of 10 retinal surfaces in SD-OCT scans of mice that incorporates learned shape priors. We compared the method to a baseline approach that did not incorporate learned shape priors and observed that the overall unsigned border position errors reduced from 3.58 +/- 1.33 μm to 3.20 +/- 0.56 μm.

  4. Probabilistic analysis of activation volumes generated during deep brain stimulation.

    PubMed

    Butson, Christopher R; Cooper, Scott E; Henderson, Jaimie M; Wolgamuth, Barbara; McIntyre, Cameron C

    2011-02-01

    Deep brain stimulation (DBS) is an established therapy for the treatment of Parkinson's disease (PD) and shows great promise for the treatment of several other disorders. However, while the clinical analysis of DBS has received great attention, a relative paucity of quantitative techniques exists to define the optimal surgical target and most effective stimulation protocol for a given disorder. In this study we describe a methodology that represents an evolutionary addition to the concept of a probabilistic brain atlas, which we call a probabilistic stimulation atlas (PSA). We outline steps to combine quantitative clinical outcome measures with advanced computational models of DBS to identify regions where stimulation-induced activation could provide the best therapeutic improvement on a per-symptom basis. While this methodology is relevant to any form of DBS, we present example results from subthalamic nucleus (STN) DBS for PD. We constructed patient-specific computer models of the volume of tissue activated (VTA) for 163 different stimulation parameter settings which were tested in six patients. We then assigned clinical outcome scores to each VTA and compiled all of the VTAs into a PSA to identify stimulation-induced activation targets that maximized therapeutic response with minimal side effects. The results suggest that selection of both electrode placement and clinical stimulation parameter settings could be tailored to the patient's primary symptoms using patient-specific models and PSAs. PMID:20974269

  5. Value and limitations of segmental analysis of stress thallium myocardial imaging for localization of coronary artery disease

    SciTech Connect

    Rigo, P.; Bailey, I.K.; Griffith, L.S.C.; Pitt, B.; Borow, R.D.; Wagner, H.N.; Becker, L.C.

    1980-05-01

    This study was done to determine the value of thallium-201 myocardial scintigraphic imaging (MSI) for identifying disease in the individual coronary arteries. Segmental analysis of rest and stress MSI was performed in 133 patients with arteriographically proved coronary artery disease (CAD). Certain scintigraphic segments were highly specific (97 to 100%) for the three major coronary arteries: anterior wall and septum for the left anterior descending (LAD) coronary artery; the inferior wall for the right coronary artery (RCA); and the proximal lateral wall for the circumflex (LCX) artery. Perfusion defects located in the anterolateral wall in the anterior view were highly specific for proximal disease in the LAD involving the major diagonal branches, but this was not true for septal defects. The apical segments were not specific for any of the three major vessels. Although MSI was abnormal in 89% of these patients with CAD, it was less sensitive for identifying individual vessel disease: 63% for LAD, 50% for RCA, and 21% for LCX disease (narrowings ≥50%). Sensitivity increased with the severity of stenosis, but even for 100% occlusions was only 87% for LAD, 58% for RCA and 38% for LCX. Sensitivity diminished as the number of vessels involved increased: with single-vessel disease, 80% of LAD, 54% of RCA and 33% of LCX lesions were detected, but in patients with triple-vessel disease, only 50% of LAD, 50% of RCA and 16% of LCX lesions were identified. Thus, although segmental analysis of MSI can identify disease in the individual coronary arteries with high specificity, only moderate sensitivity is achieved, reflecting the tendency of MSI to identify only the most severely ischemic area among several that may be present in a heart. Perfusion scintigrams display relative distributions rather than absolute values for myocardial blood flow.
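    For reference, the sensitivity and specificity percentages quoted above follow the standard definitions; a minimal sketch with hypothetical counts (illustrative numbers, not the study's raw tallies):

```python
# Sketch: per-vessel sensitivity = TP / (TP + FN), specificity = TN / (TN + FP).

def sensitivity(tp, fn):
    """Fraction of truly diseased vessels flagged by the test."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of disease-free vessels correctly called negative."""
    return tn / (tn + fp)

# Hypothetical LAD tallies: 63 of 100 diseased vessels flagged,
# 97 of 100 disease-free vessels correctly negative.
sens = sensitivity(tp=63, fn=37)  # 0.63
spec = specificity(tn=97, fp=3)   # 0.97
```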

  6. Yucca Mountain transportation routes: Preliminary characterization and risk analysis; Volume 2, Figures [and] Volume 3, Technical Appendices

    SciTech Connect

    Souleyrette, R.R. II; Sathisan, S.K.; di Bartolo, R.

    1991-05-31

    This report presents appendices related to the preliminary assessment and risk analysis for high-level radioactive waste transportation routes to the proposed Yucca Mountain Project repository. Information includes data on population density, traffic volume, ecologically sensitive areas, and accident history.

  7. Volume component analysis for classification of LiDAR data

    NASA Astrophysics Data System (ADS)

    Varney, Nina M.; Asari, Vijayan K.

    2015-03-01

    One of the most difficult challenges of working with LiDAR data is the large amount of data points that are produced. Analyzing these large data sets is an extremely time-consuming process. For this reason, automatic perception of LiDAR scenes is a growing area of research. Currently, most LiDAR feature extraction relies on geometrical features specific to the point cloud of interest. These geometrical features are scene-specific, and often rely on the scale and orientation of the object for classification. This paper proposes a robust method for reduced dimensionality feature extraction of 3D objects using a volume component analysis (VCA) approach. This VCA approach is based on principal component analysis (PCA). PCA is a method of reduced feature extraction that computes a covariance matrix from the original input vector. The eigenvectors corresponding to the largest eigenvalues of the covariance matrix are used to describe an image. Block-based PCA is an adapted method for feature extraction in facial images because PCA, when performed in local areas of the image, can extract more significant features than can be extracted when the entire image is considered. The image space is split into several of these blocks, and PCA is computed individually for each block. This VCA proposes that a LiDAR point cloud can be represented as a series of voxels whose values correspond to the point density within that relative location. From this voxelized space, block-based PCA is used to analyze sections of the space where the sections, when combined, will represent features of the entire 3-D object. These features are then used as the input to a support vector machine which is trained to identify four classes of objects, vegetation, vehicles, buildings and barriers, with an overall accuracy of 93.8%.
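    The voxelization step underlying the VCA representation can be sketched as follows. The grid layout and toy points are assumptions for illustration; the block-based PCA and SVM stages are omitted:

```python
# Sketch: bin a point cloud into a coarse grid whose voxel values are point
# counts (density); block-based PCA would then operate on blocks of this grid.

def voxelize(points, origin, voxel_size, dims):
    """Count points per voxel; dims = (nx, ny, nz)."""
    nx, ny, nz = dims
    grid = [[[0] * nz for _ in range(ny)] for _ in range(nx)]
    for x, y, z in points:
        i = int((x - origin[0]) // voxel_size)
        j = int((y - origin[1]) // voxel_size)
        k = int((z - origin[2]) // voxel_size)
        if 0 <= i < nx and 0 <= j < ny and 0 <= k < nz:  # drop out-of-bounds points
            grid[i][j][k] += 1
    return grid

# Hypothetical points: three fall in voxel (0,0,0), one in voxel (1,0,0).
pts = [(0.1, 0.2, 0.3), (0.4, 0.1, 0.2), (0.2, 0.9, 0.5), (1.3, 0.2, 0.1)]
grid = voxelize(pts, origin=(0, 0, 0), voxel_size=1.0, dims=(2, 2, 2))
```

Because the grid stores density rather than raw coordinates, the downstream features depend far less on the exact sampling pattern of the scanner.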

  8. A Genetic Analysis of Brain Volumes and IQ in Children

    ERIC Educational Resources Information Center

    van Leeuwen, Marieke; Peper, Jiska S.; van den Berg, Stephanie M.; Brouwer, Rachel M.; Hulshoff Pol, Hilleke E.; Kahn, Rene S.; Boomsma, Dorret I.

    2009-01-01

    In a population-based sample of 112 nine-year old twin pairs, we investigated the association among total brain volume, gray matter and white matter volume, intelligence as assessed by the Raven IQ test, verbal comprehension, perceptual organization and perceptual speed as assessed by the Wechsler Intelligence Scale for Children-III. Phenotypic…

  10. EPA RREL'S MOBILE VOLUME REDUCTION UNIT -- APPLICATIONS ANALYSIS REPORT

    EPA Science Inventory

    The volume reduction unit (VRU) is a pilot-scale, mobile soil washing system designed to remove organic contaminants from the soil through particle size separation and solubilization. The VRU removes contaminants by suspending them in a wash solution and by reducing the volume of...

  11. Style, content and format guide for writing safety analysis documents. Volume 1, Safety analysis reports for DOE nuclear facilities

    SciTech Connect

    Not Available

    1994-06-01

    The purpose of Volume 1 of this 4-volume style guide is to furnish guidelines on writing and publishing Safety Analysis Reports (SARs) for DOE nuclear facilities at Sandia National Laboratories. The scope of Volume 1 encompasses not only the general guidelines for writing and publishing, but also the prescribed topics/appendices contents along with examples from typical SARs for DOE nuclear facilities.

  12. Design and Analysis of Modules for Segmented X-Ray Optics

    NASA Technical Reports Server (NTRS)

    McClelland, Ryan S.; Biskach, Michael P.; Chan, Kai-Wing; Saha, Timo T.; Zhang, William W.

    2012-01-01

    Future X-ray astronomy missions demand thin, light, and closely packed optics which lend themselves to segmentation of the annular mirrors and, in turn, a modular approach to the mirror design. The modular approach to X-ray Flight Mirror Assembly (FMA) design allows excellent scalability of the mirror technology to support a variety of mission sizes and science objectives. This paper describes FMA designs using slumped glass mirror segments for several X-ray astrophysics missions studied by NASA and explores the driving requirements and subsequent verification tests necessary to qualify a slumped glass mirror module for space-flight. A rigorous testing program is outlined allowing Technical Development Modules to reach technical readiness for mission implementation while reducing mission cost and schedule risk.

  13. Comparative sequence analysis of rotavirus genomic segment 6--the gene specifying viral subgroups 1 and 2.

    PubMed Central

    Both, G W; Siegman, L J; Bellamy, A R; Ikegami, N; Shatkin, A J; Furuichi, Y

    1984-01-01

    Cloned DNA copies of rotavirus genomic segment 6 from simian 11 (subgroup 1) and human strain Wa (subgroup 2) rotaviruses have been used to determine the nucleotide sequences of the gene that determines viral subgroup specificity. Both genomic segments are 1,356 nucleotides in length and possess 5'- and 3'-terminal untranslated regions of 23 and 142 nucleotides, respectively. The inferred amino acid sequence reveals VP6 to be a polypeptide of 397 amino acids in which more than 90% of the amino acid sequence is conserved between the two viruses. There are 34 amino acid changes between the subgroup 1 and 2 polypeptides, most clustered in three regions of the molecule at residues 39 through 62, 80 through 122, and 281 through 315. PMID:6328048

  14. Analysis of a Segmented Annular Coplanar Capacitive Tilt Sensor with Increased Sensitivity.

    PubMed

    Guo, Jiahao; Hu, Pengcheng; Tan, Jiubin

    2016-01-01

    An investigation of a segmented annular coplanar capacitor is presented. We focus on its theoretical model, and a mathematical expression of the capacitance value is derived by solving a Laplace equation with Hankel transform. The finite element method is employed to verify the analytical result. Different control parameters are discussed, and each contribution to the capacitance value of the capacitor is obtained. On this basis, we analyze and optimize the structure parameters of a segmented coplanar capacitive tilt sensor, and three models with different positions of the electrode gap are fabricated and tested. The experimental result shows that the model (whose electrode-gap position is 10 mm from the electrode center) realizes a high sensitivity: 0.129 pF/° with a non-linearity of <0.4% FS (full scale of ±40°). This finding offers plenty of opportunities for various measurement requirements in addition to achieving an optimized structure in practical design. PMID:26805844

  16. Shape-Constrained Segmentation Approach for Arctic Multiyear Sea Ice Floe Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Brucker, Ludovic; Ivanoff, Alvaro; Tilton, James C.

    2013-01-01

    The melting of sea ice is correlated with increases in sea surface temperature and associated climatic changes. Therefore, it is important to investigate how rapidly sea ice floes melt. For this purpose, a new TempoSeg method for multitemporal segmentation of multiyear ice floes is proposed. The microwave radiometer is used to track the position of an ice floe. Then, a time series of MODIS images is created with the ice floe in the image center. The TempoSeg method is performed to segment these images into two regions: Floe and Background. First, morphological feature extraction is applied. Then, the central image pixel is marked as Floe, and shape-constrained best-merge region growing is performed. The resulting two-region map is post-filtered by applying morphological operators. We have successfully tested our method on a set of MODIS images and estimated the area of a sea ice floe as a function of time.
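    The seeded growth from the central pixel can be illustrated with a bare-bones region grower. This sketch uses only a flat intensity tolerance; the paper's shape constraints, best-merge criterion, and morphological filtering are omitted:

```python
# Sketch: seeded region growing on a 2-D grid. Starting from the seed pixel,
# 4-connected neighbors are merged while their values stay close to the seed,
# yielding a Floe/Background partition.

def region_grow(img, seed, tol):
    rows, cols = len(img), len(img[0])
    seed_val = img[seed[0]][seed[1]]
    region, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if abs(img[r][c] - seed_val) <= tol:
            region.add((r, c))
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region

# Hypothetical brightness grid: a bright floe (9s) on dark water (1s).
img = [[1, 1, 1, 1],
       [1, 9, 9, 1],
       [1, 9, 9, 1],
       [1, 1, 1, 1]]
floe = region_grow(img, seed=(1, 1), tol=2)
```

The floe area as a function of time then falls out as `len(floe)` (times the pixel area) per image in the series.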

  17. Pulse shape analysis for segmented germanium detectors implemented in graphics processing units

    NASA Astrophysics Data System (ADS)

    Calore, Enrico; Bazzacco, Dino; Recchia, Francesco

    2013-08-01

    Position sensitive highly segmented germanium detectors constitute the state-of-the-art of the technology employed for γ-spectroscopy studies. The operation of large spectrometers composed of tens to hundreds of such detectors demands enormous amounts of computing power for the digital treatment of the signals. The use of Graphics Processing Units (GPUs) has been evaluated as a cost-effective solution to meet such requirements. Different implementations and the hardware constraints limiting the performance of the system are examined.

  18. Sensitivity field distributions for segmental bioelectrical impedance analysis based on real human anatomy

    NASA Astrophysics Data System (ADS)

    Danilov, A. A.; Kramarenko, V. K.; Nikolaev, D. V.; Rudnev, S. G.; Salamatova, V. Yu; Smirnov, A. V.; Vassilevski, Yu V.

    2013-04-01

    In this work, an adaptive unstructured tetrahedral mesh generation technology is applied for simulation of segmental bioimpedance measurements using high-resolution whole-body model of the Visible Human Project man. Sensitivity field distributions for a conventional tetrapolar, as well as eight- and ten-electrode measurement configurations are obtained. Based on the ten-electrode configuration, we suggest an algorithm for monitoring changes in the upper lung area.

  19. A New MRI-Based Pediatric Subcortical Segmentation Technique (PSST).

    PubMed

    Loh, Wai Yen; Connelly, Alan; Cheong, Jeanie L Y; Spittle, Alicia J; Chen, Jian; Adamson, Christopher; Ahmadzai, Zohra M; Fam, Lillian Gabra; Rees, Sandra; Lee, Katherine J; Doyle, Lex W; Anderson, Peter J; Thompson, Deanne K

    2016-01-01

    Volumetric and morphometric neuroimaging studies of the basal ganglia and thalamus in pediatric populations have utilized existing automated segmentation tools including FIRST (Functional Magnetic Resonance Imaging of the Brain's Integrated Registration and Segmentation Tool) and FreeSurfer. These segmentation packages, however, are mostly based on adult training data. Given that there are marked differences between the pediatric and adult brain, it is likely an age-specific segmentation technique will produce more accurate segmentation results. In this study, we describe a new automated segmentation technique for analysis of the 7-year-old basal ganglia and thalamus, called the Pediatric Subcortical Segmentation Technique (PSST). PSST consists of a probabilistic 7-year-old subcortical gray matter atlas (accumbens, caudate, pallidum, putamen and thalamus) combined with a customized segmentation pipeline using existing tools: ANTs (Advanced Normalization Tools) and SPM (Statistical Parametric Mapping). The segmentation accuracy of PSST in 7-year-old data was compared against FIRST and FreeSurfer, relative to manual segmentation as the ground truth, utilizing spatial overlap (Dice's coefficient), volume correlation (intraclass correlation coefficient, ICC) and limits of agreement (Bland-Altman plots). PSST achieved spatial overlap scores ≥90% and ICC scores ≥0.77 when compared with manual segmentation, for all structures except the accumbens. Compared with FIRST and FreeSurfer, PSST showed higher spatial overlap (FDR-corrected p < 0.05) and ICC scores, with less volumetric bias according to Bland-Altman plots. PSST is a customized segmentation pipeline with an age-specific atlas that accurately segments typical and atypical basal ganglia and thalami at age 7 years, and has the potential to be applied to other pediatric datasets. PMID:26381159
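    The spatial-overlap metric reported above, Dice's coefficient, is straightforward to compute; a minimal sketch on hypothetical voxel index sets (not the PSST pipeline itself):

```python
# Sketch: Dice's coefficient 2|A ∩ B| / (|A| + |B|) for two binary masks
# represented as sets of voxel indices; 1.0 means perfect overlap.

def dice(mask_a, mask_b):
    if not mask_a and not mask_b:
        return 1.0  # two empty masks agree trivially
    return 2.0 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

# Hypothetical voxel sets for an automated vs. a manual segmentation.
auto = {(1, 1), (1, 2), (2, 1), (2, 2)}
manual = {(1, 1), (1, 2), (2, 1), (3, 1)}
score = dice(auto, manual)  # 2*3 / (4+4) = 0.75
```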

  20. Segmental neurofibromatosis.

    PubMed

    Galhotra, Virat; Sheikh, Soheyl; Jindal, Sanjeev; Singla, Anshu

    2014-07-01

    Segmental neurofibromatosis is a rare disorder, characterized by neurofibromas or café-au-lait macules limited to one region of the body. Its occurrence on the face is extremely rare, and only a few cases of segmental neurofibromatosis over the face have been described so far. We present a case of segmental neurofibromatosis involving the buccal mucosa, tongue, cheek, ear, and neck on the right side of the face. PMID:25565748

  1. Automatic nevi segmentation using adaptive mean shift filters and feature analysis

    NASA Astrophysics Data System (ADS)

    King, Michael A.; Lee, Tim K.; Atkins, M. Stella; McLean, David I.

    2004-05-01

    A novel automatic method of segmenting nevi is explained and analyzed in this paper. The first step in nevi segmentation is to iteratively apply an adaptive mean shift filter to form clusters in the image and to remove noise. The goal of this step is to remove differences in skin intensity and hairs from the image, while still preserving the shape of nevi present on the skin. Each iteration of the mean shift filter changes pixel values to be a weighted average of pixels in its neighborhood. Some new extensions to the mean shift filter are proposed to allow for better segmentation of nevi from the skin. The kernel, which describes how the pixels in the neighborhood are averaged, is adaptive; the shape of the kernel is a function of the local histogram. After initial clustering, a simple merging of clusters is done. Finally, clusters that are local minima are found and analyzed to determine which clusters are nevi. When this algorithm was compared to an assessment by an expert dermatologist, it showed a sensitivity rate and diagnostic accuracy of over 95% on the test set, for nevi larger than 1.5 mm.
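    The core mean-shift iteration can be sketched in one dimension. This is a flat-kernel, fixed-bandwidth toy on hypothetical intensities; the paper's filter is adaptive, with a histogram-dependent kernel shape:

```python
# Sketch: 1-D mean shift. Each value is repeatedly replaced by the mean of
# its neighbors within a window, so values drift to local cluster modes.

def mean_shift_1d(values, bandwidth, iters=30):
    shifted = list(values)
    for _ in range(iters):
        new = []
        for v in shifted:
            neighbors = [w for w in shifted if abs(w - v) <= bandwidth]
            new.append(sum(neighbors) / len(neighbors))  # self is always included
        shifted = new
    return shifted

# Hypothetical intensities: skin background near 100, a darker nevus near 40.
intens = [98, 101, 100, 42, 39, 99, 41]
modes = mean_shift_1d(intens, bandwidth=10)
```

After a few iterations every pixel carries its cluster mode, which is exactly the flattening of skin-tone variation that makes the subsequent cluster merge simple.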

  2. Challenges in the segmentation and analysis of X-ray Micro-CT image data

    NASA Astrophysics Data System (ADS)

    Larsen, J. D.; Schaap, M. G.; Tuller, M.; Kulkarni, R.; Guber, A.

    2014-12-01

    Pore-scale modeling of fluid flow is becoming increasingly popular among scientific disciplines. With increased computational power and technological advancements, it is now possible to create realistic models of fluid flow through highly complex porous media using a number of fluid dynamic techniques. One such technique that has gained popularity is the lattice Boltzmann method, for its relative ease of programming and its ability to capture and represent complex geometries with simple boundary conditions. In this study, lattice Boltzmann fluid models are used on macro-porous silt loam soil imagery that was obtained using an industrial CT scanner. The soil imagery was segmented with six separate automated segmentation standards to reduce operator bias and provide distinction between phases. The permeability of the reconstructed samples was calculated, with Darcy's law, from lattice Boltzmann simulations of fluid flow in the samples. We attempt to validate the simulated permeability from the differing segmentation algorithms against experimental findings. Limitations arise with X-ray micro-CT image data: polychromatic X-ray CT has the potential to produce low image contrast and image artifacts. In this case, we find that the data cannot be segmented and modeled in a realistic and unbiased fashion.
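    Once a segmented pore mask and a simulated mean flux are available, the Darcy's-law step reduces to a one-line rearrangement. A minimal sketch (function names are assumptions; in the study the flux comes from a lattice Boltzmann solver):

```python
def porosity(pore_mask):
    """Porosity of a segmented image: fraction of voxels labeled as pore."""
    total = sum(len(row) for row in pore_mask)
    pores = sum(1 for row in pore_mask for v in row if v)
    return pores / total

def darcy_permeability(mean_velocity, viscosity, pressure_gradient):
    """Darcy's law, q = -(k / mu) * dP/dx, rearranged for permeability k."""
    return viscosity * mean_velocity / abs(pressure_gradient)
```

    For example, a mean flux of 1e-4 m/s under a 1e3 Pa/m gradient in water (mu ≈ 1e-3 Pa·s) gives k = 1e-10 m².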

  3. Growth and morphological analysis of segmented AuAg alloy nanowires created by pulsed electrodeposition in ion-track etched membranes

    PubMed Central

    Burr, Loic; Trautmann, Christina; Toimil-Molares, Maria Eugenia

    2015-01-01

    Summary Background: Multicomponent heterostructure nanowires and nanogaps are of great interest for applications in sensorics. Pulsed electrodeposition in ion-track etched polymer templates is a suitable method to synthesise segmented nanowires with segments consisting of two different types of materials. For a well-controlled synthesis process, detailed analysis of the deposition parameters and the size-distribution of the segmented wires is crucial. Results: The fabrication of electrodeposited AuAg alloy nanowires and segmented Au-rich/Ag-rich/Au-rich nanowires with controlled composition and segment length in ion-track etched polymer templates was developed. Detailed analysis by cyclic voltammetry in ion-track membranes, energy-dispersive X-ray spectroscopy and scanning electron microscopy was performed to determine the dependency between the chosen potential and the segment composition. Additionally, we have dissolved the middle Ag-rich segments in order to create small nanogaps with controlled gap sizes. Annealing of the created structures allows us to influence their morphology. Conclusion: AuAg alloy nanowires, segmented wires and nanogaps with controlled composition and size can be synthesised by electrodeposition in membranes, and are ideal model systems for investigation of surface plasmons. PMID:26199830

  4. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods

    SciTech Connect

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich

    2012-03-15

    Purpose: Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. Methods: A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Results: Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average.
A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of user interaction and resulted in statistically not significantly different segmentation error indices (ANOVA test, significance level of 0.05). Conclusions: All three experts were able to produce liver segmentations with low error rates. User interaction time savings of up to 71% compared to a 2D refinement approach demonstrate the utility and potential of our approach. The system offers a range of different tools to manipulate segmentation results, and some users might benefit from a longer learning phase to develop efficient segmentation refinement strategies. The presented approach represents a generally applicable segmentation approach that can be applied to many medical image segmentation problems.
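    The energy a graph cuts segmentation minimizes (a per-voxel data term plus a smoothness penalty between neighbouring labels) can be illustrated on a 1D chain, where the same energy family is solvable exactly by dynamic programming. This toy sketch is not the authors' 3D max-flow implementation; all names and the squared-difference data cost are assumptions:

```python
def binary_chain_segment(intensities, mu_fg, mu_bg, smooth=1.0):
    """Exact minimizer, on a 1D chain, of an energy with a squared-difference
    data term and a Potts smoothness term -- the same energy family that
    graph cuts minimize on 2D/3D voxel grids."""
    n = len(intensities)

    def data(i, lab):
        mu = mu_fg if lab == 1 else mu_bg
        return (intensities[i] - mu) ** 2

    cost = [[data(0, 0), data(0, 1)]] + [[0.0, 0.0] for _ in range(n - 1)]
    back = [[0, 0] for _ in range(n)]
    for i in range(1, n):
        for lab in (0, 1):
            # best predecessor label, paying `smooth` at label changes
            cands = [cost[i - 1][prev] + (smooth if prev != lab else 0.0)
                     for prev in (0, 1)]
            best_prev = 0 if cands[0] <= cands[1] else 1
            cost[i][lab] = cands[best_prev] + data(i, lab)
            back[i][lab] = best_prev
    lab = 0 if cost[n - 1][0] <= cost[n - 1][1] else 1
    labels = [lab]
    for i in range(n - 1, 0, -1):
        lab = back[i][lab]
        labels.append(lab)
    return labels[::-1]
```

    On grids with loops the chain trick no longer applies, which is where max-flow/min-cut solvers come in.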

  5. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods

    PubMed Central

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich

    2012-01-01

    Purpose: Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. Methods: A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Results: Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average.
A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of user interaction and resulted in statistically not significantly different segmentation error indices (ANOVA test, significance level of 0.05). Conclusions: All three experts were able to produce liver segmentations with low error rates. User interaction time savings of up to 71% compared to a 2D refinement approach demonstrate the utility and potential of our approach. The system offers a range of different tools to manipulate segmentation results, and some users might benefit from a longer learning phase to develop efficient segmentation refinement strategies. The presented approach represents a generally applicable segmentation approach that can be applied to many medical image segmentation problems. PMID:22380370

  6. Segmentation of thin section images for grain size analysis using region competition and edge-weighted region merging

    NASA Astrophysics Data System (ADS)

    Jungmann, Matthias; Pape, Hansgeorg; Wißkirchen, Peter; Clauser, Christoph; Berlage, Thomas

    2014-11-01

    Microscopic thin section images are a major source of information on physical properties, crystallization processes, and the evolution of rocks. Extracting the boundaries of grains is of special interest for estimating the volumetric structure of sandstone. To deal with large datasets and to relieve the geologist from a manual analysis of images, automated methods are needed for the segmentation task. This paper evaluates the region competition framework, which also includes region merging. The procedure minimizes an energy functional based on the Minimum Description Length (MDL) principle. To overcome some known drawbacks of current algorithms, we present an extension of MDL-based region merging by integrating edge information between adjacent regions. In addition, we introduce a modified implementation of region competition for overcoming computational complexities when dealing with multiple competing regions. Commonly used methods are based on solving differential equations for describing the movement of boundaries, whereas our approach implements a simple updating scheme. Furthermore, we propose intensity features for reducing the amount of data. They are derived by comparing theoretical values obtained from a model function describing the intensity inside uniaxial crystals with measured data. Error, standard deviation, and phase shift between the model and intensity measurements preserve sufficient information for a proper segmentation. Additionally, identified objects are classified into quartz grains, anhydrite, and reaction fringes by these features. This grouping is, in turn, used to improve the segmentation process further. We illustrate the benefits of this approach on four samples of microscopic thin sections and quantify them by comparing the segmentation results with manually obtained ones.
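    An MDL merge decision of the kind described above compares the description length of two regions coded separately against coding them as one. A minimal sketch under a Gaussian coding model (the fixed model cost `lam` and all names are assumptions; the paper additionally weights this decision by edge information between the adjacent regions):

```python
import math

def mdl_merge_gain(region1, region2, lam=2.0):
    """Description-length gain from merging two regions under a Gaussian
    coding model; a positive gain means merging is cheaper overall."""
    def code_length(values):
        n = len(values)
        mean = sum(values) / n
        var = sum((v - mean) ** 2 for v in values) / n + 1e-9  # avoid log(0)
        return 0.5 * n * math.log(var) + lam   # data cost + fixed model cost
    merged = list(region1) + list(region2)
    return code_length(region1) + code_length(region2) - code_length(merged)
```

    Merging two statistically similar regions saves one model cost, while merging dissimilar regions inflates the pooled variance and the gain goes negative.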

  7. Multi-resolution Shape Analysis via Non-Euclidean Wavelets: Applications to Mesh Segmentation and Surface Alignment Problems.

    PubMed

    Kim, Won Hwa; Chung, Moo K; Singh, Vikas

    2013-01-01

    The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis to derive Non-Euclidean Wavelet-based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem. PMID:24390194
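    A common way to realize wavelets on data with non-uniform topology, in the spirit described above, is to define band-pass kernels on the spectrum of the graph Laplacian. The sketch below is a generic spectral-graph-wavelet construction, not the paper's algorithm; the kernel g(sλ) = sλ·e^(−sλ) and the dense eigendecomposition (impractical for large meshes) are illustrative assumptions:

```python
import numpy as np

def graph_wavelet_coeffs(adjacency, signal, scales=(1.0, 5.0)):
    """Spectral graph wavelet sketch: filter the signal's graph-Fourier
    coefficients with the band-pass kernel g(s*lam) = s*lam*exp(-s*lam)."""
    adjacency = np.asarray(adjacency, dtype=float)
    lap = np.diag(adjacency.sum(axis=1)) - adjacency   # combinatorial Laplacian
    lam, U = np.linalg.eigh(lap)                       # graph Fourier basis
    s_hat = U.T @ np.asarray(signal, dtype=float)
    return [U @ ((s * lam * np.exp(-s * lam)) * s_hat) for s in scales]
```

    Because g(0) = 0, a constant signal yields zero coefficients at every scale, mirroring the vanishing-moment property of classical wavelets.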

  8. Multi-resolution Shape Analysis via Non-Euclidean Wavelets: Applications to Mesh Segmentation and Surface Alignment Problems

    PubMed Central

    Kim, Won Hwa; Chung, Moo K.; Singh, Vikas

    2013-01-01

    The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis to derive Non-Euclidean Wavelet-based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem. PMID:24390194

  9. Active Segmentation

    PubMed Central

    Mishra, Ajay; Aloimonos, Yiannis

    2009-01-01

    The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene, which can either be an object or just a part of it. We define as a basic segmentation problem the task of segmenting the region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour (a connected set of boundary edge fragments in the edge map of the scene) around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring visual processing to the next level. Our approach is different from current approaches. While existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach. PMID:20686671
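    The core idea, growing the region that contains the fixation point until a closed boundary is reached, can be caricatured as a flood fill over a binary edge map. This is only a toy stand-in for the paper's algorithm, which finds an optimal closed depth boundary from probabilistic edge fragments; the function name and the 4-connected BFS are assumptions:

```python
from collections import deque

def segment_from_fixation(edge_map, fixation):
    """Grow the region containing the fixation point with a 4-connected BFS,
    stopping wherever the binary edge map is set."""
    h, w = len(edge_map), len(edge_map[0])
    start = tuple(fixation)
    seen = {start}
    queue = deque([start])
    region = set()
    while queue:
        y, x = queue.popleft()
        region.add((y, x))
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w
                    and (ny, nx) not in seen and not edge_map[ny][nx]):
                seen.add((ny, nx))
                queue.append((ny, nx))
    return region
```

    The segmentation is only as good as the closure of the boundary: any gap in the edge ring lets the fill leak out, which is why the paper works with an optimal enclosing contour rather than raw edges.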

  10. DEVELOPMENT AND APPLICATION OF A WATER SUPPLY COST ANALYSIS SYSTEM. VOLUME II

    EPA Science Inventory

    A cost analysis for system water supply utility management has been developed and implemented in Kenton County, Kentucky, Water District No. 1. This volume contains the program documentation for the cost analysis system.

  11. A Rapid and Efficient 2D/3D Nuclear Segmentation Method for Analysis of Early Mouse Embryo and Stem Cell Image Data

    PubMed Central

    Lou, Xinghua; Kang, Minjung; Xenopoulos, Panagiotis; Muñoz-Descalzo, Silvia; Hadjantonakis, Anna-Katerina

    2014-01-01

    Summary Segmentation is a fundamental problem that dominates the success of microscopic image analysis. In almost 25 years of cell detection software development, there is still no single piece of commercial software that works well in practice when applied to early mouse embryo or stem cell image data. To address this need, we developed MINS (modular interactive nuclear segmentation) as a MATLAB/C++-based segmentation tool tailored for counting cells and fluorescent intensity measurements of 2D and 3D image data. Our aim was to develop a tool that is accurate and efficient yet straightforward and user friendly. The MINS pipeline comprises three major cascaded modules: detection, segmentation, and cell position classification. An extensive evaluation of MINS on both 2D and 3D images, and comparison to related tools, reveals improvements in segmentation accuracy and usability. Thus, its accuracy and ease of use will allow MINS to be implemented for routine single-cell-level image analyses. PMID:24672759

  12. Functional analysis of centipede development supports roles for Wnt genes in posterior development and segment generation.

    PubMed

    Hayden, Luke; Schlosser, Gerhard; Arthur, Wallace

    2015-01-01

    The genes of the Wnt family play important and highly conserved roles in posterior growth and development in a wide range of animal taxa. Wnt genes also operate in arthropod segmentation, and there has been much recent debate regarding the relationship between arthropod and vertebrate segmentation mechanisms. Due to its phylogenetic position, body form, and possession of many (11) Wnt genes, the centipede Strigamia maritima is a useful system with which to examine these issues. This study takes a functional approach based on treatment with lithium chloride, which causes ubiquitous activation of canonical Wnt signalling. This is the first functional developmental study performed in any of the 15,000 species of the arthropod subphylum Myriapoda. The expression of all 11 Wnt genes in Strigamia was analyzed in relation to posterior development. Three of these genes, Wnt11, Wnt5, and WntA, were strongly expressed in the posterior region and, thus, may play important roles in posterior developmental processes. In support of this hypothesis, LiCl treatment of S. maritima embryos was observed to produce posterior developmental defects and perturbations in AbdB and Delta expression. The effects of LiCl differ depending on the developmental stage treated, with more severe effects elicited by treatment during germband formation than by treatment at later stages. These results support a role for Wnt signalling in conferring posterior identity in Strigamia. In addition, data from this study are consistent with the hypothesis of segmentation based on a "clock and wavefront" mechanism operating in this species. PMID:25627713

  13. Texture-based segmentation and analysis of emphysema depicted on CT images

    NASA Astrophysics Data System (ADS)

    Tan, Jun; Zheng, Bin; Wang, Xingwei; Lederman, Dror; Pu, Jiantao; Sciurba, Frank C.; Gur, David; Leader, J. Ken

    2011-03-01

    In this study we present a two-step texture-based method for segmentation of emphysema depicted on CT examinations. In step 1, a fractal-dimension-based texture feature extraction is used to initially detect base regions of emphysema; a threshold is applied to the texture result image to obtain initial base regions. In step 2, the base regions are evaluated pixel by pixel using a method that considers the variance change incurred by adding a pixel to the base, in an effort to refine the boundary of the base regions. Visual inspection revealed a reasonable segmentation of the emphysema regions. There was a strong correlation between lung function (FEV1%, FEV1/FVC, and DLCO%) and the fraction of emphysema computed using the texture-based method: -0.433, -0.629, and -0.527, respectively. The texture-based method produced more homogeneous emphysematous regions compared to simple thresholding, especially for large bullae, which can appear as speckled regions in the threshold approach. In the texture-based method, single isolated pixels may be considered as emphysema only if neighboring pixels meet certain criteria, which supports the idea that single isolated pixels may not be sufficient evidence that emphysema is present. One of the strengths of our texture-based approach to emphysema segmentation is that it goes beyond existing approaches that typically extract a single texture feature or groups of texture features and analyze the features individually. We focus on first identifying potential regions of emphysema and then refining the boundary of the detected regions based on texture patterns.
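    A fractal-dimension texture feature of the kind used in step 1 is commonly estimated by box counting. A minimal sketch (the power-of-two square-mask restriction and the function name are assumptions, not details from the paper):

```python
import numpy as np

def box_count_dimension(mask):
    """Box-counting estimate of the fractal dimension of a binary mask:
    count occupied boxes at dyadic box sizes, then fit the log-log slope."""
    n = mask.shape[0]            # assumes a square mask with power-of-two side
    sizes, counts = [], []
    size = n
    while size >= 1:
        count = 0
        for y in range(0, n, size):
            for x in range(0, n, size):
                if mask[y:y + size, x:x + size].any():
                    count += 1
        sizes.append(size)
        counts.append(count)
        size //= 2
    logs = np.log(np.array(sizes, dtype=float))
    logc = np.log(np.array(counts, dtype=float))
    coeffs = np.polyfit(logs, logc, 1)   # slope of log(count) vs log(size)
    return float(-coeffs[0])
```

    A filled patch estimates dimension 2, an isolated point 0; emphysematous texture falls in between, which is what makes the feature discriminative.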

  14. Texture analysis of automatic graph cuts segmentations for detection of lung cancer recurrence after stereotactic radiotherapy

    NASA Astrophysics Data System (ADS)

    Mattonen, Sarah A.; Palma, David A.; Haasbeek, Cornelis J. A.; Senan, Suresh; Ward, Aaron D.

    2015-03-01

    Stereotactic ablative radiotherapy (SABR) is a treatment for early-stage lung cancer with local control rates comparable to surgery. After SABR, benign radiation induced lung injury (RILI) results in tumour-mimicking changes on computed tomography (CT) imaging. Distinguishing recurrence from RILI is a critical clinical decision determining the need for potentially life-saving salvage therapies whose high risks in this population dictate their use only for true recurrences. Current approaches do not reliably detect recurrence within a year post-SABR. We measured the detection accuracy of texture features within automatically determined regions of interest, with the only operator input being the single line segment measuring tumour diameter, normally taken during the clinical workflow. Our leave-one-out cross validation on images taken 2-5 months post-SABR showed robustness of the entropy measure, with classification error of 26% and area under the receiver operating characteristic curve (AUC) of 0.77 using automatic segmentation; the results using manual segmentation were 24% and 0.75, respectively. AUCs for this feature increased to 0.82 and 0.93 at 8-14 months and 14-20 months post SABR, respectively, suggesting even better performance nearer to the date of clinical diagnosis of recurrence; thus this system could also be used to support and reinforce the physician's decision at that time. Based on our ongoing validation of this automatic approach on a larger sample, we aim to develop a computer-aided diagnosis system which will support the physician's decision to apply timely salvage therapies and prevent patients with RILI from undergoing invasive and risky procedures.
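    The entropy feature that proved robust here is, in its simplest form, the Shannon entropy of the intensity histogram inside the region of interest. A minimal sketch (the bin count and intensity range are assumed parameters, not values from the paper):

```python
import numpy as np

def region_entropy(values, bins=32, vrange=(0, 256)):
    """Shannon entropy (bits) of the intensity histogram within an ROI."""
    hist, _ = np.histogram(values, bins=bins, range=vrange)
    p = hist / hist.sum()
    p = p[p > 0]                       # 0*log(0) contributes nothing
    return float(-(p * np.log2(p)).sum())
```

    Homogeneous RILI-like regions give low entropy; heterogeneous, recurrence-like texture spreads mass across bins and raises it.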

  15. Segmentation and detection of breast cancer in mammograms combining wavelet analysis and genetic algorithm.

    PubMed

    Pereira, Danilo Cesar; Ramos, Rodrigo Pereira; do Nascimento, Marcelo Zanchetta

    2014-04-01

    In Brazil, the National Cancer Institute (INCA) reports more than 50,000 new cases of the disease, with a risk of 51 cases per 100,000 women. Radiographic images obtained with mammography equipment are one of the most frequently used techniques for helping in early diagnosis. Due to factors related to cost and professional experience, in the last two decades computer systems to support detection (Computer-Aided Detection - CADe) and diagnosis (Computer-Aided Diagnosis - CADx) have been developed to assist experts in the detection of abnormalities in their initial stages. Despite the large body of research on CADe and CADx systems, there is still a need for improved computerized methods. Nowadays, there is a growing concern with the sensitivity and reliability of abnormality diagnosis in both views of breast mammographic images, namely cranio-caudal (CC) and medio-lateral oblique (MLO). This paper presents a set of computational tools to aid in the segmentation and detection of masses in the CC and MLO views of mammograms. An artifact removal algorithm is first implemented, followed by an image denoising and gray-level enhancement method based on the wavelet transform and the Wiener filter. Finally, a method for detection and segmentation of masses using multiple thresholding, the wavelet transform and a genetic algorithm is employed on mammograms randomly selected from the Digital Database for Screening Mammography (DDSM). The developed computer method was quantitatively evaluated using the area overlap metric (AOM); the mean ± standard deviation AOM for the proposed method was 79.2 ± 8%. The experiments demonstrate that the proposed method has strong potential to be used as the basis for mammogram mass segmentation in CC and MLO views. Another important aspect is that the method overcomes the limitation of analyzing only the CC and MLO views. PMID:24513228
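    The wavelet-plus-thresholding flavour of denoising used in the enhancement stage can be illustrated with a single-level Haar transform and soft thresholding. This is a generic 1D sketch, not the paper's pipeline (which combines the wavelet transform with a Wiener filter on 2D images):

```python
import numpy as np

def haar_denoise(signal, thresh):
    """Single-level Haar wavelet soft-threshold denoising: keep the pairwise
    averages, shrink the pairwise details toward zero, then reconstruct."""
    x = np.asarray(signal, dtype=float)          # assumes even length
    avg = (x[0::2] + x[1::2]) / 2.0              # approximation coefficients
    det = (x[0::2] - x[1::2]) / 2.0              # detail coefficients
    det = np.sign(det) * np.maximum(np.abs(det) - thresh, 0.0)  # soft threshold
    out = np.empty_like(x)
    out[0::2] = avg + det
    out[1::2] = avg - det
    return out
```

    Small details (noise) are zeroed while large details (true edges, such as mass boundaries) are only slightly shrunk.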

  16. Analysis of the Command and Control Segment (CCS) attitude estimation algorithm

    NASA Technical Reports Server (NTRS)

    Stockwell, Catherine

    1993-01-01

    This paper categorizes the qualitative behavior of the Command and Control Segment (CCS) differential correction algorithm as applied to attitude estimation using simultaneous spin-axis sun angle and Earth chord length measurements. The categories of interest are the domains of convergence, divergence, and their boundaries. Three series of plots are discussed that show the dependence of the estimation algorithm on the vehicle radius, the sun/Earth angle, and the spacecraft attitude. Qualitative dynamics common to all three series are tabulated and discussed. Out-of-limits conditions for the estimation algorithm are identified and discussed.
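    A differential correction algorithm of the kind analyzed here is, at its core, an iterated linearized least-squares (Gauss-Newton) update, and its convergence or divergence depends on the starting estimate, which is exactly what the paper's domain maps characterize. The sketch below is a generic solver, not the CCS implementation; the residual/Jacobian interface and iteration limits are assumptions:

```python
import numpy as np

def differential_correction(residual_fn, jacobian_fn, x0, iters=20, tol=1e-10):
    """Generic Gauss-Newton differential correction: repeatedly solve the
    linearized least-squares problem J * dx = -r and update the estimate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = np.atleast_1d(residual_fn(x))
        J = np.atleast_2d(jacobian_fn(x))
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x
```

    Whether the iteration converges, and to which solution, depends on where `x0` falls relative to the convergence domain boundaries.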

  17. Segmentation of acute pyelonephritis area on kidney SPECT images using binary shape analysis

    NASA Astrophysics Data System (ADS)

    Wu, Chia-Hsiang; Sun, Yung-Nien; Chiu, Nan-Tsing

    1999-05-01

    Acute pyelonephritis is a serious disease in children that may result in irreversible renal scarring. The ability to localize the site of urinary tract infection and the extent of acute pyelonephritis has considerable clinical importance. In this paper, we segment the acute pyelonephritis area from kidney SPECT images. A two-step algorithm is proposed. First, the original images are translated into binary versions by automatic thresholding. Then the acute pyelonephritis areas are located by finding convex deficiencies in the obtained binary images. This work gives important diagnostic information for physicians and improves the quality of medical care for children with acute pyelonephritis.
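    Convex deficiencies are the regions by which a shape falls short of its convex hull. As a deliberately simplified, row-wise stand-in for the full 2D construction (the real analysis would use the true convex hull of the binary kidney mask), the sketch below flags unset pixels lying between the first and last set pixel of each row; all names are assumptions:

```python
def row_convex_deficiency(mask):
    """Row-wise convex deficiency: unset pixels between the first and last
    set pixel of each row of a binary mask (a simplified stand-in for the
    full convex-hull-based deficiency)."""
    deficiency = set()
    for y, row in enumerate(mask):
        xs = [x for x, v in enumerate(row) if v]
        if len(xs) < 2:
            continue
        for x in range(xs[0] + 1, xs[-1]):
            if not row[x]:
                deficiency.add((y, x))
    return deficiency
```

    In the SPECT setting, such "bites" taken out of the otherwise convex kidney silhouette are the candidate pyelonephritis areas.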

  18. Oil-spill risk analysis: Cook inlet outer continental shelf lease sale 149. Volume 2: Conditional risk contour maps of seasonal conditional probabilities. Final report

    SciTech Connect

    Johnson, W.R.; Marshall, C.F.; Anderson, C.M.; Lear, E.M.

    1994-08-01

    The Federal Government has proposed to offer Outer Continental Shelf (OCS) lands in Cook Inlet for oil and gas leasing. Because oil spills may occur from activities associated with offshore oil production, the Minerals Management Service conducts a formal risk assessment. In evaluating the significance of accidental oil spills, it is important to remember that the occurrence of such spills is fundamentally probabilistic. The effects of oil spills that could occur during oil and gas production must be considered. This report summarizes results of an oil-spill risk analysis conducted for the proposed Cook Inlet OCS Lease Sale 149. The objective of this analysis was to estimate relative risks associated with oil and gas production for the proposed lease sale. To aid the analysis, conditional risk contour maps of seasonal conditional probabilities of spill contact were generated for each environmental resource or land segment in the study area. This aspect is discussed in this volume of the two volume report.

  19. A computer program for comprehensive ST-segment depression/heart rate analysis of the exercise ECG test.

    PubMed

    Lehtinen, R; Vänttinen, H; Sievänen, H; Malmivuo, J

    1996-06-01

    The ST-segment depression/heart rate (ST/HR) analysis has been found to improve the diagnostic accuracy of the exercise ECG test in detecting myocardial ischemia. Recently, three different continuous diagnostic variables based on the ST/HR analysis have been introduced: the ST/HR slope, the ST/HR index and the ST/HR hysteresis. The latter utilises both the exercise and recovery phases of the exercise ECG test, whereas the two former are based on the exercise phase only. This article presents a computer program which not only calculates the above three diagnostic variables but also plots full diagrams of ST-segment depression against heart rate during both the exercise and recovery phases for each ECG lead from given ST/HR data. The program can be used in exercise ECG diagnosis in daily clinical practice, provided that the ST/HR data from the ECG measurement system can be linked to the program. At present, the main purpose of the program is to provide clinical and medical researchers with a practical tool for comprehensive clinical evaluation and development of the ST/HR analysis. PMID:8835841
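    The three variables differ mainly in which phases they use and how they relate ST depression to heart rate. A minimal sketch of two of them, with the hysteresis simplified to a mean recovery-minus-exercise difference at matched heart rates (the published definitions are more careful; all names, the interpolation grid, and the sign convention here are assumptions):

```python
import numpy as np

def st_hr_index(heart_rate, st_depression):
    """ST/HR index: net ST depression divided by net heart-rate change
    over the exercise phase."""
    return ((st_depression[-1] - st_depression[0])
            / (heart_rate[-1] - heart_rate[0]))

def st_hr_hysteresis(ex_hr, ex_st, rec_hr, rec_st):
    """Simplified hysteresis: mean (recovery - exercise) ST depression,
    compared at matched heart rates on a common grid."""
    lo = max(min(ex_hr), min(rec_hr))
    hi = min(max(ex_hr), max(rec_hr))
    grid = np.linspace(lo, hi, 20)
    ex = np.interp(grid, ex_hr, ex_st)                 # HR ascends in exercise
    rec = np.interp(grid, rec_hr[::-1], rec_st[::-1])  # HR descends in recovery
    return float(np.mean(rec - ex))
```

    The index needs only endpoint values, whereas the hysteresis compares the two phase curves point by point, which is what lets it exploit the recovery phase.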

  20. Analysis of flexible aircraft longitudinal dynamics and handling qualities. Volume 1: Analysis methods

    NASA Technical Reports Server (NTRS)

    Waszak, M. R.; Schmidt, D. S.

    1985-01-01

    As aircraft become larger and lighter due to design requirements for increased payload and improved fuel efficiency, they will also become more flexible. For highly flexible vehicles, the handling qualities may not be accurately predicted by conventional methods. This study applies two analysis methods to a family of flexible aircraft in order to investigate how and when structural (especially dynamic aeroelastic) effects affect the dynamic characteristics of aircraft. The first is an open-loop modal analysis technique that considers the effects of modal residue magnitudes on vehicle handling qualities. The second is a pilot-in-the-loop analysis procedure that considers several closed-loop system characteristics. Volume 1 consists of the development and application of the two analysis methods described above.

  1. Brain MRI Segmentation with Multiphase Minimal Partitioning: A Comparative Study

    PubMed Central

    Angelini, Elsa D.; Song, Ting; Mensh, Brett D.; Laine, Andrew F.

    2007-01-01

    This paper presents the implementation and quantitative evaluation of a multiphase three-dimensional deformable model in a level set framework for automated segmentation of brain MRIs. The segmentation algorithm performs an optimal partitioning of three-dimensional data based on homogeneity measures that naturally evolves to the extraction of different tissue types in the brain. Random seed initialization was used to minimize the sensitivity of the method to initial conditions while avoiding the need for a priori information. This random initialization ensures robustness of the method with respect to the initialization and the minimization setup. Postprocessing corrections with morphological operators were applied to refine the details of the global segmentation method. A clinical study was performed on a database of 10 adult brain MRI volumes to compare the level set segmentation to three other methods: “idealized” intensity thresholding, fuzzy connectedness, and an expectation maximization classification using hidden Markov random fields. Quantitative evaluation of segmentation accuracy was performed with comparison to manual segmentation computing true positive and false positive volume fractions. A statistical comparison of the segmentation methods was performed through a Wilcoxon analysis of these error rates and results showed very high quality and stability of the multiphase three-dimensional level set method. PMID:18253474
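    The accuracy evaluation used here (true positive and false positive volume fractions against manual segmentation) is straightforward to compute from binary masks; a Dice coefficient is included for comparison. A minimal sketch (function name and dictionary keys are assumptions):

```python
import numpy as np

def overlap_metrics(segmentation, reference):
    """True/false positive volume fractions and Dice overlap of an automated
    segmentation against a manual reference, both given as binary masks."""
    seg = np.asarray(segmentation, dtype=bool)
    ref = np.asarray(reference, dtype=bool)
    tp = np.logical_and(seg, ref).sum()
    fp = np.logical_and(seg, ~ref).sum()
    return {"tpvf": tp / ref.sum(),                  # fraction of truth found
            "fpvf": fp / (~ref).sum(),               # fraction of background
            "dice": 2.0 * tp / (seg.sum() + ref.sum())}  # falsely labeled
```

    Reporting both fractions matters: a method can inflate its true positive fraction simply by over-segmenting, which the false positive fraction exposes.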

  2. Effects of immersion on visual analysis of volume data.

    PubMed

    Laha, Bireswar; Sensharma, Kriti; Schiffbauer, James D; Bowman, Doug A

    2012-04-01

    Volume visualization has been widely used for decades for analyzing datasets ranging from 3D medical images to seismic data to paleontological data. Many have proposed using immersive virtual reality (VR) systems to view volume visualizations, and there is anecdotal evidence of the benefits of VR for this purpose. However, there has been very little empirical research exploring the effects of higher levels of immersion for volume visualization, and it is not known how various components of immersion influence the effectiveness of visualization in VR. We conducted a controlled experiment in which we studied the independent and combined effects of three components of immersion (head tracking, field of regard, and stereoscopic rendering) on the effectiveness of visualization tasks with two x-ray microscopic computed tomography datasets. We report significant benefits of analyzing volume data in an environment involving those components of immersion. We find that the benefits do not necessarily require all three components simultaneously, and that the components have variable influence on different task categories. The results of our study improve our understanding of the effects of immersion on perceived and actual task performance, and provide guidance on the choice of display systems to designers seeking to maximize the effectiveness of volume visualization applications. PMID:22402687

  3. Phylogenetic analysis, genomic diversity and classification of M class gene segments of turkey reoviruses.

    PubMed

    Mor, Sunil K; Marthaler, Douglas; Verma, Harsha; Sharafeldin, Tamer A; Jindal, Naresh; Porter, Robert E; Goyal, Sagar M

    2015-03-23

    From 2011 to 2014, 13 turkey arthritis reoviruses (TARVs) were isolated from cases of swollen hock joints in 2-18-week-old turkeys. In addition, two isolates from similar cases of turkey arthritis were received from another laboratory. Eight turkey enteric reoviruses (TERVs) isolated from fecal samples of turkeys were also used for comparison. The aims of this study were to characterize turkey reovirus (TRV) based on complete M class genome segments and to determine genetic diversity within TARVs in comparison to TERVs and chicken reoviruses (CRVs). Nucleotide (nt) cut off values of 84%, 83% and 85% for the M1, M2 and M3 gene segments were proposed and used for genotype classification, generating 5, 7, and 3 genotypes, respectively. Using these nt cut off values, we propose M class genotype constellations (GCs) for avian reoviruses. Of the seven GCs, GC1 and GC3 were shared between the TARVs and TERVs, indicating possible reassortment between turkey and chicken reoviruses. The TARVs and TERVs were divided into three GCs, and GC2 was unique to TARVs and TERVs. The proposed new GC approach should be useful in identifying reassortant viruses, which may ultimately be used in the design of a universal vaccine against both chicken and turkey reoviruses. PMID:25655814
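    Genotype assignment from a pairwise nucleotide-identity cut off, as described above for the M-class segments, amounts to single-linkage grouping; a minimal sketch with invented identity values (not the authors' pipeline):

```python
def assign_genotypes(identity, cutoff):
    """Single-linkage grouping of segment sequences into genotypes: two
    sequences share a genotype when their pairwise nucleotide identity
    meets the cutoff (e.g. 0.84 for M1). `identity` is a symmetric matrix
    of pairwise identities; returns one genotype label per sequence."""
    n = len(identity)
    labels = [-1] * n
    g = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        stack, labels[i] = [i], g
        while stack:                       # flood-fill the linked group
            j = stack.pop()
            for k in range(n):
                if labels[k] == -1 and identity[j][k] >= cutoff:
                    labels[k] = g
                    stack.append(k)
        g += 1
    return labels

# Three sequences: the first two are 90% identical, the third is distant.
ident = [[1.00, 0.90, 0.60],
         [0.90, 1.00, 0.62],
         [0.60, 0.62, 1.00]]
print(assign_genotypes(ident, 0.84))  # [0, 0, 1]
```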

  4. Fetal brain volumetry through MRI volumetric reconstruction and segmentation

    PubMed Central

    Estroff, Judy A.; Barnewolt, Carol E.; Connolly, Susan A.; Warfield, Simon K.

    2013-01-01

    Purpose Fetal MRI volumetry is a useful technique but it is limited by a dependency upon motion-free scans, tedious manual segmentation, and spatial inaccuracy due to thick-slice scans. An image processing pipeline that addresses these limitations was developed and tested. Materials and methods The principal sequences acquired in fetal MRI clinical practice are multiple orthogonal single-shot fast spin echo scans. State-of-the-art image processing techniques were used for inter-slice motion correction and super-resolution reconstruction of high-resolution volumetric images from these scans. The reconstructed volume images were processed with intensity non-uniformity correction and the fetal brain extracted by using supervised automated segmentation. Results Reconstruction, segmentation and volumetry of the fetal brains were performed for a cohort of twenty-five clinically acquired fetal MRI scans. Performance metrics for volume reconstruction, segmentation and volumetry were determined by comparing to manual tracings in five randomly chosen cases. Finally, analysis of the fetal brain and parenchymal volumes was performed based on the gestational age of the fetuses. Conclusion The image processing pipeline developed in this study enables volume rendering and accurate fetal brain volumetry by addressing the limitations of current volumetry techniques, which include dependency on motion-free scans, manual segmentation, and inaccurate thick-slice interpolation. PMID:20625848

  5. Analysis on volume grating induced by femtosecond laser pulses.

    PubMed

    Zhou, Keya; Guo, Zhongyi; Ding, Weiqiang; Liu, Shutian

    2010-06-21

    We report on a kind of self-assembled volume grating in silica glass induced by tightly focused femtosecond laser pulses. The formation of the volume grating is attributed to multiple microexplosions in the transparent material induced by the femtosecond pulses. The first-order diffraction efficiency depends strongly on the pulse energy and the laser scanning velocity, and reaches as high as 30%. The diffraction pattern of the fabricated grating is numerically simulated and analyzed by a two-dimensional FDTD method and the Fresnel diffraction integral. The numerical results support our prediction of the formation mechanism of the volume grating and agree well with our experimental results. PMID:20588497

  6. Determination of fiber volume in graphite/epoxy materials using computer image analysis

    NASA Technical Reports Server (NTRS)

    Viens, Michael J.

    1990-01-01

    The fiber volume of graphite/epoxy specimens was determined by analyzing optical images of cross sectioned specimens using image analysis software. Test specimens were mounted and polished using standard metallographic techniques and examined at 1000 times magnification. Fiber volume determined using the optical imaging agreed well with values determined using the standard acid digestion technique. The results were found to agree within 5 percent over a fiber volume range of 45 to 70 percent. The error observed is believed to arise from fiber volume variations within the graphite/epoxy panels themselves. The determination of ply orientation using image analysis techniques is also addressed.
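    Once a cross-section image is thresholded into fiber and matrix pixels, the fiber volume fraction is simply the bright-pixel fraction; a toy sketch (the threshold and image values are invented, not the study's data):

```python
import numpy as np

def fiber_volume_fraction(gray, threshold):
    """Fraction of pixels at or above `threshold`, taken as fiber
    cross-sections in a polished graphite/epoxy micrograph."""
    img = np.asarray(gray)
    return float((img >= threshold).mean())

# Synthetic 4x4 "micrograph": bright pixels (>= 128) stand in for fibers.
img = np.array([[200,  40, 210,  50],
                [190, 220,  30, 205],
                [ 60, 215, 225,  45],
                [230,  35,  55, 240]])
print(fiber_volume_fraction(img, 128))  # 0.5625
```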

  7. Earth Observatory Satellite system definition study. Report 5: System design and specifications. Volume 4: Mission peculiar spacecraft segment and module specifications

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The specifications for the Earth Observatory Satellite (EOS) peculiar spacecraft segment and associated subsystems and modules are presented. The specifications considered include the following: (1) wideband communications subsystem module, (2) mission peculiar software, (3) hydrazine propulsion subsystem module, (4) solar array assembly, and (5) the scanning spectral radiometer.

  8. NOTE: Reducing the number of segments in unidirectional MLC segmentations

    NASA Astrophysics Data System (ADS)

    Mellado, X.; Cruz, S.; Artacho, J. M.; Canellas, M.

    2010-02-01

    In intensity-modulated radiation therapy (IMRT), fluence matrices obtained from a treatment planning system are usually delivered by a linear accelerator equipped with a multileaf collimator (MLC). A segmentation method is needed for decomposing these fluence matrices into segments suitable for the MLC, and the number of segments used is an important factor for treatment time. In this work, an algorithm for reduction of the number of segments (NS) is presented for unidirectional segmentations, where there is no backtracking of the MLC leaves. It uses a geometrical representation of the segmentation output for searching the key values in a fluence matrix that complicate its decomposition. The NS reduction is achieved by performing minor modifications in these values, under the condition of avoiding substantial modifications of the dose-volume histogram, and does not, on average, increase the total number of monitor units delivered. The proposed method was tested using two clinical cases planned with the PCRT 3D® treatment planning system.
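    For intuition, the classical unidirectional "sweep" decomposition of a single leaf-pair row into unit-weight segments can be sketched as follows (this illustrates unidirectional segmentation in general, not the authors' NS-reduction algorithm):

```python
def sweep_segments(profile):
    """Decompose a 1-D integer fluence profile into unit-weight
    [left, right) intervals using the unidirectional sweep rule:
    open a segment at each positive step of the profile, close one
    at each negative step. Leaves only ever move forward."""
    f = [0] + list(profile) + [0]          # pad so edges count as steps
    opens, closes = [], []
    for i in range(1, len(f)):
        diff = f[i] - f[i - 1]
        opens += [i - 1] * max(diff, 0)    # left-leaf positions
        closes += [i - 1] * max(-diff, 0)  # right-leaf positions
    return list(zip(opens, closes))

# Profile 1 2 1 decomposes into two unit segments, [0,2) and [1,3).
print(sweep_segments([1, 2, 1]))  # [(0, 2), (1, 3)]
```

The number of segments (and, for unit weights, the monitor units) equals the sum of the positive gradients of the profile, which is why smoothing a few key values can shorten delivery.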

  9. Organizational Communication: Abstracts, Analysis, and Overview. Volume 6.

    ERIC Educational Resources Information Center

    Greenbaum, Howard H.; Falcione, Raymond L.

    This annual volume of organizational communication abstracts presents over 1,100 abstracts of the literature on organizational communication occurring in 1979. An introductory chapter explains the classification systems, provides operational definitions of terms, and concedes the shortcomings of the research effort. An overview chapter comments…

  10. Fractal Analysis of Laplacian Pyramidal Filters Applied to Segmentation of Soil Images

    PubMed Central

    de Castro, J.; Méndez, A.; Tarquis, A. M.

    2014-01-01

    The laplacian pyramid is a well-known technique for image processing in which local operators of many scales, but identical shape, serve as the basis functions. The properties required of the pyramidal filter produce a family of filters, which is uniparametric in the classical case, when the length of the filter is 5. We pay attention to the gaussian and fractal behaviour of these basis functions (or filters), and we determine the gaussian and fractal ranges in the case of a single parameter a. These fractal filters lose less energy in every step of the laplacian pyramid, and we apply this property to obtain threshold values for segmenting soil images and then evaluate their porosity. We also evaluate our results by comparing them with the Otsu algorithm threshold values, and conclude that our algorithm produces reliable test results. PMID:25114957
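    The single-parameter length-5 kernel referred to above is conventionally written w(a) = (1/4 − a/2, 1/4, a, 1/4, 1/4 − a/2); a minimal 1-D sketch of one Laplacian-pyramid band built from it (illustrative only, not the paper's implementation):

```python
import numpy as np

def burt_kernel(a):
    """Classical 5-tap pyramid generating kernel, parameterized by the
    centre weight a (a = 0.4 gives the near-Gaussian case)."""
    return np.array([0.25 - a / 2, 0.25, a, 0.25, 0.25 - a / 2])

def laplacian_level(signal, a):
    """One Laplacian-pyramid band: the signal minus its low-pass copy."""
    w = burt_kernel(a)
    low = np.convolve(signal, w, mode="same")  # low-pass at this scale
    return signal - low

# A step edge produces a band-pass response concentrated at the edges.
x = np.array([0., 0., 1., 1., 1., 0., 0.])
band = laplacian_level(x, 0.4)
print(np.round(band, 3))
```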

  11. Intraspecific phylogeography of the gopher tortoise, Gopherus polyphemus: RFLP analysis of amplified mtDNA segments.

    PubMed

    Osentoski, M F; Lamb, T

    1995-12-01

    The slow rate of mtDNA evolution in turtles poses a limitation on the levels of intraspecific variation detectable by conventional restriction fragment surveys. We examined mtDNA variation in the gopher tortoise (Gopherus polyphemus) using an alternative restriction assay, one in which PCR-amplified segments of the mitochondrial genome were digested with tetranucleotide-site endonucleases. Restriction fragment polymorphisms representing four amplified regions were analysed to evaluate population genetic structure among 112 tortoises throughout the species' range. Thirty-six haplotypes were identified, and three major geographical assemblages (Eastern, Western, and Mid-Florida) were resolved by UPGMA and parsimony analyses. Eastern and Western assemblages abut near the Apalachicola drainage, whereas the Mid-Florida assemblage appears restricted to the Brooksville Ridge. The Eastern/Western assemblage boundary is remarkably congruent with phylogeographic profiles for eight additional species from the south-eastern U.S., representing both freshwater and terrestrial realms. PMID:8564009

  12. Fractal analysis of laplacian pyramidal filters applied to segmentation of soil images.

    PubMed

    de Castro, J; Ballesteros, F; Méndez, A; Tarquis, A M

    2014-01-01

    The laplacian pyramid is a well-known technique for image processing in which local operators of many scales, but identical shape, serve as the basis functions. The properties required of the pyramidal filter produce a family of filters, which is uniparametric in the classical case, when the length of the filter is 5. We pay attention to the gaussian and fractal behaviour of these basis functions (or filters), and we determine the gaussian and fractal ranges in the case of a single parameter a. These fractal filters lose less energy in every step of the laplacian pyramid, and we apply this property to obtain threshold values for segmenting soil images and then evaluate their porosity. We also evaluate our results by comparing them with the Otsu algorithm threshold values, and conclude that our algorithm produces reliable test results. PMID:25114957

  13. Uniparental disomy analysis in trios using genome-wide SNP array and whole-genome sequencing data imply segmental uniparental isodisomy in general populations.

    PubMed

    Sasaki, Kensaku; Mishima, Hiroyuki; Miura, Kiyonori; Yoshiura, Koh-Ichiro

    2013-01-10

    Whole chromosomal and segmental uniparental disomy (UPD) is one of the causes of imprinting disorders and other recessive disorders. Most investigations of UPD were performed only using cases with relevant phenotypic features and included few markers. However, the diagnosis of cases with segmental UPD requires a large number of molecular investigations. Currently, the accurate frequency of whole chromosomal and segmental UPD in normally developing embryos is not well understood. Here, we present whole chromosomal and segmental UPD analysis using single nucleotide polymorphism (SNP) microarray data of 173 mother-father-child trios (519 individuals) from six populations (including 170 HapMap trios). For two of these trios, we also investigated the possibility of shorter segmental UPD, arising as a consequence of homologous recombination repair (HR) of DNA double strand breaks (DSBs) during early development, using high-coverage whole-genome sequencing (WGS) data from the 1000 Genomes Project; such short segments could be overlooked by SNP microarray. We identified one obvious segmental paternal uniparental isodisomy (iUPD) (8.2 megabases) in one HapMap sample from the 173 trios using Genome-Wide Human SNP Array 6.0 (SNP6.0 array) data. However, we could not identify shorter segmental iUPD in the two trios using WGS data. Finally, we estimated the rate of segmental UPD to be one per 173 births (0.578%) based on the UPD screening of the 173 trios in general populations. Based on the autosomal chromosome pairs investigated, we estimate the rate of segmental UPD to be one per 3806 chromosome pairs (0.026%). These data imply the possibility of hidden segmental UPD in normal individuals. PMID:23111162
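    In trio data, segmental isodisomy shows up as runs of markers where the child is homozygous for an allele that one parent does not carry at all; a toy sketch of that screen (genotype encoding and data are invented, not the study's pipeline):

```python
def upd_informative(child, mother, father):
    """Markers where the child is homozygous for an allele one parent
    lacks entirely: Mendelian-inconsistent with that parent, consistent
    with two copies from the other, the signature of isodisomy.
    Genotypes are strings over alleles 'A'/'B', e.g. 'AA', 'AB', 'BB'."""
    hits = []
    for i, (c, m, f) in enumerate(zip(child, mother, father)):
        if c in ("AA", "BB"):
            allele = c[0]
            if allele not in m and allele in f:
                hits.append((i, "paternal"))   # both copies from father
            elif allele not in f and allele in m:
                hits.append((i, "maternal"))   # both copies from mother
    return hits

child  = ["AA", "AB", "BB", "AA"]
mother = ["BB", "AA", "AB", "AB"]
father = ["AB", "BB", "BB", "BB"]
print(upd_informative(child, mother, father))  # [(0, 'paternal'), (3, 'maternal')]
```

In practice, genotyping error produces isolated hits, so a real screen looks for contiguous runs of such markers rather than single loci.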

  14. A link-segment model of upright human posture for analysis of head-trunk coordination

    NASA Technical Reports Server (NTRS)

    Nicholas, S. C.; Doxey-Gasway, D. D.; Paloski, W. H.

    1998-01-01

    Sensory-motor control of upright human posture may be organized in a top-down fashion such that certain head-trunk coordination strategies are employed to optimize visual and/or vestibular sensory inputs. Previous quantitative models of the biomechanics of human posture control have examined the simple case of ankle sway strategy, in which an inverted pendulum model is used, and the somewhat more complicated case of hip sway strategy, in which multisegment, articulated models are used. While these models can be used to quantify the gross dynamics of posture control, they are not sufficiently detailed to analyze head-trunk coordination strategies that may be crucial to understanding its underlying mechanisms. In this paper, we present a biomechanical model of upright human posture that extends an existing four mass, sagittal plane, link-segment model to a five mass model including an independent head link. The new model was developed to analyze segmental body movements during dynamic posturography experiments in order to study head-trunk coordination strategies and their influence on sensory inputs to balance control. It was designed specifically to analyze data collected on the EquiTest (NeuroCom International, Clackamas, OR) computerized dynamic posturography system, where the task of maintaining postural equilibrium may be challenged under conditions in which the visual surround, support surface, or both are in motion. The performance of the model was tested by comparing its estimated ground reaction forces to those measured directly by support surface force transducers. We conclude that this model will be a valuable analytical tool in the search for mechanisms of balance control.
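    The ground-reaction-force check described above follows from summing Newton's second law over the segment masses: the support surface must carry each segment's weight plus the force needed to accelerate its centre of mass. A minimal sketch with invented segment masses (not the actual parameters of the five-mass model):

```python
G = 9.81  # gravitational acceleration, m/s^2

def vertical_grf(segments):
    """Vertical ground reaction force predicted by a link-segment model.
    segments: iterable of (mass_kg, vertical_com_accel_m_s2)."""
    return sum(m * (a + G) for m, a in segments)

# Five-segment stick figure (shanks, thighs, trunk, arms, head), at rest:
# with zero accelerations the GRF is just body weight.
static = [(7.0, 0.0), (14.0, 0.0), (34.0, 0.0), (7.0, 0.0), (5.0, 0.0)]
print(round(vertical_grf(static), 2))  # 657.27  (67 kg x 9.81 m/s^2)
```

Comparing this model-predicted force against the force-plate measurement is exactly the validation strategy the abstract describes.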

  15. Cargo Logistics Airlift Systems Study (CLASS). Volume 1: Analysis of current air cargo system

    NASA Technical Reports Server (NTRS)

    Burby, R. J.; Kuhlman, W. H.

    1978-01-01

    The material presented in this volume is classified into the following sections: (1) analysis of current routes; (2) air eligibility criteria; (3) current direct support infrastructure; (4) comparative mode analysis; (5) political and economic factors; and (6) future potential market areas. An effort was made to keep the observations and findings relating to the current systems as objective as possible in order not to bias the analysis of future air cargo operations reported in Volume 3 of the CLASS final report.

  16. Earth Observatory Satellite system definition study. Report 5: System design and specifications. Volume 3: General purpose spacecraft segment and module specifications

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The specifications for the Earth Observatory Satellite (EOS) general purpose spacecraft segment are presented. The satellite is designed to provide attitude stabilization, electrical power, and a communications data handling subsystem which can support various mission peculiar subsystems. The various specifications considered include the following: (1) structures subsystem, (2) thermal control subsystem, (3) communications and data handling subsystem module, (4) attitude control subsystem module, (5) power subsystem module, and (6) electrical integration subsystem.

  17. Industrial process heat data analysis and evaluation. Volume 1

    SciTech Connect

    Lewandowski, A; Gee, R; May, K

    1984-07-01

    The Solar Energy Research Institute (SERI) has modeled seven of the Department of Energy (DOE) sponsored solar Industrial Process Heat (IPH) field experiments and has generated thermal performance predictions for each project. Additionally, these performance predictions have been compared with actual performance measurements taken at the projects. Predictions were generated using SOLIPH, an hour-by-hour computer code with the capability for modeling many types of solar IPH components and system configurations. Comparisons of reported and predicted performance resulted in good agreement when the field test reliability and availability was high. Volume I contains the main body of the work: objective, model description, site configurations, model results, data comparisons, and summary. Volume II contains complete performance prediction results (tabular and graphic output) and computer program listings.

  18. Industrial process heat data analysis and evaluation. Volume 2

    SciTech Connect

    Lewandowski, A; Gee, R; May, K

    1984-07-01

    The Solar Energy Research Institute (SERI) has modeled seven of the Department of Energy (DOE) sponsored solar Industrial Process Heat (IPH) field experiments and has generated thermal performance predictions for each project. Additionally, these performance predictions have been compared with actual performance measurements taken at the projects. Predictions were generated using SOLIPH, an hour-by-hour computer code with the capability for modeling many types of solar IPH components and system configurations. Comparisons of reported and predicted performance resulted in good agreement when the field test reliability and availability was high. Volume I contains the main body of the work: objective, model description, site configurations, model results, data comparisons, and summary. Volume II contains complete performance prediction results (tabular and graphic output) and computer program listings.

  19. Partial volume correction and image analysis methods for intersubject comparison of FDG-PET studies

    NASA Astrophysics Data System (ADS)

    Yang, Jun

    2000-12-01

    Partial volume effect is an artifact mainly due to limited imaging sensor resolution. It biases the measured activity in small structures and around tissue boundaries. In brain FDG-PET studies, especially studies of Alzheimer's disease in which there is severe gray matter atrophy, accurate estimation of the cerebral metabolic rate of glucose is even more problematic due to the large partial volume effect. In this dissertation, we developed a framework enabling inter-subject comparison of partial volume corrected brain FDG-PET studies. The framework is composed of the following image processing steps: (1) MRI segmentation, (2) MR-PET registration, (3) MR-based PVE correction, (4) MR 3D inter-subject elastic mapping. Through simulation studies, we showed that the newly developed partial volume correction methods, either pixel based or ROI based, performed better than previous methods. By applying this framework to a real Alzheimer's disease study, we demonstrated that the partial volume corrected glucose rates vary significantly among the control, at-risk, and diseased patient groups, and that this framework is a promising tool for assisting early identification of Alzheimer's patients.
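    An ROI-style partial volume correction in its simplest two-compartment form treats each observed value as a tissue-fraction-weighted mix of gray- and white-matter activity and inverts it; a toy sketch (numbers invented, and far simpler than the dissertation's methods):

```python
def pvc_two_compartment(observed, f_gm, f_wm, c_wm):
    """Recover gray-matter activity from an observed voxel value that is a
    fractional mix of gray and white matter:
        observed = f_gm * c_gm + f_wm * c_wm
    The fractions f_gm, f_wm come from an MRI segmentation blurred to PET
    resolution; c_wm is an estimate of white-matter activity."""
    return (observed - f_wm * c_wm) / f_gm

# A voxel that is 60% GM / 40% WM; true GM activity 10, WM activity 4
# gives an observed value of 0.6*10 + 0.4*4 = 7.6.
print(round(pvc_two_compartment(7.6, 0.6, 0.4, 4.0), 6))  # 10.0
```

Without this correction, the same voxel would be reported at 7.6, understating gray-matter metabolism, which is the bias the atrophied-brain studies above must avoid.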

  20. Remote sensing data acquisition analysis and archival. Volume 2. Appendices

    SciTech Connect

    Stringer, W.J.; Dean, K.G.; Groves, J.E.

    1993-03-25

    The project specialized in the acquisition and dissemination of satellite imagery and its utilization for case-specific and statistical analyses of offshore environmental conditions, particularly those involving sea ice. The topics included: Kasegaluk Lagoon transport, the effect of winter storms on arctic ice, the relationship between ice surface temperatures as measured by buoys and passive microwave imagery, unusual cloud forms following lead-openings, and analyses of Chukchi and Bering sea polynyas. The report is the appendices to Volume 1.

  1. Automatic segmentation and identification of solitary pulmonary nodules on follow-up CT scans based on local intensity structure analysis and non-rigid image registration

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Naito, Hideto; Nakamura, Yoshihiko; Kitasaka, Takayuki; Rueckert, Daniel; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

    2011-03-01

    This paper presents a novel method that automatically segments solitary pulmonary nodules (SPNs) and matches the segmented SPNs across follow-up thoracic CT scans. A physician needs to find SPNs on chest CT and observe their progress over time in order to diagnose whether they are benign or malignant, or to assess the effect of chemotherapy on malignant nodules using follow-up data. However, the enormous number of CT images places a large burden on the physician. To lighten this burden, we developed a method for automatic segmentation and assisted observation of SPNs in follow-up CT scans. The SPNs in an input 3D thoracic CT scan are segmented based on local intensity structure analysis and information about the pulmonary blood vessels. To compensate for lung deformation, we co-register follow-up CT scans using an affine and a non-rigid registration. Finally, matches between detected nodules are found in the registered CT scans based on a similarity measure. We applied these methods to three patients comprising 14 thoracic CT scans. Our segmentation method detected 96.7% of SPNs in the images, and the nodule matching method found 83.3% of correspondences among segmented SPNs. The results also show that our matching method is robust to SPN growth, including integration/separation and appearance/disappearance. These results confirm that our method is feasible for segmenting and identifying SPNs on follow-up CT scans.

  2. Automated segmentation of the lamina cribrosa using Frangi's filter: a novel approach for rapid identification of tissue volume fraction and beam orientation in a trabeculated structure in the eye

    PubMed Central

    Campbell, Ian C.; Coudrillier, Baptiste; Mensah, Johanne; Abel, Richard L.; Ethier, C. Ross

    2015-01-01

    The lamina cribrosa (LC) is a tissue in the posterior eye with a complex trabecular microstructure. This tissue is of great research interest, as it is likely the initial site of retinal ganglion cell axonal damage in glaucoma. Unfortunately, the LC is difficult to access experimentally, and thus imaging techniques in tandem with image processing have emerged as powerful tools to study the microstructure and biomechanics of this tissue. Here, we present a staining approach to enhance the contrast of the microstructure in micro-computed tomography (micro-CT) imaging as well as a comparison between tissues imaged with micro-CT and second harmonic generation (SHG) microscopy. We then apply a modified version of Frangi's vesselness filter to automatically segment the connective tissue beams of the LC and determine the orientation of each beam. This approach successfully segmented the beams of a porcine optic nerve head from micro-CT in three dimensions and SHG microscopy in two dimensions. As an application of this filter, we present finite-element modelling of the posterior eye that suggests that connective tissue volume fraction is the major driving factor of LC biomechanics. We conclude that segmentation with Frangi's filter is a powerful tool for future image-driven studies of LC biomechanics. PMID:25589572
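    The core of Frangi's filter is a vesselness score built from the Hessian eigenvalues at each point; a minimal 2-D sketch of that score (parameter values are the common defaults, not necessarily those of the paper's modified filter):

```python
import math

def frangi_vesselness(lam1, lam2, beta=0.5, c=15.0):
    """Frangi's 2-D vesselness from Hessian eigenvalues with |lam1| <= |lam2|.
    Bright tubular structures on a dark background have lam2 strongly
    negative and lam1 near zero."""
    if lam2 >= 0:
        return 0.0                       # not a bright ridge
    rb = abs(lam1) / abs(lam2)           # blob-vs-line ratio
    s = math.hypot(lam1, lam2)           # second-order "structureness"
    return math.exp(-rb**2 / (2 * beta**2)) * (1 - math.exp(-s**2 / (2 * c**2)))

# A line-like point (lam1 ~ 0, lam2 << 0) scores higher than a blob-like one.
line = frangi_vesselness(-0.5, -40.0)
blob = frangi_vesselness(-38.0, -40.0)
print(line > blob)  # True
```

Running this score over Hessians computed at several smoothing scales, and keeping the maximum, is what lets the filter pick out beam-like structures of varying thickness.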

  3. PREDICTION OF MINERAL QUALITY OF IRRIGATION RETURN FLOW. VOLUME IV. DATA ANALYSIS UTILITY PROGRAMS

    EPA Science Inventory

    This volume of the report contains a description of the data analysis subroutines developed to support the modeling effort described in Volume III. The subroutines were used to evaluate and condition data used in the conjunctive use model. The subroutines include (1) regression a...

  4. A STANDARD PROCEDURE FOR COST ANALYSIS OF POLLUTION CONTROL OPERATIONS. VOLUME II. APPENDICES

    EPA Science Inventory

    Volume I is a user guide for a standard procedure for the engineering cost analysis of pollution abatement operations and processes. The procedure applies to projects in various economic sectors: private, regulated, and public. Volume II, the bulk of the document, contains 11 app...

  5. Analysis in ultrasmall volumes: microdispensing of picoliter droplets and analysis without protection from evaporation.

    PubMed

    Neugebauer, Sebastian; Evans, Stephanie R; Aguilar, Zoraida P; Mosbach, Marcus; Fritsch, Ingrid; Schuhmann, Wolfgang

    2004-01-15

    A new approach is reported for analysis of ultrasmall volumes. It takes advantage of the versatile positioning of a dispenser to shoot approximately 150-pL droplets of liquid onto a specific location of a substrate where analysis is performed rapidly, in a fraction of the time that it takes for the droplet to evaporate. In this report, the site where the liquid is dispensed carries out fast-scan cyclic voltammetry (FSCV), although the detection method does not need to be restricted to electrochemistry. The FSCV is performed at a microcavity having individually addressable gold electrodes, where one serves as working electrode and another as counter/pseudoreference electrode. Five or six droplets of 10 mM [Ru(NH₃)₆]Cl₃ in 0.1 M KCl were dispensed and allowed to dry, followed by redissolution of the redox species and electrolyte with one or five droplets of water and immediate FSCV, demonstrating the ability to easily concentrate a sample and the reproducibility of redissolution, respectively. Because this approach does not integrate detection with microfluidics on the same chip, it simplifies fabrication of devices for analysis of ultrasmall volumes. It may be useful for single-step and multistep sample preparation, analyses, and bioassays in microarray formats if dispensing and changing of solutions are automated. However, care must be taken to avoid factors that affect the aim of the dispenser, such as drafts and clogging of the nozzle. PMID:14719897

  6. Corrections for volume hydrogen content in coal analysis by prompt gamma neutron activation analysis

    NASA Astrophysics Data System (ADS)

    Salgado, J.; Oliveira, C.

    1992-05-01

    Prompt gamma neutron activation analysis, PGNAA, is a useful technique to determine the elemental composition of bulk samples in on-line measurements. Monte Carlo simulation studies performed on bulk coals of different compositions for a given sample size and geometry have shown that both the gamma count rate for hydrogen and the gamma count rate per percent by weight for an arbitrary element due to (n, γ) reactions depend on the volume hydrogen content, being independent of coal composition. Experimental results using a ²⁵²Cf neutron source surrounded by a lead cylinder were obtained for nine different coal types. These show that the γ-peak originated by (n, n′γ) reactions in the lead shield depends on the sample density. Assuming that the source intensity is constant, this result enables the measurement of the coal bulk density. Taking into account the results just described, the present paper shows how the γ-peak intensities can be corrected for volume hydrogen content in order to obtain the percent by weight contents of the coal. The density is necessary to convert the volume hydrogen content into percent by weight and to calculate the bulk sample weight.
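    The final conversion step is straightforward: dividing the volume hydrogen content by the bulk density gives hydrogen as percent by weight; a one-line sketch with invented numbers:

```python
def hydrogen_weight_percent(vol_h_g_per_cm3, bulk_density_g_per_cm3):
    """Convert a volume hydrogen content (g of H per cm^3 of coal) into
    percent by weight, using the bulk density inferred from the
    lead-shield gamma peak."""
    return 100.0 * vol_h_g_per_cm3 / bulk_density_g_per_cm3

# 0.065 g/cm^3 of hydrogen in coal of bulk density 1.3 g/cm^3 -> 5 wt%.
print(round(hydrogen_weight_percent(0.065, 1.3), 3))  # 5.0
```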

  7. Practical considerations for the segmented-flow analysis of nitrate and ammonium in seawater and the avoidance of matrix effects

    NASA Astrophysics Data System (ADS)

    Rho, Tae Keun; Coverly, Stephen; Kim, Eun-Soo; Kang, Dong-Jin; Kahng, Sung-Hyun; Na, Tae-Hee; Cho, Sung-Rok; Lee, Jung-Moo; Moon, Cho-Rong

    2015-12-01

    In this study we describe measures taken in our laboratory to improve the long-term precision of nitrate and ammonia analysis in seawater using a microflow segmented-flow analyzer. To improve the nitrate reduction efficiency using a flow-through open tube cadmium reactor (OTCR), we compared alternative buffer formulations and regeneration procedures for an OTCR. We improved long-term stability for nitrate with a modified flow scheme and color reagent formulation and for ammonia by isolating samples from the ambient air and purifying the air used for bubble segmentation. We demonstrate the importance of taking into consideration the residual nutrient content of the artificial seawater used for the preparation of calibration standards. We describe how an operating procedure to eliminate errors from that source as well as from the refractive index of the matrix itself can be modified to include the minimization of dynamic refractive index effects resulting from differences between the matrix of the samples, the calibrants, and the wash solution. We compare the data for long-term measurements of certified reference material under two different conditions, using ultrapure water (UPW) and artificial seawater (ASW) for the sampler wash.

  8. Analysis of a segmented q-plate tunable retarder for the generation of first-order vector beams.

    PubMed

    Davis, Jeffrey A; Hashimoto, Nobuyuki; Kurihara, Makoto; Hurtado, Enrique; Pierce, Melanie; Sánchez-López, María M; Badham, Katherine; Moreno, Ignacio

    2015-11-10

    In this work we study a prototype q-plate segmented tunable liquid crystal retarder device. It shows a large modulation range (5π rad for a wavelength of 633 nm and near 2π for 1550 nm) and a large clear aperture of one inch diameter. We analyze the operation of the q-plate in terms of Jones matrices and provide different matrix decompositions useful for its analysis, including the polarization transformations, the effect of the tunable phase shift, and the effect of quantization levels (the device is segmented in 12 angular sectors). We also show a very simple and robust optical system capable of generating all polarization states on the first-order Poincaré sphere. An optical polarization rotator and a linear retarder are used in a geometry that allows the generation of all states in the zero-order Poincaré sphere simply by tuning two retardance parameters. We then use this system with the q-plate device to directly map an input arbitrary state of polarization to a corresponding first-order vectorial beam. This optical system would be more practical for high speed and programmable generation of vector beams than other systems reported so far. Experimental results are presented. PMID:26560790
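    A q-plate can be modelled as a linear retarder whose fast-axis angle equals q times the azimuthal angle θ; a minimal Jones-matrix sketch that also quantizes θ over angular sectors (the sector-centre handling is an assumption for illustration, not the device's exact layout):

```python
import numpy as np

def qplate_jones(theta, q, delta, sectors=12):
    """Jones matrix of a segmented q-plate at azimuth theta: a linear
    retarder of tunable retardance delta whose fast axis sits at q*theta,
    with theta quantized to the centre of one of `sectors` sectors."""
    step = 2 * np.pi / sectors
    theta_q = (np.floor(theta / step) + 0.5) * step   # sector centre
    a = q * theta_q                                   # local fast-axis angle
    c2, s2 = np.cos(2 * a), np.sin(2 * a)
    retarder = np.array([[c2, s2], [s2, -c2]], dtype=complex)
    return np.cos(delta / 2) * np.eye(2) - 1j * np.sin(delta / 2) * retarder

# At half-wave tuning (delta = pi) the q-plate flips circular handedness:
# left-circular input comes out right-circular (up to a global phase).
m = qplate_jones(0.3, q=0.5, delta=np.pi)
left = np.array([1, 1j]) / np.sqrt(2)
out = m @ left
right = np.array([1, -1j]) / np.sqrt(2)
print(np.isclose(abs(np.vdot(right, out)), 1.0))  # True
```

Sweeping delta away from pi mixes the converted and unconverted components, which is the tunability the abstract analyzes through its matrix decompositions.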

  9. The genetic architecture of Down syndrome phenotypes revealed by high-resolution analysis of human segmental trisomies

    PubMed Central

    Korbel, Jan O.; Tirosh-Wagner, Tal; Urban, Alexander Eckehart; Chen, Xiao-Ning; Kasowski, Maya; Dai, Li; Grubert, Fabian; Erdman, Chandra; Gao, Michael C.; Lange, Ken; Sobel, Eric M.; Barlow, Gillian M.; Aylsworth, Arthur S.; Carpenter, Nancy J.; Clark, Robin Dawn; Cohen, Monika Y.; Doran, Eric; Falik-Zaccai, Tzipora; Lewin, Susan O.; Lott, Ira T.; McGillivray, Barbara C.; Moeschler, John B.; Pettenati, Mark J.; Pueschel, Siegfried M.; Rao, Kathleen W.; Shaffer, Lisa G.; Shohat, Mordechai; Van Riper, Alexander J.; Warburton, Dorothy; Weissman, Sherman; Gerstein, Mark B.; Snyder, Michael; Korenberg, Julie R.

    2009-01-01

    Down syndrome (DS), or trisomy 21, is a common disorder associated with several complex clinical phenotypes. Although several hypotheses have been put forward, it is unclear whether particular gene loci on chromosome 21 (HSA21) are sufficient to cause DS and its associated features. Here we present a high-resolution genetic map of DS phenotypes based on an analysis of 30 subjects carrying rare segmental trisomies of various regions of HSA21. Using state-of-the-art genomics technologies, we mapped segmental trisomies at exon-level resolution and identified discrete regions of 1.8–16.3 Mb likely to be involved in the development of 8 DS phenotypes, 4 of which are congenital malformations, including acute megakaryocytic leukemia, transient myeloproliferative disorder, Hirschsprung disease, duodenal stenosis, imperforate anus, severe mental retardation, DS-Alzheimer disease, and DS-specific congenital heart disease (DSCHD). Our DS phenotypic maps located DSCHD to a <2-Mb interval. Furthermore, the map enabled us to present evidence against the necessary involvement of other loci, as well as against specific hypotheses that have been put forward in relation to the etiology of DS, i.e., the presence of a single DS consensus region and the sufficiency of DSCR1 and DYRK1A, or APP, in causing several severe DS phenotypes. Our study demonstrates the value of combining advanced genomics with cohorts of rare patients for studying DS, a prototype for the role of copy-number variation in complex disease. PMID:19597142

  10. A review of heart chamber segmentation for structural and functional analysis using cardiac magnetic resonance imaging.

    PubMed

    Peng, Peng; Lekadir, Karim; Gooya, Ali; Shao, Ling; Petersen, Steffen E; Frangi, Alejandro F

    2016-04-01

    Cardiovascular magnetic resonance (CMR) has become a key imaging modality in clinical cardiology practice due to its unique capabilities for non-invasive imaging of the cardiac chambers and great vessels. A wide range of CMR sequences have been developed to assess various aspects of cardiac structure and function, and significant advances have also been made in terms of imaging quality and acquisition times. Considerable research has been dedicated to the development of global and regional quantitative CMR indices that help distinguish between health and pathology. The goal of this review paper is to discuss the structural and functional CMR indices that have been proposed thus far for clinical assessment of the cardiac chambers. We include the definitions of the indices, the requirements for their calculation, exemplar applications in cardiovascular diseases, and the corresponding normal ranges. Furthermore, we review the most recent state-of-the-art techniques for the automatic segmentation of the cardiac boundaries, which is necessary for the calculation of the CMR indices. Finally, we provide a detailed discussion of the existing literature and of the future challenges that need to be addressed to enable a more robust and comprehensive assessment of the cardiac chambers in clinical practice. PMID:26811173

  11. Analysis of the Vancouver lung nodule malignancy model with respect to manual and automated segmentation

    NASA Astrophysics Data System (ADS)

    Wiemker, Rafael; Boroczky, Lilla; Bergtholdt, Martin; Klinder, Tobias

    2015-03-01

    The recently published Vancouver model for lung nodule malignancy prediction holds great promise as a practically feasible tool to mitigate the clinical decision problem of how to act on a lung nodule detected at baseline screening. It provides a formula to compute a probability of malignancy from only nine clinical and radiologic features. The feature values are provided by user interaction but in principle could also be automatically pre-filled by appropriate image processing algorithms and RIS requests. Nodule diameter is a feature with crucial influence on the predicted malignancy, and leads to uncertainty caused by inter-reader variability. The purpose of this paper is to analyze how strongly the malignancy prediction of a lung nodule found with CT screening is affected by the inter-reader variation of the nodule diameter estimation. To this aim we have estimated the magnitude of the malignancy variability by applying the Vancouver malignancy model to the LIDC-IDRI database which contains independent delineations from several readers. It can be shown that using fully automatic nodule segmentation can significantly lower the variability of the estimated malignancy, while demonstrating excellent agreement with the expert readers.

  12. Distinction and quantification of carry-over and sample interaction in gas segmented continuous flow analysis

    PubMed Central

    Zhang, Jia-Zhong

    1997-01-01

    The formulae for calculation of carry-over and sample interaction are derived for the first time in this study. A scheme proposed by Thiers et al. (two samples of low concentration followed by a high concentration sample and a low concentration sample) is verified and recommended for the determination of the carry-over coefficient. The derivation demonstrates that both widely used schemes (a high concentration sample followed by two low concentration samples, and a low concentration sample followed by two high concentration samples) actually measure the sum of the carry-over coefficient and the sample interaction coefficient. A scheme of three low concentration samples followed by a high concentration sample is proposed and verified for determination of the sample interaction coefficient. Experimental results indicate that carry-over is a strong function of cycle time and a weak function of the ratio of sample time to wash time. Sample dispersion is found to be a function of sample time. Fitted equations can be used to predict the carry-over, absorbance and dispersion given the sample and wash times for an analytical system. Results clearly show the important role of intersample air segmentation in reducing carry-over, sample interaction and dispersion. PMID:18924810
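    The abstract does not reproduce the derived formulae, but a common formulation of the carry-over coefficient for the Thiers-type low, low, high, low sequence is the rise of the trailing low sample above the low baseline, relative to the high-low span. A minimal sketch under that assumption (illustrative only, not the paper's exact derivation):

    ```python
    def carryover_coefficient(a_low1, a_low2, a_high, a_low3):
        """Estimate the carry-over coefficient k from absorbances of a
        Thiers-type sequence L1, L2, H, L3: the excess signal carried
        from H into L3, normalized by the span between H and the low
        baseline L2. One common formulation; definitions vary by author."""
        return (a_low3 - a_low2) / (a_high - a_low2)
    ```

    For example, with a low baseline of 0.100 absorbance units, a high sample of 1.000, and a trailing low reading of 0.118, the estimated carry-over is 2%.
    
    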

  14. Who Will More Likely Buy PHEV: A Detailed Market Segmentation Analysis

    SciTech Connect

    Lin, Zhenhong; Greene, David L

    2010-01-01

    Understanding the diverse PHEV purchase behaviors among prospective new car buyers is key to designing efficient and effective policies for promoting new energy vehicle technologies. The ORNL MA3T model developed for the U.S. Department of Energy is described and used to project PHEV purchase probabilities by different consumers. MA3T disaggregates the U.S. household vehicle market into 1458 consumer segments based on region, residential area, driver type, technology attitude, home charging availability, and work charging availability, and is calibrated to the EIA's Annual Energy Outlook. Simulation results from MA3T are used to identify the more likely PHEV buyers and provide explanations. It is observed that consumers who have home charging, drive more frequently, and live in urban areas are more likely to buy a PHEV. Early adopters are projected to be the more likely PHEV buyers in the early market, but the PHEV purchase probability among late-majority consumers can increase over time as the PHEV gradually becomes a familiar product. Copyright Form of EVS25.

  15. Comparative analysis of the distribution of segmented filamentous bacteria in humans, mice and chickens

    PubMed Central

    Yin, Yeshi; Wang, Yu; Zhu, Liying; Liu, Wei; Liao, Ningbo; Jiang, Mizu; Zhu, Baoli; Yu, Hongwei D; Xiang, Charlie; Wang, Xin

    2013-01-01

    Segmented filamentous bacteria (SFB) are indigenous gut commensal bacteria. They are commonly detected in the gastrointestinal tracts of both vertebrates and invertebrates. Despite their significant role in modulating the development of host immune systems, little information exists regarding the presence of SFB in humans. The aim of this study was to investigate the distribution and diversity of SFB in humans and to determine their phylogenetic relationships with their hosts. Gut contents from 251 humans, 92 mice and 72 chickens were collected for bacterial genomic DNA extraction and subjected to SFB 16S rRNA-specific PCR detection. The results showed SFB colonization to be age-dependent in humans, with the majority of individuals colonized within the first 2 years of life, but this colonization disappeared by the age of 3 years. Results of 16S rRNA sequencing showed that multiple operational taxonomic units of SFB could exist in the same individuals. Cross-species comparison among human, mouse and chicken samples demonstrated that each host possessed an exclusive predominant SFB sequence. In summary, our results showed that SFB display host specificity, and SFB colonization, which occurs early in human life, declines in an age-dependent manner. PMID:23151642

  16. Asymmetry analysis of the arm segments during forward handspring on floor.

    PubMed

    Exell, Timothy A; Robinson, Gemma; Irwin, Gareth

    2016-08-01

    Asymmetry in gymnastics underpins successful performance and may also have implications as an injury mechanism; therefore, understanding of this concept could be useful for coaches and clinicians. The aim of this study was to examine kinematic and external kinetic asymmetry of the arm segments during the contact phase of a fundamental skill, the forward handspring on floor. Using a repeated single-subject design, six female national elite gymnasts (age: 19 ± 1.5 years, mass: 58.64 ± 3.72 kg, height: 1.62 ± 0.41 m) each performed 15 forward handsprings, during which synchronised 3D kinematic and kinetic data were collected. Asymmetry between the lead and non-lead side arms was quantified during each trial. Significant kinetic asymmetry was observed for all gymnasts (p < 0.005), with the direction of the asymmetry being related to the lead leg. All gymnasts displayed kinetic asymmetry for ground reaction force. Kinematic asymmetry was present for more gymnasts at the shoulder than at the distal joints. These findings provide useful information for coaching gymnastics skills which may subjectively appear to be symmetrical. The observed asymmetry has both performance and injury implications. PMID:26625144

  17. Photogrammetric Digital Outcrop Model analysis of a segment of the Centovalli Line (Trontano, Italy)

    NASA Astrophysics Data System (ADS)

    Consonni, Davide; Pontoglio, Emanuele; Bistacchi, Andrea; Tunesi, Annalisa

    2015-04-01

    The Centovalli Line is a complex network of brittle faults developing between Domodossola (West) and Locarno (East), where it merges with the Canavese Line (the western segment of the Periadriatic Lineament). The Centovalli Line roughly follows the Southern Steep Belt, which characterizes the inner or "root" zone of the Penninic and Austroalpine units; these underwent several deformation phases under variable P-T conditions throughout the Alpine orogenic history. The last deformation phases in this area developed under brittle conditions, resulting in an array of dextral-reverse subvertical faults with a general E-W trend that partly reactivates and partly crosscuts the metamorphic foliations and lithological boundaries. Here we report on a quantitative digital outcrop model (DOM) study aimed at quantifying the fault zone architecture in a particularly well exposed outcrop near Trontano, at the western edge of the Centovalli Line. The DOM was reconstructed with photogrammetry and allowed a complete characterization of the damage zones and multiple fault cores to be performed on both point cloud and textured surface models. Fault cores have been characterized in terms of attitude, thickness, and internal distribution of fault rocks (gouge-bearing), including possibly seismogenic localized slip surfaces. In the damage zones, the fracture network has been characterized in terms of fracture intensity (both P10 and P21, on virtual scanlines and scan-areas), fracture attitude, fracture connectivity, etc.
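    The fracture-intensity measures P10 (fractures intersected per unit scanline length) and P21 (fracture trace length per unit sampling area) mentioned above follow standard definitions; a minimal sketch of both:

    ```python
    def p10(n_fractures, scanline_length_m):
        """P10: number of fracture intersections per unit length of a
        virtual scanline (units: 1/m)."""
        return n_fractures / scanline_length_m

    def p21(trace_lengths_m, area_m2):
        """P21: total fracture trace length per unit sampling area
        on a virtual scan-area (units: m/m^2)."""
        return sum(trace_lengths_m) / area_m2
    ```

    For example, 14 fractures crossing a 7 m scanline give P10 = 2.0 m^-1, and traces of 1, 2 and 3 m within a 2 m^2 window give P21 = 3.0 m^-1.
    
    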

  18. Research on Demonstration Title I Compensatory Education Projects. Analysis Plan, Volume I and Appendices, Volume II.

    ERIC Educational Resources Information Center

    Vanecko, James J.; And Others

    This report presents the analytic plan for a study of demonstration Title I projects. The study is to be one of distributional equity: what kinds of instructional services are received by what kinds of students. It is mandated by the United States Congress and will provide the basic analysis for possible modifications in the Elementary and…

  19. Model documentation of the gas analysis modeling system. Volume 1. Model overview

    SciTech Connect

    Kydes, A.S.

    1984-08-01

    This is Volume 1 of three volumes of documentation for the Gas Analysis Modeling System (GAMS) developed by the Analysis and Forecasting Branch, Reserves and Natural Gas Division, Office of Oil and Gas, Energy Information Administration (EIA), US Department of Energy. The documentation has been developed to comply with the requirements specified in Energy Information Administration Order EI 5910.3A, Guidelines and Procedures for Model and Analysis Documentation, effective October 1, 1982. Volume 1 is intended to satisfy the requirements of paragraph 5.c(3) of the Order, ''Model Overview''. Appendix A provides a model abstract intended to satisfy the requirements of paragraph 5.c(1) of the Order. This document is a non-mathematical description of the GAMS system and is written for an audience with a technical understanding of applied statistical/applied mathematics modeling. Companion volumes to Volume 1 include: Volume 2 - Model Documentation of the Gas Analysis Modeling System: Model Methodology; Volume 3 - Model Documentation of the Gas Analysis Modeling System: GAMS Software, Data Documentation and User's Guide. 6 figs., 1 tab.

  20. Risk factors for neovascular glaucoma after carbon ion radiotherapy of choroidal melanoma using dose-volume histogram analysis

    SciTech Connect

    Hirasawa, Naoki . E-mail: naoki_h@nirs.go.jp; Tsuji, Hiroshi; Ishikawa, Hitoshi; Koyama-Ito, Hiroko; Kamada, Tadashi; Mizoe, Jun-Etsu; Ito, Yoshiyuki; Naganawa, Shinji; Ohnishi, Yoshitaka; Tsujii, Hirohiko

    2007-02-01

    Purpose: To determine the risk factors for neovascular glaucoma (NVG) after carbon ion radiotherapy (C-ion RT) of choroidal melanoma. Methods and Materials: A total of 55 patients with choroidal melanoma were treated between 2001 and 2005 with C-ion RT based on computed tomography treatment planning. All patients had a tumor of large size or one located close to the optic disk. Univariate and multivariate analyses were performed to identify the risk factors of NVG for the following parameters: gender, age, dose-volumes of the iris-ciliary body and the wall of the eyeball, and irradiation of the optic disk (ODI). Results: Neovascular glaucoma occurred in 23 patients and the 3-year cumulative NVG rate was 42.6 ± 6.8% (standard error), but enucleation from NVG was performed in only three eyes. Multivariate analysis revealed that the significant risk factors for NVG were V50(IC) (volume of the iris-ciliary body irradiated with ≥50 GyE) (p = 0.002) and ODI (p = 0.036). The 3-year NVG rates for patients with V50(IC) ≥0.127 mL and those with V50(IC) <0.127 mL were 71.4 ± 8.5% and 11.5 ± 6.3%, respectively. The corresponding rates for the patients with and without ODI were 62.9 ± 10.4% and 28.4 ± 8.0%, respectively. Conclusion: Dose-volume histogram analysis with computed tomography indicated that V50(IC) and ODI were independent risk factors for NVG. An irradiation system that can reduce the dose to both the anterior segment and the optic disk might be worth adopting to investigate whether or not the incidence of NVG can be decreased.
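    The dose-volume quantity V50(IC) used above is the absolute volume of a structure receiving at least 50 GyE. A minimal sketch of how such a metric is computed from per-voxel doses, assuming a uniform voxel volume (illustrative only, not the authors' treatment-planning software):

    ```python
    def v_dose(dose_gye, voxel_volume_ml, threshold_gye=50.0):
        """Absolute volume (mL) of a structure receiving at least
        `threshold_gye`, given one dose value per voxel and a uniform
        voxel volume. Summing such values over thresholds yields a
        cumulative dose-volume histogram (DVH)."""
        return sum(voxel_volume_ml for d in dose_gye if d >= threshold_gye)
    ```

    With voxels of 0.01 mL, a structure whose voxel doses are [10, 55, 60, 49] GyE has V50 = 0.02 mL, since two voxels meet the 50 GyE threshold.
    
    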

  1. A registration-based segmentation method with application to adiposity analysis of mice microCT images

    NASA Astrophysics Data System (ADS)

    Bai, Bing; Joshi, Anand; Brandhorst, Sebastian; Longo, Valter D.; Conti, Peter S.; Leahy, Richard M.

    2014-04-01

    Obesity is a global health problem, particularly in the U.S., where one third of adults are obese. A reliable and accurate method of quantifying obesity is necessary. Visceral adipose tissue (VAT) and subcutaneous adipose tissue (SAT) are two measures of obesity that reflect different associated health risks, but accurate measurements in humans or rodent models are difficult. In this paper we present an automatic, registration-based segmentation method for mouse adiposity studies using microCT images. We co-register the subject CT image and a mouse CT atlas. Our method is based on surface matching of the microCT image and an atlas. Surface-based elastic volume warping is used to match the internal anatomy. We acquired a whole body scan of a C57BL6/J mouse injected with contrast agent using microCT and created a whole body mouse atlas by manually delineating the boundaries of the mouse and major organs. For method verification we scanned a C57BL6/J mouse from the base of the skull to the distal tibia. We registered the obtained mouse CT image to our atlas. Preliminary results show that we can warp the atlas image to match the posture and shape of the subject CT image, which has significant differences from the atlas. We plan to use this software tool in longitudinal obesity studies using mouse models.

  2. Fault rupture segmentation

    NASA Astrophysics Data System (ADS)

    Cleveland, Kenneth Michael

    A critical foundation of earthquake study and hazard assessment is the understanding of controls on fault rupture, including segmentation. Key challenges to understanding fault rupture segmentation include, but are not limited to: What determines whether a fault segment will rupture in a single great event or in multiple moderate events? How is slip along a fault partitioned between seismic and aseismic components? How does the seismicity of a fault segment evolve over time? How representative are past events for assessing future seismic hazards? In order to address these difficult questions, new methods must be developed that utilize the information available. Much of the research presented in this study focuses on the development of new methods for attacking the challenges of understanding fault rupture segmentation. Not only do these methods exploit a broader band of information within the waveform than has traditionally been used, but they also lend themselves to the inclusion of additional seismic phases, providing deeper understanding. Additionally, these methods are designed to be fast and efficient with large datasets, allowing them to utilize the enormous volume of data available. Key findings from this body of work include the demonstration that a focus on fundamental earthquake properties at regional scales can provide a general understanding of fault rupture segmentation. We present a modern, waveform-based method that locates events using cross-correlation of Rayleigh waves. Additionally, cross-correlation values can also be used to calculate precise earthquake magnitudes. Finally, insight regarding earthquake rupture directivity can be easily and quickly obtained using cross-correlation of surface waves.
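    At its core, the surface-wave cross-correlation approach described above measures relative delays between event pairs from the peak of their correlation. A minimal numpy sketch of that single step, assuming equal-length records (not the author's full relocation method):

    ```python
    import numpy as np

    def relative_delay(wave_a, wave_b):
        """Lag (in samples) of wave_b relative to wave_a, taken from the
        peak of their full cross-correlation. Positive means wave_b
        arrives later than wave_a."""
        xc = np.correlate(wave_b, wave_a, mode="full")
        # 'full' output index N-1 corresponds to zero lag
        return int(np.argmax(xc)) - (len(wave_a) - 1)
    ```

    For instance, two identical pulses offset by five samples yield a delay of +5; swapping the arguments flips the sign. In practice such delays, measured over many station pairs, constrain relative event locations.
    
    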

  3. Segmentation of liver and liver tumor for the Liver-Workbench

    NASA Astrophysics Data System (ADS)

    Zhou, Jiayin; Ding, Feng; Xiong, Wei; Huang, Weimin; Tian, Qi; Wang, Zhimin; Venkatesh, Sudhakar K.; Leow, Wee Kheng

    2011-03-01

    Robust and efficient segmentation tools are important for the quantification of 3D liver and liver tumor volumes, which can greatly help clinicians in clinical decision-making and treatment planning. A two-module image analysis procedure integrating two novel semi-automatic algorithms has been developed to segment 3D liver and liver tumors from multi-detector computed tomography (MDCT) images. The first module segments the liver volume using a flipping-free mesh deformation model. In each iteration, before mesh deformation, the algorithm detects and avoids possible flips which would cause self-intersection of the mesh and hence undesired segmentation results. After flipping avoidance, Laplacian mesh deformation is performed with various constraints on geometry and shape smoothness. In the second module, the segmented liver volume is used as the ROI and liver tumors are segmented by support vector machine (SVM)-based voxel classification and propagational learning. First an SVM classifier was trained to extract the tumor region from a single 2D slice in the intermediate part of a tumor by voxel classification. Then the extracted tumor contour, after some morphological operations, was projected to its neighboring slices for automated sampling, learning and further voxel classification in those slices. This propagation procedure continued until all tumor-containing slices were processed. The performance of the whole procedure was tested using 20 MDCT data sets and the results were promising: nineteen liver volumes were successfully segmented, with mean relative absolute volume difference (RAVD), volume overlap error (VOE) and average symmetric surface distance (ASSD) to the reference segmentation of 7.1%, 12.3% and 2.5 mm, respectively. For liver tumor segmentation, the median RAVD, VOE and ASSD were 7.3%, 18.4% and 1.7 mm, respectively.
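    The RAVD and VOE metrics quoted above follow standard definitions on binary masks; a minimal numpy sketch (function names are illustrative):

    ```python
    import numpy as np

    def ravd(seg, ref):
        """Relative absolute volume difference between a segmentation
        mask and a reference mask (0 means identical volumes)."""
        return abs(int(seg.sum()) - int(ref.sum())) / int(ref.sum())

    def voe(seg, ref):
        """Volume overlap error: 1 - intersection/union, i.e. the
        Jaccard distance between the two masks."""
        inter = np.logical_and(seg, ref).sum()
        union = np.logical_or(seg, ref).sum()
        return 1.0 - inter / union
    ```

    Two masks of equal volume can still disagree spatially: equal voxel counts give RAVD = 0 even while VOE is large, which is why both metrics (plus a surface distance such as ASSD) are usually reported together.
    
    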

  4. Synfuel program analysis. Volume 2: VENVAL users manual

    NASA Astrophysics Data System (ADS)

    Muddiman, J. B.; Whelan, J. W.

    1980-07-01

    This volume is intended for program analysts and is a users manual for the VENVAL model. It contains specific explanations as to input data requirements and programming procedures for the use of this model. VENVAL is a generalized computer program to aid in the evaluation of prospective private sector production ventures. The program can project interrelated values of installed capacity, production, sales revenue, operating costs, depreciation, investment, debt, earnings, taxes, return on investment, depletion, and cash flow measures. It can also compute related public sector and other external costs and revenues if unit costs are furnished.

  5. Synfuel program analysis. Volume II. VENVAL users manual

    SciTech Connect

    Muddiman, J. B.; Whelan, J. W.

    1980-07-01

    This volume is intended for program analysts and is a users manual for the VENVAL model. It contains specific explanations as to input data requirements and programming procedures for the use of this model in handling various cases. VENVAL is a generalized computer program to aid in evaluation of prospective private-sector production ventures. The program can project interrelated values of installed capacity, production, sales revenue, operating costs, depreciation, investment, debt, earnings, taxes, return on investment, depletion, and cash flow measures. It can also compute related public sector and other external costs and revenues if unit costs are furnished. (DMC)

  6. Virtual Mastoidectomy Performance Evaluation through Multi-Volume Analysis

    PubMed Central

    Kerwin, Thomas; Stredney, Don; Wiet, Gregory; Shen, Han-Wei

    2012-01-01

    Purpose: Development of a visualization system that provides surgical instructors with a method to compare the results of many virtual surgeries (n > 100). Methods: A masked distance field models the overlap between expert and resident results. Multiple volume displays are used side-by-side with a 2D point display. Results: Performance characteristics were examined by comparing the results of specific residents with those of experts and the entire class. Conclusions: The software provides a promising approach for comparing performance between large groups of residents learning mastoidectomy techniques. PMID:22528058

  7. Model documentation of the gas analysis modeling system. Volume 1. Model overview

    SciTech Connect

    Not Available

    1984-08-01

    This is Volume 1 of three volumes of documentation for the Gas Analysis Modeling System (GAMS) developed by the Analysis and Forecasting Branch, Reserves and Natural Gas Division, Office of Oil and Gas, Energy Information Administration (EIA), US Department of Energy. This document is a non-mathematical description of the GAMS system, and is written for an audience with a technical understanding of applied statistical/applied mathematics modeling.

  8. Corpus Callosum Area and Brain Volume in Autism Spectrum Disorder: Quantitative Analysis of Structural MRI from the ABIDE Database

    ERIC Educational Resources Information Center

    Kucharsky Hiess, R.; Alter, R.; Sojoudi, S.; Ardekani, B. A.; Kuzniecky, R.; Pardoe, H. R.

    2015-01-01

    Reduced corpus callosum area and increased brain volume are two commonly reported findings in autism spectrum disorder (ASD). We investigated these two correlates in ASD and healthy controls using T1-weighted MRI scans from the Autism Brain Imaging Data Exchange (ABIDE). Automated methods were used to segment the corpus callosum and intracranial…

  10. The effect of lead selection on traditional and heart rate-adjusted ST segment analysis in the detection of coronary artery disease during exercise testing.

    PubMed

    Viik, J; Lehtinen, R; Turjanmaa, V; Niemelä, K; Malmivuo, J

    1997-09-01

    Several methods of heart rate-adjusted ST segment (ST/HR) analysis have been suggested to improve the diagnostic accuracy of exercise electrocardiography in the identification of coronary artery disease compared with traditional ST segment analysis. However, no comprehensive comparison of these methods on a lead-by-lead basis in all 12 electrocardiographic leads has been reported. This article compares the diagnostic performances of ST/HR hysteresis, ST/HR index, ST segment depression 3 minutes after recovery from exercise, and ST segment depression at peak exercise in a study population of 128 patients with angiographically proved coronary artery disease and 189 patients with a low likelihood of the disease. The methods were determined in each lead of the Mason-Likar modification of the standard 12-lead exercise electrocardiogram for each patient. The ST/HR hysteresis, ST/HR index, ST segment depression 3 minutes after recovery from exercise, and ST segment depression at peak exercise achieved more than 85% area under the receiver-operating characteristic curve in nine, none, three, and one of the 12 standard leads, respectively. The diagnostic performance of ST/HR hysteresis was significantly superior in each lead, with the exception of leads aVL and V1. Examination of individual leads for each study method revealed the high diagnostic performance of leads I and -aVR, indicating that the importance of these leads has been undervalued. In conclusion, the results indicate that when traditional ST segment analysis is used for the detection of coronary artery disease, more attention should be paid to the leads chosen for analysis, and lead-specific cut points should be applied. On the other hand, ST/HR hysteresis, which integrates the ST/HR depression of the exercise and recovery phases, seems to be relatively insensitive to lead selection and significantly increases the diagnostic performance of exercise electrocardiography in the detection of coronary artery disease. PMID:9327707
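    Of the indices compared above, the ST/HR index is the simplest: the net exercise-induced change in ST amplitude divided by the net heart-rate change. A minimal sketch, with sign convention and units chosen for illustration (the abstract does not specify the computational details):

    ```python
    def st_hr_index(st_rest_uv, st_peak_uv, hr_rest_bpm, hr_peak_bpm):
        """ST/HR index: net exercise-induced ST-amplitude change divided
        by the net heart-rate change, in microvolts per beat per minute.
        ST depression is taken as negative, so a more negative index
        means deeper rate-adjusted depression."""
        return (st_peak_uv - st_rest_uv) / (hr_peak_bpm - hr_rest_bpm)
    ```

    For example, 160 uV of ST depression developing while heart rate rises from 70 to 150 beats/min gives an index of -2.0 uV per beat/min. ST/HR hysteresis extends this idea by comparing the ST/HR relationship during exercise against that during recovery rather than using endpoints alone.
    
    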

  11. Navier-Stokes analysis of two- and three-dimensional flow field in solid rocket motors with segment joints

    NASA Technical Reports Server (NTRS)

    Sabnis, J. S.; Gibeling, H. J.; Mcdonald, H.

    1987-01-01

    A multidimensional implicit Navier-Stokes analysis, which uses numerical solution of the ensemble-averaged Navier-Stokes equations in a nonorthogonal body-fitted cylindrical-polar coordinate system, has been applied to simulation of the internal flow field in solid-propellant rocket motor chambers with segment joints. The calculation procedure incorporates a two-equation (k-epsilon) turbulence model and utilizes a consistently split, linearized block-implicit algorithm for numerical solution of the governing equations. Computations performed to simulate the axisymmetric flow field in the vicinity of the aft field joint in the Space Shuttle SRB using 14,725 grid points show the presence of a region of reversed axial flow near the downstream edge of the slot. Calculations were also performed for two cases involving asymmetric three-dimensional flow in the vicinity of the aft field joint in the SRB using 721,525 grid points to estimate circumferential velocities and pressure gradients at the joint.

  12. Estimating temperature-dependent anisotropic hydrogen displacements with the invariom database and a new segmented rigid-body analysis program

    PubMed Central

    Lübben, Jens; Bourhis, Luc J.; Dittrich, Birger

    2015-01-01

    Invariom partitioning and notation are used to estimate anisotropic hydrogen displacements for incorporation in crystallographic refinement models. Optimized structures of the generalized invariom database and their frequency computations provide the information required: frequencies are converted to internal atomic displacements and combined with the results of a TLS (translation–libration–screw) fit of experimental non-hydrogen anisotropic displacement parameters to estimate those of H atoms. Comparison with TLS+ONIOM and neutron diffraction results for four example structures, where high-resolution X-ray and neutron data are available, shows that the electron density transferability rules established in the invariom approach are also suitable for streamlining the transfer of atomic vibrations. A new segmented-body TLS analysis program called APD-Toolkit has been coded to overcome technical limitations of the established program THMA. The influence of incorporating hydrogen anisotropic displacement parameters on conventional refinement is assessed. PMID:26664341

  13. Conjoint Analysis of Study Abroad Preferences: Key Attributes, Segments and Implications for Increasing Student Participation

    ERIC Educational Resources Information Center

    Garver, Michael S.; Divine, Richard L.

    2008-01-01

    An adaptive conjoint analysis was performed on the study abroad preferences of a sample of undergraduate college students. The results indicate that trip location, cost, and time spent abroad are the three most important determinants of student preference for different study abroad trip scenarios. The analysis also uncovered four different study…

  14. Evaluation of the Field Test of Project Information Packages: Volume III--Resource Cost Analysis.

    ERIC Educational Resources Information Center

    Al-Salam, Nabeel; And Others

    The third of three volumes evaluating the first year field test of the Project Information Packages (PIPs) provides a cost analysis study as a key element in the total evaluation. The resource approach to cost analysis is explained and the specific resource methodology used in the main cost analysis of the 19 PIP field-test projects detailed. The…

  15. STS-1 operational flight profile. Volume 6: Abort analysis

    NASA Technical Reports Server (NTRS)

    1980-01-01

The abort analysis for the cycle 3 Operational Flight Profile (OFP) for the Space Transportation System 1 Flight (STS-1) is defined, superseding the abort analysis previously presented. Included are the flight description, abort analysis summary, flight design groundrules and constraints, initialization information, general abort description and results, abort solid rocket booster and external tank separation and disposal results, abort monitoring displays and discussion of both ground and onboard trajectory monitoring, the abort initialization load summary for the onboard computer, and a list of the key abort powered flight dispersion analyses.

  16. Image-based segmentation for characterization and quantitative analysis of the spinal cord injuries by using diffusion patterns

    NASA Astrophysics Data System (ADS)

    Hannula, Markus; Olubamiji, Adeola; Kunttu, Iivari; Dastidar, Prasun; Soimakallio, Seppo; Öhman, Juha; Hyttinen, Jari

    2011-03-01

In medical imaging, magnetic resonance imaging sequences can provide information about damaged brain structure and neuronal connections. The sequences can be analyzed to form 3D models of the geometry, which can further be combined with functional information about the neurons of a specific brain area to develop functional models. Modeling offers a tool for characterizing brain trauma from patient images and thus provides information to tailor the properties of transplanted cells. In this paper, we present image-based methods for the analysis of human spinal cord injuries. We use three-dimensional diffusion tensor imaging, an effective method for analyzing the diffusion of water molecules. Our aim is to study how the injury affects the tissues and how this can be made visible in imaging. We present a study of spinal cord analysis in two subjects: one healthy volunteer and one spinal cord injury patient. We performed segmentations and volumetric analysis to detect anatomical differences; functional differences were analyzed using diffusion tensor imaging. The obtained results show that this kind of analysis is capable of finding differences in spinal cord anatomy and function.

  17. Evaluation of automated brain MR image segmentation and volumetry methods.

    PubMed

    Klauschen, Frederick; Goldman, Aaron; Barra, Vincent; Meyer-Lindenberg, Andreas; Lundervold, Arvid

    2009-04-01

We compare three widely used brain volumetry methods available in the software packages FSL, SPM5, and FreeSurfer and evaluate their performance using simulated and real MR brain data sets. We analyze the accuracy of gray and white matter volume measurements and their robustness against changes of image quality using the BrainWeb MRI database. These images are based on "gold-standard" reference brain templates. This allows us to assess between-segmenter (same data set, different method) and within-segmenter (same method, variation of image quality) comparability, for both of which we find pronounced variations in segmentation results for gray and white matter volumes. The calculated volumes deviate up to >10% from the reference values for gray and white matter depending on method and image quality. Sensitivity was best for SPM5; volumetric accuracy for gray and white matter was similar in SPM5 and FSL and better than in FreeSurfer. FSL showed the highest stability for white matter (<5%), and FreeSurfer for gray matter (6.2%), on BrainWeb data of constant image quality. Between-segmenter comparisons show discrepancies of up to >20% for the simulated data and 24% on average for the real data sets, whereas within-method performance analysis uncovered volume differences of up to >15%. Since the discrepancies between results reach the same order of magnitude as volume changes observed in disease, these effects limit the usability of the segmentation methods for following volume changes in individual patients over time and should be taken into account during the planning and analysis of brain volume studies. PMID:18537111

  18. Measurement and analysis of grain boundary grooving by volume diffusion

    NASA Technical Reports Server (NTRS)

    Hardy, S. C.; Mcfadden, G. B.; Coriell, S. R.; Voorhees, P. W.; Sekerka, R. F.

    1991-01-01

    Experimental measurements of isothermal grain boundary grooving by volume diffusion are carried out for Sn bicrystals in the Sn-Pb system near the eutectic temperature. The dimensions of the groove increase with a temporal exponent of 1/3, and measurement of the associated rate constant allows the determination of the product of the liquid diffusion coefficient D and the capillarity length Gamma associated with the interfacial free energy of the crystal-melt interface. The small-slope theory of Mullins is generalized to the entire range of dihedral angles by using a boundary integral formulation of the associated free boundary problem, and excellent agreement with experimental groove shapes is obtained. By using the diffusivity measured by Jordon and Hunt, the present measured values of Gamma are found to agree to within 5 percent with the values obtained from experiments by Gunduz and Hunt on grain boundary grooving in a temperature gradient.
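The t^(1/3) growth law reported above can be recovered numerically: given groove-width measurements w = k t^(1/3), a least-squares fit in log-log coordinates yields the temporal exponent as the slope and the rate constant from the intercept. A minimal sketch on synthetic data (the values of k and t here are illustrative, not the experimental measurements):

```python
import numpy as np

# Synthetic groove widths obeying w = k * t**(1/3); k_true is illustrative.
k_true = 2.0
t = np.array([1.0, 8.0, 27.0, 64.0, 125.0])
w = k_true * t ** (1.0 / 3.0)

# Fit log w = log k + n log t: the slope n is the temporal exponent
# and exp(intercept) recovers the rate constant k.
n, log_k = np.polyfit(np.log(t), np.log(w), 1)
```

With real data the fitted rate constant, divided by the theoretical kinetic prefactor, gives the product D·Gamma discussed in the abstract.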

  19. Economic analysis of the space shuttle system, volume 1

    NASA Technical Reports Server (NTRS)

    1972-01-01

An economic analysis of the space shuttle system is presented. The analysis is based on economic benefits, recurring costs, non-recurring costs, and economic tradeoff functions. The most economic space shuttle configuration is determined on the basis of: (1) the objectives of a reusable space transportation system, (2) the various space transportation systems considered, and (3) alternative space shuttle systems.

  20. Space shuttle navigation analysis. Volume 2: Baseline system navigation

    NASA Technical Reports Server (NTRS)

    Jones, H. L.; Luders, G.; Matchett, G. A.; Rains, R. G.

    1980-01-01

    Studies related to the baseline navigation system for the orbiter are presented. The baseline navigation system studies include a covariance analysis of the Inertial Measurement Unit calibration and alignment procedures, postflight IMU error recovery for the approach and landing phases, on-orbit calibration of IMU instrument biases, and a covariance analysis of entry and prelaunch navigation system performance.

  1. Price-volume multifractal analysis and its application in Chinese stock markets

    NASA Astrophysics Data System (ADS)

    Yuan, Ying; Zhuang, Xin-tian; Liu, Zhi-ying

    2012-06-01

An empirical study of Chinese stock markets is conducted using statistical tools. First, the multifractality of the stock price return series, ri (ri=ln(Pt+1)-ln(Pt)), and the trading volume variation series, vi (vi=ln(Vt+1)-ln(Vt)), is confirmed using multifractal detrended fluctuation analysis. Furthermore, a multifractal detrended cross-correlation analysis between stock price return and trading volume variation in Chinese stock markets is also conducted; the cross relationship between them is found to be multifractal as well. Second, the cross-correlation between stock price Pi and trading volume Vi is empirically studied using the cross-correlation function and detrended cross-correlation analysis. Both the Shanghai and the Shenzhen stock markets show pronounced long-range cross-correlations between stock price and trading volume. Third, a composite index R based on price and trading volume is introduced. Compared with the stock price return series ri and the trading volume variation series vi, the R variation series not only retains the characteristics of the original series but also captures the relative correlation between stock price and trading volume. Finally, we analyze the multifractal characteristics of the R variation series before and after three financial events in China (namely, the Price Limits, the Reform of Non-tradable Shares, and the 2008 financial crisis) over the whole sample period to study the changes in stock market fluctuation and financial risk. The empirical results verify the validity of R.
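The series construction and the multifractal DFA step described above can be sketched generically: build the log-change series, integrate it into a profile, detrend the profile in windows of size s with a local polynomial, and aggregate the per-window variances with the q-th moment. This is a simplified, generic MF-DFA on synthetic data, not the authors' code:

```python
import numpy as np

def log_changes(series):
    """Log-change series, e.g. r_i = ln(P_{i+1}) - ln(P_i)."""
    return np.diff(np.log(series))

def mfdfa_fq(x, s, q, order=1):
    """q-th-order detrended fluctuation F_q(s) at window size s."""
    profile = np.cumsum(x - np.mean(x))          # integrated profile
    n_seg = len(profile) // s
    f2 = np.empty(n_seg)
    t = np.arange(s)
    for i in range(n_seg):
        seg = profile[i * s:(i + 1) * s]
        trend = np.polyval(np.polyfit(t, seg, order), t)  # local poly trend
        f2[i] = np.mean((seg - trend) ** 2)      # per-window variance
    if q == 0:                                   # logarithmic average at q = 0
        return float(np.exp(0.5 * np.mean(np.log(f2))))
    return float(np.mean(f2 ** (q / 2.0)) ** (1.0 / q))

# Toy price path; r plays the role of the return series in the abstract.
rng = np.random.default_rng(0)
prices = np.exp(np.cumsum(rng.normal(0.0, 0.01, 2049)))
r = log_changes(prices)
f_small, f_large = mfdfa_fq(r, 16, 2), mfdfa_fq(r, 128, 2)
```

The generalized Hurst exponents h(q) are the slopes of log F_q(s) versus log s; q-dependent slopes indicate multifractality.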

  2. Three stage level set segmentation of mass core, periphery, and spiculations for automated image analysis of digital mammograms

    NASA Astrophysics Data System (ADS)

    Ball, John Eugene

    In this dissertation, level set methods are employed to segment masses in digital mammographic images and to classify land cover classes in hyperspectral data. For the mammography computer aided diagnosis (CAD) application, level set-based segmentation methods are designed and validated for mass-periphery segmentation, spiculation segmentation, and core segmentation. The proposed periphery segmentation uses the narrowband level set method in conjunction with an adaptive speed function based on a measure of the boundary complexity in the polar domain. The boundary complexity term is shown to be beneficial for delineating challenging masses with ill-defined and irregularly shaped borders. The proposed method is shown to outperform periphery segmentation methods currently reported in the literature. The proposed mass spiculation segmentation uses a generalized form of the Dixon and Taylor Line Operator along with narrowband level sets using a customized speed function. The resulting spiculation features are shown to be very beneficial for classifying the mass as benign or malignant. For example, when using patient age and texture features combined with a maximum likelihood (ML) classifier, the spiculation segmentation method increases the overall accuracy to 92% with 2 false negatives as compared to 87% with 4 false negatives when using periphery segmentation approaches. The proposed mass core segmentation uses the Chan-Vese level set method with a minimal variance criterion. The resulting core features are shown to be effective and comparable to periphery features, and are shown to reduce the number of false negatives in some cases. Most mammographic CAD systems use only a periphery segmentation, so those systems could potentially benefit from core features.

  3. Ventriculogram segmentation using boosted decision trees

    NASA Astrophysics Data System (ADS)

    McDonald, John A.; Sheehan, Florence H.

    2004-05-01

Left ventricular status, reflected in ejection fraction or end systolic volume, is a powerful prognostic indicator in heart disease. Quantitative analysis of these and other parameters from ventriculograms (cine x-rays of the left ventricle) is infrequently performed due to the labor required for manual segmentation. None of the many methods developed for automated segmentation has achieved clinical acceptance. We present a method for semi-automatic segmentation of ventriculograms based on a very accurate two-stage boosted decision-tree pixel classifier. The classifier determines which pixels are inside the ventricle at key ED (end-diastole) and ES (end-systole) frames. The test misclassification rate is about 1%. The classifier is semi-automatic, requiring a user to select 3 points in each frame: the endpoints of the aortic valve and the apex. The first classifier stage is 2 boosted decision-trees, trained using features such as gray-level statistics (e.g., median brightness) and image geometry (e.g., coordinates relative to the 3 user-supplied points). Second-stage classifiers are trained using the same features as the first, plus the output of the first stage. Border pixels are determined from the segmented images using dilation and erosion. A curve is then fit to the border pixels, minimizing a penalty function that trades off fidelity to the border pixels with smoothness. ED and ES volumes, and ejection fraction, are estimated from border curves using standard area-length formulas. On independent test data, the differences between automatic and manual volumes (and ejection fractions) are similar in size to the differences between two human observers.
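The final estimation step above uses standard area-length formulas; one common single-plane variant computes the volume from the projected area A and long-axis length L as V = 8A²/(3πL), and the ejection fraction follows from the ED and ES volumes. A small sketch (the specific formula variant and the numbers are illustrative, not necessarily the ones used in the paper):

```python
import math

def area_length_volume(area_cm2, length_cm):
    """Single-plane area-length volume estimate: V = 8*A^2 / (3*pi*L), in mL."""
    return 8.0 * area_cm2 ** 2 / (3.0 * math.pi * length_cm)

def ejection_fraction(edv_ml, esv_ml):
    """Fraction of the end-diastolic volume ejected on each beat."""
    return (edv_ml - esv_ml) / edv_ml

edv = area_length_volume(30.0, 8.0)   # end-diastolic frame (toy values)
esv = area_length_volume(18.0, 7.0)   # end-systolic frame (toy values)
ef = ejection_fraction(edv, esv)
```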

  4. The ACODEA Framework: Developing Segmentation and Classification Schemes for Fully Automatic Analysis of Online Discussions

    ERIC Educational Resources Information Center

    Mu, Jin; Stegmann, Karsten; Mayfield, Elijah; Rose, Carolyn; Fischer, Frank

    2012-01-01

    Research related to online discussions frequently faces the problem of analyzing huge corpora. Natural Language Processing (NLP) technologies may allow automating this analysis. However, the state-of-the-art in machine learning and text mining approaches yields models that do not transfer well between corpora related to different topics. Also,…

  6. Image Analysis Software Based on Color Segmentation for Characterization of Viability and Physiological Activity of Biofilms

    PubMed Central

    Chávez de Paz, Luis E.

    2009-01-01

    The novel image analysis software package bioImage_L was tested to calculate biofilm structural parameters in oral biofilms stained with dual-channel fluorescent markers. By identifying color tonalities in situ, the software independently processed the color subpopulations and characterized the viability and metabolic activity of biofilms. PMID:19139239

  7. Ceramic component development analysis -- Volume 1. Final report

    SciTech Connect

    Boss, D.E.

    1998-06-09

The development of advanced filtration media for advanced fossil-fueled power generating systems is a critical step in meeting the performance and emissions requirements for these systems. While porous metal and ceramic candle-filters have been available for some time, the next generation of filters will include ceramic-matrix composites (CMCs) (Techniweave/Westinghouse, Babcock and Wilcox (B and W), DuPont Lanxide Composites), intermetallic alloys (Pall Corporation), and alternate filter geometries (CeraMem Separations). The goal of this effort was to perform a cursory review of the manufacturing processes used by 5 companies developing advanced filters, from the perspective of process repeatability and the ability of their processes to be scaled up to production volumes. Given the brief nature of the on-site reviews, only an overview of the processes and systems could be obtained. Each of the 5 companies had developed some level of manufacturing and quality assurance documentation, with most of the companies leveraging the procedures from other products they manufacture. It was found that all of the filter manufacturers had a solid understanding of the product development path. Given that these filters are largely developmental, significant additional work is necessary to understand the process-performance relationships and to project manufacturing costs.

  8. Viscous wing theory development. Volume 1: Analysis, method and results

    NASA Technical Reports Server (NTRS)

    Chow, R. R.; Melnik, R. E.; Marconi, F.; Steinhoff, J.

    1986-01-01

    Viscous transonic flows at large Reynolds numbers over 3-D wings were analyzed using a zonal viscid-inviscid interaction approach. A new numerical AFZ scheme was developed in conjunction with the finite volume formulation for the solution of the inviscid full-potential equation. A special far-field asymptotic boundary condition was developed and a second-order artificial viscosity included for an improved inviscid solution methodology. The integral method was used for the laminar/turbulent boundary layer and 3-D viscous wake calculation. The interaction calculation included the coupling conditions of the source flux due to the wing surface boundary layer, the flux jump due to the viscous wake, and the wake curvature effect. A method was also devised incorporating the 2-D trailing edge strong interaction solution for the normal pressure correction near the trailing edge region. A fully automated computer program was developed to perform the proposed method with one scalar version to be used on an IBM-3081 and two vectorized versions on Cray-1 and Cyber-205 computers.

  9. SLUDGE TREATMENT PROJECT ALTERNATIVES ANALYSIS SUMMARY REPORT [VOLUME 1

    SciTech Connect

    FREDERICKSON JR; ROURK RJ; HONEYMAN JO; JOHNSON ME; RAYMOND RE

    2009-01-19

Highly radioactive sludge (containing up to 300,000 curies of actinides and fission products) resulting from the storage of degraded spent nuclear fuel is currently stored in temporary containers located in the 105-K West storage basin near the Columbia River. The background, history, and known characteristics of this sludge are discussed in Section 2 of this report. There are many compelling reasons to remove this sludge from the K-Basin. These reasons are discussed in detail in Section 1, and they include the following: (1) Reduce the risk to the public (from a potential release of highly radioactive material as fine respirable particles by airborne or waterborne pathways); (2) Reduce the overall risk to the Hanford worker; and (3) Reduce the risk to the environment (the K-Basin is situated above a hazardous chemical contaminant plume and hinders remediation of the plume until the sludge is removed). The DOE-RL has stated that a key DOE objective is to remove the sludge from the K-West Basin and River Corridor as soon as possible, which will reduce risks to the environment, allow for remediation of contaminated areas underlying the basins, and support closure of the 100-KR-4 operable unit. The environmental and nuclear safety risks associated with this sludge have resulted in multiple legal and regulatory remedial action decisions, plans, and commitments that are summarized in Table ES-1 and discussed in more detail in Volume 2, Section 9.

  10. Volume-Rendering-Based Interactive 3D Measurement for Quantitative Analysis of 3D Medical Images

    PubMed Central

    Dai, Yakang; Yang, Yuetao; Kuai, Duojie; Yang, Xiaodong

    2013-01-01

    3D medical images are widely used to assist diagnosis and surgical planning in clinical applications, where quantitative measurement of interesting objects in the image is of great importance. Volume rendering is widely used for qualitative visualization of 3D medical images. In this paper, we introduce a volume-rendering-based interactive 3D measurement framework for quantitative analysis of 3D medical images. In the framework, 3D widgets and volume clipping are integrated with volume rendering. Specifically, 3D plane widgets are manipulated to clip the volume to expose interesting objects. 3D plane widgets, 3D line widgets, and 3D angle widgets are then manipulated to measure the areas, distances, and angles of interesting objects. The methodology of the proposed framework is described. Experimental results indicate the performance of the interactive 3D measurement framework. PMID:23762199
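The distance and angle measurements that the line and angle widgets expose reduce to elementary vector geometry on the picked 3D points. A generic sketch of those two computations (not tied to the framework's API; function names are illustrative):

```python
import math

def distance(p, q):
    """Euclidean distance between two 3D points."""
    return math.dist(p, q)

def angle_deg(a, vertex, b):
    """Angle (in degrees) at `vertex` between the rays toward a and b."""
    u = [ai - vi for ai, vi in zip(a, vertex)]
    v = [bi - vi for bi, vi in zip(b, vertex)]
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.hypot(*u) * math.hypot(*v)
    return math.degrees(math.acos(dot / norm))
```

In a real measurement tool the points would come from widget handles placed in the rendered volume; here they are plain coordinate tuples.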

  11. Integration of sneak analysis with design, volume 1

    NASA Astrophysics Data System (ADS)

    Miller, Jeff

    1990-06-01

This report documents the creation of a software package to be used by a design engineer to prevent sneak circuit failures in a new design. An interim report, Sneak Circuit Analysis for the Common Man, presented the manual procedure for identifying possible sneak circuits; this report presents the automated version, to be used on an IBM PC under MS/DOS. The Sneak Circuit Analysis (SCA) software package uses the ORCAD II schematic capture program to analyze the circuitry. SCA searches for potential sneak paths, identifies them for the user, and then offers suggestions to correct the design weaknesses. The software package handles analog as well as digital circuits, and for very large networks a sectional analysis is possible.

  12. Finite element analysis of laminated plates and shells, volume 1

    NASA Technical Reports Server (NTRS)

    Seide, P.; Chang, P. N. H.

    1978-01-01

    The finite element method is used to investigate the static behavior of laminated composite flat plates and cylindrical shells. The analysis incorporates the effects of transverse shear deformation in each layer through the assumption that the normals to the undeformed layer midsurface remain straight but need not be normal to the mid-surface after deformation. A digital computer program was developed to perform the required computations. The program includes a very efficient equation solution code which permits the analysis of large size problems. The method is applied to the problem of stretching and bending of a perforated curved plate.

  13. A system for the quantitative analysis of bone metastases by image segmentation

    SciTech Connect

    Erdi, Y.E.; Humm, J.L.; Yeung, H.

    1996-12-31

Preliminary evidence indicates that the fraction of bone containing metastatic lesions is a strong prognostic indicator of survival longevity for prostate and breast cancer. To quantify metastatic lesions, the most common method is to visually inspect each bone and determine the percent involvement by drawing regions of interest. However, this approach is time-consuming, subjective, and dependent upon individual interpretation. To overcome these problems, a semi-automated region-growing program was developed for the quantitation of metastases from planar bone scans. The program then computes the fraction of lesion involvement in each bone based on look-up tables containing the relationship of bone weight with race, sex, height, and age. The bone metastases analysis system has been used on 11 scans from 6 patients. The correlation was high (r=0.83) between the conventional method (manually drawn regions of interest) and this analysis system. Bone metastases analysis results in consistently lower estimates of fractional involvement in bone compared to the conventional region-of-interest drawing or visual estimation method. This is due to the apparent broadening of objects at and below the limits of resolution of the gamma camera. The bone metastases (BMets) analysis system reduces the delineation and quantitation time of lesions by at least a factor of 2 compared to manual region-of-interest drawing. The objectivity of this technique allows the detection of small variations in follow-up patient scans for which the manual region-of-interest method may fail, due to performance variability of the user. This method preserves the diagnostic skill of the nuclear medicine physician to select which bony structures contain lesions, yet combines it with an objective delineation of the lesion.
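The semi-automated region growing used for lesion delineation can be illustrated with a minimal 4-connected, intensity-threshold grower; this is a generic sketch on a toy image, not the authors' implementation, and the threshold rule is an assumption:

```python
from collections import deque

def region_grow(img, seed, thresh):
    """4-connected region growing from a seed pixel: accept neighbors whose
    intensity is within `thresh` of the seed pixel's intensity."""
    h, w = len(img), len(img[0])
    seed_val = img[seed[0]][seed[1]]
    region = {seed}
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(img[nr][nc] - seed_val) <= thresh):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region

# Toy scan: low-count background (0) with a bright lesion region (9).
img = [[9, 9, 0],
       [9, 0, 0],
       [0, 0, 0]]
lesion = region_grow(img, (0, 0), thresh=1)           # grows over the three 9s
involvement = len(lesion) / (len(img) * len(img[0]))  # fraction of pixels
```

In the actual system this per-bone pixel fraction would additionally be weighted by the bone's share of skeletal weight from the look-up tables.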

  14. Structural Analysis and Testing of an Erectable Truss for Precision Segmented Reflector Application

    NASA Technical Reports Server (NTRS)

    Collins, Timothy J.; Fichter, W. B.; Adams, Richard R.; Javeed, Mehzad

    1995-01-01

    This paper describes analysis and test results obtained at Langley Research Center (LaRC) on a doubly curved testbed support truss for precision reflector applications. Descriptions of test procedures and experimental results that expand upon previous investigations are presented. A brief description of the truss is given, and finite-element-analysis models are described. Static-load and vibration test procedures are discussed, and experimental results are shown to be repeatable and in generally good agreement with linear finite-element predictions. Truss structural performance (as determined by static deflection and vibration testing) is shown to be predictable and very close to linear. Vibration test results presented herein confirm that an anomalous mode observed during initial testing was due to the flexibility of the truss support system. Photogrammetric surveys with two 131-in. reference scales show that the root-mean-square (rms) truss-surface accuracy is about 0.0025 in. Photogrammetric measurements also indicate that the truss coefficient of thermal expansion (CTE) is in good agreement with that predicted by analysis. A detailed description of the photogrammetric procedures is included as an appendix.

  15. Cost-volume-profit and net present value analysis of health information systems.

    PubMed

    McLean, R A

    1998-08-01

The adoption of any information system should be justified by an economic analysis demonstrating that its projected benefits outweigh its projected costs. Analysts differ, however, on which methods to employ for such a justification. Accountants prefer cost-volume-profit analysis, and economists prefer net present value analysis. The article explains the strengths and weaknesses of each method and shows how they can be used together so that well-informed investments in information systems can be made. PMID:10181911
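The two methods the article contrasts reduce to simple formulas: cost-volume-profit analysis finds the break-even volume Q = F/(p − v) from fixed cost F, unit price p, and unit variable cost v, while net present value discounts the projected cash flows. A sketch with illustrative numbers:

```python
def breakeven_volume(fixed_cost, unit_price, unit_variable_cost):
    """Cost-volume-profit break-even volume: Q = F / (p - v)."""
    return fixed_cost / (unit_price - unit_variable_cost)

def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs now, the rest at annual intervals."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

q = breakeven_volume(100000.0, 25.0, 5.0)   # units needed to cover fixed costs
value = npv(0.08, [-120000.0, 50000.0, 50000.0, 50000.0])  # accept if > 0
```

Used together, CVP answers "how much activity makes this system pay for itself?" while NPV answers "is the whole investment worth making at our discount rate?"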

  16. Space tug economic analysis study. Volume 2: Tug concepts analysis. Part 2: Economic analysis

    NASA Technical Reports Server (NTRS)

    1972-01-01

    An economic analysis of space tug operations is presented. The subjects discussed are: (1) cost uncertainties, (2) scenario analysis, (3) economic sensitivities, (4) mixed integer programming formulation of the space tug problem, and (5) critical parameters in the evaluation of a public expenditure.

  17. Improving image segmentation performance and quantitative analysis via a computer-aided grading methodology for optical coherence tomography retinal image analysis

    NASA Astrophysics Data System (ADS)

    Cabrera Debuc, Delia; Salinas, Harry M.; Ranganathan, Sudarshan; Tátrai, Erika; Gao, Wei; Shen, Meixiao; Wang, Jianhua; Somfai, Gábor M.; Puliafito, Carmen A.

    2010-07-01

    We demonstrate quantitative analysis and error correction of optical coherence tomography (OCT) retinal images by using a custom-built, computer-aided grading methodology. A total of 60 Stratus OCT (Carl Zeiss Meditec, Dublin, California) B-scans collected from ten normal healthy eyes are analyzed by two independent graders. The average retinal thickness per macular region is compared with the automated Stratus OCT results. Intergrader and intragrader reproducibility is calculated by Bland-Altman plots of the mean difference between both gradings and by Pearson correlation coefficients. In addition, the correlation between Stratus OCT and our methodology-derived thickness is also presented. The mean thickness difference between Stratus OCT and our methodology is 6.53 μm and 26.71 μm when using the inner segment/outer segment (IS/OS) junction and outer segment/retinal pigment epithelium (OS/RPE) junction as the outer retinal border, respectively. Overall, the median of the thickness differences as a percentage of the mean thickness is less than 1% and 2% for the intragrader and intergrader reproducibility test, respectively. The measurement accuracy range of the OCT retinal image analysis (OCTRIMA) algorithm is between 0.27 and 1.47 μm and 0.6 and 1.76 μm for the intragrader and intergrader reproducibility tests, respectively. Pearson correlation coefficients demonstrate R2>0.98 for all Early Treatment Diabetic Retinopathy Study (ETDRS) regions. Our methodology facilitates a more robust and localized quantification of the retinal structure in normal healthy controls and patients with clinically significant intraretinal features.

  18. Underground Test Area Subproject Phase I Data Analysis Task. Volume VIII - Risk Assessment Documentation Package

    SciTech Connect

    1996-12-01

    Volume VIII of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment for the Nevada Test Site Underground Test Area Subproject contains the risk assessment documentation. Because of the size and complexity of the model area, a considerable quantity of data was collected and analyzed in support of the modeling efforts. The data analysis task was consequently broken into eight subtasks, and descriptions of each subtask's activities are contained in one of the eight volumes that comprise the Phase I Data Analysis Documentation.

  19. Underground Test Area Subproject Phase I Data Analysis Task. Volume II - Potentiometric Data Document Package

    SciTech Connect

    1996-12-01

    Volume II of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment for the Nevada Test Site Underground Test Area Subproject contains the potentiometric data. Because of the size and complexity of the model area, a considerable quantity of data was collected and analyzed in support of the modeling efforts. The data analysis task was consequently broken into eight subtasks, and descriptions of each subtask's activities are contained in one of the eight volumes that comprise the Phase I Data Analysis Documentation.

  20. Underground Test Area Subproject Phase I Data Analysis Task. Volume VII - Tritium Transport Model Documentation Package

    SciTech Connect

    1996-12-01

    Volume VII of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment for the Nevada Test Site Underground Test Area Subproject contains the tritium transport model documentation. Because of the size and complexity of the model area, a considerable quantity of data was collected and analyzed in support of the modeling efforts. The data analysis task was consequently broken into eight subtasks, and descriptions of each subtask's activities are contained in one of the eight volumes that comprise the Phase I Data Analysis Documentation.

  1. Underground Test Area Subproject Phase I Data Analysis Task. Volume VI - Groundwater Flow Model Documentation Package

    SciTech Connect

    1996-11-01

    Volume VI of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment for the Nevada Test Site Underground Test Area Subproject contains the groundwater flow model data. Because of the size and complexity of the model area, a considerable quantity of data was collected and analyzed in support of the modeling efforts. The data analysis task was consequently broken into eight subtasks, and descriptions of each subtask's activities are contained in one of the eight volumes that comprise the Phase I Data Analysis Documentation.

  2. Bivariate analysis of flood peaks and volumes using copulas. An application to the Danube River

    NASA Astrophysics Data System (ADS)

    Papaioannou, George; Bacigal, Tomas; Jeneiova, Katarina; Kohnová, Silvia; Szolgay, Jan; Loukas, Athanasios

    2014-05-01

    A multivariate analysis of flood variables such as flood peaks, volumes, and durations is essential for the design of hydrotechnical projects. Many authors have suggested the use of bivariate distributions for the frequency analysis of flood peaks and volumes, under the assumption that the marginal probability distribution type is the same for both variables. The application of copulas, which are gradually becoming widespread, can overcome this constraint. The selection of appropriate copula types/families is not treated extensively in the literature and remains a challenge in copula analysis. In this study, a bivariate copula analysis using different copula families is carried out on flood peaks and the corresponding volumes along a river. This bivariate analysis of flood peaks and volumes is based on more than 100 years of daily streamflow data from several gauged stations on the Danube River. The methodology uses annual maximum flood peaks (AMF) together with independent annual maximum volumes of fixed durations of 5, 10, 15, 20, 25, 30 and 60 days. The correlation of the discharge-volume pairs is examined using Kendall's tau. The copula families selected for the bivariate modeling of the extracted discharge-volume pairs are the Archimedean, extreme-value, and other copula families. The performance of the copulas is evaluated using scatterplots of the observed and bootstrapped simulated pairs, together with formal goodness-of-fit tests, and the suitability of the copulas is compared statistically. Archimedean copulas (e.g., Frank and Clayton) proved more capable of bivariate flood modeling than the other examined copula families at the Danube River. Overall, the results show that copulas are effective tools for bivariate modeling of the two random variables studied.
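    The dependence-screening step described above (Kendall's tau on peak-volume pairs, then fitting an Archimedean copula) can be sketched in a few lines. This is a minimal illustration on synthetic data, not the study's Danube series; the Clayton parameter is estimated by the standard moment-style inversion of Kendall's tau, theta = 2*tau / (1 - tau).

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Synthetic stand-in for paired annual maximum flood peaks and
    # fixed-duration maximum volumes (hypothetical units).
    n = 200
    peaks = rng.gumbel(loc=5000.0, scale=900.0, size=n)
    volumes = 0.4 * peaks + rng.normal(0.0, 300.0, size=n)

    # Rank-based dependence between the discharge-volume pairs.
    tau, p_value = stats.kendalltau(peaks, volumes)

    # Clayton copula parameter via inversion of Kendall's tau:
    # tau = theta / (theta + 2)  =>  theta = 2*tau / (1 - tau)
    theta = 2.0 * tau / (1.0 - tau)
    print(f"Kendall's tau = {tau:.3f}, Clayton theta = {theta:.3f}")
    ```

    In practice the fitted copula would then be used to simulate peak-volume pairs and compared against the observed pairs with bootstrap-based goodness-of-fit tests, as the abstract describes.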

  3. Satellite power systems (SPS) concept definition study. Volume 7: SPS program plan and economic analysis, appendixes

    NASA Technical Reports Server (NTRS)

    Hanley, G.

    1978-01-01

    Three appendixes in support of Volume 7 are contained in this document. The three appendixes are: (1) Satellite Power System Work Breakdown Structure Dictionary; (2) SPS cost Estimating Relationships; and (3) Financial and Operational Concept. Other volumes of the final report that provide additional detail are: Executive Summary; SPS Systems Requirements; SPS Concept Evolution; SPS Point Design Definition; Transportation and Operations Analysis; and SPS Technology Requirements and Verification.

  4. Spaceborne power systems preference analyses. Volume 2: Decision analysis

    NASA Technical Reports Server (NTRS)

    Smith, J. H.; Feinberg, A.; Miles, R. F., Jr.

    1985-01-01

    Sixteen alternative spaceborne nuclear power system concepts were ranked using multiattribute decision analysis. The purpose of the ranking was to identify promising concepts for further technology development and the issues associated with such development. Four groups were interviewed to obtain preferences: safety, systems definition and design, technology assessment, and mission analysis. The highest ranked systems were the heat-pipe ther
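    A multiattribute ranking of this kind can be illustrated with a simple weighted-sum model. The weights, concept names, and scores below are invented for illustration; the actual study elicited preferences from the four stakeholder groups and evaluated sixteen real concepts.

    ```python
    import numpy as np

    # Hypothetical attribute weights (e.g., reflecting the four interviewed
    # perspectives: safety, design, technology readiness, mission fit).
    weights = np.array([0.4, 0.3, 0.2, 0.1])

    # Hypothetical scores (0-10) for three illustrative concepts.
    concepts = {
        "Concept A": np.array([8, 6, 7, 5]),
        "Concept B": np.array([5, 9, 6, 8]),
        "Concept C": np.array([7, 7, 7, 7]),
    }

    # Weighted-sum utility for each concept, then rank best-first.
    utilities = {name: float(weights @ scores) for name, scores in concepts.items()}
    ranking = sorted(utilities, key=utilities.get, reverse=True)
    print(ranking)  # best-ranked concept first
    ```

    Full multiattribute decision analysis typically also checks how sensitive the ranking is to the elicited weights, which is how group-to-group differences in preference surface.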