Science.gov

Sample records for volume segmentation analysis

  1. Economic Analysis. Volume I. Course Segments 4-15.

    ERIC Educational Resources Information Center

    Sterling Inst., Washington, DC. Educational Technology Center.

    The first volume of the United States Naval Academy's individualized instruction course in economic analysis covers segments 4-15 of the course. Topics in this introduction include the nature and methods of economics, production possibilities, demand, supply and equilibrium, and the concept of the circular flow. Other segments of the course, the…

  2. Segmentation-based method incorporating fractional volume analysis for quantification of brain atrophy on magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Wang, Deming; Doddrell, David M.

    2001-07-01

    Partial volume effect is a major problem in brain tissue segmentation on digital images such as magnetic resonance (MR) images. In this paper, special attention has been paid to partial volume effect when developing a method for quantifying brain atrophy. Specifically, partial volume effect is minimized in the process of parameter estimation prior to segmentation by identifying and excluding those voxels with possible partial volume effect. A quantitative measure for partial volume effect was also introduced by developing a model that calculates fractional volumes for voxels containing mixtures of two different tissues. For quantifying cerebrospinal fluid (CSF) volumes, fractional volumes are calculated for two classes of mixture: gray matter with CSF, and white matter with CSF. Tissue segmentation is carried out using 1D and 2D thresholding techniques after images are intensity-corrected. Threshold values are estimated using the minimum error method. Morphological processing and region identification analysis are used extensively in the algorithm. As an application, the method was employed to evaluate rates of brain atrophy from serially acquired structural brain MR images. Consistent and accurate rates of brain atrophy have been obtained for patients with Alzheimer's disease as well as for elderly subjects undergoing the normal aging process.
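    The two-tissue fractional volume idea described above can be sketched as follows. This is only a minimal illustration of a linear mixture model (voxel intensity as a volume-weighted average of two pure-tissue mean intensities), not the paper's actual implementation; the function name, variable names, and intensity values are hypothetical.

```python
import numpy as np

def fractional_volume(intensity, mu_csf, mu_tissue):
    """Fraction of CSF in a voxel under a two-tissue linear mixture model.

    Assumes voxel intensity is a volume-weighted average of the two
    pure-tissue mean intensities (mu_csf, mu_tissue); the fraction is
    clipped to [0, 1] for voxels outside the mixture range.
    """
    f = (intensity - mu_tissue) / (mu_csf - mu_tissue)
    return np.clip(f, 0.0, 1.0)

# Toy example: CSF is dark (mean 100), gray matter is brighter (mean 300).
voxels = np.array([100.0, 200.0, 300.0, 350.0])
f = fractional_volume(voxels, mu_csf=100.0, mu_tissue=300.0)
# Per-voxel CSF fractions: 1.0, 0.5, 0.0, 0.0 (last one clipped)
csf_volume = f.sum()  # fractional CSF volume in voxel units
```

    Summing the fractions, rather than counting whole voxels above a threshold, is what lets a model like this reduce partial-volume bias in the CSF volume estimate.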

  3. Volume analysis of treatment response of head and neck lesions using 3D level set segmentation

    NASA Astrophysics Data System (ADS)

    Hadjiiski, Lubomir; Street, Ethan; Sahiner, Berkman; Gujar, Sachin; Ibrahim, Mohannad; Chan, Heang-Ping; Mukherji, Suresh K.

    2008-03-01

    A computerized system for segmenting lesions in head and neck CT scans was developed to assist radiologists in estimating the response of malignant lesions to treatment. The system performs 3D segmentation based on a level set model and uses as input an approximate bounding box for the lesion of interest. In this preliminary study, CT scans from a pre-treatment exam and a post one-cycle chemotherapy exam of 13 patients containing head and neck neoplasms were used. A radiologist marked 35 temporal pairs of lesions; 13 pairs were primary site cancers and 22 pairs were metastatic lymph nodes. For all lesions, a radiologist outlined a contour on the best slice on both the pre- and post-treatment scans. For the 13 primary lesion pairs, full 3D contours were also extracted by a radiologist. The average pre- and post-treatment areas on the best slices for all lesions were 4.5 and 2.1 cm2, respectively. For the 13 primary site pairs, the average pre- and post-treatment primary lesion volumes were 15.4 and 6.7 cm3, respectively. The correlation between the automatic and manual estimates for the pre-to-post-treatment change in area for all 35 pairs was r=0.97, while the correlation for the percent change in area was r=0.80. The correlation for the change in volume for the 13 primary site pairs was r=0.89, while the correlation for the percent change in volume was r=0.79. The average signed percent error between the automatic and manual areas for all 70 lesions was 11.0+/-20.6%. The average signed percent error between the automatic and manual volumes for all 26 primary lesions was 37.8+/-42.1%. These preliminary results indicate that the automated segmentation system can reliably estimate tumor size change in response to treatment relative to the radiologist's hand segmentations.
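    The reported r values compare automatic and manual estimates of lesion change. A small sketch of how such percent-change correlations can be computed; all paired measurements below are invented, not the study's data.

```python
import numpy as np

def percent_change(pre, post):
    """Percent change in lesion size from pre- to post-treatment."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    return 100.0 * (post - pre) / pre

# Hypothetical paired best-slice areas (cm^2): manual vs. automatic.
manual_pre, manual_post = [4.0, 5.0, 3.0], [2.0, 2.5, 1.2]
auto_pre, auto_post = [4.2, 4.8, 3.1], [2.1, 2.6, 1.3]

manual_pc = percent_change(manual_pre, manual_post)
auto_pc = percent_change(auto_pre, auto_post)
r = np.corrcoef(manual_pc, auto_pc)[0, 1]  # Pearson correlation
```

    Note the abstract's pattern: correlations for raw change tend to be higher than for percent change, since dividing by the pre-treatment size amplifies disagreement on small lesions.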

  4. NSEG, a segmented mission analysis program for low and high speed aircraft. Volume 1: Theoretical development

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Rozendaal, H. L.

    1977-01-01

    A rapid mission analysis code based on the use of approximate flight path equations of motion is presented. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics are specified in tabular form. The code also contains extensive flight envelope performance mapping capabilities. Approximate take off and landing analyses are performed. At high speeds, centrifugal lift effects are accounted for. Extensive turbojet and ramjet engine scaling procedures are incorporated in the code.

  5. NSEG: A segmented mission analysis program for low and high speed aircraft. Volume 3: Demonstration problems

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Rozendaal, H. L.

    1977-01-01

    Program NSEG is a rapid mission analysis code based on the use of approximate flight path equations of motion. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics are specified in tabular form. In addition to its mission performance calculation capabilities, the code also contains extensive flight envelope performance mapping capabilities. For example, rate-of-climb, turn rates, and energy maneuverability parameter values may be mapped in the Mach-altitude plane. Approximate take off and landing analyses are also performed. At high speeds, centrifugal lift effects are accounted for. Extensive turbojet and ramjet engine scaling procedures are incorporated in the code.
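    As one example of the flight envelope mapping mentioned above, rate of climb at a point in the Mach-altitude plane follows from the textbook specific-excess-power relation RoC = (T - D)V/W. This sketch uses that standard formula with made-up numbers; it is not NSEG's internal equation set.

```python
def rate_of_climb(thrust, drag, speed, weight):
    """Steady rate of climb from specific excess power: (T - D) * V / W.

    thrust, drag, weight in newtons; speed in m/s; returns m/s.
    """
    return (thrust - drag) * speed / weight

# Hypothetical point in the Mach-altitude plane (SI units).
roc = rate_of_climb(thrust=50_000.0, drag=30_000.0, speed=200.0, weight=400_000.0)
# (50000 - 30000) * 200 / 400000 = 10.0 m/s
```

    Evaluating such a function over a grid of Mach and altitude values (with thrust and drag looked up from tabular vehicle data) is what produces the Mach-altitude performance maps the abstract describes.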

  6. Prostate volume contouring: A 3D analysis of segmentation using 3DTRUS, CT, and MR

    SciTech Connect

    Smith, Wendy L. (E-mail: wendy.smith@cancerboard.ab.ca); Lewis, Craig; Bauman, Glenn; Rodrigues, George; D'Souza, David; Ash, Robert; Ho, Derek; Venkatesan, Varagur; Downey, Donal; Fenster, Aaron

    2007-03-15

    Purpose: This study evaluated the reproducibility and modality differences of prostate contouring after brachytherapy implant using three-dimensional (3D) transrectal ultrasound (3DTRUS), T2-weighted magnetic resonance (MR), and computed tomography (CT) imaging. Methods and Materials: Seven blinded observers contoured 10 patients' prostates, 30 days postimplant, on 3DTRUS, MR, and CT images to assess interobserver variability. Randomized images were contoured twice by each observer. We analyzed length and volume measurements and performed a 3D analysis of intra- and intermodality variation. Results: Average volume ratios were 1.16 for CT/MR, 0.90 for 3DTRUS/MR, and 1.30 for CT/3DTRUS. Overall contouring variability was largest for CT and similar for MR and 3DTRUS. The greatest variability of CT contours occurred at the posterior and anterior portions of the midgland. On MR, overall variability was smaller, with a maximum in the anterior region. On 3DTRUS, high variability occurred in anterior regions of the apex and base, whereas the prostate-rectum interface had the smallest variability. The shape of the prostate on MR was rounder, with the base and apex of similar size, whereas CT contours had broad, flat bases narrowing toward the apex. The average percent of surface area that was significantly different (95% confidence interval) was 4.1% for CT/MR, 10.7% for 3DTRUS/MR, and 6.3% for CT/3DTRUS. The larger variability of CT measurements made significant differences more difficult to detect. Conclusions: Contouring prostates on CT, MR, and 3DTRUS results in systematic inter-modality differences in both the location of the prostate boundary and the variability of its definition. MR and 3DTRUS display the smallest variability and the closest correspondence.

  7. Inter-sport variability of muscle volume distribution identified by segmental bioelectrical impedance analysis in four ball sports

    PubMed Central

    Yamada, Yosuke; Masuo, Yoshihisa; Nakamura, Eitaro; Oda, Shingo

    2013-01-01

    The aim of this study was to evaluate and quantify differences in muscle distribution in athletes of various ball sports using segmental bioelectrical impedance analysis (SBIA). Participants were 115 male collegiate athletes from four ball sports (baseball, soccer, tennis, and lacrosse). Percent body fat (%BF) and lean body mass were measured, and SBIA was used to measure segmental muscle volume (MV) in the bilateral upper arms, forearms, thighs, and lower legs. We calculated the MV ratios of dominant to nondominant, proximal to distal, and upper to lower limbs. The measurements consisted of a total of 31 variables; cluster and factor analyses were applied to identify redundant variables. Muscle distribution was significantly different among groups, but %BF was not. The classification procedures of the discriminant analysis could correctly distinguish 84.3% of the athletes. These results suggest that collegiate ball game athletes have adapted their physiques to their sport-specific movements very well, and that the SBIA, an affordable, noninvasive, easy-to-operate, and fast alternative method in the field, can distinguish ball game athletes according to their specific muscle distribution within a 5-minute measurement. The SBIA could be a useful, affordable, and fast tool for identifying talent for specific sports. PMID:24379714

  9. Early Expansion of the Intracranial CSF Volume After Palliative Whole-Brain Radiotherapy: Results of a Longitudinal CT Segmentation Analysis

    SciTech Connect

    Sanghera, Paul; Gardner, Sandra L.; Scora, Daryl; Davey, Phillip

    2010-03-15

    Purpose: To assess cerebral atrophy after radiotherapy, we measured intracranial cerebrospinal fluid volume (ICSFV) over time after whole-brain radiotherapy (WBRT) and compared it with published normal-population data. Methods and Materials: We identified 9 patients receiving a single course of WBRT (30 Gy in 10 fractions over 2 weeks) for ipsilateral brain metastases with at least 3 years of computed tomography follow-up. Segmentation analysis was confined to the tumor-free hemi-cranium. The technique was semiautomated by use of thresholds based on scanned image intensity. The ICSFV percentage (ratio of ICSFV to brain volume) was used for modeling purposes. Published normal-population ICSFV percentages as a function of age were used as a control. A repeated-measures model with cross-sectional (between individuals) and longitudinal (within individuals) quadratic components was fitted to the collected data. The influence of clinical factors including the use of subependymal plate shielding was studied. Results: The median imaging follow-up was 6.25 years. There was an immediate increase (p < 0.0001) in ICSFV percentage, which decelerated over time. The clinical factors studied had no significant effect on the model. Conclusions: WBRT immediately accelerates the rate of brain atrophy. This longitudinal study in patients with brain metastases provides a baseline against which the potential benefits of more localized radiotherapeutic techniques such as radiosurgery may be compared.
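    A minimal sketch of fitting a decelerating quadratic trend to ICSFV% over time, in the spirit of (but far simpler than) the repeated-measures quadratic model used in the study; the follow-up values below are invented.

```python
import numpy as np

# Hypothetical ICSFV% follow-up for one patient: immediate rise after WBRT,
# then deceleration over the years of CT follow-up.
years = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0])
icsfv = np.array([8.0, 10.5, 12.0, 13.5, 14.5, 15.0])  # % of brain volume

# Quadratic trend a*t^2 + b*t + c; a < 0 captures the decelerating increase.
a, b, c = np.polyfit(years, icsfv, deg=2)
```

    The actual study fits separate cross-sectional (between-individual) and longitudinal (within-individual) quadratic components; a single per-patient fit like this only illustrates the shape of the modeled trend.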

  10. Three-dimensional CT image segmentation by volume growing

    NASA Astrophysics Data System (ADS)

    Zhu, Dongping; Conners, Richard W.; Araman, Philip A.

    1991-11-01

    The research reported in this paper is aimed at locating, identifying, and quantifying internal (anatomical or physiological) structures by 3-D image segmentation. Computerized tomography (CT) images of an object are first processed on a slice-by-slice basis, generating a stack of image slices that have been smoothed and pre-segmented. The image smoothing operation is executed by a spatially adaptive filter, and the 2-D pre-segmentation is achieved by a thresholding process whereby each individual pixel in the input image space is consistently assigned a label according to its CT number, i.e., the gray-level value. Given a sequence of pre-segmented images as the 3-D input scene (a stack of image slices), the spatial connectivity that exists among neighboring image pixels is utilized in a volume growing process which generates a number of well-defined volumetric regions or image solids, each representing an individual anatomical or physiological structure in the input scene. The 3-D segmentation is implemented using a volume growing process so that pixel spatial connectivity is incorporated into the image segmentation procedure. To initialize the volume growing process for each volumetric region in the input 3-D scene, a seed location for the region is defined and loaded into a queue data structure called the seed queue. The volume growing process consists of a set of procedures that perform different operations on the volumetric data of a CT image sequence. Experiments with CT image data of several hardwood logs are presented to demonstrate the usefulness and flexibility of this approach, which offers solutions for industrial web inspection as well as for several problems in medical image analysis where low-level image segmentation plays an important role in successful image interpretation tasks.
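    The seed-queue volume growing described above can be sketched as a breadth-first traversal over 6-connected voxels that share the seed's pre-segmentation label. This is a simplified toy version on a tiny array; the actual system operates on full pre-segmented CT stacks.

```python
from collections import deque

import numpy as np

def volume_grow(labels, seed):
    """Grow a volumetric region from a seed voxel over 6-connected
    neighbors that share the seed's pre-segmentation label."""
    target = labels[seed]
    grown = np.zeros(labels.shape, dtype=bool)
    queue = deque([seed])  # the "seed queue" of the text
    grown[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if (all(0 <= n[i] < labels.shape[i] for i in range(3))
                    and not grown[n] and labels[n] == target):
                grown[n] = True
                queue.append(n)
    return grown

# Toy pre-segmented stack: a 2x2x2 block of label 1 inside a label-0 background.
vol = np.zeros((4, 4, 4), dtype=int)
vol[1:3, 1:3, 1:3] = 1
region = volume_grow(vol, seed=(1, 1, 1))  # grows to the full 8-voxel block
```

    Growing from a seed, rather than grouping by label alone, is what separates two structures that happen to share the same threshold label but are not spatially connected.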

  11. Bioimpedance Measurement of Segmental Fluid Volumes and Hemodynamics

    NASA Technical Reports Server (NTRS)

    Montgomery, Leslie D.; Wu, Yi-Chang; Ku, Yu-Tsuan E.; Gerth, Wayne A.; DeVincenzi, D. (Technical Monitor)

    2000-01-01

    Bioimpedance has become a useful tool to measure changes in body fluid compartment volumes. An Electrical Impedance Spectroscopic (EIS) system is described that extends the capabilities of conventional fixed frequency impedance plethysmographic (IPG) methods to allow examination of the redistribution of fluids between the intracellular and extracellular compartments of body segments. The combination of EIS and IPG techniques was evaluated in the human calf, thigh, and torso segments of eight healthy men during 90 minutes of six degree head-down tilt (HDT). After 90 minutes HDT the calf and thigh segments significantly (P < 0.05) lost conductive volume (eight and four percent, respectively) while the torso significantly (P < 0.05) gained volume (approximately three percent). Hemodynamic responses calculated from pulsatile IPG data also showed a segmental pattern consistent with vascular fluid loss from the lower extremities and vascular engorgement in the torso. Lumped-parameter equivalent circuit analyses of EIS data for the calf and thigh indicated that the overall volume decreases in these segments arose from reduced extracellular volume that was not completely balanced by increased intracellular volume. The combined use of IPG and EIS techniques enables noninvasive tracking of multi-segment volumetric and hemodynamic responses to environmental and physiological stresses.
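    Conductive volume in impedance plethysmography is conventionally derived from the cylindrical-conductor model V = rho * L^2 / Z. A tiny sketch with hypothetical segment values; this is the generic textbook relation, not the study's data or its lumped-parameter equivalent-circuit analysis.

```python
def conductive_volume(resistivity, length_cm, impedance_ohm):
    """Segment conductive volume (cm^3) from the cylindrical-conductor
    model V = rho * L^2 / Z used in impedance plethysmography.

    resistivity in ohm*cm, segment length in cm, impedance in ohm.
    """
    return resistivity * length_cm ** 2 / impedance_ohm

# Hypothetical calf segment: rho = 150 ohm*cm, L = 30 cm, Z = 45 ohm.
v = conductive_volume(150.0, 30.0, 45.0)  # 3000.0 cm^3
```

    Under this model a rise in measured impedance (fluid leaving the segment) maps directly to a fall in conductive volume, which is how the head-down-tilt shifts in the calf, thigh, and torso are tracked.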

  12. Uterine fibroid segmentation and volume measurement on MRI

    NASA Astrophysics Data System (ADS)

    Yao, Jianhua; Chen, David; Lu, Wenzhu; Premkumar, Ahalya

    2006-03-01

    Uterine leiomyomas are the most common pelvic tumors in females. The efficacy of medical treatment is gauged by shrinkage of the size of these tumors. In this paper, we present a method to robustly segment the fibroids on MRI and accurately measure the 3D volume. Our method is based on a combination of fast marching level set and Laplacian level set. With a seed point placed inside the fibroid region, a fast marching level set is first employed to obtain a rough segmentation, followed by a Laplacian level set to refine the segmentation. We devised a scheme to automatically determine the parameters for the level set function and the sigmoid function based on pixel statistics around the seed point. The segmentation is conducted on three concurrent views (axial, coronal and sagittal), and a combined volume measurement is computed to obtain a more reliable measurement. We carried out extensive tests on 13 patients, 25 MRI studies and 133 fibroids. The segmentation result was validated against manual segmentation defined by experts. The average segmentation sensitivity (true positive fraction) among all fibroids was 84.6%, and the average segmentation specificity (1-false positive fraction) was 84.3%.
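    The sensitivity and specificity figures quoted above can be computed from binary masks. This sketch uses one plausible reading of the abstract's definitions (true positive fraction relative to the manual mask, and one minus the false positive fraction relative to the automatic mask), with toy 2-D masks.

```python
import numpy as np

def seg_sensitivity_specificity(auto_mask, manual_mask):
    """Overlap statistics against a manual reference mask.

    sensitivity = true positive fraction = |auto & manual| / |manual|
    specificity = 1 - false positive fraction = 1 - |auto & ~manual| / |auto|
    (one plausible reading of the abstract's definitions)
    """
    auto, manual = auto_mask.astype(bool), manual_mask.astype(bool)
    tp = np.logical_and(auto, manual).sum()
    fp = np.logical_and(auto, ~manual).sum()
    return tp / manual.sum(), 1.0 - fp / auto.sum()

manual = np.zeros((10, 10), bool); manual[2:8, 2:8] = True  # 36-pixel reference
auto = np.zeros((10, 10), bool); auto[3:9, 3:9] = True      # same size, shifted
sens, spec = seg_sensitivity_specificity(auto, manual)
```

    The example shows why both numbers matter: a segmentation of the right size but slightly off position loses sensitivity and specificity together.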

  13. 3D visualization for medical volume segmentation validation

    NASA Astrophysics Data System (ADS)

    Eldeib, Ayman M.

    2002-05-01

    This paper presents a 3-D visualization tool that manipulates and/or enhances by user input the segmented targets and other organs. A 3-D visualization tool is developed to create a precise and realistic 3-D model from CT/MR data set for manipulation in 3-D and permitting physician or planner to look through, around, and inside the various structures. The 3-D visualization tool is designed to assist and to evaluate the segmentation process. It can control the transparency of each 3-D object. It displays in one view a 2-D slice (axial, coronal, and/or sagittal)within a 3-D model of the segmented tumor or structures. This helps the radiotherapist or the operator to evaluate the adequacy of the generated target compared to the original 2-D slices. The graphical interface enables the operator to easily select a specific 2-D slice of the 3-D volume data set. The operator is enabled to manually override and adjust the automated segmentation results. After correction, the operator can see the 3-D model again and go back and forth till satisfactory segmentation is obtained. The novelty of this research work is in using state-of-the-art of image processing and 3-D visualization techniques to facilitate a process of a medical volume segmentation validation and assure the accuracy of the volume measurement of the structure of interest.

  14. Unsupervised segmentation of cardiac PET transmission images for automatic heart volume extraction.

    PubMed

    Juslin, Anu; Tohka, Jussi

    2006-01-01

    In this study, we propose an automatic method to extract the heart volume from cardiac positron emission tomography (PET) transmission images. The method combines automatic 3D segmentation of the transmission image using Markov random fields (MRFs) with surface extraction using deformable models. The deformable models were automatically initialized using the MRF segmentation result. The extraction of the heart region is needed, e.g., in independent component analysis (ICA): the volume of the heart can be used to mask the emission image corresponding to the transmission image, so that only the cardiac region is used for the analysis. The masking restricts the number of independent components and reduces the computation time. In addition, the MRF segmentation result could be used for attenuation correction. The method was tested with 25 patient images. The MRF segmentation results were of good quality in all cases, and we were able to extract the heart volume from all the images. PMID:17946020

  15. Amygdalar and hippocampal volume: A comparison between manual segmentation, Freesurfer and VBM.

    PubMed

    Grimm, Oliver; Pohlack, Sebastian; Cacciaglia, Raffaele; Winkelmann, Tobias; Plichta, Michael M; Demirakca, Traute; Flor, Herta

    2015-09-30

    Automated segmentation of the amygdala and the hippocampus is of interest for research on large datasets, where manual segmentation of T1-weighted magnetic resonance tomography images is less feasible for morphometric analysis. Manual segmentation still remains the gold standard for subcortical structures like the hippocampus and the amygdala. A direct comparison of VBM8 and Freesurfer is rarely done, because VBM8 results are most often used for voxel-based analysis. We used the same region-of-interest (ROI) for Freesurfer and VBM8 to relate automated and manually derived volumes of the amygdala and the hippocampus. We processed a large manually segmented dataset of n=92 independent samples with an automated segmentation strategy (VBM8 vs. Freesurfer Version 5.0). For statistical analysis, we calculated not only Pearson's correlation coefficients but also measures developed for method comparison, such as Lin's concordance coefficient. The correlation between automatic and manual segmentation was high for the hippocampus [0.58-0.76] and lower for the amygdala [0.45-0.59]. However, concordance coefficients point to higher concordance for the amygdala [0.46-0.62] than for the hippocampus [0.06-0.12]. VBM8 and Freesurfer segmentation performed on a comparable level relative to manual segmentation. We conclude (1) that correlation alone does not capture systematic differences (e.g. of hippocampal volumes), (2) that calculation of ROI volumes with VBM8 gives measurements comparable to Freesurfer V5.0 when using the same ROI, and (3) that systematic and proportional differences are caused mainly by different definitions of anatomic boundaries and only to a lesser extent by different segmentation strategies. This work underscores the importance of using method comparison techniques and demonstrates that, even with high correlation coefficients, there can still be large differences in absolute volume. PMID:26057114
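    The point that correlation alone misses systematic differences is easy to demonstrate: a constant volume offset leaves Pearson's r at 1 while Lin's concordance correlation coefficient drops. A minimal sketch using the population-variance form of the coefficient; the volumes are invented.

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient: agreement with the
    45-degree line, penalizing the location and scale shifts that
    Pearson's r ignores."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    sxy = ((x - mx) * (y - my)).mean()   # population covariance
    return 2 * sxy / (vx + vy + (mx - my) ** 2)

# Perfectly correlated but systematically offset volumes: r = 1, CCC < 1.
manual = np.array([3.0, 3.5, 4.0, 4.5])   # hypothetical manual volumes (cm^3)
auto = manual + 1.0                       # constant 1 cm^3 overestimate
r = np.corrcoef(manual, auto)[0, 1]
ccc = lin_ccc(manual, auto)
```

    This mirrors the hippocampus result above: high correlation coefficients can coexist with low concordance when one method systematically over- or under-estimates volume.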

  16. Quantifying brain tissue volume in multiple sclerosis with automated lesion segmentation and filling.

    PubMed

    Valverde, Sergi; Oliver, Arnau; Roura, Eloy; Pareto, Deborah; Vilanova, Joan C; Ramió-Torrentà, Lluís; Sastre-Garriga, Jaume; Montalban, Xavier; Rovira, Àlex; Lladó, Xavier

    2015-01-01

    Lesion filling has been successfully applied to reduce the effect of hypo-intense T1-w multiple sclerosis (MS) lesions on automatic brain tissue segmentation. However, a study of fully automated pipelines incorporating lesion segmentation and lesion filling in tissue volume analysis had not yet been performed. Here, we analyzed the percentage of error introduced by automating the lesion segmentation and filling processes in the tissue segmentation of 70 clinically isolated syndrome patient images. First, images were processed using the LST and SLS toolkits with different pipeline combinations that differed in either automated or manual lesion segmentation, and in lesion filling or masking out lesions. Then, images processed following each of the pipelines were segmented into gray matter (GM) and white matter (WM) using SPM8 and compared with the same images where expert lesion annotations were filled before segmentation. Our results showed that fully automated lesion segmentation and filling pipelines significantly reduced the percentage of error in GM and WM volume on images of MS patients, and performed similarly to the images where expert lesion annotations were masked before segmentation. In all the pipelines, the amount of misclassified lesion voxels was the main cause of the observed error in GM and WM volume. However, the percentage of error was significantly lower when automatically estimated lesions were filled rather than masked before segmentation. These results suggest that the LST and SLS toolboxes allow accurate brain tissue volume measurements without any manual intervention, which can be convenient not only in terms of time and economic costs, but also to avoid the inherent intra/inter-rater variability between manual annotations.

  18. Fast global interactive volume segmentation with regional supervoxel descriptors

    NASA Astrophysics Data System (ADS)

    Luengo, Imanol; Basham, Mark; French, Andrew P.

    2016-03-01

    In this paper we propose a novel approach to fast multi-class volume segmentation that exploits supervoxels in order to reduce complexity, time, and memory requirements. Current methods for biomedical image segmentation typically require either complex mathematical models with slow convergence or expensive-to-calculate image features, which makes them infeasible for large volumes with many objects (tens to hundreds) of different classes, as is typical in modern medical and biological datasets. Recently, graphical models such as Markov Random Fields (MRF) or Conditional Random Fields (CRF) have had a huge impact in different computer vision areas (e.g. image parsing, object detection, object recognition), as they provide global regularization for multiclass problems over an energy minimization framework. These models have yet to find impact in biomedical imaging due to complexities in training and to slow inference in 3D images with very large numbers of voxels. Here, we define an interactive segmentation approach over a supervoxel space by first defining novel, robust, and fast regional descriptors for supervoxels. Then, a hierarchical segmentation approach is adopted by training Contextual Extremely Random Forests in a user-defined label hierarchy, where the classification output of the previous layer is used as additional features to train a new classifier that refines more detailed label information. This hierarchical model yields final class likelihoods for supervoxels, which are then refined by an MRF model for 3D segmentation. Results demonstrate the effectiveness of the approach on a challenging cryo-soft X-ray tomography dataset: cell areas are segmented with only a few user scribbles as input, and different organelles are fully extracted from the cell volume with another few seconds of user interaction.

  19. Performance benchmarking of liver CT image segmentation and volume estimation

    NASA Astrophysics Data System (ADS)

    Xiong, Wei; Zhou, Jiayin; Tian, Qi; Liu, Jimmy J.; Qi, Yingyi; Leow, Wee Kheng; Han, Thazin; Wang, Shih-chang

    2008-03-01

    In recent years more and more computer-aided diagnosis (CAD) systems are being used routinely in hospitals. Image-based knowledge discovery plays an important role in many CAD applications, which have great potential to be integrated into the next-generation picture archiving and communication systems (PACS). Robust medical image segmentation tools are essential for such discovery in many CAD applications. In this paper we present a platform with the tools necessary for performance benchmarking of algorithms for liver segmentation and volume estimation used in liver transplantation planning. It includes an abdominal computed tomography (CT) image database (DB), annotation tools, a ground truth DB, and performance measure protocols. The proposed architecture is generic and can be used for other organs and imaging modalities. In the current study, approximately 70 sets of abdominal CT images with normal livers have been collected, and a user-friendly annotation tool has been developed to generate ground truth data for a variety of organs, including 2D contours of the liver, the two kidneys, the spleen, the aorta, and the spinal canal. Abdominal organ segmentation algorithms using 2D atlases and 3D probabilistic atlases can be evaluated on the platform. Preliminary benchmark results from liver segmentation algorithms that make use of statistical knowledge extracted from the abdominal CT image DB are also reported. We aim to increase the collection to about 300 CT scan sets in the near future and plan to make the resulting DBs available to the medical imaging research community for performance benchmarking of liver segmentation algorithms.

  20. Tooth segmentation system with intelligent editing for cephalometric analysis

    NASA Astrophysics Data System (ADS)

    Chen, Shoupu

    2015-03-01

    Cephalometric analysis is the study of the dental and skeletal relationships in the head, and it is used as an assessment and planning tool for improved orthodontic treatment of a patient. Conventional cephalometric analysis identifies bony and soft-tissue landmarks in 2D cephalometric radiographs in order to diagnose facial features and abnormalities prior to treatment, or to evaluate the progress of treatment. Recent studies in orthodontics indicate that there are persistent inaccuracies and inconsistencies in the results provided by conventional 2D cephalometric analysis. Plane geometry is inappropriate for analyzing anatomical volumes and their growth; only a 3D analysis can handle the three-dimensional anatomical maxillofacial complex, which requires computing inertia systems for individual or groups of digitally segmented teeth from an image volume of a patient's head. For the study of 3D cephalometric analysis, the current paper proposes a system for semi-automatically segmenting teeth from a cone beam computed tomography (CBCT) volume with two distinct features: an intelligent user-input interface for automatic background seed generation, and a graphics processing unit (GPU) acceleration mechanism for three-dimensional GrowCut volume segmentation. Using the proposed tooth segmentation system, 15 novice users segmenting a randomly sampled tooth set achieved a satisfying average DICE score of 0.92. The average GrowCut processing time is around one second per tooth, excluding user interaction time.
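    The DICE score used for evaluation above is the standard overlap measure between a user segmentation and a reference, 2|A ∩ B| / (|A| + |B|). A minimal sketch with toy 2-D masks; the evaluation itself was of course run on 3-D CBCT tooth segmentations.

```python
import numpy as np

def dice(a, b):
    """DICE overlap between two binary masks: 2|A & B| / (|A| + |B|).

    Returns 1.0 for identical masks and 0.0 for disjoint ones.
    """
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

user = np.zeros((8, 8), bool); user[1:5, 1:5] = True    # 16-pixel segmentation
truth = np.zeros((8, 8), bool); truth[2:6, 2:6] = True  # 16-pixel reference
d = dice(user, truth)  # 9-pixel overlap -> 18/32 = 0.5625
```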

  1. Fully automated segmentation of oncological PET volumes using a combined multiscale and statistical model

    SciTech Connect

    Montgomery, David W. G.; Amira, Abbes; Zaidi, Habib

    2007-02-15

    The widespread application of positron emission tomography (PET) in clinical oncology has driven this imaging technology into a number of new research and clinical arenas. Increasing numbers of patient scans have led to an urgent need for efficient data handling and the development of new image analysis techniques to aid clinicians in the diagnosis of disease and planning of treatment. Automatic quantitative assessment of metabolic PET data is attractive and will certainly revolutionize the practice of functional imaging since it can lower variability across institutions and may enhance the consistency of image interpretation independent of reader experience. In this paper, a novel automated system for the segmentation of oncological PET data aiming at providing an accurate quantitative analysis tool is proposed. The initial step involves expectation maximization (EM)-based mixture modeling using a k-means clustering procedure, which varies voxel order for initialization. A multiscale Markov model is then used to refine this segmentation by modeling spatial correlations between neighboring image voxels. An experimental study using an anthropomorphic thorax phantom was conducted for quantitative evaluation of the performance of the proposed segmentation algorithm. The comparison of actual tumor volumes to the volumes calculated using different segmentation methodologies including standard k-means, spatial domain Markov Random Field Model (MRFM), and the new multiscale MRFM proposed in this paper showed that the latter dramatically reduces the relative error to less than 8% for small lesions (7 mm radii) and less than 3.5% for larger lesions (9 mm radii). The analysis of the resulting segmentations of clinical oncologic PET data seems to confirm that this methodology shows promise and can successfully segment patient lesions. For problematic images, this technique enables the identification of tumors situated very close to nearby high normal physiologic uptake. 
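The initial step described above, EM-based mixture modeling seeded by k-means clustering, can be sketched for 1-D voxel intensities. This is an illustrative reimplementation, not the authors' code; it omits the voxel-ordering variation and the multiscale Markov refinement, and the two synthetic intensity populations are assumptions for the demo:

```python
import numpy as np

def kmeans_init(x, k, iters=10):
    """Crude 1-D k-means used only to seed the mixture parameters."""
    centers = np.linspace(x.min(), x.max(), k)
    labels = np.zeros(x.size, dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return centers, labels

def em_gaussian_mixture(x, k=2, iters=50):
    """EM for a 1-D Gaussian mixture, initialized by k-means."""
    mu, labels = kmeans_init(x, k)
    sigma = np.full(k, x.std())
    pi = np.bincount(labels, minlength=k) / x.size
    for _ in range(iters):
        # E-step: posterior responsibility of each class for each sample.
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2.0 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations.
        nk = resp.sum(axis=0)
        pi = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

# Two synthetic intensity populations (e.g. background vs lesion uptake).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(1.0, 0.3, 4000), rng.normal(5.0, 0.8, 1000)])
pi, mu, sigma = em_gaussian_mixture(x)
```

The recovered means land near the two generating populations; in the paper, the resulting voxel classification is then refined by the multiscale Markov model.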

  2. Synthesis of intensity gradient and texture information for efficient three-dimensional segmentation of medical volumes

    PubMed Central

    Vantaram, Sreenath Rao; Saber, Eli; Dianat, Sohail A.; Hu, Yang

    2015-01-01

We propose a framework that efficiently employs intensity, gradient, and textural features for three-dimensional (3-D) segmentation of medical (MRI/CT) volumes. Our methodology commences by determining the magnitude of intensity variations across the input volume using a 3-D gradient detection scheme. The resultant gradient volume is utilized in a dynamic volume growing/formation process that is initiated in voxel locations with small gradient magnitudes and is concluded at sites with large gradient magnitudes, yielding a map comprising an initial set of partitions (or subvolumes). This partition map is combined with an entropy-based texture descriptor along with intensity and gradient attributes in a multivariate analysis-based volume merging procedure that fuses subvolumes with similar characteristics to yield a final/refined segmentation output. Additionally, a semiautomated version of the aforementioned algorithm that allows a user to interactively segment a desired subvolume of interest, as opposed to the entire volume, is also discussed. Our approach was tested on several MRI and CT datasets and the results show favorable performance in comparison to the state-of-the-art ITK-SNAP technique. PMID:26158098
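The first stage, separating low-gradient voxels (where volume growing starts) from high-gradient voxels (where it stops), can be illustrated with NumPy's gradient operator. The growing and merging machinery of the actual framework is omitted, and the threshold below is an arbitrary value chosen for this toy volume:

```python
import numpy as np

# Synthetic volume: two homogeneous regions separated by a sharp interface.
vol = np.zeros((20, 20, 20))
vol[:, :, 10:] = 100.0

# 3-D gradient magnitude via central differences along each axis.
gz, gy, gx = np.gradient(vol)
grad_mag = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)

# Volume formation would start in low-gradient voxels (region interiors)
# and conclude at high-gradient voxels (boundaries); 1.0 is an arbitrary
# threshold for this toy volume.
interior = grad_mag < 1.0
```

Only the two slices straddling the interface exceed the threshold; everything else is interior and would seed the growing process.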

  3. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging

    NASA Astrophysics Data System (ADS)

    Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart

    2015-02-01

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51  ±  1.92) to (97.27  ±  0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.
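A minimal binary STAPLE conveys how the multi-expert ground-truth estimates are formed: EM alternates between a per-voxel probability of the true label (E-step) and per-rater sensitivity/specificity estimates (M-step). This sketch fixes the foreground prior and clips the rater parameters for numerical stability; it is a simplification for illustration, not the full STAPLE formulation used in the paper:

```python
import numpy as np

def staple_binary(segs, iters=30):
    """Minimal binary STAPLE sketch (segs: raters x voxels, values 0/1).

    EM alternates an E-step (per-voxel probability that the true label is
    foreground) with an M-step (per-rater sensitivity p and specificity q).
    The foreground prior is held fixed for simplicity.
    """
    segs = np.asarray(segs, dtype=float)
    r, n = segs.shape
    p = np.full(r, 0.9)          # sensitivity of each rater
    q = np.full(r, 0.9)          # specificity of each rater
    prior = segs.mean()          # fixed prior P(true label = 1)
    w = np.zeros(n)
    for _ in range(iters):
        # E-step: combine rater decisions weighted by their reliability.
        a = prior * np.prod(np.where(segs == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(segs == 0, q[:, None], 1 - q[:, None]), axis=0)
        w = a / (a + b)
        # M-step: re-estimate rater performance (clipped for stability).
        p = np.clip((segs * w).sum(axis=1) / w.sum(), 1e-3, 1 - 1e-3)
        q = np.clip(((1 - segs) * (1 - w)).sum(axis=1) / (1 - w).sum(), 1e-3, 1 - 1e-3)
    return w, p, q

# Three raters segment a 100-voxel volume whose true foreground is voxels 0-9.
truth = np.zeros(100)
truth[:10] = 1
seg_a = truth.copy(); seg_a[10] = 1   # one false positive
seg_b = truth.copy(); seg_b[0] = 0    # one false negative
seg_c = truth.copy()                  # perfect rater
w, p, q = staple_binary(np.stack([seg_a, seg_b, seg_c]))
est = w > 0.5                          # consensus ground-truth estimate
```

The consensus recovers the true foreground despite the two disagreements, which is the role the GT estimates play before propagation across the 4D-CT phases.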

  4. Partial volume segmentation in 3D of lesions and tissues in magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Johnston, Brian; Atkins, M. Stella; Booth, Kellogg S.

    1994-05-01

An important first step in diagnosis and treatment planning using tomographic imaging is differentiating and quantifying diseased as well as healthy tissue. One of the difficulties encountered in solving this problem to date has been distinguishing the partial volume constituents of each voxel in the image volume. Most proposed solutions to this problem involve analysis of planar images, in sequence, in two dimensions only. We have extended a model-based method of image segmentation which applies the technique of iterated conditional modes in three dimensions. A minimum of user intervention is required to train the algorithm. Partial volume estimates for each voxel in the image are obtained, yielding fractional compositions of multiple tissue types for individual voxels. A multispectral approach is applied where spatially registered data sets are available. The algorithm is simple and has been parallelized using a dataflow programming environment to reduce the computational burden. The algorithm has been used to segment dual echo MRI data sets of multiple sclerosis patients using lesions, gray matter, white matter, and cerebrospinal fluid as the partial volume constituents. The results of the application of the algorithm to these datasets are presented and compared to the manual lesion segmentation of the same data.

  5. Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET

    NASA Astrophysics Data System (ADS)

    Hatt, M.; Lamare, F.; Boussion, N.; Turzo, A.; Collet, C.; Salzenstein, F.; Roux, C.; Jarritt, P.; Carson, K.; Cheze-LeRest, C.; Visvikis, D.

    2007-07-01

    Accurate volume of interest (VOI) estimation in PET is crucial in different oncology applications such as response to therapy evaluation and radiotherapy treatment planning. The objective of our study was to evaluate the performance of the proposed algorithm for automatic lesion volume delineation; namely the fuzzy hidden Markov chains (FHMC), with that of current state of the art in clinical practice threshold based techniques. As the classical hidden Markov chain (HMC) algorithm, FHMC takes into account noise, voxel intensity and spatial correlation, in order to classify a voxel as background or functional VOI. However the novelty of the fuzzy model consists of the inclusion of an estimation of imprecision, which should subsequently lead to a better modelling of the 'fuzzy' nature of the object of interest boundaries in emission tomography data. The performance of the algorithms has been assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1) and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm3 and 64 mm3). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold based methodology for functional volume determination or activity concentration recovery considering a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore differences between classification and volume estimation errors evaluated were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithms was less susceptible to image noise levels in comparison to the threshold based techniques. The analysis of both

  6. Semiautomatic regional segmentation to measure orbital fat volumes in thyroid-associated ophthalmopathy. A validation study.

    PubMed

    Comerci, M; Elefante, A; Strianese, D; Senese, R; Bonavolontà, P; Alfano, B; Bonavolontà, B; Brunetti, A

    2013-08-01

    This study was designed to validate a novel semi-automated segmentation method to measure regional intra-orbital fat tissue volume in Graves' ophthalmopathy. Twenty-four orbits from 12 patients with Graves' ophthalmopathy, 24 orbits from 12 controls, ten orbits from five MRI study simulations and two orbits from a digital model were used. Following manual region of interest definition of the orbital volumes performed by two operators with different levels of expertise, an automated procedure calculated intra-orbital fat tissue volumes (global and regional, with automated definition of four quadrants). In patients with Graves' disease, clinical activity score and degree of exophthalmos were measured and correlated with intra-orbital fat volumes. Operator performance was evaluated and statistical analysis of the measurements was performed. Accurate intra-orbital fat volume measurements were obtained with coefficients of variation below 5%. The mean operator difference in total fat volume measurements was 0.56%. Patients had significantly higher intra-orbital fat volumes than controls (p<0.001 using Student's t test). Fat volumes and clinical score were significantly correlated (p<0.001). The semi-automated method described here can provide accurate, reproducible intra-orbital fat measurements with low inter-operator variation and good correlation with clinical data.

  7. Brain tumor target volume determination for radiation therapy treatment planning through the use of automated MRI segmentation

    NASA Astrophysics Data System (ADS)

    Mazzara, Gloria Patrika

Radiation therapy seeks to effectively irradiate the tumor cells while minimizing the dose to adjacent normal cells. Prior research found that the low success rates for treating brain tumors would be improved with higher radiation doses to the tumor area. This is feasible only if the target volume can be precisely identified. However, the definition of tumor volume is still based on time-intensive, highly subjective manual outlining by radiation oncologists. In this study the effectiveness of two automated Magnetic Resonance Imaging (MRI) segmentation methods, k-Nearest Neighbors (kNN) and Knowledge-Guided (KG), in determining the Gross Tumor Volume (GTV) of brain tumors for use in radiation therapy was assessed. Three criteria were applied: accuracy of the contours; quality of the resulting treatment plan in terms of dose to the tumor; and a novel treatment plan evaluation technique based on post-treatment images. The kNN method was able to segment all cases while the KG method was limited to enhancing tumors and gliomas with clear enhancing edges. Various software applications were developed to create a closed smooth contour that encompassed the tumor pixels from the segmentations and to integrate these results into the treatment planning software. A novel, probabilistic measurement of accuracy was introduced to compare the agreement of the segmentation methods with the weighted average physician volume. Both computer methods under-segmented the tumor volume when compared with the physicians but performed within the variability of manual contouring (28% +/- 12% for inter-operator variability). Computer segmentations were modified vertically to compensate for their under-segmentation. When comparing radiation treatment plans designed from physician-defined tumor volumes with treatment plans developed from the modified segmentation results, the reference target volume was irradiated within the same level of conformity. Analysis of the plans based on post

  8. Real-Time Automatic Segmentation of Optical Coherence Tomography Volume Data of the Macular Region

    PubMed Central

    Tian, Jing; Varga, Boglárka; Somfai, Gábor Márk; Lee, Wen-Hsiang; Smiddy, William E.; Cabrera DeBuc, Delia

    2015-01-01

    Optical coherence tomography (OCT) is a high speed, high resolution and non-invasive imaging modality that enables the capturing of the 3D structure of the retina. The fast and automatic analysis of 3D volume OCT data is crucial taking into account the increased amount of patient-specific 3D imaging data. In this work, we have developed an automatic algorithm, OCTRIMA 3D (OCT Retinal IMage Analysis 3D), that could segment OCT volume data in the macular region fast and accurately. The proposed method is implemented using the shortest-path based graph search, which detects the retinal boundaries by searching the shortest-path between two end nodes using Dijkstra’s algorithm. Additional techniques, such as inter-frame flattening, inter-frame search region refinement, masking and biasing were introduced to exploit the spatial dependency between adjacent frames for the reduction of the processing time. Our segmentation algorithm was evaluated by comparing with the manual labelings and three state of the art graph-based segmentation methods. The processing time for the whole OCT volume of 496×644×51 voxels (captured by Spectralis SD-OCT) was 26.15 seconds which is at least a 2-8-fold increase in speed compared to other, similar reference algorithms used in the comparisons. The average unsigned error was about 1 pixel (∼ 4 microns), which was also lower compared to the reference algorithms. We believe that OCTRIMA 3D is a leap forward towards achieving reliable, real-time analysis of 3D OCT retinal data. PMID:26258430
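The core of the boundary detection, a shortest path between two end nodes found with Dijkstra's algorithm, can be sketched on a small 2-D cost image with one node per pixel and moves restricted to the next column. The inter-frame flattening, search-region refinement, masking, and biasing of OCTRIMA 3D are omitted; the cost values here are invented for the demo:

```python
import heapq

def shortest_path_boundary(cost):
    """Trace a left-to-right boundary through a 2-D cost image with
    Dijkstra's algorithm; allowed moves are right, up-right, down-right.
    Returns the boundary row index for each column."""
    rows, cols = len(cost), len(cost[0])
    dist, prev = {}, {}
    pq = [(cost[r][0], (r, 0), None) for r in range(rows)]
    heapq.heapify(pq)
    while pq:
        d, node, parent = heapq.heappop(pq)
        if node in dist:
            continue                      # already settled with a shorter path
        dist[node] = d
        prev[node] = parent
        r, c = node
        if c == cols - 1:                 # reached the right edge: done
            path = []
            while node is not None:
                path.append(node[0])
                node = prev[node]
            return path[::-1]
        for dr in (-1, 0, 1):
            nr = r + dr
            if 0 <= nr < rows and (nr, c + 1) not in dist:
                heapq.heappush(pq, (d + cost[nr][c + 1], (nr, c + 1), node))
    return []

# A cost image in which row 2 is cheap (standing in for a retinal boundary).
cost = [[9] * 6 for _ in range(5)]
for c in range(6):
    cost[2][c] = 1
path = shortest_path_boundary(cost)
```

The returned path hugs the cheap row, just as the graph search in the paper hugs the dark-to-bright transitions between retinal layers.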

  9. Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET

    PubMed Central

    Hatt, Mathieu; Lamare, Frédéric; Boussion, Nicolas; Roux, Christian; Turzo, Alexandre; Cheze-Lerest, Catherine; Jarritt, Peter; Carson, Kathryn; Salzenstein, Fabien; Collet, Christophe; Visvikis, Dimitris

    2007-01-01

Accurate volume of interest (VOI) estimation in PET is crucial in different oncology applications such as response to therapy evaluation and radiotherapy treatment planning. The objective of our study was to evaluate the performance of the proposed algorithm for automatic lesion volume delineation; namely the Fuzzy Hidden Markov Chains (FHMC), with that of current state of the art in clinical practice threshold based techniques. As the classical Hidden Markov Chain (HMC) algorithm, FHMC takes into account noise, voxel intensity and spatial correlation, in order to classify a voxel as background or functional VOI. However the novelty of the fuzzy model consists of the inclusion of an estimation of imprecision, which should subsequently lead to a better modelling of the "fuzzy" nature of the object of interest boundaries in emission tomography data. The performance of the algorithms has been assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1) and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm3 and 64 mm3). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold based methodology for functional volume determination or activity concentration recovery considering a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore differences between classification and volume estimation errors evaluated were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithms was less susceptible to image noise levels in comparison to the threshold based techniques. The analysis of both

  10. Volume Averaging of Spectral-Domain Optical Coherence Tomography Impacts Retinal Segmentation in Children

    PubMed Central

    Trimboli-Heidler, Carmelina; Vogt, Kelly; Avery, Robert A.

    2016-01-01

Purpose To determine the influence of volume averaging on retinal layer thickness measures acquired with spectral-domain optical coherence tomography (SD-OCT) in children. Methods Macular SD-OCT images were acquired using three different volume settings (i.e., 1, 3, and 9 volumes) in children enrolled in a prospective OCT study. Total retinal thickness and five inner layers were measured around an Early Treatment Diabetic Retinopathy Scale (ETDRS) grid using beta version automated segmentation software for the Spectralis. The magnitude of manual segmentation required to correct the automated segmentation was classified as either minor (<12 lines adjusted), moderate (>12 and <25 lines adjusted), severe (>26 and <48 lines adjusted), or fail (>48 lines adjusted or could not adjust due to poor image quality). The frequency of each edit classification was assessed for each volume setting. Thickness, paired difference, and 95% limits of agreement of each anatomic quadrant were compared across volume density. Results Seventy-five subjects (median age 11.8 years, range 4.3–18.5 years) contributed 75 eyes. Less than 5% of the 9- and 3-volume scans required more than minor manual segmentation corrections, compared with 71% of 1-volume scans. The inner (3 mm) region demonstrated similar measures across all layers, regardless of volume number. The 1-volume scans demonstrated greater variability of the retinal nerve fiber layer (RNFL) thickness, compared with the other volumes in the outer (6 mm) region. Conclusions In children, volume averaging of SD-OCT acquisitions reduces retinal layer segmentation errors. Translational Relevance This study highlights the importance of volume averaging when acquiring macula volumes intended for multilayer segmentation. PMID:27570711

  11. Midbrain volume segmentation using active shape models and LBPs

    NASA Astrophysics Data System (ADS)

    Olveres, Jimena; Nava, Rodrigo; Escalante-Ramírez, Boris; Cristóbal, Gabriel; García-Moreno, Carla María.

    2013-09-01

    In recent years, the use of Magnetic Resonance Imaging (MRI) to detect different brain structures such as midbrain, white matter, gray matter, corpus callosum, and cerebellum has increased. This fact together with the evidence that midbrain is associated with Parkinson's disease has led researchers to consider midbrain segmentation as an important issue. Nowadays, Active Shape Models (ASM) are widely used in literature for organ segmentation where the shape is an important discriminant feature. Nevertheless, this approach is based on the assumption that objects of interest are usually located on strong edges. Such a limitation may lead to a final shape far from the actual shape model. This paper proposes a novel method based on the combined use of ASM and Local Binary Patterns for segmenting midbrain. Furthermore, we analyzed several LBP methods and evaluated their performance. The joint-model considers both global and local statistics to improve final adjustments. The results showed that our proposal performs substantially better than the ASM algorithm and provides better segmentation measurements.
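The basic 8-neighbour LBP code that such texture descriptors build on can be computed in a few lines. This is the plain variant (not uniform or rotation-invariant), so it is not necessarily the exact LBP flavour among the several the paper evaluates:

```python
import numpy as np

def lbp8(img):
    """Plain 8-neighbour LBP code for each interior pixel: bit i is set
    when the i-th neighbour is >= the centre pixel."""
    img = np.asarray(img)
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    code = np.zeros(centre.shape, dtype=np.uint8)
    # Neighbour offsets ordered clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neigh >= centre).astype(np.uint8) << np.uint8(bit)
    return code

flat_code = lbp8(np.full((4, 4), 7))   # texture-free patch: all bits set
spot = np.zeros((3, 3))
spot[1, 1] = 5.0
spot_code = lbp8(spot)                 # isolated bright centre: no bits set
```

Histograms of these codes over a local window are what the joint ASM-LBP model uses as the local appearance statistic alongside the global shape constraint.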

  12. Multi-region unstructured volume segmentation using tetrahedron filling

    SciTech Connect

Williams, Sean Jamerson; Dillard, Scott E; Thoma, Dan J; Hlawitschka, Mario; Hamann, Bernd

    2010-01-01

    Segmentation is one of the most common operations in image processing, and while there are several solutions already present in the literature, they each have their own benefits and drawbacks that make them well-suited for some types of data and not for others. We focus on the problem of breaking an image into multiple regions in a single segmentation pass, while supporting both voxel and scattered point data. To solve this problem, we begin with a set of potential boundary points and use a Delaunay triangulation to complete the boundaries. We use heuristic- and interaction-driven Voronoi clustering to find reasonable groupings of tetrahedra. Apart from the computation of the Delaunay triangulation, our algorithm has linear time complexity with respect to the number of tetrahedra.
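The boundary-completion step can be illustrated with SciPy's Delaunay triangulation of scattered 3-D points. The heuristic- and interaction-driven Voronoi clustering of the paper is replaced here by a trivial stand-in that groups tetrahedra by centroid position; the point set is random, purely for the demo:

```python
import numpy as np
from scipy.spatial import Delaunay

# Scattered candidate boundary points in the unit cube.
rng = np.random.default_rng(1)
pts = rng.random((60, 3))

# Delaunay tetrahedralization "completes" boundaries between the points.
tet = Delaunay(pts)

# Stand-in for the Voronoi clustering step: group tetrahedra by centroid,
# here simply by which half of the cube the centroid falls in.
centroids = pts[tet.simplices].mean(axis=1)
labels = (centroids[:, 0] > 0.5).astype(int)
```

Each row of `tet.simplices` indexes the four vertices of one tetrahedron; assigning every tetrahedron to a group is what turns the filled triangulation into a multi-region segmentation.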

  13. 3D robust Chan-Vese model for industrial computed tomography volume data segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Linghui; Zeng, Li; Luan, Xiao

    2013-11-01

    Industrial computed tomography (CT) has been widely applied in many areas of non-destructive testing (NDT) and non-destructive evaluation (NDE). In practice, CT volume data to be dealt with may be corrupted by noise. This paper addresses the segmentation of noisy industrial CT volume data. Motivated by the research on the Chan-Vese (CV) model, we present a region-based active contour model that draws upon intensity information in local regions with a controllable scale. In the presence of noise, a local energy is firstly defined according to the intensity difference within a local neighborhood. Then a global energy is defined to integrate local energy with respect to all image points. In a level set formulation, this energy is represented by a variational level set function, where a surface evolution equation is derived for energy minimization. Comparative analysis with the CV model indicates the comparable performance of the 3D robust Chan-Vese (RCV) model. The quantitative evaluation also shows the segmentation accuracy of 3D RCV. In addition, the efficiency of our approach is validated under several types of noise, such as Poisson noise, Gaussian noise, salt-and-pepper noise and speckle noise.
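Dropping the curvature (regularization) term, the data term of the two-phase Chan-Vese energy reduces to alternating between region-mean updates and nearest-mean reassignment. The sketch below shows that piecewise-constant idea on a noisy volume; it is a deliberate simplification, not the level-set RCV model of the paper:

```python
import numpy as np

def chan_vese_data_term(img, iters=20):
    """Heavily simplified two-phase Chan-Vese: alternate between updating
    the two region means (c1, c2) and reassigning each voxel to the nearer
    mean. Only the data term of the CV energy is kept; the curvature
    (smoothness) term and the level-set evolution are omitted."""
    img = np.asarray(img, dtype=float)
    phi = img > img.mean()                    # initial partition
    c1 = c2 = 0.0
    for _ in range(iters):
        c1 = img[phi].mean() if phi.any() else 0.0
        c2 = img[~phi].mean() if (~phi).any() else 0.0
        phi = (img - c1) ** 2 < (img - c2) ** 2
    return phi, c1, c2

# Noisy two-region volume: lower half ~0, upper half ~100.
clean = np.zeros((16, 16, 16))
clean[8:, :, :] = 100.0
rng = np.random.default_rng(0)
noisy = clean + rng.normal(0.0, 5.0, clean.shape)
phi, c1, c2 = chan_vese_data_term(noisy)
```

The locally computed energies of the RCV model add robustness on top of this global two-mean fit, which is what lets it cope with the heavier noise types listed above.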

14. Scintigraphic method for the assessment of intraluminal volume and motility of isolated intestinal segments. [Dogs]

    SciTech Connect

    Mitchell, A.; Macey, D.J.; Collin, J.

    1983-07-01

    The isolated in vivo intestinal segment is a popular experimental preparation for the investigation of intestinal function, but its value has been limited because no method has been available for measuring changes in intraluminal volume under experimental conditions. We report a scintigraphic technique for measuring intraluminal volume and assessing intestinal motility. Between 30 and 180 ml, the volume of a 75-cm segment of canine jejunum, perfused with Tc-99m-labeled tin colloid, was found to be proportional to the recorded count rate. This method has been used to monitor the effects of the hormone vasopressin on intestinal function.

  15. LANDSAT-D program. Volume 2: Ground segment

    NASA Technical Reports Server (NTRS)

    1984-01-01

Raw digital data, as received from the LANDSAT spacecraft, cannot generate images that meet specifications. Radiometric corrections must be made to compensate for aging and for differences in sensitivity among the instrument sensors. Geometric corrections must be made to compensate for off-nadir look angle and for spacecraft drift from its prescribed path. Corrections must also be made for look-angle jitter caused by vibrations induced by spacecraft equipment. The major components of the LANDSAT ground segment and their functions are discussed.

  16. High volume production trial of mirror segments for the Thirty Meter Telescope

    NASA Astrophysics Data System (ADS)

    Oota, Tetsuji; Negishi, Mahito; Shinonaga, Hirohiko; Gomi, Akihiko; Tanaka, Yutaka; Akutsu, Kotaro; Otsuka, Itaru; Mochizuki, Shun; Iye, Masanori; Yamashita, Takuya

    2014-07-01

The Thirty Meter Telescope is a next-generation optical/infrared telescope to be constructed on Mauna Kea, Hawaii, toward the end of this decade as an international project. Its 30 m primary mirror consists of 492 off-axis aspheric segmented mirrors. High-volume production of hundreds of segments started in 2013 under a contract between the National Astronomical Observatory of Japan and Canon Inc. This paper describes the achievements of the high-volume production trials. The Stressed Mirror Figuring technique established by Keck Telescope engineers has been adapted and adopted. To measure the segment surface figure, a novel stitching algorithm is evaluated by experiment. The integration procedure is checked with a prototype segment.

  17. Image Segmentation Analysis for NASA Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2010-01-01

NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all of the computerized image analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate moving from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects together into region classes. No other practical, operational image segmentation approach has this tight integration of region growing object finding with region classification. This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image data sections are small enough to successfully mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation will provide an overview of the RHSEG algorithm and describe how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites from remotely sensed data.

  18. Segmentation propagation for the automated quantification of ventricle volume from serial MRI

    NASA Astrophysics Data System (ADS)

    Linguraru, Marius George; Butman, John A.

    2009-02-01

    Accurate ventricle volume estimates could potentially improve the understanding and diagnosis of communicating hydrocephalus. Postoperative communicating hydrocephalus has been recognized in patients with brain tumors where the changes in ventricle volume can be difficult to identify, particularly over short time intervals. Because of the complex alterations of brain morphology in these patients, the segmentation of brain ventricles is challenging. Our method evaluates ventricle size from serial brain MRI examinations; we (i) combined serial images to increase SNR, (ii) automatically segmented this image to generate a ventricle template using fast marching methods and geodesic active contours, and (iii) propagated the segmentation using deformable registration of the original MRI datasets. By applying this deformation to the ventricle template, serial volume estimates were obtained in a robust manner from routine clinical images (0.93 overlap) and their variation analyzed.

  19. Hybrid segmentation framework for 3D medical image analysis

    NASA Astrophysics Data System (ADS)

    Chen, Ting; Metaxas, Dimitri N.

    2003-05-01

Medical image segmentation is the process that defines the region of interest in the image volume. Classical segmentation methods such as region-based methods and boundary-based methods cannot make full use of the information provided by the image. In this paper we propose a general hybrid framework for 3D medical image segmentation that combines the Gibbs prior model and the deformable model. First, Gibbs prior models are applied to each slice in a 3D medical image volume and the segmentation results are combined into a 3D binary mask of the object. We then create a deformable mesh based on this 3D binary mask. The deformable model is led to the edge features in the volume with the help of image-derived external forces. The deformable model segmentation result can in turn be used to update the parameters of the Gibbs prior models. These methods then work recursively to reach a global segmentation solution. The hybrid segmentation framework has been applied to images of the lung, heart, colon, jaw, tumors, and brain. The experimental data include MRI (T1, T2, PD), CT, X-ray, and ultrasound images. High-quality results are achieved at relatively low computational cost. We also performed validation using expert manual segmentation as the ground truth. The results show that the hybrid segmentation may have further clinical use.

  20. Pulmonary airways tree segmentation from CT examinations using adaptive volume of interest

    NASA Astrophysics Data System (ADS)

    Park, Sang Cheol; Kim, Won Pil; Zheng, Bin; Leader, Joseph K.; Pu, Jiantao; Tan, Jun; Gur, David

    2009-02-01

Airways tree segmentation is an important step in quantitatively assessing the severity of and changes in several lung diseases such as chronic obstructive pulmonary disease (COPD), asthma, and cystic fibrosis. It can also be used in guiding bronchoscopy. The purpose of this study is to develop an automated scheme for segmenting the airways tree structure depicted on chest CT examinations. After lung volume segmentation, the scheme defines the first cylinder-like volume of interest (VOI) using a series of images depicting the trachea. The scheme then iteratively defines and adds subsequent VOIs using a region growing algorithm combined with adaptively determined thresholds in order to trace possible sections of airways located inside the combined VOI in question. The airway tree segmentation process is automatically terminated after the scheme assesses all defined VOIs in the iteratively assembled VOI list. In this preliminary study, ten CT examinations with 1.25 mm section thickness and two different CT image reconstruction kernels ("bone" and "standard") were selected and used to test the proposed airways tree segmentation scheme. The experimental results showed that (1) adopting this approach effectively prevented the scheme from infiltrating into the parenchyma, (2) the proposed method reasonably accurately segmented the airways trees with a lower false-positive identification rate compared with other previously reported schemes based on 2-D image segmentation and data analyses, and (3) the proposed adaptive, iterative threshold selection method for the region growing step in each identified VOI enables the scheme to segment the airways trees reliably to the 4th generation in this limited dataset, with successful segmentation up to the 5th generation in a fraction of the airways tree branches.
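The region-growing step inside each VOI can be sketched as 6-connected growth from a seed with an intensity threshold. The adaptive threshold selection and the iterative VOI assembly of the proposed scheme are omitted, and the Hounsfield-like values below are illustrative only:

```python
import numpy as np
from collections import deque

def region_grow_3d(vol, seed, thresh):
    """6-connected region growing from a seed voxel: a neighbour joins the
    region while its intensity stays below the threshold (standing in for
    the adaptively determined per-VOI threshold)."""
    grown = np.zeros(vol.shape, dtype=bool)
    grown[seed] = True
    queue = deque([seed])
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in steps:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < vol.shape[i] for i in range(3)) \
                    and not grown[n] and vol[n] < thresh:
                grown[n] = True
                queue.append(n)
    return grown

# Toy CT-like volume: an air-filled tube (about -900 HU) inside soft tissue.
vol = np.full((10, 10, 10), 100.0)
vol[:, 4:6, 4:6] = -900.0
mask = region_grow_3d(vol, (0, 4, 4), thresh=-500.0)
```

The grown mask fills exactly the dark tube and stops at the tissue; choosing `thresh` adaptively per VOI is what keeps the real scheme from leaking into the parenchyma.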

  1. Multi-Segment Hemodynamic and Volume Assessment With Impedance Plethysmography: Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Ku, Yu-Tsuan E.; Montgomery, Leslie D.; Webbon, Bruce W. (Technical Monitor)

    1995-01-01

Definition of multi-segmental circulatory and volume changes in the human body provides an understanding of the physiologic responses to various aerospace conditions. We have developed instrumentation and testing procedures at NASA Ames Research Center that may be useful in biomedical research and clinical diagnosis. Specialized two-, four-, and six-channel impedance systems are described that have been used to measure calf, thigh, thoracic, arm, and cerebral hemodynamic and volume changes during various experimental investigations.

  2. Swarm Intelligence Integrated Graph-Cut for Liver Segmentation from 3D-CT Volumes.

    PubMed

    Eapen, Maya; Korah, Reeba; Geetha, G

    2015-01-01

The segmentation of organs in CT volumes is a prerequisite for diagnosis and treatment planning. In this paper, we focus on liver segmentation from contrast-enhanced abdominal CT volumes, a challenging task due to intensity overlapping, blurred edges, large variability in liver shape, and complex background with cluttered features. The algorithm integrates multiple discriminative cues (i.e., prior domain information, an intensity model, and regional characteristics of the liver) in a graph-cut image segmentation framework. The paper proposes a swarm-intelligence-inspired, edge-adaptive weight function for regulating the energy minimization of the traditional graph-cut model. The model is validated both qualitatively (by clinicians and radiologists) and quantitatively on publicly available computed tomography (CT) datasets (MICCAI 2007 liver segmentation challenge, 3D-IRCAD). Quantitative evaluation of the segmentation results is performed using liver volume calculations, and mean scores of 80.8% and 82.5% were obtained on the MICCAI and IRCAD datasets, respectively. The experimental results illustrate the efficiency and effectiveness of the proposed method. PMID:26689833
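The paper's swarm-intelligence-tuned weight function is not reproduced here, but the graph-cut machinery it regulates assigns n-link (neighbor) weights that fall off across intensity edges, so the minimum cut prefers to sever links at boundaries. A generic Boykov-Jolly-style weight as a hedged stand-in, with `sigma` a hand-picked illustrative parameter rather than the paper's adaptive one:

```python
import numpy as np

def boundary_weight(ip, iq, sigma):
    """Generic n-link weight: near 1 between similar intensities, near 0
    across a strong edge, so the min-cut lands on intensity boundaries."""
    return float(np.exp(-((ip - iq) ** 2) / (2.0 * sigma ** 2)))

# Weights along a 1-D intensity profile containing one step edge:
profile = np.array([100, 102, 98, 101, 180, 182, 179], dtype=float)
weights = [boundary_weight(a, b, sigma=10.0)
           for a, b in zip(profile[:-1], profile[1:])]
```

The cheapest link in `weights` sits exactly at the 101-to-180 step, which is where a min-cut would separate the two regions.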

  3. Precise segmentation of multiple organs in CT volumes using learning-based approach and information theory.

    PubMed

    Lu, Chao; Zheng, Yefeng; Birkbeck, Neil; Zhang, Jingdan; Kohlberger, Timo; Tietjen, Christian; Boettger, Thomas; Duncan, James S; Zhou, S Kevin

    2012-01-01

In this paper, we present a novel method that incorporates information theory into a learning-based approach for automatic and accurate pelvic organ segmentation (including the prostate, bladder, and rectum). We target 3D CT volumes generated using different scanning protocols (e.g., contrast and non-contrast, with and without implant in the prostate, various resolutions and positions), with volumes coming from largely diverse sources (e.g., disease in different organs). Three key ingredients are combined to solve this challenging segmentation problem. First, marginal space learning (MSL) is applied to efficiently and effectively localize the multiple organs in the largely diverse CT volumes. Second, learning techniques with steerable features are applied for robust boundary detection, which enables the handling of highly heterogeneous texture patterns. Third, a novel information-theoretic scheme is incorporated into the boundary inference process; the incorporation of the Jensen-Shannon divergence further drives the mesh to the best fit of the image, thus improving segmentation performance. The proposed approach is tested on a challenging dataset containing 188 volumes from diverse sources. Our approach not only produces excellent segmentation accuracy, but also runs about eighty times faster than previous state-of-the-art solutions. The proposed method can be applied to CT images to provide visual guidance to physicians during computer-aided diagnosis, treatment planning, and image-guided radiotherapy to treat cancers in the pelvic region. PMID:23286081

  5. Segmentation of organs at risk in CT volumes of head, thorax, abdomen, and pelvis

    NASA Astrophysics Data System (ADS)

    Han, Miaofei; Ma, Jinfeng; Li, Yan; Li, Meiling; Song, Yanli; Li, Qiang

    2015-03-01

Accurate segmentation of organs at risk (OARs) is a key step in the treatment planning system (TPS) of image-guided radiation therapy. We are developing three classes of methods to segment 17 organs at risk throughout the whole body: brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, liver, kidneys, spleen, prostate, rectum, femoral heads, and skin. The three classes of segmentation methods include (1) threshold-based methods for organs with large contrast relative to adjacent structures, such as lungs, trachea, and skin; (2) context-driven Generalized Hough Transform-based methods combined with a graph cut algorithm for robust localization and segmentation of liver, kidneys, and spleen; and (3) atlas- and registration-based methods for segmentation of the heart and all organs in CT volumes of the head and pelvis. The segmentation accuracy for the seventeen organs was subjectively evaluated by two medical experts using a three-level score: 0, poor (unusable in clinical practice); 1, acceptable (minor revision needed); and 2, good (nearly no revision needed). A database was collected from Ruijin Hospital, Huashan Hospital, and Xuhui Central Hospital in Shanghai, China, including 127 head scans, 203 thoracic scans, 154 abdominal scans, and 73 pelvic scans. The percentages of "good" segmentation results were 97.6%, 92.9%, 81.1%, 87.4%, 85.0%, 78.7%, 94.1%, 91.1%, 81.3%, 86.7%, 82.5%, 86.4%, 79.9%, 72.6%, 68.5%, 93.2%, and 96.9% for brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, liver, kidneys, spleen, prostate, rectum, femoral heads, and skin, respectively. Various organs at risk can be reliably segmented from CT scans by use of the three classes of segmentation methods.

  6. Generalized method for partial volume estimation and tissue segmentation in cerebral magnetic resonance images

    PubMed Central

    Khademi, April; Venetsanopoulos, Anastasios; Moody, Alan R.

    2014-01-01

An artifact found in magnetic resonance images (MRI) called partial volume averaging (PVA) has received much attention, since accurate segmentation of cerebral anatomy and pathology is impeded by this artifact. Traditional neurological segmentation techniques rely on Gaussian mixture models to handle noise and PVA, or on high-dimensional feature sets that exploit redundancy in multispectral datasets. Unfortunately, model-based techniques may not be optimal for images with non-Gaussian noise distributions and/or pathology, and multispectral techniques model probabilities instead of the partial volume (PV) fraction. For robust segmentation, a PV fraction estimation approach is developed for cerebral MRI that does not depend on predetermined intensity distribution models or multispectral scans. Instead, the PV fraction is estimated directly from each image using an adaptively defined global edge map constructed by exploiting a relationship between edge content and PVA. The final PVA map is used to segment anatomy and pathology with subvoxel accuracy. Validation on simulated and real, pathology-free T1 MRI (Gaussian noise), as well as pathological fluid attenuation inversion recovery MRI (non-Gaussian noise), demonstrates that the PV fraction is accurately estimated and the resultant segmentation is robust. Comparison to model-based methods further highlights the benefits of the current approach. PMID:26158022
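Under the standard linear two-tissue mixing model that underlies PV fraction estimates like those above, a voxel's intensity is a convex combination of the two pure-tissue means, so the fraction can be inverted directly. A minimal sketch, with `mu_a` and `mu_b` standing in for assumed pure-tissue mean intensities (the paper's edge-map-based estimator is more elaborate than this):

```python
def pv_fraction(intensity, mu_a, mu_b):
    """Fraction f of tissue A in a voxel under the linear mixing model
    I = f*mu_a + (1 - f)*mu_b, clipped to the physical range [0, 1]."""
    f = (intensity - mu_b) / (mu_a - mu_b)
    return min(max(f, 0.0), 1.0)
```

A voxel measuring exactly halfway between the two tissue means gets fraction 0.5; intensities beyond either pure-tissue mean clip to 0 or 1.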

  7. Milling Stability Analysis Based on Chebyshev Segmentation

    NASA Astrophysics Data System (ADS)

    HUANG, Jianwei; LI, He; HAN, Ping; Wen, Bangchun

    2016-09-01

The Chebyshev segmentation method was used to discretize the time period contained in the delay differential equation; the Newton second-order difference quotient method was then used to calculate the cutter motion vector at each time endpoint, and Floquet theory was used to determine the stability of the milling system after obtaining its transfer matrix. Using these methods, the stability of a two-degree-of-freedom milling system was investigated, and stability lobe diagrams were obtained. The results showed that the proposed methods have the following advantages. First, at the same calculation accuracy, Chebyshev segmentation needs fewer points to represent the time period than uniform segmentation, so its computational efficiency is higher. Second, if the time period is divided into the same number of parts, the stability lobe diagrams obtained by the Chebyshev segmentation method are more accurate than those of uniform segmentation.
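The Chebyshev discretization referred to above places grid points at Chebyshev (Gauss-Lobatto) nodes rather than uniformly, clustering them near the interval endpoints; this clustering is what lets fewer points achieve the same accuracy. A sketch of the node placement on an arbitrary interval [a, b] (a generic construction, not code from the paper):

```python
import numpy as np

def chebyshev_nodes(a, b, n):
    """Chebyshev-Gauss-Lobatto points on [a, b]: the extrema of T_n mapped
    from [-1, 1], clustered near both endpoints of the interval."""
    k = np.arange(n + 1)
    x = np.cos(np.pi * k / n)              # extrema of T_n on [-1, 1]
    return 0.5 * (a + b) + 0.5 * (b - a) * x[::-1]  # ascending on [a, b]
```

For n = 4 on [0, 1] this yields five nodes including both endpoints, with the spacing near the ends noticeably tighter than in the middle.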

  8. Analysis of Random Segment Errors on Coronagraph Performance

    NASA Technical Reports Server (NTRS)

    Stahl, Mark T.; Stahl, H. Philip; Shaklan, Stuart B.; N'Diaye, Mamadou

    2016-01-01

At the 2015 SPIE O&P conference we presented "Preliminary Analysis of Random Segment Errors on Coronagraph Performance." Key findings: contrast leakage for a 4th-order sinc^2(x) coronagraph is 10X more sensitive to random segment piston than to random tip/tilt, and apertures with fewer segments (i.e., 1 ring) or very many segments (>16 rings) have less contrast leakage as a function of piston or tip/tilt than apertures with 2 to 4 rings of segments. Revised finding: piston is only 2.5X more sensitive than tip/tilt.

  9. [Segment analysis of the target market of physiotherapeutic services].

    PubMed

    Babaskin, D V

    2010-01-01

The objective of the present study was to demonstrate the possibilities of analyzing selected segments of the target market of physiotherapeutic services provided by medical and preventive facilities of two major types. The main features of a target segment, such as the provision of therapeutic massage, are illustrated in terms of two characteristics, namely attractiveness to users and the ability of a given medical facility to satisfy their requirements. Based on a portfolio analysis of the available target segments, the most promising ones ("winner" segments) were selected for further marketing studies. This choice does not exclude the possibility of involving other segments of medical services in marketing activities.

  10. Multi-stage learning for robust lung segmentation in challenging CT volumes.

    PubMed

    Sofka, Michal; Wetzl, Jens; Birkbeck, Neil; Zhang, Jingdan; Kohlberger, Timo; Kaftan, Jens; Declerck, Jérôme; Zhou, S Kevin

    2011-01-01

Simple algorithms for segmenting healthy lung parenchyma in CT are unable to deal with the high-density tissue common in pulmonary diseases. To overcome this problem, we propose a multi-stage learning-based approach that combines anatomical information to predict an initialization of a statistical shape model of the lungs. The initialization first detects the carina of the trachea, and uses this to detect a set of automatically selected stable landmarks on regions near the lungs (e.g., ribs, spine). These landmarks are used to align the shape model, which is then refined through boundary detection to obtain a fine-grained segmentation. Robustness is obtained through the hierarchical use of discriminative classifiers trained on a range of manually annotated data of diseased and healthy lungs. We demonstrate fast detection (35 s per volume on average) and segmentation with 2 mm accuracy on challenging data.

  11. The position response of a large-volume segmented germanium detector

    NASA Astrophysics Data System (ADS)

    Descovich, M.; Nolan, P. J.; Boston, A. J.; Dobson, J.; Gros, S.; Cresswell, J. R.; Simpson, J.; Lazarus, I.; Regan, P. H.; Valiente-Dobon, J. J.; Sellin, P.; Pearson, C. J.

    2005-11-01

    The position response of a large-volume segmented coaxial germanium detector is reported. The detector has 24-fold segmentation on its outer contact. The output from each contact was sampled with fast digital signal processing electronics in order to determine the position of the γ-ray interaction from the signal pulse shape. The interaction position was reconstructed in a polar coordinate system by combining the radial information, contained in the rise-time of the pulse leading edge, with the azimuthal information, obtained from the magnitude of the transient charge signals induced on the neighbouring segments. With this method, a position resolution of 3-7 mm is achieved in both the radial and the azimuthal directions.

  12. Robust semi-automatic segmentation of single- and multichannel MRI volumes through adaptable class-specific representation

    NASA Astrophysics Data System (ADS)

    Nielsen, Casper F.; Passmore, Peter J.

    2002-05-01

Segmentation of MRI volumes is complicated by noise, inhomogeneity, and partial volume artefacts. Fully or semi-automatic methods often require time-consuming or unintuitive initialization. Adaptable Class-Specific Representation (ACSR) is a semi-automatic segmentation framework implemented by the Path Growing Algorithm (PGA), which reduces artefacts near segment boundaries. The user visually defines the desired segment classes through the selection of class templates, and the subsequent segmentation process is fully automatic. Good results have previously been achieved with color cryo-section segmentation, and ACSR has been developed further for the MRI modality. In this paper we present two optimizations for robust ACSR segmentation of MRI volumes. Automatic template creation, based on an initial segmentation step using Learning Vector Quantization, is applied for higher robustness to noise. Inhomogeneity correction is added as a pre-processing step, comparing the EQ and N3 algorithms. Results based on simulated T1-weighted and multispectral (T1 and T2) MRI data from the BrainWeb database and real data from the Internet Brain Segmentation Repository are presented. We show that ACSR segmentation compares favorably to previously published results on the same volumes, and discuss the pros and cons of quantitative ground-truth evaluation compared to qualitative visual assessment.

  13. Automatic segmentation of blood vessels from MR angiography volume data by using fuzzy logic technique

    NASA Astrophysics Data System (ADS)

    Kobashi, Syoji; Hata, Yutaka; Tokimoto, Yasuhiro; Ishikawa, Makato

    1999-05-01

This paper presents a novel medical image segmentation method applied to blood vessel segmentation from magnetic resonance angiography volume data. The principal idea of the method is the fuzzy information granulation concept. The method consists of two parts: (1) quantization and feature extraction, and (2) iterative fuzzy synthesis. In the first part, volume quantization is performed with the watershed segmentation technique. Each quantum is represented by three features: vascularity, narrowness, and histogram consistency. Using these features, we estimate the fuzzy degrees of each quantum with respect to knowledge models of MRA volume data. In the second part, the method increases the fuzzy degrees by selectively synthesizing neighboring quantums. As a result, we obtain synthesized quantums, which we regard as fuzzy granules and classify into blood vessel or fat by evaluating their fuzzy degrees. In the experiments, three-dimensional images were generated using targeted maximum intensity projection (MIP) and surface shaded display. Comparison with conventional MIP images shows that regions that are unclear in the conventional images are clearly depicted in ours. A qualitative evaluation by a physician shows that our method can extract the blood vessel region and that the results are useful for diagnosing cerebral diseases.

  14. Hitchhiker’s Guide to Voxel Segmentation for Partial Volume Correction of In Vivo Magnetic Resonance Spectroscopy

    PubMed Central

    Quadrelli, Scott; Mountford, Carolyn; Ramadan, Saadallah

    2016-01-01

    Partial volume effects have the potential to cause inaccuracies when quantifying metabolites using proton magnetic resonance spectroscopy (MRS). In order to correct for cerebrospinal fluid content, a spectroscopic voxel needs to be segmented according to different tissue contents. This article aims to detail how automated partial volume segmentation can be undertaken and provides a software framework for researchers to develop their own tools. While many studies have detailed the impact of partial volume correction on proton magnetic resonance spectroscopy quantification, there is a paucity of literature explaining how voxel segmentation can be achieved using freely available neuroimaging packages. PMID:27147822

  15. Automated cerebellar segmentation: Validation and application to detect smaller volumes in children prenatally exposed to alcohol☆

    PubMed Central

    Cardenas, Valerie A.; Price, Mathew; Infante, M. Alejandra; Moore, Eileen M.; Mattson, Sarah N.; Riley, Edward P.; Fein, George

    2014-01-01

Objective: To validate an automated cerebellar segmentation method based on active shape and appearance modeling, and then segment the cerebellum on images acquired from adolescents with histories of prenatal alcohol exposure (PAE) and non-exposed controls (NC). Methods: Automated segmentations of the total cerebellum, right and left cerebellar hemispheres, and three vermal lobes (anterior, lobules I-V; superior posterior, lobules VI-VII; inferior posterior, lobules VIII-X) were compared to expert manual labelings on 20 subjects, studied twice, that were not used for model training. The method was also used to segment the cerebellum on 11 PAE and 9 NC adolescents. Results: The test-retest intraclass correlation coefficients (ICCs) of the automated method were greater than 0.94 for all cerebellar volume and mid-sagittal vermal area measures, comparable to or better than the test-retest ICCs for manual measurement (all ICCs > 0.92). The ICCs computed on all four cerebellar measurements (manual and automated measures on the repeat scans) to assess comparability were above 0.97 for non-vermis parcels, and above 0.89 for vermis parcels. When applied to patients, the automated method detected smaller cerebellar volumes and mid-sagittal areas in the PAE group compared to controls (p < 0.05 for all regions except the superior posterior lobe, consistent with prior studies). Discussion: These results demonstrate excellent reliability and validity of automated cerebellar volume and mid-sagittal area measurements compared to manual measurements. These data also illustrate that this new technology for automatically delineating the cerebellum leads to conclusions regarding the effects of prenatal alcohol exposure on the cerebellum consistent with prior studies that used labor-intensive manual delineation, even with a very small sample. PMID:25061566
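The test-retest reliability figures above are intraclass correlation coefficients. One common variant for an n-subjects-by-k-sessions table is the two-way random-effects, absolute-agreement ICC(2,1) of Shrout and Fleiss; whether the study used exactly this form is not stated, so the sketch below is illustrative:

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single rater,
    for an (n subjects x k raters/sessions) matrix."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)          # per-subject means
    col_means = data.mean(axis=0)          # per-session means
    ssr = k * ((row_means - grand) ** 2).sum()   # between-subjects SS
    ssc = n * ((col_means - grand) ** 2).sum()   # between-sessions SS
    sst = ((data - grand) ** 2).sum()
    sse = sst - ssr - ssc                        # residual SS
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfectly reproducible repeat measurements give an ICC of 1; measurement noise pulls the value below 1, which is why the > 0.94 figures above indicate excellent reliability.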

  16. Ray analysis of parabolic-index segmented planar waveguides.

    PubMed

    Rastogi, V; Ghatak, A K; Ostrowsky, D B; Thyagarajan, K; Shenoy, M R

    1998-07-20

A ray analysis of periodically segmented waveguides with parabolic-index variation in the high-index region is presented. We carried out the analysis using ray transfer matrices, which are convenient to implement and can be extended to study different types of graded-index segmented waveguides. Results of this ray-tracing approach clearly illustrate the waveguiding properties and the existence of stable and unstable regions of operation in segmented waveguides. We also illustrate the tapering action exhibited by segmented waveguides in which the duty cycle varies along the length of the waveguide. This analysis, although restricted to multimode structures, provides a clear visualization of the waveguiding properties in terms of ray propagation in segmented waveguides.
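Ray transfer (ABCD) matrix analysis of this kind multiplies one matrix per section and checks the trace of the one-period matrix: the periodic guide is ray-stable iff |trace| <= 2, which is one way the stable and unstable regions above can arise. A sketch under the reduced-angle (n*theta) convention, writing the parabolic profile as n(r) = n0*(1 - (g*r)^2/2); the specific parameter values in the usage note are illustrative, not from the paper:

```python
import numpy as np

def grin_segment(g, n0, length):
    """ABCD matrix of a parabolic-index (GRIN) section; rays oscillate
    sinusoidally with spatial frequency g inside the high-index segment."""
    c, s = np.cos(g * length), np.sin(g * length)
    return np.array([[c, s / (n0 * g)],
                     [-n0 * g * s, c]])

def homogeneous_segment(n0, length):
    """ABCD matrix of a uniform-index gap (free drift of the ray)."""
    return np.array([[1.0, length / n0],
                     [0.0, 1.0]])

def is_stable(period_matrix):
    """Periodic guide confines rays iff |trace(M)| <= 2 (Floquet condition)."""
    return abs(np.trace(period_matrix)) <= 2.0
```

For example, with g = 1, n0 = 1.5, and a quarter-oscillation GRIN section, a short gap keeps the period matrix stable while a long gap pushes |trace| past 2, i.e., rays walk off, mirroring the stable/unstable duty-cycle regions the abstract describes.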

  17. A unifying framework for partial volume segmentation of brain MR images.

    PubMed

    Van Leemput, Koen; Maes, Frederik; Vandermeulen, Dirk; Suetens, Paul

    2003-01-01

    Accurate brain tissue segmentation by intensity-based voxel classification of magnetic resonance (MR) images is complicated by partial volume (PV) voxels that contain a mixture of two or more tissue types. In this paper, we present a statistical framework for PV segmentation that encompasses and extends existing techniques. We start from a commonly used parametric statistical image model in which each voxel belongs to one single tissue type, and introduce an additional downsampling step that causes partial voluming along the borders between tissues. An expectation-maximization approach is used to simultaneously estimate the parameters of the resulting model and perform a PV classification. We present results on well-chosen simulated images and on real MR images of the brain, and demonstrate that the use of appropriate spatial prior knowledge not only improves the classifications, but is often indispensable for robust parameter estimation as well. We conclude that general robust PV segmentation of MR brain images requires statistical models that describe the spatial distribution of brain tissues more accurately than currently available models.

  18. A novel colonic polyp volume segmentation method for computer tomographic colonography

    NASA Astrophysics Data System (ADS)

    Wang, Huafeng; Li, Lihong C.; Han, Hao; Song, Bowen; Peng, Hao; Wang, Yunhong; Wang, Lihua; Liang, Zhengrong

    2014-03-01

Colorectal cancer is the third most common type of cancer. However, this disease can be prevented by the detection and removal of precursor adenomatous polyps after diagnosis by experts on computed tomographic colonography (CTC). During CTC diagnosis, the radiologist looks for colon polyps and assesses not only their size but also their malignancy. Segmenting polyp volumes from their complicated surrounding environment is of great significance for the CTC-based early diagnosis task. Previously, polyp volumes were mainly obtained by manual or semi-automatic delineation by radiologists. As a result, some deviations cannot be avoided, since the polyps are usually small (6-9 mm) and the radiologists' experience and knowledge vary from one to another. In order to achieve automatic polyp segmentation by machine, we propose a new method based on a colon decomposition strategy. We evaluated our algorithm on both phantom and patient data. Experimental results demonstrate that our approach is capable of segmenting small polyps from their complicated growing background.

  19. Hierarchical probabilistic Gabor and MRF segmentation of brain tumours in MRI volumes.

    PubMed

    Subbanna, Nagesh K; Precup, Doina; Collins, D Louis; Arbel, Tal

    2013-01-01

    In this paper, we present a fully automated hierarchical probabilistic framework for segmenting brain tumours from multispectral human brain magnetic resonance images (MRIs) using multiwindow Gabor filters and an adapted Markov Random Field (MRF) framework. In the first stage, a customised Gabor decomposition is developed, based on the combined-space characteristics of the two classes (tumour and non-tumour) in multispectral brain MRIs in order to optimally separate tumour (including edema) from healthy brain tissues. A Bayesian framework then provides a coarse probabilistic texture-based segmentation of tumours (including edema) whose boundaries are then refined at the voxel level through a modified MRF framework that carefully separates the edema from the main tumour. This customised MRF is not only built on the voxel intensities and class labels as in traditional MRFs, but also models the intensity differences between neighbouring voxels in the likelihood model, along with employing a prior based on local tissue class transition probabilities. The second inference stage is shown to resolve local inhomogeneities and impose a smoothing constraint, while also maintaining the appropriate boundaries as supported by the local intensity difference observations. The method was trained and tested on the publicly available MICCAI 2012 Brain Tumour Segmentation Challenge (BRATS) Database [1] on both synthetic and clinical volumes (low grade and high grade tumours). Our method performs well compared to state-of-the-art techniques, outperforming the results of the top methods in cases of clinical high grade and low grade tumour core segmentation by 40% and 45% respectively.

  20. A modified probabilistic neural network for partial volume segmentation in brain MR image.

    PubMed

    Song, Tao; Jamshidi, Mo M; Lee, Roland R; Huang, Mingxiong

    2007-09-01

A modified probabilistic neural network (PNN) for brain tissue segmentation with magnetic resonance imaging (MRI) is proposed. In this approach, covariance matrices are used to replace the single smoothing factor in the PNN's kernel function, and weighting factors are added in the pattern summation layer. This weighted probabilistic neural network (WPNN) classifier can account for partial volume effects, which are common in MRI, not only in the final result stage but also in the modeling process. It adopts a self-organizing map (SOM) neural network to over-segment the input MR image and yield the reference vectors necessary for probability density function (pdf) estimation. A supervised "soft" labeling mechanism based on the Bayesian rule is developed, so that weighting factors can be generated along with the corresponding SOM reference vectors. Tissue classification results from various algorithms are compared, and the effectiveness and robustness of the proposed approach are demonstrated. PMID:18220190

  1. Segmentation of cerebral MRI scans using a partial volume model, shading correction, and an anatomical prior

    NASA Astrophysics Data System (ADS)

    Noe, Aljaz; Kovacic, Stanislav; Gee, James C.

    2001-07-01

    A mixture-model clustering algorithm is presented for robust MRI brain image segmentation in the presence of partial volume averaging. The method uses additional classes to represent partial volume voxels of mixed tissue type in the image. Probability distributions for partial volume voxels are modeled accordingly. The image model also allows for tissue-dependent variance values and voxel neighborhood information is taken into account in the clustering formulation. Additionally we extend the image model to account for a low frequency intensity inhomogeneity that may be present in an image. This so-called shading effect is modeled as a linear combination of polynomial basis functions, and is estimated within the clustering algorithm. We also investigate the possibility of using additional anatomical prior information obtained by registering tissue class template images to the image to be segmented. The final result is the estimated fractional amount of each tissue type present within a voxel in addition to the label assigned to the voxel. A parallel implementation of the method is evaluated using synthetic and real MRI data.

  2. An interactive system for volume segmentation in computer-assisted surgery

    NASA Astrophysics Data System (ADS)

    Kunert, Tobias; Heimann, Tobias; Schroter, Andre; Schobinger, Max; Bottger, Thomas; Thorn, Matthias; Wolf, Ivo; Engelmann, Uwe; Meinzer, Hans-Peter

    2004-05-01

Computer-assisted surgery aims at decreased surgical risk and reduced recovery time for patients. However, its use is still limited to complex cases because of the high effort involved, which is often caused by extensive medical image analysis. In particular, image segmentation requires a lot of manual work, and surgeons and radiologists suffer from the usability problems of many workstations. In this work, we present a dedicated workplace for interactive segmentation, integrated within the CHILI (tele-)radiology system. The software comes with many improvements with respect to its graphical user interface, the segmentation process, and the segmentation methods. We point out important software requirements and give insight into the concepts that were implemented. Further examples and applications illustrate the software system.

  3. Associations between Family Adversity and Brain Volume in Adolescence: Manual vs. Automated Brain Segmentation Yields Different Results

    PubMed Central

    Lyden, Hannah; Gimbel, Sarah I.; Del Piero, Larissa; Tsai, A. Bryna; Sachs, Matthew E.; Kaplan, Jonas T.; Margolin, Gayla; Saxbe, Darby

    2016-01-01

    Associations between brain structure and early adversity have been inconsistent in the literature. These inconsistencies may be partially due to methodological differences. Different methods of brain segmentation may produce different results, obscuring the relationship between early adversity and brain volume. Moreover, adolescence is a time of significant brain growth and certain brain areas have distinct rates of development, which may compromise the accuracy of automated segmentation approaches. In the current study, 23 adolescents participated in two waves of a longitudinal study. Family aggression was measured when the youths were 12 years old, and structural scans were acquired an average of 4 years later. Bilateral amygdalae and hippocampi were segmented using three different methods (manual tracing, FSL, and NeuroQuant). The segmentation estimates were compared, and linear regressions were run to assess the relationship between early family aggression exposure and all three volume segmentation estimates. Manual tracing results showed a positive relationship between family aggression and right amygdala volume, whereas FSL segmentation showed negative relationships between family aggression and both the left and right hippocampi. However, results indicate poor overlap between methods, and different associations were found between early family aggression exposure and brain volume depending on the segmentation method used. PMID:27656121

  7. Influences of skull segmentation inaccuracies on EEG source analysis.

    PubMed

    Lanfer, B; Scherg, M; Dannhauer, M; Knösche, T R; Burger, M; Wolters, C H

    2012-08-01

    The low-conducting human skull is known to have an especially large influence on electroencephalography (EEG) source analysis. Because of difficulties segmenting the complex skull geometry out of magnetic resonance images, volume conductor models for EEG source analysis might contain inaccuracies and simplifications regarding the geometry of the skull. The computer simulation study presented here investigated the influences of a variety of skull geometry deficiencies on EEG forward simulations and source reconstruction from EEG data. Reference EEG data was simulated in a detailed and anatomically plausible reference model. Test models were derived from the reference model representing a variety of skull geometry inaccuracies and simplifications. These included erroneous skull holes, local errors in skull thickness, modeling cavities as bone, downward extension of the model and simplifying the inferior skull or the inferior skull and scalp as layers of constant thickness. The reference EEG data was compared to forward simulations in the test models, and source reconstruction in the test models was performed on the simulated reference data. The finite element method with high-resolution meshes was employed for all forward simulations. It was found that large skull geometry inaccuracies close to the source space, for example, when cutting the model directly below the skull, led to errors of 20 mm and more for extended source space regions. Local defects, for example, erroneous skull holes, caused non-negligible errors only in the vicinity of the defect. The study design allowed a comparison of influence size, and guidelines for modeling the skull geometry were concluded.

  8. Chest-wall segmentation in automated 3D breast ultrasound images using thoracic volume classification

    NASA Astrophysics Data System (ADS)

    Tan, Tao; van Zelst, Jan; Zhang, Wei; Mann, Ritse M.; Platel, Bram; Karssemeijer, Nico

    2014-03-01

    Computer-aided detection (CAD) systems are expected to improve the effectiveness and efficiency of radiologists in reading automated 3D breast ultrasound (ABUS) images. One challenging task in developing CAD is reducing the large number of false positives, many of which originate from acoustic shadowing caused by ribs. Determining the location of the chest wall in ABUS is therefore necessary in CAD systems to remove these false positives; additionally, it can be used as an anatomical landmark for inter- and intra-modal image registration. In this work, we extended our previously developed chest-wall segmentation method, which fits a cylinder to automatically detected rib-surface points, by minimizing a cost function that adds a region-cost term computed from a thoracic volume classifier to improve segmentation accuracy. We examined the performance on a dataset of 52 images on which our previously developed method fails. Using the region-based cost, the average mean distance of the annotated points to the segmented chest wall decreased from 7.57±2.76 mm to 6.22±2.86 mm.

  9. Automatic delineation of tumor volumes by co-segmentation of combined PET/MR data

    NASA Astrophysics Data System (ADS)

    Leibfarth, S.; Eckert, F.; Welz, S.; Siegel, C.; Schmidt, H.; Schwenzer, N.; Zips, D.; Thorwarth, D.

    2015-07-01

    Combined PET/MRI may be highly beneficial for radiotherapy treatment planning in terms of tumor delineation and characterization. To standardize tumor volume delineation, an automatic algorithm for the co-segmentation of head and neck (HN) tumors based on PET/MR data was developed. Ten HN patient datasets acquired in a combined PET/MR system were available for this study. The proposed algorithm uses both the anatomical T2-weighted MR and FDG-PET data. For both imaging modalities tumor probability maps were derived, assigning each voxel a probability of being cancerous based on its signal intensity. A combination of these maps was subsequently segmented using a threshold level set algorithm. To validate the method, tumor delineations from three radiation oncologists were available. Inter-observer variabilities and variabilities between the algorithm and each observer were quantified by means of the Dice similarity index and a distance measure. Inter-observer variabilities and variabilities between observers and algorithm were found to be comparable, suggesting that the proposed algorithm is adequate for PET/MR co-segmentation. Moreover, taking into account combined PET/MR data resulted in more consistent tumor delineations compared to MR information only.
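
    The Dice similarity index used above to quantify delineation agreement can be computed directly from two binary masks. A minimal NumPy sketch; the masks here are toy arrays, not PET/MR data:

```python
import numpy as np

def dice_index(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity index between two binary masks (1 = inside delineation)."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two toy 3D "delineations" differing by one voxel
obs = np.zeros((4, 4, 4), dtype=bool)
obs[1:3, 1:3, 1:3] = True            # 8 voxels
alg = obs.copy()
alg[3, 3, 3] = True                  # 9 voxels
print(round(dice_index(obs, alg), 3))  # 2*8/(8+9) ≈ 0.941
```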

  10. Segmentation of interest region in medical volume images using geometric deformable model.

    PubMed

    Lee, Myungeun; Cho, Wanhyun; Kim, Sunworl; Park, Soonyoung; Kim, Jong Hyo

    2012-05-01

    In this paper, we present a new segmentation method using the level set framework for medical volume images. The method was implemented using the surface evolution principle based on the geometric deformable model and level set theory. The speed function in the level set approach consists of a hybrid combination of three integral measures derived from the calculus of variations: robust alignment, active region, and smoothing. These terms help to obtain the precise surface of the target object and prevent the boundary leakage problem. The proposed method has been tested on synthetic and various medical volume images containing normal tissue and tumor regions in order to evaluate its performance on visual and quantitative data. The quantitative validation of the proposed segmentation shows a higher Jaccard's measure score (72.52%-94.17%) and lower Hausdorff distance (1.2654 mm-3.1527 mm) than other methods such as mean speed (67.67%-93.36% and 1.3361 mm-3.4463 mm), mean-variance speed (63.44%-94.72% and 1.3361 mm-3.4616 mm), and edge-based speed (0.76%-42.44% and 3.8010 mm-6.5389 mm). The experimental results confirm that the effectiveness and performance of our method are excellent compared with traditional approaches. PMID:22402196
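
    Jaccard's measure and the Hausdorff distance reported above can be sketched for binary masks as follows. This assumes SciPy's `directed_hausdorff` and uses toy arrays rather than medical volumes:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard's measure (intersection over union) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def hausdorff(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two voxel sets, in voxel units."""
    pts_a, pts_b = np.argwhere(mask_a), np.argwhere(mask_b)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

# Toy 3D masks: an 8-voxel cube vs. the same cube plus one stray voxel
ref = np.zeros((5, 5, 5), dtype=bool)
ref[1:3, 1:3, 1:3] = True
seg = ref.copy()
seg[4, 4, 4] = True
print(round(jaccard(ref, seg), 3))    # 8/9 ≈ 0.889
print(round(hausdorff(ref, seg), 3))  # sqrt(12) ≈ 3.464, from (4,4,4) to (2,2,2)
```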

  11. Multi-atlas segmentation of the cartilage in knee MR images with sequential volume- and bone-mask-based registrations

    NASA Astrophysics Data System (ADS)

    Lee, Han Sang; Kim, Hyeun A.; Kim, Hyeonjin; Hong, Helen; Yoon, Young Cheol; Kim, Junmo

    2016-03-01

    In spite of its clinical importance in the diagnosis of osteoarthritis, segmentation of cartilage in knee MRI remains a challenging task due to its shape variability and low contrast with surrounding soft tissues and synovial fluid. In this paper, we propose a multi-atlas segmentation of cartilage in knee MRI with sequential atlas registrations and locally-weighted voting (LWV). First, bone is segmented by sequential volume- and object-based registrations and LWV. Second, to overcome the shape variability of cartilage, cartilage is segmented by bone-mask-based registration and LWV. In experiments, the proposed method improved the bone segmentation by reducing misclassified bone regions, and enhanced the cartilage segmentation by preventing cartilage leakage into surrounding similar-intensity regions, with the help of sequential registrations and LWV.

  12. Reliability of tarsal bone segmentation and its contribution to MR kinematic analysis methods.

    PubMed

    Wolf, P; Luechinger, R; Stacoff, A; Boesiger, P; Stuessi, E

    2007-10-01

    The purpose of this study was to determine the reliability of tarsal bone segmentation based on magnetic resonance (MR) imaging using commercially available software. All tarsal bones of five subjects were segmented five times each by two operators. Volumes and second moments of volume were calculated and used to determine the intra- as well as inter-operator reproducibility. The results show that these morphological parameters had excellent intraclass correlation coefficients (>0.997), indicating that the presented tarsal bone segmentation is a reliable procedure and that operators are in fact interchangeable. The consequences of repeated segmentation for different MR kinematic analysis methods were also determined. It became evident that one analysis method--fitting surface point clouds--was considerably less affected by repeated segmentation (cuboid: up to 0.2 degrees, other tarsal bones: up to 0.1 degrees) than a method using principal axes (cuboid: up to 6.7 degrees, other tarsal bones: up to 0.8 degrees). Thus, the former method is recommended for investigations of tarsal bone kinematics by MR imaging.
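
    The morphological parameters used above, volume and second moments of volume, can be approximated for a binary voxel mask as follows. This is a simplified sketch of the concept, not the commercial software's implementation:

```python
import numpy as np

def volume_and_second_moments(mask: np.ndarray, voxel_size: float = 1.0):
    """Volume and second moments of volume (about centroidal planes) for a
    binary voxel mask, approximating the integral of x^2 dV per axis."""
    pts = np.argwhere(mask) * voxel_size          # voxel-centre coordinates
    volume = pts.shape[0] * voxel_size ** 3
    centered = pts - pts.mean(axis=0)
    # sum of squared centroidal distances per axis, weighted by voxel volume
    moments = (centered ** 2).sum(axis=0) * voxel_size ** 3
    return volume, moments

mask = np.zeros((4, 4, 4), dtype=bool)
mask[0:2, 0:2, 0:2] = True                        # a 2x2x2 voxel cube
vol, mom = volume_and_second_moments(mask)
print(vol, mom)  # 8.0 and [2. 2. 2.]
```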

  13. Whole-body and segmental muscle volume are associated with ball velocity in high school baseball pitchers

    PubMed Central

    Yamada, Yosuke; Yamashita, Daichi; Yamamoto, Shinji; Matsui, Tomoyuki; Seo, Kazuya; Azuma, Yoshikazu; Kida, Yoshikazu; Morihara, Toru; Kimura, Misaka

    2013-01-01

    The aim of the study was to examine the relationship between pitching ball velocity and segmental (trunk, upper arm, forearm, upper leg, and lower leg) and whole-body muscle volume (MV) in high school baseball pitchers. Forty-seven male high school pitchers (40 right-handers and seven left-handers; age, 16.2 ± 0.7 years; stature, 173.6 ± 4.9 cm; mass, 65.0 ± 6.8 kg; years of baseball experience, 7.5 ± 1.8 years; maximum pitching ball velocity, 119.0 ± 9.0 km/hour) participated in the study. Segmental and whole-body MV were measured using segmental bioelectrical impedance analysis. Maximum ball velocity was measured with a sports radar gun. The MV of the dominant arm was significantly larger than the MV of the non-dominant arm (P < 0.001). There was no difference in MV between the dominant and non-dominant legs. Whole-body MV was significantly correlated with ball velocity (r = 0.412, P < 0.01). Trunk MV was not correlated with ball velocity, but the MV of both lower legs and of the dominant upper leg, upper arm, and forearm was significantly correlated with ball velocity (P < 0.05). The results were not affected by age or years of baseball experience. Whole-body and segmental MV are associated with ball velocity in high school baseball pitchers. However, the contribution of muscle mass to pitching ball velocity is limited; thus, other fundamental factors (i.e., pitching skill) are also important. PMID:24379713

  16. Segmentation and quantitative analysis of individual cells in developmental tissues.

    PubMed

    Nandy, Kaustav; Kim, Jusub; McCullough, Dean P; McAuliffe, Matthew; Meaburn, Karen J; Yamaguchi, Terry P; Gudla, Prabhakar R; Lockett, Stephen J

    2014-01-01

    Image analysis is vital for extracting quantitative information from biological images and is used extensively, including investigations in developmental biology. The technique commences with the segmentation (delineation) of objects of interest from 2D images or 3D image stacks and is usually followed by the measurement and classification of the segmented objects. This chapter focuses on the segmentation task and here we explain the use of ImageJ, MIPAV (Medical Image Processing, Analysis, and Visualization), and VisSeg, three freely available software packages for this purpose. ImageJ and MIPAV are extremely versatile and can be used in diverse applications. VisSeg is a specialized tool for performing highly accurate and reliable 2D and 3D segmentation of objects such as cells and cell nuclei in images and stacks.

  17. Segmentation and learning in the quantitative analysis of microscopy images

    NASA Astrophysics Data System (ADS)

    Ruggiero, Christy; Ross, Amy; Porter, Reid

    2015-02-01

    In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest to use machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.

  18. 4-D segmentation and normalization of 3He MR images for intrasubject assessment of ventilated lung volumes

    NASA Astrophysics Data System (ADS)

    Contrella, Benjamin; Tustison, Nicholas J.; Altes, Talissa A.; Avants, Brian B.; Mugler, John P., III; de Lange, Eduard E.

    2012-03-01

    Although 3He MRI permits compelling visualization of the pulmonary air spaces, quantitation of absolute ventilation is difficult due to confounds such as field inhomogeneity and relative intensity differences between image acquisitions, the latter complicating longitudinal investigations of ventilation variation with respiratory alterations. To address these potential difficulties, we present a 4-D segmentation and normalization approach for intra-subject quantitative analysis of hyperpolarized 3He lung MRI. After normalization, which combines bias correction and relative intensity scaling between longitudinal data, partitioning of the lung volume time series is performed by iterating between modeling of the combined intensity histogram as a Gaussian mixture model and modulating the spatial heterogeneity of tissue class assignments through Markov random field modeling. The algorithm was retrospectively evaluated on a cohort of 10 asthmatics aged 19-25 years in which spirometry and 3He MR ventilation images were acquired both before and after respiratory exacerbation by a bronchoconstricting agent (methacholine). Acquisition was repeated under the same conditions 7 to 467 days (mean +/- standard deviation: 185 +/- 37.2) later. Several techniques for matching intensities between the pre- and post-methacholine images were evaluated, with 95th-percentile histogram matching demonstrating superior correlations with spirometry measures. Subsequent analysis evaluated segmentation parameters for assessing ventilation change in this cohort. Current findings also support previous research that areas of poor ventilation in response to bronchoconstriction are relatively consistent over time.
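
    The 95th-percentile intensity matching found to work best above can be sketched as a simple linear rescaling. This omits the bias-correction step and uses synthetic intensities rather than MR data:

```python
import numpy as np

def match_p95(moving: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Linearly rescale `moving` so its 95th-percentile intensity equals that
    of `reference` -- a simplified stand-in for longitudinal intensity
    normalization (no bias-field correction)."""
    p_mov = np.percentile(moving, 95)
    p_ref = np.percentile(reference, 95)
    return moving * (p_ref / p_mov)

rng = np.random.default_rng(0)
pre = rng.gamma(2.0, 50.0, size=100_000)    # synthetic "pre" intensities
post = rng.gamma(2.0, 80.0, size=100_000)   # same shape, different scanner gain
post_norm = match_p95(post, pre)
print(np.allclose(np.percentile(post_norm, 95), np.percentile(pre, 95)))  # True
```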

  19. Four-chamber heart modeling and automatic segmentation for 3-D cardiac CT volumes using marginal space learning and steerable features.

    PubMed

    Zheng, Yefeng; Barbu, Adrian; Georgescu, Bogdan; Scheuering, Michael; Comaniciu, Dorin

    2008-11-01

    We propose an automatic four-chamber heart segmentation system for the quantitative functional analysis of the heart from cardiac computed tomography (CT) volumes. Two topics are discussed: heart modeling and automatic model fitting to an unseen volume. Heart modeling is a nontrivial task since the heart is a complex nonrigid organ. The model must be anatomically accurate, allow manual editing, and provide sufficient information to guide automatic detection and segmentation. Unlike previous work, we explicitly represent important landmarks (such as the valves and the ventricular septum cusps) among the control points of the model. The control points can be detected reliably to guide the automatic model fitting process. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3-D CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. In both steps, we exploit the recent advances in learning discriminative models. A novel algorithm, marginal space learning (MSL), is introduced to solve the 9-D similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3-D shape through learning-based boundary delineation. The proposed method has been extensively tested on the largest dataset (with 323 volumes from 137 patients) ever reported in the literature. To the best of our knowledge, our system is the fastest with a speed of 4.0 s per volume (on a dual-core 3.2-GHz processor) for the automatic segmentation of all four chambers.

  20. Semiautomatic white blood cell segmentation based on multiscale analysis.

    PubMed

    Dorini, L B; Minetto, R; Leite, N J

    2013-01-01

    This paper presents novel methods to segment the nucleus and cytoplasm of white blood cells (WBC). This information is the basis for higher-level tasks such as automatic differential counting, which plays an important role in the diagnosis of different diseases. We explore the image simplification and contour regularization resulting from the application of the Self-Dual Multiscale Morphological Toggle (SMMT), an operator with scale-space properties. To segment the nucleus, image preprocessing with SMMT has shown to be essential to ensure the accuracy of two well-known image segmentation techniques, namely, the watershed transform and Level Set methods. To identify the cytoplasm region, we propose two different schemes, based on granulometric analysis and on morphological transformations. The proposed methods have been successfully applied to a large number of images, showing promising segmentation and classification results for varying cell appearance and image quality, encouraging future work.
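
    The watershed transform mentioned above can be sketched with scikit-image's marker-controlled watershed on a toy image of two touching objects. Note this sketch omits the SMMT preprocessing the paper relies on, and assumes scikit-image and SciPy are available:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Toy "cells": two overlapping disks to be split into separate objects
x, y = np.indices((80, 80))
blobs = ((x - 28) ** 2 + (y - 28) ** 2 < 16 ** 2) | \
        ((x - 44) ** 2 + (y - 52) ** 2 < 16 ** 2)

# Markers from maxima of the distance transform, then marker-controlled
# watershed on its negation to separate the touching objects
distance = ndi.distance_transform_edt(blobs)
coords = peak_local_max(distance, min_distance=20, labels=blobs)
markers = np.zeros(blobs.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
labels = watershed(-distance, markers, mask=blobs)
print(labels.max())  # number of separated objects
```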

  1. Leaf image segmentation method based on multifractal detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Li, Jin-Wei; Shi, Wen; Liao, Gui-Ping

    2013-12-01

    To identify singular regions of crop leaves affected by disease, an image segmentation method based on multifractal detrended fluctuation analysis (MF-DFA) is proposed. In the proposed method, we first define a new texture descriptor, the local generalized Hurst exponent, denoted LHq, based on MF-DFA. Then, the box-counting dimension f(LHq) is calculated for sub-images constituted by the LHq values of pixels from a specific region, so that a series of f(LHq) values for the different regions can be obtained. Finally, the singular regions are segmented according to the corresponding f(LHq). Images of six kinds of diseased corn leaves are tested in our experiments. The proposed method is compared with two other segmentation methods, one based on the multifractal spectrum and one on fuzzy C-means clustering. The comparison results demonstrate that the proposed method can recognize the lesion regions more effectively and provides more robust segmentations.
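
    The box-counting dimension at the heart of the method can be estimated for any binary mask by counting occupied boxes at dyadic scales and fitting log N(s) against log(1/s). A generic sketch, not the authors' implementation:

```python
import numpy as np

def box_counting_dimension(mask: np.ndarray) -> float:
    """Estimate the box-counting (fractal) dimension of a square binary mask
    whose side is a power of two."""
    n = mask.shape[0]
    assert mask.shape == (n, n) and n & (n - 1) == 0, "square, power-of-two side"
    sizes, counts = [], []
    s = n
    while s >= 1:
        blocks = mask.reshape(n // s, s, n // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())  # occupied boxes of side s
        sizes.append(s)
        s //= 2
    # dimension = slope of log N(s) vs log(1/s)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

filled = np.ones((64, 64), dtype=bool)     # a filled plane region
line = np.zeros((64, 64), dtype=bool)
line[0, :] = True                          # a straight line
print(round(box_counting_dimension(filled), 2))  # 2.0
print(round(box_counting_dimension(line), 2))    # 1.0
```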

  2. Linear test bed. Volume 1: Test bed no. 1. [aerospike test bed with segmented combustor

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The Linear Test Bed program was conducted to design, fabricate, and evaluation-test an advanced aerospike test bed that employed the segmented combustor concept. The system is designated as a linear aerospike system and consists of a thrust chamber assembly, a power package, and a thrust frame. It was designed as an experimental system to demonstrate the feasibility of the linear aerospike-segmented combustor concept. The overall dimensions are 120 inches long by 120 inches wide by 96 inches in height. The propellants are liquid oxygen/liquid hydrogen. The system was designed to operate at 1200-psia chamber pressure, at a mixture ratio of 5.5. At the design conditions, the sea level thrust is 200,000 pounds. The complete program, including concept selection, design, fabrication, component testing, system testing, supporting analysis, and posttest hardware inspection, is described.

  3. Three-dimensional segmentation of pulmonary artery volume from thoracic computed tomography imaging

    NASA Astrophysics Data System (ADS)

    Lindenmaier, Tamas J.; Sheikh, Khadija; Bluemke, Emma; Gyacskov, Igor; Mura, Marco; Licskai, Christopher; Mielniczuk, Lisa; Fenster, Aaron; Cunningham, Ian A.; Parraga, Grace

    2015-03-01

    Chronic obstructive pulmonary disease (COPD) is a major contributor to hospitalization and healthcare costs in North America. While the hallmark of COPD is airflow limitation, it is also associated with abnormalities of the cardiovascular system. Enlargement of the pulmonary artery (PA) is a morphological marker of pulmonary hypertension, and was previously shown to predict acute exacerbations using a one-dimensional diameter measurement of the main PA. We hypothesized that a three-dimensional (3D) quantification of PA size would be more sensitive than 1D methods and encompass morphological changes along the entire central pulmonary artery. Hence, we developed a 3D measurement of the main (MPA), left (LPA), and right (RPA) pulmonary arteries as well as total PA volume (TPAV) from thoracic CT images. This approach incorporates segmentation of pulmonary vessels in cross-section for the MPA, LPA, and RPA to provide an estimate of their volumes. Three observers performed five repeated measurements for 15 ex-smokers with ≥10 pack-years, randomly identified from a larger dataset of 199 patients. There was strong agreement (r2=0.76) between PA volume and PA diameter measurements, the latter serving as the gold standard. Observer measurements were strongly correlated, and coefficients of variation for observer 1 (MPA: 2%, LPA: 3%, RPA: 2%, TPAV: 2%) were not significantly different from those of observers 2 and 3. In conclusion, we generated manual 3D pulmonary artery volume measurements from thoracic CT images that can be performed with high reproducibility. Future work will involve automation for implementation in clinical workflows.
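
    The coefficients of variation reported for the repeated measurements follow the usual definition, 100 × SD / mean. A minimal sketch with invented repeat values:

```python
import numpy as np

def coefficient_of_variation(measurements) -> float:
    """Coefficient of variation (%) of repeated measurements: 100 * SD / mean."""
    m = np.asarray(measurements, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()

# Five hypothetical repeated MPA volume measurements (cm^3) by one observer;
# the values are invented for illustration.
repeats = [31.8, 32.4, 32.1, 31.5, 32.2]
print(round(coefficient_of_variation(repeats), 1))  # → 1.1
```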

  4. Analysis of the Segmented Features of Indicator of Mine Presence

    NASA Astrophysics Data System (ADS)

    Krtalic, A.

    2016-06-01

    The aim of this research is to investigate the possibility of interactive semi-automatic interpretation of digital images in humanitarian demining, for the purpose of detecting and extracting (strong) indicators of mine presence visible in the images, according to the parameters of general geometric shapes rather than radiometric characteristics. For that purpose, objects are created by segmentation. The segments represent, as well as possible, the observed indicators and the objects that surround them (for analysis of the degree of discrimination of objects from their environment). These indicators cover a certain characteristic surface, and these areas are determined by segmenting the digital image. The sets of pixels that form such surfaces have specific geometric features, which makes it possible to analyze the features of the segments at the object, rather than the pixel, level. Factor analysis of the geometric parameters of these segments is performed in order to identify parameters that can be distinguished from the others according to their geometric features. The factor analysis was carried out in two different ways: according to the characteristics of the general geometric shape, and according to the type of strong indicator of mine presence. The continuation of this research is the implementation of automatic extraction of indicators of mine presence according to the results presented in this paper.

  5. Segment clustering methodology for unsupervised Holter recordings analysis

    NASA Astrophysics Data System (ADS)

    Rodríguez-Sotelo, Jose Luis; Peluffo-Ordoñez, Diego; Castellanos Dominguez, German

    2015-01-01

    Cardiac arrhythmia analysis of Holter recordings is an important issue in clinical settings; however, it implicitly involves other problems related to the large amount of unlabelled data, which implies a high computational cost. In this work an unsupervised methodology based on a segment framework is presented, which consists of dividing the raw data into a balanced number of segments in order to identify fiducial points, then characterizing and clustering the heartbeats in each segment separately. The resulting clusters are merged or split according to an assumed criterion of homogeneity. This framework compensates for the high computational cost of Holter analysis, making its implementation in future real-time applications possible. The performance of the method is measured on records from the MIT/BIH arrhythmia database and achieves high values of sensitivity and specificity, taking advantage of the database labels, for the broad range of heartbeat types recommended by the AAMI.

  6. Microscopy image segmentation tool: Robust image data analysis

    SciTech Connect

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-03-15

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for the analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for the analysis of scanning electron microscopy images of porous anodic alumina, MIST's capabilities have been expanded to cover a large variety of problems, including analysis of biological tissue, inorganic and organic film grain structure, and nano- and mesoscopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible, allowing incorporation of specialized user-developed analyses. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  7. A comprehensive segmentation analysis of crude oil market based on time irreversibility

    NASA Astrophysics Data System (ADS)

    Xia, Jianan; Shang, Pengjian; Lu, Dan; Yin, Yi

    2016-05-01

    In this paper, we perform a comprehensive entropic segmentation analysis of crude oil futures prices from 1983 to 2014, using the Jensen-Shannon divergence as the statistical distance between segments, and analyze the results from the original series S and the series beginning in 1986 (marked as S∗) to find common segments that share the same boundaries. We then apply time irreversibility analysis to each segment to divide all segments into two groups according to their degree of asymmetry. Based on the temporal distribution of the common segments and the high-asymmetry segments, we find that these two types of segments appear alternately and essentially do not overlap in the daily group, while in the weekly group the common portions are also high-asymmetry segments. In addition, the temporal distribution of the common segments lies fairly close to the times of crises, wars and other events, because the impact of severe events on the oil price makes these common segments quite different from their adjacent segments. The common segments can be confirmed in both the daily and the weekly group series owing to the large divergence between common segments and their neighbors, while the identification of high-asymmetry segments helps to recognize segments that were not badly affected by the events and could recover to steady states on their own. Finally, we rearrange the segments by merging connected common segments or high-asymmetry segments into a single segment, and by conjoining the connected segments that are neither common nor highly asymmetric.
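    The Jensen-Shannon divergence used as the distance between segments can be written as an average of Kullback-Leibler divergences to the mixture M = (P+Q)/2. A minimal sketch (the function name and the base-2 logarithm are our choices, not necessarily the paper's):

```python
from math import log2

def jensen_shannon(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]

    def kl(a, b):  # Kullback-Leibler divergence, skipping zero-probability terms
        return sum(ai * log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

    With base-2 logarithms the divergence is bounded in [0, 1]; in an entropic segmentation scheme it would be evaluated between the empirical distributions of two adjacent windows of the price series.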

  8. Extracellular and intracellular volume variations during postural change measured by segmental and wrist-ankle bioimpedance spectroscopy.

    PubMed

    Fenech, Marianne; Jaffrin, Michel Y

    2004-01-01

    Extracellular (ECW) and intracellular (ICW) water volumes were measured using both segmental and wrist-ankle (W-A) bioimpedance spectroscopy (5-1000 kHz) in 15 healthy subjects (7 men, 8 women). In the first protocol, the subject, after sitting for 30 min, lay supine for at least 30 min. In the second protocol, the subject, who had been supine for 1 h, sat up in bed for 10 min and returned to the supine position for another hour. Segmental ECW and ICW resistances of the legs, arms and trunk were measured by placing four voltage electrodes on the wrist, shoulder, top of the thigh and ankle, and by using Hanai's conductivity theory. W-A resistances were found to be very close to the sum of the segmental resistances. When switching from sitting to supine (protocol 1), the mean ECW leg resistance increased by 18.2%, and that of the arm and the W-A measurement by 12.4%. Trunk resistance also increased, by 4.8%, but not significantly. The corresponding increases in ICW resistance were smaller for the legs (3.7%) and arm (-0.7%) but larger for the trunk (21.4%). Total body ECW volumes from segmental measurements were in good agreement with the W-A and Watson anthropomorphic correlations. The decrease in total ECW volume (when supine) calculated from segmental resistances was 0.79 L, less than the W-A value (1.12 L). Total ICW volume reductions were 3.4% (segmental) and 3.8% (W-A). Tests of protocol 2 confirmed that resistance and fluid volume values were not affected by a temporary position change. PMID:14723506

  9. Air-segmented amplitude-modulated multiplexed flow analysis.

    PubMed

    Inui, Koji; Uemura, Takeshi; Ogusu, Takeshi; Takeuchi, Masaki; Tanaka, Hideji

    2011-01-01

    Air segmentation is applied to amplitude-modulated multiplexed flow analysis, which we recently proposed. Sample solutions, whose flow rates are varied periodically, are merged with reagent and/or diluent solution. The merged stream is segmented by air bubbles and, downstream, its absorbance is measured after deaeration. The analytes in the samples are quantified from the amplitudes of the respective wave components in the absorbance. The proposed method is applied to the determination of a food dye, phosphate ions and nitrite ions. Air segmentation effectively limits the amplitude damping caused by axial dispersion, resulting in improved sensitivity. This effect is more pronounced at shorter control periods and longer flow path lengths.

  10. Fingerprint image segmentation based on multi-features histogram analysis

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Zhang, Youguang

    2007-11-01

    An effective fingerprint image segmentation method based on multi-feature histogram analysis is presented. We extract a new feature and combine it with three existing features to segment fingerprints. Two of the four features are reciprocals of the other two, so the features divide into two groups. The histograms of the two feature groups are computed to determine which group should be used to segment a given fingerprint. The features can also divide fingerprints into high- and low-quality classes. Experimental results show that our algorithm classifies foreground and background effectively at a lower computational cost, and that it reduces the number of spurious minutiae detected, improving the performance of AFIS.
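    Foreground/background separation in fingerprint images is commonly driven by block-wise statistics such as local variance, which is high over ridge patterns and low over smooth background. The sketch below illustrates only that general idea; the block size, the fixed threshold, and the function name are our assumptions, not the features proposed in the paper:

```python
from statistics import pvariance

def segment_blocks(img, block=4, thresh=50.0):
    """Label each block of a grayscale image (list of rows) as foreground
    (True, high local variance = ridge area) or background (False)."""
    h, w = len(img), len(img[0])
    labels = []
    for r in range(0, h, block):
        row = []
        for c in range(0, w, block):
            vals = [img[i][j] for i in range(r, min(r + block, h))
                              for j in range(c, min(c + block, w))]
            row.append(pvariance(vals) > thresh)
        labels.append(row)
    return labels
```

    In practice the threshold would be derived from the histogram of the block features rather than fixed in advance, which is the role the feature histograms play in the method above.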

  11. Education, Work and Employment--Volume II. Segmented Labour Markets, Workplace Democracy and Educational Planning, Education and Self-Employment.

    ERIC Educational Resources Information Center

    Carnoy, Martin; And Others

    This volume contains three studies covering separate yet complementary aspects of the problem of the relationships between the educational system and the production system as manpower user. The first monograph on the theories of the markets seeks to answer two questions: what can be learned from the work done on the segmentation of the labor…

  12. Salted and preserved duck eggs: a consumer market segmentation analysis.

    PubMed

    Arthur, Jennifer; Wiseman, Kelleen; Cheng, K M

    2015-08-01

    The combination of increasing ethnic diversity in North America and growing consumer support for local food products may present opportunities for local producers and processors in the ethnic foods product category. Our study examined the ethnic Chinese (pop. 402,000) market for salted and preserved duck eggs in Vancouver, British Columbia (BC), Canada. The objective of the study was to develop a segmentation model using survey data to categorize consumer groups based on their attitudes and the importance they placed on product attributes. We further used post-segmentation acculturation score, demographics and buyer behaviors to define these groups. Data were gathered via a survey of randomly selected Vancouver households with Chinese surnames (n = 410), targeting the adult responsible for grocery shopping. Results from principal component analysis and a 2-step cluster analysis suggest the existence of 4 market segments, described as Enthusiasts, Potentialists, Pragmatists, Health Skeptics (salted duck eggs), and Neutralists (preserved duck eggs). Kruskal-Wallis tests and post hoc Mann-Whitney tests found significant differences between segments in terms of attitudes and the importance placed on product characteristics. Health Skeptics, preserved egg Potentialists, and Pragmatists of both egg products were significantly biased against Chinese imports compared to others. Except for Enthusiasts, segments disagreed that eggs are 'Healthy Products'. Preserved egg Enthusiasts had a significantly lower acculturation score (AS) compared to all others, while salted egg Enthusiasts had a lower AS compared to Health Skeptics. All segments rated "produced in BC, not mainland China" products in the "neutral to very likely" range for increasing their satisfaction with the eggs. Results also indicate that buyers of each egg type are willing to pay an average premium of at least 10% more for BC produced products versus imports, with all other characteristics equal. Overall

  14. Small rural hospitals: an example of market segmentation analysis.

    PubMed

    Mainous, A G; Shelby, R L

    1991-01-01

    In recent years, market segmentation analysis has shown increased popularity among health care marketers, although marketers tend to focus upon hospitals as sellers. The present analysis suggests that there is merit to viewing hospitals as a market of consumers. Employing a random sample of 741 small rural hospitals, the present investigation sought to determine, through the use of segmentation analysis, the variables associated with hospital success (occupancy). The results of a discriminant analysis yielded a model which classifies hospitals with a high degree of predictive accuracy. Successful hospitals have more beds and employees, and are generally larger and have more resources. However, there was no significant relationship between organizational success and number of services offered by the institution.

  15. Documented Safety Analysis for the B695 Segment

    SciTech Connect

    Laycak, D

    2008-09-11

    This Documented Safety Analysis (DSA) was prepared for the Lawrence Livermore National Laboratory (LLNL) Building 695 (B695) Segment of the Decontamination and Waste Treatment Facility (DWTF). The report provides comprehensive information on design and operations, including safety programs and safety structures, systems and components, to address the potential process-related hazards, natural phenomena, and external hazards that can affect the public, facility workers, and the environment. Consideration is given to all modes of operation, including the potential for both equipment failure and human error. The facilities known collectively as the DWTF are used by LLNL's Radioactive and Hazardous Waste Management (RHWM) Division to store and treat regulated wastes generated at LLNL. RHWM generally processes low-level radioactive waste with no, or extremely low, concentrations of transuranics (e.g., much less than 100 nCi/g). Wastes processed often contain only depleted uranium and beta- and gamma-emitting nuclides, e.g., {sup 90}Sr, {sup 137}Cs, or {sup 3}H. The mission of the B695 Segment centers on container storage, lab-packing, repacking, overpacking, bulking, sampling, waste transfer, and waste treatment. The B695 Segment is used for storage of radioactive waste (including transuranic and low-level), hazardous, nonhazardous, mixed, and other waste. Storage of hazardous and mixed waste in B695 Segment facilities is in compliance with the Resource Conservation and Recovery Act (RCRA). LLNL is operated by Lawrence Livermore National Security, LLC, for the Department of Energy (DOE). The B695 Segment is operated by the RHWM Division of LLNL. Many operations in the B695 Segment are performed under a RCRA operation plan, similar to commercial treatment operations with best demonstrated available technologies. The buildings of the B695 Segment were designed and built with such operations in mind, using proven building systems.

  16. Accurate airway segmentation based on intensity structure analysis and graph-cut

    NASA Astrophysics Data System (ADS)

    Meng, Qier; Kitasaka, Takayuki; Nimura, Yukitaka; Oda, Masahiro; Mori, Kensaku

    2016-03-01

    This paper presents a novel airway segmentation method based on intensity structure analysis and graph-cut. Airway segmentation is an important step in analyzing chest CT volumes for computerized lung cancer detection, emphysema diagnosis, asthma diagnosis, and pre- and intra-operative bronchoscope navigation. However, obtaining a complete 3-D airway tree structure from a CT volume is quite challenging. Several researchers have proposed automated algorithms based mainly on region growing and machine learning techniques; however, these methods fail to detect the peripheral bronchial branches and cause a large amount of leakage. This paper presents a novel approach that permits more accurate extraction of the complex bronchial airway region. Our method is composed of three steps. First, Hessian analysis is utilized to enhance line-like structures in the CT volume, and a multiscale cavity-enhancement filter is then employed to detect cavity-like structures in the enhanced result. In the second step, we utilize a support vector machine (SVM) to construct a classifier for removing the false-positive (FP) regions generated earlier. Finally, a graph-cut algorithm is utilized to connect all of the candidate voxels to form an integrated airway tree. We applied this method to sixteen 3D chest CT volumes. The results showed that the branch detection rate of this method reaches about 77.7% without leaking into the lung parenchyma.
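    The Hessian analysis in the first step exploits the eigenvalues of the local second-derivative matrix: along a bright tubular (line-like) structure, one eigenvalue is near zero while the others are strongly negative. A minimal 2-D sketch of that criterion (the function names, finite-difference scheme, and test ridge are our illustrative assumptions; real airway filters work in 3-D at multiple smoothing scales):

```python
from math import exp, sqrt

def hessian_eigs(f, x, y, h=1e-3):
    """Eigenvalues of the 2x2 Hessian of f at (x, y) via central differences."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h * h)
    # closed-form eigenvalues of a symmetric 2x2 matrix
    tr, det = fxx + fyy, fxx * fyy - fxy * fxy
    d = sqrt(max(tr * tr / 4 - det, 0.0))
    return tr / 2 - d, tr / 2 + d  # sorted: (smaller, larger)

# A bright ridge along y (sigma = 0.5): strong negative curvature across it,
# near-zero curvature along it -- the signature a line filter looks for.
ridge = lambda x, y: exp(-x * x / (2 * 0.5**2))
lam_across, lam_along = hessian_eigs(ridge, 0.0, 0.0)
```

    A line-enhancement filter turns this eigenvalue pattern into a per-voxel "tubeness" score before the candidate voxels are handed to the SVM and graph-cut stages.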

  17. Segmental hair analysis and estimation of methamphetamine use pattern.

    PubMed

    Han, Eunyoung; Yang, Heejin; Seol, Ilung; Park, Yunshin; Lee, Bongwoo; Song, Joon Myong

    2013-03-01

    The aim of this study was to investigate whether the results of segmental hair analysis can be used to estimate patterns of methamphetamine (MA) use. Segmental hair analysis for MA and amphetamine (AP) was performed. Hair was cut into the root segment, consecutive 1 cm segments, and 1-4 cm segments; whole hair was also analyzed. After washing, the hair samples were incubated for 20 h in 1 mL of methanol containing 1% hydrochloric acid. The hair extracts were evaporated and derivatized using trifluoroacetic anhydride in ethyl acetate at 65 °C for 30 min. The derivatized extract was analyzed by gas chromatography/mass spectrometry. The 15 subjects consisted of 13 males and two females, with ages ranging from 25 to 42 years (mean, 32). MA and AP concentrations in whole hair ranged from 3.00 to 105.10 ng/mg (mean, 34.53) and from 0.05 to 4.76 ng/mg (mean, 2.42), respectively. Based on the analysis of the 1 cm hair segments, the results were interpreted to distinguish between continuous use of MA (n = 10), no recent but previous use of MA (n = 3), and recent but no previous use of MA (n = 2). Furthermore, individuals were classified as light, moderate, or heavy users based on previously published concentration ranges.

  18. High volume data storage architecture analysis

    NASA Technical Reports Server (NTRS)

    Malik, James M.

    1990-01-01

    A High Volume Data Storage Architecture Analysis was conducted. The results, presented in this report, will be applied to problems of high volume data requirements such as those anticipated for the Space Station Control Center. High volume data storage systems at several different sites were analyzed for archive capacity, storage hierarchy and migration philosophy, and retrieval capabilities. Proposed architectures were solicited from the sites selected for in-depth analysis. Model architectures for a hypothetical data archiving system, for a high speed file server, and for high volume data storage are attached.

  19. Study of tracking and data acquisition system for the 1990's. Volume 4: TDAS space segment architecture

    NASA Technical Reports Server (NTRS)

    Orr, R. S.

    1984-01-01

    Tracking and data acquisition system (TDAS) requirements, TDAS architectural goals, enhanced TDAS subsystems, constellation and networking options, TDAS spacecraft options, crosslink implementation, baseline TDAS space segment architecture, and threat model development/security analysis are addressed.

  20. Automatic comic page image understanding based on edge segment analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

    Comic page image understanding aims to analyse the layout of comic page images by detecting the storyboards and identifying the reading order automatically. It is a key technique for producing digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method was evaluated on a data set of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms existing methods.
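    A common top-down way to find line segments inside an edge chain is recursive splitting: take the chord between the chain's endpoints, split at the point of maximum perpendicular deviation, and recurse until every piece is nearly straight. A sketch of that general scheme (the function name and tolerance are illustrative assumptions, not the paper's exact algorithm):

```python
def split_into_lines(chain, tol=1.0):
    """Top-down split of an ordered edge chain into straight line segments,
    each returned as an (endpoint, endpoint) pair."""
    def dev(p, a, b):
        # perpendicular distance from point p to the line through a and b
        (ax, ay), (bx, by), (px, py) = a, b, p
        num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
        den = ((bx - ax) ** 2 + (by - ay) ** 2) ** 0.5
        return num / den if den else ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5

    if len(chain) < 3:
        return [(chain[0], chain[-1])]
    idx, dmax = max(((i, dev(chain[i], chain[0], chain[-1]))
                     for i in range(1, len(chain) - 1)), key=lambda t: t[1])
    if dmax <= tol:                      # chain is straight enough: one segment
        return [(chain[0], chain[-1])]
    # split at the point of maximum deviation and recurse on both halves
    return split_into_lines(chain[:idx + 1], tol) + split_into_lines(chain[idx:], tol)
```

    Long, near-axis-aligned segments recovered this way are natural candidates for the storyboard border lines.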

  1. Three-dimensional model-guided segmentation and analysis of medical images

    NASA Astrophysics Data System (ADS)

    Arata, Louis K.; Dhawan, Atam P.; Broderick, Joseph; Gaskill, Mary

    1992-06-01

    Automated or semi-automated analysis and labeling of structural brain images, such as magnetic resonance (MR) and computed tomography, is desirable for a number of reasons. Quantification of brain volumes can aid in the study of various diseases and of the effects of various drug regimens. A labeled structural image, when registered with a functional image such as positron emission tomography or single photon emission computed tomography, allows the quantification of activity in brain subvolumes such as the major lobes. Because even low-resolution scans (7.5 to 8.0 mm slices) require 15 to 17 slices to image the entire head, hand segmentation of these slices is a very laborious process. However, because of the spatial complexity of many of the brain structures, notably the ventricles, automatic segmentation is not a simple undertaking. In order to accurately segment a structure such as the ventricles, we must have a model of equal complexity to guide the segmentation, and the model must be able to incorporate the variability among different subjects from a pre-specified group. Analysis of MR brain scans is accomplished by utilizing the data from T2-weighted and proton density images to isolate the regions of interest. Identification is then performed automatically with the aid of a composite model formed from the operator-assisted segmentation of MR scans of subjects from the same group. We describe the construction of the model and demonstrate its use in the segmentation and labeling of the ventricles in the brain.

  2. Segmented infrared image analysis for rotating machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Duan, Lixiang; Yao, Mingchao; Wang, Jinjiang; Bai, Tangbo; Zhang, Laibin

    2016-07-01

    As a noncontact and non-intrusive technique, infrared image analysis is promising for machinery defect diagnosis. However, the limited informative content and strong noise of infrared images restrict its performance. To address this issue, this paper presents an image segmentation approach to enhance feature extraction in infrared image analysis. A region selection criterion named the dispersion degree is also formulated to discriminate fault-representative regions from unrelated background information. Feature extraction and fusion methods are then applied to obtain features from the selected regions for further diagnosis. Experimental studies on a rotor fault simulator demonstrate that the presented segmented feature enhancement approach outperforms analysis of the original image using both a Naïve Bayes classifier and a support vector machine.

  3. Automatic segmentation and quantitative analysis of the articular cartilages from magnetic resonance images of the knee.

    PubMed

    Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K; Ourselin, Sébastien

    2010-01-01

    In this paper, we present a segmentation scheme that automatically and accurately segments all the cartilages from magnetic resonance (MR) images of nonpathological knees. Our scheme involves the automatic segmentation of the bones using a three-dimensional active shape model, the extraction of the expected bone-cartilage interface (BCI), and cartilage segmentation from the BCI using a deformable model that utilizes localization, patient-specific tissue estimation and a model of the thickness variation. The accuracy of this scheme was experimentally validated using leave-one-out experiments on a database of fat-suppressed spoiled gradient recalled MR images. The scheme was compared to three state-of-the-art approaches: tissue classification, a modified semi-automatic watershed algorithm, and nonrigid registration (B-spline based free-form deformation). Our scheme obtained an average Dice similarity coefficient (DSC) of (0.83, 0.83, 0.85) for the (patellar, tibial, femoral) cartilages, while (0.82, 0.81, 0.86) was obtained with the tissue classifier and (0.73, 0.79, 0.76) with nonrigid registration. The average DSC obtained for all the cartilages using the semi-automatic watershed algorithm (0.90) was slightly higher than with our approach (0.89); however, unlike that approach, ours segments each cartilage as a separate object. The effectiveness of our approach for quantitative analysis was evaluated using volume and thickness measures, with a median volume difference error of (5.92, 4.65, 5.69) and an absolute Laplacian thickness difference of (0.13, 0.24, 0.12) mm.
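    The Dice similarity coefficient used throughout measures the overlap of two segmentations A and B as DSC = 2|A∩B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical). A minimal sketch over voxel label sets (the function name is our choice):

```python
def dice(a, b):
    """Dice similarity coefficient between two segmentations,
    each given as an iterable of voxel coordinates (or indices)."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))
```

    When the two segmentations have equal volume, a DSC of 0.83 corresponds to 83% of the voxels overlapping.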

  4. Analysis of adjacent segment degeneration with laminectomy above a fused lumbar segment.

    PubMed

    Gard, Andrew P; Klopper, Hendrik B; Doran, Stephen E; Hellbusch, Leslie C

    2013-11-01

    Although recent data suggests that lumbar fusion with decompression contributes to some marginal acceleration of adjacent segment degeneration (ASD), few studies have evaluated whether it is safe to perform a laminectomy above a fused segment. This study investigates the hypothesis that laminectomy above a fused lumbar segment does not increase the incidence of ASD, and assesses the benefits and risks of performing a laminectomy above a lumbar fusion. A retrospective review of 171 patients who underwent decompression and instrumented fusion of the lumbar spine was performed to analyze the association between ASD and laminectomy above the fused lumbar segment. Patients were divided into two groups - one group with instrumented fusion alone and the other group with instrumented fusion plus laminectomy above the fused segment. Of the 171 patients, 34 underwent additional decompressive laminectomy above the fused segment. There was a significant increase in ASD incidence as well as progression of ASD grade in both groups. There was no significant increase in ASD in patients with decompressive laminectomy above the fused lumbar segment compared to patients with laminectomy limited to the fused segment. This retrospective review of 171 patients who underwent decompression and instrumented fusion with follow-up radiographs demonstrates that laminectomy decompression above a fused segment does not significantly increase radiographic ASD. There is, however, a significant increase in ASD over time, which was observed throughout the entire cohort likely representing a natural progression of lumbar spondylosis above the fusion segment.

  5. A region growing method for tumor volume segmentation on PET images for rectal and anal cancer patients.

    PubMed

    Day, Ellen; Betler, James; Parda, David; Reitz, Bodo; Kirichenko, Alexander; Mohammadi, Seyed; Miften, Moyed

    2009-10-01

    The application of automated segmentation methods for tumor delineation on 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) images presents an opportunity to reduce the interobserver variability in radiotherapy (RT) treatment planning. In this work, three segmentation methods were evaluated and compared for rectal and anal cancer patients: (i) a percentage of the maximum standardized uptake value (SUV%max), (ii) a fixed SUV cutoff of 2.5 (SUV2.5), and (iii) a mathematical technique based on a confidence connected region growing (CCRG) method. A phantom study was performed to determine the SUV%max threshold value, which was found to be 43% (SUV43%max). The CCRG method is an iterative scheme that relies on the statistics of a specified region in the tumor. The scheme is initialized with a subregion of pixels surrounding the maximum-intensity pixel. The mean and standard deviation of this region are measured, and pixels connected to the region are included or not depending on whether they exceed a value derived from the mean and standard deviation. The mean and standard deviation of the new region are then measured and the process repeats. FDG-PET-CT imaging studies for 18 patients who received RT were used to evaluate the segmentation methods. A PET-avid (PETavid) region was manually segmented for each patient, and its volume was used to compare the calculated volumes, along with the absolute mean difference and range, for all methods. For the SUV43%max method, the volumes were always smaller than the PETavid volume, by a mean of 56% and a range of 21%-79%. The volumes from the SUV2.5 method were either smaller or larger than the PETavid volume, by a mean of 37% and a range of 2%-130%. The CCRG approach provided the best results, with a mean difference of 9% and a range of 1%-27%. Results show that the CCRG technique can be used in the segmentation of tumor volumes on FDG-PET images, thus providing treatment planners with a clinically
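    The iteration described for CCRG can be sketched as follows in 2-D (the function name, the seed-neighborhood rule, the confidence multiplier f=2.5, and the symmetric mean ± f·std acceptance band are our illustrative assumptions, not the exact implementation evaluated in the paper):

```python
from statistics import mean, pstdev

def ccrg(img, f=2.5, max_iter=10):
    """Confidence-connected region growing sketch on a 2-D grid (list of rows).

    Seeds at the maximum-intensity pixel (plus similar 4-neighbors), then
    repeatedly grows the region to all connected pixels whose intensity lies
    within mean +/- f * std of the current region, recomputing the statistics
    after each pass until the region stops changing.
    """
    h, w = len(img), len(img[0])
    sr, sc = max(((r, c) for r in range(h) for c in range(w)),
                 key=lambda p: img[p[0]][p[1]])
    region = {(sr, sc)}
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = sr + dr, sc + dc
        if 0 <= nr < h and 0 <= nc < w and img[nr][nc] >= 0.9 * img[sr][sc]:
            region.add((nr, nc))        # seed with similar neighbors only
    for _ in range(max_iter):
        vals = [img[r][c] for r, c in region]
        m, s = mean(vals), pstdev(vals)
        lo, hi = m - f * s, m + f * s   # confidence interval for inclusion
        grown, frontier = set(region), list(region)
        while frontier:                 # one connected-growth pass
            r, c = frontier.pop()
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in grown
                        and lo <= img[nr][nc] <= hi):
                    grown.add((nr, nc))
                    frontier.append((nr, nc))
        if grown == region:             # converged
            break
        region = grown
    return region
```

    Note that if the seed region is perfectly uniform the standard deviation is zero and nothing beyond exact-intensity matches is admitted; practical implementations add a minimum band width.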

  6. Swept Volume Parameterization for Isogeometric Analysis

    NASA Astrophysics Data System (ADS)

    Aigner, M.; Heinrich, C.; Jüttler, B.; Pilgerstorfer, E.; Simeon, B.; Vuong, A.-V.

    Isogeometric Analysis uses NURBS representations of the domain for performing numerical simulations. The first part of this paper presents a variational framework for generating NURBS parameterizations of swept volumes. The class of these volumes covers a number of interesting free-form shapes, such as blades of turbines and propellers, ship hulls or wings of airplanes. The second part of the paper reports the results of isogeometric analysis which were obtained with the help of the generated NURBS volume parameterizations. In particular we discuss the influence of the chosen parameterization and the incorporation of boundary conditions.
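    In isogeometric analysis, the simulation domain is described by the same trivariate NURBS representation used for the geometry. As a hedged sketch (standard NURBS notation, not taken from the paper), a NURBS volume parameterization has the form

```latex
\mathbf{V}(\xi,\eta,\zeta) =
  \sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{k=1}^{l}
  R_{i,j,k}(\xi,\eta,\zeta)\,\mathbf{P}_{i,j,k},
\qquad
R_{i,j,k}(\xi,\eta,\zeta) =
  \frac{N_i(\xi)\,N_j(\eta)\,N_k(\zeta)\,w_{i,j,k}}
       {\sum_{p,q,r} N_p(\xi)\,N_q(\eta)\,N_r(\zeta)\,w_{p,q,r}},
```

    where the P are control points, the w are weights, and the N are B-spline basis functions; for a swept volume, one parameter direction (say ζ) follows the sweep path while the other two parameterize the moving cross-section.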

  7. Three-dimensional choroidal segmentation in spectral OCT volumes using optic disc prior information

    NASA Astrophysics Data System (ADS)

    Hu, Zhihong; Girkin, Christopher A.; Hariri, Amirhossein; Sadda, SriniVas R.

    2016-03-01

    Recently, much attention has been focused on determining the role, whether primary or secondary, of the peripapillary choroid - the layer between the outer retinal pigment epithelium (RPE)/Bruch's membrane (BM) and the choroid-sclera (C-S) junction - in the pathogenesis of glaucoma. However, automated choroidal segmentation in spectral-domain optical coherence tomography (SD-OCT) images of the optic nerve head (ONH) has not been reported, probably because the presence of the BM opening (BMO, corresponding to the optic disc) can deflect the choroidal segmentation from its correct position. The purpose of this study is to develop a 3D graph-based approach to identify the 3D choroidal layer in ONH-centered SD-OCT images using BMO prior information. More specifically, an initial 3D choroidal segmentation was first performed using a 3D graph search algorithm, with varying surface interaction constraints applied based on a choroidal morphological model. To assist the choroidal segmentation, two other surfaces, the internal limiting membrane and the inner/outer segment junction, were also segmented. Based on the segmented layer between the RPE/BM and the C-S junction, a 2D projection map was created, and the BMO in the projection map was detected by a 2D graph search. The pre-defined BMO information was then incorporated into the surface interaction constraints of the 3D graph search to obtain a more accurate choroidal segmentation. Twenty SD-OCT images from 20 healthy subjects were used. The mean differences of the choroidal borders between the algorithm and manual segmentation were at the sub-voxel level, indicating a high level of segmentation accuracy.

  8. 3-D segmentation of retinal blood vessels in spectral-domain OCT volumes of the optic nerve head

    NASA Astrophysics Data System (ADS)

    Lee, Kyungmoo; Abràmoff, Michael D.; Niemeijer, Meindert; Garvin, Mona K.; Sonka, Milan

    2010-03-01

    Segmentation of retinal blood vessels can provide important information for detecting and tracking retinal vascular diseases, including diabetic retinopathy, arterial hypertension, arteriosclerosis and retinopathy of prematurity (ROP). Many studies on 2-D segmentation of retinal blood vessels from a variety of medical images have been performed. However, to the best of our knowledge, 3-D segmentation of retinal blood vessels from spectral-domain optical coherence tomography (OCT) volumes, which is capable of providing geometrically accurate vessel models, has not been previously studied. The purpose of this study is to develop and evaluate a method that can automatically detect 3-D retinal blood vessels from spectral-domain OCT scans centered on the optic nerve head (ONH). The proposed method utilized a fast multiscale 3-D graph search to segment retinal surfaces as well as a triangular mesh-based 3-D graph search to detect retinal blood vessels. An experiment on 30 ONH-centered OCT scans (15 right eye scans and 15 left eye scans) from 15 subjects was performed; the mean unsigned error in 3-D of the computer segmentations compared with the independent standard obtained from a retinal specialist was 3.4 +/- 2.5 voxels (0.10 +/- 0.07 mm).

  9. Segmentation of multiple sclerosis lesions in MRI: an image analysis approach

    NASA Astrophysics Data System (ADS)

    Krishnan, Kalpagam; Atkins, M. Stella

    1998-06-01

    This paper describes an intensity-based method for the segmentation of multiple sclerosis lesions in dual-echo PD- and T2-weighted magnetic resonance brain images. The method consists of two stages: feature extraction and image analysis. For feature extraction, we use a ratio filter transformation on the proton density (PD) and spin-spin (T2) data sequences to extract the white matter, cerebrospinal fluid and lesion features. The one- and two-dimensional histograms of the features are then analyzed to obtain different parameters, which provide the basis for subsequent image analysis operations to detect the multiple sclerosis lesions. In the image analysis stage, the PD images of the volume are first pre-processed to enhance the lesion tissue areas. White matter and cerebrospinal fluid masks are then generated and applied to the enhanced volume to remove non-lesion areas. Segmentation of lesions is performed in two steps: conspicuous lesions are extracted in the first step, followed by the extraction of the subtle lesions.

  10. Markov random field and Gaussian mixture for segmented MRI-based partial volume correction in PET.

    PubMed

    Bousse, Alexandre; Pedemonte, Stefano; Thomas, Benjamin A; Erlandsson, Kjell; Ourselin, Sébastien; Arridge, Simon; Hutton, Brian F

    2012-10-21

    In this paper we propose a segmented magnetic resonance imaging (MRI) prior-based maximum penalized likelihood deconvolution technique for positron emission tomography (PET) images. The model assumes the existence of activity classes that behave like a hidden Markov random field (MRF) driven by the segmented MRI. We utilize a mean field approximation to compute the likelihood of the MRF. We tested our method on both simulated and clinical data (brain PET) and compared our results with PET images corrected with the re-blurred Van Cittert (VC) algorithm, the simplified Guven (SG) algorithm and the region-based voxel-wise (RBV) technique. We demonstrated that our algorithm outperforms the VC algorithm, and that it outperforms the SG and RBV corrections when the segmented MRI is inconsistent (e.g. mis-segmentation, lesions) with the PET image.
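The activity-class model rests on a Gaussian mixture; as a hedged illustration (a plain mixture E-step, not the paper's mean-field MRF machinery), the posterior responsibility of each class for a voxel value can be computed as:

```python
import numpy as np

def gmm_responsibilities(x, means, variances, priors):
    """Posterior probability of each activity class for each voxel value.

    Illustrative E-step of a Gaussian mixture: p(k|x) is proportional to
    pi_k * N(x; mu_k, s2_k), normalized over classes k."""
    x = np.asarray(x, dtype=float)[:, None]     # shape (N, 1)
    mu = np.asarray(means)[None, :]             # shape (1, K)
    s2 = np.asarray(variances)[None, :]
    pi = np.asarray(priors)[None, :]
    lik = pi * np.exp(-0.5 * (x - mu) ** 2 / s2) / np.sqrt(2 * np.pi * s2)
    return lik / lik.sum(axis=1, keepdims=True)
```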

  11. Meteorological analysis models, volume 2

    NASA Technical Reports Server (NTRS)

    Langland, R. A.; Stark, D. L.

    1976-01-01

    As part of the SEASAT program, two sets of analysis programs were developed. One set of programs produces 63 x 63 horizontal mesh analyses on a polar stereographic grid. The other set produces 187 x 187 third-mesh analyses. The parameters analyzed include sea surface temperature, sea level pressure, and twelve levels of upper-air temperature, height and wind. Both sets use operational data provided by a weather bureau. The analysis output is used to initialize the primitive equation forecast models also included.

  12. Automatic Segmentation of Cell Nuclei in Bladder and Skin Tissue for Karyometric Analysis

    PubMed Central

    Korde, Vrushali R.; Bartels, Hubert; Barton, Jennifer; Ranger-Moore, James

    2010-01-01

    Objective To automatically segment cell nuclei in histology images of bladder and skin tissue for karyometric analysis. Study Design The four main steps in the program were: median filtering and thresholding, segmentation, categorizing, and cusp correction. This robust segmentation technique used properties of the image histogram to optimally select a threshold and create closed four-way chain code nuclear segmentations. Each cell nucleus segmentation was treated as an individual object whose segmentation-quality properties were used as criteria to classify each nucleus as throw-away, salvageable, or good. An erosion/dilation procedure and re-thresholding were performed on salvageable nuclei to correct cusps. Results Ten bladder histology images were segmented both by hand and using this automatic segmentation algorithm. The automatic segmentation resulted in a sensitivity of 76.4%, defined as the percentage of hand-segmented nuclei that were automatically segmented with good quality. The median proportional difference between hand and automatic segmentations over 42 nuclei, each with 95 features used in karyometric analysis, was 1.6%. The same procedure was performed on 10 skin histology images with a sensitivity of 83.0% and a median proportional difference of 2.6%. Conclusion The close agreement in karyometric features with hand segmentation shows that automated segmentation can be used for analysis of bladder and skin histology images. PMID:19402384
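The threshold-plus-morphology flow can be sketched with scipy.ndimage. This is a simplified stand-in for the paper's chain-code pipeline; the function name and parameters are hypothetical:

```python
import numpy as np
from scipy import ndimage

def segment_nuclei(image, threshold, smooth_iters=1):
    """Threshold dark nuclei, then smooth boundary cusps with an
    erosion/dilation pass (a morphological opening), and label each
    connected nucleus as a separate object."""
    mask = image < threshold                      # nuclei darker than background
    opened = ndimage.binary_erosion(mask, iterations=smooth_iters)
    opened = ndimage.binary_dilation(opened, iterations=smooth_iters)
    labels, n = ndimage.label(opened)             # one integer id per nucleus
    return labels, n
```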

  13. Pulse shape analysis and position determination in segmented HPGe detectors: The AGATA detector library

    NASA Astrophysics Data System (ADS)

    Bruyneel, B.; Birkenbach, B.; Reiter, P.

    2016-03-01

    The AGATA Detector Library (ADL) was developed for the calculation of signals from highly segmented large-volume high-purity germanium (HPGe) detectors. ADL basis sets comprise a large number of calculated position-dependent detector pulse shapes. A basis set is needed for Pulse Shape Analysis (PSA), by means of which the interaction position of a γ-ray inside the active detector volume is determined. Theoretical concepts of the calculations are introduced and cover the relevant aspects of signal formation in HPGe. The approximations and the realization of the computer code with its input parameters are explained in detail. ADL is a versatile and modular computer code; new detectors can be implemented in this library. Measured position resolutions of the AGATA detectors based on ADL are discussed.

  14. Automatic, accurate, and reproducible segmentation of the brain and cerebro-spinal fluid in T1-weighted volume MRI scans and its application to serial cerebral and intracranial volumetry

    NASA Astrophysics Data System (ADS)

    Lemieux, Louis

    2001-07-01

    A new fully automatic algorithm for the segmentation of the brain and cerebro-spinal fluid (CSF) from T1-weighted volume MRI scans of the head was specifically developed in the context of serial intra-cranial volumetry. The method is an extension of a previously published brain extraction algorithm. The brain mask is used as a basis for CSF segmentation based on morphological operations, automatic histogram analysis and thresholding. Brain segmentation is then obtained by iterative tracking of the brain-CSF interface. Grey matter (GM), white matter (WM) and CSF volumes are calculated based on a model of intensity probability distribution that includes partial volume effects. Accuracy was assessed using a digital phantom scan. Reproducibility was assessed by segmenting pairs of scans from 20 normal subjects scanned 8 months apart and 11 patients with epilepsy scanned 3.5 years apart. Segmentation accuracy as measured by overlap was 98% for the brain and 96% for the intra-cranial tissues. The volume errors were: total brain (TBV): -1.0%, intra-cranial (ICV): +0.1%, CSF: +4.8%. For repeated scans, matching resulted in improved reproducibility. In the controls, the coefficient of reliability (CR) was 1.5% for the TBV and 1.0% for the ICV. In the patients, the CR for the ICV was 1.2%.
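A two-class partial volume model of the kind mentioned above admits a simple closed form: if a mixed voxel's intensity is a linear blend I = a*mu_tissue + (1 - a)*mu_csf, the tissue fraction a follows directly. A hedged sketch under that assumed linear mixing model (not the paper's full probability distribution model):

```python
def fractional_volume(intensity, mu_tissue, mu_csf):
    """Tissue fraction of a mixed voxel under a two-class linear
    partial-volume model: I = a*mu_tissue + (1 - a)*mu_csf.
    The result is clamped to [0, 1] for voxels outside the mixing range."""
    a = (intensity - mu_csf) / (mu_tissue - mu_csf)
    return min(1.0, max(0.0, a))
```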

  15. Cumulative Heat Diffusion Using Volume Gradient Operator for Volume Analysis.

    PubMed

    Gurijala, K C; Wang, Lei; Kaufman, A

    2012-12-01

    We introduce a simple yet powerful method called Cumulative Heat Diffusion for shape-based volume analysis that drastically reduces the computational cost compared to conventional heat diffusion. Unlike the conventional heat diffusion process, where the diffusion is carried out by considering each node separately as the source, we simultaneously consider all the voxels as sources and carry out the diffusion, hence the term cumulative heat diffusion. In addition, we introduce a new operator used in the evaluation of cumulative heat diffusion called the Volume Gradient Operator (VGO). VGO is a combination of the Laplace-Beltrami operator (LBO) and a data-driven operator which is a function of the half gradient. The half gradient is the absolute value of the difference between neighboring voxel intensities. The VGO by its definition captures local shape information and is used to assign the initial heat values. Furthermore, VGO is also used as the weighting parameter for the heat diffusion process. We demonstrate that our approach can robustly extract shape-based features and thus forms the basis for an improved classification and exploration of features based on shape.
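The half gradient itself is straightforward to compute; a minimal numpy sketch (the function name and axis conventions are assumptions, not the authors' code):

```python
import numpy as np

def half_gradients(vol):
    """Half gradient between 6-neighbours: |I(u) - I(v)| along each axis.

    Returns one array per axis holding the absolute intensity difference
    between each voxel and its successor along that axis, the quantity
    used to weight the diffusion."""
    vol = np.asarray(vol, dtype=float)
    return [np.abs(np.diff(vol, axis=a)) for a in range(vol.ndim)]
```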

  16. Multi-level segment analysis: definition and applications in turbulence

    NASA Astrophysics Data System (ADS)

    Wang, Lipo

    2015-11-01

    The interaction of different scales is among the most interesting and challenging features in turbulence research. Existing approaches to scaling analysis, such as the structure-function and Fourier-spectrum methods, have their respective limitations, for instance scale mixing, i.e. the so-called infrared and ultraviolet effects. For a given function, specifying different window sizes yields different local extremal point sets; this window-size dependence indicates multi-scale statistics. A new method, multi-level segment analysis (MSA), based on local extrema statistics, has been developed. The part of the function between two adjacent extremal points is defined as a segment, which is characterized by its functional difference and scale difference. The structure function can then be derived in a different form from these characteristic parameters. Data tests show that MSA can successfully reveal different scaling regimes in turbulence systems such as Lagrangian and two-dimensional turbulence, which have remained controversial in turbulence research. In principle, MSA can be extended to various other analyses.
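A minimal sketch of the segment-extraction step for a 1-D signal (ignoring plateaus; names are hypothetical and this is not the author's code): segments run between adjacent local extrema, each characterized by its scale difference and functional difference.

```python
import numpy as np

def msa_segments(f, x):
    """Split a 1-D signal into segments between adjacent local extrema.

    Returns a list of (scale difference, function difference) pairs, one
    per segment; endpoints are treated as extremal points."""
    f = np.asarray(f, dtype=float)
    x = np.asarray(x, dtype=float)
    d = np.sign(np.diff(f))
    # interior local extrema are where the slope changes sign
    ext = [0] + [i + 1 for i in range(len(d) - 1) if d[i] != d[i + 1]] + [len(f) - 1]
    return [(x[j] - x[i], f[j] - f[i]) for i, j in zip(ext[:-1], ext[1:])]
```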

  17. Dextrocardia--value of segmental analysis in its categorisation.

    PubMed Central

    Calcaterra, G; Anderson, R H; Lau, K C; Shinebourne, E A

    1979-01-01

    Dextrocardia can be defined as a heart in the right chest with the major axis to the right. This definition, however, conveys no information regarding the chamber arrangements and internal anatomy of the heart. Of 40 patients satisfying this definition in the files of the Brompton Hospital, 33 had angiocardiographic data adequate for complete analysis in terms of connections, relations, and morphology of cardiac segments. They form the subject of this report. There were 16 (48%) patients with situs solitus, 11 (33%) with situs inversus, and six (18%) with situs ambiguus. Of the cases of situs ambiguus, four exhibited laevoisomerism and two dextroisomerism. Of the 16 patients with situs solitus, six had two ventricles and 10 had univentricular hearts; two patients had concordant and three discordant ventriculoarterial connections, seven had double outlet ventricle, and four a single outlet heart. Of the 11 patients with situs inversus, nine had two ventricles and two a univentricular heart of right ventricular type; the arterial connection was concordant in two, discordant in two, double outlet in six, and single outlet in one. Of the six patients with situs ambiguus and laevo or dextroisomerism, four had two ventricles, and two univentricular hearts; the arterial connection was concordant in one, double outlet in three, and single outlet in two. Segmental analysis and the use of basic descriptive terms are essential to define the complex anatomy of such hearts. PMID:518773

  18. A novel approach for the automated segmentation and volume quantification of cardiac fats on computed tomography.

    PubMed

    Rodrigues, É O; Morais, F F C; Morais, N A O S; Conci, L S; Neto, L V; Conci, A

    2016-01-01

    The deposits of fat on the surroundings of the heart are correlated with several health risk factors such as atherosclerosis, carotid stiffness, coronary artery calcification, atrial fibrillation and many others. These deposits vary independently of obesity, which reinforces the case for their direct segmentation for further quantification. However, manual segmentation of these fats has not been widely deployed in clinical practice due to the required human workload and the consequent high cost of physicians and technicians. In this work, we propose a unified method for autonomous segmentation and quantification of two types of cardiac fat. The segmented fats, termed epicardial and mediastinal, are separated from each other by the pericardium. Much effort was devoted to achieving minimal user intervention. The proposed methodology mainly comprises registration and classification algorithms to perform the desired segmentation. We compare the performance of several classification algorithms on this task, including neural networks, probabilistic models and decision tree algorithms. Experimental results show that the mean accuracy for both epicardial and mediastinal fats is 98.5% (99.5% if the features are normalized), with a mean true positive rate of 98.0%. On average, the Dice similarity index was 97.6%. PMID:26474835

  19. Layout pattern analysis using the Voronoi diagram of line segments

    NASA Astrophysics Data System (ADS)

    Dey, Sandeep Kumar; Cheilaris, Panagiotis; Gabrani, Maria; Papadopoulou, Evanthia

    2016-01-01

    Early identification of problematic patterns in very large scale integration (VLSI) designs is of great value as the lithographic simulation tools face significant timing challenges. To reduce the processing time, such a tool selects only a fraction of possible patterns which have a probable area of failure, with the risk of missing some problematic patterns. We introduce a fast method to automatically extract patterns based on their structure and context, using the Voronoi diagram of line-segments as derived from the edges of VLSI design shapes. Designers put line segments around the problematic locations in patterns called "gauges," along which the critical distance is measured. The gauge center is the midpoint of a gauge. We first use the Voronoi diagram of VLSI shapes to identify possible problematic locations, represented as gauge centers. Then we use the derived locations to extract windows containing the problematic patterns from the design layout. The problematic locations are prioritized by the shape and proximity information of the design polygons. We perform experiments for pattern selection in a portion of a 22-nm random logic design layout. The design layout had 38,584 design polygons (consisting of 199,946 line segments) on layer Mx, and 7079 markers generated by an optical rule checker (ORC) tool. The optical rules specify requirements for printing circuits with minimum dimension. Markers are the locations of some optical rule violations in the layout. We verify our approach by comparing the coverage of our extracted patterns to the ORC-generated markers. We further derive a similarity measure between patterns and between layouts. The similarity measure helps to identify a set of representative gauges that reduces the number of patterns for analysis.
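A common way to approximate a line-segment Voronoi partition without specialized software is to densely sample points along each segment and reduce nearest-segment queries to nearest-point queries. A hedged sketch of that approximation (not the authors' exact Voronoi machinery; names are hypothetical):

```python
import numpy as np

def sample_segment(p, q, n=50):
    """Densely sample n points along segment p-q; sampling segments this way
    lets an ordinary point-based method approximate the segment Voronoi
    diagram."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) * np.asarray(p, dtype=float) + t * np.asarray(q, dtype=float)

def nearest_segment(point, segments, n=50):
    """Index of the segment whose sampled points come closest to `point`,
    i.e. the segment whose (approximate) Voronoi cell contains it."""
    dists = [np.min(np.linalg.norm(sample_segment(p, q, n) - np.asarray(point), axis=1))
             for p, q in segments]
    return int(np.argmin(dists))
```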

  20. Multiprogram Segment Margin Analysis: Concepts and Practice in Educational Programs.

    ERIC Educational Resources Information Center

    Duangploy, Orapin; Anderman, Steve

    1985-01-01

    To solve the problems of cost identification and allocation in an extended education office, a segment margin approach was used. (Segment margin is the difference between revenues and traceable direct costs.) Courses could be evaluated by segment margin rather than net income, since allocated indirect costs are not controllable by the individual…
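The segment-margin computation described above is simple enough to sketch directly (a hedged illustration; the course names and figures are invented for the example):

```python
def segment_margin(revenues, traceable_direct_costs):
    """Segment margin = revenues minus direct costs traceable to the segment.

    Allocated indirect costs are deliberately excluded, since the segment
    cannot control them."""
    return revenues - traceable_direct_costs

# Hypothetical extended-education courses: (revenues, traceable direct costs)
courses = {"Econ 101": (12000, 9500), "Stats 200": (8000, 8600)}
margins = {name: segment_margin(r, c) for name, (r, c) in courses.items()}
```

A course with a negative segment margin fails to cover even its traceable costs, which is a stronger basis for evaluation than net income after indirect-cost allocation.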

  1. Performance evaluation of automated segmentation software on optical coherence tomography volume data.

    PubMed

    Tian, Jing; Varga, Boglarka; Tatrai, Erika; Fanni, Palya; Somfai, Gabor Mark; Smiddy, William E; Debuc, Delia Cabrera

    2016-05-01

    Over the past two decades a significant number of OCT segmentation approaches have been proposed in the literature. Each methodology has been conceived for and/or evaluated using specific datasets that do not reflect the complexities of the majority of widely available retinal features observed in clinical settings, and no appropriate OCT dataset with ground truth reflecting those everyday retinal features exists. While the need for unbiased performance evaluation of automated segmentation algorithms is obvious, validation has usually been performed by comparison with manual labelings specific to each study, with no common ground truth; a performance comparison of different algorithms using the same ground truth has therefore never been performed. This paper reviews research-oriented tools for automated segmentation of retinal tissue on OCT images. It also evaluates and compares the performance of these software tools with a common ground truth.

  2. Analysis of Retinal Peripapillary Segmentation in Early Alzheimer's Disease Patients

    PubMed Central

    Salobrar-Garcia, Elena; Hoyas, Irene; Leal, Mercedes; de Hoz, Rosa; Rojas, Blanca; Ramirez, Ana I.; Salazar, Juan J.; Yubero, Raquel; Gil, Pedro; Triviño, Alberto; Ramirez, José M.

    2015-01-01

    Decreased thickness of the retinal nerve fiber layer (RNFL) may reflect retinal neuronal-ganglion cell death. A decrease in the RNFL has been demonstrated by optical coherence tomography (OCT) in Alzheimer's disease (AD) in addition to aging. Twenty-three mild-AD patients and 28 age-matched control subjects (mean Mini-Mental State Examination 23.3 and 28.2, respectively), with no ocular disease or systemic disorders affecting vision, were considered for study. OCT peripapillary and macular segmentation thicknesses were examined in the right eye of each patient. Compared to controls, eyes of mild-AD patients showed no statistical difference in peripapillary RNFL thickness (P > 0.05); however, sectors 2, 3, 4, 8, 9, and 11 of the papilla showed thinning, while sectors 1, 5, 6, 7, and 10 showed thickening. Total macular volume and RNFL thickness of the fovea in all four inner quadrants and in the outer temporal quadrants proved to be significantly decreased (P < 0.01). Although peripapillary RNFL thickness did not statistically differ from control eyes, the increase in peripapillary thickness in our mild-AD patients could correspond to an early neurodegeneration stage and may indicate an inflammatory process that could lead to progressive peripapillary fiber damage. PMID:26557684

  3. Image analysis for neuroblastoma classification: segmentation of cell nuclei.

    PubMed

    Gurcan, Metin N; Pan, Tony; Shimada, Hiro; Saltz, Joel

    2006-01-01

    Neuroblastoma is a childhood cancer of the nervous system. Current prognostic classification of this disease partly relies on morphological characteristics of the cells from H&E-stained images. In this work, an automated cell nuclei segmentation method is developed. This method employs a morphological top-hat by reconstruction algorithm coupled with hysteresis thresholding to both detect and segment the cell nuclei. Accuracy of the automated cell nuclei segmentation algorithm is measured by comparing its outputs to manual segmentation. The average segmentation accuracy is 90.24 +/- 5.14%. PMID:17947119
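Hysteresis thresholding keeps weak-threshold components only if they contain at least one strong-threshold pixel. A minimal sketch with scipy.ndimage (an illustration of the technique, not the paper's code):

```python
import numpy as np
from scipy import ndimage

def hysteresis_threshold(image, low, high):
    """Binary mask of weak-threshold components that contain at least one
    strong-threshold pixel (the hysteresis scheme used after top-hat
    enhancement)."""
    weak = image > low
    strong = image > high
    labels, _ = ndimage.label(weak)
    keep = np.unique(labels[strong])          # component ids touching strong pixels
    return np.isin(labels, keep[keep > 0])
```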

  4. Improving the clinical correlation of multiple sclerosis black hole volume change by paired-scan analysis.

    PubMed

    Tam, Roger C; Traboulsee, Anthony; Riddehough, Andrew; Li, David K B

    2012-01-01

    The change in T1-hypointense lesion ("black hole") volume is an important marker of pathological progression in multiple sclerosis (MS). Black hole boundaries often have low contrast and are difficult to determine accurately, and most (semi-)automated segmentation methods first compute the T2-hyperintense lesions, which are a superset of the black holes and are typically more distinct, to form a search space for the T1w lesions. Two main potential sources of measurement noise in longitudinal black hole volume computation are partial volume and variability in the T2w lesion segmentation. A paired analysis approach is proposed herein that uses registration to equalize partial volume and lesion mask processing to combine T2w lesion segmentations across time. The scans of 247 MS patients are used to compare a selected black hole computation method with an enhanced version incorporating paired analysis, using rank correlation to a clinical variable (MS functional composite) as the primary outcome measure. The comparison is done at nine different levels of intensity, as a previous study suggests that darker black holes may yield stronger correlations. The results demonstrate that paired analysis can strongly improve longitudinal correlation (from -0.148 to -0.303 in this sample) and may produce segmentations that are more sensitive to clinically relevant changes.
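The primary outcome measure, rank correlation against a clinical variable, corresponds to Spearman's rho. A hedged sketch with invented numbers (the real study used 247 patients and MSFC scores):

```python
from scipy.stats import spearmanr

# Hypothetical per-patient data: black-hole volume change vs. MSFC change.
volume_change = [1.2, 0.5, 2.0, 0.1, 1.4]
msfc_change = [-0.8, -0.2, -1.5, 0.0, -0.9]

# rho near -1 indicates larger volume increases track worse clinical scores.
rho, p = spearmanr(volume_change, msfc_change)
```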

  5. REACH. Teacher's Guide, Volume III. Task Analysis.

    ERIC Educational Resources Information Center

    Morris, James Lee; And Others

    Designed for use with individualized instructional units (CE 026 345-347, CE 026 349-351) in the electromechanical cluster, this third volume of the postsecondary teacher's guide presents the task analysis which was used in the development of the REACH (Refrigeration, Electro-Mechanical, Air Conditioning, Heating) curriculum. The major blocks of…

  6. Semi-automatic segmentation for 3D motion analysis of the tongue with dynamic MRI.

    PubMed

    Lee, Junghoon; Woo, Jonghye; Xing, Fangxu; Murano, Emi Z; Stone, Maureen; Prince, Jerry L

    2014-12-01

    Dynamic MRI has been widely used to track the motion of the tongue and measure its internal deformation during speech and swallowing. Accurate segmentation of the tongue is a prerequisite step to define the target boundary and constrain the tracking to tissue points within the tongue. Segmentation of 2D slices or 3D volumes is challenging because of the large number of slices and time frames involved in the segmentation, as well as the incorporation of numerous local deformations that occur throughout the tongue during motion. In this paper, we propose a semi-automatic approach to segment 3D dynamic MRI of the tongue. The algorithm steps include seeding a few slices at one time frame, propagating seeds to the same slices at different time frames using deformable registration, and random walker segmentation based on these seed positions. This method was validated on the tongue of five normal subjects carrying out the same speech task with multi-slice 2D dynamic cine-MR images obtained at three orthogonal orientations and 26 time frames. The resulting semi-automatic segmentations of a total of 130 volumes showed an average dice similarity coefficient (DSC) score of 0.92 with less segmented volume variability between time frames than in manual segmentations. PMID:25155697
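The random walker step solves a sparse Laplacian system seeded by the propagated labels. A minimal 2-D illustration with numpy/scipy (a simplification of the paper's 3-D pipeline; the function name, grid connectivity, and weighting are assumptions):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def random_walker_2d(image, seeds, beta=100.0):
    """Minimal random-walker segmentation on a 4-connected 2-D grid.

    image : (H, W) float array; seeds : (H, W) int array with 0 = unlabeled
    and 1..K = seed labels. Returns an (H, W) integer label array."""
    H, W = image.shape
    idx = np.arange(H * W).reshape(H, W)
    # horizontal and vertical neighbour pairs
    e0 = np.concatenate([idx[:, :-1].ravel(), idx[:-1, :].ravel()])
    e1 = np.concatenate([idx[:, 1:].ravel(), idx[1:, :].ravel()])
    g = image.ravel()
    w = np.exp(-beta * (g[e0] - g[e1]) ** 2)      # edge weights
    Wmat = sp.coo_matrix((np.r_[w, w], (np.r_[e0, e1], np.r_[e1, e0])),
                         shape=(H * W, H * W)).tocsr()
    L = (sp.diags(np.asarray(Wmat.sum(axis=1)).ravel()) - Wmat).tocsr()
    labels = seeds.ravel()
    marked = labels > 0
    Lu = L[~marked][:, ~marked]                   # Laplacian over unlabeled nodes
    B = L[~marked][:, marked]
    K = int(labels[marked].max())
    probs = np.zeros((np.count_nonzero(~marked), K))
    for k in range(1, K + 1):
        m = (labels[marked] == k).astype(float)
        probs[:, k - 1] = spla.spsolve(Lu.tocsc(), -B @ m)
    out = labels.copy()
    out[~marked] = probs.argmax(axis=1) + 1       # most probable seed label
    return out.reshape(H, W)
```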

  9. Automated cerebellar lobule segmentation with application to cerebellar structural analysis in cerebellar disease.

    PubMed

    Yang, Zhen; Ye, Chuyang; Bogovic, John A; Carass, Aaron; Jedynak, Bruno M; Ying, Sarah H; Prince, Jerry L

    2016-02-15

    The cerebellum plays an important role in both motor control and cognitive function. Cerebellar function is topographically organized and diseases that affect specific parts of the cerebellum are associated with specific patterns of symptoms. Accordingly, delineation and quantification of cerebellar sub-regions from magnetic resonance images are important in the study of cerebellar atrophy and associated functional losses. This paper describes an automated cerebellar lobule segmentation method based on a graph cut segmentation framework. Results from multi-atlas labeling and tissue classification contribute to the region terms in the graph cut energy function and boundary classification contributes to the boundary term in the energy function. A cerebellar parcellation is achieved by minimizing the energy function using the α-expansion technique. The proposed method was evaluated using a leave-one-out cross-validation on 15 subjects including both healthy controls and patients with cerebellar diseases. Based on reported Dice coefficients, the proposed method outperforms two state-of-the-art methods. The proposed method was then applied to 77 subjects to study the region-specific cerebellar structural differences in three spinocerebellar ataxia (SCA) genetic subtypes. Quantitative analysis of the lobule volumes shows distinct patterns of volume changes associated with different SCA subtypes consistent with known patterns of atrophy in these genetic subtypes. PMID:26408861
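The energy minimized by α-expansion combines per-voxel region terms with pairwise boundary terms. Evaluating such an energy for a given labeling can be sketched as follows (a toy Potts-style 2-D illustration, not the paper's exact energy function):

```python
import numpy as np

def labeling_energy(labels, region_cost, boundary_cost):
    """Energy of a 2-D labeling: sum of per-pixel region costs plus a
    constant boundary penalty for each 4-neighbour pair with different
    labels (a Potts boundary term).

    labels : (H, W) int array; region_cost : (K, H, W) array where
    region_cost[k, i, j] is the cost of assigning label k to pixel (i, j)."""
    H, W = labels.shape
    e = sum(region_cost[labels[i, j], i, j] for i in range(H) for j in range(W))
    for i in range(H):
        for j in range(W):
            if j + 1 < W and labels[i, j] != labels[i, j + 1]:
                e += boundary_cost
            if i + 1 < H and labels[i, j] != labels[i + 1, j]:
                e += boundary_cost
    return e
```

α-expansion repeatedly proposes relabeling subsets of pixels to a label α and keeps the move only if this energy decreases.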

  10. SU-E-J-238: Monitoring Lymph Node Volumes During Radiotherapy Using Semi-Automatic Segmentation of MRI Images

    SciTech Connect

    Veeraraghavan, H; Tyagi, N; Riaz, N; McBride, S; Lee, N; Deasy, J

    2014-06-01

    Purpose: Identification and image-based monitoring of lymph nodes growing due to disease could be an attractive alternative to prophylactic head and neck irradiation. We evaluated the accuracy of the user-interactive Grow Cut algorithm for volumetric segmentation of radiotherapy-relevant lymph nodes from MRI taken weekly during radiotherapy. Method: The algorithm employs user-drawn strokes in the image to volumetrically segment multiple structures of interest. We used 3D T2-weighted turbo spin echo images with an isotropic resolution of 1 mm3 and FOV of 492×492×300 mm3 of head and neck cancer patients who underwent weekly MR imaging during the course of radiotherapy. Various lymph node (LN) levels (N2, N3, N4/5) were individually contoured on the weekly MR images by an expert physician and used as ground truth in our study. The segmentation results were compared with the physician-drawn lymph nodes based on the DICE similarity score. Results: Three head and neck patients with 6 weekly MR images each were evaluated. Two patients had level 2 LN drawn and one patient had level N2, N3 and N4/5 drawn on each MR image. The algorithm took an average of a minute to segment the entire volume (512×512×300 mm3). The algorithm achieved an overall DICE similarity score of 0.78. The time taken for initializing and obtaining the volumetric mask was about 5 mins for cases with only N2 LN and about 15 mins for the case with N2, N3 and N4/5 level nodes. The longer initialization time for the latter case was due to the need for accurate user inputs to separate overlapping portions of the different LN. The standard deviation in segmentation accuracy at different time points was at most 0.05. Conclusions: Our initial evaluation of the Grow Cut segmentation shows reasonably accurate and consistent volumetric segmentations of LN with minimal user effort and time.

  11. Finite Volume Methods: Foundation and Analysis

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Ohlberger, Mario

    2003-01-01

    Finite volume methods are a class of discretization schemes that have proven highly successful in approximating the solution of a wide variety of conservation law systems. They are extensively used in fluid mechanics, porous media flow, meteorology, electromagnetics, models of biological processes, semi-conductor device simulation and many other engineering areas governed by conservative systems that can be written in integral control volume form. This article reviews elements of the foundation and analysis of modern finite volume methods. The primary advantages of these methods are numerical robustness through the satisfaction of discrete maximum (minimum) principles, applicability on very general unstructured meshes, and the intrinsic local conservation properties of the resulting schemes. Throughout this article, specific attention is given to scalar nonlinear hyperbolic conservation laws and the development of high order accurate schemes for discretizing them. A key tool in the design and analysis of finite volume schemes suitable for non-oscillatory discontinuity capturing is discrete maximum principle analysis. A number of building blocks used in the development of numerical schemes possessing local discrete maximum principles are reviewed in one and several space dimensions, e.g. monotone fluxes, E-fluxes, TVD discretization, non-oscillatory reconstruction, slope limiters, positive coefficient schemes, etc. When available, theoretical results concerning a priori and a posteriori error estimates are given. Further advanced topics are then considered such as high order time integration, discretization of diffusion terms and the extension to systems of nonlinear conservation laws.
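As a concrete instance of the building blocks discussed, a first-order monotone (upwind) finite volume scheme for the 1-D advection equation u_t + a u_x = 0 with a > 0 on a periodic grid can be written as follows (a standard textbook sketch, not tied to this article's notation):

```python
import numpy as np

def upwind_advection(u, a, dx, dt, steps):
    """First-order finite volume scheme for u_t + a u_x = 0 (a > 0) on a
    periodic grid, with the monotone upwind flux F_{i+1/2} = a * u_i.
    Update: u_i += dt/dx * (F_{i-1/2} - F_{i+1/2}); mass is conserved."""
    u = np.asarray(u, dtype=float).copy()
    for _ in range(steps):
        flux_left = a * np.roll(u, 1)            # flux through each left face
        u += dt / dx * (flux_left - np.roll(flux_left, -1))
    return u
```

At CFL number a*dt/dx = 1 the scheme reduces to an exact shift of the cell averages, and the cell-average sum (total mass) is conserved for any stable step.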

  12. Automated segmentation of chronic stroke lesions using LINDA: Lesion identification with neighborhood data analysis.

    PubMed

    Pustina, Dorian; Coslett, H Branch; Turkeltaub, Peter E; Tustison, Nicholas; Schwartz, Myrna F; Avants, Brian

    2016-04-01

    The gold standard for identifying stroke lesions is manual tracing, a method that is known to be observer dependent and time consuming, thus impractical for big data studies. We propose LINDA (Lesion Identification with Neighborhood Data Analysis), an automated segmentation algorithm capable of learning the relationship between existing manual segmentations and a single T1-weighted MRI. A dataset of 60 left hemispheric chronic stroke patients was used to build the method and test it with k-fold and leave-one-out procedures. With respect to manual tracings, predicted lesion maps showed a mean Dice overlap of 0.696 ± 0.16, Hausdorff distance of 17.9 ± 9.8 mm, and average displacement of 2.54 ± 1.38 mm. The manual and predicted lesion volumes correlated at r = 0.961. An additional dataset of 45 patients was utilized to test LINDA with independent data, achieving high accuracy rates and confirming its cross-institutional applicability. To investigate the cost of moving from manual tracings to automated segmentation, we performed comparative lesion-to-symptom mapping (LSM) on five behavioral scores. Predicted and manual lesions produced similar neuro-cognitive maps, albeit with some discussed discrepancies. Of note, region-wise LSM was more robust to the prediction error than voxel-wise LSM. Our results show that, while several limitations exist, our current results compete with or exceed the state-of-the-art, producing consistent predictions, very low failure rates, and transferable knowledge between labs. This work also establishes a new viewpoint on evaluating automated methods not only with segmentation accuracy but also with brain-behavior relationships. LINDA is made available online with trained models from over 100 patients. PMID:26756101

  13. Application of taxonomy theory, Volume 1: Computing a Hopf bifurcation-related segment of the feasibility boundary. Final report

    SciTech Connect

    Zaborszky, J.; Venkatasubramanian, V.

    1995-10-01

    Taxonomy Theory is the first precise comprehensive theory for large power system dynamics modeled in any detail. The motivation for this project is to show that it can be used, practically, for analyzing a disturbance that actually occurred on a large system, which affected a sizable portion of the Midwest with supercritical Hopf type oscillations. This event is well documented and studied. The report first summarizes Taxonomy Theory with an engineering flavor. Various computational approaches are then cited and analyzed for their suitability for use with Taxonomy Theory. Working equations are developed for computing a segment of the feasibility boundary that bounds the region of (operating) parameters throughout which the operating point can be moved without losing stability. Experimental software incorporating the large EPRI software package PSAPAC is then developed. After a summary of the events during the subject disturbance, numerous large scale computations, up to 7600 buses, are reported. These results are reduced into graphical and tabular forms, which are then analyzed and discussed. The report is divided into two volumes. This volume illustrates the use of the Taxonomy Theory for computing the feasibility boundary and presents evidence that the event indeed led to a Hopf type oscillation on the system. Furthermore, it shows that the theory can indeed be used for practical computational work with very large systems. Volume 2, a separate volume, will show that the disturbance led to a supercritical (that is, a stable oscillation) Hopf bifurcation.

  14. A framework for automatic heart sound analysis without segmentation

    PubMed Central

    2011-01-01

    Background A new framework for heart sound analysis is proposed. One of the most difficult processes in heart sound analysis is segmentation, due to interference from murmurs. Method Equal numbers of cardiac cycles were extracted from heart sounds with different heart rates using information from envelopes of autocorrelation functions, without the need to label individual fundamental heart sounds (FHS). The complete method consists of envelope detection, calculation of cardiac cycle lengths using autocorrelation of envelope signals, feature extraction using the discrete wavelet transform, principal component analysis, and classification using neural network bagging predictors. Result The proposed method was tested on a set of heart sounds obtained from several on-line databases and recorded with an electronic stethoscope. The geometric mean was used as the performance index. Average classification performance using ten-fold cross-validation was 0.92 for the noise-free case, 0.90 under white noise with 10 dB signal-to-noise ratio (SNR), and 0.90 under impulse noise of up to 0.3 s duration. Conclusion The proposed method showed promising results and high noise robustness over a wide range of heart sounds. However, more tests are needed to address any bias that may have been introduced by the different sources of heart sounds in the current training set, and to concretely validate the method. Further work includes building a new training set recorded from actual patients and further evaluating the method on it. PMID:21303558
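The cycle-length step described above, locating the autocorrelation peak of the heart sound envelope within a physiological heart-rate range, can be sketched as follows. The function name, BPM bounds, and envelope input are illustrative assumptions, not the paper's implementation:

```python
def cycle_length(envelope, fs, min_bpm=40, max_bpm=200):
    """Estimate the cardiac cycle length (seconds) as the lag of the
    autocorrelation peak of a heart sound envelope, with the search
    restricted to lags corresponding to a physiological heart rate."""
    n = len(envelope)
    mean = sum(envelope) / n
    x = [v - mean for v in envelope]  # remove DC so the peak is meaningful

    def acf(lag):
        # unnormalized autocorrelation at the given lag
        return sum(x[i] * x[i + lag] for i in range(n - lag))

    lo = int(fs * 60.0 / max_bpm)           # shortest plausible cycle
    hi = min(n - 1, int(fs * 60.0 / min_bpm))  # longest plausible cycle
    best_lag = max(range(lo, hi + 1), key=acf)
    return best_lag / fs
```

On a synthetic envelope with one pulse every 0.8 s, the estimate recovers the cycle length without labeling individual heart sounds.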

  15. Atlas-Based Segmentation Improves Consistency and Decreases Time Required for Contouring Postoperative Endometrial Cancer Nodal Volumes

    SciTech Connect

    Young, Amy V.; Wortham, Angela; Wernick, Iddo; Evans, Andrew; Ennis, Ronald D.

    2011-03-01

    Purpose: Accurate target delineation of the nodal volumes is essential for three-dimensional conformal and intensity-modulated radiotherapy planning for endometrial cancer adjuvant therapy. We hypothesized that atlas-based segmentation ('autocontouring') would lead to time savings and more consistent contours among physicians. Methods and Materials: A reference anatomy atlas was constructed using the data from 15 postoperative endometrial cancer patients by contouring the pelvic nodal clinical target volume on the simulation computed tomography scan according to the Radiation Therapy Oncology Group 0418 trial using commercially available software. On the simulation computed tomography scans from 10 additional endometrial cancer patients, the nodal clinical target volume autocontours were generated. Three radiation oncologists corrected the autocontours and delineated the manual nodal contours under timed conditions while unaware of the other contours. The time difference was determined, and the overlap of the contours was calculated using Dice's coefficient. Results: For all physicians, manual contouring of the pelvic nodal target volumes and editing the autocontours required a mean ± standard deviation of 32 ± 9 vs. 23 ± 7 minutes, respectively (p = .000001), a 26% time savings. For each physician, the time required to delineate the manual contours vs. correcting the autocontours was 30 ± 3 vs. 21 ± 5 min (p = .003), 39 ± 12 vs. 30 ± 5 min (p = .055), and 29 ± 5 vs. 20 ± 5 min (p = .0002). The mean overlap increased from manual contouring (0.77) to correcting the autocontours (0.79; p = .038). 
Conclusion: The results of our study have shown that autocontouring leads to increased consistency and time savings when contouring the nodal target volumes for adjuvant treatment of endometrial cancer, although the autocontours still required careful editing to ensure that the lymph nodes at risk of recurrence are properly included in the target

  16. Prioritization of brain MRI volumes using medical image perception model and tumor region segmentation.

    PubMed

    Mehmood, Irfan; Ejaz, Naveed; Sajjad, Muhammad; Baik, Sung Wook

    2013-10-01

    The objective of the present study is to explore prioritization methods in diagnostic imaging modalities to automatically determine the contents of medical images. In this paper, we propose an efficient prioritization of brain MRI. First, the visual perception of radiologists is adapted to identify salient regions. Then this saliency information is used as an automatic label for accurate segmentation of brain lesions to determine the scientific value of the image. The qualitative and quantitative results show that the rankings generated by the proposed method are closer to the rankings created by radiologists. PMID:24034739

  17. Segmentation and classification of capnograms: application in respiratory variability analysis.

    PubMed

    Herry, C L; Townsend, D; Green, G C; Bravi, A; Seely, A J E

    2014-12-01

    Variability analysis of respiratory waveforms has been shown to provide key insights into respiratory physiology and has been used successfully to predict clinical outcomes. The current standard for quality assessment of the capnogram signal relies on a visual analysis performed by an expert in order to identify waveform artifacts. Automated processing of capnograms is desirable in order to extract clinically useful features over extended periods of time in a patient monitoring environment. However, the proper interpretation of capnogram-derived features depends upon the quality of the underlying waveform. In addition, the comparison of capnogram datasets across studies requires a more practical approach than a visual analysis and selection of high-quality breath data. This paper describes a system that automatically extracts breath-by-breath features from capnograms and estimates the quality of the individual breaths derived from them. Segmented capnogram breaths were presented to expert annotators, who labeled the individual physiological breaths as normal or one of multiple abnormal breath types. All abnormal breath types were aggregated into the abnormal class for the purposes of this manuscript, with respiratory variability analysis as the end application. A database of 11,526 breaths from over 300 patients was created, comprising around 35% abnormal breaths. Several simple classifiers were trained through stratified repeated ten-fold cross-validation and tested on an unseen portion of the labeled breath database, using a subset of 15 features derived from each breath curve. Decision Tree, K-Nearest Neighbors (KNN) and Naive Bayes classifiers were close in terms of performance (AUC of 90%, 89% and 88%) while using 7, 4 and 5 breath features, respectively. When compared to airflow-derived timings, the 95% confidence interval on the mean difference in interbreath intervals was ± 0.18 s. This breath classification system provides a fast and robust pre

  18. Simultaneous segmentation of retinal surfaces and microcystic macular edema in SDOCT volumes

    NASA Astrophysics Data System (ADS)

    Antony, Bhavna J.; Lang, Andrew; Swingle, Emily K.; Al-Louzi, Omar; Carass, Aaron; Solomon, Sharon; Calabresi, Peter A.; Saidha, Shiv; Prince, Jerry L.

    2016-03-01

    Optical coherence tomography (OCT) is a noninvasive imaging modality that has begun to find widespread use in retinal imaging for the detection of a variety of ocular diseases. In addition to structural changes in the form of altered retinal layer thicknesses, pathological conditions may also cause the formation of edema within the retina. In multiple sclerosis, for instance, the nerve fiber and ganglion cell layers are known to thin. Additionally, the formation of pseudocysts called microcystic macular edema (MME) has also been observed in the eyes of about 5% of MS patients, and its presence has been shown to be correlated with disease severity. Previously, we proposed separate algorithms for the segmentation of retinal layers and MME, but since MME mainly occurs within specific regions of the retina, a simultaneous approach is advantageous. In this work, we propose an automated globally optimal graph-theoretic approach that simultaneously segments the retinal layers and the MME in volumetric OCT scans. SD-OCT scans from one eye of 12 MS patients with known MME and 8 healthy controls were acquired and the pseudocysts manually traced. The overall precision and recall of the pseudocyst detection were found to be 86.0% and 79.5%, respectively.

  19. Analysis of image thresholding segmentation algorithms based on swarm intelligence

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo

    2013-03-01

    Swarm intelligence-based image thresholding segmentation algorithms play an important role in image segmentation research. In this paper, we briefly introduce the theory behind four existing swarm intelligence-based image segmentation algorithms: the fish swarm algorithm, artificial bee colony, bacterial foraging algorithm, and particle swarm optimization. Several image benchmarks are then tested to show the differences among the four algorithms in segmentation accuracy, time consumption, convergence, and robustness to salt-and-pepper and Gaussian noise. Through these comparisons, the paper gives a qualitative analysis of the performance differences among the four algorithms. The conclusions should provide significant guidance for practical image segmentation.
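As a concrete sketch of the swarm-based thresholding idea, the snippet below uses a minimal particle swarm optimization to maximize Otsu's between-class variance over a grey-level histogram. The single-threshold setting, parameter values, and function names are illustrative assumptions, not taken from the paper:

```python
import random

def between_class_variance(hist, t):
    """Otsu's criterion for threshold t on a grey-level histogram."""
    total = sum(hist)
    w0 = sum(hist[:t])
    w1 = total - w0
    if w0 == 0 or w1 == 0:
        return 0.0
    m0 = sum(i * hist[i] for i in range(t)) / w0
    m1 = sum(i * hist[i] for i in range(t, len(hist))) / w1
    return (w0 / total) * (w1 / total) * (m0 - m1) ** 2

def pso_threshold(hist, n_particles=10, iters=40, seed=0):
    """Minimal particle swarm search for the threshold maximizing
    between-class variance (the fitness a swarm-based method optimizes)."""
    rng = random.Random(seed)
    L = len(hist)
    pos = [rng.uniform(1, L - 1) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = list(pos)
    pscore = [between_class_variance(hist, int(p)) for p in pos]
    g = pscore.index(max(pscore))
    gbest, gscore = pbest[g], pscore[g]
    for _ in range(iters):
        for i in range(n_particles):
            # inertia + cognitive + social velocity update, then clamp
            vel[i] = (0.7 * vel[i]
                      + 1.5 * rng.random() * (pbest[i] - pos[i])
                      + 1.5 * rng.random() * (gbest - pos[i]))
            pos[i] = min(L - 1, max(1, pos[i] + vel[i]))
            s = between_class_variance(hist, int(pos[i]))
            if s > pscore[i]:
                pbest[i], pscore[i] = pos[i], s
                if s > gscore:
                    gbest, gscore = pos[i], s
    return int(gbest)
```

On a clearly bimodal histogram the swarm settles in the valley between the two modes, where between-class variance is maximal.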

  20. Blood vessel segmentation using line-direction vector based on Hessian analysis

    NASA Astrophysics Data System (ADS)

    Nimura, Yukitaka; Kitasaka, Takayuki; Mori, Kensaku

    2010-03-01

    For deciding the treatment strategy, grading of stenoses is important in the diagnosis of vascular diseases such as arterial occlusive disease or thromboembolism. It is also important to understand the vasculature in minimally invasive surgery such as laparoscopic surgery or natural orifice translumenal endoscopic surgery. Precise segmentation and recognition of blood vessel regions are indispensable tasks in medical image processing systems. Previous methods utilize only a "lineness" measure, which is computed by Hessian analysis. However, the difference in intensity values between a voxel of a thin blood vessel and a voxel of surrounding tissue is generally reduced by the partial volume effect. Therefore, previous methods cannot extract thin blood vessel regions precisely. This paper describes a novel blood vessel segmentation method that can extract thin blood vessels while suppressing false positives. The proposed method utilizes not only the lineness measure but also the line-direction vector corresponding to the largest eigenvalue in the Hessian analysis. By introducing line-direction information, it is possible to distinguish between a blood vessel voxel and a voxel having a low lineness measure caused by noise. In addition, we consider the scale of the blood vessel. The proposed method can reduce false positives in some line-like tissues close to blood vessel regions by utilizing iterative region growing with scale information. The experimental result shows that thin blood vessels (0.5 mm in diameter, almost the same as the voxel spacing) can be extracted accurately by the proposed method.
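The Hessian-based "lineness" idea can be illustrated in 2D: at a bright line pixel, the Hessian has one large negative eigenvalue (across the line) and one near zero (along it). This is a simplified Frangi-style sketch with a finite-difference Hessian and arbitrary sensitivity constants, not the paper's 3D method:

```python
import math

def lineness_2d(img, y, x):
    """Frangi-style 'lineness' at one pixel of a 2D image (list of rows).
    Bright line-like structures yield one large negative Hessian eigenvalue
    and one near zero; blobs and flat regions score low."""
    # second derivatives by central finite differences
    hxx = img[y][x + 1] - 2 * img[y][x] + img[y][x - 1]
    hyy = img[y + 1][x] - 2 * img[y][x] + img[y - 1][x]
    hxy = (img[y + 1][x + 1] - img[y + 1][x - 1]
           - img[y - 1][x + 1] + img[y - 1][x - 1]) / 4.0
    # eigenvalues of the symmetric 2x2 Hessian, sorted by magnitude
    tr, det = hxx + hyy, hxx * hyy - hxy * hxy
    d = math.sqrt(max((tr / 2) ** 2 - det, 0.0))
    l1, l2 = sorted((tr / 2 - d, tr / 2 + d), key=abs)
    if l2 >= 0:  # not a bright structure on a darker background
        return 0.0
    rb = abs(l1) / abs(l2)            # blobness ratio (0 for a pure line)
    s = math.sqrt(l1 * l1 + l2 * l2)  # structure strength
    return math.exp(-rb ** 2 / 0.5) * (1 - math.exp(-s ** 2 / 2.0))
```

A bright horizontal line scores near 1, while a flat background pixel scores 0; the paper additionally exploits the eigenvector (line direction) that this measure alone discards.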

  1. Normative Data for Body Segment Weights, Volumes, and Densities in Cadaver and Living Subjects

    ERIC Educational Resources Information Center

    Gold, Ellen; Katch, Victor

    1976-01-01

    Application of Dempster's data alone to problems in human motion studies of living subjects is at best a rough approximation, in light of apparent differences, with respect to volume and weight, between Dempster's data and the grand mean calculated for all data. (MB)

  2. Semiautomated three-dimensional segmentation software to quantify carpal bone volume changes on wrist CT scans for arthritis assessment.

    PubMed

    Duryea, J; Magalnick, M; Alli, S; Yao, L; Wilson, M; Goldbach-Mansky, R

    2008-06-01

    Rapid progression of joint destruction is an indication of poor prognosis in patients with rheumatoid arthritis. Computed tomography (CT) has the potential to serve as a gold standard for joint imaging since it provides high-resolution three-dimensional (3D) images of bone structure. The authors have developed a method to quantify erosion volume changes on wrist CT scans. In this article they present a description and validation of the methodology using multiple scans of a hand phantom and five human subjects. An anthropomorphic hand phantom was imaged with a clinical CT scanner at three different orientations separated by a 30-deg angle. A reader used the semiautomated software tool to segment the individual carpal bones of each CT scan. Reproducibility was measured as the root-mean-square standard deviation (RMSSD) and coefficient of variation (CoV) between multiple measurements of the carpal volumes. Longitudinal erosion progression was studied by inserting simulated erosions in a paired second scan. The change in simulated erosion size was calculated by performing 3D image registration and measuring the volume difference between scans in a region adjacent to the simulated erosion. The RMSSD for the total carpal volumes was 21.0 mm3 (CoV = 1.3%) for the phantom, and 44.1 mm3 (CoV = 3.0%) for the in vivo subjects. Using 3D registration and local volume difference calculations, the RMSSD was 1.0-3.0 mm3. The reader time was approximately 5 min per carpal bone. There was excellent agreement between the measured and simulated erosion volumes. The effect of a poorly measured volume for a single erosion is mitigated by the large number of subjects that would comprise a clinical study and by the many erosions that would be measured per patient. CT promises to be a quantifiable tool to measure erosion volumes and may serve as a gold standard that can be used in the validation of other modalities such as magnetic resonance imaging.
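The reproducibility statistics quoted above (RMSSD and CoV across repeated measurements of the same volumes) reduce to a short computation. A minimal sketch; the input layout (one measurement list per structure) is an assumption, not the authors' code:

```python
import math

def rmssd_and_cov(repeats):
    """Root-mean-square standard deviation and coefficient of variation
    across repeated volume measurements.
    `repeats` is a list of per-structure measurement lists."""
    sds, means = [], []
    for meas in repeats:
        m = sum(meas) / len(meas)
        sd = math.sqrt(sum((x - m) ** 2 for x in meas) / (len(meas) - 1))
        means.append(m)
        sds.append(sd)
    rmssd = math.sqrt(sum(sd * sd for sd in sds) / len(sds))  # pooled SD
    cov = rmssd / (sum(means) / len(means))  # relative to the mean volume
    return rmssd, cov
```

For instance, measurements of (100, 102, 98) and (200, 200, 200) give an RMSSD of √2 ≈ 1.41 and a CoV of just under 1%.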

  3. The Analysis of Image Segmentation Hierarchies with a Graph-based Knowledge Discovery System

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Cook, Diane J.; Ketkar, Nikhil; Aksoy, Selim

    2008-01-01

    Currently available pixel-based analysis techniques do not effectively extract the information content from the increasingly available high spatial resolution remotely sensed imagery data. A general consensus is that object-based image analysis (OBIA) is required to effectively analyze this type of data. OBIA is usually a two-stage process; image segmentation followed by an analysis of the segmented objects. We are exploring an approach to OBIA in which hierarchical image segmentations provided by the Recursive Hierarchical Segmentation (RHSEG) software developed at NASA GSFC are analyzed by the Subdue graph-based knowledge discovery system developed by a team at Washington State University. In this paper we discuss our initial approach to representing the RHSEG-produced hierarchical image segmentations in a graphical form understandable by Subdue, and provide results on real and simulated data. We also discuss planned improvements designed to more effectively and completely convey the hierarchical segmentation information to Subdue and to improve processing efficiency.

  4. Fusing Markov random fields with anatomical knowledge and shape-based analysis to segment multiple sclerosis white matter lesions in magnetic resonance images of the brain

    NASA Astrophysics Data System (ADS)

    AlZubi, Stephan; Toennies, Klaus D.; Bodammer, N.; Hinrichs, Herman

    2002-05-01

    This paper proposes an image analysis system to segment multiple sclerosis lesions in magnetic resonance (MR) brain volumes consisting of 3 mm thick slices using three channels (images showing T1-, T2- and PD-weighted contrast). The method uses the statistical model of Markov Random Fields (MRF) at both low and high levels. The neighborhood system used in this MRF is defined in three types: (1) Voxel to voxel: a low-level heterogeneous neighborhood system is used to restore noisy images. (2) Voxel to segment: a fuzzy atlas, which indicates the probability distribution of each tissue type in the brain, is registered elastically with the MRF. It is used by the MRF as a priori knowledge to correct misclassified voxels. (3) Segment to segment: remaining lesion candidates are processed by a feature-based classifier that looks at unary and neighborhood information to eliminate more false positives. An expert's manual segmentation was compared with the algorithm's results.

  5. Automated detection, 3D segmentation and analysis of high resolution spine MR images using statistical shape models

    NASA Astrophysics Data System (ADS)

    Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.

    2012-12-01

    Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract the 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes with the mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02 and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans and shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.

  6. Infant word segmentation and childhood vocabulary development: a longitudinal analysis.

    PubMed

    Singh, Leher; Steven Reznick, J; Xuehua, Liang

    2012-07-01

    Infants begin to segment novel words from speech by 7.5 months, demonstrating an ability to track, encode and retrieve words in the context of larger units. Although it is presumed that word recognition at this stage is a prerequisite to constructing a vocabulary, the continuity between these stages of development has not yet been empirically demonstrated. The goal of the present study is to investigate whether infant word segmentation skills are indeed related to later lexical development. Two word segmentation tasks, varying in complexity, were administered in infancy and related to childhood outcome measures. Outcome measures consisted of age-normed productive vocabulary percentiles and a measure of cognitive development. Results demonstrated a strong degree of association between infant word segmentation abilities at 7 months and productive vocabulary size at 24 months. In addition, outcome groups, as defined by median vocabulary size and growth trajectories at 24 months, showed distinct word segmentation abilities as infants. These findings provide the first prospective evidence supporting the predictive validity of infant word segmentation tasks and suggest that they are indeed associated with mature word knowledge. A video abstract of this article can be viewed at http://www.youtube.com/watch?v=jxzLi5oLZQ8. PMID:22709398

  7. Glioma grading using apparent diffusion coefficient map: application of histogram analysis based on automatic segmentation.

    PubMed

    Lee, Jeongwon; Choi, Seung Hong; Kim, Ji-Hoon; Sohn, Chul-Ho; Lee, Sooyeul; Jeong, Jaeseung

    2014-09-01

    The accurate diagnosis of glioma subtypes is critical for appropriate treatment, but conventional histopathologic diagnosis often exhibits significant intra-observer variability and sampling error. The aim of this study was to investigate whether histogram analysis using an automatically segmented region of interest (ROI), excluding cystic or necrotic portions, could improve the differentiation between low-grade and high-grade gliomas. Thirty-two patients (nine low-grade and 23 high-grade gliomas) were included in this retrospective investigation. The outer boundaries of the entire tumors were manually drawn in each section of the contrast-enhanced T1-weighted MR images. We excluded cystic or necrotic portions from the entire tumor volume. The histogram analyses were performed within the ROI on normalized apparent diffusion coefficient (ADC) maps. To evaluate the contribution of the proposed method to glioma grading, we compared the areas under the receiver operating characteristic (ROC) curves. We found that an ROI excluding cystic or necrotic portions was more useful for glioma grading than an entire tumor ROI. In the case of the fifth percentile values of the normalized ADC histogram, the area under the ROC curve for the tumor ROIs excluding cystic or necrotic portions was significantly higher than that for the entire tumor ROIs (p < 0.005). The automatic segmentation of cystic or necrotic areas probably improves the ability to differentiate between high- and low-grade gliomas on an ADC map. PMID:25042540
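The two quantitative steps described above, extracting a low percentile from the ADC histogram and comparing groups via the area under the ROC curve, are small computations. A hedged sketch; the linear-interpolation percentile convention and function names are assumptions, not from the paper:

```python
def percentile(values, p):
    """Percentile of a sample with linear interpolation between order
    statistics (one common convention; others exist)."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100.0
    f = int(k)
    return s[f] + (k - f) * (s[min(f + 1, len(s) - 1)] - s[f])

def roc_auc(scores_pos, scores_neg):
    """AUC via the rank-sum (Mann-Whitney) identity: the probability that
    a randomly chosen positive case scores above a negative one, with
    ties counted as half."""
    wins = sum((sp > sn) + 0.5 * (sp == sn)
               for sp in scores_pos for sn in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

With the fifth-percentile ADC value of each tumor as the score, `roc_auc` of the two grade groups gives the area-under-the-curve figure the study compares.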

  8. Analysis of adjacent segment reoperation after lumbar total disc replacement

    PubMed Central

    Rainey, Scott; Blumenthal, Scott L.; Zigler, Jack E.; Guyer, Richard D.; Ohnmeiss, Donna D.

    2012-01-01

    Background Fusion has long been used for treating chronic back pain unresponsive to nonoperative care. However, potential development of adjacent segment degeneration resulting in reoperation is a concern. Total disc replacement (TDR) has been proposed as a method for addressing back pain and preventing or reducing adjacent segment degeneration. The purpose of the study was to determine the reoperation rate at the segment adjacent to a level implanted with a lumbar TDR and to analyze the pre-TDR condition of the adjacent segment. Methods This study was based on a retrospective review of charts and radiographs from a consecutive series of 1000 TDR patients to identify those who underwent reoperation because of adjacent segment degeneration. Some of the patients were part of randomized studies comparing TDR with fusion. Adjacent segment reoperation data were also collected from 67 patients who were randomized to fusion in those studies. The condition of the adjacent segment before the index surgery was compared with its condition before reoperation based on radiographs, magnetic resonance imaging (MRI), and computed tomography. Results Of the 1000 TDR patients, 20 (2.0%) underwent reoperation. The mean length of time from arthroplasty to reoperation was 28.3 months (range, 0.5–85 months). Of the adjacent segments evaluated on preoperative MRI, 38.8% were normal, 38.8% were moderately diseased, and 22.2% were classified as having severe degeneration. None of these levels had a different grading at the time of reoperation compared with the pre-TDR MRI study. Reoperation for adjacent segment degeneration was performed in 4.5% of the fusion patients. Conclusions The 2.0% rate of adjacent segment degeneration resulting in reoperation in this study is similar to the 2.0% to 2.8% range in other studies and lower than the published rates of 7% to 18% after lumbar fusion. 
By carefully assessing the presence of pre-existing degenerative changes before performing arthroplasty

  9. Relationship between stroke volume and pulse pressure during blood volume perturbation: a mathematical analysis.

    PubMed

    Bighamian, Ramin; Hahn, Jin-Oh

    2014-01-01

    Arterial pulse pressure has been widely used as a surrogate for stroke volume, for example, in the guidance of fluid therapy. However, recent experimental investigations suggest that arterial pulse pressure is not linearly proportional to stroke volume, and the mechanisms underlying the relation between the two are not clearly understood. The goal of this study was to elucidate how arterial pulse pressure and stroke volume respond to a perturbation in left ventricular blood volume, based on a systematic mathematical analysis. Both our mathematical analysis and experimental data showed that the relative change in arterial pulse pressure due to a left ventricular blood volume perturbation was consistently smaller than the corresponding relative change in stroke volume, owing to the nonlinear left ventricular pressure-volume relation during diastole, which reduces the sensitivity of arterial pulse pressure to perturbations in left ventricular blood volume. Therefore, arterial pulse pressure must be used with care as a surrogate for stroke volume in guiding fluid therapy.

  10. Structured Time Series Analysis for Human Action Segmentation and Recognition.

    PubMed

    Dian Gong; Medioni, Gerard; Xuemei Zhao

    2014-07-01

    We address the problem of structure learning of human motion in order to recognize actions from a continuous monocular motion sequence of an arbitrary person from an arbitrary viewpoint. Human motion sequences are represented by multivariate time series in the joint-trajectories space. Under this structured time series framework, we first propose Kernelized Temporal Cut (KTC), an extension of previous work on change-point detection that incorporates Hilbert space embedding of distributions to handle the nonparametric and high-dimensionality issues of human motions. Experimental results demonstrate the effectiveness of our approach, which yields real-time segmentation and high action segmentation accuracy. Second, a spatio-temporal manifold framework is proposed to model the latent structure of time series data. Then an efficient spatio-temporal alignment algorithm, Dynamic Manifold Warping (DMW), is proposed for multivariate time series to calculate motion similarity between action sequences (segments). Furthermore, by combining the temporal segmentation algorithm and the alignment algorithm, online human action recognition can be performed by associating a few labeled examples from motion capture data. The results on human motion capture data and 3D depth sensor data demonstrate the effectiveness of the proposed approach in automatically segmenting and recognizing motion sequences, and its ability to handle noisy and partially occluded data in the transfer learning module. PMID:26353312

  11. Automated compromised right lung segmentation method using a robust atlas-based active volume model with sparse shape composition prior in CT.

    PubMed

    Zhou, Jinghao; Yan, Zhennan; Lasio, Giovanni; Huang, Junzhou; Zhang, Baoshe; Sharma, Navesh; Prado, Karl; D'Souza, Warren

    2015-12-01

    To resolve challenges in image segmentation in oncologic patients with severely compromised lungs, we propose an automated right lung segmentation framework that uses a robust, atlas-based active volume model with a sparse shape composition prior. The robust atlas is achieved by combining the atlas with the output of sparse shape composition. Thoracic computed tomography images (n=38) from patients with lung tumors were collected. The right lung in each scan was manually segmented to build a reference training dataset against which the performance of the automated segmentation method was assessed. The proposed segmentation method with sparse shape composition achieved a mean Dice similarity coefficient (DSC) of (0.72, 0.81), a mean accuracy (ACC) of (0.97, 0.98), and a mean relative error (RE) of (0.46, 0.74), all with 95% CI. Both qualitative and quantitative comparisons suggest that the proposed method can achieve better segmentation accuracy with less variance than other atlas-based segmentation methods in compromised lung segmentation.

  12. Segmentation of hepatic artery in multi-phase liver CT using directional dilation and connectivity analysis

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Schnurr, Alena-Kathrin; Zidowitz, Stephan; Georgii, Joachim; Zhao, Yue; Razavi, Mohammad; Schwier, Michael; Hahn, Horst K.; Hansen, Christian

    2016-03-01

    Segmentation of hepatic arteries in multi-phase computed tomography (CT) images is indispensable in liver surgery planning. During image acquisition, the hepatic artery is enhanced by the injection of contrast agent. The enhanced signals are often not stably acquired due to non-optimal contrast timing. Other vascular structures, such as the hepatic or portal veins, can also be enhanced in the arterial phase, which can adversely affect the segmentation results. Furthermore, the arteries might suffer from partial volume effects due to their small diameter. To overcome these difficulties, we propose a framework for robust hepatic artery segmentation requiring a minimal amount of user interaction. First, an efficient multi-scale Hessian-based vesselness filter is applied to the arterial-phase CT image, aiming to enhance vessel structures within a specified diameter range. Second, the vesselness response is processed using a Bayesian classifier to identify the most probable vessel structures. Because the vesselness filter typically performs poorly at vessel bifurcations and on segments corrupted by noise, two vessel-reconnection techniques are proposed. The first technique uses a directional morphological operator to dilate vessel segments along their centerline directions, attempting to fill the gaps between broken vascular segments. The second technique analyzes the connectivity of vessel segments and reconnects disconnected segments and branches. Finally, a 3D vessel tree is reconstructed. The algorithm has been evaluated using 18 CT images of the liver. To quantitatively measure the similarity between segmented and reference vessel trees, the skeleton coverage and mean symmetric distance were calculated to quantify the agreement between reference and segmented vessel skeletons, resulting in averages of 0.55 ± 0.27 and 12.7 ± 7.9 mm (mean ± standard deviation), respectively.
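The two evaluation metrics, skeleton coverage and mean symmetric distance, can be sketched for skeletons represented as point sets. This is an illustrative brute-force NumPy implementation (fine for small skeletons), not the authors' code:

```python
import numpy as np

def pairwise_dist(A, B):
    """Euclidean distance matrix between point sets A (m,3) and B (n,3)."""
    return np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1))

def mean_symmetric_distance(skel_a, skel_b):
    """Mean symmetric distance: average of nearest-neighbour distances in both directions."""
    d = pairwise_dist(skel_a, skel_b)
    return (d.min(axis=1).mean() + d.min(axis=0).mean()) / 2.0

def skeleton_coverage(skel_seg, skel_ref, tol=1.0):
    """Fraction of reference skeleton points within `tol` (mm) of the segmented skeleton."""
    d = pairwise_dist(skel_ref, skel_seg).min(axis=1)
    return float((d <= tol).mean())

ref = np.array([[i, 0.0, 0.0] for i in range(5)])
seg = np.array([[i, 1.0, 0.0] for i in range(5)])  # same centerline, shifted 1 mm
print(mean_symmetric_distance(seg, ref), skeleton_coverage(seg, ref, tol=1.0))
```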

  13. Health Lifestyles: Audience Segmentation Analysis for Public Health Interventions.

    ERIC Educational Resources Information Center

    Slater, Michael D.; Flora, June A.

    This paper is concerned with the application of market research techniques to segment large populations into homogeneous units in order to improve the reach, utilization, and effectiveness of health programs. The paper identifies seven distinctive patterns of health attitudes, social influences, and behaviors using cluster analytic techniques in a…

  14. Comparison of Acute and Chronic Traumatic Brain Injury Using Semi-Automatic Multimodal Segmentation of MR Volumes

    PubMed Central

    Chambers, Micah C.; Alger, Jeffry R.; Filippou, Maria; Prastawa, Marcel W.; Wang, Bo; Hovda, David A.; Gerig, Guido; Toga, Arthur W.; Kikinis, Ron; Vespa, Paul M.; Van Horn, John D.

    2011-01-01

    Abstract Although neuroimaging is essential for prompt and proper management of traumatic brain injury (TBI), there is a regrettable and acute lack of robust methods for the visualization and assessment of TBI pathophysiology, especially for the purpose of improving clinical outcome metrics. Until now, the application of automatic segmentation algorithms to TBI in a clinical setting has remained an elusive goal because existing methods have, for the most part, been insufficiently robust to faithfully capture TBI-related changes in brain anatomy. This article introduces and illustrates the combined use of multimodal TBI segmentation and time-point comparison using 3D Slicer, a widely used software environment whose TBI data processing solutions are openly available. For three representative TBI cases, semi-automatic tissue classification and 3D model generation are performed to enable intra-patient time-point comparison of TBI using multimodal volumetrics and clinical atrophy measures. Identification and quantitative assessment of extra- and intra-cortical bleeding, lesions, edema, and diffuse axonal injury are demonstrated. The proposed tools allow cross-correlation of multimodal metrics from structural imaging (e.g., structural volume, atrophy measurements) with clinical outcome variables and other potential factors predictive of recovery. In addition, the workflows described are suitable for TBI clinical practice and patient monitoring, particularly for assessing damage extent and for the measurement of neuroanatomical change over time. With knowledge of general location, extent, and degree of change, such metrics can be associated with clinical measures and subsequently used to suggest viable treatment options. PMID:21787171

  15. Sensitivity analysis of volume scattering phase functions.

    PubMed

    Tuchow, Noah; Broughton, Jennifer; Kudela, Raphael

    2016-08-01

    To solve the radiative transfer equation and relate inherent optical properties (IOPs) to apparent optical properties (AOPs), knowledge of the volume scattering phase function is required. Due to the difficulty of measuring the phase function, it is frequently approximated. We explore the sensitivity of derived AOPs to the phase function parameterization, and compare measured and modeled values of both the AOPs and estimated phase functions using data from Monterey Bay, California during an extreme "red tide" bloom event. Using in situ measurements of absorption and attenuation coefficients, as well as two sets of measurements of the volume scattering function (VSF), we compared output from the Hydrolight radiative transfer model to direct measurements. We found that several common assumptions used in parameterizing the radiative transfer model consistently introduced overestimates of modeled versus measured remote-sensing reflectance values. Phase functions derived from VSF measurements at multiple wavelengths and a single scattering angle significantly overestimated reflectances when using the manufacturer-supplied corrections, but were substantially improved using newly published corrections; phase functions calculated from VSF measurements at three angles and three wavelengths and processed using manufacturer-supplied corrections were comparable, demonstrating that reasonable predictions can be made using two commercially available instruments. While other studies have reached similar conclusions, our work extends the analysis to coastal waters dominated by an extreme algal bloom with surface chlorophyll concentrations in excess of 100 mg m-3. PMID:27505819
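As an illustration of phase-function parameterization (not the specific VSF corrections compared in this study), the single-parameter Henyey-Greenstein function is a common stand-in; any candidate phase function should integrate to 1 over the full solid angle, which can be checked numerically:

```python
import numpy as np

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function p(theta) in 1/sr, asymmetry parameter g."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * cos_theta) ** 1.5)

# sanity check: integrate p(theta) * 2*pi*sin(theta) over [0, pi]
theta = np.linspace(0.0, np.pi, 200001)
p = henyey_greenstein(np.cos(theta), g=0.9)  # strongly forward-peaked, as in turbid water
integral = np.sum(p * 2.0 * np.pi * np.sin(theta)) * (theta[1] - theta[0])
print(round(integral, 3))  # normalizes to 1
```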

  16. Latent segmentation based count models: Analysis of bicycle safety in Montreal and Toronto.

    PubMed

    Yasmin, Shamsunnahar; Eluru, Naveen

    2016-10-01

    The study contributes to the literature on bicycle safety by building on traditional count regression models to investigate factors affecting bicycle crashes at the Traffic Analysis Zone (TAZ) level. The TAZ is a traffic-related geographic entity that is most frequently used as the spatial unit for macroscopic crash risk analysis. In conventional count models, the impact of exogenous factors is restricted to be the same across the entire region. However, it is possible that the influence of exogenous factors varies across different TAZs. To accommodate this potential variation in the impact of exogenous factors, we formulate latent segmentation based count models. Specifically, we formulate and estimate latent segmentation based Poisson (LP) and latent segmentation based Negative Binomial (LNB) models to study bicycle crash counts. In our latent segmentation approach, we allow for more than two segments and also consider a large set of variables in the segmentation and segment-specific models. The formulated models are estimated using bicycle-motor vehicle crash data from the Island of Montreal and the City of Toronto for the years 2006 through 2010. The TAZ-level variables considered in our analysis include accessibility measures, exposure measures, sociodemographic characteristics, socioeconomic characteristics, road network characteristics, and built environment. A policy analysis is also conducted to illustrate the applicability of the proposed model for planning purposes. This macro-level research would assist decision makers, transportation officials, and community planners in making informed decisions to proactively improve bicycle safety - a prerequisite to promoting a culture of active transportation. PMID:27442595
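The latent segmentation idea can be sketched as a finite mixture of Poisson distributions fit by EM. This is a stripped-down, two-segment, intercept-only version for illustration, without the covariates and segmentation model of the paper:

```python
import numpy as np
from math import lgamma

def log_poisson_pmf(k, lam):
    """Elementwise log Poisson pmf, computed in log space for stability."""
    lg = np.array([lgamma(ki + 1.0) for ki in k])
    return k * np.log(lam) - lam - lg

def em_poisson_mixture(counts, n_iter=200):
    """EM for a two-segment Poisson mixture: returns (segment weights, rates)."""
    k = np.asarray(counts, dtype=float)
    lam = np.array([k.mean() * 0.5, k.mean() * 1.5])  # crude initialization
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior segment membership for each observation
        logp = np.stack([np.log(w[j]) + log_poisson_pmf(k, lam[j]) for j in range(2)])
        logp -= logp.max(axis=0)  # stabilize before exponentiating
        resp = np.exp(logp)
        resp /= resp.sum(axis=0)
        # M-step: update segment weights and Poisson rates
        w = resp.mean(axis=1)
        lam = (resp * k).sum(axis=1) / resp.sum(axis=1)
    return w, lam

rng = np.random.default_rng(0)
# synthetic crash counts from two latent zone segments (rates 2 and 10, made up)
data = np.concatenate([rng.poisson(2.0, 500), rng.poisson(10.0, 500)])
w, lam = em_poisson_mixture(data)
print(np.round(np.sort(lam), 1))  # should recover rates near 2 and 10
```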

  17. Influence of segmented vessel size due to limited imaging resolution on coronary hyperemic flow prediction from arterial crown volume.

    PubMed

    van Horssen, P; van Lier, M G J T B; van den Wijngaard, J P H M; VanBavel, E; Hoefer, I E; Spaan, J A E; Siebes, M

    2016-04-01

    Computational predictions of the functional stenosis severity from coronary imaging data use an allometric scaling law to derive hyperemic blood flow (Q) from coronary arterial volume (V), Q = αV^β. Reliable estimates of α and β are essential for meaningful flow estimations. We hypothesize that the relation between Q and V depends on imaging resolution. In five canine hearts, fluorescent microspheres were injected into the left anterior descending coronary artery during maximal hyperemia. The coronary arteries of the excised heart were filled with fluorescent cast material, frozen, and processed with an imaging cryomicrotome to yield a three-dimensional representation of the coronary arterial network. The effect of limited image resolution was simulated by assessing scaling law parameters from the virtual arterial network at 11 truncation levels ranging from 50 to 1,000 μm segment radius. Mapped microsphere locations were used to derive the corresponding relative Q using a reference truncation level of 200 μm. The scaling law factor α did not change with truncation level, despite considerable intersubject variability. In contrast, the scaling law exponent β decreased from 0.79 to 0.55 with increasing truncation radius and was significantly lower for truncation radii above 500 μm vs. 50 μm (P < 0.05). Hyperemic Q was underestimated for vessel truncation above the reference level. In conclusion, flow-crown volume relations confirmed overall power law behavior; however, this relation depends on the terminal vessel radius that can be visualized. The scaling law exponent β should therefore be adapted to the resolution of the imaging modality. PMID:26825519
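Given crown volumes and hyperemic flows, α and β of Q = αV^β are typically estimated by linear regression in log-log space; a sketch on synthetic data (the parameter values here are made up):

```python
import numpy as np

def fit_scaling_law(volume, flow):
    """Least-squares fit of Q = alpha * V**beta in log-log space.
    log Q = log(alpha) + beta * log V, so a degree-1 polyfit recovers both."""
    beta, log_alpha = np.polyfit(np.log(volume), np.log(flow), 1)
    return np.exp(log_alpha), beta

# synthetic crown volumes and flows following Q = 0.8 * V**0.75 exactly
V = np.linspace(10.0, 1000.0, 50)
Q = 0.8 * V ** 0.75
alpha, beta = fit_scaling_law(V, Q)
print(round(alpha, 3), round(beta, 3))  # recovers 0.8 and 0.75
```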

  19. Topology-corrected segmentation and local intensity estimates for improved partial volume classification of brain cortex in MRI.

    PubMed

    Rueda, Andrea; Acosta, Oscar; Couprie, Michel; Bourgeat, Pierrick; Fripp, Jurgen; Dowson, Nicholas; Romero, Eduardo; Salvado, Olivier

    2010-05-15

    In magnetic resonance imaging (MRI), accuracy and precision with which brain structures may be quantified are frequently affected by the partial volume (PV) effect. PV is due to the limited spatial resolution of MRI compared to the size of anatomical structures. Accurate classification of mixed voxels and correct estimation of the proportion of each pure tissue (fractional content) may help to increase the precision of cortical thickness estimation in regions where this measure is particularly difficult, such as deep sulci. The contribution of this work is twofold: on the one hand, we propose a new method to label voxels and compute tissue fractional content, integrating a mechanism for detecting sulci with topology preserving operators. On the other hand, we improve the computation of the fractional content of mixed voxels using local estimation of pure tissue intensity means. Accuracy and precision were assessed using simulated and real MR data and comparison with other existing approaches demonstrated the benefits of our method. Significant improvements in gray matter (GM) classification and cortical thickness estimation were brought by the topology correction. The fractional content root mean squared error diminished by 6.3% (p<0.01) on simulated data. The reproducibility error decreased by 8.8% (p<0.001) and the Jaccard similarity measure increased by 3.5% on real data. Furthermore, compared with manually guided expert segmentations, the similarity measure was improved by 12.0% (p<0.001). Thickness estimation with the proposed method showed a higher reproducibility compared with the measure performed after partial volume classification using other methods.
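Under a linear two-tissue mixing model, the fractional content of a mixed voxel follows from the local pure-tissue intensity means; a minimal sketch with made-up intensity means, omitting the paper's topology correction and local mean estimation:

```python
import numpy as np

def fractional_content(intensity, mu_a, mu_b):
    """Fraction of tissue A in a mixed voxel under a linear two-tissue model:
    I = f*mu_a + (1-f)*mu_b  =>  f = (I - mu_b) / (mu_a - mu_b), clipped to [0, 1]."""
    f = (np.asarray(intensity, dtype=float) - mu_b) / (mu_a - mu_b)
    return np.clip(f, 0.0, 1.0)

# hypothetical GM mean 100 and CSF mean 40: a voxel at 70 is half GM, half CSF
print(fractional_content([100, 70, 40, 20], mu_a=100.0, mu_b=40.0))  # GM fractions 1, 0.5, 0, 0
```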

  20. SU-E-J-123: Assessing Segmentation Accuracy of Internal Volumes and Sub-Volumes in 4D PET/CT of Lung Tumors Using a Novel 3D Printed Phantom

    SciTech Connect

    Soultan, D; Murphy, J; James, C; Hoh, C; Moiseenko, V; Cervino, L; Gill, B

    2015-06-15

    Purpose: To assess the accuracy of internal target volume (ITV) segmentation of lung tumors for treatment planning of simultaneous integrated boost (SIB) radiotherapy as seen in 4D PET/CT images, using a novel 3D-printed phantom. Methods: The insert mimics high PET tracer uptake in the core and 50% uptake at the periphery, achieved by a porous design at the periphery. A lung phantom with the insert was placed on a programmable moving platform. Seven breathing waveforms of ideal and patient-specific respiratory motion patterns were fed to the platform, and 4D PET/CT scans were acquired for each. CT images were binned into 10 phases and PET images into 5 phases following the clinical protocol. Two segmentation scenarios were investigated: a gate 30–70 window, and no gating. The radiation oncologist contoured the outer ITV of the porous insert on CT images, while the internal void volume with 100% uptake was contoured on PET images because it is indistinguishable from the outer volume on CT images. Segmented ITVs were compared to the expected volumes based on known target size and motion. Results: 3 ideal breathing patterns, 2 regular-breathing patient waveforms, and 2 irregular-breathing patient waveforms were used for this study. 18F-FDG was used as the PET tracer. The segmented ITVs from CT closely matched the expected volumes for both no gating and the gate 30–70 window, with disagreement of the contoured ITV with respect to the expected volume not exceeding 13%. PET contours overestimated volumes in all cases, by more than 40% in some. Conclusion: 4D PET images of a novel 3D-printed phantom designed to mimic different uptake values were obtained. 4D PET contours overestimated ITV volumes in all cases, while 4D CT contours matched expected ITV volume values. Investigation of the cause and effects of the discrepancies is ongoing.
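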

  1. Development of an automated 3D segmentation program for volume quantification of body fat distribution using CT.

    PubMed

    Ohshima, Shunsuke; Yamamoto, Shuji; Yamaji, Taiki; Suzuki, Masahiro; Mutoh, Michihiro; Iwasaki, Motoki; Sasazuki, Shizuka; Kotera, Ken; Tsugane, Shoichiro; Muramatsu, Yukio; Moriyama, Noriyuki

    2008-09-20

    The objective of this study was to develop a computing tool for fully automatic segmentation of body fat distributions on volumetric CT images. We developed an algorithm to automatically identify the body perimeter and the inner contour that separates visceral fat from subcutaneous fat. Diaphragmatic surfaces can be extracted by model-based segmentation that matches the bottom surface of the lung in CT images, determining the upper limit of the abdomen. Functions for quantitative evaluation of abdominal obesity and obesity-related metabolic syndrome were implemented on a prototype three-dimensional (3D) image processing workstation. The volumetric ratios of visceral fat to total fat and of visceral fat to subcutaneous fat can be calculated for each subject. Additionally, color intensity mapping of the subcutaneous areas and the visceral fat layer, combined with the 3D surface display, makes the risk of abdominal obesity readily apparent. Preliminary results have been useful in medical checkups and have improved the efficiency of assessing obesity throughout the whole range of the abdomen with 3D visualization and analysis.
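Once the visceral and subcutaneous compartments are labeled, the reported volumetric ratios reduce to voxel counting on the label volume; a minimal sketch with hypothetical label codes:

```python
import numpy as np

def fat_volume_ratios(labels, voxel_volume_ml, visceral=1, subcutaneous=2):
    """Visceral/total-fat and visceral/subcutaneous volume ratios from a label volume.
    Label codes are assumptions for this sketch, not the paper's convention."""
    v = np.count_nonzero(labels == visceral) * voxel_volume_ml
    s = np.count_nonzero(labels == subcutaneous) * voxel_volume_ml
    return v / (v + s), v / s

# toy 3x3 slice: 3 visceral voxels, 5 subcutaneous voxels, 1 background
labels = np.array([[1, 1, 2],
                   [2, 2, 0],
                   [1, 2, 2]])
print(fat_volume_ratios(labels, voxel_volume_ml=1.0))  # (0.375, 0.6)
```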

  2. A new partial volume segmentation approach to extract bladder wall for computer-aided detection in virtual cystoscopy

    NASA Astrophysics Data System (ADS)

    Li, Lihong; Wang, Zigang; Li, Xiang; Wei, Xinzhou; Adler, Howard L.; Huang, Wei; Rizvi, Syed A.; Meng, Hong; Harrington, Donald P.; Liang, Zhengrong

    2004-04-01

    We propose a new partial volume (PV) segmentation scheme to extract bladder wall for computer aided detection (CAD) of bladder lesions using multispectral MR images. Compared with CT images, MR images provide not only a better tissue contrast between bladder wall and bladder lumen, but also the multispectral information. As multispectral images are spatially registered over three-dimensional space, information extracted from them is more valuable than that extracted from each image individually. Furthermore, the intrinsic T1 and T2 contrast of the urine against the bladder wall eliminates the invasive air insufflation procedure. Because the earliest stages of bladder lesion growth tend to develop gradually and migrate slowly from the mucosa into the bladder wall, our proposed PV algorithm quantifies images as percentages of tissues inside each voxel. It preserves both morphology and texture information and provides tissue growth tendency in addition to the anatomical structure. Our CAD system utilizes a multi-scan protocol on dual (full and empty of urine) states of the bladder to extract both geometrical and texture information. Moreover, multi-scan of transverse and coronal MR images eliminates motion artifacts. Experimental results indicate that the presented scheme is feasible towards mass screening and lesion detection for virtual cystoscopy (VC).

  3. Segmentation and Classification of Remotely Sensed Images: Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Syed, Abdul Haleem

    Land-use-and-land-cover (LULC) mapping is crucial in precision agriculture, environmental monitoring, disaster response, and military applications. The demand for improved and more accurate LULC maps has led to the emergence of a key methodology known as Geographic Object-Based Image Analysis (GEOBIA). The core idea of the GEOBIA for an object-based classification system (OBC) is to change the unit of analysis from single-pixels to groups-of-pixels called `objects' through segmentation. While this new paradigm solved problems and improved global accuracy, it also raised new challenges such as the loss of accuracy in categories that are less abundant, but potentially important. Although this trade-off may be acceptable in some domains, the consequences of such an accuracy loss could be potentially fatal in others (for instance, landmine detection). This thesis proposes a method to improve OBC performance by eliminating such accuracy losses. Specifically, we examine the two key players of an OBC system: Hierarchical Segmentation and Supervised Classification. Further, we propose a model to understand the source of accuracy errors in minority categories and provide a method called Scale Fusion to eliminate those errors. This proposed fusion method involves two stages. First, the characteristic scale for each category is estimated through a combination of segmentation and supervised classification. Next, these estimated scales (segmentation maps) are fused into one combined-object-map. Classification performance is evaluated by comparing results of the multi-cut-and-fuse approach (proposed) to the traditional single-cut (SC) scale selection strategy. Testing on four different data sets revealed that our proposed algorithm improves accuracy on minority classes while performing just as well on abundant categories. Another active obstacle, presented by today's remotely sensed images, is the volume of information produced by our modern sensors with high spatial and

  4. Who avoids going to the doctor and why? Audience segmentation analysis for application of message development.

    PubMed

    Kannan, Viji Diane; Veazie, Peter J

    2015-01-01

    This exploratory study examines the prevalent and detrimental health care phenomenon of patient delay in order to inform formative research leading to the design of communication strategies. Delayed medical care diminishes optimal treatment choices, negatively impacts prognosis, and increases medical costs. Various communication strategies have been employed to combat patient delay, with limited success. This study fills a gap in research informing those interventions by focusing on the portion of patient delay occurring after symptoms have been assessed as a sign of illness and the need for medical care has been determined. We used CHAID segmentation analysis to produce homogeneous segments from the sample according to the propensity to avoid medical care. CHAID is a criterion-based predictive cluster analysis technique. CHAID examines a variety of characteristics to find the one most strongly associated with avoiding doctor visits through a chi-squared test and assessment of statistical significance. The characteristics identified then define the segments. Fourteen segments were produced. Age was the first delineating characteristic, with younger age groups comprising a greater proportion of avoiders. Other segments containing a comparatively larger percent of avoiders were characterized by lower income, lower education, being uninsured, and being male. Each segment was assessed for psychographic properties associated with avoiding care, reasons for avoiding care, and trust in health information sources. While the segments display distinct profiles, having had positive provider experiences, having high health self-efficacy, and having an internal rather than external or chance locus of control were associated with low avoidance among several segments. Several segments were either more or less likely to cite time or money as the reason for avoiding care. And several older-aged segments were less likely than the remaining sample to trust the government as a source

  5. Imalytics Preclinical: Interactive Analysis of Biomedical Volume Data

    PubMed Central

    Gremse, Felix; Stärk, Marius; Ehling, Josef; Menzel, Jan Robert; Lammers, Twan; Kiessling, Fabian

    2016-01-01

    A software tool is presented for interactive segmentation of volumetric medical data sets. To allow interactive processing of large data sets, segmentation operations and rendering are GPU-accelerated. Special adjustments are provided to overcome GPU-imposed constraints such as limited memory and host-device bandwidth. A general and efficient undo/redo mechanism is implemented using GPU-accelerated compression of the multiclass segmentation state. A broadly applicable set of interactive segmentation operations is provided which can be combined to solve the quantification task of many types of imaging studies. A fully GPU-accelerated ray casting method for multiclass segmentation rendering is implemented which is well-balanced with respect to delay, frame rate, worst-case memory consumption, scalability, and image quality. Performance of segmentation operations and rendering is measured using high-resolution example data sets, showing that GPU-acceleration greatly improves the performance. Compared to a reference marching cubes implementation, the rendering was found to be superior with respect to rendering delay and worst-case memory consumption while providing sufficiently high frame rates for interactive visualization and comparable image quality. The fast interactive segmentation operations and the accurate rendering make our tool particularly suitable for efficient analysis of multimodal image data sets which arise in large amounts in preclinical imaging studies. PMID:26909109
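The undo/redo mechanism can be sketched on the CPU as a stack of compressed snapshots of the multiclass label volume, with zlib standing in for the GPU-accelerated compression described above:

```python
import zlib
import numpy as np

class SegmentationHistory:
    """Undo/redo stack storing zlib-compressed snapshots of a label volume
    (a CPU sketch of the GPU-accelerated scheme, not the tool's implementation)."""
    def __init__(self, labels):
        self.shape, self.dtype = labels.shape, labels.dtype
        self.undo_stack = [zlib.compress(labels.tobytes())]
        self.redo_stack = []

    def push(self, labels):
        """Record a new state; any redo history is invalidated."""
        self.undo_stack.append(zlib.compress(labels.tobytes()))
        self.redo_stack.clear()

    def _decode(self, blob):
        return np.frombuffer(zlib.decompress(blob), dtype=self.dtype).reshape(self.shape)

    def undo(self):
        if len(self.undo_stack) > 1:
            self.redo_stack.append(self.undo_stack.pop())
        return self._decode(self.undo_stack[-1])

    def redo(self):
        if self.redo_stack:
            self.undo_stack.append(self.redo_stack.pop())
        return self._decode(self.undo_stack[-1])

labels = np.zeros((4, 4), dtype=np.uint8)
hist = SegmentationHistory(labels)
labels[1:3, 1:3] = 2          # paint a 2x2 region with class 2
hist.push(labels)
restored = hist.undo()        # back to the all-zero state
print(int(restored.sum()))    # 0
```

Compressing snapshots keeps the full history cheap because segmentation states are highly redundant, which is the same rationale the paper gives for compressing on the GPU.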

  6. Control volume based hydrocephalus research; analysis of human data

    NASA Astrophysics Data System (ADS)

    Cohen, Benjamin; Wei, Timothy; Voorhees, Abram; Madsen, Joseph; Anor, Tomer

    2010-11-01

    Hydrocephalus is a neuropathophysiological disorder primarily diagnosed by increased cerebrospinal fluid volume and pressure within the brain. To date, utilization of clinical measurements has been limited to understanding the relative amplitude and timing of flow, volume, and pressure waveforms; these are qualitative approaches without a clear framework for meaningful quantitative comparison. Pressure-volume models and electric circuit analogs enforce volume conservation principles in terms of pressure. Control volume analysis, through the integral mass and momentum conservation equations, ensures that pressure and volume are accounted for using first-principles fluid physics. This approach can directly incorporate the diverse measurements obtained by clinicians into a simple, direct, and robust mechanics-based framework. Clinical data obtained for analysis are discussed along with the data processing techniques used to extract terms in the conservation equations. Control volume analysis provides a non-invasive, physics-based approach to extracting pressure information from magnetic resonance velocity data that cannot be measured directly by pressure instrumentation.
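The integral mass conservation statement for a control volume, dV/dt = Q_in - Q_out, can be integrated directly from measured flow waveforms; a sketch with a made-up pulsatile inflow (all units arbitrary):

```python
import numpy as np

def csf_volume_change(q_in, q_out, dt):
    """Integral mass (volume) conservation for a control volume:
    dV/dt = Q_in - Q_out, integrated with the trapezoidal rule.
    Returns the volume change relative to the start at each time sample."""
    net = np.asarray(q_in) - np.asarray(q_out)
    return np.concatenate([[0.0], np.cumsum(0.5 * (net[1:] + net[:-1]) * dt)])

t = np.linspace(0.0, 1.0, 101)               # one cardiac cycle
q_in = 1.0 + 0.5 * np.sin(2 * np.pi * t)     # hypothetical pulsatile inflow
q_out = np.ones_like(t)                      # steady outflow
dV = csf_volume_change(q_in, q_out, dt=t[1] - t[0])
# net volume change over the full cycle is ~0: inflow and outflow balance
```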

  7. Theoretical analysis and experimental verification on valve-less piezoelectric pump with hemisphere-segment bluff-body

    NASA Astrophysics Data System (ADS)

    Ji, Jing; Zhang, Jianhui; Xia, Qixiao; Wang, Shouyin; Huang, Jun; Zhao, Chunsheng

    2014-05-01

    Existing research on no-moving-part valves in valve-less piezoelectric pumps has mainly concentrated on pipeline valves and chamber-bottom valves, which leads to a complex structure and manufacturing process for the pump channel and chamber bottom. Furthermore, valves fixed in position with respect to the inlet and outlet also worsen the adjustability and controllability of flow rate. To overcome these shortcomings, this paper puts forward a novel implantable structure of valve-less piezoelectric pump with hemisphere-segments in the pump chamber. Based on the theory of flow around a bluff body, the flow resistance on the spherical and round surfaces of a hemisphere-segment differs when fluid flows through, forming a macroscopic flow-resistance difference. A novel valve-less piezoelectric pump with hemisphere-segment bluff-body (HSBB) is presented and designed; the HSBB serves as the no-moving-part valve. By the method of volume and momentum comparison, the stress on the bluff body in the pump chamber is analyzed, the essential reason for unidirectional fluid pumping is expounded, and the flow rate formula is obtained. To verify the theory, a prototype was produced and used for experimental research on the relationship between flow rate, pressure difference, voltage, and frequency, which confirms the theory. This prototype has six hemisphere-segments in the chamber filled with water, and the effective diameter of the piezoelectric bimorph is 30 mm. The experimental results show that the flow rate can reach 0.50 mL/s at a frequency of 6 Hz and a voltage of 110 V, and the pressure difference can reach 26.2 mm H2O at a frequency of 6 Hz and a voltage of 160 V. This research proposes a valve-less piezoelectric pump with hemisphere-segment bluff-body, and its validity and feasibility are verified through theoretical analysis and experiment.

  8. Label-fusion-segmentation and deformation-based shape analysis of deep gray matter in multiple sclerosis: the impact of thalamic subnuclei on disability.

    PubMed

    Magon, Stefano; Chakravarty, M Mallar; Amann, Michael; Weier, Katrin; Naegelin, Yvonne; Andelova, Michaela; Radue, Ernst-Wilhelm; Stippich, Christoph; Lerch, Jason P; Kappos, Ludwig; Sprenger, Till

    2014-08-01

    Deep gray matter (DGM) atrophy has been reported in patients with multiple sclerosis (MS) already at early stages of the disease and progresses throughout the disease course. We studied DGM volume and shape and their relation to disability in a large cohort of clinically well-described MS patients using new subcortical segmentation methods and shape analysis. Structural 3D magnetic resonance images were acquired at 1.5 T in 118 patients with relapsing remitting MS. Subcortical structures were segmented using a multiatlas technique that relies on the generation of an automatically generated template library. To localize focal morphological changes, shape analysis was performed by estimating the vertex-wise displacements each subject must undergo to deform to a template. Multiple linear regression analysis showed that the volume of specific thalamic nuclei (the ventral nuclear complex) together with normalized gray matter volume explains a relatively large proportion of expanded disability status scale (EDSS) variability. The deformation-based displacement analysis confirmed the relation between thalamic shape and EDSS scores. Furthermore, white matter lesion volume was found to relate to the shape of all subcortical structures. This novel method for the analysis of subcortical volume and shape allows depicting specific contributions of DGM abnormalities to neurological deficits in MS patients. The results stress the importance of ventral thalamic nuclei in this respect.

  9. Health lifestyles: audience segmentation analysis for public health interventions.

    PubMed

    Slater, M D; Flora, J A

    1991-01-01

    This article is concerned with the application of market segmentation techniques in order to improve the planning and implementation of public health education programs. Seven distinctive patterns of health attitudes, social influences, and behaviors are identified using cluster analytic techniques in a sample drawn from four central California cities, and are subjected to construct and predictive validation: The lifestyle clusters predict behaviors including seatbelt use, vitamin C use, and attention to health information. The clusters also predict self-reported improvements in health behavior as measured in a two-year follow-up survey, e.g., eating less salt and losing weight, and self-reported new moderate and new vigorous exercise. Implications of these lifestyle clusters for public health education and intervention planning, and the larger potential of lifestyle clustering techniques in public health efforts, are discussed.

  10. Vessel segmentation analysis of ischemic stroke images acquired with photoacoustic microscopy

    NASA Astrophysics Data System (ADS)

    Soetikno, Brian; Hu, Song; Gonzales, Ernie; Zhong, Qiaonan; Maslov, Konstantin; Lee, Jin-Moo; Wang, Lihong V.

    2012-02-01

    We have applied optical-resolution photoacoustic microscopy (OR-PAM) for longitudinal monitoring of cerebral metabolism through the intact skull of mice before, during, and up to 72 hours after a 1-hour transient middle cerebral artery occlusion (tMCAO). The high spatial resolution of OR-PAM enabled us to develop vessel segmentation techniques for segment-wise analysis of cerebrovascular responses.

  11. Segmental Musculoskeletal Examinations using Dual-Energy X-Ray Absorptiometry (DXA): Positioning and Analysis Considerations.

    PubMed

    Hart, Nicolas H; Nimphius, Sophia; Spiteri, Tania; Cochrane, Jodie L; Newton, Robert U

    2015-09-01

    Musculoskeletal examinations provide informative and valuable quantitative insight into muscle and bone health. DXA is one mainstream tool used to accurately and reliably determine body composition components and bone mass characteristics in-vivo. Presently, whole body scan models separate the body into axial and appendicular regions; however, there is a need for localised appendicular segmentation models to further examine regions of interest within the upper and lower extremities. Similarly, inconsistencies pertaining to patient positioning exist in the literature which influence measurement precision and analysis outcomes, highlighting a need for a standardised procedure. This paper provides standardised and reproducible: 1) positioning and analysis procedures using DXA and 2) reliable segmental examinations through descriptive appendicular boundaries. Whole-body scans were performed on forty-six (n = 46) football athletes (age: 22.9 ± 4.3 yrs; height: 1.85 ± 0.07 m; weight: 87.4 ± 10.3 kg; body fat: 11.4 ± 4.5 %) using DXA. All segments across all scans were analysed three times by the main investigator on three separate days, and by three independent investigators a week following the original analysis. To examine intra-rater and inter-rater reliability between days and researchers, coefficients of variation (CV) and intraclass correlation coefficients (ICC) were determined. Positioning and segmental analysis procedures presented in this study produced very high, nearly perfect intra-tester (CV ≤ 2.0%; ICC ≥ 0.988) and inter-tester (CV ≤ 2.4%; ICC ≥ 0.980) reliability, demonstrating excellent reproducibility within and between practitioners. Standardised examinations of axial and appendicular segments are necessary. Future studies aiming to quantify and report segmental analyses of the upper- and lower-body musculoskeletal properties using whole-body DXA scans are encouraged to use the patient positioning and image analysis procedures outlined in this study.
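
    The CV and ICC reliability statistics reported above can be computed from a subjects-by-repeats table of measurements. This is an illustrative sketch of the within-subject CV and the two-way consistency ICC(3,1), not the exact statistics package the authors used:

```python
import numpy as np

def precision(measures):
    # measures: (subjects x repeats) array, e.g. three analyses of the
    # same DXA segment on separate days.
    m = np.asarray(measures, dtype=float)
    n, k = m.shape
    # Within-subject coefficient of variation (%), averaged over subjects
    cv = float(np.mean(m.std(axis=1, ddof=1) / m.mean(axis=1)) * 100)
    # Two-way mean squares for ICC(3,1) = (MSR - MSE) / (MSR + (k-1) MSE)
    grand = m.mean()
    ms_rows = k * ((m.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ss_cols = n * ((m.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((m - grand) ** 2).sum() - ms_rows * (n - 1) - ss_cols
    ms_err = ss_err / ((n - 1) * (k - 1))
    icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
    return cv, float(icc)
```

    Highly repeatable measurements (small within-subject spread relative to between-subject spread) yield a small CV and an ICC close to 1, as in the reported CV ≤ 2.0%, ICC ≥ 0.988 results.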

  12. Segmental Musculoskeletal Examinations using Dual-Energy X-Ray Absorptiometry (DXA): Positioning and Analysis Considerations

    PubMed Central

    Hart, Nicolas H.; Nimphius, Sophia; Spiteri, Tania; Cochrane, Jodie L.; Newton, Robert U.

    2015-01-01

    Musculoskeletal examinations provide informative and valuable quantitative insight into muscle and bone health. DXA is one mainstream tool used to accurately and reliably determine body composition components and bone mass characteristics in-vivo. Presently, whole body scan models separate the body into axial and appendicular regions; however, there is a need for localised appendicular segmentation models to further examine regions of interest within the upper and lower extremities. Similarly, inconsistencies pertaining to patient positioning exist in the literature which influence measurement precision and analysis outcomes, highlighting a need for a standardised procedure. This paper provides standardised and reproducible: 1) positioning and analysis procedures using DXA and 2) reliable segmental examinations through descriptive appendicular boundaries. Whole-body scans were performed on forty-six (n = 46) football athletes (age: 22.9 ± 4.3 yrs; height: 1.85 ± 0.07 m; weight: 87.4 ± 10.3 kg; body fat: 11.4 ± 4.5 %) using DXA. All segments across all scans were analysed three times by the main investigator on three separate days, and by three independent investigators a week following the original analysis. To examine intra-rater and inter-rater reliability between days and researchers, coefficients of variation (CV) and intraclass correlation coefficients (ICC) were determined. Positioning and segmental analysis procedures presented in this study produced very high, nearly perfect intra-tester (CV ≤ 2.0%; ICC ≥ 0.988) and inter-tester (CV ≤ 2.4%; ICC ≥ 0.980) reliability, demonstrating excellent reproducibility within and between practitioners. Standardised examinations of axial and appendicular segments are necessary. Future studies aiming to quantify and report segmental analyses of the upper- and lower-body musculoskeletal properties using whole-body DXA scans are encouraged to use the patient positioning and image analysis procedures outlined in this study.

  13. Segmentation and Image Analysis of Abnormal Lungs at CT: Current Approaches, Challenges, and Future Trends.

    PubMed

    Mansoor, Awais; Bagci, Ulas; Foster, Brent; Xu, Ziyue; Papadakis, Georgios Z; Folio, Les R; Udupa, Jayaram K; Mollura, Daniel J

    2015-01-01

    The computer-based process of identifying the boundaries of lung from surrounding thoracic tissue on computed tomographic (CT) images, which is called segmentation, is a vital first step in radiologic pulmonary image analysis. Many algorithms and software platforms provide image segmentation routines for quantification of lung abnormalities; however, nearly all of the current image segmentation approaches apply well only if the lungs exhibit minimal or no pathologic conditions. When moderate to high amounts of disease or abnormalities with a challenging shape or appearance exist in the lungs, computer-aided detection systems are highly likely to fail to depict those abnormal regions because of inaccurate segmentation methods. In particular, abnormalities such as pleural effusions, consolidations, and masses often cause inaccurate lung segmentation, which greatly limits the use of image processing methods in clinical and research contexts. In this review, a critical summary of the current methods for lung segmentation on CT images is provided, with special emphasis on the accuracy and performance of the methods in cases with abnormalities and cases with exemplary pathologic findings. The currently available segmentation methods can be divided into five major classes: (a) thresholding-based, (b) region-based, (c) shape-based, (d) neighboring anatomy-guided, and (e) machine learning-based methods. The feasibility of each class and its shortcomings are explained and illustrated with the most common lung abnormalities observed on CT images. Finally, practical applications and evolving technologies that combine the presented approaches are detailed for the practicing radiologist.
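
    Class (a), thresholding, can be sketched in a few lines. The -400 HU cutoff below is a commonly used illustrative value, not one prescribed by the review; the sketch also shows exactly why this class fails on dense pathology:

```python
import numpy as np

def lung_candidates(ct_hu, cutoff=-400):
    # Thresholding-based lung segmentation: lung parenchyma is mostly air
    # (around -700 HU), so voxels below a Hounsfield-unit cutoff become lung
    # candidates. Dense abnormalities such as consolidations approach
    # soft-tissue HU values and are therefore missed by this rule.
    return np.asarray(ct_hu) < cutoff
```

    A consolidation at, say, -100 HU falls above the cutoff and is excluded from the mask, illustrating the failure mode the review emphasizes.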

  14. 3D segmentation of lung CT data with graph-cuts: analysis of parameter sensitivities

    NASA Astrophysics Data System (ADS)

    Cha, Jung won; Dunlap, Neal; Wang, Brian; Amini, Amir

    2016-03-01

    Lung boundary image segmentation is important for many tasks, including, for example, development of radiation treatment plans for subjects with thoracic malignancies. In this paper, we describe a method and parameter settings for accurate 3D lung boundary segmentation based on graph-cuts from X-ray CT data. Even though several researchers have previously used graph-cuts for image segmentation, to date, no systematic studies have been performed regarding the range of parameters that gives accurate results. The energy function in the graph-cuts algorithm requires three suitable parameter settings: K, a large constant for assigning seed points; c, the similarity coefficient for n-links; and λ, the terminal coefficient for t-links. We analyzed the parameter sensitivity with four lung data sets from subjects with lung cancer using error metrics. Large values of K created artifacts on segmented images, and values of c much larger than λ upset the balance between the boundary term and the data term in the energy function, leading to unacceptable segmentation results. For a range of parameter settings, we performed 3D image segmentation, and in each case compared the results with the expert-delineated lung boundaries. We used simple 6-neighborhood systems for n-links in 3D. The 3D image segmentation took 10 minutes for a 512x512x118 ~ 512x512x190 lung CT image volume. Our results indicate that the graph-cuts algorithm was more sensitive to the K and λ parameter settings than to the c parameter and, furthermore, that amongst the range of parameters tested, K=5 and λ=0.5 yielded good results.
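
    A toy 1D version of the graph-cuts formulation makes the roles of K, c and λ concrete. The construction below follows the standard Boykov-Jolly style energy (t-links scaled by λ, seed points pinned with the large constant K, n-links scaled by c, with a Gaussian intensity-similarity model); it is a sketch under those assumptions, not the authors' implementation:

```python
import math
from collections import deque

def min_cut_segment(intensities, fg_seed, bg_seed, K=5.0, c=1.0, lam=0.5, sigma=0.1):
    n = len(intensities)
    S, T = n, n + 1                          # source and sink node ids
    cap = [[0.0] * (n + 2) for _ in range(n + 2)]
    i_fg, i_bg = intensities[fg_seed], intensities[bg_seed]
    sim = lambda a, b: math.exp(-(a - b) ** 2 / (2 * sigma ** 2))
    for p, ip in enumerate(intensities):
        # t-links: seeds get the large constant K, others lam-scaled likelihood
        cap[S][p] = K if p == fg_seed else lam * sim(ip, i_fg)
        cap[p][T] = K if p == bg_seed else lam * sim(ip, i_bg)
    for p in range(n - 1):
        # n-links: c-scaled similarity between 1D neighbours
        w = c * sim(intensities[p], intensities[p + 1])
        cap[p][p + 1] = cap[p + 1][p] = w
    # Edmonds-Karp max-flow; the residual graph then yields the minimum cut
    while True:
        parent, q = {S: None}, deque([S])
        while q and T not in parent:
            u = q.popleft()
            for v in range(n + 2):
                if v not in parent and cap[u][v] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if T not in parent:
            break
        path, v = [], T
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:                    # push flow, update residuals
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
    # nodes still reachable from the source are labelled foreground
    reach, q = {S}, deque([S])
    while q:
        u = q.popleft()
        for v in range(n + 2):
            if v not in reach and cap[u][v] > 1e-12:
                reach.add(v)
                q.append(v)
    return [1 if p in reach else 0 for p in range(n)]
```

    Making K much larger than the other weights is what pins the seeds; making c much larger than λ lets the boundary term dominate the data term, the imbalance the paper identifies.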

  15. Design and validation of Segment - freely available software for cardiovascular image analysis

    PubMed Central

    2010-01-01

    Background Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Results Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http://segment

  16. The Prognostic Impact of In-Hospital Change in Mean Platelet Volume in Patients With Non-ST-Segment Elevation Myocardial Infarction.

    PubMed

    Kırış, Tuncay; Yazici, Selcuk; Günaydin, Zeki Yüksel; Akyüz, Şükrü; Güzelburç, Özge; Atmaca, Hüsnü; Ertürk, Mehmet; Nazli, Cem; Dogan, Abdullah

    2016-08-01

    It is unclear whether changes in mean platelet volume (MPV) are associated with total mortality in acute coronary syndromes. We investigated whether the change in MPV predicts total mortality in patients with non-ST-segment elevation myocardial infarction (NSTEMI). We retrospectively analyzed 419 consecutive patients (19 patients were excluded). The remaining patients were categorized as survivors (n = 351) or nonsurvivors (n = 49). Measurements of MPV were performed at admission and after 24 hours. The difference between the 2 measurements was considered the MPV change (ΔMPV). The end point of the study was total mortality at 1-year follow-up. During the follow-up, there were 49 deaths (12.2%). Admission MPV was comparable in the 2 groups. However, both MPV (9.6 ± 1.4 fL vs 9.2 ± 1.0 fL, P = .044) and ΔMPV (0.40 [0.10-0.70] fL vs 0.70 [0.40-1.20] fL, P < .001) within the first 24 hours were higher in nonsurvivors than in survivors. In multivariate analysis, ΔMPV was an independent predictor of total mortality (odds ratio: 1.84, 95% confidence interval: 1.28-2.65, P = .001). An early increase in MPV after admission was independently associated with total mortality in patients with NSTEMI. Such patients may need more effective antiplatelet therapy. PMID:26787684

  17. High-throughput microcoil NMR of compound libraries using zero-dispersion segmented flow analysis.

    PubMed

    Kautz, Roger A; Goetzinger, Wolfgang K; Karger, Barry L

    2005-01-01

    An automated system for loading samples into a microcoil NMR probe has been developed using segmented flow analysis. This approach enhanced 2-fold the throughput of the published direct injection and flow injection methods, improved sample utilization 3-fold, and was applicable to high-field NMR facilities with long transfer lines between the sample handler and NMR magnet. Sample volumes of 2 microL (10-30 mM, approximately 10 microg) were drawn from a 96-well microtiter plate by a sample handler, then pumped to a 0.5-microL microcoil NMR probe as a queue of closely spaced "plugs" separated by an immiscible fluorocarbon fluid. Individual sample plugs were detected by their NMR signal and automatically positioned for stopped-flow data acquisition. The sample in the NMR coil could be changed within 35 s by advancing the queue. The fluorocarbon liquid wetted the wall of the Teflon transfer line, preventing the DMSO samples from contacting the capillary wall and thus reducing sample losses to below 5% after passage through the 3-m transfer line. With a wash plug of solvent between samples, sample-to-sample carryover was <1%. Significantly, the samples did not disperse into the carrier liquid during loading or during acquisitions of several days for trace analysis. For automated high-throughput analysis using a 16-second acquisition time, spectra were recorded at a rate of 1.5 min/sample and total deuterated solvent consumption was <0.5 mL (1 US dollar) per 96-well plate.

  18. A robust and fast line segment detector based on top-down smaller eigenvalue analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Lu, Xiaoqing

    2014-01-01

    In this paper, we propose a robust and fast line segment detector, which achieves accurate results with a controlled number of false detections and requires no parameter tuning. It consists of three steps: first, we propose a novel edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input image; second, we propose a top-down scheme based on smaller eigenvalue analysis to extract line segments within each obtained edge segment; third, we employ Desolneux et al.'s method to reject false detections. Experiments demonstrate that it is very efficient and more robust than two state-of-the-art methods, LSD and EDLines.
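
    The core of smaller eigenvalue analysis is scoring how line-like a chain of edge points is: fit a line by principal component analysis and use the smaller eigenvalue of the point covariance matrix as the deviation from perfect collinearity. A minimal sketch of that criterion (not the authors' exact top-down scheme):

```python
import numpy as np

def line_fit_quality(points):
    # PCA line fit for 2D edge points: the larger eigenvector gives the
    # line direction, the smaller eigenvalue measures scatter off the line
    # (zero for perfectly collinear points).
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T)
    evals, evecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return evals[0], evecs[:, 1]
```

    A top-down splitter can recurse on a chain whenever this smaller eigenvalue exceeds a tolerance, splitting it into straighter sub-chains.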

  19. Influence of volume expansion on NaCl reabsorption in the diluting segments of the nephron: a study using clearance methods.

    PubMed

    Danovitch, G M; Bricker, N S

    1976-09-01

    Whether volume expansion influences NaCl reabsorption by the diluting segment of the nephron remains a matter of controversy. In the present studies this question has been examined in normal unanesthetized dogs undergoing maximal water diuresis. Free water clearance (CH2O/GFR) has been used as the index of NaCl reabsorption in the diluting segment. Three expressions have been employed for "distal delivery" of NaCl: a) V/GFR, designated as the "volume term"; b) (CNa/GFR + CH2O/GFR), the "sodium term"; and c) (CCl/GFR + CH2O/GFR), the "chloride term". The validity of these terms is discussed. Three techniques were used to increase distal delivery: 1) the administration of acetazolamide to dogs in which extracellular fluid (ECF) volume was not expanded (group 1); 2) "moderate" volume expansion (group 2); and 3) "marked" volume expansion (group 3). CH2O/GFR increased progressively with rising values for "distal delivery" regardless of which term was used to calculate the latter. With all three delivery terms, differences in distal NaCl reabsorption emerged between the two volume-expanded groups, though only with the "chloride" term did substantial differences also emerge between the nonexpanded group 1 dogs and both volume-expanded groups. In group 1, values for CH2O/GFR increased in a nearly linear fashion up to distal delivery values equal to 24% of the volume of glomerular filtrate. However, at high rates of distal delivery the rate of rise of CH2O/GFR was less in group 2 than in group 1, and the depression of values was even greater in group 3. Within the limits of the techniques used, the data suggest that volume expansion inhibits fractional NaCl reabsorption in the diluting segment of the nephron in a dose-related fashion. The "chloride" term was found to be superior to the "volume" and "sodium" terms in revealing these changes.
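
    The three delivery terms follow from standard clearance arithmetic (C_x = U_x·V/P_x and CH2O = V − Cosm). The sketch below makes that explicit; the urine and plasma values in the test are invented round numbers for illustration, not data from the study:

```python
def clearance(u, p, v):
    # Classic renal clearance: C_x = U_x * V / P_x
    return u * v / p

def delivery_terms(v, gfr, u_na, p_na, u_cl, p_cl, u_osm, p_osm):
    # Free water clearance: CH2O = V - Cosm
    ch2o = v - clearance(u_osm, p_osm, v)
    volume_term = v / gfr                                     # a) "volume term"
    sodium_term = (clearance(u_na, p_na, v) + ch2o) / gfr     # b) "sodium term"
    chloride_term = (clearance(u_cl, p_cl, v) + ch2o) / gfr   # c) "chloride term"
    return ch2o / gfr, volume_term, sodium_term, chloride_term
```

    During water diuresis the solute clearances are small relative to urine flow, so the sodium and chloride terms sit below the volume term; the three expressions differ only in which solute clearance is added back to CH2O.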

  20. Volume accumulator design analysis computer codes

    NASA Technical Reports Server (NTRS)

    Whitaker, W. D.; Shimazaki, T. T.

    1973-01-01

    The computer codes, VANEP and VANES, were written and used to aid in the design and performance calculation of the volume accumulator units (VAU) for the 5-kWe reactor thermoelectric system. VANEP computes the VAU design which meets the primary coolant loop VAU volume and pressure performance requirements. VANES computes the performance of the VAU design, determined from the VANEP code, at the conditions of the secondary coolant loop. The codes can also compute the performance characteristics of the VAU's under conditions of possible modes of failure which still permit continued system operation.

  1. AxonSeg: Open Source Software for Axon and Myelin Segmentation and Morphometric Analysis

    PubMed Central

    Zaimi, Aldo; Duval, Tanguy; Gasecka, Alicja; Côté, Daniel; Stikov, Nikola; Cohen-Adad, Julien

    2016-01-01

    Segmenting axon and myelin from microscopic images is relevant for studying the peripheral and central nervous system and for validating new MRI techniques that aim at quantifying tissue microstructure. While several software packages have been proposed, their interface is sometimes limited and/or they are designed to work with a specific modality (e.g., scanning electron microscopy (SEM) only). Here we introduce AxonSeg, which allows the user to perform automatic axon and myelin segmentation on histology images and to extract relevant morphometric information, such as axon diameter distribution, axon density and the myelin g-ratio. AxonSeg includes a simple and intuitive MATLAB-based graphical user interface (GUI) and can easily be adapted to a variety of imaging modalities. The main steps of AxonSeg consist of: (i) image pre-processing; (ii) pre-segmentation of axons over a cropped image and discriminant analysis (DA) to select the best parameters based on axon shape and intensity information; (iii) automatic axon and myelin segmentation over the full image; and (iv) atlas-based statistics to extract morphometric information. Segmentation results from standard optical microscopy (OM), SEM and coherent anti-Stokes Raman scattering (CARS) microscopy are presented, along with validation against manual segmentations. Because it is fully automatic after a quick manual intervention on a cropped image, we believe AxonSeg will be useful to researchers interested in high-throughput histology. AxonSeg is open source and freely available at: https://github.com/neuropoly/axonseg.
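
    Of the morphometric outputs listed above, the myelin g-ratio is the ratio of inner (axon) to outer (axon plus myelin) fiber diameter. A minimal sketch computing it from segmented areas via equivalent-circle diameters (AxonSeg's internal implementation may differ):

```python
import math

def g_ratio(axon_area, myelin_area):
    # g-ratio = inner diameter / outer diameter, using equivalent-circle
    # diameters derived from the segmented axon and myelin areas.
    d_axon = 2 * math.sqrt(axon_area / math.pi)
    d_fiber = 2 * math.sqrt((axon_area + myelin_area) / math.pi)
    return d_axon / d_fiber
```

    For a fiber whose myelin doubles the radius (outer area four times the axon area), the g-ratio is 0.5.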

  2. Gene expression analysis reveals that Delta/Notch signalling is not involved in onychophoran segmentation.

    PubMed

    Janssen, Ralf; Budd, Graham E

    2016-03-01

    Delta/Notch (Dl/N) signalling is involved in the gene regulatory network underlying the segmentation process in vertebrates and possibly also in annelids and arthropods, leading to the hypothesis that segmentation may have evolved in the last common ancestor of bilaterian animals. Because of seemingly contradictory results within the well-studied arthropods, however, the role and origin of Dl/N signalling in segmentation generally is still unclear. In this study, we investigate core components of Dl/N signalling by means of gene expression analysis in the onychophoran Euperipatoides kanangrensis, a close relative of the arthropods. We find that neither Delta nor Notch, nor any other investigated components of its signalling pathway, are likely to be involved in segment addition in onychophorans. We instead suggest that Dl/N signalling may be involved in posterior elongation, another conserved function of these genes. We suggest further that the posterior elongation network, rather than classic Dl/N signalling, may be in the control of the highly conserved segment polarity gene network and the lower-level pair-rule gene network in onychophorans. Consequently, we believe that the pair-rule gene network and its interaction with Dl/N signalling may have evolved within the arthropod lineage and that Dl/N signalling has thus likely been recruited independently for segment addition in different phyla. PMID:26935716

  3. Gene expression analysis reveals that Delta/Notch signalling is not involved in onychophoran segmentation.

    PubMed

    Janssen, Ralf; Budd, Graham E

    2016-03-01

    Delta/Notch (Dl/N) signalling is involved in the gene regulatory network underlying the segmentation process in vertebrates and possibly also in annelids and arthropods, leading to the hypothesis that segmentation may have evolved in the last common ancestor of bilaterian animals. Because of seemingly contradictory results within the well-studied arthropods, however, the role and origin of Dl/N signalling in segmentation generally is still unclear. In this study, we investigate core components of Dl/N signalling by means of gene expression analysis in the onychophoran Euperipatoides kanangrensis, a close relative of the arthropods. We find that neither Delta nor Notch, nor any other investigated components of its signalling pathway, are likely to be involved in segment addition in onychophorans. We instead suggest that Dl/N signalling may be involved in posterior elongation, another conserved function of these genes. We suggest further that the posterior elongation network, rather than classic Dl/N signalling, may be in the control of the highly conserved segment polarity gene network and the lower-level pair-rule gene network in onychophorans. Consequently, we believe that the pair-rule gene network and its interaction with Dl/N signalling may have evolved within the arthropod lineage and that Dl/N signalling has thus likely been recruited independently for segment addition in different phyla.

  4. Preliminary analysis of effect of random segment errors on coronagraph performance

    NASA Astrophysics Data System (ADS)

    Stahl, Mark T.; Shaklan, Stuart B.; Stahl, H. Philip

    2015-09-01

    "Are we alone in the Universe?" is probably the most compelling science question of our generation. To answer it requires a large aperture telescope with extreme wavefront stability. To image and characterize Earth-like planets requires the ability to block 10^10 of the host star's light with 10^-11 stability. For an internal coronagraph, this requires correcting wavefront errors and keeping that correction stable to a few picometers rms for the duration of the science observation. This requirement places severe specifications upon the performance of the observatory, telescope and primary mirror. A key task of the AMTD project (initiated in FY12) is to define telescope level specifications traceable to science requirements and flow those specifications to the primary mirror. From a systems perspective, probably the most important question is: What is the telescope wavefront stability specification? Previously, we suggested this specification should be 10 picometers per 10 minutes; considered issues of how this specification relates to architecture, i.e. monolithic or segmented primary mirror; and asked whether it was better to have few or many segments. This paper reviews the 10 picometers per 10 minutes specification; provides analysis related to the application of this specification to segmented apertures; and suggests that a 3- or 4-ring segmented aperture is more sensitive to segment rigid body motion than an aperture with fewer or more segments.

  5. AxonSeg: Open Source Software for Axon and Myelin Segmentation and Morphometric Analysis

    PubMed Central

    Zaimi, Aldo; Duval, Tanguy; Gasecka, Alicja; Côté, Daniel; Stikov, Nikola; Cohen-Adad, Julien

    2016-01-01

    Segmenting axon and myelin from microscopic images is relevant for studying the peripheral and central nervous system and for validating new MRI techniques that aim at quantifying tissue microstructure. While several software packages have been proposed, their interface is sometimes limited and/or they are designed to work with a specific modality (e.g., scanning electron microscopy (SEM) only). Here we introduce AxonSeg, which allows the user to perform automatic axon and myelin segmentation on histology images and to extract relevant morphometric information, such as axon diameter distribution, axon density and the myelin g-ratio. AxonSeg includes a simple and intuitive MATLAB-based graphical user interface (GUI) and can easily be adapted to a variety of imaging modalities. The main steps of AxonSeg consist of: (i) image pre-processing; (ii) pre-segmentation of axons over a cropped image and discriminant analysis (DA) to select the best parameters based on axon shape and intensity information; (iii) automatic axon and myelin segmentation over the full image; and (iv) atlas-based statistics to extract morphometric information. Segmentation results from standard optical microscopy (OM), SEM and coherent anti-Stokes Raman scattering (CARS) microscopy are presented, along with validation against manual segmentations. Because it is fully automatic after a quick manual intervention on a cropped image, we believe AxonSeg will be useful to researchers interested in high-throughput histology. AxonSeg is open source and freely available at: https://github.com/neuropoly/axonseg. PMID:27594833

  6. AxonSeg: Open Source Software for Axon and Myelin Segmentation and Morphometric Analysis.

    PubMed

    Zaimi, Aldo; Duval, Tanguy; Gasecka, Alicja; Côté, Daniel; Stikov, Nikola; Cohen-Adad, Julien

    2016-01-01

    Segmenting axon and myelin from microscopic images is relevant for studying the peripheral and central nervous system and for validating new MRI techniques that aim at quantifying tissue microstructure. While several software packages have been proposed, their interface is sometimes limited and/or they are designed to work with a specific modality (e.g., scanning electron microscopy (SEM) only). Here we introduce AxonSeg, which allows the user to perform automatic axon and myelin segmentation on histology images and to extract relevant morphometric information, such as axon diameter distribution, axon density and the myelin g-ratio. AxonSeg includes a simple and intuitive MATLAB-based graphical user interface (GUI) and can easily be adapted to a variety of imaging modalities. The main steps of AxonSeg consist of: (i) image pre-processing; (ii) pre-segmentation of axons over a cropped image and discriminant analysis (DA) to select the best parameters based on axon shape and intensity information; (iii) automatic axon and myelin segmentation over the full image; and (iv) atlas-based statistics to extract morphometric information. Segmentation results from standard optical microscopy (OM), SEM and coherent anti-Stokes Raman scattering (CARS) microscopy are presented, along with validation against manual segmentations. Because it is fully automatic after a quick manual intervention on a cropped image, we believe AxonSeg will be useful to researchers interested in high-throughput histology. AxonSeg is open source and freely available at: https://github.com/neuropoly/axonseg. PMID:27594833

  7. Mathematical Analysis of Space Radiator Segmenting for Increased Reliability and Reduced Mass

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.

    2001-01-01

    Spacecraft for long duration deep space missions will need to be designed to survive micrometeoroid bombardment of their surfaces, some of which may actually be punctured. To avoid loss of the entire mission, the damage due to such punctures must be limited to small, localized areas. This is especially true for power system radiators, which necessarily feature large surface areas to reject heat at relatively low temperature to the space environment by thermal radiation. It may be intuitively obvious that if a space radiator is composed of a large number of independently operating segments, such as heat pipes, a random micrometeoroid puncture will result only in the loss of the punctured segment, and not the entire radiator. Due to the redundancy achieved by independently operating segments, the wall thickness and consequently the weight of such segments can be drastically reduced. Probability theory is used to estimate the magnitude of such weight reductions as the number of segments is increased. An analysis of relevant parameter values required for minimum mass segmented radiators is also included.
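
    The probabilistic argument can be sketched with a binomial survival model: if each of n independent segments survives the mission with probability p and at least m segments must remain to reject the required heat, overall reliability is a binomial tail sum. This is a hedged illustration of the redundancy effect, not the paper's exact model:

```python
from math import comb

def radiator_reliability(n, m_required, p_survive):
    # P(at least m_required of n independent segments survive), assuming
    # each segment is punctured independently with probability 1 - p_survive.
    return sum(comb(n, k) * p_survive ** k * (1 - p_survive) ** (n - k)
               for k in range(m_required, n + 1))
```

    For example, with p = 0.9 per segment, a 10-segment radiator that only needs 8 surviving segments is more reliable than any single unsegmented radiator with the same per-unit survival probability, which is why segment walls can be made thinner.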

  8. Multi-object segmentation framework using deformable models for medical imaging analysis.

    PubMed

    Namías, Rafael; D'Amato, Juan Pablo; Del Fresno, Mariana; Vénere, Marcelo; Pirró, Nicola; Bellemare, Marc-Emmanuel

    2016-08-01

    Segmenting structures of interest in medical images is an important step in different tasks such as visualization, quantitative analysis, simulation, and image-guided surgery, among several other clinical applications. Numerous segmentation methods have been developed in the past three decades for extraction of anatomical or functional structures on medical imaging. Deformable models, which include the active contour models or snakes, are among the most popular methods for image segmentation, combining several desirable features such as inherent connectivity and smoothness. Even though different approaches have been proposed and significant work has been dedicated to the improvement of such algorithms, there are still challenging research directions, such as the simultaneous extraction of multiple objects and the integration of individual techniques. This paper presents a novel open-source framework called deformable model array (DMA) for the segmentation of multiple and complex structures of interest in different imaging modalities. While most active contour algorithms can extract one region at a time, DMA allows integrating several deformable models to deal with multiple segmentation scenarios. Moreover, it is possible to consider any existing explicit deformable model formulation and even to incorporate new active contour methods, allowing a suitable combination to be selected for different conditions. The framework also introduces a control module that coordinates the cooperative evolution of the snakes and is able to solve interaction issues toward the segmentation goal. Thus, DMA can implement complex object and multi-object segmentations in both 2D and 3D using the contextual information derived from the model interaction. These are important features for several medical image analysis tasks in which different but related objects need to be simultaneously extracted. Experimental results on both computed tomography and magnetic resonance imaging show that the proposed
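
    For reference, the classic active contour ("snake") energy that deformable models of this kind minimize, in the Kass-Witkin-Terzopoulos form; DMA's particular formulations may differ:

```latex
E[\mathbf{v}] = \int_0^1 \Big(
    \underbrace{\alpha\,|\mathbf{v}'(s)|^2 + \beta\,|\mathbf{v}''(s)|^2}_{\text{internal: elasticity and stiffness}}
    \;+\; \underbrace{E_{\mathrm{ext}}\big(\mathbf{v}(s)\big)}_{\text{image forces}}
\Big)\, ds
```

    Here v(s) is the parametrized contour, α and β weight its elasticity and stiffness, and the external term pulls the contour toward image features such as edges; a multi-object framework like DMA evolves several such contours while a control module mediates their interactions.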

  9. Multi-object segmentation framework using deformable models for medical imaging analysis.

    PubMed

    Namías, Rafael; D'Amato, Juan Pablo; Del Fresno, Mariana; Vénere, Marcelo; Pirró, Nicola; Bellemare, Marc-Emmanuel

    2016-08-01

    Segmenting structures of interest in medical images is an important step in different tasks such as visualization, quantitative analysis, simulation, and image-guided surgery, among several other clinical applications. Numerous segmentation methods have been developed in the past three decades for extraction of anatomical or functional structures on medical imaging. Deformable models, which include active contour models or snakes, are among the most popular methods for image segmentation, combining several desirable features such as inherent connectivity and smoothness. Even though different approaches have been proposed and significant work has been dedicated to the improvement of such algorithms, there are still challenging research directions, such as the simultaneous extraction of multiple objects and the integration of individual techniques. This paper presents a novel open-source framework called deformable model array (DMA) for the segmentation of multiple and complex structures of interest in different imaging modalities. While most active contour algorithms can extract one region at a time, DMA allows integrating several deformable models to deal with multiple segmentation scenarios. Moreover, it is possible to consider any existing explicit deformable model formulation and even to incorporate new active contour methods, allowing a suitable combination to be selected for different conditions. The framework also introduces a control module that coordinates the cooperative evolution of the snakes and is able to solve interaction issues toward the segmentation goal. Thus, DMA can implement complex object and multi-object segmentations in both 2D and 3D using the contextual information derived from the model interaction. These are important features for several medical image analysis tasks in which different but related objects need to be simultaneously extracted. Experimental results on both computed tomography and magnetic resonance imaging show that the proposed

  10. 3-D segmentation and quantitative analysis of inner and outer walls of thrombotic abdominal aortic aneurysms

    NASA Astrophysics Data System (ADS)

    Lee, Kyungmoo; Yin, Yin; Wahle, Andreas; Olszewski, Mark E.; Sonka, Milan

    2008-03-01

    An abdominal aortic aneurysm (AAA) is an area of localized widening of the abdominal aorta, with a frequent presence of thrombus. A ruptured aneurysm can cause death due to severe internal bleeding. AAA thrombus segmentation and quantitative analysis are of paramount importance for diagnosis, risk assessment, and determination of treatment options. Until now, only a small number of methods for thrombus segmentation and analysis have been presented in the literature, either requiring substantial user interaction or exhibiting insufficient performance. We report a novel method offering minimal user interaction and high accuracy. Our thrombus segmentation method is composed of an initial automated luminal surface segmentation, followed by a cost function-based optimal segmentation of the inner and outer surfaces of the aortic wall. The approach utilizes the power and flexibility of the optimal triangle mesh-based 3-D graph search method, in which cost functions for thrombus inner and outer surfaces are based on gradient magnitudes. Sometimes local failures caused by image ambiguity occur, in which case several control points are used to guide the computer segmentation without the need to trace borders manually. Our method was tested in 9 MDCT image datasets (951 image slices). With the exception of a case in which the thrombus was highly eccentric, visually acceptable aortic lumen and thrombus segmentation results were achieved. No user interaction was needed in 3 of the remaining 8 datasets; in the other 5, 7.80 +/- 2.71 mouse clicks per case (0.083 +/- 0.035 per image slice) were required.

  11. New method of on-line quantification of regional wall motion with automated segmental motion analysis.

    PubMed

    Fujino, T; Ono, S; Murata, K; Tanaka, N; Tone, T; Yamamura, T; Tomochika, Y; Kimura, K; Ueda, K; Liu, J; Wada, Y; Murashita, M; Kondo, Y; Matsuzaki, M

    2001-09-01

    We have recently developed an automated segmental motion analysis (A-SMA) system, based on an automatic "blood-tissue interface" detection technique, to provide real-time and on-line objective echocardiographic segmental wall motion analysis. To assess the feasibility of A-SMA in detecting regional left ventricular (LV) wall motion abnormalities, we performed 2-dimensional echocardiography with A-SMA in 13 healthy subjects, 22 patients with prior myocardial infarction (MI), and 9 with dilated cardiomyopathy (DCM). Midpapillary parasternal short-axis and apical 2- and 4-chamber views were obtained to clearly trace the blood-tissue interface. The LV cavity was then divided into 6 wedge-shaped segments by A-SMA. The area of each segment was calculated automatically throughout a cardiac cycle, and the area changes of each segment were displayed as bar graphs or time-area curves. The systolic fractional area change (FAC), peak ejection rate (PER), and peak filling rate (PFR) were also calculated with the use of A-SMA. In the control group, a uniform FAC was observed in real time among 6 segments in the short-axis view (60% +/- 10% to 78% +/- 9%) and among 5 segments in either the 2-chamber (59% +/- 12% to 75% +/- 16%) or 4-chamber view (58% +/- 13% to 72% +/- 12%). FAC, PER, and PFR were markedly decreased in infarct-related regions in the MI group and were globally decreased in the DCM group. We conclude that A-SMA is an objective and time-saving method for assessing regional wall motion abnormalities in real time. This method is a reliable new tool that provides on-line quantification of regional wall motion.
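
    The indices named in the abstract can be sketched from a single segmental time-area curve. This is a hedged illustration, not the A-SMA implementation: the sampling interval, toy area values, and the derivative-based definitions of PER/PFR are assumptions.

```python
# Illustrative sketch: FAC, PER, and PFR from one segmental time-area curve.
# The curve and 50 ms sampling below are invented for demonstration.

def segment_indices(area, dt):
    """Return (FAC %, PER, PFR) for one segmental time-area curve."""
    eda = max(area)                      # end-diastolic (largest) area
    esa = min(area)                      # end-systolic (smallest) area
    fac = 100.0 * (eda - esa) / eda      # systolic fractional area change
    rates = [(area[i + 1] - area[i]) / dt for i in range(len(area) - 1)]
    per = -min(rates)                    # fastest area decrease = ejection
    pfr = max(rates)                     # fastest area increase = filling
    return fac, per, pfr

# Toy cardiac cycle sampled every 50 ms (areas in cm^2)
areas = [12.0, 10.0, 7.0, 5.0, 4.0, 4.5, 6.5, 9.0, 11.5, 12.0]
fac, per, pfr = segment_indices(areas, dt=0.05)
```

A uniform FAC across the 6 wedge-shaped segments would then correspond to similar values of `fac` per segment.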

  12. Segmentation of vascular structures and hematopoietic cells in 3D microscopy images and quantitative analysis

    NASA Astrophysics Data System (ADS)

    Mu, Jian; Yang, Lin; Kamocka, Malgorzata M.; Zollman, Amy L.; Carlesso, Nadia; Chen, Danny Z.

    2015-03-01

    In this paper, we present image processing methods for the quantitative study of changes in the bone marrow microenvironment (characterized by altered vascular structure and hematopoietic cell distribution) caused by disease or other factors. We develop algorithms that automatically segment vascular structures and hematopoietic cells in 3-D microscopy images, perform quantitative analysis of the properties of the segmented vascular structures and cells, and examine how such properties change. In processing images, we apply local thresholding to segment vessels and add post-processing steps to deal with imaging artifacts. We propose an improved watershed algorithm that relies on both intensity and shape information and can separate multiple overlapping cells better than common watershed methods. We then quantitatively compute various features of the vascular structures and hematopoietic cells, such as the branches and sizes of vessels and the distribution of cells. In analyzing vascular properties, we provide algorithms for pruning spurious vessel segments and branches based on vessel skeletons. Our algorithms can segment vascular structures and hematopoietic cells with good quality. We use our methods to quantitatively examine the changes in the bone marrow microenvironment caused by deletion of the Notch pathway. Our quantitative analysis reveals property changes in samples with the Notch pathway deleted. Our tool is useful for biologists to quantitatively measure changes in the bone marrow microenvironment and to develop possible therapeutic strategies that aid its recovery.
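
    The local thresholding step can be sketched in a simplified 2-D form. This is a hedged illustration under assumptions of my own (window size, offset, toy image), not the authors' code, which operates on 3-D microscopy volumes with additional post-processing.

```python
# Illustrative sketch of local (adaptive) thresholding: a pixel is foreground
# if it exceeds the mean of its neighborhood by some offset.

def local_threshold(img, win=1, offset=0.0):
    """Binary mask: 1 where pixel > neighborhood mean + offset."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - win), min(h, y + win + 1))
                    for xx in range(max(0, x - win), min(w, x + win + 1))]
            if img[y][x] > sum(vals) / len(vals) + offset:
                out[y][x] = 1
    return out

img = [[10, 10, 10, 10],
       [10, 80, 90, 10],
       [10, 85, 95, 10],
       [10, 10, 10, 10]]
mask = local_threshold(img, win=1, offset=5.0)   # bright "vessel" block found
```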

  13. Morphotectonic Index Analysis as an Indicator of Neotectonic Segmentation of the Nicoya Peninsula, Costa Rica

    NASA Astrophysics Data System (ADS)

    Morrish, S.; Marshall, J. S.

    2013-12-01

    The Nicoya Peninsula lies within the Costa Rican forearc where the Cocos plate subducts under the Caribbean plate at ~8.5 cm/yr. Rapid plate convergence produces frequent large earthquakes (~50 yr recurrence interval) and pronounced crustal deformation (0.1-2.0 m/ky uplift). Seven uplifted segments have been identified in previous studies using broad geomorphic surfaces (Hare & Gardner 1984) and late Quaternary marine terraces (Marshall et al. 2010). These surfaces suggest long-term net uplift and segmentation of the peninsula in response to contrasting domains of subducting seafloor (EPR, CNS-1, CNS-2). In this study, newer 10m contour digital topographic data (CENIGA-Terra Project) will be used to characterize and delineate this segmentation using morphotectonic analysis of drainage basins and correlation of fluvial terrace/geomorphic surface elevations. The peninsula has six primary watersheds which drain into the Pacific Ocean: the Río Andamojo, Río Tabaco, Río Nosara, Río Ora, Río Bongo, and Río Ario, which range in area from 200 km2 to 350 km2. The trunk rivers follow major lineaments that define morphotectonic segment boundaries, and their drainage basins are in turn bisected by these boundaries. Morphometric analysis of the lower (1st and 2nd) order drainage basins will provide insight into segmented tectonic uplift and deformation by comparing values of drainage basin asymmetry, stream length gradient, and hypsometry with respect to margin segmentation and subducting seafloor domain. A general geomorphic analysis will be conducted alongside the morphometric analysis to map previously recognized (Morrish et al. 2010) but poorly characterized late Quaternary fluvial terraces. Stream capture and drainage divide migration are common processes throughout the peninsula in response to the ongoing deformation. Identification and characterization of basin piracy throughout the peninsula will provide insight into the history of landscape evolution in response to

  14. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    SciTech Connect

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi

    2010-05-15

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, which served as the "gold standard." Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold-standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics showed excellent agreement with the gold-standard manual volumetrics (intraclass correlation coefficient of 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less completion time
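
    Once the refined boundary yields a set of labeled voxels, the volumetric bookkeeping is straightforward. This hedged sketch uses invented voxel counts and spacings, not the study's data; only the formulas (voxel count times spacing, and percent volume error against the manual reference) are illustrated.

```python
# Illustrative sketch: liver volume in cc from a voxel count and CT spacing,
# plus percent volume error versus a manual reference. Numbers are invented.

def volume_cc(n_voxels, spacing_mm):
    """Volume in cubic centimeters from voxel count and (x, y, z) spacing in mm."""
    sx, sy, sz = spacing_mm
    return n_voxels * sx * sy * sz / 1000.0   # mm^3 -> cc

def percent_volume_error(computer_cc, manual_cc):
    return 100.0 * abs(computer_cc - manual_cc) / manual_cc

v = volume_cc(2_000_000, (0.7, 0.7, 1.5))     # 2M voxels at 0.7 x 0.7 x 1.5 mm
err = percent_volume_error(1470.0, 1400.0)    # hypothetical computer vs manual
```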

  15. Control-Volume Analysis Of Thrust-Augmenting Ejectors

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.

    1990-01-01

    New method of analysis of transient flow in thrust-augmenting ejector based on control-volume formulation of governing equations. Considered as potential elements of propulsion subsystems of short-takeoff/vertical-landing airplanes.

  16. Segmentation, statistical analysis, and modelling of the wall system in ceramic foams

    SciTech Connect

    Kampf, Jürgen; Schlachter, Anna-Lena; Redenbach, Claudia; Liebscher, André

    2015-01-15

    Closed walls in otherwise open foam structures may have a great impact on macroscopic properties of the materials. In this paper, we present two algorithms for the segmentation of such closed walls from micro-computed tomography images of the foam structure. The techniques are compared on simulated data and applied to tomographic images of ceramic filters. This allows for a detailed statistical analysis of the normal directions and sizes of the walls. Finally, we explain how the information derived from the segmented wall system can be included in a stochastic microstructure model for the foam.

  17. Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity

    NASA Astrophysics Data System (ADS)

    Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

    2012-04-01

    In recent years, music video data have been increasing at an astonishing speed. Shot segmentation and keyframe extraction constitute a fundamental unit in organizing, indexing, and retrieving video content. In this paper a unified framework is proposed to detect the shot boundaries and extract the keyframe of a shot. A music video is first segmented into shots using an illumination-invariant chromaticity histogram in an independent component (IC) analysis feature space. We then present a new metric, image complexity, computed from the ICs, to extract the keyframe of a shot. Experimental results show that the framework is effective and performs well.

  18. Moving cast shadow resistant for foreground segmentation based on shadow properties analysis

    NASA Astrophysics Data System (ADS)

    Zhou, Hao; Gao, Yun; Yuan, Guowu; Ji, Rongbin

    2015-12-01

    Moving object detection is a fundamental task in machine vision applications. However, moving cast shadow detection is one of the major concerns for accurate video segmentation. Since detected moving object areas often contain shadow points, errors may arise in measurement, localization, segmentation, classification, and tracking. A novel shadow elimination algorithm is proposed in this paper. A set of suspected moving object areas is first detected by an adaptive Gaussian approach. A model is then established based on analysis of shadow optical properties, and shadow regions are discriminated from the set of moving pixels by using the properties of brightness, chromaticity, and texture in sequence.
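
    The brightness/chromaticity test commonly used for this purpose can be sketched per pixel: a cast shadow keeps roughly the background's chromaticity while its brightness drops into a bounded range. The thresholds `alpha`, `beta`, `tau` and the RGB values are illustrative assumptions, and the paper's texture check is omitted.

```python
# Illustrative sketch of a brightness/chromaticity shadow test for one pixel
# against its background model. Thresholds are invented for demonstration.

def is_shadow(pix, bg, alpha=0.4, beta=0.95, tau=0.05):
    """True if pix looks like a cast shadow of background color bg (RGB)."""
    b_pix = sum(pix) / 3.0
    b_bg = sum(bg) / 3.0
    if not (alpha * b_bg <= b_pix <= beta * b_bg):   # darker, but not too dark
        return False
    s_pix, s_bg = sum(pix), sum(bg)
    # chromaticity = channel / total intensity; must barely change
    return all(abs(p / s_pix - q / s_bg) < tau for p, q in zip(pix, bg))

shadowed = is_shadow((60, 50, 40), (120, 100, 80))   # half-bright, same hue
object_px = is_shadow((20, 90, 20), (120, 100, 80))  # different chromaticity
```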

  19. Patient Segmentation Analysis Offers Significant Benefits For Integrated Care And Support.

    PubMed

    Vuik, Sabine I; Mayer, Erik K; Darzi, Ara

    2016-05-01

    Integrated care aims to organize care around the patient instead of the provider. It is therefore crucial to understand differences across patients and their needs. Segmentation analysis that uses big data can help divide a patient population into distinct groups, which can then be targeted with care models and intervention programs tailored to their needs. In this article we explore the potential applications of patient segmentation in integrated care. We propose a framework for population strategies in integrated care-whole populations, subpopulations, and high-risk populations-and show how patient segmentation can support these strategies. Through international case examples, we illustrate practical considerations such as choosing a segmentation logic, accessing data, and tailoring care models. Important issues for policy makers to consider are trade-offs between simplicity and precision, trade-offs between customized and off-the-shelf solutions, and the availability of linked data sets. We conclude that segmentation can provide many benefits to integrated care, and we encourage policy makers to support its use. PMID:27140981

  20. Phantom-based ground-truth generation for cerebral vessel segmentation and pulsatile deformation analysis

    NASA Astrophysics Data System (ADS)

    Schetelig, Daniel; Säring, Dennis; Illies, Till; Sedlacik, Jan; Kording, Fabian; Werner, René

    2016-03-01

    Hemodynamic and mechanical factors of the vascular system are assumed to play a major role in understanding, e.g., initiation, growth and rupture of cerebral aneurysms. Among those factors, cardiac cycle-related pulsatile motion and deformation of cerebral vessels currently attract much interest. However, imaging of those effects requires high spatial and temporal resolution and remains challenging, and so does the analysis of the acquired images: Flow velocity changes and contrast media inflow cause vessel intensity variations in related temporally resolved computed tomography and magnetic resonance angiography data over the cardiac cycle and impede application of intensity threshold-based segmentation and subsequent motion analysis. In this work, a flow phantom for generation of ground-truth images for evaluation of appropriate segmentation and motion analysis algorithms is developed. The acquired ground-truth data is used to illustrate the interplay between intensity fluctuations and (erroneous) motion quantification by standard threshold-based segmentation, and an adaptive threshold-based segmentation approach is proposed that alleviates respective issues. The results of the phantom study are further demonstrated to be transferable to patient data.
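
    The fixed- versus adaptive-threshold issue can be demonstrated on a toy 1-D vessel profile: when contrast inflow scales intensity per cardiac phase, a fixed cutoff changes the apparent vessel width (spurious "motion"), while a per-frame relative cutoff keeps it stable. The profile model, peaks, and 50% cutoff are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch: apparent vessel width under fixed vs adaptive thresholds
# when the same vessel shape is scaled by inflow-dependent peak intensity.

def vessel_width(profile, cutoff):
    """Number of samples above the cutoff in a 1-D cross-sectional profile."""
    return sum(1 for v in profile if v > cutoff)

def profile(peak):
    shape = [0.1, 0.5, 0.9, 1.0, 0.9, 0.5, 0.1]   # fixed vessel shape
    return [peak * s for s in shape]

phases = [100.0, 140.0, 180.0]                    # peak intensity per phase
fixed = [vessel_width(profile(p), 60.0) for p in phases]
adaptive = [vessel_width(profile(p), 0.5 * max(profile(p))) for p in phases]
```

With the fixed cutoff the width varies across phases although the vessel never moved; the adaptive cutoff holds it constant.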

  1. Segmental analysis of indocyanine green pharmacokinetics for the reliable diagnosis of functional vascular insufficiency

    NASA Astrophysics Data System (ADS)

    Kang, Yujung; Lee, Jungsul; An, Yuri; Jeon, Jongwook; Choi, Chulhee

    2011-03-01

    Accurate and reliable diagnosis of functional insufficiency of the peripheral vasculature is essential, since Raynaud phenomenon (RP), the most common form of peripheral vascular insufficiency, is commonly associated with systemic vascular disorders. We have previously demonstrated that dynamic imaging of the near-infrared fluorophore indocyanine green (ICG) can be a noninvasive and sensitive tool to measure tissue perfusion. In the present study, we demonstrated that combined analysis of multiple parameters, especially onset time and modified Tmax, defined as the time from the onset of ICG fluorescence to Tmax, can be used as a reliable diagnostic tool for RP. To validate the method, we performed the conventional thermographic analysis combined with cold challenge and rewarming along with ICG dynamic imaging and segmental analysis. A case-control analysis demonstrated that the segmental pattern of ICG dynamics in both hands was significantly different between normal subjects and RP cases, suggesting the possibility of clinical application of this novel method for the convenient and reliable diagnosis of RP.
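
    Extracting the two named parameters from a time-intensity curve can be sketched directly. The onset criterion (first sample above 10% of peak) and the toy curve are assumptions of this sketch, not the study's definitions.

```python
# Illustrative sketch: onset time and "modified Tmax" (time from fluorescence
# onset to Tmax) from an ICG time-intensity curve. Curve values are invented.

def icg_parameters(t, y):
    peak = max(y)
    i_max = y.index(peak)
    onset = next(i for i, v in enumerate(y) if v > 0.1 * peak)
    return t[onset], t[i_max] - t[onset]        # onset time, modified Tmax

t = [0, 1, 2, 3, 4, 5, 6, 7, 8]                 # seconds after injection
y = [0, 0, 2, 15, 40, 70, 100, 90, 80]          # fluorescence intensity
onset_time, modified_tmax = icg_parameters(t, y)
```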

  2. Screening Analysis : Volume 1, Description and Conclusions.

    SciTech Connect

    Bonneville Power Administration; Corps of Engineers; Bureau of Reclamation

    1992-08-01

    The SOR consists of three analytical phases leading to a Draft EIS. The first phase, Pilot Analysis, was performed for the purpose of testing the decision analysis methodology being used in the SOR. The Pilot Analysis is described later in this chapter. The second phase, Screening Analysis, examines all possible operating alternatives using a simplified analytical approach. It is described in detail in this and the next chapter. This document also presents the results of screening. The final phase, Full-Scale Analysis, will be documented in the Draft EIS and is intended to evaluate comprehensively the few, best alternatives arising from the screening analysis. The purpose of screening is to analyze a wide variety of differing ways of operating the Columbia River system to test the reaction of the system to change. The many alternatives considered reflect the range of needs and requirements of the various river users and interests in the Columbia River Basin. While some of the alternatives might be viewed as extreme, the information gained from the analysis is useful in highlighting issues and conflicts in meeting operating objectives. Screening is also intended to develop a broad technical basis for evaluation, including regional experts, and to begin developing an evaluation capability for each river use that will support full-scale analysis. Finally, screening provides a logical method for examining all possible options and reaching a decision on a few alternatives worthy of full-scale analysis. An organizational structure was developed and staffed to manage and execute the SOR, specifically during the screening phase and the upcoming full-scale analysis phase. The organization involves ten technical work groups, each representing a particular river use. Several other groups exist to oversee or support the efforts of the work groups.

  3. Market segmentation for multiple option healthcare delivery systems--an application of cluster analysis.

    PubMed

    Jarboe, G R; Gates, R H; McDaniel, C D

    1990-01-01

    Healthcare providers of multiple option plans may be confronted with special market segmentation problems. This study demonstrates how cluster analysis may be used for discovering distinct patterns of preference for multiple option plans. The availability of metric, as opposed to categorical or ordinal, data provides the ability to use sophisticated analysis techniques which may be superior to frequency distributions and cross-tabulations in revealing preference patterns.
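
    Cluster analysis on metric preference data can be sketched with a minimal one-dimensional k-means. This is a hedged toy of my own (invented scores, one variable, two clusters); real plan-preference data would be multivariate.

```python
# Illustrative sketch: 1-D k-means on metric preference scores for one plan
# option, revealing two apparent preference segments. Scores are invented.

import random

def kmeans_1d(xs, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(xs, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in xs:
            clusters[min(range(k), key=lambda j: abs(x - centers[j]))].append(x)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return sorted(centers)

scores = [1.0, 1.2, 0.8, 1.1, 8.9, 9.2, 9.0, 8.7]   # two apparent segments
centers = kmeans_1d(scores, k=2)
```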

  4. Laser power conversion system analysis, volume 1

    NASA Technical Reports Server (NTRS)

    Jones, W. S.; Morgan, L. L.; Forsyth, J. B.; Skratt, J. P.

    1979-01-01

    The orbit-to-orbit laser energy conversion system analysis established a mission model of satellites with various orbital parameters and average electrical power requirements ranging from 1 to 300 kW. The system analysis evaluated various conversion techniques, power system deployment parameters, power system electrical supplies, and other critical subsystems relative to various combinations of the mission model. The analysis showed that the laser power system would not be competitive with current satellite power systems from weight, cost, and development risk standpoints.

  5. Information architecture. Volume 2, Part 1: Baseline analysis summary

    SciTech Connect

    1996-12-01

    The Department of Energy (DOE) Information Architecture, Volume 2, Baseline Analysis, is a collaborative and logical next-step effort in the processes required to produce a Departmentwide information architecture. The baseline analysis serves a diverse audience of program management and technical personnel and provides an organized way to examine the Department's existing or de facto information architecture. A companion document to Volume 1, The Foundations, it furnishes the rationale for establishing a Departmentwide information architecture. This volume, consisting of the Baseline Analysis Summary (part 1), Baseline Analysis (part 2), and Reference Data (part 3), is of interest to readers who wish to understand how the Department's current information architecture technologies are employed. The analysis identifies how and where current technologies support business areas, programs, sites, and corporate systems.

  6. Segment and fit thresholding: a new method for image analysis applied to microarray and immunofluorescence data.

    PubMed

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B

    2015-10-01

    Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis. PMID:26339978
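
    The segment-then-threshold idea can be sketched in a much-simplified form: tile the image, take the flattest (lowest-variation) tiles as background, and derive a global threshold from background statistics. The real SFT fits trends between segment statistics; the tile size, the "flattest half" rule, and the 3-sigma threshold here are illustrative assumptions only.

```python
# Hedged, simplified sketch of an SFT-like workflow: segment statistics ->
# background identification -> threshold -> signal pixels.

def mean_sd(vals):
    m = sum(vals) / len(vals)
    var = sum((v - m) ** 2 for v in vals) / len(vals)
    return m, var ** 0.5

def sft_like_threshold(img, tile=2):
    tiles = []
    for y in range(0, len(img), tile):
        for x in range(0, len(img[0]), tile):
            vals = [img[yy][xx] for yy in range(y, y + tile)
                    for xx in range(x, x + tile)]
            tiles.append(mean_sd(vals))
    tiles.sort(key=lambda ms: ms[1])        # background: flattest tiles
    bg = tiles[: len(tiles) // 2]
    bg_mean = sum(m for m, _ in bg) / len(bg)
    bg_sd = max(s for _, s in bg)
    return bg_mean + 3.0 * bg_sd

img = [[10, 11, 10, 11],
       [11, 10, 11, 10],
       [10, 11, 90, 95],
       [11, 10, 92, 97]]
th = sft_like_threshold(img)
signal = [(y, x) for y in range(4) for x in range(4) if img[y][x] > th]
```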

  8. Mimicking human expert interpretation of remotely sensed raster imagery by using a novel segmentation analysis within ArcGIS

    NASA Astrophysics Data System (ADS)

    Le Bas, Tim; Scarth, Anthony; Bunting, Peter

    2015-04-01

    Traditional computer based methods for the interpretation of remotely sensed imagery use each pixel individually or the average of a small window of pixels to calculate a class or thematic value, which provides an interpretation. However, when a human expert interprets imagery, the human eye is excellent at finding coherent and homogenous areas and edge features. It may therefore be advantageous for computer analysis to mimic human interpretation. A new toolbox for ArcGIS 10.x will be presented that segments the data layers into a set of polygons. Each polygon is defined by a K-means clustering and region growing algorithm, thus finding areas, their edges and any lineations in the imagery. Attached to each polygon are characteristics of the imagery, such as the mean and standard deviation of the pixel values within the polygon. The segmentation of imagery into a jigsaw of polygons also has the advantage that the human interpreter does not need to spend hours digitising the boundaries. The segmentation process has been taken from the RSGIS library of analysis and classification routines (Bunting et al., 2014). These routines are freeware and have been modified to be available in the ArcToolbox under the Windows (v7) operating system. Input to the segmentation process is a multi-layered raster image, for example, a Landsat image or a set of raster datasets made up from derivatives of topography. The size and number of polygons are set by the user and are dependent on the imagery used. Examples will be presented of data from the marine environment utilising bathymetric depth, slope, rugosity and backscatter from a multibeam system. Meaningful classification of the polygons using their numerical characteristics is the next goal. Object based image analysis (OBIA) should help this workflow. Fully calibrated imagery systems will allow numerical classification to be translated into more readily understandable terms. Peter Bunting, Daniel Clewley, Richard M. Lucas and Sam
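
    The region-growing half of such a segmentation can be sketched on a toy raster: grow a region from a seed while 4-connected neighbors stay within a tolerance of the running region mean. The tolerance, the mean-based criterion, and the toy "bathymetry" grid are assumptions of this sketch, not the RSGIS implementation.

```python
# Illustrative sketch of region growing on a 2-D raster layer.

from collections import deque

def region_grow(img, seed, tol):
    h, w = len(img), len(img[0])
    region = {seed}
    total = img[seed[0]][seed[1]]
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in region:
                # accept neighbor if close to the current region mean
                if abs(img[ny][nx] - total / len(region)) <= tol:
                    region.add((ny, nx))
                    total += img[ny][nx]
                    queue.append((ny, nx))
    return region

depth = [[5, 5, 5, 40],
         [5, 6, 6, 42],
         [5, 6, 7, 41]]     # flat shelf next to a steep drop
flat = region_grow(depth, seed=(0, 0), tol=3)
```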

  9. A new image segmentation method based on multifractal detrended moving average analysis

    NASA Astrophysics Data System (ADS)

    Shi, Wen; Zou, Rui-biao; Wang, Fang; Su, Le

    2015-08-01

    In order to segment and delineate regions of interest in an image, we propose a novel algorithm based on multifractal detrended moving average analysis (MF-DMA). In this method, the generalized Hurst exponent h(q) is first calculated for every pixel and considered as the local feature of a surface. Then a multifractal detrended moving average spectrum (MF-DMS) D(h(q)) is defined using the idea of the box-counting dimension method. We therefore call the new image segmentation method the MF-DMS-based algorithm. The performance of the MF-DMS-based method is tested in two segmentation experiments on rapeseed leaf images of potassium deficiency and magnesium deficiency under three cases, namely, backward (θ = 0), centered (θ = 0.5) and forward (θ = 1), with different q values. Comparison experiments are conducted between the MF-DMS method and two other multifractal segmentation methods, namely, the popular MFS-based and latest MF-DFS-based methods. The results show that our MF-DMS-based method is superior to the latter two. The best segmentation result for the rapeseed leaf images of potassium deficiency and magnesium deficiency comes from the same parameter combination of θ = 0.5 and D(h(-10)) when using the MF-DMS-based method. An interesting finding is that D(h(-10)) outperforms other parameters both for the MF-DMS-based method with the centered case and for the MF-DFS-based algorithm. By comparing the multifractal nature of nutrient-deficient and non-deficient areas determined by the segmentation results, an important finding is that the fluctuation of gray values in nutrient-deficient areas is much more severe than in non-deficient areas.
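
    The DMA estimate of h(q) can be sketched in one dimension: detrend the profile with a backward (θ = 0) moving average, form the order-q fluctuation function F_q(n), and read h(q) from its log-log slope against window size n. The window sizes and the white-noise test series are assumptions of this sketch; the paper applies the estimator in 2-D, per pixel.

```python
# Hedged 1-D sketch of multifractal detrended moving average (MF-DMA):
# h(q) is the log-log slope of the order-q fluctuation function F_q(n).

import math, random

def h_q(series, q, windows=(4, 8, 16, 32)):
    y, s = [], 0.0
    for v in series:                     # profile (cumulative sum)
        s += v
        y.append(s)
    pts = []
    for n in windows:
        res = []
        for i in range(n - 1, len(y)):   # backward moving average, width n
            ma = sum(y[i - n + 1 : i + 1]) / n
            res.append(abs(y[i] - ma))
        fq = (sum(r ** q for r in res) / len(res)) ** (1.0 / q)
        pts.append((math.log(n), math.log(fq)))
    # least-squares slope of log F_q(n) versus log n
    mx = sum(p[0] for p in pts) / len(pts)
    my = sum(p[1] for p in pts) / len(pts)
    num = sum((x - mx) * (yv - my) for x, yv in pts)
    den = sum((x - mx) ** 2 for x, _ in pts)
    return num / den

rng = random.Random(1)
noise = [rng.gauss(0, 1) for _ in range(2048)]   # uncorrelated: h(2) near 0.5
h2 = h_q(noise, q=2)
```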

  10. Microscopy Image Browser: A Platform for Segmentation and Analysis of Multidimensional Datasets

    PubMed Central

    Belevich, Ilya; Joensuu, Merja; Kumar, Darshan; Vihinen, Helena; Jokitalo, Eija

    2016-01-01

    Understanding the structure–function relationship of cells and organelles in their natural context requires multidimensional imaging. As techniques for multimodal 3-D imaging have become more accessible, effective processing, visualization, and analysis of large datasets are posing a bottleneck for the workflow. Here, we present a new software package for high-performance segmentation and image processing of multidimensional datasets that improves and facilitates the full utilization and quantitative analysis of acquired data, which is freely available from a dedicated website. The open-source environment enables modification and insertion of new plug-ins to customize the program for specific needs. We provide practical examples of program features used for processing, segmentation and analysis of light and electron microscopy datasets, and detailed tutorials to enable users to rapidly and thoroughly learn how to use the program. PMID:26727152

  11. Laser power conversion system analysis, volume 2

    NASA Technical Reports Server (NTRS)

    Jones, W. S.; Morgan, L. L.; Forsyth, J. B.; Skratt, J. P.

    1979-01-01

    The orbit-to-ground laser power conversion system analysis investigated the feasibility and cost effectiveness of converting solar energy into laser energy in space, and transmitting the laser energy to earth for conversion to electrical energy. The analysis included space laser systems with electrical outputs on the ground ranging from 100 to 10,000 MW. The space laser power system was shown to be feasible and a viable alternative to the microwave solar power satellite. The narrow laser beam provides many options and alternatives not attainable with a microwave beam.

  12. Multiwell experiment: reservoir modeling analysis, Volume II

    SciTech Connect

    Horton, A.I.

    1985-05-01

    This report updates an ongoing analysis by reservoir modelers at the Morgantown Energy Technology Center (METC) of well test data from the Department of Energy's Multiwell Experiment (MWX). Results of previous efforts were presented in a recent METC Technical Note (Horton 1985). Results included in this report pertain to the poststimulation well tests of Zones 3 and 4 of the Paludal Sandstone Interval and the prestimulation well tests of the Red and Yellow Zones of the Coastal Sandstone Interval. The following results were obtained by using a reservoir model and history matching procedures: (1) Post-minifracture analysis indicated that the minifracture stimulation of the Paludal Interval did not produce an induced fracture, and extreme formation damage did occur, since a 65% permeability reduction around the wellbore was estimated. The design for this minifracture was from 200 to 300 feet on each side of the wellbore; (2) Post full-scale stimulation analysis for the Paludal Interval also showed that extreme formation damage occurred during the stimulation as indicated by a 75% permeability reduction 20 feet on each side of the induced fracture. Also, an induced fracture half-length of 100 feet was determined to have occurred, as compared to a designed fracture half-length of 500 to 600 feet; and (3) Analysis of prestimulation well test data from the Coastal Interval agreed with previous well-to-well interference tests that showed extreme permeability anisotropy was not a factor for this zone. This lack of permeability anisotropy was also verified by a nitrogen injection test performed on the Coastal Red and Yellow Zones. 8 refs., 7 figs., 2 tabs.

  13. Stress Analysis of Bolted, Segmented Cylindrical Shells Exhibiting Flange Mating-Surface Waviness

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Phillips, Dawn R.; Raju, Ivatury S.

    2009-01-01

    Bolted, segmented cylindrical shells are a common structural component in many engineering systems, especially aerospace launch vehicles. Segmented shells are often needed due to limitations of manufacturing capabilities or transportation issues related to very long, large-diameter cylindrical shells. These cylindrical shells typically have a flange or ring welded to opposite ends so that shell segments can be mated together and bolted to form a larger structural system. As the diameter of these shells increases, maintaining strict fabrication tolerances for the flanges to be flat and parallel on a welded structure is an extreme challenge. Local fit-up stresses develop in the structure due to flange mating-surface mismatch (flange waviness). These local stresses need to be considered when predicting a critical initial flaw size. Flange waviness is one contributor to the fit-up stress state. The present paper describes the modeling and analysis effort to simulate fit-up stresses due to flange waviness in a typical bolted, segmented cylindrical shell. Results from parametric studies are presented for various flange mating-surface waviness distributions and amplitudes.

  14. Analysis of gene expression levels in individual bacterial cells without image segmentation

    SciTech Connect

    Kwak, In Hae; Son, Minjun; Hagen, Stephen J.

    2012-05-11

    Highlights: • We present a method for extracting gene expression data from images of bacterial cells. • The method does not employ cell segmentation and does not require high magnification. • Fluorescence and phase contrast images of the cells are correlated through the physics of phase contrast. • We demonstrate the method by characterizing noisy expression of comX in Streptococcus mutans. -- Abstract: Studies of stochasticity in gene expression typically make use of fluorescent protein reporters, which permit the measurement of expression levels within individual cells by fluorescence microscopy. Analysis of such microscopy images is almost invariably based on a segmentation algorithm, where the image of a cell or cluster is analyzed mathematically to delineate individual cell boundaries. However segmentation can be ineffective for studying bacterial cells or clusters, especially at lower magnification, where outlines of individual cells are poorly resolved. Here we demonstrate an alternative method for analyzing such images without segmentation. The method employs a comparison between the pixel brightness in phase contrast vs fluorescence microscopy images. By fitting the correlation between phase contrast and fluorescence intensity to a physical model, we obtain well-defined estimates for the different levels of gene expression that are present in the cell or cluster. The method reveals the boundaries of the individual cells, even if the source images lack the resolution to show these boundaries clearly.
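    The core idea, correlating per-pixel phase-contrast and fluorescence intensities instead of segmenting cells, can be caricatured with a toy linear model. The paper fits a physical model of phase contrast; the slope-and-background fit below is only an illustrative stand-in, and all values are synthetic.

```python
import numpy as np

# Toy segmentation-free expression estimate: assume each pixel's fluorescence
# F scales with an optical-density proxy d taken from the phase-contrast
# image, F = alpha * d + bg (a simplification of the paper's physical model).
# Fitting the pixelwise correlation recovers the expression level alpha
# without ever delineating a cell boundary.
rng = np.random.default_rng(1)
density = rng.uniform(0.0, 1.0, size=5000)        # phase-contrast "cell mass"
alpha_true, bg_true = 3.2, 0.4                    # expression level, background
fluor = alpha_true * density + bg_true + rng.normal(0.0, 0.05, size=5000)

alpha_est, bg_est = np.polyfit(density, fluor, 1) # slope = expression level
```

    The fitted slope plays the role of the per-cluster expression level; in the real method the fit is done against a physically motivated phase-contrast model rather than this straight line.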

  15. Power Loss Analysis and Comparison of Segmented and Unsegmented Energy Coupling Coils for Wireless Energy Transfer

    PubMed Central

    Tang, Sai Chun; McDannold, Nathan J.

    2015-01-01

    This paper investigated the power losses of unsegmented and segmented energy coupling coils for wireless energy transfer. Four 30-cm energy coupling coils with different winding separations, conductor cross-sectional areas, and number of turns were developed. The four coils were tested in both unsegmented and segmented configurations. The winding conduction and intrawinding dielectric losses of the coils were evaluated individually based on a well-established lumped circuit model. We found that the intrawinding dielectric loss can be as much as seven times higher than the winding conduction loss at 6.78 MHz when the unsegmented coil is tightly wound. The dielectric loss of an unsegmented coil can be reduced by increasing the winding separation or reducing the number of turns, but the power transfer capability is reduced because of the reduced magnetomotive force. Coil segmentation using resonant capacitors has recently been proposed to significantly reduce the operating voltage of a coil to a safe level in wireless energy transfer for medical implants. Here, we found that it can naturally eliminate the dielectric loss. The coil segmentation method and the power loss analysis used in this paper could be applied to the transmitting, receiving, and resonant coils in two- and four-coil energy transfer systems. PMID:26640745

  16. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    NASA Astrophysics Data System (ADS)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for in vivo cancer imaging. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The proposed PET segmentation strategies were typically validated under ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice; 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV with a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both in ideal (e.g. spherical objects with uniform radioactivity concentration) and non-ideal (e.g. non-spherical objects with non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and non-homogeneous uptake) consisted of combining commercially available anthropomorphic phantoms with irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
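    A minimal sketch of the two ingredients named above, assuming a simple 1-D k-means for background estimation and an adaptive threshold T = bg + f·(max − bg). The threshold fraction f = 0.4, the synthetic slice, and the `kmeans_1d` helper are assumptions for illustration, not the calibrated parameters of the paper.

```python
import numpy as np

def kmeans_1d(values, iters=20):
    """Tiny 1-D k-means (k = 2) on voxel intensities."""
    c = np.array([values.min(), values.max()], dtype=float)
    for _ in range(iters):
        labels = np.abs(values[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = values[labels == k].mean()
    return c, labels

rng = np.random.default_rng(2)
img = rng.normal(1.0, 0.1, size=(64, 64))          # background uptake
img[20:30, 20:30] += 4.0                           # hot "lesion" (100 voxels)

centers, _ = kmeans_1d(img.ravel())
bg = centers.min()                                 # background cluster mean
T = bg + 0.4 * (img.max() - bg)                    # adaptive threshold
mtv_mask = img > T
mtv_voxels = int(mtv_mask.sum())
```

    Here the k-means background estimate keeps the threshold adaptive to the actual continuum level rather than using a fixed percentage of the maximum alone.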

  17. A Comparison of Amplitude-Based and Phase-Based Positron Emission Tomography Gating Algorithms for Segmentation of Internal Target Volumes of Tumors Subject to Respiratory Motion

    SciTech Connect

    Jani, Shyam S.; Robinson, Clifford G.; Dahlbom, Magnus; White, Benjamin M.; Thomas, David H.; Gaudio, Sergio; Low, Daniel A.; Lamb, James M.

    2013-11-01

    Purpose: To quantitatively compare the accuracy of tumor volume segmentation in amplitude-based and phase-based respiratory gating algorithms in respiratory-correlated positron emission tomography (PET). Methods and Materials: List-mode fluorodeoxyglucose-PET data was acquired for 10 patients with a total of 12 fluorodeoxyglucose-avid tumors and 9 lymph nodes. Additionally, a phantom experiment was performed in which 4 plastic butyrate spheres with inner diameters ranging from 1 to 4 cm were imaged as they underwent 1-dimensional motion based on 2 measured patient breathing trajectories. PET list-mode data were gated into 8 bins using 2 amplitude-based (equal amplitude bins [A1] and equal counts per bin [A2]) and 2 temporal phase-based gating algorithms. Gated images were segmented using a commercially available gradient-based technique and a fixed 40% threshold of maximum uptake. Internal target volumes (ITVs) were generated by taking the union of all 8 contours per gated image. Segmented phantom ITVs were compared with their respective ground-truth ITVs, defined as the volume subtended by the tumor model positions covering 99% of breathing amplitude. Superior-inferior distances between sphere centroids in the end-inhale and end-exhale phases were also calculated. Results: Tumor ITVs from amplitude-based methods were significantly larger than those from temporal-based techniques (P=.002). For lymph nodes, A2 resulted in ITVs that were significantly larger than either of the temporal-based techniques (P<.0323). A1 produced the largest and most accurate ITVs for spheres with diameters of ≥2 cm (P=.002). No significant difference was shown between algorithms in the 1-cm sphere data set. For phantom spheres, amplitude-based methods recovered an average of 9.5% more motion displacement than temporal-based methods under regular breathing conditions and an average of 45.7% more in the presence of baseline drift (P<.001). Conclusions: Target volumes in images generated
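    The two amplitude-based gating schemes compared above can be sketched on a synthetic breathing trace: A1 splits the amplitude range into 8 equal-width bins, while A2 uses amplitude octiles so each bin collects the same number of samples. The trace, noise level, and bin count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 60.0, 6000)                    # 60 s of samples
amp = np.sin(2 * np.pi * t / 4.0) + 0.05 * rng.standard_normal(t.size)

edges_a1 = np.linspace(amp.min(), amp.max(), 9)     # A1: equal amplitude bins
edges_a2 = np.quantile(amp, np.linspace(0, 1, 9))   # A2: equal counts per bin

bins_a1 = np.clip(np.digitize(amp, edges_a1) - 1, 0, 7)
bins_a2 = np.clip(np.digitize(amp, edges_a2) - 1, 0, 7)
counts_a1 = np.bincount(bins_a1, minlength=8)
counts_a2 = np.bincount(bins_a2, minlength=8)
```

    Because a quasi-sinusoidal trace dwells longest near end-inhale and end-exhale, the A1 counts pile up in the extreme bins, whereas A2 keeps the statistics (and hence image noise) uniform across bins; this difference is what drives the count-level trade-offs discussed in the study.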

  18. Fetal autonomic brain age scores, segmented heart rate variability analysis, and traditional short term variability

    PubMed Central

    Hoyer, Dirk; Kowalski, Eva-Maria; Schmidt, Alexander; Tetschke, Florian; Nowack, Samuel; Rudolph, Anja; Wallwitz, Ulrike; Kynass, Isabelle; Bode, Franziska; Tegtmeyer, Janine; Kumm, Kathrin; Moraru, Liviu; Götz, Theresa; Haueisen, Jens; Witte, Otto W.; Schleußner, Ekkehard; Schneider, Uwe

    2014-01-01

    Disturbances of fetal autonomic brain development can be evaluated from fetal heart rate patterns (HRP) reflecting the activity of the autonomic nervous system. Although HRP analysis from cardiotocographic (CTG) recordings is established for fetal surveillance, temporal resolution is low. Fetal magnetocardiography (MCG), however, provides stable continuous recordings at a higher temporal resolution combined with a more precise heart rate variability (HRV) analysis. A direct comparison of CTG and MCG based HRV analysis is pending. The aims of the present study are: (i) to compare the fetal maturation age predicting value of the MCG based fetal Autonomic Brain Age Score (fABAS) approach with that of the CTG based Dawes-Redman methodology; and (ii) to elaborate the fABAS methodology by segmentation according to fetal behavioral states and HRP. We investigated MCG recordings from 418 normal fetuses, aged between 21 and 40 weeks of gestation. In linear regression models we obtained an age predicting value of CTG compatible short term variability (STV) of R2 = 0.200 (coefficient of determination), in contrast to MCG/fABAS related multivariate models with R2 = 0.648 in 30 min recordings, R2 = 0.610 in active sleep segments of 10 min, and R2 = 0.626 in quiet sleep segments of 10 min. Additionally, segmented analysis under particular exclusion of accelerations (AC) and decelerations (DC) in quiet sleep resulted in a novel multivariate model with R2 = 0.706. According to our results, fMCG based fABAS may provide a promising tool for the estimation of fetal autonomic brain age. Besides other traditional and novel HRV indices as possible indicators of developmental disturbances, the establishment of a fABAS score normogram may represent a specific reference. The present results are intended to contribute to further exploration and validation using independent data sets and multicenter research structures. PMID:25505399

  19. Image Segmentation By Cluster Analysis Of High Resolution Textured SPOT Images

    NASA Astrophysics Data System (ADS)

    Slimani, M.; Roux, C.; Hillion, A.

    1986-04-01

    Textural analysis is now a commonly used technique in digital image processing. In this paper, we present an application of textural analysis to high resolution SPOT satellite images. The purpose of the methodology is to improve classification results, i.e. image segmentation, in remote sensing. Remote sensing techniques based on high resolution satellite data offer good perspectives for the cartography of the littoral environment. Textural information contained in the panchromatic channel of ten meters resolution is introduced in order to separate different types of structures. The technique we used is based on statistical pattern recognition models and operates in two steps. The first step, feature extraction, uses a stepwise algorithm. Segmentation is then performed by cluster analysis using these extracted features. The texture features are computed over the immediate neighborhood of the pixel using two methods: the co-occurrence matrices method and the grey level difference statistics method. Image segmentation based only on texture features is then performed by pixel classification and finally discussed. In a future paper, we intend to compare the results with aerial data in view of the management of littoral resources.
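    A small sketch of the co-occurrence matrix features mentioned above, assuming a single horizontal offset and the standard contrast statistic; the actual study uses richer feature sets, offsets, and the grey level difference statistics as well.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized grey-level co-occurrence matrix for one pixel offset."""
    m = np.zeros((levels, levels))
    a = img[max(0, -dy):img.shape[0] - max(0, dy),
            max(0, -dx):img.shape[1] - max(0, dx)]
    b = img[max(0, dy):, max(0, dx):][:a.shape[0], :a.shape[1]]
    for i, j in zip(a.ravel(), b.ravel()):
        m[i, j] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast: sum of p(i, j) * (i - j)^2."""
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

rng = np.random.default_rng(4)
smooth = np.zeros((16, 16), dtype=int)             # uniform patch
rough = rng.integers(0, 4, size=(16, 16))          # noisy patch

c_smooth = contrast(glcm(smooth, 4))
c_rough = contrast(glcm(rough, 4))
```

    A pixelwise version of such statistics, computed over a sliding neighborhood, yields the feature vectors that the clustering step then partitions into segments.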

  20. Buckling Design and Analysis of a Payload Fairing One-Sixth Cylindrical Arc-Segment Panel

    NASA Technical Reports Server (NTRS)

    Kosareo, Daniel N.; Oliver, Stanley T.; Bednarcyk, Brett A.

    2013-01-01

    Design and analysis results are reported for a panel that is a one-sixth arc-segment of a full 33-ft diameter cylindrical barrel section of a payload fairing structure. Six such panels could be used to construct the fairing barrel, and, as such, compression buckling testing of a one-sixth arc-segment panel would serve as a validation test of the buckling analyses used to design the fairing panels. In this report, linear and nonlinear buckling analyses have been performed using finite element software for one-sixth arc-segment panels composed of aluminum honeycomb core with graphite-epoxy composite facesheets and an alternative fiber reinforced foam (FRF) composite sandwich design. The cross sections of both concepts were sized to represent realistic Space Launch System (SLS) payload fairing panels. Based on shell-based linear buckling analyses, smaller, more manageable buckling test panel dimensions were determined such that the panel would still be expected to buckle with a circumferential (as opposed to column-like) mode with significant separation between the first and second buckling modes. More detailed nonlinear buckling analyses were then conducted for honeycomb panels of various sizes using both Abaqus and ANSYS finite element codes, and for the smaller size panel, a solid-based finite element analysis was conducted. Finally, for the smaller size FRF panel, nonlinear buckling analysis was performed wherein geometric imperfections measured from an actual manufactured FRF panel were included. It was found that the measured imperfection did not significantly affect the panel's predicted buckling response.

  1. Model-Based Segmentation of Cortical Regions of Interest for Multi-subject Analysis of fMRI Data

    NASA Astrophysics Data System (ADS)

    Engel, Karin; Brechmann, André; Toennies, Klaus

    The high inter-subject variability of human neuroanatomy complicates the analysis of functional imaging data across subjects. We propose a method for the correct segmentation of cortical regions of interest based on the cortical surface. First results on the segmentation of Heschl's gyrus indicate the capability of our approach for correct comparison of functional activations in relation to individual cortical patterns.

  2. Right ventricular volume analysis by angiography in right ventricular cardiomyopathy.

    PubMed

    Indik, Julia H; Dallas, William J; Gear, Kathleen; Tandri, Harikrishna; Bluemke, David A; Moukabary, Talal; Marcus, Frank I

    2012-06-01

    Imaging of the right ventricle (RV) for the diagnosis of arrhythmogenic right ventricular cardiomyopathy/dysplasia (ARVC/D) is commonly performed by echocardiography or magnetic resonance imaging (MRI). Angiography is an alternative modality, particularly when MRI cannot be performed. We hypothesized that RV volume and ejection fraction computed by angiography would correlate with these quantities as computed by MRI. RV volumes and ejection fraction were computed for subjects enrolled in the North American ARVC/D Registry, with both RV angiography and MRI studies. Angiography was performed in the 30° right anterior oblique (RAO) and 60° left anterior oblique (LAO) views. Angiographic volumes were computed by RAO view and two-view (RAO and LAO) formulae. 17 subjects were analyzed (11 men and 6 women), with 15 subjects classified as affected, and two as unaffected by modified Task Force criteria. The correlation coefficient of MRI to the two-view angiographic analysis was 0.72 (P = 0.003) for end-diastolic volume and 0.68 (P = 0.005) for ejection fraction. Angiographically derived volumes were larger than MRI-derived volumes (P = 0.009), with the slope of the linear relationship equal to 0.8 for end-diastolic volume and 0.9 for RV ejection fraction (P < 0.001), computed by the two-view formula. End-diastolic volumes and ejection fractions of the RV obtained by two-view angiography correlate with these quantities by MRI. RV end-diastolic volumes are larger by RV angiography in comparison with MRI.

  3. Viable tumor volume: volume of interest within segmented metastatic lesions, a pilot study of proposed computed tomography response criteria for urothelial cancer

    PubMed Central

    Folio, Les Roger; Turkbey, Evrim B.; Steinberg, Seth M.; Apolo, Andrea B.

    2015-01-01

    Objectives To evaluate the ability of new computed tomography (CT) response criteria for solid tumors such as urothelial cancer (VTV; viable tumor volume) to predict overall survival (OS) in patients with metastatic bladder cancer treated with cabozantinib. Materials and Methods We compared the relative capabilities of VTV, RECIST, MASS (morphology, attenuation, size, and structure), and Choi criteria, as well as volume measurements, to predict OS using serial follow-up contrast-enhanced CT exams in patients with metastatic urothelial carcinoma. Kaplan-Meier curves and 2-tailed log-rank tests compared OS based on early RECIST 1.1 response against each of the other criteria. A Cox proportional hazards model assessed response at follow-up exams as a time-varying covariate for OS. Results We assessed 141 lesions in 55 CT scans from 17 patients with urothelial metastasis, comparing VTV, RECIST, MASS, and Choi criteria, and volumetric measurements, for response assessment. Median follow-up was 4.5 months, range was 2–14 months. Only the VTV criteria demonstrated a statistical association with OS (p = 0.019; median OS 9.7 vs. 3.5 months). Conclusion This pilot study suggests that VTV is a promising tool for assessing tumor response and predicting OS, using criteria that incorporate tumor volume and density in patients receiving antiangiogenic therapy for urothelial cancer. Larger studies are warranted to validate these findings. PMID:26149529

  4. Texture analysis based on the Hermite transform for image classification and segmentation

    NASA Astrophysics Data System (ADS)

    Estudillo-Romero, Alfonso; Escalante-Ramirez, Boris; Savage-Carmona, Jesus

    2012-06-01

    Texture analysis has become an important task in image processing because it is used as a preprocessing stage in different research areas, including medical image analysis, industrial inspection, segmentation of remotely sensed imagery, and multimedia indexing and retrieval. In order to extract visual texture features, a texture image analysis technique based on the Hermite transform is presented. Psychovisual evidence suggests that Gaussian derivatives fit the receptive field profiles of mammalian visual systems. The Hermite transform describes locally basic texture features in terms of Gaussian derivatives. Multiresolution combined with several analysis orders provides detection of patterns that characterize every texture class. The analysis of the local maximum energy direction and steering of the transformation coefficients increase the method's robustness against texture orientation. This method presents an advantage over classical filter bank design because in the latter a fixed number of orientations for the analysis has to be selected. During the training stage, a subset of the Hermite analysis filters is chosen in order to improve inter-class separability and reduce the dimensionality of the feature vectors and the computational cost during the classification stage. We exhaustively evaluated the correct classification rate of real, randomly selected training and testing texture subsets using several kinds of commonly used texture features. A comparison between different distance measurements is also presented. Results of unsupervised real texture segmentation using this approach, and comparison with previous approaches, showed the benefits of our proposal.
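    Since the Hermite transform is built on Gaussian derivatives, a rough sketch of such texture features can be made with Gaussian derivative filters followed by a local energy measure. The orders, scales, and the `hermite_features` name below are assumptions for illustration, not the paper's multiresolution, steered implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hermite_features(img, sigma=2.0,
                     orders=((0, 1), (1, 0), (0, 2), (2, 0), (1, 1))):
    """Stack of local-energy maps of Gaussian derivative responses."""
    feats = []
    for oy, ox in orders:
        resp = gaussian_filter(img.astype(float), sigma, order=(oy, ox))
        feats.append(gaussian_filter(resp ** 2, 2 * sigma))  # local energy
    return np.stack(feats, axis=-1)

rng = np.random.default_rng(5)
img = np.zeros((32, 128))
img[:, 64:] = rng.standard_normal((32, 64))        # textured right half

F = hermite_features(img)
flat_energy = F[:, :40].mean()                     # far from the boundary
textured_energy = F[:, 72:].mean()
```

    In a full pipeline each pixel's feature vector (one value per derivative order and scale) would feed a classifier; here the energy maps simply separate the flat and textured halves of the test image.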

  5. Comparative analysis of nonlinear dimensionality reduction techniques for breast MRI segmentation

    SciTech Connect

    Akhbardeh, Alireza; Jacobs, Michael A.

    2012-04-15

    Purpose: Visualization of anatomical structures using radiological imaging methods is an important tool in medicine to differentiate normal from pathological tissue and can generate large amounts of data for a radiologist to read. Integrating these large data sets is difficult and time-consuming. A new approach uses both supervised and unsupervised advanced machine learning techniques to visualize and segment radiological data. This study describes the application of a novel hybrid scheme, based on combining wavelet transform and nonlinear dimensionality reduction (NLDR) methods, to breast magnetic resonance imaging (MRI) data using three well-established NLDR techniques, namely, ISOMAP, local linear embedding (LLE), and diffusion maps (DfM), to perform a comparative performance analysis. Methods: Twenty-five breast lesion subjects were scanned using a 3T scanner. MRI sequences used were T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging. The hybrid scheme consisted of two steps: preprocessing and postprocessing of the data. The preprocessing step was applied for B1 inhomogeneity correction, image registration, and wavelet-based image compression to match and denoise the data. In the postprocessing step, MRI parameters were considered data dimensions and the NLDR-based hybrid approach was applied to integrate the MRI parameters into a single image, termed the embedded image. This was achieved by mapping all pixel intensities from the higher dimension to a lower dimensional (embedded) space. For validation, the authors compared the hybrid NLDR with linear methods of principal component analysis (PCA) and multidimensional scaling (MDS) using synthetic data. For the clinical application, the authors used breast MRI data; comparison was performed using the postcontrast DCE MRI image and evaluating the congruence of the segmented lesions. Results: The NLDR-based hybrid approach was able to define and segment
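    The embedding step, mapping each voxel's multiparametric values to a single lower-dimensional intensity, can be sketched with one of the NLDR techniques named above (ISOMAP). The synthetic 4-D "parameter space" below stands in for the T1/T2/DWI/DCE channels; it is an illustration of why a nonlinear embedding can recover a latent tissue parameter, not the authors' pipeline.

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(6)
t = np.sort(rng.uniform(0.0, 4.5, 400))            # latent tissue parameter
# voxels sampled along a curved 1-D manifold in a 4-D "MRI parameter space"
voxels = np.column_stack([np.cos(t), np.sin(t), 0.3 * t, 0.1 * t])
voxels += rng.normal(0.0, 0.01, voxels.shape)

# map all voxels to a 1-D "embedded image" axis
embedded = Isomap(n_neighbors=10, n_components=1).fit_transform(voxels).ravel()
r = abs(np.corrcoef(t, embedded)[0, 1])            # embedding tracks the parameter
```

    Because ISOMAP preserves geodesic rather than straight-line distances, the curved manifold unrolls onto one axis; a linear method such as PCA would fold the curve back on itself.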

  6. Scanning and transmission electron microscopic analysis of ampullary segment of oviduct during estrous cycle in caprines.

    PubMed

    Sharma, R K; Singh, R; Bhardwaj, J K

    2015-01-01

    The ampullary segment of the mammalian oviduct provides a suitable milieu for fertilization and development of the zygote before implantation into the uterus. In the present study, therefore, the cyclic changes in the morphology of the ampullary segment of the goat oviduct were studied during the follicular and luteal phases using scanning and transmission electron microscopy. Topographical analysis revealed the presence of uniformly ciliated ampullary epithelia, concealing apical processes of non-ciliated cells along with bulbous secretory cells, during the follicular phase. The luteal phase was marked by a decline in the number of ciliated cells and an increased occurrence of secretory cells. Ultrastructural analysis demonstrated the presence of an indented nuclear membrane, supranuclear cytoplasm, secretory granules, rough endoplasmic reticulum, large lipid droplets, apically located glycogen masses, and oval-shaped mitochondria in the secretory cells. The ciliated cells were characterized by elongated nuclei, abundant smooth endoplasmic reticulum, and oval or spherical mitochondria with crescentic cristae during the follicular phase. In the luteal phase, however, secretory cells possessed a highly indented nucleus with diffuse electron-dense chromatin, hyaline nucleosol, and an increased number of lipid droplets, while the ciliated cells had numerous fibrous granules and basal bodies. The parallel use of scanning and transmission electron microscopy has enabled us to examine the cyclic, hormone-dependent changes occurring in the topography and fine structure of the epithelium of the ampullary segment during different reproductive phases, which will be of great help in understanding the major bottlenecks that limit success rates in in vitro fertilization and embryo transfer technology. PMID:25491952

  7. FEM correlation and shock analysis of a VNC MEMS mirror segment

    NASA Astrophysics Data System (ADS)

    Aguayo, Eduardo J.; Lyon, Richard; Helmbrecht, Michael; Khomusi, Sausan

    2014-08-01

    Microelectromechanical systems (MEMS) are becoming more prevalent in today's advanced space technologies. The Visible Nulling Coronagraph (VNC) instrument, being developed at the NASA Goddard Space Flight Center, uses a MEMS mirror to correct wavefront errors. This MEMS mirror, the Multiple Mirror Array (MMA), is a key component that will enable the VNC instrument to detect Jupiter-size and ultimately Earth-size exoplanets. Like other MEMS devices, the MMA faces several challenges associated with spaceflight. Therefore, Finite Element Analysis (FEA) is being used to predict the behavior of a single MMA segment under different spaceflight-related environments. Finite element analysis results are used to guide the MMA design and ensure its survival during launch and mission operations. A Finite Element Model (FEM) of the MMA has been developed using COMSOL. This model has been correlated to static loading on test specimens. The correlation was performed in several steps: simple beam models were correlated initially, followed by increasingly complex and higher fidelity models of the MMA mirror segment. Subsequently, the model has been used to predict the dynamic behavior and stresses of the MMA segment in a representative spaceflight mechanical shock environment. The results of the correlation and the stresses associated with a shock event are presented herein.

  9. Global analysis of microscopic fluorescence lifetime images using spectral segmentation and a digital micromirror spatial illuminator.

    PubMed

    Bednarkiewicz, Artur; Whelan, Maurice P

    2008-01-01

    Fluorescence lifetime imaging (FLIM) is very demanding from a technical and computational perspective, and the output is usually a compromise between acquisition/processing time and data accuracy and precision. We present a new approach to acquisition, analysis, and reconstruction of microscopic FLIM images by employing a digital micromirror device (DMD) as a spatial illuminator. In the first step, the whole field fluorescence image is collected by a color charge-coupled device (CCD) camera. Further qualitative spectral analysis and sample segmentation are performed to spatially distinguish between spectrally different regions on the sample. Next, the fluorescence of the sample is excited segment by segment, and fluorescence lifetimes are acquired with a photon counting technique. FLIM image reconstruction is performed by either raster scanning the sample or by directly accessing specific regions of interest. The unique features of the DMD illuminator allow the rapid on-line measurement of global good initial parameters (GIP), which are supplied to the first iteration of the fitting algorithm. As a consequence, a decrease of the computation time required to obtain a satisfactory quality-of-fit is achieved without compromising the accuracy and precision of the lifetime measurements. PMID:19021324
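    The lifetime-fitting step that benefits from good initial parameters (GIP) can be sketched as a mono-exponential least-squares fit; the decay model, time window, and synthetic counts below are assumptions for illustration, not the instrument's actual photon-counting data.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, A, tau, b):
    """Mono-exponential fluorescence decay: I(t) = A * exp(-t / tau) + b."""
    return A * np.exp(-t / tau) + b

t = np.linspace(0.0, 20e-9, 256)                   # 20 ns acquisition window
rng = np.random.default_rng(7)
counts = decay(t, 1000.0, 2.5e-9, 20.0) + rng.normal(0.0, 5.0, t.size)

# GIP-style initial guess: a cheap global estimate supplied to the first
# iteration, which is what makes the per-segment fits converge quickly
p0 = (counts.max(), 2e-9, counts.min())
(A, tau, b), _ = curve_fit(decay, t, counts, p0=p0)
tau_ns = tau * 1e9
```

    Starting the solver near the answer, as the DMD-based global GIP measurement does for each spectrally segmented region, cuts the iteration count without sacrificing the accuracy of the recovered lifetime.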

  10. A de-noising algorithm to improve SNR of segmented gamma scanner for spectrum analysis

    NASA Astrophysics Data System (ADS)

    Li, Huailiang; Tuo, Xianguo; Shi, Rui; Zhang, Jinzhao; Henderson, Mark Julian; Courtois, Jérémie; Yan, Minhao

    2016-05-01

An improved-threshold, shift-invariant wavelet transform de-noising algorithm for high-resolution gamma-ray spectroscopy is proposed to optimize the threshold function of the wavelet transform and to reduce the signal distortion caused by pseudo-Gibbs artificial fluctuations. The algorithm was applied to a segmented gamma scanning system for large samples, in which high continuum levels caused by Compton scattering are routinely encountered. De-noising of gamma-ray spectra measured by the segmented gamma scanning system was evaluated for the improved, shift-invariant, and traditional wavelet transform algorithms. The improved wavelet transform method yielded significantly better performance in terms of the figure of merit, the root mean square error, the peak area, and the sample attenuation correction in segmented gamma scanning assays. Spectrum analysis also showed that the gamma energy spectrum can be viewed as the superposition of a low-frequency signal and high-frequency noise, and that the smoothed spectrum is suitable for straightforward automated quantitative analysis.
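The core of shift-invariant (cycle-spinning) wavelet de-noising can be sketched in a few lines. The example below is a generic illustration using an orthonormal Haar wavelet and plain soft thresholding, not the authors' improved threshold function; the signal length must be divisible by 2**levels.

```python
import numpy as np

def haar_fwd(x, levels):
    """Multi-level orthonormal Haar DWT (length divisible by 2**levels)."""
    a, details = x.astype(float), []
    for _ in range(levels):
        pairs = a.reshape(-1, 2)
        details.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0))
        a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    return a, details

def haar_inv(a, details):
    """Inverse of haar_fwd."""
    for d in reversed(details):
        out = np.empty(2 * d.size)
        out[0::2] = (a + d) / np.sqrt(2.0)
        out[1::2] = (a - d) / np.sqrt(2.0)
        a = out
    return a

def denoise(x, levels=4, thresh=3.0):
    """Soft-threshold the detail coefficients, keep the approximation."""
    assert x.size % (1 << levels) == 0
    a, details = haar_fwd(x, levels)
    details = [np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0) for d in details]
    return haar_inv(a, details)

def cycle_spin_denoise(x, shifts=8, **kw):
    """Shift-invariant de-noising: average the de-noised circular shifts,
    which suppresses the pseudo-Gibbs artifacts of a fixed-grid DWT."""
    acc = np.zeros(x.size)
    for s in range(shifts):
        acc += np.roll(denoise(np.roll(x, s), **kw), -s)
    return acc / shifts

# Synthetic example: a step feature buried in unit-variance noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 256)
clean = np.where(t < 0.5, 0.0, 8.0)
noisy = clean + rng.normal(0.0, 1.0, t.size)
smooth = cycle_spin_denoise(noisy, shifts=8, levels=4, thresh=3.0)
```

Averaging over circular shifts is what makes the estimate translation-invariant; a single fixed decomposition would leave oscillatory artifacts near the step edge.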

  11. Analysis of Drosophila segmentation network identifies a JNK pathway factor overexpressed in kidney cancer.

    PubMed

    Liu, Jiang; Ghanim, Murad; Xue, Lei; Brown, Christopher D; Iossifov, Ivan; Angeletti, Cesar; Hua, Sujun; Nègre, Nicolas; Ludwig, Michael; Stricker, Thomas; Al-Ahmadie, Hikmat A; Tretiakova, Maria; Camp, Robert L; Perera-Alberto, Montse; Rimm, David L; Xu, Tian; Rzhetsky, Andrey; White, Kevin P

    2009-02-27

    We constructed a large-scale functional network model in Drosophila melanogaster built around two key transcription factors involved in the process of embryonic segmentation. Analysis of the model allowed the identification of a new role for the ubiquitin E3 ligase complex factor SPOP. In Drosophila, the gene encoding SPOP is a target of segmentation transcription factors. Drosophila SPOP mediates degradation of the Jun kinase phosphatase Puckered, thereby inducing tumor necrosis factor (TNF)/Eiger-dependent apoptosis. In humans, we found that SPOP plays a conserved role in TNF-mediated JNK signaling and was highly expressed in 99% of clear cell renal cell carcinomas (RCCs), the most prevalent form of kidney cancer. SPOP expression distinguished histological subtypes of RCC and facilitated identification of clear cell RCC as the primary tumor for metastatic lesions.

  12. AAV Vectors for FRET-Based Analysis of Protein-Protein Interactions in Photoreceptor Outer Segments

    PubMed Central

    Becirovic, Elvir; Böhm, Sybille; Nguyen, Ong N. P.; Riedmayr, Lisa M.; Hammelmann, Verena; Schön, Christian; Butz, Elisabeth S.; Wahl-Schott, Christian; Biel, Martin; Michalakis, Stylianos

    2016-01-01

    Fluorescence resonance energy transfer (FRET) is a powerful method for the detection and quantification of stationary and dynamic protein-protein interactions. Technical limitations have hampered systematic in vivo FRET experiments to study protein-protein interactions in their native environment. Here, we describe a rapid and robust protocol that combines adeno-associated virus (AAV) vector-mediated in vivo delivery of genetically encoded FRET partners with ex vivo FRET measurements. The method was established on acutely isolated outer segments of murine rod and cone photoreceptors and relies on the high co-transduction efficiency of retinal photoreceptors by co-delivered AAV vectors. The procedure can be used for the systematic analysis of protein-protein interactions of wild type or mutant outer segment proteins in their native environment. Conclusively, our protocol can help to characterize the physiological and pathophysiological relevance of photoreceptor specific proteins and, in principle, should also be transferable to other cell types. PMID:27516733

  13. AAV Vectors for FRET-Based Analysis of Protein-Protein Interactions in Photoreceptor Outer Segments.

    PubMed

    Becirovic, Elvir; Böhm, Sybille; Nguyen, Ong N P; Riedmayr, Lisa M; Hammelmann, Verena; Schön, Christian; Butz, Elisabeth S; Wahl-Schott, Christian; Biel, Martin; Michalakis, Stylianos

    2016-01-01

    Fluorescence resonance energy transfer (FRET) is a powerful method for the detection and quantification of stationary and dynamic protein-protein interactions. Technical limitations have hampered systematic in vivo FRET experiments to study protein-protein interactions in their native environment. Here, we describe a rapid and robust protocol that combines adeno-associated virus (AAV) vector-mediated in vivo delivery of genetically encoded FRET partners with ex vivo FRET measurements. The method was established on acutely isolated outer segments of murine rod and cone photoreceptors and relies on the high co-transduction efficiency of retinal photoreceptors by co-delivered AAV vectors. The procedure can be used for the systematic analysis of protein-protein interactions of wild type or mutant outer segment proteins in their native environment. Conclusively, our protocol can help to characterize the physiological and pathophysiological relevance of photoreceptor specific proteins and, in principle, should also be transferable to other cell types. PMID:27516733

  14. Segmentation of Unstructured Datasets

    NASA Technical Reports Server (NTRS)

    Bhat, Smitha

    1996-01-01

    Datasets generated by computer simulations and experiments in Computational Fluid Dynamics tend to be extremely large and complex. It is difficult to visualize these datasets using standard techniques like Volume Rendering and Ray Casting. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This thesis explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and from Finite Element Analysis.
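A basic building block for extracting coherent regions from a scalar grid is thresholding followed by connected-component extraction. The sketch below is a generic 2D illustration (4-neighbour connectivity on a structured grid), not the thesis's unstructured-grid algorithm; the `field` array is made up.

```python
from collections import deque

def extract_regions(grid, threshold):
    """Return connected regions (4-neighbour) of cells whose scalar
    value meets the threshold, each as a list of (row, col) indices."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    regions = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or grid[r][c] < threshold:
                continue
            queue, region = deque([(r, c)]), []
            seen[r][c] = True
            while queue:                       # breadth-first flood fill
                i, j = queue.popleft()
                region.append((i, j))
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < rows and 0 <= nj < cols
                            and not seen[ni][nj] and grid[ni][nj] >= threshold):
                        seen[ni][nj] = True
                        queue.append((ni, nj))
            regions.append(region)
    return regions

field = [[0, 0, 5, 5],
         [0, 0, 5, 0],
         [7, 0, 0, 0],
         [7, 7, 0, 0]]
print(len(extract_regions(field, 5)))  # → 2 separate regions of interest
```

Once the cell lists are in hand, quantifying a region (area, volume, integrated scalar value) reduces to summing over its indices.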

  15. Segmentation of scarred and non-scarred myocardium in LG enhanced CMR images using intensity-based textural analysis.

    PubMed

    Kotu, Lasya Priya; Engan, Kjersti; Eftestøl, Trygve; Ørn, Stein; Woie, Leik

    2011-01-01

Late Gadolinium (LG) enhancement in Cardiac Magnetic Resonance (CMR) imaging is used to increase the intensity of the scarred area of the myocardium for thorough examination. Automatic segmentation of scar is important because scar size is largely responsible for changes in the size, shape, and functioning of the left ventricle, and segmentation is a preliminary step in exploring the information present in the scar. We propose a new technique to segment scar (the infarct region) from non-scarred myocardium using intensity-based texture analysis. The technique uses dictionary-based texture features and dc-values to segment scarred and non-scarred myocardium using Maximum Likelihood Estimator (MLE) based Bayes classification. Texture analysis aided by intensity values gives better segmentation of scar from myocardium, with high sensitivity and specificity values in comparison to manual segmentation by expert cardiologists.
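The classification step can be sketched generically: fit a Gaussian class-conditional model per tissue class by maximum likelihood, then assign each pixel's feature vector to the most likely class (with equal priors, Bayes classification reduces to ML). The two features and all numbers below are hypothetical stand-ins, not the paper's dictionary-learned texture features.

```python
import numpy as np

def fit_gaussians(features, labels):
    """Per-class mean and (diagonal) variance estimated by ML."""
    params = {}
    for k in np.unique(labels):
        f = features[labels == k]
        params[k] = (f.mean(axis=0), f.var(axis=0) + 1e-9)
    return params

def classify(features, params):
    """Assign each feature vector to the class maximising the
    Gaussian log-likelihood (equal priors => Bayes = ML)."""
    keys = sorted(params)
    ll = np.stack(
        [-0.5 * (((features - m) ** 2 / v) + np.log(2.0 * np.pi * v)).sum(axis=1)
         for m, v in (params[k] for k in keys)],
        axis=1)
    return np.array(keys)[np.argmax(ll, axis=1)]

# Synthetic training data: two features per pixel, e.g. mean
# intensity (dc-value) and a texture response
rng = np.random.default_rng(1)
scar = rng.normal([5.0, 2.0], 0.5, size=(200, 2))     # bright, coarse texture
normal = rng.normal([2.0, 1.0], 0.5, size=(200, 2))   # darker myocardium
X = np.vstack([scar, normal])
y = np.array([1] * 200 + [0] * 200)
pred = classify(X, fit_gaussians(X, y))
print((pred == y).mean())   # high accuracy on well-separated classes
```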

  16. Multivariate statistical analysis as a tool for the segmentation of 3D spectral data.

    PubMed

    Lucas, G; Burdet, P; Cantoni, M; Hébert, C

    2013-01-01

Acquisition of three-dimensional (3D) spectral data is nowadays common using many different microanalytical techniques. In order to proceed to 3D reconstruction, data processing is necessary not only to deal with noisy acquisitions but also to segment the data in terms of chemical composition. In this article, we demonstrate the value of multivariate statistical analysis (MSA) methods for this purpose, allowing fast and reliable results. Using scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX) coupled with a focused ion beam (FIB), a stack of spectrum images was acquired on a sample produced by laser welding of a nickel-titanium wire and a stainless steel wire, presenting a complex microstructure. These data were analyzed using principal component analysis (PCA) and factor rotations. PCA significantly improves the overall quality of the data but produces abstract components. Here it is shown that rotated components can be used, without prior knowledge of the sample, to help interpret the data, quickly obtaining qualitative mappings representative of the elements or compounds found in the material. Such abundance maps can then be used to plot scatter diagrams and interactively identify the different domains present, by defining clusters of voxels with similar compositions. Identified voxels are advantageously overlaid on higher-resolution secondary electron (SE) images in order to refine the segmentation. The 3D reconstruction can then be performed using available commercial software on the basis of the provided segmentation. To assess the quality of the segmentation, the results were compared to an EDX quantification performed on the same data. PMID:24035679
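The PCA step can be sketched on a toy spectrum-image stack: flatten the pixels into a (pixels × channels) matrix, mean-centre, and take the SVD; a few components then capture nearly all the chemically meaningful variance, and the per-pixel scores are what feed the scatter diagrams. The two synthetic "compounds" below are assumptions for illustration, not the article's Ni-Ti/steel data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stack: 100 spectra of 64 channels built from two compounds
chan = np.arange(64)
comp_a = np.exp(-0.5 * ((chan - 20) / 3.0) ** 2)   # peak at channel 20
comp_b = np.exp(-0.5 * ((chan - 45) / 3.0) ** 2)   # peak at channel 45
abundance = rng.random((100, 2))                   # per-pixel mixing weights
spectra = abundance @ np.vstack([comp_a, comp_b]) + rng.normal(0, 0.02, (100, 64))

# PCA via SVD of the mean-centred data matrix
centred = spectra - spectra.mean(axis=0)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
scores = centred @ vt[:2].T       # per-pixel scores for scatter diagrams
print(explained[:2].sum())        # nearly all variance in two components
```

Factor rotation (e.g. varimax) would then turn the abstract components `vt[:2]` into spectra closer to the physical compounds; the scores already suffice for clustering voxels of similar composition.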

  17. AN ANALYSIS OF THE SEGMENTATION THRESHOLD USED IN AXIAL-SHEAR STRAIN ELASTOGRAPHY

    PubMed Central

    Thittai, Arun K.; Xia, Rongmin

    2014-01-01

Axial-shear strain elastography was introduced recently to image tumor-host tissue boundary bonding characteristics. The image depicting the axial-shear strain distribution in a tissue under axial compression was termed an axial-shear strain elastogram (ASSE). It has been demonstrated through simulations, tissue-mimicking phantom experiments, and retrospective analysis of in vivo breast lesion data that metrics quantifying the pattern of the axial-shear strain distribution on ASSE can be used as features for identifying the lesion boundary condition as loosely bonded or firmly bonded. Consequently, features from ASSE have been shown to have potential for non-invasive classification of breast lesions as benign versus malignant. Although there appears to be broad concurrence in the results reported by different groups, important details pertaining to the appropriate segmentation threshold needed for (1) displaying the ASSE as a color overlay on top of the corresponding axial strain elastogram (ASE) and/or sonogram for feature visualization and (2) ASSE feature extraction are not yet fully addressed. In this study, we utilize ASSE from tissue-mimicking phantom experiments (with loosely bonded and firmly bonded inclusions) and freehand-acquired in vivo breast lesion data (7 benign and 9 malignant) to analyze the effect of the segmentation threshold on the ASSE feature value, specifically the "fill-in" feature that was introduced recently. We varied the segmentation threshold from 20% to 70% (of the maximum ASSE value) for each frame of the acquisition cine-loop of every dataset and computed the number of ASSE pixels within the lesion greater than or equal to this threshold value. If at least 40% of the pixels within the lesion area crossed this segmentation threshold, the ASSE frame was considered to demonstrate a "fill-in" that would indicate a loosely bonded lesion boundary condition (suggestive of a benign lesion). Otherwise, the ASSE frame was considered not
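The "fill-in" test as described reduces to a simple computation per frame: threshold the ASSE at a fraction of its maximum and check what fraction of the lesion mask exceeds it. The sketch below follows that description with hypothetical 8×8 data; function and parameter names are mine, not the paper's.

```python
import numpy as np

def fill_in(asse_frame, lesion_mask, seg_thresh=0.5, fill_thresh=0.4):
    """'Fill-in' test: does at least `fill_thresh` of the lesion area
    meet `seg_thresh` of the frame's maximum axial-shear strain?"""
    level = seg_thresh * np.abs(asse_frame).max()
    inside = np.abs(asse_frame[lesion_mask]) >= level
    return bool(inside.mean() >= fill_thresh)

# Hypothetical frame: lesion half-filled with high shear strain
frame = np.zeros((8, 8))
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True          # 16-pixel lesion
frame[2:6, 2:4] = 1.0          # 8 high-strain pixels inside the lesion
print(fill_in(frame, mask))    # → True: 8/16 = 50% >= 40%
```

Sweeping `seg_thresh` from 0.2 to 0.7, as in the study, shows how the classification of a frame as "fill-in" depends on the chosen segmentation threshold.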

  18. Method 349.0 Determination of Ammonia in Estuarine and Coastal Waters by Gas Segmented Continuous Flow Colorimetric Analysis

    EPA Science Inventory

This method provides a procedure for the determination of ammonia in estuarine and coastal waters. The method is based upon the indophenol reaction [1-5], here adapted to automated gas-segmented continuous flow analysis.

  19. Semi-automatic segmentation and modeling of the cervical spinal cord for volume quantification in multiple sclerosis patients from magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Sonkova, Pavlina; Evangelou, Iordanis E.; Gallo, Antonio; Cantor, Fredric K.; Ohayon, Joan; McFarland, Henry F.; Bagnato, Francesca

    2008-03-01

Spinal cord (SC) tissue loss is known to occur in some patients with multiple sclerosis (MS), resulting in SC atrophy. Currently, no measurement tools exist to determine the magnitude of SC atrophy from Magnetic Resonance Images (MRI). We have developed and implemented a novel semi-automatic method, based on level sets, for quantifying the cervical SC volume (CSCV) from MRI. The image dataset consisted of SC MRI exams obtained at 1.5 Tesla from 12 MS patients (10 relapsing-remitting and 2 secondary progressive) and 12 age- and gender-matched healthy volunteers (HVs). 3D high-resolution image data were acquired in the sagittal plane using an IR-FSPGR sequence. The mid-sagittal slice (MSS) was automatically located based on an entropy calculation for each of the consecutive sagittal slices. The image data were then pre-processed by 3D anisotropic diffusion filtering, for noise reduction and edge enhancement, before segmentation with a level set formulation that does not require re-initialization. The developed method was tested against manual segmentation (considered the ground truth), and intra-observer and inter-observer variability were evaluated.
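Locating a slice by entropy can be sketched directly: compute the Shannon entropy of each slice's grey-level histogram and pick the extremum. The abstract does not state the exact selection criterion, so choosing the maximum-entropy slice here is an assumption for illustration, as is the synthetic volume.

```python
import numpy as np

def slice_entropy(img, bins=64):
    """Shannon entropy (bits) of an image's grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def pick_mid_sagittal(volume):
    """Index of the sagittal slice with the highest entropy
    (slices assumed stacked along axis 0)."""
    return int(np.argmax([slice_entropy(s) for s in volume]))

# Toy volume: only slice 3 contains structure, the rest are empty
rng = np.random.default_rng(0)
volume = np.zeros((5, 32, 32))
volume[3] = rng.random((32, 32))
print(pick_mid_sagittal(volume))   # → 3
```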

  20. Integration of 3D scale-based pseudo-enhancement correction and partial volume image segmentation for improving electronic colon cleansing in CT colonography.

    PubMed

    Zhang, Hao; Li, Lihong; Zhu, Hongbin; Han, Hao; Song, Bowen; Liang, Zhengrong

    2014-01-01

Orally administered tagging agents are usually used in CT colonography (CTC) to differentiate residual bowel content from native colonic structures. However, the high-density contrast agents tend to introduce a pseudo-enhancement (PE) effect on neighboring soft tissues and elevate their observed CT attenuation value toward that of the tagged materials (TMs), which may result in excessive electronic colon cleansing (ECC), since the pseudo-enhanced soft tissues are incorrectly identified as TMs. To address this issue, we integrated a 3D scale-based PE correction into our previous ECC pipeline based on maximum a posteriori expectation-maximization partial volume (PV) segmentation. The newly proposed ECC scheme takes into account both the PE and PV effects that commonly appear in CTC images. We evaluated the new scheme on 40 patient CTC scans, both qualitatively through display of segmentation results, and quantitatively through radiologists' blind scoring (human observer) and computer-aided detection (CAD) of colon polyps (computer observer). The presented algorithm showed consistent improvements over our previous ECC pipeline, especially for the detection of small polyps submerged in the contrast agents. The CAD results showed that 4 more submerged polyps were detected with the new ECC scheme than with the previous one.

  1. Incorporation of texture-based features in optimal graph-theoretic approach with application to the 3D segmentation of intraretinal surfaces in SD-OCT volumes

    NASA Astrophysics Data System (ADS)

    Antony, Bhavna J.; Abràmoff, Michael D.; Sonka, Milan; Kwon, Young H.; Garvin, Mona K.

    2012-02-01

While efficient graph-theoretic approaches exist for the optimal (with respect to a cost function) and simultaneous segmentation of multiple surfaces within volumetric medical images, the appropriate design of cost functions remains an important challenge. Previously proposed methods have used simple cost functions, or optimized a combination of the same, but little has been done to design cost functions from features learned from a training set, in a less biased fashion. Here, we present a method to design cost functions for the simultaneous segmentation of multiple surfaces using the graph-theoretic approach. Classified texture features were used to create probability maps, which were incorporated into the graph-search approach. The efficiency of this approach was tested on 10 optic-nerve-head-centered optical coherence tomography (OCT) volumes obtained from 10 subjects with glaucoma. The mean unsigned border position error was computed with respect to the average of manual tracings from two independent observers and compared to our previously reported results. A significant improvement was noted in the overall mean, which decreased from 9.25 ± 4.03 μm to 6.73 ± 2.45 μm (p < 0.01) and is comparable with the inter-observer variability of 8.85 ± 3.85 μm.

  2. Dense nuclei segmentation based on graph cut and convexity-concavity analysis.

    PubMed

    Qi, J

    2014-01-01

With the rapid advancement of 3D confocal imaging technology, more and more 3D cellular images will become available. However, robust and automatic extraction of nuclei shape may be hindered by a highly cluttered environment, as, for example, in fly eye tissues. In this paper, we present a novel and efficient nuclei segmentation algorithm based on the combination of graph cut and a convex shape assumption. The main characteristic of the algorithm is that it segments the nuclei foreground using a graph-cut algorithm with our proposed new initialization method, and splits overlapping or touching cell nuclei by simple convexity and concavity analysis. Experimental results show that the proposed algorithm can segment complicated nuclei clumps effectively in our fluorescent fruit fly eye images. Evaluation on a public hand-labelled 2D benchmark demonstrates substantial quantitative improvement over other methods; for example, the proposed method achieves a 3.2 decrease in Hausdorff distance and a 1.8 decrease in merged nuclei error per slice.

  3. Advanced finite element analysis of L4-L5 implanted spine segment

    NASA Astrophysics Data System (ADS)

    Pawlikowski, Marek; Domański, Janusz; Suchocki, Cyprian

    2015-09-01

In this paper, a finite element (FE) analysis of an implanted lumbar spine segment is presented. The segment model consists of two lumbar vertebrae, L4 and L5, and the prosthesis. The model of the intervertebral disc prosthesis consists of two metallic plates and a polyurethane core. Bone tissue is modelled as a linear viscoelastic material. The prosthesis core is made of a polyurethane nanocomposite and is modelled as a non-linear viscoelastic material. The constitutive law of the core, derived in one of our previous papers, is implemented into the FE software Abaqus® by means of the user-supplied procedure UMAT. The metallic plates are elastic. The most important parts of the paper include the description of the geometrical and numerical modelling of the prosthesis, the mathematical derivation of the stiffness tensor and the Kirchhoff stress, and the implementation of the constitutive model of the polyurethane core into the Abaqus® software. Two load cases were considered, i.e. compression and stress relaxation under constant displacement. The goal of the paper is to numerically validate the previously formulated constitutive law and to perform advanced FE analyses of the implanted L4-L5 spine segment in which a non-standard constitutive law for one of the model materials, i.e. the prosthesis core, is implemented.

  4. A unified framework for automatic wound segmentation and analysis with deep convolutional neural networks.

    PubMed

    Wang, Changhan; Yan, Xinchen; Smith, Max; Kochhar, Kanika; Rubin, Marcie; Warren, Stephen M; Wrobel, James; Lee, Honglak

    2015-08-01

    Wound surface area changes over multiple weeks are highly predictive of the wound healing process. Furthermore, the quality and quantity of the tissue in the wound bed also offer important prognostic information. Unfortunately, accurate measurements of wound surface area changes are out of reach in the busy wound practice setting. Currently, clinicians estimate wound size by estimating wound width and length using a scalpel after wound treatment, which is highly inaccurate. To address this problem, we propose an integrated system to automatically segment wound regions and analyze wound conditions in wound images. Different from previous segmentation techniques which rely on handcrafted features or unsupervised approaches, our proposed deep learning method jointly learns task-relevant visual features and performs wound segmentation. Moreover, learned features are applied to further analysis of wounds in two ways: infection detection and healing progress prediction. To the best of our knowledge, this is the first attempt to automate long-term predictions of general wound healing progress. Our method is computationally efficient and takes less than 5 seconds per wound image (480 by 640 pixels) on a typical laptop computer. Our evaluations on a large-scale wound database demonstrate the effectiveness and reliability of the proposed system. PMID:26736781

  5. Local multifractal detrended fluctuation analysis for non-stationary image's texture segmentation

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Li, Zong-shou; Li, Jin-wei

    2014-12-01

Feature extraction plays an important role in image processing and pattern recognition, and multifractal theory has recently been employed as a powerful tool for this job. However, traditional multifractal methods were proposed to analyze objects with a stationary measure and cannot handle non-stationary measures. The work of this paper is twofold. First, the definition of a stationary image and 2D image feature detection methods are proposed. Second, a novel feature extraction scheme for non-stationary images is proposed using local multifractal detrended fluctuation analysis (Local MF-DFA), which is based on 2D MF-DFA. A set of new multifractal descriptors, called the local generalized Hurst exponent (Lhq), is defined to characterize the local scaling properties of textures. To test the proposed method, the novel texture descriptor is compared in segmentation experiments with two other multifractal indicators, namely local Hölder coefficients based on a capacity measure and the multifractal dimension Dq based on the multifractal differential box-counting (MDBC) method. The first experiment indicates that the segmentation results obtained by the proposed Lhq are slightly better than those of the MDBC-based Dq and significantly superior to those of the local Hölder coefficients. The results of the second experiment demonstrate that the Lhq distinguishes the texture images more effectively and provides significantly more robust segmentations than the MDBC-based Dq.
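The DFA machinery underlying MF-DFA can be shown in its simplest 1D, q = 2 form: integrate the mean-removed series into a profile, detrend it box by box with a linear fit, and read the scaling exponent off the log-log slope of the fluctuation function. This is a generic sketch of ordinary DFA, not the paper's local 2D variant.

```python
import numpy as np

def dfa(x, box_sizes):
    """Ordinary (q = 2) detrended fluctuation analysis; the slope of
    log F(n) vs log n estimates the Hurst exponent H."""
    profile = np.cumsum(x - np.mean(x))
    flucts = []
    for n in box_sizes:
        f2 = []
        for b in range(len(profile) // n):
            seg = profile[b * n:(b + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local detrending
            f2.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(box_sizes), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(0)
h = dfa(rng.normal(size=4096), box_sizes=[8, 16, 32, 64, 128])
print(round(h, 2))   # ≈ 0.5 for uncorrelated (white) noise
```

MF-DFA generalizes this by raising the box-wise fluctuations to a power q before averaging, yielding the generalized Hurst exponent h(q); the paper's Lhq descriptor computes such exponents locally over image patches.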

  6. Bayesian time series analysis of segments of the Rocky Mountain trumpeter swan population

    USGS Publications Warehouse

    Wright, Christopher K.; Sojda, Richard S.; Goodman, Daniel

    2002-01-01

    A Bayesian time series analysis technique, the dynamic linear model, was used to analyze counts of Trumpeter Swans (Cygnus buccinator) summering in Idaho, Montana, and Wyoming from 1931 to 2000. For the Yellowstone National Park segment of white birds (sub-adults and adults combined) the estimated probability of a positive growth rate is 0.01. The estimated probability of achieving the Subcommittee on Rocky Mountain Trumpeter Swans 2002 population goal of 40 white birds for the Yellowstone segment is less than 0.01. Outside of Yellowstone National Park, Wyoming white birds are estimated to have a 0.79 probability of a positive growth rate with a 0.05 probability of achieving the 2002 objective of 120 white birds. In the Centennial Valley in southwest Montana, results indicate a probability of 0.87 that the white bird population is growing at a positive rate with considerable uncertainty. The estimated probability of achieving the 2002 Centennial Valley objective of 160 white birds is 0.14 but under an alternative model falls to 0.04. The estimated probability that the Targhee National Forest segment of white birds has a positive growth rate is 0.03. In Idaho outside of the Targhee National Forest, white birds are estimated to have a 0.97 probability of a positive growth rate with a 0.18 probability of attaining the 2002 goal of 150 white birds.

  7. Ground truth delineation for medical image segmentation based on Local Consistency and Distribution Map analysis.

    PubMed

    Cheng, Irene; Sun, Xinyao; Alsufyani, Noura; Xiong, Zhihui; Major, Paul; Basu, Anup

    2015-01-01

Computer-aided detection (CAD) systems have been increasingly deployed for medical applications in recent years, with the goal of speeding up tedious tasks and improving precision. Among other components, segmentation is an important preprocessing step in CAD systems, helping to recognize patterns in medical images. In order to assess the accuracy of a CAD segmentation algorithm, comparison with ground truth data is necessary. To date, ground truth delineation relies mainly on contours that are either manually defined by clinical experts or automatically generated by software. In this paper, we propose a systematic ground truth delineation method based on a Local Consistency Set Analysis approach, which can be used to establish an accurate ground truth representation or, if ground truth is available, to assess the accuracy of a CAD-generated segmentation. We validate our computational model using medical data. Experimental results demonstrate the robustness of our approach. In contrast to current methods, our model also provides consistency information at the level of distributed boundary pixels, and is thus invariant to global compensation error.

  8. A three-dimensional, six-segment chain analysis of forceful overarm throwing.

    PubMed

    Hong, D A; Cheung, T K; Roberts, E M

    2001-04-01

    A three-dimensional, six-segment model was applied to the pitching motion of three professional pitchers to analyze the kinematics and kinetics of the hips, upper trunk, humerus and forearm plus hand of both the upper limbs. Subjects were filmed at 250 frames per second. An inverse dynamics approach and angular momentum principle with respect to the proximal endpoint of a rigid segment were employed in the analysis. Results showed considerable similarities between subjects in the kinetic control of trunk rotation about the spine's longitudinal axis, but variability in the control of trunk lean both to the side and forward. The kinetics of the throwing shoulder and elbow joint were comparable between subjects, but the contribution of the non-throwing upper limb was minimal and variable. The upper trunk rotators played a key role in accelerating the ball to an early, low velocity near stride foot contact. After a brief pause they resumed acting strongly in a positive direction, though not enough to prevent trunk angular velocity slowing, as the musculature of the arm applied a load at the throwing shoulder. The interaction moment from the proximal segments assisted the forearm extensor in slowing flexion and producing rapid elbow extension near ball release. The temporal onset of muscular torques was not in a strictly successive proximal-to-distal sequence.

  9. Ground truth delineation for medical image segmentation based on Local Consistency and Distribution Map analysis.

    PubMed

    Cheng, Irene; Sun, Xinyao; Alsufyani, Noura; Xiong, Zhihui; Major, Paul; Basu, Anup

    2015-01-01

Computer-aided detection (CAD) systems have been increasingly deployed for medical applications in recent years, with the goal of speeding up tedious tasks and improving precision. Among other components, segmentation is an important preprocessing step in CAD systems, helping to recognize patterns in medical images. In order to assess the accuracy of a CAD segmentation algorithm, comparison with ground truth data is necessary. To date, ground truth delineation relies mainly on contours that are either manually defined by clinical experts or automatically generated by software. In this paper, we propose a systematic ground truth delineation method based on a Local Consistency Set Analysis approach, which can be used to establish an accurate ground truth representation or, if ground truth is available, to assess the accuracy of a CAD-generated segmentation. We validate our computational model using medical data. Experimental results demonstrate the robustness of our approach. In contrast to current methods, our model also provides consistency information at the level of distributed boundary pixels, and is thus invariant to global compensation error. PMID:26736941

  10. Using semi-automated segmentation of computed tomography datasets for three-dimensional visualization and volume measurements of equine paranasal sinuses.

    PubMed

    Brinkschulte, Markus; Bienert-Zeit, Astrid; Lüpke, Matthias; Hellige, Maren; Staszyk, Carsten; Ohnesorge, Bernhard

    2013-01-01

    The system of the paranasal sinuses morphologically represents one of the most complex parts of the equine body. A clear understanding of spatial relationships is needed for correct diagnosis and treatment. The purpose of this study was to describe the anatomy and volume of equine paranasal sinuses using three-dimensional (3D) reformatted renderings of computed tomography (CT) slices. Heads of 18 cadaver horses, aged 2-25 years, were analyzed by the use of separate semi-automated segmentation of the following bilateral paranasal sinus compartments: rostral maxillary sinus (Sinus maxillaris rostralis), ventral conchal sinus (Sinus conchae ventralis), caudal maxillary sinus (Sinus maxillaris caudalis), dorsal conchal sinus (Sinus conchae dorsalis), frontal sinus (Sinus frontalis), sphenopalatine sinus (Sinus sphenopalatinus), and middle conchal sinus (Sinus conchae mediae). Reconstructed structures were displayed separately, grouped, or altogether as transparent or solid elements to visualize individual paranasal sinus morphology. The paranasal sinuses appeared to be divided into two systems by the maxillary septum (Septum sinuum maxillarium). The first or rostral system included the rostral maxillary and ventral conchal sinus. The second or caudal system included the caudal maxillary, dorsal conchal, frontal, sphenopalatine, and middle conchal sinuses. These two systems overlapped and were interlocked due to the oblique orientation of the maxillary septum. Total volumes of the paranasal sinuses ranged from 911.50 to 1502.00 ml (mean ± SD, 1151.00 ± 186.30 ml). 3D renderings of equine paranasal sinuses by use of semi-automated segmentation of CT-datasets improved understanding of this anatomically challenging region. PMID:23890087

  11. Fast segmentation and high-quality three-dimensional volume mesh creation from medical images for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jermyn, Michael; Ghadyani, Hamid; Mastanduno, Michael A.; Turner, Wes; Davis, Scott C.; Dehghani, Hamid; Pogue, Brian W.

    2013-08-01

Multimodal approaches that combine near-infrared (NIR) and conventional imaging modalities have been shown to improve optical parameter estimation dramatically and thus represent a prevailing trend in NIR imaging. These approaches typically involve applying anatomical templates from magnetic resonance imaging/computed tomography/ultrasound images to guide the recovery of optical parameters. However, merging these data sets using current technology requires multiple software packages, substantial expertise, and significant time commitment, and often results in unacceptably poor mesh quality for optical image reconstruction, a reality that represents a significant roadblock for translational research in multimodal NIR imaging. This work addresses these challenges directly by introducing automated Digital Imaging and Communications in Medicine (DICOM) image stack segmentation and a new one-click three-dimensional mesh generator optimized for multimodal NIR imaging, and by combining these capabilities into a single software package (available for free download) with a streamlined workflow. Image processing time and mesh quality benchmarks were examined for four common multimodal NIR use cases (breast, brain, pancreas, and small animal) and were compared to a commercial image processing package. Applying these tools resulted in a fivefold decrease in image processing time and a 62% improvement in minimum mesh quality, in the absence of extra mesh postprocessing. These capabilities represent a significant step toward enabling translational multimodal NIR research for both expert and nonexpert users in an open-source platform.

  12. Micro analysis of fringe field formed inside LDA measuring volume

    NASA Astrophysics Data System (ADS)

    Ghosh, Abhijit; Nirala, A. K.

    2016-05-01

    In the present study we propose a technique for micro analysis of fringe field formed inside laser Doppler anemometry (LDA) measuring volume. Detailed knowledge of the fringe field obtained by this technique allows beam quality, alignment and fringe uniformity to be evaluated with greater precision and may be helpful for selection of an appropriate optical element for LDA system operation. A complete characterization of fringes formed at the measurement volume using conventional, as well as holographic optical elements, is presented. Results indicate the qualitative, as well as quantitative, improvement of fringes formed at the measurement volume by holographic optical elements. Hence, use of holographic optical elements in LDA systems may be advantageous for improving accuracy in the measurement.
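    For readers unfamiliar with the geometry being characterized: in a standard dual-beam LDA, the fringe spacing in the measurement volume and the resulting Doppler frequency follow from textbook relations (general background, not results of this paper):

```latex
d_f = \frac{\lambda}{2\sin(\theta/2)}, \qquad
f_D = \frac{u_\perp}{d_f} = \frac{2\,u_\perp \sin(\theta/2)}{\lambda}
```

    where λ is the laser wavelength, θ the full beam intersection angle, and u⊥ the velocity component perpendicular to the fringes. Nonuniform fringe spacing across the measurement volume therefore maps directly into velocity measurement bias, which is why the fringe-field characterization above matters.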

  13. Method for measuring anterior chamber volume by image analysis

    NASA Astrophysics Data System (ADS)

    Zhai, Gaoshou; Zhang, Junhong; Wang, Ruichang; Wang, Bingsong; Wang, Ningli

    2007-12-01

    Anterior chamber volume (ACV) is very important for an oculist making a rational pathological diagnosis for patients with optic diseases such as glaucoma, yet it is always difficult to measure accurately. In this paper, a method is devised to measure anterior chamber volume based on JPEG-formatted image files that have been converted from medical images acquired with the anterior-chamber optical coherence tomographer (AC-OCT) and its corresponding image-processing software. The algorithms for image analysis and ACV calculation are implemented in VC++, a series of anterior chamber images of typical patients is analyzed, and the calculated anterior chamber volumes are verified to be in accord with clinical observation. The results show that the measurement method is effective and feasible and has the potential to improve the accuracy of ACV calculation. Meanwhile, measures should be taken to simplify the manual preprocessing of the images.
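    At its core, the ACV computation described above reduces to summing segmented cross-sectional areas over the image slices. A minimal numpy sketch of that step (the function name, pixel size, and toy dimensions are illustrative, not from the paper):

```python
import numpy as np

def chamber_volume(masks, pixel_area_mm2, slice_spacing_mm):
    """Approximate a volume from a stack of binary cross-section masks.

    masks: (n_slices, h, w) boolean array, True inside the anterior chamber.
    Returns the volume in mm^3 (area per slice times slice spacing).
    """
    areas = masks.reshape(len(masks), -1).sum(axis=1) * pixel_area_mm2
    return float(areas.sum() * slice_spacing_mm)

# toy example: 10 slices, each containing a 20x20-pixel chamber region
m = np.zeros((10, 64, 64), dtype=bool)
m[:, 10:30, 10:30] = True
v = chamber_volume(m, pixel_area_mm2=0.01, slice_spacing_mm=0.5)  # 20.0 mm^3
```

    In practice the per-slice masks would come from the segmentation of the AC-OCT images, and the pixel area and slice spacing from the scanner calibration.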

  14. Automated system for ST segment and arrhythmia analysis in exercise radionuclide ventriculography

    SciTech Connect

    Hsia, P.W.; Jenkins, J.M.; Shimoni, Y.; Gage, K.P.; Santinga, J.T.; Pitt, B.

    1986-06-01

    A computer-based system for interpretation of the electrocardiogram (ECG) in the diagnosis of arrhythmia and ST segment abnormality during exercise testing is presented. The system was designed for inclusion in a gamma camera so that the ECG diagnosis could be combined with the diagnostic capability of radionuclide ventriculography. Digitized data are analyzed in a beat-by-beat mode and a contextual diagnosis of the underlying rhythm is provided. Each beat is assigned a beat code based on a combination of waveform analysis and RR interval measurement. The waveform analysis employs a new correlation coefficient formula which corrects for baseline wander. Selective signal averaging, in which only normal beats are included, is performed for an improved signal-to-noise ratio prior to ST segment analysis. Template generation, R wave detection, QRS window sizing, baseline correction, and continuous updating of heart rate have all been automated. ST level and slope measurements are computed on signal-averaged data. Computer arrhythmia analysis of 13 passages of abnormal rhythm was found to be correct for 98.4 percent of all beats. Twenty-five passages of exercise data, 1-5 min in length, were evaluated by the cardiologist; agreement was 95.8 percent for measurements of ST level and 91.7 percent for measurements of ST slope.
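    The baseline-tolerant correlation idea can be illustrated with a mean-subtracted (Pearson-style) correlation between a beat and a template: removing each waveform's own mean cancels a constant baseline offset. This is a generic stand-in, not the paper's exact formula:

```python
import numpy as np

def beat_similarity(beat, template):
    """Correlation coefficient between a beat and a QRS template.

    Subtracting each waveform's mean before correlating removes a constant
    baseline offset, a simple stand-in for baseline-wander correction.
    """
    b = beat - beat.mean()
    t = template - template.mean()
    return float(np.dot(b, t) / (np.linalg.norm(b) * np.linalg.norm(t)))

template = np.sin(np.linspace(0, np.pi, 50))      # idealized QRS-like bump
shifted = template + 0.7                          # same shape, offset baseline
r_same = beat_similarity(shifted, template)       # close to 1.0
r_other = beat_similarity(np.cos(np.linspace(0, 8 * np.pi, 50)), template)
```

    A beat whose similarity to the normal-beat template falls below a threshold would be excluded from the selective signal averaging.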

  15. Improved disparity map analysis through the fusion of monocular image segmentations

    NASA Technical Reports Server (NTRS)

    Perlant, Frederic P.; Mckeown, David M.

    1991-01-01

    The focus is to examine how estimates of three dimensional scene structure, as encoded in a scene disparity map, can be improved by the analysis of the original monocular imagery. The utilization of surface illumination information is provided by the segmentation of the monocular image into fine surface patches of nearly homogeneous intensity to remove mismatches generated during stereo matching. These patches are used to guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely with physical surfaces in the scene. Such a technique is quite independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented on a complex urban scene containing various man-made and natural features. This scene contains a variety of problems including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. The improvements are demonstrated due to monocular fusion with a set of different region-based image segmentations. The generality of this approach to stereo analysis and its utility in the development of general three dimensional scene interpretation systems are also discussed.

  16. Texture analysis improves level set segmentation of the anterior abdominal wall

    SciTech Connect

    Xu, Zhoubing; Allen, Wade M.; Baucom, Rebeccah B.; Poulose, Benjamin K.; Landman, Bennett A.

    2013-12-15

    Purpose: The treatment of ventral hernias (VH) has been a challenging problem for medical care. Repair of these hernias is fraught with failure; recurrence rates ranging from 24% to 43% have been reported, even with the use of biocompatible mesh. Currently, computed tomography (CT) is used to guide intervention through expert, but qualitative, clinical judgments; notably, quantitative metrics based on image processing are not used. The authors propose that image segmentation methods to capture the three-dimensional structure of the abdominal wall and its abnormalities will provide a foundation on which to measure geometric properties of hernias and surrounding tissues and, therefore, to optimize intervention. Methods: In this study of 20 clinically acquired CT scans of postoperative patients, the authors demonstrated a novel approach to geometric classification of the abdominal wall. The authors’ approach uses a texture analysis based on Gabor filters to extract feature vectors and follows a fuzzy c-means clustering method to estimate voxelwise probability memberships for eight clusters. The memberships estimated from the texture analysis help to identify anatomical structures with inhomogeneous intensities. The membership was used to guide the level set evolution, as well as to derive an initialization close to the abdominal wall. Results: Segmentation results on abdominal walls were both quantitatively and qualitatively validated with surface errors based on manually labeled ground truth. Using texture, mean surface errors for the outer surface of the abdominal wall were less than 2 mm, with 91% of the outer surface less than 5 mm away from the manual tracings; errors were significantly greater (2–5 mm) for methods that did not use the texture. Conclusions: The authors’ approach establishes a baseline for characterizing the abdominal wall for improving VH care. Inherent texture patterns in CT scans are helpful to the tissue classification, and texture
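    The fuzzy c-means step described above can be sketched in a few lines of numpy. This is a generic FCM implementation on arbitrary feature vectors (the paper uses Gabor-filter features and eight clusters), not the authors' code:

```python
import numpy as np

def fcm_memberships(X, c, m=2.0, iters=50, seed=0):
    """Fuzzy c-means: soft memberships of feature vectors to c clusters.

    X: (n_samples, n_features) array, e.g. Gabor filter responses per voxel.
    Returns U with shape (n_samples, c); each row sums to 1.
    """
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # random soft init
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # fuzzy-weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)      # membership update
    return U

# sanity check on two well-separated 2D blobs, c=2
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
U = fcm_memberships(X, c=2, iters=30)
```

    In the paper's setting each voxel contributes one feature vector, and the resulting membership maps both initialize and guide the level set evolution.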

  17. Texture analysis improves level set segmentation of the anterior abdominal wall

    PubMed Central

    Xu, Zhoubing; Allen, Wade M.; Baucom, Rebeccah B.; Poulose, Benjamin K.; Landman, Bennett A.

    2013-01-01

    Purpose: The treatment of ventral hernias (VH) has been a challenging problem for medical care. Repair of these hernias is fraught with failure; recurrence rates ranging from 24% to 43% have been reported, even with the use of biocompatible mesh. Currently, computed tomography (CT) is used to guide intervention through expert, but qualitative, clinical judgments; notably, quantitative metrics based on image processing are not used. The authors propose that image segmentation methods to capture the three-dimensional structure of the abdominal wall and its abnormalities will provide a foundation on which to measure geometric properties of hernias and surrounding tissues and, therefore, to optimize intervention. Methods: In this study of 20 clinically acquired CT scans of postoperative patients, the authors demonstrated a novel approach to geometric classification of the abdominal wall. The authors’ approach uses a texture analysis based on Gabor filters to extract feature vectors and follows a fuzzy c-means clustering method to estimate voxelwise probability memberships for eight clusters. The memberships estimated from the texture analysis help to identify anatomical structures with inhomogeneous intensities. The membership was used to guide the level set evolution, as well as to derive an initialization close to the abdominal wall. Results: Segmentation results on abdominal walls were both quantitatively and qualitatively validated with surface errors based on manually labeled ground truth. Using texture, mean surface errors for the outer surface of the abdominal wall were less than 2 mm, with 91% of the outer surface less than 5 mm away from the manual tracings; errors were significantly greater (2–5 mm) for methods that did not use the texture. Conclusions: The authors’ approach establishes a baseline for characterizing the abdominal wall for improving VH care. Inherent texture patterns in CT scans are helpful to the tissue classification, and texture

  18. Final safety analysis report for the Galileo Mission: Volume 2, Book 2: Accident model document: Appendices

    SciTech Connect

    Not Available

    1988-12-15

    This section of the Accident Model Document (AMD) presents the appendices which describe the various analyses that have been conducted for use in the Galileo Final Safety Analysis Report II, Volume II. Included in these appendices are the approaches, techniques, conditions and assumptions used in the development of the analytical models plus the detailed results of the analyses. Also included in these appendices are summaries of the accidents and their associated probabilities and environment models taken from the Shuttle Data Book (NSTS-08116), plus summaries of the several segments of the recent GPHS safety test program. The information presented in these appendices is used in Section 3.0 of the AMD to develop the Failure/Abort Sequence Trees (FASTs) and to determine the fuel releases (source terms) resulting from the potential Space Shuttle/IUS accidents throughout the missions.

  19. Analysis of the segmented contraction of basis functions using density matrix theory.

    PubMed

    Custodio, Rogério; Gomes, André Severo Pereira; Sensato, Fabrício Ronil; Trevas, Júlio Murilo Dos Santos

    2006-11-30

    A particular formulation based on density matrix (DM) theory at the Hartree-Fock level of theory and the description of the atomic orbitals as integral transforms is introduced. This formulation leads to a continuous representation of the density matrices as functions of a generator coordinate and to the possibility of plotting either the continuous or discrete density matrices as functions of the exponents of primitive Gaussian basis functions. The analysis of these diagrams provides useful information allowing: (a) the determination of the most important primitives for a given orbital, (b) the core-valence separation, and (c) support for the development of contracted basis sets by the segmented method.

  20. Do tumor volume, percent tumor volume predict biochemical recurrence after radical prostatectomy? A meta-analysis

    PubMed Central

    Meng, Yang; Li, He; Xu, Peng; Wang, Jia

    2015-01-01

    The aim of this meta-analysis was to explore the effects of tumor volume (TV) and percent tumor volume (PTV) on biochemical recurrence (BCR) after radical prostatectomy (RP). An electronic search of Medline, Embase and CENTRAL was performed for relevant studies. Studies that evaluated the effects of TV and/or PTV on BCR after RP and provided detailed results of multivariate analyses were included. Combined hazard ratios (HRs) and their corresponding 95% confidence intervals (CIs) were calculated using random-effects or fixed-effects models. A total of 15 studies with 16 datasets were included in the meta-analysis. Our study showed that both TV (HR 1.04, 95% CI: 1.00-1.07; P=0.03) and PTV (HR 1.01, 95% CI: 1.00-1.02; P=0.02) were predictors of BCR after RP. The subgroup analyses revealed that TV predicted BCR in studies from Asia, that PTV correlated significantly with BCR in studies in which PTV was measured by computer planimetry, and that both TV and PTV predicted BCR in studies with small sample sizes (<1000). In conclusion, our meta-analysis demonstrated that both TV and PTV were significantly associated with BCR after RP. Therefore, TV and PTV should be considered when assessing the risk of BCR in RP specimens. PMID:26885209
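    Pooling hazard ratios from multivariate analyses is typically done on the log scale with inverse-variance weights, recovering each study's standard error from its reported CI. A minimal fixed-effect sketch; the HR/CI numbers below are chosen arbitrarily for illustration, not taken from the included studies:

```python
import math

def pooled_hazard_ratio(hrs, cis):
    """Fixed-effect inverse-variance pooling of hazard ratios.

    hrs: per-study HR point estimates; cis: (lower, upper) 95% CIs.
    SE of log HR is recovered from the CI width: (ln u - ln l) / (2 * 1.96).
    Returns the pooled HR and its 95% CI.
    """
    logs = [math.log(h) for h in hrs]
    ws = [(2 * 1.96 / (math.log(u) - math.log(l))) ** 2 for l, u in cis]
    pooled = sum(w, x in zip(ws, logs)) if False else \
        sum(w * x for w, x in zip(ws, logs)) / sum(ws)
    se = 1.0 / math.sqrt(sum(ws))
    return math.exp(pooled), (math.exp(pooled - 1.96 * se),
                              math.exp(pooled + 1.96 * se))

hr, (lo, hi) = pooled_hazard_ratio([1.10, 1.03], [(1.02, 1.19), (1.00, 1.06)])
```

    A random-effects model, as also used in the meta-analysis, would additionally estimate between-study heterogeneity (e.g. the DerSimonian-Laird tau-squared) and inflate the weights accordingly.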

  1. Identifying radiotherapy target volumes in brain cancer by image analysis

    PubMed Central

    Cheng, Kun; Montgomery, Dean; Feng, Yang; Steel, Robin; Liao, Hanqing; McLaren, Duncan B.; Erridge, Sara C.; McLaughlin, Stephen

    2015-01-01

    To establish the optimal radiotherapy fields for treating brain cancer patients, the tumour volume is often outlined on magnetic resonance (MR) images, where the tumour is clearly visible, and mapped onto computerised tomography images used for radiotherapy planning. This process requires considerable clinical experience and is time consuming, which will continue to increase as more complex image sequences are used in this process. Here, the potential of image analysis techniques for automatically identifying the radiation target volume on MR images, and thereby assisting clinicians with this difficult task, was investigated. A gradient-based level set approach was applied on the MR images of five patients with grades II, III and IV malignant cerebral glioma. The relationship between the target volumes produced by image analysis and those produced by a radiation oncologist was also investigated. The contours produced by image analysis were compared with the contours produced by an oncologist and used for treatment. In 93% of cases, the Dice similarity coefficient was found to be between 60 and 80%. This feasibility study demonstrates that image analysis has the potential for automatic outlining in the management of brain cancer patients, however, more testing and validation on a much larger patient cohort is required. PMID:26609418
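    Overlap comparisons like the one reported above are commonly scored with the Dice similarity coefficient, which can be computed directly from two binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks (0 to 1)."""
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# toy masks: automatic contour vs. a slightly shifted manual contour
auto = np.zeros((64, 64), dtype=bool)
auto[10:40, 10:40] = True
manual = np.zeros((64, 64), dtype=bool)
manual[15:45, 10:40] = True
d = dice(auto, manual)  # 2*750 / (900 + 900) = 0.8333...
```

    A Dice value of 60-80%, as reported for most cases in the study, corresponds to substantial but imperfect overlap between the automatic and clinical contours.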

  2. Global fractional anisotropy and mean diffusivity together with segmented brain volumes assemble a predictive discriminant model for young and elderly healthy brains: a pilot study at 3T

    PubMed Central

    Garcia-Lazaro, Haydee Guadalupe; Becerra-Laparra, Ivonne; Cortez-Conradis, David; Roldan-Valadez, Ernesto

    2016-01-01

    Summary Several parameters of brain integrity can be derived from diffusion tensor imaging. These include fractional anisotropy (FA) and mean diffusivity (MD). Combination of these variables using multivariate analysis might result in a predictive model able to detect the structural changes of human brain aging. Our aim was to discriminate between young and older healthy brains by combining structural and volumetric variables from brain MRI: FA, MD, and white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) volumes. This was a cross-sectional study in 21 young (mean age, 25.71±3.04 years; range, 21–34 years) and 10 elderly (mean age, 70.20±4.02 years; range, 66–80 years) healthy volunteers. Multivariate discriminant analysis, with age as the dependent variable and WM, GM and CSF volumes, global FA and MD, and gender as the independent variables, was used to assemble a predictive model. The resulting model was able to differentiate between young and older brains: Wilks’ λ = 0.235, χ2 (6) = 37.603, p = .000001. Only global FA, WM volume and CSF volume significantly discriminated between groups. The total accuracy was 93.5%; the sensitivity, specificity and positive and negative predictive values were 91.30%, 100%, 100% and 80%, respectively. Global FA, WM volume and CSF volume are parameters that, when combined, reliably discriminate between young and older brains. A decrease in FA is the strongest predictor of membership of the older brain group, followed by an increase in WM and CSF volumes. Brain assessment using a predictive model might allow the follow-up of selected cases that deviate from normal aging. PMID:27027893

  3. The Impact of Policy Guidelines on Hospital Antibiotic Use over a Decade: A Segmented Time Series Analysis

    PubMed Central

    Chandy, Sujith J.; Naik, Girish S.; Charles, Reni; Jeyaseelan, Visalakshi; Naumova, Elena N.; Thomas, Kurien; Lundborg, Cecilia Stalsby

    2014-01-01

    Introduction Antibiotic pressure contributes to rising antibiotic resistance. Policy guidelines encourage rational prescribing behavior, but effectiveness in containing antibiotic use needs further assessment. This study therefore assessed the patterns of antibiotic use over a decade and analyzed the impact of different modes of guideline development and dissemination on inpatient antibiotic use. Methods Antibiotic use was calculated monthly as defined daily doses (DDD) per 100 bed days for nine antibiotic groups and overall. This time series compared trends in antibiotic use in five adjacent time periods identified as ‘Segments,’ divided based on differing modes of guideline development and implementation: Segment 1– Baseline prior to antibiotic guidelines development; Segment 2– During preparation of guidelines and booklet dissemination; Segment 3– Dormant period with no guidelines dissemination; Segment 4– Booklet dissemination of revised guidelines; Segment 5– Booklet dissemination of revised guidelines with intranet access. Regression analysis adapted for segmented time series and adjusted for seasonality assessed changes in antibiotic use trend. Results Overall antibiotic use increased at a monthly rate of 0.95 (SE = 0.18), 0.21 (SE = 0.08) and 0.31 (SE = 0.06) for Segments 1, 2 and 3, stabilized in Segment 4 (0.05; SE = 0.10) and declined in Segment 5 (−0.37; SE = 0.11). Segments 1, 2 and 4 exhibited seasonal fluctuations. Pairwise segmented regression adjusted for seasonality revealed a significant drop in monthly antibiotic use of 0.401 (SE = 0.089; p<0.001) for Segment 5 compared to Segment 4. Most antibiotic groups showed similar trends to overall use. Conclusion Use of overall and specific antibiotic groups showed varied patterns and seasonal fluctuations. Containment of rising overall antibiotic use was possible during periods of active guideline dissemination. Wider access through intranet facilitated
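    Segmented regression of this kind fits a separate trend to each predefined period. A simplified numpy sketch with known breakpoints and no seasonal or autocorrelation adjustment (which the actual analysis includes); the breakpoints and synthetic data are illustrative only:

```python
import numpy as np

def segment_slopes(t, y, breaks):
    """Fit a separate linear trend to each predefined time segment.

    t, y: time index (months) and usage (e.g. DDD per 100 bed days);
    breaks: interior boundaries splitting t into consecutive segments.
    Returns the fitted slope of each segment.
    """
    edges = [int(t.min())] + list(breaks) + [int(t.max()) + 1]
    slopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (t >= lo) & (t < hi)
        slope, _ = np.polyfit(t[sel], y[sel], 1)  # degree-1 least squares
        slopes.append(float(slope))
    return slopes

# synthetic monthly series: rising trend, then decline after month 24
t = np.arange(48)
y = np.where(t < 24, 0.9 * t, 0.9 * 24 - 0.4 * (t - 24))
s = segment_slopes(t, y, breaks=[24])  # slopes ~[0.9, -0.4]
```

    The published analysis additionally tests whether the slope change between adjacent segments is significant, which a single joint regression with interaction terms handles more directly than per-segment fits.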

  4. A novel method for the measurement of linear body segment parameters during clinical gait analysis.

    PubMed

    Geil, Mark D

    2013-09-01

    Clinical gait analysis is a valuable tool for the understanding of motion disorders and treatment outcomes. Most standard models used in gait analysis rely on predefined sets of body segment parameters that must be measured on each individual. Traditionally, these parameters are measured using calipers and tape measures. The process can be time consuming and is prone to several sources of error. This investigation explored a novel method for rapid recording of linear body segment parameters using magnetic-field based digital calipers commonly used for a different purpose in prosthetics and orthotics. The digital method was found to be comparable to traditional in all linear measures and data capture was significantly faster with the digital method, with mean time savings for 10 measurements of 2.5 min. Digital calipers only record linear distances, and were less accurate when diameters were used to approximate limb circumferences. Experience in measuring BSPs is important, as an experienced measurer was significantly faster than a graduate student and showed less difference between methods. Comparing measurement of adults vs. children showed greater differences with adults, and some method-dependence. If the hardware is available, digital caliper measurement of linear BSPs is accurate and rapid.

  5. Segmentation and Visual Analysis of Whole-Body Mouse Skeleton microSPECT

    PubMed Central

    Khmelinskii, Artem; Groen, Harald C.; Baiker, Martin; de Jong, Marion; Lelieveldt, Boudewijn P. F.

    2012-01-01

    Whole-body SPECT small animal imaging is used to study cancer, and plays an important role in the development of new drugs. Comparing and exploring whole-body datasets can be a difficult and time-consuming task due to the inherent heterogeneity of the data (high volume/throughput, multi-modality, postural and positioning variability). The goal of this study was to provide a method to align and compare side-by-side multiple whole-body skeleton SPECT datasets in a common reference, thus eliminating acquisition variability that exists between the subjects in cross-sectional and multi-modal studies. Six whole-body SPECT/CT datasets of BALB/c mice injected with bone targeting tracers 99mTc-methylene diphosphonate (99mTc-MDP) and 99mTc-hydroxymethane diphosphonate (99mTc-HDP) were used to evaluate the proposed method. An articulated version of the MOBY whole-body mouse atlas was used as a common reference. Its individual bones were registered one-by-one to the skeleton extracted from the acquired SPECT data following an anatomical hierarchical tree. Sequential registration was used while constraining the local degrees of freedom (DoFs) of each bone in accordance to the type of joint and its range of motion. The Articulated Planar Reformation (APR) algorithm was applied to the segmented data for side-by-side change visualization and comparison of data. To quantitatively evaluate the proposed algorithm, bone segmentations of extracted skeletons from the correspondent CT datasets were used. Euclidean point to surface distances between each dataset and the MOBY atlas were calculated. The obtained results indicate that after registration, the mean Euclidean distance decreased from 11.5±12.1 to 2.6±2.1 voxels. The proposed approach yielded satisfactory segmentation results with minimal user intervention. It proved to be robust for “incomplete” data (large chunks of skeleton missing) and for an intuitive exploration and comparison of multi-modal SPECT/CT cross
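    The point-to-surface evaluation can be approximated by nearest-neighbor point-to-point distances when the surface is densely sampled. A small numpy sketch (a production evaluation would use true point-to-triangle distances or a spatial index such as a k-d tree; the arrays here are toy data):

```python
import numpy as np

def mean_surface_distance(points, surface_pts):
    """Mean Euclidean distance from each point to its nearest surface sample.

    A point-to-point proxy for the point-to-surface metric in the abstract.
    """
    # (n, m) pairwise distance matrix via broadcasting
    d = np.linalg.norm(points[:, None, :] - surface_pts[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

atlas = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
skeleton = atlas + np.array([0.0, 0.0, 2.0])  # shifted 2 voxels along z
err = mean_surface_distance(skeleton, atlas)   # 2.0 voxels
```

    Computed before and after the articulated registration, this kind of distance yields exactly the voxel-unit error statistics quoted in the abstract (11.5±12.1 down to 2.6±2.1 voxels).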

  6. Design and analysis of modules for segmented X-ray optics

    NASA Astrophysics Data System (ADS)

    McClelland, Ryan S.; Biskach, Michael P.; Chan, Kai-Wing; Saha, Timo T.; Zhang, William W.

    2012-09-01

    Lightweight and high resolution mirrors are needed for future space-based X-ray telescopes to achieve advances in high-energy astrophysics. The slumped glass mirror technology in development at NASA GSFC aims to build X-ray mirror modules with an area to mass ratio of ~17 cm2/kg at 1 keV and a resolution of 10 arc-sec Half Power Diameter (HPD) or better at an affordable cost. As the technology nears the performance requirements, additional engineering effort is needed to ensure the modules are compatible with space-flight. This paper describes Flight Mirror Assembly (FMA) designs for several X-ray astrophysics missions studied by NASA and defines generic driving requirements and subsequent verification tests necessary to advance technology readiness for mission implementation. The requirement to perform X-ray testing in a horizontal beam, based on the orientation of existing facilities, is particularly burdensome on the mirror technology, necessitating mechanical over-constraint of the mirror segments and stiffening of the modules in order to prevent self-weight deformation errors from dominating the measured performance. This requirement, in turn, drives the mass and complexity of the system while limiting the testable angular resolution. Design options for a vertical X-ray test facility alleviating these issues are explored. An alternate mirror and module design using kinematic constraint of the mirror segments, enabled by a vertical test facility, is proposed. The kinematic mounting concept has significant advantages including potential for higher angular resolution, simplified mirror integration, and relaxed thermal requirements. However, it presents new challenges including low vibration modes and imperfections in kinematic constraint. Implementation concepts overcoming these challenges are described along with preliminary test and analysis results demonstrating the feasibility of kinematically mounting slumped glass mirror segments.

  7. Parallel runway requirement analysis study. Volume 2: Simulation manual

    NASA Technical Reports Server (NTRS)

    Ebrahimi, Yaghoob S.; Chun, Ken S.

    1993-01-01

    This document is a user manual for operating the PLAND_BLUNDER (PLB) simulation program. This simulation is based on two aircraft approaching parallel runways independently and using parallel Instrument Landing System (ILS) equipment during Instrument Meteorological Conditions (IMC). If an aircraft should deviate from its assigned localizer course toward the opposite runway, this constitutes a blunder which could endanger the aircraft on the adjacent path. The worst case scenario would be if the blundering aircraft were unable to recover and continue toward the adjacent runway. PLAND_BLUNDER is a Monte Carlo-type simulation which employs the events and aircraft positioning during such a blunder situation. The model simulates two aircraft performing parallel ILS approaches using Instrument Flight Rules (IFR) or visual procedures. PLB uses a simple movement model and control law in three dimensions (X, Y, Z). The parameters of the simulation inputs and outputs are defined in this document along with a sample of the statistical analysis. This document is the second volume of a two volume set. Volume 1 is a description of the application of the PLB to the analysis of close parallel runway operations.

  8. Computerized analysis of coronary artery disease: Performance evaluation of segmentation and tracking of coronary arteries in CT angiograms

    SciTech Connect

    Zhou, Chuan Chan, Heang-Ping; Chughtai, Aamer; Kuriakose, Jean; Agarwal, Prachi; Kazerooni, Ella A.; Hadjiiski, Lubomir M.; Patel, Smita; Wei, Jun

    2014-08-15

    Purpose: The authors are developing a computer-aided detection system to assist radiologists in the analysis of coronary artery disease in coronary CT angiograms (cCTA). This study evaluated the accuracy of the authors’ coronary artery segmentation and tracking methods, which are the essential steps in defining the search space for the detection of atherosclerotic plaques. Methods: The heart region in cCTA is segmented and the vascular structures are enhanced using the authors’ multiscale coronary artery response (MSCAR) method, which performs 3D multiscale filtering and analysis of the eigenvalues of Hessian matrices. Starting from seed points at the origins of the left and right coronary arteries, a 3D rolling balloon region growing (RBG) method that adapts to the local vessel size segmented and tracked each of the coronary arteries and identified the branches along the tracked vessels. The branches were queued and subsequently tracked until the queue was exhausted. With Institutional Review Board approval, 62 cCTA were collected retrospectively from the authors’ patient files. Three experienced cardiothoracic radiologists manually tracked and marked center points of the coronary arteries as the reference standard, following the 17-segment model that includes clinically significant coronary arteries. Two radiologists visually examined the computer-segmented vessels and marked the mistakenly tracked veins and noisy structures as false positives (FPs). For the 62 cases, the radiologists marked a total of 10191 center points on 865 visible coronary artery segments. Results: The computer-segmented vessels overlapped with 83.6% (8520/10191) of the center points. Relative to the 865 radiologist-marked segments, the sensitivity reached 91.9% (795/865) if a true positive is defined as a computer-segmented vessel that overlapped with at least 10% of the reference center points marked on the segment. When the overlap threshold was increased to 50% and 100%, the sensitivities were 86
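    The queue-based tracking described above is, in skeleton form, a breadth-first region growing. The sketch below is a heavily simplified stand-in for the RBG method (fixed threshold, no adaptive balloon radius, no branch detection), operating on a synthetic vessel-response volume:

```python
from collections import deque

import numpy as np

def region_grow(response, seed, threshold):
    """Breadth-first region growing from a seed in a 3D response map.

    Voxels are queued and absorbed while their vessel response stays at or
    above `threshold`; 6-connected neighborhood.
    """
    grown = np.zeros(response.shape, dtype=bool)
    grown[seed] = True
    q = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= c < s for c, s in zip(n, response.shape)) \
                    and not grown[n] and response[n] >= threshold:
                grown[n] = True
                q.append(n)
    return grown

vol = np.zeros((5, 5, 20))
vol[2, 2, :] = 1.0  # a straight synthetic "vessel" of 20 voxels
mask = region_grow(vol, seed=(2, 2, 0), threshold=0.5)
```

    In the authors' method the response map comes from the MSCAR Hessian-eigenvalue filtering, and detected branch points are pushed onto the same queue so side branches are tracked once the parent vessel is exhausted.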

  9. Application of Control Volume Analysis to Cerebrospinal Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Wei, Timothy; Cohen, Benjamin; Anor, Tomer; Madsen, Joseph

    2011-11-01

    Hydrocephalus is among the most common birth defects and at present can be neither prevented nor cured. Afflicted individuals face serious issues which are currently too complicated and not well enough understood to treat via systematic therapies. This talk outlines the framework and application of a control volume methodology to clinical Phase Contrast MRI data. Specifically, integral control volume analysis utilizes a fundamental fluid dynamics methodology to quantify intracranial dynamics within a precise, direct, and physically meaningful framework. A chronically shunted hydrocephalic patient in need of a revision procedure was used as an in vivo case study. Magnetic resonance velocity measurements within the patient's aqueduct were obtained in four biomedical states and analyzed using the methods presented. Pressure force estimates were obtained, showing distinct differences in amplitude, phase, and waveform shape for different intracranial states within the same individual. Thoughts on the physiological and diagnostic research and development implications and opportunities will be presented.
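    For reference, the integral balance underlying such a control volume analysis is the Reynolds transport theorem applied to momentum over a control volume CV with bounding surface CS; the pressure force estimates mentioned above follow from evaluating the right-hand-side terms from measured velocities:

```latex
\sum \mathbf{F} \;=\; \frac{\partial}{\partial t}\int_{CV} \rho\,\mathbf{u}\,dV
\;+\; \oint_{CS} \rho\,\mathbf{u}\,(\mathbf{u}\cdot\hat{\mathbf{n}})\,dA
```

    Here ρ is the fluid density, u the velocity field, and n̂ the outward surface normal; with the CV drawn around the aqueduct, the unsteady and flux integrals are computed directly from the Phase Contrast MRI velocity maps, leaving the net force as the balance.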

  10. Molecular analysis of six segments of tobacco leaf enation virus, a novel phytoreovirus from tobacco.

    PubMed

    Picton, Anabela; Potgieter, Christiaan; Rey, Marie Emma Christine

    2007-10-01

    Tobacco leaf enation virus (TLEV) is a putative member of the genus Phytoreovirus within the family Reoviridae. Previous western blot analysis of structural viral proteins (apparent molecular weights of 93 kDa; 58 kDa; 48 kDa; 39 kDa and 36 kDa) associated with TLEV, isolated from infected tobacco in South Africa, suggested that these proteins may correspond to structural Wound tumor virus (WTV) proteins. To further establish the nature of this novel virus disease phenotype in tobacco, molecular characterization of six dsRNA components was undertaken. Full-length cDNA clones were obtained by an optimized, modified single-primer amplification sequence-independent dsRNA cloning method. Results of this study revealed the conserved terminal sequence 5'GG(U/C)...UGAU 3' of segments S6-S12, while adjacent to these conserved terminal sequences are imperfect inverted repeats (7-15 bp in length), both features being common to reoviruses. The complete nucleotide sequences of segments S5 (2,610 bp), S7 (1,740 bp), S8 (1,439 bp), S10 (1,252 bp), S11 (1,187 bp) and S12 (836 bp) were determined. Comparison of full-length nucleotide sequences with corresponding segments of other phytoreoviruses, Rice gall dwarf virus (RGDV), Rice dwarf virus (RDV) and WTV, has shown nucleotide and predicted amino acid identities within the range of 30-60%. TLEV consistently shows a higher identity to WTV than to other phytoreovirus species where sequence data are available. Each segment had a single predicted open reading frame encoding proteins with calculated molecular weights of S5 (90.6 kDa); S7 (58.1 kDa); S8 (47.7 kDa); S10 (39.8 kDa); S11 (35 kDa) and S12 (19.5 kDa). The relatively low nucleotide and amino acid identity to other members of the genus demonstrates that TLEV is a novel phytoreovirus, distinct from the only other reported dicotyledonous-infecting phytoreovirus, WTV, and is the first phytoreovirus reported to emerge in Africa.

  11. ANALYSIS OF THE SEGMENTAL IMPACTION OF FEMORAL HEAD FOLLOWING AN ACETABULAR FRACTURE SURGICALLY MANAGED

    PubMed Central

    Guimarães, Rodrigo Pereira; Kaleka, Camila Cohen; Cohen, Carina; Daniachi, Daniel; Keiske Ono, Nelson; Honda, Emerson Kiyoshi; Polesello, Giancarlo Cavalli; Riccioli, Walter

    2015-01-01

    Objective: To correlate postoperative radiographic evaluation with variables accompanying acetabular fractures, in order to determine predictive factors for segmental impaction of the femoral head. Methods: Retrospective analysis of the medical files of patients submitted to open reduction surgery with internal acetabular fixation. Over approximately 35 years, 596 patients were treated for acetabular fractures; 267 were followed up for at least two years. The others were excluded because their follow-up was shorter than the minimum time, because their files lacked sufficient data, or because they had been submitted to non-surgical treatment. The patients were followed up by one of three surgeons of the group using the Merle d'Aubigné and Postel clinical scales as well as radiological studies. Results: Only two of the studied variables, age and quality of postoperative reduction, showed a statistically significant correlation with femoral head impaction. Conclusions: A good-quality reduction (anatomical or with up to 2 mm of residual deviation) is associated with good radiographic evolution and reduces the potential for segmental impaction of the femoral head, a statistically significant finding. PMID:27004191

  12. Investigating materials for breast nodules simulation by using segmentation and similarity analysis of digital images

    NASA Astrophysics Data System (ADS)

    Siqueira, Paula N.; Marcomini, Karem D.; Sousa, Maria A. Z.; Schiabel, Homero

    2015-03-01

    The task of identifying the malignancy of nodular lesions on mammograms is quite complex, because overlapped structures or granular fibrous tissue can cause confusion in classifying mass shapes, leading to unnecessary biopsies. Efforts to develop methods for automatic mass detection in CADe (Computer Aided Detection) schemes have been made with the aim of assisting radiologists and working as a second opinion. The validation of these methods may be accomplished, for instance, by using databases of clinical images or images acquired through breast phantoms. With this aim, several types of materials were tested in order to produce radiographic phantom images that constitute a good enough approximation to typical mammograms of actual breast nodules. Different nodule patterns were therefore physically produced and used on a previously developed breast phantom. Their characteristics were tested on the digital images obtained from phantom exposures at a LORAD M-IV mammography unit. Two analyses were performed: in the first, regions of interest containing the simulated nodules were segmented both by an automated segmentation technique and by an experienced radiologist who delineated the contour of each nodule by means of a graphic display digitizer, and the results were compared using evaluation metrics. The second used the Structural Similarity (SSIM) quality measure to generate quantitative data on the texture produced by each material. Although all the tested materials proved to be suitable for the study, the PVC film yielded the best results.
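    The texture comparison in the second analysis can be sketched with the single-window form of the SSIM index; the patch values below are made up for illustration and do not come from the phantom images.

```python
import numpy as np

def global_ssim(x, y, data_range):
    """Single-window SSIM index (the Wang et al. formula applied to the
    whole patch rather than a sliding window)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (x.var() + y.var() + c2))

rng = np.random.default_rng(0)
reference = rng.normal(0.5, 0.1, (64, 64))               # reference texture
candidate = reference + rng.normal(0.0, 0.02, (64, 64))  # noisier copy

print(round(global_ssim(reference, candidate, 1.0), 3))  # close to 1
```

    A score near 1 indicates the simulated texture is structurally close to the reference; identical patches score exactly 1.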

  13. Interfacial energetics approach for analysis of endothelial cell and segmental polyurethane interactions.

    PubMed

    Hill, Michael J; Cheah, Calvin; Sarkar, Debanjan

    2016-08-01

    Understanding the physicochemical interactions between endothelial cells and biomaterials is vital for regenerative medicine applications. In particular, physical interactions between the substratum interface and spontaneously deposited biomacromolecules, as well as between the induced biomolecular interface and the cell, are important factors in terms of surface energetics for regulating cellular functions. In this study, we examined the physical interactions between endothelial cells and segmental polyurethanes (PUs) using l-tyrosine based PUs to examine the structure-property relations in terms of PU surface energies and endothelial cell organization. Since contact angle analysis alone provides an incomplete interpretation and understanding of the physical interactions, we sought a combinatorial surface energetics approach utilizing water contact angle, Zisman's critical surface tension (CST), Kaelble's numerical method, and van Oss-Good-Chaudhury theory (vOGCT), applied to both substrata and serum-adsorbed matrix, to correlate human umbilical vein endothelial cell (HUVEC) behavior with surface energetics of l-tyrosine based PU surfaces. We determined that, while the water contact angle of the substratum or adsorbed matrix did not correlate well with HUVEC behavior, overall higher polarity according to the numerical method as well as Lewis base character of the substratum explained increased HUVEC interaction and monolayer formation as opposed to organization into networks. Cell interaction was also interpreted in terms of the combined effects of substratum and adsorbed matrix polarity and Lewis acid-base character to determine the effect of PU segments. PMID:27065449

  15. Evaluation of poly-drug use in methadone-related fatalities using segmental hair analysis.

    PubMed

    Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian

    2015-03-01

    In Denmark, fatal poisoning among drug addicts is often related to methadone. The primary mechanism contributing to fatal methadone overdose is respiratory depression. Concurrent use of other central nervous system (CNS) depressants is suggested to heighten the potential for fatal methadone toxicity. Reduced tolerance due to a short abstinence period is also proposed to pose a risk for fatal overdose. The primary aims of this study were to investigate whether concurrent use of CNS depressants or reduced tolerance were significant risk factors in methadone-related fatalities, using segmental hair analysis. The study included 99 methadone-related fatalities collected in Denmark from 2008 to 2011 where both blood and hair were available. The cases were divided into three subgroups based on the cause of death: methadone poisoning (N=64), poly-drug poisoning (N=28), or methadone poisoning combined with fatal disease (N=7). No significant differences between methadone concentrations in the subgroups were found in either blood or hair. The methadone blood concentrations were highly variable (0.015-5.3, median: 0.52 mg/kg) and mainly within the concentration range detected in living methadone users. In hair, methadone was detected in 97 fatalities, with concentrations ranging from 0.061 to 211 ng/mg (median: 11 ng/mg). In the remaining two cases, methadone was detected in blood but absent in hair specimens, suggesting that these two subjects were methadone-naive users. Extensive poly-drug use was observed in all three subgroups, both recently and within the last months prior to death. In particular, concurrent use of multiple benzodiazepines was prevalent among the deceased, followed by abuse of morphine, codeine, amphetamine, cannabis, cocaine and ethanol. By including quantitative segmental hair analysis, additional information on poly-drug use was obtained. In particular, 6-acetylmorphine was detected more frequently in hair specimens, indicating that regular abuse of

  17. Three-dimensional lung nodule segmentation and shape variance analysis to detect lung cancer with reduced false positives.

    PubMed

    Krishnamurthy, Senthilkumar; Narasimhan, Ganesh; Rengasamy, Umamaheswari

    2016-01-01

    A three-dimensional analysis of lung computed tomography scans was carried out in this study to detect malignant lung nodules. The automatic three-dimensional segmentation algorithm proposed here efficiently segmented the tissue clusters (nodules) inside the lung. However, the automatic morphological region-grow segmentation algorithm implemented to segment the well-circumscribed nodules inside the lung did not segment the juxta-pleural nodules present on the inner surface of the lung wall. A novel edge bridge and fill technique is proposed in this article to segment the juxta-pleural and pleural-tail nodules accurately. The centroid shift of each candidate nodule was computed; nodules with a large centroid shift across consecutive slices were eliminated, since a malignant nodule's position does not usually deviate. Three-dimensional shape variation and edge sharpness analyses were performed to reduce the false positives and to classify the malignant nodules. The change in area and equivalent diameter across consecutive slices was greater for malignant nodules, and the malignant nodules showed a sharp edge. Segmentation was followed by three-dimensional centroid, shape and edge analysis, carried out on a lung computed tomography database of 20 patients with 25 malignant nodules. The algorithms proposed in this article precisely detected 22 malignant nodules and failed to detect 3, for a sensitivity of 88%. Furthermore, the algorithm correctly eliminated 216 tissue clusters that were initially segmented as nodules; however, 41 non-malignant tissue clusters were detected as malignant nodules. The false-positive rate of this algorithm was therefore 2.05 per patient.
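    The centroid-shift criterion can be sketched as follows; the masks and the 2-pixel drift are invented for illustration, and the article's actual elimination threshold is not reproduced here.

```python
import numpy as np

def centroid(mask):
    """Centroid (row, col) of one binary slice."""
    rows, cols = np.nonzero(mask)
    return np.array([rows.mean(), cols.mean()])

def max_centroid_shift(stack):
    """Largest centroid displacement (pixels) between consecutive
    slices of a 3-D binary nodule mask (slices, rows, cols)."""
    cents = [centroid(s) for s in stack if s.any()]
    return max(np.linalg.norm(b - a) for a, b in zip(cents, cents[1:]))

# Hypothetical 3-slice candidates: a stable nodule versus one whose
# in-plane position drifts 2 pixels per slice (made-up geometry).
stable = np.zeros((3, 8, 8), dtype=bool)
drifting = np.zeros((3, 8, 8), dtype=bool)
for z in range(3):
    stable[z, 3:5, 3:5] = True
    drifting[z, 2 * z:2 * z + 2, 2 * z:2 * z + 2] = True

print(max_centroid_shift(stable), round(max_centroid_shift(drifting), 3))
```

    A candidate like `drifting`, whose centroid jumps between slices, would be discarded as an unlikely malignant nodule.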

  18. Segmentation and volumetric measurement of renal cysts and parenchyma from MR images of polycystic kidneys using multi-spectral analysis method

    NASA Astrophysics Data System (ADS)

    Bae, K. T.; Commean, P. K.; Brunsden, B. S.; Baumgarten, D. A.; King, B. F., Jr.; Wetzel, L. H.; Kenney, P. J.; Chapman, A. B.; Torres, V. E.; Grantham, J. J.; Guay-Woodford, L. M.; Tao, C.; Miller, J. P.; Meyers, C. M.; Bennett, W. M.

    2008-03-01

    For segmentation and volume measurement of renal cysts and parenchyma from kidney MR images in subjects with autosomal dominant polycystic kidney disease (ADPKD), a semi-automated, multi-spectral analysis (MSA) method was developed and applied to T1- and T2-weighted MR images. In this method, renal cysts and parenchyma were characterized and segmented based on their characteristic T1 and T2 signal intensity differences. The performance of the MSA segmentation method was tested on ADPKD phantoms and patients. Segmented renal cyst and parenchyma volumes were measured and compared with reference standard measurements: the fluid displacement method in the phantoms, and stereology and region-based thresholding methods in patients, respectively. Renal cysts and parenchyma were segmented successfully with the MSA method, and the volume measurements obtained with MSA were in good agreement with the measurements by the other segmentation methods for both phantoms and subjects. The MSA method, however, was more time-consuming than the other segmentation methods because it required pre-segmentation, image registration and tissue classification-determination steps.
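    A minimal sketch of the tissue-classification idea: each voxel becomes a (T1, T2) intensity pair, and a two-class clustering separates the cyst and parenchyma signatures. The intensity values and the plain k-means step are assumptions for illustration; the actual MSA method also involves pre-segmentation and image registration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical (T1, T2) signal-intensity pairs per voxel: cysts dark on
# T1 and bright on T2, parenchyma the opposite. All values are made up.
cysts = rng.normal([300.0, 900.0], 40.0, (200, 2))
parenchyma = rng.normal([700.0, 400.0], 40.0, (200, 2))
voxels = np.vstack([cysts, parenchyma])

# Two-class k-means (Lloyd's algorithm) in the 2-D feature space,
# deterministically seeded with the feature-wise min and max.
means = np.array([voxels.min(axis=0), voxels.max(axis=0)])
for _ in range(20):
    dist = np.linalg.norm(voxels[:, None, :] - means[None, :, :], axis=2)
    labels = dist.argmin(axis=1)
    means = np.array([voxels[labels == k].mean(axis=0) for k in range(2)])

print(np.round(means))  # two class centers near (700, 400) and (300, 900)
```

    Once voxels are labeled, each tissue volume follows from the voxel count times the voxel size.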

  19. Analysis of automated highway system risks and uncertainties. Volume 5

    SciTech Connect

    Sicherman, A.

    1994-10-01

    This volume describes a risk analysis performed to help identify important Automated Highway System (AHS) deployment uncertainties and quantify their effect on costs and benefits for a range of AHS deployment scenarios. The analysis identified a suite of key factors affecting vehicle and roadway costs, capacities and market penetrations for alternative AHS deployment scenarios. A systematic protocol was utilized for obtaining expert judgments of key factor uncertainties in the form of subjective probability percentile assessments. Based on these assessments, probability distributions on vehicle and roadway costs, capacity and market penetration were developed for the different scenarios. The cost/benefit risk methodology and analysis provide insights by showing how uncertainties in key factors translate into uncertainties in summary cost/benefit indices.
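    The translation from expert percentile assessments to cost uncertainty can be sketched with a small Monte Carlo exercise; the triangular distributions, factor values and fleet size below are invented, not taken from the AHS study.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical expert low / most-likely / high assessments for two key
# factors, modeled as triangular distributions (values are invented).
vehicle_cost = rng.triangular(800, 1500, 3000, n)    # $ per vehicle
penetration = rng.triangular(0.05, 0.15, 0.40, n)    # market fraction
fleet = 10_000_000                                   # assumed fleet size

total_cost = vehicle_cost * penetration * fleet / 1e9   # $ billions
p10, p50, p90 = np.percentile(total_cost, [10, 50, 90])
print(f"P10={p10:.1f}B  P50={p50:.1f}B  P90={p90:.1f}B")
```

    The resulting percentiles summarize how uncertainty in the key factors propagates into a summary cost index, in the spirit of the protocol described above.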

  20. Conjoint analysis to measure the perceived quality in volume rendering.

    PubMed

    Giesen, Joachim; Mueller, Klaus; Schuberth, Eva; Wang, Lujin; Zolliker, Peter

    2007-01-01

    Visualization algorithms can have a large number of parameters, making the space of possible rendering results rather high-dimensional. Only a systematic analysis of the perceived quality can truly reveal the optimal setting for each such parameter. However, an exhaustive search in which all possible parameter permutations are presented to each user within a study group would be infeasible to conduct. Additional complications may result from possible parameter co-dependencies. Here, we will introduce an efficient user study design and analysis strategy that is geared to cope with this problem. The user feedback is fast and easy to obtain and does not require exhaustive parameter testing. To enable such a framework we have modified a preference measuring methodology, conjoint analysis, that originated in psychology and is now also widely used in market research. We demonstrate our framework by a study that measures the perceived quality in volume rendering within the context of large parameter spaces.

  1. Volume measurements of normal orbital structures by computed tomographic analysis

    SciTech Connect

    Forbes, G.; Gehring, D.G.; Gorman, C.A.; Brennan, M.D.; Jackson, I.T.

    1985-07-01

    Computed tomographic digital data and special off-line computer graphic analysis were used to measure volumes of normal orbital soft tissue, extraocular muscle, orbital fat, and total bony orbit in vivo in 29 patients (58 orbits). The upper limits of normal for the adult bony orbit, soft tissue exclusive of the globe, orbital fat, and muscle are 30.1 cm³, 20.0 cm³, 14.4 cm³, and 6.5 cm³, respectively. There are small differences in men as a group compared with women, but minimal difference between the right and left orbits of the same person. The accuracy of the techniques was established at 7%-8% for these orbital structural volumes in physical phantoms and in simulated silicone orbit phantoms in dry skulls. Mean values and upper limits of normal were determined for adult orbital structures for future comparison with changes due to endocrine ophthalmopathy, trauma, and congenital deformity.
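    Volume measurement from segmented CT data reduces to counting voxels and scaling by the voxel size; the mask and spacings below are hypothetical, chosen only to illustrate the arithmetic.

```python
import numpy as np

# Hypothetical binary mask of a segmented orbital structure from a CT
# stack with assumed 0.5 mm in-plane spacing and 1.5 mm slice thickness.
mask = np.zeros((40, 128, 128), dtype=bool)
mask[10:30, 40:80, 40:80] = True

voxel_mm3 = 0.5 * 0.5 * 1.5                  # volume of one voxel, mm^3
volume_cm3 = mask.sum() * voxel_mm3 / 1000.0
print(f"{volume_cm3:.1f} cm^3")              # 12.0 cm^3 for this mask
```

    The measured volume would then be compared against the normal upper limits reported in the study.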

  2. Analysis of volume holographic storage allowing large-angle illumination

    NASA Astrophysics Data System (ADS)

    Shamir, Joseph

    2005-05-01

    Advanced technological developments have stimulated renewed interest in volume holography for applications such as information storage and wavelength multiplexing for communications and laser beam shaping. In these and many other applications, the information-carrying wave fronts usually possess narrow spatial-frequency bands, although they may propagate at large angles with respect to each other or a preferred optical axis. Conventional analytic methods are not capable of properly analyzing the optical architectures involved. For mitigation of the analytic difficulties, a novel approximation is introduced to treat narrow spatial-frequency band wave fronts propagating at large angles. This approximation is incorporated into the analysis of volume holography based on a plane-wave decomposition and Fourier analysis. As a result of the analysis, the recently introduced generalized Bragg selectivity is rederived for this more general case and is shown to provide enhanced performance for the above indicated applications. The power of the new theoretical description is demonstrated with the help of specific examples and computer simulations. The simulations reveal some interesting effects, such as coherent motion blur, that were predicted in an earlier publication.

  3. Synfuel program analysis. Volume I. Procedures-capabilities

    SciTech Connect

    Muddiman, J. B.; Whelan, J. W.

    1980-07-01

    This is the first of the two volumes describing the analytic procedures and resulting capabilities developed by Resource Applications (RA) for examining the economic viability, public costs, and national benefits of alternative synfuel projects and integrated programs. This volume is intended for Department of Energy (DOE) and Synthetic Fuel Corporation (SFC) program management personnel and includes a general description of the costing, venture, and portfolio models with enough detail for the reader to be able to specify cases and interpret outputs. It also contains an explicit description (with examples) of the types of results which can be obtained when applied to: the analysis of individual projects; the analysis of input uncertainty, i.e., risk; and the analysis of portfolios of such projects, including varying technology mixes and buildup schedules. In all cases, the objective is to obtain, on the one hand, comparative measures of private investment requirements and expected returns (under differing public policies) as they affect the private decision to proceed, and, on the other, public costs and national benefits as they affect public decisions to participate (in what form, in what areas, and to what extent).

  4. User's operating procedures. Volume 2: Scout project financial analysis program

    NASA Technical Reports Server (NTRS)

    Harris, C. G.; Haris, D. K.

    1985-01-01

    A review is presented of the user's operating procedures for the Scout Project Automatic Data System, called SPADS. SPADS is the result of the past seven years of software development on a Prime mini-computer located at the Scout Project Office, NASA Langley Research Center, Hampton, Virginia. SPADS was developed as a single-entry, multiple cross-reference data management and information retrieval system for the automation of Project office tasks, including engineering, financial, managerial, and clerical support. This volume, two (2) of three (3), provides the instructions to operate the Scout Project Financial Analysis program for data retrieval and file maintenance via user-friendly menu drivers.

  5. Multi-level segment analysis: definition and application in turbulent systems

    NASA Astrophysics Data System (ADS)

    Wang, L. P.; Huang, Y. X.

    2015-06-01

    For many complex systems the interaction of different scales is among the most interesting and challenging features. Existing approaches, such as the structure-function and Fourier spectrum methods, have not been very successful at extracting the physical properties in different scale regimes: fundamentally, these methods have their respective limitations, for instance scale mixing, i.e. the so-called infrared and ultraviolet effects. To make improvements in this regard, a new method, multi-level segment analysis (MSA) based on local extrema statistics, has been developed. Benchmark (fractional Brownian motion) verifications and important case tests (Lagrangian and two-dimensional turbulence) show that MSA can successfully reveal different scaling regimes which have remained quite controversial in turbulence research. In general, the MSA method proposed here can be applied to different dynamic systems in which the concepts of multiscale and multifractality are relevant.
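    The local-extrema partition underlying MSA can be sketched in one dimension: the series is split at its local extrema and each segment is characterized by its length and height. This toy version on a sine wave is an illustration only, not the multi-level statistics of the paper.

```python
import numpy as np

def extrema_segments(x):
    """Split a 1-D series at its local extrema; return the extrema
    indices plus each segment's (length, height)."""
    d = np.diff(x)
    # a sign change between consecutive increments marks an extremum
    turn = np.where(np.sign(d[1:]) * np.sign(d[:-1]) < 0)[0] + 1
    idx = np.concatenate(([0], turn, [len(x) - 1]))
    return idx, np.diff(idx), np.abs(np.diff(x[idx]))

t = np.linspace(0.0, 4.0 * np.pi, 200)
idx, lengths, heights = extrema_segments(np.sin(t))
print(len(lengths))          # 5 segments: 4 interior extrema on [0, 4*pi]
print(np.round(heights, 2))  # interior segments span peak-to-trough
```

    Statistics over many such (length, height) pairs, gathered at multiple levels, are what MSA uses to characterize scaling regimes.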

  6. Online kernel slow feature analysis for temporal video segmentation and tracking.

    PubMed

    Liwicki, Stephan; Zafeiriou, Stefanos P; Pantic, Maja

    2015-10-01

    Slow feature analysis (SFA) is a dimensionality reduction technique which has been linked to how visual brain cells work. In recent years, SFA was adopted for computer vision tasks. In this paper, we propose an exact kernel SFA (KSFA) framework for positive definite and indefinite kernels in Krein space. We then formulate an online KSFA which employs a reduced set expansion. Finally, by utilizing a special kind of kernel family, we formulate exact online KSFA for which no reduced set is required. We apply the proposed system to develop an SFA-based change detection algorithm for stream data. This framework is employed for temporal video segmentation and tracking. We test our setup on synthetic and real data streams. When combined with an online learning tracking system, the proposed change detection approach improves upon tracking setups that do not utilize change detection.

  7. Automatic neuron segmentation and neural network analysis method for phase contrast microscopy images

    PubMed Central

    Pang, Jincheng; Özkucur, Nurdan; Ren, Michael; Kaplan, David L.; Levin, Michael; Miller, Eric L.

    2015-01-01

    Phase Contrast Microscopy (PCM) is an important tool for the long term study of living cells. Unlike fluorescence methods which suffer from photobleaching of fluorophore or dye molecules, PCM image contrast is generated by the natural variations in optical index of refraction. Unfortunately, the same physical principles which allow for these studies give rise to complex artifacts in the raw PCM imagery. Of particular interest in this paper are neuron images where these image imperfections manifest in very different ways for the two structures of specific interest: cell bodies (somas) and dendrites. To address these challenges, we introduce a novel parametric image model using the level set framework and an associated variational approach which simultaneously restores and segments this class of images. Using this technique as the basis for an automated image analysis pipeline, results for both the synthetic and real images validate and demonstrate the advantages of our approach. PMID:26601004

  8. Analysis of Nuclear Mitochondrial DNA Segments of Nine Plant Species: Size, Distribution, and Insertion Loci

    PubMed Central

    Ko, Young-Joon

    2016-01-01

    Nuclear mitochondrial DNA segment (Numt) insertion describes a well-known phenomenon of mitochondrial DNA transfer into a eukaryotic nuclear genome. However, it has not been well understood, especially in plants. Numt insertion patterns vary from species to species across kingdoms. In this study, the patterns were surveyed in nine plant species, and we found some tip-offs. First, when the mitochondrial genome size is relatively large, the proportion of longer Numts is also larger than that of shorter ones. Second, whole genome duplication events increase the ratio of shorter Numts in the size distribution. Third, Numt insertions are enriched in exon regions. This analysis may be helpful for understanding plant evolution. PMID:27729838

  9. Segmented coupled-wave analysis of a curved wire-grid polarizer.

    PubMed

    Kim, Donghyun; Sim, Eunji

    2008-03-01

    The performance of a wire-grid polarizer (WGP) on a curved surface was investigated with a simple numerical model. The computation model combines rigorous coupled-wave analysis with piecewise linear segmentation that approximates a curved surface for two bending configurations. A curvature-induced Rayleigh anomaly is found to be the main performance degradation mechanism that reduces transmittance and polarization contrast. A WGP on a curved surface is more likely to incur the Rayleigh anomaly with smaller surface curvature. For a given curvature, a larger WGP is more vulnerable. Effects of polar and azimuthal incidence angles were also analyzed. Suggestions were made in regard to a WGP design that minimizes the performance degradation.

  10. Adaptation of multifractal analysis to segmentation of microcalcifications in digital mammograms

    NASA Astrophysics Data System (ADS)

    Stojić, Tomislav; Reljin, Irini; Reljin, Branimir

    2006-07-01

    A method for detecting microcalcifications in digital mammograms is proposed. After recognizing the basic features of microcalcifications, we introduced several modifications into multifractal analysis, obtaining an efficient method adapted to enhance only small light regions not belonging to the surrounding tissue, i.e. possible microcalcifications. Starting from a mammogram image, the method creates a corresponding multifractal image, from which a radiologist has the freedom to change the level of segmentation in an interactive manner and to find suspicious regions which may contain microcalcifications. Additional postprocessing, based on mathematical morphology, refines the procedure by selecting and outlining regions that contain clusters of microcalcifications. The proposed method was tested on reference mammograms from the publicly available MiniMIAS database and successfully extracted microcalcifications in all (clinically approved) cases belonging to this database.

  11. Automated identification of best-quality coronary artery segments from multiple-phase coronary CT angiography (cCTA) for vessel analysis

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Chughtai, Aamer; Wei, Jun; Kazerooni, Ella A.

    2016-03-01

    We are developing an automated method to identify the best-quality segment among corresponding segments in multiple-phase cCTA. The coronary artery trees are automatically extracted from the different cCTA phases using our multi-scale vessel segmentation and tracking method. An automated registration method is then used to align the multiple-phase artery trees. The corresponding coronary artery segments are identified in the registered vessel trees and are straightened by curved planar reformation (CPR). Four features are extracted from each segment in each phase as quality indicators in the original CT volume and the straightened CPR volume. Each quality indicator is used as a voting classifier to vote on the corresponding segments. A newly designed weighted voting ensemble (WVE) classifier is finally used to determine the best-quality coronary segment. An observer preference study was conducted with three readers who visually rated the quality of the vessels on a ranking scale of 1 to 6. Six and 10 cCTA cases were used as the training and test sets in this preliminary study. For the 10 test cases, the agreement between automatically identified best-quality (AI-BQ) segments and the radiologist's top 2 rankings was 79.7%, and between AI-BQ and the other two readers, 74.8% and 83.7%, respectively. The results demonstrated that the performance of our automated method was comparable to that of experienced readers in identifying the best-quality coronary segments.

  12. Segmentation of fluorescence microscopy images for quantitative analysis of cell nuclear architecture.

    PubMed

    Russell, Richard A; Adams, Niall M; Stephens, David A; Batty, Elizabeth; Jensen, Kirsten; Freemont, Paul S

    2009-04-22

    Considerable advances in microscopy, biophysics, and cell biology have provided a wealth of imaging data describing the functional organization of the cell nucleus. Until recently, cell nuclear architecture has largely been assessed by subjective visual inspection of fluorescently labeled components imaged by the optical microscope. This approach is inadequate to fully quantify spatial associations, especially when the patterns are indistinct, irregular, or highly punctate. Accurate image processing techniques as well as statistical and computational tools are thus necessary to interpret this data if meaningful spatial-function relationships are to be established. Here, we have developed a thresholding algorithm, stable count thresholding (SCT), to segment nuclear compartments in confocal laser scanning microscopy image stacks to facilitate objective and quantitative analysis of the three-dimensional organization of these objects using formal statistical methods. We validate the efficacy and performance of the SCT algorithm using real images of immunofluorescently stained nuclear compartments and fluorescent beads as well as simulated images. In all three cases, the SCT algorithm delivers a segmentation that is far better than standard thresholding methods, and more importantly, is comparable to manual thresholding results. By applying the SCT algorithm and statistical analysis, we quantify the spatial configuration of promyelocytic leukemia nuclear bodies with respect to irregular-shaped SC35 domains. We show that the compartments are closer than expected under a null model for their spatial point distribution, and furthermore that their spatial association varies according to cell state. The methods reported are general and can readily be applied to quantify the spatial interactions of other nuclear compartments.
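    One plausible reading of stable count thresholding is to scan candidate thresholds and prefer the range over which the number of connected components stays constant; the sketch below follows that reading on synthetic blobs and is not the authors' exact algorithm.

```python
import numpy as np
from scipy import ndimage

def stable_count_threshold(img, thresholds):
    """Return a threshold from the longest run of thresholds over which
    the connected-component count stays constant."""
    counts = [ndimage.label(img > t)[1] for t in thresholds]
    best_start, best_len, start = 0, 1, 0
    for i in range(1, len(counts)):
        if counts[i] != counts[i - 1]:
            start = i
        if i - start + 1 > best_len:
            best_len, best_start = i - start + 1, start
    return thresholds[best_start + best_len // 2], counts

# Synthetic image: three bright 5x5 "nuclear bodies" on dim noise,
# plus one mid-intensity speck that only survives low thresholds.
rng = np.random.default_rng(1)
img = rng.normal(0.1, 0.02, (64, 64))
for r, c in [(16, 16), (40, 20), (30, 48)]:
    img[r - 2:r + 3, c - 2:c + 3] += 0.8
img[5, 5] = 0.45

t, counts = stable_count_threshold(img, np.linspace(0.3, 0.8, 26))
print(round(t, 2))  # a threshold from the widest stable plateau
```

    The chosen threshold sits inside the plateau where the speck has vanished and only the three genuine compartments are counted.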

  13. Interactive 3D Analysis of Blood Vessel Trees and Collateral Vessel Volumes in Magnetic Resonance Angiograms in the Mouse Ischemic Hindlimb Model.

    PubMed

    Marks, Peter C; Preda, Marilena; Henderson, Terry; Liaw, Lucy; Lindner, Volkhard; Friesel, Robert E; Pinz, Ilka M

    2013-10-31

    The quantitative analysis of blood vessel volumes from magnetic resonance angiograms (MRA) or μCT images is difficult and time-consuming. This fact, when combined with a study that involves multiple scans of multiple subjects, can represent a significant portion of research time. In order to enhance analysis options and to provide an automated and fast analysis method, we developed a software plugin for the ImageJ and Fiji image processing frameworks that enables the quick and reproducible volume quantification of blood vessel segments. The novel plugin, named Volume Calculator (VolCal), accepts any binary (thresholded) image and produces a three-dimensional schematic representation of the vasculature that can be directly manipulated by the investigator. Using MRAs of the mouse hindlimb ischemia model, we demonstrate quick and reproducible blood vessel volume calculations with 95-98% accuracy. In clinical settings this software may enhance image interpretation and the speed of data analysis, and thus support intervention decisions, for example in peripheral vascular disease or aneurysms. In summary, we provide a novel, fast and interactive quantification of blood vessel volumes for single blood vessels or sets of vessel segments, with particular focus on collateral formation after an ischemic insult. PMID:24563682
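    The core volume computation such a plugin performs can be sketched in a few lines (a hedged illustration, not VolCal's actual code): the volume of a thresholded vessel segment is the foreground-voxel count times the physical volume of one voxel.

```python
# Minimal sketch (not the VolCal implementation): vessel volume from a
# binary image stack is foreground-voxel count x single-voxel volume.

def vessel_volume_mm3(binary_stack, voxel_dims_mm):
    """binary_stack: 3D nested list of 0/1; voxel_dims_mm: (dz, dy, dx)."""
    voxel_volume = voxel_dims_mm[0] * voxel_dims_mm[1] * voxel_dims_mm[2]
    n_foreground = sum(v for plane in binary_stack for row in plane for v in row)
    return n_foreground * voxel_volume
```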

  14. Motion analysis of knee joint using dynamic volume images

    NASA Astrophysics Data System (ADS)

    Haneishi, Hideaki; Kohno, Takahiro; Suzuki, Masahiko; Moriya, Hideshige; Mori, Sin-ichiro; Endo, Masahiro

    2006-03-01

    Acquisition and analysis of the three-dimensional movement of the knee joint is desired in orthopedic surgery. We have developed two methods to obtain dynamic volume images of the knee joint. One is a 2D/3D registration method combining bi-plane dynamic X-ray fluoroscopy with a static three-dimensional CT; the other uses so-called 4D-CT, with a cone beam and a wide 2D detector. In this paper, we present two analyses of knee joint movement obtained by these methods: (1) transition of the nearest points between the femur and tibia, and (2) principal component analysis (PCA) of six parameters representing the three-dimensional movement of the knee. As preprocessing for the analysis, the femur and tibia regions are first extracted from the volume data at each time frame, and then the tibia is registered between frames by an affine transformation consisting of rotation and translation. The same transformation is applied to the femur as well. Using those image data, the movement of the femur relative to the tibia can be analyzed. Six movement parameters of the femur, consisting of three translation parameters and three rotation parameters, are obtained from those images. In analysis (1), the axis of each bone is first found and the flexion angle of the knee joint is calculated. For each flexion angle, the minimum distance between femur and tibia and the location giving that minimum are found for both the lateral and medial condyles. As a result, it was observed that the movement of the lateral condyle is larger than that of the medial condyle. In analysis (2), it was found that the movement of the knee can be represented by the first three principal components with a precision of 99.58%, and those three components appear to relate strongly to three major movements of the femur in the knee bend known in orthopedic surgery.
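    The PCA step described above, stacking the six pose parameters over time frames and asking how much variance the leading components capture, can be sketched as follows (synthetic data; not the authors' implementation):

```python
# Sketch of the PCA step on hypothetical pose data: rows are time frames,
# columns are the six parameters (3 translations + 3 rotations). We measure
# the fraction of total variance captured by the top-k principal components.
import numpy as np

def explained_variance_ratio(params, k=3):
    """params: (n_frames, 6) array; returns fraction of variance in top-k PCs."""
    centered = params - params.mean(axis=0)
    # Singular values relate to PC variances: var_i = s_i**2 / (n - 1)
    s = np.linalg.svd(centered, compute_uv=False)
    variances = s ** 2
    return variances[:k].sum() / variances.sum()
```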

  15. Integrated segmentation of cellular structures

    NASA Astrophysics Data System (ADS)

    Ajemba, Peter; Al-Kofahi, Yousef; Scott, Richard; Donovan, Michael; Fernandez, Gerardo

    2011-03-01

    Automatic segmentation of cellular structures is an essential step in image cytology and histology. Despite substantial progress, better automation and improvements in accuracy and adaptability to novel applications are needed. In applications utilizing multi-channel immunofluorescence images, challenges include misclassification of epithelial and stromal nuclei, irregular nuclei and cytoplasm boundaries, and over- and under-segmentation of clustered nuclei. Variations in image acquisition conditions and artifacts from nuclei and cytoplasm images often confound existing algorithms in practice. In this paper, we present a robust and accurate algorithm for jointly segmenting cell nuclei and cytoplasm using a combination of ideas to reduce the aforementioned problems. First, an adaptive process that includes top-hat filtering, eigenvalues-of-Hessian blob detection, and distance transforms is used to estimate the inverse illumination field and correct for intensity non-uniformity in the nuclei channel. Next, a minimum-error-thresholding-based binarization process and a seed-detection step combining Laplacian-of-Gaussian filtering constrained by distance-map-based scale selection are used to identify candidate seeds for nuclei segmentation. The initial segmentation using a local maximum clustering algorithm is refined using a minimum-error-thresholding technique. Final refinements include an artifact removal process specifically targeted at lumens and other problematic structures and a systematic decision process to reclassify nuclei objects near the cytoplasm boundary as epithelial or stromal. Segmentation results were evaluated using 48 realistic phantom images with known ground truth. The overall segmentation accuracy exceeds 94%. The algorithm was further tested on 981 images of actual prostate cancer tissue. The artifact removal process worked in 90% of cases. The algorithm has now been deployed in a high-volume histology analysis application.
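    One named ingredient, minimum-error thresholding, has a standard textbook form (the Kittler-Illingworth criterion); the histogram-based sketch below is that generic version, not the authors' exact implementation.

```python
# Hedged sketch of minimum-error thresholding: for each candidate threshold
# T, model the two sides of the histogram as Gaussians and minimize the
# Kittler-Illingworth criterion J(T).
import math

def min_error_threshold(hist):
    """hist: list of counts per gray level; returns the level minimizing J(T)."""
    total = sum(hist)
    best_t, best_j = None, float("inf")
    for t in range(1, len(hist)):
        n1 = sum(hist[:t]); n2 = total - n1
        if n1 == 0 or n2 == 0:
            continue
        q1, q2 = n1 / total, n2 / total
        m1 = sum(i * hist[i] for i in range(t)) / n1
        m2 = sum(i * hist[i] for i in range(t, len(hist))) / n2
        v1 = sum(hist[i] * (i - m1) ** 2 for i in range(t)) / n1
        v2 = sum(hist[i] * (i - m2) ** 2 for i in range(t, len(hist))) / n2
        if v1 <= 0 or v2 <= 0:
            continue  # degenerate side; skip this threshold
        j = (1 + 2 * (q1 * math.log(math.sqrt(v1)) + q2 * math.log(math.sqrt(v2)))
               - 2 * (q1 * math.log(q1) + q2 * math.log(q2)))
        if j < best_j:
            best_t, best_j = t, j
    return best_t
```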

  16. Parallel runway requirement analysis study. Volume 1: The analysis

    NASA Technical Reports Server (NTRS)

    Ebrahimi, Yaghoob S.

    1993-01-01

    The correlation of increased flight delays with the level of aviation activity is well recognized. A main contributor to these flight delays has been the capacity of airports. Though new airport and runway construction would significantly increase airport capacity, few programs of this type are currently underway, let alone planned, because of the high cost associated with such endeavors. Therefore, it is necessary to achieve the most efficient and cost-effective use of existing fixed airport resources through better planning and control of traffic flows. In fact, during the past few years the FAA has initiated such an airport capacity program designed to provide additional capacity at existing airports. Some of the improvements that this program has generated thus far have been based on new Air Traffic Control procedures, terminal automation, additional Instrument Landing Systems, improved controller display aids, and improved utilization of multiple runways/Instrument Meteorological Conditions (IMC) approach procedures. A useful element in understanding potential operational capacity enhancements at high-demand airports has been the development and use of an analysis tool called the PLAND_BLUNDER (PLB) Simulation Model. The objective for building this simulation was to develop a parametric model that could be used to determine the minimum safety level of parallel runway operations for various parameters representing the airplane, navigation, surveillance, and ATC system performance. This simulation is useful as: a quick and economical evaluation of existing environments that are experiencing IMC delays, an efficient way to study and validate proposed procedure modifications, an aid in evaluating requirements for new airports or new runways in old airports, a simple, parametric investigation of a wide range of issues and approaches, an ability to trade off air- and ground-technology and procedures contributions, and a way of considering probable

  17. Automated Forensic Animal Family Identification by Nested PCR and Melt Curve Analysis on an Off-the-Shelf Thermocycler Augmented with a Centrifugal Microfluidic Disk Segment.

    PubMed

    Keller, Mark; Naue, Jana; Zengerle, Roland; von Stetten, Felix; Schmidt, Ulrike

    2015-01-01

    Nested PCR remains a labor-intensive and error-prone biomolecular analysis. Laboratory workflow automation by precise control of minute liquid volumes in centrifugal microfluidic Lab-on-a-Chip systems holds great potential for such applications. However, the majority of these systems require costly custom-made processing devices. Our idea is to augment a standard laboratory device, here a centrifugal real-time PCR thermocycler, with built-in liquid handling capabilities for automation. We have developed a microfluidic disk segment enabling an automated nested real-time PCR assay for identification of common European animal groups, adapted to forensic standards. For the first time, we utilize a novel combination of fluidic elements, including pre-storage of reagents, to automate the assay at constant rotational frequency of an off-the-shelf thermocycler. It provides a universal duplex pre-amplification of short fragments of the mitochondrial 12S rRNA and cytochrome b genes, animal-group-specific main amplifications, and melting curve analysis for differentiation. The system was characterized with respect to assay sensitivity, specificity, risk of cross-contamination, and detection of minor components in mixtures. In 92.2% of the performed tests, sample handling was fluidically failure-free, and these tests were used for evaluation. Altogether, augmentation of the standard real-time thermocycler with a self-contained centrifugal microfluidic disk segment resulted in an accelerated and automated analysis, reducing hands-on time and circumventing the risk of contamination associated with regular nested PCR protocols.

  18. Volume analysis of heat-induced cracks in human molars: A preliminary study

    PubMed Central

    Sandholzer, Michael A.; Baron, Katharina; Heimel, Patrick; Metscher, Brian D.

    2014-01-01

    Context: Only a few published methods deal with the visualization of heat-induced cracks inside bones and teeth. Aims: As a novel approach, this study used nondestructive X-ray microtomography (micro-CT) for volume analysis of heat-induced cracks to observe the reaction of human molars to various levels of thermal stress. Materials and Methods: Eighteen clinically extracted third molars were rehydrated and burned at controlled temperatures (400, 650, and 800°C) using an electric furnace set to a heating rate of 25°C/min. The subsequent high-resolution scans (voxel size 17.7 μm) were made with a compact micro-CT scanner (SkyScan 1174). In total, 14 scans were automatically segmented with Definiens XD Developer 1.2, and three-dimensional (3D) models were computed with Visage Imaging Amira 5.2.2. The results of the automated segmentation were analyzed with an analysis of variance (ANOVA) and uncorrected post hoc least significant difference (LSD) tests using the Statistical Package for the Social Sciences (SPSS) 17. A probability level of P < 0.05 was used as an index of statistical significance. Results: A temperature-dependent increase in heat-induced cracks was observed between the three temperature groups (P < 0.05, ANOVA post hoc LSD). In addition, the distribution and shape of the heat-induced changes could be classified using the computed 3D models. Conclusion: The macroscopic heat-induced changes observed in this preliminary study correspond with previous observations of unrestored human teeth, yet the current observations also take into account the entire microscopic 3D expansion of heat-induced cracks within the dental hard tissues. Using the same experimental conditions proposed in the literature, this study confirms previous results, adds new observations, and offers new perspectives in the investigation of forensic evidence. PMID:25125923
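    The group comparison reported above rests on a one-way ANOVA; the F statistic it computes can be illustrated with made-up measurements (the textbook formula, not the SPSS output):

```python
# One-way ANOVA F statistic from first principles: between-group versus
# within-group mean squares. Data below are illustrative, not the study's.

def one_way_anova_f(groups):
    """groups: list of lists of measurements; returns (F, df_between, df_within)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w
```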

  19. Comparative analysis of operational forecasts versus actual weather conditions in airline flight planning, volume 4

    NASA Technical Reports Server (NTRS)

    Keitz, J. F.

    1982-01-01

    The impact of more timely and accurate weather data on airline flight planning, with emphasis on fuel savings, is studied. This volume of the report discusses the results of Task 4 of the four major tasks included in the study. Task 4 uses flight plan segment wind and temperature differences as indicators of dates and geographic areas for which significant forecast errors may have occurred. An in-depth analysis is then conducted for the days identified. The analysis shows that significant errors occurred in the operational forecast on 15 of the 33 arbitrarily selected days included in the study. Wind speeds in an area of maximum winds were underestimated by at least 20 to 25 kts on 14 of these days. The analysis also shows a tendency to repeat the same forecast errors from prog to prog. Also, some perceived forecast errors from the flight plan comparisons could not be verified by visual inspection of the corresponding National Meteorological Center forecast and analysis charts, and it is likely that they are the result of weather data interpolation techniques or some other data processing procedure in the airlines' flight planning systems.

  20. Automatic cell segmentation and nuclear-to-cytoplasmic ratio analysis for third harmonic generated microscopy medical images.

    PubMed

    Lee, Gwo Giun; Lin, Huan-Hsiang; Tsai, Ming-Rung; Chou, Sin-Yo; Lee, Wen-Jeng; Liao, Yi-Hua; Sun, Chi-Kuang; Chen, Chun-Fu

    2013-04-01

    Traditional biopsy procedures require invasive tissue removal from a living subject, followed by time-consuming and complicated processing. Noninvasive in vivo virtual biopsy, which can obtain detailed tissue images without removing tissue, is therefore highly desired. Sets of in vivo virtual biopsy images provided by healthy volunteers were processed by the proposed cell segmentation approach, which is based on a watershed algorithm and the concept of the convergence index filter. Experimental results suggest that the proposed algorithm not only achieves high cell segmentation accuracy but also has strong potential for noninvasive analysis of the cell nuclear-to-cytoplasmic (NC) ratio, which is important in identifying or detecting early symptoms of diseases with abnormal NC ratios, such as skin cancers, during clinical diagnosis via medical image analysis.

  1. Identifying Like-Minded Audiences for Global Warming Public Engagement Campaigns: An Audience Segmentation Analysis and Tool Development

    PubMed Central

    Maibach, Edward W.; Leiserowitz, Anthony; Roser-Renouf, Connie; Mertz, C. K.

    2011-01-01

    Background Achieving national reductions in greenhouse gas emissions will require public support for climate and energy policies and changes in population behaviors. Audience segmentation – a process of identifying coherent groups within a population – can be used to improve the effectiveness of public engagement campaigns. Methodology/Principal Findings In Fall 2008, we conducted a nationally representative survey of American adults (n = 2,164) to identify audience segments for global warming public engagement campaigns. By subjecting multiple measures of global warming beliefs, behaviors, policy preferences, and issue engagement to latent class analysis, we identified six distinct segments ranging in size from 7 to 33% of the population. These six segments formed a continuum, from a segment of people who were highly worried, involved and supportive of policy responses (18%), to a segment of people who were completely unconcerned and strongly opposed to policy responses (7%). Three of the segments (totaling 70%) were to varying degrees concerned about global warming and supportive of policy responses, two (totaling 18%) were unsupportive, and one was largely disengaged (12%), having paid little attention to the issue. Certain behaviors and policy preferences varied greatly across these audiences, while others did not. Using discriminant analysis, we subsequently developed 36-item and 15-item instruments that can be used to categorize respondents with 91% and 84% accuracy, respectively. Conclusions/Significance In late 2008, Americans supported a broad range of policies and personal actions to reduce global warming, although there was wide variation among the six identified audiences. To enhance the impact of campaigns, government agencies, non-profit organizations, and businesses seeking to engage the public can selectively target one or more of these audiences rather than address an undifferentiated general population. Our screening instruments are

  2. Flow Analysis on a Limited Volume Chilled Water System

    SciTech Connect

    Zheng, Lin

    2012-07-31

    LANL currently has a limited-volume chilled water system for use in a glove box, but the system needs to be updated. Before we start building our new system, a flow analysis is needed to ensure that there are no high flow rates, extreme pressures, or other hazards in the system. In this project the piping system is extremely important because it directly affects the overall design of the entire system. The primary components necessary for the chilled water piping system are shown in the design. They include the pipes themselves (perhaps of more than one diameter), the various fittings used to connect the individual pipes to form the desired system, the flow rate control devices (valves), and the pumps that add energy to the fluid. Even the simplest pipe systems are actually quite complex when viewed in terms of rigorous analytical considerations. I used an 'exact' analysis and dimensional analysis considerations combined with experimental results for this project. When 'real-world' effects are important (such as viscous effects in pipe flows), it is often difficult or impossible to use only theoretical methods to obtain the desired results. A judicious combination of experimental data with theoretical considerations and dimensional analysis is needed in order to reduce risks to an acceptable level.
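    A first-pass check of the kind described often starts from the Reynolds number and the Darcy-Weisbach head loss. The sketch below uses the laminar friction factor f = 64/Re for simplicity (turbulent flow would need a correlation such as Colebrook's), and all numbers are illustrative, not LANL's design values.

```python
# Back-of-envelope pipe-flow check: Reynolds number and Darcy-Weisbach
# head loss for a straight run, laminar friction factor only.
import math

def reynolds(velocity, diameter, kinematic_viscosity):
    """Re = v * D / nu (SI units)."""
    return velocity * diameter / kinematic_viscosity

def head_loss_laminar(velocity, diameter, length, kinematic_viscosity, g=9.81):
    """Darcy-Weisbach: h = f * (L/D) * v^2 / (2g), with f = 64/Re."""
    re = reynolds(velocity, diameter, kinematic_viscosity)
    f = 64.0 / re
    return f * (length / diameter) * velocity ** 2 / (2 * g)
```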

  3. Treatment Response Assessment of Head and Neck Cancers on CT Using Computerized Volume Analysis

    PubMed Central

    Hadjiiski, L.; Mukherji, S.K.; Gujar, S.K.; Sahiner, B.; Ibrahim, M.; Street, E.; Moyer, J.; Worden, F.P.; Chan, H.-P.

    2013-01-01

    Background and Purpose Head and neck cancer can cause substantial morbidity and mortality. Our aim was to evaluate the potential usefulness of a computerized system for segmenting lesions in head and neck CT scans and for estimating the volume change of head and neck malignant tumors in response to treatment. Materials and Methods CT scans from a pretreatment examination and a post-1-cycle-chemotherapy examination of 34 patients with 34 head and neck primary-site cancers were collected. The computerized system was developed in our laboratory. It performs 3D segmentation on the basis of a level-set model and uses as input an approximate bounding box for the lesion of interest. The 34 tumors included tongue, tonsil, vallecula, supraglottic, epiglottic, and hard palate carcinomas. As a reference standard, 1 radiologist outlined full 3D contours for each of the 34 primary tumors for both the pre- and posttreatment scans, and a second radiologist verified the contours. Results The correlation between the automatic and manual estimates of both the pre- to posttreatment volume change and the percentage volume change for the 34 primary-site tumors was 0.95, with an average error of −2.4 ± 8.5% for automatic segmentation. There was no substantial difference or specific trend in automatic segmentation accuracy across the different types of primary head and neck tumors, indicating that the computerized segmentation performs relatively robustly for this application. Conclusions The tumor size change in response to treatment can be accurately estimated by the computerized segmentation system relative to radiologists' manual estimations for different types of head and neck tumors. PMID:20595363
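    The two summary statistics reported, percentage volume change and the correlation between automatic and manual estimates, reduce to short formulas (illustrative sketch, not the study's code):

```python
# Percentage volume change per tumour, and Pearson correlation between
# two sets of estimates (e.g. automatic vs. manual). Values illustrative.

def pct_change(pre, post):
    """Percentage change from pre- to post-treatment volume."""
    return (post - pre) / pre * 100.0

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```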

  4. A Theoretical Analysis of How Segmentation of Dynamic Visualizations Optimizes Students' Learning

    ERIC Educational Resources Information Center

    Spanjers, Ingrid A. E.; van Gog, Tamara; van Merrienboer, Jeroen J. G.

    2010-01-01

    This article reviews studies investigating segmentation of dynamic visualizations (i.e., showing dynamic visualizations in pieces with pauses in between) and discusses two not mutually exclusive processes that might underlie the effectiveness of segmentation. First, cognitive activities needed for dealing with the transience of dynamic…

  5. Understanding the market for geographic information: A market segmentation and characteristics analysis

    NASA Technical Reports Server (NTRS)

    Piper, William S.; Mick, Mark W.

    1994-01-01

    Findings and results from a marketing research study are presented. The report identifies market segments and the product types to satisfy demand in each. An estimate of market size is based on the specific industries in each segment. A sample of ten industries was used in the study. The scientific study covered U.S. firms only.

  6. A coronary artery segmentation method based on multiscale analysis and region growing.

    PubMed

    Kerkeni, Asma; Benabdallah, Asma; Manzanera, Antoine; Bedoui, Mohamed Hedi

    2016-03-01

    Accurate coronary artery segmentation is a fundamental step in various medical imaging applications such as stenosis detection, 3D reconstruction and cardiac dynamics assessment. In this paper, a multiscale region growing (MSRG) method for coronary artery segmentation in 2D X-ray angiograms is proposed. First, a region growing rule incorporating both vesselness and direction information in a unique way is introduced. Then an iterative multiscale search based on this criterion is performed. Points selected in each step are considered as seeds for the following step. By combining vesselness and direction information in the growing rule, this method is able to avoid blockage caused by low vesselness values in vascular regions, which in turn yields a continuous vessel tree. Performing the process in a multiscale fashion helps to extract thin and peripheral vessels often missed by other segmentation methods. Quantitative evaluation performed on real angiography images shows that the proposed segmentation method identifies about 80% of the total coronary artery tree in relatively easy images and 70% in challenging cases, with a mean precision of 82%, and outperforms other segmentation methods in terms of sensitivity. The MSRG segmentation method was also implemented with different enhancement filters, and the Frangi filter was shown to give better results. The proposed segmentation method has proven to be well suited to coronary artery segmentation: it maintains acceptable performance when dealing with challenging situations such as noise, stenosis and poor contrast. PMID:26748040
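    A stripped-down version of the growing step can be sketched as follows; for brevity this uses only a vesselness threshold and omits the paper's direction term and multiscale iteration:

```python
# Simplified region growing on a 2D vesselness map: seeds expand into
# 4-connected neighbours whose vesselness is at least `tau`. The paper's
# direction criterion and multiscale loop are intentionally omitted.
from collections import deque

def region_grow(vesselness, seeds, tau):
    """vesselness: 2D nested list; seeds: list of (row, col); returns a set."""
    rows, cols = len(vesselness), len(vesselness[0])
    region = set(s for s in seeds if vesselness[s[0]][s[1]] >= tau)
    queue = deque(region)
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < rows and 0 <= nx < cols
                    and (ny, nx) not in region
                    and vesselness[ny][nx] >= tau):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region
```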

  7. Segmental bioimpedance analysis in professional cyclists during a three week stage race.

    PubMed

    Marra, Maurizio; Da Prat, Barbara; Montagnese, Concetta; Caldara, Annarita; Sammarco, Rosa; Pasanisi, Fabrizio; Corsetti, Roberto

    2016-07-01

    Bioelectrical impedance analysis has been widely used in the clinical and sport areas because it is a safe, non-invasive, rapid and inexpensive technique that evaluates electrical properties of the body, such as resistance (R), reactance (Xc) and phase angle (PhA). The aim of this study is to evaluate body composition changes in professional cyclists participating in the Giro d'Italia 2012, a three-week stage race, and in particular PhA modifications as an expression of fat-free mass nutritional status. Data were collected at the beginning, in the middle and at the end of the competition. Body weight, circumferences, skinfold thickness and BIA variables (total and segmental body) were measured. Body composition, measured by skinfold thickness, changed during the competition: fat-free mass increased, but not significantly, in the middle and at the end of the competition, whereas fat mass significantly decreased versus the baseline in the middle and at the end of the competition. The total PhA did not change significantly in the middle of the competition but was significantly reduced at the end. The arm PhA did not change significantly at either time point, whereas a significant reduction was observed for leg PhA in the middle and at the end of the competition. These results suggest the use of bioimpedance analysis, in particular PhA measurement, to monitor athletes' fat-free mass characteristics during medium- and long-term competitions. PMID:27243798
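    The phase angle follows directly from resistance and reactance via the standard formula PhA = arctan(Xc/R); a minimal sketch with illustrative values:

```python
# Standard bioimpedance phase angle: PhA (degrees) = arctan(Xc / R).
# The R and Xc values in the test are illustrative, not study data.
import math

def phase_angle_deg(resistance, reactance):
    """Phase angle in degrees from resistance R and reactance Xc (ohms)."""
    return math.degrees(math.atan(reactance / resistance))
```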

  8. Stereophotogrammetric Mass Distribution Parameter Determination of the Lower Body Segments for Use in Gait Analysis

    NASA Astrophysics Data System (ADS)

    Sheffer, Daniel B.; Schaer, Alex R.; Baumann, Juerg U.

    1989-04-01

    Inclusion of mass distribution information in biomechanical analysis of motion is a requirement for the accurate calculation of external moments and forces acting on the segmental joints during locomotion. Regression equations produced from a variety of photogrammetric, anthropometric and cadaveric studies have been developed and espoused in the literature. Because of limitations in the accuracy of inertial properties predicted by regression equations developed on one population and then applied to a different study population, a measurement technique that accurately defines the shape of each individual subject is desirable. This individual data acquisition method is especially needed when analyzing the gait of subjects whose extremity geometry differs greatly from that considered "normal", or who may possess gross asymmetries in shape between their own contralateral limbs. This study presents the photogrammetric acquisition and data analysis methodology used to assess the inertial tensors of two groups of subjects, one with spastic diplegic cerebral palsy and the other considered normal.

  9. Concept Area Two Objectives and Test Items (Rev.) Part One, Part Two. Economic Analysis Course. Segments 17-49.

    ERIC Educational Resources Information Center

    Sterling Inst., Washington, DC. Educational Technology Center.

    A multimedia course in economic analysis was developed and used in conjunction with the United States Naval Academy. (See ED 043 790 and ED 043 791 for final reports of the project evaluation and development model.) This report deals with the second concept area of the course and focuses on macroeconomics. Segments 17 through 49 are presented,…

  10. Bivariate segmentation of SNP-array data for allele-specific copy number analysis in tumour samples

    PubMed Central

    2013-01-01

    Background SNP arrays output two signals that reflect the total genomic copy number (LRR) and the allelic ratio (BAF), which in combination allow the characterisation of allele-specific copy numbers (ASCNs). While methods based on hidden Markov models (HMMs) have been extended from array comparative genomic hybridisation (aCGH) to jointly handle the two signals, only one method based on change-point detection, ASCAT, performs bivariate segmentation. Results In the present work, we introduce a generic framework for bivariate segmentation of SNP array data for ASCN analysis. To this end, we discuss the characteristics of the typically applied BAF transformation and how they affect segmentation, introduce concepts of multivariate time series analysis that are of concern in this field, and discuss the appropriate formulation of the problem. The framework is implemented in a method named CnaStruct, the bivariate form of the structural change model (SCM), which has been successfully applied to transcriptome mapping and aCGH. Conclusions On a comprehensive synthetic dataset, we show that CnaStruct outperforms the segmentation of existing ASCN analysis methods. Furthermore, CnaStruct can be integrated into the workflows of several ASCN analysis tools in order to improve their performance, especially on tumour samples highly contaminated by normal cells. PMID:23497144

  11. Change Detection and Land Use / Land Cover Database Updating Using Image Segmentation, GIS Analysis and Visual Interpretation

    NASA Astrophysics Data System (ADS)

    Mas, J.-F.; González, R.

    2015-08-01

    This article presents a hybrid method that combines image segmentation, GIS analysis, and visual interpretation in order to detect discrepancies between an existing land use/cover map and satellite images, and to assess land use/cover changes. It was applied to the elaboration of a multidate land use/cover database of the State of Michoacán, Mexico, using SPOT and Landsat imagery. The method was first applied to improve the resolution of an existing 1:250,000 land use/cover map produced through visual interpretation of 2007 SPOT images. A segmentation of the 2007 SPOT images was carried out to create spectrally homogeneous objects with a minimum area of two hectares. Through an overlay operation with the outdated map, each segment receives the "majority" category from the map. Furthermore, spectral indices of the SPOT image were calculated for each band and each segment; each segment was therefore characterized from the images (spectral indices) and the map (class label). In order to detect uncertain areas that present a discrepancy between spectral response and class label, multivariate trimming, which consists of truncating a distribution from its least likely values, was applied. The segments that behave like outliers were detected and labeled as "uncertain", and a probable alternative category was determined by means of a digital classification using a decision tree algorithm. The segments were then visually inspected in the SPOT image and high-resolution imagery to assign a final category. The same procedure was applied to update the map to 2014 using Landsat imagery. As a final step, an accuracy assessment was carried out using verification sites selected by stratified random sampling and visually interpreted using high-resolution imagery and ground truth.
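    The multivariate trimming idea, flagging segments whose spectral indices are jointly unlikely, can be illustrated with a simple stand-in that sums squared z-scores per feature (a diagonal-covariance simplification of a full Mahalanobis distance; not the authors' exact procedure):

```python
# Hedged sketch of outlier trimming: flag segments whose feature vector is
# jointly far from the class mean, using per-feature z-scores as a
# diagonal-covariance stand-in for Mahalanobis distance.

def flag_uncertain(segments, cutoff):
    """segments: list of equal-length feature vectors; returns indices
    whose squared z-score sum exceeds `cutoff`."""
    n, d = len(segments), len(segments[0])
    means = [sum(s[j] for s in segments) / n for j in range(d)]
    stds = [(sum((s[j] - means[j]) ** 2 for s in segments) / n) ** 0.5
            for j in range(d)]
    flagged = []
    for i, s in enumerate(segments):
        dist = sum(((s[j] - means[j]) / stds[j]) ** 2
                   for j in range(d) if stds[j] > 0)
        if dist > cutoff:
            flagged.append(i)
    return flagged
```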

  12. Reducing pervasive false-positive identical-by-descent segments detected by large-scale pedigree analysis.

    PubMed

    Durand, Eric Y; Eriksson, Nicholas; McLean, Cory Y

    2014-08-01

    Analysis of genomic segments shared identical-by-descent (IBD) between individuals is fundamental to many genetic applications, from demographic inference to estimating the heritability of diseases, but IBD detection accuracy in nonsimulated data is largely unknown. In principle, it can be evaluated using known pedigrees, as IBD segments are by definition inherited without recombination down a family tree. We extracted 25,432 genotyped European individuals containing 2,952 father-mother-child trios from the 23andMe, Inc. data set. We then used GERMLINE, a widely used IBD detection method, to detect IBD segments within this cohort. Exploiting known familial relationships, we identified a false-positive rate over 67% for 2-4 centiMorgan (cM) segments, in sharp contrast with accuracies reported in simulated data at these sizes. Nearly all false positives arose from the allowance of haplotype switch errors when detecting IBD, a necessity for retrieving long (>6 cM) segments in the presence of imperfect phasing. We introduce HaploScore, a novel, computationally efficient metric that scores IBD segments proportional to the number of switch errors they contain. Applying HaploScore filtering to the IBD data at a precision of 0.8 produced a 13-fold increase in recall when compared with length-based filtering. We replicate the false IBD findings and demonstrate the generalizability of HaploScore to alternative data sources using an independent cohort of 555 European individuals from the 1000 Genomes project. HaploScore can improve the accuracy of segments reported by any IBD detection method, provided that estimates of the genotyping error rate and switch error rate are available. PMID:24784137

  13. Interactive 3D segmentation of the prostate in magnetic resonance images using shape and local appearance similarity analysis

    NASA Astrophysics Data System (ADS)

    Shahedi, Maysam; Fenster, Aaron; Cool, Derek W.; Romagnoli, Cesare; Ward, Aaron D.

    2013-03-01

    3D segmentation of the prostate in medical images is useful for prostate cancer diagnosis and therapy guidance, but is time-consuming to perform manually. Clinical translation of computer-assisted segmentation algorithms for this purpose requires a comprehensive and complementary set of evaluation metrics that are informative to the clinical end user. We have developed an interactive 3D prostate segmentation method for 1.5T and 3.0T T2-weighted magnetic resonance imaging (T2W MRI) acquired using an endorectal coil. We evaluated our method against manual segmentations of 36 3D images using complementary boundary-based (mean absolute distance; MAD), regional overlap (Dice similarity coefficient; DSC) and volume difference (ΔV) metrics. Our technique is based on inter-subject prostate shape and local boundary appearance similarity. In the training phase, we calculated a point distribution model (PDM) and a set of local mean intensity patches centered on the prostate border to capture shape and appearance variability. To segment an unseen image, we defined a set of rays - one corresponding to each of the mean intensity patches computed in training - emanating from the prostate centre. We used a radial-based search strategy and translated each mean intensity patch along its corresponding ray, selecting as a candidate the boundary point with the highest normalized cross correlation along each ray. These boundary points were then regularized using the PDM. For the whole gland, we measured a mean+/-std MAD of 2.5+/-0.7 mm, DSC of 80+/-4%, and ΔV of 1.1+/-8.8 cc. We also provide an anatomic breakdown of these metrics within the prostatic base, mid-gland, and apex.
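    The radial search step, translating a mean intensity patch along a ray and keeping the position with the highest normalized cross correlation, can be sketched in 1D as follows; the intensity profile, the template, and the edge location are invented for illustration.

```python
import numpy as np

def normalized_cross_correlation(patch, template):
    """Zero-mean normalized cross correlation of two equally sized 1D patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.linalg.norm(p) * np.linalg.norm(t)
    return float(p @ t / denom) if denom > 0 else 0.0

def best_boundary_index(profile, template):
    """Slide a 1D mean-intensity template along an intensity profile sampled
    on a ray from the gland centre; return the index of the best match."""
    half = len(template) // 2          # template length must be odd
    scores = [normalized_cross_correlation(profile[i - half:i + half + 1], template)
              for i in range(half, len(profile) - half)]
    return half + int(np.argmax(scores))

# Toy profile: bright interior, darker exterior, boundary (step edge) at index 29.
profile = np.concatenate([np.full(30, 1.0), np.full(30, 0.2)])
profile += np.random.default_rng(1).normal(0, 0.02, profile.size)
template = np.concatenate([np.full(6, 1.0), np.full(5, 0.2)])  # learned edge model
idx = best_boundary_index(profile, template)
```

In the full method one such candidate is found per ray, and the resulting boundary points are regularized by the PDM.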

  14. Coal gasification systems engineering and analysis. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Feasibility analyses and systems engineering studies for a 20,000 tons per day medium-Btu gas (MBG) coal gasification plant to be built by TVA in Northern Alabama were conducted. Major objectives were as follows: (1) provide design and cost data to support the selection of a gasifier technology and other major plant design parameters, (2) provide design and cost data to support alternate product evaluation, (3) prepare a technology development plan to address areas of high technical risk, and (4) develop schedules, PERT charts, and a work breakdown structure to aid in preliminary project planning. Volume one contains a summary of gasification system characterizations. Five gasification technologies were selected for evaluation: Koppers-Totzek, Texaco, Lurgi Dry Ash, Slagging Lurgi, and Babcock and Wilcox. A summary of the trade studies and cost sensitivity analysis is included.

  15. Study of Alternate Space Shuttle Concepts. Volume 2, Part 2: Concept Analysis and Definition

    NASA Technical Reports Server (NTRS)

    1971-01-01

    This is the final report of a Phase A Study of Alternate Space Shuttle Concepts by the Lockheed Missiles & Space Company (LMSC) for the National Aeronautics and Space Administration George C. Marshall Space Flight Center (MSFC). The eleven-month study, which began on 30 June 1970, is to examine the stage-and-one-half and other Space Shuttle configurations and to establish feasibility, performance, cost, and schedules for the selected concepts. This final report consists of four volumes as follows: Volume I - Executive Summary, Volume II - Concept Analysis and Definition, Volume III - Program Planning, and Volume IV - Cost Data. This document is Volume II, Concept Analysis and Definition.

  16. A comparison between handgrip strength, upper limb fat free mass by segmental bioelectrical impedance analysis (SBIA) and anthropometric measurements in young males

    NASA Astrophysics Data System (ADS)

    Gonzalez-Correa, C. H.; Caicedo-Eraso, J. C.; Varon-Serna, D. R.

    2013-04-01

    The mechanical function and size of a muscle may be closely linked. Handgrip strength (HGS) has been used as a predictor of functional performance. Anthropometric measurements have been made to estimate arm muscle area (AMA) and the physical muscle mass volume of the upper limb (ULMMV). Electrical volume estimation is possible by segmental BIA measurements of fat free mass (SBIA-FFM), mainly muscle mass. The relationship among these variables is not well established. We aimed to determine whether physical and electrical muscle mass estimations relate to each other and to what extent HGS is related to muscle size measured by both methods in normal or overweight young males. Regression analysis was used to determine the association between these variables. Subjects showed a decreased HGS (65.5%), FFM (85.5%) and AMA (74.5%). An acceptable association was found between SBIA-FFM and AMA (r2 = 0.60) and a poorer one between the physical and electrical volume estimates (r2 = 0.55). However, a paired Student t-test and a Bland-Altman plot showed that the physical and electrical models were not interchangeable (p<0.0001). HGS showed a very weak association with anthropometric (r2 = 0.07) and electrical (r2 = 0.192) ULMMV, showing that muscle mass quantity does not imply muscle strength. Other factors influencing HGS, such as physical training or nutrition, require more research.
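    The Bland-Altman comparison used to test whether the two volume estimates are interchangeable can be sketched as below; the paired measurements are invented for illustration.

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics for two measurement methods.

    Returns the mean difference (bias) and the 95% limits of agreement
    (bias +/- 1.96 SD of the paired differences)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Invented upper-limb muscle volume estimates (litres) from an anthropometric
# model vs a segmental-BIA model, showing how a systematic offset appears as bias.
anthro = [2.10, 2.35, 1.98, 2.60, 2.22]
bia    = [1.95, 2.20, 1.85, 2.41, 2.05]
bias, (lo, hi) = bland_altman(anthro, bia)
```

A nonzero bias with narrow limits of agreement, as here, indicates a consistent systematic offset between methods rather than random disagreement.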

  17. Comparative analysis between homoeologous genome segments of Brassica napus and its progenitor species reveals extensive sequence-level divergence.

    PubMed

    Cheung, Foo; Trick, Martin; Drou, Nizar; Lim, Yong Pyo; Park, Jee-Young; Kwon, Soo-Jin; Kim, Jin-A; Scott, Rod; Pires, J Chris; Paterson, Andrew H; Town, Chris; Bancroft, Ian

    2009-07-01

    Homoeologous regions of Brassica genomes were analyzed at the sequence level. These represent segments of the Brassica A genome as found in Brassica rapa and Brassica napus and the corresponding segments of the Brassica C genome as found in Brassica oleracea and B. napus. Analysis of synonymous base substitution rates within modeled genes revealed a relatively broad range of times (0.12 to 1.37 million years ago) since the divergence of orthologous genome segments as represented in B. napus and the diploid species. Similar, and consistent, ranges were also identified for single nucleotide polymorphism and insertion-deletion variation. Genes conserved across the Brassica genomes and the homoeologous segments of the genome of Arabidopsis thaliana showed almost perfect collinearity. Numerous examples of apparent transduplication of gene fragments, as previously reported in B. oleracea, were observed in B. rapa and B. napus, indicating that this phenomenon is widespread in Brassica species. In the majority of the regions studied, the C genome segments were expanded in size relative to their A genome counterparts. The considerable variation that we observed, even between the different versions of the same Brassica genome, for gene fragments and annotated putative genes suggests that the concept of the pan-genome might be particularly appropriate when considering Brassica genomes.
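    Dating divergence from synonymous substitutions follows the standard molecular-clock conversion T = Ks / (2r); the synonymous rate r used below is an assumed illustrative value, not taken from the paper.

```python
# Molecular-clock conversion: divergence time T = Ks / (2r), where Ks is the
# synonymous divergence per site between two orthologs and r is an assumed
# synonymous substitution rate per site per year (1.5e-8 is a commonly cited
# Brassicaceae value; this is an assumption for illustration). The paper's
# 0.12-1.37 Mya range would correspond to Ks roughly 0.004-0.04 at this rate.

def divergence_time_years(ks: float, rate_per_site_per_year: float = 1.5e-8) -> float:
    return ks / (2.0 * rate_per_site_per_year)

t = divergence_time_years(0.01)  # Ks = 1% synonymous divergence
```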

  18. a New Framework for Object-Based Image Analysis Based on Segmentation Scale Space and Random Forest Classifier

    NASA Astrophysics Data System (ADS)

    Hadavand, A.; Saadatseresht, M.; Homayouni, S.

    2015-12-01

    In this paper a new object-based framework is developed for automated scale selection in image segmentation. The quality of image objects has an important impact on further analyses. Due to the strong dependency of segmentation results on the scale parameter, choosing the best value of this parameter for each class becomes a main challenge in object-based image analysis. We propose a new framework which employs a pixel-based land cover map to estimate the initial scale dedicated to each class. These scales are used to build a segmentation scale space (SSS), a hierarchy of image objects. Optimization of the SSS with respect to NDVI and DSM values in each super-object is used to find the best scale in local regions of the image scene. The optimized SSS segmentations are finally classified to produce the final land cover map. A very high resolution aerial image and a digital surface model provided by the ISPRS 2D semantic labelling dataset are used in our experiments. The result of our proposed method is comparable to that of the ESP tool, a well-known method for estimating the segmentation scale, and marginally improved the overall accuracy of classification from 79% to 80%.
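    The scale-selection idea can be sketched in the spirit of the ESP tool: track a homogeneity measure ("local variance", LV) across candidate scales and stop where its rate of change levels off. The LV curve and threshold below are invented.

```python
# Toy scale selection: the mean within-object variance (LV) typically rises
# steeply with scale while objects are still under-segmented, then flattens.
# Pick the first scale where the relative rate of change of LV drops below
# a threshold. All numbers here are hypothetical.

def select_scale(scales, local_variance, roc_threshold=0.05):
    """Return the first scale whose relative LV rate of change < roc_threshold."""
    for i in range(1, len(scales)):
        roc = (local_variance[i] - local_variance[i - 1]) / local_variance[i - 1]
        if roc < roc_threshold:
            return scales[i]
    return scales[-1]

scales = [10, 20, 30, 40, 50]
lv     = [4.0, 6.0, 7.5, 7.7, 7.8]   # hypothetical LV curve, flattening at 40
best = select_scale(scales, lv)
```

In the proposed framework this kind of search would be run per class and refined locally using NDVI and DSM homogeneity, rather than once globally.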

  19. An algorithm for control volume analysis of cryogenic systems

    NASA Astrophysics Data System (ADS)

    Stanton, Michael B.

    1989-06-01

    This thesis presents an algorithm suitable for numerical analysis of cryogenic refrigeration systems. Preliminary design of a cryogenic system commences with a number of decoupling assumptions with regard to the process variables of heat and work transfer (e.g., work input rate, heat loading rates) and state variables (pinch points, momentum losses). Making preliminary performance estimates minimizes the effect of component interactions, which is inconsistent with the intent of the analysis. A more useful design and analysis tool is one in which no restrictions are applied to the system: interactions become fully coupled and governed by the equilibrium state variables. Such a model requires consideration of hardware specifications, performance data, and information about the thermal environment. Model output consists of the independent thermodynamic state variables, from which process variables and performance parameters may be computed. The model has a framework compatible with numerical solution on a digital computer, so that it may be interfaced with graphic symbology for user interaction. This algorithm approaches cryogenic problems in a highly coupled, state-dependent manner. Its framework is built around the thermodynamic solution technique of Computer Aided Thermodynamics (CAT). Fundamental differences exist between the Control Volume (CV) algorithm and CAT, which are discussed where appropriate.

  20. A geometric analysis of mastectomy incisions: Optimizing intraoperative breast volume

    PubMed Central

    Chopp, David; Rawlani, Vinay; Ellis, Marco; Johnson, Sarah A; Buck, Donald W; Khan, Seema; Bethke, Kevin; Hansen, Nora; Kim, John YS

    2011-01-01

    INTRODUCTION: The advent of acellular dermis-based tissue expander breast reconstruction has placed an increased emphasis on optimizing intraoperative volume. Because skin preservation is a critical determinant of intraoperative volume expansion, a mathematical model was developed to capture the influence of incision dimension on subsequent tissue expander volumes. METHODS: A mathematical equation was developed to calculate breast volume via integration of a geometrically modelled breast cross-section. The equation calculates the volume change associated with the skin excised during the mastectomy incision by reducing the arc length of the cross-section. The degree of volume loss is then calculated for excision dimensions ranging from 35 mm to 60 mm. RESULTS: A quadratic relationship exists between breast volume and the vertical dimension of the mastectomy incision, such that incrementally larger incisions lead to a disproportionately greater amount of volume loss. The vertical dimension of the mastectomy incision, more so than the horizontal dimension, is of critical importance for maintaining breast volume. Moreover, the predicted volume loss is more profound in smaller breasts and primarily occurs in areas that affect breast projection and ptosis. CONCLUSIONS: The present study is the first to model the relationship between the vertical dimension of the mastectomy incision and subsequent volume loss. These geometric principles will aid in optimizing intraoperative volume expansion during expander-based breast reconstruction. PMID:22654531
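    The core computation, breast volume by integration of a modeled cross-section, can be sketched numerically with the disc method; the hemispherical profile below is a simple stand-in for the paper's geometric model, used only to show the integration machinery.

```python
import math

def volume_of_revolution(profile, a, b, n=10_000):
    """Numerically integrate V = ∫ pi * y(x)^2 dx over [a, b] (disc method,
    midpoint rule)."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        total += math.pi * profile(x) ** 2 * h
    return total

# Toy cross-section: hemisphere of radius 8 cm, profile y(x) = sqrt(R^2 - x^2).
R = 8.0
v = volume_of_revolution(lambda x: math.sqrt(R * R - x * x), 0.0, R)
exact = (2.0 / 3.0) * math.pi * R ** 3   # closed form for the hemisphere
```

In the paper's model the profile arc length is reduced by the excised skin dimension before integrating, which is how incision size maps to volume loss.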

  1. Analysis and segmentation of images in case of solving problems of detecting and tracing objects on real-time video

    NASA Astrophysics Data System (ADS)

    Ezhova, Kseniia; Fedorenko, Dmitriy; Chuhlamov, Anton

    2016-04-01

    The article deals with methods of image segmentation based on color space conversion that allow efficient detection of a single color against a complex background under varying lighting, as well as detection of objects on a homogeneous background. The results of an analysis of segmentation algorithms of this type and the possibilities for their software implementation are presented. The implemented algorithm is computationally expensive, which limits its application to video analysis; however, it solves the problem of analyzing objects in an image when no image dictionary or knowledge base is available, as well as the problem of choosing optimal frame quantization parameters for video analysis.
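    Single-color detection via color-space conversion can be sketched by converting RGB to HSV and thresholding on hue, which is far less sensitive to lighting than thresholding raw RGB; the target hue and tolerances below are illustrative choices, not the article's parameters.

```python
import colorsys

def is_target_hue(r, g, b, target_hue=0.33, tol=0.08, min_sat=0.3, min_val=0.2):
    """True if an 8-bit RGB pixel is close to target_hue (0-1 scale; 0.33 ~ green),
    with enough saturation/value to rule out gray and near-black pixels."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    dh = min(abs(h - target_hue), 1.0 - abs(h - target_hue))  # hue wraps around
    return dh <= tol and s >= min_sat and v >= min_val

# A bright green and a darker green (different lighting) both match;
# gray and red do not.
pixels = [(40, 200, 60), (20, 110, 35), (128, 128, 128), (210, 30, 30)]
mask = [is_target_hue(*p) for p in pixels]
```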

  2. Study on the application of MRF and the D-S theory to image segmentation of the human brain and quantitative analysis of the brain tissue

    NASA Astrophysics Data System (ADS)

    Guan, Yihong; Luo, Yatao; Yang, Tao; Qiu, Lei; Li, Junchang

    2012-01-01

    The spatial information captured by a Markov random field (MRF) model was used in image segmentation; it effectively removes noise and yields more accurate segmentation results. Based on the fuzziness and clustering of pixel grayscale information, the clustering centers of the different tissues and the background in a medical image are found with the fuzzy c-means clustering method. The threshold points for multi-threshold segmentation are then found with a two-dimensional histogram method, and the image is segmented. Multivariate information is fused based on the Dempster-Shafer evidence theory to obtain the fused segmentation. This paper combines these three theories to propose a new human brain image segmentation method. Experimental results show that the segmentation result agrees better with human vision and is of vital significance for the accurate analysis and application of brain tissues.
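    The fuzzy c-means step, finding the clustering centers of tissues and background from grayscale values, can be sketched as a minimal 1D implementation; the intensity distributions below are invented, and this sketch omits the MRF and Dempster-Shafer stages of the full pipeline.

```python
import numpy as np

def fuzzy_c_means_1d(values, n_clusters=3, m=2.0, n_iter=50):
    """Minimal fuzzy c-means on 1D grayscale values.
    Returns cluster centers (ascending, given the quantile initialization)
    and the fuzzy membership matrix of shape (n_points, n_clusters)."""
    x = np.asarray(values, float)
    # Initialize centers from spread-out quantiles of the data.
    qs = (np.arange(n_clusters) + 0.5) / n_clusters
    centers = np.quantile(x, qs)
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)   # memberships sum to 1 per pixel
        w = u ** m
        centers = (w.T @ x) / w.sum(axis=0)        # fuzzily weighted means
    return centers, u

# Invented intensities mimicking background, gray-matter and white-matter peaks.
rng = np.random.default_rng(1)
vals = np.concatenate([rng.normal(20, 3, 300),    # background
                       rng.normal(100, 5, 300),   # tissue 1
                       rng.normal(180, 5, 300)])  # tissue 2
centers, u = fuzzy_c_means_1d(vals, n_clusters=3)
```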

  3. phenoVein—A Tool for Leaf Vein Segmentation and Analysis

    PubMed Central

    Pflugfelder, Daniel; Huber, Gregor; Scharr, Hanno; Hülskamp, Martin; Koornneef, Maarten; Jahnke, Siegfried

    2015-01-01

    Precise measurements of leaf vein traits are an important aspect of plant phenotyping for ecological and genetic research. Here, we present a powerful and user-friendly image analysis tool named phenoVein. It is dedicated to the automated segmentation and analysis of leaf veins in images acquired with different imaging modalities (microscope, macrophotography, etc.), including options for convenient manual correction. Advanced image filtering emphasizes veins against the background and compensates for local brightness inhomogeneities. The most important traits calculated are total vein length, vein density, piecewise vein lengths and widths, areole area, and skeleton graph statistics such as the number of branching or ending points. For the determination of vein widths, a model-based vein edge estimation approach has been implemented. Validation was performed for the measurement of vein length, vein width, and vein density of Arabidopsis (Arabidopsis thaliana), proving the reliability of phenoVein. We demonstrate the power of phenoVein on a set of previously described vein structure mutants of Arabidopsis (hemivenata, ondulata3, and asymmetric leaves2-101) compared with the wild-type accessions Columbia-0 and Landsberg erecta-0. phenoVein is freely available as open-source software. PMID:26468519

  4. Modelling soft tissue for kinematic analysis of multi-segment human body models.

    PubMed

    Benham, M P; Wright, D K; Bibb, R

    2001-01-01

    Traditionally, biomechanical models represent the musculoskeletal system as a series of rigid links connected by rigidly defined rotational joints. More recently, though, the mechanics of joints and the action of soft tissues have come under closer scrutiny: biomechanical models might now include a full range of physiological structures. However, soft tissue representation within multi-segment human body models presents significant problems, not least in computational speed. We present a method for representing soft tissue physiology that provides for soft tissue wrapping around multiple bony objects, while showing forces at the insertion points as well as normal reactions due to contact between the soft and bony tissues. These soft tissue representations may therefore be used to constrain the joint, as ligaments would, or to generate motion, like a muscle, so that joints may be modelled that more accurately simulate musculoskeletal motion in all degrees of freedom, rotational and translational. This method produces soft tissues that do not need to be tied to a fixed path or route between the bony structures, but may move with the motion of the model, demonstrating a more realistic analysis of soft tissue activity in the musculoskeletal system. The combination of solid geometry models of the skeletal structure and these novel soft tissue representations may also provide a useful approach to synthesised human motion.

  5. Segmented independent component analysis for improved separation of fetal cardiac signals from nonstationary fetal magnetocardiograms

    PubMed Central

    Murta, Luiz O.; Guzo, Mauro G.; Moraes, Eder R.; Baffa, Oswaldo; Wakai, Ronald T.; Comani, Silvia

    2015-01-01

    Fetal magnetocardiograms (fMCGs) have been successfully processed with independent component analysis (ICA) to separate the fetal cardiac signals, but ICA effectiveness can be limited by signal nonstationarities due to fetal movements. We propose an ICA-based method to improve the quality of fetal signals separated from fMCG affected by fetal movements. This technique (SegICA) includes a procedure to detect signal nonstationarities, according to which the fMCG recordings are divided into stationary segments that are then processed with ICA. The first and second statistical moments and the signal polarity reversal were used at different threshold levels to detect signal transients. SegICA effectiveness was assessed in two fMCG datasets (with and without fetal movements) by comparing the signal-to-noise ratio (SNR) of the signals extracted with ICA and with SegICA. Results showed that the SNR of fetal signals affected by fetal movements improved with SegICA, whereas the SNR gain was negligible elsewhere. The best measure for detecting signal nonstationarities of physiological origin was signal polarity reversal at threshold level 0.9. The first statistical moment also provided good results at threshold level 0.6. SegICA seems to be a promising method for separating fetal cardiac signals of improved quality from nonstationary fMCG recordings affected by fetal movements. PMID:25781658
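    The segmentation step of SegICA, cutting the recording into quasi-stationary pieces before running ICA on each, can be sketched with a windowed first/second-moment detector; the window size, thresholds, and toy signal are assumptions (the paper's best-performing detector, polarity reversal, is omitted here).

```python
import numpy as np

def stationary_segments(x, win=100, z_thresh=3.0):
    """Split a 1D signal into quasi-stationary segments by scanning windowed
    first and second moments and cutting where either jumps.
    Returns a list of (start, end) sample-index pairs."""
    x = np.asarray(x, float)
    n_win = len(x) // win
    means = np.array([x[i*win:(i+1)*win].mean() for i in range(n_win)])
    stds  = np.array([x[i*win:(i+1)*win].std()  for i in range(n_win)])
    cuts = [0]
    for i in range(1, n_win):
        if (abs(means[i] - means[i-1]) > z_thresh * stds[i-1]
                or stds[i] > 2.0 * stds[i-1] or stds[i] < 0.5 * stds[i-1]):
            cuts.append(i * win)
    cuts.append(len(x))
    return list(zip(cuts[:-1], cuts[1:]))

# Toy nonstationary signal: amplitude doubles and baseline shifts at sample 1000,
# mimicking the effect of a fetal movement on the recorded field.
t = np.arange(2000)
x = np.sin(2 * np.pi * t / 50.0)
x[1000:] = 2.0 * x[1000:] + 3.0
segs = stationary_segments(x, win=200)
```

Each returned segment would then be passed to ICA separately, so that the unmixing matrix need only hold within one quasi-stationary stretch.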

  6. Movement Analysis of Flexion and Extension of Honeybee Abdomen Based on an Adaptive Segmented Structure

    PubMed Central

    Zhao, Jieliang; Wu, Jianing; Yan, Shaoze

    2015-01-01

    Honeybees (Apis mellifera) curl their abdomens during daily rhythmic activities. Before this was established, it was assumed that honeybees could curl their abdomens freely in any direction. An intriguing but less studied feature, however, is the possibly unidirectional abdominal deformation in free-flying honeybees. A high-speed video camera was used to capture the curling and to analyze the changes in the arc length of the honeybee abdomen, not only in free-flying mode but also in fixed samples. Frozen sections and environmental scanning electron microscopy were used to investigate the microstructure and motion principle of the honeybee abdomen and to explore the physical structure restricting its curling. An adaptive segmented structure, especially the folded intersegmental membrane (FIM), plays a dominant role in the flexion and extension of the abdomen. The structural features of the FIM were used to mimic and exhibit the movement restriction on the honeybee abdomen. Combining experimental analysis and theoretical demonstration, a unidirectional bending mechanism of the honeybee abdomen was revealed. This finding offers a new perspective that can be imitated in aerospace vehicle design. PMID:26223946

  7. An Empirical Analysis of Indoor Tanners: Implications for Audience Segmentation in Campaigns.

    PubMed

    Kelley, Dannielle E; Noar, Seth M; Myrick, Jessica Gall; Morales-Pico, Brenda; Zeitany, Alexandra; Thomas, Nancy E

    2016-05-01

    Tanning bed use before age 35 has been strongly associated with several types of skin cancer. The current study sought to advance an understanding of audience segmentation for indoor tanning among young women. Panhellenic sorority systems at two universities in the Southeastern United States participated in this study. A total of 1,481 young women took the survey; 421 (28%) had tanned indoors in the previous 12 months and were the focus of the analyses reported in this article. Results suggested two distinct tanner types: regular (n = 60) and irregular (n = 353) tanners. Regular tanners tanned more frequently (M = 36.2 vs. 8.6 times per year) and reported significantly higher positive outcome expectations (p < .001) and lower negative outcome expectations (p < .01) than irregular tanners, among other significant differences. Hierarchical logistic regression analysis revealed several significant (p < .001) predictors of regular tanning type, with tanning dependence emerging as the strongest predictor of this classification (OR = 2.25). Implications for developing anti-tanning messages directed at regular and irregular tanners are discussed. PMID:27115046

  8. Uncontrolled manifold analysis of segmental angle variability during walking: preadolescents with and without Down syndrome.

    PubMed

    Black, David P; Smith, Beth A; Wu, Jianhua; Ulrich, Beverly D

    2007-12-01

    The uncontrolled manifold (UCM) approach allows us to address issues concerning the nature of variability. In this study we applied the UCM analysis to gait and to a population known for exhibiting high levels of performance variability, Down syndrome (DS). We wanted to determine if preadolescents (ages between 8 and 10) with DS partition goal-equivalent variability (UCM(||)) and non-goal-equivalent variability differently than peers with typical development (TD) and whether treadmill practice would result in utilizing greater amounts of functional, task-specific variability to accomplish the task goal. We also wanted to determine how variance is structured with respect to two important performance variables: center of mass (COM) and head trajectory at one specific event (i.e., heel contact) for both groups during gait. Preadolescents with and without DS walked on a treadmill below, at, and above their preferred overground speed. We tested both groups before and after four visits of treadmill practice. We found that children with DS partition more UCM(||) variance than children with TD across all speeds and both pre and post practice. The results also suggest that more segmental configuration variance was structured such that less motion of COM than head position was exhibited at heel contact. Overall, we believe children with DS are employing a different control strategy to compensate for their inherent limitations by exploiting the variability that corresponds to successfully performing the task.

  9. Polar Value Analysis of Corneal Astigmatism in Intrastromal Corneal Ring Segment Implantation

    PubMed Central

    Rho, Chang Rae; Kim, Min-Ji

    2016-01-01

    Purpose. To evaluate surgically induced astigmatism (SIA) and the average corneal power change in symmetric intrastromal corneal ring segment (ICRS) implantation. Methods. The study included 34 eyes of 34 keratoconus patients who underwent symmetric Intacs SK ICRS implantation. The corneal pocket incision meridian was the preoperative steep meridian. Corneal power data were obtained before and 3 months after Intacs SK ICRS implantation using scanning-slit topography. Polar value analysis was used to evaluate the SIA. Hotelling's trace test was used to compare intraindividual changes. Results. Three months postoperatively, the combined mean polar value for SIA changed significantly (Hotelling's T2 = 0.375; P = 0.006). The SIA was 1.54 D at 99° and the average corneal power decreased significantly by 3.8 D. Conclusion. Intacs SK ICRS placement decreased the average corneal power and corneal astigmatism compared to the preoperative corneal power and astigmatism when the corneal pocket incision was made at the preoperative steep meridian. PMID:27795856

  10. Finite element analysis of weightbath hydrotraction treatment of degenerated lumbar spine segments in elastic phase.

    PubMed

    Kurutz, M; Oroszváry, L

    2010-02-10

    3D finite element models of human lumbar functional spinal units (FSU) were used for numerical analysis of weightbath hydrotraction therapy (WHT) applied for treating degenerative diseases of the lumbar spine. Five grades of age-related degeneration were modeled by material properties. Tensile material parameters of discs were obtained by parameter identification based on in vivo measured elongations of lumbar segments during regular WHT, compressive material constants were obtained from the literature. It has been proved numerically that young adults of 40-45 years have the most deformable and vulnerable discs, while the stability of segments increases with further aging. The reasons were found by analyzing the separated contrasting effects of decreasing incompressibility and increasing hardening of nucleus, yielding non-monotonous functions of stresses and deformations in terms of aging and degeneration. WHT consists of indirect and direct traction phases. Discs show a bilinear material behaviour with higher resistance in indirect and smaller in direct traction phase. Consequently, although the direct traction load is only 6% of the indirect one, direct traction deformations are 15-90% of the indirect ones, depending on the grade of degeneration. Moreover, the ratio of direct stress relaxation remains equally about 6-8% only. Consequently, direct traction controlled by extra lead weights influences mostly the deformations being responsible for the nerve release; while the stress relaxation is influenced mainly by the indirect traction load coming from the removal of the compressive body weight and muscle forces in the water. A mildly degenerated disc in WHT shows 0.15mm direct, 0.45mm indirect and 0.6mm total extension; 0.2mm direct, 0.6mm indirect and 0.8mm total posterior contraction. A severely degenerated disc exhibits 0.05mm direct, 0.05mm indirect and 0.1mm total extension; 0.05mm direct, 0.25mm indirect and 0.3mm total posterior contraction. These
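    The reported bilinear behaviour, a small direct load producing a disproportionate share of the deformation, can be illustrated with a two-stiffness toy model; all stiffness and load values below are invented, chosen only to land inside the reported ranges.

```python
# Toy bilinear traction model for WHT: the direct traction load (extra lead
# weights) is only ~6% of the indirect load (removal of body weight and muscle
# forces in water), yet the disc responds much more softly in the direct phase,
# so direct deformation is a sizable fraction of the indirect deformation.

def traction_elongation(load_n, stiffness_n_per_mm):
    return load_n / stiffness_n_per_mm

indirect_load = 300.0                # N (hypothetical)
direct_load = 0.06 * indirect_load   # ~6% of the indirect load

k_indirect = 667.0   # N/mm: stiff response in the indirect phase (hypothetical)
k_direct = 60.0      # N/mm: much softer response in the direct phase (hypothetical)

e_indirect = traction_elongation(indirect_load, k_indirect)
e_direct = traction_elongation(direct_load, k_direct)
ratio = e_direct / e_indirect   # fraction of indirect deformation
```

With these invented numbers the direct phase yields roughly two thirds of the indirect elongation from 6% of the load, consistent with the paper's 15-90% range.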

  11. Analysis of Cliff-Ramp Structures in Homogeneous Scalar Turbulence by the Method of Line Segments

    NASA Astrophysics Data System (ADS)

    Gauding, Michael; Goebbert, Jens Henrik; Peters, Norbert; Hasse, Christian

    2015-11-01

    The local structure of a turbulent scalar field in homogeneous isotropic turbulence is analyzed by direct numerical simulations (DNS). A novel signal decomposition approach is introduced where the signal of the scalar along a straight line is partitioned into segments based on the local extremal points of the scalar field. These segments are then parameterized by the distance between adjacent extremal points and a segment-based gradient. Joint statistics of the length and the segment-based gradient provide novel understanding about the local structure of the turbulent field and particularly about cliff-ramp-like structures. Ramp-like structures are unveiled by the asymmetry of joint distribution functions. Cliff-like structures are further analyzed by conditional statistics and it is shown from DNS that the width of cliffs scales with the Kolmogorov length scale.
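    The method of line segments can be sketched directly: cut the 1D scalar signal at its local extremal points and record each segment's length and end-to-end gradient; cliff-ramp asymmetry then shows up as long shallow positive segments against short steep negative ones. The toy signal below is invented.

```python
import numpy as np

def line_segments(signal):
    """Partition a 1D signal at its local extremal points. Returns
    (lengths, gradients): for each segment between adjacent extrema, its
    length in samples and the end-to-end (segment-based) gradient."""
    s = np.asarray(signal, float)
    d = np.diff(s)
    # Local extrema: sign change of the discrete derivative; endpoints included.
    ext = [0] + [i for i in range(1, len(s) - 1) if d[i - 1] * d[i] < 0] + [len(s) - 1]
    lengths = np.diff(ext)
    grads = np.array([(s[b] - s[a]) / (b - a) for a, b in zip(ext[:-1], ext[1:])])
    return lengths, grads

# Toy scalar signal with cliff-ramp structure: slow ramps up, sharp cliffs down.
x = np.array([0.0, 1.0, 2.0, 3.0, 0.5, 1.5, 2.5, 3.5, 1.0])
lengths, grads = line_segments(x)
```

The joint distribution of (length, gradient) over many such segments is what reveals the ramp-cliff asymmetry in the DNS data.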

  12. Relationship between methamphetamine use history and segmental hair analysis findings of MA users.

    PubMed

    Han, Eunyoung; Lee, Sangeun; In, Sanghwan; Park, Meejung; Park, Yonghoon; Cho, Sungnam; Shin, Junguk; Lee, Hunjoo

    2015-09-01

    The aim of this study was to investigate the relationship between methamphetamine (MA) use history and segmental hair analysis (1 and 3 cm sections) and whole hair analysis results in Korean MA users in rehabilitation programs. Hair samples were collected from 26 Korean MA users. Eleven of the 26 subjects used cannabis with MA, and two used cocaine, opiates, and MDMA with MA. The self-reported single dose of MA for the 26 subjects ranged from 0.03 to 0.5 g per use. Concentrations of MA and its metabolite amphetamine (AP) in hair were determined by gas chromatography mass spectrometry (GC/MS) after derivatization. The method used was well validated. Qualitative analysis of all 1 cm sections (n=154) revealed a good correlation between positive or negative results for MA in hair and self-reported MA use (69.48%, n=107). In detail, MA results were positive in 66 hair specimens from users who reported administering MA, and negative in 41 hair specimens from users who denied MA administration in the corresponding month. Test results were false-negative in 10.39% (n=16) of hair specimens and false-positive in 20.13% (n=31). In the false-positive cases, MA is considered to have continued to accumulate in hair after cessation, while in the false-negative cases the self-reported histories indicated a small amount of MA use or MA use 5-7 months previously. In terms of quantitative analysis, the concentrations of MA in 1 and 3 cm long hair segments and in whole hair samples ranged from 1.03 to 184.98 (mean 22.01), 2.26 to 89.33 (mean 18.71), and 0.91 to 124.49 (mean 15.24) ng/mg, respectively. Ten subjects showed a good correlation between MA use and MA concentration in hair; the correlation coefficient (r) of 7 of these 10 subjects ranged from 0.71 to 0.98 (mean 0.85). Four subjects showed a low correlation between MA use and MA concentration in hair, with correlation coefficients (r) ranging from 0.36 to 0.55. Eleven subjects showed a poor

  14. An entropy-based automated cell nuclei segmentation and quantification: application in analysis of wound healing process.

    PubMed

    Oswal, Varun; Belle, Ashwin; Diegelmann, Robert; Najarian, Kayvan

    2013-01-01

    The segmentation and quantification of cell nuclei are two very significant tasks in the analysis of histological images. Accurate cell nuclei segmentation results are often adapted to a variety of applications, such as the detection of cancerous cell nuclei and the observation of overlapping cellular events during the wound healing process in the human body. In this paper, an automated entropy-based thresholding system for segmentation and quantification of cell nuclei from histologically stained images is presented. The proposed translational computation system aims to integrate clinical insight and computational analysis by identifying and segmenting objects of interest within histological images. Objects of interest and background regions are automatically distinguished by dynamically determining three optimal threshold values, one for each color component of an input image. The threshold values are determined by means of entropy computations based on the probability distributions of pixel color intensities and the spatial similarity of pixel intensities within neighborhoods. The effectiveness of the proposed system was tested on 21 histologically stained images containing approximately 1800 cell nuclei, and the overall performance of the algorithm was found to be promising, with high accuracy and precision values.
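The entropy-driven threshold selection described above can be illustrated with Kapur's maximum-entropy criterion applied to one color channel's histogram. This is a simplified sketch of the general idea, not the paper's exact formulation, which also incorporates spatial similarity within pixel neighborhoods:

```python
import math

def kapur_threshold(histogram):
    """Pick the threshold that maximizes the summed entropies of the
    background and foreground intensity distributions (Kapur's criterion)."""
    total = sum(histogram)
    probs = [h / total for h in histogram]
    best_t, best_entropy = 0, float("-inf")
    for t in range(1, len(histogram)):
        p_bg = sum(probs[:t])          # mass of bins below the threshold
        p_fg = 1.0 - p_bg              # mass of bins at or above it
        if p_bg <= 0 or p_fg <= 0:
            continue
        h_bg = -sum(p / p_bg * math.log(p / p_bg) for p in probs[:t] if p > 0)
        h_fg = -sum(p / p_fg * math.log(p / p_fg) for p in probs[t:] if p > 0)
        if h_bg + h_fg > best_entropy:
            best_entropy, best_t = h_bg + h_fg, t
    return best_t

# Toy bimodal histogram: background peak (bins 0-1), nuclei peak (bins 6-7).
hist = [50, 50, 0, 0, 0, 0, 50, 50]
t = kapur_threshold(hist)  # → 2, the first bin of the empty valley
```

In the paper's setting, a selection of this kind would run once per color component, yielding the three thresholds mentioned in the abstract.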

  15. An Accurate Scene Segmentation Method Based on Graph Analysis Using Object Matching and Audio Feature

    NASA Astrophysics Data System (ADS)

    Yamamoto, Makoto; Haseyama, Miki

    A method for accurate scene segmentation using two kinds of directed graphs, obtained from object matching and audio features, is proposed. Generally, in audiovisual materials such as broadcast programs and movies, similar shots that include frames of the same background, object, or place appear repeatedly, and such shots belong to a single scene. Many scene segmentation methods based on this idea have been proposed; however, since they use color information as visual features, they cannot provide accurate scene segmentation results if the color features change between shots whose frames include the same object, due to camera operations such as zooming and panning. In order to solve this problem, the proposed method realizes scene segmentation using two novel approaches. In the first approach, object matching is performed between two frames that belong to different shots. Using these matching results, repeated appearances of shots whose frames include the same object can be successfully found and represented as a directed graph. In the second approach, the proposed method generates another directed graph that represents the repeated appearances of shots with similar audio features. By the combined use of these two directed graphs, the degradation in scene segmentation accuracy that results from using only one kind of graph is avoided, and accurate scene segmentation is thereby realized. Experimental results obtained by applying the proposed method to actual broadcast programs verify its effectiveness.

  16. Analysis of human hair to assess exposure to organophosphate flame retardants: Influence of hair segments and gender differences.

    PubMed

    Qiao, Lin; Zheng, Xiao-Bo; Zheng, Jing; Lei, Wei-Xiang; Li, Hong-Fang; Wang, Mei-Huan; He, Chun-Tao; Chen, She-Jun; Yuan, Jian-Gang; Luo, Xiao-Jun; Yu, Yun-Jiang; Yang, Zhong-Yi; Mai, Bi-Xian

    2016-07-01

    Hair is a promising, non-invasive, human biomonitoring matrix that can provide insight into retrospective and integral exposure to organic pollutants. In the present study, we measured the concentrations of organophosphate flame retardants (PFRs) in hair and serum samples from university students in Guangzhou, China, and compared the PFR concentrations in female hair segments using paired distal (5~10cm from the root) and proximal (0~5cm from the root) samples. PFRs were not detected in the serum samples. All PFRs except tricresyl phosphate (TMPP) and tri-n-propyl phosphate (TPP) were detected in more than half of all hair samples. The concentrations of total PFRs varied from 10.1 to 604ng/g, with a median of 148ng/g. Tris(chloroisopropyl) phosphate (TCIPP) and tris(2-ethylhexyl) phosphate (TEHP) were the predominant PFRs in hair. The concentrations of most PFRs in the distal segments were 1.5~8.6 times higher than those in the proximal segments of the hair (t-test, p<0.05), which may be due to the longer exposure time of the distal segments to external sources. The values of log (PFR concentrations-distal/PFR concentrations-proximal) were positively and significantly correlated with log KOA of PFRs (p<0.05, r=0.68), indicating that PFRs with a higher log KOA tend to accumulate in hair at a higher rate than PFRs with a lower log KOA. Using combined segments of female hair, significantly higher PFR concentrations were observed in female hair than in male hair. In contrast, female hair exhibited significantly lower PFR concentrations than male hair when using the same hair position for both genders (0-5cm from the scalp). These conflicting results regarding gender differences in PFRs in hair highlight the importance of segmental analysis when using hair as an indicator of human exposure to PFRs. PMID:27078091

  17. Comparative analysis of operational forecasts versus actual weather conditions in airline flight planning, volume 3

    NASA Technical Reports Server (NTRS)

    Keitz, J. F.

    1982-01-01

    The impact of more timely and accurate weather data on airline flight planning, with an emphasis on fuel savings, is studied. This volume of the report discusses the results of Task 3 of the four major tasks included in the study. Task 3 compares flight plans developed on the Suitland forecast with actual data observed by the aircraft (and averaged over 10 degree segments). The results show that the average difference between the forecast and observed wind speed is 9 kts. without considering direction, and the average difference in the component of the forecast wind parallel to the direction of the observed wind is 13 kts. - both indicating that the Suitland forecast underestimates the wind speeds. The Root Mean Square (RMS) vector error is 30.1 kts. The average absolute difference in direction between the forecast and observed wind is 26 degrees, and the temperature difference is 3 degrees Celsius. These results indicate that the forecast model, as well as the verifying analysis used to develop comparison flight plans in Tasks 1 and 2, is a limiting factor, and that the average potential fuel savings or penalty is up to 3.6 percent depending on the direction of flight.

  18. Incorporation of learned shape priors into a graph-theoretic approach with application to the 3D segmentation of intraretinal surfaces in SD-OCT volumes of mice

    NASA Astrophysics Data System (ADS)

    Antony, Bhavna J.; Song, Qi; Abràmoff, Michael D.; Sohn, Eliott; Wu, Xiaodong; Garvin, Mona K.

    2014-03-01

    Spectral-domain optical coherence tomography (SD-OCT) finds widespread use clinically for the detection and management of ocular diseases. This non-invasive imaging modality has also begun to find frequent use in research studies involving animals such as mice. Numerous approaches have been proposed for the segmentation of retinal surfaces in SD-OCT images obtained from human subjects; however, the segmentation of retinal surfaces in scans of mice has not been as well studied. In this work, we describe a graph-theoretic approach for the simultaneous segmentation of 10 retinal surfaces in SD-OCT scans of mice that incorporates learned shape priors. We compared the method to a baseline approach that did not incorporate learned shape priors and observed that the overall unsigned border position errors were reduced from 3.58 +/- 1.33 μm to 3.20 +/- 0.56 μm.

  19. Multi-temporal MRI carpal bone volumes analysis by principal axes registration

    NASA Astrophysics Data System (ADS)

    Ferretti, Roberta; Dellepiane, Silvana

    2016-03-01

    In this paper, a principal axes registration technique is presented, with an application to segmented volumes. The purpose of the proposed registration is to compare multi-temporal volumes of carpal bones from Magnetic Resonance Imaging (MRI) acquisitions. Starting from the second-order moment matrix, the eigenvectors are calculated to allow the rotation of volumes with respect to reference axes. The volumes are then spatially translated so that they overlap. A quantitative evaluation of the results is carried out by computing classical indices from the confusion matrix, which provide similarity measures between volumes of the same organ extracted from MRI acquisitions executed at different times. Within the medical field, the use of registration to compare multi-temporal images is of great interest, since it provides the physician with a tool that allows visual monitoring of disease evolution. The segmentation method used herein is based on graph theory and is robust, unsupervised, and parameter-independent. Patients affected by rheumatic diseases have been considered.
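The registration steps described here (second-order moment matrix, eigenvectors, rotation to reference axes, translation to overlap) can be sketched in 2-D, where the principal-axis angle has a closed form. This is an illustrative sketch, not the authors' implementation; note that the sign of a principal axis is ambiguous, so aligned volumes may still differ by a 180° flip, which production code would resolve separately.

```python
import math

def principal_axes_2d(points):
    """Centroid and principal-axis angle of a 2-D point set, derived from
    its second-order central moment matrix (eigenvector direction)."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    mu20 = sum((x - cx) ** 2 for x, _ in points) / n
    mu02 = sum((y - cy) ** 2 for _, y in points) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in points) / n
    theta = 0.5 * math.atan2(2 * mu11, mu20 - mu02)  # major-axis angle
    return (cx, cy), theta

def align(points):
    """Rotate a point set so its principal axis lies along x, centered at the origin."""
    (cx, cy), theta = principal_axes_2d(points)
    c, s = math.cos(-theta), math.sin(-theta)
    return [((x - cx) * c - (y - cy) * s, (x - cx) * s + (y - cy) * c)
            for x, y in points]

# Two hypothetical "acquisitions" of the same elongated bone cross-section:
base = [(x, 0.2 * x) for x in range(10)]          # toy segmented voxel centers
moved = [(-y + 5.0, x - 3.0) for x, y in base]    # 90° rotation plus translation
a, b = align(base), align(moved)                  # both land on the x-axis
```

After alignment, both point sets lie along the reference x-axis (up to the axis-sign ambiguity noted above), after which overlap indices such as those from a confusion matrix can be computed.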

  20. Segmentation and analysis of the three-dimensional redistribution of nuclear components in human mesenchymal stem cells.

    PubMed

    Vermolen, Bart J; Garini, Yuval; Young, Ian T; Dirks, Roeland W; Raz, Vered

    2008-09-01

    To better understand the impact of changes in nuclear architecture on nuclear functions, it is essential to quantitatively elucidate the three-dimensional organization of nuclear components using image processing tools. We have developed a novel image segmentation method, which involves a contrast enhancement and a subsequent thresholding step. In addition, we have developed a new segmentation method for the nuclear volume using the fluorescent background signal of a probe. After segmentation of the nucleus, a first-order normalization is performed on the signal positions of the component of interest to correct for the shape of the nucleus. This method allowed us to compare various signal positions within a single nucleus, and also in pooled data obtained from multiple nuclei, which may vary in size and shape. The algorithms were tested by analyzing the spatial localization of nuclear bodies in relation to the nuclear center. Next, we used this new tool to study the change in the spatial distribution of nuclear components in cells before and after caspase-8 activation, which leads to cell death. Compared to the morphological TopHat method, this method gives similar but significantly faster results. A clear shift in the radial distribution of centromeres was found, while the radial distribution of telomeres changed much less. In addition, we have used this new tool to follow changes in the spatial distribution of two nuclear components in the same nucleus during activation of apoptosis. We show that after caspase-8 activation, when centromeres shift to a peripheral localization, the spatial distribution of PML-NBs does not change while that of centromeres does. We propose that the use of this new image segmentation method will contribute to a better understanding of the 3D spatial organization of the cell nucleus.

  1. Yucca Mountain transportation routes: Preliminary characterization and risk analysis; Volume 2, Figures [and] Volume 3, Technical Appendices

    SciTech Connect

    Souleyrette, R.R. II; Sathisan, S.K.; di Bartolo, R.

    1991-05-31

    This report presents appendices related to the preliminary assessment and risk analysis for high-level radioactive waste transportation routes to the proposed Yucca Mountain Project repository. Information includes data on population density, traffic volume, ecologically sensitive areas, and accident history.

  2. Subcortical volume analysis in traumatic brain injury: the importance of the fronto-striato-thalamic circuit in task switching.

    PubMed

    Leunissen, Inge; Coxon, James P; Caeyenberghs, Karen; Michiels, Karla; Sunaert, Stefan; Swinnen, Stephan P

    2014-02-01

    Traumatic brain injury (TBI) is associated with neuronal loss, diffuse axonal injury and executive dysfunction. Whereas executive dysfunction has traditionally been associated with prefrontal lesions, ample evidence suggests that those functions requiring behavioral flexibility critically depend on the interaction between frontal cortex, basal ganglia and thalamus. To test whether the structural integrity of this fronto-striato-thalamic circuit can account for executive impairments in TBI, we automatically segmented the thalamus, putamen and caudate of 25 patients and 21 healthy controls and obtained diffusion weighted images. We assessed components of executive function using the local-global task, which requires inhibition, updating and switching between actions. Shape analysis revealed localized atrophy of the limbic, executive and rostral-motor zones of the basal ganglia, whereas atrophy of the thalami was more global in TBI. This subcortical atrophy was related to white matter microstructural organization in TBI, suggesting that axonal injuries possibly contribute to subcortical volume loss. Global volume of the nuclei showed no clear relationship with task performance. However, the shape analysis revealed that participants with smaller volume of those subregions that have connections with the prefrontal cortex and rostral motor areas showed higher switch costs and mixing costs, and made more errors while switching. These results support the idea that flexible cognitive control over action depends on interactions within the fronto-striato-thalamic circuit.

  3. Automated Segmentability Index for Layer Segmentation of Macular SD-OCT Images

    PubMed Central

    Lee, Kyungmoo; Buitendijk, Gabriëlle H.S.; Bogunovic, Hrvoje; Springelkamp, Henriët; Hofman, Albert; Wahle, Andreas; Sonka, Milan; Vingerling, Johannes R.; Klaver, Caroline C.W.; Abràmoff, Michael D.

    2016-01-01

    Purpose To automatically identify which spectral-domain optical coherence tomography (SD-OCT) scans will provide reliable automated layer segmentations for more accurate layer thickness analyses in population studies. Methods Six hundred ninety macular SD-OCT image volumes (6.0 × 6.0 × 2.3 mm3) were obtained from one eye of 690 subjects (74.6 ± 9.7 [mean ± SD] years, 37.8% male) randomly selected from the population-based Rotterdam Study. The dataset consisted of 420 OCT volumes with successful automated retinal nerve fiber layer (RNFL) segmentations obtained from our previously reported graph-based segmentation method and 270 volumes with failed segmentations. To evaluate the reliability of the layer segmentations, we have developed a new metric, segmentability index SI, which is obtained from a random forest regressor based on 12 features using OCT voxel intensities, edge-based costs, and on-surface costs. The SI was compared with well-known quality indices, quality index (QI), and maximum tissue contrast index (mTCI), using receiver operating characteristic (ROC) analysis. Results The 95% confidence interval (CI) and the area under the curve (AUC) for the QI are 0.621 to 0.805 with AUC 0.713, for the mTCI 0.673 to 0.838 with AUC 0.756, and for the SI 0.784 to 0.920 with AUC 0.852. The SI AUC is significantly larger than either the QI or mTCI AUC (P < 0.01). Conclusions The segmentability index SI is well suited to identify SD-OCT scans for which successful automated intraretinal layer segmentations can be expected. Translational Relevance Interpreting the quantification of SD-OCT images requires the underlying segmentation to be reliable, but standard SD-OCT quality metrics do not predict which segmentations are reliable and which are not. The segmentability index SI presented in this study does allow reliable segmentations to be identified, which is important for more accurate layer thickness analyses in research and population studies. PMID:27066311

  4. Automatic partitioning of head CTA for enabling segmentation

    NASA Astrophysics Data System (ADS)

    Suryanarayanan, Srikanth; Mullick, Rakesh; Mallya, Yogish; Kamath, Vidya; Nagaraj, Nithin

    2004-05-01

    Radiologists perform a CT Angiography procedure to examine vascular structures and associated pathologies such as aneurysms. Volume rendering is used to exploit volumetric capabilities of CT that provide complete interactive 3-D visualization. However, bone forms an occluding structure and must be segmented out. The anatomical complexity of the head creates a major challenge in the segmentation of bone and vessel. An analysis of the head volume reveals varying spatial relationships between vessel and bone that can be separated into three sub-volumes: "proximal", "middle", and "distal". The "proximal" and "distal" sub-volumes contain good spatial separation between bone and vessel (carotid referenced here). Bone and vessel appear contiguous in the "middle" partition, which remains the most challenging region for segmentation. The partition algorithm is used to automatically identify these partition locations so that different segmentation methods can be developed for each sub-volume. The partition locations are computed using bone, image entropy, and sinus profiles along with a rule-based method. The algorithm is validated on 21 cases (varying volume sizes, resolution, clinical sites, pathologies) using ground truth identified visually. The algorithm is also computationally efficient, processing a 500+ slice volume in 6 seconds (an impressive 0.01 seconds per slice), which makes it an attractive algorithm for pre-processing large volumes. The partition algorithm is integrated into the segmentation workflow. Fast and simple algorithms are implemented for processing the "proximal" and "distal" partitions. Complex methods are restricted to only the "middle" partition. The partition-enabled segmentation has been successfully tested and results are shown from multiple cases.

  5. Global Warming’s Six Americas: An Audience Segmentation Analysis (Invited)

    NASA Astrophysics Data System (ADS)

    Roser-Renouf, C.; Maibach, E.; Leiserowitz, A.

    2009-12-01

    One of the first rules of effective communication is to “know thy audience.” People have different psychological, cultural and political reasons for acting - or not acting - to reduce greenhouse gas emissions, and climate change educators can increase their impact by taking these differences into account. In this presentation we will describe six unique audience segments within the American public, each of which responds to the issue in its own distinct way, and we will discuss methods of engaging each. The six audiences were identified using a nationally representative survey of American adults conducted in the fall of 2008 (N=2,164). In two waves of online data collection, the public’s climate change beliefs, attitudes, risk perceptions, values, policy preferences, conservation, and energy-efficiency behaviors were assessed. The data were subjected to latent class analysis, yielding six groups distinguishable on all the above dimensions. The Alarmed (18%) are fully convinced of the reality and seriousness of climate change and are already taking individual, consumer, and political action to address it. The Concerned (33%) - the largest of the Six Americas - are also convinced that global warming is happening and a serious problem, but have not yet engaged with the issue personally. Three other Americas - the Cautious (19%), the Disengaged (12%) and the Doubtful (11%) - represent different stages of understanding and acceptance of the problem, and none are actively involved. The final America - the Dismissive (7%) - are very sure it is not happening and are actively involved as opponents of a national effort to reduce greenhouse gas emissions. Mitigating climate change will require a diversity of messages, messengers and methods that take into account these differences within the American public. The findings from this research can serve as guideposts for educators on the optimal choices for reaching and influencing target groups with varied informational needs.

  6. EPA RREL'S MOBILE VOLUME REDUCTION UNIT -- APPLICATIONS ANALYSIS REPORT

    EPA Science Inventory

    The volume reduction unit (VRU) is a pilot-scale, mobile soil washing system designed to remove organic contaminants from the soil through particle size separation and solubilization. The VRU removes contaminants by suspending them in a wash solution and by reducing the volume of...

  7. A Genetic Analysis of Brain Volumes and IQ in Children

    ERIC Educational Resources Information Center

    van Leeuwen, Marieke; Peper, Jiska S.; van den Berg, Stephanie M.; Brouwer, Rachel M.; Hulshoff Pol, Hilleke E.; Kahn, Rene S.; Boomsma, Dorret I.

    2009-01-01

    In a population-based sample of 112 nine-year old twin pairs, we investigated the association among total brain volume, gray matter and white matter volume, intelligence as assessed by the Raven IQ test, verbal comprehension, perceptual organization and perceptual speed as assessed by the Wechsler Intelligence Scale for Children-III. Phenotypic…

  8. Tumor Burden Analysis on Computed Tomography by Automated Liver and Tumor Segmentation

    PubMed Central

    Linguraru, Marius George; Richbourg, William J.; Liu, Jianfei; Watt, Jeremy M.; Pamulapati, Vivek; Wang, Shijun; Summers, Ronald M.

    2013-01-01

    The paper presents the automated computation of hepatic tumor burden from abdominal CT images of diseased populations, including images with inconsistent enhancement. The automated segmentation of livers is addressed first. A novel three-dimensional (3D) affine invariant shape parameterization is employed to compare local shape across organs. By generating a regular sampling of the organ's surface, this parameterization can be effectively used to compare features of a set of closed 3D surfaces point-to-point, while avoiding common problems with the parameterization of concave surfaces. From an initial segmentation of the livers, the areas of atypical local shape are determined using training sets. A geodesic active contour locally corrects the segmentations of the livers in abnormal images. Graph cuts segment the hepatic tumors using shape and enhancement constraints. Liver segmentation errors are reduced significantly and all tumors are detected. Finally, support vector machines and feature selection are employed to reduce the number of false tumor detections. A tumor detection true positive fraction of 100% is achieved at 2.3 false positives/case, and the tumor burden is estimated with 0.9% error. Results from the test data demonstrate the method's robustness in analyzing livers from difficult clinical cases, allowing the temporal monitoring of patients with hepatic cancer. PMID:22893379

  9. Volume component analysis for classification of LiDAR data

    NASA Astrophysics Data System (ADS)

    Varney, Nina M.; Asari, Vijayan K.

    2015-03-01

    One of the most difficult challenges of working with LiDAR data is the large number of data points that are produced. Analysing these large data sets is an extremely time-consuming process. For this reason, automatic perception of LiDAR scenes is a growing area of research. Currently, most LiDAR feature extraction relies on geometrical features specific to the point cloud of interest. These geometrical features are scene-specific, and often rely on the scale and orientation of the object for classification. This paper proposes a robust method for reduced dimensionality feature extraction of 3D objects using a volume component analysis (VCA) approach. This VCA approach is based on principal component analysis (PCA). PCA is a method of reduced feature extraction that computes a covariance matrix from the original input vector. The eigenvectors corresponding to the largest eigenvalues of the covariance matrix are used to describe an image. Block-based PCA is an adapted method for feature extraction in facial images because PCA, when performed in local areas of the image, can extract more significant features than can be extracted when the entire image is considered. The image space is split into several of these blocks, and PCA is computed individually for each block. In this VCA, a LiDAR point cloud is represented as a series of voxels whose values correspond to the point density within that relative location. From this voxelized space, block-based PCA is used to analyze sections of the space where the sections, when combined, will represent features of the entire 3-D object. These features are then used as the input to a support vector machine which is trained to identify four classes of objects - vegetation, vehicles, buildings and barriers - with an overall accuracy of 93.8%.
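    The first step of the VCA representation, mapping a point cloud onto a grid of voxels whose values are point densities, can be sketched as follows; the grid size and the toy cloud are illustrative assumptions, not values from the paper.

```python
def voxelize(points, grid=(4, 4, 4)):
    """Map a 3-D point cloud onto a fixed voxel grid; each voxel's value is
    the number of points falling inside it (local point density)."""
    xs, ys, zs = zip(*points)
    mins = (min(xs), min(ys), min(zs))
    maxs = (max(xs), max(ys), max(zs))
    spans = [max(hi - lo, 1e-9) for lo, hi in zip(mins, maxs)]  # avoid /0
    counts = {}
    for p in points:
        # Scale each coordinate into [0, g) and clamp the upper edge.
        idx = tuple(min(int((c - lo) / span * g), g - 1)
                    for c, lo, span, g in zip(p, mins, spans, grid))
        counts[idx] = counts.get(idx, 0) + 1
    return counts

cloud = [(0.1 * i, 0.05 * i, 0.0) for i in range(20)]  # toy diagonal "object"
vox = voxelize(cloud)
```

    Blocks of this density grid would then be flattened into vectors and fed to PCA; in practice a dense array library (e.g. NumPy) would replace the dictionary.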

  10. Airway segmentation and analysis for the study of mouse models of lung disease using micro-CT

    NASA Astrophysics Data System (ADS)

    Artaechevarria, X.; Pérez-Martín, D.; Ceresa, M.; de Biurrun, G.; Blanco, D.; Montuenga, L. M.; van Ginneken, B.; Ortiz-de-Solorzano, C.; Muñoz-Barrutia, A.

    2009-11-01

    Animal models of lung disease are gaining importance in understanding the underlying mechanisms of diseases such as emphysema and lung cancer. Micro-CT allows in vivo imaging of these models, thus permitting the study of the progression of the disease or the effect of therapeutic drugs in longitudinal studies. Automated analysis of micro-CT images can be helpful to understand the physiology of diseased lungs, especially when combined with measurements of respiratory system input impedance. In this work, we present a fast and robust murine airway segmentation and reconstruction algorithm. The algorithm is based on a propagating fast marching wavefront that, as it grows, divides the tree into segments. We devised a number of specific rules to guarantee that the front propagates only inside the airways and to avoid leaking into the parenchyma. The algorithm was tested on normal mice, a mouse model of chronic inflammation and a mouse model of emphysema. A comparison with manual segmentations of two independent observers shows that the specificity and sensitivity values of our method are comparable to the inter-observer variability, and radius measurements of the mainstem bronchi reveal significant differences between healthy and diseased mice. Combining measurements of the automatically segmented airways with the parameters of the constant phase model provides extra information on how disease affects lung function.
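    The propagating wavefront described above can be approximated, in a deliberately simplified form, by a breadth-first front over a binary airway mask; the actual method uses a fast-marching front with additional anatomical rules, which this sketch replaces with a single stay-on-airway-voxels constraint, shown in 2-D for brevity.

```python
from collections import deque

def grow_wavefront(grid, seed):
    """Breadth-first wavefront over 'airway' cells (value 1); a simplified
    stand-in for a fast-marching front. Returns visited cells keyed by
    their wavefront arrival step."""
    rows, cols = len(grid), len(grid[0])
    arrival = {seed: 0}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 1 and (nr, nc) not in arrival):
                # Leak rule: the front only enters airway voxels.
                arrival[(nr, nc)] = arrival[(r, c)] + 1
                queue.append((nr, nc))
    return arrival

# Tiny binary "airway" map: a Y-shaped branching structure.
airway = [
    [0, 1, 0, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0],
]
arrival = grow_wavefront(airway, (0, 1))  # covers all 9 airway cells
```

    Segment boundaries in the paper are detected as the front grows; in this toy version, branch points could be found where a wavefront level set splits into disconnected components.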

  11. Hospital benefit segmentation.

    PubMed

    Finn, D W; Lamb, C W

    1986-12-01

    Market segmentation is an important topic to both health care practitioners and researchers. The authors explore the relative importance that health care consumers attach to various benefits available in a major metropolitan area hospital. The purposes of the study are to test, and provide data to illustrate, the efficacy of one approach to hospital benefit segmentation analysis.

  12. Biomechanical Evaluation of Different Fixation Methods for Mandibular Anterior Segmental Osteotomy Using Finite Element Analysis, Part One: Superior Repositioning Surgery.

    PubMed

    Kilinç, Yeliz; Erkmen, Erkan; Kurt, Ahmet

    2016-01-01

    The aim of the current study was to comparatively evaluate the mechanical behavior of 3 different fixation methods following various amounts of superior repositioning of the mandibular anterior segment. In this study, 3 different rigid fixation configurations comprising double right L, double left L, or double I miniplates with monocortical screws were compared under vertical, horizontal, and oblique load conditions by means of finite element analysis. A three-dimensional finite element model of a fully dentate mandible was generated. Superior repositioning of the mandibular anterior segmental osteotomy by 3 and 5 mm was simulated. Three different finite element models corresponding to the different fixation configurations were created for each superior repositioning. The von Mises stress values on fixation appliances and principal maximum stresses (Pmax) on bony structures were predicted by finite element analysis. The results demonstrated that the double right L configuration provides better stability, with smaller stress fields, than the other fixation configurations used in this study.

  13. An efficient technique for nuclei segmentation based on ellipse descriptor analysis and improved seed detection algorithm.

    PubMed

    Xu, Hongming; Lu, Cheng; Mandal, Mrinal

    2014-09-01

    In this paper, we propose an efficient method for segmenting cell nuclei in skin histopathological images. The proposed technique consists of four modules. First, it separates the nuclei regions from the background with an adaptive threshold technique. Next, an elliptical descriptor is used to detect the isolated nuclei with elliptical shapes. This descriptor classifies the nuclei regions based on two ellipticity parameters. Nuclei clumps and nuclei with irregular shapes are then localized by an improved seed detection technique based on voting in the eroded nuclei regions. Finally, undivided nuclei regions are segmented by a marked watershed algorithm. Experimental results on 114 different image patches indicate that the proposed technique provides superior performance in nuclei detection and segmentation.
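    The abstract does not spell out the two ellipticity parameters; one plausible stand-in, sketched here purely for illustration, is the axis ratio of a region's best-fit ellipse computed from second-order central moments (values near 1 suggest an isolated, roughly circular nucleus; low values an elongated or irregular region).

```python
import math

def ellipse_axis_ratio(region):
    """Minor/major axis ratio of the best-fit ellipse of a pixel region,
    from the eigenvalues of its second-order central moment matrix."""
    n = len(region)
    cx = sum(x for x, _ in region) / n
    cy = sum(y for _, y in region) / n
    mu20 = sum((x - cx) ** 2 for x, _ in region) / n
    mu02 = sum((y - cy) ** 2 for _, y in region) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in region) / n
    common = math.sqrt((mu20 - mu02) ** 2 + 4 * mu11 ** 2)
    lam_max = (mu20 + mu02 + common) / 2  # eigenvalues of the moment matrix
    lam_min = (mu20 + mu02 - common) / 2
    return math.sqrt(lam_min / lam_max)

# A round "nucleus" versus an elongated clump of pixels.
disc = [(x, y) for x in range(-5, 6) for y in range(-5, 6) if x * x + y * y <= 25]
bar = [(x, y) for x in range(20) for y in range(2)]
```

    A classifier could threshold this ratio to route round regions to direct segmentation and irregular ones to the seed-detection stage.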

  14. Concepts and analysis for precision segmented reflector and feed support structures

    NASA Technical Reports Server (NTRS)

    Miller, Richard K.; Thomson, Mark W.; Hedgepeth, John M.

    1990-01-01

    Several issues surrounding the design of a large (20-meter diameter) Precision Segmented Reflector are investigated. The concerns include development of a reflector support truss geometry that will permit deployment into the required doubly-curved shape without significant member strains. For deployable and erectable reflector support trusses, the reduction of structural redundancy was analyzed to achieve reduced weight and complexity for the designs. The stiffness and accuracy of such reduced-member trusses, however, were found to be affected to an unexpected degree. The Precision Segmented Reflector designs were developed with performance requirements that represent the Reflector application. A novel deployable sunshade concept was developed, and a detailed parametric study of various feed support structural concepts was performed. The results of the detailed study reveal what may be the most desirable feed support structure geometry for Precision Segmented Reflector/Large Deployable Reflector applications.

  15. Automated Forensic Animal Family Identification by Nested PCR and Melt Curve Analysis on an Off-the-Shelf Thermocycler Augmented with a Centrifugal Microfluidic Disk Segment

    PubMed Central

    Zengerle, Roland; von Stetten, Felix; Schmidt, Ulrike

    2015-01-01

    Nested PCR remains a labor-intensive and error-prone biomolecular analysis. Laboratory workflow automation by precise control of minute liquid volumes in centrifugal microfluidic Lab-on-a-Chip systems holds great potential for such applications. However, the majority of these systems require costly custom-made processing devices. Our idea is to augment a standard laboratory device, here a centrifugal real-time PCR thermocycler, with inbuilt liquid handling capabilities for automation. We have developed a microfluidic disk segment enabling an automated nested real-time PCR assay for identification of common European animal groups adapted to forensic standards. For the first time we utilize a novel combination of fluidic elements, including pre-storage of reagents, to automate the assay at constant rotational frequency of an off-the-shelf thermocycler. It provides a universal duplex pre-amplification of short fragments of the mitochondrial 12S rRNA and cytochrome b genes, animal-group-specific main-amplifications, and melting curve analysis for differentiation. The system was characterized with respect to assay sensitivity, specificity, risk of cross-contamination, and detection of minor components in mixtures. In 92.2% of the tests performed, sample handling was fluidically failure-free, and these tests were used for evaluation. Altogether, augmentation of the standard real-time thermocycler with a self-contained centrifugal microfluidic disk segment resulted in an accelerated and automated analysis, reducing hands-on time and circumventing the risk of contamination associated with regular nested PCR protocols. PMID:26147196

  16. Automated 3D renal segmentation based on image partitioning

    NASA Astrophysics Data System (ADS)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context and still demands considerable user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The results presented here are kidney segmentations, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data containing 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and the true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
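    The volume-based and size-based measures named above have standard definitions on binary masks; a minimal sketch (not the authors' evaluation code) is:

```python
import numpy as np

def volume_overlap(seg, ref):
    """Similarity between a binary segmentation and a hand-segmented
    reference: Dice and Jaccard coefficients plus the signed relative
    volume difference. A sketch of the measures named in the abstract."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    dice = 2.0 * inter / (seg.sum() + ref.sum())
    jaccard = inter / union
    rvd = (seg.sum() - ref.sum()) / ref.sum()  # relative volume difference
    return float(dice), float(jaccard), float(rvd)
```

    These operate identically on 2D slices or full 3D volumes, matching the paper's slice-independent approach.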

  17. Segmental and Positional Effects on Children's Coda Production: Comparing Evidence from Perceptual Judgments and Acoustic Analysis

    ERIC Educational Resources Information Center

    Theodore, Rachel M.; Demuth, Katherine; Shattuck-Hufnagel, Stephanie

    2012-01-01

    Children's early productions are highly variable. Findings from children's early productions of grammatical morphemes indicate that some of the variability is systematically related to segmental and phonological factors. Here, we extend these findings by assessing 2-year-olds' production of non-morphemic codas using both listener decisions and…

  18. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods

    SciTech Connect

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich

    2012-03-15

    Purpose: Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. Methods: A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Results: Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average.
A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of

  19. Oil-spill risk analysis: Cook inlet outer continental shelf lease sale 149. Volume 2: Conditional risk contour maps of seasonal conditional probabilities. Final report

    SciTech Connect

    Johnson, W.R.; Marshall, C.F.; Anderson, C.M.; Lear, E.M.

    1994-08-01

    The Federal Government has proposed to offer Outer Continental Shelf (OCS) lands in Cook Inlet for oil and gas leasing. Because oil spills may occur from activities associated with offshore oil production, the Minerals Management Service conducts a formal risk assessment. In evaluating the significance of accidental oil spills, it is important to remember that the occurrence of such spills is fundamentally probabilistic. The effects of oil spills that could occur during oil and gas production must be considered. This report summarizes results of an oil-spill risk analysis conducted for the proposed Cook Inlet OCS Lease Sale 149. The objective of this analysis was to estimate relative risks associated with oil and gas production for the proposed lease sale. To aid the analysis, conditional risk contour maps of seasonal conditional probabilities of spill contact were generated for each environmental resource or land segment in the study area. This aspect is discussed in this volume of the two-volume report.

  20. Earth Observatory Satellite system definition study. Report 5: System design and specifications. Volume 4: Mission peculiar spacecraft segment and module specifications

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The specifications for the Earth Observatory Satellite (EOS) peculiar spacecraft segment and associated subsystems and modules are presented. The specifications considered include the following: (1) wideband communications subsystem module, (2) mission peculiar software, (3) hydrazine propulsion subsystem module, (4) solar array assembly, and (5) the scanning spectral radiometer.

  1. Solvent transport through hard-soft segmented polymer nanocomposites.

    PubMed

    Rath, Sangram K; Edatholath, Saji S; Patro, T Umasankar; Sudarshan, Kathi; Sastry, P U; Pujari, Pradeep K; Harikrishnan, G

    2016-01-28

    We conducted transport studies of a common solvent (toluene) in its condensed state through a model hard-soft segmented polyurethane-clay nanocomposite. The solvent diffusivity is observed to vary non-monotonically with filler volume fraction. In stark contrast, both classical tortuous-path geometric calculations and free volume measurements suggest the normally expected monotonic decrease in diffusivity with increasing clay volume fraction. Large deviations between experimentally observed diffusion coefficients and those estimated from geometric theory are also observed. However, the equilibrium swelling of the nanocomposite, as indicated by the solubility coefficient, did not change. To gain insight into the solvent interaction behaviour, we conducted a pre- and post-swollen segmented phase analysis of the pure polymers and nanocomposites. We find that in a nanocomposite, the solvent has to interact with a filler-altered hard-soft segmented morphology. In the altered phase-separated morphology, the spatial distribution of thermodynamically segmented hard blocks in the continuous soft matrix becomes a strong function of filler concentration. Upon solvent interaction, this spatial distribution is reoriented by sorption and de-clustering. The results indicate strong non-barrier influences of nanoscale fillers dispersed in phase-segmented block copolymers on the solvent diffusivity through them. Based on pre- and post-swollen morphological observations, we postulate a possible mechanism for the non-monotonic behaviour of solvent transport in hard-soft segmented copolymers in which the thermodynamic phase separation is influenced by the filler.
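    Diffusivities in sorption experiments like this are commonly extracted from the early-time Fickian relation for a film; the sketch below uses the textbook formula and is not a restatement of the paper's exact fitting procedure:

```python
import math

def fickian_diffusivity(slope, thickness):
    """Early-time Fickian sorption in a film of thickness l obeys
    M_t/M_inf = (4/l) * sqrt(D*t/pi), so from the slope of M_t/M_inf
    versus sqrt(t) one recovers D = pi * (slope * l / 4)**2.
    A standard estimate; the study's procedure may differ."""
    return math.pi * (slope * thickness / 4.0) ** 2
```

    A round trip (pick D, compute the implied slope, recover D) confirms the algebra.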

  2. Determination of fiber volume in graphite/epoxy materials using computer image analysis

    NASA Technical Reports Server (NTRS)

    Viens, Michael J.

    1990-01-01

    The fiber volume of graphite/epoxy specimens was determined by analyzing optical images of cross sectioned specimens using image analysis software. Test specimens were mounted and polished using standard metallographic techniques and examined at 1000 times magnification. Fiber volume determined using the optical imaging agreed well with values determined using the standard acid digestion technique. The results were found to agree within 5 percent over a fiber volume range of 45 to 70 percent. The error observed is believed to arise from fiber volume variations within the graphite/epoxy panels themselves. The determination of ply orientation using image analysis techniques is also addressed.
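    The core of such an image-analysis measurement is an area-fraction count over a thresholded micrograph. A minimal sketch, assuming a simple global intensity threshold (the study's software and threshold selection are not specified here):

```python
import numpy as np

def fiber_area_fraction(image, threshold):
    """Fraction of pixels at or above an intensity threshold, counted as
    fiber. On a polished cross section this area fraction estimates the
    fiber volume fraction. Illustrative; not the original software."""
    return float((image >= threshold).sum()) / image.size
```

    For example, a synthetic frame in which 60% of pixels are bright yields a fiber fraction of 0.6.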

  3. Analysis of a Segmented Annular Coplanar Capacitive Tilt Sensor with Increased Sensitivity

    PubMed Central

    Guo, Jiahao; Hu, Pengcheng; Tan, Jiubin

    2016-01-01

    An investigation of a segmented annular coplanar capacitor is presented. We focus on its theoretical model, and a mathematical expression for the capacitance is derived by solving a Laplace equation with a Hankel transform. The finite element method is employed to verify the analytical result. Different control parameters are discussed, and the contribution of each to the capacitance is obtained. On this basis, we analyze and optimize the structural parameters of a segmented coplanar capacitive tilt sensor, and three models with different positions of the electrode gap are fabricated and tested. The experimental results show that the model whose electrode-gap position is 10 mm from the electrode center achieves a high sensitivity of 0.129 pF/° with a non-linearity of <0.4% FS (full scale of ±40°). This finding offers plenty of opportunities for various measurement requirements in addition to achieving an optimized structure in practical design. PMID:26805844

  4. Design and Analysis of Modules for Segmented X-Ray Optics

    NASA Technical Reports Server (NTRS)

    McClelland, Ryan S.; Biskach, Michael P.; Chan, Kai-Wing; Saha, Timo T.; Zhang, William W.

    2012-01-01

    Future X-ray astronomy missions demand thin, light, and closely packed optics which lend themselves to segmentation of the annular mirrors and, in turn, a modular approach to the mirror design. The modular approach to X-ray Flight Mirror Assembly (FMA) design allows excellent scalability of the mirror technology to support a variety of mission sizes and science objectives. This paper describes FMA designs using slumped glass mirror segments for several X-ray astrophysics missions studied by NASA and explores the driving requirements and subsequent verification tests necessary to qualify a slumped glass mirror module for space-flight. A rigorous testing program is outlined allowing Technical Development Modules to reach technical readiness for mission implementation while reducing mission cost and schedule risk.

  5. Shape-Constrained Segmentation Approach for Arctic Multiyear Sea Ice Floe Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Brucker, Ludovic; Ivanoff, Alvaro; Tilton, James C.

    2013-01-01

    The melting of sea ice is correlated to increases in sea surface temperature and associated climatic changes. Therefore, it is important to investigate how rapidly sea ice floes melt. For this purpose, a new TempoSeg method for multitemporal segmentation of multiyear ice floes is proposed. The microwave radiometer is used to track the position of an ice floe. Then, a time series of MODIS images is created with the ice floe in the image center. The TempoSeg method segments these images into two regions: Floe and Background. First, morphological feature extraction is applied. Then, the central image pixel is marked as Floe, and shape-constrained best-merge region growing is performed. The resulting two-region map is post-filtered by applying morphological operators. We have successfully tested our method on a set of MODIS images and estimated the area of a sea ice floe as a function of time.
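    The morphological post-filtering step can be sketched with elementary binary operations; this illustrative opening uses a 4-neighbour structuring element, which is an assumption, not the paper's exact operator:

```python
import numpy as np

def dilate4(mask):
    """Binary dilation with a 4-neighbour (plus-shaped) structuring element."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def open4(mask):
    """Morphological opening (erosion then dilation), the kind of
    post-filter applied to a two-region Floe/Background map to remove
    speckle. Erosion is implemented as dilation of the complement."""
    return dilate4(~dilate4(~mask))
```

    Opening removes isolated single-pixel detections while retaining the body of a compact region.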

  6. Modeling and analysis of passive dynamic bipedal walking with segmented feet and compliant joints

    NASA Astrophysics Data System (ADS)

    Huang, Yan; Wang, Qi-Ning; Gao, Yue; Xie, Guang-Ming

    2012-10-01

    Passive dynamic walking has been developed as a possible explanation for the efficiency of the human gait. This paper presents a passive dynamic walking model with segmented feet, which brings the bipedal walking gait closer to a natural human-like gait. The proposed model extends the simplest walking model with the addition of flat feet and torsional-spring-based compliance at the ankle and toe joints, to achieve stable walking on a slope driven by gravity. The push-off phase includes foot rotations around the toe joint and around the toe tip, which shows a great resemblance to normal human walking. This paper investigates the effects of the segmented foot structure on bipedal walking in simulations. The model achieves satisfactory walking results on even and uneven slopes.

  7. Growth and morphological analysis of segmented AuAg alloy nanowires created by pulsed electrodeposition in ion-track etched membranes

    PubMed Central

    Burr, Loic; Trautmann, Christina; Toimil-Molares, Maria Eugenia

    2015-01-01

    Summary Background: Multicomponent heterostructure nanowires and nanogaps are of great interest for applications in sensing. Pulsed electrodeposition in ion-track etched polymer templates is a suitable method to synthesise segmented nanowires whose segments consist of two different materials. For a well-controlled synthesis process, detailed analysis of the deposition parameters and the size distribution of the segmented wires is crucial. Results: A process was developed for the fabrication of electrodeposited AuAg alloy nanowires and segmented Au-rich/Ag-rich/Au-rich nanowires with controlled composition and segment length in ion-track etched polymer templates. Detailed analysis by cyclic voltammetry in ion-track membranes, energy-dispersive X-ray spectroscopy, and scanning electron microscopy was performed to determine the dependence of segment composition on the chosen potential. Additionally, we have dissolved the middle Ag-rich segments in order to create small nanogaps with controlled gap sizes. Annealing of the created structures allows us to influence their morphology. Conclusion: AuAg alloy nanowires, segmented wires, and nanogaps with controlled composition and size can be synthesised by electrodeposition in membranes, and are ideal model systems for the investigation of surface plasmons. PMID:26199830

  8. Brain MRI Segmentation with Multiphase Minimal Partitioning: A Comparative Study

    PubMed Central

    Angelini, Elsa D.; Song, Ting; Mensh, Brett D.; Laine, Andrew F.

    2007-01-01

    This paper presents the implementation and quantitative evaluation of a multiphase three-dimensional deformable model in a level set framework for automated segmentation of brain MRIs. The segmentation algorithm performs an optimal partitioning of three-dimensional data based on homogeneity measures that naturally evolves to the extraction of different tissue types in the brain. Random seed initialization was used to minimize the sensitivity of the method to initial conditions while avoiding the need for a priori information. This random initialization ensures robustness of the method with respect to the initialization and the minimization setup. Postprocessing corrections with morphological operators were applied to refine the details of the global segmentation method. A clinical study was performed on a database of 10 adult brain MRI volumes to compare the level set segmentation to three other methods: “idealized” intensity thresholding, fuzzy connectedness, and an expectation maximization classification using hidden Markov random fields. Quantitative evaluation of segmentation accuracy was performed with comparison to manual segmentation computing true positive and false positive volume fractions. A statistical comparison of the segmentation methods was performed through a Wilcoxon analysis of these error rates, and results showed very high quality and stability of the multiphase three-dimensional level set method. PMID:18253474
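    The true positive and false positive volume fractions used for the accuracy evaluation can be sketched as follows; the normalisation shown is one common convention and is assumed rather than quoted from the paper:

```python
import numpy as np

def tp_fp_volume_fractions(seg, ref):
    """True/false positive volume fractions of a segmentation against a
    manual reference: TPVF = |S and R| / |R|, FPVF = |S and not-R| / |not-R|.
    One common convention; the paper's exact normalisation may differ."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    tpvf = np.logical_and(seg, ref).sum() / ref.sum()
    fpvf = np.logical_and(seg, ~ref).sum() / (~ref).sum()
    return float(tpvf), float(fpvf)
```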

  9. Quantitative morphological analysis of curvilinear network for microscopic image based on individual fibre segmentation (IFS).

    PubMed

    Qiu, J; Li, F-F

    2014-12-01

    Microscopic images of curvilinear fibre network structures like the cytoskeleton are traditionally analysed by qualitative observation, which can hardly provide quantitative information on their morphological properties. However, such information is crucially contributive to the understanding of important biological events, and even helps to uncover inner relations that are hard to perceive. The individual fibre segmentation (IFS)-based curvilinear structure detector proposed in this study can identify each individual fibre in the network, as well as connections between different fibres. Quantitative information on each individual fibre, including length, orientation and position, can be extracted; so can the connecting modes in the fibre network, such as bifurcation, intersection and overlap. The distribution of fibres with different morphological properties is also presented. No manual intervention or subjective judgement is required in the analysis. Both synthesized and experimental microscopic images have verified that the detector is capable of segmenting curvilinear networks at the subcellular level with strong noise immunity. The proposed detector is finally applied to a morphological study of the cytoskeleton. It is believed that the individual fibre segmentation-based curvilinear structure detector can greatly enhance our understanding of the biological images generated by the vast numbers of biological experiments performed. PMID:25243901
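    Two of the per-fibre properties mentioned, length and orientation, follow directly from a fibre's traced pixel coordinates. A sketch under that assumption (illustrative, not the authors' code):

```python
import math

def fibre_length_orientation(points):
    """Arc length and end-to-end orientation (degrees, in [0, 180)) of one
    traced fibre given as a list of (x, y) pixel coordinates."""
    length = sum(math.dist(p, q) for p, q in zip(points, points[1:]))
    (x0, y0), (x1, y1) = points[0], points[-1]
    orientation = math.degrees(math.atan2(y1 - y0, x1 - x0)) % 180.0
    return length, orientation
```

    For a straight two-point fibre from (0, 0) to (3, 4), the length is 5 pixels and the orientation is atan2(4, 3) in degrees.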

  10. Analysis of flexible aircraft longitudinal dynamics and handling qualities. Volume 1: Analysis methods

    NASA Technical Reports Server (NTRS)

    Waszak, M. R.; Schmidt, D. S.

    1985-01-01

    As aircraft become larger and lighter due to design requirements for increased payload and improved fuel efficiency, they will also become more flexible. For highly flexible vehicles, the handling qualities may not be accurately predicted by conventional methods. This study applies two analysis methods to a family of flexible aircraft in order to investigate how and when structural (especially dynamic aeroelastic) effects affect the dynamic characteristics of aircraft. The first is an open-loop modal analysis technique. This method considers the effects of modal residue magnitudes on determining vehicle handling qualities. The second is a pilot-in-the-loop analysis procedure that considers several closed-loop system characteristics. Volume 1 consists of the development and application of the two analysis methods described above.

  11. Organizational Communication: Abstracts, Analysis, and Overview. Volume 6.

    ERIC Educational Resources Information Center

    Greenbaum, Howard H.; Falcione, Raymond L.

    This annual volume of organizational communication abstracts presents over 1,100 abstracts of the literature on organizational communication occurring in 1979. An introductory chapter explains the classification systems, provides operational definitions of terms, and concedes the shortcomings of the research effort. An overview chapter comments…

  12. Verification of fault tree analysis. Volume 1: experiments and results

    SciTech Connect

    Rothbart, G.; Fullwood, R.; Basin, S.; Newt, J.; Escalera, J.

    1981-05-01

    Volume 1 describes the development of the EPRI Reliability and Maintainability Analyzer (ERMA), an electronic instrument for simulating the reliability of complex systems. The operating concept of ERMA, verification of its statistical behavior, and applications to system models of varying complexity are summarized. The ERMA simulation results are compared to the results from the fault tree codes, which use equivalent system models.
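    The kind of cross-check described, simulation versus analytic fault tree results, can be illustrated with a Monte Carlo estimate of a toy fault tree's top-event probability. The gate structure below is hypothetical, chosen only so the exact answer is easy to compute:

```python
import random

def top_event_probability(p_a, p_b, p_c, trials=200_000, seed=7):
    """Monte Carlo reliability simulation of a toy fault tree
    TOP = A OR (B AND C). Exact answer: p_a + (1 - p_a) * p_b * p_c.
    Illustrative of simulation-vs-fault-tree verification, not ERMA itself."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a = rng.random() < p_a
        b = rng.random() < p_b
        c = rng.random() < p_c
        hits += a or (b and c)
    return hits / trials
```

    With basic-event probabilities 0.1, 0.2, 0.3, the estimate converges on the analytic value 0.154.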

  13. Analysis of the Relationship between Hypertrophy of the Ligamentum Flavum and Lumbar Segmental Motion with Aging Process

    PubMed Central

    Yoshiiwa, Toyomi; Kawano, Masanori; Ikeda, Shinichi; Tsumura, Hiroshi

    2016-01-01

    Study Design Retrospective cross-sectional study. Purpose To investigate the relationship between ligamentum flavum (LF) hypertrophy and lumbar segmental motion. Overview of Literature The pathogenesis of LF thickening is unclear, and whether the thickening results from tissue hypertrophy or buckling remains controversial. Methods A total of 296 consecutive patients underwent assessment of the lumbar spine by radiographic and magnetic resonance imaging (MRI). Of these patients, 39 with normal L4–L5 disc height were selected to exclude LF buckling as one component of LF hypertrophy. The study group included 27 men and 12 women, with an average age of 61.2 years (range, 23–81 years). Disc degeneration and LF thickness were quantified on MRI. Lumbar segmental spine instability and the presence of a vacuum phenomenon were identified on radiographic images. Results The distribution of disc degeneration and LF thickness included grade II degeneration in 4 patients, with a mean LF thickness of 2.43±0.20 mm; grade III in 10 patients, 3.01±0.41 mm; and grade IV in 25 patients, 4.16±1.12 mm. LF thickness significantly increased with grade of disc degeneration and was significantly correlated with age (r=0.55, p<0.01). Logistic regression analysis identified predictive effects of segmental angulation (odds ratio [OR]=1.55, p=0.014) and age (OR=1.16, p=0.008). Conclusions Age-related increases in disc degeneration, combined with continuous lumbar segmental flexion-extension motion, lead to the development of LF hypertrophy. PMID:27340534
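    The reported odds ratios relate to logistic regression coefficients by OR = exp(b). A sketch of that relationship follows; the intercept and predictor values are illustrative, since the abstract reports only the ORs:

```python
import math

def predicted_probability(intercept, coefs, values):
    """Logistic model: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i)))).
    exp(b_i) is the odds ratio per unit increase in predictor i, so the
    reported OR = 1.55 for segmental angulation corresponds to b = ln 1.55.
    Intercept and inputs here are hypothetical, not fitted values."""
    z = intercept + sum(b * x for b, x in zip(coefs, values))
    return 1.0 / (1.0 + math.exp(-z))
```

    With a zero intercept, a one-unit increase in a predictor with OR 1.55 moves the predicted probability from 0.5 to 1.55/2.55.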

  14. Prevalence and Distribution of Segmentation Errors in Macular Ganglion Cell Analysis of Healthy Eyes Using Cirrus HD-OCT

    PubMed Central

    Alshareef, Rayan A.; Dumpala, Sunila; Rapole, Shruthi; Januwada, Manideepak; Goud, Abhilash; Peguda, Hari Kumar; Chhablani, Jay

    2016-01-01

    Purpose To determine the frequency of different types of spectral domain optical coherence tomography (SD-OCT) scan artifacts and errors in the ganglion cell algorithm (GCA) in healthy eyes. Methods The infrared image, the color-coded map, and each of the 128 horizontal B-scans acquired in macular ganglion cell-inner plexiform layer scans using the Cirrus HD-OCT (Carl Zeiss Meditec, Dublin, CA) macular cube 512 × 128 protocol were evaluated in 30 healthy normal eyes. The frequency and pattern of each artifact were determined. Deviation of the segmentation line was classified as mild (less than 10 microns), moderate (10–50 microns), or severe (more than 50 microns). Each deviation, if present, was noted as an upward or downward deviation. Each artifact was further described by its location on the scan and by zone within the total scan area. Results A total of 1029 (26.8%) of the 3840 scans had scan errors. The most common scan error was segmentation error (100%), followed by degraded images (6.70%), blink artifacts (0.09%) and out-of-register artifacts (3.3%). Misidentification of the inner retinal layers was most frequent (62%). Upward deviation of the segmentation line (47.91%) and severe deviation (40.3%) were noted most often. Artifacts were mostly located in the central scan area (16.8%). The average number of scans with artifacts per eye was 34.3% and was not related to signal strength on Spearman correlation (p = 0.36). Conclusions This study reveals that image artifacts and scan errors in SD-OCT GCA analysis are common and frequently involve segmentation errors. These errors may affect inner retinal thickness measurements in a clinically significant manner. Careful review of scans for artifacts is important when using this feature of an SD-OCT device. PMID:27191396
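    The severity grading defined in the Methods maps directly onto a small helper; this sketch encodes the study's stated thresholds:

```python
def classify_deviation(microns):
    """Grade a segmentation-line deviation using the study's thresholds:
    mild (<10 microns), moderate (10-50 microns), severe (>50 microns)."""
    if microns < 10:
        return "mild"
    if microns <= 50:
        return "moderate"
    return "severe"
```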

  15. Earth Observatory Satellite system definition study. Report 5: System design and specifications. Volume 3: General purpose spacecraft segment and module specifications

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The specifications for the Earth Observatory Satellite (EOS) general purpose spacecraft segment are presented. The satellite is designed to provide attitude stabilization, electrical power, and a communications data handling subsystem which can support various mission peculiar subsystems. The specifications considered include the following: (1) structures subsystem, (2) thermal control subsystem, (3) communications and data handling subsystem module, (4) attitude control subsystem module, (5) power subsystem module, and (6) electrical integration subsystem.

  16. Multi-resolution Shape Analysis via Non-Euclidean Wavelets: Applications to Mesh Segmentation and Surface Alignment Problems.

    PubMed

    Kim, Won Hwa; Chung, Moo K; Singh, Vikas

    2013-01-01

    The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis to derive Non-Euclidean Wavelets based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.
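    The dual-domain, multi-scale descriptors discussed above can be sketched in the spirit of spectral graph wavelets: eigendecompose the mesh graph's Laplacian and weight the spectrum with a band-pass kernel at several scales. The kernel g(x) = x·exp(-x) and the construction below are assumptions for illustration, not the authors' exact formulation:

```python
import numpy as np

def graph_laplacian(n, edges):
    """Combinatorial Laplacian L = D - A of an undirected graph (mesh)."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def wavelet_descriptor(L, vertex, scales):
    """Per-vertex multi-scale descriptor: for each scale s, compute
    sum_k g(s * lam_k) * u_k[vertex]**2 over Laplacian eigenpairs
    (lam_k, u_k), using the band-pass kernel g(x) = x * exp(-x)."""
    lam, U = np.linalg.eigh(L)
    return [float(np.sum((s * lam) * np.exp(-s * lam) * U[vertex] ** 2))
            for s in scales]
```

    On a 4-vertex path graph, the descriptor at an endpoint is a vector of one nonnegative value per scale; the constant eigenvector contributes nothing since g(0) = 0.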

  17. Cargo Logistics Airlift Systems Study (CLASS). Volume 1: Analysis of current air cargo system

    NASA Technical Reports Server (NTRS)

    Burby, R. J.; Kuhlman, W. H.

    1978-01-01

    The material presented in this volume is classified into the following sections: (1) analysis of current routes; (2) air eligibility criteria; (3) current direct support infrastructure; (4) comparative mode analysis; (5) political and economic factors; and (6) future potential market areas. An effort was made to keep the observations and findings relating to the current systems as objective as possible in order not to bias the analysis of future air cargo operations reported in Volume 3 of the CLASS final report.

  18. A Rapid and Efficient 2D/3D Nuclear Segmentation Method for Analysis of Early Mouse Embryo and Stem Cell Image Data

    PubMed Central

    Lou, Xinghua; Kang, Minjung; Xenopoulos, Panagiotis; Muñoz-Descalzo, Silvia; Hadjantonakis, Anna-Katerina

    2014-01-01

    Summary Segmentation is a fundamental problem that dominates the success of microscopic image analysis. In almost 25 years of cell detection software development, there is still no single piece of commercial software that works well in practice when applied to early mouse embryo or stem cell image data. To address this need, we developed MINS (modular interactive nuclear segmentation) as a MATLAB/C++-based segmentation tool tailored for counting cells and fluorescent intensity measurements of 2D and 3D image data. Our aim was to develop a tool that is accurate and efficient yet straightforward and user friendly. The MINS pipeline comprises three major cascaded modules: detection, segmentation, and cell position classification. An extensive evaluation of MINS on both 2D and 3D images, and comparison to related tools, reveals improvements in segmentation accuracy and usability. Thus, its accuracy and ease of use will allow MINS to be implemented for routine single-cell-level image analyses. PMID:24672759

  19. Computer-aided segmentation and 3D analysis of in vivo MRI examinations of the human vocal tract during phonation

    NASA Astrophysics Data System (ADS)

    Wismüller, Axel; Behrends, Johannes; Hoole, Phil; Leinsinger, Gerda L.; Meyer-Baese, Anke; Reiser, Maximilian F.

    2008-03-01

    We developed, tested, and evaluated a 3D segmentation and analysis system for in vivo MRI examinations of the human vocal tract during phonation. For this purpose, six professionally trained speakers, aged 22-34 years, were examined using a standardized MRI protocol (1.5 T, T1w FLASH, ST 4mm, 23 slices, acq. time 21s). The volunteers performed a prolonged (>=21s) emission of sounds of the German phonemic inventory. Simultaneous audio tape recording was obtained to verify correct utterance. Scans were made in each of the axial, coronal, and sagittal planes. Computer-aided quantitative 3D evaluation included (i) automated registration of the phoneme-specific data acquired in different slice orientations, (ii) semi-automated segmentation of oropharyngeal structures, (iii) computation of a curvilinear vocal tract midline in 3D by nonlinear PCA, and (iv) computation of cross-sectional areas of the vocal tract perpendicular to this midline. For the vowels /a/,/e/,/i/,/o/,/ø/,/u/,/y/, the extracted area functions were used to synthesize phoneme sounds based on an articulatory-acoustic model. For quantitative analysis, recorded and synthesized phonemes were compared, with area functions extracted from 2D midsagittal slices used as a reference. All vowels could be identified correctly based on the synthesized phoneme sounds. The comparison between synthesized and recorded vowel phonemes revealed that the quality of phoneme sound synthesis was improved for the phonemes /a/ and /y/ if 3D instead of 2D data were used, as measured by the average relative frequency shift between recorded and synthesized vowel formants (p<0.05, one-sided Wilcoxon rank sum test). In summary, the combination of fast MRI followed by 3D segmentation and analysis is a novel approach to examining human phonation in vivo. It unveils functional anatomical findings that may be essential for realistic modelling of the human vocal tract during speech production.
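    The comparison metric named above, the average relative frequency shift between recorded and synthesized formants, can be sketched as follows; the exact formula used in the study is not restated in the abstract, so the mean of |f_syn - f_rec| / f_rec is an assumption:

```python
def mean_relative_formant_shift(recorded, synthesized):
    """Average relative frequency shift between recorded and synthesized
    vowel formants (in Hz). Assumed form: mean of |f_syn - f_rec| / f_rec."""
    shifts = [abs(s - r) / r for r, s in zip(recorded, synthesized)]
    return sum(shifts) / len(shifts)
```

    For example, formants of 700 and 1200 Hz synthesized at 770 and 1140 Hz give shifts of 10% and 5%, averaging 7.5%.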

  20. Knowledge-based 3D segmentation of the brain in MR images for quantitative multiple sclerosis lesion tracking

    NASA Astrophysics Data System (ADS)

    Fisher, Elizabeth; Cothren, Robert M., Jr.; Tkach, Jean A.; Masaryk, Thomas J.; Cornhill, J. Fredrick

    1997-04-01

    Brain segmentation in magnetic resonance (MR) images is an important step in quantitative analysis applications, including the characterization of multiple sclerosis (MS) lesions over time. Our approach is based on a priori knowledge of the intensity and three-dimensional (3D) spatial relationships of structures in MR images of the head. Optimal thresholding and connected-components analysis are used to generate a starting point for segmentation. A 3D radial search is then performed to locate probable locations of the intra-cranial cavity (ICC). Missing portions of the ICC surface are interpolated in order to exclude connected structures. Partial volume effects and inter-slice intensity variations in the image are accounted for automatically. Several studies were conducted to validate the segmentation. Accuracy was tested by calculating the segmented volume and comparing to known volumes of a standard MR phantom. Reliability was tested by comparing calculated volumes of individual segmentation results from multiple images of the same subject. The segmentation results were also compared to manual tracings. The average error in volume measurements for the phantom was 1.5% and the average coefficient of variation of brain volume measurements of the same subject was 1.2%. Since the new algorithm requires minimal user interaction, variability introduced by manual tracing and interactive threshold or region selection was eliminated. Overall, the new algorithm was shown to produce a more accurate and reliable brain segmentation than existing manual and semi-automated techniques.
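    The "optimal thresholding and connected-components" starting point described above can be sketched with a breadth-first component search on a thresholded mask. This is a simplified 2D illustration of the general idea, not the authors' 3D implementation:

```python
import numpy as np
from collections import deque

def largest_component(binary):
    """Return the mask of the largest 4-connected component of a 2D binary
    image via BFS, the kind of connected-components step used to seed a
    brain segmentation after thresholding."""
    rows, cols = binary.shape
    visited = np.zeros_like(binary, dtype=bool)
    best = []
    for sx in range(rows):
        for sy in range(cols):
            if binary[sx, sy] and not visited[sx, sy]:
                comp, q = [], deque([(sx, sy)])
                visited[sx, sy] = True
                while q:
                    i, j = q.popleft()
                    comp.append((i, j))
                    for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                        if (0 <= ni < rows and 0 <= nj < cols
                                and binary[ni, nj] and not visited[ni, nj]):
                            visited[ni, nj] = True
                            q.append((ni, nj))
                if len(comp) > len(best):
                    best = comp
    mask = np.zeros_like(binary, dtype=bool)
    for i, j in best:
        mask[i, j] = True
    return mask
```

    Given a mask containing a 4-pixel blob and a 9-pixel blob, the function keeps only the 9-pixel one.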

  1. Industrial process heat data analysis and evaluation. Volume 2

    SciTech Connect

    Lewandowski, A; Gee, R; May, K

    1984-07-01

    The Solar Energy Research Institute (SERI) has modeled seven of the Department of Energy (DOE) sponsored solar Industrial Process Heat (IPH) field experiments and has generated thermal performance predictions for each project. Additionally, these performance predictions have been compared with actual performance measurements taken at the projects. Predictions were generated using SOLIPH, an hour-by-hour computer code with the capability for modeling many types of solar IPH components and system configurations. Comparisons of reported and predicted performance resulted in good agreement when the field test reliability and availability was high. Volume I contains the main body of the work; objective model description, site configurations, model results, data comparisons, and summary. Volume II contains complete performance prediction results (tabular and graphic output) and computer program listings.

  2. Intracranial compensatory mechanisms for volume perturbations: a theoretical analysis.

    PubMed

    Balachandra, S; Anand, S

    1993-06-01

    The proposed mathematical formulation accounts for the role of the absorption and production mechanisms of the intracranial cavity. Conduction across the transport barriers is governed by the pressure gradients across them and hence by the instantaneous flow rates. These mechanisms have now been incorporated into a previous model for static changes in the cranial cavity. The integrated model is simulated for constant, bolus, and sinusoidal infusions, and the output has been correlated with experimentally observed trends. The results that emerge point to a system whose response is sensitive to the nature of CSF volume perturbations. The production and absorption mechanisms function in a relay configuration whose primary objective is to maintain the baseline CSF pressure when deviations in pressure occur. These mechanisms have a finite activation time that depends on the nature of the volume variation.

  3. Texture analysis of automatic graph cuts segmentations for detection of lung cancer recurrence after stereotactic radiotherapy

    NASA Astrophysics Data System (ADS)

    Mattonen, Sarah A.; Palma, David A.; Haasbeek, Cornelis J. A.; Senan, Suresh; Ward, Aaron D.

    2015-03-01

    Stereotactic ablative radiotherapy (SABR) is a treatment for early-stage lung cancer with local control rates comparable to surgery. After SABR, benign radiation induced lung injury (RILI) results in tumour-mimicking changes on computed tomography (CT) imaging. Distinguishing recurrence from RILI is a critical clinical decision determining the need for potentially life-saving salvage therapies whose high risks in this population dictate their use only for true recurrences. Current approaches do not reliably detect recurrence within a year post-SABR. We measured the detection accuracy of texture features within automatically determined regions of interest, with the only operator input being the single line segment measuring tumour diameter, normally taken during the clinical workflow. Our leave-one-out cross validation on images taken 2-5 months post-SABR showed robustness of the entropy measure, with classification error of 26% and area under the receiver operating characteristic curve (AUC) of 0.77 using automatic segmentation; the results using manual segmentation were 24% and 0.75, respectively. AUCs for this feature increased to 0.82 and 0.93 at 8-14 months and 14-20 months post SABR, respectively, suggesting even better performance nearer to the date of clinical diagnosis of recurrence; thus this system could also be used to support and reinforce the physician's decision at that time. Based on our ongoing validation of this automatic approach on a larger sample, we aim to develop a computer-aided diagnosis system which will support the physician's decision to apply timely salvage therapies and prevent patients with RILI from undergoing invasive and risky procedures.
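The validation scheme above, leave-one-out cross-validation of a single texture feature scored by AUC, can be sketched as follows. The nearest-mean scorer and rank-sum AUC here are simplified stand-ins for the classifier actually used:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # fraction of (positive, negative) pairs ranked correctly; ties count half
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def loocv_scores(feature, labels):
    """Leave-one-out cross-validation of a one-feature nearest-mean classifier:
    each case is scored against class means estimated from all other cases."""
    feature, labels = np.asarray(feature, float), np.asarray(labels)
    scores = np.empty(len(feature))
    for i in range(len(feature)):
        mask = np.ones(len(feature), bool)
        mask[i] = False
        m0 = feature[mask & (labels == 0)].mean()
        m1 = feature[mask & (labels == 1)].mean()
        # higher score = closer to the class-1 (recurrence) mean
        scores[i] = abs(feature[i] - m0) - abs(feature[i] - m1)
    return scores
```

With an entropy value per region of interest as `feature` and recurrence status as `labels`, `auc(loocv_scores(...), labels)` yields the kind of cross-validated AUC reported in the abstract.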

  4. Texture-based segmentation and analysis of emphysema depicted on CT images

    NASA Astrophysics Data System (ADS)

    Tan, Jun; Zheng, Bin; Wang, Xingwei; Lederman, Dror; Pu, Jiantao; Sciurba, Frank C.; Gur, David; Leader, J. Ken

    2011-03-01

    In this study we present a two-step texture-based method for segmenting emphysema depicted on CT examinations. In step 1, fractal-dimension-based texture feature extraction is used to initially detect base regions of emphysema; a threshold is applied to the texture result image to obtain the initial base regions. In step 2, the base regions are refined pixel by pixel using a method that considers the variance change incurred by adding a pixel to the base region. Visual inspection revealed a reasonable segmentation of the emphysema regions. There were strong correlations between lung function (FEV1%, FEV1/FVC, and DLCO%) and the fraction of emphysema computed using the texture-based method: -0.433, -0.629, and -0.527, respectively. The texture-based method produced more homogeneous emphysematous regions compared with simple thresholding, especially for large bullae, which can appear as speckled regions in the threshold approach. In the texture-based method, single isolated pixels are considered emphysema only if neighboring pixels meet certain criteria, which supports the idea that a single isolated pixel may not be sufficient evidence that emphysema is present. One of the strengths of our texture-based approach to emphysema segmentation is that it goes beyond existing approaches, which typically extract single or grouped texture features and analyze the features individually: we first identify potential regions of emphysema and then refine the boundaries of the detected regions based on texture patterns.
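A minimal sketch of a fractal-dimension texture feature, using box counting on a binary region; the abstract does not specify the exact estimator, so this is illustrative only:

```python
import numpy as np

def box_counting_dimension(binary_img):
    """Estimate the fractal (box-counting) dimension of a binary 2D region:
    slope of log(box count) versus log(1/box size)."""
    img = np.asarray(binary_img, bool)
    size = min(img.shape)
    scales, counts = [], []
    s = size // 2
    while s >= 1:
        # count boxes of side s containing at least one foreground pixel
        h = (img.shape[0] // s) * s
        w = (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        n = blocks.any(axis=(1, 3)).sum()
        if n > 0:
            scales.append(s)
            counts.append(n)
        s //= 2
    coeffs = np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)
    return coeffs[0]
```

A filled region gives a dimension near 2 and a thin curve near 1; a fractal feature map built from local estimates of this kind is the sort of texture image that step 1 thresholds.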

  5. Functional analysis of centipede development supports roles for Wnt genes in posterior development and segment generation.

    PubMed

    Hayden, Luke; Schlosser, Gerhard; Arthur, Wallace

    2015-01-01

    The genes of the Wnt family play important and highly conserved roles in posterior growth and development in a wide range of animal taxa. Wnt genes also operate in arthropod segmentation, and there has been much recent debate regarding the relationship between arthropod and vertebrate segmentation mechanisms. Due to its phylogenetic position, body form, and possession of many (11) Wnt genes, the centipede Strigamia maritima is a useful system with which to examine these issues. This study takes a functional approach based on treatment with lithium chloride, which causes ubiquitous activation of canonical Wnt signalling. This is the first functional developmental study performed in any of the 15,000 species of the arthropod subphylum Myriapoda. The expression of all 11 Wnt genes in Strigamia was analyzed in relation to posterior development. Three of these genes, Wnt11, Wnt5, and WntA, were strongly expressed in the posterior region and, thus, may play important roles in posterior developmental processes. In support of this hypothesis, LiCl treatment of S. maritima embryos was observed to produce posterior developmental defects and perturbations in AbdB and Delta expression. The effects of LiCl differ depending on the developmental stage treated, with more severe effects elicited by treatment during germband formation than by treatment at later stages. These results support a role for Wnt signalling in conferring posterior identity in Strigamia. In addition, data from this study are consistent with the hypothesis of segmentation based on a "clock and wavefront" mechanism operating in this species.

  7. Analysis of the Command and Control Segment (CCS) attitude estimation algorithm

    NASA Technical Reports Server (NTRS)

    Stockwell, Catherine

    1993-01-01

    This paper categorizes the qualitative behavior of the Command and Control Segment (CCS) differential correction algorithm as applied to attitude estimation using simultaneous spin axis sun angle and Earth chord length measurements. The categories of interest are the domains of convergence, divergence, and their boundaries. Three series of plots are discussed that show the dependence of the estimation algorithm on the vehicle radius, the sun/Earth angle, and the spacecraft attitude. Qualitative dynamics common to all three series are tabulated and discussed. Out-of-limits conditions for the estimation algorithm are identified and discussed.

  8. A computer program for comprehensive ST-segment depression/heart rate analysis of the exercise ECG test.

    PubMed

    Lehtinen, R; Vänttinen, H; Sievänen, H; Malmivuo, J

    1996-06-01

    The ST-segment depression/heart rate (ST/HR) analysis has been found to improve the diagnostic accuracy of the exercise ECG test in detecting myocardial ischemia. Recently, three different continuous diagnostic variables based on the ST/HR analysis have been introduced: the ST/HR slope, the ST/HR index, and the ST/HR hysteresis. The last utilises both the exercise and recovery phases of the exercise ECG test, whereas the two former are based on the exercise phase only. The present article presents a computer program which not only calculates the above three diagnostic variables but also plots the full diagrams of ST-segment depression against heart rate during both exercise and recovery phases for each ECG lead from given ST/HR data. The program can be used in the exercise ECG diagnosis of daily clinical practice provided that the ST/HR data from the ECG measurement system can be linked to the program. At present, the main purpose of the program is to provide clinical and medical researchers with a practical tool for comprehensive clinical evaluation and development of the ST/HR analysis. PMID:8835841
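The two exercise-phase variables can be sketched as simple computations on paired HR/ST-depression samples. These are simplified forms, not the program's exact clinical definitions (for example, the clinical ST/HR slope uses the maximal statistically significant regression slope, not a single overall fit):

```python
import numpy as np

def st_hr_index(hr, st_dep):
    """ST/HR index (simplified): overall change in ST depression divided by
    overall change in heart rate during the exercise phase (µV per beat/min)."""
    hr, st = np.asarray(hr, float), np.asarray(st_dep, float)
    return (st[-1] - st[0]) / (hr[-1] - hr[0])

def st_hr_slope(hr, st_dep):
    """ST/HR slope (simplified): linear-regression slope of ST depression
    on heart rate over the exercise phase."""
    hr, st = np.asarray(hr, float), np.asarray(st_dep, float)
    return np.polyfit(hr, st, 1)[0]
```

The ST/HR hysteresis additionally integrates the difference between the recovery- and exercise-phase curves over a HR range, which requires both phases of the diagram the program plots.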

  10. The effects of different syringe volume, needle size and sample volume on blood gas analysis in syringes washed with heparin

    PubMed Central

    Küme, Tuncay; Şişman, Ali Rıza; Solak, Ahmet; Tuğlu, Birsen; Çinkooğlu, Burcu; Çoker, Canan

    2012-01-01

    Introduction: We evaluated the effect of different syringe volume, needle size and sample volume on blood gas analysis in syringes washed with heparin. Materials and methods: In this multi-step experimental study, percent dilution ratios (PDRs) and final heparin concentrations (FHCs) were calculated by gravimetric method for determining the effect of syringe volume (1, 2, 5 and 10 mL), needle size (20, 21, 22, 25 and 26 G) and sample volume (0.5, 1, 2, 5 and 10 mL). The effect of different PDRs and FHCs on blood gas and electrolyte parameters were determined. The erroneous results from nonstandardized sampling were evaluated according to RiliBAK’s TEa. Results: The increase of PDRs and FHCs was associated with the decrease of syringe volume, the increase of needle size and the decrease of sample volume: from 2.0% and 100 IU/mL in 10 mL-syringe to 7.0% and 351 IU/mL in 1 mL-syringe; from 4.9% and 245 IU/mL in 26G to 7.6% and 380 IU/mL in 20 G with combined 1 mL syringe; from 2.0% and 100 IU/mL in full-filled sample to 34% and 1675 IU/mL in 0.5 mL suctioned sample into 10 mL-syringe. There was no statistically significant difference in pH, but the percent decreases in pCO2, K+, iCa2+, and iMg2+ and the percent increases in pO2 and Na+ were statistically significant compared with full-filled syringes. All changes in pH and pO2 were acceptable, but the changes in pCO2, Na+, K+ and iCa2+ exceeded the TEa limits except in full-filled syringes. Conclusions: The changes in PDRs and FHCs due to nonstandardized sampling in syringes washed with liquid heparin give rise to erroneous test results for pCO2 and electrolytes. PMID:22838185
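The dilution arithmetic behind the PDR and FHC is straightforward; a sketch assuming standard 5000 IU/mL liquid heparin and a gravimetrically determined residual (dead) volume, both of which are inputs here rather than values from the study:

```python
def dilution_metrics(dead_volume_ml, sample_volume_ml, heparin_iu_per_ml=5000.0):
    """Percent dilution ratio (PDR) and final heparin concentration (FHC)
    for a blood sample drawn into a heparin-washed syringe.

    dead_volume_ml: residual liquid heparin left in the syringe and needle
    (determined gravimetrically in the study; a free parameter here).
    """
    pdr = 100.0 * dead_volume_ml / sample_volume_ml          # percent
    fhc = heparin_iu_per_ml * dead_volume_ml / sample_volume_ml  # IU/mL
    return pdr, fhc
```

With a 0.2 mL dead volume and a full 10 mL sample, this reproduces the abstract's 2.0% PDR and 100 IU/mL FHC; shrinking the sample to 0.5 mL with the same dead volume reproduces the sharp rise reported.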

  11. Risk factors for neovascular glaucoma after carbon ion radiotherapy of choroidal melanoma using dose-volume histogram analysis

    SciTech Connect

    Hirasawa, Naoki . E-mail: naoki_h@nirs.go.jp; Tsuji, Hiroshi; Ishikawa, Hitoshi; Koyama-Ito, Hiroko; Kamada, Tadashi; Mizoe, Jun-Etsu; Ito, Yoshiyuki; Naganawa, Shinji; Ohnishi, Yoshitaka; Tsujii, Hirohiko

    2007-02-01

    Purpose: To determine the risk factors for neovascular glaucoma (NVG) after carbon ion radiotherapy (C-ion RT) of choroidal melanoma. Methods and Materials: A total of 55 patients with choroidal melanoma were treated between 2001 and 2005 with C-ion RT based on computed tomography treatment planning. All patients had a tumor of large size or one located close to the optic disk. Univariate and multivariate analyses were performed to identify the risk factors of NVG for the following parameters: gender, age, dose-volumes of the iris-ciliary body and the wall of the eyeball, and irradiation of the optic disk (ODI). Results: Neovascular glaucoma occurred in 23 patients and the 3-year cumulative NVG rate was 42.6 ± 6.8% (standard error), but enucleation from NVG was performed in only three eyes. Multivariate analysis revealed that the significant risk factors for NVG were V50(IC) (volume of the iris-ciliary body irradiated to ≥50 GyE) (p = 0.002) and ODI (p = 0.036). The 3-year NVG rates for patients with V50(IC) ≥0.127 mL and those with V50(IC) <0.127 mL were 71.4 ± 8.5% and 11.5 ± 6.3%, respectively. The corresponding rates for the patients with and without ODI were 62.9 ± 10.4% and 28.4 ± 8.0%, respectively. Conclusion: Dose-volume histogram analysis with computed tomography indicated that V50(IC) and ODI were independent risk factors for NVG. An irradiation system that can reduce the dose to both the anterior segment and the optic disk might be worth adopting to investigate whether the incidence of NVG can be decreased.
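A dose-volume metric such as V50(IC) is read off a dose-volume histogram; a minimal sketch using a differential DVH (the bin structure and values are hypothetical):

```python
import numpy as np

def v_dose(dose_bins, bin_volumes_ml, threshold_gye=50.0):
    """Absolute volume (mL) of a structure receiving at least `threshold_gye`,
    from a differential DVH given as (dose per bin, volume per bin)."""
    dose_bins = np.asarray(dose_bins, float)
    bin_volumes_ml = np.asarray(bin_volumes_ml, float)
    return bin_volumes_ml[dose_bins >= threshold_gye].sum()
```

The resulting value is then compared against a cut point such as the 0.127 mL threshold identified for the iris-ciliary body in this study.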

  12. Synfuel program analysis. Volume 2: VENVAL users manual

    NASA Astrophysics Data System (ADS)

    Muddiman, J. B.; Whelan, J. W.

    1980-07-01

    This volume is intended for program analysts and is a users manual for the VENVAL model. It contains specific explanations of input data requirements and programming procedures for the use of this model. VENVAL is a generalized computer program to aid in evaluation of prospective private sector production ventures. The program can project interrelated values of installed capacity, production, sales revenue, operating costs, depreciation, investment, debt, earnings, taxes, return on investment, depletion, and cash flow measures. It can also compute related public sector and other external costs and revenues if unit costs are furnished.

  13. Corpus Callosum Area and Brain Volume in Autism Spectrum Disorder: Quantitative Analysis of Structural MRI from the ABIDE Database

    ERIC Educational Resources Information Center

    Kucharsky Hiess, R.; Alter, R.; Sojoudi, S.; Ardekani, B. A.; Kuzniecky, R.; Pardoe, H. R.

    2015-01-01

    Reduced corpus callosum area and increased brain volume are two commonly reported findings in autism spectrum disorder (ASD). We investigated these two correlates in ASD and healthy controls using T1-weighted MRI scans from the Autism Brain Imaging Data Exchange (ABIDE). Automated methods were used to segment the corpus callosum and intracranial…

  14. Air segmented amplitude modulated multiplexed flow analysis with software-based phase recognition: determination of phosphate ion.

    PubMed

    Ogusu, Takeshi; Uchimoto, Katsuya; Takeuchi, Masaki; Tanaka, Hideji

    2014-01-01

    Amplitude modulated multiplexed flow analysis (AMMFA) has been improved by introducing air segmentation and software-based phase recognition. Sample solutions, whose flow rates are varied at different frequencies, are merged. Air is introduced into the merged liquid stream to limit the dispersion of analytes within each liquid segment separated by air bubbles. The stream is led to a detector with no physical deaeration. Air signals are distinguished from liquid signals through analysis of the detector output, and are suppressed down to the level of the liquid signals. The resulting signals are smoothed by moving-average computation and then analyzed by fast Fourier transform. The analytes in the samples are respectively determined from the amplitudes of the corresponding wave components. The developed system has been applied to the simultaneous determination of phosphate ion in water samples by a Malachite Green method. The linearity of the analytical curve (0.0-31.0 μmol dm⁻³) is good (r² > 0.999) and the detection limit (3.3σ) at a modulation period of 30 s is 0.52 μmol dm⁻³. Good recoveries around 100% have been obtained for phosphate ion spiked into real water samples.
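The demultiplexing step, recovering each sample's contribution from the amplitude of its modulation frequency in the FFT of the detector signal, can be sketched as below; the sampling rate and modulation frequencies are hypothetical:

```python
import numpy as np

def component_amplitudes(signal, sample_rate_hz, mod_freqs_hz):
    """Recover the amplitude of each flow-modulation frequency from a
    detector trace by FFT (sketch of the AMMFA demultiplexing step)."""
    signal = np.asarray(signal, float)
    n = len(signal)
    # one-sided amplitude spectrum, scaled so a pure sinusoid of amplitude A
    # at an exact bin reads back as A
    spectrum = np.abs(np.fft.rfft(signal)) * 2.0 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate_hz)
    return [spectrum[np.argmin(np.abs(freqs - f))] for f in mod_freqs_hz]
```

Each recovered amplitude is proportional to the corresponding sample's analyte concentration, which is how a single detector serves several multiplexed streams.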

  15. Quantitative trait locus analysis of leaf dissection in tomato using Lycopersicon pennellii segmental introgression lines.

    PubMed Central

    Holtan, Hans E E; Hake, Sarah

    2003-01-01

    Leaves are one of the most conspicuous and important organs of all seed plants. A fundamental source of morphological diversity in leaves is the degree to which the leaf is dissected by lobes and leaflets. We used publicly available segmental introgression lines to describe the quantitative trait loci (QTL) controlling the difference in leaf dissection seen between two tomato species, Lycopersicon esculentum and L. pennellii. We define eight morphological characteristics that comprise the mature tomato leaf and describe loci that affect each of these characters. We found 30 QTL that contribute to one or more of these characters. Of these 30 QTL, 22 primarily affect leaf dissection and 8 primarily affect leaf size. On the basis of which characters are affected, four classes of loci emerge that affect leaf dissection. The majority of the QTL produce phenotypes intermediate to the two parent lines, while 5 QTL result in transgression with drastically increased dissection relative to both parent lines. PMID:14668401

  17. Fractal Analysis of Laplacian Pyramidal Filters Applied to Segmentation of Soil Images

    PubMed Central

    de Castro, J.; Ballesteros, F.; Méndez, A.; Tarquis, A. M.

    2014-01-01

    The Laplacian pyramid is a well-known technique for image processing in which local operators of many scales, but identical shape, serve as the basis functions. The properties required of the pyramidal filter produce a family of filters that is uniparametric in the classical case, when the filter length is 5. We examine the Gaussian and fractal behaviour of these basis functions (or filters), and determine the Gaussian and fractal ranges of the single parameter a. The fractal filters lose less energy at every step of the Laplacian pyramid, and we apply this property to obtain threshold values for segmenting soil images and then evaluate their porosity. We also evaluate our results by comparing them with the Otsu algorithm threshold values, and conclude that our algorithm produces reliable results. PMID:25114957
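The uniparametric five-tap filter family and the pyramid construction can be sketched as follows (one-dimensional for brevity; the classical Gaussian-like choice is a ≈ 0.4):

```python
import numpy as np

def generating_kernel(a):
    """Five-tap pyramidal filter family w(a) = [1/4 - a/2, 1/4, a, 1/4, 1/4 - a/2].
    The taps sum to 1 for any a; a ≈ 0.4 gives the classic Burt-Adelson kernel."""
    return np.array([0.25 - a / 2, 0.25, a, 0.25, 0.25 - a / 2])

def laplacian_pyramid_1d(signal, a=0.4, levels=3):
    """1D Laplacian pyramid sketch: each level stores the detail lost by
    smoothing with w(a) and downsampling by 2; the last entry is the residual."""
    w = generating_kernel(a)
    pyramid, current = [], np.asarray(signal, float)
    for _ in range(levels):
        smoothed = np.convolve(current, w, mode="same")
        down = smoothed[::2]
        # upsample by zero insertion, then smooth with 2*w to interpolate
        up = np.zeros_like(current)
        up[::2] = down
        up = np.convolve(up, 2 * w, mode="same")
        pyramid.append(current - up)   # detail (Laplacian) level
        current = down
    pyramid.append(current)            # final low-pass residual
    return pyramid
```

Thresholds for soil-image segmentation would be derived from statistics of these detail levels; this sketch only shows the decomposition itself.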

  19. A link-segment model of upright human posture for analysis of head-trunk coordination

    NASA Technical Reports Server (NTRS)

    Nicholas, S. C.; Doxey-Gasway, D. D.; Paloski, W. H.

    1998-01-01

    Sensory-motor control of upright human posture may be organized in a top-down fashion such that certain head-trunk coordination strategies are employed to optimize visual and/or vestibular sensory inputs. Previous quantitative models of the biomechanics of human posture control have examined the simple case of ankle sway strategy, in which an inverted pendulum model is used, and the somewhat more complicated case of hip sway strategy, in which multisegment, articulated models are used. While these models can be used to quantify the gross dynamics of posture control, they are not sufficiently detailed to analyze head-trunk coordination strategies that may be crucial to understanding its underlying mechanisms. In this paper, we present a biomechanical model of upright human posture that extends an existing four mass, sagittal plane, link-segment model to a five mass model including an independent head link. The new model was developed to analyze segmental body movements during dynamic posturography experiments in order to study head-trunk coordination strategies and their influence on sensory inputs to balance control. It was designed specifically to analyze data collected on the EquiTest (NeuroCom International, Clackamas, OR) computerized dynamic posturography system, where the task of maintaining postural equilibrium may be challenged under conditions in which the visual surround, support surface, or both are in motion. The performance of the model was tested by comparing its estimated ground reaction forces to those measured directly by support surface force transducers. We conclude that this model will be a valuable analytical tool in the search for mechanisms of balance control.
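The model-validation step, comparing estimated ground reaction forces to force-plate measurements, reduces in the vertical direction to summing each link's inertial and gravitational contributions; a minimal sketch with segment masses and centre-of-mass accelerations assumed given:

```python
import numpy as np

def vertical_ground_reaction_force(segment_masses_kg, com_accels_mps2, g=9.81):
    """Vertical ground reaction force of a link-segment model:
    F = sum_i m_i * (a_i + g), where a_i is the vertical acceleration of
    segment i's centre of mass (upward positive). Sketch only; the full
    model also resolves horizontal forces and joint moments."""
    m = np.asarray(segment_masses_kg, float)
    a = np.asarray(com_accels_mps2, float)
    return float(np.sum(m * (a + g)))
```

In quiet standing (all accelerations near zero) the estimate reduces to body weight, which is the baseline against which the force-transducer comparison is made.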

  20. STS-1 operational flight profile. Volume 6: Abort analysis

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The abort analysis for the cycle 3 Operational Flight Profile (OFP) for the Space Transportation System 1 Flight (STS-1) is defined, superseding the abort analysis previously presented. Included are the flight description, abort analysis summary, flight design ground rules and constraints, initialization information, a general abort description and results, abort solid rocket booster and external tank separation and disposal results, abort monitoring displays with discussion of both ground and onboard trajectory monitoring, an abort initialization load summary for the onboard computer, and a list of the key abort powered-flight dispersion analyses.

  1. New segmental long bone defect model in sheep: quantitative analysis of healing with dual energy x-ray absorptiometry.

    PubMed

    den Boer, F C; Patka, P; Bakker, F C; Wippermann, B W; van Lingen, A; Vink, G Q; Boshuizen, K; Haarman, H J

    1999-09-01

    An appropriate animal model is required for the study of treatments that enhance bone healing. A new segmental long bone defect model was developed for this purpose, and dual energy x-ray absorptiometry was used to quantify healing of this bone defect. In 15 sheep, a 3-cm segmental defect was created in the left tibia and fixed with an interlocking intramedullary nail. In seven animals, the defect was left empty for the assessment of the spontaneous healing response. In eight animals serving as a positive control, autologous bone grafting was performed. After 12 weeks, healing was evaluated with radiographs, a torsional test to failure, and dual energy x-ray absorptiometry. The mechanical test results were used for the assessment of unions and nonunions. Radiographic determination of nonunion was not reliably accomplished in this model. By means of dual energy x-ray absorptiometry, bone mineral density and content were measured in the middle of the defect. Bone mineral density was 91 ± 7% (mean ± SEM) and 72 ± 6% that of the contralateral intact tibia in, respectively, the autologous bone-grafting and empty defect groups (p = 0.04). For bone mineral content, the values were, respectively, 117 ± 18 and 82 ± 9% (p = 0.07). Torsional strength and stiffness were also higher, although not significantly, in the group with autologous bone grafting than in that with the empty defect. Bone mineral density and content were closely related to the torsional properties (r² ranged from 0.76 to 0.85, p ≤ 0.0001). Because interlocking intramedullary nailing is a very common fixation method in patients, the newly developed segmental defect model has clinical relevance. The interlocking intramedullary nail provided adequate stability without implant failure. This model may be useful for the study of treatments that affect bone healing, and dual energy x-ray absorptiometry may be somewhat helpful in the analysis of healing of this bone defect.

  2. Particle filtration: An analysis using the method of volume averaging

    SciTech Connect

    Quintard, M.; Whitaker, S.

    1994-12-01

    The process of filtration of non-charged, submicron particles is analyzed using the method of volume averaging. The particle continuity equation is represented in terms of the first correction to the Smoluchowski equation that takes into account particle inertia effects for small Stokes numbers. This leads to a cellular efficiency that contains a minimum in the efficiency as a function of the particle size, and this allows us to identify the most penetrating particle size. Comparison of the theory with results from Brownian dynamics indicates that the first correction to the Smoluchowski equation gives reasonable results in terms of both the cellular efficiency and the most penetrating particle size. However, the results for larger particles clearly indicate the need to extend the Smoluchowski equation to include higher order corrections. Comparison of the theory with laboratory experiments, in the absence of adjustable parameters, provides interesting agreement for particle diameters that are equal to or less than the diameter of the most penetrating particle.

  3. Measurement and analysis of grain boundary grooving by volume diffusion

    NASA Technical Reports Server (NTRS)

    Hardy, S. C.; Mcfadden, G. B.; Coriell, S. R.; Voorhees, P. W.; Sekerka, R. F.

    1991-01-01

    Experimental measurements of isothermal grain boundary grooving by volume diffusion are carried out for Sn bicrystals in the Sn-Pb system near the eutectic temperature. The dimensions of the groove increase with a temporal exponent of 1/3, and measurement of the associated rate constant allows the determination of the product of the liquid diffusion coefficient D and the capillarity length Gamma associated with the interfacial free energy of the crystal-melt interface. The small-slope theory of Mullins is generalized to the entire range of dihedral angles by using a boundary integral formulation of the associated free boundary problem, and excellent agreement with experimental groove shapes is obtained. By using the diffusivity measured by Jordon and Hunt, the present measured values of Gamma are found to agree to within 5 percent with the values obtained from experiments by Gunduz and Hunt on grain boundary grooving in a temperature gradient.
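The rate constant of the t^(1/3) growth law can be recovered by regressing the measured groove dimension on t^(1/3); a sketch with synthetic data (the measurements themselves are not reproduced here):

```python
import numpy as np

def grooving_rate_constant(times, widths):
    """Fit w(t) = k * t**(1/3), the temporal law for grain boundary grooving
    by volume diffusion, via a least-squares slope through the origin."""
    t13 = np.asarray(times, float) ** (1.0 / 3.0)
    w = np.asarray(widths, float)
    return float(np.dot(t13, w) / np.dot(t13, t13))
```

The fitted k is the measured rate constant from which the product of the liquid diffusion coefficient D and the capillarity length Γ is extracted.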

  4. Verification of fault tree analysis. Volume 2. Technical description

    SciTech Connect

    Rothbart, G.; Fullwood, R.; Basin, S.; Newt, J.; Escalera, J.

    1981-05-01

    An electronic instrument has been developed to simulate the reliability of complex safety systems. Using digital integrated circuits on modular printed circuit boards, together with a monitoring microcomputer system and other support hardware, it is possible to simulate systems composed of up to twenty independent components ten billion times faster than real-time. Arbitrary time-dependent hazard functions, complex repair mechanisms and procedures, and common mode interactions are incorporated into the system hardware. This instrument, termed ERMA (EPRI Reliability and Maintainability Analyzer), is described in detail in this report which contains the details of the electronic circuitry and supporting software. A companion, Volume 1, describes the theory and the results of experiments performed with ERMA.

  5. Glacier volume estimation of Cascade Volcanoes—an analysis and comparison with other methods

    USGS Publications Warehouse

    Driedger, Carolyn L.; Kennard, P.M.

    1986-01-01

    During the 1980 eruption of Mount St. Helens, the occurrence of floods and mudflows made apparent a need to assess mudflow hazards on other Cascade volcanoes. A basic requirement for such analysis is information about the volume and distribution of snow and ice on these volcanoes. An analysis was made of the volume-estimation methods developed by previous authors and a volume estimation method was developed for use in the Cascade Range. A radio echo-sounder, carried in a backpack, was used to make point measurements of ice thickness on major glaciers of four Cascade volcanoes (Mount Rainier, Washington; Mount Hood and the Three Sisters, Oregon; and Mount Shasta, California). These data were used to generate ice-thickness maps and bedrock topographic maps for developing and testing volume-estimation methods. Subsequently, the methods were applied to the unmeasured glaciers on those mountains and, as a test of the geographical extent of applicability, to glaciers beyond the Cascades having measured volumes. Two empirical relationships were required in order to predict volumes for all the glaciers. Generally, for glaciers less than 2.6 km in length, volume was found to be estimated best by using glacier area, raised to a power. For longer glaciers, volume was found to be estimated best by using a power law relationship, including slope and shear stress. The necessary variables can be estimated from topographic maps and aerial photographs.
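
    The two-regime estimation scheme described above can be sketched as follows. The coefficients and the long-glacier thickness relation in this sketch are illustrative placeholders, not the values fitted by Driedger and Kennard; only the overall structure (area power law below 2.6 km, slope/shear-stress power law above) follows the abstract:

```python
import math

def glacier_volume_km3(area_km2, length_km, slope_rad=None,
                       tau_kpa=100.0, c=0.03, b=1.36):
    """Two-regime glacier volume estimate (illustrative coefficients only)."""
    if length_km < 2.6:
        # Short glaciers: volume estimated best from area raised to a power.
        return c * area_km2 ** b
    # Longer glaciers: relationship involving slope and basal shear stress;
    # here mean thickness h = tau / (rho * g * sin(slope)), an assumed form.
    rho_g_kpa_per_m = 9.0  # ice: rho * g is roughly 9 kPa per metre of depth
    h_m = tau_kpa / (rho_g_kpa_per_m * math.sin(slope_rad))
    return area_km2 * (h_m / 1000.0)  # area (km^2) times mean thickness (km)

print(glacier_volume_km3(1.2, 1.8))                               # short branch
print(glacier_volume_km3(10.0, 5.0, slope_rad=math.radians(10)))  # long branch
```

The required inputs (area, length, slope) are exactly the quantities the authors note can be read from topographic maps and aerial photographs.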

  6. Automatic segmentation and identification of solitary pulmonary nodules on follow-up CT scans based on local intensity structure analysis and non-rigid image registration

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Naito, Hideto; Nakamura, Yoshihiko; Kitasaka, Takayuki; Rueckert, Daniel; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

    2011-03-01

    This paper presents a novel method that automatically segments solitary pulmonary nodules (SPNs) and matches the segmented SPNs across follow-up thoracic CT scans. Because of their clinical importance, a physician needs to find SPNs on chest CT and observe their progression over time in order to diagnose whether a nodule is benign or malignant, or to follow the effect of chemotherapy on malignant nodules using follow-up data. However, the enormous number of CT images places a large burden on the physician. To lighten this burden, we developed a method for automatically segmenting SPNs and assisting their observation in follow-up CT scans. The SPNs in an input 3D thoracic CT scan are segmented based on local intensity structure analysis and information about the pulmonary blood vessels. To compensate for lung deformation, we co-register follow-up CT scans using an affine followed by a non-rigid registration. Finally, matches between detected nodules are found in the registered CT scans based on a similarity measure. We applied the method to three patients comprising 14 thoracic CT scans. Our segmentation method detected 96.7% of the SPNs in the images, and the nodule-matching method found 83.3% of the correspondences among segmented SPNs. The results also show that our matching method is robust to the growth of SPNs, including integration/separation and appearance/disappearance. These results confirm that our method is feasible for segmenting and identifying SPNs on follow-up CT scans.

  7. 3D multiscale segmentation and morphological analysis of X-ray microtomography from cold-sprayed coatings.

    PubMed

    Gillibert, L; Peyrega, C; Jeulin, D; Guipont, V; Jeandin, M

    2012-11-01

    X-ray microtomography of cold-sprayed coatings brings new insight into this deposition process. A noise-tolerant segmentation algorithm is introduced, based on the combination of two segmentations: a deterministic multiscale segmentation and a stochastic segmentation. The stochastic approach uses random Poisson lines as markers. Results on an X-ray microtomographic image of aluminium particles are presented and validated. PMID:22946787

  8. Economic analysis of the space shuttle system, volume 1

    NASA Technical Reports Server (NTRS)

    1972-01-01

    An economic analysis of the space shuttle system is presented. The analysis is based on economic benefits, recurring costs, non-recurring costs, and economic tradeoff functions. The most economic space shuttle configuration is determined on the basis of: (1) the objectives of a reusable space transportation system, (2) the various space transportation systems considered, and (3) alternative space shuttle systems.

  9. Pressure vessels and piping design, analysis, and severe accidents. PVP-Volume 331

    SciTech Connect

    Dermenjian, A.A.

    1996-12-31

    The primary objective of the Design and Analysis Committee of the ASME Pressure Vessels and Piping Division is to provide a forum for the dissemination of information and the advancement of current theories and practices in the design and analysis of pressure vessels, piping systems, and components. This volume is divided into six sections: power plant piping and supports (parts 1-3), applied dynamic response analysis, severe accident analysis, and student papers. Separate abstracts were prepared for 22 papers in this volume.

  10. Price-volume multifractal analysis and its application in Chinese stock markets

    NASA Astrophysics Data System (ADS)

    Yuan, Ying; Zhuang, Xin-tian; Liu, Zhi-ying

    2012-06-01

    An empirical study of Chinese stock markets is conducted using statistical tools. First, the multifractality of the stock price return series, r(t) = ln P(t+1) - ln P(t), and of the trading volume variation series, v(t) = ln V(t+1) - ln V(t), is confirmed using multifractal detrended fluctuation analysis. Furthermore, a multifractal detrended cross-correlation analysis between stock price returns and trading volume variations in Chinese stock markets shows that the cross-correlation between them is also multifractal. Second, the cross-correlation between stock price P(t) and trading volume V(t) is studied empirically using the cross-correlation function and detrended cross-correlation analysis. Both the Shanghai and Shenzhen stock markets show pronounced long-range cross-correlations between stock price and trading volume. Third, a composite index R based on price and trading volume is introduced. Compared with the stock price return series r(t) and the trading volume variation series v(t), the R variation series not only retains the characteristics of the original series but also captures the relative correlation between stock price and trading volume. Finally, we analyze the multifractal characteristics of the R variation series before and after three financial events in China (the introduction of price limits, the reform of non-tradable shares, and the 2008 financial crisis) over the whole sample period to study changes in stock market fluctuation and financial risk. The empirical results verify the validity of R.
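
    The core of (multifractal) detrended fluctuation analysis is segment-wise detrending of the cumulative profile followed by a scaling fit. A minimal sketch of the q = 2 case on synthetic white noise (full MF-DFA replaces the mean squared fluctuation with q-th order moments over a range of q); the series, scales, and seed below are illustrative choices, not the paper's data:

```python
import numpy as np

def dfa_hurst(x, scales):
    """Detrended fluctuation analysis: return the q = 2 scaling exponent h(2)."""
    y = np.cumsum(x - np.mean(x))               # cumulative profile
    f = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)  # non-overlapping windows
        t = np.arange(s)
        msq = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)        # local linear detrending
            msq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        f.append(np.sqrt(np.mean(msq)))         # fluctuation function F(s)
    # Slope of log F(s) versus log s estimates the Hurst exponent.
    h, _ = np.polyfit(np.log(scales), np.log(f), 1)
    return h

rng = np.random.default_rng(0)
returns = rng.standard_normal(4096)             # stand-in for log-returns r(t)
h = dfa_hurst(returns, [16, 32, 64, 128, 256])
print(h)  # ~0.5 for uncorrelated noise
```

A dependence of the fitted exponent on q is precisely the multifractality that the study reports for both return and volume-variation series.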

  11. Investigating the Creeping Segment of the San Andreas Fault using InSAR time series analysis

    NASA Astrophysics Data System (ADS)

    Rolandone, Frederique; Ryder, Isabelle; Agram, Piyush S.; Burgmann, Roland; Nadeau, Robert M.

    2010-05-01

    We exploit the advanced Interferometric Synthetic Aperture Radar (InSAR) technique referred to as the Small BAseline Subset (SBAS) algorithm to analyze the creeping section of the San Andreas Fault in Central California. Various geodetic creep-rate measurements along the Central San Andreas Fault (CSAF) have been made since 1969, including creepmeters, alignment arrays, geodolite, and GPS. They show that horizontal surface displacements increase from a few mm/yr at either end to a maximum of up to ~34 mm/yr in the central portion. They also indicate some discrepancies in rate estimates, with the range being as high as 10 mm/yr at some places along the fault. This variation is thought to be a result of the different geodetic techniques used and of measurements being made at variable distances from the fault. An interferometric stack of 12 interferograms for the period 1992-2001 shows the spatial variation of creep, which occurs within a narrow (<2 km) zone close to the fault trace. The creep rate varies spatially along the fault but also in time. Aseismic slip on the CSAF shows several kinds of time dependence. Shallow slip, as measured by surface measurements across the narrow creeping zone, occurs partly as ongoing steady creep, along with brief episodes of slip ranging from millimetres to centimetres. Creep rates along the San Juan Bautista segment increased after the 1989 Loma Prieta earthquake, and slow slip transients of varying duration and magnitude occurred in both transition segments. The main focus of this work is to use the SBAS technique to identify spatial and temporal variations of creep on the CSAF. We will present time series of line-of-sight (LOS) displacements derived from SAR data acquired by the ASAR instrument, on board the ENVISAT satellite, between 2003 and 2009. For each coherent pixel of the radar images we compute time-dependent surface displacements as well as the average LOS deformation rate.
We compare our results with characteristic repeating microearthquakes that

  12. Practical considerations for the segmented-flow analysis of nitrate and ammonium in seawater and the avoidance of matrix effects

    NASA Astrophysics Data System (ADS)

    Rho, Tae Keun; Coverly, Stephen; Kim, Eun-Soo; Kang, Dong-Jin; Kahng, Sung-Hyun; Na, Tae-Hee; Cho, Sung-Rok; Lee, Jung-Moo; Moon, Cho-Rong

    2015-12-01

    In this study we describe measures taken in our laboratory to improve the long-term precision of nitrate and ammonia analysis in seawater using a microflow segmented-flow analyzer. To improve the nitrate reduction efficiency using a flow-through open tube cadmium reactor (OTCR), we compared alternative buffer formulations and regeneration procedures for an OTCR. We improved long-term stability for nitrate with a modified flow scheme and color reagent formulation and for ammonia by isolating samples from the ambient air and purifying the air used for bubble segmentation. We demonstrate the importance of taking into consideration the residual nutrient content of the artificial seawater used for the preparation of calibration standards. We describe how an operating procedure to eliminate errors from that source as well as from the refractive index of the matrix itself can be modified to include the minimization of dynamic refractive index effects resulting from differences between the matrix of the samples, the calibrants, and the wash solution. We compare the data for long-term measurements of certified reference material under two different conditions, using ultrapure water (UPW) and artificial seawater (ASW) for the sampler wash.

  13. A spherical harmonics intensity model for 3D segmentation and 3D shape analysis of heterochromatin foci.

    PubMed

    Eck, Simon; Wörz, Stefan; Müller-Ott, Katharina; Hahn, Matthias; Biesdorf, Andreas; Schotta, Gunnar; Rippe, Karsten; Rohr, Karl

    2016-08-01

    The genome is partitioned into regions of euchromatin and heterochromatin. The organization of heterochromatin is important for the regulation of cellular processes such as chromosome segregation and gene silencing, and their misregulation is linked to cancer and other diseases. We present a model-based approach for automatic 3D segmentation and 3D shape analysis of heterochromatin foci from 3D confocal light microscopy images. Our approach employs a novel 3D intensity model based on spherical harmonics, which analytically describes the shape and intensities of the foci. The model parameters are determined by fitting the model to the image intensities using least-squares minimization. To characterize the 3D shape of the foci, we exploit the computed spherical harmonics coefficients and determine a shape descriptor. We applied our approach to 3D synthetic image data as well as real 3D static and real 3D time-lapse microscopy images, and compared the performance with that of previous approaches. It turned out that our approach yields accurate 3D segmentation results and performs better than previous approaches. We also show that our approach can be used for quantifying 3D shape differences of heterochromatin foci.
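
    The least-squares fitting of spherical harmonics coefficients to sampled intensities can be sketched as follows. This minimal example uses only the closed-form real harmonics up to degree 1 and synthetic data; the authors' model uses higher degrees and a full 3D intensity model, so everything below the basis definitions is an illustrative stand-in:

```python
import numpy as np

def sh_basis(xyz):
    """Real spherical harmonics up to degree 1, evaluated on unit vectors."""
    x, y, z = xyz.T
    return np.column_stack([
        0.5 * np.sqrt(1.0 / np.pi) * np.ones(len(xyz)),  # Y_0^0 (constant)
        np.sqrt(3.0 / (4.0 * np.pi)) * y,                # Y_1^-1
        np.sqrt(3.0 / (4.0 * np.pi)) * z,                # Y_1^0
        np.sqrt(3.0 / (4.0 * np.pi)) * x,                # Y_1^1
    ])

rng = np.random.default_rng(1)
pts = rng.standard_normal((500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # sample the unit sphere

B = sh_basis(pts)
true_coef = np.array([2.0, 0.0, -0.5, 1.0])
intensity = B @ true_coef                          # synthetic "focus" signal

# Least-squares minimization recovers the coefficients, which can then be
# summarized into a rotation-invariant shape descriptor as described above.
coef, *_ = np.linalg.lstsq(B, intensity, rcond=None)
print(np.round(coef, 3))
```

In the paper the analogous coefficients are obtained by fitting the analytic intensity model to the microscopy image intensities rather than to a clean synthetic signal.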

  14. Detection of microcalcification clusters using Hessian matrix and foveal segmentation method on multiscale analysis in digital mammograms.

    PubMed

    Thangaraju, Balakumaran; Vennila, Ila; Chinnasamy, Gowrishankar

    2012-10-01

    Mammography is the most efficient technique for detecting and diagnosing breast cancer. Clusters of microcalcifications are a reliable early sign of breast cancer, and their earliest possible detection is essential to reduce mortality. Because microcalcifications are tiny and may be overlooked by the observing radiologist, we have developed a computer-aided diagnosis system for automatic and accurate cluster detection. A novel three-phase approach is presented in this paper. First, regions of interest that correspond to microcalcifications are identified by analyzing the bandpass coefficients of the mammogram image. The suspicious regions are passed to the second phase, in which nodular-structured microcalcifications are detected based on the eigenvalues of the second-order partial derivatives of the image, and microcalcification pixels are segmented out by exploiting foveal segmentation in multiscale analysis. Finally, by combining the responses from the second-order partial derivatives and the foveal method, potential microcalcifications are detected. The detection performance of the proposed method has been evaluated using 370 mammograms. The method achieves a true-positive ratio of 97.76% with 0.68 false positives per image. We also examined the performance of our computerized scheme using a free-response operating characteristic curve.
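
    The second-phase cue (eigenvalues of the second-order partial derivatives, i.e. the Hessian) can be illustrated with a minimal, numpy-only blob detector: for a bright nodular structure both Hessian eigenvalues are strongly negative. The smoothing scale, test image, and response definition here are illustrative choices, not the authors' exact pipeline:

```python
import numpy as np

def gaussian_smooth(img, sigma):
    """Separable Gaussian smoothing with a truncated 1D kernel."""
    r = int(3 * sigma)
    t = np.arange(-r, r + 1)
    k = np.exp(-t**2 / (2.0 * sigma**2)); k /= k.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, out)

def hessian_blob_response(img, sigma=2.0):
    """Response is large where both Hessian eigenvalues are negative (bright blob)."""
    s = gaussian_smooth(img.astype(float), sigma)
    gy, gx = np.gradient(s)
    hyy, hyx = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    # Eigenvalues of the symmetric 2x2 Hessian, in closed form.
    tr = hxx + hyy
    det = hxx * hyy - hxy * hyx
    disc = np.sqrt(np.maximum(tr**2 / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc
    return np.where((l1 < 0) & (l2 < 0), l1 * l2, 0.0)

# Synthetic test image: one bright Gaussian spot on a dark background.
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2.0 * 3.0 ** 2))
resp = hessian_blob_response(img)
py, px = np.unravel_index(np.argmax(resp), resp.shape)
print(py, px)  # peak near (32, 32)
```

Repeating the computation over several sigmas gives the multiscale analysis that the foveal segmentation step then refines.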

  15. A registration-based segmentation method with application to adiposity analysis of mice microCT images

    NASA Astrophysics Data System (ADS)

    Bai, Bing; Joshi, Anand; Brandhorst, Sebastian; Longo, Valter D.; Conti, Peter S.; Leahy, Richard M.

    2014-04-01

    Obesity is a global health problem, particularly in the U.S., where one third of adults are obese. A reliable and accurate method of quantifying obesity is necessary. Visceral adipose tissue (VAT) and subcutaneous adipose tissue (SAT) are two measures of obesity that reflect different associated health risks, but accurate measurements in humans or rodent models are difficult. In this paper we present an automatic, registration-based segmentation method for mouse adiposity studies using microCT images. We co-register the subject CT image and a mouse CT atlas. Our method is based on surface matching of the microCT image and the atlas. Surface-based elastic volume warping is used to match the internal anatomy. We acquired a whole-body scan of a C57BL6/J mouse injected with contrast agent using microCT and created a whole-body mouse atlas by manually delineating the boundaries of the mouse and its major organs. For method verification we scanned a C57BL6/J mouse from the base of the skull to the distal tibia. We registered the obtained mouse CT image to our atlas. Preliminary results show that we can warp the atlas image to match the posture and shape of the subject CT image, which differs significantly from the atlas. We plan to use this software tool in longitudinal obesity studies using mouse models.

  16. Comparative analysis of the distribution of segmented filamentous bacteria in humans, mice and chickens.

    PubMed

    Yin, Yeshi; Wang, Yu; Zhu, Liying; Liu, Wei; Liao, Ningbo; Jiang, Mizu; Zhu, Baoli; Yu, Hongwei D; Xiang, Charlie; Wang, Xin

    2013-03-01

    Segmented filamentous bacteria (SFB) are indigenous gut commensal bacteria. They are commonly detected in the gastrointestinal tracts of both vertebrates and invertebrates. Despite the significant role they have in the modulation of the development of host immune systems, little information exists regarding the presence of SFB in humans. The aim of this study was to investigate the distribution and diversity of SFB in humans and to determine their phylogenetic relationships with their hosts. Gut contents from 251 humans, 92 mice and 72 chickens were collected for bacterial genomic DNA extraction and subjected to SFB 16S rRNA-specific PCR detection. The results showed SFB colonization to be age-dependent in humans, with the majority of individuals colonized within the first 2 years of life, but this colonization disappeared by the age of 3 years. Results of 16S rRNA sequencing showed that multiple operational taxonomic units of SFB could exist in the same individuals. Cross-species comparison among human, mouse and chicken samples demonstrated that each host possessed an exclusive predominant SFB sequence. In summary, our results showed that SFB display host specificity, and SFB colonization, which occurs early in human life, declines in an age-dependent manner. PMID:23151642

  17. Quantitative analysis of anatomical relationship between cavernous segment internal carotid artery and pituitary macroadenoma

    PubMed Central

    Lin, Bon-Jour; Chung, Tzu-Tsao; Lin, Meng-Chi; Lin, Chin; Hueng, Dueng-Yuan; Chen, Yuan-Hao; Hsia, Chung-Ching; Ju, Da-Tong; Ma, Hsin-I; Liu, Ming-Ying; Tang, Chi-Tun

    2016-01-01

    Abstract Cavernous segment internal carotid artery (CSICA) injury during endoscopic transsphenoidal surgery for pituitary tumors is rare but fatal. The aim of this study was to investigate the anatomical relationship between pituitary macroadenomas and the corresponding CSICA using quantitative means, with a view to improving the safety of surgery. In this retrospective study, a total of 98 patients with nonfunctioning pituitary macroadenomas undergoing endoscopic transsphenoidal surgery were enrolled from 2005 to 2014. Intercarotid distances between the bilateral CSICAs were measured at 4 coronal levels: the optic strut, the convexity of the carotid prominence, the median sella turcica, and the dorsum sellae. Parasellar extension was graded and recorded by the Knosp–Steiner classification. Our findings indicate a linear relationship between the size of the pituitary macroadenoma and the intercarotid distance over the CSICA. The correlation was absent in pituitary macroadenomas with Knosp–Steiner grade 4 parasellar extension. Larger pituitary macroadenomas produce greater lateral deviation of the CSICA. When facing a larger tumor, a sufficient bony graft is indicated to enlarge the surgical field and working area and to improve operative safety. PMID:27741111

  18. Molecular combing in the analysis of developmentally regulated amplified segments of Bradysia hygida.

    PubMed

    Passos, K J R; Togoro, S Y; Carignon, S; Koundrioukoff, S; Lachages, A-M; Debatisse, M; Fernandez, M A

    2012-08-06

    Molecular combing technology is an important new tool for the functional and physical mapping of genome segments. It is designed to identify amplifications, microdeletions, and rearrangements in a DNA sequence and to study the process of DNA replication. This technique has recently been used to identify and analyze the dynamics of replication in amplified domains. In Bradysia hygida, multiple amplification initiation sites are predicted to exist upstream of the BhC4-1 gene. However, it has been impossible to identify them using the available standard techniques. The aim of this study was to optimize molecular combing technology to obtain DNA fibers from the polytene nuclei of the salivary glands of B. hygida and to study the dynamics of DNA replication in this organism. Our results suggest that combing this DNA without prior purification of the polytene nuclei is possible. The density, integrity, and linearity of the DNA fibers were analyzed; fibers 50 to 300 kb in length were detected, and a 9-kb fragment within the amplified region was visualized using a biotin label detected with Alexa Fluor 488-conjugated streptavidin. The feasibility of physically mapping these fibers, demonstrated in this study, suggests that molecular combing may be used to identify the replication origin of the BhC4-1 amplicon.

  19. A review of heart chamber segmentation for structural and functional analysis using cardiac magnetic resonance imaging.

    PubMed

    Peng, Peng; Lekadir, Karim; Gooya, Ali; Shao, Ling; Petersen, Steffen E; Frangi, Alejandro F

    2016-04-01

    Cardiovascular magnetic resonance (CMR) has become a key imaging modality in clinical cardiology practice due to its unique capabilities for non-invasive imaging of the cardiac chambers and great vessels. A wide range of CMR sequences have been developed to assess various aspects of cardiac structure and function, and significant advances have also been made in terms of imaging quality and acquisition times. Much research has been dedicated to the development of global and regional quantitative CMR indices that help distinguish health from pathology. The goal of this review paper is to discuss the structural and functional CMR indices that have been proposed thus far for clinical assessment of the cardiac chambers. We include the definitions of the indices, the requirements for their calculation, exemplar applications in cardiovascular diseases, and the corresponding normal ranges. Furthermore, we review the most recent state-of-the-art techniques for the automatic segmentation of the cardiac boundaries, which is necessary for the calculation of the CMR indices. Finally, we provide a detailed discussion of the existing literature and of the future challenges that need to be addressed to enable a more robust and comprehensive assessment of the cardiac chambers in clinical practice.
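
    The most common global functional indices derived from segmented chamber volumes are the stroke volume (SV = EDV - ESV) and the ejection fraction (EF = SV / EDV); these formulas are standard, while the volume values below are hypothetical:

```python
def global_function_indices(edv_ml, esv_ml):
    """Stroke volume (mL) and ejection fraction (%) from end-diastolic
    and end-systolic volumes obtained by chamber segmentation."""
    sv = edv_ml - esv_ml
    ef = 100.0 * sv / edv_ml
    return sv, ef

sv, ef = global_function_indices(edv_ml=150.0, esv_ml=60.0)
print(sv, ef)  # 90.0 mL stroke volume, 60.0 % ejection fraction
```

Accurate segmentation of the endocardial boundary at end-diastole and end-systole is what makes these two numbers reliable, which is why the review devotes a section to automatic segmentation techniques.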

  20. Who Will More Likely Buy PHEV: A Detailed Market Segmentation Analysis

    SciTech Connect

    Lin, Zhenhong; Greene, David L

    2010-01-01

    Understanding the diverse PHEV purchase behaviors among prospective new car buyers is key to designing efficient and effective policies for promoting new energy vehicle technologies. The ORNL MA3T model developed for the U.S. Department of Energy is described and used to project PHEV purchase probabilities for different consumers. MA3T disaggregates the U.S. household vehicle market into 1458 consumer segments based on region, residential area, driver type, technology attitude, home charging availability, and work charging availability, and is calibrated to the EIA's Annual Energy Outlook. Simulation results from MA3T are used to identify the more likely PHEV buyers and provide explanations. It is observed that consumers who have home charging, drive more frequently, and live in urban areas are more likely to buy a PHEV. Early adopters are projected to be the more likely PHEV buyers in the early market, but the PHEV purchase probability of late-majority consumers can increase over time as the PHEV gradually becomes a familiar product.

  1. Photogrammetric Digital Outcrop Model analysis of a segment of the Centovalli Line (Trontano, Italy)

    NASA Astrophysics Data System (ADS)

    Consonni, Davide; Pontoglio, Emanuele; Bistacchi, Andrea; Tunesi, Annalisa

    2015-04-01

    The Centovalli Line is a complex network of brittle faults developing between Domodossola (west) and Locarno (east), where it merges with the Canavese Line (the western segment of the Periadriatic Lineament). The Centovalli Line roughly follows the Southern Steep Belt, which characterizes the inner or "root" zone of the Penninic and Austroalpine units; these units underwent several deformation phases under variable P-T conditions throughout the Alpine orogenic history. The last deformation phases in this area developed under brittle conditions, resulting in an array of dextral-reverse subvertical faults with a general E-W trend that partly reactivates and partly crosscuts the metamorphic foliations and lithological boundaries. Here we report on a quantitative digital outcrop model (DOM) study aimed at quantifying the fault zone architecture in a particularly well exposed outcrop near Trontano, at the western edge of the Centovalli Line. The DOM was reconstructed with photogrammetry and allowed us to perform a complete characterization of the damage zones and multiple fault cores on both point clouds and textured surface models. Fault cores have been characterized in terms of attitude, thickness, and internal distribution of fault rocks (gouge-bearing), including possibly seismogenic localized slip surfaces. In the damage zones, the fracture network has been characterized in terms of fracture intensity (both P10 and P21, on virtual scanlines and scan areas), fracture attitude, fracture connectivity, etc.
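
    The fracture-intensity measures named above have standard definitions: P10 is the fracture count per unit scanline length, and P21 is the total fracture trace length per unit area. A minimal sketch with hypothetical measurements (the counts and lengths below are illustrative, not outcrop data):

```python
def p10(n_fractures, scanline_length_m):
    """Linear fracture intensity: fractures per unit scanline length (1/m)."""
    return n_fractures / scanline_length_m

def p21(trace_lengths_m, area_m2):
    """Areal fracture intensity: total trace length per unit area (m/m^2)."""
    return sum(trace_lengths_m) / area_m2

print(p10(14, 7.0))               # 2.0 fractures per metre of scanline
print(p21([1.5, 2.5, 4.0], 4.0))  # 2.0 m of fracture trace per square metre
```

On a digital outcrop model the "scanlines" and "scan areas" are virtual, drawn on the point cloud or textured surface, but the intensity definitions are the same.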

  2. Asymmetry analysis of the arm segments during forward handspring on floor.

    PubMed

    Exell, Timothy A; Robinson, Gemma; Irwin, Gareth

    2016-08-01

    Asymmetry in gymnastics underpins successful performance and may also have implications as an injury mechanism; therefore, understanding this concept could be useful for coaches and clinicians. The aim of this study was to examine kinematic and external kinetic asymmetry of the arm segments during the contact phase of a fundamental skill, the forward handspring on floor. Using a repeated single-subject design, six female national elite gymnasts (age: 19 ± 1.5 years; mass: 58.64 ± 3.72 kg; height: 1.62 ± 0.41 m) each performed 15 forward handsprings, and synchronised 3D kinematic and kinetic data were collected. Asymmetry between the lead- and non-lead-side arms was quantified during each trial. Significant kinetic asymmetry was observed for all gymnasts (p < 0.005), with the direction of the asymmetry being related to the lead leg. All gymnasts displayed kinetic asymmetry for ground reaction force. Kinematic asymmetry was present for more gymnasts at the shoulder than at the distal joints. These findings provide useful information for coaching gymnastics skills that may subjectively appear to be symmetrical. The observed asymmetry has both performance and injury implications.
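
    A sketch of how inter-limb asymmetry is commonly quantified. The symmetry-index formula below is a widely used choice in biomechanics, not necessarily the statistic applied in this study, and the force values are hypothetical:

```python
def symmetry_index(lead, non_lead):
    """Percentage asymmetry between lead- and non-lead-side values,
    normalized by their mean; 0 indicates perfect symmetry."""
    return 100.0 * (lead - non_lead) / (0.5 * (lead + non_lead))

# Hypothetical peak vertical ground reaction forces (in body weights)
# for the lead and non-lead arms during the handspring contact phase.
si = symmetry_index(1.10, 0.90)
print(si)  # ≈ 20: the lead arm is loaded about 20% more than the non-lead arm
```

Computing the index per trial, as the authors do per variable, lets both the magnitude and the direction (sign) of the asymmetry be tracked across the 15 repetitions.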

  3. Extensive serum biomarker analysis in patients with ST segment elevation myocardial infarction (STEMI).

    PubMed

    Zhang, Yi; Lin, Peiyi; Jiang, Huilin; Xu, Jieling; Luo, Shuhong; Mo, Junrong; Li, Yunmei; Chen, Xiaohui

    2015-12-01

    ST segment elevation myocardial infarction (STEMI) is one of the leading causes of morbidity and mortality, and some characteristics of STEMI are poorly understood. The aim of the present study was to detect protein expression profiles in the serum of STEMI patients and to identify biomarkers for this disease. Cytokine profiles of serum from STEMI patients and healthy controls were analyzed with a semi-quantitative human antibody array for 174 proteins; the results showed that serum concentrations of 21 cytokines differed considerably between STEMI patients and healthy subjects. In the next phase, eight of the 21 biomarkers from the microarray experiments were individually validated with sandwich ELISA kits. Clinical validation demonstrated a significant increase of BDNF, PDGF-AA, and MMP-9 in patients with AMI. Moreover, BDNF, PDGF-AA, and MMP-9 distinguished AMI patients from healthy controls with mean areas under the receiver operating characteristic (ROC) curve of 0.870, 0.885, and 0.81, respectively, with diagnostic cut-off points of 0.688 ng/mL, 297.86 ng/mL, and 690.066 ng/mL. Our study indicates that these three cytokines were up-regulated in STEMI samples and may hold promise for the assessment of STEMI.
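
    The ROC areas quoted above can be computed with the rank-sum identity: the AUC equals the probability that a randomly chosen patient's biomarker level exceeds that of a randomly chosen control. A minimal sketch with hypothetical serum levels (not the study's data):

```python
def roc_auc(positives, negatives):
    """Area under the ROC curve via the Mann-Whitney identity:
    fraction of (patient, control) pairs where the patient scores higher,
    counting ties as half a win."""
    wins = 0.0
    for p in positives:
        for n in negatives:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(positives) * len(negatives))

# Hypothetical serum biomarker levels (ng/mL): patients vs healthy controls.
patients = [0.9, 1.4, 0.7, 1.1]
controls = [0.5, 0.8, 0.4, 0.6]
auc = roc_auc(patients, controls)
print(auc)  # 0.9375
```

An AUC near 0.87, as reported for the validated markers, means a patient sample outranks a control sample in roughly 87% of such pairwise comparisons.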

  4. Ceramic component development analysis -- Volume 1. Final report

    SciTech Connect

    Boss, D.E.

    1998-06-09

    The development of advanced filtration media for advanced fossil-fueled power generating systems is a critical step in meeting the performance and emissions requirements for these systems. While porous metal and ceramic candle filters have been available for some time, the next generation of filters will include ceramic-matrix composites (CMCs) (Techniweave/Westinghouse, Babcock and Wilcox (B and W), DuPont Lanxide Composites), intermetallic alloys (Pall Corporation), and alternate filter geometries (CeraMem Separations). The goal of this effort was to perform a cursory review of the manufacturing processes used by five companies developing advanced filters, from the perspective of process repeatability and the ability of their processes to be scaled up to production volumes. Given the brief nature of the on-site reviews, only an overview of the processes and systems could be obtained. Each of the five companies had developed some level of manufacturing and quality assurance documentation, with most of the companies leveraging the procedures from other products they manufacture. It was found that all of the filter manufacturers had a solid understanding of the product development path. Given that these filters are largely developmental, significant additional work is necessary to understand the process-performance relationships and to project manufacturing costs.

  5. Viscous wing theory development. Volume 1: Analysis, method and results

    NASA Technical Reports Server (NTRS)

    Chow, R. R.; Melnik, R. E.; Marconi, F.; Steinhoff, J.

    1986-01-01

    Viscous transonic flows at large Reynolds numbers over 3-D wings were analyzed using a zonal viscid-inviscid interaction approach. A new numerical AFZ scheme was developed in conjunction with the finite volume formulation for the solution of the inviscid full-potential equation. A special far-field asymptotic boundary condition was developed and a second-order artificial viscosity included for an improved inviscid solution methodology. The integral method was used for the laminar/turbulent boundary layer and 3-D viscous wake calculation. The interaction calculation included the coupling conditions of the source flux due to the wing surface boundary layer, the flux jump due to the viscous wake, and the wake curvature effect. A method was also devised incorporating the 2-D trailing edge strong interaction solution for the normal pressure correction near the trailing edge region. A fully automated computer program was developed to perform the proposed method with one scalar version to be used on an IBM-3081 and two vectorized versions on Cray-1 and Cyber-205 computers.

  6. SLUDGE TREATMENT PROJECT ALTERNATIVES ANALYSIS SUMMARY REPORT [VOLUME 1

    SciTech Connect

    FREDERICKSON JR; ROURK RJ; HONEYMAN JO; JOHNSON ME; RAYMOND RE

    2009-01-19

    Highly radioactive sludge (containing up to 300,000 curies of actinides and fission products) resulting from the storage of degraded spent nuclear fuel is currently stored in temporary containers located in the 105-K West storage basin near the Columbia River. The background, history, and known characteristics of this sludge are discussed in Section 2 of this report. There are many compelling reasons to remove this sludge from the K-Basin. These reasons are discussed in detail in Section 1, and they include the following: (1) reduce the risk to the public (from a potential release of highly radioactive material as fine respirable particles by airborne or waterborne pathways); (2) reduce the overall risk to the Hanford worker; and (3) reduce the risk to the environment (the K-Basin is situated above a hazardous chemical contaminant plume and hinders remediation of the plume until the sludge is removed). The DOE-RL has stated that a key DOE objective is to remove the sludge from the K-West Basin and River Corridor as soon as possible, which will reduce risks to the environment, allow for remediation of contaminated areas underlying the basins, and support closure of the 100-KR-4 operable unit. The environmental and nuclear safety risks associated with this sludge have resulted in multiple legal and regulatory remedial action decisions, plans, and commitments that are summarized in Table ES-1 and discussed in more detail in Volume 2, Section 9.

  7. Structural analysis of cylindrical thrust chambers, volume 1

    NASA Technical Reports Server (NTRS)

    Armstrong, W. H.

    1979-01-01

Life predictions of regeneratively cooled rocket thrust chambers are normally derived from classical material fatigue principles. The failures observed in experimental thrust chambers do not appear to be due entirely to material fatigue. The chamber coolant walls in the failed areas exhibit progressive bulging and thinning during cyclic firings until the wall stress finally exceeds the material rupture stress and failure occurs. A preliminary analysis of an oxygen free high conductivity (OFHC) copper cylindrical thrust chamber demonstrated that the inclusion of cumulative cyclic plastic effects enables the observed coolant wall thinout to be predicted. The thinout curve constructed from the reference analysis of 10 firing cycles was extrapolated from the tenth cycle to the 200th cycle. The preliminary OFHC copper chamber 10-cycle analysis was extended so that the extrapolated thinout curve could be established by performing cyclic analysis of deformed configurations at 100 and 200 cycles. Thus the original range of extrapolation was reduced and the thinout curve was adjusted by using calculated thinout rates at 100 and 200 cycles. An analysis of the same undeformed chamber model constructed of half-hard Amzirc to study the effect of material properties on the thinout curve is included.

  8. Seismic piping test and analysis. Volumes 1, 2, and 3

    SciTech Connect

    Not Available

    1980-09-01

This report presents selected results to date of a dynamic testing and analysis program focusing on a piping system at Consolidated Edison Company of New York's Indian Point-1 Nuclear Generating Station. The goal of this research program is the development of more accurate and realistic models of piping systems subjected to seismic, hydraulic, operating, and other dynamic loads. The program seeks to identify piping system properties significant to dynamic response rather than seeking to simulate any particular form of excitation. The fundamental experimental approach is the excitation of piping/restraint devices/supports by a variety of dynamic test methods and the analysis of the resulting response to identify the characteristic dynamic properties of the system tested. The comparison of the identified dynamic properties to those predicted by alternative analytical approaches will support improvements in methods used in the dynamic analysis of piping, restraint devices, and supports.

  9. Finite element analysis of laminated plates and shells, volume 1

    NASA Technical Reports Server (NTRS)

    Seide, P.; Chang, P. N. H.

    1978-01-01

    The finite element method is used to investigate the static behavior of laminated composite flat plates and cylindrical shells. The analysis incorporates the effects of transverse shear deformation in each layer through the assumption that the normals to the undeformed layer midsurface remain straight but need not be normal to the mid-surface after deformation. A digital computer program was developed to perform the required computations. The program includes a very efficient equation solution code which permits the analysis of large size problems. The method is applied to the problem of stretching and bending of a perforated curved plate.

  10. Thermal characterization and analysis of microliter liquid volumes using the three-omega method.

    PubMed

    Roy-Panzer, Shilpi; Kodama, Takashi; Lingamneni, Srilakshmi; Panzer, Matthew A; Asheghi, Mehdi; Goodson, Kenneth E

    2015-02-01

    Thermal phenomena in many biological systems offer an alternative detection opportunity for quantifying relevant sample properties. While there is substantial prior work on thermal characterization methods for fluids, the push in the biology and biomedical research communities towards analysis of reduced sample volumes drives a need to extend and scale these techniques to these volumes of interest, which can be below 100 pl. This work applies the 3ω technique to measure the temperature-dependent thermal conductivity and heat capacity of de-ionized water, silicone oil, and salt buffer solution droplets from 24 to 80 °C. Heater geometries range in length from 200 to 700 μm and in width from 2 to 5 μm to accommodate the size restrictions imposed by small volume droplets. We use these devices to measure droplet volumes of 2 μl and demonstrate the potential to extend this technique down to pl droplet volumes based on an analysis of the thermally probed volume. Sensitivity and uncertainty analyses provide guidance for relevant design variables for characterizing properties of interest by investigating the tradeoffs between measurement frequency regime, device geometry, and substrate material. Experimental results show that we can extract thermal conductivity and heat capacity with these sample volumes to within less than 1% of thermal properties reported in the literature.
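The thermal conductivity extraction described above can be sketched with the standard slope method for the 3ω technique: the in-phase temperature oscillation of the heater line varies linearly with the logarithm of the drive frequency, and the slope is inversely proportional to κ. This is an illustrative reconstruction, not the authors' code; the power, heater length, and frequency values below are hypothetical.

```python
import numpy as np

def kappa_from_3omega(omega, dT, power, length):
    """Slope method: the in-phase temperature oscillation of the heater
    is linear in ln(omega) with slope -P / (2*pi*L*kappa), so kappa is
    recovered from a linear fit of dT versus ln(omega)."""
    slope = np.polyfit(np.log(omega), dT, 1)[0]
    return -power / (2.0 * np.pi * length * slope)

# Synthetic check with hypothetical values: generate dT for a known
# conductivity, then recover it from the slope.
kappa_true = 0.6             # W/(m K), roughly water
P = 1e-3                     # W dissipated in the heater
L = 500e-6                   # 500 um line, within the 200-700 um range above
omega = np.logspace(2, 4, 20)
dT = P / (np.pi * L * kappa_true) * (5.0 - 0.5 * np.log(omega))
print(kappa_from_3omega(omega, dT, P, L))   # ~0.6
```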

  11. Thermal characterization and analysis of microliter liquid volumes using the three-omega method

    NASA Astrophysics Data System (ADS)

    Roy-Panzer, Shilpi; Kodama, Takashi; Lingamneni, Srilakshmi; Panzer, Matthew A.; Asheghi, Mehdi; Goodson, Kenneth E.

    2015-02-01

    Thermal phenomena in many biological systems offer an alternative detection opportunity for quantifying relevant sample properties. While there is substantial prior work on thermal characterization methods for fluids, the push in the biology and biomedical research communities towards analysis of reduced sample volumes drives a need to extend and scale these techniques to these volumes of interest, which can be below 100 pl. This work applies the 3ω technique to measure the temperature-dependent thermal conductivity and heat capacity of de-ionized water, silicone oil, and salt buffer solution droplets from 24 to 80 °C. Heater geometries range in length from 200 to 700 μm and in width from 2 to 5 μm to accommodate the size restrictions imposed by small volume droplets. We use these devices to measure droplet volumes of 2 μl and demonstrate the potential to extend this technique down to pl droplet volumes based on an analysis of the thermally probed volume. Sensitivity and uncertainty analyses provide guidance for relevant design variables for characterizing properties of interest by investigating the tradeoffs between measurement frequency regime, device geometry, and substrate material. Experimental results show that we can extract thermal conductivity and heat capacity with these sample volumes to within less than 1% of thermal properties reported in the literature.

  12. A Finite-Volume "Shaving" Method for Interfacing NASA/DAO's Physical Space Statistical Analysis System to the Finite-Volume GCM with a Lagrangian Control-Volume Vertical Coordinate

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann; DaSilva, Arlindo; Atlas, Robert (Technical Monitor)

    2001-01-01

    Toward the development of a finite-volume Data Assimilation System (fvDAS), a consistent finite-volume methodology is developed for interfacing the NASA/DAO's Physical Space Statistical Analysis System (PSAS) to the joint NASA/NCAR finite volume CCM3 (fvCCM3). To take advantage of the Lagrangian control-volume vertical coordinate of the fvCCM3, a novel "shaving" method is applied to the lowest few model layers to reflect the surface pressure changes as implied by the final analysis. Analysis increments (from PSAS) to the upper air variables are then consistently put onto the Lagrangian layers as adjustments to the volume-mean quantities during the analysis cycle. This approach is demonstrated to be superior to the conventional method of using independently computed "tendency terms" for surface pressure and upper air prognostic variables.

  13. Cost-volume-profit and net present value analysis of health information systems.

    PubMed

    McLean, R A

    1998-08-01

The adoption of any information system should be justified by an economic analysis demonstrating that its projected benefits outweigh its projected costs. Analysts differ, however, on which methods to employ for such a justification. Accountants prefer cost-volume-profit analysis, and economists prefer net present value analysis. The article explains the strengths and weaknesses of each method and shows how they can be used together so that well-informed investments in information systems can be made.
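Both methods reduce to short formulas: the break-even transaction volume from cost-volume-profit analysis, and the discounted sum of cash flows from net present value analysis. The sketch below is illustrative only; the costs, price, discount rate, and cash flows are hypothetical.

```python
def breakeven_volume(fixed_costs, unit_price, unit_variable_cost):
    """Cost-volume-profit: transaction volume at which revenue covers
    fixed plus variable costs (contribution-margin form)."""
    return fixed_costs / (unit_price - unit_variable_cost)

def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical system: $120,000 fixed cost, $50 contribution per transaction
print(breakeven_volume(120_000, 80, 30))                        # 2400.0
# Hypothetical investment: $100k outlay, $40k/year for 3 years, 10% rate
print(round(npv(0.10, [-100_000, 40_000, 40_000, 40_000]), 2))  # -525.92
```

The two views can disagree: a system may break even on volume yet still carry a negative NPV once the time value of money is included, which is why the article recommends using them together.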

  14. Determination of lead in hair and its segmental analysis by solid sampling electrothermal atomic absorption spectrometry

    NASA Astrophysics Data System (ADS)

    Baysal, Asli; Akman, Suleyman

    2010-04-01

A rapid and practical solid sampling electrothermal atomic absorption spectrometric method was described for the determination of lead in scalp hair. Hair samples were washed once with acetone, thrice with distilled-deionized water, and again once with acetone, then dried at 75 °C. Typically 0.05 to 1.0 mg of dried sample was inserted on the platforms of the solid sampling autosampler. The effects of pyrolysis temperature, atomization temperature, the amount of sample, as well as addition of a modifier (Pd/Mg) and/or auxiliary digesting agents (hydrogen peroxide and nitric acid) and/or a surfactant (Triton X-100) on the recovery of lead were investigated. The limit of detection for lead (3 σ, N = 10) was 0.3 ng/g. The addition of modifier, acids, oxidant, and surfactant hardly improved the results. Due to the risk of contamination and relatively high blank values, the lead in hair was determined directly without adding any reagent(s). Finally, the method was applied to the segmental determination of lead concentrations in the hair of different persons, which is important for establishing when and to what extent a person was exposed to the analyte. For this purpose, 0.5 cm pieces were cut along one or a few close strands and analyzed by solid sampling.
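The 3σ limit of detection quoted above is conventionally computed as three times the standard deviation of replicate blank measurements divided by the calibration sensitivity. A minimal sketch follows; the blank readings and slope are hypothetical, not taken from the paper.

```python
import statistics

def detection_limit(blank_signals, sensitivity, k=3):
    """k-sigma limit of detection: k times the standard deviation of
    replicate blank signals, divided by the calibration slope."""
    return k * statistics.stdev(blank_signals) / sensitivity

# Hypothetical absorbance readings of 10 blank replicates and a
# hypothetical calibration slope (absorbance per ng/g Pb):
blanks = [0.011, 0.013, 0.012, 0.010, 0.014,
          0.012, 0.011, 0.013, 0.012, 0.012]
print(detection_limit(blanks, sensitivity=0.04))
```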

  15. Underground Test Area Subproject Phase I Data Analysis Task. Volume II - Potentiometric Data Document Package

    SciTech Connect

    1996-12-01

    Volume II of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment for the Nevada Test Site Underground Test Area Subproject contains the potentiometric data. Because of the size and complexity of the model area, a considerable quantity of data was collected and analyzed in support of the modeling efforts. The data analysis task was consequently broken into eight subtasks, and descriptions of each subtask's activities are contained in one of the eight volumes that comprise the Phase I Data Analysis Documentation.

  16. Underground Test Area Subproject Phase I Data Analysis Task. Volume VIII - Risk Assessment Documentation Package

    SciTech Connect

    1996-12-01

    Volume VIII of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment for the Nevada Test Site Underground Test Area Subproject contains the risk assessment documentation. Because of the size and complexity of the model area, a considerable quantity of data was collected and analyzed in support of the modeling efforts. The data analysis task was consequently broken into eight subtasks, and descriptions of each subtask's activities are contained in one of the eight volumes that comprise the Phase I Data Analysis Documentation.

  17. Underground Test Area Subproject Phase I Data Analysis Task. Volume VI - Groundwater Flow Model Documentation Package

    SciTech Connect

    1996-11-01

    Volume VI of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment for the Nevada Test Site Underground Test Area Subproject contains the groundwater flow model data. Because of the size and complexity of the model area, a considerable quantity of data was collected and analyzed in support of the modeling efforts. The data analysis task was consequently broken into eight subtasks, and descriptions of each subtask's activities are contained in one of the eight volumes that comprise the Phase I Data Analysis Documentation.

  18. Underground Test Area Subproject Phase I Data Analysis Task. Volume VII - Tritium Transport Model Documentation Package

    SciTech Connect

    1996-12-01

    Volume VII of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment for the Nevada Test Site Underground Test Area Subproject contains the tritium transport model documentation. Because of the size and complexity of the model area, a considerable quantity of data was collected and analyzed in support of the modeling efforts. The data analysis task was consequently broken into eight subtasks, and descriptions of each subtask's activities are contained in one of the eight volumes that comprise the Phase I Data Analysis Documentation.

  19. Satellite power systems (SPS) concept definition study. Volume 7: SPS program plan and economic analysis, appendixes

    NASA Technical Reports Server (NTRS)

    Hanley, G.

    1978-01-01

    Three appendixes in support of Volume 7 are contained in this document. The three appendixes are: (1) Satellite Power System Work Breakdown Structure Dictionary; (2) SPS cost Estimating Relationships; and (3) Financial and Operational Concept. Other volumes of the final report that provide additional detail are: Executive Summary; SPS Systems Requirements; SPS Concept Evolution; SPS Point Design Definition; Transportation and Operations Analysis; and SPS Technology Requirements and Verification.

  20. Linkage disequilibrium analysis by searching for shared segments: Mapping a locus for benign recurrent intrahepatic cholestasis (BRIC)

    SciTech Connect

    Freimer, N.; Baharloo, S.; Blankenship, K.

    1994-09-01

The lod score method of linkage analysis has two important drawbacks: parameters must be specified for the transmission of the disease (e.g. penetrance), and large numbers of genetically informative individuals must be studied. Although several robust non-parametric methods are available, these also require large sample sizes. The availability of dense genetic maps permits genome screening to be conducted by linkage disequilibrium (LD) mapping methods, which are statistically powerful and non-parametric. Lander & Botstein proposed that LD mapping could be employed to screen the human genome for disease loci; we have now applied this strategy to map a gene for an autosomal recessive disorder, benign recurrent intrahepatic cholestasis (BRIC). Our approach to LD mapping was based on identifying chromosome segments shared between distantly related patients; we used 256 microsatellite markers to genotype three affected individuals and their parents, from an isolated town in The Netherlands. Because endogamy occurred in this population for several generations, all of the BRIC patients are known to be distantly related to each other, but the pedigree structure and connections could not be established with certainty more than three generations before the present, so lod score analysis was impossible. A 20 cM region on chromosome 18 is shared by 5/6 patient chromosomes; subsequently, we noted that 6/6 chromosomes shared an interval of about 3 cM in this region. Calculations indicate that it is extremely unlikely that such a region could be inherited by chance rather than by descent from a common ancestor. Thus, LD mapping by searching for shared chromosomal segments is an extremely powerful approach for genome screening to identify disease loci.
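For contrast with the shared-segment approach, the lod score statistic mentioned above can be sketched for the simplest phase-known two-point case: the base-10 logarithm of the likelihood ratio of a recombination fraction θ against the null of free recombination (θ = 0.5). This is a textbook illustration, not the authors' analysis, and the counts are hypothetical.

```python
import math

def lod_score(theta, recombinants, nonrecombinants):
    """Two-point lod score for phase-known meioses: log10 of the
    likelihood ratio of recombination fraction theta versus 0.5."""
    n = recombinants + nonrecombinants
    likelihood = theta ** recombinants * (1 - theta) ** nonrecombinants
    return math.log10(likelihood / 0.5 ** n)

# 10 phase-known meioses with no recombinants, evaluated at theta = 0.05:
print(round(lod_score(0.05, 0, 10), 2))   # 2.79
```

The sample-size drawback the abstract cites is visible here: reaching the conventional lod threshold of 3 requires more informative meioses than a small inbred pedigree can supply.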

  1. Spaceborne power systems preference analyses. Volume 2: Decision analysis

    NASA Technical Reports Server (NTRS)

    Smith, J. H.; Feinberg, A.; Miles, R. F., Jr.

    1985-01-01

Sixteen alternative spaceborne nuclear power system concepts were ranked using multiattribute decision analysis. The purpose of the ranking was to identify promising concepts for further technology development and the issues associated with such development. Four groups were interviewed to obtain preferences. The four groups were: safety, systems definition and design, technology assessment, and mission analysis. The highest ranked systems were the heat-pipe thermoelectric systems, heat-pipe Stirling, in-core thermionic, and liquid-metal thermoelectric systems. The next group contained the liquid-metal Stirling, heat-pipe Alkali Metal Thermoelectric Converter (AMTEC), heat-pipe Brayton, liquid-metal out-of-core thermionic, and heat-pipe Rankine systems. The least preferred systems were the liquid-metal AMTEC, heat-pipe thermophotovoltaic, liquid-metal Brayton and Rankine, and gas-cooled Brayton. The three nonheat-pipe technologies selected matched the top three nonheat-pipe systems ranked by this study.
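A multiattribute ranking of this kind reduces, in its simplest additive form, to a weighted sum of single-attribute scores per concept. The sketch below is illustrative only: the attribute names, weights, and scores are invented, not the study's elicited preferences.

```python
def rank_concepts(scores, weights):
    """Additive multiattribute utility: each concept's overall utility
    is the weighted sum of its single-attribute scores (0-1 scale)."""
    utility = {name: sum(w * s for w, s in zip(weights, attrs))
               for name, attrs in scores.items()}
    return sorted(utility, key=utility.get, reverse=True)

# Invented attributes (safety, specific mass, maturity) and weights:
weights = [0.5, 0.3, 0.2]
scores = {
    "heat-pipe thermoelectric": [0.9, 0.7, 0.8],
    "in-core thermionic":       [0.7, 0.9, 0.6],
    "gas-cooled Brayton":       [0.5, 0.6, 0.7],
}
print(rank_concepts(scores, weights))
```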

  2. Integrated operations/payloads/fleet analysis. Volume 2: Payloads

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The payloads for NASA and non-NASA missions of the integrated fleet are analyzed to generate payload data for the capture and cost analyses for the period 1979 to 1990. Most of the effort is on earth satellites, probes, and planetary missions because of the space shuttle's ability to retrieve payloads for repair, overhaul, and maintenance. Four types of payloads are considered: current expendable payload; current reusable payload; low cost expendable payload, (satellite to be used with expendable launch vehicles); and low cost reusable payload (satellite to be used with the space shuttle/space tug system). Payload weight analysis, structural sizing analysis, and the influence of mean mission duration on program cost are also discussed. The payload data were computerized, and printouts of the data for payloads for each program or mission are included.

  3. Space tug economic analysis study. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1972-01-01

    An economic analysis of space tug operations is presented. The space tug is defined as any liquid propulsion stage under 100,000 pounds propellant loading that is flown from the space shuttle cargo bay. Two classes of vehicles are the orbit injection stages and reusable space tugs. The vehicle configurations, propellant combinations, and operating modes used for the study are reported. The summary contains data on the study approach, results, conclusions, and recommendations.

  4. Satellite services system analysis study. Volume 5: Programmatics

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The overall program and resources needed for development and operation of a Satellite Services System is reviewed. Program requirements covered system operations through 1993 and were completed in preliminary form. Program requirements were refined based on equipment preliminary design and analysis. Schedules, costs, equipment utilization, and facility/advanced technology requirements were included in the update. Equipment user charges were developed for each piece of equipment and for representative satellite servicing missions.

  5. Conjoint Analysis of Study Abroad Preferences: Key Attributes, Segments and Implications for Increasing Student Participation

    ERIC Educational Resources Information Center

    Garver, Michael S.; Divine, Richard L.

    2008-01-01

    An adaptive conjoint analysis was performed on the study abroad preferences of a sample of undergraduate college students. The results indicate that trip location, cost, and time spent abroad are the three most important determinants of student preference for different study abroad trip scenarios. The analysis also uncovered four different study…

  6. Image-based segmentation for characterization and quantitative analysis of the spinal cord injuries by using diffusion patterns

    NASA Astrophysics Data System (ADS)

    Hannula, Markus; Olubamiji, Adeola; Kunttu, Iivari; Dastidar, Prasun; Soimakallio, Seppo; Öhman, Juha; Hyttinen, Jari

    2011-03-01

In medical imaging, magnetic resonance imaging sequences are able to provide information on damaged brain structure and neuronal connections. The sequences can be analyzed to form 3D models of the geometry, further augmented with functional information about the neurons of the specific brain area to develop functional models. Modeling offers a tool for representing brain trauma from patient images and thus provides information to tailor the properties of transplanted cells. In this paper, we present image-based methods for the analysis of human spinal cord injuries. In this effort, we use three-dimensional diffusion tensor imaging, which is an effective method for analyzing the behavior of water molecules. Our idea is to study how the injury affects the tissues and how this can be made visible in imaging. We present a study of spinal cord analysis in two subjects, one healthy volunteer and one spinal cord injury patient. We performed segmentations and volumetric analysis for the detection of anatomical differences. The functional differences were analyzed using diffusion tensor imaging. The obtained results show that this kind of analysis is capable of finding differences in spinal cord anatomy and function.
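Diffusion tensor data of the kind used here are commonly summarized by fractional anisotropy (FA), computed from the eigenvalues of the diffusion tensor at each voxel: 0 for isotropic diffusion, approaching 1 when diffusion is strongly directed, as along intact fiber tracts. A minimal sketch of the standard formula, with hypothetical eigenvalues:

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """FA from the three eigenvalues of the diffusion tensor:
    sqrt(1/2) * ||lambda - mean|| pairwise form over the tensor norm."""
    num = (l1 - l2) ** 2 + (l2 - l3) ** 2 + (l1 - l3) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return math.sqrt(0.5 * num / den)

print(fractional_anisotropy(1.0, 1.0, 1.0))            # 0.0, isotropic
print(round(fractional_anisotropy(1.7, 0.3, 0.3), 3))  # 0.799, fiber-like
```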

  7. Extracting Metrics for Three-dimensional Root Systems: Volume and Surface Analysis from In-soil X-ray Computed Tomography Data.

    PubMed

    Suresh, Niraj; Stephens, Sean A; Adams, Lexor; Beck, Anthon N; McKinney, Adriana L; Varga, Tamas

    2016-01-01

Plant roots play a critical role in plant-soil-microbe interactions that occur in the rhizosphere, as well as processes with important implications to climate change and crop management. Quantitative size information on roots in their native environment is invaluable for studying root growth and environmental processes involving plants. X-ray computed tomography (XCT) has been demonstrated to be an effective tool for in situ root scanning and analysis. We aimed to develop a costless and efficient tool that approximates the surface and volume of the root regardless of its shape from three-dimensional (3D) tomography data. The root structure of a Prairie dropseed (Sporobolus heterolepis) specimen was imaged using XCT. The root was reconstructed, and the primary root structure was extracted from the data using a combination of licensed and open-source software. An isosurface polygonal mesh was then created for ease of analysis. We have developed the standalone application imeshJ, generated in MATLAB, to calculate root volume and surface area from the mesh. The outputs of imeshJ are surface area (in mm²) and volume (in mm³). The process, utilizing a unique combination of tools from imaging to quantitative root analysis, is described. XCT and open-source software proved to be a powerful combination to noninvasively image plant root samples, segment root data, and extract quantitative information from the 3D data. This methodology of processing 3D data should be applicable to other material/sample systems where there is connectivity between components of similar X-ray attenuation and difficulties arise with segmentation. PMID:27168248
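The two quantities imeshJ reports can be computed from a closed triangle mesh alone: surface area as the sum of triangle areas, and volume via the divergence theorem as the sum of signed tetrahedra formed by each face and the origin. The sketch below (Python rather than MATLAB, and not the imeshJ code) verifies both on a unit corner tetrahedron.

```python
import numpy as np

def mesh_metrics(vertices, faces):
    """Surface area and enclosed volume of a closed triangle mesh.
    Area sums the triangle areas; volume sums signed tetrahedra
    formed by each face and the origin (divergence theorem)."""
    v = np.asarray(vertices, dtype=float)
    area = 0.0
    volume = 0.0
    for i, j, k in faces:
        a, b, c = v[i], v[j], v[k]
        area += 0.5 * np.linalg.norm(np.cross(b - a, c - a))
        volume += np.dot(a, np.cross(b, c)) / 6.0
    return area, abs(volume)

# Sanity check on a unit corner tetrahedron (analytic volume 1/6):
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
area, vol = mesh_metrics(verts, faces)
print(round(vol, 4))   # 0.1667
```

The signed-tetrahedron sum works for any shape of closed mesh, which matches the abstract's goal of approximating volume "regardless of its shape".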

  8. Extracting Metrics for Three-dimensional Root Systems: Volume and Surface Analysis from In-soil X-ray Computed Tomography Data.

    PubMed

    Suresh, Niraj; Stephens, Sean A; Adams, Lexor; Beck, Anthon N; McKinney, Adriana L; Varga, Tamas

    2016-01-01

Plant roots play a critical role in plant-soil-microbe interactions that occur in the rhizosphere, as well as processes with important implications to climate change and crop management. Quantitative size information on roots in their native environment is invaluable for studying root growth and environmental processes involving plants. X-ray computed tomography (XCT) has been demonstrated to be an effective tool for in situ root scanning and analysis. We aimed to develop a costless and efficient tool that approximates the surface and volume of the root regardless of its shape from three-dimensional (3D) tomography data. The root structure of a Prairie dropseed (Sporobolus heterolepis) specimen was imaged using XCT. The root was reconstructed, and the primary root structure was extracted from the data using a combination of licensed and open-source software. An isosurface polygonal mesh was then created for ease of analysis. We have developed the standalone application imeshJ, generated in MATLAB, to calculate root volume and surface area from the mesh. The outputs of imeshJ are surface area (in mm²) and volume (in mm³). The process, utilizing a unique combination of tools from imaging to quantitative root analysis, is described. XCT and open-source software proved to be a powerful combination to noninvasively image plant root samples, segment root data, and extract quantitative information from the 3D data. This methodology of processing 3D data should be applicable to other material/sample systems where there is connectivity between components of similar X-ray attenuation and difficulties arise with segmentation.

  9. Analysis of space tug operating techniques. Volume 2: Study results

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The design requirements for space tug systems and cost analysis of the refurbishment phases are discussed. The vehicle is an integral propulsion stage using liquid hydrogen and liquid oxygen as propellants and is capable of operating either as a fully or a partially autonomous vehicle. Structural features are an integral liquid hydrogen tank, a liquid oxygen tank, a meteoroid shield, an aft conical docking and structural support ring, and a staged combustion main engine. The vehicle is constructed of major modules for ease of maintenance. Line drawings and block diagrams are included to explain the maintenance requirements for the subsystems.

  10. Development of a rotorcraft. Propulsion dynamics interface analysis, volume 1

    NASA Technical Reports Server (NTRS)

    Hull, R.

    1982-01-01

    The details of the modeling process and its implementation approach are presented. A generic methodology and model structure for performing coupled propulsion/rotor response analysis that is applicable to a variety of rotorcraft types was developed. A method for parameterizing the model structure to represent a particular rotorcraft is defined. The generic modeling methodology, the development of the propulsion system and the rotor/fuselage models, and the formulation of the resulting coupled rotor/propulsion system model are described. A test case that was developed is described.

  11. Wind tunnel test IA300 analysis and results, volume 1

    NASA Technical Reports Server (NTRS)

    Kelley, P. B.; Beaufait, W. B.; Kitchens, L. L.; Pace, J. P.

    1987-01-01

The analysis and interpretation of wind tunnel pressure data from the Space Shuttle wind tunnel test IA300 are presented. The primary objective of the test was to determine the effects of the Space Shuttle Main Engine (SSME) and the Solid Rocket Booster (SRB) plumes on the integrated vehicle forebody pressure distributions, the elevon hinge moments, and wing loads. The results of this test will be combined with flight test results to form a new data base to be employed in the IVBC-3 airloads analysis. A secondary objective was to obtain solid plume data for correlation with the results of gaseous plume tests. Data from the power level portion were used in conjunction with flight base pressures to evaluate nominal power levels to be used during the investigation of changes in model attitude, elevon deflection, and nozzle gimbal angle. The plume induced aerodynamic loads were developed for the Space Shuttle bases and forebody areas. A computer code was developed to integrate the pressure data. Using simplified geometrical models of the Space Shuttle elements and components, the pressure data were integrated to develop plume induced force and moment coefficients that can be combined with a power-off data base to develop a power-on data base.
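Panel-wise pressure integration of the kind the computer code performs can be sketched as a sum of pressure-coefficient contributions over discrete surface panels. This is an illustrative reconstruction, not the IA300 integration code; the panels, Cp values, and reference area below are hypothetical.

```python
import numpy as np

def force_coefficients(cp, areas, normals, s_ref):
    """Sum panel pressure contributions into body-axis force
    coefficients: C_F = -(1 / S_ref) * sum_i Cp_i * A_i * n_i,
    where n_i is the outward unit normal of panel i."""
    cp = np.asarray(cp, dtype=float)
    areas = np.asarray(areas, dtype=float)
    normals = np.asarray(normals, dtype=float)
    return -(cp[:, None] * areas[:, None] * normals).sum(axis=0) / s_ref

# Two hypothetical unit-area panels on opposite faces of a flat plate:
cp = [0.8, -0.4]                         # upper face, lower face
normals = [(0, 0, 1.0), (0, 0, -1.0)]    # outward unit normals
print(force_coefficients(cp, [1.0, 1.0], normals, s_ref=1.0))
```

Moment coefficients follow the same pattern with an extra cross product of each panel's moment arm against its force contribution.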

  12. Demand modelling of passenger air travel: An analysis and extension, volume 2

    NASA Technical Reports Server (NTRS)

    Jacobson, I. D.

    1978-01-01

Previous intercity travel demand models are evaluated in terms of their ability to predict air travel in a useful way, along with the need for disaggregation in the approach to demand modelling. The viability of incorporating non-conventional (i.e., non-econometric) factors, beyond time and cost, in travel demand forecasting models is determined. The investigation of existing models is carried out in order to provide insight into their strong points and shortcomings. The model is characterized as a market segmentation model. This is a consequence of the strengths of disaggregation and its natural evolution to a usable aggregate formulation. The need for this approach, both pedagogically and mathematically, is discussed. In addition, this volume contains two appendices which should prove useful to the non-specialist in the area.

  13. Small V/STOL aircraft analysis, volume 1

    NASA Technical Reports Server (NTRS)

    Smith, K. R., Jr.; Belina, F. W.

    1974-01-01

A study has been made of the economic viability of advanced V/STOL aircraft concepts in performing general aviation missions. A survey of general aviation aircraft users, operators, and manufacturers indicated that personnel transport missions formulated around business executive needs, commuter air service, and offshore oil supply are the leading potential areas of application using VTOL aircraft. Advanced VTOL concepts potentially available in the late 1970s were evaluated as alternatives to privately owned contemporary aircraft and commercial airline service in satisfying these personnel transport needs. Economic analyses incorporating the traveler's value of time as the principal figure of merit were used to identify the relative merits of alternative VTOL air transportation concepts.

  14. Numerical Analysis of a Finite Element/Volume Penalty Method

    NASA Astrophysics Data System (ADS)

    Maury, Bertrand

The penalty method makes it possible to incorporate a large class of constraints in general purpose Finite Element solvers like freeFEM++. We present here some contributions to the numerical analysis of this method. We propose an abstract framework for this approach, together with some general error estimates based on the discretization parameter ɛ and the space discretization parameter h. As this work is motivated by the possibility to handle constraints like rigid motion for fluid-particle flows, we shall pay special attention to a model problem of this kind, where the constraint is prescribed over a subdomain. We show how the abstract estimate can be applied to this situation, in the case where a non-body-fitted mesh is used. In addition, we describe how this method provides an approximation of the Lagrange multiplier associated with the constraint.
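The behavior of a penalty-enforced constraint prescribed over a subdomain can be illustrated with a 1D finite-difference analogue: solve a discrete Poisson problem while adding 1/ɛ to the diagonal entries of the constrained nodes, and watch the constraint violation shrink as ɛ decreases. This is a sketch of the idea under arbitrary mesh and ɛ choices, not the freeFEM++ implementation or the paper's estimates.

```python
import numpy as np

# 1D analogue of the penalty method: solve -u'' = 1 on (0,1) with
# u = 0 at the ends, penalizing u over the subdomain (0.4, 0.6)
# by adding 1/eps to the constrained diagonal entries.
n = 99
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h ** 2
f = np.ones(n)
x = np.linspace(h, 1 - h, n)
constrained = (x > 0.4) & (x < 0.6)

for eps in (1e-2, 1e-4, 1e-6):
    u = np.linalg.solve(A + np.diag(constrained / eps), f)
    print(eps, np.abs(u[constrained]).max())  # violation shrinks with eps
```

The printed maximum of |u| over the constrained nodes decreases with ɛ, mirroring the ɛ-dependence of the abstract error estimates.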

  15. Space tug economic analysis study. Volume 2: Tug concepts analysis. Part 2: Economic analysis

    NASA Technical Reports Server (NTRS)

    1972-01-01

    An economic analysis of space tug operations is presented. The subjects discussed are: (1) cost uncertainties, (2) scenario analysis, (3) economic sensitivities, (4) mixed integer programming formulation of the space tug problem, and (5) critical parameters in the evaluation of a public expenditure.

  16. Pressure-volume analysis of changes in cardiac function in chronic cardiomyoplasty.

    PubMed

    Cho, P W; Levin, H R; Curtis, W E; Tsitlik, J E; DiNatale, J M; Kass, D A; Gardner, T J; Kunel, R W; Acker, M A

    1993-07-01

    Reports of clinical improvement in human studies of dynamic cardiomyoplasty lack support by consistent objective hemodynamic evidence. Animal studies have also yielded conflicting results, likely due to nonuniform models, particularly the use of unconditioned wraps, and to limitations in commonly used study modalities caused by exaggerated heart motion during wrap stimulation. Our purpose was to assess the primary functional properties of the heart wrapped by conditioned muscle using pressure-volume relation analysis based on conductance catheter volume data. Compared with the unstimulated state, 1:1 stimulation caused an increase in contractility and decreases in end-diastolic volume and stroke work. Assisted beats during 1:2 stimulation showed an increase in contractility and a decrease in end-diastolic volume. Unassisted beats (1:2) showed decreases in end-diastolic volume and stroke work. There was no augmentation of cardiac output or ejection fraction with stimulation (1:1 or 1:2). We conclude that in the nonfailing heart, increased contractility does not augment cardiac output, ejection fraction, and stroke work because of a simultaneous decrease in end-diastolic volume. These changes in contractility and end-diastolic volume may prove therapeutic for dilated cardiomyopathy.
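Pressure-volume indices such as stroke work correspond to the area enclosed by the pressure-volume loop built from conductance-catheter volume data. A minimal sketch, using the shoelace formula on an idealized loop (the data values are illustrative, not from the study):

```python
def stroke_work(pressures, volumes):
    # Area enclosed by the pressure-volume loop via the shoelace formula.
    # Pressures in mmHg and volumes in mL give stroke work in mmHg*mL.
    n = len(pressures)
    area = 0.0
    for i in range(n):
        j = (i + 1) % n
        area += volumes[i] * pressures[j] - volumes[j] * pressures[i]
    return abs(area) / 2.0

# Idealized rectangular loop: fill from 50 to 120 mL at 10 mmHg,
# eject back to 50 mL at 100 mmHg.
work = stroke_work([10.0, 10.0, 100.0, 100.0], [50.0, 120.0, 120.0, 50.0])
```

Real loops are sampled at many points per cardiac cycle; the same formula applies unchanged.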

  17. Comparison of segmented flow analysis and ion chromatography for the quantitative characterization of carbohydrates in tobacco products.

    PubMed

    Shifflett, John R; Jones, Lindsey A; Limowski, Edward R; Bezabeh, Dawit Z

    2012-11-28

    Segmented flow analysis (SFA) and ion chromatography with pulsed amperometric detection (IC-PAD) are widely used analytical techniques for the analysis of glucose, fructose, and sucrose in tobacco. In the work presented here, 27 cured tobacco leaves and 21 tobacco products were analyzed for sugars using SFA and IC. The results of these analyses demonstrated that both techniques identified the same trends in sugar content across tobacco leaf and tobacco product types. However, comparison of results between techniques was limited by the selectivity of the SFA method, which relies on the specificity of the reaction of p-hydroxybenzoic acid hydrazide (PAHBAH) with glucose and fructose to generate a detectable derivative. Sugar amines and chlorogenic acid, which are found in tobacco, are also known to react with PAHBAH to form a reaction product that interferes with the analysis of fructose and glucose. To mitigate this problem, solid phase extraction (SPE) was used to remove interferences such as sugar amines and chlorogenic acid from sample matrices prior to SFA. A combination of C18 and cation exchange solid phase extraction cartridges was used, and the results from SFA and IC analyses showed significant convergence in the results of both analytical methods. For example, the average difference between the results from the SFA and IC analyses for flue-cured tobacco samples dropped by 73% when the two-step C18/cation exchange resin sample cleanup was used.

  18. Analysis of cell concentration, volume concentration, and colony size of Microcystis via laser particle analyzer.

    PubMed

    Li, Ming; Zhu, Wei; Gao, Li

    2014-05-01

    The analysis of the cell concentration, volume concentration, and colony size of Microcystis is widely used to provide early warnings of blooms and to support predictive tools that mitigate their impact. This study developed a new approach for analyzing the cell concentration, volume concentration, and colony size of Microcystis with a laser particle analyzer. Four types of Microcystis samples (55 samples in total) were analyzed by both a laser particle analyzer and a microscope. With the laser particle analyzer: (1) when n = 1.40 and k = 0.1 (n is the intrinsic refractive index and k the absorption of light by the particle), the analyzer's results showed good agreement with the microscopic results for the obscuration indicator, volume concentration, and size distribution of Microcystis; (2) the Microcystis cell concentration can be calculated from its linear relationship with obscuration; and (3) the volume concentration and size distribution of Microcystis particles (including single cells and colonies) can be obtained directly. The analytical process involved in this new approach is simpler and faster than the microscopic counting method. The results also showed that the relationship between cell concentration and volume concentration depends on the colony size of Microcystis, because larger colonies contain more intercellular space. Cell concentration can therefore be derived from volume concentration when colony size information is available.

  19. Mutation analysis of the genes associated with anterior segment dysgenesis, microcornea and microphthalmia in 257 patients with glaucoma.

    PubMed

    Huang, Xiaobo; Xiao, Xueshan; Jia, Xiaoyun; Li, Shiqiang; Li, Miaoling; Guo, Xiangming; Liu, Xing; Zhang, Qingjiong

    2015-10-01

    Genetic factors have an important role in the development of glaucoma; however, the exact genetic defects remain to be identified in the majority of patients. Glaucoma is frequently observed in patients with anterior segment dysgenesis (ASD), microcornea or microphthalmia. The present study aimed to detect the potential mutations in the genes associated with ASD, microcornea and microphthalmia in 257 patients with glaucoma. Variants in 43 of the 46 genes associated with ASD, microcornea or microphthalmia were available from whole-exome sequencing. Candidate variants in the 43 genes were selected following multi-step bioinformatic analysis and were subsequently confirmed by Sanger sequencing. Confirmed variants were further validated by segregation analysis and analysis of controls. Overall, 70 candidate variants were selected from whole-exome sequencing, of which 53 (75.7%) were confirmed by Sanger sequencing. In total, 27 of the 53 were considered potentially pathogenic based on bioinformatic analysis and analysis of controls. Of the 27, 6 were identified in BEST1, 4 in EYA1, 3 in GDF6, 2 in BMP4, 2 in CRYBA4, 2 in HCCS, and 1 in each of CRYAA, CRYGC, CRYGD, COL4A1, FOXC1, GJA8, PITX2 and SHH. The 27 variants were detected in 28 of 257 (10.9%) patients, including 11 of 125 patients with primary open-angle glaucoma and 17 of 132 patients with primary angle-closure glaucoma. Variants in these genes may be a potential risk factor for primary glaucoma. Careful clinical observation and analysis of additional patients in different populations are expected to extend these findings.

  20. A hydrogen energy carrier. Volume 2: Systems analysis

    NASA Technical Reports Server (NTRS)

    Savage, R. L. (Editor); Blank, L. (Editor); Cady, T. (Editor); Cox, K. (Editor); Murray, R. (Editor); Williams, R. D. (Editor)

    1973-01-01

    A systems analysis of hydrogen as an energy carrier in the United States indicated that it is feasible to use hydrogen in all energy use areas, except some types of transportation. These use areas are industrial, residential and commercial, and electric power generation. Saturation concept and conservation concept forecasts of future total energy demands were made. Projected costs of producing hydrogen from coal or from nuclear heat combined with thermochemical decomposition of water are in the range $1.00 to $1.50 per million Btu of hydrogen produced. Other methods are estimated to be more costly. The use of hydrogen as a fuel will require the development of large-scale transmission and storage systems. A pipeline system similar to the existing natural gas pipeline system appears practical, if design factors are included to avoid hydrogen environment embrittlement of pipeline metals. Conclusions from the examination of the safety, legal, environmental, economic, political and societal aspects of hydrogen fuel are that a hydrogen energy carrier system would be compatible with American values and the existing energy system.

  1. Forced-Choice Analysis of Segmental Production by Chinese-Accented English Speakers

    ERIC Educational Resources Information Center

    Rogers, Catherine L.; Dalby, Jonathan

    2005-01-01

    This study describes the development of a minimal-pairs word list targeting phoneme contrasts that pose difficulty for Mandarin Chinese-speaking learners of English as a second language. The target phoneme inventory was compiled from analysis of phonetic transcriptions of about 800 mono- and polysyllabic English words with examples of all the…

  2. The ACODEA Framework: Developing Segmentation and Classification Schemes for Fully Automatic Analysis of Online Discussions

    ERIC Educational Resources Information Center

    Mu, Jin; Stegmann, Karsten; Mayfield, Elijah; Rose, Carolyn; Fischer, Frank

    2012-01-01

    Research related to online discussions frequently faces the problem of analyzing huge corpora. Natural Language Processing (NLP) technologies may allow automating this analysis. However, the state-of-the-art in machine learning and text mining approaches yields models that do not transfer well between corpora related to different topics. Also,…

  3. Head and Neck Cancers on CT: Preliminary Study of Treatment Response Assessment Based on Computerized Volume Analysis

    PubMed Central

    Hadjiiski, Lubomir; Mukherji, Suresh K.; Ibrahim, Mohannad; Sahiner, Berkman; Gujar, Sachin K.; Moyer, Jeffrey; Chan, Heang-Ping

    2013-01-01

    OBJECTIVE The objective of our study was to investigate the feasibility of computerized segmentation of lesions on head and neck CT scans and evaluate its potential for estimating changes in tumor volume in response to treatment of head and neck cancers. MATERIALS AND METHODS Twenty-six CT scans were retrospectively collected from the files of 13 patients with 35 head and neck lesions. The CT scans were obtained from an examination performed before treatment (pretreatment scan) and an examination performed after one cycle of chemotherapy (posttreatment scan). Thirteen lesions were primary site cancers and 22 were metastatic lymph nodes. An experienced radiologist (radiologist 1) marked the 35 lesions and outlined each lesion’s 2D contour on the best slice on both the pre- and posttreatment scans. Full 3D contours were also manually extracted for the 13 primary tumors. Another experienced radiologist (radiologist 2) verified and modified, if necessary, all manually drawn 2D and 3D contours. An in-house-developed computerized system performed 3D segmentation based on a level set model. RESULTS The computer-estimated change in tumor volume and percentage change in tumor volume between the pre- and posttreatment scans achieved a high correlation (intra-class correlation coefficient [ICC] = 0.98 and 0.98, respectively) with the estimates from manual segmentation for the 13 primary tumors. The average error in estimating the percentage change in tumor volume by automatic segmentation relative to the radiologists’ average error was −1.5% ± 5.4% (SD). For the 35 lesions, the ICC between the automatic and manual estimates of change in pre- to posttreatment tumor area was 0.93 and of percentage change in pre- to posttreatment tumor area was 0.85. The average error in estimating the percentage change in tumor area by automatic segmentation was −3.2% ± 15.3%. CONCLUSION Preliminary results indicate that this computerized segmentation system can reliably estimate changes in tumor volume in response to treatment.
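The agreement statistics reported above can be sketched as follows: percentage volume change per lesion, and an intraclass correlation between the automatic and manual estimates. This uses a simple one-way ICC(1,1) for two measurement methods; the study's exact ICC variant is not stated in the abstract, and the lesion values below are illustrative:

```python
def percent_volume_change(v_pre, v_post):
    # Percentage change from pre- to post-treatment volume.
    return 100.0 * (v_post - v_pre) / v_pre

def icc_oneway(x, y):
    # One-way random-effects ICC(1,1) for two measurements per lesion.
    n = len(x)
    means = [(a + b) / 2.0 for a, b in zip(x, y)]
    grand = sum(means) / n
    msb = 2.0 * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((a - m) ** 2 + (b - m) ** 2
              for a, b, m in zip(x, y, means)) / n
    return (msb - msw) / (msb + msw)

# Illustrative percent volume changes for five lesions, manual vs. automatic.
manual = [-30.0, -12.0, 5.0, -45.0, -8.0]
auto = [-28.0, -13.5, 4.0, -44.0, -9.0]
agreement = icc_oneway(manual, auto)
```

High agreement (ICC near 1) indicates the automatic estimates track the manual reference standard closely.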

  4. A novel image analysis method based on Bayesian segmentation for event-related functional MRI

    NASA Astrophysics Data System (ADS)

    Huang, Lejian; Comer, Mary L.; Talavage, Thomas M.

    2008-02-01

    This paper presents the application of the expectation-maximization/maximization of the posterior marginals (EM/MPM) algorithm to signal detection for functional MRI (fMRI). On the basis of assumptions for fMRI 3-D image data, a novel analysis method is proposed and applied to synthetic data and human brain data. Synthetic data analysis is conducted using two statistical noise models (white and autoregressive of order 1) and, for low contrast-to-noise ratio (CNR) data, reveals better sensitivity and specificity for the new method than for the traditional General Linear Model (GLM) approach. When applied to human brain data, functional activation regions are found to be consistent with those obtained using the GLM approach.

  5. Choroidal volume variations with age, axial length, and sex in healthy subjects: a three-dimensional analysis

    PubMed Central

    Barteselli, Giulio; Chhablani, Jay; El-Emam, Sharif; Wang, Haiyan; Chuang, Janne; Kozak, Igor; Cheng, Lingyun; Bartsch, Dirk-Uwe; Freeman, William R.

    2012-01-01

    Purpose To demonstrate the three-dimensional choroidal volume distribution in healthy subjects using enhanced depth imaging (EDI) spectral-domain optical coherence tomography (SD-OCT) and to evaluate its association with age, sex, and axial length. Design Retrospective case series. Participants One hundred and seventy-six eyes from 114 subjects with no retinal or choroidal disease. Methods EDI SD-OCT imaging studies for healthy patients who had undergone a 31-raster scanning protocol on a commercial SD-OCT device were reviewed. Manual segmentation of the choroid was performed by two retinal specialists. Macular choroidal volume map and three-dimensional topography were automatically created by the built-in software of the device. Mean choroidal volume was calculated for each Early Treatment Diabetic Retinopathy Study (ETDRS) subfield. Regression analyses were used to evaluate the correlation between macular choroidal volume and age, sex, and axial length. Main Outcome Measures Three-dimensional topography and ETDRS-style volume map of the choroid. Results Three-dimensional topography of the choroid and volume map was obtained in all cases. The mean choroidal volume was 0.228 ± 0.077 mm3 for the center ring and 7.374 ± 2.181 mm3 for the total ETDRS grid. The nasal quadrant showed the lowest choroidal volume, and the superior quadrant the highest. The temporal and inferior quadrants did not show different choroidal volume values. Choroidal volume in all the ETDRS rings was significantly correlated with axial length after adjustment for age (P<0.0001), with age after adjustment for axial length (P<0.0001) and with sex after adjustment for axial length (P<0.05). Choroidal volume decreases by 0.54 mm3 (7.32%) for every decade and by 0.56 mm3 (7.59%) for every mm of axial length. Males have a 7.37% greater choroidal volume than females. Conclusions EDI SD-OCT is a non-invasive and well-tolerated procedure with an excellent ability to visualize the choroid in three dimensions.
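The per-decade, per-millimeter, and sex effects reported above can be combined into a simple predictive sketch. The reference age and axial length below are assumptions for illustration only; 7.374 mm3 is the reported mean total ETDRS-grid volume:

```python
def predicted_choroidal_volume(age_years, axial_length_mm, male,
                               base_volume=7.374, base_age=50.0, base_al=24.0):
    # Apply the reported rates around assumed reference values. base_age and
    # base_al are hypothetical anchors, not values from the study.
    v = base_volume
    v -= 0.54 * (age_years - base_age) / 10.0   # -0.54 mm3 per decade of age
    v -= 0.56 * (axial_length_mm - base_al)     # -0.56 mm3 per mm axial length
    if male:
        v *= 1.0737                             # +7.37% choroidal volume in males
    return v

# A 60-year-old female with a 24 mm eye, one decade older than the anchor:
vol = predicted_choroidal_volume(60.0, 24.0, male=False)
```

This is a linearization of the reported regression trends, useful only near the anchor values.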

  6. Structural Analysis and Testing of an Erectable Truss for Precision Segmented Reflector Application

    NASA Technical Reports Server (NTRS)

    Collins, Timothy J.; Fichter, W. B.; Adams, Richard R.; Javeed, Mehzad

    1995-01-01

    This paper describes analysis and test results obtained at Langley Research Center (LaRC) on a doubly curved testbed support truss for precision reflector applications. Descriptions of test procedures and experimental results that expand upon previous investigations are presented. A brief description of the truss is given, and finite-element-analysis models are described. Static-load and vibration test procedures are discussed, and experimental results are shown to be repeatable and in generally good agreement with linear finite-element predictions. Truss structural performance (as determined by static deflection and vibration testing) is shown to be predictable and very close to linear. Vibration test results presented herein confirm that an anomalous mode observed during initial testing was due to the flexibility of the truss support system. Photogrammetric surveys with two 131-in. reference scales show that the root-mean-square (rms) truss-surface accuracy is about 0.0025 in. Photogrammetric measurements also indicate that the truss coefficient of thermal expansion (CTE) is in good agreement with that predicted by analysis. A detailed description of the photogrammetric procedures is included as an appendix.

  7. Improving image segmentation performance and quantitative analysis via a computer-aided grading methodology for optical coherence tomography retinal image analysis

    NASA Astrophysics Data System (ADS)

    Cabrera Debuc, Delia; Salinas, Harry M.; Ranganathan, Sudarshan; Tátrai, Erika; Gao, Wei; Shen, Meixiao; Wang, Jianhua; Somfai, Gábor M.; Puliafito, Carmen A.

    2010-07-01

    We demonstrate quantitative analysis and error correction of optical coherence tomography (OCT) retinal images by using a custom-built, computer-aided grading methodology. A total of 60 Stratus OCT (Carl Zeiss Meditec, Dublin, California) B-scans collected from ten normal healthy eyes are analyzed by two independent graders. The average retinal thickness per macular region is compared with the automated Stratus OCT results. Intergrader and intragrader reproducibility is calculated by Bland-Altman plots of the mean difference between both gradings and by Pearson correlation coefficients. In addition, the correlation between Stratus OCT and our methodology-derived thickness is also presented. The mean thickness difference between Stratus OCT and our methodology is 6.53 μm and 26.71 μm when using the inner segment/outer segment (IS/OS) junction and outer segment/retinal pigment epithelium (OS/RPE) junction as the outer retinal border, respectively. Overall, the median of the thickness differences as a percentage of the mean thickness is less than 1% and 2% for the intragrader and intergrader reproducibility test, respectively. The measurement accuracy range of the OCT retinal image analysis (OCTRIMA) algorithm is between 0.27 and 1.47 μm and 0.6 and 1.76 μm for the intragrader and intergrader reproducibility tests, respectively. Pearson correlation coefficients demonstrate R2>0.98 for all Early Treatment Diabetic Retinopathy Study (ETDRS) regions. Our methodology facilitates a more robust and localized quantification of the retinal structure in normal healthy controls and patients with clinically significant intraretinal features.
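The intergrader reproducibility analysis above rests on Bland-Altman statistics: the bias (mean of paired differences) and the 95% limits of agreement. A minimal sketch with illustrative retinal thickness gradings (values are hypothetical, not from the study):

```python
from statistics import mean, stdev

def bland_altman(a, b):
    # Bias and 95% limits of agreement for paired measurements a and b.
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    half_width = 1.96 * stdev(diffs)   # 1.96 SD on either side of the bias
    return bias, bias - half_width, bias + half_width

# Illustrative macular thickness gradings (micrometers) from two graders.
grader1 = [250.0, 261.0, 255.0, 248.0, 259.0]
grader2 = [251.5, 260.0, 254.0, 249.0, 258.0]
bias, loa_low, loa_high = bland_altman(grader1, grader2)
```

A bias near zero with narrow limits of agreement indicates the two graders are interchangeable for practical purposes.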

  8. A combined segmented anode gas ionization chamber and time-of-flight detector for heavy ion elastic recoil detection analysis

    NASA Astrophysics Data System (ADS)

    Ström, Petter; Petersson, Per; Rubel, Marek; Possnert, Göran

    2016-10-01

    A dedicated detector system for heavy ion elastic recoil detection analysis at the Tandem Laboratory of Uppsala University is presented. Benefits of combining a time-of-flight measurement with a segmented anode gas ionization chamber are demonstrated. The capability of ion species identification is improved with the present system, compared to that obtained when using a single solid state silicon detector for the full ion energy signal. The system enables separation of light elements, up to neon, based on atomic number while signals from heavy elements such as molybdenum and tungsten are separated based on mass, to a sample depth on the order of 1 μm. The performance of the system is discussed and a selection of material analysis applications is given. Plasma-facing materials from fusion experiments, in particular metal mirrors, are used as a main example for the discussion. Marker experiments using nitrogen-15 or oxygen-18 are specific cases for which the described improved species separation and sensitivity are required. Resilience to radiation damage and significantly improved energy resolution for heavy elements at low energies are additional benefits of the gas ionization chamber over a solid state detector based system.
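Mass separation in a combined ToF-E system rests on the non-relativistic relation m = 2E(t/L)^2: energy from the ionization chamber, velocity from the flight time over a known path. A hedged sketch with assumed units (the laboratory's actual calibration constants and geometry will differ):

```python
def recoil_mass_u(energy_mev, tof_ns, flight_path_m):
    # Non-relativistic mass from energy and time-of-flight: m = 2*E*(t/L)^2.
    # Assumed units: MeV, ns, m; result in atomic mass units (u).
    c_m_per_ns = 0.299792458          # speed of light in m/ns
    beta = (flight_path_m / tof_ns) / c_m_per_ns
    return 2.0 * energy_mev / (beta ** 2 * 931.494)  # 1 u = 931.494 MeV/c^2

# Example: a 2 MeV recoil covering a 1.0 m flight path in 100 ns (v = 0.01 m/ns).
mass = recoil_mass_u(2.0, 100.0, 1.0)
```

Plotting energy against flight time places each mass on its own hyperbola, which is what allows heavy species like molybdenum and tungsten to be separated event by event.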

  9. Segmental neuromyotonia

    PubMed Central

    Panwar, Ajay; Junewar, Vivek; Sahu, Ritesh; Shukla, Rakesh

    2015-01-01

    Unilateral focal neuromyotonia has been rarely reported in fingers or extraocular muscles. We report a case of segmental neuromyotonia in a 20-year-old boy who presented to us with intermittent tightness in right upper limb. Electromyography revealed myokymic and neuromyotonic discharges in proximal as well as distal muscles of the right upper limb. Patient's symptoms responded well to phenytoin therapy. Such an atypical involvement of two contiguous areas of a single limb in neuromyotonia has not been reported previously. Awareness of such an atypical presentation of the disease can be important in timely diagnosis and treatment of a patient. PMID:26167035

  10. On 3-D inelastic analysis methods for hot section components. Volume 1: Special finite element models

    NASA Technical Reports Server (NTRS)

    Nakazawa, S.

    1987-01-01

    This Annual Status Report presents the results of work performed during the third year of the 3-D Inelastic Analysis Methods for Hot Section Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of new computer codes that permit more accurate and efficient three-dimensional analysis of selected hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The computer codes embody a progression of mathematical models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components. This report is presented in two volumes. Volume 1 describes effort performed under Task 4B, Special Finite Element Special Function Models, while Volume 2 concentrates on Task 4C, Advanced Special Functions Models.

  11. Interobserver variability effects on computerized volume analysis of treatment response of head and neck lesions in CT

    NASA Astrophysics Data System (ADS)

    Hadjiiski, Lubomir; Chan, Heang-Ping; Ibrahim, Mohannad; Sahiner, Berkman; Gujar, Sachin; Mukherji, Suresh K.

    2010-03-01

    A computerized system for segmenting lesions in head and neck CT scans was developed to assist radiologists in estimation of the response to treatment of malignant lesions. The system performs 3D segmentation based on a level set model and uses as input an approximate bounding box for the lesion of interest. We investigated the effect of the interobserver variability of radiologists' marking of the bounding box on the automatic segmentation performance. In this preliminary study, CT scans from a pre-treatment exam and a post one-cycle chemotherapy exam of 34 patients with primary site head and neck neoplasms were used. For each tumor, an experienced radiologist marked the lesion with a bounding box and provided a reference standard by outlining the full 3D contour on both the pre- and post treatment scans. A second radiologist independently marked each tumor again with another bounding box. The correlation between the automatic and manual estimates for both the pre-to-post-treatment volume change and the percent volume change was r=0.95. Based on the bounding boxes by the second radiologist, the correlation between the automatic and manual estimate for the pre-to-post-treatment volume change was r=0.89 and for the percent volume change was r=0.91. The correlation for the automatic estimates obtained from the bounding boxes by the two radiologists was as follows: (1) pretreatment volume r=0.92, (2) post-treatment volume r=0.88, (3) pre-to-post-treatment change r=0.89 and (4) percent pre-to-post-treatment change r=0.90. The difference between the automatic estimates based on the two sets of bounding boxes did not achieve statistical significance for any of the estimates (p>0.29). The preliminary results indicate that the automated segmentation system can reliably estimate tumor size change in response to treatment relative to radiologist's hand segmentation as reference standard, and that the performance was robust against inter-observer variability in marking the bounding boxes.

  12. Subcortical brain segmentation of two dimensional T1-weighted data sets with FMRIB's Integrated Registration and Segmentation Tool (FIRST).

    PubMed

    Amann, Michael; Andělová, Michaela; Pfister, Armanda; Mueller-Lenke, Nicole; Traud, Stefan; Reinhardt, Julia; Magon, Stefano; Bendfeldt, Kerstin; Kappos, Ludwig; Radue, Ernst-Wilhelm; Stippich, Christoph; Sprenger, Till

    2015-01-01

    Brain atrophy has been identified as an important contributing factor to the development of disability in multiple sclerosis (MS). In this respect, more and more interest is focussing on the role of deep grey matter (DGM) areas. Novel data analysis pipelines are available for the automatic segmentation of DGM using three-dimensional (3D) MRI data. However, in clinical trials, often no such high-resolution data are acquired and hence no conclusions regarding the impact of new treatments on DGM atrophy were possible so far. In this work, we used FMRIB's Integrated Registration and Segmentation Tool (FIRST) to evaluate the possibility of segmenting DGM structures using standard two-dimensional (2D) T1-weighted MRI. In a cohort of 70 MS patients, both 2D and 3D T1-weighted data were acquired. The thalamus, putamen, pallidum, nucleus accumbens, and caudate nucleus were bilaterally segmented using FIRST. Volumes were calculated for each structure and for the sum of basal ganglia (BG) as well as for the total DGM. The accuracy and reliability of the 2D data segmentation were compared with the respective results of 3D segmentations using volume difference, volume overlap and intra-class correlation coefficients (ICCs). The mean differences for the individual substructures were between 1.3% (putamen) and -25.2% (nucleus accumbens). The respective values for the BG were -2.7% and for DGM 1.3%. Mean volume overlap was between 89.1% (thalamus) and 61.5% (nucleus accumbens); BG: 84.1%; DGM: 86.3%. Regarding ICC, all structures showed good agreement with the exception of the nucleus accumbens. The results of the segmentation were additionally validated through expert manual delineation of the caudate nucleus and putamen in a subset of the 3D data. In conclusion, we demonstrate that subcortical segmentation of 2D data are feasible using FIRST. The larger subcortical GM structures can be segmented with high consistency. 
This forms the basis for the application of FIRST to large 2D data sets such as those acquired in clinical trials.
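The 2D-versus-3D comparison metrics above (percent volume difference, volume overlap) can be sketched directly. The abstract does not define its overlap metric, so a Dice-style overlap is assumed here, and the voxel sets are toy data:

```python
def percent_volume_difference(v_test, v_ref):
    # Signed volume difference of the 2D-based estimate relative to the
    # 3D-based reference, in percent.
    return 100.0 * (v_test - v_ref) / v_ref

def volume_overlap(voxels_a, voxels_b):
    # Dice-style overlap (%) between two sets of voxel indices.
    # Assumption: the study's overlap definition may differ from Dice.
    shared = len(voxels_a & voxels_b)
    return 100.0 * 2.0 * shared / (len(voxels_a) + len(voxels_b))

seg_2d = set(range(100))       # toy 2D-based segmentation: voxels 0..99
seg_3d = set(range(10, 110))   # toy 3D-based reference: voxels 10..109
overlap = volume_overlap(seg_2d, seg_3d)
```

The reported per-structure figures (e.g. 89.1% overlap for the thalamus, 61.5% for the nucleus accumbens) are exactly this kind of statistic aggregated over subjects.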

  13. Design and experimental gait analysis of a multi-segment in-pipe robot inspired by earthworm's peristaltic locomotion

    NASA Astrophysics Data System (ADS)

    Fang, Hongbin; Wang, Chenghao; Li, Suyi; Xu, Jian; Wang, K. W.

    2014-03-01

    This paper reports the experimental progress towards developing a multi-segment in-pipe robot inspired by earthworm's body structure and locomotion mechanism. To mimic the alternating contraction and elongation of a single earthworm's segment, a robust, servomotor based actuation mechanism is developed. In each robot segment, servomotor-driven cords and spring steel belts are utilized to imitate the earthworm's longitudinal and circular muscles, respectively. It is shown that the designed segment can contract and relax just like an earthworm's body segment. The axial and radial deformation of a single segment is measured experimentally, which agrees with the theoretical predictions. Then a multi-segment earthworm-like robot is fabricated by assembling eight identical segments in series. The locomotion performance of this robot prototype is then extensively tested in order to investigate the correlation between gait design and dynamic locomotion characteristics. Based on the principle of retrograde peristalsis wave, a gait generator is developed for the multi-segment earthworm-like robot, following which gaits of the robot can be constructed. Employing the generated gaits, the 8-segment earthworm-like robot can successfully perform both horizontal locomotion and vertical climb in pipes. By changing gait parameters, i.e., with different gaits, locomotion characteristics including average speed and anchor slippage can be significantly tailored. The proposed actuation method and prototype of the multi-segment in-pipe robot as well as the gait generator provide a bionic realization of earthworm's locomotion with promising potentials in various applications such as pipeline inspection and cleaning.
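A retrograde peristalsis gait can be sketched as a wave of contracted segments stepping along the body, one shift per step. This is an illustrative toy generator, not the paper's actual gait generator, and the parameter names are assumptions:

```python
def peristalsis_gait(n_segments, n_contracted=2):
    # One full cycle of a peristaltic gait: at each step the block of
    # contracted segments shifts by one position, so the contraction wave
    # travels along the body (illustrative; real gaits also tune timing).
    gait = []
    for t in range(n_segments):
        gait.append({(t + k) % n_segments for k in range(n_contracted)})
    return gait

# Gait cycle for the 8-segment prototype with 2 segments contracted at a time.
gait = peristalsis_gait(8, n_contracted=2)
```

Changing `n_contracted` (how many segments anchor at once) is one of the gait parameters that trades average speed against anchor slippage.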

  14. Vibration damping for the Segmented Mirror Telescope

    NASA Astrophysics Data System (ADS)

    Maly, Joseph R.; Yingling, Adam J.; Griffin, Steven F.; Agrawal, Brij N.; Cobb, Richard G.; Chambers, Trevor S.

    2012-09-01

    The Segmented Mirror Telescope (SMT) at the Naval Postgraduate School (NPS) in Monterey is a next-generation deployable telescope, featuring a 3-meter 6-segment primary mirror and advanced wavefront sensing and correction capabilities. In its stowed configuration, the SMT primary mirror segments collapse into a small volume; once on location, these segments open to the full 3-meter diameter. The segments must be very accurately aligned after deployment and the segment surfaces are actively controlled using numerous small, embedded actuators. The SMT employs a passive damping system to complement the actuators and mitigate the effects of low-frequency (<40 Hz) vibration modes of the primary mirror segments. Each of the six segments has three or more modes in this bandwidth, and resonant vibration excited by acoustics or small disturbances on the structure can result in phase mismatches between adjacent segments thereby degrading image quality. The damping system consists of two tuned mass dampers (TMDs) for each of the mirror segments. An adjustable TMD with passive magnetic damping was selected to minimize sensitivity to changes in temperature; both frequency and damping characteristics can be tuned for optimal vibration mitigation. Modal testing was performed with a laser vibrometry system to characterize the SMT segments with and without the TMDs. Objectives of this test were to determine operating deflection shapes of the mirror and to quantify segment edge displacements; relative alignment of λ/4 or better was desired. The TMDs attenuated the vibration amplitudes by 80% and reduced adjacent segment phase mismatches to acceptable levels.
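The abstract does not give the TMD tuning rules used on the SMT; a common classical starting point for tuning frequency and damping is Den Hartog's optimum for an undamped primary mode, sketched here:

```python
from math import sqrt

def tmd_optimal(mass_ratio):
    # Den Hartog's classical optimal tuning for a TMD attached to an
    # undamped primary mode: frequency ratio f = 1/(1 + mu) and damper
    # damping ratio zeta = sqrt(3*mu / (8*(1 + mu)**3)), where mu is the
    # damper-to-structure modal mass ratio.
    mu = mass_ratio
    freq_ratio = 1.0 / (1.0 + mu)
    damping_ratio = sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return freq_ratio, damping_ratio

# Example: a damper whose mass is 5% of the mirror segment's modal mass.
f_ratio, zeta = tmd_optimal(0.05)
```

In practice, as the abstract notes, both the frequency and the magnetic damping of the SMT dampers are adjustable, so this optimum would only be a first guess to be refined by modal testing.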

  15. NeuroBlocks--Visual Tracking of Segmentation and Proofreading for Large Connectomics Projects.

    PubMed

    Ai-Awami, Ali K; Beyer, Johanna; Haehn, Daniel; Kasthuri, Narayanan; Lichtman, Jeff W; Pfister, Hanspeter; Hadwiger, Markus

    2016-01-01

    In the field of connectomics, neuroscientists acquire electron microscopy volumes at nanometer resolution in order to reconstruct a detailed wiring diagram of the neurons in the brain. The resulting image volumes, which often are hundreds of terabytes in size, need to be segmented to identify cell boundaries, synapses, and important cell organelles. However, the segmentation process of a single volume is very complex, time-intensive, and usually performed using a diverse set of tools and many users. To tackle the associated challenges, this paper presents NeuroBlocks, which is a novel visualization system for tracking the state, progress, and evolution of very large volumetric segmentation data in neuroscience. NeuroBlocks is a multi-user web-based application that seamlessly integrates the diverse set of tools that neuroscientists currently use for manual and semi-automatic segmentation, proofreading, visualization, and analysis. NeuroBlocks is the first system that integrates this heterogeneous tool set, providing crucial support for the management, provenance, accountability, and auditing of large-scale segmentations. We describe the design of NeuroBlocks, starting with an analysis of the domain-specific tasks, their inherent challenges, and our subsequent task abstraction and visual representation. We demonstrate the utility of our design based on two case studies that focus on different user roles and their respective requirements for performing and tracking the progress of segmentation and proofreading in a large real-world connectomics project.

  16. SharpViSu: integrated analysis and segmentation of super-resolution microscopy data

    PubMed Central

    Andronov, Leonid; Lutz, Yves; Vonesch, Jean-Luc; Klaholz, Bruno P.

    2016-01-01

    Summary: We introduce SharpViSu, an interactive open-source software package with a graphical user interface, which allows processing steps for localization data to be performed in an integrated manner. This includes common features and new tools such as correction of chromatic aberrations, drift correction based on iterative cross-correlation calculations, selection of localization events, reconstruction of 2D and 3D datasets in different representations, estimation of resolution by Fourier ring correlation, and clustering analysis based on Voronoi diagrams and Ripley’s functions. SharpViSu is optimized to work with eventlist tables exported from most popular localization software. We demonstrate these capabilities on single- and double-labelled super-resolution data. Availability and implementation: SharpViSu is available as open source code and as a compiled stand-alone application at https://github.com/andronovl/SharpViSu. Contact: klaholz@igbmc.fr Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153691
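Fourier ring correlation, one of the tools listed, estimates resolution by correlating two independent reconstructions (e.g. built from odd and even localization events) ring by ring in frequency space. A minimal NumPy sketch of the idea, not SharpViSu's actual implementation:

```python
import numpy as np

def fourier_ring_correlation(img1, img2, n_rings=16):
    """Correlate two independent reconstructions ring by ring in Fourier space.

    Returns one correlation value per spatial-frequency ring; resolution is
    conventionally read off where the curve drops below a fixed threshold.
    """
    f1 = np.fft.fftshift(np.fft.fft2(img1))
    f2 = np.fft.fftshift(np.fft.fft2(img2))
    ny, nx = img1.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(y - ny // 2, x - nx // 2)  # radius from zero frequency
    edges = np.linspace(0.0, min(ny, nx) // 2, n_rings + 1)
    frc = np.empty(n_rings)
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        ring = (r >= lo) & (r < hi)
        num = np.real(np.sum(f1[ring] * np.conj(f2[ring])))
        den = np.sqrt(np.sum(np.abs(f1[ring]) ** 2) *
                      np.sum(np.abs(f2[ring]) ** 2))
        frc[i] = num / den if den > 0 else 0.0
    return frc

# Sanity check: an image correlated with itself gives FRC = 1 in every ring
rng = np.random.default_rng(1)
img = rng.normal(size=(32, 32))
frc_self = fourier_ring_correlation(img, img)
```

In practice the two inputs are statistically independent half-datasets, so the curve decays with spatial frequency and the crossing of a threshold (e.g. 1/7) gives the resolution estimate.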

  17. California State Library: Processing Center Design and Specifications, Vol. V: Cost Analysis. Supplemental Volume

    ERIC Educational Resources Information Center

    Hargrove, Thomas L; Stirling, Keith H.

    Presenting this cost analysis as a supplemental volume, separate from the main report, allows the chief activities in implementing the Processing Center Design to be correlated with costs as of a particular date and according to varying rates of production. In considering the total budget, three main areas are distinguished: (1) Systems…

  18. SOLVENT-BASED TO WATERBASED ADHESIVE-COATED SUBSTRATE RETROFIT - VOLUME I: COMPARATIVE ANALYSIS

    EPA Science Inventory

    This volume represents the analysis of case study facilities' experience with waterbased adhesive use and retrofit requirements. (NOTE: The coated and laminated substrate manufacturing industry was selected as part of NRMRL'S support of the 33/50 Program because of its significan...

  19. Geotechnical Analysis Report for July 2004 - June 2005, Volume 2, Supporting Data

    SciTech Connect

    Washington TRU Solutions LLC

    2006-03-20

    This report is a compilation of geotechnical data presented as plots for each active instrument installed in the underground at the Waste Isolation Pilot Plant (WIPP) through June 30, 2005. A summary of the geotechnical analyses that were performed using the enclosed data is provided in Volume 1 of the Geotechnical Analysis Report (GAR).

  20. Waste Isolation Pilot Plant Geotechnical Analysis Report for July 2005 - June 2006, Volume 2, Supporting Data

    SciTech Connect

    Washington TRU Solutions LLC

    2007-03-25

    This report is a compilation of geotechnical data presented as plots for each active instrument installed in the underground at the Waste Isolation Pilot Plant (WIPP) through June 30, 2006. A summary of the geotechnical analyses that were performed using the enclosed data is provided in Volume 1 of the Geotechnical Analysis Report (GAR).

  1. Content Analysis of the "Journal of Counseling & Development": Volumes 74 to 84

    ERIC Educational Resources Information Center

    Blancher, Adam T.; Buboltz, Walter C.; Soper, Barlow

    2010-01-01

    A content analysis of the research published in the "Journal of Counseling & Development" ("JCD") was conducted for Volumes 74 (1996) through 84 (2006). Frequency distributions were used to identify the most published authors and their institutional affiliations, as well as some basic characteristics (type of sample, gender, and ethnicity) of the…

  2. Oil-spill risk analysis: Cook inlet outer continental shelf lease sale 149. Volume 1. The analysis. Final report

    SciTech Connect

    Johnson, W.R.; Marshall, C.F.; Anderson, C.M.; Lear, E.M.

    1994-08-01

    This report summarizes results of an oil-spill risk analysis (OSRA) conducted for the proposed lower Cook Inlet Outer Continental Shelf (OCS) Lease Sale 149. The objective of this analysis was to estimate relative oil-spill risks associated with oil and gas production from the leasing alternatives proposed for the lease sale. The Minerals Management Service (MMS) will consider the analysis in the environmental impact statement (EIS) prepared for the lease sale. The analysis for proposed OCS Lease Sale 149 was conducted in three parts corresponding to different aspects of the overall problem. The first part dealt with the probability of oil-spill occurrence. The second dealt with trajectories of oil spills from potential spill sites to various environmental resources or land segments. The third part combined the results of the first two parts to give estimates of the overall oil-spill risk if there is oil production as a result of the lease sale. To aid the analysis, contour maps of seasonal conditional probabilities of spill contact were generated for each environmental resource or land segment in the study area (see vol. 2).
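The third part, combining occurrence and trajectory results, is commonly modeled by treating spill occurrence as a Poisson process and thinning it by the conditional contact probability. A minimal sketch under that assumption; the numbers below are hypothetical and not taken from the report:

```python
import math

def prob_one_or_more_contacts(expected_spills: float, p_contact: float) -> float:
    """Overall risk that at least one spill occurs AND reaches a resource.

    expected_spills: expected number of spills over the production period
        (Poisson rate, typically proportional to the volume produced)
    p_contact: conditional probability that a spill's trajectory contacts
        the resource or land segment, from the trajectory simulations

    Thinning a Poisson process by p_contact yields another Poisson process
    with rate expected_spills * p_contact, so the probability of one or
    more contacts is 1 - exp(-rate).
    """
    return 1.0 - math.exp(-expected_spills * p_contact)

# Hypothetical: 0.4 expected spills over field life, 15% conditional
# probability that a spill contacts a given land segment
risk = prob_one_or_more_contacts(0.4, 0.15)
```

Repeating this combination for every resource or land segment, and for each leasing alternative, yields the relative risk estimates the report describes.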

  3. Comparison of gray matter volume and thickness for analysis of cortical changes in Alzheimer's disease

    NASA Astrophysics Data System (ADS)

    Liu, Jiachao; Li, Ziyi; Chen, Kewei; Yao, Li; Wang, Zhiqun; Li, Kuncheng; Guo, Xiaojuan

    2011-03-01

    Gray matter volume and cortical thickness are two indices of concern in brain structure magnetic resonance imaging research. Gray matter volume reflects mixed-measurement information of the cerebral cortex, while cortical thickness reflects only the distance between the inner and outer surfaces of the cerebral cortex. Using Scaled Subprofile Modeling based on Principal Component Analysis (SSM_PCA) and Pearson's correlation analysis, this study provided quantitative comparisons and depicted both global and local relevance to comprehensively investigate morphometrical abnormalities of the cerebral cortex in Alzheimer's disease (AD). Thirteen patients with AD and thirteen age- and gender-matched healthy controls were included in this study. Results showed that factor scores from the first 8 principal components accounted for ~53.38% of the total variance for gray matter volume, and ~50.18% for cortical thickness. Factor scores from the fifth principal component showed a significant correlation between the two measures. In addition, gray matter voxel-based volume was closely related to cortical thickness alterations across most of the cerebral cortex, especially in typically abnormal regions such as the insula and the parahippocampal gyrus in AD. These findings suggest that these two measurements are effective indices for understanding the neuropathology of AD. Studies using both gray matter volume and cortical thickness can separate the causes of the discrepancy, provide complementary information, and give a comprehensive description of the morphological changes of brain structure.
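The variance fractions reported for the first eight principal components come from a standard PCA decomposition. A minimal sketch of how such explained-variance ratios are computed, on synthetic data rather than the study's SSM_PCA pipeline:

```python
import numpy as np

def explained_variance_ratio(data: np.ndarray) -> np.ndarray:
    """Fraction of total variance captured by each principal component.

    data: subjects x features matrix (e.g. regional gray matter volumes),
    mean-centered across subjects before the SVD.
    """
    centered = data - data.mean(axis=0)
    # Singular values relate to component variances: var_i = s_i^2 / (n - 1)
    s = np.linalg.svd(centered, compute_uv=False)
    var = s ** 2
    return var / var.sum()

# Synthetic example: 26 subjects (matching the study's sample size) with
# 100 hypothetical regional measurements
rng = np.random.default_rng(0)
X = rng.normal(size=(26, 100))
ratios = explained_variance_ratio(X)
first8 = ratios[:8].sum()  # variance fraction captured by the first 8 PCs
```

The study's ~53% and ~50% figures are exactly this kind of cumulative sum, computed separately for the volume and thickness data.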

  4. Solvent transport through hard-soft segmented polymer nanocomposites.

    PubMed

    Rath, Sangram K; Edatholath, Saji S; Patro, T Umasankar; Sudarshan, Kathi; Sastry, P U; Pujari, Pradeep K; Harikrishnan, G

    2016-01-28

    We conducted transport studies of a common solvent (toluene) in its condensed state through a model hard-soft segmented polyurethane-clay nanocomposite. The solvent diffusivity is observed to be non-monotonic as a function of filler volume fraction. In stark contrast, both classical tortuous-path theory based geometric calculations and free volume measurements suggest the normally expected monotonic decrease in diffusivity with increasing clay volume fraction. Large deviations between experimentally observed diffusivity coefficients and those theoretically estimated from geometric theory are also observed. However, the equilibrium swelling of the nanocomposite, as indicated by the solubility coefficient, did not change. To gain insight into the solvent interaction behavior, we conducted a pre- and post-swollen segmented-phase analysis of pure polymers and nanocomposites. We find that in a nanocomposite, the solvent has to interact with a filler-altered hard-soft segmented morphology. In the altered phase-separated morphology, the spatial distribution of thermodynamically segmented hard blocks in the continuous soft matrix becomes a strong function of filler concentration. Upon solvent interaction, this spatial distribution gets reoriented due to sorption and de-clustering. The results indicate strong non-barrier influences of nanoscale fillers dispersed in phase-segmented block co-polymers, affecting solvent diffusivity through them. Based on pre- and post-swollen morphological observations, we postulate a possible mechanism for the non-monotonic behavior of solvent transport in hard-soft segmented co-polymers, in which the thermodynamic phase separation is influenced by the filler. PMID:26726752
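The monotonic tortuous-path prediction the authors contrast with their measurements can be sketched with the Nielsen model for platelet fillers (an assumption; the abstract does not name which geometric theory was used, and the aspect ratio below is hypothetical):

```python
def nielsen_relative_diffusivity(phi: float, aspect_ratio: float) -> float:
    """Nielsen tortuous-path prediction for platelet-filled polymers:

        D / D0 = (1 - phi) / (1 + (alpha / 2) * phi)

    where phi is the filler volume fraction and alpha the platelet aspect
    ratio. Aligned platelets lengthen the diffusion path, so the predicted
    diffusivity falls monotonically with loading.
    """
    return (1.0 - phi) / (1.0 + 0.5 * aspect_ratio * phi)

# Monotonic decrease with clay loading (hypothetical aspect ratio of 100)
loadings = (0.0, 0.01, 0.03, 0.05)
ratios = [nielsen_relative_diffusivity(phi, 100.0) for phi in loadings]
```

The experimentally observed non-monotonic diffusivity cannot be reproduced by any choice of parameters in such a purely geometric model, which is the contrast the abstract draws.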

  5. Asymmetric bias in user guided segmentations of brain structures.

    PubMed

    Maltbie, Eric; Bhatt, Kshamta; Paniagua, Beatriz; Smith, Rachel G; Graves, Michael M; Mosconi, Matthew W; Peterson, Sarah; White, Scott; Blocher, Joseph; El-Sayed, Mo