Science.gov

Sample records for volume segmentation analysis

  1. Automated segmentation and dose-volume analysis with DICOMautomaton

    NASA Astrophysics Data System (ADS)

    Clark, H.; Thomas, S.; Moiseenko, V.; Lee, R.; Gill, B.; Duzenli, C.; Wu, J.

    2014-03-01

    Purpose: Exploration of historical data for regional organ dose sensitivity is limited by the effort needed to (sub-)segment large numbers of contours. A system has been developed which can rapidly perform autonomous contour sub-segmentation and generic dose-volume computations, substantially reducing the effort required for exploratory analyses. Methods: A contour-centric approach is taken which enables lossless, reversible segmentation and dramatically reduces computation time compared with voxel-centric approaches. Segmentation can be specified on a per-contour, per-organ, or per-patient basis, and can be performed along either an embedded plane or in terms of the contour's bounds (e.g., split an organ into fractional-volume/dose pieces along any 3D unit vector). More complex segmentation techniques are available. Anonymized data from 60 head-and-neck cancer patients were used to compare dose-volume computations with Varian's Eclipse™ (Varian Medical Systems, Inc.). Results: Computed mean doses and dose-volume histograms agree strongly with those from Eclipse™. Contours which have been segmented can be injected back into patient data permanently and in a Digital Imaging and Communications in Medicine (DICOM)-conforming manner. Lossless segmentation persists across such injection, and remains fully reversible. Conclusions: DICOMautomaton allows researchers to rapidly, accurately, and autonomously segment large amounts of data into intricate structures suitable for analyses of regional organ dose sensitivity.
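    The generic dose-volume computations the abstract mentions reduce, at their core, to evaluating mean dose and a cumulative dose-volume histogram over a (sub-)segment's voxels. A minimal sketch (all doses and names are hypothetical, not DICOMautomaton's actual API):

    ```python
    import numpy as np

    def cumulative_dvh(doses_gy, bin_width=0.5):
        """Cumulative dose-volume histogram: for each dose level d,
        the fraction of the structure's volume receiving >= d Gy."""
        doses = np.asarray(doses_gy, dtype=float)
        edges = np.arange(0.0, doses.max() + bin_width, bin_width)
        frac = np.array([(doses >= d).mean() for d in edges])
        return edges, frac

    # Hypothetical per-voxel doses (Gy) for one sub-segment
    doses = np.array([10.0, 12.0, 12.0, 20.0])
    mean_dose = doses.mean()                       # 13.5 Gy
    edges, frac = cumulative_dvh(doses, bin_width=10.0)
    # frac[0] == 1.0: the whole sub-segment receives at least 0 Gy
    ```

    Real implementations additionally weight each voxel (or contour slab) by its physical volume; here all voxels are assumed equal.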

  2. Morphological segmentation and partial volume analysis for volumetry of solid pulmonary lesions in thoracic CT scans.

    PubMed

    Kuhnigk, Jan-Martin; Dicken, Volker; Bornemann, Lars; Bakai, Annemarie; Wormanns, Dag; Krass, Stefan; Peitgen, Heinz-Otto

    2006-04-01

    Volumetric growth assessment of pulmonary lesions is crucial to both lung cancer screening and oncological therapy monitoring. While several methods for small pulmonary nodules have previously been presented, the segmentation of larger tumors that appear frequently in oncological patients and are more likely to be complexly interconnected with lung morphology has not yet received much attention. We present a fast, automated segmentation method that is based on morphological processing and is suitable for both small and large lesions. In addition, the proposed approach addresses clinical challenges to volume assessment such as variations in imaging protocol or inspiration state by introducing a method of segmentation-based partial volume analysis (SPVA) that follows on the segmentation procedure. Accuracy and reproducibility studies were performed to evaluate the new algorithms. In vivo interobserver and interscan studies on low-dose data from eight clinical metastasis patients revealed that clinically significant volume change can be detected reliably and with negligible computation time by the presented methods. In addition, phantom studies were conducted. Based on the segmentation performed with the proposed method, the performance of the SPVA volumetry method was compared with the conventional technique on a phantom that was scanned with different dosages and reconstructed with varying parameters. Both systematic and absolute errors were shown to be reduced substantially by the SPVA method. The method was especially successful in accounting for slice thickness and reconstruction kernel variations, where the median error was more than halved in comparison to the conventional approach. PMID:16608058

  3. Interactive Coarse Segmentation and Analysis of Volume Data with a Suite of 3D Interaction Tools (to be presented at the IEEE VR 2013 Workshop on Interactive Volume Interaction, March 2013, Orlando, FL)

    E-print Network

    Keywords: two-handed interaction, virtual reality. Abstract (fragment): … surfaces through the volume do not live up to the users' expectations. We propose to alter the way …

  4. Inter-sport variability of muscle volume distribution identified by segmental bioelectrical impedance analysis in four ball sports

    PubMed Central

    Yamada, Yosuke; Masuo, Yoshihisa; Nakamura, Eitaro; Oda, Shingo

    2013-01-01

    The aim of this study was to evaluate and quantify differences in muscle distribution in athletes of various ball sports using segmental bioelectrical impedance analysis (SBIA). Participants were 115 male collegiate athletes from four ball sports (baseball, soccer, tennis, and lacrosse). Percent body fat (%BF) and lean body mass were measured, and SBIA was used to measure segmental muscle volume (MV) in bilateral upper arms, forearms, thighs, and lower legs. We calculated the MV ratios of dominant to nondominant, proximal to distal, and upper to lower limbs. The measurements consisted of a total of 31 variables. Cluster and factor analyses were applied to identify redundant variables. The muscle distribution was significantly different among groups, but the %BF was not. The classification procedures of the discriminant analysis could correctly distinguish 84.3% of the athletes. These results suggest that collegiate ball game athletes have adapted their physique to their sport movements very well, and the SBIA, which is an affordable, noninvasive, easy-to-operate, and fast alternative method in the field, can distinguish ball game athletes according to their specific muscle distribution within a 5-minute measurement. The SBIA could be a useful, affordable, and fast tool for identifying talents for specific sports. PMID:24379714
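    The ratio variables described above are straightforward to compute once segmental volumes are available; a minimal sketch with entirely illustrative numbers (not data from the study):

    ```python
    # Segmental muscle volumes (litres) for one hypothetical athlete,
    # keyed by (side, segment); values are illustrative only.
    mv = {
        ("dominant", "upper_arm"): 1.9, ("nondominant", "upper_arm"): 1.6,
        ("dominant", "thigh"): 6.4,     ("nondominant", "thigh"): 6.2,
    }

    # Dominant-to-nondominant ratio for one segment
    dom_nondom_arm = mv[("dominant", "upper_arm")] / mv[("nondominant", "upper_arm")]

    # Upper-to-lower limb ratio, pooling both sides
    upper_lower = (
        mv[("dominant", "upper_arm")] + mv[("nondominant", "upper_arm")]
    ) / (
        mv[("dominant", "thigh")] + mv[("nondominant", "thigh")]
    )
    ```

    The study's 31 variables are of this form; the subsequent cluster, factor, and discriminant analyses operate on vectors of such ratios.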

  5. Segmentation strategies for polymerized volume data sets 

    E-print Network

    Doddapaneni, Venkata Purna

    2006-04-12

    A new technique, called the polymerization algorithm, is described for the hierarchical segmentation of polymerized volume data sets (PVDS) using the L-block data structure. The L-block data structure is defined as a 3-dimensional isorectangular block...

  6. NSEG: A segmented mission analysis program for low and high speed aircraft. Volume 2: Program users manual

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Rozendaal, H. L.

    1977-01-01

    A rapid mission analysis code based on the use of approximate flight path equations of motion is described. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics are specified in tabular form. In addition to its mission performance calculation capabilities, the code also contains extensive flight envelope performance mapping capabilities. Approximate takeoff and landing analyses can be performed. At high speeds, centrifugal lift effects are taken into account. Extensive turbojet and ramjet engine scaling procedures are incorporated in the code.
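    The segment-by-segment structure described above can be sketched as simple per-segment bookkeeping: each segment type applies its own approximate equations and reports increments that the mission sums. A much-simplified illustration with invented numbers and equations, not NSEG's actual formulation:

    ```python
    # Each segment type has its own approximate equations; here every
    # segment just returns a (time_hr, fuel_lb) increment.

    def climb(delta_alt_ft, rate_fpm, fuel_flow_lb_hr):
        t = delta_alt_ft / rate_fpm / 60.0   # hours
        return t, t * fuel_flow_lb_hr

    def cruise(dist_nm, speed_kt, fuel_flow_lb_hr):
        t = dist_nm / speed_kt               # hours
        return t, t * fuel_flow_lb_hr

    # A two-segment mission (hypothetical vehicle characteristics)
    mission = [climb(30_000, 2_000, 9_000), cruise(1_000, 480, 6_000)]
    total_time = sum(t for t, _ in mission)
    total_fuel = sum(f for _, f in mission)
    print(round(total_time, 3), round(total_fuel, 1))
    ```

    NSEG's real segment equations also track weight loss, altitude, and speed between segments; the point here is only the segmented accumulation pattern.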

  7. Carving: Scalable Interactive Segmentation of Neural Volume Electron Microscopy Images

    E-print Network

    Hamprecht, Fred A.

    … We propose a supervoxel-based energy function with a novel background prior … Keywords: electron microscopy, seeded segmentation, interactive segmentation, graph cut

  8. Tooth segmentation system with intelligent editing for cephalometric analysis

    NASA Astrophysics Data System (ADS)

    Chen, Shoupu

    2015-03-01

    Cephalometric analysis is the study of the dental and skeletal relationship in the head, and it is used as an assessment and planning tool for improved orthodontic treatment of a patient. Conventional cephalometric analysis identifies bony and soft-tissue landmarks in 2D cephalometric radiographs, in order to diagnose facial features and abnormalities prior to treatment, or to evaluate the progress of treatment. Recent studies in orthodontics indicate that there are persistent inaccuracies and inconsistencies in the results provided using conventional 2D cephalometric analysis. Plane geometry is inappropriate for analyzing anatomical volumes and their growth; only a 3D analysis is able to analyze the three-dimensional, anatomical maxillofacial complex, which requires computing inertia systems for individual or groups of digitally segmented teeth from an image volume of a patient's head. For the study of 3D cephalometric analysis, the current paper proposes a system for semi-automatically segmenting teeth from a cone beam computed tomography (CBCT) volume with two distinct features: an intelligent user-input interface for automatic background seed generation, and a graphics processing unit (GPU) acceleration mechanism for three-dimensional GrowCut volume segmentation. Results show a satisfactory average Dice score of 0.92, with the use of the proposed tooth segmentation system, by 15 novice users who segmented a randomly sampled tooth set. The average GrowCut processing time is around one second per tooth, excluding user interaction time.
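    GrowCut itself is a cellular automaton: seeded voxels repeatedly "attack" their neighbours with a force that decays with intensity difference, and a voxel adopts the attacker's label when the force exceeds its own strength. A toy 1D sketch, not the paper's GPU implementation; the decay function g is one common choice and the data are invented:

    ```python
    import numpy as np

    def growcut_1d(intensity, labels, strengths, iters=10):
        """Tiny 1D GrowCut-style cellular automaton (illustrative only)."""
        intensity = np.asarray(intensity, float)
        labels = np.asarray(labels).copy()
        strengths = np.asarray(strengths, float).copy()
        g = lambda d: 1.0 / (1.0 + d)   # attack force decays with contrast
        for _ in range(iters):
            new_l, new_s = labels.copy(), strengths.copy()
            for p in range(len(intensity)):
                for q in (p - 1, p + 1):
                    if 0 <= q < len(intensity):
                        force = g(abs(intensity[p] - intensity[q])) * strengths[q]
                        if force > new_s[p]:
                            new_l[p], new_s[p] = labels[q], force
            labels, strengths = new_l, new_s
        return labels

    # A bright "tooth" (0.9) against a dark background (0.1), with one
    # foreground seed (label 1) and background seeds (label 2) at the ends
    img = [0.1, 0.1, 0.9, 0.9, 0.9, 0.1]
    seeds = [2, 0, 0, 1, 0, 2]
    theta = [1.0, 0, 0, 1.0, 0, 1.0]
    print(growcut_1d(img, seeds, theta))   # prints [2 2 1 1 1 2]
    ```

    The 3D version iterates the same update over 26-neighbourhoods, which is what makes it well suited to GPU parallelisation.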

  9. 3D visualization for medical volume segmentation validation

    NASA Astrophysics Data System (ADS)

    Eldeib, Ayman M.

    2002-05-01

    This paper presents a 3-D visualization tool that manipulates and/or enhances, via user input, the segmented targets and other organs. The 3-D visualization tool creates a precise and realistic 3-D model from a CT/MR data set for manipulation in 3-D, permitting a physician or planner to look through, around, and inside the various structures. The tool is designed to assist in and evaluate the segmentation process. It can control the transparency of each 3-D object. It displays in one view a 2-D slice (axial, coronal, and/or sagittal) within a 3-D model of the segmented tumor or structures. This helps the radiotherapist or the operator to evaluate the adequacy of the generated target compared to the original 2-D slices. The graphical interface enables the operator to easily select a specific 2-D slice of the 3-D volume data set. The operator can manually override and adjust the automated segmentation results. After correction, the operator can view the 3-D model again and iterate until a satisfactory segmentation is obtained. The novelty of this work lies in using state-of-the-art image processing and 3-D visualization techniques to facilitate the validation of medical volume segmentation and to assure the accuracy of the volume measurement of the structure of interest.

  10. Quantifying brain tissue volume in multiple sclerosis with automated lesion segmentation and filling

    PubMed Central

    Valverde, Sergi; Oliver, Arnau; Roura, Eloy; Pareto, Deborah; Vilanova, Joan C.; Ramió-Torrentà, Lluís; Sastre-Garriga, Jaume; Montalban, Xavier; Rovira, Àlex; Lladó, Xavier

    2015-01-01

    Lesion filling has been successfully applied to reduce the effect of hypo-intense T1-w multiple sclerosis (MS) lesions on automatic brain tissue segmentation. However, a study of fully automated pipelines incorporating lesion segmentation and lesion filling on tissue volume analysis has not yet been performed. Here, we analyzed the percentage of error introduced by automating the lesion segmentation and filling processes in the tissue segmentation of 70 clinically isolated syndrome patient images. First, images were processed using the LST and SLS toolkits with different pipeline combinations that differed in either automated or manual lesion segmentation, and lesion filling or masking out lesions. Then, images processed following each of the pipelines were segmented into gray matter (GM) and white matter (WM) using SPM8, and compared with the same images where expert lesion annotations were filled before segmentation. Our results showed that fully automated lesion segmentation and filling pipelines significantly reduced the percentage of error in GM and WM volume on images of MS patients, and performed similarly to the images where expert lesion annotations were masked before segmentation. In all the pipelines, misclassified lesion voxels were the main cause of the observed error in GM and WM volume. However, the percentage of error was significantly lower when automatically estimated lesions were filled rather than masked before segmentation. These results are relevant and suggest that the LST and SLS toolboxes allow accurate brain tissue volume measurements without any kind of manual intervention, which can be convenient not only in terms of time and economic costs, but also to avoid the inherent intra- and inter-observer variability between manual annotations.
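    The percentage of error analysed here is, per tissue class, the deviation of a pipeline's volume from the volume obtained with expert-filled lesions. A sketch with invented voxel counts (not data from the study):

    ```python
    def percent_volume_error(auto_voxels, ref_voxels, voxel_vol_ml=1.0):
        """Percent error of an automated tissue volume against the
        reference volume obtained with expert lesion annotations."""
        auto_v = auto_voxels * voxel_vol_ml
        ref_v = ref_voxels * voxel_vol_ml
        return 100.0 * abs(auto_v - ref_v) / ref_v

    # Hypothetical GM voxel counts from two pipelines vs. the reference
    err_filled = percent_volume_error(auto_voxels=612_300, ref_voxels=615_000)
    err_masked = percent_volume_error(auto_voxels=598_200, ref_voxels=615_000)
    # err_filled < err_masked, mirroring the reported advantage of
    # filling over masking (the numbers themselves are illustrative)
    ```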

  11. Vessel segmentation for angiographic enhancement and analysis

    E-print Network

    Lübeck, Universität zu

    Condurache, Alexandru; Aach, Til. … Angiography is a widely used method of vessel imaging for the diagnosis and treatment of pathological manifestations as well as for medical research. Vessel segmentation in angiograms is useful …

  12. Performance benchmarking of liver CT image segmentation and volume estimation

    NASA Astrophysics Data System (ADS)

    Xiong, Wei; Zhou, Jiayin; Tian, Qi; Liu, Jimmy J.; Qi, Yingyi; Leow, Wee Kheng; Han, Thazin; Wang, Shih-chang

    2008-03-01

    In recent years more and more computer aided diagnosis (CAD) systems are being used routinely in hospitals. Image-based knowledge discovery plays important roles in many CAD applications, which have great potential to be integrated into the next-generation picture archiving and communication systems (PACS). Robust medical image segmentation tools are essential for such discovery in many CAD applications. In this paper we present a platform with the necessary tools for performance benchmarking of algorithms for liver segmentation and volume estimation used in liver transplantation planning. It includes an abdominal computed tomography (CT) image database (DB), annotation tools, a ground truth DB, and performance measure protocols. The proposed architecture is generic and can be used for other organs and imaging modalities. In the current study, approximately 70 sets of abdominal CT images with normal livers have been collected, and a user-friendly annotation tool has been developed to generate ground truth data for a variety of organs, including 2D contours of the liver, the two kidneys, the spleen, the aorta, and the spinal canal. Abdominal organ segmentation algorithms using 2D atlases and 3D probabilistic atlases can be evaluated on the platform. Preliminary benchmark results from liver segmentation algorithms which make use of statistical knowledge extracted from the abdominal CT image DB are also reported. We aim to increase the collection to about 300 CT sets in the near future and plan to make the DBs available to the medical imaging research community for performance benchmarking of liver segmentation algorithms.
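    Benchmarking platforms of this kind typically report overlap and volume-error measures against the ground truth DB. A sketch of two standard metrics on toy masks (not the platform's actual protocol):

    ```python
    import numpy as np

    def dice(a, b):
        """Dice similarity coefficient between two binary masks."""
        a, b = np.asarray(a, bool), np.asarray(b, bool)
        inter = np.logical_and(a, b).sum()
        return 2.0 * inter / (a.sum() + b.sum())

    def volume_error_pct(seg, truth, voxel_ml=1.0):
        """Absolute volume estimation error as a percentage of truth."""
        return 100.0 * abs(seg.sum() - truth.sum()) * voxel_ml / (truth.sum() * voxel_ml)

    truth = np.zeros((4, 4), bool); truth[1:3, 1:3] = True   # 4-voxel "liver"
    seg = np.zeros((4, 4), bool);   seg[1:3, 1:4] = True     # 6-voxel estimate
    print(dice(seg, truth))              # 0.8
    print(volume_error_pct(seg, truth))  # 50.0
    ```

    Distance-based measures (e.g. surface distances) usually complement these, since a segmentation can score well on volume while misplacing the boundary.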

  13. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging

    NASA Astrophysics Data System (ADS)

    Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart

    2015-02-01

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases, providing auto-segmented GTVs and motion-encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51 ± 1.92)% to (97.27 ± 0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients, ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of the selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.
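    STAPLE estimates a probabilistic ground truth together with each rater's sensitivity and specificity via expectation-maximization. A much-simplified binary sketch (fixed global prior, no spatial model; the rater decisions are invented, not study data):

    ```python
    import numpy as np

    def staple(decisions, iters=20):
        """Simplified binary STAPLE: EM-estimate a ground-truth probability
        map W and per-rater sensitivity p / specificity q from R binary
        segmentations stacked as an (R, N) array."""
        D = np.asarray(decisions, float)
        R, N = D.shape
        p = np.full(R, 0.9)   # initial sensitivities
        q = np.full(R, 0.9)   # initial specificities
        prior = D.mean()      # crude fixed foreground prior
        for _ in range(iters):
            # E-step: P(true = 1 | decisions) at each voxel
            a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
            b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
            W = a / (a + b)
            # M-step: update rater performance parameters
            p = (W * D).sum(axis=1) / W.sum()
            q = ((1 - W) * (1 - D)).sum(axis=1) / (1 - W).sum()
        return W, p, q

    # Three hypothetical raters on 6 voxels; rater 2 over-segments voxel 3
    D = np.array([
        [1, 1, 1, 0, 0, 0],
        [1, 1, 1, 0, 0, 0],
        [1, 1, 1, 1, 0, 0],
    ])
    W, p, q = staple(D)
    consensus = (W >= 0.5).astype(int)
    print(consensus)   # [1 1 1 0 0 0]
    ```

    The over-segmenting rater is outvoted at voxel 3 and ends up with a lower estimated specificity than the others.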

  14. A Bayesian Approach to Video Object Segmentation via Merging 3D Watershed Volumes

    E-print Network

    Chen, Sheng-Wei

    Tsai, Yu-Pao; … data into a set of 3D watershed volumes, where each watershed volume is a series of corresponding 2D … marker-controlled watershed segmentation, where the markers are extracted by first generating a set of initial markers via …

  15. Industry Analysis and Customer Segmentation Company Description

    E-print Network

    Dahl, David B.

    Company Description: Our company specializes … benefits and services industry, our businesses offer exceptional service, broad capabilities and enduring … unique ways do customers pay for things in other industries that might be useful to the industry? …

  16. Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET

    NASA Astrophysics Data System (ADS)

    Hatt, M.; Lamare, F.; Boussion, N.; Turzo, A.; Collet, C.; Salzenstein, F.; Roux, C.; Jarritt, P.; Carson, K.; Cheze-LeRest, C.; Visvikis, D.

    2007-07-01

    Accurate volume of interest (VOI) estimation in PET is crucial in different oncology applications such as response to therapy evaluation and radiotherapy treatment planning. The objective of our study was to compare the performance of the proposed algorithm for automatic lesion volume delineation, namely the fuzzy hidden Markov chains (FHMC), with that of the threshold-based techniques representing the current state of the art in clinical practice. As the classical hidden Markov chain (HMC) algorithm, FHMC takes into account noise, voxel intensity and spatial correlation in order to classify a voxel as background or functional VOI. However, the novelty of the fuzzy model consists of the inclusion of an estimation of imprecision, which should subsequently lead to a better modelling of the 'fuzzy' nature of the object-of-interest boundaries in emission tomography data. The performance of the algorithms has been assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1) and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm3 and 64 mm3). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold-based methodology for functional volume determination or activity concentration recovery considering a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore, differences between the classification and volume estimation errors evaluated were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithms was less susceptible to image noise levels in comparison to the threshold-based techniques. The analysis of both simulated and acquired datasets led to similar results and conclusions as far as the performance of the segmentation algorithms under evaluation is concerned.
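    The clinical comparator in this study, threshold-based delineation, is simple to state: a voxel belongs to the VOI when its uptake exceeds a fixed fraction of the lesion maximum. A sketch (the 42% value is a commonly cited choice, not necessarily the one used in the paper; the uptake profile is invented):

    ```python
    import numpy as np

    def threshold_voi(pet, frac_of_max=0.42):
        """Fixed-threshold VOI delineation: keep voxels whose uptake
        exceeds a fixed fraction of the lesion's maximum. The fraction
        is the method's only parameter."""
        pet = np.asarray(pet, float)
        return pet >= frac_of_max * pet.max()

    # 1D uptake profile: hot lesion on warm background
    profile = np.array([1.0, 1.2, 4.0, 9.5, 10.0, 8.8, 1.1])
    voi = threshold_voi(profile)
    print(voi.sum())   # number of voxels in the VOI
    ```

    Its sensitivity to the chosen fraction, contrast and noise is exactly what motivates statistical alternatives such as FHMC.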

  17. Semiautomatic Regional Segmentation to Measure Orbital Fat Volumes in Thyroid-Associated Ophthalmopathy

    PubMed Central

    Comerci, M.; Elefante, A.; Strianese, D.; Senese, R.; Bonavolontà, P.; Alfano, B.; Bonavolontà, G.; Brunetti, A.

    2013-01-01

    This study was designed to validate a novel semi-automated segmentation method to measure regional intra-orbital fat tissue volume in Graves' ophthalmopathy. Twenty-four orbits from 12 patients with Graves' ophthalmopathy, 24 orbits from 12 controls, ten orbits from five MRI study simulations and two orbits from a digital model were used. Following manual region of interest definition of the orbital volumes performed by two operators with different levels of expertise, an automated procedure calculated intra-orbital fat tissue volumes (global and regional, with automated definition of four quadrants). In patients with Graves' disease, clinical activity score and degree of exophthalmos were measured and correlated with intra-orbital fat volumes. Operator performance was evaluated and statistical analysis of the measurements was performed. Accurate intra-orbital fat volume measurements were obtained with coefficients of variation below 5%. The mean operator difference in total fat volume measurements was 0.56%. Patients had significantly higher intra-orbital fat volumes than controls (p<0.001 using Student's t test). Fat volumes and clinical score were significantly correlated (p<0.001). The semi-automated method described here can provide accurate, reproducible intra-orbital fat measurements with low inter-operator variation and good correlation with clinical data. PMID:24007725

  18. Semiautomatic regional segmentation to measure orbital fat volumes in thyroid-associated ophthalmopathy. A validation study.

    PubMed

    Comerci, M; Elefante, A; Strianese, D; Senese, R; Bonavolontà, P; Alfano, B; Bonavolontà, B; Brunetti, A

    2013-08-01

    This study was designed to validate a novel semi-automated segmentation method to measure regional intra-orbital fat tissue volume in Graves' ophthalmopathy. Twenty-four orbits from 12 patients with Graves' ophthalmopathy, 24 orbits from 12 controls, ten orbits from five MRI study simulations and two orbits from a digital model were used. Following manual region of interest definition of the orbital volumes performed by two operators with different levels of expertise, an automated procedure calculated intra-orbital fat tissue volumes (global and regional, with automated definition of four quadrants). In patients with Graves' disease, clinical activity score and degree of exophthalmos were measured and correlated with intra-orbital fat volumes. Operator performance was evaluated and statistical analysis of the measurements was performed. Accurate intra-orbital fat volume measurements were obtained with coefficients of variation below 5%. The mean operator difference in total fat volume measurements was 0.56%. Patients had significantly higher intra-orbital fat volumes than controls (p<0.001 using Student's t test). Fat volumes and clinical score were significantly correlated (p<0.001). The semi-automated method described here can provide accurate, reproducible intra-orbital fat measurements with low inter-operator variation and good correlation with clinical data. PMID:24007725

  19. 3D SEGMENTATION OF RODENT BRAIN STRUCTURES USING ACTIVE VOLUME MODEL WITH SHAPE PRIORS

    E-print Network

    Huang, Junzhou

    … of the rodent brain from MR images, and the proposed method performed better than the original AVM. Index Terms: segmentation, deformable models, Active Volume Model, Active Shape Model, shape prior, rodent brain

  20. Brain tumor target volume determination for radiation therapy treatment planning through the use of automated MRI segmentation

    NASA Astrophysics Data System (ADS)

    Mazzara, Gloria Patrika

    Radiation therapy seeks to effectively irradiate the tumor cells while minimizing the dose to adjacent normal cells. Prior research found that the low success rates for treating brain tumors could be improved with higher radiation doses to the tumor area. This is feasible only if the target volume can be precisely identified. However, the definition of tumor volume is still based on time-intensive, highly subjective manual outlining by radiation oncologists. In this study the effectiveness of two automated Magnetic Resonance Imaging (MRI) segmentation methods, k-Nearest Neighbors (kNN) and Knowledge-Guided (KG), in determining the Gross Tumor Volume (GTV) of brain tumors for use in radiation therapy was assessed. Three criteria were applied: accuracy of the contours; quality of the resulting treatment plan in terms of dose to the tumor; and a novel treatment plan evaluation technique based on post-treatment images. The kNN method was able to segment all cases while the KG method was limited to enhancing tumors and gliomas with clear enhancing edges. Various software applications were developed to create a closed smooth contour that encompassed the tumor pixels from the segmentations and to integrate these results into the treatment planning software. A novel, probabilistic measurement of accuracy was introduced to compare the agreement of the segmentation methods with the weighted average physician volume. Both computer methods under-segment the tumor volume when compared with the physicians but performed within the variability of manual contouring (28% ± 12% for inter-operator variability). Computer segmentations were modified vertically to compensate for their under-segmentation. When comparing radiation treatment plans designed from physician-defined tumor volumes with treatment plans developed from the modified segmentation results, the reference target volume was irradiated within the same level of conformity.
Analysis of the plans based on post-treatment MRI showed that the segmentation plans provided similar dose coverage to areas being treated by the original treatment plans. This research demonstrates that computer segmentations provide a feasible route to automatic target volume definition. Because of the lower variability and greater efficiency of the automated techniques, their use could lead to more precise plans and better prognosis for brain tumor patients.
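    The kNN classifier at the heart of the first method assigns each voxel the majority label of its k nearest training samples in feature space. A minimal sketch with invented (T1, T2) intensity pairs, not the study's data or software:

    ```python
    import numpy as np

    def knn_segment(train_feats, train_labels, pixels, k=3):
        """k-Nearest-Neighbours voxel classification: each voxel takes
        the majority label of its k closest training samples in feature
        space (e.g. multi-spectral MRI intensities)."""
        train_feats = np.asarray(train_feats, float)
        train_labels = np.asarray(train_labels)
        out = []
        for x in np.asarray(pixels, float):
            d = np.linalg.norm(train_feats - x, axis=1)  # Euclidean distances
            nearest = np.argsort(d)[:k]                  # k closest samples
            votes = np.bincount(train_labels[nearest])
            out.append(votes.argmax())                   # majority label
        return np.array(out)

    # Hypothetical (T1, T2) training pairs: label 1 = tumour, 0 = normal
    train = [[0.2, 0.3], [0.25, 0.35], [0.8, 0.9], [0.85, 0.8]]
    labels = [0, 0, 1, 1]
    print(knn_segment(train, labels, [[0.22, 0.31], [0.82, 0.85]], k=3))
    ```

    Real pipelines add per-patient training-sample selection and post-processing (the "closed smooth contour" step described above) on top of this voxel-wise rule.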

  1. Image Segmentation Analysis for NASA Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2010-01-01

    NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all of the computerized image analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate moving from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects together into region classes. No other practical, operational image segmentation approach has this tight integration of region growing object finding with region classification. This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image data sections are small enough to successfully mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation will provide an overview of the RHSEG algorithm and describe how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites from remotely sensed data.
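    The region-growing half of RHSEG can be illustrated with a toy best-merge loop: repeatedly merge the most similar pair of adjacent regions until no pair is similar enough. This sketch deliberately omits RHSEG's recursive subdivision and its grouping of non-adjacent regions into classes; the signal is invented:

    ```python
    import numpy as np

    def region_grow_1d(signal, max_diff):
        """Toy best-merge region growing on a 1D signal: merge the
        adjacent-region pair with the smallest mean difference until
        no pair is closer than max_diff."""
        regions = [[v] for v in signal]        # start: one region per pixel
        while len(regions) > 1:
            means = [np.mean(r) for r in regions]
            diffs = [abs(means[i] - means[i + 1]) for i in range(len(means) - 1)]
            i = int(np.argmin(diffs))
            if diffs[i] > max_diff:
                break                          # no sufficiently similar pair left
            regions[i] = regions[i] + regions.pop(i + 1)   # merge best pair
        return [np.mean(r) for r in regions]

    # Two flat zones with distinct brightness collapse to two regions
    print(region_grow_1d([10, 11, 10, 50, 52, 51], max_diff=5))
    ```

    Recursively subdividing the image before running such a loop, then stitching the pieces back together, is what keeps the pairwise-dissimilarity cost tractable on full scenes.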

  2. Spinal Cord Segmentation for Volume Estimation in Healthy and Multiple Sclerosis Subjects using Crawlers and Minimal Paths

    E-print Network

    Hamarneh, Ghassan

    … Spinal cord analysis is an important problem … segmentation of both healthy and pathological spinal cords from clinical MR data. This is the first study to validate …

  3. Real-Time Automatic Segmentation of Optical Coherence Tomography Volume Data of the Macular Region

    PubMed Central

    Tian, Jing; Varga, Boglárka; Somfai, Gábor Márk; Lee, Wen-Hsiang; Smiddy, William E.; Cabrera DeBuc, Delia

    2015-01-01

    Optical coherence tomography (OCT) is a high speed, high resolution and non-invasive imaging modality that enables the capturing of the 3D structure of the retina. The fast and automatic analysis of 3D volume OCT data is crucial, taking into account the increasing amount of patient-specific 3D imaging data. In this work, we have developed an automatic algorithm, OCTRIMA 3D (OCT Retinal IMage Analysis 3D), that can segment OCT volume data in the macular region quickly and accurately. The proposed method is implemented using shortest-path based graph search, which detects the retinal boundaries by searching for the shortest path between two end nodes using Dijkstra's algorithm. Additional techniques, such as inter-frame flattening, inter-frame search region refinement, masking and biasing, were introduced to exploit the spatial dependency between adjacent frames and reduce the processing time. Our segmentation algorithm was evaluated by comparison with manual labelings and three state-of-the-art graph-based segmentation methods. The processing time for a whole OCT volume of 496×644×51 voxels (captured by Spectralis SD-OCT) was 26.15 seconds, which is at least a 2- to 8-fold increase in speed compared to other, similar reference algorithms used in the comparisons. The average unsigned error was about 1 pixel (≈ 4 microns), which was also lower compared to the reference algorithms. We believe that OCTRIMA 3D is a leap forward towards achieving reliable, real-time analysis of 3D OCT retinal data. PMID:26258430
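    The shortest-path boundary search can be sketched on a toy cost image: a retinal boundary is the minimum-cost left-to-right path through a per-pixel cost map, found with Dijkstra's algorithm. This is a generic sketch, not OCTRIMA 3D's actual implementation; the 3-neighbour column transition and the cost values are illustrative assumptions:

    ```python
    import heapq
    import numpy as np

    def boundary_path(cost):
        """Minimum-cost left-to-right path through a (rows x cols) cost
        image via Dijkstra; moves go to the 3 neighbours in the next column.
        Returns the boundary's row index in each column."""
        rows, cols = cost.shape
        dist = np.full((rows, cols), np.inf)
        prev = {}
        pq = [(cost[r, 0], r, 0) for r in range(rows)]   # any row may start
        for d, r, c in pq:
            dist[r, 0] = d
        heapq.heapify(pq)
        while pq:
            d, r, c = heapq.heappop(pq)
            if d > dist[r, c] or c == cols - 1:
                continue                                  # stale entry / at end
            for nr in (r - 1, r, r + 1):
                if 0 <= nr < rows and d + cost[nr, c + 1] < dist[nr, c + 1]:
                    dist[nr, c + 1] = d + cost[nr, c + 1]
                    prev[(nr, c + 1)] = (r, c)
                    heapq.heappush(pq, (dist[nr, c + 1], nr, c + 1))
        r = int(np.argmin(dist[:, -1]))                   # cheapest endpoint
        path, node = [], (r, cols - 1)
        while node is not None:
            path.append(node[0])
            node = prev.get(node)
        return path[::-1]

    # Low cost (strong gradient) along row 1, drifting to row 2
    cost = np.array([[9, 9, 9, 9],
                     [1, 1, 9, 9],
                     [9, 9, 1, 1]])
    print(boundary_path(cost))   # [1, 1, 2, 2]
    ```

    In practice the cost map is derived from vertical intensity gradients, and tricks like the abstract's search-region refinement shrink the graph per column before the search runs.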

  4. Hybrid segmentation framework for 3D medical image analysis

    NASA Astrophysics Data System (ADS)

    Chen, Ting; Metaxas, Dimitri N.

    2003-05-01

    Medical image segmentation is the process that defines the region of interest in the image volume. Classical segmentation methods such as region-based methods and boundary-based methods cannot make full use of the information provided by the image. In this paper we propose a general hybrid framework for 3D medical image segmentation. In our approach we combine the Gibbs prior model and the deformable model. First, Gibbs prior models are applied to each slice in a 3D medical image volume and the segmentation results are combined into a 3D binary mask of the object. We then create a deformable mesh based on this 3D binary mask. The deformable model is led to the edge features in the volume with the help of image-derived external forces. The deformable model segmentation result can in turn be used to update the parameters of the Gibbs prior models. The two methods then work recursively to reach a global segmentation solution. The hybrid segmentation framework has been applied to images of the lung, heart, colon, jaw, tumors, and brain. The experimental data include MRI (T1, T2, PD), CT, X-ray, and ultrasound images. High quality results are achieved at relatively low computational cost. We also performed validation using expert manual segmentation as the ground truth. The results show that the hybrid segmentation may have further clinical use.

  5. Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET

    PubMed Central

    Hatt, Mathieu; Lamare, Frédéric; Boussion, Nicolas; Roux, Christian; Turzo, Alexandre; Cheze-Lerest, Catherine; Jarritt, Peter; Carson, Kathryn; Salzenstein, Fabien; Collet, Christophe; Visvikis, Dimitris

    2007-01-01

    Accurate volume of interest (VOI) estimation in PET is crucial in oncology applications such as evaluation of response to therapy and radiotherapy treatment planning. The objective of our study was to compare the performance of the proposed algorithm for automatic lesion volume delineation, namely Fuzzy Hidden Markov Chains (FHMC), with that of the threshold-based techniques that represent the current state of the art in clinical practice. Like the classical Hidden Markov Chain (HMC) algorithm, FHMC takes into account noise, voxel intensity, and spatial correlation in order to classify a voxel as background or functional VOI. The novelty of the fuzzy model is the inclusion of an estimate of imprecision, which should lead to better modelling of the “fuzzy” nature of object-of-interest boundaries in emission tomography data. The performance of the algorithms was assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1), and image noise levels. Both lesion activity recovery and VOI determination were assessed in images reconstructed with two different voxel sizes (8 mm3 and 64 mm3). To account for both the location and the size of the functional volume, the concept of % classification errors was introduced in the evaluation of volume segmentation on the simulated datasets. Results reveal that FHMC performs substantially better than the threshold-based methodology for functional volume determination and activity concentration recovery at a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore, differences between classification and volume estimation errors were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithms was less susceptible to image noise levels than that of the threshold-based techniques. The analysis of both simulated and acquired datasets led to similar results and conclusions regarding the performance of the segmentation algorithms under evaluation. PMID:17664555

  6. Partial volume segmentation of brain magnetic resonance images based on maximum a posteriori probability

    E-print Network

    Partial volume segmentation of brain magnetic resonance images based on maximum a posteriori, and image-intensity inhomogeneity render a challenging task for segmentation of brain magnetic resonance MR correction I. INTRODUCTION Magnetic resonance MR imaging has several advantages over other medical imaging

  7. Segmentation of cardiac MR volume data using 3D active appearance models

    NASA Astrophysics Data System (ADS)

    Mitchell, Steven C.; Lelieveldt, Boudewijn P. F.; Bosch, Johan G.; van der Geest, Rob J.; Reiber, Johan H. C.; Sonka, Milan

    2002-05-01

    Active Appearance Models (AAMs) are useful for the segmentation of cardiac MR images since they exploit prior knowledge about the cardiac shape and image appearance. However, traditional AAMs only process 2D images, not taking into account the 3D data inherent to MR. This paper presents a novel, true 3D Active Appearance Model that models the intrinsic 3D shape and image appearance of the left ventricle in cardiac MR data. In 3D-AAM, shape and appearance of the Left Ventricle (LV) is modeled from a set of expert drawn contours. The contours are then resampled to a manually defined set of landmark points, and subsequently aligned. Appearance variations in both shape and texture are captured using Principal Component Analysis (PCA) on the training set. Segmentation is achieved by minimizing the model appearance-to-target differences by adjusting the model eigen-coefficients using a gradient descent approach. The clinical potential of the 3D-AAM is demonstrated in short-axis cardiac magnetic resonance (MR) images. The method's performance was assessed by comparison with manually-identified independent standards in 56 clinical MR sequences. The method showed good agreement with the independent standards using quantitative indices such as border positioning errors, endo- and epicardial volumes, and left ventricular mass. The 3D AAM method shows high promise for successful segmentation of three-dimensional images in MR.
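
    The shape-statistics step of an AAM can be illustrated with a minimal point-distribution model: aligned landmark vectors are decomposed by PCA, and any shape is represented by its eigen-coefficients. This is a generic sketch using a plain SVD, not the authors' 3D-AAM code, and it omits the texture model.

```python
import numpy as np

def build_shape_model(shapes, n_modes=2):
    """PCA point-distribution model from aligned landmark sets.
    shapes: (n_samples, n_landmarks * dim) array of flattened coordinates."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data gives the principal modes of shape variation
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_modes]                       # eigenvectors (shape modes)
    variances = (s ** 2) / (len(shapes) - 1)   # corresponding eigenvalues
    return mean, modes, variances[:n_modes]

def project(shape, mean, modes):
    """Eigen-coefficients b such that shape ~ mean + b @ modes."""
    return (shape - mean) @ modes.T
```

    During matching, the eigen-coefficients (together with pose parameters) are the quantities adjusted by the gradient descent described above.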

  8. Spinal Crawlers: Deformable Organisms for Spinal Cord Segmentation and Analysis

    E-print Network

    Hamarneh, Ghassan

    Spinal Crawlers: Deformable Organisms for Spinal Cord Segmentation and Analysis Chris Mc, Canada {cmcintos, hamarneh}@cs.sfu.ca Abstract. Spinal cord analysis is an important problem relating to the study of various neurological diseases. We present a novel approach to spinal cord segmentation

  9. Midbrain volume segmentation using active shape models and LBPs

    NASA Astrophysics Data System (ADS)

    Olveres, Jimena; Nava, Rodrigo; Escalante-Ramírez, Boris; Cristóbal, Gabriel; García-Moreno, Carla María.

    2013-09-01

    In recent years, the use of Magnetic Resonance Imaging (MRI) to detect brain structures such as the midbrain, white matter, gray matter, corpus callosum, and cerebellum has increased. This fact, together with the evidence that the midbrain is associated with Parkinson's disease, has led researchers to consider midbrain segmentation an important issue. Active Shape Models (ASM) are widely used in the literature for organ segmentation where shape is an important discriminant feature. Nevertheless, this approach is based on the assumption that objects of interest are usually located on strong edges; such a limitation may lead to a final shape far from the actual shape. This paper proposes a novel method for segmenting the midbrain based on the combined use of ASM and Local Binary Patterns (LBP). Furthermore, we analyzed several LBP variants and evaluated their performance. The joint model considers both global and local statistics to improve the final adjustment. The results show that our proposal performs substantially better than the ASM algorithm alone and provides better segmentation measurements.

  10. High volume production trial of mirror segments for the Thirty Meter Telescope

    NASA Astrophysics Data System (ADS)

    Oota, Tetsuji; Negishi, Mahito; Shinonaga, Hirohiko; Gomi, Akihiko; Tanaka, Yutaka; Akutsu, Kotaro; Otsuka, Itaru; Mochizuki, Shun; Iye, Masanori; Yamashita, Takuya

    2014-07-01

    The Thirty Meter Telescope is a next-generation optical/infrared telescope to be constructed on Mauna Kea, Hawaii, toward the end of this decade as an international project. Its 30 m primary mirror consists of 492 off-axis aspheric mirror segments. High-volume production of hundreds of segments started in 2013, based on the contract between the National Astronomical Observatory of Japan and Canon Inc. This paper describes the achievements of the high-volume production trials. The Stressed Mirror Figuring technique established by Keck Telescope engineers was adapted and adopted. To measure the segment surface figure, a novel stitching algorithm was evaluated by experiment. The integration procedure was checked with a prototype segment.

  11. Comprehensive evaluation of an image segmentation technique for measuring tumor volume from CT images

    NASA Astrophysics Data System (ADS)

    Deng, Xiang; Huang, Haibin; Zhu, Lei; Du, Guangwei; Xu, Xiaodong; Sun, Yiyong; Xu, Chenyang; Jolly, Marie-Pierre; Chen, Jiuhong; Xiao, Jie; Merges, Reto; Suehling, Michael; Rinck, Daniel; Song, Lan; Jin, Zhengyu; Jiang, Zhaoxia; Wu, Bin; Wang, Xiaohong; Zhang, Shuai; Peng, Weijun

    2008-03-01

    Comprehensive quantitative evaluation of tumor segmentation techniques on large-scale clinical data sets is crucial for routine clinical use of CT-based tumor volumetry in cancer diagnosis and treatment response evaluation. In this paper, we present a systematic validation study of a semi-automatic image segmentation technique for measuring tumor volume from CT images. The segmentation algorithm was tested using clinical data of 200 tumors in 107 patients with liver, lung, lymphoma, and other types of cancer. Performance was evaluated in terms of both accuracy and reproducibility. Accuracy was assessed using 7 commonly used metrics that provide complementary information about the quality of the segmentation results. Reproducibility was measured as the variation of the volume measurements across 10 independent segmentations. The effects of disease type, lesion size, and slice thickness on the accuracy measures were also analyzed. Our results demonstrate that the tumor segmentation algorithm correlated well with ground truth for all four lesion types (r = 0.97, 0.99, 0.97, 0.98, p < 0.0001 for liver, lung, lymphoma, and other types, respectively). The algorithm produced relatively reproducible volume measurements on all lesion types (coefficient of variation in the range of 10-20%), and was insensitive to lesion size (coefficient of determination close to 0) and slice thickness (p > 0.90). The validation framework used in this study has the potential to facilitate the development of new tumor segmentation algorithms and to assist large-scale evaluation of segmentation techniques for other clinical applications.
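
    The reproducibility metric quoted above, the coefficient of variation across repeated volume measurements of the same lesion, can be computed as in the following sketch (the sample-standard-deviation convention, `ddof=1`, is an assumption about the paper's exact definition):

```python
import numpy as np

def coefficient_of_variation(volumes):
    """Reproducibility of repeated volume measurements of one lesion:
    sample standard deviation as a percentage of the mean volume."""
    v = np.asarray(volumes, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()
```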

  12. Multi-region unstructured volume segmentation using tetrahedron filling

    SciTech Connect

    Willliams, Sean Jamerson; Dillard, Scott E; Thoma, Dan J; Hlawitschka, Mario; Hamann, Bernd

    2010-01-01

    Segmentation is one of the most common operations in image processing, and while there are several solutions already present in the literature, they each have their own benefits and drawbacks that make them well-suited for some types of data and not for others. We focus on the problem of breaking an image into multiple regions in a single segmentation pass, while supporting both voxel and scattered point data. To solve this problem, we begin with a set of potential boundary points and use a Delaunay triangulation to complete the boundaries. We use heuristic- and interaction-driven Voronoi clustering to find reasonable groupings of tetrahedra. Apart from the computation of the Delaunay triangulation, our algorithm has linear time complexity with respect to the number of tetrahedra.

  13. Segmentation propagation for the automated quantification of ventricle volume from serial MRI

    NASA Astrophysics Data System (ADS)

    Linguraru, Marius George; Butman, John A.

    2009-02-01

    Accurate ventricle volume estimates could potentially improve the understanding and diagnosis of communicating hydrocephalus. Postoperative communicating hydrocephalus has been recognized in patients with brain tumors, in whom changes in ventricle volume can be difficult to identify, particularly over short time intervals. Because of the complex alterations of brain morphology in these patients, the segmentation of brain ventricles is challenging. Our method evaluates ventricle size from serial brain MRI examinations: we (i) combined serial images to increase SNR, (ii) automatically segmented this combined image to generate a ventricle template using fast marching methods and geodesic active contours, and (iii) propagated the segmentation using deformable registration of the original MRI datasets. By applying this deformation to the ventricle template, serial volume estimates were obtained in a robust manner from routine clinical images (0.93 overlap) and their variation analyzed.

  14. Comparison of EM-based and level set partial volume segmentations of MR brain images

    NASA Astrophysics Data System (ADS)

    Tagare, Hemant D.; Chen, Yunmei; Fulbright, Robert K.

    2008-03-01

    EM and level set algorithms are competing methods for segmenting MRI brain images. This paper presents a fair comparison of the two techniques using the Montreal Neurological Institute's software phantom. There are many flavors of level set algorithm for segmentation into multiple regions (multi-phase algorithms, multi-layer algorithms); the specific algorithm we evaluate is a variant of the multi-layer level set algorithm. It uses a single level set function for segmenting the image into multiple classes and can be run to completion without restarting. The EM-based algorithm is standard. Both algorithms can model a variable number of partial volume classes as well as image inhomogeneity (bias field). Our evaluation consists of systematically changing the number of partial volume classes, the additive image noise, and the regularization parameters. The results suggest that the performance of the two algorithms is comparable across noise levels, numbers of partial volume classes, and regularization settings. The segmentation errors of both algorithms are around 5-10% for cerebrospinal fluid, gray matter, and white matter. The level set algorithm appears to have a slight advantage for gray matter segmentation, which may be beneficial in studying brain diseases (multiple sclerosis, Alzheimer's disease) in which small changes in gray matter volume are significant.
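
    A minimal two-class Gaussian-mixture EM on voxel intensities illustrates the "standard" EM side of this comparison; this sketch deliberately omits the partial volume classes and bias-field correction that the compared algorithms additionally model.

```python
import numpy as np

def em_two_class(intensities, iters=50):
    """Two-class Gaussian-mixture EM on a 1D array of voxel intensities.
    Returns the per-voxel probability of belonging to the brighter class."""
    x = np.asarray(intensities, dtype=float)
    mu = np.array([x.min(), x.max()])              # initial class means
    sigma = np.array([x.std(), x.std()]) + 1e-6    # initial class std devs
    pi = np.array([0.5, 0.5])                      # initial mixing weights
    for _ in range(iters):
        # E-step: responsibilities of each class for each voxel
        lik = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate the mixture parameters
        n = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
        pi = n / len(x)
    return r[:, 1]
```

    Thresholding the returned probabilities at 0.5 yields a hard two-class segmentation.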

  15. Quantitative land cover change analysis using fuzzy segmentation

    NASA Astrophysics Data System (ADS)

    Lizarazo, Ivan

    2012-04-01

    Fuzzy image segmentation was recently proposed as an alternative GEOBIA method for discrete land cover classification. In this paper, a variant of fuzzy segmentation is applied to continuous land cover change analysis. The method comprises two main stages: (i) estimation of compositional land cover for each date by fuzzy segmentation; and (ii) change analysis using a fuzzy change matrix. The fuzzy segmentation stage outputs fuzzy-crisp and crisp-fuzzy image regions whose spectral and geometric properties are measured to populate the set of predictors used to estimate land cover at single dates. The variant of fuzzy image segmentation is implemented using advanced machine learning techniques and tested in a rapidly urbanizing area using Landsat multi-spectral imagery. Experimental results suggest that the method produces an accurate characterization of continuous land cover classes. The proposed method is thus potentially useful for enhancing the current GEOBIA perspective, which focuses mainly on discrete land cover classifications.

  16. LANDSAT-D program. Volume 2: Ground segment

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Raw digital data, as received from the LANDSAT spacecraft, cannot generate images that meet specifications. Radiometric corrections must be made to compensate for aging and for differences in sensitivity among the instrument sensors. Geometric corrections must be made to compensate for off-nadir look angle, and to calculate spacecraft drift from its prescribed path. Corrections must also be made for look-angle jitter caused by vibrations induced by spacecraft equipment. The major components of the LANDSAT ground segment and their functions are discussed.

  17. Pulmonary airways tree segmentation from CT examinations using adaptive volume of interest

    NASA Astrophysics Data System (ADS)

    Park, Sang Cheol; Kim, Won Pil; Zheng, Bin; Leader, Joseph K.; Pu, Jiantao; Tan, Jun; Gur, David

    2009-02-01

    Airway tree segmentation is an important step in quantitatively assessing the severity of, and changes in, several lung diseases such as chronic obstructive pulmonary disease (COPD), asthma, and cystic fibrosis. It can also be used to guide bronchoscopy. The purpose of this study is to develop an automated scheme for segmenting the airway tree structure depicted on chest CT examinations. After lung volume segmentation, the scheme defines the first cylinder-like volume of interest (VOI) using a series of images depicting the trachea. The scheme then iteratively defines and adds subsequent VOIs using a region growing algorithm combined with adaptively determined thresholds to trace possible sections of airways located inside the VOI in question. The airway tree segmentation process terminates automatically after the scheme has assessed all VOIs in the iteratively assembled VOI list. In this preliminary study, ten CT examinations with 1.25 mm section thickness and two different CT image reconstruction kernels ("bone" and "standard") were selected and used to test the proposed airway tree segmentation scheme. The experimental results showed that (1) this approach effectively prevented the scheme from infiltrating into the parenchyma, (2) the proposed method segmented the airway trees reasonably accurately, with a lower false-positive identification rate than other previously reported schemes based on 2D image segmentation and data analysis, and (3) the proposed adaptive, iterative threshold selection for the region growing step in each identified VOI enabled the scheme to segment the airway trees reliably to the 4th generation in this limited dataset, with successful segmentation up to the 5th generation in a fraction of the airway tree branches.
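
    Region growing with adaptive threshold selection, the core step of this scheme, might be sketched as follows. This is a simplified toy, not the authors' method: the 6-connectivity, the "leakage" criterion, and the `max_fraction` parameter are all assumptions for illustration.

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, threshold):
    """6-connected region growing: collect voxels whose intensity stays
    below `threshold` (the airway lumen is dark on CT)."""
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque()
    if volume[seed] < threshold:
        mask[seed] = True
        queue.append(seed)
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                    and not mask[n] and volume[n] < threshold:
                mask[n] = True
                queue.append(n)
    return mask

def adaptive_threshold(volume, seed, start, step, max_fraction=0.5):
    """Raise the threshold until the grown region 'leaks' into the
    parenchyma (its size explodes), then keep the last safe value."""
    best, t = start, start
    while region_grow(volume, seed, t).sum() <= max_fraction * volume.size:
        best = t
        t += step
    return best
```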

  18. Multi-Segment Hemodynamic and Volume Assessment With Impedance Plethysmography: Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Ku, Yu-Tsuan E.; Montgomery, Leslie D.; Webbon, Bruce W. (Technical Monitor)

    1995-01-01

    Definition of multi-segmental circulatory and volume changes in the human body provides an understanding of the physiologic responses to various aerospace conditions. We have developed instrumentation and testing procedures at NASA Ames Research Center that may be useful in biomedical research and clinical diagnosis. Specialized two, four, and six channel impedance systems will be described that have been used to measure calf, thigh, thoracic, arm, and cerebral hemodynamic and volume changes during various experimental investigations.

  19. Swarm Intelligence Integrated Graph-Cut for Liver Segmentation from 3D-CT Volumes

    PubMed Central

    Eapen, Maya; Korah, Reeba; Geetha, G.

    2015-01-01

    The segmentation of organs in CT volumes is a prerequisite for diagnosis and treatment planning. In this paper, we focus on liver segmentation from contrast-enhanced abdominal CT volumes, a challenging task due to intensity overlap, blurred edges, large variability in liver shape, and a complex background with cluttered features. The algorithm integrates multiple discriminative cues (i.e., prior domain information, an intensity model, and regional characteristics of the liver) in a graph-cut image segmentation framework. The paper proposes a swarm-intelligence-inspired, edge-adaptive weight function for regulating the energy minimization of the traditional graph-cut model. The model is validated both qualitatively (by clinicians and radiologists) and quantitatively on publicly available computed tomography (CT) datasets (MICCAI 2007 liver segmentation challenge, 3D-IRCAD). Quantitative evaluation of the segmentation results is performed using liver volume calculations, and mean scores of 80.8% and 82.5% are obtained on the MICCAI and IRCAD datasets, respectively. The experimental results illustrate the efficiency and effectiveness of the proposed method. PMID:26689833

  20. Lung Segmentation in 4D CT Volumes Based on Robust Active Shape Model Matching

    PubMed Central

    Gill, Gurman; Beichel, Reinhard R.

    2015-01-01

    Dynamic and longitudinal lung CT imaging produce 4D lung image data sets, enabling applications like radiation treatment planning or assessment of response to treatment of lung diseases. In this paper, we present a 4D lung segmentation method that mutually utilizes all individual CT volumes to derive segmentations for each CT data set. Our approach is based on a 3D robust active shape model and extends it to fully utilize 4D lung image data sets. This yields an initial segmentation for the 4D volume, which is then refined by a 4D optimal surface finding algorithm. The approach was evaluated on a diverse set of 152 CT scans of normal and diseased lungs, consisting of total lung capacity and functional residual capacity scan pairs, and compared to a 3D segmentation method and a registration-based 4D lung segmentation approach. The proposed 4D method obtained an average Dice coefficient of 0.9773 ± 0.0254, which was statistically significantly better (p value ≤ 0.001) than the 3D method (0.9659 ± 0.0517). Compared to the registration-based 4D method, our method obtained better or similar performance but was 58.6% faster. The method can also be easily expanded to process 4D CT data sets consisting of several volumes. PMID:26557844
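
    The Dice coefficient used as the evaluation metric above is the overlap measure 2|A ∩ B| / (|A| + |B|) between two binary masks; a straightforward sketch:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary segmentation masks:
    2 * |A intersect B| / (|A| + |B|); 1.0 for two empty masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```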

  1. Segmentation of organs at risk in CT volumes of head, thorax, abdomen, and pelvis

    NASA Astrophysics Data System (ADS)

    Han, Miaofei; Ma, Jinfeng; Li, Yan; Li, Meiling; Song, Yanli; Li, Qiang

    2015-03-01

    Accurate segmentation of organs at risk (OARs) is a key step in the treatment planning system (TPS) of image-guided radiation therapy. We are developing three classes of methods to segment 17 organs at risk throughout the whole body: brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, liver, kidneys, spleen, prostate, rectum, femoral heads, and skin. The three classes of segmentation methods are (1) threshold-based methods for organs of large contrast with adjacent structures, such as lungs, trachea, and skin; (2) context-driven Generalized Hough Transform-based methods combined with a graph cut algorithm for robust localization and segmentation of liver, kidneys, and spleen; and (3) atlas- and registration-based methods for segmentation of the heart and all organs in CT volumes of the head and pelvis. Segmentation accuracy for the seventeen organs was subjectively evaluated by two medical experts on a three-level score: 0, poor (unusable in clinical practice); 1, acceptable (minor revision needed); and 2, good (nearly no revision needed). A database was collected from Ruijin Hospital, Huashan Hospital, and Xuhui Central Hospital in Shanghai, China, including 127 head scans, 203 thoracic scans, 154 abdominal scans, and 73 pelvic scans. The percentages of "good" segmentation results were 97.6%, 92.9%, 81.1%, 87.4%, 85.0%, 78.7%, 94.1%, 91.1%, 81.3%, 86.7%, 82.5%, 86.4%, 79.9%, 72.6%, 68.5%, 93.2%, and 96.9% for brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, liver, kidneys, spleen, prostate, rectum, femoral heads, and skin, respectively. The various organs at risk can be reliably segmented from CT scans by use of the three classes of segmentation methods.

  2. Segmentation and learning in the quantitative analysis of microscopy images

    NASA Astrophysics Data System (ADS)

    Ruggiero, Christy; Ross, Amy; Porter, Reid

    2015-02-01

    In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest to use machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.

  3. Sonar Picture Segmentation using Markovian Multigrid Algorithm and Multiresolution Analysis

    E-print Network

    Sonar Picture Segmentation using Markovian Multigrid Algorithm and Multiresolution Analysis C on sonar pictures. On the sea-bottom lies some natural or man made objects that we have to detect (cf section 5) or the multiresolution analysis (cf section 6 ) we proposed on real sonar picture allow

  4. Automatic large-volume object region segmentation in LiDAR point clouds

    NASA Astrophysics Data System (ADS)

    Varney, Nina M.; Asari, Vijayan K.

    2014-10-01

    LiDAR is a remote sensing method which produces precise point clouds consisting of millions of geo-spatially located 3D data points. Because of the nature of LiDAR point clouds, it can often be difficult for analysts to accurately and efficiently recognize and categorize objects. The goal of this paper is automatic large-volume object region segmentation in LiDAR point clouds; this efficient segmentation technique is intended as a pre-processing step for the eventual classification of objects within the point cloud. The data are initially segmented into local histogram bins. This local histogram representation allows the point cloud data to be efficiently consolidated into voxels without loss of location information. Additionally, by binning the points, important feature information can be extracted, such as the distribution of points, the density of points, and a local ground. From these local histograms, a 3D automatic seeded region growing technique is applied. This technique performs seed selection based on two criteria: similarity and Euclidean distance to nearest neighbors. The neighbors of selected seeds are then examined and assigned labels based on location and Euclidean distance to a region mean. After the initial segmentation step, region integration is performed to rejoin over-segmented regions. The large number of points in LiDAR data can make other segmentation techniques extremely time-consuming; in addition to producing accurate object segmentation results, the proposed local histogram binning process allows for efficient segmentation, covering a point cloud of over 9,000 points in 10 seconds.
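
    The binning step, consolidating raw points into voxel bins while retaining local density, could look like the following sketch (the voxel size and the dict-of-counts representation are assumptions, not the paper's data structure):

```python
import numpy as np

def bin_points(points, voxel_size):
    """Consolidate an (N, 3) point cloud into voxel bins.
    Returns a dict mapping voxel index (ix, iy, iz) -> point count,
    so each point's voxel (its coarse location) is preserved."""
    pts = np.asarray(points, dtype=float)
    idx = np.floor(pts / voxel_size).astype(int)  # voxel index per point
    bins = {}
    for key in map(tuple, idx):
        bins[key] = bins.get(key, 0) + 1
    return bins
```

    Per-voxel statistics such as point density then feed the seed selection of the region growing stage.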

  5. Influences of skull segmentation inaccuracies on EEG source analysis.

    PubMed

    Lanfer, B; Scherg, M; Dannhauer, M; Knösche, T R; Burger, M; Wolters, C H

    2012-08-01

    The low-conducting human skull is known to have an especially large influence on electroencephalography (EEG) source analysis. Because of difficulties in segmenting the complex skull geometry out of magnetic resonance images, volume conductor models for EEG source analysis may contain inaccuracies and simplifications regarding the geometry of the skull. The computer simulation study presented here investigated the influence of a variety of skull geometry deficiencies on EEG forward simulations and on source reconstruction from EEG data. Reference EEG data were simulated in a detailed and anatomically plausible reference model. Test models were derived from the reference model representing a variety of skull geometry inaccuracies and simplifications. These included erroneous skull holes, local errors in skull thickness, modeling cavities as bone, downward extension of the model, and simplifying the inferior skull, or the inferior skull and scalp, as layers of constant thickness. The reference EEG data were compared to forward simulations in the test models, and source reconstruction in the test models was performed on the simulated reference data. The finite element method with high-resolution meshes was employed for all forward simulations. It was found that large skull geometry inaccuracies close to the source space, for example, cutting the model directly below the skull, led to errors of 20 mm and more for extended source space regions. Local defects, for example, erroneous skull holes, caused non-negligible errors only in the vicinity of the defect. The study design allowed a comparison of the sizes of these influences, and guidelines for modeling the skull geometry were derived. PMID:22584227

  6. Microscopy image segmentation tool: Robust image data analysis

    SciTech Connect

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-03-15

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  7. Segment clustering methodology for unsupervised Holter recordings analysis

    NASA Astrophysics Data System (ADS)

    Rodríguez-Sotelo, Jose Luis; Peluffo-Ordoñez, Diego; Castellanos Dominguez, German

    2015-01-01

    Cardiac arrhythmia analysis of Holter recordings is an important issue in clinical settings; however, it implicitly involves dealing with large amounts of unlabelled data, which implies a high computational cost. In this work an unsupervised methodology based on a segment framework is presented, which consists of dividing the raw data into a balanced number of segments in order to identify fiducial points, then characterizing and clustering the heartbeats in each segment separately. The resulting clusters are merged or split according to an assumed homogeneity criterion. This framework reduces the high computational cost of Holter analysis, making implementation in future real-time applications possible. The performance of the method was measured on the records of the MIT/BIH arrhythmia database and, taking advantage of the database labels, achieved high values of sensitivity and specificity for the broad range of heartbeat types recommended by the AAMI.

  8. Analysis of recent segmental duplications in the bovine genome

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Duplicated sequences are an important source of gene innovation and structural variation within mammalian genomes. We describe the first systematic and genome-wide analysis of segmental duplications in the modern domesticated cattle (Bos taurus). Using two distinct computational analyses, we estimat...

  9. Statistical Analysis of Manual Segmentations of Structures in Medical Images

    E-print Network

    Grimm, Cindy

    statistical shape theory to generate joint inferences and analyze this data generated by the citizenStatistical Analysis of Manual Segmentations of Structures in Medical Images Sebastian Kurtek§ , Jingyong Su , Cindy Grimm , Michelle Vaughan , Ross Sowell , Anuj Srivastava §Department of Statistics

  10. Salted and preserved duck eggs: a consumer market segmentation analysis.

    PubMed

    Arthur, Jennifer; Wiseman, Kelleen; Cheng, K M

    2015-08-01

The combination of increasing ethnic diversity in North America and growing consumer support for local food products may present opportunities for local producers and processors in the ethnic foods product category. Our study examined the ethnic Chinese (pop. 402,000) market for salted and preserved duck eggs in Vancouver, British Columbia (BC), Canada. The objective of the study was to develop a segmentation model using survey data to categorize consumer groups based on their attitudes and the importance they placed on product attributes. We further used post-segmentation acculturation score, demographics and buyer behaviors to define these groups. Data were gathered via a survey of randomly selected Vancouver households with Chinese surnames (n = 410), targeting the adult responsible for grocery shopping. Results from principal component analysis and a 2-step cluster analysis suggest the existence of 4 market segments, described as Enthusiasts, Potentialists, Pragmatists, Health Skeptics (salted duck eggs), and Neutralists (preserved duck eggs). Kruskal-Wallis tests and post hoc Mann-Whitney tests found significant differences between segments in terms of attitudes and the importance placed on product characteristics. Health Skeptics, preserved egg Potentialists, and Pragmatists of both egg products were significantly biased against Chinese imports compared to others. Except for Enthusiasts, segments disagreed that eggs are 'Healthy Products'. Preserved egg Enthusiasts had a significantly lower acculturation score (AS) compared to all others, while salted egg Enthusiasts had a lower AS compared to Health Skeptics. All segments rated "produced in BC, not mainland China" products in the "neutral to very likely" range for increasing their satisfaction with the eggs. Results also indicate that buyers of each egg type are willing to pay an average premium of at least 10% more for BC produced products versus imports, with all other characteristics equal. Overall results indicate that opportunities exist for local producers and processors: Chinese Canadians with lower AS form a core part of the potential market. PMID:26089479

  11. A novel colonic polyp volume segmentation method for computer tomographic colonography

    NASA Astrophysics Data System (ADS)

    Wang, Huafeng; Li, Lihong C.; Han, Hao; Song, Bowen; Peng, Hao; Wang, Yunhong; Wang, Lihua; Liang, Zhengrong

    2014-03-01

Colorectal cancer is the third most common type of cancer. However, this disease can be prevented by detection and removal of precursor adenomatous polyps after diagnosis by experts on computer tomographic colonography (CTC). During CTC diagnosis, the radiologist looks for colon polyps and assesses not only their size but also their malignancy. Segmenting polyp volumes from their complicated growing environment is therefore of great significance for the CTC-based early diagnosis task. Previously, polyp volumes were mainly obtained by manual or semi-automatic delineation by radiologists. As a result, some deviation is unavoidable, since polyps are usually small (6~9 mm) and radiologists' experience and knowledge vary from one to another. To achieve automatic polyp segmentation by machine, we propose a new method based on a colon decomposition strategy. We evaluated our algorithm on both phantom and patient data. Experimental results demonstrate that our approach is capable of segmenting small polyps from their complicated growing background.

  12. Documented Safety Analysis for the B695 Segment

    SciTech Connect

    Laycak, D

    2008-09-11

This Documented Safety Analysis (DSA) was prepared for the Lawrence Livermore National Laboratory (LLNL) Building 695 (B695) Segment of the Decontamination and Waste Treatment Facility (DWTF). The report provides comprehensive information on design and operations, including safety programs and safety structures, systems and components to address the potential process-related hazards, natural phenomena, and external hazards that can affect the public, facility workers, and the environment. Consideration is given to all modes of operation, including the potential for both equipment failure and human error. The facilities known collectively as the DWTF are used by LLNL's Radioactive and Hazardous Waste Management (RHWM) Division to store and treat regulated wastes generated at LLNL. RHWM generally processes low-level radioactive waste with no, or extremely low, concentrations of transuranics (e.g., much less than 100 nCi/g). Wastes processed often contain only depleted uranium and beta- and gamma-emitting nuclides, e.g., ⁹⁰Sr, ¹³⁷Cs, or ³H. The mission of the B695 Segment centers on container storage, lab-packing, repacking, overpacking, bulking, sampling, waste transfer, and waste treatment. The B695 Segment is used for storage of radioactive waste (including transuranic and low-level), hazardous, nonhazardous, mixed, and other waste. Storage of hazardous and mixed waste in B695 Segment facilities is in compliance with the Resource Conservation and Recovery Act (RCRA). LLNL is operated by the Lawrence Livermore National Security, LLC, for the Department of Energy (DOE). The B695 Segment is operated by the RHWM Division of LLNL. Many operations in the B695 Segment are performed under an RCRA operation plan, similar to commercial treatment operations with best demonstrated available technologies.
The buildings of the B695 Segment were designed and built considering such operations, using proven building systems, and keeping them as simple as possible while complying with industry standards and institutional requirements. No operations to be performed in the B695 Segment or building system are considered to be complex. No anticipated future change in the facility mission is expected to impact the extent of safety analysis documented in this DSA.

  13. Semi-automated segmentation of carotid artery total plaque volume from three dimensional ultrasound carotid imaging

    NASA Astrophysics Data System (ADS)

    Buchanan, D.; Gyacskov, I.; Ukwatta, E.; Lindenmaier, T.; Fenster, A.; Parraga, G.

    2012-03-01

    Carotid artery total plaque volume (TPV) is a three-dimensional (3D) ultrasound (US) imaging measurement of carotid atherosclerosis, providing a direct non-invasive and regional estimation of atherosclerotic plaque volume - the direct determinant of carotid stenosis and ischemic stroke. While 3DUS measurements of TPV provide the potential to monitor plaque in individual patients and in populations enrolled in clinical trials, until now, such measurements have been performed manually which is laborious, time-consuming and prone to intra-observer and inter-observer variability. To address this critical translational limitation, here we describe the development and application of a semi-automated 3DUS plaque volume measurement. This semi-automated TPV measurement incorporates three user-selected boundaries in two views of the 3DUS volume to generate a geometric approximation of TPV for each plaque measured. We compared semi-automated repeated measurements to manual segmentation of 22 individual plaques ranging in volume from 2mm3 to 151mm3. Mean plaque volume was 43+/-40mm3 for semi-automated and 48+/-46mm3 for manual measurements and these were not significantly different (p=0.60). Mean coefficient of variation (CV) was 12.0+/-5.1% for the semi-automated measurements.
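The coefficient of variation used above to quantify repeatability can be computed as in this minimal sketch (the repeated plaque measurements shown are hypothetical):

```python
import statistics

def coefficient_of_variation(measurements):
    """CV (%) = sample standard deviation / mean * 100, a standard
    measure of repeatability for repeated volume measurements."""
    mean = statistics.mean(measurements)
    return statistics.stdev(measurements) / mean * 100.0

# Hypothetical repeated volume measurements of one plaque (mm^3)
repeats = [41.0, 44.5, 39.8, 43.2, 42.0]
print(round(coefficient_of_variation(repeats), 1))  # -> 4.4
```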

  14. QUANTIFICATION OF MENISCAL VOLUME BY SEGMENTATION OF 3T MAGNETIC RESONANCE IMAGES

    PubMed Central

    Bowers, Megan E.; Tung, Glenn A; Fleming, Braden C.; Crisco, Joseph J.; Rey, Jesus

    2007-01-01

Meniscal injuries place the knee at risk for early osteoarthritis (OA) because they disrupt the menisci's load-bearing capability. Partial resection is routinely performed to alleviate symptomatic meniscal tears. While the removal of meniscal tissue may not be the only factor associated with partial meniscectomy outcome, the amount removed certainly contributes to functional loss. It is unknown, however, whether there is a critical amount of meniscal tissue that can be removed without diminishing the structure's chondroprotective role. In order to examine the existence of such a threshold, it is necessary to accurately quantify meniscal volume both before and after partial meniscectomy to determine the amount of meniscal tissue removed. Therefore, our goal was to develop and validate an MR-based method for assessing meniscal volume. The specific aims were: (1) to evaluate the feasibility of the MR-based segmentation method; (2) to determine the method's reliability for repeated measurements; and (3) to validate its accuracy in situ. MR images were obtained on a 3T magnet, and each scan was segmented using a biplanar approach. The MR-based volumes for each specimen were compared to those measured by water displacement. The results indicate that the biplanar approach of measuring meniscal volumes is accurate and reliable. The calculated volumes of the menisci were within 5% of the true values, the coefficients of variation were 4%, and the intraclass correlation coefficients were greater than 0.96. These data demonstrate that this method could be used to measure the amount of meniscal tissue excised during partial meniscectomy to within 125.7mm3. PMID:17391677

  15. Automatic comic page image understanding based on edge segment analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

    Comic page image understanding aims to analyse the layout of the comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique to produce the digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method is performed on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms the existing methods.

  16. Systematic Error in Hippocampal Volume Asymmetry Measurement is Minimal with a Manual Segmentation Protocol

    PubMed Central

    Rogers, Baxter P.; Sheffield, Julia M.; Luksik, Andrew S.; Heckers, Stephan

    2012-01-01

Hemispheric asymmetry of hippocampal volume is a common finding that has biological relevance, including associations with dementia and cognitive performance. However, a recent study has reported the possibility of systematic error in measurements of hippocampal asymmetry by magnetic resonance volumetry. We manually traced the volumes of the anterior and posterior hippocampus in 40 healthy people to measure systematic error related to image orientation. We found a bias due to the side of the screen on which the hippocampus was viewed, such that hippocampal volume was larger when traced on the left side of the screen than when traced on the right (p = 0.05). However, this bias was smaller than the anatomical right > left asymmetry of the anterior hippocampus. We found right > left asymmetry of hippocampal volume regardless of image presentation (radiological versus neurological). We conclude that manual segmentation protocols can minimize the effect of image orientation in the study of hippocampal volume asymmetry, but our confirmation that such bias exists suggests strategies to avoid it in future studies. PMID:23248580

  17. Automatic delineation of tumor volumes by co-segmentation of combined PET/MR data

    NASA Astrophysics Data System (ADS)

    Leibfarth, S.; Eckert, F.; Welz, S.; Siegel, C.; Schmidt, H.; Schwenzer, N.; Zips, D.; Thorwarth, D.

    2015-07-01

    Combined PET/MRI may be highly beneficial for radiotherapy treatment planning in terms of tumor delineation and characterization. To standardize tumor volume delineation, an automatic algorithm for the co-segmentation of head and neck (HN) tumors based on PET/MR data was developed. Ten HN patient datasets acquired in a combined PET/MR system were available for this study. The proposed algorithm uses both the anatomical T2-weighted MR and FDG-PET data. For both imaging modalities tumor probability maps were derived, assigning each voxel a probability of being cancerous based on its signal intensity. A combination of these maps was subsequently segmented using a threshold level set algorithm. To validate the method, tumor delineations from three radiation oncologists were available. Inter-observer variabilities and variabilities between the algorithm and each observer were quantified by means of the Dice similarity index and a distance measure. Inter-observer variabilities and variabilities between observers and algorithm were found to be comparable, suggesting that the proposed algorithm is adequate for PET/MR co-segmentation. Moreover, taking into account combined PET/MR data resulted in more consistent tumor delineations compared to MR information only.
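The co-segmentation idea above can be sketched as a convex combination of the two tumor-probability maps followed by thresholding; the weighting, threshold, and voxel values below are illustrative assumptions standing in for the paper's threshold level set step:

```python
def combine_probability_maps(p_pet, p_mr, w=0.5):
    """Voxelwise convex combination of PET- and MR-derived
    tumor-probability maps (w weights the PET map)."""
    return [w * a + (1 - w) * b for a, b in zip(p_pet, p_mr)]

def threshold_segment(p_combined, level=0.5):
    """Binary segmentation by thresholding the combined map
    (a stand-in for the threshold level set algorithm)."""
    return [1 if p >= level else 0 for p in p_combined]

# Hypothetical tumor probabilities for 5 voxels
pet = [0.9, 0.7, 0.4, 0.2, 0.1]
mr  = [0.8, 0.5, 0.6, 0.3, 0.0]
mask = threshold_segment(combine_probability_maps(pet, mr))
print(mask)  # -> [1, 1, 1, 0, 0]
```

Note how the third voxel, ambiguous in PET alone, is recovered once the MR evidence is folded in.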

  18. Machine learning based vesselness measurement for coronary artery segmentation in cardiac CT volumes

    NASA Astrophysics Data System (ADS)

    Zheng, Yefeng; Loziczonek, Maciej; Georgescu, Bogdan; Zhou, S. Kevin; Vega-Higuera, Fernando; Comaniciu, Dorin

    2011-03-01

    Automatic coronary centerline extraction and lumen segmentation facilitate the diagnosis of coronary artery disease (CAD), which is a leading cause of death in developed countries. Various coronary centerline extraction methods have been proposed and most of them are based on shortest path computation given one or two end points on the artery. The major variation of the shortest path based approaches is in the different vesselness measurements used for the path cost. An empirically designed measurement (e.g., the widely used Hessian vesselness) is by no means optimal in the use of image context information. In this paper, a machine learning based vesselness is proposed by exploiting the rich domain specific knowledge embedded in an expert-annotated dataset. For each voxel, we extract a set of geometric and image features. The probabilistic boosting tree (PBT) is then used to train a classifier, which assigns a high score to voxels inside the artery and a low score to those outside. The detection score can be treated as a vesselness measurement in the computation of the shortest path. Since the detection score measures the probability of a voxel to be inside the vessel lumen, it can also be used for the coronary lumen segmentation. To speed up the computation, we perform classification only for voxels around the heart surface, which is achieved by automatically segmenting the whole heart from the 3D volume in a preprocessing step. An efficient voxel-wise classification strategy is used to further improve the speed. Experiments demonstrate that the proposed learning based vesselness outperforms the conventional Hessian vesselness in both speed and accuracy. On average, it only takes approximately 2.3 seconds to process a large volume with a typical size of 512x512x200 voxels.
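The shortest-path computation described above can be illustrated with Dijkstra's algorithm, using a cost inversely proportional to the vesselness score; the toy grid, scores, and cost function are illustrative assumptions, not the paper's exact formulation:

```python
import heapq

def shortest_path(vesselness, start, goal, neighbors):
    """Dijkstra over voxels; stepping onto voxel v costs 1/vesselness[v],
    so the cheapest path follows high-vesselness (vessel-like) voxels."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v in neighbors(u):
            nd = d + 1.0 / vesselness[v]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    # Walk predecessor links back from the goal
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy 3x3 "slice": a low-vesselness column separates two ends of a vessel
ves = {(r, c): 0.9 for r in range(3) for c in range(3)}
ves[(0, 1)] = ves[(1, 1)] = 0.05
nbrs = lambda p: [q for q in [(p[0]-1, p[1]), (p[0]+1, p[1]),
                              (p[0], p[1]-1), (p[0], p[1]+1)] if q in ves]
# The path detours around the low-vesselness voxels:
print(shortest_path(ves, (0, 0), (0, 2), nbrs))
```

A learned vesselness simply replaces the hand-set scores in `ves`; the path machinery is unchanged.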

  19. Hierarchical probabilistic Gabor and MRF segmentation of brain tumours in MRI volumes.

    PubMed

    Subbanna, Nagesh K; Precup, Doina; Collins, D Louis; Arbel, Tal

    2013-01-01

    In this paper, we present a fully automated hierarchical probabilistic framework for segmenting brain tumours from multispectral human brain magnetic resonance images (MRIs) using multiwindow Gabor filters and an adapted Markov Random Field (MRF) framework. In the first stage, a customised Gabor decomposition is developed, based on the combined-space characteristics of the two classes (tumour and non-tumour) in multispectral brain MRIs in order to optimally separate tumour (including edema) from healthy brain tissues. A Bayesian framework then provides a coarse probabilistic texture-based segmentation of tumours (including edema) whose boundaries are then refined at the voxel level through a modified MRF framework that carefully separates the edema from the main tumour. This customised MRF is not only built on the voxel intensities and class labels as in traditional MRFs, but also models the intensity differences between neighbouring voxels in the likelihood model, along with employing a prior based on local tissue class transition probabilities. The second inference stage is shown to resolve local inhomogeneities and impose a smoothing constraint, while also maintaining the appropriate boundaries as supported by the local intensity difference observations. The method was trained and tested on the publicly available MICCAI 2012 Brain Tumour Segmentation Challenge (BRATS) Database [1] on both synthetic and clinical volumes (low grade and high grade tumours). Our method performs well compared to state-of-the-art techniques, outperforming the results of the top methods in cases of clinical high grade and low grade tumour core segmentation by 40% and 45% respectively. PMID:24505735

  20. Linear test bed. Volume 1: Test bed no. 1. [aerospike test bed with segmented combustor

    NASA Technical Reports Server (NTRS)

    1972-01-01

The objective of the Linear Test Bed program was to design, fabricate, and evaluate an advanced aerospike test bed which employed the segmented combustor concept. The system is designated as a linear aerospike system and consists of a thrust chamber assembly, a power package, and a thrust frame. It was designed as an experimental system to demonstrate the feasibility of the linear aerospike-segmented combustor concept. The overall dimensions are 120 inches long by 120 inches wide by 96 inches in height. The propellants are liquid oxygen/liquid hydrogen. The system was designed to operate at 1200-psia chamber pressure, at a mixture ratio of 5.5. At the design conditions, the sea level thrust is 200,000 pounds. The complete program including concept selection, design, fabrication, component test, system test, supporting analysis and posttest hardware inspection is described.

  1. An automatic method of brain tumor segmentation from MRI volume based on the symmetry of brain and level set method

    NASA Astrophysics Data System (ADS)

    Li, Xiaobing; Qiu, Tianshuang; Lebonvallet, Stephane; Ruan, Su

    2010-02-01

This paper presents a brain tumor segmentation method which automatically segments tumors from human brain MRI volumes. The model is based on the symmetry of the human brain and the level set method. First, the midsagittal plane of an MRI volume is located, the slices potentially containing tumor are identified according to their symmetry, and an initial tumor boundary is determined by watershed and morphological algorithms in the slice where the tumor appears largest. Second, the level set method is applied to this initial boundary, driving the curve to evolve and stop at the appropriate tumor boundary. Finally, the tumor boundary is projected slice by slice onto adjacent slices as initial boundaries, through the volume, to segment the whole tumor. The experimental results are compared with an expert's manual tracing and show good agreement.
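The symmetry check that flags slices with potential tumor can be sketched as a mirror-difference score; this simple mean-absolute-difference measure and the toy slices are illustrative assumptions, not the authors' exact criterion:

```python
def symmetry_score(slice_rows):
    """Mean absolute difference between a 2D slice and its left-right
    mirror about the midline; larger values suggest an asymmetric lesion."""
    total, count = 0.0, 0
    for row in slice_rows:
        for a, b in zip(row, reversed(row)):
            total += abs(a - b)
            count += 1
    return total / count

healthy = [[1, 2, 2, 1], [0, 3, 3, 0]]  # perfectly symmetric slice
lesion  = [[1, 2, 9, 1], [0, 3, 3, 0]]  # bright blob on one side
print(symmetry_score(healthy), symmetry_score(lesion))  # -> 0.0 1.75
```

Slices whose score exceeds some baseline would be the ones "checked out" for tumor in the pipeline above.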

  2. Breast Density Analysis Using an Automatic Density Segmentation Algorithm.

    PubMed

    Oliver, Arnau; Tortajada, Meritxell; Lladó, Xavier; Freixenet, Jordi; Ganau, Sergi; Tortajada, Lidia; Vilagran, Mariona; Sentís, Melcior; Martí, Robert

    2015-10-01

Breast density is a strong risk factor for breast cancer. In this paper, we present an automated approach for breast density segmentation in mammographic images based on a supervised pixel-based classification and using textural and morphological features. The objective of the paper is not only to show the feasibility of an automatic algorithm for breast density segmentation but also to prove its potential application to the study of breast density evolution in longitudinal studies. The database used here contains three complete screening examinations, acquired 2 years apart, of 130 different patients. The approach was validated by comparing manual expert annotations with automatically obtained estimations. Transversal analysis of the breast density analysis of craniocaudal (CC) and mediolateral oblique (MLO) views of both breasts acquired in the same study showed a correlation coefficient of r = 0.96 between the mammographic density percentage for left and right breasts, whereas a comparison of both mammographic views showed a correlation of r = 0.95. A longitudinal study of breast density confirmed the trend that dense tissue percentage decreases over time, although we noticed that the decrease in the ratio depends on the initial amount of breast density. PMID:25720749

  3. Three-dimensional segmentation of pulmonary artery volume from thoracic computed tomography imaging

    NASA Astrophysics Data System (ADS)

    Lindenmaier, Tamas J.; Sheikh, Khadija; Bluemke, Emma; Gyacskov, Igor; Mura, Marco; Licskai, Christopher; Mielniczuk, Lisa; Fenster, Aaron; Cunningham, Ian A.; Parraga, Grace

    2015-03-01

Chronic obstructive pulmonary disease (COPD), is a major contributor to hospitalization and healthcare costs in North America. While the hallmark of COPD is airflow limitation, it is also associated with abnormalities of the cardiovascular system. Enlargement of the pulmonary artery (PA) is a morphological marker of pulmonary hypertension, and was previously shown to predict acute exacerbations using a one-dimensional diameter measurement of the main PA. We hypothesized that a three-dimensional (3D) quantification of PA size would be more sensitive than 1D methods and encompass morphological changes along the entire central pulmonary artery. Hence, we developed a 3D measurement of the main (MPA), left (LPA) and right (RPA) pulmonary arteries as well as total PA volume (TPAV) from thoracic CT images. This approach incorporates segmentation of pulmonary vessels in cross-section for the MPA, LPA and RPA to provide an estimate of their volumes. Three observers performed five repeated measurements for 15 ex-smokers with ≥10 pack-years, randomly identified from a larger dataset of 199 patients. There was strong agreement (r2 = 0.76) between PA volume and PA diameter measurements, with the 1D diameter used as the gold standard. Observer measurements were strongly correlated, and coefficients of variation for observer 1 (MPA: 2%, LPA: 3%, RPA: 2%, TPAV: 2%) were not significantly different from observer 2 and 3 results. In conclusion, we generated manual 3D pulmonary artery volume measurements from thoracic CT images that can be performed with high reproducibility. Future work will involve automation for implementation in clinical workflows.
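A vessel volume of this kind can be approximated by summing cross-sectional areas along the centerline; the circular-cross-section assumption and the numbers below are illustrative only, not the study's segmentation method:

```python
import math

def volume_from_cross_sections(diameters_mm, spacing_mm):
    """Approximate vessel volume by summing circular cross-sectional
    areas along the centerline, each weighted by the slice spacing."""
    return sum(math.pi * (d / 2.0) ** 2 * spacing_mm for d in diameters_mm)

# Hypothetical main PA: five cross-sections, 5 mm apart
diameters = [28.0, 29.0, 30.0, 29.5, 28.5]
vol = volume_from_cross_sections(diameters, 5.0)
print(round(vol))  # volume in mm^3
```

This also makes the contrast with the 1D marker concrete: a single diameter ignores how caliber varies along the vessel, while the sum captures it.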

  4. Volume segmentation and reconstruction from freehand three-dimensional ultrasound data with application to ovarian follicle measurement.

    PubMed

    Gooding, Mark J; Kennedy, Stephen; Noble, J Alison

    2008-02-01

    This article presents a semi-automatic method for segmentation and reconstruction of freehand three-dimensional (3D) ultrasound data. The method incorporates a number of interesting features within the level-set framework: First, segmentation is carried out using region competition, requiring multiple distinct and competing regions to be encoded within the framework. This region competition uses a simple dot-product based similarity measure to compare intensities within each region. In addition, segmentation and surface reconstruction is performed within the 3D domain to take advantage of the additional spatial information available. This means that the method must interpolate the surface where there are gaps in the data, a feature common to freehand 3D ultrasound reconstruction. Finally, although the level-set method is restricted to a voxel grid, no assumption is made that the data being segmented will conform to this grid and may be segmented in its world-reference position. The volume reconstruction method is demonstrated in vivo for the volume measurement of ovarian follicles. The 3D reconstructions produce a lower error variance than the current clinical measurement based on a mean diameter estimated from two-dimensional (2D) images. However, both the clinical measurement and the semi-automatic method appear to underestimate the true follicular volume. PMID:17935866
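The gap between the clinical 2D estimate and a volumetric measurement can be illustrated with a simple geometric comparison; the follicle dimensions below are hypothetical:

```python
import math

def sphere_volume_from_mean_diameter(d1, d2):
    """Clinical 2D estimate: volume of a sphere whose diameter is the
    mean of two orthogonal diameters measured on 2D images."""
    d = (d1 + d2) / 2.0
    return math.pi / 6.0 * d ** 3

def ellipsoid_volume(d1, d2, d3):
    """Volume of an ellipsoid given its three axis diameters."""
    return math.pi / 6.0 * d1 * d2 * d3

# Hypothetical follicle, 20 x 14 mm in-plane, 12 mm out-of-plane
print(round(sphere_volume_from_mean_diameter(20, 14)))  # -> 2572 (2D estimate, mm^3)
print(round(ellipsoid_volume(20, 14, 12)))              # -> 1759 (closer to the true shape)
```

For an elongated follicle the sphere-from-mean-diameter estimate can deviate substantially from the true volume, which motivates segmenting in 3D.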

  5. Three-dimensional analysis tool for segmenting and measuring the structure of telomeres in mammalian nuclei

    NASA Astrophysics Data System (ADS)

    Vermolen, Bart J.; Young, Ian T.; Chuang, Alice; Wark, Landon; Chuang, Tony; Mai, Sabine; Garini, Yuval

    2005-03-01

Quantitative analysis in combination with fluorescence microscopy calls for innovative digital image measurement tools. We have developed a three-dimensional tool for segmenting and analyzing FISH-stained telomeres in interphase nuclei. After deconvolution of the images, we segment the individual telomeres and measure a distribution parameter we call ρT. This parameter describes whether the telomeres are distributed in a sphere-like volume (ρT ≈ 1) or in a disk-like volume (ρT >> 1). Because of the statistical nature of this parameter, we have to correct for the fact that we do not have an infinite number of telomeres with which to calculate it. In this study we show a way to do this correction. After sorting mouse lymphocytes, calculating ρT, and applying the correction introduced in this paper, we show a significant difference between nuclei in G2 and nuclei in either G0/G1 or S phase. The mean values of ρT for G0/G1, S and G2 are 1.03, 1.02 and 13, respectively.

  6. Automatic quantification of epicardial fat volume on non-enhanced cardiac CT scans using a multi-atlas segmentation approach

    E-print Network

    van Vliet, Lucas J.

… quantification of epicardial fat volume on non-enhanced cardiac CT scans is therefore of clinical interest. The purpose of this work is …

  7. Semi-automatic segmentation for 3D motion analysis of the tongue with dynamic MRI

    PubMed Central

    Lee, Junghoon; Woo, Jonghye; Xing, Fangxu; Murano, Emi Z.; Stone, Maureen; Prince, Jerry L.

    2014-01-01

    Dynamic MRI has been widely used to track the motion of the tongue and measure its internal deformation during speech and swallowing. Accurate segmentation of the tongue is a prerequisite step to define the target boundary and constrain the tracking to tissue points within the tongue. Segmentation of 2D slices or 3D volumes is challenging because of the large number of slices and time frames involved in the segmentation, as well as the incorporation of numerous local deformations that occur throughout the tongue during motion. In this paper, we propose a semi-automatic approach to segment 3D dynamic MRI of the tongue. The algorithm steps include seeding a few slices at one time frame, propagating seeds to the same slices at different time frames using deformable registration, and random walker segmentation based on these seed positions. This method was validated on the tongue of five normal subjects carrying out the same speech task with multi-slice 2D dynamic cine-MR images obtained at three orthogonal orientations and 26 time frames. The resulting semi-automatic segmentations of a total of 130 volumes showed an average dice similarity coefficient (DSC) score of 0.92 with less segmented volume variability between time frames than in manual segmentations. PMID:25155697
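The Dice similarity coefficient (DSC) used for validation above is computed as 2·|A ∩ B| / (|A| + |B|); the toy masks below are for illustration only:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size_a = sum(1 for a in mask_a if a)
    size_b = sum(1 for b in mask_b if b)
    return 2.0 * inter / (size_a + size_b)

# Hypothetical flattened automatic vs. manual segmentations
auto   = [0, 1, 1, 1, 0, 1]
manual = [0, 1, 1, 0, 0, 1]
print(round(dice(auto, manual), 2))  # -> 0.86
```

A DSC of 1 means identical masks; the 0.92 reported above indicates strong voxelwise overlap with manual segmentation.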

  8. Quantitative Analysis of Mouse Retinal Layers Using Automated Segmentation of Spectral Domain Optical Coherence Tomography Images

    PubMed Central

    Dysli, Chantal; Enzmann, Volker; Sznitman, Raphael; Zinkernagel, Martin S.

    2015-01-01

Purpose Quantification of retinal layers using automated segmentation of optical coherence tomography (OCT) images allows for longitudinal studies of retinal and neurological disorders in mice. The purpose of this study was to compare the performance of automated retinal layer segmentation algorithms with data from manual segmentation in mice using the Spectralis OCT. Methods Spectral domain OCT images from 55 mice from three different mouse strains were analyzed in total. The OCT scans from 22 C57Bl/6, 22 BALBc, and 11 C3A.Cg-Pde6b+Prph2Rd2/J mice were automatically segmented using three commercially available automated retinal segmentation algorithms and compared to manual segmentation. Results Fully automated segmentation performed well in mice and showed coefficients of variation (CV) below 5% for the total retinal volume. However, all three automated segmentation algorithms yielded substantially greater total retinal thickness values than manual segmentation (P < 0.0001), due to segmentation errors at the basement membrane. Conclusions Whereas the automated retinal segmentation algorithms performed well for the inner layers, the retinal pigment epithelium (RPE) was delineated within the sclera, leading to consistently thicker measurements of the photoreceptor layer and the total retina. Translational Relevance The introduction of spectral domain OCT allows for accurate imaging of the mouse retina. Exact quantification of retinal layer thicknesses in mice is important to study layers of interest under various pathological conditions. PMID:26336634

  9. Study of tracking and data acquisition system for the 1990's. Volume 4: TDAS space segment architecture

    NASA Technical Reports Server (NTRS)

    Orr, R. S.

    1984-01-01

Tracking and data acquisition system (TDAS) requirements, TDAS architectural goals, enhanced TDAS subsystems, constellation and networking options, TDAS spacecraft options, crosslink implementation, baseline TDAS space segment architecture, and threat model development/security analysis are addressed.

  10. A microelectrode for continuous recording of volume fluxes in isolated perfused tubule segments.

    PubMed

    Geibel, J; Völkl, H; Lang, F

    1984-04-01

The manufacture, properties, and use of a micro enzyme electrode for continuous monitoring of volume fluxes in the isolated tubule preparation are described. The specific electrode is a galactose-oxidase enzyme electrode, which can be used to detect changes in raffinose concentration. The electrode's response to raffinose is almost linear over concentrations from 0-12 mmol/l. The electrode responds equally to galactose and raffinose but is insensitive to the other sugars tested, to pH changes (from 6.0-8.0), to CO2 (from 1-10%), and to the electrolytes tested. Reducing O2 from 100% to 10% and to 1% reduces the reading by 10% and 30%, respectively. The reading almost doubles when the temperature is increased from 20 to 40 degrees C. Furthermore, reducing agents such as uric acid and ascorbic acid interfere with the reading. If these substances and raffinose are omitted from the perfusate of isolated perfused proximal mouse tubules, the reading is identical in perfusate and collected fluid, indicating that the tubular epithelium does not produce substances in amounts sufficient to interfere with the electrode reading. After addition of 6 mmol/l raffinose to the perfusate, the raffinose concentration in the collected fluid of 0.76 +/- 0.05 mm segments of straight proximal mouse tubules (perfusion rate = 3.4 +/- 0.45 nl/min) is 10.2 +/- 0.3 mmol/l, indicating a volume reabsorption of 1.5 +/- 0.3 nl/min. Peritubular application of acetazolamide reduces the volume reabsorption by 42 +/- 4%. PMID:6462886
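The volume reabsorption reported above follows from conservation of the impermeant marker: raffinose mass flow into the tubule equals mass flow out, so the collected flow is perfusion rate × C_in / C_out. Plugging in the abstract's numbers reproduces its estimate within the stated error:

```python
def volume_reabsorption(perfusion_rate, c_perfusate, c_collected):
    """Raffinose is not reabsorbed, so its mass flow is conserved:
    c_perfusate * perfusion_rate = c_collected * collected_rate.
    Reabsorbed volume flux is the difference of the two flows."""
    collected_rate = perfusion_rate * c_perfusate / c_collected
    return perfusion_rate - collected_rate

# Numbers from the abstract: 3.4 nl/min perfusion, 6 -> 10.2 mmol/l raffinose
print(round(volume_reabsorption(3.4, 6.0, 10.2), 1))  # -> 1.4 nl/min (abstract: 1.5 +/- 0.3)
```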

  11. Analysis of Retinal Peripapillary Segmentation in Early Alzheimer's Disease Patients

    PubMed Central

    Salobrar-Garcia, Elena; Hoyas, Irene; Leal, Mercedes; de Hoz, Rosa; Rojas, Blanca; Ramirez, Ana I.; Salazar, Juan J.; Yubero, Raquel; Gil, Pedro; Triviño, Alberto; Ramirez, José M.

    2015-01-01

    Decreased thickness of the retinal nerve fiber layer (RNFL) may reflect retinal ganglion cell death. A decrease in the RNFL has been demonstrated by optical coherence tomography (OCT) in Alzheimer's disease (AD) as well as in aging. Twenty-three mild-AD patients and 28 age-matched control subjects, with mean Mini-Mental State Examination scores of 23.3 and 28.2, respectively, and with no ocular disease or systemic disorders affecting vision, were considered for study. OCT peripapillary and macular segmentation thicknesses were examined in the right eye of each patient. Compared to controls, eyes of mild-AD patients showed no statistical difference in peripapillary RNFL thickness (P > 0.05); however, sectors 2, 3, 4, 8, 9, and 11 of the papilla showed thinning, while sectors 1, 5, 6, 7, and 10 showed thickening. Total macular volume and RNFL thickness of the fovea in all four inner quadrants and in the outer temporal quadrants proved to be significantly decreased (P < 0.01). Although peripapillary RNFL thickness did not statistically differ from that of control eyes, the increase in peripapillary thickness in our mild-AD patients could correspond to an early neurodegeneration stage and may indicate an inflammatory process that could lead to progressive peripapillary fiber damage. PMID:26557684

  12. Influence of cold walls on PET image quantification and volume segmentation: A phantom study

    SciTech Connect

    Berthon, B.; Marshall, C.; Edwards, A.; Spezi, E.; Evans, M.

    2013-08-15

    Purpose: Commercially available fillable plastic inserts used in positron emission tomography phantoms usually have thick plastic walls separating their content from the background activity. These “cold” walls can modify the intensity values of neighboring active regions due to the partial volume effect, resulting in errors in the estimation of standardized uptake values. Numerous papers suggest that this is an issue for phantom work simulating tumor tissue, quality control, and calibration work. This study aims to investigate the influence of cold plastic wall thickness on the quantification of 18F-fluorodeoxyglucose, on image activity recovery, and on the performance of advanced automatic segmentation algorithms for the delineation of active regions delimited by plastic walls. Methods: A commercial set of six spheres of different diameters was replicated using a manufacturing technique which achieves a reduction in plastic wall thickness of up to 90%, while keeping the same internal volume. Both sets of thin- and thick-wall inserts were imaged simultaneously in a custom phantom for six different tumor-to-background ratios (TBRs). Intensity values were compared in terms of the mean and maximum standardized uptake values (SUVs) in the spheres and the mean SUV of the hottest 1 ml region (SUV{sub mean}, SUV{sub max}, and SUV{sub peak}). The recovery coefficient (RC) was also derived for each sphere. The results were compared against the values predicted by a theoretical model of the PET-intensity profiles for the same TBRs, sphere sizes, and wall thicknesses. In addition, ten automatic segmentation methods, written in house, were applied to both thin- and thick-wall inserts. The contours obtained were compared to a computed tomography-derived gold standard (“ground truth”) using five different accuracy metrics. Results: The authors' results showed that thin-wall inserts achieved significantly higher SUV{sub mean}, SUV{sub max}, and RC values (up to 25%, 16%, and 25% higher, respectively) compared to thick-wall inserts, which was in agreement with the theory. This effect decreased with increasing sphere size and TBR, and resulted in substantial (>5%) differences between thin- and thick-wall inserts for spheres up to 30 mm in diameter and TBRs up to 4. Thinner plastic walls were also shown to significantly improve delineation accuracy for the majority of the segmentation methods tested, by increasing the proportion of lesion voxels detected, although the errors in image quantification remained non-negligible. Conclusions: This study quantified the significant effect of a 90% reduction in insert wall thickness on SUV quantification and PET-based boundary detection. Mean SUVs inside the inserts and recovery coefficients were particularly affected by the presence of thick cold walls, as predicted by a theoretical approach. The accuracy of some delineation algorithms was also significantly improved by the use of thin-wall inserts instead of thick-wall inserts. This study demonstrates the risk of errors deriving from the use of cold-wall inserts to assess and compare the performance of PET segmentation methods.

  13. High volume data storage architecture analysis

    NASA Technical Reports Server (NTRS)

    Malik, James M.

    1990-01-01

    A High Volume Data Storage Architecture Analysis was conducted. The results, presented in this report, will be applied to problems of high volume data requirements such as those anticipated for the Space Station Control Center. High volume data storage systems at several different sites were analyzed for archive capacity, storage hierarchy and migration philosophy, and retrieval capabilities. Proposed architectures were solicited from the sites selected for in-depth analysis. Model architectures for a hypothetical data archiving system, for a high speed file server, and for high volume data storage are attached.

  14. The Analysis of Image Segmentation Hierarchies with a Graph-based Knowledge Discovery System

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Cook, Diane J.; Ketkar, Nikhil; Aksoy, Selim

    2008-01-01

    Currently available pixel-based analysis techniques do not effectively extract the information content from the increasingly available high spatial resolution remotely sensed imagery data. A general consensus is that object-based image analysis (OBIA) is required to effectively analyze this type of data. OBIA is usually a two-stage process: image segmentation followed by an analysis of the segmented objects. We are exploring an approach to OBIA in which hierarchical image segmentations provided by the Recursive Hierarchical Segmentation (RHSEG) software developed at NASA GSFC are analyzed by the Subdue graph-based knowledge discovery system developed by a team at Washington State University. In this paper we discuss our initial approach to representing the RHSEG-produced hierarchical image segmentations in a graphical form understandable by Subdue, and provide results on real and simulated data. We also discuss planned improvements designed to more effectively and completely convey the hierarchical segmentation information to Subdue and to improve processing efficiency.

  15. Non-invasive measurement of choroidal volume change and ocular rigidity through automated segmentation of high-speed OCT imaging

    PubMed Central

    Beaton, L.; Mazzaferri, J.; Lalonde, F.; Hidalgo-Aguirre, M.; Descovich, D.; Lesk, M. R.; Costantino, S.

    2015-01-01

    We have developed a novel optical approach to determine pulsatile ocular volume changes using automated segmentation of the choroid, which, together with Dynamic Contour Tonometry (DCT) measurements of intraocular pressure (IOP), allows estimation of the ocular rigidity (OR) coefficient. Spectral Domain Optical Coherence Tomography (OCT) videos were acquired with Enhanced Depth Imaging (EDI) at 7 Hz for ~50 seconds at the fundus. A novel segmentation algorithm based on graph search with an edge-probability weighting scheme was developed to measure choroidal thickness (CT) at each frame. Global ocular volume fluctuations were derived from frame-to-frame CT variations using an approximate eye model. Immediately after imaging, IOP and ocular pulse amplitude (OPA) were measured using DCT. OR was calculated from these peak pressure and volume changes. Our automated segmentation algorithm provides the first non-invasive method for determining ocular volume change due to pulsatile choroidal filling, and the estimation of the OR constant. Future applications of this method offer an important avenue to understanding the biomechanical basis of ocular pathophysiology. PMID:26137373
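
    The abstract does not give the formula used to turn the peak pressure and volume changes into an OR coefficient. A minimal sketch assuming Friedenwald's pressure-volume relation, ln(P2/P1) = K * dV, a common choice in the ocular rigidity literature (the values below are illustrative, not the study's data):

```python
import math

# Ocular rigidity coefficient from pulsatile IOP and choroidal volume change,
# under the assumed Friedenwald relation ln(P2/P1) = K * dV.

def ocular_rigidity(iop_diastolic_mmHg, opa_mmHg, dv_uL):
    """Rigidity coefficient K (1/uL)."""
    p1 = iop_diastolic_mmHg
    p2 = iop_diastolic_mmHg + opa_mmHg  # systolic peak = IOP + pulse amplitude
    return math.log(p2 / p1) / dv_uL

# Illustrative (made-up) values: IOP 15 mmHg, OPA 3 mmHg, dV 5 uL.
print(round(ocular_rigidity(15.0, 3.0, 5.0), 4))
```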

  16. Non-invasive measurement of choroidal volume change and ocular rigidity through automated segmentation of high-speed OCT imaging.

    PubMed

    Beaton, L; Mazzaferri, J; Lalonde, F; Hidalgo-Aguirre, M; Descovich, D; Lesk, M R; Costantino, S

    2015-05-01

    We have developed a novel optical approach to determine pulsatile ocular volume changes using automated segmentation of the choroid, which, together with Dynamic Contour Tonometry (DCT) measurements of intraocular pressure (IOP), allows estimation of the ocular rigidity (OR) coefficient. Spectral Domain Optical Coherence Tomography (OCT) videos were acquired with Enhanced Depth Imaging (EDI) at 7 Hz for ~50 seconds at the fundus. A novel segmentation algorithm based on graph search with an edge-probability weighting scheme was developed to measure choroidal thickness (CT) at each frame. Global ocular volume fluctuations were derived from frame-to-frame CT variations using an approximate eye model. Immediately after imaging, IOP and ocular pulse amplitude (OPA) were measured using DCT. OR was calculated from these peak pressure and volume changes. Our automated segmentation algorithm provides the first non-invasive method for determining ocular volume change due to pulsatile choroidal filling, and the estimation of the OR constant. Future applications of this method offer an important avenue to understanding the biomechanical basis of ocular pathophysiology. PMID:26137373

  17. Automated detection, 3D segmentation and analysis of high resolution spine MR images using statistical shape models

    NASA Astrophysics Data System (ADS)

    Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.

    2012-12-01

    Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes, with mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02 and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm, respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans, and the shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.
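
    The agreement figures above are Dice overlap scores between manual and automated binary masks. A minimal sketch of that metric (not the authors' implementation):

```python
import numpy as np

# Dice overlap between a manual and an automated binary segmentation.

def dice(a, b):
    """2|A∩B| / (|A| + |B|) for boolean arrays of equal shape."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

manual = np.zeros((4, 4), bool); manual[1:3, 1:3] = True  # 4 voxels
auto = np.zeros((4, 4), bool); auto[1:3, 1:4] = True      # 6 voxels, 4 shared
print(round(dice(manual, auto), 2))  # 2*4 / (4+6) = 0.8
```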

  18. Improving the clinical correlation of multiple sclerosis black hole volume change by paired-scan analysis.

    PubMed

    Tam, Roger C; Traboulsee, Anthony; Riddehough, Andrew; Li, David K B

    2012-01-01

    The change in T1-hypointense lesion ("black hole") volume is an important marker of pathological progression in multiple sclerosis (MS). Black hole boundaries often have low contrast and are difficult to determine accurately, and most (semi-)automated segmentation methods first compute the T2-hyperintense lesions, which are a superset of the black holes and are typically more distinct, to form a search space for the T1w lesions. Two main potential sources of measurement noise in longitudinal black hole volume computation are partial volume and variability in the T2w lesion segmentation. A paired analysis approach is proposed herein that uses registration to equalize partial volume and lesion mask processing to combine T2w lesion segmentations across time. The scans of 247 MS patients are used to compare a selected black hole computation method with an enhanced version incorporating paired analysis, using rank correlation to a clinical variable (MS functional composite) as the primary outcome measure. The comparison is done at nine different levels of intensity, as a previous study suggests that darker black holes may yield stronger correlations. The results demonstrate that paired analysis can strongly improve longitudinal correlation (from -0.148 to -0.303 in this sample) and may produce segmentations that are more sensitive to clinically relevant changes. PMID:24179734
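
    The primary outcome above is a rank correlation between lesion volume change and a clinical score. A minimal Spearman sketch (no tie handling; the data below are illustrative, not the study's):

```python
import numpy as np

# Spearman rank correlation: Pearson correlation of the rank-transformed
# values. Assumes no tied values (tie handling omitted for brevity).

def spearman(x, y):
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean(); ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx**2).sum() * (ry**2).sum()))

vol_change = [0.1, 0.4, 0.2, 0.8, 0.5]       # black-hole volume change
msfc       = [0.3, -0.1, 0.2, -0.6, -0.2]    # clinical score (illustrative)
print(round(spearman(vol_change, msfc), 2))  # negative, as in the study
```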

  19. Glioma grading using apparent diffusion coefficient map: application of histogram analysis based on automatic segmentation.

    PubMed

    Lee, Jeongwon; Choi, Seung Hong; Kim, Ji-Hoon; Sohn, Chul-Ho; Lee, Sooyeul; Jeong, Jaeseung

    2014-09-01

    The accurate diagnosis of glioma subtypes is critical for appropriate treatment, but conventional histopathologic diagnosis often exhibits significant intra-observer variability and sampling error. The aim of this study was to investigate whether histogram analysis using an automatically segmented region of interest (ROI), excluding cystic or necrotic portions, could improve the differentiation between low-grade and high-grade gliomas. Thirty-two patients (nine low-grade and 23 high-grade gliomas) were included in this retrospective investigation. The outer boundaries of the entire tumors were manually drawn in each section of the contrast-enhanced T1-weighted MR images. We excluded cystic or necrotic portions from the entire tumor volume. The histogram analyses were performed within the ROI on normalized apparent diffusion coefficient (ADC) maps. To evaluate the contribution of the proposed method to glioma grading, we compared the areas under the receiver operating characteristic (ROC) curves. We found that an ROI excluding cystic or necrotic portions was more useful for glioma grading than an entire tumor ROI. In the case of the fifth percentile values of the normalized ADC histogram, the area under the ROC curve for the tumor ROIs excluding cystic or necrotic portions was significantly higher than that for the entire tumor ROIs (p < 0.05). Automatic segmentation of a cystic or necrotic area probably improves the ability to differentiate between high- and low-grade gliomas on an ADC map. PMID:25042540
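
    The feature compared between grades is a low percentile of the normalized ADC histogram inside the ROI, evaluated via the area under the ROC curve. A sketch using the Mann-Whitney form of the AUC on synthetic ADC values (not the study's data or code):

```python
import numpy as np

# AUC as the probability that a positive case scores above a negative case.

def roc_auc(pos, neg):
    """Mann-Whitney AUC: fraction of (pos, neg) pairs with pos > neg."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

rng = np.random.default_rng(0)
# Synthetic normalized-ADC ROIs: high-grade tumors show lower diffusion,
# so their 5th-percentile ADC values sit lower than low-grade ones.
low_grade = [np.percentile(rng.normal(1.4, 0.2, 500), 5) for _ in range(9)]
high_grade = [np.percentile(rng.normal(1.0, 0.2, 500), 5) for _ in range(23)]
print(roc_auc(low_grade, high_grade))
```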

  20. 3-D segmentation of retinal blood vessels in spectral-domain OCT volumes of the optic nerve head

    NASA Astrophysics Data System (ADS)

    Lee, Kyungmoo; Abràmoff, Michael D.; Niemeijer, Meindert; Garvin, Mona K.; Sonka, Milan

    2010-03-01

    Segmentation of retinal blood vessels can provide important information for detecting and tracking retinal vascular diseases including diabetic retinopathy, arterial hypertension, arteriosclerosis and retinopathy of prematurity (ROP). Many studies on 2-D segmentation of retinal blood vessels from a variety of medical images have been performed. However, 3-D segmentation of retinal blood vessels from spectral-domain optical coherence tomography (OCT) volumes, which is capable of providing geometrically accurate vessel models, to the best of our knowledge, has not been previously studied. The purpose of this study is to develop and evaluate a method that can automatically detect 3-D retinal blood vessels from spectral-domain OCT scans centered on the optic nerve head (ONH). The proposed method utilized a fast multiscale 3-D graph search to segment retinal surfaces as well as a triangular mesh-based 3-D graph search to detect retinal blood vessels. An experiment on 30 ONH-centered OCT scans (15 right eye scans and 15 left eye scans) from 15 subjects was performed, and the mean unsigned error in 3-D of the computer segmentations compared with the independent standard obtained from a retinal specialist was 3.4 +/- 2.5 voxels (0.10 +/- 0.07 mm).

  1. Preliminary analysis of the Knipovich Ridge segmentation: influence of focused magmatism and ridge obliquity on

    E-print Network

    Okino, Kyoko

    reserved. Keywords: mid-ocean ridges; segmentation; sea-floor spreading; gravity anomalies; sonar methods ... question facing modern investigation of mid-ocean ridge tectonics and geophysics [1]. Although spreading ... Preliminary analysis of the Knipovich Ridge segmentation: influence of focused magmatism and ridge

  2. Quantitative Analysis of Peristaltic and Segmental Motion In Vivo in the Rat Small Intestine Using Dynamic

    E-print Network

    Brasseur, James G.

    Quantitative Analysis of Peristaltic and Segmental Motion In Vivo in the Rat Small Intestine Using ... of nutrients that takes place within the small intestine. The normal processes of the small intestine are known ... been used extensively to study segments of the intestine that have been exteriorized from animals

  3. Investigation into the use of market segmentation analysis in transportation energy planning

    SciTech Connect

    Trombly, J.W.

    1985-01-01

    This research explores the application of market-segmentation analysis in transportation energy planning. The study builds on the concepts of market segmentation developed in the marketing literature to suggest a strategy of segmentation analysis for use in transportation planning. Results of the two statewide telephone surveys conducted in 1979 and 1980 for the New York State Department of Transportation are used as the data base for identifying target segments. Subjects in these surveys were asked to indicate which of 18 energy conservation actions had been implemented over the prior year to conserve gasoline. These responses serve as the basis for segmentation. Two alternative methods are pursued in identifying target market segments for purposes of transportation energy planning. The first approach consists of the application of conventional multivariate analysis procedures. The second method exploits the principles of latent trait or modern test theory. Results of the conventional analysis suggest that the data collected can be divided into eight segments. Results of the application of latent trait theory identify three market segments. Results of this study may be used to design future responses to energy shortages in addition to suggesting strategies to be pursued in measuring consumer response.
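
    The abstract does not specify which multivariate procedure produced the segments. As one conventional option for grouping respondents by their yes/no answers on the 18 conservation actions, a minimal k-means sketch on synthetic data (the initialization, group sizes, and adoption rates are all illustrative assumptions):

```python
import numpy as np

# Toy market segmentation: cluster respondents by binary conservation-action
# responses using Lloyd's k-means with a deterministic initialization
# (evenly spaced rows), which is adequate for this synthetic data.

def kmeans(X, k, iters=20):
    idx = np.linspace(0, len(X) - 1, k).astype(int)
    centers = X[idx].astype(float)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)  # squared distances
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

rng = np.random.default_rng(1)
savers = (rng.random((30, 18)) < 0.9).astype(float)  # adopt most actions
others = (rng.random((30, 18)) < 0.1).astype(float)  # adopt few actions
labels = kmeans(np.vstack([savers, others]), k=2)
```

    With well-separated response patterns like these, the two recovered clusters coincide with the two simulated respondent groups.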

  4. A COMPREHENSIVE ANALYSIS OF SWIFT XRT DATA. II. DIVERSE PHYSICAL ORIGINS OF THE SHALLOW DECAY SEGMENT

    E-print Network

    Zhang, Bing

    A COMPREHENSIVE ANALYSIS OF SWIFT XRT DATA. II. DIVERSE PHYSICAL ORIGINS OF THE SHALLOW DECAY ... ABSTRACT The origin of the shallow decay segment in Swift XRT light curves remains a puzzle. We analyze the properties of this segment with a sample of 53 long Swift GRBs detected before 2007 February. We show

  5. Early Detection of Myocardial Ischemia Using Transient ST-Segment Episode Analysis of ECG

    E-print Network

    Ng, Vincent

    %) for SCD cases, which is mainly due to Acute Myocardial Infarction (AMI), myocardial ischaemia and cardiac ... Early Detection of Myocardial Ischemia Using Transient ST-Segment Episode Analysis of ECG S. C ... the recognition of ST-segment deviations and transient ST episodes which help in the diagnosis of myocardial

  6. Three-dimensional analysis tool for segmenting and measuring the structure of telomeres in mammalian nuclei

    E-print Network

    van Vliet, Lucas J.

    Three-dimensional analysis tool for segmenting and measuring the structure of telomeres ... and analyzing FISH stained telomeres in interphase nuclei. After deconvolution of the images, we segment the individual telomeres and measure a distribution parameter we call T. This parameter describes

  7. Analysis of adjacent segment reoperation after lumbar total disc replacement

    PubMed Central

    Rainey, Scott; Blumenthal, Scott L.; Zigler, Jack E.; Guyer, Richard D.; Ohnmeiss, Donna D.

    2012-01-01

    Background Fusion has long been used for treating chronic back pain unresponsive to nonoperative care. However, potential development of adjacent segment degeneration resulting in reoperation is a concern. Total disc replacement (TDR) has been proposed as a method for addressing back pain and preventing or reducing adjacent segment degeneration. The purpose of the study was to determine the reoperation rate at the segment adjacent to a level implanted with a lumbar TDR and to analyze the pre-TDR condition of the adjacent segment. Methods This study was based on a retrospective review of charts and radiographs from a consecutive series of 1000 TDR patients to identify those who underwent reoperation because of adjacent segment degeneration. Some of the patients were part of randomized studies comparing TDR with fusion. Adjacent segment reoperation data were also collected from 67 patients who were randomized to fusion in those studies. The condition of the adjacent segment before the index surgery was compared with its condition before reoperation based on radiographs, magnetic resonance imaging (MRI), and computed tomography. Results Of the 1000 TDR patients, 20 (2.0%) underwent reoperation. The mean length of time from arthroplasty to reoperation was 28.3 months (range, 0.5–85 months). Of the adjacent segments evaluated on preoperative MRI, 38.8% were normal, 38.8% were moderately diseased, and 22.2% were classified as having severe degeneration. None of these levels had a different grading at the time of reoperation compared with the pre-TDR MRI study. Reoperation for adjacent segment degeneration was performed in 4.5% of the fusion patients. Conclusions The 2.0% rate of adjacent segment degeneration resulting in reoperation in this study is similar to the 2.0% to 2.8% range in other studies and lower than the published rates of 7% to 18% after lumbar fusion. By carefully assessing the presence of pre-existing degenerative changes before performing arthroplasty, this rate may be reduced even more. PMID:25694883

  8. Segmentation-based and rule-based spectral mixture analysis for estimating urban imperviousness

    NASA Astrophysics Data System (ADS)

    Li, Miao; Zang, Shuying; Wu, Changshan; Deng, Yingbin

    2015-03-01

    For detailed estimation of urban imperviousness, numerous image processing methods have been developed and applied to different urban areas with some success. Most of these methods, however, are global techniques. That is, they have been applied to the entire study area without considering spatial and contextual variations. To address this problem, this paper explores whether two spatio-contextual analysis techniques, namely segmentation-based and rule-based analysis, can improve urban imperviousness estimation. These two spatio-contextual techniques were incorporated into a classic urban imperviousness estimation technique, the fully-constrained linear spectral mixture analysis (FCLSMA) method. In particular, image segmentation was applied to divide the image into homogeneous segments, and spatially varying endmembers were chosen for each segment. An FCLSMA was then applied for each segment to estimate the pixel-wise fractional coverage of high-albedo material, low-albedo material, vegetation, and soil. Finally, a rule-based analysis was carried out to estimate the percent impervious surface area (%ISA). The developed technique was applied to a Landsat TM image acquired over the Milwaukee River Watershed, an urbanized watershed in Wisconsin, United States. Results indicate that the developed segmentation-based and rule-based LSMA (S-R-LSMA) outperforms traditional SMA techniques, with a mean average error (MAE) of 5.44% and R2 of 0.88. Further, a comparative analysis shows that, when compared to segmentation, rule-based analysis plays a more essential role in improving the estimation accuracy.
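
    FCLSMA solves, per pixel, a least-squares unmixing problem with nonnegativity and sum-to-one constraints on the endmember fractions. A common implementation trick, sketched here on synthetic spectra (not the authors' code), is to fold the sum-to-one constraint into a nonnegative least-squares solve via a heavily weighted extra row:

```python
import numpy as np
from scipy.optimize import nnls

# Fully-constrained linear unmixing: pixel ≈ endmembers @ fractions,
# fractions >= 0 and summing to one (enforced by the weighted extra row).

def fclsma(endmembers, pixel, weight=1e3):
    """endmembers: (bands, m) matrix; pixel: (bands,) spectrum.
    Returns m fractional abundances (>= 0, summing to ~1)."""
    bands, m = endmembers.shape
    A = np.vstack([endmembers, weight * np.ones((1, m))])
    b = np.append(pixel, weight)
    fractions, _ = nnls(A, b)
    return fractions

# Synthetic check with 4 endmember spectra over 5 bands
# (high-albedo, low-albedo, vegetation, soil).
E = np.array([[0.9, 0.1, 0.3, 0.5],
              [0.8, 0.1, 0.6, 0.4],
              [0.7, 0.2, 0.2, 0.6],
              [0.9, 0.1, 0.5, 0.3],
              [0.6, 0.2, 0.4, 0.5]])
true_f = np.array([0.1, 0.4, 0.3, 0.2])
f = fclsma(E, E @ true_f)
print(np.round(f, 2), round(float(f.sum()), 3))
```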

  9. Recurrence interval analysis of trading volumes

    NASA Astrophysics Data System (ADS)

    Ren, Fei; Zhou, Wei-Xing

    2010-06-01

    We study the statistical properties of the recurrence intervals τ between successive trading volumes exceeding a certain threshold q. The recurrence interval analysis is carried out for 20 liquid Chinese stocks covering a period from January 2000 to May 2009, and for two Chinese indices from January 2003 to April 2009. Similar to the recurrence interval distribution of price returns, the tail of the recurrence interval distribution of trading volumes follows a power-law scaling, and the results are verified by goodness-of-fit tests using the Kolmogorov-Smirnov (KS) statistic, the weighted KS statistic and the Cramér-von Mises criterion. Measurements of the conditional probability distribution and the detrended fluctuation function show that both short-term and long-term memory effects exist in the recurrence intervals between trading volumes. We further study the relationship between trading volumes and price returns based on the recurrence interval analysis method. It is found that large trading volumes are more likely to occur following large price returns, and the comovement between trading volumes and price returns is more pronounced for large trading volumes.
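
    The basic quantity of the study, the recurrence interval between volumes exceeding a threshold q, can be sketched in a few lines (the series below is illustrative, not the Chinese stock data):

```python
import numpy as np

# Recurrence intervals of trading volume: gaps (in time steps) between
# successive observations exceeding the threshold q.

def recurrence_intervals(volumes, q):
    exceed = np.flatnonzero(np.asarray(volumes) > q)
    return np.diff(exceed)

vol = [3, 9, 4, 2, 8, 1, 1, 10, 7]
print(recurrence_intervals(vol, q=6).tolist())  # exceedances at t=1,4,7,8
```

    The tail of the empirical distribution of such intervals is then tested against a power law, e.g. with the KS-type goodness-of-fit statistics named above.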

  10. Lung Extraction, Lobe Segmentation and Hierarchical Region Assessment for Quantitative Analysis on High

    E-print Network

    Lung Extraction, Lobe Segmentation and Hierarchical Region Assessment for Quantitative Analysis Care Division, Brigham and Women's Hospital, Boston, MA Abstract. Regional assessment of lung disease specific to different lung regions on high resolution computed tomography (HRCT) datasets. We present

  11. Health Lifestyles: Audience Segmentation Analysis for Public Health Interventions.

    ERIC Educational Resources Information Center

    Slater, Michael D.; Flora, June A.

    This paper is concerned with the application of market research techniques to segment large populations into homogeneous units in order to improve the reach, utilization, and effectiveness of health programs. The paper identifies seven distinctive patterns of health attitudes, social influences, and behaviors using cluster analytic techniques in a…

  12. Failure analysis for model-based organ segmentation using outlier detection

    NASA Astrophysics Data System (ADS)

    Saalbach, Axel; Wächter Stehle, Irina; Lorenz, Cristian; Weese, Jürgen

    2014-03-01

    During the last years, Model-Based Segmentation (MBS) techniques have been used in a broad range of medical applications. In clinical practice, such techniques are increasingly employed for diagnostic purposes and treatment decisions. However, it is not guaranteed that a segmentation algorithm will converge towards the desired solution. In specific situations, such as the presence of rare anatomical variants (which cannot be represented by the model) or images of extremely low quality, a meaningful segmentation might not be feasible. At the same time, an automated estimate of segmentation reliability is commonly not available. In this paper we present an approach for the identification of segmentation failures using concepts from the field of outlier detection. The approach is validated on a comprehensive set of Computed Tomography Angiography (CTA) images by means of Receiver Operating Characteristic (ROC) analysis. Encouraging results in terms of an Area Under the ROC Curve (AUC) of up to 0.965 were achieved.
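
    The abstract does not name the specific outlier detector used. One simple choice consistent with the idea, sketched here on synthetic features (not the paper's method or data), scores each case by its Mahalanobis distance from the distribution of successful segmentations:

```python
import numpy as np

# Outlier scoring for segmentation-failure detection: squared Mahalanobis
# distance of each case's feature vector from the "successful" distribution.

def mahalanobis_scores(X_train, X_test, ridge=1e-6):
    mu = X_train.mean(0)
    cov = np.cov(X_train.T) + ridge * np.eye(X_train.shape[1])
    inv = np.linalg.inv(cov)
    d = X_test - mu
    return np.einsum('ij,jk,ik->i', d, inv, d)  # squared distances

rng = np.random.default_rng(0)
ok_features = rng.normal(0.0, 1.0, (200, 3))   # successful segmentations
fail_features = rng.normal(5.0, 1.0, (5, 3))   # failures sit far away
scores = mahalanobis_scores(ok_features, np.vstack([ok_features[:5], fail_features]))
print(bool((scores[5:] > scores[:5].max()).all()))
```

    Thresholding such scores (or feeding them to ROC analysis against known failures, as in the paper) yields the failure classifier.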

  13. Computed Tomographic Image Analysis Based on FEM Performance Comparison of Segmentation on Knee Joint Reconstruction

    PubMed Central

    Jang, Seong-Wook; Seo, Young-Jin; Yoo, Yon-Sik

    2014-01-01

    The demand for an accurate and accessible image segmentation to generate 3D models from CT scan data has been increasing as such models are required in many areas of orthopedics. In this paper, to find the optimal image segmentation to create a 3D model of the knee CT data, we compared and validated segmentation algorithms based on both objective comparisons and finite element (FE) analysis. For comparison purposes, we used 1 model reconstructed in accordance with the instructions of a clinical professional and 3 models reconstructed using image processing algorithms (Sobel operator, Laplacian of Gaussian operator, and Canny edge detection). Comparison was performed by inspecting intermodel morphological deviations with the iterative closest point (ICP) algorithm, and FE analysis was performed to examine the effects of the segmentation algorithm on the results of the knee joint movement analysis. PMID:25538950

  14. A novel approach for the automated segmentation and volume quantification of cardiac fats on computed tomography.

    PubMed

    Rodrigues, É O; Morais, F F C; Morais, N A O S; Conci, L S; Neto, L V; Conci, A

    2016-01-01

    The deposits of fat surrounding the heart are correlated with several health risk factors such as atherosclerosis, carotid stiffness, coronary artery calcification, atrial fibrillation and many others. These deposits vary independently of obesity, which reinforces the case for their direct segmentation and quantification. However, manual segmentation of these fats has not been widely deployed in clinical practice due to the required human workload and the consequent high cost of physicians and technicians. In this work, we propose a unified method for autonomous segmentation and quantification of two types of cardiac fat. The segmented fats, termed epicardial and mediastinal, are separated from each other by the pericardium. Much effort was devoted to achieving minimal user intervention. The proposed methodology mainly comprises registration and classification algorithms to perform the desired segmentation. We compare the performance of several classification algorithms on this task, including neural networks, probabilistic models and decision tree algorithms. Experimental results show that the mean accuracy for both epicardial and mediastinal fats is 98.5% (99.5% if the features are normalized), with a mean true positive rate of 98.0%. On average, the Dice similarity index was 97.6%. PMID:26474835

  15. SU-E-J-238: Monitoring Lymph Node Volumes During Radiotherapy Using Semi-Automatic Segmentation of MRI Images

    SciTech Connect

    Veeraraghavan, H; Tyagi, N; Riaz, N; McBride, S; Lee, N; Deasy, J

    2014-06-01

    Purpose: Identification and image-based monitoring of lymph nodes growing due to disease could be an attractive alternative to prophylactic head and neck irradiation. We evaluated the accuracy of the user-interactive Grow Cut algorithm for volumetric segmentation of radiotherapy-relevant lymph nodes from MRI taken weekly during radiotherapy. Method: The algorithm employs user-drawn strokes in the image to volumetrically segment multiple structures of interest. We used 3D T2-weighted turbo spin echo images with an isotropic resolution of 1 mm3 and FOV of 492×492×300 mm3 of head and neck cancer patients who underwent weekly MR imaging during the course of radiotherapy. Various lymph node (LN) levels (N2, N3, N4/5) were individually contoured on the weekly MR images by an expert physician and used as ground truth in our study. The segmentation results were compared with the physician-drawn lymph nodes based on the DICE similarity score. Results: Three head and neck patients with 6 weekly MR images each were evaluated. Two patients had level N2 LN drawn and one patient had level N2, N3 and N4/5 drawn on each MR image. The algorithm took an average of a minute to segment the entire volume (512×512×300 mm3). The algorithm achieved an overall DICE similarity score of 0.78. The time taken for initializing and obtaining the volumetric mask was about 5 min for cases with only N2 LN and about 15 min for the case with N2, N3 and N4/5 level nodes. The longer initialization time for the latter case was due to the need for accurate user inputs to separate overlapping portions of the different LN. The standard deviation in segmentation accuracy at different time points was at most 0.05. Conclusions: Our initial evaluation of Grow Cut segmentation shows reasonably accurate and consistent volumetric segmentation of LN with minimal user effort and time.

  16. REACH. Teacher's Guide, Volume III. Task Analysis.

    ERIC Educational Resources Information Center

    Morris, James Lee; And Others

    Designed for use with individualized instructional units (CE 026 345-347, CE 026 349-351) in the electromechanical cluster, this third volume of the postsecondary teacher's guide presents the task analysis which was used in the development of the REACH (Refrigeration, Electro-Mechanical, Air Conditioning, Heating) curriculum. The major blocks of…

  17. A Two-Step Segmentation Method for Breast Ultrasound Masses Based on Multi-resolution Analysis.

    PubMed

    Rodrigues, Rafael; Braz, Rui; Pereira, Manuela; Moutinho, José; Pinheiro, Antonio M G

    2015-06-01

    Breast ultrasound images have several attractive properties that make them an interesting tool in breast cancer detection. However, their intrinsic high noise rate and low contrast turn mass detection and segmentation into a challenging task. In this article, a fully automated two-stage breast mass segmentation approach is proposed. In the initial stage, ultrasound images are segmented using support vector machine or discriminant analysis pixel classification with a multiresolution pixel descriptor. The features are extracted using non-linear diffusion, bandpass filtering and scale-variant mean curvature measures. A set of heuristic rules complement the initial segmentation stage, selecting the region of interest in a fully automated manner. In the second segmentation stage, refined segmentation of the area retrieved in the first stage is attempted, using two different techniques. The AdaBoost algorithm uses a descriptor based on scale-variant curvature measures and non-linear diffusion of the original image at lower scales, to improve the spatial accuracy of the ROI. Active contours use the segmentation results from the first stage as initial contours. Results for both proposed segmentation paths were promising, with normalized Dice similarity coefficients of 0.824 for AdaBoost and 0.813 for active contours. Recall rates were 79.6% for AdaBoost and 77.8% for active contours, whereas the precision rate was 89.3% for both methods. PMID:25736608

  18. Page segmentation for document image analysis using a neural network

    NASA Astrophysics Data System (ADS)

    Patel, Devesh

    1996-07-01

    In this paper we present a method for segmenting document page images into text and nontext regions. The underlying assumption made by this approach is that the two region types can be viewed as different textures. We do not use any a priori knowledge of the document format. A convolution-based method is used to generate the texture feature images. The coefficients of the convolution masks are obtained using a single-layer artificial neural network that generates eigenvectors of the correlation matrix of the input data. The coefficients of these masks have been `learned' from examples of the document images and have the potential to be considerably more powerful than masks with preset coefficients. A thresholding scheme based on a measure of entropy is used to segment the feature images into homogeneous regions.
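The entropy-based thresholding step is not specified in detail; a minimal Python sketch of one common entropy measure (Kapur's method, an assumption here, applied to an illustrative feature histogram):

```python
import math

def entropy_threshold(hist):
    """Pick the threshold that maximizes the sum of the foreground and
    background entropies (Kapur's scheme); `hist` is a list of bin counts."""
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = 0, -1.0
    for t in range(1, len(hist)):
        w0 = sum(p[:t])          # background probability mass
        w1 = 1.0 - w0            # foreground probability mass
        if w0 <= 0 or w1 <= 0:
            continue
        h0 = -sum(pi / w0 * math.log(pi / w0) for pi in p[:t] if pi > 0)
        h1 = -sum(pi / w1 * math.log(pi / w1) for pi in p[t:] if pi > 0)
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

# Bimodal histogram: a dark "text" peak and a bright "background" peak
hist = [30, 40, 5, 1, 1, 1, 5, 45, 35, 20]
print(entropy_threshold(hist))  # lands in the valley between the peaks
```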

  19. Combined texture feature analysis of segmentation and classification of benign and malignant tumour CT slices.

    PubMed

    Padma, A; Sukanesh, R

    2013-01-01

    A computer software system is designed for the segmentation and classification of benign and malignant tumour slices in brain computed tomography (CT) images. This paper presents a method that finds and selects the dominant run-length and co-occurrence texture features of the region of interest (ROI) of the tumour region in each slice, segments the tumour by Fuzzy c-means clustering (FCM), and evaluates the performance of support vector machine (SVM)-based classifiers in classifying benign and malignant tumour slices. Two hundred and six tumour-confirmed CT slices are considered in this study. A total of 17 texture features are extracted by a feature extraction procedure, and six features are selected using Principal Component Analysis (PCA). The SVM-based classifier is constructed with the selected features, and the segmentation results are compared with ground truth (target) labelled by an experienced radiologist. Quantitative analysis between ground truth and the segmented tumour is presented in terms of segmentation accuracy, segmentation error and overlap similarity measures such as the Jaccard index. The classification performance of the SVM-based classifier with the same selected features is also evaluated using a 10-fold cross-validation method. The proposed system shows that some newly found texture features make an important contribution to classifying benign and malignant tumour slices efficiently and accurately with less computational time. The experimental results showed that the proposed system achieves high segmentation and classification accuracy as measured by the Jaccard index, sensitivity and specificity. PMID:23094909
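The Jaccard index used above as the overlap similarity measure can be computed directly on pixel sets; a minimal Python sketch (the masks below are illustrative):

```python
def jaccard_index(seg, truth):
    """Jaccard overlap |A ∩ B| / |A ∪ B| between a segmented region
    and the ground-truth region, each given as a set of pixel coords."""
    a, b = set(seg), set(truth)
    return len(a & b) / len(a | b) if a | b else 1.0

seg = {(r, c) for r in range(5) for c in range(5)}       # 25 pixels
truth = {(r, c) for r in range(1, 6) for c in range(5)}  # shifted one row
print(round(jaccard_index(seg, truth), 3))  # 20/30 = 0.667
```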

  20. Finite Volume Methods: Foundation and Analysis

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Ohlberger, Mario

    2003-01-01

    Finite volume methods are a class of discretization schemes that have proven highly successful in approximating the solution of a wide variety of conservation law systems. They are extensively used in fluid mechanics, porous media flow, meteorology, electromagnetics, models of biological processes, semiconductor device simulation and many other engineering areas governed by conservative systems that can be written in integral control volume form. This article reviews elements of the foundation and analysis of modern finite volume methods. The primary advantages of these methods are numerical robustness through discrete maximum (minimum) principles, applicability on very general unstructured meshes, and the intrinsic local conservation properties of the resulting schemes. Throughout this article, specific attention is given to scalar nonlinear hyperbolic conservation laws and the development of high order accurate schemes for discretizing them. A key tool in the design and analysis of finite volume schemes suitable for non-oscillatory discontinuity capturing is discrete maximum principle analysis. A number of building blocks used in the development of numerical schemes possessing local discrete maximum principles are reviewed in one and several space dimensions, e.g. monotone fluxes, E-fluxes, TVD discretization, non-oscillatory reconstruction, slope limiters, positive coefficient schemes, etc. When available, theoretical results concerning a priori and a posteriori error estimates are given. Further advanced topics are then considered such as high order time integration, discretization of diffusion terms and the extension to systems of nonlinear conservation laws.
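The discrete maximum principle and local conservation mentioned above can be illustrated with the simplest monotone flux, first-order upwind, applied to linear advection on a periodic grid; a minimal Python sketch:

```python
def upwind_advect(u, c, steps):
    """First-order upwind finite volume update for u_t + a*u_x = 0 (a > 0)
    on a periodic grid; c = a*dt/dx is the CFL number (0 < c <= 1).
    Each new cell value is the convex combination (1-c)*u[i] + c*u[i-1],
    so the scheme obeys a discrete maximum principle and is TVD."""
    n = len(u)
    for _ in range(steps):
        # u[i-1] with i = 0 wraps around via Python negative indexing
        u = [u[i] - c * (u[i] - u[i - 1]) for i in range(n)]
    return u

u0 = [1.0 if 4 <= i < 8 else 0.0 for i in range(16)]  # square pulse
u = upwind_advect(u0, 0.5, 10)
print(max(u), min(u))        # values remain within [0, 1]
print(round(sum(u), 6))      # total mass preserved: 4.0
```

The telescoping flux differences cancel over the periodic grid, which is exactly the local conservation property the review emphasizes.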

  1. Multi-stage Learning for Robust Lung Segmentation in Challenging CT Volumes

    E-print Network

    Imaging, Siemens Healthcare, Oxford, UK Abstract. Simple algorithms for segmenting healthy lung parenchyma hierarchical use of discriminative classifiers that are trained on a range of manually annotated data of dis with a discriminative classifier. The carina location is used to predict approx- imate poses (translation, orientation

  2. Automated Segmentation of Cerebellum Using Brain Mask and Partial Volume Estimation Map

    PubMed Central

    Lee, Dong-Kyun; Yoon, Uicheul; Kwak, Kichang; Lee, Jong-Min

    2015-01-01

    While segmentation of the cerebellum is an indispensable step in many studies, its contrast is not clear because of the adjacent cerebrospinal fluid, meninges, and cerebral peduncle. Thus, various cerebellar segmentation methods, such as a deformable model or a template-based algorithm, might exhibit incorrect segmentation of the venous sinuses and the cerebellar peduncle. In this study, we propose a fully automated procedure combining cerebellar tissue classification, a template-based approach, and morphological operations sequentially. The cerebellar region was defined approximately by removing the cerebral region from the brain mask. Then, the noncerebellar region was trimmed using a morphological operator and the brain-stem atlas was aligned to the individual brain to define the brain-stem area. The proposed method was validated against the well-known FreeSurfer and ITK-SNAP packages using the Dice similarity index and recall and precision scores. As a result, the proposed method was significantly better than the other methods for the Dice similarity index (0.93, FreeSurfer: 0.92, ITK-SNAP: 0.87) and precision (0.95, FreeSurfer: 0.90, ITK-SNAP: 0.93). Therefore, it could be said that the proposed method yielded a robust and accurate segmentation result. Moreover, additional postprocessing with the brain-stem atlas could improve its result. PMID:26060504

  3. Cumulative Heat Diffusion Using Volume Gradient Operator for Volume Analysis.

    PubMed

    Gurijala, K C; Wang, Lei; Kaufman, A

    2012-12-01

    We introduce a simple, yet powerful method called the Cumulative Heat Diffusion for shape-based volume analysis, while drastically reducing the computational cost compared to conventional heat diffusion. Unlike the conventional heat diffusion process, where the diffusion is carried out by considering each node separately as the source, we simultaneously consider all the voxels as sources and carry out the diffusion, hence the term cumulative heat diffusion. In addition, we introduce a new operator that is used in the evaluation of cumulative heat diffusion called the Volume Gradient Operator (VGO). VGO is a combination of the Laplace-Beltrami operator (LBO) and a data-driven operator which is a function of the half gradient. The half gradient is the absolute value of the difference between the voxel intensities. The VGO by its definition captures the local shape information and is used to assign the initial heat values. Furthermore, VGO is also used as the weighting parameter for the heat diffusion process. We demonstrate that our approach can robustly extract shape-based features and thus forms the basis for an improved classification and exploration of features based on shape. PMID:26357113

  4. Verifying volume rendering using discretization error analysis.

    PubMed

    Etiene, Tiago; Jönsson, Daniel; Ropinski, Timo; Scheidegger, Carlos; Comba, João L D; Nonato, Luis Gustavo; Kirby, Robert M; Ynnerman, Anders; Silva, Cláudio T

    2014-01-01

    We propose an approach for verification of volume rendering correctness based on an analysis of the volume rendering integral, the basis of most DVR algorithms. With respect to the most common discretization of this continuous model (Riemann summation), we make assumptions about the impact of parameter changes on the rendered results and derive convergence curves describing the expected behavior. Specifically, we progressively refine the number of samples along the ray, the grid size, and the pixel size, and evaluate how the errors observed during refinement compare against the expected approximation errors. We derive the theoretical foundations of our verification approach, explain how to realize it in practice, and discuss its limitations. We also report the errors identified by our approach when applied to two publicly available volume rendering packages. PMID:24201332
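The refinement methodology described above can be illustrated on a scalar integrand: refine a left Riemann sum, the discretization named in the abstract, and check that the observed errors decay at the expected first-order rate (the integrand here is illustrative, not a rendering integral):

```python
import math

def riemann(f, a, b, n):
    """Left Riemann sum of f over [a, b] with n samples."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

# Progressively refine the sampling and compare errors against the
# expected O(h) behavior, mirroring the paper's convergence curves.
exact = 1.0 - math.cos(1.0)  # integral of sin over [0, 1]
errors = [abs(riemann(math.sin, 0.0, 1.0, n) - exact) for n in (64, 128, 256)]
ratios = [errors[i] / errors[i + 1] for i in range(2)]
print(ratios)  # each ratio is close to 2: first-order convergence
```

A deviation of the measured ratios from the predicted order is precisely the kind of signal the verification approach uses to flag implementation errors.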

  5. Verifying Volume Rendering Using Discretization Error Analysis.

    PubMed

    Etiene, Tiago; Jonsson, Daniel; Ropinski, Timo; Scheidegger, Carlos; Comba, Joao; Nonato, L Gustavo; Kirby, Robert M; Ynnerman, Anders; Silva, Claudio T

    2013-06-13

    We propose an approach for verification of volume rendering correctness based on an analysis of the volume rendering integral, the basis of most DVR algorithms. With respect to the most common discretization of this continuous model (Riemann summation), we make assumptions about the impact of parameter changes on the rendered results and derive convergence curves describing the expected behavior. Specifically, we progressively refine the number of samples along the ray, the grid size, and the pixel size, and evaluate how the errors observed during refinement compare against the expected approximation errors. We derive the theoretical foundations of our verification approach, explain how to realize it in practice and discuss its limitations. We also report the errors identified by our approach when applied to two publicly-available volume rendering packages. PMID:23775481

  6. Application of taxonomy theory, Volume 1: Computing a Hopf bifurcation-related segment of the feasibility boundary. Final report

    SciTech Connect

    Zaborszky, J.; Venkatasubramanian, V.

    1995-10-01

    Taxonomy Theory is the first precise comprehensive theory for large power system dynamics modeled in any detail. The motivation for this project is to show that it can be used, practically, for analyzing a disturbance that actually occurred on a large system, which affected a sizable portion of the Midwest with supercritical Hopf type oscillations. This event is well documented and studied. The report first summarizes Taxonomy Theory with an engineering flavor. Then various computational approaches are cited and analyzed for their suitability for use with Taxonomy Theory. Working equations are then developed for computing a segment of the feasibility boundary that bounds the region of (operating) parameters throughout which the operating point can be moved without losing stability. Then experimental software incorporating the large EPRI software package PSAPAC is developed. After a summary of the events during the subject disturbance, numerous large scale computations, up to 7600 buses, are reported. These results are reduced into graphical and tabular forms, which then are analyzed and discussed. The report is divided into two volumes. This volume illustrates the use of the Taxonomy Theory for computing the feasibility boundary and presents evidence that the event indeed led to a Hopf type oscillation on the system. Furthermore it proves that the Feasibility Theory can indeed be used for practical computation work with very large systems. Volume 2, a separate volume, will show that the disturbance has led to a supercritical (that is, stable-oscillation) Hopf bifurcation.

  7. Segmental Musculoskeletal Examinations using Dual-Energy X-Ray Absorptiometry (DXA): Positioning and Analysis Considerations.

    PubMed

    Hart, Nicolas H; Nimphius, Sophia; Spiteri, Tania; Cochrane, Jodie L; Newton, Robert U

    2015-09-01

    Musculoskeletal examinations provide informative and valuable quantitative insight into muscle and bone health. DXA is one mainstream tool used to accurately and reliably determine body composition components and bone mass characteristics in vivo. Presently, whole body scan models separate the body into axial and appendicular regions; however, there is a need for localised appendicular segmentation models to further examine regions of interest within the upper and lower extremities. Similarly, inconsistencies pertaining to patient positioning exist in the literature that influence measurement precision and analysis outcomes, highlighting the need for a standardised procedure. This paper provides standardised and reproducible 1) positioning and analysis procedures using DXA and 2) reliable segmental examinations through descriptive appendicular boundaries. Whole-body scans were performed on forty-six (n = 46) football athletes (age: 22.9 ± 4.3 yrs; height: 1.85 ± 0.07 m; weight: 87.4 ± 10.3 kg; body fat: 11.4 ± 4.5 %) using DXA. All segments across all scans were analysed three times by the main investigator on three separate days, and by three independent investigators a week following the original analysis. To examine intra-rater and inter-rater reliability between days and between researchers, coefficients of variation (CV) and intraclass correlation coefficients (ICC) were determined. Positioning and segmental analysis procedures presented in this study produced very high, nearly perfect intra-tester (CV ≤ 2.0%; ICC ≥ 0.988) and inter-tester (CV ≤ 2.4%; ICC ≥ 0.980) reliability, demonstrating excellent reproducibility within and between practitioners. Standardised examinations of axial and appendicular segments are necessary. Future studies aiming to quantify and report segmental analyses of the upper- and lower-body musculoskeletal properties using whole-body DXA scans are encouraged to use the patient positioning and image analysis procedures outlined in this paper. 
Key points: Musculoskeletal examinations using DXA technology require highly standardised and reproducible patient positioning and image analysis procedures to accurately measure and monitor axial, appendicular and segmental regions of interest. Internal rotation and fixation of the lower-limbs is strongly recommended during whole-body DXA scans to prevent undesired movement, improve frontal mass accessibility and enhance ankle joint visibility during scan performance and analysis. Appendicular segmental analyses using whole-body DXA scans are highly reliable for all regional upper-body and lower-body segmentations, with hard-tissue (CV ≤ 1.5%; R ≥ 0.990) achieving greater reliability and lower error than soft-tissue (CV ≤ 2.4%; R ≥ 0.980) masses when using our appendicular segmental boundaries. PMID:26336349
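The CV and ICC reliability statistics reported above can be sketched as follows; the abstract does not state which ICC model was used, so the one-way random-effects ICC(1,1) form below is an assumption, and the measurement table is hypothetical:

```python
def cv_percent(values):
    """Coefficient of variation (%) of repeated measurements: 100*SD/mean."""
    m = sum(values) / len(values)
    sd = (sum((v - m) ** 2 for v in values) / (len(values) - 1)) ** 0.5
    return 100.0 * sd / m

def icc_1_1(table):
    """One-way random-effects ICC(1,1) from an n-subjects x k-ratings table:
    (MSB - MSW) / (MSB + (k-1)*MSW), via one-way ANOVA mean squares."""
    n, k = len(table), len(table[0])
    grand = sum(sum(row) for row in table) / (n * k)
    msb = k * sum((sum(row) / k - grand) ** 2 for row in table) / (n - 1)
    msw = sum((v - sum(row) / k) ** 2 for row in table for v in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Three repeated analyses of the same segment mass (kg) for four subjects
table = [[10.1, 10.2, 10.0], [12.0, 12.1, 11.9],
         [9.5, 9.4, 9.6], [11.0, 11.1, 10.9]]
print(round(cv_percent(table[0]), 2))  # small CV: low within-subject spread
print(round(icc_1_1(table), 3))        # close to 1: high reliability
```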

  8. Extensions to analysis of ignition transients of segmented rocket motors

    NASA Technical Reports Server (NTRS)

    Caveny, L. H.

    1978-01-01

    The analytical procedures described in NASA CR-150162 were extended for the purpose of analyzing the data from the first static test of the Solid Rocket Booster for the Space Shuttle. The component of thrust associated with the rapid changes in the internal flow field was calculated. This dynamic thrust component was shown to be prominent during flame spreading. An approach was implemented to account for the close coupling between the igniter and head end segment of the booster. The tips of the star points were ignited first, followed by radial and longitudinal flame spreading.

  9. Atlas-Based Segmentation Improves Consistency and Decreases Time Required for Contouring Postoperative Endometrial Cancer Nodal Volumes

    SciTech Connect

    Young, Amy V.; Wortham, Angela; Wernick, Iddo; Evans, Andrew; Ennis, Ronald D.

    2011-03-01

    Purpose: Accurate target delineation of the nodal volumes is essential for three-dimensional conformal and intensity-modulated radiotherapy planning for endometrial cancer adjuvant therapy. We hypothesized that atlas-based segmentation ('autocontouring') would lead to time savings and more consistent contours among physicians. Methods and Materials: A reference anatomy atlas was constructed using the data from 15 postoperative endometrial cancer patients by contouring the pelvic nodal clinical target volume on the simulation computed tomography scan according to the Radiation Therapy Oncology Group 0418 trial using commercially available software. On the simulation computed tomography scans from 10 additional endometrial cancer patients, the nodal clinical target volume autocontours were generated. Three radiation oncologists corrected the autocontours and delineated the manual nodal contours under timed conditions while unaware of the other contours. The time difference was determined, and the overlap of the contours was calculated using Dice's coefficient. Results: For all physicians, manual contouring of the pelvic nodal target volumes and editing the autocontours required a mean ± standard deviation of 32 ± 9 vs. 23 ± 7 minutes, respectively (p = .000001), a 26% time savings. For each physician, the time required to delineate the manual contours vs. correcting the autocontours was 30 ± 3 vs. 21 ± 5 min (p = .003), 39 ± 12 vs. 30 ± 5 min (p = .055), and 29 ± 5 vs. 20 ± 5 min (p = .0002). The mean overlap increased from manual contouring (0.77) to correcting the autocontours (0.79; p = .038). 
Conclusion: The results of our study have shown that autocontouring leads to increased consistency and time savings when contouring the nodal target volumes for adjuvant treatment of endometrial cancer, although the autocontours still required careful editing to ensure that the lymph nodes at risk of recurrence are properly included in the target volume.

  10. Automated abdominal lymph node segmentation based on RST analysis and SVM

    NASA Astrophysics Data System (ADS)

    Nimura, Yukitaka; Hayashi, Yuichiro; Kitasaka, Takayuki; Furukawa, Kazuhiro; Misawa, Kazunari; Mori, Kensaku

    2014-03-01

    This paper describes a segmentation method for abdominal lymph nodes (LN) using radial structure tensor (RST) analysis and a support vector machine. LN analysis is one of the crucial parts of lymphadenectomy, a surgical procedure to remove one or more LNs in order to evaluate them for the presence of cancer. Several methods for automated LN detection and segmentation have been proposed; however, they produce many false positives (FPs). The proposed method consists of LN candidate segmentation and FP reduction. LN candidates are extracted using RST analysis at each voxel of the CT scan. RST analysis can discriminate between different local intensity structures without being influenced by surrounding structures. In the FP reduction process, we eliminate FPs using a support vector machine with shape and intensity information of the LN candidates. The experimental results reveal that the sensitivity of the proposed method was 82.0% with 21.6 FPs/case.

  11. A robust and fast line segment detector based on top-down smaller eigenvalue analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Lu, Xiaoqing

    2014-01-01

    In this paper, we propose a robust and fast line segment detector, which achieves accurate results with a controlled number of false detections and requires no parameter tuning. It consists of three steps: first, we propose a novel edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input image; second, we propose a top-down scheme based on smaller eigenvalue analysis to extract line segments within each obtained edge segment; third, we employ Desolneux et al.'s method to reject false detections. Experiments demonstrate that it is very efficient and more robust than two state-of-the-art methods, LSD and EDLines.
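The smaller-eigenvalue criterion behind the second step can be sketched on 2-D point chains: the smaller eigenvalue of a chain's scatter matrix is near zero exactly when the chain is well fit by a straight line (the chains below are illustrative; the detector's actual top-down splitting logic is omitted):

```python
def smaller_eigenvalue(points):
    """Smaller eigenvalue of the 2x2 covariance (scatter) matrix of a
    point chain, via the closed form for symmetric 2x2 matrices.
    Near zero means the chain is nearly collinear."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    return tr / 2 - ((tr / 2) ** 2 - det) ** 0.5

line = [(x, 2 * x + 1) for x in range(10)]       # perfectly straight chain
bent = [(0, 0), (1, 0), (2, 0), (3, 3), (4, 6)]  # chain with a corner
print(round(smaller_eigenvalue(line), 6))  # 0.0: fits a line exactly
print(round(smaller_eigenvalue(bent), 3))  # clearly > 0: split the chain here
```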

  12. Fire flame detection using color segmentation and space-time analysis

    NASA Astrophysics Data System (ADS)

    Ruchanurucks, Miti; Saengngoen, Praphin; Sajjawiso, Theeraphat

    2011-10-01

    This paper presents a fire flame detection method using CCTV cameras based on image processing. The scheme relies on color segmentation and space-time analysis. The segmentation is performed to extract fire-like-color regions in an image. Many methods are benchmarked against each other to find the best one for practical CCTV cameras. After that, space-time analysis is used to recognize fire behavior. A space-time window is generated from the contour of the thresholded image. Feature extraction is done in the Fourier domain of the window. A neural network is used for behavior recognition. The system is shown to be practical and robust.

  13. Analysis of a kinetic multi-segment foot model. Part I: Model repeatability and kinematic validity.

    PubMed

    Bruening, Dustin A; Cooney, Kevin M; Buczek, Frank L

    2012-04-01

    Kinematic multi-segment foot models are still evolving, but have seen increased use in clinical and research settings. The addition of kinetics may increase knowledge of foot and ankle function as well as influence multi-segment foot model evolution; however, previous kinetic models are too complex for clinical use. In this study we present a three-segment kinetic foot model and thorough evaluation of model performance during normal gait. In this first of two companion papers, model reference frames and joint centers are analyzed for repeatability, joint translations are measured, segment rigidity characterized, and sample joint angles presented. Within-tester and between-tester repeatability were first assessed using 10 healthy pediatric participants, while kinematic parameters were subsequently measured on 17 additional healthy pediatric participants. Repeatability errors were generally low for all sagittal plane measures as well as transverse plane Hindfoot and Forefoot segments (median < 3°), while the least repeatable orientations were the Hindfoot coronal plane and Hallux transverse plane. Joint translations were generally less than 2 mm in any one direction, while segment rigidity analysis suggested rigid body behavior for the Shank and Hindfoot, with the Forefoot violating the rigid body assumptions in terminal stance/pre-swing. Joint excursions were consistent with previously published studies. PMID:22421190

  14. Segmentation of vascular structures and hematopoietic cells in 3D microscopy images and quantitative analysis

    NASA Astrophysics Data System (ADS)

    Mu, Jian; Yang, Lin; Kamocka, Malgorzata M.; Zollman, Amy L.; Carlesso, Nadia; Chen, Danny Z.

    2015-03-01

    In this paper, we present image processing methods for quantitative study of changes in the bone marrow microenvironment (characterized by altered vascular structure and hematopoietic cell distribution) caused by disease or other factors. We develop algorithms that automatically segment vascular structures and hematopoietic cells in 3-D microscopy images, perform quantitative analysis of the properties of the segmented vascular structures and cells, and examine how such properties change. In processing images, we apply local thresholding to segment vessels, and add post-processing steps to deal with imaging artifacts. We propose an improved watershed algorithm that relies on both intensity and shape information and can separate multiple overlapping cells better than common watershed methods. We then quantitatively compute various features of the vascular structures and hematopoietic cells, such as the branches and sizes of vessels and the distribution of cells. In analyzing vascular properties, we provide algorithms for pruning spurious vessel segments and branches based on vessel skeletons. Our algorithms can segment vascular structures and hematopoietic cells with good quality. We use our methods to quantitatively examine the changes in the bone marrow microenvironment caused by deletion of the Notch pathway. Our quantitative analysis reveals property changes in samples with the Notch pathway deleted. Our tool is useful for biologists seeking to quantitatively measure changes in the bone marrow microenvironment and to develop therapeutic strategies that aid bone marrow microenvironment recovery.
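The local thresholding step can be sketched as a comparison of each pixel against its neighborhood mean; this is a generic form of local thresholding, not the authors' exact implementation, and the tiny image below is illustrative:

```python
def local_threshold(img, radius, offset=0.0):
    """Binarize a 2-D intensity grid by comparing each pixel to the mean
    of its (2*radius+1)^2 neighborhood (clipped at the image borders).
    `offset` biases the decision toward the background."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - radius), min(h, y + radius + 1))
                    for xx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = 1 if img[y][x] > sum(vals) / len(vals) + offset else 0
    return out

# A bright "vessel" patch against a darker, uneven-prone background
img = [[10, 10, 10, 10],
       [10, 90, 90, 10],
       [10, 90, 90, 10],
       [10, 10, 10, 10]]
mask = local_threshold(img, 1)
print(mask)  # only the four bright pixels are marked
```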

  15. Morphotectonic Index Analysis as an Indicator of Neotectonic Segmentation of the Nicoya Peninsula, Costa Rica

    NASA Astrophysics Data System (ADS)

    Morrish, S.; Marshall, J. S.

    2013-12-01

    The Nicoya Peninsula lies within the Costa Rican forearc, where the Cocos plate subducts under the Caribbean plate at ~8.5 cm/yr. Rapid plate convergence produces frequent large earthquakes (~50 yr recurrence interval) and pronounced crustal deformation (0.1-2.0 m/ky uplift). Seven uplifted segments have been identified in previous studies using broad geomorphic surfaces (Hare & Gardner 1984) and late Quaternary marine terraces (Marshall et al. 2010). These surfaces suggest long-term net uplift and segmentation of the peninsula in response to contrasting domains of subducting seafloor (EPR, CNS-1, CNS-2). In this study, newer 10 m contour digital topographic data (CENIGA Terra Project) will be used to characterize and delineate this segmentation using morphotectonic analysis of drainage basins and correlation of fluvial terrace/geomorphic surface elevations. The peninsula has six primary watersheds that drain into the Pacific Ocean: the Río Andamojo, Río Tabaco, Río Nosara, Río Ora, Río Bongo, and Río Ario, which range in area from 200 km2 to 350 km2. The trunk rivers follow major lineaments that define morphotectonic segment boundaries, and their drainage basins are in turn bisected by these lineaments. Morphometric analysis of the lower (1st and 2nd) order drainage basins will provide insight into segmented tectonic uplift and deformation by comparing values of drainage basin asymmetry, stream length gradient, and hypsometry with respect to margin segmentation and subducting seafloor domain. A general geomorphic analysis will be conducted alongside the morphometric analysis to map previously recognized (Morrish et al. 2010) but poorly characterized late Quaternary fluvial terraces. Stream capture and drainage divide migration are common processes throughout the peninsula in response to the ongoing deformation. 
Identification and characterization of basin piracy throughout the peninsula will provide insight into the history of landscape evolution in response to differential uplift. Conducting this morphotectonic analysis of the Nicoya Peninsula will provide further constraints on rates of segment uplift, location of segment boundaries, and advance the understanding of the long term deformation of the region in relation to subduction.
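One of the morphometric indices mentioned above, the hypsometric integral, can be computed directly from a basin's elevation samples; a minimal Python sketch (the elevations below are hypothetical, not Nicoya data):

```python
def hypsometric_integral(elevations):
    """Hypsometric integral HI = (mean - min) / (max - min) of a basin's
    elevation samples, always in [0, 1]; a high HI suggests a youthful,
    actively uplifting basin, a low HI a more eroded one."""
    lo, hi = min(elevations), max(elevations)
    mean = sum(elevations) / len(elevations)
    return (mean - lo) / (hi - lo)

basin = [12, 35, 60, 88, 140, 210, 300, 415]  # elevation samples (m)
print(round(hypsometric_integral(basin), 3))  # (157.5-12)/(415-12) = 0.361
```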

  16. Programming Model Network Structure Analysis of the Cost-Effectiveness of Fish Tagging Technologies and Programs

    E-print Network

    and Programs Segment code Segment Name RKM 1 1 Segment bifurcations COLR1 Lower Columbia 29 29 Segment of river NUMBER) NEW SEGMENT LABEL SEGMENT/SUBBASIN/DAM NAME RKM Release site? Detection site? Dam? LOCR 1 COLR1) NEW SEGMENT LABEL SEGMENT/SUBBASIN/DAM NAME RKM Release site? Detection site? Dam? SNAK 7 SNAK3 Lower

  17. Vessel Segmentation and Analysis in Laboratory Skin Transplant Micro-angiograms

    E-print Network

    Lübeck, Universität zu

    Vessel Segmentation and Analysis in Laboratory Skin Transplant Micro-angiograms Alexandru transplantations depends on the adequate revascularization of the trans- planted dermal matrix. To induce vessel and length of newly grown vessels have to be measured in micro-angiograms (x-ray images of the blood vessels

  18. Optimal Sparse Segment Identification With Application in Copy Number Variation Analysis

    E-print Network

    Cai, T. Tony

    ; Likelihood ratio selection; Multiple testing; Signal detection. 1. INTRODUCTION In genetics, the study of DNA (McCarroll and Altshuler 2007). CNV refers to duplication or deletion of a segment of DNA sequences JENG, T. Tony CAI, and Hongzhe LI Motivated by DNA copy number variation (CNV) analysis based on high

  19. Segmental analysis of indocyanine green pharmacokinetics for the reliable diagnosis of functional vascular insufficiency

    NASA Astrophysics Data System (ADS)

    Kang, Yujung; Lee, Jungsul; An, Yuri; Jeon, Jongwook; Choi, Chulhee

    2011-03-01

    Accurate and reliable diagnosis of functional insufficiency of the peripheral vasculature is essential, since Raynaud phenomenon (RP), the most common form of peripheral vascular insufficiency, is commonly associated with systemic vascular disorders. We have previously demonstrated that dynamic imaging of the near-infrared fluorophore indocyanine green (ICG) can be a noninvasive and sensitive tool to measure tissue perfusion. In the present study, we demonstrated that combined analysis of multiple parameters, especially onset time and modified Tmax (the time from onset of ICG fluorescence to Tmax), can be used as a reliable diagnostic tool for RP. To validate the method, we performed conventional thermographic analysis combined with cold challenge and rewarming along with ICG dynamic imaging and segmental analysis. A case-control analysis demonstrated that the segmental pattern of ICG dynamics in both hands was significantly different between normal subjects and RP cases, suggesting the possibility of clinical application of this novel method for the convenient and reliable diagnosis of RP.

  20. Mimicking human expert interpretation of remotely sensed raster imagery by using a novel segmentation analysis within ArcGIS

    NASA Astrophysics Data System (ADS)

    Le Bas, Tim; Scarth, Anthony; Bunting, Peter

    2015-04-01

    Traditional computer based methods for the interpretation of remotely sensed imagery use each pixel individually or the average of a small window of pixels to calculate a class or thematic value, which provides an interpretation. However when a human expert interprets imagery, the human eye is excellent at finding coherent and homogenous areas and edge features. It may therefore be advantageous for computer analysis to mimic human interpretation. A new toolbox for ArcGIS 10.x will be presented that segments the data layers into a set of polygons. Each polygon is defined by a K-means clustering and region growing algorithm, thus finding areas, their edges and any lineations in the imagery. Attached to each polygon are the characteristics of the imagery such as mean and standard deviation of the pixel values, within the polygon. The segmentation of imagery into a jigsaw of polygons also has the advantage that the human interpreter does not need to spend hours digitising the boundaries. The segmentation process has been taken from the RSGIS library of analysis and classification routines (Bunting et al., 2014). These routines are freeware and have been modified to be available in the ArcToolbox under the Windows (v7) operating system. Input to the segmentation process is a multi-layered raster image, for example; a Landsat image, or a set of raster datasets made up from derivatives of topography. The size and number of polygons are set by the user and are dependent on the imagery used. Examples will be presented of data from the marine environment utilising bathymetric depth, slope, rugosity and backscatter from a multibeam system. Meaningful classification of the polygons using their numerical characteristics is the next goal. Object based image analysis (OBIA) should help this workflow. Fully calibrated imagery systems will allow numerical classification to be translated into more readily understandable terms. Peter Bunting, Daniel Clewley, Richard M. 
Lucas and Sam Gillingham. 2014. The Remote Sensing and GIS Software Library (RSGISLib), Computers & Geosciences. Volume 62, Pages 216-226 http://dx.doi.org/10.1016/j.cageo.2013.08.007.
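The polygon-generation step above rests on K-means clustering followed by region growing, as implemented in RSGISLib. A minimal, self-contained 1-D K-means over gray values is sketched below to illustrate the clustering idea only; the function name and the deterministic evenly spaced initialization are choices made here, not the RSGISLib implementation.

```python
# Minimal 1-D K-means over pixel values: illustrative sketch only,
# not the RSGISLib segmentation algorithm referenced above.

def kmeans_1d(values, k, iters=50):
    """Cluster scalar pixel values into k groups (assumes k >= 2).

    Centers are initialised evenly between min and max, so the result
    is deterministic for a given input.
    """
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each value goes to its nearest center.
        labels = [min(range(k), key=lambda c: abs(v - centers[c]))
                  for v in values]
        # Update step: move each center to the mean of its members.
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return centers, labels

pixels = [0.0, 0.1, 0.2, 9.9, 10.0, 10.1]
centers, labels = kmeans_1d(pixels, k=2)
```

In the toolbox, the clustered pixels are then grown into spatially contiguous regions and vectorized into the polygons that carry the per-segment statistics.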

  1. Microscopy Image Browser: A Platform for Segmentation and Analysis of Multidimensional Datasets.

    PubMed

    Belevich, Ilya; Joensuu, Merja; Kumar, Darshan; Vihinen, Helena; Jokitalo, Eija

    2016-01-01

    Understanding the structure-function relationship of cells and organelles in their natural context requires multidimensional imaging. As techniques for multimodal 3-D imaging have become more accessible, effective processing, visualization, and analysis of large datasets are posing a bottleneck for the workflow. Here, we present a new software package for high-performance segmentation and image processing of multidimensional datasets that improves and facilitates the full utilization and quantitative analysis of acquired data, which is freely available from a dedicated website. The open-source environment enables modification and insertion of new plug-ins to customize the program for specific needs. We provide practical examples of program features used for processing, segmentation and analysis of light and electron microscopy datasets, and detailed tutorials to enable users to rapidly and thoroughly learn how to use the program. PMID:26727152

  2. Segment and fit thresholding: a new method for image analysis applied to microarray and immunofluorescence data.

    PubMed

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B

    2015-10-01

    Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis. PMID:26339978
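Otsu's method, one of the baselines SFT is compared against, is a standard global threshold; a compact textbook sketch (not the authors' code) is:

```python
# Textbook Otsu threshold for 8-bit gray values: picks the cutoff that
# maximises between-class variance. A baseline sketch, not the SFT method.

def otsu_threshold(pixels):
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * hist[i] for i in range(256))

    best_t, best_var = 0, -1.0
    w0 = 0        # background pixel count so far
    sum0 = 0.0    # background intensity sum so far
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

image = [10] * 60 + [200] * 40   # bimodal toy image
t = otsu_threshold(image)        # separates pixels <= t from pixels > t
```

Unlike this single global cutoff, SFT derives its thresholds from the statistics of background segments, which is why it can adapt across images with very different characteristics.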

  3. Phylogenomic analysis reveals ancient segmental duplications in the human genome.

    PubMed

    Hafeez, Madiha; Shabbir, Madiha; Altaf, Fouzia; Abbasi, Amir Ali

    2016-01-01

    The evolution of organismal complexity and the origin of novelties during vertebrate history have been widely explored in the context of both the regulation of gene expression and gene duplication events. Ohno (1970) first put forward the idea of two rounds of whole-genome duplication events as the most plausible explanation for the evolution of the vertebrate lineage (2R hypothesis). To test the validity of the 2R hypothesis, a robust phylogenomic analysis of multigene families with triplicated or quadruplicated representation on human FGFR-bearing chromosomes (4/5/8/10) was performed. A topology comparison approach categorized members of 80 families into five distinct co-duplicated groups. Genes belonging to one co-duplicated group are duplicated concurrently, whereas genes of two different co-duplicated groups do not share their duplication history and have not duplicated in congruency. Our findings contradict the 2R model and are indicative of small-scale duplications and rearrangements that span the entire course of animal history. PMID:26327327

  4. A new image segmentation method based on multifractal detrended moving average analysis

    NASA Astrophysics Data System (ADS)

    Shi, Wen; Zou, Rui-biao; Wang, Fang; Su, Le

    2015-08-01

    In order to segment and delineate some regions of interest in an image, we propose a novel algorithm based on the multifractal detrended moving average analysis (MF-DMA). In this method, the generalized Hurst exponent h(q) is first calculated for every pixel and considered as the local feature of a surface. Then a multifractal detrended moving average spectrum (MF-DMS) D(h(q)) is defined following the idea of the box-counting dimension method. Therefore, we call the new image segmentation method the MF-DMS-based algorithm. The performance of the MF-DMS-based method is tested by two image segmentation experiments on rapeseed leaf images of potassium deficiency and magnesium deficiency under three cases, namely, backward (θ = 0), centered (θ = 0.5) and forward (θ = 1), with different q values. Comparison experiments are conducted between the MF-DMS method and two other multifractal segmentation methods, namely the popular MFS-based and the latest MF-DFS-based methods. The results show that our MF-DMS-based method is superior to the latter two. The best segmentation result for the rapeseed leaf images of potassium deficiency and magnesium deficiency comes from the same parameter combination of θ = 0.5 and D(h(−10)) when using the MF-DMS-based method. An interesting finding is that D(h(−10)) outperforms other parameters for both the MF-DMS-based method in the centered case and the MF-DFS-based algorithm. By comparing the multifractal nature of nutrient-deficiency and non-deficiency areas determined by the segmentation results, an important finding is that the fluctuation of gray values in the nutrient-deficiency area is much more severe than in the non-deficiency area.
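The backward, centered, and forward cases refer to the position parameter θ of the moving window in DMA. As a minimal illustration of the underlying machinery, the sketch below computes the q = 2, backward (θ = 0) fluctuation function on a 1-D profile; it is an illustration of the idea only, not the paper's two-dimensional, per-pixel MF-DMS algorithm.

```python
import random

# First stage of detrended moving average (DMA) analysis, q = 2,
# backward case (theta = 0): detrend the profile with a trailing
# moving average and measure the rms residual at each window size.
# 1-D sketch only, not the paper's 2-D MF-DMS algorithm.

def dma_fluctuation(profile, window):
    """rms deviation of the profile from its trailing moving average."""
    resid = []
    for i in range(window - 1, len(profile)):
        ma = sum(profile[i - window + 1:i + 1]) / window
        resid.append((profile[i] - ma) ** 2)
    return (sum(resid) / len(resid)) ** 0.5

random.seed(7)
steps = [random.gauss(0.0, 1.0) for _ in range(4096)]
profile = []
acc = 0.0
for s in steps:          # integrate white noise -> random-walk profile
    acc += s
    profile.append(acc)

f_small = dma_fluctuation(profile, 8)
f_large = dma_fluctuation(profile, 64)
# For uncorrelated increments, F(n) grows roughly like n**0.5, so the
# fluctuation at the larger window should exceed that at the smaller one.
```

In the full method, F(q, n) is computed for a range of q and window sizes n, and h(q) is estimated from the slope of log F against log n.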

  5. Control volume based hydrocephalus research; analysis of human data

    NASA Astrophysics Data System (ADS)

    Cohen, Benjamin; Wei, Timothy; Voorhees, Abram; Madsen, Joseph; Anor, Tomer

    2010-11-01

    Hydrocephalus is a neuropathophysiological disorder primarily diagnosed by increased cerebrospinal fluid volume and pressure within the brain. To date, utilization of clinical measurements has been limited to understanding the relative amplitude and timing of flow, volume, and pressure waveforms: qualitative approaches without a clear framework for meaningful quantitative comparison. Pressure-volume models and electric circuit analogs enforce volume conservation principles in terms of pressure. Control volume analysis, through the integral mass and momentum conservation equations, ensures that pressure and volume are accounted for using first-principles fluid physics. This approach is able to directly incorporate the diverse measurements obtained by clinicians into a simple, direct and robust mechanics-based framework. Clinical data obtained for analysis are discussed along with data processing techniques used to extract terms in the conservation equation. Control volume analysis provides a non-invasive, physics-based approach to extracting pressure information from magnetic resonance velocity data that cannot be measured directly by pressure instrumentation.
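For an incompressible fluid, the integral mass conservation statement at the heart of control volume analysis reduces to dV/dt = Q_in − Q_out. The toy sketch below integrates that balance over one cardiac cycle with hypothetical sinusoidal flow waveforms (the numbers are illustrative, not clinical data).

```python
import math

# Integral mass conservation for a control volume of incompressible
# fluid: dV/dt = Q_in - Q_out. Forward-Euler accumulation over one
# cardiac cycle with hypothetical sinusoidal waveforms.

def volume_change(q_in, q_out, dt):
    """Accumulate net volume change from sampled inflow/outflow rates."""
    return sum((qi - qo) * dt for qi, qo in zip(q_in, q_out))

period = 1.0                      # s, one cardiac cycle
n = 1000
dt = period / n
t = [i * dt for i in range(n)]

# Outflow lags inflow by a quarter cycle; equal means -> zero net change
# over a full cycle.
q_in  = [5.0 + 1.0 * math.sin(2 * math.pi * ti) for ti in t]              # ml/s
q_out = [5.0 + 1.0 * math.sin(2 * math.pi * ti - math.pi / 2) for ti in t]

dV = volume_change(q_in, q_out, dt)   # ~0 over the full period
```

With a constant mismatch instead (say 2.0 ml/s in, 1.0 ml/s out for 1 s), the same routine returns the expected 1.0 ml of accumulated volume.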

  6. An Initial Approach to Segmentation and Analysis of Nerve Cells using Ridge S.J. Richerson1

    E-print Network

    Lübeck, Universität zu

    An Initial Approach to Segmentation and Analysis of Nerve Cells using Ridge Detection. The cells are first enhanced by means of the first eigenvalue of the Hessian matrix. Then, a hysteresis-based ridge segmentation returns the outline of the cell, which is used to separate it from the background.

  7. Analysis of neonatal respiratory distress syndrome among different gestational segments

    PubMed Central

    Wang, Jian; Liu, Xuehua; Zhu, Tong; Yan, Chaoying

    2015-01-01

    Objective: To find more effective diagnosis and treatment of NRDS through comparative analysis of risk factors, clinical characteristics, treatment, and prognosis among neonates of different gestational ages with respiratory distress syndrome. Methods: The clinical data of 232 neonates who had been admitted to the neonatal intensive care unit and diagnosed with NRDS from January 2008 to December 2010 were retrospectively analyzed. These cases were divided into three groups according to gestational age: full-term, late preterm, and early preterm. Statistical analysis was used to detect differences in relative factors among the three groups. Results: Full-term and late preterm infants together accounted for more than 50% of cases. The majority of full-term infants were born at less than 39 weeks (83.7%). As many as 61.1% of the late preterm infants were born to mothers over 30 years of age. The incidence of Cesarean section was high in all three groups, especially the full-term (90.7%) and late preterm (86.1%) groups. Regarding clinical features, full-term infants had late onsets, more than 12 h after birth. Air bronchograms were common in early preterm neonates, affecting 92% of them, but rare in the other two groups. The incidence of lung infection in each group was about 50%. In addition, gas leakage and PPHN were more common complications in the full-term and late preterm groups, whereas bronchopulmonary dysplasia and intracranial hemorrhage were more common in the early preterm group. For treatment, the proportions of full-term infants receiving HFOV and NO were 57.0% and 24.4%, and for late preterm infants 36.1% and 22.2%; HFOV and NO were used less in early preterm infants than in the other groups. There was no significant difference in the duration of invasive ventilation between the groups. However, the noninvasive ventilation time after extubation was as long as 10.1±0.5 days in early preterm infants. The proportions of infants receiving PS were 53.5%, 83.3% and 81.8%, respectively. OI values improved greatly 2 h after application of PS in early preterm infants, whereas an obvious difference was found only after 24 h in full-term and late preterm infants. Conclusion: Beyond early preterm infants, NRDS shows a growing trend in full-term and late preterm infants. Infants of different gestational ages have their own characteristic risk factors; Cesarean section greatly impacts the incidence in term and late preterm infants. The clinical features, chest X-ray changes, and common complications differ between term and premature infants with NRDS. PS treatment works more slowly in term and late preterm infants, who needed more HFOV and NO treatment.

  8. Analysis, design, and test of a graphite/polyimide Shuttle orbiter body flap segment

    NASA Technical Reports Server (NTRS)

    Graves, S. R.; Morita, W. H.

    1982-01-01

    For future missions, increases in Space Shuttle orbiter deliverable and recoverable payload weight capability may be needed. Such increases could be obtained by reducing the inert weight of the Shuttle. The application of advanced composites in orbiter structural components would make it possible to achieve such reductions. In 1975, NASA selected the orbiter body flap as a demonstration component for the Composites for Advanced Space Transportation Systems (CASTS) program. The progress made in 1977 through 1980 was integrated into a design of a graphite/polyimide (Gr/Pi) body flap technology demonstration segment (TDS). Aspects of composite body flap design and analysis are discussed, taking into account the direct-bond fibrous refractory composite insulation (FRCI) tile on Gr/Pi structure, Gr/Pi body flap weight savings, the body flap design concept, and composite body flap analysis. Details regarding the Gr/Pi technology demonstration segment are also examined.

  9. Stress Analysis of Bolted, Segmented Cylindrical Shells Exhibiting Flange Mating-Surface Waviness

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Phillips, Dawn R.; Raju, Ivatury S.

    2009-01-01

    Bolted, segmented cylindrical shells are a common structural component in many engineering systems especially for aerospace launch vehicles. Segmented shells are often needed due to limitations of manufacturing capabilities or transportation issues related to very long, large-diameter cylindrical shells. These cylindrical shells typically have a flange or ring welded to opposite ends so that shell segments can be mated together and bolted to form a larger structural system. As the diameter of these shells increases, maintaining strict fabrication tolerances for the flanges to be flat and parallel on a welded structure is an extreme challenge. Local fit-up stresses develop in the structure due to flange mating-surface mismatch (flange waviness). These local stresses need to be considered when predicting a critical initial flaw size. Flange waviness is one contributor to the fit-up stress state. The present paper describes the modeling and analysis effort to simulate fit-up stresses due to flange waviness in a typical bolted, segmented cylindrical shell. Results from parametric studies are presented for various flange mating-surface waviness distributions and amplitudes.

  10. An Improved Level Set for Liver Segmentation and Perfusion Analysis in MRIs.

    PubMed

    Chen, Gang; Gu, Lixu; Qian, Lijun; Xu, Jianrong

    2009-01-01

    Determining liver segmentation accurately from MRIs is the primary and crucial step for any automated liver perfusion analysis, which provides important information about the blood supply to the liver. Although implicit contour extraction methods, such as level set methods (LSMs) and active contours, are often used to segment livers, the results are not always satisfactory due to the presence of artifacts and low-gradient response on the liver boundary. In this paper, we propose a multiple-initialization, multiple-step LSM to overcome the leakage and over-segmentation problems. The multiple initialization curves are first evolved separately using fast marching methods and LSMs, and are then combined with a convex hull algorithm to obtain a rough liver contour. Finally, the contour is evolved again using global level set smoothing to determine a precise liver boundary. Experimental results on 12 abdominal MRI series showed that the proposed approach obtained better liver segmentation results, so that a refined liver perfusion curve free of respiratory effects can be obtained using a modified chamfer matching algorithm; the perfusion curves were evaluated by radiologists. PMID:19129028

  11. Pulse shape analysis in segmented detectors as a technique for background reduction in Ge double-beta decay experiments

    E-print Network

    S. R. Elliott; V. M. Gehman; K. Kazkaz; D-M. Mei; A. R. Young

    2005-09-20

    The need to understand and reject backgrounds in Ge-diode detector double-beta decay experiments has given rise to the development of pulse shape analysis in such detectors to discern single-site energy deposits from multiple-site deposits. Here, we extend this analysis to segmented Ge detectors to study the effectiveness of combining segmentation with pulse shape analysis to identify the multiplicity of the energy deposits.

  12. Fetal autonomic brain age scores, segmented heart rate variability analysis, and traditional short term variability

    PubMed Central

    Hoyer, Dirk; Kowalski, Eva-Maria; Schmidt, Alexander; Tetschke, Florian; Nowack, Samuel; Rudolph, Anja; Wallwitz, Ulrike; Kynass, Isabelle; Bode, Franziska; Tegtmeyer, Janine; Kumm, Kathrin; Moraru, Liviu; Götz, Theresa; Haueisen, Jens; Witte, Otto W.; Schleußner, Ekkehard; Schneider, Uwe

    2014-01-01

    Disturbances of fetal autonomic brain development can be evaluated from fetal heart rate patterns (HRP) reflecting the activity of the autonomic nervous system. Although HRP analysis from cardiotocographic (CTG) recordings is established for fetal surveillance, temporal resolution is low. Fetal magnetocardiography (MCG), however, provides stable continuous recordings at a higher temporal resolution combined with a more precise heart rate variability (HRV) analysis. A direct comparison of CTG and MCG based HRV analysis is pending. The aims of the present study are: (i) to compare the fetal maturation age predicting value of the MCG based fetal Autonomic Brain Age Score (fABAS) approach with that of CTG based Dawes-Redman methodology; and (ii) to elaborate fABAS methodology by segmentation according to fetal behavioral states and HRP. We investigated MCG recordings from 418 normal fetuses, aged between 21 and 40 weeks of gestation. In linear regression models we obtained an age predicting value of CTG compatible short term variability (STV) of R2 = 0.200 (coefficient of determination) in contrast to MCG/fABAS related multivariate models with R2 = 0.648 in 30 min recordings, R2 = 0.610 in active sleep segments of 10 min, and R2 = 0.626 in quiet sleep segments of 10 min. Additionally, segmented analysis under particular exclusion of accelerations (AC) and decelerations (DC) in quiet sleep resulted in a novel multivariate model with R2 = 0.706. According to our results, fMCG based fABAS may provide a promising tool for the estimation of fetal autonomic brain age. Besides other traditional and novel HRV indices as possible indicators of developmental disturbances, the establishment of a fABAS score nomogram may represent a specific reference. The present results are intended to contribute to further exploration and validation using independent data sets and multicenter research structures. PMID:25505399
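The model comparison above rests on the coefficient of determination R2 from linear regression. A minimal single-predictor sketch of ordinary least squares and R2 is shown below; it is illustrative only (the study's fABAS models are multivariate), and the data are hypothetical.

```python
# Ordinary least squares y = a + b*x and the coefficient of
# determination R^2, the statistic used to compare the age-predicting
# models. Single-predictor sketch with hypothetical data.

def fit_r2(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                 # slope
    a = my - b * mx               # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Perfectly linear data -> R^2 of 1.
x = [21, 25, 30, 35, 40]           # gestational weeks (hypothetical)
y = [2 * xi + 3 for xi in x]       # hypothetical linear response
a, b, r2 = fit_r2(x, y)
```

Noisier data would push R2 below 1, which is exactly how the STV-only model (R2 = 0.200) and the multivariate fABAS models (R2 up to 0.706) are being ranked.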

  13. Buckling Design and Analysis of a Payload Fairing One-Sixth Cylindrical Arc-Segment Panel

    NASA Technical Reports Server (NTRS)

    Kosareo, Daniel N.; Oliver, Stanley T.; Bednarcyk, Brett A.

    2013-01-01

    Design and analysis results are reported for a panel that is a one-sixth arc-segment of a full 33-ft diameter cylindrical barrel section of a payload fairing structure. Six such panels could be used to construct the fairing barrel, and, as such, compression buckling testing of a one-sixth arc-segment panel would serve as a validation test of the buckling analyses used to design the fairing panels. In this report, linear and nonlinear buckling analyses have been performed using finite element software for one-sixth arc-segment panels composed of aluminum honeycomb core with graphite/epoxy composite facesheets and an alternative fiber reinforced foam (FRF) composite sandwich design. The cross sections of both concepts were sized to represent realistic Space Launch System (SLS) payload fairing panels. Based on shell-based linear buckling analyses, smaller, more manageable buckling test panel dimensions were determined such that the panel would still be expected to buckle with a circumferential (as opposed to column-like) mode with significant separation between the first and second buckling modes. More detailed nonlinear buckling analyses were then conducted for honeycomb panels of various sizes using both Abaqus and ANSYS finite element codes, and for the smaller size panel, a solid-based finite element analysis was conducted. Finally, for the smaller size FRF panel, nonlinear buckling analysis was performed wherein geometric imperfections measured from an actual manufactured FRF panel were included. It was found that the measured imperfection did not significantly affect the panel's predicted buckling response.

  14. Mean platelet volume is associated with infarct size and microvascular obstruction estimated by cardiac magnetic resonance in ST segment elevation myocardial infarction.

    PubMed

    Fabregat-Andrés, Óscar; Cubillos, Andrés; Ferrando-Beltrán, Mónica; Bochard-Villanueva, Bruno; Estornell-Erill, Jordi; Fácila, Lorenzo; Ridocci-Soriano, Francisco; Morell, Salvador

    2013-06-01

    Mean platelet volume (MPV) is an indicator of platelet activation. High MPV has recently been considered an independent risk factor for poor outcomes after ST-segment elevation myocardial infarction (STEMI). We analyzed 128 patients diagnosed with first STEMI successfully reperfused during three consecutive years. MPV was measured on admission and a cardiac magnetic resonance (CMR) exam was performed within the first week in all patients. Myocardial necrosis size was estimated by the area of late gadolinium enhancement (LGE), identifying microvascular obstruction (MVO), if present. Clinical outcomes were recorded at 1 year follow-up. High MPV was defined as a value in the third tertile (≥9.5 fl), and low MPV as a value in the lower two. We found a slight but significant correlation between MPV and infarct size (r = 0.287, P = 0.008). Patients with high MPV had a more extensive infarcted area (percentage of necrosis by LGE: 17.6 vs. 12.5%, P = 0.021) and a higher prevalence of MVO (patients with MVO pattern: 44.4 vs. 25.3%, P = 0.027). In multivariable analysis, the hazard ratio for major adverse cardiac events was 3.35 [95% confidence interval (CI) 1.1-9.9, P = 0.03] in patients with high MPV. High MPV in patients with first STEMI is associated with larger infarct size and a higher prevalence of MVO as measured by CMR. PMID:23322274

  15. Predictive value of admission platelet volume indices for in-hospital major adverse cardiovascular events in acute ST-segment elevation myocardial infarction.

    PubMed

    Celik, Turgay; Kaya, Mehmet G; Akpek, Mahmut; Gunebakmaz, Ozgur; Balta, Sevket; Sarli, Bahadir; Duran, Mustafa; Demirkol, Sait; Uysal, Onur Kadir; Oguzhan, Abdurrahman; Gibson, C Michael

    2015-02-01

    Although mean platelet volume (MPV) is an independent correlate of impaired angiographic reperfusion and 6-month mortality in ST-segment elevation myocardial infarction (STEMI) treated with primary percutaneous coronary intervention (pPCI), there is less data regarding the association between platelet distribution width (PDW) and in-hospital major adverse cardiovascular events (MACEs). A total of 306 patients with STEMI pPCI were evaluated. No reflow was defined as a post-PCI thrombolysis in myocardial infarction (TIMI) flow grade of 0, 1, or 2 (group 1). Angiographic success was defined as TIMI flow grade 3 (group 2). The values of MPV and PDW were higher among patients with no reflow. In-stent thrombosis, nonfatal myocardial infarction, in-hospital mortality, and MACEs were significantly more frequent among patients with no reflow. In multivariate analysis, PDW, MPV, high-sensitivity C-reactive protein, and glucose on admission were independent correlates of in-hospital MACEs. Admission PDW and MPV are independent correlates of no reflow and in-hospital MACEs among patients with STEMI undergoing pPCI. PMID:24301422

  16. Analysis of object segmentation methods for VOP generation in MPEG-4

    NASA Astrophysics Data System (ADS)

    Vaithianathan, Karthikeyan; Panchanathan, Sethuraman

    2000-04-01

    The recent audio-visual standard MPEG-4 emphasizes content-based information representation and coding. Rather than operating at the level of pixels, MPEG-4 operates at a higher level of abstraction, capturing the information based on the content of a video sequence. Video object plane (VOP) extraction is an important step in defining the content of any video sequence, except in the case of authored applications which involve creation of video sequences using synthetic objects and graphics. The generation of VOPs from a video sequence involves segmenting the objects from every frame of the video sequence. The problem of object segmentation is also being addressed by the Computer Vision community. The major problem faced by researchers is to define object boundaries such that they are semantically meaningful. Finding a single robust solution for this problem that can work for all kinds of video sequences remains a challenging task. The object segmentation problem can be simplified by imposing constraints on the video sequences. These constraints largely depend on the type of application where the segmentation technique will be used. The purpose of this paper is twofold. In the first section, we summarize the state-of-the-art research in this topic and analyze the various VOP generation and object segmentation methods that have been presented in the recent literature. In the next section, we focus on the different types of video sequences, the important cues that can be employed for efficient object segmentation, the different object segmentation techniques, and the types of techniques that are well suited to each type of application. A detailed analysis of these approaches from the perspective of accuracy of the object boundaries, robustness towards different kinds of video sequences, ability to track the objects through the video sequences, and complexity involved in implementing these approaches, along with other limitations, will be discussed. In the final section, we concentrate on the specific problems that require special attention and discuss the scope and direction for further research.

  17. Comparative analysis of nonlinear dimensionality reduction techniques for breast MRI segmentation

    SciTech Connect

    Akhbardeh, Alireza; Jacobs, Michael A.

    2012-04-15

    Purpose: Visualization of anatomical structures using radiological imaging methods is an important tool in medicine to differentiate normal from pathological tissue and can generate large amounts of data for a radiologist to read. Integrating these large data sets is difficult and time-consuming. A new approach uses both supervised and unsupervised advanced machine learning techniques to visualize and segment radiological data. This study describes the application of a novel hybrid scheme, based on combining wavelet transform and nonlinear dimensionality reduction (NLDR) methods, to breast magnetic resonance imaging (MRI) data using three well-established NLDR techniques, namely, ISOMAP, local linear embedding (LLE), and diffusion maps (DfM), to perform a comparative performance analysis. Methods: Twenty-five breast lesion subjects were scanned using a 3T scanner. MRI sequences used were T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging. The hybrid scheme consisted of two steps: preprocessing and postprocessing of the data. The preprocessing step was applied for B1 inhomogeneity correction, image registration, and wavelet-based image compression to match and denoise the data. In the postprocessing step, MRI parameters were considered data dimensions and the NLDR-based hybrid approach was applied to integrate the MRI parameters into a single image, termed the embedded image. This was achieved by mapping all pixel intensities from the higher dimension to a lower dimensional (embedded) space. For validation, the authors compared the hybrid NLDR with linear methods of principal component analysis (PCA) and multidimensional scaling (MDS) using synthetic data. For the clinical application, the authors used breast MRI data, comparison was performed using the postcontrast DCE MRI image and evaluating the congruence of the segmented lesions. 
Results: The NLDR-based hybrid approach was able to define and segment both synthetic and clinical data. In the synthetic data, the authors demonstrated the performance of the NLDR method compared with conventional linear DR methods. The NLDR approach enabled successful segmentation of the structures, whereas, in most cases, PCA and MDS failed. The NLDR approach was able to segment different breast tissue types with a high accuracy and the embedded image of the breast MRI data demonstrated fuzzy boundaries between the different types of breast tissue, i.e., fatty, glandular, and tissue with lesions (>86%). Conclusions: The proposed hybrid NLDR methods were able to segment clinical breast data with a high accuracy and construct an embedded image that visualized the contribution of different radiological parameters.
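The PCA baseline that the hybrid NLDR scheme is compared against has a closed form in two dimensions: eigen-decompose the 2×2 covariance matrix and project onto the leading eigenvector. The sketch below shows this for two hypothetical MRI-parameter dimensions; it is illustrative only, not the authors' pipeline.

```python
import math

# Closed-form 2-D PCA: the linear baseline the hybrid NLDR scheme is
# compared against. Eigen-decomposes the 2x2 covariance matrix and
# projects each point onto the first principal component.

def pca_2d(points):
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Largest eigenvalue of [[cxx, cxy], [cxy, cyy]].
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    lam = tr / 2 + math.sqrt(tr * tr / 4 - det)
    # Corresponding eigenvector (b, lam - a); fall back if degenerate.
    vx, vy = cxy, lam - cxx
    if abs(vx) < 1e-12 and abs(vy) < 1e-12:
        vx, vy = 1.0, 0.0
    norm = math.hypot(vx, vy)
    vx, vy = vx / norm, vy / norm
    scores = [(p[0] - mx) * vx + (p[1] - my) * vy for p in points]
    return (vx, vy), scores

# Points spread along the line y = x: first PC should be (1,1)/sqrt(2).
pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]
(vx, vy), scores = pca_2d(pts)
```

Nonlinear manifolds (curved clusters) are exactly where such a linear projection fails and the NLDR methods in the study (ISOMAP, LLE, DfM) are expected to do better.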

  18. Method 349.0 Determination of Ammonia in Estuarine and Coastal Waters by Gas Segmented Continuous Flow Colorimetric Analysis

    EPA Science Inventory

    This method provides a procedure for the determination of ammonia in estuarine and coastal waters. The method is based upon the indophenol reaction, here adapted to automated gas-segmented continuous flow analysis.

  19. Multiresolution analysis using wavelet, ridgelet, and curvelet transforms for medical image segmentation.

    PubMed

    Alzubi, Shadi; Islam, Naveed; Abbod, Maysam

    2011-01-01

    The experimental study presented in this paper is aimed at the development of an automatic image segmentation system for classifying regions of interest (ROI) in medical images obtained from different medical scanners such as PET, CT, or MRI. Multiresolution analysis (MRA) using wavelet, ridgelet, and curvelet transforms has been used in the proposed segmentation system. It is a particularly challenging task to classify cancers in human organs in scanner output using shape or gray-level information: organ shapes change through different slices in a medical stack, and gray-level intensities overlap in soft tissues. The curvelet transform is a new extension of the wavelet and ridgelet transforms which aims to deal with interesting phenomena occurring along curves. The curvelet transform has been tested on medical data sets, and results are compared with those obtained from the other transforms. Tests indicate that using curvelets significantly improves the classification of abnormal tissues in the scans and reduces the surrounding noise. PMID:21960988

  20. Multiresolution Analysis Using Wavelet, Ridgelet, and Curvelet Transforms for Medical Image Segmentation

    PubMed Central

    AlZubi, Shadi; Islam, Naveed; Abbod, Maysam

    2011-01-01

The experimental study presented in this paper is aimed at the development of an automatic image segmentation system for classifying regions of interest (ROI) in medical images obtained from different medical scanners such as PET, CT, or MRI. Multiresolution analysis (MRA) using wavelet, ridgelet, and curvelet transforms has been used in the proposed segmentation system. Classifying cancers in human organs from scanner output using shape or gray-level information is a particularly challenging task: organ shapes change through the different slices of a medical stack, and gray-level intensities overlap in soft tissues. The curvelet transform is a recent extension of the wavelet and ridgelet transforms that aims to deal with interesting phenomena occurring along curves. The curvelet transform has been tested on medical data sets, and the results are compared with those obtained from the other transforms. Tests indicate that using curvelets significantly improves the classification of abnormal tissues in the scans and reduces the surrounding noise. PMID:21960988

  1. Simplex volume analysis for finding endmembers in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Li, Hsiao-Chi; Song, Meiping; Chang, Chein-I.

    2015-05-01

Using maximal simplex volume as an optimality criterion for finding endmembers is a common approach and has been widely studied in the literature. Interestingly, very little work has been reported on how simplex volume is actually calculated. It turns out that calculating simplex volume is much more complicated and involved than one might think. This paper investigates the issue from two different aspects: geometric structure and eigen-analysis. The geometric approach derives the volume from the simplex structure itself, multiplying its base by its height. Eigen-analysis, on the other hand, takes advantage of the Cayley-Menger determinant to calculate the simplex volume. The major issue with this approach arises when the matrix whose determinant is required is rank deficient. To deal with this problem, two methods are generally considered. One is to perform dimensionality reduction on the data to make the matrix full rank; its drawback is that the original volume is shrunk, so the volume found for the dimensionality-reduced simplex is not the true simplex volume. Another is to use singular value decomposition (SVD) to find the singular values for calculating simplex volume; its dilemma is numerical instability. This paper explores all three methods of simplex volume calculation. Experimental results show that the geometric structure-based method yields the most reliable simplex volume.
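    The Cayley-Menger route discussed above can be sketched directly: the bordered determinant of squared inter-vertex distances gives V_n^2 = (-1)^(n+1) det(B) / (2^n (n!)^2). A minimal pure-Python implementation, using exact arithmetic and cofactor expansion for the small matrices involved:

```python
import math
from fractions import Fraction

def det(m):
    """Determinant by cofactor expansion (fine for the small
    bordered matrices used here)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def simplex_volume(points):
    """Volume of an n-simplex from its n+1 vertices via the
    Cayley-Menger determinant."""
    n = len(points) - 1
    # squared inter-vertex distances, kept exact with Fraction
    d2 = [[Fraction(sum((a - b) ** 2 for a, b in zip(p, q)))
           for q in points] for p in points]
    # bordered (n+2)x(n+2) Cayley-Menger matrix
    b = [[Fraction(0)] + [Fraction(1)] * (n + 1)]
    for i in range(n + 1):
        b.append([Fraction(1)] + d2[i])
    v2 = (-1) ** (n + 1) * det(b) / (Fraction(2) ** n
                                     * Fraction(math.factorial(n)) ** 2)
    return math.sqrt(float(v2))
```

    For the unit right triangle the result is 0.5, and for the unit right tetrahedron 1/6, matching the base-times-height computation; rank deficiency of the distance matrix is exactly the failure mode the abstract describes.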

  2. Multivariate statistical analysis as a tool for the segmentation of 3D spectral data.

    PubMed

    Lucas, G; Burdet, P; Cantoni, M; Hébert, C

    2013-01-01

Acquisition of three-dimensional (3D) spectral data is nowadays common using many different microanalytical techniques. In order to proceed to the 3D reconstruction, data processing is necessary not only to deal with noisy acquisitions but also to segment the data in terms of chemical composition. In this article, we demonstrate the value of multivariate statistical analysis (MSA) methods for this purpose, allowing fast and reliable results. Using scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX) coupled with a focused ion beam (FIB), a stack of spectrum images was acquired on a sample produced by laser welding of a nickel-titanium wire and a stainless steel wire presenting a complex microstructure. These data have been analyzed using principal component analysis (PCA) and factor rotations. PCA significantly improves the overall quality of the data but produces abstract components. Here it is shown that rotated components can be used without prior knowledge of the sample to help interpret the data, quickly yielding qualitative maps representative of the elements or compounds found in the material. Such abundance maps can then be used to plot scatter diagrams and interactively identify the different domains present by defining clusters of voxels with similar compositions. Identified voxels are advantageously overlaid on higher-resolution secondary electron (SE) images in order to refine the segmentation. The 3D reconstruction can then be performed using available commercial software on the basis of the provided segmentation. To assess the quality of the segmentation, the results have been compared to an EDX quantification performed on the same data. PMID:24035679
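    The PCA step can be illustrated with a minimal sketch: the first principal component extracted by power iteration on the sample covariance matrix. This is a stand-in for the full MSA pipeline, which also involves the factor rotations not shown here.

```python
import math

def covariance(data):
    """Sample covariance matrix of row-vector observations."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for row in data:
        for i in range(d):
            for j in range(d):
                cov[i][j] += (row[i] - means[i]) * (row[j] - means[j]) / (n - 1)
    return cov

def first_component(data, iters=200):
    """First principal component (dominant covariance eigenvector)
    via power iteration."""
    cov = covariance(data)
    d = len(cov)
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v
```

    On spectrum-image data the observations would be per-voxel spectra; projecting them onto the leading components is what removes most of the acquisition noise.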

  3. Profiling the different needs and expectations of patients for population-based medicine: a case study using segmentation analysis

    PubMed Central

    2012-01-01

Background: This study illustrates an evidence-based method for the segmentation analysis of patients that could greatly improve the approach to population-based medicine by filling a gap in the empirical analysis of this topic. Segmentation facilitates individual patient care in the context of the culture, health status, and health needs of the entire population to which that patient belongs. Because many health systems are engaged in developing better chronic care management initiatives, patient profiles are critical to understanding whether some patients can move toward effective self-management and can play a central role in determining their own care, which fosters a sense of responsibility for their own health. A review of the literature on patient segmentation provided the background for this research. Method: First, we conducted a literature review on patient satisfaction and segmentation to build a survey. Then, we administered 3,461 surveys to users of outpatient services. The key structures on which the subjects’ perception of outpatient services was based were extrapolated using principal component factor analysis with varimax rotation. After the factor analysis, segmentation was performed through cluster analysis to better analyze the influence of individual attitudes on the results. Results: Four segments were identified through factor and cluster analysis: the “unpretentious,” the “informed and supported,” the “experts” and the “advanced” patients. Their policy and managerial implications are outlined. Conclusions: With this research, we provide a method for profiling patients based on common patient satisfaction surveys that is easily replicable in all health systems and contexts, and a proposal for segments based on the results of a broad-based analysis conducted in the Italian National Health System (INHS). Segments represent profiles of patients requiring different strategies for delivering health services. Knowledge and analysis of these segments can support efforts to build an effective population-based medicine approach. PMID:23256543
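    The cluster-analysis step can be sketched with a plain k-means on hypothetical 2-D factor scores; the study's actual survey data and factor structure are not reproduced here.

```python
def kmeans(points, centers, iters=50):
    """Plain k-means: assign each point to its nearest center, then
    recompute each center as the mean of its cluster."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[dists.index(min(dists))].append(p)
        centers = [
            [sum(p[j] for p in cl) / len(cl) for j in range(len(cl[0]))]
            if cl else c  # keep an empty cluster's old center
            for cl, c in zip(clusters, centers)
        ]
    return centers, clusters
```

    Each resulting cluster corresponds to a candidate patient segment; profiling then means inspecting the factor-score means of each cluster.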

  4. Feature-driven model-based segmentation

    NASA Astrophysics Data System (ADS)

    Qazi, Arish A.; Kim, John; Jaffray, David A.; Pekar, Vladimir

    2011-03-01

The accurate delineation of anatomical structures is required in many medical image analysis applications. One example is radiation therapy planning (RTP), where traditional manual delineation is tedious, labor intensive, and can require hours of a clinician's valuable time. The majority of automated segmentation methods in RTP follow either model-based or atlas-based approaches. One substantial limitation of model-based segmentation is that its accuracy may be restricted by the uncertainties in image content, specifically when segmenting low-contrast anatomical structures, e.g. soft tissue organs in computed tomography images. In this paper, we introduce a non-parametric feature enhancement filter which replaces raw intensity image data by a high level probabilistic map which guides the deformable model to reliably segment low-contrast regions. The method is evaluated by segmenting the submandibular and parotid glands in the head and neck region and comparing the results to manual segmentations in terms of the volume overlap. Quantitative results show that we are in overall good agreement with expert segmentations, achieving volume overlap of up to 80%. Qualitatively, we demonstrate that we are able to segment low-contrast regions, which otherwise are difficult to delineate with deformable models relying on distinct object boundaries from the original image data.

  5. Lymph node segmentation using active contours

    NASA Astrophysics Data System (ADS)

    Honea, David M.; Ge, Yaorong; Snyder, Wesley E.; Hemler, Paul F.; Vining, David J.

    1997-04-01

Node volume analysis is medically important. An automatic method of segmenting nodes in spiral CT x-ray images is needed to produce accurate, consistent, and efficient volume measurements. The method of active contours (snakes) is proposed here as a good solution to the node segmentation problem. Optimum parameterization and search strategies for using a two-dimensional snake to find node cross-sections are described, and an energy normalization scheme which preserves important spatial variations in energy is introduced. Three-dimensional segmentation is achieved without additional operator interaction by propagating the 2D results to adjacent slices. The method gives promising segmentation results on both simulated and real node images.
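    The internal (continuity plus curvature) energy that a discrete snake minimizes can be sketched as follows; the external image-energy term and the optimization loop are omitted for brevity:

```python
def internal_energy(contour, alpha=1.0, beta=1.0):
    """Discrete internal energy of a closed snake:
    alpha * |v_i - v_{i-1}|^2            (continuity/elasticity)
    + beta * |v_{i-1} - 2 v_i + v_{i+1}|^2  (curvature/bending)."""
    n = len(contour)
    e = 0.0
    for i in range(n):
        xp, yp = contour[i - 1]           # previous vertex (wraps around)
        x, y = contour[i]
        xn, yn = contour[(i + 1) % n]     # next vertex (wraps around)
        e += alpha * ((x - xp) ** 2 + (y - yp) ** 2)
        e += beta * ((xp - 2 * x + xn) ** 2 + (yp - 2 * y + yn) ** 2)
    return e
```

    A greedy or variational optimizer would move each vertex to reduce this energy plus an image term such as the negative gradient magnitude, pulling the contour onto the node boundary.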

  6. Segmentation of Unstructured Datasets

    NASA Technical Reports Server (NTRS)

    Bhat, Smitha

    1996-01-01

    Datasets generated by computer simulations and experiments in Computational Fluid Dynamics tend to be extremely large and complex. It is difficult to visualize these datasets using standard techniques like Volume Rendering and Ray Casting. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This thesis explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and from Finite Element Analysis.

  7. Multi-Modal Glioblastoma Segmentation: Man versus Machine

    PubMed Central

    Pica, Alessia; Schucht, Philippe; Beck, Jürgen; Verma, Rajeev Kumar; Slotboom, Johannes; Reyes, Mauricio; Wiest, Roland

    2014-01-01

Background and Purpose: Reproducible segmentation of brain tumors on magnetic resonance images is an important clinical need. This study was designed to evaluate the reliability of a novel fully automated segmentation tool for brain tumor image analysis in comparison to manually defined tumor segmentations. Methods: We prospectively evaluated preoperative MR images from 25 glioblastoma patients. Two independent expert raters performed manual segmentations. Automatic segmentations were performed using the Brain Tumor Image Analysis software (BraTumIA). In order to study the different tumor compartments, the complete tumor volume TV (enhancing part plus non-enhancing part plus necrotic core of the tumor), the TV+ (TV plus edema) and the contrast-enhancing tumor volume CETV were identified. We quantified the overlap between manual and automated segmentation by calculation of diameter measurements as well as the Dice coefficients, the positive predictive values, sensitivity, relative volume error and absolute volume error. Results: Comparison of automated versus manual extraction of 2-dimensional diameter measurements showed no significant difference (p = 0.29). Comparison of automated versus manual volumetric segmentations showed significant differences for TV+ and TV (p<0.05) but no significant differences for CETV (p>0.05) with regard to the Dice overlap coefficients. Spearman's rank correlation coefficients (ρ) of TV+, TV and CETV showed highly significant correlations between automatic and manual segmentations. Tumor localization did not influence the accuracy of segmentation. Conclusions: In summary, we demonstrated that BraTumIA supports radiologists and clinicians by providing accurate measures of cross-sectional diameter-based tumor extensions. The automated volume measurements were comparable to manual tumor delineation for CETV tumor volumes, and outperformed inter-rater variability for overlap and sensitivity. PMID:24804720
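    The overlap measures used in the comparison (Dice coefficient, positive predictive value, sensitivity) can be sketched for segmentations represented as sets of voxel indices:

```python
def overlap_metrics(auto, manual):
    """Voxel-overlap metrics between an automatic and a manual
    segmentation, each given as a set of voxel indices."""
    inter = len(auto & manual)
    dice = 2.0 * inter / (len(auto) + len(manual))
    ppv = inter / len(auto)            # fraction of auto voxels that are correct
    sensitivity = inter / len(manual)  # fraction of manual voxels recovered
    return dice, ppv, sensitivity
```

    Relative and absolute volume errors follow from the two set sizes alone; Dice is the harmonic mean of PPV and sensitivity.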

  8. Automatic segmentation and analysis of fibrin networks in 3D confocal microscopy images

    NASA Astrophysics Data System (ADS)

    Liu, Xiaomin; Mu, Jian; Machlus, Kellie R.; Wolberg, Alisa S.; Rosen, Elliot D.; Xu, Zhiliang; Alber, Mark S.; Chen, Danny Z.

    2012-02-01

Fibrin networks are a major component of blood clots and provide structural support to growing clots. Abnormal fibrin networks that are too rigid or too unstable can promote cardiovascular problems and/or bleeding. However, current biological studies of fibrin networks rarely perform quantitative analysis of their structural properties (e.g., the density of branch points) due to the massive branching structures of the networks. In this paper, we present a new approach for segmenting and analyzing fibrin networks in 3D confocal microscopy images. We first identify the target fibrin network by applying the 3D region growing method with global thresholding. We then produce a one-voxel-wide centerline for each fiber segment, along which the branch points and other structural information of the network can be obtained. Branch points are identified by a novel approach based on the outer medial axis. Cells within the fibrin network are segmented by a new algorithm that combines cluster detection and surface reconstruction based on the α-shape approach. Our algorithm has been evaluated on computer phantom images of fibrin networks for identifying branch points. Experiments on z-stack images of different types of fibrin networks yielded results that are consistent with biological observations.
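    The first step, region growing with global thresholding, can be sketched in 2D as a breadth-first flood fill from a seed; this is a simplified stand-in for the paper's 3D version.

```python
from collections import deque

def region_grow(image, seed, threshold):
    """2D region growing: BFS from the seed over 4-connected pixels
    whose intensity is >= threshold (global thresholding)."""
    rows, cols = len(image), len(image[0])
    region, queue = set(), deque([seed])
    while queue:
        r, c = queue.popleft()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if image[r][c] < threshold:
            continue
        region.add((r, c))
        queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return region
```

    In 3D the neighbor list simply gains the two out-of-plane offsets; bright voxels not connected to the seed (stray debris) are excluded automatically.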

  9. Bayesian time series analysis of segments of the Rocky Mountain trumpeter swan population

    USGS Publications Warehouse

    Wright, Christopher K.; Sojda, Richard S.; Goodman, Daniel

    2002-01-01

    A Bayesian time series analysis technique, the dynamic linear model, was used to analyze counts of Trumpeter Swans (Cygnus buccinator) summering in Idaho, Montana, and Wyoming from 1931 to 2000. For the Yellowstone National Park segment of white birds (sub-adults and adults combined) the estimated probability of a positive growth rate is 0.01. The estimated probability of achieving the Subcommittee on Rocky Mountain Trumpeter Swans 2002 population goal of 40 white birds for the Yellowstone segment is less than 0.01. Outside of Yellowstone National Park, Wyoming white birds are estimated to have a 0.79 probability of a positive growth rate with a 0.05 probability of achieving the 2002 objective of 120 white birds. In the Centennial Valley in southwest Montana, results indicate a probability of 0.87 that the white bird population is growing at a positive rate with considerable uncertainty. The estimated probability of achieving the 2002 Centennial Valley objective of 160 white birds is 0.14 but under an alternative model falls to 0.04. The estimated probability that the Targhee National Forest segment of white birds has a positive growth rate is 0.03. In Idaho outside of the Targhee National Forest, white birds are estimated to have a 0.97 probability of a positive growth rate with a 0.18 probability of attaining the 2002 goal of 150 white birds.
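    The dynamic linear model in its simplest, local-level form can be sketched as a forward Kalman filter; the numbers in the test are hypothetical, not the swan counts.

```python
def local_level_filter(observations, m0, c0, obs_var, state_var):
    """Forward (Kalman) filter for a local-level dynamic linear model:
      state: theta_t = theta_{t-1} + w_t,  w_t ~ N(0, state_var)
      obs:   y_t     = theta_t     + v_t,  v_t ~ N(0, obs_var)
    Returns the filtered posterior means and variances."""
    means, variances = [], []
    m, c = m0, c0
    for y in observations:
        r = c + state_var       # prior variance after state evolution
        k = r / (r + obs_var)   # Kalman gain
        m = m + k * (y - m)     # filtered mean
        c = (1 - k) * r         # filtered variance
        means.append(m)
        variances.append(c)
    return means, variances
```

    A growth-rate analysis like the one above would add a trend component to the state vector; probabilities such as P(growth rate > 0) then come from the posterior of that trend term.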

  10. Advanced finite element analysis of L4-L5 implanted spine segment

    NASA Astrophysics Data System (ADS)

Pawlikowski, Marek; Domański, Janusz; Suchocki, Cyprian

    2015-09-01

    In the paper finite element (FE) analysis of implanted lumbar spine segment is presented. The segment model consists of two lumbar vertebrae L4 and L5 and the prosthesis. The model of the intervertebral disc prosthesis consists of two metallic plates and a polyurethane core. Bone tissue is modelled as a linear viscoelastic material. The prosthesis core is made of a polyurethane nanocomposite. It is modelled as a non-linear viscoelastic material. The constitutive law of the core, derived in one of the previous papers, is implemented into the FE software Abaqus®. It was done by means of the User-supplied procedure UMAT. The metallic plates are elastic. The most important parts of the paper include: description of the prosthesis geometrical and numerical modelling, mathematical derivation of stiffness tensor and Kirchhoff stress and implementation of the constitutive model of the polyurethane core into Abaqus® software. Two load cases were considered, i.e. compression and stress relaxation under constant displacement. The goal of the paper is to numerically validate the constitutive law, which was previously formulated, and to perform advanced FE analyses of the implanted L4-L5 spine segment in which non-standard constitutive law for one of the model materials, i.e. the prosthesis core, is implemented.
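    The paper's polyurethane core uses a bespoke non-linear viscoelastic law implemented via a UMAT; as a much simpler stand-in, the stress-relaxation behaviour being validated can be illustrated with a 1-D linear Maxwell element held at constant strain:

```python
import math

def maxwell_relaxation(sigma0, tau, dt, steps):
    """Stress relaxation of a 1D Maxwell element at constant strain:
    d(sigma)/dt = -sigma / tau, integrated exactly per time step."""
    history = [sigma0]
    sigma = sigma0
    for _ in range(steps):
        sigma *= math.exp(-dt / tau)
        history.append(sigma)
    return history
```

    The "stress relaxation under constant displacement" load case in the paper probes exactly this kind of monotone stress decay, though with a non-linear law in place of the single exponential.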

  11. Automated system for ST segment and arrhythmia analysis in exercise radionuclide ventriculography

    SciTech Connect

    Hsia, P.W.; Jenkins, J.M.; Shimoni, Y.; Gage, K.P.; Santinga, J.T.; Pitt, B.

    1986-06-01

    A computer-based system for interpretation of the electrocardiogram (ECG) in the diagnosis of arrhythmia and ST segment abnormality in an exercise system is presented. The system was designed for inclusion in a gamma camera so the ECG diagnosis could be combined with the diagnostic capability of radionuclide ventriculography. Digitized data are analyzed in a beat-by-beat mode and a contextual diagnosis of underlying rhythm is provided. Each beat is assigned a beat code based on a combination of waveform analysis and RR interval measurement. The waveform analysis employs a new correlation coefficient formula which corrects for baseline wander. Selective signal averaging, in which only normal beats are included, is done for an improved signal-to-noise ratio prior to ST segment analysis. Template generation, R wave detection, QRS window size, baseline correction, and continuous updating of heart rate have all been automated. ST level and slope measurements are computed on signal-averaged data. Arrhythmia analysis of 13 passages of abnormal rhythm by computer was found to be correct in 98.4 percent of all beats. 25 passages of exercise data, 1-5 min in length, were evaluated by the cardiologist and found to be in agreement in 95.8 percent in measurements of ST level and 91.7 percent in measurements of ST slope.
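    Two ingredients of the pipeline above can be sketched: a correlation score computed on mean-removed beats (one simple way to discount a constant baseline offset; the paper's exact corrected formula is not reproduced here) and selective averaging that keeps only beats matching the template.

```python
import math

def corr_baseline_free(x, y):
    """Pearson correlation after removing each beat's mean, so a
    constant baseline offset does not affect the match score."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    dx = [a - mx for a in x]
    dy = [b - my for b in y]
    num = sum(a * b for a, b in zip(dx, dy))
    den = math.sqrt(sum(a * a for a in dx) * sum(b * b for b in dy))
    return num / den

def selective_average(template, beats, min_corr=0.9):
    """Average only beats that correlate strongly with the template,
    excluding abnormal beats before ST-segment measurement."""
    kept = [b for b in beats if corr_baseline_free(template, b) >= min_corr]
    n = len(kept)
    return [sum(b[i] for b in kept) / n for i in range(len(template))]
```

    Averaging only the normal beats raises the signal-to-noise ratio of the ST segment without letting ectopic morphology distort the measurement.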

  12. Analysis of forced expired volume signals using multi-exponential

    E-print Network

    Timmer, Jens

Analysis of forced expired volume signals using multi-exponential functions. H. Steltner, M. Vogel. … to complete forced expiration manoeuvres. The aim of the study is to evaluate whether forced vital capacity (FVC), the volume exhaled at the end of a completed forced expiration, can be estimated by extrapolating

  13. Combined Finite Element --Finite Volume Method ( Convergence Analysis )

    E-print Network

    Magdeburg, Universität

Combined Finite Element -- Finite Volume Method (Convergence Analysis). Mária Luk… The idea is to combine finite volume and finite element methods in an appropriate way. … Diffusion terms are discretized by the conforming piecewise linear finite element method

  14. Texture analysis improves level set segmentation of the anterior abdominal wall

    SciTech Connect

    Xu, Zhoubing; Allen, Wade M.; Baucom, Rebeccah B.; Poulose, Benjamin K.; Landman, Bennett A.

    2013-12-15

Purpose: The treatment of ventral hernias (VH) has been a challenging problem for medical care. Repair of these hernias is fraught with failure; recurrence rates ranging from 24% to 43% have been reported, even with the use of biocompatible mesh. Currently, computed tomography (CT) is used to guide intervention through expert, but qualitative, clinical judgments; notably, quantitative metrics based on image processing are not used. The authors propose that image segmentation methods to capture the three-dimensional structure of the abdominal wall and its abnormalities will provide a foundation on which to measure geometric properties of hernias and surrounding tissues and, therefore, to optimize intervention. Methods: In this study with 20 clinically acquired CT scans on postoperative patients, the authors demonstrated a novel approach to geometric classification of the abdominal wall. The authors’ approach uses a texture analysis based on Gabor filters to extract feature vectors and follows a fuzzy c-means clustering method to estimate voxelwise probability memberships for eight clusters. The memberships estimated from the texture analysis are helpful to identify anatomical structures with inhomogeneous intensities. The membership was used to guide the level set evolution, as well as to derive an initial start close to the abdominal wall. Results: Segmentation results on abdominal walls were both quantitatively and qualitatively validated with surface errors based on manually labeled ground truth. Using texture, mean surface errors for the outer surface of the abdominal wall were less than 2 mm, with 91% of the outer surface less than 5 mm away from the manual tracings; errors were significantly greater (2–5 mm) for methods that did not use the texture. Conclusions: The authors’ approach establishes a baseline for characterizing the abdominal wall for improving VH care.
Inherent texture patterns in CT scans are helpful to the tissue classification, and texture analysis can improve the level set segmentation around the abdominal region.
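    The fuzzy c-means membership computation used to guide the level set can be sketched in 1-D, with scalar features standing in for the Gabor feature vectors:

```python
def fcm_memberships(points, centers, m=2.0):
    """Fuzzy c-means membership of point k in cluster i:
    u_ik = 1 / sum_j (d_ik / d_jk) ** (2 / (m - 1))."""
    memberships = []
    for p in points:
        dists = [abs(p - c) for c in centers]
        if 0.0 in dists:  # point sits exactly on a center
            k = dists.index(0.0)
            row = [1.0 if j == k else 0.0 for j in range(len(centers))]
        else:
            row = [1.0 / sum((di / dj) ** (2.0 / (m - 1.0)) for dj in dists)
                   for di in dists]
        memberships.append(row)
    return memberships
```

    Unlike hard k-means, each voxel gets a graded membership in every cluster; it is these soft probabilities that both initialize and steer the level set near inhomogeneous tissue.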

  15. Segmentation and morphometric analysis of cells from fluorescence microscopy images of cytoskeletons.

    PubMed

    Ujihara, Yoshihiro; Nakamura, Masanori; Miyazaki, Hiroshi; Wada, Shigeo

    2013-01-01

    We developed a method to reconstruct cell geometry from confocal fluorescence microscopy images of the cytoskeleton. In the method, region growing was implemented twice. First, it was applied to the extracellular regions to differentiate them from intracellular noncytoskeletal regions, which both appear black in fluorescence microscopy imagery, and then to cell regions for cell identification. Analysis of morphological parameters revealed significant changes in cell shape associated with cytoskeleton disruption, which offered insight into the mechanical role of the cytoskeleton in maintaining cell shape. The proposed segmentation method is promising for investigations on cell morphological changes with respect to internal cytoskeletal structures. PMID:23762186

  16. Guidance for Environmental Background Analysis Volume III: Groundwater

    E-print Network

    Guidance for Environmental Background Analysis Volume III: Groundwater Prepared for: Naval This guidance document provides instructions for characterizing groundwater background conditions and comparing datasets representing groundwater impacted by an actual or potential chemical release to appropriate

  17. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    SciTech Connect

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi

    2010-05-15

Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as the "gold standard." Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold-standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient 0.95) with no statistically significant difference (F = 0.77; p(F ≤ f) = 0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively.
Computerized CT liver volumetry would require substantially less completion time (compared to an average of 39 min per case by manual segmentation). Conclusions: The computerized liver extraction scheme provides an efficient and accurate way of measuring liver volumes in CT.
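    The final volumetric step reduces to counting the voxels inside the refined boundary; a sketch of that computation and of the percent-volume-error metric (the numbers in the test are illustrative, not the study's masks):

```python
def mask_volume_cc(mask, voxel_mm3):
    """Volume implied by a binary 3D mask, in cubic centimetres,
    given the volume of one voxel in cubic millimetres."""
    voxels = sum(v for plane in mask for row in plane for v in row)
    return voxels * voxel_mm3 / 1000.0

def percent_volume_error(auto_cc, manual_cc):
    """Absolute volume error of the automatic result relative to the
    manual gold standard, in percent."""
    return 100.0 * abs(auto_cc - manual_cc) / manual_cc
```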

  18. Volume accumulator design analysis computer codes

    NASA Technical Reports Server (NTRS)

    Whitaker, W. D.; Shimazaki, T. T.

    1973-01-01

    The computer codes, VANEP and VANES, were written and used to aid in the design and performance calculation of the volume accumulator units (VAU) for the 5-kwe reactor thermoelectric system. VANEP computes the VAU design which meets the primary coolant loop VAU volume and pressure performance requirements. VANES computes the performance of the VAU design, determined from the VANEP code, at the conditions of the secondary coolant loop. The codes can also compute the performance characteristics of the VAU's under conditions of possible modes of failure which still permit continued system operation.

  19. Association of mean platelet volume with impaired myocardial reperfusion and short-term mortality in patients with ST-segment elevation myocardial infarction undergoing primary percutaneous coronary intervention.

    PubMed

    Lai, Hong-Mei; Chen, Qing-Jie; Yang, Yi-Ning; Ma, Yi-Tong; Li, Xiao-Mei; Xu, Rui; Zhai, Hui; Liu, Fen; Chen, Bang-Dang; Zhao, Qian

    2016-01-01

Impaired myocardial reperfusion, defined angiographically by myocardial blush grade (MBG) 0 or 1, is associated with adverse clinical outcomes in patients with ST-segment elevation myocardial infarction (STEMI). The aim of this study was to investigate the impact of admission mean platelet volume (MPV) on myocardial reperfusion and 30-day all-cause mortality in patients with STEMI with successful epicardial reperfusion after primary percutaneous coronary intervention (PCI). A total of 453 patients with STEMI who underwent primary PCI within 12 h of symptom onset and achieved thrombolysis in myocardial infarction (TIMI) 3 flow at the infarct-related artery after PCI were enrolled and divided into two groups based on postinterventional MBG: those with MBG 2/3 and those with MBG 0/1. Admission MPV was measured before coronary angiography. The primary endpoint was all-cause mortality at 30 days. MPV was significantly higher in patients with MBG 0/1 than in patients with MBG 2/3 (10.38 ± 0.98 vs. 9.59 ± 0.73, P…). Analysis demonstrated MPV was independently associated with postinterventional impaired myocardial reperfusion (odds ratio 2.684, 95% confidence interval 2.010-3.585, P…).

  20. Three-Dimensional Blood Vessel Segmentation and Centerline Extraction based on Two-Dimensional Cross-Section Analysis.

    PubMed

    Kumar, Rahul Prasanna; Albregtsen, Fritz; Reimers, Martin; Edwin, Bjørn; Langø, Thomas; Elle, Ole Jakob

    2015-05-01

    The segmentation of tubular tree structures like vessel systems in volumetric datasets is of vital interest for many medical applications. In this paper we present a novel, semi-automatic method for blood vessel segmentation and centerline extraction, by tracking the blood vessel tree from a user-initiated seed point to the ends of the blood vessel tree. The novelty of our method is in performing only two-dimensional cross-section analysis for segmentation of the connected blood vessels. The cross-section analysis is done by our novel single-scale or multi-scale circle enhancement filter, used at the blood vessel trunk or bifurcation, respectively. The method was validated for both synthetic and medical images. Our validation has shown that the cross-sectional centerline error for our method is below 0.8 pixels and the Dice coefficient for our segmentation is 80% ± 2.7%. On combining our method with an optional active contour post-processing, the Dice coefficient for the resulting segmentation is found to be 94% ± 2.4%. Furthermore, by restricting the image analysis to the regions of interest and converting most of the three-dimensional calculations to two-dimensional calculations, the processing was found to be more than 18 times faster than Frangi vesselness with thinning, 8 times faster than user-initiated active contour segmentation with thinning and 7 times faster than our previous method. PMID:25398332

  1. Information architecture. Volume 2, Part 1: Baseline analysis summary

    SciTech Connect

    1996-12-01

The Department of Energy (DOE) Information Architecture, Volume 2, Baseline Analysis, is a collaborative and logical next-step effort in the processes required to produce a Departmentwide information architecture. The baseline analysis serves a diverse audience of program management and technical personnel and provides an organized way to examine the Department's existing or de facto information architecture. A companion document to Volume 1, The Foundations, it furnishes the rationale for establishing a Departmentwide information architecture. This volume, consisting of the Baseline Analysis Summary (part 1), Baseline Analysis (part 2), and Reference Data (part 3), is of interest to readers who wish to understand how the Department's current information architecture technologies are employed. The analysis identifies how and where current technologies support business areas, programs, sites, and corporate systems.

  2. Computerized analysis of coronary artery disease: Performance evaluation of segmentation and tracking of coronary arteries in CT angiograms

    SciTech Connect

Zhou, Chuan; Chan, Heang-Ping; Chughtai, Aamer; Kuriakose, Jean; Agarwal, Prachi; Kazerooni, Ella A.; Hadjiiski, Lubomir M.; Patel, Smita; Wei, Jun

    2014-08-15

    Purpose: The authors are developing a computer-aided detection system to assist radiologists in analysis of coronary artery disease in coronary CT angiograms (cCTA). This study evaluated the accuracy of the authors’ coronary artery segmentation and tracking method which are the essential steps to define the search space for the detection of atherosclerotic plaques. Methods: The heart region in cCTA is segmented and the vascular structures are enhanced using the authors’ multiscale coronary artery response (MSCAR) method that performed 3D multiscale filtering and analysis of the eigenvalues of Hessian matrices. Starting from seed points at the origins of the left and right coronary arteries, a 3D rolling balloon region growing (RBG) method that adapts to the local vessel size segmented and tracked each of the coronary arteries and identifies the branches along the tracked vessels. The branches are queued and subsequently tracked until the queue is exhausted. With Institutional Review Board approval, 62 cCTA were collected retrospectively from the authors’ patient files. Three experienced cardiothoracic radiologists manually tracked and marked center points of the coronary arteries as reference standard following the 17-segment model that includes clinically significant coronary arteries. Two radiologists visually examined the computer-segmented vessels and marked the mistakenly tracked veins and noisy structures as false positives (FPs). For the 62 cases, the radiologists marked a total of 10191 center points on 865 visible coronary artery segments. Results: The computer-segmented vessels overlapped with 83.6% (8520/10191) of the center points. Relative to the 865 radiologist-marked segments, the sensitivity reached 91.9% (795/865) if a true positive is defined as a computer-segmented vessel that overlapped with at least 10% of the reference center points marked on the segment. 
    When the overlap threshold was increased to 50% and 100%, the sensitivities were 86.2% and 53.4%, respectively. For the 62 test cases, a total of 55 FPs were identified by the radiologists in 23 of the cases. Conclusions: The authors’ MSCAR-RBG method achieved high sensitivity for coronary artery segmentation and tracking. Studies are underway to further improve the accuracy for arterial segments affected by motion artifacts, severe calcification, and noncalcified soft plaques, and to reduce the false tracking of veins and other noisy structures. Methods are also being developed to detect coronary artery disease along the tracked vessels.
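
    The Hessian-eigenvalue vessel enhancement step can be illustrated with a minimal single-scale sketch (a generic Frangi-style tube filter, not the authors' MSCAR implementation; the function and parameter names are invented for illustration):

```python
import numpy as np
from scipy import ndimage

def tube_response(vol, sigma=2.0):
    """Single-scale tube filter: a bright tubular voxel has two strongly
    negative Hessian eigenvalues and one near zero."""
    sm = ndimage.gaussian_filter(vol.astype(float), sigma)
    grads = np.gradient(sm)
    H = np.empty(vol.shape + (3, 3))
    for i, gi in enumerate(grads):
        gij = np.gradient(gi)
        for j in range(3):
            H[..., i, j] = gij[j]          # second partial derivatives
    lam = np.linalg.eigvalsh(H)            # eigenvalues in ascending order
    l1, l2 = lam[..., 0], lam[..., 1]      # the two most negative ones
    resp = np.sqrt(np.maximum(l1 * l2, 0.0))
    resp[(l1 >= 0) | (l2 >= 0)] = 0.0      # keep bright-tube voxels only
    return resp

# synthetic volume containing one bright tube along the z-axis
vol = np.zeros((20, 21, 21))
vol[:, 10, 10] = 100.0
resp = tube_response(vol)
```

    On this toy volume the response peaks on the tube centreline and vanishes in the background; MSCAR additionally combines responses over several scales.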

  3. Sequence and phylogenetic analysis of M-class genome segments of novel duck reovirus NP03

    PubMed Central

    Wang, Shao; Chen, Shilong; Cheng, Xiaoxia; Chen, Shaoying; Lin, FengQiang; Jiang, Bing; Zhu, Xiaoli; Li, Zhaolong; Wang, Jinxiang

    2015-01-01

    We report the sequence and phylogenetic analysis of the entire M1, M2, and M3 genome segments of the novel duck reovirus (NDRV) NP03. Alignment between the newly determined nucleotide sequences as well as their deduced amino acid sequences and the published sequences of avian reovirus (ARV) was carried out with DNASTAR software. Sequence comparison showed that the M2 gene had the most variability among the M-class genes of DRV. Phylogenetic analysis of the M-class genes of ARV strains revealed different lineages and clusters within DRVs. The 5 NDRV strains used in this study fall into a well-supported lineage that includes chicken ARV strains, whereas Muscovy DRV (MDRV) strains are separate from NDRV strains and form a distinct genetic lineage in the M2 gene tree. However, the MDRV and NDRV strains are closely related and located in a common lineage in the M1 and M3 gene trees, respectively. PMID:25852231

  4. Segmentation and volumetric measurement of renal cysts and parenchyma from MR images of polycystic kidneys using multi-spectral analysis method

    NASA Astrophysics Data System (ADS)

    Bae, K. T.; Commean, P. K.; Brunsden, B. S.; Baumgarten, D. A.; King, B. F., Jr.; Wetzel, L. H.; Kenney, P. J.; Chapman, A. B.; Torres, V. E.; Grantham, J. J.; Guay-Woodford, L. M.; Tao, C.; Miller, J. P.; Meyers, C. M.; Bennett, W. M.

    2008-03-01

    For segmentation and volume measurement of renal cysts and parenchyma from kidney MR images in subjects with autosomal dominant polycystic kidney disease (ADPKD), a semi-automated, multi-spectral analysis (MSA) method was developed and applied to T1- and T2-weighted MR images. In this method, renal cysts and parenchyma were characterized and segmented based on their characteristic T1 and T2 signal intensity differences. The performance of the MSA segmentation method was tested on ADPKD phantoms and patients. Segmented renal cyst and parenchyma volumes were measured and compared with reference standard measurements: the fluid displacement method in the phantoms, and stereology and region-based thresholding methods in patients, respectively. Renal cysts and parenchyma were segmented successfully with the MSA method. The volume measurements obtained with MSA were in good agreement with the measurements by the other segmentation methods for both phantoms and subjects. The MSA method, however, was more time-consuming than the other segmentation methods because it required pre-segmentation, image registration and tissue classification-determination steps.
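
    The tissue-classification idea can be sketched with a tiny k-means on paired (T1, T2) voxel intensities (the intensity values and the clustering step are illustrative assumptions, not the paper's calibrated MSA classifier):

```python
import numpy as np

def kmeans(X, init, iters=20):
    """Plain k-means with fixed initial centers."""
    centers = init.astype(float)
    for _ in range(iters):
        labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0)
                            for j in range(len(centers))])
    return labels, centers

rng = np.random.default_rng(0)
# illustrative voxel features: cysts are dark on T1 / bright on T2,
# parenchyma intermediate on both
cyst = rng.normal([20, 200], 5, size=(500, 2))
parenchyma = rng.normal([120, 80], 5, size=(500, 2))
X = np.vstack([cyst, parenchyma])

labels, centers = kmeans(X, init=X[[0, -1]])   # one seed from each tissue
```

    With well-separated signatures the two tissues fall into distinct clusters; real MR data would first need the registration and normalization steps the abstract mentions.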

  5. Three-dimensional lung nodule segmentation and shape variance analysis to detect lung cancer with reduced false positives.

    PubMed

    Krishnamurthy, Senthilkumar; Narasimhan, Ganesh; Rengasamy, Umamaheswari

    2016-01-01

    The three-dimensional analysis of lung computed tomography scans was carried out in this study to detect malignant lung nodules. An automatic three-dimensional segmentation algorithm proposed here efficiently segmented the tissue clusters (nodules) inside the lung. However, an automatic morphological region-grow segmentation algorithm that was implemented to segment the well-circumscribed nodules present inside the lung did not segment the juxta-pleural nodules present on the inner surface of the lung wall. A novel edge bridge and fill technique is proposed in this article to segment the juxta-pleural and pleural-tail nodules accurately. The centroid shift of each candidate nodule was computed. Nodules with larger centroid shifts in consecutive slices were eliminated, since a malignant nodule's position does not usually deviate. Three-dimensional shape variation and edge sharpness analyses were performed to reduce the false positives and to classify the malignant nodules. The change in area and equivalent diameter across consecutive slices was greater for malignant nodules, and the malignant nodules showed a sharp edge. Segmentation was followed by three-dimensional centroid, shape and edge analysis carried out on a lung computed tomography database of 20 patients with 25 malignant nodules. The algorithms proposed in this article precisely detected 22 malignant nodules and failed to detect 3, for a sensitivity of 88%. Furthermore, the algorithm correctly eliminated 216 tissue clusters that were initially segmented as nodules; however, 41 non-malignant tissue clusters were detected as malignant nodules. The false-positive rate of this algorithm was therefore 2.05 per patient. PMID:26721427
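
    The centroid-shift criterion for pruning vessel-like candidates can be sketched directly (a simplified illustration; the thresholds used in the paper may differ):

```python
import numpy as np

def centroid(mask):
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])

def max_centroid_shift(slices):
    """Largest centroid displacement between consecutive slices."""
    cents = [centroid(s) for s in slices]
    return max(np.linalg.norm(b - a) for a, b in zip(cents, cents[1:]))

# a positionally stable (nodule-like) blob vs. one that drifts (vessel-like)
stable = [np.zeros((32, 32), bool) for _ in range(3)]
drifting = [np.zeros((32, 32), bool) for _ in range(3)]
for z in range(3):
    stable[z][10:14, 10:14] = True
    drifting[z][10:14, 10 + 6 * z:14 + 6 * z] = True

print(max_centroid_shift(stable))    # 0.0
print(max_centroid_shift(drifting))  # 6.0
```

    Candidates whose centroid drifts strongly between slices would be rejected as non-nodules under this criterion.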

  6. Evolutionary analysis of the segment from helix 3 through helix 5 in vertebrate progesterone receptors.

    PubMed

    Baker, Michael E; Uh, Kayla Y

    2012-10-01

    The interaction between helix 3 and helix 5 in the human mineralocorticoid receptor [MR], progesterone receptor [PR] and glucocorticoid receptor [GR] influences their response to steroids. For the human PR, mutations at Gly-722 on helix 3 and Met-759 on helix 5 alter responses to progesterone. We analyzed the evolution of these two sites and the rest of a 59 residue segment containing helices 3, 4 and 5 in vertebrate PRs and found that a glycine corresponding to Gly-722 on helix 3 in human PR first appears in platypus, a monotreme. In lamprey, skates, fish, amphibians and birds, cysteine is found at this position in helix 3. This suggests that the cysteine to glycine replacement in helix 3 in the PR was important in the evolution of mammals. Interestingly, our analysis of the rest of the 59 residue segment finds 100% sequence conservation in almost all mammal PRs, substantial conservation in reptile and amphibian PRs and divergence of land vertebrate PR sequences from the fish PR sequences. The differences between fish and land vertebrate PRs may be important in the evolution of different biological progestins in fish and mammalian PR, as well as differences in susceptibility to environmental chemicals that disrupt PR-mediated physiology. PMID:22575083

  7. Investigating materials for breast nodules simulation by using segmentation and similarity analysis of digital images

    NASA Astrophysics Data System (ADS)

    Siqueira, Paula N.; Marcomini, Karem D.; Sousa, Maria A. Z.; Schiabel, Homero

    2015-03-01

    The task of identifying the malignancy of nodular lesions on mammograms becomes quite complex due to overlapped structures or even to the granular fibrous tissue which can cause confusion in classifying mass shape, leading to unnecessary biopsies. Efforts to develop methods for automatic mass detection in CADe (Computer Aided Detection) schemes have been made with the aim of assisting radiologists and working as a second opinion. The validation of these methods may be accomplished, for instance, by using databases with clinical images or images acquired through breast phantoms. With this aim, some types of materials were tested in order to produce radiographic phantom images which could characterize a good enough approach to the typical mammograms corresponding to actual breast nodules. Therefore, different nodule patterns were physically produced and used on a previously developed breast phantom. Their characteristics were tested according to the digital images obtained from phantom exposures at a LORAD M-IV mammography unit. Two analyses were performed. In the first, regions of interest containing the simulated nodules were segmented both by an automated segmentation technique and by an experienced radiologist, who delineated the contour of each nodule by means of a graphic display digitizer; both results were compared using evaluation metrics. The second used the Structural Similarity (SSIM) quality measure to generate quantitative data related to the texture produced by each material. Although all the tested materials proved to be suitable for the study, the PVC film yielded the best results.
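
    The SSIM comparison can be illustrated with a single-window implementation of the index (real texture comparisons typically use a sliding-window SSIM; this global form is a simplification of the standard formula):

```python
import numpy as np

def ssim(a, b, L=255.0):
    """Global SSIM index computed over the whole image as one window."""
    a, b = a.astype(float), b.astype(float)
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # standard stabilisers
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2) /
            ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2)))

rng = np.random.default_rng(0)
texture = rng.integers(0, 256, (64, 64))
noisy = np.clip(texture + rng.normal(0, 10, texture.shape), 0, 255)
print(ssim(texture, texture))   # 1.0 for identical images
print(ssim(texture, noisy))     # slightly below 1.0
```

    A material whose phantom image yields SSIM values close to those of real nodule textures would be preferred under this criterion.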

  8. An extensive log-file analysis of step-and-shoot intensity modulated radiation therapy segment delivery errors.

    PubMed

    Stell, Anthony M; Li, Jonathan G; Zeidan, Omar A; Dempsey, James F

    2004-06-01

    We present a study to evaluate the monitor unit (MU), dosimetric, and leaf-motion errors found in the delivery of 91 step-and-shoot IMRT treatment plans performed at three nominal dose rates using a dual modality high energy Linac (Varian 2100 C/D, Varian Medical Systems Inc., Palo Alto, CA) equipped with a 120-leaf multileaf collimator (MLC). The analysis was performed by studying log files generated by the MLC controller system. Recent studies by our group have validated that the automatically generated MLC log files accurately record the actual system delivery. A total of 635 beams were delivered at three nominal dose rates: 100, 300, and 600 MU/min. The log files were manually retrieved and analysis software was developed to extract the recorded MU delivery and leaf positions for each segment. Our analysis revealed that the magnitude of segment MU errors was independent of the planned segment MUs. Segment MU errors were found to increase with dose rate, with maximum errors per segment of +/-1.8 MU at 600 MU/min, +/-0.8 MU at 300 MU/min, and +/-0.5 MU at 100 MU/min. The total absolute MU error in each plan was observed to increase with the number of plan segments, with the trend increasing more rapidly for higher dose rates. Three-dimensional dose distributions were recomputed based on the observed segment MU errors for three plans with large cumulative absolute MU errors. Comparison with the original treatment plans indicated no clinically significant consequences due to these errors. In addition, approximately 80% of the total segment deliveries reported at least one collimator leaf moving at least 1 mm (projected at isocenter) during segment delivery. Such errors occur near the end of segment delivery and have been previously observed by our group using a fast video-based electronic portal imaging device. At 600 MU/min, between 5% and 23% of the plan MUs were delivered during leaf motion that had exceeded a 1 mm position tolerance.
These leaf motion errors were not included in the treatment plan recalculations performed in this study. PMID:15259664
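
    The per-segment MU error statistics reduce to simple aggregation once the logs are parsed. The MLC controller log format is proprietary, so the excerpt below assumes the logs have already been reduced to (segment, planned MU, delivered MU) records; the column names are invented for illustration:

```python
import csv
import io

# hypothetical, simplified log excerpt
log = io.StringIO("""segment,planned_mu,delivered_mu
1,12.0,12.4
2,8.0,7.6
3,25.0,25.1
""")

rows = list(csv.DictReader(log))
errors = [float(r["delivered_mu"]) - float(r["planned_mu"]) for r in rows]
max_abs = max(abs(e) for e in errors)      # worst single-segment error
total_abs = sum(abs(e) for e in errors)    # cumulative plan error
print(round(max_abs, 2), round(total_abs, 2))  # 0.4 0.9
```

    The study's observation that total absolute MU error grows with segment count corresponds to `total_abs` accumulating one term per segment.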

  9. Level Set Segmentation From Multiple Non-Uniform Volume Datasets (Ken Museth, David E. Breen, Leonid Zhukov, Ross T. Whitaker)

    E-print Network

    Breen, David E.

    Keywords: segmentation, visualization, level set models, 3D reconstruction. Many of today's volumetric datasets are generated by medical MR, CT and other scanners. (Ken Museth, David E. Breen, Leonid Zhukov: California Institute of Technology; Ross T. Whitaker: University of Utah)

  10. Automatic neuron segmentation and neural network analysis method for phase contrast microscopy images

    PubMed Central

    Pang, Jincheng; Özkucur, Nurdan; Ren, Michael; Kaplan, David L.; Levin, Michael; Miller, Eric L.

    2015-01-01

    Phase Contrast Microscopy (PCM) is an important tool for the long term study of living cells. Unlike fluorescence methods which suffer from photobleaching of fluorophore or dye molecules, PCM image contrast is generated by the natural variations in optical index of refraction. Unfortunately, the same physical principles which allow for these studies give rise to complex artifacts in the raw PCM imagery. Of particular interest in this paper are neuron images where these image imperfections manifest in very different ways for the two structures of specific interest: cell bodies (somas) and dendrites. To address these challenges, we introduce a novel parametric image model using the level set framework and an associated variational approach which simultaneously restores and segments this class of images. Using this technique as the basis for an automated image analysis pipeline, results for both the synthetic and real images validate and demonstrate the advantages of our approach. PMID:26601004

  11. Segmentation of fluorescence microscopy images for quantitative analysis of cell nuclear architecture.

    PubMed

    Russell, Richard A; Adams, Niall M; Stephens, David A; Batty, Elizabeth; Jensen, Kirsten; Freemont, Paul S

    2009-04-22

    Considerable advances in microscopy, biophysics, and cell biology have provided a wealth of imaging data describing the functional organization of the cell nucleus. Until recently, cell nuclear architecture has largely been assessed by subjective visual inspection of fluorescently labeled components imaged by the optical microscope. This approach is inadequate to fully quantify spatial associations, especially when the patterns are indistinct, irregular, or highly punctate. Accurate image processing techniques as well as statistical and computational tools are thus necessary to interpret this data if meaningful spatial-function relationships are to be established. Here, we have developed a thresholding algorithm, stable count thresholding (SCT), to segment nuclear compartments in confocal laser scanning microscopy image stacks to facilitate objective and quantitative analysis of the three-dimensional organization of these objects using formal statistical methods. We validate the efficacy and performance of the SCT algorithm using real images of immunofluorescently stained nuclear compartments and fluorescent beads as well as simulated images. In all three cases, the SCT algorithm delivers a segmentation that is far better than standard thresholding methods, and more importantly, is comparable to manual thresholding results. By applying the SCT algorithm and statistical analysis, we quantify the spatial configuration of promyelocytic leukemia nuclear bodies with respect to irregular-shaped SC35 domains. We show that the compartments are closer than expected under a null model for their spatial point distribution, and furthermore that their spatial association varies according to cell state. The methods reported are general and can readily be applied to quantify the spatial interactions of other nuclear compartments. PMID:19383481
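
    The "stable count" idea can be sketched as choosing the threshold at the centre of the longest run of thresholds that all yield the same number of connected components (a simplified reading of the SCT approach, not the authors' exact algorithm):

```python
import numpy as np
from scipy import ndimage

def stable_count_threshold(img, thresholds):
    """Pick the threshold at the centre of the longest run of thresholds
    producing a constant connected-component count."""
    counts = [ndimage.label(img > t)[1] for t in thresholds]
    best_len, best_start, run_start = 0, 0, 0
    for i in range(1, len(counts) + 1):
        if i == len(counts) or counts[i] != counts[run_start]:
            if i - run_start > best_len:
                best_len, best_start = i - run_start, run_start
            run_start = i
    return thresholds[best_start + best_len // 2]

# two bright nuclear compartments (value 100) on dim background (value 10)
img = np.full((40, 40), 10.0)
img[5:12, 5:12] = 100.0
img[25:32, 25:32] = 100.0
t = stable_count_threshold(img, np.arange(5, 100, 5))
print(t)  # 55
```

    The object count is 2 over the whole range between background and foreground intensities, so the stable-count rule lands the threshold well inside that plateau.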

  12. Chapman-Enskog Analysis of Finite Volume Lattice Boltzmann Schemes

    E-print Network

    Siboni, Nima H; Varnik, Fathollah

    2014-01-01

    In this paper, we provide a systematic analysis of some finite volume lattice Boltzmann schemes in two dimensions. A complete iteration cycle in the time evolution of the discretized distribution functions is formally divided into collision and propagation (streaming) steps. Considering the mass- and momentum-conserving properties of the collision step, it becomes obvious that changes in the momentum of finite volume cells are due solely to the propagation step. Details of the propagation step are discussed for different approximate schemes for the evaluation of fluxes at the boundaries of the finite volume cells. Moreover, a full Chapman-Enskog analysis is conducted, allowing the Navier-Stokes equation to be recovered. As an important result of this analysis, the relation between the lattice Boltzmann relaxation time and the kinematic viscosity of the fluid is derived for each approximate flux evaluation scheme. In particular, it is found that the constant upwind scheme leads to a positive numerical viscosity while the central sc...
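
    For reference, the baseline result that the finite-volume analysis generalizes is the well-known Chapman-Enskog relation between relaxation time and kinematic viscosity for the standard (collide-and-stream) BGK lattice Boltzmann scheme; the flux-scheme-dependent corrections derived in the paper modify this baseline:

```latex
% Standard BGK lattice Boltzmann viscosity relation
% (lattice speed of sound c_s^2 = (1/3)(\Delta x/\Delta t)^2):
\nu = c_s^2\left(\tau - \frac{\Delta t}{2}\right),
\qquad
\nu = \frac{1}{3}\left(\tau - \frac{1}{2}\right)
\quad\text{in lattice units } (\Delta x = \Delta t = 1).
```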

  13. Screening Analysis : Volume 1, Description and Conclusions.

    SciTech Connect

    Bonneville Power Administration; Corps of Engineers; Bureau of Reclamation

    1992-08-01

    The SOR consists of three analytical phases leading to a Draft EIS. The first phase, Pilot Analysis, was performed for the purpose of testing the decision analysis methodology being used in the SOR. The Pilot Analysis is described later in this chapter. The second phase, Screening Analysis, examines all possible operating alternatives using a simplified analytical approach. It is described in detail in this and the next chapter. This document also presents the results of screening. The final phase, Full-Scale Analysis, will be documented in the Draft EIS and is intended to evaluate comprehensively the few best alternatives arising from the screening analysis. The purpose of screening is to analyze a wide variety of differing ways of operating the Columbia River system to test the reaction of the system to change. The many alternatives considered reflect the range of needs and requirements of the various river users and interests in the Columbia River Basin. While some of the alternatives might be viewed as extreme, the information gained from the analysis is useful in highlighting issues and conflicts in meeting operating objectives. Screening is also intended to develop a broad technical basis for evaluation, including regional experts, and to begin developing an evaluation capability for each river use that will support full-scale analysis. Finally, screening provides a logical method for examining all possible options and reaching a decision on a few alternatives worthy of full-scale analysis. An organizational structure was developed and staffed to manage and execute the SOR, specifically during the screening phase and the upcoming full-scale analysis phase. The organization involves ten technical work groups, each representing a particular river use. Several other groups exist to oversee or support the efforts of the work groups.

  14. Laser power conversion system analysis, volume 1

    NASA Technical Reports Server (NTRS)

    Jones, W. S.; Morgan, L. L.; Forsyth, J. B.; Skratt, J. P.

    1979-01-01

    The orbit-to-orbit laser energy conversion system analysis established a mission model of satellites with various orbital parameters and average electrical power requirements ranging from 1 to 300 kW. The system analysis evaluated various conversion techniques, power system deployment parameters, power system electrical supplies, and other critical subsystems relative to various combinations of the mission model. The analysis showed that the laser power system would not be competitive with current satellite power systems from weight, cost and development risk standpoints.

  15. Integrated segmentation of cellular structures

    NASA Astrophysics Data System (ADS)

    Ajemba, Peter; Al-Kofahi, Yousef; Scott, Richard; Donovan, Michael; Fernandez, Gerardo

    2011-03-01

    Automatic segmentation of cellular structures is an essential step in image cytology and histology. Despite substantial progress, better automation and improvements in accuracy and adaptability to novel applications are needed. In applications utilizing multi-channel immuno-fluorescence images, challenges include misclassification of epithelial and stromal nuclei, irregular nuclei and cytoplasm boundaries, and over and under-segmentation of clustered nuclei. Variations in image acquisition conditions and artifacts from nuclei and cytoplasm images often confound existing algorithms in practice. In this paper, we present a robust and accurate algorithm for jointly segmenting cell nuclei and cytoplasm using a combination of ideas to reduce the aforementioned problems. First, an adaptive process that includes top-hat filtering, Eigenvalues-of-Hessian blob detection and distance transforms is used to estimate the inverse illumination field and correct for intensity non-uniformity in the nuclei channel. Next, a minimum-error-thresholding based binarization process and seed-detection combining Laplacian-of-Gaussian filtering constrained by a distance-map-based scale selection is used to identify candidate seeds for nuclei segmentation. The initial segmentation using a local maximum clustering algorithm is refined using a minimum-error-thresholding technique. Final refinements include an artifact removal process specifically targeted at lumens and other problematic structures and a systemic decision process to reclassify nuclei objects near the cytoplasm boundary as epithelial or stromal. Segmentation results were evaluated using 48 realistic phantom images with known ground-truth. The overall segmentation accuracy exceeds 94%. The algorithm was further tested on 981 images of actual prostate cancer tissue. The artifact removal process worked in 90% of cases. The algorithm has now been deployed in a high-volume histology analysis application.
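
    Several of the steps described above (top-hat background removal, binarisation, distance transform, seed detection) can be sketched with generic scipy primitives. This is a minimal stand-in, not the paper's algorithm: the blob positions, window sizes and the simple per-component seeding are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# synthetic nuclei channel: two blobs on a slowly varying illumination ramp
yy, xx = np.mgrid[0:64, 0:64]
img = 20 + 0.5 * xx + rng.normal(0, 1, (64, 64))
for cy, cx in [(20, 20), (40, 45)]:
    img += 80 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 30)

flat = ndimage.white_tophat(img, size=15)      # remove illumination field
mask = flat > 0.5 * flat.max()                 # crude binarisation
dist = ndimage.distance_transform_edt(mask)    # distance map
labels, n = ndimage.label(mask)
seeds = ndimage.maximum_position(dist, labels, range(1, n + 1))
print(n, seeds)   # two nuclei, seeds near their centres
```

    In the paper the seeding step uses Laplacian-of-Gaussian filtering with distance-map-based scale selection; here one distance-map maximum per component serves as a crude substitute.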

  16. Final safety analysis report for the Galileo Mission: Volume 2, Book 2: Accident model document: Appendices

    SciTech Connect

    Not Available

    1988-12-15

    This section of the Accident Model Document (AMD) presents the appendices which describe the various analyses that have been conducted for use in the Galileo Final Safety Analysis Report II, Volume II. Included in these appendices are the approaches, techniques, conditions and assumptions used in the development of the analytical models plus the detailed results of the analyses. Also included in these appendices are summaries of the accidents and their associated probabilities and environment models taken from the Shuttle Data Book (NSTS-08116), plus summaries of the several segments of the recent GPHS safety test program. The information presented in these appendices is used in Section 3.0 of the AMD to develop the Failure/Abort Sequence Trees (FASTs) and to determine the fuel releases (source terms) resulting from the potential Space Shuttle/IUS accidents throughout the missions.

  17. Laser power conversion system analysis, volume 2

    NASA Technical Reports Server (NTRS)

    Jones, W. S.; Morgan, L. L.; Forsyth, J. B.; Skratt, J. P.

    1979-01-01

    The orbit-to-ground laser power conversion system analysis investigated the feasibility and cost effectiveness of converting solar energy into laser energy in space, and transmitting the laser energy to earth for conversion to electrical energy. The analysis included space laser systems with electrical outputs on the ground ranging from 100 to 10,000 MW. The space laser power system was shown to be feasible and a viable alternate to the microwave solar power satellite. The narrow laser beam provides many options and alternatives not attainable with a microwave beam.

  18. Quantitative Analysis of the Drosophila Segmentation Regulatory Network Using Pattern Generating Potentials

    PubMed Central

    Richards, Adam; McCutchan, Michael; Wakabayashi-Ito, Noriko; Hammonds, Ann S.; Celniker, Susan E.; Kumar, Sudhir; Wolfe, Scot A.; Brodsky, Michael H.; Sinha, Saurabh

    2010-01-01

    Cis-regulatory modules that drive precise spatial-temporal patterns of gene expression are central to the process of metazoan development. We describe a new computational strategy to annotate genomic sequences based on their “pattern generating potential” and to produce quantitative descriptions of transcriptional regulatory networks at the level of individual protein-module interactions. We use this approach to convert the qualitative understanding of interactions that regulate Drosophila segmentation into a network model in which a confidence value is associated with each transcription factor-module interaction. Sequence information from multiple Drosophila species is integrated with transcription factor binding specificities to determine conserved binding site frequencies across the genome. These binding site profiles are combined with transcription factor expression information to create a model to predict module activity patterns. This model is used to scan genomic sequences for the potential to generate all or part of the expression pattern of a nearby gene, obtained from available gene expression databases. Interactions between individual transcription factors and modules are inferred by a statistical method to quantify a factor's contribution to the module's pattern generating potential. We use these pattern generating potentials to systematically describe the location and function of known and novel cis-regulatory modules in the segmentation network, identifying many examples of modules predicted to have overlapping expression activities. Surprisingly, conserved transcription factor binding site frequencies were as effective as experimental measurements of occupancy in predicting module expression patterns or factor-module interactions. Thus, unlike previous module prediction methods, this method predicts not only the location of modules but also their spatial activity pattern and the factors that directly determine this pattern. 
As databases of transcription factor specificities and in vivo gene expression patterns grow, analysis of pattern generating potentials provides a general method to decode transcriptional regulatory sequences and networks. PMID:20808951

  19. Automated condition-invariable neurite segmentation and synapse classification using textural analysis-based machine-learning algorithms.

    PubMed

    Kandaswamy, Umasankar; Rotman, Ziv; Watt, Dana; Schillebeeckx, Ian; Cavalli, Valeria; Klyachko, Vitaly A

    2013-02-15

    High-resolution live-cell imaging studies of neuronal structure and function are characterized by large variability in image acquisition conditions due to background and sample variations as well as low signal-to-noise ratio. The lack of automated image analysis tools that can be generalized across varying image acquisition conditions represents one of the main challenges in the field of biomedical image analysis. Specifically, segmentation of axonal/dendritic arborizations in brightfield or fluorescence imaging studies is extremely labor-intensive and still performed mostly manually. Here we describe a fully automated machine-learning approach based on textural analysis algorithms for segmenting neuronal arborizations in high-resolution brightfield images of live cultured neurons. We compare the performance of our algorithm to manual segmentation and show that it achieves 90% accuracy with similarly high levels of specificity and sensitivity. Moreover, the algorithm maintains high performance under a wide range of image acquisition conditions, indicating that it is largely condition-invariable. We further describe an application of this algorithm to fully automated synapse localization and classification in fluorescence imaging studies based on synaptic activity. This textural analysis-based machine-learning approach thus offers a high-performance, condition-invariable tool for automated neurite segmentation. PMID:23261652
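
    The core idea, classifying pixels by local texture rather than raw intensity, can be sketched with a single textural feature (local standard deviation) and a threshold standing in for the trained classifier; the image layout and feature choice are illustrative assumptions, not the paper's feature set:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# synthetic brightfield-like image: a high-variance "neurite" stripe
# on a smooth background
img = rng.normal(100, 1.0, (64, 64))
img[:, 28:36] += rng.normal(0, 8.0, (64, 8))

# textural feature: local standard deviation in a 7x7 window
mean = ndimage.uniform_filter(img, 7)
sq_mean = ndimage.uniform_filter(img ** 2, 7)
local_std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0))

# one-feature "classifier": threshold midway between class means sampled
# from known regions (a stand-in for the trained ML model)
fg = local_std[:, 30:34].mean()
bg = local_std[:, :20].mean()
seg = local_std > (fg + bg) / 2
```

    Because the feature is a local statistic rather than an absolute intensity, the segmentation is insensitive to global brightness shifts, which is the property that makes texture-based approaches condition-invariable.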

  20. Bivariate segmentation of SNP-array data for allele-specific copy number analysis in tumour samples

    PubMed Central

    2013-01-01

    Background SNP arrays output two signals that reflect the total genomic copy number (LRR) and the allelic ratio (BAF), which in combination allow the characterisation of allele-specific copy numbers (ASCNs). While methods based on hidden Markov models (HMMs) have been extended from array comparative genomic hybridisation (aCGH) to jointly handle the two signals, only one method based on change-point detection, ASCAT, performs bivariate segmentation. Results In the present work, we introduce a generic framework for bivariate segmentation of SNP array data for ASCN analysis. To this end, we discuss the characteristics of the typically applied BAF transformation and how they affect segmentation, introduce concepts of multivariate time series analysis that are of concern in this field, and discuss the appropriate formulation of the problem. The framework is implemented in a method named CnaStruct, the bivariate form of the structural change model (SCM), which has been successfully applied to transcriptome mapping and aCGH. Conclusions On a comprehensive synthetic dataset, we show that CnaStruct outperforms the segmentation of existing ASCN analysis methods. Furthermore, CnaStruct can be integrated into the workflows of several ASCN analysis tools in order to improve their performance, especially on tumour samples highly contaminated by normal cells. PMID:23497144
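
    Bivariate change-point detection can be sketched by minimising the within-segment squared error jointly over both signals. The sketch below finds a single break in simulated (LRR, BAF) tracks; it is a toy least-squares criterion, not the SCM cost used by CnaStruct, and the signal levels are invented:

```python
import numpy as np

def best_split(X):
    """Single bivariate change-point by minimising the summed
    within-segment squared error over both columns jointly."""
    n = len(X)
    best, best_k = np.inf, None
    for k in range(2, n - 1):
        cost = (((X[:k] - X[:k].mean(0)) ** 2).sum() +
                ((X[k:] - X[k:].mean(0)) ** 2).sum())
        if cost < best:
            best, best_k = cost, k
    return best_k

rng = np.random.default_rng(0)
# simulated probes: a copy-number change at probe 60 shifts LRR and BAF together
lrr = np.r_[rng.normal(0.0, 0.05, 60), rng.normal(0.4, 0.05, 40)]
baf = np.r_[rng.normal(0.5, 0.02, 60), rng.normal(0.67, 0.02, 40)]
k = best_split(np.column_stack([lrr, baf]))
print(k)  # change-point estimate, near probe 60
```

    Segmenting both signals jointly lets a weak shift in one track be confirmed by the other, which is the motivation for bivariate rather than independent segmentation.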

  1. Optical granulometric analysis of sedimentary deposits by color segmentation-based software: OPTGRAN-CS

    NASA Astrophysics Data System (ADS)

    Chávez, G. Moreno; Sarocchi, D.; Santana, E. Arce; Borselli, L.

    2015-12-01

    The study of grain size distribution is fundamental for understanding sedimentological environments. Through these analyses, clast erosion, transport and deposition processes can be interpreted and modeled. However, grain size distribution analysis can be difficult in some outcrops due to the number and complexity of the arrangement of clasts and matrix and their physical size. Despite various technological advances, it is almost impossible to get the full grain size distribution (blocks to sand grain size) with a single method or instrument of analysis. For this reason development in this area continues to be fundamental. In recent years, various methods of particle size analysis by automatic image processing have been developed, due to their potential advantages with respect to classical ones: speed and the detailed information produced (virtually for each analyzed particle). In this framework, we have developed a novel algorithm and software for grain size distribution analysis, based on color image segmentation using an entropy-controlled quadratic Markov measure field algorithm and the Rosiwal method for counting intersections between clasts and linear transects in the images. We test the novel algorithm on different sedimentary deposit types from 14 varieties of sedimentological environments. The results of the new algorithm were compared with grain counts performed manually by the same Rosiwal methods applied by experts. The new algorithm has the same accuracy as a classical manual count process, but the application of this innovative methodology is much easier and dramatically less time-consuming. The final productivity of the new software for analysis of clast deposits after recording field outcrop images can thus be increased significantly.
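
    The Rosiwal principle, that the fraction of transect length crossing clasts estimates their areal fraction, can be sketched on a segmented binary image (a minimal illustration on a synthetic section, not the OPTGRAN-CS implementation):

```python
import numpy as np

def lineal_fraction(mask, rows):
    """Rosiwal-style lineal analysis: fraction of transect length
    that crosses clast pixels estimates the clast areal fraction."""
    hits = sum(int(mask[r].sum()) for r in rows)
    total = mask.shape[1] * len(rows)
    return hits / total

# synthetic section: one square clast occupying 25% of the image
mask = np.zeros((40, 40), bool)
mask[10:30, 10:30] = True
est = lineal_fraction(mask, rows=range(0, 40, 5))
print(est)  # 0.25
```

    In the actual software the mask comes from the entropy-controlled Markov segmentation, and per-clast intersection lengths (rather than a global fraction) feed the grain size distribution.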

  2. Convergence of finite volume schemes (presentation outline: Model & Main Result; Mathematical setting revisited; Discrete Duality Finite Volume (DDFV) Schemes; DDFV Scheme for the Model Problem; Convergence Analysis)

    E-print Network

    Jeanjean, Louis


  3. A comparison between handgrip strength, upper limb fat free mass by segmental bioelectrical impedance analysis (SBIA) and anthropometric measurements in young males

    NASA Astrophysics Data System (ADS)

    Gonzalez-Correa, C. H.; Caicedo-Eraso, J. C.; Varon-Serna, D. R.

    2013-04-01

    The mechanical function and size of a muscle may be closely linked. Handgrip strength (HGS) has been used as a predictor of functional performance. Anthropometric measurements have been made to estimate arm muscle area (AMA) and the physical muscle mass volume of the upper limb (ULMMV). Electrical volume estimation is possible by segmental BIA measurements of fat-free mass (SBIA-FFM), mainly muscle mass. The relationship among these variables is not well established. We aimed to determine whether physical and electrical muscle mass estimations relate to each other and to what extent HGS is related to muscle size measured by both methods in normal or overweight young males. Regression analysis was used to determine the association between these variables. Subjects showed decreased HGS (65.5%), FFM (85.5%) and AMA (74.5%). An acceptable association was found between SBIA-FFM and AMA (r2 = 0.60) and a poorer one between physical and electrical volume (r2 = 0.55). However, a paired Student t-test and a Bland-Altman plot showed that the physical and electrical models were not interchangeable (p < 0.0001). HGS showed a very weak association with anthropometric (r2 = 0.07) and electrical (r2 = 0.192) ULMMV, showing that muscle mass quantity does not imply muscle strength. Other factors influencing HGS, such as physical training or nutrition, require more research.
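
    The r2 values reported above come from simple linear regressions, which can be sketched directly (the variable names and simulated values below are hypothetical, chosen only to contrast a strong and a weak association):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a linear regression of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(0)
arm_muscle_area = rng.normal(50, 10, 100)        # hypothetical AMA values
fat_free_mass = 0.8 * arm_muscle_area + rng.normal(0, 4, 100)  # related
grip_strength = rng.normal(30, 5, 100)           # unrelated, by construction
print(round(r_squared(arm_muscle_area, fat_free_mass), 2))
print(round(r_squared(arm_muscle_area, grip_strength), 2))
```

    A high r2 for the first pair and a near-zero r2 for the second mirrors the paper's contrast between size-size and size-strength associations.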

  4. Change Detection and Land Use / Land Cover Database Updating Using Image Segmentation, GIS Analysis and Visual Interpretation

    NASA Astrophysics Data System (ADS)

    Mas, J.-F.; González, R.

    2015-08-01

    This article presents a hybrid method that combines image segmentation, GIS analysis, and visual interpretation in order to detect discrepancies between an existing land use/cover map and satellite images, and assess land use/cover changes. It was applied to the elaboration of a multidate land use/cover database of the State of Michoacán, Mexico using SPOT and Landsat imagery. The method was first applied to improve the resolution of an existing 1:250,000 land use/cover map produced through the visual interpretation of 2007 SPOT images. A segmentation of the 2007 SPOT images was carried out to create spectrally homogeneous objects with a minimum area of two hectares. Through an overlay operation with the outdated map, each segment receives the "majority" category from the map. Furthermore, spectral indices of the SPOT image were calculated for each band and each segment; therefore, each segment was characterized from the images (spectral indices) and the map (class label). In order to detect uncertain areas which present a discrepancy between spectral response and class label, a multivariate trimming, which consists of truncating a distribution at its least likely values, was applied. The segments that behave like outliers were detected and labeled as "uncertain", and a probable alternative category was determined by means of a digital classification using a decision tree classification algorithm. Then, the segments were visually inspected in the SPOT image and high resolution imagery to assign a final category. The same procedure was applied to update the map to 2014 using Landsat imagery. As a final step, an accuracy assessment was carried out using verification sites selected from a stratified random sampling and visually interpreted using high resolution imagery and ground truth.
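The multivariate trimming step described above — flagging segments whose spectral response is unlikely given their class label — can be sketched with a Mahalanobis-distance criterion (an assumption for illustration; the article does not specify the exact distance measure, and the data below are synthetic):

```python
import numpy as np

def trim_outliers(features, frac=0.05):
    """Flag the least likely observations under a multivariate normal fit:
    points whose squared Mahalanobis distance lies in the top `frac`."""
    X = np.asarray(features, float)
    mu = X.mean(axis=0)
    inv = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', X - mu, inv, X - mu)  # squared distances
    return d2 > np.quantile(d2, 1.0 - frac)

# Illustrative: 2 spectral indices per segment; 95 segments consistent
# with their map label, 5 segments with a conflicting spectral response.
theta = np.linspace(0.0, 2.0 * np.pi, 95, endpoint=False)
consistent = np.column_stack([0.70 + 0.05 * np.cos(theta),
                              0.20 + 0.05 * np.sin(theta)])
conflicting = np.array([0.2, 0.6]) + 0.01 * np.array(
    [[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1]])
X = np.vstack([consistent, conflicting])

uncertain = trim_outliers(X, frac=0.05)
print(uncertain.sum(), "segments flagged as 'uncertain'")  # → 5
```

The flagged segments would then go to the decision-tree classifier and visual inspection, as in the article's workflow.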

  5. Quantitative analysis of volume images -electron microscopic tomography of HIV

    E-print Network

    Nyström, Ingela

    Quantitative analysis of volume images of the causative agent of the acquired immune deficiency syndrome (AIDS), namely human immunodeficiency virus (HIV), produced by electron microscopic tomography by the HIV Structure Group at the Dept. of Biochemistry, Uppsala University. The algorithms are used

  6. Multiwell experiment: reservoir modeling analysis, Volume II

    SciTech Connect

    Horton, A.I.

    1985-05-01

    This report updates an ongoing analysis by reservoir modelers at the Morgantown Energy Technology Center (METC) of well test data from the Department of Energy's Multiwell Experiment (MWX). Results of previous efforts were presented in a recent METC Technical Note (Horton 1985). Results included in this report pertain to the poststimulation well tests of Zones 3 and 4 of the Paludal Sandstone Interval and the prestimulation well tests of the Red and Yellow Zones of the Coastal Sandstone Interval. The following results were obtained by using a reservoir model and history matching procedures: (1) Post-minifracture analysis indicated that the minifracture stimulation of the Paludal Interval did not produce an induced fracture, and extreme formation damage did occur, since a 65% permeability reduction around the wellbore was estimated. The design for this minifracture was from 200 to 300 feet on each side of the wellbore; (2) Post full-scale stimulation analysis for the Paludal Interval also showed that extreme formation damage occurred during the stimulation as indicated by a 75% permeability reduction 20 feet on each side of the induced fracture. Also, an induced fracture half-length of 100 feet was determined to have occurred, as compared to a designed fracture half-length of 500 to 600 feet; and (3) Analysis of prestimulation well test data from the Coastal Interval agreed with previous well-to-well interference tests that showed extreme permeability anisotropy was not a factor for this zone. This lack of permeability anisotropy was also verified by a nitrogen injection test performed on the Coastal Red and Yellow Zones. 8 refs., 7 figs., 2 tabs.

  7. Stereophotogrammetric Mass Distribution Parameter Determination Of The Lower Body Segments For Use In Gait Analysis

    NASA Astrophysics Data System (ADS)

    Sheffer, Daniel B.; Schaer, Alex R.; Baumann, Juerg U.

    1989-04-01

    Inclusion of mass distribution information in biomechanical analysis of motion is a requirement for the accurate calculation of external moments and forces acting on the segmental joints during locomotion. Regression equations produced from a variety of photogrammetric, anthropometric and cadaveric studies have been developed and espoused in the literature. Because predicted inertial properties are of limited accuracy when regression equations developed on one population are applied to a different study population, a measurement technique that accurately defines the shape of each individual subject is desirable. This individual data acquisition method is especially needed when analyzing the gait of subjects whose extremity geometry differs greatly from that considered "normal", or who may possess gross asymmetries in shape between their own contralateral limbs. This study presents the photogrammetric acquisition and data analysis methodology used to assess the inertial tensors of two groups of subjects, one with spastic diplegic cerebral palsy and the other considered normal.

  8. High-throughput histopathological image analysis via robust cell segmentation and hashing.

    PubMed

    Zhang, Xiaofan; Xing, Fuyong; Su, Hai; Yang, Lin; Zhang, Shaoting

    2015-12-01

    Computer-aided diagnosis of histopathological images usually requires examining all cells for accurate diagnosis. Traditional computational methods may have efficiency issues when performing cell-level analysis. In this paper, we propose a robust and scalable solution to enable such analysis in a real-time fashion. Specifically, a robust segmentation method is developed to delineate cells accurately using Gaussian-based hierarchical voting and a repulsive balloon model. A large-scale image retrieval approach is also designed to examine and classify each cell of a testing image by comparing it with a massive database, e.g., half a million cells extracted from the training dataset. We evaluate this proposed framework on a challenging and important clinical use case, i.e., differentiation of two types of lung cancers (adenocarcinoma and squamous carcinoma), using thousands of lung microscopic tissue images extracted from hundreds of patients. Our method has achieved promising accuracy and running time by searching among half a million cells. PMID:26599156

  9. Interactive 3D segmentation of the prostate in magnetic resonance images using shape and local appearance similarity analysis

    NASA Astrophysics Data System (ADS)

    Shahedi, Maysam; Fenster, Aaron; Cool, Derek W.; Romagnoli, Cesare; Ward, Aaron D.

    2013-03-01

    3D segmentation of the prostate in medical images is useful to prostate cancer diagnosis and therapy guidance, but is time-consuming to perform manually. Clinical translation of computer-assisted segmentation algorithms for this purpose requires a comprehensive and complementary set of evaluation metrics that are informative to the clinical end user. We have developed an interactive 3D prostate segmentation method for 1.5T and 3.0T T2-weighted magnetic resonance imaging (T2W MRI) acquired using an endorectal coil. We evaluated our method against manual segmentations of 36 3D images using complementary boundary-based (mean absolute distance; MAD), regional overlap (Dice similarity coefficient; DSC) and volume difference (ΔV) metrics. Our technique is based on inter-subject prostate shape and local boundary appearance similarity. In the training phase, we calculated a point distribution model (PDM) and a set of local mean intensity patches centered on the prostate border to capture shape and appearance variability. To segment an unseen image, we defined a set of rays - one corresponding to each of the mean intensity patches computed in training - emanating from the prostate centre. We used a radial-based search strategy and translated each mean intensity patch along its corresponding ray, selecting as a candidate the boundary point with the highest normalized cross correlation along each ray. These boundary points were then regularized using the PDM. For the whole gland, we measured a mean +/- std MAD of 2.5 +/- 0.7 mm, DSC of 80 +/- 4%, and ΔV of 1.1 +/- 8.8 cc. We also provided an anatomic breakdown of these metrics within the prostatic base, mid-gland, and apex.
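The regional-overlap and volume-difference metrics used above can be computed as follows (a minimal sketch on toy binary masks; the boundary-based MAD metric, which needs surface-distance computations, is omitted):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def volume_difference(a, b, voxel_volume=1.0):
    """Signed volume difference (algorithm minus reference)."""
    return (a.sum() - b.sum()) * voxel_volume

# Toy example: two overlapping cuboids in a 3D volume.
ref = np.zeros((20, 20, 20), bool)
seg = np.zeros_like(ref)
ref[5:15, 5:15, 5:15] = True   # 1000 voxels (reference segmentation)
seg[6:15, 5:15, 5:15] = True   # 900 voxels, fully inside the reference

print(round(dice(seg, ref), 4))      # → 0.9474  (= 2*900 / (900+1000))
print(volume_difference(seg, ref))   # → -100.0 voxels
```

Multiplying `volume_difference` by the voxel volume in cc converts the count into the ΔV reported above.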

  10. Edge preserving smoothing and segmentation of 4-D images via transversely isotropic scale-space processing and fingerprint analysis

    SciTech Connect

    Reutter, Bryan W.; Algazi, V. Ralph; Gullberg, Grant T; Huesman, Ronald H.

    2004-01-19

    Enhancements are described for an approach that unifies edge preserving smoothing with segmentation of time sequences of volumetric images, based on differential edge detection at multiple spatial and temporal scales. Potential applications of these 4-D methods include segmentation of respiratory gated positron emission tomography (PET) transmission images to improve accuracy of attenuation correction for imaging heart and lung lesions, and segmentation of dynamic cardiac single photon emission computed tomography (SPECT) images to facilitate unbiased estimation of time-activity curves and kinetic parameters for left ventricular volumes of interest. Improved segmentation of lung surfaces in simulated respiratory gated cardiac PET transmission images is achieved with a 4-D edge detection operator composed of edge preserving 1-D operators applied in various spatial and temporal directions. Smoothing along the axis of a 1-D operator is driven by structure separation seen in the scale-space fingerprint, rather than by image contrast. Spurious noise structures are reduced with use of small-scale isotropic smoothing in directions transverse to the 1-D operator axis. Analytic expressions are obtained for directional derivatives of the smoothed, edge preserved image, and the expressions are used to compose a 4-D operator that detects edges as zero-crossings in the second derivative in the direction of the image intensity gradient. Additional improvement in segmentation is anticipated with use of multiscale transversely isotropic smoothing and a novel interpolation method that improves the behavior of the directional derivatives. The interpolation method is demonstrated on a simulated 1-D edge and incorporation of the method into the 4-D algorithm is described.

  11. A Theoretical Analysis of How Segmentation of Dynamic Visualizations Optimizes Students' Learning

    ERIC Educational Resources Information Center

    Spanjers, Ingrid A. E.; van Gog, Tamara; van Merrienboer, Jeroen J. G.

    2010-01-01

    This article reviews studies investigating segmentation of dynamic visualizations (i.e., showing dynamic visualizations in pieces with pauses in between) and discusses two not mutually exclusive processes that might underlie the effectiveness of segmentation. First, cognitive activities needed for dealing with the transience of dynamic…

  12. Unconventional Word Segmentation in Emerging Bilingual Students' Writing: A Longitudinal Analysis

    ERIC Educational Resources Information Center

    Sparrow, Wendy

    2014-01-01

    This study explores cross-language and longitudinal patterns in unconventional word segmentation in 25 emerging bilingual students' (Spanish/English) writing from first through third grade. Spanish and English writing samples were collected annually and analyzed for two basic types of unconventional word segmentation: hyposegmentation, in…

  13. Understanding the market for geographic information: A market segmentation and characteristics analysis

    NASA Technical Reports Server (NTRS)

    Piper, William S.; Mick, Mark W.

    1994-01-01

    Findings from a marketing research study are presented. The report identifies market segments and the product types needed to satisfy demand in each. An estimate of market size is based on the specific industries in each segment. A sample of ten industries was used in the study, which covered U.S. firms only.

  14. Analysis of a Locally Varying Intensity Template for Segmentation of Kidneys in CT Images

    E-print Network

    Rao, Manjari I. (University of North Carolina at Chapel Hill)

    (Under the supervision of Edward L. Chaney, PhD) The purpose of this study was to evaluate the use of a locally varying intensity template for automatic segmentation of kidneys in CT images. Kidney segmentation is often difficult because the surrounding soft tissue has varying contrast across

  15. Dioxin analysis of Philadelphia northwest incinerator. Summary report. Volume 1

    SciTech Connect

    Milner, I.

    1986-01-01

    A study was conducted by US EPA Region 3 to determine the dioxin-related impact of the Philadelphia Northwest Incinerator on public health. Specifically, it was designed to assess quantitatively the risks to public health resulting from emissions into the ambient air of dioxins as well as the potential effect of deposition of dioxins on the soil in the vicinity of the incinerator. Volume 1 is an executive summary of the study findings. Volume 2 contains contractor reports, laboratory analysis results and other documentation.

  16. Leukocyte telomere length and hippocampus volume: a meta-analysis

    PubMed Central

    Nilsonne, Gustav; Tamm, Sandra; Månsson, Kristoffer N. T.; Åkerstedt, Torbjörn; Lekander, Mats

    2015-01-01

    Leukocyte telomere length has been shown to correlate with hippocampus volume, but effect estimates differ in magnitude and are not uniformly positive. This study aimed primarily to investigate the relationship between leukocyte telomere length and hippocampus gray matter volume by meta-analysis, and secondarily to investigate possible effect moderators. Five studies were included with a total of 2107 participants, of which 1960 were contributed by one single influential study. A random-effects meta-analysis estimated the effect at r = 0.12 [95% CI -0.13, 0.37] in the presence of heterogeneity and a subjectively estimated moderate to high risk of bias. There was no evidence that apolipoprotein E (APOE) genotype was an effect moderator, nor that the ratio of leukocyte telomerase activity to telomere length was a better predictor than leukocyte telomere length for hippocampus volume. While this meta-analysis does not prove a positive relationship, neither can it disprove the earlier finding of a positive correlation in the one large study included in the analyses. We propose that a relationship between leukocyte telomere length and hippocampus volume may be mediated by transmigrating monocytes which differentiate into microglia in the brain parenchyma. PMID:26674112
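A random-effects pooling of study-level correlations, as used in meta-analyses of this kind, can be sketched as follows (the DerSimonian-Laird estimator on the Fisher z scale is an assumption, since the abstract does not state the exact estimator, and the study values below are illustrative, not those of this meta-analysis):

```python
import numpy as np

def random_effects_r(rs, ns):
    """Pool study-level correlations with a DerSimonian-Laird
    random-effects model on the Fisher z scale."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)                  # Fisher transform
    v = 1.0 / (ns - 3.0)                # within-study variance of z
    w = 1.0 / v
    z_fixed = (w * z).sum() / w.sum()
    q = (w * (z - z_fixed) ** 2).sum()  # heterogeneity statistic
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)   # between-study variance
    w_re = 1.0 / (v + tau2)
    z_re = (w_re * z).sum() / w_re.sum()
    se = np.sqrt(1.0 / w_re.sum())
    lo, hi = np.tanh([z_re - 1.96 * se, z_re + 1.96 * se])
    return np.tanh(z_re), lo, hi

# Illustrative correlations and sample sizes, with one dominant study
# (mimicking the single influential study noted above).
r_pooled, lo, hi = random_effects_r([0.25, -0.05, 0.10, 0.30, 0.12],
                                    [50, 40, 30, 1960, 27])
print(f"pooled r = {r_pooled:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The between-study variance tau2 widens the confidence interval relative to a fixed-effect pooling and reduces the dominance of the single large study.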

  17. Incorporation of texture-based features in optimal graph-theoretic approach with application to the 3D segmentation of intraretinal surfaces in SD-OCT volumes

    NASA Astrophysics Data System (ADS)

    Antony, Bhavna J.; Abràmoff, Michael D.; Sonka, Milan; Kwon, Young H.; Garvin, Mona K.

    2012-02-01

    While efficient graph-theoretic approaches exist for the optimal (with respect to a cost function) and simultaneous segmentation of multiple surfaces within volumetric medical images, the appropriate design of cost functions remains an important challenge. Previously proposed methods have used simple cost functions or optimized a combination of the same, but little has been done to design cost functions using learned features from a training set, in a less biased fashion. Here, we present a method to design cost functions for the simultaneous segmentation of multiple surfaces using the graph-theoretic approach. Classified texture features were used to create probability maps, which were incorporated into the graph-search approach. The efficiency of such an approach was tested on 10 optic nerve head centered optical coherence tomography (OCT) volumes obtained from 10 subjects who presented with glaucoma. The mean unsigned border position error was computed with respect to the average of manual tracings from two independent observers and compared to our previously reported results. A significant improvement was noted in the overall means, which reduced from 9.25 +/- 4.03 μm to 6.73 +/- 2.45 μm (p < 0.01) and is also comparable with the inter-observer variability of 8.85 +/- 3.85 μm.

  18. Semi-automatic segmentation and modeling of the cervical spinal cord for volume quantification in multiple sclerosis patients from magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Sonkova, Pavlina; Evangelou, Iordanis E.; Gallo, Antonio; Cantor, Fredric K.; Ohayon, Joan; McFarland, Henry F.; Bagnato, Francesca

    2008-03-01

    Spinal cord (SC) tissue loss is known to occur in some patients with multiple sclerosis (MS), resulting in SC atrophy. Currently, no measurement tools exist to determine the magnitude of SC atrophy from Magnetic Resonance Images (MRI). We have developed and implemented a novel semi-automatic method, based on level sets, for quantifying the cervical SC volume (CSCV) from MRI. The image dataset consisted of SC MRI exams obtained at 1.5 Tesla from 12 MS patients (10 relapsing-remitting and 2 secondary progressive) and 12 age- and gender-matched healthy volunteers (HVs). 3D high resolution image data were acquired using an IR-FSPGR sequence in the sagittal plane. The mid-sagittal slice (MSS) was automatically located based on the entropy calculation for each of the consecutive sagittal slices. The image data were then pre-processed by 3D anisotropic diffusion filtering for noise reduction and edge enhancement before segmentation with a level set formulation which did not require re-initialization. The developed method was tested against manual segmentation (considered ground truth), and intra-observer and inter-observer variability were evaluated.
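The entropy-based slice location described above can be sketched as follows (selecting the maximum-entropy slice is an assumption for illustration; the abstract only states that entropy is computed per consecutive sagittal slice, and the toy volume below is synthetic):

```python
import numpy as np

def slice_entropy(img, bins=64):
    """Shannon entropy (bits) of an image's gray-level histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def pick_slice(volume):
    """Index of the sagittal slice with maximal histogram entropy
    (the selection criterion is assumed here for illustration)."""
    ents = [slice_entropy(volume[i]) for i in range(volume.shape[0])]
    return int(np.argmax(ents))

# Toy volume: mostly uniform slices, one structured slice in the middle.
rng = np.random.default_rng(2)
vol = np.full((9, 32, 32), 100.0)
vol[4] = rng.integers(0, 256, (32, 32))   # richest gray-level content

print(pick_slice(vol))   # → 4
```

Uniform slices have zero histogram entropy, so the slice with the richest gray-level content is selected.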

  19. Movement Analysis of Flexion and Extension of Honeybee Abdomen Based on an Adaptive Segmented Structure

    PubMed Central

    Zhao, Jieliang; Wu, Jianing; Yan, Shaoze

    2015-01-01

    Honeybees (Apis mellifera) curl their abdomens during daily rhythmic activities. Before this was established, it was assumed that honeybees could curl their abdomens freely. An intriguing but less studied feature, however, is the possibly unidirectional abdominal deformation in free-flying honeybees. A high-speed video camera was used to capture the curling and to analyze changes in the arc length of the honeybee abdomen, both in free flight and in fixed samples. Frozen sections and environmental scanning electron microscopy were used to investigate the microstructure and motion principle of the honeybee abdomen and to explore the physical structures restricting its curling. An adaptive segmented structure, especially the folded intersegmental membrane (FIM), plays a dominant role in the flexion and extension of the abdomen. The structural features of the FIM were used to mimic and demonstrate the movement restriction of the honeybee abdomen. Combining experimental analysis and theoretical demonstration, a unidirectional bending mechanism of the honeybee abdomen was revealed. This finding offers a new perspective that can be imitated in aerospace vehicle design. PMID:26223946

  20. phenoVein—A Tool for Leaf Vein Segmentation and Analysis

    PubMed Central

    Bühler, Jonas; Rishmawi, Louai; Pflugfelder, Daniel; Huber, Gregor; Scharr, Hanno; Hülskamp, Martin; Koornneef, Maarten; Schurr, Ulrich; Jahnke, Siegfried

    2015-01-01

    Precise measurements of leaf vein traits are an important aspect of plant phenotyping for ecological and genetic research. Here, we present a powerful and user-friendly image analysis tool named phenoVein. It is dedicated to automated segmenting and analyzing of leaf veins in images acquired with different imaging modalities (microscope, macrophotography, etc.), including options for comfortable manual correction. Advanced image filtering emphasizes veins from the background and compensates for local brightness inhomogeneities. The most important traits being calculated are total vein length, vein density, piecewise vein lengths and widths, areole area, and skeleton graph statistics, like the number of branching or ending points. For the determination of vein widths, a model-based vein edge estimation approach has been implemented. Validation was performed for the measurement of vein length, vein width, and vein density of Arabidopsis (Arabidopsis thaliana), proving the reliability of phenoVein. We demonstrate the power of phenoVein on a set of previously described vein structure mutants of Arabidopsis (hemivenata, ondulata3, and asymmetric leaves2-101) compared with wild-type accessions Columbia-0 and Landsberg erecta-0. phenoVein is freely available as open-source software. PMID:26468519
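Vein density — total vein length per leaf area, one of the traits listed above — can be sketched from a binary skeleton as follows (a simplified pixel-count estimate, not phenoVein's model-based implementation; pixel size and geometry are illustrative):

```python
import numpy as np

def vein_density(skeleton, px_mm=0.01):
    """Total vein length divided by leaf area, from a binary skeleton.

    Length is approximated by counting skeleton pixels (px_mm is the
    pixel side length in mm); a diagonal-aware estimate would weight
    diagonal steps by sqrt(2)."""
    length_mm = skeleton.sum() * px_mm
    area_mm2 = skeleton.size * px_mm ** 2
    return length_mm / area_mm2

# Toy skeleton: two straight veins crossing a 100 x 100 px leaf patch.
skel = np.zeros((100, 100), bool)
skel[50, :] = True
skel[:, 30] = True

print(round(vein_density(skel), 2), "mm vein per mm^2 leaf")  # → 1.99
```

The two 1 mm veins share one crossing pixel, giving 199 skeleton pixels over a 1 mm² patch.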

  2. Magnetic Field Analysis of Lorentz Motors Using a Novel Segmented Magnetic Equivalent Circuit Method

    PubMed Central

    Qian, Junbing; Chen, Xuedong; Chen, Han; Zeng, Lizhan; Li, Xiaoqing

    2013-01-01

    A simple and accurate method based on the magnetic equivalent circuit (MEC) model is proposed in this paper to predict magnetic flux density (MFD) distribution of the air-gap in a Lorentz motor (LM). In conventional MEC methods, the permanent magnet (PM) is treated as one common source and all branches of MEC are coupled together to become a MEC network. In our proposed method, every PM flux source is divided into three sub-sections (the outer, the middle and the inner). Thus, the MEC of LM is divided correspondingly into three independent sub-loops. As the size of the middle sub-MEC is small enough, it can be treated as an ideal MEC and solved accurately. Combining with decoupled analysis of outer and inner MECs, MFD distribution in the air-gap can be approximated by a quadratic curve, and the complex calculation of reluctances in MECs can be avoided. The segmented magnetic equivalent circuit (SMEC) method is used to analyze a LM, and its effectiveness is demonstrated by comparison with FEA, conventional MEC and experimental results. PMID:23358368

  4. Radio Frequency Ablation Registration, Segmentation, and Fusion Tool

    PubMed Central

    McCreedy, Evan S.; Cheng, Ruida; Hemler, Paul F.; Viswanathan, Anand; Wood, Bradford J.; McAuliffe, Matthew J.

    2008-01-01

    The Radio Frequency Ablation Segmentation Tool (RFAST) is a software application developed using NIH's Medical Image Processing Analysis and Visualization (MIPAV) API for the specific purpose of assisting physicians in the planning of radio frequency ablation (RFA) procedures. The RFAST application sequentially leads the physician through the steps necessary to register, fuse, segment, visualize and plan the RFA treatment. Three-dimensional volume visualization of the CT dataset with segmented 3D surface models enables the physician to interactively position the ablation probe to simulate burns and to semi-manually simulate sphere packing in an attempt to optimize probe placement. PMID:16871716

  5. Quantitative analysis of volume images: electron microscopic tomography of HIV

    NASA Astrophysics Data System (ADS)

    Nystroem, Ingela; Bengtsson, Ewert W.; Nordin, Bo G.; Borgefors, Gunilla

    1994-05-01

    Three-dimensional objects should be represented by 3D images. So far, most of the evaluation of images of 3D objects has been done visually, either by looking at slices through the volumes or by looking at 3D graphic representations of the data. In many applications a more quantitative evaluation would be valuable. Our application is the analysis of volume images of the causative agent of the acquired immune deficiency syndrome (AIDS), namely human immunodeficiency virus (HIV), produced by electron microscopic tomography (EMT). A structural analysis of the virus is of importance. The representation of some of the interesting structural features will depend on the orientation and the position of the object relative to the digitization grid. We describe a method of defining the orientation and position of objects based on the moment of inertia of the objects in the volume image. In addition to a direct quantification of the 3D object, a quantitative description of the convex deficiency may provide valuable information about the geometrical properties. The convex deficiency is the volume object subtracted from its convex hull. We describe an algorithm for creating an enclosing polyhedron approximating the convex hull of an arbitrarily shaped object.
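The moment-of-inertia definition of object orientation described above can be sketched as follows (a minimal sketch using the second-moment tensor of the voxel coordinates, whose eigenvectors give the principal axes; the toy object is illustrative):

```python
import numpy as np

def principal_axes(mask):
    """Orientation of a binary 3D object from its second-moment
    (covariance) tensor, whose eigenvectors are the principal axes.

    Returns the centroid and the axes as columns, sorted by
    decreasing eigenvalue (major axis first)."""
    coords = np.argwhere(mask).astype(float)
    centroid = coords.mean(axis=0)
    centered = coords - centroid
    tensor = centered.T @ centered / len(coords)   # second moments
    evals, evecs = np.linalg.eigh(tensor)
    order = np.argsort(evals)[::-1]
    return centroid, evecs[:, order]

# Toy object: a rod elongated along the y axis (axis 1).
mask = np.zeros((16, 16, 16), bool)
mask[7:9, 2:14, 7:9] = True

centroid, axes = principal_axes(mask)
major = axes[:, 0]
print(np.round(np.abs(major)))   # → [0. 1. 0.]  (aligned with y)
```

Aligning the major axis with a fixed grid direction makes subsequent structural measurements independent of the object's original pose.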

  6. Automated Forensic Animal Family Identification by Nested PCR and Melt Curve Analysis on an Off-the-Shelf Thermocycler Augmented with a Centrifugal Microfluidic Disk Segment.

    PubMed

    Keller, Mark; Naue, Jana; Zengerle, Roland; von Stetten, Felix; Schmidt, Ulrike

    2015-01-01

    Nested PCR remains a labor-intensive and error-prone biomolecular analysis. Laboratory workflow automation by precise control of minute liquid volumes in centrifugal microfluidic Lab-on-a-Chip systems holds great potential for such applications. However, the majority of these systems require costly custom-made processing devices. Our idea is to augment a standard laboratory device, here a centrifugal real-time PCR thermocycler, with inbuilt liquid handling capabilities for automation. We have developed a microfluidic disk segment enabling an automated nested real-time PCR assay for identification of common European animal groups adapted to forensic standards. For the first time we utilize a novel combination of fluidic elements, including pre-storage of reagents, to automate the assay at constant rotational frequency of an off-the-shelf thermocycler. It provides a universal duplex pre-amplification of short fragments of the mitochondrial 12S rRNA and cytochrome b genes, animal-group-specific main-amplifications, and melting curve analysis for differentiation. The system was characterized with respect to assay sensitivity, specificity, risk of cross-contamination, and detection of minor components in mixtures. In 92.2% of the performed tests, sample handling was fluidically failure-free, and these tests were used for evaluation. Altogether, augmentation of the standard real-time thermocycler with a self-contained centrifugal microfluidic disk segment resulted in an accelerated and automated analysis reducing hands-on time, and circumventing the risk of contamination associated with regular nested PCR protocols. PMID:26147196

  8. Influence of Pressure on Chain and Segmental Dynamics in Polyisoprene

    SciTech Connect

    Pawlus, Sebastian; Sokolov, Alexei P; Paluch, Marian; Mierzwa, Michal

    2010-01-01

    We present detailed studies of variation in segmental and chain dynamics of polyisoprene under pressure. Samples with two molecular weights (MW), 2.4 and 25 kg/mol (below and above entanglement), were investigated. Dielectric spectroscopy measurements at isobaric and isothermal conditions exhibit clear differences in the temperature and pressure dependencies of chain and segmental relaxation times. Moreover, application of pressure increases the time separation between the segmental and normal (chain) modes at isochronic conditions. This increase can be explained by an effective increase in the number of Rouse segments under compression at the same segmental relaxation time. Our analysis also reveals that the thermodynamic scaling of the relaxation times (log τ vs TV^γ, where V is volume) does not work well simultaneously for both processes.

  9. Airway segmentation and analysis for the study of mouse models of lung disease using micro-CT

    NASA Astrophysics Data System (ADS)

    Artaechevarria, X.; Pérez-Martín, D.; Ceresa, M.; de Biurrun, G.; Blanco, D.; Montuenga, L. M.; van Ginneken, B.; Ortiz-de-Solorzano, C.; Muñoz-Barrutia, A.

    2009-11-01

    Animal models of lung disease are gaining importance in understanding the underlying mechanisms of diseases such as emphysema and lung cancer. Micro-CT allows in vivo imaging of these models, thus permitting the study of the progression of the disease or the effect of therapeutic drugs in longitudinal studies. Automated analysis of micro-CT images can be helpful to understand the physiology of diseased lungs, especially when combined with measurements of respiratory system input impedance. In this work, we present a fast and robust murine airway segmentation and reconstruction algorithm. The algorithm is based on a propagating fast marching wavefront that, as it grows, divides the tree into segments. We devised a number of specific rules to guarantee that the front propagates only inside the airways and to avoid leaking into the parenchyma. The algorithm was tested on normal mice, a mouse model of chronic inflammation and a mouse model of emphysema. A comparison with manual segmentations of two independent observers shows that the specificity and sensitivity values of our method are comparable to the inter-observer variability, and radius measurements of the mainstem bronchi reveal significant differences between healthy and diseased mice. Combining measurements of the automatically segmented airways with the parameters of the constant phase model provides extra information on how disease affects lung function.
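    The front-propagation idea can be illustrated with a much-simplified region grower: a wavefront expands from a seed voxel and accepts only voxels darker than a threshold, a crude stand-in for the paper's fast-marching front and its leakage-prevention rules (the toy volume and HU-like intensity values below are hypothetical):

```python
import numpy as np
from collections import deque

def wavefront_segment(volume, seed, max_intensity):
    """Grow a front from `seed`, accepting only voxels below `max_intensity`
    (air is dark in CT) -- a simplified stand-in for a fast-marching front."""
    segmented = np.zeros(volume.shape, dtype=bool)
    front = deque([seed])
    segmented[seed] = True
    while front:
        z, y, x = front.popleft()
        for dz, dy, dx in ((1,0,0),(-1,0,0),(0,1,0),(0,-1,0),(0,0,1),(0,0,-1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)):
                if not segmented[n] and volume[n] < max_intensity:
                    segmented[n] = True
                    front.append(n)
    return segmented

# Toy volume: a dark "airway" tube running through bright parenchyma.
vol = np.full((5, 5, 5), 100.0)
vol[:, 2, 2] = -900.0                 # air-filled tube along z
mask = wavefront_segment(vol, (0, 2, 2), max_intensity=-500.0)
print(int(mask.sum()))  # → 5 (the five tube voxels)
```

The published algorithm additionally tracks where the front splits to divide the tree into segments; here only the leakage guard (the intensity test) is shown.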

  10. Fast segmentation and high-quality three-dimensional volume mesh creation from medical images for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jermyn, Michael; Ghadyani, Hamid; Mastanduno, Michael A.; Turner, Wes; Davis, Scott C.; Dehghani, Hamid; Pogue, Brian W.

    2013-08-01

    Multimodal approaches that combine near-infrared (NIR) and conventional imaging modalities have been shown to improve optical parameter estimation dramatically and thus represent a prevailing trend in NIR imaging. These approaches typically involve applying anatomical templates from magnetic resonance imaging/computed tomography/ultrasound images to guide the recovery of optical parameters. However, merging these data sets using current technology requires multiple software packages, substantial expertise, significant time-commitment, and often results in unacceptably poor mesh quality for optical image reconstruction, a reality that represents a significant roadblock for translational research of multimodal NIR imaging. This work addresses these challenges directly by introducing automated digital imaging and communications in medicine image stack segmentation and a new one-click three-dimensional mesh generator optimized for multimodal NIR imaging, and combining these capabilities into a single software package (available for free download) with a streamlined workflow. Image processing time and mesh quality benchmarks were examined for four common multimodal NIR use-cases (breast, brain, pancreas, and small animal) and were compared to a commercial image processing package. Applying these tools resulted in a fivefold decrease in image processing time and 62% improvement in minimum mesh quality, in the absence of extra mesh postprocessing. These capabilities represent a significant step toward enabling translational multimodal NIR research for both expert and nonexpert users in an open-source platform.

  11. Global Warming’s Six Americas: An Audience Segmentation Analysis (Invited)

    NASA Astrophysics Data System (ADS)

    Roser-Renouf, C.; Maibach, E.; Leiserowitz, A.

    2009-12-01

    One of the first rules of effective communication is to “know thy audience.” People have different psychological, cultural and political reasons for acting - or not acting - to reduce greenhouse gas emissions, and climate change educators can increase their impact by taking these differences into account. In this presentation we will describe six unique audience segments within the American public that each responds to the issue in its own distinct way, and we will discuss methods of engaging each. The six audiences were identified using a nationally representative survey of American adults conducted in the fall of 2008 (N=2,164). In two waves of online data collection, the public’s climate change beliefs, attitudes, risk perceptions, values, policy preferences, conservation, and energy-efficiency behaviors were assessed. The data were subjected to latent class analysis, yielding six groups distinguishable on all the above dimensions. The Alarmed (18%) are fully convinced of the reality and seriousness of climate change and are already taking individual, consumer, and political action to address it. The Concerned (33%) - the largest of the Six Americas - are also convinced that global warming is happening and a serious problem, but have not yet engaged with the issue personally. Three other Americas - the Cautious (19%), the Disengaged (12%) and the Doubtful (11%) - represent different stages of understanding and acceptance of the problem, and none are actively involved. The final America - the Dismissive (7%) - are very sure it is not happening and are actively involved as opponents of a national effort to reduce greenhouse gas emissions. Mitigating climate change will require a diversity of messages, messengers and methods that take into account these differences within the American public. 
The findings from this research can serve as guideposts for educators on the optimal choices for reaching and influencing target groups with varied informational needs, values and beliefs.
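    The study's segments were derived by latent class analysis of categorical survey items; as a rough numerical analogue, clustering synthetic one-dimensional "concern" scores shows the same segmentation idea. Everything below (scores, cluster count, means) is invented for illustration and is not the survey's data or method:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic survey: concern scores drawn from three hypothetical segments
# centered at 1.0, 3.0, and 5.0 on an attitude scale.
scores = np.concatenate([rng.normal(m, 0.3, 50) for m in (1.0, 3.0, 5.0)])

def kmeans_1d(x, k, iters=50):
    """Tiny 1-D k-means: a crude stand-in for latent class analysis."""
    centers = np.linspace(x.min(), x.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() for j in range(k)])
    return centers, labels

centers, labels = kmeans_1d(scores, 3)
print(np.round(np.sort(centers), 1))
```

Proper latent class analysis models categorical response patterns with class-conditional probabilities and selects the class count by fit criteria (e.g. BIC), which is well beyond this sketch.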

  12. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods

    PubMed Central

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich

    2012-01-01

    Purpose: Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. Methods: A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Results: Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average.
A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of user interaction and resulted in statistically not significantly different segmentation error indices (ANOVA test, significance level of 0.05). Conclusions: All three experts were able to produce liver segmentations with low error rates. User interaction time savings of up to 71% compared to a 2D refinement approach demonstrate the utility and potential of our approach. The system offers a range of different tools to manipulate segmentation results, and some users might benefit from a longer learning phase to develop efficient segmentation refinement strategies. The presented approach represents a generally applicable segmentation approach that can be applied to many medical image segmentation problems. PMID:22380370
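    The graph-cuts step can be illustrated on a toy 1-D example: pixel likelihoods become terminal (source/sink) capacities, a smoothness penalty becomes neighbor capacities, and the minimum s-t cut yields the labeling. This is a generic sketch of the technique, not the paper's implementation; the pixel values and weights are invented:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow on a residual-capacity dict-of-dicts; returns
    the set of nodes on the source side of the minimum cut."""
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:                # BFS for shortest path
            u = queue.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:                             # no augmenting path
            return set(parent)
        path, v = [], t
        while parent[v] is not None:                    # recover the path
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:                               # augment + residuals
            cap[u][v] -= bottleneck
            rev = cap.setdefault(v, {})
            rev[u] = rev.get(u, 0) + bottleneck

# Toy 1-D "scan": bright pixels likely organ, dark pixels likely background.
pixels = [9, 8, 7, 2, 1]
cap = {}
for i, p in enumerate(pixels):
    cap.setdefault('s', {})[i] = p        # data term: organ likelihood
    cap.setdefault(i, {})['t'] = 10 - p   # data term: background likelihood
    if i + 1 < len(pixels):               # smoothness term between neighbors
        cap.setdefault(i, {})[i + 1] = 3
        cap.setdefault(i + 1, {})[i] = 3

organ = max_flow(cap, 's', 't') - {'s'}
print(sorted(organ))  # → [0, 1, 2]
```

Production graph-cut segmenters use the same energy structure on 3-D voxel grids with specialized max-flow solvers (e.g. Boykov-Kolmogorov) rather than Edmonds-Karp.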

  13. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods

    SciTech Connect

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich

    2012-03-15

    Purpose: Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. Methods: A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Results: Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average.
A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of user interaction and resulted in statistically not significantly different segmentation error indices (ANOVA test, significance level of 0.05). Conclusions: All three experts were able to produce liver segmentations with low error rates. User interaction time savings of up to 71% compared to a 2D refinement approach demonstrate the utility and potential of our approach. The system offers a range of different tools to manipulate segmentation results, and some users might benefit from a longer learning phase to develop efficient segmentation refinement strategies. The presented approach represents a generally applicable segmentation approach that can be applied to many medical image segmentation problems.

  14. Comparative analysis of operational forecasts versus actual weather conditions in airline flight planning, volume 4

    NASA Technical Reports Server (NTRS)

    Keitz, J. F.

    1982-01-01

    The impact of more timely and accurate weather data on airline flight planning, with the emphasis on fuel savings, is studied. This volume of the report discusses the results of Task 4 of the four major tasks included in the study. Task 4 uses flight plan segment wind and temperature differences as indicators of dates and geographic areas for which significant forecast errors may have occurred. An in-depth analysis is then conducted for the days identified. The analysis shows that significant errors occurred in the operational forecast on 15 of the 33 arbitrarily selected days included in the study. Wind speeds in an area of maximum winds were underestimated by at least 20 to 25 kts. on 14 of these days. The analysis also shows that there is a tendency to repeat the same forecast errors from prog to prog. Also, some perceived forecast errors from the flight plan comparisons could not be verified by visual inspection of the corresponding National Meteorological Center forecast and analysis charts, and it is likely that they are the result of weather data interpolation techniques or some other data processing procedure in the airlines' flight planning systems.

  15. Volume analysis of heat-induced cracks in human molars: A preliminary study

    PubMed Central

    Sandholzer, Michael A.; Baron, Katharina; Heimel, Patrick; Metscher, Brian D.

    2014-01-01

    Context: Only a few methods have been published dealing with the visualization of heat-induced cracks inside bones and teeth. Aims: As a novel approach, this study used nondestructive X-ray microtomography (micro-CT) for volume analysis of heat-induced cracks to observe the reaction of human molars to various levels of thermal stress. Materials and Methods: Eighteen clinically extracted third molars were rehydrated and burned under controlled temperatures (400, 650, and 800°C) using an electric furnace with a heating rate of 25°C/min. The subsequent high-resolution scans (voxel size 17.7 µm) were made with a compact micro-CT scanner (SkyScan 1174). In total, 14 scans were automatically segmented with Definiens XD Developer 1.2 and three-dimensional (3D) models were computed with Visage Imaging Amira 5.2.2. The results of the automated segmentation were analyzed with an analysis of variance (ANOVA) and uncorrected post hoc least significant difference (LSD) tests using the Statistical Package for Social Sciences (SPSS) 17. A probability level of P < 0.05 was used as an index of statistical significance. Results: A temperature-dependent increase of heat-induced cracks was observed between the three temperature groups (P < 0.05, ANOVA post hoc LSD). In addition, the distributions and shapes of the heat-induced changes could be classified using the computed 3D models. Conclusion: The macroscopic heat-induced changes observed in this preliminary study correspond with previous observations of unrestored human teeth, yet the current observations also take into account the entire microscopic 3D expansion of heat-induced cracks within the dental hard tissues. Using the same experimental conditions proposed in the literature, this study confirms previous results, adds new observations, and offers new perspectives in the investigation of forensic evidence. PMID:25125923
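    The statistical step (one-way ANOVA across the three temperature groups) can be sketched with SciPy. The crack-volume values, group means, and group sizes below are invented for illustration and are not the study's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical crack-volume fractions (%) per temperature group;
# group sizes 5 + 5 + 4 mirror the 14 analyzed scans, values are made up.
g400 = rng.normal(0.5, 0.1, 5)
g650 = rng.normal(1.5, 0.1, 5)
g800 = rng.normal(3.0, 0.1, 4)

# One-way ANOVA: is at least one group mean different?
f_stat, p_value = stats.f_oneway(g400, g650, g800)
print(p_value < 0.05)  # → True for these well-separated synthetic groups
```

Uncorrected post hoc LSD comparisons then reduce to pairwise t-tests (e.g. `stats.ttest_ind(g400, g650)`) performed only after a significant omnibus F.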

  16. Parallel runway requirement analysis study. Volume 2: Simulation manual

    NASA Technical Reports Server (NTRS)

    Ebrahimi, Yaghoob S.; Chun, Ken S.

    1993-01-01

    This document is a user manual for operating the PLAND_BLUNDER (PLB) simulation program. This simulation is based on two aircraft approaching parallel runways independently and using parallel Instrument Landing System (ILS) equipment during Instrument Meteorological Conditions (IMC). If an aircraft should deviate from its assigned localizer course toward the opposite runway, this constitutes a blunder which could endanger the aircraft on the adjacent path. The worst case scenario would be if the blundering aircraft were unable to recover and continue toward the adjacent runway. PLAND_BLUNDER is a Monte Carlo-type simulation which employs the events and aircraft positioning during such a blunder situation. The model simulates two aircraft performing parallel ILS approaches using Instrument Flight Rules (IFR) or visual procedures. PLB uses a simple movement model and control law in three dimensions (X, Y, Z). The parameters of the simulation inputs and outputs are defined in this document along with a sample of the statistical analysis. This document is the second volume of a two volume set. Volume 1 is a description of the application of the PLB to the analysis of close parallel runway operations.
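    A Monte Carlo blunder study of this kind can be sketched in a few lines: sample a blunder geometry and an evasion delay, compute the lateral closure, and tally how often the remaining miss distance falls below a threshold. All parameters below (separation, speeds, angles, threshold) are hypothetical placeholders, not PLB's actual inputs:

```python
import math
import random

random.seed(42)
RUNWAY_SEP = 1100.0   # ft, hypothetical runway centerline separation
ALERT_DIST = 500.0    # ft, hypothetical miss-distance threshold

def one_blunder_trial():
    """One trial: a blundering aircraft turns toward the adjacent approach;
    the evading aircraft breaks away after a random reaction delay."""
    heading = math.radians(random.uniform(15.0, 30.0))  # blunder angle
    delay = random.uniform(5.0, 15.0)                   # s until evasion
    speed = 220.0                                       # ft/s ground speed
    closure = speed * math.sin(heading) * delay         # lateral closure, ft
    return RUNWAY_SEP - closure                         # remaining miss distance

misses = [one_blunder_trial() for _ in range(10_000)]
p_conflict = sum(m < ALERT_DIST for m in misses) / len(misses)
print(0.0 <= p_conflict <= 1.0)  # → True
```

The real simulation adds 3-D kinematics, navigation and surveillance error models, and an ATC/pilot response chain; this sketch only shows the Monte Carlo scaffolding.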

  17. Concepts and analysis for precision segmented reflector and feed support structures

    NASA Technical Reports Server (NTRS)

    Miller, Richard K.; Thomson, Mark W.; Hedgepeth, John M.

    1990-01-01

    Several issues surrounding the design of a large (20-meter diameter) Precision Segmented Reflector are investigated. The concerns include development of a reflector support truss geometry that will permit deployment into the required doubly-curved shape without significant member strains. For deployable and erectable reflector support trusses, the reduction of structural redundancy was analyzed to achieve reduced weight and complexity for the designs. The stiffness and accuracy of such reduced-member trusses, however, were found to be affected to an unexpected degree. The Precision Segmented Reflector designs were developed with performance requirements that represent the Reflector application. A novel deployable sunshade concept was developed, and a detailed parametric study of various feed support structural concepts was performed. The results of the detailed study reveal what may be the most desirable feed support structure geometry for Precision Segmented Reflector/Large Deployable Reflector applications.

  18. Application of Control Volume Analysis to Cerebrospinal Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Wei, Timothy; Cohen, Benjamin; Anor, Tomer; Madsen, Joseph

    2011-11-01

    Hydrocephalus is among the most common birth defects and at present can be neither prevented nor cured. Afflicted individuals face serious issues, which are currently too complicated and not well enough understood to treat via systematic therapies. This talk outlines the framework and application of a control volume methodology to clinical Phase Contrast MRI data. Specifically, integral control volume analysis utilizes a fundamental fluid dynamics methodology to quantify intracranial dynamics within a precise, direct, and physically meaningful framework. A chronically shunted, hydrocephalic patient in need of a revision procedure was used as an in vivo case study. Magnetic resonance velocity measurements within the patient's aqueduct were obtained in four biomedical states and were analyzed using the methods presented here. Pressure force estimates were obtained, showing distinct differences in amplitude, phase, and waveform shape for different intracranial states within the same individual. Thoughts on the physiological and diagnostic research and development implications/opportunities will be presented.
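    For a single cross-section, the integral control-volume approach reduces to integrating the momentum flux ρu² over the area from measured velocities. A sketch on a synthetic parabolic profile standing in for the Phase Contrast MRI data; the aqueduct radius, peak velocity, and CSF density are assumed round numbers, not patient values:

```python
import numpy as np

RHO = 1007.0                          # kg/m^3, approximate CSF density
r = np.linspace(0.0, 1.0e-3, 50)      # radial coordinate, m (1 mm aqueduct)
u = 0.02 * (1.0 - (r / r[-1])**2)     # parabolic profile, 2 cm/s peak

# Momentum flux through the section: integral of rho * u^2 dA,
# with dA = 2*pi*r dr for an axisymmetric profile (trapezoidal rule).
integrand = RHO * u**2 * 2.0 * np.pi * r
flux = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)))
print(flux > 0.0)  # → True; analytically rho*U^2*pi*R^2/3 here
```

Summing such flux terms (plus the unsteady term) over all control-surface patches across the cardiac cycle yields the net pressure force estimates described in the abstract.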

  19. Growth and morphological analysis of segmented AuAg alloy nanowires created by pulsed electrodeposition in ion-track etched membranes

    PubMed Central

    Burr, Loic; Trautmann, Christina; Toimil-Molares, Maria Eugenia

    2015-01-01

    Summary Background: Multicomponent heterostructure nanowires and nanogaps are of great interest for applications in sensorics. Pulsed electrodeposition in ion-track etched polymer templates is a suitable method to synthesise segmented nanowires with segments consisting of two different types of materials. For a well-controlled synthesis process, detailed analysis of the deposition parameters and the size-distribution of the segmented wires is crucial. Results: The fabrication of electrodeposited AuAg alloy nanowires and segmented Au-rich/Ag-rich/Au-rich nanowires with controlled composition and segment length in ion-track etched polymer templates was developed. Detailed analysis by cyclic voltammetry in ion-track membranes, energy-dispersive X-ray spectroscopy and scanning electron microscopy was performed to determine the dependency between the chosen potential and the segment composition. Additionally, we have dissolved the middle Ag-rich segments in order to create small nanogaps with controlled gap sizes. Annealing of the created structures allows us to influence their morphology. Conclusion: AuAg alloy nanowires, segmented wires and nanogaps with controlled composition and size can be synthesised by electrodeposition in membranes, and are ideal model systems for investigation of surface plasmons. PMID:26199830

  20. Brain MRI Segmentation with Multiphase Minimal Partitioning: A Comparative Study

    PubMed Central

    Angelini, Elsa D.; Song, Ting; Mensh, Brett D.; Laine, Andrew F.

    2007-01-01

    This paper presents the implementation and quantitative evaluation of a multiphase three-dimensional deformable model in a level set framework for automated segmentation of brain MRIs. The segmentation algorithm performs an optimal partitioning of three-dimensional data based on homogeneity measures that naturally evolves to the extraction of different tissue types in the brain. Random seed initialization was used to minimize the sensitivity of the method to initial conditions while avoiding the need for a priori information. This random initialization ensures robustness of the method with respect to the initialization and the minimization set up. Postprocessing corrections with morphological operators were applied to refine the details of the global segmentation method. A clinical study was performed on a database of 10 adult brain MRI volumes to compare the level set segmentation to three other methods: “idealized” intensity thresholding, fuzzy connectedness, and an expectation maximization classification using hidden Markov random fields. Quantitative evaluation of segmentation accuracy was performed with comparison to manual segmentation computing true positive and false positive volume fractions. A statistical comparison of the segmentation methods was performed through a Wilcoxon analysis of these error rates and results showed very high quality and stability of the multiphase three-dimensional level set method. PMID:18253474
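    The evaluation metrics mentioned (true and false positive volume fractions against a manual segmentation) can be computed directly from boolean masks. Note that FPVF conventions vary across papers; in this sketch both fractions are normalized by the manual volume, and the masks are toy 2-D examples:

```python
import numpy as np

def volume_fractions(auto_mask, manual_mask):
    """True/false positive volume fractions of an automated segmentation
    relative to a manual reference (both normalized by the manual volume)."""
    manual_vol = manual_mask.sum()
    tpvf = np.logical_and(auto_mask, manual_mask).sum() / manual_vol
    fpvf = np.logical_and(auto_mask, ~manual_mask).sum() / manual_vol
    return tpvf, fpvf

# Toy example: two overlapping 6x6 squares on a 10x10 grid.
manual = np.zeros((10, 10), dtype=bool); manual[2:8, 2:8] = True  # 36 voxels
auto = np.zeros((10, 10), dtype=bool);   auto[3:9, 3:9] = True    # 36 voxels

tpvf, fpvf = volume_fractions(auto, manual)
print(round(tpvf, 3), round(fpvf, 3))  # → 0.694 0.306 (overlap is 25 voxels)
```

For 3-D MRI volumes the same two lines apply unchanged to (z, y, x) boolean arrays, which is what makes these fractions convenient for comparing segmentation methods.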

  1. Analysis of automated highway system risks and uncertainties. Volume 5

    SciTech Connect

    Sicherman, A.

    1994-10-01

    This volume describes a risk analysis performed to help identify important Automated Highway System (AHS) deployment uncertainties and quantify their effect on costs and benefits for a range of AHS deployment scenarios. The analysis identified a suite of key factors affecting vehicle and roadway costs, capacities and market penetrations for alternative AHS deployment scenarios. A systematic protocol was utilized for obtaining expert judgments of key factor uncertainties in the form of subjective probability percentile assessments. Based on these assessments, probability distributions on vehicle and roadway costs, capacity and market penetration were developed for the different scenarios. The cost/benefit risk methodology and analysis provide insights by showing how uncertainties in key factors translate into uncertainties in summary cost/benefit indices.
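    Turning expert 10th/50th/90th percentile assessments into a samplable distribution can be sketched as a piecewise-linear inverse CDF through those three points. The dollar values below are invented placeholders, not the study's assessments:

```python
import random

random.seed(7)
# Hypothetical expert percentile assessments for an AHS vehicle-cost increment ($).
p10, p50, p90 = 800.0, 1500.0, 3200.0

def sample_cost():
    """Piecewise-linear inverse CDF honoring the 10/50/90 assessments;
    a crude stand-in for fitting a full subjective distribution."""
    u = random.random()
    if u < 0.5:
        # lower branch: maps u = 0.1 -> p10 and u = 0.5 -> p50
        return p10 + (p50 - p10) * (u - 0.1) / 0.4
    # upper branch: maps u = 0.5 -> p50 and u = 0.9 -> p90
    return p50 + (p90 - p50) * (u - 0.5) / 0.4

draws = [sample_cost() for _ in range(20_000)]
median = sorted(draws)[len(draws) // 2]
print(abs(median - p50) < 100)  # → True: the sample median tracks p50
```

Propagating such draws for every key factor through the cost/benefit model is what produces the output distributions on costs, capacity, and market penetration described above.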

  2. Improving Text Segmentation Using Latent Semantic Analysis: A Reanalysis of Choi,

    E-print Network

    in order to improve the accuracy of a text segmentation algorithm. By comparing the accuracy of the very same algorithm with and without semantic knowledge, they were able to show the benefit derived from such knowledge. In their experiments, semantic knowledge was … the specificity of the LSA corpus explains the largest part of the benefit; one may wonder if it is possible to use LSA

  3. Market Structure and Institutional Position Analysis of Regional Market Segments in Private College Recruiting.

    ERIC Educational Resources Information Center

    Litten, Larry H.

    Presented are several of the techniques that have been used as part of a comprehensive market research program at Carleton College (Northfield, Minnesota). The basic focus is on regional segments in Carleton's applicant pool. Carleton's position in the market is examined in relation to selected types of schools with which it competes for…

  4. Spinal nerve segmentation in the chick embryo: analysis of distinct axon-repulsive systems.

    PubMed

    Vermeren, M M; Cook, G M; Johnson, A R; Keynes, R J; Tannahill, D

    2000-09-01

    In higher vertebrates, the segmental organization of peripheral spinal nerves is established by a repulsive mechanism whereby sensory and motor axons are excluded from the posterior half-somite. A number of candidate axon repellents have been suggested to mediate this barrier to axon growth, including Sema3A, Ephrin-B, and peanut agglutinin (PNA)-binding proteins. We have tested the candidacy of these factors in vitro by examining their contribution to the growth cone collapse-inducing activity of somite-derived protein extracts on sensory, motor, and retinal axons. We find that Sema3A is unlikely to play a role in the segmentation of sensory or motor axons and that Ephrin-B may contribute to motor but not sensory axon segmentation. We also provide evidence that the only candidate molecule(s) that induces the growth cone collapse of both sensory and motor axons binds to PNA and is not Sema3A or Ephrin-B. By grafting primary sensory, motor, and quail retinal neurons into the chick trunk in vivo, we provide further evidence that the posterior half-somite represents a universal barrier to growing axons. Taken together, these results suggest that the mechanisms of peripheral nerve segmentation should be considered in terms of repellent molecules in addition to the identified molecules. PMID:10964478

  5. Semi-supervised morpheme segmentation without morphological analysis, Özkan Kılıç, Cem Bozşahin

    E-print Network

    Bozsahin, Cem

    E-mail: okilic@ii.metu.edu.tr, bozsahin@metu.edu.tr Abstract The premise of unsupervised statistical learning methods lies in a cognitively very plausible assumption that learning starts with an unlabeled dataset. … 25% of METU-Turkish Corpus for manual segmentation to extract the set of morphemes (and morphs) in its 2

  6. Probabilistic Segmentation and Analysis of Horizontal Cells Vebjorn Ljosa and Ambuj K. Singh

    E-print Network

    California at Santa Barbara, University of

    segmentation, in which each pixel is assigned a probability of belonging to each cell instead of being … it has had in genetics, to other fields. Neuroscience is one field that could benefit greatly from data … by the photoreceptors in response to … Figure 1. Confocal micrograph of three horizontal cells in a detached cat retina

  7. Segmental and Positional Effects on Children's Coda Production: Comparing Evidence from Perceptual Judgments and Acoustic Analysis

    ERIC Educational Resources Information Center

    Theodore, Rachel M.; Demuth, Katherine; Shattuck-Hufnagel, Stephanie

    2012-01-01

    Children's early productions are highly variable. Findings from children's early productions of grammatical morphemes indicate that some of the variability is systematically related to segmental and phonological factors. Here, we extend these findings by assessing 2-year-olds' production of non-morphemic codas using both listener decisions and…

  8. Analysis of volume holographic storage allowing large-angle illumination

    NASA Astrophysics Data System (ADS)

    Shamir, Joseph

    2005-05-01

    Advanced technological developments have stimulated renewed interest in volume holography for applications such as information storage and wavelength multiplexing for communications and laser beam shaping. In these and many other applications, the information-carrying wave fronts usually possess narrow spatial-frequency bands, although they may propagate at large angles with respect to each other or a preferred optical axis. Conventional analytic methods are not capable of properly analyzing the optical architectures involved. For mitigation of the analytic difficulties, a novel approximation is introduced to treat narrow spatial-frequency band wave fronts propagating at large angles. This approximation is incorporated into the analysis of volume holography based on a plane-wave decomposition and Fourier analysis. As a result of the analysis, the recently introduced generalized Bragg selectivity is rederived for this more general case and is shown to provide enhanced performance for the above indicated applications. The power of the new theoretical description is demonstrated with the help of specific examples and computer simulations. The simulations reveal some interesting effects, such as coherent motion blur, that were predicted in an earlier publication.
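    Numerically, the plane-wave decomposition underlying this analysis is a Fourier transform of the field: each spatial-frequency bin is one plane-wave component, with frequency f_x = sin(θ)/λ for a wave tilted by θ. A sketch recovering the tilt of a synthetic plane wave; the grid pitch, sample count, and wavelength are assumed values for illustration:

```python
import numpy as np

N, dx = 256, 0.2e-6                    # samples and grid pitch (m)
x = (np.arange(N) - N // 2) * dx
wavelength = 0.5e-6                    # m
theta = np.radians(30.0)               # large propagation angle

# Tilted plane wave sampled on the grid: exp(2*pi*i * sin(theta)/lambda * x).
field = np.exp(2j * np.pi * np.sin(theta) / wavelength * x)

# Plane-wave decomposition = FFT; the peak bin gives the carrier frequency.
spectrum = np.fft.fftshift(np.fft.fft(field))
fx = np.fft.fftshift(np.fft.fftfreq(N, dx))   # spatial frequencies (1/m)
fx_peak = fx[np.argmax(np.abs(spectrum))]

print(abs(fx_peak - np.sin(theta) / wavelength) < 1 / (N * dx))  # → True
```

A narrow spatial-frequency band wavefront at a large angle appears as a tight cluster of bins around such an off-axis carrier, which is exactly the regime the paper's approximation targets.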

  9. Synfuel program analysis. Volume I. Procedures-capabilities

    SciTech Connect

    Muddiman, J. B.; Whelan, J. W.

    1980-07-01

    This is the first of the two volumes describing the analytic procedures and resulting capabilities developed by Resource Applications (RA) for examining the economic viability, public costs, and national benefits of alternative synfuel projects and integrated programs. This volume is intended for Department of Energy (DOE) and Synthetic Fuel Corporation (SFC) program management personnel and includes a general description of the costing, venture, and portfolio models with enough detail for the reader to be able to specify cases and interpret outputs. It also contains an explicit description (with examples) of the types of results which can be obtained when applied to: the analysis of individual projects; the analysis of input uncertainty, i.e., risk; and the analysis of portfolios of such projects, including varying technology mixes and buildup schedules. In all cases, the objective is to obtain, on the one hand, comparative measures of private investment requirements and expected returns (under differing public policies) as they affect the private decision to proceed, and, on the other, public costs and national benefits as they affect public decisions to participate (in what form, in what areas, and to what extent).

  10. User's operating procedures. Volume 2: Scout project financial analysis program

    NASA Technical Reports Server (NTRS)

    Harris, C. G.; Harris, D. K.

    1985-01-01

    A review is presented of the user's operating procedures for the Scout Project Automatic Data system, called SPADS. SPADS is the result of the past seven years of software development on a Prime mini-computer located at the Scout Project Office, NASA Langley Research Center, Hampton, Virginia. SPADS was developed as a single entry, multiple cross-reference data management and information retrieval system for the automation of Project office tasks, including engineering, financial, managerial, and clerical support. This volume, two (2) of three (3), provides the instructions to operate the Scout Project Financial Analysis program in data retrieval and file maintenance via the user friendly menu drivers.

  11. Simplifying the spectral analysis of the volume operator

    E-print Network

    R. Loll

    1997-06-13

    The volume operator plays a central role in both the kinematics and dynamics of canonical approaches to quantum gravity which are based on algebras of generalized Wilson loops. We introduce a method for simplifying its spectral analysis, for quantum states that can be realized on a cubic three-dimensional lattice. This involves a decomposition of Hilbert space into sectors transforming according to the irreducible representations of a subgroup of the cubic group. As an application, we determine the complete spectrum for a class of states with six-valent intersections.

  12. Study of Alternate Space Shuttle Concepts. Volume 2, Part 2: Concept Analysis and Definition

    NASA Technical Reports Server (NTRS)

    1971-01-01

    This is the final report of a Phase A Study of Alternate Space Shuttle Concepts by the Lockheed Missiles & Space Company (LMSC) for the National Aeronautics and Space Administration George C. Marshall Space Flight Center (MSFC). The eleven-month study, which began on 30 June 1970, is to examine the stage-and-one-half and other Space Shuttle configurations and to establish feasibility, performance, cost, and schedules for the selected concepts. This final report consists of four volumes as follows: Volume I - Executive Summary, Volume II - Concept Analysis and Definition, Volume III - Program Planning, and Volume IV - Cost Data. This document is Volume II, Concept Analysis and Definition.

  13. Parallel runway requirement analysis study. Volume 1: The analysis

    NASA Technical Reports Server (NTRS)

    Ebrahimi, Yaghoob S.

    1993-01-01

    The correlation of increased flight delays with the level of aviation activity is well recognized. A main contributor to these flight delays has been the capacity of airports. Though new airport and runway construction would significantly increase airport capacity, few programs of this type are currently underway, let alone planned, because of the high cost associated with such endeavors. Therefore, it is necessary to achieve the most efficient and cost-effective use of existing fixed airport resources through better planning and control of traffic flows. In fact, during the past few years the FAA has initiated such an airport capacity program designed to provide additional capacity at existing airports. Some of the improvements that this program has generated thus far have been based on new Air Traffic Control procedures, terminal automation, additional Instrument Landing Systems, improved controller display aids, and improved utilization of multiple runways/Instrument Meteorological Conditions (IMC) approach procedures. A useful element in understanding potential operational capacity enhancements at high-demand airports has been the development and use of an analysis tool called the PLAND_BLUNDER (PLB) Simulation Model. The objective in building this simulation was to develop a parametric model that could be used to determine the minimum safety level of parallel runway operations for various parameters representing airplane, navigation, surveillance, and ATC system performance.
    This simulation is useful for: quick and economical evaluation of existing environments that are experiencing IMC delays; efficient study and validation of proposed procedure modifications; evaluating requirements for new airports or for new runways at old airports; simple, parametric investigation of a wide range of issues and approaches; trading off the contributions of air and ground technology and procedures; and considering probable blunder mechanisms and the range of blunder scenarios. This study describes the steps of building the simulation and considers the input parameters, assumptions and limitations, and available outputs. Validation results and sensitivity analysis are addressed, as are some IMC and Visual Meteorological Conditions (VMC) approaches to parallel runways. Also, present and future applicable technologies (e.g., Digital Autoland Systems, Traffic Collision and Avoidance System II, Enhanced Situational Awareness System, Global Positioning Systems for Landing, etc.) are assessed and recommendations made.

  14. Design and Analysis of Modules for Segmented X-Ray Optics

    NASA Technical Reports Server (NTRS)

    McClelland, Ryan S.; Biskach, Michael P.; Chan, Kai-Wing; Saha, Timo T.; Zhang, William W.

    2012-01-01

    Future X-ray astronomy missions demand thin, light, and closely packed optics which lend themselves to segmentation of the annular mirrors and, in turn, a modular approach to the mirror design. The modular approach to X-ray Flight Mirror Assembly (FMA) design allows excellent scalability of the mirror technology to support a variety of mission sizes and science objectives. This paper describes FMA designs using slumped glass mirror segments for several X-ray astrophysics missions studied by NASA and explores the driving requirements and subsequent verification tests necessary to qualify a slumped glass mirror module for space-flight. A rigorous testing program is outlined allowing Technical Development Modules to reach technical readiness for mission implementation while reducing mission cost and schedule risk.

  15. Shape-Constrained Segmentation Approach for Arctic Multiyear Sea Ice Floe Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Brucker, Ludovic; Ivanoff, Alvaro; Tilton, James C.

    2013-01-01

    The melting of sea ice is correlated to increases in sea surface temperature and associated climatic changes. Therefore, it is important to investigate how rapidly sea ice floes melt. For this purpose, a new TempoSeg method for multitemporal segmentation of multiyear ice floes is proposed. The microwave radiometer is used to track the position of an ice floe. Then, a time series of MODIS images is created with the ice floe in the image center. The TempoSeg method is performed to segment these images into two regions: Floe and Background. First, morphological feature extraction is applied. Then, the central image pixel is marked as Floe, and shape-constrained best-merge region growing is performed. The resulting two-region map is post-filtered by applying morphological operators. We have successfully tested our method on a set of MODIS images and estimated the area of a sea ice floe as a function of time.

  16. Computer-aided segmentation and 3D analysis of in vivo MRI examinations of the human vocal tract during phonation

    NASA Astrophysics Data System (ADS)

    Wismüller, Axel; Behrends, Johannes; Hoole, Phil; Leinsinger, Gerda L.; Meyer-Baese, Anke; Reiser, Maximilian F.

    2008-03-01

    We developed, tested, and evaluated a 3D segmentation and analysis system for in vivo MRI examinations of the human vocal tract during phonation. For this purpose, six professionally trained speakers, age 22-34y, were examined using a standardized MRI protocol (1.5 T, T1w FLASH, ST 4mm, 23 slices, acq. time 21s). The volunteers performed a prolonged (>=21s) emission of sounds of the German phonemic inventory. Simultaneous audio tape recording was obtained to control correct utterance. Scans were made in axial, coronal, and sagittal planes each. Computer-aided quantitative 3D evaluation included (i) automated registration of the phoneme-specific data acquired in different slice orientations, (ii) semi-automated segmentation of oropharyngeal structures, (iii) computation of a curvilinear vocal tract midline in 3D by nonlinear PCA, (iv) computation of cross-sectional areas of the vocal tract perpendicular to this midline. For the vowels /a/,/e/,/i/,/o/,/ø/,/u/,/y/, the extracted area functions were used to synthesize phoneme sounds based on an articulatory-acoustic model. For quantitative analysis, recorded and synthesized phonemes were compared, where area functions extracted from 2D midsagittal slices were used as a reference. All vowels could be identified correctly based on the synthesized phoneme sounds. The comparison between synthesized and recorded vowel phonemes revealed that the quality of phoneme sound synthesis was improved for phonemes /a/ and /y/, if 3D instead of 2D data were used, as measured by the average relative frequency shift between recorded and synthesized vowel formants (p<0.05, one-sided Wilcoxon rank sum test). In summary, the combination of fast MRI followed by subsequent 3D segmentation and analysis is a novel approach to examine human phonation in vivo. It unveils functional anatomical findings that may be essential for realistic modelling of the human vocal tract during speech production.
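    The "average relative frequency shift" used to compare recorded and synthesized phonemes can be computed directly from formant frequencies. Below is a minimal sketch with hypothetical formant values for /a/ (the study itself compared all seven vowels and used a rank sum test across speakers); the specific numbers are illustrative assumptions, not the paper's data.

```python
import numpy as np

def mean_relative_formant_shift(recorded_hz, synthesized_hz):
    """Average relative frequency shift between recorded and
    synthesized vowel formants: mean of |f_syn - f_rec| / f_rec."""
    rec = np.asarray(recorded_hz, dtype=float)
    syn = np.asarray(synthesized_hz, dtype=float)
    return float(np.mean(np.abs(syn - rec) / rec))

# Hypothetical first two formants of /a/ (Hz):
rec = [730.0, 1090.0]
syn_3d = [750.0, 1120.0]   # synthesis from the 3D area function
syn_2d = [800.0, 1180.0]   # synthesis from the 2D midsagittal reference
print(mean_relative_formant_shift(rec, syn_3d),
      mean_relative_formant_shift(rec, syn_2d))
```

A smaller shift for the 3D-based synthesis, as in this toy example, is the direction of the improvement the study reports.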

  17. Challenges in the segmentation and analysis of X-ray Micro-CT image data

    NASA Astrophysics Data System (ADS)

    Larsen, J. D.; Schaap, M. G.; Tuller, M.; Kulkarni, R.; Guber, A.

    2014-12-01

    Pore-scale modeling of fluid flow is becoming increasingly popular among scientific disciplines. With increased computational power and technological advancements, it is now possible to create realistic models of fluid flow through highly complex porous media using a number of computational fluid dynamics techniques. One such technique that has gained popularity is the lattice Boltzmann method, owing to its relative ease of programming and its ability to capture and represent complex geometries with simple boundary conditions. In this study, lattice Boltzmann fluid models are applied to macro-porous silt loam soil imagery obtained using an industrial CT scanner. The soil imagery was segmented with six separate automated segmentation standards to reduce operator bias and provide distinction between phases. The permeability of the reconstructed samples was calculated, via Darcy's law, from lattice Boltzmann simulations of fluid flow in the samples. We attempt to validate the simulated permeability from the differing segmentation algorithms against experimental findings. Limitations arise with X-ray micro-CT image data: polychromatic X-ray CT has the potential to produce low image contrast and image artifacts. In this case, we find that the data is unsegmentable and cannot be modeled in a realistic and unbiased fashion.
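    Once a lattice Boltzmann run has converged, recovering permeability from Darcy's law is a direct back-solve. The sketch below is a hedged illustration with made-up lattice-unit values, not the study's actual parameters.

```python
def darcy_permeability(mean_velocity, viscosity, pressure_gradient):
    """Recover intrinsic permeability k from Darcy's law.

    Darcy's law: q = -(k / mu) * dP/dx, so k = -q * mu / (dP/dx).
    All quantities must be in consistent (e.g. SI or lattice) units.
    """
    return -mean_velocity * viscosity / pressure_gradient

# Hypothetical lattice-unit values from a converged LB simulation:
q = 1.2e-4        # mean superficial velocity through the sample
mu = 1.0 / 6.0    # lattice viscosity (relaxation time tau = 1)
dpdx = -2.0e-3    # applied pressure gradient driving the flow

k = darcy_permeability(q, mu, dpdx)
print(k)  # permeability in lattice units
```

Comparing this k across the six segmentation variants of the same scan is how segmentation bias would propagate into the reported permeability.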

  18. Quantitative morphological analysis of curvilinear network for microscopic image based on individual fibre segmentation (IFS).

    PubMed

    Qiu, J; Li, F-F

    2014-12-01

    Microscopic images of curvilinear fibre network structures such as the cytoskeleton are traditionally analysed by qualitative observation, which can hardly provide quantitative information on their morphological properties. However, such information contributes crucially to the understanding of important biological events, and even helps to reveal inner relations that are hard to perceive. The individual fibre segmentation (IFS)-based curvilinear structure detector proposed in this study can identify each individual fibre in the network, as well as connections between different fibres. Quantitative information on each individual fibre, including length, orientation and position, can be extracted, as can the connecting modes in the fibre network, such as bifurcation, intersection and overlap. The distribution of fibres with different morphological properties is also presented. No manual intervention or subjective judgement is required in the analysis. Both synthesized and experimental microscopic images have verified that the detector is capable of segmenting curvilinear networks at the subcellular level with strong noise immunity. The proposed detector is finally applied to a morphological study of the cytoskeleton. It is believed that the IFS-based curvilinear structure detector can greatly enhance our understanding of the biological images generated from the vast number of biological experiments. PMID:25243901

  19. A computer program for comprehensive ST-segment depression/heart rate analysis of the exercise ECG test.

    PubMed

    Lehtinen, R; Vänttinen, H; Sievänen, H; Malmivuo, J

    1996-06-01

    The ST-segment depression/heart rate (ST/HR) analysis has been found to improve the diagnostic accuracy of the exercise ECG test in detecting myocardial ischemia. Recently, three different continuous diagnostic variables based on the ST/HR analysis have been introduced: the ST/HR slope, the ST/HR index and the ST/HR hysteresis. The latter utilises both the exercise and recovery phases of the exercise ECG test, whereas the two former are based on the exercise phase only. This article presents a computer program which not only calculates the above three diagnostic variables but also plots full diagrams of ST-segment depression against heart rate during both the exercise and recovery phases for each ECG lead from given ST/HR data. The program can be used in exercise ECG diagnosis in daily clinical practice, provided that the ST/HR data from the ECG measurement system can be linked to the program. At present, the main purpose of the program is to provide clinical and medical researchers with a practical tool for comprehensive clinical evaluation and development of the ST/HR analysis. PMID:8835841
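    Of the three variables, the ST/HR index is the simplest to state: maximal ST depression divided by the total exercise-induced heart-rate change. A minimal sketch with hypothetical single-lead data follows (the program described above additionally computes the ST/HR slope and hysteresis, which need regression over the full exercise and recovery curves).

```python
import numpy as np

def st_hr_index(st_depression_uv, heart_rate_bpm):
    """ST/HR index: maximal ST depression divided by the total
    exercise-induced heart-rate increase (uV per bpm).

    Inputs are parallel samples over the exercise phase; the first
    heart-rate sample is taken as the resting value.
    """
    st = np.asarray(st_depression_uv, dtype=float)
    hr = np.asarray(heart_rate_bpm, dtype=float)
    delta_hr = hr.max() - hr[0]
    if delta_hr <= 0:
        raise ValueError("heart rate did not increase during exercise")
    return float(st.max() / delta_hr)

# Hypothetical single-lead exercise recording (uV of depression, bpm):
st = [0.0, 20.0, 60.0, 110.0, 160.0]
hr = [70.0, 90.0, 110.0, 130.0, 150.0]
print(st_hr_index(st, hr))  # 160 uV / 80 bpm = 2.0 uV/bpm
```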

  20. Three-dimensional volume analysis of vasculature in engineered tissues

    NASA Astrophysics Data System (ADS)

    YousefHussien, Mohammed; Garvin, Kelley; Dalecki, Diane; Saber, Eli; Helguera, María.

    2013-01-01

    Three-dimensional textural and volumetric image analysis holds great potential for understanding the image data produced by multi-photon microscopy. In this paper, an algorithm that quantitatively analyzes the texture and morphology of vasculature in engineered tissues is proposed. The investigated 3D artificial tissues consist of Human Umbilical Vein Endothelial Cells (HUVEC) embedded in collagen and exposed to two regimes of ultrasound standing wave fields under different pressure conditions. Textural features were evaluated using the normalized Gray-Level Co-occurrence Matrix (GLCM) combined with Gray-Level Run Length Matrix (GLRLM) analysis. To minimize error resulting from any possible volume rotation and to provide a comprehensive textural analysis, an average over nine GLCM and GLRLM orientations is used. To evaluate volumetric features, automatic thresholding at the gray-level mean value is utilized. Results show that our analysis is able to differentiate among the exposed samples, owing to morphological changes induced by the standing wave fields. Furthermore, we demonstrate that providing more textural parameters than are currently reported in the literature enhances the quantitative understanding of the heterogeneity of artificial tissues.
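    As a minimal 2D sketch of the averaged-orientation co-occurrence analysis described above (the study works on 3D volumes with nine orientations; four 2D offsets are used here), the following builds a normalized GLCM per pixel offset and averages it over orientations. The tiny test image and the contrast feature are illustrative choices, not the paper's data.

```python
import numpy as np

def glcm(img, dy, dx, levels):
    """Normalized gray-level co-occurrence matrix for one offset (dy, dx)."""
    h, w = img.shape
    # Paired views: a[i, j] co-occurs with b[i, j] = img[i + dy, j + dx].
    a = img[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = img[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1)
    return P / P.sum()

def contrast(P):
    """Haralick contrast: sum of P[i, j] * (i - j)^2."""
    i, j = np.indices(P.shape)
    return float((P * (i - j) ** 2).sum())

img = np.array([[0, 0, 1],
                [0, 0, 1],
                [0, 2, 2]])
# Average the GLCM over four 2D orientations to reduce sensitivity to
# rotation, mirroring the averaged-orientation scheme in the abstract.
P_avg = np.mean([glcm(img, dy, dx, 3)
                 for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]], axis=0)
print(contrast(glcm(img, 0, 1, 3)))  # 1.0 for the horizontal offset
```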

  1. Flow Analysis on a Limited Volume Chilled Water System

    SciTech Connect

    Zheng, Lin

    2012-07-31

    LANL currently has a limited-volume chilled water system for use in a glove box, but the system needs to be updated. Before we start building our new system, a flow analysis is needed to ensure that there are no high flow rates, extreme pressures, or other hazards in the system. In this project the piping system is extremely important to us because it directly affects the overall design of the entire system. The primary components of the chilled water piping system are shown in the design. They include the pipes themselves (perhaps of more than one diameter), the various fittings used to connect the individual pipes into the desired system, the flow rate control devices (valves), and the pumps that add energy to the fluid. Even the simplest pipe systems are actually quite complex when viewed in terms of rigorous analytical considerations. I used an 'exact' analysis and dimensional-analysis considerations combined with experimental results for this project. When 'real-world' effects are important (such as viscous effects in pipe flows), it is often difficult or impossible to use theoretical methods alone to obtain the desired results. A judicious combination of experimental data with theoretical considerations and dimensional analysis is needed in order to reduce risks to an acceptable level.
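    For the judicious combination of theory and experiment mentioned above, the standard workhorse is the Darcy-Weisbach equation with an empirical friction factor. Below is a sketch using the Haaland approximation to the Colebrook correlation, with illustrative dimensions for a small chilled-water loop; the numbers are assumptions, not LANL's actual system.

```python
import math

def friction_factor(re, rel_rough):
    """Darcy friction factor: 64/Re for laminar flow, else the Haaland
    approximation to the Colebrook equation for turbulent flow."""
    if re < 2300:
        return 64.0 / re
    return (-1.8 * math.log10((rel_rough / 3.7) ** 1.11 + 6.9 / re)) ** -2

def head_loss(length, diam, vel, nu, rough, g=9.81):
    """Darcy-Weisbach head loss h_f = f * (L/D) * V^2 / (2 g), in metres."""
    re = vel * diam / nu
    f = friction_factor(re, rough / diam)
    return f * (length / diam) * vel ** 2 / (2.0 * g)

# Hypothetical loop: 30 m of 50 mm drawn tubing carrying water at 2 m/s
hf = head_loss(length=30.0, diam=0.05, vel=2.0, nu=1.0e-6, rough=1.5e-6)
print(hf)  # head loss in metres of water
```

Summing such losses over pipes and fittings, then matching them against the pump curve, is the core of the flow analysis the abstract describes.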

  2. Oil-spill risk analysis: Cook inlet outer continental shelf lease sale 149. Volume 2: Conditional risk contour maps of seasonal conditional probabilities. Final report

    SciTech Connect

    Johnson, W.R.; Marshall, C.F.; Anderson, C.M.; Lear, E.M.

    1994-08-01

    The Federal Government has proposed to offer Outer Continental Shelf (OCS) lands in Cook Inlet for oil and gas leasing. Because oil spills may occur from activities associated with offshore oil production, the Minerals Management Service conducts a formal risk assessment. In evaluating the significance of accidental oil spills, it is important to remember that the occurrence of such spills is fundamentally probabilistic. The effects of oil spills that could occur during oil and gas production must be considered. This report summarizes results of an oil-spill risk analysis conducted for the proposed Cook Inlet OCS Lease Sale 149. The objective of this analysis was to estimate relative risks associated with oil and gas production for the proposed lease sale. To aid the analysis, conditional risk contour maps of seasonal conditional probabilities of spill contact were generated for each environmental resource or land segment in the study area. This aspect is discussed in this volume of the two volume report.

  3. Comparative analysis of operational forecasts versus actual weather conditions in airline flight planning, volume 3

    NASA Technical Reports Server (NTRS)

    Keitz, J. F.

    1982-01-01

    The impact of more timely and accurate weather data on airline flight planning, with the emphasis on fuel savings, is studied. This volume of the report discusses the results of Task 3 of the four major tasks included in the study. Task 3 compares flight plans developed on the Suitland forecast with actual data observed by the aircraft (and averaged over 10-degree segments). The results show that the average difference between the forecast and observed wind speed is 9 kts without considering direction, and the average difference in the component of the forecast wind parallel to the direction of the observed wind is 13 kts, both indicating that the Suitland forecast underestimates the wind speeds. The Root Mean Square (RMS) vector error is 30.1 kts. The average absolute difference in direction between the forecast and observed wind is 26 degrees, and the temperature difference is 3 degrees Centigrade. These results indicate that the forecast model, as well as the verifying analysis used to develop comparison flight plans in Tasks 1 and 2, is a limiting factor, and that the average potential fuel savings or penalty is up to 3.6 percent depending on the direction of flight.
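    The RMS vector error quoted above combines speed and direction errors into a single vector magnitude. A sketch of the computation on hypothetical 10-degree-segment averages (the speeds and directions below are made up, not Task 3 data):

```python
import numpy as np

def rms_vector_error(spd_f, dir_f, spd_o, dir_o):
    """RMS magnitude of the forecast-minus-observed wind vector error.

    Speeds in knots, directions in degrees; both are converted to
    (u, v) components before differencing.
    """
    th_f, th_o = np.radians(dir_f), np.radians(dir_o)
    uf, vf = spd_f * np.sin(th_f), spd_f * np.cos(th_f)
    uo, vo = spd_o * np.sin(th_o), spd_o * np.cos(th_o)
    err2 = (uf - uo) ** 2 + (vf - vo) ** 2
    return float(np.sqrt(err2.mean()))

# Hypothetical 10-degree-segment averages (kts, deg):
spd_f = np.array([40.0, 55.0, 30.0])
dir_f = np.array([270.0, 250.0, 300.0])
spd_o = np.array([49.0, 63.0, 41.0])
dir_o = np.array([260.0, 245.0, 310.0])
print(rms_vector_error(spd_f, dir_f, spd_o, dir_o))
```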

  4. Style, content and format guide for writing safety analysis documents. Volume 1, Safety analysis reports for DOE nuclear facilities

    SciTech Connect

    Not Available

    1994-06-01

    The purpose of Volume 1 of this 4-volume style guide is to furnish guidelines on writing and publishing Safety Analysis Reports (SARs) for DOE nuclear facilities at Sandia National Laboratories. The scope of Volume 1 encompasses not only the general guidelines for writing and publishing, but also the prescribed topics/appendices contents along with examples from typical SARs for DOE nuclear facilities.

  5. Texture analysis of automatic graph cuts segmentations for detection of lung cancer recurrence after stereotactic radiotherapy

    NASA Astrophysics Data System (ADS)

    Mattonen, Sarah A.; Palma, David A.; Haasbeek, Cornelis J. A.; Senan, Suresh; Ward, Aaron D.

    2015-03-01

    Stereotactic ablative radiotherapy (SABR) is a treatment for early-stage lung cancer with local control rates comparable to surgery. After SABR, benign radiation induced lung injury (RILI) results in tumour-mimicking changes on computed tomography (CT) imaging. Distinguishing recurrence from RILI is a critical clinical decision determining the need for potentially life-saving salvage therapies whose high risks in this population dictate their use only for true recurrences. Current approaches do not reliably detect recurrence within a year post-SABR. We measured the detection accuracy of texture features within automatically determined regions of interest, with the only operator input being the single line segment measuring tumour diameter, normally taken during the clinical workflow. Our leave-one-out cross validation on images taken 2-5 months post-SABR showed robustness of the entropy measure, with classification error of 26% and area under the receiver operating characteristic curve (AUC) of 0.77 using automatic segmentation; the results using manual segmentation were 24% and 0.75, respectively. AUCs for this feature increased to 0.82 and 0.93 at 8-14 months and 14-20 months post SABR, respectively, suggesting even better performance nearer to the date of clinical diagnosis of recurrence; thus this system could also be used to support and reinforce the physician's decision at that time. Based on our ongoing validation of this automatic approach on a larger sample, we aim to develop a computer-aided diagnosis system which will support the physician's decision to apply timely salvage therapies and prevent patients with RILI from undergoing invasive and risky procedures.
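    The entropy measure found robust above is a first-order histogram statistic: high for heterogeneous (recurrence-like) texture, zero for a uniform region. A minimal sketch on synthetic patches follows; it is illustrative only, since the study computed features within automatically determined ROIs on CT.

```python
import numpy as np

def patch_entropy(patch, bins=32):
    """First-order entropy (bits) of the gray-level histogram of an ROI."""
    hist, _ = np.histogram(patch, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
noisy_patch = rng.integers(0, 256, size=(32, 32))  # heterogeneous texture
flat_patch = np.full((32, 32), 128)                # homogeneous region
print(patch_entropy(noisy_patch), patch_entropy(flat_patch))
```

In a classifier such as the one described, this scalar (per ROI) would be thresholded or fed to leave-one-out cross-validation alongside other texture features.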

  6. Texture-based segmentation and analysis of emphysema depicted on CT images

    NASA Astrophysics Data System (ADS)

    Tan, Jun; Zheng, Bin; Wang, Xingwei; Lederman, Dror; Pu, Jiantao; Sciurba, Frank C.; Gur, David; Leader, J. Ken

    2011-03-01

    In this study we present a two-step texture-based method for segmenting emphysema depicted on CT examinations. In step 1, fractal-dimension-based texture feature extraction is used to initially detect base regions of emphysema, and a threshold is applied to the texture result image to obtain the initial base regions. In step 2, the base regions are evaluated pixel-by-pixel using a method that considers the variance change incurred by adding a pixel to the base region, in an effort to refine the region boundaries. Visual inspection revealed a reasonable segmentation of the emphysema regions. There was a strong correlation between lung function (FEV1%, FEV1/FVC, and DLCO%) and the fraction of emphysema computed using the texture-based method: -0.433, -0.629, and -0.527, respectively. The texture-based method produced more homogeneous emphysematous regions compared with simple thresholding, especially for large bullae, which can appear as speckled regions in the threshold approach. In the texture-based method, single isolated pixels are considered emphysema only if neighboring pixels meet certain criteria, which supports the idea that a single isolated pixel may not be sufficient evidence that emphysema is present. One strength of our texture-based approach to emphysema segmentation is that it goes beyond existing approaches, which typically extract a single texture feature or group of texture features and analyze the features individually. We focus on first identifying potential regions of emphysema and then refining the boundaries of the detected regions based on texture patterns.
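    The fractal-dimension texture feature in step 1 is commonly estimated by box counting. Below is a sketch of a box-counting estimator on a square binary mask, a stand-in for a thresholded texture map; the 64x64 filled square is only a sanity check (its dimension should come out near 2), not the study's CT data.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a square binary mask by box
    counting: fit the slope of log N(s) versus log(1/s)."""
    n = mask.shape[0]  # assumes a square mask
    counts = []
    for s in sizes:
        # Tile the mask into s-by-s boxes and count boxes containing
        # at least one foreground pixel.
        m = mask[:n - n % s, :n - n % s]
        view = m.reshape(m.shape[0] // s, s, -1, s)
        counts.append(view.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)),
                          np.log(counts), 1)
    return float(slope)

mask = np.ones((64, 64), dtype=bool)  # a filled square: dimension ~2
print(box_counting_dimension(mask))
```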

  7. Segmentation of acute pyelonephritis area on kidney SPECT images using binary shape analysis

    NASA Astrophysics Data System (ADS)

    Wu, Chia-Hsiang; Sun, Yung-Nien; Chiu, Nan-Tsing

    1999-05-01

    Acute pyelonephritis is a serious disease in children that may result in irreversible renal scarring. The ability to localize the site of urinary tract infection and the extent of acute pyelonephritis has considerable clinical importance. In this paper, we segment the acute pyelonephritis area from kidney SPECT images. A two-step algorithm is proposed. First, the original images are converted into binary versions by automatic thresholding. Then the acute pyelonephritis areas are located by finding convex deficiencies in the obtained binary images. This work gives important diagnostic information to physicians and improves the quality of medical care for children with acute pyelonephritis.
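    The abstract does not name its automatic thresholding method; Otsu's between-class-variance criterion is a common choice and serves here as an illustrative stand-in for the first step. The toy bimodal "image" is an assumption for demonstration.

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Automatic threshold maximizing between-class variance (Otsu)."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    cum_p = np.cumsum(p)                       # class-0 probability w0(t)
    cum_mean = np.cumsum(p * np.arange(levels))
    mean_total = cum_mean[-1]
    best_t, best_var = 0, -1.0
    for t in range(levels - 1):
        w0, w1 = cum_p[t], 1.0 - cum_p[t]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t] / w0
        m1 = (mean_total - cum_mean[t]) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy data: dim background (~40) and bright kidney (~200)
img = np.concatenate([np.full(500, 40), np.full(100, 200)])
t = otsu_threshold(img)
binary = img > t
print(t, binary.sum())  # threshold between the modes; 100 bright pixels
```

The convex deficiencies of the resulting binary regions (convex hull minus region) would then mark candidate pyelonephritis areas.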

  8. ANALYSIS OF A HIGH ORDER FINITE VOLUME SCHEME FOR THE VLASOV-POISSON SYSTEM

    E-print Network

    Filbet, Francis

    We propose a second order finite volume scheme to discretize the one-dimensional Vlasov-Poisson system. Keywords: finite volume schemes, Vlasov-Poisson system, weak BV estimate. AMS subject classifications: 65M12, 82D10.

  9. Analysis of cell centred finite volume methods for incompressible fluid flows

    E-print Network

    Herbin, Raphaèle

    These notes focus on the analysis of cell centred finite volume schemes for incompressible fluid flows. We first recall the cell centred finite volume scheme for convection-diffusion equations, and give the mathematical tools used in its analysis. Contents: 1 Introduction; 1.1 Finite volume schemes for conservation laws.

  11. On the development of weighting factors for ballast ranking prioritization & development of the relationship and rate of defective segments based on volume of missing ballast

    NASA Astrophysics Data System (ADS)

    Cronin, John

    This thesis explores the effects of missing ballast on track behavior and degradation. As ballast is an integral part of the track structure, the hypothesized effect of missing ballast is that defects will be more common which in turn leads to more derailments. In order to quantify the volume of missing ballast, remote sensing technologies were used to provide an accurate profile of the ballast. When the existing profile is compared to an idealized profile, the area of missing ballast can be computed. The area is then subdivided into zones which represent the area in which the ballast performs a key function in the track structure. These areas are then extrapolated into the volume of missing ballast for each zone based on the distance between collected profiles. In order to emphasize the key functions that the zones previously created perform, weighting factors were developed based on common risk-increasing hazards, such as curves and heavy axle loads, which are commonly found on railways. These weighting factors are applied to the specified zones' missing ballast volume when such a hazard exists in that segment of track. Another set of weighting factors were developed to represent the increased risk, or preference for lower risk, for operational factors such as the transport of hazardous materials or for being a key route. Through these weighting factors, ballast replenishment can be prioritized to focus on the areas that pose a higher risk of derailments and their associated costs. For the special cases where the risk or aversion to risk comes from what is being transported, such as the case with hazardous materials or passengers, an economic risk assessment was completed in order to quantify the risk associated with their transport. This economic risk assessment looks at the increased costs associated with incidents that occur and how they compare to incidents which do not directly involve the special cargos. 
In order to provide support for the use of the previously developed weightings, as well as to quantify the actual impact that missing ballast has on the rate of geometry defects, analyses quantifying the risk of missing ballast were performed. In addition to quantifying the rate of defects, analyses were performed that examined the impact associated with curved track, how the location of missing ballast affects the rate of geometry defects, and how the combination of the two compared with the previous analyses. Through this research, the relationship between the volume of missing ballast and ballast-related defects has been identified and quantified. This relationship is positive for the aggregate of all ballast-related defects but does not always hold for individual defects, which occasionally exhibit unique behavior. For the non-ballast defects, a relationship between missing ballast and their rate of occurrence did not always appear to exist. The impact of curves was apparent: the rate of defects on curved track was similar to, or exceeded, the rate for tangent track. For the analyses that considered the location of missing ballast in the crib or shoulder, the results were quite similar to the previous analyses. The development, application, and improvement of a risk-based ballast maintenance prioritization system provides a relatively low-cost and effective method to improve operational safety for all railroads.
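    The weighting scheme described in the thesis reduces, in its simplest form, to a weighted sum over the missing-ballast volume in each functional zone, scaled by operational factors. The sketch below uses entirely made-up volumes and weights, purely to show the arithmetic; the thesis's actual factors and zone definitions differ.

```python
def ballast_priority(zone_volumes, zone_weights, operational_weight=1.0):
    """Priority score for a track segment: missing-ballast volume per
    functional zone times that zone's hazard weighting, summed, then
    scaled by an operational factor (e.g. hazmat traffic, key route)."""
    base = sum(v * w for v, w in zip(zone_volumes, zone_weights))
    return base * operational_weight

# Hypothetical segment: crib, shoulder, and slope zones (m^3 missing),
# with illustrative hazard weights for a curved, heavy-axle-load
# segment carrying hazardous materials (all numbers are assumptions):
score = ballast_priority([2.0, 1.5, 0.5], [1.5, 1.2, 1.0],
                         operational_weight=1.3)
print(score)
```

Ranking segments by such a score is what lets replenishment target the highest-risk track first.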

  12. Automatic segmentation and identification of solitary pulmonary nodules on follow-up CT scans based on local intensity structure analysis and non-rigid image registration

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Naito, Hideto; Nakamura, Yoshihiko; Kitasaka, Takayuki; Rueckert, Daniel; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

    2011-03-01

    This paper presents a novel method that can automatically segment solitary pulmonary nodules (SPNs) and match the segmented SPNs across follow-up thoracic CT scans. Because of its clinical importance, a physician needs to find SPNs on chest CT and observe their progress over time in order to diagnose whether they are benign or malignant, or to observe the effect of chemotherapy on malignant ones using follow-up data. However, the enormous number of CT images places a large burden on the physician. To lighten this burden, we developed a method for automatically segmenting SPNs and assisting their observation in follow-up CT scans. The SPNs in an input 3D thoracic CT scan are segmented based on local intensity structure analysis and information on the pulmonary blood vessels. To compensate for lung deformation, we co-register follow-up CT scans using an affine and a non-rigid registration. Finally, matches between detected nodules are found from the registered CT scans based on a similarity measure. We applied these methods to three patients comprising 14 thoracic CT scans. Our segmentation method detected 96.7% of the SPNs in the whole images, and the nodule matching method found 83.3% of the correspondences among the segmented SPNs. The results also show that our matching method is robust to the growth of SPNs, including integration/separation and appearance/disappearance. These results confirm that our method is feasible for segmenting and identifying SPNs on follow-up CT scans.
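    The abstract does not specify its similarity measure; normalized cross-correlation (NCC) is one common choice after registration, and the sketch below uses it on synthetic patches as a hedged illustration of the matching step, not the paper's exact pipeline.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size volumes."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

def match_nodule(patch, candidates):
    """Index and score of the candidate most similar to `patch`."""
    scores = [ncc(patch, c) for c in candidates]
    return int(np.argmax(scores)), max(scores)

rng = np.random.default_rng(1)
nodule = rng.normal(size=(9, 9, 9))          # baseline-scan patch
candidates = [rng.normal(size=(9, 9, 9)),    # unrelated structure
              nodule + 0.1 * rng.normal(size=(9, 9, 9)),  # same nodule, slight change
              rng.normal(size=(9, 9, 9))]    # unrelated structure
idx, score = match_nodule(nodule, candidates)
print(idx)  # 1: the perturbed copy of the nodule
```

Because NCC is invariant to intensity offset and scale, a match can survive modest growth or contrast change between scans, which is the robustness property the abstract reports.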

  13. Coal gasification systems engineering and analysis. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Feasibility analyses and systems engineering studies for a 20,000 tons per day medium Btu (MBG) coal gasification plant to be built by TVA in Northern Alabama were conducted. Major objectives were as follows: (1) provide design and cost data to support the selection of a gasifier technology and other major plant design parameters, (2) provide design and cost data to support alternate product evaluation, (3) prepare a technology development plan to address areas of high technical risk, and (4) develop schedules, PERT charts, and a work breakdown structure to aid in preliminary project planning. Volume one contains a summary of gasification system characterizations. Five gasification technologies were selected for evaluation: Koppers-Totzek, Texaco, Lurgi Dry Ash, Slagging Lurgi, and Babcock and Wilcox. A summary of the trade studies and cost sensitivity analysis is included.

  14. Risk for Adjacent Segment and Same Segment Reoperation After Surgery for Lumbar Stenosis: A subgroup analysis of the Spine Patient Outcomes Research Trial (SPORT)

    PubMed Central

    Radcliff, Kris; Curry, Patrick; Hilibrand, Alan; Kepler, Chris; Lurie, Jon; Zhao, Wenyan; Albert, Todd; Weinstein, James

    2013-01-01

    Study Design Subgroup analysis of a prospective, randomized database. Objective The purpose of this study was to compare surgical or patient characteristics, such as fusion, instrumentation, or obesity, to identify whether these factors were associated with increased risk of reoperation for spinal stenosis. This prognostic information would be valuable to patients, healthcare professionals, and society as strategies to reduce reoperation, such as motion preservation, are developed. Summary of Background Data Reoperation due to recurrence of index-level pathology or adjacent segment disease is a common clinical problem. Despite multiple studies on the incidence of reoperation, there have been few comparative studies establishing risk factors for reoperation after spinal stenosis surgery. The hypothesis of this subgroup analysis was that lumbar fusion or particular patient characteristics, such as obesity, would render patients with lumbar stenosis more susceptible to reoperation at the index or adjacent levels. Methods The study population combined the randomized and observational cohorts enrolled in SPORT for treatment of spinal stenosis. The surgically treated patients were stratified according to those who had reoperation (n=54) or no reoperation (n=359). Outcome measures were assessed at baseline, 1 year, 2 years, 3 years, and 4 years. The difference in improvement between those who had reoperation and those who did not was determined at each follow-up period. Results Of the 413 patients who underwent surgical treatment for spinal stenosis, 54 patients had a reoperation within four years. At baseline, there were no significant differences in demographic characteristics or clinical outcome scores between reoperation and non-reoperation groups. 
Furthermore, between groups there were no differences in the severity of symptoms, obesity, physical examination signs, levels of stenosis, location of stenosis, stenosis severity, levels of fusion, levels of laminectomy, levels decompressed, operation time, intraoperative or postoperative complications. There was an increased percentage of patients with duration of symptoms greater than 12 months in the reoperation group (56% reoperation vs 36% no-reoperation, p<0.008). At final follow-up, there was significantly less improvement in the outcome of the reoperation group in SF36 PF (14.4 vs 22.6, p < 0.05), ODI (−12.4 vs. −21.1, p < 0.01), and Sciatica Bothersomeness Index (−5 vs −8.1, p < 0.006). Conclusion Lumbar fusion and instrumentation were not associated with increased rate of reoperation at index or adjacent levels compared to nonfusion techniques. The only specific risk factor for reoperation after treatment of spinal stenosis was duration of pretreatment symptoms > 12 months. The overall incidence of reoperations for spinal stenosis surgery was 13% and reoperations were equally distributed between index and adjacent lumbar levels. Reoperation may be related to the natural history of spinal degenerative disease. PMID:23154835

  15. Phylogenetic analysis, genomic diversity and classification of M class gene segments of turkey reoviruses.

    PubMed

    Mor, Sunil K; Marthaler, Douglas; Verma, Harsha; Sharafeldin, Tamer A; Jindal, Naresh; Porter, Robert E; Goyal, Sagar M

    2015-03-23

    From 2011 to 2014, 13 turkey arthritis reoviruses (TARVs) were isolated from cases of swollen hock joints in 2-18-week-old turkeys. In addition, two isolates from similar cases of turkey arthritis were received from another laboratory. Eight turkey enteric reoviruses (TERVs) isolated from fecal samples of turkeys were also used for comparison. The aims of this study were to characterize turkey reovirus (TRV) based on complete M class genome segments and to determine genetic diversity within TARVs in comparison to TERVs and chicken reoviruses (CRVs). Nucleotide (nt) cutoff values of 84%, 83% and 85% for the M1, M2 and M3 gene segments were proposed and used for genotype classification, generating 5, 7, and 3 genotypes, respectively. Using these nt cutoff values, we propose M class genotype constellations (GCs) for avian reoviruses. Of the seven GCs, GC1 and GC3 were shared between the TARVs and TERVs, indicating possible reassortment between turkey and chicken reoviruses. The TARVs and TERVs were divided into three GCs, and GC2 was unique to TARVs and TERVs. The proposed new GC approach should be useful in identifying reassortant viruses, which may ultimately be used in the design of a universal vaccine against both chicken and turkey reoviruses. PMID:25655814
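    Genotype classification by a nucleotide-identity cutoff can be sketched as single-linkage grouping over a pairwise identity matrix; two sequences share a genotype if they are connected by identities at or above the cutoff. This is a minimal illustration of the idea, not the authors' pipeline.

```python
import numpy as np

def genotype_clusters(identity, cutoff):
    """Assign genotype labels from an (N,N) symmetric matrix of percent
    nucleotide identities: connected components of the graph whose edges
    are pairs with identity >= cutoff (e.g. 84 for the M1 segment)."""
    n = identity.shape[0]
    labels = [-1] * n
    current = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        stack = [seed]            # depth-first flood fill of one component
        labels[seed] = current
        while stack:
            i = stack.pop()
            for j in range(n):
                if labels[j] == -1 and identity[i, j] >= cutoff:
                    labels[j] = current
                    stack.append(j)
        current += 1
    return labels
```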

  16. Synfuel program analysis. Volume 1: Procedures-capabilities

    NASA Astrophysics Data System (ADS)

    Muddiman, J. B.; Whelan, J. W.

    1980-07-01

    The analytic procedures and capabilities developed by Resource Applications (RA) for examining the economic viability, public costs, and national benefits of alternative synthetic fuel projects are described. This volume is intended for Department of Energy (DOE) and Synthetic Fuel Corporation (SFC) program management personnel and includes a general description of the costing, venture, and portfolio models with enough detail for the reader to be able to specify cases and interpret outputs. It contains an explicit description (with examples) of the types of results which can be obtained when applied to the analysis of individual projects; the analysis of input uncertainty, i.e., risk; and the analysis of portfolios of such projects, including varying technology mixes and buildup schedules. The objective is to obtain, on the one hand, comparative measures of private investment requirements and expected returns (under differing public policies) as they affect the private decision to proceed, and, on the other, public costs and national benefits as they affect public decisions to participate (in what form, in what areas, and to what extent).

  17. Evaluation of automated brain MR image segmentation and volumetry methods.

    PubMed

    Klauschen, Frederick; Goldman, Aaron; Barra, Vincent; Meyer-Lindenberg, Andreas; Lundervold, Arvid

    2009-04-01

    We compare three widely used brain volumetry methods available in the software packages FSL, SPM5, and FreeSurfer and evaluate their performance using simulated and real MR brain data sets. We analyze the accuracy of gray and white matter volume measurements and their robustness against changes of image quality using the BrainWeb MRI database. These images are based on "gold-standard" reference brain templates. This allows us to assess between-segmenter (same data set, different method) and within-segmenter (same method, varying image quality) comparability, and for both we find pronounced variations in segmentation results for gray and white matter volumes. The calculated volumes deviate by up to >10% from the reference values for gray and white matter, depending on method and image quality. Sensitivity was best for SPM5; volumetric accuracy for gray and white matter was similar in SPM5 and FSL and better than in FreeSurfer. For BrainWeb data of constant image quality, FSL showed the highest stability for white matter (<5%) and FreeSurfer for gray matter (6.2%). Between-segmenter comparisons show discrepancies of up to >20% for the simulated data and 24% on average for the real data sets, whereas within-method performance analysis uncovered volume differences of up to >15%. Since these discrepancies reach the same order of magnitude as volume changes observed in disease, they limit the usability of the segmentation methods for following volume changes in individual patients over time and should be taken into account during the planning and analysis of brain volume studies. PMID:18537111
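    The percent deviations quoted above are simple relative errors; a minimal sketch of the two kinds of quantity involved (assumed definitions, since the paper may normalize differently):

```python
def percent_volume_error(measured_ml, reference_ml):
    """Signed percent deviation of a measured tissue volume from the
    gold-standard reference (the 'deviates up to >10%' figures above)."""
    return 100.0 * (measured_ml - reference_ml) / reference_ml

def between_method_discrepancy(vol_a, vol_b):
    """Percent difference between two methods' volumes for the same data
    set, normalized by their mean (a between-segmenter comparison)."""
    return 100.0 * abs(vol_a - vol_b) / ((vol_a + vol_b) / 2.0)
```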

  18. Geometrical principal component analysis of planar-segments of the three-channel Lissajous' trajectory of human auditory brain stem evoked potentials.

    PubMed

    Pratt, H; Har'el, Z; Golos, E

    1986-05-01

    Three-Channel Lissajous' Trajectories (3CLTs) of Auditory Brain Stem Evoked Potentials (ABEP) were obtained from 15 normal humans. Planar-segments of 3CLT were identified and the orientations of the first two geometrical principal components, which interact to produce the planar-segments, were calculated. Each principal component's orientation in voltage space was quantified by its coefficients (A, B and C). Intersubject variability of these orientations was comparable to the variability of plane orientations. The principal components of planar-segments can indicate the type of generator activity that is involved in the formation of planar-segments. The results of this analysis indicate that planarity of each 3CLT component is produced by the interaction of simultaneous multiple generators, or by a single synchronous generator which changes its orientation. The coefficients of these principal components may complement plane coefficients as quantitative indices of 3CLT of ABEP. PMID:3721605
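    Geometrical principal components of a planar 3CLT segment can be obtained from an SVD of the centered voltage-space points: the first two right singular vectors span the best-fit plane, and their coefficients correspond to the orientation coefficients (A, B, C) discussed above. A minimal sketch, not the authors' implementation:

```python
import numpy as np

def planar_segment_pca(xyz):
    """Geometrical PCA of a 3-channel trajectory segment.

    `xyz` is an (N,3) array of voltage-space samples (channels X, Y, Z).
    Returns (components, explained): components[0] and components[1] span
    the best-fit plane, components[2] is the plane normal, and `explained`
    is the variance fraction per component. A planar segment concentrates
    nearly all variance in the first two components.
    """
    centered = xyz - xyz.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    var = s ** 2
    return vt, var / var.sum()
```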

  19. Comparison of CLASS and ITK-SNAP in segmentation of urinary bladder in CT urography

    NASA Astrophysics Data System (ADS)

    Cha, Kenny; Hadjiiski, Lubomir; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.; Zhou, Chuan

    2014-03-01

    We are developing a computerized method for bladder segmentation in CT urography (CTU) for computer-aided diagnosis of bladder cancer. We have developed a Conjoint Level set Analysis and Segmentation System (CLASS) consisting of four stages: preprocessing and initial segmentation, 3D and 2D level set segmentation, and post-processing. When the bladder contains regions filled with intravenous (IV) contrast and regions without contrast, CLASS segments the noncontrast (NC) region and the contrast-filled (C) region separately and conjoins the contours. In this study, we compared the performance of CLASS to ITK-SNAP 2.4, a publicly available software application for segmentation of structures in 3D medical images. ITK-SNAP performs segmentation by using the edge-based level set on preprocessed images. The level sets were initialized by manually placing a sphere at the boundary between the C and NC parts for bladders with both regions, and in the middle of the bladder for those with only a C or NC region. Level set parameters and the number of iterations were chosen after experimentation with bladder cases. Segmentation performances were compared using 30 randomly selected bladders. 3D hand-segmented contours were obtained as the reference standard, and computerized segmentation accuracy was evaluated in terms of the average volume intersection %, average % volume error, average absolute % volume error, average minimum distance, and average Jaccard index. For CLASS, the values for these performance metrics were 79.0±8.2%, 16.1±16.3%, 19.9±11.1%, 3.5±1.3 mm, and 75.7±8.4%, respectively. For ITK-SNAP, the corresponding values were 78.8±8.2%, 8.3±33.1%, 24.2±23.7%, 5.2±2.6 mm, and 71.0±15.4%, respectively. CLASS on average performed better and exhibited less variation than ITK-SNAP for bladder segmentation.
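    The overlap metrics used in the comparison above can be sketched as follows; a minimal illustration over binary voxel masks with assumed sign conventions, not the CLASS or ITK-SNAP code (the distance metric is omitted):

```python
import numpy as np

def segmentation_metrics(auto, ref):
    """Volume-overlap metrics for a binary computerized segmentation
    `auto` against a hand-segmented reference `ref` (boolean arrays).

    Returns (volume intersection %, signed % volume error, Jaccard %).
    The sign convention for volume error is an assumption here.
    """
    auto = np.asarray(auto).astype(bool)
    ref = np.asarray(ref).astype(bool)
    inter = np.logical_and(auto, ref).sum()
    union = np.logical_or(auto, ref).sum()
    v_auto, v_ref = auto.sum(), ref.sum()
    return (100.0 * inter / v_ref,
            100.0 * (v_ref - v_auto) / v_ref,
            100.0 * inter / union)
```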

  20. 3D multiscale segmentation and morphological analysis of X-ray microtomography from cold-sprayed coatings.

    PubMed

    Gillibert, L; Peyrega, C; Jeulin, D; Guipont, V; Jeandin, M

    2012-11-01

    X-ray microtomography of cold-sprayed coatings brings new insight into this deposition process. A noise-tolerant segmentation algorithm is introduced, based on the combination of two segmentations: a deterministic multiscale segmentation and a stochastic segmentation. The stochastic approach uses random Poisson lines as markers. Results on an X-ray microtomographic image of aluminium particles are presented and validated. PMID:22946787

  1. Fractal Analysis of Laplacian Pyramidal Filters Applied to Segmentation of Soil Images

    PubMed Central

    de Castro, J.; Méndez, A.; Tarquis, A. M.

    2014-01-01

    The Laplacian pyramid is a well-known technique for image processing in which local operators of many scales, but identical shape, serve as the basis functions. The properties required of the pyramidal filter produce a family of filters that is uniparametrical in the classical case, when the length of the filter is 5. We pay attention to the Gaussian and fractal behaviour of these basis functions (or filters), and we determine the Gaussian and fractal ranges in the case of a single parameter a. These fractal filters lose less energy in every step of the Laplacian pyramid, and we apply this property to obtain threshold values for segmenting soil images and then evaluate their porosity. We also evaluate our results by comparing them with the Otsu algorithm's threshold values, and conclude that our algorithm produces reliable results. PMID:25114957
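    The uniparametrical family of length-5 pyramid filters is the classic Burt-Adelson generating kernel; a minimal 1D sketch of the kernel and one pyramid step (illustrative only; the paper works on 2D soil images, and the parameter ranges it derives are not reproduced here):

```python
import numpy as np

def pyramid_kernel(a):
    """Classic 5-tap generating kernel w = [1/4 - a/2, 1/4, a, 1/4, 1/4 - a/2];
    a = 0.4 gives the familiar near-Gaussian filter, while other values of the
    single parameter `a` give the regimes the paper explores."""
    return np.array([0.25 - a / 2, 0.25, a, 0.25, 0.25 - a / 2])

def reduce_once(signal, a):
    """One pyramid REDUCE step: smooth with the kernel, downsample by 2."""
    smoothed = np.convolve(signal, pyramid_kernel(a), mode='same')
    return smoothed[::2]

def laplacian_level(signal, a):
    """One Laplacian level: signal minus its smoothed version (resampling
    omitted to keep the sketch short)."""
    return signal - np.convolve(signal, pyramid_kernel(a), mode='same')
```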

  2. A link-segment model of upright human posture for analysis of head-trunk coordination

    NASA Technical Reports Server (NTRS)

    Nicholas, S. C.; Doxey-Gasway, D. D.; Paloski, W. H.

    1998-01-01

    Sensory-motor control of upright human posture may be organized in a top-down fashion such that certain head-trunk coordination strategies are employed to optimize visual and/or vestibular sensory inputs. Previous quantitative models of the biomechanics of human posture control have examined the simple case of ankle sway strategy, in which an inverted pendulum model is used, and the somewhat more complicated case of hip sway strategy, in which multisegment, articulated models are used. While these models can be used to quantify the gross dynamics of posture control, they are not sufficiently detailed to analyze head-trunk coordination strategies that may be crucial to understanding its underlying mechanisms. In this paper, we present a biomechanical model of upright human posture that extends an existing four mass, sagittal plane, link-segment model to a five mass model including an independent head link. The new model was developed to analyze segmental body movements during dynamic posturography experiments in order to study head-trunk coordination strategies and their influence on sensory inputs to balance control. It was designed specifically to analyze data collected on the EquiTest (NeuroCom International, Clackamas, OR) computerized dynamic posturography system, where the task of maintaining postural equilibrium may be challenged under conditions in which the visual surround, support surface, or both are in motion. The performance of the model was tested by comparing its estimated ground reaction forces to those measured directly by support surface force transducers. We conclude that this model will be a valuable analytical tool in the search for mechanisms of balance control.
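    The validation described above rests on whole-body Newtonian mechanics: the estimated ground reaction force is the sum over segments of mass times acceleration, minus the weight vector. A minimal sagittal-plane sketch assuming a five-segment arrangement (the helper name and segment masses are hypothetical):

```python
import numpy as np

G = np.array([0.0, -9.81])  # sagittal-plane gravity vector (m/s^2)

def ground_reaction_force(masses, accelerations):
    """Net ground reaction force of a sagittal-plane link-segment model.

    Newton's second law for the whole body gives GRF + sum(m_i) * g
    = sum(m_i * a_i), hence GRF = sum_i m_i * (a_i - g). `masses` is a
    length-5 sequence (four body links plus the independent head link)
    and `accelerations` the matching (5,2) array of segment center-of-mass
    accelerations. Estimates of this kind are what the model compares to
    the support surface force transducer measurements.
    """
    masses = np.asarray(masses, dtype=float)
    acc = np.asarray(accelerations, dtype=float)
    return (masses[:, None] * (acc - G)).sum(axis=0)
```

    For a motionless subject the estimate reduces to body weight acting vertically, which is a convenient sanity check.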

  3. Analysis of a segmented q-plate tunable retarder for the generation of first-order vector beams.

    PubMed

    Davis, Jeffrey A; Hashimoto, Nobuyuki; Kurihara, Makoto; Hurtado, Enrique; Pierce, Melanie; Sánchez-López, María M; Badham, Katherine; Moreno, Ignacio

    2015-11-10

    In this work we study a prototype q-plate segmented tunable liquid crystal retarder device. It shows a large modulation range (5π rad for a wavelength of 633 nm and nearly 2π for 1550 nm) and a large clear aperture of one inch diameter. We analyze the operation of the q-plate in terms of Jones matrices and provide different matrix decompositions useful for its analysis, including the polarization transformations, the effect of the tunable phase shift, and the effect of quantization levels (the device is segmented in 12 angular sectors). We also show a very simple and robust optical system capable of generating all polarization states on the first-order Poincaré sphere. An optical polarization rotator and a linear retarder are used in a geometry that allows the generation of all states in the zero-order Poincaré sphere simply by tuning two retardance parameters. We then use this system with the q-plate device to directly map an input arbitrary state of polarization to a corresponding first-order vectorial beam. This optical system would be more practical for high speed and programmable generation of vector beams than other systems reported so far. Experimental results are presented. PMID:26560790
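    The Jones-matrix description can be sketched as a rotated linear retarder whose fast-axis angle q·φ is quantized to the device's 12 angular sectors. This is a simplified model for illustration, not the authors' decomposition:

```python
import numpy as np

def retarder(delta, theta):
    """Jones matrix of a linear retarder with retardance `delta` and fast
    axis at angle `theta`: R(theta) @ diag(e^{-i d/2}, e^{i d/2}) @ R(-theta)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    J0 = np.array([[np.exp(-1j * delta / 2), 0],
                   [0, np.exp(1j * delta / 2)]])
    return R @ J0 @ R.T

def segmented_qplate(delta, q, azimuth, sectors=12):
    """Jones matrix of a segmented q-plate at a given azimuth: the ideal
    continuous fast-axis angle q*azimuth is replaced by the constant axis
    of the sector containing that azimuth (12 sectors by default)."""
    width = 2 * np.pi / sectors
    sector_center = (np.floor(azimuth / width) + 0.5) * width
    return retarder(delta, q * sector_center)
```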

  4. Validation tools for image segmentation

    NASA Astrophysics Data System (ADS)

    Padfield, Dirk; Ross, James

    2009-02-01

    A large variety of image analysis tasks require the segmentation of various regions in an image. For example, segmentation is required to generate accurate models of brain pathology that are important components of modern diagnosis and therapy. While the manual delineation of such structures gives accurate information, the automatic segmentation of regions such as the brain and tumors from such images greatly enhances the speed and repeatability of quantifying such structures. The ubiquitous need for such algorithms has led to a wide range of image segmentation algorithms with various assumptions, parameters, and robustness. The evaluation of such algorithms is an important step in determining their effectiveness. Therefore, rather than developing new segmentation algorithms, we here describe validation methods for segmentation algorithms. Using similarity metrics comparing the automatic to manual segmentations, we demonstrate methods for optimizing the parameter settings for individual cases and across a collection of datasets using the Design of Experiment framework. We then employ statistical analysis methods to compare the effectiveness of various algorithms. We investigate several region-growing algorithms from the Insight Toolkit and compare their accuracy to that of a separate statistical segmentation algorithm. The segmentation algorithms are used with their optimized parameters to automatically segment the brain and tumor regions in MRI images of 10 patients. The validation tools indicate that none of the ITK algorithms studied outperforms the statistical segmentation algorithm with statistical significance, although they perform reasonably well considering their simplicity.
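    A typical similarity metric for such validation is the Dice coefficient; a minimal sketch of one overlap metric that could feed the Design of Experiment optimization (names are illustrative, not from the paper's code):

```python
import numpy as np

def dice(auto, manual):
    """Dice similarity coefficient between an automatic and a manual binary
    segmentation: 2|A & M| / (|A| + |M|), in [0, 1]."""
    auto = np.asarray(auto, dtype=bool)
    manual = np.asarray(manual, dtype=bool)
    denom = auto.sum() + manual.sum()
    if denom == 0:
        return 1.0  # both empty: treat as perfect agreement
    return 2.0 * np.logical_and(auto, manual).sum() / denom
```

    Parameter optimization then amounts to maximizing this score over the algorithm's parameter grid for each case or dataset collection.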

  5. A Posteriori Error Analysis of a Cell-centered Finite Volume Method for Semilinear Elliptic Problems

    SciTech Connect

    Michael Pernice

    2009-11-01

    In this paper, we conduct an a posteriori analysis for the error in a quantity of interest computed from a cell-centered finite volume scheme. The a posteriori error analysis is based on variational analysis, residual error and the adjoint problem. To carry out the analysis, we use an equivalence between the cell-centered finite volume scheme and a mixed finite element method with special choice of quadrature.
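    The adjoint-based error representation underlying such analyses can be sketched in its standard form (generic notation, not necessarily the paper's):

```latex
% Quantity of interest Q(u) = (u, \psi); u_h = finite volume approximation.
% Adjoint (dual) problem: find \phi with A^*\phi = \psi.
% The error in the quantity of interest is the residual weighted by the
% adjoint solution:
Q(u) - Q(u_h) \approx \big( R(u_h), \phi \big), \qquad R(u_h) = f - A(u_h).
```

    The equivalence with a mixed finite element method mentioned above is what lets the variational residual machinery apply to the cell-centered scheme.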

  6. Ventriculogram segmentation using boosted decision trees

    NASA Astrophysics Data System (ADS)

    McDonald, John A.; Sheehan, Florence H.

    2004-05-01

    Left ventricular status, reflected in ejection fraction or end systolic volume, is a powerful prognostic indicator in heart disease. Quantitative analysis of these and other parameters from ventriculograms (cine x-rays of the left ventricle) is infrequently performed due to the labor required for manual segmentation. None of the many methods developed for automated segmentation has achieved clinical acceptance. We present a method for semi-automatic segmentation of ventriculograms based on a very accurate two-stage boosted decision-tree pixel classifier. The classifier determines which pixels are inside the ventricle at key ED (end-diastole) and ES (end-systole) frames. The test misclassification rate is about 1%. The classifier is semi-automatic, requiring a user to select 3 points in each frame: the endpoints of the aortic valve and the apex. The first classifier stage is 2 boosted decision-trees, trained using features such as gray-level statistics (e.g. median brightness) and image geometry (e.g. coordinates relative to the 3 user-supplied points). Second-stage classifiers are trained using the same features as the first, plus the output of the first stage. Border pixels are determined from the segmented images using dilation and erosion. A curve is then fit to the border pixels, minimizing a penalty function that trades off fidelity to the border pixels with smoothness. ED and ES volumes, and ejection fraction, are estimated from the border curves using standard area-length formulas. On independent test data, the differences between automatic and manual volumes (and ejection fractions) are similar in size to the differences between two human observers.
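    The two-stage design, in which the second stage sees the first stage's output as an extra feature, can be sketched with decision stumps standing in for the boosted trees. This is a toy illustration of the stacking idea only, not the authors' classifier:

```python
import numpy as np

def train_stump(X, y):
    """Pick the single (feature, threshold, polarity) split with lowest
    0/1 training error; a stand-in for a boosted decision tree."""
    best = (0, 0.0, 1, 1.0)  # feature, threshold, polarity, error
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = (pol * (X[:, f] - t) > 0).astype(int)
                err = np.mean(pred != y)
                if err < best[3]:
                    best = (f, t, pol, err)
    return best[:3]

def stump_predict(stump, X):
    f, t, pol = stump
    return (pol * (X[:, f] - t) > 0).astype(int)

def two_stage_classify(X, y):
    """Two-stage scheme: stage 2 is trained on the original features plus
    stage 1's per-pixel output, mirroring the cascade described above."""
    stage1 = train_stump(X, y)
    out1 = stump_predict(stage1, X)
    X2 = np.column_stack([X, out1])
    stage2 = train_stump(X2, y)
    return stump_predict(stage2, X2)
```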

  7. A registration-based segmentation method with application to adiposity analysis of mice microCT images

    NASA Astrophysics Data System (ADS)

    Bai, Bing; Joshi, Anand; Brandhorst, Sebastian; Longo, Valter D.; Conti, Peter S.; Leahy, Richard M.

    2014-04-01

    Obesity is a global health problem, particularly in the U.S., where one third of adults are obese. A reliable and accurate method of quantifying obesity is necessary. Visceral adipose tissue (VAT) and subcutaneous adipose tissue (SAT) are two measures of obesity that reflect different associated health risks, but accurate measurements in humans or rodent models are difficult. In this paper we present an automatic, registration-based segmentation method for mouse adiposity studies using microCT images. We co-register the subject CT image and a mouse CT atlas. Our method is based on surface matching of the microCT image and an atlas. Surface-based elastic volume warping is used to match the internal anatomy. We acquired a whole body scan of a C57BL6/J mouse injected with contrast agent using microCT and created a whole body mouse atlas by manually delineating the boundaries of the mouse and major organs. For method verification we scanned a C57BL6/J mouse from the base of the skull to the distal tibia. We registered the obtained mouse CT image to our atlas. Preliminary results show that we can warp the atlas image to match the posture and shape of the subject CT image, which has significant differences from the atlas. We plan to use this software tool in longitudinal obesity studies using mouse models.

  8. Fully Automated Brain Tumor Segmentation using two MRI Modalities

    E-print Network

    Alberta, University of

    …, and (3) segmenting both the Gross Tumor Volume and edema. The method is tested using 19 hand-segmented real cases and evaluated with an overlap coefficient; improvements of 5% and 20%, respectively, for segmentation of edema and Gross Tumor Volume were obtained. In practice, practitioners currently rely on experts' manual delineations of the tumor and associated edema in MR images [3].

  9. Estimating temperature-dependent anisotropic hydrogen displacements with the invariom database and a new segmented rigid-body analysis program

    PubMed Central

    Lübben, Jens; Bourhis, Luc J.; Dittrich, Birger

    2015-01-01

    Invariom partitioning and notation are used to estimate anisotropic hydrogen displacements for incorporation in crystallographic refinement models. Optimized structures of the generalized invariom database and their frequency computations provide the information required: frequencies are converted to internal atomic displacements and combined with the results of a TLS (translation–libration–screw) fit of experimental non-hydrogen anisotropic displacement parameters to estimate those of H atoms. Comparison with TLS+ONIOM and neutron diffraction results for four example structures where high-resolution X-ray and neutron data are available show that electron density transferability rules established in the invariom approach are also suitable for streamlining the transfer of atomic vibrations. A new segmented-body TLS analysis program called APD-Toolkit has been coded to overcome technical limitations of the established program THMA. The influence of incorporating hydrogen anisotropic displacement parameters on conventional refinement is assessed. PMID:26664341

  10. Yucca Mountain transportation routes: Preliminary characterization and risk analysis; Volume 2, Figures [and] Volume 3, Technical Appendices

    SciTech Connect

    Souleyrette, R.R. II; Sathisan, S.K.; di Bartolo, R.

    1991-05-31

    This report presents appendices related to the preliminary assessment and risk analysis for high-level radioactive waste transportation routes to the proposed Yucca Mountain Project repository. Information includes data on population density, traffic volume, ecologically sensitive areas, and accident history.

  11. Multiscale remote sensing data segmentation and post-segmentation change detection based on logical modeling: Theoretical exposition and experimental results for forestland cover change analysis

    NASA Astrophysics Data System (ADS)

    Ouma, Yashon O.; Josaphat, S. S.; Tateishi, Ryutaro

    2008-07-01

    Quantification of forestland cover extents, changes and causes thereof is currently of regional and global research priority. Remote sensing data (RSD) play a significant role in this exercise. However, supervised classification-based forest mapping from RSD is limited by the lack of ground truth and by the shortcomings of spectral-only methods. In this paper, first results of a methodology to detect change/no change based on unsupervised multiresolution image transformation are presented. The technique combines directional wavelet-transform texture and multispectral imagery in an anisotropic diffusion aggregation (segmentation) algorithm. The segmentation algorithm was implemented in an unsupervised self-organizing feature map neural network. Using Landsat TM (1986) and ETM+ (2001), logical-operations-based change detection results for part of the Mau forest in Kenya are presented. An overall change detection accuracy of 88.4%, corresponding to a kappa of 0.8265, was obtained. The methodology predicts the change information a posteriori, as opposed to conventional methods that require land cover classes a priori for change detection. Most importantly, the approach can be used to predict the existence, location and extent of disturbances within natural environmental systems.
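    The kappa statistic quoted alongside the 88.4% overall accuracy is computed from the change/no-change confusion matrix; a minimal sketch assuming the usual 2x2 layout (rows: reference, columns: detected):

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a confusion matrix: agreement beyond chance,
    (p_o - p_e) / (1 - p_e), where p_e comes from the marginal totals."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    po = np.trace(confusion) / total                              # observed
    pe = (confusion.sum(0) * confusion.sum(1)).sum() / total ** 2  # chance
    return (po - pe) / (1 - pe)
```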

  12. Linkage disequilibrium analysis by searching for shared segments: Mapping a locus for benign recurrent intrahepatic cholestasis (BRIC)

    SciTech Connect

    Freimer, N.; Baharloo, S.; Blankenship, K.

    1994-09-01

    The lod score method of linkage analysis has two important drawbacks: parameters must be specified for the transmission of the disease (e.g. penetrance), and large numbers of genetically informative individuals must be studied. Although several robust non-parametric methods are available, these also require large sample sizes. The availability of dense genetic maps permits genome screening to be conducted by linkage disequilibrium (LD) mapping methods, which are statistically powerful and non-parametric. Lander & Botstein proposed that LD mapping could be employed to screen the human genome for disease loci; we have now applied this strategy to map a gene for an autosomal recessive disorder, benign recurrent intrahepatic cholestasis (BRIC). Our approach to LD mapping was based on identifying chromosome segments shared between distantly related patients; we used 256 microsatellite markers to genotype three affected individuals, and their parents, from an isolated town in The Netherlands. Because endogamy occurred in this population for several generations, all of the BRIC patients are known to be distantly related to each other, but the pedigree structure and connections could not be established with certainty more than three generations before the present, so lod score analysis was impossible. A 20 cM region on chromosome 18 is shared by 5/6 patient chromosomes; subsequently, we noted that 6/6 chromosomes shared an interval of about 3 cM in this region. Calculations indicate that it is extremely unlikely that such a region could be inherited by chance rather than by descent from a common ancestor. Thus, LD mapping by searching for shared chromosomal segments is an extremely powerful approach for genome screening to identify disease loci.

  13. Extensive serum biomarker analysis in patients with ST segment elevation myocardial infarction (STEMI).

    PubMed

    Zhang, Yi; Lin, Peiyi; Jiang, Huilin; Xu, Jieling; Luo, Shuhong; Mo, Junrong; Li, Yunmei; Chen, Xiaohui

    2015-12-01

    ST segment elevation myocardial infarction (STEMI) is one of the leading causes of morbidity and mortality, and some characteristics of STEMI are poorly understood. The aim of the present study is to detect protein expression profiles in the serum of STEMI patients, and to identify biomarkers for this disease. Cytokine profiles of serum from STEMI patients and healthy controls were analyzed with a semi-quantitative human antibody array for 174 proteins, and the results showed that blood serum concentrations of 21 cytokines differed considerably between STEMI patients and healthy subjects. In the next phase, a sandwich ELISA kit individually validated eight biomarker results from 21 of the microarray experiments. Clinical validation demonstrated a significant increase of BDNF, PDGF-AA and MMP-9 in patients with AMI. Meanwhile, BDNF, PDGF-AA and MMP-9 distinguished AMI patients from healthy controls with mean areas under the receiver operating characteristic (ROC) curves of 0.870, 0.885, and 0.81, respectively, with diagnostic cut-off points of 0.688 ng/mL, 297.86 ng/mL and 690.066 ng/mL. Our study indicated that these three cytokines were up-regulated in STEMI samples, and may hold promise for the assessment of STEMI. PMID:26153394
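    The ROC areas and cut-off points can be illustrated with a rank-based AUC and a Youden-index cut-off. A minimal sketch: the AUC formula is standard (Mann-Whitney), but the Youden criterion for the cut-off is an assumption here, since the paper does not state how its cut-off points were chosen.

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability a random patient scores higher than a random control,
    counting ties as one half."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def youden_cutoff(scores_pos, scores_neg):
    """Cut-off maximizing sensitivity + specificity - 1 over the observed
    marker concentrations (Youden index; one common criterion)."""
    candidates = np.unique(np.concatenate([scores_pos, scores_neg]))
    best_t, best_j = candidates[0], -1.0
    for t in candidates:
        sens = np.mean(np.asarray(scores_pos) >= t)
        spec = np.mean(np.asarray(scores_neg) < t)
        if sens + spec - 1 > best_j:
            best_t, best_j = t, sens + spec - 1
    return best_t
```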

  14. Who Will More Likely Buy PHEV: A Detailed Market Segmentation Analysis

    SciTech Connect

    Lin, Zhenhong; Greene, David L

    2010-01-01

    Understanding the diverse PHEV purchase behaviors among prospective new car buyers is key to designing efficient and effective policies for promoting new energy vehicle technologies. The ORNL MA3T model developed for the U.S. Department of Energy is described and used to project PHEV purchase probabilities by different consumers. MA3T disaggregates the U.S. household vehicle market into 1458 consumer segments based on region, residential area, driver type, technology attitude, home charging availability and work charging availability, and is calibrated to the EIA's Annual Energy Outlook. Simulation results from MA3T are used to identify the more likely PHEV buyers and provide explanations. It is observed that consumers who have home charging, drive more frequently and live in urban areas are more likely to buy a PHEV. Early adopters are projected to be the more likely PHEV buyers in the early market, but the PHEV purchase probability among late-majority consumers can increase over time as the PHEV gradually becomes a familiar product.

  15. Comparative analysis of the distribution of segmented filamentous bacteria in humans, mice and chickens.

    PubMed

    Yin, Yeshi; Wang, Yu; Zhu, Liying; Liu, Wei; Liao, Ningbo; Jiang, Mizu; Zhu, Baoli; Yu, Hongwei D; Xiang, Charlie; Wang, Xin

    2013-03-01

    Segmented filamentous bacteria (SFB) are indigenous gut commensal bacteria. They are commonly detected in the gastrointestinal tracts of both vertebrates and invertebrates. Despite the significant role they have in the modulation of the development of host immune systems, little information exists regarding the presence of SFB in humans. The aim of this study was to investigate the distribution and diversity of SFB in humans and to determine their phylogenetic relationships with their hosts. Gut contents from 251 humans, 92 mice and 72 chickens were collected for bacterial genomic DNA extraction and subjected to SFB 16S rRNA-specific PCR detection. The results showed SFB colonization to be age-dependent in humans, with the majority of individuals colonized within the first 2 years of life, but this colonization disappeared by the age of 3 years. Results of 16S rRNA sequencing showed that multiple operational taxonomic units of SFB could exist in the same individuals. Cross-species comparison among human, mouse and chicken samples demonstrated that each host possessed an exclusive predominant SFB sequence. In summary, our results showed that SFB display host specificity, and SFB colonization, which occurs early in human life, declines in an age-dependent manner. PMID:23151642

  16. Segmentation and Tracking of Adherens Junctions in 3D for the Analysis of Epithelial Tissue Morphogenesis

    PubMed Central

    Cilla, Rodrigo; Mechery, Vinodh; Hernandez de Madrid, Beatriz; Del Signore, Steven; Dotu, Ivan; Hatini, Victor

    2015-01-01

    Epithelial morphogenesis generates the shape of tissues, organs and embryos and is fundamental for their proper function. It is a dynamic process that occurs at multiple spatial scales from macromolecular dynamics, to cell deformations, mitosis and apoptosis, to coordinated cell rearrangements that lead to global changes of tissue shape. Using time lapse imaging, it is possible to observe these events at a system level. However, to investigate morphogenetic events it is necessary to develop computational tools to extract quantitative information from the time lapse data. Toward this goal, we developed an image-based computational pipeline to preprocess, segment and track epithelial cells in 4D confocal microscopy data. The computational pipeline we developed, for the first time, detects the adherens junctions of epithelial cells in 3D, without the need to first detect cell nuclei. We accentuate and detect cell outlines in a series of steps, symbolically describe the cells and their connectivity, and employ this information to track the cells. We validated the performance of the pipeline for its ability to detect vertices and cell-cell contacts, track cells, and identify mitosis and apoptosis in surface epithelia of Drosophila imaginal discs. We demonstrate the utility of the pipeline to extract key quantitative features of cell behavior with which to elucidate the dynamics and biomechanical control of epithelial tissue morphogenesis. We have made our methods and data available as an open-source multiplatform software tool called TTT (http://github.com/morganrcu/TTT). PMID:25884654

  17. Photogrammetric Digital Outcrop Model analysis of a segment of the Centovalli Line (Trontano, Italy)

    NASA Astrophysics Data System (ADS)

    Consonni, Davide; Pontoglio, Emanuele; Bistacchi, Andrea; Tunesi, Annalisa

    2015-04-01

    The Centovalli Line is a complex network of brittle faults developing between Domodossola (West) and Locarno (East), where it merges with the Canavese Line (the western segment of the Periadriatic Lineament). The Centovalli Line roughly follows the Southern Steep Belt which characterizes the inner or "root" zone of the Penninic and Austroalpine units, which underwent several deformation phases under variable P-T conditions throughout the Alpine orogenic history. The last deformation phases in this area developed under brittle conditions, resulting in an array of dextral-reverse subvertical faults with a general E-W trend that partly reactivates and partly crosscuts the metamorphic foliations and lithological boundaries. Here we report on a quantitative digital outcrop model (DOM) study aimed at quantifying the fault zone architecture in a particularly well exposed outcrop near Trontano, at the western edge of the Centovalli Line. The DOM was reconstructed with photogrammetry and allowed us to perform a complete characterization of the damage zones and multiple fault cores on both point-cloud and textured-surface models. Fault cores have been characterized in terms of attitude, thickness, and internal distribution of fault rocks (gouge-bearing), including possibly seismogenic localized slip surfaces. In the damage zones, the fracture network has been characterized in terms of fracture intensity (both P10 and P21, on virtual scanlines and scan-areas), fracture attitude, fracture connectivity, etc.
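    The P10 and P21 intensity measures mentioned above have simple definitions (fracture count per unit scanline length, and trace length per unit area, respectively); a minimal sketch, with invented counts and dimensions:

```python
# Illustration of the P10 and P21 fracture-intensity measures; the
# numbers below are invented for the example, not from the outcrop study.

def p10(n_intersections, scanline_length_m):
    """Linear fracture intensity: fractures per metre of virtual scanline."""
    return n_intersections / scanline_length_m

def p21(total_trace_length_m, sampling_area_m2):
    """Areal fracture intensity: trace length per square metre of scan-area."""
    return total_trace_length_m / sampling_area_m2

# Example: 18 fractures crossing a 6 m virtual scanline, and 42 m of
# cumulative trace length digitized over a 12 m^2 scan-area.
linear_intensity = p10(18, 6.0)    # 3.0 fractures per metre
areal_intensity = p21(42.0, 12.0)  # 3.5 m of trace per square metre
```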

  18. 3D shape descriptors for face segmentation and fiducial points detection: an anatomical-based analysis

    NASA Astrophysics Data System (ADS)

    Salazar, Augusto E.; Cerón, Alexander; Prieto, Flavio A.

    2011-03-01

    The behavior of nine 3D shape descriptors, computed on the surface of 3D face models, is studied. The set of descriptors includes six curvature-based ones, SPIN images, Folded SPIN images, and Fingerprints. Instead of defining clusters of vertices based on the value of a given primitive surface feature, a face template composed of 28 anatomical regions is used to segment the models and to extract the location of different landmarks and fiducial points. Vertices are grouped by region, region boundaries, and subsampled versions of them. The aim of this study is to analyze the discriminant capacity of each descriptor to characterize regions and to identify key points on the facial surface. The experiment includes testing with data from neutral faces and faces showing expressions. Also, in order to assess the usefulness of the bending-invariant canonical form (BICF) in handling variations due to facial expressions, the descriptors are computed both directly from the surface and from its BICF. In the results, the values, distributions, and relevance indexes of each set of vertices were analyzed.

  19. Analysis of the Vancouver lung nodule malignancy model with respect to manual and automated segmentation

    NASA Astrophysics Data System (ADS)

    Wiemker, Rafael; Boroczky, Lilla; Bergtholdt, Martin; Klinder, Tobias

    2015-03-01

    The recently published Vancouver model for lung nodule malignancy prediction holds great promise as a practically feasible tool to mitigate the clinical decision problem of how to act on a lung nodule detected at baseline screening. It provides a formula to compute a probability of malignancy from only nine clinical and radiologic features. The feature values are provided by user interaction but in principle could also be automatically pre-filled by appropriate image processing algorithms and RIS requests. Nodule diameter is a feature with crucial influence on the predicted malignancy, and leads to uncertainty caused by inter-reader variability. The purpose of this paper is to analyze how strongly the malignancy prediction of a lung nodule found with CT screening is affected by the inter-reader variation of the nodule diameter estimation. To this aim we have estimated the magnitude of the malignancy variability by applying the Vancouver malignancy model to the LIDC-IDRI database which contains independent delineations from several readers. It can be shown that using fully automatic nodule segmentation can significantly lower the variability of the estimated malignancy, while demonstrating excellent agreement with the expert readers.
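    The abstract does not reproduce the model's nine features or coefficients, but predictors of this kind are logistic in form; the sketch below uses placeholder features and coefficients (all assumed, not the published Vancouver values) only to show how inter-reader variation in the diameter feature propagates into the predicted probability:

```python
import math

# Hedged sketch of a multivariable logistic malignancy model. The feature
# set and every coefficient here are PLACEHOLDERS for illustration; only
# the functional form (log-odds linear in the features) is assumed.

def malignancy_probability(diameter_mm, age_yr, spiculated, upper_lobe,
                           b0=-6.5, b_diam=0.12, b_age=0.04,
                           b_spic=0.8, b_lobe=0.6):
    logit = (b0 + b_diam * diameter_mm + b_age * age_yr
             + b_spic * spiculated + b_lobe * upper_lobe)
    return 1.0 / (1.0 + math.exp(-logit))

# Inter-reader diameter variability propagates directly into the prediction:
p_small = malignancy_probability(8.0, 62, 1, 1)   # reader A measures 8 mm
p_large = malignancy_probability(10.0, 62, 1, 1)  # reader B measures 10 mm
```

Because the diameter enters the log-odds linearly, a 2 mm disagreement between readers shifts the predicted probability monotonically, which is the variability the paper quantifies.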

  20. Cerebrospinal fluid volume analysis for hydrocephalus diagnosis and clinical research.

    PubMed

    Lebret, Alain; Hodel, Jérôme; Rahmouni, Alain; Decq, Philippe; Petit, Eric

    2013-04-01

    In this paper we analyze the volumes of the cerebrospinal fluid spaces for the diagnosis of hydrocephalus; these volumes serve as reference values for future studies. We first present an automatic method to estimate the volumes from a new three-dimensional whole-body magnetic resonance imaging sequence. This enables us to statistically analyze the fluid volumes, and to show that the ratio of subarachnoid volume to ventricular volume is approximately a proportionality constant for healthy adults (≈10.73), while it falls in the range [0.63, 4.61] for hydrocephalus patients. This indicates that a robust distinction between pathological and healthy cases can be achieved by using this ratio as an index. PMID:23570816
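    A sketch of how the reported ratio could serve as a diagnostic index; the 7.0 decision threshold below is an assumption chosen between the healthy value (≈10.73) and the pathological range [0.63, 4.61], not a value from the paper:

```python
# Ratio-index sketch: subarachnoid CSF volume divided by ventricular CSF
# volume. Threshold and example volumes are invented for illustration.

def csf_ratio(subarachnoid_ml, ventricular_ml):
    return subarachnoid_ml / ventricular_ml

def is_hydrocephalus(subarachnoid_ml, ventricular_ml, threshold=7.0):
    # Hydrocephalus enlarges the ventricles, driving the ratio down.
    return csf_ratio(subarachnoid_ml, ventricular_ml) < threshold

healthy = is_hydrocephalus(160.0, 15.0)   # ratio ~10.7
patient = is_hydrocephalus(120.0, 60.0)   # ratio 2.0
```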

  1. Analysis of flexible aircraft longitudinal dynamics and handling qualities. Volume 1: Analysis methods

    NASA Technical Reports Server (NTRS)

    Waszak, M. R.; Schmidt, D. S.

    1985-01-01

    As aircraft become larger and lighter due to design requirements for increased payload and improved fuel efficiency, they will also become more flexible. For highly flexible vehicles, the handling qualities may not be accurately predicted by conventional methods. This study applies two analysis methods to a family of flexible aircraft in order to investigate how and when structural (especially dynamic aeroelastic) effects affect the dynamic characteristics of aircraft. The first type of analysis is an open loop model analysis technique. This method considers the effects of modal residue magnitudes on determining vehicle handling qualities. The second method is a pilot in the loop analysis procedure that considers several closed loop system characteristics. Volume 1 consists of the development and application of the two analysis methods described above.

  2. Three-Dimensional MRI Analysis of Individual Volume of Lacunes in CADASIL

    PubMed Central

    Hervé, Dominique; Godin, Ophélia; Dufouil, Carole; Viswanathan, Anand; Jouvent, Eric; Pachaï, Chahin; Guichard, Jean-Pierre; Bousser, Marie-Germaine; Dichgans, Martin; Chabriat, Hugues

    2011-01-01

    Background and Purpose Three-dimensional MRI segmentation may be useful to better understand the physiopathology of lacunar infarctions. Using this technique, the distribution of lacunar infarctions volumes has been recently reported in patients with cerebral autosomal-dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL). Whether the volume of each lacune (individual lacunar volume [ILV]) is associated with the patients’ other MRI lesions or vascular risk factors has never been investigated. The purpose of this study was to study the impact of age, vascular risk factors, and MRI markers on the ILV in a large cohort of patients with CADASIL. Methods Of 113 patients with CADASIL, 1568 lacunes were detected and ILV was estimated after automatic segmentation on 3-dimensional T1-weighted imaging. Relationships between ILV and age, blood pressure, cholesterol, diabetes, white matter hyperintensities load, number of cerebral microbleeds, apparent diffusion coefficient, brain parenchymal fraction, and mean and median of distribution of lacunes volumes at the patient level were investigated. We used random effect models to take into account intraindividual correlations. Results The ILV varied from 4.28 to 1619 mm3. ILV was not significantly correlated with age, vascular risk factors, or different MRI markers (white matter hyperintensity volume, cerebral microbleed number, mean apparent diffusion coefficient or brain parenchymal fraction). In contrast, ILV was positively correlated with the patients’ mean and median of lacunar volume distribution (P=0.0001). Conclusions These results suggest that the ILV is not related to the associated cerebral lesions or to vascular risk factors in CADASIL, but that an individual predisposition may explain predominating small or predominating large lacunes among patients. Local anatomic factors or genetic factors may be involved in these variations. PMID:18948610
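    Estimating each lacune's volume (ILV) from an automatic segmentation reduces to counting labeled voxels and multiplying by the voxel volume; a toy sketch, with invented labels and an assumed 1 mm isotropic T1 voxel size:

```python
# Toy computation of individual lacunar volumes from a labeled segmentation:
# each lacune's volume is its voxel count times the voxel volume.

def individual_volumes(labeled_voxels, voxel_volume_mm3):
    """labeled_voxels: iterable of integer labels (0 = background)."""
    counts = {}
    for lab in labeled_voxels:
        if lab != 0:
            counts[lab] = counts.get(lab, 0) + 1
    return {lab: n * voxel_volume_mm3 for lab, n in counts.items()}

# Flattened label volume with two lacunes (labels 1 and 2); 1 mm isotropic
# voxels give a voxel volume of 1.0 mm^3.
flat_labels = [0, 0, 1, 1, 1, 0, 2, 2, 0, 1]
ilv = individual_volumes(flat_labels, 1.0)
```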

  3. Determination of fiber volume in graphite/epoxy materials using computer image analysis

    NASA Technical Reports Server (NTRS)

    Viens, Michael J.

    1990-01-01

    The fiber volume of graphite/epoxy specimens was determined by analyzing optical images of cross sectioned specimens using image analysis software. Test specimens were mounted and polished using standard metallographic techniques and examined at 1000 times magnification. Fiber volume determined using the optical imaging agreed well with values determined using the standard acid digestion technique. The results were found to agree within 5 percent over a fiber volume range of 45 to 70 percent. The error observed is believed to arise from fiber volume variations within the graphite/epoxy panels themselves. The determination of ply orientation using image analysis techniques is also addressed.
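    The underlying measurement reduces to a pixel-counting ratio; a hedged sketch with an invented intensity threshold and synthetic pixel values (real analyses would calibrate the threshold against the metallographic images):

```python
# Fiber volume fraction estimated as the proportion of pixels brighter than
# a threshold in a grayscale cross-section image. Threshold and pixel
# values are invented for the example.

def fiber_volume_fraction(pixels, threshold=128):
    """pixels: flat iterable of grayscale values; fibers are assumed to
    image brighter than the epoxy matrix."""
    fiber = sum(1 for p in pixels if p >= threshold)
    return fiber / len(pixels)

# Tiny synthetic image: 6 of 10 pixels exceed the threshold.
sample = [200, 210, 50, 60, 190, 205, 40, 220, 195, 30]
fraction = fiber_volume_fraction(sample)  # 0.6, i.e. 60% fiber volume
```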

  4. Multispectral microscopy and cell segmentation for analysis of thyroid fine needle aspiration cytology smears.

    PubMed

    Wu, Xuqing; Thigpen, James; Shah, Shishir K

    2009-01-01

    This paper discusses the needs for automated tools to aid in the diagnosis of thyroid nodules based on analysis of fine needle aspiration cytology smears. While conventional practices rely on the analysis of grey scale or RGB color images, we present a multispectral microscopy system that uses thirty-one spectral bands for analysis. Discussed are methods and results for system calibration and cell delineation. PMID:19964406

  5. Volume component analysis for classification of LiDAR data

    NASA Astrophysics Data System (ADS)

    Varney, Nina M.; Asari, Vijayan K.

    2015-03-01

    One of the most difficult challenges of working with LiDAR data is the large number of data points produced. Analyzing these large data sets is an extremely time-consuming process. For this reason, automatic perception of LiDAR scenes is a growing area of research. Currently, most LiDAR feature extraction relies on geometrical features specific to the point cloud of interest. These geometrical features are scene-specific, and often rely on the scale and orientation of the object for classification. This paper proposes a robust method for reduced-dimensionality feature extraction of 3D objects using a volume component analysis (VCA) approach. This VCA approach is based on principal component analysis (PCA). PCA is a method of reduced feature extraction that computes a covariance matrix from the original input vector. The eigenvectors corresponding to the largest eigenvalues of the covariance matrix are used to describe an image. Block-based PCA is an adapted method for feature extraction in facial images because PCA, when performed in local areas of the image, can extract more significant features than when the entire image is considered. The image space is split into several of these blocks, and PCA is computed individually for each block. This VCA proposes that a LiDAR point cloud can be represented as a series of voxels whose values correspond to the point density within that relative location. From this voxelized space, block-based PCA is used to analyze sections of the space where the sections, when combined, represent features of the entire 3D object. These features are then used as the input to a support vector machine which is trained to identify four classes of objects (vegetation, vehicles, buildings and barriers) with an overall accuracy of 93.8%.
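    The voxelization step described above (point density per voxel) can be sketched in a few lines; the grid size and point cloud below are invented, and the subsequent block-based PCA and SVM stages are omitted:

```python
# Voxelization sketch: bin a 3-D point cloud into a regular grid whose
# voxel values are point counts (densities). Points and voxel size are
# invented for the example.

def voxelize(points, voxel_size):
    """points: iterable of (x, y, z); returns {voxel index: point count}."""
    grid = {}
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size),
               int(z // voxel_size))
        grid[key] = grid.get(key, 0) + 1
    return grid

cloud = [(0.1, 0.2, 0.3), (0.4, 0.1, 0.2), (1.2, 0.0, 0.1)]
grid = voxelize(cloud, 1.0)  # two points fall in voxel (0,0,0), one in (1,0,0)
```

The resulting density grid is what block-based PCA would then operate on, one block of voxels at a time.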

  6. A Genetic Analysis of Brain Volumes and IQ in Children

    ERIC Educational Resources Information Center

    van Leeuwen, Marieke; Peper, Jiska S.; van den Berg, Stephanie M.; Brouwer, Rachel M.; Hulshoff Pol, Hilleke E.; Kahn, Rene S.; Boomsma, Dorret I.

    2009-01-01

    In a population-based sample of 112 nine-year old twin pairs, we investigated the association among total brain volume, gray matter and white matter volume, intelligence as assessed by the Raven IQ test, verbal comprehension, perceptual organization and perceptual speed as assessed by the Wechsler Intelligence Scale for Children-III. Phenotypic…

  7. NeuroBlocks - Visual Tracking of Segmentation and Proofreading for Large Connectomics Projects.

    PubMed

    Ai-Awami, Ali K; Beyer, Johanna; Haehn, Daniel; Kasthuri, Narayanan; Lichtman, Jeff W; Pfister, Hanspeter; Hadwiger, Markus

    2016-01-01

    In the field of connectomics, neuroscientists acquire electron microscopy volumes at nanometer resolution in order to reconstruct a detailed wiring diagram of the neurons in the brain. The resulting image volumes, which often are hundreds of terabytes in size, need to be segmented to identify cell boundaries, synapses, and important cell organelles. However, the segmentation process of a single volume is very complex, time-intensive, and usually performed using a diverse set of tools and many users. To tackle the associated challenges, this paper presents NeuroBlocks, which is a novel visualization system for tracking the state, progress, and evolution of very large volumetric segmentation data in neuroscience. NeuroBlocks is a multi-user web-based application that seamlessly integrates the diverse set of tools that neuroscientists currently use for manual and semi-automatic segmentation, proofreading, visualization, and analysis. NeuroBlocks is the first system that integrates this heterogeneous tool set, providing crucial support for the management, provenance, accountability, and auditing of large-scale segmentations. We describe the design of NeuroBlocks, starting with an analysis of the domain-specific tasks, their inherent challenges, and our subsequent task abstraction and visual representation. We demonstrate the utility of our design based on two case studies that focus on different user roles and their respective requirements for performing and tracking the progress of segmentation and proofreading in a large real-world connectomics project. PMID:26529725

  8. Comparison of segmented flow analysis and ion chromatography for the quantitative characterization of carbohydrates in tobacco products.

    PubMed

    Shifflett, John R; Jones, Lindsey A; Limowski, Edward R; Bezabeh, Dawit Z

    2012-11-28

    Segmented flow analysis (SFA) and ion chromatography with pulsed amperometric detection (IC-PAD) are widely used analytical techniques for the analysis of glucose, fructose, and sucrose in tobacco. In the work presented here, 27 cured tobacco leaves and 21 tobacco products were analyzed for sugars using SFA and IC. The results of these analyses demonstrated that both techniques identified the same trends in sugar content across tobacco leaf and tobacco product types. However, comparison of results between techniques was limited by the selectivity of the SFA method, which relies on the specificity of the reaction of p-hydroxybenzoic acid hydrazide (PAHBAH) with glucose and fructose to generate a detectable derivative. Sugar amines and chlorogenic acid, which are found in tobacco, are also known to react with PAHBAH to form a reaction product that interferes with the analysis of fructose and glucose. To mitigate this problem, solid phase extraction (SPE) was used to remove interferences such as sugar amines and chlorogenic acid from sample matrices prior to SFA. A combination of C18 and cation exchange solid phase extraction cartridges was used, and the results from SFA and IC analyses showed significant convergence in the results of both analytical methods. For example, the average difference between the results from the SFA and IC analyses for flue-cured tobacco samples dropped by 73% when the two-step C18/cation exchange resin sample cleanup was used. PMID:23131129

  9. Automated segmentation of the lungs from high resolution CT images for quantitative study of chronic obstructive pulmonary diseases

    NASA Astrophysics Data System (ADS)

    Garg, Ishita; Karwoski, Ronald A.; Camp, Jon J.; Bartholmai, Brian J.; Robb, Richard A.

    2005-04-01

    Chronic obstructive pulmonary diseases (COPD) are debilitating conditions of the lung and are the fourth leading cause of death in the United States. Early diagnosis is critical for timely intervention and effective treatment. The ability to quantify particular imaging features of specific pathology and accurately assess progression or response to treatment with current imaging tools is relatively poor. The goal of this project was to develop automated segmentation techniques that would be clinically useful as computer assisted diagnostic tools for COPD. The lungs were segmented using an optimized segmentation threshold and the trachea was segmented using a fixed threshold characteristic of air. The segmented images were smoothed by a morphological close operation using spherical elements of different sizes. The results were compared to other segmentation approaches using an optimized threshold to segment the trachea. Comparison of the segmentation results from 10 datasets showed that the method of trachea segmentation using a fixed air threshold followed by morphological closing with spherical element of size 23x23x5 yielded the best results. Inclusion of greater number of pulmonary vessels in the lung volume is important for the development of computer assisted diagnostic tools because the physiological changes of COPD can result in quantifiable anatomic changes in pulmonary vessels. Using a fixed threshold to segment the trachea removed airways from the lungs to a better extent as compared to using an optimized threshold. Preliminary measurements gathered from patient's CT scans suggest that segmented images can be used for accurate analysis of total lung volume and volumes of regional lung parenchyma. Additionally, reproducible segmentation allows for quantification of specific pathologic features, such as lower intensity pixels, which are characteristic of abnormal air spaces in diseases like emphysema.
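    A minimal 2-D illustration of the "fixed threshold, then morphological close" idea, in pure Python on a tiny binary mask. A real implementation would operate on 3-D HU volumes with a library such as scipy.ndimage (an assumption, not the authors' tooling); here the border convention treats out-of-bounds pixels as foreground during erosion so the mask border survives the closing:

```python
# Morphological closing (dilation then erosion) with a 3x3 structuring
# element on a binary mask; fills small holes left by thresholding.

def dilate(mask):
    rows, cols = len(mask), len(mask[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            hit = False
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and mask[rr][cc]:
                        hit = True
            out[r][c] = 1 if hit else 0
    return out

def erode(mask):
    rows, cols = len(mask), len(mask[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            ok = True  # out-of-bounds neighbors are ignored (kept foreground)
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and not mask[rr][cc]:
                        ok = False
            out[r][c] = 1 if ok else 0
    return out

def close(mask):
    # closing = dilation followed by erosion
    return erode(dilate(mask))

# Thresholded "air" mask with a one-pixel hole that the closing fills:
air = [[1, 1, 1],
       [1, 0, 1],
       [1, 1, 1]]
smoothed = close(air)
```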

  10. Cargo Logistics Airlift Systems Study (CLASS). Volume 1: Analysis of current air cargo system

    NASA Technical Reports Server (NTRS)

    Burby, R. J.; Kuhlman, W. H.

    1978-01-01

    The material presented in this volume is classified into the following sections: (1) analysis of current routes; (2) air eligibility criteria; (3) current direct support infrastructure; (4) comparative mode analysis; (5) political and economic factors; and (6) future potential market areas. An effort was made to keep the observations and findings relating to the current systems as objective as possible in order not to bias the analysis of future air cargo operations reported in Volume 3 of the CLASS final report.

  11. Improving image segmentation performance and quantitative analysis via a computer-aided grading methodology for optical coherence tomography retinal image analysis

    NASA Astrophysics Data System (ADS)

    Cabrera Debuc, Delia; Salinas, Harry M.; Ranganathan, Sudarshan; Tátrai, Erika; Gao, Wei; Shen, Meixiao; Wang, Jianhua; Somfai, Gábor M.; Puliafito, Carmen A.

    2010-07-01

    We demonstrate quantitative analysis and error correction of optical coherence tomography (OCT) retinal images by using a custom-built, computer-aided grading methodology. A total of 60 Stratus OCT (Carl Zeiss Meditec, Dublin, California) B-scans collected from ten normal healthy eyes are analyzed by two independent graders. The average retinal thickness per macular region is compared with the automated Stratus OCT results. Intergrader and intragrader reproducibility is calculated by Bland-Altman plots of the mean difference between both gradings and by Pearson correlation coefficients. In addition, the correlation between Stratus OCT and our methodology-derived thickness is also presented. The mean thickness difference between Stratus OCT and our methodology is 6.53 µm and 26.71 µm when using the inner segment/outer segment (IS/OS) junction and outer segment/retinal pigment epithelium (OS/RPE) junction as the outer retinal border, respectively. Overall, the median of the thickness differences as a percentage of the mean thickness is less than 1% and 2% for the intragrader and intergrader reproducibility test, respectively. The measurement accuracy range of the OCT retinal image analysis (OCTRIMA) algorithm is between 0.27 and 1.47 µm and 0.6 and 1.76 µm for the intragrader and intergrader reproducibility tests, respectively. Pearson correlation coefficients demonstrate R² > 0.98 for all Early Treatment Diabetic Retinopathy Study (ETDRS) regions. Our methodology facilitates a more robust and localized quantification of the retinal structure in normal healthy controls and patients with clinically significant intraretinal features.
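    The Bland-Altman bias used for the reproducibility comparison is simply the mean of the paired differences between two measurement series; a sketch with invented thickness values (the full analysis would also report limits of agreement, bias ± 1.96 SD):

```python
# Bland-Altman bias sketch: mean paired difference between two graders'
# retinal-thickness measurements. The numbers are invented for the example.

def bland_altman_bias(a, b):
    """Mean of paired differences between two measurement series."""
    diffs = [x - y for x, y in zip(a, b)]
    return sum(diffs) / len(diffs)

grader1 = [250.0, 255.0, 248.0, 252.0]  # retinal thickness, micrometres
grader2 = [249.0, 254.0, 247.0, 251.0]
bias = bland_altman_bias(grader1, grader2)  # systematic offset of grader1
```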

  12. Individual Trabeculae Segmentation (ITS)–Based Morphological Analysis of High-Resolution Peripheral Quantitative Computed Tomography Images Detects Abnormal Trabecular Plate and Rod Microarchitecture in Premenopausal Women With Idiopathic Osteoporosis

    PubMed Central

    Liu, X Sherry; Cohen, Adi; Shane, Elizabeth; Stein, Emily; Rogers, Halley; Kokolus, Shannon L; Yin, Perry T; McMahon, Donald J; Lappe, Joan M; Recker, Robert R; Guo, X Edward

    2010-01-01

    Idiopathic osteoporosis (IOP) in premenopausal women is a poorly understood entity in which otherwise healthy women have low-trauma fracture or very low bone mineral density (BMD). In this study, we applied individual trabeculae segmentation (ITS)–based morphological analysis to high-resolution peripheral quantitative computed tomography (HR-pQCT) images of the distal radius and distal tibia to gain greater insight into skeletal microarchitecture in premenopausal women with IOP. HR-pQCT scans were performed for 26 normal control individuals and 31 women with IOP. A cubic subvolume was extracted from the trabecular bone compartment and subjected to ITS-based analysis. Three Young's moduli and three shear moduli were calculated by micro–finite element (µFE) analysis. ITS-based morphological analysis of HR-pQCT images detected significantly decreased trabecular plate and rod bone volume fraction and number, decreased axial bone volume fraction in the longitudinal axis, increased rod length, and decreased rod-to-rod, plate-to-rod, and plate-to-plate junction densities at the distal radius and distal tibia in women with IOP. However, trabecular plate and rod thickness did not differ. A more rod-like trabecular microstructure was found in the distal radius, but not in the distal tibia. Most ITS measurements contributed significantly to the elastic moduli of trabecular bone independent of bone volume fraction (BV/TV). At a fixed BV/TV, plate-like trabeculae contributed positively to the mechanical properties of trabecular bone. The results suggest that ITS-based morphological analysis of HR-pQCT images is a sensitive and promising clinical tool for the investigation of trabecular bone microstructure in human studies of osteoporosis. © 2010 American Society for Bone and Mineral Research. PMID:20200967

  13. Risk factors for neovascular glaucoma after carbon ion radiotherapy of choroidal melanoma using dose-volume histogram analysis

    SciTech Connect

    Hirasawa, Naoki . E-mail: naoki_h@nirs.go.jp; Tsuji, Hiroshi; Ishikawa, Hitoshi; Koyama-Ito, Hiroko; Kamada, Tadashi; Mizoe, Jun-Etsu; Ito, Yoshiyuki; Naganawa, Shinji; Ohnishi, Yoshitaka; Tsujii, Hirohiko

    2007-02-01

    Purpose: To determine the risk factors for neovascular glaucoma (NVG) after carbon ion radiotherapy (C-ion RT) of choroidal melanoma. Methods and Materials: A total of 55 patients with choroidal melanoma were treated between 2001 and 2005 with C-ion RT based on computed tomography treatment planning. All patients had a tumor of large size or one located close to the optic disk. Univariate and multivariate analyses were performed to identify the risk factors of NVG for the following parameters: gender, age, dose-volumes of the iris-ciliary body and the wall of the eyeball, and irradiation of the optic disk (ODI). Results: Neovascular glaucoma occurred in 23 patients and the 3-year cumulative NVG rate was 42.6 ± 6.8% (standard error), but enucleation from NVG was performed in only three eyes. Multivariate analysis revealed that the significant risk factors for NVG were V50(IC) (volume of the iris-ciliary body irradiated to ≥50 GyE) (p = 0.002) and ODI (p = 0.036). The 3-year NVG rates for patients with V50(IC) ≥0.127 mL and those with V50(IC) <0.127 mL were 71.4 ± 8.5% and 11.5 ± 6.3%, respectively. The corresponding rates for the patients with and without ODI were 62.9 ± 10.4% and 28.4 ± 8.0%, respectively. Conclusion: Dose-volume histogram analysis with computed tomography indicated that V50(IC) and ODI were independent risk factors for NVG. An irradiation system that can reduce the dose to both the anterior segment and the optic disk might be worth adopting, to investigate whether the incidence of NVG can be decreased.
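    The V50 metric above is the absolute volume of a structure receiving at least 50 GyE; a sketch computing it from per-voxel doses (the dose values and voxel volume below are invented):

```python
# Dose-volume metric sketch: absolute volume receiving >= a dose threshold,
# computed as (number of qualifying voxels) x (voxel volume).

def v_dose(voxel_doses_gye, voxel_volume_ml, threshold_gye=50.0):
    n = sum(1 for d in voxel_doses_gye if d >= threshold_gye)
    return n * voxel_volume_ml

# Iris-ciliary body voxels of 0.02 mL each; 8 of the 12 receive >= 50 GyE,
# so V50 = 0.16 mL, above the 0.127 mL risk cut-off reported in the study.
doses = [55, 52, 61, 50, 58, 49, 45, 51, 57, 53, 30, 20]
v50_ic = v_dose(doses, 0.02)
```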

  14. Automated analysis of high-throughput B-cell sequencing data reveals a high frequency of novel immunoglobulin V gene segment alleles.

    PubMed

    Gadala-Maria, Daniel; Yaari, Gur; Uduman, Mohamed; Kleinstein, Steven H

    2015-02-24

    Individual variation in germline and expressed B-cell immunoglobulin (Ig) repertoires has been associated with aging, disease susceptibility, and differential response to infection and vaccination. Repertoire properties can now be studied at large-scale through next-generation sequencing of rearranged Ig genes. Accurate analysis of these repertoire-sequencing (Rep-Seq) data requires identifying the germline variable (V), diversity (D), and joining (J) gene segments used by each Ig sequence. Current V(D)J assignment methods work by aligning sequences to a database of known germline V(D)J segment alleles. However, existing databases are likely to be incomplete and novel polymorphisms are hard to differentiate from the frequent occurrence of somatic hypermutations in Ig sequences. Here we develop a Tool for Ig Genotype Elucidation via Rep-Seq (TIgGER). TIgGER analyzes mutation patterns in Rep-Seq data to identify novel V segment alleles, and also constructs a personalized germline database containing the specific set of alleles carried by a subject. This information is then used to improve the initial V segment assignments from existing tools, like IMGT/HighV-QUEST. The application of TIgGER to Rep-Seq data from seven subjects identified 11 novel V segment alleles, including at least one in every subject examined. These novel alleles constituted 13% of the total number of unique alleles in these subjects, and impacted 3% of V(D)J segment assignments. These results reinforce the highly polymorphic nature of human Ig V genes, and suggest that many novel alleles remain to be discovered. The integration of TIgGER into Rep-Seq processing pipelines will increase the accuracy of V segment assignments, thus improving B-cell repertoire analyses. PMID:25675496
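    The core idea, flagging positions where many sequences share the same deviation from the database allele as candidate germline polymorphisms rather than somatic hypermutations, can be caricatured in a few lines; the sequences and the 0.8 decision fraction below are invented, and the real tool's mutation-pattern statistics are considerably more involved:

```python
# Toy novel-allele detector: a position where most reads carry the SAME
# substitution relative to the database germline suggests a polymorphism
# (somatic hypermutation would scatter across positions and bases).

def candidate_polymorphisms(germline, sequences, min_fraction=0.8):
    """Return (position, base) pairs where >= min_fraction of sequences
    disagree with the germline in the same way."""
    hits = []
    for i, g in enumerate(germline):
        bases = [s[i] for s in sequences if s[i] != g]
        if bases and len(bases) / len(sequences) >= min_fraction:
            if all(b == bases[0] for b in bases):
                hits.append((i, bases[0]))
    return hits

germline = "ACGTAC"
reads = ["ACGTGC", "ACGTGC", "ACATGC", "ACGTGC", "ACGTGC"]
novel = candidate_polymorphisms(germline, reads)  # A->G shared at position 4
```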

  15. Analysis of Volume Testing of the AccuVote TSx / AccuView

    E-print Network

    Wagner, David

    Analysis of Volume Testing of the AccuVote TSx / AccuView. Matt Bishop, Loretta Guarino, David Jefferson, David Wagner; Voting Systems Technology Assessment Advisory Board, with assistance from… The report analyzes the results of the recent volume testing conducted in Stockton on July 20, 2005 of 96 Diebold Accu

  16. A STANDARD PROCEDURE FOR COST ANALYSIS OF POLLUTION CONTROL OPERATIONS. VOLUME II. APPENDICES

    EPA Science Inventory

    Volume I is a user guide for a standard procedure for the engineering cost analysis of pollution abatement operations and processes. The procedure applies to projects in various economic sectors: private, regulated, and public. Volume II, the bulk of the document, contains 11 app...

  17. Structural stability for the p(x)-laplacian: generalities on FV schemes and Discrete Duality, the classical co-volume scheme, the co-volume scheme on the Donald mesh, and formulation and analysis of Discrete Duality schemes

    E-print Network

    Jeanjean, Louis

    Finite Volume schemes. Part I. Co-volume schemes for the p(x)-laplacian. B. Andreianov, based on joint works. Topics include: generalities on FV schemes and Discrete Duality; the classical co-volume scheme; the co-volume scheme on the Donald mesh; formulation and analysis of Discrete Duality Finite Volume schemes; and gradient approximation on diamonds.

  18. The ACODEA Framework: Developing Segmentation and Classification Schemes for Fully Automatic Analysis of Online Discussions

    ERIC Educational Resources Information Center

    Mu, Jin; Stegmann, Karsten; Mayfield, Elijah; Rose, Carolyn; Fischer, Frank

    2012-01-01

    Research related to online discussions frequently faces the problem of analyzing huge corpora. Natural Language Processing (NLP) technologies may allow automating this analysis. However, the state-of-the-art in machine learning and text mining approaches yields models that do not transfer well between corpora related to different topics. Also,…

  19. Corpus Callosum Area and Brain Volume in Autism Spectrum Disorder: Quantitative Analysis of Structural MRI from the ABIDE Database

    ERIC Educational Resources Information Center

    Kucharsky Hiess, R.; Alter, R.; Sojoudi, S.; Ardekani, B. A.; Kuzniecky, R.; Pardoe, H. R.

    2015-01-01

    Reduced corpus callosum area and increased brain volume are two commonly reported findings in autism spectrum disorder (ASD). We investigated these two correlates in ASD and healthy controls using T1-weighted MRI scans from the Autism Brain Imaging Data Exchange (ABIDE). Automated methods were used to segment the corpus callosum and intracranial…

  20. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. PAMI-5, NO. 1, JANUARY 1983 Fig. 9. Segmentation for F

    E-print Network

    Pal, Sankar Kumar

    in the brightness function correspond to the perceptual differences. It is to be expected that regions of the image. Rosenfeld and L. S. Davis, "Image segmentation and image model," Proc. IEEE, pp. 764-772, May 1979. [3] G. B. Coleman and H. C. Andrews, "Image segmentation by cluster- ing," Proc. IEEE, vol. 67, pp. 773-785, May

  1. Patterns of magma flow in segmented silicic dikes at Summer Coon volcano, Colorado: AMS and thin section analysis

    E-print Network

    ; magma flow 1. Introduction Dike intrusion is considered a common form of magmatism at composite volcanoes. Patterns of magma flow in segmented silicic dikes at Summer Coon volcano, Colorado: AMS and thin section analysis ... and away from the center of the volcano. Segments that are proximal to the central intrusion

  2. Earth Observatory Satellite system definition study. Report 5: System design and specifications. Volume 4: Mission peculiar spacecraft segment and module specifications

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The specifications for the Earth Observatory Satellite (EOS) peculiar spacecraft segment and associated subsystems and modules are presented. The specifications considered include the following: (1) wideband communications subsystem module, (2) mission peculiar software, (3) hydrazine propulsion subsystem module, (4) solar array assembly, and (5) the scanning spectral radiometer.

  3. A Software Tool for Volume Registration and Atlas-Based Segmentation of Human Fat-Water MRI Data in Longitudinal Studies. A. A. Joshi

    E-print Network

    Southern California, University of

    of Southern California, Los Angeles, CA. INTRODUCTION: Obesity remains a worldwide epidemic in children ... measures against obesity. The monitoring of changes in adiposity and organ fat during the course ... be beneficial to obesity researchers. Previous works have focused primarily on the segmentation and registration

  4. Comparative Analysis of Modified Laparoscopic Swenson and Laparoscopic Soave Procedure for Short-Segment Hirschsprung Disease in Children.

    PubMed

    Deng, Xiaogeng; Wu, Yaohao; Zeng, LeXiang; Zhang, Jie; Zhou, Jiajia; Qiu, Ronglin

    2015-10-01

    Introduction: This clinical analysis compared the characteristics and outcomes of modified laparoscopic Swenson (MLSw) and laparoscopic Soave (LS) procedures for short-segment Hirschsprung disease (HD) in children. Patients and Methods: This clinical analysis involved a retrospective series of 42 pediatric patients with HD who underwent surgery from March 2007 to July 2012. Patients were divided into two groups: the LS group (n = 15) and the MLSw group (n = 27). Preoperative, operative, and postoperative data were collected, through patient follow-up periods ranging from 12 to 48 months, to compare perioperative/operative characteristics, postoperative complications, and outcomes between the two groups. Major measurements were analyzed statistically. Results: On average, the patients in the LS group had a longer operating time (mean ± standard deviation, 199 ± 60 minutes) than those in the MLSw group (148 ± 23 minutes) (p < 0.05). The MLSw group was discharged after a shorter hospitalization time (8 ± 2 days) than the LS group (12 ± 4 days) (p < 0.05). ... segment HD. The early postoperative outcome was much better in the MLSw group than in the LS group, but long-term outcomes were similar. However, the MLSw procedure was simpler, resulting in reduced operating time and less intraoperative blood loss. PMID:25111270

  5. Structural Analysis and Testing of an Erectable Truss for Precision Segmented Reflector Application

    NASA Technical Reports Server (NTRS)

    Collins, Timothy J.; Fichter, W. B.; Adams, Richard R.; Javeed, Mehzad

    1995-01-01

    This paper describes analysis and test results obtained at Langley Research Center (LaRC) on a doubly curved testbed support truss for precision reflector applications. Descriptions of test procedures and experimental results that expand upon previous investigations are presented. A brief description of the truss is given, and finite-element-analysis models are described. Static-load and vibration test procedures are discussed, and experimental results are shown to be repeatable and in generally good agreement with linear finite-element predictions. Truss structural performance (as determined by static deflection and vibration testing) is shown to be predictable and very close to linear. Vibration test results presented herein confirm that an anomalous mode observed during initial testing was due to the flexibility of the truss support system. Photogrammetric surveys with two 131-in. reference scales show that the root-mean-square (rms) truss-surface accuracy is about 0.0025 in. Photogrammetric measurements also indicate that the truss coefficient of thermal expansion (CTE) is in good agreement with that predicted by analysis. A detailed description of the photogrammetric procedures is included as an appendix.

  6. Design and experimental gait analysis of a multi-segment in-pipe robot inspired by earthworm's peristaltic locomotion

    NASA Astrophysics Data System (ADS)

    Fang, Hongbin; Wang, Chenghao; Li, Suyi; Xu, Jian; Wang, K. W.

    2014-03-01

    This paper reports the experimental progress towards developing a multi-segment in-pipe robot inspired by earthworm's body structure and locomotion mechanism. To mimic the alternating contraction and elongation of a single earthworm's segment, a robust, servomotor based actuation mechanism is developed. In each robot segment, servomotor-driven cords and spring steel belts are utilized to imitate the earthworm's longitudinal and circular muscles, respectively. It is shown that the designed segment can contract and relax just like an earthworm's body segment. The axial and radial deformation of a single segment is measured experimentally, which agrees with the theoretical predictions. Then a multisegment earthworm-like robot is fabricated by assembling eight identical segments in series. The locomotion performance of this robot prototype is then extensively tested in order to investigate the correlation between gait design and dynamic locomotion characteristics. Based on the principle of retrograde peristalsis wave, a gait generator is developed for the multi-segment earthworm-like robot, following which gaits of the robot can be constructed. Employing the generated gaits, the 8-segment earthworm-like robot can successfully perform both horizontal locomotion and vertical climb in pipes. By changing gait parameters, i.e., with different gaits, locomotion characteristics including average speed and anchor slippage can be significantly tailored. The proposed actuation method and prototype of the multi-segment in-pipe robot as well as the gait generator provide a bionic realization of earthworm's locomotion with promising potentials in various applications such as pipeline inspection and cleaning.

  7. A new method of cardiographic image segmentation based on grammar

    NASA Astrophysics Data System (ADS)

    Hamdi, Salah; Ben Abdallah, Asma; Bedoui, Mohamed H.; Alimi, Adel M.

    2011-10-01

    The measurement of the most common ultrasound parameters, such as aortic area, mitral area and left ventricle (LV) volume, requires the delineation of the organ in order to estimate the area. In terms of medical image processing this translates into the need to segment the image and define the contours as accurately as possible. The aim of this work is to segment an image and make an automated area estimation based on grammar. The entity "language" is projected onto the entity "image" to perform structural analysis and parsing of the image. We will show how the idea of segmentation and grammar-based area estimation is applied to real problems of cardiographic image processing.
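
    The delineation-then-area step can be sketched generically: given a closed contour as a list of polygon vertices, the shoelace formula estimates the enclosed area. This is a minimal illustration of the area-estimation step only (with made-up vertices), not the paper's grammar-based method:

```python
def polygon_area(points):
    """Area of a simple closed polygon given as [(x, y), ...] vertices,
    computed with the shoelace formula."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the contour
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# A unit square has area 1.
print(polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # -> 1.0
```

    In practice the vertices would come from the segmented contour; scaling by the pixel spacing converts the result to physical units.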

  8. Study of the Utah uranium milling industry. Volume I. A policy analysis

    SciTech Connect

    Turley, R.E.

    1981-01-01

    Volume I is an analysis of the major problems raised by milling operators - primarily the issue of whether the federal government or the state should be responsible for the perpetual surveillance, monitoring, and maintenance of uranium tailings. (DMC)

  9. Computer Assisted Data Analysis in the Dye Dilution Technique for Plasma Volume Measurement.

    ERIC Educational Resources Information Center

    Bishop, Marvin; Robinson, Gerald D.

    1981-01-01

    Describes a method for undergraduate physiology students to measure plasma volume by the dye dilution technique, in which a computer is used to interpret data. Includes the computer program for the data analysis. (CS)
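
    The dye dilution computation lends itself to a short sketch: assuming first-order dye clearance, log concentration is fitted linearly against time, extrapolated back to the injection time t = 0, and plasma volume is the injected dose divided by the extrapolated concentration. The dose, sampling times, and concentrations below are invented for illustration, not data from the article:

```python
import numpy as np

def plasma_volume_ml(dose_mg, times_min, conc_mg_per_ml):
    """Dye-dilution estimate: fit ln C(t) vs t by least squares,
    extrapolate to t = 0, then V = dose / C(0)."""
    slope, intercept = np.polyfit(times_min, np.log(conc_mg_per_ml), 1)
    c0 = np.exp(intercept)          # concentration at injection time
    return dose_mg / c0

# Illustrative decay samples: C(t) = 0.010 * exp(-0.05 t) mg/ml
times = np.array([2.0, 4.0, 6.0, 8.0])
conc = 0.010 * np.exp(-0.05 * times)
print(round(plasma_volume_ml(25.0, times, conc)))  # -> 2500
```

    The log-linear fit is the part a computer automates for students: it removes the need to extrapolate the decay curve by eye.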

  10. Learning static object segmentation from motion segmentation

    E-print Network

    Ross, Michael G. (Michael Gregory), 1975-

    2005-01-01

    This thesis describes the SANE (Segmentation According to Natural Examples) algorithm for learning to segment objects in static images from video data. SANE uses background subtraction to find the segmentation of moving ...

  11. Segmentation and Tracking of Multiple Moving Objects for Intelligent Video Analysis

    NASA Astrophysics Data System (ADS)

    Xu, L.-Q.; Landabaso, J. L.; Lei, B.

    In recent years, there has been considerable interest in visual surveillance of a wide range of indoor and outdoor sites by various parties. This is manifested by the widespread and unabated deployment of CCTV cameras in public and private areas. In particular, the increasing connectivity of broadband wired and wireless IP networks, and the emergence of IP-CCTV systems with smart sensors, enabling centralised or distributed remote monitoring, have further fuelled this trend. It is not uncommon nowadays to see a bank of displays in an organisation showing the activities of dozens of surveillance sites simultaneously. However, the limitations and deficiencies, together with the costs associated with human operators in monitoring the overwhelming video sources, have created urgent demands for automated video analysis solutions. Indeed, the ability of a system to automatically analyse and interpret visual scenes is of increasing importance to decision making, offering enormous business opportunities in the sector of information and communications technologies.

  12. Analysis of the structural behaviour of colonic segments by inflation tests: Experimental activity and physio-mechanical model.

    PubMed

    Carniel, Emanuele L; Mencattelli, Margherita; Bonsignori, Gabriella; Fontanella, Chiara G; Frigo, Alessandro; Rubini, Alessandro; Stefanini, Cesare; Natali, Arturo N

    2015-11-01

    A coupled experimental and computational approach is provided for the identification of the structural behaviour of gastrointestinal regions, accounting for both elastic and visco-elastic properties. The developed procedure is applied to characterize the mechanics of gastrointestinal samples from pig colons. Experimental data about the structural behaviour of colonic segments are provided by inflation tests. Different inflation processes are performed according to progressively increasing top pressure conditions. Each inflation test consists of an air in-flow, according to an almost constant increasing pressure rate, such as 3.5 mmHg/s, up to a prescribed top pressure, which is held constant for about 300 s to allow the development of creep phenomena. Different tests are separated by 600 s of rest to allow the recovery of the tissues' mechanical condition. Data from structural tests are post-processed by a physio-mechanical model in order to identify the mechanical parameters that interpret both the non-linear elastic behaviour of the sample, as the instantaneous pressure-stretch trend, and the time-dependent response, as the stretch increase during the creep processes. The parameters are identified by minimizing the discrepancy between experimental and model results. Different sets of parameters are evaluated for different specimens from different pigs. A statistical analysis is performed to evaluate the distribution of the parameters and to assess the reliability of the experimental and computational activities. PMID:26396226

  13. Capillary electrophoresis study on segment/segment system for segments based on phase of mixed micelles and its role in transport of particles between the two segments.

    PubMed

    Oszwałdowski, Sławomir; Kubáň, Pavel

    2015-09-18

    Capillary electrophoresis coupled with a contactless conductivity detector was applied to characterize BGE/segment/segment/BGE and BGE/segment/electrolyte/segment/BGE systems, where a segment is a phase of mixed micelles migrating surrounded by BGE and the composition of the first segment ≠ second segment. It was established that both systems are subject to evolution during the electrophoretic run, induced by the different electrophoretic mobilities of the segments, and the phenomenon that generates the evolution is the exchange of micelles between the two segments. This leads to segment re-equilibration during a run, which generates sub-zones from the two segments in the form of a cumulative zone or two isolated zones, depending on the injection scheme applied. Further analysis based on the system BGE/segment/electrolyte/segment/BGE shows that the electrolyte solution between segments can act as a spacer to isolate the two micellar segments, and thereby to control the exchange of micelles between the two segments. Established features for both systems were further applied to characterize the transport of nanocrystals (NCs) between two segments using the CE/UV-vis technique, and two examples were discussed: (i) on-line coating of NCs with surfactants and (ii) distribution of NCs between segments. The former aspect was found to be useful for discussing the state of the particle in micellar media, whereas the latter shows the system's ability to transport NCs from the first segment or a BGE-based sample to the second segment, controlled by the electrolyte characteristics. It was concluded that the transport of micelles and NCs is subject to the same phenomena, since basic electrolyte characteristics, i.e. length and concentration, act in the same way. This means that NCs in these systems can play the role of pseudomicelles, which mimic the behaviour of micelles.
Definitely, the tools established in the present work can be used to examine dynamic phenomena for pseudophase during electrophoresis and for NCs migrating in the presence of pseudophase in various configurations. PMID:26296987

  14. Automated segmentation of serous pigment epithelium detachment in SD-OCT images

    NASA Astrophysics Data System (ADS)

    Sun, Zhuli; Shi, Fei; Xiang, Dehui; Chen, Haoyu; Chen, Xinjian

    2015-03-01

    Pigment epithelium detachment (PED) is an important clinical manifestation of multiple chorio-retinal disease processes, which can cause the loss of central vision. A 3-D method is proposed to automatically segment serous PED in SD-OCT images. The proposed method consists of five steps: First, a curvature anisotropic diffusion filter is applied to remove speckle noise. Second, the graph search method is applied for abnormal retinal layer segmentation associated with retinal pigment epithelium (RPE) deformation. During this process, Bruch's membrane, which is not visible in the SD-OCT images, is estimated with the convex hull algorithm. Third, the foreground and background seeds are automatically obtained from the retinal layer segmentation result. Fourth, the serous PED is segmented based on the graph cut method. Finally, a post-processing step is applied to remove false positive regions based on mathematical morphology. The proposed method was tested on 20 SD-OCT volumes from 20 patients diagnosed with serous PED. The average true positive volume fraction (TPVF), false positive volume fraction (FPVF), Dice similarity coefficient (DSC) and positive predictive value (PPV) are 97.19%, 0.03%, 96.34% and 95.59%, respectively. Linear regression analysis shows a strong correlation (r = 0.975) comparing the segmented PED volumes with the ground truth labeled by an ophthalmology expert. The proposed method can provide clinicians with accurate quantitative information, including shape, size and position of the PED regions, which can assist diagnosis and treatment.
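
    The reported TPVF, FPVF and Dice similarity coefficient are standard overlap measures on binary volumes. A minimal sketch of their computation on toy arrays (not the authors' code; the tiny 2-D masks stand in for segmented OCT volumes):

```python
import numpy as np

def overlap_metrics(seg, ref):
    """TPVF, FPVF and Dice for a binary segmentation vs. a reference mask."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    tp = np.logical_and(seg, ref).sum()      # voxels correctly labeled
    fp = np.logical_and(seg, ~ref).sum()     # voxels wrongly labeled
    tpvf = tp / ref.sum()                    # fraction of reference recovered
    fpvf = fp / (~ref).sum()                 # false positives vs. background
    dsc = 2 * tp / (seg.sum() + ref.sum())   # Dice similarity coefficient
    return tpvf, fpvf, dsc

ref = np.zeros((4, 4), dtype=bool); ref[1:3, 1:3] = True  # 4 true voxels
seg = ref.copy(); seg[1, 1] = False; seg[0, 0] = True     # miss one, add one
tpvf, fpvf, dsc = overlap_metrics(seg, ref)
# tp = 3, fp = 1 -> tpvf = 0.75, fpvf = 1/12, dsc = 0.75
```

    The same functions apply unchanged to 3-D volumes, since the counts are taken over flattened boolean arrays.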

  15. Estimation of the Click Volume by Large Scale Regression Analysis

    E-print Network

    Lifshits, Yury

    an advertising engine (AE) (1) maintains a database of advertisements, (2) receives ad requests ... for advertising engines. We propose a model of computing an estimation of the click volume. A key component of our ... for sponsored search. Google AdSense is an example of an AE for contextual advertisements. Finally, the Amazon

  16. Earth Observatory Satellite system definition study. Report 5: System design and specifications. Volume 3: General purpose spacecraft segment and module specifications

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The specifications for the Earth Observatory Satellite (EOS) general purpose spacecraft segment are presented. The satellite is designed to provide attitude stabilization, electrical power, and a communications data handling subsystem which can support various mission peculiar subsystems. The various specifications considered include the following: (1) structures subsystem, (2) thermal control subsystem, (3) communications and data handling subsystem module, (4) attitude control subsystem module, (5) power subsystem module, and (6) electrical integration subsystem.

  17. Comparative analysis of geodynamic activity of the Caucasian and Eastern Mediterranean segments of the Alpine-Himalayan convergence zone

    NASA Astrophysics Data System (ADS)

    Chelidze, Tamaz; Eppelbaum, Lev

    2013-04-01

    The Alpine-Himalayan convergence zone (AHCZ) underwent recent transverse shortening under the effect of collisional compression. The process was accompanied by rotation of separate microplates. The Caucasian and Eastern Mediterranean regions are segments of the AHCZ and are characterized by intensive endogenous and exogenous geodynamic processes, which manifest themselves in the occurrence of powerful (magnitude 8-9) earthquakes accompanied by the development of secondary catastrophic processes. Large landslides, rock falls, avalanches, mud flows, etc. cause human deaths and great material losses. The development of the aforesaid endogenous processes is governed by peculiarities of the deep structure of the region and by deep geological processes. The Caucasus is divided into several main tectonic terranes: platform (sub-platform, quasi-platform) and fold-thrust units. Existing data enable a division of the Caucasian region into two large-scale geological provinces: southern Tethyan and northern Tethyan, located to the south of and to the north of the Lesser Caucasian ophiolite suture, respectively. Recent investigations show that the seismic hazard assessments in these regions are not entirely correct; for example, in the West Caucasus the seismic hazard can be significantly underestimated, which affects the corresponding risk assessments. Integrated analysis of gravity, magnetic, seismic and thermal data enables refinement of the seismic hazard assessment of the region, taking into account real rates of the geodynamic movements. The latest rheological models also play an important role. According to the tectonic scheme of Reilinger et al. (2006), the western flank of the Arabian Plate manifests strike-slip motion, while the East Caucasian block is converging and shortening. The Eastern Mediterranean is a tectonically complex region located in the midst of the progressive Afro-Eurasian collision.
The recent increasing geotectonic activity in this region highlights the need for combined analysis of seismo-neotectonic signatures. For this purpose, this article presents the key features of the tectonic zonation of the Eastern Mediterranean. A map of derivatives of the gravity field retracked from the Geosat satellite and a novel map of the Moho discontinuity illustrate the most important tectonic features of the region. The Post-Jurassic map of the deformation of surface leveling reflects the modern tectonic stage of Eastern Mediterranean evolution. The developed tectono-geophysical zonation map integrates potential geophysical field analysis and seismic section utilization, as well as tectonic-structural, paleogeographical and facial analyses. Tectonically the map agrees with the earlier model of continental accretion (Ben-Avraham and Ginzburg, 1990). Overlaying the seismicity map of the Eastern Mediterranean tectonic region (for the period between 1900 and 2012) on the tectonic zonation chart reveals the key features of the seismo-neotectonic pattern of the Eastern Mediterranean. The results have important implications for tectonic-seismological analysis in this region (Eppelbaum and Katz, 2012). The difference in geotectonic patterns makes a comparison of the geodynamic activity and seismic hazard of the Caucasian and Eastern Mediterranean segments of the AHCZ instructive.

  18. Adaptive Breast Radiation Therapy Using Modeling of Tissue Mechanics: A Breast Tissue Segmentation Study

    SciTech Connect

    Juneja, Prabhjot; Harris, Emma J.; Kirby, Anna M.; Evans, Philip M.

    2012-11-01

    Purpose: To validate and compare the accuracy of breast tissue segmentation methods applied to computed tomography (CT) scans used for radiation therapy planning and to study the effect of tissue distribution on the segmentation accuracy for the purpose of developing models for use in adaptive breast radiation therapy. Methods and Materials: Twenty-four patients receiving postlumpectomy radiation therapy for breast cancer underwent CT imaging in prone and supine positions. The whole-breast clinical target volume was outlined. Clinical target volumes were segmented into fibroglandular and fatty tissue using the following algorithms: physical density thresholding; interactive thresholding; fuzzy c-means with 3 classes (FCM3) and 4 classes (FCM4); and k-means. The segmentation algorithms were evaluated in 2 stages: first, an approach based on the assumption that the breast composition should be the same in both prone and supine positions; and second, comparison of segmentation with tissue outlines from 3 experts using the Dice similarity coefficient (DSC). Breast datasets were grouped into nonsparse and sparse fibroglandular tissue distributions according to expert assessment and used to assess the accuracy of the segmentation methods and the agreement between experts. Results: Prone and supine breast composition analysis showed differences between the methods. Validation against expert outlines found significant differences (P<.001) between FCM3 and FCM4. Fuzzy c-means with 3 classes generated segmentation results (mean DSC = 0.70) closest to the experts' outlines. There was good agreement (mean DSC = 0.85) among experts for breast tissue outlining. Segmentation accuracy and expert agreement were significantly higher (P<.005) in the nonsparse group than in the sparse group. Conclusions: The FCM3 gave the most accurate segmentation of breast tissues on CT data and could therefore be used in adaptive radiation therapy based on tissue modeling.
Breast tissue segmentation methods should be used with caution in patients with sparse fibroglandular tissue distribution.
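
    As a rough illustration of the fuzzy c-means technique evaluated above, here is a generic textbook FCM on 1-D intensity samples; it is not the study's implementation, and the data, parameter values, and function names are invented for the sketch:

```python
import numpy as np

def fcm(x, k, m=2.0, iters=100, seed=0):
    """Fuzzy c-means on 1-D samples x: returns cluster centers and a
    (k, n) membership matrix whose columns sum to 1."""
    rng = np.random.default_rng(seed)
    u = rng.random((k, x.size))
    u /= u.sum(axis=0)                                     # normalize memberships
    for _ in range(iters):
        um = u ** m                                        # fuzzified memberships
        centers = um @ x / um.sum(axis=1)                  # weighted means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12  # avoid divide-by-zero
        u = d ** (-2.0 / (m - 1.0))                        # standard FCM update
        u /= u.sum(axis=0)
    return centers, u

# Two well-separated intensity groups -> centers near 0.1 and 10.0
x = np.array([0.0, 0.1, 0.2, 9.9, 10.0, 10.1])
centers, u = fcm(x, k=2)
print(np.sort(centers))
```

    With k = 3 on CT intensities, the per-voxel memberships play the role of the soft fat/fibroglandular assignment described above; hard labels follow by taking the class of maximum membership.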

  19. Segmentation of urinary bladder in CT urography (CTU) using CLASS with enhanced contour conjoint procedure

    NASA Astrophysics Data System (ADS)

    Cha, Kenny; Hadjiiski, Lubomir; Chan, Heang-Ping; Cohan, Richard H.; Caoili, Elaine M.; Zhou, Chuan

    2014-03-01

    We are developing a computerized method for bladder segmentation in CT urography (CTU) for computer-aided diagnosis of bladder cancer. A challenge for computerized bladder segmentation in CTU is that the bladder often contains regions filled with intravenous (IV) contrast and without contrast. Previously, we proposed a Conjoint Level set Analysis and Segmentation System (CLASS) consisting of four stages: preprocessing and initial segmentation, 3D and 2D level set segmentation, and post-processing. When the bladder is partially filled with contrast, CLASS segments the non-contrast (NC) region and the contrast (C) filled region separately and conjoins the contours with a Contour Conjoint Procedure (CCP). The CCP is not trivial. Inaccuracies in the NC and C contours may cause CCP to exclude portions of the bladder. To alleviate this problem, we implemented model-guided refinement to propagate the C contour if the level set propagation in the region stops prematurely due to substantial non-uniformity of the contrast. An enhanced CCP with regularized energies further propagates the conjoint contours to the correct bladder boundary. Segmentation performance was evaluated using 70 cases. For all cases, 3D hand-segmented contours were obtained as the reference standard, and computerized segmentation accuracy was evaluated in terms of average volume intersection %, average % volume error, and average minimum distance. With enhanced CCP, those values were 84.4±10.6%, 8.3±16.1%, 3.4±1.8 mm, respectively. With CLASS, those values were 74.6±13.1%, 19.6±18.6%, 4.4±2.2 mm, respectively. The enhanced CCP improved bladder segmentation significantly (p<0.001) for all three performance measures.

  20. Metric Learning to Enhance Hyperspectral Image Segmentation

    NASA Technical Reports Server (NTRS)

    Thompson, David R.; Castano, Rebecca; Bue, Brian; Gilmore, Martha S.

    2013-01-01

    Unsupervised hyperspectral image segmentation can reveal spatial trends that show the physical structure of the scene to an analyst. They highlight borders and reveal areas of homogeneity and change. Segmentations are independently helpful for object recognition, and assist with automated production of symbolic maps. Additionally, a good segmentation can dramatically reduce the number of effective spectra in an image, enabling analyses that would otherwise be computationally prohibitive. Specifically, using an over-segmentation of the image instead of individual pixels can reduce noise and potentially improve the results of statistical post-analysis. In this innovation, a metric learning approach is presented to improve the performance of unsupervised hyperspectral image segmentation. The prototype demonstrations attempt a superpixel segmentation in which the image is conservatively over-segmented; that is, the single surface features may be split into multiple segments, but each individual segment, or superpixel, is ensured to have homogenous mineralogy.

  1. Analysis of the Wnt gene repertoire in an onychophoran provides new insights into the evolution of segmentation

    PubMed Central

    2014-01-01

    Background: The Onychophora are a probable sister group to Arthropoda, one of the most intensively studied animal phyla from a developmental perspective. Pioneering work on the fruit fly Drosophila melanogaster and subsequent investigation of other arthropods has revealed important roles for Wnt genes during many developmental processes in these animals. Results: We screened the embryonic transcriptome of the onychophoran Euperipatoides kanangrensis and found that at least 11 Wnt genes are expressed during embryogenesis. These genes represent 11 of the 13 known subfamilies of Wnt genes. Conclusions: Many onychophoran Wnt genes are expressed in segment polarity gene-like patterns, suggesting a general role for these ligands during segment regionalization, as has been described in arthropods. During early stages of development, Wnt2, Wnt4, and Wnt5 are expressed in broad multiple segment-wide domains that are reminiscent of arthropod gap and Hox gene expression patterns, which suggests an early instructive role for Wnt genes during E. kanangrensis segmentation. PMID:24708787

  2. Compatibility of segmented thermoelectric generators

    NASA Technical Reports Server (NTRS)

    Snyder, J.; Ursell, T.

    2002-01-01

    It is well known that power generation efficiency improves when materials with appropriate properties are combined either in a cascaded or segmented fashion across a temperature gradient. Past methods for determining materials used in segmentation were mainly concerned with materials that have the highest figure of merit in the temperature range. However, the example of SiGe segmented with Bi2Te3 and/or various skutterudites shows a marked decline in device efficiency even though SiGe has the highest figure of merit in the temperature range. The origin of the incompatibility of SiGe with other thermoelectric materials leads to a general definition of compatibility and intrinsic efficiency. The compatibility factor, derived as s = (√(1 + zT) - 1)/(αT), is a function of only intrinsic material properties and temperature, and is represented by a ratio of current to conduction heat. For maximum efficiency the compatibility factor should not change with temperature, both within a single material and in the segmented leg as a whole. This leads to a measure of compatibility not only between segments, but also within a segment. General temperature trends show that materials are more self-compatible at higher temperatures, and segmentation is more difficult across a larger ΔT. The compatibility factor can be used as a quantitative guide for deciding whether a material is better suited for segmentation or cascading. Analysis of compatibility factors and intrinsic efficiency for optimal segmentation are discussed, with intent to predict optimal material properties, temperature interfaces, and/or current-heat ratios.
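
    The compatibility factor lends itself to direct computation. A minimal sketch; the zT = 1 and Seebeck coefficient of 200 µV/K at 300 K are illustrative round numbers, not values from the paper:

```python
from math import sqrt

def compatibility_factor(z_per_K, T_K, seebeck_V_per_K):
    """Thermoelectric compatibility factor s = (sqrt(1 + zT) - 1) / (alpha * T),
    in units of 1/V."""
    return (sqrt(1.0 + z_per_K * T_K) - 1.0) / (seebeck_V_per_K * T_K)

# Illustrative material: zT = 1 at T = 300 K, alpha = 200 uV/K
s = compatibility_factor(z_per_K=1.0 / 300.0, T_K=300.0, seebeck_V_per_K=200e-6)
print(round(s, 2))  # -> 6.9
```

    Two candidate segments are considered compatible when their s values at the interface temperature are similar, which is the quantitative test the abstract describes.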

  3. A dynamic finite element surface model for segmentation and tracking in multidimensional medical images with application to cardiac 4D image analysis.

    PubMed

    McInerney, T; Terzopoulos, D

    1995-01-01

    This paper presents a physics-based approach to anatomical surface segmentation, reconstruction, and tracking in multidimensional medical images. The approach makes use of a dynamic "balloon" model, a spherical thin-plate under tension surface spline that deforms elastically to fit the image data. The fitting process is mediated by internal forces stemming from the elastic properties of the spline and external forces which are produced from the data. The forces interact in accordance with Lagrangian equations of motion that adjust the model's deformational degrees of freedom to fit the data. We employ the finite element method to represent the continuous surface in the form of weighted sums of local polynomial basis functions. We use a quintic triangular finite element whose nodal variables include positions as well as the first and second partial derivatives of the surface. We describe a system, implemented on a high performance graphics workstation, which applies the model fitting technique to the segmentation of the cardiac LV surface in volume (3D) CT images and LV tracking in dynamic volume (4D) CT images to estimate its nonrigid motion over the cardiac cycle. The system features a graphical user interface which minimizes error by affording specialist users interactive control over the dynamic model fitting process. PMID:7736420

  4. Conferences on Orthodontics Advances in Science and Technology, Monterey, September 2002 (in 3D Visualization of the Craniofacial Patient: Volume Segmentation, Data

    E-print Network

    Southern California, University of

    Keywords: 3D visualization, volume rendering, CT, dentition models, jaw. A previous method in this area used spherical markers placed on the skeleton and dentition.

  5. Analysis of layered assays and volume microarrays in stratified media.

    PubMed

    Ghafari, Homanaz; Hanley, Quentin S

    2012-12-01

    Changing traditional microarray methods by using both sides of a substrate or stacking microarrays, combined with optical sectioning, enables the detection of more than one assay along the z-axis. Here we demonstrate two-sided substrates, multilayer arrays with up to 5 substrates, and 2- and 3-dimensional antigen microarrays. By replacing standard substrates with multiple 30 μm layers of glass or mica, high-density multilayer and 3-dimensional volume arrays were created within a stratified medium. Although a decrease in fluorescence intensity with increasing number of substrate layers was observed, together with a concomitant broadening of the axial resolution, quantitative results were obtained from this stratified system using calibrated intensities. Two- and three-dimensional antigen microarrays were generated via microcontact printing and detected as indirect immunoassays with quantum dot conjugated antibodies. Volume arrays were analysed by confocal laser scanning microscopy, producing clear patterns even when the assays overlapped spatially. PMID:22911003

  6. Industrial process heat data analysis and evaluation. Volume 1

    SciTech Connect

    Lewandowski, A; Gee, R; May, K

    1984-07-01

    The Solar Energy Research Institute (SERI) has modeled seven of the Department of Energy (DOE) sponsored solar Industrial Process Heat (IPH) field experiments and has generated thermal performance predictions for each project. Additionally, these performance predictions have been compared with actual performance measurements taken at the projects. Predictions were generated using SOLIPH, an hour-by-hour computer code with the capability for modeling many types of solar IPH components and system configurations. Comparisons of reported and predicted performance showed good agreement when field test reliability and availability were high. Volume I contains the main body of the work: objective, model description, site configurations, model results, data comparisons, and summary. Volume II contains complete performance prediction results (tabular and graphic output) and computer program listings.

  7. Example based lesion segmentation

    NASA Astrophysics Data System (ADS)

    Roy, Snehashis; He, Qing; Carass, Aaron; Jog, Amod; Cuzzocreo, Jennifer L.; Reich, Daniel S.; Prince, Jerry; Pham, Dzung

    2014-03-01

    Automatic and accurate detection of white matter lesions is a significant step toward understanding the progression of many diseases, like Alzheimer's disease or multiple sclerosis. Multi-modal MR images are often used to segment T2 white matter lesions that can represent regions of demyelination or ischemia. Some automated lesion segmentation methods describe the lesion intensities using generative models, and then classify the lesions with some combination of heuristics and cost minimization. In contrast, we propose a patch-based method, in which lesions are found using examples from an atlas containing multi-modal MR images and corresponding manual delineations of lesions. Patches from subject MR images are matched to patches from the atlas and lesion memberships are found based on patch similarity weights. We experiment on 43 subjects with MS, whose scans show various levels of lesion-load. We demonstrate significant improvement in Dice coefficient and total lesion volume compared to a state of the art model-based lesion segmentation method, indicating more accurate delineation of lesions.
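The patch-matching idea in the abstract can be sketched compactly: a subject patch is compared with atlas patches, similarity weights are computed, and the lesion membership is their weighted vote. The Gaussian similarity kernel and all numbers below are illustrative assumptions; the paper's exact weighting is not given in the abstract.

```python
import numpy as np

def lesion_membership(subject_patch, atlas_patches, atlas_labels, sigma=1.0):
    """Patch-based lesion membership (sketch; a Gaussian similarity
    kernel over patch distances is assumed).

    subject_patch : (d,) flattened intensity patch from the subject image
    atlas_patches : (n, d) flattened patches from the atlas images
    atlas_labels  : (n,) manual lesion label (0/1) at each atlas patch centre
    Returns a membership value in [0, 1].
    """
    d2 = np.sum((atlas_patches - subject_patch) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))      # patch similarity weights
    return float(np.sum(w * atlas_labels) / np.sum(w))

# Toy example: the subject patch closely matches the lesion-labelled atlas patch.
atlas = np.array([[0.9, 0.9, 0.9],   # lesion-like patch
                  [0.1, 0.1, 0.1]])  # healthy-tissue patch
labels = np.array([1.0, 0.0])
m = lesion_membership(np.array([0.85, 0.9, 0.88]), atlas, labels, sigma=0.5)
print(m > 0.5)
```

In practice the atlas would hold many thousands of multi-modal patches and the search would be restricted to nearest neighbours, but the membership computation itself has this form.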

  8. Comparison of retinal thickness by Fourier-domain optical coherence tomography and OCT retinal image analysis software segmentation analysis derived from Stratus optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Tátrai, Erika; Ranganathan, Sudarshan; Ferencz, Mária; Debuc, Delia Cabrera; Somfai, Gábor Márk

    2011-05-01

    Purpose: To compare thickness measurements between Fourier-domain optical coherence tomography (FD-OCT) and time-domain OCT images analyzed with a custom-built OCT retinal image analysis software (OCTRIMA). Methods: Macular mapping (MM) by StratusOCT and the MM5 and MM6 scanning protocols of an RTVue-100 FD-OCT device are performed on 11 subjects with no retinal pathology. Retinal thickness (RT) and the thickness of the ganglion cell complex (GCC) obtained with the MM6 protocol are compared for each early treatment diabetic retinopathy study (ETDRS)-like region with corresponding results obtained with OCTRIMA. RT results are compared by analysis of variance with Dunnett post hoc test, while GCC results are compared by paired t-test. Results: A high correlation is obtained for the RT between OCTRIMA and the MM5 and MM6 protocols. In all regions, StratusOCT provides the lowest RT values (mean difference 43 +/- 8 μm compared to OCTRIMA, and 42 +/- 14 μm compared to RTVue MM6). All RTVue GCC measurements were significantly thicker (mean difference between 6 and 12 μm) than the GCC measurements of OCTRIMA. Conclusion: High correspondence is obtained not only for RT but also for the segmentation of intraretinal layers between FD-OCT and StratusOCT-derived OCTRIMA analysis. However, a correction factor is required to compensate for OCT-specific differences to make measurements more comparable across available OCT devices.

  9. Study of Alzheimer's Disease Progression In MR Brain Images based on Segmentation and Analysis of Ventricles using Modified DRLSE Method and Minkowski Functionals.

    PubMed

    Kayalvizhi, M; Kavitha, G; Sujatha, C M; Ramakrishnan, S

    2015-01-01

    In this work, the ventricles in MR brain images are segmented using an edge-based modified Distance Regularized Level Set Evolution (DRLSE) method, and the structural changes in the disease are further analysed using Minkowski functionals (MFs). Twenty normal and abnormal T1-weighted coronal mid-slice MR images are considered for the analysis. The MR brain image is pre-processed using a contrast enhancement method. The edge-based modified DRLSE with a new penalty term is used to segment the ventricles from the enhanced images. The results of the level set method are compared with the geodesic active contour method. The segmentation results are validated using ZSI (Zijdenbos Similarity Index) and F-score. The Minkowski functionals MF-area, MF-perimeter and MF-Euler number are calculated from the extracted ventricle region. The longitudinal analysis of ventricles is performed using these features. The results show that the DRLSE-based level set method is able to extract the ventricle edges with less discontinuity. The F-score and ZSI are higher for DRLSE (0.83 and 0.84) than for the geodesic method (0.79 and 0.80). The MF-area is able to discriminate the controls and the AD subjects with high statistical significance (p < 0.001). This analysis also shows that the MF-area increases with severity. These results could be used for the study of discrimination and progression of Alzheimer's disease-like disorders. PMID:25996736
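The three Minkowski functionals named above (MF-area, MF-perimeter, MF-Euler number) can be computed from a binary segmentation mask with simple pixel counting. This is a generic 2D sketch, not the paper's implementation: area as the pixel count, perimeter as the count of exposed pixel edges, and the Euler number via the standard 2x2 quad-counting formula for 4-connectivity.

```python
import numpy as np

def minkowski_2d(mask):
    """2D Minkowski functionals of a binary mask (pixel-based sketch):
    area (pixel count), perimeter (exposed pixel edges) and the Euler
    number via Gray's 2x2 quad-counting formula (4-connectivity)."""
    m = np.pad(np.asarray(mask, dtype=int), 1)   # zero border simplifies counting
    area = int(m.sum())
    # Perimeter: foreground faces adjacent to background, in the 4 directions.
    perim = int(np.sum(m & ~np.roll(m, 1, 0)) + np.sum(m & ~np.roll(m, -1, 0))
                + np.sum(m & ~np.roll(m, 1, 1)) + np.sum(m & ~np.roll(m, -1, 1)))
    # Euler number: chi = (Q1 - Q3 + 2*Qd) / 4 over all 2x2 neighbourhoods,
    # where Q1/Q3 count quads with one/three foreground pixels and Qd counts
    # diagonal quads; the sum is always divisible by 4.
    q = m[:-1, :-1] + m[:-1, 1:] + m[1:, :-1] + m[1:, 1:]
    diag = ((m[:-1, :-1] & m[1:, 1:] & ~m[:-1, 1:] & ~m[1:, :-1])
            | (m[:-1, 1:] & m[1:, :-1] & ~m[:-1, :-1] & ~m[1:, 1:]))
    euler = (int(np.sum(q == 1)) - int(np.sum(q == 3)) + 2 * int(np.sum(diag))) // 4
    return area, perim, euler

ring = np.ones((3, 3), dtype=int)
ring[1, 1] = 0                      # a ring: one object with one hole
print(minkowski_2d(ring))           # -> (8, 16, 0): Euler = objects - holes = 0
```

For a ventricle mask the same three numbers would serve as the shape features the study tracks across disease severity.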

  10. Combined Biomarker Analysis for Risk of Acute Kidney Injury in Patients with ST-Segment Elevation Myocardial Infarction

    PubMed Central

    Tung, Ying-Chang; Chang, Chih-Hsiang; Chen, Yung-Chang; Chu, Pao-Hsien

    2015-01-01

    Background Acute kidney injury (AKI) complicating ST-segment elevation myocardial infarction (STEMI) increases subsequent morbidity and mortality. We combined the biomarkers of heart failure (HF; B-type natriuretic peptide [BNP] and soluble ST2 [sST2]) and renal injury (NGAL [neutrophil gelatinase-associated lipocalin] and cystatin C) in predicting the development of AKI in patients with STEMI undergoing primary percutaneous coronary intervention (PCI). Methods and Results From March 2010 to September 2013, 189 STEMI patients were sequentially enrolled and serum samples were collected at presentation for BNP, sST2, NGAL and cystatin C analysis. 37 patients (19.6%) developed AKI of varying severity within 48 hours of presentation. Univariate analysis showed age, Killip class ≥2, hypertension, white blood cell counts, hemoglobin, estimated glomerular filtration rate, blood urea nitrogen, creatinine, and all four biomarkers were predictive of AKI. Serum levels of the biomarkers were correlated with risk of AKI and the Acute Kidney Injury Network (AKIN) stage, and all significantly discriminated AKI (area under the receiver operating characteristic [ROC] curve: BNP: 0.86, sST2: 0.74, NGAL: 0.75, cystatin C: 0.73; all P < 0.05). Elevation of ≥2 of the biomarkers above the cutoff values derived from the ROC analysis improved AKI risk stratification, regardless of the creatinine level (creatinine < 1.24 mg/dL: odds ratio [OR] 11.25, 95% confidence interval [CI] 1.63-77.92, P = 0.014; creatinine ≥ 1.24: OR 15.0, 95% CI 1.23-183.6, P = 0.034). Conclusions In this study of STEMI patients undergoing primary PCI, the biomarkers of heart failure (BNP and sST2) and renal injury (NGAL and cystatin C) at presentation were predictive of AKI. High serum levels of the biomarkers were associated with an elevated risk and more advanced stage of AKI. Regardless of the creatinine level, elevation of ≥2 of the biomarkers above the cutoff values indicated a further rise in AKI risk. A combined biomarker approach may assist in risk stratification of AKI in patients with STEMI. PMID:25853556
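The per-biomarker AUCs reported above (BNP 0.86, sST2 0.74, etc.) are areas under ROC curves, which can be computed directly as a Mann-Whitney probability: the chance that a randomly chosen AKI patient has a higher biomarker value than a randomly chosen non-AKI patient. The toy values below are hypothetical, for illustration only.

```python
def roc_auc(values_positive, values_negative):
    """Area under the ROC curve via the Mann-Whitney statistic:
    P(random positive > random negative), with ties counted as 1/2."""
    pairs = 0.0
    for a in values_positive:
        for b in values_negative:
            if a > b:
                pairs += 1.0
            elif a == b:
                pairs += 0.5
    return pairs / (len(values_positive) * len(values_negative))

# Hypothetical BNP-like levels (pg/ml) for AKI vs non-AKI patients.
print(roc_auc([900, 750, 820], [300, 450, 800]))
```

The study's cutoff values would then be chosen from the same ROC curve (e.g. by the Youden index) before counting how many of the four biomarkers exceed their cutoffs in a given patient.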

  11. Underground Test Area Subproject Phase I Data Analysis Task. Volume VIII - Risk Assessment Documentation Package

    SciTech Connect

    1996-12-01

    Volume VIII of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment for the Nevada Test Site Underground Test Area Subproject contains the risk assessment documentation. Because of the size and complexity of the model area, a considerable quantity of data was collected and analyzed in support of the modeling efforts. The data analysis task was consequently broken into eight subtasks, and descriptions of each subtask's activities are contained in one of the eight volumes that comprise the Phase I Data Analysis Documentation.

  12. Underground Test Area Subproject Phase I Data Analysis Task. Volume VII - Tritium Transport Model Documentation Package

    SciTech Connect

    1996-12-01

    Volume VII of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment for the Nevada Test Site Underground Test Area Subproject contains the tritium transport model documentation. Because of the size and complexity of the model area, a considerable quantity of data was collected and analyzed in support of the modeling efforts. The data analysis task was consequently broken into eight subtasks, and descriptions of each subtask's activities are contained in one of the eight volumes that comprise the Phase I Data Analysis Documentation.

  13. Underground Test Area Subproject Phase I Data Analysis Task. Volume II - Potentiometric Data Document Package

    SciTech Connect

    1996-12-01

    Volume II of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment for the Nevada Test Site Underground Test Area Subproject contains the potentiometric data. Because of the size and complexity of the model area, a considerable quantity of data was collected and analyzed in support of the modeling efforts. The data analysis task was consequently broken into eight subtasks, and descriptions of each subtask's activities are contained in one of the eight volumes that comprise the Phase I Data Analysis Documentation.

  14. Microfabrication of a Segmented-Involute-Foil Regenerator, Testing in a Sunpower Stirling Convertor and Supporting Modeling and Analysis

    NASA Technical Reports Server (NTRS)

    Ibrahim, Mounir B.; Tew, Roy C.; Gedeon, David; Wood, Gary; McLean, Jeff

    2008-01-01

    Under Phase II of a NASA Research Award contract, a prototype nickel segmented-involute-foil regenerator was microfabricated via LiGA and tested in the NASA/Sunpower oscillating-flow test rig. The resulting figure of merit was about twice that of the approx. 90% porosity random-fiber material currently used in the small 50-100 W Stirling engines recently manufactured for NASA. That work was reported at the 2007 International Energy Conversion Engineering Conference in St. Louis, was also published as a NASA report, NASA/TM-2007-214973, and has been more completely described in a recent NASA Contractor Report, NASA/CR-2007-215006. Under a scaled-back version of the original Phase III plan, a new nickel segmented-involute-foil regenerator was microfabricated and has been tested in a Sunpower Frequency-Test-Bed (FTB) Stirling convertor. Testing in the FTB convertor produced about the same efficiency as testing with the original random-fiber regenerator. But the high thermal conductivity of the prototype nickel regenerator was responsible for a significant performance degradation. An efficiency improvement (by a factor of 1.04, according to computer predictions) could have been achieved if the regenerator had been made from a low-conductivity material. Also, the FTB convertor was not reoptimized to take full advantage of the microfabricated regenerator's low flow resistance; thus the efficiency would likely have been even higher had the FTB been completely reoptimized. This report discusses the regenerator microfabrication process, testing of the regenerator in the Stirling FTB convertor, and the supporting analysis. Results of the pre-test computational fluid dynamics (CFD) modeling of the effects of the regenerator-test-configuration diffusers (located at each end of the regenerator) are included. The report also includes recommendations for accomplishing further development of involute-foil regenerators from a higher-temperature material than nickel.

  15. BAC-Pool Sequencing and Analysis of Large Segments of A12 and D12 Homoeologous Chromosomes in Upland Cotton

    PubMed Central

    Buyyarapu, Ramesh; Kantety, Ramesh V.; Yu, John Z.; Xu, Zhanyou; Kohel, Russell J.; Percy, Richard G.; Macmil, Simone; Wiley, Graham B.; Roe, Bruce A.; Sharma, Govind C.

    2013-01-01

    Although new and emerging next-generation sequencing (NGS) technologies have reduced sequencing costs significantly, much work remains to implement them for de novo sequencing of complex and highly repetitive genomes such as the tetraploid genome of Upland cotton (Gossypium hirsutum L.). Herein we report the results from implementing a novel, hybrid Sanger/454-based BAC-pool sequencing strategy using minimum tiling path (MTP) BACs from Ctg-3301 and Ctg-465, two large genomic segments in the A12 and D12 homoeologous chromosomes (Ctg). To enable generation of longer contig sequences in assembly, we implemented a hybrid assembly method to process ~35x data from 454 technology and 2.8-3x data from the Sanger method. Hybrid assemblies offered higher sequence coverage and better sequence assemblies. Homology studies revealed the presence of retrotransposon regions such as Copia and Gypsy elements in these contigs and also helped in identifying new genomic SSRs. Unigenes were anchored to the sequences in Ctg-3301 and Ctg-465 to support the physical map. Gene density, gene structure and protein sequence information derived from protein prediction programs were used to obtain the functional annotation of these genes. Comparative analysis of both contigs with the Arabidopsis genome exhibited synteny and microcollinearity with a conserved gene order in both genomes. This study provides insight into the use of an MTP-based BAC-pool sequencing approach for sequencing complex polyploid genomes, with limited constraints in generating better sequence assemblies to build reference scaffold sequences. Combining the utilities of MTP-based BAC-pool sequencing with current long- and short-read NGS technologies in a multiplexed format would provide a new direction to cost-effectively and precisely sequence complex plant genomes. PMID:24116150

  16. Development of automatic surveillance of animal behaviour and welfare using image analysis and machine learned segmentation technique.

    PubMed

    Nilsson, M; Herlin, A H; Ardö, H; Guzhva, O; Åström, K; Bergsten, C

    2015-11-01

    In this paper the feasibility of extracting the proportion of pigs located in different areas of a pig pen by advanced image analysis is explored and discussed for possible applications. For example, pigs generally locate themselves in the wet dunging area at high ambient temperatures in order to avoid heat stress, as wetting the body surface is the major path to dissipate heat by evaporation. Thus, the proportions of pigs in the dunging area and resting area, respectively, could be used as an indicator of failure to control the climate in the pig environment, as pigs are not supposed to rest in the dunging area. The computer vision methodology utilizes a learning-based segmentation approach using several features extracted from the image. The learning-based approach applied builds on extended state-of-the-art features in combination with a structured prediction framework based on a logistic regression solver using elastic net regularization. In addition, the method produces a probability per pixel rather than a hard decision. This overcomes some of the limitations found in a setup using grey-scale information only. The pig pen is a difficult imaging environment because of challenging lighting conditions such as shadows, poor lighting and poor contrast between pig and background. In order to test practical conditions, a pen containing nine young pigs was filmed from a top-view perspective by an Axis M3006 camera with a resolution of 640×480 in three 10-min sessions under different lighting conditions. The results indicate that a learning-based method improves, in comparison with grey-scale methods, the possibility of reliably identifying the proportions of pigs in different areas of the pen. Pigs with a changed behaviour (location) in the pen may indicate changed climate conditions. Changed individual behaviour may also indicate inferior health or acute illness. PMID:26189971
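The core classifier named above, logistic regression with elastic net regularization producing a probability per pixel, can be sketched with a numpy-only proximal gradient solver. The features, data, and solver details here are assumptions for illustration; the paper's exact feature set and solver are not given in the abstract.

```python
import numpy as np

def train_elastic_net_logreg(X, y, lam=0.01, l1_ratio=0.5, lr=0.1, iters=2000):
    """Logistic regression with elastic-net regularization, trained by
    proximal gradient descent: gradient step on the smooth loss + L2 term,
    then a soft-thresholding (prox) step for the L1 term."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    l1, l2 = lam * l1_ratio, lam * (1.0 - l1_ratio)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))          # per-sample probability
        g = X.T @ (p - y) / n + l2 * w                  # smooth-part gradient
        b -= lr * np.mean(p - y)
        w -= lr * g
        w = np.sign(w) * np.maximum(np.abs(w) - lr * l1, 0.0)  # L1 prox step
    return w, b

# Toy per-pixel features, e.g. [intensity, local gradient]; label 1 = "pig".
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.8, 0.1, (50, 2)), rng.normal(0.2, 0.1, (50, 2))])
y = np.concatenate([np.ones(50), np.zeros(50)])
w, b = train_elastic_net_logreg(X, y)
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print(np.mean((p > 0.5) == (y == 1)))  # training accuracy on separable toy data
```

The per-pixel probabilities `p` (rather than a thresholded mask) are what allow the soft decisions the abstract highlights; thresholding only happens when a final pig/background map is needed.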

  17. Bivariate analysis of flood peaks and volumes using copulas. An application to the Danube River

    NASA Astrophysics Data System (ADS)

    Papaioannou, George; Bacigal, Tomas; Jeneiova, Katarina; Kohnová, Silvia; Szolgay, Jan; Loukas, Athanasios

    2014-05-01

    A multivariate analysis of flood variables such as flood peaks, volumes and durations is essential for the design of hydrotechnical projects. Many authors have suggested the use of bivariate distributions for the frequency analysis of flood peaks and volumes, under the supposition that the marginal probability distribution type is the same for these variables. The application of copulas, which are gradually becoming widespread, can overcome this constraint. The selection of the appropriate copula type/families is not extensively treated in the literature and remains a challenge in copula analysis. In this study a bivariate copula analysis using different copula families is carried out on the basis of flood peaks and the corresponding volumes along a river. This bivariate analysis of flood peaks and volumes is based on daily streamflow data from time series of more than 100 years at several gauged stations of the Danube River. The methodology was applied using annual maximum flood peaks (AMF) with the independent annual maximum volumes of fixed durations of 5, 10, 15, 20, 25, 30 and 60 days. The correlation of the discharge-volume pairs is examined using Kendall's tau. The copula families selected for the bivariate modeling of the extracted discharge-volume pairs are the Archimedean, Extreme-value and other copula families. The performance of the copulas was evaluated using scatterplots of the observed and bootstrapped simulated pairs and formal goodness-of-fit tests, and the suitability of the copulas was statistically compared. Archimedean copulas (e.g. Frank and Clayton) proved more capable of bivariate modeling of floods than the other examined copula families at the Danube River. Overall, the results show that copulas are effective tools for bivariate modeling of the two random variables studied.
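The link between the Kendall's tau step and the Clayton copula fit mentioned above is a closed-form moment estimator: for the Clayton family, θ = 2τ/(1 − τ). A minimal sketch follows; the peak/volume numbers are invented toy data, not Danube observations.

```python
from itertools import combinations

def kendall_tau(xs, ys):
    """Kendall's tau from concordant/discordant pair counts (no tie handling)."""
    conc = disc = 0
    for (x1, y1), (x2, y2) in combinations(zip(xs, ys), 2):
        s = (x1 - x2) * (y1 - y2)
        conc += s > 0
        disc += s < 0
    n = len(xs)
    return (conc - disc) / (n * (n - 1) / 2)

def clayton_theta(tau):
    """Clayton copula parameter via the moment estimator theta = 2*tau/(1-tau)."""
    return 2.0 * tau / (1.0 - tau)

# Toy flood peak (m^3/s) / volume (10^6 m^3) pairs with positive dependence.
peaks   = [2100, 3400, 1800, 4200, 2900]
volumes = [310, 430, 260, 700, 540]
tau = kendall_tau(peaks, volumes)
print(tau, clayton_theta(tau))
```

Formal goodness-of-fit testing, as in the study, would then compare bootstrapped samples drawn from the fitted copula against the observed pairs.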

  18. Thermal characterization and analysis of microliter liquid volumes using the three-omega method

    NASA Astrophysics Data System (ADS)

    Roy-Panzer, Shilpi; Kodama, Takashi; Lingamneni, Srilakshmi; Panzer, Matthew A.; Asheghi, Mehdi; Goodson, Kenneth E.

    2015-02-01

    Thermal phenomena in many biological systems offer an alternative detection opportunity for quantifying relevant sample properties. While there is substantial prior work on thermal characterization methods for fluids, the push in the biology and biomedical research communities towards analysis of reduced sample volumes drives a need to extend and scale these techniques to the volumes of interest, which can be below 100 pl. This work applies the 3ω technique to measure the temperature-dependent thermal conductivity and heat capacity of de-ionized water, silicone oil, and salt buffer solution droplets from 24 to 80 °C. Heater geometries range in length from 200 to 700 μm and in width from 2 to 5 μm to accommodate the size restrictions imposed by small-volume droplets. We use these devices to measure droplet volumes of 2 μl and demonstrate the potential to extend this technique down to pl droplet volumes based on an analysis of the thermally probed volume. Sensitivity and uncertainty analyses provide guidance for relevant design variables for characterizing properties of interest by investigating the tradeoffs between measurement frequency regime, device geometry, and substrate material. Experimental results show that we can extract thermal conductivity and heat capacity with these sample volumes to within less than 1% of thermal properties reported in the literature.
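The "thermally probed volume" argument above rests on the thermal penetration depth, which in one common 3ω convention is sqrt(D/(2ω)). The relation and the example values below (water's diffusivity, a 1 kHz oscillation) are standard textbook figures, not numbers from the abstract.

```python
import math

def penetration_depth_m(diffusivity_m2_s, thermal_freq_hz):
    """Thermal penetration depth sqrt(D / (2*omega)), omega = 2*pi*f,
    for the thermal oscillation in a 3-omega measurement; it sets the
    radius of the thermally probed volume around the heater line."""
    omega = 2.0 * math.pi * thermal_freq_hz
    return math.sqrt(diffusivity_m2_s / (2.0 * omega))

# Water near room temperature, D ~ 1.43e-7 m^2/s, 1 kHz thermal oscillation:
depth = penetration_depth_m(1.43e-7, 1000.0)
print(depth * 1e6, "micrometres")
```

Raising the measurement frequency shrinks this depth, which is why frequency choice appears alongside heater geometry in the paper's tradeoff analysis for picolitre volumes.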

  19. Thermal characterization and analysis of microliter liquid volumes using the three-omega method.

    PubMed

    Roy-Panzer, Shilpi; Kodama, Takashi; Lingamneni, Srilakshmi; Panzer, Matthew A; Asheghi, Mehdi; Goodson, Kenneth E

    2015-02-01

    Thermal phenomena in many biological systems offer an alternative detection opportunity for quantifying relevant sample properties. While there is substantial prior work on thermal characterization methods for fluids, the push in the biology and biomedical research communities towards analysis of reduced sample volumes drives a need to extend and scale these techniques to the volumes of interest, which can be below 100 pl. This work applies the 3ω technique to measure the temperature-dependent thermal conductivity and heat capacity of de-ionized water, silicone oil, and salt buffer solution droplets from 24 to 80 °C. Heater geometries range in length from 200 to 700 μm and in width from 2 to 5 μm to accommodate the size restrictions imposed by small-volume droplets. We use these devices to measure droplet volumes of 2 μl and demonstrate the potential to extend this technique down to pl droplet volumes based on an analysis of the thermally probed volume. Sensitivity and uncertainty analyses provide guidance for relevant design variables for characterizing properties of interest by investigating the tradeoffs between measurement frequency regime, device geometry, and substrate material. Experimental results show that we can extract thermal conductivity and heat capacity with these sample volumes to within less than 1% of thermal properties reported in the literature. PMID:25725871

  20. Satellite power systems (SPS) concept definition study. Volume 7: SPS program plan and economic analysis, appendixes

    NASA Technical Reports Server (NTRS)

    Hanley, G.

    1978-01-01

    Three appendixes in support of Volume 7 are contained in this document. The three appendixes are: (1) Satellite Power System Work Breakdown Structure Dictionary; (2) SPS cost Estimating Relationships; and (3) Financial and Operational Concept. Other volumes of the final report that provide additional detail are: Executive Summary; SPS Systems Requirements; SPS Concept Evolution; SPS Point Design Definition; Transportation and Operations Analysis; and SPS Technology Requirements and Verification.

  1. Marketing Segmentation Analysis.

    ERIC Educational Resources Information Center

    Weeks, Ann A.

    A study was conducted to differentiate by cities and towns the various demographic characteristics of students that Dutchess Community College (DCC) was receiving from its major service area, Dutchess County, in order to ascertain if DCC was receiving its expected share of students from these cities and towns. All the students enrolling at DCC…

  2. Synfuel program analysis. Volume 2: VENVAL users manual

    NASA Astrophysics Data System (ADS)

    Muddiman, J. B.; Whelan, J. W.

    1980-07-01

    This volume is intended for program analysts and is a users manual for the VENVAL model. It contains specific explanations as to input data requirements and programming procedures for the use of this model. VENVAL is a generalized computer program to aid in the evaluation of prospective private-sector production ventures. The program can project interrelated values of installed capacity, production, sales revenue, operating costs, depreciation, investment, debt, earnings, taxes, return on investment, depletion, and cash flow measures. It can also compute related public-sector and other external costs and revenues if unit costs are furnished.

  3. Synfuel program analysis. Volume II. VENVAL users manual

    SciTech Connect

    Muddiman, J. B.; Whelan, J. W.

    1980-07-01

    This volume is intended for program analysts and is a users manual for the VENVAL model. It contains specific explanations as to input data requirements and programming procedures for the use of this model in handling various cases. VENVAL is a generalized computer program to aid in evaluation of prospective private-sector production ventures. The program can project interrelated values of installed capacity, production, sales revenue, operating costs, depreciation, investment, debt, earnings, taxes, return on investment, depletion, and cash flow measures. It can also compute related public sector and other external costs and revenues if unit costs are furnished. (DMC)

  4. Virtual Mastoidectomy Performance Evaluation through Multi-Volume Analysis

    PubMed Central

    Kerwin, Thomas; Stredney, Don; Wiet, Gregory; Shen, Han-Wei

    2012-01-01

    Purpose Development of a visualization system that provides surgical instructors with a method to compare the results of many virtual surgeries (n > 100). Methods A masked distance field models the overlap between expert and resident results. Multiple volume displays are used side-by-side with a 2D point display. Results Performance characteristics were examined by comparing the results of specific residents with those of experts and the entire class. Conclusions The software provides a promising approach for comparing performance between large groups of residents learning mastoidectomy techniques. PMID:22528058

  5. Generic method for automatic bladder segmentation on cone beam CT using a patient-specific bladder shape model

    SciTech Connect

    Schoot, A. J. A. J. van de Schooneveldt, G.; Wognum, S.; Stalpers, L. J. A.; Rasch, C. R. N.; Bel, A.; Hoogeman, M. S.; Chai, X.

    2014-03-15

    Purpose: The aim of this study is to develop and validate a generic method for automatic bladder segmentation on cone beam computed tomography (CBCT), independent of gender and treatment position (prone or supine), using only pretreatment imaging data. Methods: Data of 20 patients, treated for tumors in the pelvic region with the entire bladder visible on CT and CBCT, were divided into four equally sized groups based on gender and treatment position. The full and empty bladder contours, which can be acquired with pretreatment CT imaging, were used to generate a patient-specific bladder shape model. This model was used to guide the segmentation process on CBCT. To obtain the bladder segmentation, the reference bladder contour was deformed iteratively by maximizing the cross-correlation of directional grey value gradients along the reference and CBCT bladder edges. To overcome incorrect segmentations caused by CBCT image artifacts, automatic adaptations were implemented. Moreover, locally incorrect segmentations could be adapted manually. After each adapted segmentation, the bladder shape model was expanded and new shape patterns were calculated for subsequent segmentations. All available CBCTs were used to validate the segmentation algorithm. The bladder segmentations were validated by comparison with the manual delineations, and the segmentation performance was quantified using the Dice similarity coefficient (DSC), surface distance error (SDE) and SD of contour-to-contour distances. Also, bladder volumes obtained by manual delineations and segmentations were compared using a Bland-Altman error analysis. Results: The mean DSC, mean SDE, and mean SD of contour-to-contour distances between segmentations and manual delineations were 0.87, 0.27 cm and 0.22 cm (female, prone), 0.85, 0.28 cm and 0.22 cm (female, supine), 0.89, 0.21 cm and 0.17 cm (male, supine) and 0.88, 0.23 cm and 0.17 cm (male, prone), respectively.
Manual local adaptations improved the segmentation results significantly (p < 0.01) based on DSC (6.72%) and SD of contour-to-contour distances (0.08 cm) and decreased the 95% confidence intervals of the bladder volume differences. Moreover, expanding the shape model improved the segmentation results significantly (p < 0.01) based on DSC and SD of contour-to-contour distances. Conclusions: This patient-specific shape model based automatic bladder segmentation method on CBCT is accurate and generic. Our segmentation method only needs two pretreatment imaging data sets as prior knowledge, is independent of patient gender and patient treatment position and has the possibility to manually adapt the segmentation locally.
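The Dice similarity coefficient used for validation here (and in several of the segmentation records above) is a simple overlap measure, DSC = 2|A∩B|/(|A| + |B|). A minimal implementation with an illustrative toy mask pair:

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) between two
    binary segmentation masks (1 = structure, 0 = background).
    Returns 1.0 for two empty masks by convention."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto   = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])  # automatic segmentation
manual = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])  # manual delineation
print(dice(auto, manual))  # 2*3 / (4 + 3) = 6/7 ≈ 0.857
```

DSC is insensitive to where along the boundary a disagreement occurs, which is why the study reports surface distance error and contour-to-contour SD alongside it.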

  6. Interactive lung segmentation in abnormal human and animal chest CT scans

    SciTech Connect

    Kockelkorn, Thessa T. J. P. Viergever, Max A.; Schaefer-Prokop, Cornelia M.; Bozovic, Gracijela; Muñoz-Barrutia, Arrate; Rikxoort, Eva M. van; Brown, Matthew S.; Jong, Pim A. de; Ginneken, Bram van

    2014-08-15

    Purpose: Many medical image analysis systems require segmentation of the structures of interest as a first step. For scans with gross pathology, automatic segmentation methods may fail. The authors’ aim is to develop a versatile, fast, and reliable interactive system to segment anatomical structures. In this study, this system was used for segmenting lungs in challenging thoracic computed tomography (CT) scans. Methods: In volumetric thoracic CT scans, the chest is segmented and divided into 3D volumes of interest (VOIs), containing voxels with similar densities. These VOIs are automatically labeled as either lung tissue or nonlung tissue. The automatic labeling results can be corrected using an interactive or a supervised interactive approach. When using the supervised interactive system, the user is shown the classification results per slice, whereupon he/she can adjust incorrect labels. The system is retrained continuously, taking the corrections and approvals of the user into account. In this way, the system learns to make a better distinction between lung tissue and nonlung tissue. When using the interactive framework without supervised learning, the user corrects all incorrectly labeled VOIs manually. Both interactive segmentation tools were tested on 32 volumetric CT scans of pigs, mice and humans, containing pulmonary abnormalities. Results: On average, supervised interactive lung segmentation took under 9 min of user interaction. Algorithm computing time was 2 min on average, but can easily be reduced. On average, 2.0% of all VOIs in a scan had to be relabeled. Lung segmentation using the interactive segmentation method took on average 13 min and involved relabeling 3.0% of all VOIs on average. The resulting segmentations correspond well to manual delineations of eight axial slices per scan, with an average Dice similarity coefficient of 0.933. 
Conclusions: The authors have developed two fast and reliable methods for interactive lung segmentation in challenging chest CT images. Neither system requires prior knowledge of the scans under consideration, and both work on a variety of scans.
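
    The Dice similarity coefficient used above to score the segmentations against manual delineations can be computed from two binary masks; a minimal NumPy sketch with toy masks (the arrays here are illustrative, not the study's data):

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice similarity coefficient 2|A ∩ B| / (|A| + |B|) between two binary masks."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 2D masks standing in for a manual and an automatic lung delineation
manual = np.zeros((10, 10), dtype=bool); manual[2:8, 2:8] = True  # 36 pixels
auto = np.zeros((10, 10), dtype=bool);   auto[4:8, 2:8] = True    # 24 pixels
dsc = dice_coefficient(manual, auto)
print(dsc)  # 2 * 24 / (36 + 24) = 0.8
```

    A value of 1.0 indicates identical masks; the 0.933 reported above therefore reflects very close agreement with the manual reference.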

  7. Ureter tracking and segmentation in CT urography (CTU) using COMPASS

    SciTech Connect

    Hadjiiski, Lubomir; Zick, David; Chan, Heang-Ping; Cohan, Richard H.; Caoili, Elaine M.; Cha, Kenny; Zhou, Chuan; Wei, Jun

    2014-12-15

    Purpose: The authors are developing a computerized system for automated segmentation of ureters in CTU, referred to as combined model-guided path-finding analysis and segmentation system (COMPASS). Ureter segmentation is a critical component for computer-aided diagnosis of ureter cancer. Methods: COMPASS consists of three stages: (1) rule-based adaptive thresholding and region growing, (2) path-finding and propagation, and (3) edge profile extraction and feature analysis. With institutional review board approval, 79 CTU scans performed with intravenous (IV) contrast material enhancement were collected retrospectively from 79 patient files. One hundred twenty-four ureters were selected from the 79 CTU volumes. On average, the ureters spanned 283 computed tomography slices (range: 116–399, median: 301). More than half of the ureters contained malignant or benign lesions, and some had ureter wall thickening due to malignancy. A starting point for each of the 124 ureters was identified manually to initialize the tracking by COMPASS. In addition, the centerline of each ureter was manually marked and used as reference standard for evaluation of tracking performance. The performance of COMPASS was quantitatively assessed by estimating the percentage of the length that was successfully tracked and segmented for each ureter and by estimating the average distance and the average maximum distance between the computer and the manually tracked centerlines. Results: Of the 124 ureters, 120 (97%) were segmented completely (100%), 121 (98%) were segmented through at least 70%, and 123 (99%) were segmented through at least 50% of their length. In comparison, using our previous method, 85 (69%) ureters were segmented completely (100%), 100 (81%) were segmented through at least 70%, and 107 (86%) were segmented through at least 50% of their length.
With COMPASS, the average distance between the computer and the manually generated centerlines is 0.54 mm, and the average maximum distance is 2.02 mm. With our previous method, the average distance between the centerlines was 0.80 mm, and the average maximum distance was 3.38 mm. The improvements in the ureteral tracking length and both distance measures were statistically significant (p < 0.0001). Conclusions: COMPASS significantly improved ureter tracking, including across regions of ureter lesions, wall thickening, and luminal narrowing.
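
    The two centerline distance measures above can be implemented as nearest-neighbour distances from the tracked centerline to the manual reference; a NumPy sketch with toy polylines (the paper's exact metric definition may differ in detail):

```python
import numpy as np

def centerline_distances(tracked, reference):
    """For each tracked point, find the distance to the nearest reference
    point; return the mean and the maximum of those distances."""
    tracked = np.asarray(tracked, dtype=float)
    reference = np.asarray(reference, dtype=float)
    diffs = tracked[:, None, :] - reference[None, :, :]       # (N, M, 3)
    nearest = np.sqrt((diffs ** 2).sum(axis=-1)).min(axis=1)  # (N,)
    return nearest.mean(), nearest.max()

# Toy centerlines (N x 3 points, mm): the tracked line is offset 0.5 mm in x
reference = np.array([[0.0, 0.0, z] for z in range(10)])
tracked = reference + np.array([0.5, 0.0, 0.0])
mean_d, max_d = centerline_distances(tracked, reference)
print(mean_d, max_d)  # 0.5 0.5
```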

  8. Stochastic watershed segmentation

    E-print Network

    Angulo,Jesús

    Stochastic watershed segmentation. Jesús Angulo and Dominique Jeulin, Centre de Morphologie Mathématique. Abstract: This paper introduces a watershed-based stochastic segmentation methodology. The approach is based … is then segmented by volumic watershed for defining the R most significant regions. It outperforms the standard

  9. Space tug economic analysis study. Volume 2: Tug concepts analysis. Part 2: Economic analysis

    NASA Technical Reports Server (NTRS)

    1972-01-01

    An economic analysis of space tug operations is presented. The subjects discussed are: (1) cost uncertainties, (2) scenario analysis, (3) economic sensitivities, (4) mixed integer programming formulation of the space tug problem, and (5) critical parameters in the evaluation of a public expenditure.

  10. Multivariate hippocampal subfield analysis of local MRI intensity and volume: application to temporal lobe epilepsy.

    PubMed

    Kim, Hosung; Bernhardt, Boris C; Kulaga-Yoskovitz, Jessie; Caldairou, Benoit; Bernasconi, Andrea; Bernasconi, Neda

    2014-01-01

    We propose a multispectral MRI-based clinical decision support approach to carry out automated seizure focus lateralization in patients with temporal lobe epilepsy (TLE). Based on high-resolution T1- and T2-weighted MRI with hippocampal subfield segmentations, our approach samples MRI features along the medial sheet of each subfield to minimize partial volume effects. To establish correspondence of sampling points across subjects, we propagate a spherical harmonic parameterization derived from the hippocampal boundary along a Laplacian gradient field towards the medial sheet. Volume and intensity data sampled on the medial sheet are finally fed into a supervised classifier. Testing our approach in TLE patients in whom the seizure focus could not be lateralized using conventional MR volumetry, the proposed approach correctly lateralized all patients and outperformed classification performance based on global subfield volumes or mean T2-intensity (100% vs. 68%). Moreover, statistical group-level comparisons revealed patterns of subfield abnormalities that were not evident in the global measurements and that largely agree with known histopathological changes. PMID:25485376

  11. Genetic analysis of members of the species Oropouche virus and identification of a novel M segment sequence

    PubMed Central

    Tilston-Lunel, Natasha L.; Hughes, Joseph; Acrani, Gustavo Olszanski; da Silva, Daisy E. A.; Azevedo, Raimunda S. S.; Rodrigues, Sueli G.; Vasconcelos, Pedro F. C.; Nunes, Marcio R. T.

    2015-01-01

    Oropouche virus (OROV) is a public health threat in South America, and in particular in northern Brazil, causing frequent outbreaks of febrile illness. Using a combination of deep sequencing and Sanger sequencing approaches, we determined the complete genome sequences of eight clinical isolates that were obtained from patient sera during an Oropouche fever outbreak in Amapa state, northern Brazil, in 2009. We also report the complete genome sequences of two OROV reassortants isolated from two marmosets in Minas Gerais state, south-east Brazil, in 2012 that contained a novel M genome segment. Interestingly, all 10 isolates possessed a 947 nt S segment that lacked 11 residues in the S-segment 3′ UTR compared with the recently redetermined Brazilian prototype OROV strain BeAn19991. OROV may be circulating more widely in Brazil and in the non-human primate population than previously appreciated, and the identification of yet another reassortant highlights the importance of bunyavirus surveillance in South America. PMID:25735305

  12. STS-1 operational flight profile. Volume 6: Abort analysis

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The abort analysis for the cycle 3 Operational Flight Profile (OFP) for the Space Transportation System 1 Flight (STS-1) is defined, superseding the abort analysis previously presented. Included are the flight description, an abort analysis summary, flight design ground rules and constraints, initialization information, a general abort description and results, abort solid rocket booster and external tank separation and disposal results, abort monitoring displays with discussion of both ground and onboard trajectory monitoring, the abort initialization load summary for the onboard computer, and a list of the key abort powered-flight dispersion analyses.

  13. Segmentation and Analysis of Corpus Callosum in Alzheimer MR Images using Total Variation Based Diffusion Filter and Level Set Method.

    PubMed

    Anandh, K R; Sujatha, C M; Ramakrishnan, S

    2015-01-01

    Alzheimer's Disease (AD) is a common form of dementia that affects gray and white matter structures of the brain. Manifestation of AD leads to cognitive deficits such as memory impairment, impaired thinking, and difficulties in performing day-to-day activities. Although the etiology of this disease is unclear, imaging biomarkers are highly useful in the early diagnosis of AD. Magnetic resonance imaging is an indispensable non-invasive imaging modality that reflects both the geometry and pathology of the brain. The Corpus Callosum (CC) is the largest white matter structure as well as the main inter-hemispheric fiber connection, and it undergoes regional alterations due to AD. Therefore, segmentation and feature extraction are essential to characterize CC atrophy. In this work, an attempt has been made to segment the CC using an edge-based level set method. Prior to segmentation, the images are pre-processed using Total Variation (TV) based diffusion filtering to enhance the edge information. Shape-based geometric features are extracted from the segmented CC images to analyze the CC atrophy. Results show that the edge-based level set method is able to segment the CC in both the normal and AD images. TV-based diffusion filtering performed uniform, region-specific smoothing, thereby preserving the texture and small-scale details of the image. Consequently, the edge maps of the CC in both the normal and AD images are sharp and distinct, with continuous boundaries. This facilitates the final contour in correctly segmenting the CC from the nearby structures. The extracted geometric features such as area, perimeter and minor axis are found to have percentage differences of 5.97%, 22.22% and 9.52%, respectively, in the demarcation of AD subjects. As callosal atrophy is significant in the diagnosis of AD, this study appears clinically useful. PMID:25996739
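
    The edge-preserving pre-filtering step can be illustrated with a generic Perona-Malik style diffusion, used here as a stand-in for the TV-based filter described in the abstract (the image, noise level, and parameters are all illustrative):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, step=0.2):
    """Perona-Malik style edge-preserving diffusion: smooths within
    regions while suppressing flux across strong edges (the conduction
    coefficient decays with the local gradient magnitude)."""
    u = np.asarray(img, dtype=float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours (periodic at the borders)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # conduction: near 1 in flat regions, near 0 across edges
        u += step * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
    return u

# Noisy step edge: diffusion removes the noise but keeps the edge sharp
rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[:, 16:] = 1.0
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
smooth = anisotropic_diffusion(noisy)
```

    After filtering, the residual noise in the flat regions shrinks while the step contrast stays near 1, which is the property that lets the subsequent level set contour find continuous CC boundaries.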

  14. Economic analysis of the space shuttle system, volume 1

    NASA Technical Reports Server (NTRS)

    1972-01-01

    An economic analysis of the space shuttle system is presented. The analysis is based on economic benefits, recurring costs, non-recurring costs, and economic tradeoff functions. The most economic space shuttle configuration is determined on the basis of: (1) objectives of a reusable space transportation system, (2) the various space transportation systems considered, and (3) alternative space shuttle systems.

  15. Space shuttle navigation analysis. Volume 2: Baseline system navigation

    NASA Technical Reports Server (NTRS)

    Jones, H. L.; Luders, G.; Matchett, G. A.; Rains, R. G.

    1980-01-01

    Studies related to the baseline navigation system for the orbiter are presented. The baseline navigation system studies include a covariance analysis of the Inertial Measurement Unit calibration and alignment procedures, postflight IMU error recovery for the approach and landing phases, on-orbit calibration of IMU instrument biases, and a covariance analysis of entry and prelaunch navigation system performance.

  16. Multispectral brain tumor segmentation based on histogram model adaptation

    NASA Astrophysics Data System (ADS)

    Rexilius, Jan; Hahn, Horst K.; Klein, Jan; Lentschig, Markus G.; Peitgen, Heinz-Otto

    2007-03-01

    Brain tumor segmentation and quantification from MR images is a challenging task. The boundary of a tumor and its volume are important parameters that can have direct impact on surgical treatment, radiation therapy, or on quantitative measurements of tumor regression rates. Although a wide range of different methods has already been proposed, a commonly accepted approach is not yet established. Today, the gold standard at many institutions still consists of manual tumor outlining, which is potentially subjective, time consuming, and tedious. We propose a new method that allows for fast multispectral segmentation of brain tumors. An efficient initialization of the segmentation is obtained using a novel probabilistic intensity model, followed by an iterative refinement of the initial segmentation. A progressive region growing that combines probability and distance information provides a new, flexible tumor segmentation. In order to derive a robust model for brain tumors that can be easily applied to a new dataset, we retain information not on the anatomical, but on the global cross-subject intensity variability. Therefore, a set of multispectral histograms from different patient datasets is registered onto a reference histogram using global affine and non-rigid registration methods. The probability model is then generated from manual expert segmentations that are transferred to the histogram feature domain. A forward and backward transformation of a manual segmentation between histogram and image domain allows for a statistical analysis of the accuracy and robustness of the selected features. Experiments are carried out on patient datasets with different tumor shapes, sizes, locations, and internal texture.
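
    Probability-driven region growing of the kind described above can be sketched in a few lines; this simplified 2D version uses only the intensity-probability criterion with 4-connectivity (the paper additionally combines distance information), and the intensity model and image are invented for illustration:

```python
import numpy as np
from collections import deque

def probabilistic_region_growing(img, prob_of_intensity, seed, p_min=0.5):
    """Breadth-first region growing: a 4-connected neighbour joins the
    region when the class probability of its intensity (looked up in a
    histogram-derived model) is at least p_min."""
    h, w = img.shape
    grown = np.zeros((h, w), dtype=bool)
    grown[seed] = True
    queue = deque([seed])
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not grown[ni, nj]:
                if prob_of_intensity(img[ni, nj]) >= p_min:
                    grown[ni, nj] = True
                    queue.append((ni, nj))
    return grown

# Hypothetical intensity model: "tumor" intensities cluster around 0.8
prob = lambda v: float(np.exp(-((v - 0.8) / 0.1) ** 2))
img = np.full((7, 7), 0.2)
img[2:5, 2:5] = 0.8                       # bright 3x3 "tumor" region
mask = probabilistic_region_growing(img, prob, (3, 3))
print(mask.sum())  # 9
```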

  17. Oil-spill risk analysis: Cook inlet outer continental shelf lease sale 149. Volume 1. The analysis. Final report

    SciTech Connect

    Johnson, W.R.; Marshall, C.F.; Anderson, C.M.; Lear, E.M.

    1994-08-01

    This report summarizes results of an oil-spill risk analysis (OSRA) conducted for the proposed lower Cook Inlet Outer Continental Shelf (OCS) Lease Sale 149. The objective of this analysis was to estimate relative oil-spill risks associated with oil and gas production from the leasing alternatives proposed for the lease sale. The Minerals Management Service (MMS) will consider the analysis in the environmental impact statement (EIS) prepared for the lease sale. The analysis for proposed OCS Lease Sale 149 was conducted in three parts corresponding to different aspects of the overall problem. The first part dealt with the probability of oil-spill occurrence. The second dealt with trajectories of oil spills from potential spill sites to various environmental resources or land segments. The third part combined the results of the first two parts to give estimates of the overall oil-spill risk if there is oil production as a result of the lease sale. To aid the analysis, contour maps of seasonal conditional probabilities of spill contact were generated for each environmental resource or land segment in the study area (see vol. 2).

  18. Measurement and analysis of grain boundary grooving by volume diffusion

    NASA Technical Reports Server (NTRS)

    Hardy, S. C.; Mcfadden, G. B.; Coriell, S. R.; Voorhees, P. W.; Sekerka, R. F.

    1991-01-01

    Experimental measurements of isothermal grain boundary grooving by volume diffusion are carried out for Sn bicrystals in the Sn-Pb system near the eutectic temperature. The dimensions of the groove increase with a temporal exponent of 1/3, and measurement of the associated rate constant allows the determination of the product of the liquid diffusion coefficient D and the capillarity length Gamma associated with the interfacial free energy of the crystal-melt interface. The small-slope theory of Mullins is generalized to the entire range of dihedral angles by using a boundary integral formulation of the associated free boundary problem, and excellent agreement with experimental groove shapes is obtained. By using the diffusivity measured by Jordon and Hunt, the present measured values of Gamma are found to agree to within 5 percent with the values obtained from experiments by Gunduz and Hunt on grain boundary grooving in a temperature gradient.
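
    The 1/3 temporal exponent reported above can be recovered from groove-width measurements by a straight-line fit in log-log space; a sketch on synthetic data (the rate constant and time points are hypothetical, not the paper's measurements):

```python
import numpy as np

# Synthetic groove widths obeying w = k * t**(1/3)
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])  # times (arbitrary units)
k_true = 3.0
w = k_true * t ** (1.0 / 3.0)

# log w = log k + m * log t, so the fitted slope m is the temporal exponent
# and exp(intercept) recovers the rate constant k
m, log_k = np.polyfit(np.log(t), np.log(w), 1)
print(m, np.exp(log_k))
```

    In the actual experiment, the measured rate constant then yields the product of the liquid diffusion coefficient D and the capillarity length Gamma.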

  19. Demand modelling of passenger air travel: An analysis and extension, volume 2

    NASA Technical Reports Server (NTRS)

    Jacobson, I. D.

    1978-01-01

    Previous intercity travel demand models are evaluated in terms of their ability to predict air travel in a useful way, along with the need for disaggregation in the approach to demand modelling. The viability of incorporating non-conventional factors (i.e. non-econometric, such as time and cost) in travel demand forecasting models is determined. The investigation of existing models is carried out in order to provide insight into their strong points and shortcomings. The model is characterized as a market segmentation model. This is a consequence of the strengths of disaggregation and its natural evolution to a usable aggregate formulation. The need for this approach, both pedagogically and mathematically, is discussed. In addition, this volume contains two appendices which should prove useful to the non-specialist in the area.

  20. Random harmonic analysis program, L221 (TEV156). Volume 2: Supplemental system design and maintenance document

    NASA Technical Reports Server (NTRS)

    Graham, M. L.; Clemmons, R. E.; Miller, R. D.

    1979-01-01

    Volume 2 of a two-volume document is presented. A computer program, L221 (TEV156), available for execution on the CDC 6600 computer, is described. The program is capable of calculating steady-state solutions for linear second-order differential equations due to sinusoidal forcing functions. From this, steady-state solutions, generalized coordinates, and load frequency responses may be determined. Statistical characteristics of loads for the forcing function spectral shape may also be calculated using random harmonic analysis techniques. The particular field of application of the program is the analysis of airplane response and loads due to continuous random air turbulence.

  1. Analysis of cell concentration, volume concentration, and colony size of Microcystis via laser particle analyzer.

    PubMed

    Li, Ming; Zhu, Wei; Gao, Li

    2014-05-01

    The analysis of the cell concentration, volume concentration, and colony size of Microcystis is widely used to provide early warnings of the occurrence of blooms and to facilitate the development of predictive tools to mitigate their impact. This study developed a new approach for the analysis of the cell concentration, volume concentration, and colony size of Microcystis by applying a laser particle analyzer. Four types of Microcystis samples (55 samples in total) were analyzed by a laser particle analyzer and a microscope. With the laser particle analyzer: (1) when n = 1.40 and k = 0.1 (where n is the intrinsic refractive index and k is the absorption of light by the particle), the results of the laser particle analyzer showed good agreement with the microscopic results for the obscuration indicator, volume concentration, and size distribution of Microcystis; (2) the Microcystis cell concentration can be calculated based on its linear relationship with obscuration; and (3) the volume concentration and size distribution of Microcystis particles (including single cells and colonies) can be obtained. The analytical process involved in this new approach is simpler and faster than the microscopic counting method. From the results, it was identified that the relationship between cell concentration and volume concentration depended on the colony size of Microcystis, because the intercellular space was larger when the colony size was large. Cell concentration and volume concentration can therefore be calculated when sufficient colony-size information is available. PMID:24570208
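
    The linear relationship between obscuration and cell concentration amounts to a simple calibration fit; a sketch with hypothetical calibration pairs (all numbers invented for illustration, not the study's data):

```python
import numpy as np

# Hypothetical calibration pairs: laser-analyzer obscuration (%) vs.
# microscope cell counts (cells/mL)
obscuration = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
cells = np.array([1.1e6, 2.0e6, 3.1e6, 3.9e6, 5.0e6])

# Least-squares line: cell concentration as a linear function of obscuration
slope, intercept = np.polyfit(obscuration, cells, 1)

def predict(obs):
    """Estimate cell concentration (cells/mL) from an obscuration reading."""
    return slope * obs + intercept

print(f"{predict(5.0):.4g} cells/mL")
```

    Once calibrated, a single obscuration reading from the particle analyzer converts directly to an estimated cell concentration, which is what makes the method faster than microscopic counting.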

  2. Drive-Response Analysis of Global Ice Volume, CO2, and Insolation using Information Transfer

    NASA Astrophysics Data System (ADS)

    Brendryen, J.; Hannisdal, B.

    2014-12-01

    The processes and interactions that drive global ice volume variability and deglaciations are a topic of considerable debate. Here we analyze the drive-response relationships between data sets representing global ice volume, CO2 and insolation over the past 800 000 years using an information theoretic approach. Specifically, we use a non-parametric measure of directional information transfer (IT) based on the construct of transfer entropy to detect the relative strength and directionality of interactions in the potentially chaotic and non-linear glacial-interglacial climate system. Analyses of unfiltered data suggest a tight coupling between CO2 and ice volume, detected as strong, symmetric information flow consistent with a two-way interaction. In contrast, IT from Northern Hemisphere (NH) summer insolation to CO2 is highly asymmetric, suggesting that insolation is an important driver of CO2. Conditional analysis further suggests that CO2 is a dominant influence on ice volume, with the effect of insolation also being significant but limited to smaller-scale variability. However, the strong correlation between CO2 and ice volume renders them information redundant with respect to insolation, confounding further drive-response attribution. We expect this information redundancy to be partly explained by the shared glacial-interglacial "sawtooth" pattern and its overwhelming influence on the transition probability distributions over the target interval. To test this, we filtered out the abrupt glacial terminations from the ice volume and CO2 records to focus on the residual variability. Preliminary results from this analysis confirm insolation as a driver of CO2 and two-way interactions between CO2 and ice volume. However, insolation is reduced to a weak influence on ice volume. Conditional analyses support CO2 as a dominant driver of ice volume, while ice volume and insolation both have a strong influence on CO2. 
These findings suggest that the effect of orbital variability on global ice volume may work primarily through its influence on CO2. Our preliminary results are consistent with the idea that the coupling between CO2 and ice volume likely occurs via a feedback loop that involves meltwater-induced shifts in oceanic circulation and associated changes in the carbon cycle.
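
    The directional information transfer used above is built on the construct of transfer entropy. A naive plug-in estimator for short discrete series, with history length 1, can be sketched as follows (the binary toy sequences are illustrative; the study uses a more sophisticated non-parametric estimator on continuous climate records):

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Naive plug-in transfer entropy TE(X -> Y) in bits for discrete
    sequences, with history length 1:
    TE = sum over states of p(y1, y0, x0) * log2[p(y1|y0, x0) / p(y1|y0)]."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_full = c / pairs_yx[(y0, x0)]            # p(y1 | y0, x0)
        p_marg = pairs_yy[(y1, y0)] / singles[y0]  # p(y1 | y0)
        te += p_joint * np.log2(p_full / p_marg)
    return te

# x drives y with a one-step lag; z is independent noise
rng = np.random.default_rng(1)
x = rng.integers(0, 2, 5000)
z = rng.integers(0, 2, 5000)
y = np.empty_like(x); y[0] = 0; y[1:] = x[:-1]  # y copies x with lag 1
te_xy = transfer_entropy(x, y)
te_zy = transfer_entropy(z, y)
print(te_xy > te_zy)  # True: information flows from x to y, not from z
```

    The asymmetry of the measure, TE(X to Y) versus TE(Y to X), is what allows the drive-response attribution between insolation, CO2, and ice volume described above.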

  3. Feasibility analysis and residual evaluation of automated planar segmentation results of large-scale Martian surface structures

    NASA Astrophysics Data System (ADS)

    Székely, B.; Dorninger, P.; Koma, Zs.; Jansa, J.; Kovács, G.; Nothegger, C.

    2012-04-01

    As increasingly larger coverage of DTMs becomes available for the Martian surface, not only does the number of studies on individual Martian features increase, but the need for large-scale geomorphometric evaluation grows as well. Computer power and increasingly sophisticated methods are beginning to allow such extensive studies. Our DTM segmentation method, recently tailored and tested for various geoscientific applications, now allows processing of large DTMs created within the framework of the ESA Mars Express HRSC project. The implementation uses computational parallelization, a kd-tree approach for storage, and several sophisticated techniques for seeking seed points to improve performance. Test runs on high-capacity multi-core computers demonstrate that processing the complete DTM of an orbit is now feasible. The ability to process large areas also implies that the segmentation results in a high number of planar facets, typically several thousand features. Furthermore, the segmentation is often sensitive to the initial parameters (number of points used to calculate local normal vectors, point-to-plane distance, angular tolerance, etc.), and the segment-splitting parameter typically has a strong influence on the resulting segmentation pattern. This complexity can complicate the evaluation of the results. To characterize the general behaviour, a number of test runs were carried out. The resulting sets of planar facets were then evaluated as to whether the segmentation fulfilled its original purpose (e.g., in the case of modeling an impact crater, the crater's typical features should be modeled). Models with unsatisfactory coverage or residual values were sorted out. Model results considered satisfactory are then analysed in terms of the residual values (the pointwise difference between measured and modeled height).
The distributions of the residuals are sometimes asymmetric, but the results are typically still acceptable. Asymmetric and non-continuous segmentations arise when the area is complex, composed of various landforms. This may also imply the need to process the area with various parameter sets, in order to cover features like impact craters, volcanoes, topographic scarps, debris slopes, and landslides. In our experience it is not easy to achieve low residuals, good coverage of all features, and a high percentage of meaningful planar facets simultaneously. However, this type of result can be achieved by introducing successive segmentation phases: each phase processes a given number of points, and the remaining points are passed to the next segmentation step. The final goal of the whole segmentation is the geostatistical evaluation of the parameters of the planar features (size, slope, aspect, average of residual values, etc.). This is a co-investigator contribution of the ESA Mars Express High Resolution Stereo Camera research group (principal investigator G. Neukum), and the TMIS.ascrea project has been supported by the Austrian Research Promotion Agency (FFG).

  4. Viscous wing theory development. Volume 1: Analysis, method and results

    NASA Technical Reports Server (NTRS)

    Chow, R. R.; Melnik, R. E.; Marconi, F.; Steinhoff, J.

    1986-01-01

    Viscous transonic flows at large Reynolds numbers over 3-D wings were analyzed using a zonal viscid-inviscid interaction approach. A new numerical AFZ scheme was developed in conjunction with the finite volume formulation for the solution of the inviscid full-potential equation. A special far-field asymptotic boundary condition was developed and a second-order artificial viscosity included for an improved inviscid solution methodology. The integral method was used for the laminar/turbulent boundary layer and 3-D viscous wake calculation. The interaction calculation included the coupling conditions of the source flux due to the wing surface boundary layer, the flux jump due to the viscous wake, and the wake curvature effect. A method was also devised incorporating the 2-D trailing edge strong interaction solution for the normal pressure correction near the trailing edge region. A fully automated computer program was developed to perform the proposed method with one scalar version to be used on an IBM-3081 and two vectorized versions on Cray-1 and Cyber-205 computers.

  5. SLUDGE TREATMENT PROJECT ALTERNATIVES ANALYSIS SUMMARY REPORT [VOLUME 1

    SciTech Connect

    FREDERICKSON JR; ROURK RJ; HONEYMAN JO; JOHNSON ME; RAYMOND RE

    2009-01-19

    Highly radioactive sludge (containing up to 300,000 curies of actinides and fission products) resulting from the storage of degraded spent nuclear fuel is currently stored in temporary containers located in the 105-K West storage basin near the Columbia River. The background, history, and known characteristics of this sludge are discussed in Section 2 of this report. There are many compelling reasons to remove this sludge from the K-Basin. These reasons are discussed in detail in Section 1, and they include the following: (1) Reduce the risk to the public (from a potential release of highly radioactive material as fine respirable particles by airborne or waterborne pathways); (2) Reduce the overall risk to the Hanford worker; and (3) Reduce the risk to the environment (the K-Basin is situated above a hazardous chemical contaminant plume and hinders remediation of the plume until the sludge is removed). The DOE-RL has stated that a key DOE objective is to remove the sludge from the K-West Basin and River Corridor as soon as possible, which will reduce risks to the environment, allow for remediation of contaminated areas underlying the basins, and support closure of the 100-KR-4 operable unit. The environmental and nuclear safety risks associated with this sludge have resulted in multiple legal and regulatory remedial action decisions, plans, and commitments that are summarized in Table ES-1 and discussed in more detail in Volume 2, Section 9.

  6. Mining volume measurement system

    NASA Technical Reports Server (NTRS)

    Heyman, Joseph Saul (inventor)

    1988-01-01

    In a shaft with a curved or straight primary segment and smaller off-shooting segments, at least one standing wave is generated in the primary segment. The shaft has either an open end or a closed end and approximates a cylindrical waveguide. A frequency of a standing wave that represents the fundamental mode characteristic of the primary segment can be measured. Alternatively, a frequency differential between two successive harmonic modes that are characteristic of the primary segment can be measured. In either event, the measured frequency or frequency differential is characteristic of the length and thus the volume of the shaft based on length times the bore area.
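
    The frequency-to-volume relation in this claim can be sketched for an idealized open-pipe waveguide, where successive harmonics are spaced by c/(2L); the speed of sound and shaft geometry below are illustrative values, not figures from the patent:

```python
def shaft_volume(harmonic_spacing_hz, bore_area_m2, c=343.0):
    """Shaft volume from the spacing of successive resonant modes.
    For an idealized open pipe, adjacent harmonics are spaced by
    c / (2 L), so L = c / (2 * spacing) and V = L * A."""
    length_m = c / (2.0 * harmonic_spacing_hz)
    return length_m * bore_area_m2

# A 100 m shaft with a 4 m^2 bore gives a spacing of 343/(2*100) = 1.715 Hz
spacing = 343.0 / (2.0 * 100.0)
volume = shaft_volume(spacing, 4.0)
print(volume)  # ≈ 400 m^3
```

    The same idea works with the fundamental frequency alone; measuring the spacing between two successive harmonics simply avoids having to identify which mode is the fundamental.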

  7. On 3-D inelastic analysis methods for hot section components. Volume 1: Special finite element models

    NASA Technical Reports Server (NTRS)

    Nakazawa, S.

    1987-01-01

    This Annual Status Report presents the results of work performed during the third year of the 3-D Inelastic Analysis Methods for Hot Section Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of new computer codes that permit more accurate and efficient three-dimensional analysis of selected hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The computer codes embody a progression of mathematical models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components. This report is presented in two volumes. Volume 1 describes effort performed under Task 4B, Special Finite Element Special Function Models, while Volume 2 concentrates on Task 4C, Advanced Special Functions Models.

  8. 3D segmentation of the true and false lumens on CT aortic dissection images

    NASA Astrophysics Data System (ADS)

    Fetnaci, Nawel; Łubniewski, Paweł; Miguel, Bruno; Lohou, Christophe

    2013-03-01

    Our work is related to aortic dissections, which are a medical emergency and can quickly lead to death. In this paper, we aim to retrieve from CT images the true and false lumens, which are characteristic features of aortic dissection. Our goal is to provide a 3D view of the lumens, which is difficult to obtain either by volume rendering or by other visualization tools, which directly give only the outer contour of the aorta, or by other segmentation methods, which mainly segment the outer contour of the aorta together with connected arteries and organs. In our work, we need to segment the two lumens separately; this segmentation will allow us to distinguish them automatically, facilitate the landing of the aortic prosthesis, propose virtual 3D navigation, and perform quantitative analysis. We chose to segment these data using a deformable model based on the fast marching method. In the classical fast marching approach, a speed function is used to control the front propagation of a deforming curve, and this speed function is based only on the image gradient. In our CT images, due to the low resolution, the fast marching front propagates from one lumen to the other; the gradient data alone is therefore insufficient for accurate segmentation. In this paper, we adapted the fast marching method, in particular by modifying the speed function, and succeeded in segmenting the two lumens separately.
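
    The role of the speed function can be illustrated with a first-order, Dijkstra-like front propagation on a grid: a simplified stand-in for true fast marching (which solves the eikonal equation with an upwind scheme). The toy image, speed map, and threshold below are invented; the point is that a low speed on the dissection flap keeps the front from leaking between lumens:

```python
import heapq
import numpy as np

def propagate_front(speed, seed):
    """Front propagation from a seed pixel: arrival time accumulates as
    1 / speed along the cheapest 4-connected path (Dijkstra on a grid,
    a first-order approximation of fast marching)."""
    h, w = speed.shape
    arrival = np.full((h, w), np.inf)
    arrival[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if t > arrival[i, j]:
            continue  # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                nt = t + 1.0 / max(speed[ni, nj], 1e-9)
                if nt < arrival[ni, nj]:
                    arrival[ni, nj] = nt
                    heapq.heappush(heap, (nt, (ni, nj)))
    return arrival

# Two bright "lumens" separated by a dark septum (column 4): the low
# speed on the septum stops the front from leaking into the second lumen.
img = np.zeros((9, 9)); img[:, :4] = 1.0; img[:, 5:] = 1.0
speed = np.where(img > 0.5, 1.0, 0.01)
arrival = propagate_front(speed, (4, 0))
lumen_1 = arrival < 20          # threshold the arrival-time map
print(lumen_1[:, :4].all(), lumen_1[:, 5:].any())  # True False
```

    Seeding the second lumen separately then yields the two masks independently, which is the behaviour the modified speed function is designed to achieve.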

  9. Comparison of automated brain segmentation using a brain phantom and patients with early Alzheimer's dementia or mild cognitive impairment.

    PubMed

    Fellhauer, Iven; Zöllner, Frank G; Schröder, Johannes; Degen, Christina; Kong, Li; Essig, Marco; Thomann, Philipp A; Schad, Lothar R

    2015-09-30

    Magnetic resonance imaging (MRI) and brain volumetry allow for the quantification of changes in brain volume using automatic algorithms which are widely used in both clinical and scientific studies. However, studies comparing the reliability of these programmes are scarce and have mainly involved MRI derived from younger healthy controls. This study evaluates the reliability of frequently used segmentation programmes (SPM, FreeSurfer, FSL) using a realistic digital brain phantom and MRI brain acquisitions from patients with manifest Alzheimer's disease (AD, n=34), mild cognitive impairment (MCI, n=60), and healthy subjects (n=32) matched for age and sex. Analysis of the brain phantom dataset demonstrated that SPM, FSL and FreeSurfer underestimate grey matter and overestimate white matter volumes with increasing noise. FreeSurfer calculated overall smaller brain volumes with increasing noise. Image inhomogeneity had only minor, non-significant effects on the results obtained with SPM and FreeSurfer 5.1, but did affect the FSL results (increased white matter volumes with decreased grey matter volumes). The analysis of the patient data yielded decreasing volumes of grey and white matter with progression of brain atrophy, independent of the method used. FreeSurfer calculated the largest grey matter and the smallest white matter volumes. FSL calculated the smallest grey matter volumes; SPM the largest white matter volumes. The best results are obtained with good image quality. With poor image quality, especially noise, SPM provides the best segmentation results. An optimised template for segmentation had no significant effect on segmentation results. While our findings underline the applicability of the programmes investigated, SPM may be the programme of choice when MRIs with limited image quality or brain images of elderly subjects are to be analysed. PMID:26211622

  10. Atmospheric analysis and prediction model development, volume 1

    NASA Technical Reports Server (NTRS)

    Kesel, P. G.; Wellck, R. E.; Langland, R. A.; Lewit, H. L.

    1976-01-01

    A set of hemispheric atmospheric analysis and prediction models was designed and tested. All programs were executed on either a 63 x 63 or 187 x 187 polar stereographic grid of the Northern Hemisphere. Parameters for objective analysis included sea surface temperature, sea level pressure, and twelve levels (from 1,000 to 100 millibars) of temperatures, heights, and winds. Stratospheric extensions (up to 10 millibars) were also provided. Four versions of a complex atmospheric prediction model, based on primitive equations, were programmed and tested. These models were executed on either the 63 x 63 or 187 x 187 grid, using either five or ten computational layers. The coarse-mesh (63 x 63) models were tested using real data for the period 21-23 April 1976. The fine-mesh (187 x 187) models were debugged, but insufficient computer resources precluded production tests. Preliminary test results for the 63 x 63 models are provided. Problem areas and proposed solutions are discussed.

  11. Spaceborne power systems preference analyses. Volume 2: Decision analysis

    NASA Technical Reports Server (NTRS)

    Smith, J. H.; Feinberg, A.; Miles, R. F., Jr.

    1985-01-01

    Sixteen alternative spaceborne nuclear power system concepts were ranked using multiattribute decision analysis. The purpose of the ranking was to identify promising concepts for further technology development and the issues associated with such development. Four groups were interviewed to obtain preferences: safety, systems definition and design, technology assessment, and mission analysis. The highest ranked systems were the heat-pipe thermoelectric, heat-pipe Stirling, in-core thermionic, and liquid-metal thermoelectric systems. The next group contained the liquid-metal Stirling, heat-pipe Alkali Metal Thermoelectric Converter (AMTEC), heat-pipe Brayton, liquid-metal out-of-core thermionic, and heat-pipe Rankine systems. The least preferred systems were the liquid-metal AMTEC, heat-pipe thermophotovoltaic, liquid-metal Brayton and Rankine, and gas-cooled Brayton. The three nonheat-pipe technologies selected matched the top three nonheat-pipe systems ranked by this study.
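    The core of such a multiattribute ranking can be sketched as a weighted sum of normalised attribute scores. The concept names below come from the abstract, but the attributes, scores and weights are invented for illustration and are not the study's elicited values.

```python
import numpy as np

# Hypothetical example: rank three concepts by a weighted sum of
# normalised attribute scores (all numbers invented for illustration).
alternatives = ["heat-pipe thermoelectric", "in-core thermionic", "gas-cooled Brayton"]
# Rows: alternatives; columns: attribute scores on a 0-1 scale
# (safety, mass efficiency, technology readiness).
scores = np.array([
    [0.9, 0.6, 0.8],
    [0.7, 0.8, 0.6],
    [0.5, 0.7, 0.4],
])
weights = np.array([0.5, 0.2, 0.3])  # elicited from stakeholders; sums to 1

utility = scores @ weights           # one aggregate utility per alternative
ranking = sorted(zip(alternatives, utility), key=lambda p: -p[1])
for name, u in ranking:
    print(f"{u:.2f}  {name}")
```

    In practice the weights would be elicited separately from each stakeholder group (safety, design, technology assessment, mission analysis) and the rankings compared.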

  12. Integrated operations/payloads/fleet analysis. Volume 2: Payloads

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The payloads for NASA and non-NASA missions of the integrated fleet are analyzed to generate payload data for the capture and cost analyses for the period 1979 to 1990. Most of the effort is on earth satellites, probes, and planetary missions because of the space shuttle's ability to retrieve payloads for repair, overhaul, and maintenance. Four types of payloads are considered: current expendable payload; current reusable payload; low cost expendable payload, (satellite to be used with expendable launch vehicles); and low cost reusable payload (satellite to be used with the space shuttle/space tug system). Payload weight analysis, structural sizing analysis, and the influence of mean mission duration on program cost are also discussed. The payload data were computerized, and printouts of the data for payloads for each program or mission are included.

  13. Journal of Quantitative Analysis in Volume 6, Issue 3 2010 Article 8

    E-print Network

    Jensen, Shane T.

    Journal of Quantitative Analysis in Sports, Volume 6, Issue 3 (2010), Article 8: A Point-Mass Mixture Random Effects Model for Pitching Metrics, by James Piette, Alexander Braunstein, Blakeley B. McShane and Shane T. Jensen. … pitchers may have some ability to prevent hits on balls in play, the effect is small, and any effect …

  14. Journal of Quantitative Analysis in Volume 3, Issue 3 2007 Article 2

    E-print Network

    Jensen, Shane T.

    Journal of Quantitative Analysis in Sports, Volume 3, Issue 3 (2007), Article 2: Evaluating Throwing … of unsuccessful events. In addition, there is a more subtle effect of throwing ability that is not captured … but does not consider the influence of outfield ball-in-play location on these events.

  15. Site-SpecificAnalysis&Management Agronomy Journal Volume 100, Issue 5 2008 1463

    E-print Network

    Site-Specific Analysis & Management, Agronomy Journal, Volume 100, Issue 5 (2008), p. 1463. … on temperature, precipitation, solar radiation, and county corn and soybean yields throughout the United States … different factors, such as topography, soil properties, and management practices (Lamb et al., 1997) …

  16. Advances in Analysis of Behaviour, Volume 3, Edited by M. D. Zeiler and P. Harzem

    E-print Network

    Timberlake, William D.

    Advances in Analysis of Behaviour, Volume 3, edited by M. D. Zeiler and P. Harzem, © 1983 John Wiley. … a circulating anecdote. Thousands of cats on thousands of occasions sit helplessly yowling, and no one takes thought of it or writes to his friend, the professor; but let one cat claw at the knob of a door …

  17. Fast Analysis of Intracranial Aneurysms based on Interactive Direct Volume Rendering and CTA

    E-print Network

    Blanz, Volker

    Fast Analysis of Intracranial Aneurysms based on Interactive Direct Volume Rendering and CTA. … of intracranial aneurysms and the planning of related interventions is effectively assisted by spiral CT … aneurysms directly within the 3D viewer. Thereby, the expensive material required for coiling procedures …

  18. CITY OF TAMPA MANAGEMENT ANALYSIS AND REPORT SYSTEM (MARS). VOLUME 1. CASE STUDY

    EPA Science Inventory

    This three-volume report describes the development and implementation of a management analysis and report system (MARS) in the Tampa, Florida, Water and Sanitary Sewer Departments. Original system development was based on research conducted in a smaller water utility in Kenton Co...

  19. CITY OF TAMPA MANAGEMENT ANALYSIS AND REPORT SYSTEM (MARS). VOLUME 2. OPERATIONS MANUAL

    EPA Science Inventory

    This three-volume report describes the development and implementation of a management analysis and report system (MARS) in the Tampa, Florida, Water and Sanitary Sewer Departments. Original system development was based on research conducted in a smaller water utility in Kenton Co...

  20. Waste Isolation Pilot Plant Geotechnical Analysis Report for July 2005 - June 2006, Volume 2, Supporting Data

    SciTech Connect

    Washington TRU Solutions LLC

    2007-03-25

    This report is a compilation of geotechnical data presented as plots for each active instrument installed in the underground at the Waste Isolation Pilot Plant (WIPP) through June 30, 2006. A summary of the geotechnical analyses that were performed using the enclosed data is provided in Volume 1 of the Geotechnical Analysis Report (GAR).

  1. Geotechnical Analysis Report for July 2004 - June 2005, Volume 2, Supporting Data

    SciTech Connect

    Washington TRU Solutions LLC

    2006-03-20

    This report is a compilation of geotechnical data presented as plots for each active instrument installed in the underground at the Waste Isolation Pilot Plant (WIPP) through June 30, 2005. A summary of the geotechnical analyses that were performed using the enclosed data is provided in Volume 1 of the Geotechnical Analysis Report (GAR).

  2. A STANDARD PROCEDURE FOR COST ANALYSIS OF POLLUTION CONTROL OPERATIONS. VOLUME I. USER GUIDE

    EPA Science Inventory

    Volume I is a user guide for a standard procedure for the engineering cost analysis of pollution abatement operations and processes. The procedure applies to projects in various economic sectors: private, regulated, and public. The models are consistent with cost evaluation pract...

  3. Functional Analysis of the Vertebral Column based on MR and Direct Volume Rendering

    E-print Network

    Blanz, Volker

    Functional Analysis of the Vertebral Column based on MR and Direct Volume Rendering. P. Hastreiter … of Neurosurgery, University of Erlangen-Nuremberg, Germany. Abstract: Degenerative diseases of the vertebral column are mainly combined with misalignments of the intervertebral discs and deformations of the spinal cord …

  4. SOLVENT-BASED TO WATERBASED ADHESIVE-COATED SUBSTRATE RETROFIT - VOLUME I: COMPARATIVE ANALYSIS

    EPA Science Inventory

    This volume represents the analysis of case study facilities' experience with waterbased adhesive use and retrofit requirements. (NOTE: The coated and laminated substrate manufacturing industry was selected as part of NRMRL'S support of the 33/50 Program because of its significan...

  5. Content Analysis of the "Journal of Counseling & Development": Volumes 74 to 84

    ERIC Educational Resources Information Center

    Blancher, Adam T.; Buboltz, Walter C.; Soper, Barlow

    2010-01-01

    A content analysis of the research published in the "Journal of Counseling & Development" ("JCD") was conducted for Volumes 74 (1996) through 84 (2006). Frequency distributions were used to identify the most published authors and their institutional affiliations, as well as some basic characteristics (type of sample, gender, and ethnicity) of the…

  6. CITY OF TAMPA MANAGEMENT ANALYSIS AND REPORT SYSTEM (MARS). VOLUME 3. PROGRAMMING MANUAL

    EPA Science Inventory

    This three-volume report describes the development and implementation of a management analysis and report system (MARS) in the Tampa, Florida, Water and Sanitary Sewer Departments. MARS will help both the Water and Sanitary Sewer Departments control costs and manage expanding ser...

  7. nature genetics volume 25 june 2000 239 Gene Index analysis of the human genome estimates

    E-print Network

    Salzberg, Steven

    Letter. Nature Genetics, volume 25, June 2000, p. 239: Gene Index analysis of the human genome … the dbEST division of GenBank. These were 'cleaned' to remove contaminating sequences (see Methods, http://genetics.nature …) … (ET) sequences from the TIGR expressed gene anatomy database (EGAD; http://www.tigr.org/tdb …)

  8. The Journal of Fourier Analysis and Applications Volume 8, Issue 1, 2002

    E-print Network

    Behmard, Hamid

    The Journal of Fourier Analysis and Applications, Volume 8, Issue 1, 2002: Sampling of Bandlimited … ABSTRACT: We consider Shannon sampling theory for sampling sets which are unions of shifted lattices … theorem. For a more detailed introduction to sampling theory on LCA groups we refer to the recent article …

  9. Structural analysis of cylindrical thrust chambers, volume 3

    NASA Technical Reports Server (NTRS)

    Pearson, M. L.

    1981-01-01

    A system of three computer programs is described for use in conjunction with the BOPACE finite element program. The programs are demonstrated by analyzing cumulative plastic deformation in a regeneratively cooled rocket thrust chamber. The codes provide the capability to predict geometric and material nonlinear behavior of cyclically loaded structures without performing a cycle-by-cycle analysis over the life of the structure. The program set consists of a BOPACE restart tape reader routine, an extrapolation program, and a plot package.

  10. Satellite services system analysis study. Volume 5: Programmatics

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The overall program and resources needed for development and operation of a Satellite Services System is reviewed. Program requirements covered system operations through 1993 and were completed in preliminary form. Program requirements were refined based on equipment preliminary design and analysis. Schedules, costs, equipment utilization, and facility/advanced technology requirements were included in the update. Equipment user charges were developed for each piece of equipment and for representative satellite servicing missions.

  11. Space tug economic analysis study. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1972-01-01

    An economic analysis of space tug operations is presented. The space tug is defined as any liquid propulsion stage under 100,000 pounds propellant loading that is flown from the space shuttle cargo bay. Two classes of vehicles are the orbit injection stages and reusable space tugs. The vehicle configurations, propellant combinations, and operating modes used for the study are reported. The summary contains data on the study approach, results, conclusions, and recommendations.

  12. Dynamic contact analysis technique for rapidly sliding elastic bodies with node-to-segment contact and differentiated constraints

    NASA Astrophysics Data System (ADS)

    Lee, Kisu

    2014-04-01

    For a stabilized Newmark time integration of dynamic contact problems of rapidly sliding bodies, considering the equality and inequality contact constraints and a high-speed contact point sliding on the deforming contact surface, the velocity and acceleration contact constraints are derived. Also, to suppress the numerical oscillations accompanying the node-to-segment contact of the finite element models, a pseudo-node-to-node contact technique is suggested, using linear shape function elements with almost equal segment lengths on the contact surface. The numerical simulations are performed with a high-speed punch moving on a beam and with high-speed rotating disks to check the stability and accuracy of the solution.
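    The paper's stabilized contact formulation is not reproduced here, but the underlying Newmark time integration can be sketched for a single linear degree of freedom. This uses the standard implicit average-acceleration scheme (beta=1/4, gamma=1/2, the common unconditionally stable default, not necessarily the paper's parameters).

```python
import numpy as np

def newmark_step(m, c, k, f_next, u, v, a, dt, beta=0.25, gamma=0.5):
    """One implicit Newmark-beta step for the linear system m*u'' + c*u' + k*u = f."""
    # Effective stiffness and effective load for the implicit update.
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
    f_eff = (f_next
             + m * (u / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
             + c * (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                    + dt * (gamma / (2 * beta) - 1) * a))
    u_next = f_eff / k_eff
    a_next = ((u_next - u) / (beta * dt**2) - v / (beta * dt)
              - (1 / (2 * beta) - 1) * a)
    v_next = v + dt * ((1 - gamma) * a + gamma * a_next)
    return u_next, v_next, a_next

# Free vibration of an undamped unit oscillator: u'' + u = 0, u(0)=1, v(0)=0.
u, v, a = 1.0, 0.0, -1.0          # a(0) = -(k/m) * u(0)
dt = 0.01
for _ in range(628):              # roughly one period, 2*pi / dt steps
    u, v, a = newmark_step(1.0, 0.0, 1.0, 0.0, u, v, a, dt)
print(round(u, 3), round(v, 3))   # should return close to the initial state
```

    After one full period the state returns nearly to (1, 0); the average-acceleration scheme conserves energy for linear problems, which is why contact formulations build on it.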

  13. X-ray diffraction strain analysis of a single axial InAs1–xPx nanowire segment

    PubMed Central

    Keplinger, Mario; Mandl, Bernhard; Kriegner, Dominik; Holý, Václav; Samuelsson, Lars; Bauer, Günther; Deppert, Knut; Stangl, Julian

    2015-01-01

    The spatial strain distribution in and around a single axial InAs1–xPx hetero-segment in an InAs nanowire was analyzed using nano-focused X-ray diffraction. In connection with finite-element-method simulations a detailed quantitative picture of the nanowire’s inhomogeneous strain state was achieved. This allows for a detailed understanding of how the variation of the nanowire’s and hetero-segment’s dimensions affect the strain in its core region and in the region close to the nanowire’s side facets. Moreover, ensemble-averaging high-resolution diffraction experiments were used to determine statistical information on the distribution of wurtzite and zinc-blende crystal polytypes in the nanowires. PMID:25537589

  14. Analysis of benefits and costs (ABC's) guideline: Volume 2, An analyst's handbook for analysis of benefits and costs

    SciTech Connect

    Not Available

    1988-06-01

    This handbook is for information technology personnel with little or no previous experience in the analysis of benefits and costs (ABC). This handbook describes the essential concepts and procedures necessary to conduct an ABC. It also explains the use of ABC's to support decisions related to information resources management (IRM). A companion volume,''A Manager's Guide to Analysis of Benefits and Costs,'' explains the importance of ABC's to the Department of Energy's IRM decision making.
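    The central ABC computation, discounting benefit and cost streams to a net present value (NPV) and a benefit-cost ratio, can be sketched as follows. The cash flows and discount rate are hypothetical, not figures from the handbook.

```python
def npv(cash_flows, rate):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical IT project: benefits and costs per year (values invented).
benefits = [0, 40_000, 60_000, 60_000]
costs    = [100_000, 10_000, 10_000, 10_000]
rate = 0.07  # assumed discount rate

bcr = npv(benefits, rate) / npv(costs, rate)
net = npv([b - c for b, c in zip(benefits, costs)], rate)
print(f"NPV of net benefits: {net:,.0f}")
print(f"Benefit-cost ratio:  {bcr:.2f}")
```

    A project is conventionally considered worthwhile when the discounted benefit-cost ratio exceeds 1 (equivalently, when the NPV of net benefits is positive).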

  15. An investigation of wing buffeting response at subsonic and transonic speeds. Phase 1: F-111A flight data analysis. Volume 3: Tabulated power spectra

    NASA Technical Reports Server (NTRS)

    Benepe, D. B.; Cunningham, A. M., Jr.; Dunmyer, W. D.

    1978-01-01

    Volume 3 of this three volume report is presented. This volume presents power spectral density in tabular form for the convenience of those who might wish to perform additional analysis. Some of the information contained in Volume 1 is again repeated (as in volume 2) in this volume to allow the reader to identify the specific conditions appropriate to each tabular listing and for further analysis.

  16. Automatic segmentation of occluded vasculature via pulsatile motion analysis in endoscopic robot-assisted partial nephrectomy video.

    PubMed

    Amir-Khalili, Alborz; Hamarneh, Ghassan; Peyrat, Jean-Marc; Abinahed, Julien; Al-Alao, Osama; Al-Ansari, Abdulla; Abugharbieh, Rafeef

    2015-10-01

    Hilar dissection is an important and delicate stage in partial nephrectomy, during which surgeons remove connective tissue surrounding renal vasculature. Serious complications arise when the occluded blood vessels, concealed by fat, are missed in the endoscopic view and as a result are not appropriately clamped. Such complications may include catastrophic blood loss from internal bleeding and associated occlusion of the surgical view during the excision of the cancerous mass (due to heavy bleeding), both of which may compromise the visibility of surgical margins or even result in a conversion from a minimally invasive to an open intervention. To aid in vessel discovery, we propose a novel automatic method to segment occluded vasculature by labeling minute pulsatile motion that is otherwise imperceptible to the naked eye. Our segmentation technique extracts subtle tissue motions using a technique adapted from phase-based video magnification, in which we measure motion from periodic changes in local phase information, albeit for labeling rather than magnification. Based on measuring local phase through spatial decomposition of each frame of the endoscopic video using complex wavelet pairs, our approach assigns segmentation labels by detecting regions exhibiting temporal local phase changes matching the heart rate. We demonstrate how our technique is a practical solution for time-critical surgical applications by presenting quantitative and qualitative performance evaluations of our vessel detection algorithms with a retrospective study of fifteen clinical robot-assisted partial nephrectomies. PMID:25977157
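    The published method measures local phase via complex wavelet decomposition, but the heart-rate-matching step can be illustrated much more crudely with a per-pixel temporal FFT on raw intensity of a synthetic "video" (all signal parameters below are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

fps, seconds, heart_rate_hz = 30, 4, 1.25   # 75 bpm, on an FFT bin
t = np.arange(fps * seconds) / fps

# Synthetic 16x16 video: the left half pulses at the heart rate, buried in noise.
video = rng.normal(0, 0.5, (t.size, 16, 16))
video[:, :, :8] += 0.8 * np.sin(2 * np.pi * heart_rate_hz * t)[:, None, None]

# Per-pixel temporal spectrum; label pixels whose dominant frequency
# falls within a band around the heart rate.
spectrum = np.abs(np.fft.rfft(video - video.mean(axis=0), axis=0))
freqs = np.fft.rfftfreq(t.size, d=1 / fps)
dominant = freqs[np.argmax(spectrum[1:], axis=0) + 1]   # skip the DC bin
mask = np.abs(dominant - heart_rate_hz) < 0.2

print("pulsatile pixels found:", int(mask[:, :8].sum()), "of 128")
print("false positives:", int(mask[:, 8:].sum()))
```

    The real system replaces raw intensity with local phase (which is far more sensitive to sub-pixel motion) and estimates the heart rate rather than assuming it.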

  17. Comparison of gray matter volume and thickness for analysis of cortical changes in Alzheimer's disease

    NASA Astrophysics Data System (ADS)

    Liu, Jiachao; Li, Ziyi; Chen, Kewei; Yao, Li; Wang, Zhiqun; Li, Kunchen; Guo, Xiaojuan

    2011-03-01

    Gray matter volume and cortical thickness are two indices of concern in brain structure magnetic resonance imaging research. Gray matter volume reflects mixed-measurement information of the cerebral cortex, while cortical thickness reflects only the distance between the inner and outer surfaces of the cerebral cortex. Using Scaled Subprofile Modeling based on Principal Component Analysis (SSM_PCA) and Pearson's correlation analysis, this study further provided quantitative comparisons and depicted both global and local relevance to comprehensively investigate morphometrical abnormalities in the cerebral cortex in Alzheimer's disease (AD). Thirteen patients with AD and thirteen age- and gender-matched healthy controls were included in this study. Results showed that factor scores from the first 8 principal components accounted for ~53.38% of the total variance for gray matter volume, and ~50.18% for cortical thickness. Factor scores from the fifth principal component showed significant correlation. In addition, gray matter voxel-based volume was closely related to cortical thickness alterations in most of the cerebral cortex, especially in typically abnormal brain regions such as the insula and the parahippocampal gyrus in AD. These findings suggest that these two measurements are effective indices for understanding the neuropathology in AD. Studies using both gray matter volume and cortical thickness can separate the causes of the discrepancy, provide complementary information and carry out a comprehensive description of the morphological changes of brain structure.
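    The variance-accounting step in an SSM_PCA-style analysis can be sketched as follows, with synthetic data standing in for the subjects-by-voxels matrices (the ~53%/~50% figures above come from the real data, not this toy):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for a subjects-by-features matrix
# (26 subjects, 500 features; the real study used gray matter maps).
X = rng.normal(size=(26, 500))
X -= X.mean(axis=0)                      # center the data, as in PCA

# Principal components via SVD; squared singular values give
# the variance carried by each component.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)

print(f"variance explained by first 8 PCs: {explained[:8].sum():.2%}")
```

    Subject factor scores (the rows of `U` scaled by `s`) are what the study then correlated between the volume-based and thickness-based decompositions.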

  18. Development of a rotorcraft. Propulsion dynamics interface analysis, volume 2

    NASA Technical Reports Server (NTRS)

    Hull, R.

    1982-01-01

    A study was conducted to establish a coupled rotor/propulsion analysis that would be applicable to a wide range of rotorcraft systems. The effort included the following tasks: (1) development of a model structure suitable for simulating a wide range of rotorcraft configurations; (2) definition of a methodology for parameterizing the model structure to represent a particular rotorcraft; (3) construction of a nonlinear coupled rotor/propulsion model as a test case for analyzing coupled system dynamics; and (4) an attempted development of a mostly linear coupled model derived from the complete nonlinear simulations. Documentation of the computer models developed is presented.

  19. Development of a rotorcraft. Propulsion dynamics interface analysis, volume 1

    NASA Technical Reports Server (NTRS)

    Hull, R.

    1982-01-01

    The details of the modeling process and its implementation approach are presented. A generic methodology and model structure for performing coupled propulsion/rotor response analysis that is applicable to a variety of rotorcraft types was developed. A method for parameterizing the model structure to represent a particular rotorcraft is defined. The generic modeling methodology, the development of the propulsion system and the rotor/fuselage models, and the formulation of the resulting coupled rotor/propulsion system model are described. A test case that was developed is described.

  20. Analysis of space tug operating techniques. Volume 2: Study results

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The design requirements for space tug systems and cost analysis of the refurbishment phases are discussed. The vehicle is an integral propulsion stage using liquid hydrogen and liquid oxygen as propellants and is capable of operating either as a fully or a partially autonomous vehicle. Structural features are an integral liquid hydrogen tank, a liquid oxygen tank, a meteoroid shield, an aft conical docking and structural support ring, and a staged combustion main engine. The vehicle is constructed of major modules for ease of maintenance. Line drawings and block diagrams are included to explain the maintenance requirements for the subsystems.

  1. Shared Segment Analysis and Next-Generation Sequencing Implicates the Retinoic Acid Signaling Pathway in Total Anomalous Pulmonary Venous Return (TAPVR)

    PubMed Central

    Nash, Dustin; Arrington, Cammon B.; Kennedy, Brett J.; Yandell, Mark; Wu, Wilfred; Zhang, Wenying; Ware, Stephanie; Jorde, Lynn B.; Gruber, Peter J.; Yost, H. Joseph

    2015-01-01

    Most isolated congenital heart defects are thought to be sporadic and are often ascribed to multifactorial mechanisms with poorly understood genetics. Total Anomalous Pulmonary Venous Return (TAPVR) occurs in 1 in 15,000 live-born infants and occurs either in isolation or as part of a syndrome involving aberrant left-right development. Previously, we reported causative links between TAPVR and the PDGFRA gene. TAPVR has also been linked to the ANKRD1/CARP genes. However, these genes only explain a small fraction of the heritability of the condition. By examination of phased single nucleotide polymorphism genotype data from 5 distantly related TAPVR patients we identified a single 25 cM shared, identical-by-descent genomic segment on the short arm of chromosome 12 shared by 3 of the patients and their obligate-carrier parents. Whole genome sequence (WGS) analysis identified a non-synonymous variant within the shared segment in the retinol binding protein 5 (RBP5) gene. The RBP5 variant is predicted to be deleterious and is overrepresented in the TAPVR population. Gene expression and functional analysis of the zebrafish orthologue, rbp7, supports the notion that RBP5 is a TAPVR susceptibility gene. Additional sequence analysis also uncovered deleterious variants in genes associated with retinoic acid signaling, including NODAL and retinol dehydrogenase 10. These data indicate that genetic variation in the retinoic acid signaling pathway confers, in part, susceptibility to TAPVR. PMID:26121141
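    Shared-segment detection can be caricatured as finding the longest run of positions at which all carrier haplotypes agree. This is a toy sketch on invented strings; the study used phased SNP genotypes and identical-by-descent inference across distantly related patients, not literal string matching.

```python
# Hypothetical phased haplotypes for three carriers (invented data).
haplotypes = [
    "AACGTTACGGAT",
    "TACGTTACGCAT",
    "CACGTTACGTAT",
]

def longest_shared_run(haps):
    """Longest contiguous run of sites where every haplotype carries
    the same allele; returns (start index, run length)."""
    best = cur = best_start = 0
    for i, alleles in enumerate(zip(*haps)):
        if len(set(alleles)) == 1:      # all carriers match at this site
            cur += 1
            if cur > best:
                best, best_start = cur, i - cur + 1
        else:
            cur = 0
    return best_start, best

start, length = longest_shared_run(haplotypes)
print(start, length, haplotypes[0][start:start + length])
```

    In the real analysis, candidate segments found this way are then filtered by genetic length (cM), since long shared segments are unlikely to be shared by chance among distant relatives.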

  2. Wind tunnel test IA300 analysis and results, volume 1

    NASA Technical Reports Server (NTRS)

    Kelley, P. B.; Beaufait, W. B.; Kitchens, L. L.; Pace, J. P.

    1987-01-01

    The analysis and interpretation of wind tunnel pressure data from the Space Shuttle wind tunnel test IA300 are presented. The primary objective of the test was to determine the effects of the Space Shuttle Main Engine (SSME) and the Solid Rocket Booster (SRB) plumes on the integrated vehicle forebody pressure distributions, the elevon hinge moments, and wing loads. The results of this test will be combined with flight test results to form a new data base to be employed in the IVBC-3 airloads analysis. A secondary objective was to obtain solid plume data for correlation with the results of gaseous plume tests. Data from the power level portion were used in conjunction with flight base pressures to evaluate nominal power levels to be used during the investigation of changes in model attitude, elevon deflection, and nozzle gimbal angle. The plume induced aerodynamic loads were developed for the Space Shuttle bases and forebody areas. A computer code was developed to integrate the pressure data. Using simplified geometrical models of the Space Shuttle elements and components, the pressure data were integrated to develop plume induced force and moment coefficients that can be combined with a power-off data base to develop a power-on data base.

  3. HV-SOFAST: High Volume Sandia Optical Fringe Analysis

    Energy Science and Technology Software Center (ESTSC)

    2012-09-13

    SOFAST is used to characterize the surface slope of reflective mirrors for solar applications. SOFAST uses a large monitor or projection screen to display fringe patterns, and a machine vision camera to image the reflection of these patterns in the subject mirror. From these images, a detailed map of surface normals can be generated and compared to design or fitted mirror shapes. SOFAST uses standard Fringe Reflection (Deflectometry) approaches to measure the mirror surface normals. SOFAST uses an extrinsic analysis of key points on the facet to locate the camera and monitor relative to the facet coordinate system. It then refines this position based on the measured surface slope and integrated shape of the mirror facet. The facet is placed into a reference frame such that key points on the facet match the design facet in orientation and position. This is key to evaluating a facet as suitable for a specific solar application. SOFAST reports the measurements of the facet as detailed surface normal locations in a format suitable for ray tracing optical analysis codes. SOFAST also reports summary information as to the facet fitted shape (monomial) and error parameters. Useful plots of the error distribution are also presented.
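    SOFAST's extrinsic calibration and slope integration are not reproduced here, but the fringe-analysis core common to deflectometry, recovering a phase map from phase-shifted sinusoidal patterns, can be sketched with the standard four-step formula on synthetic one-dimensional data:

```python
import numpy as np

# Four-step phase shifting: the screen displays a sinusoidal fringe pattern
# shifted by 0, 90, 180 and 270 degrees; the phase seen at each camera pixel
# encodes which part of the screen that pixel reflects.
true_phase = np.linspace(0, 2 * np.pi, 100, endpoint=False)
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
images = [0.5 + 0.4 * np.cos(true_phase + d) for d in shifts]  # I(x) per shift

I0, I1, I2, I3 = images
recovered = np.arctan2(I3 - I1, I0 - I2)   # standard 4-step phase formula

# Recovered phase matches the true phase modulo 2*pi.
err = np.angle(np.exp(1j * (recovered - true_phase)))
print("max phase error:", float(np.max(np.abs(err))))
```

    The arctangent cancels both the background level and the fringe contrast, which is why phase-shifting is robust to uneven illumination; mapping phase to screen position and then to surface normals is the geometric step SOFAST's calibration provides.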

  4. Segmentation of Ultrasound Images for Tumor Surgery

    NASA Astrophysics Data System (ADS)

    Gutiérrez Medina, L. R.; Arámbula Cosío, F.; Hazan Lasri, E.

    2006-09-01

    A surgical navigator for treatment of tumors in the musculoskeletal system is being developed at the Image Analysis and Visualization Lab. of CCADET, UNAM. The navigator is designed to assist the surgeon during radiofrequency (RF) ablation of the tumors, through real time computer graphics models of the tumor, the adjacent structures (bones), and the active volume of the RF probe. The three dimensional model of the tumor and adjacent structures will be constructed from a preoperative MRI study and then registered intraoperatively to the patient using an optically tracked ultrasound probe. In this paper we report our preliminary results on the semiautomatic segmentation of the tumor and adjacent bones in ultrasound images. The use of ultrasound for intraoperative registration has many advantages, such as relatively low cost, portability, and avoidance of radiation exposure and fiducial markers.

  5. Neural network for image segmentation

    NASA Astrophysics Data System (ADS)

    Skourikhine, Alexei N.; Prasad, Lakshman; Schlei, Bernd R.

    2000-10-01

    Image analysis is an important requirement of many artificial intelligence systems. Though great effort has been devoted to inventing efficient algorithms for image analysis, there is still much work to be done. It is natural to turn to mammalian vision systems for guidance because they are the best known performers of visual tasks. The pulse-coupled neural network (PCNN) model of the cat visual cortex has proven to have interesting properties for image processing. This article describes the PCNN application to the processing of images of heterogeneous materials; specifically, PCNN is applied to image denoising and image segmentation. Our results show that PCNNs do well at segmentation if we perform image smoothing prior to segmentation. We use PCNN for both smoothing and segmentation. Combining smoothing and segmentation enables us to eliminate PCNN sensitivity to the settings of the various PCNN parameters, whose optimal selection can be difficult and can vary even for the same problem. This approach makes image processing based on PCNN more automatic in our application and also results in better segmentation.
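    A minimal PCNN sketch (simplified from the full model: no feeding decay, and all parameter values invented) shows the behaviour segmentation exploits, namely that pixels of similar intensity pulse in the same wave:

```python
import numpy as np

def neighbor_sum(Y):
    """Sum each pixel's 8-neighborhood (zero padding at the border)."""
    P = np.pad(Y, 1)
    return (P[:-2, :-2] + P[:-2, 1:-1] + P[:-2, 2:] +
            P[1:-1, :-2] + P[1:-1, 2:] +
            P[2:, :-2] + P[2:, 1:-1] + P[2:, 2:])

def pcnn_segment(S, steps=20, beta=0.2, vt=10.0, at=0.3):
    """Minimal pulse-coupled network: returns each pixel's first firing
    step; pixels of similar intensity tend to fire in the same wave."""
    theta = np.ones_like(S)               # dynamic threshold
    Y = np.zeros_like(S)                  # pulse output
    fired = np.full(S.shape, -1, dtype=int)
    for n in range(steps):
        L = neighbor_sum(Y)               # linking input from recent pulses
        U = S * (1 + beta * L)            # internal activity
        Y = (U > theta).astype(float)     # pulse when activity beats threshold
        fired[(fired < 0) & (Y > 0)] = n  # record first firing time
        theta = theta * np.exp(-at) + vt * Y  # jump on firing, then decay
    return fired

# Two-region test image: dark background, bright square.
img = np.full((12, 12), 0.3)
img[3:9, 3:9] = 0.9
labels = pcnn_segment(img)
print(np.unique(labels[3:9, 3:9]), np.unique(labels[:2]))
```

    Grouping pixels by first-firing time yields a crude segmentation; the linking term `beta * L` is what lets a pulse wave capture neighbors of slightly different intensity, which is also the source of the parameter sensitivity the article discusses.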

  6. Prevention of altered hemodynamics after spinal anesthesia: A comparison of volume preloading with tetrastarch, succinylated gelatin and ringer lactate solution for the patients undergoing lower segment caesarean section

    PubMed Central

    Mitra, Tapobrata; Das, Anjan; Majumdar, Saikat; Bhattacharyya, Tapas; Mandal, Rahul Deb; Hajra, Bimal Kumar

    2014-01-01

    Background: Spinal anesthesia has replaced general anesthesia in obstetric practice. Hemodynamic instability is a common, but preventable, complication of spinal anesthesia. Preloading the circulation with intravenous fluids is considered a safe and effective method of preventing hypotension following spinal anesthesia. We conducted a study to compare hemodynamic stability after volume preloading with either Ringer's lactate (RL), tetrastarch hydroxyethyl starch (HES) or succinylated gelatin (SG) in patients undergoing cesarean section under spinal anesthesia. Materials and Methods: It was a prospective, double-blinded and randomized controlled study. Ninety-six ASA-I healthy, nonlaboring parturients were randomly divided into 3 groups HES, SG, RL (n = 32 each) and received 10 ml/kg HES 130/0.4, 10 ml/kg SG (4% modified fluid gelatin) and 20 ml/kg RL respectively prior to spinal anesthesia for scheduled cesarean section. Heart rate, blood pressure (BP), and oxygen saturation were measured. Results: A fall in systolic blood pressure (SBP) (<100 mm Hg) was noted in 5 (15.63%), 12 (37.5%) and 14 (43.75%) parturients in groups HES, SG, RL respectively. Vasopressor (phenylephrine) was used to treat hypotension when SBP <90 mm Hg. The results and APGAR scores were comparable in all the groups. Lower preloading volume and less intra-operative vasopressor requirement were noted in the HES group for maintaining BP, though without clinical significance. Conclusion: RL, which is a cheap, physiological and widely available crystalloid, can preload effectively and maintain hemodynamic stability well in cesarean section, and any remnant hypotension is easily manageable with a vasopressor. PMID:25422601

  7. Dioxin analysis of Philadelphia Northwest Incinerator. Summary report. Volume 2. Appendices A - F. Technical report

    SciTech Connect

    Neulicht, R.

    1985-10-31

    A study was conducted by US EPA Region 3 to determine the dioxin-related impact of the Philadelphia Northwest Incinerator on public health. Specifically, it was designed to assess quantitatively the risks to public health resulting from emissions into the ambient air of dioxins as well as the potential effect of deposition of dioxins on the soil in the vicinity of the incinerator. Volume 1 is an executive summary of the study findings. Volume 2 contains contractor reports, laboratory analysis results and other documentation.

  8. An analysis for high speed propeller-nacelle aerodynamic performance prediction. Volume 2: User's manual

    NASA Technical Reports Server (NTRS)

    Egolf, T. Alan; Anderson, Olof L.; Edwards, David E.; Landgrebe, Anton J.

    1988-01-01

    A user's manual for the computer program developed for the prediction of propeller-nacelle aerodynamic performance, reported in An Analysis for High Speed Propeller-Nacelle Aerodynamic Performance Prediction: Volume 1 -- Theory and Application, is presented. The manual describes the computer program's mode of operation, requirements, input structure, input data requirements, and the program output. In addition, it provides the user with documentation of the internal program structure and the software used in the computer program as it relates to the theory presented in Volume 1. Sample input data setups are provided along with selected printout of the program output for one of the sample setups.

  9. Metric Learning for Hyperspectral Image Segmentation

    NASA Technical Reports Server (NTRS)

    Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca

    2011-01-01

    We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This transform defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
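
    The LDA step described above (a linear transform whose induced distances separate labeled classes) can be sketched in a few lines. This is a minimal two-class Fisher LDA illustration; the toy spectra, band count, and regularization term are assumptions for the example, not the authors' setup.

```python
import numpy as np

def lda_direction(X, y):
    """Fisher LDA direction for two labeled classes: w ∝ Sw^-1 (m1 - m0).
    Distances along w form a task-specific similarity measure."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter, lightly regularized for invertibility
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    Sw += 1e-6 * np.eye(X.shape[1])
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

def metric_distance(a, b, w):
    """Distance between two spectra after projection onto the learned direction."""
    return abs(float((a - b) @ w))

# Toy "spectra": 4 bands, classes differ only in band 0
rng = np.random.default_rng(0)
A = rng.normal(0.0, 0.1, size=(50, 4))
B = rng.normal(0.0, 0.1, size=(50, 4))
B[:, 0] += 1.0
X, y = np.vstack([A, B]), np.array([0] * 50 + [1] * 50)
w = lda_direction(X, y)
d_between = metric_distance(A.mean(axis=0), B.mean(axis=0), w)  # across classes
d_within = metric_distance(A[0], A[1], w)                       # within a class
```

    A graph-based segmenter would then weight edges between neighboring pixels with `metric_distance` in place of a task-agnostic Euclidean distance.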

  10. Small V/STOL aircraft analysis, volume 1

    NASA Technical Reports Server (NTRS)

    Smith, K. R., Jr.; Belina, F. W.

    1974-01-01

    A study has been made of the economic viability of advanced V/STOL aircraft concepts in performing general aviation missions. A survey of general aviation aircraft users, operators, and manufacturers indicated that personnel transport missions formulated around business executive needs, commuter air service, and offshore oil supply are the leading potential areas of application for VTOL aircraft. Advanced VTOL concepts potentially available in the late 1970s were evaluated as alternatives to privately owned contemporary aircraft and commercial airline service in satisfying these personnel transport needs. Economic analyses incorporating the traveler's value of time as the principal figure of merit were used to identify the relative merits of alternative VTOL air transportation concepts.

  11. Pulsed Direct Current Electrospray: Enabling Systematic Analysis of Small Volume Sample by Boosting Sample Economy.

    PubMed

    Wei, Zhenwei; Xiong, Xingchuang; Guo, Chengan; Si, Xingyu; Zhao, Yaoyao; He, Muyi; Yang, Chengdui; Xu, Wei; Tang, Fei; Fang, Xiang; Zhang, Sichun; Zhang, Xinrong

    2015-11-17

    We developed pulsed direct current electrospray ionization mass spectrometry (pulsed-dc-ESI-MS) for systematically profiling and determining components in small-volume samples. Pulsed-dc-ESI utilizes a constant high voltage to remotely induce the generation of a single-polarity pulsed electrospray. This method significantly boosts sample economy, yielding several minutes of MS signal from a sample of merely picoliter volume. The elongated MS signal duration enables the collection of abundant MS(2) information on components of interest in a small-volume sample for systematic analysis. The method was successfully applied to single-cell metabolomics. We obtained 2-D profiles of metabolites (including exact mass and MS(2) data) from single plant and mammalian cells, covering 1034 components and 656 components for Allium cepa and HeLa cells, respectively. Further identification found 162 compounds and 28 different modification groups of 141 saccharides in a single Allium cepa cell, indicating that pulsed-dc-ESI is a powerful tool for the systematic analysis of small-volume samples. PMID:26488206

  12. Automated multimodality concurrent classification for segmenting vessels in 3D spectral OCT and color fundus images

    NASA Astrophysics Data System (ADS)

    Hu, Zhihong; Abràmoff, Michael D.; Niemeijer, Meindert; Garvin, Mona K.

    2011-03-01

    Segmenting vessels in spectral-domain optical coherence tomography (SD-OCT) volumes is particularly challenging in the region near and inside the neural canal opening (NCO). Furthermore, accurately segmenting them in color fundus photographs also presents a challenge near the projected NCO. However, both modalities also provide complementary information to help indicate vessels, such as a better NCO contrast from the NCO-aimed OCT projection image and a better vessel contrast inside the NCO from fundus photographs. We thus present a novel multimodal automated classification approach for simultaneously segmenting vessels in SD-OCT volumes and fundus photographs, with a particular focus on better segmenting vessels near and inside the NCO by using a combination of their complementary features. In particular, in each SD-OCT volume, the algorithm pre-segments the NCO using a graph-theoretic approach and then applies oriented Gabor wavelets with oriented NCO-based templates to generate OCT image features. After fundus-to-OCT registration, the fundus image features are computed using Gaussian filter banks and combined with OCT image features. A k-NN classifier is trained on 5 and tested on 10 randomly chosen independent image pairs of SD-OCT volumes and fundus images from 15 subjects with glaucoma. Using ROC analysis, we demonstrate an improvement over the two closest previous works performed on single-modal SD-OCT volumes, with an area under the curve (AUC) of 0.87 (0.81 for our and 0.72 for Niemeijer's single-modal approach) in the region around the NCO and 0.90 outside the NCO (0.84 for our and 0.81 for Niemeijer's single-modal approach).
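
    The gain from fusing the two modalities can be illustrated with a toy k-NN classifier. The feature layout below (two hypothetical "OCT" and two "fundus" feature columns per pixel) is invented for the sketch and does not reproduce the paper's Gabor- and Gaussian-filter features.

```python
import numpy as np

def knn_predict(train_X, train_y, test_X, k=3):
    """Plain k-NN majority vote, a stand-in for the paper's pixel classifier."""
    preds = []
    for x in test_X:
        d = np.linalg.norm(train_X - x, axis=1)   # Euclidean distance to training pixels
        nearest = train_y[np.argsort(d)[:k]]      # labels of the k nearest
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

rng = np.random.default_rng(1)
# Hypothetical per-pixel features: columns 0-1 "OCT", columns 2-3 "fundus"
vessel = rng.normal(1.0, 0.3, size=(40, 4))
background = rng.normal(0.0, 0.3, size=(40, 4))
X = np.vstack([vessel, background])
y = np.array([1] * 40 + [0] * 40)
test_pixels = np.array([[1.0, 1.0, 1.0, 1.0],   # vessel-like in both modalities
                        [0.0, 0.0, 0.0, 0.0]])  # background-like in both
pred_multi = knn_predict(X, y, test_pixels)              # fused features
pred_oct = knn_predict(X[:, :2], y, test_pixels[:, :2])  # OCT features only
```

    In the paper's setting, the fused feature vector helps most where one modality is weak, e.g. inside the NCO where fundus contrast compensates for the OCT projection image.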

  13. A hydrogen energy carrier. Volume 2: Systems analysis

    NASA Technical Reports Server (NTRS)

    Savage, R. L. (editor); Blank, L. (editor); Cady, T. (editor); Cox, K. (editor); Murray, R. (editor); Williams, R. D. (editor)

    1973-01-01

    A systems analysis of hydrogen as an energy carrier in the United States indicated that it is feasible to use hydrogen in all energy use areas, except some types of transportation. These use areas are industrial, residential and commercial, and electric power generation. Saturation concept and conservation concept forecasts of future total energy demands were made. Projected costs of producing hydrogen from coal or from nuclear heat combined with thermochemical decomposition of water are in the range $1.00 to $1.50 per million Btu of hydrogen produced. Other methods are estimated to be more costly. The use of hydrogen as a fuel will require the development of large-scale transmission and storage systems. A pipeline system similar to the existing natural gas pipeline system appears practical, if design factors are included to avoid hydrogen environment embrittlement of pipeline metals. Conclusions from the examination of the safety, legal, environmental, economic, political and societal aspects of hydrogen fuel are that a hydrogen energy carrier system would be compatible with American values and the existing energy system.

  14. Small-Volume Analysis of Cell–Cell Signaling Molecules in the Brain

    PubMed Central

    Romanova, Elena V; Aerts, Jordan T; Croushore, Callie A; Sweedler, Jonathan V

    2014-01-01

    Modern science is characterized by integration and synergy between research fields. Accordingly, as technological advances allow new and more ambitious quests in scientific inquiry, numerous analytical and engineering techniques have become useful tools in biological research. The focus of this review is on cutting-edge technologies that aid direct measurement of bioactive compounds in the nervous system to facilitate fundamental research, diagnostics, and drug discovery. We discuss challenges associated with measurement of cell-to-cell signaling molecules in the nervous system, and advocate for decreasing sample volumes to the nanoliter regime for improved analysis outcomes. We highlight effective approaches for the collection, separation, and detection of such small-volume samples, present strategies for targeted and discovery-oriented research, and describe the required technology advances that will empower future translational science. PMID:23748227

  15. Corneal Segmentation Analysis Increases Glaucoma Diagnostic Ability of Optic Nerve Head Examination, Heidelberg Retina Tomograph's Moorfield's Regression Analysis, and Glaucoma Probability Score

    PubMed Central

    Saenz-Frances, F.; Jañez, L.; Berrozpe-Villabona, C.; Borrego-Sanz, L.; Morales-Fernández, L.; Acebal-Montero, A.; Mendez-Hernandez, C. D.; Martinez-de-la-Casa, J. M.; Santos-Bueso, E.; Garcia-Sanchez, J.; Garcia-Feijoo, J.

    2015-01-01

    Purpose. To study whether a corneal thickness segmentation model, consisting of a central circular zone of 1 mm radius centered at the corneal apex (zone I) and five concentric rings of 1 mm width (moving outwards: zones II to VI), could boost the diagnostic accuracy of Heidelberg Retina Tomograph's (HRT's) MRA and GPS. Material and Methods. Cross-sectional study. 121 healthy volunteers and 125 patients with primary open-angle glaucoma. Six binary multivariate logistic regression models were constructed (MOD-A1, MOD-A2, MOD-B1, MOD-B2, MOD-C1, and MOD-C2). The dependent variable was the presence of glaucoma. In MOD-A1, the predictor was the result (presence of glaucoma) of the analysis of the stereophotography of the optic nerve head (ONH). In MOD-B1 and MOD-C1, the predictor was the result of the MRA and GPS, respectively. In MOD-B2 and MOD-C2, the predictors were the same along with corneal variables: central, overall, and zones I to VI thicknesses. This scheme was reproduced for model MOD-A2 (stereophotography along with corneal variables). Models were compared using the area under the receiver operator characteristic curve (AUC). Results. MOD-A1-AUC: 0.771; MOD-A2-AUC: 0.88; MOD-B1-AUC: 0.736; MOD-B2-AUC: 0.845; MOD-C1-AUC: 0.712; MOD-C2-AUC: 0.838. Conclusion. Corneal thickness variables enhance ONH assessment and HRT's MRA and GPS diagnostic capacity. PMID:26180641
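
    The modeling pattern used here (a baseline logistic regression versus one augmented with additional predictors, compared by AUC) can be sketched as follows. The synthetic "ONH index" and "corneal thickness" variables are stand-ins invented for the example, not the study's data.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Minimal gradient-ascent logistic regression (a sketch, not the study's software)."""
    X = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)        # gradient of the log-likelihood
    return w

def predict_proba(X, w):
    X = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-X @ w))

def auc(y, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(2)
n = 200
base = rng.normal(0, 1, n)       # hypothetical baseline predictor (e.g. an ONH index)
corneal = rng.normal(0, 1, n)    # hypothetical added predictor (e.g. zonal thickness)
y = (0.8 * base + 0.8 * corneal + rng.normal(0, 1, n) > 0).astype(float)
w1 = fit_logistic(base[:, None], y)
w2 = fit_logistic(np.column_stack([base, corneal]), y)
auc1 = auc(y, predict_proba(base[:, None], w1))
auc2 = auc(y, predict_proba(np.column_stack([base, corneal]), w2))
```

    When the added variables carry independent signal, the augmented model's AUC rises, mirroring the MOD-x1 versus MOD-x2 comparisons above.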

  16. Segmenting images analytically in shape space

    NASA Astrophysics Data System (ADS)

    Rathi, Yogesh; Dambreville, Samuel; Niethammer, Marc; Malcolm, James; Levitt, James; Shenton, Martha E.; Tannenbaum, Allen

    2008-03-01

    This paper presents a novel analytic technique to perform shape-driven segmentation. In our approach, shapes are represented using binary maps, and linear PCA is utilized to provide shape priors for segmentation. Intensity based probability distributions are then employed to convert a given test volume into a binary map representation, and a novel energy functional is proposed whose minimum can be analytically computed to obtain the desired segmentation in the shape space. We compare the proposed method with the log-likelihood based energy to elucidate some key differences. Our algorithm is applied to the segmentation of brain caudate nucleus and hippocampus from MRI data, which is of interest in the study of schizophrenia and Alzheimer's disease. Our validation (we compute the Hausdorff distance and the DICE coefficient between the automatic segmentation and ground-truth) shows that the proposed algorithm is very fast, requires no initialization and outperforms the log-likelihood based energy.
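
    A minimal sketch of the shape-prior idea, assuming flattened binary maps and plain linear PCA as described. The toy square shapes and component count are illustrative, and the paper's analytic energy minimization is not reproduced; the sketch only shows projection into the learned shape space.

```python
import numpy as np

def pca_shape_prior(shapes, k=2):
    """Learn a linear PCA shape space from flattened binary training maps."""
    X = shapes.reshape(len(shapes), -1).astype(float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]          # mean shape and top-k eigenshapes

def project_to_shape_space(shape, mean, basis):
    """Project a (possibly corrupted) binary map onto the learned shape space."""
    coeffs = basis @ (shape.ravel().astype(float) - mean)
    recon = mean + basis.T @ coeffs
    return (recon.reshape(shape.shape) > 0.5).astype(int)

# Toy training set: axis-aligned squares of growing size on a 16x16 grid
shapes = np.array([np.pad(np.ones((s, s), int), ((4, 12 - s), (4, 12 - s)))
                   for s in range(3, 9)])
mean, basis = pca_shape_prior(shapes)
noisy = shapes[3].copy()
noisy[0, 0] = 1                      # a spurious pixel far from the shape
denoised = project_to_shape_space(noisy, mean, basis)
```

    Projection suppresses structure the training shapes never exhibit (the spurious pixel) while preserving the consistent interior, which is the role the shape prior plays during segmentation.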

  17. Superpixel segmentation for analysis of hyperspectral data sets, with application to Compact Reconnaissance Imaging Spectrometer for Mars data, Moon Mineralogy Mapper data, and Ariadnes Chaos, Mars

    NASA Astrophysics Data System (ADS)

    Gilmore, Martha S.; Thompson, David R.; Anderson, Laura J.; Karamzadeh, Nader; Mandrake, Lukas; Castaño, Rebecca

    2011-07-01

    We present a semiautomated method to extract spectral end-members from hyperspectral images. This method employs superpixels, which are spectrally homogeneous regions of spatially contiguous pixels. The superpixel segmentation is combined with an unsupervised end-member extraction algorithm. Superpixel segmentation can complement per-pixel classification techniques by reducing both scene-specific noise and computational complexity. The end-member extraction step explores the entire spectrum, recognizes target mineralogies within spectral mixtures, and enhances the discovery of unanticipated spectral classes. The method is applied to Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) images and compared to a manual expert classification and to state-of-the-art image analysis techniques. The technique successfully recognizes all classes identified by the expert, producing spectral end-members that match well to target classes. Application of the technique to CRISM multispectral data and Moon Mineralogy Mapper (M3) hyperspectral data demonstrates the flexibility of the method in the analysis of a range of data sets. The technique is then used to analyze CRISM data in Ariadnes Chaos, Mars, and recognizes both phyllosilicates and sulfates in the chaos mounds. These aqueous deposits likely reflect changing environmental conditions during the Late Noachian/Early Hesperian. This semiautomated focus-of-attention tool will facilitate the identification of materials of interest on planetary surfaces whose constituents are unknown.
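
    A minimal stand-in for the superpixel step: greedy flood-fill into spectrally homogeneous, spatially contiguous regions, followed by per-region mean spectra as end-member candidates. The similarity threshold and the toy two-class scene are assumptions for the example; the paper's actual segmentation algorithm is more sophisticated.

```python
import numpy as np

def grow_superpixels(cube, thresh=0.2):
    """Flood-fill spatially contiguous pixels whose spectra stay within
    `thresh` (Euclidean) of the region's seed spectrum."""
    h, w, _ = cube.shape
    labels = -np.ones((h, w), int)
    next_label = 0
    for i in range(h):
        for j in range(w):
            if labels[i, j] >= 0:
                continue
            seed = cube[i, j]
            stack = [(i, j)]
            labels[i, j] = next_label
            while stack:
                a, b = stack.pop()
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    na, nb = a + da, b + db
                    if (0 <= na < h and 0 <= nb < w and labels[na, nb] < 0
                            and np.linalg.norm(cube[na, nb] - seed) < thresh):
                        labels[na, nb] = next_label
                        stack.append((na, nb))
            next_label += 1
    return labels

# Toy 8x8 "scene" with 3 bands and two spectral classes split left/right
cube = np.zeros((8, 8, 3))
cube[:, :4] = [1.0, 0.2, 0.1]
cube[:, 4:] = [0.1, 0.9, 0.8]
labels = grow_superpixels(cube)
# Mean spectrum of each superpixel: candidate input to end-member extraction
endmembers = [cube[labels == l].mean(axis=0) for l in range(labels.max() + 1)]
```

    Averaging within superpixels suppresses per-pixel noise before end-member extraction, which is the complementarity the abstract describes.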

  18. Addressing the Vaccine Hesitancy Continuum: An Audience Segmentation Analysis of American Adults Who Did Not Receive the 2009 H1N1 Vaccine

    PubMed Central

    Ramanadhan, Shoba; Galarce, Ezequiel; Xuan, Ziming; Alexander-Molloy, Jaclyn; Viswanath, Kasisomayajula

    2015-01-01

    Understanding the heterogeneity of groups along the vaccine hesitancy continuum presents an opportunity to tailor and increase the impact of public engagement efforts with these groups. Audience segmentation can support these goals, as demonstrated here in the context of the 2009 H1N1 vaccine. In March 2010, we surveyed 1569 respondents, drawn from a nationally representative sample of American adults, with oversampling of racial/ethnic minorities and persons living below the United States Federal Poverty Level. Guided by the Structural Influence Model, we assessed knowledge, attitudes, and behaviors related to H1N1; communication outcomes; and social determinants. Among those who did not receive the vaccine (n = 1166), cluster analysis identified three vaccine-hesitant subgroups. Disengaged Skeptics (67%) were furthest from vaccine acceptance, with low levels of concern and engagement. The Informed Unconvinced (19%) were sophisticated consumers of media and health information who may not have been reached with information to motivate vaccination. The Open to Persuasion cluster (14%) had the highest levels of concern and motivation and may have required engagement about vaccination broadly. There were significant sociodemographic differences between groups. This analysis highlights the potential to use segmentation techniques to identify subgroups on the vaccine hesitancy continuum and tailor public engagement efforts accordingly. PMID:26350595
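
    The segmentation workflow (clustering survey respondents on attitude and engagement features, then profiling the clusters) can be sketched with plain k-means as a stand-in for the study's cluster analysis, whose exact algorithm the abstract does not specify. The two features, group centers, and group sizes below are invented for illustration.

```python
import numpy as np

def kmeans(X, init_idx, iters=50):
    """Plain Lloyd's k-means with a fixed initialization."""
    centers = X[np.asarray(init_idx)].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)                       # nearest-center assignment
        centers = np.array([X[assign == c].mean(axis=0)
                            for c in range(len(centers))])
    return assign, centers

# Hypothetical respondent features: (concern about H1N1, media engagement), 0-1 scale
rng = np.random.default_rng(3)
disengaged = rng.normal([0.2, 0.2], 0.05, (60, 2))   # low concern, low engagement
informed = rng.normal([0.5, 0.9], 0.05, (20, 2))     # high engagement, unconvinced
persuadable = rng.normal([0.9, 0.6], 0.05, (15, 2))  # high concern
X = np.vstack([disengaged, informed, persuadable])
assign, centers = kmeans(X, init_idx=[0, 60, 80])    # one seed per apparent mode
sizes = sorted(np.bincount(assign).tolist(), reverse=True)
```

    Profiling each cluster's centroid (and its sociodemographics) then yields interpretable segments analogous to the three groups the study reports.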

  19. Study of the Utah uranium-milling industry. Volume I. A policy analysis

    SciTech Connect

    Turley, R.E.

    1980-05-01

    This is the first volume of a two volume study of the Utah Uranium Milling Industry. The study was precipitated by a 1977 report issued by the Western Interstate Nuclear Board entitled Policy Recommendations on Financing Stabilization, Perpetual Surveillance and Maintenance of Uranium Mill Tailings. Volume I of this study is a policy analysis or technology assessment of the uranium milling industry in the state of Utah; specifically, the study addresses issues that deal with the perpetual surveillance, monitoring, and maintenance of uranium tailings piles at the end of uranium milling operations, i.e., following shutdown and decommissioning. Volume II of this report serves somewhat as an appendix. It represents a full description of the uranium industry in the state of Utah, including its history and statements regarding its future. The topics covered in volume I are as follows: today's uranium industry in Utah; management of the industry's characteristic nuclear radiation; uranium mill licensing and regulation; state licensing and regulation of uranium mills; forecast of future milling operations; policy needs relative to perpetual surveillance, monitoring, and maintenance of tailings; policy needs relative to perpetual oversight; economic aspects; state revenue from uranium; and summary with conclusions and recommendations. Appendices, figures and tables are also presented.

  20. SRM Internal Flow Tests and Computational Fluid Dynamic Analysis. Volume 2; CFD RSRM Full-Scale Analyses

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This document presents the full-scale analyses of the CFD RSRM. The RSRM model was developed with a 20 second burn time. The following are presented as part of the full-scale analyses: (1) RSRM embedded inclusion analysis; (2) RSRM igniter nozzle design analysis; (3) Nozzle Joint 4 erosion anomaly; (4) RSRM full motor port slag accumulation analysis; (5) RSRM motor analysis of two-phase flow in the aft segment/submerged nozzle region; (6) Completion of 3-D Analysis of the hot air nozzle manifold; (7) Bates Motor distributed combustion test case; and (8) Three Dimensional Polysulfide Bump Analysis.

  1. Principal Component Analysis-Based Pattern Analysis of Dose-Volume Histograms and Influence on Rectal Toxicity

    SciTech Connect

    Soehn, Matthias Alber, Markus; Yan Di

    2007-09-01

    Purpose: The variability of dose-volume histogram (DVH) shapes in a patient population can be quantified using principal component analysis (PCA). We applied this to rectal DVHs of prostate cancer patients and investigated the correlation of the PCA parameters with late bleeding. Methods and Materials: PCA was applied to the rectal wall DVHs of 262 patients, who had been treated with a four-field box, conformal adaptive radiotherapy technique. The correlated changes in the DVH pattern were revealed as 'eigenmodes,' which were ordered by their importance to represent data set variability. Each DVH is uniquely characterized by its principal components (PCs). The correlation of the first three PCs and chronic rectal bleeding of Grade 2 or greater was investigated with uni- and multivariate logistic regression analyses. Results: Rectal wall DVHs in four-field conformal RT can primarily be represented by the first two or three PCs, which describe ≈94% or 96% of the DVH shape variability, respectively. The first eigenmode models the total irradiated rectal volume; thus, PC1 correlates to the mean dose. Mode 2 describes the interpatient differences of the relative rectal volume in the two- or four-field overlap region. Mode 3 reveals correlations of volumes with intermediate doses (≈40-45 Gy) and volumes with doses >70 Gy; thus, PC3 is associated with the maximal dose. According to univariate logistic regression analysis, only PC2 correlated significantly with toxicity. However, multivariate logistic regression analysis with the first two or three PCs revealed an increased probability of bleeding for DVHs with more than one large PC. Conclusions: PCA can reveal the correlation structure of DVHs for a patient population as imposed by the treatment technique and provide information about its relationship to toxicity. It proves useful for augmenting normal tissue complication probability modeling approaches.
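
    The PCA machinery described (eigenmodes of DVH-shape variability, per-patient principal components, PC1 tracking the mean dose) can be sketched on synthetic cumulative DVHs. The sigmoid DVH family and its parameters below are assumptions for the example, not the study's rectal-wall data.

```python
import numpy as np

def dvh_pca(dvhs, k=3):
    """PCA of cumulative DVH curves: rows = patients, columns = dose bins.
    Returns per-patient principal components, the eigenmodes, and the
    fraction of variance each retained mode explains."""
    mean = dvhs.mean(axis=0)
    _, S, Vt = np.linalg.svd(dvhs - mean, full_matrices=False)
    pcs = (dvhs - mean) @ Vt[:k].T
    explained = (S[:k] ** 2) / (S ** 2).sum()
    return pcs, Vt[:k], explained

# Synthetic cumulative DVHs: volume fraction vs dose, varying per-patient "half-volume dose"
doses = np.linspace(0, 80, 41)
rng = np.random.default_rng(4)
d50 = rng.uniform(30, 50, 30)
dvhs = 1.0 / (1.0 + np.exp((doses[None, :] - d50[:, None]) / 5.0))
pcs, modes, explained = dvh_pca(dvhs)
```

    In this one-parameter family the first eigenmode dominates and PC1 tracks the half-volume dose (and hence the mean dose), mirroring the interpretation of mode 1 in the study.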

  2. Semi-Automated Neuron Boundary Detection and Nonbranching Process Segmentation in Electron Microscopy Images

    SciTech Connect

    Jurrus, Elizabeth R.; Watanabe, Shigeki; Giuly, Richard J.; Paiva, Antonio R.; Ellisman, Mark H.; Jorgensen, Erik M.; Tasdizen, Tolga

    2013-01-01

    Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of these data make human interpretation a labor-intensive task. To aid in the analysis, new segmentation techniques for identifying neurons in these feature-rich datasets are required. This paper presents a method for neuron boundary detection and nonbranching process segmentation in electron microscopy images, and for visualizing them in three dimensions. It combines automated segmentation techniques with a graphical user interface for correction of mistakes in the automated process. The automated process first uses machine learning and image processing techniques to identify neuron membranes that delineate the cells in each two-dimensional section. To segment nonbranching processes, the cell regions in each two-dimensional section are connected in 3D using correlation of regions between sections. The combination of this method with a graphical user interface specially designed for this purpose enables users to quickly segment cellular processes in large volumes.
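
    The 3D linking step (connecting 2-D cell regions between adjacent sections) can be illustrated with a simple area-overlap criterion, used here as a stand-in for the correlation-based connection the paper describes; the label images and threshold are invented for the sketch.

```python
import numpy as np

def link_sections(labels_a, labels_b, min_overlap=0.5):
    """Link each labeled 2-D region in section A to the region in the adjacent
    section B that covers at least `min_overlap` of its area (0 = background)."""
    links = {}
    for la in np.unique(labels_a):
        if la == 0:
            continue
        mask = labels_a == la
        cand = labels_b[mask]           # labels of section B under this region
        cand = cand[cand != 0]
        if cand.size and np.bincount(cand).max() / mask.sum() >= min_overlap:
            links[int(la)] = int(np.bincount(cand).argmax())
    return links

# Two toy sections: region 1 persists (shifted one column); region 2 terminates
a = np.zeros((8, 8), int)
a[1:4, 1:4] = 1
a[5:7, 5:7] = 2
b = np.zeros((8, 8), int)
b[1:4, 2:5] = 7
links = link_sections(a, b)
```

    Chaining such links section-to-section produces the 3D nonbranching processes, which the paper's interface then lets a user inspect and correct.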

  3. Simple segmental hair analysis for α-pyrrolidinophenone-type designer drugs by MonoSpin extraction for evaluation of abuse history.

    PubMed

    Namera, Akira; Konuma, Kyohei; Saito, Takeshi; Ota, Shigenori; Oikawa, Hiroshi; Miyazaki, Shota; Urabe, Shumari; Shiraishi, Hiroaki; Nagao, Masataka

    2013-12-30

    To detect a history of drug abuse, we developed a simple method for extracting pyrrolidinophenone-type designer drugs from human hair by using a MonoSpin(®) C18 column. Target drugs were extracted from a single alkaline-digested hair segment (length, 10 mm; weight, ca. 0.1 mg). The extracted analytes were then analyzed by high-performance liquid chromatography-mass spectrometry without evaporation of the eluent after MonoSpin extraction. Linearity from 0.5 to 500 ng/mg was observed for all the tested drugs using an internal standard method (correlation coefficients >0.998), and the limit of detection was 0.2 ng/mg. The recoveries were between 0.7 and 11.1%. The coefficients of intraday and interday variation at 4, 40, 200, and 400 ng/mg in hair were between 0.7 and 11.1%. This method was successfully applied to the identification of these designer drugs in segmented human hair from drug abusers and indicated their history of drug abuse. The results were consistent with the patients' statements, indicating that this rapid method can be used to detect a history of drug abuse. PMID:24212139

  4. Perfusion analysis using a wide coverage flat-panel volume CT: feasibility study

    NASA Astrophysics Data System (ADS)

    Grasruck, M.; Gupta, R.; Reichardt, B.; Klotz, E.; Schmidt, B.; Flohr, T.

    2007-03-01

    We developed a flat-panel detector based volume CT (VCT) prototype scanner with large z-coverage. In this prototype scanner, a Varian 4030CB a-Si flat-panel detector was mounted in a multi-slice CT gantry (Siemens Medical Solutions), providing a 25 cm field of view with 18 cm z-coverage at the isocenter. The large volume covered in one rotation can be used for visualization of complete organs of small animals, e.g., rabbits. By implementing a mode with continuous scanning, we are able to reconstruct the complete volume at any point in time during the propagation of a contrast bolus. Multiple volumetric reconstructions over time elucidate the first-pass dynamics of a bolus of contrast, resulting in 4-D angiography and potentially allowing whole-organ perfusion analysis. We studied to what extent pixel-based permeability and blood volume calculation with a modified Patlak approach was possible. Experimental validation was performed by imaging the evolution of a contrast bolus in New Zealand rabbits. Despite the short circulation time of a rabbit, the temporal resolution was sufficient to visually resolve various phases of the first pass of the contrast bolus. Perfusion imaging required substantial spatial smoothing but allowed a qualitative discrimination of different types of parenchyma in brain and liver. Whether a true quantitative analysis is possible requires further study.
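
    Patlak graphical analysis fits, per pixel, a line to normalized tissue enhancement versus normalized integrated arterial input: the slope estimates permeability (Ki) and the intercept a blood-volume term. A minimal sketch on noise-free synthetic curves follows; the input function and parameter values are assumptions, and the "modified" aspects of the paper's approach are not reproduced.

```python
import numpy as np

def patlak(ct, cp, t):
    """Patlak plot: regress ct/cp against ∫cp dτ / cp.
    Returns (slope = Ki, intercept = blood-volume term)."""
    # Trapezoidal running integral of the arterial input function cp(t)
    icp = np.concatenate([[0.0], np.cumsum((cp[1:] + cp[:-1]) / 2 * np.diff(t))])
    x = icp[1:] / cp[1:]
    y = ct[1:] / cp[1:]
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

# Synthetic data: tissue curve constructed to obey the Patlak model exactly
t = np.linspace(0.1, 60, 120)
cp = np.exp(-t / 30.0)                 # toy arterial input function
Ki_true, v0_true = 0.02, 0.05
icp = np.concatenate([[0.0], np.cumsum((cp[1:] + cp[:-1]) / 2 * np.diff(t))])
ct = Ki_true * icp + v0_true * cp      # tissue enhancement curve
Ki, v0 = patlak(ct, cp, t)
```

    With real VCT data the same fit runs per pixel after the spatial smoothing the abstract mentions, yielding permeability and blood-volume maps.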

  5. Medical Image Analysis (1996) volume 1, number 2, pp 109-127 © Oxford University Press

    E-print Network

    Grimson, Eric

    1996-01-01

    Brigham and Women's Hospital, 75 Francis Street, Boston, MA 02115, USA. Segmentation of medical images using techniques from the computer vision literature: expectation/maximization segmentation, binary mathematical morphology, statistical classification, validation. Received December 29, 1995; revised April 2, 1996

  6. Fluid Vessel Quantity Using Non-invasive PZT Technology Flight Volume Measurements Under Zero G Analysis

    NASA Technical Reports Server (NTRS)

    Garofalo, Anthony A

    2013-01-01

    The purpose of the project was to perform analysis of data from the Systems Engineering Educational Discovery (SEED) program's 2011 and 2012 Fluid Vessel Quantity using Non-Invasive PZT Technology flight volume measurements under zero-g conditions (parabolic plane flight data), along with experimental planning and lab work for future sub-orbital experiments using the NASA PZT technology for fluid volume measurement. Along with conducting data analysis of flight data, I also did a variety of other tasks. I provided the lab with detailed technical drawings, experimented with 3D printers, made changes to the liquid nitrogen skid schematics, and learned how to weld. I also programmed microcontrollers to interact with various sensors and helped with other things going on around the lab.

  7. Fluid Vessel Quantity using Non-Invasive PZT Technology Flight Volume Measurements Under Zero G Analysis

    NASA Technical Reports Server (NTRS)

    Garofalo, Anthony A.

    2013-01-01

    The purpose of the project was to perform analysis of data from the Systems Engineering Educational Discovery (SEED) program's 2011 and 2012 Fluid Vessel Quantity using Non-Invasive PZT Technology flight volume measurements under zero-g conditions (parabolic plane flight data), along with experimental planning and lab work for future sub-orbital experiments using the NASA PZT technology for fluid volume measurement. Along with conducting data analysis of flight data, I also did a variety of other tasks. I provided the lab with detailed technical drawings, experimented with 3D printers, made changes to the liquid nitrogen skid schematics, and learned how to weld. I also programmed microcontrollers to interact with various sensors and helped with other things going on around the lab.

  8. SU-F-BRF-02: Automated Lung Segmentation Method Using Atlas-Based Sparse Shape Composition with a Shape Constrained Deformable Model

    SciTech Connect

    Zhou, J; Yan, Z; Zhang, S; Zhang, B; Lasio, G; Prado, K; D'Souza, W

    2014-06-15

    Purpose: To develop an automated lung segmentation method, which combines atlas-based sparse shape composition with a shape-constrained deformable model, in thoracic CT for patients with compromised lung volumes. Methods: Ten thoracic computed tomography scans of patients with large lung tumors were collected, and reference lung ROIs in each scan were manually segmented to assess the performance of the method. We propose an automated and robust framework for lung tissue segmentation that uses single statistical atlas registration to initialize a robust deformable model, which then performs fine segmentation that includes compromised lung tissue. First, a statistical image atlas with sparse shape composition is constructed and employed to obtain an approximate estimation of the lung volume. Next, a robust deformable model with a shape prior is initialized from this estimation. Energy terms from the ROI edge potential and an interior ROI region-based potential, as well as the initial ROI, are combined in this model for accurate and robust segmentation. Results: The proposed segmentation method was applied to segment the right lung in three CT scans. The quantitative results of our segmentation method achieved a mean Dice score of (0.92, 0.95), mean accuracy of (0.97, 0.98), and mean relative error of (0.10, 0.16) with 95% CI. The quantitative results of the previously published RASM segmentation method achieved a mean Dice score of (0.74, 0.96), mean accuracy of (0.66, 0.98), and mean relative error of (0.04, 0.38) with 95% CI. The qualitative and quantitative comparisons show that our proposed method can achieve better segmentation accuracy with less variance than a robust active shape model method.
    Conclusion: The atlas-based segmentation approach achieved relatively high accuracy with less variance compared to RASM in the sample dataset, and the proposed method will be useful in image analysis applications for lung nodule or lung cancer diagnosis and radiotherapy assessment in thoracic computed tomography.
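
    The Dice score reported above is the standard overlap measure 2|A∩B|/(|A|+|B|) between a computed and a reference mask; a minimal sketch on two toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((10, 10), int)
auto[2:8, 2:8] = 1       # automated mask, 36 voxels
manual = np.zeros((10, 10), int)
manual[3:9, 3:9] = 1     # reference mask, 36 voxels, shifted by one
score = dice(auto, manual)   # overlap is 5x5 = 25 voxels → 50/72 ≈ 0.694
```

    In practice the same formula is applied slice-stacked in 3D (voxel counts instead of pixel counts); a score of 1.0 means perfect agreement.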

  9. Space shuttle/food system study. Volume 2, appendix E: Alternate flight systems analysis

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The functional requirements of stowage, preparation, serving, consumption, and cleanup were applied to each of the five food mixes selected for study in terms of the overall design of the space shuttle food system. The analysis led to a definition of performance requirements for each food mix, along with a definition of equipment to meet those requirements. Weight and volume data for all five systems, in terms of food and packaging, support equipment, and galley installation penalties, are presented.

  10. Fully automatic GBM segmentation in the TCGA-GBM dataset: Prognosis and correlation with VASARI features.

    PubMed

    Rios Velazquez, Emmanuel; Meier, Raphael; Dunn, William D; Alexander, Brian; Wiest, Roland; Bauer, Stefan; Gutman, David A; Reyes, Mauricio; Aerts, Hugo J W L

    2015-01-01

    Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to manually defined sub-volumes by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. MRI sets of 109 GBM patients were downloaded from the Cancer Imaging Archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software. Spearman's correlation was used to evaluate the agreement with VASARI features. Prognostic significance was assessed using the C-index. Auto-segmented sub-volumes showed moderate to high agreement with manually delineated volumes (range of r: 0.4 - 0.86). Also, the auto and manual volumes showed similar correlation with VASARI features (auto r = 0.35, 0.43, and 0.36; manual r = 0.17, 0.67, and 0.41 for contrast-enhancing, necrosis, and edema, respectively). The auto-segmented contrast-enhancing volume and post-contrast abnormal volume showed the highest AUC (0.66, CI: 0.55-0.77 and 0.65, CI: 0.54-0.76), comparable to manually defined volumes (0.64, CI: 0.53-0.75 and 0.63, CI: 0.52-0.74, respectively). BraTumIA and manual tumor sub-compartments showed comparable performance in terms of prognosis and correlation with VASARI features. This method can enable more reproducible definition and quantification of imaging based biomarkers and has potential in high-throughput medical imaging research. PMID:26576732
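
    The C-index used here for prognostic significance is Harrell's concordance: over comparable patient pairs, the fraction in which the higher-risk patient has the earlier observed event. A minimal sketch (the tumor volumes and survival times below are invented for the example):

```python
import numpy as np

def c_index(risk, time, event):
    """Harrell's concordance index. A pair (i, j) is comparable when subject i
    has an observed event strictly before time[j]; ties in risk count 0.5."""
    num = den = 0.0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            if event[i] and time[i] < time[j]:
                den += 1
                if risk[i] > risk[j]:
                    num += 1
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den

# Toy cohort: larger (hypothetical) tumor volume → shorter survival
vol = np.array([10.0, 40.0, 25.0, 60.0])     # risk scores
time = np.array([50.0, 20.0, 30.0, 10.0])    # survival times
event = np.array([1, 1, 1, 1])               # all events observed (no censoring)
ci = c_index(vol, time, event)   # perfectly anti-ordered → C = 1.0
```

    A C-index of 0.5 indicates no discrimination and 1.0 perfect ranking; the study's sub-volume AUC/C values around 0.63-0.66 indicate modest prognostic signal.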

  12. Independent Orbiter Assessment (IOA): Analysis of the reaction control system, volume 3

    NASA Technical Reports Server (NTRS)

    Burkemper, V. J.; Haufler, W. A.; Odonnell, R. A.; Paul, D. J.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results for the Reaction Control System (RCS). The RCS is situated in three independent modules, one forward in the orbiter nose and one in each OMS/RCS pod. Each RCS module consists of the following subsystems: Helium Pressurization Subsystem; Propellant Storage and Distribution Subsystem; Thruster Subsystem; and Electrical Power Distribution and Control Subsystem. Volume 3 continues the presentation of IOA analysis worksheets and the potential critical items list.

  13. Independent Orbiter Assessment (IOA): Analysis of the Electrical Power Distribution and Control Subsystem, Volume 2

    NASA Technical Reports Server (NTRS)

    Schmeckpeper, K. R.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Distribution and Control (EPD and C) hardware. The EPD and C hardware performs the functions of distributing, sensing, and controlling 28 volt DC power and of inverting, distributing, sensing, and controlling 117 volt 400 Hz AC power to all Orbiter subsystems from the three fuel cells in the Electrical Power Generation (EPG) subsystem. Volume 2 continues the presentation of IOA analysis worksheets and contains the potential critical items list.

  14. Predicting Nonauditory Adverse Radiation Effects Following Radiosurgery for Vestibular Schwannoma: A Volume and Dosimetric Analysis

    SciTech Connect

    Hayhurst, Caroline; Monsalves, Eric; Bernstein, Mark; Gentili, Fred; Heydarian, Mostafa; Tsao, May; Schwartz, Michael; Prooijen, Monique van; Millar, Barbara-Ann; Menard, Cynthia; Kulkarni, Abhaya V.; Laperriere, Norm; Zadeh, Gelareh

    2012-04-01

    Purpose: To define clinical and dosimetric predictors of nonauditory adverse radiation effects after radiosurgery for vestibular schwannoma treated with a 12 Gy prescription dose. Methods: We retrospectively reviewed our experience of vestibular schwannoma patients treated between September 2005 and December 2009. Two hundred patients were treated at a 12 Gy prescription dose; 80 had complete clinical and radiological follow-up for at least 24 months (median, 28.5 months). All treatment plans were reviewed for target volume and dosimetry characteristics; gradient index; homogeneity index, defined as the maximum dose in the treatment volume divided by the prescription dose; conformity index; brainstem; and trigeminal nerve dose. All adverse radiation effects (ARE) were recorded. Because the intent of our study was to focus on the nonauditory adverse effects, hearing outcome was not evaluated in this study. Results: Twenty-seven (33.8%) patients developed ARE, 5 (6%) developed hydrocephalus, 10 (12.5%) reported new ataxia, 17 (21%) developed trigeminal dysfunction, 3 (3.75%) had facial weakness, and 1 patient developed hemifacial spasm. The development of edema within the pons was significantly associated with ARE (p = 0.001). On multivariate analysis, only target volume is a significant predictor of ARE (p = 0.001). There is a target volume threshold of 5 cm3, above which ARE are more likely. The treatment plan dosimetric characteristics are not associated with ARE, although the maximum dose to the 5th nerve is a significant predictor of trigeminal dysfunction, with a threshold of 9 Gy. The overall 2-year tumor control rate was 96%. Conclusions: Target volume is the most important predictor of adverse radiation effects, and we identified the significant treatment volume threshold to be 5 cm3. We also established through our series that the maximum tolerable dose to the 5th nerve is 9 Gy.
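
The two predictors identified above reduce to simple thresholds (target volume above 5 cm3 predicting adverse radiation effects; maximum 5th-nerve dose above 9 Gy predicting trigeminal dysfunction). A hedged sketch of screening a plan against them; the function and parameter names are illustrative, not from the paper.

```python
def flag_plan(target_volume_cm3, trigeminal_max_dose_gy,
              volume_threshold=5.0, nerve_dose_threshold=9.0):
    """Return warning flags for a radiosurgery plan, using the
    thresholds reported in the abstract (assumed defaults)."""
    return {
        "are_risk": target_volume_cm3 > volume_threshold,
        "trigeminal_risk": trigeminal_max_dose_gy > nerve_dose_threshold,
    }

# hypothetical plan: large target, 5th-nerve dose under tolerance
flags = flag_plan(target_volume_cm3=6.2, trigeminal_max_dose_gy=8.5)
```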

  15. HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments

    SciTech Connect

    McCann, R.A.; Lowery, P.S.

    1987-10-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.
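
The kind of finite-difference energy solve HYDRA-II performs can be illustrated in one dimension. This is only a sketch of the discrete conduction update (the real code is 3-D and includes convection, radiation, porosities, and permeabilities): Gauss-Seidel iteration on d2T/dx2 = 0 with fixed end temperatures, whose exact solution is linear in x.

```python
def solve_conduction(t_left, t_right, n=11, iters=5000):
    """Steady 1-D conduction with Dirichlet ends, uniform grid."""
    T = [t_left] + [0.0] * (n - 2) + [t_right]
    for _ in range(iters):
        for i in range(1, n - 1):
            # discrete Laplace update: each interior node relaxes to
            # the average of its neighbours
            T[i] = 0.5 * (T[i - 1] + T[i + 1])
    return T

T = solve_conduction(100.0, 0.0, n=11)   # converges to the linear profile
```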

  16. HYDRA-II: A hydrothermal analysis computer code: Volume 2, User's manual

    SciTech Connect

    McCann, R.A.; Lowery, P.S.; Lessor, D.L.

    1987-09-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite-difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum incorporate directional porosities and permeabilities that are available to model solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated methods are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume 1 - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. This volume, Volume 2 - User's Manual, contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a sample problem. The final volume, Volume 3 - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. 6 refs.

  17. Magnetic resonance velocity imaging derived pressure differential using control volume analysis

    PubMed Central

    2011-01-01

    Background: Diagnosis and treatment of hydrocephalus is hindered by a lack of systemic understanding of the interrelationships between pressures and flow of cerebrospinal fluid in the brain. Control volume analysis provides a fluid physics approach to quantify and relate pressure and flow information. The objective of this study was to use control volume analysis and magnetic resonance velocity imaging to non-invasively estimate pressure differentials in vitro. Method: A flow phantom was constructed, with water as the experimental fluid. The phantom was connected to a high-resolution differential pressure sensor and a computer-controlled pump producing sinusoidal flow. Magnetic resonance velocity measurements were taken and subsequently analyzed to derive pressure differential waveforms using momentum conservation principles. Independent sensor measurements were obtained for comparison. Results: Using magnetic resonance data, the momentum balance in the phantom was computed. The measured differential pressure force had an amplitude of 14.4 dynes (pressure gradient amplitude 0.30 Pa/cm). A 12.5% normalized root mean square deviation between the derived and directly measured pressure differentials was obtained. These experiments demonstrate one example of the potential utility of control volume analysis and the concepts involved in its application. Conclusions: This study validates a non-invasive measurement technique for relating velocity measurements to pressure differentials. These methods may be applied to clinical measurements to estimate pressure differentials in vivo which could not be obtained with current clinical sensors. PMID:21414222
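
The momentum balance behind this kind of derivation can be sketched for the simplest case: incompressible sinusoidal flow in a rigid tube of length L and cross-section A, neglecting wall shear, where integrating the axial momentum equation over the control volume gives dp(t) * A = rho * (A * L) * dU/dt. All numbers below are illustrative, not the phantom's.

```python
import math

rho = 1000.0   # water density, kg/m^3
L = 0.10       # control volume length, m (assumed)
U0 = 0.05      # velocity amplitude, m/s (assumed)
f = 1.0        # sinusoid frequency, Hz (assumed)

def pressure_differential(t):
    """Pressure differential dp(t) in Pa from the inertial term of the
    control-volume momentum balance (wall shear neglected)."""
    dU_dt = U0 * 2 * math.pi * f * math.cos(2 * math.pi * f * t)
    return rho * L * dU_dt

peak = rho * L * U0 * 2 * math.pi * f   # amplitude of dp(t)
```

The derived differential pressure leads the velocity by a quarter cycle, which is why an MR velocity waveform alone, plus this balance, suffices to estimate it.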

  18. Amygdala volume in Major Depressive Disorder: A meta-analysis of magnetic resonance imaging studies

    PubMed Central

    Hamilton, J. Paul; Siemer, Matthias; Gotlib, Ian H.

    2009-01-01

    Major Depressive Disorder has been associated with volumetric abnormality in the amygdala. In this meta-analysis we examine results from magnetic resonance imaging volumetry studies of the amygdala in depression in order to assess both the nature of the relationship between depression and amygdala volume and the influence of extra-experimental factors that may account for significant variability in reported findings. We searched PubMed and ISI Web of Knowledge databases for articles published from 1985 to 2008 that used the wildcard terms “Depress*” and “Amygdal*” in the title, keywords, or abstract. From the 13 studies that met inclusion criteria for our meta-analysis, we calculated aggregate effect size and heterogeneity estimates from amygdala volumetric data; we then used meta-regression to determine whether variability in specific extra-experimental factors accounted for variability in findings. The lack of a reliable difference in amygdala volume between depressed and never-depressed individuals was accounted for by a positive correlation between amygdala volume differences and the proportion of medicated depressed persons in study samples: whereas the aggregate effect size calculated from studies that included only medicated individuals indicated that amygdala volume was significantly increased in depressed relative to healthy persons, studies with only unmedicated depressed individuals showed a reliable decrease in amygdala volume in depression. These findings are consistent with a formulation in which an antidepressant-mediated increase in levels of brain-derived neurotrophic factor promotes neurogenesis and protects against glucocorticoid toxicity in the amygdala in medicated but not in unmedicated depression. PMID:18504424
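
Meta-analyses of this kind typically pool per-study effect sizes with a random-effects model. A pure-Python sketch of DerSimonian-Laird pooling, one common choice (the abstract does not specify the estimator); the effect sizes and variances below are invented for illustration, not the 13 amygdala studies.

```python
def dersimonian_laird(effects, variances):
    """Pooled effect and between-study variance tau^2 (DerSimonian-Laird)."""
    w = [1.0 / v for v in variances]           # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)              # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]  # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2

effects = [0.30, -0.12, 0.45, 0.05, -0.20]   # hypothetical standardized differences
variances = [0.04, 0.06, 0.05, 0.03, 0.07]
pooled, tau2 = dersimonian_laird(effects, variances)
```

When the studies agree (Q below its degrees of freedom), tau^2 is truncated to zero and the estimate collapses to the fixed-effect mean.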

  19. Effect of varicocelectomy on testis volume and semen parameters in adolescents: a meta-analysis.

    PubMed

    Zhou, Tie; Zhang, Wei; Chen, Qi; Li, Lei; Cao, Huan; Xu, Chuan-Liang; Chen, Guang-Hua; Sun, Ying-Hao

    2015-01-01

    Varicocele repair in adolescents remains controversial. Our aim is to identify and combine clinical trial results published thus far to ascertain the efficacy of varicocelectomy in improving testis volume and semen parameters compared with nontreatment control. A literature search was performed using Medline, Embase and Web of Science, which included results obtained from meta-analyses, randomized and nonrandomized controlled studies. The study population was adolescents with clinically palpable varicocele with or without testicular asymmetry or abnormal semen parameters. Cases were allocated to treatment and observation groups, and testis volume or semen parameters were adopted as outcome measures. As a result, seven randomized controlled trials (RCTs) and nonrandomized controlled trials studying bilateral testis volume or semen parameters in both treatment and observation groups were identified. Using a random effect model, the mean difference in testis volume between the treatment group and the observation group was 2.9 ml (95% confidence interval [CI]: 0.6, 5.2; P < 0.05) for the varicocele side and 1.5 ml (95% CI: 0.3, 2.7; P < 0.05) for the healthy side. The random effect model analysis demonstrated that the mean difference in semen concentration, total semen motility, and normal morphology between the two groups was 13.7 × 10⁶ ml⁻¹ (95% CI: -1.4, 28.8; P = 0.075), 2.5% (95% CI: -3.6, 8.6; P = 0.424), and 2.9% (95% CI: -3.0, 8.7; P = 0.336), respectively. In conclusion, although varicocelectomy significantly improved bilateral testis volume in adolescents with varicocele compared with observation cases, semen parameters did not show any statistically significant difference between the two groups. Well-planned, properly conducted RCTs are needed to confirm the above conclusion and to explore whether varicocele repair in adolescents could subsequently improve spontaneous pregnancy rates. PMID:25677136

  20. Numerical analysis of the influence of nucleus pulposus removal on the biomechanical behavior of a lumbar motion segment.

    PubMed

    Huang, Juying; Yan, Huagang; Jian, Fengzeng; Wang, Xingwen; Li, Haiyun

    2015-01-01

    Nucleus replacement is deemed to have therapeutic potential for patients with intervertebral disc herniation. However, whether a patient would benefit from nucleus replacement is technically unclear. This study aimed to investigate the influence of nucleus pulposus (NP) removal on the biomechanical behavior of a lumbar motion segment and to further explore a computational method for the biomechanical characteristics of NP removal, which can evaluate the mechanical stability of pulposus replacement. We reconstructed three types of models for a mildly herniated disc and three types of models for a severely herniated disc, based on an L4-L5 segment finite element model built from computed tomography image data of a healthy adult. First, the NP was removed from the herniated disc models, and the biomechanical behavior of NP removal was simulated. Second, the NP cavities were filled with an experimental material (Poisson's ratio = 0.3; elastic modulus = 3 MPa), and the biomechanical behavior of pulposus replacement was simulated. The simulations were carried out under five loadings: axial compression, flexion, lateral bending, extension, and axial rotation. The changes in four biomechanical characteristics, i.e. the rotation degree, the maximum stress in the annulus fibrosus (AF), joint facet contact forces, and the maximum disc deformation, were computed for all models. Experimental results showed that the rotation range, the maximum AF stress, and joint facet contact forces increased, and the maximum disc deformation decreased, after NP removal, while they changed in the opposite way after the nucleus cavities were filled with the experimental material. PMID:24893132

  1. Segment alignment control system

    NASA Technical Reports Server (NTRS)

    Aubrun, Jean-N.; Lorell, Ken R.

    1988-01-01

    The segmented primary mirror for the LDR will require a special segment alignment control system to precisely control the orientation of each of the segments so that the resulting composite reflector behaves like a monolith. The W.M. Keck Ten Meter Telescope will utilize a primary mirror made up of 36 actively controlled segments. Thus the primary mirror and its segment alignment control system are directly analogous to the LDR. The problems of controlling the segments in the face of disturbances and control/structures interaction, as analyzed for the TMT, are virtually identical to those for the LDR. The two systems are briefly compared.

  2. A control-volume method for analysis of unsteady thrust augmenting ejector flows

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.

    1988-01-01

    A method for predicting transient thrust augmenting ejector characteristics is presented. The analysis blends classic self-similar turbulent jet descriptions with a control volume discretization of the mixing region to capture transient effects in a new way. Division of the ejector into an inlet, diffuser, and mixing region corresponds with the assumption of viscous-dominated phenomena in the latter. The inlet and diffuser analyses are simplified by a quasi-steady treatment, justified by the assumption that pressure is the forcing function in those regions. Details of the theoretical foundation, the solution algorithm, and sample calculations are given.

  3. Fabrication, testing, and analysis of anisotropic carbon/glass hybrid composites: volume 1: technical report.

    SciTech Connect

    Wetzel, Kyle K. (Wetzel Engineering, Inc., Lawrence, Kansas); Hermann, Thomas M. (Wichita State University, Wichita, Kansas); Locke, James (Wichita State University, Wichita, Kansas)

    2005-11-01

    Anisotropic carbon/glass hybrid composite laminates have been fabricated, tested, and analyzed. The laminates have been fabricated using vacuum-assisted resin transfer molding (VARTM). Five fiber complexes and a two-part epoxy resin system have been used in the study to fabricate panels of twenty different laminate constructions. These panels have been subjected to physical testing to measure density, fiber volume fraction, and void fraction. Coupons machined from these panels have also been subjected to mechanical testing to measure elastic properties and strength of the laminates using tensile, compressive, transverse tensile, and in-plane shear tests. Interlaminar shear strength has also been measured. Out-of-plane displacement, axial strain, transverse strain, and in-plane shear strain have also been measured using photogrammetry data obtained during edgewise compression tests. The test data have been reduced to characterize the elastic properties and strength of the laminates. Constraints imposed by test fixtures might be expected to affect measurements of the moduli of anisotropic materials; classical lamination theory has been used to assess the magnitude of such effects and correct the experimental data for the same. The tensile moduli generally correlate well with experiment without correction and indicate that factors other than end constraints dominate. The results suggest that shear moduli of the anisotropic materials are affected by end constraints. Classical lamination theory has also been used to characterize the level of extension-shear coupling in the anisotropic laminates. Three factors affecting the coupling have been examined: the volume fraction of unbalanced off-axis layers, the angle of the off-axis layers, and the composition of the fibers (i.e., carbon or glass) used as the axial reinforcement. The results indicate that extension/shear coupling is maximized with the least loss in axial tensile stiffness by using carbon fibers oriented 15° from the long axis for approximately two-thirds of the laminate volume (discounting skin layers), with reinforcing carbon fibers oriented axially comprising the remaining one-third of the volume. Finite element analysis of each laminate has been performed to examine first ply failure. Three failure criteria (maximum stress, maximum strain, and Tsai-Wu) have been compared. Failure predicted by all three criteria proves generally conservative, with the stress-based criteria the most conservative. For laminates that respond nonlinearly to loading, large error is observed in the prediction of failure using maximum strain as the criterion. This report documents the methods and results in two volumes. Volume 1 contains descriptions of the laminates, their fabrication and testing, the methods of analysis, the results, and the conclusions and recommendations. Volume 2 contains a comprehensive summary of the individual test results for all laminates.
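
The extension-shear coupling discussed above falls out of classical lamination theory: rotating a unidirectional ply off-axis makes the Qbar16 term of the transformed reduced stiffness matrix nonzero, so axial extension drives shear. A sketch using typical (assumed) carbon/epoxy constants, not the study's measured properties.

```python
import math

# assumed unidirectional carbon/epoxy ply properties (Pa)
E1, E2, G12, nu12 = 140e9, 10e9, 5e9, 0.30
nu21 = nu12 * E2 / E1
d = 1 - nu12 * nu21
Q11, Q22 = E1 / d, E2 / d          # reduced stiffnesses in ply axes
Q12, Q66 = nu12 * E2 / d, G12

def qbar16(theta_deg):
    """Extension-shear coupling term of the rotated stiffness matrix:
    Qbar16 = (Q11 - Q12 - 2*Q66) m^3 n + (Q12 - Q22 + 2*Q66) m n^3."""
    m = math.cos(math.radians(theta_deg))
    n = math.sin(math.radians(theta_deg))
    return ((Q11 - Q12 - 2 * Q66) * m**3 * n
            + (Q12 - Q22 + 2 * Q66) * m * n**3)

coupling_at_15 = qbar16(15.0)   # nonzero: the off-axis layers couple
coupling_at_0 = qbar16(0.0)     # zero: an on-axis ply is uncoupled
```

At 0 and 90 degrees the coupling vanishes, which is why only the unbalanced off-axis layers contribute it to the laminate.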

  4. Coupled Structural, Thermal, Phase-Change and Electromagnetic Analysis for Superconductors. Volume 1

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Park, K. C.; Militello, C.; Schuler, J. J.

    1996-01-01

    Described are the theoretical development and computer implementation of reliable and efficient methods for the analysis of coupled mechanical problems that involve the interaction of mechanical, thermal, phase-change and electromagnetic subproblems. The focus application has been the modeling of superconductivity and associated quantum-state phase-change phenomena. In support of this objective the work has addressed the following issues: (1) development of variational principles for finite elements, (2) finite element modeling of the electromagnetic problem, (3) coupling of thermal and mechanical effects, and (4) computer implementation and solution of the superconductivity transition problem. The main accomplishments have been: (1) the development of the theory of parametrized and gauged variational principles, (2) the application of those principles to the construction of electromagnetic, thermal and mechanical finite elements, (3) the coupling of electromagnetic finite elements with thermal and superconducting effects, and (4) the first detailed finite element simulations of bulk superconductors, in particular the Meissner effect and the nature of the normal conducting boundary layer. The theoretical development is described in two volumes. This volume, Volume 1, mostly describes formulations for specific problems. Volume 2 describes generalizations of those formulations.

  5. Segmentation of Offline Handwritten Bengali Script

    E-print Network

    Basu, Subhadip; Kundu, Mahantapas; Nasipuri, Mita; Basu, Dipak K

    2012-01-01

    Character segmentation has long been one of the most critical areas of the optical character recognition process. Through this operation, an image of a sequence of characters, which may be connected in some cases, is decomposed into sub-images of individual alphabetic symbols. In this paper, segmentation of cursive handwritten script of the world's fourth most popular language, Bengali, is considered. Unlike English script, Bengali handwritten characters and their components often encircle the main character, making conventional segmentation methodologies inapplicable. Experimental results, using the proposed segmentation technique, on sample cursive handwritten data containing 218 ideal segmentation points show a success rate of 97.7%. Further feature analysis on these segments may lead to actual recognition of handwritten cursive Bengali script.

  6. Light-Weight Radioisotope Heater Unit Final Safety Analysis Report (LWRHU FSAR). Volume 3: Nuclear risk analysis document

    NASA Astrophysics Data System (ADS)

    1988-11-01

    The Light-Weight Radioisotope Heater Unit (LWRHU) Final Safety Analysis Report (FSAR), Volume 2, Accident Model Document (AMD) describes potential accident scenarios during the Galileo mission and evaluates the response of the LWRHUs to the associated accident environments. Any resulting source terms, consisting of PuO2 (with Pu-238 the dominant radionuclide), are then described in terms of curies released, particle size distribution, release location, and probabilities. This volume (LWRHU-FSAR, Volume 3, Nuclear Risk Analysis Document (NRAD)) contains the radiological analyses which estimate the consequences of the accident scenarios described in the AMD. It also contains the quantification of mission risks resulting from the LWRHUs based on consideration of all accident scenarios and their probabilities. Estimates of source terms and their characteristics derived in the AMD are used as inputs to the analyses in the NRAD. The Failure Abort Sequence Trees (FASTs) presented in the AMD define events for which source terms occur and quantify them. Based on this information, three types of source term cases (most probable, maximum, and expectation) for each mission phase were developed for use in evaluating the radiological consequences and mission risks.
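
The "expectation" case described above amounts to a probability-weighted sum of per-scenario consequences over all accident scenarios. A minimal sketch of that aggregation; the scenario probabilities and consequence values are invented for illustration and are not from the Galileo analysis.

```python
# (probability of release, consequence), hypothetical accident scenarios
scenarios = [
    (1e-3, 50.0),
    (1e-4, 400.0),
    (1e-6, 5000.0),
]

def expectation_risk(scenarios):
    """Mission risk as the probability-weighted consequence summed
    over all accident scenarios."""
    return sum(p * c for p, c in scenarios)

risk = expectation_risk(scenarios)   # 0.05 + 0.04 + 0.005 = 0.095
```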

  7. Modifications to a two-control-volume, frequency dependent, transfer-function analysis of hole-pattern gas annular seals 

    E-print Network

    Shin, Yoon Shik

    2007-04-25

    A rotordynamic analysis of hole-pattern gas annular seals using the two-control-volume model of Ha and Childs and the frequency-dependent transfer-function model of Kleynhans and Childs is modified with four features. The energy ...

  8. Three-dimensional volumetry in 107 normal livers reveals clinically relevant inter-segment variation in size

    PubMed Central

    Mise, Yoshihiro; Satou, Shouichi; Shindoh, Junichi; Conrad, Claudius; Aoki, Taku; Hasegawa, Kiyoshi; Sugawara, Yasuhiko; Kokudo, Norihiro

    2014-01-01

    Background: The anatomic resection of Couinaud's segments is one of the key techniques in liver surgery. However, the territories and volumes of the eight segments are not adequately assessed based on portal branching. Methods: Three-dimensional (3D) perfusion-based volumetry was performed in 107 normal livers. Based on the Couinaud classification, the portal branches were identified and the volumes of each segment were calculated. The relationships between branching patterns of the portal veins and segmental volumes were assessed. Results: In descending order of volume, median volumes of segments VIII, VII, IV, V, III, VI, II and I were recorded. Segment VIII was the largest, accounting for a median of 26.1% (range: 11.1–38.0%) of total liver volume (TLV), whereas segments II and III each represented <10% of TLV. In 69.2% of subjects, the portal branches of segment V diverged from the trunk of the branches of segment VIII. No relationship was found between branching type and segment volume. Conclusions: The territories and volumes of Couinaud's segments vary among segments, as well as among individuals. Detailed 3D volumetry is useful for preoperative evaluations of the dissection line and of future liver remnant volume in anatomic segmentectomy. PMID:24033584

  9. Automatic segmentation of dynamic neuroreceptor single-photon emission tomography images using fuzzy clustering.

    PubMed

    Acton, P D; Pilowsky, L S; Kung, H F; Ell, P J

    1999-06-01

    The segmentation of medical images is one of the most important steps in the analysis and quantification of imaging data. However, partial volume artefacts make accurate tissue boundary definition difficult, particularly for the lower-resolution images commonly used in nuclear medicine. In single-photon emission tomography (SPET) neuroreceptor studies, areas of specific binding are usually delineated by manually drawing regions of interest (ROIs), a time-consuming and subjective process. This paper applies the technique of fuzzy c-means clustering (FCM) to automatically segment dynamic neuroreceptor SPET images. Fuzzy clustering was tested using a realistic, computer-generated, dynamic SPET phantom derived from segmenting an MR image of an anthropomorphic brain phantom. Also, the utility of applying FCM to real clinical data was assessed by comparison against conventional ROI analysis of iodine-123 iodobenzamide (IBZM) binding to dopamine D2/D3 receptors in the brains of humans. In addition, the methodology was further tested by applying FCM segmentation to [123I]IDAM images (5-iodo-2-[[2-2-[(dimethylamino)methyl]phenyl]thio] benzyl alcohol) of serotonin transporters in non-human primates. In the simulated dynamic SPET phantom, over a wide range of counts and ratios of specific binding to background, FCM correlated very strongly with the true counts (correlation coefficient r² > 0.99, P < 0.0001). Similarly, FCM gave segmentation of the [123I]IBZM data comparable with manual ROI analysis, with the binding ratios derived from both methods significantly correlated (r² = 0.83, P < 0.0001). Fuzzy clustering is a powerful tool for the automatic, unsupervised segmentation of dynamic neuroreceptor SPET images. Where other automated techniques fail completely, and manual ROI definition would be highly subjective, FCM is capable of segmenting noisy images in a robust and repeatable manner. PMID:10369943
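
The fuzzy c-means algorithm itself is compact enough to sketch: alternate between updating soft memberships from distances to the centers and updating the centers as membership-weighted means. A minimal two-cluster, 1-D version in pure Python; the study clusters dynamic SPET data, and the intensity values below are purely illustrative.

```python
def fcm2(data, m=2.0, iters=100):
    """Two-cluster fuzzy c-means on scalar data; returns sorted centers."""
    centers = [min(data), max(data)]   # deterministic init at the data extremes
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = []
        for x in data:
            d = [abs(x - ck) + 1e-12 for ck in centers]  # avoid divide-by-zero
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(2)) for i in range(2)])
        # center update: mean of the data weighted by memberships raised to m
        centers = [sum((u[k][i] ** m) * data[k] for k in range(len(data))) /
                   sum(u[k][i] ** m for k in range(len(data)))
                   for i in range(2)]
    return sorted(centers)

# illustrative intensities: a "background" group and a "specific binding" group
data = [1.0, 1.2, 0.9, 1.1, 5.0, 5.3, 4.8, 5.1]
centers = fcm2(data)
```

Unlike hard k-means, every point keeps a graded membership in both clusters, which is what makes the method tolerant of the partial-volume blurring mentioned in the abstract.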

  10. Segmented trapped vortex cavity

    NASA Technical Reports Server (NTRS)

    Grammel, Jr., Leonard Paul (Inventor); Pennekamp, David Lance (Inventor); Winslow, Jr., Ralph Henry (Inventor)

    2010-01-01

    An annular trapped vortex cavity assembly segment includes a cavity forward wall, a cavity aft wall, and a cavity radially outer wall therebetween defining a cavity segment therein. A cavity opening extends between the forward and aft walls at a radially inner end of the assembly segment. Radially spaced apart pluralities of air injection first and second holes extend through the forward and aft walls respectively. The segment may include first and second expansion joint features at distal first and second ends respectively of the segment. The segment may include a forward subcomponent including the cavity forward wall attached to an aft subcomponent including the cavity aft wall. The forward and aft subcomponents include forward and aft portions of the cavity radially outer wall respectively. A ring of the segments may be circumferentially disposed about an axis to form an annular segmented vortex cavity assembly.

  11. Final safety analysis report for the Galileo Mission: Volume 1, Reference design document

    SciTech Connect

    Not Available

    1988-05-01

    The Galileo mission uses nuclear power sources called Radioisotope Thermoelectric Generators (RTGs) to provide the spacecraft's primary electrical power. Because these generators contain nuclear material, a Safety Analysis Report (SAR) is required. A preliminary SAR and an updated SAR were previously issued that provided an evolving status report on the safety analysis. As a result of the Challenger accident, the launch dates for both Galileo and Ulysses missions were later rescheduled for November 1989 and October 1990, respectively. The decision was made by agreement between the DOE and the NASA to have a revised safety evaluation and report (FSAR) prepared on the basis of these revised vehicle accidents and environments. The results of this latest revised safety evaluation are presented in this document (Galileo FSAR). Volume I, this document, provides the background design information required to understand the analyses presented in Volumes II and III. It contains descriptions of the RTGs, the Galileo spacecraft, the Space Shuttle, the Inertial Upper Stage (IUS), the trajectory and flight characteristics including flight contingency modes, and the launch site. There are two appendices in Volume I which provide detailed material properties for the RTG.

  12. Millisecond single-molecule localization microscopy combined with convolution analysis and automated image segmentation to determine protein concentrations in complexly structured, functional cells, one cell at a time

    E-print Network

    Wollman, Adam J M

    2015-01-01

We present a single-molecule tool called the CoPro (Concentration of Proteins) method that uses millisecond imaging with convolution analysis, automated image segmentation and super-resolution localization microscopy to generate robust estimates for protein concentration in different compartments of single living cells, validated using realistic simulations of complex multiple compartment cell types. We demonstrate its utility experimentally on model Escherichia coli bacteria and Saccharomyces cerevisiae budding yeast cells, and use it to address the biological question of how signals are transduced in cells. Cells in all domains of life dynamically sense their environment through signal transduction mechanisms, many involving gene regulation. The glucose sensing mechanism of S. cerevisiae is a model system for studying gene regulatory signal transduction. It uses the multi-copy expression inhibitor of the GAL gene family, Mig1, to repress unwanted genes in the presence of elevated extracellular glucose conc...

  13. Buckling of a Longitudinally Jointed Curved Composite Panel Arc Segment for Next Generation of Composite Heavy Lift Launch Vehicles: Verification Testing Analysis

    NASA Technical Reports Server (NTRS)

    Farrokh, Babak; Segal, Kenneth N.; Akkerman, Michael; Glenn, Ronald L.; Rodini, Benjamin T.; Fan, Wei-Ming; Kellas, Sortiris; Pineda, Evan J.

    2014-01-01

In this work, an all-bonded out-of-autoclave (OoA) curved longitudinal composite joint concept, intended for use in the next generation of composite heavy lift launch vehicles, was evaluated and verified through finite element (FE) analysis, fabrication, testing, and post-test inspection. The joint was used to connect two curved, segmented, honeycomb sandwich panels representative of a Space Launch System (SLS) fairing design. The overall size of the resultant panel was 1.37 m by 0.74 m (54 in by 29 in), of which the joint comprised a 10.2 cm (4 in) wide longitudinal strip at the center. NASTRAN and ABAQUS were used to perform linear and non-linear analyses of the buckling and strength performance of the jointed panel. Geometric non-uniformities (i.e., surface contour imperfections) were measured and incorporated into the FE model and analysis. In addition, a sensitivity study of the specimens' end condition showed that bonding face-sheet doublers to the panel's end, coupled with some stress relief features at corner-edges, can significantly reduce the stress concentrations near the load application points. Ultimately, the jointed panel was subjected to a compressive load. Load application was interrupted at the onset of buckling, at 356 kN (80 kips). A post-test non-destructive evaluation (NDE) showed that, as designed, buckling occurred without introducing any damage into the panel or the joint. The jointed panel was further capable of tolerating impact damage at the same buckling load with no evidence of damage propagation. The OoA cured all-composite joint shows promise as a low mass factory joint for segmented barrels.

  14. Quantitative OCT-based corneal topography in keratoconus with intracorneal ring segments

    PubMed Central

    Ortiz, Sergio; Pérez-Merino, Pablo; Alejandre, Nicolas; Gambra, E.; Jimenez-Alfaro, I.; Marcos, Susana

    2012-01-01

    Custom high-resolution high-speed anterior segment spectral domain Optical Coherence Tomography (OCT) was used to characterize three-dimensionally (3-D) corneal topography in keratoconus before and after implantation of intracorneal ring segments (ICRS). Previously described acquisition protocols were followed to minimize the impact of the motions of the eye. The collected set of images was corrected from distortions: fan (scanning) and optical (refraction). Custom algorithms were developed for automatic detection and classification of volumes in the anterior segment of the eye, in particular for the detection and classification of the implanted ICRS. Surfaces were automatically detected for quantitative analysis of the corneal elevation maps (fitted by biconicoids and Zernike polynomials) and pachymetry. Automatic tools were developed for the estimation of the 3-D positioning of the ICRS. The pupil center reference was estimated from the segmented iris volume. The developed algorithms are illustrated in a keratoconic eye (grade III) pre- and 30 days post-operatively after implantation of two triangular-section, 0.3-mm thick Ferrara ring segments. Quantitative corneal topographies reveal that the ICRS produced a flattening of the anterior surface, a steepening of the posterior surface, meridional differences in the changes in curvature and asphericity, and increased symmetry of the anterior topography. Optical distortion correction through the ICRS (of a different refractive index from the cornea) allowed accurate pachymetric estimates, which showed increased thickness in the ectatic area as well as in peripheral corneal areas. Automatic tools allowed estimation of the depth of the implanted ICRS ring, as well as its rotation with respect to the pupil plane. 
Anterior segment sOCT provided with fan and optical distortion correction and analysis tools is an excellent instrument for evaluating and monitoring keratoconic eyes and for the quantification of the changes produced by ICRS treatment. PMID:22567577
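The elevation-map fitting mentioned in this record can be sketched briefly. The hypothetical Python/NumPy fragment below fits a corneal elevation map with a few low-order Zernike terms by linear least squares; the paper's actual analysis also fits biconicoids and presumably uses more Zernike orders, so this is only an assumed, minimal version.

```python
import numpy as np

def zernike_basis(rho, theta):
    """A few low-order Zernike polynomials (Noll-style normalization),
    evaluated on normalized polar coordinates (rho in [0, 1])."""
    return np.column_stack([
        np.ones_like(rho),                        # Z(0,0)  piston
        2.0 * rho * np.sin(theta),                # Z(1,-1) tilt y
        2.0 * rho * np.cos(theta),                # Z(1,1)  tilt x
        np.sqrt(6) * rho**2 * np.sin(2 * theta),  # Z(2,-2) astigmatism
        np.sqrt(3) * (2 * rho**2 - 1),            # Z(2,0)  defocus
        np.sqrt(6) * rho**2 * np.cos(2 * theta),  # Z(2,2)  astigmatism
    ])

def fit_elevation(x, y, z, pupil_radius):
    """Least-squares Zernike fit of elevation samples z at positions (x, y)."""
    rho = np.hypot(x, y) / pupil_radius
    theta = np.arctan2(y, x)
    A = zernike_basis(rho, theta)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs, A @ coeffs                     # coefficients, fitted surface
```

The coefficient vector then summarizes the surface (e.g., the defocus and astigmatism terms), which is how elevation maps become comparable pre- and post-ICRS.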

  15. High volume methane gas hydrate deposits in fine grained sediments from the Krishna-Godavari Basin: Analysis from Micro CT scanning

    NASA Astrophysics Data System (ADS)

    Rees, E. V.; Clayton, C.; Priest, J.; Schultheiss, P. J.

    2009-12-01

The Indian National Gas Hydrate Program (NGHP) Expedition 1, of 2006, investigated several methane gas hydrate deposits on the continental shelf around the coast of India. Using pressure coring techniques (HYACINTH and PCS), intact gas-hydrate bearing, fine-grained sediment cores were recovered during the expedition. Once recovered, these cores were rapidly depressurized and submerged in liquid nitrogen, thereby preserving the structure and form of the hydrate within the host sediment. High resolution X-Ray CT scanning was later employed to image the internal structure of the gas hydrate, analyze the trends in vein orientation, and collect volumetric data. A scanning resolution of 0.08 mm allowed for a detailed view of the three-dimensional distribution of the hydrate within the sediment from which detailed analysis of vein orientation could be made. Two distinct directions of vein growth were identified in each core section studied, which suggested the presence of a specific stress regime in the Krishna-Godavari basin during hydrate formation. In addition, image segmentation of gas hydrate from the sediment allowed for volumetric analysis of the hydrate content within each core section. Results from this analysis showed that high volumes of gas hydrate, up to approximately 70% of the pore space, were present. This high volume of methane gas hydrate can have a significant impact on the stability of the host sediment if dissociation of the hydrate were to occur in situ, through the development of excess pore pressure, increase in water content and change in salinity of the host sediment.
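The pore-space bookkeeping described above can be sketched in a few lines. This is a hedged illustration only: the intensity windows and the simple thresholding are assumptions, and real micro-CT phase classification is considerably more careful.

```python
import numpy as np

def hydrate_pore_fraction(volume, hydrate_window, fluid_window):
    """Fraction of the pore space occupied by hydrate, from a grayscale
    micro-CT volume segmented by simple intensity thresholding.

    hydrate_window / fluid_window: (low, high) grayscale intervals assumed
    to correspond to hydrate and to the remaining pore fluid."""
    hydrate = (volume >= hydrate_window[0]) & (volume < hydrate_window[1])
    fluid = (volume >= fluid_window[0]) & (volume < fluid_window[1])
    pore_space = hydrate | fluid        # pore space = hydrate + pore fluid
    return hydrate.sum() / pore_space.sum()
```

A value near 0.7 would correspond to the "approximately 70% of the pore space" figure reported in the abstract.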

  16. Plasma Exchange for the Recurrence of Primary Focal Segmental Glomerulosclerosis in Adult Renal Transplant Recipients: A Meta-Analysis

    PubMed Central

    Vlachopanos, Georgios; Georgalis, Argyrios; Gakiopoulou, Harikleia

    2015-01-01

Background. Posttransplant recurrence of primary focal segmental glomerulosclerosis (rFSGS) in the form of massive proteinuria is not uncommon and has detrimental consequences on renal allograft survival. A putative circulating permeability factor has been implicated in the pathogenesis, leading to widespread use of plasma exchange (PLEX). We reviewed published studies to assess the role of PLEX in the treatment of rFSGS in adults. Methods. Eligible manuscripts compared PLEX or variants with conventional care for inducing proteinuria remission (PR) in rFSGS and were identified through MEDLINE and reference lists. Data were abstracted in parallel by two reviewers. Results. We detected 6 nonrandomized studies with 117 cases enrolled. In a random effects model, the pooled risk ratio for the composite endpoint of partial or complete PR was 0.38 in favour of PLEX (95% CI: 0.23–0.61). No statistical heterogeneity was observed among included studies (I² = 0%, p = 0.42). On average, 9–26 PLEX sessions were performed to achieve PR. Renal allograft loss due to recurrence was lower (range: 0%–67%) in patients treated with PLEX. Conclusion. Notwithstanding the inherent limitations of small, observational trials, PLEX appears to be effective for PR in rFSGS. Additional research is needed to further elucidate its optimal use and impact on long-term allograft survival.
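The random-effects pooling step can be sketched as follows. This is an illustrative DerSimonian-Laird implementation with made-up inputs, not the meta-analysis's actual study-level data (which the abstract does not report).

```python
import numpy as np

def pooled_risk_ratio(rr, se_log_rr):
    """DerSimonian-Laird random-effects pooled risk ratio.

    rr: per-study risk ratios; se_log_rr: standard errors of log(rr).
    Returns (pooled RR, 95% CI as an array of two values)."""
    y = np.log(np.asarray(rr, float))         # per-study log risk ratios
    w = 1.0 / np.asarray(se_log_rr, float) ** 2   # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fe) ** 2)          # Cochran's Q heterogeneity
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)   # between-study variance
    w_re = 1.0 / (np.asarray(se_log_rr, float) ** 2 + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return np.exp(mu), np.exp([mu - 1.96 * se, mu + 1.96 * se])
```

With I² = 0% as reported above, tau2 collapses to zero and the random-effects estimate coincides with the fixed-effect one.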

  17. Template-based automatic breast segmentation on MRI by excluding the chest region

    SciTech Connect

    Lin, Muqing; Chen, Jeon-Hor; Wang, Xiaoyong; Su, Min-Ying; Chan, Siwa; Chen, Siping

    2013-12-15

    Purpose: Methods for quantification of breast density on MRI using semiautomatic approaches are commonly used. In this study, the authors report on a fully automatic chest template-based method. Methods: Nonfat-suppressed breast MR images from 31 healthy women were analyzed. Among them, one case was randomly selected and used as the template, and the remaining 30 cases were used for testing. Unlike most model-based breast segmentation methods that use the breast region as the template, the chest body region on a middle slice was used as the template. Within the chest template, three body landmarks (thoracic spine and bilateral boundary of the pectoral muscle) were identified for performing the initial V-shape cut to determine the posterior lateral boundary of the breast. The chest template was mapped to each subject's image space to obtain a subject-specific chest model for exclusion. On the remaining image, the chest wall muscle was identified and excluded to obtain clean breast segmentation. The chest and muscle boundaries determined on the middle slice were used as the reference for the segmentation of adjacent slices, and the process continued superiorly and inferiorly until all 3D slices were segmented. The segmentation results were evaluated by an experienced radiologist to mark voxels that were wrongly included or excluded for error analysis. Results: The breast volumes measured by the proposed algorithm were very close to the radiologist's corrected volumes, showing a % difference ranging from 0.01% to 3.04% in 30 tested subjects with a mean of 0.86% ± 0.72%. The total error was calculated by adding the inclusion and the exclusion errors (so they did not cancel each other out), which ranged from 0.05% to 6.75% with a mean of 3.05% ± 1.93%. The fibroglandular tissue segmented within the breast region determined by the algorithm and the radiologist were also very close, showing a % difference ranging from 0.02% to 2.52% with a mean of 1.03% ± 1.03%. 
The total error by adding the inclusion and exclusion errors ranged from 0.16% to 11.8%, with a mean of 2.89% ± 2.55%. Conclusions: The automatic chest template-based breast MRI segmentation method worked well for cases with different body and breast shapes and different density patterns. Compared to the radiologist-established truth, the mean difference in segmented breast volume was approximately 1%, and the total error by considering the additive inclusion and exclusion errors was approximately 3%. This method may provide a reliable tool for MRI-based segmentation of breast density.
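The two error measures reported above (volume % difference, which lets inclusion and exclusion errors cancel, versus their additive total) can be made concrete with a small sketch on made-up voxel masks; the mask layout is hypothetical.

```python
import numpy as np

def segmentation_errors(auto_mask, ref_mask):
    """Volume % difference and additive inclusion/exclusion error (%),
    relative to a reference (e.g. radiologist-corrected) boolean mask."""
    included_wrongly = np.logical_and(auto_mask, ~ref_mask).sum()
    excluded_wrongly = np.logical_and(~auto_mask, ref_mask).sum()
    ref_vol = ref_mask.sum()
    pct_diff = abs(int(auto_mask.sum()) - int(ref_vol)) / ref_vol * 100.0
    # summed (not netted), so the two error types cannot cancel out
    total_error = (included_wrongly + excluded_wrongly) / ref_vol * 100.0
    return pct_diff, total_error
```

This illustrates why the reported total error (~3%) exceeds the volume difference (~1%): equal amounts of wrongly included and wrongly excluded tissue leave the volume unchanged but still count toward the total.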

  18. A supervised learning framework of statistical shape and probability priors for automatic prostate segmentation in ultrasound images.

    PubMed

    Ghose, Soumya; Oliver, Arnau; Mitra, Jhimli; Martí, Robert; Lladó, Xavier; Freixenet, Jordi; Sidibé, Désiré; Vilanova, Joan C; Comet, Josep; Meriaudeau, Fabrice

    2013-08-01

Prostate segmentation aids in prostate volume estimation, multi-modal image registration, and the creation of patient-specific anatomical models for surgical planning and image-guided biopsies. However, manual segmentation is time consuming and suffers from inter- and intra-observer variability. Low-contrast transrectal ultrasound images and the presence of imaging artifacts like speckle, micro-calcifications, and shadow regions hinder computer-aided automatic or semi-automatic prostate segmentation. In this paper, we propose a prostate segmentation approach based on building multiple mean parametric models derived from principal component analysis of shape and posterior probabilities in a multi-resolution framework. The model parameters are then modified with the prior knowledge of the optimization space to achieve optimal prostate segmentation. In contrast to traditional statistical models of shape and intensity priors, we use posterior probabilities of the prostate region determined from random forest classification to build our appearance model, initialize and propagate our model. Furthermore, multiple mean models derived from spectral clustering of combined shape and appearance parameters are applied in parallel to improve segmentation accuracies. The proposed method achieves a mean Dice similarity coefficient value of 0.91 ± 0.09 for 126 images containing 40 images from the apex, 40 images from the base and 46 images from central regions in a leave-one-patient-out validation framework. The mean segmentation time of the procedure is 0.67 ± 0.02 s. PMID:23666263
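The PCA shape-prior step common to this family of methods can be sketched minimally. The data layout below (aligned landmark coordinates stacked into rows) and the 95% variance cutoff are assumptions for illustration; the paper builds multiple such mean models and also includes posterior-probability appearance terms, which this fragment omits.

```python
import numpy as np

def build_shape_model(shapes, var_kept=0.95):
    """Point-distribution shape model via PCA.

    shapes: (n_samples, 2*n_landmarks) array of aligned contour coordinates.
    Returns the mean shape, the retained principal modes (rows), and the
    variance explained by each retained mode."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    # eigen-decomposition of the covariance via SVD of the centered data
    _, s, vt = np.linalg.svd(X, full_matrices=False)
    var = s ** 2 / (len(shapes) - 1)
    k = np.searchsorted(np.cumsum(var) / var.sum(), var_kept) + 1
    return mean, vt[:k], var[:k]

def synthesize(mean, modes, b):
    """New shape from mode weights b (|b_i| typically < 3*sqrt(var_i))."""
    return mean + b @ modes
```

Constraining `b` to a few standard deviations per mode is what keeps a model-driven segmentation inside the space of plausible prostate shapes.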

  19. Impact of Luminal Fluid Volume on the Drug Absorption After Oral Administration: Analysis Based on In Vivo Drug Concentration-Time Profile in the Gastrointestinal Tract.

    PubMed

    Tanaka, Yusuke; Goto, Takanori; Kataoka, Makoto; Sakuma, Shinji; Yamashita, Shinji

    2015-09-01

The objective of this study is to clarify the influence of fluid volume in the gastrointestinal (GI) tract on oral drug absorption. In vivo rat luminal concentrations of FITC-dextran (FD-4), a nonabsorbable marker, and drugs (metoprolol and atenolol) after oral coadministration as solutions with different osmolarity were determined by direct sampling of residual water in each segment of the GI tract. The luminal FD-4 concentration after oral administration as hyposmotic solution was significantly higher than that after administration as isosmotic or hyperosmotic solution. As the change in FD-4 concentration reflects the change in the volume of luminal fluid, it indicated that the luminal volume was greatly influenced by the osmolarity of the solution ingested orally. Then, the fraction of drug absorbed (Fa) in these segments was calculated by comparing the area under the luminal concentration-time curve of FD-4 with those of drugs. Fa values of the two model drugs in each GI segment decreased with increasing luminal fluid volume, and the impact of the fluid volume was more marked for the Fa of atenolol (a poorly permeable drug) than for that of metoprolol (a highly permeable drug). These findings should be beneficial to assure the effectiveness and safety of oral drug therapy. PMID:25821198
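The Fa calculation described above (comparing luminal AUCs of drug and nonabsorbable marker) can be sketched as follows. The concentration-time data and dose normalization below are hypothetical; this is only one plausible reading of the AUC-ratio approach, not the study's exact formula.

```python
import numpy as np

def auc(t, c):
    """Trapezoidal area under a concentration-time curve."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    return float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t)))

def fraction_absorbed(t, c_drug, c_marker, dose_drug=1.0, dose_marker=1.0):
    """Fa in a GI segment: FD-4 is nonabsorbable, so the dose-normalized
    drug-to-marker AUC ratio approximates the fraction of drug remaining
    in the lumen, and Fa = 1 - that ratio (hypothetical sketch)."""
    remaining = (auc(t, c_drug) / dose_drug) / (auc(t, c_marker) / dose_marker)
    return 1.0 - remaining
```

Because both curves are measured in the same (changing) luminal volume, the ratio cancels the volume term, which is the point of coadministering the marker.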

  20. Robust Radiomics Feature Quantification Using Semiautomatic Volumetric Segmentation

    PubMed Central

    Leijenaar, Ralph; Jermoumi, Mohammed; Carvalho, Sara; Mak, Raymond H.; Mitra, Sushmita; Shankar, B. Uma; Kikinis, Ron; Haibe-Kains, Benjamin; Lambin, Philippe; Aerts, Hugo J. W. L.

    2014-01-01

Due to advances in the acquisition and analysis of medical imaging, it is currently possible to quantify the tumor phenotype. The emerging field of Radiomics addresses this issue by converting medical images into minable data by extracting a large number of quantitative imaging features. One of the main challenges of Radiomics is tumor segmentation. Where manual delineation is time consuming and prone to inter-observer variability, it has been shown that semi-automated approaches are fast and reduce inter-observer variability. In this study, a semiautomatic region growing volumetric segmentation algorithm, implemented in the free and publicly available 3D-Slicer platform, was investigated in terms of its robustness for quantitative imaging feature extraction. Fifty-six 3D-radiomic features, quantifying phenotypic differences based on tumor intensity, shape and texture, were extracted from the computed tomography images of twenty lung cancer patients. These radiomic features were derived from the 3D-tumor volumes defined by three independent observers twice using 3D-Slicer, and compared to manual slice-by-slice delineations of five independent physicians in terms of intra-class correlation coefficient (ICC) and feature range. Radiomic features extracted from 3D-Slicer segmentations had significantly higher reproducibility (ICC = 0.85 ± 0.15, p = 0.0009) compared to the features extracted from the manual segmentations (ICC = 0.77 ± 0.17). Furthermore, we found that features extracted from 3D-Slicer segmentations were more robust, as the range was significantly smaller across observers (p = 3.819e-07), and overlapping with the feature ranges extracted from manual contouring (boundary lower: p = 0.007, higher: p = 5.863e-06). Our results show that 3D-Slicer segmented tumor volumes provide a better alternative to the manual delineation for feature quantification, as they yield more reproducible imaging descriptors. 
Therefore, 3D-Slicer can be employed for quantitative image feature extraction and image data mining research in large patient cohorts. PMID:25025374
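The agreement index used in this study can be illustrated with a minimal one-way random-effects ICC(1,1) sketch; the abstract does not state which ICC form was used, so the choice of ICC(1,1) here is an assumption for illustration.

```python
import numpy as np

def icc_1_1(ratings):
    """One-way random-effects ICC(1,1).

    ratings: (n_subjects, n_raters) array holding one feature's value as
    measured by each rater/segmentation for each subject."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    # between-subject and within-subject mean squares (one-way ANOVA)
    ms_between = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Perfect agreement across raters gives ICC = 1; values such as the 0.85 vs. 0.77 reported above quantify how much rater disagreement dilutes the between-subject signal.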

  2. To appear in MICCAI 2006. Multilevel Segmentation and Integrated

    E-print Network

    Yuille, Alan L.

and segmenting brain tumor and edema in multimodal MR volumes. Our results indicate the benefit of using multi-modal data, since different modes give different cues for the presence of tumors and edema (swelling); for example, the T2-weighted modality is more informative for segmenting the edema (swelling) than the T1-weighted modality. Our method combines two of the most effective…

  3. Self-Paced Physics, Segment 18.

    ERIC Educational Resources Information Center

    New York Inst. of Tech., Old Westbury.

Eighty-seven problems are included in this volume, which is arranged to match study segments 2 through 14. The subject matter is related to projectiles, simple harmonic motion, kinetic friction, multiple pulley arrangements, motion on inclined planes, circular motion, potential energy, kinetic energy, center of mass, Newton's laws, elastic and…

  4. Morphological analysis of age-related iridocorneal angle changes in normal and glaucomatous cases using anterior segment optical coherence tomography

    PubMed Central

    Maruyama, Yuko; Mori, Kazuhiko; Ikeda, Yoko; Ueno, Morio; Kinoshita, Shigeru

    2014-01-01

Purpose To analyze age-related morphological changes of the iridocorneal angle in normal subjects and glaucomatous cases, using anterior segment optical coherence tomography (AS-OCT). Methods This study involved 58 eyes of 58 open-angle glaucoma cases and 72 eyes of 72 age-matched normal-open-angle control subjects. Iridocorneal angle structures in nasal and temporal regions and anterior chamber depth (ACD) were measured using AS-OCT. Axial length and refractive error were measured by use of an ocular biometer and auto refractor keratometer. Angle opening distance (AOD), angle recess area (ARA), and trabecular-iris space area (TISA), measured at 500 µm (TISA500) and 750 µm (TISA750) distant from the scleral spur, were calculated in the nasal and temporal regions. A new index, the peripheral angle frame index (PAFI), which represents the peripheral angle structure, was proposed, and was defined as (TISA750-TISA500)/TISA500. Results Refractive power in the glaucoma cases was less than in control cases (P<0.0001). Axial length (P<0.0001) and ACD (P=0.0004) were longer and deeper, respectively, in the glaucoma cases, compared with the control cases. In both control and glaucoma groups, ACD, AOD, ARA, and TISA decreased linearly in an age-dependent manner, while PAFI stayed at relatively constant values throughout the age distribution. AOD in the glaucoma group was longer than in the control group, in both the temporal and nasal regions; ARA and TISA were larger in the glaucoma than in the control group. However, no significant differences in nasal or temporal PAFI were found between the glaucoma and control groups. Conclusion The findings of this study show that AS-OCT is useful for the quantitative evaluation of age-related changes in peripheral angle structure in glaucoma and control cases. PMID:24379654

  5. Next Generation Sequencing Analysis Reveals Segmental Patterns of microRNA Expression in Mouse Epididymal Epithelial Cells

    PubMed Central

    Nixon, Brett; Stanger, Simone J.; Mihalas, Bettina P.; Reilly, Jackson N.; Anderson, Amanda L.; Dun, Matthew D.; Tyagi, Sonika; Holt, Janet E.; McLaughlin, Eileen A.

    2015-01-01

The functional maturation of mammalian spermatozoa is accomplished as the cells descend through the highly specialized microenvironment of the epididymis. This dynamic environment is, in turn, created by the combined secretory and absorptive activity of the surrounding epithelium and displays an extraordinary level of regionalization. Although the regulatory network responsible for spatial coordination of epididymal function remains unclear, recent evidence has highlighted a novel role for the RNA interference pathway. Indeed, as noncanonical regulators of gene expression, small noncoding RNAs have emerged as key elements of the circuitry involved in regulating epididymal function and hence sperm maturation. Herein we have employed next generation sequencing technology to profile the genome-wide miRNA signatures of mouse epididymal cells and characterize segmental patterns of expression. An impressive profile of some 370 miRNAs was detected in the mouse epididymis, with a subset of these (218) specifically identified within the epithelial cells that line the tubule. A majority of the latter miRNAs (75%) were detected at equivalent levels along the entire length of the mouse epididymis. We did however identify a small cohort of miRNAs that displayed highly regionalized patterns of expression, including miR-204-5p and miR-196b-5p, which were down- and up-regulated by approximately 39- and 45-fold between the caput/caudal regions, respectively. In addition, we identified 79 miRNAs (representing ~21% of all miRNAs) as displaying conserved expression within all regions of the mouse, rat and human epididymal tissue. These included 8/14 members of the let-7 family of miRNAs that have been widely implicated in the control of androgen signaling and the repression of cell proliferation and oncogenic pathways. Overall these data provide novel insights into the sophistication of the miRNA network that regulates the function of the male reproductive tract. PMID:26270822

  6. Flight Technical Error Analysis of the SATS Higher Volume Operations Simulation and Flight Experiments

    NASA Technical Reports Server (NTRS)

    Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.

    2005-01-01

This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human in the Loop (HITL) studies of SATS HVO and Baseline operations.

  7. Update of Part 61 Impacts Analysis Methodology. Methodology report. Volume 1

    SciTech Connect

    Oztunali, O.I.; Roles, G.W.

    1986-01-01

    Under contract to the US Nuclear Regulatory Commission, the Envirosphere Company has expanded and updated the impacts analysis methodology used during the development of the 10 CFR Part 61 rule to allow improved consideration of the costs and impacts of treatment and disposal of low-level waste that is close to or exceeds Class C concentrations. The modifications described in this report principally include: (1) an update of the low-level radioactive waste source term, (2) consideration of additional alternative disposal technologies, (3) expansion of the methodology used to calculate disposal costs, (4) consideration of an additional exposure pathway involving direct human contact with disposed waste due to a hypothetical drilling scenario, and (5) use of updated health physics analysis procedures (ICRP-30). Volume 1 of this report describes the calculational algorithms of the updated analysis methodology.

  8. Cognitive, Social, and Literacy Competencies: The Chelsea Bank Simulation Project. Year One: Final Report. [Volume 2]: Appendices.

    ERIC Educational Resources Information Center

    Duffy, Thomas; And Others

    This supplementary volume presents appendixes A-E associated with a 1-year study which determined what secondary school students were doing as they engaged in the Chelsea Bank computer software simulation activities. Appendixes present the SCANS Analysis Coding Sheet; coding problem analysis of 50 video segments; student and teacher interview…

  9. DT-MRI segmentation using graph cuts

    NASA Astrophysics Data System (ADS)

    Weldeselassie, Yonas T.; Hamarneh, Ghassan

    2007-03-01

    An important problem in medical image analysis is the segmentation of anatomical regions of interest. Once regions of interest are segmented, one can extract shape, appearance, and structural features that can be analyzed for disease diagnosis or treatment evaluation. Diffusion tensor magnetic resonance imaging (DT-MRI) is a relatively new medical imaging modality that captures unique water diffusion properties and fiber orientation information of the imaged tissues. In this paper, we extend the interactive multidimensional graph cuts segmentation technique to operate on DT-MRI data by utilizing latest advances in tensor calculus and diffusion tensor dissimilarity metrics. The user interactively selects certain tensors as object ("obj") or background ("bkg") to provide hard constraints for the segmentation. Additional soft constraints incorporate information about both regional tissue diffusion as well as boundaries between tissues of different diffusion properties. Graph cuts are used to find globally optimal segmentation of the underlying 3D DT-MR image among all segmentations satisfying the constraints. We develop a graph structure from the underlying DT-MR image with the tensor voxels corresponding to the graph vertices and with graph edge weights computed using either Log-Euclidean or the J-divergence tensor dissimilarity metric. The topology of our segmentation is unrestricted and both obj and bkg segments may consist of several isolated parts. We test our method on synthetic DT data and apply it to real 2D and 3D MRI, providing segmentations of the corpus callosum in the brain and the ventricles of the heart.
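The Log-Euclidean edge weighting described above can be sketched compactly: the dissimilarity is the Frobenius norm between matrix logarithms of two symmetric positive-definite diffusion tensors. The Gaussian kernel and `sigma` in `edge_weight` are assumptions for illustration; the paper's exact boundary-term weighting (and its J-divergence alternative) may differ.

```python
import numpy as np

def logm_spd(T):
    """Matrix logarithm of a symmetric positive-definite tensor via its
    eigendecomposition: log(T) = V diag(log w) V^T."""
    w, v = np.linalg.eigh(T)
    return (v * np.log(w)) @ v.T

def log_euclidean_distance(T1, T2):
    """Log-Euclidean dissimilarity between two diffusion tensors."""
    return np.linalg.norm(logm_spd(T1) - logm_spd(T2))

def edge_weight(T1, T2, sigma=1.0):
    """Hypothetical boundary-term edge weight for the graph: near 1 for
    similar tensors, decaying with Log-Euclidean distance."""
    return np.exp(-log_euclidean_distance(T1, T2) ** 2 / (2.0 * sigma ** 2))
```

High edge weights between similar tensors make cuts through homogeneous tissue expensive, so the minimum cut preferentially follows boundaries between regions of differing diffusion.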

  10. Portable microcomputer for the analysis of plutonium gamma-ray spectra. Volume I. Data analysis methodology and hardware description

    SciTech Connect

    Ruhter, W.D.

    1984-05-01

    A portable microcomputer has been developed and programmed for the International Atomic Energy Agency (IAEA) to perform in-field analysis of plutonium gamma-ray spectra. The unit includes a 16-bit LSI-11/2 microprocessor, 32-K words of memory, a 20-character display for user prompting, a numeric keyboard for user responses, and a 20-character thermal printer for hard-copy output of results. The unit weighs 11 kg and has dimensions of 33.5 x 30.5 x 23.0 cm. This compactness allows the unit to be stored under an airline seat. Only the positions of the 148-keV ²⁴¹Pu and 208-keV ²³⁷U peaks are required for spectral analysis that gives plutonium isotopic ratios and weight percent abundances. Volume I of this report provides a detailed description of the data analysis methodology, operation instructions, hardware, and maintenance and troubleshooting. Volume II describes the software and provides software listings.

  11. Ocean Optics Protocols for Satellite Ocean Color Sensor Validation. Volume 4; Inherent Optical Properties: Instruments, Characterizations, Field Measurements and Data Analysis Protocols; Revised

    NASA Technical Reports Server (NTRS)

    Mueller, J. L. (Editor); Fargion, Giuletta S. (Editor); McClain, Charles R. (Editor); Pegau, Scott; Zaneveld, J. Ronald V.; Mitchell, B. Gregg; Kahru, Mati; Wieland, John; Stramska, Malgorzat

    2003-01-01

    This document stipulates protocols for measuring bio-optical and radiometric data for the Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) Project activities and algorithm development. The document is organized into 6 separate volumes as Ocean Optics Protocols for Satellite Ocean Color Sensor Validation, Revision 4. Volume I: Introduction, Background and Conventions; Volume II: Instrument Specifications, Characterization and Calibration; Volume III: Radiometric Measurements and Data Analysis Methods; Volume IV: Inherent Optical Properties: Instruments, Characterization, Field Measurements and Data Analysis Protocols; Volume V: Biogeochemical and Bio-Optical Measurements and Data Analysis Methods; Volume VI: Special Topics in Ocean Optics Protocols and Appendices. The earlier version of Ocean Optics Protocols for Satellite Ocean Color Sensor Validation, Revision 3 (Mueller and Fargion 2002, Volumes 1 and 2) is entirely superseded by the six volumes of Revision 4 listed above.

  12. What is a segment?

    PubMed Central

    2013-01-01

    Animals have been described as segmented for more than 2,000 years, yet a precise definition of segmentation remains elusive. Here we give the history of the definition of segmentation, followed by a discussion on current controversies in defining a segment. While there is a general consensus that segmentation involves the repetition of units along the anterior-posterior (a-p) axis, long-running debates exist over whether a segment can be composed of only one tissue layer, whether the most anterior region of the arthropod head is considered segmented, and whether and how the vertebrate head is segmented. Additionally, we discuss whether a segment can be composed of a single cell in a column of cells, or a single row of cells within a grid of cells. We suggest that ‘segmentation’ be used in its more general sense, the repetition of units with a-p polarity along the a-p axis, to prevent artificial classification of animals. We further suggest that this general definition be combined with an exact description of what is being studied, as well as a clearly stated hypothesis concerning the specific nature of the potential homology of structures. These suggestions should facilitate dialogue among scientists who study vastly differing segmental structures. PMID:24345042

  13. What is a segment?

    PubMed

    Hannibal, Roberta L; Patel, Nipam H

    2013-01-01

    Animals have been described as segmented for more than 2,000 years, yet a precise definition of segmentation remains elusive. Here we give the history of the definition of segmentation, followed by a discussion on current controversies in defining a segment. While there is a general consensus that segmentation involves the repetition of units along the anterior-posterior (a-p) axis, long-running debates exist over whether a segment can be composed of only one tissue layer, whether the most anterior region of the arthropod head is considered segmented, and whether and how the vertebrate head is segmented. Additionally, we discuss whether a segment can be composed of a single cell in a column of cells, or a single row of cells within a grid of cells. We suggest that 'segmentation' be used in its more general sense, the repetition of units with a-p polarity along the a-p axis, to prevent artificial classification of animals. We further suggest that this general definition be combined with an exact description of what is being studied, as well as a clearly stated hypothesis concerning the specific nature of the potential homology of structures. These suggestions should facilitate dialogue among scientists who study vastly differing segmental structures. PMID:24345042

  14. Accuracy Analysis for Finite-Volume Discretization Schemes on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2010-01-01

    A new computational analysis tool, the downscaling test, is introduced and applied for studying the convergence rates of truncation and discretization errors of finite-volume discretization schemes on general irregular (e.g., unstructured) grids. The study shows that the design-order convergence of discretization errors can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all. The downscaling test is a general, efficient, accurate, and practical tool, enabling straightforward extension of verification and validation to general unstructured grid formulations. It also allows separate analysis of the interior, boundaries, and singularities that could be useful even in structured-grid settings. There are several new findings arising from the use of the downscaling test analysis. It is shown that the discretization accuracy of a common node-centered finite-volume scheme, known to be second-order accurate for inviscid equations on triangular grids, degenerates to first order for mixed grids. Alternative node-centered schemes are presented and demonstrated to provide second- and third-order accuracies on general mixed grids. The local accuracy deterioration at intersections of tangency and inflow/outflow boundaries is demonstrated using downscaling tests tailored to examining the local behavior of the boundary conditions. The discretization-error order reduction within inviscid stagnation regions is demonstrated. The accuracy deterioration is local, affecting mainly the velocity components, but applies to any order scheme.
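The distinction between truncation- and discretization-error convergence can be made concrete with the standard observed-order computation (a generic sketch, not code from the report); the error values below are made up for illustration:

```python
import math

def observed_order(e_coarse, e_fine, refinement_ratio=2.0):
    """Observed order of accuracy from errors measured on two grids.
    Assuming e ~ C * h**p, p = ln(e_coarse/e_fine) / ln(refinement_ratio)."""
    return math.log(e_coarse / e_fine) / math.log(refinement_ratio)

# Illustrative numbers: discretization error second order while the
# truncation error is only first order -- the decoupling the abstract notes.
p_disc = observed_order(4.0e-3, 1.0e-3)    # errors drop 4x per halving -> 2
p_trunc = observed_order(4.0e-3, 2.0e-3)   # errors drop 2x per halving -> 1
```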

  15. Space and Earth Sciences, Computer Systems, and Scientific Data Analysis Support, Volume 1

    NASA Technical Reports Server (NTRS)

    Estes, Ronald H. (editor)

    1993-01-01

    This Final Progress Report covers the specific technical activities of Hughes STX Corporation for the last contract triannual period of 1 June through 30 Sep. 1993, in support of assigned task activities at Goddard Space Flight Center (GSFC). It also provides a brief summary of work throughout the contract period of performance on each active task. Technical activity is presented in Volume 1, while financial and level-of-effort data is presented in Volume 2. Technical support was provided to all Division and Laboratories of Goddard's Space Sciences and Earth Sciences Directorates. Types of support include: scientific programming, systems programming, computer management, mission planning, scientific investigation, data analysis, data processing, data base creation and maintenance, instrumentation development, and management services. Mission and instruments supported include: ROSAT, Astro-D, BBXRT, XTE, AXAF, GRO, COBE, WIND, UIT, SMM, STIS, HEIDI, DE, URAP, CRRES, Voyagers, ISEE, San Marco, LAGEOS, TOPEX/Poseidon, Pioneer-Venus, Galileo, Cassini, Nimbus-7/TOMS, Meteor-3/TOMS, FIFE, BOREAS, TRMM, AVHRR, and Landsat. Accomplishments include: development of computing programs for mission science and data analysis, supercomputer applications support, computer network support, computational upgrades for data archival and analysis centers, end-to-end management for mission data flow, scientific modeling and results in the fields of space and Earth physics, planning and design of GSFC VO DAAC and VO IMS, fabrication, assembly, and testing of mission instrumentation, and design of mission operations center.

  16. Fold distributions at clover, crystal and segment levels for segmented clover detectors

    NASA Astrophysics Data System (ADS)

    Kshetri, R.; Bhattacharya, P.

    2014-10-01

    Fold distributions at clover, crystal and segment levels have been extracted for an array of segmented clover detectors for various gamma energies. A simple analysis of the results based on a model-independent approach has been presented. For the first time, the clover fold distribution of an array and the associated array addback factor have been extracted. We have calculated the percentages of the number of crystals and segments that fire for a full energy peak event.

  17. Global multi-scale segmentation of continental and coastal waters from the watersheds to the continental margins

    NASA Astrophysics Data System (ADS)

    Laruelle, G. G.; Dürr, H. H.; Lauerwald, R.; Hartmann, J.; Slomp, C. P.; Goossens, N.; Regnier, P. A. G.

    2013-05-01

    Past characterizations of the land-ocean continuum were constructed either from a continental perspective through an analysis of watershed river basin properties (COSCATs: COastal Segmentation and related CATchments) or from an oceanic perspective, through a regionalization of the proximal and distal continental margins (LMEs: large marine ecosystems). Here, we present a global-scale coastal segmentation, composed of three consistent levels, that includes the whole aquatic continuum with its riverine, estuarine and shelf sea components. Our work delineates comprehensive ensembles by harmonizing previous segmentations and typologies in order to retain the most important physical characteristics of both the land and shelf areas. The proposed multi-scale segmentation results in a distribution of global exorheic watersheds, estuaries and continental shelf seas among 45 major zones (MARCATS: MARgins and CATchments Segmentation) and 149 sub-units (COSCATs). Geographic and hydrologic parameters such as the surface area, volume and freshwater residence time are calculated for each coastal unit as well as different hypsometric profiles. Our analysis provides detailed insights into the distributions of coastal and continental shelf areas and how they connect with incoming riverine fluxes. The segmentation is also used to re-evaluate the global estuarine CO2 flux at the air-water interface combining global and regional average emission rates derived from local studies.
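The freshwater residence time computed for each coastal unit is the ratio of the unit's volume to its freshwater input; a minimal sketch with unit handling (the example figures are hypothetical, not values from the segmentation):

```python
def residence_time_years(volume_km3, discharge_m3_per_s):
    """Freshwater residence time tau = V / Q, converting a volume in km^3
    and a river discharge in m^3/s into a result in years."""
    seconds_per_year = 365.25 * 24 * 3600.0
    return (volume_km3 * 1.0e9) / (discharge_m3_per_s * seconds_per_year)

# Hypothetical shelf-sea unit: 1000 km^3 flushed by 5000 m^3/s of river water
tau = residence_time_years(1000.0, 5000.0)    # roughly 6.3 years
```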

  18. T3 glottic cancer: an analysis of dose time-volume factors

    SciTech Connect

    Harwood, A.R.; Beale, F.A.; Cummings, B.J.; Hawkins, N.V.; Keane, T.J.; Rider, W.D.

    1980-06-01

    This report analyzes dose-time-volume factors in 112 patients with T3N0M0 glottic cancer who were treated with radical radiotherapy with surgery for salvage between 1963 and 1977. 55% of the patients are alive and well 5 years following treatment; 26% died of glottic cancer and 19% died of intercurrent disease. In the 1965 to 1969 time period, 31% died of tumor as compared to 16% in the 1975 to 1977 time period. Overall local control by radiotherapy was 51%; 2/3 of the failures were surgically salvaged. 44% were locally controlled by radiotherapy in the 1965 to 1969 time period and 57% in the 1975 to 1977 time period. Analysis of dose-time-volume factors reveals that the optimal dose is greater than 1700 ret and a minimal volume of 6 x 8 cm should be used. A dose-cure curve for T3 glottic cancer is constructed and compared with the dose complication curve for the larynx and the dose-cure curve for T1N0M0 glottic cancer. A comparison of cure rates between 112 patients treated with radical radiotherapy and surgery for salvage versus 28 patients treated with combined pre-operative irradiation and surgery reveals no difference in the proportion of patients who died of glottic cancer or in the number of patients alive at 5 years following treatment.

  19. Coupled Structural, Thermal, Phase-change and Electromagnetic Analysis for Superconductors, Volume 2

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Park, K. C.; Militello, C.; Schuler, J. J.

    1996-01-01

    Described are the theoretical development and computer implementation of reliable and efficient methods for the analysis of coupled mechanical problems that involve the interaction of mechanical, thermal, phase-change and electromagnetic subproblems. The focus application has been the modeling of superconductivity and associated quantum-state phase change phenomena. In support of this objective the work has addressed the following issues: (1) development of variational principles for finite elements, (2) finite element modeling of the electromagnetic problem, (3) coupling of thermal and mechanical effects, and (4) computer implementation and solution of the superconductivity transition problem. The main accomplishments have been: (1) the development of the theory of parametrized and gauged variational principles, (2) the application of those principles to the construction of electromagnetic, thermal and mechanical finite elements, (3) the coupling of electromagnetic finite elements with thermal and superconducting effects, and (4) the first detailed finite element simulations of bulk superconductors, in particular the Meissner effect and the nature of the normal conducting boundary layer. The theoretical development is described in two volumes. Volume 1 describes mostly formulation specific problems. Volume 2 describes generalization of those formulations.

  20. Structural analysis of the right-lateral strike-slip Qingchuan fault, northeastern segment of the Longmen Shan thrust belt, central China

    NASA Astrophysics Data System (ADS)

    Lin, Aiming; Rao, Gang; Yan, Bing

    2014-11-01

    The eastern margin of the Tibetan Plateau is marked by the Longmen Shan thrust belt (LSTB), which is dominated by thrust faults and thrust-related fold structures and hosted the 2008 Mw 7.9 thrusting-type Wenchuan earthquake. Although previous works demonstrated that the seismogenic fault for the earthquake changed coseismic slip sense from thrust-dominated slip in the central and southeastern segments of the LSTB to right-lateral strike-slip-dominated displacement along the Qingchuan fault (northeastern segment of the LSTB), the related structures and current activity of the Qingchuan fault remain unclear. Topographic analyses of 0.5-m-resolution WorldView imagery and Digital Elevation Model (DEM) data, field investigations and structural analysis of the fault zone reveal that: i) stream channels and late Pleistocene-Holocene terrace risers and alluvial fans are systematically offset dextrally along the Qingchuan fault; ii) foliations developed in the fault zone indicate a right-lateral strike-slip-dominated displacement; and iii) geological evidence and seismic data show that the Qingchuan fault is currently active as the main seismogenic fault dominated by a right-lateral strike-slip with an average slip rate of ca. 3-5 mm/yr. Our results demonstrate that the spatial change in slip sense along the LSTB from thrust-dominated in the central and southwestern sectors to right-lateral strike-slip-dominated in the northeastern sector is mainly caused by a change in the orientation of fault geometry from NE-SW to ENE-WSW along the LSTB.

  1. Segmental distribution of the motor neuron columns that supply the rat hindlimb: A muscle/motor neuron tract-tracing analysis targeting the motor end plates.

    PubMed

    Mohan, R; Tosolini, A P; Morris, R

    2015-10-29

    Spinal cord injury (SCI) that disrupts input from higher brain centers to the lumbar region of the spinal cord results in paraplegia, one of the most debilitating conditions affecting locomotion. Non-human primates have long been considered to be the most appropriate animal to model lower limb dysfunction. More recently, however, there has been a wealth of scientific information gathered in the rat regarding the central control of locomotion. Moreover, rodent models of SCI at lumbar levels have been widely used to validate therapeutic scenarios aimed at the restoration of locomotor activities. Despite the growing use of the rat as a model of locomotor dysfunction, knowledge regarding the anatomical relationship between spinal cord motor neurons and the hindlimb muscles that they innervate is incomplete. Previous studies performed in our laboratory have shown the details of the muscle/motor neuron topographical relationship for the mouse forelimb and hindlimb as well as for the rat forelimb. The present analysis aims to characterize the segmental distribution of the motor neuron pools that innervate the muscles of the rat hindlimb, hence completing this series of studies. The location of the motor end plate (MEP) regions on the main muscles of the rat hindlimb was first revealed with acetylcholinesterase histochemistry. For each muscle under scrutiny, injections of Fluoro-Gold were then performed along the length of the MEP region. Targeting the MEPs gave rise to columns of motor neurons that span more spinal cord segments than previously reported. The importance of this study is discussed in terms of its application to gene therapy for SCI. PMID:26304758

  2. CLASSIFICATION OF AMERICAN CITIES FOR CASE STUDY ANALYSIS. VOLUME III. DOCUMENTATION OF DATA USED IN FACTOR ANALYSIS AND CITY CLASSIFICATION

    EPA Science Inventory

    Volume 3 of a three volume study continues a discussion begun in volume 2 of the methodology for classifying U.S. cities with regard to environmental issues and federal policies for environmental quality.

  3. Monitoring Change Through Hierarchical Segmentation of Remotely Sensed Image Data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Lawrence, William T.

    2005-01-01

    NASA's Goddard Space Flight Center has developed a fast and effective method for generating image segmentation hierarchies. These segmentation hierarchies organize image data in a manner that makes their information content more accessible for analysis. Image segmentation enables analysis through the examination of image regions rather than individual image pixels. In addition, the segmentation hierarchy provides additional analysis clues through the tracing of the behavior of image region characteristics at several levels of segmentation detail. The potential for extracting the information content from imagery data based on segmentation hierarchies has not been fully explored for the benefit of the Earth and space science communities. This paper explores the potential of exploiting these segmentation hierarchies for the analysis of multi-date data sets, and for the particular application of change monitoring.

  4. Survey of contemporary trends in color image segmentation

    NASA Astrophysics Data System (ADS)

    Vantaram, Sreenath Rao; Saber, Eli

    2012-10-01

    In recent years, the acquisition of image and video information for processing, analysis, understanding, and exploitation of the underlying content in various applications, ranging from remote sensing to biomedical imaging, has grown at an unprecedented rate. Analysis by human observers is quite laborious, tiresome, and time consuming, if not infeasible, given the large and continuously rising volume of data. Hence the need for systems capable of automatically and effectively analyzing the aforementioned imagery for a variety of uses that span the spectrum from homeland security to elderly care. In order to achieve the above, tools such as image segmentation provide the appropriate foundation for expediting and improving the effectiveness of subsequent high-level tasks by providing a condensed and pertinent representation of image information. We provide a comprehensive survey of color image segmentation strategies adopted over the last decade, though notable contributions in the gray scale domain will also be discussed. Our taxonomy of segmentation techniques is sampled from a wide spectrum of spatially blind (or feature-based) approaches such as clustering and histogram thresholding as well as spatially guided (or spatial domain-based) methods such as region growing/splitting/merging, energy-driven parametric/geometric active contours, supervised/unsupervised graph cuts, and watersheds, to name a few. In addition, qualitative and quantitative results of prominent algorithms on several images from the Berkeley segmentation dataset are shown in order to furnish a fair indication of the current quality of the state of the art. Finally, we provide a brief discussion on our current perspective of the field as well as its associated future trends.
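As a concrete instance of the "spatially blind" histogram-thresholding family surveyed here, Otsu's classic method picks the gray level that maximizes between-class variance; a minimal NumPy sketch (a generic illustration, not tied to any specific algorithm evaluated in the survey):

```python
import numpy as np

def otsu_threshold(image):
    """Otsu's histogram threshold for 8-bit data: choose the gray level t
    maximizing the between-class variance
    sigma_b^2(t) = (mu_T*omega(t) - mu(t))^2 / (omega(t)*(1 - omega(t)))."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # class-0 probability up to level t
    mu = np.cumsum(p * np.arange(256))     # cumulative first moment
    mu_t = mu[-1]                          # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2 = np.nan_to_num(sigma_b2)     # 0/0 at empty classes -> 0
    return int(np.argmax(sigma_b2))

# Synthetic bimodal "image": dark pixels at 50, bright pixels at 200
img = np.concatenate([np.full(100, 50), np.full(100, 200)])
t = otsu_threshold(img)                    # lands between the two modes
```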

  5. Calculation of standard liver volume in Korean adults with analysis of confounding variables

    PubMed Central

    Um, Eun Hae; Song, Gi-Won; Jung, Dong-Hwan; Ahn, Chul-Soo; Kim, Ki-Hun; Moon, Deok-Bog; Park, Gil-Chun; Lee, Sung-Gyu

    2015-01-01

    Backgrounds/Aims Standard liver volume (SLV) is an important parameter that has been used as a reference value to estimate graft matching in living donor liver transplantation (LDLT). This study aimed to determine a reliable SLV formula for Korean adult patients, compare it with the 15 SLV formulae from other studies, and further estimate SLV formulae by gender and body mass index (BMI). Methods Computed tomography liver volumetry was performed in 1,000 living donors for LDLT and regression formulae for SLV were calculated. Individual donor data were applied to the 15 previously published SLV formulae and compared with the SLV formula derived in this study. Analysis of the confounding variables of BMI and gender was also performed. Results Two formulae, "SLV (ml)=908.204×BSA-464.728" with the DuBois body surface area (BSA) formula and "SLV (ml)=893.485×BSA-439.169" with the Mosteller BSA formula, were derived using the profiles of the 1,000 living donors included in the study. In comparison with the 15 other formulae, all except the Chouker formula showed mean volume percentage errors of 4.8-5.4%. Gender showed no significant effect on total liver volume (TLV), but there was a significant increase in TLV as BMI increased. Conclusions Our study suggested that most SLV formulae showed a crudely applicable range of SLV estimation for Korean adults. Considering the volume error in estimating SLV, further SLV studies with larger populations from multiple centers should be performed to enhance predictability. Our results suggest that classifying SLV formulae by BMI and gender is unnecessary. PMID:26693231
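The reported regression is easy to apply directly; a sketch assuming the standard DuBois-DuBois BSA constants (weight in kg, height in cm), with the SLV coefficients taken from the abstract:

```python
def dubois_bsa(weight_kg, height_cm):
    """DuBois-DuBois body surface area in m^2 (standard published constants)."""
    return 0.007184 * (weight_kg ** 0.425) * (height_cm ** 0.725)

def standard_liver_volume_ml(weight_kg, height_cm):
    """SLV regression from the abstract (DuBois BSA variant):
    SLV (ml) = 908.204 * BSA - 464.728."""
    return 908.204 * dubois_bsa(weight_kg, height_cm) - 464.728

# Example donor (illustrative anthropometrics): 70 kg, 170 cm
slv = standard_liver_volume_ml(70.0, 170.0)    # about 1.18 litres
```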

  6. Graph-based interpretation of the molecular interstellar medium segmentation

    NASA Astrophysics Data System (ADS)

    Colombo, D.; Rosolowsky, E.; Ginsburg, A.; Duarte-Cabral, A.; Hughes, A.

    2015-12-01

    We present a generalization of the giant molecular cloud identification problem based on cluster analysis. The method we designed, SCIMES (Spectral Clustering for Interstellar Molecular Emission Segmentation), considers the dendrogram of emission in the broader framework of graph theory and utilizes spectral clustering to find discrete regions with similar emission properties. For Galactic molecular cloud structures, we show that the characteristic volume and/or integrated CO luminosity are useful criteria to define the clustering, yielding emission structures that closely reproduce `by-eye' identification results. SCIMES performs best on well-resolved, high-resolution data, making it complementary to other available algorithms. Using 12CO(1-0) data for the Orion-Monoceros complex, we demonstrate that SCIMES provides robust results against changes of the dendrogram-construction parameters, noise realizations and degraded resolution. By comparing SCIMES with other cloud decomposition approaches, we show that our method is able to identify all canonical clouds of the Orion-Monoceros region, avoiding the overdivision within high-resolution survey data that represents a common limitation of several decomposition algorithms. The Orion-Monoceros objects exhibit hierarchies and size-linewidth relationships typical of the turbulent gas in molecular clouds, although `the Scissors' region deviates from this common description. SCIMES represents a significant step forward in moving away from pixel-based cloud segmentation towards a more physically oriented approach, where virtually all properties of the ISM can be used for the segmentation of discrete objects.
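The core of spectral clustering as used here, embedding graph vertices via Laplacian eigenvectors and then clustering the embedding, can be sketched with NumPy alone. This toy version (deterministic center initialization, fixed iteration count, no empty-cluster handling) is a simplification of the general technique, not code from the SCIMES package:

```python
import numpy as np

def spectral_clusters(affinity, k):
    """Toy spectral clustering: embed vertices with the k eigenvectors of the
    symmetric normalized Laplacian having the smallest eigenvalues, then run
    a few Lloyd (k-means) iterations on the embedding."""
    d_inv_sqrt = 1.0 / np.sqrt(affinity.sum(axis=1))
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    L = np.eye(len(affinity)) - d_inv_sqrt[:, None] * affinity * d_inv_sqrt[None, :]
    _, vecs = np.linalg.eigh(L)            # eigh sorts eigenvalues ascending
    X = vecs[:, :k]                        # spectral embedding of the vertices
    # Deterministic init: k vertices spread across the index range
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)]
    for _ in range(50):                    # fixed iteration budget (toy)
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

# Two well-separated vertex "clouds" of 3 nodes each in the affinity graph
A = np.full((6, 6), 0.01)
A[:3, :3] = 1.0
A[3:, 3:] = 1.0
labels = spectral_clusters(A, 2)
```

In SCIMES the affinity matrix would instead encode dendrogram-structure similarities (e.g. volume or CO luminosity), but the clustering machinery is the same.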

  7. The history of NATO TNF policy: The role of studies, analysis and exercises conference proceedings. Volume 2: Papers and presentations

    SciTech Connect

    Rinne, R.L.

    1994-02-01

    This conference was organized to study and analyze the role of simulation, analysis, modeling, and exercises in the history of NATO policy. The premise was not that the results of past studies will apply to future policy, but rather that understanding what influenced the decision process -- and how -- would be of value. The structure of the conference was built around discussion panels. The panels were augmented by a series of papers and presentations focusing on particular TNF events, issues, studies, or exercises. The conference proceedings consist of three volumes. Volume 1 contains the conference introduction, agenda, biographical sketches of principal participants, and analytical summary of the presentations and panels. This volume contains a short introduction and the papers and presentations from the conference. Volume 3 contains selected papers by Brig. Gen. Robert C. Richardson III (Ret.). Individual papers in this volume were abstracted and indexed for the database.

  8. The history of NATO TNF policy: The role of studies, analysis and exercises conference proceedings. Volume 1, Introduction and summary

    SciTech Connect

    Rinne, R.L.

    1994-02-01

    This conference was organized to study and analyze the role of simulation, analysis, modeling, and exercises in the history of NATO policy. The premise was not that the results of past studies will apply to future policy, but rather that understanding what influenced the decision process -- and how -- would be of value. The structure of the conference was built around discussion panels. The panels were augmented by a series of papers and presentations focusing on particular TNF events, issues, studies, or exercises. The conference proceedings consist of three volumes. This volume, Volume 1, contains the conference introduction, agenda, biographical sketches of principal participants, and analytical summary of the presentations and discussion panels. Volume 2 contains a short introduction and the papers and presentations from the conference. Volume 3 contains selected papers by Brig. Gen. Robert C. Richardson III (Ret.).

  9. Effects of elevated vacuum on in-socket residual limb fluid volume: Case study results using bioimpedance analysis

    PubMed Central

    Sanders, JE; Harrison, DS; Myers, TR; Allyn, KJ

    2015-01-01

    Bioimpedance analysis was used to measure residual limb fluid volume on seven trans-tibial amputee subjects using elevated vacuum sockets and non-elevated vacuum sockets. Fluid volume changes were assessed during sessions with the subjects sitting, standing, and walking. In general, fluid volume losses during 3 or 5 min walks and losses over the course of the 30-min test session were less for elevated vacuum than for suction. A number of variables including the time of day data were collected, soft tissue consistency, socket-to-limb size differences and shape differences, and subject health may have affected the results and had an equivalent or greater impact on limb fluid volume compared with elevated vacuum. Researchers should well consider these variables in study design of future investigations on the effects of elevated vacuum on residual limb volume. PMID:22234667

  10. Automatic Contrail Detection and Segmentation

    NASA Technical Reports Server (NTRS)

    Weiss, John M.; Christopher, Sundar A.; Welch, Ronald M.

    1998-01-01

    Automatic contrail detection is of major importance in the study of the atmospheric effects of aviation. Due to the large volume of satellite imagery, selecting contrail images for study by hand is impractical and highly subject to human error. It is far better to have a system in place that will automatically evaluate an image to determine 1) whether it contains contrails and 2) where the contrails are located. Preliminary studies indicate that it is possible to automatically detect and locate contrails in Advanced Very High Resolution Radiometer (AVHRR) imagery with a high degree of confidence. Once contrails have been identified and localized in a satellite image, it is useful to segment the image into contrail versus noncontrail pixels. The ability to partition image pixels makes it possible to determine the optical properties of contrails, including optical thickness and particle size. In this paper, we describe a new technique for segmenting satellite images containing contrails. This method has good potential for creating a contrail climatology in an automated fashion. The majority of contrails are detected, rejecting clutter in the image, even cirrus streaks. Long, thin contrails are most easily detected. However, some contrails may be missed because they are curved, diffused over a large area, or present in short segments. Contrails average 2-3 km in width for the cases studied.
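Long, thin, straight features like the contrails described above are classically found with a Hough transform; a minimal sketch (a generic line detector, not the authors' method) that returns the strongest line in a binary mask:

```python
import numpy as np

def hough_line_peak(binary):
    """Minimal Hough transform for lines: vote in (theta, rho) space and
    return the (theta_degrees, rho) cell with the most votes. Long straight
    streaks such as contrails produce a sharp accumulator peak."""
    ys, xs = np.nonzero(binary)
    thetas = np.deg2rad(np.arange(0, 180))        # 1-degree resolution
    diag = int(np.ceil(np.hypot(*binary.shape)))  # max possible |rho|
    acc = np.zeros((len(thetas), 2 * diag + 1), dtype=int)
    for ti, th in enumerate(thetas):
        # Line model: x*cos(theta) + y*sin(theta) = rho
        rhos = np.round(xs * np.cos(th) + ys * np.sin(th)).astype(int) + diag
        np.add.at(acc, (ti, rhos), 1)             # accumulate votes
    ti, ri = np.unravel_index(np.argmax(acc), acc.shape)
    return float(np.rad2deg(thetas[ti])), ri - diag

# Synthetic 50x50 mask with one vertical streak at x = 10
mask = np.zeros((50, 50), dtype=bool)
mask[:, 10] = True
theta, rho = hough_line_peak(mask)
```

As the abstract notes, curved or diffuse contrails break the straight-line model; such cases need segment-wise or more flexible detectors.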

  11. Predictive urinary biomarkers for steroid-resistant and steroid-sensitive focal segmental glomerulosclerosis using high resolution mass spectrometry and multivariate statistical analysis

    PubMed Central

    2014-01-01

    Background Focal segmental glomerulosclerosis (FSGS) is a glomerular scarring disease diagnosed mostly by kidney biopsy. Since there is currently no diagnostic test that can accurately predict steroid responsiveness in FSGS, prediction of the responsiveness of patients to steroid therapy with noninvasive means has become a critical issue. In the present study urinary proteomics was used as a noninvasive tool to discover potential predictive biomarkers. Methods Urinary proteome of 10 patients (n = 6 steroid-sensitive, n = 4 steroid-resistant) with biopsy proven FSGS was analyzed using nano-LC-MS/MS and supervised multivariate statistical analysis was performed. Results Twenty-one proteins were identified as discriminating species, among which apolipoprotein A-1 and matrix-remodeling protein 8 had the most drastic fold changes, being over- and underrepresented, respectively, in steroid sensitive compared to steroid resistant urine samples. Gene ontology enrichment analysis revealed acute inflammatory response as the dominant biological process. Conclusion The obtained results suggest a panel of predictive biomarkers for FSGS. Proteins involved in the inflammatory response are shown to be implicated in the responsiveness. As a tool for biomarker discovery, urinary proteomics is especially fruitful in the area of prediction of responsiveness to drugs. Further validation of these biomarkers is however needed. PMID:25182141

  12. Use of Anisotropy, 3D Segmented Atlas, and Computational Analysis to Identify Gray Matter Subcortical Lesions Common to Concussive Injury from Different Sites on the Cortex

    PubMed Central

    Kulkarni, Praveen; Kenkel, William; Finklestein, Seth P.; Barchet, Thomas M.; Ren, JingMei; Davenport, Mathew; Shenton, Martha E.; Kikinis, Zora; Nedelman, Mark; Ferris, Craig F.

    2015-01-01

    Traumatic brain injury (TBI) can occur anywhere along the cortical mantle. While the cortical contusions may be random and disparate in their locations, the clinical outcomes are often similar and difficult to explain. Thus a question arises: do concussions at different sites on the cortex affect similar subcortical brain regions? To address this question we used a fluid percussion model to concuss the right caudal or rostral cortices in rats. Five days later, diffusion tensor MRI data were acquired for indices of anisotropy (IA) for use in a novel method of analysis to detect changes in gray matter microarchitecture. IA values from over 20,000 voxels were registered into a 3D segmented, annotated rat atlas covering 150 brain areas. Comparisons between left and right hemispheres revealed a small population of subcortical sites with altered IA values. Rostral and caudal concussions were strikingly similar in the subcortical locations they impacted, particularly the central nucleus of the amygdala, laterodorsal thalamus, and hippocampal complex. Subsequent immunohistochemical analysis of these sites showed significant neuroinflammation. This study presents three significant findings that advance our understanding and evaluation of TBI: 1) the introduction of a new method to identify highly localized disturbances in discrete gray matter, subcortical brain nuclei without postmortem histology; 2) the use of this method to demonstrate that separate injuries to the rostral and caudal cortex produce the same subcortical disturbances; and 3) the central nucleus of the amygdala, critical in the regulation of emotion, is vulnerable to concussion. PMID:25955025
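The atlas-based left-right comparison amounts to pooling voxel IA values by region label and contrasting each region with its contralateral mirror. A minimal sketch, assuming a made-up labeling convention (positive codes for right-hemisphere regions, negative codes for their left mirrors, 0 for background), not the study's actual 150-region atlas:

```python
import numpy as np

def regional_ia_asymmetry(ia, labels):
    """Mean index-of-anisotropy (IA) per atlas region, left vs right.

    ia:     1-D array of per-voxel IA values.
    labels: 1-D int array of the same length; positive codes label
            right-hemisphere regions, negative codes their left-hemisphere
            mirrors, 0 is background (a hypothetical atlas convention).
    Returns {region_code: (left_mean, right_mean, right - left)}.
    """
    out = {}
    for code in np.unique(np.abs(labels)):
        if code == 0:
            continue
        left = ia[labels == -code]
        right = ia[labels == code]
        if len(left) and len(right):
            lm, rm = float(left.mean()), float(right.mean())
            out[int(code)] = (lm, rm, rm - lm)
    return out

# Toy data: region 7 (standing in for an impacted nucleus) is depressed on
# the injured right side; region 3 is symmetric.
labels = np.array([3, -3, 3, -3, 7, 7, -7, -7, 0])
ia     = np.array([0.5, 0.5, 0.5, 0.5, 0.2, 0.2, 0.6, 0.6, 0.9])
asym = regional_ia_asymmetry(ia, labels)
```

Regions with a large right-minus-left difference would then be flagged for follow-up, as the immunohistochemistry was in the study.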

  13. On 3-D inelastic analysis methods for hot section components. Volume 1: Special finite element models

    NASA Technical Reports Server (NTRS)

    Nakazawa, S.

    1988-01-01

    This annual status report presents the results of work performed during the fourth year of the 3-D Inelastic Analysis Methods for Hot Section Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of new computer codes permitting more accurate and efficient 3-D analysis of selected hot section components, i.e., combustor liners, turbine blades and turbine vanes. The computer codes embody a progression of math models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components. Volume 1 of this report discusses the special finite element models developed during the fourth year of the contract.

  14. FBI fingerprint identification automation study: AIDS 3 evaluation report. Volume 6: Environmental analysis

    NASA Technical Reports Server (NTRS)

    Mulhall, B. D. L.

    1980-01-01

    The results of the analysis of the external environment of the FBI Fingerprint Identification Division are presented. Possible trends in the future environment of the Division that may affect the workload were projected to determine whether future workload will lie within the capability range of the proposed new system, AIDS 3. Two working models of the environment, internal and external, were developed, and from these scenarios a projection of possible future workload volume and mixture was developed. Possible drivers of workload change were identified and assessed for upper and lower bounds of effect. Data used for the study were derived from historical information, analysis of the current situation, and interviews with various agencies that use or hold a stake in the present system.

  15. Space station data system analysis/architecture study. Task 3: Trade studies, DR-5, volume 1

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The primary objective of Task 3 is to provide additional analysis and insight necessary to support key design/programmatic decisions for options quantification and selection for system definition. This includes: (1) the identification of key trade study topics; (2) the definition of a trade study procedure for each topic (issues to be resolved, key inputs, criteria/weighting, methodology); (3) the conduct of tradeoff and sensitivity analyses; and (4) the review/verification of results within the context of evolving system design and definition. The trade study topics addressed in this volume include space autonomy and function automation, software transportability, system network topology, communications standardization, onboard local area networking, distributed operating system, software configuration management, and the software development environment facility.
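The criteria/weighting step of such a trade study is commonly a weighted-sum scoring matrix, with sensitivity analysis done by perturbing the weights and re-ranking. The options, criteria, scores, and weights below are purely illustrative, not values from the study:

```python
def trade_study(options, weights):
    """Weighted-sum scoring for a trade study.

    options: dict name -> dict criterion -> score (higher is better).
    weights: dict criterion -> weight (normalized here).
    Returns options ranked by weighted score, best first.
    """
    total = sum(weights.values())
    norm = {c: w / total for c, w in weights.items()}
    scored = {
        name: sum(norm[c] * s for c, s in crit.items())
        for name, crit in options.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical network-topology trade: criteria and scores are made up.
options = {
    "bus":  {"cost": 9, "growth": 4, "reliability": 5},
    "ring": {"cost": 6, "growth": 7, "reliability": 8},
}
baseline = trade_study(options, {"cost": 5, "growth": 1, "reliability": 1})
# Sensitivity: shift weight from cost toward reliability and re-rank.
shifted = trade_study(options, {"cost": 1, "growth": 2, "reliability": 4})
```

Here the ranking flips when the weighting changes, which is precisely the kind of result a sensitivity analysis is meant to surface.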

  16. Geotechnical Field Data and Analysis Report, July 1991--June 1992. Volume 1

    SciTech Connect

    Not Available

    1993-09-01

    The Geotechnical Field Data and Analysis Report documents the geotechnical data from the underground excavations at the Waste Isolation Pilot Plant (WIPP) located near Carlsbad, New Mexico. The data are used to characterize conditions, confirm design assumptions, and understand and predict the performance of the underground excavations during operations. The data are obtained as part of a routine monitoring program and do not include data from tests performed by Sandia National Laboratories (SNL), the Scientific Advisor to the project, in support of performance assessment studies. The purpose of the geomechanical monitoring program is to provide in situ data to support continuing assessments of the design for the underground facilities. Specifically, the program provides: early detection of conditions that could compromise operational safety; evaluation of room closure to ensure retrievability of waste; guidance for design modifications and remedial actions; and data for interpreting the actual behavior of underground openings in comparison with established design criteria. This Geotechnical Field Data and Analysis Report covers the period July 1, 1991 to June 30, 1992. Volume 1 provides an interpretation of the field data, while Volume 2 describes and presents the data themselves.
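A basic piece of the room-closure evaluation described above is estimating a closure rate from periodic convergence readings and extrapolating it over the retrieval horizon. The sketch below fits a straight line to hypothetical roof-to-floor convergence data; the readings, the linear model, and the five-year horizon are illustrative assumptions, not WIPP data:

```python
import numpy as np

def closure_rate(days, convergence_mm):
    """Least-squares linear fit to convergence readings.

    Returns (rate in mm/day, intercept in mm)."""
    rate, intercept = np.polyfit(days, convergence_mm, 1)
    return rate, intercept

# Hypothetical convergence-point readings over one reporting year.
days = np.array([0.0, 90.0, 180.0, 270.0, 365.0])
conv = np.array([0.0, 7.0, 14.5, 21.5, 29.0])
rate, c0 = closure_rate(days, conv)
projected_5yr = c0 + rate * (5 * 365)   # simple linear extrapolation
```

In practice salt creep is nonlinear, so a linear fit is only a first-order screen; a projected closure approaching clearance limits would trigger the remedial-action guidance the program provides.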

  17. Geotechnical field data and analysis report, July 1990--June 1991. Volume 1

    SciTech Connect

    Not Available

    1992-03-01

    The Geotechnical Field Data and Analysis Report documents the geotechnical data from the underground excavations at the Waste Isolation Pilot Plant (WIPP) located near Carlsbad, New Mexico. The data are used to characterize conditions, confirm design assumptions, and understand and predict the performance of the underground excavations during operations. During the construction of the principal underground access and experimental areas, reporting was on a quarterly basis. Since 1987, reporting has been carried out annually because additional excavations, such as the waste storage panels, will take place gradually over an extended period. This report presents and analyzes data collected up to June 30, 1991. The two-volume format of the Geotechnical Field Data and Analysis Report was selected to meet the needs of several audiences. Volume I focuses on the geotechnical performance of the various underground facilities, including the shafts, shaft stations, access drifts, test rooms, and waste storage areas. The results of excavation effects investigations, stratigraphic mapping, and the occurrence of brine are also documented. It provides an evaluation of the geotechnical aspects of performance in the context of the relevant design criteria. The depth and breadth of the evaluation for the different underground facilities vary according to the types and quantities of data that are available and the complexity of the recorded geotechnical responses.

  18. Underground Test Area Subproject Phase I Data Analysis Task. Volume V - Transport Parameter and Source Term Data Documentation Package

    SciTech Connect

    1996-12-01

    Volume V of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment for the Nevada Test Site Underground Test Area Subproject contains the transport parameter and source term data. Because of the size and complexity of the model area, a considerable quantity of data was collected and analyzed in support of the modeling efforts. The data analysis task was consequently broken into eight subtasks, and descriptions of each subtask's activities are contained in one of the eight volumes that comprise the Phase I Data Analysis Documentation.

  19. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, WORKING DRAFT, JUNE 2012 1 Segmentation, Inference and Classification of

    E-print Network

    Ding, Yu

    Park is with the Department of Industrial and Manufacturing Engineering, Florida A&M and Florida … of Statistics, Texas A&M University, College Station, TX, 77843. J. Ji is with the Department of Electrical …

  20. Concept Area Four and Five Objectives, Hierarchy Charts, and Test Items. Economic Analysis Course. Segments 85-96.

    ERIC Educational Resources Information Center

    Sterling Inst., Washington, DC. Educational Technology Center.

    A multimedia course in economic analysis was developed and used in conjunction with the United States Naval Academy. (See ED 043 790 and ED 043 791 for final reports of the project evaluation and development model.) This report deals with concept areas four and five, which focus on international trade and enrichment areas. The behavioral…