Science.gov

Sample records for volume segmentation analysis

  1. Economic Analysis. Volume V. Course Segments 65-79.

    ERIC Educational Resources Information Center

    Sterling Inst., Washington, DC. Educational Technology Center.

    The fifth volume of the multimedia, individualized course in economic analysis produced for the United States Naval Academy covers segments 65-79 of the course. Included in the volume are discussions of monopoly markets, monopolistic competition, oligopoly markets, and the theory of factor demand and supply. Other segments of the course, the…

  2. Automated segmentation and dose-volume analysis with DICOMautomaton

    NASA Astrophysics Data System (ADS)

    Clark, H.; Thomas, S.; Moiseenko, V.; Lee, R.; Gill, B.; Duzenli, C.; Wu, J.

    2014-03-01

    Purpose: Exploration of historical data for regional organ dose sensitivity is limited by the effort needed to (sub-)segment large numbers of contours. A system has been developed which can rapidly perform autonomous contour sub-segmentation and generic dose-volume computations, substantially reducing the effort required for exploratory analyses. Methods: A contour-centric approach is taken which enables lossless, reversible segmentation and dramatically reduces computation time compared with voxel-centric approaches. Segmentation can be specified on a per-contour, per-organ, or per-patient basis, and can be performed along either an embedded plane or in terms of the contour's bounds (e.g., split organ into fractional-volume/dose pieces along any 3D unit vector). More complex segmentation techniques are available. Anonymized data from 60 head-and-neck cancer patients were used to compare dose-volume computations with Varian's Eclipse™ (Varian Medical Systems, Inc.). Results: Computed mean doses and dose-volume histograms agree strongly with those from Varian's Eclipse™. Contours which have been segmented can be injected back into patient data permanently and in a Digital Imaging and Communications in Medicine (DICOM)-conforming manner. Lossless segmentation persists across such injection, and remains fully reversible. Conclusions: DICOMautomaton allows researchers to rapidly, accurately, and autonomously segment large amounts of data into intricate structures suitable for analyses of regional organ dose sensitivity.
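The generic dose-volume computations described above can be illustrated with a minimal cumulative dose-volume histogram (DVH) sketch. This is not DICOMautomaton's implementation; the function name and toy dose values are invented, and equal voxel volumes are assumed:

```python
import numpy as np

def cumulative_dvh(doses, thresholds):
    """Cumulative DVH: fraction of structure volume receiving at least
    each threshold dose. Assumes all voxels have equal volume."""
    doses = np.asarray(doses, dtype=float)
    return np.array([(doses >= t).mean() for t in thresholds])

# toy example: per-voxel doses (Gy) for a 6-voxel structure
doses = [10.0, 20.0, 30.0, 40.0, 50.0, 60.0]
thresholds = [0.0, 25.0, 45.0]
dvh = cumulative_dvh(doses, thresholds)
mean_dose = float(np.mean(doses))
```

A real system would additionally weight voxels by their physical volume and restrict the dose grid to each (sub-)segmented contour.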

  3. Volume Segmentation and Ghost Particles

    NASA Astrophysics Data System (ADS)

    Ziskin, Isaac; Adrian, Ronald

    2011-11-01

    Volume Segmentation Tomographic PIV (VS-TPIV) is a type of tomographic PIV in which images of particles in a relatively thick volume are segmented into images on a set of much thinner volumes that may be approximated as planes, as in 2D planar PIV. The planes of images can be analysed by standard mono-PIV, and the volume of flow vectors can be recreated by assembling the planes of vectors. The interrogation process is similar to a Holographic PIV analysis, except that the planes of image data are extracted from two-dimensional camera images of the volume of particles instead of three-dimensional holographic images. Like the tomographic PIV method using the MART algorithm, Volume Segmentation requires at least two cameras and works best with three or four. Unlike the MART method, Volume Segmentation does not require reconstruction of individual particle images one pixel at a time and it does not require an iterative process, so it operates much faster. As in all tomographic reconstruction strategies, ambiguities known as ghost particles are produced in the segmentation process. The effect of these ghost particles on the PIV measurement is discussed. This research was supported by Contract 79419-001-09, Los Alamos National Laboratory.

  4. Direct volume estimation without segmentation

    NASA Astrophysics Data System (ADS)

    Zhen, X.; Wang, Z.; Islam, A.; Bhaduri, M.; Chan, I.; Li, S.

    2015-03-01

    Volume estimation plays an important role in clinical diagnosis. For example, cardiac ventricular volumes including the left ventricle (LV) and right ventricle (RV) are important clinical indicators of cardiac function. Accurate and automatic estimation of the ventricular volumes is essential to the assessment of cardiac function and the diagnosis of heart diseases. Conventional methods depend on an intermediate segmentation step which is performed either manually or automatically. However, manual segmentation is extremely time-consuming, subjective and highly non-reproducible, while automatic segmentation is still challenging, computationally expensive, and completely unsolved for the RV. Towards accurate and efficient direct volume estimation, our group has been researching learning-based methods that bypass segmentation by leveraging state-of-the-art machine learning techniques. Our direct estimation methods remove the intermediate segmentation step and can naturally deal with various volume estimation tasks. Moreover, they are flexible enough to be used for volume estimation of either the joint bi-ventricles (LV and RV) or the individual LV/RV. We comparatively study the performance of direct methods on cardiac ventricular volume estimation against segmentation-based methods. Experimental results show that direct estimation methods provide more accurate estimates of cardiac ventricular volumes than segmentation-based methods. This indicates that direct estimation methods not only provide a convenient and mature clinical tool for cardiac volume estimation but also enable diagnosis of cardiac diseases to be conducted in a more efficient and reliable way.
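A heavily simplified sketch of segmentation-free "direct" volume estimation follows, assuming (purely for illustration) that each scan is summarized by a feature vector and that volume is linear in those features; the authors' actual learning methods are more sophisticated, and all data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 scans, each summarized by 5 image features;
# true volumes (ml) are linear in the features plus a 50 ml offset.
n_scans, n_feats = 200, 5
X = rng.normal(size=(n_scans, n_feats))
true_w = np.array([3.0, -1.0, 2.0, 0.5, 1.5])
y = X @ true_w + 50.0 + rng.normal(scale=0.1, size=n_scans)

# Fit a linear direct-estimation model with an intercept column.
A = np.hstack([X, np.ones((n_scans, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ w
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

The point is structural: the regressor maps image features straight to a volume, so no intermediate contour or mask is ever produced.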

  5. Segmentation-based method incorporating fractional volume analysis for quantification of brain atrophy on magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Wang, Deming; Doddrell, David M.

    2001-07-01

    The partial volume effect is a major problem in brain tissue segmentation on digital images such as magnetic resonance (MR) images. In this paper, special attention has been paid to the partial volume effect when developing a method for quantifying brain atrophy. Specifically, the partial volume effect is minimized in the process of parameter estimation prior to segmentation by identifying and excluding those voxels with possible partial volume effect. A quantitative measure of the partial volume effect was also introduced through a model that calculates fractional volumes for voxels containing mixtures of two different tissues. For quantifying cerebrospinal fluid (CSF) volumes, fractional volumes are calculated for two classes of mixture: gray matter with CSF, and white matter with CSF. Tissue segmentation is carried out using 1D and 2D thresholding techniques after images are intensity-corrected. Threshold values are estimated using the minimum error method. Morphological processing and region identification analysis are used extensively in the algorithm. As an application, the method was employed to evaluate rates of brain atrophy based on serially acquired structural brain MR images. Consistent and accurate rates of brain atrophy have been obtained for patients with Alzheimer's disease as well as for elderly subjects undergoing the normal aging process.
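Under a linear mixing assumption, the two-tissue fractional-volume idea described above reduces to a one-line formula. The tissue intensity means below are invented for illustration and are not the paper's values:

```python
def fractional_volume(intensity, mu_a, mu_b):
    """Fraction of a voxel occupied by tissue A, assuming the voxel
    intensity is a linear mix of the pure-tissue means mu_a and mu_b."""
    f = (intensity - mu_b) / (mu_a - mu_b)
    return min(max(f, 0.0), 1.0)  # clamp to the physical range [0, 1]

# e.g. hypothetical CSF mean 30 and gray-matter mean 90;
# a voxel measuring 45 is mostly CSF
f_csf = fractional_volume(45.0, mu_a=30.0, mu_b=90.0)
```

Summing such fractions over all mixture voxels, plus the count of pure-tissue voxels, yields a sub-voxel-accurate volume estimate.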

  6. Volume analysis of treatment response of head and neck lesions using 3D level set segmentation

    NASA Astrophysics Data System (ADS)

    Hadjiiski, Lubomir; Street, Ethan; Sahiner, Berkman; Gujar, Sachin; Ibrahim, Mohannad; Chan, Heang-Ping; Mukherji, Suresh K.

    2008-03-01

    A computerized system for segmenting lesions in head and neck CT scans was developed to assist radiologists in estimation of the response to treatment of malignant lesions. The system performs 3D segmentations based on a level set model and uses as input an approximate bounding box for the lesion of interest. In this preliminary study, CT scans from a pre-treatment exam and a post one-cycle chemotherapy exam of 13 patients containing head and neck neoplasms were used. A radiologist marked 35 temporal pairs of lesions. 13 pairs were primary site cancers and 22 pairs were metastatic lymph nodes. For all lesions, a radiologist outlined a contour on the best slice on both the pre- and post-treatment scans. For the 13 primary lesion pairs, full 3D contours were also extracted by a radiologist. The average pre- and post-treatment areas on the best slices for all lesions were 4.5 and 2.1 cm², respectively. For the 13 primary site pairs the average pre- and post-treatment primary lesion volumes were 15.4 and 6.7 cm³, respectively. The correlation between the automatic and manual estimates for the pre-to-post-treatment change in area for all 35 pairs was r=0.97, while the correlation for the percent change in area was r=0.80. The correlation for the change in volume for the 13 primary site pairs was r=0.89, while the correlation for the percent change in volume was r=0.79. The average signed percent error between the automatic and manual areas for all 70 lesions was 11.0+/-20.6%. The average signed percent error between the automatic and manual volumes for all 26 primary lesions was 37.8+/-42.1%. The preliminary results indicate that the automated segmentation system can reliably estimate tumor size change in response to treatment relative to the radiologist's hand segmentation.
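The correlation analysis of size changes reported above can be reproduced on toy data in a few lines of NumPy. All measurements below are hypothetical, not the study's data:

```python
import numpy as np

# hypothetical paired areas (cm^2): automatic system vs. manual outlines
auto_pre  = np.array([4.0, 5.0, 3.5, 6.0])
auto_post = np.array([2.0, 2.5, 1.5, 3.0])
man_pre   = np.array([4.2, 4.8, 3.6, 6.1])
man_post  = np.array([2.1, 2.4, 1.6, 2.9])

# pre-to-post change and percent change, per lesion pair
auto_change = auto_pre - auto_post
man_change  = man_pre - man_post
pct_change_auto = 100.0 * auto_change / auto_pre

# Pearson correlation between automatic and manual change estimates
r = float(np.corrcoef(auto_change, man_change)[0, 1])
```

The study reports exactly this kind of r value, computed over its 35 lesion pairs rather than these four invented ones.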

  7. NSEG, a segmented mission analysis program for low and high speed aircraft. Volume 1: Theoretical development

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Rozendaal, H. L.

    1977-01-01

    A rapid mission analysis code based on the use of approximate flight path equations of motion is presented. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed characteristics were specified in tabular form. The code also contains extensive flight envelope performance mapping capabilities. Approximate take off and landing analyses were performed. At high speeds, centrifugal lift effects were accounted for. Extensive turbojet and ramjet engine scaling procedures were incorporated in the code.

  8. NSEG: A segmented mission analysis program for low and high speed aircraft. Volume 3: Demonstration problems

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Rozendaal, H. L.

    1977-01-01

    Program NSEG is a rapid mission analysis code based on the use of approximate flight path equations of motion. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics are specified in tabular form. In addition to its mission performance calculation capabilities, the code also contains extensive flight envelope performance mapping capabilities. For example, rate-of-climb, turn rates, and energy maneuverability parameter values may be mapped in the Mach-altitude plane. Approximate take off and landing analyses are also performed. At high speeds, centrifugal lift effects are accounted for. Extensive turbojet and ramjet engine scaling procedures are incorporated in the code.

  9. Volume rendering for interactive 3D segmentation

    NASA Astrophysics Data System (ADS)

    Toennies, Klaus D.; Derz, Claus

    1997-05-01

    Combined emission/absorption and reflection/transmission volume rendering is able to display poorly segmented structures from 3D medical image sequences. Visual cues such as shading and color let the user distinguish structures in the 3D display that are incompletely extracted by threshold segmentation. In order to be truly helpful, analyzed information needs to be quantified and transferred back into the data. We extend our previously presented scheme for such display by establishing a communication between visual analysis and the display process. The main tool is a selective 3D picking device. To be useful on a rather rough segmentation, the device itself and the display offer facilities for object selection. Selective intersection planes let the user discard information prior to choosing a tissue of interest. Subsequently, picking is carried out on the 2D display by casting a ray into the volume. The picking device is made pre-selective using already existing segmentation information. Thus, objects can be picked that are visible behind semi-transparent surfaces of other structures. Information generated by a later connected-component analysis can then be integrated into the data. Data examination is continued on an improved display, letting the user actively participate in the analysis process. Results of this display-and-interaction scheme proved to be very effective. The viewer's ability to extract relevant information from a complex scene is combined with the computer's ability to quantify this information. The approach introduces 3D computer graphics methods into user-guided image analysis, creating an analysis-synthesis cycle for interactive 3D segmentation.

  10. Inter-sport variability of muscle volume distribution identified by segmental bioelectrical impedance analysis in four ball sports

    PubMed Central

    Yamada, Yosuke; Masuo, Yoshihisa; Nakamura, Eitaro; Oda, Shingo

    2013-01-01

    The aim of this study was to evaluate and quantify differences in muscle distribution in athletes of various ball sports using segmental bioelectrical impedance analysis (SBIA). Participants were 115 male collegiate athletes from four ball sports (baseball, soccer, tennis, and lacrosse). Percent body fat (%BF) and lean body mass were measured, and SBIA was used to measure segmental muscle volume (MV) in bilateral upper arms, forearms, thighs, and lower legs. We calculated the MV ratios of dominant to nondominant, proximal to distal, and upper to lower limbs. The measurements consisted of a total of 31 variables. Cluster and factor analyses were applied to identify redundant variables. The muscle distribution was significantly different among groups, but the %BF was not. The classification procedures of the discriminant analysis could correctly distinguish 84.3% of the athletes. These results suggest that collegiate ball game athletes have adapted their physique to their sport movements very well, and the SBIA, which is an affordable, noninvasive, easy-to-operate, and fast alternative method in the field, can distinguish ball game athletes according to their specific muscle distribution within a 5-minute measurement. The SBIA could be a useful, affordable, and fast tool for identifying talents for specific sports. PMID:24379714

  11. Uncertainty-aware guided volume segmentation.

    PubMed

    Prassni, Jörg-Stefan; Ropinski, Timo; Hinrichs, Klaus

    2010-01-01

    Although direct volume rendering is established as a powerful tool for the visualization of volumetric data, efficient and reliable feature detection is still an open topic. Usually, a tradeoff between fast but imprecise classification schemes and accurate but time-consuming segmentation techniques has to be made. Furthermore, the issue of uncertainty introduced with the feature detection process is completely neglected by the majority of existing approaches. In this paper we propose a guided probabilistic volume segmentation approach that focuses on the minimization of uncertainty. In an iterative process, our system continuously assesses the uncertainty of a random walker-based segmentation in order to detect regions with high ambiguity, to which the user's attention is directed to support the correction of potential misclassifications. This reduces the risk of critical segmentation errors and ensures that information about the segmentation's reliability is conveyed to the user in a dependable way. In order to improve the efficiency of the segmentation process, our technique not only takes into account the volume data to be segmented, but also enables the user to incorporate classification information. An interactive workflow has been achieved by implementing the presented system on the GPU using the OpenCL API. Our results obtained for several medical data sets of different modalities, including brain MRI and abdominal CT, demonstrate the reliability and efficiency of our approach. PMID:20975176
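One common way to turn per-voxel label probabilities (such as those produced by a random walker) into an uncertainty map is normalized Shannon entropy. This is a generic sketch, not the paper's exact measure:

```python
import numpy as np

def uncertainty_map(probs, eps=1e-12):
    """Normalized Shannon entropy of per-voxel label probabilities.
    probs has shape (n_labels, ...); values near 1 mark ambiguous voxels
    where several labels are nearly equally likely."""
    p = np.clip(probs, eps, 1.0)
    h = -(p * np.log(p)).sum(axis=0)
    return h / np.log(probs.shape[0])  # scale entropy to [0, 1]

# toy 2-label probabilities for 3 voxels: certain, ambiguous, certain
probs = np.array([[0.99, 0.5, 0.01],
                  [0.01, 0.5, 0.99]])
u = uncertainty_map(probs)
```

A guided workflow would then direct the user's attention to voxels (or regions) where `u` is largest.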

  12. Volume rendering of segmented image objects.

    PubMed

    Bullitt, Elizabeth; Aylward, Stephen R

    2002-08-01

    This paper describes a new method of combining ray-casting with segmentation. Volume rendering is performed at interactive rates on personal computers, and visualizations include both "superficial" ray-casting through a shell at each object's surface and "deep" ray-casting through the confines of each object. A feature of the approach is the option to smoothly and interactively dilate segmentation boundaries along all axes. This ability, when combined with selective "turning off" of extraneous image objects, can help clinicians detect and evaluate segmentation errors that may affect surgical planning. We describe both a method optimized for displaying tubular objects and a more general method applicable to objects of arbitrary geometry. In both cases, select three-dimensional points are projected onto a modified z buffer that records additional information about the projected objects. A subsequent step selectively volume renders only through the object volumes indicated by the z buffer. We describe how our approach differs from other reported methods for combining segmentation with ray-casting, and illustrate how our method can be useful in helping to detect segmentation errors. PMID:12472272

  13. NSEG: A segmented mission analysis program for low and high speed aircraft. Volume 2: Program users manual

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Rozendaal, H. L.

    1977-01-01

    A rapid mission analysis code based on the use of approximate flight path equations of motion is described. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics are specified in tabular form. In addition to its mission performance calculation capabilities, the code also contains extensive flight envelope performance mapping capabilities. Approximate take off and landing analyses can be performed. At high speeds, centrifugal lift effects are taken into account. Extensive turbojet and ramjet engine scaling procedures are incorporated in the code.

  14. Automated White Matter Total Lesion Volume Segmentation in Diabetes

    PubMed Central

    Maldjian, J.A.; Whitlow, C.T.; Saha, B.N.; Kota, G.; Vandergriff, C.; Davenport, E.M.; Divers, J.; Freedman, B.I.; Bowden, D.W.

    2014-01-01

    Background and Purpose WM lesion segmentation is often performed with the use of subjective rating scales because manual methods are laborious and tedious; however, automated methods are now available. We compared the performance of total lesion volume grading computed by use of an automated WM lesion segmentation algorithm with that of subjective rating scales and expert manual segmentation in a cohort of subjects with type 2 diabetes. Materials and Methods Structural T1 and FLAIR MR imaging data from 50 subjects with diabetes (age, 67.7 ± 7.2 years) and 50 nondiabetic sibling pairs (age, 67.5 ± 9.4 years) were evaluated in an institutional review board–approved study. WM lesion segmentation maps and total lesion volume were generated for each subject by means of the Statistical Parametric Mapping (SPM8) Lesion Segmentation Toolbox. Subjective WM lesion grade was determined by means of a 0–9 rating scale by 2 readers. Ground-truth total lesion volume was determined by means of manual segmentation by experienced readers. Correlation analyses compared manual segmentation total lesion volume with automated and subjective evaluation methods. Results Correlation between average lesion segmentation and ground-truth total lesion volume was 0.84. Maximum correlation between the Lesion Segmentation Toolbox and ground-truth total lesion volume (ρ = 0.87) occurred at the segmentation threshold of k = 0.25, whereas maximum correlation between subjective lesion segmentation and the Lesion Segmentation Toolbox (ρ = 0.73) occurred at k = 0.15. The difference between the 2 correlation estimates with ground-truth was not statistically significant. The lower segmentation threshold (0.15 versus 0.25) suggests that subjective raters overestimate WM lesion burden. Conclusions We validate the Lesion Segmentation Toolbox for determining total lesion volume in diabetes-enriched populations and compare it with a common subjective WM lesion rating scale. The Lesion Segmentation…

  15. Bioimpedance Measurement of Segmental Fluid Volumes and Hemodynamics

    NASA Technical Reports Server (NTRS)

    Montgomery, Leslie D.; Wu, Yi-Chang; Ku, Yu-Tsuan E.; Gerth, Wayne A.; DeVincenzi, D. (Technical Monitor)

    2000-01-01

    Bioimpedance has become a useful tool to measure changes in body fluid compartment volumes. An Electrical Impedance Spectroscopic (EIS) system is described that extends the capabilities of conventional fixed frequency impedance plethysmographic (IPG) methods to allow examination of the redistribution of fluids between the intracellular and extracellular compartments of body segments. The combination of EIS and IPG techniques was evaluated in the human calf, thigh, and torso segments of eight healthy men during 90 minutes of six degree head-down tilt (HDT). After 90 minutes HDT the calf and thigh segments significantly (P < 0.05) lost conductive volume (eight and four percent, respectively) while the torso significantly (P < 0.05) gained volume (approximately three percent). Hemodynamic responses calculated from pulsatile IPG data also showed a segmental pattern consistent with vascular fluid loss from the lower extremities and vascular engorgement in the torso. Lumped-parameter equivalent circuit analyses of EIS data for the calf and thigh indicated that the overall volume decreases in these segments arose from reduced extracellular volume that was not completely balanced by increased intracellular volume. The combined use of IPG and EIS techniques enables noninvasive tracking of multi-segment volumetric and hemodynamic responses to environmental and physiological stresses.
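A lumped-parameter equivalent circuit of the kind mentioned above is commonly modeled as an extracellular resistance in parallel with an intracellular resistance in series with a membrane capacitance. The component values below are invented for illustration and do not come from the study:

```python
import numpy as np

def segment_impedance(freq_hz, r_extra, r_intra, c_mem):
    """Complex impedance of a simple two-compartment body-segment model:
    extracellular resistance r_extra in parallel with the series branch
    (intracellular resistance r_intra + membrane capacitance c_mem)."""
    w = 2.0 * np.pi * freq_hz
    z_intra = r_intra + 1.0 / (1j * w * c_mem)
    return (r_extra * z_intra) / (r_extra + z_intra)

# illustrative values: ohms, ohms, farads
r_e, r_i, c_m = 60.0, 30.0, 3e-9
z_low  = segment_impedance(1e3, r_e, r_i, c_m)  # capacitor blocks: ~r_e
z_high = segment_impedance(1e9, r_e, r_i, c_m)  # capacitor shorts: r_e || r_i
```

Fitting this model to impedance measured across a frequency sweep is what lets EIS separate extracellular from intracellular fluid volume, since the low- and high-frequency limits probe different current paths.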

  16. A Ray Casting Accelerated Method of Segmented Regular Volume Data

    NASA Astrophysics Data System (ADS)

    Zhu, Min; Guo, Ming; Wang, Liting; Dai, Yujin

    Volume data fields constructed from industrial CT (ICT) images of large-scale military products are very large, and empty voxels occupy only a small fraction of them, so existing ray casting acceleration methods have little effect. In 3D-visualization fault diagnosis of such products, only part of the information in the volume data field helps the inspector locate internal faults, and computational cost rises greatly if the entire volume is reconstructed in 3D. A new ray casting acceleration method based on segmented volume data is therefore put forward. A segmented-information volume data field is built from the segmentation result. Following the construction approach of existing hierarchical volume data structures, a hierarchical volume data structure based on the segmented information is constructed. Using this structure, the parts selected by the user are identified automatically during ray casting; the remaining parts are treated as empty voxels, so the sampling step is adjusted dynamically, the number of sampling points is decreased, and volume rendering speed is improved. Experimental results demonstrate the high efficiency and good display quality of the proposed method.
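The dynamic sampling-step idea can be sketched in one dimension: march along a ray of segment labels, sampling densely only inside user-selected parts and striding quickly through everything else. The labels and step sizes below are illustrative:

```python
def cast_ray(labels, selected, fine_step=1, coarse_step=4):
    """March along a 1-D ray of segment labels, sampling densely only
    inside user-selected parts and treating the rest as empty space."""
    hits, pos = [], 0
    while pos < len(labels):
        if labels[pos] in selected:
            hits.append(pos)        # dense sampling inside a selected part
            pos += fine_step
        else:
            pos += coarse_step      # big stride through "empty" parts
    return hits

# label 2 marks the part the inspector wants to examine
ray = [0, 0, 0, 0, 2, 2, 2, 2, 1, 1, 1, 1]
samples = cast_ray(ray, selected={2})
```

A real implementation would do this per ray in 3D, consulting the hierarchical segmented-information structure instead of a flat label list, but the skip logic is the same.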

  17. Uterine fibroid segmentation and volume measurement on MRI

    NASA Astrophysics Data System (ADS)

    Yao, Jianhua; Chen, David; Lu, Wenzhu; Premkumar, Ahalya

    2006-03-01

    Uterine leiomyomas are the most common pelvic tumors in females. The efficacy of medical treatment is gauged by shrinkage of the size of these tumors. In this paper, we present a method to robustly segment the fibroids on MRI and accurately measure the 3D volume. Our method is based on a combination of fast marching level set and Laplacian level set. With a seed point placed inside the fibroid region, a fast marching level set is first employed to obtain a rough segmentation, followed by a Laplacian level set to refine the segmentation. We devised a scheme to automatically determine the parameters for the level set function and the sigmoid function based on pixel statistics around the seed point. The segmentation is conducted on three concurrent views (axial, coronal and sagittal), and a combined volume measurement is computed to obtain a more reliable measurement. We carried out extensive tests on 13 patients, 25 MRI studies and 133 fibroids. The segmentation result was validated against manual segmentation defined by experts. The average segmentation sensitivity (true positive fraction) among all fibroids was 84.6%, and the average segmentation specificity (1-false positive fraction) was 84.3%.
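The sensitivity and specificity figures quoted above are simple overlap fractions between binary masks. A generic sketch on toy masks (not the study's data):

```python
import numpy as np

def overlap_metrics(auto_mask, manual_mask):
    """True-positive fraction (sensitivity) and 1 - false-positive
    fraction for an automatic binary segmentation vs. a manual one."""
    auto = np.asarray(auto_mask, bool)
    manual = np.asarray(manual_mask, bool)
    tp = np.logical_and(auto, manual).sum()
    sensitivity = tp / manual.sum()                      # reference covered
    fp_fraction = np.logical_and(auto, ~manual).sum() / auto.sum()
    return float(sensitivity), float(1.0 - fp_fraction)

manual = np.array([0, 1, 1, 1, 1, 0])   # expert fibroid mask (toy)
auto   = np.array([0, 0, 1, 1, 1, 1])   # automatic mask (toy)
sens, spec = overlap_metrics(auto, manual)
```

In the paper these fractions are computed per fibroid in 3D and then averaged over all 133 lesions.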

  18. Quantifying brain tissue volume in multiple sclerosis with automated lesion segmentation and filling

    PubMed Central

    Valverde, Sergi; Oliver, Arnau; Roura, Eloy; Pareto, Deborah; Vilanova, Joan C.; Ramió-Torrentà, Lluís; Sastre-Garriga, Jaume; Montalban, Xavier; Rovira, Àlex; Lladó, Xavier

    2015-01-01

    Lesion filling has been successfully applied to reduce the effect of hypo-intense T1-w multiple sclerosis (MS) lesions on automatic brain tissue segmentation. However, a study of fully automated pipelines incorporating lesion segmentation and lesion filling on tissue volume analysis has not yet been performed. Here, we analyzed the percentage error introduced by automating the lesion segmentation and filling processes in the tissue segmentation of 70 clinically isolated syndrome patient images. First, images were processed using the LST and SLS toolkits with different pipeline combinations that differed in either automated or manual lesion segmentation, and lesion filling or masking out lesions. Then, images processed following each of the pipelines were segmented into gray matter (GM) and white matter (WM) using SPM8, and compared with the same images where expert lesion annotations were filled before segmentation. Our results showed that fully automated lesion segmentation and filling pipelines significantly reduced the percentage error in GM and WM volume on images of MS patients, and performed similarly to the images where expert lesion annotations were masked before segmentation. In all the pipelines, the amount of misclassified lesion voxels was the main cause of the observed error in GM and WM volume. However, the percentage error was significantly lower when automatically estimated lesions were filled rather than masked before segmentation. These results are relevant and suggest that the LST and SLS toolboxes allow accurate brain tissue volume measurements without any manual intervention, which can be convenient not only in terms of time and economic costs, but also to avoid the inherent intra/inter-rater variability between manual annotations. PMID:26740917

  19. Fast global interactive volume segmentation with regional supervoxel descriptors

    NASA Astrophysics Data System (ADS)

    Luengo, Imanol; Basham, Mark; French, Andrew P.

    2016-03-01

    In this paper we propose a novel approach towards fast multi-class volume segmentation that exploits supervoxels in order to reduce complexity, time and memory requirements. Current methods for biomedical image segmentation typically require either complex mathematical models with slow convergence, or expensive-to-calculate image features, which makes them infeasible for large volumes with many objects (tens to hundreds) of different classes, as is typical in modern medical and biological datasets. Recently, graphical models such as Markov Random Fields (MRF) or Conditional Random Fields (CRF) have had a huge impact in different computer vision areas (e.g. image parsing, object detection, object recognition) as they provide global regularization for multiclass problems over an energy minimization framework. These models have yet to find impact in biomedical imaging because of complexities in training and slow inference in 3D images with their very large numbers of voxels. Here, we define an interactive segmentation approach over a supervoxel space by first defining novel, robust and fast regional descriptors for supervoxels. Then, a hierarchical segmentation approach is adopted by training Contextual Extremely Random Forests in a user-defined label hierarchy, where the classification output of the previous layer is used as additional features to train a new classifier to refine more detailed label information. This hierarchical model yields final class likelihoods for supervoxels which are finally refined by an MRF model for 3D segmentation. Results demonstrate the effectiveness on a challenging cryo-soft X-ray tomography dataset by segmenting cell areas with only a few user scribbles as the input for our algorithm. Further results demonstrate the effectiveness of our method in fully extracting different organelles from the cell volume with only a few more seconds of user interaction.
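Regional supervoxel descriptors of the simplest kind (per-supervoxel intensity mean and variance) can be computed in a single pass with np.bincount; this is a generic sketch, not the paper's descriptors:

```python
import numpy as np

def supervoxel_descriptors(intensity, labels):
    """Per-supervoxel mean and variance of intensity, computed in one
    pass over the volume. Supervoxel labels must be 0..K-1 integers."""
    flat_labels = labels.ravel()
    flat_int = intensity.ravel()
    counts = np.bincount(flat_labels)
    sums   = np.bincount(flat_labels, weights=flat_int)
    sqsums = np.bincount(flat_labels, weights=flat_int ** 2)
    means = sums / counts
    variances = sqsums / counts - means ** 2  # E[x^2] - E[x]^2
    return means, variances

# toy "volume": 5 voxels assigned to 2 supervoxels
labels = np.array([0, 0, 1, 1, 1])
intensity = np.array([2.0, 4.0, 1.0, 1.0, 4.0])
means, variances = supervoxel_descriptors(intensity, labels)
```

Working per supervoxel rather than per voxel is what makes downstream classification and MRF refinement tractable: thousands of descriptor vectors replace millions of raw voxels.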

  20. Performance benchmarking of liver CT image segmentation and volume estimation

    NASA Astrophysics Data System (ADS)

    Xiong, Wei; Zhou, Jiayin; Tian, Qi; Liu, Jimmy J.; Qi, Yingyi; Leow, Wee Kheng; Han, Thazin; Wang, Shih-chang

    2008-03-01

    In recent years more and more computer aided diagnosis (CAD) systems are being used routinely in hospitals. Image-based knowledge discovery plays important roles in many CAD applications, which have great potential to be integrated into next-generation picture archiving and communication systems (PACS). Robust medical image segmentation tools are essential for such discovery in many CAD applications. In this paper we present a platform with the necessary tools for performance benchmarking of algorithms for liver segmentation and volume estimation used in liver transplantation planning. It includes an abdominal computed tomography (CT) image database (DB), annotation tools, a ground truth DB, and performance measure protocols. The proposed architecture is generic and can be used for other organs and imaging modalities. In the current study, approximately 70 sets of abdominal CT images with normal livers have been collected and a user-friendly annotation tool has been developed to generate ground truth data for a variety of organs, including 2D contours of the liver, the two kidneys, the spleen, the aorta and the spinal canal. Abdominal organ segmentation algorithms using 2D atlases and 3D probabilistic atlases can be evaluated on the platform. Preliminary benchmark results from the liver segmentation algorithms which make use of statistical knowledge extracted from the abdominal CT image DB are also reported. We aim to increase the number of CT scans to about 300 sets in the near future and plan to make the resulting DBs available to the medical imaging research community for performance benchmarking of liver segmentation algorithms.

  1. Tooth segmentation system with intelligent editing for cephalometric analysis

    NASA Astrophysics Data System (ADS)

    Chen, Shoupu

    2015-03-01

    Cephalometric analysis is the study of the dental and skeletal relationship in the head, and it is used as an assessment and planning tool for improved orthodontic treatment of a patient. Conventional cephalometric analysis identifies bony and soft-tissue landmarks in 2D cephalometric radiographs, in order to diagnose facial features and abnormalities prior to treatment, or to evaluate the progress of treatment. Recent studies in orthodontics indicate that there are persistent inaccuracies and inconsistencies in the results provided using conventional 2D cephalometric analysis. Obviously, plane geometry is inappropriate for analyzing anatomical volumes and their growth; only a 3D analysis is able to analyze the three-dimensional, anatomical maxillofacial complex, which requires computing inertia systems for individual or groups of digitally segmented teeth from an image volume of a patient's head. For the study of 3D cephalometric analysis, the current paper proposes a system for semi-automatically segmenting teeth from a cone beam computed tomography (CBCT) volume with two distinct features, including an intelligent user-input interface for automatic background seed generation, and a graphics processing unit (GPU) acceleration mechanism for three-dimensional GrowCut volume segmentation. Results show a satisfying average DICE score of 0.92, with the use of the proposed tooth segmentation system, by 15 novice users who segmented a randomly sampled tooth set. The average GrowCut processing time is around one second per tooth, excluding user interaction time.
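The DICE score used above to evaluate the tooth segmentations is a standard overlap measure between two binary masks; the masks below are toy examples:

```python
import numpy as np

def dice(a, b):
    """DICE overlap between two binary masks: 2|A∩B| / (|A| + |B|).
    1.0 means perfect agreement, 0.0 means no overlap."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

user_seg  = np.array([1, 1, 1, 0, 0])  # toy user segmentation
reference = np.array([0, 1, 1, 1, 0])  # toy reference segmentation
score = float(dice(user_seg, reference))
```

In the study this is computed over 3D voxel masks per tooth, and 0.92 is the average across users and sampled teeth.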

  2. Synthesis of intensity gradient and texture information for efficient three-dimensional segmentation of medical volumes

    PubMed Central

    Vantaram, Sreenath Rao; Saber, Eli; Dianat, Sohail A.; Hu, Yang

    2015-01-01

    We propose a framework that efficiently employs intensity, gradient, and textural features for three-dimensional (3-D) segmentation of medical (MRI/CT) volumes. Our methodology commences by determining the magnitude of intensity variations across the input volume using a 3-D gradient detection scheme. The resultant gradient volume is utilized in a dynamic volume growing/formation process that is initiated in voxel locations with small gradient magnitudes and is concluded at sites with large gradient magnitudes, yielding a map comprising an initial set of partitions (or subvolumes). This partition map is combined with an entropy-based texture descriptor along with intensity and gradient attributes in a multivariate analysis-based volume merging procedure that fuses subvolumes with similar characteristics to yield a final/refined segmentation output. Additionally, a semiautomated version of the aforementioned algorithm that allows a user to interactively segment a desired subvolume of interest as opposed to the entire volume is also discussed. Our approach was tested on several MRI and CT datasets and the results show favorable performance in comparison to the state-of-the-art ITK-SNAP technique. PMID:26158098
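    The first stage described above can be sketched as follows, assuming simple central differences via numpy.gradient in place of the authors' 3-D gradient detection scheme; low-gradient voxels then serve as initiation sites for volume growing:

```python
import numpy as np

def gradient_magnitude_3d(vol):
    """3-D gradient magnitude from per-axis central differences."""
    gz, gy, gx = np.gradient(vol.astype(float))
    return np.sqrt(gx**2 + gy**2 + gz**2)

# Synthetic volume: two constant regions separated by a step edge
vol = np.zeros((8, 8, 8)); vol[:, :, 4:] = 100.0
gmag = gradient_magnitude_3d(vol)

# Volume growing is initiated at voxels with small gradient magnitude
seeds = gmag < 1e-6
print(seeds[:, :, 0].all(), seeds[:, :, 4].all())
```

    On the step volume, interior voxels of each region are flagged as seeds while the two planes adjacent to the edge are excluded.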

  3. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging

    NASA Astrophysics Data System (ADS)

    Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart

    2015-02-01

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours, to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, the multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases, providing auto-segmented GTVs and motion-encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of the auto-segmentation employed graph cuts for 3D shape reconstruction and point-set registration-based analysis, yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51 ± 1.92)% to (97.27 ± 0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracy with reduced variance for all patients, ranging from 90.87% to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. The accuracy of the auto-segmentation was shown to be largely independent of the choice of initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than current clinical practice and requires further development.
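    The STAPLE estimator used above can be sketched, in a simplified binary-label form, as an EM loop over per-rater sensitivity and specificity. This is a didactic reduction of the published algorithm; the variable names and the toy "physician" contours are ours:

```python
import numpy as np

def staple(decisions, n_iter=50, prior=None):
    """Simplified binary STAPLE: decisions has shape (n_raters, n_voxels)."""
    D = decisions.astype(float)
    r, n = D.shape
    f = D.mean() if prior is None else prior          # prior P(true = 1)
    p = np.full(r, 0.9)                               # per-rater sensitivity
    q = np.full(r, 0.9)                               # per-rater specificity
    for _ in range(n_iter):
        # E-step: posterior probability that each voxel is truly foreground
        a = f * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - f) * np.prod(np.where(D == 1, 1 - q[:, None], q[:, None]), axis=0)
        W = a / (a + b + 1e-12)
        # M-step: re-estimate rater performance from the posterior
        p = (W * D).sum(axis=1) / (W.sum() + 1e-12)
        q = ((1 - W) * (1 - D)).sum(axis=1) / ((1 - W).sum() + 1e-12)
    return W, p, q

# Three "physicians" contour a 1D profile; rater 2 over-segments
truth = np.array([0, 0, 1, 1, 1, 1, 0, 0])
raters = np.array([truth,
                   [0, 0, 0, 1, 1, 1, 0, 0],
                   [0, 1, 1, 1, 1, 1, 1, 0]])
W, p, q = staple(raters)
print((W > 0.5).astype(int))
```

    Thresholding the posterior W at 0.5 yields the consensus ground-truth estimate, while p and q quantify each rater's reliability.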

  4. Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET

    NASA Astrophysics Data System (ADS)

    Hatt, M.; Lamare, F.; Boussion, N.; Turzo, A.; Collet, C.; Salzenstein, F.; Roux, C.; Jarritt, P.; Carson, K.; Cheze-LeRest, C.; Visvikis, D.

    2007-07-01

    Accurate volume of interest (VOI) estimation in PET is crucial in different oncology applications such as response to therapy evaluation and radiotherapy treatment planning. The objective of our study was to compare the performance of the proposed algorithm for automatic lesion volume delineation, namely fuzzy hidden Markov chains (FHMC), with that of threshold-based techniques, the current state of the art in clinical practice. Like the classical hidden Markov chain (HMC) algorithm, FHMC takes into account noise, voxel intensity and spatial correlation in order to classify a voxel as background or functional VOI. However, the novelty of the fuzzy model consists in the inclusion of an estimate of imprecision, which should subsequently lead to better modelling of the 'fuzzy' nature of the object-of-interest boundaries in emission tomography data. The performance of the algorithms was assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1) and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm³ and 64 mm³). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold-based methodology for functional volume determination and activity concentration recovery for a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore, the differences between the classification and volume estimation errors evaluated were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithms was less susceptible to image noise levels than that of the threshold-based techniques. The analysis of both
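    The threshold-based techniques FHMC is compared against typically delineate the VOI at a fixed fraction of the maximum uptake; a value around 40% of SUVmax is common, though the exact fraction below is our assumption, not the paper's. A minimal sketch:

```python
import numpy as np

def threshold_voi(suv, frac=0.42):
    """Delineate a VOI at a fixed fraction of the maximum uptake value."""
    return suv >= frac * suv.max()

# Synthetic PET slice: background 1.0, lesion peaking at 10.0
suv = np.ones((9, 9))
yy, xx = np.mgrid[0:9, 0:9]
r2 = (yy - 4) ** 2 + (xx - 4) ** 2
suv += 9.0 * np.exp(-r2 / 4.0)
voi = threshold_voi(suv)
print(int(voi.sum()))
```

    Such fixed-threshold delineation is simple but sensitive to contrast ratio and noise, which is exactly the weakness the fuzzy model targets.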

  5. Semiautomatic Regional Segmentation to Measure Orbital Fat Volumes in Thyroid-Associated Ophthalmopathy

    PubMed Central

    Comerci, M.; Elefante, A.; Strianese, D.; Senese, R.; Bonavolontà, P.; Alfano, B.; Bonavolontà, G.; Brunetti, A.

    2013-01-01

    Summary This study was designed to validate a novel semi-automated segmentation method to measure regional intra-orbital fat tissue volume in Graves' ophthalmopathy. Twenty-four orbits from 12 patients with Graves' ophthalmopathy, 24 orbits from 12 controls, ten orbits from five MRI study simulations and two orbits from a digital model were used. Following manual region of interest definition of the orbital volumes performed by two operators with different levels of expertise, an automated procedure calculated intra-orbital fat tissue volumes (global and regional, with automated definition of four quadrants). In patients with Graves' disease, clinical activity score and degree of exophthalmos were measured and correlated with intra-orbital fat volumes. Operator performance was evaluated and statistical analysis of the measurements was performed. Accurate intra-orbital fat volume measurements were obtained with coefficients of variation below 5%. The mean operator difference in total fat volume measurements was 0.56%. Patients had significantly higher intra-orbital fat volumes than controls (p<0.001 using Student's t test). Fat volumes and clinical score were significantly correlated (p<0.001). The semi-automated method described here can provide accurate, reproducible intra-orbital fat measurements with low inter-operator variation and good correlation with clinical data. PMID:24007725

  6. Relaxed image foresting transforms for interactive volume image segmentation

    NASA Astrophysics Data System (ADS)

    Malmberg, Filip; Nyström, Ingela; Mehnert, Andrew; Engstrom, Craig; Bengtsson, Ewert

    2010-03-01

    The Image Foresting Transform (IFT) is a framework for image partitioning, commonly used for interactive segmentation. Given an image where a subset of the image elements (seed-points) have been assigned correct segmentation labels, the IFT completes the labeling by computing minimal cost paths from all image elements to the seed-points. Each image element is then given the same label as the closest seed-point. Here, we propose the relaxed IFT (RIFT). This modified version of the IFT features an additional parameter to control the smoothness of the segmentation boundary. The RIFT yields more intuitive segmentation results in the presence of noise and weak edges, while maintaining a low computational complexity. We show an application of the method to the refinement of manual segmentations of a thoracolumbar muscle in magnetic resonance images. The performed study shows that the refined segmentations are qualitatively similar to the manual segmentations, while intra-user variations are reduced by more than 50%.
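    The seed-propagation step of the IFT can be sketched with Dijkstra's algorithm on a 4-connected image graph, taking the path cost as the sum of absolute intensity differences along the path (one common choice; the cost function and names here are ours, and the RIFT relaxation is omitted):

```python
import heapq
import numpy as np

def ift_label(img, seeds):
    """Image Foresting Transform: propagate seed labels along minimum-cost paths.
    seeds: dict mapping (row, col) -> label."""
    h, w = img.shape
    cost = np.full((h, w), np.inf)
    label = np.zeros((h, w), dtype=int)
    heap = []
    for (r, c), lab in seeds.items():
        cost[r, c] = 0.0
        label[r, c] = lab
        heapq.heappush(heap, (0.0, r, c, lab))
    while heap:
        d, r, c, lab = heapq.heappop(heap)
        if d > cost[r, c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + abs(float(img[nr, nc]) - float(img[r, c]))
                if nd < cost[nr, nc]:
                    cost[nr, nc] = nd
                    label[nr, nc] = lab
                    heapq.heappush(heap, (nd, nr, nc, lab))
    return label

# Two regions (values 0 and 100) with one seed in each
img = np.zeros((5, 6)); img[:, 3:] = 100
labels = ift_label(img, {(2, 0): 1, (2, 5): 2})
print(labels[0, 0], labels[0, 5])
```

    Each pixel receives the label of the seed reachable at minimal cost, so the boundary settles on the high-contrast edge between the two regions.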

  7. Real-Time Automatic Segmentation of Optical Coherence Tomography Volume Data of the Macular Region

    PubMed Central

    Tian, Jing; Varga, Boglárka; Somfai, Gábor Márk; Lee, Wen-Hsiang; Smiddy, William E.; Cabrera DeBuc, Delia

    2015-01-01

    Optical coherence tomography (OCT) is a high-speed, high-resolution and non-invasive imaging modality that enables capturing the 3D structure of the retina. Fast, automatic analysis of 3D OCT volume data is crucial given the increasing amount of patient-specific 3D imaging data. In this work, we have developed an automatic algorithm, OCTRIMA 3D (OCT Retinal IMage Analysis 3D), that segments OCT volume data in the macular region rapidly and accurately. The proposed method is implemented using shortest-path based graph search, which detects the retinal boundaries by searching for the shortest path between two end nodes using Dijkstra's algorithm. Additional techniques, such as inter-frame flattening, inter-frame search region refinement, masking and biasing, were introduced to exploit the spatial dependency between adjacent frames and reduce the processing time. Our segmentation algorithm was evaluated by comparison with manual labelings and three state-of-the-art graph-based segmentation methods. The processing time for a whole OCT volume of 496×644×51 voxels (captured by Spectralis SD-OCT) was 26.15 seconds, at least a 2- to 8-fold speed-up over the similar reference algorithms used in the comparisons. The average unsigned error was about 1 pixel (∼4 microns), which was also lower than that of the reference algorithms. We believe that OCTRIMA 3D is a leap forward towards achieving reliable, real-time analysis of 3D OCT retinal data. PMID:26258430

  8. Volume quantization of the mouse cerebellum by semiautomatic 3D segmentation of magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Sijbers, Jan; Van der Linden, Anne-Marie; Scheunders, Paul; Van Audekerke, Johan; Van Dyck, Dirk; Raman, Erik R.

    1996-04-01

    The aim of this work is the development of a non-invasive technique for efficient and accurate volume quantization of the cerebellum of mice. This enables an in-vivo study of cerebellum development in order to detect possible alterations in cerebellum volume of transgenic mice. We concentrate on a semi-automatic segmentation procedure to extract the cerebellum from 3D magnetic resonance data. The proposed technique uses a 3D variant of Vincent and Soille's immersion-based watershed algorithm, which is applied to the gradient magnitude of the MR data. The algorithm results in a partitioning of the data into volume primitives. The known drawback of the watershed algorithm, over-segmentation, is strongly reduced by a priori application of an adaptive anisotropic diffusion filter on the gradient magnitude data. In addition, over-segmentation is further reduced a posteriori by merging volume primitives, based on the minimum description length principle. The outcome of the preceding image processing step is presented to the user for manual segmentation. The first slice containing the object of interest is quickly segmented by the user through selection of basic image regions. Subsequently, the following slices are segmented automatically. The segmentation results are manually corrected where necessary. The technique was tested on phantom objects, where segmentation errors of less than 2% were observed. Three-dimensional reconstructions of the segmented data are shown for the mouse cerebellum and the mouse brain in toto.

  9. Hitchhiker's Guide to Voxel Segmentation for Partial Volume Correction of In Vivo Magnetic Resonance Spectroscopy.

    PubMed

    Quadrelli, Scott; Mountford, Carolyn; Ramadan, Saadallah

    2016-01-01

    Partial volume effects have the potential to cause inaccuracies when quantifying metabolites using proton magnetic resonance spectroscopy (MRS). In order to correct for cerebrospinal fluid content, a spectroscopic voxel needs to be segmented according to different tissue contents. This article aims to detail how automated partial volume segmentation can be undertaken and provides a software framework for researchers to develop their own tools. While many studies have detailed the impact of partial volume correction on proton magnetic resonance spectroscopy quantification, there is a paucity of literature explaining how voxel segmentation can be achieved using freely available neuroimaging packages. PMID:27147822
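    The core of the CSF correction discussed above is a simple rescaling of the measured concentration by the non-CSF voxel fraction, under the standard assumption that the metabolites of interest are absent from CSF (variable names are ours):

```python
def csf_corrected(concentration, f_gm, f_wm, f_csf):
    """Scale a metabolite concentration by the non-CSF voxel fraction,
    assuming metabolites are absent from CSF."""
    tissue_fraction = f_gm + f_wm
    assert abs(tissue_fraction + f_csf - 1.0) < 1e-9, "fractions must sum to 1"
    return concentration / (1.0 - f_csf)

# Voxel segmented as 60% GM, 25% WM, 15% CSF
print(round(csf_corrected(10.0, 0.60, 0.25, 0.15), 3))
```

    The GM/WM split from the segmentation is still needed for relaxation and water-concentration corrections, which are omitted in this sketch.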

  10. Theoretical analysis of multispectral image segmentation criteria.

    PubMed

    Kerfoot, I B; Bresler, Y

    1999-01-01

    Markov random field (MRF) image segmentation algorithms have been extensively studied, and have gained wide acceptance. However, almost all of the work on them has been experimental. This provides a good understanding of the performance of existing algorithms, but not a unified explanation of the significance of each component. To address this issue, we present a theoretical analysis of several MRF image segmentation criteria. Standard methods of signal detection and estimation are used in the theoretical analysis, which quantitatively predicts the performance at realistic noise levels. The analysis is decoupled into the problems of false alarm rate, parameter selection (Neyman-Pearson and receiver operating characteristics), detection threshold, expected a priori boundary roughness, and supervision. Only the performance inherent to a criterion, with perfect global optimization, is considered. The analysis indicates that boundary and region penalties are very useful, while distinct-mean penalties are of questionable merit. Region penalties are far more important for multispectral segmentation than for greyscale. This observation also holds for Gauss-Markov random fields, and for many separable within-class PDFs. To validate the analysis, we present optimization algorithms for several criteria. Theoretical and experimental results agree fairly well. PMID:18267494

  11. Volume Averaging of Spectral-Domain Optical Coherence Tomography Impacts Retinal Segmentation in Children

    PubMed Central

    Trimboli-Heidler, Carmelina; Vogt, Kelly; Avery, Robert A.

    2016-01-01

    Purpose To determine the influence of volume averaging on retinal layer thickness measures acquired with spectral-domain optical coherence tomography (SD-OCT) in children. Methods Macular SD-OCT images were acquired using three different volume settings (i.e., 1, 3, and 9 volumes) in children enrolled in a prospective OCT study. Total retinal thickness and five inner layers were measured around an Early Treatment Diabetic Retinopathy Study (ETDRS) grid using beta-version automated segmentation software for the Spectralis. The magnitude of manual segmentation required to correct the automated segmentation was classified as minor (<12 lines adjusted), moderate (>12 and <25 lines adjusted), severe (>26 and <48 lines adjusted), or fail (>48 lines adjusted, or impossible to adjust due to poor image quality). The frequency of each edit classification was assessed for each volume setting. Thickness, paired difference, and 95% limits of agreement of each anatomic quadrant were compared across volume density. Results Seventy-five subjects (median age 11.8 years, range 4.3–18.5 years) contributed 75 eyes. Less than 5% of the 9- and 3-volume scans required more than minor manual segmentation corrections, compared with 71% of 1-volume scans. The inner (3 mm) region demonstrated similar measures across all layers, regardless of volume number. The 1-volume scans demonstrated greater variability of retinal nerve fiber layer (RNFL) thickness, compared with the other volumes in the outer (6 mm) region. Conclusions In children, volume averaging of SD-OCT acquisitions reduces retinal layer segmentation errors. Translational Relevance This study highlights the importance of volume averaging when acquiring macular volumes intended for multilayer segmentation. PMID:27570711

  12. Midbrain volume segmentation using active shape models and LBPs

    NASA Astrophysics Data System (ADS)

    Olveres, Jimena; Nava, Rodrigo; Escalante-Ramírez, Boris; Cristóbal, Gabriel; García-Moreno, Carla María.

    2013-09-01

    In recent years, the use of Magnetic Resonance Imaging (MRI) to detect different brain structures such as the midbrain, white matter, gray matter, corpus callosum, and cerebellum has increased. This fact, together with the evidence that the midbrain is associated with Parkinson's disease, has led researchers to consider midbrain segmentation an important issue. Nowadays, Active Shape Models (ASM) are widely used in the literature for organ segmentation where shape is an important discriminant feature. Nevertheless, this approach is based on the assumption that objects of interest are usually located on strong edges. Such a limitation may lead to a final shape far from the actual shape model. This paper proposes a novel method for segmenting the midbrain based on the combined use of ASM and Local Binary Patterns (LBP). Furthermore, we analyzed several LBP methods and evaluated their performance. The joint model considers both global and local statistics to improve the final adjustment. The results showed that our proposal performs substantially better than the ASM algorithm and provides better segmentation measurements.

  13. Automated lung tumor segmentation for whole body PET volume based on novel downhill region growing

    NASA Astrophysics Data System (ADS)

    Ballangan, Cherry; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Feng, Dagan

    2010-03-01

    We propose an automated lung tumor segmentation method for whole-body PET images based on a novel downhill region growing (DRG) technique, which regards homogeneous tumor hotspots as 3D monotonically decreasing functions. The method has three major steps: thoracic slice extraction with K-means clustering of the slice features; hotspot segmentation with DRG; and decision tree analysis based hotspot classification. To overcome the common problem of leakage into adjacent hotspots in automated lung tumor segmentation, DRG employs the tumors' SUV monotonicity features. DRG also uses the gradient magnitude of the tumors' SUV to improve tumor boundary definition. We used 14 PET volumes from patients with primary non-small cell lung cancer (NSCLC) for validation. The thoracic region extraction step achieved good and consistent results for all patients despite marked differences in the size and shape of the lungs and the presence of large tumors. The DRG technique was able to avoid leakage into adjacent hotspots and produced a volumetric overlap fraction of 0.61 +/- 0.13, which outperformed four other methods whose overlap fractions varied from 0.40 +/- 0.24 to 0.59 +/- 0.14. Of the 18 tumors in the 14 NSCLC studies, 15 lesions were classified correctly, 2 were false negative and 15 were false positive.
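    The monotonic-descent idea behind DRG can be sketched on a toy 1D profile: growth proceeds from the hotspot peak and accepts a neighbor only if its uptake does not exceed that of the voxel it is reached from, which stops the region at the valley between two adjacent hotspots. This is a didactic reduction; the published method also uses gradient magnitude and hotspot classification:

```python
import heapq
import numpy as np

def downhill_grow(suv, peak):
    """Grow a region from the hotspot peak, accepting a neighbor only if its
    uptake does not exceed that of the voxel it is reached from."""
    h, w = suv.shape
    grown = np.zeros((h, w), dtype=bool)
    heap = [(-suv[peak], peak)]
    grown[peak] = True
    while heap:
        neg_val, (r, c) = heapq.heappop(heap)
        val = -neg_val
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not grown[nr, nc] and suv[nr, nc] <= val:
                grown[nr, nc] = True
                heapq.heappush(heap, (-suv[nr, nc], (nr, nc)))
    return grown

# Two adjacent hotspots; descent from the left peak must not leak into the right one
suv = np.array([[1, 2, 5, 2, 1, 3, 6, 3, 1]], dtype=float)
region = downhill_grow(suv, (0, 2))
print(region.astype(int)[0])
```

    Because the uptake must be non-increasing along every growth path, the region halts at the valley (SUV 1) and never climbs into the neighboring hotspot.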

  14. 3D robust Chan-Vese model for industrial computed tomography volume data segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Linghui; Zeng, Li; Luan, Xiao

    2013-11-01

    Industrial computed tomography (CT) has been widely applied in many areas of non-destructive testing (NDT) and non-destructive evaluation (NDE). In practice, the CT volume data to be dealt with may be corrupted by noise. This paper addresses the segmentation of noisy industrial CT volume data. Motivated by research on the Chan-Vese (CV) model, we present a region-based active contour model that draws upon intensity information in local regions with a controllable scale. In the presence of noise, a local energy is first defined according to the intensity differences within a local neighborhood. A global energy is then defined by integrating the local energy over all image points. In a level set formulation, this energy is represented by a variational level set function, and a surface evolution equation is derived for energy minimization. Comparative analysis with the CV model indicates that the 3D robust Chan-Vese (RCV) model performs comparably. The quantitative evaluation also shows the segmentation accuracy of 3D RCV. In addition, the efficiency of our approach is validated under several types of noise, such as Poisson noise, Gaussian noise, salt-and-pepper noise and speckle noise.
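    The data-fidelity core of the Chan-Vese model alternates between updating the two region means and reassigning each voxel to the mean it deviates from least. The sketch below keeps only that piecewise-constant data term, omitting the level-set evolution and curvature regularisation that the full (R)CV model includes:

```python
import numpy as np

def cv_two_phase(vol, n_iter=20):
    """Data term of the Chan-Vese model: alternately update the two region
    means c1, c2 and reassign each voxel to the nearer mean.
    (Level-set/curvature regularisation is omitted in this sketch.)"""
    phi = vol > vol.mean()                      # initial partition
    for _ in range(n_iter):
        c1 = vol[phi].mean() if phi.any() else 0.0
        c2 = vol[~phi].mean() if (~phi).any() else 0.0
        phi = (vol - c1) ** 2 < (vol - c2) ** 2
    return phi, c1, c2

# Noisy CT-like volume with a bright cubic inclusion
rng = np.random.default_rng(0)
vol = rng.normal(10.0, 1.0, (12, 12, 12))
vol[3:9, 3:9, 3:9] += 20.0
phi, c1, c2 = cv_two_phase(vol)
print(int(phi.sum()))
```

    On this toy volume the two means converge near 30 and 10, and the 6×6×6 inclusion is recovered exactly; it is the missing smoothness term that the RCV model's local energies are designed to supply under heavier noise.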

  15. Multi-region unstructured volume segmentation using tetrahedron filling

    SciTech Connect

    Williams, Sean Jamerson; Dillard, Scott E; Thoma, Dan J; Hlawitschka, Mario; Hamann, Bernd

    2010-01-01

    Segmentation is one of the most common operations in image processing, and while there are several solutions already present in the literature, they each have their own benefits and drawbacks that make them well-suited for some types of data and not for others. We focus on the problem of breaking an image into multiple regions in a single segmentation pass, while supporting both voxel and scattered point data. To solve this problem, we begin with a set of potential boundary points and use a Delaunay triangulation to complete the boundaries. We use heuristic- and interaction-driven Voronoi clustering to find reasonable groupings of tetrahedra. Apart from the computation of the Delaunay triangulation, our algorithm has linear time complexity with respect to the number of tetrahedra.
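    The Delaunay step can be reproduced with scipy.spatial.Delaunay, which in 3D yields tetrahedra directly; locating the tetrahedron that contains a query point is the starting point for Voronoi-style grouping. The point set here is a toy example, not the paper's boundary points:

```python
import numpy as np
from scipy.spatial import Delaunay

# Candidate boundary points: cube corners plus an interior point
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1],
                [0.5, 0.5, 0.5]], dtype=float)
tet = Delaunay(pts)

# Each simplex of a 3D Delaunay triangulation is a tetrahedron (4 vertices)
print(tet.simplices.shape[1])
# Locate the tetrahedron containing a query point
print(tet.find_simplex(np.array([[0.25, 0.25, 0.25]])) >= 0)
```

    Grouping tetrahedra (e.g., by a heuristic or user interaction, as in the paper) then partitions the filled volume into regions, with the Delaunay computation dominating the cost.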

  16. Image Segmentation Analysis for NASA Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2010-01-01

    NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all computerized image analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate the move from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region-growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects into region classes. No other practical, operational image segmentation approach has this tight integration of region-growing object finding with region classification. This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image data sections are small enough to successfully mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region-growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation provides an overview of the RHSEG algorithm and describes how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites in remotely sensed data.

  17. Scintigraphic method for the assessment of intraluminal volume and motility of isolated intestinal segments. [Dogs

    SciTech Connect

    Mitchell, A.; Macey, D.J.; Collin, J.

    1983-07-01

    The isolated in vivo intestinal segment is a popular experimental preparation for the investigation of intestinal function, but its value has been limited because no method has been available for measuring changes in intraluminal volume under experimental conditions. We report a scintigraphic technique for measuring intraluminal volume and assessing intestinal motility. Between 30 and 180 ml, the volume of a 75-cm segment of canine jejunum, perfused with Tc-99m-labeled tin colloid, was found to be proportional to the recorded count rate. This method has been used to monitor the effects of the hormone vasopressin on intestinal function.

  18. Automatic segmentation of the fetal cerebellum on ultrasound volumes, using a 3D statistical shape model.

    PubMed

    Gutiérrez-Becker, Benjamín; Arámbula Cosío, Fernando; Guzmán Huerta, Mario E; Benavides-Serralde, Jesús Andrés; Camargo-Marín, Lisbeth; Medina Bañuelos, Verónica

    2013-09-01

    Previous work has shown that the segmentation of anatomical structures on 3D ultrasound data sets provides an important tool for the assessment of the fetal health. In this work, we present an algorithm based on a 3D statistical shape model to segment the fetal cerebellum on 3D ultrasound volumes. This model is adjusted using an ad hoc objective function which is in turn optimized using the Nelder-Mead simplex algorithm. Our algorithm was tested on ultrasound volumes of the fetal brain taken from 20 pregnant women, between 18 and 24 gestational weeks. An intraclass correlation coefficient of 0.8528 and a mean Dice coefficient of 0.8 between cerebellar volumes measured using manual techniques and the volumes calculated using our algorithm were obtained. As far as we know, this is the first effort to automatically segment fetal intracranial structures on 3D ultrasound data. PMID:23686392
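    The Nelder-Mead optimisation of an ad hoc objective can be sketched with scipy.optimize.minimize. As a hypothetical stand-in for the paper's shape-model objective, we fit a sphere (centre plus radius) to points sampled on its surface; the objective and parameters below are our illustration, not the published function:

```python
import numpy as np
from scipy.optimize import minimize

# Points sampled on a sphere of radius 2 centred at (1, 2, 3)
rng = np.random.default_rng(1)
d = rng.normal(size=(200, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
pts = np.array([1.0, 2.0, 3.0]) + 2.0 * d

def objective(params):
    """Mean squared deviation of point distances from the model radius."""
    centre, radius = params[:3], params[3]
    dist = np.linalg.norm(pts - centre, axis=1)
    return np.mean((dist - radius) ** 2)

res = minimize(objective, x0=[0.0, 0.0, 0.0, 1.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-10,
                        "maxiter": 5000, "maxfev": 10000})
print(np.round(res.x, 2))
```

    Nelder-Mead needs no gradients, which is what makes it attractive for objectives defined directly on image data, as in the cerebellum segmentation above.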

  19. Automated segmentation of mesothelioma volume on CT scan

    NASA Astrophysics Data System (ADS)

    Zhao, Binsheng; Schwartz, Lawrence; Flores, Raja; Liu, Fan; Kijewski, Peter; Krug, Lee; Rusch, Valerie

    2005-04-01

    In mesothelioma, response is usually assessed by computed tomography (CT). In current clinical practice, the Response Evaluation Criteria in Solid Tumors (RECIST) or the WHO criteria, i.e., uni-dimensional or bi-dimensional measurements, are applied to the assessment of therapy response. However, the shape of the mesothelioma volume is very irregular and its longest dimension is almost never in the axial plane. Furthermore, the choice of sections and sites where radiologists measure the tumor is rather subjective, resulting in poor reproducibility of tumor size measurements. We are developing an objective three-dimensional (3D) computer algorithm to automatically identify and quantify tumor volumes associated with malignant pleural mesothelioma in order to assess therapy response. The algorithm first extracts the lung pleural surface from the volumetric CT images by interpolating the chest ribs over a number of adjacent slices and then forming a volume that includes the thorax. This volume allows a separation of mesothelioma from the chest wall. Subsequently, the structures inside the extracted pleural lung surface, including the mediastinal area, lung parenchyma, and pleural mesothelioma, can be identified using a multiple-thresholding technique and morphological operations. Preliminary results have shown the potential of this algorithm to automatically detect and quantify tumor volumes on CT scans and thus to assess therapy response for malignant pleural mesothelioma.
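    The multiple-thresholding-plus-morphology stage can be sketched with scipy.ndimage on a toy slice. The attenuation window and structuring element below are our assumptions for illustration, not the published values:

```python
import numpy as np
from scipy import ndimage

# Synthetic axial slice: air ~ -1000 HU, soft tissue ~ 40 HU, tumor ~ 60 HU
slice_hu = np.full((20, 20), -1000.0)
slice_hu[2:18, 2:18] = 40.0          # chest wall / mediastinum (toy)
slice_hu[6:12, 6:12] = 60.0          # pleural mesothelioma (toy)

# Multiple thresholding: keep voxels within the tumour attenuation window
mask = (slice_hu > 50) & (slice_hu < 80)

# Morphological clean-up, then connected-component labelling
mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
labels, n = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, range(1, n + 1))
print(n, int(sizes.max()))
```

    The binary opening suppresses isolated noise voxels before labelling, and summing the voxels of each labelled component (times the voxel volume) yields the tumor volume estimate.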

  20. LANDSAT-D program. Volume 2: Ground segment

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Raw digital data, as received from the LANDSAT spacecraft, cannot generate images that meet specifications. Radiometric corrections must be made to compensate for aging and for differences in sensitivity among the instrument sensors. Geometric corrections must be made to compensate for the off-nadir look angle and for spacecraft drift from its prescribed path. Corrections must also be made for look-angle jitter caused by vibrations induced by spacecraft equipment. The major components of the LANDSAT ground segment and their functions are discussed.

  1. Knowledge-based segmentation of pediatric kidneys in CT for measuring parenchymal volume

    NASA Astrophysics Data System (ADS)

    Brown, Matthew S.; Feng, Waldo C.; Hall, Theodore R.; McNitt-Gray, Michael F.; Churchill, Bernard M.

    2000-06-01

    The purpose of this work was to develop an automated method for segmenting pediatric kidneys in contrast-enhanced helical CT images and measuring the volume of the renal parenchyma. An automated system was developed to segment the abdomen, spine, aorta and kidneys. The expected size, shape, topology and X-ray attenuation of anatomical structures are stored as features in an anatomical model. These features guide 3-D threshold-based segmentation and the subsequent matching of extracted image regions to anatomical structures in the model. Following segmentation, the kidney volumes are calculated by summing the included voxels. To validate the system, the kidney volumes of 4 swine were calculated using our approach and compared to the 'true' volumes measured after harvesting the kidneys. Automated volume calculations were also performed retrospectively in a cohort of 10 children. The mean difference between the calculated and measured values in the swine kidneys was 1.38 (S.D. ± 0.44) cc. For the pediatric cases, calculated volumes ranged from 41.7 - 252.1 cc/kidney, and the mean ratio of right to left kidney volume was 0.96 (S.D. ± 0.07). These results demonstrate the accuracy of the volumetric technique, which may in the future provide an objective assessment of renal damage.

  2. Hybrid segmentation framework for 3D medical image analysis

    NASA Astrophysics Data System (ADS)

    Chen, Ting; Metaxas, Dimitri N.

    2003-05-01

    Medical image segmentation is the process that defines the region of interest in the image volume. Classical segmentation methods such as region-based methods and boundary-based methods cannot make full use of the information provided by the image. In this paper we propose a general hybrid framework for 3D medical image segmentation. In our approach we combine the Gibbs prior model and the deformable model. First, Gibbs prior models are applied to each slice in a 3D medical image volume and the segmentation results are combined into a 3D binary mask of the object. We then create a deformable mesh based on this 3D binary mask. The deformable model is led toward the edge features in the volume with the help of image-derived external forces. The deformable model segmentation result can in turn be used to update the parameters of the Gibbs prior models. These methods then work recursively to reach a global segmentation solution. The hybrid segmentation framework has been applied to images of the lung, heart, colon, jaw, tumors, and brain. The experimental data include MRI (T1, T2, PD), CT, X-ray, and ultrasound images. High quality results are achieved at relatively low computational cost. We also performed validation using expert manual segmentation as the ground truth. The results show that the hybrid segmentation framework may have further clinical use.

  3. High volume production trial of mirror segments for the Thirty Meter Telescope

    NASA Astrophysics Data System (ADS)

    Oota, Tetsuji; Negishi, Mahito; Shinonaga, Hirohiko; Gomi, Akihiko; Tanaka, Yutaka; Akutsu, Kotaro; Otsuka, Itaru; Mochizuki, Shun; Iye, Masanori; Yamashita, Takuya

    2014-07-01

    The Thirty Meter Telescope is a next-generation optical/infrared telescope to be constructed on Mauna Kea, Hawaii, toward the end of this decade as an international project. Its 30 m primary mirror consists of 492 off-axis aspheric mirror segments. High-volume production of hundreds of segments started in 2013 under the contract between the National Astronomical Observatory of Japan and Canon Inc. This paper describes the achievements of the high-volume production trials. The Stressed Mirror Figuring technique established by Keck Telescope engineers has been adapted and adopted. To measure the segment surface figure, a novel stitching algorithm is evaluated by experiment, and the integration procedure is checked with a prototype segment.

  4. Pulmonary airways tree segmentation from CT examinations using adaptive volume of interest

    NASA Astrophysics Data System (ADS)

    Park, Sang Cheol; Kim, Won Pil; Zheng, Bin; Leader, Joseph K.; Pu, Jiantao; Tan, Jun; Gur, David

    2009-02-01

    Airways tree segmentation is an important step in quantitatively assessing the severity of, and changes in, several lung diseases such as chronic obstructive pulmonary disease (COPD), asthma, and cystic fibrosis. It can also be used in guiding bronchoscopy. The purpose of this study is to develop an automated scheme for segmenting the airways tree structure depicted on chest CT examinations. After lung volume segmentation, the scheme defines the first cylinder-like volume of interest (VOI) using a series of images depicting the trachea. The scheme then iteratively defines and adds subsequent VOIs using a region growing algorithm combined with adaptively determined thresholds in order to trace possible sections of airways located inside the VOI in question. The airway tree segmentation process is automatically terminated after the scheme assesses all defined VOIs in the iteratively assembled VOI list. In this preliminary study, ten CT examinations with 1.25 mm section thickness and two different CT image reconstruction kernels ("bone" and "standard") were selected and used to test the proposed airways tree segmentation scheme. The experimental results showed that (1) adopting this approach effectively prevented the scheme from infiltrating into the parenchyma, (2) the proposed method segmented the airways trees reasonably accurately, with a lower false-positive identification rate compared with other previously reported schemes based on 2-D image segmentation and data analyses, and (3) the proposed adaptive, iterative threshold selection method for the region growing step in each identified VOI enabled the scheme to segment the airways trees reliably to the 4th generation in this limited dataset, with successful segmentation up to the 5th generation in a fraction of the airways tree branches.
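The per-VOI region growing with an adaptively determined threshold can be illustrated with a minimal 2D sketch. This is not the authors' implementation; the acceptance rule (intensity within a tolerance of the running region mean) and the tolerance value are assumptions made for illustration.

```python
# Illustrative region growing from a seed with a simple adaptive criterion.
from collections import deque

def region_grow(img, seed, tol=0.5):
    """Grow a region from `seed`, accepting 4-neighbours whose intensity
    stays within `tol` of the running region mean."""
    h, w = len(img), len(img[0])
    region = {seed}
    total = img[seed[0]][seed[1]]
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        mean = total / len(region)
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region:
                if abs(img[nr][nc] - mean) <= tol:
                    region.add((nr, nc))
                    total += img[nr][nc]
                    frontier.append((nr, nc))
    return region

# Toy airway lumen (low intensity) surrounded by airway wall (high intensity)
img = [[9, 9, 9, 9],
       [9, 1, 1, 9],
       [9, 1, 1, 9],
       [9, 9, 9, 9]]
lumen = region_grow(img, (1, 1))   # grows over the low-intensity pixels only
```

A per-VOI scheme like the one in the abstract would rerun this with a threshold re-estimated inside each cylinder-like VOI, which is what keeps the growth from leaking into parenchyma.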

  5. Multi-Segment Hemodynamic and Volume Assessment With Impedance Plethysmography: Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Ku, Yu-Tsuan E.; Montgomery, Leslie D.; Webbon, Bruce W. (Technical Monitor)

    1995-01-01

    Definition of multi-segmental circulatory and volume changes in the human body provides an understanding of the physiologic responses to various aerospace conditions. We have developed instrumentation and testing procedures at NASA Ames Research Center that may be useful in biomedical research and clinical diagnosis. Specialized two, four, and six channel impedance systems will be described that have been used to measure calf, thigh, thoracic, arm, and cerebral hemodynamic and volume changes during various experimental investigations.

  6. Swarm Intelligence Integrated Graph-Cut for Liver Segmentation from 3D-CT Volumes

    PubMed Central

    Eapen, Maya; Korah, Reeba; Geetha, G.

    2015-01-01

    The segmentation of organs in CT volumes is a prerequisite for diagnosis and treatment planning. In this paper, we focus on liver segmentation from contrast-enhanced abdominal CT volumes, a challenging task due to intensity overlapping, blurred edges, large variability in liver shape, and complex background with cluttered features. The algorithm integrates multidiscriminative cues (i.e., prior domain information, an intensity model, and regional characteristics of the liver) in a graph-cut image segmentation framework. The paper proposes a swarm intelligence inspired edge-adaptive weight function for regulating the energy minimization of the traditional graph-cut model. The model is validated both qualitatively (by clinicians and radiologists) and quantitatively on publicly available computed tomography (CT) datasets (MICCAI 2007 liver segmentation challenge, 3D-IRCAD). Quantitative evaluation of segmentation results is performed using liver volume calculations, and mean scores of 80.8% and 82.5% are obtained on the MICCAI and IRCAD datasets, respectively. The experimental results illustrate the efficiency and effectiveness of the proposed method. PMID:26689833
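The edge-adaptive weight function described above regulates the boundary (n-link) term of the graph-cut energy. The paper's swarm-tuned function is not reproduced here; the sketch below shows only the standard Gaussian boundary weight such a function modulates, with an assumed sigma.

```python
# Standard Boykov-Jolly-style n-link weight (illustrative; not the paper's
# swarm-intelligence-tuned function).
import math

def nlink_weight(ip, iq, sigma=10.0):
    """Penalty for cutting between neighbouring voxels p and q: high between
    similar intensities (discourages cuts inside homogeneous liver tissue),
    low across a strong edge (encourages cuts at organ boundaries)."""
    return math.exp(-((ip - iq) ** 2) / (2.0 * sigma ** 2))

w_flat = nlink_weight(100, 102)   # similar voxels -> expensive to cut
w_edge = nlink_weight(100, 160)   # strong edge    -> cheap to cut
```

An edge-adaptive scheme would make sigma (or the whole function) vary with local edge content instead of being a global constant.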

  7. Lung Segmentation in 4D CT Volumes Based on Robust Active Shape Model Matching

    PubMed Central

    Gill, Gurman; Beichel, Reinhard R.

    2015-01-01

    Dynamic and longitudinal lung CT imaging produce 4D lung image data sets, enabling applications like radiation treatment planning or assessment of response to treatment of lung diseases. In this paper, we present a 4D lung segmentation method that mutually utilizes all individual CT volumes to derive segmentations for each CT data set. Our approach is based on a 3D robust active shape model and extends it to fully utilize 4D lung image data sets. This yields an initial segmentation for the 4D volume, which is then refined by using a 4D optimal surface finding algorithm. The approach was evaluated on a diverse set of 152 CT scans of normal and diseased lungs, consisting of total lung capacity and functional residual capacity scan pairs. In addition, a comparison to a 3D segmentation method and a registration based 4D lung segmentation approach was performed. The proposed 4D method obtained an average Dice coefficient of 0.9773 ± 0.0254, which was statistically significantly better (p value ≪0.001) than the 3D method (0.9659 ± 0.0517). Compared to the registration based 4D method, our method obtained better or similar performance, but was 58.6% faster. Also, the method can be easily expanded to process 4D CT data sets consisting of several volumes. PMID:26557844
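The Dice coefficient reported above is simple to compute; a minimal sketch on toy voxel sets (the masks below are illustrative, not the study's data):

```python
# Dice overlap between two segmentations represented as sets of voxel indices.
def dice(a, b):
    """2|A ∩ B| / (|A| + |B|); 1.0 for identical non-empty or both-empty sets."""
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

auto   = {(x, 0, 0) for x in range(90)}        # toy "automated" mask
manual = {(x, 0, 0) for x in range(10, 100)}   # toy "reference" mask
d = dice(auto, manual)                         # 80 shared voxels of 90 + 90
```

In practice the sets would be the nonzero voxel indices of two binary label volumes.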

  8. Precise segmentation of multiple organs in CT volumes using learning-based approach and information theory.

    PubMed

    Lu, Chao; Zheng, Yefeng; Birkbeck, Neil; Zhang, Jingdan; Kohlberger, Timo; Tietjen, Christian; Boettger, Thomas; Duncan, James S; Zhou, S Kevin

    2012-01-01

    In this paper, we present a novel method incorporating information theory into a learning-based approach for automatic and accurate pelvic organ segmentation (including the prostate, bladder, and rectum). We target 3D CT volumes that are generated using different scanning protocols (e.g., contrast and non-contrast, with and without implant in the prostate, various resolutions and positions), and the volumes come from largely diverse sources (e.g., patients with diseases in different organs). Three key ingredients are combined to solve this challenging segmentation problem. First, marginal space learning (MSL) is applied to efficiently and effectively localize the multiple organs in the largely diverse CT volumes. Second, learning techniques with steerable features are applied for robust boundary detection, enabling the handling of highly heterogeneous texture patterns. Third, a novel information theoretic scheme is incorporated into the boundary inference process. The incorporation of the Jensen-Shannon divergence further drives the mesh to the best fit of the image, thus improving the segmentation performance. The proposed approach is tested on a challenging dataset containing 188 volumes from diverse sources. Our approach not only produces excellent segmentation accuracy, but also runs about eighty times faster than previous state-of-the-art solutions. The proposed method can be applied to CT images to provide visual guidance to physicians during computer-aided diagnosis, treatment planning, and image-guided radiotherapy to treat cancers in the pelvic region. PMID:23286081

  9. Analysis of Random Segment Errors on Coronagraph Performance

    NASA Technical Reports Server (NTRS)

    Shaklan, Stuart B.; N'Diaye, Mamadou; Stahl, Mark T.; Stahl, H. Philip

    2016-01-01

    At 2015 SPIE O&P we presented "Preliminary Analysis of Random Segment Errors on Coronagraph Performance." Key findings: contrast leakage for a 4th-order sinc²(x) coronagraph is 10X more sensitive to random segment piston than to random tip/tilt, and an aperture with fewer segments (i.e., 1 ring) or very many segments (>16 rings) has less contrast leakage as a function of piston or tip/tilt than an aperture with 2 to 4 rings of segments. Revised finding: piston is only 2.5X more sensitive than tip/tilt.

  10. Automated segmentation and measurement of global white matter lesion volume in patients with multiple sclerosis.

    PubMed

    Alfano, B; Brunetti, A; Larobina, M; Quarantelli, M; Tedeschi, E; Ciarmiello, A; Covelli, E M; Salvatore, M

    2000-12-01

    A fully automated magnetic resonance (MR) segmentation method for identification and volume measurement of demyelinated white matter has been developed. Spin-echo MR brain scans were performed in 38 patients with multiple sclerosis (MS) and in 46 healthy subjects. Segmentation of normal tissues and white matter lesions (WML) was obtained, based on their relaxation rates and proton density maps. For WML identification, additional criteria included three-dimensional (3D) lesion shape and surrounding tissue composition. Segmented images were generated, and normal brain tissues and WML volumes were obtained. Sensitivity, specificity, and reproducibility of the method were calculated, using the WML identified by two neuroradiologists as the gold standard. The average volume of "abnormal" white matter in normal subjects (false positive) was 0.11 ml (range 0-0.59 ml). In MS patients the average WML volume was 31.0 ml (range 1.1-132.5 ml), with a sensitivity of 87.3%. In the reproducibility study, the mean SD of WML volumes was 2.9 ml. The procedure appears suitable for monitoring disease changes over time. J. Magn. Reson. Imaging 2000;12:799-807. PMID:11105017

  11. Semi-automatic tool for segmentation and volumetric analysis of medical images.

    PubMed

    Heinonen, T; Dastidar, P; Kauppinen, P; Malmivuo, J; Eskola, H

    1998-05-01

    Segmentation software is described, developed for medical image processing and run on Windows. The software applies basic image processing techniques through a graphical user interface. For particular applications, such as brain lesion segmentation, the software enables the combination of different segmentation techniques to improve its efficiency. The program is applied for magnetic resonance imaging, computed tomography and optical images of cryosections. The software can be utilised in numerous applications, including pre-processing for three-dimensional presentations, volumetric analysis and construction of volume conductor models. PMID:9747567

  12. Segmentation of organs at risk in CT volumes of head, thorax, abdomen, and pelvis

    NASA Astrophysics Data System (ADS)

    Han, Miaofei; Ma, Jinfeng; Li, Yan; Li, Meiling; Song, Yanli; Li, Qiang

    2015-03-01

    Accurate segmentation of organs at risk (OARs) is a key step in the treatment planning system (TPS) of image guided radiation therapy. We are developing three classes of methods to segment 17 organs at risk throughout the whole body, including brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, liver, kidneys, spleen, prostate, rectum, femoral heads, and skin. The three classes of segmentation methods include (1) threshold-based methods for organs of large contrast with adjacent structures such as lungs, trachea, and skin; (2) context-driven Generalized Hough Transform-based methods combined with a graph cut algorithm for robust localization and segmentation of liver, kidneys and spleen; and (3) atlas and registration-based methods for segmentation of heart and all organs in CT volumes of head and pelvis. The segmentation accuracy for the seventeen organs was subjectively evaluated by two medical experts using a three-level score: 0, poor (unusable in clinical practice); 1, acceptable (minor revision needed); and 2, good (nearly no revision needed). A database was collected from Ruijin Hospital, Huashan Hospital, and Xuhui Central Hospital in Shanghai, China, including 127 head scans, 203 thoracic scans, 154 abdominal scans, and 73 pelvic scans. The percentages of "good" segmentation results were 97.6%, 92.9%, 81.1%, 87.4%, 85.0%, 78.7%, 94.1%, 91.1%, 81.3%, 86.7%, 82.5%, 86.4%, 79.9%, 72.6%, 68.5%, 93.2%, 96.9% for brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, liver, kidneys, spleen, prostate, rectum, femoral heads, and skin, respectively. Various organs at risk can be reliably segmented from CT scans by use of the three classes of segmentation methods.

  13. Generalized method for partial volume estimation and tissue segmentation in cerebral magnetic resonance images

    PubMed Central

    Khademi, April; Venetsanopoulos, Anastasios; Moody, Alan R.

    2014-01-01

    An artifact found in magnetic resonance images (MRI) called partial volume averaging (PVA) has received much attention since accurate segmentation of cerebral anatomy and pathology is impeded by this artifact. Traditional neurological segmentation techniques rely on Gaussian mixture models to handle noise and PVA, or high-dimensional feature sets that exploit redundancy in multispectral datasets. Unfortunately, model-based techniques may not be optimal for images with non-Gaussian noise distributions and/or pathology, and multispectral techniques model probabilities instead of the partial volume (PV) fraction. For robust segmentation, a PV fraction estimation approach is developed for cerebral MRI that does not depend on predetermined intensity distribution models or multispectral scans. Instead, the PV fraction is estimated directly from each image using an adaptively defined global edge map constructed by exploiting a relationship between edge content and PVA. The final PVA map is used to segment anatomy and pathology with subvoxel accuracy. Validation on simulated and real, pathology-free T1 MRI (Gaussian noise), as well as pathological fluid attenuation inversion recovery MRI (non-Gaussian noise), demonstrate that the PV fraction is accurately estimated and the resultant segmentation is robust. Comparison to model-based methods further highlight the benefits of the current approach. PMID:26158022
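The classical two-tissue mixing model that PV fraction estimation builds on can be inverted per voxel. This is a hedged sketch of that baseline idea only: the pure-tissue means below are assumed values, and the paper's edge-map-based estimator is considerably more sophisticated.

```python
# Linear two-tissue partial volume mixing:
#   I = f * mean_a + (1 - f) * mean_b   =>   f = (I - mean_b) / (mean_a - mean_b)
def pv_fraction(intensity, mean_a, mean_b):
    """Fraction of tissue A in a voxel under linear mixing, clamped to [0, 1]."""
    f = (intensity - mean_b) / (mean_a - mean_b)
    return min(1.0, max(0.0, f))

# A voxel halfway between white matter (assumed mean 100) and CSF (assumed
# mean 20) is a 50/50 mixture.
f_mid = pv_fraction(60.0, 100.0, 20.0)
```

Pure-tissue voxels fall at the clamped extremes, which is why edges between tissues are where PVA actually matters.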

  14. Automatic segmentation of tumor-laden lung volumes from the LIDC database

    NASA Astrophysics Data System (ADS)

    O'Dell, Walter G.

    2012-03-01

    The segmentation of the lung parenchyma is often a critical pre-processing step prior to the application of computer-aided detection of lung nodules. Segmentation of the lung volume can dramatically decrease computation time and reduce the number of false-positive detections by excluding extra-pulmonary tissue from consideration. However, while many algorithms are capable of adequately segmenting the healthy lung, none have been demonstrated to work reliably on tumor-laden lungs. Of particular challenge is preserving tumorous masses attached to the chest wall, mediastinum, or major vessels. In this role, lung volume segmentation comprises an important computational step that can adversely affect the performance of the overall CAD algorithm. An automated lung volume segmentation algorithm has been developed with the goals of maximally excluding extra-pulmonary tissue while retaining all true nodules. The algorithm comprises a series of tasks including intensity thresholding, 2-D and 3-D morphological operations, 2-D and 3-D flood-filling, and snake-based clipping of nodules attached to the chest wall. It features the ability to (1) exclude the trachea and bowels, (2) snip large attached nodules using snakes, (3) snip small attached nodules using dilation, (4) preserve large masses fully internal to the lung volume, (5) account for basal aspects of the lung where, in a 2-D slice, the lower sections appear to be disconnected from the main lung, and (6) achieve separation of the right and left hemi-lungs. The algorithm was developed and trained on the first 100 datasets of the LIDC image database.

  15. Volume rendering segmented data using 3D textures: a practical approach for intra-operative visualization

    NASA Astrophysics Data System (ADS)

    Subramanian, Navneeth; Mullick, Rakesh; Vaidya, Vivek

    2006-03-01

    Volume rendering has high utility in the visualization of segmented datasets. However, volume rendering of the segmented labels along with the original data causes undesirable intermixing/bleeding artifacts arising from interpolation at the sharp boundaries. This issue is further amplified in 3D texture-based volume rendering due to the inaccessibility of the interpolation stage. We present an approach that helps minimize intermixing artifacts while maintaining the high performance of 3D texture-based volume rendering, both of which are critical for intra-operative visualization. Our approach uses a 2D transfer-function-based classification scheme where label distinction is achieved through an encoding that generates unique gradient values for labels. This helps ensure that labelled voxels always map to distinct regions in the 2D transfer function, irrespective of interpolation. In contrast to previously reported algorithms, our algorithm does not require multiple rendering passes and supports more than four masks. It also allows for real-time modification of the colors/opacities of the segmented structures along with the original data. Additionally, these capabilities are available with minimal texture memory requirements among comparable algorithms. Results are presented on clinical and phantom data.

  16. Trabecular-Iris Circumference Volume in Open Angle Eyes Using Swept-Source Fourier Domain Anterior Segment Optical Coherence Tomography

    PubMed Central

    Rigi, Mohammed; Blieden, Lauren S.; Nguyen, Donna; Chuang, Alice Z.; Baker, Laura A.; Bell, Nicholas P.; Lee, David A.; Mankiewicz, Kimberly A.; Feldman, Robert M.

    2014-01-01

    Purpose. To introduce a new anterior segment optical coherence tomography parameter, trabecular-iris circumference volume (TICV), which measures the integrated volume of the peripheral angle, and establish a reference range in normal, open angle eyes. Methods. One eye of each participant with open angles and a normal anterior segment was imaged using 3D mode by the CASIA SS-1000 (Tomey, Nagoya, Japan). Trabecular-iris space area (TISA) and TICV at 500 and 750 µm were calculated. Analysis of covariance was performed to examine the effect of age and its interaction with spherical equivalent. Results. The study included 100 participants with a mean age of 50 (±15) years (range 20–79). TICV showed a normal distribution with a mean (±SD) value of 4.75 µL (±2.30) for TICV500 and a mean (±SD) value of 8.90 µL (±3.88) for TICV750. Overall, TICV showed an age-related reduction (P = 0.035). In addition, angle volume increased with increased myopia for all age groups, except for those older than 65 years. Conclusions. This study introduces a new parameter to measure peripheral angle volume, TICV, with age-adjusted normal ranges for open angle eyes. Further investigation is warranted to determine the clinical utility of this new parameter. PMID:25210623
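Conceptually, an integrated circumferential volume such as TICV can be approximated by summing sectional areas (TISA) times the arc length each angular sample represents. The sketch below only illustrates that idea; the instrument's actual algorithm, the ring radius, and the sampling density are all assumptions, not details from the study.

```python
# Hedged sketch: volume of a ring-like region from sectional areas.
import math

def ticv(tisa_mm2, radius_mm=6.0):
    """Approximate volume (1 mm^3 = 1 µL) by summing cross-sectional areas
    times the arc length each uniformly spaced sample represents."""
    arc = 2.0 * math.pi * radius_mm / len(tisa_mm2)   # arc length per sample
    return sum(a * arc for a in tisa_mm2)

# A uniform TISA of 0.236 mm^2 sampled 128 times around an assumed 6 mm ring
v = ticv([0.236] * 128)   # ≈ 0.236 * 2π * 6 mm^3
```

With real data the list would hold TISA750 (or TISA500) measurements from the radial OCT scans rather than a constant.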

  18. Exploratory analysis of genomic segmentations with Segtools

    PubMed Central

    2011-01-01

    Background As genome-wide experiments and annotations become more prevalent, researchers increasingly require tools to help interpret data at this scale. Many functional genomics experiments involve partitioning the genome into labeled segments, such that segments sharing the same label exhibit one or more biochemical or functional traits. For example, a collection of ChIP-seq experiments yields a compendium of peaks, each labeled with one or more associated DNA-binding proteins. Similarly, manually or automatically generated annotations of functional genomic elements, including cis-regulatory modules and protein-coding or RNA genes, can also be summarized as genomic segmentations. Results We present a software toolkit called Segtools that simplifies and automates the exploration of genomic segmentations. The software operates as a series of interacting tools, each of which provides one mode of summarization. These various tools can be pipelined and summarized in a single HTML page. We describe the Segtools toolkit and demonstrate its use in interpreting a collection of human histone modification data sets and Plasmodium falciparum local chromatin structure data sets. Conclusions Segtools provides a convenient, powerful means of interpreting a genomic segmentation. PMID:22029426

  19. Segments.

    ERIC Educational Resources Information Center

    Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.

    2001-01-01

    Presents a market taxonomy for higher education, including what it reveals about the structure of the market, the model's technical attributes, and its capacity to explain pricing behavior. Details the identification of the principle seams separating one market segment from another and how student aspirations help to organize the market, making…

  20. Cost, volume and profitability analysis.

    PubMed

    Tarantino, David P

    2002-01-01

    If you want to increase your income by seeing more patients, it's important to figure out the financial impact such a move could have on your practice. Learn how to run a cost, volume, and profitability analysis to determine how business decisions can change your financial picture. PMID:11806235

  1. Hitchhiker’s Guide to Voxel Segmentation for Partial Volume Correction of In Vivo Magnetic Resonance Spectroscopy

    PubMed Central

    Quadrelli, Scott; Mountford, Carolyn; Ramadan, Saadallah

    2016-01-01

    Partial volume effects have the potential to cause inaccuracies when quantifying metabolites using proton magnetic resonance spectroscopy (MRS). In order to correct for cerebrospinal fluid content, a spectroscopic voxel needs to be segmented according to different tissue contents. This article aims to detail how automated partial volume segmentation can be undertaken and provides a software framework for researchers to develop their own tools. While many studies have detailed the impact of partial volume correction on proton magnetic resonance spectroscopy quantification, there is a paucity of literature explaining how voxel segmentation can be achieved using freely available neuroimaging packages. PMID:27147822
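The simplest form of the CSF correction the article motivates scales the metabolite estimate by the non-CSF fraction of the spectroscopic voxel. The sketch below shows only this basic form; relaxation and water-content corrections, which a full analysis would include, are deliberately omitted.

```python
# CSF partial volume correction for an MRS voxel (basic tissue-fraction form).
def csf_corrected(concentration, f_gm, f_wm, f_csf):
    """Scale a measured concentration by the tissue fraction of the voxel.
    Assumes the segmented fractions sum to 1 (GM + WM + CSF)."""
    assert abs(f_gm + f_wm + f_csf - 1.0) < 1e-9
    return concentration / (1.0 - f_csf)

# A voxel that is 20% CSF under-reports the tissue concentration by 20%
c = csf_corrected(8.0, f_gm=0.5, f_wm=0.3, f_csf=0.2)
```

The fractions f_gm, f_wm, and f_csf are exactly what the voxel segmentation described in the article provides.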

  2. Automated cerebellar segmentation: Validation and application to detect smaller volumes in children prenatally exposed to alcohol☆

    PubMed Central

    Cardenas, Valerie A.; Price, Mathew; Infante, M. Alejandra; Moore, Eileen M.; Mattson, Sarah N.; Riley, Edward P.; Fein, George

    2014-01-01

    Objective To validate an automated cerebellar segmentation method based on active shape and appearance modeling and then segment the cerebellum on images acquired from adolescents with histories of prenatal alcohol exposure (PAE) and non-exposed controls (NC). Methods Automated segmentations of the total cerebellum, right and left cerebellar hemispheres, and three vermal lobes (anterior, lobules I–V; superior posterior, lobules VI–VII; inferior posterior, lobules VIII–X) were compared to expert manual labelings on 20 subjects, studied twice, who were not used for model training. The method was also used to segment the cerebellum on 11 PAE and 9 NC adolescents. Results The test–retest intraclass correlation coefficients (ICCs) of the automated method were greater than 0.94 for all cerebellar volume and mid-sagittal vermal area measures, comparable or better than the test–retest ICCs for manual measurement (all ICCs > 0.92). The ICCs computed on all four cerebellar measurements (manual and automated measures on the repeat scans) to assess comparability were above 0.97 for non-vermis parcels, and above 0.89 for vermis parcels. When applied to patients, the automated method detected smaller cerebellar volumes and mid-sagittal areas in the PAE group compared to controls (p < 0.05 for all regions except the superior posterior lobe, consistent with prior studies). Discussion These results demonstrate excellent reliability and validity of automated cerebellar volume and mid-sagittal area measurements, compared to manual measurements. These data also illustrate that this new technology for automatically delineating the cerebellum leads to conclusions regarding the effects of prenatal alcohol exposure on the cerebellum consistent with prior studies that used labor intensive manual delineation, even with a very small sample. PMID:25061566
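For illustration, a one-way random-effects ICC of the kind used for test-retest reliability can be computed as below. This is a generic ICC(1,1) sketch on toy numbers, not the paper's exact statistical model or data.

```python
# One-way random-effects intraclass correlation, ICC(1,1):
#   ICC = (MSB - MSW) / (MSB + (k - 1) * MSW)
def icc_1_1(data):
    """data: list of per-subject measurement lists, each of equal length k."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    means = [sum(row) / k for row in data]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)       # between-subject
    msw = sum((x - m) ** 2
              for row, m in zip(data, means)
              for x in row) / (n * (k - 1))                        # within-subject
    return (msb - msw) / (msb + (k - 1) * msw)

# Identical repeat scans give perfect reliability; small repeat noise lowers it
perfect = icc_1_1([[10.0, 10.0], [12.0, 12.0], [15.0, 15.0]])
noisy   = icc_1_1([[10.0, 11.0], [12.0, 13.0], [15.0, 14.0]])
```

Test-retest designs like the one above (20 subjects, two scans each) map directly onto the per-subject rows.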

  3. Automatic coronary lumen segmentation with partial volume modeling improves lesions' hemodynamic significance assessment

    NASA Astrophysics Data System (ADS)

    Freiman, M.; Lamash, Y.; Gilboa, G.; Nickisch, H.; Prevrhal, S.; Schmitt, H.; Vembar, M.; Goshen, L.

    2016-03-01

    The determination of hemodynamic significance of coronary artery lesions from cardiac computed tomography angiography (CCTA) based on blood flow simulations has the potential to improve CCTA's specificity, thus resulting in improved clinical decision making. Accurate coronary lumen segmentation required for flow simulation is challenging due to several factors. Specifically, the partial-volume effect (PVE) in small-diameter lumina may result in overestimation of the lumen diameter that can lead to an erroneous hemodynamic significance assessment. In this work, we present a coronary artery segmentation algorithm tailored specifically for flow simulations by accounting for the PVE. Our algorithm detects lumen regions that may be subject to the PVE by analyzing the intensity values along the coronary centerline and integrates this information into a machine-learning based graph min-cut segmentation framework to obtain accurate coronary lumen segmentations. We demonstrate the improvement in hemodynamic significance assessment achieved by accounting for the PVE in the automatic segmentation of 91 coronary artery lesions from 85 patients. We compare hemodynamic significance assessments by means of fractional flow reserve (FFR) resulting from simulations on 3D models generated by our segmentation algorithm with and without accounting for the PVE. By accounting for the PVE we improved the area under the ROC curve for detecting hemodynamically significant CAD by 29% (N=91, 0.85 vs. 0.66, p<0.05, DeLong's test) with an invasive FFR threshold of 0.8 as the reference standard. Our algorithm has the potential to facilitate non-invasive hemodynamic significance assessment of coronary lesions.
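The area under the ROC curve used as the figure of merit above is equivalent to the Mann-Whitney statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch on toy scores (not the study's data):

```python
# ROC AUC via the Mann-Whitney statistic (ties count as half a win).
def auc(scores_pos, scores_neg):
    """Probability that a positive case outscores a negative case."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy scores: hemodynamically significant lesions mostly rank above the rest
a = auc([0.9, 0.8, 0.6], [0.7, 0.4, 0.3])
```

Comparing two correlated AUCs, as the paper does, additionally requires DeLong's test rather than this point estimate alone.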

  4. Four-chamber heart modeling and automatic segmentation for 3D cardiac CT volumes

    NASA Astrophysics Data System (ADS)

    Zheng, Yefeng; Georgescu, Bogdan; Barbu, Adrian; Scheuering, Michael; Comaniciu, Dorin

    2008-03-01

    Multi-chamber heart segmentation is a prerequisite for quantification of the cardiac function. In this paper, we propose an automatic heart chamber segmentation system. There are two closely related tasks to develop such a system: heart modeling and automatic model fitting to an unseen volume. The heart is a complicated non-rigid organ with four chambers and several major vessel trunks attached. A flexible and accurate model is necessary to capture the heart chamber shape at an appropriate level of details. In our four-chamber surface mesh model, the following two factors are considered and traded-off: 1) accuracy in anatomy and 2) easiness for both annotation and automatic detection. Important landmarks such as valves and cusp points on the interventricular septum are explicitly represented in our model. These landmarks can be detected reliably to guide the automatic model fitting process. We also propose two mechanisms, the rotation-axis based and parallel-slice based resampling methods, to establish mesh point correspondence, which is necessary to build a statistical shape model to enforce priori shape constraints in the model fitting procedure. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3D computed tomography (CT) volumes. Our approach is based on recent advances in learning discriminative object models and we exploit a large database of annotated CT volumes. We formulate the segmentation as a two step learning problem: anatomical structure localization and boundary delineation. A novel algorithm, Marginal Space Learning (MSL), is introduced to solve the 9-dimensional similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3D shape through learning-based boundary delineation. Extensive experiments demonstrate the efficiency and robustness of the proposed approach, comparing favorably to the state-of-the-art. 

  5. A novel colonic polyp volume segmentation method for computer tomographic colonography

    NASA Astrophysics Data System (ADS)

    Wang, Huafeng; Li, Lihong C.; Han, Hao; Song, Bowen; Peng, Hao; Wang, Yunhong; Wang, Lihua; Liang, Zhengrong

    2014-03-01

    Colorectal cancer is the third most common type of cancer, but it can be prevented by the detection and removal of precursor adenomatous polyps diagnosed by experts on computed tomographic colonography (CTC). During CTC diagnosis, the radiologist looks for colon polyps and assesses not only their size but also their malignancy. Segmenting polyp volumes from their complicated surroundings is therefore of much significance for CTC-based early diagnosis. Previously, polyp volumes were mainly obtained by manual or semi-automatic delineation by radiologists. As a result, deviations are hard to avoid, since polyps are usually small (6~9 mm) and radiologists' experience and knowledge vary from one to another. To achieve fully automatic polyp segmentation, we propose a new method based on a colon decomposition strategy. We evaluated our algorithm on both phantom and patient data. Experimental results demonstrate that our approach is capable of segmenting small polyps from their complicated growing background.

  6. Segmentation of brain image volumes using the data list management library.

    PubMed

    Román-Alonso, G; Jiménez-Alaniz, J R; Buenabad-Chávez, J; Castro-García, M A; Vargas-Rodríguez, A H

    2007-01-01

    The segmentation of head images is useful for detecting neuroanatomical structures and for following and quantifying the evolution of several brain lesions. 2D images correspond to brain slices; the more images are used, the higher the resolution obtained, but more processing power is required and parallelism becomes desirable. We present a new approach to segmentation of brain image volumes using DLML (Data List Management Library), a tool developed by our team. We organize the integer numbers identifying images into a list, and our DLML version processes them both in parallel and with dynamic load balancing, transparently to the programmer. We compare the performance of our DLML version to other typical parallel approaches developed with MPI (master-slave and static data distribution), using cluster configurations with 4-32 processors. PMID:18002398
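
    DLML itself is the authors' library, but the scheme it implements — a list of integer image identifiers handed out dynamically to whichever worker is free — can be illustrated with a standard-library thread pool (the per-slice function below is a hypothetical stand-in for the real segmenter):

```python
from concurrent.futures import ThreadPoolExecutor

def segment_slice(slice_id):
    # Stand-in for segmenting brain slice `slice_id` (hypothetical work).
    return slice_id, f"labels-for-slice-{slice_id}"

def segment_volume(slice_ids, n_workers=4):
    # Slice ids are dispatched one at a time as workers become free,
    # giving dynamic load balancing over the list of image identifiers.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return dict(pool.map(segment_slice, slice_ids))

results = segment_volume(range(32))
print(len(results))  # 32
```

    Faster workers automatically pick up more slices, which is the load-balancing behavior the abstract attributes to DLML.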

  7. Segmentation and quantitative analysis of individual cells in developmental tissues.

    PubMed

    Nandy, Kaustav; Kim, Jusub; McCullough, Dean P; McAuliffe, Matthew; Meaburn, Karen J; Yamaguchi, Terry P; Gudla, Prabhakar R; Lockett, Stephen J

    2014-01-01

    Image analysis is vital for extracting quantitative information from biological images and is used extensively, including investigations in developmental biology. The technique commences with the segmentation (delineation) of objects of interest from 2D images or 3D image stacks and is usually followed by the measurement and classification of the segmented objects. This chapter focuses on the segmentation task and here we explain the use of ImageJ, MIPAV (Medical Image Processing, Analysis, and Visualization), and VisSeg, three freely available software packages for this purpose. ImageJ and MIPAV are extremely versatile and can be used in diverse applications. VisSeg is a specialized tool for performing highly accurate and reliable 2D and 3D segmentation of objects such as cells and cell nuclei in images and stacks. PMID:24318825

  8. Segment-to-segment contact elements for modelling joint interfaces in finite element analysis

    NASA Astrophysics Data System (ADS)

    Mayer, M. H.; Gaul, L.

    2007-02-01

    This paper presents an efficient approach to model contact interfaces of joints in finite element analysis (FEA) with segment-to-segment contact elements such as thin layer or zero thickness elements. These elements originate from geomechanics and have recently been applied in modal analysis as an efficient way to define the contact stiffness of fixed joints for model updating. A big advantage of these elements is that, unlike in master-slave contact formulations, no global contact search algorithm is employed. Contact search algorithms are not necessary for modelling contact interfaces of fixed joints, since the interfaces are always in contact and restricted to small relative movements, which saves much computing time. We first give an introduction to the theory of segment-to-segment contact elements leading to zero thickness and thin layer elements. As a new application of zero thickness elements, we demonstrate the implementation of a structural contact damping model, derived from a Masing model, as a non-linear constitutive law for the contact element. This damping model takes into account the non-linear influence of frictional microslip in the contact interface of fixed joints. With this model we simulate the non-linear response of a bolted structure. This approach constitutes a new way to simulate multi-degree-of-freedom systems with structural joints and predict modal damping properties.

  9. Systematic Error in Hippocampal Volume Asymmetry Measurement is Minimal with a Manual Segmentation Protocol

    PubMed Central

    Rogers, Baxter P.; Sheffield, Julia M.; Luksik, Andrew S.; Heckers, Stephan

    2012-01-01

    Hemispheric asymmetry of hippocampal volume is a common finding that has biological relevance, including associations with dementia and cognitive performance. However, a recent study has reported the possibility of systematic error in measurements of hippocampal asymmetry by magnetic resonance volumetry. We manually traced the volumes of the anterior and posterior hippocampus in 40 healthy people to measure systematic error related to image orientation. We found a bias due to the side of the screen on which the hippocampus was viewed, such that hippocampal volume was larger when traced on the left side of the screen than when traced on the right (p = 0.05). However, this bias was smaller than the anatomical right > left asymmetry of the anterior hippocampus. We found right > left asymmetry of hippocampal volume regardless of image presentation (radiological versus neurological). We conclude that manual segmentation protocols can minimize the effect of image orientation in the study of hippocampal volume asymmetry, but our confirmation that such bias exists suggests strategies to avoid it in future studies. PMID:23248580
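
    A common way to express such left/right differences, which may differ in detail from the measure used in the study, is a normalized asymmetry index:

```python
def asymmetry_index(left_vol, right_vol):
    """Signed asymmetry: positive means right > left.

    One common normalized form is (R - L) / ((R + L) / 2).
    """
    return (right_vol - left_vol) / ((right_vol + left_vol) / 2.0)

# Illustrative volumes (not the study data):
# right anterior hippocampus 1.05 cm^3, left 1.00 cm^3
print(round(asymmetry_index(1.00, 1.05), 4))  # 0.0488
```

    Normalizing by the mean volume makes the index comparable across subjects with different absolute hippocampal sizes.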

  10. Segmentation and learning in the quantitative analysis of microscopy images

    NASA Astrophysics Data System (ADS)

    Ruggiero, Christy; Ross, Amy; Porter, Reid

    2015-02-01

    In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing, and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. Recently, however, there has been increasing interest in using machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.

  11. Automatic delineation of tumor volumes by co-segmentation of combined PET/MR data

    NASA Astrophysics Data System (ADS)

    Leibfarth, S.; Eckert, F.; Welz, S.; Siegel, C.; Schmidt, H.; Schwenzer, N.; Zips, D.; Thorwarth, D.

    2015-07-01

    Combined PET/MRI may be highly beneficial for radiotherapy treatment planning in terms of tumor delineation and characterization. To standardize tumor volume delineation, an automatic algorithm for the co-segmentation of head and neck (HN) tumors based on PET/MR data was developed. Ten HN patient datasets acquired in a combined PET/MR system were available for this study. The proposed algorithm uses both the anatomical T2-weighted MR and FDG-PET data. For both imaging modalities tumor probability maps were derived, assigning each voxel a probability of being cancerous based on its signal intensity. A combination of these maps was subsequently segmented using a threshold level set algorithm. To validate the method, tumor delineations from three radiation oncologists were available. Inter-observer variabilities and variabilities between the algorithm and each observer were quantified by means of the Dice similarity index and a distance measure. Inter-observer variabilities and variabilities between observers and algorithm were found to be comparable, suggesting that the proposed algorithm is adequate for PET/MR co-segmentation. Moreover, taking into account combined PET/MR data resulted in more consistent tumor delineations compared to MR information only.
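
    The Dice similarity index used for validation is a standard overlap measure and can be computed directly from two binary delineations:

```python
import numpy as np

def dice_index(mask_a, mask_b):
    """Dice similarity: 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True   # 4 voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True   # 6 voxels, 4 shared
print(dice_index(a, b))  # 0.8
```

    A value of 1 indicates identical delineations and 0 indicates no overlap, which is why it is a natural measure of inter-observer (and observer-algorithm) agreement.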

  12. Chest-wall segmentation in automated 3D breast ultrasound images using thoracic volume classification

    NASA Astrophysics Data System (ADS)

    Tan, Tao; van Zelst, Jan; Zhang, Wei; Mann, Ritse M.; Platel, Bram; Karssemeijer, Nico

    2014-03-01

    Computer-aided detection (CAD) systems are expected to improve the effectiveness and efficiency of radiologists in reading automated 3D breast ultrasound (ABUS) images. One challenging task in developing CAD systems is reducing the large number of false positives, many of which originate from acoustic shadowing caused by ribs. Determining the location of the chest wall in ABUS is therefore necessary in CAD systems to remove these false positives. Additionally, it can be used as an anatomical landmark for inter- and intra-modal image registration. In this work, we extended our previously developed chest-wall segmentation method, which fits a cylinder to automatically detected rib-surface points, by minimizing a cost function that adds a region cost term computed from a thoracic volume classifier to improve segmentation accuracy. We examined the performance on a dataset of 52 images on which our previously developed method fails. Using the region-based cost, the average mean distance of the annotated points to the segmented chest wall decreased from 7.57±2.76 mm to 6.22±2.86 mm.

  13. Leaf image segmentation method based on multifractal detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Li, Jin-Wei; Shi, Wen; Liao, Gui-Ping

    2013-12-01

    To identify singular regions of crop leaves affected by diseases, an image segmentation method based on multifractal detrended fluctuation analysis (MF-DFA) is proposed. In the proposed method, we first define a new texture descriptor based on MF-DFA: the local generalized Hurst exponent, denoted LHq. Then, the box-counting dimension f(LHq) is calculated for sub-images constituted by the LHq of pixels from a specific region, yielding a series of f(LHq) values for the different regions. Finally, the singular regions are segmented according to the corresponding f(LHq). Images of corn leaves affected by six kinds of diseases are tested in our experiments. The proposed method is compared with two other segmentation methods, one based on the multifractal spectrum and one on fuzzy C-means clustering. The comparison results demonstrate that the proposed method recognizes lesion regions more effectively and provides more robust segmentations.
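
    The box-counting dimension at the heart of the descriptor can be sketched as a plain box count over a binary mask (this is the generic estimator, not the paper's full MF-DFA pipeline):

```python
import numpy as np

def box_counting_dimension(mask, box_sizes=(1, 2, 4, 8)):
    """Estimate the box-counting dimension of a binary 2D mask by
    regressing log N(s) against log(1/s), where N(s) is the number of
    boxes of side s containing at least one foreground pixel."""
    mask = np.asarray(mask, dtype=bool)
    h, w = mask.shape
    logs, logn = [], []
    for s in box_sizes:
        count = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if mask[i:i + s, j:j + s].any():
                    count += 1
        logs.append(np.log(1.0 / s))
        logn.append(np.log(count))
    slope, _ = np.polyfit(logs, logn, 1)
    return slope

# A completely filled region should be close to dimension 2.
square = np.ones((16, 16), dtype=bool)
print(round(box_counting_dimension(square), 2))  # 2.0
```

    Regions whose estimated dimension deviates from that of healthy leaf texture are the "singular" candidates the method segments.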

  14. Bayesian Analysis and Segmentation of Multichannel Image Sequences

    NASA Astrophysics Data System (ADS)

    Chang, Michael Ming Hsin

    This thesis is concerned with the segmentation and analysis of multichannel image sequence data. In particular, we use maximum a posteriori probability (MAP) criterion and Gibbs random fields (GRF) to formulate the problems. We start by reviewing the significance of MAP estimation with GRF priors and study the feasibility of various optimization methods for implementing the MAP estimator. We proceed to investigate three areas where image data and parameter estimates are present in multichannels, multiframes, and interrelated in complicated manners. These areas of study include color image segmentation, multislice MR image segmentation, and optical flow estimation and segmentation in multiframe temporal sequences. Besides developing novel algorithms in each of these areas, we demonstrate how to exploit the potential of MAP estimation and GRFs, and we propose practical and efficient implementations. Illustrative examples and relevant experimental results are included.

  15. Whole-body and segmental muscle volume are associated with ball velocity in high school baseball pitchers

    PubMed Central

    Yamada, Yosuke; Yamashita, Daichi; Yamamoto, Shinji; Matsui, Tomoyuki; Seo, Kazuya; Azuma, Yoshikazu; Kida, Yoshikazu; Morihara, Toru; Kimura, Misaka

    2013-01-01

    The aim of the study was to examine the relationship between pitching ball velocity and segmental (trunk, upper arm, forearm, upper leg, and lower leg) and whole-body muscle volume (MV) in high school baseball pitchers. Forty-seven male high school pitchers (40 right-handers and seven left-handers; age, 16.2 ± 0.7 years; stature, 173.6 ± 4.9 cm; mass, 65.0 ± 6.8 kg; years of baseball experience, 7.5 ± 1.8 years; maximum pitching ball velocity, 119.0 ± 9.0 km/hour) participated in the study. Segmental and whole-body MV were measured using segmental bioelectrical impedance analysis. Maximum ball velocity was measured with a sports radar gun. The MV of the dominant arm was significantly larger than the MV of the non-dominant arm (P < 0.001). There was no difference in MV between the dominant and non-dominant legs. Whole-body MV was significantly correlated with ball velocity (r = 0.412, P < 0.01). Trunk MV was not correlated with ball velocity, but the MV of both lower legs and of the dominant upper leg, upper arm, and forearm were significantly correlated with ball velocity (P < 0.05). The results were not affected by age or years of baseball experience. Whole-body and segmental MV are associated with ball velocity in high school baseball pitchers. However, the contribution of muscle mass to pitching ball velocity is limited; other fundamental factors (i.e., pitching skill) are also important. PMID:24379713
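
    The reported associations are Pearson correlation coefficients; as a reminder of the computation (the sample values below are purely illustrative, not the study data):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical example values:
mv = [19.5, 21.0, 22.3, 24.1, 25.0]   # whole-body muscle volume, L
velocity = [108, 115, 112, 124, 126]  # ball velocity, km/h
print(round(pearson_r(mv, velocity), 3))  # 0.933
```

    An r of 0.412, as reported for whole-body MV, corresponds to muscle volume explaining only about 17% of the variance in ball velocity, consistent with the authors' caveat that other factors matter.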

  17. MR volume segmentation of gray matter and white matter using manual thresholding: Dependence on image brightness

    SciTech Connect

    Harris, G.J.; Barta, P.E.; Peng, L.W.; Lee, S.; Brettschneider, P.D.; Shah, A.; Henderer, J.D.; Schlaepfer, T.E.; Pearlson, G.D. (Tufts Univ. School of Medicine, Boston, MA)

    1994-02-01

    To describe a quantitative MR imaging segmentation method for determination of the volume of cerebrospinal fluid, gray matter, and white matter in living human brain, and to determine the method's reliability. We developed a computer method that allows rapid, user-friendly determination of cerebrospinal fluid, gray matter, and white matter volumes in a reliable manner, both globally and regionally. This method was applied to a large control population (N = 57). Initially, image brightness had a strong correlation with the gray-white ratio (r = .78). Bright images tended to overestimate, dim images to underestimate gray matter volumes. This artifact was corrected for by offsetting each image to an approximately equal brightness. After brightness correction, gray-white ratio was correlated with age (r = -.35). The age-dependent gray-white ratio was similar to that for the same age range in a prior neuropathology report. Interrater reliability was high (.93 intraclass correlation coefficient). The method described here for gray matter, white matter, and cerebrospinal fluid volume calculation is reliable and valid. A correction method for an artifact related to image brightness was developed. 12 refs., 3 figs.

  18. Analysis of the Segmented Features of Indicator of Mine Presence

    NASA Astrophysics Data System (ADS)

    Krtalic, A.

    2016-06-01

    The aim of this research is to investigate the possibility of interactive, semi-automatic interpretation of digital images in humanitarian demining, for the purpose of detecting and extracting (strong) indicators of mine presence visible in the images, according to parameters of general geometric shape rather than radiometric characteristics. For that purpose, objects are created by segmentation. The segments represent, as well as possible, the observed indicators and the objects that surround them (for analysis of the degree of discrimination of objects from the environment). These indicators cover certain characteristic surfaces, which are determined by segmenting the digital image. The sets of pixels that form such surfaces in the images have specific geometric features, so the features of the segments can be analyzed at the object level rather than the pixel level. Factor analysis of the geometric parameters of these segments is performed in order to identify parameters that can be distinguished from the other parameters by their geometric features. The factor analysis was carried out in two different ways: according to the characteristics of the general geometric shape, and according to the type of strong indicator of mine presence. The continuation of this research is the implementation of automatic extraction of indicators of mine presence according to the results presented in this paper.

  19. Segment clustering methodology for unsupervised Holter recordings analysis

    NASA Astrophysics Data System (ADS)

    Rodríguez-Sotelo, Jose Luis; Peluffo-Ordoñez, Diego; Castellanos Dominguez, German

    2015-01-01

    Cardiac arrhythmia analysis on Holter recordings is an important issue in clinical settings; however, it implicitly involves other problems related to the large amount of unlabelled data, which implies a high computational cost. In this work an unsupervised, segment-based methodology is presented, which consists of dividing the raw data into a balanced number of segments in order to identify fiducial points, then characterizing and clustering the heartbeats in each segment separately. The resulting clusters are merged or split according to an assumed criterion of homogeneity. This framework reduces the high computational cost of Holter analysis, making its implementation feasible for future real-time applications. The performance of the method is measured on the records of the MIT/BIH arrhythmia database and, taking advantage of the database labels, achieves high sensitivity and specificity for the broad range of heartbeat types recommended by the AAMI.

  20. 4-D segmentation and normalization of 3He MR images for intrasubject assessment of ventilated lung volumes

    NASA Astrophysics Data System (ADS)

    Contrella, Benjamin; Tustison, Nicholas J.; Altes, Talissa A.; Avants, Brian B.; Mugler, John P., III; de Lange, Eduard E.

    2012-03-01

    Although 3He MRI permits compelling visualization of the pulmonary air spaces, quantitation of absolute ventilation is difficult due to confounds such as field inhomogeneity and relative intensity differences between image acquisitions, the latter complicating longitudinal investigations of ventilation variation with respiratory alterations. To address these difficulties, we present a 4-D segmentation and normalization approach for intra-subject quantitative analysis of hyperpolarized 3He lung MRI. After normalization, which combines bias correction and relative intensity scaling between longitudinal data, partitioning of the lung volume time series is performed by iterating between modeling the combined intensity histogram as a Gaussian mixture model and modulating the spatially heterogeneous tissue class assignments through Markov random field modeling. The algorithm was evaluated retrospectively on a cohort of 10 asthmatics aged 19-25 years in whom spirometry and 3He MR ventilation images were acquired both before and after respiratory exacerbation by a bronchoconstricting agent (methacholine). Acquisition was repeated under the same conditions 7 to 467 days (mean +/- standard deviation: 185 +/- 37.2) later. Several techniques were evaluated for matching intensities between the pre- and post-methacholine images, with 95th-percentile histogram matching demonstrating superior correlations with spirometry measures. Subsequent analysis evaluated segmentation parameters for assessing ventilation change in this cohort. Current findings also support previous research that areas of poor ventilation in response to bronchoconstriction are relatively consistent over time.
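
    One simple form of the intensity matching evaluated here, scaling a follow-up scan so its 95th-percentile value agrees with the baseline scan, can be sketched as follows (the data below are synthetic, not the study's acquisitions):

```python
import numpy as np

def match_95th_percentile(moving, reference):
    """Scale `moving` intensities so its 95th percentile equals that of
    `reference` — a simple relative intensity normalization between
    longitudinal scans."""
    p_mov = np.percentile(moving, 95)
    p_ref = np.percentile(reference, 95)
    return moving * (p_ref / p_mov)

rng = np.random.default_rng(0)
pre = rng.gamma(2.0, 50.0, size=10000)          # synthetic baseline signal
post = 0.6 * rng.gamma(2.0, 50.0, size=10000)   # dimmer follow-up scan
post_norm = match_95th_percentile(post, pre)
print(np.isclose(np.percentile(post_norm, 95), np.percentile(pre, 95)))  # True
```

    Anchoring at a high percentile rather than the maximum makes the scaling robust to a few extreme voxels.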

  1. A fuzzy, nonparametric segmentation framework for DTI and MRI analysis.

    PubMed

    Awate, Suyash P; Gee, James C

    2007-01-01

    This paper presents a novel statistical fuzzy-segmentation method for diffusion tensor (DT) images and magnetic resonance (MR) images. Typical fuzzy-segmentation schemes, e.g. those based on fuzzy-C-means (FCM), incorporate Gaussian class models which are inherently biased towards ellipsoidal clusters. Fiber bundles in DT images, however, comprise tensors that can inherently lie on more-complex manifolds. Unlike FCM-based schemes, the proposed method relies on modeling the manifolds underlying the classes by incorporating nonparametric data-driven statistical models. It produces an optimal fuzzy segmentation by maximizing a novel information-theoretic energy in a Markov-random-field framework. For DT images, the paper describes a consistent statistical technique for nonparametric modeling in Riemannian DT spaces that incorporates two very recent works. In this way, the proposed method provides uncertainties in the segmentation decisions, which stem from imaging artifacts including noise, partial voluming, and inhomogeneity. The paper shows results on synthetic and real, DT as well as MR images. PMID:17633708
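
    For contrast with the proposed nonparametric approach, the fuzzy-C-means membership update underlying the FCM baselines the paper criticizes can be shown in a minimal 1-D sketch:

```python
import numpy as np

def fcm_memberships(data, centers, m=2.0):
    """FCM membership update: u_ik ∝ 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
    Shown only to illustrate the FCM baseline; the paper's method
    replaces these ellipsoid-biased class models with nonparametric ones."""
    # Distances from each sample to each center (epsilon avoids div-by-zero).
    d = np.abs(data[:, None] - centers[None, :]) + 1e-12
    inv = (1.0 / d) ** (2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

data = np.array([0.0, 0.4, 1.0])
centers = np.array([0.0, 1.0])
u = fcm_memberships(data, centers)
print(u.shape)        # (3, 2)
print(u.sum(axis=1))  # each row sums to 1
```

    The soft memberships are what give fuzzy segmentation its per-voxel uncertainty; the paper keeps that property while dropping the Gaussian class assumption.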

  2. Microscopy image segmentation tool: Robust image data analysis

    SciTech Connect

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-03-15

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  3. Volume change of segments II and III of the liver after gastrectomy in patients with gastric cancer

    PubMed Central

    Ozutemiz, Can; Obuz, Funda; Taylan, Abdullah; Atila, Koray; Bora, Seymen; Ellidokuz, Hulya

    2016-01-01

    PURPOSE We aimed to evaluate the relationship between gastrectomy and the volume of liver segments II and III in patients with gastric cancer. METHODS Computed tomography images of 54 patients who underwent curative gastrectomy for gastric adenocarcinoma were retrospectively evaluated by two blinded observers. Volumes of the total liver and segments II and III were measured. The difference between preoperative and postoperative volume measurements was compared. RESULTS Total liver volumes measured by both observers in the preoperative and postoperative scans were similar (P > 0.05). High correlation was found between both observers (preoperative r=0.99; postoperative r=0.98). Total liver volumes showed a mean reduction of 13.4% after gastrectomy (P = 0.977). The mean volume of segments II and III showed similar decrease in measurements of both observers (38.4% vs. 36.4%, P = 0.363); the correlation between the observers were high (preoperative r=0.97, P < 0.001; postoperative r=0.99, P < 0.001). Volume decrease in the rest of the liver was not different between the observers (8.2% vs. 9.1%, P = 0.388). Time had poor correlation with volume change of segments II and III and the total liver for each observer (observer 1, rseg2/3=0.32, rtotal=0.13; observer 2, rseg2/3=0.37, rtotal=0.16). CONCLUSION Segments II and III of the liver showed significant atrophy compared with the rest of the liver and the total liver after gastrectomy. Volume reduction had poor correlation with time. PMID:26899148
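
    The percentage reductions reported are straightforward to compute from paired pre- and postoperative measurements (the volumes below are illustrative, not patient data):

```python
def percent_reduction(pre_vol, post_vol):
    """Percent volume reduction from a preoperative to a postoperative scan."""
    return 100.0 * (pre_vol - post_vol) / pre_vol

# Hypothetical segment II+III volumes in mL:
print(percent_reduction(250.0, 155.0))  # 38.0
```

    A segment-level reduction near 38% against a total-liver reduction near 13% is what marks segments II and III as disproportionately atrophic.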

  4. A comprehensive segmentation analysis of crude oil market based on time irreversibility

    NASA Astrophysics Data System (ADS)

    Xia, Jianan; Shang, Pengjian; Lu, Dan; Yin, Yi

    2016-05-01

    In this paper, we perform a comprehensive entropic segmentation analysis of crude oil future prices from 1983 to 2014, using the Jensen-Shannon divergence as the statistical distance between segments, and analyze the results from the original series S and the series beginning in 1986 (marked S∗) to find common segments that have the same boundaries. We then apply time irreversibility analysis to each segment to divide all segments into two groups according to their degree of asymmetry. Based on the temporal distribution of the common segments and the high-asymmetry segments, we find that these two types of segments appear alternately and basically do not overlap in the daily group, while the common portions are also high-asymmetry segments in the weekly group. In addition, the temporal distribution of the common segments lies fairly close in time to crises, wars, and other events, because the impact of severe events on the oil price makes these common segments quite different from their adjacent segments. The common segments can be confirmed in the daily or weekly group series owing to the large divergence between common segments and their neighbors, while the identification of high-asymmetry segments helps identify the segments that were not badly affected by the events and can recover to steady states automatically. Finally, we rearrange the segments by merging connected common segments or high-asymmetry segments into a single segment, and conjoin the connected segments that are neither common nor highly asymmetric.
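
    The Jensen-Shannon divergence used as the statistical distance between segments is symmetric and bounded, and for two discrete distributions it can be computed as:

```python
import math

def jensen_shannon(p, q):
    """JS divergence between two discrete distributions (natural log):
    JS(p, q) = [KL(p || m) + KL(q || m)] / 2 with m = (p + q) / 2."""
    def kl(a, b):
        return sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2.0 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.5, 0.5]
q = [0.9, 0.1]
print(round(jensen_shannon(p, q), 4))  # 0.1017
```

    Unlike the raw Kullback-Leibler divergence, JS is finite even when one distribution assigns zero probability where the other does not, which makes it a convenient distance for comparing empirical segment histograms.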

  5. Atrophy of the Cerebellar Vermis in Essential Tremor: Segmental Volumetric MRI Analysis.

    PubMed

    Shin, Hyeeun; Lee, Dong-Kyun; Lee, Jong-Min; Huh, Young-Eun; Youn, Jinyoung; Louis, Elan D; Cho, Jin Whan

    2016-04-01

    Postmortem studies of essential tremor (ET) have demonstrated the presence of degenerative changes in the cerebellum, and imaging studies have examined related structural changes in the brain. However, their results have not been completely consistent and the number of imaging studies has been limited. We aimed to study cerebellar involvement in ET using MRI segmental volumetric analysis. In addition, a unique feature of this study was that we stratified ET patients into subtypes based on the clinical presence of cerebellar signs and compared their MRI findings. Thirty-nine ET patients and 36 normal healthy controls, matched for age and sex, were enrolled. Cerebellar signs in ET patients were assessed using the clinical tremor rating scale and International Cooperative Ataxia Rating Scale. ET patients were divided into two groups: patients with cerebellar signs (cerebellar-ET) and those without (classic-ET). MRI volumetry was performed using CIVET pipeline software. Data on whole and segmented cerebellar volumes were analyzed using SPSS. While there was a trend for whole cerebellar volume to decrease from controls to classic-ET to cerebellar-ET, this trend was not significant. The volume of several contiguous segments of the cerebellar vermis was reduced in ET patients versus controls. Furthermore, these vermis volumes were reduced in the cerebellar-ET group versus the classic-ET group. The volume of several adjacent segments of the cerebellar vermis was reduced in ET. This effect was more evident in ET patients with clinical signs of cerebellar dysfunction. The presence of tissue atrophy suggests that ET might be a neurodegenerative disease. PMID:26062905

  6. Linear test bed. Volume 1: Test bed no. 1. [aerospike test bed with segmented combustor

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The objective of the Linear Test Bed program was to design, fabricate, and evaluation-test an advanced aerospike test bed that employed the segmented combustor concept. The system is designated a linear aerospike system and consists of a thrust chamber assembly, a power package, and a thrust frame. It was designed as an experimental system to demonstrate the feasibility of the linear aerospike-segmented combustor concept. The overall dimensions are 120 inches long by 120 inches wide by 96 inches high. The propellants are liquid oxygen/liquid hydrogen. The system was designed to operate at 1200-psia chamber pressure and a mixture ratio of 5.5; at the design conditions, the sea-level thrust is 200,000 pounds. The complete program, including concept selection, design, fabrication, component test, system test, supporting analysis, and posttest hardware inspection, is described.

  7. Integrated multidisciplinary analysis of segmented reflector telescopes

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.; Needels, Laura

    1992-01-01

    The present multidisciplinary telescope-analysis approach, which encompasses thermal, structural, control and optical considerations, is illustrated for the case of an IR telescope in LEO; attention is given to end-to-end evaluations of the effects of mechanical disturbances and thermal gradients in measures of optical performance. Both geometric ray-tracing and surface-to-surface diffraction approximations are used in the telescope's optical model. Also noted is the role played by NASA-JPL's Integrated Modeling of Advanced Optical Systems computation tool, in view of numerical samples.

  8. Analysis of recent segmental duplications in the bovine genome

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Duplicated sequences are an important source of gene innovation and structural variation within mammalian genomes. We describe the first systematic and genome-wide analysis of segmental duplications in the modern domesticated cattle (Bos taurus). Using two distinct computational analyses, we estimat...

  9. 3D surface analysis and classification in neuroimaging segmentation.

    PubMed

    Zagar, Martin; Mlinarić, Hrvoje; Knezović, Josip

    2011-06-01

    This work presents new algorithms for 3D edge and corner detection used in surface extraction, together with a new concept of image segmentation in neuroimaging based on multidimensional shape analysis and classification. We propose using the NIfTI standard to describe input data, which enables interoperability with, and enhancement of, existing computing tools widely used in neuroimaging research. In the methods section we present our newly developed algorithm for 3D edge and corner detection, together with an algorithm for estimating local 3D shape. The surface of the estimated shape is analyzed and segmented according to kernel shapes. PMID:21755723

  10. Fingerprint image segmentation based on multi-features histogram analysis

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Zhang, Youguang

    2007-11-01

    An effective fingerprint image segmentation method based on multi-feature histogram analysis is presented. We extract a new feature and combine it with three existing features to segment fingerprints. Two of the four features are reciprocals of each other, and each is related to one of the remaining two, so the features are divided into two groups. The histograms of these two features are computed to determine which feature group is used to segment the target fingerprint. The features can also divide fingerprints into high- and low-quality classes. Experimental results show that the algorithm separates foreground from background effectively at low computational cost, reduces spurious minutiae, and improves the performance of AFIS.
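As an illustration of the blockwise feature-plus-histogram-threshold segmentation described above, the sketch below separates foreground from background using gray-level variance as a stand-in feature (the paper's four specific features and its histogram-based group selection are not given in the abstract, so this is only an assumed, minimal analogue):

```python
import numpy as np

def segment_blocks(img, block=16, thresh=None):
    """Blockwise foreground/background segmentation sketch.

    Gray-level variance per block is used as an illustrative feature;
    ridge areas have high local variance, smooth background has low.
    """
    h, w = img.shape
    hb, wb = h // block, w // block
    var = np.empty((hb, wb))
    for i in range(hb):
        for j in range(wb):
            patch = img[i * block:(i + 1) * block, j * block:(j + 1) * block]
            var[i, j] = patch.var()
    if thresh is None:
        # crude histogram-derived threshold: midpoint between the
        # lowest (background) and highest (ridge) block variances
        thresh = 0.5 * (var.min() + var.max())
    return var > thresh  # True = foreground (ridge area)
```

A real system would pick the threshold from the valley between the two histogram modes rather than the midpoint used here.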

  11. Three-dimensional segmentation of pulmonary artery volume from thoracic computed tomography imaging

    NASA Astrophysics Data System (ADS)

    Lindenmaier, Tamas J.; Sheikh, Khadija; Bluemke, Emma; Gyacskov, Igor; Mura, Marco; Licskai, Christopher; Mielniczuk, Lisa; Fenster, Aaron; Cunningham, Ian A.; Parraga, Grace

    2015-03-01

    Chronic obstructive pulmonary disease (COPD) is a major contributor to hospitalization and healthcare costs in North America. While the hallmark of COPD is airflow limitation, it is also associated with abnormalities of the cardiovascular system. Enlargement of the pulmonary artery (PA) is a morphological marker of pulmonary hypertension and was previously shown to predict acute exacerbations using a one-dimensional diameter measurement of the main PA. We hypothesized that a three-dimensional (3D) quantification of PA size would be more sensitive than 1D methods and would encompass morphological changes along the entire central pulmonary artery. Hence, we developed a 3D measurement of the main (MPA), left (LPA), and right (RPA) pulmonary arteries as well as total PA volume (TPAV) from thoracic CT images. This approach segments the pulmonary vessels in cross-section for the MPA, LPA, and RPA to provide an estimate of their volumes. Three observers performed five repeated measurements for 15 ex-smokers with ≥10 pack-years, randomly identified from a larger dataset of 199 patients. There was strong agreement (r2=0.76) between PA volume and PA diameter measurements, the latter serving as a gold standard. Observer measurements were strongly correlated, and coefficients of variation for observer 1 (MPA: 2%, LPA: 3%, RPA: 2%, TPAV: 2%) were not significantly different from those of observers 2 and 3. In conclusion, we generated manual 3D pulmonary artery volume measurements from thoracic CT images that can be performed with high reproducibility. Future work will involve automation for implementation in clinical workflows.
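The cross-section-based volume estimate and the observer coefficient of variation reported above can be sketched as follows (a Cavalieri-style approximation assuming uniform slice spacing; the authors' exact summation scheme is not stated in the abstract):

```python
import numpy as np

def vessel_volume(areas_mm2, spacing_mm):
    """Approximate vessel volume as the sum of cross-sectional areas
    times the inter-slice spacing (simple Cavalieri-style estimate)."""
    return float(np.sum(areas_mm2) * spacing_mm)

def coefficient_of_variation(measurements):
    """CV (%) of repeated measurements, as used for observer
    repeatability (sample standard deviation over the mean)."""
    m = np.asarray(measurements, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()
```

For example, ten 100 mm^2 cross-sections at 1 mm spacing give a 1000 mm^3 (1 ml) volume estimate.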

  12. Salted and preserved duck eggs: a consumer market segmentation analysis.

    PubMed

    Arthur, Jennifer; Wiseman, Kelleen; Cheng, K M

    2015-08-01

    The combination of increasing ethnic diversity in North America and growing consumer support for local food products may present opportunities for local producers and processors in the ethnic foods category. Our study examined the ethnic Chinese (pop. 402,000) market for salted and preserved duck eggs in Vancouver, British Columbia (BC), Canada. The objective was to develop a segmentation model from survey data to categorize consumer groups based on their attitudes and the importance they placed on product attributes. We further used post-segmentation acculturation scores, demographics, and buyer behaviors to define these groups. Data were gathered via a survey of randomly selected Vancouver households with Chinese surnames (n = 410), targeting the adult responsible for grocery shopping. Results from principal component analysis and a 2-step cluster analysis suggest the existence of 4 market segments, described as Enthusiasts, Potentialists, Pragmatists, and Health Skeptics (salted duck eggs) or Neutralists (preserved duck eggs). Kruskal-Wallis tests and post hoc Mann-Whitney tests found significant differences between segments in attitudes and in the importance placed on product characteristics. Health Skeptics, preserved egg Potentialists, and Pragmatists of both egg products were significantly more biased against Chinese imports than the other segments. Except for Enthusiasts, segments disagreed that eggs are 'healthy products'. Preserved egg Enthusiasts had a significantly lower acculturation score (AS) than all other segments, while salted egg Enthusiasts had a lower AS than Health Skeptics. All segments rated "produced in BC, not mainland China" in the "neutral to very likely" range for increasing their satisfaction with the eggs. Results also indicate that buyers of each egg type are willing to pay an average premium of at least 10% for BC-produced products versus imports, all other characteristics being equal. Overall

  13. Small rural hospitals: an example of market segmentation analysis.

    PubMed

    Mainous, A G; Shelby, R L

    1991-01-01

    In recent years, market segmentation analysis has shown increased popularity among health care marketers, although marketers tend to focus upon hospitals as sellers. The present analysis suggests that there is merit to viewing hospitals as a market of consumers. Employing a random sample of 741 small rural hospitals, the present investigation sought to determine, through the use of segmentation analysis, the variables associated with hospital success (occupancy). The results of a discriminant analysis yielded a model which classifies hospitals with a high degree of predictive accuracy. Successful hospitals have more beds and employees, and are generally larger and have more resources. However, there was no significant relationship between organizational success and number of services offered by the institution. PMID:10111266

  14. Documented Safety Analysis for the B695 Segment

    SciTech Connect

    Laycak, D

    2008-09-11

    This Documented Safety Analysis (DSA) was prepared for the Lawrence Livermore National Laboratory (LLNL) Building 695 (B695) Segment of the Decontamination and Waste Treatment Facility (DWTF). The report provides comprehensive information on design and operations, including safety programs and safety structures, systems, and components, to address the potential process-related hazards, natural phenomena, and external hazards that can affect the public, facility workers, and the environment. Consideration is given to all modes of operation, including the potential for both equipment failure and human error. The facilities known collectively as the DWTF are used by LLNL's Radioactive and Hazardous Waste Management (RHWM) Division to store and treat regulated wastes generated at LLNL. RHWM generally processes low-level radioactive waste with no, or extremely low, concentrations of transuranics (e.g., much less than 100 nCi/g). Wastes processed often contain only depleted uranium and beta- and gamma-emitting nuclides, e.g., 90Sr, 137Cs, or 3H. The mission of the B695 Segment centers on container storage, lab-packing, repacking, overpacking, bulking, sampling, waste transfer, and waste treatment. The B695 Segment is used for storage of radioactive waste (including transuranic and low-level), hazardous, nonhazardous, mixed, and other waste. Storage of hazardous and mixed waste in B695 Segment facilities is in compliance with the Resource Conservation and Recovery Act (RCRA). LLNL is operated by Lawrence Livermore National Security, LLC, for the Department of Energy (DOE). The B695 Segment is operated by the RHWM Division of LLNL. Many operations in the B695 Segment are performed under an RCRA operation plan, similar to commercial treatment operations with best demonstrated available technologies. The buildings of the B695 Segment were designed and built with such operations in mind, using proven building systems.

  15. Extracellular and intracellular volume variations during postural change measured by segmental and wrist-ankle bioimpedance spectroscopy.

    PubMed

    Fenech, Marianne; Jaffrin, Michel Y

    2004-01-01

    Extracellular (ECW) and intracellular (ICW) volumes were measured using both segmental and wrist-ankle (W-A) bioimpedance spectroscopy (5-1000 kHz) in 15 healthy subjects (7 men, 8 women). In the first protocol, the subject, after sitting for 30 min, lay supine for at least 30 min. In the second protocol, the subject, who had been supine for 1 h, sat up in bed for 10 min and returned to the supine position for another hour. Segmental ECW and ICW resistances of the legs, arms, and trunk were measured by placing four voltage electrodes on the wrist, shoulder, top of the thigh, and ankle, and using Hanai's conductivity theory. W-A resistances were found to be very close to the sum of the segmental resistances. When switching from sitting to supine (protocol 1), the mean ECW resistance of the leg increased by 18.2%, and that of the arm and of W-A by 12.4%; trunk resistance also increased, but not significantly (4.8%). The corresponding increases in ICW resistance were smaller for the legs (3.7%) and arm (-0.7%) but larger for the trunk (21.4%). Total body ECW volumes from segmental measurements were in good agreement with W-A values and the Watson anthropomorphic correlation. The decrease in total ECW volume when supine calculated from segmental resistances (0.79 l) was less than the W-A value (1.12 l). Total ICW volume reductions were 3.4% (segmental) and 3.8% (W-A). Tests of protocol 2 confirmed that resistance and fluid volume values were not affected by a temporary position change. PMID:14723506

  16. Accurate airway segmentation based on intensity structure analysis and graph-cut

    NASA Astrophysics Data System (ADS)

    Meng, Qier; Kitasaka, Takayuki; Nimura, Yukitaka; Oda, Masahiro; Mori, Kensaku

    2016-03-01

    This paper presents a novel airway segmentation method based on intensity structure analysis and graph-cut. Airway segmentation is an important step in analyzing chest CT volumes for computerized lung cancer detection, emphysema diagnosis, asthma diagnosis, and pre- and intra-operative bronchoscope navigation. However, obtaining a complete 3-D airway tree structure from a CT volume is quite challenging. Several researchers have proposed automated algorithms based mainly on region growing and machine learning techniques; however, these methods fail to detect peripheral bronchial branches and produce a large amount of leakage. This paper presents a novel approach that permits more accurate extraction of complex bronchial airway regions. Our method is composed of three steps. First, Hessian analysis is used to enhance line-like structures in the CT volume, and a multiscale cavity-enhancement filter is then employed to detect cavity-like structures in the enhanced result. Second, a support vector machine (SVM) classifier is used to remove the false-positive regions generated by the filters. Finally, the graph-cut algorithm connects all of the candidate voxels to form an integrated airway tree. We applied this method to sixteen 3D chest CT volumes. The results showed that the branch detection rate of this method reaches about 77.7% without leakage into the lung parenchyma.
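The Hessian-based line enhancement in the first step can be illustrated in miniature. The sketch below is a simplified 2D analogue (the paper works on 3D CT volumes with Gaussian-smoothed, multiscale Hessians; here raw finite differences on a 2D image are used for brevity):

```python
import numpy as np

def line_enhance_2d(img):
    """Enhance bright curvilinear structures via Hessian eigenvalues.

    A bright line has one strongly negative eigenvalue (high curvature
    across the line) and one near-zero eigenvalue (flat along it)."""
    gy, gx = np.gradient(img)       # first derivatives (rows, cols)
    hyy, hyx = np.gradient(gy)      # second derivatives of gy
    hxy, hxx = np.gradient(gx)      # second derivatives of gx
    # eigenvalues of the per-pixel 2x2 Hessian [[hxx, hxy], [hxy, hyy]]
    tr = hxx + hyy
    det = hxx * hyy - hxy * hyx
    disc = np.sqrt(np.maximum(tr ** 2 / 4.0 - det, 0.0))
    lam_min = tr / 2.0 - disc
    # respond only where the smaller eigenvalue is negative (bright line)
    return np.where(lam_min < 0.0, -lam_min, 0.0)
```

In the 3D case the same eigenvalue analysis is applied to the 3x3 Hessian, with tube-like voxels having two strongly negative eigenvalues.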

  17. An automatic method of brain tumor segmentation from MRI volume based on the symmetry of brain and level set method

    NASA Astrophysics Data System (ADS)

    Li, Xiaobing; Qiu, Tianshuang; Lebonvallet, Stephane; Ruan, Su

    2010-02-01

    This paper presents a brain tumor segmentation method that automatically segments tumors from human brain MRI volumes. The model is based on the symmetry of the human brain and the level set method. First, the midsagittal plane of the MRI volume is located, the slices potentially containing tumor are identified according to their symmetry, and an initial boundary of the tumor is determined, in the slice in which the tumor is largest, by watershed and morphological algorithms. Second, the level set method is applied to the initial boundary, driving the curve to evolve and stop at the appropriate tumor boundary. Finally, the tumor boundary is projected slice by slice onto its neighbors as the initial boundary, through the volume, to segment the whole tumor. The experimental results are compared with expert manual tracing and show relatively good agreement.
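The symmetry check that flags tumor-bearing slices can be sketched as a left-right mirror comparison (an assumed simplification: the paper first searches for the true midsagittal plane, whereas this sketch takes the image midline as given):

```python
import numpy as np

def asymmetry_score(slice_img):
    """Mean absolute difference between the left half of an axial slice
    and the mirrored right half; high scores flag candidate tumor
    slices in symmetry-based screening."""
    half = slice_img.shape[1] // 2
    left = slice_img[:, :half]
    mirrored_right = slice_img[:, -half:][:, ::-1]
    return float(np.mean(np.abs(left - mirrored_right)))
```

A perfectly symmetric slice scores 0; a unilateral bright lesion raises the score in proportion to its size and contrast.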

  18. Influence of cold walls on PET image quantification and volume segmentation: A phantom study

    SciTech Connect

    Berthon, B.; Marshall, C.; Edwards, A.; Spezi, E.; Evans, M.

    2013-08-15

    Purpose: Commercially available fillable plastic inserts used in positron emission tomography (PET) phantoms usually have thick plastic walls separating their contents from the background activity. These "cold" walls can modify the intensity values of neighboring active regions through the partial volume effect, resulting in errors in the estimation of standardized uptake values (SUVs). Numerous papers suggest that this is an issue for phantom work simulating tumor tissue, quality control, and calibration. This study investigates the influence of cold plastic wall thickness on the quantification of 18F-fluorodeoxyglucose, on image activity recovery, and on the performance of advanced automatic segmentation algorithms for delineating active regions delimited by plastic walls. Methods: A commercial set of six spheres of different diameters was replicated using a manufacturing technique that reduces plastic wall thickness by up to 90% while keeping the same internal volume. Both sets of thin- and thick-walled inserts were imaged simultaneously in a custom phantom at six different tumor-to-background ratios (TBRs). Intensity values were compared in terms of the maximum and mean SUV in the spheres and the mean SUV of the hottest 1 ml region (SUVmax, SUVmean, and SUVpeak). The recovery coefficient (RC) was also derived for each sphere. The results were compared against the values predicted by a theoretical model of the PET intensity profiles for the same TBRs, sphere sizes, and wall thicknesses. In addition, ten automatic segmentation methods, written in house, were applied to both thin- and thick-walled inserts. The contours obtained were compared to a computed-tomography-derived gold standard ("ground truth") using five different accuracy metrics. Results: The authors' results showed that thin-walled inserts achieved significantly higher SUVmean, SUVmax, and RC

  19. Automatic comic page image understanding based on edge segment analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

    Comic page image understanding aims to analyse the layout of the comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique to produce the digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method is performed on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms the existing methods.

  20. Segmented infrared image analysis for rotating machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Duan, Lixiang; Yao, Mingchao; Wang, Jinjiang; Bai, Tangbo; Zhang, Laibin

    2016-07-01

    As a noncontact and non-intrusive technique, infrared image analysis is promising for machinery defect diagnosis. However, the weak information content and strong noise in infrared images limit its performance. To address this issue, this paper presents an image segmentation approach to enhance feature extraction in infrared image analysis. A region selection criterion named the dispersion degree is formulated to discriminate fault-representative regions from unrelated background information. Feature extraction and fusion methods are then applied to the selected regions for further diagnosis. Experimental studies on a rotor fault simulator demonstrate that the presented segmented feature enhancement approach outperforms analysis of the original image using both a Naïve Bayes classifier and a support vector machine.

  1. Education, Work and Employment--Volume II. Segmented Labour Markets, Workplace Democracy and Educational Planning, Education and Self-Employment.

    ERIC Educational Resources Information Center

    Carnoy, Martin; And Others

    This volume contains three studies covering separate yet complementary aspects of the problem of the relationships between the educational system and the production system as manpower user. The first monograph on the theories of the markets seeks to answer two questions: what can be learned from the work done on the segmentation of the labor…

  2. A method for avoiding overlap of left and right lungs in shape model guided segmentation of lungs in CT volumes

    PubMed Central

    Gill, Gurman; Bauer, Christian; Beichel, Reinhard R.

    2014-01-01

    Purpose: The automated correct segmentation of left and right lungs is a nontrivial problem, because the tissue layer between both lungs can be quite thin. In the case of lung segmentation with left and right lung models, overlapping segmentations can occur. In this paper, the authors address this issue and propose a solution for a model-based lung segmentation method. Methods: The thin tissue layer between left and right lungs is detected by means of a classification approach and utilized to selectively modify the cost function of the lung segmentation method. The approach was evaluated on a diverse set of 212 CT scans of normal and diseased lungs. Performance was assessed by utilizing an independent reference standard and by means of comparison to the standard segmentation method without overlap avoidance. Results: For cases where the standard approach produced overlapping segmentations, the proposed method significantly (p = 1.65 × 10^-9) reduced the overlap by 97.13% on average (median: 99.96%). In addition, segmentation accuracy assessed with the Dice coefficient showed a statistically significant improvement (p = 7.5 × 10^-5) and was 0.9845 ± 0.0111. For cases where the standard approach did not produce an overlap, performance of the proposed method was not found to be significantly different. Conclusions: The proposed method improves the quality of the lung segmentations, which is important for subsequent quantitative analysis steps. PMID:25281960
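The Dice coefficient used for evaluation above, and the left-right overlap count that the method drives toward zero, are straightforward to compute from binary masks; a minimal sketch with illustrative helper names:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A∩B| / (|A| + |B|); 1.0 for identical, 0.0 for disjoint masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def overlap_voxels(left_mask, right_mask):
    """Number of voxels claimed by both the left- and right-lung
    segmentations, i.e., the overlap the proposed method reduces."""
    return int(np.logical_and(left_mask, right_mask).sum())
```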

  3. Analysis of recent segmental duplications in the bovine genome

    PubMed Central

    2009-01-01

    Background: Duplicated sequences are an important source of gene innovation and structural variation within mammalian genomes. We performed the first systematic and genome-wide analysis of segmental duplications in modern domesticated cattle (Bos taurus). Using two distinct computational analyses, we estimated that 3.1% (94.4 Mb) of the bovine genome consists of recently duplicated sequences (≥ 1 kb in length, ≥ 90% sequence identity). As in other mammalian draft assemblies, almost half (47% of 94.4 Mb) of these sequences have not been assigned to cattle chromosomes. Results: In this study, we provide the first experimental validation of large duplications and briefly compare their distribution on two independent bovine genome assemblies using fluorescent in situ hybridization (FISH). Our analyses suggest that the majority (75-90%) of segmental duplications are organized into local tandem duplication clusters. Together with results from rodents and carnivores, this now confidently establishes tandem duplication as the most likely archetypal organization in mammals, in contrast to humans and great apes, which show a preponderance of interspersed duplications. A cross-species survey of duplicated genes and gene families indicated that duplication, positive selection, and gene conversion have shaped primates, rodents, carnivores, and ruminants to different degrees during their speciation and adaptation. We found that bovine segmental duplications corresponding to genes are significantly enriched for specific biological functions such as immunity, digestion, lactation, and reproduction. Conclusion: Our results suggest that in most mammalian lineages segmental duplications are organized in a tandem configuration. Segmental duplications remain problematic for genome assembly, and we highlight genic regions that require higher-quality sequence characterization. This study provides insights into mammalian genome evolution and generates a valuable resource for cattle

  4. Study of tracking and data acquisition system for the 1990's. Volume 4: TDAS space segment architecture

    NASA Technical Reports Server (NTRS)

    Orr, R. S.

    1984-01-01

    Tracking and data acquisition system (TDAS) requirements, TDAS architectural goals, enhanced TDAS subsystems, constellation and networking options, TDAS spacecraft options, crosslink implementation, the baseline TDAS space segment architecture, and threat model development/security analysis are addressed.

  5. High volume data storage architecture analysis

    NASA Technical Reports Server (NTRS)

    Malik, James M.

    1990-01-01

    A High Volume Data Storage Architecture Analysis was conducted. The results, presented in this report, will be applied to problems of high volume data requirements such as those anticipated for the Space Station Control Center. High volume data storage systems at several different sites were analyzed for archive capacity, storage hierarchy and migration philosophy, and retrieval capabilities. Proposed architectures were solicited from the sites selected for in-depth analysis. Model architectures for a hypothetical data archiving system, for a high speed file server, and for high volume data storage are attached.

  6. Three dimensional level set based semiautomatic segmentation of atherosclerotic carotid artery wall volume using 3D ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Hossain, Md. Murad; AlMuhanna, Khalid; Zhao, Limin; Lal, Brajesh K.; Sikdar, Siddhartha

    2014-03-01

    3D segmentation of carotid plaque from ultrasound (US) images is challenging due to image artifacts and poor boundary definition. Semiautomatic segmentation algorithms for calculating vessel wall volume (VWV) have been proposed for the common carotid artery (CCA) but have not been applied to plaques in the internal carotid artery (ICA). In this work, we describe a 3D segmentation algorithm that is robust to shadowing and missing boundaries. Our algorithm uses a distance-regularized level set method with edge- and region-based energies to segment the adventitial wall boundary (AWB) and lumen-intima boundary (LIB) of plaques in the CCA, ICA, and external carotid artery (ECA). The algorithm is initialized by manually placing points on the boundary of a subset of transverse slices with an interslice distance of 4 mm. We propose a novel user-defined stopping-surface-based energy to prevent leakage of the evolving surface across poorly defined boundaries. Validation was performed against manual segmentation using 3D US volumes acquired from five asymptomatic patients with carotid stenosis using a linear 4D probe. A pseudo-gold-standard boundary was formed from manual segmentations by three observers. The Dice similarity coefficient (DSC), Hausdorff distance (HD), and modified HD (MHD) were used to compare the algorithm results against the pseudo-gold-standard on 1205 cross-sectional slices of the five 3D US image sets. The algorithm showed good agreement with the pseudo-gold-standard boundary, with mean DSC of 93.3% (AWB) and 89.82% (LIB); mean MHD of 0.34 mm (AWB) and 0.24 mm (LIB); and mean HD of 1.27 mm (AWB) and 0.72 mm (LIB). The proposed 3D semiautomatic segmentation is a first step toward full characterization of 3D plaque progression and longitudinal monitoring.
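The boundary-distance metrics reported above (HD and MHD) compare two contours as point sets; a brute-force sketch for small point sets:

```python
import numpy as np

def _directed(D, axis):
    # nearest-neighbor distance from each point of one set to the other
    return D.min(axis=axis)

def hausdorff(A, B):
    """Symmetric Hausdorff distance between point sets A, B (N x d).
    Takes the worst-case nearest-neighbor distance in either direction."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return float(max(_directed(D, 1).max(), _directed(D, 0).max()))

def modified_hausdorff(A, B):
    """Modified HD: averages (rather than maximizes) the directed
    nearest-neighbor distances, making it less outlier-sensitive."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return float(max(_directed(D, 1).mean(), _directed(D, 0).mean()))
```

The O(N*M) pairwise matrix is fine for contour slices; large surfaces would use a k-d tree instead.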

  7. An analysis of segmentation dynamics throughout embryogenesis in the centipede Strigamia maritima

    PubMed Central

    2013-01-01

    Background Most segmented animals add segments sequentially as the animal grows. In vertebrates, segment patterning depends on oscillations of gene expression coordinated as travelling waves in the posterior, unsegmented mesoderm. Recently, waves of segmentation gene expression have been clearly documented in insects. However, it remains unclear whether cyclic gene activity is widespread across arthropods, and possibly ancestral among segmented animals. Previous studies have suggested that a segmentation oscillator may exist in Strigamia, an arthropod only distantly related to insects, but further evidence is needed to document this. Results Using the genes even skipped and Delta as representative of genes involved in segment patterning in insects and in vertebrates, respectively, we have carried out a detailed analysis of the spatio-temporal dynamics of gene expression throughout the process of segment patterning in Strigamia. We show that a segmentation clock is involved in segment formation: most segments are generated by cycles of dynamic gene activity that generate a pattern of double segment periodicity, which is only later resolved to the definitive single segment pattern. However, not all segments are generated by this process. The most posterior segments are added individually from a localized sub-terminal area of the embryo, without prior pair-rule patterning. Conclusions Our data suggest that dynamic patterning of gene expression may be widespread among the arthropods, but that a single network of segmentation genes can generate either oscillatory behavior at pair-rule periodicity or direct single segment patterning, at different stages of embryogenesis. PMID:24289308

  8. Non-invasive measurement of choroidal volume change and ocular rigidity through automated segmentation of high-speed OCT imaging

    PubMed Central

    Beaton, L.; Mazzaferri, J.; Lalonde, F.; Hidalgo-Aguirre, M.; Descovich, D.; Lesk, M. R.; Costantino, S.

    2015-01-01

    We have developed a novel optical approach to determine pulsatile ocular volume changes using automated segmentation of the choroid, which, together with dynamic contour tonometry (DCT) measurements of intraocular pressure (IOP), allows estimation of the ocular rigidity (OR) coefficient. Spectral-domain optical coherence tomography (OCT) videos were acquired with enhanced depth imaging (EDI) at 7 Hz for ~50 s at the fundus. A novel segmentation algorithm based on graph search with an edge-probability weighting scheme was developed to measure choroidal thickness (CT) in each frame. Global ocular volume fluctuations were derived from frame-to-frame CT variations using an approximate eye model. Immediately after imaging, IOP and ocular pulse amplitude (OPA) were measured using DCT. OR was calculated from these peak pressure and volume changes. Our automated segmentation algorithm provides the first non-invasive method for determining the ocular volume change due to pulsatile choroidal filling, and for estimating the OR coefficient. Future applications of this method offer an important avenue toward understanding the biomechanical basis of ocular pathophysiology. PMID:26137373
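One common way to obtain an ocular rigidity coefficient from a pressure pulse (OPA) and the corresponding volume change is Friedenwald's pressure-volume relation, ln(P2/P1) = K * dV. The abstract does not give the authors' exact estimator, so the sketch below (with hypothetical parameter names) is only an assumed formulation:

```python
import math

def ocular_rigidity(iop_diastolic_mmHg, opa_mmHg, delta_volume_ul):
    """Ocular rigidity coefficient K from Friedenwald's relation
    ln(P2/P1) = K * dV, with P1 the diastolic IOP and P2 = P1 + OPA.
    Units: mmHg for pressures, microliters for the volume change."""
    p1 = iop_diastolic_mmHg
    p2 = iop_diastolic_mmHg + opa_mmHg
    return math.log(p2 / p1) / delta_volume_ul
```

For instance, a diastolic IOP of 15 mmHg, an OPA of 3 mmHg, and a 5 ul pulsatile volume change give K ≈ 0.036 per ul.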

  9. Fully Automated Renal Tissue Volumetry in MR Volume Data Using Prior-Shape-Based Segmentation in Subject-Specific Probability Maps.

    PubMed

    Gloger, Oliver; Tönnies, Klaus; Laqua, Rene; Völzke, Henry

    2015-10-01

    Organ segmentation in magnetic resonance (MR) volume data is of increasing interest in epidemiological studies and clinical practice. Especially in large-scale population-based studies, organ volumetry is highly relevant and requires exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automatic methods to perform organ segmentation. In this paper, we present an automated framework for renal tissue segmentation that computes renal parenchyma, cortex, and medulla volumetry in native MR volume data without any user interaction. We introduce a novel strategy of subject-specific probability map computation for renal tissue types, which takes inter- and intra-MR-intensity variability into account. Several kinds of tissue-related 2-D and 3-D prior-shape knowledge are incorporated in modularized framework parts to segment the renal parenchyma in a final level set segmentation strategy. Subject-specific probabilities for medulla and cortex tissue are applied in a fuzzy clustering technique to delineate cortex and medulla tissue inside the segmented parenchyma regions. The novel subject-specific computation approach provides clearly better tissue probability map quality than existing methods, and the framework provides improved results for parenchyma segmentation. Furthermore, cortex and medulla segmentation quality is very promising but cannot be compared to existing methods, since state-of-the-art methods for automated cortex and medulla segmentation in native MR volume data are still missing. PMID:25915954

  10. Segmental chloride and fluid handling during correction of chloride-depletion alkalosis without volume expansion in the rat.

    PubMed Central

    Galla, J H; Bonduris, D N; Dumbauld, S L; Luke, R G

    1984-01-01

    To determine whether chloride-depletion metabolic alkalosis (CDA) can be corrected by provision of chloride without volume expansion or intranephronal redistribution of fluid reabsorption, CDA was produced in Sprague-Dawley rats by peritoneal dialysis against 0.15 M NaHCO3; controls (CON) were dialyzed against Ringer's bicarbonate. Animals were infused with isotonic solutions containing the same Cl and total CO2 (tCO2) concentrations as in postdialysis plasma at rates shown to be associated with slight but stable volume contraction. During the subsequent 6 h, serum Cl and tCO2 concentrations remained stable and normal in CON and corrected towards normal in CDA; urinary chloride excretion was less and bicarbonate excretion greater than those in CON during this period. Micropuncture and microinjection studies were performed in the 3rd h after dialysis. Plasma volumes determined by 125I-albumin were not different. Inulin clearance and fractional chloride excretion were lower (P less than 0.05) in CDA. Superficial nephron glomerular filtration rate determined from distal puncture sites was lower (P less than 0.02) in CDA (27.9 +/- 2.3 nl/min) compared with that in CON (37.9 +/- 2.6). Fractional fluid and chloride reabsorption in the proximal convoluted tubule and within the loop segment did not differ. Fractional chloride delivery to the early distal convolution did not differ but that out of this segment was less (P less than 0.01) in group CDA. Urinary recovery of 36Cl injected into the collecting duct segment was lower (P less than 0.01) in CDA (CON 74 +/- 3; CDA 34 +/- 4%). These data show that CDA can be corrected by the provision of chloride without volume expansion or alterations in the intranephronal distribution of fluid reabsorption. Enhanced chloride reabsorption in the collecting duct segment, and possibly in the distal convoluted tubule, contributes importantly to this correction. PMID:6690486
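The whole-kidney quantities above (inulin clearance as a GFR estimate, fractional excretion of chloride) follow the standard renal formulas; a minimal sketch with hypothetical parameter names:

```python
def clearance(u_conc, urine_flow, p_conc):
    """Renal clearance C = U * V / P; for inulin, which is freely
    filtered and neither reabsorbed nor secreted, C approximates GFR."""
    return u_conc * urine_flow / p_conc

def fractional_excretion(u_x, p_x, u_in, p_in):
    """Fractional excretion of solute x relative to inulin:
    FE_x = (U_x / P_x) / (U_in / P_in), the fraction of the filtered
    load of x that escapes reabsorption."""
    return (u_x / p_x) / (u_in / p_in)
```

With urine inulin 30 mg/ml, urine flow 1 ml/min, and plasma inulin 1 mg/ml, the clearance is 30 ml/min; the same ratios drive the fractional chloride excretion comparison between CDA and control animals.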

  11. Three-dimensional choroidal segmentation in spectral OCT volumes using optic disc prior information

    NASA Astrophysics Data System (ADS)

    Hu, Zhihong; Girkin, Christopher A.; Hariri, Amirhossein; Sadda, SriniVas R.

    2016-03-01

Recently, much attention has been focused on determining the role of the peripapillary choroid, the layer between the outer retinal pigment epithelium (RPE)/Bruch's membrane (BM) and the choroid-sclera (C-S) junction, in the pathogenesis of glaucoma, whether primary or secondary. However, automated choroidal segmentation in spectral-domain optical coherence tomography (SD-OCT) images of the optic nerve head (ONH) has not been reported, probably because the presence of the BM opening (BMO, corresponding to the optic disc) can deflect the choroidal segmentation from its correct position. The purpose of this study is to develop a 3D graph-based approach to identify the 3D choroidal layer in ONH-centered SD-OCT images using BMO prior information. More specifically, an initial 3D choroidal segmentation was first performed using the 3D graph search algorithm, applying varying surface interaction constraints based on the choroidal morphological model. To assist the choroidal segmentation, two other surfaces, the internal limiting membrane and the inner/outer segment junction, were also segmented. Based on the segmented layer between the RPE/BM and the C-S junction, a 2D projection map was created, and the BMO in the projection map was detected by a 2D graph search. The pre-defined BMO information was then incorporated into the surface interaction constraints of the 3D graph search to obtain a more accurate choroidal segmentation. Twenty SD-OCT images from 20 healthy subjects were used. The mean differences of the choroidal borders between the algorithm and manual segmentation were at a sub-voxel level, indicating a high level of segmentation accuracy.

  12. 3-D segmentation of retinal blood vessels in spectral-domain OCT volumes of the optic nerve head

    NASA Astrophysics Data System (ADS)

    Lee, Kyungmoo; Abràmoff, Michael D.; Niemeijer, Meindert; Garvin, Mona K.; Sonka, Milan

    2010-03-01

Segmentation of retinal blood vessels can provide important information for detecting and tracking retinal vascular diseases, including diabetic retinopathy, arterial hypertension, arteriosclerosis, and retinopathy of prematurity (ROP). Many studies on 2-D segmentation of retinal blood vessels from a variety of medical images have been performed. However, 3-D segmentation of retinal blood vessels from spectral-domain optical coherence tomography (OCT) volumes, which is capable of providing geometrically accurate vessel models, has, to the best of our knowledge, not been previously studied. The purpose of this study is to develop and evaluate a method that can automatically detect 3-D retinal blood vessels from spectral-domain OCT scans centered on the optic nerve head (ONH). The proposed method utilized a fast multiscale 3-D graph search to segment retinal surfaces, as well as a triangular mesh-based 3-D graph search to detect retinal blood vessels. An experiment on 30 ONH-centered OCT scans (15 right-eye and 15 left-eye scans) from 15 subjects was performed; the mean unsigned 3-D error of the computer segmentations, compared with the independent standard obtained from a retinal specialist, was 3.4 +/- 2.5 voxels (0.10 +/- 0.07 mm).

  13. Pulse shape analysis and position determination in segmented HPGe detectors: The AGATA detector library

    NASA Astrophysics Data System (ADS)

    Bruyneel, B.; Birkenbach, B.; Reiter, P.

    2016-03-01

The AGATA Detector Library (ADL) was developed for the calculation of signals from highly segmented, large-volume, high-purity germanium (HPGe) detectors. An ADL basis set comprises a large collection of calculated, position-dependent detector pulse shapes; such a basis set is needed for Pulse Shape Analysis (PSA), by means of which the interaction position of a γ-ray inside the active detector volume is determined. The theoretical concepts behind the calculations are introduced and cover the relevant aspects of signal formation in HPGe. The approximations and the realization of the computer code, with its input parameters, are explained in detail. ADL is a versatile and modular computer code, and new detectors can be implemented in the library. Measured position resolutions of the AGATA detectors based on ADL are discussed.

  14. Breast Density Analysis Using an Automatic Density Segmentation Algorithm.

    PubMed

    Oliver, Arnau; Tortajada, Meritxell; Lladó, Xavier; Freixenet, Jordi; Ganau, Sergi; Tortajada, Lidia; Vilagran, Mariona; Sentís, Melcior; Martí, Robert

    2015-10-01

Breast density is a strong risk factor for breast cancer. In this paper, we present an automated approach for breast density segmentation in mammographic images, based on supervised pixel-based classification using textural and morphological features. The objective of the paper is not only to show the feasibility of an automatic algorithm for breast density segmentation but also to prove its potential application to the study of breast density evolution in longitudinal studies. The database used here contains three complete screening examinations, acquired 2 years apart, of 130 different patients. The approach was validated by comparing manual expert annotations with automatically obtained estimations. Transversal analysis of breast density in the craniocaudal (CC) and mediolateral oblique (MLO) views of both breasts acquired in the same study showed a correlation coefficient of ρ = 0.96 between the mammographic density percentages of the left and right breasts, whereas a comparison of the two mammographic views showed a correlation of ρ = 0.95. A longitudinal study of breast density confirmed the trend that the dense tissue percentage decreases over time, although we noticed that the rate of decrease depends on the initial amount of breast density. PMID:25720749
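    The correlation statistic reported above (Pearson's ρ between left- and right-breast density percentages) can be sketched in a few lines; the density values below are hypothetical, for illustration only, and are not the study's data:

    ```python
    import numpy as np

    # Hypothetical left/right mammographic density percentages
    # (illustrative values only, not the study's data).
    left = np.array([12.0, 25.0, 40.0, 55.0, 68.0])
    right = np.array([14.0, 23.0, 43.0, 52.0, 70.0])

    # Pearson correlation coefficient, the statistic reported as rho above.
    rho = np.corrcoef(left, right)[0, 1]
    print(round(rho, 3))  # close to 1 for near-linear paired measurements
    ```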

  15. Multi-level segment analysis: definition and applications in turbulence

    NASA Astrophysics Data System (ADS)

    Wang, Lipo

    2015-11-01

The interaction of different scales is among the most interesting and challenging features in turbulence research. Existing approaches to scaling analysis, such as the structure-function and Fourier-spectrum methods, have their respective limitations, for instance scale mixing, i.e., the so-called infrared and ultraviolet effects. For a given function, specifying different window sizes yields different local extremal point sets; this window-size dependence indicates multi-scale statistics. A new method, multi-level segment analysis (MSA), based on local extrema statistics, has been developed. The part of the function between two adjacent extremal points is defined as a segment, which is characterized by its functional difference and scale difference. Structure functions can then be derived from these characteristic parameters. Data tests show that MSA can successfully reveal different scaling regimes in turbulence systems, such as Lagrangian and two-dimensional turbulence, which have remained controversial in turbulence research. In principle, MSA can be extended to various other analyses.
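    The segment decomposition described above can be sketched on a 1-D signal; the single-window extrema detection and the toy signal below are illustrative assumptions, not the authors' implementation:

    ```python
    import numpy as np

    def msa_segments(f: np.ndarray):
        """Split a 1-D signal into segments between adjacent local extrema.

        Each segment is characterized by its scale difference (index span)
        and functional difference (value change); plateaus are ignored in
        this simplified sketch.
        """
        d = np.diff(f)
        # interior indices where the slope changes sign, plus the endpoints
        extrema = [0] + [i for i in range(1, len(f) - 1)
                         if d[i - 1] * d[i] < 0] + [len(f) - 1]
        return [(int(j - i), float(f[j] - f[i]))
                for i, j in zip(extrema[:-1], extrema[1:])]

    sig = np.array([0.0, 2.0, 1.0, 3.0, 0.5])
    print(msa_segments(sig))  # [(1, 2.0), (1, -1.0), (1, 2.0), (1, -2.5)]
    ```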

  16. Automated target recognition technique for image segmentation and scene analysis

    NASA Astrophysics Data System (ADS)

    Baumgart, Chris W.; Ciarcia, Christopher A.

    1994-03-01

Automated target recognition (ATR) software has been designed to perform image segmentation and scene analysis. Specifically, this software was developed as a package for the Army's Minefield Reconnaissance and Detector (MIRADOR) program. MIRADOR is an on/off-road, remote-control, multisensor system designed to detect buried and surface-emplaced metallic and nonmetallic antitank mines. The basic requirements for this ATR software were the following: (1) an ability to separate target objects from the background in low signal-to-noise conditions; (2) an ability to handle a relatively high dynamic range in imaging light levels; (3) the ability to compensate for or remove light-source effects such as shadows; and (4) the ability to identify target objects as mines. The image segmentation and target evaluation were performed using an integrated and parallel processing approach. Three basic techniques (texture analysis, edge enhancement, and contrast enhancement) were used collectively to extract all potential mine target shapes from the basic image. Target evaluation was then performed using a combination of size, geometrical, and fractal characteristics, which resulted in a calculated probability for each target shape. Overall results with this algorithm were quite good, though there is a tradeoff between detection confidence and the number of false alarms. This technology also has applications in the areas of hazardous waste site remediation, archaeology, and law enforcement.

  17. A segment interaction analysis of proximal-to-distal sequential segment motion patterns.

    PubMed

    Putnam, C A

    1991-01-01

    The purpose of this study was to examine the motion-dependent interaction between adjacent lower extremity segments during the actions of kicking and the swing phases of running and walking. This was done to help explain the proximal-to-distal sequential pattern of segment motions typically observed in these activities and to evaluate general biomechanical principles used to explain this motion pattern. High speed film data were collected for four subjects performing each skill. Equations were derived which expressed the interaction between segments in terms of resultant joint moments at the hip and knee and several interactive moments which were functions of gravitational forces or kinematic variables. The angular motion-dependent interaction between the thigh and leg was found to play a significant role in determining the sequential segment motion patterns observed in all three activities. The general nature of this interaction was consistent across all three movements except during phases in which there were large differences in the knee angle. Support was found for the principle of summation of segment speeds, whereas no support was found for the principle of summation of force or for general statements concerning the effect of negative thigh acceleration on positive leg acceleration. The roles played by resultant joint moments in producing the observed segment motion sequences are discussed. PMID:1997807

  18. Layout pattern analysis using the Voronoi diagram of line segments

    NASA Astrophysics Data System (ADS)

    Dey, Sandeep Kumar; Cheilaris, Panagiotis; Gabrani, Maria; Papadopoulou, Evanthia

    2016-01-01

    Early identification of problematic patterns in very large scale integration (VLSI) designs is of great value as the lithographic simulation tools face significant timing challenges. To reduce the processing time, such a tool selects only a fraction of possible patterns which have a probable area of failure, with the risk of missing some problematic patterns. We introduce a fast method to automatically extract patterns based on their structure and context, using the Voronoi diagram of line-segments as derived from the edges of VLSI design shapes. Designers put line segments around the problematic locations in patterns called "gauges," along which the critical distance is measured. The gauge center is the midpoint of a gauge. We first use the Voronoi diagram of VLSI shapes to identify possible problematic locations, represented as gauge centers. Then we use the derived locations to extract windows containing the problematic patterns from the design layout. The problematic locations are prioritized by the shape and proximity information of the design polygons. We perform experiments for pattern selection in a portion of a 22-nm random logic design layout. The design layout had 38,584 design polygons (consisting of 199,946 line segments) on layer Mx, and 7079 markers generated by an optical rule checker (ORC) tool. The optical rules specify requirements for printing circuits with minimum dimension. Markers are the locations of some optical rule violations in the layout. We verify our approach by comparing the coverage of our extracted patterns to the ORC-generated markers. We further derive a similarity measure between patterns and between layouts. The similarity measure helps to identify a set of representative gauges that reduces the number of patterns for analysis.

  19. Computer Aided Segmentation Analysis: New Software for College Admissions Marketing.

    ERIC Educational Resources Information Center

    Lay, Robert S.; Maguire, John J.

    1983-01-01

    Compares segmentation solutions obtained using a binary segmentation algorithm (THAID) and a new chi-square-based procedure (CHAID) that segments the prospective pool of college applicants using application and matriculation as criteria. Results showed a higher number of estimated qualified inquiries and more accurate estimates with CHAID. (JAC)

  20. Analysis of Retinal Peripapillary Segmentation in Early Alzheimer's Disease Patients

    PubMed Central

    Salobrar-Garcia, Elena; Hoyas, Irene; Leal, Mercedes; de Hoz, Rosa; Rojas, Blanca; Ramirez, Ana I.; Salazar, Juan J.; Yubero, Raquel; Gil, Pedro; Triviño, Alberto; Ramirez, José M.

    2015-01-01

Decreased thickness of the retinal nerve fiber layer (RNFL) may reflect retinal neuronal ganglion cell death. A decrease in the RNFL has been demonstrated in Alzheimer's disease (AD), in addition to aging, by optical coherence tomography (OCT). Twenty-three mild-AD patients and 28 age-matched control subjects, with mean Mini-Mental State Examination scores of 23.3 and 28.2, respectively, and with no ocular disease or systemic disorders affecting vision, were considered for study. OCT peripapillary and macular segmentation thicknesses were examined in the right eye of each patient. Compared to controls, eyes of mild-AD patients showed no statistical difference in peripapillary RNFL thickness (P > 0.05); however, sectors 2, 3, 4, 8, 9, and 11 of the papilla showed thinning, while sectors 1, 5, 6, 7, and 10 showed thickening. Total macular volume and RNFL thickness of the fovea, in all four inner quadrants and in the outer temporal quadrants, proved to be significantly decreased (P < 0.01). Although peripapillary RNFL thickness did not statistically differ from that of control eyes, the increase in peripapillary thickness in our mild-AD patients could correspond to an early neurodegeneration stage and may indicate an inflammatory process that could lead to progressive peripapillary fiber damage. PMID:26557684

  1. Bifilar analysis study, volume 1

    NASA Technical Reports Server (NTRS)

    Miao, W.; Mouzakis, T.

    1980-01-01

    A coupled rotor/bifilar/airframe analysis was developed and utilized to study the dynamic characteristics of the centrifugally tuned, rotor-hub-mounted, bifilar vibration absorber. The analysis contains the major components that impact the bifilar absorber performance, namely, an elastic rotor with hover aerodynamics, a flexible fuselage, and nonlinear individual degrees of freedom for each bifilar mass. Airspeed, rotor speed, bifilar mass and tuning variations are considered. The performance of the bifilar absorber is shown to be a function of its basic parameters: dynamic mass, damping and tuning, as well as the impedance of the rotor hub. The effect of the dissimilar responses of the individual bifilar masses which are caused by tolerance induced mass, damping and tuning variations is also examined.

  2. Semi-automatic segmentation for 3D motion analysis of the tongue with dynamic MRI.

    PubMed

    Lee, Junghoon; Woo, Jonghye; Xing, Fangxu; Murano, Emi Z; Stone, Maureen; Prince, Jerry L

    2014-12-01

    Dynamic MRI has been widely used to track the motion of the tongue and measure its internal deformation during speech and swallowing. Accurate segmentation of the tongue is a prerequisite step to define the target boundary and constrain the tracking to tissue points within the tongue. Segmentation of 2D slices or 3D volumes is challenging because of the large number of slices and time frames involved in the segmentation, as well as the incorporation of numerous local deformations that occur throughout the tongue during motion. In this paper, we propose a semi-automatic approach to segment 3D dynamic MRI of the tongue. The algorithm steps include seeding a few slices at one time frame, propagating seeds to the same slices at different time frames using deformable registration, and random walker segmentation based on these seed positions. This method was validated on the tongue of five normal subjects carrying out the same speech task with multi-slice 2D dynamic cine-MR images obtained at three orthogonal orientations and 26 time frames. The resulting semi-automatic segmentations of a total of 130 volumes showed an average dice similarity coefficient (DSC) score of 0.92 with less segmented volume variability between time frames than in manual segmentations. PMID:25155697
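    The Dice similarity coefficient (DSC) used for validation above has the standard definition 2|A∩B| / (|A| + |B|); a minimal sketch on binary masks (not the authors' code):

    ```python
    import numpy as np

    def dice(a: np.ndarray, b: np.ndarray) -> float:
        """Dice similarity coefficient between two binary masks."""
        a = a.astype(bool)
        b = b.astype(bool)
        intersection = np.logical_and(a, b).sum()
        return 2.0 * intersection / (a.sum() + b.sum())

    # Two overlapping 1-D "masks" for illustration
    m1 = np.array([0, 1, 1, 1, 0])
    m2 = np.array([0, 0, 1, 1, 1])
    print(dice(m1, m2))  # 2*2 / (3 + 3) = 0.666...
    ```

    The same definition applies voxel-wise to the 3D tongue masks; a DSC of 0.92 indicates strong overlap with manual segmentation.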

  3. Quantitative Analysis of Mouse Retinal Layers Using Automated Segmentation of Spectral Domain Optical Coherence Tomography Images

    PubMed Central

    Dysli, Chantal; Enzmann, Volker; Sznitman, Raphael; Zinkernagel, Martin S.

    2015-01-01

Purpose Quantification of retinal layers using automated segmentation of optical coherence tomography (OCT) images allows for longitudinal studies of retinal and neurological disorders in mice. The purpose of this study was to compare the performance of automated retinal layer segmentation algorithms with data from manual segmentation in mice using the Spectralis OCT. Methods Spectral domain OCT images from 55 mice from three different mouse strains were analyzed in total. The OCT scans from 22 C57Bl/6, 22 BALBc, and 11 C3A.Cg-Pde6b+Prph2Rd2/J mice were automatically segmented using three commercially available automated retinal segmentation algorithms and compared to manual segmentation. Results Fully automated segmentation performed well in mice and showed coefficients of variation (CV) below 5% for the total retinal volume. However, all three automated segmentation algorithms yielded substantially greater total retinal thickness values than manual segmentation (P < 0.0001), owing to segmentation errors at the basement membrane. Conclusions Whereas the automated retinal segmentation algorithms performed well for the inner layers, the retinal pigment epithelium (RPE) was delineated within the sclera, leading to consistently thicker measurements of the photoreceptor layer and the total retina. Translational Relevance The introduction of spectral domain OCT allows for accurate imaging of the mouse retina. Exact quantification of retinal layer thicknesses in mice is important for studying layers of interest under various pathological conditions. PMID:26336634

  4. Automated cerebellar lobule segmentation with application to cerebellar structural analysis in cerebellar disease.

    PubMed

    Yang, Zhen; Ye, Chuyang; Bogovic, John A; Carass, Aaron; Jedynak, Bruno M; Ying, Sarah H; Prince, Jerry L

    2016-02-15

    The cerebellum plays an important role in both motor control and cognitive function. Cerebellar function is topographically organized and diseases that affect specific parts of the cerebellum are associated with specific patterns of symptoms. Accordingly, delineation and quantification of cerebellar sub-regions from magnetic resonance images are important in the study of cerebellar atrophy and associated functional losses. This paper describes an automated cerebellar lobule segmentation method based on a graph cut segmentation framework. Results from multi-atlas labeling and tissue classification contribute to the region terms in the graph cut energy function and boundary classification contributes to the boundary term in the energy function. A cerebellar parcellation is achieved by minimizing the energy function using the α-expansion technique. The proposed method was evaluated using a leave-one-out cross-validation on 15 subjects including both healthy controls and patients with cerebellar diseases. Based on reported Dice coefficients, the proposed method outperforms two state-of-the-art methods. The proposed method was then applied to 77 subjects to study the region-specific cerebellar structural differences in three spinocerebellar ataxia (SCA) genetic subtypes. Quantitative analysis of the lobule volumes shows distinct patterns of volume changes associated with different SCA subtypes consistent with known patterns of atrophy in these genetic subtypes. PMID:26408861

  5. Image segmentation by iterative parallel region growing with application to data compression and image analysis

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    1988-01-01

    Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of the approach on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
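    The key idea above, performing the globally best merge first so that the result does not depend on processing order, can be sketched on a 1-D signal; the merge cost (absolute difference of region means) and the toy data are illustrative assumptions, not the MPP implementation:

    ```python
    import numpy as np

    def best_merge_segmentation(values, n_regions):
        """Greedy 1-D region growing: always perform the globally best merge
        (smallest difference of adjacent region means) first, so the result
        is independent of processing order. Illustrative sketch only."""
        regions = [[v] for v in values]          # start: one region per pixel
        while len(regions) > n_regions:
            means = [float(np.mean(r)) for r in regions]
            # cost of merging each adjacent pair; pick the global minimum
            costs = [abs(means[i] - means[i + 1]) for i in range(len(means) - 1)]
            k = int(np.argmin(costs))
            regions[k:k + 2] = [regions[k] + regions[k + 1]]
        return regions

    print(best_merge_segmentation([1.0, 1.1, 0.9, 5.0, 5.2, 9.0], 3))
    # [[1.0, 1.1, 0.9], [5.0, 5.2], [9.0]]
    ```

    A sequential region grower that processed pixels left to right could produce a different partition for the same data; selecting the global minimum cost at each step removes that order dependence.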

  6. A novel approach for the automated segmentation and volume quantification of cardiac fats on computed tomography.

    PubMed

    Rodrigues, É O; Morais, F F C; Morais, N A O S; Conci, L S; Neto, L V; Conci, A

    2016-01-01

The deposits of fat on the surroundings of the heart are correlated with several health risk factors, such as atherosclerosis, carotid stiffness, coronary artery calcification, atrial fibrillation, and many others. These deposits vary independently of obesity, which reinforces the case for segmenting them directly for further quantification. However, manual segmentation of these fats has not been widely deployed in clinical practice because of the human workload required and the consequent high cost of physicians and technicians. In this work, we propose a unified method for the autonomous segmentation and quantification of two types of cardiac fat. The segmented fats, termed epicardial and mediastinal, are separated from each other by the pericardium. Much effort was devoted to achieving minimal user intervention. The proposed methodology mainly comprises registration and classification algorithms to perform the desired segmentation. We compare the performance of several classification algorithms on this task, including neural networks, probabilistic models, and decision tree algorithms. Experimental results have shown that the mean accuracy for both epicardial and mediastinal fats is 98.5% (99.5% if the features are normalized), with a mean true positive rate of 98.0%. On average, the Dice similarity index was 97.6%. PMID:26474835

  7. Applicability of semi-automatic segmentation for volumetric analysis of brain lesions.

    PubMed

    Heinonen, T; Dastidar, P; Eskola, H; Frey, H; Ryymin, P; Laasonen, E

    1998-01-01

This project involves the development of a fast semi-automatic segmentation procedure to make accurate volumetric estimations of brain lesions. The method has been applied to the segmentation of demyelination plaques in multiple sclerosis (MS) and of right cerebral hemispheric infarctions in patients with neglect. The segmentation method includes several image processing techniques, such as image enhancement, amplitude segmentation, and region growing. The entire program runs on a PC-based computer with a graphical user interface. Twenty-three patients with MS and 43 patients with right cerebral hemisphere infarctions were studied on a 0.5 T MRI unit, and the MS plaques and cerebral infarctions were then segmented. The volumetric accuracy of the program was demonstrated by segmenting magnetic resonance (MR) images of fluid-filled syringes; the relative error of the total volume measurement based on these images was 1.5%. A repeatability test was also carried out as an inter- and intra-observer study in which the MS plaques of six randomly selected patients were segmented. These tests indicated 7% variability in the inter-observer study and 4% in the intra-observer study. The average time needed to segment and calculate the total plaque volume for one patient was 10 min. This simple segmentation method can be used to quantify anatomical structures, such as air cells in the sinonasal and temporal bone areas, as well as different pathological conditions, such as brain tumours, intracerebral haematomas, and bony destructions. PMID:9680601

  8. Automated segmentation of chronic stroke lesions using LINDA: Lesion identification with neighborhood data analysis.

    PubMed

    Pustina, Dorian; Coslett, H Branch; Turkeltaub, Peter E; Tustison, Nicholas; Schwartz, Myrna F; Avants, Brian

    2016-04-01

The gold standard for identifying stroke lesions is manual tracing, a method that is known to be observer dependent and time consuming, and thus impractical for big data studies. We propose LINDA (Lesion Identification with Neighborhood Data Analysis), an automated segmentation algorithm capable of learning the relationship between existing manual segmentations and a single T1-weighted MRI. A dataset of 60 left-hemispheric chronic stroke patients was used to build the method and test it with k-fold and leave-one-out procedures. With respect to manual tracings, predicted lesion maps showed a mean Dice overlap of 0.696 ± 0.16, a Hausdorff distance of 17.9 ± 9.8 mm, and an average displacement of 2.54 ± 1.38 mm. The manual and predicted lesion volumes correlated at r = 0.961. An additional dataset of 45 patients was used to test LINDA with independent data, achieving high accuracy rates and confirming its cross-institutional applicability. To investigate the cost of moving from manual tracings to automated segmentation, we performed comparative lesion-to-symptom mapping (LSM) on five behavioral scores. Predicted and manual lesions produced similar neuro-cognitive maps, albeit with some discrepancies, which are discussed. Of note, region-wise LSM was more robust to the prediction error than voxel-wise LSM. Our results show that, while several limitations exist, our current results compete with or exceed the state of the art, producing consistent predictions, very low failure rates, and transferable knowledge between labs. This work also establishes a new viewpoint on evaluating automated methods not only by segmentation accuracy but also by brain-behavior relationships. LINDA is made available online with trained models from over 100 patients. PMID:26756101
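    The Hausdorff distance reported alongside the Dice overlap measures the worst-case boundary disagreement between two point sets; a minimal numpy sketch (not LINDA's code, and on toy 2-D points rather than lesion surfaces):

    ```python
    import numpy as np

    def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
        """Symmetric Hausdorff distance between two point sets (n x d arrays)."""
        # pairwise Euclidean distances, shape (len(a), len(b))
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        # max over each set of the distance to the nearest point in the other
        return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

    a = np.array([[0.0, 0.0], [1.0, 0.0]])
    b = np.array([[0.0, 0.0], [4.0, 0.0]])
    print(hausdorff(a, b))  # point (4, 0) is 3.0 from its nearest neighbor in a
    ```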

  9. A framework for automatic heart sound analysis without segmentation

    PubMed Central

    2011-01-01

Background A new framework for heart sound analysis is proposed. One of the most difficult processes in heart sound analysis is segmentation, due to interference from murmurs. Method Equal numbers of cardiac cycles were extracted from heart sounds with different heart rates using information from the envelopes of autocorrelation functions, without the need to label individual fundamental heart sounds (FHS). The complete method consists of envelope detection, calculation of cardiac cycle lengths using autocorrelation of envelope signals, feature extraction using the discrete wavelet transform, principal component analysis, and classification using neural network bagging predictors. Result The proposed method was tested on a set of heart sounds obtained from several online databases and recorded with an electronic stethoscope. The geometric mean was used as the performance index. Average classification performance using ten-fold cross-validation was 0.92 for the noise-free case, 0.90 under white noise with 10 dB signal-to-noise ratio (SNR), and 0.90 under impulse noise of up to 0.3 s duration. Conclusion The proposed method showed promising results and high noise robustness for a wide range of heart sounds. However, more tests are needed to address any bias that may have been introduced by the different sources of heart sounds in the current training set and to concretely validate the method. Further work includes building a new training set recorded from actual patients and then further evaluating the method on it. PMID:21303558
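    The cycle-length step, estimating the cardiac period from the autocorrelation of an envelope signal, can be sketched as follows; the synthetic envelope and the `min_lag` peak-search heuristic are illustrative assumptions, not the authors' exact procedure:

    ```python
    import numpy as np

    def cycle_length(envelope: np.ndarray, min_lag: int = 2) -> int:
        """Estimate the dominant period (in samples) of a quasi-periodic
        envelope from the strongest autocorrelation peak past min_lag."""
        x = envelope - envelope.mean()
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
        ac = ac / ac[0]                                    # normalize
        # search the first half of the lags, skipping the zero-lag lobe
        search = ac[min_lag:len(ac) // 2]
        return min_lag + int(np.argmax(search))

    # Synthetic envelope with a 20-sample period
    t = np.arange(200)
    env = 1.0 + np.sin(2 * np.pi * t / 20)
    print(cycle_length(env))  # 20
    ```

    On real heart sounds the envelope is far noisier, which is why the paper combines this with wavelet features and an ensemble classifier rather than relying on the period estimate alone.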

  10. Improving the clinical correlation of multiple sclerosis black hole volume change by paired-scan analysis.

    PubMed

    Tam, Roger C; Traboulsee, Anthony; Riddehough, Andrew; Li, David K B

    2012-01-01

The change in T1-hypointense lesion ("black hole") volume is an important marker of pathological progression in multiple sclerosis (MS). Black hole boundaries often have low contrast and are difficult to determine accurately, so most (semi-)automated segmentation methods first compute the T2-hyperintense lesions, which are a superset of the black holes and are typically more distinct, to form a search space for the T1w lesions. Two main potential sources of measurement noise in longitudinal black hole volume computation are partial volume effects and variability in the T2w lesion segmentation. A paired-analysis approach is proposed herein that uses registration to equalize partial volume and lesion-mask processing to combine T2w lesion segmentations across time. The scans of 247 MS patients are used to compare a selected black hole computation method with an enhanced version incorporating paired analysis, using rank correlation with a clinical variable (the MS functional composite) as the primary outcome measure. The comparison is done at nine different intensity levels, as a previous study suggests that darker black holes may yield stronger correlations. The results demonstrate that paired analysis can strongly improve longitudinal correlation (from -0.148 to -0.303 in this sample) and may produce segmentations that are more sensitive to clinically relevant changes. PMID:24179734

  11. Performance evaluation of automated segmentation software on optical coherence tomography volume data.

    PubMed

    Tian, Jing; Varga, Boglarka; Tatrai, Erika; Fanni, Palya; Somfai, Gabor Mark; Smiddy, William E; Debuc, Delia Cabrera

    2016-05-01

Over the past two decades, a significant number of OCT segmentation approaches have been proposed in the literature. Each methodology has been conceived for and/or evaluated using specific datasets that do not reflect the complexities of the majority of retinal features observed in clinical settings, and no appropriate OCT dataset with ground truth exists that reflects the realities of everyday clinical retinal features. While the need for unbiased performance evaluation of automated segmentation algorithms is obvious, validation has usually been performed by comparison against the manual labelings of each individual study, and a common ground truth has been lacking. Therefore, a performance comparison of different algorithms using the same ground truth has never been performed. This paper reviews research-oriented tools for automated segmentation of retinal tissue on OCT images, and evaluates and compares the performance of these software tools against a common ground truth. PMID:27159849

  12. Segmentation and analysis of emission-computed-tomography images

    NASA Astrophysics Data System (ADS)

    Johnson, Valen E.; Bowsher, James E.; Qian, Jiang; Jaszczak, Ronald J.

    1992-12-01

    This paper describes a statistical model for reconstruction of emission computed tomography (ECT) images. A distinguishing feature of this model is that it is parameterized in terms of quantities of direct physiological significance, rather than only in terms of grey-level voxel values. Specifically, parameters representing regions, region means, and region volumes are included in the model formulation and are estimated directly from projection data. The model is specified hierarchically within the Bayesian paradigm. At the lowest level of the hierarchy, a Gibbs distribution is employed to specify a probability distribution on the space of all possible partitions of the discretized image scene. A novel feature of this distribution is that the number of partitioning elements, or image regions, is not assumed known a priori. By contrast, other segmentation models (e.g., Liang et al., 1991; Amit et al., 1991) require that the number of regions be specified prior to image reconstruction. Since the number of regions in a source distribution is seldom known a priori, allowing the number of regions to vary within the model framework is an important practical feature of this model. In the second level of the model hierarchy, random variables representing emission intensity are associated with each partitioning element or region. Individual voxel intensities are assumed to be drawn from a gamma distribution with mean equal to the region mean in the third stage, and in the final stage of the hierarchy projection data are assumed to be generated from Poisson distributions with means equal to weighted sums of voxel intensities.
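    The hierarchy described above can be sketched as a forward (generative) simulation. The region labels, gamma shape parameter, and projection weights below are illustrative assumptions for a toy 1D scene, not the paper's actual parameterization:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 1D "image" with 3 regions (labels) and their region means.
    labels = np.array([0, 0, 1, 1, 1, 2, 2, 0])
    region_means = np.array([2.0, 10.0, 5.0])

    # Third stage: voxel intensities ~ Gamma with mean equal to the region mean.
    shape = 4.0  # assumed gamma shape parameter
    voxel_intensity = rng.gamma(shape, region_means[labels] / shape)

    # Final stage: projection data ~ Poisson with mean equal to a
    # weighted sum of voxel intensities (weights assumed random here).
    system_matrix = rng.uniform(0.0, 1.0, size=(5, labels.size))
    projections = rng.poisson(system_matrix @ voxel_intensity)
    ```

    Inference in the paper runs in the reverse direction, estimating labels and region means from the projection counts; the sketch only shows the assumed data-generating process.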

  13. REACH. Teacher's Guide, Volume III. Task Analysis.

    ERIC Educational Resources Information Center

    Morris, James Lee; And Others

    Designed for use with individualized instructional units (CE 026 345-347, CE 026 349-351) in the electromechanical cluster, this third volume of the postsecondary teacher's guide presents the task analysis which was used in the development of the REACH (Refrigeration, Electro-Mechanical, Air Conditioning, Heating) curriculum. The major blocks of…

  14. Volumetric quantification of bone-implant contact using micro-computed tomography analysis based on region-based segmentation

    PubMed Central

    Kang, Sung-Won; Lee, Woo-Jin; Choi, Soon-Chul; Lee, Sam-Sun; Heo, Min-Suk; Huh, Kyung-Hoe

    2015-01-01

    Purpose We have developed a new method of segmenting the areas of absorbable implants and bone using region-based segmentation of micro-computed tomography (micro-CT) images, which allowed us to quantify volumetric bone-implant contact (VBIC) and volumetric absorption (VA). Materials and Methods The simple threshold technique generally used in micro-CT analysis cannot be used to segment the areas of absorbable implants and bone. Instead, a region-based segmentation method, a region-labeling method, and subsequent morphological operations were successively applied to micro-CT images. The three-dimensional VBIC and VA of the absorbable implant were then calculated over the entire volume of the implant. Two-dimensional (2D) bone-implant contact (BIC) and bone area (BA) were also measured based on the conventional histomorphometric method. Results VA and VBIC increased significantly as the healing period increased (p<0.05). VBIC values were significantly correlated with VA values (p<0.05) and with 2D BIC values (p<0.05). Conclusion It is possible to quantify VBIC and VA for absorbable implants using micro-CT analysis with a region-based segmentation method. PMID:25793178

  15. SU-E-J-238: Monitoring Lymph Node Volumes During Radiotherapy Using Semi-Automatic Segmentation of MRI Images

    SciTech Connect

    Veeraraghavan, H; Tyagi, N; Riaz, N; McBride, S; Lee, N; Deasy, J

    2014-06-01

    Purpose: Identification and image-based monitoring of lymph nodes growing due to disease could be an attractive alternative to prophylactic head and neck irradiation. We evaluated the accuracy of the user-interactive Grow Cut algorithm for volumetric segmentation of radiotherapy-relevant lymph nodes from MRI taken weekly during radiotherapy. Method: The algorithm employs user-drawn strokes in the image to volumetrically segment multiple structures of interest. We used 3D T2-weighted turbo spin echo images with an isotropic resolution of 1 mm3 and a FOV of 492×492×300 mm3 of head and neck cancer patients who underwent weekly MR imaging during the course of radiotherapy. Various lymph node (LN) levels (N2, N3, N4/5) were individually contoured on the weekly MR images by an expert physician and used as ground truth in our study. The segmentation results were compared with the physician-drawn lymph nodes using the DICE similarity score. Results: Three head and neck patients with 6 weekly MR images each were evaluated. Two patients had level N2 LN drawn and one patient had levels N2, N3 and N4/5 drawn on each MR image. The algorithm took an average of a minute to segment the entire volume (512×512×300 mm3). The algorithm achieved an overall DICE similarity score of 0.78. The time taken for initializing and obtaining the volumetric mask was about 5 min for cases with only N2 LN and about 15 min for the case with N2, N3 and N4/5 level nodes. The longer initialization time for the latter case was due to the need for accurate user inputs to separate overlapping portions of the different LN. The standard deviation in segmentation accuracy across time points was at most 0.05. Conclusions: Our initial evaluation of the Grow Cut segmentation shows reasonably accurate and consistent volumetric segmentations of LN with minimal user effort and time.
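    The DICE similarity score used for evaluation above has a simple closed form, 2|A∩B|/(|A|+|B|). A minimal sketch on toy binary masks (the masks are illustrative, not the study's data):

    ```python
    import numpy as np

    def dice(a: np.ndarray, b: np.ndarray) -> float:
        """Dice similarity coefficient between two binary masks."""
        a, b = a.astype(bool), b.astype(bool)
        intersection = np.logical_and(a, b).sum()
        total = a.sum() + b.sum()
        return 2.0 * intersection / total if total else 1.0

    # Toy 4x4 masks: |auto| = 4, |manual| = 6, overlap = 4 -> Dice = 8/10 = 0.8
    auto = np.zeros((4, 4), dtype=bool);   auto[1:3, 1:3] = True
    manual = np.zeros((4, 4), dtype=bool); manual[1:3, 1:4] = True
    ```

    A score of 1.0 means perfect overlap, 0.0 means none; the study's 0.78 indicates substantial but imperfect agreement with the expert contours.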

  16. Fractal Segmentation and Clustering Analysis for Seismic Time Slices

    NASA Astrophysics Data System (ADS)

    Ronquillo, G.; Oleschko, K.; Korvin, G.; Arizabalo, R. D.

    2002-05-01

    Fractal analysis has become part of the standard approach for quantifying texture on gray-tone or colored images. In this research we introduce a multi-stage fractal procedure to segment, classify and measure the clustering patterns on seismic time slices from a 3-D seismic survey. Five fractal classifiers (c1)-(c5) were designed to yield standardized, unbiased and precise measures of the clustering of seismic signals. The classifiers were tested on seismic time slices from the AKAL field, Cantarell Oil Complex, Mexico. The generalized lacunarity (c1), fractal signature (c2), heterogeneity (c3), rugosity of boundaries (c4) and continuity and tortuosity (c5) of the clusters are shown to be efficient measures of the time-space variability of seismic signals. The Local Fractal Analysis (LFA) of time slices has proved to be a powerful edge-detection filter for detecting and enhancing linear features such as faults or buried meandering rivers. The local fractal dimensions of the time slices were also compared with the self-affinity dimensions of the corresponding parts of porosity-logs. It is speculated that the spectral dimension of the negative-amplitude parts of the time-slice yields a measure of connectivity between the formation's high-porosity zones, and correlates with overall permeability.
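    For intuition, a fractal classifier of the box-counting type can be sketched in a few lines. This is a generic fractal-dimension estimator on binary images, not the paper's five classifiers (c1)-(c5):

    ```python
    import numpy as np

    def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
        """Box-counting estimate of the fractal dimension of a binary image:
        count occupied s-by-s boxes at each scale, then fit the slope of
        log(count) against log(1/s)."""
        counts = []
        for s in sizes:
            h, w = mask.shape
            trimmed = mask[: h - h % s, : w - w % s]  # drop ragged edges
            blocks = trimmed.reshape(trimmed.shape[0] // s, s,
                                     trimmed.shape[1] // s, s)
            counts.append(blocks.any(axis=(1, 3)).sum())
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes, float)),
                              np.log(counts), 1)
        return slope

    filled = np.ones((64, 64), dtype=bool)   # plane-filling set: dimension ~ 2
    diagonal = np.eye(64, dtype=bool)        # straight line: dimension ~ 1
    ```

    On seismic time slices, intermediate dimensions between 1 and 2 would indicate clustered or rugose patterns, which is the kind of texture measure the classifiers quantify.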

  17. Analysis of image thresholding segmentation algorithms based on swarm intelligence

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo

    2013-03-01

    Swarm intelligence-based image thresholding segmentation algorithms play an important role in image segmentation research. In this paper, we briefly introduce the theory behind four existing swarm intelligence-based image segmentation algorithms: the fish swarm algorithm, artificial bee colony, bacterial foraging algorithm, and particle swarm optimization. Several benchmark images are then tested in order to show how the four algorithms differ in segmentation accuracy, time consumption, convergence, and robustness to salt-and-pepper and Gaussian noise. Through these comparisons, this paper gives a qualitative analysis of the performance differences among the four algorithms. The conclusions should provide useful guidance for practical image segmentation.
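    As a sketch of how swarm intelligence applies to thresholding, the snippet below runs a minimal particle swarm that searches for the threshold maximizing Otsu's between-class variance on a toy bimodal histogram. The swarm parameters, the histogram, and the choice of Otsu's criterion are illustrative assumptions, not the compared algorithms' settings:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def between_class_variance(hist, t):
        """Otsu objective: between-class variance for integer threshold t."""
        p = hist / hist.sum()
        levels = np.arange(hist.size)
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            return 0.0
        mu0 = (levels[:t] * p[:t]).sum() / w0
        mu1 = (levels[t:] * p[t:]).sum() / w1
        return w0 * w1 * (mu0 - mu1) ** 2

    # Toy bimodal histogram over 256 grey levels (modes near 60 and 180).
    levels = np.arange(256)
    hist = np.exp(-(levels - 60) ** 2 / 200) + np.exp(-(levels - 180) ** 2 / 300)

    # Minimal particle swarm over the threshold (illustrative hyperparameters).
    pos = rng.uniform(1, 255, 20)
    vel = np.zeros(20)
    pbest = pos.copy()
    pbest_f = np.array([between_class_variance(hist, int(t)) for t in pos])
    gbest = pbest[pbest_f.argmax()]
    for _ in range(50):
        r1, r2 = rng.random(20), rng.random(20)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 1, 255)
        f = np.array([between_class_variance(hist, int(t)) for t in pos])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmax()]

    threshold = int(gbest)  # should land between the two histogram modes
    ```

    The four surveyed algorithms differ mainly in how candidate thresholds are generated and updated; the fitness function can stay the same.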

  18. Blood vessel segmentation using line-direction vector based on Hessian analysis

    NASA Astrophysics Data System (ADS)

    Nimura, Yukitaka; Kitasaka, Takayuki; Mori, Kensaku

    2010-03-01

    Grading of stenoses is important for deciding the treatment strategy in the diagnosis of vascular diseases such as arterial occlusive disease or thromboembolism. Understanding the vasculature is also important in minimally invasive surgery such as laparoscopic surgery or natural orifice translumenal endoscopic surgery. Precise segmentation and recognition of blood vessel regions are therefore indispensable tasks in medical image processing systems. Previous methods utilize only a ``lineness'' measure, computed by Hessian analysis. However, the intensity difference between a voxel of a thin blood vessel and a voxel of surrounding tissue is generally reduced by the partial volume effect, so previous methods cannot extract thin blood vessel regions precisely. This paper describes a novel blood vessel segmentation method that can extract thin blood vessels while suppressing false positives. The proposed method utilizes not only the lineness measure but also the line-direction vector corresponding to the largest eigenvalue in the Hessian analysis. By introducing line-direction information, it is possible to distinguish between a blood vessel voxel and a voxel having a low lineness measure caused by noise. In addition, we consider the scale of the blood vessel. The proposed method can reduce false positives in line-like tissues close to blood vessel regions by utilizing iterative region growing with scale information. The experimental results show that thin blood vessels (0.5 mm in diameter, almost the same as the voxel spacing) can be extracted accurately by the proposed method.
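    The idea of recovering a line direction from the Hessian can be sketched on a synthetic bright tube. Here the direction is taken as the eigenvector whose eigenvalue has the smallest magnitude, which is the common vesselness convention; papers differ in how they index the eigenvalues, and the toy volume and finite-difference Hessian are assumptions of this sketch:

    ```python
    import numpy as np

    # Synthetic volume: a bright Gaussian tube running along the z axis.
    z, y, x = np.mgrid[0:21, 0:21, 0:21]
    vol = np.exp(-((x - 10) ** 2 + (y - 10) ** 2) / 8.0)

    # Hessian at the tube centre via finite differences.
    grads = np.gradient(vol)
    H = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            H[i, j] = np.gradient(grads[i], axis=j)[10, 10, 10]

    evals, evecs = np.linalg.eigh(H)
    # For a bright tube: two strongly negative eigenvalues describe the
    # cross-section, while the near-zero eigenvalue's eigenvector points
    # along the line.
    order = np.argsort(np.abs(evals))
    line_direction = evecs[:, order[0]]   # expected to align with the z axis
    ```

    Comparing this direction between neighbouring voxels is what lets a method reject isolated noisy voxels whose spurious directions are incoherent.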

  19. Finite Volume Methods: Foundation and Analysis

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Ohlberger, Mario

    2003-01-01

    Finite volume methods are a class of discretization schemes that have proven highly successful in approximating the solution of a wide variety of conservation law systems. They are extensively used in fluid mechanics, porous media flow, meteorology, electromagnetics, models of biological processes, semiconductor device simulation and many other engineering areas governed by conservative systems that can be written in integral control volume form. This article reviews elements of the foundation and analysis of modern finite volume methods. The primary advantages of these methods are numerical robustness through the satisfaction of discrete maximum (minimum) principles, applicability on very general unstructured meshes, and the intrinsic local conservation properties of the resulting schemes. Throughout this article, specific attention is given to scalar nonlinear hyperbolic conservation laws and the development of high order accurate schemes for discretizing them. A key tool in the design and analysis of finite volume schemes suitable for non-oscillatory discontinuity capturing is discrete maximum principle analysis. A number of building blocks used in the development of numerical schemes possessing local discrete maximum principles are reviewed in one and several space dimensions, e.g. monotone fluxes, E-fluxes, TVD discretization, non-oscillatory reconstruction, slope limiters, positive coefficient schemes, etc. When available, theoretical results concerning a priori and a posteriori error estimates are given. Further advanced topics are then considered such as high order time integration, discretization of diffusion terms and the extension to systems of nonlinear conservation laws.
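    As a concrete instance of a monotone-flux scheme with a discrete maximum principle, a first-order upwind finite volume discretization of linear advection can be sketched as follows (the grid, CFL number, and initial pulse are illustrative assumptions):

    ```python
    import numpy as np

    # Solve u_t + a u_x = 0 on a periodic grid of cell averages, a > 0.
    a, nx = 1.0, 100
    dx = 1.0 / nx
    dt = 0.5 * dx / a                        # CFL number 0.5 <= 1
    x = (np.arange(nx) + 0.5) * dx           # cell centres
    u = np.where((0.3 < x) & (x < 0.5), 1.0, 0.0)   # square pulse
    mass0 = u.sum() * dx                     # locally conserved quantity

    # Upwind numerical flux F_{i+1/2} = a * u_i is a monotone flux, so the
    # update u_i <- u_i - dt/dx * (F_{i+1/2} - F_{i-1/2}) creates no new
    # extrema: u stays within [0, 1] (discrete maximum principle).
    for _ in range(40):
        u = u - a * dt / dx * (u - np.roll(u, 1))
    ```

    The flux-difference form makes local conservation automatic: summing the update over all cells telescopes the fluxes, so the total mass is unchanged up to rounding.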

  20. The Analysis of Image Segmentation Hierarchies with a Graph-based Knowledge Discovery System

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Cook, Diane J.; Ketkar, Nikhil; Aksoy, Selim

    2008-01-01

    Currently available pixel-based analysis techniques do not effectively extract the information content from the increasingly available high spatial resolution remotely sensed imagery data. A general consensus is that object-based image analysis (OBIA) is required to effectively analyze this type of data. OBIA is usually a two-stage process: image segmentation followed by an analysis of the segmented objects. We are exploring an approach to OBIA in which hierarchical image segmentations provided by the Recursive Hierarchical Segmentation (RHSEG) software developed at NASA GSFC are analyzed by the Subdue graph-based knowledge discovery system developed by a team at Washington State University. In this paper we discuss our initial approach to representing the RHSEG-produced hierarchical image segmentations in a graphical form understandable by Subdue, and provide results on real and simulated data. We also discuss planned improvements designed to more effectively and completely convey the hierarchical segmentation information to Subdue and to improve processing efficiency.

  1. Automatic brain tumor segmentation

    NASA Astrophysics Data System (ADS)

    Clark, Matthew C.; Hall, Lawrence O.; Goldgof, Dmitry B.; Velthuizen, Robert P.; Murtaugh, F. R.; Silbiger, Martin L.

    1998-06-01

    A system that automatically segments and labels complete glioblastoma-multiform tumor volumes in magnetic resonance images of the human brain is presented. The magnetic resonance images consist of three feature images (T1-weighted, proton density, T2-weighted) and are processed by a system which integrates knowledge-based techniques with multispectral analysis and is independent of a particular magnetic resonance scanning protocol. Initial segmentation is performed by an unsupervised clustering algorithm. The segmented image, along with the cluster centers for each class, is provided to a rule-based expert system which extracts the intra-cranial region. Multispectral histogram analysis separates suspected tumor from the rest of the intra-cranial region, with region analysis used in performing the final tumor labeling. This system has been trained on eleven volume data sets and tested on twenty-two unseen volume data sets acquired from a single magnetic resonance imaging system. The knowledge-based tumor segmentation was compared with radiologist-verified `ground truth' tumor volumes and results generated by a supervised fuzzy clustering algorithm. The results of this system generally correspond well to ground truth, both on a per slice basis and more importantly in tracking total tumor volume during treatment over time.

  2. Application of taxonomy theory, Volume 1: Computing a Hopf bifurcation-related segment of the feasibility boundary. Final report

    SciTech Connect

    Zaborszky, J.; Venkatasubramanian, V.

    1995-10-01

    Taxonomy Theory is the first precise comprehensive theory for large power system dynamics modeled in any detail. The motivation for this project is to show that it can be used, practically, for analyzing a disturbance that actually occurred on a large system, which affected a sizable portion of the Midwest with supercritical Hopf type oscillations. This event is well documented and studied. The report first summarizes Taxonomy Theory with an engineering flavor. Then various computational approaches are cited and analyzed for their suitability for use with Taxonomy Theory. Then working equations are developed for computing a segment of the feasibility boundary that bounds the region of (operating) parameters throughout which the operating point can be moved without losing stability. Then experimental software incorporating the large EPRI software package PSAPAC is developed. After a summary of the events during the subject disturbance, numerous large scale computations, up to 7600 buses, are reported. These results are reduced into graphical and tabular forms, which then are analyzed and discussed. The report is divided into two volumes. This volume illustrates the use of the Taxonomy Theory for computing the feasibility boundary and presents evidence that the event indeed led to a Hopf type oscillation on the system. Furthermore it proves that the Feasibility Theory can indeed be used for practical computation work with very large systems. Volume 2, a separate volume, will show that the disturbance led to a supercritical (that is, a stable oscillation) Hopf bifurcation.

  3. Fusing Markov random fields with anatomical knowledge and shape-based analysis to segment multiple sclerosis white matter lesions in magnetic resonance images of the brain

    NASA Astrophysics Data System (ADS)

    AlZubi, Stephan; Toennies, Klaus D.; Bodammer, N.; Hinrichs, Herman

    2002-05-01

    This paper proposes an image analysis system to segment multiple sclerosis lesions of magnetic resonance (MR) brain volumes consisting of 3 mm thick slices using three channels (images showing T1-, T2- and PD-weighted contrast). The method uses the statistical model of Markov Random Fields (MRF) at both low and high levels. The neighborhood system used in this MRF is defined at three levels: (1) Voxel to voxel: a low-level heterogeneous neighborhood system is used to restore noisy images. (2) Voxel to segment: a fuzzy atlas, which indicates the probability distribution of each tissue type in the brain, is registered elastically with the MRF. It is used by the MRF as a priori knowledge to correct misclassified voxels. (3) Segment to segment: remaining lesion candidates are processed by a feature-based classifier that uses unary and neighborhood information to eliminate more false positives. The algorithm's output was compared with an expert's manual segmentation.

  4. A novel iris segmentation algorithm based on small eigenvalue analysis

    NASA Astrophysics Data System (ADS)

    Harish, B. S.; Aruna Kumar, S. V.; Guru, D. S.; Ngo, Minh Ngoc

    2015-12-01

    In this paper, a simple and robust algorithm is proposed for iris segmentation. The proposed method consists of two steps. In the first step, the iris and pupil are segmented using the Robust Spatial Kernel FCM (RSKFCM) algorithm. RSKFCM is based on the traditional Fuzzy c-Means (FCM) algorithm, incorporates spatial information, and uses a kernel metric as the distance measure. In the second step, the small eigenvalue transformation is applied to localize the iris boundary. The transformation is based on statistical and geometrical properties of the small eigenvalue of the covariance matrix of a set of edge pixels. Extensive experiments are carried out on standard benchmark iris datasets (viz. CASIA-IrisV4 and UBIRIS.v2), and the proposed method is compared with existing iris segmentation methods. The proposed method has the lowest time complexity, O(n(i+p)). The experimental results emphasize that the proposed algorithm outperforms existing iris segmentation methods.
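    The geometric intuition behind the small-eigenvalue analysis can be sketched as follows: for a set of edge pixels, the smallest eigenvalue of their covariance matrix vanishes when the pixels are collinear and grows as they curve or spread. The function and toy point sets below are illustrative, not the paper's implementation:

    ```python
    import numpy as np

    def small_eigenvalue(points):
        """Smallest eigenvalue of the 2x2 covariance matrix of a set of
        2D edge pixels. Near zero for collinear points; larger for curved
        or scattered point sets."""
        cov = np.cov(np.asarray(points, dtype=float).T)
        return np.linalg.eigvalsh(cov)[0]   # eigvalsh sorts ascending

    line = [(t, 2 * t + 1) for t in range(10)]                   # collinear
    arc = [(np.cos(a), np.sin(a)) for a in np.linspace(0, np.pi, 10)]  # curved
    ```

    Applied to candidate edge sets around the pupil, this statistic helps separate the curved iris boundary from straight spurious edges such as eyelid lines.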

  5. Automated detection, 3D segmentation and analysis of high resolution spine MR images using statistical shape models

    NASA Astrophysics Data System (ADS)

    Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.

    2012-12-01

    Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract the 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes with the mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02 and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans and shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.

  6. Atlas-Based Segmentation Improves Consistency and Decreases Time Required for Contouring Postoperative Endometrial Cancer Nodal Volumes

    SciTech Connect

    Young, Amy V.; Wortham, Angela; Wernick, Iddo; Evans, Andrew; Ennis, Ronald D.

    2011-03-01

    Purpose: Accurate target delineation of the nodal volumes is essential for three-dimensional conformal and intensity-modulated radiotherapy planning for endometrial cancer adjuvant therapy. We hypothesized that atlas-based segmentation ('autocontouring') would lead to time savings and more consistent contours among physicians. Methods and Materials: A reference anatomy atlas was constructed using the data from 15 postoperative endometrial cancer patients by contouring the pelvic nodal clinical target volume on the simulation computed tomography scan according to the Radiation Therapy Oncology Group 0418 trial using commercially available software. On the simulation computed tomography scans from 10 additional endometrial cancer patients, the nodal clinical target volume autocontours were generated. Three radiation oncologists corrected the autocontours and delineated the manual nodal contours under timed conditions while unaware of the other contours. The time difference was determined, and the overlap of the contours was calculated using Dice's coefficient. Results: For all physicians, manual contouring of the pelvic nodal target volumes and editing the autocontours required a mean ± standard deviation of 32 ± 9 vs. 23 ± 7 minutes, respectively (p = .000001), a 26% time savings. For each physician, the time required to delineate the manual contours vs. correcting the autocontours was 30 ± 3 vs. 21 ± 5 min (p = .003), 39 ± 12 vs. 30 ± 5 min (p = .055), and 29 ± 5 vs. 20 ± 5 min (p = .0002). The mean overlap increased from manual contouring (0.77) to correcting the autocontours (0.79; p = .038).
Conclusion: The results of our study have shown that autocontouring leads to increased consistency and time savings when contouring the nodal target volumes for adjuvant treatment of endometrial cancer, although the autocontours still required careful editing to ensure that the lymph nodes at risk of recurrence are properly included in the target volume.

  7. A level set segmentation for computer-aided dental x-ray analysis

    NASA Astrophysics Data System (ADS)

    Li, Shuo; Fevens, Thomas; Krzyzak, Adam; Li, Song

    2005-04-01

    A level-set-based segmentation framework for Computer-Aided Dental X-ray Analysis (CADXA) is proposed. In this framework, we first employ level set methods to segment the dental X-ray image into three regions: Normal Region (NR), Potential Abnormal Region (PAR), and Abnormal and Background Region (ABR). The segmentation results are then used to build uncertainty maps based on a proposed uncertainty measurement method, and an analysis scheme is applied. The level set segmentation method consists of two stages: a training stage and a segmentation stage. During the training stage, manually chosen representative images are segmented using hierarchical level set region detection. The segmentation results are used to train a support vector machine (SVM) classifier. During the segmentation stage, a dental X-ray image is first classified by the trained SVM. The classifier provides an initial contour which is close to the correct boundary for the coupled level set method, which is then used to further segment the image. Different dental X-ray images were used to test the framework. Experimental results show that the proposed framework achieves faster level set segmentation and provides more detailed information and indications of possible problems to the dentist. To the best of our knowledge, this is one of the first results on CADXA using level set methods.

  8. Simultaneous Segmentation of Retinal Surfaces and Microcystic Macular Edema in SDOCT Volumes

    PubMed Central

    Antony, Bhavna J.; Lang, Andrew; Swingle, Emily K.; Al-Louzi, Omar; Carass, Aaron; Solomon, Sharon; Calabresi, Peter A.; Saidha, Shiv; Prince, Jerry L.

    2016-01-01

    Optical coherence tomography (OCT) is a noninvasive imaging modality that has begun to find widespread use in retinal imaging for the detection of a variety of ocular diseases. In addition to structural changes in the form of altered retinal layer thicknesses, pathological conditions may also cause the formation of edema within the retina. In multiple sclerosis, for instance, the nerve fiber and ganglion cell layers are known to thin. Additionally, the formation of pseudocysts called microcystic macular edema (MME) has also been observed in the eyes of about 5% of MS patients, and its presence has been shown to be correlated with disease severity. Previously, we proposed separate algorithms for the segmentation of retinal layers and MME, but since MME mainly occurs within specific regions of the retina, a simultaneous approach is advantageous. In this work, we propose an automated globally optimal graph-theoretic approach that simultaneously segments the retinal layers and the MME in volumetric OCT scans. SD-OCT scans from one eye of 12 MS patients with known MME and 8 healthy controls were acquired and the pseudocysts manually traced. The overall precision and recall of the pseudocyst detection was found to be 86.0% and 79.5%, respectively. PMID:27199502

  9. Simultaneous segmentation of retinal surfaces and microcystic macular edema in SDOCT volumes

    NASA Astrophysics Data System (ADS)

    Antony, Bhavna J.; Lang, Andrew; Swingle, Emily K.; Al-Louzi, Omar; Carass, Aaron; Solomon, Sharon; Calabresi, Peter A.; Saidha, Shiv; Prince, Jerry L.

    2016-03-01

    Optical coherence tomography (OCT) is a noninvasive imaging modality that has begun to find widespread use in retinal imaging for the detection of a variety of ocular diseases. In addition to structural changes in the form of altered retinal layer thicknesses, pathological conditions may also cause the formation of edema within the retina. In multiple sclerosis, for instance, the nerve fiber and ganglion cell layers are known to thin. Additionally, the formation of pseudocysts called microcystic macular edema (MME) has also been observed in the eyes of about 5% of MS patients, and its presence has been shown to be correlated with disease severity. Previously, we proposed separate algorithms for the segmentation of retinal layers and MME, but since MME mainly occurs within specific regions of the retina, a simultaneous approach is advantageous. In this work, we propose an automated globally optimal graph-theoretic approach that simultaneously segments the retinal layers and the MME in volumetric OCT scans. SD-OCT scans from one eye of 12 MS patients with known MME and 8 healthy controls were acquired and the pseudocysts manually traced. The overall precision and recall of the pseudocyst detection was found to be 86.0% and 79.5%, respectively.

  10. Introduction to Psychology and Leadership. Part Ten; Discipline. Segments I & II, Volume X.

    ERIC Educational Resources Information Center

    Westinghouse Learning Corp., Annapolis, MD.

    The tenth volume of the introduction to psychology and leadership course (see the final reports which summarize the development project, EM 010 418, EM 010 419, and EM 010 484) concentrates on discipline and is presented in two documents. This document is a self-instructional text with audiotape and intrinsically programed sections. EM 010 441 is…

  11. Introduction to Psychology and Leadership. Part Ten; Discipline. Segments I & II, Volume X, Script.

    ERIC Educational Resources Information Center

    Westinghouse Learning Corp., Annapolis, MD.

    The tenth volume of the introduction to psychology and leadership course (see the final reports which summarize the development project, EM 010 418, EM 010 419, and EM 010 484) concentrates on discipline and is presented in two parts. This document is a self-instructional text with a tape script and intrinsically programed sections. EM 010 442 is…

  12. Adolescents and alcohol: an explorative audience segmentation analysis

    PubMed Central

    2012-01-01

    Background So far, audience segmentation of adolescents with respect to alcohol has been carried out mainly on the basis of socio-demographic characteristics. In this study we examined whether it is possible to segment adolescents according to their values and attitudes towards alcohol, to use as guidance for prevention programmes. Methods A random sample of 7,000 adolescents aged 12 to 18 was drawn from the Municipal Basic Administration (MBA) of 29 Local Authorities in the province North-Brabant in the Netherlands. By means of an online questionnaire, data were gathered on values and attitudes towards alcohol, alcohol consumption, and socio-demographic characteristics. Results We were able to distinguish a total of five segments on the basis of five attitude factors. Moreover, the five segments also differed in drinking behavior independently of socio-demographic variables. Conclusions Our investigation was a first step in the search for ways of segmenting by factors other than socio-demographic characteristics. Further research is necessary in order to understand the implications of these results for alcohol prevention policy in concrete terms. PMID:22950946

  13. An Experimental Analysis of Phoneme Blending and Segmenting Skills

    ERIC Educational Resources Information Center

    Daly, Edward J., III; Johnson, Sarah; LeClair, Courtney

    2009-01-01

    In this two-experiment study, experimental analyses of phoneme blending and segmenting skills were conducted with four first-grade students. Intraindividual analyses were conducted to identify the effects of classroom-based instruction on blending phonemes in Experiment 1. In Experiment 2, the effects of an individualized intervention for the…

  14. Fast Hough transform analysis: pattern deviation from line segment

    NASA Astrophysics Data System (ADS)

    Ershov, E.; Terekhin, A.; Nikolaev, D.; Postnikov, V.; Karpenko, S.

    2015-12-01

    In this paper, we analyze properties of dyadic patterns. These patterns were proposed to approximate line segments in the fast Hough transform (FHT). Initially, these patterns had only a recursive computational scheme. We provide a simple closed-form expression for calculating point coordinates and their deviation from the corresponding ideal lines.

  15. Infant Word Segmentation and Childhood Vocabulary Development: A Longitudinal Analysis

    ERIC Educational Resources Information Center

    Singh, Leher; Reznick, J. Steven; Xuehua, Liang

    2012-01-01

    Infants begin to segment novel words from speech by 7.5 months, demonstrating an ability to track, encode and retrieve words in the context of larger units. Although it is presumed that word recognition at this stage is a prerequisite to constructing a vocabulary, the continuity between these stages of development has not yet been empirically…

  16. Semi-automatic cone beam CT segmentation of in vivo pre-clinical subcutaneous tumours provides an efficient non-invasive alternative for tumour volume measurements

    PubMed Central

    Brodin, N P; Tang, J; Skalina, K; Quinn, TJ; Basu, I; Guha, C

    2015-01-01

    Objective: To evaluate the feasibility and accuracy of using cone beam CT (CBCT) scans obtained in radiation studies using the small-animal radiation research platform to perform semi-automatic tumour segmentation of pre-clinical tumour volumes. Methods: Volume measurements were evaluated for different anatomical tumour sites, the flank, thigh and dorsum of the hind foot, for a variety of tumour cell lines. The estimated tumour volumes from CBCT and manual calliper measurements using different volume equations were compared with the “gold standard”, measured by weighing the tumours following euthanasia and tumour resection. The correlation between tumour volumes estimated with the different methods, compared with the gold standard, was assessed using Spearman's rank correlation coefficient, root-mean-square deviation and the coefficient of determination. Results: The semi-automatic CBCT volume segmentation performed favourably compared with manual calliper measures for flank tumours ≤2 cm3 and thigh tumours ≤1 cm3. For tumours >2 cm3 or foot tumours, the CBCT method was not able to accurately segment the tumour volumes and manual calliper measures were superior. Conclusion: We demonstrated that tumour volumes of flank and thigh tumours, obtained as a part of radiation studies using image-guided small-animal irradiators, can be estimated more efficiently and accurately using semi-automatic segmentation from CBCT scans. Advances in knowledge: This is the first study evaluating tumour volume assessment of pre-clinical subcutaneous tumours in different anatomical sites using on-board CBCT imaging. We also compared the accuracy of the CBCT method with manual calliper measures, using various volume calculation equations. PMID:25823502
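The agreement statistics named in the abstract (Spearman's rank correlation, root-mean-square deviation, coefficient of determination) can be sketched as follows. The volume values are hypothetical stand-ins, not the study's data:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical gold-standard tumour volumes (resection-based), in cm^3
gold = np.array([0.31, 0.55, 0.82, 1.10, 1.48, 1.90])
# Hypothetical estimates from CBCT segmentation and calliper equations
cbct = np.array([0.29, 0.58, 0.80, 1.15, 1.40, 2.05])
calliper = np.array([0.40, 0.50, 0.95, 1.30, 1.70, 2.40])

def rmsd(est, ref):
    """Root-mean-square deviation between estimate and reference."""
    return float(np.sqrt(np.mean((est - ref) ** 2)))

def r_squared(est, ref):
    """Coefficient of determination of the estimate against the reference."""
    ss_res = np.sum((ref - est) ** 2)
    ss_tot = np.sum((ref - ref.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

for name, est in [("CBCT", cbct), ("calliper", calliper)]:
    rho, _ = spearmanr(est, gold)
    print(name, round(rho, 3), round(rmsd(est, gold), 3), round(r_squared(est, gold), 3))
```

A smaller RMSD and higher R² against the gold standard would favour one estimation method over the other, which is how the abstract's comparison is framed.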

  17. Interactive high-quality visualization of color volume datasets using GPU-based refinements of segmentation data.

    PubMed

    Lee, Byeonghun; Kwon, Koojoo; Shin, Byeong-Seok

    2016-04-24

    Data sets containing colored anatomical images of the human body, such as Visible Human or Visible Korean, show realistic internal organ structures. However, imperfect segmentations of these color images, which are typically generated manually or semi-automatically, produce poor-quality rendering results. We propose an interactive high-quality visualization method using GPU-based refinements to aid in the study of anatomical structures. In order to represent the boundaries of a region-of-interest (ROI) smoothly, we apply Gaussian filtering to the opacity values of the color volume. Morphological grayscale erosion operations are performed to reduce the region size, which is expanded by Gaussian filtering. Pseudo-coloring and color blending are also applied to the color volume in order to give more informative rendering results. We implement these operations on GPUs to speed up the refinements. As a result, our method delivered high-quality result images with smooth boundaries and provided considerably faster refinements. The speed of these refinements is sufficient to be used with interactive renderings as the ROI changes, especially compared to CPU-based methods. Moreover, the pseudo-coloring methods presented anatomical structures clearly. PMID:27127935
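The refinement pipeline described (Gaussian filtering of opacity values, then grayscale erosion to counter the boundary expansion) can be sketched on CPU with scipy.ndimage. The synthetic mask, sigma, and structuring-element size are illustrative assumptions; the paper's implementation runs on GPU:

```python
import numpy as np
from scipy import ndimage

# Synthetic binary ROI mask standing in for a segmented colour volume's opacity
opacity = np.zeros((64, 64, 64), dtype=float)
opacity[20:44, 20:44, 20:44] = 1.0

# Step 1: Gaussian filtering smooths the ROI boundary (and slightly expands it)
smoothed = ndimage.gaussian_filter(opacity, sigma=2.0)

# Step 2: grayscale erosion shrinks the region back toward its original extent
refined = ndimage.grey_erosion(smoothed, size=(3, 3, 3))

print(round(opacity.sum()), round(smoothed.sum()), round(refined.sum()))
```

The refined opacity volume keeps the smooth boundary from the Gaussian step while the erosion removes the size inflation, matching the two-stage refinement the abstract describes.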

  18. Latent segmentation based count models: Analysis of bicycle safety in Montreal and Toronto.

    PubMed

    Yasmin, Shamsunnahar; Eluru, Naveen

    2016-10-01

    The study contributes to the literature on bicycle safety by building on traditional count regression models to investigate factors affecting bicycle crashes at the Traffic Analysis Zone (TAZ) level. A TAZ is a traffic-related geographic entity that is most frequently used as the spatial unit for macroscopic crash risk analysis. In conventional count models, the impact of exogenous factors is restricted to be the same across the entire region. However, it is possible that the influence of exogenous factors might vary across different TAZs. To accommodate the potential variation in the impact of exogenous factors we formulate latent segmentation based count models. Specifically, we formulate and estimate latent segmentation based Poisson (LP) and latent segmentation based Negative Binomial (LNB) models to study bicycle crash counts. In our latent segmentation approach, we allow for more than two segments and also consider a large set of variables in segmentation and segment specific models. The formulated models are estimated using bicycle-motor vehicle crash data from the Island of Montreal and City of Toronto for the years 2006 through 2010. The TAZ level variables considered in our analysis include accessibility measures, exposure measures, sociodemographic characteristics, socioeconomic characteristics, road network characteristics and built environment. A policy analysis is also conducted to illustrate the applicability of the proposed model for planning purposes. This macro-level research would assist decision makers, transportation officials and community planners to make informed decisions to proactively improve bicycle safety - a prerequisite to promoting a culture of active transportation. PMID:27442595
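The core idea of a latent segmentation based count model is that each TAZ belongs probabilistically to a segment with its own count process. A minimal EM sketch for a two-segment, intercept-only Poisson mixture is shown below; this is a deliberate simplification of the paper's LP model (which includes segment-specific covariates and allows more segments), and the crash counts are synthetic:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
# Synthetic TAZ-level crash counts from two latent segments (low/high risk)
counts = np.concatenate([rng.poisson(1.0, 300), rng.poisson(6.0, 200)])

# EM for a two-segment latent-class Poisson (intercept-only for brevity)
pi, lam = np.array([0.5, 0.5]), np.array([0.5, 5.0])
for _ in range(200):
    # E-step: posterior probability of each segment for each TAZ
    like = pi * poisson.pmf(counts[:, None], lam)
    resp = like / like.sum(axis=1, keepdims=True)
    # M-step: update segment shares and segment-specific Poisson means
    pi = resp.mean(axis=0)
    lam = (resp * counts[:, None]).sum(axis=0) / resp.sum(axis=0)

print(np.round(pi, 2), np.round(lam, 2))
```

In the full model, the segment means `lam` would be replaced by segment-specific regressions on the TAZ variables, and the segment shares `pi` by a segmentation model, but the E-step/M-step structure is the same.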

  19. Segmentation and Classification of Remotely Sensed Images: Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Syed, Abdul Haleem

    Land-use-and-land-cover (LULC) mapping is crucial in precision agriculture, environmental monitoring, disaster response, and military applications. The demand for improved and more accurate LULC maps has led to the emergence of a key methodology known as Geographic Object-Based Image Analysis (GEOBIA). The core idea of the GEOBIA for an object-based classification system (OBC) is to change the unit of analysis from single-pixels to groups-of-pixels called `objects' through segmentation. While this new paradigm solved problems and improved global accuracy, it also raised new challenges such as the loss of accuracy in categories that are less abundant, but potentially important. Although this trade-off may be acceptable in some domains, the consequences of such an accuracy loss could be potentially fatal in others (for instance, landmine detection). This thesis proposes a method to improve OBC performance by eliminating such accuracy losses. Specifically, we examine the two key players of an OBC system: Hierarchical Segmentation and Supervised Classification. Further, we propose a model to understand the source of accuracy errors in minority categories and provide a method called Scale Fusion to eliminate those errors. This proposed fusion method involves two stages. First, the characteristic scale for each category is estimated through a combination of segmentation and supervised classification. Next, these estimated scales (segmentation maps) are fused into one combined-object-map. Classification performance is evaluated by comparing results of the multi-cut-and-fuse approach (proposed) to the traditional single-cut (SC) scale selection strategy. Testing on four different data sets revealed that our proposed algorithm improves accuracy on minority classes while performing just as well on abundant categories. Another active obstacle, presented by today's remotely sensed images, is the volume of information produced by our modern sensors with high spatial and

  20. Computed Tomographic Image Analysis Based on FEM Performance Comparison of Segmentation on Knee Joint Reconstruction

    PubMed Central

    Jang, Seong-Wook; Seo, Young-Jin; Yoo, Yon-Sik

    2014-01-01

    The demand for an accurate and accessible image segmentation to generate 3D models from CT scan data has been increasing as such models are required in many areas of orthopedics. In this paper, to find the optimal image segmentation to create a 3D model of the knee CT data, we compared and validated segmentation algorithms based on both objective comparisons and finite element (FE) analysis. For comparison purposes, we used 1 model reconstructed in accordance with the instructions of a clinical professional and 3 models reconstructed using image processing algorithms (Sobel operator, Laplacian of Gaussian operator, and Canny edge detection). Comparison was performed by inspecting intermodel morphological deviations with the iterative closest point (ICP) algorithm, and FE analysis was performed to examine the effects of the segmentation algorithm on the results of the knee joint movement analysis. PMID:25538950

  1. Computed tomographic image analysis based on FEM performance comparison of segmentation on knee joint reconstruction.

    PubMed

    Jang, Seong-Wook; Seo, Young-Jin; Yoo, Yon-Sik; Kim, Yoon Sang

    2014-01-01

    The demand for an accurate and accessible image segmentation to generate 3D models from CT scan data has been increasing as such models are required in many areas of orthopedics. In this paper, to find the optimal image segmentation to create a 3D model of the knee CT data, we compared and validated segmentation algorithms based on both objective comparisons and finite element (FE) analysis. For comparison purposes, we used 1 model reconstructed in accordance with the instructions of a clinical professional and 3 models reconstructed using image processing algorithms (Sobel operator, Laplacian of Gaussian operator, and Canny edge detection). Comparison was performed by inspecting intermodel morphological deviations with the iterative closest point (ICP) algorithm, and FE analysis was performed to examine the effects of the segmentation algorithm on the results of the knee joint movement analysis. PMID:25538950
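The intermodel morphological comparison via the iterative closest point (ICP) algorithm mentioned above can be sketched with a standard point-to-point ICP loop (nearest-neighbour matching plus a Kabsch rigid alignment). The point clouds and misalignment below are synthetic stand-ins, not the paper's knee models:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Kabsch: least-squares rotation R and translation t with R @ p + t ~ q."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp_rms(src, dst, iters=20):
    """Align src to dst by iterating NN matching + Kabsch; return RMS deviation."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    dist, _ = tree.query(cur)
    return float(np.sqrt(np.mean(dist ** 2)))

rng = np.random.default_rng(2)
model_a = rng.normal(size=(500, 3))            # stand-in surface points
theta = 0.05                                   # small known misalignment
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
model_b = model_a @ Rz.T + np.array([0.05, -0.02, 0.03])

initial = float(np.sqrt(np.mean(cKDTree(model_b).query(model_a)[0] ** 2)))
final = icp_rms(model_a, model_b)
print(round(initial, 4), round(final, 4))      # RMS deviation before/after ICP
```

After alignment, the residual nearest-neighbour distances quantify the morphological deviation between two reconstructions, which is the role ICP plays in the abstract's comparison.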

  2. Extensions to analysis of ignition transients of segmented rocket motors

    NASA Technical Reports Server (NTRS)

    Caveny, L. H.

    1978-01-01

    The analytical procedures described in NASA CR-150162 were extended for the purpose of analyzing the data from the first static test of the Solid Rocket Booster for the Space Shuttle. The component of thrust associated with the rapid changes in the internal flow field was calculated. This dynamic thrust component was shown to be prominent during flame spreading. An approach was implemented to account for the close coupling between the igniter and head end segment of the booster. The tips of the star points were ignited first, followed by radial and longitudinal flame spreading.

  3. Sensitivity analysis of volume scattering phase functions.

    PubMed

    Tuchow, Noah; Broughton, Jennifer; Kudela, Raphael

    2016-08-01

    To solve the radiative transfer equation and relate inherent optical properties (IOPs) to apparent optical properties (AOPs), knowledge of the volume scattering phase function is required. Due to the difficulty of measuring the phase function, it is frequently approximated. We explore the sensitivity of derived AOPs to the phase function parameterization, and compare measured and modeled values of both the AOPs and estimated phase functions using data from Monterey Bay, California during an extreme "red tide" bloom event. Using in situ measurements of absorption and attenuation coefficients, as well as two sets of measurements of the volume scattering function (VSF), we compared output from the Hydrolight radiative transfer model to direct measurements. We found that several common assumptions used in parameterizing the radiative transfer model consistently introduced overestimates of modeled versus measured remote-sensing reflectance values. Phase functions from VSF data derived from measurements at multiple wavelengths and a single scattering angle significantly overestimated reflectances when using the manufacturer-supplied corrections, but were substantially improved using newly published corrections; phase functions calculated from VSF measurements using three angles and three wavelengths and processed using manufacturer-supplied corrections were comparable, demonstrating that reasonable predictions can be made using two commercially available instruments. While other studies have reached similar conclusions, our work extends the analysis to coastal waters dominated by an extreme algal bloom with surface chlorophyll concentrations in excess of 100 mg m-3. PMID:27505819
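As a hedged illustration of what any scattering phase function parameterization must satisfy, the sketch below uses the Henyey-Greenstein function, a common single-parameter approximation in radiative transfer (not necessarily the parameterization used in this study), and checks that it integrates to unity over the sphere:

```python
import numpy as np
from scipy.integrate import trapezoid

def henyey_greenstein(cos_theta, g):
    """HG phase function (per steradian) with asymmetry parameter g."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * cos_theta) ** 1.5)

theta = np.linspace(0.0, np.pi, 20001)
integrals = []
for g in (0.0, 0.7, 0.924):   # 0.924 is an often-quoted value for ocean water
    p = henyey_greenstein(np.cos(theta), g)
    # Normalization over the sphere: 2*pi * integral of p(theta) sin(theta) dtheta = 1
    integrals.append(float(trapezoid(2.0 * np.pi * p * np.sin(theta), theta)))
print([round(v, 3) for v in integrals])
```

A misnormalized or overly forward-peaked phase function feeds directly into biased modeled reflectances, which is the kind of sensitivity the abstract examines.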

  4. Landmine detection using IR image segmentation by means of fractal dimension analysis

    NASA Astrophysics Data System (ADS)

    Abbate, Horacio A.; Gambini, Juliana; Delrieux, Claudio; Castro, Eduardo H.

    2009-05-01

    This work is concerned with buried landmine detection using long-wave infrared images obtained during the heating or cooling of the soil, followed by a segmentation process of the images. The segmentation is performed by means of a local fractal dimension (LFD) analysis as a feature descriptor. We use two different LFD estimators, box-counting dimension (BC) and differential box-counting dimension (DBC). These features are computed on a per-pixel basis, and the set of features is clustered by means of the K-means method. This segmentation technique produces outstanding results, with low computational cost.
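The box-counting (BC) estimator at the heart of such LFD features can be sketched as follows. The degenerate test images (a line, dimension near 1, and a filled square, dimension near 2) are illustrative only, and the per-pixel windowing and K-means clustering steps are omitted:

```python
import numpy as np

def box_counting_dimension(img):
    """Estimate the fractal (box-counting) dimension of a binary image."""
    n = img.shape[0]                      # assume square, power-of-two side
    sizes = [2 ** k for k in range(1, int(np.log2(n)))]
    counts = []
    for s in sizes:
        # Tile the image with s x s boxes and count boxes containing any pixel
        boxes = img.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # Slope of log N(s) versus log(1/s) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)

n = 64
line = np.zeros((n, n), dtype=bool)
line[n // 2, :] = True                    # 1-D structure
square = np.ones((n, n), dtype=bool)      # 2-D structure
d_line = box_counting_dimension(line)
d_square = box_counting_dimension(square)
print(round(d_line, 2), round(d_square, 2))   # ~1.0 and ~2.0
```

In the paper's setting, this estimator would be applied to a window around each pixel to build the per-pixel feature that K-means then clusters.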

  5. Theoretical analysis and experimental verification on valve-less piezoelectric pump with hemisphere-segment bluff-body

    NASA Astrophysics Data System (ADS)

    Ji, Jing; Zhang, Jianhui; Xia, Qixiao; Wang, Shouyin; Huang, Jun; Zhao, Chunsheng

    2014-05-01

    Existing research on no-moving-part valves in valve-less piezoelectric pumps mainly concentrates on pipeline valves and chamber-bottom valves, which complicates the structure and manufacturing process of the pump channel and chamber bottom. Furthermore, valves whose positions are fixed with respect to the inlet and outlet make the flow rate harder to adjust and control. In order to overcome these shortcomings, this paper puts forward a novel implantable structure of valve-less piezoelectric pump with hemisphere-segments in the pump chamber. Based on the theory of flow around a bluff body, the flow resistance differs between the spherical surface and the flat circular face of a hemisphere-segment when fluid flows through, and macroscopic flow resistance differences are thus formed. A novel valve-less piezoelectric pump with hemisphere-segment bluff-body (HSBB) is presented and designed; the HSBB acts as the no-moving-part valve. By the method of volume and momentum comparison, the stress on the bluff-body in the pump chamber is analyzed, the essential reason for unidirectional fluid pumping is expounded, and the flow rate formula is obtained. To verify the theory, a prototype was produced and used for experimental research on the relationship between flow rate, pressure difference, voltage, and frequency, which confirms the theory. This prototype has six hemisphere-segments in a chamber filled with water, and the effective diameter of the piezoelectric bimorph is 30 mm. The experimental results show that the flow rate can reach 0.50 mL/s at a frequency of 6 Hz and a voltage of 110 V, while the pressure difference can reach 26.2 mm H2O at a frequency of 6 Hz and a voltage of 160 V. This research proposes a valve-less piezoelectric pump with a hemisphere-segment bluff-body, and its validity and feasibility are verified through theoretical analysis and experiment.

  6. Health lifestyles: audience segmentation analysis for public health interventions.

    PubMed

    Slater, M D; Flora, J A

    1991-01-01

    This article is concerned with the application of market segmentation techniques in order to improve the planning and implementation of public health education programs. Seven distinctive patterns of health attitudes, social influences, and behaviors are identified using cluster analytic techniques in a sample drawn from four central California cities, and are subjected to construct and predictive validation: The lifestyle clusters predict behaviors including seatbelt use, vitamin C use, and attention to health information. The clusters also predict self-reported improvements in health behavior as measured in a two-year follow-up survey, e.g., eating less salt and losing weight, and self-reported new moderate and new vigorous exercise. Implications of these lifestyle clusters for public health education and intervention planning, and the larger potential of lifestyle clustering techniques in public health efforts, are discussed. PMID:2055779

  7. Factor Analysis on Cogging Torques in Segment Core Motors

    NASA Astrophysics Data System (ADS)

    Enomoto, Yuji; Kitamura, Masashi; Sakai, Toshihiko; Ohara, Kouichiro

    The segment core method is a popular method employed in motor core manufacturing; however, this method does not allow the stator core precision to be enhanced because the stator is assembled from many cores. The axial eccentricity of rotor and stator and the internal roundness of the stator core are regarded as the main factors which affect cogging torque. In the present study, the way in which a motor with a split-type stator generates a cogging torque is investigated to determine whether high-precision assembly of stator cores can reduce cogging torque. Here, DC brushless motors were used to verify the influence of stator-rotor eccentricity and roundness of the stator bore on cogging torque. The evaluation results prove the feasibility of reducing cogging torque by improving the stator core precision. Therefore, improving the eccentricity and roundness will enable stable production of well-controlled motors with low torque ripple.

  8. Analysis of radially cracked ring segments subject to forces and couples

    NASA Technical Reports Server (NTRS)

    Gross, B.; Strawley, J. E.

    1975-01-01

    Results of planar boundary collocation analysis are given for ring segment (C shaped) specimens with radial cracks, subjected to combined forces and couples. Mode I stress intensity factors and crack mouth opening displacements were determined for ratios of outer to inner radius in the range 1.1 to 2.5, and ratios of crack length to segment width in the range 0.1 to 0.8.

  9. Analysis of radially cracked ring segments subject to forces and couples

    NASA Technical Reports Server (NTRS)

    Gross, B.; Srawley, J. E.

    1977-01-01

    Results of planar boundary collocation analysis are given for ring segment (C-shaped) specimens with radial cracks, subjected to combined forces and couples. Mode I stress intensity factors and crack mouth opening displacements were determined for ratios of outer to inner radius in the range 1.1 to 2.5 and ratios of crack length to segment width in the range 0.1 to 0.8.

  10. SU-E-J-123: Assessing Segmentation Accuracy of Internal Volumes and Sub-Volumes in 4D PET/CT of Lung Tumors Using a Novel 3D Printed Phantom

    SciTech Connect

    Soultan, D; Murphy, J; James, C; Hoh, C; Moiseenko, V; Cervino, L; Gill, B

    2015-06-15

    Purpose: To assess the accuracy of internal target volume (ITV) segmentation of lung tumors for treatment planning of simultaneous integrated boost (SIB) radiotherapy as seen in 4D PET/CT images, using a novel 3D-printed phantom. Methods: The insert mimics high PET tracer uptake in the core and 50% uptake in the periphery, by using a porous design at the periphery. A lung phantom with the insert was placed on a programmable moving platform. Seven breathing waveforms of ideal and patient-specific respiratory motion patterns were fed to the platform, and 4D PET/CT scans were acquired for each of them. CT images were binned into 10 phases, and PET images were binned into 5 phases following the clinical protocol. Two scenarios were investigated for segmentation: a gate 30–70 window, and no gating. The radiation oncologist contoured the outer ITV of the porous insert on CT images, while the internal void volume with 100% uptake was contoured on PET images, as it was indistinguishable from the outer volume in CT images. Segmented ITVs were compared to the expected volumes based on known target size and motion. Results: 3 ideal breathing patterns, 2 regular-breathing patient waveforms, and 2 irregular-breathing patient waveforms were used for this study. 18F-FDG was used as the PET tracer. The segmented ITVs from CT closely matched the expected volumes for both no gating and the gate 30–70 window, with disagreement of contoured ITV with respect to the expected volume not exceeding 13%. PET contours were seen to overestimate volumes in all the cases, up to more than 40%. Conclusion: 4DPET images of a novel 3D printed phantom designed to mimic different uptake values were obtained. 4DPET contours overestimated ITV volumes in all cases, while 4DCT contours matched expected ITV volume values. Investigation of the cause and effects of the discrepancies is ongoing.

  11. A new partial volume segmentation approach to extract bladder wall for computer-aided detection in virtual cystoscopy

    NASA Astrophysics Data System (ADS)

    Li, Lihong; Wang, Zigang; Li, Xiang; Wei, Xinzhou; Adler, Howard L.; Huang, Wei; Rizvi, Syed A.; Meng, Hong; Harrington, Donald P.; Liang, Zhengrong

    2004-04-01

    We propose a new partial volume (PV) segmentation scheme to extract bladder wall for computer aided detection (CAD) of bladder lesions using multispectral MR images. Compared with CT images, MR images provide not only a better tissue contrast between bladder wall and bladder lumen, but also the multispectral information. As multispectral images are spatially registered over three-dimensional space, information extracted from them is more valuable than that extracted from each image individually. Furthermore, the intrinsic T1 and T2 contrast of the urine against the bladder wall eliminates the invasive air insufflation procedure. Because the earliest stages of bladder lesion growth tend to develop gradually and migrate slowly from the mucosa into the bladder wall, our proposed PV algorithm quantifies images as percentages of tissues inside each voxel. It preserves both morphology and texture information and provides tissue growth tendency in addition to the anatomical structure. Our CAD system utilizes a multi-scan protocol on dual (full and empty of urine) states of the bladder to extract both geometrical and texture information. Moreover, multi-scan of transverse and coronal MR images eliminates motion artifacts. Experimental results indicate that the presented scheme is feasible for mass screening and lesion detection in virtual cystoscopy (VC).
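A minimal sketch of the partial-volume idea, quantifying each voxel as a tissue percentage, under a two-tissue linear mixture assumption. The intensity means below are hypothetical, and the actual method uses registered multispectral MR data rather than a single channel:

```python
import numpy as np

# Two-tissue partial-volume model: each voxel intensity is a linear mixture
#   I = f * mu_wall + (1 - f) * mu_urine,  with wall fraction f in [0, 1]
mu_urine, mu_wall = 200.0, 80.0   # hypothetical MR intensity means

def wall_fraction(intensity):
    """Recover the per-voxel bladder-wall fraction from the mixture model."""
    f = (intensity - mu_urine) / (mu_wall - mu_urine)
    return np.clip(f, 0.0, 1.0)

# Voxels crossing the lumen-wall boundary, from pure urine to pure wall
voxels = np.array([200.0, 170.0, 140.0, 110.0, 80.0])
fractions = wall_fraction(voxels)
print(fractions)
```

Keeping these fractional values, instead of hard 0/1 labels, is what preserves the morphology and texture information the abstract emphasizes.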

  12. Segmental Musculoskeletal Examinations using Dual-Energy X-Ray Absorptiometry (DXA): Positioning and Analysis Considerations

    PubMed Central

    Hart, Nicolas H.; Nimphius, Sophia; Spiteri, Tania; Cochrane, Jodie L.; Newton, Robert U.

    2015-01-01

    Musculoskeletal examinations provide informative and valuable quantitative insight into muscle and bone health. DXA is one mainstream tool used to accurately and reliably determine body composition components and bone mass characteristics in-vivo. Presently, whole body scan models separate the body into axial and appendicular regions; however, there is a need for localised appendicular segmentation models to further examine regions of interest within the upper and lower extremities. Similarly, inconsistencies pertaining to patient positioning exist in the literature which influence measurement precision and analysis outcomes, highlighting a need for standardised procedure. This paper provides standardised and reproducible: 1) positioning and analysis procedures using DXA and 2) reliable segmental examinations through descriptive appendicular boundaries. Whole-body scans were performed on forty-six (n = 46) football athletes (age: 22.9 ± 4.3 yrs; height: 1.85 ± 0.07 m; weight: 87.4 ± 10.3 kg; body fat: 11.4 ± 4.5 %) using DXA. All segments across all scans were analysed three times by the main investigator on three separate days, and by three independent investigators a week following the original analysis. To examine intra-rater and inter-rater reliability between days and between researchers, coefficients of variation (CV) and intraclass correlation coefficients (ICC) were determined. Positioning and segmental analysis procedures presented in this study produced very high, nearly perfect intra-tester (CV ≤ 2.0%; ICC ≥ 0.988) and inter-tester (CV ≤ 2.4%; ICC ≥ 0.980) reliability, demonstrating excellent reproducibility within and between practitioners. Standardised examinations of axial and appendicular segments are necessary. Future studies aiming to quantify and report segmental analyses of the upper- and lower-body musculoskeletal properties using whole-body DXA scans are encouraged to use the patient positioning and image analysis procedures outlined in this
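The reliability statistics reported (CV and ICC) can be computed as sketched below. The ICC form shown is ICC(2,1) (two-way random effects, absolute agreement, single measures, per Shrout and Fleiss), which is an assumption since the abstract does not state the ICC model, and the repeated-measures data are synthetic:

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    x has shape (n subjects, k raters/occasions)."""
    n, k = x.shape
    grand = x.mean()
    msr = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects
    msc = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)   # raters
    sse = np.sum((x - x.mean(axis=1, keepdims=True)
                    - x.mean(axis=0, keepdims=True) + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))
    return float((msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n))

def cv_percent(x):
    """Mean within-subject coefficient of variation, in percent."""
    return float(np.mean(x.std(axis=1, ddof=1) / x.mean(axis=1)) * 100)

rng = np.random.default_rng(3)
true_mass = rng.uniform(5.0, 12.0, size=40)                  # 40 athletes, kg
ratings = true_mass[:, None] + rng.normal(0.0, 0.05, (40, 3))  # 3 analysis passes
icc = icc_2_1(ratings)
cv = cv_percent(ratings)
print(round(icc, 3), round(cv, 2))
```

With small measurement noise relative to between-subject spread, the ICC approaches 1 and the CV stays small, mirroring the near-perfect reliability figures the study reports.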

  13. Analysis of wear mechanism and influence factors of drum segment of hot rolling coiler

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Peng, Yan; Liu, Hongmin; Liu, Yunfei

    2013-03-01

    Because the work environment of the segment is complex and wear failures frequently occur, the wear mechanism corresponding to the load is a key factor in solving this problem. At present, many researchers have investigated the failure of the segment, but have not taken into account the combined influences of the matching and coiling processes. To investigate the wear failure of the drum segment of the hot rolling coiler, the MMU-5G abrasion tester is applied to simulate the wear behavior under different temperatures, different loads and different stages, and the friction coefficients and wear rates are acquired. Scanning electron microscopy (SEM) is used to observe the micro-morphology of the worn surface, X-ray energy dispersive spectroscopy (EDS) is used to analyze the chemical composition of the worn surface, and finally the wear mechanism of the segment in the working process is judged and the influence patterns of the environmental factors on the material wear behavior are identified. The test and analysis results show that under a given load, the wear of the segment gradually changes from abrasive wear to oxidation wear as the temperature increases, and the wear degree decreases; at a given temperature, the main wear mechanism of the segment changes from abrasive wear to spalling wear as the load increases, and the wear degree slightly increases. The proposed research provides a theoretical foundation and a practical reference for optimizing the wear behavior and extending the working life of the segment.

  14. Imalytics Preclinical: Interactive Analysis of Biomedical Volume Data

    PubMed Central

    Gremse, Felix; Stärk, Marius; Ehling, Josef; Menzel, Jan Robert; Lammers, Twan; Kiessling, Fabian

    2016-01-01

    A software tool is presented for interactive segmentation of volumetric medical data sets. To allow interactive processing of large data sets, segmentation operations, and rendering are GPU-accelerated. Special adjustments are provided to overcome GPU-imposed constraints such as limited memory and host-device bandwidth. A general and efficient undo/redo mechanism is implemented using GPU-accelerated compression of the multiclass segmentation state. A broadly applicable set of interactive segmentation operations is provided which can be combined to solve the quantification task of many types of imaging studies. A fully GPU-accelerated ray casting method for multiclass segmentation rendering is implemented which is well-balanced with respect to delay, frame rate, worst-case memory consumption, scalability, and image quality. Performance of segmentation operations and rendering are measured using high-resolution example data sets showing that GPU-acceleration greatly improves the performance. Compared to a reference marching cubes implementation, the rendering was found to be superior with respect to rendering delay and worst-case memory consumption while providing sufficiently high frame rates for interactive visualization and comparable image quality. The fast interactive segmentation operations and the accurate rendering make our tool particularly suitable for efficient analysis of multimodal image data sets which arise in large amounts in preclinical imaging studies. PMID:26909109
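The paper's GPU-accelerated compression of the multiclass segmentation state for undo/redo can be illustrated on CPU with run-length encoding of a label volume; this is an assumption about the general idea (label volumes from interactive segmentation are highly run-length compressible), not the tool's actual compression scheme:

```python
import numpy as np

def rle_encode(labels):
    """Run-length encode a flattened multiclass label volume."""
    flat = labels.ravel()
    change = np.flatnonzero(np.diff(flat)) + 1          # run boundaries
    starts = np.concatenate(([0], change))
    lengths = np.diff(np.concatenate((starts, [flat.size])))
    return flat[starts].copy(), lengths, labels.shape

def rle_decode(values, lengths, shape):
    """Reconstruct the label volume exactly from its run-length form."""
    return np.repeat(values, lengths).reshape(shape)

# Hypothetical multiclass segmentation state: one painted "organ" class
labels = np.zeros((64, 64, 64), dtype=np.uint8)
labels[10:30, 10:30, 10:30] = 1
values, lengths, shape = rle_encode(labels)
restored = rle_decode(values, lengths, shape)
print(labels.size, values.size, np.array_equal(labels, restored))
```

Each undo step can then store a compact snapshot like `(values, lengths, shape)` instead of a full voxel copy, which is the memory trade-off the abstract's undo/redo mechanism addresses.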

  15. Imalytics Preclinical: Interactive Analysis of Biomedical Volume Data.

    PubMed

    Gremse, Felix; Stärk, Marius; Ehling, Josef; Menzel, Jan Robert; Lammers, Twan; Kiessling, Fabian

    2016-01-01

    A software tool is presented for interactive segmentation of volumetric medical data sets. To allow interactive processing of large data sets, segmentation operations, and rendering are GPU-accelerated. Special adjustments are provided to overcome GPU-imposed constraints such as limited memory and host-device bandwidth. A general and efficient undo/redo mechanism is implemented using GPU-accelerated compression of the multiclass segmentation state. A broadly applicable set of interactive segmentation operations is provided which can be combined to solve the quantification task of many types of imaging studies. A fully GPU-accelerated ray casting method for multiclass segmentation rendering is implemented which is well-balanced with respect to delay, frame rate, worst-case memory consumption, scalability, and image quality. Performance of segmentation operations and rendering are measured using high-resolution example data sets showing that GPU-acceleration greatly improves the performance. Compared to a reference marching cubes implementation, the rendering was found to be superior with respect to rendering delay and worst-case memory consumption while providing sufficiently high frame rates for interactive visualization and comparable image quality. The fast interactive segmentation operations and the accurate rendering make our tool particularly suitable for efficient analysis of multimodal image data sets which arise in large amounts in preclinical imaging studies. PMID:26909109

  16. Design and validation of Segment - freely available software for cardiovascular image analysis

    PubMed Central

    2010-01-01

    Background Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Results Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http://segment

  17. Microarray kit analysis of cytokines in blood product units and segments

    PubMed Central

    Weiskopf, Richard B.; Yau, Rebecca; Sanchez, Rosa; Lowell, Clifford; Toy, Pearl

    2009-01-01

BACKGROUND Cytokine concentrations in transfused blood components are of interest for some clinical trials. It is not always possible to process samples of transfused components quickly after their administration. Additionally, it is not practical to sample material in an acceptable manner from many bags of components before transfusion, and after transfusion, the only representative remaining fluid of the component may be that in the “segment,” as the bag may have been completely transfused. Multiplex array technology allows rapid simultaneous testing of multiple analytes in small volume samples. We used this technology to measure leukocyte cytokine levels in blood products to determine (1) whether concentrations in segments correlate with those in the main bag, and thus, whether segments could be used for estimation of the concentrations in the transfused component; and (2) whether concentrations after sample storage at 4°C for 24 hrs do not differ from concentrations before storage, thus allowing for processing within 24 hrs, rather than immediately after transfusion. STUDY DESIGN AND METHODS Leukocyte cytokines were measured in the supernatant from bags and segments of leukoreduced red blood cells, non-leukoreduced whole blood, and leukoreduced plateletphereses using the ProteoPlex Human Cytokine Array kit (Novagen). RESULTS Cytokine concentrations in packed red blood cells and whole blood, or plateletphereses stored at 4°C, did not differ between bag and segment samples (all p > 0.05). There was no evidence of systematic differences between segment and bag concentrations. Cytokine concentrations in samples from plateletphereses did not change within 24 hrs of storage at 4°C. CONCLUSION Samples from either bag or segment can be used to study cytokine concentrations in groups of blood products. Cytokine concentrations in plateletphereses appear to be stable for at least 24 hrs of storage at 4°C, and, thus, samples stored under those conditions may be used to

  18. 3D segmentation of lung CT data with graph-cuts: analysis of parameter sensitivities

    NASA Astrophysics Data System (ADS)

    Cha, Jung won; Dunlap, Neal; Wang, Brian; Amini, Amir

    2016-03-01

Lung boundary image segmentation is important for many tasks, including, for example, the development of radiation treatment plans for subjects with thoracic malignancies. In this paper, we describe a method and parameter settings for accurate 3D lung boundary segmentation based on graph-cuts from X-ray CT data. Even though several researchers have previously used graph-cuts for image segmentation, to date, no systematic studies have been performed regarding the range of parameters that gives accurate results. The energy function in the graph-cuts algorithm requires 3 suitable parameter settings: K, a large constant for assigning seed points, c, the similarity coefficient for n-links, and λ, the terminal coefficient for t-links. We analyzed the parameter sensitivity with four lung data sets from subjects with lung cancer using error metrics. Large values of K created artifacts on segmented images, and a value of c much larger than the value of λ upset the balance between the boundary term and the data term in the energy function, leading to unacceptable segmentation results. For a range of parameter settings, we performed 3D image segmentation, and in each case compared the results with the expert-delineated lung boundaries. We used simple 6-neighborhood systems for n-links in 3D. The 3D image segmentation took 10 minutes for a 512x512x118 ~ 512x512x190 lung CT image volume. Our results indicate that the graph-cuts algorithm was more sensitive to the K and λ parameter settings than to the c parameter and, furthermore, that amongst the range of parameters tested, K=5 and λ=0.5 yielded good results.
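The trade-off the abstract describes between the data term (t-links, weighted by λ) and the boundary term (n-links, weighted by c) can be illustrated on a toy 1D image. The sketch below evaluates a graph-cuts-style energy for two candidate labelings; the squared-error data term and Gaussian boundary penalty are common textbook choices and an assumption here, not necessarily the authors' exact formulation.

```python
import numpy as np

def graphcut_energy(intensity, labels, fg_mean, bg_mean, lam=0.5, c=5.0, sigma=10.0):
    """Evaluate a graph-cuts-style energy for a 1D toy image.

    Data term (t-links, weighted by lam): squared distance of each pixel's
    intensity to the mean of its assigned class.
    Boundary term (n-links, weighted by c): a penalty exp(-dI^2 / 2 sigma^2)
    paid wherever neighboring pixels receive different labels, so cutting
    across a strong intensity edge is cheap and cutting a flat region is costly.
    (Both term shapes are illustrative assumptions, not the paper's forms.)
    """
    intensity = np.asarray(intensity, dtype=float)
    labels = np.asarray(labels)
    means = np.where(labels == 1, fg_mean, bg_mean)
    data = lam * np.sum((intensity - means) ** 2)
    dI = np.diff(intensity)
    cut = labels[:-1] != labels[1:]
    boundary = c * np.sum(np.exp(-dI[cut] ** 2 / (2 * sigma ** 2)))
    return data + boundary

img = [10, 12, 11, 60, 62, 61]    # dark region followed by a bright region
good = [0, 0, 0, 1, 1, 1]         # cut at the true edge (large dI -> cheap)
bad = [0, 0, 1, 1, 1, 1]          # cut inside the dark region (small dI -> costly)
print(graphcut_energy(img, good, 61, 11) < graphcut_energy(img, bad, 61, 11))  # True
```

With c much larger than λ the boundary penalty dominates and the minimizer avoids cuts altogether, which mirrors the imbalance the authors report.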

  19. A robust and fast line segment detector based on top-down smaller eigenvalue analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Lu, Xiaoqing

    2014-01-01

In this paper, we propose a robust and fast line segment detector, which achieves accurate results with a controlled number of false detections and requires no parameter tuning. It consists of three steps: first, we propose a novel edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input image; second, we propose a top-down scheme based on smaller eigenvalue analysis to extract line segments within each obtained edge segment; third, we employ Desolneux et al.'s method to reject false detections. Experiments demonstrate that it is very efficient and more robust than two state-of-the-art methods, LSD and EDLines.
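The smaller-eigenvalue test at the heart of the second step can be sketched directly: for a chain of edge points, the smaller eigenvalue of the 2x2 coordinate covariance measures perpendicular scatter about the best-fit line, so a near-zero value indicates the chain is a line segment. This is a minimal illustration of the principle, not the paper's top-down splitting scheme.

```python
import numpy as np

def smaller_eigenvalue(points):
    """Smaller eigenvalue of the 2x2 covariance of edge-point coordinates.

    For points lying exactly on a line, the perpendicular scatter, and hence
    the smaller eigenvalue, is zero; a small value indicates a good line fit.
    """
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T)
    return float(np.linalg.eigvalsh(cov)[0])  # eigvalsh returns eigenvalues in ascending order

line = [(x, 2 * x + 1) for x in range(10)]   # perfectly collinear chain
arc = [(x, x ** 2) for x in range(10)]       # curved chain
print(smaller_eigenvalue(line))               # ~0.0 (collinear)
print(smaller_eigenvalue(line) < smaller_eigenvalue(arc))  # True
```

A top-down detector would recursively split a chain wherever this value exceeds a tolerance, keeping the straight pieces as line segments.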

  20. Segmenting Business Students Using Cluster Analysis Applied to Student Satisfaction Survey Results

    ERIC Educational Resources Information Center

    Gibson, Allen

    2009-01-01

    This paper demonstrates a new application of cluster analysis to segment business school students according to their degree of satisfaction with various aspects of the academic program. The resulting clusters provide additional insight into drivers of student satisfaction that are not evident from analysis of the responses of the student body as a…

  1. Automatic segmentation of solitary pulmonary nodules based on local intensity structure analysis and 3D neighborhood features in 3D chest CT images

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

    2012-03-01

This paper presents a solitary pulmonary nodule (SPN) segmentation method based on local intensity structure analysis and neighborhood feature analysis in chest CT images. Automated segmentation of SPNs is desirable for a chest computer-aided detection/diagnosis (CAD) system, since an SPN may indicate an early stage of lung cancer. Due to the similar intensities of SPNs and other chest structures such as blood vessels, many false positives (FPs) are generated by nodule detection methods. To reduce such FPs, we introduce two features that analyze the relation between each segmented nodule candidate and its neighborhood region. The proposed method utilizes a blob-like structure enhancement (BSE) filter based on Hessian analysis to enhance the blob-like structures as initial nodule candidates. Then a fine segmentation is performed to segment a more accurate region of each nodule candidate. FP reduction is mainly addressed by investigating two neighborhood features, based on the volume ratio and the eigenvectors of the Hessian, that are calculated from the neighborhood region of each nodule candidate. We evaluated the proposed method using 40 chest CT images, including 20 standard-dose CT images randomly chosen from a local database and 20 low-dose CT images randomly chosen from a public database (LIDC). The experimental results revealed that the average TP rate of the proposed method was 93.6% with 12.3 FPs/case.
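The Hessian-based blob test underlying a BSE-style filter can be illustrated in 2D: at the center of a bright blob, both eigenvalues of the image Hessian are negative. The sketch below is a simplified 2D stand-in for the paper's 3D filter, using finite differences for the Hessian; the threshold-free sign test is an illustrative assumption.

```python
import numpy as np

def is_bright_blob(img, y, x):
    """Bright-blob test via the 2D image Hessian (a simplified 2D stand-in
    for a Hessian-based blob-structure enhancement filter): at the center of
    a bright blob both Hessian eigenvalues are negative."""
    gy, gx = np.gradient(img.astype(float))   # first derivatives
    gyy, gyx = np.gradient(gy)                # second derivatives, row direction
    gxy, gxx = np.gradient(gx)                # second derivatives, column direction
    H = np.array([[gyy[y, x], gyx[y, x]],
                  [gxy[y, x], gxx[y, x]]])
    return bool(np.all(np.linalg.eigvalsh(H) < 0))

# Synthetic Gaussian "nodule" on a 21x21 grid:
yy, xx = np.mgrid[-10:11, -10:11]
blob = np.exp(-(xx**2 + yy**2) / 8.0)
print(is_bright_blob(blob, 10, 10))  # True: blob center
print(is_bright_blob(blob, 10, 3))   # False: on the flank, curvatures have mixed signs
```

In 3D the same idea uses the three eigenvalues of the 3x3 Hessian, and vessel-like structures are distinguished by having one eigenvalue near zero along the vessel axis.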

  2. AxonSeg: Open Source Software for Axon and Myelin Segmentation and Morphometric Analysis.

    PubMed

    Zaimi, Aldo; Duval, Tanguy; Gasecka, Alicja; Côté, Daniel; Stikov, Nikola; Cohen-Adad, Julien

    2016-01-01

Segmenting axon and myelin from microscopic images is relevant for studying the peripheral and central nervous system and for validating new MRI techniques that aim at quantifying tissue microstructure. While several software packages have been proposed, their interface is sometimes limited and/or they are designed to work with a specific modality (e.g., scanning electron microscopy (SEM) only). Here we introduce AxonSeg, which allows users to perform automatic axon and myelin segmentation on histology images and to extract relevant morphometric information, such as axon diameter distribution, axon density and the myelin g-ratio. AxonSeg includes a simple and intuitive MATLAB-based graphical user interface (GUI) and can easily be adapted to a variety of imaging modalities. The main steps of AxonSeg consist of: (i) image pre-processing; (ii) pre-segmentation of axons over a cropped image and discriminant analysis (DA) to select the best parameters based on axon shape and intensity information; (iii) automatic axon and myelin segmentation over the full image; and (iv) atlas-based statistics to extract morphometric information. Segmentation results from standard optical microscopy (OM), SEM and coherent anti-Stokes Raman scattering (CARS) microscopy are presented, along with validation against manual segmentations. Being fully automatic after a quick manual intervention on a cropped image, we believe AxonSeg will be useful to researchers interested in large-throughput histology. AxonSeg is open source and freely available at: https://github.com/neuropoly/axonseg. PMID:27594833

  3. Finite difference based vibration simulation analysis of a segmented distributed piezoelectric structronic plate system

    NASA Astrophysics Data System (ADS)

    Ren, B. Y.; Wang, L.; Tzou, H. S.; Yue, H. H.

    2010-08-01

Electrical modeling of piezoelectric structronic systems by analog circuits has the disadvantages of huge circuit structure and low precision. However, studies of electrical simulation of segmented distributed piezoelectric structronic plate systems (PSPSs) by using output voltage signals of high-speed digital circuits to evaluate the real-time dynamic displacements are scarce in the literature. Therefore, an equivalent dynamic model based on the finite difference method (FDM) is presented to simulate the actual physical model of the segmented distributed PSPS with simply supported boundary conditions. By means of the FDM, the fourth-order dynamic partial differential equations (PDEs) of the main structure/segmented distributed sensor signals/control moments of the segmented distributed actuator of the PSPS are transformed into finite difference equations. A dynamics matrix model based on the Newmark-β integration method is established. The output voltage signal characteristics of the lower modes (m <= 3, n <= 3) with different finite difference mesh dimensions and different integration time steps are analyzed by digital signal processing (DSP) circuit simulation software. The control effects of segmented distributed actuators with different effective areas are consistent with the results of the analytical model in relevant references. Therefore, the method of digital simulation for vibration analysis of segmented distributed PSPSs presented in this paper can provide a reference for further research into the electrical simulation of PSPSs.
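The core FDM move described above replaces fourth-order spatial derivatives in the plate PDE with difference quotients. As a minimal sketch (the classic five-point stencil in one dimension, not the paper's full 2D plate discretization), the fourth derivative can be approximated and checked against a quartic, for which the stencil is exact:

```python
def fourth_derivative_fd(f, x, h):
    """Five-point central-difference stencil for the fourth derivative,
    the kind of replacement the FDM makes in a fourth-order plate PDE:
    f''''(x) ~ (f(x-2h) - 4 f(x-h) + 6 f(x) - 4 f(x+h) + f(x+2h)) / h^4"""
    return (f(x - 2*h) - 4*f(x - h) + 6*f(x) - 4*f(x + h) + f(x + 2*h)) / h**4

# The stencil is exact for quartics: d^4/dx^4 of x^4 is 24 everywhere.
print(fourth_derivative_fd(lambda x: x**4, 0.0, 0.5))  # 24.0
```

Assembling one such stencil per grid node turns the PDE into the system of finite difference equations that the Newmark-β scheme then integrates in time.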

  4. Preliminary analysis of effect of random segment errors on coronagraph performance

    NASA Astrophysics Data System (ADS)

    Stahl, Mark T.; Shaklan, Stuart B.; Stahl, H. Philip

    2015-09-01

"Are we alone in the Universe?" is probably the most compelling science question of our generation. To answer it requires a large aperture telescope with extreme wavefront stability. To image and characterize Earth-like planets requires the ability to block 10^10 of the host star's light with 10^-11 stability. For an internal coronagraph, this requires correcting wavefront errors and keeping that correction stable to a few picometers rms for the duration of the science observation. This requirement places severe specifications upon the performance of the observatory, telescope and primary mirror. A key task of the AMTD project (initiated in FY12) is to define telescope level specifications traceable to science requirements and flow those specifications to the primary mirror. From a systems perspective, probably the most important question is: What is the telescope wavefront stability specification? Previously, we suggested this specification should be 10 picometers per 10 minutes; considered issues of how this specification relates to architecture, i.e. monolithic or segmented primary mirror; and asked whether it was better to have few or many segments. This paper reviews the 10 picometers per 10 minutes specification; provides analysis related to the application of this specification to segmented apertures; and suggests that a 3 or 4 ring segmented aperture is more sensitive to segment rigid body motion than an aperture with fewer or more segments.

  5. Gene expression analysis reveals that Delta/Notch signalling is not involved in onychophoran segmentation.

    PubMed

    Janssen, Ralf; Budd, Graham E

    2016-03-01

Delta/Notch (Dl/N) signalling is involved in the gene regulatory network underlying the segmentation process in vertebrates and possibly also in annelids and arthropods, leading to the hypothesis that segmentation may have evolved in the last common ancestor of bilaterian animals. Because of seemingly contradicting results within the well-studied arthropods, however, the role and origin of Dl/N signalling in segmentation generally is still unclear. In this study, we investigate core components of Dl/N signalling by means of gene expression analysis in the onychophoran Euperipatoides kanangrensis, a close relative to the arthropods. We find that neither Delta nor Notch nor any other investigated components of its signalling pathway are likely to be involved in segment addition in onychophorans. We instead suggest that Dl/N signalling may be involved in posterior elongation, another conserved function of these genes. We suggest further that the posterior elongation network, rather than classic Dl/N signalling, may be under the control of the highly conserved segment polarity gene network and the lower-level pair-rule gene network in onychophorans. Consequently, we believe that the pair-rule gene network and its interaction with Dl/N signalling may have evolved within the arthropod lineage and that Dl/N signalling has thus likely been recruited independently for segment addition in different phyla. PMID:26935716

  6. Mathematical Analysis of Space Radiator Segmenting for Increased Reliability and Reduced Mass

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.

    2001-01-01

Spacecraft for long duration deep space missions will need to be designed to survive micrometeoroid bombardment of their surfaces, some of which may actually be punctured. To avoid loss of the entire mission, the damage due to such punctures must be limited to small, localized areas. This is especially true for power system radiators, which necessarily feature large surface areas to reject heat at relatively low temperature to the space environment by thermal radiation. It may be intuitively obvious that if a space radiator is composed of a large number of independently operating segments, such as heat pipes, a random micrometeoroid puncture will result only in the loss of the punctured segment, and not the entire radiator. Due to the redundancy achieved by independently operating segments, the wall thickness and consequently the weight of such segments can be drastically reduced. Probability theory is used to estimate the magnitude of such weight reductions as the number of segments is increased. An analysis of relevant parameter values required for minimum mass segmented radiators is also included.
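The redundancy argument can be made concrete with a binomial survival model: if each of n independent segments survives puncture with probability p, and the radiator meets its heat-rejection requirement as long as at least m segments survive, the system reliability is a binomial tail sum. The numbers below are illustrative assumptions, not values from the paper.

```python
from math import comb

def radiator_survival(n, p_seg, m_required):
    """P(at least m_required of n independent segments survive), where each
    segment survives micrometeoroid puncture with probability p_seg."""
    return sum(comb(n, k) * p_seg**k * (1 - p_seg)**(n - k)
               for k in range(m_required, n + 1))

# A monolithic radiator vs. 20 segments oversized so only 16 must survive:
print(radiator_survival(1, 0.95, 1))    # 0.95
print(radiator_survival(20, 0.95, 16))  # ~0.997, despite equally vulnerable segments
```

This is the quantitative reason segment walls can be thinned: even if each thin segment is individually no more reliable, modest oversizing of the segment count drives the system reliability above that of a monolithic design.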

  7. AxonSeg: Open Source Software for Axon and Myelin Segmentation and Morphometric Analysis

    PubMed Central

    Zaimi, Aldo; Duval, Tanguy; Gasecka, Alicja; Côté, Daniel; Stikov, Nikola; Cohen-Adad, Julien

    2016-01-01

    Segmenting axon and myelin from microscopic images is relevant for studying the peripheral and central nervous system and for validating new MRI techniques that aim at quantifying tissue microstructure. While several software packages have been proposed, their interface is sometimes limited and/or they are designed to work with a specific modality (e.g., scanning electron microscopy (SEM) only). Here we introduce AxonSeg, which allows to perform automatic axon and myelin segmentation on histology images, and to extract relevant morphometric information, such as axon diameter distribution, axon density and the myelin g-ratio. AxonSeg includes a simple and intuitive MATLAB-based graphical user interface (GUI) and can easily be adapted to a variety of imaging modalities. The main steps of AxonSeg consist of: (i) image pre-processing; (ii) pre-segmentation of axons over a cropped image and discriminant analysis (DA) to select the best parameters based on axon shape and intensity information; (iii) automatic axon and myelin segmentation over the full image; and (iv) atlas-based statistics to extract morphometric information. Segmentation results from standard optical microscopy (OM), SEM and coherent anti-Stokes Raman scattering (CARS) microscopy are presented, along with validation against manual segmentations. Being fully-automatic after a quick manual intervention on a cropped image, we believe AxonSeg will be useful to researchers interested in large throughput histology. AxonSeg is open source and freely available at: https://github.com/neuropoly/axonseg. PMID:27594833

  8. Fire flame detection using color segmentation and space-time analysis

    NASA Astrophysics Data System (ADS)

    Ruchanurucks, Miti; Saengngoen, Praphin; Sajjawiso, Theeraphat

    2011-10-01

This paper presents a fire flame detection method for CCTV cameras based on image processing. The scheme relies on color segmentation and space-time analysis. The segmentation is performed to extract fire-like-color regions in an image. Many methods are benchmarked against each other to find the best for practical CCTV cameras. After that, the space-time analysis is used to recognize fire behavior. A space-time window is generated from the contour of the thresholded image. Feature extraction is done in the Fourier domain of the window. A neural network is used for behavior recognition. The system will be shown to be practical and robust.
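A minimal example of the color-segmentation step is an RGB rule flagging fire-like pixels. The rule below (R high and R > G > B) is one common heuristic from the fire-detection literature, used here only as an illustration; it is not necessarily the winner among the methods the paper benchmarks.

```python
import numpy as np

def fire_color_mask(rgb, r_min=190):
    """Flag fire-like pixels with a simple RGB rule: red channel high and the
    channels ordered R > G > B, as flames skew strongly toward red/orange.
    (One common heuristic, assumed for illustration.)"""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r >= r_min) & (r > g) & (g > b)

pixels = np.array([[[230, 160, 40],     # flame-like orange
                    [30, 90, 200],      # sky blue
                    [200, 200, 200]]],  # gray: bright but not ordered R > G > B
                  dtype=np.uint8)
mask = fire_color_mask(pixels)
print(mask[0].tolist())  # [True, False, False]
```

The resulting binary mask is what the contour extraction and space-time windowing would then operate on.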

  9. Proteomic Analysis of the Retina: Removal of RPE Alters Outer Segment Assembly and Retinal Protein Expression

    PubMed Central

    Wang, XiaoFei; Nookala, Suba; Narayanan, Chidambarathanu; Giorgianni, Francesco; Beranova-Giorgianni, Sarka; McCollum, Gary; Gerling, Ivan; Penn, John S.; Jablonski, Monica M.

    2008-01-01

    The mechanisms that regulate the complex physiologic task of photoreceptor outer segment assembly remain an enigma. One limiting factor in revealing the mechanism(s) by which this process is modulated is that not all of the role players that participate in this process are known. The purpose of this study was to determine some of the retinal proteins that likely play a critical role in regulating photoreceptor outer segment assembly. To do so, we analyzed and compared the proteome map of tadpole Xenopus laevis retinal pigment epithelium (RPE)-supported retinas containing organized outer segments with that of RPE-deprived retinas containing disorganized outer segments. Solubilized proteins were labeled with CyDye fluors followed by multiplexed two-dimensional separation. The intensity of protein spots and comparison of proteome maps was performed using DeCyder software. Identification of differentially regulated proteins was determined using nanoLC-ESI-MS/MS analysis. We found a total of 27 protein spots, 21 of which were unique proteins, which were differentially expressed in retinas with disorganized outer segments. We predict that in the absence of the RPE, oxidative stress initiates an unfolded protein response. Subsequently, downregulation of several candidate Müller glial cell proteins may explain the inability of photoreceptors to properly fold their outer segment membranes. In this study we have used identification and bioinformatics assessment of proteins that are differentially expressed in retinas with disorganized outer segments as a first step in determining probable key molecules involved in regulating photoreceptor outer segment assembly. PMID:18803304

  10. The Prognostic Impact of In-Hospital Change in Mean Platelet Volume in Patients With Non-ST-Segment Elevation Myocardial Infarction.

    PubMed

    Kırış, Tuncay; Yazici, Selcuk; Günaydin, Zeki Yüksel; Akyüz, Şükrü; Güzelburç, Özge; Atmaca, Hüsnü; Ertürk, Mehmet; Nazli, Cem; Dogan, Abdullah

    2016-08-01

    It is unclear whether changes in mean platelet volume (MPV) are associated with total mortality in acute coronary syndromes. We investigated whether the change in MPV predicts total mortality in patients with non-ST-segment elevation myocardial infarction (NSTEMI). We retrospectively analyzed 419 consecutive patients (19 patients were excluded). The remaining patients were categorized as survivors (n = 351) or nonsurvivors (n = 49). Measurements of MPV were performed at admission and after 24 hours. The difference between the 2 measurements was considered as the MPV change (ΔMPV). The end point of the study was total mortality at 1-year follow-up. During the follow-up, there were 49 deaths (12.2%). Admission MPV was comparable in the 2 groups. However, both MPV (9.6 ± 1.4 fL vs 9.2 ± 1.0 fL, P = .044) and ΔMPV (0.40 [0.10-0.70] fL vs 0.70 [0.40-1.20] fL, P < .001) at the first 24 hours were higher in nonsurvivors than survivors. In multivariate analysis, ΔMPV was an independent predictor of total mortality (odds ratio: 1.84, 95% confidence interval: 1.28-2.65, P = .001). An early increase in MPV after admission was independently associated with total mortality in patients with NSTEMI. Such patients may need more effective antiplatelet therapy. PMID:26787684
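The reported odds ratio comes from multivariate logistic regression; as a simpler hedged sketch of the same kind of effect measure, an odds ratio with a Woolf 95% confidence interval can be computed from a 2x2 table of dichotomized ΔMPV against mortality. The counts below are synthetic illustrations, not the study's data.

```python
from math import exp, log, sqrt

def odds_ratio(a, b, c, d):
    """Odds ratio and Woolf 95% CI for a 2x2 table:
    a = deaths with high dMPV,  b = survivors with high dMPV,
    c = deaths with low dMPV,   d = survivors with low dMPV."""
    or_ = (a * d) / (b * c)
    se = sqrt(1/a + 1/b + 1/c + 1/d)       # standard error of log(OR)
    ci = (exp(log(or_) - 1.96 * se), exp(log(or_) + 1.96 * se))
    return or_, ci

# Synthetic counts for illustration only (not the study's data):
or_, ci = odds_ratio(30, 120, 19, 231)
print(round(or_, 2), tuple(round(x, 2) for x in ci))
```

A confidence interval excluding 1.0 is what supports calling ΔMPV an independent predictor; the study's multivariate model additionally adjusts for confounders, which this crude table cannot do.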

  11. The Influence of Segmental Impedance Analysis in Predicting Validity of Consumer Grade Bioelectrical Impedance Analysis Devices

    NASA Astrophysics Data System (ADS)

    Sharp, Andy; Heath, Jennifer; Peterson, Janet

    2008-05-01

Consumer grade bioelectric impedance analysis (BIA) instruments measure the body's impedance at 50 kHz and yield a quick estimate of percent body fat. The frequency dependence of the impedance gives more information about the current pathway and the response of different tissues. This study explores the impedance response of human tissue at a range of frequencies from 0.2-102 kHz using a four-probe method and probe locations standard for segmental BIA research of the arm. The data at 50 kHz for a 21-year-old healthy Caucasian male (resistance of 180 ± 10 Ω and reactance of 33 ± 2 Ω) are in agreement with previously reported values [1]. The frequency dependence is not consistent with the simple circuit models commonly used in evaluating BIA data, and repeatability of measurements is problematic. This research will contribute to a better understanding of the inherent difficulties in estimating body fat using consumer grade BIA devices. [1] Chumlea, William C., Richard N. Baumgartner, and Alex F. Roche. "Specific resistivity used to estimate fat-free mass from segmental body measures of bioelectrical impedance." Am J Clin Nutr 48 (1988): 7-15.
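The single-frequency quantities the abstract reports can be summarized with the simple series resistance/reactance picture that 50 kHz BIA assumes (and which the study finds inadequate across frequencies): impedance magnitude |Z| = sqrt(R² + X²) and phase angle atan(X/R). The sketch below plugs in the reported values.

```python
import math

def impedance_summary(r_ohm, x_ohm):
    """Magnitude and phase angle of a series resistance/reactance model,
    the simple circuit picture behind single-frequency (50 kHz) BIA."""
    z = math.hypot(r_ohm, x_ohm)                      # |Z| = sqrt(R^2 + X^2)
    phase_deg = math.degrees(math.atan2(x_ohm, r_ohm))
    return z, phase_deg

# Values reported in the abstract at 50 kHz (R = 180 ohm, X = 33 ohm):
z, phase = impedance_summary(180.0, 33.0)
print(round(z, 1), round(phase, 1))  # 183.0 |Z| in ohms, ~10.4 degree phase angle
```

The study's point is that a frequency sweep of these quantities does not collapse onto such a single-R, single-X model, which is one source of error in consumer devices.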

  12. Multi-object segmentation framework using deformable models for medical imaging analysis.

    PubMed

    Namías, Rafael; D'Amato, Juan Pablo; Del Fresno, Mariana; Vénere, Marcelo; Pirró, Nicola; Bellemare, Marc-Emmanuel

    2016-08-01

Segmenting structures of interest in medical images is an important step in different tasks such as visualization, quantitative analysis, simulation, and image-guided surgery, among several other clinical applications. Numerous segmentation methods have been developed in the past three decades for extraction of anatomical or functional structures in medical imaging. Deformable models, which include the active contour models or snakes, are among the most popular methods for image segmentation, combining several desirable features such as inherent connectivity and smoothness. Even though different approaches have been proposed and significant work has been dedicated to the improvement of such algorithms, there are still challenging research directions, such as the simultaneous extraction of multiple objects and the integration of individual techniques. This paper presents a novel open-source framework called deformable model array (DMA) for the segmentation of multiple and complex structures of interest in different imaging modalities. While most active contour algorithms can extract one region at a time, DMA allows integrating several deformable models to deal with multiple segmentation scenarios. Moreover, it is possible to consider any existing explicit deformable model formulation and even to incorporate new active contour methods, allowing selection of a suitable combination in different conditions. The framework also introduces a control module that coordinates the cooperative evolution of the snakes and is able to solve interaction issues toward the segmentation goal. Thus, DMA can implement complex object and multi-object segmentations in both 2D and 3D using the contextual information derived from the model interaction. These are important features for several medical image analysis tasks in which different but related objects need to be simultaneously extracted. Experimental results on both computed tomography and magnetic resonance imaging show that the proposed

  13. 3-D segmentation and quantitative analysis of inner and outer walls of thrombotic abdominal aortic aneurysms

    NASA Astrophysics Data System (ADS)

    Lee, Kyungmoo; Yin, Yin; Wahle, Andreas; Olszewski, Mark E.; Sonka, Milan

    2008-03-01

    An abdominal aortic aneurysm (AAA) is an area of a localized widening of the abdominal aorta, with a frequent presence of thrombus. A ruptured aneurysm can cause death due to severe internal bleeding. AAA thrombus segmentation and quantitative analysis are of paramount importance for diagnosis, risk assessment, and determination of treatment options. Until now, only a small number of methods for thrombus segmentation and analysis have been presented in the literature, either requiring substantial user interaction or exhibiting insufficient performance. We report a novel method offering minimal user interaction and high accuracy. Our thrombus segmentation method is composed of an initial automated luminal surface segmentation, followed by a cost function-based optimal segmentation of the inner and outer surfaces of the aortic wall. The approach utilizes the power and flexibility of the optimal triangle mesh-based 3-D graph search method, in which cost functions for thrombus inner and outer surfaces are based on gradient magnitudes. Sometimes local failures caused by image ambiguity occur, in which case several control points are used to guide the computer segmentation without the need to trace borders manually. Our method was tested in 9 MDCT image datasets (951 image slices). With the exception of a case in which the thrombus was highly eccentric, visually acceptable aortic lumen and thrombus segmentation results were achieved. No user interaction was used in 3 out of 8 datasets, and 7.80 +/- 2.71 mouse clicks per case / 0.083 +/- 0.035 mouse clicks per image slice were required in the remaining 5 datasets.

  14. Combining multiset resolution and segmentation for hyperspectral image analysis of biological tissues.

    PubMed

    Piqueras, S; Krafft, C; Beleites, C; Egodage, K; von Eggeling, F; Guntinas-Lichius, O; Popp, J; Tauler, R; de Juan, A

    2015-06-30

    Hyperspectral images can provide useful biochemical information about tissue samples. Often, Fourier transform infrared (FTIR) images have been used to distinguish different tissue elements and changes caused by pathological causes. The spectral variation between tissue types and pathological states is very small and multivariate analysis methods are required to describe adequately these subtle changes. In this work, a strategy combining multivariate curve resolution-alternating least squares (MCR-ALS), a resolution (unmixing) method, which recovers distribution maps and pure spectra of image constituents, and K-means clustering, a segmentation method, which identifies groups of similar pixels in an image, is used to provide efficient information on tissue samples. First, multiset MCR-ALS analysis is performed on the set of images related to a particular pathology status to provide basic spectral signatures and distribution maps of the biological contributions needed to describe the tissues. Later on, multiset segmentation analysis is applied to the obtained MCR scores (concentration profiles), used as compressed initial information for segmentation purposes. The multiset idea is transferred to perform image segmentation of different tissue samples. Doing so, a difference can be made between clusters associated with relevant biological parts common to all images, linked to general trends of the type of samples analyzed, and sample-specific clusters, that reflect the natural biological sample-to-sample variability. The last step consists of performing separate multiset MCR-ALS analyses on the pixels of each of the relevant segmentation clusters for the pathology studied to obtain a finer description of the related tissue parts. The potential of the strategy combining multiset resolution on complete images, multiset segmentation and multiset local resolution analysis will be shown on a study focused on FTIR images of tissue sections recorded on inflamed and non
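The segmentation stage of the strategy above applies K-means to the MCR scores (concentration profiles) rather than to raw spectra. As a hedged sketch of that step, the minimal Lloyd-iteration K-means below clusters toy two-component "score" vectors; a real pipeline would use a vetted implementation and the actual MCR-ALS output.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal K-means (Lloyd iterations) for grouping pixels by their
    MCR score vectors. Initial centers are drawn from the data points."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute centers.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

# Toy "MCR scores": two well-separated pixel populations in a 2-component space.
scores = np.array([[0.9, 0.1], [0.8, 0.2], [0.95, 0.05],
                   [0.1, 0.9], [0.2, 0.8], [0.05, 0.95]])
labels = kmeans(scores, 2)
print(labels[:3], labels[3:])  # first three pixels share one label, last three the other
```

Because the MCR scores compress each pixel's spectrum into a few chemically meaningful coordinates, clustering them is both faster and more interpretable than clustering full spectra, which is the motivation the abstract gives.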

  15. Segmentation of vascular structures and hematopoietic cells in 3D microscopy images and quantitative analysis

    NASA Astrophysics Data System (ADS)

    Mu, Jian; Yang, Lin; Kamocka, Malgorzata M.; Zollman, Amy L.; Carlesso, Nadia; Chen, Danny Z.

    2015-03-01

In this paper, we present image processing methods for quantitative study of changes in the bone marrow microenvironment (characterized by altered vascular structure and hematopoietic cell distribution) caused by disease or other factors. We develop algorithms that automatically segment vascular structures and hematopoietic cells in 3-D microscopy images, perform quantitative analysis of the properties of the segmented vascular structures and cells, and examine how such properties change. In processing images, we apply local thresholding to segment vessels, and add post-processing steps to deal with imaging artifacts. We propose an improved watershed algorithm that relies on both intensity and shape information and can separate multiple overlapping cells better than common watershed methods. We then quantitatively compute various features of the vascular structures and hematopoietic cells, such as the branches and sizes of vessels and the distribution of cells. In analyzing vascular properties, we provide algorithms for pruning fake vessel segments and branches based on vessel skeletons. Our algorithms can segment vascular structures and hematopoietic cells with good quality. We use our methods to quantitatively examine the changes in the bone marrow microenvironment caused by the deletion of the Notch pathway. Our quantitative analysis reveals property changes in samples with a deleted Notch pathway. Our tool is useful for biologists to quantitatively measure changes in the bone marrow microenvironment and to develop possible therapeutic strategies that help the bone marrow microenvironment recover.
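The local-thresholding step for vessel segmentation can be sketched in 2D: each pixel is compared with the mean of its neighborhood, so bright structures are kept even when overall illumination varies. This is a bare-bones illustration; the paper's pipeline works in 3D and adds artifact-specific post-processing.

```python
import numpy as np

def local_threshold(img, radius=1, offset=0.0):
    """Segment bright structures by comparing each pixel with the mean of its
    (2*radius+1)^2 neighborhood (simple local thresholding; edge pixels use
    edge-replicated padding)."""
    img = np.asarray(img, dtype=float)
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros(img.shape, dtype=bool)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            win = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            out[y, x] = img[y, x] > win.mean() + offset
    return out

# A dim background with one bright "vessel" column:
img = np.full((5, 5), 10.0)
img[:, 2] = 50.0
mask = local_threshold(img, radius=1)
print(mask[:, 2].all(), mask[:, 0].any())  # True False
```

Unlike a single global threshold, the local rule adapts to intensity falloff across the volume, which is why it is a common first stage before skeletonization and pruning.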

  16. Segmented K-mer and its application on similarity analysis of mitochondrial genome sequences.

    PubMed

    Yu, Hong-Jie

    2013-04-15

K-mer-based approaches have been widely used in similarity analyses to discover similarity/dissimilarity among different biological sequences. In this study, we have improved the traditional K-mer method and introduce a segmented K-mer approach (s-K-mer). After each primary sequence is divided into several segments, we simultaneously transform all these segments into corresponding K-mer-based vectors. In this approach, it is vital to determine the optimal combination of distance metric with the number of K and the number of segments, i.e., (K*, s*, and d*). Based on the cascaded feature vectors transformed from s* segmented sequences, we analyze 34 mammalian genome sequences using the proposed s-K-mer approach. Meanwhile, we compare the results of s-K-mer with those of the traditional K-mer method. The contrastive analysis results demonstrate that the s-K-mer approach outperforms the traditional K-mer method in similarity analysis among different species. PMID:23353775
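The s-K-mer construction described above can be sketched directly: split a sequence into s segments, compute a k-mer frequency vector per segment, and cascade (concatenate) the segment vectors before measuring distances. The normalization and Euclidean distance below are common choices assumed for illustration, not necessarily the paper's exact metric d*.

```python
from itertools import product

def segmented_kmer_vector(seq, k=2, s=3):
    """Cascaded k-mer frequency vectors of s roughly equal segments of seq
    (the s-K-mer idea: per-segment K-mer counts, concatenated in order)."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    step = len(seq) // s
    vec = []
    for i in range(s):
        seg = seq[i * step:(i + 1) * step] if i < s - 1 else seq[i * step:]
        counts = {m: 0 for m in kmers}
        for j in range(len(seg) - k + 1):
            if seg[j:j + k] in counts:
                counts[seg[j:j + k]] += 1
        total = max(sum(counts.values()), 1)        # normalize per segment
        vec.extend(counts[m] / total for m in kmers)
    return vec

def distance(u, v):
    """Euclidean distance between two cascaded s-K-mer vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

a = segmented_kmer_vector("ACGTACGTACGTACGTAC", k=2, s=3)
b = segmented_kmer_vector("ACGTACGTACGTACGTAC", k=2, s=3)
c = segmented_kmer_vector("AAAAAAAATTTTTTTTGG", k=2, s=3)
print(distance(a, b))       # 0.0 for identical sequences
print(distance(a, c) > 0)   # True
```

Because the segment vectors are kept in order, s-K-mer retains coarse positional information that a single whole-sequence K-mer vector (the s = 1 case) discards.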

  17. Morphotectonic Index Analysis as an Indicator of Neotectonic Segmentation of the Nicoya Peninsula, Costa Rica

    NASA Astrophysics Data System (ADS)

    Morrish, S.; Marshall, J. S.

    2013-12-01

    The Nicoya Peninsula lies within the Costa Rican forearc where the Cocos plate subducts under the Caribbean plate at ~8.5 cm/yr. Rapid plate convergence produces frequent large earthquakes (~50 yr recurrence interval) and pronounced crustal deformation (0.1-2.0 m/ky uplift). Seven uplifted segments have been identified in previous studies using broad geomorphic surfaces (Hare & Gardner 1984) and late Quaternary marine terraces (Marshall et al. 2010). These surfaces suggest long-term net uplift and segmentation of the peninsula in response to contrasting domains of subducting seafloor (EPR, CNS-1, CNS-2). In this study, newer 10 m contour digital topographic data (CENIGA Terra Project) will be used to characterize and delineate this segmentation using morphotectonic analysis of drainage basins and correlation of fluvial terrace/geomorphic surface elevations. The peninsula has six primary watersheds which drain into the Pacific Ocean: the Río Andamojo, Río Tabaco, Río Nosara, Río Ora, Río Bongo, and Río Ario, which range in area from 200 km² to 350 km². The trunk rivers follow major lineaments that define morphotectonic segment boundaries, and their drainage basins are in turn bisected by them. Morphometric analysis of the lower (1st and 2nd) order drainage basins will provide insight into segmented tectonic uplift and deformation by comparing values of drainage basin asymmetry, stream length gradient, and hypsometry with respect to margin segmentation and subducting seafloor domain. A general geomorphic analysis will be conducted alongside the morphometric analysis to map previously recognized (Morrish et al. 2010) but poorly characterized late Quaternary fluvial terraces. Stream capture and drainage divide migration are common processes throughout the peninsula in response to the ongoing deformation. Identification and characterization of basin piracy throughout the peninsula will provide insight into the history of landscape evolution in response to
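    Two of the morphometric indices named above have standard textbook forms that can be sketched directly; the data values below are hypothetical, not measurements from the Nicoya basins.

```python
def hypsometric_integral(elevations):
    """HI = (mean - min) / (max - min); values near 1 suggest youthful,
    actively uplifting terrain, values near 0 an old, eroded surface."""
    lo, hi = min(elevations), max(elevations)
    return (sum(elevations) / len(elevations) - lo) / (hi - lo)

def asymmetry_factor(area_right, area_total):
    """AF = 100 * (basin area right of the trunk stream / total area);
    AF = 50 indicates a symmetric, untilted basin."""
    return 100.0 * area_right / area_total

# Hypothetical basin: sampled elevations (m) and sub-basin areas (km^2).
hi = hypsometric_integral([10, 120, 260, 400, 480])
af = asymmetry_factor(area_right=210.0, area_total=320.0)
```

Departures of AF from 50 and systematic along-margin changes in HI are the kind of signal the study would compare against segment boundaries.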

  18. Loads analysis and testing of flight configuration solid rocket motor outer boot ring segments

    NASA Technical Reports Server (NTRS)

    Ahmed, Rafiq

    1990-01-01

    Loads testing was performed on in-house-fabricated, flight-configuration Solid Rocket Motor (SRM) outer boot ring segments. The tests determined the bending strength and bending stiffness of these beams and showed that they compared well with the hand analysis. The bending stiffness test results also compared very well with the finite element data.

  19. Analysis of the ISS Russian Segment Outer Surface Materials Installed on the CKK Detachable Cassette

    NASA Astrophysics Data System (ADS)

    Naumov, S. F.; Borisov, V. A.; Plotnikov, A. D.; Sokolova, S. P.; Kurilenok, A. O.; Skurat, V. E.; Leipunsky, I. O.; Pshechenkov, P. A.; Beryozkina, N. G.; Volkov, I. O.

    2009-01-01

    This report presents an analysis of the effects caused by space environmental factors (SEF) and the International Space Station's (ISS) outer environment on operational parameters of the outer surface materials of the ISS Russian Segment (RS). The tests were performed using detachable container cassettes (CKK) that serve as a part of the ISS RS contamination control system.

  20. Scientific and clinical evidence for the use of fetal ECG ST segment analysis (STAN).

    PubMed

    Steer, Philip J; Hvidman, Lone Egly

    2014-06-01

    Fetal electrocardiogram waveform analysis has been studied for many decades, but it is only in the last 20 years that computerization has made real-time analysis practical for clinical use. Changes in the ST segment have been shown to correlate with fetal condition, in particular with acid-base status. Meta-analysis of randomized trials (five in total, four using the computerized system) has shown that use of computerized ST segment analysis (STAN) reduces the need for fetal blood sampling by about 40%. However, although there are trends to lower rates of low Apgar scores and acidosis, the differences are not statistically significant. There is no effect on cesarean section rates. Disadvantages include the need for amniotic membranes to be ruptured so that a fetal scalp electrode can be applied, and the need for STAN values to be interpreted in conjunction with detailed fetal heart rate pattern analysis. PMID:24597897

  1. Volume accumulator design analysis computer codes

    NASA Technical Reports Server (NTRS)

    Whitaker, W. D.; Shimazaki, T. T.

    1973-01-01

    The computer codes, VANEP and VANES, were written and used to aid in the design and performance calculation of the volume accumulator units (VAU) for the 5-kwe reactor thermoelectric system. VANEP computes the VAU design which meets the primary coolant loop VAU volume and pressure performance requirements. VANES computes the performance of the VAU design, determined from the VANEP code, at the conditions of the secondary coolant loop. The codes can also compute the performance characteristics of the VAU's under conditions of possible modes of failure which still permit continued system operation.

  2. Effect of ST segment measurement point on performance of exercise ECG analysis.

    PubMed

    Lehtinen, R; Sievänen, H; Turjanmaa, V; Niemelä, K; Malmivuo, J

    1997-10-10

    To evaluate the effect of the ST-segment measurement point on the diagnostic performance of the ST-segment/heart rate (ST/HR) hysteresis, the ST/HR index, and the end-exercise ST-segment depression in the detection of coronary artery disease, we analysed the exercise electrocardiograms of 347 patients using ST-segment depression measured at 0, 20, 40, 60 and 80 ms after the J-point. Of these patients, 127 had and 13 did not have significant coronary artery disease according to angiography, 18 had no myocardial perfusion defect according to technetium-99m sestamibi single-photon emission computed tomography, and 189 were clinically 'normal', having a low likelihood of coronary artery disease. Comparison of areas under the receiver operating characteristic curves showed that the discriminative capacity of the above diagnostic variables improved systematically up to the ST-segment measurement point of 60 ms after the J-point. As compared to analysis at the J-point (0 ms), the areas based on the 60-ms point were 89 vs. 84% (p=0.0001) for the ST/HR hysteresis, 83 vs. 76% (p<0.0001) for the ST/HR index, and 76 vs. 61% (p<0.0001) for the end-exercise ST depression. These findings suggest that ST-segment measurement at 60 ms after the J-point is the most reasonable choice in terms of the discriminative capacity of both the simple and the heart rate-adjusted indices of ST depression. Moreover, the ST/HR hysteresis had the best discriminative capacity independently of the ST-segment measurement point, an observation that gives further support to the clinical utility of this new method in the detection of coronary artery disease. PMID:9363740
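    Measuring ST depression at a fixed offset after the J point reduces to simple index arithmetic at a known sampling rate. The sketch below uses an assumed 500 Hz rate and a synthetic upsloping beat, not the study's data; names and parameters are illustrative.

```python
import numpy as np

FS = 500  # sampling rate in Hz (assumed)

def st_depression(beat, j_index, offset_ms, baseline=0.0):
    """ST amplitude relative to the isoelectric baseline, measured
    offset_ms after the J point (more negative = deeper depression)."""
    idx = j_index + int(round(offset_ms * FS / 1000.0))
    return beat[idx] - baseline

# Hypothetical averaged exercise beat (mV) with an upsloping ST segment,
# indexed from the J point.
beat = -0.10 + 0.0007 * np.arange(300)
depressions = {ms: st_depression(beat, 0, ms) for ms in (0, 20, 40, 60, 80)}
```

On an upsloping segment like this one, later measurement points report shallower depression, which is why the choice of offset changes the diagnostic indices built on it.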

  3. Segmentation and Image Analysis of Abnormal Lungs at CT: Current Approaches, Challenges, and Future Trends

    PubMed Central

    Mansoor, Awais; Foster, Brent; Xu, Ziyue; Papadakis, Georgios Z.; Folio, Les R.; Udupa, Jayaram K.; Mollura, Daniel J.

    2015-01-01

    The computer-based process of identifying the boundaries of lung from surrounding thoracic tissue on computed tomographic (CT) images, which is called segmentation, is a vital first step in radiologic pulmonary image analysis. Many algorithms and software platforms provide image segmentation routines for quantification of lung abnormalities; however, nearly all of the current image segmentation approaches apply well only if the lungs exhibit minimal or no pathologic conditions. When moderate to high amounts of disease or abnormalities with a challenging shape or appearance exist in the lungs, computer-aided detection systems may be highly likely to fail to depict those abnormal regions because of inaccurate segmentation methods. In particular, abnormalities such as pleural effusions, consolidations, and masses often cause inaccurate lung segmentation, which greatly limits the use of image processing methods in clinical and research contexts. In this review, a critical summary of the current methods for lung segmentation on CT images is provided, with special emphasis on the accuracy and performance of the methods in cases with abnormalities and cases with exemplary pathologic findings. The currently available segmentation methods can be divided into five major classes: (a) thresholding-based, (b) region-based, (c) shape-based, (d) neighboring anatomy–guided, and (e) machine learning–based methods. The feasibility of each class and its shortcomings are explained and illustrated with the most common lung abnormalities observed on CT images. In an overview, practical applications and evolving technologies combining the presented approaches for the practicing radiologist are detailed. ©RSNA, 2015 PMID:26172351

  4. Moving cast shadow resistant for foreground segmentation based on shadow properties analysis

    NASA Astrophysics Data System (ADS)

    Zhou, Hao; Gao, Yun; Yuan, Guowu; Ji, Rongbin

    2015-12-01

    Moving object detection is a fundamental task in machine vision applications. However, detecting moving cast shadows is one of the major concerns for accurate video segmentation: detected moving-object areas often contain shadow points, which can introduce errors in measurement, localization, segmentation, classification and tracking. A novel shadow elimination algorithm is proposed in this paper. A set of suspected moving object areas is detected by the adaptive Gaussian approach. A model is established based on analysis of shadow optical properties, and shadow regions are discriminated from the set of moving pixels by using the properties of brightness, chromaticity and texture in sequence.
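    A common brightness/chromaticity shadow test of the kind the abstract describes can be sketched as follows. The thresholds and the normalized-rgb chromaticity model are illustrative assumptions (the paper also uses texture, omitted here): a cast shadow attenuates brightness but barely changes chromaticity relative to the background model.

```python
import numpy as np

def shadow_mask(frame, background, alpha=0.4, beta=0.9, tau_c=0.05):
    """Classify pixels as cast shadow when brightness is attenuated
    (alpha <= V/V_bg <= beta) but chromaticity barely changes.
    frame/background: float RGB arrays scaled to [0, 1]."""
    eps = 1e-6
    v = frame.sum(axis=-1) + eps          # crude brightness
    v_bg = background.sum(axis=-1) + eps
    ratio = v / v_bg
    chroma = frame / v[..., None]         # normalized rgb chromaticity
    chroma_bg = background / v_bg[..., None]
    dc = np.abs(chroma - chroma_bg).sum(axis=-1)
    return (ratio >= alpha) & (ratio <= beta) & (dc <= tau_c)
```

Pixels flagged by this mask would be removed from the adaptive-Gaussian foreground before tracking.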

  5. Segmentation, statistical analysis, and modelling of the wall system in ceramic foams

    SciTech Connect

    Kampf, Jürgen; Schlachter, Anna-Lena; Redenbach, Claudia; Liebscher, André

    2015-01-15

    Closed walls in otherwise open foam structures may have a great impact on macroscopic properties of the materials. In this paper, we present two algorithms for the segmentation of such closed walls from micro-computed tomography images of the foam structure. The techniques are compared on simulated data and applied to tomographic images of ceramic filters. This allows for a detailed statistical analysis of the normal directions and sizes of the walls. Finally, we explain how the information derived from the segmented wall system can be included in a stochastic microstructure model for the foam.

  6. Patient Segmentation Analysis Offers Significant Benefits For Integrated Care And Support.

    PubMed

    Vuik, Sabine I; Mayer, Erik K; Darzi, Ara

    2016-05-01

    Integrated care aims to organize care around the patient instead of the provider. It is therefore crucial to understand differences across patients and their needs. Segmentation analysis that uses big data can help divide a patient population into distinct groups, which can then be targeted with care models and intervention programs tailored to their needs. In this article we explore the potential applications of patient segmentation in integrated care. We propose a framework for population strategies in integrated care (whole populations, subpopulations, and high-risk populations) and show how patient segmentation can support these strategies. Through international case examples, we illustrate practical considerations such as choosing a segmentation logic, accessing data, and tailoring care models. Important issues for policy makers to consider are trade-offs between simplicity and precision, trade-offs between customized and off-the-shelf solutions, and the availability of linked data sets. We conclude that segmentation can provide many benefits to integrated care, and we encourage policy makers to support its use. PMID:27140981

  7. Segmented assimilation and attitudes toward psychotherapy: a moderated mediation analysis.

    PubMed

    Rogers-Sirin, Lauren

    2013-07-01

    The present study examines the relations between acculturative stress, mental health, and attitudes toward psychotherapy, and whether these relations are the same for immigrants of color and White immigrants. This study predicted that acculturative stress would have a significant, negative relation with attitudes toward psychotherapy and that this relation would be moderated by race (immigrants of color and White immigrants) so that as acculturative stress increases, attitudes toward psychotherapy become more negative for immigrants of color but not White immigrants. Finally, mental health was predicted to mediate the relation between acculturative stress and attitudes toward psychotherapy for immigrants of color, but not White immigrants. Participants were 149 first-generation, immigrant, young adults, between the ages of 18 and 29, who identified as White, Black, Latino, or Asian. A significant negative correlation was found between acculturative stress and attitudes toward psychotherapy. A moderated mediation analysis demonstrated that the negative relation between acculturative stress and attitudes toward psychotherapy was mediated by mental health symptoms for immigrants of color but not White immigrants. PMID:23544838

  8. Computer model analysis of the relationship of ST-segment and ST-segment/heart rate slope response to the constituents of the ischemic injury source.

    PubMed

    Hyttinen, J; Viik, J; Lehtinen, R; Plonsey, R; Malmivuo, J

    1997-07-01

    The objective of the study was to investigate a proposed linear relationship between the extent of myocardial ischemic injury and the ST-segment/heart rate (ST/HR) slope by computer simulation of the injury sources arising in exercise electrocardiographic (ECG) tests. The extent and location of the ischemic injury were simulated for both single- and multivessel coronary artery disease by use of an accurate source-volume conductor model which assumes a linear relationship between heart rate and extent of ischemia. The results indicated that in some cases the ST/HR slope in leads II, aVF, and especially V5 may be related to the extent of ischemia. However, the simulations demonstrated that neither the ST-segment deviation nor the ST/HR slope was directly proportional to either the area of the ischemic boundary or the number of vessels occluded. Furthermore, in multivessel coronary artery disease, the temporal and spatial diversity of the generated multiple injury sources distorted the presumed linearity between ST-segment deviation and heart rate. It was concluded that the ST/HR slope and ST-segment deviation of the 12-lead ECG are not able to indicate extent of ischemic injury or number of vessels occluded. PMID:9261724

  9. Phantom-based ground-truth generation for cerebral vessel segmentation and pulsatile deformation analysis

    NASA Astrophysics Data System (ADS)

    Schetelig, Daniel; Säring, Dennis; Illies, Till; Sedlacik, Jan; Kording, Fabian; Werner, René

    2016-03-01

    Hemodynamic and mechanical factors of the vascular system are assumed to play a major role in understanding, e.g., the initiation, growth and rupture of cerebral aneurysms. Among those factors, cardiac cycle-related pulsatile motion and deformation of cerebral vessels currently attract much interest. However, imaging of those effects requires high spatial and temporal resolution and remains challenging, as does the analysis of the acquired images: flow velocity changes and contrast media inflow cause vessel intensity variations in related temporally resolved computed tomography and magnetic resonance angiography data over the cardiac cycle and impede the application of intensity threshold-based segmentation and subsequent motion analysis. In this work, a flow phantom for the generation of ground-truth images for evaluation of appropriate segmentation and motion analysis algorithms is developed. The acquired ground-truth data is used to illustrate the interplay between intensity fluctuations and (erroneous) motion quantification by standard threshold-based segmentation, and an adaptive threshold-based segmentation approach is proposed that alleviates the respective issues. The results of the phantom study are further demonstrated to be transferable to patient data.

  10. FIELD VALIDATION OF EXPOSURE ASSESSMENT MODELS. VOLUME 2. ANALYSIS

    EPA Science Inventory

    This is the second of two volumes describing a series of dual tracer experiments designed to evaluate the PAL-DS model, a Gaussian diffusion model modified to take into account settling and deposition, as well as three other deposition models. In this volume, an analysis of the d...

  11. Mean-Field Analysis of Recursive Entropic Segmentation of Biological Sequences

    NASA Astrophysics Data System (ADS)

    Cheong, Siew-Ann; Stodghill, Paul; Schneider, David; Myers, Christopher

    2007-03-01

    Horizontal gene transfer in bacteria results in genomic sequences which are mosaic in nature. An important first step in the analysis of a bacterial genome would thus be to model the statistically nonstationary nucleotide or protein sequence with a collection of P stationary Markov chains, and partition the sequence of length N into M statistically stationary segments/domains. This can be done for Markov chains of order K = 0 using a recursive segmentation scheme based on the Jensen-Shannon divergence, where the unknown parameters P and M are estimated from a hypothesis testing/model selection process. In this talk, we describe how the Jensen-Shannon divergence can be generalized to Markov chains of order K > 0, as well as an algorithm optimizing the positions of a fixed number of domain walls. We then describe a mean field analysis of the generalized recursive Jensen-Shannon segmentation scheme, and show how most domain walls appear as local maxima in the divergence spectrum of the sequence, before highlighting the main problem associated with the recursive segmentation scheme, i.e. the strengths of the domain walls selected recursively do not decrease monotonically. This problem is especially severe in repetitive sequences, whose statistical signatures we will also discuss.
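    For order K = 0, one step of the recursive Jensen-Shannon segmentation can be sketched directly: the divergence of a cut at position i is the total entropy minus the length-weighted entropies of the two halves, and the cut maximizing it is selected. A minimal (O(n²)) illustration with assumed function names:

```python
import math
from collections import Counter

def entropy(counts, total):
    """Shannon entropy (bits) of a symbol-count table."""
    return -sum(c / total * math.log2(c / total) for c in counts.values() if c)

def best_split(seq):
    """One step of recursive Jensen-Shannon segmentation (order K = 0):
    return the cut maximizing the JS divergence between the symbol
    statistics of the left and right subsequences."""
    n = len(seq)
    h_all = entropy(Counter(seq), n)
    best_pos, best_js = None, -1.0
    for i in range(1, n):
        js = (h_all
              - (i / n) * entropy(Counter(seq[:i]), i)
              - ((n - i) / n) * entropy(Counter(seq[i:]), n - i))
        if js > best_js:
            best_pos, best_js = i, js
    return best_pos, best_js
```

Recursing on each half, with a hypothesis test deciding when to stop, yields the domain walls; the talk's generalization replaces the symbol counts with order-K transition counts.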

  12. Segment and Fit Thresholding: A New Method for Image Analysis Applied to Microarray and Immunofluorescence Data

    PubMed Central

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M.; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E.; Allen, Peter J.; Sempere, Lorenzo F.; Haab, Brian B.

    2016-01-01

    Certain experiments involve the high-throughput quantification of image data, thus requiring algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multi-color, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu’s method for selected images. SFT promises to advance the goal of full automation in image analysis. PMID:26339978

  13. Segment and fit thresholding: a new method for image analysis applied to microarray and immunofluorescence data.

    PubMed

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B

    2015-10-01

    Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis. PMID:26339978
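    A toy version of the SFT idea in the two abstracts above can be sketched as follows. It keeps the segment-statistics step but replaces the best-fit-trend step with a robust median shortcut, so it illustrates the flavor of the method rather than the published algorithm; the tile size and multiplier are assumptions.

```python
import numpy as np

def segment_fit_threshold(img, tile=8, k=3.0):
    """Toy Segment-and-Fit Thresholding: compute per-tile mean/std,
    take the low-mean tiles as background, and derive a global
    threshold from the pooled background statistics."""
    h, w = img.shape
    stats = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            t = img[y:y + tile, x:x + tile]
            stats.append((t.mean(), t.std()))
    stats = np.array(stats)
    bg = stats[stats[:, 0] <= np.median(stats[:, 0])]   # background tiles
    thr = bg[:, 0].mean() + k * bg[:, 1].mean()
    return img > thr
```

The published method instead fits trends between the tile statistics to identify background segments, which is what lets it adapt across very different image characteristics.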

  14. An approach to multi-temporal MODIS image analysis using image classification and segmentation

    NASA Astrophysics Data System (ADS)

    Senthilnath, J.; Bajpai, Shivesh; Omkar, S. N.; Diwakar, P. G.; Mani, V.

    2012-11-01

    This paper discusses an approach for river mapping and flood evaluation based on multi-temporal time series analysis of satellite images utilizing pixel spectral information for image classification and region-based segmentation for extracting water-covered regions. Analysis of MODIS satellite images is applied in three stages: before flood, during flood and after flood. Water regions are extracted from the MODIS images using image classification (based on spectral information) and image segmentation (based on spatial information). Multi-temporal MODIS images from "normal" (non-flood) and flood time-periods are processed in two steps. In the first step, image classifiers such as Support Vector Machines (SVM) and Artificial Neural Networks (ANN) separate the image pixels into water and non-water groups based on their spectral features. The classified image is then segmented using spatial features of the water pixels to remove the misclassified water. From the results obtained, we evaluate the performance of the method and conclude that the use of image classification (SVM and ANN) and region-based image segmentation is an accurate and reliable approach for the extraction of water-covered regions.
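    The two-step classify-then-segment flow can be sketched as follows; a nearest-centroid classifier stands in for the paper's SVM/ANN, and a connected-component size filter stands in for the region-based cleanup (all names, centroids, and thresholds are assumptions).

```python
import numpy as np
from scipy import ndimage as ndi

def classify_water(pixels, water_centroid, land_centroid):
    """Step 1 stand-in: nearest-centroid spectral classification
    (the paper uses SVM/ANN on the spectral features)."""
    dw = np.linalg.norm(pixels - water_centroid, axis=-1)
    dl = np.linalg.norm(pixels - land_centroid, axis=-1)
    return dw < dl

def clean_by_region(mask, min_size=4):
    """Step 2: spatial, region-based pass that drops small
    misclassified components from the water mask."""
    labels, n = ndi.label(mask)
    sizes = ndi.sum(mask, labels, index=np.arange(1, n + 1))
    return np.isin(labels, 1 + np.flatnonzero(sizes >= min_size))
```

Running both steps on the before/during/after-flood images and differencing the resulting water masks gives the flood-extent evaluation the paper describes.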

  15. Microscopy Image Browser: A Platform for Segmentation and Analysis of Multidimensional Datasets

    PubMed Central

    Belevich, Ilya; Joensuu, Merja; Kumar, Darshan; Vihinen, Helena; Jokitalo, Eija

    2016-01-01

    Understanding the structure–function relationship of cells and organelles in their natural context requires multidimensional imaging. As techniques for multimodal 3-D imaging have become more accessible, effective processing, visualization, and analysis of large datasets are posing a bottleneck for the workflow. Here, we present a new software package for high-performance segmentation and image processing of multidimensional datasets that improves and facilitates the full utilization and quantitative analysis of acquired data, which is freely available from a dedicated website. The open-source environment enables modification and insertion of new plug-ins to customize the program for specific needs. We provide practical examples of program features used for processing, segmentation and analysis of light and electron microscopy datasets, and detailed tutorials to enable users to rapidly and thoroughly learn how to use the program. PMID:26727152

  16. A new image segmentation method based on multifractal detrended moving average analysis

    NASA Astrophysics Data System (ADS)

    Shi, Wen; Zou, Rui-biao; Wang, Fang; Su, Le

    2015-08-01

    In order to segment and delineate regions of interest in an image, we propose a novel algorithm based on multifractal detrended moving average analysis (MF-DMA). In this method, the generalized Hurst exponent h(q) is first calculated for every pixel and considered as the local feature of a surface. Then a multifractal detrended moving average spectrum (MF-DMS) D(h(q)) is defined using the idea of the box-counting dimension method. We therefore call the new image segmentation method the MF-DMS-based algorithm. The performance of the MF-DMS-based method is tested in two image segmentation experiments on rapeseed leaf images of potassium deficiency and magnesium deficiency under three cases, namely, backward (θ = 0), centered (θ = 0.5) and forward (θ = 1), with different q values. Comparison experiments are conducted between the MF-DMS method and two other multifractal segmentation methods, namely, the popular MFS-based and the latest MF-DFS-based methods. The results show that our MF-DMS-based method is superior to the latter two. The best segmentation result for the rapeseed leaf images of potassium deficiency and magnesium deficiency comes from the same parameter combination of θ = 0.5 and D(h(-10)) when using the MF-DMS-based method. An interesting finding is that D(h(-10)) outperforms other parameters both for the MF-DMS-based method in the centered case and for the MF-DFS-based algorithm. By comparing the multifractal nature of nutrient-deficiency and non-nutrient-deficiency areas determined by the segmentation results, an important finding is that the fluctuation of gray values in the nutrient-deficiency area is much more severe than in the non-nutrient-deficiency area.
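    For a 1-D series, the detrended moving average estimate of the generalized Hurst exponent h(q) that underlies MF-DMA can be sketched as follows. This is a simplified, centered-average (θ = 0.5) illustration with assumed scales; the paper's per-pixel 2-D version and its handling of the θ cases are more involved.

```python
import numpy as np

def hurst_dma(x, q=2.0, scales=(4, 8, 16, 32, 64)):
    """Generalized Hurst exponent h(q) via centered detrended moving
    average analysis: slope of log F_q(n) vs log n, where F_q is the
    q-th order fluctuation of the profile around its moving average."""
    y = np.cumsum(x - np.mean(x))                 # profile of the series
    logF, logn = [], []
    for n in scales:
        trend = np.convolve(y, np.ones(n) / n, mode="same")
        resid = (y - trend)[n:-n]                 # drop edge effects
        F = np.mean(np.abs(resid) ** q) ** (1.0 / q)
        logF.append(np.log(F))
        logn.append(np.log(n))
    h, _ = np.polyfit(logn, logF, 1)
    return h
```

In the image method, h(q) computed over a local window around each pixel becomes that pixel's feature, and the D(h(q)) spectrum built from those features drives the segmentation.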

  17. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    SciTech Connect

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi

    2010-05-15

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as the "gold standard". Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold-standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less completion time
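    The first step of the pipeline, edge-preserving noise reduction, can be sketched with the classic Perona-Malik anisotropic diffusion scheme; the parameters below are illustrative and assume intensities scaled to [0, 1], not the study's settings.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, gamma=0.2):
    """Anisotropic (Perona-Malik) diffusion: smooths noise in flat
    regions while the exponential conductance shuts diffusion off
    across strong edges, preserving organ boundaries."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # finite differences to the four neighbours (periodic edges)
        dn = np.roll(u, 1, 0) - u
        ds = np.roll(u, -1, 0) - u
        de = np.roll(u, 1, 1) - u
        dw = np.roll(u, -1, 1) - u
        # edge-stopping conductance in each direction
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

In the full scheme the diffused image then feeds the gradient-magnitude filter and grayscale converter that build the level-set speed function.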

  18. Mimicking human expert interpretation of remotely sensed raster imagery by using a novel segmentation analysis within ArcGIS

    NASA Astrophysics Data System (ADS)

    Le Bas, Tim; Scarth, Anthony; Bunting, Peter

    2015-04-01

    Traditional computer based methods for the interpretation of remotely sensed imagery use each pixel individually or the average of a small window of pixels to calculate a class or thematic value, which provides an interpretation. However, when a human expert interprets imagery, the human eye is excellent at finding coherent and homogeneous areas and edge features. It may therefore be advantageous for computer analysis to mimic human interpretation. A new toolbox for ArcGIS 10.x will be presented that segments the data layers into a set of polygons. Each polygon is defined by a K-means clustering and region growing algorithm, thus finding areas, their edges and any lineations in the imagery. Attached to each polygon are the characteristics of the imagery, such as the mean and standard deviation of the pixel values within the polygon. The segmentation of imagery into a jigsaw of polygons also has the advantage that the human interpreter does not need to spend hours digitising the boundaries. The segmentation process has been taken from the RSGIS library of analysis and classification routines (Bunting et al., 2014). These routines are freeware and have been modified to be available in the ArcToolbox under the Windows (v7) operating system. Input to the segmentation process is a multi-layered raster image, for example, a Landsat image or a set of raster datasets made up from derivatives of topography. The size and number of polygons are set by the user and are dependent on the imagery used. Examples will be presented of data from the marine environment utilising bathymetric depth, slope, rugosity and backscatter from a multibeam system. Meaningful classification of the polygons using their numerical characteristics is the next goal. Object based image analysis (OBIA) should help this workflow. Fully calibrated imagery systems will allow numerical classification to be translated into more readily understandable terms. Peter Bunting, Daniel Clewley, Richard M. Lucas and Sam
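    The clustering stage of the cluster-and-grow segmentation can be approximated with a plain K-means over per-pixel feature vectors. This sketch is a simplified stand-in (farthest-point seeding, no region growing), not the RSGIS implementation:

```python
import numpy as np

def kmeans_segment(stack, k=3, n_iter=20, seed=0):
    """K-means clustering of per-pixel feature vectors (e.g. layers of
    depth, slope, rugosity and backscatter); returns a label image."""
    h, w, d = stack.shape
    X = stack.reshape(-1, d).astype(float)
    rng = np.random.default_rng(seed)
    # farthest-point initialization keeps the k seeds well separated
    centers = [X[rng.integers(len(X))]]
    for _ in range(1, k):
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d2)])
    centers = np.array(centers, dtype=float)
    for _ in range(n_iter):
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        lab = dist.argmin(axis=1)
        for j in range(k):
            if np.any(lab == j):
                centers[j] = X[lab == j].mean(axis=0)
    return lab.reshape(h, w)
```

A region-growing pass over the label image would then merge connected same-cluster pixels into the polygons that carry the per-polygon statistics.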

  19. Parametric investigation of Radome analysis methods. Volume 4: Experimental results

    NASA Astrophysics Data System (ADS)

    Bassett, H. L.; Newton, J. M.; Adams, W.; Ussailis, J. S.; Hadsell, M. J.; Huddleston, G. K.

    1981-02-01

    This Volume 4 of four volumes presents 140 measured far-field patterns and boresight error data for eight combinations of three monopulse antennas and five tangent ogive Rexolite radomes at 35 GHz. The antennas and radomes, all of different sizes, were selected to provide a range of parameters as found in the applications. The measured data serve as true data in the parametric investigation of radome analysis methods to determine the accuracies and ranges of validity of selected methods of analysis.

  20. Stress Analysis of Bolted, Segmented Cylindrical Shells Exhibiting Flange Mating-Surface Waviness

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Phillips, Dawn R.; Raju, Ivatury S.

    2009-01-01

    Bolted, segmented cylindrical shells are a common structural component in many engineering systems especially for aerospace launch vehicles. Segmented shells are often needed due to limitations of manufacturing capabilities or transportation issues related to very long, large-diameter cylindrical shells. These cylindrical shells typically have a flange or ring welded to opposite ends so that shell segments can be mated together and bolted to form a larger structural system. As the diameter of these shells increases, maintaining strict fabrication tolerances for the flanges to be flat and parallel on a welded structure is an extreme challenge. Local fit-up stresses develop in the structure due to flange mating-surface mismatch (flange waviness). These local stresses need to be considered when predicting a critical initial flaw size. Flange waviness is one contributor to the fit-up stress state. The present paper describes the modeling and analysis effort to simulate fit-up stresses due to flange waviness in a typical bolted, segmented cylindrical shell. Results from parametric studies are presented for various flange mating-surface waviness distributions and amplitudes.

  1. Analysis of gene expression levels in individual bacterial cells without image segmentation

    SciTech Connect

    Kwak, In Hae; Son, Minjun; Hagen, Stephen J.

    2012-05-11

Highlights: (i) We present a method for extracting gene expression data from images of bacterial cells; (ii) the method does not employ cell segmentation and does not require high magnification; (iii) fluorescence and phase contrast images of the cells are correlated through the physics of phase contrast; (iv) we demonstrate the method by characterizing noisy expression of comX in Streptococcus mutans. Abstract: Studies of stochasticity in gene expression typically make use of fluorescent protein reporters, which permit the measurement of expression levels within individual cells by fluorescence microscopy. Analysis of such microscopy images is almost invariably based on a segmentation algorithm, where the image of a cell or cluster is analyzed mathematically to delineate individual cell boundaries. However, segmentation can be ineffective for studying bacterial cells or clusters, especially at lower magnification, where outlines of individual cells are poorly resolved. Here we demonstrate an alternative method for analyzing such images without segmentation. The method employs a comparison between the pixel brightness in phase contrast vs. fluorescence microscopy images. By fitting the correlation between phase contrast and fluorescence intensity to a physical model, we obtain well-defined estimates for the different levels of gene expression that are present in the cell or cluster. The method reveals the boundaries of the individual cells, even if the source images lack the resolution to show these boundaries clearly.
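The correlation idea in this record can be illustrated with a minimal numpy sketch (all data synthetic and all names hypothetical; this is not the authors' implementation): fluorescence at each pixel is regressed against the cell mass inferred from phase-contrast darkness, so the fitted slope estimates an expression level without any segmentation step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: cell pixels are darker in phase contrast; their
# fluorescence scales with expression level plus noise (values invented).
n = 1000
phase = rng.uniform(0.2, 1.0, n)         # phase-contrast brightness (a.u.)
cell_mass = 1.0 - phase                  # physical model: darker pixel -> more cell material
expression = 3.0                         # "true" per-mass expression level
fluor = expression * cell_mass + rng.normal(0.0, 0.05, n)

# Fit the fluorescence-vs-cell-mass correlation with a straight line; the
# slope recovers the expression level with no cell boundaries required.
slope, intercept = np.polyfit(cell_mass, fluor, 1)
print(f"estimated expression level: {slope:.2f}")
```

With multiple expression levels present, the fit would be extended to a mixture of slopes, as the physical model in the paper suggests.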

  2. Automated segmentation of muscle and adipose tissue on CT images for human body composition analysis

    NASA Astrophysics Data System (ADS)

    Chung, Howard; Cobzas, Dana; Birdsell, Laura; Lieffers, Jessica; Baracos, Vickie

    2009-02-01

The ability to compute body composition in cancer patients lends itself to determining the specific clinical outcomes associated with fat and lean tissue stores. For example, a wasting syndrome of advanced disease associates with shortened survival. Moreover, certain tissue compartments represent sites for drug distribution and are likely determinants of chemotherapy efficacy and toxicity. CT images are abundant, but these cannot be fully exploited unless there exist practical and fast approaches for tissue quantification. Here we propose a fully automated method for segmenting muscle, visceral and subcutaneous adipose tissues, taking the approach of shape modeling for the analysis of skeletal muscle. Muscle shape is represented using PCA encoded Free Form Deformations with respect to a mean shape. The shape model is learned from manually segmented images and used in conjunction with a tissue appearance prior. VAT and SAT are segmented based on the final deformed muscle shape. In comparing the automatic and manual methods, coefficients of variation (COV) of 1-2% were similar to or smaller than the inter- and intra-observer COVs reported for manual segmentation.
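The PCA-encoded shape-model idea can be sketched in a few lines (toy landmark data; this is not the authors' Free Form Deformation pipeline): each shape is a flattened landmark vector, PCA yields a mean shape plus modes of variation, and any shape is encoded by a small coefficient vector.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training set: 20 "muscle outlines", each 50 2-D landmarks, generated as
# noisy ellipses (purely illustrative stand-ins for manual segmentations).
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
shapes = np.stack([
    np.column_stack(((1.0 + 0.1 * rng.normal()) * np.cos(t),
                     (0.6 + 0.1 * rng.normal()) * np.sin(t))).ravel()
    for _ in range(20)
])

# PCA of the landmark vectors: mean shape plus a few modes of variation.
mean_shape = shapes.mean(axis=0)
U, s, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
modes = Vt[:2]                                    # first two variation modes
var_explained = (s**2)[:2].sum() / (s**2).sum()

# Any training shape is approximated by the mean plus a small coefficient vector.
coeffs = (shapes[0] - mean_shape) @ modes.T
recon = mean_shape + coeffs @ modes
print(f"variance explained by 2 modes: {var_explained:.2f}")
```

Restricting a segmentation to this low-dimensional coefficient space is what keeps the deformed muscle shape plausible.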

  3. Analysis, design, and test of a graphite/polyimide Shuttle orbiter body flap segment

    NASA Technical Reports Server (NTRS)

    Graves, S. R.; Morita, W. H.

    1982-01-01

For future missions, increases in Space Shuttle orbiter deliverable and recoverable payload weight capability may be needed. Such increases could be obtained by reducing the inert weight of the Shuttle. The application of advanced composites in orbiter structural components would make it possible to achieve such reductions. In 1975, NASA selected the orbiter body flap as a demonstration component for the Composites for Advanced Space Transportation Systems (CASTS) program. The progress made from 1977 through 1980 was integrated into a design of a graphite/polyimide (Gr/Pi) body flap technology demonstration segment (TDS). Aspects of composite body flap design and analysis are discussed, taking into account the direct-bond fibrous refractory composite insulation (FRCI) tile on Gr/Pi structure, Gr/Pi body flap weight savings, the body flap design concept, and composite body flap analysis. Details regarding the Gr/Pi technology demonstration segment are also examined.

  4. Analysis of a 26,756 bp segment from the left arm of yeast chromosome IV.

    PubMed

    Wölfl, S; Hanemann, V; Saluz, H P

    1996-12-01

    The nucleotide sequence of a 26.7 kb DNA segment from the left arm of Saccharomyces cerevisiae chromosome IV is presented. An analysis of this segment revealed 11 open reading frames (ORFs) longer than 300 bp and one split gene. These ORFs include the genes encoding the large subunit of RNA polymerase II, the biotin apo-protein ligase, an ADP-ribosylation factor (ARF 2), the 'L35'-ribosomal protein, a rho GDP dissociation factor, and the sequence encoding the protein phosphatase 2A. Further sequence analysis revealed a short ORF encoding the ribosomal protein YL41B, an intron in a 5' untranslated region and an extended homology with another cosmid (X83276) located on the same chromosome. The potential biological relevance of these findings is discussed. PMID:8972577

  5. Screening Analysis : Volume 1, Description and Conclusions.

    SciTech Connect

    Bonneville Power Administration; Corps of Engineers; Bureau of Reclamation

    1992-08-01

The SOR consists of three analytical phases leading to a Draft EIS. The first phase, Pilot Analysis, was performed to test the decision analysis methodology being used in the SOR. The Pilot Analysis is described later in this chapter. The second phase, Screening Analysis, examines all possible operating alternatives using a simplified analytical approach. It is described in detail in this and the next chapter. This document also presents the results of screening. The final phase, Full-Scale Analysis, will be documented in the Draft EIS and is intended to evaluate comprehensively the few best alternatives arising from the screening analysis. The purpose of screening is to analyze a wide variety of ways of operating the Columbia River system to test the reaction of the system to change. The many alternatives considered reflect the range of needs and requirements of the various river users and interests in the Columbia River Basin. While some of the alternatives might be viewed as extreme, the information gained from the analysis is useful in highlighting issues and conflicts in meeting operating objectives. Screening is also intended to develop a broad technical basis for evaluation, including regional experts, and to begin developing an evaluation capability for each river use that will support full-scale analysis. Finally, screening provides a logical method for examining all possible options and reaching a decision on a few alternatives worthy of full-scale analysis. An organizational structure was developed and staffed to manage and execute the SOR, specifically during the screening phase and the upcoming full-scale analysis phase. The organization involves ten technical work groups, each representing a particular river use. Several other groups exist to oversee or support the efforts of the work groups.

  6. Comparison between Brain Atrophy and Subdural Volume to Predict Chronic Subdural Hematoma: Volumetric CT Imaging Analysis

    PubMed Central

    Ju, Min-Wook; Kwon, Hyon-Jo; Choi, Seung-Won; Koh, Hyeon-Song; Youm, Jin-Young; Song, Shi-Hun

    2015-01-01

Objective Brain atrophy and subdural hygroma are well-known factors that enlarge the subdural space and induce formation of chronic subdural hematoma (CSDH). Thus, we investigated whether subdural volume can be used to predict future CSDH after head trauma, using computed tomography (CT) volumetric analysis. Methods A single-institution case-control study was conducted involving 1,186 patients who visited our hospital after head trauma from January 1, 2010 to December 31, 2014. Fifty-one patients with delayed CSDH were identified, and 50 age- and sex-matched patients served as controls. Intracranial volume (ICV), the brain parenchyma, and the subdural space were segmented using CT image-based software. To adjust for variations in head size, volume ratios were assessed as a percentage of ICV [brain volume index (BVI), subdural volume index (SVI)]. The maximum depth of the subdural space on both sides was used to estimate the SVI. Results Before adjusting for cranium size, brain volume tended to be smaller, and subdural space volume was significantly larger, in the CSDH group (p=0.138 and p=0.021, respectively). The BVI and SVI were significantly different (p=0.003 and p=0.001, respectively). SVI [area under the curve (AUC), 77.3%; p=0.008] was a more reliable predictor of CSDH than BVI (AUC, 68.1%; p=0.001). Bilateral subdural depth (the sum of the subdural depth on both sides) increased linearly with SVI (p<0.0001). Conclusion Subdural space volume was significantly larger in the CSDH group. SVI was the more reliable predictor of CSDH, and bilateral subdural depth was useful for estimating SVI. PMID:27169071
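The normalization and AUC comparison described above can be reproduced in outline (all values synthetic; `volume_index` and `roc_auc` are hypothetical helper names, not the study's software):

```python
import numpy as np

rng = np.random.default_rng(2)

def volume_index(compartment_ml, icv_ml):
    """Volume expressed as a percentage of intracranial volume (ICV), as in BVI/SVI."""
    return 100.0 * compartment_ml / icv_ml

def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic (ties counted as 1/2)."""
    pos = np.asarray(scores_pos)[:, None]
    neg = np.asarray(scores_neg)[None, :]
    return ((pos > neg).sum() + 0.5 * (pos == neg).sum()) / (pos.size * neg.size)

# Hypothetical SVI values (percent of ICV): subdural space larger in the CSDH group.
svi_csdh = rng.normal(8.0, 2.0, 51)   # 51 cases, as in the study
svi_ctrl = rng.normal(6.0, 2.0, 50)   # 50 matched controls
auc = roc_auc(svi_csdh, svi_ctrl)
print(f"SVI AUC: {auc:.2f}")
```

Normalizing each compartment by ICV before the ROC comparison is what removes head-size variation from the predictor.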

  7. Fetal autonomic brain age scores, segmented heart rate variability analysis, and traditional short term variability.

    PubMed

    Hoyer, Dirk; Kowalski, Eva-Maria; Schmidt, Alexander; Tetschke, Florian; Nowack, Samuel; Rudolph, Anja; Wallwitz, Ulrike; Kynass, Isabelle; Bode, Franziska; Tegtmeyer, Janine; Kumm, Kathrin; Moraru, Liviu; Götz, Theresa; Haueisen, Jens; Witte, Otto W; Schleußner, Ekkehard; Schneider, Uwe

    2014-01-01

Disturbances of fetal autonomic brain development can be evaluated from fetal heart rate patterns (HRP) reflecting the activity of the autonomic nervous system. Although HRP analysis from cardiotocographic (CTG) recordings is established for fetal surveillance, temporal resolution is low. Fetal magnetocardiography (MCG), however, provides stable continuous recordings at a higher temporal resolution combined with a more precise heart rate variability (HRV) analysis. A direct comparison of CTG and MCG based HRV analysis is pending. The aims of the present study are: (i) to compare the fetal maturation age predicting value of the MCG based fetal Autonomic Brain Age Score (fABAS) approach with that of CTG based Dawes-Redman methodology; and (ii) to elaborate fABAS methodology by segmentation according to fetal behavioral states and HRP. We investigated MCG recordings from 418 normal fetuses, aged between 21 and 40 weeks of gestation. In linear regression models we obtained an age predicting value of CTG compatible short term variability (STV) of R² = 0.200 (coefficient of determination), in contrast to MCG/fABAS related multivariate models with R² = 0.648 in 30 min recordings, R² = 0.610 in active sleep segments of 10 min, and R² = 0.626 in quiet sleep segments of 10 min. Additionally, segmented analysis under particular exclusion of accelerations (AC) and decelerations (DC) in quiet sleep resulted in a novel multivariate model with R² = 0.706. According to our results, fMCG based fABAS may provide a promising tool for the estimation of fetal autonomic brain age. Besides other traditional and novel HRV indices as possible indicators of developmental disturbances, the establishment of a fABAS score normogram may represent a specific reference. The present results are intended to contribute to further exploration and validation using independent data sets and multicenter research structures. PMID:25505399
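The age-prediction statistic compared throughout this abstract, R², can be reproduced in a few lines (synthetic data; the two HRV indices and their coefficients are invented for illustration and are not the study's fABAS variables):

```python
import numpy as np

rng = np.random.default_rng(3)

def r_squared(y, y_pred):
    """Coefficient of determination R^2."""
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical data: gestational age (weeks) predicted from two HRV indices.
n = 418
age = rng.uniform(21, 40, n)
hrv1 = 0.05 * age + rng.normal(0, 0.3, n)   # e.g. an STV-like index
hrv2 = 0.02 * age + rng.normal(0, 0.3, n)   # e.g. a complexity index

# Multivariate linear model age ~ [1, hrv1, hrv2], fitted by least squares.
X = np.column_stack([np.ones(n), hrv1, hrv2])
beta, *_ = np.linalg.lstsq(X, age, rcond=None)
r2 = r_squared(age, X @ beta)
print(f"R^2 = {r2:.2f}")
```

A multivariate model pools several noisy indices, which is why its R² exceeds that of a single STV-style predictor.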

  8. Laser power conversion system analysis, volume 1

    NASA Technical Reports Server (NTRS)

    Jones, W. S.; Morgan, L. L.; Forsyth, J. B.; Skratt, J. P.

    1979-01-01

The orbit-to-orbit laser energy conversion system analysis established a mission model of satellites with various orbital parameters and average electrical power requirements ranging from 1 to 300 kW. The system analysis evaluated various conversion techniques, power system deployment parameters, power system electrical supplies, and other critical subsystems relative to various combinations of the mission model. The analysis showed that the laser power system would not be competitive with current satellite power systems from weight, cost, and development risk standpoints.

  9. Information architecture. Volume 2, Part 1: Baseline analysis summary

    SciTech Connect

    1996-12-01

The Department of Energy (DOE) Information Architecture, Volume 2, Baseline Analysis, is a collaborative and logical next-step effort in the processes required to produce a Departmentwide information architecture. The baseline analysis serves a diverse audience of program management and technical personnel and provides an organized way to examine the Department's existing or de facto information architecture. A companion document to Volume 1, The Foundations, it furnishes the rationale for establishing a Departmentwide information architecture. This volume, consisting of the Baseline Analysis Summary (part 1), Baseline Analysis (part 2), and Reference Data (part 3), is of interest to readers who wish to understand how the Department's current information architecture technologies are employed. The analysis identifies how and where current technologies support business areas, programs, sites, and corporate systems.

  10. Buckling Design and Analysis of a Payload Fairing One-Sixth Cylindrical Arc-Segment Panel

    NASA Technical Reports Server (NTRS)

    Kosareo, Daniel N.; Oliver, Stanley T.; Bednarcyk, Brett A.

    2013-01-01

Design and analysis results are reported for a panel that is a one-sixth arc-segment of a full 33-ft diameter cylindrical barrel section of a payload fairing structure. Six such panels could be used to construct the fairing barrel, and, as such, compression buckling testing of a one-sixth arc-segment panel would serve as a validation test of the buckling analyses used to design the fairing panels. In this report, linear and nonlinear buckling analyses have been performed using finite element software for one-sixth arc-segment panels composed of aluminum honeycomb core with graphite/epoxy composite facesheets and an alternative fiber reinforced foam (FRF) composite sandwich design. The cross sections of both concepts were sized to represent realistic Space Launch System (SLS) payload fairing panels. Based on shell-based linear buckling analyses, smaller, more manageable buckling test panel dimensions were determined such that the panel would still be expected to buckle with a circumferential (as opposed to column-like) mode with significant separation between the first and second buckling modes. More detailed nonlinear buckling analyses were then conducted for honeycomb panels of various sizes using both Abaqus and ANSYS finite element codes, and for the smaller size panel, a solid-based finite element analysis was conducted. Finally, for the smaller size FRF panel, nonlinear buckling analysis was performed wherein geometric imperfections measured from an actual manufactured FRF panel were included. It was found that the measured imperfection did not significantly affect the panel's predicted buckling response.

  11. Model-Based Segmentation of Cortical Regions of Interest for Multi-subject Analysis of fMRI Data

    NASA Astrophysics Data System (ADS)

    Engel, Karin; Brechmann, André; Toennies, Klaus

    The high inter-subject variability of human neuroanatomy complicates the analysis of functional imaging data across subjects. We propose a method for the correct segmentation of cortical regions of interest based on the cortical surface. First results on the segmentation of Heschl's gyrus indicate the capability of our approach for correct comparison of functional activations in relation to individual cortical patterns.

  12. Analysis of object segmentation methods for VOP generation in MPEG-4

    NASA Astrophysics Data System (ADS)

    Vaithianathan, Karthikeyan; Panchanathan, Sethuraman

    2000-04-01

The recent audio-visual standard MPEG4 emphasizes content- based information representation and coding. Rather than operating at the level of pixels, MPEG4 operates at a higher level of abstraction, capturing the information based on the content of a video sequence. Video object plane (VOP) extraction is an important step in defining the content of any video sequence, except in the case of authored applications which involve creation of video sequences using synthetic objects and graphics. The generation of VOPs from a video sequence involves segmenting the objects from every frame of the video sequence. The problem of object segmentation is also being addressed by the Computer Vision community. The major problem faced by the researchers is to define object boundaries such that they are semantically meaningful. Finding a single robust solution for this problem that can work for all kinds of video sequences remains a challenging task. The object segmentation problem can be simplified by imposing constraints on the video sequences. These constraints largely depend on the type of application where the segmentation technique will be used. The purpose of this paper is twofold. In the first section, we summarize the state-of-the-art research in this topic and analyze the various VOP generation and object segmentation methods that have been presented in the recent literature. In the next section, we focus on the different types of video sequences, the important cues that can be employed for efficient object segmentation, the different object segmentation techniques and the types of techniques that are well suited for each type of application. A detailed analysis of these approaches from the perspective of accuracy of the object boundaries, robustness towards different kinds of video sequences, ability to track the objects through the video sequences, and complexity involved in implementing these approaches, along with other limitations, will be discussed.

  13. Laser power conversion system analysis, volume 2

    NASA Technical Reports Server (NTRS)

    Jones, W. S.; Morgan, L. L.; Forsyth, J. B.; Skratt, J. P.

    1979-01-01

    The orbit-to-ground laser power conversion system analysis investigated the feasibility and cost effectiveness of converting solar energy into laser energy in space, and transmitting the laser energy to earth for conversion to electrical energy. The analysis included space laser systems with electrical outputs on the ground ranging from 100 to 10,000 MW. The space laser power system was shown to be feasible and a viable alternate to the microwave solar power satellite. The narrow laser beam provides many options and alternatives not attainable with a microwave beam.

  14. [Intracranial volume reserve assessment based on ICP pulse wave analysis].

    PubMed

    Berdyga, J; Czernicki, Z; Jurkiewicz, J

    1994-01-01

ICP waves were analysed in the situation of an expanding intracranial mass. The aim of the study was to determine how large the added intracranial volume must be to produce significant changes in the harmonic disturbances index (HFC) of ICP pulse waves. The diagnostic value of HFC was compared with that of other parameters. The following other parameters were studied: intracranial pressure (ICP), CSF outflow resistance (R), volume pressure response (VPR), and visual evoked potentials (VEP). It was found that ICP wave analysis very clearly reflects changes in the intracranial volume-pressure relation. PMID:8028705
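An HFC-style harmonic index can be illustrated with a small FFT sketch (synthetic pulse wave; the index definition here, harmonic power relative to the fundamental, is a plausible stand-in rather than the authors' exact formula):

```python
import numpy as np

fs = 100.0                                   # sampling rate, Hz (hypothetical)
t = np.arange(0, 10, 1 / fs)
heart_rate_hz = 1.2

# Synthetic ICP pulse wave: fundamental plus higher harmonics whose relative
# power grows as intracranial compliance falls (illustrative model only).
fundamental = np.sin(2 * np.pi * heart_rate_hz * t)
harmonics = 0.5 * np.sin(2 * np.pi * 2 * heart_rate_hz * t) \
          + 0.3 * np.sin(2 * np.pi * 3 * heart_rate_hz * t)
icp = 10.0 + fundamental + harmonics

# An HFC-like index: power in harmonics 2..3 relative to the fundamental.
spec = np.abs(np.fft.rfft(icp - icp.mean())) ** 2
freqs = np.fft.rfftfreq(icp.size, 1 / fs)

def band_power(f0, half_width=0.2):
    return spec[(freqs > f0 - half_width) & (freqs < f0 + half_width)].sum()

hfc = (band_power(2 * heart_rate_hz) + band_power(3 * heart_rate_hz)) \
      / band_power(heart_rate_hz)
print(f"harmonic index: {hfc:.2f}")
```

Tracking this ratio as volume is added is the kind of analysis the abstract describes.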

  15. Comparative analysis of nonlinear dimensionality reduction techniques for breast MRI segmentation

    SciTech Connect

    Akhbardeh, Alireza; Jacobs, Michael A.

    2012-04-15

Purpose: Visualization of anatomical structures using radiological imaging methods is an important tool in medicine to differentiate normal from pathological tissue and can generate large amounts of data for a radiologist to read. Integrating these large data sets is difficult and time-consuming. A new approach uses both supervised and unsupervised advanced machine learning techniques to visualize and segment radiological data. This study describes the application of a novel hybrid scheme, based on combining wavelet transform and nonlinear dimensionality reduction (NLDR) methods, to breast magnetic resonance imaging (MRI) data using three well-established NLDR techniques, namely, ISOMAP, local linear embedding (LLE), and diffusion maps (DfM), to perform a comparative performance analysis. Methods: Twenty-five breast lesion subjects were scanned using a 3T scanner. MRI sequences used were T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging. The hybrid scheme consisted of two steps: preprocessing and postprocessing of the data. The preprocessing step was applied for B1 inhomogeneity correction, image registration, and wavelet-based image compression to match and denoise the data. In the postprocessing step, MRI parameters were considered data dimensions and the NLDR-based hybrid approach was applied to integrate the MRI parameters into a single image, termed the embedded image. This was achieved by mapping all pixel intensities from the higher dimension to a lower dimensional (embedded) space. For validation, the authors compared the hybrid NLDR with linear methods of principal component analysis (PCA) and multidimensional scaling (MDS) using synthetic data. For the clinical application, the authors used breast MRI data; comparison was performed using the postcontrast DCE MRI image by evaluating the congruence of the segmented lesions. Results: The NLDR-based hybrid approach was able to define and segment
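The embedding step, mapping multiparametric pixel intensities to a lower-dimensional space, can be sketched with the classical (Torgerson) MDS baseline mentioned above (toy data and a numpy-only implementation; this is the linear comparison method, not the NLDR hybrid itself):

```python
import numpy as np

rng = np.random.default_rng(4)

def classical_mds(X, dim=1):
    """Classical (Torgerson) MDS: embed rows of X into `dim` dimensions
    from their pairwise Euclidean distances via double centering."""
    D2 = np.square(X[:, None, :] - X[None, :, :]).sum(-1)   # squared distances
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n                     # centering matrix
    B = -0.5 * J @ D2 @ J                                   # double centering
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]                         # top eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Toy "multiparametric" pixels: 4 MRI-like channels that all vary with one
# latent tissue property, embedded into a single dimension (values synthetic).
latent = rng.uniform(0, 1, 200)
pixels = np.column_stack([a * latent + rng.normal(0, 0.02, 200)
                          for a in (1.0, 0.5, -0.8, 0.3)])
embedded = classical_mds(pixels, dim=1).ravel()

# The 1-D embedding should track the latent property (up to sign).
corr = abs(np.corrcoef(embedded, latent)[0, 1])
print(f"|correlation| with latent: {corr:.2f}")
```

The NLDR methods in the paper replace these Euclidean distances with manifold-aware ones, which is what lets them capture nonlinear tissue contrasts.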

  16. A de-noising algorithm to improve SNR of segmented gamma scanner for spectrum analysis

    NASA Astrophysics Data System (ADS)

    Li, Huailiang; Tuo, Xianguo; Shi, Rui; Zhang, Jinzhao; Henderson, Mark Julian; Courtois, Jérémie; Yan, Minhao

    2016-05-01

An improved-threshold, shift-invariant wavelet transform de-noising algorithm for high-resolution gamma-ray spectroscopy is proposed to optimize the threshold function of the wavelet transform and to suppress pseudo-Gibbs artifacts. This algorithm was applied to a segmented gamma scanning system with large samples, in which high continuum levels caused by Compton scattering are routinely encountered. De-noising of gamma-ray spectra measured by the segmented gamma scanning system was evaluated for the improved, shift-invariant, and traditional wavelet transform algorithms. The improved wavelet transform method generated significantly enhanced performance in the figure of merit, the root mean square error, the peak area, and the sample attenuation correction in the segmented gamma scanning system assays. We also found from the spectrum analysis that the gamma-ray energy spectrum can be viewed as a superposition of a low-frequency signal and high-frequency noise. Moreover, a smoothed spectrum can be appropriate for straightforward automated quantitative analysis.
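The shift-invariance idea behind such de-noising can be sketched with a one-level Haar transform plus cycle spinning (a simplified stand-in for the paper's improved-threshold method; the spectrum, threshold, and peak parameters are all illustrative):

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet denoise: soft-threshold the detail coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)                   # approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2)                   # detail
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)   # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

def shift_invariant_denoise(x, thresh):
    """Cycle spinning: average denoising over the shifts of the Haar grid,
    which suppresses the pseudo-Gibbs artifacts a fixed grid produces."""
    out = np.zeros_like(x)
    for s in range(2):                         # the two distinct one-level shifts
        out += np.roll(haar_denoise(np.roll(x, -s), thresh), s)
    return out / 2

rng = np.random.default_rng(5)
# Synthetic "spectrum": two Gaussian peaks on a continuum plus noise.
ch = np.arange(1024)
clean = 50 + 200 * np.exp(-((ch - 300) / 8) ** 2) + 120 * np.exp(-((ch - 700) / 10) ** 2)
noisy = clean + rng.normal(0, 10, ch.size)
den = shift_invariant_denoise(noisy, thresh=20.0)
rmse_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
rmse_den = np.sqrt(np.mean((den - clean) ** 2))
print(f"RMSE noisy: {rmse_noisy:.1f}, denoised: {rmse_den:.1f}")
```

A multi-level stationary wavelet transform generalizes the same averaging over all shifts at every scale.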

  17. Scanning and transmission electron microscopic analysis of ampullary segment of oviduct during estrous cycle in caprines.

    PubMed

    Sharma, R K; Singh, R; Bhardwaj, J K

    2015-01-01

The ampullary segment of the mammalian oviduct provides a suitable milieu for fertilization and for development of the zygote before implantation into the uterus. In the present study, therefore, the cyclic changes in the morphology of the ampullary segment of the goat oviduct were studied during the follicular and luteal phases using scanning and transmission electron microscopy. Topographical analysis revealed the presence of uniformly ciliated ampullary epithelia, concealing apical processes of non-ciliated cells along with bulbous secretory cells, during the follicular phase. The luteal phase was marked by a decline in the number of ciliated cells and an increased occurrence of secretory cells. Ultrastructural analysis demonstrated the presence of indented nuclear membranes, supranuclear cytoplasm, secretory granules, rough endoplasmic reticulum, large lipid droplets, apically located glycogen masses, and oval-shaped mitochondria in the secretory cells. The ciliated cells were characterized by elongated nuclei, abundant smooth endoplasmic reticulum, and oval or spherical mitochondria with crescentic cristae during the follicular phase. In the luteal phase, however, secretory cells possessed a highly indented nucleus with diffuse electron-dense chromatin, hyaline nucleosol, and an increased number of lipid droplets, while the ciliated cells had numerous fibrous granules and basal bodies. The parallel use of scanning and transmission electron microscopy has enabled us to examine the cyclic, hormone-dependent changes occurring in the topography and fine structure of the epithelium of the ampullary segment and its cells during different reproductive phases, which will be of great help in understanding the major bottlenecks that limit success rates in in vitro fertilization and embryo transfer technology. PMID:25491952

  18. Multiresolution Analysis Using Wavelet, Ridgelet, and Curvelet Transforms for Medical Image Segmentation

    PubMed Central

    AlZubi, Shadi; Islam, Naveed; Abbod, Maysam

    2011-01-01

The experimental study presented in this paper is aimed at the development of an automatic image segmentation system for classifying regions of interest (ROI) in medical images obtained from different medical scanners such as PET, CT, or MRI. Multiresolution analysis (MRA) using wavelet, ridgelet, and curvelet transforms has been used in the proposed segmentation system. It is a particularly challenging task to classify cancers in human organs in scanner output using shape or gray-level information; organ shapes change through different slices in a medical stack, and gray-level intensities overlap in soft tissues. The curvelet transform is a recent extension of the wavelet and ridgelet transforms which aims to deal with interesting phenomena occurring along curves. The curvelet transform has been tested on medical data sets, and results are compared with those obtained from the other transforms. Tests indicate that using curvelets significantly improves the classification of abnormal tissues in the scans and reduces the surrounding noise. PMID:21960988

  19. AAV Vectors for FRET-Based Analysis of Protein-Protein Interactions in Photoreceptor Outer Segments

    PubMed Central

    Becirovic, Elvir; Böhm, Sybille; Nguyen, Ong N. P.; Riedmayr, Lisa M.; Hammelmann, Verena; Schön, Christian; Butz, Elisabeth S.; Wahl-Schott, Christian; Biel, Martin; Michalakis, Stylianos

    2016-01-01

    Fluorescence resonance energy transfer (FRET) is a powerful method for the detection and quantification of stationary and dynamic protein-protein interactions. Technical limitations have hampered systematic in vivo FRET experiments to study protein-protein interactions in their native environment. Here, we describe a rapid and robust protocol that combines adeno-associated virus (AAV) vector-mediated in vivo delivery of genetically encoded FRET partners with ex vivo FRET measurements. The method was established on acutely isolated outer segments of murine rod and cone photoreceptors and relies on the high co-transduction efficiency of retinal photoreceptors by co-delivered AAV vectors. The procedure can be used for the systematic analysis of protein-protein interactions of wild type or mutant outer segment proteins in their native environment. Conclusively, our protocol can help to characterize the physiological and pathophysiological relevance of photoreceptor specific proteins and, in principle, should also be transferable to other cell types. PMID:27516733

  2. Multiwell experiment: reservoir modeling analysis, Volume II

    SciTech Connect

    Horton, A.I.

    1985-05-01

This report updates an ongoing analysis by reservoir modelers at the Morgantown Energy Technology Center (METC) of well test data from the Department of Energy's Multiwell Experiment (MWX). Results of previous efforts were presented in a recent METC Technical Note (Horton 1985). Results included in this report pertain to the poststimulation well tests of Zones 3 and 4 of the Paludal Sandstone Interval and the prestimulation well tests of the Red and Yellow Zones of the Coastal Sandstone Interval. The following results were obtained by using a reservoir model and history matching procedures: (1) Post-minifracture analysis indicated that the minifracture stimulation of the Paludal Interval did not produce an induced fracture, and extreme formation damage did occur, since a 65% permeability reduction around the wellbore was estimated. The designed length for this minifracture was 200 to 300 feet on each side of the wellbore; (2) Post full-scale stimulation analysis for the Paludal Interval also showed that extreme formation damage occurred during the stimulation, as indicated by a 75% permeability reduction 20 feet on each side of the induced fracture. Also, an induced fracture half-length of 100 feet was determined to have occurred, as compared to a designed fracture half-length of 500 to 600 feet; and (3) Analysis of prestimulation well test data from the Coastal Interval agreed with previous well-to-well interference tests that showed extreme permeability anisotropy was not a factor for this zone. This lack of permeability anisotropy was also verified by a nitrogen injection test performed on the Coastal Red and Yellow Zones. 8 refs., 7 figs., 2 tabs.

  3. A hybrid neural network analysis of subtle brain volume differences in children surviving brain tumors.

    PubMed

    Reddick, W E; Mulhern, R K; Elkin, T D; Glass, J O; Merchant, T E; Langston, J W

    1998-05-01

    In the treatment of children with brain tumors, balancing the efficacy of treatment against commonly observed side effects is difficult because of a lack of quantitative measures of brain damage that can be correlated with the intensity of treatment. We quantitatively assessed volumes of brain parenchyma on magnetic resonance (MR) images using a hybrid combination of the Kohonen self-organizing map for segmentation and a multilayer backpropagation neural network for tissue classification. Initially, we analyzed the relationship between volumetric differences and radiologists' grading of atrophy in 80 subjects. This investigation revealed that brain parenchyma and white matter volumes significantly decreased as atrophy increased, whereas gray matter volumes had no relationship with atrophy. Next, we compared 37 medulloblastoma patients treated with surgery, irradiation, and chemotherapy to 19 patients treated with surgery and irradiation alone. This study demonstrated that, in these patients, chemotherapy had no significant effect on brain parenchyma, white matter, or gray matter volumes. We then investigated volumetric differences due to cranial irradiation in 15 medulloblastoma patients treated with surgery and radiation therapy, and compared these with a group of 15 age-matched patients with low-grade astrocytoma treated with surgery alone. With a minimum follow-up of one year after irradiation, all radiation-treated patients demonstrated significantly reduced white matter volumes, whereas gray matter volumes were relatively unchanged compared with those of age-matched patients treated with surgery alone. These results indicate that reductions in cerebral white matter: 1) are correlated significantly with atrophy; 2) are not related to chemotherapy; and 3) are correlated significantly with irradiation. This hybrid neural network analysis of subtle brain volume differences with magnetic resonance may constitute a direct measure of treatment-induced brain damage.
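    The Kohonen stage can be illustrated with a minimal winner-take-all self-organizing map (the zero-neighborhood limit of a full SOM, equivalent to online k-means) whose nodes converge to tissue-intensity clusters. The synthetic one-dimensional "voxels" and all parameters below are illustrative assumptions; the published pipeline additionally trains a backpropagation classifier on the SOM output.

```python
import numpy as np

def train_som(samples, n_nodes=2, epochs=20, lr=0.5, seed=0):
    # Winner-take-all Kohonen update: the best-matching unit (BMU)
    # is pulled toward each sample with a decaying learning rate.
    # (A full SOM would also update the BMU's map neighbours.)
    rng = np.random.default_rng(seed)
    w = rng.random((n_nodes, samples.shape[1]))
    for epoch in range(epochs):
        alpha = lr * (1.0 - epoch / epochs)   # decaying learning rate
        for x in rng.permutation(samples):
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))
            w[bmu] += alpha * (x - w[bmu])
    return w

# synthetic voxel intensities: two tissue classes with distinct means
rng = np.random.default_rng(1)
voxels = np.vstack([rng.normal(0.2, 0.02, (100, 1)),
                    rng.normal(0.8, 0.02, (100, 1))])
weights = train_som(voxels)
labels = np.argmin(np.abs(voxels - weights.T), axis=1)  # nearest node per voxel
```

After training, each node's weight vector sits near one tissue-class mean, and labeling voxels by their nearest node yields the segmentation.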

  4. Multivariate statistical analysis as a tool for the segmentation of 3D spectral data.

    PubMed

    Lucas, G; Burdet, P; Cantoni, M; Hébert, C

    2013-01-01

    Acquisition of three-dimensional (3D) spectral data is nowadays common using many different microanalytical techniques. Before proceeding to 3D reconstruction, data processing is necessary not only to deal with noisy acquisitions but also to segment the data in terms of chemical composition. In this article, we demonstrate the value of multivariate statistical analysis (MSA) methods for this purpose, allowing fast and reliable results. Using scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX) coupled with a focused ion beam (FIB), a stack of spectrum images was acquired on a sample produced by laser welding of a nickel-titanium wire and a stainless steel wire, presenting a complex microstructure. These data have been analyzed using principal component analysis (PCA) and factor rotations. PCA significantly improves the overall quality of the data but produces abstract components. Here it is shown that rotated components can be used, without prior knowledge of the sample, to help interpret the data, quickly obtaining qualitative mappings representative of the elements or compounds found in the material. Such abundance maps can then be used to plot scatter diagrams and interactively identify the different domains present by defining clusters of voxels with similar compositions. Identified voxels are advantageously overlaid on higher-resolution secondary electron (SE) images in order to refine the segmentation. The 3D reconstruction can then be performed with available commercial software on the basis of the provided segmentation. To assess the quality of the segmentation, the results have been compared to an EDX quantification performed on the same data. PMID:24035679
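    The PCA step can be sketched in a few lines of numpy: the spectrum-image stack is flattened to a (voxels x channels) matrix, mean-centred, and decomposed by SVD, after which a dominant component separates two synthetic "phases". The Gaussian spectral signatures below are invented stand-ins for real EDX spectra, and no factor rotation is applied.

```python
import numpy as np

# Hypothetical data: a 3-D stack flattened to (n_voxels, n_channels).
rng = np.random.default_rng(0)
n_vox, n_chan = 500, 32
# two "phases" with distinct spectral signatures plus acquisition noise
sig_a = np.exp(-0.5 * ((np.arange(n_chan) - 8) / 2.0) ** 2)
sig_b = np.exp(-0.5 * ((np.arange(n_chan) - 22) / 2.0) ** 2)
phase = rng.integers(0, 2, n_vox)
spectra = np.outer(phase, sig_b) + np.outer(1 - phase, sig_a)
spectra += rng.normal(0, 0.05, spectra.shape)

# PCA via SVD of the mean-centred data matrix
X = spectra - spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * S                     # per-voxel loadings ("abundance" maps)
explained = S**2 / np.sum(S**2)    # fraction of variance per component

# a single component separates the two phases; thresholding its
# score gives a crude segmentation (sign of the PC is arbitrary)
seg = scores[:, 0] > 0
```

On real data the rotated components, rather than raw PCs, give the chemically interpretable abundance maps the abstract describes; the scatter-diagram clustering then operates on these scores.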

  5. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    NASA Astrophysics Data System (ADS)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for imaging cancer in vivo. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, the Metabolic Tumor Volume (MTV) has gained an important role, particularly considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define the MTV. The proposed PET segmentation strategies were validated under ideal conditions (e.g., in spherical objects with uniform radioactivity concentration), whereas the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice; 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both in ideal (e.g., spherical objects with uniform radioactivity concentration) and non-ideal (e.g., non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g., with irregular shape and a non-homogeneous uptake) consisted of combining commercially available anthropomorphic phantoms with irregular molds generated using 3D-printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
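    The described combination, an adaptive threshold anchored on a k-means estimate of the background, can be sketched as follows. Everything here is illustrative: the synthetic 64x64 "PET slice", the two-cluster 1-D k-means, and the 42% threshold fraction are assumptions, not the paper's calibrated parameters.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=50, seed=0):
    """Plain 1-D k-means (Lloyd's algorithm), used here to separate
    background uptake from lesion uptake; returns sorted centers."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return np.sort(centers)

# synthetic PET slice: warm background plus a hot "lesion"
rng = np.random.default_rng(2)
img = rng.normal(1.0, 0.1, (64, 64))               # background uptake ~1.0
img[20:30, 20:30] = rng.normal(8.0, 0.5, (10, 10)) # lesion uptake ~8.0

background, lesion_peak = kmeans_1d(img.ravel())
# adaptive threshold: a fixed fraction of (peak - background) above the
# background level; the 0.42 fraction is illustrative only
threshold = background + 0.42 * (lesion_peak - background)
mtv_mask = img > threshold
mtv_voxels = int(mtv_mask.sum())
```

Multiplying the voxel count by the voxel volume would give the MTV estimate itself.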

  6. Spectral analysis program. Volume 1: User's guide

    NASA Technical Reports Server (NTRS)

    Hayden, W. L.

    1972-01-01

    The spectral analysis program (SAP) was developed to provide the Manned Spacecraft Center with the capability of computing the power spectrum of a phase or frequency modulated high frequency carrier wave. Previous power spectrum computational techniques were restricted to relatively simple modulating signals because of excessive computational time, even on a high speed digital computer. The present technique uses the recently developed extended fast Fourier transform and represents a generalized approach for simple and complex modulating signals. The present technique is especially convenient for implementation of a variety of low-pass filters for the modulating signal and bandpass filters for the modulated signal.
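    The core computation (the power spectrum of an angle-modulated carrier via the fast Fourier transform) can be sketched with numpy. The sample rate, carrier, modulating tone, and modulation index below are arbitrary illustrative values, not parameters from SAP.

```python
import numpy as np

fs = 8192.0                      # sample rate (Hz)
t = np.arange(8192) / fs         # 1 second of samples
fc, fm, beta = 1000.0, 50.0, 2.0 # carrier, modulating tone, modulation index

# phase-modulated carrier with a single sinusoidal modulating signal
signal = np.cos(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t))

# windowed FFT power spectrum (Hann window reduces leakage)
spectrum = np.fft.rfft(signal * np.hanning(len(signal)))
power = np.abs(spectrum) ** 2
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# tone-modulated PM concentrates power in lines at fc +/- k*fm,
# weighted by Bessel functions J_k(beta)
peak_freq = freqs[np.argmax(power)]
```

Low-pass filtering the modulating signal or bandpass filtering the modulated signal, as the guide mentions, would simply insert a filtering step before the FFT.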

  7. A common neural substrate for the analysis of pitch and duration pattern in segmented sound?

    PubMed

    Griffiths, T D; Johnsrude, I; Dean, J L; Green, G G

    1999-12-16

    The analysis of patterns of pitch and duration over time in natural segmented sounds is fundamentally relevant to the analysis of speech, environmental sounds and music. The neural basis for differences between the processing of pitch and duration sequences is not established. We carried out a PET activation study on nine right-handed musically naive subjects, in order to examine the basis for early pitch- and duration-sequence analysis. The input stimuli and output task were closely controlled. We demonstrated a strikingly similar bilateral neural network for both types of analysis. The network is right lateralised and includes the cerebellum, posterior superior temporal cortices, and inferior frontal cortices. These data are consistent with a common initial mechanism for the analysis of pitch and duration patterns within sequences. PMID:10716217

  8. Segmentation of Unstructured Datasets

    NASA Technical Reports Server (NTRS)

    Bhat, Smitha

    1996-01-01

    Datasets generated by computer simulations and experiments in Computational Fluid Dynamics tend to be extremely large and complex. It is difficult to visualize these datasets using standard techniques like Volume Rendering and Ray Casting. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This thesis explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and from Finite Element Analysis.
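    As a simple illustration of region extraction, the sketch below labels connected components of a thresholded 2-D scalar field with a breadth-first flood fill. It assumes a structured pixel grid for brevity; on the unstructured grids the thesis targets, the neighbour lookup would follow mesh adjacency instead of pixel offsets.

```python
import numpy as np
from collections import deque

def connected_regions(mask):
    """Label 4-connected regions of a boolean mask via breadth-first
    search; returns the label image and the number of regions."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        current += 1
        labels[start] = current
        queue = deque([start])
        while queue:
            i, j = queue.popleft()
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if (0 <= ni < mask.shape[0] and 0 <= nj < mask.shape[1]
                        and mask[ni, nj] and not labels[ni, nj]):
                    labels[ni, nj] = current
                    queue.append((ni, nj))
    return labels, current

# synthetic scalar field with two coherent high-value blobs
field = np.zeros((32, 32))
field[4:10, 4:10] = 1.0      # 6x6 region
field[20:28, 18:26] = 1.0    # 8x8 region
labels, n_regions = connected_regions(field > 0.5)
```

Per-region statistics (volume, mean value) can then be computed from the label image to quantify each extracted region of interest.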

  9. Method 349.0 Determination of Ammonia in Estuarine and Coastal Waters by Gas Segmented Continuous Flow Colorimetric Analysis

    EPA Science Inventory

    This method provides a procedure for the determination of ammonia in estuarine and coastal waters. The method is based upon the indophenol reaction [1-5], here adapted to automated gas-segmented continuous flow analysis.

  10. A marked point process of rectangles and segments for automatic analysis of digital elevation models.

    PubMed

    Ortner, Mathias; Descombes, Xavier; Zerubia, Josiane

    2008-01-01

    This work presents a framework for automatic feature extraction from images using stochastic geometry. Features in images are modeled as realizations of a spatial point process of geometrical shapes. This framework allows the incorporation of a priori knowledge about the spatial distribution of features. More specifically, we present a model based on the superposition of a process of segments and a process of rectangles. The former is dedicated to the detection of linear networks of discontinuities, while the latter aims at segmenting homogeneous areas. An energy is defined, favoring connections of segments and alignments of rectangles, as well as a relevant interaction between both types of objects. The estimation is performed by minimizing the energy using a simulated annealing algorithm. The proposed model is applied to the analysis of Digital Elevation Models (DEMs). These images are raster data representing the altimetry of a dense urban area. We present results on real data provided by the IGN (French National Geographic Institute) consisting of low-quality DEMs of various types. PMID:18000328
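    The energy-minimization step can be illustrated generically. The sketch below anneals a toy two-parameter energy rather than a marked point process of rectangles and segments (whose proposal moves would add, remove, and perturb objects), so every choice here, including the cooling schedule and proposal scale, is an illustrative assumption.

```python
import numpy as np

def anneal(energy, x0, steps=5000, t0=1.0, seed=0):
    """Generic simulated annealing: propose a random perturbation,
    always accept downhill moves, accept uphill moves with
    probability exp(-dE/T), and cool T geometrically."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    e = energy(x)
    for k in range(steps):
        t = t0 * 0.999 ** k                       # geometric cooling
        cand = x + rng.normal(0, 0.1, x.shape)    # local proposal
        de = energy(cand) - e
        if de < 0 or rng.random() < np.exp(-de / t):
            x, e = cand, e + de                   # accept the move
    return x, e

# toy energy with a unique minimum at (1, -2)
target = np.array([1.0, -2.0])
energy = lambda x: float(np.sum((x - target) ** 2))
x_best, e_best = anneal(energy, [0.0, 0.0])
```

As the temperature falls, uphill acceptances become rare and the state settles near the energy minimum; the paper's version does the same over configurations of geometric objects.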