Science.gov

Sample records for volume segmentation analysis

  1. Economic Analysis. Volume V. Course Segments 65-79.

    ERIC Educational Resources Information Center

    Sterling Inst., Washington, DC. Educational Technology Center.

    The fifth volume of the multimedia, individualized course in economic analysis produced for the United States Naval Academy covers segments 65-79 of the course. Included in the volume are discussions of monopoly markets, monopolistic competition, oligopoly markets, and the theory of factor demand and supply. Other segments of the course, the…

  2. Automated segmentation and dose-volume analysis with DICOMautomaton

    NASA Astrophysics Data System (ADS)

    Clark, H.; Thomas, S.; Moiseenko, V.; Lee, R.; Gill, B.; Duzenli, C.; Wu, J.

    2014-03-01

    Purpose: Exploration of historical data for regional organ dose sensitivity is limited by the effort needed to (sub-)segment large numbers of contours. A system has been developed which can rapidly perform autonomous contour sub-segmentation and generic dose-volume computations, substantially reducing the effort required for exploratory analyses. Methods: A contour-centric approach is taken which enables lossless, reversible segmentation and dramatically reduces computation time compared with voxel-centric approaches. Segmentation can be specified on a per-contour, per-organ, or per-patient basis, and can be performed along either an embedded plane or in terms of the contour's bounds (e.g., splitting an organ into fractional-volume/dose pieces along any 3D unit vector). More complex segmentation techniques are available. Anonymized data from 60 head-and-neck cancer patients were used to compare dose-volume computations with Varian's Eclipse™ (Varian Medical Systems, Inc.). Results: Computed mean doses and dose-volume histograms agree strongly with those from Eclipse™. Contours which have been segmented can be injected back into patient data permanently and in a Digital Imaging and Communications in Medicine (DICOM)-conforming manner. Lossless segmentation persists across such injection and remains fully reversible. Conclusions: DICOMautomaton allows researchers to rapidly, accurately, and autonomously segment large amounts of data into intricate structures suitable for analyses of regional organ dose sensitivity.
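
    The fractional-volume split along an arbitrary 3D unit vector can be illustrated with a short sketch, under the simplifying assumption that an organ is reduced to a cloud of voxel/contour-point centres rather than DICOM contours; `split_by_fraction` is a hypothetical helper for illustration, not part of DICOMautomaton's actual API:

```python
import numpy as np

def split_by_fraction(points, direction, fraction):
    """Split a point cloud representing an organ into two sub-segments holding
    `fraction` and `1 - fraction` of the points, measured along a 3D unit
    vector (a simplified stand-in for contour-based splitting)."""
    u = np.asarray(direction, dtype=float)
    u /= np.linalg.norm(u)                       # normalise the split axis
    proj = np.asarray(points, dtype=float) @ u   # scalar position along the axis
    cut = np.quantile(proj, fraction)            # plane position at the quantile
    lower = proj <= cut
    return points[lower], points[~lower]

# Toy example: 100 voxel centres stacked along z, split 30/70 along z.
pts = np.array([[0.0, 0.0, z] for z in range(100)])
a, b = split_by_fraction(pts, (0, 0, 1), 0.30)
```

    The same call with a different `direction` would split the organ along any oblique axis, which is the point of the contour's-bounds formulation.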

  3. Volume Segmentation and Ghost Particles

    NASA Astrophysics Data System (ADS)

    Ziskin, Isaac; Adrian, Ronald

    2011-11-01

    Volume Segmentation Tomographic PIV (VS-TPIV) is a type of tomographic PIV in which images of particles in a relatively thick volume are segmented into images on a set of much thinner volumes that may be approximated as planes, as in 2D planar PIV. The planes of images can be analysed by standard mono-PIV, and the volume of flow vectors can be recreated by assembling the planes of vectors. The interrogation process is similar to a Holographic PIV analysis, except that the planes of image data are extracted from two-dimensional camera images of the volume of particles instead of three-dimensional holographic images. Like the tomographic PIV method using the MART algorithm, Volume Segmentation requires at least two cameras and works best with three or four. Unlike the MART method, Volume Segmentation does not require reconstruction of individual particle images one pixel at a time and it does not require an iterative process, so it operates much faster. As in all tomographic reconstruction strategies, ambiguities known as ghost particles are produced in the segmentation process. The effect of these ghost particles on the PIV measurement is discussed. This research was supported by Contract 79419-001-09, Los Alamos National Laboratory.

  4. Direct volume estimation without segmentation

    NASA Astrophysics Data System (ADS)

    Zhen, X.; Wang, Z.; Islam, A.; Bhaduri, M.; Chan, I.; Li, S.

    2015-03-01

    Volume estimation plays an important role in clinical diagnosis. For example, cardiac ventricular volumes including left ventricle (LV) and right ventricle (RV) are important clinical indicators of cardiac functions. Accurate and automatic estimation of the ventricular volumes is essential to the assessment of cardiac functions and diagnosis of heart diseases. Conventional methods are dependent on an intermediate segmentation step which is obtained either manually or automatically. However, manual segmentation is extremely time-consuming, subjective and highly non-reproducible; automatic segmentation is still challenging, computationally expensive, and completely unsolved for the RV. Towards accurate and efficient direct volume estimation, our group has been researching learning-based methods that avoid segmentation by leveraging state-of-the-art machine learning techniques. Our direct estimation methods remove the additional segmentation step and can naturally handle various volume estimation tasks. Moreover, they are flexible enough to be used for volume estimation of either the joint bi-ventricles (LV and RV) or the individual LV/RV. We comparatively study the performance of direct methods on cardiac ventricular volume estimation against segmentation-based methods. Experimental results show that direct estimation methods provide more accurate estimates of cardiac ventricular volumes than segmentation-based methods. This indicates that direct estimation methods not only provide a convenient and mature clinical tool for cardiac volume estimation but also enable diagnosis of cardiac diseases to be conducted in a more efficient and reliable way.
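
    The segmentation-free idea can be sketched with a deliberately crude stand-in: a linear least-squares "learner" mapping a few global image features straight to volume on synthetic data. The paper's actual models are far more sophisticated; every name and number below is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def features(image):
    """Cheap global features of an image slice (histogram-style summaries).
    Direct methods learn a mapping from such features straight to volume,
    skipping the segmentation step entirely."""
    return np.array([image.mean(), image.std(), (image > 0.5).mean(), 1.0])

# Synthetic training data: 'volume' is the amount of bright tissue, which the
# features above capture, so a linear model can recover it.
images, volumes = [], []
for _ in range(200):
    size = rng.uniform(0.1, 0.4)
    img = (rng.random((32, 32)) < size).astype(float)
    images.append(img)
    volumes.append(img.sum())

X = np.stack([features(im) for im in images])
y = np.array(volumes)
w, *_ = np.linalg.lstsq(X, y, rcond=None)   # fit the linear "learner"

pred = X @ w
rel_err = np.abs(pred - y).mean() / y.mean()
```

    On real cine MRI, the regression target would be the ground-truth ventricular volume and the feature extractor a learned one; the sketch only shows why no intermediate mask is required.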

  5. Segmentation-based method incorporating fractional volume analysis for quantification of brain atrophy on magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Wang, Deming; Doddrell, David M.

    2001-07-01

    The partial volume effect is a major problem in brain tissue segmentation on digital images such as magnetic resonance (MR) images. In this paper, special attention has been paid to the partial volume effect when developing a method for quantifying brain atrophy. Specifically, the partial volume effect is minimized in the process of parameter estimation prior to segmentation by identifying and excluding those voxels with possible partial volume effect. A quantitative measure of the partial volume effect was also introduced through developing a model that calculates fractional volumes for voxels with mixtures of two different tissues. For quantifying cerebrospinal fluid (CSF) volumes, fractional volumes are calculated for two classes of mixture involving gray matter and CSF, and white matter and CSF. Tissue segmentation is carried out using 1D and 2D thresholding techniques after images are intensity-corrected. Threshold values are estimated using the minimum error method. Morphological processing and region identification analysis are used extensively in the algorithm. As an application, the method was employed for evaluating rates of brain atrophy based on serially acquired structural brain MR images. Consistent and accurate rates of brain atrophy have been obtained for patients with Alzheimer's disease as well as for elderly subjects undergoing the normal aging process.
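
    The two-tissue fractional-volume model reduces to a linear mixing rule: if a voxel's intensity is a mix of two pure-tissue mean intensities, the fraction of each tissue follows directly. The sketch below uses invented mean intensities and is only one plausible reading of such a model:

```python
def fractional_volume(intensity, mu_a, mu_b):
    """Fraction of a voxel occupied by tissue A when the voxel intensity is a
    linear mix of two pure-tissue means mu_a and mu_b (two-tissue PV model)."""
    f = (intensity - mu_b) / (mu_a - mu_b)
    return min(1.0, max(0.0, f))            # clip: pure B -> 0, pure A -> 1

# Hypothetical CSF/grey-matter boundary voxel: CSF mean 40, GM mean 100,
# observed intensity 70 -> the voxel is half GM, half CSF.
f_gm = fractional_volume(70.0, mu_a=100.0, mu_b=40.0)
csf_part = 1.0 - f_gm
```

    Summing `csf_part` over all boundary voxels is what lets a CSF volume estimate include partial-volume voxels instead of discarding them.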

  6. Volume analysis of treatment response of head and neck lesions using 3D level set segmentation

    NASA Astrophysics Data System (ADS)

    Hadjiiski, Lubomir; Street, Ethan; Sahiner, Berkman; Gujar, Sachin; Ibrahim, Mohannad; Chan, Heang-Ping; Mukherji, Suresh K.

    2008-03-01

    A computerized system for segmenting lesions in head and neck CT scans was developed to assist radiologists in estimating the response of malignant lesions to treatment. The system performs 3D segmentations based on a level set model and uses as input an approximate bounding box for the lesion of interest. In this preliminary study, CT scans from a pre-treatment exam and a post one-cycle chemotherapy exam of 13 patients containing head and neck neoplasms were used. A radiologist marked 35 temporal pairs of lesions; 13 pairs were primary site cancers and 22 pairs were metastatic lymph nodes. For all lesions, a radiologist outlined a contour on the best slice of both the pre- and post-treatment scans. For the 13 primary lesion pairs, full 3D contours were also extracted by a radiologist. The average pre- and post-treatment areas on the best slices for all lesions were 4.5 and 2.1 cm², respectively. For the 13 primary site pairs, the average pre- and post-treatment primary lesion volumes were 15.4 and 6.7 cm³, respectively. The correlation between the automatic and manual estimates of the pre-to-post-treatment change in area for all 35 pairs was r=0.97, while the correlation for the percent change in area was r=0.80. The correlation for the change in volume for the 13 primary site pairs was r=0.89, while the correlation for the percent change in volume was r=0.79. The average signed percent error between the automatic and manual areas for all 70 lesions was 11.0±20.6%. The average signed percent error between the automatic and manual volumes for all 26 primary lesions was 37.8±42.1%. The preliminary results indicate that the automated segmentation system can reliably estimate tumor size change in response to treatment relative to the radiologist's hand segmentation.
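
    The response metrics being compared reduce to a percent-change formula plus a Pearson correlation between automatic and manual estimates. Only the 4.5/2.1 cm² averages below come from the abstract; the per-lesion change values are invented to show the computation:

```python
import numpy as np

def percent_change(pre, post):
    """Percent change in lesion size between pre- and post-treatment exams."""
    return 100.0 * (post - pre) / pre

# Average best-slice areas reported above: 4.5 cm^2 pre, 2.1 cm^2 post.
avg_change = percent_change(4.5, 2.1)          # roughly -53%

# Agreement between automatic and manual size changes is scored by Pearson r.
manual = np.array([-2.4, -1.0, -3.1, -0.5])    # toy per-lesion changes (cm^2)
auto = np.array([-2.2, -1.1, -2.9, -0.7])
r = np.corrcoef(manual, auto)[0, 1]
```

    Note that correlations on percent change (r=0.80, 0.79 above) are typically lower than on absolute change, because dividing by a small pre-treatment size amplifies segmentation noise.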

  7. NSEG, a segmented mission analysis program for low and high speed aircraft. Volume 1: Theoretical development

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Rozendaal, H. L.

    1977-01-01

    A rapid mission analysis code based on the use of approximate flight path equations of motion is presented. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics are specified in tabular form. The code also contains extensive flight envelope performance mapping capabilities. Approximate take off and landing analyses are performed. At high speeds, centrifugal lift effects are accounted for. Extensive turbojet and ramjet engine scaling procedures are incorporated in the code.

  8. NSEG: A segmented mission analysis program for low and high speed aircraft. Volume 3: Demonstration problems

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Rozendaal, H. L.

    1977-01-01

    Program NSEG is a rapid mission analysis code based on the use of approximate flight path equations of motion. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics are specified in tabular form. In addition to its mission performance calculation capabilities, the code also contains extensive flight envelope performance mapping capabilities. For example, rate-of-climb, turn rates, and energy maneuverability parameter values may be mapped in the Mach-altitude plane. Approximate take off and landing analyses are also performed. At high speeds, centrifugal lift effects are accounted for. Extensive turbojet and ramjet engine scaling procedures are incorporated in the code.
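
    The per-segment equation dispatch that NSEG performs can be caricatured in a few lines: each segment type gets its own simplified equations of motion, and a mission is a sequence of typed segments. The segment models, rates, and state fields below are invented for illustration and are far cruder than NSEG's tabulated vehicle characteristics:

```python
def cruise(state, dist_nm):
    """Constant-speed cruise: advance time and burn fuel at a fixed rate."""
    t = dist_nm / state["tas_kt"]                  # hours
    state["time_h"] += t
    state["fuel_lb"] -= t * state["cruise_ff_lbh"]
    return state

def climb(state, dalt_ft):
    """Simplified climb at a constant rate of climb and climb fuel flow."""
    t = dalt_ft / state["roc_fpm"] / 60.0          # hours
    state["alt_ft"] += dalt_ft
    state["time_h"] += t
    state["fuel_lb"] -= t * state["climb_ff_lbh"]
    return state

SEGMENTS = {"cruise": cruise, "climb": climb}      # equation form per type

def fly(state, mission):
    for kind, arg in mission:
        state = SEGMENTS[kind](state, arg)         # pick the segment's equations
    return state

s0 = dict(alt_ft=0.0, tas_kt=400.0, roc_fpm=2000.0,
          time_h=0.0, fuel_lb=10000.0, cruise_ff_lbh=3000.0, climb_ff_lbh=6000.0)
out = fly(s0, [("climb", 30000.0), ("cruise", 400.0)])
```

    The design point is the dispatch table: adding accelerations, descents, or decelerations means adding one function per segment type without touching the mission loop.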

  9. Volume rendering for interactive 3D segmentation

    NASA Astrophysics Data System (ADS)

    Toennies, Klaus D.; Derz, Claus

    1997-05-01

    Combined emission/absorption and reflection/transmission volume rendering is able to display poorly segmented structures from 3D medical image sequences. Visual cues such as shading and color let the user distinguish structures in the 3D display that are incompletely extracted by threshold segmentation. In order to be truly helpful, analyzed information needs to be quantified and transferred back into the data. We extend our previously presented scheme for such display by establishing a communication between visual analysis and the display process. The main tool is a selective 3D picking device. To be useful on a rather rough segmentation, the device itself and the display offer facilities for object selection. Selective intersection planes let the user discard information prior to choosing a tissue of interest. Subsequently, picking is carried out on the 2D display by casting a ray into the volume. The picking device is made pre-selective using already existing segmentation information. Thus, objects can be picked that are visible behind semi-transparent surfaces of other structures. Information generated by a later connected-component analysis can then be integrated into the data. Data examination is continued on an improved display, letting the user actively participate in the analysis process. Results of this display-and-interaction scheme proved to be very effective. The viewer's ability to extract relevant information from a complex scene is combined with the computer's ability to quantify this information. The approach introduces 3D computer graphics methods into user-guided image analysis, creating an analysis-synthesis cycle for interactive 3D segmentation.
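
    The core of such a picking device is marching a ray from the 2D click into the labelled volume and returning the first segmented voxel it hits. The sketch below is a minimal, assumption-laden version (nearest-voxel sampling, fixed step, no pre-selection filtering) rather than the paper's implementation:

```python
import numpy as np

def pick(volume_labels, origin, direction, step=0.5, max_t=100.0):
    """Cast a ray from `origin` along `direction` through a labelled volume
    and return (label, voxel_index) of the first non-zero label hit, else
    None. Nearest-voxel sampling at a fixed step keeps the sketch simple."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    t = 0.0
    while t < max_t:
        p = np.asarray(origin, dtype=float) + t * d
        idx = tuple(np.round(p).astype(int))
        if all(0 <= i < s for i, s in zip(idx, volume_labels.shape)):
            lab = volume_labels[idx]
            if lab != 0:
                return int(lab), idx
        t += step
    return None

# Toy labelled volume: object '2' occupies the slab z >= 6.
vol = np.zeros((10, 10, 10), dtype=int)
vol[:, :, 6:] = 2
hit = pick(vol, origin=(5, 5, 0), direction=(0, 0, 1))
```

    Pre-selection as described above would amount to an extra argument listing labels the ray is allowed to stop on, so that picks can pass through semi-transparent structures.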

  10. Inter-sport variability of muscle volume distribution identified by segmental bioelectrical impedance analysis in four ball sports

    PubMed Central

    Yamada, Yosuke; Masuo, Yoshihisa; Nakamura, Eitaro; Oda, Shingo

    2013-01-01

    The aim of this study was to evaluate and quantify differences in muscle distribution in athletes of various ball sports using segmental bioelectrical impedance analysis (SBIA). Participants were 115 male collegiate athletes from four ball sports (baseball, soccer, tennis, and lacrosse). Percent body fat (%BF) and lean body mass were measured, and SBIA was used to measure segmental muscle volume (MV) in bilateral upper arms, forearms, thighs, and lower legs. We calculated the MV ratios of dominant to nondominant, proximal to distal, and upper to lower limbs. The measurements consisted of a total of 31 variables. Cluster and factor analyses were applied to identify redundant variables. The muscle distribution was significantly different among groups, but the %BF was not. The classification procedures of the discriminant analysis could correctly distinguish 84.3% of the athletes. These results suggest that collegiate ball game athletes have adapted their physique to their sport movements very well, and the SBIA, which is an affordable, noninvasive, easy-to-operate, and fast alternative method in the field, can distinguish ball game athletes according to their specific muscle distribution within a 5-minute measurement. The SBIA could be a useful, affordable, and fast tool for identifying talents for specific sports. PMID:24379714

  11. Uncertainty-aware guided volume segmentation.

    PubMed

    Prassni, Jörg-Stefan; Ropinski, Timo; Hinrichs, Klaus

    2010-01-01

    Although direct volume rendering is established as a powerful tool for the visualization of volumetric data, efficient and reliable feature detection is still an open topic. Usually, a tradeoff between fast but imprecise classification schemes and accurate but time-consuming segmentation techniques has to be made. Furthermore, the issue of uncertainty introduced with the feature detection process is completely neglected by the majority of existing approaches. In this paper we propose a guided probabilistic volume segmentation approach that focuses on the minimization of uncertainty. In an iterative process, our system continuously assesses uncertainty of a random walker-based segmentation in order to detect regions with high ambiguity, to which the user's attention is directed to support the correction of potential misclassifications. This reduces the risk of critical segmentation errors and ensures that information about the segmentation's reliability is conveyed to the user in a dependable way. In order to improve the efficiency of the segmentation process, our technique does not only take into account the volume data to be segmented, but also enables the user to incorporate classification information. An interactive workflow has been achieved by implementing the presented system on the GPU using the OpenCL API. Our results obtained for several medical data sets of different modalities, including brain MRI and abdominal CT, demonstrate the reliability and efficiency of our approach. PMID:20975176

  12. Volume rendering of segmented image objects.

    PubMed

    Bullitt, Elizabeth; Aylward, Stephen R

    2002-08-01

    This paper describes a new method of combining ray-casting with segmentation. Volume rendering is performed at interactive rates on personal computers, and visualizations include both "superficial" ray-casting through a shell at each object's surface and "deep" ray-casting through the confines of each object. A feature of the approach is the option to smoothly and interactively dilate segmentation boundaries along all axes. This ability, when combined with selective "turning off" of extraneous image objects, can help clinicians detect and evaluate segmentation errors that may affect surgical planning. We describe both a method optimized for displaying tubular objects and a more general method applicable to objects of arbitrary geometry. In both cases, select three-dimensional points are projected onto a modified z buffer that records additional information about the projected objects. A subsequent step selectively volume renders only through the object volumes indicated by the z buffer. We describe how our approach differs from other reported methods for combining segmentation with ray-casting, and illustrate how our method can be useful in helping to detect segmentation errors. PMID:12472272

  13. NSEG: A segmented mission analysis program for low and high speed aircraft. Volume 2: Program users manual

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Rozendaal, H. L.

    1977-01-01

    A rapid mission analysis code based on the use of approximate flight path equations of motion is described. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics are specified in tabular form. In addition to its mission performance calculation capabilities, the code also contains extensive flight envelope performance mapping capabilities. Approximate take off and landing analyses can be performed. At high speeds, centrifugal lift effects are taken into account. Extensive turbojet and ramjet engine scaling procedures are incorporated in the code.

  14. Automated White Matter Total Lesion Volume Segmentation in Diabetes

    PubMed Central

    Maldjian, J.A.; Whitlow, C.T.; Saha, B.N.; Kota, G.; Vandergriff, C.; Davenport, E.M.; Divers, J.; Freedman, B.I.; Bowden, D.W.

    2014-01-01

    Background and Purpose WM lesion segmentation is often performed with the use of subjective rating scales because manual methods are laborious and tedious; however, automated methods are now available. We compared the performance of total lesion volume grading computed by use of an automated WM lesion segmentation algorithm with that of subjective rating scales and expert manual segmentation in a cohort of subjects with type 2 diabetes. Materials and Methods Structural T1 and FLAIR MR imaging data from 50 subjects with diabetes (age, 67.7 ± 7.2 years) and 50 nondiabetic sibling pairs (age, 67.5 ± 9.4 years) were evaluated in an institutional review board–approved study. WM lesion segmentation maps and total lesion volume were generated for each subject by means of the Statistical Parametric Mapping (SPM8) Lesion Segmentation Toolbox. Subjective WM lesion grade was determined by means of a 0–9 rating scale by 2 readers. Ground-truth total lesion volume was determined by means of manual segmentation by experienced readers. Correlation analyses compared manual segmentation total lesion volume with automated and subjective evaluation methods. Results Correlation between average lesion segmentation and ground-truth total lesion volume was 0.84. Maximum correlation between the Lesion Segmentation Toolbox and ground-truth total lesion volume (ρ = 0.87) occurred at the segmentation threshold of k = 0.25, whereas maximum correlation between subjective lesion segmentation and the Lesion Segmentation Toolbox (ρ = 0.73) occurred at k = 0.15. The difference between the 2 correlation estimates with ground-truth was not statistically significant. The lower segmentation threshold (0.15 versus 0.25) suggests that subjective raters overestimate WM lesion burden. Conclusions We validate the Lesion Segmentation Toolbox for determining total lesion volume in diabetes-enriched populations and compare it with a common subjective WM lesion rating scale. The Lesion Segmentation…

  15. Bioimpedance Measurement of Segmental Fluid Volumes and Hemodynamics

    NASA Technical Reports Server (NTRS)

    Montgomery, Leslie D.; Wu, Yi-Chang; Ku, Yu-Tsuan E.; Gerth, Wayne A.; DeVincenzi, D. (Technical Monitor)

    2000-01-01

    Bioimpedance has become a useful tool to measure changes in body fluid compartment volumes. An Electrical Impedance Spectroscopic (EIS) system is described that extends the capabilities of conventional fixed frequency impedance plethysmographic (IPG) methods to allow examination of the redistribution of fluids between the intracellular and extracellular compartments of body segments. The combination of EIS and IPG techniques was evaluated in the human calf, thigh, and torso segments of eight healthy men during 90 minutes of six degree head-down tilt (HDT). After 90 minutes HDT the calf and thigh segments significantly (P < 0.05) lost conductive volume (eight and four percent, respectively) while the torso significantly (P < 0.05) gained volume (approximately three percent). Hemodynamic responses calculated from pulsatile IPG data also showed a segmental pattern consistent with vascular fluid loss from the lower extremities and vascular engorgement in the torso. Lumped-parameter equivalent circuit analyses of EIS data for the calf and thigh indicated that the overall volume decreases in these segments arose from reduced extracellular volume that was not completely balanced by increased intracellular volume. The combined use of IPG and EIS techniques enables noninvasive tracking of multi-segment volumetric and hemodynamic responses to environmental and physiological stresses.
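
    The link from measured resistance to segmental fluid volume is commonly modelled by treating the limb segment as a uniform conductive cylinder, V = ρL²/R. This is the textbook impedance-plethysmography relation, not necessarily the lumped-parameter EIS analysis used in the study, and the resistivity, length, and resistance values below are invented:

```python
def segment_fluid_volume(resistivity_ohm_cm, length_cm, resistance_ohm):
    """Conductive fluid volume of a limb segment modelled as a uniform
    cylinder: V = rho * L^2 / R (classic single-cylinder IPG model)."""
    return resistivity_ohm_cm * length_cm ** 2 / resistance_ohm

# Toy calf segment: rho = 150 ohm*cm, L = 35 cm, R = 60 ohm.
v0 = segment_fluid_volume(150.0, 35.0, 60.0)

# An 8% conductive-volume loss (the order reported for the calf during HDT)
# appears as a proportional rise in measured resistance:
v1 = segment_fluid_volume(150.0, 35.0, 60.0 / 0.92)
loss = 100.0 * (v0 - v1) / v0
```

    Because V is inversely proportional to R, tracking R over the tilt protocol tracks the fluid shift without any absolute calibration of ρ or L, which cancel in the percent change.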

  16. A Ray Casting Accelerated Method of Segmented Regular Volume Data

    NASA Astrophysics Data System (ADS)

    Zhu, Min; Guo, Ming; Wang, Liting; Dai, Yujin

    The volume data fields constructed from industrial computed tomography (ICT) images of large-scale military products are very large, and empty voxels occupy only a small fraction of the field, so existing ray-casting acceleration methods have little effect. In 3D-visualization fault diagnosis of such products, only part of the information in the volume data field helps the inspector locate internal faults, and reconstructing the entire volume in 3D greatly increases the computational load. A new ray-casting acceleration method based on segmented volume data is therefore proposed. A segmented-information volume data field is built from the segmentation result and, following the construction of existing hierarchical volume data structures, a hierarchical volume data structure based on the segmented information is assembled. Using this structure, the parts specified by the user are identified automatically during ray casting, while the remaining parts are treated as empty voxels; the sampling step is thus adjusted dynamically, the number of sampling points is reduced, and volume-rendering speed is improved. Experimental results confirm the efficiency and display quality of the proposed method.

  17. Uterine fibroid segmentation and volume measurement on MRI

    NASA Astrophysics Data System (ADS)

    Yao, Jianhua; Chen, David; Lu, Wenzhu; Premkumar, Ahalya

    2006-03-01

    Uterine leiomyomas are the most common pelvic tumors in females. The efficacy of medical treatment is gauged by shrinkage of the size of these tumors. In this paper, we present a method to robustly segment the fibroids on MRI and accurately measure the 3D volume. Our method is based on a combination of fast marching level set and Laplacian level set. With a seed point placed inside the fibroid region, a fast marching level set is first employed to obtain a rough segmentation, followed by a Laplacian level set to refine the segmentation. We devised a scheme to automatically determine the parameters for the level set function and the sigmoid function based on pixel statistics around the seed point. The segmentation is conducted on three concurrent views (axial, coronal and sagittal), and a combined volume measurement is computed to obtain a more reliable measurement. We carried out extensive tests on 13 patients, 25 MRI studies and 133 fibroids. The segmentation result was validated against manual segmentation defined by experts. The average segmentation sensitivity (true positive fraction) among all fibroids was 84.6%, and the average segmentation specificity (1-false positive fraction) was 84.3%.
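
    The validation metrics quoted above have direct mask-level definitions: sensitivity is the true-positive fraction of the manual mask recovered, and specificity (as "1 - false-positive fraction") penalises spill into background. The reading below takes false-positive fraction relative to the background voxel count, which is one common convention; the masks are toy data:

```python
import numpy as np

def sensitivity_specificity(auto_mask, manual_mask):
    """True-positive fraction (sensitivity) and 1 - false-positive fraction
    (specificity) of a binary segmentation against a manual reference."""
    auto = auto_mask.astype(bool)
    ref = manual_mask.astype(bool)
    tp = np.logical_and(auto, ref).sum()
    fp = np.logical_and(auto, ~ref).sum()
    sens = tp / ref.sum()                # fraction of the fibroid recovered
    spec = 1.0 - fp / (~ref).sum()       # fraction of background kept clean
    return sens, spec

ref = np.zeros((8, 8), bool); ref[2:6, 2:6] = True     # 16 reference voxels
auto = np.zeros((8, 8), bool); auto[2:6, 2:5] = True   # misses one column
auto[0, 0] = True                                      # one false positive
sens, spec = sensitivity_specificity(auto, ref)
```

    Averaging these two numbers over all 133 fibroids yields summary figures like the 84.6%/84.3% reported above.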

  18. Quantifying brain tissue volume in multiple sclerosis with automated lesion segmentation and filling

    PubMed Central

    Valverde, Sergi; Oliver, Arnau; Roura, Eloy; Pareto, Deborah; Vilanova, Joan C.; Ramió-Torrentà, Lluís; Sastre-Garriga, Jaume; Montalban, Xavier; Rovira, Àlex; Lladó, Xavier

    2015-01-01

    Lesion filling has been successfully applied to reduce the effect of hypo-intense T1-w Multiple Sclerosis (MS) lesions on automatic brain tissue segmentation. However, a study of fully automated pipelines incorporating lesion segmentation and lesion filling on tissue volume analysis has not yet been performed. Here, we analyzed the percentage error introduced by automating the lesion segmentation and filling processes in the tissue segmentation of 70 clinically isolated syndrome patient images. First, images were processed using the LST and SLS toolkits with different pipeline combinations that differed in either automated or manual lesion segmentation, and lesion filling or masking out lesions. Then, images processed following each of the pipelines were segmented into gray matter (GM) and white matter (WM) using SPM8, and compared with the same images where expert lesion annotations were filled before segmentation. Our results showed that fully automated lesion segmentation and filling pipelines significantly reduced the percentage error in GM and WM volume on images of MS patients, and performed similarly to the images where expert lesion annotations were masked before segmentation. In all the pipelines, the amount of misclassified lesion voxels was the main cause of the observed error in GM and WM volume. However, the percentage error was significantly lower when automatically estimated lesions were filled rather than masked before segmentation. These results are relevant and suggest that the LST and SLS toolboxes allow accurate brain tissue volume measurements to be performed without any kind of manual intervention, which can be convenient not only in terms of time and economic costs, but also to avoid the inherent intra/inter variability between manual annotations. PMID:26740917
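
    The lesion-filling idea itself is simple: replace hypo-intense lesion voxels with plausible white-matter intensities so the tissue classifier is not biased. The sketch below samples globally from normal-appearing WM; real toolkits such as LST model local neighbourhood intensities, so treat this as a conceptual illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

def fill_lesions(t1, lesion_mask, wm_mask):
    """Replace hypo-intense lesion voxels with intensities sampled from
    normal-appearing white matter (a simplified version of lesion filling)."""
    filled = t1.copy()
    wm_vals = t1[wm_mask & ~lesion_mask]           # normal-appearing WM pool
    filled[lesion_mask] = rng.choice(wm_vals, size=int(lesion_mask.sum()))
    return filled

t1 = np.full((10, 10), 100.0)           # toy slice: uniform WM at 100
wm = np.ones((10, 10), bool)
lesion = np.zeros((10, 10), bool); lesion[4:6, 4:6] = True
t1[lesion] = 40.0                        # hypo-intense T1 lesion
filled = fill_lesions(t1, lesion, wm)
```

    After filling, a GM/WM segmenter sees WM-like intensities where the lesion was, which is why filled pipelines outperform masking in the study's error analysis.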

  19. Fast global interactive volume segmentation with regional supervoxel descriptors

    NASA Astrophysics Data System (ADS)

    Luengo, Imanol; Basham, Mark; French, Andrew P.

    2016-03-01

    In this paper we propose a novel approach towards fast multi-class volume segmentation that exploits supervoxels in order to reduce complexity, time and memory requirements. Current methods for biomedical image segmentation typically require either complex mathematical models with slow convergence, or expensive-to-calculate image features, which makes them infeasible for large volumes with many objects (tens to hundreds) of different classes, as is typical in modern medical and biological datasets. Recently, graphical models such as Markov Random Fields (MRF) or Conditional Random Fields (CRF) are having a huge impact in different computer vision areas (e.g. image parsing, object detection, object recognition) as they provide global regularization for multiclass problems over an energy minimization framework. These models have yet to find impact in biomedical imaging due to complexities in training and slow inference in 3D images caused by the very large number of voxels. Here, we define an interactive segmentation approach over a supervoxel space by first defining novel, robust and fast regional descriptors for supervoxels. Then, a hierarchical segmentation approach is adopted by training Contextual Extremely Random Forests in a user-defined label hierarchy, where the classification output of the previous layer is used as additional features to train a new classifier to refine more detailed label information. This hierarchical model yields final class likelihoods for supervoxels which are finally refined by an MRF model for 3D segmentation. Results demonstrate the effectiveness of the approach on a challenging cryo-soft X-ray tomography dataset by segmenting cell areas with only a few user scribbles as the input for our algorithm. Further results demonstrate the effectiveness of our method in fully extracting different organelles from the cell volume with another few seconds of user interaction.

  20. Performance benchmarking of liver CT image segmentation and volume estimation

    NASA Astrophysics Data System (ADS)

    Xiong, Wei; Zhou, Jiayin; Tian, Qi; Liu, Jimmy J.; Qi, Yingyi; Leow, Wee Kheng; Han, Thazin; Wang, Shih-chang

    2008-03-01

    In recent years more and more computer aided diagnosis (CAD) systems are being used routinely in hospitals. Image-based knowledge discovery plays important roles in many CAD applications, which have great potential to be integrated into the next-generation picture archiving and communication systems (PACS). Robust medical image segmentation tools are essential for such discovery in many CAD applications. In this paper we present a platform with the necessary tools for performance benchmarking of algorithms for liver segmentation and volume estimation used in liver transplantation planning. It includes an abdominal computed tomography (CT) image database (DB), annotation tools, a ground truth DB, and performance measure protocols. The proposed architecture is generic and can be used for other organs and imaging modalities. In the current study, approximately 70 sets of abdominal CT images with normal livers have been collected and a user-friendly annotation tool developed to generate ground truth data for a variety of organs, including 2D contours of the liver, the two kidneys, spleen, aorta and spinal canal. Abdominal organ segmentation algorithms using 2D atlases and 3D probabilistic atlases can be evaluated on the platform. Preliminary benchmark results from liver segmentation algorithms which make use of statistical knowledge extracted from the abdominal CT image DB are also reported. We aim to increase the collection to about 300 CT sets in the near future and plan to make the resulting DBs available to the medical imaging research community for performance benchmarking of liver segmentation algorithms.
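
    Two measures sit at the heart of any such benchmark: a mask-overlap score (the Dice coefficient is the usual choice, though the paper does not name its protocol's exact measures) and the organ volume derived from voxel count and scan spacing. The masks and spacing below are toy values:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks, a standard
    overlap measure for benchmarking organ segmentations."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def volume_ml(mask, spacing_mm):
    """Organ volume from a voxel mask and the scan's voxel spacing (mm)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0    # mm^3 -> millilitres

truth = np.zeros((20, 20, 20), bool); truth[5:15, 5:15, 5:15] = True
auto = np.zeros((20, 20, 20), bool); auto[5:15, 5:15, 6:16] = True  # shifted
d = dice(auto, truth)
v = volume_ml(truth, spacing_mm=(1.0, 1.0, 2.5))
```

    Reporting both matters for transplantation planning: two masks can agree closely in volume yet overlap poorly, and only the pair of numbers exposes that.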

  1. Tooth segmentation system with intelligent editing for cephalometric analysis

    NASA Astrophysics Data System (ADS)

    Chen, Shoupu

    2015-03-01

    Cephalometric analysis is the study of the dental and skeletal relationship in the head, and it is used as an assessment and planning tool for improved orthodontic treatment of a patient. Conventional cephalometric analysis identifies bony and soft-tissue landmarks in 2D cephalometric radiographs, in order to diagnose facial features and abnormalities prior to treatment, or to evaluate the progress of treatment. Recent studies in orthodontics indicate that there are persistent inaccuracies and inconsistencies in the results provided using conventional 2D cephalometric analysis. Obviously, plane geometry is inappropriate for analyzing anatomical volumes and their growth; only a 3D analysis is able to analyze the three-dimensional, anatomical maxillofacial complex, which requires computing inertia systems for individual or groups of digitally segmented teeth from an image volume of a patient's head. For the study of 3D cephalometric analysis, the current paper proposes a system for semi-automatically segmenting teeth from a cone beam computed tomography (CBCT) volume with two distinct features, including an intelligent user-input interface for automatic background seed generation, and a graphics processing unit (GPU) acceleration mechanism for three-dimensional GrowCut volume segmentation. Results show a satisfying average DICE score of 0.92, with the use of the proposed tooth segmentation system, by 15 novice users who segmented a randomly sampled tooth set. The average GrowCut processing time is around one second per tooth, excluding user interaction time.

  2. Synthesis of intensity gradient and texture information for efficient three-dimensional segmentation of medical volumes

    PubMed Central

    Vantaram, Sreenath Rao; Saber, Eli; Dianat, Sohail A.; Hu, Yang

    2015-01-01

    Abstract. We propose a framework that efficiently employs intensity, gradient, and textural features for three-dimensional (3-D) segmentation of medical (MRI/CT) volumes. Our methodology commences by determining the magnitude of intensity variations across the input volume using a 3-D gradient detection scheme. The resultant gradient volume is utilized in a dynamic volume growing/formation process that is initiated in voxel locations with small gradient magnitudes and is concluded at sites with large gradient magnitudes, yielding a map comprising an initial set of partitions (or subvolumes). This partition map is combined with an entropy-based texture descriptor along with intensity and gradient attributes in a multivariate analysis-based volume merging procedure that fuses subvolumes with similar characteristics to yield a final/refined segmentation output. Additionally, a semiautomated version of the aforestated algorithm that allows a user to interactively segment a desired subvolume of interest as opposed to the entire volume is also discussed. Our approach was tested on several MRI and CT datasets and the results show favorable performance in comparison to the state-of-the-art ITK-SNAP technique. PMID:26158098

  3. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging

    NASA Astrophysics Data System (ADS)

    Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart

    2015-02-01

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51  ±  1.92) to (97.27  ±  0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.
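STAPLE estimates a consensus ground truth by weighting each rater's sensitivity and specificity via expectation-maximization; a simple majority vote over the expert masks is the usual baseline it improves on. A minimal sketch of that baseline with toy 1D masks (not the STAPLE algorithm itself):

```python
import numpy as np

def majority_vote(masks):
    """Consensus of multiple expert masks by per-voxel majority vote --
    a simple stand-in for STAPLE, which instead weights raters by their
    estimated sensitivity/specificity via EM."""
    stack = np.stack([m.astype(int) for m in masks])
    return stack.sum(axis=0) * 2 > len(masks)

# three hypothetical expert contours of the same structure
a = np.array([1, 1, 0, 0], dtype=bool)
b = np.array([1, 0, 1, 0], dtype=bool)
c = np.array([1, 1, 1, 0], dtype=bool)
print(majority_vote([a, b, c]).astype(int))   # [1 1 1 0]
```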

  4. Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET

    NASA Astrophysics Data System (ADS)

    Hatt, M.; Lamare, F.; Boussion, N.; Turzo, A.; Collet, C.; Salzenstein, F.; Roux, C.; Jarritt, P.; Carson, K.; Cheze-LeRest, C.; Visvikis, D.

    2007-07-01

    Accurate volume of interest (VOI) estimation in PET is crucial in oncology applications such as response-to-therapy evaluation and radiotherapy treatment planning. The objective of our study was to compare the performance of the proposed algorithm for automatic lesion volume delineation, namely fuzzy hidden Markov chains (FHMC), with that of the threshold-based techniques that are the current state of the art in clinical practice. Like the classical hidden Markov chain (HMC) algorithm, FHMC takes into account noise, voxel intensity, and spatial correlation in order to classify a voxel as background or functional VOI. However, the novelty of the fuzzy model consists of the inclusion of an estimate of imprecision, which should subsequently lead to better modelling of the 'fuzzy' nature of object-of-interest boundaries in emission tomography data. The performance of the algorithms was assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1), and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm3 and 64 mm3). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold-based methodology for functional volume determination or activity concentration recovery at a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore, differences between the classification and volume estimation errors evaluated were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithms was less susceptible to image noise levels than that of the threshold-based techniques. The analysis of both
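The threshold-based techniques used as the clinical baseline typically keep all voxels above a fixed fraction of the lesion's maximum uptake. A minimal sketch (the 42% fraction is a commonly quoted value, used here purely for illustration):

```python
import numpy as np

def threshold_voi(pet, fraction=0.42):
    """Fixed-threshold VOI delineation: keep voxels at or above a fixed
    fraction of the lesion's maximum uptake (a common clinical baseline)."""
    return pet >= fraction * pet.max()

# toy 1D uptake profile through a hot lesion
pet = np.array([0.5, 1.0, 4.0, 10.0, 4.0, 1.0, 0.5])
print(threshold_voi(pet).astype(int))   # [0 0 0 1 0 0 0]
```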

  5. Semiautomatic Regional Segmentation to Measure Orbital Fat Volumes in Thyroid-Associated Ophthalmopathy

    PubMed Central

    Comerci, M.; Elefante, A.; Strianese, D.; Senese, R.; Bonavolontà, P.; Alfano, B.; Bonavolontà, G.; Brunetti, A.

    2013-01-01

    Summary This study was designed to validate a novel semi-automated segmentation method to measure regional intra-orbital fat tissue volume in Graves' ophthalmopathy. Twenty-four orbits from 12 patients with Graves' ophthalmopathy, 24 orbits from 12 controls, ten orbits from five MRI study simulations and two orbits from a digital model were used. Following manual region of interest definition of the orbital volumes performed by two operators with different levels of expertise, an automated procedure calculated intra-orbital fat tissue volumes (global and regional, with automated definition of four quadrants). In patients with Graves' disease, clinical activity score and degree of exophthalmos were measured and correlated with intra-orbital fat volumes. Operator performance was evaluated and statistical analysis of the measurements was performed. Accurate intra-orbital fat volume measurements were obtained with coefficients of variation below 5%. The mean operator difference in total fat volume measurements was 0.56%. Patients had significantly higher intra-orbital fat volumes than controls (p<0.001 using Student's t test). Fat volumes and clinical score were significantly correlated (p<0.001). The semi-automated method described here can provide accurate, reproducible intra-orbital fat measurements with low inter-operator variation and good correlation with clinical data. PMID:24007725

  6. Relaxed image foresting transforms for interactive volume image segmentation

    NASA Astrophysics Data System (ADS)

    Malmberg, Filip; Nyström, Ingela; Mehnert, Andrew; Engstrom, Craig; Bengtsson, Ewert

    2010-03-01

    The Image Foresting Transform (IFT) is a framework for image partitioning, commonly used for interactive segmentation. Given an image where a subset of the image elements (seed-points) have been assigned correct segmentation labels, the IFT completes the labeling by computing minimal cost paths from all image elements to the seed-points. Each image element is then given the same label as the closest seed-point. Here, we propose the relaxed IFT (RIFT). This modified version of the IFT features an additional parameter to control the smoothness of the segmentation boundary. The RIFT yields more intuitive segmentation results in the presence of noise and weak edges, while maintaining a low computational complexity. We show an application of the method to the refinement of manual segmentations of a thoracolumbar muscle in magnetic resonance images. The performed study shows that the refined segmentations are qualitatively similar to the manual segmentations, while intra-user variations are reduced by more than 50%.
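The IFT's seeded labeling can be sketched with Dijkstra's algorithm on the pixel grid; here the path cost is the maximum intensity step along the path, one common IFT cost function. This is a sketch of the plain IFT only; the paper's exact cost function and the RIFT relaxation are not reproduced.

```python
import heapq
import numpy as np

def ift_label(image, seeds):
    """Seeded labeling by minimal-cost paths (a sketch of the Image
    Foresting Transform). Path cost: the maximum absolute intensity
    step along the path, so labels spread easily through flat regions
    and stop at strong edges."""
    rows, cols = image.shape
    cost = np.full(image.shape, np.inf)
    label = np.zeros(image.shape, dtype=int)
    heap = []
    for (r, c), lab in seeds.items():
        cost[r, c] = 0.0
        label[r, c] = lab
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > cost[r, c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = max(d, abs(float(image[nr, nc]) - float(image[r, c])))
                if nd < cost[nr, nc]:
                    cost[nr, nc] = nd
                    label[nr, nc] = label[r, c]
                    heapq.heappush(heap, (nd, nr, nc))
    return label

# two flat regions separated by a strong edge; one seed in each
img = np.array([[0, 0, 10, 10],
                [0, 0, 10, 10]])
labels = ift_label(img, {(0, 0): 1, (0, 3): 2})
print(labels)   # left half labeled 1, right half labeled 2
```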

  7. Real-Time Automatic Segmentation of Optical Coherence Tomography Volume Data of the Macular Region

    PubMed Central

    Tian, Jing; Varga, Boglárka; Somfai, Gábor Márk; Lee, Wen-Hsiang; Smiddy, William E.; Cabrera DeBuc, Delia

    2015-01-01

    Optical coherence tomography (OCT) is a high-speed, high-resolution, non-invasive imaging modality that enables capturing the 3D structure of the retina. Fast, automatic analysis of 3D OCT volume data is crucial given the increasing amount of patient-specific 3D imaging data. In this work, we have developed an automatic algorithm, OCTRIMA 3D (OCT Retinal IMage Analysis 3D), that segments OCT volume data in the macular region quickly and accurately. The proposed method is implemented using shortest-path-based graph search, which detects the retinal boundaries by searching for the shortest path between two end nodes using Dijkstra's algorithm. Additional techniques, such as inter-frame flattening, inter-frame search region refinement, masking, and biasing, were introduced to exploit the spatial dependency between adjacent frames and reduce the processing time. Our segmentation algorithm was evaluated by comparison with manual labelings and three state-of-the-art graph-based segmentation methods. The processing time for a whole OCT volume of 496×644×51 voxels (captured by Spectralis SD-OCT) was 26.15 seconds, at least a 2- to 8-fold speed-up compared to the similar reference algorithms used in the comparisons. The average unsigned error was about 1 pixel (∼4 microns), which was also lower than that of the reference algorithms. We believe that OCTRIMA 3D is a leap forward towards achieving reliable, real-time analysis of 3D OCT retinal data. PMID:26258430
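Retinal boundary detection as a minimum-cost path can be illustrated with a simple per-column dynamic program, a simplification of the Dijkstra-based graph search the paper uses (toy cost image, not OCT data):

```python
import numpy as np

def trace_boundary(cost):
    """Minimum-cost left-to-right path via dynamic programming
    (a simplification of shortest-path graph search; moves are
    restricted to the three right-hand neighbours of each pixel)."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()            # accumulated cost
    back = np.zeros((rows, cols), dtype=int)   # backtracking pointers
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - 1), min(rows, r + 2)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 1, 0, -1):
        path.append(int(back[path[-1], c]))
    return path[::-1]

# dark band at row 2 of a bright image: the traced boundary follows row 2
img = np.full((5, 6), 9.0)
img[2, :] = 0.0
print(trace_boundary(img))   # [2, 2, 2, 2, 2, 2]
```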

  8. Volume quantization of the mouse cerebellum by semiautomatic 3D segmentation of magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Sijbers, Jan; Van der Linden, Anne-Marie; Scheunders, Paul; Van Audekerke, Johan; Van Dyck, Dirk; Raman, Erik R.

    1996-04-01

    The aim of this work is the development of a non-invasive technique for efficient and accurate volume quantization of the cerebellum of mice. This enables an in-vivo study of the development of the cerebellum in order to identify possible alterations in cerebellum volume of transgenic mice. We concentrate on a semi-automatic segmentation procedure to extract the cerebellum from 3D magnetic resonance data. The proposed technique uses a 3D variant of Vincent and Soille's immersion-based watershed algorithm, applied to the gradient magnitude of the MR data. The algorithm results in a partitioning of the data into volume primitives. The known drawback of the watershed algorithm, over-segmentation, is strongly reduced by prior application of an adaptive anisotropic diffusion filter to the gradient magnitude data. In addition, over-segmentation is further reduced afterwards, where necessary, by merging volume primitives based on the minimum description length principle. The outcome of the preceding image processing step is presented to the user for manual segmentation. The first slice containing the object of interest is quickly segmented by the user through selection of basic image regions. Subsequent slices are then segmented automatically, and the segmentation results are manually corrected where necessary. The technique was tested on phantom objects, where segmentation errors of less than 2% were observed. Three-dimensional reconstructions of the segmented data are shown for the mouse cerebellum and the mouse brain in toto.

  9. Hitchhiker's Guide to Voxel Segmentation for Partial Volume Correction of In Vivo Magnetic Resonance Spectroscopy.

    PubMed

    Quadrelli, Scott; Mountford, Carolyn; Ramadan, Saadallah

    2016-01-01

    Partial volume effects have the potential to cause inaccuracies when quantifying metabolites using proton magnetic resonance spectroscopy (MRS). In order to correct for cerebrospinal fluid content, a spectroscopic voxel needs to be segmented according to different tissue contents. This article aims to detail how automated partial volume segmentation can be undertaken and provides a software framework for researchers to develop their own tools. While many studies have detailed the impact of partial volume correction on proton magnetic resonance spectroscopy quantification, there is a paucity of literature explaining how voxel segmentation can be achieved using freely available neuroimaging packages. PMID:27147822
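Once the voxel's tissue fractions are known, the cerebrospinal fluid correction itself is a one-line rescaling: since CSF contributes essentially no metabolite signal, the observed concentration is divided by the non-CSF fraction of the voxel. A minimal sketch with illustrative numbers (not taken from the article):

```python
def csf_corrected(concentration, f_gm, f_wm, f_csf):
    """Scale a metabolite concentration by the non-CSF fraction of the
    spectroscopic voxel, assuming metabolite signal arises only from
    brain tissue (gray + white matter)."""
    assert abs(f_gm + f_wm + f_csf - 1.0) < 1e-6, "fractions must sum to 1"
    return concentration / (1.0 - f_csf)

# a voxel that is 20% CSF: the observed value underestimates by that factor
print(csf_corrected(8.0, 0.5, 0.3, 0.2))   # 8.0 / 0.8 = 10.0
```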

  10. Theoretical analysis of multispectral image segmentation criteria.

    PubMed

    Kerfoot, I B; Bresler, Y

    1999-01-01

    Markov random field (MRF) image segmentation algorithms have been extensively studied, and have gained wide acceptance. However, almost all of the work on them has been experimental. This provides a good understanding of the performance of existing algorithms, but not a unified explanation of the significance of each component. To address this issue, we present a theoretical analysis of several MRF image segmentation criteria. Standard methods of signal detection and estimation are used in the theoretical analysis, which quantitatively predicts the performance at realistic noise levels. The analysis is decoupled into the problems of false alarm rate, parameter selection (Neyman-Pearson and receiver operating characteristics), detection threshold, expected a priori boundary roughness, and supervision. Only the performance inherent to a criterion, with perfect global optimization, is considered. The analysis indicates that boundary and region penalties are very useful, while distinct-mean penalties are of questionable merit. Region penalties are far more important for multispectral segmentation than for greyscale. This observation also holds for Gauss-Markov random fields, and for many separable within-class PDFs. To validate the analysis, we present optimization algorithms for several criteria. Theoretical and experimental results agree fairly well. PMID:18267494

  11. Volume Averaging of Spectral-Domain Optical Coherence Tomography Impacts Retinal Segmentation in Children

    PubMed Central

    Trimboli-Heidler, Carmelina; Vogt, Kelly; Avery, Robert A.

    2016-01-01

    Purpose To determine the influence of volume averaging on retinal layer thickness measures acquired with spectral-domain optical coherence tomography (SD-OCT) in children. Methods Macular SD-OCT images were acquired using three different volume settings (i.e., 1, 3, and 9 volumes) in children enrolled in a prospective OCT study. Total retinal thickness and five inner layers were measured around an Early Treatment Diabetic Retinopathy Study (ETDRS) grid using beta-version automated segmentation software for the Spectralis. The magnitude of manual segmentation required to correct the automated segmentation was classified as either minor (<12 lines adjusted), moderate (>12 and <25 lines adjusted), severe (>26 and <48 lines adjusted), or fail (>48 lines adjusted or could not adjust due to poor image quality). The frequency of each edit classification was assessed for each volume setting. Thickness, paired difference, and 95% limits of agreement of each anatomic quadrant were compared across volume density. Results Seventy-five subjects (median age 11.8 years, range 4.3–18.5 years) contributed 75 eyes. Less than 5% of the 9- and 3-volume scans required more than minor manual segmentation corrections, compared with 71% of 1-volume scans. The inner (3 mm) region demonstrated similar measures across all layers, regardless of volume number. The 1-volume scans demonstrated greater variability of the retinal nerve fiber layer (RNFL) thickness, compared with the other volumes in the outer (6 mm) region. Conclusions In children, volume averaging of SD-OCT acquisitions reduces retinal layer segmentation errors. Translational Relevance This study highlights the importance of volume averaging when acquiring macular volumes intended for multilayer segmentation. PMID:27570711

  12. Midbrain volume segmentation using active shape models and LBPs

    NASA Astrophysics Data System (ADS)

    Olveres, Jimena; Nava, Rodrigo; Escalante-Ramírez, Boris; Cristóbal, Gabriel; García-Moreno, Carla María.

    2013-09-01

    In recent years, the use of Magnetic Resonance Imaging (MRI) to detect different brain structures such as the midbrain, white matter, gray matter, corpus callosum, and cerebellum has increased. This fact, together with evidence that the midbrain is associated with Parkinson's disease, has led researchers to consider midbrain segmentation an important problem. Active Shape Models (ASM) are widely used in the literature for segmentation of organs where shape is an important discriminant feature. Nevertheless, this approach is based on the assumption that objects of interest are usually located on strong edges. Such a limitation may lead to a final shape far from the actual shape model. This paper proposes a novel method for segmenting the midbrain based on the combined use of ASM and Local Binary Patterns (LBPs). Furthermore, we analyzed several LBP variants and evaluated their performance. The joint model considers both global and local statistics to improve final adjustments. The results show that our proposal performs substantially better than the ASM algorithm alone and provides better segmentation measurements.
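The LBP texture descriptor combined with ASM above assigns each pixel a code by thresholding its neighbours at the centre value. A minimal sketch of the basic 8-neighbour code (one common bit ordering; the paper's exact LBP variants are not reproduced):

```python
import numpy as np

def lbp_3x3(patch):
    """Basic 8-neighbour Local Binary Pattern code of a 3x3 patch:
    threshold the neighbours at the centre value and read them as bits,
    clockwise from the top-left corner (orderings vary by convention)."""
    center = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << i) for i, v in enumerate(neighbours) if v >= center)

patch = np.array([[9, 5, 1],
                  [7, 5, 3],
                  [5, 5, 5]])
print(lbp_3x3(patch))   # bits 0,1,4,5,6,7 set -> 243
```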

  13. Automated lung tumor segmentation for whole body PET volume based on novel downhill region growing

    NASA Astrophysics Data System (ADS)

    Ballangan, Cherry; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Feng, Dagan

    2010-03-01

    We propose an automated lung tumor segmentation method for whole body PET images based on a novel downhill region growing (DRG) technique, which regards homogeneous tumor hotspots as 3D monotonically decreasing functions. The method has three major steps: thoracic slice extraction with K-means clustering of the slice features; hotspot segmentation with DRG; and decision tree analysis based hotspot classification. To overcome the common problem of leakage into adjacent hotspots in automated lung tumor segmentation, DRG employs the tumors' SUV monotonicity features. DRG also uses gradient magnitude of tumors' SUV to improve tumor boundary definition. We used 14 PET volumes from patients with primary NSCLC for validation. The thoracic region extraction step achieved good and consistent results for all patients despite marked differences in size and shape of the lungs and the presence of large tumors. The DRG technique was able to avoid the problem of leakage into adjacent hotspots and produced a volumetric overlap fraction of 0.61 +/- 0.13 which outperformed four other methods where the overlap fraction varied from 0.40 +/- 0.24 to 0.59 +/- 0.14. Of the 18 tumors in 14 NSCLC studies, 15 lesions were classified correctly, 2 were false negative and 15 were false positive.
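The "downhill" idea, treating a hotspot as a monotonically decreasing function away from its peak, can be sketched as region growing that only accepts neighbours whose value does not exceed the voxel they were reached from; growth therefore stops where uptake starts rising toward an adjacent hotspot. A toy 1D sketch (not the published DRG algorithm):

```python
import numpy as np
from collections import deque

def downhill_grow(suv, seed):
    """Grow from a hotspot peak, accepting a neighbour only if its value
    does not exceed the voxel it was reached from (monotone descent) --
    a sketch of the 'downhill' idea, not the published DRG method."""
    mask = np.zeros(suv.shape, dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (r + dr, c + dc)
            if (0 <= n[0] < suv.shape[0] and 0 <= n[1] < suv.shape[1]
                    and not mask[n] and suv[n] <= suv[r, c]):
                mask[n] = True
                q.append(n)
    return mask

# two adjacent hotspots: growth from the left peak stops where
# values start rising again, avoiding leakage into the right hotspot
suv = np.array([[1, 2, 5, 2, 1, 2, 6, 2]], dtype=float)
print(downhill_grow(suv, (0, 2)).astype(int))   # [[1 1 1 1 1 0 0 0]]
```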

  14. 3D robust Chan-Vese model for industrial computed tomography volume data segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Linghui; Zeng, Li; Luan, Xiao

    2013-11-01

    Industrial computed tomography (CT) has been widely applied in many areas of non-destructive testing (NDT) and non-destructive evaluation (NDE). In practice, CT volume data to be dealt with may be corrupted by noise. This paper addresses the segmentation of noisy industrial CT volume data. Motivated by the research on the Chan-Vese (CV) model, we present a region-based active contour model that draws upon intensity information in local regions with a controllable scale. In the presence of noise, a local energy is firstly defined according to the intensity difference within a local neighborhood. Then a global energy is defined to integrate local energy with respect to all image points. In a level set formulation, this energy is represented by a variational level set function, where a surface evolution equation is derived for energy minimization. Comparative analysis with the CV model indicates the comparable performance of the 3D robust Chan-Vese (RCV) model. The quantitative evaluation also shows the segmentation accuracy of 3D RCV. In addition, the efficiency of our approach is validated under several types of noise, such as Poisson noise, Gaussian noise, salt-and-pepper noise and speckle noise.
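For reference, the classic two-phase Chan-Vese energy that local-region variants such as RCV build on, with $I$ the image, $\phi$ the level set function, $H$ the Heaviside function, $\delta$ its derivative, and $c_1, c_2$ the mean intensities inside and outside the contour:

```latex
E(c_1, c_2, \phi) =
  \mu \int_{\Omega} \delta(\phi)\,|\nabla \phi| \, d\mathbf{x}
  + \lambda_1 \int_{\Omega} |I(\mathbf{x}) - c_1|^2 \, H(\phi) \, d\mathbf{x}
  + \lambda_2 \int_{\Omega} |I(\mathbf{x}) - c_2|^2 \, \bigl(1 - H(\phi)\bigr) \, d\mathbf{x}
```

The RCV model described above replaces the global fitting terms with local-neighbourhood versions integrated over all image points, which is what gives it robustness to noise.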

  15. Multi-region unstructured volume segmentation using tetrahedron filling

    SciTech Connect

    Williams, Sean Jamerson; Dillard, Scott E; Thoma, Dan J; Hlawitschka, Mario; Hamann, Bernd

    2010-01-01

    Segmentation is one of the most common operations in image processing, and while there are several solutions already present in the literature, they each have their own benefits and drawbacks that make them well-suited for some types of data and not for others. We focus on the problem of breaking an image into multiple regions in a single segmentation pass, while supporting both voxel and scattered point data. To solve this problem, we begin with a set of potential boundary points and use a Delaunay triangulation to complete the boundaries. We use heuristic- and interaction-driven Voronoi clustering to find reasonable groupings of tetrahedra. Apart from the computation of the Delaunay triangulation, our algorithm has linear time complexity with respect to the number of tetrahedra.
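The first step, completing boundaries from a set of candidate points, rests on the Delaunay triangulation, which in 3D fills the convex hull of the points with tetrahedra. A minimal sketch with SciPy, using cube corners as stand-in boundary points (not the paper's data):

```python
import numpy as np
from scipy.spatial import Delaunay

# boundary-candidate points: the 8 corners of a unit cube
pts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
               dtype=float)
tri = Delaunay(pts)

# each simplex of a 3D Delaunay triangulation is a tetrahedron (4 vertices)
print(tri.simplices.shape[1])   # 4

def tet_volume(v):
    """Volume of a tetrahedron from its 4 vertices."""
    a, b, c, d = v
    return abs(np.linalg.det(np.stack([b - a, c - a, d - a]))) / 6.0

# the tetrahedra tile the cube, so their volumes sum to 1
total = sum(tet_volume(pts[s]) for s in tri.simplices)
print(round(total, 6))   # 1.0
```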

  16. Image Segmentation Analysis for NASA Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2010-01-01

    NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all of the computerized image analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high-spatial-resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate moving from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region-growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects together into region classes. No other practical, operational image segmentation approach has this tight integration of region-growing object finding with region classification. This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image data sections are small enough to successfully mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region-growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation will provide an overview of the RHSEG algorithm and describe how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites in remotely sensed data.

  17. Scintigraphic method for the assessment of intraluminal volume and motility of isolated intestinal segments [dogs]

    SciTech Connect

    Mitchell, A.; Macey, D.J.; Collin, J.

    1983-07-01

    The isolated in vivo intestinal segment is a popular experimental preparation for the investigation of intestinal function, but its value has been limited because no method has been available for measuring changes in intraluminal volume under experimental conditions. We report a scintigraphic technique for measuring intraluminal volume and assessing intestinal motility. Between 30 and 180 ml, the volume of a 75-cm segment of canine jejunum, perfused with Tc-99m-labeled tin colloid, was found to be proportional to the recorded count rate. This method has been used to monitor the effects of the hormone vasopressin on intestinal function.
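The reported proportionality between count rate and intraluminal volume means volume estimation reduces to a one-parameter linear calibration. A minimal sketch with hypothetical numbers (the slope of 52 counts/ml is invented purely for illustration):

```python
import numpy as np

# hypothetical calibration data: intraluminal volume (ml) vs. count rate
volumes = np.array([30.0, 60.0, 90.0, 120.0, 150.0, 180.0])
counts = 52.0 * volumes            # proportional response; slope unknown in practice

# fit counts = k * volume through the origin (least squares), then invert
k = float(counts @ volumes / (volumes @ volumes))
estimated = counts / k             # recovered volumes from count rates
print(k)              # 52.0
print(estimated[0])   # 30.0
```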

  18. Automatic segmentation of the fetal cerebellum on ultrasound volumes, using a 3D statistical shape model.

    PubMed

    Gutiérrez-Becker, Benjamín; Arámbula Cosío, Fernando; Guzmán Huerta, Mario E; Benavides-Serralde, Jesús Andrés; Camargo-Marín, Lisbeth; Medina Bañuelos, Verónica

    2013-09-01

    Previous work has shown that the segmentation of anatomical structures on 3D ultrasound data sets provides an important tool for the assessment of the fetal health. In this work, we present an algorithm based on a 3D statistical shape model to segment the fetal cerebellum on 3D ultrasound volumes. This model is adjusted using an ad hoc objective function which is in turn optimized using the Nelder-Mead simplex algorithm. Our algorithm was tested on ultrasound volumes of the fetal brain taken from 20 pregnant women, between 18 and 24 gestational weeks. An intraclass correlation coefficient of 0.8528 and a mean Dice coefficient of 0.8 between cerebellar volumes measured using manual techniques and the volumes calculated using our algorithm were obtained. As far as we know, this is the first effort to automatically segment fetal intracranial structures on 3D ultrasound data. PMID:23686392

  19. Automated segmentation of mesothelioma volume on CT scan

    NASA Astrophysics Data System (ADS)

    Zhao, Binsheng; Schwartz, Lawrence; Flores, Raja; Liu, Fan; Kijewski, Peter; Krug, Lee; Rusch, Valerie

    2005-04-01

    In mesothelioma, response is usually assessed by computed tomography (CT). In current clinical practice the Response Evaluation Criteria in Solid Tumors (RECIST) or WHO, i.e., the uni-dimensional or the bi-dimensional measurements, is applied to the assessment of therapy response. However, the shape of the mesothelioma volume is very irregular and its longest dimension is almost never in the axial plane. Furthermore, the sections and the sites where radiologists measure the tumor are rather subjective, resulting in poor reproducibility of tumor size measurements. We are developing an objective three-dimensional (3D) computer algorithm to automatically identify and quantify tumor volumes that are associated with malignant pleural mesothelioma to assess therapy response. The algorithm first extracts the lung pleural surface from the volumetric CT images by interpolating the chest ribs over a number of adjacent slices and then forming a volume that includes the thorax. This volume allows a separation of mesothelioma from the chest wall. Subsequently, the structures inside the extracted pleural lung surface, including the mediastinal area, lung parenchyma, and pleural mesothelioma, can be identified using a multiple thresholding technique and morphological operations. Preliminary results have shown the potential of utilizing this algorithm to automatically detect and quantify tumor volumes on CT scans and thus to assess therapy response for malignant pleural mesothelioma.

  20. LANDSAT-D program. Volume 2: Ground segment

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Raw digital data, as received from the LANDSAT spacecraft, cannot generate images that meet specifications. Radiometric corrections must be made to compensate for aging and for differences in sensitivity among the instrument sensors. Geometric corrections must be made to compensate for the off-nadir look angle and for spacecraft drift from its prescribed path. Corrections must also be made for look-angle jitter caused by vibrations induced by spacecraft equipment. The major components of the LANDSAT ground segment and their functions are discussed.

  1. Knowledge-based segmentation of pediatric kidneys in CT for measuring parenchymal volume

    NASA Astrophysics Data System (ADS)

    Brown, Matthew S.; Feng, Waldo C.; Hall, Theodore R.; McNitt-Gray, Michael F.; Churchill, Bernard M.

    2000-06-01

    The purpose of this work was to develop an automated method for segmenting pediatric kidneys in contrast-enhanced helical CT images and measuring the volume of the renal parenchyma. An automated system was developed to segment the abdomen, spine, aorta, and kidneys. The expected size, shape, topology, and X-ray attenuation of anatomical structures are stored as features in an anatomical model. These features guide 3-D threshold-based segmentation and subsequent matching of extracted image regions to anatomical structures in the model. Following segmentation, the kidney volumes are calculated by summing the included voxels. To validate the system, the kidney volumes of 4 swine were calculated using our approach and compared to the 'true' volumes measured after harvesting the kidneys. Automated volume calculations were also performed retrospectively in a cohort of 10 children. The mean difference between the calculated and measured values in the swine kidneys was 1.38 (S.D. ± 0.44) cc. For the pediatric cases, calculated volumes ranged from 41.7 to 252.1 cc/kidney, and the mean ratio of right to left kidney volume was 0.96 (S.D. ± 0.07). These results demonstrate the accuracy of the volumetric technique, which may in the future provide an objective assessment of renal damage.
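The final volume computation, summing included voxels, is simply the voxel count times the physical voxel volume. A minimal sketch (the mask size and voxel spacing below are hypothetical):

```python
import numpy as np

def region_volume_cc(mask, spacing_mm):
    """Volume of a segmented region: voxel count x voxel volume, in cc."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0   # mm^3 -> cc

# hypothetical kidney mask: 200,000 voxels at 0.7 x 0.7 x 1.25 mm spacing
mask = np.zeros((100, 100, 40), dtype=bool)
mask.flat[:200_000] = True
print(region_volume_cc(mask, (0.7, 0.7, 1.25)))   # ~122.5 cc
```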

  2. Hybrid segmentation framework for 3D medical image analysis

    NASA Astrophysics Data System (ADS)

    Chen, Ting; Metaxas, Dimitri N.

    2003-05-01

    Medical image segmentation is the process that defines the region of interest in an image volume. Classical segmentation methods, such as region-based and boundary-based methods, cannot make full use of the information provided by the image. In this paper we propose a general hybrid framework for 3D medical image segmentation. In our approach we combine the Gibbs prior model and the deformable model. First, Gibbs prior models are applied to each slice in a 3D medical image volume and the segmentation results are combined into a 3D binary mask of the object. We then create a deformable mesh based on this 3D binary mask. The deformable model is driven toward the edge features in the volume with the help of image-derived external forces. The deformable model segmentation result can in turn be used to update the parameters of the Gibbs prior models. These methods then work recursively to reach a global segmentation solution. The hybrid segmentation framework has been applied to images of the lung, heart, colon, jaw, tumors, and brain. The experimental data include MRI (T1, T2, PD), CT, X-ray, and ultrasound images. High-quality results are achieved at relatively low computational cost. We also performed validation using expert manual segmentation as the ground truth. The results show that the hybrid segmentation may have further clinical use.

  3. High volume production trial of mirror segments for the Thirty Meter Telescope

    NASA Astrophysics Data System (ADS)

    Oota, Tetsuji; Negishi, Mahito; Shinonaga, Hirohiko; Gomi, Akihiko; Tanaka, Yutaka; Akutsu, Kotaro; Otsuka, Itaru; Mochizuki, Shun; Iye, Masanori; Yamashita, Takuya

    2014-07-01

    The Thirty Meter Telescope is a next-generation optical/infrared telescope to be constructed on Mauna Kea, Hawaii, toward the end of this decade as an international project. Its 30 m primary mirror consists of 492 off-axis aspheric segmented mirrors. High volume production of hundreds of segments started in 2013, based on the contract between the National Astronomical Observatory of Japan and Canon Inc. This paper describes the achievements of the high volume production trials. The stressed mirror figuring technique established by Keck Telescope engineers has been adapted and adopted. To measure the segment surface figure, a novel stitching algorithm is evaluated by experiment, and the integration procedure is checked with a prototype segment.

  4. Pulmonary airways tree segmentation from CT examinations using adaptive volume of interest

    NASA Astrophysics Data System (ADS)

    Park, Sang Cheol; Kim, Won Pil; Zheng, Bin; Leader, Joseph K.; Pu, Jiantao; Tan, Jun; Gur, David

    2009-02-01

    Airways tree segmentation is an important step in quantitatively assessing the severity of, and changes in, several lung diseases such as chronic obstructive pulmonary disease (COPD), asthma, and cystic fibrosis. It can also be used to guide bronchoscopy. The purpose of this study is to develop an automated scheme for segmenting the airways tree structure depicted on chest CT examinations. After lung volume segmentation, the scheme defines the first cylinder-like volume of interest (VOI) using a series of images depicting the trachea. The scheme then iteratively defines and adds subsequent VOIs using a region growing algorithm combined with adaptively determined thresholds in order to trace possible sections of airways located inside the combined VOI in question. The airway tree segmentation process is automatically terminated after the scheme assesses all defined VOIs in the iteratively assembled VOI list. In this preliminary study, ten CT examinations with 1.25 mm section thickness and two different CT image reconstruction kernels ("bone" and "standard") were selected and used to test the proposed airways tree segmentation scheme. The experimental results showed that (1) adopting this approach effectively prevented the scheme from infiltrating into the parenchyma; (2) the proposed method segmented the airways trees reasonably accurately, with a lower false-positive identification rate than other previously reported schemes based on 2-D image segmentation and data analyses; and (3) the proposed adaptive, iterative threshold selection method for the region growing step in each identified VOI enabled the scheme to reliably segment the airways trees to the 4th generation in this limited dataset, with successful segmentation up to the 5th generation in a fraction of the airways tree branches.
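
    Region growing with an adaptively determined threshold can be sketched as follows. This is a toy 6-connected grower with an invented "leak" criterion, not the authors' VOI-based scheme; the HU values and leak fraction are illustrative assumptions:

    ```python
    import numpy as np
    from collections import deque

    def region_grow(volume, seed, threshold):
        """Grow a 6-connected region from `seed`, accepting voxels whose
        intensity is below `threshold` (air-filled airways are dark on CT)."""
        mask = np.zeros(volume.shape, dtype=bool)
        queue = deque([seed])
        mask[seed] = True
        offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in offsets:
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                        and not mask[n] and volume[n] < threshold:
                    mask[n] = True
                    queue.append(n)
        return mask

    def adaptive_threshold_grow(volume, seed, start=-950, step=10, max_fraction=0.05):
        """Raise the threshold until the next step would cause an explosive
        'leak' into lung parenchyma, then keep the last safe segmentation."""
        best = region_grow(volume, seed, start)
        for t in range(start + step, -700, step):
            mask = region_grow(volume, seed, t)
            if mask.sum() > max_fraction * volume.size:
                break  # leakage detected: segmented volume exploded
            best = mask
        return best

    # Synthetic CT: a dark (-1000 HU) 3x3 tube through a 0 HU background.
    vol = np.zeros((20, 20, 20))
    vol[:, 9:12, 9:12] = -1000.0
    print(adaptive_threshold_grow(vol, (0, 10, 10)).sum())  # 180
    ```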

  5. Multi-Segment Hemodynamic and Volume Assessment With Impedance Plethysmography: Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Ku, Yu-Tsuan E.; Montgomery, Leslie D.; Webbon, Bruce W. (Technical Monitor)

    1995-01-01

    Definition of multi-segmental circulatory and volume changes in the human body provides an understanding of the physiologic responses to various aerospace conditions. We have developed instrumentation and testing procedures at NASA Ames Research Center that may be useful in biomedical research and clinical diagnosis. Specialized two, four, and six channel impedance systems will be described that have been used to measure calf, thigh, thoracic, arm, and cerebral hemodynamic and volume changes during various experimental investigations.

  6. Swarm Intelligence Integrated Graph-Cut for Liver Segmentation from 3D-CT Volumes

    PubMed Central

    Eapen, Maya; Korah, Reeba; Geetha, G.

    2015-01-01

    The segmentation of organs in CT volumes is a prerequisite for diagnosis and treatment planning. In this paper, we focus on liver segmentation from contrast-enhanced abdominal CT volumes, a challenging task due to intensity overlap, blurred edges, large variability in liver shape, and a complex background with cluttered features. The algorithm integrates multiple discriminative cues (prior domain information, an intensity model, and regional characteristics of the liver) in a graph-cut image segmentation framework. The paper proposes a swarm intelligence-inspired edge-adaptive weight function for regulating the energy minimization of the traditional graph-cut model. The model is validated both qualitatively (by clinicians and radiologists) and quantitatively on publicly available computed tomography (CT) datasets (the MICCAI 2007 liver segmentation challenge and 3D-IRCAD). Quantitative evaluation of segmentation results is performed using liver volume calculations, and mean scores of 80.8% and 82.5% are obtained on the MICCAI and IRCAD datasets, respectively. The experimental results illustrate the efficiency and effectiveness of the proposed method. PMID:26689833

  7. Lung Segmentation in 4D CT Volumes Based on Robust Active Shape Model Matching

    PubMed Central

    Gill, Gurman; Beichel, Reinhard R.

    2015-01-01

    Dynamic and longitudinal lung CT imaging produce 4D lung image data sets, enabling applications like radiation treatment planning or assessment of response to treatment of lung diseases. In this paper, we present a 4D lung segmentation method that mutually utilizes all individual CT volumes to derive segmentations for each CT data set. Our approach is based on a 3D robust active shape model and extends it to fully utilize 4D lung image data sets. This yields an initial segmentation for the 4D volume, which is then refined by using a 4D optimal surface finding algorithm. The approach was evaluated on a diverse set of 152 CT scans of normal and diseased lungs, consisting of total lung capacity and functional residual capacity scan pairs. In addition, a comparison to a 3D segmentation method and a registration based 4D lung segmentation approach was performed. The proposed 4D method obtained an average Dice coefficient of 0.9773 ± 0.0254, which was statistically significantly better (p value ≪0.001) than the 3D method (0.9659 ± 0.0517). Compared to the registration based 4D method, our method obtained better or similar performance, but was 58.6% faster. Also, the method can be easily expanded to process 4D CT data sets consisting of several volumes. PMID:26557844
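
    The Dice coefficient used for evaluation above is straightforward to compute from two binary masks; a minimal sketch:

    ```python
    import numpy as np

    def dice(a: np.ndarray, b: np.ndarray) -> float:
        """Dice similarity coefficient between two binary masks:
        2|A ∩ B| / (|A| + |B|). Two empty masks count as a perfect match."""
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    a = np.zeros((10, 10), bool); a[2:8, 2:8] = True    # 36 voxels
    b = np.zeros((10, 10), bool); b[4:10, 4:10] = True  # 36 voxels, 16 overlap
    print(dice(a, b))  # 2*16/72 ≈ 0.444
    ```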

  8. Precise segmentation of multiple organs in CT volumes using learning-based approach and information theory.

    PubMed

    Lu, Chao; Zheng, Yefeng; Birkbeck, Neil; Zhang, Jingdan; Kohlberger, Timo; Tietjen, Christian; Boettger, Thomas; Duncan, James S; Zhou, S Kevin

    2012-01-01

    In this paper, we present a novel method that incorporates information theory into a learning-based approach for automatic and accurate pelvic organ segmentation (including the prostate, bladder, and rectum). We target 3D CT volumes generated with different scanning protocols (e.g., contrast and non-contrast, with and without implants in the prostate, various resolutions and positions) and drawn from largely diverse sources (e.g., disease in different organs). Three key ingredients are combined to solve this challenging segmentation problem. First, marginal space learning (MSL) is applied to efficiently and effectively localize the multiple organs in the largely diverse CT volumes. Second, learning techniques with steerable features are applied for robust boundary detection, which enables the handling of highly heterogeneous texture patterns. Third, a novel information-theoretic scheme is incorporated into the boundary inference process: the Jensen-Shannon divergence further drives the mesh to the best fit of the image, thus improving segmentation performance. The proposed approach is tested on a challenging dataset containing 188 volumes from diverse sources. Our approach not only produces excellent segmentation accuracy, but also runs about eighty times faster than previous state-of-the-art solutions. The proposed method can be applied to CT images to provide visual guidance to physicians during computer-aided diagnosis, treatment planning, and image-guided radiotherapy to treat cancers in the pelvic region. PMID:23286081

  9. Analysis of Random Segment Errors on Coronagraph Performance

    NASA Technical Reports Server (NTRS)

    Shaklan, Stuart B.; N'Diaye, Mamadou; Stahl, Mark T.; Stahl, H. Philip

    2016-01-01

    At the 2015 SPIE O&P conference we presented "Preliminary Analysis of Random Segment Errors on Coronagraph Performance." Key findings: contrast leakage for a 4th-order Sinc²(x) coronagraph is 10X more sensitive to random segment piston than to random tip/tilt, and apertures with fewer segments (i.e., 1 ring) or very many segments (>16 rings) have less contrast leakage as a function of piston or tip/tilt than an aperture with 2 to 4 rings of segments. Revised finding: piston is only 2.5X more sensitive than tip/tilt.

  10. Automated segmentation and measurement of global white matter lesion volume in patients with multiple sclerosis.

    PubMed

    Alfano, B; Brunetti, A; Larobina, M; Quarantelli, M; Tedeschi, E; Ciarmiello, A; Covelli, E M; Salvatore, M

    2000-12-01

    A fully automated magnetic resonance (MR) segmentation method for identification and volume measurement of demyelinated white matter has been developed. Spin-echo MR brain scans were performed in 38 patients with multiple sclerosis (MS) and in 46 healthy subjects. Segmentation of normal tissues and white matter lesions (WML) was obtained, based on their relaxation rates and proton density maps. For WML identification, additional criteria included three-dimensional (3D) lesion shape and surrounding tissue composition. Segmented images were generated, and normal brain tissues and WML volumes were obtained. Sensitivity, specificity, and reproducibility of the method were calculated, using the WML identified by two neuroradiologists as the gold standard. The average volume of "abnormal" white matter in normal subjects (false positive) was 0.11 ml (range 0-0.59 ml). In MS patients the average WML volume was 31.0 ml (range 1.1-132.5 ml), with a sensitivity of 87.3%. In the reproducibility study, the mean SD of WML volumes was 2.9 ml. The procedure appears suitable for monitoring disease changes over time. J. Magn. Reson. Imaging 2000;12:799-807. PMID:11105017
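
    The sensitivity and specificity figures above can be computed voxel-wise against the neuroradiologists' gold standard; a generic sketch, not the authors' code:

    ```python
    import numpy as np

    def sensitivity_specificity(pred, gold):
        """Voxel-wise sensitivity and specificity of a lesion mask against
        a gold-standard mask (e.g., an expert readers' consensus)."""
        pred, gold = pred.astype(bool), gold.astype(bool)
        tp = np.logical_and(pred, gold).sum()
        tn = np.logical_and(~pred, ~gold).sum()
        return tp / gold.sum(), tn / (~gold).sum()

    # Toy example: 100 true lesion voxels, 87 of them detected.
    gold = np.zeros(1000, bool); gold[:100] = True
    pred = np.zeros(1000, bool); pred[:87] = True
    print(sensitivity_specificity(pred, gold))  # (0.87, 1.0)
    ```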

  11. Semi-automatic tool for segmentation and volumetric analysis of medical images.

    PubMed

    Heinonen, T; Dastidar, P; Kauppinen, P; Malmivuo, J; Eskola, H

    1998-05-01

    We describe segmentation software developed for medical image processing that runs on Windows. The software applies basic image processing techniques through a graphical user interface. For particular applications, such as brain lesion segmentation, the software enables the combination of different segmentation techniques to improve its efficiency. The program has been applied to magnetic resonance imaging, computed tomography, and optical images of cryosections. The software can be utilised in numerous applications, including pre-processing for three-dimensional presentations, volumetric analysis, and construction of volume conductor models. PMID:9747567

  12. Segmentation of organs at risk in CT volumes of head, thorax, abdomen, and pelvis

    NASA Astrophysics Data System (ADS)

    Han, Miaofei; Ma, Jinfeng; Li, Yan; Li, Meiling; Song, Yanli; Li, Qiang

    2015-03-01

    Accurate segmentation of organs at risk (OARs) is a key step in the treatment planning system (TPS) of image-guided radiation therapy. We are developing three classes of methods to segment 17 organs at risk throughout the whole body: brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, liver, kidneys, spleen, prostate, rectum, femoral heads, and skin. The three classes of segmentation methods are (1) threshold-based methods for organs with large contrast against adjacent structures, such as the lungs, trachea, and skin; (2) context-driven generalized Hough transform-based methods combined with a graph cut algorithm for robust localization and segmentation of the liver, kidneys, and spleen; and (3) atlas- and registration-based methods for segmentation of the heart and all organs in CT volumes of the head and pelvis. The segmentation accuracy for the seventeen organs was subjectively evaluated by two medical experts on a three-level scale: 0, poor (unusable in clinical practice); 1, acceptable (minor revision needed); and 2, good (nearly no revision needed). A database was collected from Ruijin Hospital, Huashan Hospital, and Xuhui Central Hospital in Shanghai, China, including 127 head scans, 203 thoracic scans, 154 abdominal scans, and 73 pelvic scans. The percentages of "good" segmentation results were 97.6%, 92.9%, 81.1%, 87.4%, 85.0%, 78.7%, 94.1%, 91.1%, 81.3%, 86.7%, 82.5%, 86.4%, 79.9%, 72.6%, 68.5%, 93.2%, and 96.9% for the brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, liver, kidneys, spleen, prostate, rectum, femoral heads, and skin, respectively. Various organs at risk can be reliably segmented from CT scans by use of the three classes of segmentation methods.
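
    Class (1) above, thresholding of high-contrast organs, reduces to a Hounsfield-unit window. A minimal sketch with illustrative HU windows (the values are assumptions, not the paper's):

    ```python
    import numpy as np

    def threshold_mask(ct_hu, lo, hi):
        """Isolate a high-contrast structure with a simple HU window."""
        return (ct_hu >= lo) & (ct_hu <= hi)

    # Illustrative windows (assumed, not from the paper):
    LUNG_WINDOW = (-1000, -400)  # air-filled lung parenchyma
    BODY_WINDOW = (-200, 2000)   # everything denser than air -> body/skin

    # Synthetic slice: top half lung-like (-800 HU), bottom half soft tissue (40 HU).
    ct = np.full((4, 4), 40.0)
    ct[:2] = -800.0
    print(threshold_mask(ct, *LUNG_WINDOW).sum())  # 8
    print(threshold_mask(ct, *BODY_WINDOW).sum())  # 8
    ```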

  13. Generalized method for partial volume estimation and tissue segmentation in cerebral magnetic resonance images

    PubMed Central

    Khademi, April; Venetsanopoulos, Anastasios; Moody, Alan R.

    2014-01-01

    An artifact found in magnetic resonance images (MRI) called partial volume averaging (PVA) has received much attention since accurate segmentation of cerebral anatomy and pathology is impeded by this artifact. Traditional neurological segmentation techniques rely on Gaussian mixture models to handle noise and PVA, or high-dimensional feature sets that exploit redundancy in multispectral datasets. Unfortunately, model-based techniques may not be optimal for images with non-Gaussian noise distributions and/or pathology, and multispectral techniques model probabilities instead of the partial volume (PV) fraction. For robust segmentation, a PV fraction estimation approach is developed for cerebral MRI that does not depend on predetermined intensity distribution models or multispectral scans. Instead, the PV fraction is estimated directly from each image using an adaptively defined global edge map constructed by exploiting a relationship between edge content and PVA. The final PVA map is used to segment anatomy and pathology with subvoxel accuracy. Validation on simulated and real, pathology-free T1 MRI (Gaussian noise), as well as pathological fluid attenuation inversion recovery MRI (non-Gaussian noise), demonstrate that the PV fraction is accurately estimated and the resultant segmentation is robust. Comparison to model-based methods further highlight the benefits of the current approach. PMID:26158022

  14. Automatic segmentation of tumor-laden lung volumes from the LIDC database

    NASA Astrophysics Data System (ADS)

    O'Dell, Walter G.

    2012-03-01

    The segmentation of the lung parenchyma is often a critical pre-processing step prior to application of computer-aided detection of lung nodules. Segmentation of the lung volume can dramatically decrease computation time and reduce the number of false positive detections by excluding extra-pulmonary tissue from consideration. However, while many algorithms are capable of adequately segmenting the healthy lung, none have been demonstrated to work reliably on tumor-laden lungs. Of particular challenge is to preserve tumorous masses attached to the chest wall, mediastinum, or major vessels. In this role, lung volume segmentation comprises an important computational step that can adversely affect the performance of the overall CAD algorithm. An automated lung volume segmentation algorithm has been developed with the goals of maximally excluding extra-pulmonary tissue while retaining all true nodules. The algorithm comprises a series of tasks including intensity thresholding, 2-D and 3-D morphological operations, 2-D and 3-D floodfilling, and snake-based clipping of nodules attached to the chest wall. It features the ability to (1) exclude the trachea and bowels, (2) snip large attached nodules using snakes, (3) snip small attached nodules using dilation, (4) preserve large masses fully internal to the lung volume, (5) account for basal aspects of the lung, where in a 2-D slice the lower sections appear to be disconnected from the main lung, and (6) achieve separation of the right and left hemi-lungs. The algorithm was developed and trained on the first 100 datasets of the LIDC image database.
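
    The 2-D morphological operations in such pipelines can be illustrated with a small numpy-only sketch: array-shift dilation with a cross structuring element, and closing built from dilation and erosion. This is a toy stand-in, not the paper's implementation; note that np.roll wraps at array borders, which a real implementation would pad against:

    ```python
    import numpy as np

    def dilate(mask, iterations=1):
        """Binary dilation with a 3x3 cross structuring element, built
        from array shifts. np.roll wraps at the borders, so real data
        should be zero-padded first."""
        out = mask.copy()
        for _ in range(iterations):
            grown = out.copy()
            for axis in (0, 1):
                for d in (1, -1):
                    grown |= np.roll(out, d, axis=axis)
            out = grown
        return out

    def close_2d(mask, iterations=1):
        """Morphological closing (dilation then erosion), e.g. to fill
        small holes; erosion is the complement of dilating the complement."""
        grown = dilate(mask, iterations)
        return ~dilate(~grown, iterations)

    # A 6x6 square with a one-pixel hole: closing fills the hole.
    mask = np.zeros((10, 10), bool)
    mask[2:8, 2:8] = True
    mask[5, 5] = False
    print(close_2d(mask).sum())  # 36
    ```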

  15. Volume rendering segmented data using 3D textures: a practical approach for intra-operative visualization

    NASA Astrophysics Data System (ADS)

    Subramanian, Navneeth; Mullick, Rakesh; Vaidya, Vivek

    2006-03-01

    Volume rendering has high utility in the visualization of segmented datasets. However, volume rendering of the segmented labels along with the original data causes undesirable intermixing/bleeding artifacts arising from interpolation at the sharp boundaries. This issue is further amplified in 3D texture-based volume rendering due to the inaccessibility of the interpolation stage. We present an approach that helps minimize intermixing artifacts while maintaining the high performance of 3D texture-based volume rendering, both of which are critical for intra-operative visualization. Our approach uses a 2D transfer function based classification scheme in which label distinction is achieved through an encoding that generates unique gradient values for labels. This helps ensure that labelled voxels always map to distinct regions in the 2D transfer function, irrespective of interpolation. In contrast to previously reported algorithms, our algorithm does not require multiple rendering passes and supports more than 4 masks. It also allows real-time modification of the colors/opacities of the segmented structures along with the original data. Additionally, these capabilities are available with minimal texture memory requirements among comparable algorithms. Results are presented on clinical and phantom data.

  16. Trabecular-Iris Circumference Volume in Open Angle Eyes Using Swept-Source Fourier Domain Anterior Segment Optical Coherence Tomography

    PubMed Central

    Rigi, Mohammed; Blieden, Lauren S.; Nguyen, Donna; Chuang, Alice Z.; Baker, Laura A.; Bell, Nicholas P.; Lee, David A.; Mankiewicz, Kimberly A.; Feldman, Robert M.

    2014-01-01

    Purpose. To introduce a new anterior segment optical coherence tomography parameter, trabecular-iris circumference volume (TICV), which measures the integrated volume of the peripheral angle, and establish a reference range in normal, open angle eyes. Methods. One eye of each participant with open angles and a normal anterior segment was imaged using 3D mode by the CASIA SS-1000 (Tomey, Nagoya, Japan). Trabecular-iris space area (TISA) and TICV at 500 and 750 µm were calculated. Analysis of covariance was performed to examine the effect of age and its interaction with spherical equivalent. Results. The study included 100 participants with a mean age of 50 (±15) years (range 20–79). TICV showed a normal distribution with a mean (±SD) value of 4.75 µL (±2.30) for TICV500 and a mean (±SD) value of 8.90 µL (±3.88) for TICV750. Overall, TICV showed an age-related reduction (P = 0.035). In addition, angle volume increased with increased myopia for all age groups, except for those older than 65 years. Conclusions. This study introduces a new parameter to measure peripheral angle volume, TICV, with age-adjusted normal ranges for open angle eyes. Further investigation is warranted to determine the clinical utility of this new parameter. PMID:25210623
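
    One plausible way to integrate per-slice TISA measurements into a circumference volume is a Riemann sum around the angle. The scheme below is an illustrative assumption, not the CASIA SS-1000's documented algorithm; the function name, slice count, and radius are invented:

    ```python
    import numpy as np

    def ticv_from_tisa(tisa_mm2, radius_mm):
        """Integrate trabecular-iris space area (TISA, mm^2) sampled on
        evenly spaced radial slices around the angle circumference into a
        volume (µL). Each slice's area is swept along its arc length
        2*pi*r / n; 1 µL = 1 mm^3. Illustrative, not the device's method.
        """
        tisa = np.asarray(tisa_mm2, dtype=float)
        arc_mm = 2.0 * np.pi * radius_mm / tisa.size
        return float(tisa.sum() * arc_mm)

    # 128 slices of constant TISA500 = 0.1 mm^2 at an assumed 6 mm radius:
    print(round(ticv_from_tisa([0.1] * 128, 6.0), 2))  # 3.77
    ```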

  18. Exploratory analysis of genomic segmentations with Segtools

    PubMed Central

    2011-01-01

    Background As genome-wide experiments and annotations become more prevalent, researchers increasingly require tools to help interpret data at this scale. Many functional genomics experiments involve partitioning the genome into labeled segments, such that segments sharing the same label exhibit one or more biochemical or functional traits. For example, a collection of ChIP-seq experiments yields a compendium of peaks, each labeled with one or more associated DNA-binding proteins. Similarly, manually or automatically generated annotations of functional genomic elements, including cis-regulatory modules and protein-coding or RNA genes, can also be summarized as genomic segmentations. Results We present a software toolkit called Segtools that simplifies and automates the exploration of genomic segmentations. The software operates as a series of interacting tools, each of which provides one mode of summarization. These various tools can be pipelined and summarized in a single HTML page. We describe the Segtools toolkit and demonstrate its use in interpreting a collection of human histone modification data sets and Plasmodium falciparum local chromatin structure data sets. Conclusions Segtools provides a convenient, powerful means of interpreting a genomic segmentation. PMID:22029426

  19. Segments.

    ERIC Educational Resources Information Center

    Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.

    2001-01-01

    Presents a market taxonomy for higher education, including what it reveals about the structure of the market, the model's technical attributes, and its capacity to explain pricing behavior. Details the identification of the principle seams separating one market segment from another and how student aspirations help to organize the market, making…

  20. Cost, volume and profitability analysis.

    PubMed

    Tarantino, David P

    2002-01-01

    If you want to increase your income by seeing more patients, it's important to figure out the financial impact such a move could have on your practice. Learn how to run a cost, volume, and profitability analysis to determine how business decisions can change your financial picture. PMID:11806235
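
    The cost-volume-profit arithmetic behind such an analysis is simple: profit is volume times contribution margin (price minus variable cost) less fixed costs, and break-even volume is where that profit reaches zero. A sketch with invented numbers:

    ```python
    def cvp_profit(volume, price, variable_cost, fixed_cost):
        """Cost-volume-profit: profit at a given patient volume."""
        return volume * (price - variable_cost) - fixed_cost

    def breakeven_volume(price, variable_cost, fixed_cost):
        """Volume at which the contribution margin covers fixed costs."""
        return fixed_cost / (price - variable_cost)

    # Illustrative (invented) numbers: $100 fee, $40 variable cost per
    # visit, $30,000/month fixed costs -> 500 visits to break even.
    print(breakeven_volume(100, 40, 30_000))   # 500.0
    print(cvp_profit(600, 100, 40, 30_000))    # 6000
    ```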

  1. Hitchhiker’s Guide to Voxel Segmentation for Partial Volume Correction of In Vivo Magnetic Resonance Spectroscopy

    PubMed Central

    Quadrelli, Scott; Mountford, Carolyn; Ramadan, Saadallah

    2016-01-01

    Partial volume effects have the potential to cause inaccuracies when quantifying metabolites using proton magnetic resonance spectroscopy (MRS). In order to correct for cerebrospinal fluid content, a spectroscopic voxel needs to be segmented according to different tissue contents. This article aims to detail how automated partial volume segmentation can be undertaken and provides a software framework for researchers to develop their own tools. While many studies have detailed the impact of partial volume correction on proton magnetic resonance spectroscopy quantification, there is a paucity of literature explaining how voxel segmentation can be achieved using freely available neuroimaging packages. PMID:27147822
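
    The simplest form of the correction assumes metabolites are absent from CSF, so the measured concentration is scaled by the non-CSF fraction of the voxel. The sketch below omits the water-content and relaxation corrections a full treatment would include:

    ```python
    def csf_corrected(concentration, f_gm, f_wm, f_csf):
        """Basic CSF partial-volume correction for an MRS voxel: scale
        the measured value by the non-CSF tissue fraction. The fractions
        come from segmenting the voxel on a co-registered structural MRI."""
        assert abs(f_gm + f_wm + f_csf - 1.0) < 1e-6, "fractions must sum to 1"
        return concentration / (1.0 - f_csf)

    # A voxel that is 20% CSF: a measured 8.0 mM becomes 10.0 mM.
    print(csf_corrected(8.0, 0.5, 0.3, 0.2))  # 10.0
    ```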

  2. Automated cerebellar segmentation: Validation and application to detect smaller volumes in children prenatally exposed to alcohol☆

    PubMed Central

    Cardenas, Valerie A.; Price, Mathew; Infante, M. Alejandra; Moore, Eileen M.; Mattson, Sarah N.; Riley, Edward P.; Fein, George

    2014-01-01

    Objective To validate an automated cerebellar segmentation method based on active shape and appearance modeling and then segment the cerebellum on images acquired from adolescents with histories of prenatal alcohol exposure (PAE) and non-exposed controls (NC). Methods Automated segmentations of the total cerebellum, right and left cerebellar hemispheres, and three vermal lobes (anterior, lobules I–V; superior posterior, lobules VI–VII; inferior posterior, lobules VIII–X) were compared to expert manual labelings on 20 subjects, studied twice, that were not used for model training. The method was also used to segment the cerebellum on 11 PAE and 9 NC adolescents. Results The test–retest intraclass correlation coefficients (ICCs) of the automated method were greater than 0.94 for all cerebellar volume and mid-sagittal vermal area measures, comparable or better than the test–retest ICCs for manual measurement (all ICCs > 0.92). The ICCs computed on all four cerebellar measurements (manual and automated measures on the repeat scans) to compare comparability were above 0.97 for non-vermis parcels, and above 0.89 for vermis parcels. When applied to patients, the automated method detected smaller cerebellar volumes and mid-sagittal areas in the PAE group compared to controls (p < 0.05 for all regions except the superior posterior lobe, consistent with prior studies). Discussion These results demonstrate excellent reliability and validity of automated cerebellar volume and mid-sagittal area measurements, compared to manual measurements. These data also illustrate that this new technology for automatically delineating the cerebellum leads to conclusions regarding the effects of prenatal alcohol exposure on the cerebellum consistent with prior studies that used labor intensive manual delineation, even with a very small sample. PMID:25061566

  3. Automatic coronary lumen segmentation with partial volume modeling improves lesions' hemodynamic significance assessment

    NASA Astrophysics Data System (ADS)

    Freiman, M.; Lamash, Y.; Gilboa, G.; Nickisch, H.; Prevrhal, S.; Schmitt, H.; Vembar, M.; Goshen, L.

    2016-03-01

    The determination of the hemodynamic significance of coronary artery lesions from cardiac computed tomography angiography (CCTA) based on blood flow simulations has the potential to improve CCTA's specificity, thus resulting in improved clinical decision making. Accurate coronary lumen segmentation required for flow simulation is challenging due to several factors. Specifically, the partial-volume effect (PVE) in small-diameter lumina may result in overestimation of the lumen diameter that can lead to an erroneous hemodynamic significance assessment. In this work, we present a coronary artery segmentation algorithm tailored specifically for flow simulations by accounting for the PVE. Our algorithm detects lumen regions that may be subject to the PVE by analyzing the intensity values along the coronary centerline and integrates this information into a machine-learning based graph min-cut segmentation framework to obtain accurate coronary lumen segmentations. We demonstrate the improvement in hemodynamic significance assessment achieved by accounting for the PVE in the automatic segmentation of 91 coronary artery lesions from 85 patients. We compare hemodynamic significance assessments by means of fractional flow reserve (FFR) resulting from simulations on 3D models generated by our segmentation algorithm with and without accounting for the PVE. By accounting for the PVE we improved the area under the ROC curve for detecting hemodynamically significant CAD by 29% (N=91, 0.85 vs. 0.66, p<0.05, DeLong's test) with an invasive FFR threshold of 0.8 as the reference standard. Our algorithm has the potential to facilitate non-invasive hemodynamic significance assessment of coronary lesions.

  4. Four-chamber heart modeling and automatic segmentation for 3D cardiac CT volumes

    NASA Astrophysics Data System (ADS)

    Zheng, Yefeng; Georgescu, Bogdan; Barbu, Adrian; Scheuering, Michael; Comaniciu, Dorin

    2008-03-01

    Multi-chamber heart segmentation is a prerequisite for quantification of the cardiac function. In this paper, we propose an automatic heart chamber segmentation system. There are two closely related tasks to develop such a system: heart modeling and automatic model fitting to an unseen volume. The heart is a complicated non-rigid organ with four chambers and several major vessel trunks attached. A flexible and accurate model is necessary to capture the heart chamber shape at an appropriate level of details. In our four-chamber surface mesh model, the following two factors are considered and traded-off: 1) accuracy in anatomy and 2) easiness for both annotation and automatic detection. Important landmarks such as valves and cusp points on the interventricular septum are explicitly represented in our model. These landmarks can be detected reliably to guide the automatic model fitting process. We also propose two mechanisms, the rotation-axis based and parallel-slice based resampling methods, to establish mesh point correspondence, which is necessary to build a statistical shape model to enforce priori shape constraints in the model fitting procedure. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3D computed tomography (CT) volumes. Our approach is based on recent advances in learning discriminative object models and we exploit a large database of annotated CT volumes. We formulate the segmentation as a two step learning problem: anatomical structure localization and boundary delineation. A novel algorithm, Marginal Space Learning (MSL), is introduced to solve the 9-dimensional similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3D shape through learning-based boundary delineation. Extensive experiments demonstrate the efficiency and robustness of the proposed approach, comparing favorably to the state-of-the-art. 

  5. A novel colonic polyp volume segmentation method for computer tomographic colonography

    NASA Astrophysics Data System (ADS)

    Wang, Huafeng; Li, Lihong C.; Han, Hao; Song, Bowen; Peng, Hao; Wang, Yunhong; Wang, Lihua; Liang, Zhengrong

    2014-03-01

    Colorectal cancer is the third most common type of cancer, but it can be prevented by the detection and removal of precursor adenomatous polyps following diagnosis by experts on computer tomographic colonography (CTC). During CTC diagnosis, the radiologist looks for colon polyps and assesses not only their size but also their malignancy. Segmenting polyp volumes from their complicated surroundings is therefore of much significance for accomplishing the CTC-based early diagnosis task. Previously, polyp volumes were mainly obtained from manual or semi-automatic delineation by radiologists; as a result, some deviations cannot be avoided, since the polyps are usually small (6~9 mm) and radiologists' experience and knowledge vary from one to another. To achieve automatic polyp segmentation by machine, we propose a new method based on a colon decomposition strategy. We evaluated our algorithm on both phantom and patient data. Experimental results demonstrate that our approach is capable of segmenting small polyps from their complicated background.

  6. Segmentation of brain image volumes using the data list management library.

    PubMed

    Román-Alonso, G; Jiménez-Alaniz, J R; Buenabad-Chávez, J; Castro-García, M A; Vargas-Rodríguez, A H

    2007-01-01

    The segmentation of head images is useful for detecting neuroanatomical structures and for following and quantifying the evolution of several brain lesions. 2D images correspond to brain slices: the more images are used, the higher the resolution obtained, but more processing power is required and parallelism becomes desirable. We present a new approach to segmentation of brain image volumes using DLML (Data List Management Library), a tool developed by our team. We organise the integer identifiers of the images into a list, and our DLML version processes them in parallel with dynamic load balancing, transparently to the programmer. We compare the performance of our DLML version to other typical parallel approaches developed with MPI (master-slave and static data distribution), using cluster configurations with 4-32 processors. PMID:18002398
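    DLML itself distributes list items over MPI processes; the sketch below mimics only the dynamic work-pulling idea using threads and a shared queue. `process_slice` and `run_dynamic` are invented names, not the DLML API.

```python
import queue
import threading

def process_slice(image_id):
    """Stand-in for segmenting one 2D brain slice."""
    return image_id * image_id  # dummy per-slice result

def run_dynamic(image_ids, n_workers=4):
    """Idle workers pull the next image ID from a shared queue, so the
    load balances dynamically instead of by static partitioning."""
    work = queue.Queue()
    for i in image_ids:
        work.put(i)
    results = {}
    lock = threading.Lock()

    def worker():
        while True:
            try:
                i = work.get_nowait()
            except queue.Empty:
                return  # no work left: this worker is done
            r = process_slice(i)
            with lock:
                results[i] = r

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

results = run_dynamic(list(range(16)))
```

    Static distribution would instead assign each worker a fixed quarter of the IDs up front, which stalls when slice costs are uneven.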

  7. Segmentation and quantitative analysis of individual cells in developmental tissues.

    PubMed

    Nandy, Kaustav; Kim, Jusub; McCullough, Dean P; McAuliffe, Matthew; Meaburn, Karen J; Yamaguchi, Terry P; Gudla, Prabhakar R; Lockett, Stephen J

    2014-01-01

    Image analysis is vital for extracting quantitative information from biological images and is used extensively, including investigations in developmental biology. The technique commences with the segmentation (delineation) of objects of interest from 2D images or 3D image stacks and is usually followed by the measurement and classification of the segmented objects. This chapter focuses on the segmentation task and here we explain the use of ImageJ, MIPAV (Medical Image Processing, Analysis, and Visualization), and VisSeg, three freely available software packages for this purpose. ImageJ and MIPAV are extremely versatile and can be used in diverse applications. VisSeg is a specialized tool for performing highly accurate and reliable 2D and 3D segmentation of objects such as cells and cell nuclei in images and stacks. PMID:24318825

  8. Segment-to-segment contact elements for modelling joint interfaces in finite element analysis

    NASA Astrophysics Data System (ADS)

    Mayer, M. H.; Gaul, L.

    2007-02-01

    This paper presents an efficient approach to modelling the contact interfaces of joints in finite element analysis (FEA) with segment-to-segment contact elements such as thin layer or zero thickness elements. These elements originate in geomechanics and have recently been applied in modal analysis as an efficient way to define the contact stiffness of fixed joints for model updating. A major advantage of these elements is that no global contact-search algorithm is employed, as is used in master-slave contacts. Contact-search algorithms are unnecessary when modelling the contact interfaces of fixed joints, since the interfaces are always in contact and restricted to small relative movements, and omitting them saves considerable computing time. We first give an introduction to the theory of segment-to-segment contact elements, leading to zero thickness and thin layer elements. As a new application of zero thickness elements, we demonstrate the implementation of a structural contact damping model, derived from a Masing model, as a non-linear constitutive law for the contact element. This damping model accounts for the non-linear influence of frictional microslip in the contact interface of fixed joints. With this model we simulate the non-linear response of a bolted structure. The approach constitutes a new way to simulate multi-degree-of-freedom systems with structural joints and to predict modal damping properties.

  9. Systematic Error in Hippocampal Volume Asymmetry Measurement is Minimal with a Manual Segmentation Protocol

    PubMed Central

    Rogers, Baxter P.; Sheffield, Julia M.; Luksik, Andrew S.; Heckers, Stephan

    2012-01-01

    Hemispheric asymmetry of hippocampal volume is a common finding that has biological relevance, including associations with dementia and cognitive performance. However, a recent study has reported the possibility of systematic error in measurements of hippocampal asymmetry by magnetic resonance volumetry. We manually traced the volumes of the anterior and posterior hippocampus in 40 healthy people to measure systematic error related to image orientation. We found a bias due to the side of the screen on which the hippocampus was viewed, such that hippocampal volume was larger when traced on the left side of the screen than when traced on the right (p = 0.05). However, this bias was smaller than the anatomical right > left asymmetry of the anterior hippocampus. We found right > left asymmetry of hippocampal volume regardless of image presentation (radiological versus neurological). We conclude that manual segmentation protocols can minimize the effect of image orientation in the study of hippocampal volume asymmetry, but our confirmation that such bias exists suggests strategies to avoid it in future studies. PMID:23248580
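    Hemispheric asymmetry like that reported above is commonly expressed as a normalized index; the formula below is the common normalized form, not necessarily the exact index used in the paper, and the example volumes are made up.

```python
def asymmetry_index(left, right):
    """Normalized hemispheric asymmetry, 2*(R-L)/(R+L):
    positive when right > left, zero when symmetric."""
    return 2.0 * (right - left) / (right + left)

symmetric = asymmetry_index(2000.0, 2000.0)
# Right > left anterior hippocampus (hypothetical volumes in mm^3):
example = asymmetry_index(1800.0, 2000.0)
```

    Normalizing by the mean volume makes the index comparable across subjects with different head sizes.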

  10. Segmentation and learning in the quantitative analysis of microscopy images

    NASA Astrophysics Data System (ADS)

    Ruggiero, Christy; Ross, Amy; Porter, Reid

    2015-02-01

    In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest to use machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.

  11. Chest-wall segmentation in automated 3D breast ultrasound images using thoracic volume classification

    NASA Astrophysics Data System (ADS)

    Tan, Tao; van Zelst, Jan; Zhang, Wei; Mann, Ritse M.; Platel, Bram; Karssemeijer, Nico

    2014-03-01

    Computer-aided detection (CAD) systems are expected to improve the effectiveness and efficiency of radiologists in reading automated 3D breast ultrasound (ABUS) images. One challenging task in developing CAD is reducing the large number of false positives, many of which originate from acoustic shadowing caused by ribs. Determining the location of the chest wall in ABUS is therefore necessary in CAD systems to remove these false positives. Additionally, it can be used as an anatomical landmark for inter- and intra-modal image registration. In this work, we extended our previously developed chest-wall segmentation method, which fits a cylinder to automatically detected rib-surface points, by minimizing a cost function that adds a region cost computed from a thoracic volume classifier to improve segmentation accuracy. We examined the performance on a dataset of 52 images where our previous method fails. Using the region-based cost, the average mean distance of the annotated points to the segmented chest wall decreased from 7.57±2.76 mm to 6.22±2.86 mm.

  12. Automatic delineation of tumor volumes by co-segmentation of combined PET/MR data

    NASA Astrophysics Data System (ADS)

    Leibfarth, S.; Eckert, F.; Welz, S.; Siegel, C.; Schmidt, H.; Schwenzer, N.; Zips, D.; Thorwarth, D.

    2015-07-01

    Combined PET/MRI may be highly beneficial for radiotherapy treatment planning in terms of tumor delineation and characterization. To standardize tumor volume delineation, an automatic algorithm for the co-segmentation of head and neck (HN) tumors based on PET/MR data was developed. Ten HN patient datasets acquired in a combined PET/MR system were available for this study. The proposed algorithm uses both the anatomical T2-weighted MR and FDG-PET data. For both imaging modalities tumor probability maps were derived, assigning each voxel a probability of being cancerous based on its signal intensity. A combination of these maps was subsequently segmented using a threshold level set algorithm. To validate the method, tumor delineations from three radiation oncologists were available. Inter-observer variabilities and variabilities between the algorithm and each observer were quantified by means of the Dice similarity index and a distance measure. Inter-observer variabilities and variabilities between observers and algorithm were found to be comparable, suggesting that the proposed algorithm is adequate for PET/MR co-segmentation. Moreover, taking into account combined PET/MR data resulted in more consistent tumor delineations compared to MR information only.
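    The combination step above can be illustrated with a minimal sketch: a voxel-wise weighted average of the two tumor probability maps followed by a hard threshold. The paper uses a threshold level set rather than this plain cutoff, and the weights, names, and values here are illustrative.

```python
def combine_and_threshold(p_pet, p_mr, w_pet=0.5, threshold=0.5):
    """Voxel-wise weighted combination of two probability maps,
    then a hard threshold yielding a binary tumor mask."""
    return [[(w_pet * a + (1.0 - w_pet) * b) >= threshold
             for a, b in zip(row_pet, row_mr)]
            for row_pet, row_mr in zip(p_pet, p_mr)]

# Tiny 2x2 "slices" of per-voxel tumor probabilities:
p_pet = [[0.9, 0.2], [0.6, 0.1]]
p_mr  = [[0.8, 0.3], [0.7, 0.2]]
mask = combine_and_threshold(p_pet, p_mr)
```

    Averaging the modality-specific maps before segmenting is what lets low-confidence evidence in one modality be rescued by the other.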

  13. Bayesian Analysis and Segmentation of Multichannel Image Sequences

    NASA Astrophysics Data System (ADS)

    Chang, Michael Ming Hsin

    This thesis is concerned with the segmentation and analysis of multichannel image sequence data. In particular, we use maximum a posteriori probability (MAP) criterion and Gibbs random fields (GRF) to formulate the problems. We start by reviewing the significance of MAP estimation with GRF priors and study the feasibility of various optimization methods for implementing the MAP estimator. We proceed to investigate three areas where image data and parameter estimates are present in multichannels, multiframes, and interrelated in complicated manners. These areas of study include color image segmentation, multislice MR image segmentation, and optical flow estimation and segmentation in multiframe temporal sequences. Besides developing novel algorithms in each of these areas, we demonstrate how to exploit the potential of MAP estimation and GRFs, and we propose practical and efficient implementations. Illustrative examples and relevant experimental results are included.

  14. Leaf image segmentation method based on multifractal detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Li, Jin-Wei; Shi, Wen; Liao, Gui-Ping

    2013-12-01

    To identify singular regions of crop leaves affected by disease, an image segmentation method based on multifractal detrended fluctuation analysis (MF-DFA) is proposed. In the proposed method, we first define a new texture descriptor based on MF-DFA: the local generalized Hurst exponent, denoted LHq. Then, the box-counting dimension f(LHq) is calculated for sub-images constituted by the LHq of pixels from a specific region, yielding a series of f(LHq) values for the different regions. Finally, the singular regions are segmented according to the corresponding f(LHq). Images of six kinds of diseased corn leaves are tested in our experiments. The proposed method is compared with two other segmentation methods, one based on the multifractal spectrum and one on fuzzy C-means clustering. The comparison results demonstrate that the proposed method recognizes lesion regions more effectively and provides more robust segmentations.
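    The box-counting dimension used as f(LHq) above is a standard construction: count occupied grid boxes at successively finer scales and fit the slope of log N(eps) against log(1/eps). A minimal 2D sketch (not the paper's code):

```python
import math

def box_count(points, eps):
    """Number of eps x eps grid boxes containing at least one point."""
    return len({(x // eps, y // eps) for x, y in points})

def box_counting_dimension(points, size):
    """Least-squares slope of log N(eps) versus log(1/eps),
    halving the box size down from the full extent."""
    xs, ys = [], []
    eps = size
    while eps >= 1:
        xs.append(math.log(1.0 / eps))
        ys.append(math.log(box_count(points, eps)))
        eps //= 2
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Sanity check: a completely filled 64x64 region has dimension 2.
filled = [(x, y) for x in range(64) for y in range(64)]
d = box_counting_dimension(filled, 64)
```

    For a filled square N(eps) = (size/eps)^2, so the fitted slope is exactly 2; fractal point sets give non-integer slopes.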

  15. Whole-body and segmental muscle volume are associated with ball velocity in high school baseball pitchers

    PubMed Central

    Yamada, Yosuke; Yamashita, Daichi; Yamamoto, Shinji; Matsui, Tomoyuki; Seo, Kazuya; Azuma, Yoshikazu; Kida, Yoshikazu; Morihara, Toru; Kimura, Misaka

    2013-01-01

    The aim of the study was to examine the relationship between pitching ball velocity and segmental (trunk, upper arm, forearm, upper leg, and lower leg) and whole-body muscle volume (MV) in high school baseball pitchers. Forty-seven male high school pitchers (40 right-handers and seven left-handers; age, 16.2 ± 0.7 years; stature, 173.6 ± 4.9 cm; mass, 65.0 ± 6.8 kg; years of baseball experience, 7.5 ± 1.8 years; maximum pitching ball velocity, 119.0 ± 9.0 km/hour) participated in the study. Segmental and whole-body MV were measured using segmental bioelectrical impedance analysis. Maximum ball velocity was measured with a sports radar gun. The MV of the dominant arm was significantly larger than the MV of the non-dominant arm (P < 0.001). There was no difference in MV between the dominant and non-dominant legs. Whole-body MV was significantly correlated with ball velocity (r = 0.412, P < 0.01). Trunk MV was not correlated with ball velocity, but the MV of both lower legs, and of the dominant upper leg, upper arm, and forearm, was significantly correlated with ball velocity (P < 0.05). The results were not affected by age or years of baseball experience. Whole-body and segmental MV are associated with ball velocity in high school baseball pitchers. However, the contribution of muscle mass to pitching ball velocity is limited; other fundamental factors (i.e., pitching skill) are also important. PMID:24379713
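    The r values reported above are Pearson correlation coefficients; as a worked reminder of the computation (the data pairs below are made up for illustration, not taken from the study):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical (muscle volume in L, ball velocity in km/h) pairs:
mv = [5.1, 5.8, 6.2, 6.9, 7.4]
velocity = [108.0, 115.0, 112.0, 121.0, 125.0]
r = pearson_r(mv, velocity)
```

    Note that r = 0.412 corresponds to only about 17% of the velocity variance explained (r squared), consistent with the authors' point that other factors dominate.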

  17. MR volume segmentation of gray matter and white matter using manual thresholding: Dependence on image brightness

    SciTech Connect

    Harris, G.J.; Barta, P.E.; Peng, L.W.; Lee, S.; Brettschneider, P.D.; Shah, A.; Henderer, J.D.; Schlaepfer, T.E.; Pearlson, G.D. Tufts Univ. School of Medicine, Boston, MA )

    1994-02-01

    To describe a quantitative MR imaging segmentation method for determination of the volume of cerebrospinal fluid, gray matter, and white matter in living human brain, and to determine the method's reliability. We developed a computer method that allows rapid, user-friendly determination of cerebrospinal fluid, gray matter, and white matter volumes in a reliable manner, both globally and regionally. This method was applied to a large control population (N = 57). Initially, image brightness had a strong correlation with the gray-white ratio (r = .78). Bright images tended to overestimate, dim images to underestimate gray matter volumes. This artifact was corrected for by offsetting each image to an approximately equal brightness. After brightness correction, gray-white ratio was correlated with age (r = -.35). The age-dependent gray-white ratio was similar to that for the same age range in a prior neuropathology report. Interrater reliability was high (.93 intraclass correlation coefficient). The method described here for gray matter, white matter, and cerebrospinal fluid volume calculation is reliable and valid. A correction method for an artifact related to image brightness was developed. 12 refs., 3 figs.
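    The brightness correction described above can be sketched minimally as an additive offset that brings every image to a common target mean; the paper's exact equalization procedure may differ, and the pixel values here are illustrative.

```python
def correct_brightness(image, target_mean):
    """Offset every pixel so the image mean equals target_mean,
    removing the brightness artifact before gray/white thresholding."""
    flat = [p for row in image for p in row]
    offset = target_mean - sum(flat) / len(flat)
    return [[p + offset for p in row] for row in image]

img = [[10, 20], [30, 40]]          # mean = 25
corrected = correct_brightness(img, 100.0)
```

    After this offset, a fixed gray/white threshold behaves the same for originally bright and dim scans.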

  18. Analysis of the Segmented Features of Indicator of Mine Presence

    NASA Astrophysics Data System (ADS)

    Krtalic, A.

    2016-06-01

    The aim of this research is to investigate the possibility of interactive, semi-automatic interpretation of digital images in humanitarian demining, for the purpose of detecting and extracting (strong) indicators of mine presence visible in the images, according to the parameters of general geometric shapes rather than radiometric characteristics. For that purpose, objects are created by segmentation. The segments represent, as well as possible, the observed indicators and the objects that surround them (for analysis of the degree to which objects can be discriminated from their environment). These indicators cover a certain characteristic surface, determined by segmenting the digital image. The sets of pixels that form such surfaces have specific geometric features, so the features of the segments can be analyzed at the object level rather than the pixel level. Factor analysis of the geometric parameters of these segments is performed in order to identify parameters that can be distinguished from the others by their geometric features. The factor analysis was carried out in two different ways: according to the characteristics of the general geometric shape, and according to the type of strong indicator of mine presence. The continuation of this research is the implementation of automatic extraction of indicators of mine presence according to the results presented in this paper.

  19. 4-D segmentation and normalization of 3He MR images for intrasubject assessment of ventilated lung volumes

    NASA Astrophysics Data System (ADS)

    Contrella, Benjamin; Tustison, Nicholas J.; Altes, Talissa A.; Avants, Brian B.; Mugler, John P., III; de Lange, Eduard E.

    2012-03-01

    Although 3He MRI permits compelling visualization of the pulmonary air spaces, quantitation of absolute ventilation is difficult due to confounds such as field inhomogeneity and relative intensity differences between image acquisitions; the latter complicates longitudinal investigations of ventilation variation with respiratory alterations. To address these difficulties, we present a 4-D segmentation and normalization approach for intra-subject quantitative analysis of hyperpolarized 3He lung MRI. After normalization, which combines bias correction and relative intensity scaling between longitudinal data, partitioning of the lung volume time series is performed by iterating between modeling of the combined intensity histogram as a Gaussian mixture model and modulating the spatially heterogeneous tissue-class assignments through Markov random field modeling. The algorithm was retrospectively evaluated on a cohort of 10 asthmatics aged 19-25 years in which spirometry and 3He MR ventilation images were acquired both before and after respiratory exacerbation by a bronchoconstricting agent (methacholine). Acquisition was repeated under the same conditions 7 to 467 days (mean ± standard deviation: 185 ± 37.2) later. Several techniques were evaluated for matching intensities between the pre- and post-methacholine images, with 95th-percentile histogram matching demonstrating the strongest correlations with spirometry measures. Subsequent analysis evaluated segmentation parameters for assessing ventilation change in this cohort. Current findings also support previous research showing that areas of poor ventilation in response to bronchoconstriction are relatively consistent over time.
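    The 95th-percentile matching step can be sketched as a single multiplicative rescaling; the nearest-rank percentile below is a simplification, and the function names and values are illustrative rather than the authors' implementation.

```python
def percentile(values, q):
    """Nearest-rank percentile (simplified interpolation-free form)."""
    s = sorted(values)
    idx = int(round(q / 100.0 * (len(s) - 1)))
    return s[idx]

def match_p95(moving, reference):
    """Scale 'moving' intensities so its 95th percentile matches the
    reference image's, aligning the two acquisitions' intensity ranges."""
    scale = percentile(reference, 95) / percentile(moving, 95)
    return [v * scale for v in moving]

ref = list(range(101))               # 95th percentile = 95
mov = [2 * v for v in range(101)]    # 95th percentile = 190
matched = match_p95(mov, ref)
```

    Matching a high percentile rather than the maximum makes the scaling robust to a few outlier-bright voxels.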

  20. Segment clustering methodology for unsupervised Holter recordings analysis

    NASA Astrophysics Data System (ADS)

    Rodríguez-Sotelo, Jose Luis; Peluffo-Ordoñez, Diego; Castellanos Dominguez, German

    2015-01-01

    Cardiac arrhythmia analysis of Holter recordings is an important issue in clinical settings; however, it implicitly involves other problems related to the large amount of unlabelled data, which implies a high computational cost. In this work an unsupervised, segment-based methodology is presented, which consists of dividing the raw data into a balanced number of segments in order to identify fiducial points, then characterizing and clustering the heartbeats in each segment separately. The resulting clusters are merged or split according to an assumed homogeneity criterion. This framework reduces the high computational cost of Holter analysis, making implementation in future real-time applications possible. The performance of the method is measured on records from the MIT/BIH arrhythmia database and achieves high sensitivity and specificity, taking advantage of the database labels, for the broad range of heartbeat types recommended by the AAMI.
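    The first step of the framework, dividing the raw recording into a balanced number of segments, can be sketched as follows; the function name and segment count are illustrative, not from the paper.

```python
def balanced_segments(samples, n_segments):
    """Divide raw samples into n roughly equal-length contiguous
    segments for per-segment beat characterization and clustering."""
    k, r = divmod(len(samples), n_segments)
    out, start = [], 0
    for i in range(n_segments):
        # The first r segments absorb one extra sample each.
        end = start + k + (1 if i < r else 0)
        out.append(samples[start:end])
        start = end
    return out

segments = balanced_segments(list(range(10)), 3)
```

    Clustering each short segment independently keeps per-segment cost small, which is the source of the computational saving the abstract describes.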

  1. A fuzzy, nonparametric segmentation framework for DTI and MRI analysis.

    PubMed

    Awate, Suyash P; Gee, James C

    2007-01-01

    This paper presents a novel statistical fuzzy-segmentation method for diffusion tensor (DT) images and magnetic resonance (MR) images. Typical fuzzy-segmentation schemes, e.g. those based on fuzzy-C-means (FCM), incorporate Gaussian class models which are inherently biased towards ellipsoidal clusters. Fiber bundles in DT images, however, comprise tensors that can inherently lie on more-complex manifolds. Unlike FCM-based schemes, the proposed method relies on modeling the manifolds underlying the classes by incorporating nonparametric data-driven statistical models. It produces an optimal fuzzy segmentation by maximizing a novel information-theoretic energy in a Markov-random-field framework. For DT images, the paper describes a consistent statistical technique for nonparametric modeling in Riemannian DT spaces that incorporates two very recent works. In this way, the proposed method provides uncertainties in the segmentation decisions, which stem from imaging artifacts including noise, partial voluming, and inhomogeneity. The paper shows results on synthetic and real, DT as well as MR images. PMID:17633708

  2. Microscopy image segmentation tool: Robust image data analysis

    SciTech Connect

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-03-15

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  3. Volume change of segments II and III of the liver after gastrectomy in patients with gastric cancer

    PubMed Central

    Ozutemiz, Can; Obuz, Funda; Taylan, Abdullah; Atila, Koray; Bora, Seymen; Ellidokuz, Hulya

    2016-01-01

    PURPOSE We aimed to evaluate the relationship between gastrectomy and the volume of liver segments II and III in patients with gastric cancer. METHODS Computed tomography images of 54 patients who underwent curative gastrectomy for gastric adenocarcinoma were retrospectively evaluated by two blinded observers. Volumes of the total liver and segments II and III were measured. The difference between preoperative and postoperative volume measurements was compared. RESULTS Total liver volumes measured by both observers in the preoperative and postoperative scans were similar (P > 0.05). High correlation was found between both observers (preoperative r=0.99; postoperative r=0.98). Total liver volumes showed a mean reduction of 13.4% after gastrectomy (P = 0.977). The mean volume of segments II and III showed similar decrease in measurements of both observers (38.4% vs. 36.4%, P = 0.363); the correlation between the observers were high (preoperative r=0.97, P < 0.001; postoperative r=0.99, P < 0.001). Volume decrease in the rest of the liver was not different between the observers (8.2% vs. 9.1%, P = 0.388). Time had poor correlation with volume change of segments II and III and the total liver for each observer (observer 1, rseg2/3=0.32, rtotal=0.13; observer 2, rseg2/3=0.37, rtotal=0.16). CONCLUSION Segments II and III of the liver showed significant atrophy compared with the rest of the liver and the total liver after gastrectomy. Volume reduction had poor correlation with time. PMID:26899148
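    Volumes like those compared above come from counting segmented voxels and multiplying by the voxel size. A minimal sketch of that computation; the voxel counts and spacing below are made up for illustration.

```python
def volume_cm3(n_voxels, spacing_mm):
    """Volume of a segmented region: voxel count times voxel volume.
    spacing_mm = (dx, dy, dz) in millimetres; result in cm^3."""
    dx, dy, dz = spacing_mm
    return n_voxels * dx * dy * dz / 1000.0

def percent_reduction(pre, post):
    """Relative volume decrease between two time points, in percent."""
    return 100.0 * (pre - post) / pre

# Hypothetical segment II+III voxel counts before and after surgery:
pre = volume_cm3(300000, (0.7, 0.7, 1.0))
post = volume_cm3(186000, (0.7, 0.7, 1.0))
change = percent_reduction(pre, post)
```

    With identical spacing in both scans the spacing cancels in the percentage, so the ratio of voxel counts alone determines the reduction.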

  4. A comprehensive segmentation analysis of crude oil market based on time irreversibility

    NASA Astrophysics Data System (ADS)

    Xia, Jianan; Shang, Pengjian; Lu, Dan; Yin, Yi

    2016-05-01

    In this paper, we perform a comprehensive entropic segmentation analysis of crude oil futures prices from 1983 to 2014, using the Jensen-Shannon divergence as the statistical distance between segments, and analyze the results from the original series S and the series beginning in 1986 (denoted S∗) to find common segments that share the same boundaries. We then apply time-irreversibility analysis to each segment to divide all segments into two groups according to their degree of asymmetry. Based on the temporal distribution of the common segments and the high-asymmetry segments, we find that in the daily group these two types of segments appear alternately and essentially do not overlap, while in the weekly group the common portions are also high-asymmetry segments. In addition, the temporal distribution of the common segments lies fairly close in time to crises, wars, and other events, because the shock from severe events to the oil price makes these common segments quite different from their adjacent segments. The common segments can be confirmed in the daily or weekly group series owing to the large divergence between common segments and their neighbors, while the identification of high-asymmetry segments helps identify the segments that were not badly affected by events and could recover to steady states automatically. Finally, we rearrange the segments by merging connected common segments or high-asymmetry segments into a single segment, and conjoin the connected segments that are neither common nor highly asymmetric.
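    The Jensen-Shannon divergence used as the distance between segments is a standard symmetric measure on discrete distributions; a minimal implementation (not the paper's code):

```python
import math

def _kl(p, q):
    """Kullback-Leibler divergence in bits (terms with p_i = 0 vanish)."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jensen_shannon(p, q):
    """Jensen-Shannon divergence: symmetric, bounded by 1 bit,
    and zero exactly when the two distributions coincide."""
    m = [(pi + qi) / 2.0 for pi, qi in zip(p, q)]
    return 0.5 * _kl(p, m) + 0.5 * _kl(q, m)

same = jensen_shannon([0.5, 0.5], [0.5, 0.5])   # identical: 0.0
far = jensen_shannon([1.0, 0.0], [0.0, 1.0])    # disjoint: 1.0 bit
```

    In the segmentation, a boundary is placed where the divergence between the price-return distributions of adjacent windows is large.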

  5. Linear test bed. Volume 1: Test bed no. 1. [aerospike test bed with segmented combustor

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The objective of the Linear Test Bed program was to design, fabricate, and evaluation-test an advanced aerospike test bed employing the segmented combustor concept. The system is designated a linear aerospike system and consists of a thrust chamber assembly, a power package, and a thrust frame. It was designed as an experimental system to demonstrate the feasibility of the linear aerospike-segmented combustor concept. The overall dimensions are 120 inches long by 120 inches wide by 96 inches high. The propellants are liquid oxygen/liquid hydrogen. The system was designed to operate at a chamber pressure of 1200 psia and a mixture ratio of 5.5. At the design conditions, the sea-level thrust is 200,000 pounds. The complete program, including concept selection, design, fabrication, component tests, system tests, supporting analysis, and posttest hardware inspection, is described.

  6. Atrophy of the Cerebellar Vermis in Essential Tremor: Segmental Volumetric MRI Analysis.

    PubMed

    Shin, Hyeeun; Lee, Dong-Kyun; Lee, Jong-Min; Huh, Young-Eun; Youn, Jinyoung; Louis, Elan D; Cho, Jin Whan

    2016-04-01

    Postmortem studies of essential tremor (ET) have demonstrated the presence of degenerative changes in the cerebellum, and imaging studies have examined related structural changes in the brain. However, their results have not been completely consistent and the number of imaging studies has been limited. We aimed to study cerebellar involvement in ET using MRI segmental volumetric analysis. In addition, a unique feature of this study was that we stratified ET patients into subtypes based on the clinical presence of cerebellar signs and compared their MRI findings. Thirty-nine ET patients and 36 normal healthy controls, matched for age and sex, were enrolled. Cerebellar signs in ET patients were assessed using the clinical tremor rating scale and International Cooperative Ataxia Rating Scale. ET patients were divided into two groups: patients with cerebellar signs (cerebellar-ET) and those without (classic-ET). MRI volumetry was performed using CIVET pipeline software. Data on whole and segmented cerebellar volumes were analyzed using SPSS. While there was a trend for whole cerebellar volume to decrease from controls to classic-ET to cerebellar-ET, this trend was not significant. The volume of several contiguous segments of the cerebellar vermis was reduced in ET patients versus controls. Furthermore, these vermis volumes were reduced in the cerebellar-ET group versus the classic-ET group. The volume of several adjacent segments of the cerebellar vermis was reduced in ET. This effect was more evident in ET patients with clinical signs of cerebellar dysfunction. The presence of tissue atrophy suggests that ET might be a neurodegenerative disease. PMID:26062905

  7. Integrated multidisciplinary analysis of segmented reflector telescopes

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.; Needels, Laura

    1992-01-01

    The present multidisciplinary telescope-analysis approach, which encompasses thermal, structural, control and optical considerations, is illustrated for the case of an IR telescope in LEO; attention is given to end-to-end evaluations of the effects of mechanical disturbances and thermal gradients in measures of optical performance. Both geometric ray-tracing and surface-to-surface diffraction approximations are used in the telescope's optical model. Also noted is the role played by NASA-JPL's Integrated Modeling of Advanced Optical Systems computation tool, in view of numerical samples.

  8. Analysis of recent segmental duplications in the bovine genome

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Duplicated sequences are an important source of gene innovation and structural variation within mammalian genomes. We describe the first systematic and genome-wide analysis of segmental duplications in the modern domesticated cattle (Bos taurus). Using two distinct computational analyses, we estimat...

  9. 3D surface analysis and classification in neuroimaging segmentation.

    PubMed

    Zagar, Martin; Mlinarić, Hrvoje; Knezović, Josip

    2011-06-01

This work emphasizes new algorithms for 3D edge and corner detection used in surface extraction, and a new concept of image segmentation in neuroimaging based on multidimensional shape analysis and classification. We propose using the NIfTI standard to describe input data, which enables interoperability with, and enhancement of, existing computing tools widely used in neuroimaging research. In the methods section we present our newly developed algorithm for 3D edge and corner detection, together with an algorithm for estimating local 3D shape. The surface of the estimated shape is analyzed and segmented according to kernel shapes. PMID:21755723

  10. Fingerprint image segmentation based on multi-features histogram analysis

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Zhang, Youguang

    2007-11-01

An effective fingerprint image segmentation method based on multi-feature histogram analysis is presented. We extract a new feature and combine it with three existing features to segment fingerprints. Two of the four features are reciprocals of each other, and each is related to one of the remaining two, so the features are divided into two groups. The histograms of these two features are calculated to determine which feature group should be used to segment a given fingerprint. The features can also divide fingerprints into high- and low-quality classes. Experimental results show that our algorithm classifies foreground and background effectively at a lower computational cost, reduces the number of pseudo-minutiae detected, and improves the performance of AFIS.

  11. Three-dimensional segmentation of pulmonary artery volume from thoracic computed tomography imaging

    NASA Astrophysics Data System (ADS)

    Lindenmaier, Tamas J.; Sheikh, Khadija; Bluemke, Emma; Gyacskov, Igor; Mura, Marco; Licskai, Christopher; Mielniczuk, Lisa; Fenster, Aaron; Cunningham, Ian A.; Parraga, Grace

    2015-03-01

Chronic obstructive pulmonary disease (COPD) is a major contributor to hospitalization and healthcare costs in North America. While the hallmark of COPD is airflow limitation, it is also associated with abnormalities of the cardiovascular system. Enlargement of the pulmonary artery (PA) is a morphological marker of pulmonary hypertension, and was previously shown to predict acute exacerbations using a one-dimensional diameter measurement of the main PA. We hypothesized that a three-dimensional (3D) quantification of PA size would be more sensitive than 1D methods and encompass morphological changes along the entire central pulmonary artery. Hence, we developed a 3D measurement of the main (MPA), left (LPA) and right (RPA) pulmonary arteries as well as total PA volume (TPAV) from thoracic CT images. This approach incorporates segmentation of pulmonary vessels in cross-section for the MPA, LPA and RPA to provide an estimate of their volumes. Three observers performed five repeated measurements for 15 ex-smokers with ≥10 pack-years, randomly selected from a larger dataset of 199 patients. There was strong agreement (r2=0.76) between PA volume and PA diameter measurements, the latter serving as the gold standard. Observer measurements were strongly correlated, and coefficients of variation for observer 1 (MPA:2%, LPA:3%, RPA:2%, TPAV:2%) were not significantly different from those of observers 2 and 3. In conclusion, we generated manual 3D pulmonary artery volume measurements from thoracic CT images that can be performed with high reproducibility. Future work will involve automation for implementation in clinical workflows.

  12. Salted and preserved duck eggs: a consumer market segmentation analysis.

    PubMed

    Arthur, Jennifer; Wiseman, Kelleen; Cheng, K M

    2015-08-01

The combination of increasing ethnic diversity in North America and growing consumer support for local food products may present opportunities for local producers and processors in the ethnic foods product category. Our study examined the ethnic Chinese (pop. 402,000) market for salted and preserved duck eggs in Vancouver, British Columbia (BC), Canada. The objective of the study was to develop a segmentation model using survey data to categorize consumer groups based on their attitudes and the importance they placed on product attributes. We further used post-segmentation acculturation score, demographics and buyer behaviors to define these groups. Data were gathered via a survey of randomly selected Vancouver households with Chinese surnames (n = 410), targeting the adult responsible for grocery shopping. Results from principal component analysis and a 2-step cluster analysis suggest the existence of 4 market segments, described as Enthusiasts, Potentialists, Pragmatists, Health Skeptics (salted duck eggs), and Neutralists (preserved duck eggs). Kruskal-Wallis tests and post hoc Mann-Whitney tests found significant differences between segments in terms of attitudes and the importance placed on product characteristics. Health Skeptics, preserved egg Potentialists, and Pragmatists of both egg products were significantly biased against Chinese imports compared to others. Except for Enthusiasts, segments disagreed that eggs are 'Healthy Products'. Preserved egg Enthusiasts had a significantly lower acculturation score (AS) compared to all others, while salted egg Enthusiasts had a lower AS compared to Health Skeptics. All segments rated "produced in BC, not mainland China" products in the "neutral to very likely" range for increasing their satisfaction with the eggs. Results also indicate that buyers of each egg type are willing to pay an average premium of at least 10% more for BC produced products versus imports, with all other characteristics equal. Overall
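The abstract's pipeline (principal component analysis followed by SPSS's 2-step cluster analysis) is proprietary, but the core idea of grouping respondents by attribute scores can be sketched with a plain k-means pass. This is an illustrative stand-in, not the authors' method; the toy attitude scores below are invented.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain k-means: an illustrative stand-in for PCA + 2-step clustering."""
    centers = X[:k].copy()  # deterministic init: first k respondents
    for _ in range(iters):
        # assign each respondent to the nearest segment center
        labels = np.linalg.norm(X[:, None] - centers[None], axis=-1).argmin(axis=1)
        for j in range(k):  # move centers to segment means
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy attitude scores (two attributes) for two clearly separated respondent groups
X = np.array([[1.0, 1.0], [1.2, 0.9], [0.9, 1.1],
              [5.0, 5.0], [5.1, 4.9], [4.8, 5.2]])
labels, centers = kmeans(X, 2)
print(labels)  # first three respondents share one segment, last three the other
```

In practice one would standardize the survey items and choose k via a fit criterion, as the study's 2-step procedure does internally.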

  13. Small rural hospitals: an example of market segmentation analysis.

    PubMed

    Mainous, A G; Shelby, R L

    1991-01-01

    In recent years, market segmentation analysis has shown increased popularity among health care marketers, although marketers tend to focus upon hospitals as sellers. The present analysis suggests that there is merit to viewing hospitals as a market of consumers. Employing a random sample of 741 small rural hospitals, the present investigation sought to determine, through the use of segmentation analysis, the variables associated with hospital success (occupancy). The results of a discriminant analysis yielded a model which classifies hospitals with a high degree of predictive accuracy. Successful hospitals have more beds and employees, and are generally larger and have more resources. However, there was no significant relationship between organizational success and number of services offered by the institution. PMID:10111266

  14. Documented Safety Analysis for the B695 Segment

    SciTech Connect

    Laycak, D

    2008-09-11

    This Documented Safety Analysis (DSA) was prepared for the Lawrence Livermore National Laboratory (LLNL) Building 695 (B695) Segment of the Decontamination and Waste Treatment Facility (DWTF). The report provides comprehensive information on design and operations, including safety programs and safety structures, systems and components to address the potential process-related hazards, natural phenomena, and external hazards that can affect the public, facility workers, and the environment. Consideration is given to all modes of operation, including the potential for both equipment failure and human error. The facilities known collectively as the DWTF are used by LLNL's Radioactive and Hazardous Waste Management (RHWM) Division to store and treat regulated wastes generated at LLNL. RHWM generally processes low-level radioactive waste with no, or extremely low, concentrations of transuranics (e.g., much less than 100 nCi/g). Wastes processed often contain only depleted uranium and beta- and gamma-emitting nuclides, e.g., {sup 90}Sr, {sup 137}Cs, or {sup 3}H. The mission of the B695 Segment centers on container storage, lab-packing, repacking, overpacking, bulking, sampling, waste transfer, and waste treatment. The B695 Segment is used for storage of radioactive waste (including transuranic and low-level), hazardous, nonhazardous, mixed, and other waste. Storage of hazardous and mixed waste in B695 Segment facilities is in compliance with the Resource Conservation and Recovery Act (RCRA). LLNL is operated by the Lawrence Livermore National Security, LLC, for the Department of Energy (DOE). The B695 Segment is operated by the RHWM Division of LLNL. Many operations in the B695 Segment are performed under a Resource Conservation and Recovery Act (RCRA) operation plan, similar to commercial treatment operations with best demonstrated available technologies. 
The buildings of the B695 Segment were designed and built considering such operations, using proven building systems.

  15. Extracellular and intracellular volume variations during postural change measured by segmental and wrist-ankle bioimpedance spectroscopy.

    PubMed

    Fenech, Marianne; Jaffrin, Michel Y

    2004-01-01

Extracellular (ECW) and intracellular (ICW) volumes were measured using both segmental and wrist-ankle (W-A) bioimpedance spectroscopy (5-1000 kHz) in 15 healthy subjects (7 men, 8 women). In the first protocol, the subject, after sitting for 30 min, lay supine for at least 30 min. In the second protocol, the subject, who had been supine for 1 hr, sat up in bed for 10 min and returned to the supine position for another hour. Segmental ECW and ICW resistances of legs, arms and trunk were measured by placing four voltage electrodes on the wrist, shoulder, top of thigh and ankle and using Hanai's conductivity theory. W-A resistances were found to be very close to the sum of segmental resistances. When switching from sitting to supine (protocol 1), the mean ECW leg resistance increased by 18.2%, and that of the arm and W-A by 12.4%. Trunk resistance also increased, by 4.8%, but not significantly. Corresponding increases in ICW resistance were smaller for legs (3.7%) and arm (-0.7%) but larger for the trunk (21.4%). Total body ECW volumes from segmental measurements were in good agreement with the W-A and Watson anthropomorphic correlations. The decrease in total ECW volume when supine calculated from segmental resistances was 0.79 l, less than the W-A value of 1.12 l. Total ICW volume reductions were 3.4% (segmental) and 3.8% (W-A). Tests of protocol 2 confirmed that resistance and fluid volume values were not affected by a temporary position change. PMID:14723506
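The observation that wrist-ankle resistance is close to the sum of segmental resistances follows from treating arm, trunk, and leg as conductors in series. A minimal sketch with invented resistance values, chosen only to mirror the reported percentage changes (these are not the study's data):

```python
def series_resistance(segments):
    """Total resistance of body segments measured electrically in series (ohms)."""
    return sum(segments.values())

def percent_change(before, after):
    """Percent change of a resistance between two postures."""
    return 100.0 * (after - before) / before

# Hypothetical segmental ECW resistances (ohms), sitting vs supine
sitting = {"leg": 250.0, "arm": 300.0, "trunk": 60.0}
supine = {"leg": 295.5, "arm": 337.2, "trunk": 62.9}

wa_sitting = series_resistance(sitting)  # wrist-ankle ~ sum of segments
wa_supine = series_resistance(supine)
print(percent_change(sitting["leg"], supine["leg"]))  # 18.2 (% rise, as for legs)
```

Fluid volumes then follow from each resistance via Hanai mixture theory, which the sketch deliberately omits.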

  16. Accurate airway segmentation based on intensity structure analysis and graph-cut

    NASA Astrophysics Data System (ADS)

Meng, Qier; Kitasaka, Takayuki; Nimura, Yukitaka; Oda, Masahiro; Mori, Kensaku

    2016-03-01

This paper presents a novel airway segmentation method based on intensity structure analysis and graph-cut. Airway segmentation is an important step in analyzing chest CT volumes for computerized lung cancer detection, emphysema diagnosis, asthma diagnosis, and pre- and intra-operative bronchoscope navigation. However, obtaining a complete 3-D airway tree structure from a CT volume is quite challenging. Several researchers have proposed automated algorithms based on region growing and machine-learning techniques; however, these methods fail to detect the peripheral bronchial branches and cause a large amount of leakage. This paper presents a novel approach that permits more accurate extraction of the complex bronchial airway region. Our method is composed of three steps. First, Hessian analysis is utilized to enhance line-like structures in the CT volume, and a multiscale cavity-enhancement filter is then employed to detect cavity-like structures in the enhanced result. Second, a support vector machine (SVM) classifier is constructed to remove the false-positive (FP) regions generated. Finally, the graph-cut algorithm connects all of the candidate voxels to form an integrated airway tree. We applied this method to sixteen 3D chest CT volumes. The results showed that the branch detection rate of this method can reach about 77.7% without leaking into the lung parenchyma.
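The first step, Hessian-based enhancement of line-like structure, rests on the fact that a bright tube has two strongly negative Hessian eigenvalues (across the tube) and one near zero (along it). A simplified, single-scale illustration on a synthetic volume (not the authors' multiscale filter):

```python
import numpy as np

def hessian_eigenvalues(vol):
    """Per-voxel eigenvalues of the 3D image Hessian, sorted by magnitude."""
    grads = np.gradient(vol)  # first derivatives along each axis
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        second = np.gradient(grads[i])  # second derivatives of grads[i]
        for j in range(3):
            H[..., i, j] = second[j]
    eig = np.linalg.eigvalsh(H)  # batched symmetric eigenvalues
    order = np.argsort(np.abs(eig), axis=-1)
    return np.take_along_axis(eig, order, axis=-1)  # |l1| <= |l2| <= |l3|

def line_measure(vol):
    """Response for bright tubes: l2, l3 strongly negative, l1 near zero."""
    lam = hessian_eigenvalues(vol)
    l2, l3 = lam[..., 1], lam[..., 2]
    return np.where((l2 < 0) & (l3 < 0), np.sqrt(np.abs(l2 * l3)), 0.0)

# Synthetic volume: a bright Gaussian tube running along the z-axis
z, y, x = np.mgrid[0:16, 0:16, 0:16]
vol = np.exp(-((y - 8.0) ** 2 + (x - 8.0) ** 2) / 4.0)
resp = line_measure(vol)
print(resp[8, 8, 8] > resp[8, 1, 1])  # strongest response on the tube axis
```

A multiscale version would smooth at several Gaussian widths and take the maximum response, as Frangi-style filters do.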

  17. An automatic method of brain tumor segmentation from MRI volume based on the symmetry of brain and level set method

    NASA Astrophysics Data System (ADS)

    Li, Xiaobing; Qiu, Tianshuang; Lebonvallet, Stephane; Ruan, Su

    2010-02-01

This paper presents a brain tumor segmentation method which automatically segments tumors from human brain MRI volumes. The presented model is based on the symmetry of the human brain and the level set method. First, the midsagittal plane of an MRI volume is located, the slices potentially containing tumor are identified according to their symmetry, and an initial boundary of the tumor is determined, using watershed and morphological algorithms, in the slice in which the tumor is largest. Second, the level set method is applied to the initial boundary to drive the curve's evolution and stop it at the appropriate tumor boundary. Lastly, the tumor boundary is projected slice by slice onto its neighbors as initial boundaries throughout the volume to segment the whole tumor. The experimental results were compared with an expert's manual tracing and show relatively good agreement.
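The symmetry check that flags candidate tumor slices can be illustrated with a left-right flip: a healthy axial slice is roughly mirror-symmetric about the midsagittal plane, while a unilateral lesion raises the asymmetry score. A toy sketch, assuming the midsagittal plane coincides with the image midline (in practice it must first be located, as the abstract notes):

```python
import numpy as np

def asymmetry_score(slice2d):
    """Mean absolute left-right intensity difference across the image midline."""
    flipped = slice2d[:, ::-1]
    return float(np.mean(np.abs(slice2d - flipped)))

healthy = np.ones((8, 8))          # perfectly symmetric toy slice
tumor = healthy.copy()
tumor[2:5, 0:3] += 2.0             # bright unilateral lesion breaks symmetry
print(asymmetry_score(healthy))    # 0.0
print(asymmetry_score(tumor))      # > 0, slice flagged for tumor search
```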

  18. Influence of cold walls on PET image quantification and volume segmentation: A phantom study

    SciTech Connect

    Berthon, B.; Marshall, C.; Edwards, A.; Spezi, E.; Evans, M.

    2013-08-15

Purpose: Commercially available fillable plastic inserts used in positron emission tomography phantoms usually have thick plastic walls, separating their content from the background activity. These “cold” walls can modify the intensity values of neighboring active regions due to the partial volume effect, resulting in errors in the estimation of standardized uptake values. Numerous papers suggest that this is an issue for phantom work simulating tumor tissue, quality control, and calibration work. This study aims to investigate the influence of cold plastic wall thickness on the quantification of 18F-fluorodeoxyglucose, on image activity recovery, and on the performance of advanced automatic segmentation algorithms for the delineation of active regions delimited by plastic walls. Methods: A commercial set of six spheres of different diameters was replicated using a manufacturing technique which achieves a reduction in plastic wall thickness of up to 90%, while keeping the same internal volume. Both sets of thin- and thick-wall inserts were imaged simultaneously in a custom phantom for six different tumor-to-background ratios (TBRs). Intensity values were compared in terms of the mean and maximum standardized uptake values (SUVs) in the spheres and the mean SUV of the hottest 1 ml region (SUVmean, SUVmax, and SUVpeak). The recovery coefficient (RC) was also derived for each sphere. The results were compared against the values predicted by a theoretical model of the PET-intensity profiles for the same TBRs, sphere sizes, and wall thicknesses. In addition, ten automatic segmentation methods, written in house, were applied to both thin- and thick-wall inserts. The contours obtained were compared to a computed tomography derived gold standard (“ground truth”), using five different accuracy metrics. Results: The authors' results showed that thin-wall inserts achieved significantly higher SUVmean, SUVmax, and RC
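The quantities compared in this study follow standard definitions: SUV normalizes tissue activity concentration by injected dose per unit body weight, and the recovery coefficient (RC) is the ratio of measured to true activity concentration (RC < 1 reflecting partial-volume loss). A sketch with hypothetical numbers, not values from the phantom experiments:

```python
def suv(activity_conc_bqml, injected_dose_bq, body_weight_g):
    """Standardized uptake value: tissue concentration divided by injected
    dose per unit body weight (assuming ~1 g/ml tissue density)."""
    return activity_conc_bqml / (injected_dose_bq / body_weight_g)

def recovery_coefficient(measured_conc, true_conc):
    """Fraction of the true activity concentration recovered in the image."""
    return measured_conc / true_conc

# Hypothetical sphere: true 20 kBq/ml, image reports only 14 kBq/ml
print(suv(14e3, 200e6, 70e3))            # 4.9
print(recovery_coefficient(14e3, 20e3))  # 0.7
```

Thinner cold walls reduce partial-volume spill-out into inactive plastic, which is why the thin-wall inserts show higher SUV and RC.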

  19. Automatic comic page image understanding based on edge segment analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

    Comic page image understanding aims to analyse the layout of the comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique to produce the digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method is performed on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms the existing methods.

  20. Segmented infrared image analysis for rotating machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Duan, Lixiang; Yao, Mingchao; Wang, Jinjiang; Bai, Tangbo; Zhang, Laibin

    2016-07-01

As a noncontact and non-intrusive technique, infrared image analysis is promising for machinery defect diagnosis. However, the limited informative content and strong noise of infrared images restrict its performance. To address this issue, this paper presents an image segmentation approach to enhance feature extraction in infrared image analysis. A region selection criterion named the dispersion degree is also formulated to discriminate fault-representative regions from unrelated background information. Feature extraction and fusion methods are then applied to obtain features from the selected regions for further diagnosis. Experimental studies on a rotor fault simulator demonstrate that the presented segmented feature enhancement approach outperforms analysis of the original image using both a Naïve Bayes classifier and a support vector machine.

  1. Education, Work and Employment--Volume II. Segmented Labour Markets, Workplace Democracy and Educational Planning, Education and Self-Employment.

    ERIC Educational Resources Information Center

    Carnoy, Martin; And Others

    This volume contains three studies covering separate yet complementary aspects of the problem of the relationships between the educational system and the production system as manpower user. The first monograph on the theories of the markets seeks to answer two questions: what can be learned from the work done on the segmentation of the labor…

  2. A method for avoiding overlap of left and right lungs in shape model guided segmentation of lungs in CT volumes

    PubMed Central

    Gill, Gurman; Bauer, Christian; Beichel, Reinhard R.

    2014-01-01

    Purpose: The automated correct segmentation of left and right lungs is a nontrivial problem, because the tissue layer between both lungs can be quite thin. In the case of lung segmentation with left and right lung models, overlapping segmentations can occur. In this paper, the authors address this issue and propose a solution for a model-based lung segmentation method. Methods: The thin tissue layer between left and right lungs is detected by means of a classification approach and utilized to selectively modify the cost function of the lung segmentation method. The approach was evaluated on a diverse set of 212 CT scans of normal and diseased lungs. Performance was assessed by utilizing an independent reference standard and by means of comparison to the standard segmentation method without overlap avoidance. Results: For cases where the standard approach produced overlapping segmentations, the proposed method significantly (p = 1.65 × 10−9) reduced the overlap by 97.13% on average (median: 99.96%). In addition, segmentation accuracy assessed with the Dice coefficient showed a statistically significant improvement (p = 7.5 × 10−5) and was 0.9845 ± 0.0111. For cases where the standard approach did not produce an overlap, performance of the proposed method was not found to be significantly different. Conclusions: The proposed method improves the quality of the lung segmentations, which is important for subsequent quantitative analysis steps. PMID:25281960
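The two quantities reported, overlap between left and right lung masks and the Dice coefficient, can be computed directly from binary masks. A minimal sketch on a toy 2-D example; the overlap-removal step here is a naive subtraction, not the authors' classification-based cost-function modification:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def overlap_voxels(left, right):
    """Number of voxels claimed by both lung masks."""
    return int(np.logical_and(left, right).sum())

left = np.zeros((4, 8), bool)
left[:, :5] = True      # left-lung mask spills past the midline
right = np.zeros((4, 8), bool)
right[:, 4:] = True     # right-lung mask

print(overlap_voxels(left, right))  # 4 voxels in the thin layer between lungs
fixed_left = np.logical_and(left, np.logical_not(right))  # naive overlap removal
print(overlap_voxels(fixed_left, right))  # 0 after removal
```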

  3. Analysis of recent segmental duplications in the bovine genome

    PubMed Central

    2009-01-01

Background Duplicated sequences are an important source of gene innovation and structural variation within mammalian genomes. We performed the first systematic and genome-wide analysis of segmental duplications in the modern domesticated cattle (Bos taurus). Using two distinct computational analyses, we estimated that 3.1% (94.4 Mb) of the bovine genome consists of recently duplicated sequences (≥ 1 kb in length, ≥ 90% sequence identity). Similar to other mammalian draft assemblies, almost half (47% of 94.4 Mb) of these sequences have not been assigned to cattle chromosomes. Results In this study, we provide the first experimental validation of large duplications and briefly compare their distribution on two independent bovine genome assemblies using fluorescent in situ hybridization (FISH). Our analyses suggest that the majority (75-90%) of segmental duplications are organized into local tandem duplication clusters. Along with rodents and carnivores, these results now confidently establish tandem duplications as the most likely archetypal mammalian organization, in contrast to humans and great ape species, which show a preponderance of interspersed duplications. A cross-species survey of duplicated genes and gene families indicated that duplication, positive selection and gene conversion have shaped primates, rodents, carnivores and ruminants to different degrees for their speciation and adaptation. We found that bovine segmental duplications corresponding to genes are significantly enriched for specific biological functions such as immunity, digestion, lactation and reproduction. Conclusion Our results suggest that in most mammalian lineages segmental duplications are organized in a tandem configuration. Segmental duplications remain problematic for genome assembly, and we highlight genic regions that require higher-quality sequence characterization. This study provides insights into mammalian genome evolution and generates a valuable resource for cattle

  4. Study of tracking and data acquisition system for the 1990's. Volume 4: TDAS space segment architecture

    NASA Technical Reports Server (NTRS)

    Orr, R. S.

    1984-01-01

Tracking and data acquisition system (TDAS) requirements, TDAS architectural goals, enhanced TDAS subsystems, constellation and networking options, TDAS spacecraft options, crosslink implementation, baseline TDAS space segment architecture, and threat model development/security analysis are addressed.

  5. High volume data storage architecture analysis

    NASA Technical Reports Server (NTRS)

    Malik, James M.

    1990-01-01

    A High Volume Data Storage Architecture Analysis was conducted. The results, presented in this report, will be applied to problems of high volume data requirements such as those anticipated for the Space Station Control Center. High volume data storage systems at several different sites were analyzed for archive capacity, storage hierarchy and migration philosophy, and retrieval capabilities. Proposed architectures were solicited from the sites selected for in-depth analysis. Model architectures for a hypothetical data archiving system, for a high speed file server, and for high volume data storage are attached.

  6. Three dimensional level set based semiautomatic segmentation of atherosclerotic carotid artery wall volume using 3D ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Hossain, Md. Murad; AlMuhanna, Khalid; Zhao, Limin; Lal, Brajesh K.; Sikdar, Siddhartha

    2014-03-01

3D segmentation of carotid plaque from ultrasound (US) images is challenging due to image artifacts and poor boundary definition. Semiautomatic segmentation algorithms for calculating vessel wall volume (VWV) have been proposed for the common carotid artery (CCA) but have not been applied to plaques in the internal carotid artery (ICA). In this work, we describe a 3D segmentation algorithm that is robust to shadowing and missing boundaries. Our algorithm uses a distance-regularized level set method with edge- and region-based energy to segment the adventitial wall boundary (AWB) and lumen-intima boundary (LIB) of plaques in the CCA, ICA and external carotid artery (ECA). The algorithm is initialized by manually placing points on the boundary of a subset of transverse slices with an interslice distance of 4 mm. We propose a novel user-defined stopping-surface-based energy to prevent leaking of the evolving surface across poorly defined boundaries. Validation was performed against manual segmentation using 3D US volumes acquired from five asymptomatic patients with carotid stenosis using a linear 4D probe. A pseudo gold-standard boundary was formed from manual segmentation by three observers. The Dice similarity coefficient (DSC), Hausdorff distance (HD) and modified HD (MHD) were used to compare the algorithm results against the pseudo gold standard on 1205 cross-sectional slices of five 3D US image sets. The algorithm showed good agreement with the pseudo gold-standard boundary, with mean DSC of 93.3% (AWB) and 89.82% (LIB); mean MHD of 0.34 mm (AWB) and 0.24 mm (LIB); mean HD of 1.27 mm (AWB) and 0.72 mm (LIB). The proposed 3D semiautomatic segmentation is the first step towards full characterization of 3D plaque progression and longitudinal monitoring.
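The boundary metrics used for validation follow from their definitions: the Hausdorff distance is the worst-case closest-point distance between two boundaries, while the modified HD (in the common Dubuisson-Jain form assumed here) averages the closest-point distances and is less outlier-sensitive. A sketch on small illustrative point sets:

```python
import numpy as np

def directed_distances(A, B):
    """For each point in A, distance to the closest point in B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return d.min(axis=1)

def hausdorff(A, B):
    """Symmetric Hausdorff distance: worst-case boundary mismatch."""
    return max(directed_distances(A, B).max(), directed_distances(B, A).max())

def modified_hausdorff(A, B):
    """Dubuisson-Jain MHD: max of the two mean directed distances."""
    return max(directed_distances(A, B).mean(), directed_distances(B, A).mean())

A = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 4.0]])
print(hausdorff(A, B))           # 4.0, dominated by the single outlier point
print(modified_hausdorff(A, B))  # 2.0, the averaged mismatch
```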

  7. An analysis of segmentation dynamics throughout embryogenesis in the centipede Strigamia maritima

    PubMed Central

    2013-01-01

    Background Most segmented animals add segments sequentially as the animal grows. In vertebrates, segment patterning depends on oscillations of gene expression coordinated as travelling waves in the posterior, unsegmented mesoderm. Recently, waves of segmentation gene expression have been clearly documented in insects. However, it remains unclear whether cyclic gene activity is widespread across arthropods, and possibly ancestral among segmented animals. Previous studies have suggested that a segmentation oscillator may exist in Strigamia, an arthropod only distantly related to insects, but further evidence is needed to document this. Results Using the genes even skipped and Delta as representative of genes involved in segment patterning in insects and in vertebrates, respectively, we have carried out a detailed analysis of the spatio-temporal dynamics of gene expression throughout the process of segment patterning in Strigamia. We show that a segmentation clock is involved in segment formation: most segments are generated by cycles of dynamic gene activity that generate a pattern of double segment periodicity, which is only later resolved to the definitive single segment pattern. However, not all segments are generated by this process. The most posterior segments are added individually from a localized sub-terminal area of the embryo, without prior pair-rule patterning. Conclusions Our data suggest that dynamic patterning of gene expression may be widespread among the arthropods, but that a single network of segmentation genes can generate either oscillatory behavior at pair-rule periodicity or direct single segment patterning, at different stages of embryogenesis. PMID:24289308

  8. Non-invasive measurement of choroidal volume change and ocular rigidity through automated segmentation of high-speed OCT imaging

    PubMed Central

    Beaton, L.; Mazzaferri, J.; Lalonde, F.; Hidalgo-Aguirre, M.; Descovich, D.; Lesk, M. R.; Costantino, S.

    2015-01-01

We have developed a novel optical approach to determine pulsatile ocular volume changes using automated segmentation of the choroid, which, together with Dynamic Contour Tonometry (DCT) measurements of intraocular pressure (IOP), allows estimation of the ocular rigidity (OR) coefficient. Spectral Domain Optical Coherence Tomography (OCT) videos were acquired with Enhanced Depth Imaging (EDI) at 7 Hz for ~50 seconds at the fundus. A novel segmentation algorithm based on graph search with an edge-probability weighting scheme was developed to measure choroidal thickness (CT) at each frame. Global ocular volume fluctuations were derived from frame-to-frame CT variations using an approximate eye model. Immediately after imaging, IOP and ocular pulse amplitude (OPA) were measured using DCT. OR was calculated from these peak pressure and volume changes. Our automated segmentation algorithm provides the first non-invasive method for determining ocular volume change due to pulsatile choroidal filling, and for estimating the OR constant. Future applications of this method offer an important avenue to understanding the biomechanical basis of ocular pathophysiology. PMID:26137373
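The OR coefficient can be estimated from the paired pressure and volume pulses via Friedenwald's pressure-volume relation, ln(P2/P1) = K * dV. A sketch with hypothetical values; the paper's exact formulation may differ:

```python
import math

def ocular_rigidity(iop_baseline_mmhg, iop_peak_mmhg, delta_volume_ul):
    """Friedenwald's relation ln(P2/P1) = K * dV, solved for the rigidity
    coefficient K (per microliter). Inputs are illustrative, not study data."""
    return math.log(iop_peak_mmhg / iop_baseline_mmhg) / delta_volume_ul

# Hypothetical pulse: OPA raises IOP from 15.0 to 17.5 mmHg for a 7 ul volume pulse
print(ocular_rigidity(15.0, 17.5, 7.0))  # ~0.022 per ul, a physiologically plausible K
```

Here P1 is the diastolic IOP, P2 = P1 + OPA, and dV is the choroidal volume pulse derived from the segmented CT variations.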

  9. Fully Automated Renal Tissue Volumetry in MR Volume Data Using Prior-Shape-Based Segmentation in Subject-Specific Probability Maps.

    PubMed

    Gloger, Oliver; Tönnies, Klaus; Laqua, Rene; Völzke, Henry

    2015-10-01

Organ segmentation in magnetic resonance (MR) volume data is of increasing interest in epidemiological studies and clinical practice. Especially in large-scale population-based studies, organ volumetry is highly relevant, requiring exact organ segmentation. Since manual segmentation is time consuming and prone to reader variability, large-scale studies need automatic methods to perform organ segmentation. In this paper, we present an automated framework for renal tissue segmentation that computes renal parenchyma, cortex, and medulla volumetry in native MR volume data without any user interaction. We introduce a novel strategy of subject-specific probability map computation for renal tissue types, which takes inter- and intra-MR-intensity variability into account. Several kinds of tissue-related 2-D and 3-D prior-shape knowledge are incorporated in modularized framework parts to segment renal parenchyma in a final level set segmentation strategy. Subject-specific probabilities for medulla and cortex tissue are applied in a fuzzy clustering technique to delineate cortex and medulla tissue inside segmented parenchyma regions. The novel subject-specific computation approach provides clearly better tissue probability map quality than existing methods, and the framework provides improved results for parenchyma segmentation. Furthermore, cortex and medulla segmentation qualities are very promising but cannot be compared to existing methods, since state-of-the-art methods for automated cortex and medulla segmentation in native MR volume data are still missing. PMID:25915954
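The cortex/medulla delineation step uses a fuzzy clustering technique; a minimal fuzzy c-means on 1-D intensities illustrates the idea of soft memberships instead of hard labels. This is a generic FCM sketch on invented intensities, not the authors' subject-specific-probability variant:

```python
import numpy as np

def fuzzy_cmeans(x, k=2, m=2.0, iters=50):
    """Minimal fuzzy c-means on 1-D data: returns soft memberships u (n x k)
    and cluster centers. m > 1 controls the fuzziness of the partition."""
    centers = np.linspace(x.min(), x.max(), k)  # deterministic init
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9   # avoid divide-by-zero
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)           # membership update
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)  # center update
    return u, centers

# Toy voxel intensities from two tissue classes
x = np.array([1.0, 1.1, 0.9, 5.0, 5.1, 4.9])
u, centers = fuzzy_cmeans(x)
labels = u.argmax(axis=1)  # harden memberships for display
print(labels)              # first three voxels in one class, last three in the other
```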

  10. Three-dimensional choroidal segmentation in spectral OCT volumes using optic disc prior information

    NASA Astrophysics Data System (ADS)

    Hu, Zhihong; Girkin, Christopher A.; Hariri, Amirhossein; Sadda, SriniVas R.

    2016-03-01

Recently, much attention has been focused on determining the role of the peripapillary choroid - the layer between the outer retinal pigment epithelium (RPE)/Bruch's membrane (BM) and the choroid-sclera (C-S) junction - and whether that role is primary or secondary in the pathogenesis of glaucoma. However, automated choroidal segmentation in spectral-domain optical coherence tomography (SD-OCT) images of the optic nerve head (ONH) has not been reported, probably because the presence of the BM opening (BMO, corresponding to the optic disc) can deflect the choroidal segmentation from its correct position. The purpose of this study is to develop a 3D graph-based approach to identify the 3D choroidal layer in ONH-centered SD-OCT images using BMO prior information. More specifically, an initial 3D choroidal segmentation was first performed using the 3D graph search algorithm, with varying surface interaction constraints applied based on a choroidal morphological model. To assist the choroidal segmentation, two other surfaces, the internal limiting membrane and the inner-outer segment junction, were also segmented. Based on the segmented layer between the RPE/BM and the C-S junction, a 2D projection map was created. The BMO in the projection map was detected by a 2D graph search. The pre-defined BMO information was then incorporated into the surface interaction constraints of the 3D graph search to obtain a more accurate choroidal segmentation. Twenty SD-OCT images from 20 healthy subjects were used. The mean differences of the choroidal borders between the algorithm and manual segmentation were at a sub-voxel level, indicating a high level of segmentation accuracy.
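Graph-search layer segmentation finds a minimum-cost surface subject to smoothness constraints between neighboring columns. A 2-D dynamic-programming analogue (one row index per column, with a bounded jump between adjacent columns) sketches the idea; the paper's 3-D graph search with varying surface interaction constraints is more general:

```python
import numpy as np

def segment_surface(cost, max_jump=1):
    """Minimum-cost boundary through a 2D cost image: one row per column,
    with |row[c+1] - row[c]| <= max_jump as the smoothness constraint."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()          # accumulated path costs
    back = np.zeros((rows, cols), int)       # backpointers for path recovery
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = acc[lo:hi, c - 1]
            k = int(prev.argmin())
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    surface = np.empty(cols, int)
    surface[-1] = int(acc[:, -1].argmin())
    for c in range(cols - 1, 0, -1):         # backtrack the optimal path
        surface[c - 1] = back[surface[c], c]
    return surface

# Low cost along row 2, with an isolated outlier the smoothness constraint rejects
cost = np.full((5, 6), 10.0)
cost[2, :] = 1.0
cost[0, 3] = 0.0
print(segment_surface(cost))  # [2 2 2 2 2 2]
```

Stacking such constrained paths across B-scans, with inter-surface constraints added, is the essence of the 3-D graph search used here.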

  11. Segmental chloride and fluid handling during correction of chloride-depletion alkalosis without volume expansion in the rat.

    PubMed Central

    Galla, J H; Bonduris, D N; Dumbauld, S L; Luke, R G

    1984-01-01

    To determine whether chloride-depletion metabolic alkalosis (CDA) can be corrected by provision of chloride without volume expansion or intranephronal redistribution of fluid reabsorption, CDA was produced in Sprague-Dawley rats by peritoneal dialysis against 0.15 M NaHCO3; controls (CON) were dialyzed against Ringer's bicarbonate. Animals were infused with isotonic solutions containing the same Cl and total CO2 (tCO2) concentrations as in postdialysis plasma at rates shown to be associated with slight but stable volume contraction. During the subsequent 6 h, serum Cl and tCO2 concentrations remained stable and normal in CON and corrected towards normal in CDA; urinary chloride excretion was less and bicarbonate excretion greater than those in CON during this period. Micropuncture and microinjection studies were performed in the 3rd h after dialysis. Plasma volumes determined by 125I-albumin were not different. Inulin clearance and fractional chloride excretion were lower (P less than 0.05) in CDA. Superficial nephron glomerular filtration rate determined from distal puncture sites was lower (P less than 0.02) in CDA (27.9 +/- 2.3 nl/min) compared with that in CON (37.9 +/- 2.6). Fractional fluid and chloride reabsorption in the proximal convoluted tubule and within the loop segment did not differ. Fractional chloride delivery to the early distal convolution did not differ but that out of this segment was less (P less than 0.01) in group CDA. Urinary recovery of 36Cl injected into the collecting duct segment was lower (P less than 0.01) in CDA (CON 74 +/- 3; CDA 34 +/- 4%). These data show that CDA can be corrected by the provision of chloride without volume expansion or alterations in the intranephronal distribution of fluid reabsorption. Enhanced chloride reabsorption in the collecting duct segment, and possibly in the distal convoluted tubule, contributes importantly to this correction. PMID:6690486

  12. 3-D segmentation of retinal blood vessels in spectral-domain OCT volumes of the optic nerve head

    NASA Astrophysics Data System (ADS)

    Lee, Kyungmoo; Abràmoff, Michael D.; Niemeijer, Meindert; Garvin, Mona K.; Sonka, Milan

    2010-03-01

    Segmentation of retinal blood vessels can provide important information for detecting and tracking retinal vascular diseases including diabetic retinopathy, arterial hypertension, arteriosclerosis and retinopathy of prematurity (ROP). Many studies on 2-D segmentation of retinal blood vessels from a variety of medical images have been performed. However, 3-D segmentation of retinal blood vessels from spectral-domain optical coherence tomography (OCT) volumes, which is capable of providing geometrically accurate vessel models, to the best of our knowledge, has not been previously studied. The purpose of this study is to develop and evaluate a method that can automatically detect 3-D retinal blood vessels from spectral-domain OCT scans centered on the optic nerve head (ONH). The proposed method utilized a fast multiscale 3-D graph search to segment retinal surfaces as well as a triangular mesh-based 3-D graph search to detect retinal blood vessels. An experiment on 30 ONH-centered OCT scans (15 right eye scans and 15 left eye scans) from 15 subjects was performed, and the mean unsigned error in 3-D of the computer segmentations compared with the independent standard obtained from a retinal specialist was 3.4 +/- 2.5 voxels (0.10 +/- 0.07 mm).

  13. Pulse shape analysis and position determination in segmented HPGe detectors: The AGATA detector library

    NASA Astrophysics Data System (ADS)

    Bruyneel, B.; Birkenbach, B.; Reiter, P.

    2016-03-01

    The AGATA Detector Library (ADL) was developed for the calculation of signals from highly segmented, large-volume high-purity germanium (HPGe) detectors. ADL basis sets comprise a large set of calculated position-dependent detector pulse shapes. A basis set is needed for Pulse Shape Analysis (PSA), by means of which the interaction position of a γ-ray inside the active detector volume is determined. Theoretical concepts of the calculations are introduced and cover the relevant aspects of signal formation in HPGe. The approximations and the realization of the computer code with its input parameters are explained in detail. ADL is a versatile and modular computer code; new detectors can be implemented in this library. Measured position resolutions of the AGATA detectors based on ADL are discussed.
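
    At its core, PSA compares a measured pulse against every entry of the basis set and reports the grid position of the best match. A least-squares sketch with invented grid positions and pulse shapes (ADL itself computes the basis; this only illustrates the search):

```python
def locate_by_psa(measured, basis):
    """Return the basis-set position whose simulated pulse best matches
    the measured pulse (smallest sum of squared residuals)."""
    def residual(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(basis, key=lambda pos: residual(measured, basis[pos]))

# hypothetical (x, y, z) grid points mapped to normalized pulse samples
basis = {
    (0, 0, 0): [0.0, 0.2, 0.6, 1.0],
    (5, 0, 0): [0.0, 0.5, 0.9, 1.0],
    (0, 5, 0): [0.0, 0.1, 0.3, 1.0],
}
measured = [0.02, 0.48, 0.88, 1.0]
best = locate_by_psa(measured, basis)  # nearest grid point
```

    Real PSA refines this with adaptive grid searches and signal decomposition for multiple interactions, but the matching principle is the same.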

  14. Automated target recognition technique for image segmentation and scene analysis

    NASA Astrophysics Data System (ADS)

    Baumgart, Chris W.; Ciarcia, Christopher A.

    1994-03-01

    Automated target recognition (ATR) software has been designed to perform image segmentation and scene analysis. Specifically, this software was developed as a package for the Army's Minefield and Reconnaissance and Detector (MIRADOR) program. MIRADOR is an on/off-road, remote-control, multisensor system designed to detect buried and surface-emplaced metallic and nonmetallic antitank mines. The basic requirements for this ATR software were the following: (1) an ability to separate target objects from the background in low signal-to-noise conditions; (2) an ability to handle a relatively high dynamic range in imaging light levels; (3) the ability to compensate for or remove light-source effects such as shadows; and (4) the ability to identify target objects as mines. The image segmentation and target evaluation were performed using an integrated, parallel-processing approach. Three basic techniques (texture analysis, edge enhancement, and contrast enhancement) were used collectively to extract all potential mine target shapes from the basic image. Target evaluation was then performed using a combination of size, geometrical, and fractal characteristics, which resulted in a calculated probability for each target shape. Overall results with this algorithm were quite good, though there is a tradeoff between detection confidence and the number of false alarms. This technology also has applications in the areas of hazardous waste site remediation, archaeology, and law enforcement.

  15. Breast Density Analysis Using an Automatic Density Segmentation Algorithm.

    PubMed

    Oliver, Arnau; Tortajada, Meritxell; Lladó, Xavier; Freixenet, Jordi; Ganau, Sergi; Tortajada, Lidia; Vilagran, Mariona; Sentís, Melcior; Martí, Robert

    2015-10-01

    Breast density is a strong risk factor for breast cancer. In this paper, we present an automated approach for breast density segmentation in mammographic images based on supervised pixel-based classification using textural and morphological features. The objective of the paper is not only to show the feasibility of an automatic algorithm for breast density segmentation but also to prove its potential application to the study of breast density evolution in longitudinal studies. The database used here contains three complete screening examinations, acquired 2 years apart, of 130 different patients. The approach was validated by comparing manual expert annotations with automatically obtained estimations. Transversal analysis of breast density in the craniocaudal (CC) and mediolateral oblique (MLO) views of both breasts acquired in the same study showed a correlation coefficient of ρ = 0.96 between the mammographic density percentages of the left and right breasts, whereas a comparison of the two mammographic views showed a correlation of ρ = 0.95. A longitudinal study of breast density confirmed the trend that the dense tissue percentage decreases over time, although we noticed that the decrease in this percentage depends on the initial amount of breast density. PMID:25720749

  16. Multi-level segment analysis: definition and applications in turbulence

    NASA Astrophysics Data System (ADS)

    Wang, Lipo

    2015-11-01

    The interaction of different scales is among the most interesting and challenging features in turbulence research. Existing approaches to scaling analysis, such as the structure-function and Fourier-spectrum methods, have their respective limitations, for instance scale mixing, i.e. the so-called infrared and ultraviolet effects. For a given function, specifying different window sizes yields different local extremal point sets; this window-size dependence indicates multi-scale statistics. A new method, multi-level segment analysis (MSA), based on local extrema statistics, has been developed. The part of the function between two adjacent extremal points is defined as a segment, which is characterized by its functional difference and scale difference. The structure function can then be derived differently from these characteristic parameters. Test results show that MSA can successfully reveal different scaling regimes in turbulence systems, such as Lagrangian and two-dimensional turbulence, which have remained controversial in turbulence research. In principle MSA can be extended to various other analyses.

  17. A segment interaction analysis of proximal-to-distal sequential segment motion patterns.

    PubMed

    Putnam, C A

    1991-01-01

    The purpose of this study was to examine the motion-dependent interaction between adjacent lower extremity segments during the actions of kicking and the swing phases of running and walking. This was done to help explain the proximal-to-distal sequential pattern of segment motions typically observed in these activities and to evaluate general biomechanical principles used to explain this motion pattern. High speed film data were collected for four subjects performing each skill. Equations were derived which expressed the interaction between segments in terms of resultant joint moments at the hip and knee and several interactive moments which were functions of gravitational forces or kinematic variables. The angular motion-dependent interaction between the thigh and leg was found to play a significant role in determining the sequential segment motion patterns observed in all three activities. The general nature of this interaction was consistent across all three movements except during phases in which there were large differences in the knee angle. Support was found for the principle of summation of segment speeds, whereas no support was found for the principle of summation of force or for general statements concerning the effect of negative thigh acceleration on positive leg acceleration. The roles played by resultant joint moments in producing the observed segment motion sequences are discussed. PMID:1997807

  18. Layout pattern analysis using the Voronoi diagram of line segments

    NASA Astrophysics Data System (ADS)

    Dey, Sandeep Kumar; Cheilaris, Panagiotis; Gabrani, Maria; Papadopoulou, Evanthia

    2016-01-01

    Early identification of problematic patterns in very large scale integration (VLSI) designs is of great value as the lithographic simulation tools face significant timing challenges. To reduce the processing time, such a tool selects only a fraction of possible patterns which have a probable area of failure, with the risk of missing some problematic patterns. We introduce a fast method to automatically extract patterns based on their structure and context, using the Voronoi diagram of line-segments as derived from the edges of VLSI design shapes. Designers put line segments around the problematic locations in patterns called "gauges," along which the critical distance is measured. The gauge center is the midpoint of a gauge. We first use the Voronoi diagram of VLSI shapes to identify possible problematic locations, represented as gauge centers. Then we use the derived locations to extract windows containing the problematic patterns from the design layout. The problematic locations are prioritized by the shape and proximity information of the design polygons. We perform experiments for pattern selection in a portion of a 22-nm random logic design layout. The design layout had 38,584 design polygons (consisting of 199,946 line segments) on layer Mx, and 7079 markers generated by an optical rule checker (ORC) tool. The optical rules specify requirements for printing circuits with minimum dimension. Markers are the locations of some optical rule violations in the layout. We verify our approach by comparing the coverage of our extracted patterns to the ORC-generated markers. We further derive a similarity measure between patterns and between layouts. The similarity measure helps to identify a set of representative gauges that reduces the number of patterns for analysis.

  19. Computer Aided Segmentation Analysis: New Software for College Admissions Marketing.

    ERIC Educational Resources Information Center

    Lay, Robert S.; Maguire, John J.

    1983-01-01

    Compares segmentation solutions obtained using a binary segmentation algorithm (THAID) and a new chi-square-based procedure (CHAID) that segments the prospective pool of college applicants using application and matriculation as criteria. Results showed a higher number of estimated qualified inquiries and more accurate estimates with CHAID. (JAC)
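
    CHAID builds segments by repeatedly testing, with a chi-square statistic, whether candidate categories differ on the criterion (here, application/matriculation) and merging those that do not. A toy sketch of one merge step with invented counts:

```python
def chi2(table):
    """Pearson chi-square statistic for a 2xN contingency table."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    return sum((table[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
               for i in range(len(table)) for j in range(len(table[0])))

# counts of (applied, did-not-apply) per prospect segment -- invented numbers
segments = {"A": [30, 10], "B": [28, 12], "C": [5, 35]}

def best_merge(segs):
    """CHAID-style step: find the pair of categories that are least
    distinguishable (smallest chi-square against each other)."""
    pairs = [(a, b) for a in segs for b in segs if a < b]
    return min(pairs, key=lambda p: chi2([segs[p[0]], segs[p[1]]]))
```

    Here segments A and B behave alike on the criterion, so a CHAID pass would merge them before splitting further; a binary procedure like THAID cannot express such multi-way groupings directly.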

  20. Analysis of Retinal Peripapillary Segmentation in Early Alzheimer's Disease Patients

    PubMed Central

    Salobrar-Garcia, Elena; Hoyas, Irene; Leal, Mercedes; de Hoz, Rosa; Rojas, Blanca; Ramirez, Ana I.; Salazar, Juan J.; Yubero, Raquel; Gil, Pedro; Triviño, Alberto; Ramirez, José M.

    2015-01-01

    Decreased thickness of the retinal nerve fiber layer (RNFL) may reflect retinal neuronal-ganglion cell death. A decrease in the RNFL has been demonstrated in Alzheimer's disease (AD), in addition to aging, by optical coherence tomography (OCT). Twenty-three mild-AD patients and 28 age-matched control subjects, with mean Mini-Mental State Examination scores of 23.3 and 28.2, respectively, and with no ocular disease or systemic disorders affecting vision, were considered for the study. OCT peripapillary and macular segmentation thicknesses were examined in the right eye of each patient. Compared to controls, eyes of mild-AD patients showed no statistical difference in peripapillary RNFL thickness (P > 0.05); however, sectors 2, 3, 4, 8, 9, and 11 of the papilla showed thinning, while sectors 1, 5, 6, 7, and 10 showed thickening. Total macular volume and RNFL thickness of the fovea in all four inner quadrants and in the outer temporal quadrants proved to be significantly decreased (P < 0.01). Although peripapillary RNFL thickness did not statistically differ from that of control eyes, the increase in peripapillary thickness in our mild-AD patients could correspond to an early neurodegeneration stage and may reflect an inflammatory process that could lead to progressive peripapillary fiber damage. PMID:26557684

  1. Bifilar analysis study, volume 1

    NASA Technical Reports Server (NTRS)

    Miao, W.; Mouzakis, T.

    1980-01-01

    A coupled rotor/bifilar/airframe analysis was developed and utilized to study the dynamic characteristics of the centrifugally tuned, rotor-hub-mounted, bifilar vibration absorber. The analysis contains the major components that impact the bifilar absorber performance, namely, an elastic rotor with hover aerodynamics, a flexible fuselage, and nonlinear individual degrees of freedom for each bifilar mass. Airspeed, rotor speed, bifilar mass and tuning variations are considered. The performance of the bifilar absorber is shown to be a function of its basic parameters: dynamic mass, damping and tuning, as well as the impedance of the rotor hub. The effect of the dissimilar responses of the individual bifilar masses which are caused by tolerance induced mass, damping and tuning variations is also examined.

  2. Semi-automatic segmentation for 3D motion analysis of the tongue with dynamic MRI.

    PubMed

    Lee, Junghoon; Woo, Jonghye; Xing, Fangxu; Murano, Emi Z; Stone, Maureen; Prince, Jerry L

    2014-12-01

    Dynamic MRI has been widely used to track the motion of the tongue and measure its internal deformation during speech and swallowing. Accurate segmentation of the tongue is a prerequisite step to define the target boundary and constrain the tracking to tissue points within the tongue. Segmentation of 2D slices or 3D volumes is challenging because of the large number of slices and time frames involved, as well as the numerous local deformations that occur throughout the tongue during motion. In this paper, we propose a semi-automatic approach to segment 3D dynamic MRI of the tongue. The algorithm steps include seeding a few slices at one time frame, propagating seeds to the same slices at different time frames using deformable registration, and random walker segmentation based on these seed positions. This method was validated on the tongues of five normal subjects carrying out the same speech task, with multi-slice 2D dynamic cine-MR images obtained at three orthogonal orientations and 26 time frames. The resulting semi-automatic segmentations of a total of 130 volumes showed an average Dice similarity coefficient (DSC) of 0.92, with less segmented-volume variability between time frames than in manual segmentations. PMID:25155697
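
    The validation metric used here, the Dice similarity coefficient, is simple to compute from two binary masks; a minimal sketch with invented voxel sets:

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks given as sets of
    voxel coordinates: DSC = 2|A∩B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

manual = {(0, 0), (0, 1), (1, 0), (1, 1)}
auto = {(0, 1), (1, 0), (1, 1), (2, 1)}
score = dice(manual, auto)  # 2*3 / (4+4) = 0.75
```

    A DSC of 0.92, as reported above, means the automatic and manual masks overlap in the vast majority of their combined volume.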

  3. Quantitative Analysis of Mouse Retinal Layers Using Automated Segmentation of Spectral Domain Optical Coherence Tomography Images

    PubMed Central

    Dysli, Chantal; Enzmann, Volker; Sznitman, Raphael; Zinkernagel, Martin S.

    2015-01-01

    Purpose Quantification of retinal layers using automated segmentation of optical coherence tomography (OCT) images allows for longitudinal studies of retinal and neurological disorders in mice. The purpose of this study was to compare the performance of automated retinal layer segmentation algorithms with data from manual segmentation in mice using the Spectralis OCT. Methods Spectral domain OCT images from 55 mice from three different mouse strains were analyzed in total. The OCT scans from 22 C57Bl/6, 22 BALBc, and 11 C3A.Cg-Pde6b+Prph2Rd2/J mice were automatically segmented using three commercially available automated retinal segmentation algorithms and compared to manual segmentation. Results Fully automated segmentation performed well in mice and showed coefficients of variation (CV) below 5% for total retinal volume. However, all three automated segmentation algorithms yielded substantially greater total retinal thickness values than manual segmentation (P < 0.0001) due to segmentation errors at the basement membrane. Conclusions Whereas the automated retinal segmentation algorithms performed well for the inner layers, the retinal pigment epithelium (RPE) was delineated within the sclera, leading to consistently thicker measurements of the photoreceptor layer and the total retina. Translational Relevance The introduction of spectral domain OCT allows for accurate imaging of the mouse retina. Exact quantification of retinal layer thicknesses in mice is important for studying layers of interest under various pathological conditions. PMID:26336634

  4. Automated cerebellar lobule segmentation with application to cerebellar structural analysis in cerebellar disease.

    PubMed

    Yang, Zhen; Ye, Chuyang; Bogovic, John A; Carass, Aaron; Jedynak, Bruno M; Ying, Sarah H; Prince, Jerry L

    2016-02-15

    The cerebellum plays an important role in both motor control and cognitive function. Cerebellar function is topographically organized and diseases that affect specific parts of the cerebellum are associated with specific patterns of symptoms. Accordingly, delineation and quantification of cerebellar sub-regions from magnetic resonance images are important in the study of cerebellar atrophy and associated functional losses. This paper describes an automated cerebellar lobule segmentation method based on a graph cut segmentation framework. Results from multi-atlas labeling and tissue classification contribute to the region terms in the graph cut energy function and boundary classification contributes to the boundary term in the energy function. A cerebellar parcellation is achieved by minimizing the energy function using the α-expansion technique. The proposed method was evaluated using a leave-one-out cross-validation on 15 subjects including both healthy controls and patients with cerebellar diseases. Based on reported Dice coefficients, the proposed method outperforms two state-of-the-art methods. The proposed method was then applied to 77 subjects to study the region-specific cerebellar structural differences in three spinocerebellar ataxia (SCA) genetic subtypes. Quantitative analysis of the lobule volumes shows distinct patterns of volume changes associated with different SCA subtypes consistent with known patterns of atrophy in these genetic subtypes. PMID:26408861

  5. Image segmentation by iterative parallel region growing with application to data compression and image analysis

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    1988-01-01

    Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of the approach on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
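
    The "globally best merges first" strategy can be sketched in 1-D: treat every sample as a region and always merge the adjacent pair of regions whose means differ least, stopping when the cheapest merge exceeds a threshold (a much-simplified serial analogue of the MPP implementation):

```python
def best_merge_segmentation(values, stop_cost):
    """Order-independent region growing: always perform the globally
    cheapest merge of adjacent regions first."""
    regions = [[v] for v in values]

    def mean(r):
        return sum(r) / len(r)

    while len(regions) > 1:
        costs = [abs(mean(regions[i]) - mean(regions[i + 1]))
                 for i in range(len(regions) - 1)]
        i = min(range(len(costs)), key=costs.__getitem__)
        if costs[i] > stop_cost:
            break
        regions[i:i + 2] = [regions[i] + regions[i + 1]]
    return regions

segs = best_merge_segmentation([10, 11, 10, 50, 52, 51], stop_cost=5)
# two regions: the 10-ish samples and the 50-ish samples
```

    Because the cheapest merge anywhere in the image is taken at each step, the result does not depend on a scan order, which is exactly the property the paper argues conventional region growing lacks.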

  6. A novel approach for the automated segmentation and volume quantification of cardiac fats on computed tomography.

    PubMed

    Rodrigues, É O; Morais, F F C; Morais, N A O S; Conci, L S; Neto, L V; Conci, A

    2016-01-01

    The deposits of fat surrounding the heart are correlated with several health risk factors such as atherosclerosis, carotid stiffness, coronary artery calcification, atrial fibrillation, and many others. These deposits vary independently of obesity, which reinforces the case for their direct segmentation for further quantification. However, manual segmentation of these fats has not been widely deployed in clinical practice because of the required human workload and the consequent high cost of physicians and technicians. In this work, we propose a unified method for autonomous segmentation and quantification of two types of cardiac fat. The segmented fats, termed epicardial and mediastinal, are separated from each other by the pericardium. Much effort was devoted to achieving minimal user intervention. The proposed methodology mainly comprises registration and classification algorithms to perform the desired segmentation. We compare the performance of several classification algorithms on this task, including neural networks, probabilistic models, and decision tree algorithms. Experimental results show that the mean accuracy for both epicardial and mediastinal fats is 98.5% (99.5% if the features are normalized), with a mean true positive rate of 98.0%. On average, the Dice similarity index was 97.6%. PMID:26474835

  7. Applicability of semi-automatic segmentation for volumetric analysis of brain lesions.

    PubMed

    Heinonen, T; Dastidar, P; Eskola, H; Frey, H; Ryymin, P; Laasonen, E

    1998-01-01

    This project involves the development of a fast semi-automatic segmentation procedure to make accurate volumetric estimations of brain lesions. The method has been applied to the segmentation of demyelination plaques in multiple sclerosis (MS) and of right cerebral hemispheric infarctions in patients with neglect. The segmentation method includes several image processing techniques, such as image enhancement, amplitude segmentation, and region growing. The program runs on a PC and provides a graphical user interface. Twenty-three patients with MS and 43 patients with right cerebral hemisphere infarctions were studied on a 0.5 T MRI unit, and the MS plaques and cerebral infarctions were then segmented. The volumetric accuracy of the program was demonstrated by segmenting magnetic resonance (MR) images of fluid-filled syringes; the relative error of the total volume measurement based on these images was 1.5%. A repeatability test was also carried out as an inter- and intra-observer study in which the MS plaques of six randomly selected patients were segmented; it indicated 7% variability in the inter-observer study and 4% in the intra-observer study. The average time needed to segment and calculate the total plaque volumes for one patient was 10 min. This simple segmentation method can be used to quantify anatomical structures, such as air cells in the sinonasal and temporal bone area, as well as different pathological conditions, such as brain tumours, intracerebral haematomas, and bony destructions. PMID:9680601

  8. Automated segmentation of chronic stroke lesions using LINDA: Lesion identification with neighborhood data analysis.

    PubMed

    Pustina, Dorian; Coslett, H Branch; Turkeltaub, Peter E; Tustison, Nicholas; Schwartz, Myrna F; Avants, Brian

    2016-04-01

    The gold standard for identifying stroke lesions is manual tracing, a method that is known to be observer dependent and time consuming, and thus impractical for big data studies. We propose LINDA (Lesion Identification with Neighborhood Data Analysis), an automated segmentation algorithm capable of learning the relationship between existing manual segmentations and a single T1-weighted MRI. A dataset of 60 left-hemispheric chronic stroke patients is used to build the method and test it with k-fold and leave-one-out procedures. With respect to manual tracings, predicted lesion maps showed a mean Dice overlap of 0.696 ± 0.16, a Hausdorff distance of 17.9 ± 9.8 mm, and an average displacement of 2.54 ± 1.38 mm. The manual and predicted lesion volumes correlated at r = 0.961. An additional dataset of 45 patients was utilized to test LINDA with independent data, achieving high accuracy rates and confirming its cross-institutional applicability. To investigate the cost of moving from manual tracings to automated segmentation, we performed comparative lesion-to-symptom mapping (LSM) on five behavioral scores. Predicted and manual lesions produced similar neuro-cognitive maps, albeit with some discrepancies, which are discussed. Of note, region-wise LSM was more robust to the prediction error than voxel-wise LSM. Our results show that, while several limitations exist, our current results compete with or exceed the state-of-the-art, producing consistent predictions, very low failure rates, and transferable knowledge between labs. This work also establishes a new viewpoint on evaluating automated methods not only with segmentation accuracy but also with brain-behavior relationships. LINDA is made available online with trained models from over 100 patients. PMID:26756101
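
    The Hausdorff distance reported above measures the worst-case boundary disagreement between two segmentations: the largest distance from any point of one set to its nearest point in the other. A minimal sketch on invented 2-D point sets:

```python
import math

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (e.g. lesion
    boundary voxels)."""
    def directed(p, q):
        return max(min(math.dist(x, y) for y in q) for x in p)
    return max(directed(a, b), directed(b, a))

manual = [(0, 0), (0, 1), (0, 2)]
pred = [(1, 0), (1, 1), (1, 4)]
d = hausdorff(manual, pred)  # dominated by the outlier point (1, 4)
```

    Unlike Dice overlap, a single stray voxel far from the lesion inflates this metric, which is why it is usually reported alongside overlap measures.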

  9. A framework for automatic heart sound analysis without segmentation

    PubMed Central

    2011-01-01

    Background A new framework for heart sound analysis is proposed. One of the most difficult processes in heart sound analysis is segmentation, due to interference from murmurs. Method Equal numbers of cardiac cycles were extracted from heart sounds with different heart rates using information from the envelopes of autocorrelation functions, without the need to label individual fundamental heart sounds (FHS). The complete method consists of envelope detection, calculation of cardiac cycle lengths using autocorrelation of envelope signals, feature extraction using the discrete wavelet transform, principal component analysis, and classification using neural network bagging predictors. Result The proposed method was tested on a set of heart sounds obtained from several online databases and recorded with an electronic stethoscope. The geometric mean was used as the performance index. Average classification performance using ten-fold cross-validation was 0.92 in the noise-free case, 0.90 under white noise with 10 dB signal-to-noise ratio (SNR), and 0.90 under impulse noise of up to 0.3 s duration. Conclusion The proposed method showed promising results and high noise robustness across a wide range of heart sounds. However, more tests are needed to address any bias that may have been introduced by the different sources of heart sounds in the current training set, and to validate the method concretely. Further work includes building a new training set recorded from actual patients and further evaluating the method on it. PMID:21303558
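
    The cardiac-cycle-length step can be illustrated with a bare-bones autocorrelation period estimator (a drastic simplification of the envelope autocorrelation described above; the envelope samples are invented):

```python
def cycle_length(envelope, min_lag=2):
    """Estimate the dominant period of a signal as the lag of the highest
    autocorrelation value (mean-removed, unnormalized)."""
    n = len(envelope)
    mean = sum(envelope) / n
    x = [v - mean for v in envelope]

    def ac(lag):
        return sum(x[i] * x[i + lag] for i in range(n - lag))

    return max(range(min_lag, n // 2), key=ac)

# a noiseless periodic "envelope" with period 8 samples
env = [1, 4, 1, 0, 0, 2, 0, 0] * 8
period = cycle_length(env)
```

    Because the period is read off the autocorrelation rather than from labeled S1/S2 sounds, this is exactly the property that lets the framework skip explicit segmentation.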

  10. Improving the clinical correlation of multiple sclerosis black hole volume change by paired-scan analysis.

    PubMed

    Tam, Roger C; Traboulsee, Anthony; Riddehough, Andrew; Li, David K B

    2012-01-01

    The change in T1-hypointense lesion ("black hole") volume is an important marker of pathological progression in multiple sclerosis (MS). Black hole boundaries often have low contrast and are difficult to determine accurately; most (semi-)automated segmentation methods first compute the T2-hyperintense lesions, which are a superset of the black holes and are typically more distinct, to form a search space for the T1w lesions. Two main potential sources of measurement noise in longitudinal black hole volume computation are partial volume and variability in the T2w lesion segmentation. A paired-analysis approach is proposed herein that uses registration to equalize partial volume and lesion-mask processing to combine T2w lesion segmentations across time. The scans of 247 MS patients are used to compare a selected black hole computation method with an enhanced version incorporating paired analysis, using rank correlation with a clinical variable (the MS functional composite) as the primary outcome measure. The comparison is done at nine different intensity levels, as a previous study suggests that darker black holes may yield stronger correlations. The results demonstrate that paired analysis can strongly improve longitudinal correlation (from -0.148 to -0.303 in this sample) and may produce segmentations that are more sensitive to clinically relevant changes. PMID:24179734

  11. Performance evaluation of automated segmentation software on optical coherence tomography volume data.

    PubMed

    Tian, Jing; Varga, Boglarka; Tatrai, Erika; Fanni, Palya; Somfai, Gabor Mark; Smiddy, William E; Debuc, Delia Cabrera

    2016-05-01

    Over the past two decades, a significant number of OCT segmentation approaches have been proposed in the literature. Each methodology has been conceived for and/or evaluated using specific datasets that do not reflect the complexities of the majority of widely available retinal features observed in clinical settings. In addition, there is no appropriate OCT dataset with ground truth that reflects the realities of everyday retinal features observed in clinical settings. While the need for unbiased performance evaluation of automated segmentation algorithms is obvious, validation of segmentation algorithms has usually been performed by comparison with manual labelings from each study, and a common ground truth has been lacking. Therefore, a performance comparison of different algorithms using the same ground truth has never been performed. This paper reviews research-oriented tools for automated segmentation of retinal tissue on OCT images. It also evaluates and compares the performance of these software tools against a common ground truth. PMID:27159849

  12. Segmentation and analysis of emission-computed-tomography images

    NASA Astrophysics Data System (ADS)

    Johnson, Valen E.; Bowsher, James E.; Qian, Jiang; Jaszczak, Ronald J.

    1992-12-01

    This paper describes a statistical model for reconstruction of emission computed tomography (ECT) images. A distinguishing feature of this model is that it is parameterized in terms of quantities of direct physiological significance, rather than only in terms of grey-level voxel values. Specifically, parameters representing regions, region means, and region volumes are included in the model formulation and are estimated directly from projection data. The model is specified hierarchically within the Bayesian paradigm. At the lowest level of the hierarchy, a Gibbs distribution is employed to specify a probability distribution on the space of all possible partitions of the discretized image scene. A novel feature of this distribution is that the number of partitioning elements, or image regions, is not assumed known a priori. In contrast, other segmentation models (e.g., Liang et al., 1991; Amit et al., 1991) require that the number of regions be specified prior to image reconstruction. Since the number of regions in a source distribution is seldom known a priori, allowing the number of regions to vary within the model framework is an important practical feature of this model. In the second level of the model hierarchy, random variables representing emission intensity are associated with each partitioning element or region. Individual voxel intensities are assumed to be drawn from a gamma distribution with mean equal to the region mean in the third stage, and in the final stage of the hierarchy projection data are assumed to be generated from Poisson distributions with means equal to weighted sums of voxel intensities.
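
    The four-stage hierarchy just described can be written schematically. The notation below is an illustrative sketch rather than the authors' exact formulation: z assigns voxels to regions, lambda_k is the mean of region k, x_j is a voxel intensity, y_i a projection count, and a_ij the system-matrix weights.

```latex
% Stage 1: Gibbs prior on partitions z, with the number of regions K free to vary
P(z, K) \;\propto\; \exp\{-\beta\, U(z)\}
% Stage 2: a mean emission intensity \lambda_k is attached to each region k = 1,\dots,K
% Stage 3: voxel intensities are gamma-distributed about their region mean
x_j \mid z, \lambda \;\sim\; \mathrm{Gamma}\!\left(\alpha,\; \alpha/\lambda_{z_j}\right),
\qquad \mathbb{E}[x_j] = \lambda_{z_j}
% Stage 4: projection counts are Poisson with means given by weighted voxel sums
y_i \mid x \;\sim\; \mathrm{Poisson}\!\Big(\textstyle\sum_j a_{ij}\, x_j\Big)
```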

  13. REACH. Teacher's Guide, Volume III. Task Analysis.

    ERIC Educational Resources Information Center

    Morris, James Lee; And Others

    Designed for use with individualized instructional units (CE 026 345-347, CE 026 349-351) in the electromechanical cluster, this third volume of the postsecondary teacher's guide presents the task analysis which was used in the development of the REACH (Refrigeration, Electro-Mechanical, Air Conditioning, Heating) curriculum. The major blocks of…

  14. SU-E-J-238: Monitoring Lymph Node Volumes During Radiotherapy Using Semi-Automatic Segmentation of MRI Images

    SciTech Connect

    Veeraraghavan, H; Tyagi, N; Riaz, N; McBride, S; Lee, N; Deasy, J

    2014-06-01

    Purpose: Identification and image-based monitoring of lymph nodes growing due to disease could be an attractive alternative to prophylactic head and neck irradiation. We evaluated the accuracy of the user-interactive Grow Cut algorithm for volumetric segmentation of radiotherapy-relevant lymph nodes from MRI taken weekly during radiotherapy. Method: The algorithm employs user-drawn strokes in the image to volumetrically segment multiple structures of interest. We used 3D T2-weighted turbo spin echo images with an isotropic resolution of 1 mm3 and FOV of 492×492×300 mm3 of head and neck cancer patients who underwent weekly MR imaging during the course of radiotherapy. Various lymph node (LN) levels (N2, N3, N4/5) were individually contoured on the weekly MR images by an expert physician and used as ground truth in our study. The segmentation results were compared with the physician-drawn lymph nodes based on the DICE similarity score. Results: Three head and neck patients with 6 weekly MR images were evaluated. Two patients had level 2 LN drawn and one patient had level N2, N3 and N4/5 drawn on each MR image. The algorithm took an average of a minute to segment the entire volume (512×512×300 mm3). The algorithm achieved an overall DICE similarity score of 0.78. The time taken for initializing and obtaining the volumetric mask was about 5 mins for cases with only N2 LN and about 15 mins for the case with N2, N3 and N4/5 level nodes. The longer initialization time for the latter case was due to the need for accurate user inputs to separate overlapping portions of the different LN. The standard deviation in segmentation accuracy at different time points was at most 0.05. Conclusions: Our initial evaluation of the Grow Cut segmentation shows reasonably accurate and consistent volumetric segmentations of LN with minimal user effort and time.
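
    The DICE similarity score used to compare automated and physician-drawn masks has a one-line definition; a minimal sketch (the mask shapes and values are fabricated for illustration):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2|A intersect B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: two overlapping square "lymph node" masks on a 10x10 grid
auto = np.zeros((10, 10), dtype=bool)
manual = np.zeros((10, 10), dtype=bool)
auto[2:7, 2:7] = True    # 25 voxels
manual[3:8, 3:8] = True  # 25 voxels, 16 of them shared
print(round(dice(auto, manual), 2))  # 2*16 / 50 = 0.64
```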

  15. Volumetric quantification of bone-implant contact using micro-computed tomography analysis based on region-based segmentation

    PubMed Central

    Kang, Sung-Won; Lee, Woo-Jin; Choi, Soon-Chul; Lee, Sam-Sun; Heo, Min-Suk; Huh, Kyung-Hoe

    2015-01-01

    Purpose We have developed a new method of segmenting the areas of absorbable implants and bone using region-based segmentation of micro-computed tomography (micro-CT) images, which allowed us to quantify volumetric bone-implant contact (VBIC) and volumetric absorption (VA). Materials and Methods The simple threshold technique generally used in micro-CT analysis cannot be used to segment the areas of absorbable implants and bone. Instead, a region-based segmentation method, a region-labeling method, and subsequent morphological operations were successively applied to micro-CT images. The three-dimensional VBIC and VA of the absorbable implant were then calculated over the entire volume of the implant. Two-dimensional (2D) bone-implant contact (BIC) and bone area (BA) were also measured based on the conventional histomorphometric method. Results VA and VBIC increased significantly as the healing period increased (p<0.05). VBIC values were significantly correlated with VA values (p<0.05) and with 2D BIC values (p<0.05). Conclusion It is possible to quantify VBIC and VA for absorbable implants with micro-CT analysis using a region-based segmentation method. PMID:25793178
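
    Once segmentation labels exist, the contact metric itself is simple: VBIC can be taken as the fraction of the implant's one-voxel surface shell occupied by bone. The sketch below is a toy illustration of that final step only; the label volume, the shell construction via binary dilation, and all sizes are assumptions, not the paper's pipeline:

```python
import numpy as np
from scipy import ndimage

# Hypothetical label volume: 0 = background, 1 = bone, 2 = implant.
# In the paper these labels come from region-based segmentation of
# micro-CT slices; here we fabricate a tiny toy volume.
vol = np.zeros((8, 8, 8), dtype=np.uint8)
vol[2:6, 2:6, 2:6] = 2   # implant block
vol[2:6, 2:6, 6] = 1     # bone layer touching one implant face

implant = vol == 2
bone = vol == 1

# Volumetric bone-implant contact: fraction of the implant's surface
# shell occupied by bone (face connectivity via one-voxel dilation).
dilated = ndimage.binary_dilation(implant)  # default: 6-connectivity
shell = dilated & ~implant                  # one-voxel shell around implant
vbic = (shell & bone).sum() / shell.sum()
print(f"VBIC fraction: {vbic:.3f}")  # one of six faces covered: 0.167
```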

  16. Fractal Segmentation and Clustering Analysis for Seismic Time Slices

    NASA Astrophysics Data System (ADS)

    Ronquillo, G.; Oleschko, K.; Korvin, G.; Arizabalo, R. D.

    2002-05-01

    Fractal analysis has become part of the standard approach for quantifying texture on gray-tone or colored images. In this research we introduce a multi-stage fractal procedure to segment, classify and measure the clustering patterns on seismic time slices from a 3-D seismic survey. Five fractal classifiers (c1)-(c5) were designed to yield standardized, unbiased and precise measures of the clustering of seismic signals. The classifiers were tested on seismic time slices from the AKAL field, Cantarell Oil Complex, Mexico. The generalized lacunarity (c1), fractal signature (c2), heterogeneity (c3), rugosity of boundaries (c4) and continuity/tortuosity (c5) of the clusters are shown to be efficient measures of the time-space variability of seismic signals. The Local Fractal Analysis (LFA) of time slices has proved to be a powerful edge-detection filter for detecting and enhancing linear features, like faults or buried meandering rivers. The local fractal dimensions of the time slices were also compared with the self-affinity dimensions of the corresponding parts of porosity logs. It is speculated that the spectral dimension of the negative-amplitude parts of the time slice yields a measure of connectivity between the formation's high-porosity zones, and correlates with overall permeability.
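
    For a flavour of what a fractal classifier measures, the classic box-counting estimate of fractal dimension regresses the number of occupied boxes against box size on a log-log scale. This is a minimal illustrative sketch (the binary image and box sizes are made up; the paper's classifiers such as generalized lacunarity are more elaborate):

```python
import numpy as np

def box_count_dimension(img, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a binary image
    by regressing log N(s) against log(1/s), where N(s) is the number
    of s-by-s boxes containing at least one foreground pixel."""
    counts = []
    for s in sizes:
        h, w = img.shape
        # trim so the image tiles evenly, then count occupied boxes
        t = img[: h - h % s, : w - w % s]
        boxes = t.reshape(t.shape[0] // s, s, t.shape[1] // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# A filled square should have dimension 2; a straight line, dimension 1
square = np.ones((64, 64), dtype=bool)
print(round(box_count_dimension(square), 2))  # 2.0
```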

  17. Analysis of image thresholding segmentation algorithms based on swarm intelligence

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo

    2013-03-01

    Swarm intelligence-based image thresholding segmentation algorithms are playing an important role in the research field of image segmentation. In this paper, we briefly introduce the theories of four existing image segmentation algorithms based on swarm intelligence: the fish swarm algorithm, artificial bee colony, the bacteria foraging algorithm and particle swarm optimization. Then some image benchmarks are tested in order to show the differences in segmentation accuracy, time consumption, convergence, and robustness to salt-and-pepper and Gaussian noise among these four algorithms. Through these comparisons, this paper gives qualitative analyses of the performance variance of the four algorithms. The conclusions in this paper provide useful guidance for practical image segmentation.
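
    To make the setup concrete, here is a toy particle swarm optimization searching for an image threshold that maximizes Otsu's between-class variance, a common fitness choice in this literature. The histogram, swarm parameters and fitness function are illustrative assumptions, not taken from the paper (for a single threshold an exhaustive search is trivial; swarm methods pay off for multilevel thresholding):

```python
import numpy as np

rng = np.random.default_rng(0)

def between_class_variance(hist, t):
    """Otsu's criterion for threshold t on a 256-bin histogram."""
    p = hist / hist.sum()
    w0, w1 = p[:t].sum(), p[t:].sum()
    if w0 == 0 or w1 == 0:
        return 0.0
    mu0 = (np.arange(t) * p[:t]).sum() / w0
    mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
    return w0 * w1 * (mu0 - mu1) ** 2

def pso_threshold(hist, n_particles=20, iters=50):
    """Toy PSO over threshold values in [1, 255]."""
    pos = rng.uniform(1, 255, n_particles)
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pbest_f = np.array([between_class_variance(hist, int(x)) for x in pos])
    gbest = pbest[pbest_f.argmax()]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        # inertia + cognitive + social terms
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 1, 255)
        f = np.array([between_class_variance(hist, int(x)) for x in pos])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmax()]
    return int(gbest)

# Bimodal synthetic histogram: dark peak near 50, bright peak near 200
hist = np.zeros(256)
hist[40:60] = 100
hist[190:210] = 100
t = pso_threshold(hist)
print(t)  # lands between the two peaks
```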

  18. Blood vessel segmentation using line-direction vector based on Hessian analysis

    NASA Astrophysics Data System (ADS)

    Nimura, Yukitaka; Kitasaka, Takayuki; Mori, Kensaku

    2010-03-01

    Grading of stenoses is important for deciding the treatment strategy in the diagnosis of vascular diseases such as arterial occlusive disease or thromboembolism. It is also important to understand the vasculature in minimally invasive surgery such as laparoscopic surgery or natural orifice translumenal endoscopic surgery. Precise segmentation and recognition of blood vessel regions are indispensable tasks in medical image processing systems. Previous methods utilize only a "lineness" measure, which is computed by Hessian analysis. However, the intensity difference between a voxel of a thin blood vessel and a voxel of surrounding tissue is generally reduced by the partial volume effect. Therefore, previous methods cannot extract thin blood vessel regions precisely. This paper describes a novel blood vessel segmentation method that can extract thin blood vessels while suppressing false positives. The proposed method utilizes not only the lineness measure but also the line-direction vector corresponding to the largest eigenvalue in Hessian analysis. By introducing line-direction information, it is possible to distinguish between a blood vessel voxel and a voxel having a low lineness measure caused by noise. In addition, we consider the scale of the blood vessel. The proposed method can reduce false positives in line-like tissues close to blood vessel regions by utilizing iterative region growing with scale information. The experimental results show that thin blood vessels (0.5 mm in diameter, almost the same as the voxel spacing) can be extracted accurately by the proposed method.
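
    The core of the Hessian analysis can be sketched as follows: at each voxel, the eigenvalues of the 3×3 Hessian characterize local shape (for a bright tube, two strongly negative eigenvalues and one near zero), and the eigenvector of the eigenvalue closest to zero points along the tube. The toy version below uses plain finite differences without the multi-scale Gaussian smoothing a real pipeline would add, and its lineness formula is a simplified stand-in for the paper's measure:

```python
import numpy as np

def line_measures(vol):
    """Per-voxel Hessian eigen-analysis: a simple 'lineness' value and
    the line-direction vector (eigenvector of the largest, i.e. closest
    to zero, eigenvalue for a bright tube)."""
    grads = np.gradient(vol.astype(float))          # d/dz, d/dy, d/dx
    H = np.empty(vol.shape + (3, 3))
    for i, g in enumerate(grads):
        second = np.gradient(g)
        for j in range(3):
            H[..., i, j] = second[j]                # H_ij = d2f/dxi dxj
    w, v = np.linalg.eigh(H)                        # eigenvalues ascending
    # bright line: the two smallest eigenvalues are strongly negative
    lineness = np.where((w[..., 0] < 0) & (w[..., 1] < 0),
                        np.abs(w[..., 1]), 0.0)
    direction = v[..., :, 2]                        # largest-eigenvalue vector
    return lineness, direction

# Toy volume: a bright Gaussian tube running along the first (z) axis
z, y, x = np.mgrid[0:16, 0:16, 0:16]
vol = np.exp(-((y - 8) ** 2 + (x - 8) ** 2) / 4.0)
lineness, direction = line_measures(vol)
d = np.abs(direction[8, 8, 8])
print(d.round(2))  # on the axis, the direction aligns with z: [1. 0. 0.]
```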

  19. Finite Volume Methods: Foundation and Analysis

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Ohlberger, Mario

    2003-01-01

    Finite volume methods are a class of discretization schemes that have proven highly successful in approximating the solution of a wide variety of conservation law systems. They are extensively used in fluid mechanics, porous media flow, meteorology, electromagnetics, models of biological processes, semiconductor device simulation and many other engineering areas governed by conservative systems that can be written in integral control volume form. This article reviews elements of the foundation and analysis of modern finite volume methods. The primary advantages of these methods are numerical robustness through the satisfaction of discrete maximum (minimum) principles, applicability on very general unstructured meshes, and the intrinsic local conservation properties of the resulting schemes. Throughout this article, specific attention is given to scalar nonlinear hyperbolic conservation laws and the development of high order accurate schemes for discretizing them. A key tool in the design and analysis of finite volume schemes suitable for non-oscillatory discontinuity capturing is discrete maximum principle analysis. A number of building blocks used in the development of numerical schemes possessing local discrete maximum principles are reviewed in one and several space dimensions, e.g. monotone fluxes, E-fluxes, TVD discretization, non-oscillatory reconstruction, slope limiters, positive coefficient schemes, etc. When available, theoretical results concerning a priori and a posteriori error estimates are given. Further advanced topics are then considered, such as high order time integration, discretization of diffusion terms and the extension to systems of nonlinear conservation laws.
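
    Several of the building blocks named above (a monotone flux, slope limiting, local conservation) fit in a few lines for a 1D scalar law. The sketch below applies a minmod-limited MUSCL reconstruction with a local Lax-Friedrichs flux to Burgers' equation; the grid, CFL number and test problem are illustrative choices, not taken from the article:

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: smaller-magnitude slope when signs agree,
    zero otherwise, preventing new extrema (TVD)."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def step_burgers(u, dx, dt):
    """One forward-Euler step of a MUSCL finite volume scheme for
    u_t + (u^2/2)_x = 0, periodic boundaries, local Lax-Friedrichs
    (monotone) numerical flux."""
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slopes
    uL = u + 0.5 * s                      # left state at interface i+1/2
    uR = np.roll(u - 0.5 * s, -1)         # right state at interface i+1/2
    f = lambda q: 0.5 * q * q
    a = np.maximum(np.abs(uL), np.abs(uR))               # wave-speed bound
    flux = 0.5 * (f(uL) + f(uR)) - 0.5 * a * (uR - uL)
    return u - dt / dx * (flux - np.roll(flux, 1))

# Evolve a smooth bump; total "mass" is conserved by construction,
# since the flux differences telescope (local conservation).
x = np.linspace(0, 1, 200, endpoint=False)
dx = x[1] - x[0]
u = 1.0 + 0.5 * np.sin(2 * np.pi * x)
mass0 = u.sum() * dx
for _ in range(100):
    u = step_burgers(u, dx, dt=0.4 * dx / np.abs(u).max())
print(abs(u.sum() * dx - mass0) < 1e-10)  # True: discrete conservation
```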

  20. The Analysis of Image Segmentation Hierarchies with a Graph-based Knowledge Discovery System

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Cook, Diane J.; Ketkar, Nikhil; Aksoy, Selim

    2008-01-01

    Currently available pixel-based analysis techniques do not effectively extract the information content from the increasingly available high spatial resolution remotely sensed imagery data. A general consensus is that object-based image analysis (OBIA) is required to effectively analyze this type of data. OBIA is usually a two-stage process; image segmentation followed by an analysis of the segmented objects. We are exploring an approach to OBIA in which hierarchical image segmentations provided by the Recursive Hierarchical Segmentation (RHSEG) software developed at NASA GSFC are analyzed by the Subdue graph-based knowledge discovery system developed by a team at Washington State University. In this paper we discuss our initial approach to representing the RHSEG-produced hierarchical image segmentations in a graphical form understandable by Subdue, and provide results on real and simulated data. We also discuss planned improvements designed to more effectively and completely convey the hierarchical segmentation information to Subdue and to improve processing efficiency.

  1. Application of taxonomy theory, Volume 1: Computing a Hopf bifurcation-related segment of the feasibility boundary. Final report

    SciTech Connect

    Zaborszky, J.; Venkatasubramanian, V.

    1995-10-01

    Taxonomy Theory is the first precise comprehensive theory for large power system dynamics modeled in any detail. The motivation for this project is to show that it can be used, practically, for analyzing a disturbance that actually occurred on a large system, which affected a sizable portion of the Midwest with supercritical Hopf type oscillations. This event is well documented and studied. The report first summarizes Taxonomy Theory with an engineering flavor. Then various computational approaches are cited and analyzed for their suitability for use with Taxonomy Theory. Working equations are then developed for computing a segment of the feasibility boundary that bounds the region of (operating) parameters throughout which the operating point can be moved without losing stability. Experimental software incorporating the large EPRI software package PSAPAC is then developed. After a summary of the events during the subject disturbance, numerous large scale computations, up to 7600 buses, are reported. These results are reduced into graphical and tabular forms, which are then analyzed and discussed. The report is divided into two volumes. This volume illustrates the use of Taxonomy Theory for computing the feasibility boundary and presents evidence that the event indeed led to a Hopf type oscillation on the system. Furthermore, it shows that Taxonomy Theory can indeed be used for practical computational work with very large systems. Volume 2, a separate volume, will show that the disturbance led to a supercritical (that is, a stable oscillation) Hopf bifurcation.

  2. Automatic brain tumor segmentation

    NASA Astrophysics Data System (ADS)

    Clark, Matthew C.; Hall, Lawrence O.; Goldgof, Dmitry B.; Velthuizen, Robert P.; Murtaugh, F. R.; Silbiger, Martin L.

    1998-06-01

    A system that automatically segments and labels complete glioblastoma multiforme tumor volumes in magnetic resonance images of the human brain is presented. The magnetic resonance images consist of three feature images (T1-weighted, proton density, T2-weighted) and are processed by a system which integrates knowledge-based techniques with multispectral analysis and is independent of a particular magnetic resonance scanning protocol. Initial segmentation is performed by an unsupervised clustering algorithm. The segmented image, along with cluster centers for each class, is provided to a rule-based expert system which extracts the intra-cranial region. Multispectral histogram analysis separates suspected tumor from the rest of the intra-cranial region, with region analysis used in performing the final tumor labeling. This system has been trained on eleven volume data sets and tested on twenty-two unseen volume data sets acquired from a single magnetic resonance imaging system. The knowledge-based tumor segmentation was compared with radiologist-verified 'ground truth' tumor volumes and results generated by a supervised fuzzy clustering algorithm. The results of this system generally correspond well to ground truth, both on a per-slice basis and, more importantly, in tracking total tumor volume during treatment over time.

  3. Fusing Markov random fields with anatomical knowledge and shape-based analysis to segment multiple sclerosis white matter lesions in magnetic resonance images of the brain

    NASA Astrophysics Data System (ADS)

    AlZubi, Stephan; Toennies, Klaus D.; Bodammer, N.; Hinrichs, Herman

    2002-05-01

    This paper proposes an image analysis system to segment multiple sclerosis lesions in magnetic resonance (MR) brain volumes consisting of 3 mm thick slices using three channels (images showing T1-, T2- and PD-weighted contrast). The method uses the statistical model of Markov Random Fields (MRF) at both low and high levels. The neighborhood system used in this MRF is defined in three types: (1) Voxel to voxel: a low-level heterogeneous neighborhood system is used to restore noisy images. (2) Voxel to segment: a fuzzy atlas, which indicates the probability distribution of each tissue type in the brain, is registered elastically with the MRF. It is used by the MRF as a priori knowledge to correct misclassified voxels. (3) Segment to segment: remaining lesion candidates are processed by a feature-based classifier that looks at unary and neighborhood information to eliminate more false positives. An expert's manual segmentation was compared with the algorithm's output.

  4. A novel iris segmentation algorithm based on small eigenvalue analysis

    NASA Astrophysics Data System (ADS)

    Harish, B. S.; Aruna Kumar, S. V.; Guru, D. S.; Ngo, Minh Ngoc

    2015-12-01

    In this paper, a simple and robust algorithm is proposed for iris segmentation. The proposed method consists of two steps. In the first step, the iris and pupil are segmented using the Robust Spatial Kernel FCM (RSKFCM) algorithm. RSKFCM is based on the traditional Fuzzy c-Means (FCM) algorithm; it incorporates spatial information and uses a kernel metric as its distance measure. In the second step, the small eigenvalue transformation is applied to localize the iris boundary. The transformation is based on statistical and geometrical properties of the small eigenvalue of the covariance matrix of a set of edge pixels. Extensive experiments were carried out on standard benchmark iris datasets (viz. CASIA-IrisV4 and UBIRIS.v2), and the proposed method was compared with existing iris segmentation methods. The proposed method has the least time complexity of O(n(i+p)). The experimental results emphasize that the proposed algorithm outperforms the existing iris segmentation methods.
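
    For orientation, the baseline FCM that RSKFCM extends alternates two closed-form updates: centres from membership-weighted means, and memberships from inverse distances. The sketch below is plain FCM on fabricated 1D intensities; the spatial regularization and kernel metric that distinguish RSKFCM are deliberately omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

def fcm(X, c=2, m=2.0, iters=50):
    """Plain fuzzy c-means. X: (n, d) samples, c clusters, fuzzifier m.
    Returns memberships U (n, c) and cluster centres (c, d)."""
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)       # random fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        d2 = ((X[:, None, :] - centres[None]) ** 2).sum(-1) + 1e-12
        # u_ik proportional to 1 / d_ik^(2/(m-1)), normalized over clusters
        inv = d2 ** (-1.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centres

# Two well-separated 1D intensity clusters (e.g. pupil vs iris pixels)
X = np.concatenate([rng.normal(0.2, 0.02, 100),
                    rng.normal(0.8, 0.02, 100)])[:, None]
U, centres = fcm(X)
c0, c1 = sorted(centres.ravel())
print(f"{c0:.1f} {c1:.1f}")  # 0.2 0.8
```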

  5. Atlas-Based Segmentation Improves Consistency and Decreases Time Required for Contouring Postoperative Endometrial Cancer Nodal Volumes

    SciTech Connect

    Young, Amy V.; Wortham, Angela; Wernick, Iddo; Evans, Andrew; Ennis, Ronald D.

    2011-03-01

    Purpose: Accurate target delineation of the nodal volumes is essential for three-dimensional conformal and intensity-modulated radiotherapy planning for endometrial cancer adjuvant therapy. We hypothesized that atlas-based segmentation ('autocontouring') would lead to time savings and more consistent contours among physicians. Methods and Materials: A reference anatomy atlas was constructed using the data from 15 postoperative endometrial cancer patients by contouring the pelvic nodal clinical target volume on the simulation computed tomography scan according to the Radiation Therapy Oncology Group 0418 trial using commercially available software. On the simulation computed tomography scans from 10 additional endometrial cancer patients, the nodal clinical target volume autocontours were generated. Three radiation oncologists corrected the autocontours and delineated the manual nodal contours under timed conditions while unaware of the other contours. The time difference was determined, and the overlap of the contours was calculated using Dice's coefficient. Results: For all physicians, manual contouring of the pelvic nodal target volumes and editing the autocontours required a mean ± standard deviation of 32 ± 9 vs. 23 ± 7 minutes, respectively (p = .000001), a 26% time savings. For each physician, the time required to delineate the manual contours vs. correcting the autocontours was 30 ± 3 vs. 21 ± 5 min (p = .003), 39 ± 12 vs. 30 ± 5 min (p = .055), and 29 ± 5 vs. 20 ± 5 min (p = .0002). The mean overlap increased from manual contouring (0.77) to correcting the autocontours (0.79; p = .038). Conclusion: The results of our study have shown that autocontouring leads to increased consistency and time savings when contouring the nodal target volumes for adjuvant treatment of endometrial cancer, although the autocontours still required careful editing to ensure that the lymph nodes at risk of recurrence are properly included in the target

  6. Automated detection, 3D segmentation and analysis of high resolution spine MR images using statistical shape models

    NASA Astrophysics Data System (ADS)

    Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.

    2012-12-01

    Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract the 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes, with mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02 and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm, respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans, and the shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.

  7. A level set segmentation for computer-aided dental x-ray analysis

    NASA Astrophysics Data System (ADS)

    Li, Shuo; Fevens, Thomas; Krzyzak, Adam; Li, Song

    2005-04-01

    A level-set-based segmentation framework for Computer-Aided Dental X-ray Analysis (CADXA) is proposed. In this framework, we first employ level set methods to segment the dental X-ray image into three regions: Normal Region (NR), Potential Abnormal Region (PAR), and Abnormal and Background Region (ABR). The segmentation results are then used to build uncertainty maps based on a proposed uncertainty measurement method, and an analysis scheme is applied. The level set segmentation method consists of two stages: a training stage and a segmentation stage. During the training stage, manually chosen representative images are segmented using hierarchical level set region detection. The segmentation results are used to train a support vector machine (SVM) classifier. During the segmentation stage, a dental X-ray image is first classified by the trained SVM. The classifier provides an initial contour, close to the correct boundary, for the coupled level set method, which is then used to further segment the image. Different dental X-ray images are used to test the framework. Experimental results show that the proposed framework achieves faster level set segmentation and provides more detailed information and indications of possible problems to the dentist. To the best of our knowledge, this is one of the first results on CADXA using level set methods.

  8. Simultaneous segmentation of retinal surfaces and microcystic macular edema in SDOCT volumes

    NASA Astrophysics Data System (ADS)

    Antony, Bhavna J.; Lang, Andrew; Swingle, Emily K.; Al-Louzi, Omar; Carass, Aaron; Solomon, Sharon; Calabresi, Peter A.; Saidha, Shiv; Prince, Jerry L.

    2016-03-01

    Optical coherence tomography (OCT) is a noninvasive imaging modality that has begun to find widespread use in retinal imaging for the detection of a variety of ocular diseases. In addition to structural changes in the form of altered retinal layer thicknesses, pathological conditions may also cause the formation of edema within the retina. In multiple sclerosis, for instance, the nerve fiber and ganglion cell layers are known to thin. Additionally, the formation of pseudocysts called microcystic macular edema (MME) has also been observed in the eyes of about 5% of MS patients, and its presence has been shown to be correlated with disease severity. Previously, we proposed separate algorithms for the segmentation of retinal layers and MME, but since MME mainly occurs within specific regions of the retina, a simultaneous approach is advantageous. In this work, we propose an automated globally optimal graph-theoretic approach that simultaneously segments the retinal layers and the MME in volumetric OCT scans. SD-OCT scans from one eye of 12 MS patients with known MME and 8 healthy controls were acquired and the pseudocysts manually traced. The overall precision and recall of the pseudocyst detection was found to be 86.0% and 79.5%, respectively.
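
    Precision and recall as reported here have the usual definitions over true and false positives; a minimal voxel-wise sketch with fabricated masks:

```python
import numpy as np

def precision_recall(detected, truth):
    """Voxel-wise precision and recall of a binary detection mask
    against a manually traced ground-truth mask."""
    detected = np.asarray(detected, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(detected, truth).sum()
    fp = np.logical_and(detected, ~truth).sum()
    fn = np.logical_and(~detected, truth).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy 1D "pseudocyst" masks: 30 true positives, 10 FP, 10 FN
truth = np.zeros(100, dtype=bool)
truth[:40] = True
detected = np.zeros(100, dtype=bool)
detected[10:50] = True
p, r = precision_recall(detected, truth)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.75
```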

  9. Simultaneous Segmentation of Retinal Surfaces and Microcystic Macular Edema in SDOCT Volumes

    PubMed Central

    Antony, Bhavna J.; Lang, Andrew; Swingle, Emily K.; Al-Louzi, Omar; Carass, Aaron; Solomon, Sharon; Calabresi, Peter A.; Saidha, Shiv; Prince, Jerry L.

    2016-01-01

    Optical coherence tomography (OCT) is a noninvasive imaging modality that has begun to find widespread use in retinal imaging for the detection of a variety of ocular diseases. In addition to structural changes in the form of altered retinal layer thicknesses, pathological conditions may also cause the formation of edema within the retina. In multiple sclerosis, for instance, the nerve fiber and ganglion cell layers are known to thin. Additionally, the formation of pseudocysts called microcystic macular edema (MME) has also been observed in the eyes of about 5% of MS patients, and its presence has been shown to be correlated with disease severity. Previously, we proposed separate algorithms for the segmentation of retinal layers and MME, but since MME mainly occurs within specific regions of the retina, a simultaneous approach is advantageous. In this work, we propose an automated globally optimal graph-theoretic approach that simultaneously segments the retinal layers and the MME in volumetric OCT scans. SD-OCT scans from one eye of 12 MS patients with known MME and 8 healthy controls were acquired and the pseudocysts manually traced. The overall precision and recall of the pseudocyst detection was found to be 86.0% and 79.5%, respectively. PMID:27199502

  10. Introduction to Psychology and Leadership. Part Ten; Discipline. Segments I & II, Volume X.

    ERIC Educational Resources Information Center

    Westinghouse Learning Corp., Annapolis, MD.

    The tenth volume of the introduction to psychology and leadership course (see the final reports which summarize the development project, EM 010 418, EM 010 419, and EM 010 484) concentrates on discipline and is presented in two documents. This document is a self-instructional text with audiotape and intrinsically programed sections. EM 010 441 is…

  11. Introduction to Psychology and Leadership. Part Ten; Discipline. Segments I & II, Volume X, Script.

    ERIC Educational Resources Information Center

    Westinghouse Learning Corp., Annapolis, MD.

    The tenth volume of the introduction to psychology and leadership course (see the final reports which summarize the development project, EM 010 418, EM 010 419, and EM 010 484) concentrates on discipline and is presented in two parts. This document is a self-instructional text with a tape script and intrinsically programed sections. EM 010 442 is…

  12. Adolescents and alcohol: an explorative audience segmentation analysis

    PubMed Central

    2012-01-01

    Background So far, audience segmentation of adolescents with respect to alcohol has been carried out mainly on the basis of socio-demographic characteristics. In this study we examined whether it is possible to segment adolescents according to their values and attitudes towards alcohol to use as guidance for prevention programmes. Methods A random sample of 7,000 adolescents aged 12 to 18 was drawn from the Municipal Basic Administration (MBA) of 29 Local Authorities in the province North-Brabant in the Netherlands. By means of an online questionnaire, data were gathered on values and attitudes towards alcohol, alcohol consumption and socio-demographic characteristics. Results We were able to distinguish a total of five segments on the basis of five attitude factors. Moreover, the five segments also differed in drinking behavior independently of socio-demographic variables. Conclusions Our investigation was a first step in the search for possibilities of segmenting by factors other than socio-demographic characteristics. Further research is necessary to translate these results into concrete terms for alcohol prevention policy. PMID:22950946

  13. Fast Hough transform analysis: pattern deviation from line segment

    NASA Astrophysics Data System (ADS)

    Ershov, E.; Terekhin, A.; Nikolaev, D.; Postnikov, V.; Karpenko, S.

    2015-12-01

    In this paper, we analyze properties of dyadic patterns. These patterns were proposed to approximate line segments in the fast Hough transform (FHT). Previously, these patterns had only a recursive computational scheme. We provide a simple closed-form expression for calculating point coordinates and their deviation from the corresponding ideal lines.

  14. An Experimental Analysis of Phoneme Blending and Segmenting Skills

    ERIC Educational Resources Information Center

    Daly, Edward J., III; Johnson, Sarah; LeClair, Courtney

    2009-01-01

    In this 2-experiment study, experimental analyses of phoneme blending and segmenting skills were conducted with four first-grade students. Intraindividual analyses were conducted to identify the effects of classroom-based instruction on blending phonemes in Experiment 1. In Experiment 2, the effects of an individualized intervention for the…

  15. Infant Word Segmentation and Childhood Vocabulary Development: A Longitudinal Analysis

    ERIC Educational Resources Information Center

    Singh, Leher; Reznick, J. Steven; Xuehua, Liang

    2012-01-01

    Infants begin to segment novel words from speech by 7.5 months, demonstrating an ability to track, encode and retrieve words in the context of larger units. Although it is presumed that word recognition at this stage is a prerequisite to constructing a vocabulary, the continuity between these stages of development has not yet been empirically…

  16. Semi-automatic cone beam CT segmentation of in vivo pre-clinical subcutaneous tumours provides an efficient non-invasive alternative for tumour volume measurements

    PubMed Central

    Brodin, N P; Tang, J; Skalina, K; Quinn, TJ; Basu, I; Guha, C

    2015-01-01

    Objective: To evaluate the feasibility and accuracy of using cone beam CT (CBCT) scans obtained in radiation studies using the small-animal radiation research platform to perform semi-automatic tumour segmentation of pre-clinical tumour volumes. Methods: Volume measurements were evaluated for different anatomical tumour sites, the flank, thigh and dorsum of the hind foot, for a variety of tumour cell lines. The estimated tumour volumes from CBCT and manual calliper measurements using different volume equations were compared with the “gold standard”, measured by weighing the tumours following euthanasia and tumour resection. The correlation between tumour volumes estimated with the different methods, compared with the gold standard, was estimated by the Spearman's rank correlation coefficient, root-mean-square deviation and the coefficient of determination. Results: The semi-automatic CBCT volume segmentation performed favourably compared with manual calliper measures for flank tumours ≤2 cm3 and thigh tumours ≤1 cm3. For tumours >2 cm3 or foot tumours, the CBCT method was not able to accurately segment the tumour volumes and manual calliper measures were superior. Conclusion: We demonstrated that tumour volumes of flank and thigh tumours, obtained as a part of radiation studies using image-guided small-animal irradiators, can be estimated more efficiently and accurately using semi-automatic segmentation from CBCT scans. Advances in knowledge: This is the first study evaluating tumour volume assessment of pre-clinical subcutaneous tumours in different anatomical sites using on-board CBCT imaging. We also compared the accuracy of the CBCT method to manual calliper measures, using various volume calculation equations. PMID:25823502
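The abstract compares "different volume equations" for calliper measures against a resection gold standard. The study's exact equations are not listed in the abstract; two widely used ones are the ellipsoid formula V = (π/6)·l·w·h and the modified-ellipsoid shortcut V = l·w²/2, sketched here together with the root-mean-square deviation metric named above.

```python
import math

def ellipsoid_volume(length, width, height):
    """Ellipsoid approximation: V = (pi/6) * l * w * h."""
    return math.pi / 6.0 * length * width * height

def modified_ellipsoid_volume(length, width):
    """Common preclinical shortcut: V = l * w^2 / 2 (height assumed equal to width)."""
    return length * width ** 2 / 2.0

def rmsd(estimates, gold):
    """Root-mean-square deviation of volume estimates against the gold standard."""
    return math.sqrt(sum((e - g) ** 2 for e, g in zip(estimates, gold)) / len(gold))
```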

  17. Interactive high-quality visualization of color volume datasets using GPU-based refinements of segmentation data.

    PubMed

    Lee, Byeonghun; Kwon, Koojoo; Shin, Byeong-Seok

    2016-04-24

    Data sets containing colored anatomical images of the human body, such as Visible Human or Visible Korean, show realistic internal organ structures. However, imperfect segmentations of these color images, which are typically generated manually or semi-automatically, produce poor-quality rendering results. We propose an interactive high-quality visualization method using GPU-based refinements to aid in the study of anatomical structures. In order to represent the boundaries of a region-of-interest (ROI) smoothly, we apply Gaussian filtering to the opacity values of the color volume. Morphological grayscale erosion operations are then performed to reduce the region size, which is expanded by the Gaussian filtering. Pseudo-coloring and color blending are also applied to the color volume in order to give more informative rendering results. We implement these operations on GPUs to speed up the refinements. As a result, our method delivered high-quality result images with smooth boundaries and provided considerably faster refinements. The speed of these refinements is sufficient for use with interactive renderings as the ROI changes, especially compared to CPU-based methods. Moreover, the pseudo-coloring methods presented anatomical structures clearly. PMID:27127935
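The refinement pipeline described above (Gaussian filtering of the opacity values, followed by grayscale erosion to undo the region growth the filtering causes) can be sketched on the CPU with plain NumPy. The paper's GPU kernels, sigma, and window size are not given in the abstract, so the values below are illustrative.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian filtering of a 2-D opacity map (values in [0, 1])."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, out)

def grey_erosion3(img):
    """Morphological grayscale erosion with a 3x3 window (edge-padded)."""
    p = np.pad(img, 1, mode="edge")
    stack = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return np.min(stack, axis=0)

def refine_opacity(opacity, sigma=1.0):
    """Smooth the ROI boundary, then erode to counteract the Gaussian expansion."""
    return grey_erosion3(gaussian_blur(opacity, sigma))
```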

  18. Latent segmentation based count models: Analysis of bicycle safety in Montreal and Toronto.

    PubMed

    Yasmin, Shamsunnahar; Eluru, Naveen

    2016-10-01

    The study contributes to the literature on bicycle safety by building on traditional count regression models to investigate factors affecting bicycle crashes at the Traffic Analysis Zone (TAZ) level. The TAZ is a traffic-related geographic entity that is most frequently used as the spatial unit for macroscopic crash risk analysis. In conventional count models, the impact of exogenous factors is restricted to be the same across the entire region. However, it is possible that the influence of exogenous factors varies across different TAZs. To accommodate the potential variation in the impact of exogenous factors, we formulate latent segmentation based count models. Specifically, we formulate and estimate latent segmentation based Poisson (LP) and latent segmentation based Negative Binomial (LNB) models to study bicycle crash counts. In our latent segmentation approach, we allow for more than two segments and also consider a large set of variables in the segmentation and segment-specific models. The formulated models are estimated using bicycle-motor vehicle crash data from the Island of Montreal and City of Toronto for the years 2006 through 2010. The TAZ-level variables considered in our analysis include accessibility measures, exposure measures, sociodemographic characteristics, socioeconomic characteristics, road network characteristics and built environment. A policy analysis is also conducted to illustrate the applicability of the proposed model for planning purposes. This macro-level research would assist decision makers, transportation officials and community planners to make informed decisions to proactively improve bicycle safety - a prerequisite to promoting a culture of active transportation. PMID:27442595
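The latent-segmentation idea can be illustrated in its simplest form. The paper's LP/LNB models include covariates in both the segmentation and segment-specific components; the sketch below strips those out and fits an intercept-only two-segment Poisson mixture by EM, which is the core mechanism (segment shares and segment-specific crash rates estimated jointly).

```python
import numpy as np

def em_poisson_mixture(y, n_iter=200):
    """Intercept-only latent-segmentation Poisson model with two segments.

    E-step: responsibilities proportional to pi_k * Poisson(y | lam_k)
    (the y! term is common to both segments and cancels).
    M-step: update segment shares and segment rates from the responsibilities.
    """
    y = np.asarray(y, dtype=float)
    lam = np.array([0.5 * y.mean() + 1e-6, 1.5 * y.mean() + 1e-6])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        logw = np.log(pi) + y[:, None] * np.log(lam) - lam
        w = np.exp(logw - logw.max(axis=1, keepdims=True))
        r = w / w.sum(axis=1, keepdims=True)                 # responsibilities
        pi = r.mean(axis=0)                                  # segment shares
        lam = (r * y[:, None]).sum(axis=0) / r.sum(axis=0)   # segment rates
    return pi, lam
```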

  19. Segmentation and Classification of Remotely Sensed Images: Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Syed, Abdul Haleem

    Land-use-and-land-cover (LULC) mapping is crucial in precision agriculture, environmental monitoring, disaster response, and military applications. The demand for improved and more accurate LULC maps has led to the emergence of a key methodology known as Geographic Object-Based Image Analysis (GEOBIA). The core idea of the GEOBIA for an object-based classification system (OBC) is to change the unit of analysis from single-pixels to groups-of-pixels called 'objects' through segmentation. While this new paradigm solved problems and improved global accuracy, it also raised new challenges such as the loss of accuracy in categories that are less abundant, but potentially important. Although this trade-off may be acceptable in some domains, the consequences of such an accuracy loss could be potentially fatal in others (for instance, landmine detection). This thesis proposes a method to improve OBC performance by eliminating such accuracy losses. Specifically, we examine the two key players of an OBC system: Hierarchical Segmentation and Supervised Classification. Further, we propose a model to understand the source of accuracy errors in minority categories and provide a method called Scale Fusion to eliminate those errors. This proposed fusion method involves two stages. First, the characteristic scale for each category is estimated through a combination of segmentation and supervised classification. Next, these estimated scales (segmentation maps) are fused into one combined-object-map. Classification performance is evaluated by comparing results of the multi-cut-and-fuse approach (proposed) to the traditional single-cut (SC) scale selection strategy. Testing on four different data sets revealed that our proposed algorithm improves accuracy on minority classes while performing just as well on abundant categories. Another active obstacle, presented by today's remotely sensed images, is the volume of information produced by our modern sensors with high spatial and

  20. Computed Tomographic Image Analysis Based on FEM Performance Comparison of Segmentation on Knee Joint Reconstruction

    PubMed Central

    Jang, Seong-Wook; Seo, Young-Jin; Yoo, Yon-Sik

    2014-01-01

    The demand for an accurate and accessible image segmentation to generate 3D models from CT scan data has been increasing as such models are required in many areas of orthopedics. In this paper, to find the optimal image segmentation to create a 3D model of the knee CT data, we compared and validated segmentation algorithms based on both objective comparisons and finite element (FE) analysis. For comparison purposes, we used 1 model reconstructed in accordance with the instructions of a clinical professional and 3 models reconstructed using image processing algorithms (Sobel operator, Laplacian of Gaussian operator, and Canny edge detection). Comparison was performed by inspecting intermodel morphological deviations with the iterative closest point (ICP) algorithm, and FE analysis was performed to examine the effects of the segmentation algorithm on the results of the knee joint movement analysis. PMID:25538950

  1. Computed tomographic image analysis based on FEM performance comparison of segmentation on knee joint reconstruction.

    PubMed

    Jang, Seong-Wook; Seo, Young-Jin; Yoo, Yon-Sik; Kim, Yoon Sang

    2014-01-01

    The demand for an accurate and accessible image segmentation to generate 3D models from CT scan data has been increasing as such models are required in many areas of orthopedics. In this paper, to find the optimal image segmentation to create a 3D model of the knee CT data, we compared and validated segmentation algorithms based on both objective comparisons and finite element (FE) analysis. For comparison purposes, we used 1 model reconstructed in accordance with the instructions of a clinical professional and 3 models reconstructed using image processing algorithms (Sobel operator, Laplacian of Gaussian operator, and Canny edge detection). Comparison was performed by inspecting intermodel morphological deviations with the iterative closest point (ICP) algorithm, and FE analysis was performed to examine the effects of the segmentation algorithm on the results of the knee joint movement analysis. PMID:25538950
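The intermodel morphological comparison above relies on ICP, whose core is the closed-form rigid alignment of matched point sets. A sketch of that step (the Kabsch/SVD solution) is shown below; the nearest-neighbor correspondence search that makes ICP iterative is omitted.

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t minimizing sum ||R p_i + t - q_i||^2
    for matched point sets P, Q of shape (n, 3) (Kabsch/SVD solution)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp
```

In a full ICP loop, one would alternate this alignment with re-matching each point of one surface model to its closest point on the other, and report the residual distances as the morphological deviation.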

  2. Extensions to analysis of ignition transients of segmented rocket motors

    NASA Technical Reports Server (NTRS)

    Caveny, L. H.

    1978-01-01

    The analytical procedures described in NASA CR-150162 were extended for the purpose of analyzing the data from the first static test of the Solid Rocket Booster for the Space Shuttle. The component of thrust associated with the rapid changes in the internal flow field was calculated. This dynamic thrust component was shown to be prominent during flame spreading. An approach was implemented to account for the close coupling between the igniter and head end segment of the booster. The tips of the star points were ignited first, followed by radial and longitudinal flame spreading.

  3. Sensitivity analysis of volume scattering phase functions.

    PubMed

    Tuchow, Noah; Broughton, Jennifer; Kudela, Raphael

    2016-08-01

    To solve the radiative transfer equation and relate inherent optical properties (IOPs) to apparent optical properties (AOPs), knowledge of the volume scattering phase function is required. Due to the difficulty of measuring the phase function, it is frequently approximated. We explore the sensitivity of derived AOPs to the phase function parameterization, and compare measured and modeled values of both the AOPs and estimated phase functions using data from Monterey Bay, California during an extreme "red tide" bloom event. Using in situ measurements of absorption and attenuation coefficients, as well as two sets of measurements of the volume scattering function (VSF), we compared output from the Hydrolight radiative transfer model to direct measurements. We found that several common assumptions used in parameterizing the radiative transfer model consistently introduced overestimates of modeled versus measured remote-sensing reflectance values. Phase functions derived from VSF measurements at multiple wavelengths and a single scattering angle significantly overestimated reflectances when using the manufacturer-supplied corrections, but were substantially improved using newly published corrections; phase functions calculated from VSF measurements using three angles and three wavelengths and processed using manufacturer-supplied corrections were comparable, demonstrating that reasonable predictions can be made using two commercially available instruments. While other studies have reached similar conclusions, our work extends the analysis to coastal waters dominated by an extreme algal bloom with surface chlorophyll concentrations in excess of 100 mg m-3. PMID:27505819
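The measured VSFs above cannot be reproduced here, but the role of a phase-function parameterization can be illustrated with the Henyey-Greenstein form, a common analytic stand-in in radiative-transfer modeling (the asymmetry value used in the test is illustrative, not taken from the study).

```python
import numpy as np

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function (per steradian), with asymmetry
    parameter g equal to the mean cosine of the scattering angle."""
    return (1.0 - g ** 2) / (4.0 * np.pi * (1.0 + g ** 2 - 2.0 * g * cos_theta) ** 1.5)

def _trapezoid(y, x):
    """Composite trapezoid rule (kept local for NumPy-version portability)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def check_phase_function(g, n=20001):
    """Return the integral over the sphere (should be 1) and the mean cosine
    (should equal g), as numerical sanity checks of the parameterization."""
    theta = np.linspace(0.0, np.pi, n)
    p = henyey_greenstein(np.cos(theta), g)
    norm = _trapezoid(2.0 * np.pi * p * np.sin(theta), theta)
    mean_cos = _trapezoid(2.0 * np.pi * p * np.cos(theta) * np.sin(theta), theta)
    return norm, mean_cos
```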

  4. Landmine detection using IR image segmentation by means of fractal dimension analysis

    NASA Astrophysics Data System (ADS)

    Abbate, Horacio A.; Gambini, Juliana; Delrieux, Claudio; Castro, Eduardo H.

    2009-05-01

    This work is concerned with buried landmine detection using long-wave infrared images obtained during the heating or cooling of the soil, followed by a segmentation process. The segmentation is performed by means of a local fractal dimension (LFD) analysis as a feature descriptor. We use two different LFD estimators: box-counting dimension (BC) and differential box-counting dimension (DBC). These features are computed on a per-pixel basis, and the set of features is clustered by means of the K-means method. This segmentation technique produces outstanding results with low computational cost.
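The BC estimator named above can be sketched globally: count the boxes that contain foreground at a series of dyadic scales and fit the log-log slope. The paper applies this per pixel over local windows and also uses the DBC variant; neither of those details is reproduced here.

```python
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
    """Box-counting fractal dimension of a binary image.

    Counts boxes of each size containing at least one foreground pixel,
    then fits the slope of log N(s) against log(1/s).
    """
    img = np.asarray(img, dtype=bool)
    counts = []
    for s in sizes:
        n = 0
        for i in range(0, img.shape[0], s):
            for j in range(0, img.shape[1], s):
                if img[i:i + s, j:j + s].any():
                    n += 1
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

A filled region gives a dimension near 2 and a thin line near 1, which is what makes the estimate a useful per-window texture feature for the K-means step.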

  5. Theoretical analysis and experimental verification on valve-less piezoelectric pump with hemisphere-segment bluff-body

    NASA Astrophysics Data System (ADS)

    Ji, Jing; Zhang, Jianhui; Xia, Qixiao; Wang, Shouyin; Huang, Jun; Zhao, Chunsheng

    2014-05-01

    Existing research on no-moving-part valves in valve-less piezoelectric pumps has mainly concentrated on pipeline valves and chamber-bottom valves, which leads to complex structure and manufacturing processes for the pump channel and chamber bottom. Furthermore, valves fixed in position with respect to the inlet and outlet also worsen the adjustability and controllability of the flow rate. In order to overcome these shortcomings, this paper puts forward a novel implantable structure of valve-less piezoelectric pump with hemisphere-segments in the pump chamber. Based on the theory of flow around a bluff body, the flow resistance on the spherical surface of a hemisphere-segment differs from that on its flat round surface when fluid flows past, creating a macroscopic flow-resistance difference between the two directions. A novel valve-less piezoelectric pump with hemisphere-segment bluff-body (HSBB) is presented and designed; the HSBB serves as the no-moving-part valve. By the method of volume and momentum comparison, the stress on the bluff-body in the pump chamber is analyzed, the essential reason for unidirectional fluid pumping is expounded, and the flow rate formula is obtained. To verify the theory, a prototype was produced and used to carry out experimental research on the relationship between flow rate, pressure difference, voltage, and frequency, which confirms the above theory. This prototype has six hemisphere-segments in the chamber filled with water, and the effective diameter of the piezoelectric bimorph is 30 mm. The experimental results show that the flow rate can reach 0.50 mL/s at a frequency of 6 Hz and a voltage of 110 V, and the pressure difference can reach 26.2 mm H2O at a frequency of 6 Hz and a voltage of 160 V. This research proposes a valve-less piezoelectric pump with hemisphere-segment bluff-body, and its validity and feasibility are verified through theoretical analysis and experiment.

  6. Factor Analysis on Cogging Torques in Segment Core Motors

    NASA Astrophysics Data System (ADS)

    Enomoto, Yuji; Kitamura, Masashi; Sakai, Toshihiko; Ohara, Kouichiro

    The segment core method is popular in motor core manufacturing; however, it does not allow the stator core precision to be enhanced, because the stator is assembled from many cores. The axial eccentricity of rotor and stator and the internal roundness of the stator core are regarded as the main factors which affect cogging torque. In the present study, the way in which a motor with a split-type stator generates cogging torque is investigated, to determine whether high-precision assembly of stator cores can reduce it. Here, DC brushless motors were used to verify the influence of stator-rotor eccentricity and roundness of the stator bore on cogging torque. The evaluation results prove the feasibility of reducing cogging torque by improving the stator core precision. Therefore, improving the eccentricity and roundness will enable stable production of well-controlled motors with low torque ripple.

  7. Health lifestyles: audience segmentation analysis for public health interventions.

    PubMed

    Slater, M D; Flora, J A

    1991-01-01

    This article is concerned with the application of market segmentation techniques in order to improve the planning and implementation of public health education programs. Seven distinctive patterns of health attitudes, social influences, and behaviors are identified using cluster analytic techniques in a sample drawn from four central California cities, and are subjected to construct and predictive validation: The lifestyle clusters predict behaviors including seatbelt use, vitamin C use, and attention to health information. The clusters also predict self-reported improvements in health behavior as measured in a two-year follow-up survey, e.g., eating less salt and losing weight, and self-reported new moderate and new vigorous exercise. Implications of these lifestyle clusters for public health education and intervention planning, and the larger potential of lifestyle clustering techniques in public health efforts, are discussed. PMID:2055779

  8. Analysis of radially cracked ring segments subject to forces and couples

    NASA Technical Reports Server (NTRS)

    Gross, B.; Srawley, J. E.

    1975-01-01

    Results of planar boundary collocation analysis are given for ring segment (C shaped) specimens with radial cracks, subjected to combined forces and couples. Mode I stress intensity factors and crack mouth opening displacements were determined for ratios of outer to inner radius in the range 1.1 to 2.5, and ratios of crack length to segment width in the range 0.1 to 0.8.

  9. Analysis of radially cracked ring segments subject to forces and couples

    NASA Technical Reports Server (NTRS)

    Gross, B.; Srawley, J. E.

    1977-01-01

    Results of planar boundary collocation analysis are given for ring segment (C-shaped) specimens with radial cracks, subjected to combined forces and couples. Mode I stress intensity factors and crack mouth opening displacements were determined for ratios of outer to inner radius in the range 1.1 to 2.5 and ratios of crack length to segment width in the range 0.1 to 0.8.

  10. SU-E-J-123: Assessing Segmentation Accuracy of Internal Volumes and Sub-Volumes in 4D PET/CT of Lung Tumors Using a Novel 3D Printed Phantom

    SciTech Connect

    Soultan, D; Murphy, J; James, C; Hoh, C; Moiseenko, V; Cervino, L; Gill, B

    2015-06-15

    Purpose: To assess the accuracy of internal target volume (ITV) segmentation of lung tumors for treatment planning of simultaneous integrated boost (SIB) radiotherapy as seen in 4D PET/CT images, using a novel 3D-printed phantom. Methods: The insert mimics high PET tracer uptake in the core and 50% uptake in the periphery, by using a porous design at the periphery. A lung phantom with the insert was placed on a programmable moving platform. Seven breathing waveforms of ideal and patient-specific respiratory motion patterns were fed to the platform, and 4D PET/CT scans were acquired for each of them. CT images were binned into 10 phases, and PET images were binned into 5 phases following the clinical protocol. Two scenarios were investigated for segmentation: a gate 30–70 window, and no gating. The radiation oncologist contoured the outer ITV of the porous insert on CT images, while the internal void volume with 100% uptake was contoured on PET images, since it is indistinguishable from the outer volume on CT images. Segmented ITVs were compared to the expected volumes based on known target size and motion. Results: 3 ideal breathing patterns, 2 regular-breathing patient waveforms, and 2 irregular-breathing patient waveforms were used for this study. 18F-FDG was used as the PET tracer. The segmented ITVs from CT closely matched the expected volumes for both the no-gating and gate 30–70 scenarios, with disagreement of the contoured ITV with respect to the expected volume not exceeding 13%. PET contours were seen to overestimate volumes in all cases, by up to more than 40%. Conclusion: 4D PET images of a novel 3D-printed phantom designed to mimic different uptake values were obtained. 4D PET contours overestimated ITV volumes in all cases, while 4D CT contours matched the expected ITV volume values. Investigation of the cause and effects of the discrepancies is ongoing.

  11. A new partial volume segmentation approach to extract bladder wall for computer-aided detection in virtual cystoscopy

    NASA Astrophysics Data System (ADS)

    Li, Lihong; Wang, Zigang; Li, Xiang; Wei, Xinzhou; Adler, Howard L.; Huang, Wei; Rizvi, Syed A.; Meng, Hong; Harrington, Donald P.; Liang, Zhengrong

    2004-04-01

    We propose a new partial volume (PV) segmentation scheme to extract bladder wall for computer aided detection (CAD) of bladder lesions using multispectral MR images. Compared with CT images, MR images provide not only a better tissue contrast between bladder wall and bladder lumen, but also the multispectral information. As multispectral images are spatially registered over three-dimensional space, information extracted from them is more valuable than that extracted from each image individually. Furthermore, the intrinsic T1 and T2 contrast of the urine against the bladder wall eliminates the invasive air insufflation procedure. Because the earliest stages of bladder lesion growth tend to develop gradually and migrate slowly from the mucosa into the bladder wall, our proposed PV algorithm quantifies images as percentages of tissues inside each voxel. It preserves both morphology and texture information and provides tissue growth tendency in addition to the anatomical structure. Our CAD system utilizes a multi-scan protocol on dual (full and empty of urine) states of the bladder to extract both geometrical and texture information. Moreover, multi-scan of transverse and coronal MR images eliminates motion artifacts. Experimental results indicate that the presented scheme is feasible towards mass screening and lesion detection for virtual cystoscopy (VC).
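The paper's PV algorithm operates on registered multispectral MR volumes; a minimal single-channel version of its central idea, quantifying each voxel as a percentage of tissue rather than a hard label, can be sketched for two tissue classes. The tissue means and the linear two-tissue mixing model below are illustrative assumptions, not the paper's full formulation.

```python
import numpy as np

def wall_fraction(intensity, mu_wall, mu_lumen):
    """Two-tissue partial-volume model for a single channel.

    Each voxel is modeled as a mix f*mu_wall + (1-f)*mu_lumen, so the wall
    fraction f is the normalized distance of the voxel intensity from the
    lumen mean, clipped to the physically meaningful range [0, 1].
    """
    f = (np.asarray(intensity, dtype=float) - mu_lumen) / (mu_wall - mu_lumen)
    return np.clip(f, 0.0, 1.0)
```

Because f varies continuously across the wall-lumen boundary, maps like this preserve the gradual morphology changes that the abstract argues are important for detecting early lesion growth.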

  12. Segmental Musculoskeletal Examinations using Dual-Energy X-Ray Absorptiometry (DXA): Positioning and Analysis Considerations

    PubMed Central

    Hart, Nicolas H.; Nimphius, Sophia; Spiteri, Tania; Cochrane, Jodie L.; Newton, Robert U.

    2015-01-01

    Musculoskeletal examinations provide informative and valuable quantitative insight into muscle and bone health. DXA is one mainstream tool used to accurately and reliably determine body composition components and bone mass characteristics in-vivo. Presently, whole body scan models separate the body into axial and appendicular regions; however, there is a need for localised appendicular segmentation models to further examine regions of interest within the upper and lower extremities. Similarly, inconsistencies pertaining to patient positioning exist in the literature which influence measurement precision and analysis outcomes, highlighting a need for a standardised procedure. This paper provides standardised and reproducible: 1) positioning and analysis procedures using DXA and 2) reliable segmental examinations through descriptive appendicular boundaries. Whole-body scans were performed on forty-six (n = 46) football athletes (age: 22.9 ± 4.3 yrs; height: 1.85 ± 0.07 m; weight: 87.4 ± 10.3 kg; body fat: 11.4 ± 4.5 %) using DXA. All segments across all scans were analysed three times by the main investigator on three separate days, and by three independent investigators a week following the original analysis. To examine intra-rater and inter-rater reliability between days and researchers, coefficients of variation (CV) and intraclass correlation coefficients (ICC) were determined. Positioning and segmental analysis procedures presented in this study produced very high, nearly perfect intra-tester (CV ≤ 2.0%; ICC ≥ 0.988) and inter-tester (CV ≤ 2.4%; ICC ≥ 0.980) reliability, demonstrating excellent reproducibility within and between practitioners. Standardised examinations of axial and appendicular segments are necessary. Future studies aiming to quantify and report segmental analyses of the upper- and lower-body musculoskeletal properties using whole-body DXA scans are encouraged to use the patient positioning and image analysis procedures outlined in this
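The CV and ICC statistics reported above can be computed from a subjects-by-raters table of repeated measurements. The sketch below uses the within-subject CV and the two-way random, absolute-agreement, single-measure form ICC(2,1); the abstract does not state which ICC variant the study used, so that choice is an assumption.

```python
import numpy as np

def within_subject_cv(x):
    """Mean within-subject coefficient of variation (%) for an
    (n_subjects, k_measurements) table."""
    x = np.asarray(x, dtype=float)
    return float(np.mean(x.std(axis=1, ddof=1) / x.mean(axis=1)) * 100.0)

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single measure."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    msr = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects (rows)
    msc = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)  # raters (columns)
    sse = np.sum((x - x.mean(axis=1, keepdims=True)
                  - x.mean(axis=0, keepdims=True) + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```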

  13. Analysis of wear mechanism and influence factors of drum segment of hot rolling coiler

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Peng, Yan; Liu, Hongmin; Liu, Yunfei

    2013-03-01

    Because the working environment of the segment is complex and wear failures frequently occur, the wear mechanism corresponding to the load is a key factor in solving this problem. At present, many researchers have investigated segment failure, but have not taken into account the combined influences of matching and the coiling process. To investigate the wear failure of the drum segment of the hot rolling coiler, the MMU-5G abrasion tester is applied to simulate the wear behavior under different temperatures, loads and stages, and the friction coefficients and wear rates are acquired. Scanning electron microscopy (SEM) is used to observe the micro-morphology of the worn surface, X-ray energy dispersive spectroscopy (EDS) is used to analyze the chemical composition of the worn surface, and finally the wear mechanism of the segment in the working process is identified and the influence of environmental factors on the material's wear behavior is determined. The test and analysis results show that, under a given load, the wear of the segment changes gradually from abrasive wear to oxidation wear as the temperature increases, and the degree of wear decreases; at a given temperature, the main wear mechanism changes from abrasive wear to spalling wear as the load increases, and the degree of wear slightly increases. The proposed research provides a theoretical foundation and a practical reference for optimizing the wear behavior and extending the working life of the segment.

  14. Imalytics Preclinical: Interactive Analysis of Biomedical Volume Data

    PubMed Central

    Gremse, Felix; Stärk, Marius; Ehling, Josef; Menzel, Jan Robert; Lammers, Twan; Kiessling, Fabian

    2016-01-01

    A software tool is presented for interactive segmentation of volumetric medical data sets. To allow interactive processing of large data sets, segmentation operations, and rendering are GPU-accelerated. Special adjustments are provided to overcome GPU-imposed constraints such as limited memory and host-device bandwidth. A general and efficient undo/redo mechanism is implemented using GPU-accelerated compression of the multiclass segmentation state. A broadly applicable set of interactive segmentation operations is provided which can be combined to solve the quantification task of many types of imaging studies. A fully GPU-accelerated ray casting method for multiclass segmentation rendering is implemented which is well-balanced with respect to delay, frame rate, worst-case memory consumption, scalability, and image quality. Performance of segmentation operations and rendering are measured using high-resolution example data sets showing that GPU-acceleration greatly improves the performance. Compared to a reference marching cubes implementation, the rendering was found to be superior with respect to rendering delay and worst-case memory consumption while providing sufficiently high frame rates for interactive visualization and comparable image quality. The fast interactive segmentation operations and the accurate rendering make our tool particularly suitable for efficient analysis of multimodal image data sets which arise in large amounts in preclinical imaging studies. PMID:26909109
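The GPU-compressed undo/redo of the multiclass segmentation state described above can be mimicked on the CPU: segmentation label volumes are piecewise constant, so run-length encoding compresses snapshots well. This is a sketch of the idea, not the tool's actual compression scheme, and the class name is illustrative.

```python
import numpy as np

def rle_encode(labels):
    """Run-length encode a flat integer label array as (values, run_lengths)."""
    labels = np.asarray(labels)
    starts = np.r_[0, np.flatnonzero(np.diff(labels)) + 1]
    lengths = np.diff(np.r_[starts, labels.size])
    return labels[starts], lengths

def rle_decode(values, lengths):
    """Expand (values, run_lengths) back into the original label array."""
    return np.repeat(values, lengths)

class UndoStack:
    """Store compressed snapshots of the multiclass segmentation state."""
    def __init__(self):
        self._stack = []

    def push(self, labels):
        self._stack.append(rle_encode(labels))

    def undo(self):
        values, lengths = self._stack.pop()
        return rle_decode(values, lengths)
```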

  15. Imalytics Preclinical: Interactive Analysis of Biomedical Volume Data.

    PubMed

    Gremse, Felix; Stärk, Marius; Ehling, Josef; Menzel, Jan Robert; Lammers, Twan; Kiessling, Fabian

    2016-01-01

    A software tool is presented for interactive segmentation of volumetric medical data sets. To allow interactive processing of large data sets, segmentation operations, and rendering are GPU-accelerated. Special adjustments are provided to overcome GPU-imposed constraints such as limited memory and host-device bandwidth. A general and efficient undo/redo mechanism is implemented using GPU-accelerated compression of the multiclass segmentation state. A broadly applicable set of interactive segmentation operations is provided which can be combined to solve the quantification task of many types of imaging studies. A fully GPU-accelerated ray casting method for multiclass segmentation rendering is implemented which is well-balanced with respect to delay, frame rate, worst-case memory consumption, scalability, and image quality. Performance of segmentation operations and rendering are measured using high-resolution example data sets showing that GPU-acceleration greatly improves the performance. Compared to a reference marching cubes implementation, the rendering was found to be superior with respect to rendering delay and worst-case memory consumption while providing sufficiently high frame rates for interactive visualization and comparable image quality. The fast interactive segmentation operations and the accurate rendering make our tool particularly suitable for efficient analysis of multimodal image data sets which arise in large amounts in preclinical imaging studies. PMID:26909109

  16. Design and validation of Segment - freely available software for cardiovascular image analysis

    PubMed Central

    2010-01-01

    Background Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Results Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http://segment

  17. 3D segmentation of lung CT data with graph-cuts: analysis of parameter sensitivities

    NASA Astrophysics Data System (ADS)

    Cha, Jung won; Dunlap, Neal; Wang, Brian; Amini, Amir

    2016-03-01

    Lung boundary image segmentation is important for many tasks, including, for example, the development of radiation treatment plans for subjects with thoracic malignancies. In this paper, we describe a method and parameter settings for accurate 3D lung boundary segmentation based on graph-cuts from X-ray CT data. Even though several researchers have previously used graph-cuts for image segmentation, to date no systematic studies have been performed regarding the range of parameters that give accurate results. The energy function in the graph-cuts algorithm requires 3 suitable parameter settings: K, a large constant for assigning seed points; c, the similarity coefficient for n-links; and λ, the terminal coefficient for t-links. We analyzed the parameter sensitivity with four lung data sets from subjects with lung cancer using error metrics. Large values of K created artifacts on segmented images, and a value of c much larger than λ upset the balance between the boundary term and the data term in the energy function, leading to unacceptable segmentation results. For a range of parameter settings, we performed 3D image segmentation, and in each case compared the results with the expert-delineated lung boundaries. We used a simple 6-neighborhood system for n-links in 3D. The 3D image segmentation took 10 minutes for a 512x512x118 to 512x512x190 lung CT image volume. Our results indicate that the graph-cuts algorithm was more sensitive to the K and λ parameter settings than to the c parameter and, furthermore, that among the range of parameters tested, K=5 and λ=0.5 yielded good results.
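
    The three parameters enter the standard graph-cut energy as edge capacities. A minimal sketch of how K, c, and λ might be turned into n-link and t-link weights (the functional forms and the values sigma, mu_obj, and mu_bkg below are illustrative assumptions, not the paper's definitions):

    ```python
    import math

    def n_link_weight(ip, iq, c=1.0, sigma=10.0):
        # Boundary (n-link) capacity between neighboring voxels p and q:
        # large when intensities are similar, small across strong edges,
        # scaled by the similarity coefficient c.
        return c * math.exp(-((ip - iq) ** 2) / (2.0 * sigma ** 2))

    def t_link_weights(ip, mu_obj, mu_bkg, lam=0.5, K=5.0, seed=None):
        # Regional (t-link) capacities scaled by the terminal coefficient
        # lambda; seed voxels receive the large constant K so their label
        # is effectively fixed.
        if seed == "object":
            return K, 0.0
        if seed == "background":
            return 0.0, K
        return lam * abs(ip - mu_bkg), lam * abs(ip - mu_obj)
    ```

    With capacities like these in place, a max-flow/min-cut solver (e.g., the Boykov-Kolmogorov algorithm) produces the labeling; K=5 and λ=0.5 correspond to the settings the authors report as working well.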

  18. Microarray kit analysis of cytokines in blood product units and segments

    PubMed Central

    Weiskopf, Richard B.; Yau, Rebecca; Sanchez, Rosa; Lowell, Clifford; Toy, Pearl

    2009-01-01

    BACKGROUND Cytokine concentrations in transfused blood components are of interest for some clinical trials. It is not always possible to process samples of transfused components quickly after their administration. Additionally, it is not practical to sample material in an acceptable manner from many bags of components before transfusion, and after transfusion the only representative remaining fluid of the component may be that in the “segment,” as the bag may have been completely transfused. Multiplex array technology allows rapid simultaneous testing of multiple analytes in small-volume samples. We used this technology to measure leukocyte cytokine levels in blood products to determine (1) whether concentrations in segments correlate with those in the main bag, and thus whether segments could be used for estimation of the concentrations in the transfused component; and (2) whether concentrations after sample storage at 4°C for 24 hours differ from concentrations before storage, which determines whether processing within 24 hours, rather than immediately after transfusion, is acceptable. STUDY DESIGN AND METHODS Leukocyte cytokines were measured in the supernatant from bags and segments of leukoreduced red blood cells, non-leukoreduced whole blood, and leukoreduced plateletphereses using the ProteoPlex Human Cytokine Array kit (Novagen). RESULTS Cytokine concentrations in packed red blood cells, whole blood, and plateletphereses stored at 4°C did not differ between bag and segment samples (all p>0.05). There was no evidence of systematic differences between segment and bag concentrations. Cytokine concentrations in samples from plateletphereses did not change within 24 hours of storage at 4°C. CONCLUSION Samples from either bag or segment can be used to study cytokine concentrations in groups of blood products. Cytokine concentrations in plateletphereses appear to be stable for at least 24 hours of storage at 4°C, and, thus, samples stored under those conditions may be used to

  19. A robust and fast line segment detector based on top-down smaller eigenvalue analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Lu, Xiaoqing

    2014-01-01

    In this paper, we propose a robust and fast line segment detector, which achieves accurate results with a controlled number of false detections and requires no parameter tuning. It consists of three steps: first, we propose a novel edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input image; second, we propose a top-down scheme based on smaller eigenvalue analysis to extract line segments within each obtained edge segment; third, we employ Desolneux et al.'s method to reject false detections. Experiments demonstrate that it is very efficient and more robust than two state-of-the-art methods, LSD and EDLines.
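
    The quantity driving the second step is the smaller eigenvalue of the scatter matrix of an edge-point chain: it is near zero when the chain is straight, so a top-down scheme can split a chain wherever this value exceeds a threshold. A hedged sketch of the measure (the threshold and splitting policy are assumptions, not the authors' exact procedure):

    ```python
    import numpy as np

    def smaller_eigenvalue(points):
        # Smaller eigenvalue of the 2x2 scatter (covariance) matrix of an
        # edge-point chain; ~0 when the points are collinear, and it grows
        # as the chain bends away from a straight line.
        pts = np.asarray(points, dtype=float)
        centered = pts - pts.mean(axis=0)
        cov = centered.T @ centered / len(pts)
        return np.linalg.eigvalsh(cov)[0]  # eigvalsh sorts ascending

    # A straight chain scores ~0; a bent chain scores clearly above it,
    # so a threshold between the two separates line segments.
    straight = [(x, 2 * x + 1) for x in range(10)]
    bent = [(x, abs(x - 5)) for x in range(10)]
    ```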

  20. Segmenting Business Students Using Cluster Analysis Applied to Student Satisfaction Survey Results

    ERIC Educational Resources Information Center

    Gibson, Allen

    2009-01-01

    This paper demonstrates a new application of cluster analysis to segment business school students according to their degree of satisfaction with various aspects of the academic program. The resulting clusters provide additional insight into drivers of student satisfaction that are not evident from analysis of the responses of the student body as a…

  1. Automatic segmentation of solitary pulmonary nodules based on local intensity structure analysis and 3D neighborhood features in 3D chest CT images

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

    2012-03-01

    This paper presents a solitary pulmonary nodule (SPN) segmentation method based on local intensity structure analysis and neighborhood feature analysis in chest CT images. Automated segmentation of SPNs is desirable for a chest computer-aided detection/diagnosis (CAD) system, since an SPN may indicate an early stage of lung cancer. Due to the similar intensities of SPNs and other chest structures such as blood vessels, many false positives (FPs) are generated by nodule detection methods. To reduce such FPs, we introduce two features that analyze the relation between each segmented nodule candidate and its neighborhood region. The proposed method utilizes a blob-like structure enhancement (BSE) filter based on Hessian analysis to augment the blob-like structures as initial nodule candidates. Then a fine segmentation is performed to segment a more accurate region for each nodule candidate. FP reduction is mainly addressed by investigating two neighborhood features based on the volume ratio and the eigenvector of the Hessian that are calculated from the neighborhood region of each nodule candidate. We evaluated the proposed method using 40 chest CT images, including 20 standard-dose CT images that were randomly chosen from a local database and 20 low-dose CT images that were randomly chosen from a public database: LIDC. The experimental results revealed that the average TP rate of the proposed method was 93.6% with 12.3 FPs/case.
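
    A Hessian-based blob filter exploits the sign pattern of the Hessian eigenvalues: at the center of a bright blob, all eigenvalues are negative. A 2D toy sketch of that idea (the paper works in 3D with a specific multiscale BSE filter; the finite-difference Hessian and the blobness formula below are illustrative assumptions):

    ```python
    import numpy as np

    def hessian_eigs_2d(img, y, x):
        # Central finite-difference Hessian at one pixel, eigenvalues in
        # ascending order. At the center of a bright blob both are negative.
        dyy = img[y + 1, x] - 2 * img[y, x] + img[y - 1, x]
        dxx = img[y, x + 1] - 2 * img[y, x] + img[y, x - 1]
        dxy = (img[y + 1, x + 1] - img[y + 1, x - 1]
               - img[y - 1, x + 1] + img[y - 1, x - 1]) / 4.0
        return np.linalg.eigvalsh(np.array([[dyy, dxy], [dxy, dxx]]))

    def blobness(img, y, x):
        # Toy blob measure: positive only when both eigenvalues are negative.
        l1, l2 = hessian_eigs_2d(img, y, x)
        return float(np.sqrt(l1 * l2)) if l2 < 0 else 0.0

    # Synthetic bright blob: strong response at its center, none on the tails.
    yy, xx = np.mgrid[0:21, 0:21]
    blob = np.exp(-((yy - 10) ** 2 + (xx - 10) ** 2) / 8.0)
    ```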

  2. Finite difference based vibration simulation analysis of a segmented distributed piezoelectric structronic plate system

    NASA Astrophysics Data System (ADS)

    Ren, B. Y.; Wang, L.; Tzou, H. S.; Yue, H. H.

    2010-08-01

    Electrical modeling of piezoelectric structronic systems by analog circuits has the disadvantages of a huge circuit structure and low precision. However, studies of the electrical simulation of segmented distributed piezoelectric structronic plate systems (PSPSs) that use the output voltage signals of high-speed digital circuits to evaluate real-time dynamic displacements are scarce in the literature. Therefore, an equivalent dynamic model based on the finite difference method (FDM) is presented to simulate the actual physical model of the segmented distributed PSPS with simply supported boundary conditions. By means of the FDM, the fourth-order dynamic partial differential equations (PDEs) of the main structure/segmented distributed sensor signals/control moments of the segmented distributed actuator of the PSPS are transformed to finite difference equations. A dynamics matrix model based on the Newmark-β integration method is established. The output voltage signal characteristics of the lower modes (m <= 3, n <= 3) with different finite difference mesh dimensions and different integration time steps are analyzed by digital signal processing (DSP) circuit simulation software. The control effects of segmented distributed actuators with different effective areas are consistent with the results of the analytical model in relevant references. Therefore, the method of digital simulation for vibration analysis of segmented distributed PSPSs presented in this paper can provide a reference for further research into the electrical simulation of PSPSs.
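
    After FDM discretization, time stepping of the resulting matrix system M a + C v + K x = f(t) is typically done with the implicit Newmark-β recurrence. A generic sketch of that integrator (the matrices below form an arbitrary single-DOF example, not the paper's plate model):

    ```python
    import numpy as np

    def newmark_beta(M, C, K, f, x0, v0, dt, beta=0.25, gamma=0.5):
        # Implicit (average-acceleration) Newmark-beta stepping for the
        # semi-discrete system M a + C v + K x = f(t).
        x = np.array(x0, dtype=float)
        v = np.array(v0, dtype=float)
        a = np.linalg.solve(M, f[0] - C @ v - K @ x)
        keff = K + gamma / (beta * dt) * C + M / (beta * dt ** 2)
        out = [x.copy()]
        for i in range(1, len(f)):
            rhs = (f[i]
                   + M @ (x / (beta * dt ** 2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
                   + C @ (gamma / (beta * dt) * x + (gamma / beta - 1) * v
                          + dt * (gamma / (2 * beta) - 1) * a))
            x_new = np.linalg.solve(keff, rhs)
            a_new = (x_new - x) / (beta * dt ** 2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
            v = v + dt * ((1 - gamma) * a + gamma * a_new)
            x, a = x_new, a_new
            out.append(x.copy())
        return np.array(out)

    # Undamped single-DOF oscillator x'' + x = 0, x(0) = 1: the discrete
    # trajectory stays bounded and follows cos(t).
    M = np.array([[1.0]]); C = np.zeros((1, 1)); K = np.array([[1.0]])
    traj = newmark_beta(M, C, K, np.zeros((400, 1)), [1.0], [0.0], dt=0.01)
    ```

    With β = 1/4 and γ = 1/2 (average acceleration) the scheme is unconditionally stable and energy-conserving for linear undamped systems, which is why it is the usual default for structural dynamics.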

  3. AxonSeg: Open Source Software for Axon and Myelin Segmentation and Morphometric Analysis.

    PubMed

    Zaimi, Aldo; Duval, Tanguy; Gasecka, Alicja; Côté, Daniel; Stikov, Nikola; Cohen-Adad, Julien

    2016-01-01

    Segmenting axon and myelin from microscopic images is relevant for studying the peripheral and central nervous system and for validating new MRI techniques that aim at quantifying tissue microstructure. While several software packages have been proposed, their interface is sometimes limited and/or they are designed to work with a specific modality (e.g., scanning electron microscopy (SEM) only). Here we introduce AxonSeg, which allows users to perform automatic axon and myelin segmentation on histology images and to extract relevant morphometric information, such as the axon diameter distribution, axon density and the myelin g-ratio. AxonSeg includes a simple and intuitive MATLAB-based graphical user interface (GUI) and can easily be adapted to a variety of imaging modalities. The main steps of AxonSeg consist of: (i) image pre-processing; (ii) pre-segmentation of axons over a cropped image and discriminant analysis (DA) to select the best parameters based on axon shape and intensity information; (iii) automatic axon and myelin segmentation over the full image; and (iv) atlas-based statistics to extract morphometric information. Segmentation results from standard optical microscopy (OM), SEM and coherent anti-Stokes Raman scattering (CARS) microscopy are presented, along with validation against manual segmentations. Being fully automatic after a quick manual intervention on a cropped image, we believe AxonSeg will be useful to researchers interested in high-throughput histology. AxonSeg is open source and freely available at: https://github.com/neuropoly/axonseg. PMID:27594833

  4. Mathematical Analysis of Space Radiator Segmenting for Increased Reliability and Reduced Mass

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.

    2001-01-01

    Spacecraft for long-duration deep space missions will need to be designed to survive micrometeoroid bombardment of their surfaces, some of which may actually be punctured. To avoid loss of the entire mission, the damage due to such punctures must be limited to small, localized areas. This is especially true for power system radiators, which necessarily feature large surface areas to reject heat at relatively low temperature to the space environment by thermal radiation. It may be intuitively obvious that if a space radiator is composed of a large number of independently operating segments, such as heat pipes, a random micrometeoroid puncture will result only in the loss of the punctured segment, and not the entire radiator. Due to the redundancy achieved by independently operating segments, the wall thickness and consequently the weight of such segments can be drastically reduced. Probability theory is used to estimate the magnitude of such weight reductions as the number of segments is increased. An analysis of relevant parameter values required for minimum mass segmented radiators is also included.
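
    The redundancy argument can be illustrated with an independent-puncture (binomial) model: with n segments, each punctured with some probability over the mission, the radiator survives as long as a required fraction of segments remains intact. A sketch under that assumption (the paper's actual reliability model and parameter values are not reproduced here):

    ```python
    import math

    def radiator_survival(n_segments, p_puncture, min_surviving_frac=0.9):
        # P(at least min_surviving_frac of the segments survive) when each
        # of n_segments is punctured independently with probability
        # p_puncture -- a binomial sketch, not the paper's exact model.
        k_min = math.ceil(min_surviving_frac * n_segments)
        return sum(
            math.comb(n_segments, k)
            * (1 - p_puncture) ** k
            * p_puncture ** (n_segments - k)
            for k in range(k_min, n_segments + 1)
        )

    # A monolithic radiator (one segment) must survive untouched, while a
    # 100-segment radiator tolerates up to 10 punctures.
    p_mono = radiator_survival(1, 0.05)
    p_seg = radiator_survival(100, 0.05)
    ```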

  5. Gene expression analysis reveals that Delta/Notch signalling is not involved in onychophoran segmentation.

    PubMed

    Janssen, Ralf; Budd, Graham E

    2016-03-01

    Delta/Notch (Dl/N) signalling is involved in the gene regulatory network underlying the segmentation process in vertebrates and possibly also in annelids and arthropods, leading to the hypothesis that segmentation may have evolved in the last common ancestor of bilaterian animals. Because of seemingly contradictory results within the well-studied arthropods, however, the role and origin of Dl/N signalling in segmentation generally is still unclear. In this study, we investigate core components of Dl/N signalling by means of gene expression analysis in the onychophoran Euperipatoides kanangrensis, a close relative of the arthropods. We find that neither Delta nor Notch, nor any other investigated component of its signalling pathway, is likely to be involved in segment addition in onychophorans. We instead suggest that Dl/N signalling may be involved in posterior elongation, another conserved function of these genes. We suggest further that the posterior elongation network, rather than classic Dl/N signalling, may be in control of the highly conserved segment polarity gene network and the lower-level pair-rule gene network in onychophorans. Consequently, we believe that the pair-rule gene network and its interaction with Dl/N signalling may have evolved within the arthropod lineage and that Dl/N signalling has thus likely been recruited independently for segment addition in different phyla. PMID:26935716

  6. Preliminary analysis of effect of random segment errors on coronagraph performance

    NASA Astrophysics Data System (ADS)

    Stahl, Mark T.; Shaklan, Stuart B.; Stahl, H. Philip

    2015-09-01

    "Are we alone in the Universe?" is probably the most compelling science question of our generation. To answer it requires a large aperture telescope with extreme wavefront stability. To image and characterize Earth-like planets requires the ability to block 10^10 of the host star's light with 10^-11 stability. For an internal coronagraph, this requires correcting wavefront errors and keeping that correction stable to a few picometers rms for the duration of the science observation. This requirement places severe specifications upon the performance of the observatory, telescope and primary mirror. A key task of the AMTD project (initiated in FY12) is to define telescope-level specifications traceable to science requirements and flow those specifications to the primary mirror. From a systems perspective, probably the most important question is: What is the telescope wavefront stability specification? Previously, we suggested this specification should be 10 picometers per 10 minutes; considered issues of how this specification relates to architecture, i.e. monolithic or segmented primary mirror; and asked whether it was better to have few or many segments. This paper reviews the 10 picometers per 10 minutes specification; provides analysis related to the application of this specification to segmented apertures; and suggests that a 3- or 4-ring segmented aperture is more sensitive to segment rigid body motion than an aperture with fewer or more segments.

  7. AxonSeg: Open Source Software for Axon and Myelin Segmentation and Morphometric Analysis

    PubMed Central

    Zaimi, Aldo; Duval, Tanguy; Gasecka, Alicja; Côté, Daniel; Stikov, Nikola; Cohen-Adad, Julien

    2016-01-01

    Segmenting axon and myelin from microscopic images is relevant for studying the peripheral and central nervous system and for validating new MRI techniques that aim at quantifying tissue microstructure. While several software packages have been proposed, their interface is sometimes limited and/or they are designed to work with a specific modality (e.g., scanning electron microscopy (SEM) only). Here we introduce AxonSeg, which allows users to perform automatic axon and myelin segmentation on histology images and to extract relevant morphometric information, such as the axon diameter distribution, axon density and the myelin g-ratio. AxonSeg includes a simple and intuitive MATLAB-based graphical user interface (GUI) and can easily be adapted to a variety of imaging modalities. The main steps of AxonSeg consist of: (i) image pre-processing; (ii) pre-segmentation of axons over a cropped image and discriminant analysis (DA) to select the best parameters based on axon shape and intensity information; (iii) automatic axon and myelin segmentation over the full image; and (iv) atlas-based statistics to extract morphometric information. Segmentation results from standard optical microscopy (OM), SEM and coherent anti-Stokes Raman scattering (CARS) microscopy are presented, along with validation against manual segmentations. Being fully automatic after a quick manual intervention on a cropped image, we believe AxonSeg will be useful to researchers interested in high-throughput histology. AxonSeg is open source and freely available at: https://github.com/neuropoly/axonseg. PMID:27594833

  8. Fire flame detection using color segmentation and space-time analysis

    NASA Astrophysics Data System (ADS)

    Ruchanurucks, Miti; Saengngoen, Praphin; Sajjawiso, Theeraphat

    2011-10-01

    This paper presents fire flame detection using CCTV cameras based on image processing. The scheme relies on color segmentation and space-time analysis. The segmentation is performed to extract fire-like-color regions in an image. Many methods are benchmarked against each other to find the best one for practical CCTV cameras. After that, the space-time analysis is used to recognize fire behavior. A space-time window is generated from the contour of the thresholded image. Feature extraction is done in the Fourier domain of the window. A neural network is used for behavior recognition. The system is shown to be practical and robust.
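
    Rule-based color segmentation of fire pixels typically tests for a bright, red-dominant color ordering. A sketch of one common heuristic (an illustrative choice, not necessarily the rule the paper's benchmark selected):

    ```python
    import numpy as np

    def fire_color_mask(rgb, r_threshold=190):
        # Fire-like-color rule: red channel is bright and R >= G >= B,
        # which matches the red-to-yellow gradient of most flames.
        r = rgb[..., 0].astype(int)
        g = rgb[..., 1].astype(int)
        b = rgb[..., 2].astype(int)
        return (r >= r_threshold) & (r >= g) & (g >= b)

    # Tiny 1x2 test image: a flame-orange pixel and a sky-blue pixel.
    img = np.array([[[255, 160, 40], [60, 120, 255]]], dtype=np.uint8)
    mask = fire_color_mask(img)
    ```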

  9. Proteomic Analysis of the Retina: Removal of RPE Alters Outer Segment Assembly and Retinal Protein Expression

    PubMed Central

    Wang, XiaoFei; Nookala, Suba; Narayanan, Chidambarathanu; Giorgianni, Francesco; Beranova-Giorgianni, Sarka; McCollum, Gary; Gerling, Ivan; Penn, John S.; Jablonski, Monica M.

    2008-01-01

    The mechanisms that regulate the complex physiologic task of photoreceptor outer segment assembly remain an enigma. One limiting factor in revealing the mechanism(s) by which this process is modulated is that not all of the role players that participate in this process are known. The purpose of this study was to determine some of the retinal proteins that likely play a critical role in regulating photoreceptor outer segment assembly. To do so, we analyzed and compared the proteome map of tadpole Xenopus laevis retinal pigment epithelium (RPE)-supported retinas containing organized outer segments with that of RPE-deprived retinas containing disorganized outer segments. Solubilized proteins were labeled with CyDye fluors followed by multiplexed two-dimensional separation. The intensity of protein spots and comparison of proteome maps was performed using DeCyder software. Identification of differentially regulated proteins was determined using nanoLC-ESI-MS/MS analysis. We found a total of 27 protein spots, 21 of which were unique proteins, which were differentially expressed in retinas with disorganized outer segments. We predict that in the absence of the RPE, oxidative stress initiates an unfolded protein response. Subsequently, downregulation of several candidate Müller glial cell proteins may explain the inability of photoreceptors to properly fold their outer segment membranes. In this study we have used identification and bioinformatics assessment of proteins that are differentially expressed in retinas with disorganized outer segments as a first step in determining probable key molecules involved in regulating photoreceptor outer segment assembly. PMID:18803304

  10. The Prognostic Impact of In-Hospital Change in Mean Platelet Volume in Patients With Non-ST-Segment Elevation Myocardial Infarction.

    PubMed

    Kırış, Tuncay; Yazici, Selcuk; Günaydin, Zeki Yüksel; Akyüz, Şükrü; Güzelburç, Özge; Atmaca, Hüsnü; Ertürk, Mehmet; Nazli, Cem; Dogan, Abdullah

    2016-08-01

    It is unclear whether changes in mean platelet volume (MPV) are associated with total mortality in acute coronary syndromes. We investigated whether the change in MPV predicts total mortality in patients with non-ST-segment elevation myocardial infarction (NSTEMI). We retrospectively analyzed 419 consecutive patients (19 patients were excluded). The remaining patients were categorized as survivors (n = 351) or nonsurvivors (n = 49). Measurements of MPV were performed at admission and after 24 hours. The difference between the 2 measurements was considered as the MPV change (ΔMPV). The end point of the study was total mortality at 1-year follow-up. During the follow-up, there were 49 deaths (12.2%). Admission MPV was comparable in the 2 groups. However, both MPV (9.6 ± 1.4 fL vs 9.2 ± 1.0 fL, P = .044) and ΔMPV (0.40 [0.10-0.70] fL vs 0.70 [0.40-1.20] fL, P < .001) at the first 24 hours were higher in nonsurvivors than survivors. In multivariate analysis, ΔMPV was an independent predictor of total mortality (odds ratio: 1.84, 95% confidence interval: 1.28-2.65, P = .001). An early increase in MPV after admission was independently associated with total mortality in patients with NSTEMI. Such patients may need more effective antiplatelet therapy. PMID:26787684

  11. The Influence of Segmental Impedance Analysis in Predicting Validity of Consumer Grade Bioelectrical Impedance Analysis Devices

    NASA Astrophysics Data System (ADS)

    Sharp, Andy; Heath, Jennifer; Peterson, Janet

    2008-05-01

    Consumer grade bioelectric impedance analysis (BIA) instruments measure the body's impedance at 50 kHz and yield a quick estimate of percent body fat. The frequency dependence of the impedance gives more information about the current pathway and the response of different tissues. This study explores the impedance response of human tissue over a range of frequencies from 0.2 to 102 kHz using a four-probe method and probe locations standard for segmental BIA research of the arm. The data at 50 kHz for a 21-year-old healthy Caucasian male (resistance of 180 ± 10 Ω and reactance of 33 ± 2 Ω) are in agreement with previously reported values [1]. The frequency dependence is not consistent with the simple circuit models commonly used in evaluating BIA data, and repeatability of measurements is problematic. This research will contribute to a better understanding of the inherent difficulties in estimating body fat using consumer grade BIA devices. [1] Chumlea, William C., Richard N. Baumgartner, and Alex F. Roche. "Specific resistivity used to estimate fat-free mass from segmental body measures of bioelectrical impedance." Am J Clin Nutr 48 (1998): 7-15.
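
    The "simple circuit model" usually meant here is the three-element Fricke circuit: extracellular resistance Re in parallel with intracellular resistance Ri in series with membrane capacitance Cm. A sketch with illustrative (not measured) parameter values, showing the limiting behavior |Z| → Re at low frequency and |Z| → Re·Ri/(Re+Ri) at high frequency:

    ```python
    import math

    def bia_impedance(f_hz, re_ohm=400.0, ri_ohm=300.0, cm_farad=3e-9):
        # Complex impedance of the three-element Fricke circuit: Re in
        # parallel with (Ri in series with membrane capacitance Cm).
        # Parameter values are illustrative, not fitted to measured data.
        zc = 1.0 / (1j * 2.0 * math.pi * f_hz * cm_farad)
        branch = ri_ohm + zc
        return re_ohm * branch / (re_ohm + branch)

    # Low frequency: current stays in the extracellular path (|Z| -> Re);
    # high frequency: Cm shorts out (|Z| -> Re*Ri/(Re+Ri)).
    z_low = abs(bia_impedance(1.0))
    z_mid = abs(bia_impedance(50e3))
    z_high = abs(bia_impedance(1e9))
    ```

    Deviations of measured spectra from this monotonic single-dispersion curve are one way the study's data can be inconsistent with the simple model.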

  12. Multi-object segmentation framework using deformable models for medical imaging analysis.

    PubMed

    Namías, Rafael; D'Amato, Juan Pablo; Del Fresno, Mariana; Vénere, Marcelo; Pirró, Nicola; Bellemare, Marc-Emmanuel

    2016-08-01

    Segmenting structures of interest in medical images is an important step in different tasks such as visualization, quantitative analysis, simulation, and image-guided surgery, among several other clinical applications. Numerous segmentation methods have been developed in the past three decades for the extraction of anatomical or functional structures in medical imaging. Deformable models, which include the active contour models or snakes, are among the most popular methods for image segmentation, combining several desirable features such as inherent connectivity and smoothness. Even though different approaches have been proposed and significant work has been dedicated to the improvement of such algorithms, there are still challenging research directions, such as the simultaneous extraction of multiple objects and the integration of individual techniques. This paper presents a novel open-source framework called deformable model array (DMA) for the segmentation of multiple and complex structures of interest in different imaging modalities. While most active contour algorithms can extract one region at a time, DMA allows integrating several deformable models to deal with multiple segmentation scenarios. Moreover, it is possible to consider any existing explicit deformable model formulation and even to incorporate new active contour methods, allowing selection of a suitable combination in different conditions. The framework also introduces a control module that coordinates the cooperative evolution of the snakes and is able to solve interaction issues toward the segmentation goal. Thus, DMA can implement complex object and multi-object segmentations in both 2D and 3D using the contextual information derived from the model interaction. These are important features for several medical image analysis tasks in which different but related objects need to be simultaneously extracted. Experimental results on both computed tomography and magnetic resonance imaging show that the proposed

  13. 3-D segmentation and quantitative analysis of inner and outer walls of thrombotic abdominal aortic aneurysms

    NASA Astrophysics Data System (ADS)

    Lee, Kyungmoo; Yin, Yin; Wahle, Andreas; Olszewski, Mark E.; Sonka, Milan

    2008-03-01

    An abdominal aortic aneurysm (AAA) is an area of localized widening of the abdominal aorta, with a frequent presence of thrombus. A ruptured aneurysm can cause death due to severe internal bleeding. AAA thrombus segmentation and quantitative analysis are of paramount importance for diagnosis, risk assessment, and determination of treatment options. Until now, only a small number of methods for thrombus segmentation and analysis have been presented in the literature, either requiring substantial user interaction or exhibiting insufficient performance. We report a novel method offering minimal user interaction and high accuracy. Our thrombus segmentation method is composed of an initial automated luminal surface segmentation, followed by a cost function-based optimal segmentation of the inner and outer surfaces of the aortic wall. The approach utilizes the power and flexibility of the optimal triangle mesh-based 3-D graph search method, in which cost functions for thrombus inner and outer surfaces are based on gradient magnitudes. Sometimes local failures caused by image ambiguity occur, in which case several control points are used to guide the computer segmentation without the need to trace borders manually. Our method was tested in 9 MDCT image datasets (951 image slices). With the exception of a case in which the thrombus was highly eccentric, visually acceptable aortic lumen and thrombus segmentation results were achieved. No user interaction was used in 3 out of 8 datasets, and 7.80 ± 2.71 mouse clicks per case / 0.083 ± 0.035 mouse clicks per image slice were required in the remaining 5 datasets.

  14. Combining multiset resolution and segmentation for hyperspectral image analysis of biological tissues.

    PubMed

    Piqueras, S; Krafft, C; Beleites, C; Egodage, K; von Eggeling, F; Guntinas-Lichius, O; Popp, J; Tauler, R; de Juan, A

    2015-06-30

    Hyperspectral images can provide useful biochemical information about tissue samples. Often, Fourier transform infrared (FTIR) images have been used to distinguish different tissue elements and changes caused by pathology. The spectral variation between tissue types and pathological states is very small, and multivariate analysis methods are required to describe these subtle changes adequately. In this work, a strategy combining multivariate curve resolution-alternating least squares (MCR-ALS), a resolution (unmixing) method, which recovers distribution maps and pure spectra of image constituents, and K-means clustering, a segmentation method, which identifies groups of similar pixels in an image, is used to provide efficient information on tissue samples. First, multiset MCR-ALS analysis is performed on the set of images related to a particular pathology status to provide basic spectral signatures and distribution maps of the biological contributions needed to describe the tissues. Later on, multiset segmentation analysis is applied to the obtained MCR scores (concentration profiles), used as compressed initial information for segmentation purposes. The multiset idea is transferred to perform image segmentation of different tissue samples. In this way, a distinction can be made between clusters associated with relevant biological parts common to all images, linked to general trends of the type of samples analyzed, and sample-specific clusters that reflect the natural biological sample-to-sample variability. The last step consists of performing separate multiset MCR-ALS analyses on the pixels of each of the relevant segmentation clusters for the pathology studied to obtain a finer description of the related tissue parts. The potential of the strategy combining multiset resolution on complete images, multiset segmentation and multiset local resolution analysis will be shown in a study focused on FTIR images of tissue sections recorded on inflamed and non
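
    The segmentation step clusters pixels on their MCR concentration scores rather than on full spectra, which is what makes the scores "compressed initial information." A minimal K-means sketch of that step on synthetic score vectors (MCR-ALS itself is not reproduced here):

    ```python
    import numpy as np

    def kmeans(scores, k, iters=25, seed=0):
        # Minimal K-means on per-pixel MCR score vectors: alternate
        # nearest-center assignment and center update.
        rng = np.random.default_rng(seed)
        centers = scores[rng.choice(len(scores), size=k, replace=False)]
        for _ in range(iters):
            d2 = ((scores[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
            labels = d2.argmin(axis=1)
            centers = np.array([
                scores[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                for j in range(k)
            ])
        return labels

    # Two well-separated synthetic "tissue" score clouds (3 MCR components).
    rng = np.random.default_rng(1)
    scores = np.vstack([rng.normal(0.0, 0.05, (50, 3)),
                        rng.normal(1.0, 0.05, (50, 3))])
    labels = kmeans(scores, k=2)
    ```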

  15. Segmentation of vascular structures and hematopoietic cells in 3D microscopy images and quantitative analysis

    NASA Astrophysics Data System (ADS)

    Mu, Jian; Yang, Lin; Kamocka, Malgorzata M.; Zollman, Amy L.; Carlesso, Nadia; Chen, Danny Z.

    2015-03-01

    In this paper, we present image processing methods for the quantitative study of bone marrow microenvironment changes (characterized by altered vascular structure and hematopoietic cell distribution) caused by diseases or various factors. We develop algorithms that automatically segment vascular structures and hematopoietic cells in 3-D microscopy images, perform quantitative analysis of the properties of the segmented vascular structures and cells, and examine how such properties change. In processing images, we apply local thresholding to segment vessels and add post-processing steps to deal with imaging artifacts. We propose an improved watershed algorithm that relies on both intensity and shape information and can separate multiple overlapping cells better than common watershed methods. We then quantitatively compute various features of the vascular structures and hematopoietic cells, such as the branches and sizes of vessels and the distribution of cells. In analyzing vascular properties, we provide algorithms for pruning spurious vessel segments and branches based on vessel skeletons. Our algorithms can segment vascular structures and hematopoietic cells with good quality. We use our methods to quantitatively examine the changes in the bone marrow microenvironment caused by deletion of the Notch pathway. Our quantitative analysis reveals property changes in samples with the Notch pathway deleted. Our tool is useful for biologists to quantitatively measure changes in the bone marrow microenvironment and for developing possible therapeutic strategies to aid bone marrow microenvironment recovery.
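
    The "shape information" in watershed-based cell splitting usually comes from the distance transform: peaks of the distance map provide one seed per cell even when cells touch. A sketch of that shape cue alone (the paper's full algorithm also uses intensity; the window size and thresholds below are assumptions):

    ```python
    import numpy as np
    from scipy import ndimage as ndi

    def split_markers(binary_mask):
        # Seeds for marker-based watershed: local maxima of the Euclidean
        # distance transform inside the mask, one cluster per cell.
        dist = ndi.distance_transform_edt(binary_mask)
        local_max = (ndi.maximum_filter(dist, size=5) == dist) & binary_mask & (dist > 1)
        markers, n = ndi.label(local_max)
        return markers, n

    # Two overlapping discs form a single connected blob, but the distance
    # transform still has one peak per cell.
    yy, xx = np.mgrid[0:40, 0:40]
    mask = (((yy - 20) ** 2 + (xx - 14) ** 2) <= 64) | (((yy - 20) ** 2 + (xx - 26) ** 2) <= 64)
    markers, n = split_markers(mask)
    ```

    Passing these markers to a watershed on the inverted distance (or intensity) map then assigns each pixel of the blob to its nearest seed, separating the touching cells.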

  16. Segmented K-mer and its application on similarity analysis of mitochondrial genome sequences.

    PubMed

    Yu, Hong-Jie

    2013-04-15

    K-mer-based approaches have been widely used in similarity analyses to discover similarity/dissimilarity among different biological sequences. In this study, we have improved the traditional K-mer method and introduce a segmented K-mer approach (s-K-mer). After each primary sequence is divided into several segments, we simultaneously transform all these segments into corresponding K-mer-based vectors. In this approach, it is vital to determine the optimal combination of distance metric, value of K, and number of segments, i.e., (K*, s*, d*). Based on the cascaded feature vectors transformed from s* segmented sequences, we analyze 34 mammalian genome sequences using the proposed s-K-mer approach. Meanwhile, we compare the results of s-K-mer with those of the traditional K-mer method. The contrastive analysis results demonstrate that the s-K-mer approach outperforms the traditional K-mer method in similarity analysis among different species. PMID:23353775
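
    The cascaded feature vector is simply the concatenation of per-segment K-mer count vectors. A toy sketch of that construction (the selection of the optimal K*, s*, and distance d* is the tuning step described in the abstract and is not shown):

    ```python
    from itertools import product

    def s_kmer_vector(seq, k=2, s=3, alphabet="ACGT"):
        # Split the sequence into s near-equal segments and concatenate
        # the K-mer count vectors of the segments.
        kmers = ["".join(p) for p in product(alphabet, repeat=k)]
        bounds = [round(i * len(seq) / s) for i in range(s + 1)]
        vec = []
        for a, b in zip(bounds, bounds[1:]):
            counts = dict.fromkeys(kmers, 0)
            seg = seq[a:b]
            for i in range(len(seg) - k + 1):
                counts[seg[i:i + k]] += 1
            vec.extend(counts[m] for m in kmers)
        return vec

    # Three segments of "ACGT" each contribute one count per nucleotide.
    v = s_kmer_vector("ACGTACGTACGT", k=1, s=3)
    ```

    With s=1 this reduces to the traditional K-mer vector; the cascaded vectors are then compared with a distance metric d to score sequence similarity.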

  17. Morphotectonic Index Analysis as an Indicator of Neotectonic Segmentation of the Nicoya Peninsula, Costa Rica

    NASA Astrophysics Data System (ADS)

    Morrish, S.; Marshall, J. S.

    2013-12-01

    The Nicoya Peninsula lies within the Costa Rican forearc, where the Cocos plate subducts under the Caribbean plate at ~8.5 cm/yr. Rapid plate convergence produces frequent large earthquakes (~50 yr recurrence interval) and pronounced crustal deformation (0.1-2.0 m/ky uplift). Seven uplifted segments have been identified in previous studies using broad geomorphic surfaces (Hare & Gardner 1984) and late Quaternary marine terraces (Marshall et al. 2010). These surfaces suggest long-term net uplift and segmentation of the peninsula in response to contrasting domains of subducting seafloor (EPR, CNS-1, CNS-2). In this study, newer 10 m contour digital topographic data (CENIGA- Terra Project) will be used to characterize and delineate this segmentation using morphotectonic analysis of drainage basins and correlation of fluvial terrace/geomorphic surface elevations. The peninsula has six primary watersheds which drain into the Pacific Ocean: the Río Andamojo, Río Tabaco, Río Nosara, Río Ora, Río Bongo, and Río Ario, which range in area from 200 km2 to 350 km2. The trunk rivers follow major lineaments that define morphotectonic segment boundaries, and their drainage basins are in turn bisected by these lineaments. Morphometric analysis of the lower (1st and 2nd) order drainage basins will provide insight into segmented tectonic uplift and deformation by comparing values of drainage basin asymmetry, stream length gradient, and hypsometry with respect to margin segmentation and subducting seafloor domain. A general geomorphic analysis will be conducted alongside the morphometric analysis to map previously recognized (Morrish et al. 2010) but poorly characterized late Quaternary fluvial terraces. Stream capture and drainage divide migration are common processes throughout the peninsula in response to the ongoing deformation. Identification and characterization of basin piracy throughout the peninsula will provide insight into the history of landscape evolution in response to

  18. Volume accumulator design analysis computer codes

    NASA Technical Reports Server (NTRS)

    Whitaker, W. D.; Shimazaki, T. T.

    1973-01-01

    The computer codes, VANEP and VANES, were written and used to aid in the design and performance calculation of the volume accumulator units (VAU) for the 5-kwe reactor thermoelectric system. VANEP computes the VAU design which meets the primary coolant loop VAU volume and pressure performance requirements. VANES computes the performance of the VAU design, determined from the VANEP code, at the conditions of the secondary coolant loop. The codes can also compute the performance characteristics of the VAU's under conditions of possible modes of failure which still permit continued system operation.

  19. Loads analysis and testing of flight configuration solid rocket motor outer boot ring segments

    NASA Technical Reports Server (NTRS)

    Ahmed, Rafiq

    1990-01-01

    Loads testing was performed on in-house-fabricated flight configuration Solid Rocket Motor (SRM) outer boot ring segments. The tests determined the bending strength and bending stiffness of these beams and showed that the results compared well with hand analysis. The bending stiffness test results also compared very well with the finite element data.

  20. Analysis of the ISS Russian Segment Outer Surface Materials Installed on the CKK Detachable Cassette

    NASA Astrophysics Data System (ADS)

    Naumov, S. F.; Borisov, V. A.; Plotnikov, A. D.; Sokolova, S. P.; Kurilenok, A. O.; Skurat, V. E.; Leipunsky, I. O.; Pshechenkov, P. A.; Beryozkina, N. G.; Volkov, I. O.

    2009-01-01

    This report presents an analysis of the effects caused by space environmental factors (SEF) and the International Space Station's (ISS) outer environment on operational parameters of the outer surface materials of the ISS Russian Segment (RS). The tests were performed using detachable container cassettes (CKK) that serve as a part of the ISS RS contamination control system.

  1. Scientific and clinical evidence for the use of fetal ECG ST segment analysis (STAN).

    PubMed

    Steer, Philip J; Hvidman, Lone Egly

    2014-06-01

    Fetal electrocardiogram waveform analysis has been studied for many decades, but it is only in the last 20 years that computerization has made real-time analysis practical for clinical use. Changes in the ST segment have been shown to correlate with fetal condition, in particular with acid-base status. Meta-analysis of randomized trials (five in total, four using the computerized system) has shown that use of computerized ST segment analysis (STAN) reduces the need for fetal blood sampling by about 40%. However, although there are trends to lower rates of low Apgar scores and acidosis, the differences are not statistically significant. There is no effect on cesarean section rates. Disadvantages include the need for amniotic membranes to be ruptured so that a fetal scalp electrode can be applied, and the need for STAN values to be interpreted in conjunction with detailed fetal heart rate pattern analysis. PMID:24597897

  2. Effect of ST segment measurement point on performance of exercise ECG analysis.

    PubMed

    Lehtinen, R; Sievänen, H; Turjanmaa, V; Niemelä, K; Malmivuo, J

    1997-10-10

    To evaluate the effect of the ST-segment measurement point on the diagnostic performance of the ST-segment/heart rate (ST/HR) hysteresis, the ST/HR index, and the end-exercise ST-segment depression in the detection of coronary artery disease, we analysed the exercise electrocardiograms of 347 patients using ST-segment depression measured at 0, 20, 40, 60 and 80 ms after the J-point. Of these patients, 127 had significant coronary artery disease and 13 had no significant coronary artery disease according to angiography, 18 had no myocardial perfusion defect according to technetium-99m sestamibi single-photon emission computed tomography, and 189 were clinically 'normal', having a low likelihood of coronary artery disease. Comparison of areas under the receiver operating characteristic curves showed that the discriminative capacity of the above diagnostic variables improved systematically up to the ST-segment measurement point of 60 ms after the J-point. As compared to analysis at the J-point (0 ms), the areas based on the 60-ms point were 89 vs. 84% (p=0.0001) for the ST/HR hysteresis, 83 vs. 76% (p<0.0001) for the ST/HR index, and 76 vs. 61% (p<0.0001) for the end-exercise ST depression. These findings suggest that ST-segment measurement at 60 ms after the J-point is the most reasonable choice in terms of the discriminative capacity of both the simple and the heart rate-adjusted indices of ST depression. Moreover, the ST/HR hysteresis had the best discriminative capacity independently of the ST-segment measurement point, an observation that gives further support to the clinical utility of this new method in the detection of coronary artery disease. PMID:9363740
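
    The heart-rate adjustment underlying the ST/HR index can be written down in a few lines. This is a textbook-style sketch, not the study's code; the function name and units are illustrative, and the ST values are assumed to be depressions measured at a fixed offset (e.g., 60 ms) after the J-point.

```python
def st_hr_index(st_rest_uv, st_peak_uv, hr_rest_bpm, hr_peak_bpm):
    """ST/HR index: change in ST depression divided by change in heart rate.

    ST depression in microvolts, heart rate in beats per minute, so the
    result is in microvolts per bpm.
    """
    d_st = st_peak_uv - st_rest_uv    # exercise-induced change in ST depression
    d_hr = hr_peak_bpm - hr_rest_bpm  # exercise-induced change in heart rate
    if d_hr <= 0:
        raise ValueError("heart rate must rise from rest to peak exercise")
    return d_st / d_hr
```

    Normalizing by the heart-rate change is what distinguishes the heart rate-adjusted indices from the simple end-exercise ST depression compared in the study.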

  3. Segmentation and Image Analysis of Abnormal Lungs at CT: Current Approaches, Challenges, and Future Trends

    PubMed Central

    Mansoor, Awais; Foster, Brent; Xu, Ziyue; Papadakis, Georgios Z.; Folio, Les R.; Udupa, Jayaram K.; Mollura, Daniel J.

    2015-01-01

    The computer-based process of identifying the boundaries of the lung from surrounding thoracic tissue on computed tomographic (CT) images, called segmentation, is a vital first step in radiologic pulmonary image analysis. Many algorithms and software platforms provide image segmentation routines for quantification of lung abnormalities; however, nearly all of the current image segmentation approaches apply well only if the lungs exhibit minimal or no pathologic conditions. When moderate to high amounts of disease or abnormalities with a challenging shape or appearance exist in the lungs, computer-aided detection systems are likely to fail to depict those abnormal regions because of inaccurate segmentation methods. In particular, abnormalities such as pleural effusions, consolidations, and masses often cause inaccurate lung segmentation, which greatly limits the use of image processing methods in clinical and research contexts. In this review, a critical summary of the current methods for lung segmentation on CT images is provided, with special emphasis on the accuracy and performance of the methods in cases with abnormalities and cases with exemplary pathologic findings. The currently available segmentation methods can be divided into five major classes: (a) thresholding-based, (b) region-based, (c) shape-based, (d) neighboring anatomy–guided, and (e) machine learning–based methods. The feasibility of each class and its shortcomings are explained and illustrated with the most common lung abnormalities observed on CT images. Finally, practical applications and evolving technologies that combine the presented approaches are outlined for the practicing radiologist. ©RSNA, 2015 PMID:26172351
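
    As a concrete instance of class (a), a minimal thresholding-based lung sketch might keep low-attenuation voxels, discard air connected to the image border, and retain the largest remaining components. The HU threshold, the 2-D slice processing, and the two-component assumption below are simplifications; real pipelines need exactly the refinements for abnormal lungs that this review discusses.

```python
import numpy as np
from scipy import ndimage as ndi

def threshold_lung_mask(ct_hu, hu_threshold=-400, n_components=2):
    """Naive lung mask for a 2-D CT slice in Hounsfield units."""
    air_like = ct_hu < hu_threshold
    # Remove air connected to the image border (air outside the body).
    labels, _ = ndi.label(air_like)
    border_labels = (set(labels[0, :]) | set(labels[-1, :])
                     | set(labels[:, 0]) | set(labels[:, -1]))
    for lbl in border_labels:
        air_like[labels == lbl] = False
    # Keep the n largest remaining components (nominally the two lungs).
    labels, n = ndi.label(air_like)
    if n == 0:
        return np.zeros_like(air_like)
    sizes = ndi.sum(air_like, labels, index=range(1, n + 1))
    keep = np.argsort(sizes)[::-1][:n_components] + 1
    return np.isin(labels, keep)
```

    Pleural effusions and consolidations raise attenuation above any air threshold, which is precisely why such regions vanish from a mask like this one.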

  4. Moving cast shadow resistant for foreground segmentation based on shadow properties analysis

    NASA Astrophysics Data System (ADS)

    Zhou, Hao; Gao, Yun; Yuan, Guowu; Ji, Rongbin

    2015-12-01

    Moving object detection is a fundamental task in machine vision applications. However, moving cast shadow detection is one of the major concerns for accurate video segmentation. Because detected moving object areas often contain shadow points, errors may arise in measurement, localization, segmentation, classification, and tracking. A novel shadow elimination algorithm is proposed in this paper. A set of suspected moving object areas is detected by the adaptive Gaussian approach. A model is established based on analysis of shadow optical properties, and shadow regions are discriminated from the set of moving pixels by using the properties of brightness, chromaticity and texture in sequence.
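
    The brightness/chromaticity part of such a shadow test can be sketched as follows: a foreground pixel is flagged as shadow when its chromaticity stays close to the background's while its brightness is attenuated. The normalized-RGB chromaticity, the threshold values, and the function name are illustrative assumptions; the paper's model (and its texture step, omitted here) may differ.

```python
import numpy as np

def shadow_mask(frame, background, beta_low=0.4, beta_high=0.9, tau_c=0.05):
    """Per-pixel shadow test against a background model (H x W x 3 arrays)."""
    f = frame.astype(float) + 1e-6
    b = background.astype(float) + 1e-6
    # Shadow attenuates brightness but does not extinguish it.
    brightness_ratio = f.sum(axis=2) / b.sum(axis=2)
    # Normalized-RGB chromaticity should barely change under shadow.
    chroma_f = f / f.sum(axis=2, keepdims=True)
    chroma_b = b / b.sum(axis=2, keepdims=True)
    chroma_dist = np.abs(chroma_f - chroma_b).sum(axis=2)
    return ((brightness_ratio >= beta_low)
            & (brightness_ratio <= beta_high)
            & (chroma_dist < tau_c))
```

    Pixels passing this test are removed from the suspected moving object areas, leaving the true object silhouette for later classification and tracking.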

  5. Segmentation, statistical analysis, and modelling of the wall system in ceramic foams

    SciTech Connect

    Kampf, Jürgen; Schlachter, Anna-Lena; Redenbach, Claudia; Liebscher, André

    2015-01-15

    Closed walls in otherwise open foam structures may have a great impact on macroscopic properties of the materials. In this paper, we present two algorithms for the segmentation of such closed walls from micro-computed tomography images of the foam structure. The techniques are compared on simulated data and applied to tomographic images of ceramic filters. This allows for a detailed statistical analysis of the normal directions and sizes of the walls. Finally, we explain how the information derived from the segmented wall system can be included in a stochastic microstructure model for the foam.

  6. Patient Segmentation Analysis Offers Significant Benefits For Integrated Care And Support.

    PubMed

    Vuik, Sabine I; Mayer, Erik K; Darzi, Ara

    2016-05-01

    Integrated care aims to organize care around the patient instead of the provider. It is therefore crucial to understand differences across patients and their needs. Segmentation analysis that uses big data can help divide a patient population into distinct groups, which can then be targeted with care models and intervention programs tailored to their needs. In this article we explore the potential applications of patient segmentation in integrated care. We propose a framework for population strategies in integrated care-whole populations, subpopulations, and high-risk populations-and show how patient segmentation can support these strategies. Through international case examples, we illustrate practical considerations such as choosing a segmentation logic, accessing data, and tailoring care models. Important issues for policy makers to consider are trade-offs between simplicity and precision, trade-offs between customized and off-the-shelf solutions, and the availability of linked data sets. We conclude that segmentation can provide many benefits to integrated care, and we encourage policy makers to support its use. PMID:27140981

  7. Segmented assimilation and attitudes toward psychotherapy: a moderated mediation analysis.

    PubMed

    Rogers-Sirin, Lauren

    2013-07-01

    The present study examines the relations between acculturative stress, mental health, and attitudes toward psychotherapy, and whether these relations are the same for immigrants of color and White immigrants. This study predicted that acculturative stress would have a significant, negative relation with attitudes toward psychotherapy and that this relation would be moderated by race (immigrants of color and White immigrants) so that as acculturative stress increases, attitudes toward psychotherapy become more negative for immigrants of color but not White immigrants. Finally, mental health was predicted to mediate the relation between acculturative stress and attitudes toward psychotherapy for immigrants of color, but not White immigrants. Participants were 149 first-generation, immigrant, young adults, between the ages of 18 and 29, who identified as White, Black, Latino, or Asian. A significant negative correlation was found between acculturative stress and attitudes toward psychotherapy. A moderated mediation analysis demonstrated that the negative relation between acculturative stress and attitudes toward psychotherapy was mediated by mental health symptoms for immigrants of color but not White immigrants. PMID:23544838

  8. Computer model analysis of the relationship of ST-segment and ST-segment/heart rate slope response to the constituents of the ischemic injury source.

    PubMed

    Hyttinen, J; Viik, J; Lehtinen, R; Plonsey, R; Malmivuo, J

    1997-07-01

    The objective of the study was to investigate a proposed linear relationship between the extent of myocardial ischemic injury and the ST-segment/heart rate (ST/HR) slope by computer simulation of the injury sources arising in exercise electrocardiographic (ECG) tests. The extent and location of the ischemic injury were simulated for both single- and multivessel coronary artery disease by use of an accurate source-volume conductor model which assumes a linear relationship between heart rate and extent of ischemia. The results indicated that in some cases the ST/HR slope in leads II, aVF, and especially V5 may be related to the extent of ischemia. However, the simulations demonstrated that neither the ST-segment deviation nor the ST/HR slope was directly proportional to either the area of the ischemic boundary or the number of vessels occluded. Furthermore, in multivessel coronary artery disease, the temporal and spatial diversity of the generated multiple injury sources distorted the presumed linearity between ST-segment deviation and heart rate. It was concluded that the ST/HR slope and ST-segment deviation of the 12-lead ECG are not able to indicate extent of ischemic injury or number of vessels occluded. PMID:9261724

  9. Phantom-based ground-truth generation for cerebral vessel segmentation and pulsatile deformation analysis

    NASA Astrophysics Data System (ADS)

    Schetelig, Daniel; Säring, Dennis; Illies, Till; Sedlacik, Jan; Kording, Fabian; Werner, René

    2016-03-01

    Hemodynamic and mechanical factors of the vascular system are assumed to play a major role in understanding, e.g., the initiation, growth and rupture of cerebral aneurysms. Among those factors, cardiac cycle-related pulsatile motion and deformation of cerebral vessels currently attract much interest. However, imaging of those effects requires high spatial and temporal resolution and remains challenging, and so does the analysis of the acquired images: flow velocity changes and contrast media inflow cause vessel intensity variations in related temporally resolved computed tomography and magnetic resonance angiography data over the cardiac cycle and impede the application of intensity threshold-based segmentation and subsequent motion analysis. In this work, a flow phantom for the generation of ground-truth images for evaluation of appropriate segmentation and motion analysis algorithms is developed. The acquired ground-truth data are used to illustrate the interplay between intensity fluctuations and (erroneous) motion quantification by standard threshold-based segmentation, and an adaptive threshold-based segmentation approach is proposed that alleviates the respective issues. The results of the phantom study are further demonstrated to be transferable to patient data.
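
    The adaptive idea can be approximated by deriving a per-frame threshold from each frame's own intensity statistics, so global intensity fluctuations over the cardiac cycle do not shift the segmented vessel boundary and masquerade as motion. The percentile rule and function name below are illustrative stand-ins for the paper's actual adaptation scheme.

```python
import numpy as np

def adaptive_vessel_masks(frames, percentile=95.0):
    """frames: (t, h, w) angiography time series -> list of binary masks.

    Each frame gets its own threshold, so a global brightness change
    (e.g., contrast inflow) leaves the segmented region unchanged.
    """
    masks = []
    for frame in frames:
        thr = np.percentile(frame, percentile)
        masks.append(frame >= thr)
    return masks
```

    With a single fixed threshold, a uniformly brighter frame would grow the mask and a dimmer one would shrink it, producing exactly the spurious "motion" the phantom study demonstrates.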

  10. FIELD VALIDATION OF EXPOSURE ASSESSMENT MODELS. VOLUME 2. ANALYSIS

    EPA Science Inventory

    This is the second of two volumes describing a series of dual tracer experiments designed to evaluate the PAL-DS model, a Gaussian diffusion model modified to take into account settling and deposition, as well as three other deposition models. In this volume, an analysis of the d...

  11. Segment and fit thresholding: a new method for image analysis applied to microarray and immunofluorescence data.

    PubMed

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B

    2015-10-01

    Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis. PMID:26339978
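
    A loose, hypothetical sketch of the SFT workflow described above: tile the image, collect per-tile statistics, treat the low-variance tiles as background, and derive a threshold from the background statistics. The fixed tile size, the background fraction, and the mean-plus-k-sigma rule are simplified stand-ins for SFT's actual best-fit trend analysis between segment statistics.

```python
import numpy as np

def sft_like_threshold(image, tile=8, bg_fraction=0.5, k=3.0):
    """Tile-statistics thresholding: returns a binary signal mask."""
    h, w = image.shape
    tiles = []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            tiles.append(image[r:r + tile, c:c + tile])
    # Low-variance tiles are taken as background candidates.
    tiles.sort(key=lambda p: p.std())
    n_bg = max(1, int(len(tiles) * bg_fraction))
    bg_pixels = np.concatenate([p.ravel() for p in tiles[:n_bg]])
    # Signal: pixels well above the background distribution.
    threshold = bg_pixels.mean() + k * bg_pixels.std()
    return image > threshold
```

    Deriving the threshold from background segments rather than from the whole histogram is what lets this family of methods cope with images whose signal fraction varies widely, the situation where a single fixed threshold or Otsu's method struggles.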

  12. An approach to multi-temporal MODIS image analysis using image classification and segmentation

    NASA Astrophysics Data System (ADS)

    Senthilnath, J.; Bajpai, Shivesh; Omkar, S. N.; Diwakar, P. G.; Mani, V.

    2012-11-01

    This paper discusses an approach for river mapping and flood evaluation based on multi-temporal time series analysis of satellite images utilizing pixel spectral information for image classification and region-based segmentation for extracting water-covered regions. Analysis of MODIS satellite images is applied in three stages: before flood, during flood and after flood. Water regions are extracted from the MODIS images using image classification (based on spectral information) and image segmentation (based on spatial information). Multi-temporal MODIS images from "normal" (non-flood) and flood time-periods are processed in two steps. In the first step, image classifiers such as Support Vector Machines (SVM) and Artificial Neural Networks (ANN) separate the image pixels into water and non-water groups based on their spectral features. The classified image is then segmented using spatial features of the water pixels to remove the misclassified water. From the results obtained, we evaluate the performance of the method and conclude that the use of image classification (SVM and ANN) and region-based image segmentation is an accurate and reliable approach for the extraction of water-covered regions.
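
    The two-step classify-then-segment idea above can be sketched with an off-the-shelf SVM followed by a connected-component cleanup that drops tiny water regions as likely misclassifications. The spectral features, the RBF kernel, and the minimum-region-size rule are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy import ndimage as ndi
from sklearn.svm import SVC

def classify_water(train_pixels, train_labels, image_pixels, shape, min_region=5):
    """Step 1: spectral SVM classification; step 2: spatial cleanup."""
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(train_pixels, train_labels)
    water = clf.predict(image_pixels).reshape(shape).astype(bool)
    # Spatial step: remove tiny water regions as likely misclassification.
    labels, n = ndi.label(water)
    for lbl in range(1, n + 1):
        if (labels == lbl).sum() < min_region:
            water[labels == lbl] = False
    return water
```

    The spatial pass encodes the assumption that genuine water bodies are spatially coherent, which is what the region-based segmentation stage contributes beyond per-pixel spectral classification.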

  13. Segment and Fit Thresholding: A New Method for Image Analysis Applied to Microarray and Immunofluorescence Data

    PubMed Central

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M.; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E.; Allen, Peter J.; Sempere, Lorenzo F.; Haab, Brian B.

    2016-01-01

    Certain experiments involve the high-throughput quantification of image data, thus requiring algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multi-color, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu’s method for selected images. SFT promises to advance the goal of full automation in image analysis. PMID:26339978

  14. Mean-Field Analysis of Recursive Entropic Segmentation of Biological Sequences

    NASA Astrophysics Data System (ADS)

    Cheong, Siew-Ann; Stodghill, Paul; Schneider, David; Myers, Christopher

    2007-03-01

    Horizontal gene transfer in bacteria results in genomic sequences which are mosaic in nature. An important first step in the analysis of a bacterial genome would thus be to model the statistically nonstationary nucleotide or protein sequence with a collection of P stationary Markov chains, and partition the sequence of length N into M statistically stationary segments/domains. This can be done for Markov chains of order K = 0 using a recursive segmentation scheme based on the Jensen-Shannon divergence, where the unknown parameters P and M are estimated from a hypothesis testing/model selection process. In this talk, we describe how the Jensen-Shannon divergence can be generalized to Markov chains of order K > 0, as well as an algorithm optimizing the positions of a fixed number of domain walls. We then describe a mean field analysis of the generalized recursive Jensen-Shannon segmentation scheme, and show how most domain walls appear as local maxima in the divergence spectrum of the sequence, before highlighting the main problem associated with the recursive segmentation scheme, i.e. the strengths of the domain walls selected recursively do not decrease monotonically. This problem is especially severe in repetitive sequences, whose statistical signatures we will also discuss.
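
    For the order K = 0 case, the Jensen-Shannon score of a candidate cut point can be computed directly from the nucleotide counts of the two halves, and the recursive scheme repeatedly takes the maximizing cut. The sketch below omits the hypothesis-testing stop rule and the K > 0 generalization discussed in the talk; the `margin` parameter is an illustrative guard against degenerate cuts.

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of a symbol-count dictionary."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values() if c)

def js_divergence(left, right):
    """Jensen-Shannon divergence between two subsequences' compositions."""
    cl, cr = Counter(left), Counter(right)
    n, m = len(left), len(right)
    merged = cl + cr
    return entropy(merged) - (n * entropy(cl) + m * entropy(cr)) / (n + m)

def best_cut(seq, margin=10):
    """Cut point maximizing the divergence between the two halves."""
    cuts = range(margin, len(seq) - margin)
    return max(cuts, key=lambda i: js_divergence(seq[:i], seq[i:]))
```

    A true domain wall shows up as a local maximum of this divergence spectrum, which is exactly the behaviour the mean-field analysis in the talk characterizes.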

  15. Microscopy Image Browser: A Platform for Segmentation and Analysis of Multidimensional Datasets

    PubMed Central

    Belevich, Ilya; Joensuu, Merja; Kumar, Darshan; Vihinen, Helena; Jokitalo, Eija

    2016-01-01

    Understanding the structure–function relationship of cells and organelles in their natural context requires multidimensional imaging. As techniques for multimodal 3-D imaging have become more accessible, effective processing, visualization, and analysis of large datasets are posing a bottleneck for the workflow. Here, we present a new software package for high-performance segmentation and image processing of multidimensional datasets that improves and facilitates the full utilization and quantitative analysis of acquired data, which is freely available from a dedicated website. The open-source environment enables modification and insertion of new plug-ins to customize the program for specific needs. We provide practical examples of program features used for processing, segmentation and analysis of light and electron microscopy datasets, and detailed tutorials to enable users to rapidly and thoroughly learn how to use the program. PMID:26727152

  16. A new image segmentation method based on multifractal detrended moving average analysis

    NASA Astrophysics Data System (ADS)

    Shi, Wen; Zou, Rui-biao; Wang, Fang; Su, Le

    2015-08-01

    In order to segment and delineate regions of interest in an image, we propose a novel algorithm based on multifractal detrended moving average analysis (MF-DMA). In this method, the generalized Hurst exponent h(q) is first calculated for every pixel and considered as the local feature of a surface. A multifractal detrended moving average spectrum (MF-DMS) D(h(q)) is then defined using the idea of the box-counting dimension method. We therefore call the new image segmentation method the MF-DMS-based algorithm. The performance of the MF-DMS-based method is tested in two segmentation experiments on rapeseed leaf images of potassium deficiency and magnesium deficiency under three cases, namely backward (θ = 0), centered (θ = 0.5) and forward (θ = 1), with different q values. Comparison experiments are conducted between the MF-DMS method and two other multifractal segmentation methods, namely the popular MFS-based and the more recent MF-DFS-based methods. The results show that our MF-DMS-based method is superior to the latter two. The best segmentation result for the rapeseed leaf images of potassium deficiency and magnesium deficiency comes from the same parameter combination of θ = 0.5 and D(h(-10)) when using the MF-DMS-based method. An interesting finding is that D(h(-10)) outperforms other parameters for both the MF-DMS-based method in the centered case and the MF-DFS-based algorithm. Comparing the multifractal nature of nutrient-deficient and non-deficient areas determined by the segmentation results yields an important finding: the fluctuation of gray values in nutrient-deficient areas is much more severe than in non-deficient areas.

  17. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    SciTech Connect

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi

    2010-05-15

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as the "gold standard." Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold-standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient of 0.95) with no statistically significant difference (F = 0.77; p(F ≤ f) = 0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less completion time
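
    The final volumetry step is straightforward once the refined boundary is available as a binary voxel mask: the volume is the voxel count times the voxel size. A minimal sketch (the default spacing values are illustrative, not from the study):

```python
import numpy as np

def mask_volume_cc(mask, spacing_mm=(0.7, 0.7, 2.5)):
    """Volume of a 3-D binary mask in cubic centimetres.

    spacing_mm: per-axis voxel spacing in millimetres.
    """
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return mask.sum() * voxel_mm3 / 1000.0  # 1 cc = 1000 mm^3
```

    Both the computerized and the manually traced volumes in the study reduce to this computation; the 105 cc mean absolute difference comes entirely from the boundary disagreement, not from the volume formula.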

  18. Mimicking human expert interpretation of remotely sensed raster imagery by using a novel segmentation analysis within ArcGIS

    NASA Astrophysics Data System (ADS)

    Le Bas, Tim; Scarth, Anthony; Bunting, Peter

    2015-04-01

    Traditional computer-based methods for the interpretation of remotely sensed imagery use each pixel individually or the average of a small window of pixels to calculate a class or thematic value, which provides an interpretation. However, when a human expert interprets imagery, the human eye is excellent at finding coherent and homogenous areas and edge features. It may therefore be advantageous for computer analysis to mimic human interpretation. A new toolbox for ArcGIS 10.x will be presented that segments the data layers into a set of polygons. Each polygon is defined by a K-means clustering and region growing algorithm, thus finding areas, their edges and any lineations in the imagery. Attached to each polygon are the characteristics of the imagery, such as the mean and standard deviation of the pixel values within the polygon. The segmentation of imagery into a jigsaw of polygons also has the advantage that the human interpreter does not need to spend hours digitising the boundaries. The segmentation process has been taken from the RSGIS library of analysis and classification routines (Bunting et al., 2014). These routines are freeware and have been modified to be available in the ArcToolbox under the Windows (v7) operating system. Input to the segmentation process is a multi-layered raster image, for example a Landsat image, or a set of raster datasets made up from derivatives of topography. The size and number of polygons are set by the user and are dependent on the imagery used. Examples will be presented of data from the marine environment utilising bathymetric depth, slope, rugosity and backscatter from a multibeam system. Meaningful classification of the polygons using their numerical characteristics is the next goal. Object based image analysis (OBIA) should help this workflow. Fully calibrated imagery systems will allow numerical classification to be translated into more readily understandable terms. Peter Bunting, Daniel Clewley, Richard M. Lucas and Sam

  19. Parametric investigation of Radome analysis methods. Volume 4: Experimental results

    NASA Astrophysics Data System (ADS)

    Bassett, H. L.; Newton, J. M.; Adams, W.; Ussailis, J. S.; Hadsell, M. J.; Huddleston, G. K.

    1981-02-01

    This volume, the fourth of four, presents 140 measured far-field patterns and boresight error data for eight combinations of three monopulse antennas and five tangent ogive Rexolite radomes at 35 GHz. The antennas and radomes, all of different sizes, were selected to provide a range of parameters as found in the applications. The measured data serve as true data in the parametric investigation of radome analysis methods to determine the accuracies and ranges of validity of selected methods of analysis.

  20. Analysis of gene expression levels in individual bacterial cells without image segmentation

    SciTech Connect

    Kwak, In Hae; Son, Minjun; Hagen, Stephen J.

    2012-05-11

    Highlights: • We present a method for extracting gene expression data from images of bacterial cells. • The method does not employ cell segmentation and does not require high magnification. • Fluorescence and phase contrast images of the cells are correlated through the physics of phase contrast. • We demonstrate the method by characterizing noisy expression of comX in Streptococcus mutans. -- Abstract: Studies of stochasticity in gene expression typically make use of fluorescent protein reporters, which permit the measurement of expression levels within individual cells by fluorescence microscopy. Analysis of such microscopy images is almost invariably based on a segmentation algorithm, where the image of a cell or cluster is analyzed mathematically to delineate individual cell boundaries. However, segmentation can be ineffective for studying bacterial cells or clusters, especially at lower magnification, where outlines of individual cells are poorly resolved. Here we demonstrate an alternative method for analyzing such images without segmentation. The method employs a comparison between the pixel brightness in phase contrast vs fluorescence microscopy images. By fitting the correlation between phase contrast and fluorescence intensity to a physical model, we obtain well-defined estimates for the different levels of gene expression that are present in the cell or cluster. The method reveals the boundaries of the individual cells, even if the source images lack the resolution to show these boundaries clearly.
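
    The correlation-fit idea can be reduced to a toy sketch: treat phase-contrast darkness as a proxy for the amount of cell material at each pixel and fit fluorescence against it; the fitted slope then tracks the expression level. The simple linear model replaces the paper's physical model of phase contrast, and the function name is hypothetical.

```python
import numpy as np

def expression_slope(phase, fluorescence):
    """Least-squares slope of fluorescence vs phase-contrast darkness."""
    # Darker phase-contrast pixels correspond to more cell material.
    x = (phase.max() - phase).ravel().astype(float)
    y = fluorescence.ravel().astype(float)
    slope, _intercept = np.polyfit(x, y, 1)
    return slope
```

    Because the fit pools all pixels, no cell boundaries are needed; mixtures of expression levels would appear as distinct branches in the phase-fluorescence scatter rather than a single line.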

  1. Automated segmentation of muscle and adipose tissue on CT images for human body composition analysis

    NASA Astrophysics Data System (ADS)

    Chung, Howard; Cobzas, Dana; Birdsell, Laura; Lieffers, Jessica; Baracos, Vickie

    2009-02-01

    The ability to compute body composition in cancer patients lends itself to determining the specific clinical outcomes associated with fat and lean tissue stores. For example, a wasting syndrome of advanced disease associates with shortened survival. Moreover, certain tissue compartments represent sites for drug distribution and are likely determinants of chemotherapy efficacy and toxicity. CT images are abundant, but these cannot be fully exploited unless there exist practical and fast approaches for tissue quantification. Here we propose a fully automated method for segmenting muscle, visceral adipose tissue (VAT), and subcutaneous adipose tissue (SAT), taking the approach of shape modeling for the analysis of skeletal muscle. Muscle shape is represented using PCA-encoded Free Form Deformations with respect to a mean shape. The shape model is learned from manually segmented images and used in conjunction with a tissue appearance prior. VAT and SAT are segmented based on the final deformed muscle shape. In comparing the automatic and manual methods, coefficients of variation (COV) of 1-2% were similar to or smaller than the inter- and intra-observer COVs reported for manual segmentation.

  2. Stress Analysis of Bolted, Segmented Cylindrical Shells Exhibiting Flange Mating-Surface Waviness

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Phillips, Dawn R.; Raju, Ivatury S.

    2009-01-01

    Bolted, segmented cylindrical shells are a common structural component in many engineering systems, especially aerospace launch vehicles. Segmented shells are often needed due to limitations of manufacturing capabilities or transportation issues related to very long, large-diameter cylindrical shells. These cylindrical shells typically have a flange or ring welded to opposite ends so that shell segments can be mated together and bolted to form a larger structural system. As the diameter of these shells increases, maintaining strict fabrication tolerances for the flanges to be flat and parallel on a welded structure is an extreme challenge. Local fit-up stresses develop in the structure due to flange mating-surface mismatch (flange waviness), and these local stresses need to be considered when predicting a critical initial flaw size. Flange waviness is one contributor to the fit-up stress state. The present paper describes the modeling and analysis effort to simulate fit-up stresses due to flange waviness in a typical bolted, segmented cylindrical shell. Results from parametric studies are presented for various flange mating-surface waviness distributions and amplitudes.

  3. Analysis, design, and test of a graphite/polyimide Shuttle orbiter body flap segment

    NASA Technical Reports Server (NTRS)

    Graves, S. R.; Morita, W. H.

    1982-01-01

    For future missions, increases in Space Shuttle orbiter deliverable and recoverable payload weight capability may be needed. Such increases could be obtained by reducing the inert weight of the Shuttle, and the application of advanced composites in orbiter structural components would make such reductions possible. In 1975, NASA selected the orbiter body flap as a demonstration component for the Composites for Advanced Space Transportation Systems (CASTS) program. The progress made from 1977 through 1980 was integrated into a design of a graphite/polyimide (Gr/Pi) body flap technology demonstration segment (TDS). Aspects of composite body flap design and analysis are discussed, taking into account the direct-bond fibrous refractory composite insulation (FRCI) tile on Gr/Pi structure, Gr/Pi body flap weight savings, the body flap design concept, and composite body flap analysis. Details regarding the Gr/Pi technology demonstration segment are also examined.

  4. Analysis of a 26,756 bp segment from the left arm of yeast chromosome IV.

    PubMed

    Wölfl, S; Hanemann, V; Saluz, H P

    1996-12-01

    The nucleotide sequence of a 26.7 kb DNA segment from the left arm of Saccharomyces cerevisiae chromosome IV is presented. An analysis of this segment revealed 11 open reading frames (ORFs) longer than 300 bp and one split gene. These ORFs include the genes encoding the large subunit of RNA polymerase II, the biotin apo-protein ligase, an ADP-ribosylation factor (ARF 2), the 'L35'-ribosomal protein, a rho GDP dissociation factor, and the sequence encoding the protein phosphatase 2A. Further sequence analysis revealed a short ORF encoding the ribosomal protein YL41B, an intron in a 5' untranslated region and an extended homology with another cosmid (X83276) located on the same chromosome. The potential biological relevance of these findings is discussed. PMID:8972577

  5. Screening Analysis : Volume 1, Description and Conclusions.

    SciTech Connect

    Bonneville Power Administration; Corps of Engineers; Bureau of Reclamation

    1992-08-01

    The SOR consists of three analytical phases leading to a Draft EIS. The first phase, Pilot Analysis, was performed for the purpose of testing the decision analysis methodology being used in the SOR. The Pilot Analysis is described later in this chapter. The second phase, Screening Analysis, examines all possible operating alternatives using a simplified analytical approach. It is described in detail in this and the next chapter. This document also presents the results of screening. The final phase, Full-Scale Analysis, will be documented in the Draft EIS and is intended to comprehensively evaluate the few best alternatives arising from the screening analysis. The purpose of screening is to analyze a wide variety of differing ways of operating the Columbia River system to test the reaction of the system to change. The many alternatives considered reflect the range of needs and requirements of the various river users and interests in the Columbia River Basin. While some of the alternatives might be viewed as extreme, the information gained from the analysis is useful in highlighting issues and conflicts in meeting operating objectives. Screening is also intended to develop a broad technical basis for evaluation, including regional experts, and to begin developing an evaluation capability for each river use that will support full-scale analysis. Finally, screening provides a logical method for examining all possible options and reaching a decision on a few alternatives worthy of full-scale analysis. An organizational structure was developed and staffed to manage and execute the SOR, specifically during the screening phase and the upcoming full-scale analysis phase. The organization involves ten technical work groups, each representing a particular river use. Several other groups exist to oversee or support the efforts of the work groups.

  6. Comparison between Brain Atrophy and Subdural Volume to Predict Chronic Subdural Hematoma: Volumetric CT Imaging Analysis

    PubMed Central

    Ju, Min-Wook; Kwon, Hyon-Jo; Choi, Seung-Won; Koh, Hyeon-Song; Youm, Jin-Young; Song, Shi-Hun

    2015-01-01

    Objective Brain atrophy and subdural hygroma are well-known factors that enlarge the subdural space and induce formation of chronic subdural hematoma (CSDH). Thus, we identified the subdural volume that could be used to predict the rate of future CSDH after head trauma using a computed tomography (CT) volumetric analysis. Methods A single-institution case-control study was conducted involving 1,186 patients who visited our hospital after head trauma from January 1, 2010 to December 31, 2014. Fifty-one patients with delayed CSDH were identified, and 50 age- and sex-matched patients served as controls. Intracranial volume (ICV), the brain parenchyma, and the subdural space were segmented using CT image-based software. To adjust for variations in head size, volume ratios were assessed as a percentage of ICV [brain volume index (BVI), subdural volume index (SVI)]. The maximum depth of the subdural space on both sides was used to estimate the SVI. Results Before adjusting for cranium size, brain volume tended to be smaller, and subdural space volume was significantly larger, in the CSDH group (p=0.138, p=0.021, respectively). The BVI and SVI were significantly different (p=0.003, p=0.001, respectively). SVI [area under the curve (AUC), 77.3%; p=0.008] was a more reliable predictor of CSDH than BVI (AUC, 68.1%; p=0.001). Bilateral subdural depth (the sum of the subdural depth on both sides) increased linearly with SVI (p<0.0001). Conclusion Subdural space volume was significantly larger in the CSDH group. SVI was a more reliable predictor of CSDH, and bilateral subdural depth was useful for estimating SVI. PMID:27169071
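
    The volume-index and AUC computations can be sketched as follows. The SVI values below are hypothetical, and AUC is computed with the standard rank (Mann-Whitney) formulation rather than the study's software:

```python
def volume_index(compartment_ml, icv_ml):
    """Compartment volume as a percentage of intracranial volume (ICV)."""
    return 100.0 * compartment_ml / icv_ml

def auc(case_scores, control_scores):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic."""
    pairs = len(case_scores) * len(control_scores)
    wins = sum((c > k) + 0.5 * (c == k)
               for c in case_scores for k in control_scores)
    return wins / pairs

# Hypothetical subdural volumes (mL) for CSDH cases vs. matched controls,
# normalized by a nominal 1400 mL ICV to give SVI (% of ICV).
cases    = [volume_index(v, 1400.0) for v in (160, 180, 150, 170, 120)]
controls = [volume_index(v, 1400.0) for v in (110, 130, 100, 140, 115)]
print(round(auc(cases, controls), 2))   # 23/25 = 0.92
```

    Normalizing by ICV is what lets patients with different head sizes be pooled into one discriminative index.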

  7. Fetal autonomic brain age scores, segmented heart rate variability analysis, and traditional short term variability.

    PubMed

    Hoyer, Dirk; Kowalski, Eva-Maria; Schmidt, Alexander; Tetschke, Florian; Nowack, Samuel; Rudolph, Anja; Wallwitz, Ulrike; Kynass, Isabelle; Bode, Franziska; Tegtmeyer, Janine; Kumm, Kathrin; Moraru, Liviu; Götz, Theresa; Haueisen, Jens; Witte, Otto W; Schleußner, Ekkehard; Schneider, Uwe

    2014-01-01

    Disturbances of fetal autonomic brain development can be evaluated from fetal heart rate patterns (HRP) reflecting the activity of the autonomic nervous system. Although HRP analysis from cardiotocographic (CTG) recordings is established for fetal surveillance, temporal resolution is low. Fetal magnetocardiography (MCG), however, provides stable continuous recordings at a higher temporal resolution combined with a more precise heart rate variability (HRV) analysis. A direct comparison of CTG and MCG based HRV analysis is pending. The aims of the present study are: (i) to compare the fetal maturation age predicting value of the MCG based fetal Autonomic Brain Age Score (fABAS) approach with that of the CTG based Dawes-Redman methodology; and (ii) to elaborate fABAS methodology by segmentation according to fetal behavioral states and HRP. We investigated MCG recordings from 418 normal fetuses, aged between 21 and 40 weeks of gestation. In linear regression models we obtained an age predicting value of CTG compatible short term variability (STV) of R² = 0.200 (coefficient of determination), in contrast to MCG/fABAS related multivariate models with R² = 0.648 in 30 min recordings, R² = 0.610 in active sleep segments of 10 min, and R² = 0.626 in quiet sleep segments of 10 min. Additionally, segmented analysis under particular exclusion of accelerations (AC) and decelerations (DC) in quiet sleep resulted in a novel multivariate model with R² = 0.706. According to our results, fMCG based fABAS may provide a promising tool for the estimation of fetal autonomic brain age. Besides other traditional and novel HRV indices as possible indicators of developmental disturbances, the establishment of a fABAS score nomogram may represent a specific reference. The present results are intended to contribute to further exploration and validation using independent data sets and multicenter research structures. PMID:25505399
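
    The coefficient of determination used to score age-predicting value can be sketched on hypothetical gestational-age/HRV pairs (not the study's data or its multivariate models; a single-predictor fit for illustration):

```python
def r_squared(x, y):
    """Coefficient of determination R² for a simple linear regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Hypothetical data: gestational age (weeks) vs. a single HRV index.
ages = [22, 25, 28, 31, 34, 37, 40]
hrv  = [3.1, 3.9, 4.2, 5.0, 5.3, 6.2, 6.4]
print(round(r_squared(ages, hrv), 3))
```

    The study's multivariate fABAS models combine several such indices, which is why their R² exceeds that of STV alone.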

  8. Laser power conversion system analysis, volume 1

    NASA Technical Reports Server (NTRS)

    Jones, W. S.; Morgan, L. L.; Forsyth, J. B.; Skratt, J. P.

    1979-01-01

    The orbit-to-orbit laser energy conversion system analysis established a mission model of satellites with various orbital parameters and average electrical power requirements ranging from 1 to 300 kW. The system analysis evaluated various conversion techniques, power system deployment parameters, power system electrical supplies, and other critical subsystems relative to various combinations of the mission model. The analysis showed that the laser power system would not be competitive with current satellite power systems from weight, cost, and development risk standpoints.

  9. Information architecture. Volume 2, Part 1: Baseline analysis summary

    SciTech Connect

    1996-12-01

    The Department of Energy (DOE) Information Architecture, Volume 2, Baseline Analysis, is a collaborative and logical next-step effort in the processes required to produce a Departmentwide information architecture. The baseline analysis serves a diverse audience of program management and technical personnel and provides an organized way to examine the Department's existing or de facto information architecture. A companion document to Volume 1, The Foundations, it furnishes the rationale for establishing a Departmentwide information architecture. This volume, consisting of the Baseline Analysis Summary (part 1), Baseline Analysis (part 2), and Reference Data (part 3), is of interest to readers who wish to understand how the Department's current information architecture technologies are employed. The analysis identifies how and where current technologies support business areas, programs, sites, and corporate systems.

  10. Buckling Design and Analysis of a Payload Fairing One-Sixth Cylindrical Arc-Segment Panel

    NASA Technical Reports Server (NTRS)

    Kosareo, Daniel N.; Oliver, Stanley T.; Bednarcyk, Brett A.

    2013-01-01

    Design and analysis results are reported for a panel that is a one-sixth arc-segment of a full 33-ft diameter cylindrical barrel section of a payload fairing structure. Six such panels could be used to construct the fairing barrel, and, as such, compression buckling testing of a one-sixth arc-segment panel would serve as a validation test of the buckling analyses used to design the fairing panels. In this report, linear and nonlinear buckling analyses have been performed using finite element software for one-sixth arc-segment panels composed of aluminum honeycomb core with graphite/epoxy composite facesheets and an alternative fiber reinforced foam (FRF) composite sandwich design. The cross sections of both concepts were sized to represent realistic Space Launch System (SLS) payload fairing panels. Based on shell-based linear buckling analyses, smaller, more manageable buckling test panel dimensions were determined such that the panel would still be expected to buckle with a circumferential (as opposed to column-like) mode with significant separation between the first and second buckling modes. More detailed nonlinear buckling analyses were then conducted for honeycomb panels of various sizes using both Abaqus and ANSYS finite element codes, and for the smaller size panel, a solid-based finite element analysis was conducted. Finally, for the smaller size FRF panel, nonlinear buckling analysis was performed wherein geometric imperfections measured from an actual manufactured FRF panel were included. It was found that the measured imperfection did not significantly affect the panel's predicted buckling response.

  11. Model-Based Segmentation of Cortical Regions of Interest for Multi-subject Analysis of fMRI Data

    NASA Astrophysics Data System (ADS)

    Engel, Karin; Brechmann, André; Toennies, Klaus

    The high inter-subject variability of human neuroanatomy complicates the analysis of functional imaging data across subjects. We propose a method for the correct segmentation of cortical regions of interest based on the cortical surface. First results on the segmentation of Heschl's gyrus indicate the capability of our approach for correct comparison of functional activations in relation to individual cortical patterns.

  12. Analysis of object segmentation methods for VOP generation in MPEG-4

    NASA Astrophysics Data System (ADS)

    Vaithianathan, Karthikeyan; Panchanathan, Sethuraman

    2000-04-01

    The recent audio-visual standard MPEG-4 emphasizes content-based information representation and coding. Rather than operating at the level of pixels, MPEG-4 operates at a higher level of abstraction, capturing the information based on the content of a video sequence. Video object plane (VOP) extraction is an important step in defining the content of any video sequence, except in the case of authored applications, which involve the creation of video sequences using synthetic objects and graphics. The generation of VOPs from a video sequence involves segmenting the objects from every frame of the video sequence. The problem of object segmentation is also being addressed by the Computer Vision community. The major problem faced by researchers is to define object boundaries such that they are semantically meaningful. Finding a single robust solution for this problem that can work for all kinds of video sequences remains a challenging task. The object segmentation problem can be simplified by imposing constraints on the video sequences; these constraints largely depend on the type of application where the segmentation technique will be used. The purpose of this paper is twofold. In the first section, we summarize the state-of-the-art research in this topic and analyze the various VOP generation and object segmentation methods that have been presented in the recent literature. In the next section, we focus on the different types of video sequences, the important cues that can be employed for efficient object segmentation, the different object segmentation techniques, and the types of techniques that are well suited for each type of application. A detailed analysis of these approaches from the perspective of accuracy of the object boundaries, robustness towards different kinds of video sequences, ability to track the objects through the video sequences, and complexity involved in implementing these approaches, along with other limitations, is also discussed.

  13. Laser power conversion system analysis, volume 2

    NASA Technical Reports Server (NTRS)

    Jones, W. S.; Morgan, L. L.; Forsyth, J. B.; Skratt, J. P.

    1979-01-01

    The orbit-to-ground laser power conversion system analysis investigated the feasibility and cost effectiveness of converting solar energy into laser energy in space, and transmitting the laser energy to earth for conversion to electrical energy. The analysis included space laser systems with electrical outputs on the ground ranging from 100 to 10,000 MW. The space laser power system was shown to be feasible and a viable alternative to the microwave solar power satellite. The narrow laser beam provides many options and alternatives not attainable with a microwave beam.

  14. [Intracranial volume reserve assessment based on ICP pulse wave analysis].

    PubMed

    Berdyga, J; Czernicki, Z; Jurkiewicz, J

    1994-01-01

    ICP waves were analysed in the presence of an expanding intracranial mass. The aim of the study was to determine how large the added intracranial volume must be to produce significant changes in the harmonic disturbances index (HFC) of ICP pulse waves. The diagnostic value of HFC was compared with that of other parameters: intracranial pressure (ICP), CSF outflow resistance (R), volume pressure response (VPR), and visual evoked potentials (VEP). It was found that ICP wave analysis very clearly reflects changes in the intracranial volume-pressure relation. PMID:8028705
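
    The idea of a harmonics-based index of the ICP pulse wave can be sketched with a plain DFT over one pulse period. The `hfc_index` below is an illustrative stand-in for the paper's HFC, not its actual definition:

```python
import cmath
import math

def harmonic_amplitudes(signal, n_harmonics=5):
    """Amplitudes of the first few harmonics of one pulse-wave period (DFT)."""
    n = len(signal)
    amps = []
    for k in range(1, n_harmonics + 1):
        xk = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                 for t in range(n))
        amps.append(2.0 * abs(xk) / n)
    return amps

def hfc_index(signal):
    """A simple higher-harmonics index: energy above the fundamental,
    relative to the fundamental (illustrative stand-in for HFC)."""
    a = harmonic_amplitudes(signal)
    return sum(x * x for x in a[1:]) ** 0.5 / a[0]

n = 64
smooth = [math.sin(2 * math.pi * t / n) for t in range(n)]          # rounded wave
peaked = [math.sin(2 * math.pi * t / n)                             # distorted wave
          + 0.4 * math.sin(6 * math.pi * t / n) for t in range(n)]
print(round(hfc_index(smooth), 3), round(hfc_index(peaked), 3))
```

    As intracranial compliance falls, the pulse wave becomes more peaked, which shifts energy into higher harmonics and raises such an index.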

  15. Comparative analysis of nonlinear dimensionality reduction techniques for breast MRI segmentation

    SciTech Connect

    Akhbardeh, Alireza; Jacobs, Michael A.

    2012-04-15

    Purpose: Visualization of anatomical structures using radiological imaging methods is an important tool in medicine to differentiate normal from pathological tissue and can generate large amounts of data for a radiologist to read. Integrating these large data sets is difficult and time-consuming. A new approach uses both supervised and unsupervised advanced machine learning techniques to visualize and segment radiological data. This study describes the application of a novel hybrid scheme, based on combining wavelet transform and nonlinear dimensionality reduction (NLDR) methods, to breast magnetic resonance imaging (MRI) data using three well-established NLDR techniques, namely, ISOMAP, local linear embedding (LLE), and diffusion maps (DfM), to perform a comparative performance analysis. Methods: Twenty-five breast lesion subjects were scanned using a 3T scanner. MRI sequences used were T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging. The hybrid scheme consisted of two steps: preprocessing and postprocessing of the data. The preprocessing step was applied for B1 inhomogeneity correction, image registration, and wavelet-based image compression to match and denoise the data. In the postprocessing step, MRI parameters were considered data dimensions and the NLDR-based hybrid approach was applied to integrate the MRI parameters into a single image, termed the embedded image. This was achieved by mapping all pixel intensities from the higher dimension to a lower dimensional (embedded) space. For validation, the authors compared the hybrid NLDR with the linear methods of principal component analysis (PCA) and multidimensional scaling (MDS) using synthetic data. For the clinical application, the authors used breast MRI data; comparison was performed using the postcontrast DCE MRI image and evaluating the congruence of the segmented lesions. Results: The NLDR-based hybrid approach was able to define and segment
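
    The embedding step, mapping higher-dimensional data to a lower-dimensional space, can be sketched with a minimal diffusion-map-style coordinate (one of the NLDR families compared). The toy inputs are scalar intensities rather than multi-sequence MRI parameters:

```python
import math
import random

def diffusion_coordinate(points, eps=1.0, iters=500):
    """One-dimensional diffusion-map-style embedding: the second eigenvector
    of the symmetrically normalized Gaussian affinity matrix, found by
    power iteration with deflation of the known top eigenvector."""
    n = len(points)
    w = [[math.exp(-(points[i] - points[j]) ** 2 / eps) for j in range(n)]
         for i in range(n)]
    d = [sum(row) for row in w]                              # node degrees
    s = [[w[i][j] / math.sqrt(d[i] * d[j]) for j in range(n)] for i in range(n)]
    # Top eigenvector of S is proportional to sqrt(degree); deflate it.
    top = [math.sqrt(di) for di in d]
    tnorm = math.sqrt(sum(t * t for t in top))
    top = [t / tnorm for t in top]
    random.seed(2)
    v = [random.random() for _ in range(n)]
    for _ in range(iters):
        proj = sum(vi * ti for vi, ti in zip(v, top))
        v = [vi - proj * ti for vi, ti in zip(v, top)]        # deflation
        v = [sum(s[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(vi * vi for vi in v))
        v = [vi / norm for vi in v]
    return v

# Two well-separated intensity clusters land on opposite signs of the coordinate.
coord = diffusion_coordinate([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
print([c > 0 for c in coord])
```

    In the paper, each pixel's vector of MRI parameters plays the role of a point, and the embedded coordinate becomes the gray level of the "embedded image".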

  16. A de-noising algorithm to improve SNR of segmented gamma scanner for spectrum analysis

    NASA Astrophysics Data System (ADS)

    Li, Huailiang; Tuo, Xianguo; Shi, Rui; Zhang, Jinzhao; Henderson, Mark Julian; Courtois, Jérémie; Yan, Minhao

    2016-05-01

    An improved-threshold shift-invariant wavelet transform de-noising algorithm for high-resolution gamma-ray spectroscopy is proposed, which optimizes the threshold function of the wavelet transform and reduces signal distortion arising from pseudo-Gibbs artifacts. The algorithm was applied to a segmented gamma scanning system for large samples, in which high continuum levels caused by Compton scattering are routinely encountered. De-noising of gamma-ray spectra measured by the segmented gamma scanning system was evaluated for the improved, shift-invariant, and traditional wavelet transform algorithms. The improved wavelet transform method significantly enhanced the figure of merit, the root mean square error, the peak area, and the sample attenuation correction in segmented gamma scanning assays. Spectrum analysis also showed that the gamma-ray energy spectrum can be viewed as a superposition of a low-frequency signal and high-frequency noise, and that a smoothed spectrum is well suited to straightforward automated quantitative analysis.
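
    Wavelet-threshold de-noising of a spectrum can be sketched with a plain Haar transform and soft thresholding. The paper's algorithm is shift-invariant with an optimized threshold function; this minimal version omits both refinements:

```python
import math
import random

def haar_forward(x):
    """One Haar DWT level: (approximation, detail) coefficients."""
    s = math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    s = math.sqrt(2.0)
    out = []
    for a, dc in zip(approx, detail):
        out += [(a + dc) / s, (a - dc) / s]
    return out

def soft(c, t):
    """Soft thresholding: shrink toward zero by t."""
    return math.copysign(max(abs(c) - t, 0.0), c)

def denoise(x, levels=3, threshold=0.5):
    """Multi-level Haar decomposition, soft-threshold the details, reconstruct."""
    details = []
    for _ in range(levels):
        x, dcoefs = haar_forward(x)
        details.append([soft(c, threshold) for c in dcoefs])
    for dcoefs in reversed(details):
        x = haar_inverse(x, dcoefs)
    return x

# Noisy synthetic "spectrum": one smooth peak plus counting-style noise.
random.seed(3)
n = 64
clean = [math.exp(-((i - 32) / 6.0) ** 2) for i in range(n)]
noisy = [c + random.gauss(0, 0.2) for c in clean]
mse = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) / n
print(mse(denoise(noisy), clean) < mse(noisy, clean))
```

    A shift-invariant (cycle-spinning) variant would average reconstructions over circular shifts of the input, which suppresses the pseudo-Gibbs oscillations near sharp peaks.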

  17. Scanning and transmission electron microscopic analysis of ampullary segment of oviduct during estrous cycle in caprines.

    PubMed

    Sharma, R K; Singh, R; Bhardwaj, J K

    2015-01-01

    The ampullary segment of the mammalian oviduct provides a suitable milieu for fertilization and development of the zygote before implantation into the uterus. In the present study, therefore, the cyclic changes in the morphology of the ampullary segment of the goat oviduct were studied during the follicular and luteal phases using scanning and transmission electron microscopy. Topographical analysis revealed uniformly ciliated ampullary epithelia concealing the apical processes of non-ciliated cells, along with bulbous secretory cells, during the follicular phase. The luteal phase was marked by a decline in the number of ciliated cells and an increased occurrence of secretory cells. Ultrastructural analysis demonstrated the presence of indented nuclear membranes, supranuclear cytoplasm, secretory granules, rough endoplasmic reticulum, large lipid droplets, apically located glycogen masses, and oval-shaped mitochondria in the secretory cells. The ciliated cells were characterized by elongated nuclei, abundant smooth endoplasmic reticulum, and oval or spherical mitochondria with crescentic cristae during the follicular phase. In the luteal phase, however, the secretory cells possessed highly indented nuclei with diffuse electron-dense chromatin, hyaline nucleosol, and an increased number of lipid droplets, while the ciliated cells had numerous fibrous granules and basal bodies. The parallel use of scanning and transmission electron microscopy has enabled us to examine the cyclic, hormone-dependent changes occurring in the topography and fine structure of the epithelium of the ampullary segment during different reproductive phases, which will be of great help in understanding the major bottlenecks that limit success rates in in vitro fertilization and embryo transfer technology. PMID:25491952

  18. Multiwell experiment: reservoir modeling analysis, Volume II

    SciTech Connect

    Horton, A.I.

    1985-05-01

    This report updates an ongoing analysis by reservoir modelers at the Morgantown Energy Technology Center (METC) of well test data from the Department of Energy's Multiwell Experiment (MWX). Results of previous efforts were presented in a recent METC Technical Note (Horton 1985). Results included in this report pertain to the poststimulation well tests of Zones 3 and 4 of the Paludal Sandstone Interval and the prestimulation well tests of the Red and Yellow Zones of the Coastal Sandstone Interval. The following results were obtained by using a reservoir model and history matching procedures: (1) Post-minifracture analysis indicated that the minifracture stimulation of the Paludal Interval did not produce an induced fracture, and extreme formation damage did occur, since a 65% permeability reduction around the wellbore was estimated. The design for this minifracture was from 200 to 300 feet on each side of the wellbore; (2) Post full-scale stimulation analysis for the Paludal Interval also showed that extreme formation damage occurred during the stimulation as indicated by a 75% permeability reduction 20 feet on each side of the induced fracture. Also, an induced fracture half-length of 100 feet was determined to have occurred, as compared to a designed fracture half-length of 500 to 600 feet; and (3) Analysis of prestimulation well test data from the Coastal Interval agreed with previous well-to-well interference tests that showed extreme permeability anisotropy was not a factor for this zone. This lack of permeability anisotropy was also verified by a nitrogen injection test performed on the Coastal Red and Yellow Zones. 8 refs., 7 figs., 2 tabs.

  19. AAV Vectors for FRET-Based Analysis of Protein-Protein Interactions in Photoreceptor Outer Segments

    PubMed Central

    Becirovic, Elvir; Böhm, Sybille; Nguyen, Ong N. P.; Riedmayr, Lisa M.; Hammelmann, Verena; Schön, Christian; Butz, Elisabeth S.; Wahl-Schott, Christian; Biel, Martin; Michalakis, Stylianos

    2016-01-01

    Fluorescence resonance energy transfer (FRET) is a powerful method for the detection and quantification of stationary and dynamic protein-protein interactions. Technical limitations have hampered systematic in vivo FRET experiments to study protein-protein interactions in their native environment. Here, we describe a rapid and robust protocol that combines adeno-associated virus (AAV) vector-mediated in vivo delivery of genetically encoded FRET partners with ex vivo FRET measurements. The method was established on acutely isolated outer segments of murine rod and cone photoreceptors and relies on the high co-transduction efficiency of retinal photoreceptors by co-delivered AAV vectors. The procedure can be used for the systematic analysis of protein-protein interactions of wild type or mutant outer segment proteins in their native environment. Conclusively, our protocol can help to characterize the physiological and pathophysiological relevance of photoreceptor specific proteins and, in principle, should also be transferable to other cell types. PMID:27516733

  20. Multiresolution Analysis Using Wavelet, Ridgelet, and Curvelet Transforms for Medical Image Segmentation

    PubMed Central

    AlZubi, Shadi; Islam, Naveed; Abbod, Maysam

    2011-01-01

    The experimental study presented in this paper is aimed at the development of an automatic image segmentation system for classifying regions of interest (ROI) in medical images obtained from different medical scanners such as PET, CT, or MRI. Multiresolution analysis (MRA) using wavelet, ridgelet, and curvelet transforms has been used in the proposed segmentation system. It is a particularly challenging task to classify cancers in human organs in scanner output using shape or gray-level information; organ shapes change through different slices in a medical stack, and gray-level intensities overlap in soft tissues. The curvelet transform is a recent extension of the wavelet and ridgelet transforms which aims to deal with interesting phenomena occurring along curves. The curvelet transform has been tested on medical data sets, and results are compared with those obtained from the other transforms. Tests indicate that using curvelets significantly improves the classification of abnormal tissues in the scans and reduces the surrounding noise. PMID:21960988


  3. A hybrid neural network analysis of subtle brain volume differences in children surviving brain tumors.

    PubMed

    Reddick, W E; Mulhern, R K; Elkin, T D; Glass, J O; Merchant, T E; Langston, J W

    1998-05-01

    In the treatment of children with brain tumors, balancing the efficacy of treatment against commonly observed side effects is difficult because of a lack of quantitative measures of brain damage that can be correlated with the intensity of treatment. We quantitatively assessed volumes of brain parenchyma on magnetic resonance (MR) images using a hybrid combination of the Kohonen self-organizing map for segmentation and a multilayer backpropagation neural network for tissue classification. Initially, we analyzed the relationship between volumetric differences and radiologists' grading of atrophy in 80 subjects. This investigation revealed that brain parenchyma and white matter volumes significantly decreased as atrophy increased, whereas gray matter volumes had no relationship with atrophy. Next, we compared 37 medulloblastoma patients treated with surgery, irradiation, and chemotherapy to 19 patients treated with surgery and irradiation alone. This study demonstrated that, in these patients, chemotherapy had no significant effect on brain parenchyma, white matter, or gray matter volumes. We then investigated volumetric differences due to cranial irradiation in 15 medulloblastoma patients treated with surgery and radiation therapy, and compared these with a group of 15 age-matched patients with low-grade astrocytoma treated with surgery alone. With a minimum follow-up of one year after irradiation, all radiation-treated patients demonstrated significantly reduced white matter volumes, whereas gray matter volumes were relatively unchanged compared with those of age-matched patients treated with surgery alone. These results indicate that reductions in cerebral white matter: 1) are correlated significantly with atrophy; 2) are not related to chemotherapy; and 3) are correlated significantly with irradiation. This hybrid neural network analysis of subtle brain volume differences with magnetic resonance may constitute a direct measure of treatment-induced brain damage.
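
    The Kohonen stage of the hybrid network above can be sketched as a minimal 1-D self-organizing map clustering voxel intensities (the toy tissue intensities, unit initialization, and learning-rate schedule are assumptions; the paper's backpropagation classification stage is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "MR" intensities: three tissue-like classes (CSF, gray, white matter)
voxels = np.concatenate([
    rng.normal(0.2, 0.02, 300),   # CSF-like
    rng.normal(0.5, 0.02, 300),   # gray-matter-like
    rng.normal(0.8, 0.02, 300),   # white-matter-like
]).reshape(-1, 1)

# Minimal 1-D Kohonen self-organizing map with 3 units
weights = np.array([[0.0], [0.5], [1.0]])     # spread initial codebook
lr = 0.5
for epoch in range(20):
    for x in rng.permutation(voxels):
        bmu = np.argmin(np.abs(weights - x).sum(axis=1))  # best-matching unit
        weights[bmu] += lr * (x - weights[bmu])           # pull BMU toward sample
    lr *= 0.8                                             # decay learning rate

labels = np.argmin(np.abs(voxels - weights.T), axis=1)    # segment each voxel
```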

  4. Multivariate statistical analysis as a tool for the segmentation of 3D spectral data.

    PubMed

    Lucas, G; Burdet, P; Cantoni, M; Hébert, C

    2013-01-01

    Acquisition of three-dimensional (3D) spectral data is nowadays common using many different microanalytical techniques. In order to proceed to the 3D reconstruction, data processing is necessary not only to deal with noisy acquisitions but also to segment the data in terms of chemical composition. In this article, we demonstrate the value of multivariate statistical analysis (MSA) methods for this purpose, allowing fast and reliable results. Using scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX) coupled with a focused ion beam (FIB), a stack of spectrum images was acquired on a sample produced by laser welding of a nickel-titanium wire and a stainless steel wire presenting a complex microstructure. These data have been analyzed using principal component analysis (PCA) and factor rotations. PCA significantly improves the overall quality of the data, but produces abstract components. Here it is shown that rotated components can be used without prior knowledge of the sample to help the interpretation of the data, quickly obtaining qualitative mappings representative of elements or compounds found in the material. Such abundance maps can then be used to plot scatter diagrams and interactively identify the different domains present by defining clusters of voxels having similar compositions. Identified voxels are advantageously overlaid on higher-resolution secondary electron (SE) images in order to refine the segmentation. The 3D reconstruction can then be performed using available commercial software on the basis of the provided segmentation. To assess the quality of the segmentation, the results have been compared to an EDX quantification performed on the same data. PMID:24035679
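
    PCA followed by an orthogonal factor rotation, as used above, can be sketched with numpy; the varimax routine below is the standard textbook iteration, and the toy two-compound spectra are invented stand-ins for real EDX data:

```python
import numpy as np

def varimax(Phi, gamma=1.0, q=50, tol=1e-6):
    """Standard varimax rotation of a loading matrix (orthogonal rotation)."""
    p, k = Phi.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(q):
        L = Phi @ R
        u, s, vt = np.linalg.svd(
            Phi.T @ (L ** 3 - (gamma / p) * L @ np.diag((L ** 2).sum(axis=0))))
        R = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):
            break
        d = d_new
    return Phi @ R

rng = np.random.default_rng(0)
# Toy spectra: 500 voxels x 6 channels, mixing two underlying "compounds"
A = rng.random((500, 2))
S = np.array([[1, 1, 0, 0, 0, 0], [0, 0, 0, 1, 1, 1]], float)
X = A @ S + 0.05 * rng.standard_normal((500, 6))

Xc = X - X.mean(axis=0)
U, sv, Vt = np.linalg.svd(Xc, full_matrices=False)
loadings = Vt[:2].T * sv[:2]       # PCA loadings (channels x components)
rotated = varimax(loadings)        # easier-to-interpret rotated components
```

    Because the rotation is orthogonal, it redistributes variance among components without changing the total, which is why rotated components remain a faithful basis for abundance maps.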

  5. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    NASA Astrophysics Data System (ADS)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for imaging cancer in vivo. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The proposed PET segmentation strategies were typically validated under ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice; and 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and a non-homogeneous uptake) consisted of combining commercially available anthropomorphic phantoms with irregular molds generated using 3D printing technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
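
    The combination of a k-means background estimate with an adaptive threshold can be sketched as follows; the threshold form (40% of the background-corrected maximum), the voxel size, and the toy phantom are assumptions for illustration, not the authors' calibrated parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
vol = rng.normal(1.0, 0.1, (20, 20, 20))                  # background activity
vol[8:12, 8:12, 8:12] = rng.normal(8.0, 0.5, (4, 4, 4))   # hot "lesion" cube

# Background estimate via a 2-class 1-D k-means on voxel intensities
x = vol.ravel()
c = np.array([x.min(), x.max()], dtype=float)             # spread initial centers
for _ in range(20):
    assign = np.abs(x[:, None] - c[None, :]).argmin(axis=1)
    c = np.array([x[assign == k].mean() for k in (0, 1)])
bkg = c.min()                                             # lower center = background

# Adaptive threshold (assumed form): background + 40% of (max - background)
thr = bkg + 0.4 * (vol.max() - bkg)
mask = vol > thr
voxel_volume_ml = 0.2 ** 3            # assumed 2 mm isotropic voxels (0.008 ml)
mtv_ml = mask.sum() * voxel_volume_ml
```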

  6. Spectral analysis program. Volume 1: User's guide

    NASA Technical Reports Server (NTRS)

    Hayden, W. L.

    1972-01-01

    The spectral analysis program (SAP) was developed to provide the Manned Spacecraft Center with the capability of computing the power spectrum of a phase or frequency modulated high frequency carrier wave. Previous power spectrum computational techniques were restricted to relatively simple modulating signals because of excessive computational time, even on a high speed digital computer. The present technique uses the recently developed extended fast Fourier transform and represents a generalized approach for simple and complex modulating signals. The present technique is especially convenient for implementation of a variety of low-pass filters for the modulating signal and bandpass filters for the modulated signal.
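
    The core computation, a power spectrum of a phase-modulated carrier via the FFT, can be sketched in a few lines of numpy (the signal parameters are invented; SAP's extended FFT and its filter options are not reproduced):

```python
import numpy as np

fs = 1000.0                             # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)           # one second of samples
fc, fm, beta = 100.0, 5.0, 1.0          # carrier, modulating freq, mod. index
s = np.cos(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t))  # PM carrier

S = np.fft.rfft(s)
power = np.abs(S) ** 2 / len(s)         # one-sided power spectrum
freqs = np.fft.rfftfreq(len(s), 1 / fs)
peak = freqs[np.argmax(power)]          # strongest line sits at the carrier
```

    For this modulation index the spectrum shows the carrier at 100 Hz flanked by sidebands spaced at multiples of the 5 Hz modulating frequency, with amplitudes governed by Bessel functions.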

  7. A common neural substrate for the analysis of pitch and duration pattern in segmented sound?

    PubMed

    Griffiths, T D; Johnsrude, I; Dean, J L; Green, G G

    1999-12-16

    The analysis of patterns of pitch and duration over time in natural segmented sounds is fundamentally relevant to the analysis of speech, environmental sounds and music. The neural basis for differences between the processing of pitch and duration sequences is not established. We carried out a PET activation study on nine right-handed musically naive subjects, in order to examine the basis for early pitch- and duration-sequence analysis. The input stimuli and output task were closely controlled. We demonstrated a strikingly similar bilateral neural network for both types of analysis. The network is right lateralised and includes the cerebellum, posterior superior temporal cortices, and inferior frontal cortices. These data are consistent with a common initial mechanism for the analysis of pitch and duration patterns within sequences. PMID:10716217

  8. Segmentation of Unstructured Datasets

    NASA Technical Reports Server (NTRS)

    Bhat, Smitha

    1996-01-01

    Datasets generated by computer simulations and experiments in Computational Fluid Dynamics tend to be extremely large and complex. It is difficult to visualize these datasets using standard techniques like Volume Rendering and Ray Casting. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This thesis explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and from Finite Element Analysis.

  9. Method 349.0 Determination of Ammonia in Estuarine and Coastal Waters by Gas Segmented Continuous Flow Colorimetric Analysis

    EPA Science Inventory

    This method provides a procedure for the determination of ammonia in estuarine and coastal waters. The method is based upon the indophenol reaction,1-5 here adapted to automated gas-segmented continuous flow analysis.

  10. Advanced finite element analysis of L4-L5 implanted spine segment

    NASA Astrophysics Data System (ADS)

    Pawlikowski, Marek; Domański, Janusz; Suchocki, Cyprian

    2015-09-01

    In the paper finite element (FE) analysis of implanted lumbar spine segment is presented. The segment model consists of two lumbar vertebrae L4 and L5 and the prosthesis. The model of the intervertebral disc prosthesis consists of two metallic plates and a polyurethane core. Bone tissue is modelled as a linear viscoelastic material. The prosthesis core is made of a polyurethane nanocomposite. It is modelled as a non-linear viscoelastic material. The constitutive law of the core, derived in one of the previous papers, is implemented into the FE software Abaqus®. It was done by means of the User-supplied procedure UMAT. The metallic plates are elastic. The most important parts of the paper include: description of the prosthesis geometrical and numerical modelling, mathematical derivation of stiffness tensor and Kirchhoff stress and implementation of the constitutive model of the polyurethane core into Abaqus® software. Two load cases were considered, i.e. compression and stress relaxation under constant displacement. The goal of the paper is to numerically validate the constitutive law, which was previously formulated, and to perform advanced FE analyses of the implanted L4-L5 spine segment in which non-standard constitutive law for one of the model materials, i.e. the prosthesis core, is implemented.

  11. Bayesian time series analysis of segments of the Rocky Mountain trumpeter swan population

    USGS Publications Warehouse

    Wright, Christopher K.; Sojda, Richard S.; Goodman, Daniel

    2002-01-01

    A Bayesian time series analysis technique, the dynamic linear model, was used to analyze counts of Trumpeter Swans (Cygnus buccinator) summering in Idaho, Montana, and Wyoming from 1931 to 2000. For the Yellowstone National Park segment of white birds (sub-adults and adults combined) the estimated probability of a positive growth rate is 0.01. The estimated probability of achieving the Subcommittee on Rocky Mountain Trumpeter Swans 2002 population goal of 40 white birds for the Yellowstone segment is less than 0.01. Outside of Yellowstone National Park, Wyoming white birds are estimated to have a 0.79 probability of a positive growth rate with a 0.05 probability of achieving the 2002 objective of 120 white birds. In the Centennial Valley in southwest Montana, results indicate a probability of 0.87 that the white bird population is growing at a positive rate with considerable uncertainty. The estimated probability of achieving the 2002 Centennial Valley objective of 160 white birds is 0.14 but under an alternative model falls to 0.04. The estimated probability that the Targhee National Forest segment of white birds has a positive growth rate is 0.03. In Idaho outside of the Targhee National Forest, white birds are estimated to have a 0.97 probability of a positive growth rate with a 0.18 probability of attaining the 2002 goal of 150 white birds.
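
    A dynamic linear model of the kind used above can be sketched as a local-level-plus-trend Kalman filter; the toy counts, the noise variances, and the Gaussian read-off of the growth-rate probability are assumptions, not the authors' model:

```python
import numpy as np
from math import erf, sqrt

counts = np.array([30., 32, 31, 35, 36, 38, 37, 40, 42, 41])  # toy swan counts

F = np.array([[1., 1.], [0., 1.]])   # state evolution: level += trend
H = np.array([[1., 0.]])             # we observe the level only
Q = np.eye(2) * 0.1                  # state (process) noise
R = np.array([[4.0]])                # observation noise

m = np.array([counts[0], 0.0])       # prior mean on [level, trend]
P = np.eye(2) * 10.0                 # prior covariance
for y in counts:
    # predict
    m = F @ m
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    m = m + K @ (np.array([y]) - H @ m)
    P = (np.eye(2) - K @ H) @ P

# Posterior probability that the growth rate (trend) is positive
trend_mean, trend_var = m[1], P[1, 1]
p_growth = 0.5 * (1 + erf(trend_mean / sqrt(2 * trend_var)))
```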

  12. Local multifractal detrended fluctuation analysis for non-stationary image's texture segmentation

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Li, Zong-shou; Li, Jin-wei

    2014-12-01

    Feature extraction plays an important role in image processing and pattern recognition, and multifractal theory has recently been employed as a powerful tool for this job. However, traditional multifractal methods are designed to analyze objects with a stationary measure and cannot handle non-stationary measures. The work of this paper is twofold. First, the definition of a stationary image and 2D image feature detection methods are proposed. Second, a novel feature extraction scheme for non-stationary images is proposed using local multifractal detrended fluctuation analysis (Local MF-DFA), which is based on 2D MF-DFA. A set of new multifractal descriptors, called the local generalized Hurst exponent (Lhq), is defined to characterize the local scaling properties of textures. To test the proposed method, the novel texture descriptor is compared in segmentation experiments with two other multifractal indicators, namely, local Hölder coefficients based on a capacity measure and the multifractal dimension Dq based on the multifractal differential box-counting (MDBC) method. The first experiment indicates that the segmentation results obtained by the proposed Lhq are slightly better than the MDBC-based Dq and significantly superior to the local Hölder coefficients. The results of the second experiment demonstrate that the Lhq distinguishes texture images more effectively and provides significantly more robust segmentations than the MDBC-based Dq.
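
    The detrended fluctuation analysis underlying MF-DFA can be sketched in its ordinary form (1-D, q = 2); for white noise the estimated scaling exponent should come out near 0.5 (the scale list is an arbitrary choice):

```python
import numpy as np

def dfa(x, scales):
    """Ordinary detrended fluctuation analysis; returns the scaling exponent."""
    y = np.cumsum(x - np.mean(x))            # integrate to get the profile
    flucts = []
    for s in scales:
        n = len(y) // s
        segs = y[:n * s].reshape(n, s)       # non-overlapping windows of size s
        t = np.arange(s)
        f2 = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)     # local linear detrending
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))  # RMS fluctuation at scale s
    h, _ = np.polyfit(np.log(scales), np.log(flucts), 1)  # log-log slope
    return h

rng = np.random.default_rng(0)
h_white = dfa(rng.standard_normal(4096), scales=[16, 32, 64, 128, 256])
```

    MF-DFA generalizes this by raising the per-window fluctuations to a power q before averaging, giving the generalized Hurst exponent h(q); the paper's Local MF-DFA evaluates this in a neighborhood around each pixel.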

  13. A marked point process of rectangles and segments for automatic analysis of digital elevation models.

    PubMed

    Ortner, Mathias; Descombes, Xavier; Zerubia, Josiane

    2008-01-01

    This work presents a framework for automatic feature extraction from images using stochastic geometry. Features in images are modeled as realizations of a spatial point process of geometrical shapes. This framework allows the incorporation of a priori knowledge on the spatial repartition of features. More specifically, we present a model based on the superposition of a process of segments and a process of rectangles. The former is dedicated to the detection of linear networks of discontinuities, while the latter aims at segmenting homogeneous areas. An energy is defined, favoring connections of segments, alignments of rectangles, as well as a relevant interaction between both types of objects. The estimation is performed by minimizing the energy using a simulated annealing algorithm. The proposed model is applied to the analysis of Digital Elevation Models (DEMs). These images are raster data representing the altimetry of a dense urban area. We present results on real data provided by the IGN (French National Geographic Institute) consisting of low-quality DEMs of various types. PMID:18000328

  14. Profiling the different needs and expectations of patients for population-based medicine: a case study using segmentation analysis

    PubMed Central

    2012-01-01

    Background This study illustrates an evidence-based method for the segmentation analysis of patients that could greatly improve the approach to population-based medicine, by filling a gap in the empirical analysis of this topic. Segmentation facilitates individual patient care in the context of the culture, health status, and the health needs of the entire population to which that patient belongs. Because many health systems are engaged in developing better chronic care management initiatives, patient profiles are critical to understanding whether some patients can move toward effective self-management and can play a central role in determining their own care, which fosters a sense of responsibility for their own health. A review of the literature on patient segmentation provided the background for this research. Method First, we conducted a literature review on patient satisfaction and segmentation to build a survey. Then, we performed 3,461 surveys of outpatient services users. The key structures on which the subjects’ perception of outpatient services was based were extrapolated using principal component factor analysis with varimax rotation. After the factor analysis, segmentation was performed through cluster analysis to better analyze the influence of individual attitudes on the results. Results Four segments were identified through factor and cluster analysis: the “unpretentious,” the “informed and supported,” the “experts” and the “advanced” patients. Their policies and managerial implications are outlined. Conclusions With this research, we provide the following: – a method for profiling patients based on common patient satisfaction surveys that is easily replicable in all health systems and contexts; – a proposal for segments based on the results of a broad-based analysis conducted in the Italian National Health System (INHS). Segments represent profiles of patients requiring different strategies for delivering health services. Their
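
    The factor-then-cluster pipeline described above can be sketched with numpy: principal component extraction of respondent scores followed by k-means on those scores (the toy survey data, item loadings, and number of segments are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy satisfaction survey: 200 respondents x 6 items driven by 2 latent attitudes
latent = rng.normal(size=(200, 2))
loadings = np.array([[1, 0], [1, 0], [1, 0], [0, 1], [0, 1], [0, 1]], float)
X = latent @ loadings.T + 0.3 * rng.normal(size=(200, 6))

# Principal component factor extraction (2 components)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                         # respondent factor scores

# k-means on the factor scores to form patient segments
k = 4
centers = scores[rng.choice(len(scores), k, replace=False)]
for _ in range(50):
    d = ((scores[:, None, :] - centers[None]) ** 2).sum(axis=-1)
    lab = d.argmin(axis=1)                     # nearest-center assignment
    centers = np.array([scores[lab == j].mean(axis=0) if np.any(lab == j)
                        else centers[j] for j in range(k)])   # guard empties
```

    In the study itself the rotation of the factor solution (varimax) and the interpretation of clusters as patient profiles are the substantive steps; this sketch only shows the mechanics.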

  15. Improved disparity map analysis through the fusion of monocular image segmentations

    NASA Technical Reports Server (NTRS)

    Perlant, Frederic P.; Mckeown, David M.

    1991-01-01

    The focus is to examine how estimates of three dimensional scene structure, as encoded in a scene disparity map, can be improved by the analysis of the original monocular imagery. The utilization of surface illumination information is provided by the segmentation of the monocular image into fine surface patches of nearly homogeneous intensity to remove mismatches generated during stereo matching. These patches are used to guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely with physical surfaces in the scene. Such a technique is quite independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented on a complex urban scene containing various man-made and natural features. This scene contains a variety of problems including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. The improvements are demonstrated due to monocular fusion with a set of different region-based image segmentations. The generality of this approach to stereo analysis and its utility in the development of general three dimensional scene interpretation systems are also discussed.

  16. Automated system for ST segment and arrhythmia analysis in exercise radionuclide ventriculography

    SciTech Connect

    Hsia, P.W.; Jenkins, J.M.; Shimoni, Y.; Gage, K.P.; Santinga, J.T.; Pitt, B.

    1986-06-01

    A computer-based system for interpretation of the electrocardiogram (ECG) in the diagnosis of arrhythmia and ST segment abnormality in an exercise system is presented. The system was designed for inclusion in a gamma camera so that the ECG diagnosis could be combined with the diagnostic capability of radionuclide ventriculography. Digitized data are analyzed in a beat-by-beat mode and a contextual diagnosis of the underlying rhythm is provided. Each beat is assigned a beat code based on a combination of waveform analysis and RR interval measurement. The waveform analysis employs a new correlation coefficient formula which corrects for baseline wander. Selective signal averaging, in which only normal beats are included, is done for an improved signal-to-noise ratio prior to ST segment analysis. Template generation, R wave detection, QRS window size, baseline correction, and continuous updating of heart rate have all been automated. ST level and slope measurements are computed on signal-averaged data. Arrhythmia analysis of 13 passages of abnormal rhythm by computer was found to be correct in 98.4 percent of all beats. Twenty-five passages of exercise data, 1-5 min in length, were evaluated by the cardiologist and found to be in agreement in 95.8 percent of ST level measurements and 91.7 percent of ST slope measurements.
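
    A correlation coefficient that corrects for baseline wander can be sketched as mean-subtracted (Pearson-style) template matching; this is an assumed form of the paper's formula, with invented beat shapes:

```python
import numpy as np

def corr_baseline_corrected(beat, template):
    """Pearson-style correlation; subtracting each segment's mean removes an
    additive baseline offset before comparison (assumed form of the correction)."""
    b = beat - beat.mean()
    t = template - template.mean()
    return float(b @ t / (np.linalg.norm(b) * np.linalg.norm(t)))

t = np.linspace(0, 1, 100)
template = np.exp(-((t - 0.5) ** 2) / 0.002)   # toy QRS-like pulse
drifted = template + 0.5 + 0.3 * t             # same beat + baseline wander
ectopic = np.sin(2 * np.pi * t)                # different morphology

r_normal = corr_baseline_corrected(drifted, template)   # stays high
r_ectopic = corr_baseline_corrected(ectopic, template)  # near zero
```

    Beats whose correlation against the running template exceeds a chosen cutoff would be coded as normal and admitted to the selective signal average.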

  17. Introduction to Psychology and Leadership. Part Five, Military Management. Segments VII, VIII, IX & X, Volume V-B.

    ERIC Educational Resources Information Center

    Westinghouse Learning Corp., Annapolis, MD.

    The fifth volume of the introduction to psychology and leadership course (see the final reports which summarize the development project, EM 010 418, EM 010 419, and EM 010 484) concentrates on military management and is presented in three separate documents. It is a self-instructional text with audiotape and panelbook sections. EM 010 429 and EM…

  18. Introduction to Psychology and Leadership. Part Four; Achieving Effective Communication. Segments I, II, III, & IV, Volume IV-A.

    ERIC Educational Resources Information Center

    Westinghouse Learning Corp., Annapolis, MD.

    The fourth volume of the introduction to psychology and leadership course (see the final reports which summarize the development project, EM 010 418, EM 010 419, and EM 010 484) concentrates on achieving effective communication and is divided into three separate documents. It is a self-instructional linear text with audiotape and intrinsically…

  19. Introduction to Psychology and Leadership. Part Four, Achieving Effective Communication. Segments V, VI, & VII, Volume IV-B.

    ERIC Educational Resources Information Center

    Westinghouse Learning Corp., Annapolis, MD.

    The fourth volume of the introduction to psychology and leadership course (see the final reports which summarize the development project, EM 010 418, EM 010 419, and EM 010 484) concentrates on achieving effective communication. It is a self-instructional text with audiotape and intrinsically programed sections. EM 010 427 and EM 010 426 are the…

  20. Introduction to Psychology and Leadership. Part Four; Achieving Effective Communication. Segments IV, V, VI, & VII, Volume IV, Script.

    ERIC Educational Resources Information Center

    Westinghouse Learning Corp., Annapolis, MD.

    The fourth volume of the introduction to psychology and leadership course (see the final reports which summarize the development project, EM 010 418, EM 010 419, and EM 010 484) concentrates on achieving effective communication. It is a self-instructional tape script and intrinsically programed booklet. EM 010 427 and EM 010 428 are the first and…

  1. Texture analysis improves level set segmentation of the anterior abdominal wall

    SciTech Connect

    Xu, Zhoubing; Allen, Wade M.; Baucom, Rebeccah B.; Poulose, Benjamin K.; Landman, Bennett A.

    2013-12-15

    Purpose: The treatment of ventral hernias (VH) has been a challenging problem for medical care. Repair of these hernias is fraught with failure; recurrence rates ranging from 24% to 43% have been reported, even with the use of biocompatible mesh. Currently, computed tomography (CT) is used to guide intervention through expert, but qualitative, clinical judgments; notably, quantitative metrics based on image processing are not used. The authors propose that image segmentation methods to capture the three-dimensional structure of the abdominal wall and its abnormalities will provide a foundation on which to measure geometric properties of hernias and surrounding tissues and, therefore, to optimize intervention. Methods: In this study with 20 clinically acquired CT scans on postoperative patients, the authors demonstrated a novel approach to geometric classification of the abdominal wall. The authors’ approach uses a texture analysis based on Gabor filters to extract feature vectors and follows a fuzzy c-means clustering method to estimate voxelwise probability memberships for eight clusters. The memberships estimated from the texture analysis are helpful to identify anatomical structures with inhomogeneous intensities. The membership was used to guide the level set evolution, as well as to derive an initialization close to the abdominal wall. Results: Segmentation results on abdominal walls were both quantitatively and qualitatively validated with surface errors based on manually labeled ground truth. Using texture, mean surface errors for the outer surface of the abdominal wall were less than 2 mm, with 91% of the outer surface less than 5 mm away from the manual tracings; errors were significantly greater (2–5 mm) for methods that did not use the texture. Conclusions: The authors’ approach establishes a baseline for characterizing the abdominal wall for improving VH care. Inherent texture patterns in CT scans are helpful to the tissue classification, and texture
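
    The fuzzy c-means step used above to derive voxelwise memberships can be sketched in numpy; toy 1-D features stand in for the Gabor feature vectors, and the level-set stage is omitted:

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means: returns soft memberships (n x c) and centers."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))     # random initial memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]       # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        # standard FCM membership update: u_ik ∝ d_ik^(-2/(m-1))
        U = 1.0 / (d ** (2 / (m - 1))
                   * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return U, centers

rng = np.random.default_rng(3)
feats = np.concatenate([rng.normal(0, 0.2, (100, 1)),
                        rng.normal(3, 0.2, (100, 1))])   # two texture "classes"
U, centers = fuzzy_cmeans(feats, c=2)
```

    The soft memberships (rows sum to one) are what make the method attractive for inhomogeneous tissue: a voxel near a boundary carries graded evidence for several clusters rather than a hard label.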

  2. Registration of orthogonally oriented wide-field of view OCT volumes using orientation-aware optical flow and retina segmentation (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Lezama, Jose; Mukherjee, Dibyendu; McNabb, Ryan P.; Sapiro, Guillermo; Izatt, Joseph A.; Farsiu, Sina; Kuo, Anthony N.

    2016-03-01

    Patient motion artifacts are an important source of data irregularities in OCT imaging. With longer duration OCT scans - as is needed for large wide field of view scans or increased scan density - motion artifacts become increasingly problematic. Strategies to mitigate these motion artifacts are then necessary to ensure OCT data integrity. A popular strategy for reducing motion artifacts in OCT images is to capture two orthogonally oriented volumetric scans containing uncorrelated motion and subsequently reconstructing a motion-free volume by combining information from both datasets. While many different variations of this registration approach have been proposed, even the most recent methods might not be suitable for wide FOV OCT scans which can be lacking in features away from the optic nerve head or arcades. To address this problem, we propose a two-stage motion correction algorithm for wide FOV OCT volumes. In the first step, X and Y axes motion is corrected by registering OCT summed voxel projections (SVPs). To achieve this, we introduce a method based on a custom variation of the dense optical flow technique which is aware of the motion free orientation of the scan. Secondly, a depth (Z axis) correction approach based on the segmentation of the retinal layer boundaries in each B-scan using graph-theory and dynamic programming is applied. This motion correction method was applied to wide field retinal OCT volumes (approximately 80° FOV) of 3 subjects with substantial reduction in motion artifacts.

  3. Gait analysis and cerebral volumes in Down's syndrome.

    PubMed

    Rigoldi, C; Galli, M; Condoluci, C; Carducci, F; Onorati, P; Albertini, G

    2009-01-01

    The aim of this study was to look for a relationship between cerebral volumes computed using a voxel-based morphometry algorithm and walking patterns in individuals with Down's syndrome (DS), in order to investigate the origin of the motor problems in these subjects with a view to developing appropriate rehabilitation programmes. Nine children with DS underwent a gait analysis (GA) protocol that used a 3D motion analysis system, force plates and a video system, and magnetic resonance imaging (MRI). Analysis of GA graphs allowed a series of parameters to be defined and computed in order to quantify gait patterns. By combining some of the parameters it was possible to obtain a 3D description of gait in terms of distance from normal values. Finally, the results of cerebral volume analysis were compared with the gait patterns found. A strong relationship emerged between cerebellar vermis volume reduction and quality of gait and also between grey matter volume reduction of some cerebral areas and asymmetrical gait. An evaluation of high-level motor deficits, reflected in a lack or partial lack of proximal functions, is important in order to define a correct rehabilitation programme. PMID:20018142

  4. Evaluation metrics for bone segmentation in ultrasound

    NASA Astrophysics Data System (ADS)

    Lougheed, Matthew; Fichtinger, Gabor; Ungi, Tamas

    2015-03-01

    Tracked ultrasound is a safe alternative to X-ray for imaging bones. The interpretation of bony structures is challenging as ultrasound has no specific intensity characteristic of bones. Several image segmentation algorithms have been devised to identify bony structures. We propose an open-source framework that would aid in the development and comparison of such algorithms by quantitatively measuring segmentation performance in ultrasound images. True-positive and false-negative metrics used in the framework quantify algorithm performance based on correctly segmented bone and correctly segmented boneless regions. Ground truth for these metrics is defined manually and, along with the corresponding automatically segmented image, is used for the performance analysis. Manually created ground truth tests were generated to verify the accuracy of the analysis. Further evaluation metrics for determining average performance per slice and standard deviation are considered. The metrics provide a means of evaluating the accuracy of frames along the length of a volume. This aids in assessing the accuracy of the volume itself and the approach to image acquisition (positioning and frequency of frames). The framework was implemented as an open-source module of the 3D Slicer platform. The ground truth tests verified that the framework correctly calculates the implemented metrics. The developed framework provides a convenient way to evaluate bone segmentation algorithms. The implementation fits into a widely used application for segmentation algorithm prototyping. Future algorithm development will benefit from monitoring the effects of adjustments to an algorithm in a standard evaluation framework.
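
    The true-positive/false-negative style metrics described above can be computed directly from boolean masks; the toy masks below are invented for illustration:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Per-frame overlap metrics from boolean masks (framework-style TP/FN rates)."""
    tp = np.logical_and(pred, truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    sensitivity = tp / (tp + fn)        # correctly segmented bone
    specificity = tn / (tn + fp)        # correctly segmented boneless region
    dice = 2 * tp / (2 * tp + fp + fn)  # overlap summary
    return sensitivity, specificity, dice

truth = np.zeros((10, 10), bool); truth[2:6, 2:6] = True   # 16 "bone" pixels
pred  = np.zeros((10, 10), bool); pred[3:7, 2:6] = True    # shifted by one row
sens, spec, dice = segmentation_metrics(pred, truth)
```

    Evaluating these per frame along the volume, then averaging and taking the standard deviation, mirrors the framework's per-slice reporting.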

  5. Semi-automatic segmentation and modeling of the cervical spinal cord for volume quantification in multiple sclerosis patients from magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Sonkova, Pavlina; Evangelou, Iordanis E.; Gallo, Antonio; Cantor, Fredric K.; Ohayon, Joan; McFarland, Henry F.; Bagnato, Francesca

    2008-03-01

    Spinal cord (SC) tissue loss is known to occur in some patients with multiple sclerosis (MS), resulting in SC atrophy. Currently, no measurement tools exist to determine the magnitude of SC atrophy from Magnetic Resonance Images (MRI). We have developed and implemented a novel semi-automatic method for quantifying the cervical SC volume (CSCV) from MRI based on level sets. The image dataset consisted of SC MRI exams obtained at 1.5 Tesla from 12 MS patients (10 relapsing-remitting and 2 secondary progressive) and 12 age- and gender-matched healthy volunteers (HVs). 3D high resolution image data were acquired using an IR-FSPGR sequence in the sagittal plane. The mid-sagittal slice (MSS) was automatically located based on the entropy calculated for each of the consecutive sagittal slices. The image data were then pre-processed by 3D anisotropic diffusion filtering for noise reduction and edge enhancement before segmentation with a level set formulation which did not require re-initialization. The developed method was tested against manual segmentation (considered ground truth), and intra-observer and inter-observer variability were evaluated.
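
    The entropy-based location of the mid-sagittal slice can be sketched as picking the slice whose intensity histogram has maximal Shannon entropy (bin count and toy volume are assumptions):

```python
import numpy as np

def slice_entropy(img, bins=64):
    """Shannon entropy (bits) of a slice's intensity histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 1))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(4)
vol = rng.uniform(0, 1, (8, 64, 64)) * 0.1   # low-contrast "off-center" slices
vol[4] = rng.uniform(0, 1, (64, 64))         # one high-contrast slice

mss = int(np.argmax([slice_entropy(s) for s in vol]))   # mid-sagittal candidate
```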

  6. Incorporation of texture-based features in optimal graph-theoretic approach with application to the 3D segmentation of intraretinal surfaces in SD-OCT volumes

    NASA Astrophysics Data System (ADS)

    Antony, Bhavna J.; Abràmoff, Michael D.; Sonka, Milan; Kwon, Young H.; Garvin, Mona K.

    2012-02-01

    While efficient graph-theoretic approaches exist for the optimal (with respect to a cost function) and simultaneous segmentation of multiple surfaces within volumetric medical images, the appropriate design of cost functions remains an important challenge. Previously proposed methods have used simple cost functions or optimized a combination of the same, but little has been done to design cost functions from features learned from a training set, in a less biased fashion. Here, we present a method to design cost functions for the simultaneous segmentation of multiple surfaces using the graph-theoretic approach. Classified texture features were used to create probability maps, which were incorporated into the graph-search approach. The efficiency of such an approach was tested on 10 optic-nerve-head-centered optical coherence tomography (OCT) volumes obtained from 10 subjects who presented with glaucoma. The mean unsigned border position error was computed with respect to the average of manual tracings from two independent observers and compared to our previously reported results. A significant improvement was noted in the overall mean, which decreased from 9.25 ± 4.03 μm to 6.73 ± 2.45 μm (p < 0.01) and is also comparable with the inter-observer variability of 8.85 ± 3.85 μm.

  7. Dioxin analysis of Philadelphia northwest incinerator. Summary report. Volume 1

    SciTech Connect

    Milner, I.

    1986-01-01

    A study was conducted by US EPA Region 3 to determine the dioxin-related impact of the Philadelphia Northwest Incinerator on public health. Specifically, it was designed to assess quantitatively the risks to public health resulting from emissions into the ambient air of dioxins as well as the potential effect of deposition of dioxins on the soil in the vicinity of the incinerator. Volume 1 is an executive summary of the study findings. Volume 2 contains contractor reports, laboratory analysis results and other documentation.

  8. The Impact of Policy Guidelines on Hospital Antibiotic Use over a Decade: A Segmented Time Series Analysis

    PubMed Central

    Chandy, Sujith J.; Naik, Girish S.; Charles, Reni; Jeyaseelan, Visalakshi; Naumova, Elena N.; Thomas, Kurien; Lundborg, Cecilia Stalsby

    2014-01-01

    Introduction Antibiotic pressure contributes to rising antibiotic resistance. Policy guidelines encourage rational prescribing behavior, but their effectiveness in containing antibiotic use needs further assessment. This study therefore assessed the patterns of antibiotic use over a decade and analyzed the impact of different modes of guideline development and dissemination on inpatient antibiotic use. Methods Antibiotic use was calculated monthly as defined daily doses (DDD) per 100 bed days for nine antibiotic groups and overall. This time series analysis compared trends in antibiotic use in five adjacent time periods identified as ‘Segments,’ divided based on differing modes of guideline development and implementation: Segment 1 – baseline prior to antibiotic guidelines development; Segment 2 – during preparation of guidelines and booklet dissemination; Segment 3 – dormant period with no guidelines dissemination; Segment 4 – booklet dissemination of revised guidelines; Segment 5 – booklet dissemination of revised guidelines with intranet access. Regression analysis adapted for segmented time series and adjusted for seasonality assessed changes in the antibiotic use trend. Results Overall antibiotic use increased at a monthly rate of 0.95 (SE = 0.18), 0.21 (SE = 0.08) and 0.31 (SE = 0.06) for Segments 1, 2 and 3, stabilized in Segment 4 (0.05; SE = 0.10) and declined in Segment 5 (−0.37; SE = 0.11). Segments 1, 2 and 4 exhibited seasonal fluctuations. Pairwise segmented regression adjusted for seasonality revealed a significant drop in monthly antibiotic use of 0.401 (SE = 0.089; p<0.001) for Segment 5 compared to Segment 4. Most antibiotic groups showed trends similar to overall use. Conclusion Use of overall and specific antibiotic groups showed varied patterns and seasonal fluctuations. Containment of rising overall antibiotic use was possible during periods of active guideline dissemination. Wider access through intranet facilitated
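The two core quantities in this abstract, consumption as DDD per 100 bed-days and a per-segment monthly trend, follow standard formulas. A sketch (this is not the paper's code; the study used segmented regression adjusted for seasonality, whereas the trend below is a plain OLS slope within one segment):

```python
def ddd_per_100_bed_days(grams_used, grams_per_ddd, bed_days):
    """Antibiotic consumption as defined daily doses per 100 bed-days:
    (total grams / WHO-assigned grams per DDD) / bed-days * 100."""
    return (grams_used / grams_per_ddd) / bed_days * 100

def monthly_trend(series):
    """Ordinary least-squares slope of monthly use against month index,
    i.e., the monthly rate of change within a single segment."""
    n = len(series)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(series) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, series))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

Comparing `monthly_trend` across the five segments reproduces the kind of slope-per-segment comparison reported in the Results, without the seasonal adjustment.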

  9. Using semi-automated segmentation of computed tomography datasets for three-dimensional visualization and volume measurements of equine paranasal sinuses.

    PubMed

    Brinkschulte, Markus; Bienert-Zeit, Astrid; Lüpke, Matthias; Hellige, Maren; Staszyk, Carsten; Ohnesorge, Bernhard

    2013-01-01

    The system of the paranasal sinuses morphologically represents one of the most complex parts of the equine body. A clear understanding of spatial relationships is needed for correct diagnosis and treatment. The purpose of this study was to describe the anatomy and volume of equine paranasal sinuses using three-dimensional (3D) reformatted renderings of computed tomography (CT) slices. Heads of 18 cadaver horses, aged 2-25 years, were analyzed by the use of separate semi-automated segmentation of the following bilateral paranasal sinus compartments: rostral maxillary sinus (Sinus maxillaris rostralis), ventral conchal sinus (Sinus conchae ventralis), caudal maxillary sinus (Sinus maxillaris caudalis), dorsal conchal sinus (Sinus conchae dorsalis), frontal sinus (Sinus frontalis), sphenopalatine sinus (Sinus sphenopalatinus), and middle conchal sinus (Sinus conchae mediae). Reconstructed structures were displayed separately, grouped, or altogether as transparent or solid elements to visualize individual paranasal sinus morphology. The paranasal sinuses appeared to be divided into two systems by the maxillary septum (Septum sinuum maxillarium). The first or rostral system included the rostral maxillary and ventral conchal sinus. The second or caudal system included the caudal maxillary, dorsal conchal, frontal, sphenopalatine, and middle conchal sinuses. These two systems overlapped and were interlocked due to the oblique orientation of the maxillary septum. Total volumes of the paranasal sinuses ranged from 911.50 to 1502.00 ml (mean ± SD, 1151.00 ± 186.30 ml). 3D renderings of equine paranasal sinuses by use of semi-automated segmentation of CT-datasets improved understanding of this anatomically challenging region. PMID:23890087
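Once a compartment has been segmented, its volume reduces to the labeled voxel count times the voxel size. A minimal sketch (the nested-list label-volume layout and function name are hypothetical; the study used CT segmentation software):

```python
def compartment_volume_ml(label_volume, label, voxel_dims_mm):
    """Volume of one labeled sinus compartment in milliliters:
    number of voxels carrying `label` times the voxel volume
    (mm^3 converted to ml by dividing by 1000)."""
    dx, dy, dz = voxel_dims_mm
    count = sum(row.count(label) for slc in label_volume for row in slc)
    return count * dx * dy * dz / 1000.0
```

Summing this over all seven bilateral compartments would give the total paranasal sinus volume reported in the abstract.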

  10. An automated target recognition technique for image segmentation and scene analysis

    SciTech Connect

    Baumgart, C.W.; Ciarcia, C.A.

    1994-02-01

    Automated target recognition software has been designed to perform image segmentation and scene analysis. Specifically, this software was developed as a package for the Army's Minefield and Reconnaissance and Detector (MIRADOR) program. MIRADOR is an on/off road, remote control, multi-sensor system designed to detect buried and surface-emplaced metallic and non-metallic anti-tank mines. The basic requirements for this ATR software were: (1) an ability to separate target objects from the background in low S/N conditions; (2) an ability to handle a relatively high dynamic range in imaging light levels; (3) the ability to compensate for or remove light source effects such as shadows; and (4) the ability to identify target objects as mines. The image segmentation and target evaluation was performed utilizing an integrated and parallel processing approach. Three basic techniques (texture analysis, edge enhancement, and contrast enhancement) were used collectively to extract all potential mine target shapes from the basic image. Target evaluation was then performed using a combination of size, geometrical, and fractal characteristics which resulted in a calculated probability for each target shape. Overall results with this algorithm were quite good, though there is a trade-off between detection confidence and the number of false alarms. This technology also has applications in the areas of hazardous waste site remediation, archaeology, and law enforcement.

  11. Fast segmentation and high-quality three-dimensional volume mesh creation from medical images for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jermyn, Michael; Ghadyani, Hamid; Mastanduno, Michael A.; Turner, Wes; Davis, Scott C.; Dehghani, Hamid; Pogue, Brian W.

    2013-08-01

    Multimodal approaches that combine near-infrared (NIR) and conventional imaging modalities have been shown to improve optical parameter estimation dramatically and thus represent a prevailing trend in NIR imaging. These approaches typically involve applying anatomical templates from magnetic resonance imaging/computed tomography/ultrasound images to guide the recovery of optical parameters. However, merging these data sets using current technology requires multiple software packages, substantial expertise, significant time-commitment, and often results in unacceptably poor mesh quality for optical image reconstruction, a reality that represents a significant roadblock for translational research of multimodal NIR imaging. This work addresses these challenges directly by introducing automated digital imaging and communications in medicine image stack segmentation and a new one-click three-dimensional mesh generator optimized for multimodal NIR imaging, and combining these capabilities into a single software package (available for free download) with a streamlined workflow. Image processing time and mesh quality benchmarks were examined for four common multimodal NIR use-cases (breast, brain, pancreas, and small animal) and were compared to a commercial image processing package. Applying these tools resulted in a fivefold decrease in image processing time and 62% improvement in minimum mesh quality, in the absence of extra mesh postprocessing. These capabilities represent a significant step toward enabling translational multimodal NIR research for both expert and nonexpert users in an open-source platform.

  12. Fast segmentation and high-quality three-dimensional volume mesh creation from medical images for diffuse optical tomography

    PubMed Central

    Ghadyani, Hamid; Mastanduno, Michael A.; Turner, Wes; Davis, Scott C.; Dehghani, Hamid; Pogue, Brian W.

    2013-01-01

    Multimodal approaches that combine near-infrared (NIR) and conventional imaging modalities have been shown to improve optical parameter estimation dramatically and thus represent a prevailing trend in NIR imaging. These approaches typically involve applying anatomical templates from magnetic resonance imaging/computed tomography/ultrasound images to guide the recovery of optical parameters. However, merging these data sets using current technology requires multiple software packages, substantial expertise, significant time-commitment, and often results in unacceptably poor mesh quality for optical image reconstruction, a reality that represents a significant roadblock for translational research of multimodal NIR imaging. This work addresses these challenges directly by introducing automated digital imaging and communications in medicine image stack segmentation and a new one-click three-dimensional mesh generator optimized for multimodal NIR imaging, and combining these capabilities into a single software package (available for free download) with a streamlined workflow. Image processing time and mesh quality benchmarks were examined for four common multimodal NIR use-cases (breast, brain, pancreas, and small animal) and were compared to a commercial image processing package. Applying these tools resulted in a fivefold decrease in image processing time and 62% improvement in minimum mesh quality, in the absence of extra mesh postprocessing. These capabilities represent a significant step toward enabling translational multimodal NIR research for both expert and nonexpert users in an open-source platform. PMID:23942632

  13. Micro analysis of fringe field formed inside LDA measuring volume

    NASA Astrophysics Data System (ADS)

    Ghosh, Abhijit; Nirala, A. K.

    2016-05-01

    In the present study we propose a technique for micro analysis of fringe field formed inside laser Doppler anemometry (LDA) measuring volume. Detailed knowledge of the fringe field obtained by this technique allows beam quality, alignment and fringe uniformity to be evaluated with greater precision and may be helpful for selection of an appropriate optical element for LDA system operation. A complete characterization of fringes formed at the measurement volume using conventional, as well as holographic optical elements, is presented. Results indicate the qualitative, as well as quantitative, improvement of fringes formed at the measurement volume by holographic optical elements. Hence, use of holographic optical elements in LDA systems may be advantageous for improving accuracy in the measurement.

  14. Method for measuring anterior chamber volume by image analysis

    NASA Astrophysics Data System (ADS)

    Zhai, Gaoshou; Zhang, Junhong; Wang, Ruichang; Wang, Bingsong; Wang, Ningli

    2007-12-01

    Anterior chamber volume (ACV) is very important for an ophthalmologist making a pathological diagnosis in patients with ocular diseases such as glaucoma, yet it is difficult to measure accurately. In this paper, a method is devised to measure anterior chamber volumes based on JPEG-formatted image files that have been converted from medical images produced by an anterior-chamber optical coherence tomographer (AC-OCT) and its corresponding image-processing software. The algorithms for image analysis and ACV calculation are implemented in VC++, and a series of anterior chamber images of typical patients are analyzed; the calculated anterior chamber volumes are verified to be in accord with clinical observation. The results show that the measurement method is effective and feasible and has the potential to improve the accuracy of ACV calculation. Meanwhile, further work is needed to simplify the manual preprocessing of the images.

  15. Segmentation and Visual Analysis of Whole-Body Mouse Skeleton microSPECT

    PubMed Central

    Khmelinskii, Artem; Groen, Harald C.; Baiker, Martin; de Jong, Marion; Lelieveldt, Boudewijn P. F.

    2012-01-01

    Whole-body SPECT small animal imaging is used to study cancer, and plays an important role in the development of new drugs. Comparing and exploring whole-body datasets can be a difficult and time-consuming task due to the inherent heterogeneity of the data (high volume/throughput, multi-modality, postural and positioning variability). The goal of this study was to provide a method to align and compare side-by-side multiple whole-body skeleton SPECT datasets in a common reference, thus eliminating acquisition variability that exists between the subjects in cross-sectional and multi-modal studies. Six whole-body SPECT/CT datasets of BALB/c mice injected with bone targeting tracers 99mTc-methylene diphosphonate (99mTc-MDP) and 99mTc-hydroxymethane diphosphonate (99mTc-HDP) were used to evaluate the proposed method. An articulated version of the MOBY whole-body mouse atlas was used as a common reference. Its individual bones were registered one-by-one to the skeleton extracted from the acquired SPECT data following an anatomical hierarchical tree. Sequential registration was used while constraining the local degrees of freedom (DoFs) of each bone in accordance to the type of joint and its range of motion. The Articulated Planar Reformation (APR) algorithm was applied to the segmented data for side-by-side change visualization and comparison of data. To quantitatively evaluate the proposed algorithm, bone segmentations of extracted skeletons from the correspondent CT datasets were used. Euclidean point to surface distances between each dataset and the MOBY atlas were calculated. The obtained results indicate that after registration, the mean Euclidean distance decreased from 11.5±12.1 to 2.6±2.1 voxels. The proposed approach yielded satisfactory segmentation results with minimal user intervention. It proved to be robust for “incomplete” data (large chunks of skeleton missing) and for an intuitive exploration and comparison of multi-modal SPECT/CT cross
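The evaluation metric, the mean Euclidean distance from each extracted skeleton point to the MOBY atlas surface, can be sketched as follows. Here the surface is approximated by its vertex set; the study's exact point-to-surface computation (e.g., against triangle faces) may differ.

```python
import math

def mean_point_to_surface(points, surface_points):
    """Mean Euclidean distance from each point to its nearest
    surface point (brute-force nearest-neighbor approximation;
    real pipelines would use a spatial index such as a k-d tree)."""
    return sum(
        min(math.dist(p, q) for q in surface_points) for p in points
    ) / len(points)
```

Computed before and after articulated registration, this is the quantity that decreased from 11.5±12.1 to 2.6±2.1 voxels in the abstract.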

  16. Automatic 3-D grayscale volume matching and shape analysis.

    PubMed

    Guétat, Grégoire; Maitre, Matthieu; Joly, Laurène; Lai, Sen-Lin; Lee, Tzumin; Shinagawa, Yoshihisa

    2006-04-01

    Recently, shape matching in three dimensions (3-D) has been gaining importance in a wide variety of fields such as computer graphics, computer vision, medicine, and biology, with applications such as object recognition, medical diagnosis, and quantitative morphological analysis of biological operations. Automatic shape matching techniques developed in the field of computer graphics handle object surfaces, but ignore intensities of inner voxels. In biology and medical imaging, voxel intensities obtained by computed tomography (CT), magnetic resonance imaging (MRI), and confocal microscopes are important to determine point correspondences. Nevertheless, most biomedical volume matching techniques require human interaction, and automatic methods assume matched objects to have very similar shapes so as to avoid combinatorial explosions of point correspondences. This article is aimed at decreasing the gap between the two fields. The proposed method automatically finds dense point correspondences between two grayscale volumes; i.e., it finds a correspondent in the second volume for every voxel in the first volume, based on the voxel intensities. Multiresolution pyramids are introduced to reduce computational load and handle highly plastic objects. We calculate the average shape of a set of similar objects and give a measure of plasticity to compare them. Matching results can also be used to generate intermediate volumes for morphing. We use various data to validate the effectiveness of our method: we calculate the average shape and plasticity of a set of fly brain cells, and we also match a human skull and an orangutan skull. PMID:16617625

  17. Final safety analysis report for the Galileo Mission: Volume 2, Book 2: Accident model document: Appendices

    SciTech Connect

    Not Available

    1988-12-15

    This section of the Accident Model Document (AMD) presents the appendices which describe the various analyses that have been conducted for use in the Galileo Final Safety Analysis Report II, Volume II. Included in these appendices are the approaches, techniques, conditions and assumptions used in the development of the analytical models plus the detailed results of the analyses. Also included in these appendices are summaries of the accidents and their associated probabilities and environment models taken from the Shuttle Data Book (NSTS-08116), plus summaries of the several segments of the recent GPHS safety test program. The information presented in these appendices is used in Section 3.0 of the AMD to develop the Failure/Abort Sequence Trees (FASTs) and to determine the fuel releases (source terms) resulting from the potential Space Shuttle/IUS accidents throughout the missions.

  18. A fuzzy, nonparametric segmentation framework for DTI and MRI analysis: with applications to DTI-tract extraction.

    PubMed

    Awate, Suyash P; Zhang, Hui; Gee, James C

    2007-11-01

    This paper presents a novel fuzzy-segmentation method for diffusion tensor (DT) and magnetic resonance (MR) images. Typical fuzzy-segmentation schemes, e.g., those based on fuzzy C-means (FCM), incorporate Gaussian class models that are inherently biased towards ellipsoidal clusters characterized by a mean element and a covariance matrix. Tensors in fiber bundles, however, inherently lie on specific manifolds in Riemannian spaces. Unlike FCM-based schemes, the proposed method represents these manifolds using nonparametric data-driven statistical models. The paper describes a statistically sound (consistent) technique for nonparametric modeling in Riemannian DT spaces. The proposed method produces an optimal fuzzy segmentation by maximizing a novel information-theoretic energy in a Markov-random-field framework. Results on synthetic and real DT and MR images show that the proposed method provides information about the uncertainties in the segmentation decisions, which stem from imaging artifacts including noise, partial voluming, and inhomogeneity. By enhancing the nonparametric model to capture the spatial continuity and structure of the fiber bundle, we exploit the framework to extract the cingulum fiber bundle. Typical tractography methods for tract delineation, which incorporate thresholds on fractional anisotropy and fiber curvature to terminate tracking, can face serious problems arising from partial voluming and noise. For these reasons, tractography often fails to extract thin tracts with sharp changes in orientation, such as the cingulum. The results demonstrate that the proposed method extracts this structure significantly more accurately than tractography. PMID:18041267

  19. Computerized analysis of coronary artery disease: Performance evaluation of segmentation and tracking of coronary arteries in CT angiograms

    SciTech Connect

    Zhou, Chuan Chan, Heang-Ping; Chughtai, Aamer; Kuriakose, Jean; Agarwal, Prachi; Kazerooni, Ella A.; Hadjiiski, Lubomir M.; Patel, Smita; Wei, Jun

    2014-08-15

    Purpose: The authors are developing a computer-aided detection system to assist radiologists in the analysis of coronary artery disease in coronary CT angiograms (cCTA). This study evaluated the accuracy of the authors’ coronary artery segmentation and tracking methods, which are the essential steps in defining the search space for the detection of atherosclerotic plaques. Methods: The heart region in cCTA is segmented and the vascular structures are enhanced using the authors’ multiscale coronary artery response (MSCAR) method, which performs 3D multiscale filtering and analysis of the eigenvalues of Hessian matrices. Starting from seed points at the origins of the left and right coronary arteries, a 3D rolling balloon region growing (RBG) method that adapts to the local vessel size segments and tracks each of the coronary arteries and identifies the branches along the tracked vessels. The branches are queued and subsequently tracked until the queue is exhausted. With Institutional Review Board approval, 62 cCTA were collected retrospectively from the authors’ patient files. Three experienced cardiothoracic radiologists manually tracked and marked center points of the coronary arteries as the reference standard, following the 17-segment model that includes the clinically significant coronary arteries. Two radiologists visually examined the computer-segmented vessels and marked mistakenly tracked veins and noisy structures as false positives (FPs). For the 62 cases, the radiologists marked a total of 10191 center points on 865 visible coronary artery segments. Results: The computer-segmented vessels overlapped with 83.6% (8520/10191) of the center points. Relative to the 865 radiologist-marked segments, the sensitivity reached 91.9% (795/865) if a true positive is defined as a computer-segmented vessel that overlapped with at least 10% of the reference center points marked on the segment. When the overlap threshold is increased to 50% and 100%, the sensitivities were 86

  20. Interfacial energetics approach for analysis of endothelial cell and segmental polyurethane interactions.

    PubMed

    Hill, Michael J; Cheah, Calvin; Sarkar, Debanjan

    2016-08-01

    Understanding the physicochemical interactions between endothelial cells and biomaterials is vital for regenerative medicine applications. In particular, physical interactions between the substratum interface and spontaneously deposited biomacromolecules, as well as between the induced biomolecular interface and the cell, in terms of surface energetics, are important factors regulating cellular functions. In this study, we examined the physical interactions between endothelial cells and segmental polyurethanes (PUs) using l-tyrosine based PUs to examine the structure-property relations in terms of PU surface energies and endothelial cell organization. Since contact angle analysis, commonly used to probe surface energetics, provides an incomplete interpretation and understanding of the physical interactions, we sought a combinatorial surface energetics approach utilizing water contact angle, Zisman's critical surface tension (CST), Kaelble's numerical method, and van Oss-Good-Chaudhury theory (vOGCT), applied to both substrata and serum-adsorbed matrix, to correlate human umbilical vein endothelial cell (HUVEC) behavior with the surface energetics of l-tyrosine based PU surfaces. We determined that, while the water contact angle of the substratum or adsorbed matrix did not correlate well with HUVEC behavior, overall higher polarity according to the numerical method as well as Lewis base character of the substratum explained increased HUVEC interaction and monolayer formation as opposed to organization into networks. Cell interaction was also interpreted in terms of the combined effects of substratum and adsorbed matrix polarity and Lewis acid-base character to determine the effect of PU segments. PMID:27065449

  1. Segmental analysis of thallium 201 myocardial perfusion scintigraphy: its value in a community hospital.

    PubMed

    Tendera, M; Campbell, W B; Moyers, J R

    1984-08-01

    In a community hospital, we correlated results of thallium 201 myocardial scintigraphy with coronary arteriographic data in 79 patients. Scintigraphy was 92% sensitive and 85% specific in detecting coronary artery disease. There were no false-negative scintigrams in patients with double or triple vessel disease. The most important factors determining sensitivity of the method in detecting individual coronary stenoses were (1) location of the stenosis in the coronary tree, (2) number of vessels involved, and (3) degree of obstruction. Higher prevalence of perfusion defects in areas of 90% to 99% stenosis as compared with 50% to 89% lesions was of borderline statistical significance (86% vs 59%; P = .06). Myocardial perfusion scintigraphy was unable to predict the number of significantly narrowed coronary vessels. Predictive value of a perfusion defect for a significant coronary stenosis was 87% for anterior, 88% for septal, 90% for lateral, 89% for posterior, and 78% for inferior segment. We conclude that segmental analysis of myocardial scintigrams may be of value in a community hospital. PMID:6463700
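The sensitivity, specificity, and predictive-value figures reported here rest on the standard diagnostic-test definitions. A sketch with illustrative counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-test metrics from a 2x2 contingency table:
    sensitivity = TP / (TP + FN)  -- fraction of disease detected
    specificity = TN / (TN + FP)  -- fraction of non-disease correctly cleared
    ppv         = TP / (TP + FP)  -- predictive value of a positive finding
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv
```

The per-segment predictive values quoted in the abstract (87% anterior, 88% septal, etc.) are PPVs computed segment by segment in this way.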

  2. Investigating materials for breast nodules simulation by using segmentation and similarity analysis of digital images

    NASA Astrophysics Data System (ADS)

    Siqueira, Paula N.; Marcomini, Karem D.; Sousa, Maria A. Z.; Schiabel, Homero

    2015-03-01

    The task of identifying the malignancy of nodular lesions on mammograms becomes quite complex due to overlapping structures or even the granular fibrous tissue, which can cause confusion in classifying mass shapes, leading to unnecessary biopsies. Efforts to develop methods for automatic mass detection in CADe (Computer Aided Detection) schemes have been made with the aim of assisting radiologists and working as a second opinion. The validation of these methods may be accomplished, for instance, by using databases of clinical images or images acquired from breast phantoms. With this aim, several materials were tested in order to produce radiographic phantom images that approximate typical mammograms of actual breast nodules. Different nodule patterns were therefore physically produced and used on a previously developed breast phantom. Their characteristics were tested using the digital images obtained from phantom exposures on a LORAD M-IV mammography unit. Two analyses were performed: in the first, regions of interest containing the simulated nodules were segmented by an automated segmentation technique as well as by an experienced radiologist, who delineated the contour of each nodule by means of a graphic display digitizer, and both results were compared using evaluation metrics. The second used the Structural Similarity (SSIM) quality measure to generate quantitative data related to the texture produced by each material. Although all the tested materials proved to be suitable for the study, the PVC film yielded the best results.
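The SSIM measure used in the second analysis combines luminance, contrast, and structure terms. A sketch of the global (single-window) form with the usual stabilizing constants; the study likely used the windowed mean-SSIM, which averages this quantity over local windows:

```python
def ssim_global(img_a, img_b, data_range=255):
    """Global SSIM between two equal-sized grayscale images:
    ((2*mu_x*mu_y + C1)(2*cov_xy + C2)) /
    ((mu_x^2 + mu_y^2 + C1)(var_x + var_y + C2)),
    with the conventional C1 = (0.01*L)^2 and C2 = (0.03*L)^2."""
    xs = [v for row in img_a for v in row]
    ys = [v for row in img_b for v in row]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx * mx + my * my + c1) * (vx + vy + c2))
```

Identical images score 1.0; lower values indicate texture or structure differences between the phantom material and the reference.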

  3. ANALYSIS OF THE SEGMENTAL IMPACTION OF FEMORAL HEAD FOLLOWING AN ACETABULAR FRACTURE SURGICALLY MANAGED

    PubMed Central

    Guimarães, Rodrigo Pereira; Kaleka, Camila Cohen; Cohen, Carina; Daniachi, Daniel; Keiske Ono, Nelson; Honda, Emerson Kiyoshi; Polesello, Giancarlo Cavalli; Riccioli, Walter

    2015-01-01

    Objective: To correlate the postoperative radiographic evaluation with variables accompanying acetabular fractures in order to determine the predictive factors for segmental impaction of the femoral head. Methods: Retrospective analysis of the medical files of patients submitted to open reduction surgery with internal acetabular fixation. Within approximately 35 years, 596 patients were treated for acetabular fractures; 267 were followed up for at least two years. The others were excluded either because their follow-up was shorter than the minimum time, because of insufficient data in their files, or because they had been submitted to non-surgical treatment. The patients were followed up by one of three surgeons of the group using the Merle d'Aubigné and Postel clinical scales as well as radiological studies. Results: Only two of the studied variables, age and amount of postoperative reduction, showed a statistically significant correlation with femoral head impaction. Conclusions: The quality of reduction (anatomical or with up to 2 mm residual deviation) is associated with good radiographic evolution, reducing the potential for segmental impaction of the femoral head, a statistically significant finding. PMID:27004191

  4. Theoretical analysis of segmented Wolter/LSM X-ray telescope systems

    NASA Technical Reports Server (NTRS)

    Shealy, D. L.; Chao, S. H.

    1986-01-01

    The Segmented Wolter I/LSM X-ray Telescope, which consists of a Wolter I telescope with a tilted, off-axis convex spherical Layered Synthetic Microstructure (LSM) optic placed near the primary focus to accommodate multiple off-axis detectors, has been analyzed. The Skylab ATM Experiment S056 Wolter I telescope and the Stanford/MSFC nested Wolter-Schwarzschild X-ray telescope have been considered as the primary optics. A ray trace analysis has been performed to calculate the RMS blur circle radius, the point spread function (PSF), the meridional and sagittal line spread functions (LSF), and the full width at half maximum (FWHM) of the PSF to study the spatial resolution of the system. The effects on resolution of defocusing the image plane and of tilting and decentering the multilayer (LSM) optic have also been investigated to give the mounting and alignment tolerances of the LSM optic. Comparison has been made between the performance of the segmented Wolter/LSM optical system and that of the Spectral Slicing X-ray Telescope (SSXRT) systems.

  5. Do tumor volume, percent tumor volume predict biochemical recurrence after radical prostatectomy? A meta-analysis

    PubMed Central

    Meng, Yang; Li, He; Xu, Peng; Wang, Jia

    2015-01-01

    The aim of this meta-analysis was to explore the effects of tumor volume (TV) and percent tumor volume (PTV) on biochemical recurrence (BCR) after radical prostatectomy (RP). An electronic search of Medline, Embase and CENTRAL was performed for relevant studies. Studies that evaluated the effects of TV and/or PTV on BCR after RP and provided detailed results of multivariate analyses were included. Combined hazard ratios (HRs) and their corresponding 95% confidence intervals (CIs) were calculated using random-effects or fixed-effects models. A total of 15 studies with 16 datasets were included in the meta-analysis. Our study showed that both TV (HR 1.04, 95% CI: 1.00-1.07; P=0.03) and PTV (HR 1.01, 95% CI: 1.00-1.02; P=0.02) were predictors of BCR after RP. The subgroup analyses revealed that TV predicted BCR in studies from Asia, PTV was significantly correlated with BCR in studies in which PTV was measured by computer planimetry, and both TV and PTV predicted BCR in studies with small sample sizes (<1000). In conclusion, our meta-analysis demonstrated that both TV and PTV were significantly associated with BCR after RP. Therefore, TV and PTV should be considered when assessing the risk of BCR in RP specimens. PMID:26885209
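Combining hazard ratios across studies is conventionally done on the log scale with inverse-variance weights, recovering each study's standard error from its 95% CI. A fixed-effects sketch (illustrative; the paper also used random-effects models, which additionally estimate between-study variance):

```python
import math

def pooled_hr(hrs_with_ci):
    """Fixed-effects inverse-variance pooling of hazard ratios.
    Each entry is (hr, ci_low, ci_high) with a 95% CI, so
    SE(log HR) = (log(ci_high) - log(ci_low)) / (2 * 1.96)."""
    num = den = 0.0
    for hr, lo, hi in hrs_with_ci:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / (se * se)          # inverse-variance weight
        num += w * math.log(hr)
        den += w
    return math.exp(num / den)
```

With inputs like the abstract's per-study TV estimates, the pooled value corresponds to its combined HR of 1.04.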

  6. New Fully Automated Method for Segmentation of Breast Lesions on Ultrasound Based on Texture Analysis.

    PubMed

    Gómez-Flores, Wilfrido; Ruiz-Ortega, Bedert Abel

    2016-07-01

    The study described here explored a fully automatic segmentation approach based on texture analysis for breast lesions on ultrasound images. The proposed method involves two main stages: (i) In lesion region detection, the original gray-scale image is transformed into a texture domain based on log-Gabor filters. Local texture patterns are then extracted from overlapping lattices that are further classified by a linear discriminant analysis classifier to distinguish between the "normal tissue" and "breast lesion" classes. Next, an incremental method based on the average radial derivative function reveals the region with the highest probability of being a lesion. (ii) In lesion delineation, using the detected region and the pre-processed ultrasound image, an iterative thresholding procedure based on the average radial derivative function is performed to determine the final lesion contour. The experiments are carried out on a data set of 544 breast ultrasound images (including cysts, benign solid masses and malignant lesions) acquired with three distinct ultrasound machines. In terms of the area under the receiver operating characteristic curve, the one-way analysis of variance test (α=0.05) indicates that the proposed approach significantly outperforms two published fully automatic methods (p<0.001), for which the areas under the curve are 0.91, 0.82 and 0.63, respectively. Hence, these results suggest that the log-Gabor domain improves the discrimination power of texture features to accurately segment breast lesions. In addition, the proposed approach can potentially be used for automated computer diagnosis purposes to assist physicians in detection and classification of breast masses. PMID:27095150
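The log-Gabor texture domain mentioned above can be sketched as a frequency-domain filter. The construction below is a generic radial log-Gabor; the paper's filter bank, orientations and parameter choices are not reproduced, and `f0` and `sigma_ratio` are illustrative parameter names:

```python
import numpy as np

def log_gabor_radial(size, f0, sigma_ratio=0.55):
    """Radial log-Gabor transfer function on a size x size frequency grid.

    f0 is the center frequency (cycles/pixel); sigma_ratio controls the
    bandwidth. The log-Gabor has zero DC response by construction, one
    reason it is attractive for texture features.
    """
    fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size),
                         indexing="ij")
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0   # placeholder to avoid log(0); DC is zeroed below
    g = np.exp(-np.log(radius / f0) ** 2 / (2 * np.log(sigma_ratio) ** 2))
    g[0, 0] = 0.0        # zero response at DC
    return g

def filter_image(image, g):
    """Apply the frequency-domain filter; the magnitude gives a texture map."""
    return np.abs(np.fft.ifft2(np.fft.fft2(image) * g))

g = log_gabor_radial(64, f0=0.1)
texture = filter_image(np.random.default_rng(0).normal(size=(64, 64)), g)
```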

  7. Global fractional anisotropy and mean diffusivity together with segmented brain volumes assemble a predictive discriminant model for young and elderly healthy brains: a pilot study at 3T

    PubMed Central

    Garcia-Lazaro, Haydee Guadalupe; Becerra-Laparra, Ivonne; Cortez-Conradis, David; Roldan-Valadez, Ernesto

    2016-01-01

    Summary Several parameters of brain integrity can be derived from diffusion tensor imaging. These include fractional anisotropy (FA) and mean diffusivity (MD). Combination of these variables using multivariate analysis might result in a predictive model able to detect the structural changes of human brain aging. Our aim was to discriminate between young and older healthy brains by combining structural and volumetric variables from brain MRI: FA, MD, and white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) volumes. This was a cross-sectional study in 21 young (mean age, 25.71±3.04 years; range, 21–34 years) and 10 elderly (mean age, 70.20±4.02 years; range, 66–80 years) healthy volunteers. Multivariate discriminant analysis, with age as the dependent variable and WM, GM and CSF volumes, global FA and MD, and gender as the independent variables, was used to assemble a predictive model. The resulting model was able to differentiate between young and older brains: Wilks’ λ = 0.235, χ2 (6) = 37.603, p = .000001. Only global FA, WM volume and CSF volume significantly discriminated between groups. The total accuracy was 93.5%; the sensitivity, specificity and positive and negative predictive values were 91.30%, 100%, 100% and 80%, respectively. Global FA, WM volume and CSF volume are parameters that, when combined, reliably discriminate between young and older brains. A decrease in FA is the strongest predictor of membership of the older brain group, followed by an increase in WM and CSF volumes. Brain assessment using a predictive model might allow the follow-up of selected cases that deviate from normal aging. PMID:27027893
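The discriminant model above combines several predictors into a single linear score. A numpy-only sketch of Fisher's linear discriminant on synthetic two-group data; the group means, spreads, and sample sizes below are invented stand-ins for the five predictors, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for (FA, MD, WM, GM, CSF) in two groups, with shifted
# means mimicking "young" vs "elderly" profiles (illustrative values only).
young = rng.normal([0.45, 0.8, 600, 700, 150], [0.02, 0.05, 30, 30, 15], (40, 5))
old   = rng.normal([0.40, 0.9, 560, 660, 220], [0.02, 0.05, 30, 30, 15], (40, 5))

def fisher_lda(a, b):
    """Fisher's linear discriminant: w maximizes between/within-class scatter."""
    sw = np.cov(a, rowvar=False) + np.cov(b, rowvar=False)   # within-class scatter
    w = np.linalg.solve(sw, a.mean(0) - b.mean(0))           # discriminant direction
    threshold = w @ (a.mean(0) + b.mean(0)) / 2              # midpoint cutoff
    return w, threshold

w, t = fisher_lda(young, old)
accuracy = (np.mean(young @ w > t) + np.mean(old @ w < t)) / 2
```

The magnitude of each component of `w` (on standardized predictors) plays the role of the study's ranking of FA, WM and CSF volume as discriminators.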

  8. Evaluation of poly-drug use in methadone-related fatalities using segmental hair analysis.

    PubMed

    Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian

    2015-03-01

    In Denmark, fatal poisoning among drug addicts is often related to methadone. The primary mechanism contributing to fatal methadone overdose is respiratory depression. Concurrent use of other central nervous system (CNS) depressants is suggested to heighten the potential for fatal methadone toxicity. Reduced tolerance due to a short period of abstinence is also proposed to constitute a risk for fatal overdose. The primary aims of this study were to investigate whether concurrent use of CNS depressants or reduced tolerance were significant risk factors in methadone-related fatalities, using segmental hair analysis. The study included 99 methadone-related fatalities collected in Denmark from 2008 to 2011 in which both blood and hair were available. The cases were divided into three subgroups based on the cause of death: methadone poisoning (N=64), poly-drug poisoning (N=28) or methadone poisoning combined with fatal diseases (N=7). No significant differences in methadone concentrations among the subgroups were observed in either blood or hair. The methadone blood concentrations were highly variable (0.015-5.3, median: 0.52 mg/kg) and mainly within the concentration range detected in living methadone users. In hair, methadone was detected in 97 fatalities, with concentrations ranging from 0.061 to 211 ng/mg (median: 11 ng/mg). In the remaining two cases, methadone was detected in blood but absent in hair specimens, suggesting that these two subjects were methadone-naive users. Extensive poly-drug use was observed in all three subgroups, both recently and within the last months prior to death. In particular, concurrent use of multiple benzodiazepines was prevalent among the deceased, followed by abuse of morphine, codeine, amphetamine, cannabis, cocaine and ethanol. By including quantitative segmental hair analysis, additional information on poly-drug use was obtained. Notably, 6-acetylmorphine was detected more frequently in hair specimens, indicating that regular abuse of

  9. A Randomized Trial of Intrapartum Fetal ECG ST-Segment Analysis

    PubMed Central

    Belfort, Michael A.; Saade, George R.; Thom, Elizabeth; Blackwell, Sean C.; Reddy, Uma M.; Thorp, John M.; Tita, Alan T.N.; Miller, Russell S.; Peaceman, Alan M.; McKenna, David S.; Chien, Edward K.S.; Rouse, Dwight J.; Gibbs, Ronald S.; El-Sayed, Yasser Y.; Sorokin, Yoram; Caritis, Steve N.; VanDorsten, J. Peter

    2015-01-01

    BACKGROUND It is unclear whether using fetal electrocardiographic (ECG) ST-segment analysis as an adjunct to conventional intrapartum electronic fetal heart-rate monitoring modifies intrapartum and neonatal outcomes. METHODS We performed a multicenter trial in which women with a singleton fetus who were attempting vaginal delivery at more than 36 weeks of gestation and who had cervical dilation of 2 to 7 cm were randomly assigned to “open” or “masked” monitoring with fetal ST-segment analysis. The masked system functioned as a normal fetal heart-rate monitor. The open system displayed additional information for use when uncertain fetal heart-rate patterns were detected. The primary outcome was a composite of intrapartum fetal death, neonatal death, an Apgar score of 3 or less at 5 minutes, neonatal seizure, an umbilical-artery blood pH of 7.05 or less with a base deficit of 12 mmol per liter or more, intubation for ventilation at delivery, or neonatal encephalopathy. RESULTS A total of 11,108 patients underwent randomization; 5532 were assigned to the open group, and 5576 to the masked group. The primary outcome occurred in 52 fetuses or neonates of women in the open group (0.9%) and 40 fetuses or neonates of women in the masked group (0.7%) (relative risk, 1.31; 95% confidence interval, 0.87 to 1.98; P = 0.20). Among the individual components of the primary outcome, only the frequency of a 5-minute Apgar score of 3 or less differed significantly between neonates of women in the open group and those in the masked group (0.3% vs. 0.1%, P = 0.02). There were no significant between-group differences in the rate of cesarean delivery (16.9% and 16.2%, respectively; P = 0.30) or any operative delivery (22.8% and 22.0%, respectively; P = 0.31). Adverse events were rare and occurred with similar frequency in the two groups. CONCLUSIONS Fetal ECG ST-segment analysis used as an adjunct to conventional intrapartum electronic fetal heart-rate monitoring did not improve
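The reported relative risk and confidence interval follow directly from the event counts given above. A short check using the standard log-RR standard error:

```python
import math

def relative_risk(events_a, n_a, events_b, n_b, z=1.96):
    """Relative risk with a 95% CI via the standard error of log(RR)."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo, hi = (math.exp(math.log(rr) + s * z * se) for s in (-1, 1))
    return rr, lo, hi

# Primary-outcome counts reported above: 52/5532 open vs. 40/5576 masked.
rr, lo, hi = relative_risk(52, 5532, 40, 5576)
# round(rr, 2), round(lo, 2), round(hi, 2) → 1.31, 0.87, 1.98
```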

  10. Segmentation and volumetric measurement of renal cysts and parenchyma from MR images of polycystic kidneys using multi-spectral analysis method

    NASA Astrophysics Data System (ADS)

    Bae, K. T.; Commean, P. K.; Brunsden, B. S.; Baumgarten, D. A.; King, B. F., Jr.; Wetzel, L. H.; Kenney, P. J.; Chapman, A. B.; Torres, V. E.; Grantham, J. J.; Guay-Woodford, L. M.; Tao, C.; Miller, J. P.; Meyers, C. M.; Bennett, W. M.

    2008-03-01

    For segmentation and volume measurement of renal cysts and parenchyma from kidney MR images in subjects with autosomal dominant polycystic kidney disease (ADPKD), a semi-automated, multi-spectral analysis (MSA) method was developed and applied to T1- and T2-weighted MR images. In this method, renal cysts and parenchyma were characterized and segmented based on their characteristic T1 and T2 signal intensity differences. The performance of the MSA segmentation method was tested on ADPKD phantoms and patients. Segmented renal cyst and parenchyma volumes were measured and compared with reference standard measurements: the fluid displacement method in the phantoms, and stereology and region-based thresholding methods in patients, respectively. Renal cysts and parenchyma were segmented successfully with the MSA method. The volume measurements obtained with MSA were in good agreement with the measurements by the other segmentation methods for both phantoms and subjects. The MSA method, however, was more time-consuming than the other segmentation methods because it required pre-segmentation, image registration and tissue classification-determination steps.
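The multi-spectral idea of separating tissues by their joint T1/T2 signatures can be illustrated with a nearest-centroid classifier. The centroids and intensities below are arbitrary illustrative values, and the actual MSA method also involves registration and a more elaborate tissue classification step:

```python
import numpy as np

# Illustrative T1/T2 intensity centroids (arbitrary units, not from the study):
centroids = {"cyst":       np.array([200.0, 900.0]),   # dark on T1, bright on T2
             "parenchyma": np.array([700.0, 400.0])}   # brighter T1, darker T2

def classify_voxels(t1, t2, centroids):
    """Assign each voxel to the nearest class centroid in (T1, T2) space."""
    feats = np.stack([t1.ravel(), t2.ravel()], axis=1)
    names = list(centroids)
    d = np.stack([np.linalg.norm(feats - c, axis=1)
                  for c in centroids.values()], axis=1)
    labels = np.array(names)[d.argmin(axis=1)]
    return labels.reshape(t1.shape)

# Two toy voxels: one cyst-like, one parenchyma-like.
t1 = np.array([[210.0, 690.0]])
t2 = np.array([[880.0, 420.0]])
labels = classify_voxels(t1, t2, centroids)
```

Volume then follows by multiplying each class's voxel count by the voxel volume.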

  11. Multi-level segment analysis: definition and application in turbulent systems

    NASA Astrophysics Data System (ADS)

    Wang, L. P.; Huang, Y. X.

    2015-06-01

    For many complex systems the interaction of different scales is among the most interesting and challenging features. Existing approaches, such as the structure-function and Fourier spectrum methods, have had limited success in extracting the physical properties in different scale regimes. Fundamentally, these methods have their respective limitations, for instance scale mixing, i.e. the so-called infrared and ultraviolet effects. To make improvements in this regard, a new method, multi-level segment analysis (MSA), based on local extrema statistics, has been developed. Benchmark verifications (fractional Brownian motion) and important test cases (Lagrangian and two-dimensional turbulence) show that MSA can successfully reveal different scaling regimes which have remained quite controversial in turbulence research. In general the MSA method proposed here can be applied to different dynamic systems in which the concepts of multiscale and multifractality are relevant.
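The local-extrema statistics at the core of MSA can be sketched for a 1-D signal: find the extrema, then measure the segments between consecutive extrema. This is only the base level of the multi-level construction described in the paper:

```python
import numpy as np

def extrema_segments(x):
    """Indices of local extrema of a 1-D signal and the segments they bound.

    Each segment is reported as (start, end, length, amplitude). A minimal
    sketch of the extrema statistics underlying MSA; the actual multi-level
    construction is more elaborate.
    """
    d = np.diff(x)
    # A sign change of the first difference marks a local max or min.
    idx = np.where(np.sign(d[:-1]) * np.sign(d[1:]) < 0)[0] + 1
    segments = [(i, j, j - i, x[j] - x[i]) for i, j in zip(idx[:-1], idx[1:])]
    return idx, segments

t = np.linspace(0, 4 * np.pi, 200)
idx, segs = extrema_segments(np.sin(t))   # 4 extrema, 3 segments between them
```

Scaling statistics are then built from the distributions of segment length and amplitude rather than from fixed-separation increments, which is how the method avoids the scale mixing mentioned above.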

  12. Automatic neuron segmentation and neural network analysis method for phase contrast microscopy images

    PubMed Central

    Pang, Jincheng; Özkucur, Nurdan; Ren, Michael; Kaplan, David L.; Levin, Michael; Miller, Eric L.

    2015-01-01

    Phase Contrast Microscopy (PCM) is an important tool for the long term study of living cells. Unlike fluorescence methods which suffer from photobleaching of fluorophore or dye molecules, PCM image contrast is generated by the natural variations in optical index of refraction. Unfortunately, the same physical principles which allow for these studies give rise to complex artifacts in the raw PCM imagery. Of particular interest in this paper are neuron images where these image imperfections manifest in very different ways for the two structures of specific interest: cell bodies (somas) and dendrites. To address these challenges, we introduce a novel parametric image model using the level set framework and an associated variational approach which simultaneously restores and segments this class of images. Using this technique as the basis for an automated image analysis pipeline, results for both the synthetic and real images validate and demonstrate the advantages of our approach. PMID:26601004

  13. Fiber composite analysis and design. Volume 2: Structures

    SciTech Connect

    Rosen, B.W.

    1998-09-01

    Recent years have witnessed a significant increase in the understanding and utilization of fibrous composite materials. There has also been a much larger increase in the amount of published literature in this field. This book builds upon existing literature to present a review of the available capability for composite structural design and analysis. The aim is to provide guidance for one who seeks to become familiar with the tools required for designing with fibrous composites. Thus, the book identifies the key concepts associated with the use of these unique materials. This second volume addresses the design and analysis of structural configurations for the practical and efficient utilization of fiber composite materials.

  14. Improved helicopter aeromechanical stability analysis using segmented constrained layer damping and hybrid optimization

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Chattopadhyay, Aditi

    2000-06-01

    Aeromechanical stability plays a critical role in helicopter design, and lead-lag damping is crucial to this design. In this paper, the use of segmented constrained layer (SCL) damping treatment and composite tailoring is investigated for improved rotor aeromechanical stability using a formal optimization technique. The principal load-carrying member in the rotor blade is represented by a composite box beam, of arbitrary thickness, with surface-bonded SCLs. A comprehensive theory is used to model the smart box beam. A ground resonance analysis model and an air resonance analysis model are implemented for a rotor blade built around the composite box beam with SCLs. The Pitt-Peters dynamic inflow model is used in the air resonance analysis under hover conditions. A hybrid optimization technique is used to investigate the optimum design of the composite box beam with surface-bonded SCLs for improved damping characteristics. Parameters such as the stacking sequence of the composite laminates and the placement of the SCLs are used as design variables. Detailed numerical studies are presented for the aeromechanical stability analysis. It is shown that the optimum blade design yields a significant increase in rotor lead-lag regressive modal damping compared to the initial system.

  15. Parallel runway requirement analysis study. Volume 2: Simulation manual

    NASA Technical Reports Server (NTRS)

    Ebrahimi, Yaghoob S.; Chun, Ken S.

    1993-01-01

    This document is a user manual for operating the PLAND_BLUNDER (PLB) simulation program. The simulation is based on two aircraft approaching parallel runways independently, using parallel Instrument Landing System (ILS) equipment during Instrument Meteorological Conditions (IMC). If an aircraft deviates from its assigned localizer course toward the opposite runway, this constitutes a blunder which could endanger the aircraft on the adjacent path. The worst-case scenario is a blundering aircraft that is unable to recover and continues toward the adjacent runway. PLAND_BLUNDER is a Monte Carlo-type simulation which models the events and aircraft positioning during such a blunder situation. The model simulates two aircraft performing parallel ILS approaches using Instrument Flight Rules (IFR) or visual procedures. PLB uses a simple movement model and control law in three dimensions (X, Y, Z). The parameters of the simulation inputs and outputs are defined in this document along with a sample of the statistical analysis. This document is the second volume of a two-volume set. Volume 1 describes the application of the PLB to the analysis of close parallel runway operations.

  16. Application of Control Volume Analysis to Cerebrospinal Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Wei, Timothy; Cohen, Benjamin; Anor, Tomer; Madsen, Joseph

    2011-11-01

    Hydrocephalus is among the most common birth defects and at present can be neither prevented nor cured. Afflicted individuals face serious issues which are currently too complicated and not well enough understood to treat via systematic therapies. This talk outlines the framework and application of a control volume methodology to clinical Phase Contrast MRI data. Specifically, integral control volume analysis utilizes a fundamental fluid dynamics methodology to quantify intracranial dynamics within a precise, direct, and physically meaningful framework. A chronically shunted, hydrocephalic patient in need of a revision procedure was used as an in vivo case study. Magnetic resonance velocity measurements within the patient's aqueduct were obtained in four biomedical states and were analyzed using the methods presented in this dissertation. Pressure force estimates were obtained, showing distinct differences in amplitude, phase, and waveform shape for different intracranial states within the same individual. Thoughts on the physiological and diagnostic research and development implications/opportunities will be presented.

  17. A Segmentation Algorithm for Quantitative Analysis of Heterogeneous Tumors of the Cervix With ¹⁸F-FDG PET/CT.

    PubMed

    Mu, Wei; Chen, Zhe; Shen, Wei; Yang, Feng; Liang, Ying; Dai, Ruwei; Wu, Ning; Tian, Jie

    2015-10-01

    Because positron-emission tomography (PET) images have low spatial resolution and considerable noise, accurate image segmentation is one of the most challenging issues in tumor quantification. Tumors of the uterine cervix present a particular challenge because of urine activity in the adjacent bladder. Here, we propose and validate an automatic segmentation method adapted to cervical tumors. Our proposed methodology combined the gradient field information of both the filtered PET image and the level set function into a level set framework by constructing a new evolution equation. Furthermore, we also constructed a new hyperimage to recognize a rough tumor region using the fuzzy c-means algorithm according to the tissue specificity as defined by both PET (uptake) and computed tomography (attenuation) to provide the initial zero level set, which makes the segmentation process fully automatic. The proposed method was verified in simulation and clinical studies. For the simulation studies, seven different phantoms, representing tumors with homogenous/heterogeneous-low/high uptake patterns and different volumes, were simulated with five different noise levels. Twenty-seven cervical cancer patients at different stages were enrolled for clinical evaluation of the method. Dice similarity coefficients (DSC) and Hausdorff distance (HD) were used to evaluate the accuracy of the segmentation method, while a Bland-Altman analysis of the mean standardized uptake value (SUVmean) and metabolic tumor volume (MTV) was used to evaluate the accuracy of the quantification. Using this method, the DSCs and HDs of the homogenous and heterogeneous phantoms under clinical noise levels were 93.39 ±1.09% and 6.02 ±1.09 mm, and 93.59 ±1.63% and 8.92 ±2.57 mm, respectively. The DSCs and HDs in patients were 91.80 ±2.46% and 7.79 ±2.18 mm. Through Bland-Altman analysis, the SUVmean and the MTV using our method showed high correlation with the clinical gold standard. The results of both simulation
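The fuzzy c-means step used to find a rough tumor region can be sketched in generic form. This is textbook FCM on feature vectors; the paper's PET/CT hyperimage construction and level-set refinement are not reproduced:

```python
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means on feature vectors x of shape (n_samples, n_features).

    Returns the membership matrix u (n_samples, c) and centroids (c, n_features).
    m > 1 is the fuzzifier; m=2 is the common default.
    """
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=len(x))           # random initial memberships
    for _ in range(iters):
        um = u ** m
        centroids = (um.T @ x) / um.sum(axis=0)[:, None]  # weighted class means
        d = np.linalg.norm(x[:, None, :] - centroids[None], axis=2) + 1e-12
        u = 1.0 / d ** (2 / (m - 1))                      # inverse-distance update
        u /= u.sum(axis=1, keepdims=True)                 # rows sum to 1
    return u, centroids

# Two well-separated toy clusters standing in for "tumor" and "background".
pts = np.vstack([np.random.default_rng(1).normal(0, 0.3, (50, 2)),
                 np.random.default_rng(2).normal(5, 0.3, (50, 2))])
u, cents = fuzzy_cmeans(pts)
```

In the paper the highest-membership region then seeds the zero level set for the subsequent contour evolution.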

  18. [Assessment of cardiac function by left heart catheterization: an analysis of left ventricular pressure-volume (length) loops].

    PubMed

    Sasayama, S; Nonogi, H; Sakurai, T; Kawai, C; Fujita, M; Eiho, S; Kuwahara, M

    1984-01-01

    The mechanical properties of cardiac muscle have classically been analyzed in two ways: shortening of the muscle fiber, and development of tension within the muscle. In the ejecting ventricle, left ventricular (LV) function can be analyzed in the analogous two-dimensional framework of pressure-volume loops, which are obtained by plotting the instantaneous volume against the corresponding LV pressure. Integrating pressure with respect to volume gives the total external ventricular work during ejection. The diastolic pressure-volume relations reflect the chamber stiffness of the ventricle. Force-velocity relations also provide a useful conceptual framework for understanding how the ventricle contracts under a given afterload, with modification of preload. In the presence of coronary artery disease, the regional nature of left ventricular contractile function should be characterized as well as the global ventricular function described above, because the latter is determined by the complex interaction of dysfunction of the ischemic myocardium and compensatory augmentation of shortening of the normally perfused myocardium. We used a computer technique to analyze the local wall motion of the ischemic heart by cineventriculography. The boundaries of serial ventricular images are automatically traced and superimposed using an external reference system. Radial grids are drawn from the center of gravity of the end-diastolic image. Measuring the length of each radial grid throughout the cardiac cycle enables analysis of the movement of the ventricle at a particular point on the circumference. Using phasic pressure obtained simultaneously with opacification as the common parameter, segmental pressure-length loops are constructed simultaneously at various segments. The loops are similar over the entire circumference in the normal heart, being rectangular in morphology and showing synchronous behavior during contraction and relaxation. However, the marked distortion of
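The external work described above is the area enclosed by the pressure-volume loop, i.e. the integral of P dV around one cycle. A sketch using an idealized rectangular loop with illustrative values:

```python
import numpy as np

def stroke_work(pressure, volume):
    """External work as the area enclosed by the P-V loop (integral of P dV),
    computed with the shoelace formula on the closed contour of loop vertices."""
    p, v = np.asarray(pressure, float), np.asarray(volume, float)
    return 0.5 * abs(np.sum(v * np.roll(p, -1) - p * np.roll(v, -1)))

# Idealized rectangular loop: 70 mL stroke volume between 10 and 120 mmHg.
p = [10, 120, 120, 10]     # mmHg at the four corners
v = [130, 130, 60, 60]     # mL
work = stroke_work(p, v)   # → 7700.0 mmHg·mL
```

A measured loop would be sampled at many points around the cycle; the same shoelace sum applies unchanged.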

  19. Ultratrace LC-MS/MS analysis of segmented calf hair for retrospective assessment of time of clenbuterol administration in Agriforensics.

    PubMed

    Duvivier, Wilco F; van Beek, Teris A; Meijer, Thijs; Peeters, Ruth J P; Groot, Maria J; Sterk, Saskia S; Nielen, Michel W F

    2015-01-21

    In agriforensics, time of administration is often debated when illegal drug residues, such as clenbuterol, are found in frequently traded cattle. In this proof-of-concept work, the feasibility of obtaining retrospective timeline information from segmented calf tail hair analyses has been studied. First, an ultraperformance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) hair analysis method was adapted to accommodate smaller sample sizes and in-house validated. Then, longitudinal 1 cm segments of calf tail hair were analyzed to obtain clenbuterol concentration profiles. The profiles found were in good agreement with calculated, theoretical positions of the clenbuterol residues along the hair. Following assessment of the average growth rate of calf tail hair, time of clenbuterol administration could be retrospectively determined from segmented hair analysis data. The data from the initial animal treatment study (n = 2) suggest that time of treatment can be retrospectively estimated with an error of 3-17 days. PMID:25537490
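Retrospective timing from segmented hair rests on simple arithmetic: segment position divided by growth rate. A sketch assuming a constant growth rate of 0.7 mm/day, an illustrative figure rather than the study's measured calf tail hair growth rate:

```python
def segment_time_window(segment_index, growth_rate_cm_per_day, segment_cm=1.0):
    """Days-before-sampling window covered by one hair segment.

    segment_index: 0 for the segment closest to the skin. Assumes a constant
    growth rate and no growth lag; both are simplifications of the real
    retrospective assessment.
    """
    start = segment_index * segment_cm / growth_rate_cm_per_day
    end = (segment_index + 1) * segment_cm / growth_rate_cm_per_day
    return start, end

# With 0.07 cm/day, the third 1 cm segment covers roughly days 29-43
# before sample collection.
start, end = segment_time_window(2, 0.07)
```

Uncertainty in the growth rate propagates directly into the window, consistent with the 3-17 day error reported above.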

  20. Sequence Analysis of the Segmental Duplication Responsible for Paris Sex-Ratio Drive in Drosophila simulans

    PubMed Central

    Fouvry, Lucie; Ogereau, David; Berger, Anne; Gavory, Frederick; Montchamp-Moreau, Catherine

    2011-01-01

    Sex-ratio distorters are X-linked selfish genetic elements that facilitate their own transmission by subverting Mendelian segregation at the expense of the Y chromosome. Naturally occurring cases of sex-linked distorters have been reported in a variety of organisms, including several species of Drosophila; they trigger genetic conflict over the sex ratio, which is an important evolutionary force. However, with a few exceptions, the causal loci are unknown. Here, we molecularly characterize the segmental duplication involved in the Paris sex-ratio system that is still evolving in natural populations of Drosophila simulans. This 37.5 kb tandem duplication spans six genes, from the second intron of the Trf2 gene (TATA box binding protein-related factor 2) to the first intron of the org-1 gene (optomotor-blind-related-gene-1). Sequence analysis showed that the duplication arose through the production of an exact copy on the template chromosome itself. We estimated this event to be less than 500 years old. We also detected specific signatures of the duplication mechanism; these support the Duplication-Dependent Strand Annealing model. The region at the junction between the two duplicated segments contains several copies of an active transposable element, Hosim1, alternating with 687 bp repeats that are noncoding but transcribed. The almost-complete sequence identity between copies made it impossible to complete the sequencing and assembly of this region. These results form the basis for the functional dissection of Paris sex-ratio drive and will be valuable for future studies designed to better understand the dynamics and the evolutionary significance of sex chromosome drive. PMID:22384350

  1. Segmental analysis of amphetamines in hair using a sensitive UHPLC-MS/MS method.

    PubMed

    Jakobsson, Gerd; Kronstrand, Robert

    2014-06-01

    A sensitive and robust ultra high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) method was developed and validated for quantification of amphetamine, methamphetamine, 3,4-methylenedioxyamphetamine and 3,4-methylenedioxy methamphetamine in hair samples. Segmented hair (10 mg) was incubated in 2M sodium hydroxide (80°C, 10 min) before liquid-liquid extraction with isooctane followed by centrifugation and evaporation of the organic phase to dryness. The residue was reconstituted in methanol:formate buffer pH 3 (20:80). The total run time was 4 min and after optimization of UHPLC-MS/MS-parameters validation included selectivity, matrix effects, recovery, process efficiency, calibration model and range, lower limit of quantification, precision and bias. The calibration curve ranged from 0.02 to 12.5 ng/mg, and the recovery was between 62 and 83%. During validation the bias was less than ±7% and the imprecision was less than 5% for all analytes. In routine analysis, fortified control samples demonstrated an imprecision <13% and control samples made from authentic hair demonstrated an imprecision <26%. The method was applied to samples from a controlled study of amphetamine intake as well as forensic hair samples previously analyzed with an ultra high performance liquid chromatography time of flight mass spectrometry (UHPLC-TOF-MS) screening method. The proposed method was suitable for quantification of these drugs in forensic cases including violent crimes, autopsy cases, drug testing and re-granting of driving licences. This study also demonstrated that if hair samples are divided into several short segments, the time point for intake of a small dose of amphetamine can be estimated, which might be useful when drug facilitated crimes are investigated. PMID:24817045
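Quantification against a calibration curve like the one validated above can be sketched with a linear fit; the concentrations and instrument responses below are simulated values spanning the stated range, not the study's data:

```python
import numpy as np

# Hypothetical calibration points (ng/mg) spanning the validated range:
conc = np.array([0.02, 0.1, 0.5, 2.5, 12.5])
# Simulated instrument response (peak-area ratio to an internal standard):
resp = np.array([0.011, 0.052, 0.255, 1.270, 6.300])

slope, intercept = np.polyfit(conc, resp, 1)   # linear calibration model

def quantify(peak_area_ratio):
    """Back-calculate concentration from the linear calibration curve."""
    return (peak_area_ratio - intercept) / slope

amphetamine_ng_per_mg = quantify(0.5)   # concentration in a hair segment
```

In practice a weighted fit (e.g. 1/x weighting) is often preferred over such a wide range so that the low calibrators near the LLOQ are not dominated by the high ones.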

  2. Identifying Like-Minded Audiences for Global Warming Public Engagement Campaigns: An Audience Segmentation Analysis and Tool Development

    PubMed Central

    Maibach, Edward W.; Leiserowitz, Anthony; Roser-Renouf, Connie; Mertz, C. K.

    2011-01-01

    Background Achieving national reductions in greenhouse gas emissions will require public support for climate and energy policies and changes in population behaviors. Audience segmentation – a process of identifying coherent groups within a population – can be used to improve the effectiveness of public engagement campaigns. Methodology/Principal Findings In Fall 2008, we conducted a nationally representative survey of American adults (n = 2,164) to identify audience segments for global warming public engagement campaigns. By subjecting multiple measures of global warming beliefs, behaviors, policy preferences, and issue engagement to latent class analysis, we identified six distinct segments ranging in size from 7 to 33% of the population. These six segments formed a continuum, from a segment of people who were highly worried, involved and supportive of policy responses (18%), to a segment of people who were completely unconcerned and strongly opposed to policy responses (7%). Three of the segments (totaling 70%) were to varying degrees concerned about global warming and supportive of policy responses, two (totaling 18%) were unsupportive, and one was largely disengaged (12%), having paid little attention to the issue. Certain behaviors and policy preferences varied greatly across these audiences, while others did not. Using discriminant analysis, we subsequently developed 36-item and 15-item instruments that can be used to categorize respondents with 91% and 84% accuracy, respectively. Conclusions/Significance In late 2008, Americans supported a broad range of policies and personal actions to reduce global warming, although there was wide variation among the six identified audiences. To enhance the impact of campaigns, government agencies, non-profit organizations, and businesses seeking to engage the public can selectively target one or more of these audiences rather than address an undifferentiated general population. Our screening instruments are

  3. Integrated segmentation of cellular structures

    NASA Astrophysics Data System (ADS)

    Ajemba, Peter; Al-Kofahi, Yousef; Scott, Richard; Donovan, Michael; Fernandez, Gerardo

    2011-03-01

    Automatic segmentation of cellular structures is an essential step in image cytology and histology. Despite substantial progress, better automation and improvements in accuracy and adaptability to novel applications are needed. In applications utilizing multi-channel immuno-fluorescence images, challenges include misclassification of epithelial and stromal nuclei, irregular nuclei and cytoplasm boundaries, and over- and under-segmentation of clustered nuclei. Variations in image acquisition conditions and artifacts from nuclei and cytoplasm images often confound existing algorithms in practice. In this paper, we present a robust and accurate algorithm for jointly segmenting cell nuclei and cytoplasm using a combination of ideas to reduce the aforementioned problems. First, an adaptive process that includes top-hat filtering, Eigenvalues-of-Hessian blob detection and distance transforms is used to estimate the inverse illumination field and correct for intensity non-uniformity in the nuclei channel. Next, a minimum-error-thresholding based binarization process and seed-detection combining Laplacian-of-Gaussian filtering constrained by a distance-map-based scale selection is used to identify candidate seeds for nuclei segmentation. The initial segmentation using a local maximum clustering algorithm is refined using a minimum-error-thresholding technique. Final refinements include an artifact removal process specifically targeted at lumens and other problematic structures and a systemic decision process to reclassify nuclei objects near the cytoplasm boundary as epithelial or stromal. Segmentation results were evaluated using 48 realistic phantom images with known ground-truth. The overall segmentation accuracy exceeds 94%. The algorithm was further tested on 981 images of actual prostate cancer tissue. The artifact removal process worked in 90% of cases. The algorithm has now been deployed in a high-volume histology analysis application.

  4. Analysis of automated highway system risks and uncertainties. Volume 5

    SciTech Connect

    Sicherman, A.

    1994-10-01

    This volume describes a risk analysis performed to help identify important Automated Highway System (AHS) deployment uncertainties and quantify their effect on costs and benefits for a range of AHS deployment scenarios. The analysis identified a suite of key factors affecting vehicle and roadway costs, capacities and market penetrations for alternative AHS deployment scenarios. A systematic protocol was utilized for obtaining expert judgments of key factor uncertainties in the form of subjective probability percentile assessments. Based on these assessments, probability distributions on vehicle and roadway costs, capacity and market penetration were developed for the different scenarios. The cost/benefit risk methodology and analysis provide insights by showing how uncertainties in key factors translate into uncertainties in summary cost/benefit indices.
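
    The protocol of eliciting percentile assessments and propagating them to cost/benefit indices can be imitated with a small Monte Carlo sketch; every number and the benefit/cost formula below are invented for illustration and are not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical low / most-likely / high expert assessments, approximated
# here by triangular distributions (standing in for elicited percentiles).
veh_cost = rng.triangular(1500, 3000, 8000, N)     # added cost per AHS vehicle ($)
cap_gain = rng.triangular(1.5, 2.5, 4.0, N)        # lane-capacity multiplier
penetration = rng.triangular(0.05, 0.20, 0.40, N)  # AHS market penetration

# Toy benefit/cost index: capacity benefit assumed to grow with the square
# of penetration (platooning needs equipped neighbours), cost linearly.
benefit = (cap_gain - 1.0) * penetration ** 2
cost = veh_cost * penetration / 1e5
bc_index = benefit / cost

p10, p50, p90 = np.percentile(bc_index, [10, 50, 90])
```

    The spread between the 10th and 90th percentiles of the index is exactly the kind of summary the risk methodology reports: how key-factor uncertainty translates into cost/benefit uncertainty.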

  5. Optical granulometric analysis of sedimentary deposits by color segmentation-based software: OPTGRAN-CS

    NASA Astrophysics Data System (ADS)

    Chávez, G. Moreno; Sarocchi, D.; Santana, E. Arce; Borselli, L.

    2015-12-01

    The study of grain size distribution is fundamental for understanding sedimentological environments. Through these analyses, clast erosion, transport and deposition processes can be interpreted and modeled. However, grain size distribution analysis can be difficult in some outcrops due to the number and complexity of the arrangement of clasts and matrix and their physical size. Despite various technological advances, it is almost impossible to obtain the full grain size distribution (blocks to sand grain size) with a single method or instrument of analysis. For this reason, development in this area continues to be fundamental. In recent years, various methods of particle size analysis by automatic image processing have been developed, owing to their potential advantages with respect to classical ones: speed and the detailed information obtained (virtually for each analyzed particle). In this framework, we have developed a novel algorithm and software for grain size distribution analysis, based on color image segmentation using an entropy-controlled quadratic Markov measure field algorithm and the Rosiwal method for counting intersections between clasts and linear transects in the images. We tested the algorithm on different sedimentary deposit types from 14 varieties of sedimentological environments. The results of the new algorithm were compared with grain counts performed manually by experts using the same Rosiwal method. The new algorithm has the same accuracy as a classical manual count, but the new methodology is much easier to apply and dramatically less time-consuming. Once field outcrop images have been recorded, the new software significantly increases the productivity of clast deposit analysis.
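
    Rosiwal's principle, counting intercept lengths along linear transects of the segmented image, reduces to per-class run counting on a label image; a deterministic toy version (the function name and the half-and-half test image are illustrative):

```python
import numpy as np

def rosiwal_linear_fractions(labels, n_transects=8):
    """Estimate area (volume) fractions from intercept lengths along
    evenly spaced horizontal transects (Rosiwal's principle).
    `labels` is an integer image: 0 = matrix, >0 = clast classes."""
    rows = np.linspace(0, labels.shape[0] - 1, n_transects).astype(int)
    counts, total = {}, 0
    for r in rows:
        line = labels[r]
        vals, n = np.unique(line, return_counts=True)
        for v, c in zip(vals, n):
            counts[int(v)] = counts.get(int(v), 0) + int(c)
        total += line.size
    return {v: c / total for v, c in counts.items()}

# toy segmented image: left half class 1, right half class 2
img = np.zeros((100, 100), dtype=int)
img[:, :50] = 1
img[:, 50:] = 2
fracs = rosiwal_linear_fractions(img)
```

    For the half-and-half image, both classes come out at a linear fraction of 0.5, matching their true area fractions.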

  6. Synfuel program analysis. Volume I. Procedures-capabilities

    SciTech Connect

    Muddiman, J. B.; Whelan, J. W.

    1980-07-01

    This is the first of two volumes describing the analytic procedures and resulting capabilities developed by Resource Applications (RA) for examining the economic viability, public costs, and national benefits of alternative synfuel projects and integrated programs. This volume is intended for Department of Energy (DOE) and Synthetic Fuel Corporation (SFC) program management personnel and includes a general description of the costing, venture, and portfolio models with enough detail for the reader to be able to specify cases and interpret outputs. It also contains an explicit description (with examples) of the types of results which can be obtained when the models are applied to: the analysis of individual projects; the analysis of input uncertainty, i.e., risk; and the analysis of portfolios of such projects, including varying technology mixes and buildup schedules. In all cases, the objective is to obtain, on the one hand, comparative measures of private investment requirements and expected returns (under differing public policies) as they affect the private decision to proceed, and, on the other, public costs and national benefits as they affect public decisions to participate (in what form, in what areas, and to what extent).

  7. Analysis of volume holographic storage allowing large-angle illumination

    NASA Astrophysics Data System (ADS)

    Shamir, Joseph

    2005-05-01

    Advanced technological developments have stimulated renewed interest in volume holography for applications such as information storage and wavelength multiplexing for communications and laser beam shaping. In these and many other applications, the information-carrying wave fronts usually possess narrow spatial-frequency bands, although they may propagate at large angles with respect to each other or a preferred optical axis. Conventional analytic methods are not capable of properly analyzing the optical architectures involved. For mitigation of the analytic difficulties, a novel approximation is introduced to treat narrow spatial-frequency band wave fronts propagating at large angles. This approximation is incorporated into the analysis of volume holography based on a plane-wave decomposition and Fourier analysis. As a result of the analysis, the recently introduced generalized Bragg selectivity is rederived for this more general case and is shown to provide enhanced performance for the above indicated applications. The power of the new theoretical description is demonstrated with the help of specific examples and computer simulations. The simulations reveal some interesting effects, such as coherent motion blur, that were predicted in an earlier publication.
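
    For context, the plane-wave decomposition underlying the analysis is, in its conventional small-angle form, the angular-spectrum propagator; a minimal numerical sketch (this is the textbook method, not the paper's large-angle approximation, and the beam parameters are invented):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a scalar field a distance z by plane-wave
    (angular-spectrum) decomposition: FFT, multiply by the free-space
    transfer function, inverse FFT. Evanescent components are dropped."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Gaussian beam on a 256x256 grid: 0.5 um wavelength, 1 um pixels
x = (np.arange(256) - 128) * 1e-6
X, Y = np.meshgrid(x, x)
u0 = np.exp(-(X ** 2 + Y ** 2) / (20e-6) ** 2)
u1 = angular_spectrum_propagate(u0, 0.5e-6, 1e-6, 100e-6)
```

    Because the propagating-wave transfer function is unitary, the beam's energy is conserved; the abstract's point is that this standard decomposition needs modification when narrow-band wave fronts travel at large angles to the axis.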

  8. Analysis of Cavity Volumes in Proteins Using Percolation Theory

    NASA Astrophysics Data System (ADS)

    Green, Sheridan; Jacobs, Donald; Farmer, Jenny

    Molecular packing is studied in a diverse set of globular proteins in their native state, ranging in size from 34 to 839 residues. A new algorithm has been developed that builds upon the classic Hoshen-Kopelman algorithm for site percolation, combined with a local connection criterion that classifies empty space within a protein as a cavity when it is large enough to hold a spherical probe of radius R, and otherwise as microvoid. Although microvoid cannot fit an object (e.g., a molecule or ion) the size of the probe or larger, total microvoid volume is a major contribution to protein volume. Importantly, the cavity and microvoid classification depends on probe radius: as probe size decreases, less microvoid forms in favor of more cavities. As probe size is varied from large to small, many disconnected cavities merge to form a percolating path. For fixed probe size, microvoid, cavity, and solvent-accessible boundary volume properties reflect conformational fluctuations. These results are visualized on three-dimensional structures. Analysis of the cluster statistics within the framework of percolation theory suggests that interconversion between microvoid and cavity pathways regulates the dynamics of solvent penetration during partial unfolding events important to protein function.
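
    A minimal stand-in for the cavity/microvoid classification: SciPy's `ndimage.label` plays the Hoshen-Kopelman role, and the probe-fit test is approximated by a Euclidean distance transform. The toy lattice and probe radius below are invented.

```python
import numpy as np
from scipy import ndimage as ndi

def classify_empty_space(occupied, probe_radius):
    """Label connected clusters of empty lattice sites and call a cluster
    a 'cavity' if it can hold a spherical probe of the given radius
    (approximated via the distance transform), else a 'microvoid'."""
    empty = ~occupied
    dist = ndi.distance_transform_edt(empty)
    clusters, n = ndi.label(empty)
    cavities, microvoids = [], []
    for cid in range(1, n + 1):
        if dist[clusters == cid].max() >= probe_radius:
            cavities.append(cid)
        else:
            microvoids.append(cid)
    return clusters, cavities, microvoids

# toy 3D lattice: a solid block with one large void and one single-site void
occ = np.ones((20, 20, 20), dtype=bool)
occ[5:12, 5:12, 5:12] = False   # large cavity
occ[15, 15, 15] = False         # single-site microvoid
clusters, cavities, microvoids = classify_empty_space(occ, probe_radius=2.0)
```

    Shrinking `probe_radius` reclassifies microvoid as cavity, mirroring the probe-size dependence described in the abstract.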

  9. User's operating procedures. Volume 2: Scout project financial analysis program

    NASA Technical Reports Server (NTRS)

    Harris, C. G.; Haris, D. K.

    1985-01-01

    A review is presented of the user's operating procedures for the Scout Project Automatic Data System, called SPADS. SPADS is the result of the past seven years of software development on a Prime mini-computer located at the Scout Project Office, NASA Langley Research Center, Hampton, Virginia. SPADS was developed as a single-entry, multiple cross-reference data management and information retrieval system for the automation of Project office tasks, including engineering, financial, managerial, and clerical support. This volume, two (2) of three (3), provides the instructions to operate the Scout Project Financial Analysis program in data retrieval and file maintenance via the user-friendly menu drivers.

  10. A coronary artery segmentation method based on multiscale analysis and region growing.

    PubMed

    Kerkeni, Asma; Benabdallah, Asma; Manzanera, Antoine; Bedoui, Mohamed Hedi

    2016-03-01

    Accurate coronary artery segmentation is a fundamental step in various medical imaging applications such as stenosis detection, 3D reconstruction, and assessment of cardiac dynamics. In this paper, a multiscale region growing (MSRG) method for coronary artery segmentation in 2D X-ray angiograms is proposed. First, a region growing rule incorporating both vesselness and direction information in a unique way is introduced. Then an iterative multiscale search based on this criterion is performed. Points selected in each step are considered as seeds for the following step. By combining vesselness and direction information in the growing rule, the method avoids blockage caused by low vesselness values in vascular regions, which, in turn, yields a continuous vessel tree. Performing the process in a multiscale fashion helps to extract thin and peripheral vessels often missed by other segmentation methods. Quantitative evaluation performed on real angiography images shows that the proposed method identifies about 80% of the total coronary artery tree in relatively easy images and 70% in challenging cases, with a mean precision of 82%, and outperforms other segmentation methods in terms of sensitivity. The MSRG method was also implemented with different enhancement filters, and the Frangi filter was shown to give better results. The proposed method has proven well suited to coronary artery segmentation: it maintains acceptable performance when dealing with challenging situations such as noise, stenosis and poor contrast. PMID:26748040
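
    The growing rule can be illustrated in its simplest form: a 4-connected region growing on a vesselness map with a plain threshold. The direction term and the multiscale iteration of the MSRG method are omitted, and the toy vessel map is invented.

```python
import numpy as np
from collections import deque

def region_grow(vesselness, seed, thresh):
    """Minimal 4-connected region growing: a pixel joins the region when
    its vesselness exceeds `thresh`. (The paper's rule additionally uses
    local vessel direction; that refinement is not modelled here.)"""
    h, w = vesselness.shape
    grown = np.zeros((h, w), dtype=bool)
    grown[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not grown[ny, nx] \
                    and vesselness[ny, nx] > thresh:
                grown[ny, nx] = True
                q.append((ny, nx))
    return grown

# toy map: a bright horizontal "vessel" on a dark background
v = np.zeros((32, 32))
v[16, 4:28] = 1.0
mask = region_grow(v, seed=(16, 10), thresh=0.5)
```

    The grown region recovers the full 24-pixel vessel from a single interior seed; a low-vesselness gap in the vessel would block this naive rule, which is exactly the failure mode the direction term is meant to bridge.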

  11. Unconventional Word Segmentation in Emerging Bilingual Students' Writing: A Longitudinal Analysis

    ERIC Educational Resources Information Center

    Sparrow, Wendy

    2014-01-01

    This study explores cross-language and longitudinal patterns in unconventional word segmentation in 25 emerging bilingual students' (Spanish/English) writing from first through third grade. Spanish and English writing samples were collected annually and analyzed for two basic types of unconventional word segmentation: hyposegmentation, in…

  12. A Theoretical Analysis of How Segmentation of Dynamic Visualizations Optimizes Students' Learning

    ERIC Educational Resources Information Center

    Spanjers, Ingrid A. E.; van Gog, Tamara; van Merrienboer, Jeroen J. G.

    2010-01-01

    This article reviews studies investigating segmentation of dynamic visualizations (i.e., showing dynamic visualizations in pieces with pauses in between) and discusses two not mutually exclusive processes that might underlie the effectiveness of segmentation. First, cognitive activities needed for dealing with the transience of dynamic…

  13. Dynamic performance modeling and stability analysis of a segmented reflector telescope

    NASA Technical Reports Server (NTRS)

    Ryaciotaki-Boussalis, Helen A.; Briggs, Hugh C.; Ih, Che-Hang CH.

    1991-01-01

    The problem of vibration suppression in segmented reflector telescopes is considered. The decomposition of the structure into smaller components is discussed, and control laws for vibration suppression and conditions for stability at the local and global levels are presented. The states of the reflector segments are mapped into ray displacements on the detector plane.

  14. Understanding the market for geographic information: A market segmentation and characteristics analysis

    NASA Technical Reports Server (NTRS)

    Piper, William S.; Mick, Mark W.

    1994-01-01

    Findings and results from a marketing research study are presented. The report identifies market segments and the product types to satisfy demand in each. An estimate of market size is based on the specific industries in each segment. A sample of ten industries was used in the study. The scientific study covered U.S. firms only.

  15. Image Segmentation and Analysis of Flexion-Extension Radiographs of Cervical Spines

    PubMed Central

    Enikov, Eniko T.

    2014-01-01

    We present a new analysis tool for cervical flexion-extension radiographs based on machine vision and computerized image processing. The method is based on semiautomatic image segmentation leading to detection of common landmarks such as the spinolaminar (SL) line or contour lines of the implanted anterior cervical plates. The technique allows for visualization of the local curvature of these landmarks during flexion-extension experiments. In addition to changes in the curvature of the SL line, it has been found that the cervical plates also deform during flexion-extension examination. While extension radiographs reveal larger curvature changes in the SL line, flexion radiographs on the other hand tend to generate larger curvature changes in the implanted cervical plates. Furthermore, while some lordosis is always present in the cervical plates by design, it actually decreases during extension and increases during flexion. Possible causes of this unexpected finding are also discussed. The described analysis may lead to a more precise interpretation of flexion-extension radiographs, allowing diagnosis of spinal instability and/or pseudoarthrosis in already seemingly fused spines. PMID:27006937
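
    Local curvature along a detected landmark such as the SL line can be estimated from three consecutive points as the reciprocal of their circumradius; a sketch, where the sampled quarter-circle is a stand-in for a real landmark curve:

```python
import numpy as np

def curvature(points):
    """Discrete curvature along a polyline: for each interior point, the
    reciprocal circumradius of the triangle formed with its neighbours,
    k = 4*Area / (|ab| |bc| |ca|). Endpoints are assigned zero."""
    p = np.asarray(points, dtype=float)
    k = np.zeros(len(p))
    for i in range(1, len(p) - 1):
        a, b, c = p[i - 1], p[i], p[i + 1]
        ab, bc, ca = b - a, c - b, a - c
        area2 = abs(ab[0] * (-ca[1]) - ab[1] * (-ca[0]))  # 2 * triangle area
        denom = np.linalg.norm(ab) * np.linalg.norm(bc) * np.linalg.norm(ca)
        k[i] = 2 * area2 / denom if denom else 0.0
    return k

# points sampled on a circle of radius 50 -> curvature should be 1/50
t = np.linspace(0, np.pi / 2, 20)
circle = np.column_stack([50 * np.cos(t), 50 * np.sin(t)])
k = curvature(circle)
```

    Tracking how such a curvature profile changes between the flexion and extension radiographs is the comparison the abstract describes for the SL line and the cervical plates.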

  16. Stereophotogrammetrie Mass Distribution Parameter Determination Of The Lower Body Segments For Use In Gait Analysis

    NASA Astrophysics Data System (ADS)

    Sheffer, Daniel B.; Schaer, Alex R.; Baumann, Juerg U.

    1989-04-01

    Inclusion of mass distribution information in biomechanical analysis of motion is a requirement for the accurate calculation of external moments and forces acting on the segmental joints during locomotion. Regression equations produced from a variety of photogrammetric, anthropometric and cadaveric studies have been developed and espoused in the literature. Because regression equations developed on one population lose accuracy when applied to a different study population, a measurement technique that accurately captures the shape of each individual subject is desirable. This individual data acquisition method is especially needed when analyzing the gait of subjects whose extremity geometry differs greatly from those considered "normal", or who may possess gross asymmetries in shape between their own contralateral limbs. This study presents the photogrammetric acquisition and data analysis methodology used to assess the inertial tensors of two groups of subjects, one with spastic diplegic cerebral palsy and the other considered normal.

  17. Impact of BAC limit reduction on different population segments: a Poisson fixed effect analysis.

    PubMed

    Kaplan, Sigal; Prato, Carlo Giacomo

    2007-11-01

    Over the past few decades, several countries enacted the reduction of the legal blood alcohol concentration (BAC) limit, often alongside administrative license revocation or suspension, to combat drinking-and-driving behavior. Several researchers investigated the effectiveness of these policies by applying different analysis procedures, while assuming population homogeneity in responding to these laws. The present analysis focuses on the evaluation of the impact of BAC limit reduction on different population segments. Poisson regression models, adapted to account for possible observation dependence over time and state-specific effects, are estimated to measure the reduction of the number of alcohol-related accidents and fatalities for single-vehicle accidents in 22 U.S. jurisdictions over a period of 15 years starting in 1990. Model estimates demonstrate that, for alcohol-related single-vehicle crashes, (i) BAC laws are more effective at reducing the number of casualties than the number of accidents, (ii) women and the elderly exhibit higher law compliance than men and the young adult and adult population, respectively, and (iii) the presence of passengers in the vehicle enhances the sense of responsibility of the driver. PMID:17920837
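
    A bare-bones version of the underlying model, a Poisson regression with log link fitted by iteratively reweighted least squares, fits in a few lines of NumPy. The simulated law effect below is invented, and the study's state fixed effects would simply be additional indicator columns of X.

```python
import numpy as np

def poisson_irls(X, y, n_iter=50):
    """Poisson regression (log link) via iteratively reweighted least
    squares: repeatedly solve a weighted least-squares problem with
    working weights mu and working response z."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu          # working response
        W = mu                                 # working weights
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(1)
n = 5000
law = rng.integers(0, 2, n)                    # 1 = after BAC limit reduction
X = np.column_stack([np.ones(n), law])
y = rng.poisson(np.exp(1.0 - 0.3 * law))       # true log-rate drop of 0.3
beta = poisson_irls(X, y)
```

    The fitted coefficient on `law` recovers the simulated effect (about a 26% reduction in the expected crash count), which is the kind of segment-specific effect size the study estimates.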

  18. Parallel runway requirement analysis study. Volume 1: The analysis

    NASA Technical Reports Server (NTRS)

    Ebrahimi, Yaghoob S.

    1993-01-01

    The correlation of increased flight delays with the level of aviation activity is well recognized. A main contributor to these flight delays has been the capacity of airports. Though new airport and runway construction would significantly increase airport capacity, few programs of this type are currently underway, let alone planned, because of the high cost associated with such endeavors. Therefore, it is necessary to achieve the most efficient and cost-effective use of existing fixed airport resources through better planning and control of traffic flows. In fact, during the past few years the FAA has initiated such an airport capacity program designed to provide additional capacity at existing airports. Some of the improvements that the program has generated thus far have been based on new Air Traffic Control procedures, terminal automation, additional Instrument Landing Systems, improved controller display aids, and improved utilization of multiple runways/Instrument Meteorological Conditions (IMC) approach procedures. A useful element to understanding potential operational capacity enhancements at high-demand airports has been the development and use of an analysis tool called The PLAND_BLUNDER (PLB) Simulation Model. The objective for building this simulation was to develop a parametric model that could be used for analysis in determining the minimum safety level of parallel runway operations for various parameters representing the airplane, navigation, surveillance, and ATC system performance. This simulation is useful as: a quick and economical evaluation of existing environments that are experiencing IMC delays, an efficient way to study and validate proposed procedure modifications, an aid in evaluating requirements for new airports or new runways in old airports, a simple, parametric investigation of a wide range of issues and approaches, an ability to tradeoff air and ground technology and procedures contributions, and a way of considering probable

  19. Analysis of Entanglement Length and Segmental Order Parameter in Polymer Networks

    NASA Astrophysics Data System (ADS)

    Lang, M.; Sommer, J.-U.

    2010-04-01

    The tube model of entangled chains is applied to compute segment fluctuations and segmental orientational order in polymer networks. The entanglement length Ne is extracted directly from monomer fluctuations without constructing a primitive path. Sliding motion of monomers along the tube axis leads to reduction of segmental order along the chain. For network strands of length N ≫ Ne, the average segmental order decreases as (Ne N)^(-1/2), in marked contrast to the 1/Ne contribution of entanglements to network elasticity. As a consequence, network modulus is not proportional to segmental order in entangled polymer networks. Monte Carlo simulations over a wide range of molecular weights are in quantitative agreement with our theoretical predictions. The impact of entanglements on these properties is directly tested by comparing with simulations where entanglement constraints are switched off.

  20. Automated condition-invariable neurite segmentation and synapse classification using textural analysis-based machine-learning algorithms.

    PubMed

    Kandaswamy, Umasankar; Rotman, Ziv; Watt, Dana; Schillebeeckx, Ian; Cavalli, Valeria; Klyachko, Vitaly A

    2013-02-15

    High-resolution live-cell imaging studies of neuronal structure and function are characterized by large variability in image acquisition conditions due to background and sample variations as well as low signal-to-noise ratio. The lack of automated image analysis tools that can be generalized for varying image acquisition conditions represents one of the main challenges in the field of biomedical image analysis. Specifically, segmentation of the axonal/dendritic arborizations in brightfield or fluorescence imaging studies is extremely labor-intensive and still performed mostly manually. Here we describe a fully automated machine-learning approach based on textural analysis algorithms for segmenting neuronal arborizations in high-resolution brightfield images of live cultured neurons. We compare the performance of our algorithm to manual segmentation and show that it achieves 90% accuracy, with similarly high levels of specificity and sensitivity. Moreover, the algorithm maintains high performance levels under a wide range of image acquisition conditions, indicating that it is largely condition-invariable. We further describe an application of this algorithm to fully automated synapse localization and classification in fluorescence imaging studies based on synaptic activity. The textural analysis-based machine-learning approach thus offers a high-performance, condition-invariable tool for automated neurite segmentation. PMID:23261652
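
    The simplest possible textural feature, local variance in a sliding window, already separates textured foreground from flat background; the paper uses richer texture features and a trained classifier, so the sketch below (with an invented image and threshold) is only an analogue.

```python
import numpy as np
from scipy import ndimage as ndi

def texture_segment(img, win=5, thresh=0.01):
    """Segment 'textured' regions by one textural feature, the local
    variance in a win x win window, thresholded per pixel. Variance is
    computed as E[x^2] - E[x]^2 via two uniform filters."""
    mean = ndi.uniform_filter(img, win)
    sq_mean = ndi.uniform_filter(img ** 2, win)
    local_var = sq_mean - mean ** 2
    return local_var > thresh

rng = np.random.default_rng(6)
img = np.full((40, 40), 0.5) + rng.normal(0, 0.005, (40, 40))  # flat background
img[:, 20:] += rng.normal(0, 0.2, (40, 20))                     # textured half
mask = texture_segment(img)
```

    Note that the feature responds to texture rather than brightness, which is what makes texture-based segmentation comparatively robust to the acquisition-condition changes the abstract emphasizes.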

  2. Bivariate segmentation of SNP-array data for allele-specific copy number analysis in tumour samples

    PubMed Central

    2013-01-01

    Background SNP arrays output two signals that reflect the total genomic copy number (LRR) and the allelic ratio (BAF), which in combination allow the characterisation of allele-specific copy numbers (ASCNs). While methods based on hidden Markov models (HMMs) have been extended from array comparative genomic hybridisation (aCGH) to jointly handle the two signals, only one method based on change-point detection, ASCAT, performs bivariate segmentation. Results In the present work, we introduce a generic framework for bivariate segmentation of SNP array data for ASCN analysis. To this end, we discuss the characteristics of the typically applied BAF transformation and how they affect segmentation, introduce concepts of multivariate time series analysis that are of concern in this field, and discuss the appropriate formulation of the problem. The framework is implemented in a method named CnaStruct, the bivariate form of the structural change model (SCM), which has been successfully applied to transcriptome mapping and aCGH. Conclusions On a comprehensive synthetic dataset, we show that CnaStruct outperforms the segmentation of existing ASCN analysis methods. Furthermore, CnaStruct can be integrated into the workflows of several ASCN analysis tools in order to improve their performance, especially on tumour samples highly contaminated by normal cells. PMID:23497144
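
    The core of bivariate change-point segmentation, choosing a breakpoint that jointly minimises the within-segment squared error of both signals, reduces in the single-split case to a short exhaustive search. The simulated LRR/BAF step below is illustrative only.

```python
import numpy as np

def best_changepoint(a, b):
    """Single change-point minimising the summed within-segment squared
    error of two signals jointly -- the bivariate idea behind segmenting
    (LRR, BAF) together, reduced to one split."""
    n = len(a)
    best_k, best_cost = None, np.inf
    for k in range(2, n - 1):
        cost = 0.0
        for s in (a, b):
            cost += ((s[:k] - s[:k].mean()) ** 2).sum()
            cost += ((s[k:] - s[k:].mean()) ** 2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

rng = np.random.default_rng(2)
lrr = np.r_[rng.normal(0.0, 0.05, 100), rng.normal(0.4, 0.05, 100)]   # copy gain
baf = np.r_[rng.normal(0.5, 0.02, 100), rng.normal(0.67, 0.02, 100)]  # allelic shift
k = best_changepoint(lrr, baf)
```

    Pooling both signals in one cost is what lets a breakpoint that is weak in LRR but clear in BAF (or vice versa) still be detected, which is the advantage over segmenting each track separately.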

  3. Motion analysis of knee joint using dynamic volume images

    NASA Astrophysics Data System (ADS)

    Haneishi, Hideaki; Kohno, Takahiro; Suzuki, Masahiko; Moriya, Hideshige; Mori, Sin-ichiro; Endo, Masahiro

    2006-03-01

    Acquisition and analysis of three-dimensional movement of the knee joint is desired in orthopedic surgery. We have developed two methods to obtain dynamic volume images of the knee joint. One is a 2D/3D registration method combining bi-plane dynamic X-ray fluoroscopy with a static three-dimensional CT; the other uses so-called 4D-CT with a cone beam and a wide 2D detector. In this paper, we present two analyses of knee joint movement obtained by these methods: (1) the transition of the nearest points between femur and tibia, and (2) principal component analysis (PCA) of six parameters representing the three-dimensional movement of the knee. As preprocessing for the analysis, the femur and tibia regions are first extracted from the volume data at each time frame, and then the tibia is registered between frames by an affine transformation consisting of rotation and translation. The same transformation is applied to the femur as well. Using those image data, the movement of the femur relative to the tibia can be analyzed. Six movement parameters of the femur, consisting of three translation parameters and three rotation parameters, are obtained from those images. In analysis (1), the axis of each bone is first found and the flexion angle of the knee joint is calculated. For each flexion angle, the minimum distance between femur and tibia and the location giving that minimum distance are found in both the lateral and medial condyles. As a result, it was observed that the lateral condyle moves more than the medial condyle. In analysis (2), it was found that the movement of the knee can be represented by the first three principal components with a precision of 99.58%, and those three components appear to correspond closely to the three major movements of the femur in the knee bend known in orthopedic surgery.
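
    The PCA step (analysis (2)) amounts to an SVD of the centered frames-by-parameters matrix. The coupled toy trajectory below is invented to mimic a dominant flexion-like component; it is not the study's data.

```python
import numpy as np

def pca_explained(data):
    """PCA of a (frames x parameters) motion matrix via SVD on the
    centered data; returns the cumulative fraction of variance explained
    by the leading components."""
    Xc = data - data.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    var = s ** 2
    return np.cumsum(var) / var.sum()

# toy knee trajectory: 3 rotations + 3 translations over 50 frames,
# all coupled to one flexion-like degree of freedom plus small noise
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 50)
flexion = 60 * t                      # degrees
data = np.column_stack([
    flexion, 0.2 * flexion, 0.1 * flexion,  # coupled rotations
    5 * t, 2 * t, 1 * t,                    # coupled translations (mm)
]) + rng.normal(0, 0.1, (50, 6))
cum = pca_explained(data)
```

    Because the six toy parameters are driven by a single degree of freedom, the first component alone explains nearly all variance, a sharper version of the 99.58%-in-three-components result above.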

  4. Change Detection and Land Use / Land Cover Database Updating Using Image Segmentation, GIS Analysis and Visual Interpretation

    NASA Astrophysics Data System (ADS)

    Mas, J.-F.; González, R.

    2015-08-01

    This article presents a hybrid method that combines image segmentation, GIS analysis, and visual interpretation in order to detect discrepancies between an existing land use/cover map and satellite images, and to assess land use/cover changes. It was applied to the elaboration of a multidate land use/cover database of the State of Michoacán, Mexico, using SPOT and Landsat imagery. The method was first applied to improve the resolution of an existing 1:250,000 land use/cover map produced through the visual interpretation of 2007 SPOT images. A segmentation of the 2007 SPOT images was carried out to create spectrally homogeneous objects with a minimum area of two hectares. Through an overlay operation with the outdated map, each segment received the "majority" category from the map. Furthermore, spectral indices of the SPOT image were calculated for each band and each segment, so that each segment was characterized by both the images (spectral indices) and the map (class label). To detect uncertain areas, which present a discrepancy between spectral response and class label, multivariate trimming, which consists of truncating a distribution at its least likely values, was applied. Segments that behaved like outliers were detected and labeled as "uncertain", and a probable alternative category was determined by means of a digital classification using a decision tree classification algorithm. The segments were then visually inspected in the SPOT image and high-resolution imagery to assign a final category. The same procedure was applied to update the map to 2014 using Landsat imagery. As a final step, an accuracy assessment was carried out using verification sites selected from a stratified random sampling and visually interpreted using high-resolution imagery and ground truth.
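
    Multivariate trimming can be approximated by flagging the samples of a class with the largest Mahalanobis distances to the class mean; segments flagged this way are the "uncertain" candidates whose spectra disagree with their map label. The feature values and quantile below are simulated assumptions.

```python
import numpy as np

def mahalanobis_outliers(features, trim_quantile=0.95):
    """Flag the least likely samples of a class: Mahalanobis distance to
    the class mean, trimmed at the given quantile -- a simple form of
    multivariate trimming."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)
    inv = np.linalg.inv(cov)
    d = features - mu
    md2 = np.einsum('ij,jk,ik->i', d, inv, d)   # squared distances
    return md2 > np.quantile(md2, trim_quantile)

rng = np.random.default_rng(4)
inliers = rng.normal(0, 1, (200, 3))        # segments matching their label
mislabels = rng.normal(6, 1, (10, 3))       # segments with a wrong label
flags = mahalanobis_outliers(np.vstack([inliers, mislabels]), 0.95)
```

    The flagged segments would then go to the decision-tree reclassification and visual inspection steps described above.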

  5. Using Paleoseismic Trenching and LiDAR Analysis to Evaluate Rupture Propagation Through Segment Boundaries of the Central Wasatch Fault Zone, Utah

    NASA Astrophysics Data System (ADS)

    Bennett, S. E. K.; DuRoss, C. B.; Reitman, N. G.; Devore, J. R.; Hiscock, A.; Gold, R. D.; Briggs, R. W.; Personius, S. F.

    2014-12-01

    Paleoseismic data near fault segment boundaries constrain the extent of past surface ruptures and the persistence of rupture termination at segment boundaries. Paleoseismic evidence for large (M≥7.0) earthquakes on the central Holocene-active fault segments of the 350-km-long Wasatch fault zone (WFZ) generally supports single-segment ruptures but also permits multi-segment rupture scenarios. The extent and frequency of ruptures that span segment boundaries remains poorly known, adding uncertainty to seismic hazard models for this populated region of Utah. To address these uncertainties we conducted four paleoseismic investigations near the Salt Lake City-Provo and Provo-Nephi segment boundaries of the WFZ. We examined an exposure of the WFZ at Maple Canyon (Woodland Hills, UT) and excavated the Flat Canyon trench (Salem, UT), 7 and 11 km, respectively, from the southern tip of the Provo segment. We document evidence for at least five earthquakes at Maple Canyon and four to seven earthquakes that post-date mid-Holocene fan deposits at Flat Canyon. These earthquake chronologies will be compared to seven earthquakes observed in previous trenches on the northern Nephi segment to assess rupture correlation across the Provo-Nephi segment boundary. To assess rupture correlation across the Salt Lake City-Provo segment boundary we excavated the Alpine trench (Alpine, UT), 1 km from the northern tip of the Provo segment, and the Corner Canyon trench (Draper, UT) 1 km from the southern tip of the Salt Lake City segment. We document evidence for six earthquakes at both sites. Ongoing geochronologic analysis (14C, optically stimulated luminescence) will constrain earthquake chronologies and help identify through-going ruptures across these segment boundaries. Analysis of new high-resolution (0.5m) airborne LiDAR along the entire WFZ will quantify latest Quaternary displacements and slip rates and document spatial and temporal slip patterns near fault segment boundaries.

  6. Emotions in vowel segments of continuous speech: analysis of the glottal flow using the normalised amplitude quotient.

    PubMed

    Airas, Matti; Alku, Paavo

    2006-01-01

    Emotions in short vowel segments of continuous speech were analysed using inverse filtering and a recently developed glottal flow parameter, the normalised amplitude quotient (NAQ). Simulated emotion portrayals were produced by 9 professional stage actors. Separated /a:/ vowel segments were inverse filtered and parameterized using NAQ. Statistical analyses showed significant differences among most of the emotions studied. Results also demonstrated clear gender differences. Inverse filtering, together with NAQ, was shown to be a promising method for the analysis of emotional content in continuous speech. PMID:16514274
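
    NAQ itself is a two-number ratio per glottal cycle, NAQ = f_ac / (d_peak · T), where f_ac is the peak-to-peak flow amplitude, d_peak the magnitude of the negative peak of the flow derivative, and T the fundamental period. A sketch on a synthetic half-sine glottal pulse (the pulse shape, sampling rate, and F0 are assumptions, not the study's data):

```python
import numpy as np

fs, f0 = 8000, 100                 # sampling rate (Hz), fundamental (Hz)
T = 1.0 / f0

# one synthetic glottal flow cycle: half-sine open phase, then closure
open_samples = 60
pulse = np.sin(np.pi * np.arange(open_samples + 1) / open_samples)
flow = np.concatenate([pulse, np.zeros(fs // f0 - open_samples - 1)])

f_ac = flow.max() - flow.min()            # peak-to-peak flow amplitude
d_peak = -(np.diff(flow) * fs).min()      # |negative peak| of the derivative
naq_value = f_ac / (d_peak * T)
```

    In real use, the flow comes from inverse filtering each /a:/ segment; steeper glottal closure (tenser phonation) raises d_peak and lowers NAQ, which is why the parameter discriminates emotional voice qualities.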

  7. Volume analysis of heat-induced cracks in human molars: A preliminary study

    PubMed Central

    Sandholzer, Michael A.; Baron, Katharina; Heimel, Patrick; Metscher, Brian D.

    2014-01-01

    Context: Only a few methods have been published dealing with the visualization of heat-induced cracks inside bones and teeth. Aims: As a novel approach this study used nondestructive X-ray microtomography (micro-CT) for volume analysis of heat-induced cracks to observe the reaction of human molars to various levels of thermal stress. Materials and Methods: Eighteen clinically extracted third molars were rehydrated and burned under controlled temperatures (400, 650, and 800°C) using an electric furnace with a heating rate of 25°C/min. The subsequent high-resolution scans (voxel size 17.7 μm) were made with a compact micro-CT scanner (SkyScan 1174). In total, 14 scans were automatically segmented with Definiens XD Developer 1.2 and three-dimensional (3D) models were computed with Visage Imaging Amira 5.2.2. The results of the automated segmentation were analyzed with an analysis of variance (ANOVA) and uncorrected post hoc least significant difference (LSD) tests using Statistical Package for Social Sciences (SPSS) 17. A probability level of P < 0.05 was used as an index of statistical significance. Results: A temperature-dependent increase of heat-induced cracks was observed between the three temperature groups (P < 0.05, ANOVA post hoc LSD). In addition, the distributions and shape of the heat-induced changes could be classified using the computed 3D models. Conclusion: The macroscopic heat-induced changes observed in this preliminary study correspond with previous observations of unrestored human teeth, yet the current observations also take into account the entire microscopic 3D expansions of heat-induced cracks within the dental hard tissues. Using the same experimental conditions proposed in the literature, this study confirms previous results, adds new observations, and offers new perspectives in the investigation of forensic evidence. PMID:25125923
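
    The statistical design here (one-way ANOVA followed by uncorrected pairwise comparisons) can be sketched with SciPy on hypothetical crack-volume data. The group values below are invented; note also that a textbook LSD test uses the pooled ANOVA error term, whereas plain two-sample t-tests are used here as an approximation.

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical heat-induced crack volumes (mm^3) per temperature group
groups = {
    "400C": rng.normal(1.0, 0.3, 6),
    "650C": rng.normal(2.5, 0.4, 6),
    "800C": rng.normal(4.0, 0.5, 6),
}

# Omnibus one-way ANOVA across the three groups
F, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {F:.1f}, p = {p:.2e}")

# Uncorrected (LSD-style) pairwise t-tests, run only after a significant ANOVA
if p < 0.05:
    for (a, va), (b, vb) in combinations(groups.items(), 2):
        p_pair = stats.ttest_ind(va, vb).pvalue
        print(f"{a} vs {b}: p = {p_pair:.4f}")
```

    Gating the pairwise tests on a significant omnibus ANOVA is what justifies leaving the pairwise p-values uncorrected in Fisher's LSD procedure.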

  8. Edge preserving smoothing and segmentation of 4-D images via transversely isotropic scale-space processing and fingerprint analysis

    SciTech Connect

    Reutter, Bryan W.; Algazi, V. Ralph; Gullberg, Grant T; Huesman, Ronald H.

    2004-01-19

    Enhancements are described for an approach that unifies edge preserving smoothing with segmentation of time sequences of volumetric images, based on differential edge detection at multiple spatial and temporal scales. Potential applications of these 4-D methods include segmentation of respiratory gated positron emission tomography (PET) transmission images to improve accuracy of attenuation correction for imaging heart and lung lesions, and segmentation of dynamic cardiac single photon emission computed tomography (SPECT) images to facilitate unbiased estimation of time-activity curves and kinetic parameters for left ventricular volumes of interest. Improved segmentation of lung surfaces in simulated respiratory gated cardiac PET transmission images is achieved with a 4-D edge detection operator composed of edge preserving 1-D operators applied in various spatial and temporal directions. Smoothing along the axis of a 1-D operator is driven by structure separation seen in the scale-space fingerprint, rather than by image contrast. Spurious noise structures are reduced with use of small-scale isotropic smoothing in directions transverse to the 1-D operator axis. Analytic expressions are obtained for directional derivatives of the smoothed, edge preserved image, and the expressions are used to compose a 4-D operator that detects edges as zero-crossings in the second derivative in the direction of the image intensity gradient. Additional improvement in segmentation is anticipated with use of multiscale transversely isotropic smoothing and a novel interpolation method that improves the behavior of the directional derivatives. The interpolation method is demonstrated on a simulated 1-D edge and incorporation of the method into the 4-D algorithm is described.
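
    The core idea of detecting edges as zero-crossings of the second derivative where the gradient is strong can be shown in one dimension. This is a hedged sketch on a synthetic noisy step, not the paper's 4-D operator; all values are invented.

```python
import numpy as np

def gaussian_kernel(sigma: float) -> np.ndarray:
    r = int(4 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

# Synthetic noisy 1-D step edge at index 50
rng = np.random.default_rng(1)
signal = np.where(np.arange(100) < 50, 0.0, 1.0) + rng.normal(0, 0.05, 100)

# Smooth, then locate the edge as the zero-crossing of the second
# derivative at which the gradient magnitude is largest
smoothed = np.convolve(signal, gaussian_kernel(2.0), mode="same")
d1 = np.gradient(smoothed)
d2 = np.gradient(d1)
zc = np.where(np.diff(np.sign(d2)) != 0)[0]   # zero-crossing candidates
zc = zc[(zc > 8) & (zc < 91)]                 # ignore zero-padding artefacts
edge = int(zc[np.argmax(np.abs(d1[zc]))])     # strongest-gradient crossing
print(edge)
```

    The 4-D method generalises this by taking the second derivative along the image-intensity gradient direction and driving the smoothing from the scale-space fingerprint rather than a fixed sigma.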

  9. Comparative analysis of operational forecasts versus actual weather conditions in airline flight planning, volume 4

    NASA Technical Reports Server (NTRS)

    Keitz, J. F.

    1982-01-01

    The impact of more timely and accurate weather data on airline flight planning with the emphasis on fuel savings is studied. This volume of the report discusses the results of Task 4 of the four major tasks included in the study. Task 4 uses flight plan segment wind and temperature differences as indicators of dates and geographic areas for which significant forecast errors may have occurred. An in-depth analysis is then conducted for the days identified. The analysis shows that significant errors occur in the operational forecast on 15 of the 33 arbitrarily selected days included in the study. Wind speeds in an area of maximum winds are underestimated by at least 20 to 25 kts. on 14 of these days. The analysis also shows that there is a tendency to repeat the same forecast errors from prog to prog. Also, some perceived forecast errors from the flight plan comparisons could not be verified by visual inspection of the corresponding National Meteorological Center forecast and analyses charts, and it is likely that they are the result of weather data interpolation techniques or some other data processing procedure in the airlines' flight planning systems.

  10. a New Framework for Object-Based Image Analysis Based on Segmentation Scale Space and Random Forest Classifier

    NASA Astrophysics Data System (ADS)

    Hadavand, A.; Saadatseresht, M.; Homayouni, S.

    2015-12-01

    In this paper a new object-based framework is developed for automated scale selection in image segmentation. The quality of image objects has an important impact on further analyses. Because segmentation results depend strongly on the scale parameter, choosing the best value of this parameter for each class is a central challenge in object-based image analysis. We propose a framework that employs a pixel-based land cover map to estimate the initial scale dedicated to each class. These scales are used to build a segmentation scale space (SSS), a hierarchy of image objects. Optimizing the SSS with respect to the NDVI and DSM values in each super-object yields the best scale in local regions of the image scene. The optimized SSS segmentations are finally classified to produce the final land cover map. A very high resolution aerial image and a digital surface model provided by the ISPRS 2D semantic labelling dataset are used in our experiments. The results of our proposed method are comparable to those of the ESP tool, a well-known method for estimating segmentation scale, and marginally improve the overall classification accuracy from 79% to 80%.
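
    The ESP-style strategy of scanning scales and watching the rate of change of local variance can be sketched as follows. A toy two-region image and growing window sizes stand in for real segmentation scales; all names and values are hypothetical and this is not the ESP tool itself.

```python
import numpy as np
from scipy import ndimage

def local_variance(a: np.ndarray, size: int) -> np.ndarray:
    """Windowed variance via E[x^2] - E[x]^2 with a uniform filter."""
    m = ndimage.uniform_filter(a, size)
    return ndimage.uniform_filter(a * a, size) - m * m

rng = np.random.default_rng(2)
# Hypothetical single-band scene: two regions with different mean reflectance
img = np.concatenate([rng.normal(0, 1, (64, 128)),
                      rng.normal(5, 1, (64, 128))], axis=0)

# Mean local variance (LV) of the scene at increasing "scales" (window
# sizes), and its rate of change; peaks in the rate of change suggest
# scales at which new, larger image structures are being captured
scales = list(range(3, 31, 2))
lv = [local_variance(img, s).mean() for s in scales]
roc = [100.0 * (b - a) / b for a, b in zip(lv, lv[1:])]
best = scales[1:][int(np.argmax(roc))]
print(best)
```

    In the real framework each land-cover class gets its own candidate scale, seeded from the pixel-based map rather than from a global curve like this one.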

  11. Structural and functional analysis of transmembrane segment IV of the salt tolerance protein Sod2.

    PubMed

    Ullah, Asad; Kemp, Grant; Lee, Brian; Alves, Claudia; Young, Howard; Sykes, Brian D; Fliegel, Larry

    2013-08-23

    Sod2 is the plasma membrane Na(+)/H(+) exchanger of the fission yeast Schizosaccharomyces pombe. It provides salt tolerance by removing excess intracellular sodium (or lithium) in exchange for protons. We examined the role of amino acid residues of transmembrane segment IV (TM IV) ((126)FPQINFLGSLLIAGCITSTDPVLSALI(152)) in activity by using alanine scanning mutagenesis and examining salt tolerance in sod2-deficient S. pombe. Two amino acids were critical for function. Mutations T144A and V147A resulted in defective proteins that did not confer salt tolerance when reintroduced into S. pombe. Sod2 protein with other alanine mutations in TM IV had little or no effect. T144D and T144K mutant proteins were inactive; however, a T144S protein was functional and provided lithium, but not sodium, tolerance and transport. Analysis of sensitivity to trypsin indicated that the mutations caused a conformational change in the Sod2 protein. We expressed and purified TM IV (amino acids 125-154). NMR analysis yielded a model with two helical regions (amino acids 128-142 and 147-154) separated by an unwound region (amino acids 143-146). Molecular modeling of the entire Sod2 protein suggested that TM IV has a structure similar to that deduced by NMR analysis and an overall structure similar to that of Escherichia coli NhaA. TM IV of Sod2 has similarities to TM V of the Zygosaccharomyces rouxii Na(+)/H(+) exchanger and TM VI of isoform 1 of mammalian Na(+)/H(+) exchanger. TM IV of Sod2 is critical to transport and may be involved in cation binding or conformational changes of the protein. PMID:23836910

  12. A comparison between handgrip strength, upper limb fat free mass by segmental bioelectrical impedance analysis (SBIA) and anthropometric measurements in young males

    NASA Astrophysics Data System (ADS)

    Gonzalez-Correa, C. H.; Caicedo-Eraso, J. C.; Varon-Serna, D. R.

    2013-04-01

    The mechanical function and size of a muscle may be closely linked. Handgrip strength (HGS) has been used as a predictor of functional performance. Anthropometric measurements have been made to estimate arm muscle area (AMA) and the physical muscle mass volume of the upper limb (ULMMV). Electrical volume estimation is possible by segmental BIA measurements of fat-free mass (SBIA-FFM), mainly muscle mass. The relationship among these variables is not well established. We aimed to determine whether physical and electrical muscle mass estimations relate to each other and to what extent HGS is related to muscle size measured by both methods in normal or overweight young males. Regression analysis was used to determine the association between these variables. Subjects showed decreased HGS (65.5%), FFM (85.5%), and AMA (74.5%). An acceptable association was found between SBIA-FFM and AMA (r2 = 0.60) and a poorer one between physical and electrical volume (r2 = 0.55). However, a paired Student's t-test and a Bland-Altman plot showed that the physical and electrical models were not interchangeable (pt<0.0001). HGS showed a very weak association with anthropometric (r2 = 0.07) and electrical (r2 = 0.192) ULMMV, showing that muscle mass quantity does not imply muscle strength. Other factors influencing HGS, such as physical training or nutrition, require more research.
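
    The statistical point of this record, that a decent correlation does not make two methods interchangeable, can be reproduced with a paired t-test and Bland-Altman-style limits of agreement on hypothetical paired volume estimates (all numbers below are invented, not the study's data).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical paired estimates of upper-limb muscle volume (cm^3):
# an anthropometric model vs. a segmental-BIA model with a systematic offset
physical = rng.normal(2200, 250, 30)
electrical = physical * 0.90 + rng.normal(0, 80, 30)

# Correlation alone can look acceptable ...
r = stats.pearsonr(physical, electrical)[0]
# ... while a paired t-test and Bland-Altman limits of agreement reveal
# a systematic bias between the two methods
t, p = stats.ttest_rel(physical, electrical)
diff = physical - electrical
loa = (diff.mean() - 1.96 * diff.std(ddof=1),
       diff.mean() + 1.96 * diff.std(ddof=1))
print(f"r^2 = {r**2:.2f}, paired-t p = {p:.1e}, "
      f"LoA = {loa[0]:.0f} to {loa[1]:.0f} cm^3")
```

    A high r² with a paired-t p-value far below 0.05 is exactly the pattern the authors report: the methods agree in rank order but not in level.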

  13. Study on the application of MRF and the D-S theory to image segmentation of the human brain and quantitative analysis of the brain tissue

    NASA Astrophysics Data System (ADS)

    Guan, Yihong; Luo, Yatao; Yang, Tao; Qiu, Lei; Li, Junchang

    2012-01-01

    The spatial information captured by a Markov random field (MRF) model was used in image segmentation; it effectively removes noise and yields more accurate segmentation results. Based on the fuzziness and clustering of pixel grayscale information, the clustering centers of the different tissues and the background in a medical image are found with the fuzzy c-means clustering method. The threshold points for multi-threshold segmentation are then found with a two-dimensional histogram method, and the image is segmented accordingly. Finally, multivariate information is fused on the basis of Dempster-Shafer evidence theory to achieve image fusion and segmentation. This paper combines these three theories to propose a new human brain image segmentation method. Experimental results show that the segmentation is more consistent with human vision and is of vital significance for the accurate analysis of brain tissue.

  14. Magnetic field analysis of Lorentz motors using a novel segmented magnetic equivalent circuit method.

    PubMed

    Qian, Junbing; Chen, Xuedong; Chen, Han; Zeng, Lizhan; Li, Xiaoqing

    2013-01-01

    A simple and accurate method based on the magnetic equivalent circuit (MEC) model is proposed in this paper to predict magnetic flux density (MFD) distribution of the air-gap in a Lorentz motor (LM). In conventional MEC methods, the permanent magnet (PM) is treated as one common source and all branches of MEC are coupled together to become a MEC network. In our proposed method, every PM flux source is divided into three sub-sections (the outer, the middle and the inner). Thus, the MEC of LM is divided correspondingly into three independent sub-loops. As the size of the middle sub-MEC is small enough, it can be treated as an ideal MEC and solved accurately. Combining with decoupled analysis of outer and inner MECs, MFD distribution in the air-gap can be approximated by a quadratic curve, and the complex calculation of reluctances in MECs can be avoided. The segmented magnetic equivalent circuit (SMEC) method is used to analyze a LM, and its effectiveness is demonstrated by comparison with FEA, conventional MEC and experimental results. PMID:23358368
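
    The step of approximating the air-gap MFD distribution by a quadratic curve can be sketched with a least-squares fit. The sample values below are hypothetical stand-ins for flux densities assembled from the three decoupled sub-loop MECs, not data from the paper.

```python
import numpy as np

# Hypothetical air-gap flux-density samples B(x), in tesla, along the gap
rng = np.random.default_rng(4)
x = np.linspace(-10, 10, 21)                      # position along gap (mm)
B = 0.58 - 0.002 * x**2 + rng.normal(0, 0.004, x.size)

# Approximate the MFD distribution by a quadratic curve, as in the SMEC
# method, avoiding explicit reluctance calculations for every position
c2, c1, c0 = np.polyfit(x, B, 2)
rmse = np.sqrt(np.mean((B - np.polyval([c2, c1, c0], x)) ** 2))
print(f"B(x) ~ {c2:.4f}*x^2 + {c1:.4f}*x + {c0:.3f}  (RMSE {rmse:.4f} T)")
```

    A symmetric gap should give a near-zero linear term, so a noticeable c1 in a fit like this would hint at an asymmetry in the magnet sub-sections.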

  15. Movement Analysis of Flexion and Extension of Honeybee Abdomen Based on an Adaptive Segmented Structure

    PubMed Central

    Zhao, Jieliang; Wu, Jianing; Yan, Shaoze

    2015-01-01

    Honeybees (Apis mellifera) curl their abdomens during daily rhythmic activities. It had previously been assumed that honeybees could curl their abdomens freely in any direction. However, an intriguing but less studied feature is the possibly unidirectional abdominal deformation in free-flying honeybees. A high-speed video camera was used to capture the curling and to analyze changes in the arc length of the honeybee abdomen, both in free flight and in fixed samples. Frozen sections and environmental scanning electron microscopy were used to investigate the microstructure and motion principle of the honeybee abdomen and to explore the physical structures restricting its curling. An adaptive segmented structure, in particular the folded intersegmental membrane (FIM), plays a dominant role in the flexion and extension of the abdomen. The structural features of the FIM were used to mimic and demonstrate the movement restrictions of the honeybee abdomen. Combining experimental analysis and theoretical demonstration, a unidirectional bending mechanism of the honeybee abdomen was revealed. This finding offers a new bioinspired perspective for aerospace vehicle design. PMID:26223946

  16. An Empirical Analysis of Indoor Tanners: Implications for Audience Segmentation in Campaigns.

    PubMed

    Kelley, Dannielle E; Noar, Seth M; Myrick, Jessica Gall; Morales-Pico, Brenda; Zeitany, Alexandra; Thomas, Nancy E

    2016-05-01

    Tanning bed use before age 35 has been strongly associated with several types of skin cancer. The current study sought to advance an understanding of audience segmentation for indoor tanning among young women. Panhellenic sorority systems at two universities in the Southeastern United States participated in this study. A total of 1,481 young women took the survey; 421 (28%) had tanned indoors in the previous 12 months and were the focus of the analyses reported in this article. Results suggested two distinct tanner types: regular (n = 60) and irregular (n = 353) tanners. Regular tanners tanned more frequently (M = 36.2 vs. 8.6 times per year) and reported significantly higher positive outcome expectations (p < .001) and lower negative outcome expectations (p < .01) than irregular tanners, among other significant differences. Hierarchical logistic regression analysis revealed several significant (p < .001) predictors of regular tanning type, with tanning dependence emerging as the strongest predictor of this classification (OR = 2.25). Implications for developing anti-tanning messages directed at regular and irregular tanners are discussed. PMID:27115046

  17. phenoVein-A Tool for Leaf Vein Segmentation and Analysis.

    PubMed

    Bühler, Jonas; Rishmawi, Louai; Pflugfelder, Daniel; Huber, Gregor; Scharr, Hanno; Hülskamp, Martin; Koornneef, Maarten; Schurr, Ulrich; Jahnke, Siegfried

    2015-12-01

    Precise measurements of leaf vein traits are an important aspect of plant phenotyping for ecological and genetic research. Here, we present a powerful and user-friendly image analysis tool named phenoVein. It is dedicated to automated segmenting and analyzing of leaf veins in images acquired with different imaging modalities (microscope, macrophotography, etc.), including options for comfortable manual correction. Advanced image filtering emphasizes veins from the background and compensates for local brightness inhomogeneities. The most important traits being calculated are total vein length, vein density, piecewise vein lengths and widths, areole area, and skeleton graph statistics, like the number of branching or ending points. For the determination of vein widths, a model-based vein edge estimation approach has been implemented. Validation was performed for the measurement of vein length, vein width, and vein density of Arabidopsis (Arabidopsis thaliana), proving the reliability of phenoVein. We demonstrate the power of phenoVein on a set of previously described vein structure mutants of Arabidopsis (hemivenata, ondulata3, and asymmetric leaves2-101) compared with wild-type accessions Columbia-0 and Landsberg erecta-0. phenoVein is freely available as open-source software. PMID:26468519
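
    Skeleton graph statistics such as branching and ending points, two of the traits phenoVein reports, can be computed from neighbour counts on a binary skeleton. This sketch uses a tiny hand-made "Y" skeleton rather than phenoVein's own code; the classification rule (1 neighbour = ending point, 3 or more = branching point) is the standard one for 8-connected skeletons.

```python
import numpy as np
from scipy import ndimage

# Tiny hypothetical vein skeleton: a "Y" shape on a binary grid
skel = np.zeros((7, 7), dtype=int)
skel[0:4, 3] = 1                                 # trunk
skel[4, 2], skel[5, 1], skel[6, 0] = 1, 1, 1     # left branch
skel[4, 4], skel[5, 5], skel[6, 6] = 1, 1, 1     # right branch

# Count the 8-connected neighbours of every skeleton pixel
kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
nb = ndimage.convolve(skel, kernel, mode="constant")

ends = int(((skel == 1) & (nb == 1)).sum())      # ending points
branches = int(((skel == 1) & (nb >= 3)).sum())  # branching points
total_len = int(skel.sum())                      # crude skeleton length (px)
print(ends, branches, total_len)
```

    On a real vein skeleton the pixel count would be converted to physical length via the image resolution, and widths estimated separately, as phenoVein does with its model-based edge estimation.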

  19. Breast cancer risk analysis based on a novel segmentation framework for digital mammograms.

    PubMed

    Chen, Xin; Moschidis, Emmanouil; Taylor, Chris; Astley, Susan

    2014-01-01

    The radiographic appearance of breast tissue has been established as a strong risk factor for breast cancer. Here we present a complete machine learning framework for automatic estimation of mammographic density (MD) and robust feature extraction for breast cancer risk analysis. Our framework is able to simultaneously classify the breast region, fatty tissue, pectoral muscle, glandular tissue and nipple region. Integral to our method is the extraction of measures of breast density (as the fraction of the breast area occupied by glandular tissue) and mammographic pattern. A novel aspect of the segmentation framework is that a probability map associated with the label mask is provided, which indicates the level of confidence of each pixel being classified as the current label. The Pearson correlation coefficient between the estimated MD value and the ground truth is 0.8012 (p-value < 0.0001). We demonstrate the capability of our methods to discriminate between women with and without cancer by analyzing the contralateral mammograms of 50 women with unilateral breast cancer, and 50 controls. Using MD we obtained an area under the ROC curve (AUC) of 0.61; however our texture-based measure of mammographic pattern significantly outperforms the MD discrimination with an AUC of 0.70. PMID:25333160
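
    The AUC figures quoted above can be computed without any ROC library via the rank (Mann-Whitney) formulation. The risk scores below are synthetic stand-ins for the paper's MD and texture measures.

```python
import numpy as np

def auc(pos: np.ndarray, neg: np.ndarray) -> float:
    """AUC as the probability that a positive case outranks a negative one
    (the Mann-Whitney formulation; ties count as 0.5)."""
    gt = (pos[:, None] > neg[None, :]).sum()
    eq = (pos[:, None] == neg[None, :]).sum()
    return float((gt + 0.5 * eq) / (pos.size * neg.size))

rng = np.random.default_rng(5)
# Synthetic risk scores: cancer cases score higher on average than controls,
# mimicking the 50-case / 50-control design of the study
cancer = rng.normal(0.6, 0.2, 50)
control = rng.normal(0.4, 0.2, 50)
print(f"AUC = {auc(cancer, control):.2f}")
```

    An AUC of 0.5 would mean the score carries no discriminative information, which is the baseline the paper's 0.61 (MD) and 0.70 (texture) values should be read against.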

  20. Approaches to automatic parameter fitting in a microscopy image segmentation pipeline: An exploratory parameter space analysis

    PubMed Central

    Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas

    2013-01-01

    Introduction: Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis remain open. Methods: In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. Results: This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. Conclusion: The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as they can exhibit several local performance maxima. Hence, optimization strategies that cannot jump out of a local performance maximum, like the hill climbing algorithm, often terminate in one. PMID:23766941
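
    The failure mode described in the conclusion, hill climbing stalling on a local performance maximum, and the usual remedy of random restarts can be sketched on a hypothetical 1-D performance landscape. The function and all values are invented; a real pipeline would evaluate segmentation quality instead.

```python
import numpy as np

def performance(x: float) -> float:
    """Hypothetical stand-in for "segmentation quality vs. parameter":
    a weak local maximum near x ~ 0.56 and the global maximum near x ~ 4.7."""
    return float(np.sin(3 * x) * np.exp(-0.05 * (x - 4) ** 2))

def hill_climb(x0: float, step: float = 0.01) -> float:
    """Greedy ascent: move to the better neighbour until neither improves."""
    x = x0
    while True:
        nxt = max((x - step, x, x + step), key=performance)
        if nxt == x:
            return x
        x = nxt

# A single hill climb from a poor start stalls on the local maximum ...
local = hill_climb(0.2)
# ... while random restarts allow the search to escape it
starts = np.random.default_rng(6).uniform(0.0, 6.0, 20)
best = max((hill_climb(s) for s in starts), key=performance)
print(round(local, 2), round(best, 2))
```

    Genetic algorithms address the same problem through mutation and crossover rather than restarts, which is why the authors prefer them over plain hill climbing.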

  2. Vessel segmentation in 3D spectral OCT scans of the retina

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; Garvin, Mona K.; van Ginneken, Bram; Sonka, Milan; Abràmoff, Michael D.

    2008-03-01

    The latest generation of spectral optical coherence tomography (OCT) scanners is able to image 3D cross-sectional volumes of the retina at a high resolution and high speed. These scans offer a detailed view of the structure of the retina. Automated segmentation of the vessels in these volumes may lead to more objective diagnosis of retinal vascular diseases, including hypertensive retinopathy and retinopathy of prematurity. Additionally, vessel segmentation can allow color fundus images to be registered to these 3D volumes, possibly leading to a better understanding of the structure and localization of retinal structures and lesions. In this paper we present a method for automatically segmenting the vessels in a 3D OCT volume. First, the retina is automatically segmented into multiple layers, using simultaneous segmentation of their boundary surfaces in 3D. Next, a 2D projection of the vessels is produced by only using information from certain segmented layers. Finally, a supervised, pixel classification based vessel segmentation approach is applied to the projection image. We compared the influence of two methods for the projection on the performance of the vessel segmentation on 10 optic nerve head centered 3D OCT scans. The method was trained on 5 independent scans. Using ROC analysis, our proposed vessel segmentation system obtains an area under the curve of 0.970 when compared with the segmentation of a human observer.

  3. Relationship between methamphetamine use history and segmental hair analysis findings of MA users.

    PubMed

    Han, Eunyoung; Lee, Sangeun; In, Sanghwan; Park, Meejung; Park, Yonghoon; Cho, Sungnam; Shin, Junguk; Lee, Hunjoo

    2015-09-01

    The aim of this study was to investigate the relationship between methamphetamine (MA) use history and segmental hair analysis (1 and 3cm sections) and whole hair analysis results in Korean MA users in rehabilitation programs. Hair samples were collected from 26 Korean MA users. Eleven of the 26 subjects used cannabis with MA and two used cocaine, opiates, and MDMA with MA. Self-reported single doses of MA among the 26 subjects ranged from 0.03 to 0.5g per occasion. Concentrations of MA and its metabolite amphetamine (AP) in hair were determined by gas chromatography mass spectrometry (GC/MS) after derivatization. The method used was well validated. Qualitative analysis from all 1cm sections (n=154) revealed a good correlation between positive or negative results for MA in hair and self-reported MA use (69.48%, n=107). In detail, MA results were positive in 66 hair specimens of MA users who reported administering MA, and MA results were negative in 41 hair specimens of MA users who denied MA administration in the corresponding month. Test results were false-negative in 10.39% (n=16) of hair specimens and false-positive in 20.13% (n=31) of hair specimens. In the false-positive cases, MA likely continued to accumulate in hair after cessation; in the false-negative cases, self-reported histories indicated either a small amount of MA use or MA use 5-7 months previously. In terms of quantitative analysis, the concentrations of MA in 1 and 3cm long hair segments and in whole hair samples ranged from 1.03 to 184.98 (mean 22.01), 2.26 to 89.33 (mean 18.71), and 0.91 to 124.49 (mean 15.24)ng/mg, respectively. Ten subjects showed a good correlation between MA use and MA concentration in hair. Correlation coefficient (r) of 7 among 10 subjects ranged from 0.71 to 0.98 (mean 0.85). Four subjects showed a low correlation between MA use and MA concentration in hair. Correlation coefficient (r) of 4 subjects ranged from 0.36 to 0.55. Eleven subjects showed a poor

  4. Flow Analysis on a Limited Volume Chilled Water System

    SciTech Connect

    Zheng, Lin

    2012-07-31

    LANL currently has a limited volume chilled water system for use in a glove box, but the system needs to be updated. Before we start building our new system, a flow analysis is needed to ensure that there are no high flow rates, extreme pressures, or any other hazards involved in the system. In this project the piping system is extremely important to us because it directly affects the overall design of the entire system. The primary components necessary for the chilled water piping system are shown in the design. They include the pipes themselves (perhaps of more than one diameter), the various fittings used to connect the individual pipes to form the desired system, the flow rate control devices (valves), and the pumps that add energy to the fluid. Even the simplest pipe systems are actually quite complex when they are viewed in terms of rigorous analytical considerations. I used an 'exact' analysis and dimensional analysis considerations combined with experimental results for this project. When 'real-world' effects are important (such as viscous effects in pipe flows), it is often difficult or impossible to use only theoretical methods to obtain the desired results. A judicious combination of experimental data with theoretical considerations and dimensional analysis is needed in order to reduce risks to an acceptable level.
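
    A flow analysis of a single pipe run, combining theory with an empirically based friction correlation as the abstract describes, can be sketched with the Darcy-Weisbach equation and the Swamee-Jain explicit approximation to the Colebrook friction factor. All pipe and fluid values below are hypothetical, not the LANL system's.

```python
import math

# Hypothetical straight run of a chilled-water loop
Q = 1.5e-3        # flow rate, m^3/s
D = 0.025         # pipe inner diameter, m
L = 12.0          # pipe length, m
eps = 1.5e-6      # roughness of drawn tubing, m
rho, mu = 1000.0, 1.3e-3   # water near 10 C: density kg/m^3, viscosity Pa*s
g = 9.81

V = Q / (math.pi * D**2 / 4)                 # mean velocity, m/s
Re = rho * V * D / mu                        # Reynolds number
assert Re > 4000, "Swamee-Jain assumes fully turbulent flow"
# Swamee-Jain explicit approximation to the Colebrook equation
f = 0.25 / math.log10(eps / (3.7 * D) + 5.74 / Re**0.9) ** 2
h = f * (L / D) * V**2 / (2 * g)             # Darcy-Weisbach head loss, m
print(f"Re = {Re:.0f}, f = {f:.4f}, head loss = {h:.2f} m")
```

    Fittings and valves would add minor losses on top of this straight-pipe term, and checking Re against the laminar/turbulent threshold is exactly where dimensional analysis enters the procedure.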

  5. Three-dimensional volume analysis of vasculature in engineered tissues

    NASA Astrophysics Data System (ADS)

    YousefHussien, Mohammed; Garvin, Kelley; Dalecki, Diane; Saber, Eli; Helguera, María.

    2013-01-01

    Three-dimensional textural and volumetric image analysis holds great potential in understanding the image data produced by multi-photon microscopy. In this paper, an algorithm that quantitatively analyzes the texture and the morphology of vasculature in engineered tissues is proposed. The investigated 3D artificial tissues consist of Human Umbilical Vein Endothelial Cells (HUVECs) embedded in collagen exposed to two regimes of ultrasound standing wave fields under different pressure conditions. Textural features were evaluated using the normalized Gray-Level Co-occurrence Matrix (GLCM) combined with Gray-Level Run Length Matrix (GLRLM) analysis. To minimize error resulting from any possible volume rotation and to provide a comprehensive textural analysis, an averaged version of nine GLCM and GLRLM orientations is used. To evaluate volumetric features, an automatic threshold using the gray level mean value is utilized. Results show that our analysis is able to differentiate among the exposed samples, due to morphological changes induced by the standing wave fields. Furthermore, we demonstrate that providing more textural parameters than those currently reported in the literature enhances the quantitative understanding of the heterogeneity of artificial tissues.
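
    The GLCM portion of this texture analysis can be sketched in plain NumPy. This toy version builds one co-occurrence matrix per offset (one direction each, unsymmetrised), averages over offsets as the authors average nine 3D orientations, and derives two Haralick-style features; the 4x4 image patch is invented.

```python
import numpy as np

def glcm(img: np.ndarray, dr: int, dc: int, levels: int) -> np.ndarray:
    """Normalised grey-level co-occurrence matrix for one offset (dr, dc)."""
    g = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                g[img[r, c], img[r2, c2]] += 1
    return g / g.sum()

# Hypothetical 4-level quantised texture patch
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])

# Average over several offsets to reduce orientation dependence
offsets = [(0, 1), (1, 0), (1, 1), (1, -1)]
P = sum(glcm(img, dr, dc, 4) for dr, dc in offsets) / len(offsets)

i, j = np.indices(P.shape)
contrast = (P * (i - j) ** 2).sum()           # Haralick contrast
homogeneity = (P / (1 + (i - j) ** 2)).sum()  # inverse difference moment
print(round(contrast, 3), round(homogeneity, 3))
```

    In 3D the offsets become unit displacement vectors through the volume, which is where the nine-orientation average in the paper comes from.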

  6. Biomechanical Analysis of Fusion Segment Rigidity Upon Stress at Both the Fusion and Adjacent Segments: A Comparison between Unilateral and Bilateral Pedicle Screw Fixation

    PubMed Central

    Kim, Ho-Joong; Kang, Kyoung-Tak; Chang, Bong-Soon; Lee, Choon-Ki; Kim, Jang-Woo

    2014-01-01

    Purpose: The purpose of this study was to investigate the effects of unilateral pedicle screw fixation on the fusion segment and the superior adjacent segment after one-segment lumbar fusion using validated finite element models. Materials and Methods: Four L3-4 fusion models were simulated according to the extent of decompression and the method of pedicle screw fixation in L3-4 lumbar fusion. These models included hemi-laminectomy with bilateral pedicle screw fixation in the L3-4 segment (BF-HL model), total laminectomy with bilateral pedicle screw fixation (BF-TL model), hemi-laminectomy with unilateral pedicle screw fixation (UF-HL model), and total laminectomy with unilateral pedicle screw fixation (UF-TL model). In each scenario, intradiscal pressures, annulus stress, and range of motion at the L2-3 and L3-4 segments were analyzed under flexion, extension, lateral bending, and torsional moments. Results: Under the four pure moments, unilateral fixation led to a smaller increase in range of motion at the adjacent segment, but larger motions were noted at the fusion segment (L3-4) in the unilateral fixation (UF-HL and UF-TL) models when compared to bilateral fixation. The maximal von Mises stress showed patterns similar to the range of motion at both the superior adjacent L2-3 segment and the fusion segment. Conclusion: The current study suggests that unilateral pedicle screw fixation seems unable to afford sufficient biomechanical stability in the case of bilateral total laminectomy. Conversely, in the case of hemi-laminectomy, unilateral fixation could be an alternative option, with the potential benefit of reducing stress on the adjacent segment. PMID:25048501

  7. Analysis of human hair to assess exposure to organophosphate flame retardants: Influence of hair segments and gender differences.

    PubMed

    Qiao, Lin; Zheng, Xiao-Bo; Zheng, Jing; Lei, Wei-Xiang; Li, Hong-Fang; Wang, Mei-Huan; He, Chun-Tao; Chen, She-Jun; Yuan, Jian-Gang; Luo, Xiao-Jun; Yu, Yun-Jiang; Yang, Zhong-Yi; Mai, Bi-Xian

    2016-07-01

    Hair is a promising, non-invasive human biomonitoring matrix that can provide insight into retrospective and integral exposure to organic pollutants. In the present study, we measured the concentrations of organophosphate flame retardants (PFRs) in hair and serum samples from university students in Guangzhou, China, and compared the PFR concentrations in female hair segments using paired distal (5-10 cm from the root) and proximal (0-5 cm from the root) samples. PFRs were not detected in the serum samples. All PFRs except tricresyl phosphate (TMPP) and tri-n-propyl phosphate (TPP) were detected in more than half of all hair samples. The concentrations of total PFRs varied from 10.1 to 604 ng/g, with a median of 148 ng/g. Tris(chloroisopropyl) phosphate (TCIPP) and tris(2-ethylhexyl) phosphate (TEHP) were the predominant PFRs in hair. The concentrations of most PFRs in the distal segments were 1.5 to 8.6 times higher than those in the proximal segments (t-test, p < 0.05), which may be due to the longer exposure time of the distal segments to external sources. The values of log(PFR concentration, distal / PFR concentration, proximal) were positively and significantly correlated with the log KOA of the PFRs (p < 0.05, r = 0.68), indicating that PFRs with a higher log KOA tend to accumulate in hair at a higher rate than PFRs with a lower log KOA. When combined segments of female hair were used, significantly higher PFR concentrations were observed in female hair than in male hair. In contrast, female hair exhibited significantly lower PFR concentrations than male hair when the same hair position (0-5 cm from the scalp) was used for both genders. These conflicting results regarding gender differences in hair PFRs highlight the importance of segmental analysis when using hair as an indicator of human exposure to PFRs. PMID:27078091
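The segment-ratio/K_OA relationship reported above is a straightforward correlation test. A sketch with made-up numbers (all values below are illustrative, not the study's measurements):

```python
import numpy as np

# Hypothetical log K_OA values and distal/proximal concentration ratios for
# six PFRs (illustrative numbers only, not the study's data).
log_koa = np.array([7.5, 8.2, 9.1, 9.8, 10.6, 11.3])
ratio = np.array([1.6, 2.1, 3.0, 4.2, 6.5, 8.1])   # C_distal / C_proximal

log_ratio = np.log10(ratio)

# Pearson correlation between log(ratio) and log K_OA, the same style of
# test behind the r = 0.68 result quoted in the abstract.
r = np.corrcoef(log_koa, log_ratio)[0, 1]
print(f"Pearson r = {r:.2f}")
```

With real data, a significance test on r (e.g. a t-test with n-2 degrees of freedom) would accompany the coefficient.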

  8. Breast Tissue 3D Segmentation and Visualization on MRI

    PubMed Central

    Cui, Xiangfei; Sun, Feifei

    2013-01-01

    Tissue segmentation and visualization are useful for breast lesion detection and quantitative analysis. In this paper, a 3D segmentation algorithm based on Kernel-based Fuzzy C-Means (KFCM) is proposed to separate breast MR images into different tissues. Then, an improved volume rendering algorithm based on a new transfer function model is applied to implement 3D breast visualization. Experimental results, shown visually, demonstrate reasonable consistency. PMID:23983676
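A minimal 1-D sketch of kernel fuzzy c-means, the clustering step the paper applies to MR data. The synthetic intensities, Gaussian kernel width, and deterministic initialization below are our assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D "intensities" for two tissue classes -- an illustrative
# stand-in for the 3-D breast MR volumes used in the paper.
x = np.concatenate([rng.normal(40.0, 4.0, 200), rng.normal(120.0, 6.0, 200)])

def kfcm(x, c=2, m=2.0, sigma=30.0, iters=50):
    """Minimal kernel fuzzy c-means with a Gaussian kernel (1-D sketch)."""
    v = np.array([x.min(), x.max()])                  # deterministic init
    for _ in range(iters):
        k = np.exp(-((x[None, :] - v[:, None]) ** 2) / sigma ** 2)  # K(x, v)
        d = (1.0 - k + 1e-12) ** (-1.0 / (m - 1.0))   # kernel "distances"
        u = d / d.sum(axis=0, keepdims=True)          # fuzzy memberships
        w = (u ** m) * k
        v = (w * x[None, :]).sum(axis=1) / w.sum(axis=1)  # center update
    return np.sort(v), u

centers, memberships = kfcm(x)
print(centers)   # centers should settle near the two class means
```

The kernel makes the distance measure robust to outliers relative to plain FCM; in 3-D the same updates run over voxel feature vectors instead of scalars.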

  9. Coal gasification systems engineering and analysis. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Feasibility analyses and systems engineering studies for a 20,000-ton-per-day medium-Btu gas (MBG) coal gasification plant to be built by TVA in Northern Alabama were conducted. Major objectives were as follows: (1) provide design and cost data to support the selection of a gasifier technology and other major plant design parameters, (2) provide design and cost data to support alternate product evaluation, (3) prepare a technology development plan to address areas of high technical risk, and (4) develop schedules, PERT charts, and a work breakdown structure to aid in preliminary project planning. Volume 1 contains a summary of gasification system characterizations. Five gasification technologies were selected for evaluation: Koppers-Totzek, Texaco, Lurgi Dry Ash, Slagging Lurgi, and Babcock and Wilcox. A summary of the trade studies and cost sensitivity analysis is included.

  10. Study of Alternate Space Shuttle Concepts. Volume 2, Part 2: Concept Analysis and Definition

    NASA Technical Reports Server (NTRS)

    1971-01-01

    This is the final report of a Phase A Study of Alternate Space Shuttle Concepts by the Lockheed Missiles & Space Company (LMSC) for the National Aeronautics and Space Administration George C. Marshall Space Flight Center (MSFC). The eleven-month study, which began on 30 June 1970, was to examine the stage-and-one-half and other Space Shuttle configurations and to establish feasibility, performance, cost, and schedules for the selected concepts. This final report consists of four volumes as follows: Volume I - Executive Summary, Volume II - Concept Analysis and Definition, Volume III - Program Planning, and Volume IV - Cost Data. This document is Volume II, Concept Analysis and Definition.

  11. Mutational analysis of cis-acting RNA signals in segment 7 of influenza A virus.

    PubMed

    Hutchinson, Edward C; Curran, Martin D; Read, Eliot K; Gog, Julia R; Digard, Paul

    2008-12-01

    The genomic viral RNA (vRNA) segments of influenza A virus contain specific packaging signals at their termini that overlap the coding regions. To further characterize cis-acting signals in segment 7, we introduced synonymous mutations into the terminal coding regions. Mutation of codons that are normally highly conserved reduced virus growth in embryonated eggs and MDCK cells between 10- and 1,000-fold compared to that of the wild-type virus, whereas similar alterations to nonconserved codons had little effect. In all cases, the growth-impaired viruses showed defects in virion assembly and genome packaging. In eggs, nearly normal numbers of virus particles that in aggregate contained apparently equimolar quantities of the eight segments were formed, but with about fourfold less overall vRNA content than wild-type virions, suggesting that, on average, fewer than eight segments per particle were packaged. Concomitantly, the particle/PFU and segment/PFU ratios of the mutant viruses showed relative increases of up to 300-fold, with the behavior of the most defective viruses approaching that predicted for random segment packaging. Fluorescent staining of infected cells for the nucleoprotein and specific vRNAs confirmed that most mutant virus particles did not contain a full genome complement. The specific infectivity of the mutant viruses produced by MDCK cells was also reduced, but in this system, the mutations also dramatically reduced virion production. Overall, we conclude that segment 7 plays a key role in the influenza A virus genome packaging process, since mutation of as few as 4 nucleotides can dramatically inhibit infectious virus production through disruption of vRNA packaging. PMID:18815307
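The "random segment packaging" benchmark mentioned at the end can be made concrete with elementary counting: if a virion packages 8 vRNA segments drawn independently and uniformly at random from the 8 types, the chance of obtaining one copy of each is 8!/8^8.

```python
from math import factorial

# Probability that 8 independently, uniformly drawn segments cover all
# 8 types exactly once -- the prediction for fully random packaging.
p_complete = factorial(8) / 8 ** 8
print(f"P(all 8 segments) = {p_complete:.4f}")                # ~0.0024
print(f"implied particle/PFU ratio = {1 / p_complete:.0f}")   # ~416
```

That ~400-fold particle/PFU excess is the limiting behavior the most defective mutants approach, versus near-unity ratios expected for selective packaging.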

  12. A geometric analysis of mastectomy incisions: Optimizing intraoperative breast volume

    PubMed Central

    Chopp, David; Rawlani, Vinay; Ellis, Marco; Johnson, Sarah A; Buck, Donald W; Khan, Seema; Bethke, Kevin; Hansen, Nora; Kim, John YS

    2011-01-01

    INTRODUCTION: The advent of acellular dermis-based tissue expander breast reconstruction has placed an increased emphasis on optimizing intraoperative volume. Because skin preservation is a critical determinant of intraoperative volume expansion, a mathematical model was developed to capture the influence of incision dimension on subsequent tissue expander volumes. METHODS: A mathematical equation was developed to calculate breast volume via integration of a geometrically modelled breast cross-section. The equation calculates the volume change associated with skin excised during the mastectomy incision by reducing the arc length of the cross-section. The degree of volume loss is subsequently calculated for excision dimensions ranging from 35 mm to 60 mm. RESULTS: A quadratic relationship between breast volume and the vertical dimension of the mastectomy incision exists, such that incrementally larger incisions lead to a disproportionately greater amount of volume loss. The vertical dimension of the mastectomy incision – more so than the horizontal dimension – is of critical importance to maintain breast volume. Moreover, the predicted volume loss is more profound in smaller breasts and primarily occurs in areas that affect breast projection and ptosis. CONCLUSIONS: The present study is the first to model the relationship between the vertical dimensions of the mastectomy incision and subsequent volume loss. These geometric principles will aid in optimizing intraoperative volume expansion during expander-based breast reconstruction. PMID:22654531
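A toy numeric version of the geometric argument (our own simplified model, not the paper's equation): treat the cross-section as a circular segment over a fixed chord (the breast base), shorten the skin arc by the excision height h, and compute the cross-sectional area lost. The chord and arc lengths below are assumed values.

```python
from math import sin, pi

def theta_from_arc(s, c):
    """Solve s/c = theta / (2 sin(theta/2)) for the central angle by bisection."""
    lo, hi = 1e-6, 2 * pi - 1e-6
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid / (2 * sin(mid / 2)) < s / c:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def segment_area(s, c):
    """Area of a circular-segment cross-section with arc length s over chord c."""
    th = theta_from_arc(s, c)
    r = s / th
    return r * r * (th - sin(th)) / 2.0

c = 140.0    # mm, fixed chord (breast base width) -- assumed value
s0 = 220.0   # mm, intact skin arc length -- assumed value
a0 = segment_area(s0, c)

losses = {}
for h in (35.0, 45.0, 60.0):   # excision sizes spanning the study's 35-60 mm range
    losses[h] = a0 - segment_area(s0 - h, c)
    print(f"h = {h:.0f} mm: cross-sectional area loss = {losses[h]:.0f} mm^2")
```

Even in this crude model, the per-millimetre loss grows with h (the loss curve is convex), echoing the disproportionate effect of larger vertical incisions reported above.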

  13. Sequence analysis of both genome segments of three Croatian infectious bursal disease field viruses.

    PubMed

    Lojkić, I; Bidin, Z; Pokrić, B

    2008-09-01

    In order to determine the mutations responsible for virulence, three Croatian field infectious bursal disease viruses (IBDV), designated Cro-Ig/02, Cro-Po/00, and Cro-Pa/98, were characterized. The coding regions of both genomic segments were sequenced, and the nucleotide and deduced amino acid sequences were compared with previously reported full-length sequenced IBDV strains. Phylogenetic analysis based on the nucleotide and deduced amino acid sequences of the polyprotein and VP1 was performed. Eight characteristic amino acid residues common to very virulent (vv) IBDV were detected on the polyprotein: 222A, 256I, 294I, 451L, 685N, 715S, 751D, and 1005A. All eight were found in Cro-Ig/02 and Cro-Po/00. Cro-Pa/98 had all the characteristics of an attenuated strain, except for glutamine at residue 253, which is common to vv, classical virulent, and variant strains. Between less virulent and vvIBDV, three substitutions were found on VP5: 49 G --> R, 79 --> F, and 137 R --> W. In VP1, there were nine characteristic amino acid residues common to vvIBDV: 146D, 147N, 242E, 390M, 393D, 511S, 562P, 687P, and 695R. All nine residues were found in Cro-Ig/02, and eight were found in Cro-Po/00, which had isoleucine at residue 390. Based on our analyses, isolates Cro-Ig/02 and Cro-Po/00 were classified with the vvIBDV strains. Cro-Pa/98 shared all characteristic amino acid residues with attenuated and classical virulence strains and was classified with them. PMID:18939645

  14. Global Warming’s Six Americas: An Audience Segmentation Analysis (Invited)

    NASA Astrophysics Data System (ADS)

    Roser-Renouf, C.; Maibach, E.; Leiserowitz, A.

    2009-12-01

    One of the first rules of effective communication is to “know thy audience.” People have different psychological, cultural and political reasons for acting - or not acting - to reduce greenhouse gas emissions, and climate change educators can increase their impact by taking these differences into account. In this presentation we will describe six unique audience segments within the American public, each of which responds to the issue in its own distinct way, and we will discuss methods of engaging each. The six audiences were identified using a nationally representative survey of American adults conducted in the fall of 2008 (N=2,164). In two waves of online data collection, the public’s climate change beliefs, attitudes, risk perceptions, values, policy preferences, conservation, and energy-efficiency behaviors were assessed. The data were subjected to latent class analysis, yielding six groups distinguishable on all the above dimensions. The Alarmed (18%) are fully convinced of the reality and seriousness of climate change and are already taking individual, consumer, and political action to address it. The Concerned (33%) - the largest of the Six Americas - are also convinced that global warming is happening and a serious problem, but have not yet engaged with the issue personally. Three other Americas - the Cautious (19%), the Disengaged (12%) and the Doubtful (11%) - represent different stages of understanding and acceptance of the problem, and none are actively involved. The final America - the Dismissive (7%) - are very sure it is not happening and are actively involved as opponents of a national effort to reduce greenhouse gas emissions. Mitigating climate change will require a diversity of messages, messengers and methods that take into account these differences within the American public. The findings from this research can serve as guideposts for educators on the optimal choices for reaching and influencing target groups with varied informational needs.

  15. Analysis of adductors angle measurement in Hammersmith infant neurological examinations using mean shift segmentation and feature point based object tracking.

    PubMed

    Dogra, D P; Majumdar, A K; Sural, S; Mukherjee, J; Mukherjee, S; Singh, A

    2012-09-01

    This paper presents image- and video-analysis-based schemes to automate the process of adductors angle measurement, which is carried out on infants as part of the Hammersmith Infant Neurological Examination (HINE). Image segmentation, thinning, and feature-point-based object tracking are used to automate the analysis. Segmentation outputs are processed with a novel region-merging algorithm. The refined segmentation outputs can successfully be used to extract features in the context of the application under consideration. Next, a heuristic filtering algorithm is applied to the thinned structures to locate the points needed to measure the adductors angle. A semi-automatic scheme based on object tracking in video has been proposed to minimize the errors of the image-based analysis. The video-based analysis is observed to outperform the image-based method. A fully automatic method has also been proposed and compared with the semi-automatic algorithm. The proposed methods have been tested with several videos recorded in hospitals, and the results have been found to be satisfactory in the present context. PMID:22841364
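Once the feature points are located, the final angle computation is elementary vector geometry. A sketch with hypothetical keypoint coordinates (the pixel values below are illustrative, not from the paper):

```python
from math import acos, degrees, sqrt

def angle_deg(vertex, p1, p2):
    """Angle (in degrees) at `vertex` formed by rays toward p1 and p2."""
    ux, uy = p1[0] - vertex[0], p1[1] - vertex[1]
    vx, vy = p2[0] - vertex[0], p2[1] - vertex[1]
    cos_a = (ux * vx + uy * vy) / (sqrt(ux * ux + uy * uy) * sqrt(vx * vx + vy * vy))
    return degrees(acos(max(-1.0, min(1.0, cos_a))))   # clamp for fp safety

# Hypothetical pixel coordinates of the pelvis midpoint and the two knees,
# as might be located on the thinned silhouette (illustrative values only).
pelvis, left_knee, right_knee = (160.0, 90.0), (110.0, 190.0), (215.0, 185.0)
print(f"adductors angle = {angle_deg(pelvis, left_knee, right_knee):.1f} deg")
```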

  16. Tumor Burden Analysis on Computed Tomography by Automated Liver and Tumor Segmentation

    PubMed Central

    Linguraru, Marius George; Richbourg, William J.; Liu, Jianfei; Watt, Jeremy M.; Pamulapati, Vivek; Wang, Shijun; Summers, Ronald M.

    2013-01-01

    The paper presents the automated computation of hepatic tumor burden from abdominal CT images of diseased populations, including images with inconsistent enhancement. The automated segmentation of livers is addressed first. A novel three-dimensional (3D) affine-invariant shape parameterization is employed to compare local shape across organs. By generating a regular sampling of the organ's surface, this parameterization can be effectively used to compare features of a set of closed 3D surfaces point-to-point, while avoiding common problems with the parameterization of concave surfaces. From an initial segmentation of the livers, the areas of atypical local shape are determined using training sets. A geodesic active contour locally corrects the segmentations of the livers in abnormal images. Graph cuts segment the hepatic tumors using shape and enhancement constraints. Liver segmentation errors are reduced significantly and all tumors are detected. Finally, support vector machines and feature selection are employed to reduce the number of false tumor detections. A tumor-detection true-positive fraction of 100% is achieved at 2.3 false positives per case, and the tumor burden is estimated with 0.9% error. Results from the test data demonstrate the method's robustness in analyzing livers from difficult clinical cases, allowing the temporal monitoring of patients with hepatic cancer. PMID:22893379

  17. Stress and strain analysis of contractions during ramp distension in partially obstructed guinea pig jejunal segments

    PubMed Central

    Zhao, Jingbo; Liao, Donghua; Yang, Jian; Gregersen, Hans

    2011-01-01

    Previous studies have demonstrated morphological and biomechanical remodeling in the intestine proximal to an obstruction. The present study aimed to obtain the stress and strain thresholds that initiate contraction, and the maximal contraction stress and strain, in partially obstructed guinea pig jejunal segments. Partial obstruction and sham operations were surgically created in the mid-jejunum of male guinea pigs. Animals survived for 2, 4, 7, or 14 days; unoperated animals served as normal controls. The segments were used for no-load state, zero-stress state, and distension analyses. Each segment was inflated to 10 cmH2O pressure in an organ bath containing 37°C Krebs solution, and the outer diameter change was monitored. The stress and strain at the contraction threshold and at maximum contraction were computed from the diameter, pressure, and zero-stress-state data. Young's modulus was determined at the contraction threshold. The muscle layer thickness in obstructed intestinal segments increased by up to 300%. Compared with the sham-obstructed and normal groups, the contraction stress threshold, the maximum contraction stress, and the Young's modulus at the contraction threshold increased, whereas the strain threshold and maximum contraction strain decreased, after 7 days of obstruction (P < 0.05 and P < 0.01). In conclusion, in the partially obstructed intestinal segments, a larger distension force was needed to evoke contraction, likely due to tissue remodeling. Higher contraction stresses were produced, and the contraction deformation (strain) became smaller. PMID:21632056
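The stress computation can be illustrated with a thin-walled-cylinder approximation, a simplification of the study's zero-stress-state analysis. All numbers below are illustrative:

```python
# Circumferential stress and strain for a distended gut segment modeled as a
# thin-walled cylinder -- a simplification of the study's method, which uses
# measured geometry and the zero-stress state. Numbers are illustrative.

CMH2O_TO_KPA = 0.0980665          # 1 cmH2O in kPa

def hoop_stress_kpa(p_cmh2o, radius_mm, wall_mm):
    """Laplace's law for a thin-walled cylinder: sigma = P * r / h."""
    return p_cmh2o * CMH2O_TO_KPA * radius_mm / wall_mm

def strain(d, d0):
    """Engineering strain of the outer diameter relative to a reference state."""
    return (d - d0) / d0

# 10 cmH2O distension; obstruction thickens the wall ~3x (cf. the ~300%
# muscle-layer increase above), lowering wall stress at a given pressure.
print(hoop_stress_kpa(10.0, radius_mm=4.0, wall_mm=0.5))   # normal wall
print(hoop_stress_kpa(10.0, radius_mm=4.0, wall_mm=1.5))   # hypertrophied wall
print(strain(5.0, 4.0))                                    # 0.25
```

This makes the paper's qualitative finding intuitive: a thicker wall carries less stress per unit pressure, so a larger distension force is needed to reach the contraction-triggering stress.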

  18. Airway segmentation and analysis for the study of mouse models of lung disease using micro-CT

    NASA Astrophysics Data System (ADS)

    Artaechevarria, X.; Pérez-Martín, D.; Ceresa, M.; de Biurrun, G.; Blanco, D.; Montuenga, L. M.; van Ginneken, B.; Ortiz-de-Solorzano, C.; Muñoz-Barrutia, A.

    2009-11-01

    Animal models of lung disease are gaining importance in understanding the underlying mechanisms of diseases such as emphysema and lung cancer. Micro-CT allows in vivo imaging of these models, thus permitting the study of the progression of the disease or the effect of therapeutic drugs in longitudinal studies. Automated analysis of micro-CT images can be helpful to understand the physiology of diseased lungs, especially when combined with measurements of respiratory system input impedance. In this work, we present a fast and robust murine airway segmentation and reconstruction algorithm. The algorithm is based on a propagating fast-marching wavefront that, as it grows, divides the tree into segments. We devised a number of specific rules to guarantee that the front propagates only inside the airways and to avoid leaking into the parenchyma. The algorithm was tested on normal mice, a mouse model of chronic inflammation, and a mouse model of emphysema. A comparison with manual segmentations by two independent observers shows that the specificity and sensitivity values of our method are comparable to the inter-observer variability, and radius measurements of the mainstem bronchi reveal significant differences between healthy and diseased mice. Combining measurements of the automatically segmented airways with the parameters of the constant phase model provides extra information on how disease affects lung function.
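A toy version of the propagating-wavefront idea, on a synthetic CT-like volume, with a crude front-size rule standing in for the paper's specific anti-leak rules (the threshold, volume, and rule are our assumptions):

```python
import numpy as np
from collections import deque

def grow_airway(vol, seed, thresh, max_front=500):
    """Toy wavefront segmentation: breadth-first growth over voxels darker
    than `thresh` (airways are air-filled, hence low CT intensity), aborting
    if one front exceeds max_front voxels -- a crude parenchyma-leak guard."""
    mask = np.zeros(vol.shape, bool)
    mask[seed] = True
    front = deque([seed])
    while front:
        nxt = deque()
        for z, y, x in front:
            for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
                p = (z + dz, y + dy, x + dx)
                if all(0 <= p[i] < vol.shape[i] for i in range(3)) \
                        and not mask[p] and vol[p] < thresh:
                    mask[p] = True
                    nxt.append(p)
        if len(nxt) > max_front:   # sudden expansion: likely leak, stop here
            break
        front = nxt
    return mask

# Synthetic volume: bright tissue with one dark "airway" tube along z.
vol = np.full((40, 32, 32), 100.0)
vol[:, 14:18, 14:18] = -900.0
mask = grow_airway(vol, seed=(0, 15, 15), thresh=-500.0)
print(mask.sum())   # voxels segmented
```

The real algorithm uses a fast-marching front (ordered by arrival time) rather than plain BFS, and splits the tree into labeled segments as the front divides at bifurcations.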

  19. Automatic tumor segmentation using knowledge-based techniques.

    PubMed

    Clark, M C; Hall, L O; Goldgof, D B; Velthuizen, R; Murtagh, F R; Silbiger, M S

    1998-04-01

    A system that automatically segments and labels glioblastoma-multiforme tumors in magnetic resonance images (MRIs) of the human brain is presented. The MRIs consist of T1-weighted, proton density, and T2-weighted feature images and are processed by a system which integrates knowledge-based (KB) techniques with multispectral analysis. Initial segmentation is performed by an unsupervised clustering algorithm. The segmented image, along with cluster centers for each class, is provided to a rule-based expert system which extracts the intracranial region. Multispectral histogram analysis separates suspected tumor from the rest of the intracranial region, with region analysis used in performing the final tumor labeling. This system has been trained on three volume data sets and tested on thirteen unseen volume data sets acquired from a single MRI system. The KB tumor segmentation was compared with supervised, radiologist-labeled "ground truth" tumor volumes and supervised k-nearest-neighbor tumor segmentations. The results of this system generally correspond well to ground truth, both on a per-slice basis and, more importantly, in tracking total tumor volume during treatment over time. PMID:9688151

  20. 3-D volume reconstruction of skin lesions for melanin and blood volume estimation and lesion severity analysis.

    PubMed

    D'Alessandro, Brian; Dhawan, Atam P

    2012-11-01

    Subsurface information about skin lesions, such as the blood volume beneath the lesion, is important for the analysis of lesion severity towards early detection of skin cancer such as malignant melanoma. Depth information can be obtained from diffuse reflectance based multispectral transillumination images of the skin. An inverse volume reconstruction method is presented which uses a genetic algorithm optimization procedure with a novel population initialization routine and nudge operator based on the multispectral images to reconstruct the melanin and blood layer volume components. Forward model evaluation for fitness calculation is performed using a parallel processing voxel-based Monte Carlo simulation of light in skin. Reconstruction results for simulated lesions show excellent volume accuracy. Preliminary validation is also done using a set of 14 clinical lesions, categorized into lesion severity by an expert dermatologist. Using two features, the average blood layer thickness and the ratio of blood volume to total lesion volume, the lesions can be classified into mild and moderate/severe classes with 100% accuracy. The method therefore has excellent potential for detection and analysis of pre-malignant lesions. PMID:22829392

  1. Concepts and analysis for precision segmented reflector and feed support structures

    NASA Technical Reports Server (NTRS)

    Miller, Richard K.; Thomson, Mark W.; Hedgepeth, John M.

    1990-01-01

    Several issues surrounding the design of a large (20-meter diameter) Precision Segmented Reflector are investigated. The concerns include development of a reflector support truss geometry that will permit deployment into the required doubly-curved shape without significant member strains. For deployable and erectable reflector support trusses, the reduction of structural redundancy was analyzed as a means of reducing the weight and complexity of the designs. The stiffness and accuracy of such reduced-member trusses, however, were found to be degraded to an unexpected degree. The Precision Segmented Reflector designs were developed with performance requirements that represent the Reflector application. A novel deployable sunshade concept was developed, and a detailed parametric study of various feed support structural concepts was performed. The results of the detailed study reveal what may be the most desirable feed support structure geometry for Precision Segmented Reflector/Large Deployable Reflector applications.

  2. Analysis of an Externally Radially Cracked Ring Segment Subject to Three-Point Radial Loading

    NASA Technical Reports Server (NTRS)

    Gross, B.; Srawley, J. E.; Shannon, J. L., Jr.

    1983-01-01

    The boundary collocation method was used to generate Mode 1 stress intensity and crack mouth opening displacement coefficients for externally radially cracked ring segments subjected to three point radial loading. Numerical results were obtained for ring segment outer-to-inner radius ratios (R sub o/R sub i) ranging from 1.10 to 2.50 and crack length to segment width ratios (a/W) ranging from 0.1 to 0.8. Stress intensity and crack mouth displacement coefficients were found to depend on the ratios R sub o/R sub i and a/W as well as the included angle between the directions of the reaction forces.

  3. Automated Segmentability Index for Layer Segmentation of Macular SD-OCT Images

    PubMed Central

    Lee, Kyungmoo; Buitendijk, Gabriëlle H.S.; Bogunovic, Hrvoje; Springelkamp, Henriët; Hofman, Albert; Wahle, Andreas; Sonka, Milan; Vingerling, Johannes R.; Klaver, Caroline C.W.; Abràmoff, Michael D.

    2016-01-01

    Purpose To automatically identify which spectral-domain optical coherence tomography (SD-OCT) scans will provide reliable automated layer segmentations for more accurate layer thickness analyses in population studies. Methods Six hundred ninety macular SD-OCT image volumes (6.0 × 6.0 × 2.3 mm3) were obtained from one eye of each of 690 subjects (74.6 ± 9.7 [mean ± SD] years; 37.8% male) randomly selected from the population-based Rotterdam Study. The dataset consisted of 420 OCT volumes with successful automated retinal nerve fiber layer (RNFL) segmentations obtained from our previously reported graph-based segmentation method and 270 volumes with failed segmentations. To evaluate the reliability of the layer segmentations, we have developed a new metric, the segmentability index SI, which is obtained from a random forest regressor based on 12 features using OCT voxel intensities, edge-based costs, and on-surface costs. The SI was compared with the well-known quality indices, quality index (QI) and maximum tissue contrast index (mTCI), using receiver operating characteristic (ROC) analysis. Results The 95% confidence interval (CI) of the area under the curve (AUC) is 0.621 to 0.805 for the QI (AUC 0.713), 0.673 to 0.838 for the mTCI (AUC 0.756), and 0.784 to 0.920 for the SI (AUC 0.852). The SI AUC is significantly larger than either the QI or mTCI AUC (P < 0.01). Conclusions The segmentability index SI is well suited to identify SD-OCT scans for which successful automated intraretinal layer segmentations can be expected. Translational Relevance Interpreting the quantification of SD-OCT images requires the underlying segmentation to be reliable, but standard SD-OCT quality metrics do not predict which segmentations are reliable and which are not. The segmentability index SI presented in this study does allow reliable segmentations to be identified, which is important for more accurate layer thickness analyses in research and population studies. PMID:27066311
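The ROC comparison of SI against QI and mTCI rests on the AUC, which can be computed directly from the Mann-Whitney pair statistic. The scores below are illustrative stand-ins, not the study's data:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    # Fraction of (positive, negative) pairs ranked correctly; ties count 1/2.
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))

# Hypothetical quality scores for scans whose automated segmentation
# succeeded (1) or failed (0) -- illustrative stand-ins for SI/QI/mTCI.
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])
si     = np.array([0.9, 0.8, 0.7, 0.5, 0.6, 0.4, 0.3, 0.2])
print(f"AUC = {auc(si, labels):.3f}")
```

Confidence intervals such as those quoted above are typically obtained by bootstrapping this statistic over resampled scans.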

  4. Analysis of RapidArc optimization strategies using objective function values and dose-volume histograms.

    PubMed

    Oliver, Michael; Gagne, Isabelle; Popescu, Carmen; Ansbacher, Will; Beckham, Wayne A

    2010-01-01

    RapidArc is a novel treatment planning and delivery system that has recently been made available for clinical use. Included within the Eclipse treatment planning system are a number of different optimization strategies that can be employed to improve the quality of the final treatment plan. The purpose of this study is to systematically assess three categories of strategies for four phantoms and then apply proven strategies to clinical head and neck cases. Four phantoms were created within Eclipse with varying shapes and locations for the planning target volumes and organs at risk. A baseline optimization consisting of a single 359.8 degree arc with collimator at 45 degrees was applied to all phantoms. Three categories of strategies were assessed and compared to the baseline strategy: changing the initialization parameters, increasing the total number of control points, and increasing the total optimization time. Optimization log files were extracted from the treatment planning system along with final dose-volume histograms for plan assessment. Treatment plans were also generated for four head and neck patients to determine whether the results for phantom plans can be extended to clinical plans. The strategies that resulted in a significant difference from baseline were: changing the maximum leaf speed prior to optimization (p < 0.05), increasing the total number of segments by adding an arc (p < 0.05), and increasing the total optimization time by either continuing the optimization (p < 0.01) or adding time to the optimization by pausing it (p < 0.01). The reductions in objective function values correlated with improvements in the dose-volume histogram (DVH). The addition-of-arcs and pausing strategies were applied to head and neck cancer cases, which demonstrated similar benefits with respect to the final objective function value and DVH. Analysis of the optimization log files is a useful way to intercompare treatment plans.
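The dose-volume histograms used above for plan assessment can be sketched in a few lines. The synthetic PTV doses and the D95 read-out below are illustrative, not the study's plans:

```python
import numpy as np

def cumulative_dvh(dose, bin_gy=0.5):
    """Cumulative DVH: fraction of structure volume receiving >= each dose level."""
    dose = np.asarray(dose, float).ravel()
    edges = np.arange(0.0, dose.max() + bin_gy, bin_gy)
    frac = np.array([(dose >= d).mean() for d in edges])
    return edges, frac

rng = np.random.default_rng(1)
ptv = rng.normal(60.0, 1.5, 10_000)   # synthetic PTV voxel doses (Gy)
edges, frac = cumulative_dvh(ptv)

# D95: highest tabulated dose level still covering >= 95% of the volume.
d95 = edges[frac >= 0.95][-1]
print(f"D95 = {d95:.1f} Gy")
```

Summary points such as D95 or mean dose read off this curve are what the objective-function improvements in the study translate into clinically.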

  5. Automatic partitioning of head CTA for enabling segmentation

    NASA Astrophysics Data System (ADS)

    Suryanarayanan, Srikanth; Mullick, Rakesh; Mallya, Yogish; Kamath, Vidya; Nagaraj, Nithin

    2004-05-01

    Radiologists perform a CT Angiography procedure to examine vascular structures and associated pathologies such as aneurysms. Volume rendering is used to exploit the volumetric capabilities of CT, which provides complete interactive 3-D visualization. However, bone forms an occluding structure and must be segmented out. The anatomical complexity of the head creates a major challenge in the segmentation of bone and vessel. An analysis of the head volume reveals varying spatial relationships between vessel and bone that can be separated into three sub-volumes: "proximal", "middle", and "distal". The "proximal" and "distal" sub-volumes contain good spatial separation between bone and vessel (carotid referenced here). Bone and vessel appear contiguous in the "middle" partition, which remains the most challenging region for segmentation. The partition algorithm automatically identifies these partition locations so that different segmentation methods can be developed for each sub-volume. The partition locations are computed using bone, image entropy, and sinus profiles along with a rule-based method. The algorithm was validated on 21 cases (varying volume sizes, resolutions, clinical sites, and pathologies) using ground truth identified visually. The algorithm is also computationally efficient, processing a 500+ slice volume in 6 seconds (about 0.01 seconds per slice), which makes it attractive for pre-processing large volumes. The partition algorithm is integrated into the segmentation workflow. Fast and simple algorithms are implemented for processing the "proximal" and "distal" partitions. Complex methods are restricted to only the "middle" partition. The partition-enabled segmentation has been successfully tested and results are shown from multiple cases.
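A much-simplified sketch of the rule-based partitioning: locate the two cut planes from a per-slice bone profile along the scan axis. The real algorithm also uses image entropy and sinus profiles; the synthetic profile and threshold rule here are our assumptions:

```python
import numpy as np

# Synthetic per-slice "bone area" profile along the scan axis: one bone mass
# near the shoulders/neck and a larger one at the skull (illustrative only).
z = np.arange(200)
bone = np.exp(-((z - 40) / 25.0) ** 2) + 1.6 * np.exp(-((z - 150) / 30.0) ** 2)

thresh = 0.35 * bone.max()                 # assumed rule: fraction of peak
inside = np.where(bone > thresh)[0]
# The gap in the above-threshold indices separates the two bone masses;
# its endpoints become the proximal/middle and middle/distal cut planes.
gaps = np.where(np.diff(inside) > 1)[0]
cut1, cut2 = inside[gaps[0]], inside[gaps[0] + 1]
print(f"proximal: 0-{cut1}, middle: {cut1}-{cut2}, distal: {cut2}-199")
```

With the cuts in hand, cheap thresholding handles the well-separated end partitions and the expensive methods run only on the middle sub-volume, which is the source of the reported speedup.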

  6. Comparative analysis of operational forecasts versus actual weather conditions in airline flight planning, volume 3

    NASA Technical Reports Server (NTRS)

    Keitz, J. F.

    1982-01-01

    The impact of more timely and accurate weather data on airline flight planning, with emphasis on fuel savings, is studied. This volume of the report discusses the results of Task 3 of the four major tasks included in the study. Task 3 compares flight plans developed on the Suitland forecast with actual data observed by the aircraft (and averaged over 10-degree segments). The results show that the average difference between the forecast and observed wind speed is 9 kts without considering direction, and the average difference in the component of the forecast wind parallel to the direction of the observed wind is 13 kts - both indicating that the Suitland forecast underestimates the wind speeds. The root mean square (RMS) vector error is 30.1 kts. The average absolute difference in direction between the forecast and observed wind is 26 degrees, and the temperature difference is 3 degrees Centigrade. These results indicate that the forecast model, as well as the verifying analysis used to develop comparison flight plans in Tasks 1 and 2, is a limiting factor, and that the average potential fuel savings or penalty is up to 3.6 percent depending on the direction of flight.
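The statistics quoted above follow mechanically from paired forecast/observed winds. A sketch with hypothetical 10-degree-segment averages (illustrative, not the Task 3 data):

```python
import numpy as np

def wind_errors(fcst_spd, fcst_dir, obs_spd, obs_dir):
    """Forecast-vs-observed wind comparison (speeds in knots, directions in degrees)."""
    fs, fd = np.asarray(fcst_spd, float), np.radians(np.asarray(fcst_dir, float))
    os_, od = np.asarray(obs_spd, float), np.radians(np.asarray(obs_dir, float))
    fu, fv = fs * np.cos(fd), fs * np.sin(fd)      # forecast wind components
    ou, ov = os_ * np.cos(od), os_ * np.sin(od)    # observed wind components
    rms_vec = np.sqrt(np.mean((fu - ou) ** 2 + (fv - ov) ** 2))
    mean_abs_spd = np.mean(np.abs(fs - os_))
    return rms_vec, mean_abs_spd

# Hypothetical per-segment averages (illustrative values only).
rms, spd = wind_errors([80, 95, 60], [250, 260, 240],
                       [92, 101, 75], [262, 255, 230])
print(f"RMS vector error = {rms:.1f} kts, mean |speed difference| = {spd:.1f} kts")
```

The RMS vector error combines speed and direction error in one number, which is why it exceeds the scalar speed difference in the report.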

  7. Automated Forensic Animal Family Identification by Nested PCR and Melt Curve Analysis on an Off-the-Shelf Thermocycler Augmented with a Centrifugal Microfluidic Disk Segment

    PubMed Central

    Zengerle, Roland; von Stetten, Felix; Schmidt, Ulrike

    2015-01-01

    Nested PCR remains a labor-intensive and error-prone biomolecular analysis. Laboratory workflow automation by precise control of minute liquid volumes in centrifugal microfluidic Lab-on-a-Chip systems holds great potential for such applications. However, the majority of these systems require costly custom-made processing devices. Our idea is to augment a standard laboratory device, here a centrifugal real-time PCR thermocycler, with built-in liquid-handling capabilities for automation. We have developed a microfluidic disk segment enabling an automated nested real-time PCR assay for identification of common European animal groups, adapted to forensic standards. For the first time, we utilize a novel combination of fluidic elements, including pre-storage of reagents, to automate the assay at constant rotational frequency of an off-the-shelf thermocycler. It provides a universal duplex pre-amplification of short fragments of the mitochondrial 12S rRNA and cytochrome b genes, animal-group-specific main amplifications, and melting curve analysis for differentiation. The system was characterized with respect to assay sensitivity, specificity, risk of cross-contamination, and detection of minor components in mixtures. In 92.2% of the performed tests, sample handling was fluidically failure-free, and these runs were used for evaluation. Altogether, augmenting the standard real-time thermocycler with a self-contained centrifugal microfluidic disk segment resulted in an accelerated and automated analysis, reducing hands-on time and circumventing the risk of contamination associated with regular nested PCR protocols. PMID:26147196

  8. Analysis of multitemporal laser-scanned DTMs of an active landslide (Doren, Western Austria) using a robust plane-fitting segmentation

    NASA Astrophysics Data System (ADS)

    Koma, Zs.; Pocsai, A.; Székely, B.; Dorninger, P.; Zámolyi, A.; Roncat, A.

    2012-04-01

    Structural geomorphometric analysis of high-resolution laser-scanned DTMs is a straightforward method to study microtopographic components of dynamically forming landscapes, and thus of areas affected by mass movements. However, results for multitemporal DTMs may turn out to be difficult to evaluate. In our approach, a robust plane-fitting algorithm is used to create various segmentations of the filtered lidar point cloud (ground surface points) by applying different sets of parameters. The resulting sets of planes are analyzed in terms of their geologic meaning and compared in order to detect changes. Our study area, the Doren landslide (Bregenzerwald, Vorarlberg, Western Austria), an actively forming landslide developed in molasse sediments, has been measured several times by laser scanning (lidar). These DTMs form the input to our procedure. The DTMs are analyzed by the segmentation algorithm using varying parameter sets (i.e., minimum number of points, standard deviation, point-to-plane distance). The segmented results are checked for indications of geological structures as well as for features belonging to the moving material of the landslide. Finally, the segments of the different years are compared. Results show that patterns composed of segments of steep and less steep valley sides can be correlated with the tectonic and lithological setting of the study area. Furthermore, some narrow linear or curvilinear zones appear that can be related to the outlines of small internal mass movements. Interestingly, the various years sometimes show similar patterns despite the continuous displacement of the sliding material. The project has been supported by the Austrian Academy of Sciences (ÖAW) in the framework of the project "Geophysik der Erdkruste".

  9. Incorporation of learned shape priors into a graph-theoretic approach with application to the 3D segmentation of intraretinal surfaces in SD-OCT volumes of mice

    NASA Astrophysics Data System (ADS)

    Antony, Bhavna J.; Song, Qi; Abràmoff, Michael D.; Sohn, Eliott; Wu, Xiaodong; Garvin, Mona K.

    2014-03-01

    Spectral-domain optical coherence tomography (SD-OCT) finds widespread clinical use for the detection and management of ocular diseases. This non-invasive imaging modality has also begun to find frequent use in research studies involving animals such as mice. Numerous approaches have been proposed for the segmentation of retinal surfaces in SD-OCT images obtained from human subjects; however, the segmentation of retinal surfaces in scans of mice is not as well studied. In this work, we describe a graph-theoretic approach for the simultaneous segmentation of 10 retinal surfaces in SD-OCT scans of mice that incorporates learned shape priors. We compared the method to a baseline approach that did not incorporate learned shape priors and observed that the overall unsigned border position error was reduced from 3.58 ± 1.33 μm to 3.20 ± 0.56 μm.

  10. Multi-temporal MRI carpal bone volumes analysis by principal axes registration

    NASA Astrophysics Data System (ADS)

    Ferretti, Roberta; Dellepiane, Silvana

    2016-03-01

    In this paper, a principal axes registration technique is presented, with a relevant application to segmented volumes. The purpose of the proposed registration is to compare multi-temporal volumes of carpal bones from Magnetic Resonance Imaging (MRI) acquisitions. Starting from the second-order moment matrix, the eigenvectors are calculated to allow the rotation of volumes with respect to reference axes. The volumes are then spatially translated so that they overlap exactly. A quantitative evaluation of the results is carried out by computing classical indices from the confusion matrix, which provide similarity measures between volumes of the same organ as extracted from MRI acquisitions executed at different times. Within the medical field, the use of registration to compare multi-temporal images is of great interest, since it provides the physician with a tool for visually monitoring the evolution of a disease. The segmentation method used herein is based on graph theory and is a robust, unsupervised, and parameter-independent method. Patients affected by rheumatic diseases have been considered.
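
    The registration idea above (eigenvectors of the second-order moment matrix for rotation, then a translation aligning centroids) can be sketched on point sets; a minimal illustration with hypothetical function names, ignoring eigenvector sign ambiguity:

```python
import numpy as np

def principal_axes(points):
    """Centroid and eigenvectors of the second-order moment (covariance) matrix."""
    c = points.mean(axis=0)
    cov = np.cov((points - c).T)
    _, vecs = np.linalg.eigh(cov)  # columns sorted by ascending eigenvalue
    return c, vecs

def register(points, ref_points):
    """Rotate and translate `points` so their principal axes and centroid
    match those of `ref_points` (a sketch of principal axes registration)."""
    c, v = principal_axes(points)
    cr, vr = principal_axes(ref_points)
    R = vr @ v.T  # rotation mapping the moving axes onto the reference axes
    return (points - c) @ R.T + cr
```

    After registration, the two volumes can be compared voxel-by-voxel, e.g. via a confusion matrix, as in the paper.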

  11. Sequence and phylogenetic analysis of the S1 Genome segment of turkey-origin reoviruses

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Based on previous reports characterizing the turkey-origin avian reovirus (TRV) sigma-B (sigma-2) major outer capsid protein gene, the TRVs may represent a new group within the fusogenic orthoreoviruses. However, no sequence data from other TRV genes or genome segments has been reported. The sigma...

  12. 3D CT spine data segmentation and analysis of vertebrae bone lesions.

    PubMed

    Peter, R; Malinsky, M; Ourednicek, P; Jan, J

    2013-01-01

    A method is presented aiming at detecting and classifying bone lesions in 3D CT data of human spine, via Bayesian approach utilizing Markov random fields. A developed algorithm for necessary segmentation of individual possibly heavily distorted vertebrae based on 3D intensity modeling of vertebra types is presented as well. PMID:24110203

  13. Segmental and Positional Effects on Children's Coda Production: Comparing Evidence from Perceptual Judgments and Acoustic Analysis

    ERIC Educational Resources Information Center

    Theodore, Rachel M.; Demuth, Katherine; Shattuck-Hufnagel, Stephanie

    2012-01-01

    Children's early productions are highly variable. Findings from children's early productions of grammatical morphemes indicate that some of the variability is systematically related to segmental and phonological factors. Here, we extend these findings by assessing 2-year-olds' production of non-morphemic codas using both listener decisions and…

  14. Segmentation of hyper-pigmented spots in human skin using automated cluster analysis

    NASA Astrophysics Data System (ADS)

    Gossage, Kirk W.; Weissman, Jesse; Velthuizen, Robert

    2009-02-01

    The appearance and color distribution of skin are important characteristics that affect the human perception of health and vitality. Dermatologists and other skin researchers often use color and appearance to diagnose skin conditions and monitor the efficacy of procedures and treatments. Historically, most skin color and chromophore measurements have been performed using reflectance spectrometers and colorimeters. These devices acquire a single measurement over an integrated area defined by an aperture, and are therefore poorly suited to measure the color of pigmented lesions or other blemishes. Measurements of spots smaller than the aperture will be washed out with background, and spots that are larger may not be adequately sampled unless the blemish is homogeneous. Recently, multispectral imaging devices have become available for skin imaging. These devices are designed to image regions of skin and provide information about the levels of endogenous chromophores present in the image field of view. These data are presented as four images at each measurement site: RGB color, melanin, collagen, and blood images. We developed a robust segmentation technique that can segment skin blemishes in these images and provide more precise values of melanin, blood, and collagen by analyzing only the segmented region of interest. Results from hundreds of skin images show this to be a robust automated segmentation technique over a range of skin tones and shades.

  15. National Evaluation of Family Support Programs. Final Report Volume A: The Meta-Analysis.

    ERIC Educational Resources Information Center

    Layzer, Jean I.; Goodson, Barbara D.; Bernstein, Lawrence; Price, Cristofer

    This volume is part of the final report of the National Evaluation of Family Support Programs and details findings from a meta-analysis of extant research on programs providing family support services. Chapter A1 of this volume provides a rationale for using meta-analysis. Chapter A2 describes the steps of preparation for the meta-analysis.…

  16. Yucca Mountain transportation routes: Preliminary characterization and risk analysis; Volume 2, Figures [and] Volume 3, Technical Appendices

    SciTech Connect

    Souleyrette, R.R. II; Sathisan, S.K.; di Bartolo, R.

    1991-05-31

    This report presents appendices related to the preliminary assessment and risk analysis for high-level radioactive waste transportation routes to the proposed Yucca Mountain Project repository. Information includes data on population density, traffic volume, ecologically sensitive areas, and accident history.

  17. Probabilistic analysis of activation volumes generated during deep brain stimulation.

    PubMed

    Butson, Christopher R; Cooper, Scott E; Henderson, Jaimie M; Wolgamuth, Barbara; McIntyre, Cameron C

    2011-02-01

    Deep brain stimulation (DBS) is an established therapy for the treatment of Parkinson's disease (PD) and shows great promise for the treatment of several other disorders. However, while the clinical analysis of DBS has received great attention, a relative paucity of quantitative techniques exists to define the optimal surgical target and most effective stimulation protocol for a given disorder. In this study we describe a methodology that represents an evolutionary addition to the concept of a probabilistic brain atlas, which we call a probabilistic stimulation atlas (PSA). We outline steps to combine quantitative clinical outcome measures with advanced computational models of DBS to identify regions where stimulation-induced activation could provide the best therapeutic improvement on a per-symptom basis. While this methodology is relevant to any form of DBS, we present example results from subthalamic nucleus (STN) DBS for PD. We constructed patient-specific computer models of the volume of tissue activated (VTA) for 163 different stimulation parameter settings which were tested in six patients. We then assigned clinical outcome scores to each VTA and compiled all of the VTAs into a PSA to identify stimulation-induced activation targets that maximized therapeutic response with minimal side effects. The results suggest that selection of both electrode placement and clinical stimulation parameter settings could be tailored to the patient's primary symptoms using patient-specific models and PSAs. PMID:20974269
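
    The compilation step (scoring each VTA and averaging outcomes voxel-by-voxel) can be sketched as follows; this is a hypothetical simplification of the probabilistic stimulation atlas idea, omitting the patient-specific modeling and side-effect weighting described in the abstract:

```python
import numpy as np

def stimulation_atlas(vtas, scores):
    """Voxel-wise mean clinical outcome over all VTAs covering each voxel.

    vtas: (n_settings, x, y, z) boolean masks of activated tissue.
    scores: (n_settings,) clinical outcome score for each setting.
    Voxels covered by no VTA are left at zero.
    """
    vtas = np.asarray(vtas, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    weighted = (vtas * scores[:, None, None, None]).sum(axis=0)
    counts = vtas.sum(axis=0)
    # Average only where at least one VTA covers the voxel.
    return np.divide(weighted, counts,
                     out=np.zeros_like(weighted), where=counts > 0)
```

    Voxels with a high mean score across many settings would then suggest activation regions associated with good therapeutic response.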

  18. Automated 3D renal segmentation based on image partitioning

    NASA Astrophysics Data System (ADS)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context and still comes at vast user time expense. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and the true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
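
    The volume-based and size-based measures named above can be computed directly from binary masks; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def overlap_metrics(seg, gold):
    """Volume-overlap measures between an automated segmentation and a
    hand-segmented gold standard, both given as binary masks."""
    seg, gold = np.asarray(seg, bool), np.asarray(gold, bool)
    inter = np.logical_and(seg, gold).sum()
    union = np.logical_or(seg, gold).sum()
    return {
        "dice": 2.0 * inter / (seg.sum() + gold.sum()),
        "jaccard": inter / union,
        "tp_volume_fraction": inter / gold.sum(),
        "relative_volume_difference": (seg.sum() - gold.sum()) / gold.sum(),
    }
```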

  19. Kohonen map as a visualization tool for the analysis of protein sequences: multiple alignments, domains and segments of secondary structures.

    PubMed

    Hanke, J; Reich, J G

    1996-12-01

    The method of Kohonen maps, a special form of neural networks, was applied as a visualization tool for the analysis of protein sequence similarity. The procedure converts sequence (domains, aligned sequences, segments of secondary structure) into a characteristic signal matrix. This conversion depends on the property or replacement score vector selected by the user. Similar sequences have small distance in the signal space. The trained Kohonen network is functionally equivalent to an unsupervised non-linear cluster analyzer. Protein families, or aligned sequences, or segments of similar secondary structure, aggregate as clusters, and their proximity may be inspected on a color screen or on paper. Pull-down menus permit access to background information in the established text-oriented way. PMID:9021261
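
    The clustering behaviour described above can be illustrated with a minimal 1-D Kohonen map; this is a generic self-organizing map sketch, not the authors' implementation, and assumes the sequences have already been converted to numeric signal vectors:

```python
import numpy as np

def train_som(data, n_units=10, epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    """Train a 1-D Kohonen map: similar input vectors end up on nearby units."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((n_units, data.shape[1]))
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                      # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5          # shrinking neighbourhood
        for x in rng.permutation(data):
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))  # best-matching unit
            d = np.arange(n_units) - bmu                 # grid distance to BMU
            h = np.exp(-d ** 2 / (2 * sigma ** 2))       # neighbourhood weights
            w += lr * h[:, None] * (x - w)               # pull weights toward x
    return w

def map_position(w, x):
    """Grid position of the unit closest to input vector x."""
    return int(np.argmin(((w - x) ** 2).sum(axis=1)))
```

    After training, inputs from distinct clusters map to separated positions on the grid, which is what makes the map usable as a visualization of sequence similarity.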

  20. A Genetic Analysis of Brain Volumes and IQ in Children

    ERIC Educational Resources Information Center

    van Leeuwen, Marieke; Peper, Jiska S.; van den Berg, Stephanie M.; Brouwer, Rachel M.; Hulshoff Pol, Hilleke E.; Kahn, Rene S.; Boomsma, Dorret I.

    2009-01-01

    In a population-based sample of 112 nine-year old twin pairs, we investigated the association among total brain volume, gray matter and white matter volume, intelligence as assessed by the Raven IQ test, verbal comprehension, perceptual organization and perceptual speed as assessed by the Wechsler Intelligence Scale for Children-III. Phenotypic…

  1. EPA RREL'S MOBILE VOLUME REDUCTION UNIT -- APPLICATIONS ANALYSIS REPORT

    EPA Science Inventory

    The volume reduction unit (VRU) is a pilot-scale, mobile soil washing system designed to remove organic contaminants from the soil through particle size separation and solubilization. The VRU removes contaminants by suspending them in a wash solution and by reducing the volume of...

  2. Volume component analysis for classification of LiDAR data

    NASA Astrophysics Data System (ADS)

    Varney, Nina M.; Asari, Vijayan K.

    2015-03-01

    One of the most difficult challenges of working with LiDAR data is the large number of data points produced. Analysing these large data sets is an extremely time-consuming process. For this reason, automatic perception of LiDAR scenes is a growing area of research. Currently, most LiDAR feature extraction relies on geometrical features specific to the point cloud of interest. These geometrical features are scene-specific and often rely on the scale and orientation of the object for classification. This paper proposes a robust method for reduced-dimensionality feature extraction of 3D objects using a volume component analysis (VCA) approach. The VCA approach is based on principal component analysis (PCA). PCA is a method of reduced feature extraction that computes a covariance matrix from the original input vector; the eigenvectors corresponding to the largest eigenvalues of the covariance matrix are used to describe an image. Block-based PCA is an adapted method for feature extraction in facial images because PCA, when performed in local areas of the image, can extract more significant features than when the entire image is considered. The image space is split into several of these blocks, and PCA is computed individually for each block. VCA proposes that a LiDAR point cloud can be represented as a series of voxels whose values correspond to the point density at each location. From this voxelized space, block-based PCA is used to analyze sections of the space which, when combined, represent features of the entire 3-D object. These features are then used as the input to a support vector machine trained to identify four classes of objects (vegetation, vehicles, buildings, and barriers) with an overall accuracy of 93.8%.
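
    The voxelization and block-based PCA steps can be sketched as follows; this is a simplified illustration with assumed grid and block sizes, and it does not reproduce the paper's actual feature pipeline or SVM training:

```python
import numpy as np

def voxelize(points, grid=(8, 8, 8)):
    """Convert a 3-D point cloud to a voxel grid of relative point densities."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    idx = ((points - lo) / (hi - lo + 1e-9) * (np.array(grid) - 1)).astype(int)
    vox = np.zeros(grid)
    for i, j, k in idx:
        vox[i, j, k] += 1
    return vox / len(points)

def block_pca_features(vox, block=4, n_components=3):
    """Flatten each sub-block and project it onto the top principal components
    computed across blocks (block-based PCA)."""
    blocks = np.array([vox[i:i + block, j:j + block, k:k + block].ravel()
                       for i in range(0, vox.shape[0], block)
                       for j in range(0, vox.shape[1], block)
                       for k in range(0, vox.shape[2], block)])
    centered = blocks - blocks.mean(axis=0)
    # Eigenvectors of the covariance matrix, largest eigenvalues first.
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    top = vecs[:, ::-1][:, :n_components]
    return (centered @ top).ravel()
```

    The concatenated per-block projections would then serve as the classifier input.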

  3. Segmental neurofibromatosis.

    PubMed

    Galhotra, Virat; Sheikh, Soheyl; Jindal, Sanjeev; Singla, Anshu

    2014-07-01

    Segmental neurofibromatosis is a rare disorder, characterized by neurofibromas or café-au-lait macules limited to one region of the body. Its occurrence on the face is extremely rare, and only a few cases of segmental neurofibromatosis of the face have been described so far. We present a case of segmental neurofibromatosis involving the buccal mucosa, tongue, cheek, ear, and neck on the right side of the face. PMID:25565748

  4. Comparison between MDCT and Grayscale IVUS in a Quantitative Analysis of Coronary Lumen in Segments with or without Atherosclerotic Plaques

    PubMed Central

    Falcão, João L. A. A.; Falcão, Breno A. A.; Gurudevan, Swaminatha V.; Campos, Carlos M.; Silva, Expedito R.; Kalil-Filho, Roberto; Rochitte, Carlos E.; Shiozaki, Afonso A.; Coelho-Filho, Otavio R.; Lemos, Pedro A.

    2015-01-01

    Background The diagnostic accuracy of 64-slice MDCT in comparison with IVUS has been poorly described and is mainly restricted to reports analyzing segments with documented atherosclerotic plaques. Objectives We compared 64-slice multidetector computed tomography (MDCT) with gray scale intravascular ultrasound (IVUS) for the evaluation of coronary lumen dimensions in the context of a comprehensive analysis, including segments with absent or mild disease. Methods The 64-slice MDCT was performed within 72 h before the IVUS imaging, which was obtained for at least one coronary, regardless of the presence of luminal stenosis at angiography. A total of 21 patients were included, with 70 imaged vessels (total length 114.6 ± 38.3 mm per patient). A coronary plaque was diagnosed in segments with plaque burden > 40%. Results At patient, vessel, and segment levels, average lumen area, minimal lumen area, and minimal lumen diameter were highly correlated between IVUS and 64-slice MDCT (p < 0.01). However, 64-slice MDCT tended to underestimate the lumen size with a relatively wide dispersion of the differences. The comparison between 64-slice MDCT and IVUS lumen measurements was not substantially affected by the presence or absence of an underlying plaque. In addition, 64-slice MDCT showed good global accuracy for the detection of IVUS parameters associated with flow-limiting lesions. Conclusions In a comprehensive, multi-territory, and whole-artery analysis, the assessment of coronary lumen by 64-slice MDCT compared with coronary IVUS showed a good overall diagnostic ability, regardless of the presence or absence of underlying atherosclerotic plaques. PMID:25993595

  5. Value and limitations of segmental analysis of stress thallium myocardial imaging for localization of coronary artery disease

    SciTech Connect

    Rigo, P.; Bailey, I.K.; Griffith, L.S.C.; Pitt, B.; Borow, R.D.; Wagner, H.N.; Becker, L.C.

    1980-05-01

    This study was done to determine the value of thallium-201 myocardial scintigraphic imaging (MSI) for identifying disease in the individual coronary arteries. Segmental analysis of rest and stress MSI was performed in 133 patients with arteriographically proved coronary artery disease (CAD). Certain scintigraphic segments were highly specific (97 to 100%) for the three major coronary arteries: the anterior wall and septum for the left anterior descending (LAD) coronary artery; the inferior wall for the right coronary artery (RCA); and the proximal lateral wall for the circumflex (LCX) artery. Perfusion defects located in the anterolateral wall in the anterior view were highly specific for proximal disease in the LAD involving the major diagonal branches, but this was not true for septal defects. The apical segments were not specific for any of the three major vessels. Although MSI was abnormal in 89% of these patients with CAD, it was less sensitive for identifying individual vessel disease: 63% for LAD, 50% for RCA, and 21% for LCX disease (narrowings ≥ 50%). Sensitivity increased with the severity of stenosis, but even for 100% occlusions was only 87% for LAD, 58% for RCA, and 38% for LCX. Sensitivity diminished as the number of vessels involved increased: with single-vessel disease, 80% of LAD, 54% of RCA, and 33% of LCX lesions were detected, but in patients with triple-vessel disease, only 50% of LAD, 50% of RCA, and 16% of LCX lesions were identified. Thus, although segmental analysis of MSI can identify disease in the individual coronary arteries with high specificity, only moderate sensitivity is achieved, reflecting the tendency of MSI to identify only the most severely ischemic area among several that may be present in a heart. Perfusion scintigrams display relative distributions rather than absolute values of myocardial blood flow.

  6. Style, content and format guide for writing safety analysis documents. Volume 1, Safety analysis reports for DOE nuclear facilities

    SciTech Connect

    Not Available

    1994-06-01

    The purpose of Volume 1 of this 4-volume style guide is to furnish guidelines on writing and publishing Safety Analysis Reports (SARs) for DOE nuclear facilities at Sandia National Laboratories. The scope of Volume 1 encompasses not only the general guidelines for writing and publishing, but also the prescribed topics/appendices contents along with examples from typical SARs for DOE nuclear facilities.

  7. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods

    SciTech Connect

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich

    2012-03-15

    Purpose: Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. Methods: A graph cuts segmentation method was combined with a three-dimensional virtual-reality-based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Results: Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average. A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of user interaction.

  8. Shape-Constrained Segmentation Approach for Arctic Multiyear Sea Ice Floe Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Brucker, Ludovic; Ivanoff, Alvaro; Tilton, James C.

    2013-01-01

    The melting of sea ice is correlated to increases in sea surface temperature and associated climatic changes. Therefore, it is important to investigate how rapidly sea ice floes melt. For this purpose, a new TempoSeg method for multitemporal segmentation of multiyear ice floes is proposed. The microwave radiometer is used to track the position of an ice floe. Then, a time series of MODIS images is created with the ice floe in the image center. The TempoSeg method segments these images into two regions: Floe and Background. First, morphological feature extraction is applied. Then, the central image pixel is marked as Floe, and shape-constrained best-merge region growing is performed. The resulting two-region map is post-filtered by applying morphological operators. We have successfully tested our method on a set of MODIS images and estimated the area of a sea ice floe as a function of time.

  9. Analysis of a Segmented Annular Coplanar Capacitive Tilt Sensor with Increased Sensitivity

    PubMed Central

    Guo, Jiahao; Hu, Pengcheng; Tan, Jiubin

    2016-01-01

    An investigation of a segmented annular coplanar capacitor is presented. We focus on its theoretical model, and a mathematical expression of the capacitance value is derived by solving a Laplace equation with Hankel transform. The finite element method is employed to verify the analytical result. Different control parameters are discussed, and each contribution to the capacitance value of the capacitor is obtained. On this basis, we analyze and optimize the structure parameters of a segmented coplanar capacitive tilt sensor, and three models with different positions of the electrode gap are fabricated and tested. The experimental result shows that the model (whose electrode-gap position is 10 mm from the electrode center) realizes a high sensitivity: 0.129 pF/° with a non-linearity of <0.4% FS (full scale of ±40°). This finding offers plenty of opportunities for various measurement requirements in addition to achieving an optimized structure in practical design. PMID:26805844

  10. An ECG ambulatory system with mobile embedded architecture for ST-segment analysis.

    PubMed

    Miranda-Cid, Alejandro; Alvarado-Serrano, Carlos

    2010-01-01

    A prototype ECG ambulatory system for long-term monitoring of the ST segment in 3 leads, featuring low power consumption, portability, and data storage on solid-state memory cards, has been developed. The presented solution is based on the mobile embedded architecture of a portable entertainment device, used as a tool for storage and processing of bioelectric signals, together with a mid-range RISC microcontroller, the PIC 16F877, which performs the digitization and transmission of the ECG. The ECG amplifier stage operates at low power from a unipolar supply and introduces minimal distortion into the phase response of the high-pass filter over the ST segment. We developed an algorithm that manages file access through a FAT32 implementation and displays the ECG on the device screen. The records are stored in TXT format for further processing. After acquisition, the implemented system works as a standard USB mass storage device. PMID:21095640

  11. Design and Analysis of Modules for Segmented X-Ray Optics

    NASA Technical Reports Server (NTRS)

    McClelland, Ryan S.; Biskach, Michael P.; Chan, Kai-Wing; Saha, Timo T.; Zhang, William W.

    2012-01-01

    Future X-ray astronomy missions demand thin, light, and closely packed optics which lend themselves to segmentation of the annular mirrors and, in turn, a modular approach to the mirror design. The modular approach to X-ray Flight Mirror Assembly (FMA) design allows excellent scalability of the mirror technology to support a variety of mission sizes and science objectives. This paper describes FMA designs using slumped glass mirror segments for several X-ray astrophysics missions studied by NASA and explores the driving requirements and subsequent verification tests necessary to qualify a slumped glass mirror module for space-flight. A rigorous testing program is outlined allowing Technical Development Modules to reach technical readiness for mission implementation while reducing mission cost and schedule risk.

  12. Analysis of a Segmented Annular Coplanar Capacitive Tilt Sensor with Increased Sensitivity.

    PubMed

    Guo, Jiahao; Hu, Pengcheng; Tan, Jiubin

    2016-01-01

    An investigation of a segmented annular coplanar capacitor is presented. We focus on its theoretical model, and a mathematical expression of the capacitance value is derived by solving a Laplace equation with Hankel transform. The finite element method is employed to verify the analytical result. Different control parameters are discussed, and each contribution to the capacitance value of the capacitor is obtained. On this basis, we analyze and optimize the structure parameters of a segmented coplanar capacitive tilt sensor, and three models with different positions of the electrode gap are fabricated and tested. The experimental result shows that the model (whose electrode-gap position is 10 mm from the electrode center) realizes a high sensitivity: 0.129 pF/° with a non-linearity of <0.4% FS (full scale of ± 40°). This finding offers plenty of opportunities for various measurement requirements in addition to achieving an optimized structure in practical design. PMID:26805844

  13. Sensitivity field distributions for segmental bioelectrical impedance analysis based on real human anatomy

    NASA Astrophysics Data System (ADS)

    Danilov, A. A.; Kramarenko, V. K.; Nikolaev, D. V.; Rudnev, S. G.; Salamatova, V. Yu; Smirnov, A. V.; Vassilevski, Yu V.

    2013-04-01

    In this work, an adaptive unstructured tetrahedral mesh generation technology is applied to the simulation of segmental bioimpedance measurements using a high-resolution whole-body model of the Visible Human Project man. Sensitivity field distributions are obtained for a conventional tetrapolar configuration as well as for eight- and ten-electrode measurement configurations. Based on the ten-electrode configuration, we suggest an algorithm for monitoring changes in the upper lung area.

  14. Growth and morphological analysis of segmented AuAg alloy nanowires created by pulsed electrodeposition in ion-track etched membranes

    PubMed Central

    Burr, Loic; Trautmann, Christina; Toimil-Molares, Maria Eugenia

    2015-01-01

    Background: Multicomponent heterostructure nanowires and nanogaps are of great interest for sensing applications. Pulsed electrodeposition in ion-track etched polymer templates is a suitable method to synthesise segmented nanowires whose segments consist of two different materials. For a well-controlled synthesis process, detailed analysis of the deposition parameters and of the size distribution of the segmented wires is crucial. Results: A process was developed for the electrodeposition of AuAg alloy nanowires and segmented Au-rich/Ag-rich/Au-rich nanowires with controlled composition and segment length in ion-track etched polymer templates. Detailed analysis by cyclic voltammetry in ion-track membranes, energy-dispersive X-ray spectroscopy, and scanning electron microscopy was performed to determine the dependence of segment composition on the chosen potential. Additionally, we dissolved the middle Ag-rich segments to create small nanogaps with controlled gap sizes. Annealing of the created structures allows us to influence their morphology. Conclusion: AuAg alloy nanowires, segmented wires, and nanogaps with controlled composition and size can be synthesised by electrodeposition in membranes, and are ideal model systems for the investigation of surface plasmons. PMID:26199830

  15. Challenges in the segmentation and analysis of X-ray Micro-CT image data

    NASA Astrophysics Data System (ADS)

    Larsen, J. D.; Schaap, M. G.; Tuller, M.; Kulkarni, R.; Guber, A.

    2014-12-01

    Pore-scale modeling of fluid flow is becoming increasingly popular across scientific disciplines. With increased computational power and technological advancements, it is now possible to create realistic models of fluid flow through highly complex porous media using a number of fluid-dynamics techniques. One technique that has gained popularity is the lattice Boltzmann method, owing to its relative ease of programming and its ability to capture and represent complex geometries with simple boundary conditions. In this study, lattice Boltzmann fluid models are applied to macro-porous silt loam soil imagery obtained with an industrial CT scanner. The soil imagery was segmented with six separate automated segmentation standards to reduce operator bias and provide distinction between phases. The permeability of the reconstructed samples was calculated, via Darcy's law, from lattice Boltzmann simulations of fluid flow in the samples. We attempt to validate the permeabilities simulated under the differing segmentation algorithms against experimental findings. Limitations arise with X-ray micro-CT image data: polychromatic X-ray CT can produce low image contrast and image artifacts. In this case, we find that the data cannot be segmented or modeled in a realistic and unbiased fashion.
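    Once a lattice Boltzmann run has converged, permeability follows from Darcy's law, k = μqL/ΔP, with q the mean superficial velocity through the sample. A minimal sketch in lattice units; the numerical values below are illustrative assumptions, not the study's data:

    ```python
    def darcy_permeability(mean_velocity, viscosity, pressure_drop, length):
        """Darcy's law solved for permeability: k = mu * q * L / dP."""
        return viscosity * mean_velocity * length / pressure_drop

    # Assumed lattice-Boltzmann outputs (lattice units, illustrative only)
    q = 1.2e-4     # mean superficial velocity through the sample
    mu = 1.0 / 6.0 # lattice kinematic viscosity for relaxation time tau = 1
    dP = 1.0e-2    # pressure difference imposed across the sample
    L = 200.0      # sample length in lattice nodes
    k = darcy_permeability(q, mu, dP, L)
    ```

    The resulting k is in lattice units squared and must be rescaled by the physical voxel size before comparison with experimental permeability.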

  16. Quantitative MRI analysis of brain volume changes due to controlled cortical impact.

    PubMed

    Colgan, Niall C; Cronin, Michelle M; Gobbo, Oliviero L; O'Mara, Shane M; O'Connor, William T; Gilchrist, Michael D

    2010-07-01

    More than 85% of reported brain traumas are classified clinically as "mild" using the Glasgow Coma Scale (GCS); qualitative MRI findings are scarce and provide little correspondence to clinical symptoms. Our goal, therefore, was to establish in vivo sequelae of traumatic brain injury (TBI) following lower and higher levels of impact to the frontal lobe using quantitative MRI analysis and a mechanical model of penetrating impact injury. To investigate time-based morphological and physiological changes of living tissue requires a surrogate for the human central nervous system. The present model for TBI was a systematically varied and controlled cortical impact on deeply-anaesthetized Sprague-Dawley rats, designed to mimic different injury severities. Whole-brain MRI scans were performed on each rat prior to either a lower or a higher level of impact, and then at hourly intervals for 5 h post-impact. Both brain volume and specific anatomical structures were segmented from MR images for inter-subject comparisons post-registration. Animals subjected to lower and higher impact levels exhibited elevated intracranial pressure (ICP) in the low compensatory reserve (i.e., nearly exhausted) and terminal disturbance (i.e., exhausted) ranges, respectively. There was a statistically significant drop in cerebrospinal fluid (CSF) volume of 35% in the lower impacts, and 65% in the higher impacts, at 5 h compared to sham controls. There was a corresponding increase in corpus callosum volume starting at 1 h, of 60-110% and 30-40% following the lower- and higher-impact levels, respectively. A statistically significant change in the abnormal tissue from 2 h to 5 h was observed for both impact levels, with greater significance for higher impacts. Furthermore, a statistically significant difference between the lower impacts and the sham controls occurred at 3 h. 
These results are statistically substantiated by a fluctuation in the physical size of the corpus callosum, a decrease in

  17. Segmental neurofibromatosis.

    PubMed

    Toy, Brian

    2003-10-01

    Segmental neurofibromatosis is a rare variant of neurofibromatosis in which skin lesions are confined to a circumscribed body segment. A case of a 72-year-old woman with this condition is presented. Clinical features and genetic evidence are reviewed. PMID:14594599

  18. Active Segmentation

    PubMed Central

    Mishra, Ajay; Aloimonos, Yiannis

    2009-01-01

    The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene, which can be either an object or just a part of one. We define as a basic segmentation problem the task of segmenting the region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour, a connected set of boundary edge fragments in the edge map of the scene, around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue-independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring visual processing to the next level. Our approach differs from current approaches: while existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach. PMID:20686671
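    The fixation-based formulation can be illustrated with a much cruder stand-in: grow a region outward from the fixation point until an intensity boundary is reached. This toy sketch is not the authors' contour-based algorithm; it only illustrates "segment the region containing the fixation":

    ```python
    from collections import deque
    import numpy as np

    def segment_from_fixation(image, fixation, tol=10):
        """Grow a region from the fixation point, adding 4-connected pixels
        whose intensity is within `tol` of the seed value (a crude stand-in
        for stopping at the enclosing contour of boundary edge fragments)."""
        h, w = image.shape
        seed = int(image[fixation])
        mask = np.zeros((h, w), dtype=bool)
        queue = deque([fixation])
        mask[fixation] = True
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                        and abs(int(image[ny, nx]) - seed) <= tol):
                    mask[ny, nx] = True
                    queue.append((ny, nx))
        return mask

    # Toy image: a bright 4x4 object on a dark background; fixate inside it
    img = np.zeros((10, 10), dtype=np.uint8)
    img[3:7, 3:7] = 200
    region = segment_from_fixation(img, (5, 5))
    ```

    Only the single region around the fixation is returned; the rest of the scene is never segmented, which is the key contrast with whole-scene approaches.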

  19. Segmentation of thin section images for grain size analysis using region competition and edge-weighted region merging

    NASA Astrophysics Data System (ADS)

    Jungmann, Matthias; Pape, Hansgeorg; Wißkirchen, Peter; Clauser, Christoph; Berlage, Thomas

    2014-11-01

    Microscopic thin section images are a major source of information on physical properties, crystallization processes, and the evolution of rocks. Extracting the boundaries of grains is of special interest for estimating the volumetric structure of sandstone. To deal with large datasets and to relieve the geologist of manual image analysis, automated methods are needed for the segmentation task. This paper evaluates the region competition framework, which also includes region merging. The procedure minimizes an energy functional based on the Minimum Description Length (MDL) principle. To overcome some known drawbacks of current algorithms, we present an extension of MDL-based region merging that integrates edge information between adjacent regions. In addition, we introduce a modified implementation of region competition that overcomes computational complexities when dealing with multiple competing regions: commonly used methods solve differential equations describing the movement of boundaries, whereas our approach implements a simple updating scheme. Furthermore, we propose intensity features for reducing the amount of data. They are derived by comparing measured data with theoretical values obtained from a model function describing the intensity inside uniaxial crystals. The error, standard deviation, and phase shift between the model and the intensity measurements preserve sufficient information for a proper segmentation. Additionally, identified objects are classified by these features into quartz grains, anhydrite, and reaction fringes. This grouping is, in turn, used to further improve the segmentation process. We illustrate the benefits of this approach with four samples of microscopic thin sections and quantify them by comparing a segmentation result with a manually obtained one.
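    The MDL idea behind the merging step can be sketched as follows: two adjacent regions are merged when encoding them jointly (one intensity model) costs fewer bits than encoding them separately. The Gaussian coding model and fixed per-region parameter cost below are illustrative assumptions, not the paper's exact energy functional:

    ```python
    import numpy as np

    MODEL_COST_BITS = 64.0  # assumed fixed cost of encoding one region's parameters

    def description_length(pixels):
        """Bits to encode a region's intensities under a Gaussian model:
        n/2 * log2(variance) plus a fixed per-region parameter cost."""
        var = max(float(np.var(pixels)), 1e-12)
        return 0.5 * len(pixels) * np.log2(var) + MODEL_COST_BITS

    def should_merge(a, b):
        """MDL criterion: merge when coding both regions jointly is cheaper."""
        merged = np.concatenate([a, b])
        return description_length(merged) < description_length(a) + description_length(b)

    # Two regions with near-identical statistics should merge ...
    rng = np.random.default_rng(0)
    similar = should_merge(rng.normal(100, 2, 500), rng.normal(100, 2, 500))
    # ... while clearly distinct regions should not
    distinct = should_merge(rng.normal(50, 2, 500), rng.normal(200, 2, 500))
    ```

    The paper's extension additionally weights this decision by edge strength along the shared boundary, which this sketch omits.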

  20. Analysis of the Relationship between Hypertrophy of the Ligamentum Flavum and Lumbar Segmental Motion with Aging Process

    PubMed Central

    Yoshiiwa, Toyomi; Kawano, Masanori; Ikeda, Shinichi; Tsumura, Hiroshi

    2016-01-01

    Study Design Retrospective cross-sectional study. Purpose To investigate the relationship between ligamentum flavum (LF) hypertrophy and lumbar segmental motion. Overview of Literature The pathogenesis of LF thickening is unclear, and whether the thickening results from tissue hypertrophy or buckling remains controversial. Methods A total of 296 consecutive patients underwent assessment of the lumbar spine by radiography and magnetic resonance imaging (MRI). Of these patients, 39 with normal L4–L5 disc height were selected to exclude LF buckling as one component of LF hypertrophy. The study group included 27 men and 12 women, with an average age of 61.2 years (range, 23–81 years). Disc degeneration and LF thickness were quantified on MRI. Lumbar segmental spine instability and the presence of a vacuum phenomenon were identified on radiographic images. Results The distribution of disc degeneration and LF thickness included grade II degeneration in 4 patients, with a mean LF thickness of 2.43±0.20 mm; grade III in 10 patients, 3.01±0.41 mm; and grade IV in 25 patients, 4.16±1.12 mm. LF thickness increased significantly with grade of disc degeneration and was significantly correlated with age (r=0.55, p<0.01). Logistic regression analysis identified predictive effects of segmental angulation (odds ratio [OR]=1.55, p=0.014) and age (OR=1.16, p=0.008). Conclusions Age-related increases in disc degeneration, combined with continuous lumbar segmental flexion-extension motion, lead to the development of LF hypertrophy. PMID:27340534
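    For interpretation, the reported ORs are exponentiated logistic-regression coefficients, so each OR describes the multiplicative change in the odds of LF hypertrophy per unit increase in the predictor. A brief sketch, with coefficients back-computed from the reported values rather than taken from the study:

    ```python
    import math

    def odds_ratio(beta, delta=1.0):
        """Odds ratio for a `delta`-unit increase in a predictor whose
        logistic-regression coefficient is `beta`: OR = exp(beta * delta)."""
        return math.exp(beta * delta)

    # Hypothetical coefficients chosen to reproduce the reported ORs
    beta_age = math.log(1.16)         # per year of age
    beta_angulation = math.log(1.55)  # per degree of segmental angulation
    or_age = odds_ratio(beta_age)
    or_ang = odds_ratio(beta_angulation)
    or_age_decade = odds_ratio(beta_age, 10)  # odds multiply over 10 years
    ```

    Note that ORs compound: an OR of 1.16 per year implies roughly a 4.4-fold increase in odds over a decade.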

  1. Prevalence and Distribution of Segmentation Errors in Macular Ganglion Cell Analysis of Healthy Eyes Using Cirrus HD-OCT

    PubMed Central

    Alshareef, Rayan A.; Dumpala, Sunila; Rapole, Shruthi; Januwada, Manideepak; Goud, Abhilash; Peguda, Hari Kumar; Chhablani, Jay

    2016-01-01

    Purpose To determine the frequency of different types of spectral domain optical coherence tomography (SD-OCT) scan artifacts and errors in the ganglion cell algorithm (GCA) in healthy eyes. Methods The infrared image, the color-coded map, and each of the 128 horizontal b-scans acquired in macular ganglion cell-inner plexiform layer scans using the Cirrus HD-OCT (Carl Zeiss Meditec, Dublin, CA) macular cube 512 × 128 protocol were evaluated in 30 healthy normal eyes. The frequency and pattern of each artifact were determined. Deviation of the segmentation line was classified as mild (less than 10 microns), moderate (10–50 microns), or severe (more than 50 microns), and each deviation, if present, was noted as upward or downward. Each artifact was further described by its location on the scan and by zone within the total scan area. Results A total of 1029 (26.8%) of the 3840 scans had scan errors. The most common scan error was segmentation error (100%), followed by degraded images (6.70%), out-of-register artifacts (3.3%) and blink artifacts (0.09%). Misidentification of the inner retinal layers was most frequent (62%). Upward deviation of the segmentation line (47.91%) and severe deviation (40.3%) were noted most often. Artifacts were mostly located in the central scan area (16.8%). The average percentage of scans with artifacts per eye was 34.3% and was not related to signal strength on Spearman correlation (p = 0.36). Conclusions This study reveals that image artifacts and scan errors in SD-OCT GCA analysis are common and frequently involve segmentation errors. These errors may affect inner retinal thickness measurements in a clinically significant manner. Careful review of scans for artifacts is important when using this feature of the SD-OCT device. PMID:27191396
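    The grading scheme for segmentation-line deviations can be written directly as code. A small sketch of the thresholds stated above, assuming signed deviation values encode direction (positive = upward):

    ```python
    def classify_deviation(delta_um):
        """Grade a segmentation-line deviation using the study's thresholds:
        mild (<10 um), moderate (10-50 um), severe (>50 um), with direction.
        Sign convention (an assumption here): positive values are upward."""
        mag = abs(delta_um)
        grade = "mild" if mag < 10 else "moderate" if mag <= 50 else "severe"
        direction = "upward" if delta_um > 0 else "downward"
        return grade, direction

    g1 = classify_deviation(7)
    g2 = classify_deviation(-62)
    g3 = classify_deviation(25)
    ```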

  2. [First-tracer passage with a single-crystal gamma camera: completed assessment of left-ventricular function by determining enddiastolic volume, regional ejection fraction and %-akinetic segment (author's transl)].

    PubMed

    Bull, U; Knesewitsch, P; Kleinhans, E; Seiderer, M; Strauer, B E

    1981-06-01

    Determination of left ventricular (LV) enddiastolic volume (EDV) was achieved by calibrating the system (a single-crystal gamma camera equipped with a converging collimator) against a volume phantom (egg). A good correlation (r = 0.92) was found with EDV values obtained from cineventriculography. Images derived at enddiastole (ED) and endsystole (ES) were corrected for background by "parabolic background subtraction", a realistic form of background correction in view of the LV shape. Regional ejection fraction (REF) was calculated by an electronic operation applying the ejection-fraction formula to these ED and ES images. REF values reflect regional or segmental LV pump function and are superior to one- or two-dimensional parameters (e.g., visual assessment of asynergy, hemiaxis shortening), since REF values include the third dimension by referring to regional volumes. In addition, per cent-akinetic segment may be replaced by REF. Results from the literature show that first-tracer passage with a single-crystal gamma camera at rest (n = 534) yields results equivalent to cineventriculography. Therefore, this nuclear procedure may be used routinely. REF values complement the diagnostic parameters available to date. PMID:6265871
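    Applied regionally, the ejection-fraction formula reduces to a pixelwise (ED − ES)/ED on the background-corrected count images. A minimal sketch with toy count values (not the study's data):

    ```python
    import numpy as np

    def regional_ejection_fraction(ed_counts, es_counts):
        """Pixelwise ejection-fraction image: (ED - ES) / ED, evaluated only
        where ED counts are positive. Counts are assumed background-corrected."""
        ref = np.zeros_like(ed_counts, dtype=float)
        valid = ed_counts > 0
        ref[valid] = (ed_counts[valid] - es_counts[valid]) / ed_counts[valid]
        return ref

    # Toy 2x2 "images" of background-corrected counts
    ed = np.array([[100.0, 80.0], [60.0, 0.0]])
    es = np.array([[40.0, 40.0], [60.0, 0.0]])
    ref = regional_ejection_fraction(ed, es)
    ```

    A pixel with REF near zero (counts unchanged from ED to ES) corresponds to an akinetic segment, which is why %-akinetic segment can be read off the REF image.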

  3. Oil-spill risk analysis: Cook inlet outer continental shelf lease sale 149. Volume 2: Conditional risk contour maps of seasonal conditional probabilities. Final report

    SciTech Connect

    Johnson, W.R.; Marshall, C.F.; Anderson, C.M.; Lear, E.M.

    1994-08-01

    The Federal Government has proposed to offer Outer Continental Shelf (OCS) lands in Cook Inlet for oil and gas leasing. Because oil spills may occur from activities associated with offshore oil production, the Minerals Management Service conducts a formal risk assessment. In evaluating the significance of accidental oil spills, it is important to remember that the occurrence of such spills is fundamentally probabilistic. The effects of oil spills that could occur during oil and gas production must be considered. This report summarizes results of an oil-spill risk analysis conducted for the proposed Cook Inlet OCS Lease Sale 149. The objective of this analysis was to estimate relative risks associated with oil and gas production for the proposed lease sale. To aid the analysis, conditional risk contour maps of seasonal conditional probabilities of spill contact were generated for each environmental resource or land segment in the study area. This aspect is discussed in this volume of the two-volume report.

  4. A Rapid and Efficient 2D/3D Nuclear Segmentation Method for Analysis of Early Mouse Embryo and Stem Cell Image Data

    PubMed Central

    Lou, Xinghua; Kang, Minjung; Xenopoulos, Panagiotis; Muñoz-Descalzo, Silvia; Hadjantonakis, Anna-Katerina

    2014-01-01

    Segmentation is a fundamental problem that dominates the success of microscopic image analysis. In almost 25 years of cell detection software development, there is still no single piece of commercial software that works well in practice when applied to early mouse embryo or stem cell image data. To address this need, we developed MINS (modular interactive nuclear segmentation), a MATLAB/C++-based segmentation tool tailored for cell counting and fluorescent intensity measurements in 2D and 3D image data. Our aim was to develop a tool that is accurate and efficient, yet straightforward and user friendly. The MINS pipeline comprises three major cascaded modules: detection, segmentation, and cell position classification. An extensive evaluation of MINS on both 2D and 3D images, and comparison to related tools, reveals improvements in segmentation accuracy and usability. Thus, its accuracy and ease of use will allow MINS to be implemented for routine single-cell-level image analyses. PMID:24672759

  5. Computer-aided segmentation and 3D analysis of in vivo MRI examinations of the human vocal tract during phonation

    NASA Astrophysics Data System (ADS)

    Wismüller, Axel; Behrends, Johannes; Hoole, Phil; Leinsinger, Gerda L.; Meyer-Baese, Anke; Reiser, Maximilian F.

    2008-03-01

    We developed, tested, and evaluated a 3D segmentation and analysis system for in vivo MRI examinations of the human vocal tract during phonation. For this purpose, six professionally trained speakers, aged 22-34 years, were examined using a standardized MRI protocol (1.5 T, T1w FLASH, ST 4 mm, 23 slices, acq. time 21 s). The volunteers performed a prolonged (>=21 s) emission of sounds of the German phonemic inventory. Simultaneous audio tape recording was obtained to verify correct utterance. Scans were made in each of the axial, coronal, and sagittal planes. Computer-aided quantitative 3D evaluation included (i) automated registration of the phoneme-specific data acquired in different slice orientations, (ii) semi-automated segmentation of oropharyngeal structures, (iii) computation of a curvilinear vocal tract midline in 3D by nonlinear PCA, and (iv) computation of cross-sectional areas of the vocal tract perpendicular to this midline. For the vowels /a/,/e/,/i/,/o/,/ø/,/u/,/y/, the extracted area functions were used to synthesize phoneme sounds based on an articulatory-acoustic model. For quantitative analysis, recorded and synthesized phonemes were compared, with area functions extracted from 2D midsagittal slices used as a reference. All vowels could be identified correctly based on the synthesized phoneme sounds. The comparison between synthesized and recorded vowel phonemes revealed that the quality of phoneme sound synthesis improved for the phonemes /a/ and /y/ if 3D instead of 2D data were used, as measured by the average relative frequency shift between recorded and synthesized vowel formants (p<0.05, one-sided Wilcoxon rank sum test). In summary, the combination of fast MRI followed by 3D segmentation and analysis is a novel approach to examining human phonation in vivo. It unveils functional anatomical findings that may be essential for realistic modelling of the human vocal tract during speech production.

  6. Multimodal Retinal Vessel Segmentation from Spectral-Domain Optical Coherence Tomography and Fundus Photography

    PubMed Central

    Hu, Zhihong; Niemeijer, Meindert; Abràmoff, Michael D.; Garvin, Mona K.

    2014-01-01

    Segmenting retinal vessels in optic nerve head (ONH) centered spectral-domain optical coherence tomography (SD-OCT) volumes is particularly challenging due to the projected neural canal opening (NCO) and relatively low visibility in the ONH center. Color fundus photographs provide a relatively high vessel contrast in the region inside the NCO, but have not previously been used to aid the SD-OCT vessel segmentation process. Thus, in this paper, we present two approaches for the segmentation of retinal vessels in SD-OCT volumes that each take advantage of complementary information from fundus photographs. In the first approach (referred to as the registered-fundus vessel segmentation approach), vessels are first segmented on the fundus photograph directly (using a k-NN pixel classifier) and this vessel segmentation result is mapped to the SD-OCT volume through the registration of the fundus photograph to the SD-OCT volume. In the second approach (referred to as the multimodal vessel segmentation approach), after fundus-to-SD-OCT registration, vessels are simultaneously segmented with a k-NN classifier using features from both modalities. Three-dimensional structural information from the intraretinal layers and the neural canal opening, obtained through graph-theoretic segmentation of the SD-OCT volume, is used in combination with Gaussian filter banks and Gabor wavelets to generate the features. The approach is trained on 15 and tested on 19 randomly chosen independent image pairs of SD-OCT volumes and fundus images from 34 subjects with glaucoma. Based on a receiver operating characteristic (ROC) curve analysis, the present registered-fundus and multimodal vessel segmentation approaches [area under the curve (AUC) of 0.85 and 0.89, respectively] both perform significantly better than the two previous OCT-based approaches (AUC of 0.78 and 0.83, p < 0.05). The multimodal approach overall performs significantly better than the other three approaches (p < 0
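    The k-NN pixel classification used in both approaches can be sketched in a few lines: each pixel's feature vector is labeled by majority vote of its k nearest training vectors. The toy two-dimensional feature vectors below stand in for the actual Gaussian-filter-bank/Gabor feature set:

    ```python
    import numpy as np

    def knn_predict(train_X, train_y, query_X, k=3):
        """Classify each query feature vector by majority vote of its k
        nearest training vectors (Euclidean distance)."""
        preds = []
        for q in query_X:
            d = np.linalg.norm(train_X - q, axis=1)
            nearest = train_y[np.argsort(d)[:k]]
            preds.append(np.bincount(nearest).argmax())
        return np.array(preds)

    # Toy features per pixel: (filter response 1, filter response 2);
    # label 1 = vessel, label 0 = background (illustrative values only)
    train_X = np.array([[0.9, 0.8], [0.8, 0.9], [0.85, 0.7],
                        [0.1, 0.2], [0.2, 0.1], [0.15, 0.25]])
    train_y = np.array([1, 1, 1, 0, 0, 0])
    labels = knn_predict(train_X, train_y, np.array([[0.88, 0.82], [0.12, 0.18]]))
    ```

    Soft class probabilities (the fraction of vessel votes among the k neighbors) are what feed the ROC analysis reported above.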

  7. Earth Observatory Satellite system definition study. Report 5: System design and specifications. Volume 4: Mission peculiar spacecraft segment and module specifications

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The specifications for the Earth Observatory Satellite (EOS) peculiar spacecraft segment and associated subsystems and modules are presented. The specifications considered include the following: (1) wideband communications subsystem module, (2) mission peculiar software, (3) hydrazine propulsion subsystem module, (4) solar array assembly, and (5) the scanning spectral radiometer.

  8. Texture analysis of automatic graph cuts segmentations for detection of lung cancer recurrence after stereotactic radiotherapy

    NASA Astrophysics Data System (ADS)

    Mattonen, Sarah A.; Palma, David A.; Haasbeek, Cornelis J. A.; Senan, Suresh; Ward, Aaron D.

    2015-03-01

    Stereotactic ablative radiotherapy (SABR) is a treatment for early-stage lung cancer with local control rates comparable to surgery. After SABR, benign radiation induced lung injury (RILI) results in tumour-mimicking changes on computed tomography (CT) imaging. Distinguishing recurrence from RILI is a critical clinical decision determining the need for potentially life-saving salvage therapies whose high risks in this population dictate their use only for true recurrences. Current approaches do not reliably detect recurrence within a year post-SABR. We measured the detection accuracy of texture features within automatically determined regions of interest, with the only operator input being the single line segment measuring tumour diameter, normally taken during the clinical workflow. Our leave-one-out cross validation on images taken 2-5 months post-SABR showed robustness of the entropy measure, with classification error of 26% and area under the receiver operating characteristic curve (AUC) of 0.77 using automatic segmentation; the results using manual segmentation were 24% and 0.75, respectively. AUCs for this feature increased to 0.82 and 0.93 at 8-14 months and 14-20 months post SABR, respectively, suggesting even better performance nearer to the date of clinical diagnosis of recurrence; thus this system could also be used to support and reinforce the physician's decision at that time. Based on our ongoing validation of this automatic approach on a larger sample, we aim to develop a computer-aided diagnosis system which will support the physician's decision to apply timely salvage therapies and prevent patients with RILI from undergoing invasive and risky procedures.
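    The entropy measure found to be robust here is commonly the first-order entropy of the ROI's intensity histogram; a minimal sketch under that assumption (synthetic ROIs, intensities normalized to [0, 1]):

    ```python
    import numpy as np

    def roi_entropy(roi, bins=32):
        """First-order entropy (bits) of the ROI's intensity histogram:
        H = -sum(p * log2(p)) over non-empty bins."""
        hist, _ = np.histogram(roi, bins=bins, range=(0.0, 1.0))
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    rng = np.random.default_rng(1)
    uniform_roi = rng.uniform(0, 1, (64, 64))  # heterogeneous texture: high entropy
    flat_roi = np.full((64, 64), 0.5)          # homogeneous region: zero entropy
    ```

    The intuition matches the clinical use above: recurrent tumour tends toward more heterogeneous (higher-entropy) texture than uniform radiation-induced change.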

  9. Texture-based segmentation and analysis of emphysema depicted on CT images

    NASA Astrophysics Data System (ADS)

    Tan, Jun; Zheng, Bin; Wang, Xingwei; Lederman, Dror; Pu, Jiantao; Sciurba, Frank C.; Gur, David; Leader, J. Ken

    2011-03-01

    In this study we present a two-step texture-based method for segmenting emphysema depicted on CT examinations. In step 1, fractal-dimension-based texture feature extraction is used to initially detect base regions of emphysema, and a threshold is applied to the texture result image to obtain the initial base regions. In step 2, the base regions are evaluated pixel by pixel using a method that considers the variance change incurred by adding a pixel to the base, in an effort to refine the boundary of the base regions. Visual inspection revealed a reasonable segmentation of the emphysema regions. There was a strong correlation between lung function (FEV1%, FEV1/FVC, and DLCO%) and the fraction of emphysema computed using the texture-based method: -0.433, -0.629, and -0.527, respectively. The texture-based method produced more homogeneous emphysematous regions than simple thresholding, especially for large bullae, which can appear as speckled regions in the threshold approach. In the texture-based method, single isolated pixels are considered emphysema only if neighboring pixels meet certain criteria, which supports the idea that a single isolated pixel may not be sufficient evidence that emphysema is present. One of the strengths of our texture-based approach to emphysema segmentation is that it goes beyond existing approaches, which typically extract a single texture feature or group of features and analyze the features individually. We focus on first identifying potential regions of emphysema and then refining the boundary of the detected regions based on texture patterns.
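    A fractal-dimension texture feature of the kind used in step 1 can be estimated by box counting: cover the pattern with boxes of shrinking size s and fit the slope of log N(s) versus log(1/s). A minimal sketch (assumes a square binary mask with power-of-two size; not the authors' exact estimator):

    ```python
    import numpy as np

    def box_counting_dimension(mask):
        """Estimate the fractal dimension of a binary mask by box counting:
        the slope of log(N(s)) vs log(1/s) over dyadic box sizes s."""
        n = mask.shape[0]
        sizes, counts = [], []
        s = n // 2
        while s >= 1:
            count = 0
            for i in range(0, n, s):
                for j in range(0, n, s):
                    if mask[i:i + s, j:j + s].any():
                        count += 1
            sizes.append(s)
            counts.append(count)
            s //= 2
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    # Sanity checks: a filled square has dimension 2, a line dimension 1
    filled = np.ones((64, 64), dtype=bool)
    line = np.zeros((64, 64), dtype=bool)
    line[0, :] = True
    d_filled = box_counting_dimension(filled)
    d_line = box_counting_dimension(line)
    ```

    In texture analysis the estimate is usually computed in a sliding window, giving a per-pixel feature image that can then be thresholded as in step 1.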

  10. Functional analysis of centipede development supports roles for Wnt genes in posterior development and segment generation.

    PubMed

    Hayden, Luke; Schlosser, Gerhard; Arthur, Wallace

    2015-01-01

    The genes of the Wnt family play important and highly conserved roles in posterior growth and development in a wide range of animal taxa. Wnt genes also operate in arthropod segmentation, and there has been much recent debate regarding the relationship between arthropod and vertebrate segmentation mechanisms. Due to its phylogenetic position, body form, and possession of many (11) Wnt genes, the centipede Strigamia maritima is a useful system with which to examine these issues. This study takes a functional approach based on treatment with lithium chloride, which causes ubiquitous activation of canonical Wnt signalling. This is the first functional developmental study performed in any of the 15,000 species of the arthropod subphylum Myriapoda. The expression of all 11 Wnt genes in Strigamia was analyzed in relation to posterior development. Three of these genes, Wnt11, Wnt5, and WntA, were strongly expressed in the posterior region and, thus, may play important roles in posterior developmental processes. In support of this hypothesis, LiCl treatment of S. maritima embryos was observed to produce posterior developmental defects and perturbations in AbdB and Delta expression. The effects of LiCl differ depending on the developmental stage treated, with more severe effects elicited by treatment during germband formation than by treatment at later stages. These results support a role for Wnt signalling in conferring posterior identity in Strigamia. In addition, data from this study are consistent with the hypothesis of segmentation based on a "clock and wavefront" mechanism operating in this species. PMID:25627713

  11. Segmentation of acute pyelonephritis area on kidney SPECT images using binary shape analysis

    NASA Astrophysics Data System (ADS)

    Wu, Chia-Hsiang; Sun, Yung-Nien; Chiu, Nan-Tsing

    1999-05-01

    Acute pyelonephritis is a serious disease in children that may result in irreversible renal scarring. The ability to localize the site of urinary tract infection and the extent of acute pyelonephritis has considerable clinical importance. In this paper, we segment the acute pyelonephritis area in kidney SPECT images. A two-step algorithm is proposed: first, the original images are converted into binary images by automatic thresholding; then, the acute pyelonephritis areas are located by finding convex deficiencies in the resulting binary images. This work gives physicians important diagnostic information and improves the quality of medical care for children with acute pyelonephritis.
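    The second step, locating convex deficiencies in the thresholded image, can be approximated cheaply. The row-wise fill below is a crude stand-in for a true 2D convex hull, used here only to illustrate the idea of "hull minus region" on a toy binary kidney:

    ```python
    import numpy as np

    def convex_deficiency_rowwise(mask):
        """Crude stand-in for convex deficiency: fill each row between its
        leftmost and rightmost foreground pixels, then subtract the mask.
        (A real implementation would subtract the mask from its 2D convex hull.)"""
        filled = np.zeros_like(mask)
        for i, row in enumerate(mask):
            cols = np.flatnonzero(row)
            if cols.size:
                filled[i, cols[0]:cols[-1] + 1] = True
        return filled & ~mask

    # Toy binary "kidney" (after automatic thresholding) with an interior notch,
    # mimicking a photopenic defect caused by acute pyelonephritis
    kidney = np.ones((8, 8), dtype=bool)
    kidney[3:5, 3:5] = False  # the defect
    defect = convex_deficiency_rowwise(kidney)
    ```

    The recovered `defect` pixels are exactly the notch, i.e. the candidate acute pyelonephritis area in this toy example.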

  12. Thermal non-equilibrium analysis of porous annulus subjected to segmental isothermal heater - Part B

    NASA Astrophysics Data System (ADS)

    Al-Rashed, Abdullah A. A. A.; Salman, Ahmed N. J.; Khaleed, H. M. T.; Khan, T. M. Yunus; Kamangar, Sarfaraz

    2016-06-01

    An investigation of heat transfer in an annular porous cylinder subjected to segmental heating is carried out. A thermal non-equilibrium condition is applied to the porous medium. The finite element method is used to solve the governing partial differential equations. The fluid is assumed to follow Darcy's law. The boundary conditions are such that the fluid and solid phases have different temperatures at the hot wall. The current study extends Part A, presented at this conference, by elaborating on the heat transfer behavior in terms of the Nusselt number.

  13. Automatic Segmentation of Eight Tissue Classes in Neonatal Brain MRI

    PubMed Central

    Anbeek, Petronella; Išgum, Ivana; van Kooij, Britt J. M.; Mol, Christian P.; Kersbergen, Karina J.; Groenendaal, Floris; Viergever, Max A.; de Vries, Linda S.; Benders, Manon J. N. L.

    2013-01-01

    Purpose Volumetric measurements of neonatal brain tissues may be used as a biomarker for later neurodevelopmental outcome. We propose an automatic method for probabilistic brain segmentation in neonatal MRIs. Materials and Methods In an IRB-approved study, axial T1- and T2-weighted MR images were acquired at term-equivalent age for a preterm cohort of 108 neonates. A method was developed for automatic probabilistic segmentation of the images into eight cerebral tissue classes: cortical and central grey matter, unmyelinated and myelinated white matter, cerebrospinal fluid in the ventricles and in the extracerebral space, brainstem, and cerebellum. Segmentation is based on supervised pixel classification using intensity values and spatial positions of the image voxels. The method was trained and evaluated using leave-one-out experiments on seven images for which an expert had manually set a reference standard. Subsequently, the method was applied to the remaining 101 scans, and the resulting segmentations were evaluated visually by three experts. Finally, volumes of the eight segmented tissue classes were determined for each patient. Results The Dice similarity coefficients of the segmented tissue classes, except myelinated white matter, ranged from 0.75 to 0.92. Myelinated white matter was difficult to segment, and the achieved Dice coefficient was 0.47. Visual analysis of the results demonstrated accurate segmentations of the eight tissue classes. The probabilistic segmentation method produced volumes that compared favorably with the reference standard. Conclusion The proposed method provides accurate segmentation of neonatal brain MR images into all given tissue classes, except myelinated white matter. This is one of the first methods to distinguish cerebrospinal fluid in the ventricles from cerebrospinal fluid in the extracerebral space. 
This method might be helpful in predicting neurodevelopmental outcome and useful for evaluating neuroprotective clinical
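The Dice similarity coefficients reported above measure the overlap between an automatic segmentation and a manual reference. A minimal NumPy sketch on toy binary masks (illustrative data, not the study's):

```python
import numpy as np

def dice_coefficient(seg: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient: DSC = 2|A ∩ B| / (|A| + |B|)."""
    seg = seg.astype(bool)
    ref = ref.astype(bool)
    denom = seg.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(seg, ref).sum() / denom

# Toy example: two overlapping 2D masks (4 and 6 "voxels").
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(dice_coefficient(a, b))  # 2*4/(4+6) = 0.8
```

On this scale, the study's 0.75-0.92 values indicate strong agreement, while the 0.47 for myelinated white matter reflects poor overlap.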

  14. DESCRIPTIVE ANALYSIS OF PITCH VOLUME IN SOUTHEASTERN CONFERENCE BASEBALL PITCHERS

    PubMed Central

    Love, Shawn; Aytar, Aydan; Bush, Heather

    2010-01-01

Background: Representative data on typical pitch volume for collegiate pitchers functioning in their specific roles are sparse and are needed for training specificity. Objective: To report pitch volumes in Division I collegiate pitchers. The authors hypothesize that pitcher role will result in different pitch volumes. Methods: Pitch volume for pitchers from twelve Division I collegiate baseball teams during the 2009 baseball season was retrospectively reviewed through each team's website. The number of pitches and innings pitched for each pitcher were recorded. Pitchers were categorized based on their role as “Starter-only” (n=15), “Reliever-only” (n=76), or “Combined Starter/Reliever” (n=94) and compared using ANOVA. Results: “Starter-only” pitchers threw the most pitches (97±10) and pitched the most innings (6.0±1.0) per appearance (p<.001). “Combined Starter/Reliever” pitchers functioning as starters threw significantly more pitches (68±19) and pitched more innings (4.0±1.3) per appearance than “Combined Starter/Reliever” pitchers functioning as relievers and “Reliever-only” pitchers (p<.001). The cumulative volume during a 13-week regular season revealed that “Starter-only” pitchers threw significantly more total pitches (1204±387) than “Combined Starter/Reliever” pitchers (613±182), who in turn threw significantly more than “Reliever-only” pitchers (254±77) (p<.001). Discussion: Pitchers' specific roles and representative volumes should be used to design training and rehabilitation programs. Comparison of these data to reported adolescent pitch volumes reveals that adolescent pitch volume per appearance approaches collegiate levels. Conclusions: Collegiate pitcher roles dictate throwing volume. Starter-only pitchers (8%) throw the greatest cumulative number of pitches and should be trained differently than the majority of college pitchers (92%) who function primarily in reliever-only or combined starter/reliever roles.
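The group comparison above uses one-way ANOVA, whose F statistic is the ratio of between-group to within-group variance. A sketch on synthetic per-appearance counts drawn to roughly mimic the reported starter mean/SD (the reliever and combined values here are hypothetical, not the study's raw data):

```python
import numpy as np

def one_way_anova(*groups):
    """One-way ANOVA F statistic: between-group vs. within-group variance."""
    all_data = np.concatenate(groups)
    grand_mean = all_data.mean()
    k, n = len(groups), all_data.size
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

rng = np.random.default_rng(0)
starters  = rng.normal(97, 10, 15)   # reported: 97±10, n=15
relievers = rng.normal(30, 10, 76)   # hypothetical per-appearance counts
combined  = rng.normal(55, 19, 94)   # hypothetical per-appearance counts
print(one_way_anova(starters, relievers, combined))  # large F -> p < .001
```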

  15. Effects of immersion on visual analysis of volume data.

    PubMed

    Laha, Bireswar; Sensharma, Kriti; Schiffbauer, James D; Bowman, Doug A

    2012-04-01

    Volume visualization has been widely used for decades for analyzing datasets ranging from 3D medical images to seismic data to paleontological data. Many have proposed using immersive virtual reality (VR) systems to view volume visualizations, and there is anecdotal evidence of the benefits of VR for this purpose. However, there has been very little empirical research exploring the effects of higher levels of immersion for volume visualization, and it is not known how various components of immersion influence the effectiveness of visualization in VR. We conducted a controlled experiment in which we studied the independent and combined effects of three components of immersion (head tracking, field of regard, and stereoscopic rendering) on the effectiveness of visualization tasks with two x-ray microscopic computed tomography datasets. We report significant benefits of analyzing volume data in an environment involving those components of immersion. We find that the benefits do not necessarily require all three components simultaneously, and that the components have variable influence on different task categories. The results of our study improve our understanding of the effects of immersion on perceived and actual task performance, and provide guidance on the choice of display systems to designers seeking to maximize the effectiveness of volume visualization applications. PMID:22402687

  16. A computer program for comprehensive ST-segment depression/heart rate analysis of the exercise ECG test.

    PubMed

    Lehtinen, R; Vänttinen, H; Sievänen, H; Malmivuo, J

    1996-06-01

The ST-segment depression/heart rate (ST/HR) analysis has been found to improve the diagnostic accuracy of the exercise ECG test in detecting myocardial ischemia. Recently, three different continuous diagnostic variables based on the ST/HR analysis have been introduced: the ST/HR slope, the ST/HR index and the ST/HR hysteresis. The latter utilises both the exercise and recovery phases of the exercise ECG test, whereas the former two are based on the exercise phase only. This article presents a computer program which not only calculates the above three diagnostic variables but also plots full diagrams of ST-segment depression against heart rate during both exercise and recovery phases for each ECG lead from given ST/HR data. The program can be used in exercise ECG diagnosis in daily clinical practice provided that the ST/HR data from the ECG measurement system can be linked to the program. At present, the main purpose of the program is to provide clinical and medical researchers with a practical tool for comprehensive clinical evaluation and development of the ST/HR analysis. PMID:8835841
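Two of the three variables reduce to simple calculations on paired ST-depression/heart-rate samples: the ST/HR slope is a least-squares regression slope, and the ST/HR index is total ST change over total HR change. A sketch on hypothetical exercise-phase data (the hysteresis additionally requires the recovery-phase curve and is omitted here):

```python
import numpy as np

# Hypothetical exercise-phase samples: heart rate (bpm), ST depression (µV).
hr = np.array([70, 90, 110, 130, 150, 170], dtype=float)
st = np.array([0, 20, 45, 75, 110, 150], dtype=float)

# ST/HR slope: least-squares slope of ST depression against heart rate.
slope, intercept = np.polyfit(hr, st, 1)
# ST/HR index: net ST change divided by net HR change over exercise.
st_hr_index = (st[-1] - st[0]) / (hr[-1] - hr[0])
print(f"ST/HR slope = {slope:.2f} uV/bpm, ST/HR index = {st_hr_index:.2f} uV/bpm")
```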

  17. Analysis of flexible aircraft longitudinal dynamics and handling qualities. Volume 1: Analysis methods

    NASA Technical Reports Server (NTRS)

    Waszak, M. R.; Schmidt, D. S.

    1985-01-01

    As aircraft become larger and lighter due to design requirements for increased payload and improved fuel efficiency, they will also become more flexible. For highly flexible vehicles, the handling qualities may not be accurately predicted by conventional methods. This study applies two analysis methods to a family of flexible aircraft in order to investigate how and when structural (especially dynamic aeroelastic) effects affect the dynamic characteristics of aircraft. The first type of analysis is an open loop model analysis technique. This method considers the effects of modal residue magnitudes on determining vehicle handling qualities. The second method is a pilot in the loop analysis procedure that considers several closed loop system characteristics. Volume 1 consists of the development and application of the two analysis methods described above.

  18. Determination of fiber volume in graphite/epoxy materials using computer image analysis

    NASA Technical Reports Server (NTRS)

    Viens, Michael J.

    1990-01-01

    The fiber volume of graphite/epoxy specimens was determined by analyzing optical images of cross sectioned specimens using image analysis software. Test specimens were mounted and polished using standard metallographic techniques and examined at 1000 times magnification. Fiber volume determined using the optical imaging agreed well with values determined using the standard acid digestion technique. The results were found to agree within 5 percent over a fiber volume range of 45 to 70 percent. The error observed is believed to arise from fiber volume variations within the graphite/epoxy panels themselves. The determination of ply orientation using image analysis techniques is also addressed.
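The optical measurement above reduces to the area fraction of fiber pixels in a thresholded cross-section image, which on a polished section estimates volume fraction. A minimal sketch on a synthetic image (the intensities and 128 threshold are illustrative):

```python
import numpy as np

def fiber_volume_fraction(gray: np.ndarray, threshold: int) -> float:
    """Area fraction of pixels at or above `threshold`, taken as fiber."""
    return float((gray >= threshold).mean())

# Synthetic 8-bit image: bright fibers (200) on a darker epoxy matrix (60).
img = np.full((100, 100), 60, dtype=np.uint8)
img[:, :55] = 200            # 55% of pixels are "fiber"
print(fiber_volume_fraction(img, threshold=128))  # 0.55
```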

  19. Submarine pipeline on-bottom stability: Volume 1, Analysis and design guidelines: Final report

    SciTech Connect

    Not Available

    1988-11-01

This report has been developed as a reference handbook for use in on-bottom pipeline stability analysis and design. It consists of two volumes. Volume one is devoted to descriptions of the various aspects of the problem: the pipeline design process; ocean physics, wave mechanics, hydrodynamic forces, and meteorological data determination; geotechnical data collection and soil mechanics; and stability design procedures. Volume two describes, lists, and illustrates the analysis software. Diskettes containing the software and examples of the software are also included in Volume two. (115 refs., 127 figs., 7 tabs.)

  20. Full automation of morphological segmentation of retinal images: a comparison with human-based analysis

    NASA Astrophysics Data System (ADS)

    Wilson, Mark P.; Yang, Shuyu; Mitra, Sunanda; Raman, Balaji; Nemeth, Sheila C.; Soliz, Peter

    2003-05-01

Age-Related Macular Degeneration (ARMD) is the leading cause of irreversible visual loss among the elderly in the US and Europe. A computer-based system has been developed to provide the ability to track the position and margin of the ARMD-associated lesion: drusen. Variations in the subject's retinal pigmentation, size and profusion of the lesions, and differences in image illumination and quality present significant challenges to most segmentation algorithms. An algorithm is presented that first classifies the image to optimize the variables of a mathematical morphology algorithm. A binary image is found by applying Otsu's method to the reconstructed image. Lesion size and area distribution statistics are then calculated. For training and validation, the University of Wisconsin provided longitudinal images of 22 subjects from their 10-year Beaver Dam Study. Using the Wisconsin Age-Related Maculopathy Grading System, three graders classified the retinal images according to drusen size and area of involvement. The percentages within the acceptable error between the three graders and the computer are as follows: Grader-A: Area: 84% Size: 81%; Grader-B: Area: 63% Size: 76%; Grader-C: Area: 81% Size: 88%. To validate the segmented position and boundary, one grader was asked to digitally outline the drusen boundary. The average accuracy based on sensitivity and specificity was 0.87 for thirty-four marked regions.
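Otsu's method, used here to binarize the reconstructed image, picks the grey-level threshold that maximizes the between-class variance of the histogram. A self-contained sketch on a synthetic bimodal image (illustrative intensities, not retinal data):

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Otsu's method: threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))   # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0
    return int(np.argmax(sigma_b))

# Bimodal test image: background ~50, bright "drusen" blob ~200.
rng = np.random.default_rng(1)
img = rng.normal(50, 5, (64, 64))
img[20:40, 20:40] = rng.normal(200, 5, (20, 20))
img = np.clip(img, 0, 255).astype(np.uint8)
t = otsu_threshold(img)
print(t)  # falls between the two intensity modes
```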

  1. Phylogenetic analysis, genomic diversity and classification of M class gene segments of turkey reoviruses.

    PubMed

    Mor, Sunil K; Marthaler, Douglas; Verma, Harsha; Sharafeldin, Tamer A; Jindal, Naresh; Porter, Robert E; Goyal, Sagar M

    2015-03-23

From 2011 to 2014, 13 turkey arthritis reoviruses (TARVs) were isolated from cases of swollen hock joints in 2-18-week-old turkeys. In addition, two isolates from similar cases of turkey arthritis were received from another laboratory. Eight turkey enteric reoviruses (TERVs) isolated from fecal samples of turkeys were also used for comparison. The aims of this study were to characterize turkey reovirus (TRV) based on complete M class genome segments and to determine genetic diversity within TARVs in comparison to TERVs and chicken reoviruses (CRVs). Nucleotide (nt) cut-off values of 84%, 83% and 85% for the M1, M2 and M3 gene segments were proposed and used for genotype classification, generating 5, 7, and 3 genotypes, respectively. Using these nt cut-off values, we propose M class genotype constellations (GCs) for avian reoviruses. Of the seven GCs, GC1 and GC3 were shared between the TARVs and TERVs, indicating possible reassortment between turkey and chicken reoviruses. The TARVs and TERVs were divided into three GCs, and GC2 was unique to TARVs and TERVs. The proposed new GC approach should be useful in identifying reassortant viruses, which may ultimately be used in the design of a universal vaccine against both chicken and turkey reoviruses. PMID:25655814
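Genotype assignment by nucleotide cut-off reduces to comparing pairwise sequence identity against the proposed thresholds. A toy sketch (hypothetical 10-nt sequences and genotype names; real M-segment comparisons use full-length alignments):

```python
def nt_identity(a: str, b: str) -> float:
    """Fraction of matching positions between two aligned sequences."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def assign_genotype(seq: str, references: dict, cutoff: float = 0.84) -> str:
    """Assign seq to the first reference genotype it matches at or above
    the nucleotide cut-off; otherwise it founds a new genotype."""
    for name, ref in references.items():
        if nt_identity(seq, ref) >= cutoff:
            return name
    return "new genotype"

refs = {"G1": "ATGGTCAAGT", "G2": "TTACGCGGAA"}   # toy 10-nt "segments"
print(assign_genotype("ATGGTCAAGA", refs))  # 9/10 = 0.90 >= 0.84 -> G1
```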

  2. Analysis on the use of Multi-Sequence MRI Series for Segmentation of Abdominal Organs

    NASA Astrophysics Data System (ADS)

    Selver, M. A.; Selvi, E.; Kavur, E.; Dicle, O.

    2015-01-01

Segmentation of abdominal organs from MRI data sets is a challenging task due to various limitations and artefacts. During routine clinical practice, radiologists use multiple MR sequences in order to analyze different anatomical properties. These sequences have different characteristics in terms of acquisition parameters (such as contrast mechanisms and pulse sequence designs) and image properties (such as pixel spacing, slice thicknesses and dynamic range). For a complete understanding of the data, computational techniques should combine the information coming from these various MRI sequences. These sequences are not acquired in parallel but in a sequential manner (one after another). Therefore, patient movements and respiratory motions change the position and shape of the abdominal organs. In this study, the amount of these effects is measured using three different symmetric surface distance metrics applied to three-dimensional data acquired from various MRI sequences. The results are compared to intra- and inter-observer differences, and discussions are presented on using multiple MRI sequences for segmentation and on the necessity of registration.
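Symmetric surface distance metrics score how far apart two organ surfaces lie by averaging nearest-surface distances in both directions. A brute-force 2D sketch, with one mask shifted to mimic motion between sequences (real use is 3D, with voxel spacing applied; this is not the study's implementation):

```python
import numpy as np

def surface_points(mask: np.ndarray) -> np.ndarray:
    """Coordinates of mask pixels with at least one background 4-neighbour."""
    interior = mask.copy()
    for axis in (0, 1):
        for shift in (1, -1):
            interior &= np.roll(mask, shift, axis=axis)
    return np.argwhere(mask & ~interior)

def avg_symmetric_surface_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Average symmetric surface distance between two 2D binary masks."""
    pa, pb = surface_points(a), surface_points(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

a = np.zeros((20, 20), dtype=bool); a[5:15, 5:15] = True
b = np.roll(a, 2, axis=1)   # same "organ" shifted 2 px, e.g. by motion
print(avg_symmetric_surface_distance(a, b))
```

Note that `np.roll` wraps at image borders, which is harmless here because the masks stay away from the edges.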

  3. Automated segmentation and analysis of fluorescent in situ hybridization (FISH) signals in interphase nuclei of pap-smear specimens

    NASA Astrophysics Data System (ADS)

    Wang, Xingwei; Zheng, Bin; Li, Shibo; Zhang, Roy R.; Li, Yuhua; Mulvihill, John J.; Chen, Wei R.; Liu, Hong

    2009-02-01

Interphase fluorescence in situ hybridization (FISH) technology is a potential and promising molecular imaging tool, which can be applied to screen and detect cervical cancer. However, manual FISH detection is a subjective, tedious, and time-consuming process that results in large inter-reader variability and possible detection error (in particular for heterogeneous cases). Automatic FISH image analysis aims to potentially improve detection efficiency and also produce more accurate and consistent results. In this preliminary study, a new computerized scheme is developed to automatically segment analyzable interphase cells and detect FISH signals using digital fluorescence microscopic images acquired from Pap-smear specimens. First, due to the large intensity variations of the acquired interphase cells and overlapping cells, an iterative (multiple) threshold method and a feature-based classifier are applied to detect and segment all potentially analyzable interphase nuclei depicted on a single image frame. Second, a region labeling algorithm followed by a knowledge-based classifier is implemented to identify splitting and diffused FISH signals. Finally, each detected analyzable cell is classified as normal or abnormal based on the automatically counted number of FISH signals. To test the performance of this scheme, an image dataset involving 250 Pap-smear FISH image frames was collected and used in this study. The overall accuracy rate for segmenting analyzable interphase nuclei is 86.6% (360/424). The sensitivity and specificity for classifying abnormal and normal cells are 88.5% and 86.6%, respectively. The overall cell classification agreement rate between our scheme and a cytogeneticist is 86.6%. The testing results demonstrate the feasibility of applying this automated scheme in FISH image analysis.
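The reported sensitivity and specificity follow directly from confusion counts of automatic versus expert cell calls. A sketch with hypothetical calls for ten cells (illustrative, not the study's data):

```python
import numpy as np

def sens_spec(predicted: np.ndarray, truth: np.ndarray):
    """Sensitivity and specificity of binary calls
    (e.g. abnormal-vs-normal cells against a cytogeneticist's reading)."""
    tp = np.sum( predicted &  truth)
    tn = np.sum(~predicted & ~truth)
    fp = np.sum( predicted & ~truth)
    fn = np.sum(~predicted &  truth)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical calls for 10 cells (True = abnormal):
truth = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0], dtype=bool)
pred  = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 0], dtype=bool)
sens, spec = sens_spec(pred, truth)
print(sens, spec)  # 0.75 and 5/6 ≈ 0.833
```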

  4. Fuzzy pulmonary vessel segmentation in contrast enhanced CT data

    NASA Astrophysics Data System (ADS)

    Kaftan, Jens N.; Kiraly, Atilla P.; Bakai, Annemarie; Das, Marco; Novak, Carol L.; Aach, Til

    2008-03-01

    Pulmonary vascular tree segmentation has numerous applications in medical imaging and computer-aided diagnosis (CAD), including detection and visualization of pulmonary emboli (PE), improved lung nodule detection, and quantitative vessel analysis. We present a novel approach to pulmonary vessel segmentation based on a fuzzy segmentation concept, combining the strengths of both threshold and seed point based methods. The lungs of the original image are first segmented and a threshold-based approach identifies core vessel components with a high specificity. These components are then used to automatically identify reliable seed points for a fuzzy seed point based segmentation method, namely fuzzy connectedness. The output of the method consists of the probability of each voxel belonging to the vascular tree. Hence, our method provides the possibility to adjust the sensitivity/specificity of the segmentation result a posteriori according to application-specific requirements, through definition of a minimum vessel-probability required to classify a voxel as belonging to the vascular tree. The method has been evaluated on contrast-enhanced thoracic CT scans from clinical PE cases and demonstrates overall promising results. For quantitative validation we compare the segmentation results to randomly selected, semi-automatically segmented sub-volumes and present the resulting receiver operating characteristic (ROC) curves. Although we focus on contrast enhanced chest CT data, the method can be generalized to other regions of the body as well as to different imaging modalities.
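Fuzzy connectedness assigns each voxel the strength of its best path to a seed, where a path is only as strong as its weakest link and link affinity decays with intensity difference. A compact 2D sketch with a simplified Gaussian affinity (the published method's affinity also incorporates object-feature terms, so this is a conceptual illustration, not the paper's algorithm):

```python
import heapq
import numpy as np

def fuzzy_connectedness(img: np.ndarray, seeds, sigma: float = 20.0):
    """Dijkstra-style propagation of path strength (min-affinity along
    the best path) from seed pixels over a 2D image."""
    conn = np.zeros(img.shape)
    heap = []
    for s in seeds:
        conn[s] = 1.0
        heapq.heappush(heap, (-1.0, s))
    while heap:
        neg, (y, x) = heapq.heappop(heap)
        strength = -neg
        if strength < conn[y, x]:
            continue  # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]:
                diff = float(img[y, x]) - float(img[ny, nx])
                aff = np.exp(-diff ** 2 / (2 * sigma ** 2))
                cand = min(strength, aff)   # weakest link on the path
                if cand > conn[ny, nx]:
                    conn[ny, nx] = cand
                    heapq.heappush(heap, (-cand, (ny, nx)))
    return conn

# Bright "vessel" on a dark background; seed inside the vessel.
img = np.zeros((10, 10)); img[4:6, :] = 200.0
conn = fuzzy_connectedness(img, seeds=[(4, 2)])
print(conn[5, 8] > conn[0, 0])  # vessel connects strongly; background weakly
```

The resulting `conn` map is exactly the kind of per-voxel vessel probability the abstract describes: thresholding it a posteriori trades sensitivity against specificity.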

  5. Earth Observatory Satellite system definition study. Report 5: System design and specifications. Volume 3: General purpose spacecraft segment and module specifications

    NASA Technical Reports Server (NTRS)

    1974-01-01

The specifications for the Earth Observatory Satellite (EOS) general purpose spacecraft segment are presented. The satellite is designed to provide attitude stabilization, electrical power, and a communications data handling subsystem which can support various mission peculiar subsystems. The various specifications considered include the following: (1) structures subsystem, (2) thermal control subsystem, (3) communications and data handling subsystem module, (4) attitude control subsystem module, (5) power subsystem module, and (6) electrical integration subsystem.

  6. Cargo Logistics Airlift Systems Study (CLASS). Volume 1: Analysis of current air cargo system

    NASA Technical Reports Server (NTRS)

    Burby, R. J.; Kuhlman, W. H.

    1978-01-01

The material presented in this volume is classified into the following sections: (1) analysis of current routes; (2) air eligibility criteria; (3) current direct support infrastructure; (4) comparative mode analysis; (5) political and economic factors; and (6) future potential market areas. An effort was made to keep the observations and findings relating to the current systems as objective as possible in order not to bias the analysis of future air cargo operations reported in Volume 3 of the CLASS final report.

  7. Automated segmentation of the lamina cribrosa using Frangi's filter: a novel approach for rapid identification of tissue volume fraction and beam orientation in a trabeculated structure in the eye.

    PubMed

    Campbell, Ian C; Coudrillier, Baptiste; Mensah, Johanne; Abel, Richard L; Ethier, C Ross

    2015-03-01

    The lamina cribrosa (LC) is a tissue in the posterior eye with a complex trabecular microstructure. This tissue is of great research interest, as it is likely the initial site of retinal ganglion cell axonal damage in glaucoma. Unfortunately, the LC is difficult to access experimentally, and thus imaging techniques in tandem with image processing have emerged as powerful tools to study the microstructure and biomechanics of this tissue. Here, we present a staining approach to enhance the contrast of the microstructure in micro-computed tomography (micro-CT) imaging as well as a comparison between tissues imaged with micro-CT and second harmonic generation (SHG) microscopy. We then apply a modified version of Frangi's vesselness filter to automatically segment the connective tissue beams of the LC and determine the orientation of each beam. This approach successfully segmented the beams of a porcine optic nerve head from micro-CT in three dimensions and SHG microscopy in two dimensions. As an application of this filter, we present finite-element modelling of the posterior eye that suggests that connective tissue volume fraction is the major driving factor of LC biomechanics. We conclude that segmentation with Frangi's filter is a powerful tool for future image-driven studies of LC biomechanics. PMID:25589572
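Frangi's vesselness filter scores each pixel by the eigenvalues of the local Hessian: tubular structures have one near-zero eigenvalue (along the beam) and one large negative eigenvalue (across it). A single-scale 2D sketch without the Gaussian scale-space smoothing of the full method (the paper's modified 3D version differs; parameters here are illustrative):

```python
import numpy as np

def vesselness_2d(img: np.ndarray, beta: float = 0.5, c: float = 15.0):
    """Minimal 2D Frangi-style vesselness, single scale, no pre-smoothing:
    bright tubular structures score high, flat regions score zero."""
    gy, gx = np.gradient(img.astype(float))
    hyy, hyx = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    # Eigenvalues of the 2x2 Hessian at every pixel.
    tmp = np.sqrt((hxx - hyy) ** 2 + 4 * hxy ** 2)
    l1 = 0.5 * (hxx + hyy + tmp)
    l2 = 0.5 * (hxx + hyy - tmp)
    swap = np.abs(l1) > np.abs(l2)          # order so |lam1| <= |lam2|
    lam1 = np.where(swap, l2, l1)
    lam2 = np.where(swap, l1, l2)
    rb2 = (lam1 / np.where(lam2 == 0, 1e-10, lam2)) ** 2  # blob-vs-line ratio
    s2 = lam1 ** 2 + lam2 ** 2                            # structureness
    v = np.exp(-rb2 / (2 * beta ** 2)) * (1 - np.exp(-s2 / (2 * c ** 2)))
    return np.where(lam2 < 0, v, 0.0)       # keep bright ridges only

# Synthetic "beam": a bright horizontal bar on a dark background.
img = np.zeros((32, 32)); img[15:17, 4:28] = 100.0
v = vesselness_2d(img)
print(v[15, 16] > v[5, 5])  # ridge scores higher than flat background
```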

  8. Automated segmentation of the lamina cribrosa using Frangi's filter: a novel approach for rapid identification of tissue volume fraction and beam orientation in a trabeculated structure in the eye

    PubMed Central

    Campbell, Ian C.; Coudrillier, Baptiste; Mensah, Johanne; Abel, Richard L.; Ethier, C. Ross

    2015-01-01

    The lamina cribrosa (LC) is a tissue in the posterior eye with a complex trabecular microstructure. This tissue is of great research interest, as it is likely the initial site of retinal ganglion cell axonal damage in glaucoma. Unfortunately, the LC is difficult to access experimentally, and thus imaging techniques in tandem with image processing have emerged as powerful tools to study the microstructure and biomechanics of this tissue. Here, we present a staining approach to enhance the contrast of the microstructure in micro-computed tomography (micro-CT) imaging as well as a comparison between tissues imaged with micro-CT and second harmonic generation (SHG) microscopy. We then apply a modified version of Frangi's vesselness filter to automatically segment the connective tissue beams of the LC and determine the orientation of each beam. This approach successfully segmented the beams of a porcine optic nerve head from micro-CT in three dimensions and SHG microscopy in two dimensions. As an application of this filter, we present finite-element modelling of the posterior eye that suggests that connective tissue volume fraction is the major driving factor of LC biomechanics. We conclude that segmentation with Frangi's filter is a powerful tool for future image-driven studies of LC biomechanics. PMID:25589572

  9. Thermal non-equilibrium analysis of porous annulus subjected to segmental isothermal heater - Part A

    NASA Astrophysics Data System (ADS)

    Al-Rashed, Abdullah A. A. A.; Salman, Ahmed N. J.; Khaleed, H. M. T.; Khan, T. M. Yunus; Kamangar, Sarfaraz

    2016-06-01

The objective of the present study is to evaluate the effect of the length and location of segmental heating of the inner radius of an annular cylinder containing a porous medium between the inner and outer radii. The fluid and solid matrix of the porous medium are assumed to have a temperature discrepancy when subjected to isothermal heating by the heater. The fluid is assumed to follow Darcy's law, and two separate energy transport equations are considered to account for the thermal non-equilibrium condition. The boundary conditions are such that the fluid and solid phases have different temperatures at the hot wall. The study is conducted for heater lengths corresponding to 20%, 35% and 50% of the total height of the cylinder. The location of the heater is varied over three positions: the bottom, middle and top of the cylinder.

  10. Quantitative trait locus analysis of leaf dissection in tomato using Lycopersicon pennellii segmental introgression lines.

    PubMed Central

    Holtan, Hans E E; Hake, Sarah

    2003-01-01

Leaves are one of the most conspicuous and important organs of all seed plants. A fundamental source of morphological diversity in leaves is the degree to which the leaf is dissected by lobes and leaflets. We used publicly available segmental introgression lines to describe the quantitative trait loci (QTL) controlling the difference in leaf dissection seen between two tomato species, Lycopersicon esculentum and L. pennellii. We define eight morphological characteristics that comprise the mature tomato leaf and describe loci that affect each of these characters. We found 30 QTL that contribute to one or more of these characters. Of these 30 QTL, 22 primarily affect leaf dissection and 8 primarily affect leaf size. On the basis of which characters are affected, four classes of loci emerge that affect leaf dissection. The majority of the QTL produce phenotypes intermediate to the two parent lines, while 5 QTL result in transgression with drastically increased dissection relative to both parent lines. PMID:14668401

  11. Analysis of Photoreceptor Rod Outer Segment Phagocytosis by RPE Cells In Situ

    PubMed Central

    Sethna, Saumil; Finnemann, Silvia C.

    2013-01-01

    Counting rhodopsin-positive phagosomes residing in the retinal pigment epithelium (RPE) in the eye at different times of day allows a quantitative assessment of engulfment and digestion phases of diurnal RPE phagocytosis, which efficiently clears shed photoreceptor outer segment fragments (POS) from the neural retina. Comparing such activities among age- and background-matched experimental wild-type and mutant mice or rats serves to identify roles for specific proteins in the phagocytic process. Here, we describe experimental procedures for mouse eye harvest, embedding, sectioning, immunofluorescence labeling of rod POS phagosomes in RPE cells in sagittal eye sections, imaging of POS phagosomes in the RPE by laser scanning confocal microscopy, and POS quantification. PMID:23150373

  12. Fractal Analysis of Laplacian Pyramidal Filters Applied to Segmentation of Soil Images

    PubMed Central

    de Castro, J.; Méndez, A.; Tarquis, A. M.

    2014-01-01

The Laplacian pyramid is a well-known technique for image processing in which local operators of many scales, but identical shape, serve as the basis functions. The properties required of the pyramidal filter produce a family of filters which is uniparametric in the classical case, when the length of the filter is 5. We pay attention to the Gaussian and fractal behaviour of these basis functions (or filters), and we determine the Gaussian and fractal ranges in the case of a single parameter a. These fractal filters lose less energy in every step of the Laplacian pyramid, and we apply this property to obtain threshold values for segmenting soil images and then evaluate their porosity. We also evaluate our results by comparing them with the Otsu algorithm threshold values, and conclude that our algorithm produces reliable results. PMID:25114957
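The single-parameter family arises from the classic 5-tap generating kernel w(a) = [1/4 − a/2, 1/4, a, 1/4, 1/4 − a/2], which is normalized for any a. A 1D sketch of one pyramid level (nearest-neighbour expand used for brevity; the standard expand interpolates with the same kernel):

```python
import numpy as np

def kernel(a: float) -> np.ndarray:
    """Classic 5-tap pyramid generating kernel with one free parameter a
    (a = 0.4 gives the near-Gaussian filter of the classical pyramid)."""
    return np.array([0.25 - a / 2, 0.25, a, 0.25, 0.25 - a / 2])

def reduce_1d(signal: np.ndarray, a: float) -> np.ndarray:
    """One pyramid REDUCE step: filter, then subsample by 2."""
    blurred = np.convolve(signal, kernel(a), mode="same")
    return blurred[::2]

def laplacian_level(signal: np.ndarray, a: float) -> np.ndarray:
    """Laplacian level: signal minus the upsampled coarse approximation."""
    coarse = reduce_1d(signal, a)
    up = np.repeat(coarse, 2)[: signal.size]   # crude nearest-neighbour expand
    return signal - up

x = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.1
lap = laplacian_level(x, a=0.4)
print(kernel(0.4).sum())  # 1.0: the kernel is normalized for any a
```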

  13. Fractal analysis of laplacian pyramidal filters applied to segmentation of soil images.

    PubMed

    de Castro, J; Ballesteros, F; Méndez, A; Tarquis, A M

    2014-01-01

The Laplacian pyramid is a well-known technique for image processing in which local operators of many scales, but identical shape, serve as the basis functions. The properties required of the pyramidal filter produce a family of filters which is uniparametric in the classical case, when the length of the filter is 5. We pay attention to the Gaussian and fractal behaviour of these basis functions (or filters), and we determine the Gaussian and fractal ranges in the case of a single parameter a. These fractal filters lose less energy in every step of the Laplacian pyramid, and we apply this property to obtain threshold values for segmenting soil images and then evaluate their porosity. We also evaluate our results by comparing them with the Otsu algorithm threshold values, and conclude that our algorithm produces reliable results. PMID:25114957

  14. Intraspecific phylogeography of the gopher tortoise, Gopherus polyphemus: RFLP analysis of amplified mtDNA segments.

    PubMed

    Osentoski, M F; Lamb, T

    1995-12-01

    The slow rate of mtDNA evolution in turtles poses a limitation on the levels of intraspecific variation detectable by conventional restriction fragment surveys. We examined mtDNA variation in the gopher tortoise (Gopherus polyphemus) using an alternative restriction assay, one in which PCR-amplified segments of the mitochondrial genome were digested with tetranucleotide-site endonucleases. Restriction fragment polymorphisms representing four amplified regions were analysed to evaluate population genetic structure among 112 tortoises throughout the species' range. Thirty-six haplotypes were identified, and three major geographical assemblages (Eastern, Western, and Mid-Florida) were resolved by UPGMA and parsimony analyses. Eastern and Western assemblages abut near the Apalachicola drainage, whereas the Mid-Florida assemblage appears restricted to the Brooksville Ridge. The Eastern/Western assemblage boundary is remarkably congruent with phylogeographic profiles for eight additional species from the south-eastern U.S., representing both freshwater and terrestrial realms. PMID:8564009

  15. A link-segment model of upright human posture for analysis of head-trunk coordination

    NASA Technical Reports Server (NTRS)

    Nicholas, S. C.; Doxey-Gasway, D. D.; Paloski, W. H.

    1998-01-01

    Sensory-motor control of upright human posture may be organized in a top-down fashion such that certain head-trunk coordination strategies are employed to optimize visual and/or vestibular sensory inputs. Previous quantitative models of the biomechanics of human posture control have examined the simple case of ankle sway strategy, in which an inverted pendulum model is used, and the somewhat more complicated case of hip sway strategy, in which multisegment, articulated models are used. While these models can be used to quantify the gross dynamics of posture control, they are not sufficiently detailed to analyze head-trunk coordination strategies that may be crucial to understanding its underlying mechanisms. In this paper, we present a biomechanical model of upright human posture that extends an existing four mass, sagittal plane, link-segment model to a five mass model including an independent head link. The new model was developed to analyze segmental body movements during dynamic posturography experiments in order to study head-trunk coordination strategies and their influence on sensory inputs to balance control. It was designed specifically to analyze data collected on the EquiTest (NeuroCom International, Clackamas, OR) computerized dynamic posturography system, where the task of maintaining postural equilibrium may be challenged under conditions in which the visual surround, support surface, or both are in motion. The performance of the model was tested by comparing its estimated ground reaction forces to those measured directly by support surface force transducers. We conclude that this model will be a valuable analytical tool in the search for mechanisms of balance control.
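The validation step above compares model-estimated ground reaction forces with force-plate measurements; for a link-segment model the vertical component follows from Newton-Euler summation over segment masses. A sketch with illustrative segment masses (hypothetical values, not the paper's five-mass parameters):

```python
import numpy as np

# Hypothetical 5-segment masses (kg), e.g. shanks, thighs, pelvis/lower
# trunk, upper trunk + arms, head -- illustrative, not the paper's model.
masses = np.array([7.0, 14.0, 25.0, 20.0, 5.0])
g = 9.81  # m/s^2

def vertical_grf(accels: np.ndarray) -> float:
    """Net vertical ground reaction force from segment centre-of-mass
    accelerations: F = sum_i m_i * (g + a_i), vertical component only."""
    return float(np.sum(masses * (g + accels)))

quiet_stance = np.zeros(5)          # no segment acceleration
print(vertical_grf(quiet_stance))   # equals body weight: 71 kg * 9.81
```

During dynamic posturography trials, substituting measured segment accelerations gives the estimated GRF that can be checked against the force transducers.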

  16. Analysis of iris structure and iridocorneal angle parameters with anterior segment optical coherence tomography in Fuchs' uveitis syndrome.

    PubMed

    Basarir, Berna; Altan, Cigdem; Pinarci, Eylem Yaman; Celik, Ugur; Satana, Banu; Demirok, Ahmet

    2013-06-01

To evaluate the differences in the biometric parameters of iridocorneal angle and iris structure measured by anterior segment optical coherence tomography (AS-OCT) in Fuchs' uveitis syndrome (FUS). Seventy-six eyes of 38 consecutive patients with the diagnosis of unilateral FUS were recruited into this prospective, cross-sectional and comparative study. After a complete ocular examination, anterior segment biometric parameters were measured by Visante(®) AS-OCT. All parameters were compared between the two eyes of each patient statistically. The mean age of the 38 subjects was 32.5 ± 7.5 years (18 female and 20 male). The mean visual acuity was lower in eyes with FUS (0.55 ± 0.31) than in healthy eyes (0.93 ± 0.17). The central corneal thickness did not differ significantly between eyes. All iridocorneal angle parameters (angle-opening distance 500 and 750, scleral spur angle, trabecular-iris space (TISA) 500 and 750) except TISA 500 in the temporal quadrant were significantly larger in eyes with FUS than in healthy eyes. Anterior chamber depth was deeper in the eyes with FUS than in the unaffected eyes. With regard to iris measurements, iris thickness in the thickest part, iris bowing and iris shape were all statistically different between the affected eye and the healthy eye in individual patients with FUS. However, no statistically significant differences were evident in iris thickness 500 μm, thickness in the middle and iris length. There was a significant difference in iris shape between the two eyes of patients with glaucoma. AS-OCT as an imaging method provides us with many informative results in the analysis of anterior segment parameters in FUS. PMID:23277205

  17. Segmental hair analysis for 11-nor-Δ⁹-tetrahydrocannabinol-9-carboxylic acid and the patterns of cannabis use.

    PubMed

    Han, Eunyoung; Chung, Heesun; Song, Joon Myong

    2012-04-01

    Cannabis is the most widely abused drug in the world. The purpose of this study is to detect 11-nor-9-carboxy-Δ⁹-tetrahydrocannabinol (THCCOOH) in segmental hair and to evaluate the patterns of cannabis use. We investigated the relationship between the concentrations of THCCOOH in hair, the self-reported use data and the route of administration. For this purpose, the hair samples were washed, digested with 1 mL of 1 M NaOH at 85°C for 30 min along with the internal standard THCCOOH-d₃ (2.5 pg/mg), and extracted twice in 2 mL of n-hexane-ethyl acetate (9:1) after adding 1 mL of 0.1 N sodium acetate buffer (pH 4.5) and 200 µL of acetic acid. The organic extract was transferred and evaporated, and the mixture was derivatized with 50 µL of pentafluoropropionic anhydride and 25 µL of pentafluoropropanol for 30 min at 70°C. The reconstituted final extract was injected into a gas chromatography-tandem mass spectrometer operating in the negative chemical ionization mode. In segmental hair analysis, the concentrations of THCCOOH decreased from the proximal to the distal segments. The concentrations of THCCOOH in hair and the self-reported dose and frequency of administration from cannabis users were not well correlated because of the low accuracy and reliability of the self-reported data. However, this study provides preliminary information on the dose and frequency of administration among cannabis users in our country. PMID:22417835

  18. Industrial process heat data analysis and evaluation. Volume 2

    SciTech Connect

    Lewandowski, A; Gee, R; May, K

    1984-07-01

    The Solar Energy Research Institute (SERI) has modeled seven of the Department of Energy (DOE) sponsored solar Industrial Process Heat (IPH) field experiments and has generated thermal performance predictions for each project. Additionally, these performance predictions have been compared with actual performance measurements taken at the projects. Predictions were generated using SOLIPH, an hour-by-hour computer code with the capability for modeling many types of solar IPH components and system configurations. Comparisons of reported and predicted performance resulted in good agreement when the field test reliability and availability was high. Volume I contains the main body of the work: objective, model description, site configurations, model results, data comparisons, and summary. Volume II contains complete performance prediction results (tabular and graphic output) and computer program listings.

  19. Industrial process heat data analysis and evaluation. Volume 1

    SciTech Connect

    Lewandowski, A; Gee, R; May, K

    1984-07-01

    The Solar Energy Research Institute (SERI) has modeled seven of the Department of Energy (DOE) sponsored solar Industrial Process Heat (IPH) field experiments and has generated thermal performance predictions for each project. Additionally, these performance predictions have been compared with actual performance measurements taken at the projects. Predictions were generated using SOLIPH, an hour-by-hour computer code with the capability for modeling many types of solar IPH components and system configurations. Comparisons of reported and predicted performance resulted in good agreement when the field test reliability and availability was high. Volume I contains the main body of the work: objective, model description, site configurations, model results, data comparisons, and summary. Volume II contains complete performance prediction results (tabular and graphic output) and computer program listings.

  20. A STANDARD PROCEDURE FOR COST ANALYSIS OF POLLUTION CONTROL OPERATIONS. VOLUME II. APPENDICES

    EPA Science Inventory

    Volume I is a user guide for a standard procedure for the engineering cost analysis of pollution abatement operations and processes. The procedure applies to projects in various economic sectors: private, regulated, and public. Volume II, the bulk of the document, contains 11 app...

  1. Model analysis of tidal volume response to inspiratory elastic loads.

    PubMed

    Zin, W A; Rossi, A; Zocchi, L; Milic-Emili, J

    1984-07-01

    Based on experimental inspiratory driving pressure waveforms and active respiratory impedance data of anesthetized cats, we made model predictions of the factors that determine the immediate (first loaded breath) intrinsic (i.e., nonneural) tidal volume compensation to added inspiratory elastic loads. The time course of driving pressure (P) was given by P = at^b, where a is the pressure at 1 s from onset of inspiration and represents the intensity of neuromuscular drive, t is time, and b is an index of the shape of the driving pressure wave. For a given active respiratory impedance, tidal volume compensation to added elastic loads decreases with increasing inspiratory duration and decreasing value of b but is independent of a. We have also assessed the validity of the "effective elastance" (Lynne-Davies et al., J. Appl. Physiol. 30: 512-516, 1971) as a predictor of tidal volume responses to elastic loads. In the absence of vagal feedback, the effective elastance appears to be a reliable predictor, except for short inspiratory durations and a very high intrinsic resistance. PMID:6469787
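
    The behavior described above can be checked numerically. The sketch below (not the authors' code) assumes a linear single-compartment equation of motion, R·dV/dt + E·V = P(t) with P(t) = a·t^b, and shows that the first-breath tidal volume falls when an elastic load ΔE is added, while the compensation ratio is independent of a; the values of E, R, and the load are illustrative assumptions.

```python
import numpy as np

def tidal_volume(a=10.0, b=1.0, E=30.0, R=5.0, ti=1.0, dE=0.0, dt=1e-4):
    """Euler-integrate R*dV/dt + (E + dE)*V = P(t) = a*t**b over one
    inspiration (0..ti) and return the volume at end of inspiration."""
    V = 0.0
    for tk in np.arange(0.0, ti, dt):
        P = a * tk**b                  # driving pressure at this instant
        V += (P - (E + dE) * V) / R * dt
    return V

v0 = tidal_volume(dE=0.0)       # unloaded first breath
v_load = tidal_volume(dE=15.0)  # added elastic load reduces tidal volume
```

    Because the system is linear and starts from V = 0, the ratio v_load/v0 does not depend on a, mirroring the abstract's statement that intrinsic compensation is independent of neuromuscular drive intensity.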

  2. Segment-interaction in sprint start: Analysis of 3D angular velocity and kinetic energy in elite sprinters.

    PubMed

    Slawinski, J; Bonnefoy, A; Ontanon, G; Leveque, J M; Miller, C; Riquet, A; Chèze, L; Dumas, R

    2010-05-28

    The aim of the present study was to measure the joint angular velocities and the kinetic energy of the different body segments of elite sprinters during the sprint start, using a 3D kinematic analysis of the whole body. Eight elite sprinters (100-m time: 10.30 ± 0.14 s), equipped with 63 passive reflective markers, each performed four maximal 10-m sprint starts on an indoor track. An opto-electronic Motion Analysis system consisting of 12 digital cameras (250 Hz) was used to collect the 3D marker trajectories. During the pushing phase on the blocks, the 3D angular velocity vector and its norm were calculated for each joint. The kinetic energy of 16 segments of the lower and upper limbs and of the total body was calculated. The 3D kinematic analysis of the whole body demonstrated that joints such as the shoulders, thorax and hips did not reach their maximal angular velocity through flexion-extension alone, but through a combination of flexion-extension, abduction-adduction and internal-external rotation. The maximal kinetic energy of the total body was reached before block clearing (537 ± 59.3 J vs. 514.9 ± 66.0 J at clearing; p ≤ 0.01). These results suggest that better synchronization between the upper and lower limbs could increase the efficiency of the pushing phase on the blocks. Moreover, to understand the low interindividual variance in sprint start performance among elite athletes, a complete 3D whole-body kinematic analysis should be used. PMID:20226465

  3. Fault rupture segmentation

    NASA Astrophysics Data System (ADS)

    Cleveland, Kenneth Michael

    A critical foundation of earthquake study and hazard assessment is an understanding of the controls on fault rupture, including segmentation. Key challenges to understanding fault rupture segmentation include, but are not limited to: What determines whether a fault segment will rupture in a single great event or in multiple moderate events? How is slip along a fault partitioned between seismic and aseismic components? How does the seismicity of a fault segment evolve over time? How representative are past events for assessing future seismic hazards? To address these difficult questions, new methods must be developed that utilize the available information. Much of the research presented in this study focuses on developing new methods for attacking the challenges of understanding fault rupture segmentation. Not only do these methods exploit a broader band of information within the waveform than has traditionally been used, but they also lend themselves to the inclusion of additional seismic phases, providing deeper understanding. Additionally, these methods are designed to be fast and efficient with large datasets, allowing them to utilize the enormous volume of data available. Key findings from this body of work include a demonstration that focusing on fundamental earthquake properties at regional scales can provide a general understanding of fault rupture segmentation. We present a modern, waveform-based method that locates events using cross-correlation of Rayleigh waves. The cross-correlation values can also be used to calculate precise earthquake magnitudes. Finally, insight regarding earthquake rupture directivity can be easily and quickly obtained using cross-correlation of surface waves.
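
    The surface-wave cross-correlation idea can be illustrated with a toy example: the peak lag of the cross-correlation between two records gives their relative delay, and the normalized peak is a similarity measure usable for relative location work. The waveform, sample rate, and shift below are synthetic assumptions, not data from the study.

```python
import numpy as np

def cc_delay(w1, w2, dt):
    """Relative delay (s) of w1 with respect to w2 from the peak of their
    cross-correlation, plus the normalized correlation maximum."""
    cc = np.correlate(w1, w2, mode="full")
    lag = int(np.argmax(cc)) - (len(w2) - 1)   # samples w1 lags behind w2
    ccmax = cc.max() / (np.linalg.norm(w1) * np.linalg.norm(w2))
    return lag * dt, ccmax

dt = 0.01
t = np.arange(0.0, 10.0, dt)
# synthetic "Rayleigh wave": a windowed oscillation centered at t = 5 s
s = np.exp(-((t - 5.0) ** 2) / 0.5) * np.sin(2 * np.pi * 1.0 * t)
shifted = np.roll(s, 30)                       # same wavelet, 0.3 s later
delay, ccmax = cc_delay(shifted, s, dt)
```

    A high normalized maximum indicates near-identical waveforms, which is the premise behind using cross-correlation for precise relative locations and magnitudes.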

  4. Investigating the Creeping Segment of the San Andreas Fault using InSAR time series analysis

    NASA Astrophysics Data System (ADS)

    Rolandone, Frederique; Ryder, Isabelle; Agram, Piyush S.; Burgmann, Roland; Nadeau, Robert M.

    2010-05-01

    We exploit the advanced Interferometric Synthetic Aperture Radar (InSAR) technique referred to as the Small BAseline Subset (SBAS) algorithm to analyze the creeping section of the San Andreas Fault in Central California. Various geodetic creep rate measurements along the Central San Andreas Fault (CSAF) have been made since 1969, including creepmeters, alignment arrays, geodolite, and GPS. They show that horizontal surface displacements increase from a few mm/yr at either end to a maximum of up to ~34 mm/yr in the central portion. They also indicate some discrepancies in rate estimates, with the range being as high as 10 mm/yr at some places along the fault. This variation is thought to be a result of the different geodetic techniques used and of measurements being made at variable distances from the fault. An interferometric stack of 12 interferograms for the period 1992-2001 shows the spatial variation of creep that occurs within a narrow (<2 km) zone close to the fault trace. The creep rate varies spatially along the fault but also in time. Aseismic slip on the CSAF shows several kinds of time dependence. Shallow slip, as measured by surface measurements across the narrow creeping zone, occurs partly as ongoing steady creep, along with brief episodes of slip ranging from millimeters to centimeters. Creep rates along the San Juan Bautista segment increased after the 1989 Loma Prieta earthquake, and slow slip transients of varying duration and magnitude occurred in both transition segments. The main focus of this work is to use the SBAS technique to identify spatial and temporal variations of creep on the CSAF. We will present time series of line-of-sight (LOS) displacements derived from SAR data acquired by the ASAR instrument, on board the ENVISAT satellite, between 2003 and 2009. For each coherent pixel of the radar images we compute time-dependent surface displacements as well as the average LOS deformation rate.
We compare our results with characteristic repeating microearthquakes that

  5. Analysis in ultrasmall volumes: microdispensing of picoliter droplets and analysis without protection from evaporation.

    PubMed

    Neugebauer, Sebastian; Evans, Stephanie R; Aguilar, Zoraida P; Mosbach, Marcus; Fritsch, Ingrid; Schuhmann, Wolfgang

    2004-01-15

    A new approach is reported for analysis of ultrasmall volumes. It takes advantage of the versatile positioning of a dispenser to shoot approximately 150-pL droplets of liquid onto a specific location of a substrate where analysis is performed rapidly, in a fraction of the time that it takes for the droplet to evaporate. In this report, the site where the liquid is dispensed carries out fast-scan cyclic voltammetry (FSCV), although the detection method does not need to be restricted to electrochemistry. The FSCV is performed at a microcavity having individually addressable gold electrodes, where one serves as working electrode and another as counter/pseudoreference electrode. Five or six droplets of 10 mM [Ru(NH₃)₆]Cl₃ in 0.1 M KCl were dispensed and allowed to dry, followed by redissolution of the redox species and electrolyte with one or five droplets of water and immediate FSCV, demonstrating the ability to easily concentrate a sample and the reproducibility of redissolution, respectively. Because this approach does not integrate detection with microfluidics on the same chip, it simplifies fabrication of devices for analysis of ultrasmall volumes. It may be useful for single-step and multistep sample preparation, analyses, and bioassays in microarray formats if dispensing and changing of solutions are automated. However, care must be taken to avoid factors that affect the aim of the dispenser, such as drafts and clogging of the nozzle. PMID:14719897

  6. Risk factors for neovascular glaucoma after carbon ion radiotherapy of choroidal melanoma using dose-volume histogram analysis

    SciTech Connect

    Hirasawa, Naoki . E-mail: naoki_h@nirs.go.jp; Tsuji, Hiroshi; Ishikawa, Hitoshi; Koyama-Ito, Hiroko; Kamada, Tadashi; Mizoe, Jun-Etsu; Ito, Yoshiyuki; Naganawa, Shinji; Ohnishi, Yoshitaka; Tsujii, Hirohiko

    2007-02-01

    Purpose: To determine the risk factors for neovascular glaucoma (NVG) after carbon ion radiotherapy (C-ion RT) of choroidal melanoma. Methods and Materials: A total of 55 patients with choroidal melanoma were treated between 2001 and 2005 with C-ion RT based on computed tomography treatment planning. All patients had a tumor of large size or one located close to the optic disk. Univariate and multivariate analyses were performed to identify the risk factors of NVG for the following parameters: gender, age, dose-volumes of the iris-ciliary body and the wall of the eyeball, and irradiation of the optic disk (ODI). Results: Neovascular glaucoma occurred in 23 patients and the 3-year cumulative NVG rate was 42.6 ± 6.8% (standard error), but enucleation from NVG was performed in only three eyes. Multivariate analysis revealed that the significant risk factors for NVG were V50(IC) (the volume of the iris-ciliary body irradiated to ≥50 GyE) (p = 0.002) and ODI (p = 0.036). The 3-year NVG rates for patients with V50(IC) ≥0.127 mL and those with V50(IC) <0.127 mL were 71.4 ± 8.5% and 11.5 ± 6.3%, respectively. The corresponding rates for the patients with and without ODI were 62.9 ± 10.4% and 28.4 ± 8.0%, respectively. Conclusion: Dose-volume histogram analysis with computed tomography indicated that V50(IC) and ODI were independent risk factors for NVG. An irradiation system that can reduce the dose to both the anterior segment and the optic disk might be worth adopting to investigate whether the incidence of NVG can be decreased.
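
    As a concrete illustration of the dose-volume quantity behind the multivariate result, the sketch below computes a V50-style metric (volume receiving at least 50 GyE) from a per-voxel dose array; the dose values and voxel volume are illustrative assumptions, and the 0.127 mL cutoff is simply reused from the abstract.

```python
import numpy as np

def v_dose(dose_gye, voxel_volume_ml, threshold=50.0):
    """Volume (mL) of a structure receiving at least `threshold` GyE,
    computed by counting voxels at or above the threshold."""
    dose = np.asarray(dose_gye, dtype=float)
    return float(np.count_nonzero(dose >= threshold)) * voxel_volume_ml

# hypothetical per-voxel doses for an iris-ciliary body contour
doses = np.array([55.0, 60.2, 49.9, 12.0, 51.5])
v50_ic = v_dose(doses, voxel_volume_ml=0.05)   # 3 voxels * 0.05 mL
high_risk = v50_ic >= 0.127                    # cutoff from the abstract
```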

  7. Reliability and reproducibility of macular segmentation using a custom-built optical coherence tomography retinal image analysis software

    NASA Astrophysics Data System (ADS)

    Cabrera Debuc, Delia; Somfai, Gábor Márk; Ranganathan, Sudarshan; Tátrai, Erika; Ferencz, Mária; Puliafito, Carmen A.

    2009-11-01

    We determine the reliability and reproducibility of retinal thickness measurements with a custom-built OCT retinal image analysis software (OCTRIMA). Ten eyes of five healthy subjects undergo repeated standard macular thickness map scan sessions by two experienced examiners using a Stratus OCT device. Automatic/semi-automatic thickness quantification of the macula and intraretinal layers is performed using OCTRIMA software. Intraobserver, interobserver, and intervisit repeatability and reproducibility coefficients, and intraclass correlation coefficients (ICCs) per scan are calculated. Intraobserver, interobserver, and intervisit variability combined account for less than 5% of total variability for the total retinal thickness measurements and less than 7% for the intraretinal layers, except the outer segment/retinal pigment epithelium (RPE) junction. There is no significant difference between scans acquired by different observers or during different visits. The ICCs obtained for the intraobserver and intervisit variability tests are greater than 0.75 for the total retina and all intraretinal layers, except for the inner nuclear layer (intraobserver and interobserver tests) and the outer plexiform layer (intraobserver, interobserver, and intervisit tests). Our results indicate that thickness measurements for the total retina and all intraretinal layers (except the outer segment/RPE junction) performed using OCTRIMA are highly repeatable and reproducible.
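
    The repeatability criterion used above (ICC greater than 0.75) can be sketched numerically. Below is a minimal one-way random-effects ICC(1,1) applied to a hypothetical subjects-by-observers thickness matrix; the data and noise levels are assumptions, not OCTRIMA output.

```python
import numpy as np

def icc_oneway(x):
    """One-way random-effects ICC(1,1) for an (n_subjects, k_raters)
    matrix: (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    msb = k * ((x.mean(axis=1) - x.mean()) ** 2).sum() / (n - 1)
    msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(42)
true_thickness = rng.normal(250.0, 20.0, size=(30, 1))          # per-subject truth (um)
ratings = true_thickness + rng.normal(0.0, 2.0, size=(30, 2))   # two observers, small error
icc = icc_oneway(ratings)   # small measurement noise -> ICC close to 1
```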

  8. A spherical harmonics intensity model for 3D segmentation and 3D shape analysis of heterochromatin foci.

    PubMed

    Eck, Simon; Wörz, Stefan; Müller-Ott, Katharina; Hahn, Matthias; Biesdorf, Andreas; Schotta, Gunnar; Rippe, Karsten; Rohr, Karl

    2016-08-01

    The genome is partitioned into regions of euchromatin and heterochromatin. The organization of heterochromatin is important for the regulation of cellular processes such as chromosome segregation and gene silencing, and their misregulation is linked to cancer and other diseases. We present a model-based approach for automatic 3D segmentation and 3D shape analysis of heterochromatin foci from 3D confocal light microscopy images. Our approach employs a novel 3D intensity model based on spherical harmonics, which analytically describes the shape and intensities of the foci. The model parameters are determined by fitting the model to the image intensities using least-squares minimization. To characterize the 3D shape of the foci, we exploit the computed spherical harmonics coefficients and determine a shape descriptor. We applied our approach to 3D synthetic image data as well as real 3D static and real 3D time-lapse microscopy images, and compared the performance with that of previous approaches. It turned out that our approach yields accurate 3D segmentation results and performs better than previous approaches. We also show that our approach can be used for quantifying 3D shape differences of heterochromatin foci. PMID:27037463

  9. A review of heart chamber segmentation for structural and functional analysis using cardiac magnetic resonance imaging.

    PubMed

    Peng, Peng; Lekadir, Karim; Gooya, Ali; Shao, Ling; Petersen, Steffen E; Frangi, Alejandro F

    2016-04-01

    Cardiovascular magnetic resonance (CMR) has become a key imaging modality in clinical cardiology practice due to its unique capabilities for non-invasive imaging of the cardiac chambers and great vessels. A wide range of CMR sequences have been developed to assess various aspects of cardiac structure and function, and significant advances have also been made in terms of imaging quality and acquisition times. Substantial research has been dedicated to the development of global and regional quantitative CMR indices that help distinguish between health and pathology. The goal of this review paper is to discuss the structural and functional CMR indices that have been proposed thus far for clinical assessment of the cardiac chambers. We include index definitions, the requirements for the calculations, exemplar applications in cardiovascular diseases, and the corresponding normal ranges. Furthermore, we review the most recent state-of-the-art techniques for the automatic segmentation of the cardiac boundaries, which are necessary for the calculation of the CMR indices. Finally, we provide a detailed discussion of the existing literature and of the future challenges that need to be addressed to enable a more robust and comprehensive assessment of the cardiac chambers in clinical practice. PMID:26811173

  10. Comparative analysis of the distribution of segmented filamentous bacteria in humans, mice and chickens.

    PubMed

    Yin, Yeshi; Wang, Yu; Zhu, Liying; Liu, Wei; Liao, Ningbo; Jiang, Mizu; Zhu, Baoli; Yu, Hongwei D; Xiang, Charlie; Wang, Xin

    2013-03-01

    Segmented filamentous bacteria (SFB) are indigenous gut commensal bacteria. They are commonly detected in the gastrointestinal tracts of both vertebrates and invertebrates. Despite the significant role they have in the modulation of the development of host immune systems, little information exists regarding the presence of SFB in humans. The aim of this study was to investigate the distribution and diversity of SFB in humans and to determine their phylogenetic relationships with their hosts. Gut contents from 251 humans, 92 mice and 72 chickens were collected for bacterial genomic DNA extraction and subjected to SFB 16S rRNA-specific PCR detection. The results showed SFB colonization to be age-dependent in humans, with the majority of individuals colonized within the first 2 years of life, but this colonization disappeared by the age of 3 years. Results of 16S rRNA sequencing showed that multiple operational taxonomic units of SFB could exist in the same individuals. Cross-species comparison among human, mouse and chicken samples demonstrated that each host possessed an exclusive predominant SFB sequence. In summary, our results showed that SFB display host specificity, and SFB colonization, which occurs early in human life, declines in an age-dependent manner. PMID:23151642

  11. Who Will More Likely Buy PHEV: A Detailed Market Segmentation Analysis

    SciTech Connect

    Lin, Zhenhong; Greene, David L

    2010-01-01

    Understanding the diverse PHEV purchase behaviors among prospective new car buyers is key for designing efficient and effective policies for promoting new energy vehicle technologies. The ORNL MA3T model developed for the U.S. Department of Energy is described and used to project PHEV purchase probabilities by different consumers. MA3T disaggregates the U.S. household vehicle market into 1458 consumer segments based on region, residential area, driver type, technology attitude, home charging availability and work charging availability, and is calibrated to the EIA's Annual Energy Outlook. Simulation results from MA3T are used to identify the more likely PHEV buyers and provide explanations. It is observed that consumers who have home charging, drive more frequently and live in urban areas are more likely to buy a PHEV. Early adopters are projected to be the more likely PHEV buyers in the early market, but the PHEV purchase probability of late-majority consumers can increase over time as the PHEV gradually becomes a familiar product.

  12. Segmentation and Tracking of Adherens Junctions in 3D for the Analysis of Epithelial Tissue Morphogenesis

    PubMed Central

    Cilla, Rodrigo; Mechery, Vinodh; Hernandez de Madrid, Beatriz; Del Signore, Steven; Dotu, Ivan; Hatini, Victor

    2015-01-01

    Epithelial morphogenesis generates the shape of tissues, organs and embryos and is fundamental for their proper function. It is a dynamic process that occurs at multiple spatial scales, from macromolecular dynamics, to cell deformations, mitosis and apoptosis, to coordinated cell rearrangements that lead to global changes of tissue shape. Using time-lapse imaging, it is possible to observe these events at a system level. However, to investigate morphogenetic events it is necessary to develop computational tools to extract quantitative information from the time-lapse data. Toward this goal, we developed an image-based computational pipeline to preprocess, segment and track epithelial cells in 4D confocal microscopy data. The computational pipeline we developed, for the first time, detects the adherens junctions of epithelial cells in 3D, without the need to first detect cell nuclei. We accentuate and detect cell outlines in a series of steps, symbolically describe the cells and their connectivity, and employ this information to track the cells. We validated the performance of the pipeline for its ability to detect vertices and cell-cell contacts, track cells, and identify mitosis and apoptosis in surface epithelia of Drosophila imaginal discs. We demonstrate the utility of the pipeline to extract key quantitative features of cell behavior with which to elucidate the dynamics and biomechanical control of epithelial tissue morphogenesis. We have made our methods and data available as an open-source multiplatform software tool called TTT (http://github.com/morganrcu/TTT). PMID:25884654

  13. Asymmetry analysis of the arm segments during forward handspring on floor.

    PubMed

    Exell, Timothy A; Robinson, Gemma; Irwin, Gareth

    2016-08-01

    Asymmetry in gymnastics underpins successful performance and may also have implications as an injury mechanism; therefore, understanding of this concept could be useful for coaches and clinicians. The aim of this study was to examine kinematic and external kinetic asymmetry of the arm segments during the contact phase of a fundamental skill, the forward handspring on floor. Using a repeated single-subject design, six female national elite gymnasts (age: 19 ± 1.5 years, mass: 58.64 ± 3.72 kg, height: 1.62 ± 0.41 m) each performed 15 forward handsprings while synchronised 3D kinematic and kinetic data were collected. Asymmetry between the lead- and non-lead-side arms was quantified during each trial. Significant kinetic asymmetry was observed for all gymnasts (p < 0.005), with the direction of the asymmetry being related to the lead leg. All gymnasts displayed kinetic asymmetry for ground reaction force. Kinematic asymmetry was present for more gymnasts at the shoulder than at the distal joints. These findings provide useful information for coaching gymnastics skills, which may subjectively appear to be symmetrical. The observed asymmetry has both performance and injury implications. PMID:26625144

  14. 3D shape descriptors for face segmentation and fiducial points detection: an anatomical-based analysis

    NASA Astrophysics Data System (ADS)

    Salazar, Augusto E.; Cerón, Alexander; Prieto, Flavio A.

    2011-03-01

    The behavior of nine 3D shape descriptors, computed on the surface of 3D face models, is studied. The set of descriptors includes six curvature-based ones, SPIN images, folded SPIN images, and fingerprints. Instead of defining clusters of vertices based on the value of a given primitive surface feature, a face template composed of 28 anatomical regions is used to segment the models and to extract the locations of different landmarks and fiducial points. Vertices are grouped by region, by region boundaries, and by subsampled versions of them. The aim of this study is to analyze the discriminant capacity of each descriptor to characterize regions and to identify key points on the facial surface. The experiment includes testing with data from neutral faces and faces showing expressions. Also, in order to assess the usefulness of the bending-invariant canonical form (BICF) for handling variations due to facial expressions, the descriptors are computed both directly from the surface and from its BICF. The values, distributions, and relevance indexes of each set of vertices were analyzed.

  15. Differential white cell counts by frequency distribution analysis of cell volumes.

    PubMed

    Hughes-Jones, N C; Norley, I; Young, J M; England, J M

    1974-08-01

    Absolute neutrophil and lymphocyte counts on peripheral blood can be made by analysis of the output from a Coulter particle counter, utilizing the difference in relative cell volume between these two types of cell. A comparison has been made between the results obtained by volume analysis and those obtained by standard microscopical techniques in 10 normal people and 45 patients. The absolute neutrophil count obtained by volume analysis agreed well with values obtained by microscopy; the lymphocyte count did not give such good agreement, since the smaller number of cells counted gave rise to larger sampling errors. The method of volume analysis is suitable for the assessment of absolute neutrophil counts for clinical use. PMID:4420188
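
    The volume-threshold idea can be sketched as follows: lymphocytes and neutrophils form two modes in the Coulter volume histogram, so a cut at the valley between them yields absolute counts. The modal volumes, spreads, and the 230 fL threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def split_counts(volumes_fl, threshold_fl=230.0):
    """Classify cells as lymphocytes (small) or neutrophils (large) by
    volume; in practice the threshold sits at the valley between the two
    modes of the measured volume distribution."""
    v = np.asarray(volumes_fl)
    lymphs = int(np.count_nonzero(v < threshold_fl))
    neuts = int(np.count_nonzero(v >= threshold_fl))
    return lymphs, neuts

rng = np.random.default_rng(0)
vols = np.concatenate([rng.normal(180.0, 20.0, 300),    # lymphocyte-like mode
                       rng.normal(300.0, 30.0, 700)])   # neutrophil-like mode
l, n = split_counts(vols)
```

    The overlap of the two modes is also where the paper's larger lymphocyte sampling error comes from: cells near the valley are the ones most easily misclassified.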

  16. Synfuel program analysis. Volume 2: VENVAL users manual

    NASA Astrophysics Data System (ADS)

    Muddiman, J. B.; Whelan, J. W.

    1980-07-01

    This volume is intended for program analysts and is a users manual for the VENVAL model. It contains specific explanations as to input data requirements and programming procedures for the use of this model. VENVAL is a generalized computer program to aid in evaluation of prospective private sector production ventures. The program can project interrelated values of installed capacity, production, sales revenue, operating costs, depreciation, investment, debt, earnings, taxes, return on investment, depletion, and cash flow measures. It can also compute related public sector and other external costs and revenues if unit costs are furnished.

  17. Corpus Callosum Area and Brain Volume in Autism Spectrum Disorder: Quantitative Analysis of Structural MRI from the ABIDE Database

    ERIC Educational Resources Information Center

    Kucharsky Hiess, R.; Alter, R.; Sojoudi, S.; Ardekani, B. A.; Kuzniecky, R.; Pardoe, H. R.

    2015-01-01

    Reduced corpus callosum area and increased brain volume are two commonly reported findings in autism spectrum disorder (ASD). We investigated these two correlates in ASD and healthy controls using T1-weighted MRI scans from the Autism Brain Imaging Data Exchange (ABIDE). Automated methods were used to segment the corpus callosum and intracranial…

  18. A registration-based segmentation method with application to adiposity analysis of mice microCT images

    NASA Astrophysics Data System (ADS)

    Bai, Bing; Joshi, Anand; Brandhorst, Sebastian; Longo, Valter D.; Conti, Peter S.; Leahy, Richard M.

    2014-04-01

    Obesity is a global health problem, particularly in the U.S., where one third of adults are obese. A reliable and accurate method of quantifying obesity is necessary. Visceral adipose tissue (VAT) and subcutaneous adipose tissue (SAT) are two measures of obesity that reflect different associated health risks, but accurate measurements in humans or rodent models are difficult. In this paper we present an automatic, registration-based segmentation method for mouse adiposity studies using microCT images. We co-register the subject CT image and a mouse CT atlas. Our method is based on surface matching of the microCT image and an atlas. Surface-based elastic volume warping is used to match the internal anatomy. We acquired a whole body scan of a C57BL6/J mouse injected with contrast agent using microCT and created a whole body mouse atlas by manually delineating the boundaries of the mouse and major organs. For method verification, we scanned a C57BL6/J mouse from the base of the skull to the distal tibia. We registered the obtained mouse CT image to our atlas. Preliminary results show that we can warp the atlas image to match the posture and shape of the subject CT image, which has significant differences from the atlas. We plan to use this software tool in longitudinal obesity studies using mouse models.

  19. Segmented combustor

    NASA Technical Reports Server (NTRS)

    Halila, Ely E. (Inventor)

    1994-01-01

    A combustor liner segment includes a panel having four sidewalls forming a rectangular outer perimeter. A plurality of integral supporting lugs are disposed substantially perpendicularly to the panel and extend from respective ones of the four sidewalls. A plurality of integral bosses are disposed substantially perpendicularly to the panel and extend from respective ones of the four sidewalls, with the bosses being shorter than the lugs. In one embodiment, the lugs extend through supporting holes in an annular frame for mounting the liner segments thereto, with the bosses abutting the frame for maintaining a predetermined spacing therefrom.

  20. The power-proportion method for intracranial volume correction in volumetric imaging analysis

    PubMed Central

    Liu, Dawei; Johnson, Hans J.; Long, Jeffrey D.; Magnotta, Vincent A.; Paulsen, Jane S.

    2014-01-01

    In volumetric brain imaging analysis, volumes of brain structures are typically assumed to be proportional or linearly related to intracranial volume (ICV). However, evidence abounds that many brain structures have power law relationships with ICV. To take this relationship into account in volumetric imaging analysis, we propose a power law based method—the power-proportion method—for ICV correction. The performance of the new method is demonstrated using data from the PREDICT-HD study. PMID:25414635
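
    The power-proportion correction can be sketched in a few lines: fit the exponent of the power law in log-log space, then divide each structure volume by ICV raised to that exponent rather than by ICV itself. The data below are synthetic, and the exponent 0.7 is purely illustrative, not a value from the PREDICT-HD study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort: structure volume follows a power law of ICV, V = a * ICV**b.
icv = rng.uniform(1200.0, 1800.0, size=200)            # intracranial volume (cm^3)
vol = 3.5 * icv**0.7 * rng.lognormal(0.0, 0.02, size=200)

# Estimate the exponent b by least squares in log-log space:
#   log V = log a + b * log ICV
b_hat, log_a_hat = np.polyfit(np.log(icv), np.log(vol), 1)

# Power-proportion correction: divide by ICV**b_hat rather than by ICV itself.
vol_corrected = vol / icv**b_hat

# The corrected volumes should show no residual power-law dependence on ICV.
residual_slope, _ = np.polyfit(np.log(icv), np.log(vol_corrected), 1)
```

    After correction, the residual log-log slope against ICV is essentially zero, which is the property a proportion- or linear-based correction fails to deliver when the true relationship is a power law.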

  1. Evaluation of automated brain MR image segmentation and volumetry methods.

    PubMed

    Klauschen, Frederick; Goldman, Aaron; Barra, Vincent; Meyer-Lindenberg, Andreas; Lundervold, Arvid

    2009-04-01

    We compare three widely used brain volumetry methods available in the software packages FSL, SPM5, and FreeSurfer and evaluate their performance using simulated and real MR brain data sets. We analyze the accuracy of gray and white matter volume measurements and their robustness against changes in image quality using the BrainWeb MRI database. These images are based on "gold-standard" reference brain templates, which allows us to assess both between-segmenter (same data set, different method) and within-segmenter (same method, varying image quality) comparability; for both we find pronounced variations in segmentation results for gray and white matter volumes. The calculated volumes deviate by up to more than 10% from the reference values for gray and white matter, depending on method and image quality. Sensitivity was best for SPM5; volumetric accuracy for gray and white matter was similar in SPM5 and FSL and better than in FreeSurfer. For BrainWeb data of constant image quality, FSL showed the highest stability for white matter (<5%) and FreeSurfer (6.2%) for gray matter. Between-segmenter comparisons show discrepancies of up to more than 20% for the simulated data and 24% on average for the real data sets, whereas within-method performance analysis uncovered volume differences of up to more than 15%. Since these discrepancies reach the same order of magnitude as volume changes observed in disease, they limit the usability of the segmentation methods for following volume changes in individual patients over time and should be taken into account when planning and analyzing brain volume studies. PMID:18537111
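
    The accuracy and stability figures quoted above are percentage deviations of measured volumes from reference values. A minimal sketch of these two metrics, using invented volumes for illustration (the numbers below are not results from the study):

```python
import numpy as np

# Hypothetical gray/white matter volumes (mL) from three segmentation tools
# and a "gold-standard" reference, in the spirit of the BrainWeb comparison.
reference = {"gray": 900.0, "white": 680.0}
measured = {
    "FSL":        {"gray": 870.0, "white": 700.0},
    "SPM5":       {"gray": 930.0, "white": 665.0},
    "FreeSurfer": {"gray": 990.0, "white": 620.0},
}

def pct_deviation(vol, ref):
    """Signed deviation from the reference volume, in percent."""
    return 100.0 * (vol - ref) / ref

# Within-tool accuracy: deviation of each tool from the reference.
accuracy = {tool: {tissue: round(pct_deviation(vols[tissue], reference[tissue]), 1)
                   for tissue in reference}
            for tool, vols in measured.items()}

# Between-tool discrepancy: largest spread across tools, per tissue,
# expressed relative to the reference volume.
spread = {tissue: round(100.0 * (max(v[tissue] for v in measured.values())
                                 - min(v[tissue] for v in measured.values()))
                        / reference[tissue], 1)
          for tissue in reference}
```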

  2. The effect of lead selection on traditional and heart rate-adjusted ST segment analysis in the detection of coronary artery disease during exercise testing.

    PubMed

    Viik, J; Lehtinen, R; Turjanmaa, V; Niemelä, K; Malmivuo, J

    1997-09-01

    Several methods of heart rate-adjusted ST segment (ST/HR) analysis have been proposed to improve the diagnostic accuracy of exercise electrocardiography in identifying coronary artery disease, compared with traditional ST segment analysis. However, no comprehensive lead-by-lead comparison of these methods across all 12 electrocardiographic leads has been reported. This article compares the diagnostic performance of ST/HR hysteresis, the ST/HR index, ST segment depression 3 minutes after recovery from exercise, and ST segment depression at peak exercise in a study population of 128 patients with angiographically proven coronary artery disease and 189 patients with a low likelihood of the disease. Each method was determined in every lead of the Mason-Likar modification of the standard 12-lead exercise electrocardiogram for each patient. ST/HR hysteresis, the ST/HR index, ST segment depression 3 minutes after recovery from exercise, and ST segment depression at peak exercise achieved more than 85% area under the receiver-operating characteristic curve in nine, none, three, and one of the 12 standard leads, respectively. The diagnostic performance of ST/HR hysteresis was significantly superior in each lead except leads aVL and V1. Examination of individual leads for each method revealed the high diagnostic performance of leads I and -aVR, indicating that the importance of these leads has been undervalued. In conclusion, the results indicate that when traditional ST segment analysis is used to detect coronary artery disease, more attention should be paid to the leads chosen for analysis, and lead-specific cut points should be applied. ST/HR hysteresis, on the other hand, which integrates the ST/HR depression of the exercise and recovery phases, seems relatively insensitive to lead selection and significantly increases the diagnostic performance of exercise electrocardiography in the detection of coronary artery disease.
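
    The area under the receiver-operating characteristic curve used to rank the leads can be computed directly from two groups of scores via the Mann-Whitney statistic. A minimal sketch with synthetic scores (the group sizes mirror the study's 128 patients and 189 low-likelihood subjects, but the score distributions are invented):

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney probability that a randomly chosen positive
    (diseased) score exceeds a randomly chosen negative one; ties count 1/2."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    greater = (pos[:, None] > neg[None, :]).sum()
    equal = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * equal) / (pos.size * neg.size)

rng = np.random.default_rng(1)
# Invented ST/HR-style scores, higher on average in the disease group.
cad = rng.normal(1.5, 1.0, size=128)             # patients with proven disease
low_likelihood = rng.normal(0.0, 1.0, size=189)  # low-likelihood group
auc = roc_auc(cad, low_likelihood)
```

    A lead whose scores separate the two groups well, as ST/HR hysteresis did in nine leads, yields an AUC well above 0.85; an uninformative lead sits near 0.5.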

  3. Evaluation of the Field Test of Project Information Packages: Volume III--Resource Cost Analysis.

    ERIC Educational Resources Information Center

    Al-Salam, Nabeel; And Others

    The third of three volumes evaluating the first-year field test of the Project Information Packages (PIPs) provides a cost analysis study as a key element of the total evaluation. The resource approach to cost analysis is explained, and the specific resource methodology used in the main cost analysis of the 19 PIP field-test projects is detailed. The…

  4. Estimating temperature-dependent anisotropic hydrogen displacements with the invariom database and a new segmented rigid-body analysis program

    PubMed Central

    Lübben, Jens; Bourhis, Luc J.; Dittrich, Birger

    2015-01-01

    Invariom partitioning and notation are used to estimate anisotropic hydrogen displacements for incorporation in crystallographic refinement models. Optimized structures of the generalized invariom database and their frequency computations provide the information required: frequencies are converted to internal atomic displacements and combined with the results of a TLS (translation–libration–screw) fit of experimental non-hydrogen anisotropic displacement parameters to estimate those of H atoms. Comparison with TLS+ONIOM and neutron diffraction results for four example structures where high-resolution X-ray and neutron data are available show that electron density transferability rules established in the invariom approach are also suitable for streamlining the transfer of atomic vibrations. A new segmented-body TLS analysis program called APD-Toolkit has been coded to overcome technical limitations of the established program THMA. The influence of incorporating hydrogen anisotropic displacement parameters on conventional refinement is assessed. PMID:26664341

  5. STS-1 operational flight profile. Volume 6: Abort analysis

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The abort analysis for the cycle 3 Operational Flight Profile (OFP) for Space Transportation System Flight 1 (STS-1) is defined, superseding the abort analysis previously presented. Included are the flight description; an abort analysis summary; flight design ground rules and constraints; initialization information; a general abort description and results; abort solid rocket booster and external tank separation and disposal results; abort monitoring displays, with discussion of both ground and onboard trajectory monitoring; the abort initialization load summary for the onboard computer; and a list of the key abort powered-flight dispersion analyses.

  6. Conjoint Analysis of Study Abroad Preferences: Key Attributes, Segments and Implications for Increasing Student Participation

    ERIC Educational Resources Information Center

    Garver, Michael S.; Divine, Richard L.

    2008-01-01

    An adaptive conjoint analysis was performed on the study abroad preferences of a sample of undergraduate college students. The results indicate that trip location, cost, and time spent abroad are the three most important determinants of student preference for different study abroad trip scenarios. The analysis also uncovered four different study…

  7. Measurement and analysis of grain boundary grooving by volume diffusion

    NASA Technical Reports Server (NTRS)

    Hardy, S. C.; Mcfadden, G. B.; Coriell, S. R.; Voorhees, P. W.; Sekerka, R. F.

    1991-01-01

    Experimental measurements of isothermal grain boundary grooving by volume diffusion are carried out for Sn bicrystals in the Sn-Pb system near the eutectic temperature. The dimensions of the groove increase with a temporal exponent of 1/3, and measurement of the associated rate constant allows the determination of the product of the liquid diffusion coefficient D and the capillarity length Gamma associated with the interfacial free energy of the crystal-melt interface. The small-slope theory of Mullins is generalized to the entire range of dihedral angles by using a boundary integral formulation of the associated free boundary problem, and excellent agreement with experimental groove shapes is obtained. By using the diffusivity measured by Jordon and Hunt, the present measured values of Gamma are found to agree to within 5 percent with the values obtained from experiments by Gunduz and Hunt on grain boundary grooving in a temperature gradient.
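
    The t^(1/3) growth law and the extraction of a rate constant can be illustrated with a short fit. The data below are synthetic, and the rate constant and units are placeholders, not measured values from the Sn-Pb experiments.

```python
import numpy as np

rng = np.random.default_rng(2)

# Mullins-type isothermal grooving: groove dimensions grow as w(t) = k * t**(1/3).
k_true = 0.8                         # placeholder rate constant (um / s**(1/3))
t = np.linspace(60.0, 3600.0, 50)    # observation times (s)
w = k_true * t**(1.0 / 3.0) * (1.0 + rng.normal(0.0, 0.01, size=t.size))

# A log-log fit recovers both the temporal exponent and the rate constant,
# whose value (in the experiments) determines the product D * Gamma.
exponent, log_k = np.polyfit(np.log(t), np.log(w), 1)
k_est = np.exp(log_k)
```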

  8. Image-based segmentation for characterization and quantitative analysis of the spinal cord injuries by using diffusion patterns

    NASA Astrophysics Data System (ADS)

    Hannula, Markus; Olubamiji, Adeola; Kunttu, Iivari; Dastidar, Prasun; Soimakallio, Seppo; Öhman, Juha; Hyttinen, Jari

    2011-03-01

    In medical imaging, magnetic resonance imaging sequences can provide information about damaged brain structure and neuronal connections. The sequences can be analyzed to form 3D models of the geometry and further extended with functional information about the neurons of a specific brain area to develop functional models. Such modeling offers a tool for characterizing brain trauma from patient images and thus for tailoring the properties of transplanted cells. In this paper, we present image-based methods for the analysis of human spinal cord injuries. We use three-dimensional diffusion tensor imaging, an effective method for analyzing the diffusion behavior of water molecules; the idea is to study how an injury affects the tissues and how this can be made visible in imaging. We present a spinal cord study of two subjects, one healthy volunteer and one spinal cord injury patient. We performed segmentation and volumetric analysis to detect anatomical differences, and analyzed functional differences using diffusion tensor imaging. The obtained results show that this kind of analysis can find differences in spinal cord anatomy and function.

  9. Space shuttle navigation analysis. Volume 2: Baseline system navigation

    NASA Technical Reports Server (NTRS)

    Jones, H. L.; Luders, G.; Matchett, G. A.; Rains, R. G.

    1980-01-01

    Studies related to the baseline navigation system for the orbiter are presented. The baseline navigation system studies include a covariance analysis of the Inertial Measurement Unit calibration and alignment procedures, postflight IMU error recovery for the approach and landing phases, on-orbit calibration of IMU instrument biases, and a covariance analysis of entry and prelaunch navigation system performance.

  10. Economic analysis of the space shuttle system, volume 1

    NASA Technical Reports Server (NTRS)

    1972-01-01

    An economic analysis of the space shuttle system is presented. The analysis is based on economic benefits, recurring costs, non-recurring costs, and economic tradeoff functions. The most economic space shuttle configuration is determined on the basis of: (1) the objectives of a reusable space transportation system, (2) the various space transportation systems considered, and (3) alternative space shuttle systems.

  11. Price-volume multifractal analysis and its application in Chinese stock markets

    NASA Astrophysics Data System (ADS)

    Yuan, Ying; Zhuang, Xin-tian; Liu, Zhi-ying

    2012-06-01

    An empirical study of Chinese stock markets is conducted using statistical tools. First, the multifractality of the stock price return series, ri (ri=ln(Pt+1)-ln(Pt)), and the trading volume variation series, vi (vi=ln(Vt+1)-ln(Vt)), is confirmed using multifractal detrended fluctuation analysis. Furthermore, a multifractal detrended cross-correlation analysis between stock price return and trading volume variation in Chinese stock markets shows that the cross relationship between them is also multifractal. Second, the cross-correlation between stock price Pi and trading volume Vi is studied empirically using the cross-correlation function and detrended cross-correlation analysis; both the Shanghai and Shenzhen stock markets show pronounced long-range cross-correlations between stock price and trading volume. Third, a composite index R based on price and trading volume is introduced. Compared with the stock price return series ri and trading volume variation series vi, the R variation series not only retains the characteristics of the original series but also captures the relative correlation between stock price and trading volume. Finally, we analyze the multifractal characteristics of the R variation series before and after three financial events in China (the Price Limits, the Reform of Non-tradable Shares, and the 2008 financial crisis) over the whole sample period to study changes in stock market fluctuation and financial risk. The empirical results verify the validity of R.
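
    The two variation series underlying the analysis are simple first differences of log prices and log volumes. A minimal sketch of their construction on synthetic data (the series are invented; with independent simulated walks the lag-0 cross-correlation is near zero, unlike the pronounced long-range cross-correlations reported for the real markets):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic daily closing prices P_t and trading volumes V_t (geometric walks).
p = 100.0 * np.exp(np.cumsum(rng.normal(0.0, 0.02, size=500)))
v = np.exp(14.0 + np.cumsum(rng.normal(0.0, 0.05, size=500)))

# Series studied in the paper: r_i = ln P_{t+1} - ln P_t and
#                              v_i = ln V_{t+1} - ln V_t.
r = np.diff(np.log(p))
v_var = np.diff(np.log(v))

# Plain lag-0 cross-correlation between the two variation series; the paper
# goes further with detrended (cross-)correlation and multifractal analysis.
rho = float(np.corrcoef(r, v_var)[0, 1])
```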

  12. Three stage level set segmentation of mass core, periphery, and spiculations for automated image analysis of digital mammograms

    NASA Astrophysics Data System (ADS)

    Ball, John Eugene

    In this dissertation, level set methods are employed to segment masses in digital mammographic images and to classify land cover classes in hyperspectral data. For the mammography computer aided diagnosis (CAD) application, level set-based segmentation methods are designed and validated for mass-periphery segmentation, spiculation segmentation, and core segmentation. The proposed periphery segmentation uses the narrowband level set method in conjunction with an adaptive speed function based on a measure of the boundary complexity in the polar domain. The boundary complexity term is shown to be beneficial for delineating challenging masses with ill-defined and irregularly shaped borders. The proposed method is shown to outperform periphery segmentation methods currently reported in the literature. The proposed mass spiculation segmentation uses a generalized form of the Dixon and Taylor Line Operator along with narrowband level sets using a customized speed function. The resulting spiculation features are shown to be very beneficial for classifying the mass as benign or malignant. For example, when using patient age and texture features combined with a maximum likelihood (ML) classifier, the spiculation segmentation method increases the overall accuracy to 92% with 2 false negatives as compared to 87% with 4 false negatives when using periphery segmentation approaches. The proposed mass core segmentation uses the Chan-Vese level set method with a minimal variance criterion. The resulting core features are shown to be effective and comparable to periphery features, and are shown to reduce the number of false negatives in some cases. Most mammographic CAD systems use only a periphery segmentation, so those systems could potentially benefit from core features.
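
    The Chan-Vese formulation minimizes a piecewise-constant two-region energy. The sketch below implements only its data term (alternately estimating the two region means and reassigning pixels to the nearer mean), omitting the level set curvature regularization and the minimal-variance criterion of the dissertation's core method; the test image is synthetic.

```python
import numpy as np

def two_phase_segment(img, n_iter=20):
    """Piecewise-constant two-region segmentation in the Chan-Vese spirit:
    alternately estimate the region means c1/c2 and reassign each pixel to
    the nearer mean. The curvature regularization of the full level set
    formulation is omitted for brevity."""
    mask = img > img.mean()                  # crude initialization
    for _ in range(n_iter):
        c1 = img[mask].mean()                # mean inside the contour
        c2 = img[~mask].mean()               # mean outside the contour
        mask = (img - c1) ** 2 < (img - c2) ** 2
    return mask

# Synthetic "mass": a bright disc on a darker, noisy background.
rng = np.random.default_rng(4)
yy, xx = np.mgrid[0:64, 0:64]
disc = (yy - 32) ** 2 + (xx - 32) ** 2 < 12 ** 2
img = 0.2 + 0.6 * disc + rng.normal(0.0, 0.05, size=(64, 64))

seg = two_phase_segment(img)
overlap = float((seg & disc).sum() / disc.sum())   # fraction of disc recovered
```

    Masses with ill-defined, irregular borders are exactly where this bare data term struggles, which is why the dissertation adds boundary-complexity and regularization terms on top of it.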

  13. Viscous wing theory development. Volume 1: Analysis, method and results

    NASA Technical Reports Server (NTRS)

    Chow, R. R.; Melnik, R. E.; Marconi, F.; Steinhoff, J.

    1986-01-01

    Viscous transonic flows at large Reynolds numbers over 3-D wings were analyzed using a zonal viscid-inviscid interaction approach. A new numerical AFZ scheme was developed in conjunction with the finite volume formulation for the solution of the inviscid full-potential equation. A special far-field asymptotic boundary condition was developed and a second-order artificial viscosity included for an improved inviscid solution methodology. The integral method was used for the laminar/turbulent boundary layer and 3-D viscous wake calculation. The interaction calculation included the coupling conditions of the source flux due to the wing surface boundary layer, the flux jump due to the viscous wake, and the wake curvature effect. A method was also devised incorporating the 2-D trailing edge strong interaction solution for the normal pressure correction near the trailing edge region. A fully automated computer program was developed to perform the proposed method with one scalar version to be used on an IBM-3081 and two vectorized versions on Cray-1 and Cyber-205 computers.

  14. Ceramic component development analysis -- Volume 1. Final report

    SciTech Connect

    Boss, D.E.

    1998-06-09

    The development of advanced filtration media for advanced fossil-fueled power generating systems is a critical step in meeting the performance and emissions requirements for these systems. While porous metal and ceramic candle filters have been available for some time, the next generation of filters will include ceramic-matrix composites (CMCs) (Techniweave/Westinghouse, Babcock and Wilcox (B and W), DuPont Lanxide Composites), intermetallic alloys (Pall Corporation), and alternate filter geometries (CeraMem Separations). The goal of this effort was to perform a cursory review of the manufacturing processes used by five companies developing advanced filters, from the perspective of process repeatability and the ability of their processes to be scaled up to production volumes. Given the brief nature of the on-site reviews, only an overview of the processes and systems could be obtained. Each of the five companies had developed some level of manufacturing and quality assurance documentation, with most leveraging procedures from other products they manufacture. All of the filter manufacturers had a solid understanding of the product development path. Given that these filters are largely developmental, significant additional work is necessary to understand the process-performance relationships and to project manufacturing costs.

  15. SLUDGE TREATMENT PROJECT ALTERNATIVES ANALYSIS SUMMARY REPORT [VOLUME 1

    SciTech Connect

    FREDERICKSON JR; ROURK RJ; HONEYMAN JO; JOHNSON ME; RAYMOND RE

    2009-01-19

    Highly radioactive sludge (containing up to 300,000 curies of actinides and fission products) resulting from the storage of degraded spent nuclear fuel is currently stored in temporary containers located in the 105-K West storage basin near the Columbia River. The background, history, and known characteristics of this sludge are discussed in Section 2 of this report. There are many compelling reasons to remove this sludge from the K-Basin. These reasons are discussed in detail in Section 1, and they include the following: (1) reduce the risk to the public (from a potential release of highly radioactive material as fine respirable particles by airborne or waterborne pathways); (2) reduce the overall risk to the Hanford worker; and (3) reduce the risk to the environment (the K-Basin is situated above a hazardous chemical contaminant plume and hinders remediation of the plume until the sludge is removed). The DOE-RL has stated that a key DOE objective is to remove the sludge from the K-West Basin and River Corridor as soon as possible, which will reduce risks to the environment, allow for remediation of contaminated areas underlying the basins, and support closure of the 100-KR-4 operable unit. The environmental and nuclear safety risks associated with this sludge have resulted in multiple legal and regulatory remedial action decisions, plans, and commitments that are summarized in Table ES-1 and discussed in more detail in Volume 2, Section 9.

  16. An analysis of light-induced admittance changes in rod outer segments

    PubMed Central

    Falk, G.; Fatt, P.

    1973-01-01

    1. Measurements were made of the time course and amplitude of the change in real part of admittance, ΔG, of a suspension of frog rod outer segments, following a flash of light bleaching about 1% of the rhodopsin content of the rods. The measurements, based on the use of a specially designed marginal oscillator, covered the frequency range between 500 Hz and 17 MHz. 2. The components of response, previously described for rods prepared by a method involving exposure to strongly hypertonic sucrose solutions, are present in similar form when rods are isolated and maintained in isotonic solutions made up with equi-osmotic concentrations of NaCl and sucrose or with Na2SO4. 3. Component I, identified as a slowly developing positive ΔG apparent at very low frequencies, is frequency-independent up to the characteristic frequency of admittance for the suspension, fY (about 2 MHz for rods suspended in a solution having the conductivity of Ringer solution), but decreases at still higher frequencies. 4. Component II, identified as a rapidly developing positive ΔG which appears only above a critical frequency about 2·5 decades below fY, increases approximately logarithmically with frequency to reach a limiting amplitude in the region of fY. 5. The amplitude of component II, ΔGII, measured in the region of fY, varies linearly with the conductivity of the suspending medium, Go, under conditions in which the conductivity of the rod interior is also a linear function of the external conductivity. The relation for a flash bleaching 1% of the rhodopsin content of the dark-adapted rod is [Formula: see text] 6. Measurements made on rods suspended in a low-conductivity solution, which has the effect of reducing the conductivity of the rod interior to about one ninth its value for rods suspended in Ringer solution, reveal a decline in component II for frequencies above 8 MHz. 7. To explain the frequency dependence of component II and its dependence on conductivity, it is proposed

  17. Evaluation of atlas based mouse brain segmentation

    NASA Astrophysics Data System (ADS)

    Lee, Joohwi; Jomier, Julien; Aylward, Stephen; Tyszka, Mike; Moy, Sheryl; Lauder, Jean; Styner, Martin

    2009-02-01

    Magnetic resonance imaging for mouse phenotype studies is an important tool for understanding human diseases. In this paper, we present a fully automatic pipeline for morphometric mouse brain analysis. The method is based on atlas-based tissue and regional segmentation originally developed for the human brain. To evaluate our method, we conduct a qualitative and quantitative validation study and compare b-spline and fluid registration methods as components of the pipeline. The validation study includes visual inspection, shape and volumetric measurements, and the stability of the registration methods against various parameter settings in the processing pipeline. The results show that both fluid and b-spline registration work well in the murine setting, but fluid registration is more stable. Additionally, we evaluated our segmentation method by comparing volume differences between Fmr1 FXS mice in the FVB background and the C57BL/6J mouse strain.

  18. Structural analysis of cylindrical thrust chambers, volume 1

    NASA Technical Reports Server (NTRS)

    Armstrong, W. H.

    1979-01-01

    Life predictions of regeneratively cooled rocket thrust chambers are normally derived from classical material fatigue principles. The failures observed in experimental thrust chambers, however, do not appear to be due entirely to material fatigue. The chamber coolant walls in the failed areas exhibit progressive bulging and thinning during cyclic firings until the wall stress finally exceeds the material rupture stress and failure occurs. A preliminary analysis of an oxygen-free high-conductivity (OFHC) copper cylindrical thrust chamber demonstrated that including cumulative cyclic plastic effects enables the observed coolant wall thinout to be predicted. The thinout curve constructed from the reference analysis of 10 firing cycles was extrapolated from the tenth cycle to the 200th cycle. The preliminary OFHC copper chamber 10-cycle analysis was then extended so that the extrapolated thinout curve could be established by performing cyclic analysis of the deformed configurations at 100 and 200 cycles. The original range of extrapolation was thus reduced, and the thinout curve was adjusted using the calculated thinout rates at 100 and 200 cycles. An analysis of the same undeformed chamber model constructed of half-hard Amzirc, to study the effect of material properties on the thinout curve, is included.

  19. Aneurysms of the Middle Cerebral Artery Proximal Segment (M1): Anatomical and Therapeutic Considerations. Analysis of a Series of Prebifurcation Segment Aneurysms

    PubMed Central

    Marques-Sanches, Paulo; Spagnuolo, Edgardo; Martínez, Fernando; Pereda, Pablo; Tarigo, Alejandro; Verdier, Verónica

    2010-01-01

    Aneurysms of the middle cerebral artery represent almost a third of all aneurysms of the anterior sector of the circle of Willis. Among them, those located at its so-called M1 segment (from its origin to the bifurcation) account for between 2% and 7% of all aneurysms. Knowing the anatomy of the M1 segment, as well as of the arterial branches that arise from it, is highly important, since damaging them during dissection or occlusion of an aneurysm may determine the neurological sequelae. The authors of the present work, building on a recent anatomical analysis carried out by one of them (FM), studied the aneurysms of the M1 segment in a series of 1059 aneurysms treated surgically over 25 years. Twenty-three aneurysms were found at this location, representing 2.2% of all operated aneurysms. The cases, the location of the aneurysms, and their relation to the early branches of the middle cerebral artery were studied, as well as the surgical difficulties they pose. A review of the scant literature referring specifically to aneurysms in this topography is also provided. PMID:22028759

  20. Multi-segment analysis of spinal kinematics during sit-to-stand in patients with chronic low back pain.

    PubMed

    Christe, Guillaume; Redhead, Lucy; Legrand, Thomas; Jolles, Brigitte M; Favre, Julien

    2016-07-01

    While alterations in spinal kinematics have frequently been reported in patients with chronic low back pain (CLBP), a better characterization of kinematics during functional activities is needed to improve our understanding of, and therapeutic solutions for, this condition. Recent studies on healthy subjects showed the value of analyzing the spine during the sit-to-stand transition (STST) using multi-segment models, suggesting that additional knowledge could be gained by conducting similar assessments in CLBP patients. The objectives of this study were to characterize three-dimensional kinematics at the lower lumbar (LLS), upper lumbar (ULS), lower thoracic (LTS) and upper thoracic (UTS) joints during STST, and to test the hypothesis that CLBP patients perform this movement with smaller angles and angular velocities than asymptomatic controls. Ten CLBP patients (with minimal to moderate disability) and 11 asymptomatic controls with comparable demographics (52% male, 37.4±5.6 years old, 22.5±2.8 kg/m²) were tested using a three-dimensional camera-based system following previously proposed protocols. Characteristic patterns of movement were identified at the LLS, ULS and UTS joints in the sagittal plane only. At these three joints, the patient group showed significantly smaller sagittal-plane angles and smaller angular velocities than the control group. This indicated a more rigid spine in the patient group and suggested that CLBP rehabilitation could potentially be enhanced by targeting movement deficits in functional activities. The results further support analyzing STST kinematics using a pelvis-lumbar-thoracic model that includes lower and upper lumbar and thoracic segments. PMID:27262182
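
    The outcome measures compared between groups, sagittal-plane angle (range of motion) and angular velocity, can be computed from a sampled joint-angle trace by finite differences. The traces below are synthetic sinusoids chosen only to mimic the reported pattern (smaller excursion in the patient group); the amplitudes and sampling rate are illustrative, not study values.

```python
import numpy as np

fs = 100.0                              # sampling rate (Hz), illustrative
t = np.arange(0.0, 2.0, 1.0 / fs)       # one sit-to-stand transition (s)

# Hypothetical sagittal-plane joint angles (deg): flexion then extension,
# with a smaller excursion in the stiffer CLBP spine.
angle_control = 25.0 * np.sin(np.pi * t / 2.0)
angle_patient = 15.0 * np.sin(np.pi * t / 2.0)

def range_of_motion(angle):
    """Sagittal-plane angular excursion (deg)."""
    return float(angle.max() - angle.min())

def peak_angular_velocity(angle, fs):
    """Peak absolute angular velocity (deg/s) via central differences."""
    return float(np.abs(np.gradient(angle, 1.0 / fs)).max())

rom_c = range_of_motion(angle_control)
rom_p = range_of_motion(angle_patient)
vel_c = peak_angular_velocity(angle_control, fs)
vel_p = peak_angular_velocity(angle_patient, fs)
```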

  1. Finite element analysis of laminated plates and shells, volume 1

    NASA Technical Reports Server (NTRS)

    Seide, P.; Chang, P. N. H.

    1978-01-01

    The finite element method is used to investigate the static behavior of laminated composite flat plates and cylindrical shells. The analysis incorporates the effects of transverse shear deformation in each layer through the assumption that the normals to the undeformed layer midsurface remain straight but need not be normal to the mid-surface after deformation. A digital computer program was developed to perform the required computations. The program includes a very efficient equation solution code which permits the analysis of large size problems. The method is applied to the problem of stretching and bending of a perforated curved plate.

  2. The ACODEA Framework: Developing Segmentation and Classification Schemes for Fully Automatic Analysis of Online Discussions

    ERIC Educational Resources Information Center

    Mu, Jin; Stegmann, Karsten; Mayfield, Elijah; Rose, Carolyn; Fischer, Frank

    2012-01-01

    Research related to online discussions frequently faces the problem of analyzing huge corpora. Natural Language Processing (NLP) technologies may allow automating this analysis. However, the state-of-the-art in machine learning and text mining approaches yields models that do not transfer well between corpora related to different topics. Also,…

  3. [Segmental neurofibromatosis].

    PubMed

    Zulaica, A; Peteiro, C; Pereiro, M; Pereiro Ferreiros, M; Quintas, C; Toribio, J

    1989-01-01

    Four cases of segmental neurofibromatosis (SNF) are reported. SNF is a rare entity considered a localized variant of neurofibromatosis (NF), Riccardi's type V. Two patients were male and two female. The lesions were located on the head in one patient and on the trunk in the other three. No family history or transmission to progeny was observed. The other organs were unaffected. PMID:2502696

  4. Underground Test Area Subproject Phase I Data Analysis Task. Volume VIII - Risk Assessment Documentation Package

    SciTech Connect

    1996-12-01

    Volume VIII of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment for the Nevada Test Site Underground Test Area Subproject contains the risk assessment documentation. Because of the size and complexity of the model area, a considerable quantity of data was collected and analyzed in support of the modeling efforts. The data analysis task was consequently broken into eight subtasks, and descriptions of each subtask's activities are contained in one of the eight volumes that comprise the Phase I Data Analysis Documentation.

  5. Underground Test Area Subproject Phase I Data Analysis Task. Volume IV - Hydrologic Parameter Data Documentation Package

    SciTech Connect

    1996-09-01

    Volume IV of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment for the Nevada Test Site Underground Test Area Subproject contains the hydrologic parameter data. Because of the size and complexity of the model area, a considerable quantity of data was collected and analyzed in support of the modeling efforts. The data analysis task was consequently broken into eight subtasks, and descriptions of each subtask's activities are contained in one of the eight volumes that comprise the Phase I Data Analysis Documentation.

  6. Underground Test Area Subproject Phase I Data Analysis Task. Volume VI - Groundwater Flow Model Documentation Package

    SciTech Connect

    1996-11-01

    Volume VI of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment for the Nevada Test Site Underground Test Area Subproject contains the groundwater flow model data. Because of the size and complexity of the model area, a considerable quantity of data was collected and analyzed in support of the modeling efforts. The data analysis task was consequently broken into eight subtasks, and descriptions of each subtask's activities are contained in one of the eight volumes that comprise the Phase I Data Analysis Documentation.

  7. Underground Test Area Subproject Phase I Data Analysis Task. Volume VII - Tritium Transport Model Documentation Package

    SciTech Connect

    1996-12-01

    Volume VII of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment for the Nevada Test Site Underground Test Area Subproject contains the tritium transport model documentation. Because of the size and complexity of the model area, a considerable quantity of data was collected and analyzed in support of the modeling efforts. The data analysis task was consequently broken into eight subtasks, and descriptions of each subtask's activities are contained in one of the eight volumes that comprise the Phase I Data Analysis Documentation.

  8. Underground Test Area Subproject Phase I Data Analysis Task. Volume II - Potentiometric Data Documentation Package

    SciTech Connect

    1996-12-01

    Volume II of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment for the Nevada Test Site Underground Test Area Subproject contains the potentiometric data. Because of the size and complexity of the model area, a considerable quantity of data was collected and analyzed in support of the modeling efforts. The data analysis task was consequently broken into eight subtasks, and descriptions of each subtask's activities are contained in one of the eight volumes that comprise the Phase I Data Analysis Documentation.

  9. Bivariate analysis of flood peaks and volumes using copulas. An application to the Danube River

    NASA Astrophysics Data System (ADS)

    Papaioannou, George; Bacigal, Tomas; Jeneiova, Katarina; Kohnová, Silvia; Szolgay, Jan; Loukas, Athanasios

    2014-05-01

    Multivariate analysis of flood variables such as flood peaks, volumes, and durations is essential for the design of hydrotechnical projects. Many authors have suggested bivariate distributions for the frequency analysis of flood peaks and volumes, under the assumption that the marginal probability distribution type is the same for both variables. The application of copulas, which are gradually becoming widespread, can overcome this constraint. The selection of appropriate copula types/families is not treated extensively in the literature and remains a challenge in copula analysis. In this study, a bivariate copula analysis using different copula families is carried out on flood peaks and the corresponding volumes along a river. The analysis is based on more than 100 years of daily streamflow data from several gauged stations on the Danube River. The methodology pairs annual maximum flood peaks (AMF) with independent annual maximum volumes of fixed durations of 5, 10, 15, 20, 25, 30, and 60 days. The correlation of the discharge-volume pairs is examined using Kendall's tau. The copula families selected for bivariate modeling of the extracted discharge-volume pairs include the Archimedean, extreme-value, and other families. Copula performance was evaluated using scatterplots of the observed and bootstrapped simulated pairs together with formal goodness-of-fit tests, and the suitability of the copulas was compared statistically. Archimedean copulas (e.g., Frank and Clayton) proved more capable of bivariate flood modeling at the Danube River than the other copula families examined. Overall, the results show that copulas are effective tools for bivariate modeling of the two random variables studied.
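    As an illustration of the workflow sketched in this abstract (and not the authors' actual implementation), the following Python snippet estimates Kendall's tau for hypothetical peak-volume pairs, inverts it to a Clayton copula parameter via the known relation tau = theta / (theta + 2), and simulates dependent pairs from the fitted copula by the conditional-inverse method. Only numpy and scipy are assumed; the sample size and tau value are illustrative.

    ```python
    import numpy as np
    from scipy.stats import kendalltau

    def clayton_theta_from_tau(tau):
        # Clayton copula: tau = theta / (theta + 2)  =>  theta = 2*tau / (1 - tau)
        return 2.0 * tau / (1.0 - tau)

    def sample_clayton(theta, n, rng):
        # Conditional-inverse sampling for the Clayton copula:
        # draw U uniform, then invert the conditional CDF of V given U.
        u = rng.uniform(size=n)
        w = rng.uniform(size=n)
        v = ((w ** (-theta / (theta + 1.0)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)
        return u, v

    rng = np.random.default_rng(42)
    # Suppose an empirical Kendall's tau of 0.5 between AMF peaks
    # and, say, 30-day annual maximum volumes (hypothetical value).
    theta = clayton_theta_from_tau(0.5)        # = 2.0
    u, v = sample_clayton(theta, 5000, rng)
    tau_sim, _ = kendalltau(u, v)              # should be near 0.5
    print(theta, round(tau_sim, 2))
    ```

    In a full analysis one would fit several candidate families this way (or by maximum likelihood), then compare them with bootstrap-based goodness-of-fit tests as the abstract describes.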

  10. Satellite power systems (SPS) concept definition study. Volume 7: SPS program plan and economic analysis, appendixes

    NASA Technical Reports Server (NTRS)

    Hanley, G.

    1978-01-01

    Three appendixes in support of Volume 7 are contained in this document. The three appendixes are: (1) Satellite Power System Work Breakdown Structure Dictionary; (2) SPS Cost Estimating Relationships; and (3) Financial and Operational Concept. Other volumes of the final report that provide additional detail are: Executive Summary; SPS Systems Requirements; SPS Concept Evolution; SPS Point Design Definition; Transportation and Operations Analysis; and SPS Technology Requirements and Verification.

  11. Power plant performance monitoring and improvement: Volume 5, Turbine cycle performance analysis: Interim report

    SciTech Connect

    Crim, H.G. Jr.; Westcott, J.C.; de Mello, R.W.; Brandon, R.E.; Kona, C.; Schmehl, T.G.; Reddington, J.R.

    1987-12-01

    This volume describes advanced instrumentation and computer programs for turbine cycle performance analysis. Unit conditions are displayed on-line. Included are techniques for monitoring the performance of feedwater heaters and the main condenser, procedures for planning turbine maintenance based on an analysis of preoutage testing and performance history, and an overview of the project's computerized data handling and display systems. (DWL)
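    Feedwater-heater performance monitoring of the kind mentioned in this abstract commonly tracks two indicators: terminal temperature difference (TTD, the saturation temperature of the extraction steam minus the feedwater outlet temperature) and drain cooler approach (DCA, the drain temperature minus the feedwater inlet temperature). The report's own procedures are not reproduced here; the sketch below is a minimal, hypothetical illustration of those two standard metrics, with all temperature readings invented.

    ```python
    def heater_ttd(t_sat_extraction, t_feedwater_out):
        """Terminal temperature difference (degrees F).
        A TTD that rises over time suggests tube fouling or plugging."""
        return t_sat_extraction - t_feedwater_out

    def heater_dca(t_drain, t_feedwater_in):
        """Drain cooler approach (degrees F).
        A rising DCA can indicate drain-cooler zone problems or level issues."""
        return t_drain - t_feedwater_in

    # Hypothetical readings from one heater
    print(heater_ttd(215.0, 210.5))  # 4.5
    print(heater_dca(120.0, 110.0))  # 10.0
    ```

    Trending these values against preoutage baselines is one simple way such a monitoring system could flag heaters for maintenance planning.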

  12. Passive solar design handbook. Volume 3: Passive solar design analysis

    NASA Astrophysics Data System (ADS)

    Jones, R. W.; Balcomb, J. D.; Kosiewicz, C. E.; Lazarus, G. S.; McFarland, R. D.; Wray, W. O.

    1982-07-01

    Simple analytical methods concerning the des