Note: This page contains sample records for the topic volume segmentation analysis from Science.gov.
While these samples are representative of the content of Science.gov,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of Science.gov
to obtain the most current and comprehensive results.
Last update: August 15, 2014.
1

Economic Analysis. Volume II. Course Segments 19-34.  

ERIC Educational Resources Information Center

The second volume of the United States Naval Academy's individualized instruction course in economic analysis covers segments 19-34 of the course. Topics in this volume include the national income accounts, the theory of income determination, and the role of fiscal policy in income determination. Other segments of the course, the behavioral…

Sterling Inst., Washington, DC. Educational Technology Center.

2

Automated segmentation and dose-volume analysis with DICOMautomaton  

NASA Astrophysics Data System (ADS)

Purpose: Exploration of historical data for regional organ dose sensitivity is limited by the effort needed to (sub-)segment large numbers of contours. A system has been developed which can rapidly perform autonomous contour sub-segmentation and generic dose-volume computations, substantially reducing the effort required for exploratory analyses. Methods: A contour-centric approach is taken which enables lossless, reversible segmentation and dramatically reduces computation time compared with voxel-centric approaches. Segmentation can be specified on a per-contour, per-organ, or per-patient basis, and can be performed along either an embedded plane or in terms of the contour's bounds (e.g., split an organ into fractional-volume/dose pieces along any 3D unit vector). More complex segmentation techniques are available. Anonymized data from 60 head-and-neck cancer patients were used to compare dose-volume computations with Varian's Eclipse™ (Varian Medical Systems, Inc.). Results: Computed mean doses and dose-volume histograms agree strongly with those from Varian's Eclipse™. Contours which have been segmented can be injected back into patient data permanently and in a Digital Imaging and Communications in Medicine (DICOM)-conforming manner. Lossless segmentation persists across such injection, and remains fully reversible. Conclusions: DICOMautomaton allows researchers to rapidly, accurately, and autonomously segment large amounts of data into intricate structures suitable for analyses of regional organ dose sensitivity.
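The generic dose-volume computation described above reduces, for sampled data, to a cumulative histogram over dose levels. A minimal sketch with hypothetical per-element doses and volumes (this is not DICOMautomaton's contour-centric implementation, just an illustration of the quantity it computes):

```python
import numpy as np

def cumulative_dvh(doses, volumes, bin_width=0.5):
    """Cumulative dose-volume histogram: the fraction of structure
    volume receiving at least each dose level."""
    edges = np.arange(0.0, doses.max() + bin_width, bin_width)
    total = volumes.sum()
    # For each dose threshold, sum the volume of elements at or above it.
    frac = np.array([volumes[doses >= d].sum() / total for d in edges])
    return edges, frac

# Hypothetical per-element doses (Gy) and volumes (cm^3)
doses = np.array([10.0, 20.0, 30.0, 40.0])
vols = np.ones(4)
edges, frac = cumulative_dvh(doses, vols, bin_width=10.0)
```

The mean dose follows from the same samples as the volume-weighted average, `(doses * vols).sum() / vols.sum()`.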

Clark, H.; Thomas, S.; Moiseenko, V.; Lee, R.; Gill, B.; Duzenli, C.; Wu, J.

2014-03-01

3

Measurement of intraperitoneal volume by segmental bioimpedance analysis during peritoneal dialysis  

Microsoft Academic Search

Background: Currently, ultrafiltration during peritoneal dialysis is determined from direct measurement of weight differences between the initial filling and final draining volumes. A new technique based on segmental bioimpedance analysis (SBIA) has been developed to accurately measure intraperitoneal volume continuously during peritoneal dialysis.

Fansan Zhu; Nicholas A. Hoenich; George Kaysen; Claudio Ronco; Daniel Schneditz; Lola Murphy; Sally Santacroce; Amy Pangilinan; Frank Gotch; Nathan W. Levin

2003-01-01

4

Volume Segmentation and Ghost Particles  

NASA Astrophysics Data System (ADS)

Volume Segmentation Tomographic PIV (VS-TPIV) is a type of tomographic PIV in which images of particles in a relatively thick volume are segmented into images on a set of much thinner volumes that may be approximated as planes, as in 2D planar PIV. The planes of images can be analysed by standard mono-PIV, and the volume of flow vectors can be recreated by assembling the planes of vectors. The interrogation process is similar to a Holographic PIV analysis, except that the planes of image data are extracted from two-dimensional camera images of the volume of particles instead of three-dimensional holographic images. Like the tomographic PIV method using the MART algorithm, Volume Segmentation requires at least two cameras and works best with three or four. Unlike the MART method, Volume Segmentation does not require reconstruction of individual particle images one pixel at a time and it does not require an iterative process, so it operates much faster. As in all tomographic reconstruction strategies, ambiguities known as ghost particles are produced in the segmentation process. The effect of these ghost particles on the PIV measurement is discussed.
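The standard mono-PIV analysis applied to each extracted plane locates the peak of a cross-correlation between interrogation windows from successive exposures. A minimal sketch with a synthetic shifted particle pattern (the window size and FFT approach are illustrative assumptions, not the authors' code):

```python
import numpy as np

def displacement(win_a, win_b):
    """Estimate integer-pixel displacement of win_b relative to win_a
    via FFT-based circular cross-correlation (standard mono-PIV step)."""
    corr = np.fft.ifft2(np.fft.fft2(win_a).conj() * np.fft.fft2(win_b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices above half the window size to negative shifts.
    return tuple(int(p) - s if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = np.roll(a, shift=(3, 5), axis=(0, 1))  # "particles" moved by (3, 5)
print(displacement(a, b))  # → (3, 5)
```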

Ziskin, Isaac; Adrian, Ronald

2011-11-01

5

NSEG, a segmented mission analysis program for low and high speed aircraft. Volume 1: Theoretical development  

NASA Technical Reports Server (NTRS)

A rapid mission analysis code based on the use of approximate flight path equations of motion is presented. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed characteristics were specified in tabular form. The code also contains extensive flight envelope performance mapping capabilities. Approximate take off and landing analyses were performed. At high speeds, centrifugal lift effects were accounted for. Extensive turbojet and ramjet engine scaling procedures were incorporated in the code.

Hague, D. S.; Rozendaal, H. L.

1977-01-01

6

A pulse-shape analysis approach to 3-D position determination in large-volume segmented HPGe detectors  

Microsoft Academic Search

This paper is focused on the problem of the spatial localization of radiation-matter interaction in segmented large-volume HPGe detectors. The information is stored in the shapes of the current signals from the various segments. In order to design the algorithms, pulse shapes in a truly coaxial HPGe detector are calculated in closed form. Possible signatures dependent on a single interaction

E. Gatti; G. Casati; A. Geraci; A. Pullia; G. Ripamonti

1999-01-01

7

Inter-sport variability of muscle volume distribution identified by segmental bioelectrical impedance analysis in four ball sports  

PubMed Central

The aim of this study was to evaluate and quantify differences in muscle distribution in athletes of various ball sports using segmental bioelectrical impedance analysis (SBIA). Participants were 115 male collegiate athletes from four ball sports (baseball, soccer, tennis, and lacrosse). Percent body fat (%BF) and lean body mass were measured, and SBIA was used to measure segmental muscle volume (MV) in bilateral upper arms, forearms, thighs, and lower legs. We calculated the MV ratios of dominant to nondominant, proximal to distal, and upper to lower limbs. The measurements consisted of a total of 31 variables. Cluster and factor analyses were applied to identify redundant variables. The muscle distribution was significantly different among groups, but the %BF was not. The classification procedures of the discriminant analysis could correctly distinguish 84.3% of the athletes. These results suggest that collegiate ball game athletes have adapted their physique to their sport movements very well, and the SBIA, which is an affordable, noninvasive, easy-to-operate, and fast alternative method in the field, can distinguish ball game athletes according to their specific muscle distribution within a 5-minute measurement. The SBIA could be a useful, affordable, and fast tool for identifying talents for specific sports.

Yamada, Yosuke; Masuo, Yoshihisa; Nakamura, Eitaro; Oda, Shingo

2013-01-01

8

Inter-sport variability of muscle volume distribution identified by segmental bioelectrical impedance analysis in four ball sports.  

PubMed

The aim of this study was to evaluate and quantify differences in muscle distribution in athletes of various ball sports using segmental bioelectrical impedance analysis (SBIA). Participants were 115 male collegiate athletes from four ball sports (baseball, soccer, tennis, and lacrosse). Percent body fat (%BF) and lean body mass were measured, and SBIA was used to measure segmental muscle volume (MV) in bilateral upper arms, forearms, thighs, and lower legs. We calculated the MV ratios of dominant to nondominant, proximal to distal, and upper to lower limbs. The measurements consisted of a total of 31 variables. Cluster and factor analyses were applied to identify redundant variables. The muscle distribution was significantly different among groups, but the %BF was not. The classification procedures of the discriminant analysis could correctly distinguish 84.3% of the athletes. These results suggest that collegiate ball game athletes have adapted their physique to their sport movements very well, and the SBIA, which is an affordable, noninvasive, easy-to-operate, and fast alternative method in the field, can distinguish ball game athletes according to their specific muscle distribution within a 5-minute measurement. The SBIA could be a useful, affordable, and fast tool for identifying talents for specific sports. PMID:24379714

Yamada, Yosuke; Masuo, Yoshihisa; Nakamura, Eitaro; Oda, Shingo

2013-01-01

9

Morphological segmentation and partial volume analysis for volumetry of solid pulmonary lesions in thoracic CT scans  

Microsoft Academic Search

Volumetric growth assessment of pulmonary lesions is crucial to both lung cancer screening and oncological therapy monitoring. While several methods for small pulmonary nodules have previously been presented, the segmentation of larger tumors that appear frequently in oncological patients and are more likely to be complexly interconnected with lung morphology has not yet received much attention. We present a fast,

Jan-martin Kuhnigk; Volker Dicken; Lars Bornemann; Annemarie Bakai; Dag Wormanns; Stefan Krass; Heinz-otto Peitgen

2006-01-01

10

Effect of body mass index (BMI) on estimation of extracellular volume (ECV) in hemodialysis (HD) patients using segmental and whole body bioimpedance analysis  

Microsoft Academic Search

The aim of the study was to investigate whether body mass index (BMI) influences the estimation of extracellular volume (ECV) in hemodialysis (HD) patients when using segmental bioimpedance analysis (SBIA) compared to wrist-to-ankle bioimpedance analysis (WBIA) during HD with ultrafiltration (UF). Twenty-five HD patients (M:F 19:6) were studied, and further subdivided into two groups of patients, one group with

Mary Carter; Alice T. Morris; Fansan Zhu; Wojciech Zaluska; Nathan W. Levin

2005-01-01

11

NSEG: A segmented mission analysis program for low and high speed aircraft. Volume 2: Program users manual  

NASA Technical Reports Server (NTRS)

A rapid mission analysis code based on the use of approximate flight path equations of motion is described. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics are specified in tabular form. In addition to its mission performance calculation capabilities, the code also contains extensive flight envelope performance mapping capabilities. Approximate take off and landing analyses can be performed. At high speeds, centrifugal lift effects are taken into account. Extensive turbojet and ramjet engine scaling procedures are incorporated in the code.

Hague, D. S.; Rozendaal, H. L.

1977-01-01

12

Computer Analysis and Design of Cable-Stayed Bridge Structures Having Prestressed Post-Tensioned Concrete Segmental Box-Girders. Volume 1. Theory.  

National Technical Information Service (NTIS)

Volume 1 is a comprehensive document on cable-stayed bridge structures. A discussion of the different configurations including decks, towers, and span arrangements appears with detailed recommendations on segmental construction techniques with emphasis on...

H. J. Farran

1988-01-01

13

Automated volume analysis of head and neck lesions on CT scans using 3D level set segmentation  

PubMed Central

The authors have developed a semiautomatic system for segmentation of a diverse set of lesions in head and neck CT scans. The system takes as input an approximate bounding box, and uses a multistage level set to perform the final segmentation. A data set consisting of 69 lesions marked on 33 scans from 23 patients was used to evaluate the performance of the system. The contours from automatic segmentation were compared to both 2D and 3D gold standard contours manually drawn by three experienced radiologists. Three performance metrics were used for the comparison. In addition, a radiologist provided quality ratings on a 1 to 10 scale for all of the automatic segmentations. For this pilot study, the authors observed that the differences between the automatic and gold standard contours were larger than the interobserver differences. However, the system performed comparably to the radiologists, achieving an average area intersection ratio of 85.4% compared to an average of 91.2% between two radiologists. The average absolute area error was 21.1% compared to 10.8%, and the average 2D distance was 1.38 mm compared to 0.84 mm between the radiologists. In addition, the quality rating data showed that, despite the very lax assumptions made on the lesion characteristics in designing the system, the automatic contours approximated many of the lesions very well.
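An area intersection ratio like the one reported can be computed from binary masks as the overlap area divided by the reference area (one plausible definition; the paper's exact metric may differ):

```python
import numpy as np

def area_intersection_ratio(auto_mask, gold_mask):
    """Fraction of the gold-standard area covered by the automatic contour."""
    inter = np.logical_and(auto_mask, gold_mask).sum()
    return inter / gold_mask.sum()

# Toy masks: the automatic contour covers 24 of the 36 gold-standard pixels.
gold = np.zeros((10, 10), dtype=bool); gold[2:8, 2:8] = True
auto = np.zeros((10, 10), dtype=bool); auto[4:8, 2:8] = True
print(area_intersection_ratio(auto, gold))
```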

Street, Ethan; Hadjiiski, Lubomir; Sahiner, Berkman; Gujar, Sachin; Ibrahim, Mohannad; Mukherji, Suresh K.; Chan, Heang-Ping

2009-01-01

14

Automated volume analysis of head and neck lesions on CT scans using 3D level set segmentation.  

PubMed

The authors have developed a semiautomatic system for segmentation of a diverse set of lesions in head and neck CT scans. The system takes as input an approximate bounding box, and uses a multistage level set to perform the final segmentation. A data set consisting of 69 lesions marked on 33 scans from 23 patients was used to evaluate the performance of the system. The contours from automatic segmentation were compared to both 2D and 3D gold standard contours manually drawn by three experienced radiologists. Three performance metrics were used for the comparison. In addition, a radiologist provided quality ratings on a 1 to 10 scale for all of the automatic segmentations. For this pilot study, the authors observed that the differences between the automatic and gold standard contours were larger than the interobserver differences. However, the system performed comparably to the radiologists, achieving an average area intersection ratio of 85.4% compared to an average of 91.2% between two radiologists. The average absolute area error was 21.1% compared to 10.8%, and the average 2D distance was 1.38 mm compared to 0.84 mm between the radiologists. In addition, the quality rating data showed that, despite the very lax assumptions made on the lesion characteristics in designing the system, the automatic contours approximated many of the lesions very well. PMID:18072505

Street, Ethan; Hadjiiski, Lubomir; Sahiner, Berkman; Gujar, Sachin; Ibrahim, Mohannad; Mukherji, Suresh K; Chan, Heang-Ping

2007-11-01

15

Glandular segmentation of cone beam breast CT volume images  

NASA Astrophysics Data System (ADS)

Cone beam breast CT (CBBCT) has potential as an alternative to mammography for screening breast cancer while limiting the radiation dose to that of a two-view mammogram. A clinical trial of CBBCT has been underway and volumetric breast images have been obtained. Although these images clearly show the 3D structure of the breast, they are limited by quantum noise due to dose limitations. Noise from these images adds to the challenges of glandular/adipose tissue segmentation. In response to this, an automated method for reducing noise and segmenting glandular tissue in CBBCT images was developed. A histogram-based 2-means clustering algorithm was used in conjunction with a seven-point 3D median filter to reduce quantum noise. Following this, a 2D parabolic correction was applied to flatten the adipose tissue in each slice to reduce system inhomogeneities. Finally, a median smoothing algorithm was applied to further reduce noise for optimal segmentation. The algorithm was tested subjectively on actual breast scan volume data sets and quantitatively on a 3D mathematical phantom. Subjective comparison of the actual breast scans with the denoised and segmented volumes showed good segmentation with little to no noticeable degradation. The mathematical phantom, after denoising and segmentation, was found to accurately measure the percent glandularity within 0.03% of the actual value for the phantom containing larger spherical shapes, but was only able to preserve small microcalcification-sized spheres of 0.8 and 1.0 mm, and small fibers with diameters of 1.2 and 1.4 mm.
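The histogram-based 2-means step can be sketched as a one-dimensional k-means with k = 2 on voxel intensities, with the midpoint of the final cluster means serving as an adipose/glandular threshold (a simplified stand-in for the authors' full pipeline, using synthetic intensities):

```python
import numpy as np

def two_means_threshold(values, iters=50):
    """1-D 2-means clustering: returns the threshold midway between
    the two converged cluster means."""
    c0, c1 = values.min(), values.max()          # initial centroids
    for _ in range(iters):
        mid = (c0 + c1) / 2.0
        lo, hi = values[values < mid], values[values >= mid]
        c0, c1 = lo.mean(), hi.mean()            # update centroids
    return (c0 + c1) / 2.0

# Synthetic bimodal intensities standing in for adipose and glandular voxels
rng = np.random.default_rng(1)
vals = np.concatenate([rng.normal(100, 5, 1000),
                       rng.normal(200, 5, 1000)])
t = two_means_threshold(vals)
```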

Packard, Nathan; Boone, John M.

2007-03-01

16

Interobserver variation in clinical target volume and organs at risk segmentation in post-parotidectomy radiotherapy: can segmentation protocols help?  

PubMed Central

Objective: A study of interobserver variation in the segmentation of the post-operative clinical target volume (CTV) and organs at risk (OARs) for parotid tumours was undertaken. The segmentation exercise was performed as a baseline, and repeated after 3 months using a segmentation protocol to assess whether CTV conformity improved. Methods: Four head and neck oncologists independently segmented CTVs and OARs (contralateral parotid, spinal cord and brain stem) on CT data sets of five patients post parotidectomy. For each CTV or OAR delineation, total volume was calculated. The conformity level (CL) between different clinicians' outlines was measured using a validated outline analysis tool. The data for CTVs were reanalysed after using the cochlear sparing therapy and conventional radiation segmentation protocol. Results: Significant differences in CTV morphology were observed at baseline, yielding a mean CL of 30% (range 25–39%). The CL improved after using the segmentation protocol with a mean CL of 54% (range 50–65%). For OARs, the mean CL was 60% (range 53–68%) for the contralateral parotid gland, 23% (range 13–27%) for the brain stem and 25% (range 22–31%) for the spinal cord. Conclusions: There was low conformity for CTVs and OARs between different clinicians. The CL for CTVs improved with use of a segmentation protocol, but the CLs remained lower than expected. This study supports the need for clear guidelines for segmentation of target and OARs to compare and interpret the results of head and neck cancer radiation studies.
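A conformity level between observers' outlines is commonly defined as the ratio of summed pairwise intersections to summed pairwise unions. A sketch under that assumption (the validated outline-analysis tool used in the study may define CL differently):

```python
import numpy as np
from itertools import combinations

def conformity_level(masks):
    """Generalized conformity index: summed pairwise intersections
    divided by summed pairwise unions across all observer pairs."""
    inter = sum(np.logical_and(a, b).sum() for a, b in combinations(masks, 2))
    union = sum(np.logical_or(a, b).sum() for a, b in combinations(masks, 2))
    return inter / union

# Two toy observer outlines, each 36 pixels, overlapping in 16 pixels
m1 = np.zeros((10, 10), dtype=bool); m1[0:6, 0:6] = True
m2 = np.zeros((10, 10), dtype=bool); m2[2:8, 2:8] = True
print(round(conformity_level([m1, m2]), 3))
```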

Mukesh, M; Benson, R; Jena, R; Hoole, A; Roques, T; Scrase, C; Martin, C; Whitfield, G A; Gemmill, J; Jefferies, S

2012-01-01

17

Medical volume segmentation using bank of Gabor filters  

Microsoft Academic Search

In this paper, we will present an unsupervised approach for segmenting medical volume images based on texture properties. The texture properties of the volume data are defined based on spatial frequencies as implemented using a statistical method known as Gabor filters. Each Gabor filter in the bank is tuned to detect patterns of a specific frequency and orientation
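A Gabor filter bank of the kind described can be sketched as sinusoids at chosen frequencies and orientations under a Gaussian envelope (the kernel size, frequencies, and orientations here are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def gabor_kernel(size, freq, theta, sigma):
    """Real 2-D Gabor kernel: a sinusoid at spatial frequency `freq`
    and orientation `theta`, windowed by a Gaussian of width `sigma`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * freq * xr)

# A small bank tuned to two frequencies and four orientations
bank = [gabor_kernel(15, f, t, sigma=4.0)
        for f in (0.1, 0.25)
        for t in np.linspace(0, np.pi, 4, endpoint=False)]
```

Filtering a volume slice with each kernel yields a per-voxel texture feature vector that a clustering step can then segment.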

Adebayo Olowoyeye; Mihran Tuceryan; Shiaofen Fang

2009-01-01

18

FDG PET Metabolic Tumor Volume Segmentation and Pathologic Volume of Primary Human Solid Tumors.  

PubMed

OBJECTIVE. The purpose of this study was to establish the correlation and reliability among the pathologic tumor volume and gradient and fixed threshold segmentations of (18)F-FDG PET metabolic tumor volume of human solid tumors. MATERIALS AND METHODS. There were 52 patients included in the study who had undergone baseline PET/CT with subsequent resection of head and neck, lung, and colorectal tumors. The pathologic volume was calculated from three dimensions of the gross tumor specimen as a reference standard. The primary tumor metabolic tumor volume was segmented using gradient and 30%, 40%, and 50% maximum standardized uptake value (SUVmax) threshold methods. Pearson correlation coefficient, intraclass correlation coefficient, and Bland-Altman analyses were performed to establish the correlation and reliability among the pathologic volume and segmented metabolic tumor volume. RESULTS. The mean pathologic volume; gradient-based metabolic tumor volume; and 30%, 40%, and 50% SUVmax threshold metabolic tumor volumes were 13.46, 13.75, 15.47, 10.63, and 7.57 mL, respectively. The intraclass correlation coefficients among the pathologic volume and the gradient-based and 30%, 40%, and 50% SUVmax threshold metabolic tumor volumes were 0.95, 0.85, 0.80, and 0.76, respectively. The Bland-Altman biases were -0.3, -2.0, 2.82, and 5.9 mL, respectively. Of the small tumors (< 10 mL), 23 of the 35 patients had PET segmented volume outside 50% of the pathologic volume, and among the large tumors (≥ 10 mL) three of the 17 patients had PET segmented volumes that were outside 50% of pathologic volume. CONCLUSION. FDG PET metabolic tumor volume estimated using gradient segmentation had superior correlation and reliability with the estimated ellipsoid pathologic volume of the tumors compared with threshold method segmentation. PMID:24758668
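The fixed-threshold methods amount to keeping voxels at or above a fraction of SUVmax and multiplying the voxel count by the voxel volume. A minimal sketch with hypothetical numbers:

```python
import numpy as np

def threshold_mtv(suv, frac, voxel_ml):
    """Metabolic tumor volume from a fixed SUVmax threshold:
    count voxels with SUV >= frac * SUVmax, scale by voxel volume."""
    mask = suv >= frac * suv.max()
    return mask.sum() * voxel_ml

suv = np.array([1.0, 2.0, 4.0, 8.0, 10.0])     # hypothetical voxel SUVs
print(threshold_mtv(suv, 0.40, voxel_ml=0.5))  # voxels >= 4.0 → 3 * 0.5 mL
```

Raising the threshold fraction shrinks the segmented volume, which is consistent with the decreasing 30%/40%/50% mean volumes reported above.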

Sridhar, Praveen; Mercier, Gustavo; Tan, Josenia; Truong, Minh Tam; Daly, Benedict; Subramaniam, Rathan M

2014-05-01

19

3D visualization for medical volume segmentation validation  

NASA Astrophysics Data System (ADS)

This paper presents a 3-D visualization tool that manipulates and/or enhances, by user input, the segmented targets and other organs. The 3-D visualization tool is developed to create a precise and realistic 3-D model from a CT/MR data set for manipulation in 3-D, permitting the physician or planner to look through, around, and inside the various structures. The tool is designed to assist in and to evaluate the segmentation process. It can control the transparency of each 3-D object. It displays in one view a 2-D slice (axial, coronal, and/or sagittal) within a 3-D model of the segmented tumor or structures. This helps the radiotherapist or the operator to evaluate the adequacy of the generated target compared to the original 2-D slices. The graphical interface enables the operator to easily select a specific 2-D slice of the 3-D volume data set. The operator can manually override and adjust the automated segmentation results. After correction, the operator can see the 3-D model again and go back and forth until a satisfactory segmentation is obtained. The novelty of this research work is in using state-of-the-art image processing and 3-D visualization techniques to facilitate the process of medical volume segmentation validation and assure the accuracy of the volume measurement of the structure of interest.

Eldeib, Ayman M.

2002-05-01

20

Accurate colon residue detection algorithm with partial volume segmentation  

NASA Astrophysics Data System (ADS)

Colon cancer is the second leading cause of cancer-related death in the United States. Earlier detection and removal of polyps can dramatically reduce the chance of developing a malignant tumor. Due to some limitations of the optical colonoscopy used in the clinic, many researchers have developed virtual colonoscopy as an alternative technique, in which accurate colon segmentation is crucial. However, the partial volume effect and the existence of residue make it very challenging. The electronic colon cleaning technique proposed by Chen et al. is a very attractive method, which is also a kind of hard segmentation method. As mentioned in their paper, some artifacts were produced, which might affect accurate colon reconstruction. In our paper, instead of labeling each voxel with a unique label or tissue type, the percentage of different tissues within each voxel, which we call a mixture, was considered in establishing a maximum a posteriori probability (MAP) image-segmentation framework. A Markov random field (MRF) model was developed to reflect the spatial information for the tissue mixtures. The spatial information based on hard segmentation was used to determine which tissue types are in a specific voxel. Parameters of each tissue class were estimated by the expectation-maximization (EM) algorithm during the MAP tissue-mixture segmentation. Real CT experimental results demonstrated that the partial volume effects between four tissue types have been precisely detected. Meanwhile, the residue has been electronically removed and a very smooth and clean interface along the colon wall has been obtained.
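The E- and M-steps at the core of such a mixture estimation can be sketched for a one-dimensional two-component Gaussian mixture, where the per-voxel posterior responsibilities play the role of tissue-mixture fractions. Note that this plain EM sketch omits the MRF spatial prior that the paper's MAP framework adds:

```python
import numpy as np

def em_two_gaussians(x, iters=100):
    """EM for a 1-D two-component Gaussian mixture; returns per-sample
    posterior responsibilities (the 'mixture' fraction of each sample)
    and the fitted component means."""
    mu = np.array([x.min(), x.max()])            # initial means
    sd = np.array([x.std(), x.std()])            # initial spreads
    pi = np.array([0.5, 0.5])                    # initial weights
    for _ in range(iters):
        # E-step: posterior responsibility of each component per sample
        # (the shared 1/sqrt(2*pi) factor cancels in the normalization).
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / sd
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations.
        pi = r.mean(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / r.sum(axis=0))
    return r, mu

# Synthetic intensities from two well-separated "tissue" classes
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(10, 1, 500)])
r, mu = em_two_gaussians(x)
```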

Li, Xiang; Liang, Zhengrong; Zhang, PengPeng; Kutcher, Gerald J.

2004-05-01

21

Content analysis for audio classification and segmentation  

Microsoft Academic Search

In this paper, we present our study of audio content analysis for classification and segmentation, in which an audio stream is segmented according to audio type or speaker identity. We propose a robust approach that is capable of classifying and segmenting an audio stream into speech, music, environment sound, and silence. Audio classification is processed in two steps, which makes

Lie Lu; Hong-jiang Zhang; Hao Jiang

2002-01-01

22

FLAIR histogram segmentation for measurement of leukoaraiosis volume.  

PubMed

The purposes of this study were to develop a method to measure brain and white matter hyperintensity (leukoaraiosis) volume that is based on the segmentation of the intensity histogram of fluid-attenuated inversion recovery (FLAIR) images and to assess the accuracy and reproducibility of the method. Whole-head synthetic image phantoms with manually introduced leukoaraiosis lesions of varying severity were constructed. These synthetic image phantom sets incorporated image contrast and anatomic features that mimicked leukoaraiosis found in real life. One set of synthetic image phantoms was used to develop the segmentation algorithm (FLAIR-histoseg). A second set was used to measure its accuracy. Test retest reproducibility was assessed in 10 elderly volunteers who were imaged twice. The mean absolute error of the FLAIR-histoseg method was 6.6% for measurement of leukoaraiosis volume and 1.4% for brain volume. The mean test retest coefficient of variation was 1.4% for leukoaraiosis volume and 0.3% for brain volume. We conclude that the FLAIR-histoseg method is an accurate and reproducible method for measuring leukoaraiosis and whole-brain volume in elderly subjects. PMID:11747022

Jack, C R; O'Brien, P C; Rettman, D W; Shiung, M M; Xu, Y; Muthupillai, R; Manduca, A; Avula, R; Erickson, B J

2001-12-01

23

FLAIR Histogram Segmentation for Measurement of Leukoaraiosis Volume  

PubMed Central

The purpose of this study was to develop a method to measure brain and white matter hyperintensity (leukoaraiosis) volume that is based on the segmentation of the intensity histogram of fluid attenuated inversion recovery (FLAIR) images, and to assess the accuracy and reproducibility of the method. Whole head synthetic image phantoms with manually introduced leukoaraiosis lesions of varying severity were constructed. These synthetic image phantom sets incorporated image contrast and anatomic features which mimicked leukoaraiosis found in real life. One set of synthetic image phantoms was used to develop the segmentation algorithm (FLAIR-histoseg). A second set was used to measure its accuracy. Test re-test reproducibility was assessed in 10 elderly volunteers who were imaged twice. The mean absolute error of the FLAIR-histoseg method for measurement of leukoaraiosis volume was 6.6% and for brain volume 1.4%. The mean test re-test coefficient of variation for leukoaraiosis volume was 1.4% and for brain volume was 0.3%. We conclude that the FLAIR-histoseg method is an accurate and reproducible method for measuring leukoaraiosis and whole brain volume in elderly subjects.

Jack, Clifford R.; O'Brien, Peter C.; Rettman, Daniel W.; Shiung, Maria M.; Xu, Yuecheng; Muthupillai, Raja; Manduca, Armando; Avula, Ramesh; Erickson, Bradley J.

2009-01-01

24

3D segmentation and visualization of lung volume using CT  

NASA Astrophysics Data System (ADS)

Three-dimensional (3D)-based detection and diagnosis plays an important role in significantly improving the detection and diagnosis of lung cancers through computed tomography (CT). This paper presents a 3D approach for segmenting and visualizing lung volume using CT images. An edge-preserving filter (3D sigma filter) is first applied to CT slices to enhance the signal-to-noise ratio, and wavelet transform (WT)-based interpolation incorporated with volume rendering is utilized to construct the 3D volume data. Then an adaptive 3D region-growing algorithm, incorporating an automatic seed-locating algorithm based on fuzzy logic, is designed to segment the lung mask, and a 3D morphological closing algorithm is performed on the mask to fill in cavities. Finally, a 3D visualization tool is designed to view the volume data and its projections or intersections at any angle. This approach was tested on single-detector CT images and the experimental results demonstrate that it is effective and robust. This study lays the groundwork for 3D-based computerized detection and diagnosis of lung cancer with CT imaging. In addition, this approach can be integrated into a PACS system, serving as a visualization tool for radiologists' reading and interpretation.
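The 3D region-growing step can be sketched as a 6-connected flood fill that collects voxels within an intensity interval around a seed (the interval, seed, and densities below are illustrative assumptions; the paper's algorithm adapts its criteria and locates the seed automatically):

```python
from collections import deque
import numpy as np

def region_grow(vol, seed, lo, hi):
    """Flood-fill region growing: collect 6-connected voxels whose
    intensity lies in [lo, hi], starting from seed."""
    mask = np.zeros(vol.shape, dtype=bool)
    q = deque([seed])
    mask[seed] = True
    nbrs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in nbrs:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= c < s for c, s in zip(n, vol.shape)) \
               and not mask[n] and lo <= vol[n] <= hi:
                mask[n] = True
                q.append(n)
    return mask

# Hypothetical volume: a low-density lung-like block inside denser tissue
vol = np.full((5, 5, 5), 100.0)
vol[1:4, 1:4, 1:4] = -800.0
mask = region_grow(vol, (2, 2, 2), lo=-1000.0, hi=-500.0)
print(mask.sum())  # → 27
```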

Zhang, Haibo; Sun, Xuejun; Duan, Huichuan

2005-04-01

25

Artificial Neural Network-Based System for PET Volume Segmentation.  

PubMed

Tumour detection, classification, and quantification in positron emission tomography (PET) imaging at an early stage of disease are important issues for clinical diagnosis, assessment of response to treatment, and radiotherapy planning. Many techniques have been proposed for segmenting medical imaging data; however, some of the approaches have poor performance and large inaccuracy, and require substantial computation time for analysing large medical volumes. Artificial intelligence (AI) approaches can provide improved accuracy and save a considerable amount of time. Artificial neural networks (ANNs), as one of the best AI techniques, have the capability to precisely classify and quantify lesions and to model the clinical evaluation for a specific problem. This paper presents a novel application of ANNs in the wavelet domain for PET volume segmentation. An evaluation of ANN performance using different training algorithms in both the spatial and wavelet domains, with different numbers of neurons in the hidden layer, is also presented. The best number of neurons in the hidden layer is determined from the experimental results, which also identify the Levenberg-Marquardt backpropagation algorithm as the best training approach for the proposed application. The results of the proposed intelligent system are compared with those obtained using conventional techniques, including thresholding and clustering based approaches. Experimental and Monte Carlo simulated PET phantom data sets and clinical PET volumes of non-small cell lung cancer patients were utilised to validate the proposed algorithm, which has demonstrated promising results. PMID:20936152

Sharif, Mhd Saeed; Abbod, Maysam; Amira, Abbes; Zaidi, Habib

2010-01-01

26

Artificial Neural Network-Based System for PET Volume Segmentation  

PubMed Central

Tumour detection, classification, and quantification in positron emission tomography (PET) imaging at an early stage of disease are important issues for clinical diagnosis, assessment of response to treatment, and radiotherapy planning. Many techniques have been proposed for segmenting medical imaging data; however, some of the approaches have poor performance and large inaccuracy, and require substantial computation time for analysing large medical volumes. Artificial intelligence (AI) approaches can provide improved accuracy and save a considerable amount of time. Artificial neural networks (ANNs), as one of the best AI techniques, have the capability to precisely classify and quantify lesions and to model the clinical evaluation for a specific problem. This paper presents a novel application of ANNs in the wavelet domain for PET volume segmentation. An evaluation of ANN performance using different training algorithms in both the spatial and wavelet domains, with different numbers of neurons in the hidden layer, is also presented. The best number of neurons in the hidden layer is determined from the experimental results, which also identify the Levenberg-Marquardt backpropagation algorithm as the best training approach for the proposed application. The results of the proposed intelligent system are compared with those obtained using conventional techniques, including thresholding and clustering based approaches. Experimental and Monte Carlo simulated PET phantom data sets and clinical PET volumes of non-small cell lung cancer patients were utilised to validate the proposed algorithm, which has demonstrated promising results.

Sharif, Mhd Saeed; Abbod, Maysam; Amira, Abbes; Zaidi, Habib

2010-01-01

27

Thyroid segmentation and volume estimation in ultrasound images.  

PubMed

Physicians usually diagnose the pathology of the thyroid gland by its volume. However, even if the thyroid glands are found and the shapes are hand-marked from ultrasound (US) images, most physicians still depend on computed tomography (CT) images, which are expensive to obtain, for precise measurements of the volume of the thyroid gland. This approach relies heavily on the experience of the physicians and is very time consuming. Patients are exposed to high radiation when obtaining CT images. In contrast, US imaging does not require ionizing radiation and is relatively inexpensive. US imaging is thus one of the most commonly used auxiliary tools in clinical diagnosis. The present study proposes a complete solution to estimate the volume of the thyroid gland directly from US images. The radial basis function neural network is used to classify blocks of the thyroid gland. The integral region is acquired by applying a specific-region-growing method to potential points of interest. The parameters for evaluating the thyroid volume are estimated using a particle swarm optimization algorithm. Experimental results of the thyroid region segmentation and volume estimation in US images show that the proposed approach is very promising. PMID:20172782
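The specific-region-growing step described above can be sketched as a breadth-first flood from a seed pixel. The toy image, seed and tolerance below are hypothetical; the actual method grows in 3-D from automatically detected points of interest:

```python
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed`, accepting 4-connected neighbours whose
    intensity is within `tol` of the seed intensity."""
    h, w = len(img), len(img[0])
    seed_val = img[seed[0]][seed[1]]
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(img[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

img = [[9, 9, 1, 1],     # bright "gland" block in the top-left corner
       [9, 8, 1, 1],
       [2, 1, 1, 1]]
gland = region_grow(img, (0, 0), tol=2)
print(sorted(gland))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```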

Chang, Chuan-Yu; Lei, Yue-Fong; Tseng, Chin-Hsiao; Shih, Shyang-Rong

2010-06-01

28

Semi-automatic Active Contour Approach to Segmentation of Computed Tomography Volumes  

Microsoft Academic Search

In this paper a method for three-dimensional (3-D) semi-automatic segmentation of volumes of medical images is described. The method is semi-automatic in the sense that, in the initial phase, user assistance is required for manual segmentation of a certain number of slices (cross-sections) of the volume. In the second phase, the algorithm for automatic segmentation is started. The segmentation

Sven Loncaric; Domagoj Kovacevic; Erich Sorantin

29

Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET  

NASA Astrophysics Data System (ADS)

Accurate volume of interest (VOI) estimation in PET is crucial in different oncology applications such as response to therapy evaluation and radiotherapy treatment planning. The objective of our study was to compare the performance of the proposed algorithm for automatic lesion volume delineation, namely the fuzzy hidden Markov chains (FHMC), with that of threshold-based techniques, the current state of the art in clinical practice. Like the classical hidden Markov chain (HMC) algorithm, FHMC takes into account noise, voxel intensity and spatial correlation in order to classify a voxel as background or functional VOI. However, the novelty of the fuzzy model consists of the inclusion of an estimation of imprecision, which should subsequently lead to a better modelling of the 'fuzzy' nature of the object of interest boundaries in emission tomography data. The performance of the algorithms has been assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1) and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm3 and 64 mm3). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold-based methodology for functional volume determination or activity concentration recovery considering a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore, differences between the classification and volume estimation errors evaluated were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithms was less susceptible to image noise levels in comparison to the threshold-based techniques.
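For context, the threshold-based baseline that such studies compare against reduces to a few lines: keep every voxel above a fixed fraction of the peak uptake (42% is a commonly used value). The SUV profile below is hypothetical; 0.008 ml corresponds to one 8 mm3 voxel as in the abstract:

```python
def threshold_voi(uptake, fraction=0.42, voxel_ml=0.008):
    """Keep every voxel whose uptake is at least `fraction` of the peak;
    return the binary mask and the resulting volume in millilitres."""
    peak = max(uptake)
    mask = [v >= fraction * peak for v in uptake]
    return mask, sum(mask) * voxel_ml

# Hypothetical 1-D SUV profile across a lesion.
suv = [0.2, 0.3, 4.0, 9.5, 10.0, 8.7, 0.4]
mask, volume_ml = threshold_voi(suv)
print(mask.count(True), round(volume_ml, 3))  # 3 0.024
```

The fixed fraction is exactly what makes the baseline fragile at low contrast ratios, which is where the FHMC approach is reported to do better.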
The analysis of both simulated and acquired datasets led to similar results and conclusions as far as the performance of segmentation algorithms under evaluation is concerned.

Hatt, M.; Lamare, F.; Boussion, N.; Turzo, A.; Collet, C.; Salzenstein, F.; Roux, C.; Jarritt, P.; Carson, K.; Cheze-LeRest, C.; Visvikis, D.

2007-07-01

30

Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET  

PubMed Central

Accurate volume of interest (VOI) estimation in PET is crucial in different oncology applications such as response to therapy evaluation and radiotherapy treatment planning. The objective of our study was to compare the performance of the proposed algorithm for automatic lesion volume delineation, namely the Fuzzy Hidden Markov Chains (FHMC), with that of threshold-based techniques, the current state of the art in clinical practice. Like the classical Hidden Markov Chain (HMC) algorithm, FHMC takes into account noise, voxel intensity and spatial correlation in order to classify a voxel as background or functional VOI. However, the novelty of the fuzzy model consists of the inclusion of an estimation of imprecision, which should subsequently lead to a better modelling of the “fuzzy” nature of the object of interest boundaries in emission tomography data. The performance of the algorithms has been assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1) and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm3 and 64 mm3). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold-based methodology for functional volume determination or activity concentration recovery considering a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore, differences between the classification and volume estimation errors evaluated were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithms was less susceptible to image noise levels in comparison to the threshold-based techniques.
The analysis of both simulated and acquired datasets led to similar results and conclusions as far as the performance of segmentation algorithms under evaluation is concerned.

Hatt, Mathieu; Lamare, Frederic; Boussion, Nicolas; Roux, Christian; Turzo, Alexandre; Cheze-Lerest, Catherine; Jarritt, Peter; Carson, Kathryn; Salzenstein, Fabien; Collet, Christophe; Visvikis, Dimitris

2007-01-01

31

Semiautomatic regional segmentation to measure orbital fat volumes in thyroid-associated ophthalmopathy. A validation study.  

PubMed

This study was designed to validate a novel semi-automated segmentation method to measure regional intra-orbital fat tissue volume in Graves' ophthalmopathy. Twenty-four orbits from 12 patients with Graves' ophthalmopathy, 24 orbits from 12 controls, ten orbits from five MRI study simulations and two orbits from a digital model were used. Following manual region of interest definition of the orbital volumes performed by two operators with different levels of expertise, an automated procedure calculated intra-orbital fat tissue volumes (global and regional, with automated definition of four quadrants). In patients with Graves' disease, clinical activity score and degree of exophthalmos were measured and correlated with intra-orbital fat volumes. Operator performance was evaluated and statistical analysis of the measurements was performed. Accurate intra-orbital fat volume measurements were obtained with coefficients of variation below 5%. The mean operator difference in total fat volume measurements was 0.56%. Patients had significantly higher intra-orbital fat volumes than controls (p<0.001 using Student's t test). Fat volumes and clinical score were significantly correlated (p<0.001). The semi-automated method described here can provide accurate, reproducible intra-orbital fat measurements with low inter-operator variation and good correlation with clinical data. PMID:24007725

Comerci, M; Elefante, A; Strianese, D; Senese, R; Bonavolontà, P; Alfano, B; Bonavolontà, B; Brunetti, A

2013-08-01

32

Clinical value of prostate segmentation and volume determination on MRI in benign prostatic hyperplasia.  

PubMed

Benign prostatic hyperplasia (BPH) is a nonmalignant pathological enlargement of the prostate, which occurs primarily in the transitional zone. BPH is highly prevalent and is a major cause of lower urinary tract symptoms in aging males, although there is no direct relationship between prostate volume and symptom severity. The progression of BPH can be quantified by measuring the volumes of the whole prostate and its zones, based on image segmentation on magnetic resonance imaging. Prostate volume determination via segmentation is a useful measure for patients undergoing therapy for BPH. However, prostate segmentation is not widely used due to the excessive time required for even experts to manually map the margins of the prostate. Here, we review and compare new methods of prostate volume segmentation using both manual and automated methods, including the ellipsoid formula, manual planimetry, and semiautomated and fully automated segmentation approaches. We highlight the utility of prostate segmentation in the clinical context of assessing BPH. PMID:24675166

Garvey, Brian; Türkbey, Barış; Truong, Hong; Bernardo, Marcelino; Periaswamy, Senthil; Choyke, Peter L

2014-01-01

33

A Fast Automatic Method for 3D Volume Segmentation of the Human Cerebrovascular  

Microsoft Academic Search

We present a new method for 3-D volume segmentation of the human cerebrovascular structures from Magnetic Resonance Angiograms (MRA) and Magnetic Resonance Ventriculargrams (MRV). A slice through the volume containing large vein or artery structures is chosen, which becomes the seed location for the segmentation process. A modified 3-D computer graphics based region-filling algorithm is used to sweep the vascular

M. Sabry; Aly A. Farag; Stephen Hushek; Thomas Moriarty

2002-01-01

34

Independent component analysis for texture segmentation  

Microsoft Academic Search

Independent component analysis (ICA) of textured images is presented as a computational technique for creating a new data dependent filter bank for use in texture segmentation. We show that the ICA filters are able to capture the inherent properties of textured images. The new filters are similar to Gabor filters, but seem to be richer in the sense that their

Robert Jenssen; Torbjørn Eltoft

2003-01-01

35

Relaxed image foresting transforms for interactive volume image segmentation  

NASA Astrophysics Data System (ADS)

The Image Foresting Transform (IFT) is a framework for image partitioning, commonly used for interactive segmentation. Given an image where a subset of the image elements (seed-points) have been assigned correct segmentation labels, the IFT completes the labeling by computing minimal cost paths from all image elements to the seed-points. Each image element is then given the same label as the closest seed-point. Here, we propose the relaxed IFT (RIFT). This modified version of the IFT features an additional parameter to control the smoothness of the segmentation boundary. The RIFT yields more intuitive segmentation results in the presence of noise and weak edges, while maintaining a low computational complexity. We show an application of the method to the refinement of manual segmentations of a thoracolumbar muscle in magnetic resonance images. The performed study shows that the refined segmentations are qualitatively similar to the manual segmentations, while intra-user variations are reduced by more than 50%.
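The seeded-labeling idea behind the IFT can be sketched in 1-D with Dijkstra's algorithm. The sketch below uses an additive intensity-difference path cost for simplicity (classical IFT formulations often use a max-arc cost instead); the pixel values and seed labels are invented:

```python
import heapq

def ift_label(pixels, seeds):
    """Seeded IFT on a 1-D image: propagate each seed's label along the
    cheapest paths, where a step costs the absolute intensity
    difference between neighbouring pixels."""
    n = len(pixels)
    cost = [float("inf")] * n
    label = [None] * n
    heap = []
    for i, lab in seeds.items():
        cost[i], label[i] = 0, lab
        heapq.heappush(heap, (0, i))
    while heap:
        c, i = heapq.heappop(heap)
        if c > cost[i]:
            continue  # stale queue entry
        for j in (i - 1, i + 1):
            if 0 <= j < n and c + abs(pixels[i] - pixels[j]) < cost[j]:
                cost[j] = c + abs(pixels[i] - pixels[j])
                label[j] = label[i]
                heapq.heappush(heap, (cost[j], j))
    return label

pixels = [1, 1, 2, 9, 9, 8]  # invented scan line with an edge between 2 and 9
print(ift_label(pixels, {0: "A", 5: "B"}))  # ['A', 'A', 'A', 'B', 'B', 'B']
```

The relaxation proposed in the paper adds a smoothness parameter on top of this basic competition between seeds.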

Malmberg, Filip; Nyström, Ingela; Mehnert, Andrew; Engstrom, Craig; Bengtsson, Ewert

2010-03-01

36

An algorithm for 3D localization of multiple pulses in large-volume segmented HPGe detectors  

Microsoft Academic Search

We focus on the problem of the spatial localization of energy releasing events (hits) in segmented large-volume HPGe detectors. We present an algorithm for a precise radial coordinate estimation of events occurring at the same time in the same segment. The algorithm was designed bearing in mind that, with up to thousands of parallel channels, it is mandatory to perform

E. Gatti; G. Casati; A. Geraci; S. Riboldi; G. Ripamonti; F. Camera; B. Million

2000-01-01

37

Image Segmentation Analysis for NASA Earth Science Applications  

NASA Technical Reports Server (NTRS)

NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all of the computerized image analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate moving from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects together into region classes. No other practical, operational image segmentation approach has this tight integration of region growing object finding with region classification. This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image data sections are small enough to successfully mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation will provide an overview of the RHSEG algorithm and describe how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites from remotely sensed data.
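The region-growing half of that pairing — repeatedly merging the most similar adjacent regions — can be caricatured in 1-D. This is a toy sketch with invented data, not RHSEG itself, which additionally groups non-adjacent regions into classes and recurses over subdivided image sections:

```python
def merge_once(regions):
    """Merge the pair of adjacent regions whose mean intensities are
    closest - one step of an iterative region-growing merge."""
    best, best_diff = 0, float("inf")
    for i in range(len(regions) - 1):
        mean_i = sum(regions[i]) / len(regions[i])
        mean_j = sum(regions[i + 1]) / len(regions[i + 1])
        if abs(mean_i - mean_j) < best_diff:
            best, best_diff = i, abs(mean_i - mean_j)
    return regions[:best] + [regions[best] + regions[best + 1]] + regions[best + 2:]

# Invented 1-D regions (lists of pixel intensities).
regions = [[10, 11], [12], [50, 51], [49]]
regions = merge_once(regions)
print(regions)  # [[10, 11, 12], [50, 51], [49]]
```

Iterating this merge until a dissimilarity threshold is reached yields the segmentation hierarchy.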

Tilton, James C.

2010-01-01

38

Automated lung tumor segmentation for whole body PET volume based on novel downhill region growing  

NASA Astrophysics Data System (ADS)

We propose an automated lung tumor segmentation method for whole body PET images based on a novel downhill region growing (DRG) technique, which regards homogeneous tumor hotspots as 3D monotonically decreasing functions. The method has three major steps: thoracic slice extraction with K-means clustering of the slice features; hotspot segmentation with DRG; and decision tree analysis based hotspot classification. To overcome the common problem of leakage into adjacent hotspots in automated lung tumor segmentation, DRG employs the tumors' SUV monotonicity features. DRG also uses gradient magnitude of tumors' SUV to improve tumor boundary definition. We used 14 PET volumes from patients with primary NSCLC for validation. The thoracic region extraction step achieved good and consistent results for all patients despite marked differences in size and shape of the lungs and the presence of large tumors. The DRG technique was able to avoid the problem of leakage into adjacent hotspots and produced a volumetric overlap fraction of 0.61 +/- 0.13 which outperformed four other methods where the overlap fraction varied from 0.40 +/- 0.24 to 0.59 +/- 0.14. Of the 18 tumors in 14 NSCLC studies, 15 lesions were classified correctly, 2 were false negative and 15 were false positive.
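The downhill constraint that prevents leakage can be illustrated in a few lines: a neighbour joins the region only if its uptake does not exceed that of the voxel it is reached from. This 1-D toy profile is invented, and the authors' 3-D method additionally uses SUV gradient magnitude for boundary definition:

```python
from collections import deque

def downhill_grow(suv, seed):
    """Downhill region growing on a 1-D SUV profile: a neighbour joins
    only if its uptake does not exceed that of the voxel it is reached
    from, so the region cannot climb uphill into an adjacent hotspot."""
    region, queue = {seed}, deque([seed])
    while queue:
        i = queue.popleft()
        for j in (i - 1, i + 1):
            if 0 <= j < len(suv) and j not in region and suv[j] <= suv[i]:
                region.add(j)
                queue.append(j)
    return sorted(region)

suv = [1, 2, 8, 5, 3, 4, 9, 2]   # two hotspots peaking at 8 and 9
print(downhill_grow(suv, 2))     # [0, 1, 2, 3, 4] - stops before the rise to 9
```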

Ballangan, Cherry; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Feng, Dagan

2010-03-01

39

Recent advances in segmented gamma scanner analysis  

SciTech Connect

The segmented gamma scanner (SGS) is used in many facilities to assay low-density scrap and waste generated in the facilities. The procedures for using the SGS can cause a negative bias if the sample does not satisfy the assumptions made in the method. Some process samples do not comply with the assumptions. This paper discusses the effect of the presence of lumps on the SGS assay results, describes a method to detect the presence of lumps, and describes an approach to correct for the lumps. Other recent advances in SGS analysis are also discussed.

Sprinkle, J.K. Jr.; Hsue, S.T.

1987-01-01

40

3D robust Chan-Vese model for industrial computed tomography volume data segmentation  

NASA Astrophysics Data System (ADS)

Industrial computed tomography (CT) has been widely applied in many areas of non-destructive testing (NDT) and non-destructive evaluation (NDE). In practice, CT volume data to be dealt with may be corrupted by noise. This paper addresses the segmentation of noisy industrial CT volume data. Motivated by the research on the Chan-Vese (CV) model, we present a region-based active contour model that draws upon intensity information in local regions with a controllable scale. In the presence of noise, a local energy is firstly defined according to the intensity difference within a local neighborhood. Then a global energy is defined to integrate local energy with respect to all image points. In a level set formulation, this energy is represented by a variational level set function, where a surface evolution equation is derived for energy minimization. Comparative analysis with the CV model indicates the comparable performance of the 3D robust Chan-Vese (RCV) model. The quantitative evaluation also shows the segmentation accuracy of 3D RCV. In addition, the efficiency of our approach is validated under several types of noise, such as Poisson noise, Gaussian noise, salt-and-pepper noise and speckle noise.

Liu, Linghui; Zeng, Li; Luan, Xiao

2013-11-01

41

Estimation of body fluid changes during peritoneal dialysis by segmental bioimpedance analysis  

Microsoft Academic Search

Estimation of body fluid changes during peritoneal dialysis by segmental bioimpedance analysis.BackgroundCommonly used bioimpedance analysis (BIA) is insensitive to changes in peritoneal fluid volume. The purpose of this study was to show, to our knowledge for the first time, that a new segmental approach accurately measures extracellular fluid changes during peritoneal dialysis (PD).MethodsFourteen stable PD patients were studied during a

Fansan Zhu; Daniel Schneditz; Allen M Kaufman; Nathan W Levin

2000-01-01

42

Watershed segmentation of medical volumes with paint drop marking  

Microsoft Academic Search

We present an improvement of the classical marker-controlled watershed approach in the direction of a better exploitation of user-defined markers. The combined action of a partial flooding and paint drops falling downwards on the gray value relief from marker locations, leads to a robust and meaningful identification of the candidate basins, which is a prerequisite for an accurate segmentation. This

Alberto Signoroni; G. Zanetti; R. Grazioli; Riccardo Leonardi

2010-01-01

43

A Linear Program Formulation for the Segmentation of Ciona Membrane Volumes  

PubMed Central

We address the problem of cell segmentation in confocal microscopy membrane volumes of the ascidian Ciona used in the study of morphogenesis. The primary challenges are non-uniform and patchy membrane staining and faint spurious boundaries from other organelles (e.g. nuclei). Traditional segmentation methods incorrectly attach to faint boundaries producing spurious edges. To address this problem, we propose a linear optimization framework for the joint correction of multiple over-segmentations obtained from different methods. The main idea motivating this approach is that multiple over-segmentations, resulting from a pool of methods with various parameters, are likely to agree on the correct segment boundaries, while spurious boundaries are method- or parameter-dependent. The challenge is to make an optimized decision on selecting the correct boundaries while discarding the spurious ones. The proposed unsupervised method achieves better performance than state of the art methods for cell segmentation from membrane images.
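The intuition that agreed-upon boundaries are correct while spurious ones are method-dependent can be sketched with simple majority voting, a greedy stand-in for the paper's linear-program formulation. The three over-segmentations of a 6-pixel scan line below are invented:

```python
def consensus_boundaries(segmentations, min_votes=2):
    """Keep a boundary between positions i and i+1 only when at least
    `min_votes` of the input over-segmentations place one there."""
    n = len(segmentations[0])
    kept = []
    for i in range(n - 1):
        votes = sum(seg[i] != seg[i + 1] for seg in segmentations)
        if votes >= min_votes:
            kept.append(i)
    return kept

# Three invented over-segmentations (labels are arbitrary region ids;
# only the boundary positions matter).
segs = [[0, 0, 1, 1, 2, 2],
        [0, 0, 1, 1, 1, 1],
        [0, 3, 3, 3, 2, 2]]
print(consensus_boundaries(segs))  # [1, 3]
```

The linear program in the paper makes this selection jointly and optimally rather than boundary-by-boundary.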

Delibaltov, Diana L.; Ghosh, Pratim; Rodoplu, Volkan; Veeman, Michael; Smith, William; Manjunath, B.S.

2014-01-01

44

Bioimpedance Measurement of Segmental Fluid Volumes and Hemodynamics.  

National Technical Information Service (NTIS)

Bioimpedance has become a useful tool to measure changes in body fluid compartment volumes. An Electrical Impedance Spectroscopic (EIS) system is described that extends the capabilities of conventional fixed frequency impedance plethysmographic (IPG) meth...

Montgomery, L. D.; Wu, Y. C.; Ku, Y. T. E.; Gerth, W. A.

2000-01-01

45

Segmentation and analysis of hyperspectral data  

Microsoft Academic Search

Summary form only given. We review a previously presented algorithm that segments hyperspectral images on the basis of the two- or three-dimensional histograms of their principal components. Some modifications to improve our previous approach are detailed. After exploring the application of morphology directly to the segmented (digital) images, we focus on the processing of our segmented images in tandem with

S. R. Rotman; Jeny Silverman; C. E. Caefer

2002-01-01

46

Audio content analysis for online audiovisual data segmentation and classification  

Microsoft Academic Search

While current approaches for audiovisual data segmentation and classification are mostly focused on visual cues, audio signals may actually play a more important role in content parsing for many applications. An approach to automatic segmentation and classification of audiovisual data based on audio content analysis is proposed. The audio signal from movies or TV programs is segmented and classified into

Tong Zhang; C.-C. Jay Kuo

2001-01-01

47

Greedy modular subspace segment principle component analysis  

NASA Astrophysics Data System (ADS)

Hyperspectral images collect hundreds of co-registered images of the earth surface at different wavelengths in the visible and short-wave infrared region. With such high spectral resolution, many adjacent bands are highly correlated, i.e., they contain a great deal of redundant information. Removing redundant information from this huge amount of data while preserving the useful information is a challenging problem. Principal component analysis (PCA) is one of the widely used algorithms for this problem. It assumes that the directions of larger variance contain the most information, so it projects the data onto the directions that maximize the variance. Most of the signal will be kept in the first several principal components, and the rest will be considered noise and neglected. To further reduce the redundancy, segment PCA has been proposed, which first separates the whole set of spectral bands into blocks and then performs the original PCA in each block individually. Both approaches perform well for data compression, but they do not achieve comparable results for image classification in the resulting feature space. In this study, we adopt the greedy modular subspaces transformation (GMST) to find the optimal feature subspace for segment PCA. It is expected to provide comparable classification results with high compression performance.

Chen, Hsin-Ting; Ren, Hsuan; Chang, Yang-Lang

2007-10-01

48

LANDSAT-D program. Volume 2: Ground segment  

NASA Technical Reports Server (NTRS)

Raw digital data, as received from the LANDSAT spacecraft, cannot generate images that meet specifications. Radiometric corrections must be made to compensate for aging and for differences in sensitivity among the instrument sensors. Geometric corrections must be made to compensate for off-nadir look angle, and to calculate spacecraft drift from its prescribed path. Corrections must also be made for look-angle jitter caused by vibrations induced by spacecraft equipment. The major components of the LANDSAT ground segment and their functions are discussed.

1984-01-01

49

Similarity enhancement for automatic segmentation of cardiac structures in computed tomography volumes  

PubMed Central

The aim of this research is to propose a 3-D similarity enhancement technique useful for improving the segmentation of cardiac structures in Multi-Slice Computerized Tomography (MSCT) volumes. The similarity enhancement is obtained by subtracting the intensity of the current voxel and the gray levels of its adjacent voxels in two volumes resulting from preprocessing. Such volumes are: (a) a volume obtained after applying a Gaussian distribution and a morphological top-hat filter to the input, and (b) a smoothed volume generated by processing the input with an average filter. Then, the similarity volume is used as input to a region growing algorithm. This algorithm is applied to extract the shape of cardiac structures, such as the left and right ventricles, in MSCT volumes. Qualitative and quantitative results show the good performance of the proposed approach for discrimination of cardiac cavities.
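A 1-D caricature of the similarity idea: the real method subtracts neighbourhood intensities of two preprocessed 3-D volumes, while the moving-average filter and the scan line below are simplifications invented for illustration:

```python
def mean_filter(signal, radius=1):
    """Moving-average smoothing (a stand-in for the Gaussian/top-hat and
    average-filter preprocessing described in the abstract)."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def similarity(volume_a, volume_b):
    """Voxel-wise absolute difference between two preprocessed volumes:
    low inside homogeneous cavities, high near boundaries."""
    return [abs(a - b) for a, b in zip(volume_a, volume_b)]

profile = [10, 10, 10, 40, 42, 41, 10, 10]  # hypothetical CT scan line
sim = similarity(profile, mean_filter(profile))
print([round(s, 2) for s in sim])  # near-zero on flat runs, large at edges
```

A region-growing pass over the similarity volume then stops naturally at the high-difference boundary voxels.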

Vera, Miguel; Bravo, Antonio; Garreau, Mireille; Medina, Ruben

2011-01-01

50

Pulmonary airways tree segmentation from CT examinations using adaptive volume of interest  

NASA Astrophysics Data System (ADS)

Airways tree segmentation is an important step in quantitatively assessing the severity of and changes in several lung diseases such as chronic obstructive pulmonary disease (COPD), asthma, and cystic fibrosis. It can also be used in guiding bronchoscopy. The purpose of this study is to develop an automated scheme for segmenting the airways tree structure depicted on chest CT examinations. After lung volume segmentation, the scheme defines the first cylinder-like volume of interest (VOI) using a series of images depicting the trachea. The scheme then iteratively defines and adds subsequent VOIs using a region growing algorithm combined with adaptively determined thresholds in order to trace possible sections of airways located inside the combined VOI in question. The airway tree segmentation process is automatically terminated after the scheme assesses all defined VOIs in the iteratively assembled VOI list. In this preliminary study, ten CT examinations with 1.25 mm section thickness and two different CT image reconstruction kernels ("bone" and "standard") were selected and used to test the proposed airways tree segmentation scheme. The experimental results showed that (1) adopting this approach effectively prevented the scheme from infiltrating into the parenchyma, (2) the proposed method segmented the airways trees reasonably accurately, with a lower false-positive identification rate as compared with other previously reported schemes that are based on 2-D image segmentation and data analyses, and (3) the proposed adaptive, iterative threshold selection method for the region growing step in each identified VOI enables the scheme to segment the airways trees reliably to the 4th generation in this limited dataset, with successful segmentation up to the 5th generation in a fraction of the airways tree branches.

Park, Sang Cheol; Kim, Won Pil; Zheng, Bin; Leader, Joseph K.; Pu, Jiantao; Tan, Jun; Gur, David

2009-02-01

51

Free volume and water vapor permeability of dense segmented polyurethane membrane  

Microsoft Academic Search

This paper presents the micro-structure, free volume and water vapor permeability of a dense segmented polyurethane (SPU) membrane. Wide-angle X-ray diffraction (WAXD), differential scanning calorimetry (DSC) and transmission electron microscopy (TEM) techniques were employed to investigate the micro-structure. Free volume was measured by positron annihilation lifetime spectroscopy (PALS). WAXD and DSC results indicated that a microcrystalline structure is present in the polymer membrane. Percent

S. Mondal; J. L. Hu; Z. Yong

2006-01-01

52

A Bayesian Approach to Video Object Segmentation via Merging 3D Watershed Volumes  

Microsoft Academic Search

In this paper, we propose a Bayesian approach to video object segmentation. Our method consists of two stages. In the first stage, we partition the video data into a set of 3D watershed volumes, where each watershed volume is a series of corresponding 2D image regions. These 2D image regions are obtained by applying to each image frame the marker-controlled watershed segmentation,

Yi-ping Hung; Yu-pao Tsai; Chih-chuan Lai

2002-01-01

53

A Bayesian approach to video object segmentation via merging 3-D watershed volumes  

Microsoft Academic Search

In this letter, we propose a Bayesian approach to video object segmentation. Our method consists of two stages. In the first stage, we partition the video data into a set of three-dimensional (3-D) watershed volumes, where each watershed volume is a series of corresponding two-dimensional (2-D) image regions. These 2-D image regions are obtained by applying to each image frame

Yu-Pao Tsai; Chih-Chuan Lai; Yi-Ping Hung; Zen-Chung Shih

2005-01-01

54

The dose volume constraint satisfaction problem for inverse treatment planning with field segments  

Microsoft Academic Search

The prescribed goals of radiation treatment planning are often expressed in terms of dose-volume constraints. We present a novel formulation of a dose-volume constraint satisfaction search for the discretized radiation therapy model. This approach does not rely on any explicit cost function. Inverse treatment planning uses the aperture-based approach with segmental fields predefined according to geometric rules. The solver utilizes

Darek Michalski; Ying Xiao; Yair Censor; James M. Galvin

2004-01-01

55

Texture segmentation and analysis for tissue characterization  

NASA Astrophysics Data System (ADS)

Early detection of tissue changes in a disease process is of utmost interest and a challenge for non-invasive imaging techniques. Texture is an important property of image regions, and many texture descriptors have been proposed in the literature. In this paper we introduce a new approach related to texture descriptors and texture grouping. Some applications, e.g. shape from texture, require a denser sampling, as provided by the pseudo-Wigner distribution. Therefore, the first step of the approach is modular pattern detection in textured images based on the use of a pseudo-Wigner distribution (PWD) followed by a PCA stage. The second scheme is to consider a direct local frequency analysis by splitting the PWD spectra following a "cortex-like" structure. As an alternative technique, the use of a Gabor multiresolution approach was considered. Gabor functions constitute a family of band-pass filters that gather the most salient properties of spatial frequency and orientation selectivity. This paper presents a comparison of time-frequency methods, based on the use of the PWD, with sparse filtering approaches using a Gabor-based multiresolution representation. The performance of the current methods is evaluated on the segmentation of synthetic texture mosaics and of osteoporosis images.

Redondo, Rafael; Fischer, Sylvain; Cristobal, Gabriel; Forero, Manuel; Santos, Andres; Hormigo, Javier; Gabarda, Salvador

2004-10-01

56

Precise segmentation of multiple organs in CT volumes using learning-based approach and information theory.  

PubMed

In this paper, we present a novel method incorporating information theory into a learning-based approach for automatic and accurate pelvic organ segmentation (including the prostate, bladder and rectum). We target 3D CT volumes that are generated using different scanning protocols (e.g., contrast and non-contrast, with and without implant in the prostate, various resolutions and positions), and the volumes come from largely diverse sources (e.g., diseases in different organs). Three key ingredients are combined to solve this challenging segmentation problem. First, marginal space learning (MSL) is applied to efficiently and effectively localize the multiple organs in the largely diverse CT volumes. Second, learning techniques with steerable features are applied for robust boundary detection, which enables handling of highly heterogeneous texture patterns. Third, a novel information theoretic scheme is incorporated into the boundary inference process. The incorporation of the Jensen-Shannon divergence further drives the mesh to the best fit of the image, thus improving segmentation performance. The proposed approach is tested on a challenging dataset containing 188 volumes from diverse sources. Our approach not only produces excellent segmentation accuracy, but also runs about eighty times faster than previous state-of-the-art solutions. The proposed method can be applied to CT images to provide visual guidance to physicians during computer-aided diagnosis, treatment planning and image-guided radiotherapy to treat cancers in the pelvic region. PMID:23286081

Lu, Chao; Zheng, Yefeng; Birkbeck, Neil; Zhang, Jingdan; Kohlberger, Timo; Tietjen, Christian; Boettger, Thomas; Duncan, James S; Zhou, S Kevin

2012-01-01

57

A Modified Probabilistic Neural Network for Partial Volume Segmentation in Brain MR Image  

Microsoft Academic Search

A modified probabilistic neural network (PNN) for brain tissue segmentation with magnetic resonance imaging (MRI) is proposed. In this approach, covariance matrices are used to replace the singular smoothing factor in the PNN's kernel function, and weighting factors are added in the pattern of summation layer. This weighted probabilistic neural network (WPNN) classifier can account for partial volume effects, which

Tao Song; Mo M. Jamshidi; Roland R. Lee; Mingxiong Huang

2007-01-01

58

Comparative assessment of statistical brain MR image segmentation algorithms and their impact on partial volume correction in PET.  

PubMed

Magnetic resonance imaging (MRI)-guided partial volume effect correction (PVC) in brain positron emission tomography (PET) is now a well-established approach to compensate for the large bias in the estimate of regional radioactivity concentration, especially for small structures. The accuracy of the algorithms developed so far is, however, largely dependent on the performance of segmentation methods partitioning MRI brain data into its main classes, namely gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF). A comparative evaluation of three brain MRI segmentation algorithms using simulated and clinical brain MR data was performed, and subsequently their impact on PVC in 18F-FDG and 18F-DOPA brain PET imaging was assessed. Two algorithms, the first bundled in the Statistical Parametric Mapping (SPM2) package and the other the Expectation Maximization Segmentation (EMS) algorithm, incorporate a priori probability images derived from MR images of a large number of subjects. The third, here referred to as the HBSA algorithm, is a histogram-based segmentation algorithm incorporating an Expectation Maximization approach to model a four-Gaussian mixture for both global and local histograms. MR brain phantoms with known true volumes for the different brain classes were simulated under different combinations of noise and intensity non-uniformity. The algorithms' performance was checked by calculating the kappa index assessing similarity with the "ground truth" as well as multiclass type I and type II errors, including misclassification rates. The impact of the image segmentation algorithms on PVC was then quantified using clinical data. The segmented tissues of patients' brain MRI were given as input to the region of interest (RoI)-based geometric transfer matrix (GTM) PVC algorithm, and quantitative comparisons were made. The results of the digital MRI phantom studies suggest that the use of HBSA produces the best performance for WM classification.
For GM classification, it is suggested to use EMS. Segmentation performed on clinical MRI data shows quite substantial differences, especially when lesions are present. For the particular case of PVC, the SPM2 and EMS algorithms show very similar results and may be used interchangeably. The use of HBSA is not recommended for PVC. The partial volume corrected activities in some regions of the brain show quite large relative differences when performing paired analysis on two algorithms, implying that the segmentation algorithm for GTM-based PVC must be chosen carefully. PMID:16828315
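The kappa index used for the phantom comparison can be sketched for two discrete label maps as follows; the three-class toy arrays below are invented, not the study's phantoms.

```python
import numpy as np

def cohen_kappa(seg_a, seg_b, n_classes):
    """Cohen's kappa between two integer label maps of equal shape."""
    a = np.asarray(seg_a).ravel()
    b = np.asarray(seg_b).ravel()
    cm = np.zeros((n_classes, n_classes), dtype=float)   # confusion matrix
    for i, j in zip(a, b):
        cm[i, j] += 1
    n = cm.sum()
    po = np.trace(cm) / n                       # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2   # chance agreement
    return (po - pe) / (1.0 - pe)

# toy label maps: 0 = CSF, 1 = GM, 2 = WM
ref  = np.array([0, 0, 1, 1, 2, 2, 2, 1])
pred = np.array([0, 0, 1, 2, 2, 2, 2, 1])
print(round(cohen_kappa(ref, pred, 3), 3))   # substantial agreement (~0.81)
```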

Zaidi, Habib; Ruest, Torsten; Schoenahl, Frederic; Montandon, Marie-Louise

2006-10-01

59

Topology Adaptive Deformable Surfaces for Medical Image Volume Segmentation  

Microsoft Academic Search

Deformable models, which include deformable contours (the popular snakes) and deformable surfaces, are a powerful model-based medical image analysis technique. We develop a new class of deformable models by formulating deformable surfaces in terms of an affine cell image decomposition (ACID). Our approach significantly extends standard deformable surfaces, while retaining their interactivity and other desirable properties. In

Tim Mcinerney; Demetri Terzopoulos

1999-01-01

60

Hybrid image segmentation for Earth remote sensing data analysis  

Microsoft Academic Search

Image segmentation is a partitioning of an image into constituent parts using image attributes such as pixel intensity, spectral values, and/or textural properties. Image segmentation produces an image representation in terms of edges and regions of various shapes and interrelationships. It is a key step in several approaches to image compression and image analysis. The author has devised a hybrid

James C. Tilton

1996-01-01

61

Skin Segmentation Using Color Pixel Classification: Analysis and Comparison  

Microsoft Academic Search

This work presents a study of three important issues of the color pixel classification approach to skin segmentation: color representation, color quantization, and classification algorithm. Our analysis of several representative color spaces using the Bayesian classifier with the histogram technique shows that skin segmentation based on color pixel classification is largely unaffected by the choice of the color space. However,

Son Lam Phung; Abdesselam Bouzerdoum; Douglas Chai

2005-01-01

62

Automatic segmentation of tumor-laden lung volumes from the LIDC database  

NASA Astrophysics Data System (ADS)

The segmentation of the lung parenchyma is often a critical pre-processing step prior to application of computer-aided detection of lung nodules. Segmentation of the lung volume can dramatically decrease computation time and reduce the number of false positive detections by excluding extra-pulmonary tissue from consideration. However, while many algorithms are capable of adequately segmenting the healthy lung, none have been demonstrated to work reliably well on tumor-laden lungs. Of particular challenge is to preserve tumorous masses attached to the chest wall, mediastinum or major vessels. In this role, lung volume segmentation comprises an important computational step that can adversely affect the performance of the overall CAD algorithm. An automated lung volume segmentation algorithm has been developed with the goals of maximally excluding extra-pulmonary tissue while retaining all true nodules. The algorithm comprises a series of tasks including intensity thresholding, 2-D and 3-D morphological operations, 2-D and 3-D flood-filling, and snake-based clipping of nodules attached to the chest wall. It features the ability to (1) exclude the trachea and bowels, (2) snip large attached nodules using snakes, (3) snip small attached nodules using dilation, (4) preserve large masses fully internal to the lung volume, (5) account for basal aspects of the lung where, in a 2-D slice, the lower sections appear to be disconnected from the main lung, and (6) achieve separation of the right and left hemi-lungs. The algorithm was developed and trained on the first 100 datasets of the LIDC image database.
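The thresholding / morphology / flood-fill stages can be sketched in 2-D on an invented phantom slice; the HU values, structure sizes, and function name below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy import ndimage as ndi

def segment_lung_slice(slice_hu, air_thresh=-400):
    """Toy version of the threshold / morphology / flood-fill steps:
    keep low-intensity regions inside the body, discard outside air."""
    air = slice_hu < air_thresh                 # lungs + surrounding air
    labels, _ = ndi.label(air)
    # any component touching the image border is outside air
    border = np.unique(np.concatenate([labels[0, :], labels[-1, :],
                                       labels[:, 0], labels[:, -1]]))
    lungs = air & ~np.isin(labels, border)      # drop outside-air components
    lungs = ndi.binary_closing(lungs, np.ones((5, 5)))  # smooth the boundary
    lungs = ndi.binary_fill_holes(lungs)        # fill vessel/nodule holes
    return lungs

# phantom: body disk at 40 HU, two elliptical "lungs" at -800 HU, air at -1000 HU
img = np.full((100, 100), -1000.0)
yy, xx = np.mgrid[0:100, 0:100]
img[((yy - 50)**2 / 1600 + (xx - 50)**2 / 1600) < 1] = 40.0   # body
img[((yy - 50)**2 / 400 + (xx - 30)**2 / 100) < 1] = -800.0   # left lung
img[((yy - 50)**2 / 400 + (xx - 70)**2 / 100) < 1] = -800.0   # right lung
mask = segment_lung_slice(img)
print(mask[50, 30], mask[50, 70], mask[10, 10])
```

The paper's 3-D pipeline additionally uses snakes and dilation-based snipping to handle nodules attached to the chest wall, which this sketch omits.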

O'Dell, Walter G.

2012-02-01

63

Volume rendering segmented data using 3D textures: a practical approach for intra-operative visualization  

NASA Astrophysics Data System (ADS)

Volume rendering has high utility in the visualization of segmented datasets. However, volume rendering of the segmented labels along with the original data causes undesirable intermixing/bleeding artifacts arising from interpolation at the sharp boundaries. This issue is further amplified in 3D texture-based volume rendering due to the inaccessibility of the interpolation stage. We present an approach which helps minimize intermixing artifacts while maintaining the high performance of 3D texture-based volume rendering - both of which are critical for intra-operative visualization. Our approach uses a 2D transfer function based classification scheme where label distinction is achieved through an encoding that generates unique gradient values for labels. This helps ensure that labelled voxels always map to distinct regions in the 2D transfer function, irrespective of interpolation. In contrast to previously reported algorithms, our algorithm does not require multiple passes for rendering and supports more than four masks. It also allows for real-time modification of the colors/opacities of the segmented structures along with the original data. Additionally, these capabilities are available with minimal texture memory requirements amongst comparable algorithms. Results are presented on clinical and phantom data.

Subramanian, Navneeth; Mullick, Rakesh; Vaidya, Vivek

2006-03-01

64

Dynamic Seismic Analysis of Long Segmented Lifelines.  

National Technical Information Service (NTIS)

The difference in ground motion along a lifeline, the incoherent motion, is an essential component of the input. A long, straight, segmented pipe, with each link attached to the ground via a spring and dashpot is subjected to incoherent ground motion caus...

I. Nelson; P. Weidlinger

1978-01-01

65

Microarray Analysis of Focal Segmental Glomerulosclerosis  

Microsoft Academic Search

Background: Focal segmental glomerulosclerosis (FSGS) is a leading cause of chronic renal failure in children. Recent studies have begun to define the molecular pathogenesis of this heterogeneous condition. Here we use oligonucleotide microarrays to obtain a global gene expression profile of kidney biopsy specimens from patients with FSGS in order to better understand the pathogenesis of this disease. Methods: We

Kristopher Schwab; David P. Witte; Bruce J. Aronow; Prasad Devarajan; S. Steven Potter; Larry T. Patterson

2004-01-01

66

Automated target recognition technique for image segmentation and scene analysis.  

National Technical Information Service (NTIS)

Automated target recognition software has been designed to perform image segmentation and scene analysis. Specifically, this software was developed as a package for the Army's Minefield and Reconnaissance and Detector (MIRADOR) program. MIRADOR is an on/o...

C. W. Baumgart; C. A. Ciarcia

1994-01-01

67

Automated cerebellar segmentation: Validation and application to detect smaller volumes in children prenatally exposed to alcohol

PubMed Central

Objective To validate an automated cerebellar segmentation method based on active shape and appearance modeling and then segment the cerebellum on images acquired from adolescents with histories of prenatal alcohol exposure (PAE) and non-exposed controls (NC). Methods Automated segmentations of the total cerebellum, right and left cerebellar hemispheres, and three vermal lobes (anterior, lobules I–V; superior posterior, lobules VI–VII; inferior posterior, lobules VIII–X) were compared to expert manual labelings on 20 subjects, studied twice, who were not used for model training. The method was also used to segment the cerebellum on 11 PAE and 9 NC adolescents. Results The test–retest intraclass correlation coefficients (ICCs) of the automated method were greater than 0.94 for all cerebellar volume and mid-sagittal vermal area measures, comparable to or better than the test–retest ICCs for manual measurement (all ICCs > 0.92). The ICCs computed on all four cerebellar measurements (manual and automated measures on the repeat scans) to assess comparability were above 0.97 for non-vermis parcels, and above 0.89 for vermis parcels. When applied to patients, the automated method detected smaller cerebellar volumes and mid-sagittal areas in the PAE group compared to controls (p < 0.05 for all regions except the superior posterior lobe, consistent with prior studies). Discussion These results demonstrate excellent reliability and validity of automated cerebellar volume and mid-sagittal area measurements, compared to manual measurements. These data also illustrate that this new technology for automatically delineating the cerebellum leads to conclusions regarding the effects of prenatal alcohol exposure on the cerebellum consistent with prior studies that used labor-intensive manual delineation, even with a very small sample.
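The test-retest ICC reported above can be computed with a short ICC(2,1) routine (two-way random effects, absolute agreement, single measurement, following the Shrout-Fleiss formulation); the scan/rescan volumes below are invented example data, not the study's measurements.

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1) from a two-way ANOVA decomposition.
    Y is an (n subjects) x (k measurements) array, e.g. scan vs rescan."""
    n, k = Y.shape
    m = Y.mean()
    row_m, col_m = Y.mean(axis=1), Y.mean(axis=0)
    ss_total = ((Y - m) ** 2).sum()
    ss_rows = k * ((row_m - m) ** 2).sum()    # between-subject variation
    ss_cols = n * ((col_m - m) ** 2).sum()    # between-session variation
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# hypothetical cerebellar volumes (cm^3): scan and rescan of 5 subjects
scan   = np.array([10.0, 12.0, 14.0, 16.0, 18.0])
rescan = np.array([10.1, 11.9, 14.05, 16.1, 17.95])
icc = icc_2_1(np.column_stack([scan, rescan]))
print(round(icc, 3))   # close to 1: rescan tracks scan almost perfectly
```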

Cardenas, Valerie A.; Price, Mathew; Infante, M. Alejandra; Moore, Eileen M.; Mattson, Sarah N.; Riley, Edward P.; Fein, George

2014-01-01

68

Segmental hair analysis for cocaine and heroin abuse determination  

Microsoft Academic Search

Segmental hair analysis was performed to obtain information about the history of drug abuse of subjects in a rehabilitation programme. The analytical data from hair samples were correlated, when possible, with urine analysis and to toxicological anamnesis. Toxicological analysis of hair seems to be a valid tool in this specific field.

S. Strano-Rossi; A. Bermejo-Barrera; M. Chiarotti

1995-01-01

69

The Effect of Segment Selection on Acoustic Analysis  

PubMed Central

Objective/Hypothesis Acoustic analysis is a commonly used method for quantitatively measuring vocal fold function. Voice signals are analyzed by selecting a waveform segment and using various algorithms to arrive at parameters such as jitter, shimmer, and signal-to-noise ratio (SNR). Accurate and reliable methods for selecting a representative vowel segment have not been established. Study Design Prospective repeated measures experiment. Methods We applied a moving window method by isolating consecutive, overlapping segments of the raw voice signal from onset through offset. Ten normal voice signals were analyzed using acoustic measures calculated from the moving window. The location and value of minimum perturbation/maximum SNR were compared across individuals. The moving window method was compared with data from the whole vowel excluding onset and offset, the mid-vowel, and the visually selected steadiest portion of the voice signal. Results Results showed that the steadiest portion of the waveforms, as defined by minimum perturbation and maximum SNR values, was not consistent across individuals. Perturbation and nonlinear dynamic values differed significantly based on which segment of the waveform was used. Other commonly used segment selection methods resulted in significantly higher perturbation values and significantly lower SNR values than those determined by the moving window method (p<0.001). Conclusions The selection of a sample for acoustic analysis can introduce significant inconsistencies into the analysis procedure. The moving window technique may provide more accurate and reliable acoustic measures by objectively identifying the steadiest segment of the voice sample.
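The moving-window selection can be sketched as below, using a crude RMS-envelope perturbation score in place of full jitter/shimmer/SNR computation. The window length, hop size, and synthetic vowel are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def steadiest_window(signal, win, hop):
    """Slide a window across the signal and return the start index of the
    window with the smallest amplitude perturbation (here: coefficient of
    variation of a crude per-frame RMS envelope)."""
    best_start, best_score = 0, np.inf
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        frames = seg[: win - win % 50].reshape(-1, 50)   # 50-sample frames
        env = np.sqrt((frames**2).mean(axis=1))          # amplitude envelope
        score = env.std() / env.mean()                   # shimmer-like measure
        if score < best_score:
            best_start, best_score = start, score
    return best_start

# synthetic 160 Hz "vowel" at fs = 8000: onset ramp, steady middle, offset ramp
fs = 8000
t = np.arange(2 * fs) / fs
amp = np.concatenate([np.linspace(0, 1, fs // 2),    # onset
                      np.ones(fs),                   # steady phonation
                      np.linspace(1, 0, fs // 2)])   # offset
voice = amp * np.sin(2 * np.pi * 160 * t)
start = steadiest_window(voice, win=4000, hop=200)
print(start)   # lands somewhere inside the steady mid-vowel portion
```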

Choi, Seong Hee; Lee, JiYeoun; Sprecher, Alicia J.; Jiang, Jack J.

2011-01-01

70

A novel colonic polyp volume segmentation method for computer tomographic colonography  

NASA Astrophysics Data System (ADS)

Colorectal cancer is the third most common type of cancer. However, this disease can be prevented by detection and removal of precursor adenomatous polyps after diagnosis by experts using computer tomographic colonography (CTC). During CTC diagnosis, the radiologist looks for colon polyps and assesses not only their size but also their malignancy. Segmenting polyp volumes from their complicated growing environment is of great significance for the CTC-based early diagnosis task. Previously, polyp volumes were mainly obtained from manual or semi-automatic delineation by radiologists. As a result, some deviations cannot be avoided, since the polyps are usually small (6~9 mm) and radiologists' experience and knowledge vary from one to another. In order to achieve automatic polyp segmentation carried out by the machine, we proposed a new method based on a colon decomposition strategy. We evaluated our algorithm on both phantom and patient data. Experimental results demonstrate our approach is capable of segmenting small polyps from their complicated growing background.

Wang, Huafeng; Li, Lihong C.; Han, Hao; Song, Bowen; Peng, Hao; Wang, Yunhong; Wang, Lihua; Liang, Zhengrong

2014-03-01

71

A Discussion on the Evaluation of A New Automatic Liver Volume Segmentation Method for Specified CT Image Datasets  

Microsoft Academic Search

This paper presents discussions of the experimental evaluation of a new liver volume segmentation method developed for 10 specified CT image datasets. Precise liver surface segmentation is the first step and one of the major tasks in individual surgical resection virtual reality simulations. There are five major difficulties: firstly, the automatic initialization of liver detection is often unreliable.

Ying Chi; Peter M M Cashman; Fernando Bello; Richard I Kitney

2007-01-01

72

AMASS: algorithm for MSI analysis by semi-supervised segmentation.  

PubMed

Mass Spectrometric Imaging (MSI) is a molecular imaging technique that allows the generation of 2D ion density maps for a large complement of the active molecules present in cells and sectioned tissues. Automatic segmentation of such maps according to patterns of co-expression of individual molecules can be used for discovery of novel molecular signatures (molecules that are specifically expressed in particular spatial regions). However, current segmentation techniques are biased toward the discovery of higher abundance molecules and large segments; they allow limited opportunity for user interaction, and validation is usually performed by similarity to known anatomical features. We describe here a novel method, AMASS (Algorithm for MSI Analysis by Semi-supervised Segmentation). AMASS relies on the discriminating power of a molecular signal instead of its intensity as a key feature, uses an internal consistency measure for validation, and allows significant user interaction and supervision as options. An automated segmentation of entire leech embryo data images resulted in segmentation domains congruent with many known organs, including heart, CNS ganglia, nephridia, nephridiopores, and lateral and ventral regions, each with a distinct molecular signature. Likewise, segmentation of a rat brain MSI slice data set yielded known brain features and provided interesting examples of co-expression between distinct brain regions. AMASS represents a new approach for the discovery of peptide masses with distinct spatial features of expression. Software source code and installation and usage guide are available at http://bix.ucsd.edu/AMASS/ . PMID:21800894

Bruand, Jocelyne; Alexandrov, Theodore; Sistla, Srinivas; Wisztorski, Maxence; Meriaux, Céline; Becker, Michael; Salzet, Michel; Fournier, Isabelle; Macagno, Eduardo; Bafna, Vineet

2011-10-01

73

Analysis techniques for adaptively controlled segmented mirror arrays  

NASA Astrophysics Data System (ADS)

The employment of adaptively controlled segmented mirror architectures has become increasingly common in the development of current astronomical telescopes. Optomechanical analysis of such hardware presents unique issues as compared to that of monolithic mirror designs. Performance analysis issues include simulation of adaptive control, execution of polynomial fitting, calculation of best fit rigid body motions, and prediction of line-of-sight error. The generation of finite element models of individual segments involves challenges associated with correctly representing the geometry of the optical surface. Design issues include segment structural design optimization and optimum placement of actuators. Manufacturing issues include development of actuation inputs during stressed optic polishing. Approaches to all of the above issues are presented and demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.

Michels, Gregory J.; Genberg, Victor L.

2012-07-01

74

Segmental hair analysis and estimation of methamphetamine use pattern.  

PubMed

The aim of this study was to investigate whether the results of segmental hair analysis can be used to estimate patterns of methamphetamine (MA) use. Segmental hair analysis for MA and amphetamine (AP) was performed. Hair was cut into the hair root, consecutive 1 cm segments, and 1-4 cm segments; whole hair was also analyzed. After washing, the hair samples were incubated for 20 h in 1 mL methanol containing 1 % hydrochloric acid. Hair extracts were evaporated and derivatization was performed using trifluoroacetic anhydride in ethyl acetate at 65 °C for 30 min. The derivatized extract was analyzed by gas chromatography/mass spectrometry. The 15 subjects consisted of 13 males and two females, and their ages ranged from 25 to 42 (mean, 32). MA and AP concentrations in the whole hair ranged from 3.00 to 105.10 ng/mg (mean, 34.53) and from 0.05 to 4.76 ng/mg (mean, 2.42), respectively. Based on the analysis of the 1 cm segments, the results were interpreted so as to distinguish between continuous use of MA (n = 10), no recent but previous use of MA (n = 3), and recent but no previous use of MA (n = 2). Furthermore, the individuals were interpreted as light, moderate, and heavy users based on concentration ranges previously published. PMID:22955559

Han, Eunyoung; Yang, Heejin; Seol, Ilung; Park, Yunshin; Lee, Bongwoo; Song, Joon Myong

2013-03-01

75

Issues about axial diffusion during segmental hair analysis.  

PubMed

The detection of a single drug exposure in hair (doping offence, drug-facilitated crime) is based on the presence of the compound of interest in the segment corresponding to the period of the alleged event. However, in some cases, the drug is detected in consecutive segments. As a consequence, interpretation of the results is a challenge that deserves particular attention. Literature evaluation and data obtained from the author's 20-year experience in drug testing in hair are used as the basis to establish a theory to validate the concept of single exposure in authentic forensic cases where the drug is detected in 2 or 3 segments. This experience recommends waiting 4-5 weeks after the alleged event and then collecting strands of hair. Assuming a normal hair growth rate (1 cm/mo), it is advisable to cut the strand into 3 segments of 2 cm to document eventual exposure. Administration of a single dose would be confirmed by the presence of the drug in the proximal 2-cm segment (root) but not in the 2 other segments. However, in the author's daily experience, it was noticed that sometimes (in about 1 of 10 examinations) the drug can be detected in 2 or 3 consecutive segments. Such a disposition was even observed in volunteer experiments in the literature. As was also described for cocaine in early 1996, there is considerable variability in the area over which incorporated drug can be distributed in the hair shaft and in the rate of axial distribution of drug along the hair shaft. This can explain why a small amount of drug, as compared with the concentration in the proximal segment, can be measured in the second segment, as a result of an irregular movement. Another explanation for broadening the band of positive hair from a single dose is that drugs and metabolites are incorporated into hair during formation of the hair shaft via diffusion from sweat and other secretions.
The presence of confounding interferences in the hair matrix or changes in the hair structure due to cosmetic treatments might mislead the final result of hair analysis. To qualify for a single exposure in hair, the author proposes to consider that the highest drug concentration must be detected in the segment corresponding to the period of the alleged event (calculated with a hair growth rate at 1 cm/mo) and that the measured concentration be at least 3 times higher than those measured in the previous or the following segments. This must only be done using scalp hair after cutting the hair directly close to the scalp. PMID:23666571
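The proposed single-exposure criterion (highest concentration in the event segment, at least 3x the neighbouring segments) can be expressed directly in code; the function name and concentration values below are illustrative, not from a real case.

```python
def consistent_with_single_exposure(concs, event_segment):
    """Apply the proposed criterion: the segment covering the alleged event
    (hair growth ~1 cm/month) must hold the highest concentration, and be at
    least 3x higher than the neighbouring segments.
    `concs` lists segment concentrations (ng/mg) from the root outward."""
    peak = concs[event_segment]
    if peak != max(concs):
        return False
    neighbours = [concs[i] for i in (event_segment - 1, event_segment + 1)
                  if 0 <= i < len(concs)]
    return all(peak >= 3 * c for c in neighbours)

# three 2-cm segments cut ~5 weeks after the alleged event;
# a little drug has spread into the second segment by axial diffusion
print(consistent_with_single_exposure([0.60, 0.15, 0.02], event_segment=0))  # True
print(consistent_with_single_exposure([0.60, 0.30, 0.02], event_segment=0))  # False
```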

Kintz, Pascal

2013-06-01

76

Segmentation vs. non-segmentation based neural techniques for cursive word recognition: an experimental analysis  

Microsoft Academic Search

This paper compares segmentation-based and non-segmentation-based techniques for cursive word recognition. In our segmentation-based technique, every word is segmented into characters, chain code features are extracted from the segmented characters, the features are fed to a neural network classifier, and finally the words are constructed using a string-compare algorithm. In our non-segmentation-based technique, the chain code features

Xialong Fan; Brijesh Verma

2001-01-01

77

Chest-wall segmentation in automated 3D breast ultrasound images using thoracic volume classification  

NASA Astrophysics Data System (ADS)

Computer-aided detection (CAD) systems are expected to improve the effectiveness and efficiency of radiologists in reading automated 3D breast ultrasound (ABUS) images. One challenging task in developing CAD is to reduce the large number of false positives, many of which originate from acoustic shadowing caused by ribs. Determining the location of the chest wall in ABUS is therefore necessary in CAD systems to remove these false positives. Additionally, it can be used as an anatomical landmark for inter- and intra-modal image registration. In this work, we extended our previously developed chest-wall segmentation method, which fits a cylinder to automatically detected rib-surface points, by minimizing a cost function that adopts a region cost computed from a thoracic volume classifier to improve segmentation accuracy. We examined the performance on a dataset of 52 images where our previously developed method fails. Using the region-based cost, the average mean distance of the annotated points to the segmented chest wall decreased from 7.57±2.76 mm to 6.22±2.86 mm.

Tan, Tao; van Zelst, Jan; Zhang, Wei; Mann, Ritse M.; Platel, Bram; Karssemeijer, Nico

2014-03-01

78

Analysis of segmented human body scans  

Microsoft Academic Search

Analysis of a dataset of 3D scanned surfaces has presented problems because of incompleteness of the surfaces and because of variance in shape, size and pose. In this paper, a high-resolution generic model is aligned to data in the Civilian American and European Surface Anthropometry Resources (CAESAR) database in order to obtain a consistent parameterization. A Radial Basis

Pengcheng Xi; Won-sook Lee; Chang Shu

2007-01-01

79

Segmentation of interest region in medical volume images using geometric deformable model.  

PubMed

In this paper, we present a new segmentation method for medical volume images using the level set framework. The method was implemented using the surface evolution principle based on the geometric deformable model and level set theory. The speed function in the level set approach consists of a hybrid combination of three integral measures derived from the calculus of variations principle. The terms are defined as robust alignment, active region, and smoothing. These terms help to obtain the precise surface of the target object and prevent the boundary leakage problem. The proposed method has been tested on synthetic and various medical volume images with normal tissue and tumor regions in order to evaluate its performance on visual and quantitative data. The quantitative validation of the proposed segmentation shows a higher Jaccard's measure score (72.52%-94.17%) and lower Hausdorff distance (1.2654 mm-3.1527 mm) than the other methods such as mean speed (67.67%-93.36% and 1.3361 mm-3.4463 mm), mean-variance speed (63.44%-94.72% and 1.3361 mm-3.4616 mm), and edge-based speed (0.76%-42.44% and 3.8010 mm-6.5389 mm). The experimental results confirm that the effectiveness and performance of our method are excellent compared with traditional approaches. PMID:22402196
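The two validation metrics above can be sketched for binary masks as follows; the offset-square example is invented, and `spacing` stands in for the voxel size in mm.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def jaccard(a, b):
    """Jaccard overlap |A ∩ B| / |A ∪ B| of two boolean masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return (a & b).sum() / (a | b).sum()

def hausdorff(a, b, spacing=1.0):
    """Symmetric Hausdorff distance between two masks (in mm), computed
    from the Euclidean distance transform of each mask's complement."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    da = distance_transform_edt(~a, sampling=spacing)  # distance to A
    db = distance_transform_edt(~b, sampling=spacing)  # distance to B
    return max(db[a].max(), da[b].max())

# two 10x10 squares offset by 2 rows on a 1 mm grid
a = np.zeros((20, 20), bool); a[5:15, 5:15] = True
b = np.zeros((20, 20), bool); b[7:17, 5:15] = True
print(round(jaccard(a, b), 3), hausdorff(a, b))   # -> 0.667 2.0
```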

Lee, Myungeun; Cho, Wanhyun; Kim, Sunworl; Park, Soonyoung; Kim, Jong Hyo

2012-05-01

80

Machine learning based vesselness measurement for coronary artery segmentation in cardiac CT volumes  

NASA Astrophysics Data System (ADS)

Automatic coronary centerline extraction and lumen segmentation facilitate the diagnosis of coronary artery disease (CAD), which is a leading cause of death in developed countries. Various coronary centerline extraction methods have been proposed and most of them are based on shortest path computation given one or two end points on the artery. The major variation of the shortest path based approaches is in the different vesselness measurements used for the path cost. An empirically designed measurement (e.g., the widely used Hessian vesselness) is by no means optimal in the use of image context information. In this paper, a machine learning based vesselness is proposed by exploiting the rich domain specific knowledge embedded in an expert-annotated dataset. For each voxel, we extract a set of geometric and image features. The probabilistic boosting tree (PBT) is then used to train a classifier, which assigns a high score to voxels inside the artery and a low score to those outside. The detection score can be treated as a vesselness measurement in the computation of the shortest path. Since the detection score measures the probability of a voxel to be inside the vessel lumen, it can also be used for the coronary lumen segmentation. To speed up the computation, we perform classification only for voxels around the heart surface, which is achieved by automatically segmenting the whole heart from the 3D volume in a preprocessing step. An efficient voxel-wise classification strategy is used to further improve the speed. Experiments demonstrate that the proposed learning based vesselness outperforms the conventional Hessian vesselness in both speed and accuracy. On average, it only takes approximately 2.3 seconds to process a large volume with a typical size of 512x512x200 voxels.
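A minimal sketch of the shortest-path idea: Dijkstra on a 4-connected grid where the step cost is inversely related to vesselness. The toy vesselness map and cost transform are illustrative assumptions, not the paper's PBT-based measurement.

```python
import heapq
import numpy as np

def shortest_path(cost, start, goal):
    """Dijkstra over a 4-connected grid; cost[v] is the price of stepping
    onto voxel v (low inside the vessel, high outside)."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if (y, x) == goal:
            break
        if d > dist[y, x]:
            continue                      # stale queue entry
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost[ny, nx]
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    prev[ny, nx] = (y, x)
                    heapq.heappush(heap, (nd, (ny, nx)))
    node, path = goal, [goal]             # walk predecessors back to start
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# toy "vesselness": a bright horizontal vessel in row 5
vesselness = np.full((10, 30), 0.01)
vesselness[5, :] = 1.0
cost = 1.0 / (vesselness + 1e-6)          # high vesselness -> cheap to traverse
path = shortest_path(cost, (5, 0), (5, 29))
print(all(y == 5 for y, x in path))       # the path stays inside the vessel
```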

Zheng, Yefeng; Loziczonek, Maciej; Georgescu, Bogdan; Zhou, S. Kevin; Vega-Higuera, Fernando; Comaniciu, Dorin

2011-03-01

81

Spatiotemporal Analysis of Face Profiles: Detection, Segmentation, and Registration  

Microsoft Academic Search

We use a two-image approach to construct a 3D human facial model for multimedia applications. The images used are those of faces at direct frontal and side views. The selection of the side view from a sequence of facial images is automatically done by applying a spatiotemporal approach to face profile analysis. The extracted side profile is then segmented based

Behzad Dariush; Sing Bang Kang; Keith Waters

1998-01-01

82

The effect of segment parameter error on gait analysis results  

Microsoft Academic Search

The extent to which errors in predicting body segment parameters (SP) influence biomechanical analysis of human motion is unclear. Therefore, the current study quantitatively evaluated the differences in SP estimates using literature predictive functions and computed the effect of SP variation on the kinetic output of walking. For a group of 15 young males, significant differences (P<0.05) were observed between

D. J Pearsall; P. A Costigan

1999-01-01

83

3D surface analysis and classification in neuroimaging segmentation.  

PubMed

This work emphasizes new algorithms for 3D edge and corner detection used in surface extraction, and a new concept of image segmentation in neuroimaging based on multidimensional shape analysis and classification. We propose using the NIfTI standard for describing input data, which enables interoperability with and enhancement of existing computing tools widely used in neuroimaging research. In the methods section we present our newly developed algorithm for 3D edge and corner detection, together with an algorithm for estimating local 3D shape. The surface of the estimated shape is analyzed and segmented according to kernel shapes. PMID:21755723
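As a minimal illustration of gradient-based 3D edge detection (a much simpler operator than the algorithm described above, which also detects corners and estimates local shape; the names here are assumptions):

```python
import numpy as np

def edges_3d(volume, threshold):
    """Mark voxels whose intensity-gradient magnitude exceeds a
    threshold -- a crude 3D edge map via central differences."""
    gx, gy, gz = np.gradient(volume.astype(float))
    magnitude = np.sqrt(gx**2 + gy**2 + gz**2)
    return magnitude > threshold
```

On a binary volume this flags the one-voxel shell around each object; interior and background voxels have zero gradient and are left unmarked.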

Zagar, Martin; Mlinarić, Hrvoje; Knezović, Josip

2011-06-01

84

Small rural hospitals: an example of market segmentation analysis.  

PubMed

In recent years, market segmentation analysis has shown increased popularity among health care marketers, although marketers tend to focus upon hospitals as sellers. The present analysis suggests that there is merit to viewing hospitals as a market of consumers. Employing a random sample of 741 small rural hospitals, the present investigation sought to determine, through the use of segmentation analysis, the variables associated with hospital success (occupancy). The results of a discriminant analysis yielded a model which classifies hospitals with a high degree of predictive accuracy. Successful hospitals have more beds and employees, and are generally larger and have more resources. However, there was no significant relationship between organizational success and number of services offered by the institution. PMID:10111266
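The discriminant step in such a study can be illustrated with a minimal two-class Fisher discriminant in plain NumPy. Everything here is hypothetical (function names, toy data); the study's actual predictors were hospital characteristics such as bed and employee counts.

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher discriminant: returns the weight vector and
    a midpoint threshold separating class 0 from class 1."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter (sum of the two sample covariances)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw, m1 - m0)
    thresh = w @ (m0 + m1) / 2.0
    return w, thresh

def predict(X, w, thresh):
    """Project onto the discriminant axis and threshold."""
    return (X @ w > thresh).astype(int)
```

A fitted model of this shape is what yields the "high degree of predictive accuracy" for classifying successful versus unsuccessful hospitals.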

Mainous, A G; Shelby, R L

1991-01-01

85

MRI segmentation analysis in temporal lobe and idiopathic generalized epilepsy  

PubMed Central

Background Temporal lobe epilepsy (TLE) and idiopathic generalized epilepsy (IGE) patients have each been associated with extensive brain atrophy findings, yet to date there are no reports of a head-to-head comparison of the two patient groups. Our aim was to assess and compare tissue-specific and structural brain atrophy findings in TLE and IGE patients and in healthy controls (HC). Methods TLE patients were classified as lesional (L-TLE) or non-lesional (NL-TLE) based on the presence or absence of MRI temporal structural abnormalities. High resolution 3 T MRI with automated segmentation by the SIENAX and FIRST tools was performed in a group of patients with temporal lobe epilepsy (11 L-TLE and 15 NL-TLE), in 15 IGE, and in 26 HC. Normal brain volume (NBV), normal grey matter volume (NGMV), normal white matter volume (NWMV), and volumes of subcortical deep grey matter structures were quantified. Using regression analyses, differences between the groups in both volume and left/right asymmetry were evaluated. Additionally, laterality of results was evaluated to separately quantify ipsilateral and contralateral effects in the TLE group. Results All epilepsy groups had significantly lower NBV and NWMV compared to HC (p … volume than HC and IGE (p = 0.001), and all epilepsy groups had significantly lower amygdala volume than HC (p …

2014-01-01

86

Documented Safety Analysis for the B695 Segment  

SciTech Connect

This Documented Safety Analysis (DSA) was prepared for the Lawrence Livermore National Laboratory (LLNL) Building 695 (B695) Segment of the Decontamination and Waste Treatment Facility (DWTF). The report provides comprehensive information on design and operations, including safety programs and safety structures, systems and components to address the potential process-related hazards, natural phenomena, and external hazards that can affect the public, facility workers, and the environment. Consideration is given to all modes of operation, including the potential for both equipment failure and human error. The facilities known collectively as the DWTF are used by LLNL's Radioactive and Hazardous Waste Management (RHWM) Division to store and treat regulated wastes generated at LLNL. RHWM generally processes low-level radioactive waste with no, or extremely low, concentrations of transuranics (e.g., much less than 100 nCi/g). Wastes processed often contain only depleted uranium and beta- and gamma-emitting nuclides, e.g., {sup 90}Sr, {sup 137}Cs, or {sup 3}H. The mission of the B695 Segment centers on container storage, lab-packing, repacking, overpacking, bulking, sampling, waste transfer, and waste treatment. The B695 Segment is used for storage of radioactive waste (including transuranic and low-level), hazardous, nonhazardous, mixed, and other waste. Storage of hazardous and mixed waste in B695 Segment facilities is in compliance with the Resource Conservation and Recovery Act (RCRA). LLNL is operated by the Lawrence Livermore National Security, LLC, for the Department of Energy (DOE). The B695 Segment is operated by the RHWM Division of LLNL. Many operations in the B695 Segment are performed under a Resource Conservation and Recovery Act (RCRA) operation plan, similar to commercial treatment operations with best demonstrated available technologies. 
The buildings of the B695 Segment were designed and built considering such operations, using proven building systems, and keeping them as simple as possible while complying with industry standards and institutional requirements. No operations to be performed in the B695 Segment or building system are considered to be complex. No anticipated future change in the facility mission is expected to impact the extent of safety analysis documented in this DSA.

Laycak, D

2008-09-11

87

Linear test bed. Volume 1: Test bed no. 1. [aerospike test bed with segmented combustor  

NASA Technical Reports Server (NTRS)

The objective of the Linear Test Bed program was to design, fabricate, and test an advanced aerospike test bed employing the segmented combustor concept. The system, designated a linear aerospike system, consists of a thrust chamber assembly, a power package, and a thrust frame. It was designed as an experimental system to demonstrate the feasibility of the linear aerospike-segmented combustor concept. The overall dimensions are 120 inches long by 120 inches wide by 96 inches high. The propellants are liquid oxygen/liquid hydrogen. The system was designed to operate at a chamber pressure of 1200 psia and a mixture ratio of 5.5. At the design conditions, the sea level thrust is 200,000 pounds. The complete program, including concept selection, design, fabrication, component test, system test, supporting analysis, and posttest hardware inspection, is described.

1972-01-01

88

Atlas-based automatic segmentation of head and neck organs at risk and nodal target volumes: a clinical validation  

PubMed Central

Background Intensity modulated radiotherapy for head and neck cancer necessitates accurate definition of organs at risk (OAR) and clinical target volumes (CTV). This crucial step is time consuming and prone to inter- and intra-observer variations. Automatic segmentation by atlas deformable registration may help to reduce time and variations. We aim to test a new commercial atlas algorithm for automatic segmentation of OAR and CTV in both ideal and clinical conditions. Methods The updated Brainlab automatic head and neck atlas segmentation was tested on 20 patients: 10 cN0-stages (ideal population) and 10 unselected N-stages (clinical population). Following manual delineation of OAR and CTV, automatic segmentation of the same set of structures was performed and afterwards manually corrected. Dice Similarity Coefficient (DSC), Average Surface Distance (ASD) and Maximal Surface Distance (MSD) were calculated for “manual to automatic” and “manual to corrected” volume comparisons. Results In both groups, automatic segmentation saved about 40% of the corresponding manual segmentation time. This effect was more pronounced for OAR than for CTV. Editing the automatically obtained contours significantly improved DSC, ASD and MSD. Large distortions of normal anatomy or lack of iodine contrast were the limiting factors. Conclusions The updated Brainlab atlas-based automatic segmentation tool for head and neck cancer patients is time-saving but still necessitates review and correction by an expert.
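The comparison metrics named in this record can be computed directly from binary masks and boundary point sets. The sketch below (brute-force pairwise distances, assumed function names) stands in for the study's evaluation code:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def surface_distances(a_pts, b_pts):
    """Average (ASD) and maximal (MSD) symmetric surface distance
    between two (n, dim) arrays of boundary points."""
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    d_ab, d_ba = d.min(axis=1), d.min(axis=0)  # nearest-neighbour dists
    asd = 0.5 * (d_ab.mean() + d_ba.mean())
    msd = max(d_ab.max(), d_ba.max())
    return asd, msd
```

The brute-force distance matrix is O(n·m); real evaluation tools typically use k-d trees or distance transforms for large surfaces.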

2013-01-01

89

Method and apparatus of segmenting an object in a data set and of determination of the volume of segmented object  

US Patent & Trademark Office Database

The invention relates to a method of segmenting an object in a data set, wherein the object is initially segmented, resulting in a first set (N₀) of voxels. An erosion operation is performed on the first set (N₀) of voxels, resulting in an eroded set (N₋) of voxels. A dilation operation is performed on the eroded set (N₋) of voxels, resulting in a dilated set (N₊) of voxels. The erosion operation depends on a variable erosion threshold (Θ₋), and the dilation operation depends on a variable dilation threshold (Θ₊).
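In SciPy terms, the erode-then-dilate refinement resembles a morphological opening. The sketch below uses fixed structuring elements and iteration counts in place of the patent's variable erosion/dilation thresholds, and the function names are illustrative:

```python
import numpy as np
from scipy import ndimage

def refine_and_measure(mask, voxel_volume_mm3, iterations=1):
    """Erode an initial binary segmentation (detaching spurious
    boundary voxels), dilate the surviving core back out, and
    report the volume of the refined object as
    voxel count x single-voxel volume."""
    eroded = ndimage.binary_erosion(mask, iterations=iterations)
    refined = ndimage.binary_dilation(eroded, iterations=iterations)
    return refined, refined.sum() * voxel_volume_mm3
```

Opening removes components smaller than the structuring element (e.g., isolated mislabeled voxels) while largely preserving the core object, after which the volume estimate follows from a simple voxel count.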

2010-07-20

90

Survival and Prognostic Analysis of Adjacent Segments after Spinal Fusion  

PubMed Central

Background To examine the survival function and prognostic factors of the adjacent segments based on a second operation after thoracolumbar spinal fusion. Methods This retrospective study reviewed 3,188 patients (3,193 cases) who underwent a thoracolumbar spinal fusion at the author's hospital. Survival analysis was performed on the event of a second operation due to adjacent segment degeneration. The prognostic factors, such as the cause of the disease, surgical procedure, age, gender and number of fusion segments, were examined. Sagittal alignment and the location of the adjacent segment were measured in the second operation cases, and their association with the types of degeneration was investigated. Results One hundred seven patients, 112 cases (3.5%), underwent a second operation due to adjacent segment degeneration. The survival function was 97% and 94% at 5 and 10 years after surgery, respectively, showing a 0.6% linear reduction per year. The significant prognostic factors were old age, degenerative disease, multiple-level fusion and male. Among the second operation cases, the locations of the adjacent segments were the thoracolumbar junctional area and lumbosacral area in 11.6% and 88.4% of cases, respectively. Sagittal alignment was negative or neutral, positive and strongly positive in 47.3%, 38.9%, and 15.7%, respectively. Regarding the type of degeneration, spondylolisthesis or kyphosis, retrolisthesis, and neutral balance in the sagittal view was noted in 13.4%, 36.6%, and 50% of cases, respectively. There was a significant difference according to the location of the adjacent segment (p = 0.000) and sagittal alignment (p = 0.041). Conclusions The survival function of the adjacent segments was 94% at 10 years, which had decreased linearly by 0.6% per a year. The likelihood of a second operation was high in those with old age, degenerative disease, multiple-level fusion and male. 
There was a tendency for the type of degeneration to be spondylolisthesis or kyphosis in cases of the thoracolumbar junctional area and strongly positive sagittal alignment, but retrolisthesis in cases of the lumbosacral area and neutral or positive sagittal alignment.

Ahn, Dong Ki; Choi, Dae Jung; Kim, Kwan Soo; Yang, Seung Jin

2010-01-01

91

Image segmentation and registration for the analysis of joint motion from 3D MRI  

NASA Astrophysics Data System (ADS)

We report an image segmentation and registration method for studying joint morphology and kinematics from in vivo MRI scans and its application to the analysis of ankle joint motion. Using an MR-compatible loading device, a foot was scanned in a single neutral and seven dynamic positions including maximal flexion, rotation and inversion/eversion. A segmentation method combining graph cuts and level sets was developed which allows a user to interactively delineate 14 bones in the neutral position volume in less than 30 minutes total, including less than 10 minutes of user interaction. In the subsequent registration step, a separate rigid body transformation for each bone is obtained by registering the neutral position dataset to each of the dynamic ones, which produces an accurate description of the motion between them. We have processed six datasets, including 3 normal and 3 pathological feet. For validation our results were compared with those obtained from 3DViewnix, a semi-automatic segmentation program, and achieved good agreement in volume overlap ratios (mean: 91.57%, standard deviation: 3.58%) for all bones. Our tool requires only 1/50 and 1/150 of the user interaction time required by 3DViewnix and NIH Image Plus, respectively, an improvement that has the potential to make joint motion analysis from MRI practical in research and clinical applications.
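A rigid per-bone transform of the kind described can be estimated from corresponding point sets with the Kabsch method (SVD of the cross-covariance). This is a generic sketch under that assumption, not the paper's image-based registration:

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform (Kabsch): find R, t such that
    R @ p + t best matches q for corresponding rows of P and Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])  # reflection guard
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Applying this independently to points sampled from each bone in the neutral and dynamic scans yields one rigid transform per bone, i.e., a description of the inter-position motion.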

Hu, Yangqiu; Haynor, David R.; Fassbind, Michael; Rohr, Eric; Ledoux, William

2006-03-01

92

Segmented canonical discriminant analysis of in situ hyperspectral data for identifying 13 urban tree species  

Microsoft Academic Search

A total of 458 in situ hyperspectral data were collected from 13 urban tree species in the City of Tampa, FL, USA using a spectrometer. The 13 species include 11 broadleaf and two conifer species. Three different techniques, segmented canonical discriminant analysis (CDA), segmented principal component analysis (PCA) and segmented stepwise discriminate analysis (SDA), were applied and compared for dimension

Ruiliang Pu; Desheng Liu

2011-01-01

93

Demonstration of 156 Inch Motor with Segmented Fiberglass Case and Ablative Nozzle. Volume I--Motor Design and Fabrication.  

National Technical Information Service (NTIS)

The objective of this program was to successfully test fire a one-million-pound-thrust-class, 156 in. diameter, segmented FRP case, solid propellant rocket motor, followed by the hydroburst test of the fiberglass case. Volume 1 contains a program summary; d...

R. F. Zeigler; T. Walker

1968-01-01

94

Comparative assessment of statistical brain MR image segmentation algorithms and their impact on partial volume correction in PET  

Microsoft Academic Search

Magnetic resonance imaging (MRI)-guided partial volume effect correction (PVC) in brain positron emission tomography (PET) is now a well-established approach to compensate the large bias in the estimate of regional radioactivity concentration, especially for small structures. The accuracy of the algorithms developed so far is, however, largely dependent on the performance of segmentation methods partitioning MRI brain data into its

Habib Zaidi; Torsten Ruest; Frederic Schoenahl; Marie-Louise Montandon

2006-01-01

95

Influence of cold walls on PET image quantification and volume segmentation: A phantom study  

SciTech Connect

Purpose: Commercially available fillable plastic inserts used in positron emission tomography (PET) phantoms usually have thick plastic walls separating their content from the background activity. These “cold” walls can modify the intensity values of neighboring active regions due to the partial volume effect, resulting in errors in the estimation of standardized uptake values (SUVs). Numerous papers suggest that this is an issue for phantom work simulating tumor tissue, quality control, and calibration work. This study aims to investigate the influence of cold plastic wall thickness on the quantification of 18F-fluorodeoxyglucose, on the image activity recovery, and on the performance of advanced automatic segmentation algorithms for the delineation of active regions delimited by plastic walls. Methods: A commercial set of six spheres of different diameters was replicated using a manufacturing technique which achieves a reduction in plastic wall thickness of up to 90%, while keeping the same internal volume. Both sets of thin- and thick-wall inserts were imaged simultaneously in a custom phantom for six different tumor-to-background ratios (TBRs). Intensity values were compared in terms of the mean and maximum SUV in the spheres and the mean SUV of the hottest 1 ml region (SUVmean, SUVmax, and SUVpeak). The recovery coefficient (RC) was also derived for each sphere. The results were compared against the values predicted by a theoretical model of the PET-intensity profiles for the same TBRs, sphere sizes, and wall thicknesses. In addition, ten automatic segmentation methods, written in house, were applied to both thin- and thick-wall inserts. The contours obtained were compared to a computed tomography derived gold standard (“ground truth”) using five different accuracy metrics. Results: The authors' results showed that thin-wall inserts achieved significantly higher SUVmean, SUVmax, and RC values (up to 25%, 16%, and 25% higher, respectively) compared to thick-wall inserts, which was in agreement with the theory. This effect decreased with increasing sphere size and TBR, and resulted in substantial (>5%) differences between thin- and thick-wall inserts for spheres up to 30 mm diameter and TBR up to 4. Thinner plastic walls were also shown to significantly improve the delineation accuracy of the majority of the segmentation methods tested, by increasing the proportion of lesion voxels detected, although the errors in image quantification remained non-negligible. Conclusions: This study quantified the significant effect of a 90% reduction in insert wall thickness on SUV quantification and PET-based boundary detection. Mean SUVs inside the inserts and recovery coefficients were particularly affected by the presence of thick cold walls, as predicted by the theoretical approach. The accuracy of some delineation algorithms was also significantly improved by the introduction of thin-wall inserts instead of thick-wall inserts. This study demonstrates the risk of errors deriving from the use of cold-wall inserts to assess and compare the performance of PET segmentation methods.
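The intensity metrics used in this study can be reproduced for a given insert mask along the following lines. SUVpeak is approximated here by the hottest N voxels rather than an exact 1 ml sphere, and the names are illustrative assumptions:

```python
import numpy as np

def sphere_metrics(img, mask, true_uptake, peak_voxels=32):
    """SUVmean, SUVmax, an approximate SUVpeak (mean of the hottest
    voxels), and the recovery coefficient RC = SUVmean / true uptake
    for one insert region."""
    vals = img[mask]
    suv_mean = float(vals.mean())
    suv_max = float(vals.max())
    suv_peak = float(np.sort(vals)[-peak_voxels:].mean())
    rc = suv_mean / true_uptake
    return suv_mean, suv_max, suv_peak, rc
```

An RC below 1 for a region of known uniform activity is exactly the partial-volume loss the study attributes, in part, to the cold walls.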

Berthon, B.; Marshall, C. (Wales Research and Diagnostic Positron Emission Tomography Imaging Centre, Cardiff CF14 4XN, United Kingdom); Edwards, A.; Spezi, E. (Department of Medical Physics, Velindre Cancer Centre, Cardiff CF14 2TL, United Kingdom); Evans, M. (Velindre Cancer Centre, Cardiff CF14 2TL, United Kingdom)

2013-08-15

96

An analysis of segmentation dynamics throughout embryogenesis in the centipede Strigamia maritima  

PubMed Central

Background Most segmented animals add segments sequentially as the animal grows. In vertebrates, segment patterning depends on oscillations of gene expression coordinated as travelling waves in the posterior, unsegmented mesoderm. Recently, waves of segmentation gene expression have been clearly documented in insects. However, it remains unclear whether cyclic gene activity is widespread across arthropods, and possibly ancestral among segmented animals. Previous studies have suggested that a segmentation oscillator may exist in Strigamia, an arthropod only distantly related to insects, but further evidence is needed to document this. Results Using the genes even skipped and Delta as representative of genes involved in segment patterning in insects and in vertebrates, respectively, we have carried out a detailed analysis of the spatio-temporal dynamics of gene expression throughout the process of segment patterning in Strigamia. We show that a segmentation clock is involved in segment formation: most segments are generated by cycles of dynamic gene activity that generate a pattern of double segment periodicity, which is only later resolved to the definitive single segment pattern. However, not all segments are generated by this process. The most posterior segments are added individually from a localized sub-terminal area of the embryo, without prior pair-rule patterning. Conclusions Our data suggest that dynamic patterning of gene expression may be widespread among the arthropods, but that a single network of segmentation genes can generate either oscillatory behavior at pair-rule periodicity or direct single segment patterning, at different stages of embryogenesis.

2013-01-01

97

High volume data storage architecture analysis  

NASA Technical Reports Server (NTRS)

A High Volume Data Storage Architecture Analysis was conducted. The results, presented in this report, will be applied to problems of high volume data requirements such as those anticipated for the Space Station Control Center. High volume data storage systems at several different sites were analyzed for archive capacity, storage hierarchy and migration philosophy, and retrieval capabilities. Proposed architectures were solicited from the sites selected for in-depth analysis. Model architectures for a hypothetical data archiving system, for a high speed file server, and for high volume data storage are attached.

Malik, James M.

1990-01-01

98

Scatter segmentation in dynamic SPECT images using principal component analysis  

NASA Astrophysics Data System (ADS)

Dynamic single photon emission computed tomography (dSPECT) provides time-varying spatial information about changes of tracer distribution in the body from data acquired using a standard (single slow rotation) protocol. Variations of tracer distribution observed in the images might be due to physiological processes in the body, but may also stem from reconstruction artefacts. These two possibilities are not easily separated because of the highly underdetermined nature of the dynamic reconstruction problem. Since it is expected that temporal changes in tracer distribution may carry important diagnostic information, the analysis of dynamic SPECT images should consider and use this additional information. In this paper we present a segmentation scheme for aggregating voxels with similar time activity curves (TACs). Voxel aggregates are created through region merging based on a similarity criterion on a reduced set of features, which is derived after transformation into eigenspace. Region merging was carried out on dSPECT images from simulated and patient myocardial perfusion studies using various stopping criteria and ranges of accumulated variances in eigenspace. Results indicate that segmentation clearly separates heart and liver tissues from the background. The segmentation quality did not change significantly if more than 99% of the variance was incorporated into the feature vector. The heart behaviour followed an expected exponential decay curve while some variation of time behaviour in liver was observed. Scatter artefacts from photons originating from liver could be identified in long as well as in short studies.
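The eigenspace feature reduction described above, keeping enough principal components to cover a chosen fraction (here 99%) of the variance in the voxel time activity curves, might be sketched as follows. This is a plain-SVD illustration with assumed names; the paper's region-merging criterion is not reproduced:

```python
import numpy as np

def tac_features(tacs, var_frac=0.99):
    """Project time-activity curves into eigenspace, keeping the
    smallest number of components whose cumulative variance
    fraction reaches var_frac.

    tacs: (n_voxels, n_time) array, one TAC per voxel.
    Returns (features, k): reduced features and component count.
    """
    X = tacs - tacs.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s**2 / (s**2).sum()
    k = int(np.searchsorted(np.cumsum(var), var_frac)) + 1
    return X @ Vt[:k].T, k
```

Region merging would then compare neighbouring voxels (or regions) by similarity of these reduced feature vectors instead of the full TACs.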

Toennies, Klaus D.; Celler, Anna; Blinder, Stephan; Møller, Torsten; Harrop, Ronald

2003-05-01

99

Study of tracking and data acquisition system for the 1990's. Volume 4: TDAS space segment architecture  

NASA Technical Reports Server (NTRS)

Tracking and data acquisition system (TDAS) requirements, TDAS architectural goals, enhanced TDAS subsystems, constellation and networking options, TDAS spacecraft options, crosslink implementation, baseline TDAS space segment architecture, and treat model development/security analysis are addressed.

Orr, R. S.

1984-01-01

100

Ionogram analysis using fuzzy segmentation and connectedness techniques  

NASA Astrophysics Data System (ADS)

We present a new procedure for the analysis of ionograms that evolves from methods developed for image analysis and utilizes techniques based on the concepts of fuzzy segmentation and connectedness. Ionogram traces are often not "crisply" defined, and we demonstrate that it is possible to approximate them as fuzzy subsets within the two-dimensional space defined by the time-of-flight and the radio frequency. A real number between 0 and 1 is assigned to each pixel in an ionogram, thereby defining the membership of that pixel to each of the fuzzy subsets, effectively creating a "gray scale" ionogram. In this context, ionogram analysis becomes a problem in fuzzy geometry, and various geometrical properties, including the topological concepts of connectedness, adjacency, height, width, and major axis, can be defined. It is shown that not only does the fuzzy segmentation process separate signals from the chaotic noise background that often characterizes ionograms, but that it can also be applied to classify ionospheric echoes according to standard nomenclature, e.g., normal E, F, or Es layers. Furthermore, in reference to the skeleton or thinning extraction procedures employed in imaging processing, the fuzzy connectedness between echoes in selected segments can be used to determine the primary layers that are characteristic of vertical incidence ionospheric reflection. This information can be provided as input to automatic scaling or true-height inversion routines, which can then be used to derive either the standard URSI set of ionospheric parameters or the electron density distribution in the overhead ionosphere, or both. This fuzzy algorithm approach has been successfully applied to midlatitude ionogram data from advanced digital ionospheric sounders operated by the National Central University and Utah State University.
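Fuzzy connectedness between two pixels is commonly defined as the strength of the strongest path joining them, where a path's strength is its weakest membership value. A minimal 2-D max-min propagation from a seed pixel, with hypothetical names rather than the authors' code, looks like:

```python
import heapq
import numpy as np

def fuzzy_connectedness(membership, seed):
    """Fuzzy connectedness of every pixel to a seed: the best
    achievable path strength, where a path's strength is the
    smallest membership value along it (max-min propagation)."""
    conn = np.zeros_like(membership)
    conn[seed] = membership[seed]
    heap = [(-conn[seed], seed)]          # max-heap via negation
    while heap:
        neg, p = heapq.heappop(heap)
        c = -neg
        if c < conn[p]:
            continue                       # stale entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            q = (p[0] + dr, p[1] + dc)
            if (0 <= q[0] < membership.shape[0]
                    and 0 <= q[1] < membership.shape[1]):
                s = min(c, membership[q])
                if s > conn[q]:
                    conn[q] = s
                    heapq.heappush(heap, (-s, q))
    return conn
```

Thresholding `conn` then yields the pixels "fuzzily connected" to the seeded trace, which is how an echo can be separated from a noisy background even when its boundary is not crisp.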

Tsai, Lung-Chih; Berkey, Frank T.

2000-09-01

101

Three dimensional level set based semiautomatic segmentation of atherosclerotic carotid artery wall volume using 3D ultrasound imaging  

NASA Astrophysics Data System (ADS)

3D segmentation of carotid plaque from ultrasound (US) images is challenging due to image artifacts and poor boundary definition. Semiautomatic segmentation algorithms for calculating vessel wall volume (VWV) have been proposed for the common carotid artery (CCA) but have not been applied to plaques in the internal carotid artery (ICA). In this work, we describe a 3D segmentation algorithm that is robust to shadowing and missing boundaries. Our algorithm uses a distance-regularized level set method with edge- and region-based energy to segment the adventitial wall boundary (AWB) and lumen-intima boundary (LIB) of plaques in the CCA, ICA and external carotid artery (ECA). The algorithm is initialized by manually placing points on the boundary of a subset of transverse slices with an interslice distance of 4 mm. We propose a novel user-defined stopping-surface-based energy to prevent leaking of the evolving surface across poorly defined boundaries. Validation was performed against manual segmentation using 3D US volumes acquired from five asymptomatic patients with carotid stenosis using a linear 4D probe. A pseudo gold-standard boundary was formed from manual segmentation by three observers. The Dice similarity coefficient (DSC), Hausdorff distance (HD) and modified HD (MHD) were used to compare the algorithm results against the pseudo gold-standard on 1205 cross-sectional slices from five 3D US image sets. The algorithm showed good agreement with the pseudo gold-standard boundary, with mean DSC of 93.3% (AWB) and 89.82% (LIB); mean MHD of 0.34 mm (AWB) and 0.24 mm (LIB); and mean HD of 1.27 mm (AWB) and 0.72 mm (LIB). The proposed 3D semiautomatic segmentation is a first step towards full characterization of 3D plaque progression and longitudinal monitoring.

Hossain, Md. Murad; AlMuhanna, Khalid; Zhao, Limin; Lal, Brajesh K.; Sikdar, Siddhartha

2014-03-01

102

Improvement of partial volume segmentation for brain tissue on diffusion tensor images using multiple-tensor estimation.  

PubMed

To improve evaluations of cortical and subcortical diffusivity in neurological diseases, it is necessary to improve the accuracy of brain diffusion tensor imaging (DTI) data segmentation. The conventional partial volume segmentation method fails to classify voxels with multiple white matter (WM) fiber orientations such as fiber-crossing regions. Our purpose was to improve the performance of segmentation by taking into account the partial volume effects due to both multiple tissue types and multiple WM fiber orientations. We quantitatively evaluated the overall performance of the proposed method using digital DTI phantom data. Moreover, we applied our method to human DTI data, and compared our results with those of a conventional method. In the phantom experiments, the conventional method and proposed method yielded almost the same root mean square error (RMSE) for gray matter (GM) and cerebrospinal fluid (CSF), while the RMSE in the proposed method was smaller than that in the conventional method for WM. The volume overlap measures between our segmentation results and the ground truth of the digital phantom were more than 0.8 in all three tissue types, and were greater than those in the conventional method. In visual comparisons for human data, the WM/GM/CSF regions obtained using our method were in better agreement with the corresponding regions depicted in the structural image than those obtained using the conventional method. The results of the digital phantom experiment and human data demonstrated that our method improved accuracy in the segmentation of brain tissue data on DTI compared to the conventional method. PMID:23589185

Kumazawa, Seiji; Yoshiura, Takashi; Honda, Hiroshi; Toyofuku, Fukai

2013-12-01

103

Bifilar Analysis Study, Volume 1.  

National Technical Information Service (NTIS)

A coupled rotor/bifilar/airframe analysis was developed and utilized to study the dynamic characteristics of the centrifugally tuned, rotor-hub-mounted, bifilar vibration absorber. The analysis contains the major components that impact the bifilar absorbe...

W. Miao; T. Mouzakis

1980-01-01

104

Multicomponent image segmentation: a comparative analysis between a hybrid genetic algorithm and self-organizing maps  

Microsoft Academic Search

Image segmentation is an essential process in image analysis. Several methods have been developed to segment multicomponent images and the success of these methods depends on the characteristics of the acquired image and the percentage of imperfections in the process of its acquisition. Many of the segmentation methods are parametric, which means that many parameters need to be computed or

M. M. Awad; K. Chehdi; A. Nasri

2009-01-01

105

Segmental chloride and fluid handling during correction of chloride-depletion alkalosis without volume expansion in the rat.  

PubMed Central

To determine whether chloride-depletion metabolic alkalosis (CDA) can be corrected by provision of chloride without volume expansion or intranephronal redistribution of fluid reabsorption, CDA was produced in Sprague-Dawley rats by peritoneal dialysis against 0.15 M NaHCO3; controls (CON) were dialyzed against Ringer's bicarbonate. Animals were infused with isotonic solutions containing the same Cl and total CO2 (tCO2) concentrations as in postdialysis plasma at rates shown to be associated with slight but stable volume contraction. During the subsequent 6 h, serum Cl and tCO2 concentrations remained stable and normal in CON and corrected towards normal in CDA; urinary chloride excretion was less and bicarbonate excretion greater than those in CON during this period. Micropuncture and microinjection studies were performed in the 3rd h after dialysis. Plasma volumes determined by 125I-albumin were not different. Inulin clearance and fractional chloride excretion were lower (P < 0.05) in CDA. Superficial nephron glomerular filtration rate determined from distal puncture sites was lower (P < 0.02) in CDA (27.9 ± 2.3 nl/min) compared with that in CON (37.9 ± 2.6). Fractional fluid and chloride reabsorption in the proximal convoluted tubule and within the loop segment did not differ. Fractional chloride delivery to the early distal convolution did not differ, but that out of this segment was less (P < 0.01) in group CDA. Urinary recovery of 36Cl injected into the collecting duct segment was lower (P < 0.01) in CDA (CON 74 ± 3; CDA 34 ± 4%). These data show that CDA can be corrected by the provision of chloride without volume expansion or alterations in the intranephronal distribution of fluid reabsorption. Enhanced chloride reabsorption in the collecting duct segment, and possibly in the distal convoluted tubule, contributes importantly to this correction.

Galla, J H; Bonduris, D N; Dumbauld, S L; Luke, R G

1984-01-01

106

3-D segmentation of retinal blood vessels in spectral-domain OCT volumes of the optic nerve head  

NASA Astrophysics Data System (ADS)

Segmentation of retinal blood vessels can provide important information for detecting and tracking retinal vascular diseases including diabetic retinopathy, arterial hypertension, arteriosclerosis and retinopathy of prematurity (ROP). Many studies on 2-D segmentation of retinal blood vessels from a variety of medical images have been performed. However, 3-D segmentation of retinal blood vessels from spectral-domain optical coherence tomography (OCT) volumes, which is capable of providing geometrically accurate vessel models, to the best of our knowledge, has not been previously studied. The purpose of this study is to develop and evaluate a method that can automatically detect 3-D retinal blood vessels from spectral-domain OCT scans centered on the optic nerve head (ONH). The proposed method utilized a fast multiscale 3-D graph search to segment retinal surfaces as well as a triangular mesh-based 3-D graph search to detect retinal blood vessels. An experiment on 30 ONH-centered OCT scans (15 right eye scans and 15 left eye scans) from 15 subjects was performed, and the mean unsigned error in 3-D of the computer segmentations compared with the independent standard obtained from a retinal specialist was 3.4 ± 2.5 voxels (0.10 ± 0.07 mm).

Lee, Kyungmoo; Abràmoff, Michael D.; Niemeijer, Meindert; Garvin, Mona K.; Sonka, Milan

2010-03-01

107

Three-dimensional analysis tool for segmenting and measuring the structure of telomeres in mammalian nuclei  

NASA Astrophysics Data System (ADS)

Quantitative analysis in combination with fluorescence microscopy calls for innovative digital image measurement tools. We have developed a three-dimensional tool for segmenting and analyzing FISH-stained telomeres in interphase nuclei. After deconvolution of the images, we segment the individual telomeres and measure a distribution parameter we call ρT. This parameter describes whether the telomeres are distributed in a sphere-like volume (ρT ≈ 1) or in a disk-like volume (ρT >> 1). Because of the statistical nature of this parameter, we have to correct for the fact that we do not have an infinite number of telomeres when calculating it. In this study we show a way to do this correction. After sorting mouse lymphocytes, calculating ρT, and applying the correction introduced in this paper, we show a significant difference between nuclei in G2 and nuclei in either G0/G1 or S phase. The mean values of ρT for G0/G1, S and G2 are 1.03, 1.02 and 13 respectively.
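A crude version of such a sphere-versus-disk distribution parameter can be sketched. This stand-in uses per-axis coordinate variances on roughly axis-aligned synthetic point sets, rather than the paper's principal-axis derivation of ρT:

```python
def shape_ratio(points):
    """Crude stand-in for a rho_T-like parameter: the ratio of the
    largest to the smallest coordinate-axis variance of a 3-D point
    set. It is ~1 for a sphere-like cloud and >>1 for a disk-like
    (flattened) cloud, assuming the cloud is roughly axis-aligned."""
    n = len(points)
    means = [sum(p[i] for p in points) / n for i in range(3)]
    var = [sum((p[i] - means[i]) ** 2 for p in points) / n for i in range(3)]
    return max(var) / min(var)

# synthetic telomere positions: cube corners vs. a flattened cube
sphere = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
disk = [(x, y, 0.1 * z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
```

A principal-component version would replace the coordinate axes with the eigenvectors of the covariance matrix, making the measure rotation-invariant.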

Vermolen, Bart J.; Young, Ian T.; Chuang, Alice; Wark, Landon; Chuang, Tony; Mai, Sabine; Garini, Yuval

2005-03-01

108

Image segmentation by iterative parallel region growing with application to data compression and image analysis  

NASA Technical Reports Server (NTRS)

Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of the approach on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
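The "globally best merges first" idea can be illustrated in miniature. This is a hedged one-dimensional sketch of iterative best-merge region growing, not the MPP implementation described in the record:

```python
def best_merge_segmentation(values, n_segments):
    """Iteratively merge the globally most similar pair of adjacent
    segments (smallest difference of segment means) until n_segments
    remain: the 'globally best merges first' idea, sketched in 1-D.
    The result is independent of processing order by construction."""
    segs = [[v] for v in values]  # each segment holds its member values
    mean = lambda s: sum(s) / len(s)
    while len(segs) > n_segments:
        # find the adjacent pair with the most similar means
        i = min(range(len(segs) - 1),
                key=lambda k: abs(mean(segs[k]) - mean(segs[k + 1])))
        segs[i:i + 2] = [segs[i] + segs[i + 1]]
    return segs

segs = best_merge_segmentation([1, 1.1, 1.2, 5, 5.1, 9], 3)
```

For images, "adjacent" becomes spatial adjacency of 2-D regions, and a heap or parallel reduction is needed to make the global minimum search tractable.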

Tilton, James C.

1988-01-01

109

Heart sound segmentation of pediatric auscultations using wavelet analysis.  

PubMed

Auscultation is widely applied in clinical activity; nonetheless, sound interpretation is dependent on clinician training and experience. Heart sound features such as spatial loudness, relative amplitude, murmurs, and localization of each component may be indicative of pathology. In this study we propose a segmentation algorithm to extract heart sound components (S1 and S2) based on their time and frequency characteristics. This algorithm takes advantage of knowledge of the heart cycle times (systolic and diastolic periods) and of the spectral characteristics of each component, through wavelet analysis. Data collected in a clinical environment and annotated by a clinician were used to assess the algorithm's performance. Heart sound components were correctly identified in 99.5% of the annotated events. S1 and S2 detection rates were 90.9% and 93.3% respectively. The median difference between annotated and detected events was 33.9 ms. PMID:24110586
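Once an amplitude envelope is available, locating candidate S1/S2 lobes reduces to thresholded peak picking. The sketch below is a deliberately simplified stand-in for the paper's wavelet-based detector, on an invented envelope:

```python
def find_sound_lobes(envelope, threshold):
    """Locate candidate heart-sound components (e.g. S1/S2) as local
    maxima of an amplitude envelope above a threshold. A real detector
    would also use cycle timing to label which lobe is S1 and which S2."""
    peaks = []
    for i in range(1, len(envelope) - 1):
        if (envelope[i] > threshold
                and envelope[i] >= envelope[i - 1]
                and envelope[i] > envelope[i + 1]):
            peaks.append(i)
    return peaks

env = [0, 1, 5, 9, 5, 1, 0, 0, 2, 6, 2, 0]  # two sound lobes
lobes = find_sound_lobes(env, 3)
```

The systolic interval (S1 to S2) being shorter than the diastolic interval (S2 to next S1) is what lets timing disambiguate the two components.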

Castro, Ana; Vinhoza, Tiago T V; Mattos, Sandra S; Coimbra, Miguel T

2013-01-01

110

Evaluation of atlas based auto-segmentation for head and neck target volume delineation in adaptive/replan IMRT  

NASA Astrophysics Data System (ADS)

IMRT for head and neck patients requires clinicians to delineate clinical target volumes (CTV) on a planning CT (>2 h/patient). When patients require a replan CT, CTVs must be re-delineated. This work assesses the performance of atlas-based auto-segmentation (ABAS), which uses deformable image registration between planning and replan CTs to auto-segment CTVs on the replan CT, based on the planning contours. Fifteen patients with planning and replan CTs were selected. One clinician delineated CTVs on the planning CTs and up to three clinicians delineated CTVs on the replan CTs. Replan CT volumes were auto-segmented using ABAS, with the manual CTVs from the planning CT as an atlas. ABAS CTVs were edited manually to make them clinically acceptable. Clinicians were timed to estimate the savings from using ABAS. CTVs were compared using the Dice similarity coefficient (DSC) and mean distance to agreement (MDA). Mean inter-observer variability (DSC > 0.79 and MDA < 2.1 mm) was found to be greater than intra-observer variability (DSC > 0.91 and MDA < 1.5 mm). Comparing ABAS to manual CTVs gave DSC = 0.86 and MDA = 2.07 mm. Once edited, ABAS volumes agreed more closely with the manual CTVs (DSC = 0.87 and MDA = 1.87 mm). The mean clinician time required to produce CTVs was reduced from 169 min to 57 min when using ABAS. ABAS segments volumes with accuracy close to inter-observer variability; however, the volumes require some editing before clinical use. Using ABAS reduces contouring time by a factor of three.
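The DSC used throughout these comparisons is straightforward to compute on voxel sets. A minimal sketch with made-up contours (the real comparison runs over 3-D voxel masks):

```python
def dice(a, b):
    """Dice similarity coefficient of two voxel sets:
    DSC = 2|A n B| / (|A| + |B|); 1.0 means perfect overlap."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

# toy 2-D "contours" as sets of voxel coordinates (illustrative only)
manual = {(0, 0), (0, 1), (1, 0), (1, 1)}
auto = {(0, 1), (1, 0), (1, 1), (2, 1)}
score = dice(manual, auto)  # 2*3 / (4+4) = 0.75
```

MDA complements DSC by measuring boundary distance, so the two together catch both volume overlap and surface displacement errors.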

Speight, R.; Karakaya, E.; Prestwich, R.; Sen, M.; Lindsay, R.; Harding, R.; Sykes, J.

2014-03-01

111

Segmenting human motion for automated rehabilitation exercise analysis.  

PubMed

This paper proposes an approach for the automated segmentation and identification of movement segments from continuous time-series data of human movement, collected through motion capture or ambulatory sensors. The proposed approach uses a two-stage identification and recognition process based on velocity and stochastic modeling of each motion to be identified. In the first stage, motion segment candidates are identified based on a unique sequence of velocity features such as velocity peaks and zero-velocity crossings. In the second stage, hidden Markov models are used to accurately identify segment locations from the identified candidates. The approach is capable of on-line segmentation and identification, enabling interactive feedback in rehabilitation applications. The approach is validated on a rehabilitation movement dataset and achieves a segmentation accuracy of 89%. PMID:23366526
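The first-stage velocity features can be sketched directly; the velocity trace below is synthetic, and the feature definitions are the obvious discrete ones rather than the paper's exact detector:

```python
def velocity_features(v):
    """Extract the two candidate-generation feature types: zero-velocity
    crossings (sign changes of v) and velocity peaks (local maxima of
    |v|). Indices into the trace are returned for each."""
    crossings = [i for i in range(1, len(v)) if v[i - 1] * v[i] < 0]
    peaks = [i for i in range(1, len(v) - 1)
             if abs(v[i]) > abs(v[i - 1]) and abs(v[i]) >= abs(v[i + 1])]
    return crossings, peaks

# synthetic velocity trace: accelerate, reverse, decelerate
v = [0.0, 0.5, 1.0, 0.5, -0.5, -1.0, -0.5, 0.2]
crossings, peaks = velocity_features(v)
```

A characteristic ordering of these events (peak, crossing, peak, ...) is what defines a motion-segment candidate handed to the second-stage HMM.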

Feng-Shun Lin, Jonathan; Kulić, Dana

2012-01-01

112

A framework for automatic heart sound analysis without segmentation  

PubMed Central

Background A new framework for heart sound analysis is proposed. One of the most difficult processes in heart sound analysis is segmentation, due to interference from murmurs. Method An equal number of cardiac cycles was extracted from heart sounds with different heart rates using information from envelopes of autocorrelation functions, without the need to label individual fundamental heart sounds (FHS). The complete method consists of envelope detection, calculation of cardiac cycle lengths using autocorrelation of envelope signals, feature extraction using the discrete wavelet transform, principal component analysis, and classification using neural network bagging predictors. Result The proposed method was tested on a set of heart sounds obtained from several on-line databases and recorded with an electronic stethoscope. The geometric mean was used as the performance index. Average classification performance using ten-fold cross-validation was 0.92 for the noise-free case, 0.90 under white noise with 10 dB signal-to-noise ratio (SNR), and 0.90 under impulse noise of up to 0.3 s duration. Conclusion The proposed method showed promising results and high noise robustness over a wide range of heart sounds. However, more tests are needed to address any bias that may have been introduced by the different sources of heart sounds in the current training set, and to concretely validate the method. Further work includes building a new training set recorded from actual patients, then further evaluating the method on this new training set.
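The cycle-length step can be illustrated with a toy periodic envelope. The autocorrelation-peak idea is the one described, but the signal and parameters below are invented:

```python
def cycle_length(signal, min_lag=1):
    """Estimate the dominant period of a signal as the lag of the
    highest autocorrelation value beyond lag 0. This is the idea used
    to obtain cardiac cycle length from the envelope without labelling
    individual S1/S2 events."""
    n = len(signal)
    mu = sum(signal) / n
    x = [s - mu for s in signal]

    def ac(lag):
        # unnormalized autocorrelation at the given lag
        return sum(x[i] * x[i + lag] for i in range(n - lag))

    return max(range(min_lag, n // 2), key=ac)

# toy envelope with one pulse every 4 samples
sig = [1, 0, 0, 0] * 4
period = cycle_length(sig)
```

With the cycle length known, equal numbers of whole cycles can be cut from recordings of different heart rates before feature extraction.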

2011-01-01

113

Analysis of image segmentation aimed for sense matching  

Microsoft Academic Search

Image segmentation is the foundation of high-level digital image processing and is widely applied in many areas; it is also a classic, difficult problem in the domain of advanced information processing. Because of its importance and difficulty, image segmentation motivates large numbers of researchers to work on it, and quite a number of segmentation ideas and…

Xianglong M. Liao; Zhiguo Cao

2001-01-01

114

Meteorological Analysis Models, Volume 2.  

National Technical Information Service (NTIS)

As part of the SEASAT program, two sets of analysis programs were developed. One set of programs produce 63 x 63 horizontal mesh analyses on a polar stereographic grid. The other set produces 187 x 187 third mesh analyses. The parameters analyzed include ...

R. A. Langland; D. L. Stark

1976-01-01

115

Markov random field and Gaussian mixture for segmented MRI-based partial volume correction in PET  

NASA Astrophysics Data System (ADS)

In this paper we propose a segmented magnetic resonance imaging (MRI) prior-based maximum penalized likelihood deconvolution technique for positron emission tomography (PET) images. The model assumes the existence of activity classes that behave like a hidden Markov random field (MRF) driven by the segmented MRI. We utilize a mean field approximation to compute the likelihood of the MRF. We tested our method on both simulated and clinical data (brain PET) and compared our results with PET images corrected with the re-blurred Van Cittert (VC) algorithm, the simplified Guven (SG) algorithm and the region-based voxel-wise (RBV) technique. We demonstrate that our algorithm outperforms the VC algorithm, and outperforms the SG and RBV corrections when the segmented MRI is inconsistent (e.g., mis-segmentation, lesions) with the PET image.

Bousse, Alexandre; Pedemonte, Stefano; Thomas, Benjamin A.; Erlandsson, Kjell; Ourselin, Sébastien; Arridge, Simon; Hutton, Brian F.

2012-10-01

116

HIPPOCAMPAL VOLUME AND SHAPE ANALYSIS IN AN OLDER ADULT POPULATION  

PubMed Central

This report presents a manual segmentation protocol for the hippocampus that yields a reliable and comprehensive measure of volume, a goal that has proven difficult with prior methods. Key features of this method include alignment of the images in the long axis of the hippocampus and the use of a three-dimensional image visualization function to disambiguate anterior and posterior hippocampal boundaries. We describe procedures for hippocampal volumetry and shape analysis, provide inter- and intra-rater reliability data, and examine correlates of hippocampal volume in a sample of healthy older adults. Participants were 40 healthy older adults with no significant cognitive complaints, no evidence of mild cognitive impairment or dementia, and no other neurological or psychiatric disorder. Using a 1.5 T GE Signa scanner, three-dimensional spoiled gradient recalled acquisition in a steady state (SPGR) sequences were acquired for each participant. Images were resampled into 1 mm isotropic voxels, and realigned along the interhemispheric fissure in the axial and coronal planes, and the long axis of the hippocampus in the sagittal plane. Using the BRAINS program (Andreasen et al., 1993), the boundaries of the hippocampus were visualized in the three orthogonal views, and boundary demarcations were transferred to the coronal plane for tracing. Hippocampal volumes were calculated after adjusting for intracranial volume (ICV). Intra- and inter-rater reliabilities, measured using the intraclass correlation coefficient, exceeded .94 for both the left and right hippocampus. Total ICV-adjusted volumes were 3.48 (±0.43) cc for the left hippocampus and 3.68 (±0.42) for the right. There were no significant hippocampal volume differences between males and females (p > .05). In addition to providing a comprehensive volumetric measurement of the hippocampus, the refinements included in our tracing protocol permit analysis of changes in hippocampal shape. 
Shape analyses may yield novel information about structural brain changes in aging and dementia that are not reflected in volumetric measurements alone. These and other novel directions in research on hippocampal function and dysfunction will be facilitated by the use of reliable, comprehensive, and consistent segmentation and measurement methods.
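The ICV adjustment mentioned above is commonly done by regressing regional volume on intracranial volume across the sample and removing the ICV-predicted component; the exact adjustment used by the authors may differ. A minimal sketch of that covariance-based correction, with invented numbers:

```python
def icv_adjust(volumes, icvs):
    """Covariance-based head-size correction: fit the slope b of
    regional volume against intracranial volume (ICV), then return
      adjusted_i = raw_i - b * (ICV_i - mean(ICV))
    so the adjusted volumes are uncorrelated with head size."""
    n = len(volumes)
    mv, mi = sum(volumes) / n, sum(icvs) / n
    b = (sum((i - mi) * (v - mv) for v, i in zip(volumes, icvs))
         / sum((i - mi) ** 2 for i in icvs))
    return [v - b * (i - mi) for v, i in zip(volumes, icvs)]

# illustrative: hippocampal volumes (cc) exactly proportional to ICV
adjusted = icv_adjust([3.0, 3.5, 4.0], [1400.0, 1500.0, 1600.0])
```

When raw volume scales perfectly with ICV, as in this toy sample, the adjusted values collapse to the sample mean, which is the intended behavior.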

McHugh, Tara L.; Saykin, Andrew J.; Wishart, Heather A.; Flashman, Laura A.; Cleavinger, Howard B.; Rabin, Laura A.; Mamourian, Alexander C.; Shen, Li

2012-01-01

117

Blood vessel segmentation using line-direction vector based on Hessian analysis  

NASA Astrophysics Data System (ADS)

For deciding the treatment strategy, grading of stenoses is important in the diagnosis of vascular diseases such as arterial occlusive disease or thromboembolism. It is also important to understand the vasculature in minimally invasive surgery such as laparoscopic surgery or natural orifice translumenal endoscopic surgery. Precise segmentation and recognition of blood vessel regions are therefore indispensable tasks in medical image processing systems. Previous methods utilize only a "lineness" measure, which is computed by Hessian analysis. However, the difference in intensity values between a voxel of a thin blood vessel and a voxel of the surrounding tissue is generally decreased by the partial volume effect. Therefore, previous methods cannot extract thin blood vessel regions precisely. This paper describes a novel blood vessel segmentation method that can extract thin blood vessels while suppressing false positives. The proposed method utilizes not only the lineness measure but also the line-direction vector corresponding to the largest eigenvalue in Hessian analysis. By introducing line-direction information, it is possible to distinguish between a blood vessel voxel and a voxel having a low lineness measure caused by noise. In addition, we consider the scale of the blood vessel. The proposed method can reduce false positives in line-like tissues close to blood vessel regions by utilizing iterative region growing with scale information. The experimental results show that thin blood vessels (0.5 mm in diameter, almost the same as the voxel spacing) can be extracted accurately by the proposed method.
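The eigenvalue intuition behind a Hessian lineness measure can be sketched. This is a simplified Sato/Frangi-style score operating on pre-sorted eigenvalues, not the exact measure of the paper:

```python
def lineness(l1, l2, l3):
    """Simple tubular ('lineness') score from the eigenvalues of the
    image Hessian, assumed sorted by magnitude |l1| <= |l2| <= |l3|.
    A bright tube has one near-zero eigenvalue (along the vessel axis,
    whose eigenvector is the line direction) and two large negative
    ones (across the vessel). Blobs and plates score low or zero."""
    if l2 >= 0 or l3 >= 0:  # not bright-tube-like
        return 0.0
    return abs(l2) * (1.0 - abs(l1) / abs(l2))

tube_score = lineness(-0.1, -10.0, -12.0)  # tube-like: high
blob_score = lineness(-9.0, -10.0, -12.0)  # blob-like: low
```

The eigenvector paired with the smallest-magnitude eigenvalue is the line-direction vector the paper exploits in addition to the scalar score.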

Nimura, Yukitaka; Kitasaka, Takayuki; Mori, Kensaku

2010-03-01

118

Fractal Segmentation and Clustering Analysis for Seismic Time Slices  

NASA Astrophysics Data System (ADS)

Fractal analysis has become part of the standard approach for quantifying texture on gray-tone or colored images. In this research we introduce a multi-stage fractal procedure to segment, classify and measure the clustering patterns on seismic time slices from a 3-D seismic survey. Five fractal classifiers (c1)-(c5) were designed to yield standardized, unbiased and precise measures of the clustering of seismic signals. The classifiers were tested on seismic time slices from the AKAL field, Cantarell Oil Complex, Mexico. The generalized lacunarity (c1), fractal signature (c2), heterogeneity (c3), rugosity of boundaries (c4) and continuity or tortuosity (c5) of the clusters are shown to be efficient measures of the time-space variability of seismic signals. The Local Fractal Analysis (LFA) of time slices has proved to be a powerful edge detection filter to detect and enhance linear features, like faults or buried meandering rivers. The local fractal dimensions of the time slices were also compared with the self-affinity dimensions of the corresponding parts of porosity logs. It is speculated that the spectral dimension of the negative-amplitude parts of the time slice yields a measure of connectivity between the formation's high-porosity zones, and correlates with overall permeability.
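The box-counting estimate of fractal dimension, on which such classifiers build, can be sketched as follows. The point set is synthetic, and this is the generic estimator rather than any of the authors' five classifiers:

```python
import math

def box_counting_dimension(points, sizes):
    """Estimate the box-counting (fractal) dimension of a 2-D point
    set: count occupied boxes N(s) at several box sizes s, then fit
    the least-squares slope of log N(s) against log(1/s)."""
    logs, logn = [], []
    for s in sizes:
        boxes = {(int(x / s), int(y / s)) for x, y in points}
        logs.append(math.log(1.0 / s))
        logn.append(math.log(len(boxes)))
    n = len(sizes)
    ms, mn = sum(logs) / n, sum(logn) / n
    return (sum((a - ms) * (b - mn) for a, b in zip(logs, logn))
            / sum((a - ms) ** 2 for a in logs))

# a filled unit square sampled on a fine grid has dimension 2
pts = [(i / 64, j / 64) for i in range(64) for j in range(64)]
dim = box_counting_dimension(pts, [1 / 2, 1 / 4, 1 / 8])
```

Lacunarity-style classifiers go further by looking at the variance of box occupancy, not just the count, which is what distinguishes textures of equal dimension.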

Ronquillo, G.; Oleschko, K.; Korvin, G.; Arizabalo, R. D.

2002-05-01

119

Robust analysis of feature spaces: color image segmentation  

Microsoft Academic Search

A general technique for the recovery of significant image features is presented. The technique is based on the mean shift algorithm, a simple nonparametric procedure for estimating density gradients. Drawbacks of the current methods (including robust clustering) are avoided. Feature spaces of any nature can be processed, and as an example, color image segmentation is discussed. The segmentation is…
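The mean shift procedure at the core of the technique can be sketched in one dimension with a flat kernel; the data and bandwidth below are invented:

```python
def mean_shift_mode(data, start, bandwidth, iters=50):
    """One mean-shift trajectory with a flat kernel: repeatedly move
    the estimate to the mean of all samples within `bandwidth` of it.
    The trajectory follows the density gradient and converges to a
    local mode; clustering assigns points sharing a mode together."""
    x = start
    for _ in range(iters):
        window = [d for d in data if abs(d - x) <= bandwidth]
        x = sum(window) / len(window)
    return x

data = [1.0, 1.1, 1.2, 5.0, 5.1, 5.2]  # two clusters
mode_a = mean_shift_mode(data, 1.5, 1.0)
mode_b = mean_shift_mode(data, 4.8, 1.0)
```

For color segmentation, `data` becomes pixels in a joint spatial-color feature space, and each pixel is labeled by the mode its trajectory reaches.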

Dorin Comaniciu; Peter Meer

1997-01-01

120

Object density-based image segmentation and its applications in biomedical image analysis.  

PubMed

In many applications of medical image analysis, the density of an object is the most important feature for isolating an area of interest (image segmentation). In this research, an object density-based image segmentation methodology is developed, which incorporates intensity-based, edge-based and texture-based segmentation techniques. The proposed method consists of three main stages: preprocessing, object segmentation and final segmentation. Image enhancement, noise reduction and layer-of-interest extraction are several subtasks of preprocessing. Object segmentation utilizes a marker-controlled watershed technique to identify each object of interest (OI) from the background. A marker estimation method is proposed to minimize over-segmentation resulting from the watershed algorithm. Object segmentation provides an accurate density estimation of OI which is used to guide the subsequent segmentation steps. The final stage converts the distribution of OI into textural energy by using fractal dimension analysis. An energy-driven active contour procedure is designed to delineate the area with desired object density. Experimental results show that the proposed method is 98% accurate in segmenting synthetic images. Segmentation of microscopic images and ultrasound images shows the potential utility of the proposed method in different applications of medical image processing. PMID:19473717

Yu, Jinhua; Tan, Jinglu

2009-12-01

121

The Analysis of Image Segmentation Hierarchies with a Graph-based Knowledge Discovery System  

NASA Technical Reports Server (NTRS)

Currently available pixel-based analysis techniques do not effectively extract the information content from the increasingly available high spatial resolution remotely sensed imagery data. A general consensus is that object-based image analysis (OBIA) is required to effectively analyze this type of data. OBIA is usually a two-stage process: image segmentation followed by an analysis of the segmented objects. We are exploring an approach to OBIA in which hierarchical image segmentations provided by the Recursive Hierarchical Segmentation (RHSEG) software developed at NASA GSFC are analyzed by the Subdue graph-based knowledge discovery system developed by a team at Washington State University. In this paper we discuss our initial approach to representing the RHSEG-produced hierarchical image segmentations in a graphical form understandable by Subdue, and provide results on real and simulated data. We also discuss planned improvements designed to more effectively and completely convey the hierarchical segmentation information to Subdue and to improve processing efficiency.

Tilton, James C.; Cook, Diane J.; Ketkar, Nikhil; Aksoy, Selim

2008-01-01

122

Automatic brain tumor segmentation  

NASA Astrophysics Data System (ADS)

A system that automatically segments and labels complete glioblastoma-multiform tumor volumes in magnetic resonance images of the human brain is presented. The magnetic resonance images consist of three feature images (T1- weighted, proton density, T2-weighted) and are processed by a system which integrates knowledge-based techniques with multispectral analysis and is independent of a particular magnetic resonance scanning protocol. Initial segmentation is performed by an unsupervised clustering algorithm. The segmented image, along with cluster centers for each class are provided to a rule-based expert system which extracts the intra-cranial region. Multispectral histogram analysis separates suspected tumor from the rest of the intra-cranial region, with region analysis used in performing the final tumor labeling. This system has been trained on eleven volume data sets and tested on twenty-two unseen volume data sets acquired from a single magnetic resonance imaging system. The knowledge-based tumor segmentation was compared with radiologist-verified `ground truth' tumor volumes and results generated by a supervised fuzzy clustering algorithm. The results of this system generally correspond well to ground truth, both on a per slice basis and more importantly in tracking total tumor volume during treatment over time.

Clark, Matthew C.; Hall, Lawrence O.; Goldgof, Dmitry B.; Velthuizen, Robert P.; Murtaugh, F. R.; Silbiger, Martin L.

1998-06-01

123

REACH. Teacher's Guide, Volume III. Task Analysis.  

ERIC Educational Resources Information Center

Designed for use with individualized instructional units (CE 026 345-347, CE 026 349-351) in the electromechanical cluster, this third volume of the postsecondary teacher's guide presents the task analysis which was used in the development of the REACH (Refrigeration, Electro-Mechanical, Air Conditioning, Heating) curriculum. The major blocks of…

Morris, James Lee; And Others

124

Market Segmentation Analysis of Preferences for GM Derived Animal Foods in the UK  

Microsoft Academic Search

This paper undertakes a detailed market segmentation analysis of the demand for GM derived animal foods in the UK with the aim of illustrating how this analysis can provide distinct information that can assists in evaluating the welfare impacts of proposed changes to the EU's GM labelling policy. The specific modelling approach employed was the latent segment (LS) model which

Andreas Kontoleon; Mitsuyasu Yabe

2006-01-01

125

Automated detection, 3D segmentation and analysis of high resolution spine MR images using statistical shape models  

NASA Astrophysics Data System (ADS)

Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract the 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes with the mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02 and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans and shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.

Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.

2012-12-01

126

Finite Volume Methods: Foundation and Analysis  

NASA Technical Reports Server (NTRS)

Finite volume methods are a class of discretization schemes that have proven highly successful in approximating the solution of a wide variety of conservation law systems. They are extensively used in fluid mechanics, porous media flow, meteorology, electromagnetics, models of biological processes, semi-conductor device simulation and many other engineering areas governed by conservative systems that can be written in integral control volume form. This article reviews elements of the foundation and analysis of modern finite volume methods. The primary advantages of these methods are numerical robustness through the obtention of discrete maximum (minimum) principles, applicability on very general unstructured meshes, and the intrinsic local conservation properties of the resulting schemes. Throughout this article, specific attention is given to scalar nonlinear hyperbolic conservation laws and the development of high order accurate schemes for discretizing them. A key tool in the design and analysis of finite volume schemes suitable for non-oscillatory discontinuity capturing is discrete maximum principle analysis. A number of building blocks used in the development of numerical schemes possessing local discrete maximum principles are reviewed in one and several space dimensions, e.g. monotone fluxes, E-fluxes, TVD discretization, non-oscillatory reconstruction, slope limiters, positive coefficient schemes, etc. When available, theoretical results concerning a priori and a posteriori error estimates are given. Further advanced topics are then considered such as high order time integration, discretization of diffusion terms and the extension to systems of nonlinear conservation laws.
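A minimal concrete instance of such a scheme is first-order upwind for 1-D linear advection, a monotone flux that exhibits both the discrete maximum principle and the local conservation property discussed above. The grid and data below are illustrative:

```python
def upwind_advect(u, a, dt, dx, steps):
    """First-order upwind finite volume update for the 1-D advection
    equation u_t + a u_x = 0 (a > 0) with periodic boundaries:
      u_i^{n+1} = u_i^n - (a*dt/dx) * (u_i^n - u_{i-1}^n)
    The scheme is monotone for CFL number a*dt/dx <= 1 and conserves
    the sum of the cell averages exactly."""
    c = a * dt / dx  # CFL number
    for _ in range(steps):
        # u[i-1] with i = 0 wraps to u[-1], giving periodic boundaries
        u = [u[i] - c * (u[i] - u[i - 1]) for i in range(len(u))]
    return u

u0 = [0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
u1 = upwind_advect(u0, a=1.0, dt=0.1, dx=0.1, steps=3)
```

At CFL number exactly 1 the update reduces to a pure shift, so the pulse moves three cells to the right without smearing; for CFL < 1 the same scheme is stable but diffusive, which is what the high-order reconstructions surveyed in the article are designed to fix.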

Barth, Timothy; Ohlberger, Mario

2003-01-01

127

A Fuzzy, Nonparametric Segmentation Framework for DTI and MRI Analysis  

Microsoft Academic Search

This paper presents a novel statistical fuzzy-segmentation method for diffusion tensor (DT) images and magnetic resonance (MR) images. Typical fuzzy-segmentation schemes, e.g. those based on fuzzy C-means (FCM), incorporate Gaussian class models which are inherently biased towards ellipsoidal clusters. Fiber bundles in DT images, however, comprise tensors that can inherently lie on more-complex manifolds. Unlike FCM-based schemes, the proposed…

Suyash P. Awate; James C. Gee

2007-01-01

128

Robust smooth segmentation approach for array CGH data analysis  

Microsoft Academic Search

Motivation: Array comparative genomic hybridization (aCGH) provides a genome-wide technique to screen for copy number alteration. The existing segmentation approaches for analyzing aCGH data are based on modeling data as a series of discrete segments with unknown boundaries and unknown heights. Although the biological process of copy number alteration is discrete, in reality a variety of biological and experimental…

Jian Huang; Arief Gusnanto; Kathleen O'sullivan; Johan Staaf; Åke Borg; Yudi Pawitan

2007-01-01

129

Adolescents and alcohol: an explorative audience segmentation analysis  

PubMed Central

Background So far, audience segmentation of adolescents with respect to alcohol has been carried out mainly on the basis of socio-demographic characteristics. In this study we examined whether it is possible to segment adolescents according to their values and attitudes towards alcohol to use as guidance for prevention programmes. Methods A random sample of 7,000 adolescents aged 12 to 18 was drawn from the Municipal Basic Administration (MBA) of 29 Local Authorities in the province North-Brabant in the Netherlands. By means of an online questionnaire data were gathered on values and attitudes towards alcohol, alcohol consumption and socio-demographic characteristics. Results We were able to distinguish a total of five segments on the basis of five attitude factors. Moreover, the five segments also differed in drinking behavior independently of socio-demographic variables. Conclusions Our investigation was a first step in the search for possibilities of segmenting by factors other than socio-demographic characteristics. Further research is necessary in order to understand these results for alcohol prevention policy in concrete terms.

2012-01-01

130

Atlas-Based Segmentation Improves Consistency and Decreases Time Required for Contouring Postoperative Endometrial Cancer Nodal Volumes  

SciTech Connect

Purpose: Accurate target delineation of the nodal volumes is essential for three-dimensional conformal and intensity-modulated radiotherapy planning for endometrial cancer adjuvant therapy. We hypothesized that atlas-based segmentation ('autocontouring') would lead to time savings and more consistent contours among physicians. Methods and Materials: A reference anatomy atlas was constructed using the data from 15 postoperative endometrial cancer patients by contouring the pelvic nodal clinical target volume on the simulation computed tomography scan according to the Radiation Therapy Oncology Group 0418 trial using commercially available software. On the simulation computed tomography scans from 10 additional endometrial cancer patients, the nodal clinical target volume autocontours were generated. Three radiation oncologists corrected the autocontours and delineated the manual nodal contours under timed conditions while unaware of the other contours. The time difference was determined, and the overlap of the contours was calculated using Dice's coefficient. Results: For all physicians, manual contouring of the pelvic nodal target volumes and editing the autocontours required a mean ± standard deviation of 32 ± 9 vs. 23 ± 7 minutes, respectively (p = .000001), a 26% time savings. For each physician, the time required to delineate the manual contours vs. correcting the autocontours was 30 ± 3 vs. 21 ± 5 min (p = .003), 39 ± 12 vs. 30 ± 5 min (p = .055), and 29 ± 5 vs. 20 ± 5 min (p = .0002). The mean overlap increased from manual contouring (0.77) to correcting the autocontours (0.79; p = .038). 
Conclusion: The results of our study have shown that autocontouring leads to increased consistency and time savings when contouring the nodal target volumes for adjuvant treatment of endometrial cancer, although the autocontours still required careful editing to ensure that the lymph nodes at risk of recurrence are properly included in the target volume.

Young, Amy V. [Department of Radiation Oncology, Beth Israel Medical Center, New York, NY (United States); Department of Radiation Oncology, St. Luke's-Roosevelt Hospital, New York, NY (United States); Wortham, Angela [Department of Radiation Oncology, State University of New York Health Science Center of Brooklyn, Brooklyn, NY (United States); Wernick, Iddo; Evans, Andrew [Department of Radiation Oncology, St. Luke's-Roosevelt Hospital, New York, NY (United States); Ennis, Ronald D., E-mail: REnnis@chpnet.or [Department of Radiation Oncology, St. Luke's-Roosevelt Hospital, New York, NY (United States)

2011-03-01

131

Combining DOM tree and geometric layout analysis for online medical journal article segmentation  

Microsoft Academic Search

We describe an HTML web page segmentation algorithm, which is applied to segment online medical journal articles (regular HTML and PDF-Converted-HTML files). The web page content is modeled by a zone tree structure based primarily on the geometric layout of the web page. For a given journal article, a zone tree is generated by combining DOM tree analysis and recursive

Jie Zou; Daniel X. Le; George R. Thoma

2006-01-01

132

Rigid model-based 3D segmentation of the bones of joints in MR and CT images for motion analysis.  

PubMed

There are several medical application areas that require the segmentation and separation of the component bones of joints in a sequence of images of the joint acquired under various loading conditions, our own target area being joint motion analysis. This is a challenging problem due to the proximity of bones at the joint, partial volume effects, and other imaging modality-specific factors that confound boundary contrast. In this article, a two-step model-based segmentation strategy is proposed that utilizes the unique context of the current application wherein the shape of each individual bone is preserved in all scans of a particular joint while the spatial arrangement of the bones alters significantly among bones and scans. In the first step, a rigid deterministic model of the bone is generated from a segmentation of the bone in the image corresponding to one position of the joint by using the live wire method. Subsequently, in other images of the same joint, this model is used to search for the same bone by minimizing an energy function that utilizes both boundary- and region-based information. An evaluation of the method by utilizing a total of 60 data sets on MR and CT images of the ankle complex and cervical spine indicates that the segmentations agree very closely with the live wire segmentations, yielding true positive and false positive volume fractions in the ranges 89%-97% and 0.2%-0.7%, respectively. The method requires 1-2 min of operator time and 6-7 min of computer time per data set, which makes it significantly more efficient than live wire, the method currently available for routine use on this task. PMID:18777924

Liu, Jiamin; Udupa, Jayaram K; Saha, Punam K; Odhner, Dewey; Hirsch, Bruce E; Siegler, Sorin; Simon, Scott; Winkelstein, Beth A

2008-08-01

133

Volume analyzer SYNAPSE VINCENT for liver analysis.  

PubMed

In recent years, there has been an active movement to improve the safety of actual surgeries by simulating them preoperatively with three-dimensional image visualization technologies. Along with this movement, the Ministry of Health, Labour and Welfare has named "Image-supported navigation in hepatectomy" as part of advanced medical techniques. This method aims to improve safety during surgery by calculating the volume of the liver dominated by each blood vessel, or by simulating, prior to surgery, the volume of the resection zone or the remaining liver volume. These calculations and simulations are carried out using the three-dimensional images produced by extraction of the liver, vascular, and tumor regions from the computed tomography images, which were collected using the tomography apparatus prior to hepatectomy. In order to facilitate such preoperative simulations, the volume analyzer SYNAPSE VINCENT (VINCENT, hereafter) by Fujifilm, in its Liver Analysis Application, comes equipped with unique features. This paper will introduce the technologies behind those unique features and provide a direction for future research and development. PMID:24520049

Ohshima, Shunsuke

2014-04-01

134

Prioritization of brain MRI volumes using medical image perception model and tumor region segmentation.  

PubMed

The objective of the present study is to explore prioritization methods in diagnostic imaging modalities to automatically determine the contents of medical images. In this paper, we propose an efficient prioritization of brain MRI. First, the visual perception of the radiologists is adapted to identify salient regions. Then this saliency information is used as an automatic label for accurate segmentation of brain lesion to determine the scientific value of that image. The qualitative and quantitative results prove that the rankings generated by the proposed method are closer to the rankings created by radiologists. PMID:24034739

Mehmood, Irfan; Ejaz, Naveed; Sajjad, Muhammad; Baik, Sung Wook

2013-10-01

135

Health Lifestyles: Audience Segmentation Analysis for Public Health Interventions  

Microsoft Academic Search

This article is concerned with the application of market segmentation techniques in order to improve the planning and implementation of public health education programs. Seven distinctive patterns of health attitudes, social influences, and behaviors are identified using cluster analytic techniques in a sample drawn from four central California cities, and are subjected to construct and predictive validation: The lifestyle clusters

Michael D. Slater; June A. Flora

1991-01-01

136

Occluded human body segmentation and its application to behavior analysis  

Microsoft Academic Search

This paper addresses the problem of occluded human segmentation and then uses its results for human behavior recognition. To tackle this ill-posed problem, a novel clustering scheme is proposed for constructing a model space for posture classification. Then, a model-driven approach is proposed for separating an occluded region to individual objects. For reducing the model space, a particle filtering technique

Jun-Wei Hsieh; Sin-Yu Chen; Chi-Hung Chuang; Miao-Fen Chueh; Shiaw-Shian Yu

2010-01-01

137

Applications of Recursive Segmentation to the Analysis of DNA Sequences  

Microsoft Academic Search

Recursive segmentation is a procedure that partitions a DNA sequence into domains with a homogeneous composition of the four nucleotides A, C, G and T. This procedure can also be applied to any sequence converted from a DNA sequence, such as to a binary strong(G+C)/weak(A+T) sequence, to a binary sequence indicating the presence or absence of the
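The recursive partitioning described above can be sketched with an entropy-based divergence on a symbolic sequence: split where the two halves differ most in composition, then recurse. This is a simplified illustration; the fixed threshold stands in for the statistical significance test of the published procedure, and the example sequence is invented:

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """Shannon entropy (bits) of the symbol distribution in seq."""
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())

def best_split(seq):
    """Position maximizing the Jensen-Shannon-style divergence between halves:
    H(whole) minus the length-weighted entropies of the two parts."""
    n = len(seq)
    h_all = shannon_entropy(seq)
    best_pos, best_d = None, 0.0
    for i in range(1, n):
        d = (h_all
             - (i / n) * shannon_entropy(seq[:i])
             - ((n - i) / n) * shannon_entropy(seq[i:]))
        if d > best_d:
            best_pos, best_d = i, d
    return best_pos, best_d

def segment(seq, threshold=0.1, min_len=4):
    """Recursively split seq into compositionally homogeneous domains."""
    pos, d = best_split(seq)
    if pos is None or d < threshold or len(seq) < 2 * min_len:
        return [seq]
    return segment(seq[:pos], threshold, min_len) + segment(seq[pos:], threshold, min_len)

# A strong(G+C)/weak(A+T) binary sequence with one obvious domain boundary
domains = segment("SSSSSSSSWWWWWWWW")  # -> ['SSSSSSSS', 'WWWWWWWW']
```

In the published method the decision to accept a split is based on the statistical significance of the divergence rather than a fixed cutoff, so the threshold here is only a stand-in.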

Wentian Li; Pedro Bernaola-galván; Fatameh Haghighi; Ivo Grosse

2002-01-01

138

Semiautomated three-dimensional segmentation software to quantify carpal bone volume changes on wrist CT scans for arthritis assessment  

PubMed Central

Rapid progression of joint destruction is an indication of poor prognosis in patients with rheumatoid arthritis. Computed tomography (CT) has the potential to serve as a gold standard for joint imaging since it provides high resolution three-dimensional (3D) images of bone structure. The authors have developed a method to quantify erosion volume changes on wrist CT scans. In this article they present a description and validation of the methodology using multiple scans of a hand phantom and five human subjects. An anthropomorphic hand phantom was imaged with a clinical CT scanner at three different orientations separated by a 30-deg angle. A reader used the semiautomated software tool to segment the individual carpal bones of each CT scan. Reproducibility was measured as the root-mean-square standard deviation (RMSSD) and coefficient of variation (CoV) between multiple measurements of the carpal volumes. Longitudinal erosion progression was studied by inserting simulated erosions in a paired second scan. The change in simulated erosion size was calculated by performing 3D image registration and measuring the volume difference between scans in a region adjacent to the simulated erosion. The RMSSD for the total carpal volumes was 21.0 mm3 (CoV=1.3%) for the phantom, and 44.1 mm3 (CoV=3.0%) for the in vivo subjects. Using 3D registration and local volume difference calculations, the RMSSD was 1.0-3.0 mm3. The reader time was approximately 5 min per carpal bone. There was excellent agreement between the measured and simulated erosion volumes. The effect of a poorly measured volume for a single erosion is mitigated by the large number of subjects that would comprise a clinical study and by the many erosions measured per patient. CT promises to be a quantifiable tool to measure erosion volumes and may serve as a gold standard that can be used in the validation of other modalities such as magnetic resonance imaging.
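The reproducibility statistics reported above (RMSSD and CoV) follow directly from repeated measurements of the same object. A minimal sketch; the volume values below are hypothetical, not the study's data:

```python
import math

def rmssd_and_cov(repeated_measurements):
    """repeated_measurements: one inner list of repeated volume measurements
    (e.g. mm^3) per bone or subject.
    RMSSD = sqrt(mean over subjects of the within-subject variance);
    CoV   = RMSSD / grand mean, expressed as a percentage."""
    variances, means = [], []
    for vals in repeated_measurements:
        m = sum(vals) / len(vals)
        means.append(m)
        # within-subject variance with the n-1 denominator
        variances.append(sum((v - m) ** 2 for v in vals) / (len(vals) - 1))
    rmssd = math.sqrt(sum(variances) / len(variances))
    cov = 100.0 * rmssd / (sum(means) / len(means))
    return rmssd, cov

# Hypothetical repeated total-carpal-volume measurements (mm^3), two subjects
scans = [[1610.0, 1632.0, 1621.0], [1540.0, 1555.0, 1548.0]]
rmssd, cov = rmssd_and_cov(scans)  # rmssd ≈ 9.42 mm^3, cov ≈ 0.59%
```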

Duryea, J.; Magalnick, M.; Alli, S.; Yao, L.; Wilson, M.; Goldbach-Mansky, R.

2008-01-01

139

A novel 3D mesh compression using mesh segmentation with multiple principal plane analysis  

Microsoft Academic Search

This paper proposes a novel scheme for 3D model compression based on mesh segmentation using multiple principal plane analysis. This algorithm first performs a mesh segmentation scheme, based on fusion of the well-known k-means clustering and the proposed principal plane analysis to separate the input 3D mesh into a set of disjointed polygonal regions. The boundary indexing scheme for the

Shyi-chyi Cheng; Chen-tsung Kuo; Da-chun Wu

2010-01-01

140

Comparison of Acute and Chronic Traumatic Brain Injury Using Semi-Automatic Multimodal Segmentation of MR Volumes  

PubMed Central

Abstract Although neuroimaging is essential for prompt and proper management of traumatic brain injury (TBI), there is a regrettable and acute lack of robust methods for the visualization and assessment of TBI pathophysiology, especially for the purpose of improving clinical outcome metrics. Until now, the application of automatic segmentation algorithms to TBI in a clinical setting has remained an elusive goal because existing methods have, for the most part, been insufficiently robust to faithfully capture TBI-related changes in brain anatomy. This article introduces and illustrates the combined use of multimodal TBI segmentation and time point comparison using 3D Slicer, a widely-used software environment whose TBI data processing solutions are openly available. For three representative TBI cases, semi-automatic tissue classification and 3D model generation are performed to enable intra-patient time point comparison of TBI using multimodal volumetrics and clinical atrophy measures. Identification and quantitative assessment of extra- and intra-cortical bleeding, lesions, edema, and diffuse axonal injury are demonstrated. The proposed tools allow cross-correlation of multimodal metrics from structural imaging (e.g., structural volume, atrophy measurements) with clinical outcome variables and other potential factors predictive of recovery. In addition, the workflows described are suitable for TBI clinical practice and patient monitoring, particularly for assessing damage extent and for the measurement of neuroanatomical change over time. With knowledge of general location, extent, and degree of change, such metrics can be associated with clinical measures and subsequently used to suggest viable treatment options.

Chambers, Micah C.; Alger, Jeffry R.; Filippou, Maria; Prastawa, Marcel W.; Wang, Bo; Hovda, David A.; Gerig, Guido; Toga, Arthur W.; Kikinis, Ron; Vespa, Paul M.; Van Horn, John D.

2011-01-01

141

Relationship between Stroke Volume and Pulse Pressure during Blood Volume Perturbation: A Mathematical Analysis  

PubMed Central

Arterial pulse pressure has been widely used as a surrogate of stroke volume, for example, in the guidance of fluid therapy. However, recent experimental investigations suggest that arterial pulse pressure is not linearly proportional to stroke volume, and the mechanisms underlying the relation between the two have not been clearly understood. The goal of this study was to elucidate how arterial pulse pressure and stroke volume respond to a perturbation in the left ventricular blood volume based on a systematic mathematical analysis. Both our mathematical analysis and experimental data showed that the relative change in arterial pulse pressure due to a left ventricular blood volume perturbation was consistently smaller than the corresponding relative change in stroke volume, due to the nonlinear left ventricular pressure-volume relation during diastole that reduces the sensitivity of arterial pulse pressure to perturbations in the left ventricular blood volume. Therefore, arterial pulse pressure must be used with care when used as a surrogate of stroke volume in guiding fluid therapy.

2014-01-01

142

Failure analysis for model-based organ segmentation using outlier detection  

NASA Astrophysics Data System (ADS)

In recent years, Model-Based Segmentation (MBS) techniques have been used in a broad range of medical applications. In clinical practice, such techniques are increasingly employed for diagnostic purposes and treatment decisions. However, it is not guaranteed that a segmentation algorithm will converge towards the desired solution. In specific situations, such as in the presence of rare anatomical variants (which cannot be represented) or for images with an extremely low quality, a meaningful segmentation might not be feasible. At the same time, an automated estimation of the segmentation reliability is commonly not available. In this paper we present an approach for the identification of segmentation failures using concepts from the field of outlier detection. The approach is validated on a comprehensive set of Computed Tomography Angiography (CTA) images by means of Receiver Operating Characteristic (ROC) analysis. Encouraging results in terms of an Area Under the ROC Curve (AUC) of up to 0.965 were achieved.

Saalbach, Axel; Wächter Stehle, Irina; Lorenz, Cristian; Weese, Jürgen

2014-03-01

143

Proteomic Analysis of the Retina: Removal of RPE Alters Outer Segment Assembly and Retinal Protein Expression  

PubMed Central

The mechanisms that regulate the complex physiologic task of photoreceptor outer segment assembly remain an enigma. One limiting factor in revealing the mechanism(s) by which this process is modulated is that not all of the role players that participate in this process are known. The purpose of this study was to determine some of the retinal proteins that likely play a critical role in regulating photoreceptor outer segment assembly. To do so, we analyzed and compared the proteome map of tadpole Xenopus laevis retinal pigment epithelium (RPE)-supported retinas containing organized outer segments with that of RPE-deprived retinas containing disorganized outer segments. Solubilized proteins were labeled with CyDye fluors followed by multiplexed two-dimensional separation. The intensity of protein spots and comparison of proteome maps was performed using DeCyder software. Identification of differentially regulated proteins was determined using nanoLC-ESI-MS/MS analysis. We found a total of 27 protein spots, 21 of which were unique proteins, which were differentially expressed in retinas with disorganized outer segments. We predict that in the absence of the RPE, oxidative stress initiates an unfolded protein response. Subsequently, downregulation of several candidate Müller glial cell proteins may explain the inability of photoreceptors to properly fold their outer segment membranes. In this study we have used identification and bioinformatics assessment of proteins that are differentially expressed in retinas with disorganized outer segments as a first step in determining probable key molecules involved in regulating photoreceptor outer segment assembly.

Wang, XiaoFei; Nookala, Suba; Narayanan, Chidambarathanu; Giorgianni, Francesco; Beranova-Giorgianni, Sarka; McCollum, Gary; Gerling, Ivan; Penn, John S.; Jablonski, Monica M.

2008-01-01

144

Comparative Genomic and Transcriptomic Analysis of Tandemly and Segmentally Duplicated Genes in Rice  

PubMed Central

Tandem and segmental duplications significantly contribute to gene family expansion and genome evolution. Genome-wide identification of tandem and segmental genes has been analyzed before in several plant genomes. However, comparative studies in functional bias, expression divergence and their roles in species domestication are still lacking. We have carried out a genome-wide identification and comparative analysis of tandem and segmental genes in the rice genome. A total of 3,646 and 3,633 pairs of tandem and segmental genes, respectively, were identified in the genome. They made up around 30% of total annotated rice genes (excluding transposon-coding genes). Both tandem and segmental duplicates showed different physical locations and exhibited a biased subset of functions. These two types of duplicated genes were also under different functional constraints as shown by nonsynonymous substitutions per site (Ka) and synonymous substitutions per site (Ks) analysis. They are also differently regulated depending on the tissues and abiotic and biotic stresses based on transcriptomics data. The expression divergence might be related to promoter differentiation and DNA methylation status after tandem or segmental duplications. Tandem and segmental duplications differ in their contribution to genetic novelty, but evidence suggests that both play a role in species domestication and genome evolution.

Jiang, Shu-Ye; Gonzalez, Jose M.; Ramachandran, Srinivasan

2013-01-01

145

Combined texture feature analysis of segmentation and classification of benign and malignant tumour CT slices.  

PubMed

A computer software system is designed for the segmentation and classification of benign and malignant tumour slices in brain computed tomography (CT) images. This paper presents a method to find and select both the dominant run length and co-occurrence texture features of the region of interest (ROI) of the tumour region of each slice, to segment the region by Fuzzy c-means clustering (FCM), and to evaluate the performance of support vector machine (SVM)-based classifiers in classifying benign and malignant tumour slices. Two hundred and six tumour-confirmed CT slices are considered in this study. A total of 17 texture features are extracted by a feature extraction procedure, and six features are selected using Principal Component Analysis (PCA). The SVM-based classifier was constructed with the selected features, and the segmentation results were compared with the ground truth (target) labelled by an experienced radiologist. Quantitative analysis between ground truth and segmented tumour is presented in terms of segmentation accuracy, segmentation error, and overlap similarity measures such as the Jaccard index. The classification performance of the SVM-based classifier with the same selected features is also evaluated using a 10-fold cross-validation method. The results show that some newly found texture features make an important contribution to classifying benign and malignant tumour slices efficiently and accurately, with less computational time. The experimental results showed that the proposed system achieves high segmentation and classification accuracy, as measured by the Jaccard index, sensitivity, and specificity. PMID:23094909
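Co-occurrence texture features of the kind extracted above can be sketched as follows. Contrast and energy are two classic grey-level co-occurrence matrix (GLCM) features; the study's full 17-feature set, FCM segmentation, and SVM classifier are not reproduced here, and the image patch is invented:

```python
import numpy as np

def glcm_features(img, levels=4, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one pixel offset, plus two classic
    texture features derived from it: contrast and energy."""
    glcm = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    glcm /= glcm.sum()  # normalize counts to joint probabilities
    i, j = np.indices((levels, levels))
    contrast = ((i - j) ** 2 * glcm).sum()  # weights dissimilar grey-level pairs
    energy = (glcm ** 2).sum()              # high for uniform textures
    return contrast, energy

# Hypothetical 4-level quantized patch (e.g. an ROI from a CT slice)
patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 3, 3],
                  [2, 2, 3, 3]])
contrast, energy = glcm_features(patch)  # contrast = 1/3, energy = 1/6
```

Feature vectors built from several such offsets (and from run-length statistics) are what a PCA step would then reduce before classification.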

Padma, A; Sukanesh, R

2013-01-01

146

Theoretical analysis and experimental verification on valve-less piezoelectric pump with hemisphere-segment bluff-body  

NASA Astrophysics Data System (ADS)

Existing research on no-moving-part valves in valve-less piezoelectric pumps mainly concentrates on pipeline valves and chamber bottom valves, which complicates the structure and manufacturing process of the pump channel and chamber bottom. Furthermore, valves whose positions are fixed with respect to the inlet and outlet also worsen the adjustability and controllability of the flow rate. In order to overcome these shortcomings, this paper puts forward a novel implantable structure of valve-less piezoelectric pump with hemisphere-segments in the pump chamber. Based on the theory of flow around a bluff-body, the flow resistance on the spherical and round surfaces of a hemisphere-segment differs when fluid flows through, and the macroscopic flow resistance differences thus formed are also different. A novel valve-less piezoelectric pump with hemisphere-segment bluff-body (HSBB) is presented and designed. The HSBB is the no-moving-part valve. By the method of volume and momentum comparison, the stress on the bluff-body in the pump chamber is analyzed. The essential reason for unidirectional fluid pumping is expounded, and the flow rate formula is obtained. To verify the theory, a prototype is produced. By using the prototype, experimental research on the relationship between flow rate, pressure difference, voltage, and frequency has been carried out, which proves the correctness of the above theory. This prototype has six hemisphere-segments in the chamber filled with water, and the effective diameter of the piezoelectric bimorph is 30 mm. The experimental results show that the flow rate can reach 0.50 mL/s at a frequency of 6 Hz and a voltage of 110 V. Besides, the pressure difference can reach 26.2 mm H2O at a frequency of 6 Hz and a voltage of 160 V. This research proposes a valve-less piezoelectric pump with hemisphere-segment bluff-body, and its validity and feasibility are verified through theoretical analysis and experiment.

Ji, Jing; Zhang, Jianhui; Xia, Qixiao; Wang, Shouyin; Huang, Jun; Zhao, Chunsheng

2014-05-01

147

Teeth segmentation of dental periapical radiographs based on local singularity analysis.  

PubMed

Teeth segmentation for periapical radiographs is one of the most critical tasks for effective periapical lesion or periodontitis detection, as both types of anomalies usually occur around tooth boundaries and dental radiographs are often subject to noise, low contrast, and uneven illumination. In this paper, we propose an effective scheme to segment each tooth in periapical radiographs. The method consists of four stages: image enhancement using adaptive power law transformation, local singularity analysis using the Hölder exponent, tooth recognition using Otsu's thresholding and connected component analysis, and tooth delineation using snake boundary tracking and morphological operations. Experimental results on 28 periapical radiographs containing 106 teeth in total, 75 of which are useful for dental examination, demonstrate that 105 teeth are successfully isolated and segmented, and the overall mean segmentation accuracy of all 75 useful teeth in terms of (TP, FP) is (0.8959, 0.0093) with standard deviation (0.0737, 0.0096), respectively. PMID:24252317
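Otsu's thresholding, used in the tooth-recognition stage above, selects the intensity cut that maximizes the between-class variance of the resulting foreground/background split. A minimal sketch with synthetic bimodal intensities (not the paper's data):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, bin_edges = np.histogram(img, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    w0 = np.cumsum(p)            # class-0 probability for each candidate cut
    mu = np.cumsum(p * centers)  # cumulative mean up to each cut
    mu_total = mu[-1]
    # between-class variance; degenerate cuts (an empty class) give nan/inf
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return centers[np.argmax(sigma_b)]

# Hypothetical bimodal intensities: dark background plus brighter teeth
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 2000)])
t = otsu_threshold(pixels)  # lands in the valley between the two modes
```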

Lin, P L; Huang, P Y; Huang, P W; Hsu, H C; Chen, C C

2014-02-01

148

Analysis of radially cracked ring segments subject to forces and couples  

NASA Technical Reports Server (NTRS)

Results of planar boundary collocation analysis are given for ring-segment (C-shaped) specimens with radial cracks, subjected to combined forces and couples. Mode I stress intensity factors and crack mouth opening displacements were determined for ratios of outer to inner radius in the range 1.1 to 2.5, and ratios of crack length to segment width in the range 0.1 to 0.8.

Gross, B.; Strawley, J. E.

1975-01-01

149

Improved assessment of body cell mass by segmental bioimpedance analysis in malnourished subjects and acromegaly  

Microsoft Academic Search

Background: Estimation of body cell mass (BCM) has been regarded as valuable for the assessment of malnutrition.Aim: To investigate the value of segmental bioelectrical impedance analysis (BIA) for BCM estimation in malnourished subjects and acromegaly.Methods: Nineteen controls and 63 patients with either reduced (liver cirrhosis without and with ascites, Cushing's disease) or increased BCM (acromegaly) were included. Whole-body and segmental BIA

M. PIRLICH; T. SCHÜTZ; J. OCKENGA; H. BIERING; H. GERL; B. SCHMIDT; S. ERTL; M. PLAUTH; H. LOCHS

2003-01-01

150

An Automatic Segmentation Method for Regional Analysis of Femoral Neck Images Acquired by pQCT  

Microsoft Academic Search

We developed an automatic method for regional analysis of femoral neck images acquired by peripheral quantitative computed tomography (pQCT), based on automatic spatial re-alignment and segmentation; the segmentation method, based on a morphological approach, explicitly accounts for the presence of three different bone compartments: cortical region, trabecular region, and transition zone between cortical and trabecular compartments. The proposed method was

G. Rizzo; E. Scalco; D. Tresoldi; I. Villa; G. L. Moro; C. L. Lafortuna; A. Rubinacci

2011-01-01

151

An EM approach to MAP solution of segmenting tissue mixtures: a numerical analysis.  

PubMed

This work presents an iterative expectation-maximization (EM) approach to the maximum a posteriori (MAP) solution of segmenting tissue mixtures inside each image voxel. Each tissue type is assumed to follow a normal distribution across the field-of-view (FOV). Furthermore, all tissue types are assumed to be independent from each other. Under these assumptions, the summation of all tissue mixtures inside each voxel leads to the image density mean value at that voxel. The summation of all the tissue mixtures' unobservable random processes leads to the observed image density at that voxel, and the observed image density value also follows a normal distribution (image data are observed to follow a normal distribution in many applications). By modeling the underlying tissue distributions as a Markov random field across the FOV, the conditional expectation of the posteriori distribution of the tissue mixtures inside each voxel is determined, given the observed image data and the current-iteration estimation of the tissue mixtures. Estimation of the tissue mixtures at the next iteration is computed by maximizing the conditional expectation. The iterative EM approach to a MAP solution is achieved in a finite number of iterations given a reasonable initial estimate. This MAP-EM framework provides a theoretical solution to the partial volume effect, which has been a major cause of quantitative imprecision in medical image processing. Numerical analysis demonstrated its potential to estimate tissue mixtures accurately and efficiently. PMID:19188116
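The core E- and M-steps of such a mixture estimation can be sketched for a 1-D two-class intensity model. This deliberately omits the Markov random field prior that makes the full framework a MAP (rather than maximum-likelihood) solution, and the intensities below are synthetic:

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    """Plain EM for a two-component 1-D Gaussian mixture: the E-step computes
    per-voxel posterior "mixture" fractions, and the M-step re-estimates the
    class means, variances, and priors from those fractions."""
    # crude initialization from the data quantiles
    mu = np.quantile(x, [0.25, 0.75])
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities r[i, k] = P(class k | x_i)
        d = (x[:, None] - mu[None, :]) ** 2
        lik = pi * np.exp(-d / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: weighted means, variances, and class priors
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return mu, var, pi

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(30, 5, 3000), rng.normal(70, 8, 3000)])
mu, var, pi = em_two_gaussians(x)  # class means recovered near 30 and 70
```

The per-voxel responsibilities r play the role of the tissue-mixture fractions; adding the MRF prior would couple them across neighboring voxels.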

Liang, Zhengrong; Wang, Su

2009-02-01

152

Health lifestyles: audience segmentation analysis for public health interventions.  

PubMed

This article is concerned with the application of market segmentation techniques in order to improve the planning and implementation of public health education programs. Seven distinctive patterns of health attitudes, social influences, and behaviors are identified using cluster analytic techniques in a sample drawn from four central California cities, and are subjected to construct and predictive validation: The lifestyle clusters predict behaviors including seatbelt use, vitamin C use, and attention to health information. The clusters also predict self-reported improvements in health behavior as measured in a two-year follow-up survey, e.g., eating less salt and losing weight, and self-reported new moderate and new vigorous exercise. Implications of these lifestyle clusters for public health education and intervention planning, and the larger potential of lifestyle clustering techniques in public health efforts, are discussed. PMID:2055779
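The cluster-analytic approach behind such lifestyle segmentation can be illustrated with plain k-means on invented two-variable survey scores; the study's actual clustering variables and algorithm details are not reproduced here:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means: alternate nearest-center assignment and center update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each respondent to the nearest cluster center
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # recompute centers; keep the old center if a cluster emptied out
        new_centers = np.array([X[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Hypothetical survey scores on two variables (e.g. exercise, diet),
# drawn from two well-separated respondent groups
rng = np.random.default_rng(42)
X = np.vstack([rng.normal([1, 1], 0.2, (50, 2)),
               rng.normal([4, 5], 0.2, (50, 2))])
labels, centers = kmeans(X, k=2)
```

In practice the number of clusters (seven in the study) and the clustering variables are chosen from the survey design, and the resulting segments are then validated against external behaviors, as the abstract describes.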

Slater, M D; Flora, J A

1991-01-01

153

On-Line Segmentation of Human Motion for Automated Rehabilitation Exercise Analysis.  

PubMed

To enable automated analysis of rehabilitation movements, an approach for accurately identifying and segmenting movement repetitions is required. This paper proposes an approach for on-line, automated segmentation and identification of movement segments from continuous time-series data of human movement, obtained from body-mounted inertial measurement units or from motion capture data. The proposed approach uses a two-stage identification and recognition process, based on velocity features and stochastic modeling of each motion to be identified. In the first stage, motion segment candidates are identified based on a characteristic sequence of velocity features such as velocity peaks and zero velocity crossings. In the second stage, hidden Markov models are used to accurately identify segment locations from the identified candidates. The proposed approach is capable of on-line segmentation and identification, enabling interactive feedback in rehabilitation applications. The approach is validated on 20 healthy subjects and 4 rehabilitation patients performing rehabilitation movements, achieving segmentation accuracy of 87% with user-specific templates and 79-83% accuracy with user-independent templates. PMID:23661321
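The first-stage candidate identification from velocity features can be sketched as below; the HMM verification stage is omitted, and the sinusoidal trajectory is purely illustrative:

```python
import numpy as np

def velocity_features(position, dt=0.01, zero_tol=0.05):
    """First-stage candidate features from a 1-D joint trajectory:
    (near-)zero-velocity samples and local speed peaks, of the kind used to
    propose motion-segment boundaries."""
    v = np.gradient(position, dt)
    # samples where |v| dips below tolerance -> candidate segment boundaries
    zero_cross = np.where(np.abs(v) < zero_tol)[0]
    # interior local maxima of |v| -> candidate mid-movement markers
    s = np.abs(v)
    peaks = np.where((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]))[0] + 1
    return v, zero_cross, peaks

# Hypothetical trajectory: two repetitions of a sinusoidal reaching motion
t = np.arange(0, 2, 0.01)
pos = np.sin(2 * np.pi * t)
v, zeros, peaks = velocity_features(pos)
```

The characteristic ordering of these events (zero crossing, peak, zero crossing) is what the second, stochastic stage would then verify against a template for each motion.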

Lin, Jonathan; Kulic, Dana

2013-05-01

154

Design and validation of Segment - freely available software for cardiovascular image analysis  

PubMed Central

Background Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Results Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. 
The software has been made freely available for research purposes in a source code format on the project home page http://segment.heiberg.se. Conclusions Segment is a well-validated comprehensive software package for cardiovascular image analysis. It is freely available for research purposes provided that relevant original research publications related to the software are cited.

2010-01-01

155

Automated abdominal lymph node segmentation based on RST analysis and SVM  

NASA Astrophysics Data System (ADS)

This paper describes a segmentation method for abdominal lymph nodes (LNs) using radial structure tensor (RST) analysis and a support vector machine. LN analysis is one of the crucial parts of lymphadenectomy, a surgical procedure to remove one or more LNs in order to evaluate them for the presence of cancer. Several methods for automated LN detection and segmentation have been proposed. However, they produce many false positives (FPs). The proposed method consists of LN candidate segmentation and FP reduction. LN candidates are extracted using RST analysis at each voxel of the CT scan. RST analysis can discriminate between different local intensity structures without being influenced by surrounding structures. In the FP reduction process, we eliminate FPs using a support vector machine with shape and intensity information of the LN candidates. The experimental results reveal that the sensitivity of the proposed method was 82.0% with 21.6 FPs/case.

Nimura, Yukitaka; Hayashi, Yuichiro; Kitasaka, Takayuki; Furukawa, Kazuhiro; Misawa, Kazunari; Mori, Kensaku

2014-03-01

156

Microarray kit analysis of cytokines in blood product units and segments  

PubMed Central

BACKGROUND Cytokine concentrations in transfused blood components are of interest for some clinical trials. It is not always possible to process samples of transfused components quickly after their administration. Additionally, it is not practical to sample material in an acceptable manner from many bags of components before transfusion, and after transfusion, the only representative remaining fluid of the component may be that in the “segment,” as the bag may have been completely transfused. Multiplex array technology allows rapid simultaneous testing of multiple analytes in small-volume samples. We used this technology to measure leukocyte cytokine levels in blood products to determine (1) whether concentrations in segments correlate with those in the main bag, and thus whether segments could be used to estimate the concentrations in the transfused component; and (2) whether concentrations after sample storage at 4°C for 24 hrs differ from concentrations before storage; if they do not, processing within 24 hrs, rather than immediately after transfusion, would be acceptable. STUDY DESIGN AND METHODS Leukocyte cytokines were measured in the supernatant from bags and segments of leukoreduced red blood cells, non-leukoreduced whole blood, and leukoreduced plateletphereses using the ProteoPlex Human Cytokine Array kit (Novagen). RESULTS Cytokine concentrations in packed red blood cells, whole blood, and plateletphereses stored at 4°C did not differ between bag and segment samples (all p>0.05). There was no evidence of systematic differences between segment and bag concentrations. Cytokine concentrations in samples from plateletphereses did not change within 24 hrs of storage at 4°C. CONCLUSION Samples from either bag or segment can be used to study cytokine concentrations in groups of blood products. 
Cytokine concentrations in plateletphereses appear to be stable for at least 24 hrs of storage at 4°C, and, thus, samples stored with those conditions may be used to estimate the cytokine concentrations of the component at the time of transfusion.

Weiskopf, Richard B.; Yau, Rebecca; Sanchez, Rosa; Lowell, Clifford; Toy, Pearl

2009-01-01

157

Automatic segmentation of the colon  

NASA Astrophysics Data System (ADS)

Virtual colonoscopy is a minimally invasive technique that enables detection of colorectal polyps and cancer. Normally, a patient's bowel is prepared with colonic lavage and gas insufflation prior to computed tomography (CT) scanning. An important step for 3D analysis of the image volume is segmentation of the colon. The high-contrast gas/tissue interface that exists in the colon lumen makes segmentation of the majority of the colon relatively easy; however, two factors inhibit automatic segmentation of the entire colon. First, the colon is not the only gas-filled organ in the data volume: the lungs, small bowel, and stomach also meet this criterion. User-defined seed points placed in the colon lumen have previously been required to spatially isolate only the colon. Second, portions of the colon lumen may be obstructed by peristalsis, large masses, and/or residual feces. These complicating factors require increased user interaction during the segmentation process to isolate additional colon segments. To automate the segmentation of the colon, we have developed a method to locate seed points and segment the gas-filled lumen with no user supervision. We have also developed an automated approach to improve lumen segmentation by digitally removing residual contrast-enhanced fluid resulting from a new bowel preparation that liquefies and opacifies any residual feces.
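The seed-location and lumen-growing steps described above can be sketched as a Hounsfield-unit threshold test plus a 6-connected flood fill. The threshold value, toy volume, and function names are illustrative assumptions, not the authors' implementation:

```python
# Sketch of threshold-based seed finding plus 3-D region growing for a
# gas-filled lumen. The HU threshold and toy volume are illustrative.
from collections import deque

AIR_HU = -800  # voxels at or below this CT number are treated as gas

def find_seeds(volume):
    """Return every gas voxel as a candidate seed (a real system would
    rank connected components by size/position to keep only the colon)."""
    return [(z, y, x)
            for z, plane in enumerate(volume)
            for y, row in enumerate(plane)
            for x, hu in enumerate(row)
            if hu <= AIR_HU]

def region_grow(volume, seed):
    """6-connected flood fill of contiguous gas voxels from one seed."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    grown, frontier = {seed}, deque([seed])
    while frontier:
        z, y, x = frontier.popleft()
        for dz, dy, dx in ((1,0,0),(-1,0,0),(0,1,0),(0,-1,0),(0,0,1),(0,0,-1)):
            v = (z + dz, y + dy, x + dx)
            if (0 <= v[0] < nz and 0 <= v[1] < ny and 0 <= v[2] < nx
                    and v not in grown and volume[v[0]][v[1]][v[2]] <= AIR_HU):
                grown.add(v)
                frontier.append(v)
    return grown

# Toy 3x3x3 "CT" volume: a 2-voxel gas pocket inside soft tissue (~40 HU).
vol = [[[40] * 3 for _ in range(3)] for _ in range(3)]
vol[1][1][0] = vol[1][1][1] = -1000
seeds = find_seeds(vol)
lumen = region_grow(vol, seeds[0])
print(len(lumen))  # → 2
```

A production system would additionally distinguish the colon from the lungs and stomach, e.g. by component size and anatomical position, as the abstract notes.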

Wyatt, Christopher L.; Ge, Yaorong; Vining, David J.

1999-05-01

158

Fast Iris Segmentation by Rotation Average Analysis of Intensity-Inversed Image  

NASA Astrophysics Data System (ADS)

Iris recognition is a reliable and accurate biometric technique used in modern personal identification systems. Segmentation of the effective iris region is the basis of iris feature encoding and recognition. In this paper, a novel method is presented for fast iris segmentation. The segmentation proceeds in two steps. The first step is iris location, which is based on rotation average analysis of the intensity-inversed image and non-linear circular regression. The second step is eyelid detection. A new method to detect the eyelids utilizing a simplified mathematical model of an arc with three free parameters is implemented for quick fitting; by comparison, the conventional four-parameter model is less efficient. Experiments were carried out on both self-collected images and the CASIA database. The results show that our method is fast and robust in segmenting the effective iris region, with high tolerance of noise and scaling.

Li, Wei; Jiang, Lin-Hua

159

A robust and fast line segment detector based on top-down smaller eigenvalue analysis  

NASA Astrophysics Data System (ADS)

In this paper, we propose a robust and fast line segment detector, which achieves accurate results with a controlled number of false detections and requires no parameter tuning. It consists of three steps: first, we propose a novel edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input image; second, we propose a top-down scheme based on smaller eigenvalue analysis to extract line segments within each obtained edge segment; third, we employ Desolneux et al.'s method to reject false detections. Experiments demonstrate that it is very efficient and more robust than two state-of-the-art methods, LSD and EDLines.
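The smaller-eigenvalue criterion at the heart of the second step can be illustrated with a closed-form 2x2 eigenvalue computation: for a chain of edge points, the smaller eigenvalue of the point scatter matrix is near zero exactly when the chain is collinear. Thresholds and the top-down splitting logic are omitted; all names here are illustrative:

```python
# Minimal sketch of the "smaller eigenvalue" collinearity test: for a chain
# of edge points, the smaller eigenvalue of the 2x2 scatter matrix is ~0
# when the points are collinear. Splitting thresholds are omitted.
def smaller_eigenvalue(points):
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    # closed-form eigenvalues of a symmetric 2x2 matrix
    disc = (tr * tr / 4 - det) ** 0.5
    return tr / 2 - disc

line = [(i, 2 * i + 1) for i in range(10)]        # perfectly collinear
corner = [(i, 0) for i in range(5)] + [(4, j) for j in range(1, 5)]
print(smaller_eigenvalue(line))    # ~0: accept as one line segment
print(smaller_eigenvalue(corner))  # clearly > 0: split the chain
```

In a top-down scheme, a chain whose smaller eigenvalue exceeds a threshold is split at the point of maximum deviation and the test recurses on the halves.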

Liu, Dong; Wang, Yongtao; Tang, Zhi; Lu, Xiaoqing

2014-01-01

160

Tracking and data acquisition system for the 1990's. Volume 5: TDAS ground segment architecture and operations concept  

NASA Technical Reports Server (NTRS)

Tracking and data acquisition system (TDAS) ground segment and operational requirements, TDAS RF terminal configurations, TDAS ground segment elements, the TDAS network, and the TDAS ground terminal hardware are discussed.

Daly, R.

1983-01-01

161

Asymmetry analysis using automatic segmentation and classification for breast cancer detection in thermograms  

Microsoft Academic Search

Thermal infrared imaging has shown effective results as a diagnostic tool in breast cancer detection. It can be used as a complement to traditional mammography. Asymmetry analysis is usually used to help detect abnormalities. However, in infrared imaging, this cannot be done without human interference. This paper proposes an automatic approach to asymmetry analysis in thermograms. It includes automatic segmentation

Hairong Qi; Jonathan F. Head

2001-01-01

162

Analysis of grounding systems in soils with cylindrical soil volumes  

Microsoft Academic Search

A theoretical model for the analysis of grounding systems located in soils with cylindrical soil volumes is presented for the first time. Exact closed-form analytical expressions for earth potentials due to current sources in different regions of such soil structures have been obtained. More precisely, the soil models considered contain horizontal semi-cylindrical soil volumes and vertical cylindrical soil volumes. Numerical

Jinxi Ma; F. P. Dawlibi

2000-01-01

163

Automated image segmentation for breast analysis using infrared images  

Microsoft Academic Search

In order to realize a fully automated thermogram analysis package for breast cancer detection, it is necessary to identify the region of interest in the thermal image prior to analysis. A nearly fully automated approach is outlined that is able to successfully locate the breast regions in most of the images analyzed. The approach consists of a sequence of Canny

N. Scales; C. Kerry; M. Prize

2004-01-01

164

Fire flame detection using color segmentation and space-time analysis  

NASA Astrophysics Data System (ADS)

This paper presents a fire flame detection scheme for CCTV cameras based on image processing. The scheme relies on color segmentation and space-time analysis. The segmentation is performed to extract fire-like-color regions in an image. Many methods are benchmarked against each other to find the best one for practical CCTV cameras. After that, space-time analysis is used to recognize fire behavior. A space-time window is generated from the contour of the thresholded image. Feature extraction is done in the Fourier domain of the window. A neural network is used for behavior recognition. The system is shown to be practical and robust.

Ruchanurucks, Miti; Saengngoen, Praphin; Sajjawiso, Theeraphat

2011-10-01

165

Segmentation of ECG-gated multidetector row-CT cardiac images for functional analysis  

NASA Astrophysics Data System (ADS)

Multi-row detector CT (MDCT) gated with ECG-tracing allows continuous image acquisition of the heart during a breath-hold with a high spatial and temporal resolution. Dynamic segmentation and display of CT images, especially short- and long-axis view, is important in functional analysis of cardiac morphology. The size of dynamic MDCT cardiac images, however, is typically very large involving several hundred CT images and thus a manual analysis of these images can be time-consuming and tedious. In this paper, an automatic scheme was proposed to segment and reorient the left ventricular images in MDCT. Two segmentation techniques, deformable model and region-growing methods, were developed and tested. The contour of the ventricular cavity was segmented iteratively from a set of initial coarse boundary points placed on a transaxial CT image and was propagated to adjacent CT images. Segmented transaxial diastolic cardiac phase MDCT images were reoriented along the long- and short-axis of the left ventricle. The axes were estimated by calculating the principal components of the ventricular boundary points and then confirmed or adjusted by an operator. The reorientation of the coordinates was applied to other transaxial MDCT image sets reconstructed at different cardiac phases. Estimated short-axes of the left ventricle were in a close agreement with the qualitative assessment by a radiologist. Preliminary results from our methods were promising, with a considerable reduction in analysis time and manual operations.

Kim, Jinsung; Na, Yonghum; Bae, Kyongtae T.

2002-05-01

166

Finite difference based vibration simulation analysis of a segmented distributed piezoelectric structronic plate system  

NASA Astrophysics Data System (ADS)

Electrical modeling of piezoelectric structronic systems by analog circuits has the disadvantages of huge circuit structure and low precision. However, studies of electrical simulation of segmented distributed piezoelectric structronic plate systems (PSPSs) by using output voltage signals of high-speed digital circuits to evaluate the real-time dynamic displacements are scarce in the literature. Therefore, an equivalent dynamic model based on the finite difference method (FDM) is presented to simulate the actual physical model of the segmented distributed PSPS with simply supported boundary conditions. By means of the FDM, the fourth-order dynamic partial differential equations (PDEs) of the main structure/segmented distributed sensor signals/control moments of the segmented distributed actuator of the PSPS are transformed into finite difference equations. A dynamics matrix model based on the Newmark-β integration method is established. The output voltage signal characteristics of the lower modes (m ≤ 3, n ≤ 3) with different finite difference mesh dimensions and different integration time steps are analyzed by digital signal processing (DSP) circuit simulation software. The control effects of segmented distributed actuators with different effective areas are consistent with the results of the analytical model in the relevant references. Therefore, the method of digital simulation for vibration analysis of segmented distributed PSPSs presented in this paper can provide a reference for further research into the electrical simulation of PSPSs.
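The Newmark time-stepping referenced above can be sketched for a single undamped degree of freedom; the average-acceleration variant (beta = 1/4, gamma = 1/2) shown here is a standard choice, and the mass, stiffness, and step size are illustrative rather than taken from the paper:

```python
# Sketch of a Newmark-beta step for one undamped oscillator
# m*x'' + k*x = 0 (average-acceleration variant, beta=1/4, gamma=1/2).
def newmark_step(x, v, a, dt, m, k, beta=0.25, gamma=0.5):
    # predictors from the current state
    x_pred = x + dt * v + dt * dt * (0.5 - beta) * a
    v_pred = v + dt * (1 - gamma) * a
    # solve m*a_new + k*x_new = 0 with x_new = x_pred + beta*dt^2*a_new
    a_new = -k * x_pred / (m + k * beta * dt * dt)
    x_new = x_pred + beta * dt * dt * a_new
    v_new = v_pred + gamma * dt * a_new
    return x_new, v_new, a_new

m, k, dt = 1.0, 4.0, 0.01           # natural frequency 2 rad/s
x, v = 1.0, 0.0
a = -k * x / m
for _ in range(1000):               # integrate 10 s
    x, v, a = newmark_step(x, v, a, dt, m, k)
energy = 0.5 * m * v * v + 0.5 * k * x * x
print(round(energy, 3))  # → 2.0: average acceleration conserves energy
```

The unconditional stability and zero numerical damping of this variant are why it is a common default for structural dynamics; other (beta, gamma) pairs trade accuracy for dissipation.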

Ren, B. Y.; Wang, L.; Tzou, H. S.; Yue, H. H.

2010-08-01

167

Mathematical Analysis of Space Radiator Segmenting for Increased Reliability and Reduced Mass  

NASA Technical Reports Server (NTRS)

Spacecraft for long duration deep space missions will need to be designed to survive micrometeoroid bombardment of their surfaces some of which may actually be punctured. To avoid loss of the entire mission the damage due to such punctures must be limited to small, localized areas. This is especially true for power system radiators, which necessarily feature large surface areas to reject heat at relatively low temperature to the space environment by thermal radiation. It may be intuitively obvious that if a space radiator is composed of a large number of independently operating segments, such as heat pipes, a random micrometeoroid puncture will result only in the loss of the punctured segment, and not the entire radiator. Due to the redundancy achieved by independently operating segments, the wall thickness and consequently the weight of such segments can be drastically reduced. Probability theory is used to estimate the magnitude of such weight reductions as the number of segments is increased. An analysis of relevant parameter values required for minimum mass segmented radiators is also included.
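The redundancy argument above can be made concrete with a binomial tail sum; the puncture probability, segment count, and loss tolerance below are illustrative numbers, not the report's mission parameters:

```python
# Back-of-envelope sketch of the redundancy argument: if each of n
# independent segments is punctured with probability p over the mission,
# and the radiator still works when at least k segments survive, the
# mission survival probability is a binomial tail sum.
from math import comb

def radiator_survival(n, k, p):
    """P(at least k of n segments survive), puncture probability p each."""
    q = 1 - p  # per-segment survival probability
    return sum(comb(n, s) * q**s * p**(n - s) for s in range(k, n + 1))

# One monolithic radiator: any puncture is fatal.
print(round(radiator_survival(1, 1, 0.05), 4))   # → 0.95
# Twenty segments, mission tolerates losing up to two of them.
print(round(radiator_survival(20, 18, 0.05), 4))
```

The mass saving enters separately: because a punctured segment is expendable, each segment's wall can be thinned, so the designer trades wall thickness against the tolerated number of lost segments.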

Juhasz, Albert J.

2001-01-01

168

The Influence of Segmental Impedance Analysis in Predicting Validity of Consumer Grade Bioelectrical Impedance Analysis Devices  

NASA Astrophysics Data System (ADS)

Consumer grade bioelectric impedance analysis (BIA) instruments measure the body's impedance at 50 kHz and yield a quick estimate of percent body fat. The frequency dependence of the impedance gives more information about the current pathway and the response of different tissues. This study explores the impedance response of human tissue over a range of frequencies from 0.2 - 102 kHz using a four-probe method and probe locations standard for segmental BIA research of the arm. The data at 50 kHz for a 21 year old healthy Caucasian male (resistance of 180 Ω ± 10 Ω and reactance of 33 Ω ± 2 Ω) are in agreement with previously reported values [1]. The frequency dependence is not consistent with the simple circuit models commonly used in evaluating BIA data, and repeatability of measurements is problematic. This research will contribute to a better understanding of the inherent difficulties in estimating body fat using consumer grade BIA devices. [1] Chumlea, William C., Richard N. Baumgartner, and Alex F. Roche. ``Specific resistivity used to estimate fat-free mass from segmental body measures of bioelectrical impedance.'' Am J Clin Nutr 48 (1988): 7-15.
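The "simple circuit models" mentioned above commonly take the form of an extracellular resistance in parallel with an intracellular resistance in series with a membrane capacitance. A minimal sketch, with illustrative component values not fitted to the subject's data:

```python
# Sketch of the circuit model often fit to BIA data: extracellular
# resistance Re in parallel with (intracellular resistance Ri in series
# with membrane capacitance Cm). Component values are illustrative.
import cmath

def bia_impedance(f_hz, re_ohm, ri_ohm, cm_farad):
    """Complex impedance of Re || (Ri + 1/(j*w*Cm))."""
    w = 2 * cmath.pi * f_hz
    z_branch = ri_ohm + 1 / (1j * w * cm_farad)
    return (re_ohm * z_branch) / (re_ohm + z_branch)

z = bia_impedance(50e3, 350.0, 250.0, 3e-9)
print(round(z.real, 1), round(-z.imag, 1))  # resistance, reactance (ohms)
```

In this model the impedance tends to Re at low frequency (the capacitor blocks the intracellular branch) and to Re in parallel with Ri at high frequency; the study's observation is that measured arm data do not follow this single-dispersion shape closely.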

Sharp, Andy; Heath, Jennifer; Peterson, Janet

2008-05-01

169

Microreactors with integrated UV/Vis spectroscopic detection for online process analysis under segmented flow.  

PubMed

Combining reaction and detection in multiphase microfluidic flow is becoming increasingly important for accelerating process development in microreactors. We report the coupling of UV/Vis spectroscopy with microreactors for online process analysis under segmented flow conditions. Two integration schemes are presented: one uses a cross-type flow-through cell subsequent to a capillary microreactor for detection in the transmission mode; the other uses embedded waveguides on a microfluidic chip for detection in the evanescent wave field. Model experiments reveal the capabilities of the integrated systems in real-time concentration measurements and segmented flow characterization. The application of such integration for process analysis during gold nanoparticle synthesis is demonstrated, showing its great potential in process monitoring in microreactors operated under segmented flow. PMID:24178763
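The real-time concentration measurement in the transmission mode reduces, in the simplest case, to the Beer-Lambert law; the molar absorptivity and path length below are illustrative assumptions, not the paper's values:

```python
# Online concentration readout from a transmission cell via Beer-Lambert:
# A = -log10(I/I0) = epsilon * l * c. Values below are illustrative.
from math import log10

def absorbance(i_transmitted, i_reference):
    return -log10(i_transmitted / i_reference)

def concentration(a, epsilon, path_cm):
    """c = A / (epsilon * l); epsilon in L/(mol*cm), path in cm."""
    return a / (epsilon * path_cm)

a = absorbance(31.6, 100.0)                    # ~0.5 absorbance units
print(round(concentration(a, 1.0e4, 0.1), 6))  # mol/L
```

Under segmented flow, the practical complication is gating such readings so that only liquid slugs (not gas segments) contribute to the spectrum, which is part of what the integrated detection schemes address.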

Yue, Jun; Falke, Floris H; Schouten, Jaap C; Nijhuis, T Alexander

2013-12-21

170

3-D segmentation and quantitative analysis of inner and outer walls of thrombotic abdominal aortic aneurysms  

NASA Astrophysics Data System (ADS)

An abdominal aortic aneurysm (AAA) is an area of a localized widening of the abdominal aorta, with a frequent presence of thrombus. A ruptured aneurysm can cause death due to severe internal bleeding. AAA thrombus segmentation and quantitative analysis are of paramount importance for diagnosis, risk assessment, and determination of treatment options. Until now, only a small number of methods for thrombus segmentation and analysis have been presented in the literature, either requiring substantial user interaction or exhibiting insufficient performance. We report a novel method offering minimal user interaction and high accuracy. Our thrombus segmentation method is composed of an initial automated luminal surface segmentation, followed by a cost function-based optimal segmentation of the inner and outer surfaces of the aortic wall. The approach utilizes the power and flexibility of the optimal triangle mesh-based 3-D graph search method, in which cost functions for thrombus inner and outer surfaces are based on gradient magnitudes. Sometimes local failures caused by image ambiguity occur, in which case several control points are used to guide the computer segmentation without the need to trace borders manually. Our method was tested in 9 MDCT image datasets (951 image slices). With the exception of a case in which the thrombus was highly eccentric, visually acceptable aortic lumen and thrombus segmentation results were achieved. No user interaction was used in 3 out of 8 datasets, and 7.80 ± 2.71 mouse clicks per case / 0.083 ± 0.035 mouse clicks per image slice were required in the remaining 5 datasets.

Lee, Kyungmoo; Yin, Yin; Wahle, Andreas; Olszewski, Mark E.; Sonka, Milan

2008-04-01

171

Finite volume methods: foundation and analysis  

Microsoft Academic Search

Finite volume methods are a class of discretization schemes that have proven highly successful in approximating the solution of a wide variety of conservation law systems. They are extensively used in fluid mechanics, meteorology, electromagnetics, semi-conductor device simulation, models of biological processes and many other engineering areas governed by conservative systems that can be written in integral control volume form.
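A minimal instance of the finite volume idea, for the 1-D advection conservation law with a first-order upwind numerical flux; the scheme choice and grid are illustrative:

```python
# Minimal 1-D finite volume sketch for the advection law u_t + a*u_x = 0:
# cell averages are updated by the flux difference across cell faces
# (first-order upwind flux, a > 0, periodic domain via negative indexing).
def upwind_step(u, a, dt, dx):
    n = len(u)
    # flux through the left face of cell i is a * u[i-1] (the upwind side)
    return [u[i] - (a * dt / dx) * (u[i] - u[i - 1]) for i in range(n)]

a, dx, dt = 1.0, 0.1, 0.1          # CFL number a*dt/dx = 1: exact shift
u = [0.0] * 10
u[3] = 1.0
for _ in range(2):
    u = upwind_step(u, a, dt, dx)
print(u.index(1.0))  # pulse has moved two cells downstream → 5
```

Because the update is written as a difference of face fluxes, the total cell-averaged quantity is conserved to machine precision, which is the defining property of the method family.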

Timothy Barth; Mario Ohlberger

2004-01-01

172

Robust Detection and Identification of Sparse Segments in Ultra-High Dimensional Data Analysis  

PubMed Central

Summary Copy number variants (CNVs) are alterations of the DNA of a genome that result in a cell having fewer or more than two copies of segments of the DNA. CNVs correspond to relatively large regions of the genome, ranging from about one kilobase to several megabases, that are deleted or duplicated. Motivated by CNV analysis based on next generation sequencing data, we consider the problem of detecting and identifying sparse short segments hidden in a long linear sequence of data with an unspecified noise distribution. We propose a computationally efficient method that provides a robust and near-optimal solution for segment identification over a wide range of noise distributions. We theoretically quantify the conditions for detecting the segment signals and show that the method near-optimally estimates the signal segments whenever it is possible to detect their existence. Simulation studies are carried out to demonstrate the efficiency of the method under different noise distributions. We present results from a CNV analysis of a HapMap Yoruban sample to further illustrate the theory and the methods.
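The problem setting can be illustrated with a toy interval scan: standardize robustly with the median and MAD (so the unspecified noise distribution matters less) and score every short interval by a mean-based scan statistic. This is a sketch of the setting only, not the paper's near-optimal estimator:

```python
# Toy sketch of scanning for a short elevated segment in a long noisy
# sequence: robust standardization followed by a scan over short intervals.
import statistics

def best_segment(x, max_len=5):
    med = statistics.median(x)
    mad = statistics.median(abs(v - med) for v in x) or 1.0
    z = [(v - med) / mad for v in x]
    best, score = None, float("-inf")
    for i in range(len(z)):
        for j in range(i + 1, min(i + max_len, len(z)) + 1):
            seg = z[i:j]
            s = sum(seg) / (len(seg) ** 0.5)  # mean-based scan statistic
            if s > score:
                best, score = (i, j), s
    return best, score

data = [0, 1, -1, 0, 1, 6, 7, 6, 0, -1, 1, 0]
seg, score = best_segment(data)
print(seg)  # → (5, 8): the hidden elevated segment
```

The sqrt-of-length normalization makes interval scores comparable across lengths; thresholds for declaring a detection would come from the theory the abstract describes.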

Cai, T. Tony; Jeng, X. Jessie; Li, Hongzhe

2012-01-01

173

A segmented principal component analysis--regression approach to QSAR study of peptides.  

PubMed

We employed segmented principal component analysis and regression, as a new methodology in quantitative structure-activity relationship (QSAR) modeling, to define new amino acid indices. The descriptors are first classified into different groups (based on the similarity of the information content they possess) and then each group is subjected to principal component analysis (PCA) separately. The extracted principal components (PCs) from the descriptor data matrix of each group can be considered as new sources of amino acid indices. These indices were used as input variables for a QSAR study of two dipeptide data sets (58 angiotensin-converting enzyme (ACE) inhibitor activity, and 48 bitter tasting threshold (BTT) activity). Modeling between the indices and biological activity was achieved utilizing segmented principal component regression (SPCR) and segmented partial least squares (SPLS) methods. Both methods resulted in reliable QSAR models. In comparison with conventional principal component regression (PCR) and partial least squares (PLS), the segmented ones produced more predictive models. In addition, the developed models showed better performance with respect to the previously reported models for the same data sets. It can be concluded that by segmentation of variables and partitioning of the information into informative and redundant parts, it is possible to discard the redundant part of the variables and to obtain more appropriate models. PMID:22575548
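The segmented-PCR idea can be sketched as: split the descriptor matrix into blocks, take the leading principal component of each block, then regress activity on the block scores. The data, grouping, and component counts below are arbitrary illustrations of the idea (assumes NumPy is available):

```python
# Hedged sketch of segmented PCR on synthetic data: one leading PC per
# descriptor block, then ordinary least squares on the block scores.
import numpy as np

rng = np.random.default_rng(0)
n, noise = 40, 0.1
block_a = rng.normal(size=(n, 4))   # e.g. hydrophobicity-like indices
block_b = rng.normal(size=(n, 3))   # e.g. steric/electronic indices

def first_pc_scores(X):
    Xc = X - X.mean(axis=0)
    # leading right singular vector = first principal axis of the block
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[0]

scores = np.column_stack([first_pc_scores(block_a), first_pc_scores(block_b)])
# synthetic "activity": known weights on the block scores plus noise
y = 2.0 * scores[:, 0] - 1.0 * scores[:, 1] + noise * rng.normal(size=n)

design = np.column_stack([np.ones(n), scores])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
print(coef[1:])  # block score weights, close to (2, -1)
```

In the real method a stepwise or cross-validated choice decides how many PCs per block to keep; here a single PC per block suffices because the toy response was built from those scores.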

Hemmateenejad, Bahram; Miri, Ramin; Elyasi, Maryam

2012-07-21

174

Nucleus Segmentation in Automated Cell Microarray Image Analysis  

Microsoft Academic Search

Live cell microarray technology (1) allows the simultaneous analysis of many gene products. These microarrays are produced by growing cells in spots printed on a glass slide using a robotic arrayer. The cells growing on the DNA and gelatin spots express the DNA and divide 2-3 times in the process of creating a microarray with features consisting of clusters of

Roberto A Lotufo; Ashish Choudhary; Robert Cornelison; Spyro Mousses; Edward R. Dougherty

175

Scientific and clinical evidence for the use of fetal ECG ST segment analysis (STAN).  

PubMed

Fetal electrocardiogram waveform analysis has been studied for many decades, but it is only in the last 20 years that computerization has made real-time analysis practical for clinical use. Changes in the ST segment have been shown to correlate with fetal condition, in particular with acid-base status. Meta-analysis of randomized trials (five in total, four using the computerized system) has shown that use of computerized ST segment analysis (STAN) reduces the need for fetal blood sampling by about 40%. However, although there are trends to lower rates of low Apgar scores and acidosis, the differences are not statistically significant. There is no effect on cesarean section rates. Disadvantages include the need for amniotic membranes to be ruptured so that a fetal scalp electrode can be applied, and the need for STAN values to be interpreted in conjunction with detailed fetal heart rate pattern analysis. PMID:24597897

Steer, Philip J; Hvidman, Lone Egly

2014-06-01

176

An analysis of the adventure travel market: From conceptual development to market segmentation  

Microsoft Academic Search

Despite the growing importance of adventure travel as a viable market segment in the international travel and tourism industry, not much systematic investigation has been attempted in this area. In an effort to propose a comprehensive market analysis, this study approaches the adventure travel marketing system from the conceptual dimension of adventure tourism linking to various marketing environmental factors for

Hyesook Heidi Sung

2000-01-01

177

Reproducibility of Data Obtained by a Newly Developed Anterior Eye Segment Analysis System, EAS-1000  

Microsoft Academic Search

The reproducibility of data obtained from the recently developed anterior eye segment analysis system (EAS-1000) was evaluated. 40 normal eyes and 62 cataractous eyes were examined at Kanazawa Medical University Hospital or Yayoi Hospital. The radius of the corneal curvature, the corneal thickness, anterior chamber depth, whole lens thickness, anterior chamber angle and the scattering light intensity were all observed

Yasuo Sakamoto; Kazuyuki Sasaki; Yoshinobu Nakamura; Noriko Watanabe

1992-01-01

178

Loads analysis and testing of flight configuration solid rocket motor outer boot ring segments  

NASA Technical Reports Server (NTRS)

This report describes loads testing of in-house-fabricated flight configuration Solid Rocket Motor (SRM) outer boot ring segments. The tests determined the bending strength and bending stiffness of these beams and showed that the results compared well with the hand analysis. The bending stiffness test results compared very well with the finite element data.

Ahmed, Rafiq

1990-01-01

179

A segmented principal component analysis-regression approach to quantitative structure-activity relationship modeling.  

PubMed

The major problem associated with the application of principal component regression (PCR) in QSAR studies is that this model extracts the eigenvectors solely from the matrix of descriptors, which might not have an essentially good relationship with the biological activity. This article describes a novel segmentation approach to PCR (SPCAR), in which the descriptors are first segmented into different blocks and then principal component analysis (PCA) is applied to each segment to extract significant principal components (PCs). In this way, the PCs carrying useful and redundant information are separated. A linear regression analysis based on stepwise selection of variables is then employed to establish a relationship between the informative extracted PCs and biological activity. The proposed method was first applied to model the aqueous toxicity of aliphatic compounds. The effect of the number of segments on the prediction ability of the method was investigated. Finally, a correlation analysis was performed to identify those descriptors having a significant contribution to the selected PCs and to aqueous toxicity. The proposed method was further validated by the analysis of the Selwood data set consisting of 31 compounds and 53 descriptors. A comparison between the conventional PCR algorithm and SPCAR reveals the superiority of the latter. For the external prediction set, SPCAR met all requirements to be considered a predictive model, whereas PCR did not. In addition, a comparison was made between the models obtained by SPCAR and those reported previously. PMID:19523553

Hemmateenejad, Bahram; Elyasi, Maryam

2009-07-30

180

A comparison of the whole-body and segmental methodologies of bioimpedance analysis  

Microsoft Academic Search

Theory supports the use of a segmental methodology (SM) for bioimpedance analysis (BIA) of body water (BW). However, previous studies have generally failed to show a significant improvement when the SM is used in place of a whole-body methodology. A pilot study was conducted to compare the two methodologies in control and overweight subjects. BW of each subject was measured

B. J. Thomas; B. H. Cornish; M. J. Pattemore; M. Jacobs; L. C. Ward

2003-01-01

181

A comparison of segmental and wrist-to-ankle methodologies of bioimpedance analysis  

Microsoft Academic Search

The common approach of bioelectrical impedance analysis to estimate body water uses a wrist-to-ankle methodology which, although not indicated by theory, has the advantage of ease of application particularly for clinical studies involving patients with debilitating diseases. A number of authors have suggested the use of a segmented protocol in which the impedances of the trunk and limbs are measured

B. J. Thomas; B. H. Cornish; L. C. Ward; M. A. Patterson

1998-01-01

182

Speech analysis and synthesis based on pitch-synchronous segmentation of the speech waveform  

Microsoft Academic Search

This report describes a new speech analysis/synthesis method. This new technique does not attempt to model the human speech production mechanism. Instead, we represent the speech waveform directly in terms of the waveform defined within a pitch period. A significant merit of this approach is the complete elimination of pitch interference, because each pitch-synchronously segmented waveform does not include

George S. Kang; Lawrence J. Fransen

1994-01-01

183

Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity  

NASA Astrophysics Data System (ADS)

In recent years, music video data has been increasing at an astonishing speed. Shot segmentation and keyframe extraction constitute a fundamental unit in organizing, indexing, and retrieving video content. In this paper a unified framework is proposed to detect shot boundaries and extract the keyframe of a shot. The music video is first segmented into shots using an illumination-invariant chromaticity histogram in an independent component (IC) analysis feature space. Then we present a new metric, image complexity, computed from the ICs, to extract the keyframe of a shot. Experimental results show the framework is effective and has good performance.
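The generic shot-boundary step behind such frameworks can be sketched with histogram differences between consecutive frames; plain intensity histograms and a chi-square distance stand in here for the paper's IC-space chromaticity histograms:

```python
# Sketch of histogram-difference shot detection: a boundary is declared
# where the distance between consecutive frame histograms spikes.
def hist(frame, bins=4, max_val=256):
    h = [0] * bins
    for v in frame:
        h[v * bins // max_val] += 1
    return h

def chi2(h1, h2):
    return sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b)

# Two "shots": three dark frames, then three bright frames.
frames = [[10, 20, 30, 40]] * 3 + [[200, 210, 220, 230]] * 3
dists = [chi2(hist(a), hist(b)) for a, b in zip(frames, frames[1:])]
boundary = max(range(len(dists)), key=dists.__getitem__) + 1
print(boundary)  # → 3: index of the first frame of the new shot
```

A real detector thresholds the distance sequence rather than taking its maximum, so that any number of boundaries can be found; the illumination-invariant IC features make that threshold more stable under lighting changes.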

Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

2012-04-01

184

Finite Volume Methods: Foundation and Analysis.  

National Technical Information Service (NTIS)

Finite volume methods are a class of discretization schemes that have proven highly successful in approximating the solution of a wide variety of conservation law systems. They are extensively used in fluid mechanics, porous media flow, meteorology, elect...

T. Barth; M. Ohlberger

2002-01-01

185

The Incidence of Adjacent Segment Degeneration after Cervical Disc Arthroplasty (CDA): A Meta Analysis of Randomized Controlled Trials  

PubMed Central

Background Cervical disc arthroplasty is being used as an alternative treatment for degenerative disc disease, in place of fusion of the cervical spine, in order to preserve motion. However, whether replacement arthroplasty in the spine achieves its primary patient-centered objective of lowering the frequency of adjacent segment degeneration has not yet been verified. Methodology We conducted a meta-analysis according to the guidelines of the Cochrane Collaboration using databases including PubMed, the Cochrane Central Register of Controlled Trials and Embase. The inclusion criteria were: 1) randomized, controlled study of degenerative disc disease of the cervical spine involving single or double segments, using cervical disc arthroplasty (CDA) with anterior cervical discectomy and fusion (ACDF) as the control; 2) a minimum of two-year follow-up using imaging and clinical analyses; 3) definite diagnostic evidence for “adjacent segment degeneration” and “adjacent segment disease”; 4) at least 30 patients per population. Two authors independently selected trials, assessed methodological quality and extracted data, and the results were pooled. Results No study specifically compared rates of adjacent segment degeneration. Two papers, describing 140 patients with 162 symptomatic cervical segment disorders, compared the rate of postoperative adjacent segment disease development between CDA and ACDF treatments; three publications, including 1273 patients with symptomatic cervical segments, described the rate of adjacent-segment surgery. The meta-analysis indicates that the rates of adjacent segment disease and adjacent-segment surgery were lower with CDA than with ACDF, but the difference was not statistically significant. Conclusions Based on the available evidence, it cannot be concluded that CDA significantly reduces the postoperative rates of adjacent segment degeneration and adjacent segment disease. 
However, due to some limitations, the results of this meta-analysis should be cautiously accepted, and further studies are needed.
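The pooling step in such a meta-analysis is often a fixed-effect Mantel-Haenszel risk ratio; the sketch below uses invented event counts for illustration, not the trial data summarized above:

```python
# Sketch of the fixed-effect (Mantel-Haenszel) pooled risk ratio.
# The two hypothetical trials below are invented for illustration only.
def mh_pooled_rr(studies):
    """studies: list of (events_treat, n_treat, events_ctrl, n_ctrl)."""
    num = den = 0.0
    for a, n1, c, n2 in studies:
        big_n = n1 + n2
        num += a * n2 / big_n   # treatment events, weighted
        den += c * n1 / big_n   # control events, weighted
    return num / den

trials = [(3, 70, 6, 72), (4, 65, 7, 68)]  # hypothetical counts
rr = mh_pooled_rr(trials)
print(round(rr, 2))  # pooled RR < 1 would favor the treatment arm
```

A point estimate below 1 with a confidence interval crossing 1 is exactly the "trend without statistical significance" pattern this meta-analysis reports; the interval computation (log-RR variance) is omitted from the sketch.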

Yang, Baohui; Li, Haopeng; Zhang, Ting; He, Xijing; Xu, Siyue

2012-01-01

186

Continuation of static tests of segments of tunnel linings. Volume II data. Final report 21 Nov 77-30 Jun 79  

Microsoft Academic Search

Volume II presents the complete set of laboratory data obtained from a continuation series of 19 quasi-static tests of segments of cylindrical tunnel linings. Test specimens included scale models of a composite-integral structure and a steel structure with backpacking fielded in the MIGHT EPIC/DIABLO HAWK structures experiments and a corrugated structure with backpacking fielded in the PILE DRIVER Event. Also included

H. C. Davis; K. B. Morrill; J. L. Merritt

1979-01-01

187

Unsupervised synthetic aperture radar image segmentation with superpixels in independent space based on independent component analysis  

NASA Astrophysics Data System (ADS)

Synthetic aperture radar (SAR) image segmentation has remained a challenging problem in recent years because of speckle noise. An unsupervised SAR image segmentation method using superpixels based on independent component analysis (ICA) is proposed. An ICA independent space is proposed to represent SAR images effectively for feature extraction. First, the SAR image is divided into small regions by the mean-shift algorithm, and then those regions are merged in a region adjacency graph and a fully connected graph based on minimum spanning tree theory, which balances the speed and quality of segmentation. Finally, experiments on X-band TerraSAR images and comparisons with simple linear iterative clustering and graph-cut illustrate the excellent performance of the new method.

Ji, Jian; Li, Xiao-yuan

2014-01-01

188

Two-dimensional finite-element analysis of tapered segmented structures  

NASA Astrophysics Data System (ADS)

We present the results of a theoretical study and two-dimensional frequency-domain finite-element simulation of tapered segmented waveguides. The application that we propose for this device is an adiabatically tapered and chirped PSW transmission, to eliminate the higher-order modes that can propagate in a multimode semiconductor waveguide, assuring monomode propagation at 1.55 µm. We demonstrate that by reducing the taper functions for the design of a segmented waveguide we can filter higher-order modes at the pump wavelength in WDM systems while maintaining low coupling losses between the continuous waveguide and the segmented waveguide. We obtained the cutoff wavelength as a function of the duty cycle of the segmented waveguide to show that we can, in fact, guide the 1.55 µm fundamental mode over a silicon-on-insulator platform using both silica and SU-8 as substrate materials. For the two-dimensional finite-element analysis, a new module built on a commercial platform is proposed. Its contribution is the inclusion of an anisotropic perfectly matched layer that is more suitable for solving periodic segmented structures and other discontinuity problems.

Rubio Noriega, Ruth; Hernandez-Figueroa, Hugo

2013-03-01

189

Influence of volume expansion on NaCl reabsorption in the diluting segments of the nephron: a study using clearance methods.  

PubMed

Whether volume expansion influences NaCl reabsorption by the diluting segment of the nephron remains a matter of controversy. In the present studies this question has been examined in normal unanesthetized dogs undergoing maximal water diuresis. Free water clearance (CH2O/GFR) has been used as the index of NaCl reabsorption in the diluting segment. Three expressions have been employed for "distal delivery" of NaCl: a) V/GFR, designated as the "volume term"; b) (CNa/GFR + CH2O/GFR), the "sodium term"; and c) (CCl/GFR + CH2O/GFR), the "chloride term". The validity of these terms is discussed. Three techniques were used to increase distal delivery: 1) the administration of acetazolamide to dogs in which extracellular fluid (ECF) volume was not expanded (group 1); 2) "moderate" volume expansion (group 2); and 3) "marked" volume expansion (group 3). CH2O/GFR increased progressively with rising values for "distal delivery" regardless of which term was used to calculate the latter. With all three delivery terms, differences in distal NaCl reabsorption emerged between the two volume-expanded groups, though only with the "chloride" term did substantial differences also emerge between the nonexpanded group 1 dogs and both volume-expanded groups. In group 1, values for CH2O/GFR increased in a nearly linear fashion up to distal delivery values equal to 24% of the volume of glomerular filtrate. However, at high rates of distal delivery the rate of rise of CH2O/GFR was less in group 2 than in group 1, and the depression of values was even greater in group 3. Within the limits of the techniques used, the data suggest that volume expansion inhibits fractional NaCl reabsorption in the diluting segment of the nephron in a dose-related fashion. The "chloride" term was found to be superior to the "volume" and "sodium" terms in revealing these changes. PMID:972443
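The three "distal delivery" terms above are simple arithmetic on measured clearances (C_x = U_x·V/P_x, with CH2O = V − Cosm). A minimal sketch; the physiological values below are hypothetical, chosen only to illustrate the calculation, not taken from the study:

```python
def clearance(u_conc, v, p_conc):
    """Clearance of a solute: C_x = U_x * V / P_x."""
    return u_conc * v / p_conc

# Hypothetical values for a dog in maximal water diuresis (not the study's data)
gfr = 60.0                     # glomerular filtration rate, ml/min
v = 9.0                        # urine flow rate, ml/min
u_osm, p_osm = 60.0, 300.0     # urine / plasma osmolality, mOsm/kg
u_na, p_na = 15.0, 145.0       # urine / plasma sodium, mEq/l
u_cl, p_cl = 12.0, 110.0       # urine / plasma chloride, mEq/l

c_osm = clearance(u_osm, v, p_osm)
ch2o = v - c_osm                                          # free water clearance
volume_term = v / gfr                                     # a) "volume term"
sodium_term = clearance(u_na, v, p_na) / gfr + ch2o / gfr   # b) "sodium term"
chloride_term = clearance(u_cl, v, p_cl) / gfr + ch2o / gfr # c) "chloride term"
```

Because the solute clearances are small during water diuresis, the "sodium" and "chloride" terms fall below the "volume" term, which counts solute-free water delivery at face value.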

Danovitch, G M; Bricker, N S

1976-09-01

190

Segmental analysis of indocyanine green pharmacokinetics for the reliable diagnosis of functional vascular insufficiency  

NASA Astrophysics Data System (ADS)

Accurate and reliable diagnosis of functional insufficiency of the peripheral vasculature is essential, since Raynaud phenomenon (RP), the most common form of peripheral vascular insufficiency, is commonly associated with systemic vascular disorders. We have previously demonstrated that dynamic imaging of the near-infrared fluorophore indocyanine green (ICG) can be a noninvasive and sensitive tool to measure tissue perfusion. In the present study, we demonstrated that combined analysis of multiple parameters, especially onset time and modified Tmax (the interval from the onset of ICG fluorescence to Tmax), can be used as a reliable diagnostic tool for RP. To validate the method, we performed conventional thermographic analysis combined with cold challenge and rewarming along with ICG dynamic imaging and segmental analysis. A case-control analysis demonstrated that the segmental pattern of ICG dynamics in both hands was significantly different between normal and RP cases, suggesting the possibility of clinical application of this novel method for the convenient and reliable diagnosis of RP.

Kang, Yujung; Lee, Jungsul; An, Yuri; Jeon, Jongwook; Choi, Chulhee

2011-03-01

191

New Software for Market Segmentation Analysis: A Chi-Square Interaction Detector. AIR 1983 Annual Forum Paper.  

ERIC Educational Resources Information Center

The advantages and disadvantages of new software for market segmentation analysis are discussed, and the application of this new chi-square-based procedure (CHAID) is illustrated. A comparison is presented of an earlier binary segmentation technique (THAID) and a multiple discriminant analysis. It is suggested that CHAID is superior to earlier…

Lay, Robert S.

192

Sum of segmental bioimpedance analysis during ultrafiltration and hemodialysis reduces sensitivity to changes in body position  

Microsoft Academic Search

Background: Bioimpedance, a noninvasive technique to analyze body composition, has attracted interest for determining body hydration in hemodialysis patients. However, so-called whole-body (wrist-to-ankle) bioimpedance analysis (WBIA) is sensitive to changes in regional fluid distribution and tends to underestimate fluid changes during ultrafiltration in hemodialysis patients.

Fansan Zhu; Daniel Schneditz; Nathan W. Levin

1999-01-01

193

Market segmentation for multiple option healthcare delivery systems--an application of cluster analysis.  

PubMed

Healthcare providers of multiple option plans may be confronted with special market segmentation problems. This study demonstrates how cluster analysis may be used for discovering distinct patterns of preference for multiple option plans. The availability of metric, as opposed to categorical or ordinal, data provides the ability to use sophisticated analysis techniques which may be superior to frequency distributions and cross-tabulations in revealing preference patterns. PMID:10105775
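Cluster analysis on metric preference data of the kind described can be sketched with a plain k-means (Lloyd's) loop. The two "preference segments" below are synthetic and purely illustrative; the initialization is deliberately deterministic (evenly spaced data points) to keep the sketch reproducible:

```python
import numpy as np

def kmeans(x, k, iters=20):
    """Plain Lloyd's algorithm: returns (centroids, labels)."""
    # Deterministic init for this sketch: k evenly spaced data points
    centroids = x[np.linspace(0, len(x) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        # Assign each respondent to the nearest centroid (Euclidean)
        d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids as cluster means
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = x[labels == j].mean(axis=0)
    return centroids, labels

# Hypothetical metric preference ratings (rows: respondents, cols: attributes)
rng = np.random.default_rng(1)
group_a = rng.normal([8.0, 2.0], 0.5, size=(20, 2))   # e.g. prefers low premiums
group_b = rng.normal([2.0, 8.0], 0.5, size=(20, 2))   # e.g. prefers broad coverage
data = np.vstack([group_a, group_b])
centroids, labels = kmeans(data, k=2)
```

With metric ratings, the recovered cluster centroids summarize each segment's preference profile, which is the kind of pattern a frequency table or cross-tabulation tends to obscure.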

Jarboe, G R; Gates, R H; McDaniel, C D

1990-01-01

194

Comparison of Five Segmentation Tools for {sup 18}F-Fluoro-Deoxy-Glucose-Positron Emission Tomography-Based Target Volume Definition in Head and Neck Cancer  

SciTech Connect

Purpose: Target-volume delineation for radiation treatment to the head and neck area traditionally is based on physical examination, computed tomography (CT), and magnetic resonance imaging. Additional molecular imaging with {sup 18}F-fluoro-deoxy-glucose (FDG)-positron emission tomography (PET) may improve definition of the gross tumor volume (GTV). In this study, five methods for tumor delineation on FDG-PET are compared with CT-based delineation. Methods and Materials: Seventy-eight patients with Stages II-IV squamous cell carcinoma of the head and neck area underwent coregistered CT and FDG-PET. The primary tumor was delineated on CT, and five PET-based GTVs were obtained: visual interpretation, applying an isocontour of a standardized uptake value of 2.5, using a fixed threshold of 40% and 50% of the maximum signal intensity, and applying an adaptive threshold based on the signal-to-background ratio. Absolute GTV volumes were compared, and overlap analyses were performed. Results: The GTV method of applying an isocontour of a standardized uptake value of 2.5 failed to provide successful delineation in 45% of cases. For the other PET delineation methods, volume and shape of the GTV were influenced heavily by the choice of segmentation tool. On average, all threshold-based PET-GTVs were smaller than on CT. Nevertheless, PET frequently detected significant tumor extension outside the GTV delineated on CT (15-34% of PET volume). Conclusions: The choice of segmentation tool for target-volume definition of head and neck cancer based on FDG-PET images is not trivial because it influences both volume and shape of the resulting GTV. With adequate delineation, PET may add significantly to CT- and physical examination-based GTV definition.
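The threshold-based delineation methods named above (an absolute SUV 2.5 isocontour, and 40% or 50% of the maximum signal intensity) can be sketched on a synthetic uptake volume. Everything below is illustrative, not the study's implementation:

```python
import numpy as np

def threshold_gtv(suv, frac=None, suv_cut=None):
    """Binary GTV mask from a SUV volume.

    frac:    fraction of the maximum SUV (e.g. 0.40 or 0.50)
    suv_cut: absolute SUV isocontour (e.g. 2.5)
    """
    if suv_cut is not None:
        return suv >= suv_cut
    return suv >= frac * suv.max()

# Synthetic "tumor": smooth bright blob on a low-uptake background
z, y, x = np.ogrid[-16:16, -16:16, -16:16]
r2 = x**2 + y**2 + z**2
suv = 10.0 * np.exp(-r2 / 32.0) + 1.0

gtv_suv25 = threshold_gtv(suv, suv_cut=2.5)   # SUV 2.5 isocontour
gtv_40 = threshold_gtv(suv, frac=0.40)        # 40% of maximum intensity
gtv_50 = threshold_gtv(suv, frac=0.50)        # 50% of maximum intensity
```

Even on this toy phantom the three masks differ in volume (the SUV 2.5 contour is the largest, the 50%-of-max contour the smallest), which is the abstract's central point: the choice of segmentation tool changes both volume and shape of the GTV.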

Schinagl, Dominic A.X. [Department of Radiation Oncology, Radboud University Nijmegen Medical Centre, Nijmegen (Netherlands)], E-mail: d.schinagl@rther.umcn.nl; Vogel, Wouter V. [Department of Nuclear Medicine, Radboud University Nijmegen Medical Centre, Nijmegen (Netherlands); Hoffmann, Aswin L. [Department of Radiation Oncology, Radboud University Nijmegen Medical Centre, Nijmegen (Netherlands); Dalen, Jorn A. van; Oyen, Wim J. [Department of Nuclear Medicine, Radboud University Nijmegen Medical Centre, Nijmegen (Netherlands); Kaanders, Johannes H.A.M. [Department of Radiation Oncology, Radboud University Nijmegen Medical Centre, Nijmegen (Netherlands)

2007-11-15

195

ICA-Based Segmentation of the Brain on Perfusion Data.  

National Technical Information Service (NTIS)

An Independent Component Analysis (ICA) based segmentation technique is presented allowing the quantitative assessment of cerebral blood volume (CBV), cerebral blood flow (CBF) and mean transit time (MTT) from dynamic susceptibility contrast magnetic reso...

T. A. Tasciyan C. F. Beckmann E. D. Morris S. M. Smith

2001-01-01

196

Mean-Field Analysis of Recursive Entropic Segmentation of Biological Sequences  

NASA Astrophysics Data System (ADS)

Horizontal gene transfer in bacteria results in genomic sequences which are mosaic in nature. An important first step in the analysis of a bacterial genome would thus be to model the statistically nonstationary nucleotide or protein sequence with a collection of P stationary Markov chains, and partition the sequence of length N into M statistically stationary segments/domains. This can be done for Markov chains of order K = 0 using a recursive segmentation scheme based on the Jensen-Shannon divergence, where the unknown parameters P and M are estimated from a hypothesis testing/model selection process. In this talk, we describe how the Jensen-Shannon divergence can be generalized to Markov chains of order K > 0, as well as an algorithm optimizing the positions of a fixed number of domain walls. We then describe a mean field analysis of the generalized recursive Jensen-Shannon segmentation scheme, and show how most domain walls appear as local maxima in the divergence spectrum of the sequence, before highlighting the main problem associated with the recursive segmentation scheme, i.e. the strengths of the domain walls selected recursively do not decrease monotonically. This problem is especially severe in repetitive sequences, whose statistical signatures we will also discuss.
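For Markov order K = 0 (i.i.d. symbols), the Jensen-Shannon divergence used by the recursive segmentation scheme reduces to the entropy of the whole sequence minus the length-weighted entropies of the two halves; the recursion cuts at the position maximizing this divergence spectrum. A minimal sketch on a toy sequence:

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """Empirical Shannon entropy (nats) of a symbol sequence."""
    n = len(seq)
    return -sum((c / n) * math.log(c / n) for c in Counter(seq).values())

def js_divergence(seq, i):
    """Jensen-Shannon divergence for splitting seq at position i (K = 0)."""
    n, left, right = len(seq), seq[:i], seq[i:]
    return (shannon_entropy(seq)
            - (len(left) / n) * shannon_entropy(left)
            - (len(right) / n) * shannon_entropy(right))

def best_split(seq):
    """Position maximizing the divergence spectrum (one recursion step)."""
    return max(range(1, len(seq)), key=lambda i: js_divergence(seq, i))

# Two artificial stationary domains with different base composition;
# the true domain wall sits at position 30.
seq = "A" * 30 + "ATGC" * 15
wall = best_split(seq)
```

The domain wall appears as the local maximum of the divergence spectrum, as described above; generalizing to K > 0 replaces the symbol counts with transition counts.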

Cheong, Siew-Ann; Stodghill, Paul; Schneider, David; Myers, Christopher

2007-03-01

197

Analysis of TIN's for Dredged Volume Computation.  

National Technical Information Service (NTIS)

This paper will examine some of the commercially available software packages which can be used to create Triangulated Irregular Networks (TIN's) and compute dredge volumes from these TIN's. It will also give some guidance on selecting a package to use for...

J. Ruby

1994-01-01

198

Accurate 3D Left-Right Brain Hemisphere Segmentation in MR Images Based on Shape Bottlenecks and Partial Volume Estimation  

Microsoft Academic Search

Current automatic methods based on the mid-sagittal plane to segment the left and right human brain hemispheres in 3D magnetic resonance (MR) images simply use a planar surface. However, the two brain hemispheres in fact cannot be properly separated by just a simple plane. A novel automatic method to segment the left and right brain hemispheres in MR images is proposed in

Lu Zhao; Jussi Tohka; Ulla Ruotsalainen

2007-01-01

199

Analysis of gene expression levels in individual bacterial cells without image segmentation  

SciTech Connect

Highlights: ► We present a method for extracting gene expression data from images of bacterial cells. ► The method does not employ cell segmentation and does not require high magnification. ► Fluorescence and phase contrast images of the cells are correlated through the physics of phase contrast. ► We demonstrate the method by characterizing noisy expression of comX in Streptococcus mutans. -- Abstract: Studies of stochasticity in gene expression typically make use of fluorescent protein reporters, which permit the measurement of expression levels within individual cells by fluorescence microscopy. Analysis of such microscopy images is almost invariably based on a segmentation algorithm, where the image of a cell or cluster is analyzed mathematically to delineate individual cell boundaries. However segmentation can be ineffective for studying bacterial cells or clusters, especially at lower magnification, where outlines of individual cells are poorly resolved. Here we demonstrate an alternative method for analyzing such images without segmentation. The method employs a comparison between the pixel brightness in phase contrast vs fluorescence microscopy images. By fitting the correlation between phase contrast and fluorescence intensity to a physical model, we obtain well-defined estimates for the different levels of gene expression that are present in the cell or cluster. The method reveals the boundaries of the individual cells, even if the source images lack the resolution to show these boundaries clearly.

Kwak, In Hae; Son, Minjun [Physics Department, University of Florida, P.O. Box 118440, Gainesville, FL 32611-8440 (United States)] [Physics Department, University of Florida, P.O. Box 118440, Gainesville, FL 32611-8440 (United States); Hagen, Stephen J., E-mail: sjhagen@ufl.edu [Physics Department, University of Florida, P.O. Box 118440, Gainesville, FL 32611-8440 (United States)

2012-05-11

200

Automated segmentation of muscle and adipose tissue on CT images for human body composition analysis  

NASA Astrophysics Data System (ADS)

The ability to compute body composition in cancer patients lends itself to determining the specific clinical outcomes associated with fat and lean tissue stores. For example, a wasting syndrome of advanced disease associates with shortened survival. Moreover, certain tissue compartments represent sites for drug distribution and are likely determinants of chemotherapy efficacy and toxicity. CT images are abundant, but these cannot be fully exploited unless there exist practical and fast approaches for tissue quantification. Here we propose a fully automated method for segmenting muscle, visceral adipose tissue (VAT), and subcutaneous adipose tissue (SAT), taking the approach of shape modeling for the analysis of skeletal muscle. Muscle shape is represented using PCA-encoded Free Form Deformations with respect to a mean shape. The shape model is learned from manually segmented images and used in conjunction with a tissue appearance prior. VAT and SAT are segmented based on the final deformed muscle shape. In comparing the automatic and manual methods, coefficients of variation (COV) (1 - 2%) were similar to or smaller than inter- and intra-observer COVs reported for manual segmentation.

Chung, Howard; Cobzas, Dana; Birdsell, Laura; Lieffers, Jessica; Baracos, Vickie

2009-02-01

201

Analysis, design, and test of a graphite/polyimide Shuttle orbiter body flap segment  

NASA Technical Reports Server (NTRS)

For future missions, increases in Space Shuttle orbiter deliverable and recoverable payload weight capability may be needed. Such increases could be obtained by reducing the inert weight of the Shuttle. The application of advanced composites in orbiter structural components would make it possible to achieve such reductions. In 1975, NASA selected the orbiter body flap as a demonstration component for the Composite for Advanced Space Transportation Systems (CASTS) program. The progress made in 1977 through 1980 was integrated into a design of a graphite/polyimide (Gr/Pi) body flap technology demonstration segment (TDS). Aspects of composite body flap design and analysis are discussed, taking into account the direct-bond fibrous refractory composite insulation (FRCI) tile on Gr/Pi structure, Gr/Pi body flap weight savings, the body flap design concept, and composite body flap analysis. Details regarding the Gr/Pi technology demonstration segment are also examined.

Graves, S. R.; Morita, W. H.

1982-01-01

202

Segmentation of biological target volumes on multi-tracer PET images based on information fusion for achieving dose painting in radiotherapy.  

PubMed

Medical imaging plays an important role in radiotherapy. Dose painting consists in the application of a nonuniform dose prescription to a tumoral region, and is based on an efficient segmentation of biological target volumes (BTV). It is derived from PET images that highlight tumoral regions of enhanced glucose metabolism (FDG), cell proliferation (FLT), and hypoxia (FMiso). In this paper, a framework based on Belief Function Theory is proposed for BTV segmentation and for creating 3D parametric images for dose painting. We propose to take advantage of neighboring voxels for BTV segmentation, and also of multi-tracer PET images, using information fusion to create parametric images. The performance of BTV segmentation was evaluated on an anthropomorphic phantom and compared with two other methods. Quantitative results show the good performance of our method. It has been applied to data from five patients suffering from lung cancer. Parametric images show promising results by highlighting areas where a high frequency or dose escalation could be planned. PMID:23285594

Lelandais, Benoît; Gardin, Isabelle; Mouchard, Laurent; Vera, Pierre; Ruan, Su

2012-01-01

203

A multi-scale segmentation\\/object relationship modelling methodology for landscape analysis  

Microsoft Academic Search

Natural complexity can best be explored using spatial analysis tools based on concepts of landscape as process continuums that can be partially decomposed into objects or patches. We introduce a five-step methodology based on multi-scale segmentation and object relationship modelling. Hierarchical patch dynamics (HPD) is adopted as the theoretical framework to address issues of heterogeneity, scale, connectivity and quasi-equilibriums in

C. Burnett; Thomas Blaschke

2003-01-01

204

Automated lung segmentation in magnetic resonance images  

NASA Astrophysics Data System (ADS)

Segmentation of the lungs within magnetic resonance (MR) scans is a necessary preprocessing step in the computerized analysis of thoracic MR images. This task is complicated by potentially significant cardiac and pulmonary motion artifacts, partial volume effect, and morphological deformation from disease. We have developed an automated segmentation method to account for these complications. First, the thorax is segmented using a threshold obtained from analysis of the cumulative gray-level histogram constructed along a diagonal line through the center of the image. Next, two separate lung-thresholded images are created. The first lung-thresholded image is created using histogram-based gray-level thresholding techniques applied to the segmented thorax. To include lung areas that may be adversely affected by artifact or disease, a second lung-thresholded image is created by applying a grayscale erosion operator to the first lung-thresholded image. After a rolling ball filter is applied to the lung contour to eliminate non-lung pixels from the thresholded lung regions, a logical OR operation is used to combine the two lung-thresholded images into the final segmented lung regions. Modifications to this approach were required to properly segment sections in the lung bases. In a preliminary evaluation, the automated method was applied to 10 MR scans, and an observer evaluated the segmented lung regions using a five-point scale ("highly accurate segmentation" to "highly inaccurate segmentation"). Eighty-five percent of the segmented lung regions were rated as highly or moderately accurate.
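The thresholding-and-combination pipeline described (diagonal-profile threshold, an erosion-derived second image, logical OR) can be caricatured on a synthetic slice. This is a toy sketch under stated simplifications: the diagonal threshold is reduced to the median of the diagonal profile, and a binary shift-based erosion stands in for the paper's grayscale erosion and rolling ball filter:

```python
import numpy as np

def diagonal_threshold(img):
    """Gray-level threshold from the profile along the main diagonal
    (simplified here to the median of that profile)."""
    return np.median(np.diagonal(img))

def binary_erosion(mask):
    """3x3 cross erosion via array shifts (a crude stand-in for the
    grayscale erosion operator used in the paper)."""
    m = mask.copy()
    for ax in (0, 1):
        for shift in (1, -1):
            m &= np.roll(mask, shift, axis=ax)
    return m

# Synthetic thorax slice: bright soft tissue, two dark "lungs"
img = np.full((64, 64), 200.0)
img[16:48, 8:28] = 20.0    # left lung
img[16:48, 36:56] = 20.0   # right lung

lung1 = img < diagonal_threshold(img)   # first lung-thresholded image
lung2 = binary_erosion(lung1)           # second image derived by erosion
lungs = lung1 | lung2                   # combined with a logical OR
```

On this phantom the diagonal profile crosses both lungs, so its median separates lung from soft tissue; on real MR data the cumulative-histogram analysis described above replaces this shortcut.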

Sensakovic, William F.; Armato, Samuel G., III; Starkey, Adam

2005-04-01

205

Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms  

PubMed Central

Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as “gold standard.” Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. 
Computerized CT liver volumetry would require substantially less completion time (compared to an average of 39 min per case by manual segmentation). Conclusions: The computerized liver extraction scheme provides an efficient and accurate way of measuring liver volumes in CT.
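The volumetric comparison itself (mask volume in cc and percent volume error) is straightforward arithmetic once a binary segmentation exists. A sketch with toy masks; the voxel spacing is an assumed value, not the study's acquisition parameters:

```python
import numpy as np

def volume_cc(mask, spacing_mm=(0.7, 0.7, 2.5)):
    """Volume of a binary mask in cc, given voxel spacing in mm."""
    voxel_cc = np.prod(spacing_mm) / 1000.0   # mm^3 per voxel -> cc
    return mask.sum() * voxel_cc

def percent_volume_error(auto_mask, manual_mask, spacing_mm=(0.7, 0.7, 2.5)):
    """Percent volume error of an automatic mask against a manual one."""
    v_auto = volume_cc(auto_mask, spacing_mm)
    v_manual = volume_cc(manual_mask, spacing_mm)
    return 100.0 * abs(v_auto - v_manual) / v_manual

# Toy masks: the "automatic" liver slightly over-segments the "manual" one
manual = np.zeros((40, 40, 40), dtype=bool)
manual[5:35, 5:35, 5:35] = True
auto = manual.copy()
auto[5:35, 5:35, 35] = True   # one extra slab of voxels

pve = percent_volume_error(auto, manual)
```

Note that the voxel spacing cancels in the percent error, so that metric depends only on the voxel counts.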

Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu, Jianwu; Hori, Masatoshi

2010-01-01

206

Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms  

SciTech Connect

Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as "gold standard." Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. 
Computerized CT liver volumetry would require substantially less completion time (compared to an average of 39 min per case by manual segmentation). Conclusions: The computerized liver extraction scheme provides an efficient and accurate way of measuring liver volumes in CT.

Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi [Department of Radiology, University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637 (United States)

2010-05-15

207

Unsupervised segmentation of scenes containing vegetation (Forsythia) and soil by hierarchical analysis of bi-dimensional histograms  

Microsoft Academic Search

An unsupervised algorithm for the segmentation of scenes containing vegetation and soil is presented. It is based on a hierarchical analysis of bi-dimensional color histograms. Its performance proves as good as that obtained from an expert (manual) segmentation and from a neural network approach.

A. Clément; B. Vigouroux

2003-01-01

208

Level Set Hyperspectral Segmentation: Near-Optimal Speed Functions using Best Band Analysis and Scaled Spectral Angle Mapper  

Microsoft Academic Search

This paper presents a semi-automated supervised level set hyperspectral image segmentation algorithm. The proposed method uses near-optimal speed functions (which control the level set segmentation) that are composed of a spectral similarity term and a stopping term. The spectral similarity term is used to compare pixels to class training signatures and is based on an optimized best bands analysis (BBA)
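The spectral angle mapper term compares a pixel spectrum to a class training signature by the angle between them, which makes it insensitive to overall illumination scaling. A minimal sketch with hypothetical 4-band signatures (the "veg" and "water" vectors are made-up examples, not trained class signatures):

```python
import numpy as np

def spectral_angle(pixel, signature):
    """Spectral angle (radians) between a pixel spectrum and a class
    training signature; 0 means identical spectral direction."""
    cos = np.dot(pixel, signature) / (
        np.linalg.norm(pixel) * np.linalg.norm(signature))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical 4-band reflectance signatures
veg = np.array([0.05, 0.08, 0.45, 0.50])     # e.g. vegetation-like
water = np.array([0.10, 0.08, 0.04, 0.02])   # e.g. water-like
pixel = 1.3 * veg + 0.01                     # brightened copy of "veg"
```

A speed function built on such a term slows the level set front where the angle to the target signature is large, so the contour halts at class boundaries rather than at brightness changes.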

John E. Ball; L. M. Bruce

2006-01-01

209

Fully Bayesian inference for structural MRI: application to segmentation and statistical analysis of T2-hypointensities.  

PubMed

Aiming at iron-related T2-hypointensity, which is related to normal aging and neurodegenerative processes, we here present two practicable approaches, based on Bayesian inference, for preprocessing and statistical analysis of a complex set of structural MRI data. In particular, Markov Chain Monte Carlo methods were used to simulate posterior distributions. First, we developed a segmentation algorithm that uses outlier detection based on model checking techniques within a Bayesian mixture model. Second, we developed an analytical tool comprising a Bayesian regression model with smoothness priors (in the form of Gaussian Markov random fields) mitigating the necessity to smooth data prior to statistical analysis. For validation, we used simulated data and MRI data of 27 healthy controls (age: [Formula: see text]; range, [Formula: see text]). We first observed robust segmentation of both simulated T2-hypointensities and gray-matter regions known to be T2-hypointense. Second, simulated data and images of segmented T2-hypointensity were analyzed. We found not only robust identification of simulated effects but also a biologically plausible age-related increase of T2-hypointensity primarily within the dentate nucleus but also within the globus pallidus, substantia nigra, and red nucleus. Our results indicate that fully Bayesian inference can successfully be applied for preprocessing and statistical analysis of structural MRI data. PMID:23874537

Schmidt, Paul; Schmid, Volker J; Gaser, Christian; Buck, Dorothea; Bührlen, Susanne; Förschler, Annette; Mühlau, Mark

2013-01-01

210

Analysis of volume dependence of Grüneisen ratio  

NASA Astrophysics Data System (ADS)

Various models of the volume dependence of the Grüneisen ratio have been analyzed in the present study. The Sharma model [Mod. Phys. Lett. B 22/31 (2008) 3113] is found to be similar to that used by Nie [Phys. Stat. Sol. (b) 219 (2004) 241] on the basis of the approximation made by Jeanloz [J. Geophys. Res. 94 (1989) 5929]. The Nie expression is amended so that the resulting expression follows the constraint of high-pressure thermodynamics in the limit of infinite pressure. The newly developed relationship is applied successfully to materials for which experimental data are accessible, such as epsilon-iron, NaCl, Li, Na, and K.

Srivastava, S. K.; Sinha, Pallavi

2009-11-01

211

3-D segmentation of the rim and cup in spectral-domain optical coherence tomography volumes of the optic nerve head  

NASA Astrophysics Data System (ADS)

Glaucoma is a group of diseases which can cause vision loss and blindness due to gradual damage to the optic nerve. The ratio of the optic disc cup to the optic disc is an important structural indicator for assessing the presence of glaucoma. The purpose of this study is to develop and evaluate a method which can segment the optic disc cup and neuroretinal rim in spectral-domain OCT scans centered on the optic nerve head. Our method starts by segmenting 3 intraretinal surfaces using a fast multiscale 3-D graph search method. Based on one of the segmented surfaces, the retina of the OCT volume is flattened to have a consistent shape across scans and patients. Selected features derived from OCT voxel intensities and intraretinal surfaces were used to train a k-NN classifier that can determine which A-scans in the OCT volume belong to the background, optic disc cup and neuroretinal rim. Through 3-fold cross validation with a training set of 20 optic nerve head-centered OCT scans (10 right eye scans and 10 left eye scans from 10 glaucoma patients) and a testing set of 10 OCT scans (5 right eye scans and 5 left eye scans from 5 different glaucoma patients), segmentation results of the optic disc cup and rim for all 30 OCT scans were obtained. The average unsigned errors of the optic disc cup and rim were 1.155 +/- 1.391 pixels (0.035 +/- 0.042 mm) and 1.295 +/- 0.816 pixels (0.039 +/- 0.024 mm), respectively.
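The k-NN labeling of A-scans described above can be sketched as a majority vote over Euclidean nearest neighbors. The 2-D feature vectors below are hypothetical stand-ins for the OCT voxel-intensity and surface-derived features used in the study:

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Label a query feature vector by majority vote among its k
    nearest training samples (Euclidean distance)."""
    d = np.linalg.norm(train_x - query, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[counts.argmax()]

# Hypothetical 2-feature A-scan descriptors for the three classes:
# 0 = background, 1 = optic disc cup, 2 = neuroretinal rim
train_x = np.array([[0.1, 0.1], [0.2, 0.1], [0.1, 0.2],    # background
                    [0.9, 0.1], [0.8, 0.2], [0.9, 0.2],    # cup
                    [0.5, 0.9], [0.4, 0.8], [0.5, 0.8]])   # rim
train_y = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])

label = knn_predict(train_x, train_y, np.array([0.85, 0.15]))
```

Applied per A-scan over a flattened volume, such a classifier yields the cup/rim/background partition from which the cup-to-disc ratio can then be computed.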

Lee, Kyungmoo; Niemeijer, Meindert; Garvin, Mona K.; Kwon, Young H.; Sonka, Milan; Abràmoff, Michael D.

2009-02-01

212

Linked state machines: analysis. Volume 3  

SciTech Connect

The analysis of a linked state machine (LSM) is that process whereby its state or output behavior is determined. A method is developed for the staged (iterated) analysis of LSMs. This method is based on composition and reduction and partly avoids the "state space explosion" since reduction is applied at each stage. Reduction is a procedure for reducing the number of states in an LSM while retaining identical external behavior and similar internal behavior. As an example of analysis through composition and reduction, a small part of the IEEE Standard Digital Interface (IEEE Std 488-78) is analyzed. This analysis yielded an LSM with 8 states that is equivalent to the original LSM of 64 states. Two attributes of the analysis process, the possibility of limiting the number of states and of guiding reduction so that only a selected part of the LSM behavior is modeled, give promise that computer-aided analysis (and design) of LSMs will be feasible. 5 refs.

Knudsen, H.K.

1985-04-01

213

Analysis of Drosophila Segmentation Network Identifies a JNK Pathway Factor Overexpressed in Kidney Cancer  

PubMed Central

We constructed a large-scale functional network model in Drosophila melanogaster built around two key transcription factors involved in the process of embryonic segmentation. Analysis of the model allowed the identification of a new role for the ubiquitin E3 ligase complex factor SPOP. In Drosophila, the gene encoding SPOP is a target of segmentation transcription factors. Drosophila SPOP mediates degradation of the Jun-kinase phosphatase Puckered, thereby inducing TNF/Eiger-dependent apoptosis. In humans, we found that SPOP plays a conserved role in TNF-mediated JNK signaling and was highly expressed in 99% of clear cell renal cell carcinomas (RCC), the most prevalent form of kidney cancer. SPOP expression distinguished histological subtypes of RCC and facilitated identification of clear cell RCC as the primary tumor for metastatic lesions.

Liu, Jiang; Ghanim, Murad; Xue, Lei; Brown, Christopher D.; Iossifov, Ivan; Angeletti, Cesar; Hua, Sujun; Negre, Nicolas; Ludwig, Michael; Stricker, Thomas; Al-Ahmadie, Hikmat A.; Tretiakova, Maria; Camp, Robert L.; Perera-Alberto, Montse; Rimm, David L.; Xu, Tian; Rzhetsky, Andrey; White, Kevin P.

2009-01-01

214

Multiresolution analysis using wavelet, ridgelet, and curvelet transforms for medical image segmentation.  

PubMed

The experimental study presented in this paper is aimed at the development of an automatic image segmentation system for classifying regions of interest (ROI) in medical images obtained from different medical scanners such as PET, CT, or MRI. Multiresolution analysis (MRA) using wavelet, ridgelet, and curvelet transforms has been used in the proposed segmentation system. Classifying cancers in human organs from scanner output using shape or gray-level information is particularly challenging: organ shapes change through the different slices of a medical stack, and gray-level intensities overlap in soft tissues. The curvelet transform is a recent extension of the wavelet and ridgelet transforms that aims to deal with interesting phenomena occurring along curves. The curvelet transform has been tested on medical data sets, and results are compared with those obtained from the other transforms. Tests indicate that using curvelets significantly improves the classification of abnormal tissues in the scans and reduces the surrounding noise. PMID:21960988
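As a minimal illustration of the MRA idea, the sketch below computes one level of a 2-D Haar wavelet decomposition in plain NumPy. The Haar basis and the toy two-tissue image are stand-ins for the wavelet/ridgelet/curvelet transforms actually compared in the paper:

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar wavelet transform: returns the
    approximation (LL) and detail (LH, HL, HH) sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

img = np.zeros((8, 8)); img[:, 3:] = 1.0   # toy "two-tissue" image
LL, LH, HL, HH = haar2d(img)
# detail energy concentrates at the tissue boundary column pair
print(np.abs(LH).sum(axis=0))  # → [0. 2. 0. 0.]
```

Sub-band statistics like these detail energies are the kind of multiresolution features a classifier can use to separate tissue types.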

Alzubi, Shadi; Islam, Naveed; Abbod, Maysam

2011-01-01

215

Multiresolution Analysis Using Wavelet, Ridgelet, and Curvelet Transforms for Medical Image Segmentation  

PubMed Central

The experimental study presented in this paper is aimed at the development of an automatic image segmentation system for classifying regions of interest (ROI) in medical images obtained from different medical scanners such as PET, CT, or MRI. Multiresolution analysis (MRA) using wavelet, ridgelet, and curvelet transforms has been used in the proposed segmentation system. Classifying cancers in human organs from scanner output using shape or gray-level information is particularly challenging: organ shapes change through the different slices of a medical stack, and gray-level intensities overlap in soft tissues. The curvelet transform is a recent extension of the wavelet and ridgelet transforms that aims to deal with interesting phenomena occurring along curves. The curvelet transform has been tested on medical data sets, and results are compared with those obtained from the other transforms. Tests indicate that using curvelets significantly improves the classification of abnormal tissues in the scans and reduces the surrounding noise.

AlZubi, Shadi; Islam, Naveed; Abbod, Maysam

2011-01-01

216

Confined volume blasting experiments: Description and analysis  

SciTech Connect

A series of bench-scale blasting experiments was conducted to produce rubble beds for use in retorting experiments. The experiments consisted of blasting oil shale with explosives within a confined volume containing 25% void. A variety of blasting geometries was used to control the fragment size distribution and void distribution in the rubble. The series of well controlled tests provided excellent data for use in validating rock fragmentation models. Analyses of the experiments with PRONTO, a dynamic finite element computer code, and a newly developed fracturing model provided good agreement between code predictions and experimental measurements of fracture extent and fragment size. CAROM, a dynamic distinct element code developed to model rock motion during blasting, was used to model the fully fragmented tests. Calculations of the void distribution agreed well with experimentally measured values. 9 refs., 11 figs., 1 tab.

Gorham-Bergeron, E.; Kuszmaul, J.S.; Bickel, T.C.; Shirey, D.L.

1987-01-01

217

Linked State Machines: Analysis. Volume 3.  

National Technical Information Service (NTIS)

The analysis of a linked state machine (LSM) is that process whereby its state or output behavior is determined. A method is developed for the staged (iterated) analysis of LSMs. This method is based on composition and reduction and partly avoids the ''st...

H. K. Knudsen

1985-01-01

218

Correlation Between Hydrodynamic Volume, Density in Solution and Unperturbed Dimensions of Poly(Ester Urethane)s with Different Hard Segments  

Microsoft Academic Search

The dilute solution property of segmented poly(ester urethane)s obtained by the reaction of aromatic diisocyanates, 4,4′-methylene diphenylene diisocyanate, and 2,4-tolylene diisocyanate with poly(ethylene glycol)adipate and 4,4′-dihydroxydiethoxydiphenyl sulfone as chain extender, using a multistep polyaddition process, was studied by viscometry in N,N-dimethyl-formamide at 17–45°C. The correlation between hydrodynamic volume, coil density at a given concentration, and unperturbed dimension calculated from viscosity

Silvia Ioan; Mihaela Lupu; Doina Macocinschi

2005-01-01

219

Screening Analysis : Volume 1, Description and Conclusions.  

SciTech Connect

The SOR consists of three analytical phases leading to a Draft EIS. The first phase, Pilot Analysis, was performed for the purpose of testing the decision analysis methodology being used in the SOR. The Pilot Analysis is described later in this chapter. The second phase, Screening Analysis, examines all possible operating alternatives using a simplified analytical approach. It is described in detail in this and the next chapter. This document also presents the results of screening. The final phase, Full-Scale Analysis, will be documented in the Draft EIS and is intended to evaluate comprehensively the few best alternatives arising from the screening analysis. The purpose of screening is to analyze a wide variety of differing ways of operating the Columbia River system to test the reaction of the system to change. The many alternatives considered reflect the range of needs and requirements of the various river users and interests in the Columbia River Basin. While some of the alternatives might be viewed as extreme, the information gained from the analysis is useful in highlighting issues and conflicts in meeting operating objectives. Screening is also intended to develop a broad technical basis for evaluation, including regional experts, and to begin developing an evaluation capability for each river use that will support full-scale analysis. Finally, screening provides a logical method for examining all possible options and reaching a decision on a few alternatives worthy of full-scale analysis. An organizational structure was developed and staffed to manage and execute the SOR, specifically during the screening phase and the upcoming full-scale analysis phase. The organization involves ten technical work groups, each representing a particular river use. Several other groups exist to oversee or support the efforts of the work groups.

Bonneville Power Administration; Corps of Engineers; Bureau of Reclamation

1992-08-01

220

Laser power conversion system analysis, volume 1  

NASA Technical Reports Server (NTRS)

The orbit-to-orbit laser energy conversion system analysis established a mission model of satellites with various orbital parameters and average electrical power requirements ranging from 1 to 300 kW. The system analysis evaluated various conversion techniques, power system deployment parameters, power system electrical supplies, and other critical subsystems relative to various combinations of the mission model. The analyses show that the laser power system would not be competitive with current satellite power systems from weight, cost and development risk standpoints.

Jones, W. S.; Morgan, L. L.; Forsyth, J. B.; Skratt, J. P.

1979-01-01

221

MRI segmentation: Methods and applications  

Microsoft Academic Search

The current literature on MRI segmentation methods is reviewed. Particular emphasis is placed on the relative merits of single image versus multispectral segmentation, and supervised versus unsupervised segmentation methods. Image pre-processing and registration are discussed, as well as methods of validation. The application of MRI segmentation for tumor volume measurements during the course of therapy is presented here as an

L. P. Clarke; R. P. Velthuizen; M. A. Camacho; J. J. Heine; M. Vaidyanathan; L. O. Hall; R. W. Thatcher; M. L. Silbiger

1995-01-01

222

Tumor segmentation from breast magnetic resonance images using independent component texture analysis  

NASA Astrophysics Data System (ADS)

A new spectral signature analysis method for tumor segmentation in breast magnetic resonance images is presented. The proposed method, independent component texture analysis (ICTA), consists of three techniques: independent component analysis (ICA), entropy-based thresholding, and texture feature registration (TFR). ICTA was developed mainly to resolve the inconsistency in the results of independent components (ICs) caused by the random initial projection vector of ICA, and accordingly to determine the most likely IC. A series of experiments was conducted to compare and evaluate ICTA with principal component texture analysis, traditional ICA, traditional principal component analysis (PCA), fuzzy c-means, constrained energy minimization, and orthogonal subspace projection methods. The experimental results showed that ICTA had higher efficiency than existing methods.
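The entropy-based thresholding component can be illustrated with Kapur's maximum-entropy criterion, a common choice for this step; the paper does not specify this exact variant, so treat the sketch as an assumption:

```python
import numpy as np

def kapur_threshold(img, nbins=64):
    """Kapur's maximum-entropy threshold: pick the grey level that
    maximises the summed entropies of the two resulting classes."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, nbins):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h = -sum(x * np.log(x) for x in q0 if x > 0) \
            - sum(x * np.log(x) for x in q1 if x > 0)
        if h > best_h:
            best_h, best_t = h, t
    return edges[best_t]

# Toy bimodal "IC image": a dark background mode and a bright tumor mode.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.8, 0.05, 100)])
print(round(float(kapur_threshold(img)), 2))
```

The threshold is applied to the selected independent component to separate candidate tumor pixels from background before texture feature registration.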

Yang, Sheng-Chih; Huang, Chieh-Ling; Chang, Tsai-Rong; Lin, Chi-Yuan

2013-04-01

223

Reliable cell segmentation based on spectral phasor analysis of hyperspectral stimulated Raman scattering imaging data.  

PubMed

Hyperspectral stimulated Raman scattering (SRS) imaging has rapidly become an emerging tool for high content analyses of cell and tissue systems. The label-free nature of SRS imaging combined with its chemical specificity allows in situ and in vivo biochemical quantification at submicrometer resolution without sectioning and staining. Current hyperspectral SRS data analysis methods are based on either linear unmixing or multivariate analysis, which are not sensitive to small spectral variations and often provide obscure information on the cell composition. Here, we demonstrate a spectral phasor analysis method that allows fast and reliable cellular organelle segmentation of mammalian cells, without any a priori knowledge of their composition or basis spectra. We further show that, in combination with a branch-and-bound algorithm for optimal selection of a few wavenumbers, spectral phasor analysis provides a robust solution to label-free single cell analysis. PMID:24684208
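A minimal sketch of the phasor mapping itself, assuming the usual definition (the normalised first Fourier component of each spectrum); the two synthetic Raman bands below are hypothetical:

```python
import numpy as np

def spectral_phasor(spectra, harmonic=1):
    """Map each spectrum (one per row) to phasor coordinates (G, S):
    the intensity-normalised real and imaginary parts of its first
    Fourier component."""
    n = spectra.shape[1]
    x = np.arange(n)
    c = np.cos(2 * np.pi * harmonic * x / n)
    s = np.sin(2 * np.pi * harmonic * x / n)
    total = spectra.sum(axis=1)
    G = (spectra * c).sum(axis=1) / total
    S = (spectra * s).sum(axis=1) / total
    return G, S

# Two hypothetical Raman bands: pixels dominated by either band map to
# well-separated points in phasor space, with no basis spectra needed.
x = np.arange(32)
band_a = np.exp(-0.5 * ((x - 8) / 2.0) ** 2)
band_b = np.exp(-0.5 * ((x - 24) / 2.0) ** 2)
Ga, Sa = spectral_phasor(band_a[None, :])
Gb, Sb = spectral_phasor(band_b[None, :])
print(round(float(np.hypot(Ga - Gb, Sa - Sb)[0]), 2))
```

Segmentation then amounts to clustering pixels in the (G, S) plane, which is why no a priori basis spectra are required.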

Fu, Dan; Xie, X Sunney

2014-05-01

224

Analysis of Barium Clouds. Volume I.  

National Technical Information Service (NTIS)

Several aspects of the analysis of barium ion clouds are presented including ion cloud modeling, comparison of radar and optical data, and correlation of data with theory. A quantitative model has been developed from which various properties of barium ion...

B. Kivel; L. F. Cianciolo; L. M. Linson; S. Powers

1972-01-01

225

Introduction to Psychology and Leadership. Part Six; Authority and Responsibility. Segments III & IV, Volume VI-B.  

ERIC Educational Resources Information Center

The sixth volume of the introduction to psychology and leadership course (see the final reports which summarize the development project, EM 010 418, EM 010 419, and EM 010 484) concentrates on authority and responsibility and is presented in two separate documents. Like Volume One (EM 010 420), it is a self-instructional syndactic and linear text…

Westinghouse Learning Corp., Annapolis, MD.

226

Introduction to Psychology and Leadership. Part Eight; Senior-Subordinate Relationships. Segments IV, V, & VI, Volume VIII-B.  

ERIC Educational Resources Information Center

The eighth volume of the introduction to psychology and leadership course (see the final reports which summarize the development project, EM 010 418, EM 010 419, and EM 010 484) concentrates on senior-subordinate relationships, and is presented in two separate documents. Like Volume One (EM 010 420), this document is a self-instructional syndactic…

Westinghouse Learning Corp., Annapolis, MD.

227

Segmentation of Unstructured Datasets  

NASA Technical Reports Server (NTRS)

Datasets generated by computer simulations and experiments in Computational Fluid Dynamics tend to be extremely large and complex. It is difficult to visualize these datasets using standard techniques like Volume Rendering and Ray Casting. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This thesis explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and from Finite Element Analysis.
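The basic extraction of a coherent region from a scalar unstructured grid can be sketched as a threshold plus breadth-first traversal of the node adjacency; the six-node grid below is a toy stand-in for a CFD mesh:

```python
from collections import deque

def coherent_regions(values, adjacency, threshold):
    """Extract connected regions of an unstructured grid whose scalar
    value meets the threshold, by BFS over the node adjacency list."""
    seen, regions = set(), []
    for start in range(len(values)):
        if start in seen or values[start] < threshold:
            continue
        region, q = [], deque([start])
        seen.add(start)
        while q:
            n = q.popleft()
            region.append(n)
            for m in adjacency[n]:
                if m not in seen and values[m] >= threshold:
                    seen.add(m)
                    q.append(m)
        regions.append(sorted(region))
    return regions

# Toy 6-node grid: two high-value blobs separated by low-value nodes.
vals = [0.9, 0.8, 0.1, 0.2, 0.95, 0.85]
adj = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1, 5], 4: [2, 5], 5: [3, 4]}
print(coherent_regions(vals, adj, 0.5))  # → [[0, 1], [4, 5]]
```

Each returned region can then be quantified (volume, extent, mean value) instead of merely rendered.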

Bhat, Smitha

1996-01-01

228

Profiling the different needs and expectations of patients for population-based medicine: a case study using segmentation analysis  

PubMed Central

Background This study illustrates an evidence-based method for the segmentation analysis of patients that could greatly improve the approach to population-based medicine, by filling a gap in the empirical analysis of this topic. Segmentation facilitates individual patient care in the context of the culture, health status, and the health needs of the entire population to which that patient belongs. Because many health systems are engaged in developing better chronic care management initiatives, patient profiles are critical to understanding whether some patients can move toward effective self-management and can play a central role in determining their own care, which fosters a sense of responsibility for their own health. A review of the literature on patient segmentation provided the background for this research. Method First, we conducted a literature review on patient satisfaction and segmentation to build a survey. Then, we performed 3,461 surveys of outpatient service users. The key structures on which the subjects’ perception of outpatient services was based were extrapolated using principal component factor analysis with varimax rotation. After the factor analysis, segmentation was performed through cluster analysis to better analyze the influence of individual attitudes on the results. Results Four segments were identified through factor and cluster analysis: the “unpretentious,” the “informed and supported,” the “experts” and the “advanced” patients. Their policy and managerial implications are outlined. Conclusions With this research, we provide the following: – a method for profiling patients based on common patient satisfaction surveys that is easily replicable in all health systems and contexts; – a proposal for segments based on the results of a broad-based analysis conducted in the Italian National Health System (INHS). Segments represent profiles of patients requiring different strategies for delivering health services.
Knowledge and analysis of these segments might support an effort to build an effective population-based medicine approach.
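The factor-then-cluster pipeline can be sketched with a plain two-cluster k-means on hypothetical factor scores; the actual study used principal component factor analysis with varimax rotation before clustering, so this shows only the clustering mechanics:

```python
import numpy as np

def two_means(X, iters=10):
    """Two-cluster k-means with deterministic init (first and last row):
    alternate nearest-centroid assignment and centroid recomputation."""
    centers = X[[0, len(X) - 1]].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(2)])
    return labels

# Hypothetical factor scores (information-seeking, self-management)
# for eight respondents forming two obvious profiles.
X = np.array([[-1.0, -0.9], [-1.1, -1.0], [-0.9, -1.1], [-1.0, -1.0],
              [1.0, 0.9], [1.1, 1.0], [0.9, 1.1], [1.0, 1.0]])
labels = two_means(X)
print(labels.tolist())  # → [0, 0, 0, 0, 1, 1, 1, 1]
```

In practice the number of clusters (four segments in the paper) is chosen from the data rather than fixed at two.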

2012-01-01

229

Dense nuclei segmentation based on graph cut and convexity-concavity analysis.  

PubMed

With the rapid advancement of 3D confocal imaging technology, more and more 3D cellular images will be available. However, robust and automatic extraction of nuclei shape may be hindered by a highly cluttered environment, as for example, in fly eye tissues. In this paper, we present a novel and efficient nuclei segmentation algorithm based on the combination of graph cut and convex shape assumption. The main characteristic of the algorithm is that it segments nuclei foreground using a graph-cut algorithm with our proposed new initialization method and splits overlapping or touching cell nuclei by simple convexity and concavity analysis. Experimental results show that the proposed algorithm can segment complicated nuclei clumps effectively in our fluorescent fruit fly eye images. Evaluation on a public hand-labelled 2D benchmark demonstrates substantial quantitative improvement over other methods. For example, the proposed method achieves a 3.2 Hausdorff distance decrease and a 1.8 decrease in the merged nuclei error per slice. PMID:24237576
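The concavity test at the heart of the splitting step can be sketched via the sign of the turn (cross product) at each contour vertex; the "peanut" outline below, with a single pinch point, is illustrative:

```python
import numpy as np

def concave_vertices(poly):
    """Indices of concave vertices of a simple polygon given in
    counter-clockwise order: the turn (cross product) is negative there."""
    p = np.asarray(poly, dtype=float)
    prev = np.roll(p, 1, axis=0)
    nxt = np.roll(p, -1, axis=0)
    v1 = p - prev
    v2 = nxt - p
    cross = v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0]
    return np.where(cross < 0)[0]

# Toy outline of two touching nuclei: one concave pinch at vertex 4.
outline = [(0, 0), (2, 0), (4, 0), (4, 2), (2, 1), (0, 2)]
print(concave_vertices(outline))  # → [4]
```

Pairing such concave points across a clump gives candidate split lines between touching nuclei.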

Qi, J

2014-01-01

230

Advanced finite element analysis of L4-L5 implanted spine segment  

NASA Astrophysics Data System (ADS)

In the paper finite element (FE) analysis of implanted lumbar spine segment is presented. The segment model consists of two lumbar vertebrae L4 and L5 and the prosthesis. The model of the intervertebral disc prosthesis consists of two metallic plates and a polyurethane core. Bone tissue is modelled as a linear viscoelastic material. The prosthesis core is made of a polyurethane nanocomposite. It is modelled as a non-linear viscoelastic material. The constitutive law of the core, derived in one of the previous papers, is implemented into the FE software Abaqus®. It was done by means of the User-supplied procedure UMAT. The metallic plates are elastic. The most important parts of the paper include: description of the prosthesis geometrical and numerical modelling, mathematical derivation of stiffness tensor and Kirchhoff stress and implementation of the constitutive model of the polyurethane core into Abaqus® software. Two load cases were considered, i.e. compression and stress relaxation under constant displacement. The goal of the paper is to numerically validate the constitutive law, which was previously formulated, and to perform advanced FE analyses of the implanted L4-L5 spine segment in which non-standard constitutive law for one of the model materials, i.e. the prosthesis core, is implemented.

Pawlikowski, Marek; Domański, Janusz; Suchocki, Cyprian

2014-03-01

231

Laser power conversion system analysis, volume 2  

NASA Technical Reports Server (NTRS)

The orbit-to-ground laser power conversion system analysis investigated the feasibility and cost effectiveness of converting solar energy into laser energy in space, and transmitting the laser energy to earth for conversion to electrical energy. The analysis included space laser systems with electrical outputs on the ground ranging from 100 to 10,000 MW. The space laser power system was shown to be feasible and a viable alternate to the microwave solar power satellite. The narrow laser beam provides many options and alternatives not attainable with a microwave beam.

Jones, W. S.; Morgan, L. L.; Forsyth, J. B.; Skratt, J. P.

1979-01-01

232

Automated system for ST segment and arrhythmia analysis in exercise radionuclide ventriculography  

SciTech Connect

A computer-based system for interpretation of the electrocardiogram (ECG) in the diagnosis of arrhythmia and ST segment abnormality in an exercise system is presented. The system was designed for inclusion in a gamma camera so the ECG diagnosis could be combined with the diagnostic capability of radionuclide ventriculography. Digitized data are analyzed in a beat-by-beat mode and a contextual diagnosis of underlying rhythm is provided. Each beat is assigned a beat code based on a combination of waveform analysis and RR interval measurement. The waveform analysis employs a new correlation coefficient formula which corrects for baseline wander. Selective signal averaging, in which only normal beats are included, is done for an improved signal-to-noise ratio prior to ST segment analysis. Template generation, R wave detection, QRS window size, baseline correction, and continuous updating of heart rate have all been automated. ST level and slope measurements are computed on signal-averaged data. Arrhythmia analysis of 13 passages of abnormal rhythm by computer was found to be correct in 98.4 percent of all beats. Twenty-five passages of exercise data, 1-5 min in length, were evaluated by the cardiologist; computer and cardiologist agreed on 95.8 percent of ST level measurements and 91.7 percent of ST slope measurements.
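The idea of a baseline-tolerant correlation for beat coding can be sketched as a mean-removed (Pearson-style) correlation against the template; the paper's exact formula is not reproduced here, and the sine-wave "beats" are toys:

```python
import numpy as np

def beat_similarity(beat, template):
    """Correlation coefficient between a beat and the template after
    removing each signal's mean, so a DC baseline offset does not
    depress the score (a simple stand-in for baseline correction)."""
    b = beat - beat.mean()
    t = template - template.mean()
    return float(np.dot(b, t) / (np.linalg.norm(b) * np.linalg.norm(t)))

t = np.sin(np.linspace(0, 2 * np.pi, 50))    # toy QRS template
normal = t + 0.3                             # same shape, shifted baseline
ectopic = np.r_[t[25:], t[:25]] + 0.3        # different morphology
print(round(beat_similarity(normal, t), 3))  # → 1.0
print(round(beat_similarity(ectopic, t), 3))
```

Thresholding this score, together with the RR interval, is what assigns each beat its code before only normal beats enter the signal average.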

Hsia, P.W.; Jenkins, J.M.; Shimoni, Y.; Gage, K.P.; Santinga, J.T.; Pitt, B.

1986-06-01

233

Community Analysis System. Volume I: Technical Description.  

National Technical Information Service (NTIS)

The Community Analysis System consists of a set of computer programs built around a single data base structure. Variations from city to city are in the numbers specified in the data base, thus allowing for one flexible set of programs. The programs now op...

S. A. Weber; W. L. Parsons; D. L. Birch

1977-01-01

234

Texture analysis improves level set segmentation of the anterior abdominal wall  

SciTech Connect

Purpose: The treatment of ventral hernias (VH) has been a challenging problem for medical care. Repair of these hernias is fraught with failure; recurrence rates ranging from 24% to 43% have been reported, even with the use of biocompatible mesh. Currently, computed tomography (CT) is used to guide intervention through expert, but qualitative, clinical judgments; notably, quantitative metrics based on image-processing are not used. The authors propose that image segmentation methods to capture the three-dimensional structure of the abdominal wall and its abnormalities will provide a foundation on which to measure geometric properties of hernias and surrounding tissues and, therefore, to optimize intervention. Methods: In this study with 20 clinically acquired CT scans on postoperative patients, the authors demonstrated a novel approach to geometric classification of the abdominal wall. The authors’ approach uses a texture analysis based on Gabor filters to extract feature vectors and follows a fuzzy c-means clustering method to estimate voxelwise probability memberships for eight clusters. The memberships estimated from the texture analysis are helpful to identify anatomical structures with inhomogeneous intensities. The membership was used to guide the level set evolution, as well as to derive an initial start close to the abdominal wall. Results: Segmentation results on abdominal walls were both quantitatively and qualitatively validated with surface errors based on manually labeled ground truth. Using texture, mean surface errors for the outer surface of the abdominal wall were less than 2 mm, with 91% of the outer surface less than 5 mm away from the manual tracings; errors were significantly greater (2–5 mm) for methods that did not use the texture. Conclusions: The authors’ approach establishes a baseline for characterizing the abdominal wall for improving VH care. 
Inherent texture patterns in CT scans are helpful to the tissue classification, and texture analysis can improve the level set segmentation around the abdominal region.
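The fuzzy c-means step can be sketched in plain NumPy with the standard FCM updates; the two-feature "Gabor" vectors are hypothetical stand-ins for the paper's filter-bank features, and a deterministic initialisation replaces the usual random one:

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=50):
    """Standard fuzzy c-means: returns soft memberships u[i, k] of
    sample i in cluster k, plus the cluster centers."""
    n = len(X)
    u = np.empty((n, c))
    u[:, 0] = np.linspace(0.9, 0.1, n)   # lean early samples to cluster 0
    u[:, 1:] = ((1.0 - u[:, 0]) / (c - 1))[:, None]
    for _ in range(iters):
        w = u ** m                                        # fuzzified weights
        centers = (w.T @ X) / w.sum(axis=0)[:, None]      # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))                     # membership update
        u = inv / inv.sum(axis=1, keepdims=True)
    return u, centers

# Toy Gabor-like feature vectors for "wall" vs "background" voxels.
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.15, 0.15],
              [0.9, 0.8], [0.8, 0.9], [0.85, 0.85]])
u, centers = fuzzy_cmeans(X)
labels = u.argmax(axis=1)
print(labels.tolist())  # → [0, 0, 0, 1, 1, 1]
```

In the paper the soft memberships themselves, not just the hard labels, drive the level set initialisation and evolution.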

Xu, Zhoubing [Electrical Engineering, Vanderbilt University, Nashville, Tennessee 37235 (United States)]; Allen, Wade M. [Institute of Imaging Science, Vanderbilt University, Nashville, Tennessee 37235 (United States)]; Baucom, Rebeccah B.; Poulose, Benjamin K. [General Surgery, Vanderbilt University Medical Center, Nashville, Tennessee 37235 (United States)]; Landman, Bennett A. [Electrical Engineering, Vanderbilt University, Nashville, Tennessee 37235 and Institute of Imaging Science, Vanderbilt University, Nashville, Tennessee 37235 (United States)]

2013-12-15

235

Identification of regions of positive selection using Shared Genomic Segment analysis  

PubMed Central

We applied a shared genomic segment (SGS) analysis, incorporating an error model, to identify complete, or near complete, selective sweeps in the HapMap phase II data sets. This method detects heterozygous sharing across all individuals within a population, identifying regions in which all individuals have at least one allele in common. We identified multiple interesting regions, many of which are concordant with positive selection regions detected by previous population genetic tests. Others are suggested to be novel regions. Our findings illustrate the utility of SGS as a method for identifying regions of selection, and some of these regions have been proposed as candidate regions for harboring disease genes.
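A much-simplified version of the SGS sharing criterion, ignoring the error model: at each marker, test whether every individual carries a common allele, then collapse consecutive shared markers into runs:

```python
def shared_segments(genotypes):
    """Maximal runs of consecutive markers at which every individual
    carries at least one common allele (a simplified SGS criterion).
    genotypes[i][j] is the allele pair of individual i at marker j."""
    n_markers = len(genotypes[0])
    shared = []
    for j in range(n_markers):
        common = set(genotypes[0][j])
        for ind in genotypes[1:]:
            common &= set(ind[j])
        shared.append(bool(common))
    # collapse the boolean track into (start, end) runs
    runs, start = [], None
    for j, s in enumerate(shared + [False]):
        if s and start is None:
            start = j
        elif not s and start is not None:
            runs.append((start, j - 1))
            start = None
    return runs

# Toy data: three individuals, four markers; alleles coded as integers.
g = [[(1, 2), (1, 1), (2, 3), (4, 4)],
     [(2, 2), (1, 3), (3, 3), (4, 5)],
     [(2, 3), (1, 2), (1, 3), (5, 5)]]
print(shared_segments(g))  # → [(0, 2)]
```

The published method additionally scores run lengths against chance expectation and tolerates genotyping error, which this sketch omits.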

Cai, Zheng; Camp, Nicola J; Cannon-Albright, Lisa; Thomas, Alun

2011-01-01

236

Image segmentation and dynamic lineage analysis in single-cell fluorescence microscopy  

PubMed Central

An increasingly common component of studies in synthetic and systems biology is analysis of dynamics of gene expression at the single-cell level, a context that is heavily dependent on the use of time-lapse movies. Extracting quantitative data on the single-cell temporal dynamics from such movies remains a major challenge. Here, we describe novel methods for automating key steps in the analysis of single-cell, fluorescent images – segmentation and lineage reconstruction – to recognize and track individual cells over time. The automated analysis iteratively combines a set of extended morphological methods for segmentation, and uses a neighborhood-based scoring method for frame-to-frame lineage linking. Our studies with bacteria, budding yeast and human cells, demonstrate the portability and usability of these methods, whether using phase, bright field or fluorescent images. These examples also demonstrate the utility of our integrated approach in facilitating analyses of engineered and natural cellular networks in diverse settings. The automated methods are implemented in freely available, open-source software.
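Frame-to-frame lineage linking can be sketched as a greedy nearest-centroid assignment with a distance gate; the published method uses a richer neighbourhood-based score, so this is only a structural sketch with toy centroids:

```python
import numpy as np

def link_frames(prev_centroids, curr_centroids, max_dist=5.0):
    """Greedy frame-to-frame lineage link: each current-frame cell is
    linked to the nearest previous-frame cell within max_dist, else
    treated as new (link = -1)."""
    links = []
    for c in curr_centroids:
        d = np.linalg.norm(prev_centroids - c, axis=1)
        j = int(np.argmin(d))
        links.append(j if d[j] <= max_dist else -1)
    return links

# Two cells in the previous frame; the third current cell is new.
prev = np.array([[10.0, 10.0], [30.0, 12.0]])
curr = np.array([[11.0, 10.5], [29.0, 13.0], [70.0, 70.0]])
print(link_frames(prev, curr))  # → [0, 1, -1]
```

Division events would appear as two current cells linking to the same previous cell, which a full lineage reconstruction must disambiguate.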

Wang, Quanli; Niemi, Jarad; Tan, Chee-Meng; You, Lingchong; West, Mike

2009-01-01

237

Multi-Modal Glioblastoma Segmentation: Man versus Machine  

PubMed Central

Background and Purpose Reproducible segmentation of brain tumors on magnetic resonance images is an important clinical need. This study was designed to evaluate the reliability of a novel fully automated segmentation tool for brain tumor image analysis in comparison to manually defined tumor segmentations. Methods We prospectively evaluated preoperative MR images from 25 glioblastoma patients. Two independent expert raters performed manual segmentations. Automatic segmentations were performed using the Brain Tumor Image Analysis software (BraTumIA). In order to study the different tumor compartments, the complete tumor volume TV (enhancing part plus non-enhancing part plus necrotic core of the tumor), the TV+ (TV plus edema) and the contrast enhancing tumor volume CETV were identified. We quantified the overlap between manual and automated segmentation by calculation of diameter measurements as well as the Dice coefficients, the positive predictive values, sensitivity, relative volume error and absolute volume error. Results Comparison of automated versus manual extraction of 2-dimensional diameter measurements showed no significant difference (p = 0.29). Comparison of automated versus manual segmentation of volumetric segmentations showed significant differences for TV+ and TV (p<0.05) but no significant differences for CETV (p>0.05) with regard to the Dice overlap coefficients. Spearman's rank correlation coefficients (ρ) of TV+, TV and CETV showed highly significant correlations between automatic and manual segmentations. Tumor localization did not influence the accuracy of segmentation. Conclusions In summary, we demonstrated that BraTumIA supports radiologists and clinicians by providing accurate measures of cross-sectional diameter-based tumor extensions. The automated volume measurements were comparable to manual tumor delineation for CETV tumor volumes, and outperformed inter-rater variability for overlap and sensitivity.
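The Dice overlap used to compare automatic and manual masks is straightforward to compute; the 6x6 masks below are toys:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((6, 6), int); auto[1:5, 1:5] = 1      # 16 voxels
manual = np.zeros((6, 6), int); manual[2:6, 2:6] = 1  # 16 voxels, shifted
print(round(dice(auto, manual), 4))  # → 0.5625
```

The same masks also yield the positive predictive value and sensitivity reported in the study (intersection divided by the automatic or manual volume, respectively).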

Pica, Alessia; Schucht, Philippe; Beck, Jurgen; Verma, Rajeev Kumar; Slotboom, Johannes; Reyes, Mauricio; Wiest, Roland

2014-01-01

238

Old document image segmentation using the autocorrelation function and multiresolution analysis  

NASA Astrophysics Data System (ADS)

Recent progress in the digitization of heterogeneous collections of ancient documents has raised new challenges in information retrieval in digital libraries and document layout analysis. Therefore, in order to control the quality of historical document image digitization and to meet the need for a characterization of their content using intermediate-level metadata (between image and document structure), we propose a fast automatic layout segmentation of old document images based on five descriptors. These descriptors, based on the autocorrelation function, are obtained by multiresolution analysis and used afterwards in a specific clustering method. The method proposed in this article has the advantage that it is performed without any hypothesis on the document structure, either about the document model (physical structure) or the typographical parameters (logical structure). It is also parameter-free, since it automatically adapts to the image content. In this paper, we first detail our proposal to characterize the content of old documents by extracting the autocorrelation features in the different areas of a page and at several resolutions. Then, we show that it is possible to automatically find the homogeneous regions defined by similar indices of autocorrelation, without knowledge of the number of clusters, using adapted hierarchical ascendant classification and consensus clustering approaches. To assess our method, we apply our algorithm to 316 old document images spanning six centuries (1200-1900) of French history, in order to demonstrate the performance of our proposal in terms of segmentation and characterization of heterogeneous corpus content. Moreover, we define a new evaluation metric, the homogeneity measure, which aims at evaluating the segmentation and characterization accuracy of our methodology. We obtain a mean homogeneity accuracy of 85%. These results help to represent a document by a hierarchy of layout structure and content, and to define one or more signatures for each page, on the basis of a hierarchical representation of homogeneous blocks and their topology.
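The feature-extraction step described above (autocorrelation computed per page region at several resolutions) can be sketched in NumPy. The FFT-based autocorrelation and the 2x2 block-mean downsampling are generic choices assumed for illustration, not details taken from the paper:

```python
import numpy as np

def autocorrelation(img):
    """Normalized 2D autocorrelation via the Wiener-Khinchin theorem."""
    f = np.fft.fft2(img - img.mean())
    ac = np.fft.ifft2(f * np.conj(f)).real
    ac /= ac.flat[0]              # zero-lag value becomes 1
    return np.fft.fftshift(ac)    # move zero lag to the image center

def multiresolution_features(img, levels=3):
    """Autocorrelation maps of a region at successively halved resolutions."""
    maps = []
    for _ in range(levels):
        maps.append(autocorrelation(img))
        # 2x2 block-mean downsampling to the next (coarser) resolution
        h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
        img = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return maps
```

Descriptors for the clustering stage (e.g., dominant orientation of each map) would then be derived from these maps; the abstract does not specify the five descriptors, so none are reproduced here.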

Mehri, Maroua; Gomez-Krämer, Petra; Héroux, Pierre; Mullot, Rémy

2013-01-01

239

A zeptoliter volume meter for analysis of single protein molecules.  

PubMed

A central goal in bioanalytics is to determine the concentration of and interactions between biomolecules. Nanotechnology allows such analyses to be performed in a highly parallel, low-cost, and miniaturized fashion. Here we report on label-free volume, concentration, and mobility analysis of single protein molecules and nanoparticles during their diffusion through a subattoliter detection volume, confined by a 100 nm aperture in a thin gold film. A high concentration of small fluorescent molecules renders the aqueous solution in the aperture brightly fluorescent. Nonfluorescent analytes diffusing into the aperture displace the fluorescent molecules in the solution, leading to a decrease of the detected fluorescence signal, while analytes diffusing out of the aperture restore the fluorescence level. The resulting fluorescence fluctuations provide direct information on the volume, concentration, and mobility of the nonfluorescent analytes through fluctuation analysis in both time and amplitude. PMID:22149182
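The measurement principle, as stated, is that a nonfluorescent analyte dims the signal in proportion to the volume fraction of dye it displaces. Under that idealized uniform-dye assumption, and with purely illustrative numbers (not values from the paper), the volume estimate is:

```python
def displaced_volume(f_baseline, f_dip, v_detection):
    """Analyte volume from the fractional fluorescence dip it causes:
    dF / F0 = V_analyte / V_detection (idealized uniform-dye model)."""
    return (f_baseline - f_dip) / f_baseline * v_detection

# a 0.5% dip in an assumed 0.4 aL detection volume -> about 2 zeptoliters
v = displaced_volume(1000.0, 995.0, 0.4e-18)
```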

Sandén, Tor; Wyss, Romain; Santschi, Christian; Hassaïne, Ghérici; Deluz, Cédric; Martin, Olivier J F; Wennmalm, Stefan; Vogel, Horst

2012-01-11

240

A novel method for the measurement of linear body segment parameters during clinical gait analysis.  

PubMed

Clinical gait analysis is a valuable tool for the understanding of motion disorders and treatment outcomes. Most standard models used in gait analysis rely on predefined sets of body segment parameters that must be measured on each individual. Traditionally, these parameters are measured using calipers and tape measures. The process can be time consuming and is prone to several sources of error. This investigation explored a novel method for rapid recording of linear body segment parameters using magnetic-field-based digital calipers commonly used for a different purpose in prosthetics and orthotics. The digital method was found to be comparable to the traditional method in all linear measures, and data capture was significantly faster with the digital method, with mean time savings for 10 measurements of 2.5 min. Digital calipers only record linear distances, and were less accurate when diameters were used to approximate limb circumferences. Experience in measuring BSPs is important, as an experienced measurer was significantly faster than a graduate student and showed less difference between methods. Comparing measurement of adults vs. children showed greater differences with adults, and some method dependence. If the hardware is available, digital caliper measurement of linear BSPs is accurate and rapid. PMID:23602545

Geil, Mark D

2013-09-01

241

A fuzzy, nonparametric segmentation framework for DTI and MRI analysis: with applications to DTI-tract extraction.  

PubMed

This paper presents a novel fuzzy-segmentation method for diffusion tensor (DT) and magnetic resonance (MR) images. Typical fuzzy-segmentation schemes, e.g., those based on fuzzy C-means (FCM), incorporate Gaussian class models that are inherently biased towards ellipsoidal clusters characterized by a mean element and a covariance matrix. Tensors in fiber bundles, however, inherently lie on specific manifolds in Riemannian spaces. Unlike FCM-based schemes, the proposed method represents these manifolds using nonparametric data-driven statistical models. The paper describes a statistically sound (consistent) technique for nonparametric modeling in Riemannian DT spaces. The proposed method produces an optimal fuzzy segmentation by maximizing a novel information-theoretic energy in a Markov-random-field framework. Results on synthetic and real DT and MR images show that the proposed method provides information about the uncertainties in the segmentation decisions, which stem from imaging artifacts including noise, partial voluming, and inhomogeneity. By enhancing the nonparametric model to capture the spatial continuity and structure of the fiber bundle, we exploit the framework to extract the cingulum fiber bundle. Typical tractography methods for tract delineation, which incorporate thresholds on fractional anisotropy and fiber curvature to terminate tracking, can face serious problems arising from partial voluming and noise. For these reasons, tractography often fails to extract thin tracts with sharp changes in orientation, such as the cingulum. The results demonstrate that the proposed method extracts this structure significantly more accurately than tractography. PMID:18041267
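For reference, the FCM baseline that the abstract argues is biased toward ellipsoidal clusters can be sketched as follows; this is the generic textbook algorithm on Euclidean feature vectors, not the paper's Riemannian nonparametric method:

```python
import numpy as np

def fcm(X, k, m=2.0, iters=50, seed=0):
    """Standard fuzzy C-means on feature vectors X of shape (n, d)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), k))
    U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships
    for _ in range(iters):
        W = U ** m
        C = (W.T @ X) / W.sum(axis=0)[:, None]   # membership-weighted centroids
        D = np.linalg.norm(X[:, None] - C[None], axis=2) + 1e-12
        U = 1.0 / D ** (2.0 / (m - 1.0))         # inverse-distance update
        U /= U.sum(axis=1, keepdims=True)
    return U, C
```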

Awate, Suyash P; Zhang, Hui; Gee, James C

2007-11-01

242

Exploratory trend and pattern analysis of 1981 through 1983 Licensee Event Report data. Appendices. Volume 2  

Microsoft Academic Search

This volume contains appendices supporting Volume 1, ''Exploratory Trends and Patterns Analysis of 1981 through 1983 Licensee Event Report Data.'' Together, Volumes 1 and 2 document research performed for the United States Nuclear Regulatory Commission's Office for Analysis and Evaluation of Operational Data (AEOD) as a part of its Trends and Patterns Analysis of Operational Data Program. Volume 1 contains

O. V. Hester; M. R. Groh; F. G. Farmer

1986-01-01

243

Introduction to Psychology and Leadership. Part Six; Authority and Responsibility. Segments I & II, Volume VI-A.  

ERIC Educational Resources Information Center

The sixth volume of the introduction to psychology and leadership course (see the final reports which summarize the development project, EM 010 418, EM 010 419, and EM 010 484) concentrates on authority and responsibility and is presented in two separate documents. It is a self-instructional linear text with information and quizzes. EM 010 433 is…

Westinghouse Learning Corp., Annapolis, MD.

244

Introduction to Psychology and Leadership. Part Eight; Senior-Subordinate Relationships. Segments I, II, & III, Volume VIII-A.  

ERIC Educational Resources Information Center

The eighth volume of the introduction to psychology and leadership course (see the final reports which summarize the development project, EM 010 418, EM 010 419, and EM 010 484) concentrates on senior-subordinate relationships, and is presented in two separate documents. This document is a linear text with information and quizzes. EM 010 438 is…

Westinghouse Learning Corp., Annapolis, MD.

245

A Bayesian Learning Application to Automated Tumour Segmentation for Tissue Microarray Analysis  

NASA Astrophysics Data System (ADS)

Tissue microarray (TMA) is a high-throughput analysis tool used to identify new diagnostic and prognostic markers in human cancers. However, standard automated methods for tumour detection on routine histochemical images for TMA construction are underdeveloped. This paper presents an MRF-based Bayesian learning system for automated tumour cell detection in routine histochemical virtual slides to assist TMA construction. The experimental results show that the proposed method is able to achieve 80% accuracy on average by pixel-based quantitative performance evaluation, which compares the automated segmentation outputs with the manually marked ground truth data. The presented technique greatly reduces labor-intensive workloads for pathologists, substantially speeds up the process of TMA construction, and allows further exploration of fully automated TMA analysis.
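The pixel-based quantitative evaluation mentioned reduces, in its simplest form, to per-pixel agreement between the automated output and the marked ground truth; a minimal sketch (function name assumed):

```python
import numpy as np

def pixel_accuracy(seg, gt):
    """Fraction of pixels on which a binary segmentation matches ground truth."""
    seg = np.asarray(seg, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    return float((seg == gt).mean())
```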

Wang, Ching-Wei

246

Design and analysis of modules for segmented X-ray optics  

NASA Astrophysics Data System (ADS)

Lightweight and high resolution mirrors are needed for future space-based X-ray telescopes to achieve advances in high-energy astrophysics. The slumped glass mirror technology in development at NASA GSFC aims to build X-ray mirror modules with an area to mass ratio of ~17 cm2/kg at 1 keV and a resolution of 10 arc-sec Half Power Diameter (HPD) or better at an affordable cost. As the technology nears the performance requirements, additional engineering effort is needed to ensure the modules are compatible with space-flight. This paper describes Flight Mirror Assembly (FMA) designs for several X-ray astrophysics missions studied by NASA and defines generic driving requirements and subsequent verification tests necessary to advance technology readiness for mission implementation. The requirement to perform X-ray testing in a horizontal beam, based on the orientation of existing facilities, is particularly burdensome on the mirror technology, necessitating mechanical over-constraint of the mirror segments and stiffening of the modules in order to prevent self-weight deformation errors from dominating the measured performance. This requirement, in turn, drives the mass and complexity of the system while limiting the testable angular resolution. Design options for a vertical X-ray test facility alleviating these issues are explored. An alternate mirror and module design using kinematic constraint of the mirror segments, enabled by a vertical test facility, is proposed. The kinematic mounting concept has significant advantages including potential for higher angular resolution, simplified mirror integration, and relaxed thermal requirements. However, it presents new challenges including low vibration modes and imperfections in kinematic constraint. Implementation concepts overcoming these challenges are described along with preliminary test and analysis results demonstrating the feasibility of kinematically mounting slumped glass mirror segments.

McClelland, Ryan S.; Biskach, Michael P.; Chan, Kai-Wing; Saha, Timo T.; Zhang, William W.

2012-09-01

247

Segmented Compressed Sampling for Analog-to-Information Conversion: Method and Performance Analysis  

Microsoft Academic Search

A new segmented compressed sampling (CS) method for analog-to-information conversion (AIC) is proposed. An analog signal measured by a number of parallel branches of mixers and integrators (BMIs), each characterized by a specific random sampling waveform, is first segmented in time into segments. Then the subsamples collected on different segments and different BMIs are reused so that a larger number of samples (at most )

Omid Taheri; Sergiy A. Vorobyov

2011-01-01

248

Safety analysis report for the Galileo Mission. Volume 2, book 2: Accident model document, Appendices  

NASA Astrophysics Data System (ADS)

This section of the Accident Model Document (AMD) presents the appendices which describe the various analyses that have been conducted for use in the Galileo Final Safety Analysis Report 2, Volume 2. Included in these appendices are the approaches, techniques, conditions and assumptions used in the development of the analytical models plus the detailed results of the analyses. Also included in these appendices are summaries of the accidents and their associated probabilities and environment models taken from the Shuttle Data Book (NSTS-08116), plus summaries of the several segments of the recent GPHS safety test program. The information presented in these appendices is used in Section 3.0 of the AMD to develop the Failure/Abort Sequence Trees (FASTs) and to determine the fuel releases (source terms) resulting from the potential Space Shuttle/IUS accidents throughout the missions.

1988-12-01

249

Final safety analysis report for the Galileo Mission: Volume 2, Book 2: Accident model document: Appendices  

SciTech Connect

This section of the Accident Model Document (AMD) presents the appendices which describe the various analyses that have been conducted for use in the Galileo Final Safety Analysis Report II, Volume II. Included in these appendices are the approaches, techniques, conditions and assumptions used in the development of the analytical models plus the detailed results of the analyses. Also included in these appendices are summaries of the accidents and their associated probabilities and environment models taken from the Shuttle Data Book (NSTS-08116), plus summaries of the several segments of the recent GPHS safety test program. The information presented in these appendices is used in Section 3.0 of the AMD to develop the Failure/Abort Sequence Trees (FASTs) and to determine the fuel releases (source terms) resulting from the potential Space Shuttle/IUS accidents throughout the missions.

Not Available

1988-12-15

250

Public Segmentation and Government–Public Relationship Building: A Cluster Analysis of Publics in the United States and 19 European Countries  

Microsoft Academic Search

The purposes of this study are (a) to suggest a model of public segmentation and (b) to examine each segment's level of trust in government. By using individuals’ cognitive perceptions of government and participation in social organizations, as well as media use and demographic characteristics, as public segmentation criteria, a cluster analysis of international survey datasets of the United States

Hyehyun Hong; Youngah Lee; Jongmin Park

2012-01-01

251

Theoretical analysis of segmented Wolter/LSM X-ray telescope systems  

NASA Technical Reports Server (NTRS)

The Segmented Wolter I/LSM X-ray Telescope, which consists of a Wolter I telescope with a tilted, off-axis convex spherical Layered Synthetic Microstructure (LSM) optic placed near the primary focus to accommodate multiple off-axis detectors, has been analyzed. The Skylab ATM Experiment S056 Wolter I telescope and the Stanford/MSFC nested Wolter-Schwarzschild x-ray telescope have been considered as the primary optics. A ray trace analysis has been performed to calculate the RMS blur circle radius, the point spread function (PSF), the meridional and sagittal line spread functions (LSF), and the full width at half maximum (FWHM) of the PSF to study the spatial resolution of the system. The effects on resolution of defocusing the image plane and of tilting and decentering the multilayer (LSM) optic have also been investigated to give the mounting and alignment tolerances of the LSM optic. Comparison has been made between the performance of the segmented Wolter/LSM optical system and that of the Spectral Slicing X-ray Telescope (SSXRT) systems.

Shealy, D. L.; Chao, S. H.

1986-01-01

252

Integration of 3D Scale-based Pseudo-enhancement Correction and Partial Volume Image Segmentation for Improving Electronic Colon Cleansing in CT Colonograpy  

PubMed Central

Orally administered tagging agents are usually used in CT colonography (CTC) to differentiate residual bowel content from native colonic structures. However, the high-density contrast agents tend to introduce a pseudo-enhancement (PE) effect on neighboring soft tissues and elevate their observed CT attenuation value toward that of the tagged materials (TMs), which may result in excessive electronic colon cleansing (ECC), since the pseudo-enhanced soft tissues are incorrectly identified as TMs. To address this issue, we integrated a 3D scale-based PE correction into our previous ECC pipeline based on maximum a posteriori expectation-maximization partial volume (PV) segmentation. The newly proposed ECC scheme takes into account both the PE and PV effects that commonly appear in CTC images. We evaluated the new scheme on 40 patient CTC scans, both qualitatively through display of segmentation results, and quantitatively through radiologists' blind scoring (human observer) and computer-aided detection (CAD) of colon polyps (computer observer). Performance of the presented algorithm showed consistent improvements over our previous ECC pipeline, especially for the detection of small polyps submerged in the contrast agents. The CAD results of polyp detection showed that 4 more submerged polyps were detected with the new ECC scheme than with the previous one.

Zhang, Hao; Li, Lihong; Zhu, Hongbin; Han, Hao; Song, Bowen; Liang, Zhengrong

2014-01-01

253

Integration of 3D scale-based pseudo-enhancement correction and partial volume image segmentation for improving electronic colon cleansing in CT colonograpy.  

PubMed

Orally administered tagging agents are usually used in CT colonography (CTC) to differentiate residual bowel content from native colonic structures. However, the high-density contrast agents tend to introduce a pseudo-enhancement (PE) effect on neighboring soft tissues and elevate their observed CT attenuation value toward that of the tagged materials (TMs), which may result in excessive electronic colon cleansing (ECC), since the pseudo-enhanced soft tissues are incorrectly identified as TMs. To address this issue, we integrated a 3D scale-based PE correction into our previous ECC pipeline based on maximum a posteriori expectation-maximization partial volume (PV) segmentation. The newly proposed ECC scheme takes into account both the PE and PV effects that commonly appear in CTC images. We evaluated the new scheme on 40 patient CTC scans, both qualitatively through display of segmentation results, and quantitatively through radiologists' blind scoring (human observer) and computer-aided detection (CAD) of colon polyps (computer observer). Performance of the presented algorithm showed consistent improvements over our previous ECC pipeline, especially for the detection of small polyps submerged in the contrast agents. The CAD results of polyp detection showed that 4 more submerged polyps were detected with the new ECC scheme than with the previous one. PMID:24699352

Zhang, Hao; Li, Lihong; Zhu, Hongbin; Han, Hao; Song, Bowen; Liang, Zhengrong

2014-01-01

254

Segmentation and volumetric measurement of renal cysts and parenchyma from MR images of polycystic kidneys using multi-spectral analysis method  

NASA Astrophysics Data System (ADS)

For segmentation and volume measurement of renal cysts and parenchyma from kidney MR images in subjects with autosomal dominant polycystic kidney disease (ADPKD), a semi-automated, multi-spectral analysis (MSA) method was developed and applied to T1- and T2-weighted MR images. In this method, renal cysts and parenchyma were characterized and segmented on the basis of their characteristic T1 and T2 signal intensity differences. The performance of the MSA segmentation method was tested on ADPKD phantoms and patients. Segmented renal cyst and parenchyma volumes were measured and compared with reference standard measurements: the fluid displacement method in the phantoms, and stereology and region-based thresholding methods in patients, respectively. Renal cysts and parenchyma were segmented successfully with the MSA method. The volume measurements obtained with MSA were in good agreement with the measurements by the other segmentation methods for both phantoms and subjects. The MSA method, however, was more time-consuming than the other segmentation methods because it required pre-segmentation, image registration, and tissue classification-determination steps.
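The abstract does not detail the tissue-classification step, so as a loose illustration only: treating each voxel as a paired (T1, T2) intensity feature, a simple two-channel clustering might look like the following (the k-means choice, the farthest-point initialization, and all names are assumptions, not the MSA method itself):

```python
import numpy as np

def cluster_t1t2(t1, t2, k=3, iters=25, seed=0):
    """Toy multi-spectral clustering: each voxel is a (T1, T2) feature pair."""
    X = np.column_stack([np.ravel(t1), np.ravel(t2)]).astype(float)
    rng = np.random.default_rng(seed)
    C = [X[rng.integers(len(X))]]          # farthest-point initialization
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in C], axis=0)
        C.append(X[np.argmax(d)])
    C = np.array(C)
    for _ in range(iters):                 # standard k-means refinement
        lab = np.argmin(((X[:, None] - C[None]) ** 2).sum(axis=2), axis=1)
        C = np.array([X[lab == j].mean(axis=0) if (lab == j).any() else C[j]
                      for j in range(k)])
    return lab.reshape(np.shape(t1))
```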

Bae, K. T.; Commean, P. K.; Brunsden, B. S.; Baumgarten, D. A.; King, B. F., Jr.; Wetzel, L. H.; Kenney, P. J.; Chapman, A. B.; Torres, V. E.; Grantham, J. J.; Guay-Woodford, L. M.; Tao, C.; Miller, J. P.; Meyers, C. M.; Bennett, W. M.

2008-04-01

255

Quantitative analysis of volume images: electron microscopic tomography of HIV  

NASA Astrophysics Data System (ADS)

Three-dimensional objects should be represented by 3D images. So far, most evaluation of images of 3D objects has been done visually, either by looking at slices through the volumes or by looking at 3D graphic representations of the data. In many applications a more quantitative evaluation would be valuable. Our application is the analysis of volume images of the causative agent of the acquired immune deficiency syndrome (AIDS), namely the human immunodeficiency virus (HIV), produced by electron microscopic tomography (EMT). A structural analysis of the virus is of importance. The representation of some of the interesting structural features will depend on the orientation and the position of the object relative to the digitization grid. We describe a method of defining the orientation and position of objects based on the moment of inertia of the objects in the volume image. In addition to a direct quantification of the 3D object, a quantitative description of the convex deficiency may provide valuable information about its geometrical properties. The convex deficiency is the volume object subtracted from its convex hull. We describe an algorithm for creating an enclosing polyhedron approximating the convex hull of an arbitrarily shaped object.
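The moment-of-inertia construction described above can be written down directly: take the second central moments of the object's voxel coordinates and use their eigenvectors as the object's principal axes (a standard formulation; function names are illustrative):

```python
import numpy as np

def principal_axes(vol):
    """Position and orientation of a binary 3D object from its inertia tensor."""
    pts = np.argwhere(vol)                # voxel coordinates of the object
    centroid = pts.mean(axis=0)
    d = pts - centroid
    cov = d.T @ d / len(pts)              # second central moment matrix
    evals, evecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    return centroid, evals, evecs         # evecs[:, -1] = axis of largest spread
```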

Nystroem, Ingela; Bengtsson, Ewert W.; Nordin, Bo G.; Borgefors, Gunilla

1994-05-01

256

Segmentation and segment connection of obstructed colon  

NASA Astrophysics Data System (ADS)

Segmentation of colon CT images is the main factor that inhibits automation of virtual colonoscopy. There are two main reasons that make efficient colon segmentation difficult. First, besides the colon, the small bowel, lungs, and stomach are also gas-filled organs in the abdomen. Second, peristalsis or residual feces often obstruct the colon, so that it consists of multiple gas-filled segments. In virtual colonoscopy, it is very useful to automatically connect the centerlines of these segments into a single colon centerline. Unfortunately, in some cases this is a difficult task. In this study a novel method for automated colon segmentation and connection of colon segments' centerlines is proposed. The method successfully combines features of segments, such as centerline and thickness, with information on main colon segments. The results on twenty colon cases show that the method performs well in cases of small obstructions of the colon. Larger obstructions are mostly also resolved properly, especially if they do not appear in the sigmoid part of the colon. Obstructions in the sigmoid part of the colon sometimes cause improper classification of the small bowel segments. If a segment is too small, it is classified as the small bowel segment. However, such misclassifications have little impact on colon analysis.

Medved, Mario; Truyen, Roel; Likar, Bostjan; Pernus, Franjo

2004-05-01

257

SD-OCT to differentiate traumatic submacular hemorrhage types using automatic three-dimensional segmentation analysis.  

PubMed

Traumatic submacular hemorrhage may present with significant decrease in vision and may have varying outcomes. Following injury, the hemorrhage can collect either between the neurosensory retina and the retinal pigment epithelium (RPE) or below the RPE. This differentiation may be important to prognosticate and to guide treatment. In two patients with post-traumatic submacular hemorrhage, Cirrus spectral-domain high-definition optical coherence tomography (OCT) (Carl Zeiss Meditec, Dublin, CA) was used to differentiate traumatic submacular hemorrhage types using automated three-dimensional segmentation analysis. Based on the OCT findings, the patient with sub-RPE bleed was subjected to pneumatic displacement. En face C-scan imaging just below the RPE allowed for the diagnosis of the exact location of the choroidal rupture that was masked by hemorrhage. PMID:21366180

Sampangi, Raju; Chandrakumar, H V; Somashekar, Sandhya E; Joshi, Gauri R; Ganesh, Sri

2011-01-01

258

Hemodynamic segmentation of MR brain perfusion images using independent component analysis, thresholding, and Bayesian estimation.  

PubMed

Dynamic-susceptibility-contrast MR perfusion imaging is a widely used imaging tool for in vivo study of cerebral blood perfusion. However, visualization of different hemodynamic compartments is less investigated. In this work, independent component analysis, thresholding, and Bayesian estimation were used to concurrently segment different tissues, i.e., artery, gray matter, white matter, vein and sinus, choroid plexus, and cerebral spinal fluid, with corresponding signal-time curves on perfusion images of five normal volunteers. Based on the spatiotemporal hemodynamics, sequential passages and microcirculation of contrast-agent particles in these tissues were decomposed and analyzed. Late and multiphasic perfusion, indicating the presence of contrast agents, was observed in the choroid plexus and the cerebral spinal fluid. An arterial input function was modeled using the concentration-time curve of the arterial area on the same slice, rather than remote slices, for the deconvolution calculation of relative cerebral blood flow. PMID:12704771
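Independent component analysis of the voxel time curves is the core decomposition step here. As a toy illustration only (a minimal symmetric FastICA in NumPy, not the authors' pipeline), mixed signal-time curves can be unmixed like this:

```python
import numpy as np

def fastica(X, n_comp, iters=200, seed=0):
    """Minimal symmetric FastICA (tanh contrast); rows of X are mixed signals."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(X @ X.T / X.shape[1])   # whitening from covariance
    K = (E[:, -n_comp:] / np.sqrt(d[-n_comp:])).T
    Z = K @ X
    W = rng.standard_normal((n_comp, n_comp))
    for _ in range(iters):
        G = np.tanh(W @ Z)
        W = G @ Z.T / Z.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W)
        W = U @ Vt                                # symmetric decorrelation
    return W @ Z                                  # estimated source curves
```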

Kao, Yi-Hsuan; Guo, Wan-Yuo; Wu, Yu-Te; Liu, Kuo-Ching; Chai, Wen-Yen; Lin, Chiao-Yuan; Hwang, Yi-Shuan; Jy-Kang Liou, Adrain; Wu, Hsiu-Mei; Cheng, Hui-Cheng; Yeh, Tzu-Chen; Hsieh, Jen-Chuen; Mu Huo Teng, Michael

2003-05-01

259

Segmentation of two- and three-dimensional data from electron microscopy using eigenvector analysis.  

PubMed

An automatic image segmentation method is used to improve processing and visualization of data obtained by electron microscopy. Exploiting affinity criteria between pixels, e.g., proximity and gray level similarity, in conjunction with an eigenvector analysis, the image is subdivided into areas which correspond to objects or meaningful regions. Extending a proposal by Shi and Malik (1997, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 731-737), the approach was adapted to the field of electron microscopy, especially to three-dimensional applications as needed by electron tomography. Theory, implementation, parameter setting, and results obtained with a variety of data are presented and discussed. The method turns out to be a powerful tool for visualization, with the potential for further improvement by developing and tuning new affinity criteria. PMID:12160706

Frangakis, Achilleas S; Hegerl, Reiner

2002-01-01

260

A new approach of graph cuts based segmentation for thermal IR image analysis  

NASA Astrophysics Data System (ADS)

Thermal infrared (IR) images are one of the most investigated and popular data modalities, and their usage has grown substantially. Instead of capturing radiometry in the visible spectrum, thermal images focus on the near- to mid-infrared spectrum, thereby producing a scene structure quite different from their visual counterparts. Traditionally, the spatial resolution of infrared images has also been lower than that of color images. These factors have contributed to the historically minimal automated analysis of thermal images, wherein intensity (which corresponds to heat content) and, to a lesser extent, spatiality formed the primary features of interest in an IR image. In this work we extend the automated processing of infrared images by using an advanced image analysis technique called graph cuts. Graph cuts have the unique property of providing a globally optimal segmentation, which has contributed to their popularity. We sidestep the extensive computational requirements of a graph-cuts procedure (which considers pixels as the vertices of a graph) by first performing an initial segmentation to obtain a short list of candidate regions. Features extracted from the candidate regions are then used as input for the graph-cut procedure. Appropriate energy functions are used to combine traditionally used graph-cut features like intensity with new salient features like gradients. The results show the effectiveness of this technique for automated processing of thermal infrared images, especially when compared with traditional techniques like intensity thresholding.

Hu, Xuezhang; Chakravarty, Sumit

2012-12-01

261

Segmentation of Fluorescence Microscopy Images for Quantitative Analysis of Cell Nuclear Architecture  

PubMed Central

Considerable advances in microscopy, biophysics, and cell biology have provided a wealth of imaging data describing the functional organization of the cell nucleus. Until recently, cell nuclear architecture has largely been assessed by subjective visual inspection of fluorescently labeled components imaged by the optical microscope. This approach is inadequate to fully quantify spatial associations, especially when the patterns are indistinct, irregular, or highly punctate. Accurate image processing techniques as well as statistical and computational tools are thus necessary to interpret this data if meaningful spatial-function relationships are to be established. Here, we have developed a thresholding algorithm, stable count thresholding (SCT), to segment nuclear compartments in confocal laser scanning microscopy image stacks to facilitate objective and quantitative analysis of the three-dimensional organization of these objects using formal statistical methods. We validate the efficacy and performance of the SCT algorithm using real images of immunofluorescently stained nuclear compartments and fluorescent beads as well as simulated images. In all three cases, the SCT algorithm delivers a segmentation that is far better than standard thresholding methods, and more importantly, is comparable to manual thresholding results. By applying the SCT algorithm and statistical analysis, we quantify the spatial configuration of promyelocytic leukemia nuclear bodies with respect to irregular-shaped SC35 domains. We show that the compartments are closer than expected under a null model for their spatial point distribution, and furthermore that their spatial association varies according to cell state. The methods reported are general and can readily be applied to quantify the spatial interactions of other nuclear compartments.
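The abstract names, but does not specify, the stable count thresholding (SCT) algorithm. One plausible reading, sketched below entirely as an assumption, is to sweep candidate thresholds, count the connected objects at each, and pick a threshold from the longest plateau where the count stays constant:

```python
import numpy as np

def count_objects(mask):
    """4-connected component count on a 2D boolean mask via flood fill."""
    mask = mask.copy()
    h, w = mask.shape
    n = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                n += 1
                stack = [(i, j)]
                mask[i, j] = False
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        v, u = y + dy, x + dx
                        if 0 <= v < h and 0 <= u < w and mask[v, u]:
                            mask[v, u] = False
                            stack.append((v, u))
    return n

def stable_count_threshold(img, thresholds):
    """Pick a threshold from the longest run of constant object count."""
    counts = [count_objects(img > t) for t in thresholds]
    best_len, best_i, run_start = 0, 0, 0
    for i in range(1, len(counts) + 1):
        if i == len(counts) or counts[i] != counts[run_start]:
            if i - run_start > best_len:
                best_len, best_i = i - run_start, run_start
            run_start = i
    return thresholds[best_i + best_len // 2]  # middle of the stable plateau
```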

Russell, Richard A.; Adams, Niall M.; Stephens, David A.; Batty, Elizabeth; Jensen, Kirsten; Freemont, Paul S.

2009-01-01

262

Segmentation propagation using a 3D embryo atlas for high-throughput MRI phenotyping: comparison and validation with manual segmentation.  

PubMed

Effective methods for high-throughput screening and morphometric analysis are crucial for phenotyping the increasing number of mouse mutants being generated. Automated segmentation propagation for embryo phenotyping is an emerging application that enables noninvasive and rapid quantification of substructure volumetric data for morphometric analysis. We present a study to assess and validate the accuracy of brain and kidney volumes generated via segmentation propagation in an ex vivo mouse embryo MRI atlas comprising three different groups against the current "gold standard", manual segmentation. Morphometric assessment showed good agreement between automatically and manually segmented volumes, demonstrating that it is possible to assess volumes for phenotyping a population of embryos using segmentation propagation with the same variation as manual segmentation. As part of this study, we have made our average atlas and segmented volumes freely available to the community for use in mouse embryo phenotyping studies. These MRI datasets and automated methods of analysis will be essential for meeting the challenge of high-throughput, automated embryo phenotyping. PMID:22556102
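Agreement of the kind assessed above is conventionally summarized with overlap and volume statistics; the abstract does not list the exact metrics, so the following is a generic sketch:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary label volumes."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def volume_ml(mask, voxel_mm3):
    """Structure volume in millilitres from voxel count and voxel size."""
    return np.asarray(mask, dtype=bool).sum() * voxel_mm3 / 1000.0
```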

Norris, Francesca C; Modat, Marc; Cleary, Jon O; Price, Anthony N; McCue, Karen; Scambler, Peter J; Ourselin, Sebastien; Lythgoe, Mark F

2013-03-01

263

Phenotypic analysis using very small volumes of blood.  

PubMed

Analysis of cell-surface phenotype of peripheral blood leukocytes is one of the most common applications of flow cytometry. In mouse research, the small size of the animal limits the amount of blood available. Standard staining methods using lysis of erythrocytes or gradient separation followed by repeated washing involve unavoidable losses of cells that generally limit analysis of blood to terminal methods. Time-course studies, therefore, require sacrifice of groups of mice at each time point. Thus, a method is needed that can be used with much smaller volumes of blood. This will allow serial sampling of the same animal over time, decreasing experimental variability and reducing animal use. The method described here is a no-lyse, no-wash method that uses triggering on a fluorescence parameter. The method allows routine analysis of the phenotype of peripheral blood leukocytes using whole-blood volumes of 20 µl per tube. The data are comparable with values from traditional methods requiring much higher volumes of blood. Due to interference by erythrocytes, light-scatter parameters are not usable with this method. This method has been used for time-course studies of peripheral blood populations in mice lasting as long as four weeks. PMID:20938921

Weaver, James L; McKinnon, Katherine; Germolec, Dori R

2010-10-01

264

Dose-Volume Differences for Computed Tomography and Magnetic Resonance Imaging Segmentation and Planning for Proton Prostate Cancer Therapy  

SciTech Connect

Purpose: To determine the influence of magnetic resonance imaging (MRI)- vs. computed tomography (CT)-based prostate and normal structure delineation on the dose to the target and organs at risk during proton therapy. Methods and Materials: Fourteen patients were simulated in the supine position using both CT and T2 MRI. The prostate, rectum, and bladder were delineated on both imaging modalities. The planning target volume (PTV) was generated from the delineated prostates with a 5-mm axial and 8-mm superior and inferior margin. Two plans were generated and analyzed for each patient: an MRI plan based on the MRI-delineated PTV, and a CT plan based on the CT-delineated PTV. Doses of 78 Gy equivalents (GE) were prescribed to the PTV. Results: Doses to normal structures were lower when MRI was used to delineate the rectum and bladder compared with CT: bladder V50 was 15.3% lower (p = 0.04), and rectum V50 was 23.9% lower (p = 0.003). Poor agreement on the definition of the prostate apex was seen between CT and MRI (p = 0.007). The CT-defined prostate apex was within 2 mm of the apex on MRI only 35.7% of the time. Coverage of the MRI-delineated PTV was significantly decreased with the CT-based plan: the minimum dose to the PTV was reduced by 43% (p < 0.001), and the PTV V99% was reduced by 11% (p < 0.001). Conclusions: Using MRI to delineate the prostate results in more accurate target definition and a smaller target volume compared with CT, allowing for improved target coverage and decreased doses to critical normal structures.
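
The margin arithmetic described above (a 5-mm axial and 8-mm superior-inferior expansion of the delineated prostate) can be sketched as an anisotropic morphological dilation of a binary mask. This is an illustrative reconstruction, not code from the study; the helper `expand_ptv`, the voxel spacing, and the toy mask are all invented for the example.

```python
# Sketch (not from the study): anisotropic PTV expansion of a binary
# prostate mask -- 5 mm axially (x, y) and 8 mm superior-inferior (z).
def expand_ptv(mask, spacing, margin_xy=5.0, margin_z=8.0):
    """Dilate a 3D boolean mask by an ellipsoidal margin given in mm."""
    nz, ny, nx = len(mask), len(mask[0]), len(mask[0][0])
    sz, sy, sx = spacing  # voxel size in mm along z, y, x
    rz, ry, rx = int(margin_z // sz), int(margin_xy // sy), int(margin_xy // sx)
    out = [[[False] * nx for _ in range(ny)] for _ in range(nz)]
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if not mask[z][y][x]:
                    continue
                for dz in range(-rz, rz + 1):
                    for dy in range(-ry, ry + 1):
                        for dx in range(-rx, rx + 1):
                            # ellipsoid membership test in millimetres
                            if ((dz * sz / margin_z) ** 2
                                    + (dy * sy / margin_xy) ** 2
                                    + (dx * sx / margin_xy) ** 2) <= 1.0:
                                zz, yy, xx = z + dz, y + dy, x + dx
                                if 0 <= zz < nz and 0 <= yy < ny and 0 <= xx < nx:
                                    out[zz][yy][xx] = True
    return out

# single-voxel "prostate" in a 5x5x5 grid of 2.5 mm isotropic voxels
mask = [[[False] * 5 for _ in range(5)] for _ in range(5)]
mask[2][2][2] = True
ptv = expand_ptv(mask, (2.5, 2.5, 2.5))
print(sum(v for plane in ptv for row in plane for v in row))  # expanded voxel count
```

A real planning system would run the expansion on full-resolution contour data; the point here is only the anisotropic margin logic.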

Yeung, Anamaria R. [Department of Radiation Oncology, University of Florida College of Medicine, Gainesville, FL (United States); Vargas, Carlos E. [University of Florida Proton Therapy Institute, Jacksonville, FL (United States)], E-mail: c2002@ufl.edu; Falchook, Aaron; Louis, Debbie C. [University of Florida Proton Therapy Institute, Jacksonville, FL (United States); Olivier, Kenneth [Department of Radiation Oncology, University of Florida College of Medicine, Gainesville, FL (United States); Keole, Sameer; Yeung, Daniel [University of Florida Proton Therapy Institute, Jacksonville, FL (United States); Mendenhall, Nancy P. [Department of Radiation Oncology, University of Florida College of Medicine, Gainesville, FL (United States); Li Zuofeng [University of Florida Proton Therapy Institute, Jacksonville, FL (United States)

2008-12-01

265

Segmentation with area constraints.  

PubMed

Image segmentation approaches typically incorporate weak regularity conditions such as boundary length or curvature terms, or use shape information. High-level information such as a desired area or volume, or a particular topology are only implicitly specified. In this paper we develop a segmentation method with explicit bounds on the segmented area. Area constraints allow for the soft selection of meaningful solutions, and can counteract the shrinking bias of length-based regularization. We analyze the intrinsic problems of convex relaxations proposed in the literature for segmentation with size constraints. Hence, we formulate the area-constrained segmentation task as a mixed integer program, propose a branch and bound method for exact minimization, and use convex relaxations to obtain the required lower energy bounds on candidate solutions. We also provide a numerical scheme to solve the convex subproblems. We demonstrate the method for segmentations of vesicles from electron tomography images. PMID:23084504
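
The hard area bound described above can be illustrated on a toy problem. The sketch below replaces the paper's branch-and-bound mixed integer program with exhaustive search over a tiny 1D signal (feasible only at this scale); the energy terms, class means, and regularization weight are invented for illustration.

```python
# Toy illustration of segmentation with an explicit area constraint:
# minimize unary cost + lam * boundary length over binary labelings,
# subject to area_min <= #foreground <= area_max. Exhaustive search
# stands in for the paper's branch-and-bound MIP.
from itertools import product

def segment_with_area(signal, fg_mean, bg_mean, lam, area_min, area_max):
    n = len(signal)
    best, best_e = None, float("inf")
    for labels in product((0, 1), repeat=n):
        area = sum(labels)
        if not (area_min <= area <= area_max):
            continue  # hard area constraint
        unary = sum((s - (fg_mean if l else bg_mean)) ** 2
                    for s, l in zip(signal, labels))
        boundary = sum(labels[i] != labels[i + 1] for i in range(n - 1))
        energy = unary + lam * boundary
        if energy < best_e:
            best, best_e = labels, energy
    return best

sig = [0.1, 0.2, 0.9, 0.8, 0.85, 0.15, 0.7, 0.1]
# the area bound (>= 4 foreground pixels) counteracts the shrinking bias
# of the boundary-length regularizer
result = segment_with_area(sig, 1.0, 0.0, 0.5, 4, 6)
print(result)  # -> (0, 0, 1, 1, 1, 1, 1, 0)
```

Note that the noisy pixel at 0.15 is absorbed into the foreground: paying its unary cost is cheaper than the two extra boundaries needed to exclude it.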

Niethammer, Marc; Zach, Christopher

2013-01-01

266

Differential Gene Expression Profiling and Biological Process Analysis in Proximal Nerve Segments after Sciatic Nerve Transection  

PubMed Central

After traumatic injury, peripheral nerves can spontaneously regenerate through highly sophisticated and dynamic processes that are regulated by multiple cellular elements and molecular factors. Despite evidence of morphological changes and of expression changes of a few regulatory genes, global knowledge of gene expression changes and related biological processes during peripheral nerve injury and regeneration is still lacking. Here we aimed to profile global mRNA expression changes in proximal nerve segments of adult rats after sciatic nerve transection. DNA microarray analysis showed that a large number of genes were differentially expressed at different time points (0.5 h–14 d) after nerve transection, exhibiting multiple distinct temporal expression patterns. The expression changes of several genes were further validated by quantitative real-time RT-PCR analysis. Gene ontology enrichment analysis was performed to decipher the biological processes involving the differentially expressed genes. Collectively, our results highlight the dynamic changes in important biological processes and the time-dependent expression of key regulatory genes after peripheral nerve injury. Interestingly, we report, for the first time, the presence of olfactory receptors in sciatic nerves. We hope this study provides a useful platform for in-depth study of peripheral nerve injury and regeneration at the molecular level.
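
The gene-ontology enrichment step mentioned above typically reduces to a hypergeometric tail test per GO term. A minimal sketch, with invented counts (not data from the study):

```python
# Hypergeometric enrichment test: probability of seeing >= k annotated
# genes among n differentially expressed genes, given K annotated genes
# in a genome of N. All counts below are illustrative.
from math import comb

def enrichment_p(N, K, n, k):
    """Upper-tail hypergeometric probability P(X >= k)."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# 12,000 genes, 300 carry the term; 40 of 500 regulated genes carry it
# (chance expectation is 500 * 300 / 12000 = 12.5 genes)
p = enrichment_p(12000, 300, 500, 40)
print(f"enrichment p-value: {p:.3g}")
```

Real analyses additionally correct such per-term p-values for multiple testing across all GO terms.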

Wang, Yongjun; Gu, Yun; Liu, Dong; Wang, Chunming; Ding, Guohui; Chen, Jianping; Liu, Jie; Gu, Xiaosong

2013-01-01

267

Identifying Like-Minded Audiences for Global Warming Public Engagement Campaigns: An Audience Segmentation Analysis and Tool Development  

PubMed Central

Background Achieving national reductions in greenhouse gas emissions will require public support for climate and energy policies and changes in population behaviors. Audience segmentation – a process of identifying coherent groups within a population – can be used to improve the effectiveness of public engagement campaigns. Methodology/Principal Findings In Fall 2008, we conducted a nationally representative survey of American adults (n = 2,164) to identify audience segments for global warming public engagement campaigns. By subjecting multiple measures of global warming beliefs, behaviors, policy preferences, and issue engagement to latent class analysis, we identified six distinct segments ranging in size from 7 to 33% of the population. These six segments formed a continuum, from a segment of people who were highly worried, involved and supportive of policy responses (18%), to a segment of people who were completely unconcerned and strongly opposed to policy responses (7%). Three of the segments (totaling 70%) were to varying degrees concerned about global warming and supportive of policy responses, two (totaling 18%) were unsupportive, and one was largely disengaged (12%), having paid little attention to the issue. Certain behaviors and policy preferences varied greatly across these audiences, while others did not. Using discriminant analysis, we subsequently developed 36-item and 15-item instruments that can be used to categorize respondents with 91% and 84% accuracy, respectively. Conclusions/Significance In late 2008, Americans supported a broad range of policies and personal actions to reduce global warming, although there was wide variation among the six identified audiences. To enhance the impact of campaigns, government agencies, non-profit organizations, and businesses seeking to engage the public can selectively target one or more of these audiences rather than address an undifferentiated general population.
Our screening instruments are available to assist in that process.

Maibach, Edward W.; Leiserowitz, Anthony; Roser-Renouf, Connie; Mertz, C. K.

2011-01-01

268

New monitors of intravascular volume: a comparison of arterial pressure waveform analysis and the intrathoracic blood volume  

Microsoft Academic Search

Objective: Two new monitoring techniques, the analysis of arterial pressure waveform during mechanical ventilation and the determination of intrathoracic blood volume, were evaluated for preload assessment in a model of graded hemorrhage. Design: 8 anesthetized dogs bled of 10, 20, and 30% of their blood volume, then retransfused and volume loaded with plasma expander. Central venous pressure (CVP), pulmonary

S. Preisman; U. Pfeiffer; N. Lieberman; A. Perel

1997-01-01

269

Segmental analysis of amphetamines in hair using a sensitive UHPLC-MS/MS method.  

PubMed

A sensitive and robust ultra high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) method was developed and validated for quantification of amphetamine, methamphetamine, 3,4-methylenedioxyamphetamine and 3,4-methylenedioxy methamphetamine in hair samples. Segmented hair (10 mg) was incubated in 2 M sodium hydroxide (80°C, 10 min) before liquid-liquid extraction with isooctane followed by centrifugation and evaporation of the organic phase to dryness. The residue was reconstituted in methanol:formate buffer pH 3 (20:80). The total run time was 4 min and after optimization of UHPLC-MS/MS parameters validation included selectivity, matrix effects, recovery, process efficiency, calibration model and range, lower limit of quantification, precision and bias. The calibration curve ranged from 0.02 to 12.5 ng/mg, and the recovery was between 62 and 83%. During validation the bias was less than ±7% and the imprecision was less than 5% for all analytes. In routine analysis, fortified control samples demonstrated an imprecision <13% and control samples made from authentic hair demonstrated an imprecision <26%. The method was applied to samples from a controlled study of amphetamine intake as well as forensic hair samples previously analyzed with an ultra high performance liquid chromatography time of flight mass spectrometry (UHPLC-TOF-MS) screening method. The proposed method was suitable for quantification of these drugs in forensic cases including violent crimes, autopsy cases, drug testing and re-granting of driving licences. This study also demonstrated that if hair samples are divided into several short segments, the time point for intake of a small dose of amphetamine can be estimated, which might be useful when drug facilitated crimes are investigated. PMID:24817045
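
The calibration and quality-control figures quoted above (calibration range, bias, imprecision) come from standard arithmetic, sketched below. The detector responses, QC replicates, and nominal concentration are invented for illustration, not taken from the paper.

```python
# Fit a straight calibration line by least squares, back-calculate QC
# replicates, and report bias and imprecision (CV%) -- the style of
# statistic quoted in method validation (all numbers invented).
from statistics import mean, stdev

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    mx, my = mean(x), mean(y)
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

conc = [0.02, 0.1, 0.5, 2.5, 12.5]        # calibrators, ng/mg
resp = [0.8, 4.1, 20.3, 99.0, 502.0]      # detector response
slope, intercept = fit_line(conc, resp)

qc_nominal = 2.0                          # ng/mg
qc_resp = [81.0, 79.5, 83.2, 80.1, 82.4]  # replicate responses
qc_conc = [(r - intercept) / slope for r in qc_resp]
bias = (mean(qc_conc) - qc_nominal) / qc_nominal * 100
cv = stdev(qc_conc) / mean(qc_conc) * 100
print(f"bias {bias:+.1f}%, imprecision (CV) {cv:.1f}%")
```

In practice calibration in this concentration range is often weighted (e.g., 1/x) because the response variance grows with concentration; the unweighted fit keeps the sketch short.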

Jakobsson, Gerd; Kronstrand, Robert

2014-06-01

270

Quantitative Analysis of the Drosophila Segmentation Regulatory Network Using Pattern Generating Potentials  

PubMed Central

Cis-regulatory modules that drive precise spatial-temporal patterns of gene expression are central to the process of metazoan development. We describe a new computational strategy to annotate genomic sequences based on their “pattern generating potential” and to produce quantitative descriptions of transcriptional regulatory networks at the level of individual protein-module interactions. We use this approach to convert the qualitative understanding of interactions that regulate Drosophila segmentation into a network model in which a confidence value is associated with each transcription factor-module interaction. Sequence information from multiple Drosophila species is integrated with transcription factor binding specificities to determine conserved binding site frequencies across the genome. These binding site profiles are combined with transcription factor expression information to create a model to predict module activity patterns. This model is used to scan genomic sequences for the potential to generate all or part of the expression pattern of a nearby gene, obtained from available gene expression databases. Interactions between individual transcription factors and modules are inferred by a statistical method to quantify a factor's contribution to the module's pattern generating potential. We use these pattern generating potentials to systematically describe the location and function of known and novel cis-regulatory modules in the segmentation network, identifying many examples of modules predicted to have overlapping expression activities. Surprisingly, conserved transcription factor binding site frequencies were as effective as experimental measurements of occupancy in predicting module expression patterns or factor-module interactions. Thus, unlike previous module prediction methods, this method predicts not only the location of modules but also their spatial activity pattern and the factors that directly determine this pattern. 
As databases of transcription factor specificities and in vivo gene expression patterns grow, analysis of pattern generating potentials provides a general method to decode transcriptional regulatory sequences and networks.
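
The innermost operation behind binding-site frequency profiles is a sliding-window scan of a sequence against a position weight matrix (PWM). The sketch below uses an invented 4-bp log-odds matrix and sequence, not the paper's factors or data.

```python
# Count binding sites: score each window of a sequence against a
# log-odds position weight matrix and keep those above a cutoff.
pwm = [  # hypothetical 4-bp factor; one dict per motif position
    {"A": 1.2, "C": -1.0, "G": -1.0, "T": -0.5},
    {"A": -1.0, "C": 1.1, "G": -0.8, "T": -1.0},
    {"A": 1.0, "C": -1.0, "G": -1.0, "T": -0.2},
    {"A": -0.9, "C": -0.9, "G": 1.3, "T": -1.0},
]

def count_sites(seq, pwm, threshold):
    w = len(pwm)
    hits = 0
    for i in range(len(seq) - w + 1):
        score = sum(col[base] for col, base in zip(pwm, seq[i:i + w]))
        if score >= threshold:
            hits += 1
    return hits

sites = count_sites("TTACAGGGACAGTTACAG", pwm, threshold=3.0)
print(sites)  # three ACAG matches
```

A pattern-generating-potential computation would aggregate such counts across aligned genomes and combine them with factor expression data; the scan above is only the first step.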

Richards, Adam; McCutchan, Michael; Wakabayashi-Ito, Noriko; Hammonds, Ann S.; Celniker, Susan E.; Kumar, Sudhir; Wolfe, Scot A.; Brodsky, Michael H.; Sinha, Saurabh

2010-01-01

271

Integrated segmentation of cellular structures  

NASA Astrophysics Data System (ADS)

Automatic segmentation of cellular structures is an essential step in image cytology and histology. Despite substantial progress, better automation and improvements in accuracy and adaptability to novel applications are needed. In applications utilizing multi-channel immuno-fluorescence images, challenges include misclassification of epithelial and stromal nuclei, irregular nuclei and cytoplasm boundaries, and over- and under-segmentation of clustered nuclei. Variations in image acquisition conditions and artifacts from nuclei and cytoplasm images often confound existing algorithms in practice. In this paper, we present a robust and accurate algorithm for jointly segmenting cell nuclei and cytoplasm using a combination of ideas to reduce the aforementioned problems. First, an adaptive process that includes top-hat filtering, Eigenvalues-of-Hessian blob detection and distance transforms is used to estimate the inverse illumination field and correct for intensity non-uniformity in the nuclei channel. Next, a minimum-error-thresholding based binarization process and seed-detection combining Laplacian-of-Gaussian filtering constrained by a distance-map-based scale selection is used to identify candidate seeds for nuclei segmentation. The initial segmentation using a local maximum clustering algorithm is refined using a minimum-error-thresholding technique. Final refinements include an artifact removal process specifically targeted at lumens and other problematic structures and a systemic decision process to reclassify nuclei objects near the cytoplasm boundary as epithelial or stromal. Segmentation results were evaluated using 48 realistic phantom images with known ground-truth. The overall segmentation accuracy exceeds 94%. The algorithm was further tested on 981 images of actual prostate cancer tissue. The artifact removal process worked in 90% of cases. The algorithm has now been deployed in a high-volume histology analysis application.
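
Of the steps listed, the minimum-error thresholding stage is compact enough to sketch. Below is a pure-Python Kittler-Illingworth criterion run on an invented 8-bin histogram; the pipeline described above would apply it to full image histograms.

```python
# Sketch of minimum-error (Kittler-Illingworth) thresholding, the
# binarization criterion named above, on a synthetic bimodal histogram.
import math

def min_error_threshold(hist):
    """Return t minimizing the Kittler-Illingworth criterion J(t)."""
    total = sum(hist)
    best_t, best_j = 0, float("inf")
    for t in range(len(hist) - 1):
        n1 = sum(hist[:t + 1])
        n2 = total - n1
        if n1 == 0 or n2 == 0:
            continue
        p1, p2 = n1 / total, n2 / total
        m1 = sum(i * hist[i] for i in range(t + 1)) / n1
        m2 = sum(i * hist[i] for i in range(t + 1, len(hist))) / n2
        v1 = sum(hist[i] * (i - m1) ** 2 for i in range(t + 1)) / n1
        v2 = sum(hist[i] * (i - m2) ** 2 for i in range(t + 1, len(hist))) / n2
        if v1 <= 0 or v2 <= 0:
            continue  # degenerate class, no Gaussian fit possible
        j = (1 + 2 * (p1 * math.log(math.sqrt(v1)) + p2 * math.log(math.sqrt(v2)))
               - 2 * (p1 * math.log(p1) + p2 * math.log(p2)))
        if j < best_j:
            best_t, best_j = t, j
    return best_t

# dark background mode (bins 0-2) and bright nuclei mode (bins 5-7)
hist = [40, 90, 50, 10, 5, 30, 70, 25]
t = min_error_threshold(hist)
print("split after bin", t)  # lands in the valley between the two modes
```

The criterion models each class as a Gaussian and picks the split minimizing the classification-error proxy J(t), which is why it needs the guard against zero-variance classes.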

Ajemba, Peter; Al-Kofahi, Yousef; Scott, Richard; Donovan, Michael; Fernandez, Gerardo

2011-03-01

272

Multi-Atlas Multi-Shape Segmentation of Fetal Brain MRI for Volumetric and Morphometric Analysis of Ventriculomegaly  

PubMed Central

The recent development of motion robust super-resolution fetal brain MRI holds out the potential for dramatic new advances in volumetric and morphometric analysis. Analysis based on volumetric and morphometric biomarkers of the developing fetal brain requires segmentation. Automatic segmentation of fetal brain MRI is challenging, however, due to the highly variable size and shape of the developing brain; possible structural abnormalities; and the relatively poor resolution of fetal MRI scans. To overcome these limitations, we present a novel, constrained, multi-atlas, multi-shape automatic segmentation method that specifically addresses the challenge of segmenting multiple structures with similar intensity values in subjects with strong anatomic variability. Accordingly, we have applied this method to shape segmentation of normal, dilated, or fused lateral ventricles for quantitative analysis of ventriculomegaly (VM), which is a pivotal finding in the earliest stages of fetal brain development, and warrants further investigation. Utilizing these innovative techniques, we introduce novel volumetric and morphometric biomarkers of VM comparing these values to those that are generated by standard methods of VM analysis, i.e., by measuring the ventricular atrial diameter (AD) on manually selected sections of 2D ultrasound or 2D MRI. To this end, we studied 25 normal and abnormal fetuses in the gestation age (GA) range of 19 to 39 weeks (mean=28.26, stdev=6.56). This heterogeneous dataset was used to 1) validate our segmentation method for normal and abnormal ventricles; and 2) show that the proposed biomarkers may provide improved detection of VM as compared to the AD measurement.
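
The paper's constrained multi-atlas method is elaborate, but the baseline it extends, majority-vote fusion of propagated atlas labels, fits in a few lines. The toy label maps below are invented for illustration.

```python
# Baseline multi-atlas label fusion: after each atlas has been
# registered to the target, fuse the propagated labels voxel-by-voxel
# with a majority vote.
from collections import Counter

def majority_vote(atlas_labels):
    """atlas_labels: one flat label map per atlas, aligned to the
    target image's voxels. Returns the fused label map."""
    fused = []
    for votes in zip(*atlas_labels):
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

# three toy atlases labeling 6 voxels (0 = background, 1 = ventricle)
a1 = [0, 1, 1, 1, 0, 0]
a2 = [0, 1, 1, 0, 0, 0]
a3 = [0, 0, 1, 1, 1, 0]
fused = majority_vote([a1, a2, a3])
print(fused)  # -> [0, 1, 1, 1, 0, 0]
```

Constrained, shape-aware methods like the one described improve on this by weighting atlases and enforcing anatomical plausibility rather than treating every vote equally.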

Gholipour, Ali; Akhondi-Asl, Alireza; Estroff, Judy A.; Warfield, Simon K.

2012-01-01

273

Model Documentation of the Gas Analysis Modeling System. Volume 1. Model Overview.  

National Technical Information Service (NTIS)

This is Volume 1 of three volumes of documentation for the Gas Analysis Modeling System (GAMS) developed by the Analysis and Forecasting Branch, Reserves and Natural Gas Division, Office of Oil and Gas, Energy Information Administration (EIA), US Departmen...

A. S. Kydes

1984-01-01

274

Interactive 3D Analysis of Blood Vessel Trees and Collateral Vessel Volumes in Magnetic Resonance Angiograms in the Mouse Ischemic Hindlimb Model  

PubMed Central

The quantitative analysis of blood vessel volumes from magnetic resonance angiograms (MRA) or µCT images is difficult and time-consuming. This fact, when combined with a study that involves multiple scans of multiple subjects, can represent a significant portion of research time. In order to enhance analysis options and to provide an automated and fast analysis method, we developed a software plugin for the ImageJ and Fiji image processing frameworks that enables the quick and reproducible volume quantification of blood vessel segments. The novel plugin named Volume Calculator (VolCal), accepts any binary (thresholded) image and produces a three-dimensional schematic representation of the vasculature that can be directly manipulated by the investigator. Using MRAs of the mouse hindlimb ischemia model, we demonstrate quick and reproducible blood vessel volume calculations with 95–98% accuracy. In clinical settings this software may enhance image interpretation and the speed of data analysis and thus enhance intervention decisions for example in peripheral vascular disease or aneurysms. In summary, we provide a novel, fast and interactive quantification of blood vessel volumes for single blood vessels or sets of vessel segments with particular focus on collateral formation after an ischemic insult.
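
The volume readout at the heart of such a plugin is simple: count foreground voxels in the thresholded mask and multiply by the physical voxel volume. A sketch with an invented mask and spacing (not VolCal's actual code):

```python
# Volume of a thresholded vessel mask = voxel count * physical voxel
# volume. Mask and spacing below are illustrative only.
def mask_volume_mm3(mask, spacing_mm):
    """mask: 3D nested list of 0/1; spacing_mm: (dz, dy, dx) in mm."""
    dz, dy, dx = spacing_mm
    voxels = sum(v for plane in mask for row in plane for v in row)
    return voxels * dz * dy * dx

# 4 foreground voxels of 0.1 mm isotropic spacing -> 0.004 mm^3
mask = [[[1, 1, 0], [0, 1, 0]],
        [[1, 0, 0], [0, 0, 0]]]
vol = mask_volume_mm3(mask, (0.1, 0.1, 0.1))
print(f"{vol:.4f} mm^3")
```

Per-segment volumes, as VolCal reports them, would apply the same arithmetic to each labeled connected component of the mask separately.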

Marks, Peter C.; Preda, Marilena; Henderson, Terry; Liaw, Lucy; Lindner, Volkhard; Friesel, Robert E.; Pinz, Ilka M.

2014-01-01

275

Parallel runway requirement analysis study. Volume 2: Simulation manual  

NASA Technical Reports Server (NTRS)

This document is a user manual for operating the PLAND_BLUNDER (PLB) simulation program. This simulation is based on two aircraft approaching parallel runways independently and using parallel Instrument Landing System (ILS) equipment during Instrument Meteorological Conditions (IMC). If an aircraft should deviate from its assigned localizer course toward the opposite runway, this constitutes a blunder which could endanger the aircraft on the adjacent path. The worst case scenario would be if the blundering aircraft were unable to recover and continue toward the adjacent runway. PLAND_BLUNDER is a Monte Carlo-type simulation which models the events and aircraft positioning during such a blunder situation. The model simulates two aircraft performing parallel ILS approaches using Instrument Flight Rules (IFR) or visual procedures. PLB uses a simple movement model and control law in three dimensions (X, Y, Z). The parameters of the simulation inputs and outputs are defined in this document along with a sample of the statistical analysis. This document is the second volume of a two volume set. Volume 1 is a description of the application of the PLB to the analysis of close parallel runway operations.

Ebrahimi, Yaghoob S.; Chun, Ken S.

1993-01-01

276

Local label learning (LLL) for subcortical structure segmentation: Application to hippocampus segmentation.  

PubMed

Automatic and reliable segmentation of subcortical structures is an important but difficult task in quantitative brain image analysis. Multi-atlas based segmentation methods have attracted great interest due to their promising performance. Under the multi-atlas based segmentation framework, using deformation fields generated for registering atlas images onto a target image to be segmented, labels of the atlases are first propagated to the target image space and then fused to get the target image segmentation based on a label fusion strategy. While many label fusion strategies have been developed, most of these methods adopt predefined weighting models that are not necessarily optimal. In this study, we propose a novel local label learning strategy to estimate the target image's segmentation label using statistical machine learning techniques. In particular, we use a L1-regularized support vector machine (SVM) with a k nearest neighbor (kNN) based training sample selection strategy to learn a classifier for each voxel of the target image from its neighboring voxels in the atlases based on both image intensity and texture features. Our method has produced segmentation results consistently better than state-of-the-art label fusion methods in validation experiments on hippocampal segmentation of over 100 MR images obtained from publicly available and in-house datasets. Volumetric analysis has also demonstrated the capability of our method in detecting hippocampal volume changes due to Alzheimer's disease. Hum Brain Mapp 35:2674-2697, 2014. © 2013 Wiley Periodicals, Inc. PMID:24151008
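
A drastically simplified sketch of the local-label idea: classify each target voxel by voting among the k atlas voxels with the most similar intensity. The paper's actual method trains an L1-regularized SVM on intensity and texture features; everything below, including the values, is invented for illustration.

```python
# Per-voxel kNN labeling (a stand-in for the paper's kNN-selected,
# L1-regularized SVM): vote among the k atlas voxels nearest in
# intensity to the target voxel.
def knn_label(target_intensity, atlas_pairs, k=3):
    """atlas_pairs: (intensity, label) samples from atlas voxels near
    the target location."""
    nearest = sorted(atlas_pairs,
                     key=lambda p: abs(p[0] - target_intensity))[:k]
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)

# toy samples: 1 = hippocampus, 0 = background
pairs = [(0.92, 1), (0.88, 1), (0.30, 0), (0.85, 1), (0.25, 0), (0.95, 1)]
bright = knn_label(0.90, pairs)  # bright target voxel
dark = knn_label(0.28, pairs)    # dark target voxel
print(bright, dark)  # -> 1 0
```

The real method replaces the raw intensity distance with a learned decision function per voxel, which is what lets it beat fixed-weight fusion rules.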

Hao, Yongfu; Wang, Tianyao; Zhang, Xinqing; Duan, Yunyun; Yu, Chunshui; Jiang, Tianzi; Fan, Yong

2014-06-01

277

[Precision MRI-based joint surface and cartilage density analysis of the knee joint using rapid water-excitation sequence and semi-automatic segmentation algorithm].  

PubMed

The aim of this study was to analyse the precision of three-dimensional joint surface and cartilage thickness measurements in the knee, using a fast, high-resolution water-excitation sequence and a semiautomated segmentation algorithm. The knee joints of 8 healthy volunteers, aged 22 to 29 years, were examined at a resolution of 1.5 mm x 0.31 mm x 0.31 mm, with four sagittal data sets being acquired after repositioning the joint. After semiautomated segmentation with a B-spline Snake algorithm and 3D reconstruction of the patellar, femoral and tibial cartilages, the joint surface areas (triangulation), cartilage volume, and mean and maximum thickness (Euclidean distance transformation) were analysed, independently of the orientation of the sections. The precision (CV%) for the surface areas was 2.1 to 6.6%. The mean cartilage thickness and cartilage volume showed coefficients of 1.9 to 3.5% (except for the femoral condyles), the value for the medial femoral condyle being 9.1%, and for the lateral condyle 6.5%. For maximum thickness, coefficients of between 2.6 and 5.9% were found. In the present study we investigate for the first time the precision of MRI-based joint surface area measurements in the knee, and of cartilage thickness analyses in the femur. Using a selective water-excitation sequence, the acquisition time can be reduced by more than 50%. The poorer precision in the femoral condyles can be attributed to partial volume effects that occur at the edges of the joint surfaces with a sagittal image protocol. Since MRI is non-invasive, it is highly suitable for examination of healthy subjects (generation of individual finite element models, analysis of functional adaptation to mechanical stimulation, measurement of cartilage deformation in vivo) and as a diagnostic tool for follow-up, indication for therapy, and objective evaluation of new therapeutic agents in osteoarthritis. PMID:11155531
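
The precision figures quoted above (CV%) come from repeated acquisitions of the same joint after repositioning. A sketch of the statistic, with invented volumes:

```python
# Precision as coefficient of variation (CV%) across repeated scans
# of the same structure (volumes below are invented).
from statistics import mean, stdev

def cv_percent(values):
    """Per-subject CV% = sample SD / mean * 100."""
    return stdev(values) / mean(values) * 100

# patellar cartilage volume (mm^3) from 4 repositioned acquisitions
volumes = [3150.0, 3095.0, 3210.0, 3120.0]
cv = cv_percent(volumes)
print(f"CV = {cv:.1f}%")
```

Study-level precision is then typically summarized by pooling such per-subject CVs (e.g., as a root-mean-square average) across volunteers.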

Heudorfer, L; Hohe, J; Faber, S; Englmeier, K H; Reiser, M; Eckstein, F

2000-11-01

278

Comparison of Open Tubular Cadmium Reactor and Packed Cadmium Column in Automated Gas-Segmented Continuous Flow Nitrate Analysis  

Microsoft Academic Search

Detailed procedures are provided for preparing packed cadmium columns to reduce nitrate to nitrite. Experiments demonstrated the importance of conditioning both open tubular cadmium reactor (OTCR) and packed copper-coated cadmium columns to achieve 100% reduction efficiency. The effects of segmentation bubbles in the OTCR upon reduction efficiency and baseline noise in nitrate analysis are investigated using an auto-analyzer. Metal particles

Jia-Zhong Zhang; Charles J. Fischer; Peter B. Ortner

2000-01-01

279

Automated condition-invariable neurite segmentation and synapse classification using textural analysis-based machine-learning algorithms  

PubMed Central

High-resolution live-cell imaging studies of neuronal structure and function are characterized by large variability in image acquisition conditions due to background and sample variations as well as low signal-to-noise ratio. The lack of automated image analysis tools that can be generalized for varying image acquisition conditions represents one of the main challenges in the field of biomedical image analysis. Specifically, segmentation of the axonal/dendritic arborizations in brightfield or fluorescence imaging studies is extremely labor-intensive and still performed mostly manually. Here we describe a fully automated machine-learning approach based on textural analysis algorithms for segmenting neuronal arborizations in high-resolution brightfield images of live cultured neurons. We compare performance of our algorithm to manual segmentation and show that it combines 90% accuracy, with similarly high levels of specificity and sensitivity. Moreover, the algorithm maintains high performance levels under a wide range of image acquisition conditions indicating that it is largely condition-invariable. We further describe an application of this algorithm to fully automated synapse localization and classification in fluorescence imaging studies based on synaptic activity. Textural analysis-based machine-learning approach thus offers a high performance condition-invariable tool for automated neurite segmentation.
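
The performance figures quoted above (accuracy with similarly high sensitivity and specificity) derive from a pixel-wise confusion matrix against manual segmentation. The counts below are invented to illustrate the arithmetic:

```python
# Segmentation evaluation metrics from a confusion matrix of
# neurite/background pixel calls (counts are illustrative).
def seg_metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # neurite pixels correctly found
    specificity = tn / (tn + fp)   # background correctly rejected
    return accuracy, sensitivity, specificity

acc, sens, spec = seg_metrics(tp=900, fp=80, tn=920, fn=100)
print(f"accuracy {acc:.2f}, sensitivity {sens:.2f}, specificity {spec:.2f}")
```

For thin structures like neurites, where background pixels dominate, sensitivity and specificity are more informative than raw accuracy, which is why the abstract reports all three.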

Kandaswamy, Umasankar; Rotman, Ziv; Watt, Dana; Schillebeeckx, Ian; Cavalli, Valeria; Klyachko, Vitaly

2013-01-01

280

Concept Area Two Objectives and Test Items (Rev.) Part One, Part Two. Economic Analysis Course. Segments 17-49.  

ERIC Educational Resources Information Center

A multimedia course in economic analysis was developed and used in conjunction with the United States Naval Academy. (See ED 043 790 and ED 043 791 for final reports of the project evaluation and development model.) This report deals with the second concept area of the course and focuses on macroeconomics. Segments 17 through 49 are presented,…

Sterling Inst., Washington, DC. Educational Technology Center.

281

Analysis of the spatial sensitivity of conductance/admittance catheter ventricular volume estimation.  

PubMed

Conductance catheters are known to have a nonuniform spatial sensitivity due to the distribution of the electric field. The Geselowitz relation is applied to murine and multisegment conductance catheters using finite element models to determine the spatial sensitivity in a uniform medium and simplified left ventricle models. A new formulation is proposed that allows determination of the spatial sensitivity to admittance. Analysis of FEM numerical modeling results using the Geselowitz relation provides a true measure of parallel conductance in simplified left ventricle models for assessment of the admittance method and hypertonic saline techniques. The spatial sensitivity of blood conductance (Gb) is determined throughout the cardiac cycle. Gb is converted to volume using Wei's equation to determine if the presence of myocardium alters the nonlinear relationship through changes to the electric field. Results show that muscle conductance (Gm) from the admittance method matches results from the Geselowitz relation and that the relationship between Gb and volume is accurately fit using Wei's equation. Single-segment admittance measurements in large animals result in a more evenly distributed sensitivity to the LV blood pool. The hypertonic saline method overestimates parallel conductance throughout the cardiac cycle in both murine and multisegment conductance catheters. PMID:23559022

Larson, Erik R; Feldman, Marc D; Valvano, Jonathan W; Pearce, John A

2013-08-01

282

Multi-Feature Analysis and Classification of Human Chromosome Images Using Centromere Segmentation Algorithms  

Microsoft Academic Search

Classification of homologous human chromosomes is essential to advanced studies of cancer genetics. This paper describes novel segmentation and classification algorithms to extract multiple features, from microscopy images of chromosomes, for classification purposes. Multicolour images of metaphase chromosomes prepared by applying PNA probes are used for this purpose. Centromeres are segmented using an iterative fuzzy algorithm as well as a

Parvin Mousavi; Rabab Kreidieh Ward; Peter M. Lansdorp; Sidney Fels

2000-01-01

283

Multiscale deformable model segmentation and statistical shape analysis using medial descriptions  

Microsoft Academic Search

This paper presents a multiscale framework based on a medial representation for the segmentation and shape characterization of anatomical objects in medical imagery. The segmentation procedure is based on a Bayesian deformable templates methodology in which the prior information about the geometry and shape of anatomical objects is incorporated via the construction of exemplary templates. The anatomical variability is accommodated

Sarang Joshi; Stephen Pizer; P. Thomas Fletcher; Paul Yushkevich; Andrew Thall; J. S. Marron

2002-01-01

284

Understanding the market for geographic information: A market segmentation and characteristics analysis  

NASA Technical Reports Server (NTRS)

Findings and results from a marketing research study are presented. The report identifies market segments and the product types to satisfy demand in each. An estimate of market size is based on the specific industries in each segment. A sample of ten industries was used in the study. The scientific study covered U.S. firms only.

Piper, William S.; Mick, Mark W.

1994-01-01

285

Segmentation, reconstruction, and analysis of blood thrombi in 2-photon microscopy images  

Microsoft Academic Search

In this paper, we study the problem of segmenting, reconstructing, and analyzing the structure and growth of thrombi (clots) in vivo in blood vessels based on 2-photon microscopic image data. First, we develop an algorithm for segmenting clots in 3-D microscopic images which incorporates the density-based clustering algorithm and other methods for dealing with imaging artifacts. Next, we apply the

Jian Mu; Xiaomin Liu; Malgorzata M. Kamocka; Zhiliang Xu; Mark S. Alber; Elliot D. Rosen; Danny Z. Chen

2009-01-01

286

Optimization of segmented constrained layer damping with mathematical programming using strain energy analysis and modal data  

Microsoft Academic Search

A new method for enhancement of damping capabilities of segmented constrained layer damping material is proposed. Constrained layer damping has been used extensively for many years to damp flexural vibrations. The shear deformation occurring in the viscoelastic core is mainly responsible for the dissipation of energy. Cutting both the constraining and the constrained layer, which leads to segmentation, increases the

Grégoire Lepoittevin; Gerald Kress

2010-01-01

287

A Theoretical Analysis of How Segmentation of Dynamic Visualizations Optimizes Students' Learning  

ERIC Educational Resources Information Center

This article reviews studies investigating segmentation of dynamic visualizations (i.e., showing dynamic visualizations in pieces with pauses in between) and discusses two not mutually exclusive processes that might underlie the effectiveness of segmentation. First, cognitive activities needed for dealing with the transience of dynamic…

Spanjers, Ingrid A. E.; van Gog, Tamara; van Merrienboer, Jeroen J. G.

2010-01-01

288

A New Unsupervised Color Image Segmentation Algorithm upon a Statistical Multidimensional Data Analysis Approach  

Microsoft Academic Search

The problem of segmenting images into coherent regions has been a major subject of research in the field of computer vision. A new unsupervised color image segmentation algorithm, which uses a hyperbolic filter for detecting modal regions of a multivariate probability density function, is presented in this paper. The algorithm is carried out in four processing stages, where

A. Hamid; R. Allaoui; A. Sbihi

2005-01-01

289

Real Time System for Multi-Sensor Image Analysis through Pyramidal Segmentation.  

National Technical Information Service (NTIS)

A state of the art, fully functional, multi-scale and multi-channel segmentation tool has been developed. It is based on the recently developed computational theory of the 2-normal segmentations. A fast multi-scale pyramidal algorithm has been designed an...

L. Rudin; S. Osher; G. Koepfler; J. M. Morel

1992-01-01

290

Reducing Pervasive False-Positive Identical-by-Descent Segments Detected by Large-Scale Pedigree Analysis  

PubMed Central

Analysis of genomic segments shared identical-by-descent (IBD) between individuals is fundamental to many genetic applications, from demographic inference to estimating the heritability of diseases, but IBD detection accuracy in nonsimulated data is largely unknown. In principle, it can be evaluated using known pedigrees, as IBD segments are by definition inherited without recombination down a family tree. We extracted 25,432 genotyped European individuals containing 2,952 father–mother–child trios from the 23andMe, Inc. data set. We then used GERMLINE, a widely used IBD detection method, to detect IBD segments within this cohort. Exploiting known familial relationships, we identified a false-positive rate over 67% for 2–4 centiMorgan (cM) segments, in sharp contrast with accuracies reported in simulated data at these sizes. Nearly all false positives arose from the allowance of haplotype switch errors when detecting IBD, a necessity for retrieving long (>6 cM) segments in the presence of imperfect phasing. We introduce HaploScore, a novel, computationally efficient metric that scores IBD segments proportional to the number of switch errors they contain. Applying HaploScore filtering to the IBD data at a precision of 0.8 produced a 13-fold increase in recall when compared with length-based filtering. We replicate the false IBD findings and demonstrate the generalizability of HaploScore to alternative data sources using an independent cohort of 555 European individuals from the 1000 Genomes project. HaploScore can improve the accuracy of segments reported by any IBD detection method, provided that estimates of the genotyping error rate and switch error rate are available.

Durand, Eric Y.; Eriksson, Nicholas; McLean, Cory Y.

2014-01-01

291

A comparison between handgrip strength, upper limb fat free mass by segmental bioelectrical impedance analysis (SBIA) and anthropometric measurements in young males  

NASA Astrophysics Data System (ADS)

The mechanical function and size of a muscle may be closely linked. Handgrip strength (HGS) has been used as a predictor of functional performance. Anthropometric measurements have been made to estimate arm muscle area (AMA) and physical muscle mass volume of the upper limb (ULMMV). Electrical volume estimation is possible by segmental BIA measurements of fat free mass (SBIA-FFM), mainly muscle mass. The relationship among these variables is not well established. We aimed to determine whether physical and electrical muscle mass estimations relate to each other and to what extent HGS is related to muscle size measured by both methods in normal or overweight young males. Regression analysis was used to determine the association between these variables. Subjects showed a decreased HGS (65.5%), FFM (85.5%) and AMA (74.5%). An acceptable association was found between SBIA-FFM and AMA (r2 = 0.60) and a poorer one between physical and electrical volume (r2 = 0.55). However, a paired Student t-test and a Bland-Altman plot showed that the physical and electrical models were not interchangeable (p < 0.0001). HGS showed a very weak association with anthropometric (r2 = 0.07) and electrical (r2 = 0.192) ULMMV, showing that muscle mass quantity does not imply muscle strength. Other factors influencing HGS, such as physical training or nutrition, require more research.

Gonzalez-Correa, C. H.; Caicedo-Eraso, J. C.; Varon-Serna, D. R.

2013-04-01

292

Edge preserving smoothing and segmentation of 4-D images via transversely isotropic scale-space processing and fingerprint analysis  

SciTech Connect

Enhancements are described for an approach that unifies edge preserving smoothing with segmentation of time sequences of volumetric images, based on differential edge detection at multiple spatial and temporal scales. Potential applications of these 4-D methods include segmentation of respiratory gated positron emission tomography (PET) transmission images to improve accuracy of attenuation correction for imaging heart and lung lesions, and segmentation of dynamic cardiac single photon emission computed tomography (SPECT) images to facilitate unbiased estimation of time-activity curves and kinetic parameters for left ventricular volumes of interest. Improved segmentation of lung surfaces in simulated respiratory gated cardiac PET transmission images is achieved with a 4-D edge detection operator composed of edge preserving 1-D operators applied in various spatial and temporal directions. Smoothing along the axis of a 1-D operator is driven by structure separation seen in the scale-space fingerprint, rather than by image contrast. Spurious noise structures are reduced with use of small-scale isotropic smoothing in directions transverse to the 1-D operator axis. Analytic expressions are obtained for directional derivatives of the smoothed, edge preserved image, and the expressions are used to compose a 4-D operator that detects edges as zero-crossings in the second derivative in the direction of the image intensity gradient. Additional improvement in segmentation is anticipated with use of multiscale transversely isotropic smoothing and a novel interpolation method that improves the behavior of the directional derivatives. The interpolation method is demonstrated on a simulated 1-D edge and incorporation of the method into the 4-D algorithm is described.

Reutter, Bryan W.; Algazi, V. Ralph; Gullberg, Grant T; Huesman, Ronald H.

2004-01-19

293

Automated segmentation of free-lying cell nuclei in Pap smears for malignancy-associated change analysis.  

PubMed

This paper presents an automated algorithm for robustly detecting and segmenting free-lying cell nuclei in bright-field microscope images of Pap smears. This is an essential initial step in the development of an automated screening system for cervical cancer based on malignancy associated change (MAC) analysis. The proposed segmentation algorithm makes use of gray-scale annular closings to identify free-lying nuclei-like objects together with marker-based watershed segmentation to accurately delineate the nuclear boundaries. The algorithm also employs artifact rejection based on size, shape, and granularity to ensure only the nuclei of intermediate squamous epithelial cells are retained. An evaluation of the performance of the algorithm relative to expert manual segmentation of 33 fields-of-view from 11 Pap smear slides is also presented. The results show that the sensitivity and specificity of nucleus detection is 94.71% and 85.30% respectively, and that the accuracy of segmentation, measured using the Dice coefficient, of the detected nuclei is 97.30±1.3%. PMID:23367143

Moshavegh, Ramin; Ehteshami Bejnordi, Babak; Mehnert, Andrew; Sujathan, K; Malm, Patrik; Bengtsson, Ewert

2012-01-01
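The Dice coefficient used in the record above to score segmentation accuracy is straightforward to compute for binary masks; the following is a minimal illustrative sketch (not the authors' implementation), with hypothetical toy masks standing in for automated and manual delineations:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary segmentation masks: 2|A∩B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / total

# toy automated vs. manual delineation of a nucleus
auto = np.array([[0, 1, 1], [0, 1, 0]])
manual = np.array([[0, 1, 0], [0, 1, 0]])
print(dice_coefficient(auto, manual))  # 0.8
```

A Dice value of 0.9730, as reported above, corresponds to near-total overlap between the detected and expert-drawn nuclear boundaries.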

294

Vessel segmentation in 3D spectral OCT scans of the retina  

NASA Astrophysics Data System (ADS)

The latest generation of spectral optical coherence tomography (OCT) scanners is able to image 3D cross-sectional volumes of the retina at high resolution and high speed. These scans offer a detailed view of the structure of the retina. Automated segmentation of the vessels in these volumes may lead to more objective diagnosis of retinal vascular diseases, including hypertensive retinopathy and retinopathy of prematurity. Additionally, vessel segmentation can allow color fundus images to be registered to these 3D volumes, possibly leading to a better understanding of the structure and localization of retinal structures and lesions. In this paper we present a method for automatically segmenting the vessels in a 3D OCT volume. First, the retina is automatically segmented into multiple layers, using simultaneous segmentation of their boundary surfaces in 3D. Next, a 2D projection of the vessels is produced by using information only from certain segmented layers. Finally, a supervised, pixel-classification-based vessel segmentation approach is applied to the projection image. We compared the influence of two projection methods on the performance of the vessel segmentation on 10 optic-nerve-head-centered 3D OCT scans. The method was trained on 5 independent scans. Using ROC analysis, our proposed vessel segmentation system obtains an area under the curve of 0.970 when compared with the segmentation of a human observer.

Niemeijer, Meindert; Garvin, Mona K.; van Ginneken, Bram; Sonka, Milan; Abràmoff, Michael D.

2008-04-01
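The area under the ROC curve reported above (0.970) can be computed directly from classifier scores via the rank-sum identity, without sweeping thresholds. A small illustrative sketch (not the authors' evaluation code; the scores are hypothetical):

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the rank-sum identity: the probability that a randomly chosen
    positive (vessel) pixel outscores a randomly chosen negative (background)
    pixel, with ties counting half. Equivalent to trapezoidal ROC area."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# hypothetical classifier scores for labeled vessel / background pixels
print(roc_auc([0.9, 0.8, 0.4], [0.5, 0.3, 0.2]))  # ~0.889
```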

295

Volume measurements of normal orbital structures by computed tomographic analysis  

SciTech Connect

Computed tomographic digital data and special off-line computer graphic analysis were used to measure volumes of normal orbital soft tissue, extraocular muscle, orbital fat, and total bony orbit in vivo in 29 patients (58 orbits). The upper limits of normal for adult bony orbit, soft tissue exclusive of the globe, orbital fat, and muscle are 30.1 cm³, 20.0 cm³, 14.4 cm³, and 6.5 cm³, respectively. There are small differences in men as a group compared with women but minimal difference between right and left orbits in the same person. The accuracy of the techniques was established at 7%-8% for these orbital structural volumes in physical phantoms and in simulated silicone orbit phantoms in dry skulls. Mean values and upper limits of normal for volumes were determined in adult orbital structures for future comparison with changes due to endocrine ophthalmopathy, trauma, and congenital deformity.

Forbes, G.; Gehring, D.G.; Gorman, C.A.; Brennan, M.D.; Jackson, I.T.

1985-07-01

296

BpMatch: an efficient algorithm for a segmental analysis of genomic sequences.  

PubMed

Here, we propose BpMatch: an algorithm that, working on a suitably modified suffix-tree data structure, is able to compute, in a fast and efficient way, the coverage of a source sequence S on a target sequence T, taking into account direct and reverse segments, possibly overlapping. Using BpMatch, the operator should define, a priori, the minimum length l of a segment and the minimum number of occurrences minRep, so that only segments longer than l and occurring more than minRep times are considered significant. BpMatch outputs the significant segments found and the computed segment-based distance. In the worst case, assuming the alphabet size d is a constant, the time required by BpMatch to calculate the coverage is O(l²n). On average, by setting l ≥ 2 log_d(n), the time required to calculate the coverage is only O(n). BpMatch, thanks to the minRep parameter, can also be used to perform a self-covering: to cover a sequence using segments coming from itself, avoiding the trivial solution of a single segment coincident with the whole sequence. The result of the self-covering approach is a spectral representation of the repeats contained in the sequence. BpMatch is freely available at: www.sourceforge.net/projects/bpmatch. PMID:22350206

Felicioli, Claudio; Marangoni, Roberto

2012-01-01
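The coverage notion described above can be illustrated with a naive window scan. This sketch is only a toy restatement of the definition (quadratic, not the paper's suffix-tree algorithm), and the DNA alphabet, the `count_overlapping` helper, and the example sequences are assumptions for illustration:

```python
def count_overlapping(text, pattern):
    """Count occurrences of pattern in text, allowing overlaps."""
    count, start = 0, 0
    while True:
        idx = text.find(pattern, start)
        if idx == -1:
            return count
        count += 1
        start = idx + 1

def coverage(source, target, min_len, min_rep):
    """Fraction of target positions covered by length-min_len windows that
    occur at least min_rep times in the source, counting both direct
    occurrences and occurrences in the reverse complement."""
    rev_comp = source.translate(str.maketrans("ACGT", "TGCA"))[::-1]
    covered = [False] * len(target)
    for i in range(len(target) - min_len + 1):
        window = target[i:i + min_len]
        occ = count_overlapping(source, window) + count_overlapping(rev_comp, window)
        if occ >= min_rep:
            covered[i:i + min_len] = [True] * min_len
    return sum(covered) / len(target)

print(coverage("ACGTACGT", "ACGT", min_len=4, min_rep=1))  # 1.0
```

Raising min_rep filters out segments that occur too rarely, which is what makes the self-covering mode non-trivial.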

297

Comparative analysis of operational forecasts versus actual weather conditions in airline flight planning, volume 4  

NASA Technical Reports Server (NTRS)

The impact of more timely and accurate weather data on airline flight planning, with the emphasis on fuel savings, is studied. This volume of the report discusses the results of Task 4 of the four major tasks included in the study. Task 4 uses flight plan segment wind and temperature differences as indicators of dates and geographic areas for which significant forecast errors may have occurred. An in-depth analysis is then conducted for the days identified. The analysis shows that significant errors occurred in the operational forecast on 15 of the 33 arbitrarily selected days included in the study. Wind speeds in an area of maximum winds were underestimated by at least 20 to 25 kts. on 14 of these days. The analysis also shows a tendency to repeat the same forecast errors from prog to prog. Also, some perceived forecast errors from the flight plan comparisons could not be verified by visual inspection of the corresponding National Meteorological Center forecast and analysis charts, and it is likely that they are the result of weather data interpolation techniques or some other data processing procedure in the airlines' flight planning systems.

Keitz, J. F.

1982-01-01

298

Fully automatic segmentation of white matter hyperintensities in MR images of the elderly  

Microsoft Academic Search

The role of quantitative image analysis in large clinical trials is continuously increasing. Several methods are available for performing white matter hyperintensity (WMH) volume quantification. They vary in the amount of human interaction involved. In this paper, we describe a fully automatic segmentation that was used to quantify WMHs in a large clinical trial on elderly subjects. Our segmentation

F. Admiraal-Behloul; D. M. J. van den Heuvel; H. Olofsen; M. J. P. van Osch; J. van der Grond; M. A. van Buchem; J. H. C. Reiber

2005-01-01

299

Magnetic field analysis of Lorentz motors using a novel segmented magnetic equivalent circuit method.  

PubMed

A simple and accurate method based on the magnetic equivalent circuit (MEC) model is proposed in this paper to predict magnetic flux density (MFD) distribution of the air-gap in a Lorentz motor (LM). In conventional MEC methods, the permanent magnet (PM) is treated as one common source and all branches of MEC are coupled together to become a MEC network. In our proposed method, every PM flux source is divided into three sub-sections (the outer, the middle and the inner). Thus, the MEC of LM is divided correspondingly into three independent sub-loops. As the size of the middle sub-MEC is small enough, it can be treated as an ideal MEC and solved accurately. Combining with decoupled analysis of outer and inner MECs, MFD distribution in the air-gap can be approximated by a quadratic curve, and the complex calculation of reluctances in MECs can be avoided. The segmented magnetic equivalent circuit (SMEC) method is used to analyze a LM, and its effectiveness is demonstrated by comparison with FEA, conventional MEC and experimental results. PMID:23358368

Qian, Junbing; Chen, Xuedong; Chen, Han; Zeng, Lizhan; Li, Xiaoqing

2013-01-01

300

Magnetic Field Analysis of Lorentz Motors Using a Novel Segmented Magnetic Equivalent Circuit Method  

PubMed Central

A simple and accurate method based on the magnetic equivalent circuit (MEC) model is proposed in this paper to predict magnetic flux density (MFD) distribution of the air-gap in a Lorentz motor (LM). In conventional MEC methods, the permanent magnet (PM) is treated as one common source and all branches of MEC are coupled together to become a MEC network. In our proposed method, every PM flux source is divided into three sub-sections (the outer, the middle and the inner). Thus, the MEC of LM is divided correspondingly into three independent sub-loops. As the size of the middle sub-MEC is small enough, it can be treated as an ideal MEC and solved accurately. Combining with decoupled analysis of outer and inner MECs, MFD distribution in the air-gap can be approximated by a quadratic curve, and the complex calculation of reluctances in MECs can be avoided. The segmented magnetic equivalent circuit (SMEC) method is used to analyze a LM, and its effectiveness is demonstrated by comparison with FEA, conventional MEC and experimental results.

Qian, Junbing; Chen, Xuedong; Chen, Han; Zeng, Lizhan; Li, Xiaoqing

2013-01-01

301

Segmental structure of the Brassica napus genome based on comparative analysis with Arabidopsis thaliana.  

PubMed

Over 1000 genetically linked RFLP loci in Brassica napus were mapped to homologous positions in the Arabidopsis genome on the basis of sequence similarity. Blocks of genetically linked loci in B. napus frequently corresponded to physically linked markers in Arabidopsis. This comparative analysis allowed the identification of a minimum of 21 conserved genomic units within the Arabidopsis genome, which can be duplicated and rearranged to generate the present-day B. napus genome. The conserved regions extended over lengths as great as 50 cM in the B. napus genetic map, equivalent to approximately 9 Mb of contiguous sequence in the Arabidopsis genome. There was also evidence for conservation of chromosome landmarks, particularly centromeric regions, between the two species. The observed segmental structure of the Brassica genome strongly suggests that the extant Brassica diploid species evolved from a hexaploid ancestor. The comparative map assists in exploiting the Arabidopsis genomic sequence for marker and candidate gene identification within the larger, intractable genomes of the Brassica polyploids. PMID:16020789

Parkin, Isobel A P; Gulden, Sigrun M; Sharpe, Andrew G; Lukens, Lewis; Trick, Martin; Osborn, Thomas C; Lydiate, Derek J

2005-10-01

302

Motion analysis of knee joint using dynamic volume images  

NASA Astrophysics Data System (ADS)

Acquisition and analysis of the three-dimensional movement of the knee joint is desired in orthopedic surgery. We have developed two methods to obtain dynamic volume images of the knee joint. One is a 2D/3D registration method combining bi-plane dynamic X-ray fluoroscopy with static three-dimensional CT; the other uses so-called 4D-CT, with a cone beam and a wide 2D detector. In this paper, we present two analyses of knee joint movement obtained by these methods: (1) transition of the nearest points between the femur and tibia, and (2) principal component analysis (PCA) of six parameters representing the three-dimensional movement of the knee. As preprocessing for the analysis, the femur and tibia regions are first extracted from the volume data at each time frame, and then registration of the tibia between different frames by an affine transformation consisting of rotation and translation is performed. The same transformation is applied to the femur as well. Using those image data, the movement of the femur relative to the tibia can be analyzed. Six movement parameters of the femur, consisting of three translation parameters and three rotation parameters, are obtained from those images. In analysis (1), the axis of each bone is first found and then the flexion angle of the knee joint is calculated. For each flexion angle, the minimum distance between femur and tibia and the location giving that minimum distance are found in both the lateral and medial condyles. As a result, it was observed that the movement of the lateral condyle is larger than that of the medial condyle. In analysis (2), it was found that the movement of the knee can be represented by the first three principal components with a precision of 99.58%, and those three components seem to relate strongly to three major movements of the femur in the knee bend known in orthopedic surgery.

Haneishi, Hideaki; Kohno, Takahiro; Suzuki, Masahiko; Moriya, Hideshige; Mori, Sin-ichiro; Endo, Masahiro

2006-03-01
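The PCA step in the record above — six motion parameters per time frame, with the first three components capturing 99.58% of the variance — can be sketched with a plain SVD. The synthetic motion trace below is a hypothetical stand-in for real fluoroscopy-derived parameters:

```python
import numpy as np

def explained_variance_ratio(frames):
    """PCA of per-frame motion parameters (rows = time frames, columns =
    3 translations + 3 rotations); returns each component's share of variance."""
    centered = frames - frames.mean(axis=0)
    singular = np.linalg.svd(centered, compute_uv=False)
    var = singular ** 2
    return var / var.sum()

# hypothetical motion trace: flexion-like terms dominate, with small coupled drift
t = np.linspace(0.0, 1.0, 50)
frames = np.column_stack([np.sin(t), 0.3 * t, 0.1 * t,
                          5.0 * t, 0.2 * np.sin(2 * t), 0.05 * t])
ratio = explained_variance_ratio(frames)
print(round(ratio[:3].sum(), 4))  # 1.0 — this synthetic motion is rank 3
```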

303

Parallel runway requirement analysis study. Volume 1: The analysis  

NASA Technical Reports Server (NTRS)

The correlation of increased flight delays with the level of aviation activity is well recognized. A main contributor to these flight delays has been the capacity of airports. Though new airport and runway construction would significantly increase airport capacity, few programs of this type are currently planned, let alone underway, because of the high cost associated with such endeavors. Therefore, it is necessary to achieve the most efficient and cost effective use of existing fixed airport resources through better planning and control of traffic flows. In fact, during the past few years the FAA has initiated such an airport capacity program designed to provide additional capacity at existing airports. Some of the improvements that the program has generated thus far have been based on new Air Traffic Control procedures, terminal automation, additional Instrument Landing Systems, improved controller display aids, and improved utilization of multiple runways/Instrument Meteorological Conditions (IMC) approach procedures. A useful element in understanding potential operational capacity enhancements at high demand airports has been the development and use of an analysis tool called The PLAND_BLUNDER (PLB) Simulation Model. The objective for building this simulation was to develop a parametric model that could be used for analysis in determining the minimum safety level of parallel runway operations for various parameters representing the airplane, navigation, surveillance, and ATC system performance.
This simulation is useful as: a quick and economical evaluation of existing environments that are experiencing IMC delays, an efficient way to study and validate proposed procedure modifications, an aid in evaluating requirements for new airports or new runways in old airports, a simple, parametric investigation of a wide range of issues and approaches, an ability to tradeoff air and ground technology and procedures contributions, and a way of considering probable blunder mechanisms and range of blunder scenarios. This study describes the steps of building the simulation and considers the input parameters, assumptions and limitations, and available outputs. Validation results and sensitivity analysis are addressed as well as outlining some IMC and Visual Meteorological Conditions (VMC) approaches to parallel runways. Also, present and future applicable technologies (e.g., Digital Autoland Systems, Traffic Collision and Avoidance System II, Enhanced Situational Awareness System, Global Positioning Systems for Landing, etc.) are assessed and recommendations made.

Ebrahimi, Yaghoob S.

1993-01-01

304

Finite element analysis of weightbath hydrotraction treatment of degenerated lumbar spine segments in elastic phase.  

PubMed

3D finite element models of human lumbar functional spinal units (FSU) were used for numerical analysis of weightbath hydrotraction therapy (WHT) applied for treating degenerative diseases of the lumbar spine. Five grades of age-related degeneration were modeled by material properties. Tensile material parameters of discs were obtained by parameter identification based on in vivo measured elongations of lumbar segments during regular WHT; compressive material constants were obtained from the literature. It was proved numerically that young adults of 40-45 years have the most deformable and vulnerable discs, while the stability of segments increases with further aging. The reasons were found by analyzing the separate contrasting effects of decreasing incompressibility and increasing hardening of the nucleus, yielding non-monotonic functions of stresses and deformations in terms of aging and degeneration. WHT consists of indirect and direct traction phases. Discs show a bilinear material behaviour, with higher resistance in the indirect and lower resistance in the direct traction phase. Consequently, although the direct traction load is only 6% of the indirect one, direct traction deformations are 15-90% of the indirect ones, depending on the grade of degeneration. Moreover, the ratio of direct stress relaxation remains only about 6-8%. Consequently, direct traction controlled by extra lead weights mostly influences the deformations responsible for nerve release, while stress relaxation is influenced mainly by the indirect traction load coming from the removal of the compressive body weight and muscle forces in the water. A mildly degenerated disc in WHT shows 0.15 mm direct, 0.45 mm indirect and 0.6 mm total extension; 0.2 mm direct, 0.6 mm indirect and 0.8 mm total posterior contraction. A severely degenerated disc exhibits 0.05 mm direct, 0.05 mm indirect and 0.1 mm total extension; 0.05 mm direct, 0.25 mm indirect and 0.3 mm total posterior contraction.
These deformations relate to the instantaneous elastic phase of WHT and are doubled during the creep period of the treatment. The beneficial clinical impacts of WHT are still evident even 3 months later. PMID:19883918

Kurutz, M; Oroszváry, L

2010-02-10

305

Breast Tissue 3D Segmentation and Visualization on MRI  

PubMed Central

Tissue segmentation and visualization are useful for breast lesion detection and quantitative analysis. In this paper, a 3D segmentation algorithm based on Kernel-based Fuzzy C-Means (KFCM) is proposed to separate breast MR images into different tissues. Then, an improved volume rendering algorithm based on a new transfer function model is applied to implement 3D breast visualization. Experimental results, shown visually, achieve reasonable consistency.

Cui, Xiangfei; Sun, Feifei

2013-01-01
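The clustering idea behind the record above can be sketched with plain fuzzy c-means on 1-D intensities. This is a simplified stand-in for the paper's kernel-based variant (KFCM replaces the Euclidean distance with a kernel-induced one); the quantile initialization and toy intensities are assumptions for illustration:

```python
import numpy as np

def fuzzy_cmeans(x, k=3, m=2.0, iters=50):
    """Plain fuzzy c-means on 1-D intensities. Alternates the standard
    membership update u ∝ d^(-2/(m-1)) with the fuzzy-weighted center update."""
    centers = np.quantile(x, np.linspace(0.0, 1.0, k))  # spread initial centers
    u = np.full((len(x), k), 1.0 / k)
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return centers, u

# three well-separated intensity clusters standing in for "tissues"
x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2, 10.0, 10.1, 10.2])
centers, memberships = fuzzy_cmeans(x)
print(np.round(np.sort(centers), 1))  # ≈ [0.1, 5.1, 10.1]
```

Each pixel ends up with a membership vector rather than a hard label, which is what makes the method tolerant of partial-volume boundaries.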

306

Evaluation of accuracy in partial volume analysis of small objects  

NASA Astrophysics Data System (ADS)

Accurate and robust assessment of quantitative parameters is a key issue in many fields of medical image analysis, and can have a direct impact on diagnosis and treatment monitoring. Especially for the analysis of small structures such as focal lesions in patients with MS, the finite spatial resolution of imaging devices is often a limiting factor that results in a mixture of different tissue types. We propose a new method that allows an accurate quantification of medical image data, focusing on a dedicated model for partial volume (PV) artifacts. Today, a widely accepted model assumption is that of a uniformly distributed linear mixture of pure tissues. However, several publications have clearly shown that this is not an appropriate choice in many cases. We propose a generalization of current PV models based on the Beta distribution, yielding a more accurate quantification. Furthermore, we present a new classification scheme. Prior knowledge obtained from a set of training data allows a robust initial estimate of the proper model parameters, even in cases of objects with predominant PV artifacts. A maximum likelihood based clustering algorithm is employed, resulting in a robust volume estimate. Experiments are carried out on more than 100 stylized software phantoms as well as on realistic phantom data sets. A comparison with current mixture models shows the capabilities of our approach.

Rexilius, Jan; Peitgen, Heinz-Otto

2008-04-01

307

First histopathological and immunophenotypic analysis of early dynamic events in a patient with segmental vitiligo associated with halo nevi.  

PubMed

Segmental vitiligo is often ascribed to the neurogenic theory of melanocyte destruction, although data about the initial etiopathological events are scarce. Clinical, histopathological and T-cell phenotypic analyses were performed during the early onset of a segmental vitiligo lesion in a patient with associated halo nevi. Histopathological analysis revealed a lymphocytic infiltrate, mainly composed of CD8+ T cells and some CD4+ T cells, around the dermo-epidermal junction. Flow cytometry analysis of resident T cells revealed a clear enrichment of pro-inflammatory IFN-gamma-producing CD8+ T cells in lesional skin compared to non-lesional skin. Using human leukocyte antigen-peptide tetramers (MART-1, tyrosinase, gp100), increased numbers of T cells recognizing melanocyte antigens were found in segmental vitiligo lesional skin compared with non-lesional skin and blood. Our findings indicate that a CD8+ melanocyte-specific T cell-mediated immune response, as observed in generalized vitiligo, also plays a role in segmental vitiligo with associated halo nevi. PMID:20370855

van Geel, Nanja A C; Mollet, Ilse G; De Schepper, Sofie; Tjin, Esther P M; Vermaelen, Karim; Clark, Rachael A; Kupper, Thomas S; Luiten, Rosalie M; Lambert, Jo

2010-06-01

308

An entropy-based automated cell nuclei segmentation and quantification: application in analysis of wound healing process.  

PubMed

The segmentation and quantification of cell nuclei are two very significant tasks in the analysis of histological images. Accurate results of cell nuclei segmentation are often adapted to a variety of applications such as the detection of cancerous cell nuclei and the observation of overlapping cellular events occurring during wound healing process in the human body. In this paper, an automated entropy-based thresholding system for segmentation and quantification of cell nuclei from histologically stained images has been presented. The proposed translational computation system aims to integrate clinical insight and computational analysis by identifying and segmenting objects of interest within histological images. Objects of interest and background regions are automatically distinguished by dynamically determining 3 optimal threshold values for the 3 color components of an input image. The threshold values are determined by means of entropy computations that are based on probability distributions of the color intensities of pixels and the spatial similarity of pixel intensities within neighborhoods. The effectiveness of the proposed system was tested over 21 histologically stained images containing approximately 1800 cell nuclei, and the overall performance of the algorithm was found to be promising, with high accuracy and precision values. PMID:23533544

Oswal, Varun; Belle, Ashwin; Diegelmann, Robert; Najarian, Kayvan

2013-01-01
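The entropy criterion described in the record above can be illustrated per color channel with a Kapur-style maximum-entropy threshold. This sketch shows the entropy idea only; the paper's system additionally uses spatial neighborhood similarity, and the synthetic intensities are assumptions for illustration:

```python
import numpy as np

def max_entropy_threshold(channel):
    """Kapur-style maximum-entropy threshold for one 8-bit channel: choose t
    maximizing the summed entropies of the below- and above-threshold classes."""
    hist = np.bincount(channel.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue  # one class empty: entropy undefined, skip
        p0 = p[:t][p[:t] > 0] / w0
        p1 = p[t:][p[t:] > 0] / w1
        h = -(p0 * np.log(p0)).sum() - (p1 * np.log(p1)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# synthetic channel: dark nuclei-like values vs. bright background values
channel = np.array([40, 50, 60] * 50 + [190, 200, 210] * 50, dtype=np.uint8)
t = max_entropy_threshold(channel)
print(60 < t <= 190)  # True — the threshold falls between the two groups
```

Running this per color component yields the 3 thresholds the abstract refers to, one per channel.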

309

An Entropy-Based Automated Cell Nuclei Segmentation and Quantification: Application in Analysis of Wound Healing Process  

PubMed Central

The segmentation and quantification of cell nuclei are two very significant tasks in the analysis of histological images. Accurate results of cell nuclei segmentation are often adapted to a variety of applications such as the detection of cancerous cell nuclei and the observation of overlapping cellular events occurring during wound healing process in the human body. In this paper, an automated entropy-based thresholding system for segmentation and quantification of cell nuclei from histologically stained images has been presented. The proposed translational computation system aims to integrate clinical insight and computational analysis by identifying and segmenting objects of interest within histological images. Objects of interest and background regions are automatically distinguished by dynamically determining 3 optimal threshold values for the 3 color components of an input image. The threshold values are determined by means of entropy computations that are based on probability distributions of the color intensities of pixels and the spatial similarity of pixel intensities within neighborhoods. The effectiveness of the proposed system was tested over 21 histologically stained images containing approximately 1800 cell nuclei, and the overall performance of the algorithm was found to be promising, with high accuracy and precision values.

Oswal, Varun; Belle, Ashwin; Diegelmann, Robert; Najarian, Kayvan

2013-01-01

310

Analysis of DNA Sequences through Segmentation: Exploring the Mosaic via Statistical Measures  

NASA Astrophysics Data System (ADS)

The Jensen-Shannon divergence provides a quantitative entropic measure through which genomic DNA can be divided into compositionally distinct domains by a standard recursive segmentation procedure. In this article we show the scaling behaviour observed in domain length distribution and further explore the significance of these domains in the context of gene location, in application to the segmentation of a complete bacterial genome. We also show that this entropic measure has the potential of detecting the horizontally transferred genes in a genome.
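The divergence measure underlying the segmentation can be computed directly from base compositions. A minimal sketch (the recursive step, splitting a sequence where the composition-weighted divergence is maximal, is only described by the article and not reproduced here):

```python
from collections import Counter
from math import log2

def js_divergence(seq1, seq2, alphabet="ACGT"):
    """Jensen-Shannon divergence (in bits) between the base
    compositions of two DNA segments: 0 for identical compositions,
    1 when the compositions are disjoint."""
    def dist(seq):
        c = Counter(seq)
        n = len(seq)
        return [c[a] / n for a in alphabet]

    def kl(a, b):  # Kullback-Leibler divergence, 0*log0 taken as 0
        return sum(ai * log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)

    p, q = dist(seq1), dist(seq2)
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```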

Ramaswamy, Ramakrishna; Azad, Rajeev K.

311

Multi-scale Deformable Model Segmentation and Statistical Shape Analysis Using Medial Descriptions  

Microsoft Academic Search

This paper presents a multiscale framework based on a medial representation for the segmentation and shape characterization of anatomical objects in medical imagery. The segmentation procedure is based on a Bayesian deformable templates methodology in which the prior information about the geometry and shape of anatomical objects is incorporated via the construction of exemplary templates. The anatomical variability is


Sarang C. Joshi; Stephen M. Pizer; P. Thomas Fletcher; Paul A. Yushkevich; Andrew Thall; J. S. Marron

2002-01-01

312

A Comprehensive Analysis of Swift XRT Data. II. Diverse Physical Origins of the Shallow Decay Segment  

Microsoft Academic Search

The origin of the shallow decay segment in Swift XRT light curves remains a puzzle. We analyze the properties of this segment with a sample of 53 long Swift GRBs detected before 2007 February. We show that the distributions of the sample's characteristics are lognormal or normal, and its isotropic X-ray energy (Eiso,X) is linearly correlated with the prompt gamma-ray

En-Wei Liang; Bin-Bin Zhang; Bing Zhang

2007-01-01

313

Characterization of Transmembrane Segments 3, 4, and 5 of MalF by Mutational Analysis  

Microsoft Academic Search

MalF and MalG are the cytoplasmic membrane components of the binding protein-dependent ATP binding cassette maltose transporter in Escherichia coli. They are thought to form the transport channel and are thus of critical importance for the mechanism of transport. To study the contributions of individual transmembrane segments of MalF, we isolated 27 point mutations in membrane-spanning segments 3, 4, and

ANGELIKA STEINKE; SANDRA GRAU; AMY DAVIDSON; ECKHARD HOFMANN; MICHAEL EHRMANN

2001-01-01

314

Cell segmentation by multi-resolution analysis and maximum likelihood estimation (MAMLE)  

PubMed Central

Background Cell imaging is becoming an indispensable tool for cell and molecular biology research. However, most processes studied are stochastic in nature, and require the observation of many cells and events. Ideally, extraction of information from these images ought to rely on automatic methods. Here, we propose a novel segmentation method, MAMLE, for detecting cells within dense clusters. Methods MAMLE executes cell segmentation in two stages. The first relies on state-of-the-art filtering techniques: multi-resolution edge detection with a morphological operator and threshold decomposition for adaptive thresholding. From this result, a correction procedure is applied that exploits a maximum likelihood estimate as an objective function. It also acquires morphological features from the initial segmentation for constructing the likelihood parameter, after which the final segmentation is obtained. Conclusions We performed an empirical evaluation that includes sample images from different imaging modalities and diverse cell types. The new method attained very high (above 90%) cell segmentation accuracy in all cases. Finally, its accuracy was compared to several existing methods, and in all tests, MAMLE outperformed them in segmentation accuracy.

2013-01-01

315

Genetic properties of medium (M) and small (S) genomic RNA segments of Seoul hantavirus isolated from Rattus norvegicus and antigenicity analysis of recombinant nucleocapsid protein  

Microsoft Academic Search

A novel isolate of Seoul (SEO) hantaviruses was detected and identified in Rattus norvegicus in Shandong Province, China and designated as JUN5-14. The partial M segment and the coding region of nucleocapsid protein (NP) in the S segment of JUN5-14 were PCR-amplified and sequenced. Nucleotide sequence analysis of the partial M segment (300 bp) revealed that JUN5-14 isolate was closely related

Zexin Tao; Zhiyu Wang; Shaoxia Song; Hongling Wen; Guijie Ren; Guiting Wang

2007-01-01

316

Three-dimensional volume analysis of vasculature in engineered tissues  

NASA Astrophysics Data System (ADS)

Three-dimensional textural and volumetric image analysis holds great potential in understanding the image data produced by multi-photon microscopy. In this paper, an algorithm that quantitatively analyzes the texture and the morphology of vasculature in engineered tissues is proposed. The investigated 3D artificial tissues consist of Human Umbilical Vein Endothelial Cells (HUVEC) embedded in collagen exposed to two regimes of ultrasound standing wave fields under different pressure conditions. Textural features were evaluated using the normalized Gray-Scale Cooccurrence Matrix (GLCM) combined with Gray-Level Run Length Matrix (GLRLM) analysis. To minimize error resulting from any possible volume rotation and to provide a comprehensive textural analysis, an averaged version of nine GLCM and GLRLM orientations is used. To evaluate volumetric features, an automatic threshold using the gray level mean value is utilized. Results show that our analysis is able to differentiate among the exposed samples, due to morphological changes induced by the standing wave fields. Furthermore, we demonstrate that providing more textural parameters than what is currently being reported in the literature, enhances the quantitative understanding of the heterogeneity of artificial tissues.
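The co-occurrence and run-length statistics the abstract relies on can be illustrated with a small 2D GLCM and two common Haralick-style features. This is a toy sketch under stated assumptions: a single offset (the paper averages nine 3D orientations), a small number of gray levels, and illustrative function names.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def contrast(g):
    """Weighted squared gray-level difference; 0 for a uniform image."""
    i, j = np.indices(g.shape)
    return float(((i - j) ** 2 * g).sum())

def homogeneity(g):
    """Inverse-difference moment; 1 for a uniform image."""
    i, j = np.indices(g.shape)
    return float((g / (1.0 + abs(i - j))).sum())
```

To mimic the paper's rotation-robust analysis, one would average these features over GLCMs computed for several (dx, dy) offsets.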

Yousef Hussien, Mohammed; Garvin, Kelley; Dalecki, Diane; Saber, Eli; Helguera, María

2013-01-01

317

Analysis of Hantavirus Genetic Diversity in Argentina: S Segment-Derived Phylogeny  

PubMed Central

Nucleotide sequences were determined for the complete S genome segments of the six distinct hantavirus genotypes from Argentina and for two cell culture-isolated Andes virus strains from Chile. Phylogenetic analysis indicates that, although divergent from each other, all Argentinian hantavirus genotypes group together and form a novel phylogenetic clade with the Andes virus. The previously characterized South American hantaviruses Laguna Negra virus and Rio Mamore virus make up another clade that originates from the same ancestral node as the Argentinian/Chilean viruses. Within the clade of Argentinian/Chilean viruses, three subclades can be defined, although the branching order is somewhat obscure. These are made of (i) “Lechiguanas-like” virus genotypes, (ii) Maciel virus and Pergamino virus genotypes, and (iii) strains of the Andes virus. Two hantavirus genotypes from Brazil, Araraquara and Castello dos Sonhos, were found to group with Maciel virus and Andes virus, respectively. The nucleocapsid protein amino acid sequence variability among the members of the Argentinian/Chilean clade does not exceed 5.8%. It is especially low (3.5%) among oryzomyine species-associated virus genotypes, suggesting recent divergence from the common ancestor. Interestingly, the Maciel and Pergamino viruses fit well with the rest of the clade although their hosts are akodontine rodents. Taken together, these data suggest that under conditions in which potential hosts display a high level of genetic diversity and are sympatric, host switching may play a prominent role in establishing hantavirus genetic diversity. However, cospeciation still remains the dominant factor in the evolution of hantaviruses.

Bohlman, Marlene C.; Morzunov, Sergey P.; Meissner, John; Taylor, Mary Beth; Ishibashi, Kimiko; Rowe, Joan; Levis, Silvana; Enria, Delia; St. Jeor, Stephen C.

2002-01-01

318

Global Warming’s Six Americas: An Audience Segmentation Analysis (Invited)  

NASA Astrophysics Data System (ADS)

One of the first rules of effective communication is to “know thy audience.” People have different psychological, cultural and political reasons for acting - or not acting - to reduce greenhouse gas emissions, and climate change educators can increase their impact by taking these differences into account. In this presentation we will describe six unique audience segments within the American public that each responds to the issue in its own distinct way, and we will discuss methods of engaging each. The six audiences were identified using a nationally representative survey of American adults conducted in the fall of 2008 (N=2,164). In two waves of online data collection, the public’s climate change beliefs, attitudes, risk perceptions, values, policy preferences, conservation, and energy-efficiency behaviors were assessed. The data were subjected to latent class analysis, yielding six groups distinguishable on all the above dimensions. The Alarmed (18%) are fully convinced of the reality and seriousness of climate change and are already taking individual, consumer, and political action to address it. The Concerned (33%) - the largest of the Six Americas - are also convinced that global warming is happening and a serious problem, but have not yet engaged with the issue personally. Three other Americas - the Cautious (19%), the Disengaged (12%) and the Doubtful (11%) - represent different stages of understanding and acceptance of the problem, and none are actively involved. The final America - the Dismissive (7%) - are very sure it is not happening and are actively involved as opponents of a national effort to reduce greenhouse gas emissions. Mitigating climate change will require a diversity of messages, messengers and methods that take into account these differences within the American public. 
The findings from this research can serve as guideposts for educators on the optimal choices for reaching and influencing target groups with varied informational needs, values and beliefs.

Roser-Renouf, C.; Maibach, E.; Leiserowitz, A.

2009-12-01

319

Sequence analysis on the information of folding initiation segments in ferredoxin-like fold proteins  

PubMed Central

Background While some studies have shown that the 3D protein structures are more conservative than their amino acid sequences, other experimental studies have shown that even if two proteins share the same topology, they may have different folding pathways. There are many studies investigating this issue with molecular dynamics or Go-like model simulations, however, one should be able to obtain the same information by analyzing the proteins’ amino acid sequences, if the sequences contain all the information about the 3D structures. In this study, we use information about protein sequences to predict the location of their folding segments. We focus on proteins with a ferredoxin-like fold, which has a characteristic topology. Some of these proteins have different folding segments. Results Despite the simplicity of our methods, we are able to correctly determine the experimentally identified folding segments by predicting the location of the compact regions considered to play an important role in structural formation. We also apply our sequence analyses to some homologues of each protein and confirm that there are highly conserved folding segments despite the homologues’ sequence diversity. These homologues have similar folding segments even though the homology of two proteins’ sequences is not so high. Conclusion Our analyses have proven useful for investigating the common or different folding features of the proteins studied.

2014-01-01

320

Stress and strain analysis of contractions during ramp distension in partially obstructed guinea pig jejunal segments  

PubMed Central

Previous studies have demonstrated morphological and biomechanical remodeling in the intestine proximal to an obstruction. The present study aimed to obtain stress and strain thresholds to initiate contraction and the maximal contraction stress and strain in partially obstructed guinea pig jejunal segments. Partial obstruction and sham operations were surgically created in mid-jejunum of male guinea pigs. The animals survived 2, 4, 7, and 14 days, respectively. Animals not being operated on served as normal controls. The segments were used for no-load state, zero-stress state and distension analyses. The segment was inflated to 10 cmH2O pressure in an organ bath containing 37°C Krebs solution and the outer diameter change was monitored. The stress and strain at the contraction threshold and at maximum contraction were computed from the diameter, pressure and the zero-stress state data. Young’s modulus was determined at the contraction threshold. The muscle layer thickness in obstructed intestinal segments increased up to 300%. Compared with sham-obstructed and normal groups, the contraction stress threshold, the maximum contraction stress and the Young’s modulus at the contraction threshold increased whereas the strain threshold and maximum contraction strain decreased after 7 days obstruction (P<0.05 and 0.01). In conclusion, in the partially obstructed intestinal segments, a larger distension force was needed to evoke contraction likely due to tissue remodeling. Higher contraction stresses were produced and the contraction deformation (strain) became smaller.
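The stress computation from pressure and diameter can be illustrated with the thin-walled Laplace relation. This is a deliberate simplification: the study computes stresses and strains with reference to the zero-stress state, which the sketch below reduces to a reference diameter; names and units are illustrative.

```python
def hoop_stress(pressure_kpa, outer_diameter_mm, wall_thickness_mm):
    """Circumferential (hoop) stress in a thin-walled cylindrical
    segment via Laplace's law, sigma = P * r / h (result in kPa)."""
    r = outer_diameter_mm / 2 - wall_thickness_mm  # inner radius, mm
    return pressure_kpa * r / wall_thickness_mm

def circumferential_strain(diameter_mm, reference_diameter_mm):
    """Engineering strain relative to a reference (zero-stress) diameter."""
    return (diameter_mm - reference_diameter_mm) / reference_diameter_mm
```

For scale, the 10 cmH2O distension pressure used in the organ bath is about 0.98 kPa.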

Zhao, Jingbo; Liao, Donghua; Yang, Jian; Gregersen, Hans

2011-01-01

321

Tumor Burden Analysis on Computed Tomography by Automated Liver and Tumor Segmentation  

PubMed Central

The paper presents the automated computation of hepatic tumor burden from abdominal CT images of diseased populations with inconsistent enhancement. The automated segmentation of livers is addressed first. A novel three-dimensional (3D) affine invariant shape parameterization is employed to compare local shape across organs. By generating a regular sampling of the organ's surface, this parameterization can be effectively used to compare features of a set of closed 3D surfaces point-to-point, while avoiding common problems with the parameterization of concave surfaces. From an initial segmentation of the livers, the areas of atypical local shape are determined using training sets. A geodesic active contour corrects locally the segmentations of the livers in abnormal images. Graph cuts segment the hepatic tumors using shape and enhancement constraints. Liver segmentation errors are reduced significantly and all tumors are detected. Finally, support vector machines and feature selection are employed to reduce the number of false tumor detections. A tumor detection true positive fraction of 100% is achieved at 2.3 false positives/case and the tumor burden is estimated with 0.9% error. Results from the test data demonstrate the method's robustness to analyze livers from difficult clinical cases to allow the temporal monitoring of patients with hepatic cancer.

Linguraru, Marius George; Richbourg, William J.; Liu, Jianfei; Watt, Jeremy M.; Pamulapati, Vivek; Wang, Shijun; Summers, Ronald M.

2013-01-01

322

Gas-segmented continuous flow analysis of iron in water with a long liquid waveguide capillary flow cell  

Microsoft Academic Search

A long liquid waveguide capillary flow cell has been successfully adapted to a gas-segmented continuous flow auto-analyzer for trace analysis of iron in water. The flow cell was made of a new material, Teflon AF-2400, which has a refractive index (1.29) lower than that of water (1.33). Total reflection of light can be achieved, provided that the incident angle at each reflection on
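The waveguiding condition depends only on the two refractive indices: light stays trapped in the water core when it strikes the Teflon AF wall beyond the critical angle. A minimal sketch of that calculation (the function name is illustrative):

```python
from math import asin, degrees

def critical_angle_deg(n_core, n_clad):
    """Critical angle for total internal reflection at a core/cladding
    interface; rays hitting the wall at larger incidence angles are
    totally reflected and guided along the capillary."""
    assert n_clad < n_core, "guiding requires n_clad < n_core"
    return degrees(asin(n_clad / n_core))
```

With the water core (1.33) and Teflon AF-2400 cladding (1.29) cited above, the critical angle is roughly 76 degrees.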

Jia-Zhong Zhang; Chris Kelble; Frank J Millero

2001-01-01

323

Blood volume analysis: a new technique and new clinical interest reinvigorate a classic study.  

PubMed

Blood volume studies using the indicator dilution technique and radioactive tracers have been performed in nuclear medicine departments for over 50 y. A nuclear medicine study is the gold standard for blood volume measurement, but the classic dual-isotope blood volume study is time-consuming and can be prone to technical errors. Moreover, a lack of normal values and a rubric for interpretation made volume status measurement of limited interest to most clinicians other than some hematologists. A new semiautomated system for blood volume analysis is now available and provides highly accurate results for blood volume analysis within only 90 min. The availability of rapid, accurate blood volume analysis has brought about a surge of clinical interest in using blood volume data for clinical management. Blood volume analysis, long a low-volume nuclear medicine study all but abandoned in some laboratories, is poised to enter the clinical mainstream. This article will first present the fundamental principles of fluid balance and the clinical means of volume status assessment. We will then review the indicator dilution technique and how it is used in nuclear medicine blood volume studies. We will present an overview of the new semiautomated blood volume analysis technique, showing how the study is done, how it works, what results are provided, and how those results are interpreted. Finally, we will look at some of the emerging areas in which data from blood volume analysis can improve patient care. The reader will gain an understanding of the principles underlying blood volume assessment, know how current nuclear medicine blood volume analysis studies are performed, and appreciate their potential clinical impact. PMID:17496003
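The indicator dilution principle described above reduces to V = injected dose / equilibrium concentration, with serial samples back-extrapolated to injection time. A minimal sketch under stated assumptions (units and function names are illustrative; real protocols add hematocrit corrections and separate red-cell and plasma measurements):

```python
from math import log, exp

def blood_volume_ml(injected_activity_cpm, concentration_cpm_per_ml):
    """Indicator dilution: volume = tracer dose / tracer concentration."""
    return injected_activity_cpm / concentration_cpm_per_ml

def extrapolate_to_t0(times_min, concentrations):
    """Log-linear least-squares back-extrapolation of serial samples
    to t = 0, as in classic dual-isotope protocols."""
    n = len(times_min)
    ys = [log(c) for c in concentrations]
    tbar = sum(times_min) / n
    ybar = sum(ys) / n
    b = sum((t - tbar) * (y - ybar) for t, y in zip(times_min, ys)) / \
        sum((t - tbar) ** 2 for t in times_min)
    a = ybar - b * tbar          # intercept of ln(C) at t = 0
    return exp(a)
```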

Manzone, Timothy A; Dam, Hung Q; Soltis, Daniel; Sagar, Vidya V

2007-06-01

324

Coal gasification systems engineering and analysis. Volume 1: Executive summary  

NASA Technical Reports Server (NTRS)

Feasibility analyses and systems engineering studies for a 20,000 tons per day medium Btu (MBG) coal gasification plant to be built by TVA in Northern Alabama were conducted. Major objectives were as follows: (1) provide design and cost data to support the selection of a gasifier technology and other major plant design parameters, (2) provide design and cost data to support alternate product evaluation, (3) prepare a technology development plan to address areas of high technical risk, and (4) develop schedules, PERT charts, and a work breakdown structure to aid in preliminary project planning. Volume one contains a summary of gasification system characterizations. Five gasification technologies were selected for evaluation: Koppers-Totzek, Texaco, Lurgi Dry Ash, Slagging Lurgi, and Babcock and Wilcox. A summary of the trade studies and cost sensitivity analysis is included.

1980-01-01

325

The molybdate\\/ascorbic acid blue method for the phosphorus determination in very dilute and colored extracts by segmented flow analysis  

Microsoft Academic Search

A segmented flow analysis method for the determination of trace levels of orthophosphate is described. The method was based on the world wide accepted procedure of Murphy and Riley, using ascorbic acid as reductant, allowing the use of the common automated systems with minor adaptations. A segmented flow system was built using a single 700 mm long dialysis path. The

J. Coutinho

1996-01-01

326

Segmentation and analysis of breast cancer pathological images by an adaptive-sized hybrid neural network  

NASA Astrophysics Data System (ADS)

The number of nuclei on a pathology image assists pathologists in consistent diagnosis of breast cancer. Currently, most pathologists make a diagnosis based on a rough estimation of the number of nuclei on pathology images. Because of the rough estimation, the diagnosis is not objective. To assist pathologists to make a consistent, objective and fast diagnosis, it is necessary to develop a computer system to automatically recognize and count several kinds of nuclei. We have developed an algorithm for the automatic segmentation and counting of nuclei in breast cancer pathology images. In the development of the algorithm, we proposed two novel methods: an adaptive-sized hybrid neural network for the automatic segmentation of nuclei, insulin-like growth factor-II messenger RNAs and other structures, and the combined use of both the focused gradient filter and the watersheds algorithm for segmentation of overlapped nuclei.

Hasegawa, Akira; Cullen, Kevin J.; Mun, Seong K.

1996-04-01

327

Concepts and analysis for precision segmented reflector and feed support structures  

NASA Technical Reports Server (NTRS)

Several issues surrounding the design of a large (20-meter diameter) Precision Segmented Reflector are investigated. The concerns include development of a reflector support truss geometry that will permit deployment into the required doubly-curved shape without significant member strains. For deployable and erectable reflector support trusses, the reduction of structural redundancy was analyzed to achieve reduced weight and complexity for the designs. The stiffness and accuracy of such reduced member trusses, however, were found to be affected to a degree that is unexpected. The Precision Segmented Reflector designs were developed with performance requirements that represent the Reflector application. A novel deployable sunshade concept was developed, and a detailed parametric study of various feed support structural concepts was performed. The results of the detailed study reveal what may be the most desirable feed support structure geometry for Precision Segmented Reflector/Large Deployable Reflector applications.

Miller, Richard K.; Thomson, Mark W.; Hedgepeth, John M.

1990-01-01

328

Analysis of Speed Sign Classification Algorithms Using Shape Based Segmentation of Binary Images  

NASA Astrophysics Data System (ADS)

Traffic Sign Recognition is a widely studied problem and its dynamic nature calls for the application of a broad range of preprocessing, segmentation, and recognition techniques but few databases are available for evaluation. We have produced a database consisting of 1,300 images captured by a video camera. On this database we have conducted a systematic experimental study. We used four different preprocessing techniques and designed a generic speed sign segmentation algorithm. Then we selected a range of contemporary speed sign classification algorithms using shape based segmented binary images for training and evaluated their results using four metrics, including accuracy and processing speed. The results indicate that Naive Bayes and Random Forest seem particularly well suited for this recognition task. Moreover, we show that two specific preprocessing techniques appear to provide a better basis for concept learning than the others.
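One of the classifiers the study found well suited, Naive Bayes, is easy to sketch for binary shape-segmented feature vectors. This is a generic Bernoulli Naive Bayes with Laplace smoothing, an illustrative stand-in rather than the evaluated implementation:

```python
import math

def train_bernoulli_nb(X, y, alpha=1.0):
    """Fit class priors and per-feature Bernoulli probabilities
    (Laplace-smoothed) from binary feature vectors X with labels y."""
    model = {}
    for c in sorted(set(y)):
        rows = [x for x, yi in zip(X, y) if yi == c]
        n = len(rows)
        prior = math.log(n / len(X))
        probs = [(sum(r[j] for r in rows) + alpha) / (n + 2 * alpha)
                 for j in range(len(X[0]))]
        model[c] = (prior, probs)
    return model

def predict_nb(model, x):
    """Return the class with the highest posterior log-likelihood."""
    def loglik(c):
        prior, probs = model[c]
        return prior + sum(math.log(p if xi else 1 - p)
                           for xi, p in zip(x, probs))
    return max(model, key=loglik)
```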

Muhammad, Azam Sheikh; Lavesson, Niklas; Davidsson, Paul; Nilsson, Mikael

329

Efficacy of bronchoscopic lung volume reduction: a meta-analysis  

PubMed Central

Background Over the last several years, the morbidity, mortality, and high costs associated with lung volume reduction (LVR) surgery have fuelled the development of different methods for bronchoscopic LVR (BLVR) in patients with emphysema. In this meta-analysis, we sought to study and compare the efficacy of most of these methods. Methods Eligible studies were retrieved from PubMed and Embase for the following BLVR methods: one-way valves, sealants (BioLVR), LVR coils, airway bypass stents, and bronchial thermal vapor ablation. Primary study outcomes included the mean change post-intervention in the lung function tests, the 6-minute walk distance, and the St George’s Respiratory Questionnaire. Secondary outcomes included treatment-related complications. Results Except for the airway bypass stents, all other methods of BLVR showed efficacy in primary outcomes. However, in comparison, the BioLVR method showed the most significant findings and was the least associated with major treatment-related complications. For the BioLVR method, the mean change in forced expiratory volume (in first second) was 0.18 L (95% confidence interval [CI]: 0.09 to 0.26; P<0.001); in 6-minute walk distance was 23.98 m (95% CI: 12.08 to 35.88; P<0.01); and in St George’s Respiratory Questionnaire was -8.88 points (95% CI: -12.12 to -5.64; P<0.001). Conclusion The preliminary findings of our meta-analysis signify the importance of most methods of BLVR. The magnitude of the effect on selected primary outcomes shows noninferiority, if not equivalence, when compared to what is known for surgical LVR.
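The pooled mean changes with 95% CIs reported above are the typical output of inverse-variance weighting. A minimal fixed-effect sketch (the abstract does not state whether a fixed- or random-effects model was used, so this is an assumption; standard errors are recovered from the reported CI widths):

```python
from math import sqrt

def fixed_effect_pool(means, ci_los, ci_his):
    """Inverse-variance fixed-effect pooling of study mean changes;
    each study's SE is recovered as (CI_hi - CI_lo) / (2 * 1.96)."""
    weights, weighted_sum = [], 0.0
    for m, lo, hi in zip(means, ci_los, ci_his):
        se = (hi - lo) / (2 * 1.96)
        w = 1.0 / se ** 2
        weights.append(w)
        weighted_sum += w * m
    wsum = sum(weights)
    pooled = weighted_sum / wsum
    se_pooled = sqrt(1.0 / wsum)
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
```

Pooling more studies narrows the CI: two studies with identical estimates yield the same mean but a CI about 1/sqrt(2) as wide.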

Iftikhar, Imran H; McGuire, Franklin R; Musani, Ali I

2014-01-01

330

Cost-Benefit Analysis for Inland Navigation Improvements. Volume 1.  

National Technical Information Service (NTIS)

Research is directed towards improvement in procedures for the estimation of that portion of inland waterway transportation benefits which contribute to national income. The report is in three volumes. The first volume develops a conceptual framework whic...

J. P. Stucker; L. B. Lave; L. N. Moses; M. V. Beuthe; W. B. Allen

1970-01-01

331

Cost - Benefit Analysis for Inland Navigation Improvements. Volume 2.  

National Technical Information Service (NTIS)

Research is directed towards improvement in procedures for the estimation of that portion of inland waterway transportation benefits which contribute to national income. The report is in three volumes. The second volume deals with the development of deman...

J. P. Stucker; L. B. Lave; L. N. Moses; M. V. Beuthe; W. B. Allen

1970-01-01

332

An algorithm for control volume analysis of cryogenic systems  

NASA Astrophysics Data System (ADS)

This thesis presents an algorithm suitable for numerical analysis of cryogenic refrigeration systems. Preliminary design of a cryogenic system commences with a number of decoupling assumptions with regard to the process variables of heat and work transfer (e.g., work input rate, heat loading rates) and state variables (pinch points, momentum losses). Making preliminary performance estimations minimizes the effect of component interactions, which is inconsistent with the intent of analysis. A more useful design and analysis tool is one in which no restrictions are applied to the system - interactions become absolutely coupled and governed by the equilibrium state variables. Such a model would require consideration of hardware specifications and performance data and information with respect to the thermal environment. Model output would consist of the independent thermodynamic state variables from which process variables and performance parameters may be computed. This model will have a framework compatible with numerical solution on a digital computer so that it may be interfaced with graphic symbology for user interaction. This algorithm approaches cryogenic problems in a highly-coupled state-dependent manner. The framework for this algorithm revolves around the revolutionary thermodynamic solution technique for Computer Aided Thermodynamics (CAT). Fundamental differences exist between the Control Volume (CV) algorithm and CAT, which will be discussed where appropriate.

Stanton, Michael B.

1989-06-01

333

Synfuel program analysis. Volume 1: Procedures-capabilities  

NASA Astrophysics Data System (ADS)

The analytic procedures and capabilities developed by Resource Applications (RA) for examining the economic viability, public costs, and national benefits of alternatives are described. This volume is intended for Department of Energy (DOE) and Synthetic Fuel Corporation (SFC) program management personnel and includes a general description of the costing, venture, and portfolio models with enough detail for the reader to be able to specify cases and interpret outputs. It contains an explicit description (with examples) of the types of results which can be obtained when applied for the analysis of individual projects; the analysis of input uncertainty, i.e., risk; and the analysis of portfolios of such projects, including varying technology mixes and buildup schedules. The objective is to obtain, on the one hand, comparative measures of private investment requirements and expected returns (under differing public policies) as they affect the private decision to proceed, and, on the other, public costs and national benefits as they affect public decisions to participate (in what form, in what areas, and to what extent).

Muddiman, J. B.; Whelan, J. W.

1980-07-01

334

Hospital benefit segmentation.  

PubMed

Market segmentation is an important topic to both health care practitioners and researchers. The authors explore the relative importance that health care consumers attach to various benefits available in a major metropolitan area hospital. The purposes of the study are to test, and provide data to illustrate, the efficacy of one approach to hospital benefit segmentation analysis. PMID:10280370

Finn, D W; Lamb, C W

1986-12-01

335

Segmentation of 2-D and 3-D objects from MRI volume data using constrained elastic deformations of flexible Fourier contour and surface models  

Microsoft Academic Search

This paper describes a new model-based segmentation technique combining desirable properties of physical models (snakes), shape representation by Fourier parametrization, and modelling of natural shape variability. Flexible parametric shape models are represented by a parameter vector describing the mean contour and by a set of eigenmodes of the parameters characterizing the shape variation. Usually the segmentation process is divided into an initial

Gábor Székely; András Kelemen; Christian Brechbühler; Guido Gerig

1996-01-01

336

3-D volume reconstruction of skin lesions for melanin and blood volume estimation and lesion severity analysis.  

PubMed

Subsurface information about skin lesions, such as the blood volume beneath the lesion, is important for the analysis of lesion severity towards early detection of skin cancer such as malignant melanoma. Depth information can be obtained from diffuse reflectance based multispectral transillumination images of the skin. An inverse volume reconstruction method is presented which uses a genetic algorithm optimization procedure with a novel population initialization routine and nudge operator based on the multispectral images to reconstruct the melanin and blood layer volume components. Forward model evaluation for fitness calculation is performed using a parallel processing voxel-based Monte Carlo simulation of light in skin. Reconstruction results for simulated lesions show excellent volume accuracy. Preliminary validation is also done using a set of 14 clinical lesions, categorized into lesion severity by an expert dermatologist. Using two features, the average blood layer thickness and the ratio of blood volume to total lesion volume, the lesions can be classified into mild and moderate/severe classes with 100% accuracy. The method therefore has excellent potential for detection and analysis of pre-malignant lesions. PMID:22829392
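The optimization loop described above (genetic algorithm driving a Monte Carlo forward model) follows the generic GA pattern of selection, crossover, and mutation. The sketch below is that generic skeleton only: the expensive light-transport simulation is replaced by an arbitrary fitness callable on a bit vector, and the paper's population-initialization routine and nudge operator are not reproduced.

```python
import random

def toy_ga(fitness, n_genes, pop_size=30, generations=60, p_mut=0.05, seed=1):
    """Minimal GA: truncation selection, one-point crossover, bit-flip
    mutation.  `fitness` maps a 0/1 gene list to a number to maximize."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the fittest half
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_genes)       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (rng.random() < p_mut) for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Illustrative use: recover a known 16-bit pattern.
target = [1, 0] * 8
best = toy_ga(lambda g: sum(gi == ti for gi, ti in zip(g, target)), 16)
```

In the paper, each fitness evaluation is a voxel-based Monte Carlo simulation of light transport in skin, which is why the forward model is parallelized.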

D'Alessandro, Brian; Dhawan, Atam P

2012-11-01

337

Infants' Early Ability to Segment the Conversational Speech Signal Predicts Later Language Development: A Retrospective Analysis  

ERIC Educational Resources Information Center

Two studies examined relationships between infants' early speech processing performance and later language and cognitive outcomes. Study 1 found that performance on speech segmentation tasks before 12 months of age related to expressive vocabulary at 24 months. However, performance on other tasks was not related to 2-year vocabulary. Study 2…

Newman, Rochelle; Ratner, Nan Bernstein; Jusczyk, Ann Marie; Jusczyk, Peter W.; Dow, Kathy Ayala

2006-01-01

338

Breast segmentation in screening mammograms using multiscale analysis and self-organizing maps  

Microsoft Academic Search

Previously we presented an unsupervised self-organizing map (SOM) for segmentation of the breast region in screening mammograms. This study improves upon our earlier technique by (1) enhancing the detection of the breast region near the skin line, as well as (2) reducing the computational complexity. Contrary to the initial technique, the improved one exploits global image properties extracted at different

H. Erin Rickard; Georgia D. Tourassi; Nevine Eltonsy; Adel S. Elmaghraby

2004-01-01

339

Efficiency of segmented HPGe detectors: design criteria for pulse shape analysis  

Microsoft Academic Search

The problems with reconstructing the trajectory of γ-rays and identifying their interaction points in segmented coaxial HPGe detectors are discussed here in view of their importance in the design of the next generation of detector arrays for γ-ray spectroscopy studies. In fact, the tracking quality of the γ-ray interactions in the medium controls the overall detection efficiency. This paper focuses

O. Wieland; F. Camera; B. Million; A. Bracco; M. Pignanelli; G. Ripamonti; A. Geraci; J. van der Marel

2001-01-01

340

Detection of breast tumor candidates using marker-controlled watershed segmentation and morphological analysis  

Microsoft Academic Search

Computer Aided Diagnosis (CAD) was approved to automate breast cancer detection with mammograms in 1998. But due to the great variability in tumor sizes and shapes, and underlying breast tissue structures, pattern recognition algorithms have a difficult time adapting to different situations. In this paper, a marker-controlled watershed segmentation algorithm was developed to locate breast mass tumor candidates. The approach
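Marker-controlled watershed, as referenced above, can be sketched with SciPy's `watershed_ift`: seed markers flood basins of the (inverted) image so each candidate region grows from its own seed. The synthetic image, seed placement, and labels below are illustrative assumptions, not the paper's pipeline, which adds morphological analysis of the resulting candidates.

```python
import numpy as np
from scipy import ndimage

# synthetic "mammogram" patch: two bright blobs on a dark background
y, x = np.mgrid[0:64, 0:64]
img = (np.exp(-((y - 20) ** 2 + (x - 20) ** 2) / 60)
       + np.exp(-((y - 44) ** 2 + (x - 44) ** 2) / 60))

# watershed_ift needs uint8/uint16 input; invert so blobs become basins
inv = ((1 - img / img.max()) * 255).astype(np.uint8)

# markers: one seed inside each candidate blob, plus a background seed
markers = np.zeros(inv.shape, dtype=np.int32)
markers[20, 20] = 1
markers[44, 44] = 2
markers[0, 0] = 3        # background marker

labels = ndimage.watershed_ift(inv, markers)   # each pixel assigned to a seed
```

Each tumor candidate is then the connected region carrying its seed's label, ready for shape-based filtering.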

Samual H. Lewis; Aijuan Dong

2012-01-01

341

A novel color image segmentation method and its application to white blood cell image analysis  

Microsoft Academic Search

Because the H component of the HSI color space contains most of the white blood cell information, and the S component contains the structural information of the white blood cell nucleus, we develop an iterative Otsu approach based on a circular histogram for leukocyte segmentation, taking full advantage of this knowledge. Experimental results show that this

Jianhua WU; Pingping ZENG; Yuan ZHOU; Christian OLIVIER

2006-01-01

342

Cost Variance Analysis of Certification Cost Estimate for Alaska Segment of ANGTS.  

National Technical Information Service (NTIS)

This paper reports on the variance between two cost estimates of the Alaska Segment of the Alaska Natural Gas Transportation System (ANGTS). The first estimate was filed with the Federal Power Commission (FPC) on March 1, 1977 to obtain a conditional cert...

J. D. McCullough

1981-01-01

343

SEQUENCE ANALYSIS OF THE SMALL RNA SEGMENT OF GUINEA PIG-PASSAGED PICHINDE VIRUS VARIANTS  

Microsoft Academic Search

The established animal model for Lassa fever is based on the New World arenavirus Pichinde (PIC). Natural isolates of PIC virus are attenuated in guinea pigs, but serial guinea pig passage renders them extremely virulent in that host. We have compared the nucleotide sequences of the small RNA segments of two attenuated, low-passage variants of the PIC virus Munchique

Lihong Zhang; Kathleen Marriott; Judith F. Aronson

1999-01-01

344

Analysis of the segmentation of the conjugate passive margins of Australia and Antarctica  

Microsoft Academic Search

We use gravity anomalies along the conjugate passive margins of Australia and Antarctica to investigate the segmentation of these margins and the signature of the Australia-Antarctica Discordance (AAD). We estimated the residual isostatic anomaly (RIA) along both margins to characterize the longitudinal variations of the margin crustal structure at wavelengths of 200 to 500 km. The RIA was calculated by

R. Lataste; A. Briais; J. Lin

2003-01-01

345

Theoretical Analysis of Segmented Wolter/LSM X-Ray Telescope Systems.  

National Technical Information Service (NTIS)

The Segmented Wolter I/LSM X-ray Telescope, which consists of a Wolter I Telescope with a tilted, off-axis convex spherical Layered Synthetic Microstructure (LSM) optics placed near the primary focus to accommodate multiple off-axis detectors, has been an...

D. L. Shealy; S. H. Chao

1986-01-01

346

Finite difference based vibration simulation analysis of a segmented distributed piezoelectric structronic plate system  

Microsoft Academic Search

Electrical modeling of piezoelectric structronic systems by analog circuits has the disadvantages of huge circuit structure and low precision. However, studies of electrical simulation of segmented distributed piezoelectric structronic plate systems (PSPSs) by using output voltage signals of high-speed digital circuits to evaluate the real-time dynamic displacements are scarce in the literature. Therefore, an equivalent dynamic model based on the

B Y Ren; L Wang; H S Tzou; H H Yue

2010-01-01

347

National Evaluation of Family Support Programs. Final Report Volume A: The Meta-Analysis.  

ERIC Educational Resources Information Center

This volume is part of the final report of the National Evaluation of Family Support Programs and details findings from a meta-analysis of extant research on programs providing family support services. Chapter A1 of this volume provides a rationale for using meta-analysis. Chapter A2 describes the steps of preparation for the meta-analysis.…

Layzer, Jean I.; Goodson, Barbara D.; Bernstein, Lawrence; Price, Cristofer

348

Comparative analysis of operational forecasts versus actual weather conditions in airline flight planning, volume 3  

NASA Technical Reports Server (NTRS)

The impact of more timely and accurate weather data on airline flight planning with the emphasis on fuel savings is studied. This volume of the report discusses the results of Task 3 of the four major tasks included in the study. Task 3 compares flight plans developed on the Suitland forecast with actual data observed by the aircraft (and averaged over 10 degree segments). The results show that the average difference between the forecast and observed wind speed is 9 kts. without considering direction, and the average difference in the component of the forecast wind parallel to the direction of the observed wind is 13 kts. - both indicating that the Suitland forecast underestimates the wind speeds. The Root Mean Square (RMS) vector error is 30.1 kts. The average absolute difference in direction between the forecast and observed wind is 26 degrees and the temperature difference is 3 degrees Celsius. These results indicate that the forecast model as well as the verifying analysis used to develop comparison flight plans in Tasks 1 and 2 is a limiting factor and that the average potential fuel savings or penalty is up to 3.6 percent depending on the direction of flight.
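The RMS vector error quoted above combines speed and direction differences into a single number. A sketch of that computation, assuming the usual u/v wind decomposition (the report's exact convention is not given):

```python
import numpy as np

def rms_vector_error(spd_f, dir_f, spd_o, dir_o):
    """RMS magnitude of the vector difference between forecast and observed
    winds. Speeds in knots, directions in degrees."""
    spd_f, dir_f = np.asarray(spd_f, float), np.asarray(dir_f, float)
    spd_o, dir_o = np.asarray(spd_o, float), np.asarray(dir_o, float)
    # decompose each wind into orthogonal components
    uf, vf = spd_f * np.sin(np.radians(dir_f)), spd_f * np.cos(np.radians(dir_f))
    uo, vo = spd_o * np.sin(np.radians(dir_o)), spd_o * np.cos(np.radians(dir_o))
    return float(np.sqrt(np.mean((uf - uo) ** 2 + (vf - vo) ** 2)))
```

For a single sample with a 10 kt forecast at 0° against a 10 kt observation at 90°, the vector difference has magnitude √200 ≈ 14.1 kt even though the speed error alone is zero, which is why the 30.1 kt RMS vector error exceeds the 9 kt mean speed error.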

Keitz, J. F.

1982-01-01

349

Phylogenetic analysis of the S segment from Juquitiba hantavirus: identification of two distinct lineages in Oligoryzomys nigripes.  

PubMed

The purpose of this study was to investigate the phylogenetic relationship of the Juquitiba virus (JUQV) carried by Oligoryzomys nigripes in endemic and non-endemic areas of Brazil. Wild rodents infected with the Juquitiba virus (JUQV) were sampled from a non-Hantavirus Cardiopulmonary Syndrome endemic area in Brazil. Three strains from O. nigripes were identified by the sequencing of the complete S segment and compared to previous studies of JUQV available in GenBank. The phylogenetic analysis of the complete S segment revealed two distinct clades; the first clade was composed of the JUQV from two non-endemic areas in Brazil and the second clade contained JUQV strains from Argentina, Paraguay and other Brazilian endemic areas. PMID:23751399

Guterres, Alexandro; de Oliveira, Renata Carvalho; Fernandes, Jorlan; D'Andrea, Paulo Sérgio; Bonvicino, Cibele R; Bragagnolo, Camila; Guimarães, Gustavo Ducoff; Almada, Gilton Luiz; Machado, Rosangela Rosa; Lavocat, Marília; Elkhoury, Mauro da Rosa; Schrago, Carlos Guerra; de Lemos, Elba Regina Sampaio

2013-08-01

350

Value and limitations of segmental analysis of stress thallium myocardial imaging for localization of coronary artery disease  

SciTech Connect

This study was done to determine the value of thallium-201 myocardial scintigraphic imaging (MSI) for identifying disease in the individual coronary arteries. Segmental analysis of rest and stress MSI was performed in 133 patients with arteriographically proved coronary artery disease (CAD). Certain scintigraphic segments were highly specific (97 to 100%) for the three major coronary arteries: anterior wall and septum for the left anterior descending (LAD) coronary artery; the inferior wall for the right coronary artery (RCA); and the proximal lateral wall for the circumflex (LCX) artery. Perfusion defects located in the anterolateral wall in the anterior view were highly specific for proximal disease in the LAD involving the major diagonal branches, but this was not true for septal defects. The apical segments were not specific for any of the three major vessels. Although MSI was abnormal in 89% of these patients with CAD, it was less sensitive for identifying individual vessel disease: 63% for LAD, 50% for RCA, and 21% for LCX disease (narrowings ≥ 50%). Sensitivity increased with the severity of stenosis, but even for 100% occlusions was only 87% for LAD, 58% for RCA and 38% for LCX. Sensitivity diminished as the number of vessels involved increased: with single-vessel disease, 80% of LAD, 54% of RCA and 33% of LCX lesions were detected, but in patients with triple-vessel disease, only 50% of LAD, 50% of RCA and 16% of LCX lesions were identified. Thus, although segmental analysis of MSI can identify disease in the individual coronary arteries with high specificity, only moderate sensitivity is achieved, reflecting the tendency of MSI to identify only the most severely ischemic area among several that may be present in a heart. Perfusion scintigrams display relative distributions rather than absolute values for myocardial blood flow.

Rigo, P.; Bailey, I.K.; Griffith, L.S.C.; Pitt, B.; Borow, R.D.; Wagner, H.N.; Becker, L.C.

1980-05-01

351

Consumer Reactions to Four Prototype Patient Package Inserts for Erythromycin: A Focus Group Analysis. Volume I: Summary Analysis. Volume II: Edited Transcriptions.  

National Technical Information Service (NTIS)

Report covers focus group discussions conducted by the Uniworld Group for FDA regarding four versions of erythromycin PPIs. Volume I contains a content analysis of the four different PPI versions and major observations made by the focus group participants...

L. A. Morris; S. Morris; D. Thilman; J. Guerin

1979-01-01

352

Automatic moment segmentation and peak detection analysis of heart sound pattern via short-time modified Hilbert transform.  

PubMed

This paper proposes a novel automatic method for the moment segmentation and peak detection analysis of heart sound (HS) pattern, with special attention to the characteristics of the envelopes of HS and considering the properties of the Hilbert transform (HT). The moment segmentation and peak location are accomplished in two steps. First, by applying the Viola integral waveform method in the time domain, the envelope (E(T)) of the HS signal is obtained with an emphasis on the first heart sound (S1) and the second heart sound (S2). Then, based on the characteristics of E(T) and the properties of the HT of convex and concave functions, a novel method, the short-time modified Hilbert transform (STMHT), is proposed to automatically locate the moment segmentation and peak points for the HS by the zero crossing points of the STMHT. A fast algorithm for calculating the STMHT of E(T) can be expressed by multiplying E(T) by an equivalent window (W(E)). According to the range of heart beats and based on the numerical experiments and the important parameters of the STMHT, a moving window width of N = 1 s is validated for locating the moment segmentation and peak points for HS. The proposed moment segmentation and peak location procedure is validated by sounds from the Michigan HS database and sounds from clinical heart diseases, such as ventricular septal defect (VSD), atrial septal defect (ASD), tetralogy of Fallot (TOF), rheumatic heart disease (RHD), and so on. As a result, for the sounds where S2 can be separated from S1, the average accuracies achieved for the peak of S1 (AP1), the peak of S2 (AP2), the moment segmentation points from S1 to S2 (AT12) and the cardiac cycle (ACC) are 98.53%, 98.31%, 98.36% and 97.37%, respectively. For the sounds where S1 cannot be separated from S2, the average accuracies achieved for the peak of S1 and S2 (AP12) and the cardiac cycle (ACC) are 100% and 96.69%. PMID:24657095
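The STMHT itself is defined only in the paper; as a rough stand-in for the same idea (peaks found as zero crossings of a transformed envelope), the sketch below locates S1/S2 peaks at downward zero crossings of a smoothed Hilbert-envelope derivative, with a moving average in place of the Viola integral. The synthetic two-burst signal and all parameters are invented.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000                                    # sampling rate (Hz), invented
t = np.arange(0, 1.0, 1 / fs)
# synthetic "heart sound": two Gaussian bursts standing in for S1 and S2
s1 = np.exp(-((t - 0.2) ** 2) / (2 * 0.01 ** 2))
s2 = 0.7 * np.exp(-((t - 0.6) ** 2) / (2 * 0.01 ** 2))
sig = (s1 + s2) * np.sin(2 * np.pi * 100 * t)

envelope = np.abs(hilbert(sig))              # analytic-signal envelope E(T)
w = 25                                       # 25 ms moving average: Viola-integral stand-in
env_s = np.convolve(envelope, np.ones(w) / w, mode='same')

# peaks = downward zero crossings of the envelope's derivative, analogous to
# locating peaks at zero crossings of the STMHT
d = np.diff(env_s)
peaks = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
peaks = peaks[env_s[peaks] > 0.1 * env_s.max()]   # drop low-level ripples

merged = [peaks[0]]                          # merge detections closer than 50 ms
for p in peaks[1:]:
    if p - merged[-1] > 50:
        merged.append(p)
peak_times = t[np.array(merged)]
```

The two surviving detections land on the S1 and S2 bursts; the moment segmentation points between them would follow from the same zero-crossing machinery applied to the trough region.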

Sun, Shuping; Jiang, Zhongwei; Wang, Haibin; Fang, Yu

2014-05-01

353

Modeling and analysis of passive dynamic bipedal walking with segmented feet and compliant joints  

NASA Astrophysics Data System (ADS)

Passive dynamic walking has been developed as a possible explanation for the efficiency of the human gait. This paper presents a passive dynamic walking model with segmented feet, which makes the bipedal walking gait closer to a natural human-like gait. The proposed model extends the simplest walking model with the addition of flat feet and torsional-spring-based compliance at the ankle and toe joints, to achieve stable walking on a slope driven by gravity. The push-off phase includes foot rotations around the toe joint and around the toe tip, which shows a great resemblance to normal human walking. This paper investigates the effects of the segmented foot structure on bipedal walking in simulations. The model achieves satisfactory walking results on even or uneven slopes.

Huang, Yan; Wang, Qi-Ning; Gao, Yue; Xie, Guang-Ming

2012-10-01

354

Shape-Constrained Segmentation Approach for Arctic Multiyear Sea Ice Floe Analysis  

NASA Technical Reports Server (NTRS)

The melting of sea ice is correlated to increases in sea surface temperature and associated climatic changes. Therefore, it is important to investigate how rapidly sea ice floes melt. For this purpose, a new Tempo Seg method for multi-temporal segmentation of multi-year ice floes is proposed. The microwave radiometer is used to track the position of an ice floe. Then, a time series of MODIS images is created with the ice floe in the image center. The Tempo Seg method is performed to segment these images into two regions: Floe and Background. First, morphological feature extraction is applied. Then, the central image pixel is marked as Floe, and shape-constrained best-merge region growing is performed. The resulting two-region map is post-filtered by applying morphological operators. We have successfully tested our method on a set of MODIS images and estimated the area of a sea ice floe as a function of time.

Tarabalka, Yuliya; Brucker, Ludovic; Ivanoff, Alvaro; Tilton, James C.

2013-01-01

355

Pulse shape analysis for segmented germanium detectors implemented in graphics processing units  

NASA Astrophysics Data System (ADS)

Position sensitive highly segmented germanium detectors constitute the state-of-the-art of the technology employed for γ-spectroscopy studies. The operation of large spectrometers composed of tens to hundreds of such detectors demands enormous amounts of computing power for the digital treatment of the signals. The use of Graphics Processing Units (GPUs) has been evaluated as a cost-effective solution to meet such requirements. Different implementations and the hardware constraints limiting the performance of the system are examined.

Calore, Enrico; Bazzacco, Dino; Recchia, Francesco

2013-08-01

356

Sensitivity field distributions for segmental bioelectrical impedance analysis based on real human anatomy  

NASA Astrophysics Data System (ADS)

In this work, an adaptive unstructured tetrahedral mesh generation technology is applied for simulation of segmental bioimpedance measurements using high-resolution whole-body model of the Visible Human Project man. Sensitivity field distributions for a conventional tetrapolar, as well as eight- and ten-electrode measurement configurations are obtained. Based on the ten-electrode configuration, we suggest an algorithm for monitoring changes in the upper lung area.

Danilov, A. A.; Kramarenko, V. K.; Nikolaev, D. V.; Rudnev, S. G.; Salamatova, V. Yu; Smirnov, A. V.; Vassilevski, Yu V.

2013-04-01

357

A faster circular binary segmentation algorithm for the analysis of array CGH data  

Microsoft Academic Search

Motivation: Array CGH technologies enable the simultaneous measurement of DNA copy number for thousands of sites on a genome. We developed the circular binary segmentation (CBS) algorithm to divide the genome into regions of equal copy number. The algorithm tests for change-points using a maximal t-statistic with a permutation reference distribution to obtain the correspond- ing P-value. The number of
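The change-point test the abstract describes (a maximal t-statistic with a permutation reference distribution) can be sketched for the plain binary, non-circular case; CBS proper also considers circular splits with two change points. The copy-number-like data and permutation count below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_t_stat(x):
    """Largest two-sample t-statistic over all binary splits of x."""
    n = len(x)
    best = 0.0
    for i in range(2, n - 1):
        a, b = x[:i], x[i:]
        num = a.mean() - b.mean()
        den = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        if den > 0:
            best = max(best, abs(num) / den)
    return best

def permutation_pvalue(x, n_perm=200):
    """P-value of the observed max t-statistic under a permutation null."""
    obs = max_t_stat(x)
    count = sum(max_t_stat(rng.permutation(x)) >= obs for _ in range(n_perm))
    return (count + 1) / (n_perm + 1)

# simulated log-ratio data with a clear copy-number change at index 50
x = np.concatenate([rng.normal(0.0, 0.3, 50), rng.normal(1.0, 0.3, 50)])
p = permutation_pvalue(x)
```

When the p-value is significant, the segment is split at the maximizing index and the test recurses on each half, which is how the genome ends up divided into equal-copy-number regions.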

E. S. Venkatraman; Adam B. Olshen

2007-01-01

358

Congenital Aortic Disease: 4D Magnetic Resonance Segmentation and Quantitative Analysis  

PubMed Central

Automated and accurate segmentation of the aorta in 4D (3D+time) cardiovascular magnetic resonance (MR) image data is important for early detection of congenital aortic disease leading to aortic aneurysms and dissections. A computer-aided diagnosis method is reported that allows one to objectively identify subjects with connective tissue disorders from sixteen-phase 4D aortic MR images. Starting with a step of multi-view image registration, our automated segmentation method combines level-set and optimal surface segmentation algorithms in a single optimization process so that the final aortic surfaces in all 16 cardiac phases are determined. The resulting aortic lumen surface is registered with an aortic model followed by calculation of modal indices of aortic shape and motion. The modal indices reflect the differences of any individual aortic shape and motion from an average aortic behavior. A Support Vector Machine (SVM) classifier is used for the discrimination between normal and connective tissue disorder subjects. 4D MR image data sets acquired from 104 normal and connective tissue disorder subjects were used for development and performance evaluation of our method. The automated 4D segmentation resulted in accurate aortic surfaces in all 16 cardiac phases, covering the aorta from the aortic annulus to the diaphragm, yielding subvoxel accuracy with signed surface positioning errors of −0.07 ± 1.16 voxel (−0.10 ± 2.05 mm). The computer aided diagnosis method distinguished between normal and connective tissue disorder subjects with a classification correctness of 90.4%.
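The "modal indices" above are projections of an individual aortic shape onto population modes of variation. A generic PCA-style sketch on toy flattened shape vectors (the registration and SVM classification steps are omitted, and the data are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# toy "aortic shapes": each row is a flattened vector of surface samples
mean_shape = np.linspace(0, 1, 30)
shapes = mean_shape + rng.normal(0, 0.05, size=(20, 30))

# modal decomposition: principal components of the centered shape population
X = shapes - shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
modes = Vt                # rows = modes of shape variation
indices = X @ modes.T     # modal indices: coordinates of each shape along each mode
```

Each row of `indices` summarizes how one subject's shape deviates from the average along every mode, which is the kind of low-dimensional feature vector a classifier can then separate into normal vs. disorder groups.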

Zhao, Fei; Zhang, Honghai; Wahle, Andreas; Thomas, Matthew T.; Stolpen, Alan H.; Scholz, Thomas D.; Sonka, Milan

2009-01-01

359

A classification-driven partially occluded object segmentation (CPOOS) method with application to chromosome analysis  

Microsoft Academic Search

Classification of segment images created by connecting points of high concavity along curvatures is used to resolve partial occlusion in images. Modeling of shape or curvature is not necessary nor is the traditional excessive use of heuristics. Applied to human cell images, 82.6% of the analyzed clusters of chromosomes are correctly separated, rising to 90.5% following rejection of 8.7% of

Boaz Lerner; Hugo Guterman; I. Dinstein

1998-01-01

360

Global Segmentation and Curvature Analysis of Volumetric Data Sets Using Trivariate B-Spline Functions  

Microsoft Academic Search

This paper presents a method to globally segment volumetric images into regions that contain convex or concave (elliptic) iso-surfaces, planar or cylindrical (parabolic) iso-surfaces, and volumetric regions with saddle-like (hyperbolic) iso-surfaces, regardless of the value of the iso-surface level. The proposed scheme relies on a novel approach to globally compute, bound, and analyze the Gaussian and mean curvatures of
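The elliptic/parabolic/hyperbolic classification above follows the sign of the Gaussian curvature. For intuition, a finite-difference sketch on explicit surfaces z = f(x, y) (the paper instead bounds curvature analytically on trivariate B-spline volumes):

```python
import numpy as np

def gaussian_curvature(f, x, y, h=1e-4):
    """Gaussian curvature of the Monge patch z = f(x, y), via central differences:
    K = (f_xx f_yy - f_xy^2) / (1 + f_x^2 + f_y^2)^2."""
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h ** 2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h ** 2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h ** 2)
    return (fxx * fyy - fxy ** 2) / (1 + fx ** 2 + fy ** 2) ** 2

bowl = lambda x, y: x ** 2 + y ** 2      # elliptic point at origin: K > 0
saddle = lambda x, y: x ** 2 - y ** 2    # hyperbolic point at origin: K < 0
```

K > 0 marks elliptic (convex/concave) regions, K < 0 hyperbolic (saddle-like) regions, and K ≈ 0 parabolic (planar/cylindrical) ones, which is exactly the region typing the segmentation produces.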

Octavian Soldea; Gershon Elber; Ehud Rivlin

2006-01-01

361

Segmental hair analysis using liquid chromatography–tandem mass spectrometry after a single dose of benzodiazepines  

Microsoft Academic Search

In China, benzodiazepines are the most frequently observed compounds in cases of drug-facilitated crime. Sensitive, specific, and reproducible methods for the quantitative determination of 18 benzodiazepines in hair have been developed using LC–MS/MS. Fourteen volunteers had ingested a single 1–6 mg estazolam tablet. Hair was collected 1 month after administration. All the proximal segments were positive for estazolam. With increased dosage,

Ping Xiang; Qiran Sun; Baohua Shen; Peng Chen; Wei Liu; Min Shen

2011-01-01

362

Who Will More Likely Buy PHEV: A Detailed Market Segmentation Analysis  

Microsoft Academic Search

Understanding the diverse PHEV purchase behaviors among prospective new car buyers is key for designing efficient and effective policies for promoting new energy vehicle technologies. The ORNL MA3T model developed for the U.S. Department of Energy is described and used to project PHEV purchase probabilities by different consumers. MA3T disaggregates the U.S. household vehicle market into 1458 consumer segments based

Zhenhong Lin; David L Greene

2010-01-01

363

Yucca Mountain transportation routes: Preliminary characterization and risk analysis; Volume 2, Figures [and] Volume 3, Technical Appendices  

SciTech Connect

This report presents appendices related to the preliminary assessment and risk analysis for high-level radioactive waste transportation routes to the proposed Yucca Mountain Project repository. Information includes data on population density, traffic volume, ecologically sensitive areas, and accident history.

Souleyrette, R.R. II; Sathisan, S.K.; di Bartolo, R. [Nevada Univ., Las Vegas, NV (United States). Transportation Research Center]

1991-05-31

364

Mutational analysis of segmental stabilization of transcripts from the Zymomonas mobilis gap-pgk operon.  

PubMed Central

In Zymomonas mobilis, the genes encoding glyceraldehyde-3-phosphate dehydrogenase and phosphoglycerate kinase are transcribed together from the gap-pgk operon. However, higher levels of the former enzyme are present in the cytoplasm because of increased stability of a 5' segment containing the gap coding region. This segment is bounded by an upstream untranslated region which can be folded into many stem-loop structures and a prominent intercistronic stem-loop. Mutations eliminating a proposed stem-loop in the untranslated region or the intercistronic stem-loop resulted in a decrease in the stability and pool size of the 5' gap segment. Site-specific mutations in the unpaired regions of both of these stems also altered the message pools. Elimination of the intercistronic stem appeared to reduce the endonucleolytic cleavage within the pgk coding region, increasing the stability and abundance of the full-length message. DNA encoding the prominent stem-loop at the 3' end of the message was shown to be a transcriptional terminator both in Z. mobilis and in Escherichia coli. This third stem-loop region (part of the transcriptional terminator) was required to stabilize the full-length gap-pgk message.

Burchhardt, G; Keshav, K F; Yomano, L; Ingram, L O

1993-01-01

365

Incorporation of learned shape priors into a graph-theoretic approach with application to the 3D segmentation of intraretinal surfaces in SD-OCT volumes of mice  

NASA Astrophysics Data System (ADS)

Spectral-domain optical coherence tomography (SD-OCT) finds widespread use clinically for the detection and management of ocular diseases. This non-invasive imaging modality has also begun to find frequent use in research studies involving animals such as mice. Numerous approaches have been proposed for the segmentation of retinal surfaces in SD-OCT images obtained from human subjects; however, the segmentation of retinal surfaces in mice scans is not as well-studied. In this work, we describe a graph-theoretic segmentation approach for the simultaneous segmentation of 10 retinal surfaces in SD-OCT scans of mice that incorporates learned shape priors. We compared the method to a baseline approach that did not incorporate learned shape priors and observed that the overall unsigned border position errors reduced from 3.58 ± 1.33 μm to 3.20 ± 0.56 μm.
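Graph-theoretic surface segmentation with a smoothness constraint can be illustrated in 2D by dynamic programming: find the minimum-cost row in each image column subject to a bounded row change between neighbouring columns (a hard constraint standing in for the learned shape priors). The cost image below is a synthetic stand-in for an OCT B-scan.

```python
import numpy as np

def trace_surface(cost):
    """Minimum-cost path across columns with |row change| <= 1 between
    neighbouring columns (a hard smoothness constraint)."""
    rows, cols = cost.shape
    acc = cost.copy()                       # accumulated cost
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - 1), min(rows, r + 2)
            prev = acc[lo:hi, c - 1]
            j = int(np.argmin(prev))
            acc[r, c] += prev[j]
            back[r, c] = lo + j
    r = int(np.argmin(acc[:, -1]))          # best endpoint, then backtrack
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = back[r, c]
        path.append(r)
    return path[::-1]

# synthetic B-scan: a zero-cost boundary near row 10 with a small step
cost = np.ones((20, 30))
for c in range(30):
    cost[10 + (c // 15), c] = 0.0
surface = trace_surface(cost)
```

The actual method solves the 3D, multi-surface version as a single graph optimization, with the allowed row changes shaped by priors learned from training scans rather than a fixed ±1 bound.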

Antony, Bhavna J.; Song, Qi; Abràmoff, Michael D.; Sohn, Eliott; Wu, Xiaodong; Garvin, Mona K.

2014-03-01

366

Multimodal Retinal Vessel Segmentation from Spectral-Domain Optical Coherence Tomography and Fundus Photography  

PubMed Central

Segmenting retinal vessels in optic nerve head (ONH) centered spectral-domain optical coherence tomography (SD-OCT) volumes is particularly challenging due to the projected neural canal opening (NCO) and relatively low visibility in the ONH center. Color fundus photographs provide a relatively high vessel contrast in the region inside the NCO, but have not been previously used to aid the SD-OCT vessel segmentation process. Thus, in this paper, we present two approaches for the segmentation of retinal vessels in SD-OCT volumes that each take advantage of complementary information from fundus photographs. In the first approach (referred to as the registered-fundus vessel segmentation approach), vessels are first segmented on the fundus photograph directly (using a k-NN pixel classifier) and this vessel segmentation result is mapped to the SD-OCT volume through the registration of the fundus photograph to the SD-OCT volume. In the second approach (referred to as the multimodal vessel segmentation approach), after fundus-to-SD-OCT registration, vessels are simultaneously segmented with a k-NN classifier using features from both modalities. Three-dimensional structural information from the intraretinal layers and neural canal opening obtained through graph-theoretic segmentation approaches of the SD-OCT volume are used in combination with Gaussian filter banks and Gabor wavelets to generate the features. The approach is trained on 15 and tested on 19 randomly chosen independent image pairs of SD-OCT volumes and fundus images from 34 subjects with glaucoma. Based on a receiver operating characteristic (ROC) curve analysis, the present registered-fundus and multimodal vessel segmentation approaches [area under the curve (AUC) of 0.85 and 0.89, respectively] both perform significantly better than the two previous OCT-based approaches (AUC of 0.78 and 0.83, p < 0.05). The multimodal approach overall performs significantly better than the other three approaches (p < 0.05).
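The k-NN pixel classifier used in both approaches can be sketched with a tiny NumPy implementation; the two-dimensional feature vectors below (one "fundus" feature, one "OCT" feature per pixel) and their cluster parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy per-pixel training features: [fundus contrast, OCT-derived feature]
vessel = rng.normal([0.8, 0.6], 0.1, size=(100, 2))      # labeled vessel pixels
background = rng.normal([0.2, 0.3], 0.1, size=(100, 2))  # labeled background
X_train = np.vstack([vessel, background])
y_train = np.array([1] * 100 + [0] * 100)

def knn_predict(X, y, queries, k=5):
    """Label each query pixel by majority vote of its k nearest training pixels."""
    out = []
    for q in queries:
        idx = np.argsort(((X - q) ** 2).sum(axis=1))[:k]  # k nearest in feature space
        out.append(int(y[idx].sum() * 2 > k))             # majority vote
    return np.array(out)

pred = knn_predict(X_train, y_train, np.array([[0.75, 0.55], [0.25, 0.35]]))
```

In the multimodal approach the feature vectors are much longer (Gaussian filter bank, Gabor wavelet, and layer-derived responses from both modalities), but the classification rule is this same nearest-neighbour vote.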

Hu, Zhihong; Niemeijer, Meindert; Abramoff, Michael D.; Garvin, Mona K.

2014-01-01

367

Multimodal retinal vessel segmentation from spectral-domain optical coherence tomography and fundus photography.  

PubMed

Segmenting retinal vessels in optic nerve head (ONH) centered spectral-domain optical coherence tomography (SD-OCT) volumes is particularly challenging due to the projected neural canal opening (NCO) and relatively low visibility in the ONH center. Color fundus photographs provide a relatively high vessel contrast in the region inside the NCO, but have not been previously used to aid the SD-OCT vessel segmentation process. Thus, in this paper, we present two approaches for the segmentation of retinal vessels in SD-OCT volumes that each take advantage of complementary information from fundus photographs. In the first approach (referred to as the registered-fundus vessel segmentation approach), vessels are first segmented on the fundus photograph directly (using a k-NN pixel classifier) and this vessel segmentation result is mapped to the SD-OCT volume through the registration of the fundus photograph to the SD-OCT volume. In the second approach (referred to as the multimodal vessel segmentation approach), after fundus-to-SD-OCT registration, vessels are simultaneously segmented with a k-NN classifier using features from both modalities. Three-dimensional structural information from the intraretinal layers and neural canal opening obtained through graph-theoretic segmentation approaches of the SD-OCT volume are used in combination with Gaussian filter banks and Gabor wavelets to generate the features. The approach is trained on 15 and tested on 19 randomly chosen independent image pairs of SD-OCT volumes and fundus images from 34 subjects with glaucoma. Based on a receiver operating characteristic (ROC) curve analysis, the present registered-fundus and multimodal vessel segmentation approaches [area under the curve (AUC) of 0.85 and 0.89, respectively] both perform significantly better than the two previous OCT-based approaches (AUC of 0.78 and 0.83, p < 0.05). The multimodal approach overall performs significantly better than the other three approaches (p < 0.05).
PMID:22759443

Hu, Zhihong; Niemeijer, Meindert; Abràmoff, Michael D; Garvin, Mona K

2012-10-01

368

Probabilistic analysis of activation volumes generated during deep brain stimulation.  

PubMed

Deep brain stimulation (DBS) is an established therapy for the treatment of Parkinson's disease (PD) and shows great promise for the treatment of several other disorders. However, while the clinical analysis of DBS has received great attention, a relative paucity of quantitative techniques exists to define the optimal surgical target and most effective stimulation protocol for a given disorder. In this study we describe a methodology that represents an evolutionary addition to the concept of a probabilistic brain atlas, which we call a probabilistic stimulation atlas (PSA). We outline steps to combine quantitative clinical outcome measures with advanced computational models of DBS to identify regions where stimulation-induced activation could provide the best therapeutic improvement on a per-symptom basis. While this methodology is relevant to any form of DBS, we present example results from subthalamic nucleus (STN) DBS for PD. We constructed patient-specific computer models of the volume of tissue activated (VTA) for 163 different stimulation parameter settings which were tested in six patients. We then assigned clinical outcome scores to each VTA and compiled all of the VTAs into a PSA to identify stimulation-induced activation targets that maximized therapeutic response with minimal side effects. The results suggest that selection of both electrode placement and clinical stimulation parameter settings could be tailored to the patient's primary symptoms using patient-specific models and PSAs. PMID:20974269
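Compiling VTAs into a probabilistic stimulation atlas amounts to averaging per-setting clinical outcome scores over the voxels each VTA covers. A toy sketch on a small voxel grid (masks, scores, and grid size are all invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# toy VTAs on a small voxel grid: boolean masks, each with an outcome score
grid = (10, 10, 10)
vtas, scores = [], []
for _ in range(5):
    mask = np.zeros(grid, dtype=bool)
    c = rng.integers(3, 7, size=3)                     # random VTA center
    mask[c[0]-2:c[0]+2, c[1]-2:c[1]+2, c[2]-2:c[2]+2] = True
    vtas.append(mask)
    scores.append(float(rng.uniform(0, 1)))            # e.g. fractional improvement

# probabilistic stimulation atlas: mean outcome over all VTAs covering each voxel
score_sum = np.zeros(grid)
count = np.zeros(grid)
for mask, s in zip(vtas, scores):
    score_sum[mask] += s
    count[mask] += 1
psa = np.where(count > 0, score_sum / np.maximum(count, 1), np.nan)
```

Voxels with high mean scores (and adequate coverage) mark where stimulation-induced activation coincided with good therapeutic response; the study builds such maps per symptom from 163 tested parameter settings.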

Butson, Christopher R; Cooper, Scott E; Henderson, Jaimie M; Wolgamuth, Barbara; McIntyre, Cameron C

2011-02-01

369

Reusable Subsystems Design/Analysis Study. Volume 3. Supplemental Data.  

National Technical Information Service (NTIS)

This volume provides supplemental data by way of appendixes to the task presented in Volume II AD-506 593, and AD-506 594. These appendixes include a Technology Summary, A Proposed Specification for Reusable Propulsion Components and Feed System, and Stre...

L. L. Morgan; R. L. Gorman; H. L. Jensen; R. F. Hausman; H. K. Burbridge

1970-01-01

370

A Genetic Analysis of Brain Volumes and IQ in Children  

ERIC Educational Resources Information Center

In a population-based sample of 112 nine-year old twin pairs, we investigated the association among total brain volume, gray matter and white matter volume, intelligence as assessed by the Raven IQ test, verbal comprehension, perceptual organization and perceptual speed as assessed by the Wechsler Intelligence Scale for Children-III. Phenotypic…

van Leeuwen, Marieke; Peper, Jiska S.; van den Berg, Stephanie M.; Brouwer, Rachel M.; Hulshoff Pol, Hilleke E.; Kahn, Rene S.; Boomsma, Dorret I.

2009-01-01

371

Analysis of Swept Volume via Lie Groups and Differential Equations  

Microsoft Academic Search

The development of useful mathematical techniques for analyzing swept volumes, together with efficient means of implementing these methods to produce serviceable models, has important applications to numerically controlled (NC) machining, robotics, and motion planning, as well as other areas of automation. In this article a novel approach to swept volumes is delineated—one that fully exploits the intrinsic

Denis Blackmore; M. C. Leu

1992-01-01

372

EPA RREL'S MOBILE VOLUME REDUCTION UNIT -- APPLICATIONS ANALYSIS REPORT  

EPA Science Inventory

The volume reduction unit (VRU) is a pilot-scale, mobile soil washing system designed to remove organic contaminants from the soil through particle size separation and solubilization. The VRU removes contaminants by suspending them in a wash solution and by reducing the volume of...

373

Analysis of Cogging Torque and its Effect on Direct Torque Control (DTC) in a Segmented Interior Permanent Magnet Machine  

Microsoft Academic Search

This paper investigates reduced cogging torque seen in the Segmented Interior Permanent Magnet (SIPM) machine and explores the effects of cogging torque on direct torque control (DTC) performance. In the segmented IPM machine, the pole magnets are segmented to provide an improved flux-weakening capability to the rotor. The cogging torque of the segmented IPM machine is found to be significantly

Rukmi Dutta; Saad Sayeef; M. F. Rahman

2007-01-01

374

A Rapid and Efficient 2D/3D Nuclear Segmentation Method for Analysis of Early Mouse Embryo and Stem Cell Image Data  

PubMed Central

Segmentation is a fundamental problem that dominates the success of microscopic image analysis. After almost 25 years of cell detection software development, there is still no single piece of commercial software that works well in practice when applied to early mouse embryo or stem cell image data. To address this need, we developed MINS (modular interactive nuclear segmentation) as a MATLAB/C++-based segmentation tool tailored for counting cells and fluorescent intensity measurements of 2D and 3D image data. Our aim was to develop a tool that is accurate and efficient yet straightforward and user friendly. The MINS pipeline comprises three major cascaded modules: detection, segmentation, and cell position classification. An extensive evaluation of MINS on both 2D and 3D images, and comparison to related tools, reveals improvements in segmentation accuracy and usability. Thus, its accuracy and ease of use will allow MINS to be implemented for routine single-cell-level image analyses.

Lou, Xinghua; Kang, Minjung; Xenopoulos, Panagiotis; Munoz-Descalzo, Silvia; Hadjantonakis, Anna-Katerina

2014-01-01

375

Automatic Segmentation of Eight Tissue Classes in Neonatal Brain MRI  

PubMed Central

Purpose Volumetric measurements of neonatal brain tissues may be used as a biomarker for later neurodevelopmental outcome. We propose an automatic method for probabilistic brain segmentation in neonatal MRIs. Materials and Methods In an IRB-approved study, axial T1- and T2-weighted MR images were acquired at term-equivalent age for a preterm cohort of 108 neonates. A method for automatic probabilistic segmentation of the images into eight cerebral tissue classes was developed: cortical and central grey matter, unmyelinated and myelinated white matter, cerebrospinal fluid in the ventricles and in the extracerebral space, brainstem and cerebellum. Segmentation is based on supervised pixel classification using intensity values and spatial positions of the image voxels. The method was trained and evaluated using leave-one-out experiments on seven images, for which an expert had set a reference standard manually. Subsequently, the method was applied to the remaining 101 scans, and the resulting segmentations were evaluated visually by three experts. Finally, volumes of the eight segmented tissue classes were determined for each patient. Results The Dice similarity coefficients of the segmented tissue classes, except myelinated white matter, ranged from 0.75 to 0.92. Myelinated white matter was difficult to segment and the achieved Dice coefficient was 0.47. Visual analysis of the results demonstrated accurate segmentations of the eight tissue classes. The probabilistic segmentation method produced volumes that compared favorably with the reference standard. Conclusion The proposed method provides accurate segmentation of neonatal brain MR images into all given tissue classes, except myelinated white matter. This is one of the first methods that distinguishes cerebrospinal fluid in the ventricles from cerebrospinal fluid in the extracerebral space.
This method might be helpful in predicting neurodevelopmental outcome and useful for evaluating neuroprotective clinical trials in neonates.

Anbeek, Petronella; Isgum, Ivana; van Kooij, Britt J. M.; Mol, Christian P.; Kersbergen, Karina J.; Groenendaal, Floris; Viergever, Max A.; de Vries, Linda S.; Benders, Manon J. N. L.

2013-01-01

376

Model documentation of the gas analysis modeling system. Volume 1. Model overview  

Microsoft Academic Search

This is Volume 1 of three volumes of documentation for the Gas Analysis Modeling System (GAMS) developed by the Analysis and Forecasting Branch, Reserves and Natural Gas Division, Office of Oil and Gas, Energy Information Administration (EIA), US Department of Energy. The documentation has been developed to comply with the requirements specified in Energy Information Administration Order EI 5910.3A, Guidelines

Kydes

1984-01-01

377

Dimensionality reduction of hyperspectral imagery based on spectral analysis of homogeneous segments: distortion measurements and classification scores  

NASA Astrophysics Data System (ADS)

In this work, a new strategy for the analysis of hyperspectral image data is described and assessed. Firstly, the image is segmented into areas based on a spatial homogeneity criterion of pixel spectra. Then, a reduced data set (RDS) is produced by applying the projection pursuit (PP) algorithm to each of the segments in which the original hyperspectral image has been partitioned. A few significant spectral pixels are extracted from each segment. This operation allows the size of the data set to be dramatically reduced; nevertheless, most of the spectral information relative to the whole image is retained by the RDS. In fact, the RDS constitutes a good approximation of the most representative elements that would be found for the whole image, as the spectral features of the RDS are very similar to the features of the original hyperspectral data. Therefore, the elements of a basis, either orthogonal or nonorthogonal, that best represent the RDS are searched for. Algorithms that can be used for this task are principal component analysis (PCA), independent component analysis (ICA), PP, or matching pursuit (MP). Once the basis has been calculated from the RDS, the whole hyperspectral data set is decomposed on such a basis to yield a sequence of components, or features, whose (statistical) significance decreases with the index. Hence, minor components may be discarded without compromising the results of application tasks. Experiments carried out on AVIRIS data, whose ground truth was available, show that PCA based on the RDS, even if suboptimal in the MMSE sense with respect to standard PCA, increases the separability of thematic classes, which is favored when pixel vectors in the transformed domain are homogeneously spread around their class centers.

Alparone, Luciano; Argenti, Fabrizio; Dionisio, Michele; Santurri, Leonardo

2004-02-01
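The RDS-then-basis idea in the record above can be sketched in a few lines: derive a PCA basis from only the reduced data set, then decompose the whole cube on it and keep the leading components. The toy cube, band count, and pixel subsampling below are stand-ins for the segment-based RDS extraction:

```python
import numpy as np

def pca_basis_from_rds(rds, n_components):
    """Compute a PCA basis from a reduced data set (RDS) of representative
    pixel spectra, shape (n_pixels, n_bands)."""
    mean = rds.mean(axis=0)
    centered = rds - mean
    # band covariance estimated from the RDS only, not the full image
    cov = centered.T @ centered / (len(rds) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]              # descending significance
    return mean, eigvecs[:, order[:n_components]]

def project_cube(cube, mean, basis):
    """Decompose the full hyperspectral cube on the RDS-derived basis,
    implicitly discarding the minor components."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b) - mean
    return (flat @ basis).reshape(h, w, -1)

rng = np.random.default_rng(0)
cube = rng.normal(size=(8, 8, 16))                 # toy 16-band image
rds = cube.reshape(-1, 16)[::4]                    # stand-in for segment representatives
mean, basis = pca_basis_from_rds(rds, n_components=3)
features = project_cube(cube, mean, basis)
print(features.shape)                              # (8, 8, 3)
```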

378

Knowledge-based 3D segmentation of the brain in MR images for quantitative multiple sclerosis lesion tracking  

NASA Astrophysics Data System (ADS)

Brain segmentation in magnetic resonance (MR) images is an important step in quantitative analysis applications, including the characterization of multiple sclerosis (MS) lesions over time. Our approach is based on a priori knowledge of the intensity and three-dimensional (3D) spatial relationships of structures in MR images of the head. Optimal thresholding and connected-components analysis are used to generate a starting point for segmentation. A 3D radial search is then performed to locate probable locations of the intra-cranial cavity (ICC). Missing portions of the ICC surface are interpolated in order to exclude connected structures. Partial volume effects and inter-slice intensity variations in the image are accounted for automatically. Several studies were conducted to validate the segmentation. Accuracy was tested by calculating the segmented volume and comparing to known volumes of a standard MR phantom. Reliability was tested by comparing calculated volumes of individual segmentation results from multiple images of the same subject. The segmentation results were also compared to manual tracings. The average error in volume measurements for the phantom was 1.5% and the average coefficient of variation of brain volume measurements of the same subject was 1.2%. Since the new algorithm requires minimal user interaction, variability introduced by manual tracing and interactive threshold or region selection was eliminated. Overall, the new algorithm was shown to produce a more accurate and reliable brain segmentation than existing manual and semi-automated techniques.

Fisher, Elizabeth; Cothren, Robert M., Jr.; Tkach, Jean A.; Masaryk, Thomas J.; Cornhill, J. Fredrick

1997-04-01
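The starting point described in the record above (optimal thresholding followed by connected-components analysis) can be sketched with `scipy.ndimage`; the mean-intensity threshold below is a simple stand-in for the paper's optimal-threshold step, and the toy volume is illustrative:

```python
import numpy as np
from scipy import ndimage

def initial_brain_mask(volume):
    """Generate a starting point for segmentation: global thresholding,
    then 3D connected-components analysis keeping the largest component."""
    threshold = volume.mean()                # assumed stand-in threshold
    binary = volume > threshold
    labels, n = ndimage.label(binary)        # 3D connected components
    if n == 0:
        return binary
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    largest = 1 + int(np.argmax(sizes))      # label of the biggest component
    return labels == largest

vol = np.zeros((20, 20, 20))
vol[5:15, 5:15, 5:15] = 1.0                  # bright "brain" block
vol[0:2, 0:2, 0:2] = 1.0                     # small distractor component
mask = initial_brain_mask(vol)
print(mask.sum())                            # voxels in the largest component
```

The small distractor component is labeled but discarded, leaving only the 10x10x10 block in the mask.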

379

Engineering analysis and evaluation of the Centralia mine fire. Volume 2  

SciTech Connect

Provided in volume 2 is an analysis of the mine fire, which defines fire conditions, identifies ventilation patterns, and determines the progression of the fire. Options to control and/or extinguish the fire are examined, based on the analysis.

Not Available

1983-07-01

380

Anatomical studies on the spinal cord segments of the impala (Aepyceros melampus).  

PubMed

The anatomy of the spinal cord segments was studied and recorded for the impala. The root attachment lengths were greatest at C3, T10 and L3 cord segment levels in the respective regions. As to root emergence length, the greatest lengths were observed at C7, T10, L5 and S1 cord segment levels respectively. The interroot interval was longest at C2, T8 and L1 segments respectively. The longest cord segments were C2, T13, L2 and S2 segments. The widest cord segments of their respective regions were C7, T1, L5 and S1 cord segments. As to segment volume, C3, T13, L2 and S1 were the most voluminous cord segments in the respective cord regions. Statistical analysis revealed a high correlation among all of the study parameters suggesting a high degree of multicollinearity. Gross anatomical relationships concerning the location of the spinal cord segments with respect to the vertebrae were studied. The cord segments C1, T1-T4 and L1-L3 were within their vertebral limits. In the impala the spinal cord terminated at the midlevel of S4 vertebra. PMID:8238955

Rao, G S; Kalt, D J; Koch, M; Majok, A A

1993-09-01

381

A Posteriori Error Analysis of a Cell-centered Finite Volume Method for Semilinear Elliptic Problems  

SciTech Connect

In this paper, we conduct an a posteriori analysis for the error in a quantity of interest computed from a cell-centered finite volume scheme. The a posteriori error analysis is based on variational analysis, residual error and the adjoint problem. To carry out the analysis, we use an equivalence between the cell-centered finite volume scheme and a mixed finite element method with special choice of quadrature.

Michael Pernice

2009-11-01

382

Texture-based segmentation and analysis of emphysema depicted on CT images  

NASA Astrophysics Data System (ADS)

In this study we present a two-step, texture-based method for segmenting emphysema depicted on CT examinations. In step 1, fractal-dimension-based texture feature extraction is used to initially detect base regions of emphysema; a threshold is applied to the texture result image to obtain the initial base regions. In step 2, the base regions are evaluated pixel-by-pixel using a method that considers the variance change incurred by adding a pixel to the base, in an effort to refine the boundary of the base regions. Visual inspection revealed a reasonable segmentation of the emphysema regions. There was a strong correlation between lung function (FEV1%, FEV1/FVC, and DLCO%) and the fraction of emphysema computed using the texture-based method (-0.433, -0.629, and -0.527, respectively). The texture-based method produced more homogeneous emphysematous regions compared to simple thresholding, especially for large bullae, which can appear as speckled regions in the threshold approach. In the texture-based method, single isolated pixels may be considered emphysema only if neighboring pixels meet certain criteria, which supports the idea that a single isolated pixel may not be sufficient evidence that emphysema is present. One of the strengths of our texture-based approach to emphysema segmentation is that it goes beyond existing approaches, which typically extract a single texture feature or group of features and analyze each feature individually. We focus on first identifying potential regions of emphysema and then refining the boundary of the detected regions based on texture patterns.

Tan, Jun; Zheng, Bin; Wang, Xingwei; Lederman, Dror; Pu, Jiantao; Sciurba, Frank C.; Gur, David; Leader, J. Ken

2011-03-01
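The variance-change refinement in step 2 of the record above can be sketched as follows; the tolerance value, toy image, and candidate list are assumptions for illustration:

```python
import numpy as np

def refine_region(image, region_mask, candidates, max_var_increase):
    """Boundary refinement in the spirit of step 2: a candidate pixel joins
    the base region only if adding it increases the region's intensity
    variance by no more than a tolerance (an assumed knob here)."""
    values = list(image[region_mask])
    accepted = np.zeros(image.shape, dtype=bool)
    for r, c in candidates:
        if np.var(values + [image[r, c]]) - np.var(values) <= max_var_increase:
            values.append(image[r, c])       # grow the region incrementally
            accepted[r, c] = True
    return accepted

image = np.array([
    [10, 11,  9, 10, 10],
    [10, 10, 10, 10, 10],
    [10, 10, 10, 10, 10],
    [10, 10, 10, 10, 10],
    [10, 10, 10, 10, 100],
], dtype=float)
region = np.zeros_like(image, dtype=bool)
region[:2, :2] = True                        # initial base region
accepted = refine_region(image, region, [(0, 4), (4, 4)], max_var_increase=5.0)
print(accepted[0, 4], accepted[4, 4])        # similar pixel joins, outlier is rejected
```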

383

Analysis of the Command and Control Segment (CCS) attitude estimation algorithm  

NASA Technical Reports Server (NTRS)

This paper categorizes the qualitative behavior of the Command and Control Segment (CCS) differential correction algorithm as applied to attitude estimation using simultaneous spin axis sun angle and Earth chord length measurements. The categories of interest are the domains of convergence, divergence, and their boundaries. Three series of plots are discussed that show the dependence of the estimation algorithm on the vehicle radius, the sun/Earth angle, and the spacecraft attitude. Qualitative dynamics common to all three series are tabulated and discussed. Out-of-limits conditions for the estimation algorithm are identified and discussed.

Stockwell, Catherine

1993-01-01

384

Local Analysis of Human Cortex in MRI Brain Volume  

PubMed Central

This paper describes a method for subcortical identification and labeling of 3D medical MRI images. Indeed, the ability to identify similarities between the most characteristic subcortical structures such as sulci and gyri is helpful for human brain mapping studies in general and medical diagnosis in particular. However, these structures vary greatly from one individual to another because they have different geometric properties. For this purpose, we have developed an efficient tool that allows a user to start with brain imaging, to segment the gray/white matter border, to simplify the obtained cortex surface, and to describe this shape locally in order to identify homogeneous features. In this paper, a segmentation procedure using geometric curvature properties that provide an efficient discrimination for local shape is implemented on the brain cortical surface. Experimental results demonstrate the effectiveness and the validity of our approach.

2014-01-01

385

Cell Based Volume Integration for Boundary Integral Analysis  

SciTech Connect

The evaluation of volume integrals that arise in boundary integral formulations for non-homogeneous problems is considered. Using the 'Galerkin vector' to represent the Green's function, the volume integral is decomposed into a boundary integral plus a simpler volume integral wherein the source function is everywhere zero on the boundary. This new volume integral can be evaluated using a regular grid of cells covering the domain, with all cell integrals, including partial cells at the boundary, evaluated by simple linear interpolation of vertex values. For grid vertices that lie close to the boundary, the near-singular integrals are handled by partial analytic integration. The method employs a Galerkin approximation and is presented in terms of the 3D Poisson problem. An axi-symmetric formulation is also presented, and in this setting, the solution of a nonlinear problem is considered.

Koehler, Matthew [Vanderbilt University]; Yang, Ruoke [ORNL]; Gray, Leonard J [ORNL]

2012-01-01

386

Coal Conversion Control Technology. Volume III. Economic Analysis; Appendix.  

National Technical Information Service (NTIS)

This volume is the product of an information-gathering effort relating to coal conversion process streams. Available and developing control technology has been evaluated in view of the requirements of present and proposed federal, state, regional, and int...

L. E. Bostwick; M. R. Smith; D. O. Moore; D. K. Webber

1979-01-01

387

Mass segmentation of dense breasts on digitized mammograms: analysis of a probability-based function  

NASA Astrophysics Data System (ADS)

In this study, a segmentation algorithm based on the steepest changes of a probabilistic cost function was tested on non-processed and pre-processed dense breast images in an attempt to determine the efficacy of pre-processing for dense breast masses. Also, the inter-observer variability between expert radiologists was studied. Background trend correction was used as the pre-processing method. The algorithm, based on searching the steepest changes on a probabilistic cost function, was tested on 107 cancerous masses and 98 benign masses with density ratings of 3 or 4 according to the American College of Radiology's density rating scale. The computer-segmented results were validated using the following statistics: overlap, accuracy, sensitivity, specificity, Dice similarity index, and kappa. The mean accuracy statistic value ranged from 0.71 to 0.84 for cancer cases and 0.81 to 0.86 for benign cases. For nearly all statistics there were statistically significant differences between the expert radiologists.

Kinnard, Lisa M.; Lo, Shih-Chung B.; Duckett, Eva; Makariou, Erini; Osicka, Teresa; Freedman, Matthew T.; Chouikha, Mohamed F.

2005-04-01

388

Determination of fiber volume in graphite/epoxy materials using computer image analysis  

NASA Technical Reports Server (NTRS)

The fiber volume of graphite/epoxy specimens was determined by analyzing optical images of cross sectioned specimens using image analysis software. Test specimens were mounted and polished using standard metallographic techniques and examined at 1000 times magnification. Fiber volume determined using the optical imaging agreed well with values determined using the standard acid digestion technique. The results were found to agree within 5 percent over a fiber volume range of 45 to 70 percent. The error observed is believed to arise from fiber volume variations within the graphite/epoxy panels themselves. The determination of ply orientation using image analysis techniques is also addressed.

Viens, Michael J.

1990-01-01
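The image-analysis measurement in the record above reduces to an area-fraction computation on a thresholded cross-section image; a minimal sketch, with an assumed gray-level threshold and a synthetic image in place of a real micrograph:

```python
import numpy as np

def fiber_volume_fraction(gray_image, threshold):
    """Estimate fiber volume fraction from a polished cross-section image.
    Fibers image brighter than the epoxy matrix, so the areal fraction of
    above-threshold pixels approximates the volume fraction (the threshold
    value is an assumption for illustration)."""
    fibers = gray_image >= threshold
    return fibers.mean()                     # area fraction = mean of 0/1 mask

# toy image: 60% "fiber" pixels at level 200, 40% "matrix" at level 50
img = np.full((100, 100), 50, dtype=np.uint8)
img[:60, :] = 200
print(round(fiber_volume_fraction(img, threshold=128), 2))  # 0.6
```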

389

Analysis of flexible aircraft longitudinal dynamics and handling qualities. Volume 1: Analysis methods  

NASA Technical Reports Server (NTRS)

As aircraft become larger and lighter due to design requirements for increased payload and improved fuel efficiency, they will also become more flexible. For highly flexible vehicles, the handling qualities may not be accurately predicted by conventional methods. This study applies two analysis methods to a family of flexible aircraft in order to investigate how and when structural (especially dynamic aeroelastic) effects affect the dynamic characteristics of aircraft. The first type of analysis is an open loop model analysis technique. This method considers the effects of modal residue magnitudes on determining vehicle handling qualities. The second method is a pilot in the loop analysis procedure that considers several closed loop system characteristics. Volume 1 consists of the development and application of the two analysis methods described above.

Waszak, M. R.; Schmidt, D. S.

1985-01-01

390

Analysis of aviation logistics volume of new airport based on support vector machine  

NASA Astrophysics Data System (ADS)

The analysis of aviation logistics volume for a new airport is an important basis for aviation logistics infrastructure planning, and has always been a difficult problem. In this paper, a multi-dimensional comprehensive analysis model was established based on the features of the service radiation region of the new airport. On this basis, and combining the features of the new airport, this paper adopts nonlinear support vector machine regression to analyze the future air logistics volume of the new airport. An empirical analysis was then carried out.

Zhao, Gang; Zhou, Ling-yun; Luan, Kun

2013-03-01

391

Comparing Market-segment-profitability Analysis with Department-Profitability Analysis as Hotel Marketing-decision Tools  

Microsoft Academic Search

Although marketing managers would appreciate financial data that more directly support their activities, the financial data generated by hotel accounting systems are aimed at apportioning department-related expenses and reflecting the financial picture of the overall operation. Given the industry's increased focus on the profit generated by a given customer or market segment, a more useful form of financial data would allow hotel

Islam Karadag; Woo Gon Kim

2006-01-01

392

Air segmented amplitude modulated multiplexed flow analysis with software-based phase recognition: determination of phosphate ion.  

PubMed

Amplitude modulated multiplexed flow analysis (AMMFA) has been improved by introducing air segmentation and software-based phase recognition. Sample solutions, the flow rates of which are respectively varied at different frequencies, are merged. Air is introduced to the merged liquid stream in order to limit the dispersion of analytes within each liquid segment separated by air bubbles. The stream is led to a detector with no physical deaeration. Air signals are distinguished from liquid signals through the analysis of detector output signals, and are suppressed down to the level of liquid signals. Resulting signals are smoothed based on moving average computation. Thus processed signals are analyzed by fast Fourier transform. The analytes in the samples are respectively determined from the amplitudes of the corresponding wave components obtained. The developed system has been applied to the simultaneous determinations of phosphate ions in water samples by a Malachite Green method. The linearity of the analytical curve (0.0-31.0 µmol dm(-3)) is good (r(2)>0.999) and the detection limit (3.3σ) at the modulation period of 30 s is 0.52 µmol dm(-3). Good recoveries around 100% have been obtained for phosphate ions spiked into real water samples. PMID:24274279

Ogusu, Takeshi; Uchimoto, Katsuya; Takeuchi, Masaki; Tanaka, Hideji

2014-01-15
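The frequency-domain demultiplexing step in the record above can be sketched as follows; the sampling rate, modulation periods, and amplitudes are assumed values for illustration, not the paper's operating conditions:

```python
import numpy as np

def demultiplex_amplitudes(signal, fs, freqs):
    """Recover the amplitude of each modulation frequency from the merged
    detector signal, as in amplitude modulated multiplexed flow analysis."""
    n = len(signal)
    spectrum = np.fft.rfft(signal) / n * 2        # one-sided amplitude spectrum
    fft_freqs = np.fft.rfftfreq(n, d=1 / fs)
    # pick the bin nearest each modulation frequency
    return [abs(spectrum[np.argmin(abs(fft_freqs - f))]) for f in freqs]

fs = 10.0                                          # Hz, assumed sampling rate
t = np.arange(0, 120, 1 / fs)                      # 2 min record
# two sample streams flow-modulated at 30 s and 20 s periods (assumed)
signal = 0.8 * np.sin(2 * np.pi * t / 30) + 0.3 * np.sin(2 * np.pi * t / 20)
amps = demultiplex_amplitudes(signal, fs, [1 / 30, 1 / 20])
print([round(a, 2) for a in amps])                 # [0.8, 0.3]
```

Each analyte's contribution is read off at its own modulation frequency, which is what allows the merged streams to be determined simultaneously.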

393

Techniques in helical scanning, dynamic imaging and image segmentation for improved quantitative analysis with X-ray micro-CT  

NASA Astrophysics Data System (ADS)

This paper reports on recent advances at the micro-computed tomography facility at the Australian National University. Since 2000 this facility has been a significant centre for developments in imaging hardware and associated software for image reconstruction, image analysis and image-based modelling. In 2010 a new instrument was constructed that utilises theoretically-exact image reconstruction based on helical scanning trajectories, allowing higher cone angles and thus better utilisation of the available X-ray flux. We discuss the technical hurdles that needed to be overcome to allow imaging with cone angles in excess of 60°. We also present dynamic tomography algorithms that enable the changes between one moment and the next to be reconstructed from a sparse set of projections, allowing higher speed imaging of time-varying samples. Researchers at the facility have also created a sizeable distributed-memory image analysis toolkit with capabilities ranging from tomographic image reconstruction to 3D shape characterisation. We show results from image registration and present some of the new imaging and experimental techniques that it enables. Finally, we discuss the crucial question of image segmentation and evaluate some recently proposed techniques for automated segmentation.

Sheppard, Adrian; Latham, Shane; Middleton, Jill; Kingston, Andrew; Myers, Glenn; Varslot, Trond; Fogden, Andrew; Sawkins, Tim; Cruikshank, Ron; Saadatfar, Mohammad; Francois, Nicolas; Arns, Christoph; Senden, Tim

2014-04-01

394

Resonance mode analysis for volume estimation of asymmetric branching structures.  

PubMed

The resonance conditions associated with the propagation of a harmonic wave within a rigid, lossless branching structure can be explicitly derived. In this study, exact resonance conditions are derived for multi-order, rigid, asymmetric branching structures. These are compared with resonance conditions for rigid, multi-order, symmetric branching structures which we reported previously. The effect of asymmetry on the form of the higher-order resonance condition is discussed. In the low-frequency range, the resonance condition can be modified into simpler forms which facilitate volume estimation of the branching structure. Two such volume approximation techniques are presented: (a) a fundamental frequency method, in which the lowest resonance frequency is inversely proportional to the structure volume, and (b) an effective-length method, in which an effective length is calculated for all branches distal to the first bifurcation. Equivalence of the two methods is demonstrated. An experimental study was performed to measure the resonance modes of several second-order glass models with asymmetric branching structures similar to those of mammalian lungs. The resulting volume estimates were in close agreement with the true volumes. PMID:2774312

Raphael, D T; Epstein, M A

1989-01-01
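The fundamental-frequency method described in the record above reduces to a one-line estimate once the proportionality constant has been calibrated against a structure of known volume; all numbers below are hypothetical:

```python
def volume_from_fundamental(f1_hz, k):
    """Fundamental-frequency method: the lowest resonance frequency of the
    branching structure is inversely proportional to its volume, so
    V = k / f1 once k has been calibrated on a known-volume structure
    (k and the frequencies here are hypothetical)."""
    return k / f1_hz

# calibrate on a known 1.2 L model resonating at 90 Hz, then estimate
k = 1.2 * 90.0                                   # L*Hz calibration constant
estimate = volume_from_fundamental(75.0, k)
print(round(estimate, 2))                        # 1.44 (L)
```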

395

Intraspecific phylogeography of the gopher tortoise, Gopherus polyphemus: RFLP analysis of amplified mtDNA segments.  

PubMed

The slow rate of mtDNA evolution in turtles poses a limitation on the levels of intraspecific variation detectable by conventional restriction fragment surveys. We examined mtDNA variation in the gopher tortoise (Gopherus polyphemus) using an alternative restriction assay, one in which PCR-amplified segments of the mitochondrial genome were digested with tetranucleotide-site endonucleases. Restriction fragment polymorphisms representing four amplified regions were analysed to evaluate population genetic structure among 112 tortoises throughout the species' range. Thirty-six haplotypes were identified, and three major geographical assemblages (Eastern, Western, and Mid-Florida) were resolved by UPGMA and parsimony analyses. Eastern and Western assemblages abut near the Apalachicola drainage, whereas the Mid-Florida assemblage appears restricted to the Brooksville Ridge. The Eastern/Western assemblage boundary is remarkably congruent with phylogeographic profiles for eight additional species from the south-eastern U.S., representing both freshwater and terrestrial realms. PMID:8564009

Osentoski, M F; Lamb, T

1995-12-01

396

Support trusses for large precision segmented reflectors: Preliminary design and analysis  

NASA Technical Reports Server (NTRS)

Precision Segmented Reflector (PSR) technology is currently being developed for a range of future applications such as the Large Deployable Reflector. The structures activities at NASA-Langley are outlined in support of the PSR program. Design concepts are explored for erectable and deployable support structures which are envisioned to be the backbone of these precision reflectors. Important functional requirements for the support trusses related to stiffness, mass, and surface accuracy are reviewed. Proposed geometries for these structures and factors motivating the erectable and deployable designs are discussed. Analytical results related to stiffness, dynamic behavior, and surface accuracy are presented and considered in light of the functional requirements. Results are included for both a 4-meter-diameter prototype support truss which is currently being designed as the Test Bed for the PSR technology development program, and for two 20-meter support structures.

Collins, Timothy J.; Fichter, W. B.

1989-01-01

397

Quantitative trait locus analysis of leaf dissection in tomato using Lycopersicon pennellii segmental introgression lines.  

PubMed Central

Leaves are one of the most conspicuous and important organs of all seed plants. A fundamental source of morphological diversity in leaves is the degree to which the leaf is dissected by lobes and leaflets. We used publicly available segmental introgression lines to describe the quantitative trait loci (QTL) controlling the difference in leaf dissection seen between two tomato species, Lycopersicon esculentum and L. pennellii. We define eight morphological characteristics that comprise the mature tomato leaf and describe loci that affect each of these characters. We found 30 QTL that contribute to one or more of these characters. Of these 30 QTL, 22 primarily affect leaf dissection and 8 primarily affect leaf size. On the basis of which characters are affected, four classes of loci emerge that affect leaf dissection. The majority of the QTL produce phenotypes intermediate to the two parent lines, while 5 QTL result in transgression with drastically increased dissection relative to both parent lines.

Holtan, Hans E E; Hake, Sarah

2003-01-01

398

Quantitative trait locus analysis of leaf dissection in tomato using Lycopersicon pennellii segmental introgression lines.  

PubMed

Leaves are one of the most conspicuous and important organs of all seed plants. A fundamental source of morphological diversity in leaves is the degree to which the leaf is dissected by lobes and leaflets. We used publicly available segmental introgression lines to describe the quantitative trait loci (QTL) controlling the difference in leaf dissection seen between two tomato species, Lycopersicon esculentum and L. pennellii. We define eight morphological characteristics that comprise the mature tomato leaf and describe loci that affect each of these characters. We found 30 QTL that contribute to one or more of these characters. Of these 30 QTL, 22 primarily affect leaf dissection and 8 primarily affect leaf size. On the basis of which characters are affected, four classes of loci emerge that affect leaf dissection. The majority of the QTL produce phenotypes intermediate to the two parent lines, while 5 QTL result in transgression with drastically increased dissection relative to both parent lines. PMID:14668401

Holtan, Hans E E; Hake, Sarah

2003-11-01

399

Cargo Logistics Airlift Systems Study (CLASS). Volume 1: Analysis of current air cargo system  

NASA Technical Reports Server (NTRS)

The material presented in this volume is classified into the following sections: (1) analysis of current routes; (2) air eligibility criteria; (3) current direct support infrastructure; (4) comparative mode analysis; (5) political and economic factors; and (6) future potential market areas. An effort was made to keep the observations and findings relating to the current systems as objective as possible in order not to bias the analysis of future air cargo operations reported in Volume 3 of the CLASS final report.

Burby, R. J.; Kuhlman, W. H.

1978-01-01

400

A link-segment model of upright human posture for analysis of head-trunk coordination  

NASA Technical Reports Server (NTRS)

Sensory-motor control of upright human posture may be organized in a top-down fashion such that certain head-trunk coordination strategies are employed to optimize visual and/or vestibular sensory inputs. Previous quantitative models of the biomechanics of human posture control have examined the simple case of ankle sway strategy, in which an inverted pendulum model is used, and the somewhat more complicated case of hip sway strategy, in which multisegment, articulated models are used. While these models can be used to quantify the gross dynamics of posture control, they are not sufficiently detailed to analyze head-trunk coordination strategies that may be crucial to understanding the underlying mechanisms of balance control. In this paper, we present a biomechanical model of upright human posture that extends an existing four-mass, sagittal-plane link-segment model to a five-mass model including an independent head link. The new model was developed to analyze segmental body movements during dynamic posturography experiments in order to study head-trunk coordination strategies and their influence on sensory inputs to balance control. It was designed specifically to analyze data collected on the EquiTest (NeuroCom International, Clackamas, OR) computerized dynamic posturography system, where the task of maintaining postural equilibrium may be challenged under conditions in which the visual surround, support surface, or both are in motion. The performance of the model was tested by comparing its estimated ground reaction forces to those measured directly by support surface force transducers. We conclude that this model will be a valuable analytical tool in the search for mechanisms of balance control.
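
As a hedged illustration of the link-segment idea above, the net ground reaction force of a planar model with point masses at each segment's centre of mass follows directly from Newton's second law summed over the links. The segment names, masses, and accelerations below are illustrative placeholders, not values from the paper:

```python
def ground_reaction_force(masses, accels, g=9.81):
    """Net ground reaction force of a planar link-segment model.

    masses: list of segment masses (kg), one per link.
    accels: list of (horizontal, vertical) centre-of-mass accelerations (m/s^2).
    Returns (Fx, Fy) in newtons: the force the support surface must exert.
    """
    fx = sum(m * ax for m, (ax, _) in zip(masses, accels))
    fy = sum(m * (ay + g) for m, (_, ay) in zip(masses, accels))
    return fx, fy

# Static standing check: with all segments at rest, the vertical ground
# reaction force equals total body weight and the shear force is zero.
masses = [4.5, 30.0, 14.0, 7.0, 2.0]  # head, trunk, thighs, shanks, feet (illustrative)
fx, fy = ground_reaction_force(masses, [(0.0, 0.0)] * 5)
```

In a posturography analysis, the per-segment accelerations would come from differentiated motion-capture trajectories, and (Fx, Fy) would be compared against the force-plate measurements as the abstract describes.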

Nicholas, S. C.; Doxey-Gasway, D. D.; Paloski, W. H.

1998-01-01

401

Analysis of load-relaxation in compressed segments of lumbar spine.  

PubMed

Load-relaxation was measured in 12 segments of human cadaveric lumbar spine. Each segment consisted of an intact intervertebral disc attached to half of its adjacent vertebrae with the posterior elements removed. Six specimens were each compressed at six different strains (corresponding to initial loads of 0.5-2.5 kN) and, for each strain, the load-relaxation was measured for a period of 20 min at room temperature. These load-relaxation curves were used to plot three isochrones for each specimen. All isochrones were linear (r values in the range 0.95-0.99). This result indicated that a linear model could be used to represent load-relaxation. Four specimens were tested at a single strain (corresponding to an initial load of about 2 kN) at 37 °C for a period of 4-6 h. Load was plotted against the logarithm of time. The resulting plots did not show any peaks, indicating that relaxation effects did not predominate at any particular times during load-relaxation. However, it was possible to model the load-relaxation as a simple linear system which can be represented as two Maxwell elements in parallel. These elements were characterized by relaxation times of 16 ± 8 min and 4.6 ± 0.8 h. Fourier transformation of the load-relaxation curves showed a gradual increase in the storage modulus and a gradual decrease in the loss modulus for frequencies of about 1 Hz and above. At these frequencies, the spine cannot function as a shock-absorber in pure compression. PMID:8673325
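
The two-Maxwell-elements-in-parallel model described above amounts to a sum of two exponential decays under constant strain. A minimal sketch, using the relaxation times reported in the abstract; the load amplitudes a1 and a2 are illustrative placeholders, not fitted values from the study:

```python
import math

# Relaxation times from the abstract (converted to minutes).
TAU1_MIN = 16.0        # fast Maxwell element
TAU2_MIN = 4.6 * 60.0  # slow Maxwell element

def relaxation_load(t_min, a1, a2):
    """Load (kN) remaining at time t_min under constant compressive strain,
    modelled as two Maxwell elements in parallel: each element relaxes
    exponentially with its own time constant."""
    return (a1 * math.exp(-t_min / TAU1_MIN)
            + a2 * math.exp(-t_min / TAU2_MIN))
```

At t = 0 the load is simply a1 + a2 (the initial compressive load), and the curve decays monotonically, with the fast element dominating the first tens of minutes and the slow element governing the multi-hour tail.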

Holmes, A D; Hukins, D W

1996-03-01

402

Seismic volume attribute analysis of the Cenozoic succession in the L08 block (Southern North Sea)  

Microsoft Academic Search

Three-dimensional volume attribute extraction techniques have been applied to the Cenozoic succession, in a 3-D seismic data volume from the L08 block in the Dutch sector of the North Sea. Volume attribute analysis aims at the extraction of 3-D signal characteristics that are relevant to the geological interpretation of the data. For instance, the spatial discontinuities that are associated with

Philippe Steeghs; Irina Overeem; Sevgi Tigrek

2000-01-01

403

A combined machine-learning and graph-based framework for the segmentation of retinal surfaces in SD-OCT volumes  

PubMed Central

Optical coherence tomography is routinely used clinically for the detection and management of ocular diseases as well as in research where the studies may involve animals. This routine use requires that the developed automated segmentation methods not only be accurate and reliable, but also be adaptable to meet new requirements. We have previously proposed the use of a graph-theoretic approach for the automated 3-D segmentation of multiple retinal surfaces in volumetric human SD-OCT scans. The method ensures the global optimality of the set of surfaces with respect to a cost function. Cost functions have thus far been typically designed by hand by domain experts. This difficult and time-consuming task significantly impacts the adaptability of these methods to new models. Here, we describe a framework for the automated machine-learning based design of the cost function utilized by this graph-theoretic method. The impact of the learned components on the final segmentation accuracy is statistically assessed in order to tailor the method to specific applications. This adaptability is demonstrated by utilizing the method to segment seven, ten and five retinal surfaces from SD-OCT scans obtained from humans, mice and canines, respectively. The overall unsigned border position errors observed when using the recommended configuration of the graph-theoretic method were 6.45 ± 1.87 µm, 3.35 ± 0.62 µm and 9.75 ± 3.18 µm for the human, mouse and canine set of images, respectively.
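
To give a rough feel for the graph-theoretic idea, here is a simplified 2-D analogue (not the authors' 3-D implementation): a single optimal surface is extracted from a per-pixel cost image by dynamic programming, with a hard smoothness constraint limiting how far the surface may jump between neighbouring columns. The cost values and constraint below are assumptions for illustration:

```python
def optimal_surface(cost, max_jump=1):
    """Find one row index per column minimising the total cost, subject to
    |row[c] - row[c-1]| <= max_jump (a hard smoothness constraint).

    cost[r][c] is the cost of placing the surface at row r in column c.
    Returns the globally optimal surface as a list of row indices.
    """
    rows, cols = len(cost), len(cost[0])
    INF = float("inf")
    dp = [[INF] * cols for _ in range(rows)]    # best cost ending at (r, c)
    back = [[0] * cols for _ in range(rows)]    # predecessor row for backtracking
    for r in range(rows):
        dp[r][0] = cost[r][0]
    for c in range(1, cols):
        for r in range(rows):
            for pr in range(max(0, r - max_jump), min(rows, r + max_jump + 1)):
                cand = dp[pr][c - 1] + cost[r][c]
                if cand < dp[r][c]:
                    dp[r][c] = cand
                    back[r][c] = pr
    # Backtrack from the cheapest row in the last column.
    r = min(range(rows), key=lambda row: dp[row][-1])
    surface = [r]
    for c in range(cols - 1, 0, -1):
        r = back[r][c]
        surface.append(r)
    return surface[::-1]
```

In the machine-learning framework the abstract describes, the hand-designed cost values in `cost` would instead be produced by trained classifiers or regressors evaluated at each voxel.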

Antony, Bhavna J.; Abramoff, Michael D.; Harper, Matthew M.; Jeong, Woojin; Sohn, Elliott H.; Kwon, Young H.; Kardon, Randy; Garvin, Mona K.

2013-01-01

404

Earth Observatory Satellite system definition study. Report 5: System design and specifications. Volume 4: Mission peculiar spacecraft segment and module specifications  

NASA Technical Reports Server (NTRS)

The specifications for the Earth Observatory Satellite (EOS) peculiar spacecraft segment and associated subsystems and modules are presented. The specifications considered include the following: (1) wideband communications subsystem module, (2) mission peculiar software, (3) hydrazine propulsion subsystem module, (4) solar array assembly, and (5) the scanning spectral radiometer.

1974-01-01

405

Estimation of thigh muscle cross-sectional area by single- and multifrequency segmental bioelectrical impedance analysis in the elderly.  

PubMed

Bioelectrical impedance analysis (BIA) has been used to estimate skeletal muscle mass, but its application in the elderly is not optimal. The accuracy of BIA may be influenced by the expansion of extracellular water (ECW) relative to muscle mass with aging. Multifrequency BIA (MFBIA) can evaluate the distribution between ECW and intracellular water (ICW), and thus may be superior to single-frequency BIA (SFBIA) for estimating muscle mass in the elderly. A total of 58 elderly participants aged 65-85 years were recruited. Muscle cross-sectional area (CSA) was obtained from computed tomography scans at the mid-thigh. Segmental SFBIA and MFBIA were measured for the upper legs. An index of the ratio of ECW and ICW was calculated using MFBIA. The correlation between muscle CSA and SFBIA was moderate (r = 0.68), but strong between muscle CSA and MFBIA (r = 0.85). The ECW/ICW index was significantly and positively correlated with age (P < 0.001). SFBIA significantly overestimated muscle CSA in subjects who had relative expansion of ECW in the thigh segment (P < 0.001). This trend was not observed for MFBIA (P = 0.42). Relative expansion of ECW was observed in older participants. The relative expansion of ECW affects the validity of traditional SFBIA, which is lowered when estimating muscle CSA in the elderly. By contrast, MFBIA was not affected by water distribution in thigh segments, thus rendering the validity of MFBIA for estimating thigh muscle CSA higher than that of SFBIA in the elderly. PMID:24114698
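
One common way to derive an ECW/ICW index from two-frequency impedance uses Cole-model reasoning: at low frequency, current is largely confined to extracellular water, while at high frequency it also crosses cell membranes, so the two compartments act as resistors in parallel. The sketch below follows that standard textbook reasoning and is not the paper's exact formula; the resistance values are illustrative:

```python
def ecw_icw_index(r_low, r_high):
    """Resistance-ratio index of water distribution from segmental BIA.

    r_low:  resistance at low frequency (ohm), approximating the
            extracellular-water path alone (R0).
    r_high: resistance at high frequency (ohm), approximating the
            extracellular and intracellular paths in parallel (Rinf).
    Returns R_icw / R_ecw. Because resistance varies inversely with
    compartment volume, this ratio rises as ECW expands relative to ICW.
    """
    r_ecw = r_low
    # Parallel-resistor algebra: 1/Rinf = 1/R_ecw + 1/R_icw
    r_icw = 1.0 / (1.0 / r_high - 1.0 / r_low)
    return r_icw / r_ecw
```

For example, with an illustrative r_low of 600 Ω and r_high of 450 Ω, the intracellular path works out to 1800 Ω and the index to 3.0; a lower r_low at the same r_high (relatively expanded ECW) yields a larger index.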

Yamada, Yosuke; Ikenaga, Masahiro; Takeda, Noriko; Morimura, Kazuhiro; Miyoshi, Nobuyuki; Kiyonaga, Akira; Kimura, Misaka; Higaki, Yasuki; Tanaka, Hiroaki

2014-01-15

406

Analysis of iris structure and iridocorneal angle parameters with anterior segment optical coherence tomography in Fuchs' uveitis syndrome.  

PubMed

To evaluate the differences in the biometric parameters of iridocorneal angle and iris structure measured by anterior segment optical coherence tomography (AS-OCT) in Fuchs' uveitis syndrome (FUS). Seventy-six eyes of 38 consecutive patients with the diagnosis of unilateral FUS were recruited into this prospective, cross-sectional and comparative study. After a complete ocular examination, anterior segment biometric parameters were measured by Visante® AS-OCT. All parameters were compared between the two eyes of each patient statistically. The mean age of the 38 subjects was 32.5 ± 7.5 years (18 female and 20 male). The mean visual acuity was lower in eyes with FUS (0.55 ± 0.31) than in healthy eyes (0.93 ± 0.17). The central corneal thickness did not differ significantly between eyes. All iridocorneal angle parameters (angle-opening distance 500 and 750, scleral spur angle, trabecular-iris space (TISA) 500 and 750) except TISA 500 in the temporal quadrant were significantly larger in eyes with FUS than in healthy eyes. Anterior chamber depth was deeper in the eyes with FUS than in the unaffected eyes. With regard to iris measurements, iris thickness in the thickest part, iris bowing and iris shape were all statistically different between the affected eye and the healthy eye in individual patients with FUS. However, no statistically significant differences were evident in iris thickness at 500 µm, thickness in the middle and iris length. There was a significant difference in iris shape between the two eyes of patients with glaucoma. AS-OCT as an imaging method provides us with many informative results in the analysis of anterior segment parameters in FUS. PMID:23277205

Basarir, Berna; Altan, Cigdem; Pinarci, Eylem Yaman; Celik, Ugur; Satana, Banu; Demirok, Ahmet

2013-06-01

407

Automatic segmentation and identification of solitary pulmonary nodules on follow-up CT scans based on local intensity structure analysis and non-rigid image registration  

NASA Astrophysics Data System (ADS)

This paper presents a novel method that can automatically segment solitary pulmonary nodules (SPNs) and match the segmented SPNs across follow-up thoracic CT scans. Clinically, a physician needs to find SPNs on chest CT and observe their progress over time in order to diagnose whether they are benign or malignant, or to assess the effect of chemotherapy on malignant ones using follow-up data. However, the enormous number of CT images places a large burden on the physician. To lighten this burden, we developed a method for automatically segmenting SPNs and assisting their observation in follow-up CT scans. The SPNs in an input 3D thoracic CT scan are segmented based on local intensity structure analysis and information about the pulmonary blood vessels. To compensate for lung deformation, we co-register follow-up CT scans using affine and non-rigid registration. Finally, matches between detected nodules are found across the registered CT scans based on a similarity measure. We applied these methods to three patients comprising 14 thoracic CT scans. Our segmentation method detected 96.7% of SPNs in the images, and the nodule matching method found 83.3% of correspondences among the segmented SPNs. The results also show that our matching method is robust to the growth of SPNs, including integration/separation and appearance/disappearance. These results confirm that our method is feasible for segmenting and identifying SPNs on follow-up CT scans.
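
The final matching step can be illustrated with a greedy nearest-centroid pairing in registered space; the distance threshold, data layout, and function name below are assumptions for illustration, not details from the paper:

```python
import math

def match_nodules(baseline, followup, max_dist_mm=10.0):
    """Greedily pair each baseline nodule with the nearest unclaimed
    follow-up nodule within max_dist_mm.

    baseline, followup: lists of (x, y, z) centroids in mm, already mapped
    into a common space by registration. Returns a list of (i, j) index
    pairs; nodules left unmatched on either side model the
    appearance/disappearance cases the abstract mentions.
    """
    pairs, used = [], set()
    for i, b in enumerate(baseline):
        best_j, best_d = None, max_dist_mm
        for j, f in enumerate(followup):
            if j in used:
                continue
            d = math.dist(b, f)  # Euclidean distance between centroids
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            pairs.append((i, best_j))
            used.add(best_j)
    return pairs
```

A production system would replace the centroid distance with the richer similarity measure the abstract refers to (e.g. incorporating intensity or shape), but the thresholded nearest-match structure is the same.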

Chen, Bin; Naito, Hideto; Nakamura, Yoshihiko; Kitasaka, Takayuki; Rueckert, Daniel; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

2011-03-01

408

In vivo analysis of hippocampal subfield atrophy in mild cognitive impairment via semi-automatic segmentation of T2-weighted MRI  

PubMed Central

The measurement of hippocampal volumes using MRI is a useful in-vivo biomarker for detection and monitoring of early Alzheimer’s Disease (AD), including during the amnestic Mild Cognitive Impairment (a-MCI) stage. The pathology underlying AD has regionally selective effects within the hippocampus. As such, we predict that hippocampal subfields are more sensitive in discriminating prodromal AD (i.e., a-MCI) from cognitively normal controls than whole hippocampal volumes, and attempt to demonstrate this using a semi-automatic method that can accurately segment hippocampal subfields. High-resolution coronal-oblique T2-weighted images of the hippocampal formation were acquired in 45 subjects: 28 controls and 17 a-MCI (mean age 69.5 ± 9.2 and 70.2 ± 7.6 years, respectively). CA1, CA2, CA3, and CA4/DG subfields, along with head and tail regions, were segmented using an automatic algorithm. CA1 and CA4/DG segmentations were manually edited. Whole hippocampal volumes were obtained from the subjects’ T1-weighted anatomical images. Automatic segmentation produced significant group differences in the following subfields: CA1 (left: p = 0.001, right: p = 0.038), CA4/DG (left: p = 0.002, right: p = 0.043), head (left: p = 0.018, right: p = 0.002), and tail (left: p = 0.019). After manual correction, differences were increased in CA1 (left: p < 0.001, right: p = 0.002) and reduced in CA4/DG (left: p = 0.029, right: p = 0.221). Whole hippocampal volumes significantly differed bilaterally (left: p = 0.028, right: p = 0.009). This pattern of atrophy in a-MCI is consistent with the topography of AD pathology observed in postmortem studies, and corrected left CA1 provided stronger discrimination than whole hippocampal volume (p = 0.03). These results suggest that semi-automatic segmentation of hippocampal subfields is efficient and may provide additional sensitivity beyond whole hippocampal volumes.

Pluta,