Sample records for reference segment size

  1. Designing image segmentation studies: Statistical power, sample size and reference standard quality.

    PubMed

    Gibson, Eli; Hu, Yipeng; Huisman, Henkjan J; Barratt, Dean C

    2017-12-01

    Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and on the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula to relate reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of less than 4 subjects and errors in the detectable accuracy difference less than 0.6%. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
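
    As a rough companion to the abstract above, the sketch below shows the standard normal-approximation sample-size calculation for a paired difference in mean per-subject segmentation accuracy. It is not the formula derived in the paper, and the numeric inputs are illustrative assumptions only.

    ```python
    import math
    from scipy.stats import norm

    def paired_sample_size(delta, sd_diff, alpha=0.05, power=0.80):
        """Subjects needed to detect a mean paired accuracy difference `delta`
        between two algorithms, when the per-subject accuracy difference has
        standard deviation `sd_diff` (generic calculation, not the paper's)."""
        z_alpha = norm.ppf(1.0 - alpha / 2.0)   # two-sided test
        z_beta = norm.ppf(power)
        return math.ceil(((z_alpha + z_beta) * sd_diff / delta) ** 2)

    # Illustrative numbers only: detect a 1% accuracy difference when the
    # per-subject differences have a standard deviation of 2.5%.
    print(paired_sample_size(delta=0.01, sd_diff=0.025))
    ```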

  2. Cavity contour segmentation in chest radiographs using supervised learning and dynamic programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maduskar, Pragnya, E-mail: pragnya.maduskar@radboudumc.nl; Hogeweg, Laurens; Sánchez, Clara I.

    Purpose: Efficacy of tuberculosis (TB) treatment is often monitored using chest radiography. Monitoring size of cavities in pulmonary tuberculosis is important as the size predicts severity of the disease and its persistence under therapy predicts relapse. The authors present a method for automatic cavity segmentation in chest radiographs. Methods: A two stage method is proposed to segment the cavity borders, given a user defined seed point close to the center of the cavity. First, a supervised learning approach is employed to train a pixel classifier using texture and radial features to identify the border pixels of the cavity. A likelihood value of belonging to the cavity border is assigned to each pixel by the classifier. The authors experimented with four different classifiers: k-nearest neighbor (kNN), linear discriminant analysis (LDA), GentleBoost (GB), and random forest (RF). Next, the constructed likelihood map was used as an input cost image in the polar transformed image space for dynamic programming to trace the optimal maximum cost path. This constructed path corresponds to the segmented cavity contour in image space. Results: The method was evaluated on 100 chest radiographs (CXRs) containing 126 cavities. The reference segmentation was manually delineated by an experienced chest radiologist. An independent observer (a chest radiologist) also delineated all cavities to estimate interobserver variability. Jaccard overlap measure Ω was computed between the reference segmentation and the automatic segmentation; and between the reference segmentation and the independent observer's segmentation for all cavities. A median overlap Ω of 0.81 (0.76 ± 0.16), and 0.85 (0.82 ± 0.11) was achieved between the reference segmentation and the automatic segmentation, and between the segmentations by the two radiologists, respectively. The best reported mean contour distance and Hausdorff distance between the reference and the automatic segmentation were, respectively, 2.48 ± 2.19 and 8.32 ± 5.66 mm, whereas these distances were 1.66 ± 1.29 and 5.75 ± 4.88 mm between the segmentations by the reference reader and the independent observer, respectively. The automatic segmentations were also visually assessed by two trained CXR readers as “excellent,” “adequate,” or “insufficient.” The readers had good agreement in assessing the cavity outlines and 84% of the segmentations were rated as “excellent” or “adequate” by both readers. Conclusions: The proposed cavity segmentation technique produced results with a good degree of overlap with manual expert segmentations. The evaluation measures demonstrated that the results approached the results of the experienced chest radiologists, in terms of overlap measure and contour distance measures. Automatic cavity segmentation can be employed in TB clinics for treatment monitoring, especially in resource limited settings where radiologists are not available.
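
    The dynamic-programming step described above (tracing a maximum-cost path through a polar-transformed likelihood image) can be sketched as follows. This is a generic illustration, not the authors' implementation; the one-pixel smoothness constraint and the omission of a start/end closure constraint are assumptions.

    ```python
    import numpy as np

    def max_cost_polar_path(cost, max_step=1):
        """cost: 2D array (n_angles, n_radii) of border likelihoods.
        Returns one radius index per angle forming a maximum-cost path whose
        radius changes by at most `max_step` between adjacent angles."""
        cost = np.asarray(cost, dtype=float)
        n_ang, n_rad = cost.shape
        acc = cost.copy()                       # accumulated cost table
        back = np.zeros((n_ang, n_rad), dtype=int)
        for t in range(1, n_ang):
            for r in range(n_rad):
                lo, hi = max(0, r - max_step), min(n_rad, r + max_step + 1)
                prev = int(np.argmax(acc[t - 1, lo:hi])) + lo
                back[t, r] = prev
                acc[t, r] = cost[t, r] + acc[t - 1, prev]
        # backtrack from the best end point
        path = np.empty(n_ang, dtype=int)
        path[-1] = int(np.argmax(acc[-1]))
        for t in range(n_ang - 1, 0, -1):
            path[t - 1] = back[t, path[t]]
        return path
    ```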

  3. Design Reference Missions for Deep-Space Optical Communication

    NASA Astrophysics Data System (ADS)

    Breidenthal, J.; Abraham, D.

    2016-05-01

    We examined the potential, but uncertain, NASA mission portfolio out to a time horizon of 20 years, to identify mission concepts that potentially could benefit from optical communication, considering their communications needs, the environments in which they would operate, and their notional size, weight, and power constraints. A set of 12 design reference missions was selected to represent the full range of potential missions. These design reference missions span the space of potential customer requirements, and encompass the wide range of applications that an optical ground segment might eventually be called upon to serve. The design reference missions encompass a range of orbit types, terminal sizes, and positions in the solar system that reveal the chief system performance variables of an optical ground segment, and may be used to enable assessments of the ability of alternative systems to meet various types of customer needs.

  4. Empirical gradient threshold technique for automated segmentation across image modalities and cell lines.

    PubMed

    Chalfoun, J; Majurski, M; Peskin, A; Breen, C; Bajcsy, P; Brady, M

    2015-10-01

    New microscopy technologies are enabling image acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21,000×21,000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meet all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2) and all failed the requirement for robust parameters that do not require re-adjustment with time (requirement 5). We present a novel and empirically derived image gradient threshold selection method for separating foreground and background pixels in an image that meets all the requirements listed above. We quantify the difference between our approach and existing ones in terms of accuracy, execution speed, memory usage and number of adjustable parameters on a reference data set. This reference data set consists of 501 validation images with manually determined segmentations and image sizes ranging from 0.36 Megapixels to 850 Megapixels. It includes four different cell lines and two image modalities: phase contrast and fluorescent. Our new technique, called Empirical Gradient Threshold (EGT), is derived from this reference data set with a 10-fold cross-validation method. EGT segments cells or colonies with resulting Dice accuracy index measurements above 0.92 for all cross-validation data sets. EGT results have also been visually verified on a much larger data set that includes bright field and Differential Interference Contrast (DIC) images, 16 cell lines and 61 time-sequence data sets, for a total of 17,479 images. This method is implemented as an open-source plugin to ImageJ as well as a standalone executable that can be downloaded from the following link: https://isg.nist.gov/. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
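
    A minimal sketch of the generic gradient-thresholding pipeline that EGT builds on, together with the Dice accuracy index used for evaluation. The percentile threshold stands in for the empirically derived EGT threshold and is purely an assumption, not the published selection rule.

    ```python
    import numpy as np
    from scipy import ndimage as ndi

    def gradient_threshold_segmentation(image, percentile=90.0, min_size=64):
        """Segment foreground by thresholding the gradient-magnitude image.
        The percentile is a placeholder for the empirically derived EGT value."""
        img = image.astype(float)
        grad = np.hypot(ndi.sobel(img, axis=0), ndi.sobel(img, axis=1))
        mask = grad > np.percentile(grad, percentile)
        mask = ndi.binary_fill_holes(mask)
        labels, _ = ndi.label(mask)              # drop small spurious objects
        counts = np.bincount(labels.ravel())
        counts[0] = 0                            # label 0 is background
        return np.isin(labels, np.nonzero(counts >= min_size)[0])

    def dice(a, b):
        """Dice accuracy index between two binary masks."""
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
    ```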

  5. Incidence and Significance of Spontaneous ST Segment Re-elevation After Reperfused Anterior Acute Myocardial Infarction - Relationship With Infarct Size, Adverse Remodeling, and Events at 1 Year.

    PubMed

    Cuenin, Léo; Lamoureux, Sophie; Schaaf, Mathieu; Bochaton, Thomas; Monassier, Jean-Pierre; Claeys, Marc J; Rioufol, Gilles; Finet, Gérard; Garcia-Dorado, David; Angoulvant, Denis; Elbaz, Meyer; Delarche, Nicolas; Coste, Pierre; Metge, Marc; Perret, Thibault; Motreff, Pascal; Bonnefoy-Cudraz, Eric; Vanzetto, Gérald; Morel, Olivier; Boussaha, Inesse; Ovize, Michel; Mewton, Nathan

    2018-04-25

    Up to 25% of patients with ST elevation myocardial infarction (STEMI) have ST segment re-elevation after initial regression post-reperfusion and there are few data regarding its prognostic significance. Methods and Results: A standard 12-lead electrocardiogram (ECG) was recorded in 662 patients with anterior STEMI referred for primary percutaneous coronary intervention (PPCI). ECGs were recorded 60-90 min after PPCI and at discharge. ST segment re-elevation was defined as a ≥0.1-mV increase in STMax between the post-PPCI and discharge ECGs. Infarct size (assessed as creatine kinase [CK] peak), echocardiography at baseline and follow-up, and all-cause death and heart failure events at 1 year were assessed. In all, 128 patients (19%) had ST segment re-elevation. There was no difference between patients with and without re-elevation in infarct size (CK peak [mean±SD] 4,231±2,656 vs. 3,993±2,819 IU/L; P=0.402), left ventricular (LV) ejection fraction (50.7±11.6% vs. 52.2±10.8%; P=0.186), LV adverse remodeling (20.1±38.9% vs. 18.3±30.9%; P=0.631), or all-cause mortality and heart failure events (22 [19.8%] vs. 106 [19.2%]; P=0.887) at 1 year. Among anterior STEMI patients treated by PPCI, ST segment re-elevation was present in 19% and was not associated with increased infarct size or major adverse events at 1 year.

  6. Alignment and use of the optical test for the 8.4-m off-axis primary mirrors of the Giant Magellan Telescope

    NASA Astrophysics Data System (ADS)

    West, S. C.; Burge, J. H.; Cuerden, B.; Davison, W.; Hagen, J.; Martin, H. M.; Tuell, M. T.; Zhao, C.; Zobrist, T.

    2010-07-01

    The Giant Magellan Telescope has a 25 meter f/0.7 near-parabolic primary mirror constructed from seven 8.4 meter diameter segments. Several aspects of the interferometric optical test used to guide polishing of the six off-axis segments go beyond the demonstrated state of the art in optical testing. The null corrector is created from two obliquely illuminated spherical mirrors combined with a computer-generated hologram (the measurement hologram). The larger mirror is 3.75 m in diameter and is supported at the top of a test tower, 23.5 m above the GMT segment. Its size rules out a direct validation of the wavefront produced by the null corrector. We can, however, use a reference hologram placed at an intermediate focus between the two spherical mirrors to measure the wavefront produced by the measurement hologram and the first mirror. This reference hologram is aligned to match the wavefront and thereby becomes the alignment reference for the rest of the system. The position and orientation of the reference hologram, the 3.75 m mirror and the GMT segment are measured with a dedicated laser tracker, leading to an alignment accuracy of about 100 microns over the 24 m dimensions of the test. In addition to the interferometer that measures the GMT segment, a separate interferometer at the center of curvature of the 3.75 m sphere monitors its figure simultaneously with the GMT measurement, allowing active correction and compensation for residual errors. We describe the details of the design, alignment, and use of this unique off-axis optical test.

  7. MR diffusion-weighted imaging-based subcutaneous tumour volumetry in a xenografted nude mouse model using 3D Slicer: an accurate and repeatable method

    PubMed Central

    Ma, Zelan; Chen, Xin; Huang, Yanqi; He, Lan; Liang, Cuishan; Liang, Changhong; Liu, Zaiyi

    2015-01-01

    Accurate and repeatable measurement of the gross tumour volume (GTV) of subcutaneous xenografts is crucial in the evaluation of anti-tumour therapy. Formula and image-based manual segmentation methods are commonly used for GTV measurement but are hindered by low accuracy and reproducibility. 3D Slicer is open-source software that provides semiautomatic segmentation for GTV measurements. In our study, subcutaneous GTVs from nude mouse xenografts were measured by semiautomatic segmentation with 3D Slicer based on morphological magnetic resonance imaging (mMRI) or diffusion-weighted imaging (DWI) (b = 0, 20, 800 s/mm2). These GTVs were then compared with those obtained via the formula and image-based manual segmentation methods with ITK software using the true tumour volume as the standard reference. The effects of tumour size and shape on GTV measurements were also investigated. Our results showed that, when compared with the true tumour volume, segmentation for DWI (P = 0.060–0.671) resulted in better accuracy than that of mMRI (P < 0.001) and the formula method (P < 0.001). Furthermore, semiautomatic segmentation for DWI (intraclass correlation coefficient, ICC = 0.9999) resulted in higher reliability than manual segmentation (ICC = 0.9996–0.9998). Tumour size and shape had no effects on GTV measurement across all methods. Therefore, DWI-based semiautomatic segmentation, which is accurate and reproducible and also provides biological information, is the optimal GTV measurement method in the assessment of anti-tumour treatments. PMID:26489359

  8. The Study of Residential Areas Extraction Based on GF-3 Texture Image Segmentation

    NASA Astrophysics Data System (ADS)

    Shao, G.; Luo, H.; Tao, X.; Ling, Z.; Huang, Y.

    2018-04-01

    The study uses standard stripe-mode, dual-polarization SAR images from GF-3 as the basic data. Processes and methods for extracting residential areas based on texture segmentation of GF-3 images are compared and analyzed. GF-3 image processing includes radiometric calibration, complex data conversion, multi-look processing, and image filtering; a suitability analysis of the different filtering methods shows that the Kuan filter is efficient for extracting residential areas. We then calculated and analyzed texture feature vectors using the GLCM (Gray Level Co-occurrence Matrix), considering the moving window size, step size, and angle; the results show that a window size of 11*11, a step of 1, and an angle of 0° are effective and optimal for extracting residential areas. Using the FNEA (Fractal Net Evolution Approach), we segmented the GLCM texture images and extracted the residential areas by threshold setting. The extraction result was verified and assessed with a confusion matrix: overall accuracy is 0.897 and kappa is 0.881. We also extracted the residential areas by SVM classification based on the GF-3 images; its overall accuracy is 0.09 lower than that of the method based on GF-3 texture image segmentation. We conclude that residential area extraction based on multi-scale segmentation of GF-3 SAR texture images is simple and highly accurate. Since multi-spectral remote sensing images are difficult to obtain in southern China, where the weather is cloudy and rainy throughout the year, this work has certain reference significance.
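
    For readers unfamiliar with GLCM features, the sketch below computes a co-occurrence matrix for the reported offset (distance 1, angle 0°) within a single 11*11 window and derives a contrast feature from it. The quantization level and the choice of feature are illustrative assumptions, not the parameters used in the study.

    ```python
    import numpy as np

    def glcm_contrast(window, levels=32):
        """Gray Level Co-occurrence Matrix for offset (distance = 1, angle = 0 deg)
        inside one moving window, plus the GLCM contrast feature.
        Quantizing to `levels` gray levels is an illustrative choice."""
        w = window.astype(float)
        scale = (levels - 1) / max(w.max(), 1.0)
        q = np.floor(w * scale).astype(int)
        i, j = q[:, :-1].ravel(), q[:, 1:].ravel()   # horizontal neighbor pairs
        glcm = np.zeros((levels, levels))
        np.add.at(glcm, (i, j), 1.0)
        glcm += glcm.T                               # symmetric GLCM
        glcm /= glcm.sum()
        ii, jj = np.indices(glcm.shape)
        return float(np.sum(glcm * (ii - jj) ** 2))  # contrast feature

    # Example on a random 11x11 window (the reported optimal window size).
    rng = np.random.default_rng(0)
    print(glcm_contrast(rng.integers(0, 256, size=(11, 11))))
    ```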

  9. Automated detection and segmentation of follicles in 3D ultrasound for assisted reproduction

    NASA Astrophysics Data System (ADS)

    Narayan, Nikhil S.; Sivanandan, Srinivasan; Kudavelly, Srinivas; Patwardhan, Kedar A.; Ramaraju, G. A.

    2018-02-01

    Follicle quantification refers to the computation of the number and size of follicles in 3D ultrasound volumes of the ovary. This is one of the key factors in determining hormonal dosage during female infertility treatments. In this paper, we propose an automated algorithm to detect and segment follicles in 3D ultrasound volumes of the ovary for quantification. In a first-of-its-kind attempt, we employ noise-robust phase symmetry feature maps as the likelihood function to perform mean-shift-based follicle center detection. A max-flow algorithm is used for segmentation, and a gray-weighted distance transform is employed for post-processing the results. We have obtained state-of-the-art results with a true positive detection rate of >90% on 26 3D volumes with 323 follicles.

  10. Calibration of large area Micromegas detectors using cosmic rays

    NASA Astrophysics Data System (ADS)

    Biebel, O.; Flierl, B.; Herrmann, M.; Hertenberger, R.; Klitzner, F.; Lösel, P.; Müller, R.; Valderanis, C.; Zibell, A.

    2017-06-01

    Currently, m²-sized micropattern detectors with spatial resolution better than 100 μm and online trigger capability are of great interest for many experiments. Large size in combination with superb spatial resolution and trigger capability implies that the construction of these detectors is highly sophisticated and imposes strict mechanical tolerances. We developed a method to survey assembled and working detectors for potential deviations of the micropattern readout structures from design values, as well as deformations of the whole detector, using cosmic muons in a tracking facility. The LMU Cosmic Ray Facility consists of two 8 m² ATLAS Monitored Drift Tube chambers (MDT) for precision muon reference tracking and two segmented trigger hodoscopes with sub-ns time resolution and additional 10 cm position information along the wires of the MDTs. It provides information on homogeneity in efficiency and pulse height of one or several micropattern detectors installed in between the MDTs. With an angular acceptance of -30° to +30°, the comparison of the reference muon tracking with centroidal position determination or time-projection-chamber-like track reconstruction in the micropattern detector allows for calibration in three dimensions. We present results of a m²-sized one-dimensional resistive strip Micromegas detector consisting of two readout boards with in total 2048 strips, read out by 16 APV25 front-end boards. This 16-fold segmentation along the precision direction, in combination with a 10-fold segmentation in the orthogonal direction provided by the resolution of the trigger hodoscope, allows for a very detailed analysis of the 1 m² detector under study by subdivision into 160 partitions, each analyzed separately. We are able to disentangle deviations from readout strip straightness and global deformation due to the small overpressure caused by the Ar:CO2 (93:7) gas mixture flux. We introduce the alignment and calibration procedure, report on homogeneity in efficiency and pulse height, and present results on deformation and performance of the m²-sized Micromegas.

  11. ATLAST and JWST Segmented Telescope Design Considerations

    NASA Technical Reports Server (NTRS)

    Feinberg, Lee

    2016-01-01

    To the extent it makes sense, leverage JWST (James Webb Space Telescope) knowledge, designs, and architectures; GSE (Ground Support Equipment) is a good starting point. Develop a full end-to-end architecture that closes. Avoid reinventing the wheel except where needed, and optimize from there (mainly for stability and coronagraphy). Develop a scalable design reference mission (9.2 meters). Do just enough work to understand launch break points in aperture size. Demonstrate that 10 pm (picometer) stability is achievable on a design reference mission; a key design driver is the most robust stability possible. Make the design compatible with starshades. While segmented coronagraphs with high throughput and large bandpasses are important, make the system serviceable so the instruments can evolve. Keep it at room temperature to minimize the costs associated with cryo. Focus resources on the contrast problem. Start with the architecture and connect it to the technology needs.

  12. Cortical Enhanced Tissue Segmentation of Neonatal Brain MR Images Acquired by a Dedicated Phased Array Coil

    PubMed Central

    Shi, Feng; Yap, Pew-Thian; Fan, Yong; Cheng, Jie-Zhi; Wald, Lawrence L.; Gerig, Guido; Lin, Weili; Shen, Dinggang

    2010-01-01

    The acquisition of high quality MR images of neonatal brains is largely hampered by their characteristically small head size and low tissue contrast. As a result, subsequent image processing and analysis, especially for brain tissue segmentation, are often hindered. To overcome this problem, a dedicated phased array neonatal head coil is utilized to improve MR image quality by effectively combining images obtained from 8 coil elements without lengthening data acquisition time. In addition, a subject-specific atlas based tissue segmentation algorithm is specifically developed for the delineation of fine structures in the acquired neonatal brain MR images. The proposed tissue segmentation method first enhances the sheet-like cortical gray matter (GM) structures in neonatal images with a Hessian filter for generation of cortical GM prior. Then, the prior is combined with our neonatal population atlas to form a cortical enhanced hybrid atlas, which we refer to as the subject-specific atlas. Various experiments are conducted to compare the proposed method with manual segmentation results, as well as with two additional population atlas based segmentation methods. Results show that the proposed method is capable of segmenting the neonatal brain with the highest accuracy, compared to the other two methods. PMID:20862268

  13. Using JWST Heritage to Enable a Future Large Ultra-Violet Optical Infrared Telescope

    NASA Technical Reports Server (NTRS)

    Feinberg, Lee

    2016-01-01

    To the extent it makes sense, leverage JWST knowledge, designs, architectures, GSE. Develop a scalable design reference mission (9.2 meter). Do just enough work to understand launch break points in aperture size. Demonstrate 10 pm stability is achievable on a design reference mission. Make design compatible with starshades. While segmented coronagraphs with high throughput and large bandpasses are important, make the system serviceable so you can evolve the instruments. Keep it room temperature to minimize the costs associated with cryo. Focus resources on the contrast problem. Start with the architecture and connect it to the technology needs.

  14. Generic method for automatic bladder segmentation on cone beam CT using a patient-specific bladder shape model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schoot, A. J. A. J. van de, E-mail: a.j.schootvande@amc.uva.nl; Schooneveldt, G.; Wognum, S.

    Purpose: The aim of this study is to develop and validate a generic method for automatic bladder segmentation on cone beam computed tomography (CBCT), independent of gender and treatment position (prone or supine), using only pretreatment imaging data. Methods: Data of 20 patients, treated for tumors in the pelvic region with the entire bladder visible on CT and CBCT, were divided into four equally sized groups based on gender and treatment position. The full and empty bladder contour, that can be acquired with pretreatment CT imaging, were used to generate a patient-specific bladder shape model. This model was used to guide the segmentation process on CBCT. To obtain the bladder segmentation, the reference bladder contour was deformed iteratively by maximizing the cross-correlation between directional grey value gradients over the reference and CBCT bladder edge. To overcome incorrect segmentations caused by CBCT image artifacts, automatic adaptations were implemented. Moreover, locally incorrect segmentations could be adapted manually. After each adapted segmentation, the bladder shape model was expanded and new shape patterns were calculated for following segmentations. All available CBCTs were used to validate the segmentation algorithm. The bladder segmentations were validated by comparison with the manual delineations and the segmentation performance was quantified using the Dice similarity coefficient (DSC), surface distance error (SDE) and SD of contour-to-contour distances. Also, bladder volumes obtained by manual delineations and segmentations were compared using a Bland-Altman error analysis. Results: The mean DSC, mean SDE, and mean SD of contour-to-contour distances between segmentations and manual delineations were 0.87, 0.27 cm and 0.22 cm (female, prone), 0.85, 0.28 cm and 0.22 cm (female, supine), 0.89, 0.21 cm and 0.17 cm (male, supine) and 0.88, 0.23 cm and 0.17 cm (male, prone), respectively. Manual local adaptations improved the segmentation results significantly (p < 0.01) based on DSC (6.72%) and SD of contour-to-contour distances (0.08 cm) and decreased the 95% confidence intervals of the bladder volume differences. Moreover, expanding the shape model improved the segmentation results significantly (p < 0.01) based on DSC and SD of contour-to-contour distances. Conclusions: This patient-specific shape model based automatic bladder segmentation method on CBCT is accurate and generic. Our segmentation method only needs two pretreatment imaging data sets as prior knowledge, is independent of patient gender and patient treatment position and has the possibility to manually adapt the segmentation locally.

  15. Modeling heterogeneous (co)variances from adjacent-SNP groups improves genomic prediction for milk protein composition traits.

    PubMed

    Gebreyesus, Grum; Lund, Mogens S; Buitenhuis, Bart; Bovenhuis, Henk; Poulsen, Nina A; Janss, Luc G

    2017-12-05

    Accurate genomic prediction requires a large reference population, which is problematic for traits that are expensive to measure. Traits related to milk protein composition are not routinely recorded due to costly procedures and are considered to be controlled by a few quantitative trait loci of large effect. The amount of variation explained may vary between regions leading to heterogeneous (co)variance patterns across the genome. Genomic prediction models that can efficiently take such heterogeneity of (co)variances into account can result in improved prediction reliability. In this study, we developed and implemented novel univariate and bivariate Bayesian prediction models, based on estimates of heterogeneous (co)variances for genome segments (BayesAS). Available data consisted of milk protein composition traits measured on cows and de-regressed proofs of total protein yield derived for bulls. Single-nucleotide polymorphisms (SNPs), from 50K SNP arrays, were grouped into non-overlapping genome segments. A segment was defined as one SNP, or a group of 50, 100, or 200 adjacent SNPs, or one chromosome, or the whole genome. Traditional univariate and bivariate genomic best linear unbiased prediction (GBLUP) models were also run for comparison. Reliabilities were calculated through a resampling strategy and using deterministic formula. BayesAS models improved prediction reliability for most of the traits compared to GBLUP models and this gain depended on segment size and genetic architecture of the traits. The gain in prediction reliability was especially marked for the protein composition traits β-CN, κ-CN and β-LG, for which prediction reliabilities were improved by 49 percentage points on average using the MT-BayesAS model with a 100-SNP segment size compared to the bivariate GBLUP. Prediction reliabilities were highest with the BayesAS model that uses a 100-SNP segment size. The bivariate versions of our BayesAS models resulted in extra gains of up to 6% in prediction reliability compared to the univariate versions. Substantial improvement in prediction reliability was possible for most of the traits related to milk protein composition using our novel BayesAS models. Grouping adjacent SNPs into segments provided enhanced information to estimate parameters and allowing the segments to have different (co)variances helped disentangle heterogeneous (co)variances across the genome.

  16. Two- and three-dimensional CT measurements of urinary calculi length and width: a comparative study.

    PubMed

    Lidén, Mats; Thunberg, Per; Broxvall, Mathias; Geijer, Håkan

    2015-04-01

    The standard imaging procedure for a patient presenting with renal colic is unenhanced computed tomography (CT). The CT-measured size correlates closely with the estimated prognosis for spontaneous passage of a ureteral calculus. Size estimations of urinary calculi in CT images are still based on two-dimensional (2D) reformats. Our aim was to develop and validate a calculus-oriented three-dimensional (3D) method for measuring the length and width of urinary calculi and to compare the calculus-oriented measurements of length and width with corresponding 2D measurements obtained in axial and coronal reformats. Fifty unenhanced CT examinations demonstrating urinary calculi were included. A 3D symmetric segmentation algorithm was validated against reader size estimations. The calculus-oriented size from the segmentation was then compared to the estimated size in axial and coronal 2D reformats. The validation showed 0.1 ± 0.7 mm agreement against the reference measurements. There was a 0.4 mm median bias for 3D estimated calculus length compared to 2D (P < 0.001), but no significant bias for 3D width compared to 2D. The length of a calculus in axial and coronal reformats becomes underestimated compared to 3D if its orientation is not aligned to the image planes. Future studies aiming to correlate calculus size with patient outcome should use a calculus-oriented size estimation. © The Foundation Acta Radiologica 2014.
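
    A calculus-oriented measurement can be illustrated by taking length and width along the principal axes of a segmented 3D voxel mask, as sketched below. The PCA-based extents and the voxel-spacing handling are assumptions for illustration; the paper's symmetric segmentation algorithm is not reproduced here.

    ```python
    import numpy as np

    def oriented_length_width(mask, spacing=(1.0, 1.0, 1.0)):
        """Length and width of a binary 3D calculus mask measured along its
        own principal axes rather than along the axial/coronal image planes."""
        coords = np.argwhere(mask) * np.asarray(spacing)   # voxel centers in mm
        centered = coords - coords.mean(axis=0)
        # principal axes from an SVD of the centered voxel coordinates
        _, _, axes = np.linalg.svd(centered, full_matrices=False)
        proj = centered @ axes.T                           # coordinates in the PCA basis
        extents = proj.max(axis=0) - proj.min(axis=0)
        length, width = extents[0], extents[1]
        return length, width
    ```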

  17. Complete grain boundaries from incomplete EBSD maps: the influence of segmentation on grain size determinations

    NASA Astrophysics Data System (ADS)

    Heilbronner, Renée; Kilian, Ruediger

    2017-04-01

    Grain size analyses are carried out for a number of reasons; for example, the dynamically recrystallized grain size of quartz is used to assess the flow stresses during deformation. Typically a thin section or polished surface is used. If the expected grain size is large enough (10 µm or larger), the images can be obtained on a light microscope; if the grain size is smaller, the SEM is used. The grain boundaries are traced (the process is called segmentation and can be done manually or via image processing) and the size of the cross sectional areas (segments) is determined. From the resulting size distributions, 'the grain size' or 'average grain size', usually a mean diameter or similar, is derived. When carrying out such grain size analyses, a number of aspects are critical for the reproducibility of the result: the resolution of the imaging equipment (light microscope or SEM), the type of images that are used for segmentation (cross polarized, partial or full orientation images, CIP versus EBSD), the segmentation procedure (algorithm) itself, the quality of the segmentation and the mathematical definition and calculation of 'the average grain size'. The quality of the segmentation depends very strongly on the criteria that are used for identifying grain boundaries (for example, angles of misorientation versus shape considerations), on pre- and post-processing (filtering) and on the quality of the recorded images (most notably on the indexing ratio). In this contribution, we consider experimentally deformed Black Hills quartzite with dynamically re-crystallized grain sizes in the range of 2-15 µm. We compare two basic methods of segmentation of EBSD maps (orientation based versus shape based) and explore how the choice of methods influences the result of the grain size analysis. We also compare different measures for grain size (mean versus mode versus RMS, and 2D versus 3D) in order to determine which of the definitions of 'average grain size' yields the most stable results.
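
    The last analysis step mentioned above, turning segment areas into equivalent circular diameters and comparing different 'average grain size' definitions, can be sketched as follows; the histogram binning used for the modal value is an assumption.

    ```python
    import numpy as np

    def grain_size_summaries(areas_um2, n_bins=30):
        """Equivalent circular diameters from 2D segment areas and three common
        'average grain size' definitions (mean, modal bin center, RMS)."""
        d = 2.0 * np.sqrt(np.asarray(areas_um2, dtype=float) / np.pi)
        hist, edges = np.histogram(d, bins=n_bins)
        k = int(np.argmax(hist))                       # most populated bin
        mode = 0.5 * (edges[k] + edges[k + 1])
        return {
            "mean": d.mean(),
            "mode": mode,
            "rms": float(np.sqrt(np.mean(d ** 2))),
        }
    ```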

  18. Extraction of liver volumetry based on blood vessel from the portal phase CT dataset

    NASA Astrophysics Data System (ADS)

    Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Utsunomiya, Tohru; Shimada, Mitsuo

    2012-02-01

    At the liver surgery planning stage, liver volumetry is essential for surgeons. The main problem in liver extraction is the wide variability of livers in shape and size. Since the hepatic blood vessel structure varies from person to person and covers the liver region, the present method uses this information to extract the liver in two stages. The first stage extracts the abdominal blood vessels in the form of hepatic and non-hepatic blood vessels. In the second stage, the extracted vessels are used to control the extraction of the liver region automatically. Contrast-enhanced CT datasets from only the portal phase of 50 cases are used; these include 30 abnormal livers. A reference for all cases was created by comparing the labeling results of two experts and correcting their inter-reader variability. Results of the proposed method agree with the reference at an average rate of 97.8%. Applying the different metrics described at the MICCAI workshop on liver segmentation, we find that the volume overlap error is 4.4%, the volume difference is 0.3%, the average symmetric distance is 0.7 mm, the root mean square symmetric distance is 0.8 mm, and the maximum distance is 15.8 mm. These results are averages over all data and show improved accuracy compared to current liver segmentation methods. The approach appears promising for liver volumetry across a variety of shapes and sizes.
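
    The MICCAI-style metrics quoted above can be computed from two binary masks roughly as sketched below. The surface extraction and distance-transform details are illustrative choices, not the evaluation code used in the study.

    ```python
    import numpy as np
    from scipy import ndimage as ndi

    def overlap_metrics(seg, ref, spacing=(1.0, 1.0, 1.0)):
        """Volume overlap error (%), relative volume difference (%) and average
        symmetric surface distance between two binary 3D masks."""
        seg, ref = seg.astype(bool), ref.astype(bool)
        inter = np.logical_and(seg, ref).sum()
        union = np.logical_or(seg, ref).sum()
        voe = 100.0 * (1.0 - inter / union)
        rvd = 100.0 * (seg.sum() - ref.sum()) / ref.sum()
        # surface voxels = mask minus its erosion
        surf_seg = seg & ~ndi.binary_erosion(seg)
        surf_ref = ref & ~ndi.binary_erosion(ref)
        d_to_ref = ndi.distance_transform_edt(~surf_ref, sampling=spacing)
        d_to_seg = ndi.distance_transform_edt(~surf_seg, sampling=spacing)
        assd = (d_to_ref[surf_seg].sum() + d_to_seg[surf_ref].sum()) / (
            surf_seg.sum() + surf_ref.sum())
        return voe, rvd, assd
    ```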

  19. A novel adaptive scoring system for segmentation validation with multiple reference masks

    NASA Astrophysics Data System (ADS)

    Moltz, Jan H.; Rühaak, Jan; Hahn, Horst K.; Peitgen, Heinz-Otto

    2011-03-01

    The development of segmentation algorithms for different anatomical structures and imaging protocols is an important task in medical image processing. The validation of these methods, however, is often treated as a subordinate task. Since manual delineations, which are widely used as a surrogate for the ground truth, exhibit an inherent uncertainty, it is preferable to use multiple reference segmentations for an objective validation. This requires a consistent framework that should fulfill three criteria: 1) it should treat all reference masks equally a priori and not demand consensus between the experts; 2) it should evaluate the algorithmic performance in relation to the inter-reference variability, i.e., be more tolerant where the experts disagree about the true segmentation; 3) it should produce results that are comparable for different test data. We show why current state-of-the-art frameworks, such as the one used at several MICCAI segmentation challenges, do not fulfill these criteria and propose a new validation methodology. A score is computed in an adaptive way for each individual segmentation problem, using a combination of volume- and surface-based comparison metrics. These are transformed into the score by relating them to the variability between the reference masks, which can be measured by comparing the masks with each other or with an estimated ground truth. We present examples from a study on liver tumor segmentation in CT scans where our score shows a more adequate assessment of the segmentation results than the MICCAI framework.
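
    One simple way to operationalize criterion 2, relating the algorithm-to-reference agreement to the variability among the reference masks themselves, is sketched below. The Dice-based metric and the normalization are assumptions for illustration and not the adaptive score proposed in the paper.

    ```python
    import numpy as np
    from itertools import combinations

    def dice(a, b):
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def variability_adjusted_score(auto_mask, reference_masks):
        """Relate the algorithm-vs-reference agreement to the agreement among
        the references themselves. A score near (or above) 1 means the
        algorithm lies within the inter-expert variability."""
        algo_vs_refs = np.mean([dice(auto_mask, r) for r in reference_masks])
        inter_refs = np.mean([dice(a, b)
                              for a, b in combinations(reference_masks, 2)])
        return algo_vs_refs / inter_refs
    ```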

  20. Computerized analysis of coronary artery disease: Performance evaluation of segmentation and tracking of coronary arteries in CT angiograms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Chuan, E-mail: chuan@umich.edu; Chan, Heang-Ping; Chughtai, Aamer

    2014-08-15

    Purpose: The authors are developing a computer-aided detection system to assist radiologists in analysis of coronary artery disease in coronary CT angiograms (cCTA). This study evaluated the accuracy of the authors’ coronary artery segmentation and tracking method which are the essential steps to define the search space for the detection of atherosclerotic plaques. Methods: The heart region in cCTA is segmented and the vascular structures are enhanced using the authors’ multiscale coronary artery response (MSCAR) method that performed 3D multiscale filtering and analysis of the eigenvalues of Hessian matrices. Starting from seed points at the origins of the left and right coronary arteries, a 3D rolling balloon region growing (RBG) method that adapts to the local vessel size segmented and tracked each of the coronary arteries and identifies the branches along the tracked vessels. The branches are queued and subsequently tracked until the queue is exhausted. With Institutional Review Board approval, 62 cCTA were collected retrospectively from the authors’ patient files. Three experienced cardiothoracic radiologists manually tracked and marked center points of the coronary arteries as reference standard following the 17-segment model that includes clinically significant coronary arteries. Two radiologists visually examined the computer-segmented vessels and marked the mistakenly tracked veins and noisy structures as false positives (FPs). For the 62 cases, the radiologists marked a total of 10191 center points on 865 visible coronary artery segments. Results: The computer-segmented vessels overlapped with 83.6% (8520/10191) of the center points. Relative to the 865 radiologist-marked segments, the sensitivity reached 91.9% (795/865) if a true positive is defined as a computer-segmented vessel that overlapped with at least 10% of the reference center points marked on the segment. When the overlap threshold is increased to 50% and 100%, the sensitivities were 86.2% and 53.4%, respectively. For the 62 test cases, a total of 55 FPs were identified by radiologist in 23 of the cases. Conclusions: The authors’ MSCAR-RBG method achieved high sensitivity for coronary artery segmentation and tracking. Studies are underway to further improve the accuracy for the arterial segments affected by motion artifacts, severe calcified and noncalcified soft plaques, and to reduce the false tracking of the veins and other noisy structures. Methods are also being developed to detect coronary artery disease along the tracked vessels.

  21. Registration-based segmentation with articulated model from multipostural magnetic resonance images for hand bone motion animation.

    PubMed

    Chen, Hsin-Chen; Jou, I-Ming; Wang, Chien-Kuo; Su, Fong-Chin; Sun, Yung-Nien

    2010-06-01

    The quantitative measurements of hand bones, including volume, surface, orientation, and position, are essential in investigating hand kinematics. Moreover, within the measurement stage, bone segmentation is the most important step due to its direct influence on measurement accuracy. Since hand bones are small and tubular in shape, magnetic resonance (MR) imaging is prone to artifacts such as nonuniform intensity and fuzzy boundaries. Thus, greater detail is required for improving segmentation accuracy. The authors then propose using a novel registration-based method on an articulated hand model to segment hand bones from multipostural MR images. The proposed method consists of the model construction and registration-based segmentation stages. Given a reference postural image, the first stage requires construction of a drivable reference model characterized by hand bone shapes, intensity patterns, and articulated joint mechanism. By applying the reference model to the second stage, the authors initially design a model-based registration pursuant to intensity distribution similarity, MR bone intensity properties, and constraints of model geometry to align the reference model to target bone regions of the given postural image. The authors then refine the resulting surface to improve the superimposition between the registered reference model and target bone boundaries. For each subject, given a reference postural image, the proposed method can automatically segment all hand bones from all other postural images. Compared to the ground truth from two experts, the resulting surface had an average margin of error within only 1 mm. In addition, the proposed method showed good agreement on the overlap of bone segmentations by Dice similarity coefficient and also demonstrated better segmentation results than conventional methods. The proposed registration-based segmentation method can successfully overcome drawbacks caused by inherent artifacts in MR images and obtain more accurate segmentation results automatically. Moreover, realistic hand motion animations can be generated based on the bone segmentation results. The proposed method is found helpful for understanding hand bone geometries in dynamic postures that can be used in simulating 3D hand motion through multipostural MR images.

  22. A minimally interactive method to segment enlarged lymph nodes in 3D thoracic CT images using a rotatable spiral-scanning technique

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Moltz, Jan H.; Bornemann, Lars; Hahn, Horst K.

    2012-03-01

    Precise size measurement of enlarged lymph nodes is a significant indicator for diagnosing malignancy, follow-up and therapy monitoring of cancer diseases. The presence of diverse sizes and shapes, inhomogeneous enhancement and the adjacency to neighboring structures with similar intensities, make the segmentation task challenging. We present a semi-automatic approach requiring minimal user interactions to quickly and robustly segment the enlarged lymph nodes. First, a stroke approximating the largest diameter of a specific lymph node is drawn manually from which a volume of interest (VOI) is determined. Second, based on the statistical analysis of the intensities on the dilated stroke area, a region growing procedure is utilized within the VOI to create an initial segmentation of the target lymph node. Third, a rotatable spiral-scanning technique is proposed to resample the 3D boundary surface of the lymph node to a 2D boundary contour in a transformed polar image. The boundary contour is found by seeking the optimal path in the 2D polar image with a dynamic programming algorithm and eventually transformed back to 3D. Ultimately, the boundary surface of the lymph node is determined using an interpolation scheme followed by post-processing steps. To test the robustness and efficiency of our method, a quantitative evaluation was conducted with a dataset of 315 lymph nodes acquired from 79 patients with lymphoma and melanoma. Compared to the reference segmentations, an average Dice coefficient of 0.88 with a standard deviation of 0.08, and an average absolute surface distance of 0.54 mm with a standard deviation of 0.48 mm, were achieved.

  23. Treatment Using the SpyGlass Digital System in a Patient with Hepatolithiasis after a Whipple Procedure.

    PubMed

    Harima, Hirofumi; Hamabe, Kouichi; Hisano, Fusako; Matsuzaki, Yuko; Itoh, Tadahiko; Sanuki, Kazutoshi; Sakaida, Isao

    2018-05-23

    An 89-year-old man was referred to our hospital for treatment of hepatolithiasis causing recurrent cholangitis. He had undergone a prior Whipple procedure. Computed tomography demonstrated left-sided hepatolithiasis. First, we conducted peroral direct cholangioscopy (PDCS) using an ultraslim endoscope. Although PDCS was successfully conducted, it was unsuccessful in removing all the stones. The stones located in the B2 segment were difficult to remove because the endoscope could not be inserted deeply into this segment due to the small size of the intrahepatic bile duct. Next, we substituted the endoscope with an upper gastrointestinal endoscope. After positioning the endoscope, the SpyGlass digital system (SPY-DS) was successfully inserted deep into the B2 segment. Upon visualizing the residual stones, we conducted SPY-DS-guided electrohydraulic lithotripsy. The stones were disintegrated and completely removed. In cases of PDCS failure, a treatment strategy using the SPY-DS can be considered for patients with hepatolithiasis after a Whipple procedure.

  24. Decreasing transmembrane segment length greatly decreases perfringolysin O pore size

    DOE PAGES

    Lin, Qingqing; Li, Huilin; Wang, Tong; ...

    2015-04-08

    Perfringolysin O (PFO) is a transmembrane (TM) β-barrel protein that inserts into mammalian cell membranes. Once inserted into membranes, PFO assembles into pore-forming oligomers containing 30–50 PFO monomers. These form a pore of up to 300 Å, far exceeding the size of most other proteinaceous pores. In this study, we found that altering PFO TM segment length can alter the size of PFO pores. A PFO mutant with lengthened TM segments oligomerized to a similar extent as wild-type PFO, and exhibited pore-forming activity and a pore size very similar to wild-type PFO as measured by electron microscopy and a leakage assay. In contrast, PFO with shortened TM segments exhibited a large reduction in pore-forming activity and pore size. This suggests that the interaction between TM segments can greatly affect the size of pores formed by TM β-barrel proteins. PFO may be a promising candidate for engineering pore size for various applications.

  25. Estimating A Reference Standard Segmentation With Spatially Varying Performance Parameters: Local MAP STAPLE

    PubMed Central

    Commowick, Olivier; Akhondi-Asl, Alireza; Warfield, Simon K.

    2012-01-01

    We present a new algorithm, called local MAP STAPLE, to estimate from a set of multi-label segmentations both a reference standard segmentation and spatially varying performance parameters. It is based on a sliding window technique to estimate the segmentation and the segmentation performance parameters for each input segmentation. In order to allow for optimal fusion from the small amount of data in each local region, and to account for the possibility of labels not being observed in a local region of some (or all) input segmentations, we introduce prior probabilities for the local performance parameters through a new Maximum A Posteriori formulation of STAPLE. Further, we propose an expression to compute confidence intervals in the estimated local performance parameters. We carried out several experiments with local MAP STAPLE to characterize its performance and value for local segmentation evaluation. First, with simulated segmentations with known reference standard segmentation and spatially varying performance, we show that local MAP STAPLE performs better than both STAPLE and majority voting. Then we present evaluations with data sets from clinical applications. These experiments demonstrate that spatial adaptivity in segmentation performance is an important property to capture. We compared the local MAP STAPLE segmentations to STAPLE, and to previously published fusion techniques and demonstrate the superiority of local MAP STAPLE over other state-of-the-art algorithms. PMID:22562727
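
    The majority-voting baseline that local MAP STAPLE is compared against is straightforward to express for binary label maps, as in the sketch below; the tie-breaking rule is an arbitrary assumption. The MAP STAPLE estimator itself is not reproduced here.

    ```python
    import numpy as np

    def majority_vote(segmentations):
        """Fuse several binary label maps of identical shape by per-voxel
        majority voting, the baseline local MAP STAPLE is compared against.
        Ties are resolved toward the foreground label (an arbitrary choice)."""
        stack = np.stack([s.astype(np.uint8) for s in segmentations], axis=0)
        votes = stack.sum(axis=0)
        return votes * 2 >= stack.shape[0]
    ```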

  26. Performance evaluation of an automatic segmentation method of cerebral arteries in MRA images by use of a large image database

    NASA Astrophysics Data System (ADS)

    Uchiyama, Yoshikazu; Asano, Tatsunori; Hara, Takeshi; Fujita, Hiroshi; Kinosada, Yasutomi; Asano, Takahiko; Kato, Hiroki; Kanematsu, Masayuki; Hoshi, Hiroaki; Iwama, Toru

    2009-02-01

    The detection of cerebrovascular diseases such as unruptured aneurysm, stenosis, and occlusion is a major application of magnetic resonance angiography (MRA). However, their accurate detection is often difficult for radiologists. Therefore, several computer-aided diagnosis (CAD) schemes have been developed in order to assist radiologists with image interpretation. The purpose of this study was to develop a computerized method for segmenting cerebral arteries, which is an essential component of CAD schemes. For the segmentation of vessel regions, we first used a gray level transformation to calibrate voxel values. To adjust for variations in the positioning of patients, registration was subsequently employed to maximize the overlapping of the vessel regions in the target image and reference image. The vessel regions were then segmented from the background using gray-level thresholding and region growing techniques. Finally, rule-based schemes with features such as size, shape, and anatomical location were employed to distinguish between vessel regions and false positives. Our method was applied to 854 clinical cases obtained from two different hospitals. Acceptable segmentation of the cerebral arteries was attained in 97.1% (829/854) of the MRA studies. Therefore, our computerized method would be useful in CAD schemes for the detection of cerebrovascular diseases in MRA images.
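
    The gray-level thresholding and region-growing step described above can be sketched as follows for a 3D volume and a seed voxel. The 6-connectivity and the fixed threshold are illustrative assumptions, not the parameters of the published method.

    ```python
    import numpy as np
    from collections import deque

    def region_grow(volume, seed, threshold):
        """Grow a connected region from `seed` (z, y, x), adding 6-connected
        neighbors whose intensity exceeds `threshold`."""
        mask = np.zeros(volume.shape, dtype=bool)
        if volume[seed] <= threshold:
            return mask
        mask[seed] = True
        queue = deque([seed])
        offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                   (0, -1, 0), (0, 0, 1), (0, 0, -1)]
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in offsets:
                n = (z + dz, y + dy, x + dx)
                inside = all(0 <= n[i] < volume.shape[i] for i in range(3))
                if inside and not mask[n] and volume[n] > threshold:
                    mask[n] = True
                    queue.append(n)
        return mask
    ```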

  27. A review of accuracy assessment for object-based image analysis: From per-pixel to per-polygon approaches

    NASA Astrophysics Data System (ADS)

    Ye, Su; Pontius, Robert Gilmore; Rakshit, Rahul

    2018-07-01

    Object-based image analysis (OBIA) has gained widespread popularity for creating maps from remotely sensed data. Researchers routinely claim that OBIA procedures outperform pixel-based procedures; however, it is not immediately obvious how to evaluate the degree to which an OBIA map compares to reference information in a manner that accounts for the fact that the OBIA map consists of objects that vary in size and shape. Our study reviews 209 journal articles concerning OBIA published between 2003 and 2017. We focus on the three stages of accuracy assessment: (1) sampling design, (2) response design and (3) accuracy analysis. First, we report the literature's overall characteristics concerning OBIA accuracy assessment. Simple random sampling was the most used method among probability sampling strategies, slightly more than stratified sampling. Office interpreted remotely sensed data was the dominant reference source. The literature reported accuracies ranging from 42% to 96%, with an average of 85%. A third of the articles failed to give sufficient information concerning accuracy methodology such as sampling scheme and sample size. We found few studies that focused specifically on the accuracy of the segmentation. Second, we identify a recent increase of OBIA articles in using per-polygon approaches compared to per-pixel approaches for accuracy assessment. We clarify the impacts of the per-pixel versus the per-polygon approaches respectively on sampling, response design and accuracy analysis. Our review defines the technical and methodological needs in the current per-polygon approaches, such as polygon-based sampling, analysis of mixed polygons, matching of mapped with reference polygons and assessment of segmentation accuracy. Our review summarizes and discusses the current issues in object-based accuracy assessment to provide guidance for improved accuracy assessments for OBIA.

  28. TH-CD-202-05: DECT Based Tissue Segmentation as Input to Monte Carlo Simulations for Proton Treatment Verification Using PET Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berndt, B; Wuerl, M; Dedes, G

    Purpose: To improve agreement of predicted and measured positron emitter yields in patients, after proton irradiation for PET-based treatment verification, using a novel dual energy CT (DECT) tissue segmentation approach, overcoming known deficiencies from single energy CT (SECT). Methods: DECT head scans of 5 trauma patients were segmented and compared to existing decomposition methods with a first focus on the brain. For validation purposes, three brain equivalent solutions [water, white matter (WM) and grey matter (GM) – equivalent with respect to their reference carbon and oxygen contents and CT numbers at 90kVp and 150kVp] were prepared from water, ethanol, sucrose and salt. The activities of all brain solutions, measured during a PET scan after uniform proton irradiation, were compared to Monte Carlo simulations. Simulation inputs were various solution compositions obtained from different segmentation approaches from DECT, SECT scans, and known reference composition. Virtual GM solution salt concentration corrections were applied based on DECT measurements of solutions with varying salt concentration. Results: The novel tissue segmentation showed qualitative improvements in %C for patient brain scans (ground truth unavailable). The activity simulations based on reference solution compositions agree with the measurement within 3–5% (4–8Bq/ml). These reference simulations showed an absolute activity difference between WM (20%C) and GM (10%C) to H2O (0%C) of 43 Bq/ml and 22 Bq/ml, respectively. Activity differences between reference simulations and segmented ones varied from −6 to 1 Bq/ml for DECT and −79 to 8 Bq/ml for SECT. Conclusion: Compared to the conventionally used SECT segmentation, the DECT based segmentation indicates a qualitative and quantitative improvement. In controlled solutions, a MC input based on DECT segmentation leads to better agreement with the reference. Future work will address the anticipated improvement of quantification accuracy in patients, comparing different tissue decomposition methods with an MR brain segmentation. Acknowledgement: DFG-MAP and HIT-Heidelberg Deutsche Forschungsgemeinschaft (MAP); Bundesministerium fur Bildung und Forschung (01IB13001)

  29. Relationship between negative differential thermal resistance and asymmetry segment size

    NASA Astrophysics Data System (ADS)

    Kong, Peng; Hu, Tao; Hu, Ke; Jiang, Zhenhua; Tang, Yi

    2018-03-01

    Negative differential thermal resistance (NDTR) was investigated in a system consisting of two dissimilar anharmonic lattices, exemplified by coupled Frenkel-Kontorova (FK) and Fermi-Pasta-Ulam (FPU) lattices (FK-FPU). Previous theoretical and numerical studies show that NDTR depends on the coupling constant, the interface, and the system size, but we find that the segment size is also an important factor. Interestingly, in this coupled FK-FPU model the NDTR region depends on the FK segment size rather than the FPU segment size. Remarkably, we observe NDTR in the strong interface-coupling regime, where previous studies found none. These results are conducive to further developments in designing and fabricating thermal devices.

  30. REFERENCE CONDITION APPROACH TO THE ASSESSMENT OF BIOLOGICAL INTEGRITY IN STREAMS OF THE SOUTHERN ROCKY MOUNTAINS AND ITS USE IN MEASURING THE EFFECTIVENESS OF MINE-REMEDIATION EFFORTS

    EPA Science Inventory

    A recent development in water quality assessment is the comparison of assemblage data from impacted stream segments with that for groups of segments representing reference (or minimally-impacted) conditions. The degree of impairment of a stream segment is expressed as metrics, su...

  31. Automated segmentation of blood-flow regions in large thoracic arteries using 3D-cine PC-MRI measurements.

    PubMed

    van Pelt, Roy; Nguyen, Huy; ter Haar Romeny, Bart; Vilanova, Anna

    2012-03-01

    Quantitative analysis of vascular blood flow, acquired by phase-contrast MRI, requires accurate segmentation of the vessel lumen. In clinical practice, 2D-cine velocity-encoded slices are inspected, and the lumen is segmented manually. However, segmentation of time-resolved volumetric blood-flow measurements is a tedious and time-consuming task requiring automation. Automated segmentation of large thoracic arteries, based solely on the 3D-cine phase-contrast MRI (PC-MRI) blood-flow data, was done. An active surface model, which is fast and topologically stable, was used. The active surface model requires an initial surface, approximating the desired segmentation. A method to generate this surface was developed based on a voxel-wise temporal maximum of blood-flow velocities. The active surface model balances forces, based on the surface structure and image features derived from the blood-flow data. The segmentation results were validated using volunteer studies, including time-resolved 3D and 2D blood-flow data. The segmented surface was intersected with a velocity-encoded PC-MRI slice, resulting in a cross-sectional contour of the lumen. These cross-sections were compared to reference contours that were manually delineated on high-resolution 2D-cine slices. The automated approach closely approximates the manual blood-flow segmentations, with error distances on the order of the voxel size. The initial surface provides a close approximation of the desired luminal geometry. This improves the convergence time of the active surface and facilitates parametrization. An active surface approach for vessel lumen segmentation was developed, suitable for quantitative analysis of 3D-cine PC-MRI blood-flow data. As opposed to prior thresholding and level-set approaches, the active surface model is topologically stable. A method to generate an initial approximate surface was developed, and various features that influence the segmentation model were evaluated. The active surface segmentation results were shown to closely approximate manual segmentations.
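
    The initialization described above, a voxel-wise temporal maximum of blood-flow speed followed by a threshold, can be sketched as follows; the array layout and the threshold choice are assumptions.

    ```python
    import numpy as np

    def initial_lumen_mask(velocity, speed_threshold):
        """Rough initial lumen mask from 3D-cine PC-MRI velocities.
        `velocity` is assumed to have shape (t, z, y, x, 3), holding the three
        velocity components per voxel and time point (layout is an assumption)."""
        speed = np.linalg.norm(velocity, axis=-1)   # speed per voxel and time point
        temporal_max = speed.max(axis=0)            # voxel-wise temporal maximum
        return temporal_max > speed_threshold       # seed region for the active surface
    ```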

  12. Operationalizing hippocampal volume as an enrichment biomarker for amnestic mild cognitive impairment trials: effect of algorithm, test-retest variability, and cut point on trial cost, duration, and sample size.

    PubMed

    Yu, Peng; Sun, Jia; Wolz, Robin; Stephenson, Diane; Brewer, James; Fox, Nick C; Cole, Patricia E; Jack, Clifford R; Hill, Derek L G; Schwarz, Adam J

    2014-04-01

    The objective of this study was to evaluate the effect of computational algorithm, measurement variability, and cut point on hippocampal volume (HCV)-based patient selection for clinical trials in mild cognitive impairment (MCI). We used normal control and amnestic MCI subjects from the Alzheimer's Disease Neuroimaging Initiative 1 (ADNI-1) as normative reference and screening cohorts. We evaluated the enrichment performance of 4 widely used hippocampal segmentation algorithms (FreeSurfer, Hippocampus Multi-Atlas Propagation and Segmentation (HMAPS), Learning Embeddings Atlas Propagation (LEAP), and NeuroQuant) in terms of 2-year changes in Mini-Mental State Examination (MMSE), Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog), and Clinical Dementia Rating Sum of Boxes (CDR-SB). We modeled the implications for sample size, screen fail rates, and trial cost and duration. HCV based patient selection yielded reduced sample sizes (by ∼40%-60%) and lower trial costs (by ∼30%-40%) across a wide range of cut points. These results provide a guide to the choice of HCV cut point for amnestic MCI clinical trials, allowing an informed tradeoff between statistical and practical considerations. Copyright © 2014 Elsevier Inc. All rights reserved.
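
    For context, trial sizing of the kind summarized above ultimately rests on a standard two-sample calculation relating the detectable treatment effect, the endpoint variability, and the error rates. The sketch below shows only the generic normal-approximation formula; the effect size and standard deviation are illustrative placeholders, not values from the study.

    ```python
    from scipy.stats import norm

    def two_sample_n_per_arm(delta, sd, alpha=0.05, power=0.8):
        """Normal-approximation sample size per arm for detecting a mean difference
        `delta` in a continuous endpoint with standard deviation `sd` (two-sided test).
        Generic formula only, not the ADNI-based enrichment model used in the study."""
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        return 2 * ((z_a + z_b) * sd / delta) ** 2

    # Illustrative numbers only: detecting a 0.6-point 2-year ADAS-Cog difference with SD 4
    print(round(two_sample_n_per_arm(delta=0.6, sd=4.0)))   # ~698 per arm
    ```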

  13. Sex- and Method-Specific Reference Values for Right Ventricular Strain by 2-Dimensional Speckle-Tracking Echocardiography.

    PubMed

    Muraru, Denisa; Onciul, Sebastian; Peluso, Diletta; Soriani, Nicola; Cucchini, Umberto; Aruta, Patrizia; Romeo, Gabriella; Cavalli, Giacomo; Iliceto, Sabino; Badano, Luigi P

    2016-02-01

    Despite the fact that assessment of right ventricular longitudinal strain (RVLS) carries important implications for patient diagnosis, prognosis, and treatment, its implementation in clinical settings has been hampered by limited reference values and by the lack of uniformity in the software, method, and definition used for measuring RVLS. Accordingly, this study was designed to establish (1) reference values for RVLS by 2-dimensional speckle-tracking echocardiography; and (2) their relationship with demographic, hemodynamic, and cardiac factors. In 276 healthy volunteers (55% women; age, 18-76 years), RVLS of the free wall and septum (6 segments) and of the free wall alone (3 segments) was obtained using 6- and 3-segment regions of interest, respectively. Feasibility of 6-segment RVLS was 92%. Free wall RVLS from the 3- versus the 6-segment region of interest had similar values, yet the 6-segment region of interest was more feasible (86% versus 73%; P<0.001) and more reproducible. Reference values (lower limits of normality) were as follows: 6-segment RVLS, -24.7±2.6% (-20.0%) for men and -26.7±3.1% (-20.3%) for women; 3-segment RVLS, -29.3±3.4% (-22.5%) for men and -31.6±4.0% (-23.3%) for women (P<0.001). Free wall RVLS was 5±2 strain units (%) larger in magnitude than 6-segment RVLS, 10±4% larger than septal RVLS, and 2±4% larger in women than in men (P<0.001). At multivariable analysis, age, sex, pulmonary systolic pressure, right atrial minimal volume, and right atrial and left ventricular longitudinal strain emerged as correlates of RVLS values. This is the largest study providing sex- and method-specific reference values for RVLS. Our data may foster the implementation of 2-dimensional speckle-tracking echocardiography-derived RV analysis in clinical practice. © 2016 American Heart Association, Inc.

  14. Comparison of different deep learning approaches for parotid gland segmentation from CT images

    NASA Astrophysics Data System (ADS)

    Hänsch, Annika; Schwier, Michael; Gass, Tobias; Morgas, Tomasz; Haas, Benjamin; Klein, Jan; Hahn, Horst K.

    2018-02-01

    The segmentation of target structures and organs at risk is a crucial and very time-consuming step in radiotherapy planning. Good automatic methods can significantly reduce the time clinicians have to spend on this task. Due to its variability in shape and often low contrast to surrounding structures, segmentation of the parotid gland is especially challenging. Motivated by the recent success of deep learning, we study different deep learning approaches for parotid gland segmentation. Particularly, we compare 2D, 2D ensemble and 3D U-Net approaches and find that the 2D U-Net ensemble yields the best results with a mean Dice score of 0.817 on our test data. The ensemble approach reduces false positives without the need for an automatic region of interest detection. We also apply our trained 2D U-Net ensemble to segment the test data of the 2015 MICCAI head and neck auto-segmentation challenge. With a mean Dice score of 0.861, our classifier exceeds the highest mean score in the challenge. This shows that the method generalizes well onto data from independent sites. Since appropriate reference annotations are essential for training but often difficult and expensive to obtain, it is important to know how many samples are needed to properly train a neural network. We evaluate the classifier performance after training with differently sized training sets (50-450) and find that 250 cases (without using extensive data augmentation) are sufficient to obtain good results with the 2D ensemble. Adding more samples does not significantly improve the Dice score of the segmentations.
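
    The Dice score used as the headline metric above is straightforward to compute for a pair of binary masks; a minimal sketch follows (generic definition, not tied to the authors' pipeline).

    ```python
    import numpy as np

    def dice_score(pred, ref, eps=1e-8):
        """Dice similarity coefficient between two binary segmentation masks."""
        pred = pred.astype(bool)
        ref = ref.astype(bool)
        intersection = np.logical_and(pred, ref).sum()
        return 2.0 * intersection / (pred.sum() + ref.sum() + eps)

    # Toy example: two overlapping squares on a 2D slice
    a = np.zeros((64, 64), dtype=bool); a[10:40, 10:40] = True
    b = np.zeros((64, 64), dtype=bool); b[15:45, 15:45] = True
    print(round(dice_score(a, b), 3))   # ~0.694
    ```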

  15. Thigh muscle segmentation of chemical shift encoding-based water-fat magnetic resonance images: The reference database MyoSegmenTUM.

    PubMed

    Schlaeger, Sarah; Freitag, Friedemann; Klupp, Elisabeth; Dieckmeyer, Michael; Weidlich, Dominik; Inhuber, Stephanie; Deschauer, Marcus; Schoser, Benedikt; Bublitz, Sarah; Montagnese, Federica; Zimmer, Claus; Rummeny, Ernst J; Karampinos, Dimitrios C; Kirschke, Jan S; Baum, Thomas

    2018-01-01

    Magnetic resonance imaging (MRI) can non-invasively assess muscle anatomy, exercise effects and pathologies with different underlying causes such as neuromuscular diseases (NMD). Quantitative MRI including fat fraction mapping using chemical shift encoding-based water-fat MRI has emerged for reliable determination of muscle volume and fat composition. The data analysis of water-fat images requires segmentation of the different muscles which has been mainly performed manually in the past and is a very time consuming process, currently limiting the clinical applicability. An automatization of the segmentation process would lead to a more time-efficient analysis. In the present work, the manually segmented thigh magnetic resonance imaging database MyoSegmenTUM is presented. It hosts water-fat MR images of both thighs of 15 healthy subjects and 4 patients with NMD with a voxel size of 3.2x2x4 mm3 with the corresponding segmentation masks for four functional muscle groups: quadriceps femoris, sartorius, gracilis, hamstrings. The database is freely accessible online at https://osf.io/svwa7/?view_only=c2c980c17b3a40fca35d088a3cdd83e2. The database is mainly meant as ground truth which can be used as training and test dataset for automatic muscle segmentation algorithms. The segmentation allows extraction of muscle cross sectional area (CSA) and volume. Proton density fat fraction (PDFF) of the defined muscle groups from the corresponding images and quadriceps muscle strength measurements/neurological muscle strength rating can be used for benchmarking purposes.

  16. Unsupervised Segmentation of Head Tissues from Multi-modal MR Images for EEG Source Localization.

    PubMed

    Mahmood, Qaiser; Chodorowski, Artur; Mehnert, Andrew; Gellermann, Johanna; Persson, Mikael

    2015-08-01

    In this paper, we present and evaluate an automatic unsupervised segmentation method, hierarchical segmentation approach (HSA)-Bayesian-based adaptive mean shift (BAMS), for use in the construction of a patient-specific head conductivity model for electroencephalography (EEG) source localization. It is based on a HSA and BAMS for segmenting the tissues from multi-modal magnetic resonance (MR) head images. The evaluation of the proposed method was done both directly in terms of segmentation accuracy and indirectly in terms of source localization accuracy. The direct evaluation was performed relative to a commonly used reference method brain extraction tool (BET)-FMRIB's automated segmentation tool (FAST) and four variants of the HSA using both synthetic data and real data from ten subjects. The synthetic data includes multiple realizations of four different noise levels and several realizations of typical noise with a 20% bias field level. The Dice index and Hausdorff distance were used to measure the segmentation accuracy. The indirect evaluation was performed relative to the reference method BET-FAST using synthetic two-dimensional (2D) multimodal magnetic resonance (MR) data with 3% noise and synthetic EEG (generated for a prescribed source). The source localization accuracy was determined in terms of localization error and relative error of potential. The experimental results demonstrate the efficacy of HSA-BAMS, its robustness to noise and the bias field, and that it provides better segmentation accuracy than the reference method and variants of the HSA. They also show that it leads to a more accurate localization accuracy than the commonly used reference method and suggest that it has potential as a surrogate for expert manual segmentation for the EEG source localization problem.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Qingqing; Li, Huilin; Wang, Tong

    Perfringolysin O (PFO) is a transmembrane (TM) β-barrel protein that inserts into mammalian cell membranes. Once inserted into membranes, PFO assembles into pore-forming oligomers containing 30–50 PFO monomers. These form a pore of up to 300 Å, far exceeding the size of most other proteinaceous pores. In this study, we found that altering PFO TM segment length can alter the size of PFO pores. A PFO mutant with lengthened TM segments oligomerized to a similar extent as wild-type PFO, and exhibited pore-forming activity and a pore size very similar to wild-type PFO as measured by electron microscopy and a leakage assay. In contrast, PFO with shortened TM segments exhibited a large reduction in pore-forming activity and pore size. This suggests that the interaction between TM segments can greatly affect the size of pores formed by TM β-barrel proteins. PFO may be a promising candidate for engineering pore size for various applications.

  18. Dissociation of somatic growth from segmentation drives gigantism in snakes.

    PubMed

    Head, Jason J; David Polly, P

    2007-06-22

    Body size is significantly correlated with number of vertebrae (pleomerism) in multiple vertebrate lineages, indicating that change in number of body segments produced during somitogenesis is an important factor in evolutionary change in body size, but the role of segmentation in the evolution of extreme sizes, including gigantism, has not been examined. We explored the relationship between body size and vertebral count in basal snakes that exhibit gigantism. Boids, pythonids and the typhlopid genera, Typhlops and Rhinotyphlops, possess a positive relationship between body size and vertebral count, confirming the importance of pleomerism; however, giant taxa possessed fewer than expected vertebrae, indicating that a separate process underlies the evolution of gigantism in snakes. The lack of correlation between body size and vertebral number in giant taxa demonstrates dissociation of segment production in early development from somatic growth during maturation, indicating that gigantism is achieved by modifying development at a different stage from that normally selected for changes in body size.

  19. Caudal migration and proliferation of renal progenitors regulates early nephron segment size in zebrafish.

    PubMed

    Naylor, Richard W; Dodd, Rachel C; Davidson, Alan J

    2016-10-19

    The nephron is the functional unit of the kidney and is divided into distinct proximal and distal segments. The factors determining nephron segment size are not fully understood. In zebrafish, the embryonic kidney has long been thought to differentiate in situ into two proximal tubule segments and two distal tubule segments (distal early; DE, and distal late; DL) with little involvement of cell movement. Here, we overturn this notion by performing lineage-labelling experiments that reveal extensive caudal movement of the proximal and DE segments and a concomitant compaction of the DL segment as it fuses with the cloaca. Laser-mediated severing of the tubule, such that the DE and DL are disconnected or that the DL and cloaca do not fuse, results in a reduction in tubule cell proliferation and significantly shortens the DE segment while the caudal movement of the DL is unaffected. These results suggest that the DL mechanically pulls the more proximal segments, thereby driving both their caudal extension and their proliferation. Together, these data provide new insights into early nephron morphogenesis and demonstrate the importance of cell movement and proliferation in determining initial nephron segment size.

  20. Automated measurement of pressure injury through image processing.

    PubMed

    Li, Dan; Mathews, Carol

    2017-11-01

    To develop an image processing algorithm to automatically measure pressure injuries using electronic pressure injury images stored in nursing documentation. Photographing pressure injuries and storing the images in the electronic health record is standard practice in many hospitals. However, the manual measurement of pressure injury is time-consuming, challenging and subject to intra/inter-reader variability owing to complexities of the pressure injury and the clinical environment. A cross-sectional algorithm development study. A set of 32 pressure injury images was obtained from a western Pennsylvania hospital. First, we transformed the images from an RGB (i.e. red, green and blue) colour space to a YCbCr colour space to eliminate interference from varying light conditions and skin colours. Second, a probability map, generated by a skin colour Gaussian model, guided the pressure injury segmentation process using the Support Vector Machine classifier. Third, after segmentation, the reference ruler - included in each of the images - enabled perspective transformation and determination of pressure injury size. Finally, two nurses independently measured those 32 pressure injury images, and the intraclass correlation coefficient was calculated. An image processing algorithm was developed to automatically measure the size of pressure injuries. Both inter- and intra-rater analysis achieved a good level of reliability. Validation of the size measurement of the pressure injury (1) demonstrates that our image processing algorithm is a reliable approach to monitoring pressure injury progress through clinical pressure injury images and (2) offers new insight into pressure injury evaluation and documentation. Once our algorithm is further developed, clinicians can be provided with an objective, reliable and efficient computational tool for segmentation and measurement of pressure injuries. With this, clinicians will be able to more effectively monitor the healing process of pressure injuries. © 2017 John Wiley & Sons Ltd.
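
    A minimal sketch of the first two steps described above (RGB-to-YCbCr conversion and a Gaussian skin-colour probability map) is given below; the BT.601 transform is standard, while the Gaussian mean and covariance are placeholders rather than the model fitted in the study.

    ```python
    import numpy as np

    def rgb_to_ycbcr(img):
        """Convert an 8-bit RGB image (H, W, 3) to YCbCr using the ITU-R BT.601
        full-range transform."""
        img = img.astype(np.float64)
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        y  =  0.299 * r + 0.587 * g + 0.114 * b
        cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
        cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
        return np.stack([y, cb, cr], axis=-1)

    def skin_probability(ycbcr, mean=(140.0, 110.0), cov=((80.0, 0.0), (0.0, 60.0))):
        """Per-pixel Gaussian likelihood of skin colour in the (Cr, Cb) plane.
        The mean and covariance are placeholders, not the paper's fitted model."""
        mean = np.asarray(mean)
        cov_inv = np.linalg.inv(np.asarray(cov))
        x = np.stack([ycbcr[..., 2], ycbcr[..., 1]], axis=-1) - mean   # (Cr, Cb)
        mahal = np.einsum('...i,ij,...j->...', x, cov_inv, x)
        return np.exp(-0.5 * mahal)

    rgb = (np.random.rand(32, 32, 3) * 255).astype(np.uint8)
    prob_map = skin_probability(rgb_to_ycbcr(rgb))   # would then guide the SVM-based segmentation
    ```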

  1. Refinement of ground reference data with segmented image data

    NASA Technical Reports Server (NTRS)

    Robinson, Jon W.; Tilton, James C.

    1991-01-01

    One of the ways to determine ground reference data (GRD) for satellite remote sensing data is to photo-interpret low altitude aerial photographs and then digitize the cover types on a digitizing tablet and register them to 7.5 minute U.S.G.S. maps (that were themselves digitized). The resulting GRD can be registered to the satellite image or vice versa. Unfortunately, there are many opportunities for error when using a digitizing tablet, and the resolution of the edges for the GRD depends on the spacing of the points selected on the digitizing tablet. One of the consequences of this is that when overlaid on the image, errors and missed detail in the GRD become evident. An approach is discussed for correcting these errors and adding detail to the GRD through the use of a highly interactive, visually oriented process. This process involves the use of overlaid visual displays of the satellite image data, the GRD, and a segmentation of the satellite image data. Several prototype programs were implemented which provide a means of taking a segmented image and using the edges from the reference data to mask out those segment edges that are beyond a certain distance from the reference data edges. Then, using the reference data edges as a guide, those segment edges that remain and that are judged not to be image versions of the reference edges are manually marked and removed. The prototype programs that were developed and the algorithmic refinements that facilitate execution of this task are described.

  2. Cell segmentation in histopathological images with deep learning algorithms by utilizing spatial relationships.

    PubMed

    Hatipoglu, Nuh; Bilgin, Gokhan

    2017-10-01

    In many computerized methods for cell detection, segmentation, and classification in digital histopathology that have recently emerged, the task of cell segmentation remains a chief problem for image processing in designing computer-aided diagnosis (CAD) systems. In research and diagnostic studies on cancer, pathologists can use CAD systems as second readers to analyze high-resolution histopathological images. Since cell detection and segmentation are critical for cancer grade assessments, cellular and extracellular structures should primarily be extracted from histopathological images. In response, we sought to identify a useful cell segmentation approach with histopathological images that uses not only prominent deep learning algorithms (i.e., convolutional neural networks, stacked autoencoders, and deep belief networks), but also spatial relationships, information of which is critical for achieving better cell segmentation results. To that end, we collected cellular and extracellular samples from histopathological images by windowing in small patches with various sizes. In experiments, the segmentation accuracies of the methods used improved as the window sizes increased due to the addition of local spatial and contextual information. Once we compared the effects of training sample size and influence of window size, results revealed that the deep learning algorithms, especially convolutional neural networks and partly stacked autoencoders, performed better than conventional methods in cell segmentation.

  3. Beam hardening artifact reduction using dual energy computed tomography: implications for myocardial perfusion studies

    PubMed Central

    Carrascosa, Patricia; Cipriano, Silvina; De Zan, Macarena; Deviggiano, Alejandro; Capunay, Carlos; Cury, Ricardo C.

    2015-01-01

    Background: Myocardial computed tomography perfusion (CTP) using conventional single energy (SE) imaging is influenced by the presence of beam hardening artifacts (BHA), occasionally resembling perfusion defects and commonly observed at the left ventricular posterobasal wall (PB). We therefore sought to explore the ability of dual energy (DE) CTP to attenuate the presence of BHA. Methods: Consecutive patients without a history of coronary artery disease who were referred for computed tomography coronary angiography (CTCA) due to atypical chest pain and a normal stress-rest SPECT, and who had absent or mild coronary atherosclerosis, constituted the study population. The study group was acquired using DE and the control group using SE imaging. Results: Demographic characteristics were similar between groups, as well as the heart rate and the effective radiation dose. Myocardial signal density (SD) levels were evaluated in 280 basal segments among the DE group (140 PB segments for each energy level from 40 to 100 keV; and 140 reference segments), and in 40 basal segments (at the same locations) among the SE group. Among the DE group, myocardial SD levels and myocardial SD ratio evaluated at the reference segment were higher at low energy levels, with significantly lower SD levels at increasing energy levels. Myocardial signal-to-noise ratio was not significantly influenced by the energy level applied, although 70 keV was identified as the energy level with the best overall signal-to-noise ratio. Significant differences were identified between the PB segment and the reference segment among the lower energy levels, whereas at ≥70 keV myocardial SD levels were similar. Compared to DE reconstructions at the best energy level (70 keV), SE acquisitions showed no significant differences overall regarding myocardial SD levels among the reference segments. Conclusions: BHA that influence the assessment of myocardial perfusion can be attenuated using DE at 70 keV or higher. PMID:25774354

  4. Measurement of aspheric mirror segments using Fizeau interferometry with CGH correction

    NASA Astrophysics Data System (ADS)

    Burge, James H.; Zhao, Chunyu; Dubin, Matt

    2010-07-01

    Large aspheric primary mirrors are proposed that use hundreds of segments, all of which must be aligned and phased to approximate the desired continuous mirror. We present a method of measuring these concave segments with a Fizeau interferometer where a spherical convex reference surface is held a few millimeters from the aspheric segment. The aspheric shape is accommodated by a small computer generated hologram (CGH). Different segments are measured by replacing the CGH. As a Fizeau test, nearly all of the optical elements and air spaces are common to both the measurement and reference wavefront, so the sensitivities are not tight. Also, since the reference surface of the test plate is common to all tests, this system achieves excellent control of the radius of curvature variation from one part to another. This paper describes the test system design and analysis for such a test, and presents data from a similar 1.4-m test performed at the University of Arizona.

  5. The limits of boundaries: unpacking localization and cognitive mapping relative to a boundary.

    PubMed

    Zhou, Ruojing; Mou, Weimin

    2018-05-01

    Previous research (Zhou, Mou, Journal of Experimental Psychology: Learning, Memory and Cognition 42(8):1316-1323, 2016) showed that learning individual locations relative to a single landmark, compared to learning relative to a boundary, led to more accurate inferences of inter-object spatial relations (cognitive mapping of multiple locations). Following our past findings, the current study investigated whether the larger number of reference points provided by a homogeneous circular boundary, as well as less accessible knowledge of direct spatial relations among the multiple reference points, would lead to less effective cognitive mapping relative to the boundary. Accordingly, we manipulated (a) the number of primary reference points (one segment drawn from a circular boundary, four such segments, vs. the complete boundary) available when participants were localizing four objects sequentially (Experiment 1) and (b) the extendedness of each of the four segments (Experiment 2). The results showed that cognitive mapping was the least accurate in the whole boundary condition. However, expanding each of the four segments did not affect the accuracy of cognitive mapping until the four were connected to form a continuous boundary. These findings indicate that when encoding locations relative to a homogeneous boundary, participants segmented the boundary into differentiated pieces and subsequently chose the most informative local part (i.e., the segment closest in distance to one location) as the primary reference point for a particular location. During this process, direct spatial relations among the reference points were likely not attended to. These findings suggest that people might encode and represent bounded space in a fragmented fashion when localizing within a homogeneous boundary.

  6. Market Segmentation for Information Services.

    ERIC Educational Resources Information Center

    Halperin, Michael

    1981-01-01

    Discusses the advantages and limitations of market segmentation as strategy for the marketing of information services made available by nonprofit organizations, particularly libraries. Market segmentation is defined, a market grid for libraries is described, and the segmentation of information services is outlined. A 16-item reference list is…

  7. Scalable Joint Segmentation and Registration Framework for Infant Brain Images.

    PubMed

    Dong, Pei; Wang, Li; Lin, Weili; Shen, Dinggang; Wu, Guorong

    2017-03-15

    The first year of life is the most dynamic and perhaps the most critical phase of postnatal brain development. The ability to accurately measure structural changes is critical in early brain development studies, which rely heavily on the performance of image segmentation and registration techniques. However, infant image segmentation and registration, if deployed independently, each encounter many more challenges than segmentation/registration of adult brains due to dynamic appearance changes with rapid brain development. In fact, image segmentation and registration of infant images can assist each other to overcome the above challenges by using the growth trajectories (i.e., temporal correspondences) learned from a large set of training subjects with complete longitudinal data. Specifically, a one-year-old image with ground-truth tissue segmentation can first be set as the reference domain. Then, to register the infant image of a new subject at an earlier age, we can estimate its tissue probability maps, i.e., with a sparse patch-based multi-atlas label fusion technique, where only the training images at the respective age are considered as atlases since they have similar image appearance. Next, these probability maps can be fused as a good initialization to guide the level set segmentation. Thus, image registration between the new infant image and the reference image is freed from the difficulty of appearance changes, by establishing correspondences upon the reasonably segmented images. Importantly, the segmentation of the new infant image can be further enhanced by propagating the much more reliable label fusion heuristics at the reference domain to the corresponding locations of the new infant image via the learned growth trajectories, which allows image segmentation and registration to assist each other. It is worth noting that our joint segmentation and registration framework is also flexible enough to handle the registration of any two infant images, even with a significant age gap in the first year of life, by linking their joint segmentation and registration through the reference domain. Thus, our proposed joint segmentation and registration method is scalable to various registration tasks in early brain development studies. Promising segmentation and registration results have been achieved for infant brain MR images aged from 2 weeks to 1 year, indicating the applicability of our method to early brain development studies.

  8. Comparison of T1-weighted 2D TSE, 3D SPGR, and two-point 3D Dixon MRI for automated segmentation of visceral adipose tissue at 3 Tesla.

    PubMed

    Fallah, Faezeh; Machann, Jürgen; Martirosian, Petros; Bamberg, Fabian; Schick, Fritz; Yang, Bin

    2017-04-01

    To evaluate and compare conventional T1-weighted 2D turbo spin echo (TSE), T1-weighted 3D volumetric interpolated breath-hold examination (VIBE), and two-point 3D Dixon-VIBE sequences for automatic segmentation of visceral adipose tissue (VAT) volume at 3 Tesla by measuring and compensating for errors arising from intensity nonuniformity (INU) and partial volume effects (PVE). The body trunks of 28 volunteers with body mass index values ranging from 18 to 41.2 kg/m2 (30.02 ± 6.63 kg/m2) were scanned at 3 Tesla using three imaging techniques. Automatic methods were applied to reduce INU and PVE and to segment VAT. The automatically segmented VAT volumes obtained from all acquisitions were then statistically and objectively evaluated against the manually segmented (reference) VAT volumes. Comparing the reference volumes with the VAT volumes automatically segmented over the uncorrected images showed that INU led to an average relative volume difference of -59.22 ± 11.59, 2.21 ± 47.04, and -43.05 ± 5.01 % for the TSE, VIBE, and Dixon images, respectively, while PVE led to average differences of -34.85 ± 19.85, -15.13 ± 11.04, and -33.79 ± 20.38 %. After signal correction, differences of -2.72 ± 6.60, 34.02 ± 36.99, and -2.23 ± 7.58 % were obtained between the reference and the automatically segmented volumes. A paired-sample two-tailed t test revealed no significant difference between the reference and automatically segmented VAT volumes of the corrected TSE (p = 0.614) and Dixon (p = 0.969) images, but showed a significant VAT overestimation using the corrected VIBE images. Under similar imaging conditions and spatial resolution, automatically segmented VAT volumes obtained from the corrected TSE and Dixon images agreed with each other and with the reference volumes. These results demonstrate the efficacy of the signal correction methods and the similar accuracy of TSE and Dixon imaging for automatic volumetry of VAT at 3 Tesla.

  9. Size-Constrained Region Merging: A New Tool to Derive Basic Landcover Units from Remote Sensing Imagery

    NASA Astrophysics Data System (ADS)

    Castilla, G.

    2004-09-01

    Landcover maps typically represent the territory as a mosaic of contiguous units (polygons) that are assumed to correspond to geographic entities (e.g., lakes, forests or villages). They may also be viewed as representing a particular level of a landscape hierarchy where each polygon is a holon - an object made of subobjects and part of a superobject. The focal level portrayed in the map is distinguished from other levels by the average size of the objects composing it. Moreover, the focal level is bounded by the minimum size that objects of this level are supposed to have. Based on this framework, we have developed a segmentation method that defines a partition on a multiband image such that i) the mean size of segments is close to the one specified; ii) each segment exceeds the required minimum size; and iii) the internal homogeneity of segments is maximal given the size constraints. This paper briefly describes the method, focusing on its region merging stage. The most distinctive feature of the latter is that, while the merging sequence is ordered by increasing dissimilarity as in conventional methods, there is no need to define a threshold on the dissimilarity measure between adjacent segments.
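
    The sketch below illustrates the flavour of such a size-constrained merging stage on a toy region adjacency graph: adjacent segments are merged in order of increasing dissimilarity, but only while at least one of them is still below the minimum size. It is a simplified illustration under these assumptions, not the authors' algorithm.

    ```python
    def merge_regions(regions, adjacency, min_size):
        """Greedy size-constrained merging on a region adjacency graph.

        regions:   dict id -> {"mean": float, "size": int} from an initial over-segmentation
        adjacency: dict id -> set of neighbouring region ids
        min_size:  every surviving region must reach at least this many pixels
        """
        def dissimilarity(a, b):
            return abs(regions[a]["mean"] - regions[b]["mean"])

        while True:
            candidates = [(dissimilarity(a, b), a, b)
                          for a in regions for b in adjacency[a] if a < b
                          and (regions[a]["size"] < min_size or regions[b]["size"] < min_size)]
            if not candidates:
                return regions, adjacency
            _, a, b = min(candidates)          # most similar pair involving a too-small region
            # merge b into a: area-weighted mean, summed size, union of neighbours
            sa, sb = regions[a]["size"], regions[b]["size"]
            regions[a] = {"mean": (regions[a]["mean"] * sa + regions[b]["mean"] * sb) / (sa + sb),
                          "size": sa + sb}
            for n in adjacency[b] - {a}:
                adjacency[n].discard(b); adjacency[n].add(a); adjacency[a].add(n)
            adjacency[a].discard(b)
            del regions[b], adjacency[b]

    # Toy example: four regions in a row, minimum size of 4 pixels
    regs = {0: {"mean": 10, "size": 2}, 1: {"mean": 12, "size": 3},
            2: {"mean": 40, "size": 6}, 3: {"mean": 41, "size": 1}}
    adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
    print(merge_regions(regs, adj, min_size=4)[0])
    ```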

  10. Bibliography of the Edwards Aquifer, Texas, through 1993

    USGS Publications Warehouse

    Menard, J.A.

    1995-01-01

    The bibliography comprises 1,022 multidisciplinary references to technical and general literature for the three regions of the Edwards aquifer, Texas-San Antonio area; Barton Springs segment, Austin area; and northern segment, Austin area. The references in the bibliography were compiled from computerized data bases and from published bibliographies and reports. Dates of references range from the late 1800's through 1993. Subject and author indexes are included.

  11. Dynamic updating atlas for heart segmentation with a nonlinear field-based model.

    PubMed

    Cai, Ken; Yang, Rongqian; Yue, Hongwei; Li, Lihua; Ou, Shanxing; Liu, Feng

    2017-09-01

    Segmentation of cardiac computed tomography (CT) images is an effective method for assessing the dynamic function of the heart and lungs. In the atlas-based heart segmentation approach, the quality of segmentation usually relies upon atlas images, and the selection of those reference images is a key step. The optimal goal in this selection process is to have the reference images as close to the target image as possible. This study proposes an atlas dynamic update algorithm using a scheme of nonlinear deformation field. The proposed method is based on the features among dual-source CT (DSCT) slices. The extraction of these features will form a base to construct an average model and the created reference atlas image is updated during the registration process. A nonlinear field-based model was used to effectively implement a 4D cardiac segmentation. The proposed segmentation framework was validated with 14 4D cardiac CT sequences. The algorithm achieved an acceptable accuracy (1.0-2.8 mm). Our proposed method that combines a nonlinear field-based model and dynamic updating atlas strategies can provide an effective and accurate way for whole heart segmentation. The success of the proposed method largely relies on the effective use of the prior knowledge of the atlas and the similarity explored among the to-be-segmented DSCT sequences. Copyright © 2016 John Wiley & Sons, Ltd.

  12. The error of L5/S1 joint moment calculation in a body-centered non-inertial reference frame when the fictitious force is ignored.

    PubMed

    Xu, Xu; Faber, Gert S; Kingma, Idsart; Chang, Chien-Chi; Hsiang, Simon M

    2013-07-26

    In ergonomics studies, linked segment models are commonly used for estimating dynamic L5/S1 joint moments during lifting tasks. The kinematics data input to these models are with respect to an arbitrary stationary reference frame. However, a body-centered reference frame, which is defined using the position and the orientation of human body segments, is sometimes used to conveniently identify the location of the load relative to the body. When a body-centered reference frame is moving with the body, it is a non-inertial reference frame and fictitious force exists. Directly applying a linked segment model to the kinematics data with respect to a body-centered non-inertial reference frame will ignore the effect of this fictitious force and introduce errors during L5/S1 moment estimation. In the current study, various lifting tasks were performed in the laboratory environment. The L5/S1 joint moments during the lifting tasks were calculated by a linked segment model with respect to a stationary reference frame and to a body-centered non-inertial reference frame. The results indicate that applying a linked segment model with respect to a body-centered non-inertial reference frame will result in overestimating the peak L5/S1 joint moments of the coronal plane, sagittal plane, and transverse plane during lifting tasks by 78%, 2%, and 59% on average, respectively. The instant when the peak moment occurred was delayed by 0.13, 0.03, and 0.09 s on average, correspondingly for the three planes. The root-mean-square errors of the L5/S1 joint moment for the three planes are 21 Nm, 19 Nm, and 9 Nm, correspondingly. Copyright © 2013 Elsevier Ltd. All rights reserved.
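
    For reference, the omitted term can be made explicit with the standard kinematic relation between accelerations measured in the laboratory frame and in a translating, rotating body-centred frame (a generic rigid-body result; the symbols are mine, not taken from the paper):

    ```latex
    % Acceleration of a point expressed in a body-centred frame that translates with
    % acceleration a_0 and rotates with angular velocity \omega (rate of change \alpha)
    % relative to the lab frame:
    \ddot{\mathbf{r}}_{\text{rel}} = \ddot{\mathbf{r}}_{\text{lab}}
        - \mathbf{a}_0
        - \boldsymbol{\alpha} \times \mathbf{r}
        - \boldsymbol{\omega} \times (\boldsymbol{\omega} \times \mathbf{r})
        - 2\,\boldsymbol{\omega} \times \dot{\mathbf{r}}_{\text{rel}}
    % Feeding \ddot{\mathbf{r}}_{\text{rel}} into a linked segment model as if it were
    % \ddot{\mathbf{r}}_{\text{lab}} drops the last four (fictitious) terms, which is the
    % source of the moment errors reported above.
    ```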

  13. Automatic DNA Diagnosis for 1D Gel Electrophoresis Images using Bio-image Processing Technique.

    PubMed

    Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Shaw, Philip J; Ukosakit, Kittipat; Tragoonrung, Somvong; Tongsima, Sissades

    2015-01-01

    DNA gel electrophoresis is a molecular biology technique for separating different sizes of DNA fragments. Applications of DNA gel electrophoresis include DNA fingerprinting (genetic diagnosis), size estimation of DNA, and DNA separation for Southern blotting. Accurate interpretation of DNA banding patterns from electrophoretic images can be laborious and error prone when a large number of bands are interrogated manually. Although many bio-imaging techniques have been proposed, none of them can fully automate the typing of DNA owing to the complexities of migration patterns typically obtained. We developed an image-processing tool that automatically calls genotypes from DNA gel electrophoresis images. The image processing workflow comprises three main steps: 1) lane segmentation, 2) extraction of DNA bands and 3) band genotyping classification. The tool was originally intended to facilitate large-scale genotyping analysis of sugarcane cultivars. We tested the proposed tool on 10 gel images (433 cultivars) obtained from polyacrylamide gel electrophoresis (PAGE) of PCR amplicons for detecting intron length polymorphisms (ILP) on one locus of the sugarcanes. These gel images demonstrated many challenges in automated lane/band segmentation in image processing including lane distortion, band deformity, high degree of noise in the background, and bands that are very close together (doublets). Using the proposed bio-imaging workflow, lanes and DNA bands contained within are properly segmented, even for adjacent bands with aberrant migration that cannot be separated by conventional techniques. The software, called GELect, automatically performs genotype calling on each lane by comparing with an all-banding reference, which was created by clustering the existing bands into the non-redundant set of reference bands. The automated genotype calling results were verified by independent manual typing by molecular biologists. This work presents an automated genotyping tool from DNA gel electrophoresis images, called GELect, which was written in Java and made available through the imageJ framework. With a novel automated image processing workflow, the tool can accurately segment lanes from a gel matrix, intelligently extract distorted and even doublet bands that are difficult to identify by existing image processing tools. Consequently, genotyping from DNA gel electrophoresis can be performed automatically allowing users to efficiently conduct large scale DNA fingerprinting via DNA gel electrophoresis. The software is freely available from http://www.biotec.or.th/gi/tools/gelect.

  14. LACIE performance predictor final operational capability program description, volume 2

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Given the swath table files, the segment set for one country and cloud cover data, the SAGE program determines how many times and under what conditions each segment is accessed by satellites. The program writes a record for each segment on a data file which contains the pertinent acquisition data. The weather data file can also be generated from a NASA supplied tape. The Segment Acquisition Selector Program (SACS) selects data from the segment reference file based upon data input manually and from a crop window file. It writes the extracted data to a data acquisition file and prints two summary reports. The POUT program reads from associated LACIE files and produces printed reports. The major types of reports that can be produced are: (1) Substrate Reference Data Reports, (2) Population Mean, Standard Deviation and Histogram Reports, (3) Histograms of Monte Carlo Statistics Reports, and (4) Frequency of Sample Segment Acquisitions Reports.

  15. A new calibration methodology for thorax and upper limbs motion capture in children using magneto and inertial sensors.

    PubMed

    Ricci, Luca; Formica, Domenico; Sparaci, Laura; Lasorsa, Francesca Romana; Taffoni, Fabrizio; Tamilia, Eleonora; Guglielmelli, Eugenio

    2014-01-09

    Recent advances in wearable sensor technologies for motion capture have produced devices, mainly based on magneto and inertial measurement units (M-IMU), that are now suitable for out-of-the-lab use with children. In fact, the reduced size, weight and the wireless connectivity meet the requirement of minimum obtrusivity and give scientists the possibility to analyze children's motion in daily life contexts. Typical use of magneto and inertial measurement units (M-IMU) motion capture systems is based on attaching a sensing unit to each body segment of interest. The correct use of this setup requires a specific calibration methodology that allows mapping measurements from the sensors' frames of reference into useful kinematic information in the human limbs' frames of reference. The present work addresses this specific issue, presenting a calibration protocol to capture the kinematics of the upper limbs and thorax in typically developing (TD) children. The proposed method allows the construction, on each body segment, of a meaningful system of coordinates that are representative of real physiological motions and that are referred to as functional frames (FFs). We will also present a novel cost function for the Levenberg-Marquardt algorithm, to retrieve the rotation matrices between each sensor frame (SF) and the corresponding FF. Reported results on a group of 40 children suggest that the method is repeatable and reliable, opening the way to the extensive use of this technology for out-of-the-lab motion capture in children.

  16. Pulmonary vessel segmentation utilizing curved planar reformation and optimal path finding (CROP) in computed tomographic pulmonary angiography (CTPA) for CAD applications

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Kuriakose, Jean W.; Chughtai, Aamer; Wei, Jun; Hadjiiski, Lubomir M.; Guo, Yanhui; Patel, Smita; Kazerooni, Ella A.

    2012-03-01

    Vessel segmentation is a fundamental step in an automated pulmonary embolism (PE) detection system. The purpose of this study is to improve the segmentation scheme for pulmonary vessels affected by PE and other lung diseases. We have developed a multiscale hierarchical vessel enhancement and segmentation (MHES) method for pulmonary vessel tree extraction based on the analysis of eigenvalues of Hessian matrices. However, it is difficult to segment the pulmonary vessels accurately under suboptimal conditions, such as vessels occluded by PEs, surrounded by lymphoid tissues or lung diseases, and crossing with other vessels. In this study, we developed a new vessel refinement method utilizing the curved planar reformation (CPR) technique combined with an optimal path finding method (MHES-CROP). The MHES-segmented vessels, straightened in the CPR volume, were refined using adaptive gray-level thresholding, where the local threshold was obtained from a least-square estimation of a spline curve fitted to the gray levels of the vessel along the straightened volume. An optimal path finding method based on Dijkstra's algorithm was finally used to trace the correct path for the vessel of interest. Two and eight CTPA scans were randomly selected as training and test data sets, respectively. Forty volumes of interest (VOIs) containing "representative" vessels were manually segmented by a radiologist experienced in CTPA interpretation and used as the reference standard. The results show that, for the 32 test VOIs, the average percentage volume error relative to the reference standard was improved from 32.9+/-10.2% using the MHES method to 9.9+/-7.9% using the MHES-CROP method. The accuracy of vessel segmentation was improved significantly (p<0.05). The intraclass correlation coefficient (ICC) of the segmented vessel volume between the automated segmentation and the reference standard was improved from 0.919 to 0.988. Quantitative comparison of the MHES method and the MHES-CROP method with the reference standard was also evaluated by the Bland-Altman plot. This preliminary study indicates that the MHES-CROP method has the potential to improve PE detection.
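
    The optimal path finding step can be illustrated with a generic Dijkstra search over a cost image, as sketched below; this toy 2-D version is a simplification for illustration, not the MHES-CROP implementation.

    ```python
    import heapq
    import numpy as np

    def dijkstra_path(cost, start, goal):
        """Trace the minimum-cumulative-cost 4-connected path between two pixels of a
        2-D cost image (low cost = likely vessel centre)."""
        h, w = cost.shape
        dist = np.full((h, w), np.inf)
        prev = {}
        dist[start] = cost[start]
        heap = [(cost[start], start)]
        while heap:
            d, (r, c) = heapq.heappop(heap)
            if (r, c) == goal:
                break
            if d > dist[r, c]:
                continue                      # stale heap entry
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                    dist[nr, nc] = d + cost[nr, nc]
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (dist[nr, nc], (nr, nc)))
        path, node = [], goal
        while node != start:                  # walk back from the goal to the start
            path.append(node)
            node = prev[node]
        return [start] + path[::-1]

    # Toy cost image: a cheap horizontal "vessel" through an expensive background
    img = np.ones((20, 50)) * 10.0
    img[10, :] = 1.0
    print(dijkstra_path(img, (10, 0), (10, 49))[:3])   # path stays on row 10
    ```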

  17. Effects of counterion size and backbone rigidity on the dynamics of ionic polymer melts and glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Yao; Bocharova, Vera; Ma, Mengze

    Backbone rigidity, counterion size and the static dielectric constant affect the glass transition temperature, segmental relaxation time and decoupling between counterion and segmental dynamics in significant manners.

  18. Focal segmental glomerulosclerosis

    MedlinePlus

    ... Alternative names: Segmental glomerulosclerosis; Focal sclerosis with hyalinosis. References: Appel GB, Radhakrishnan J. Glomerular disorders and nephrotic syndromes. In: Goldman L, ...

  19. Mosaic generalized neurofibromatosis 1: report of two cases.

    PubMed

    Hardin, Jori; Behm, Allan; Haber, Richard M

    2014-01-01

    We report two cases of mosaic generalized neurofibromatosis 1 (NF1) and review the history of the classification of segmental neurofibromatosis (SNF; Riccardi type NF-V). Somatic mutations giving rise to limited disease, such as segmental neurofibromatosis, are manifestations of mosaicism. If the mutation occurs before tissue differentiation, the clinical phenotype will be generalized disease. Mutations that occur later in development give rise to disease that is confined to a single region. Segmental neurofibromatosis is caused by a somatic mutation of neurofibromatosis type 1, and should not be regarded as a distinct entity from neurofibromatosis 1. Cases previously referred to as unilateral or bilateral segmental neurofibromatosis are now best referred to as mosaic generalized or mosaic localized neurofibromatosis 1.

  20. Computer-Based Image Analysis for Plus Disease Diagnosis in Retinopathy of Prematurity: Performance of the "i-ROP" System and Image Features Associated With Expert Diagnosis.

    PubMed

    Ataer-Cansizoglu, Esra; Bolon-Canedo, Veronica; Campbell, J Peter; Bozkurt, Alican; Erdogmus, Deniz; Kalpathy-Cramer, Jayashree; Patel, Samir; Jonas, Karyn; Chan, R V Paul; Ostmo, Susan; Chiang, Michael F

    2015-11-01

    We developed and evaluated the performance of a novel computer-based image analysis system for grading plus disease in retinopathy of prematurity (ROP), and identified the image features, shapes, and sizes that best correlate with expert diagnosis. A dataset of 77 wide-angle retinal images from infants screened for ROP was collected. A reference standard diagnosis was determined for each image by combining image grading from 3 experts with the clinical diagnosis from ophthalmoscopic examination. Manually segmented images were cropped into a range of shapes and sizes, and a computer algorithm was developed to extract tortuosity and dilation features from arteries and veins. Each feature was fed into our system to identify the set of characteristics that yielded the highest-performing system compared to the reference standard, which we refer to as the "i-ROP" system. Among the tested crop shapes, sizes, and measured features, point-based measurements of arterial and venous tortuosity (combined), and a large circular cropped image (with radius 6 times the disc diameter), provided the highest diagnostic accuracy. The i-ROP system achieved 95% accuracy for classifying preplus and plus disease compared to the reference standard. This was comparable to the performance of the 3 individual experts (96%, 94%, 92%), and significantly higher than the mean performance of 31 nonexperts (81%). This comprehensive analysis of computer-based plus disease suggests that it may be feasible to develop a fully-automated system based on wide-angle retinal images that performs comparably to expert graders at three-level plus disease discrimination. Computer-based image analysis, using objective and quantitative retinal vascular features, has potential to complement clinical ROP diagnosis by ophthalmologists.

  1. Assessment of multiresolution segmentation for delimiting drumlins in digital elevation models.

    PubMed

    Eisank, Clemens; Smith, Mike; Hillier, John

    2014-06-01

    Mapping or "delimiting" landforms is one of geomorphology's primary tools. Computer-based techniques such as land-surface segmentation allow the emulation of the process of manual landform delineation. Land-surface segmentation exhaustively subdivides a digital elevation model (DEM) into morphometrically-homogeneous irregularly-shaped regions, called terrain segments. Terrain segments can be created from various land-surface parameters (LSP) at multiple scales, and may therefore potentially correspond to the spatial extents of landforms such as drumlins. However, this depends on the segmentation algorithm, the parameterization, and the LSPs. In the present study we assess the widely used multiresolution segmentation (MRS) algorithm for its potential in providing terrain segments which delimit drumlins. Supervised testing was based on five 5-m DEMs that represented a set of 173 synthetic drumlins at random but representative positions in the same landscape. Five LSPs were tested, and four variants were computed for each LSP to assess the impact of median filtering of DEMs, and logarithmic transformation of LSPs. The testing scheme (1) employs MRS to partition each LSP exhaustively into 200 coarser scales of terrain segments by increasing the scale parameter ( SP ), (2) identifies the spatially best matching terrain segment for each reference drumlin, and (3) computes four segmentation accuracy metrics for quantifying the overall spatial match between drumlin segments and reference drumlins. Results of 100 tests showed that MRS tends to perform best on LSPs that are regionally derived from filtered DEMs, and then log-transformed. MRS delineated 97% of the detected drumlins at SP values between 1 and 50. Drumlin delimitation rates with values up to 50% are in line with the success of manual interpretations. Synthetic DEMs are well-suited for assessing landform quantification methods such as MRS, since subjectivity in the reference data is avoided which increases the reliability, validity and applicability of results.

  2. Carbon fiber reinforcements for sheet molding composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozcan, Soydan; Paulauskas, Felix L.

    A method of processing a carbon fiber tow includes the steps of providing a carbon fiber tow made of a plurality of carbon filaments, depositing a sizing composition at spaced-apart sizing sites along a length of the tow, leaving unsized interstitial regions of the tow, and cross-cutting the tow into a plurality of segments. Each segment includes at least a portion of one of the sizing sites and at least a portion of at least one of the unsized regions of the tow, the unsized region including an end portion of the segment.

  3. Local and global evaluation for remote sensing image segmentation

    NASA Astrophysics Data System (ADS)

    Su, Tengfei; Zhang, Shengwei

    2017-08-01

    In object-based image analysis, producing an accurate segmentation is usually a very important issue that needs to be solved before image classification or target recognition, and the study of segmentation evaluation methods is key to solving it. Almost all existing evaluation strategies focus only on global performance assessment. However, such methods are ineffective when two segmentation results with very similar overall performance have very different local error distributions. To overcome this problem, this paper presents an approach that can quantify segmentation incorrectness both locally and globally. In doing so, region-overlapping metrics are utilized to quantify each reference geo-object's over- and under-segmentation error. These quantified error values are used to produce segmentation error maps, which effectively delineate local segmentation error patterns. The error values for all of the reference geo-objects are aggregated through area-weighted summation, so that global indicators can be derived. An experiment using two scenes of very different high-resolution images showed that the global evaluation part of the proposed approach was almost as effective as two other global evaluation methods, and the local part was a useful complement for comparing different segmentation results.
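
    A minimal sketch of per-object over- and under-segmentation errors with area-weighted aggregation is shown below; the overlap definitions are common region-overlapping metrics and are not necessarily the exact formulas used in the paper.

    ```python
    import numpy as np

    def over_under_segmentation(reference_labels, segment_labels):
        """Per-reference-object over- and under-segmentation errors plus their
        area-weighted global aggregates.

        For each reference geo-object R we take its maximally overlapping segment S and use
            over  = 1 - |R intersect S| / |R|    (R split across several segments)
            under = 1 - |R intersect S| / |S|    (S spills beyond R)
        """
        per_object = {}
        total_area = 0
        g_over = g_under = 0.0
        for r_id in np.unique(reference_labels):
            if r_id == 0:                        # 0 = background by convention here
                continue
            r_mask = reference_labels == r_id
            seg_ids, counts = np.unique(segment_labels[r_mask], return_counts=True)
            s_id = seg_ids[np.argmax(counts)]    # best-matching segment
            inter = counts.max()
            over = 1.0 - inter / r_mask.sum()
            under = 1.0 - inter / (segment_labels == s_id).sum()
            per_object[int(r_id)] = (over, under)
            area = r_mask.sum()
            total_area += area
            g_over += over * area
            g_under += under * area
        return per_object, g_over / total_area, g_under / total_area

    # Toy example: one reference object covered by a slightly larger segment
    ref = np.zeros((20, 20), int); ref[5:15, 5:15] = 1
    seg = np.zeros((20, 20), int); seg[4:16, 4:16] = 7
    print(over_under_segmentation(ref, seg))
    ```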

  4. Generating Ground Reference Data for a Global Impervious Surface Survey

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; De Colstoun, Eric Brown; Wolfe, Robert E.; Tan, Bin; Huang, Chengquan

    2012-01-01

    We are developing an approach for generating ground reference data in support of a project to produce a 30m impervious cover data set of the entire Earth for the years 2000 and 2010 based on the Landsat Global Land Survey (GLS) data set. Since sufficient ground reference data for training and validation is not available from ground surveys, we are developing an interactive tool, called HSegLearn, to facilitate the photo-interpretation of 1 to 2 m spatial resolution imagery data, which we will use to generate the needed ground reference data at 30m. Through the submission of selected region objects and positive or negative examples of impervious surfaces, HSegLearn enables an analyst to automatically select groups of spectrally similar objects from a hierarchical set of image segmentations produced by the HSeg image segmentation program at an appropriate level of segmentation detail, and label these region objects as either impervious or nonimpervious.

  5. Approach for scene reconstruction from the analysis of a triplet of still images

    NASA Astrophysics Data System (ADS)

    Lechat, Patrick; Le Mestre, Gwenaelle; Pele, Danielle

    1997-03-01

    Three-dimensional modeling of a scene from the automatic analysis of 2D image sequences is a big challenge for future interactive audiovisual services based on 3D content manipulation, such as virtual sets, 3D teleconferencing and interactive television. We propose a scheme that computes 3D object models from stereo analysis of image triplets shot by calibrated cameras. After matching the different views with a correlation-based algorithm, a depth map referring to a given view is built by using a fusion criterion taking into account depth coherency, visibility constraints and correlation scores. Because luminance segmentation helps to compute accurate object borders and to detect and improve unreliable depth values, a two-step segmentation algorithm using both the depth map and the gray-level image is applied to extract the object masks. First, an edge detection segments the luminance image into regions, and a multimodal thresholding method selects depth classes from the depth map. Then the regions are merged and labelled with the different depth class numbers by using a coherence test on depth values according to the rate of reliable and dominant depth values and the size of the regions. The structures of the segmented objects are obtained with a constrained Delaunay triangulation followed by a refining stage. Finally, texture mapping is performed using Open Inventor or VRML 1.0 tools.

  6. Robustness of Radiomic Features in [11C]Choline and [18F]FDG PET/CT Imaging of Nasopharyngeal Carcinoma: Impact of Segmentation and Discretization.

    PubMed

    Lu, Lijun; Lv, Wenbing; Jiang, Jun; Ma, Jianhua; Feng, Qianjin; Rahmim, Arman; Chen, Wufan

    2016-12-01

    Radiomic features are increasingly utilized to evaluate tumor heterogeneity in PET imaging and to enable enhanced prediction of therapy response and outcome. An important ingredient for success in the translation of radiomic features to clinical reality is to quantify and ascertain their robustness. In the present work, we studied the impact of segmentation and discretization on 88 radiomic features in 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG) and [11C]methyl-choline ([11C]choline) positron emission tomography/X-ray computed tomography (PET/CT) imaging of nasopharyngeal carcinoma. Forty patients underwent [18F]FDG PET/CT scans. Of these, nine patients were imaged on a different day utilizing [11C]choline PET/CT. Tumors were delineated using reference manual segmentation by the consensus of three expert physicians, using 41%, 50%, and 70% maximum standardized uptake value (SUVmax) thresholds with background correction, and using Nestle's method and watershed and region growing methods, and then discretized with fixed bin sizes (0.05, 0.1, 0.2, 0.5, and 1) in units of SUV. A total of 88 features, including 21 first-order intensity features, 10 shape features, and 57 second- and higher-order textural features, were extracted from the tumors. The robustness of the features was evaluated via the intraclass correlation coefficient (ICC) for seven kinds of segmentation methods (involving all 88 features) and five kinds of discretization bin size (involving the 57 second- and higher-order features). Forty-four (50%) and 55 (63%) features depicted ICC ≥0.8 with respect to segmentation as obtained from [18F]FDG and [11C]choline, respectively. Thirteen (23%) and 12 (21%) features showed ICC ≥0.8 with respect to discretization as obtained from [18F]FDG and [11C]choline, respectively. Six features obtained from both [18F]FDG and [11C]choline had ICC ≥0.8 for both segmentation and discretization, five of which were gray-level co-occurrence matrix (GLCM) features (SumEntropy, Entropy, DifEntropy, Homogeneity1, and Homogeneity2) and one of which was a neighborhood gray-tone difference matrix (NGTDM) feature (Coarseness). Discretization generated larger effects on features than segmentation for both tracers. Features extracted from [11C]choline were more robust than those from [18F]FDG with respect to segmentation.
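
    Fixed-bin-size discretization of the kind evaluated above can be sketched as follows; the anchoring of the first bin at SUV = 0 and the 1-based bin index are assumptions, not details taken from the paper.

    ```python
    import numpy as np

    def discretize_fixed_bin_size(suv, bin_size=0.5, suv_min=0.0):
        """Fixed-bin-size discretization of SUVs inside a tumour mask prior to
        texture-matrix computation (anchor and indexing convention assumed)."""
        return np.floor((suv - suv_min) / bin_size).astype(int) + 1

    suv_values = np.array([0.7, 2.3, 4.9, 11.2])
    for b in (0.05, 0.1, 0.2, 0.5, 1.0):       # the five bin sizes tested in the study
        print(b, discretize_fixed_bin_size(suv_values, bin_size=b))
    ```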

  7. Assessment of LVEF using a new 16-segment wall motion score in echocardiography.

    PubMed

    Lebeau, Real; Serri, Karim; Lorenzo, Maria Di; Sauvé, Claude; Le, Van Hoai Viet; Soulières, Vicky; El-Rayes, Malak; Pagé, Maude; Zaïani, Chimène; Garot, Jérôme; Poulin, Frédéric

    2018-06-01

    The Simpson biplane method and 3D transthoracic echocardiography (TTE), radionuclide angiography (RNA) and cardiac magnetic resonance imaging (CMR) are the most accepted techniques for left ventricular ejection fraction (LVEF) assessment. The wall motion score index (WMSI) by TTE is an accepted complement. However, the conversion from WMSI to LVEF is obtained through a regression equation, which may limit its use. In this retrospective study, we aimed to validate a new method to derive LVEF from the wall motion score in 95 patients. The new score attributes a segmental EF to each LV segment based on the wall motion score and averages all 16 segmental EFs into a global LVEF. This segmental EF score was calculated on TTE in 95 patients, and RNA was used as the reference LVEF method. LVEF using the new segmental EF 15-40-65 score on TTE was compared to the reference method using linear regression and Bland-Altman analyses. The median LVEF was 45% (interquartile range 32-53%; range 15 to 65%). Our new segmental EF 15-40-65 score derived on TTE correlated strongly with RNA-LVEF (r = 0.97). Overall, the new score resulted in good agreement of LVEF compared to RNA (mean bias 0.61%). The standard deviation of the distribution of inter-method differences for the comparison of the new score with RNA was 6.2%, indicating good precision. LVEF assessment using segmental EF derived from the wall motion score applied to each of the 16 LV segments has excellent correlation and agreement with a reference method. © 2018 The authors.
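    The record names the score "segmental EF 15-40-65" but does not spell out the mapping from wall motion grade to segmental EF. The sketch below is a hypothetical illustration of the averaging step, assuming normal, hypokinetic, and akinetic/dyskinetic segments are assigned 65%, 40%, and 15%, respectively; the mapping is inferred from the score's name, not taken from the paper.

```python
# Hypothetical mapping from 16-segment wall motion grades to segmental EF (%).
# The 15-40-65 assignment below is an assumption inferred from the score's name,
# not the authors' published table.
SEGMENTAL_EF = {1: 65, 2: 40, 3: 15, 4: 15}  # 1=normal, 2=hypokinetic, 3=akinetic, 4=dyskinetic

def lvef_from_wall_motion(scores):
    """Average the per-segment EF over all 16 LV segments."""
    if len(scores) != 16:
        raise ValueError("expected one wall motion grade per LV segment (16 total)")
    return sum(SEGMENTAL_EF[s] for s in scores) / len(scores)

# Example: 10 normal, 4 hypokinetic and 2 akinetic segments.
print(lvef_from_wall_motion([1] * 10 + [2] * 4 + [3] * 2))  # 52.5 (%)
```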

  8. A Rapid Segmentation-Insensitive "Digital Biopsy" Method for Radiomic Feature Extraction: Method and Pilot Study Using CT Images of Non-Small Cell Lung Cancer.

    PubMed

    Echegaray, Sebastian; Nair, Viswam; Kadoch, Michael; Leung, Ann; Rubin, Daniel; Gevaert, Olivier; Napel, Sandy

    2016-12-01

    Quantitative imaging approaches compute features within images' regions of interest. Segmentation is rarely completely automatic, requiring time-consuming editing by experts. We propose a new paradigm, called "digital biopsy," that allows for the collection of intensity- and texture-based features from these regions at least 1 order of magnitude faster than the current manual or semiautomated methods. A radiologist reviewed automated segmentations of lung nodules from 100 preoperative volume computed tomography scans of patients with non-small cell lung cancer, and manually adjusted the nodule boundaries in each section, to be used as a reference standard, requiring up to 45 minutes per nodule. We also asked a different expert to generate a digital biopsy for each patient using a paintbrush tool to paint a contiguous region of each tumor over multiple cross-sections, a procedure that required an average of <3 minutes per nodule. We simulated additional digital biopsies using morphological procedures. Finally, we compared the features extracted from these digital biopsies with our reference standard using intraclass correlation coefficient (ICC) to characterize robustness. Comparing the reference standard segmentations to our digital biopsies, we found that 84/94 features had an ICC >0.7; comparing erosions and dilations, using a sphere of 1.5-mm radius, of our digital biopsies to the reference standard segmentations resulted in 41/94 and 53/94 features, respectively, with ICCs >0.7. We conclude that many intensity- and texture-based features remain consistent between the reference standard and our method while substantially reducing the amount of operator time required.
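    The eroded and dilated variants of the "digital biopsy" mentioned above can be simulated with standard morphological operators. A minimal sketch, assuming a 1.5-mm ball structuring element adapted to the CT voxel spacing; the mask and spacing below are placeholders, not the study data.

```python
import numpy as np
from scipy import ndimage

def spherical_element(radius_mm, spacing_mm):
    """Binary ball structuring element for anisotropic voxel spacing (z, y, x in mm)."""
    r = [int(np.ceil(radius_mm / s)) for s in spacing_mm]
    zz, yy, xx = np.ogrid[-r[0]:r[0] + 1, -r[1]:r[1] + 1, -r[2]:r[2] + 1]
    dist = np.sqrt((zz * spacing_mm[0]) ** 2 +
                   (yy * spacing_mm[1]) ** 2 +
                   (xx * spacing_mm[2]) ** 2)
    return dist <= radius_mm

# Hypothetical binary "digital biopsy" mask and CT voxel spacing (z, y, x) in mm.
mask = np.zeros((40, 64, 64), dtype=bool)
mask[15:25, 20:40, 20:40] = True
spacing = (2.5, 0.7, 0.7)

ball = spherical_element(1.5, spacing)
eroded = ndimage.binary_erosion(mask, structure=ball)
dilated = ndimage.binary_dilation(mask, structure=ball)
print(mask.sum(), eroded.sum(), dilated.sum())  # voxel counts before/after morphology
```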

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    CRESSWELL,M.W.; ALLEN,R.A.; GHOSHTAGORE,R.N.

    This paper describes the fabrication and measurement of the linewidths of the reference segments of cross-bridge resistors patterned in (100) Bonded and Etched Back Silicon-on-Insulator (BESOI) material. The critical dimensions (CD) of the reference segments of a selection of the cross-bridge resistor test structures were measured both electrically and by Scanning-Electron Microscopy (SEM) cross-section imaging. The reference-segment features were aligned with <110> directions in the BESOI surface material and had drawn linewidths ranging from 0.35 to 3.0 µm. They were defined by a silicon micro-machining process which results in their sidewalls being atomically planar and smooth and inclined at 54.737° to the surface (100) plane of the substrate. This (100) implementation may usefully complement the attributes of the previously reported vertical-sidewall one for selected reference-material applications. For example, the non-orthogonal intersection of the sidewalls and top-surface planes of the reference-segment features may alleviate difficulties encountered with atomic-force microscope measurements. In such applications it has been reported that it may be difficult to maintain probe-tip control at the sharp 90° outside corner between the sidewalls and the upper surface. A second application is refining top-down image-processing algorithms and checking instrument performance. Novel aspects of the (100) SOI implementation that are reported here include the cross-bridge resistor test-structure architecture and details of its fabrication. The long-term goal is to develop a technique for the determination of the absolute dimensions of the trapezoidal cross-sections of the cross-bridge resistors' reference segments, as a prelude to developing them for dimensional reference applications. This is believed to be the first report of electrical CD measurements made on test structures of the cross-bridge resistor type that have been patterned in (100) SOI material. The electrical CD results are compared with cross-section SEM measurements made on the same features.

  10. TU-A-9A-06: Semi-Automatic Segmentation of Skin Cancer in High-Frequency Ultrasound Images: Initial Comparison with Histology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Y; Li, X; Fishman, K

    Purpose: In skin-cancer radiotherapy, the assessment of skin lesions is challenging, particularly because important features such as depth and width are hard to determine. The aim of this study is to develop an interactive segmentation method to delineate the tumor boundary using high-frequency ultrasound images and to correlate the segmentation results with the histopathological tumor dimensions. Methods: We analyzed 6 patients who had a total of 10 skin lesions involving the face, scalp, and hand. The patients' skin lesions were scanned using a high-frequency ultrasound system (Episcan, LONGPORT, INC., PA, U.S.A.) with a 30-MHz single-element transducer. The lateral resolution was 14.6 micron and the axial resolution was 3.85 micron for the ultrasound image. Semiautomatic image segmentation was performed to extract the cancer region, using a robust statistics driven active contour algorithm. The corresponding histology images were also obtained after tumor resection and served as the reference standards in this study. Results: Eight out of the 10 lesions were successfully segmented. The ultrasound tumor delineation correlates well with the histology assessment in all the measurements, such as depth, size, and shape. The depths measured by ultrasound differ by an average of 9.3% from those in the histology images. The remaining 2 cases suffered from mismatch between the pathology and ultrasound images. Conclusion: High-frequency ultrasound is a noninvasive, accurate and easily accessible modality for imaging skin cancer. Our segmentation method, combined with high-frequency ultrasound technology, provides a promising tool to estimate the extent of the tumor to guide the radiotherapy procedure and monitor treatment response.

  11. Transferring diffractive optics from research to commercial applications: Part II - size estimations for selected markets

    NASA Astrophysics Data System (ADS)

    Brunner, Robert

    2014-04-01

    In a series of two contributions, decisive business-related aspects of the current process of transferring research results on diffractive optical elements (DOEs) into commercial solutions are discussed. In part I, the focus was on the patent landscape. Here, in part II, market estimations concerning DOEs for selected applications are presented, comprising classical spectroscopic gratings, security features on banknotes, DOEs for high-end applications, e.g., for the semiconductor manufacturing market, and diffractive intra-ocular lenses. The derived market sizes refer to the optical elements themselves, rather than to the instruments they enable. The estimated market volumes are mainly addressed to scientifically and technologically oriented optical engineers, to serve as a rough classification of the commercial dimensions of DOEs in the different market segments, and do not claim to be exhaustive.

  12. Assessment of Multiresolution Segmentation for Extracting Greenhouses from WORLDVIEW-2 Imagery

    NASA Astrophysics Data System (ADS)

    Aguilar, M. A.; Aguilar, F. J.; García Lorca, A.; Guirado, E.; Betlej, M.; Cichon, P.; Nemmaoui, A.; Vallario, A.; Parente, C.

    2016-06-01

    The latest breed of very high resolution (VHR) commercial satellites opens new possibilities for cartographic and remote sensing applications. In this context, the object-based image analysis (OBIA) approach has proved to be the best option when working with VHR satellite imagery. OBIA considers spectral, geometric, textural and topological attributes associated with meaningful image objects. Thus, the first step of OBIA, referred to as segmentation, is to delineate objects of interest. Determination of an optimal segmentation is crucial for a good performance of the second stage in OBIA, the classification process. The main goal of this work is to assess the multiresolution segmentation algorithm provided by the eCognition software for delineating greenhouses from WorldView-2 multispectral orthoimages. Specifically, the focus is on finding the optimal parameters of the multiresolution segmentation approach (i.e., Scale, Shape and Compactness) for plastic greenhouses. The optimum Scale parameter estimation was based on the idea of local variance of object heterogeneity within a scene (ESP2 tool). Moreover, different segmentation results were attained by using different combinations of Shape and Compactness values. Assessment of segmentation quality, based on the discrepancy between reference polygons and corresponding image segments, was carried out to identify the optimal setting of the multiresolution segmentation parameters. Three discrepancy indices were used: Potential Segmentation Error (PSE), Number-of-Segments Ratio (NSR) and Euclidean Distance 2 (ED2).
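    The three discrepancy indices can be computed from a labeled reference raster and a labeled segmentation raster. The sketch below follows one common formulation (PSE as the area of corresponding segments lying outside their matched reference polygons relative to the total reference area, NSR as the relative mismatch in object counts, and ED2 as their Euclidean combination); the ≥50%-overlap correspondence rule is an assumption, since several criteria exist in the literature.

```python
import numpy as np

def discrepancy_indices(reference, segments, overlap_thresh=0.5):
    """PSE, NSR and ED2 from labeled rasters (0 = background)."""
    ref_ids = np.unique(reference[reference > 0])
    ref_area = np.count_nonzero(reference > 0)
    over_area, n_corresponding = 0, 0
    for sid in np.unique(segments[segments > 0]):
        seg_mask = segments == sid
        seg_area = int(seg_mask.sum())
        inside = reference[seg_mask]
        overlaps = {rid: np.count_nonzero(inside == rid) for rid in ref_ids}
        rid_best, best = max(overlaps.items(), key=lambda kv: kv[1])
        if best / seg_area >= overlap_thresh:      # segment corresponds to a reference polygon
            n_corresponding += 1
            over_area += seg_area - best           # part of the segment outside its polygon
    pse = over_area / ref_area                                  # Potential Segmentation Error
    nsr = abs(len(ref_ids) - n_corresponding) / len(ref_ids)    # Number-of-Segments Ratio
    ed2 = float(np.hypot(pse, nsr))                             # Euclidean Distance 2
    return pse, nsr, ed2

# Toy example: one reference greenhouse covered by two image segments.
reference = np.zeros((50, 50), int); reference[10:40, 10:40] = 1
segments = np.zeros((50, 50), int); segments[10:40, 10:25] = 1; segments[10:40, 25:42] = 2
print(discrepancy_indices(reference, segments))
```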

  13. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach

    NASA Technical Reports Server (NTRS)

    Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different sizes of sampling units shows that the precision of the wheat estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.

  14. Weakly supervised automatic segmentation and 3D modeling of the knee joint from MR images

    NASA Astrophysics Data System (ADS)

    Amami, Amal; Ben Azouz, Zouhour

    2013-12-01

    Automatic segmentation and 3D modeling of the knee joint from MR images is a challenging task. Most of the existing techniques require the tedious manual segmentation of a training set of MRIs. We present an approach that necessitates the manual segmentation of only one MR image. It is based on a volumetric active appearance model (AAM). First, a dense tetrahedral mesh is automatically created on a reference MR image that is arbitrarily selected. Second, a pairwise non-rigid registration between each MRI from a training set and the reference MRI is computed. The non-rigid registration is based on a piecewise affine deformation using the created tetrahedral mesh. The minimum description length is then used to bring all the MR images into correspondence. An average image and tetrahedral mesh, as well as a set of main modes of variation, are generated using the established correspondence. Any manual segmentation of the average MRI can be mapped to other MR images using the AAM. The proposed approach has the advantage of simultaneously generating 3D reconstructions of the surface as well as a 3D solid model of the knee joint. The generated surfaces and tetrahedral meshes present the interesting property of fulfilling a correspondence between different MR images. This paper shows preliminary results of the proposed approach. It demonstrates the automatic segmentation and 3D reconstruction of a knee joint obtained by mapping a manual segmentation of a reference image.

  15. [Comparison of Quantification of Myocardial Infarct Size by One Breath Hold Single Shot PSIR Sequence and Segmented FLASH-PSIR Sequence at 3.0 Tesla MR].

    PubMed

    Cheng, Wei; Cai, Shu; Sun, Jia-yu; Xia, Chun-chao; Li, Zhen-lin; Chen, Yu-cheng; Zhong, Yao-zu

    2015-05-01

    To compare two sequences [single shot true-FISP-PSIR (single shot-PSIR) and segmented-turbo-FLASH-PSIR (segmented-PSIR)] for quantification of myocardial infarct size at 3.0 tesla MRI. 38 patients with clinically confirmed myocardial infarction underwent comprehensive gadolinium-enhanced cardiac MRI on a 3.0 tesla MRI system (Trio, Siemens). Myocardial delayed enhancement (MDE) was performed with the single shot-PSIR and segmented-PSIR sequences separately 12-20 min after gadopentetate dimeglumine injection (0.15 mmol/kg). The quality of the MDE images was analysed by experienced physicians. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were compared between the two techniques. Myocardial infarct size was quantified automatically by dedicated software (Q-mass, Medis). All subjects were scanned on the 3.0T MR successfully. No significant difference was found in the SNR and CNR of the image quality between the two sequences (P>0.05), nor in the total myocardial volume (P>0.05). Furthermore, there was still no difference in infarct size [single shot-PSIR (30.87 ± 15.72) mL, segmented-PSIR (29.26 ± 14.07) mL] or ratio [single shot-PSIR (22.94% ± 10.94%), segmented-PSIR (20.75% ± 8.78%)] between the two sequences (P>0.05). However, the average acquisition time of single shot-PSIR (21.4 s) was less than that of segmented-PSIR (380 s). Single shot-PSIR is equal to segmented-PSIR in detecting myocardial infarct size with less acquisition time, which is valuable for clinical application and further research.

  16. Individual bone structure segmentation and labeling from low-dose chest CT

    NASA Astrophysics Data System (ADS)

    Liu, Shuang; Xie, Yiting; Reeves, Anthony P.

    2017-03-01

    The segmentation and labeling of the individual bones serve as the first step toward the fully automated measurement of skeletal characteristics and the detection of abnormalities such as skeletal deformities, osteoporosis, and vertebral fractures. Moreover, the identified landmarks on the segmented bone structures can potentially provide a relatively reliable location reference for other non-rigid human organs, such as the breast, heart and lung, thereby facilitating the corresponding image analysis and registration. A fully automated anatomy-directed framework for the segmentation and labeling of the individual bone structures from low-dose chest CT is presented in this paper. The proposed system consists of four main stages: First, both clavicles are segmented and labeled by fitting a piecewise cylindrical envelope. Second, the sternum is segmented under the spatial constraints provided by the segmented clavicles. Third, all ribs are segmented and labeled based on 3D region growing within the volume of interest defined with reference to the spinal canal centerline and lungs. Fourth, the individual thoracic vertebrae are segmented and labeled by image intensity based analysis in the spatial region constrained by the previously segmented bone structures. The system performance was validated with 1270 low-dose chest CT scans through visual evaluation. Satisfactory performance was obtained in 97.1% of cases for clavicle segmentation and labeling, in 97.3% of cases for sternum segmentation, in 97.2% of cases for rib segmentation, in 94.2% of cases for rib labeling, in 92.4% of cases for vertebra segmentation and in 89.9% of cases for vertebra labeling.

  17. On the role of modeling choices in estimation of cerebral aneurysm wall tension.

    PubMed

    Ramachandran, Manasi; Laakso, Aki; Harbaugh, Robert E; Raghavan, Madhavan L

    2012-11-15

    To assess various approaches to estimating pressure-induced wall tension in intracranial aneurysms (IA) and their effect on the stratification of subjects in a study population. Three-dimensional models of 26 IAs (9 ruptured and 17 unruptured) were segmented from Computed Tomography Angiography (CTA) images. Wall tension distributions in these patient-specific geometric models were estimated using various approaches, differing in the morphological detail utilized or the modeling choices made. For all subjects in the study population, the peak wall tension was estimated using all investigated approaches and compared to a reference approach: nonlinear finite element (FE) analysis using the Fung anisotropic model with regionally varying material fiber directions. Comparisons between approaches focused on assessing the similarity in stratification of IAs within the population based on peak wall tension. The tension-based stratification of IAs deviated to some extent from the reference approach as less geometric detail was incorporated. Interestingly, the size of the cerebral aneurysm, as captured by a single size measure, was the predominant determinant of peak wall tension-based stratification. Within FE approaches, simplifications to isotropy, material linearity and geometric linearity caused a gradual deviation from the reference estimates, but it was minimal and resulted in little to no impact on stratification of IAs. Differences in modeling choices made without patient-specific model parameters had little impact on tension-based IA stratification in this population. Increasing morphological detail did impact the estimated peak wall tension, but size was the predominant determinant. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. 75 FR 3277 - Notice of Final Federal Agency Actions on State Highway 99 (Segment F-2) in Texas

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-20

    ... on State Highway 99 (Segment F-2) in Texas AGENCY: Federal Highway Administration (FHWA), DOT. ACTION... Highway 99) Segment F-2, from State Highway 249 to Interstate Highway 45 (I-45) in Harris County, Texas... (State Highway 99) Segment F-2 from State Highway 249 to I-45 in Harris County; FHWA Project Reference...

  19. Image Information Mining Utilizing Hierarchical Segmentation

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Marchisio, Giovanni; Koperski, Krzysztof; Datcu, Mihai

    2002-01-01

    The Hierarchical Segmentation (HSEG) algorithm is an approach for producing high quality, hierarchically related image segmentations. The VisiMine image information mining system utilizes clustering and segmentation algorithms for reducing visual information in multispectral images to a manageable size. The project discussed herein seeks to enhance the VisiMine system through incorporating hierarchical segmentations from HSEG into the VisiMine system.

  20. The Sheath-less Planar Langmuir Probe

    NASA Astrophysics Data System (ADS)

    Cooke, D. L.

    2017-12-01

    The Langmuir probe is one of the oldest plasma diagnostics, providing the plasma density and species temperature from analysis of a current-voltage curve as the voltage is swept over a practically chosen range. The analysis depends on a knowledge or theory of the many factors that influence the current-voltage curve, including probe shape, size, nearby perturbations, and the voltage reference. For applications in Low Earth Orbit, the Planar Langmuir Probe, PLP, is an attractive geometry because the ram ion current is very constant over many volts of a sweep, allowing the ion density and electron temperature to be determined independently with the same instrument at different points on the sweep. However, when the physical voltage reference is itself small and electrically floating, as with a small spacecraft, the spacecraft and probe system become a double probe where the current collection theory depends on the interaction of the spacecraft with the plasma, which is generally not as simple as the probe itself. The Sheath-less PLP, SPLP, interlaces on a single ram-facing surface two variably biased probe elements, broken into many small and intertwined segments on a scale smaller than the plasma Debye length. The SPLP is electrically isolated from the rest of the spacecraft. For relative bias potentials of a few volts, the ion current to all segments of each element will be constant, while the electron currents will vary as a function of the element potential and the electron temperature. Because the segments are small, intertwined, and floating, the assembly will always present the same floating potential to the plasma, with minimal sheath growth as a function of voltage, thus sheath-less and still planar. This concept has been modelled with Nascap and tested with a physical model inserted into a Low Earth Orbit-like chamber plasma. Results will be presented.

  1. Phonotactics, Neighborhood Activation, and Lexical Access for Spoken Words

    PubMed Central

    Vitevitch, Michael S.; Luce, Paul A.; Pisoni, David B.; Auer, Edward T.

    2012-01-01

    Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed. PMID:10433774

  2. Adapted all-numerical correlator for face recognition applications

    NASA Astrophysics Data System (ADS)

    Elbouz, M.; Bouzidi, F.; Alfalou, A.; Brosseau, C.; Leonard, I.; Benkelfat, B.-E.

    2013-03-01

    In this study, we suggest and validate an all-numerical implementation of a VanderLugt correlator which is optimized for face recognition applications. The main goal of this implementation is to take advantage of the benefits of correlation methods (detection, localization, and identification of a target object within a scene) while exploiting the reconfigurability of numerical approaches. This technique requires a numerical implementation of the optical Fourier transform. We pay special attention to adapting the correlation filter to this numerical implementation. One main goal of this work is to reduce the size of the filter in order to decrease the memory space required for real-time applications. To fulfil this requirement, we code the reference images with 8 bits and study the effect of this coding on the performance of several composite filters (phase-only filter, binary phase-only filter). The saturation effect degrades the decision-making performance of the correlator when filters contain up to nine references. Further, an optimization based on a segmented composite filter is proposed. Using this approach, we present tests with different faces demonstrating that the above-mentioned saturation effect is significantly reduced while minimizing the size of the learning database.
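    A minimal numerical sketch of the correlation step with a phase-only filter (POF) is given below; the 8-bit "face" reference and the shifted test scene are random placeholders, and real composite or segmented filters would combine several such references.

```python
import numpy as np

def phase_only_filter(reference):
    """Phase-only filter (POF): keep only the spectral phase of the reference image."""
    spectrum = np.fft.fft2(reference)
    return np.conj(spectrum) / (np.abs(spectrum) + 1e-12)

def correlate(scene, pof):
    """All-numerical correlation: multiply the scene spectrum by the filter, inverse FFT."""
    plane = np.fft.ifft2(np.fft.fft2(scene) * pof)
    return np.abs(np.fft.fftshift(plane))

# Hypothetical 8-bit reference image and a scene containing it at an offset.
rng = np.random.default_rng(1)
reference = (rng.random((128, 128)) * 255).astype(np.uint8)
scene = np.roll(reference, (10, 20), axis=(0, 1)).astype(float)

peak_plane = correlate(scene, phase_only_filter(reference.astype(float)))
print(np.unravel_index(peak_plane.argmax(), peak_plane.shape))  # correlation peak location
```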

  3. Hidden Markov random field model and Broyden-Fletcher-Goldfarb-Shanno algorithm for brain image segmentation

    NASA Astrophysics Data System (ADS)

    Guerrout, EL-Hachemi; Ait-Aoudia, Samy; Michelucci, Dominique; Mahiou, Ramdane

    2018-05-01

    Many routine medical examinations produce images of patients suffering from various pathologies. With the huge number of medical images, manual analysis and interpretation have become a tedious task. Thus, automatic image segmentation has become essential for diagnosis assistance. Segmentation consists in dividing the image into homogeneous and significant regions. We focus on hidden Markov random fields, referred to as HMRF, to model the segmentation problem. This modelling leads to a classical function minimisation problem. The Broyden-Fletcher-Goldfarb-Shanno algorithm, referred to as BFGS, is one of the most powerful methods for solving unconstrained optimisation problems. In this paper, we investigate the combination of HMRF and the BFGS algorithm to perform the segmentation operation. The proposed method shows very good segmentation results compared with well-known approaches. The tests are conducted on brain magnetic resonance image databases (BrainWeb and IBSR) largely used to objectively compare the results obtained. The well-known Dice coefficient (DC) was used as the similarity metric. The experimental results show that, in many cases, our proposed method approaches the perfect segmentation with a Dice coefficient above 0.9. Moreover, it generally outperforms other methods in the tests conducted.
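    For reference, the Dice coefficient used above as the similarity metric compares two binary masks as twice the overlap divided by the total mask size. A minimal sketch with toy masks:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice coefficient between two binary segmentation masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total else 1.0

# Toy example: two overlapping square "segmentations".
seg = np.zeros((64, 64), bool); seg[10:40, 10:40] = True
ref = np.zeros((64, 64), bool); ref[15:45, 15:45] = True
print(round(dice_coefficient(seg, ref), 3))  # ~0.694 for these masks
```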

  4. A shape-based segmentation method for mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Dong, Zhen

    2013-07-01

    Segmentation of mobile laser point clouds of urban scenes into objects is an important step for post-processing (e.g., interpretation) of point clouds. Point clouds of urban scenes contain numerous objects with significant size variability, complex and incomplete structures, and holes or variable point densities, raising great challenges for the segmentation of mobile laser point clouds. This paper addresses these challenges by proposing a shape-based segmentation method. The proposed method first calculates the optimal neighborhood size of each point to derive the geometric features associated with it, and then classifies the point clouds according to geometric features using support vector machines (SVMs). Second, a set of rules is defined to segment the classified point clouds, and a similarity criterion for segments is proposed to overcome over-segmentation. Finally, the segmentation output is merged based on topological connectivity into a meaningful geometrical abstraction. The proposed method has been tested on point clouds of two urban scenes obtained by different mobile laser scanners. The results show that the proposed method segments large-scale mobile laser point clouds with good accuracy and computationally efficient time cost, and that it segments pole-like objects particularly well.

  5. Multivendor Spectral-Domain Optical Coherence Tomography Dataset, Observer Annotation Performance Evaluation, and Standardized Evaluation Framework for Intraretinal Cystoid Fluid Segmentation.

    PubMed

    Wu, Jing; Philip, Ana-Maria; Podkowinski, Dominika; Gerendas, Bianca S; Langs, Georg; Simader, Christian; Waldstein, Sebastian M; Schmidt-Erfurth, Ursula M

    2016-01-01

    Development of image analysis and machine learning methods for segmentation of clinically significant pathology in retinal spectral-domain optical coherence tomography (SD-OCT), used in disease detection and prediction, is limited due to the availability of expertly annotated reference data. Retinal segmentation methods use datasets that either are not publicly available, come from only one device, or use different evaluation methodologies making them difficult to compare. Thus we present and evaluate a multiple expert annotated reference dataset for the problem of intraretinal cystoid fluid (IRF) segmentation, a key indicator in exudative macular disease. In addition, a standardized framework for segmentation accuracy evaluation, applicable to other pathological structures, is presented. Integral to this work is the dataset used which must be fit for purpose for IRF segmentation algorithm training and testing. We describe here a multivendor dataset comprised of 30 scans. Each OCT scan for system training has been annotated by multiple graders using a proprietary system. Evaluation of the intergrader annotations shows a good correlation, thus making the reproducibly annotated scans suitable for the training and validation of image processing and machine learning based segmentation methods. The dataset will be made publicly available in the form of a segmentation Grand Challenge.

  6. Multivendor Spectral-Domain Optical Coherence Tomography Dataset, Observer Annotation Performance Evaluation, and Standardized Evaluation Framework for Intraretinal Cystoid Fluid Segmentation

    PubMed Central

    Wu, Jing; Philip, Ana-Maria; Podkowinski, Dominika; Gerendas, Bianca S.; Langs, Georg; Simader, Christian

    2016-01-01

    Development of image analysis and machine learning methods for segmentation of clinically significant pathology in retinal spectral-domain optical coherence tomography (SD-OCT), used in disease detection and prediction, is limited due to the availability of expertly annotated reference data. Retinal segmentation methods use datasets that either are not publicly available, come from only one device, or use different evaluation methodologies making them difficult to compare. Thus we present and evaluate a multiple expert annotated reference dataset for the problem of intraretinal cystoid fluid (IRF) segmentation, a key indicator in exudative macular disease. In addition, a standardized framework for segmentation accuracy evaluation, applicable to other pathological structures, is presented. Integral to this work is the dataset used which must be fit for purpose for IRF segmentation algorithm training and testing. We describe here a multivendor dataset comprised of 30 scans. Each OCT scan for system training has been annotated by multiple graders using a proprietary system. Evaluation of the intergrader annotations shows a good correlation, thus making the reproducibly annotated scans suitable for the training and validation of image processing and machine learning based segmentation methods. The dataset will be made publicly available in the form of a segmentation Grand Challenge. PMID:27579177

  7. Validation of Reference Genes for Gene Expression by Quantitative Real-Time RT-PCR in Stem Segments Spanning Primary to Secondary Growth in Populus tomentosa.

    PubMed

    Wang, Ying; Chen, Yajuan; Ding, Liping; Zhang, Jiewei; Wei, Jianhua; Wang, Hongzhi

    2016-01-01

    The vertical segments of Populus stems are an ideal experimental system for analyzing the gene expression patterns involved in primary and secondary growth during wood formation. Suitable internal control genes are indispensable to quantitative real time PCR (qRT-PCR) assays of gene expression. In this study, the expression stability of eight candidate reference genes was evaluated in a series of vertical stem segments of Populus tomentosa. Analysis through software packages geNorm, NormFinder and BestKeeper showed that genes ribosomal protein (RP) and tubulin beta (TUBB) were the most unstable across the developmental stages of P. tomentosa stems, and the combination of the three reference genes, eukaryotic translation initiation factor 5A (eIF5A), Actin (ACT6) and elongation factor 1-beta (EF1-beta) can provide accurate and reliable normalization of qRT-PCR analysis for target gene expression in stem segments undergoing primary and secondary growth in P. tomentosa. These results provide crucial information for transcriptional analysis in the P. tomentosa stem, which may help to improve the quality of gene expression data in these vertical stem segments, which constitute an excellent plant system for the study of wood formation.

  8. Word Family Size and French-Speaking Children's Segmentation of Existing Compounds

    ERIC Educational Resources Information Center

    Nicoladis, Elena; Krott, Andrea

    2007-01-01

    The family size of the constituents of compound words, or the number of compounds sharing the constituents, affects English-speaking children's compound segmentation. This finding is consistent with a usage-based theory of language acquisition, whereby children learn abstract underlying linguistic structure through their experience with particular…

  9. A comparison of six software packages for evaluation of solid lung nodules using semi-automated volumetry: what is the minimum increase in size to detect growth in repeated CT examinations.

    PubMed

    de Hoop, Bartjan; Gietema, Hester; van Ginneken, Bram; Zanen, Pieter; Groenewegen, Gerard; Prokop, Mathias

    2009-04-01

    We compared interexamination variability of CT lung nodule volumetry with six currently available semi-automated software packages to determine the minimum change needed to detect the growth of solid lung nodules. We had ethics committee approval. To simulate a follow-up examination with zero growth, we performed two low-dose unenhanced CT scans in 20 patients referred for pulmonary metastases. Between examinations, patients got off and on the table. Volumes of all pulmonary nodules were determined on both examinations using six nodule evaluation software packages. Variability (upper limit of the 95% confidence interval of the Bland-Altman plot) was calculated for nodules for which segmentation was visually rated as adequate. We evaluated 214 nodules (mean diameter 10.9 mm, range 3.3 mm-30.0 mm). Software packages provided adequate segmentation in 71% to 86% of nodules (p < 0.001). In case of adequate segmentation, variability in volumetry between scans ranged from 16.4% to 22.3% for the various software packages. Variability with five of the six software packages was significantly less for nodules ≥8 mm in diameter (range 12.9%-17.1%) than for nodules <8 mm (range 18.5%-25.6%). Segmented volumes of each package were compared to each of the other packages. Systematic volume differences were detected in 11/15 comparisons. This hampers comparison of nodule volumes between software packages.
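    The variability statistic described above (derived from a Bland-Altman analysis of the relative volume differences between the two zero-growth scans) can be approximated as the mean relative difference plus 1.96 standard deviations. A minimal sketch with synthetic repeat volumes; the noise level and the exact confidence-limit formulation are assumptions, not the study's data or definition.

```python
import numpy as np

def upper_limit_of_agreement(vol_scan1, vol_scan2):
    """Upper 95% limit of agreement of the relative volume differences (%)."""
    v1, v2 = np.asarray(vol_scan1, float), np.asarray(vol_scan2, float)
    rel_diff = 100.0 * (v2 - v1) / ((v1 + v2) / 2.0)
    return rel_diff.mean() + 1.96 * rel_diff.std(ddof=1)

# Hypothetical repeat volumetry (mm^3) of the same nodules from two zero-growth scans.
rng = np.random.default_rng(2)
v1 = rng.uniform(50, 5000, 100)
v2 = v1 * rng.normal(1.0, 0.08, 100)   # ~8% measurement noise, no true growth
print(round(upper_limit_of_agreement(v1, v2), 1))  # minimum % change detectable as growth
```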

  10. Temporally consistent probabilistic detection of new multiple sclerosis lesions in brain MRI.

    PubMed

    Elliott, Colm; Arnold, Douglas L; Collins, D Louis; Arbel, Tal

    2013-08-01

    Detection of new Multiple Sclerosis (MS) lesions on magnetic resonance imaging (MRI) is important as a marker of disease activity and as a potential surrogate for relapses. We propose an approach where sequential scans are jointly segmented, to provide a temporally consistent tissue segmentation while remaining sensitive to newly appearing lesions. The method uses a two-stage classification process: 1) a Bayesian classifier provides a probabilistic brain tissue classification at each voxel of reference and follow-up scans, and 2) a random-forest based lesion-level classification provides a final identification of new lesions. Generative models are learned based on 364 scans from 95 subjects from a multi-center clinical trial. The method is evaluated on sequential brain MRI of 160 subjects from a separate multi-center clinical trial, and is compared to 1) semi-automatically generated ground truth segmentations and 2) fully manual identification of new lesions generated independently by nine expert raters on a subset of 60 subjects. For new lesions greater than 0.15 cc in size, the classifier has near perfect performance (99% sensitivity, 2% false detection rate), as compared to ground truth. The proposed method was also shown to exceed the performance of any one of the nine expert manual identifications.

  11. Measuring nanometre-scale electric fields in scanning transmission electron microscopy using segmented detectors.

    PubMed

    Brown, H G; Shibata, N; Sasaki, H; Petersen, T C; Paganin, D M; Morgan, M J; Findlay, S D

    2017-11-01

    Electric field mapping using segmented detectors in the scanning transmission electron microscope has recently been achieved at the nanometre scale. However, converting these results to quantitative field measurements involves assumptions whose validity is unclear for thick specimens. We consider three approaches to quantitative reconstruction of the projected electric potential using segmented detectors: a segmented detector approximation to differential phase contrast and two variants on ptychographical reconstruction. Limitations to these approaches are also studied, particularly errors arising from detector segment size, inelastic scattering, and non-periodic boundary conditions. A simple calibration experiment is described which corrects the differential phase contrast reconstruction to give reliable quantitative results despite the finite detector segment size and the effects of plasmon scattering in thick specimens. A plasmon scattering correction to the segmented detector ptychography approaches is also given. Avoiding the imposition of periodic boundary conditions on the reconstructed projected electric potential leads to more realistic reconstructions. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Scaling behavior of knotted random polygons and self-avoiding polygons: Topological swelling with enhanced exponent.

    PubMed

    Uehara, Erica; Deguchi, Tetsuo

    2017-12-07

    We show that the average size of self-avoiding polygons (SAPs) with a fixed knot is much larger than that of polygons under no topological constraint if the excluded volume is small and the number of segments is large. We call this topological swelling. We argue for an "enhancement" of the scaling exponent for random polygons with a fixed knot. We study them systematically through SAPs consisting of hard cylindrical segments with various values of the segment radius. Here, by the average size we mean the mean-square radius of gyration. Furthermore, we show numerically that the topological balance length of a composite knot is given by the sum of those of all constituent prime knots. Here we define the topological balance length of a knot as the number of segments at which topological entropic repulsion is balanced by the knot complexity in the average size. The additivity suggests the local knot picture.

  13. Scaling behavior of knotted random polygons and self-avoiding polygons: Topological swelling with enhanced exponent

    NASA Astrophysics Data System (ADS)

    Uehara, Erica; Deguchi, Tetsuo

    2017-12-01

    We show that the average size of self-avoiding polygons (SAPs) with a fixed knot is much larger than that of polygons under no topological constraint if the excluded volume is small and the number of segments is large. We call this topological swelling. We argue for an "enhancement" of the scaling exponent for random polygons with a fixed knot. We study them systematically through SAPs consisting of hard cylindrical segments with various values of the segment radius. Here, by the average size we mean the mean-square radius of gyration. Furthermore, we show numerically that the topological balance length of a composite knot is given by the sum of those of all constituent prime knots. Here we define the topological balance length of a knot as the number of segments at which topological entropic repulsion is balanced by the knot complexity in the average size. The additivity suggests the local knot picture.
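    The size measure used in both versions of this record, the mean-square radius of gyration, is straightforward to compute for a single closed polygon; averaging it over a knotted ensemble (not generated here) gives the quantity studied. A minimal sketch on a toy ring of unit-length segments:

```python
import numpy as np

def mean_square_radius_of_gyration(vertices):
    """<Rg^2> of one closed polygon given its vertex coordinates, shape (N, 3)."""
    v = np.asarray(vertices, float)
    return ((v - v.mean(axis=0)) ** 2).sum(axis=1).mean()

# Toy example: a planar regular polygon of N unit-length segments (self-avoidance
# and knotting are not handled here; they require ensemble sampling).
N = 100
theta = 2 * np.pi * np.arange(N) / N
R = 1.0 / (2 * np.sin(np.pi / N))       # circumradius giving unit edge length
ring = np.c_[R * np.cos(theta), R * np.sin(theta), np.zeros(N)]
print(mean_square_radius_of_gyration(ring))
```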

  14. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach. [Kansas

    NASA Technical Reports Server (NTRS)

    Hixson, M. M.; Bauer, M. E.; Davis, B. J.

    1979-01-01

    The effect of sampling on the accuracy (precision and bias) of crop area estimates made from classifications of LANDSAT MSS data was investigated. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Four sampling schemes involving different numbers of samples and different sizes of sampling units were evaluated. The precision of the wheat area estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.
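    The reported relationship between segment size, number of segments, and precision can be illustrated with a toy simulation: sample square segments of different sizes (at equal total sampled area) from a synthetic, spatially correlated wheat map and compare the spread of the resulting area estimates. The map, segment sizes, and trial counts below are placeholders, not the Kansas data.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def estimate_sd(wheat_map, segment_px, n_segments, n_trials=2000, rng=None):
    """Std. dev. of the estimated wheat proportion for square segments of a given size."""
    if rng is None:
        rng = np.random.default_rng(0)
    H, W = wheat_map.shape
    estimates = []
    for _ in range(n_trials):
        props = [
            wheat_map[r:r + segment_px, c:c + segment_px].mean()
            for r, c in zip(rng.integers(0, H - segment_px, n_segments),
                            rng.integers(0, W - segment_px, n_segments))
        ]
        estimates.append(np.mean(props))
    return float(np.std(estimates))

# Synthetic spatially correlated "wheat" map: smoothed noise, thresholded to ~50% wheat.
rng = np.random.default_rng(3)
wheat = uniform_filter(rng.random((512, 512)), 40) > 0.5

# Equal total sampled area: many small segments vs. few large ones.
print(estimate_sd(wheat, 16, 64, rng=rng), estimate_sd(wheat, 64, 4, rng=rng))
```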

  15. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.

    PubMed

    Wang, Jun Yi; Ngo, Michael M; Hessl, David; Hagerman, Randi J; Rivera, Susan M

    2016-01-01

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundary. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling size of training set, differences in head coil usage, and amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans only decreased the Dice coefficient ≤0.002 for the cerebellum and ≤ 0.005 for the brainstem compared to the use of training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well.

  16. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem

    PubMed Central

    Wang, Jun Yi; Ngo, Michael M.; Hessl, David; Hagerman, Randi J.; Rivera, Susan M.

    2016-01-01

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundary. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling size of training set, differences in head coil usage, and amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer’s segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans only decreased the Dice coefficient ≤0.002 for the cerebellum and ≤ 0.005 for the brainstem compared to the use of training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well. PMID:27213683

  17. Estimation of non-solid lung nodule volume with low-dose CT protocols: effect of reconstruction algorithm and measurement method

    NASA Astrophysics Data System (ADS)

    Gavrielides, Marios A.; DeFilippo, Gino; Berman, Benjamin P.; Li, Qin; Petrick, Nicholas; Schultz, Kurt; Siegelman, Jenifer

    2017-03-01

    Computed tomography is the modality of choice for assessing the stability of nonsolid pulmonary nodules (sometimes referred to as ground-glass opacities) over three or more years, with change in size being the primary factor to monitor. Since volume extracted from CT is being examined as a quantitative biomarker of lung nodule size, it is important to examine factors affecting the performance of volumetric CT for this task. More specifically, the effect of reconstruction algorithms and measurement method in the context of low-dose CT protocols has been an under-examined area of research. In this phantom study we assessed volumetric CT with two different measurement methods (model-based and segmentation-based) for nodules with radiodensities typical of nonsolid (-800 HU and -630 HU) and solid (-10 HU) lesions, sizes of 5 mm and 10 mm, and two different shapes (spherical and spiculated). Imaging protocols included CTDIvol typical of screening (1.7 mGy) and sub-screening (0.6 mGy) scans and different types of reconstruction algorithms across three scanners. Results showed that radiodensity was the factor contributing most to overall error based on ANOVA. The choice of reconstruction algorithm or measurement method did not substantially affect the accuracy of measurements; however, measurement method affected repeatability, with repeatability coefficients ranging from around 3-5% for the model-based estimator to around 20-30% across reconstruction algorithms for the segmentation-based method. The findings of the study can be valuable toward developing standardized protocols and performance claims for nonsolid nodules.

  18. Carotid artery phantom designment and simulation using field II

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Yang, Xin; Ding, Mingyue

    2013-10-01

    Carotid atherosclerosis is the major cause of ischemic stroke, a leading cause of mortality and disability. Morphology and structure features of carotid plaques are key to identifying plaques and monitoring the disease. Manual segmentation of the ultrasonic images to obtain the best-fitted actual size of the carotid plaques, based on physicians' personal experience (the "gold standard"), is an important step in the study of plaque size. However, it is difficult to quantitatively measure the segmentation error caused by the operator's subjective factors. The experiments in this paper were carried out to reduce these subjective factors and the uncertainty of quantification. In this study, we first designed a carotid artery phantom and then used three different medical-ultrasound beam-forming algorithms to simulate imaging of the phantom. Finally, the obtained plaque areas were analyzed through manual segmentation of the simulated images. This allowed us to (1) directly evaluate the effect of the different beam-forming algorithms on simulated ultrasound imaging of the carotid artery; (2) analyze the sensitivity of detection for plaques of different sizes; and (3) indirectly assess the accuracy of the manual segmentation based on the segmentation results.

  19. Radiographic Response to Yttrium-90 Radioembolization in Anterior Versus Posterior Liver Segments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Saad M.; Lewandowski, Robert J.; Ryu, Robert K.

    2008-11-15

    The purpose of our study was to determine if preferential radiographic tumor response occurs in tumors located in posterior versus anterior liver segments following radioembolization with yttrium-90 glass microspheres. One hundred thirty-seven patients with chemorefractory liver metastases of various primaries were treated with yttrium-90 glass microspheres. Of these, a subset analysis was performed on 89 patients who underwent 101 whole-right-lobe infusions to liver segments V, VI, VII, and VIII. Pre- and posttreatment imaging included either triphasic contrast material-enhanced CT or gadolinium-enhanced MRI. Responses to treatment were compared in anterior versus posterior right lobe lesions using both RECIST and WHO criteria. Statistical comparative studies were conducted in 42 patients with both anterior and posterior segment lesions using the paired-sample t-test. Pearson correlation was used to determine the relationship between pretreatment tumor size and posttreatment tumor response. Median administered activity, delivered radiation dose, and treatment volume were 2.3 GBq, 118.2 Gy, and 1,072 cm³, respectively. Differences between the pretreatment tumor size of anterior and posterior liver segments were not statistically significant (p = 0.7981). Differences in tumor response between anterior and posterior liver segments were not statistically significant using WHO criteria (p = 0.8557). A statistically significant correlation did not exist between pretreatment tumor size and posttreatment tumor response (r = 0.0554, p = 0.4434). On imaging follow-up using WHO criteria, for anterior and posterior regions of the liver, (1) response rates were 50% (PR = 50%) and 45% (CR = 9%, PR = 36%), and (2) mean changes in tumor size were -41% and -40%. In conclusion, this study did not find evidence of preferential radiographic tumor response in posterior versus anterior liver segments treated with yttrium-90 glass microspheres.

  20. Radiographic response to yttrium-90 radioembolization in anterior versus posterior liver segments.

    PubMed

    Ibrahim, Saad M; Lewandowski, Robert J; Ryu, Robert K; Sato, Kent T; Gates, Vanessa L; Mulcahy, Mary F; Kulik, Laura; Larson, Andrew C; Omary, Reed A; Salem, Riad

    2008-01-01

    The purpose of our study was to determine if preferential radiographic tumor response occurs in tumors located in posterior versus anterior liver segments following radioembolization with yttrium-90 glass microspheres. One hundred thirty-seven patients with chemorefractory liver metastases of various primaries were treated with yttrium-90 glass microspheres. Of these, a subset analysis was performed on 89 patients who underwent 101 whole-right-lobe infusions to liver segments V, VI, VII, and VIII. Pre- and posttreatment imaging included either triphasic contrast material-enhanced CT or gadolinium-enhanced MRI. Responses to treatment were compared in anterior versus posterior right lobe lesions using both RECIST and WHO criteria. Statistical comparative studies were conducted in 42 patients with both anterior and posterior segment lesions using the paired-sample t-test. Pearson correlation was used to determine the relationship between pretreatment tumor size and posttreatment tumor response. Median administered activity, delivered radiation dose, and treatment volume were 2.3 GBq, 118.2 Gy, and 1,072 cm(3), respectively. Differences between the pretreatment tumor size of anterior and posterior liver segments were not statistically significant (p = 0.7981). Differences in tumor response between anterior and posterior liver segments were not statistically significant using WHO criteria (p = 0.8557). A statistically significant correlation did not exist between pretreatment tumor size and posttreatment tumor response (r = 0.0554, p = 0.4434). On imaging follow-up using WHO criteria, for anterior and posterior regions of the liver, (1) response rates were 50% (PR = 50%) and 45% (CR = 9%, PR = 36%), and (2) mean changes in tumor size were -41% and -40%. In conclusion, this study did not find evidence of preferential radiographic tumor response in posterior versus anterior liver segments treated with yttrium-90 glass microspheres.
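    The two response criteria referenced in this record reduce, for the size component, to percentage-change thresholds on unidimensional (RECIST) or bidimensional (WHO) tumor measurements. The sketch below applies only those size thresholds; confirmation requirements, new lesions, and non-target lesions are ignored, so it is an illustration rather than a full implementation of either criterion.

```python
def recist_response(baseline_sum_ld, followup_sum_ld):
    """RECIST-style size response from sums of longest diameters (unidimensional)."""
    if followup_sum_ld == 0:
        return "CR"
    change = (followup_sum_ld - baseline_sum_ld) / baseline_sum_ld
    if change <= -0.30:          # >=30% decrease
        return "PR"
    if change >= 0.20:           # >=20% increase
        return "PD"
    return "SD"

def who_response(baseline_sum_products, followup_sum_products):
    """WHO-style size response from sums of bidimensional products (diameter x perpendicular)."""
    if followup_sum_products == 0:
        return "CR"
    change = (followup_sum_products - baseline_sum_products) / baseline_sum_products
    if change <= -0.50:          # >=50% decrease
        return "PR"
    if change >= 0.25:           # >=25% increase
        return "PD"
    return "SD"

# A lesion shrinking from 40 mm to 24 mm (-40% unidimensional, -64% bidimensional).
print(recist_response(40, 24), who_response(40 * 40, 24 * 24))
```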

  1. Minding the Gaps: Literacy Enhances Lexical Segmentation in Children Learning to Read

    ERIC Educational Resources Information Center

    Havron, Naomi; Arnon, Inbal

    2017-01-01

    Can emergent literacy impact the size of the linguistic units children attend to? We examined children's ability to segment multiword sequences before and after they learned to read, in order to disentangle the effect of literacy and age on segmentation. We found that early readers were better at segmenting multiword units (after controlling for…

  2. Tipping point analysis of a large ocean ambient sound record

    NASA Astrophysics Data System (ADS)

    Livina, Valerie N.; Harris, Peter; Brower, Albert; Wang, Lian; Sotirakopoulos, Kostas; Robinson, Stephen

    2017-04-01

    We study a long (2003-2015) high-resolution (250 Hz) sound pressure record provided by the Comprehensive Nuclear-Test-Ban Treaty Organisation (CTBTO) from the hydro-acoustic station Cape Leeuwin (Australia). We transform the hydrophone waveforms into five bands of 10-min-average sound pressure levels (including the third-octave band) and apply tipping point analysis techniques [1-3]. We report the results of the analysis of fluctuations and trends in the data and discuss the big-data challenges in processing this record, including handling data segments of large size and possible HPC solutions. References: [1] Livina et al, GRL 2007, [2] Livina et al, Climate of the Past 2010, [3] Livina et al, Chaos 2015.
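    Converting a hydrophone pressure record into 10-min-average sound pressure levels amounts to computing the RMS pressure in consecutive 10-minute windows and expressing it in dB re 1 µPa. A minimal sketch on a synthetic record; band-pass filtering into the five analysis bands (including the third-octave band) is omitted here.

```python
import numpy as np

def ten_minute_spl(pressure_pa, fs_hz=250, p_ref=1e-6, window_s=600):
    """10-minute-average sound pressure levels (dB re 1 uPa) from a pressure time series."""
    samples_per_window = int(fs_hz * window_s)
    n_windows = len(pressure_pa) // samples_per_window
    p = np.asarray(pressure_pa[: n_windows * samples_per_window], float)
    p = p.reshape(n_windows, samples_per_window)
    rms = np.sqrt((p ** 2).mean(axis=1))
    return 20.0 * np.log10(rms / p_ref)

# Hypothetical one-hour hydrophone record (pressure in pascals) sampled at 250 Hz.
rng = np.random.default_rng(4)
record = rng.normal(0.0, 0.5, 250 * 3600)
print(ten_minute_spl(record))   # six 10-min SPL values
```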

  3. Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study

    PubMed Central

    Rudyanto, Rina D.; Kerkstra, Sjoerd; van Rikxoort, Eva M.; Fetita, Catalin; Brillet, Pierre-Yves; Lefevre, Christophe; Xue, Wenzhe; Zhu, Xiangjun; Liang, Jianming; Öksüz, İlkay; Ünay, Devrim; Kadipaşaoğlu, Kamuran; Estépar, Raúl San José; Ross, James C.; Washko, George R.; Prieto, Juan-Carlos; Hoyos, Marcela Hernández; Orkisz, Maciej; Meine, Hans; Hüllebrand, Markus; Stöcker, Christina; Mir, Fernando Lopez; Naranjo, Valery; Villanueva, Eliseo; Staring, Marius; Xiao, Changyan; Stoel, Berend C.; Fabijanska, Anna; Smistad, Erik; Elster, Anne C.; Lindseth, Frank; Foruzan, Amir Hossein; Kiros, Ryan; Popuri, Karteek; Cobzas, Dana; Jimenez-Carretero, Daniel; Santos, Andres; Ledesma-Carbayo, Maria J.; Helmberger, Michael; Urschler, Martin; Pienn, Michael; Bosboom, Dennis G.H.; Campo, Arantza; Prokop, Mathias; de Jong, Pim A.; Ortiz-de-Solorzano, Carlos; Muñoz-Barrutia, Arrate; van Ginneken, Bram

    2016-01-01

    The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases. PMID:25113321

  4. Scale effects and morphological diversification in hindlimb segment mass proportions in neognath birds.

    PubMed

    Kilbourne, Brandon M

    2014-01-01

    In spite of considerable work on the linear proportions of limbs in amniotes, it remains unknown whether differences in scale effects between proximal and distal limb segments have the potential to influence locomotor costs in amniote lineages and how changes in the mass proportions of limbs have factored into amniote diversification. To broaden our understanding of how the mass proportions of limbs vary within amniote lineages, I collected data on hindlimb segment masses - thigh, shank, pes, tarsometatarsal segment, and digits - from 38 species of neognath birds, one of the most speciose amniote clades. I scaled each of these traits against measures of body size (body mass) and hindlimb size (hindlimb length) to test for departures from isometry. Additionally, I applied two parameters of trait evolution (Pagel's λ and δ) to understand patterns of diversification in hindlimb segment mass in neognaths. All segment masses are positively allometric with body mass. Segment masses are isometric with hindlimb length. When examining scale effects in the neognath subclade Land Birds, segment masses were again positively allometric with body mass; however, shank, pedal, and tarsometatarsal segment masses were also positively allometric with hindlimb length. Methods of branch length scaling to detect phylogenetic signal (i.e., Pagel's λ) and increasing or decreasing rates of trait change over time (i.e., Pagel's δ) suffer from wide confidence intervals, likely due to small sample size and deep divergence times. The scaling of segment masses appears to be more strongly related to the scaling of limb bone mass as opposed to length, and the scaling of hindlimb mass distribution is more a function of scale effects in limb posture than proximo-distal differences in the scaling of limb segment mass. Though negative allometry of segment masses appears to be precluded by the need for mechanically sound limbs, the positive allometry of segment masses relative to body mass may underlie scale effects in stride frequency and length between smaller and larger neognaths. While variation in the linear proportions of limbs appears to be governed by developmental mechanisms, variation in mass proportions does not appear to be similarly constrained.
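
    Departure-from-isometry tests of this kind are often run as ordinary least-squares regressions on log-transformed data, comparing the fitted slope with the isometric expectation (a slope of 1 for mass scaling against mass). The sketch below does exactly that on placeholder values; it ignores the phylogenetic methods (Pagel's λ and δ) actually used in the study.

        # Log-log allometry sketch on placeholder data: is the slope of segment mass
        # against body mass consistent with isometry (slope = 1)?
        import numpy as np
        from scipy import stats

        body_mass = np.array([0.05, 0.12, 0.4, 1.1, 2.5, 5.0, 9.8])          # kg, placeholder
        thigh_mass = np.array([0.002, 0.006, 0.025, 0.08, 0.21, 0.48, 1.1])  # kg, placeholder

        slope, intercept, r, p, se = stats.linregress(np.log10(body_mass),
                                                      np.log10(thigh_mass))
        iso_slope = 1.0                          # isometric expectation for mass on mass
        t = (slope - iso_slope) / se             # t statistic of the slope against isometry
        p_iso = 2 * stats.t.sf(abs(t), df=body_mass.size - 2)
        print(f"slope = {slope:.2f}, p (departure from isometry) = {p_iso:.3f}")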

  5. Scale effects and morphological diversification in hindlimb segment mass proportions in neognath birds

    PubMed Central

    2014-01-01

    Introduction In spite of considerable work on the linear proportions of limbs in amniotes, it remains unknown whether differences in scale effects between proximal and distal limb segments have the potential to influence locomotor costs in amniote lineages and how changes in the mass proportions of limbs have factored into amniote diversification. To broaden our understanding of how the mass proportions of limbs vary within amniote lineages, I collected data on hindlimb segment masses – thigh, shank, pes, tarsometatarsal segment, and digits – from 38 species of neognath birds, one of the most speciose amniote clades. I scaled each of these traits against measures of body size (body mass) and hindlimb size (hindlimb length) to test for departures from isometry. Additionally, I applied two parameters of trait evolution (Pagel’s λ and δ) to understand patterns of diversification in hindlimb segment mass in neognaths. Results All segment masses are positively allometric with body mass. Segment masses are isometric with hindlimb length. When examining scale effects in the neognath subclade Land Birds, segment masses were again positively allometric with body mass; however, shank, pedal, and tarsometatarsal segment masses were also positively allometric with hindlimb length. Methods of branch length scaling to detect phylogenetic signal (i.e., Pagel’s λ) and increasing or decreasing rates of trait change over time (i.e., Pagel’s δ) suffer from wide confidence intervals, likely due to small sample size and deep divergence times. Conclusions The scaling of segment masses appears to be more strongly related to the scaling of limb bone mass as opposed to length, and the scaling of hindlimb mass distribution is more a function of scale effects in limb posture than proximo-distal differences in the scaling of limb segment mass. Though negative allometry of segment masses appears to be precluded by the need for mechanically sound limbs, the positive allometry of segment masses relative to body mass may underlie scale effects in stride frequency and length between smaller and larger neognaths. While variation in the linear proportions of limbs appears to be governed by developmental mechanisms, variation in mass proportions does not appear to be similarly constrained. PMID:24876886

  6. Performance of an Artificial Multi-observer Deep Neural Network for Fully Automated Segmentation of Polycystic Kidneys.

    PubMed

    Kline, Timothy L; Korfiatis, Panagiotis; Edwards, Marie E; Blais, Jaime D; Czerwiec, Frank S; Harris, Peter C; King, Bernard F; Torres, Vicente E; Erickson, Bradley J

    2017-08-01

    Deep learning techniques are being rapidly applied to medical imaging tasks, from organ and lesion segmentation to tissue and tumor classification. These techniques are becoming the leading algorithmic approaches to solve inherently difficult image processing tasks. Currently, the most critical requirement for successful implementation lies in the need for relatively large datasets that can be used for training the deep learning networks. Based on our initial studies of MR imaging examinations of the kidneys of patients affected by polycystic kidney disease (PKD), we have generated a unique database of imaging data and corresponding reference standard segmentations of polycystic kidneys. In the study of PKD, segmentation of the kidneys is needed in order to measure total kidney volume (TKV). Automated methods to segment the kidneys and measure TKV are needed to increase measurement throughput and alleviate the inherent variability of human-derived measurements. We hypothesize that deep learning techniques can be leveraged to perform fast, accurate, reproducible, and fully automated segmentation of polycystic kidneys. Here, we describe a fully automated approach for segmenting PKD kidneys within MR images that simulates a multi-observer approach in order to create an accurate and robust method for the task of segmentation and computation of TKV for PKD patients. A total of 2000 cases were used for training and validation, and 400 cases were used for testing. The multi-observer ensemble method had a mean ± SD percent volume difference of 0.68 ± 2.2% compared with the reference standard segmentations. The complete framework performs fully automated segmentation at a level comparable with interobserver variability and could be considered as a replacement for the task of segmentation of PKD kidneys by a human.

  7. Segmentation of hepatic artery in multi-phase liver CT using directional dilation and connectivity analysis

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Schnurr, Alena-Kathrin; Zidowitz, Stephan; Georgii, Joachim; Zhao, Yue; Razavi, Mohammad; Schwier, Michael; Hahn, Horst K.; Hansen, Christian

    2016-03-01

    Segmentation of hepatic arteries in multi-phase computed tomography (CT) images is indispensable in liver surgery planning. During image acquisition, the hepatic artery is enhanced by the injection of contrast agent. The enhanced signals are often not stably acquired due to non-optimal contrast timing. Other vascular structures, such as the hepatic vein or portal vein, can be enhanced as well in the arterial phase, which can adversely affect the segmentation results. Furthermore, the arteries might suffer from partial volume effects due to their small diameter. To overcome these difficulties, we propose a framework for robust hepatic artery segmentation requiring a minimal amount of user interaction. First, an efficient multi-scale Hessian-based vesselness filter is applied on the artery phase CT image, aiming to enhance vessel structures with a specified diameter range. Second, the vesselness response is processed using a Bayesian classifier to identify the most probable vessel structures. Considering that the vesselness filter typically does not perform ideally on vessel bifurcations or on segments corrupted by noise, two vessel-reconnection techniques are proposed. The first technique uses a directional morphological operator to dilate vessel segments along their centerline directions, attempting to fill the gap between broken vascular segments. The second technique analyzes the connectivity of vessel segments and reconnects disconnected segments and branches. Finally, a 3D vessel tree is reconstructed. The algorithm has been evaluated using 18 CT images of the liver. To quantitatively measure the similarities between segmented and reference vessel trees, the skeleton coverage and mean symmetric distance are calculated to quantify the agreement between reference and segmented vessel skeletons, resulting in an average of 0.55 ± 0.27 and 12.7 ± 7.9 mm (mean ± standard deviation), respectively.
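
    The vessel-enhancement step described above (a multi-scale Hessian-based vesselness filter followed by a probabilistic selection of vessel voxels) can be approximated with an off-the-shelf Frangi filter. The sketch below uses scikit-image's frangi on a placeholder volume and a simple percentile threshold in place of the Bayesian classifier; it is a generic stand-in, not the authors' pipeline.

        # Generic stand-in for the vessel-enhancement step: a multi-scale Frangi
        # (Hessian-based) vesselness filter and a simple threshold on the response.
        import numpy as np
        from skimage.filters import frangi

        ct_arterial = np.random.rand(48, 96, 96).astype(np.float32)   # placeholder volume

        # Enhance bright tubular structures; sigmas roughly correspond to the
        # expected vessel radii in voxels (assumed values).
        vesselness = frangi(ct_arterial, sigmas=(1, 2, 3), black_ridges=False)

        # Keep the strongest responses (stand-in for the Bayesian classifier step).
        vessel_mask = vesselness > np.percentile(vesselness, 99)
        print(vessel_mask.sum(), "candidate vessel voxels")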

  8. A new bioimpedance research device (BIRD) for measuring the electrical impedance of acupuncture meridians.

    PubMed

    Wong, Felix Wu Shun; Lim, Chi Eung Danforn; Smith, Warren

    2010-03-01

    The aim of this article is to introduce an electrical bioimpedance device that uses an old and little-known impedance measuring technique to study the impedance of the meridian and nonmeridian tissue segments. Three (3) pilot experimental studies involving both a tissue phantom (a cucumber) and 3 human subjects were performed using this BIRD-I (Bioimpedance Research Device) device. This device consists of a Fluke RCL meter, a multiplexer box, a laptop computer, and a medical-grade isolation transformer. Segment and surface sheath (or local) impedances were estimated using formulae first published in the 1930s, in an approach that differs from that of the standard four-electrode technique used in most meridian studies to date. Our study found that, when using a quasilinear four-electrode arrangement, the reference electrodes should be positioned at least 10 cm from the test electrodes to ensure that the segment (or core) impedance estimation is not affected by the proximity of the reference electrodes. A tissue phantom was used to determine the repeatability of segment (core) impedance measurement by the device. An applied frequency of 100 kHz was found to produce the best repeatability among the various frequencies tested. In another preliminary study, with a segment of the triple energizer meridian on the lower arm selected as the reference segment, core resistance-based profiles around the lower arm showed three of the other five meridians to exist as local resistance minima relative to neighboring nonmeridian segments. The profiles of the 2 subjects tested were very similar, suggesting that the results are unlikely to be spurious. In electrical bioimpedance studies, it is recommended that the measuring technique and device be clearly defined and standardized to provide optimal working conditions. In our study using the BIRD-I device, we defined our standard experimental conditions as a test frequency of 100 kHz and a reference-electrode position at least 10 cm from the test electrodes. Our device has demonstrated potential for use in quantifying the degree of electrical interconnection between any two surface-defined test meridian or nonmeridian segments. Issues arising from the use of this device and the Horton and van Ravenswaay measurement technique are also presented.

  9. Unit bias. A new heuristic that helps explain the effect of portion size on food intake.

    PubMed

    Geier, Andrew B; Rozin, Paul; Doros, Gheorghe

    2006-06-01

    People seem to think that a unit of some entity (with certain constraints) is the appropriate and optimal amount. We refer to this heuristic as unit bias. We illustrate unit bias by demonstrating large effects of unit segmentation, a form of portion control, on food intake. Thus, people choose, and presumably eat, much greater weights of Tootsie Rolls and pretzels when offered a large as opposed to a small unit size (and given the option of taking as many units as they choose at no monetary cost). Additionally, they consume substantially more M&M's when the candies are offered with a large as opposed to a small spoon (again with no limits as to the number of spoonfuls to be taken). We propose that unit bias explains why small portion sizes are effective in controlling consumption; in some cases, people served small portions would simply eat additional portions if it were not for unit bias. We argue that unit bias is a general feature in human choice and discuss possible origins of this bias, including consumption norms.

  10. Statistical Mechanical Theory of Coupled Slow Dynamics in Glassy Polymer-Molecule Mixtures

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Schweizer, Kenneth

    The microscopic Elastically Collective Nonlinear Langevin Equation theory of activated relaxation in one-component supercooled liquids and glasses is generalized to polymer-molecule mixtures. The key idea is to account for dynamic coupling between molecule and polymer segment motion. For describing the molecule hopping event, a temporal causality condition is formulated to self-consistently determine a dimensionless degree of matrix distortion relative to the molecule jump distance based on the concept of coupled dynamic free energies. Implementation for real materials employs an established Kuhn sphere model of the polymer liquid and a quantitative mapping to a hard particle reference system guided by the experimental equation-of-state. The theory makes predictions for the mixture dynamic shear modulus, activated relaxation time and diffusivity of both species, and mixture glass transition temperature as a function of molecule-Kuhn segment size ratio and attraction strength, composition and temperature. Model calculations illustrate the dynamical behavior in three distinct mixture regimes (fully miscible, bridging, clustering) controlled by the molecule-polymer interaction or chi-parameter. Applications to specific experimental systems will be discussed.

  11. Extraction of Overt Verbal Response from the Acoustic Noise in a Functional Magnetic Resonance Imaging Scan by Use of Segmented Active Noise Cancellation

    PubMed Central

    Jung, Kwan-Jin; Prasad, Parikshit; Qin, Yulin; Anderson, John R.

    2013-01-01

    A method to extract the subject's overt verbal response from the obscuring acoustic noise in an fMRI scan is developed by applying active noise cancellation with a conventional MRI microphone. Since the EPI scanning and its accompanying acoustic noise in fMRI are repetitive, the acoustic noise in one time segment was used as a reference noise in suppressing the acoustic noise in subsequent segments. However, the acoustic noise from the scanner was affected by the subject's movements, so the reference noise was adaptively adjusted as the scanner's acoustic properties varied in time. This method was successfully applied to a cognitive fMRI experiment with overt verbal responses. PMID:15723385
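
    The core idea, using a noise-only reference segment to suppress the repetitive scanner noise in later segments, can be illustrated with a textbook LMS adaptive canceller. The sketch below runs on synthetic signals and is only a generic stand-in for the adaptive noise cancellation described in the record.

        # Textbook LMS adaptive noise cancellation on synthetic signals: a noise-only
        # reference segment is used to suppress the repetitive noise in the recording.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 4000
        scanner_noise = np.sin(2 * np.pi * 0.05 * np.arange(n)) + 0.1 * rng.standard_normal(n)
        speech = 0.5 * np.sign(np.sin(2 * np.pi * 0.002 * np.arange(n)))   # placeholder "speech"
        recorded = speech + scanner_noise                                  # microphone signal

        reference = scanner_noise + 0.05 * rng.standard_normal(n)   # noise-only reference

        taps, mu = 32, 0.01
        w = np.zeros(taps)
        cleaned = np.zeros(n)
        for i in range(taps, n):
            x = reference[i - taps:i][::-1]   # most recent reference samples
            e = recorded[i] - w @ x           # error = noise-suppressed output sample
            w += 2 * mu * e * x               # LMS weight update
            cleaned[i] = e

        print("residual error power:", np.mean((cleaned[taps:] - speech[taps:]) ** 2))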

  12. Automated segmentation of intraretinal layers from macular optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Haeker, Mona; Sonka, Milan; Kardon, Randy; Shah, Vinay A.; Wu, Xiaodong; Abràmoff, Michael D.

    2007-03-01

    Commercially-available optical coherence tomography (OCT) systems (e.g., Stratus OCT-3) only segment and provide thickness measurements for the total retina on scans of the macula. Since each intraretinal layer may be affected differently by disease, it is desirable to quantify the properties of each layer separately. Thus, we have developed an automated segmentation approach for the separation of the retina on (anisotropic) 3-D macular OCT scans into five layers. Each macular series consisted of six linear radial scans centered at the fovea. Repeated series (up to six, when available) were acquired for each eye and were first registered and averaged together, resulting in a composite image for each angular location. The six surfaces defining the five layers were then found on each 3-D composite image series by transforming the segmentation task into that of finding a minimum-cost closed set in a geometric graph constructed from edge/regional information and a priori-determined surface smoothness and interaction constraints. The method was applied to the macular OCT scans of 12 patients with unilateral anterior ischemic optic neuropathy (corresponding to 24 3-D composite image series). The boundaries were independently defined by two human experts on one raw scan of each eye. Using the average of the experts' tracings as a reference standard resulted in an overall mean unsigned border positioning error of 6.7 +/- 4.0 μm, with five of the six surfaces showing significantly lower mean errors than those computed between the two observers (p < 0.05, pixel size of 50 × 2 μm).

  13. Segmental Rescoring in Text Recognition

    DTIC Science & Technology

    2014-02-04

    description relates to rescoring text hypotheses in text recognition based on segmental features. Offline printed text and handwriting recognition (OHR) can... Handwriting , College Park, Md., 2006, which is incorporated by reference here. For the set of training images 202, a character modeler 208 receives

  14. Size of the Dynamic Bead in Polymers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agapov, Alexander L; Sokolov, Alexei P

    2010-01-01

    The presented analysis of neutron, mechanical, and MD simulation data available in the literature demonstrates that the dynamic bead size (the smallest subchain that still exhibits Rouse-like dynamics) in most polymers is significantly larger than the traditionally defined Kuhn segment. Moreover, our analysis emphasizes that even the static bead size (e.g., chain statistics) disagrees with the Kuhn segment length. We demonstrate that the deficiency of the Kuhn segment definition is based on the assumption of a chain being completely extended inside a single bead. The analysis suggests that representation of a real polymer chain by the bead-and-spring model with a single parameter C cannot be correct. One needs more parameters to correctly reflect the details of the chain structure in the bead-and-spring model.

  15. Segmenting the thoracic, abdominal and pelvic musculature on CT scans combining atlas-based model and active contour model

    NASA Astrophysics Data System (ADS)

    Zhang, Weidong; Liu, Jiamin; Yao, Jianhua; Summers, Ronald M.

    2013-03-01

    Segmentation of the musculature is very important for accurate organ segmentation, analysis of body composition, and localization of tumors in the muscle. In the research fields of computer-assisted surgery and computer-aided diagnosis (CAD), muscle segmentation in CT images is a necessary pre-processing step. This task is particularly challenging due to the large variability in muscle structure and the overlap in intensity between muscle and internal organs. This problem has not been solved completely, especially across the thoracic, abdominal, and pelvic regions. We propose an automated system to segment the musculature on CT scans. The method combines an atlas-based model, an active contour model and prior segmentation of fat and bones. First, body contour, fat and bones are segmented using existing methods. Second, atlas-based models are pre-defined using anatomic knowledge at multiple key positions in the body to handle the large variability in muscle shape. Third, the atlas model is refined using active contour models (ACM) that are constrained using the pre-segmented bone and fat. Before refinement with the ACM, the initialized atlas model of the next slice is updated using the previous slice's atlas. The muscle is then segmented using a threshold and smoothed in 3D volume space. Thoracic, abdominal and pelvic CT scans were used to evaluate our method, and five key position slices for each case were selected and manually labeled as the reference. Compared with the reference ground truth, the overlap ratio of true positives is 91.1%+/-3.5%, and that of false positives is 5.5%+/-4.2%.

  16. Time-efficient high-resolution whole-brain three-dimensional macromolecular proton fraction mapping

    PubMed Central

    Yarnykh, Vasily L.

    2015-01-01

    Purpose Macromolecular proton fraction (MPF) mapping is a quantitative MRI method that reconstructs parametric maps of the relative amount of macromolecular protons causing the magnetization transfer (MT) effect and provides a biomarker of myelination in neural tissues. This study aimed to develop a high-resolution whole-brain MPF mapping technique utilizing the minimum possible number of source images to reduce scan time. Methods The described technique is based on replacement of an actually acquired reference image without MT saturation by a synthetic one reconstructed from R1 and proton density maps, thus requiring only three source images. This approach enabled whole-brain three-dimensional MPF mapping with an isotropic 1.25×1.25×1.25 mm3 voxel size and a scan time of 20 minutes. The synthetic reference method was validated against standard MPF mapping with acquired reference images based on data from 8 healthy subjects. Results Mean MPF values in segmented white and gray matter appeared in close agreement with no significant bias and small within-subject coefficients of variation (<2%). High-resolution MPF maps demonstrated sharp white-gray matter contrast and clear visualization of anatomical details including gray matter structures with high iron content. Conclusions The synthetic reference method improves the resolution of MPF mapping and combines accurate MPF measurements with unique neuroanatomical contrast features. PMID:26102097
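
    One plausible way to synthesize a no-MT reference image from R1 and proton density maps is via the standard spoiled gradient-echo signal equation; the sketch below uses that textbook relation with hypothetical sequence parameters and placeholder maps, and may differ from the exact expression used in the paper.

        # Hypothetical synthesis of a no-MT reference image from R1 and proton-density
        # maps via the textbook spoiled gradient-echo signal equation; parameters and
        # maps are placeholders and the paper's exact expression may differ.
        import numpy as np

        TR = 0.021                      # s, assumed repetition time
        alpha = np.deg2rad(10.0)        # assumed excitation flip angle

        R1 = np.full((128, 128), 1.0)   # 1/s, placeholder R1 map
        PD = np.full((128, 128), 0.8)   # arbitrary units, placeholder proton density map

        E1 = np.exp(-TR * R1)
        synthetic_reference = PD * np.sin(alpha) * (1 - E1) / (1 - np.cos(alpha) * E1)
        print(synthetic_reference.mean())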

  17. Study on the bearing capacity of embedded chute on shield tunnel segment

    NASA Astrophysics Data System (ADS)

    Fanzhen, Zhang; Jie, Bu; Zhibo, Su; Qigao, Hu

    2018-05-01

    The method of perforation and steel bar implantation is often used to fix and install pipelines, cables, and other facilities in shield tunnels, which inevitably damages the precast segments. In order to reduce this damage and the resulting safety and durability problems, an embedded chute was set at the equipment installation locations in one shield tunnel. Finite element models of the segment concrete and steel are established in this paper, and the mechanical behaviour of the segment is studied with the water and soil pressures calculated separately and together. The bearing capacity and deformation of the segment are analysed before and after embedding the chute. The results provide a reference for similar shield tunnel segment engineering.

  18. Direct estimation of human trabecular bone stiffness using cone beam computed tomography.

    PubMed

    Klintström, Eva; Klintström, Benjamin; Pahr, Dieter; Brismar, Torkel B; Smedby, Örjan; Moreno, Rodrigo

    2018-04-10

    The aim of this study was to evaluate the possibility of estimating the biomechanical properties of trabecular bone through finite element simulations by using dental cone beam computed tomography data. Fourteen human radius specimens were scanned in 3 cone beam computed tomography devices: 3-D Accuitomo 80 (J. Morita MFG., Kyoto, Japan), NewTom 5 G (QR Verona, Verona, Italy), and Verity (Planmed, Helsinki, Finland). The imaging data were segmented by using 2 different methods. Stiffness (Young modulus), shear moduli, and the size and shape of the stiffness tensor were studied. Corresponding evaluations by using micro-CT were regarded as the reference standard. The 3-D Accuitomo 80 (J. Morita MFG., Kyoto, Japan) showed good performance in estimating stiffness and shear moduli but was sensitive to the choice of segmentation method. NewTom 5 G (QR Verona, Verona, Italy) and Verity (Planmed, Helsinki, Finland) yielded good correlations, but they were not as strong as Accuitomo 80 (J. Morita MFG., Kyoto, Japan). The cone beam computed tomography devices overestimated both stiffness and shear compared with the micro-CT estimations. Finite element-based calculations of biomechanics from cone beam computed tomography data are feasible, with strong correlations for the Accuitomo 80 scanner (J. Morita MFG., Kyoto, Japan) combined with an appropriate segmentation method. Such measurements might be useful for predicting implant survival by in vivo estimations of bone properties. Copyright © 2018 Elsevier Inc. All rights reserved.

  19. Deep learning in the small sample size setting: cascaded feed forward neural networks for medical image segmentation

    NASA Astrophysics Data System (ADS)

    Gaonkar, Bilwaj; Hovda, David; Martin, Neil; Macyszyn, Luke

    2016-03-01

    Deep learning refers to a large set of neural-network-based algorithms that have emerged as promising machine-learning tools in the general imaging and computer vision domains. Convolutional neural networks (CNNs), a specific class of deep learning algorithms, have been extremely effective in object recognition and localization in natural images. A characteristic feature of CNNs is the use of a locally connected multilayer topology inspired by the animal visual cortex (the most powerful vision system in existence). While CNNs perform admirably in object identification and localization tasks, they typically require training on extremely large datasets. Unfortunately, in medical image analysis, large datasets are either unavailable or extremely expensive to obtain. Further, the primary tasks in medical imaging are organ identification and segmentation from 3D scans, which differ from the standard computer vision task of object recognition. Thus, in order to translate the advantages of deep learning to medical image analysis, there is a need to develop deep network topologies and training methodologies that are geared towards medical imaging tasks and can work in a setting where dataset sizes are relatively small. In this paper, we present a technique for stacked supervised training of deep feed-forward neural networks for segmenting organs from medical scans. Each 'neural network layer' in the stack is trained to identify a subregion of the original image that contains the organ of interest. By layering several such stacks together, a very deep neural network is constructed. Such a network can be used to identify extremely small regions of interest in extremely large images, in spite of a lack of clear contrast in the signal or easily identifiable shape characteristics. What is even more intriguing is that the network stack achieves accurate segmentation even when it is trained on a single image with manually labelled ground truth. We validate this approach using a publicly available head and neck CT dataset. We also show that a deep neural network of similar depth, if trained directly using backpropagation, cannot achieve the tasks achieved using our layer-wise training paradigm.
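
    The stage-wise idea described above (each stage narrows the region containing the organ before the next stage segments within it) can be caricatured as a two-stage cascade of small feed-forward classifiers. The sketch below uses scikit-learn MLPs on synthetic 2D data; it is a schematic of the cascading idea only, not the paper's topology or training procedure.

        # Schematic two-stage cascade on synthetic 2D data: stage 1 finds a coarse
        # region of interest, stage 2 segments pixels inside it. Illustration only.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        img = rng.normal(0.0, 1.0, (128, 128))
        mask = np.zeros((128, 128), dtype=int)
        mask[40:70, 50:90] = 1            # synthetic "organ"
        img += 2.0 * mask                 # organ pixels are slightly brighter

        def block_means(image, size=16):
            h, w = image.shape
            return image.reshape(h // size, size, w // size, size).mean(axis=(1, 3))

        # Stage 1: classify 16x16 blocks as containing the organ or not.
        X1 = block_means(img).reshape(-1, 1)
        y1 = (block_means(mask.astype(float)).ravel() > 0.1).astype(int)
        stage1 = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                               max_iter=2000, random_state=0).fit(X1, y1)
        roi = np.kron(stage1.predict(X1).reshape(8, 8), np.ones((16, 16))).astype(bool)

        # Stage 2: pixel-wise classification restricted to the stage-1 region.
        stage2 = MLPClassifier(hidden_layer_sizes=(16,), solver="lbfgs",
                               max_iter=2000, random_state=0).fit(img[roi].reshape(-1, 1), mask[roi])
        pred = np.zeros_like(mask)
        pred[roi] = stage2.predict(img[roi].reshape(-1, 1))
        print("predicted organ pixels:", pred.sum(), "reference:", mask.sum())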

  20. A reference skeletal dosimetry model for an adult male radionuclide therapy patient based on three-dimensional imaging and paired-image radiation transport

    NASA Astrophysics Data System (ADS)

    Shah, Amish P.

    The need for improved patient-specificity of skeletal dose estimates is widely recognized in radionuclide therapy. Current clinical models for marrow dose are based on skeletal mass estimates from a variety of sources and linear chord-length distributions that do not account for particle escape into cortical bone. To predict marrow dose, these clinical models use a scheme that requires separate calculations of cumulated activity and radionuclide S values. Selection of an appropriate S value is generally limited to one of only three sources, all of which use as input the trabecular microstructure of an individual measured 25 years ago, and the tissue mass derived from different individuals measured 75 years ago. Our study proposed a new modeling approach to marrow dosimetry, the Paired Image Radiation Transport (PIRT) model, which properly accounts for both the trabecular microstructure and the cortical macrostructure of each skeletal site in a reference male radionuclide patient. The PIRT model, as applied within EGSnrc, requires two sets of input geometry: (1) an infinite voxel array of segmented microimages of the spongiosa acquired via microCT; and (2) a segmented ex-vivo CT image of the bone site macrostructure defining both the spongiosa (marrow, endosteum, and trabeculae) and the cortical bone cortex. Our study also proposed revising reference skeletal dosimetry models for the adult male cancer patient. Skeletal site-specific radionuclide S values were obtained for a 66-year-old male reference patient. The derivation of total skeletal S values was unique in that the necessary skeletal mass and electron dosimetry calculations were formulated from the same source bone site over the entire skeleton. We conclude that paired-image radiation-transport techniques provide an adoptable method by which the intricate, anisotropic trabecular microstructure of the skeletal site and the physical size and shape of the bone can be handled together for improved compilation of reference radionuclide S values. We also conclude that this comprehensive model for the adult male cancer patient should be implemented for use in patient-specific calculations for radionuclide dosimetry of the skeleton.

  1. The diagnostic performance of leak-plugging automated segmentation versus manual tracing of breast lesions on ultrasound images.

    PubMed

    Xiong, Hui; Sultan, Laith R; Cary, Theodore W; Schultz, Susan M; Bouzghar, Ghizlane; Sehgal, Chandra M

    2017-05-01

    To assess the diagnostic performance of a leak-plugging segmentation method that we have developed for delineating breast masses on ultrasound images. Fifty-two biopsy-proven breast lesion images were analyzed by three observers using the leak-plugging and manual segmentation methods. From each segmentation method, grayscale and morphological features were extracted and classified as malignant or benign by logistic regression analysis. The performance of leak-plugging and manual segmentations was compared by: size of the lesion, overlap area (Oa) between the margins, and area under the ROC curves (Az). The lesion size from leak-plugging segmentation correlated closely with that from manual tracing (R2 of 0.91). Oa was higher for leak plugging, 0.92 ± 0.01 and 0.86 ± 0.06 for benign and malignant masses, respectively, compared to 0.80 ± 0.04 and 0.73 ± 0.02 for manual tracings. Overall Oa between leak-plugging and manual segmentations was 0.79 ± 0.14 for benign and 0.73 ± 0.14 for malignant lesions. Az for leak plugging was consistently higher (0.910 ± 0.003) compared to 0.888 ± 0.012 for manual tracings. The coefficient of variation of Az between three observers was 0.29% for leak plugging compared to 1.3% for manual tracings. The diagnostic performance, size measurements, and observer variability for automated leak-plugging segmentations were either comparable to or better than those of manual tracings.
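
    The classification and evaluation steps described here (logistic regression on lesion features, with Az as the figure of merit) correspond to standard library calls. The sketch below uses scikit-learn with random placeholder features and labels.

        # Logistic regression on lesion features with the area under the ROC curve
        # (Az) as the figure of merit; features and labels are random placeholders.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        X = rng.normal(size=(52, 6))       # 52 lesions x 6 grayscale/morphological features
        y = rng.integers(0, 2, size=52)    # 1 = malignant, 0 = benign (placeholder labels)

        probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                                  cv=5, method="predict_proba")[:, 1]
        print("Az (ROC AUC):", roc_auc_score(y, probs))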

  2. Automated aortic calcification detection in low-dose chest CT images

    NASA Astrophysics Data System (ADS)

    Xie, Yiting; Htwe, Yu Maw; Padgett, Jennifer; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.

    2014-03-01

    The extent of aortic calcification has been shown to be a risk indicator for vascular events including cardiac events. We have developed a fully automated computer algorithm to segment and measure aortic calcification in low-dose noncontrast, non-ECG gated, chest CT scans. The algorithm first segments the aorta using a pre-computed Anatomy Label Map (ALM). Then based on the segmented aorta, aortic calcification is detected and measured in terms of the Agatston score, mass score, and volume score. The automated scores are compared with reference scores obtained from manual markings. For aorta segmentation, the aorta is modeled as a series of discrete overlapping cylinders and the aortic centerline is determined using a cylinder-tracking algorithm. Then the aortic surface location is detected using the centerline and a triangular mesh model. The segmented aorta is used as a mask for the detection of aortic calcification. For calcification detection, the image is first filtered, then an elevated threshold of 160 Hounsfield units (HU) is used within the aorta mask region to reduce the effect of noise in low-dose scans, and finally non-aortic calcification voxels (bony structures, calcification in other organs) are eliminated. The remaining candidates are considered as true aortic calcification. The computer algorithm was evaluated on 45 low-dose non-contrast CT scans. Using linear regression, the automated Agatston score is 98.42% correlated with the reference Agatston score. The automated mass and volume scores are, respectively, 98.46% and 98.28% correlated with the reference mass and volume scores.
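
    Conventional Agatston scoring sums, over axial slices, each lesion's area multiplied by a density weight of 1 to 4 determined by its peak HU. The sketch below follows that convention on a placeholder volume, using the study's elevated 160 HU detection threshold; the weighting bins and pixel area are the usual textbook choices, not values taken from this paper.

        # Conventional Agatston scoring: per axial slice, each calcified lesion adds
        # (area in mm^2) x (weight 1-4 from its peak HU); placeholder volume below.
        import numpy as np
        from scipy import ndimage

        def agatston_score(hu_volume, calc_mask, pixel_area_mm2):
            score = 0.0
            for z in range(hu_volume.shape[0]):            # loop over axial slices
                labels, n = ndimage.label(calc_mask[z])
                for lesion in range(1, n + 1):
                    region = labels == lesion
                    peak = hu_volume[z][region].max()
                    weight = 4 if peak >= 400 else 3 if peak >= 300 else 2 if peak >= 200 else 1
                    score += region.sum() * pixel_area_mm2 * weight
            return score

        hu = np.full((10, 64, 64), -50)      # placeholder CT volume (HU), soft-tissue background
        hu[4:6, 30:34, 30:34] = 450          # a dense calcified lesion
        hu[7, 10:12, 40:43] = 220            # a fainter lesion
        mask = hu > 160                      # elevated detection threshold, as in the record
        print("Agatston score:", agatston_score(hu, mask, pixel_area_mm2=0.5))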

  3. Plaque shift and distal embolism in patients with acute myocardial infarction: a volumetric intravascular ultrasound analysis from the HORIZONS-AMI trial.

    PubMed

    Wu, Xiaofan; Maehara, Akiko; He, Yong; Xu, Kai; Oviedo, Carlos; Witzenbichler, Bernhard; Lansky, Alexandra J; Dressler, Ovidiu; Parise, Helen; Stone, Gregg W; Mintz, Gary S

    2013-08-01

    Vessel expansion and axial plaque redistribution or distal plaque embolization contribute to the increase in lumen dimensions after stent implantation. Preintervention and postintervention grayscale volumetric intravascular ultrasound was used to study 43 de novo native coronary lesions treated with TAXUS or Express bare metal stents in the HORIZONS-AMI Trial. There was a decrease in lesion segment plaque + media (P + M) volume (-19.5 ± 22.2 mm(3) ) that was associated with a decrease in overall analysis segment (lesion plus 5 mm long proximal and distal reference segments) P + M volume (-17.5 ± 21.0 mm(3) ) that was greater than the shift of plaque from the lesion to the proximal and distal reference segments (1.9 ± 4.5 mm(3) , P < 0.0001). Overall analysis segment P + M volume decreased more in the angiographic thrombus (+) versus the thrombus (-) group (27.4 ± 23.4 vs. -8.9 ± 14.3 mm(3) , P = 0.003), whereas plaque shift to the reference segments showed no significant difference between the two groups (1.5 ± 5.2 vs. 2.3 ± 3.9 mm(3) , P = 0.590). Compared with the angiographic thrombus (-) group, patients in the thrombus (+) group more often developed no reflow (25% vs. 0%, P = 0.012) and had a higher preintervention CK-MB (P = 0.011), postintervention CK-MB (P < 0.001), and periprocedural (post-PCI minus pre-PCI) elevation of CK-MB (P = 0.001). In acute myocardial infarction lesions, there was a marked poststenting reduction in overall plaque volume that was significantly greater in patients with angiographic thrombus than without thrombus and may have explained a greater periprocedural rise in CK-MB. © 2013 Wiley Periodicals, Inc.

  4. Passive Synthetic Aperture Radar Imaging Using Commercial OFDM Communication Networks

    DTIC Science & Technology

    2012-09-13

    baseband sampling is key to ensure proper correlation with a reference signal. The DFT represents the sampled spectrum of a periodic discrete sequence...convenient to sample the baseband time domain segments at a rate of Ts/N. In this way, the segments are easily correlated to the elemental form of the...phase history solution of Gp,l[k'n] = Sp,l,n / (ϕp,l,n dp,l,n N²), dp,l,n ≠ 0 (5.5.13). The segment need not be limited to N samples. For segments of length

  5. Diffusion MRI with Semi-Automated Segmentation Can Serve as a Restricted Predictive Biomarker of the Therapeutic Response of Liver Metastasis

    PubMed Central

    Stephen, Renu M.; Jha, Abhinav K.; Roe, Denise J.; Trouard, Theodore P.; Galons, Jean-Philippe; Kupinski, Matthew A.; Frey, Georgette; Cui, Haiyan; Squire, Scott; Pagel, Mark D.; Rodriguez, Jeffrey J.; Gillies, Robert J.; Stopeck, Alison T.

    2015-01-01

    Purpose To assess the value of semi-automated segmentation applied to diffusion MRI for predicting the therapeutic response of liver metastasis. Methods Conventional diffusion weighted magnetic resonance imaging (MRI) was performed using b-values of 0, 150, 300 and 450 s/mm2 at baseline and days 4, 11 and 39 following initiation of a new chemotherapy regimen in a pilot study with 18 women with 37 liver metastases from primary breast cancer. A semi-automated segmentation approach was used to identify liver metastases. Linear regression analysis was used to assess the relationship between baseline values of the apparent diffusion coefficient (ADC) and change in tumor size by day 39. Results A semi-automated segmentation scheme was critical for obtaining the most reliable ADC measurements. A statistically significant relationship between baseline ADC values and change in tumor size at day 39 was observed for minimally treated patients with metastatic liver lesions measuring 2–5 cm in size (p = 0.002), but not for heavily treated patients with the same tumor size range (p = 0.29), or for tumors of smaller or larger sizes. ROC analysis identified a baseline threshold ADC value of 1.33 μm2/ms as 75% sensitive and 83% specific for identifying non-responding metastases in minimally treated patients with 2–5 cm liver lesions. Conclusion Quantitative imaging can substantially benefit from a semi-automated segmentation scheme. Quantitative diffusion MRI results can be predictive of therapeutic outcome in selected patients with liver metastases, but not for all liver metastases, and therefore should be considered to be a restricted biomarker. PMID:26284600

  6. Diffusion MRI with Semi-Automated Segmentation Can Serve as a Restricted Predictive Biomarker of the Therapeutic Response of Liver Metastasis.

    PubMed

    Stephen, Renu M; Jha, Abhinav K; Roe, Denise J; Trouard, Theodore P; Galons, Jean-Philippe; Kupinski, Matthew A; Frey, Georgette; Cui, Haiyan; Squire, Scott; Pagel, Mark D; Rodriguez, Jeffrey J; Gillies, Robert J; Stopeck, Alison T

    2015-12-01

    To assess the value of semi-automated segmentation applied to diffusion MRI for predicting the therapeutic response of liver metastasis. Conventional diffusion weighted magnetic resonance imaging (MRI) was performed using b-values of 0, 150, 300 and 450s/mm(2) at baseline and days 4, 11 and 39 following initiation of a new chemotherapy regimen in a pilot study with 18 women with 37 liver metastases from primary breast cancer. A semi-automated segmentation approach was used to identify liver metastases. Linear regression analysis was used to assess the relationship between baseline values of the apparent diffusion coefficient (ADC) and change in tumor size by day 39. A semi-automated segmentation scheme was critical for obtaining the most reliable ADC measurements. A statistically significant relationship between baseline ADC values and change in tumor size at day 39 was observed for minimally treated patients with metastatic liver lesions measuring 2-5cm in size (p=0.002), but not for heavily treated patients with the same tumor size range (p=0.29), or for tumors of smaller or larger sizes. ROC analysis identified a baseline threshold ADC value of 1.33μm(2)/ms as 75% sensitive and 83% specific for identifying non-responding metastases in minimally treated patients with 2-5cm liver lesions. Quantitative imaging can substantially benefit from a semi-automated segmentation scheme. Quantitative diffusion MRI results can be predictive of therapeutic outcome in selected patients with liver metastases, but not for all liver metastases, and therefore should be considered to be a restricted biomarker. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Segmental Isotopic Labeling of Proteins for Nuclear Magnetic Resonance

    PubMed Central

    Dongsheng, Liu; Xu, Rong; Cowburn, David

    2009-01-01

    Nuclear Magnetic Resonance (NMR) spectroscopy has emerged as one of the principal techniques of structural biology. It is not only a powerful method for elucidating 3D structures under near-physiological conditions, but also a convenient method for studying protein-ligand interactions and protein dynamics. A major drawback of macromolecular NMR is its size limitation caused by slower tumbling rates and greater complexity of the spectra as size increases. Segmental isotopic labeling allows specific segment(s) within a protein to be selectively examined by NMR, thus significantly reducing the spectral complexity for large proteins and allowing a variety of solution-based NMR strategies to be applied. Two related approaches are generally used in the segmental isotopic labeling of proteins: expressed protein ligation and protein trans-splicing. Here we describe the methodology and recent application of expressed protein ligation and protein trans-splicing for NMR structural studies of proteins and protein complexes. We also describe the protocol used in our lab for the segmental isotopic labeling of a 50 kDa protein Csk (C-terminal Src Kinase) using expressed protein ligation methods. PMID:19632474

  8. Automated segmentation of the lungs from high resolution CT images for quantitative study of chronic obstructive pulmonary diseases

    NASA Astrophysics Data System (ADS)

    Garg, Ishita; Karwoski, Ronald A.; Camp, Jon J.; Bartholmai, Brian J.; Robb, Richard A.

    2005-04-01

    Chronic obstructive pulmonary diseases (COPD) are debilitating conditions of the lung and are the fourth leading cause of death in the United States. Early diagnosis is critical for timely intervention and effective treatment. The ability to quantify particular imaging features of specific pathology and accurately assess progression or response to treatment with current imaging tools is relatively poor. The goal of this project was to develop automated segmentation techniques that would be clinically useful as computer assisted diagnostic tools for COPD. The lungs were segmented using an optimized segmentation threshold and the trachea was segmented using a fixed threshold characteristic of air. The segmented images were smoothed by a morphological close operation using spherical elements of different sizes. The results were compared to other segmentation approaches using an optimized threshold to segment the trachea. Comparison of the segmentation results from 10 datasets showed that the method of trachea segmentation using a fixed air threshold followed by morphological closing with a spherical element of size 23x23x5 yielded the best results. Inclusion of a greater number of pulmonary vessels in the lung volume is important for the development of computer assisted diagnostic tools because the physiological changes of COPD can result in quantifiable anatomic changes in pulmonary vessels. Using a fixed threshold to segment the trachea removed airways from the lungs to a better extent as compared to using an optimized threshold. Preliminary measurements gathered from patients' CT scans suggest that segmented images can be used for accurate analysis of total lung volume and volumes of regional lung parenchyma. Additionally, reproducible segmentation allows for quantification of specific pathologic features, such as lower intensity pixels, which are characteristic of abnormal air spaces in diseases like emphysema.
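
    The fixed-threshold-plus-morphological-closing step quoted above can be sketched with scipy.ndimage. The HU threshold below (-950) is an assumption, since the record does not state the exact value, and the 23x23x5 element is approximated by a binary ellipsoid; the volume is a placeholder.

        # Fixed air threshold followed by morphological closing with an (approximately)
        # 23x23x5 structuring element; the HU threshold and volume are placeholders.
        import numpy as np
        from scipy import ndimage

        ct = np.random.randint(-1024, 200, size=(24, 96, 96))   # placeholder HU volume

        air_mask = ct < -950                                     # assumed fixed "air" threshold

        # Build a binary ellipsoid of size 5 x 23 x 23 voxels (slices x rows x cols).
        zz, yy, xx = np.ogrid[-2:3, -11:12, -11:12]
        element = (zz / 2.0) ** 2 + (yy / 11.0) ** 2 + (xx / 11.0) ** 2 <= 1.0

        smoothed = ndimage.binary_closing(air_mask, structure=element)
        print("air voxels before/after closing:", air_mask.sum(), smoothed.sum())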

  9. Minding the gaps: literacy enhances lexical segmentation in children learning to read.

    PubMed

    Havron, Naomi; Arnon, Inbal

    2017-11-01

    Can emergent literacy impact the size of the linguistic units children attend to? We examined children's ability to segment multiword sequences before and after they learned to read, in order to disentangle the effect of literacy and age on segmentation. We found that early readers were better at segmenting multiword units (after controlling for age, cognitive, and linguistic variables), and that improvement in literacy skills between the two sessions predicted improvement in segmentation abilities. Together, these findings suggest that literacy acquisition, rather than age, enhanced segmentation. We discuss implications for models of language learning.

  10. Effectiveness of light-reflecting devices: A systematic reanalysis of animal-vehicle collision data.

    PubMed

    Brieger, Falko; Hagen, Robert; Vetter, Daniela; Dormann, Carsten F; Storch, Ilse

    2016-12-01

    Every year, approximately 500 human fatalities occur due to animal-vehicle collisions in the United States and Europe. Especially heavy-bodied animals affect road safety. For more than 50 years, light-reflecting devices such as wildlife warning reflectors have been employed to alert animals to traffic when crossing roads during twilight and night. Numerous studies addressed the effectiveness of light-reflecting devices in reducing collisions with animals in past decades, but yielded contradictory results. In this study, we conducted a systematic literature review to investigate whether light-reflecting devices contribute to an effective prevention of animal-vehicle collisions. We reviewed 53 references and reanalyzed original data of animal-vehicle collisions with meta-analytical methods. We calculated an effect size based on the annual number of animal-vehicle collisions per kilometer of road to compare segments with and without the installation of light-reflecting devices for 185 roads in Europe and North America. Our results indicate that light-reflecting devices did not significantly reduce the number of animal-vehicle collisions. However, we observed considerable differences in effect sizes with respect to study duration, study design, and country. Our results suggest that the length of the road segment studied, study duration, study design, and public attitude (preconception) toward the functioning of the devices may affect whether the documented number of animal-vehicle collisions increases or decreases, and might in turn influence whether the results obtained were published. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. A Microfabricated Segmented-Involute-Foil Regenerator for Enhancing Reliability and Performance of Stirling Engines

    NASA Technical Reports Server (NTRS)

    Ibrahim, Mounir; Danila, Daniel; Simon, Terrence; Mantell, Susan; Sun, Liyong; Gadeon, David; Qiu, Songgang; Wood, Gary; Kelly, Kevin; McLean, Jeffrey

    2007-01-01

    An actual-size microfabricated regenerator comprised of a stack of 42 disks, 19 mm diameter and 0.25 mm thick, with layers of microscopic, segmented, involute-shaped flow channels was fabricated and tested. The geometry resembles layers of uniformly-spaced segmented-parallel-plates, except the plates are curved. Each disk was made from electro-plated nickel using the LiGA process. This regenerator had feature sizes close to those required for an actual Stirling engine but the overall regenerator dimensions were sized for the NASA/Sunpower oscillating-flow regenerator test rig. Testing in the oscillating-flow test rig showed the regenerator performed extremely well, significantly better than currently used random-fiber material, producing the highest figures of merit ever recorded for any regenerator tested in that rig over its approximately 20 years of use.

  12. Calculation and analysis of shear resistance of segment ring joint with shear pin

    NASA Astrophysics Data System (ADS)

    Wu, Shengzhi; Huang, Haibin; Wang, Mingnian; Xiao, Shihui; Liu, Dagang

    2018-03-01

    To determine the effect of shear pins between segments on the shear resistance of segment girth joints, the Maliuzhou traffic tunnel project in Zhuhai, a super-large-diameter shield tunnel in marine composite strata, was taken as the research object. The longitudinal shear stiffness of the tunnel, accounting for the shear rigidity of the shear pins, was obtained through finite element shear experiments on the segment ring. Comparing the calculated results for segment ring connections with and without shear pins leads to the conclusion that shear pins can effectively distribute and transfer shear force and control dislocation between segment rings. The study can be used as a reference for the design and construction of shield tunnels.

  13. Photoreceptor inner segment ellipsoid band integrity on spectral domain optical coherence tomography

    PubMed Central

    Saxena, Sandeep; Srivastav, Khushboo; Cheung, Chui M; Ng, Joanne YW; Lai, Timothy YY

    2014-01-01

    Spectral domain optical coherence tomography cross-sectional imaging of the macula has conventionally been resolved into four bands. However, some doubts were raised regarding the existence of these bands. Recently, a number of studies have suggested that the second band appears to originate from the inner segment ellipsoids of the foveal cone photoreceptors, and therefore what was previously called the inner segment-outer segment junction is now referred to as the inner segment ellipsoid band. Photoreceptor dysfunction may be a significant predictor of visual acuity in a spectrum of surgical and medical retinal diseases. This review aims to provide an overview and summarizes the role of the photoreceptor inner segment ellipsoid band in the management and prognostication of various vitreoretinal diseases. PMID:25525329

  14. Mixed vitiligo of Blaschko lines: a newly discovered presentation of vitiligo responsive to combination treatment.

    PubMed

    Kovacevic, Maja; Stanimirovic, Andrija; Vucic, Majda; Goren, Andy; Situm, Mirna; Lukinovic Skudar, Vesna; Lotti, Torello

    2016-07-01

    Vitiligo, a depigmenting disorder of the skin and mucous membranes, affects up to 1% of the population worldwide. It is classified into four major types: segmental, non-segmental, mixed, and unclassified. Non-segmental vitiligo refers to non-dermatomal distribution of lesions, while dermatomal distribution of lesions is present in patients with segmental vitiligo. Segmental vitiligo can also follow Blaschko lines - pathways of epidermal cell migration and proliferation during the development of the fetus. Here, we present a patient with segmental and non-segmental vitiligo following Blaschko lines with an excellent therapeutic response to combined therapy. Prior to our report, a case of segmental and non-segmental vitiligo following Blaschko lines had never been described; therefore, we suggest the term "mixed vitiligo of Blaschko lines" to describe this entity. This is also a rare case in which 90% repigmentation was achieved in a patient with segmental and non-segmental vitiligo following Blaschko lines after only 2 months of combined therapy. © 2016 Wiley Periodicals, Inc.

  15. Axially adjustable magnetic properties in arrays of multilayered Ni/Cu nanowires with variable segment sizes

    NASA Astrophysics Data System (ADS)

    Shirazi Tehrani, A.; Almasi Kashi, M.; Ramazani, A.; Montazer, A. H.

    2016-07-01

    Arrays of multilayered Ni/Cu nanowires (NWs) with variable segment sizes were fabricated into anodic aluminum oxide templates using a pulsed electrodeposition method in a single bath for designated potential pulse times. Increasing the pulse time between 0.125 and 2 s in the electrodeposition of Ni enabled the formation of segments with thicknesses ranging from 25 to 280 nm and 10-110 nm in 42 and 65 nm diameter NWs, respectively, leading to disk-shaped, rod-shaped and/or near wire-shaped geometries. Using hysteresis loop measurements at room temperature, the axial and perpendicular magnetic properties were investigated. Regardless of the segment geometry, the axial coercivity and squareness significantly increased with increasing Ni segment thickness, in agreement with a decrease in calculated demagnetizing factors along the NW length. On the contrary, the perpendicular magnetic properties were found to be independent of the pulse times, indicating a competition between the intrawire interactions and the shape demagnetizing field.

  16. Spontaneous ignition temperature limits of jet A fuel in research-combustor segment

    NASA Technical Reports Server (NTRS)

    Ingebo, R. D.

    1974-01-01

    The effects of inlet-air pressure and reference velocity on the spontaneous-ignition temperature limits of Jet A fuel were determined in a combustor segment with a primary-zone length of 0.076 m (3 in.). At a constant reference velocity of 21.4 m/sec (170 ft/sec), increasing the inlet-air pressure from 21 to 207 N/sq cm decreased the spontaneous-ignition temperature limit from approximately 700 to 555 K. At a constant inlet-air pressure of 41 N/sq cm, increasing the reference velocity from 12.2 to 30.5 m/sec increased the spontaneous-ignition temperature limit from approximately 575 to 800 K. Results are compared with other data in the literature.

  17. Randomly displaced phase distribution design and its advantage in page-data recording of Fourier transform holograms.

    PubMed

    Emoto, Akira; Fukuda, Takashi

    2013-02-20

    For Fourier transform holography, an effective random phase distribution with randomly displaced phase segments is proposed for obtaining a smooth finite optical intensity distribution in the Fourier transform plane. Since unitary phase segments are randomly distributed in-plane, the blanks give various spatial frequency components to an image, and thus smooth the spectrum. Moreover, by randomly changing the phase segment size, spike generation from the unitary phase segment size in the spectrum can be reduced significantly. As a result, a smooth spectrum including sidebands can be formed at a relatively narrow extent. The proposed phase distribution sustains the primary functions of a random phase mask for holographic-data recording and reconstruction. Therefore, this distribution is expected to find applications in high-density holographic memory systems, replacing conventional random phase mask patterns.
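
    A toy construction of a phase mask built from segments of variable size, each carrying a random phase, shows the smoothing effect on the Fourier spectrum that motivates such designs. The layout below is a naive row-by-row tiling, not the randomly displaced arrangement proposed in the paper.

        # Toy phase mask from segments of random size, each with a random phase;
        # the spectral peak-to-mean ratio indicates how strongly the spectrum is spread.
        import numpy as np

        rng = np.random.default_rng(1)
        N = 256
        phase = np.zeros((N, N))

        y = 0
        while y < N:
            h = int(rng.integers(4, 17))          # random segment height (pixels)
            x = 0
            while x < N:
                w = int(rng.integers(4, 17))      # random segment width (pixels)
                phase[y:y + h, x:x + w] = rng.uniform(0, 2 * np.pi)
                x += w
            y += h

        data_page = rng.integers(0, 2, size=(N, N)).astype(float)   # binary data page
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(data_page * np.exp(1j * phase)))) ** 2
        print("peak / mean spectral intensity:", spectrum.max() / spectrum.mean())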

  18. A novel pipeline for adrenal tumour segmentation.

    PubMed

    Koyuncu, Hasan; Ceylan, Rahime; Erdogan, Hasan; Sivri, Mesut

    2018-06-01

    Adrenal tumours occur on adrenal glands surrounded by organs and osteoid. These tumours can be categorized as either functional, non-functional, malignant, or benign. Depending on their appearance in the abdomen, adrenal tumours can arise from one adrenal gland (unilateral) or from both adrenal glands (bilateral) and can connect with other organs, including the liver, spleen, pancreas, etc. This connection phenomenon constitutes the most important handicap against adrenal tumour segmentation. Size change, variety of shape, diverse location, and low contrast (similar grey values between the various tissues) are other disadvantages compounding segmentation difficulty. Few studies have considered adrenal tumour segmentation, and no significant improvement has been achieved for unilateral, bilateral, adherent, or noncohesive tumour segmentation. There is also no recognised segmentation pipeline or method for adrenal tumours including different shape, size, or location information. This study proposes an adrenal tumour segmentation (ATUS) pipeline designed to eliminate the above disadvantages for adrenal tumour segmentation. ATUS incorporates a number of image processing methods, including contrast limited adaptive histogram equalization, split and merge based on quadtree decomposition, mean shift segmentation, large grey level eliminator, and region growing. Performance assessment of ATUS was realised on 32 arterial and portal phase computed tomography images using six metrics: dice, jaccard, sensitivity, specificity, accuracy, and structural similarity index. ATUS achieved remarkable segmentation performance, and was not affected by the handicaps discussed above, particularly adherence to other organs, with success rates of 83.06%, 71.44%, 86.44%, 99.66%, 99.43%, and 98.51% for the respective metrics on images with sufficient contrast uptake. The proposed ATUS system realises detailed adrenal tumour segmentation, and avoids known disadvantages preventing accurate segmentation. Copyright © 2018 Elsevier B.V. All rights reserved.
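
    Five of the six metrics listed above (dice, jaccard, sensitivity, specificity, accuracy) follow directly from the confusion matrix between the predicted and reference masks; the structural similarity index is usually taken from an image library. The sketch below computes them for a pair of placeholder binary masks.

        # Confusion-matrix-based overlap metrics between a predicted and a reference mask.
        import numpy as np

        def segmentation_metrics(pred, ref):
            pred, ref = pred.astype(bool), ref.astype(bool)
            tp = np.logical_and(pred, ref).sum()
            fp = np.logical_and(pred, ~ref).sum()
            fn = np.logical_and(~pred, ref).sum()
            tn = np.logical_and(~pred, ~ref).sum()
            return {
                "dice": 2 * tp / (2 * tp + fp + fn),
                "jaccard": tp / (tp + fp + fn),
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "accuracy": (tp + tn) / (tp + fp + fn + tn),
            }

        ref = np.zeros((64, 64), dtype=bool); ref[20:40, 20:40] = True    # placeholder reference
        pred = np.zeros((64, 64), dtype=bool); pred[22:42, 21:41] = True  # placeholder prediction
        print(segmentation_metrics(pred, ref))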

  19. One Size (Never) Fits All: Segment Differences Observed Following a School-Based Alcohol Social Marketing Program

    ERIC Educational Resources Information Center

    Dietrich, Timo; Rundle-Thiele, Sharyn; Leo, Cheryl; Connor, Jason

    2015-01-01

    Background: According to commercial marketing theory, a market orientation leads to improved performance. Drawing on the social marketing principles of segmentation and audience research, the current study seeks to identify segments to examine responses to a school-based alcohol social marketing program. Methods: A sample of 371 year 10 students…

  20. Evaluation of dual energy quantitative CT for determining the spatial distributions of red marrow and bone for dosimetry in internal emitter radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodsitt, Mitchell M., E-mail: goodsitt@umich.edu; Shenoy, Apeksha; Howard, David

    2014-05-15

    Purpose: To evaluate a three-equation three-unknown dual-energy quantitative CT (DEQCT) technique for determining region specific variations in bone spongiosa composition for improved red marrow dose estimation in radionuclide therapy. Methods: The DEQCT method was applied to 80/140 kVp images of patient-simulating lumbar sectional body phantoms of three sizes (small, medium, and large). External calibration rods of bone, red marrow, and fat-simulating materials were placed beneath the body phantoms. Similar internal calibration inserts were placed at vertebral locations within the body phantoms. Six test inserts of known volume fractions of bone, fat, and red marrow were also scanned. External-to-internal calibration correction factors were derived. The effects of body phantom size, radiation dose, spongiosa region segmentation granularity [single (∼17 × 17 mm) region of interest (ROI), 2 × 2, and 3 × 3 segmentation of that single ROI], and calibration method on the accuracy of the calculated volume fractions of red marrow (cellularity) and trabecular bone were evaluated. Results: For standard low dose DEQCT x-ray technique factors and the internal calibration method, the RMS errors of the estimated volume fractions of red marrow of the test inserts were 1.2–1.3 times greater in the medium body than in the small body phantom and 1.3–1.5 times greater in the large body than in the small body phantom. RMS errors of the calculated volume fractions of red marrow within 2 × 2 segmented subregions of the ROIs were 1.6–1.9 times greater than for no segmentation, and RMS errors for 3 × 3 segmented subregions were 2.3–2.7 times greater than those for no segmentation. Increasing the dose by a factor of 2 reduced the RMS errors of all constituent volume fractions by an average factor of 1.40 ± 0.29 for all segmentation schemes and body phantom sizes; increasing the dose by a factor of 4 reduced those RMS errors by an average factor of 1.71 ± 0.25. Results for external calibrations exhibited much larger RMS errors than size matched internal calibration. Use of an average body size external-to-internal calibration correction factor reduced the errors to closer to those for internal calibration. RMS errors of less than 30% or about 0.01 for the bone and 0.1 for the red marrow volume fractions would likely be satisfactory for human studies. Such accuracies were achieved for 3 × 3 segmentation of 5 mm slice images for: (a) internal calibration with 4 times dose for all size body phantoms, (b) internal calibration with 2 times dose for the small and medium size body phantoms, and (c) corrected external calibration with 4 times dose and all size body phantoms. Conclusions: Phantom studies are promising and demonstrate the potential to use dual energy quantitative CT to estimate the spatial distributions of red marrow and bone within the vertebral spongiosa.

  1. Evaluation of dual energy quantitative CT for determining the spatial distributions of red marrow and bone for dosimetry in internal emitter radiation therapy

    PubMed Central

    Goodsitt, Mitchell M.; Shenoy, Apeksha; Shen, Jincheng; Howard, David; Schipper, Matthew J.; Wilderman, Scott; Christodoulou, Emmanuel; Chun, Se Young; Dewaraja, Yuni K.

    2014-01-01

    Purpose: To evaluate a three-equation three-unknown dual-energy quantitative CT (DEQCT) technique for determining region specific variations in bone spongiosa composition for improved red marrow dose estimation in radionuclide therapy. Methods: The DEQCT method was applied to 80/140 kVp images of patient-simulating lumbar sectional body phantoms of three sizes (small, medium, and large). External calibration rods of bone, red marrow, and fat-simulating materials were placed beneath the body phantoms. Similar internal calibration inserts were placed at vertebral locations within the body phantoms. Six test inserts of known volume fractions of bone, fat, and red marrow were also scanned. External-to-internal calibration correction factors were derived. The effects of body phantom size, radiation dose, spongiosa region segmentation granularity [single (∼17 × 17 mm) region of interest (ROI), 2 × 2, and 3 × 3 segmentation of that single ROI], and calibration method on the accuracy of the calculated volume fractions of red marrow (cellularity) and trabecular bone were evaluated. Results: For standard low dose DEQCT x-ray technique factors and the internal calibration method, the RMS errors of the estimated volume fractions of red marrow of the test inserts were 1.2–1.3 times greater in the medium body than in the small body phantom and 1.3–1.5 times greater in the large body than in the small body phantom. RMS errors of the calculated volume fractions of red marrow within 2 × 2 segmented subregions of the ROIs were 1.6–1.9 times greater than for no segmentation, and RMS errors for 3 × 3 segmented subregions were 2.3–2.7 times greater than those for no segmentation. Increasing the dose by a factor of 2 reduced the RMS errors of all constituent volume fractions by an average factor of 1.40 ± 0.29 for all segmentation schemes and body phantom sizes; increasing the dose by a factor of 4 reduced those RMS errors by an average factor of 1.71 ± 0.25. Results for external calibrations exhibited much larger RMS errors than size matched internal calibration. Use of an average body size external-to-internal calibration correction factor reduced the errors to closer to those for internal calibration. RMS errors of less than 30% or about 0.01 for the bone and 0.1 for the red marrow volume fractions would likely be satisfactory for human studies. Such accuracies were achieved for 3 × 3 segmentation of 5 mm slice images for: (a) internal calibration with 4 times dose for all size body phantoms, (b) internal calibration with 2 times dose for the small and medium size body phantoms, and (c) corrected external calibration with 4 times dose and all size body phantoms. Conclusions: Phantom studies are promising and demonstrate the potential to use dual energy quantitative CT to estimate the spatial distributions of red marrow and bone within the vertebral spongiosa. PMID:24784380
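    The abstract does not spell out the calibration equations, but a common reading of a three-equation/three-unknown DEQCT model is that the low- and high-energy CT numbers of an ROI are volume-fraction-weighted mixtures of the calibration values of bone, red marrow and fat, with the fractions summing to one. The sketch below solves such a system; the calibration numbers and measurements are placeholders, not the study's data.

```python
import numpy as np

# Calibration values (HU) of pure bone, red marrow and fat at the two energies.
# These numbers are illustrative placeholders only.
hu_low  = np.array([1200.0, 40.0, -80.0])   # 80 kVp:  bone, red marrow, fat
hu_high = np.array([ 800.0, 35.0, -90.0])   # 140 kVp: bone, red marrow, fat

def volume_fractions(meas_low, meas_high):
    """Solve the 3-equation / 3-unknown system
       sum_i f_i * HU_low_i  = measured low-energy HU
       sum_i f_i * HU_high_i = measured high-energy HU
       sum_i f_i             = 1
    for f = (bone, red marrow, fat) volume fractions."""
    A = np.vstack([hu_low, hu_high, np.ones(3)])
    b = np.array([meas_low, meas_high, 1.0])
    return np.linalg.solve(A, b)

# Measurements consistent with 10% bone, 40% red marrow, 50% fat.
print(volume_fractions(meas_low=96.0, meas_high=49.0))
```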

  2. Remote Ischemic Perconditioning to Reduce Reperfusion Injury During Acute ST-Segment-Elevation Myocardial Infarction: A Systematic Review and Meta-Analysis.

    PubMed

    McLeod, Shelley L; Iansavichene, Alla; Cheskes, Sheldon

    2017-05-17

    Remote ischemic conditioning (RIC) is a noninvasive therapeutic strategy that uses brief cycles of blood pressure cuff inflation and deflation to protect the myocardium against ischemia-reperfusion injury. The objective of this systematic review was to determine the impact of RIC on myocardial salvage index, infarct size, and major adverse cardiovascular events when initiated before catheterization. Electronic searches of Medline, Embase, and Cochrane Central Register of Controlled Trials were conducted and reference lists were hand searched. Randomized controlled trials comparing percutaneous coronary intervention (PCI) with and without RIC for patients with ST-segment-elevation myocardial infarction were included. Two reviewers independently screened abstracts, assessed quality of the studies, and extracted data. Data were pooled using random-effects models and reported as mean differences and relative risk with 95% confidence intervals. Eleven articles (9 randomized controlled trials) were included with a total of 1220 patients (RIC+PCI=643, PCI=577). Studies with no events were excluded from meta-analysis. The myocardial salvage index was higher in the RIC+PCI group compared with the PCI group (mean difference: 0.08; 95% confidence interval, 0.02-0.14). Infarct size was reduced in the RIC+PCI group compared with the PCI group (mean difference: -2.46; 95% confidence interval, -4.66 to -0.26). Major adverse cardiovascular events were lower in the RIC+PCI group (9.5%) compared with the PCI group (17.0%; relative risk: 0.57; 95% confidence interval, 0.40-0.82). RIC appears to be a promising adjunctive treatment to PCI for the prevention of reperfusion injury in patients with ST-segment-elevation myocardial infarction; however, additional high-quality research is required before a change in practice can be considered. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
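    The pooling described here (random-effects models yielding mean differences with 95% confidence intervals) is conventionally done with a DerSimonian–Laird estimator; a minimal sketch follows, with hypothetical per-study effects and variances rather than the review's data.

```python
import numpy as np

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate and 95% CI."""
    effects = np.asarray(effects, float)
    variances = np.asarray(variances, float)
    w = 1.0 / variances                              # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)           # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_star = 1.0 / (variances + tau2)
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study mean differences in infarct size and their variances.
print(random_effects_pool([-3.1, -1.8, -2.6], [1.2, 0.8, 1.5]))
```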

  3. Variation in center of mass estimates for extant sauropsids and its importance for reconstructing inertial properties of extinct archosaurs.

    PubMed

    Allen, Vivian; Paxton, Heather; Hutchinson, John R

    2009-09-01

    Inertial properties of animal bodies and segments are critical input parameters for biomechanical analysis of standing and moving, and thus are important for paleobiological inquiries into the broader behaviors, ecology and evolution of extinct taxa such as dinosaurs. But how accurately can these be estimated? Computational modeling was used to estimate the inertial properties including mass, density, and center of mass (COM) for extant crocodiles (adult and juvenile Crocodylus johnstoni) and birds (Gallus gallus; junglefowl and broiler chickens), to identify the chief sources of variation and methodological errors, and their significance. High-resolution computed tomography scans were segmented into 3D objects and imported into inertial property estimation software that allowed for the examination of variable body segment densities (e.g., air spaces such as lungs, and deformable body outlines). Considerable biological variation of inertial properties was found within groups due to ontogenetic changes as well as evolutionary changes between chicken groups. COM positions shift in variable directions during ontogeny in different groups. Our method was repeatable and the resolution was sufficient for accurate estimations of mass and density in particular. However, we also found considerable potential methodological errors for COM related to (1) assumed body segment orientation, (2) what frames of reference are used to normalize COM for size-independent comparisons among animals, and (3) assumptions about tail shape. Methods and assumptions are suggested to minimize these errors in the future and thereby improve estimation of inertial properties for extant and extinct animals. In the best cases, 10%-15% errors in these estimates are unavoidable, but particularly for extinct taxa errors closer to 50% should be expected, and therefore, cautiously investigated. Nonetheless in the best cases these methods allow rigorous estimation of inertial properties. (c) 2009 Wiley-Liss, Inc.
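    The study relies on dedicated inertial-property estimation software; the simplified sketch below only illustrates how mass and center of mass follow from a segmented, voxelized body once per-voxel densities (for example, reduced density for air spaces such as lungs) have been assigned. The density values and voxel size are made up.

```python
import numpy as np

def mass_and_com(density, voxel_size):
    """Mass and center of mass of a voxelized body.
    `density` is a 3D array (kg/m^3, zero outside the body);
    `voxel_size` is the isotropic edge length in metres."""
    voxel_vol = voxel_size ** 3
    voxel_mass = density * voxel_vol
    total_mass = voxel_mass.sum()
    coords = np.indices(density.shape).reshape(3, -1).T * voxel_size
    com = (voxel_mass.ravel()[:, None] * coords).sum(axis=0) / total_mass
    return total_mass, com

# Toy example: a uniform body with a low-density 'lung' cavity shifts the COM.
body = np.full((40, 20, 20), 1000.0)
body[5:15, 5:15, 5:15] = 200.0
print(mass_and_com(body, voxel_size=0.01))
```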

  4. Blood vessels segmentation of hatching eggs based on fully convolutional networks

    NASA Astrophysics Data System (ADS)

    Geng, Lei; Qiu, Ling; Wu, Jun; Xiao, Zhitao

    2018-04-01

    FCN, trained end-to-end, pixels-to-pixels, predicts a result for each pixel. It has been widely used for semantic segmentation. In order to realize the blood vessel segmentation of hatching eggs, a method based on FCN is proposed in this paper. The training datasets are composed of patches extracted from very few images to augment the data. The network combines lower-layer features with deconvolution to enable precise segmentation. The proposed method avoids the problem that training deep networks requires large-scale samples. Experimental results on hatching eggs demonstrate that this method can yield more accurate segmentation outputs than previous approaches. It provides a convenient reference for subsequent fertility detection.
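    A minimal FCN-style sketch (in PyTorch, and not the authors' architecture) showing the two ingredients the abstract mentions: a skip connection from a lower layer and a transposed convolution ("deconvolution") that restores full resolution for per-pixel prediction on training patches. Layer widths and patch size are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Minimal FCN-style network: downsample, then fuse a lower-layer skip
    connection with an upsampled deep path to predict a per-pixel score."""
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.head = nn.Conv2d(16, 1, kernel_size=1)   # per-pixel logit

    def forward(self, x):
        low = self.enc1(x)                 # lower-layer features (full res)
        deep = self.enc2(self.pool(low))   # deeper features (half res)
        fused = self.up(deep) + low        # skip connection: combine scales
        return self.head(fused)

patch = torch.randn(4, 1, 64, 64)          # a batch of training patches
print(TinyFCN()(patch).shape)              # -> torch.Size([4, 1, 64, 64])
```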

  5. Analytical study on a two-dimensional plane of the off-design flow properties of tandem-bladed compressor stators

    NASA Technical Reports Server (NTRS)

    Sanger, N. L.

    1973-01-01

    The flow characteristics of several tandem bladed compressor stators were analytically evaluated over a range of inlet incidence angles. The ratios of rear-segment to front-segment chord and camber were varied. Results were also compared to the analytical performance of a reference solid blade section. All tandem blade sections exhibited lower calculated losses than the solid stator. But no one geometric configuration exhibited clearly superior characteristics. The front segment accepts the major effect of overall incidence angle change. Rear- to front-segment camber ratios of 4 and greater appeared to be limited by boundary-layer separation from the pressure surface of the rear segment.

  6. [Analysis of genomic copy number variations in two unrelated neonates with 8p deletion and duplication associated with congenital heart disease].

    PubMed

    Mei, Mei; Yang, Lin; Zhan, Guodong; Wang, Huijun; Ma, Duan; Zhou, Wenhao; Huang, Guoying

    2014-06-01

    To screen for genomic copy number variations (CNVs) in two unrelated neonates with multiple congenital abnormalities using an Affymetrix SNP chip and try to find the critical region associated with congenital heart disease. Two neonates were tested for genomic copy number variations using a Cytogenetic SNP chip. Rare CNVs with potential clinical significance were selected, requiring deletion segments larger than 50 kb and duplication segments larger than 150 kb, based on analysis with the ChAS software, after excluding false-positive CNVs and segments found in the normal population. The identified CNVs were compared with those of the cases in the DECIPHER and ISCA databases. Eleven rare CNVs ranging in size from 546.6 to 27 892 kb were identified in the 2 neonates. The deletion region and size of case 1 were 8p23.3-p23.1 (387 912-11 506 771 bp) and 11.1 Mb respectively; the duplication region and size of case 1 were 8p23.1-p11.1 (11 508 387-43 321 279 bp) and 31.8 Mb respectively. The deletion region and size of case 2 were 8p23.3-p23.1 (46 385-7 809 878 bp) and 7.8 Mb respectively; the duplication region and size of case 2 were 8p23.1-p11.21 (12 260 914-40 917 092 bp) and 28.7 Mb respectively. The comparison with the DECIPHER and ISCA databases supported the previous viewpoint that 8p23.1 is associated with congenital heart disease and that the region between 7 809 878-11 506 771 bp may play a role in the severe cardiac defects associated with 8p23.1 deletions. Case 1 had serious cardiac abnormalities; its GATA4 was located in the duplicated segment with increased copy number, while SOX7 was located in the deleted segment with decreased copy number. The region between 7 809 878-11 506 771 bp in 8p23.1 is associated with heart defects, and copy number variants of SOX7 and GATA4 may result in congenital heart disease.
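    The size-based selection rule (deletions larger than 50 kb, duplications larger than 150 kb) reduces to a simple filter; the record layout below is illustrative and is not the ChAS export format.

```python
# Illustrative CNV records: (chromosome band, type, size in kb)
cnvs = [
    ("8p23.3-p23.1", "deletion",    11100.0),
    ("8p23.1-p11.1", "duplication", 31800.0),
    ("3q26.1",       "deletion",       42.0),   # below the 50 kb cut-off
    ("15q11.2",      "duplication",   120.0),   # below the 150 kb cut-off
]

MIN_DEL_KB, MIN_DUP_KB = 50.0, 150.0

selected = [
    c for c in cnvs
    if (c[1] == "deletion" and c[2] > MIN_DEL_KB)
    or (c[1] == "duplication" and c[2] > MIN_DUP_KB)
]
print(selected)
```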

  7. Cryopreservation of in vitro grown nodal segments of Rauvolfia serpentina by PVS2 vitrification.

    PubMed

    Ray, Avik; Bhattacharya, Sabita

    2008-01-01

    This paper describes the cryopreservation by PVS2 vitrification of Rauvolfia serpentina (L.) Benth. ex Kurz, an important tropical medicinal plant. The effects of explant type and size, sucrose preculture (duration and concentration) and vitrification treatment were tested. Preliminary experiments with PVS1, PVS2 and PVS3 produced shoot growth only for PVS2. When optimizing the PVS2 vitrification of nodal segments, segments of 0.31-0.39 cm in size performed better than other nodal sizes or apices. Sucrose preculture had a positive role in survival and subsequent regrowth of the cryopreserved explants. Seven days on 0.5 M sucrose solution significantly improved the viability of nodal segments. PVS2 incubation for 45 minutes combined with a 7-day preculture gave the optimum result of 66 percent. Plantlets derived after cryopreservation resumed growth and regenerated normally.

  8. Semiautomatic Segmentation of Glioma on Mobile Devices.

    PubMed

    Wu, Ya-Ping; Lin, Yu-Song; Wu, Wei-Guo; Yang, Cong; Gu, Jian-Qin; Bai, Yan; Wang, Mei-Yun

    2017-01-01

    Brain tumor segmentation is the first and most critical step in clinical applications of radiomics. However, segmenting brain images by radiologists is labor intensive and prone to inter- and intraobserver variability. Stable and reproducible brain image segmentation algorithms are thus important for successful tumor detection in radiomics. In this paper, we propose a supervised brain image segmentation method, especially for magnetic resonance (MR) brain images with glioma. The method uses hard edge multiplicative intrinsic component optimization to preprocess the glioma images on the server side; doctors can then supervise the segmentation process on mobile devices at a convenient time. Since the preprocessed images have the same brightness for the same tissue voxels, they have a small data size (typically 1/10 of the original image size) and a simple structure of 4 types of intensity value. This observation thus allows the follow-up steps to be processed on mobile devices with low bandwidth and limited computing performance. Experiments conducted on 1935 brain slices from 129 patients show that more than 30% of the samples reach 90% similarity, over 60% of the samples reach 85% similarity, and more than 80% of the samples reach 75% similarity. Comparisons with other segmentation methods also demonstrate both the efficiency and stability of the proposed approach.

  9. Comprehensive evaluation of an image segmentation technique for measuring tumor volume from CT images

    NASA Astrophysics Data System (ADS)

    Deng, Xiang; Huang, Haibin; Zhu, Lei; Du, Guangwei; Xu, Xiaodong; Sun, Yiyong; Xu, Chenyang; Jolly, Marie-Pierre; Chen, Jiuhong; Xiao, Jie; Merges, Reto; Suehling, Michael; Rinck, Daniel; Song, Lan; Jin, Zhengyu; Jiang, Zhaoxia; Wu, Bin; Wang, Xiaohong; Zhang, Shuai; Peng, Weijun

    2008-03-01

    Comprehensive quantitative evaluation of tumor segmentation technique on large scale clinical data sets is crucial for routine clinical use of CT based tumor volumetry for cancer diagnosis and treatment response evaluation. In this paper, we present a systematic validation study of a semi-automatic image segmentation technique for measuring tumor volume from CT images. The segmentation algorithm was tested using clinical data of 200 tumors in 107 patients with liver, lung, lymphoma and other types of cancer. The performance was evaluated using both accuracy and reproducibility. The accuracy was assessed using 7 commonly used metrics that can provide complementary information regarding the quality of the segmentation results. The reproducibility was measured by the variation of the volume measurements from 10 independent segmentations. The effect of disease type, lesion size and slice thickness of image data on the accuracy measures was also analyzed. Our results demonstrate that the tumor segmentation algorithm showed good correlation with ground truth for all four lesion types (r = 0.97, 0.99, 0.97, 0.98, p < 0.0001 for liver, lung, lymphoma and other respectively). The segmentation algorithm can produce relatively reproducible volume measurements on all lesion types (coefficient of variation in the range of 10-20%). Our results show that the algorithm is insensitive to lesion size (coefficient of determination close to 0) and slice thickness of image data (p > 0.90). The validation framework used in this study has the potential to facilitate the development of new tumor segmentation algorithms and assist large scale evaluation of segmentation techniques for other clinical applications.
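    Two of the summary statistics used here, correlation with ground truth and the coefficient of variation across repeated segmentations, can be sketched as follows; the volume values are placeholders, not the study's data.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between measured and reference volumes."""
    return np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1]

def coefficient_of_variation(repeated_volumes):
    """CV (%) of volumes from repeated segmentations of one lesion."""
    v = np.asarray(repeated_volumes, float)
    return 100.0 * v.std(ddof=1) / v.mean()

measured  = [12.1, 30.5, 7.9, 55.2]     # algorithm volumes (ml), hypothetical
reference = [12.6, 29.8, 8.4, 53.9]     # ground-truth volumes (ml), hypothetical
print(pearson_r(measured, reference))
print(coefficient_of_variation([12.1, 11.4, 13.0, 12.5]))
```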

  10. Comparison of the accuracy of 3-dimensional cone-beam computed tomography and micro-computed tomography reconstructions by using different voxel sizes.

    PubMed

    Maret, Delphine; Peters, Ove A; Galibourg, Antoine; Dumoncel, Jean; Esclassan, Rémi; Kahn, Jean-Luc; Sixou, Michel; Telmon, Norbert

    2014-09-01

    Cone-beam computed tomography (CBCT) data are, in principle, metrically exact. However, clinicians need to consider the precision of measurements of dental morphology as well as other hard tissue structures. CBCT spatial resolution, and thus image reconstruction quality, is restricted by the acquisition voxel size. The aim of this study was to assess geometric discrepancies among 3-dimensional CBCT reconstructions relative to the micro-CT reference. A total of 37 permanent teeth from 9 mandibles were scanned with CBCT 9500 and 9000 3D and micro-CT. After semiautomatic segmentation, reconstructions were obtained from CBCT acquisitions (voxel sizes 76, 200, and 300 μm) and from micro-CT (voxel size 41 μm). All reconstructions were positioned in the same plane by image registration. The topography of the geometric discrepancies was displayed by using a color map allowing the maximum differences to be located. The maximum differences were mainly found at the cervical margins and on the cusp tips or incisal edges. Geometric reconstruction discrepancies were significant at 300-μm resolution (P = .01, Wilcoxon test). To study hard tissue morphology, CBCT acquisitions require voxel sizes smaller than 300 μm. This experimental study will have to be complemented by studies in vivo that consider the conditions of clinical practice. Copyright © 2014 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  11. Operationalizing hippocampal volume as an enrichment biomarker for amnestic MCI trials: effect of algorithm, test-retest variability and cut-point on trial cost, duration and sample size

    PubMed Central

    Yu, P.; Sun, J.; Wolz, R.; Stephenson, D.; Brewer, J.; Fox, N.C.; Cole, P.E.; Jack, C.R.; Hill, D.L.G.; Schwarz, A.J.

    2014-01-01

    Objective To evaluate the effect of computational algorithm, measurement variability and cut-point on hippocampal volume (HCV)-based patient selection for clinical trials in mild cognitive impairment (MCI). Methods We used normal control and amnestic MCI subjects from ADNI-1 as normative reference and screening cohorts. We evaluated the enrichment performance of four widely-used hippocampal segmentation algorithms (FreeSurfer, HMAPS, LEAP and NeuroQuant) in terms of two-year changes in MMSE, ADAS-Cog and CDR-SB. We modeled the effect of algorithm, test-retest variability and cut-point on sample size, screen fail rates and trial cost and duration. Results HCV-based patient selection yielded not only reduced sample sizes (by ~40–60%) but also lower trial costs (by ~30–40%) across a wide range of cut-points. Overall, the dependence on the cut-point value was similar for the three clinical instruments considered. Conclusion These results provide a guide to the choice of HCV cut-point for aMCI clinical trials, allowing an informed trade-off between statistical and practical considerations. PMID:24211008
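    The trial-design trade-off rests on how the expected treatment effect and its variability in the enriched population drive the required sample size. The sketch below uses the generic two-arm normal-approximation formula rather than the authors' cost/duration model, and the effect sizes are assumptions for illustration only.

```python
from math import ceil
from scipy.stats import norm

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Standard two-sample normal-approximation sample size per arm."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

# Hypothetical: HCV-based enrichment increases the 2-year decline (delta)
# captured by the trial while leaving its spread roughly unchanged.
print(n_per_arm(delta=1.5, sd=6.0))   # unenriched
print(n_per_arm(delta=2.2, sd=6.0))   # HCV-enriched (assumed larger effect)
```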

  12. Understanding the market for geographic information: A market segmentation and characteristics analysis

    NASA Technical Reports Server (NTRS)

    Piper, William S.; Mick, Mark W.

    1994-01-01

    Findings and results from a marketing research study are presented. The report identifies market segments and the product types to satisfy demand in each. An estimate of market size is based on the specific industries in each segment. A sample of ten industries was used in the study. The scientific study covered U.S. firms only.

  13. Model-based measurement of food portion size for image-based dietary assessment using 3D/2D registration

    PubMed Central

    Chen, Hsin-Chen; Jia, Wenyan; Yue, Yaofeng; Li, Zhaoxin; Sun, Yung-Nien; Fernstrom, John D.; Sun, Mingui

    2013-01-01

    Dietary assessment is important in health maintenance and intervention in many chronic conditions, such as obesity, diabetes, and cardiovascular disease. However, there is currently a lack of convenient methods for measuring the volume of food (portion size) in real-life settings. We present a computational method to estimate food volume from a single photographical image of food contained in a typical dining plate. First, we calculate the food location with respect to a 3D camera coordinate system using the plate as a scale reference. Then, the food is segmented automatically from the background in the image. Adaptive thresholding and snake modeling are implemented based on several image features, such as color contrast, regional color homogeneity and curve bending degree. Next, a 3D model representing the general shape of the food (e.g., a cylinder, a sphere, etc.) is selected from a pre-constructed shape model library. The position, orientation and scale of the selected shape model are determined by registering the projected 3D model and the food contour in the image, where the properties of the reference are used as constraints. Experimental results using various realistically shaped foods with known volumes demonstrated satisfactory performance of our image based food volume measurement method even if the 3D geometric surface of the food is not completely represented in the input image. PMID:24223474

  14. Model-based measurement of food portion size for image-based dietary assessment using 3D/2D registration

    NASA Astrophysics Data System (ADS)

    Chen, Hsin-Chen; Jia, Wenyan; Yue, Yaofeng; Li, Zhaoxin; Sun, Yung-Nien; Fernstrom, John D.; Sun, Mingui

    2013-10-01

    Dietary assessment is important in health maintenance and intervention in many chronic conditions, such as obesity, diabetes and cardiovascular disease. However, there is currently a lack of convenient methods for measuring the volume of food (portion size) in real-life settings. We present a computational method to estimate food volume from a single photographic image of food contained on a typical dining plate. First, we calculate the food location with respect to a 3D camera coordinate system using the plate as a scale reference. Then, the food is segmented automatically from the background in the image. Adaptive thresholding and snake modeling are implemented based on several image features, such as color contrast, regional color homogeneity and curve bending degree. Next, a 3D model representing the general shape of the food (e.g., a cylinder, a sphere, etc) is selected from a pre-constructed shape model library. The position, orientation and scale of the selected shape model are determined by registering the projected 3D model and the food contour in the image, where the properties of the reference are used as constraints. Experimental results using various realistically shaped foods with known volumes demonstrated satisfactory performance of our image-based food volume measurement method even if the 3D geometric surface of the food is not completely represented in the input image.
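    Once the plate fixes the pixel-to-centimetre scale (assuming its diameter is known) and a shape model is chosen, the volume follows from simple geometry. The sketch below handles only the cylinder case with made-up measurements and ignores perspective, which the full 3D/2D registration described above accounts for.

```python
import math

def cylinder_volume_from_image(plate_px, plate_cm, food_width_px, food_height_px):
    """Estimate food volume (cm^3) assuming a cylindrical shape model and
    using the plate of known diameter as the scale reference."""
    cm_per_px = plate_cm / plate_px
    radius_cm = 0.5 * food_width_px * cm_per_px
    height_cm = food_height_px * cm_per_px
    return math.pi * radius_cm ** 2 * height_cm

# Hypothetical measurements taken from a segmented image.
print(cylinder_volume_from_image(plate_px=800, plate_cm=27.0,
                                 food_width_px=240, food_height_px=90))
```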

  15. Location of the internal carotid artery and ophthalmic artery segments for non-invasive intracranial pressure measurement by multi-depth TCD.

    PubMed

    Hamarat, Yasin; Deimantavicius, Mantas; Kalvaitis, Evaldas; Siaudvytyte, Lina; Januleviciene, Ingrida; Zakelis, Rolandas; Bartusis, Laimonas

    2017-12-01

    The aim of the present study was to locate the ophthalmic artery by using the edge of the internal carotid artery (ICA) as the reference depth to perform a reliable non-invasive intracranial pressure measurement via a multi-depth transcranial Doppler device and to then determine the positions and angles of an ultrasonic transducer (UT) on the closed eyelid in the case of located segments. High tension glaucoma (HTG) patients and healthy volunteers (HVs) undergoing non-invasive intracranial pressure measurement were selected for this prospective study. The depth of the edge of the ICA was identified, followed by a selection of the depths of the IOA and EOA segments. The positions and angles of the UT on the closed eyelid were measured. The mean depth of the identified ICA edge for HTG patients was 64.3 mm and was 63.0 mm for HVs (p = 0.21). The mean depth of the selected IOA segment for HTG patients was 59.2 mm and 59.3 mm for HVs (p = 0.91). The mean depth of the selected EOA segment for HTG patients was 48.5 mm and 49.8 mm for HVs (p = 0.14). The difference in the located depths of the segments between groups was not statistically significant. The results showed a significant difference in the measured UT angles in the case of the identified edge of the ICA and selected ophthalmic artery segments (p = 0.0002). We demonstrated that locating the IOA and EOA segments can be achieved using the edge of the ICA as a reference point. OA: ophthalmic artery; IOA: intracranial segments of the ophthalmic artery; EOA: extracranial segments of the ophthalmic artery; ICA: internal carotid artery; UT: ultrasonic transducer; HTG: high tension glaucoma; SD: standard deviation; ICP: intracranial pressure; TCD: transcranial Doppler.

  16. Estimation of genomic prediction accuracy from reference populations with varying degrees of relationship.

    PubMed

    Lee, S Hong; Clark, Sam; van der Werf, Julius H J

    2017-01-01

    Genomic prediction is emerging in a wide range of fields including animal and plant breeding, risk prediction in human precision medicine and forensics. It is desirable to establish a theoretical framework for genomic prediction accuracy when the reference data consists of information sources with varying degrees of relationship to the target individuals. A reference set can contain both close and distant relatives as well as 'unrelated' individuals from the wider population in the genomic prediction. The various sources of information were modeled as different populations with different effective population sizes (Ne). Both the effective number of chromosome segments (Me) and Ne are considered to be a function of the data used for prediction. We validate our theory with analyses of simulated as well as real data, and illustrate that the variation in genomic relationships with the target is a predictor of the information content of the reference set. With a similar amount of data available for each source, we show that close relatives can have a substantially larger effect on genomic prediction accuracy than lesser related individuals. We also illustrate that when prediction relies on closer relatives, there is less improvement in prediction accuracy with an increase in training data or marker panel density. We release software that can estimate the expected prediction accuracy and power when combining different reference sources with various degrees of relationship to the target, which is useful when planning genomic prediction (before or after collecting data) in animal, plant and human genetics.
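    The framework builds on the widely used expectation that accuracy grows with the training size N and heritability h² and shrinks with the effective number of chromosome segments Me, which is itself smaller for reference sets of close relatives (small effective Ne). The sketch below uses the standard approximation r = sqrt(N h² / (N h² + Me)); this may differ in detail from the authors' derivation, and all numbers are illustrative.

```python
from math import sqrt

def expected_accuracy(n, h2, me):
    """Widely used approximation: r = sqrt(N h^2 / (N h^2 + Me))."""
    return sqrt(n * h2 / (n * h2 + me))

# A close-relative reference behaves like a population with small Ne,
# hence a much smaller effective number of segments Me.
print(expected_accuracy(n=3000, h2=0.5, me=50_000))   # 'unrelated' reference
print(expected_accuracy(n=3000, h2=0.5, me=1_000))    # close relatives
```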

  17. Comparison of atlas-based techniques for whole-body bone segmentation.

    PubMed

    Arabi, Hossein; Zaidi, Habib

    2017-02-01

    We evaluate the accuracy of whole-body bone extraction from whole-body MR images using a number of atlas-based segmentation methods. The motivation behind this work is to find the most promising approach for the purpose of MRI-guided derivation of PET attenuation maps in whole-body PET/MRI. To this end, a variety of atlas-based segmentation strategies commonly used in medical image segmentation and pseudo-CT generation were implemented and evaluated in terms of whole-body bone segmentation accuracy. Bone segmentation was performed on 23 whole-body CT/MR image pairs via a leave-one-out cross validation procedure. The evaluated segmentation techniques include: (i) intensity averaging (IA), (ii) majority voting (MV), (iii) global and (iv) local (voxel-wise) weighting atlas fusion frameworks implemented utilizing normalized mutual information (NMI), normalized cross-correlation (NCC) and mean square distance (MSD) as image similarity measures for calculating the weighting factors, along with other atlas-dependent algorithms, such as (v) shape-based averaging (SBA) and (vi) Hofmann's pseudo-CT generation method. The performance evaluation of the different segmentation techniques was carried out in terms of estimating bone extraction accuracy from whole-body MRI using standard metrics, such as Dice similarity (DSC) and relative volume difference (RVD), considering bony structures obtained from intensity thresholding of the reference CT images as the ground truth. Considering the Dice criterion, global weighting atlas fusion methods provided moderate improvement of whole-body bone segmentation (DSC = 0.65 ± 0.05) compared to non-weighted IA (DSC = 0.60 ± 0.02). The local weighted atlas fusion approach using the MSD similarity measure outperformed the other strategies by achieving a DSC of 0.81 ± 0.03, while using the NCC and NMI measures resulted in a DSC of 0.78 ± 0.05 and 0.75 ± 0.04, respectively. Despite very long computation time, the extracted bone obtained from both the SBA (DSC = 0.56 ± 0.05) and Hofmann's methods (DSC = 0.60 ± 0.02) exhibited no improvement compared to non-weighted IA. Finding the optimum parameters for implementation of the atlas fusion approach, such as weighting factors and image similarity patch size, has a great impact on the performance of atlas-based segmentation approaches. The voxel-wise atlas fusion approach exhibited excellent performance in terms of cancelling out the non-systematic registration errors, leading to accurate and reliable segmentation results. Denoising and normalization of MR images together with optimization of the involved parameters play a key role in improving bone extraction accuracy. Copyright © 2016 Elsevier B.V. All rights reserved.
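    Of the fusion strategies compared, majority voting is the simplest; a toy sketch of it, together with the Dice score used for evaluation, is given below. Local weighted fusion would instead derive per-voxel weights from NCC, NMI or MSD computed in a patch around each voxel; the arrays here are illustrative.

```python
import numpy as np

def majority_vote(atlas_labels):
    """Fuse binary bone masks propagated from several atlases."""
    stacked = np.stack([np.asarray(a, int) for a in atlas_labels])
    return stacked.sum(axis=0) > (len(atlas_labels) / 2)

def dice(a, b):
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2 * np.sum(a & b) / (np.sum(a) + np.sum(b))

# Toy 1D 'masks' from three registered atlases and a reference segmentation.
atlases = [np.array([1, 1, 0, 0, 1]),
           np.array([1, 0, 0, 1, 1]),
           np.array([1, 1, 0, 0, 0])]
reference = np.array([1, 1, 0, 0, 1])
fused = majority_vote(atlases)
print(fused.astype(int), dice(fused, reference))
```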

  18. Reproducibility and Prognosis of Quantitative Features Extracted from CT Images

    PubMed Central

    Balagurunathan, Yoganand; Gu, Yuhua; Wang, Hua; Kumar, Virendra; Grove, Olya; Hawkins, Sam; Kim, Jongphil; Goldgof, Dmitry B; Hall, Lawrence O; Gatenby, Robert A; Gillies, Robert J

    2014-01-01

    We study the reproducibility of quantitative imaging features that are used to describe tumor shape, size, and texture from computed tomography (CT) scans of non-small cell lung cancer (NSCLC). CT images are dependent on various scanning factors. We focus on characterizing image features that are reproducible in the presence of variations due to patient factors and segmentation methods. Thirty-two NSCLC nonenhanced lung CT scans were obtained from the Reference Image Database to Evaluate Response data set. The tumors were segmented using both manual (radiologist expert) and ensemble (software-automated) methods. A set of features (219 three-dimensional and 110 two-dimensional) was computed, and quantitative image features were statistically filtered to identify a subset of reproducible and nonredundant features. The variability in the repeated experiment was measured by the test-retest concordance correlation coefficient (CCC_TreT). The natural range in the features, normalized to variance, was measured by the dynamic range (DR). In this study, there were 29 features across segmentation methods found with CCC_TreT and DR ≥ 0.9 and R²_Bet ≥ 0.95. These reproducible features were tested for predicting radiologist prognostic score; some texture features (run-length and Laws kernels) had an area under the curve of 0.9. The representative features were tested for their prognostic capabilities using an independent NSCLC data set (59 lung adenocarcinomas), where one of the texture features, run-length gray-level nonuniformity, was statistically significant in separating the samples into survival groups (P ≤ .046). PMID:24772210
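    The test–retest filter uses Lin's concordance correlation coefficient; a small sketch of that statistic on hypothetical feature values follows.

```python
import numpy as np

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient between test and retest values."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

test   = [4.1, 7.8, 3.2, 9.5, 6.0]   # feature values, scan 1 (hypothetical)
retest = [4.3, 7.5, 3.4, 9.1, 6.2]   # feature values, scan 2 (hypothetical)
print(concordance_cc(test, retest))
```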

  19. Optimal trajectories for the aeroassisted flight experiment. Part 4: Data, tables, and graphs

    NASA Technical Reports Server (NTRS)

    Miele, A.; Wang, T.; Lee, W. Y.; Wang, H.; Wu, G. D.

    1989-01-01

    The determination of optimal trajectories for the aeroassisted flight experiment (AFE) is discussed. Data, tables, and graphs relative to the following transfers are presented: (IA) indirect ascent to a 178 NM perigee via a 197 NM apogee; and (DA) direct ascent to a 178 NM apogee. For both transfers, two cases are investigated: (1) the bank angle is continuously variable; and (2) the trajectory is divided into segments along which the bank angle is constant. For case (2), the following subcases are studied: two segments, three segments, four segments, and five segments; because the time duration of each segment is optimized, the above subcases involve four, six, eight, and ten parameters, respectively. Presented here are systematic data on a total of ten optimal trajectories (OT), five for Transfer IA and five for Transfer DA. For comparison purposes and only for Transfer IA, a five-segment reference trajectory RT is also considered.

  20. Aberration correction in wide-field fluorescence microscopy by segmented-pupil image interferometry.

    PubMed

    Scrimgeour, Jan; Curtis, Jennifer E

    2012-06-18

    We present a new technique for the correction of optical aberrations in wide-field fluorescence microscopy. Segmented-Pupil Image Interferometry (SPII) uses a liquid crystal spatial light modulator placed in the microscope's pupil plane to split the wavefront originating from a fluorescent object into an array of individual beams. Distortion of the wavefront arising from either system or sample aberrations results in displacement of the images formed from the individual pupil segments. Analysis of image registration allows for the local tilt in the wavefront at each segment to be corrected with respect to a central reference. A second correction step optimizes the image intensity by adjusting the relative phase of each pupil segment through image interferometry. This ensures that constructive interference between all segments is achieved at the image plane. Improvements in image quality are observed when Segmented-Pupil Image Interferometry is applied to correct aberrations arising from the microscope's optical path.

  1. An Event-Triggered Machine Learning Approach for Accelerometer-Based Fall Detection.

    PubMed

    Putra, I Putu Edy Suardiyana; Brusey, James; Gaura, Elena; Vesilo, Rein

    2017-12-22

    The fixed-size non-overlapping sliding window (FNSW) and fixed-size overlapping sliding window (FOSW) approaches are the most commonly used data-segmentation techniques in machine learning-based fall detection using accelerometer sensors. However, these techniques do not segment by fall stages (pre-impact, impact, and post-impact) and thus useful information is lost, which may reduce the detection rate of the classifier. Aligning the segment with the fall stage is difficult, as the segment size varies. We propose an event-triggered machine learning (EvenT-ML) approach that aligns each fall stage so that the characteristic features of the fall stages are more easily recognized. To evaluate our approach, two publicly accessible datasets were used. Classification and regression tree (CART), k-nearest neighbor (k-NN), logistic regression (LR), and the support vector machine (SVM) were used to train the classifiers. EvenT-ML gives classifier F-scores of 98% for a chest-worn sensor and 92% for a waist-worn sensor, and significantly reduces the computational cost compared with the FNSW- and FOSW-based approaches, with reductions of up to 8-fold and 78-fold, respectively. EvenT-ML achieves a significantly better F-score than existing fall detection approaches. These results indicate that aligning feature segments with fall stages significantly increases the detection rate and reduces the computational cost.
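    For contrast with the event-triggered scheme, the two baseline segmentations (FNSW and FOSW) can be written as one windowing helper; the window size and overlap below are arbitrary, and an event-triggered approach would instead anchor windows on detected fall-stage boundaries.

```python
import numpy as np

def sliding_windows(signal, size, overlap=0.0):
    """Fixed-size windows; overlap=0 gives FNSW, 0<overlap<1 gives FOSW."""
    step = max(1, int(size * (1.0 - overlap)))
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

acc = np.random.randn(1000)                              # stand-in accelerometer axis
print(len(sliding_windows(acc, size=100)))               # FNSW: 10 windows
print(len(sliding_windows(acc, size=100, overlap=0.5)))  # FOSW: 19 windows
```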

  2. 78 FR 42921 - Endangered and Threatened Wildlife and Plants; Designation of Critical Habitat for the Northwest...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-18

    ... Habitat for the Northwest Atlantic Ocean Distinct Population Segment of the Loggerhead Sea Turtle (Caretta... Northwest Atlantic Ocean Distinct Population Segment (DPS) of the Loggerhead Sea Turtle (Caretta caretta... Ocean DPS of the loggerhead sea turtle, its habitat, or previous Federal actions, refer to the proposed...

  3. Using NASA's Reference Architecture: Comparing Polar and Geostationary Data Processing Systems

    NASA Technical Reports Server (NTRS)

    Ullman, Richard; Burnett, Michael

    2013-01-01

    The JPSS and GOES-R programs are housed at NASA GSFC and jointly implemented by NASA and NOAA to NOAA requirements. NASA's role in the JPSS Ground System is to develop and deploy the system according to NOAA requirements. NASA's role in the GOES-R ground segment is to provide Systems Engineering expertise and oversight for NOAA's development and deployment of the system. NASA's Earth Science Data Systems Reference Architecture is a document developed by NASA's Earth Science Data Systems Standards Process Group that describes a NASA Earth Observing Mission Ground system as a generic abstraction. The authors work within the respective ground segment projects and are also, separately, contributors to the Reference Architecture document. Opinions expressed are the authors' only and are not NOAA, NASA or the Ground Projects' official positions.

  4. Automated nodule location and size estimation using a multi-scale Laplacian of Gaussian filtering approach.

    PubMed

    Jirapatnakul, Artit C; Fotin, Sergei V; Reeves, Anthony P; Biancardi, Alberto M; Yankelevitz, David F; Henschke, Claudia I

    2009-01-01

    Estimation of nodule location and size is an important pre-processing step in some nodule segmentation algorithms to determine the size and location of the region of interest. Ideally, such estimation methods will consistently find the same nodule location regardless of where the seed point (provided either manually or by a nodule detection algorithm) is placed relative to the "true" center of the nodule, and the size should be a reasonable estimate of the true nodule size. We developed a method that estimates nodule location and size using multi-scale Laplacian of Gaussian (LoG) filtering. Nodule candidates near a given seed point are found by searching for blob-like regions with high filter response. The candidates are then pruned according to filter response and location, and the remaining candidates are sorted by size and the largest candidate selected. This method was compared to a previously published template-based method. The methods were evaluated on the basis of stability of the estimated nodule location to changes in the initial seed point and how well the size estimates agreed with volumes determined by a semi-automated nodule segmentation method. The LoG method exhibited better stability to changes in the seed point, with 93% of nodules having the same estimated location even when the seed point was altered, compared to only 52% of nodules for the template-based method. Both methods also showed good agreement with sizes determined by a nodule segmentation method, with an average relative size difference of 5% and -5% for the LoG and template-based methods respectively.
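    skimage's blob_log implements multi-scale LoG filtering of this general kind. The sketch below detects blob candidates, keeps those near a seed point and returns them largest-first, loosely following the steps described above; the distance limit, scale range and threshold are assumptions, not the paper's settings.

```python
import numpy as np
from skimage.feature import blob_log

def nodule_candidates(image, seed, max_dist=20, max_sigma=15):
    """Multi-scale LoG blob detection, keeping candidates near the seed point.
    Returns (row, col, radius) tuples sorted by estimated size (largest first)."""
    blobs = blob_log(image, min_sigma=2, max_sigma=max_sigma,
                     num_sigma=10, threshold=0.05)
    blobs[:, 2] *= np.sqrt(2)                       # sigma -> approximate radius
    keep = [b for b in blobs
            if np.hypot(b[0] - seed[0], b[1] - seed[1]) <= max_dist]
    return sorted(keep, key=lambda b: b[2], reverse=True)

# Synthetic test image: a bright Gaussian 'nodule' on a dark background.
yy, xx = np.mgrid[0:128, 0:128]
img = np.exp(-((yy - 64) ** 2 + (xx - 64) ** 2) / (2 * 8.0 ** 2))
print(nodule_candidates(img, seed=(60, 60))[:1])
```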

  5. Best Merge Region Growing Segmentation with Integrated Non-Adjacent Region Object Aggregation

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Tarabalka, Yuliya; Montesano, Paul M.; Gofman, Emanuel

    2012-01-01

    Best merge region growing normally produces segmentations with closed connected region objects. Recognizing that spectrally similar objects often appear in spatially separate locations, we present an approach for tightly integrating best merge region growing with non-adjacent region object aggregation, which we call Hierarchical Segmentation or HSeg. However, the original implementation of non-adjacent region object aggregation in HSeg required excessive computing time even for moderately sized images because of the required intercomparison of each region with all other regions. This problem was previously addressed by a recursive approximation of HSeg, called RHSeg. In this paper we introduce a refined implementation of non-adjacent region object aggregation in HSeg that reduces the computational requirements of HSeg without resorting to the recursive approximation. In this refinement, HSeg's region inter-comparisons among non-adjacent regions are limited to regions of a dynamically determined minimum size. We show that this refined version of HSeg can process moderately sized images in about the same amount of time as RHSeg incorporating the original HSeg. Nonetheless, RHSeg is still required for processing very large images due to its lower computer memory requirements and amenability to parallel processing. We then note a limitation of RHSeg with the original HSeg for high spatial resolution images, and show how incorporating the refined HSeg into RHSeg overcomes this limitation. The quality of the image segmentations produced by the refined HSeg is then compared with other available best merge segmentation approaches. Finally, we comment on the unique nature of the hierarchical segmentations produced by HSeg.

  6. Modeling envelope statistics of blood and myocardium for segmentation of echocardiographic images.

    PubMed

    Nillesen, Maartje M; Lopata, Richard G P; Gerrits, Inge H; Kapusta, Livia; Thijssen, Johan M; de Korte, Chris L

    2008-04-01

    The objective of this study was to investigate the use of speckle statistics as a preprocessing step for segmentation of the myocardium in echocardiographic images. Three-dimensional (3D) and biplane image sequences of the left ventricle of two healthy children and one dog (beagle) were acquired. Pixel-based speckle statistics of manually segmented blood and myocardial regions were investigated by fitting various probability density functions (pdf). The statistics of heart muscle and blood could both be optimally modeled by a K-pdf or Gamma-pdf (Kolmogorov-Smirnov goodness-of-fit test). Scale and shape parameters of both distributions could differentiate between blood and myocardium. Local estimation of these parameters was used to obtain parametric images, where window size was related to speckle size (5 x 2 speckles). Moment-based and maximum-likelihood estimators were used. Scale parameters were still able to differentiate blood from myocardium; however, smoothing of edges of anatomical structures occurred. Estimation of the shape parameter required a larger window size, leading to unacceptable blurring. Using these parameters as an input for segmentation resulted in unreliable segmentation. Adaptive mean squares filtering was then introduced using the moment-based scale parameter (sigma(2)/mu) of the Gamma-pdf to automatically steer the two-dimensional (2D) local filtering process. This method adequately preserved sharpness of the edges. In conclusion, a trade-off between preservation of sharpness of edges and goodness-of-fit when estimating local shape and scale parameters is evident for parametric images. For this reason, adaptive filtering outperforms parametric imaging for the segmentation of echocardiographic images.
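    A sketch of the moment-based local estimation behind the parametric images: within a sliding window related to speckle size, the Gamma scale parameter is estimated as σ²/μ and the shape parameter as μ²/σ². The window size and the synthetic envelope below are illustrative, not the study's data.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_gamma_parameters(envelope, window=(9, 5)):
    """Moment-based local Gamma-pdf parameters of an envelope image:
    scale = var/mean, shape = mean^2/var, estimated in a sliding window."""
    mean = uniform_filter(envelope, size=window)
    mean_sq = uniform_filter(envelope ** 2, size=window)
    var = np.maximum(mean_sq - mean ** 2, 1e-12)
    scale = var / np.maximum(mean, 1e-12)
    shape = mean ** 2 / var
    return scale, shape

# Synthetic envelope: Gamma-distributed 'myocardium' next to darker 'blood'.
rng = np.random.default_rng(1)
img = np.hstack([rng.gamma(shape=4.0, scale=8.0, size=(64, 64)),
                 rng.gamma(shape=2.0, scale=3.0, size=(64, 64))])
scale, shape = local_gamma_parameters(img)
print(scale[:, :64].mean(), scale[:, 64:].mean())   # scale separates the regions
```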

  7. Automatic segmentation and measurements of gestational sac using static B-mode ultrasound images

    NASA Astrophysics Data System (ADS)

    Ibrahim, Dheyaa Ahmed; Al-Assam, Hisham; Du, Hongbo; Farren, Jessica; Al-karawi, Dhurgham; Bourne, Tom; Jassim, Sabah

    2016-05-01

    Ultrasound imagery has been widely used for medical diagnoses. Ultrasound scanning is safe and non-invasive, and hence used throughout pregnancy for monitoring growth. In the first trimester, an important measurement is that of the Gestation Sac (GS). The task of measuring the GS size from an ultrasound image is done manually by a gynecologist. This paper presents a new approach to automatically segment a GS from a static B-mode image by exploiting its geometric features for early identification of miscarriage cases. To accurately locate the GS in the image, the proposed solution uses the wavelet transform to suppress speckle noise by eliminating the high-frequency sub-bands and to prepare an enhanced image. This is followed by a segmentation step that isolates the GS through several stages. First, the mean value is used as a threshold to binarise the image, followed by filtering unwanted objects based on their circularity, size and mean grey level. The mean value of each object is then used to further select candidate objects. A region growing technique is applied as a post-processing step to finally identify the GS. We evaluated the effectiveness of the proposed solution by firstly comparing the automatic size measurements of the segmented GS against the manual measurements, and then integrating the proposed segmentation solution into a classification framework for identifying miscarriage cases and pregnancy of unknown viability (PUV). Both test results demonstrate that the proposed method is effective in segmenting the GS and classifying the outcomes with a high level of accuracy (sensitivity (miscarriage) of 100% and specificity (PUV) of 99.87%).
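    The candidate-filtering stage can be sketched as follows: binarise at the mean grey level (the sac is darker than surrounding tissue) and keep objects whose area and circularity (4πA/P²) are plausible for a gestational sac. The thresholds and synthetic image are illustrative, and the wavelet-denoising and region-growing steps are omitted.

```python
import numpy as np
from skimage.measure import label, regionprops

def gestational_sac_candidates(image, min_area=500, min_circularity=0.4):
    """Binarise at the mean grey level and keep large, roughly circular objects."""
    binary = image < image.mean()
    candidates = []
    for region in regionprops(label(binary)):
        if region.area < min_area or region.perimeter == 0:
            continue
        circularity = 4 * np.pi * region.area / region.perimeter ** 2
        if circularity >= min_circularity:
            candidates.append((region.label, region.area, circularity))
    return candidates

# Synthetic image: a dark elliptical 'sac' on a brighter speckled background.
rng = np.random.default_rng(2)
img = rng.normal(150, 5, size=(256, 256))
yy, xx = np.mgrid[0:256, 0:256]
img[((yy - 128) / 60) ** 2 + ((xx - 128) / 40) ** 2 < 1] = 60
print(gestational_sac_candidates(img))
```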

  8. Operating Room of the Future: Advanced Technologies in Safe and Efficient Operating Rooms

    DTIC Science & Technology

    2010-10-01

    research, and treatment purposes. A laser optical mouse and a graphics tablet were used by radiologists to segment 12 simulated reference lesions per... radiologists segmented a total of 132 simulated lesions. Overall error in contour segmentation was less with the graphics tablet than with the mouse (P < 0.0001). Error in area of segmentation was not significantly different between the tablet and the mouse (P = 0.62). Time for segmentation was less with

  9. Microplitis demolitor bracovirus genome segments vary in abundance and are individually packaged in virions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beck, Markus H.; Inman, Ross B.; Strand, Michael R.

    2007-03-01

    Polydnaviruses (PDVs) are distinguished by their unique association with parasitoid wasps and their segmented, double-stranded (ds) DNA genomes that are non-equimolar in abundance. Relatively little is actually known, however, about genome packaging or segment abundance of these viruses. Here, we conducted electron microscopy (EM) and real-time polymerase chain reaction (PCR) studies to characterize packaging and segment abundance of Microplitis demolitor bracovirus (MdBV). Like other PDVs, MdBV replicates in the ovaries of females where virions accumulate to form a suspension called calyx fluid. Wasps then inject a quantity of calyx fluid when ovipositing into hosts. The MdBV genome consists of 15 segments that range from 3.6 (segment A) to 34.3 kb (segment O). EM analysis indicated that MdBV virions contain a single nucleocapsid that encapsidates one circular DNA of variable size. We developed a semi-quantitative real-time PCR assay using SYBR Green I. This assay indicated that five (J, O, H, N and B) segments of the MdBV genome accounted for more than 60% of the viral DNAs in calyx fluid. Estimates of relative segment abundance using our real-time PCR assay were also very similar to DNA size distributions determined from micrographs. Analysis of parasitized Pseudoplusia includens larvae indicated that copy number of MdBV segments C, B and J varied between hosts but their relative abundance within a host was virtually identical to their abundance in calyx fluid. Among-tissue assays indicated that each viral segment was most abundant in hemocytes and least abundant in salivary glands. However, the relative abundance of each segment to one another was similar in all tissues. We also found no clear relationship between MdBV segment and transcript abundance in hemocytes and fat body.

  10. Segmental neurofibromatosis and cancer: report of triple malignancy in a woman with mosaic Neurofibromatosis 1 and review of neoplasms in segmental neurofibromatosis.

    PubMed

    Cohen, Philip R

    2016-07-15

    Background: In segmental neurofibromatosis, referred to as mosaic neurofibromatosis 1, patients present with neurofibromas or café au lait macules or both in a unilateral segment of the body. Purpose: A woman with segmental neurofibromatosis and triple cancer (renal cell carcinoma, mixed thyroid carcinoma, and lentigo maligna) is described and cancers observed in patients with segmental neurofibromatosis are reviewed. Methods: PubMed was used to search the following terms, separately and in combination: cancer, malignancy, mosaic, neoplasm, neurofibroma, neurofibromatosis, segment, segmental, tumor. Results: Malignancy (13 cancers) has been observed in 11 segmental neurofibromatosis patients; one patient had three different cancers. The most common neoplasms were of neural crest origin [malignant peripheral nerve sheath tumor (3 patients) and melanoma (3 patients)] and gastrointestinal tract origin [colon (1 patient) and gastric (1 patient)]. Breast cancer, Hodgkin lymphoma, lung cancer, kidney cancer, and thyroid cancer each occurred in one patient. Conclusions: Similar to patients with von Recklinghausen neurofibromatosis 1, individuals with segmental neurofibromatosis also have a genodermatosis-associated increased risk of developing cancer.

  11. Recursive Hierarchical Image Segmentation by Region Growing and Constrained Spectral Clustering

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2002-01-01

    This paper describes an algorithm for hierarchical image segmentation (referred to as HSEG) and its recursive formulation (referred to as RHSEG). The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing. In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer, implementation of HSEG (RHSEG) has been devised and is described herein. Included in this description is special code that is required to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. Implementations for single processor and for multiple processor computer systems are described. Results with Landsat TM data are included comparing HSEG with classic region growing. Finally, an application to image information mining and knowledge discovery is discussed.

  12. The relationship between partial upper-airway obstruction and inter-breath transition period during sleep.

    PubMed

    Mann, Dwayne L; Edwards, Bradley A; Joosten, Simon A; Hamilton, Garun S; Landry, Shane; Sands, Scott A; Wilson, Stephen J; Terrill, Philip I

    2017-10-01

    Short pauses or "transition-periods" at the end of expiration and prior to subsequent inspiration are commonly observed during sleep in humans. However, the role of transition periods in regulating ventilation during physiological challenges such as partial airway obstruction (PAO) has not been investigated. Twenty-nine obstructive sleep apnea patients and eight controls underwent overnight polysomnography with an epiglottic catheter. Sustained-PAO segments (increased epiglottic pressure over ≥5 breaths without increased peak inspiratory flow) and unobstructed reference segments were manually scored during apnea-free non-REM sleep. Nasal pressure data was computationally segmented into inspiratory (T_I, shortest period achieving 95% inspiratory volume), expiratory (T_E, shortest period achieving 95% expiratory volume), and inter-breath transition period (T_Trans, period between T_E and subsequent T_I). Compared with reference segments, sustained-PAO segments had a mean relative reduction in T_Trans (-24.7±17.6%, P<0.001), elevated T_I (11.8±10.5%, P<0.001), and a small reduction in T_E (-3.9±8.0, P≤0.05). Compensatory increases in inspiratory period during PAO are primarily explained by reduced transition period and not by reduced expiratory period. Copyright © 2017 Elsevier B.V. All rights reserved.
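    A sketch of the timing definitions on a single synthetic breath: T_I and T_E are taken as the shortest contiguous windows capturing 95% of the inspired and expired volume, and the remainder of the breath is treated here as the transition period, which is a simplification of the paper's between-T_E-and-next-T_I definition. The flow shapes and sampling rate are made up.

```python
import numpy as np

def shortest_period_for_fraction(flow, fs, fraction=0.95):
    """Shortest contiguous window (in seconds) whose integrated flow reaches
    `fraction` of the phase's total volume (flow is one-signed within a phase)."""
    vol = np.concatenate([[0.0], np.cumsum(np.abs(flow)) / fs])
    target = fraction * vol[-1]
    best = len(flow)
    for length in range(1, len(flow) + 1):
        if np.max(vol[length:] - vol[:-length]) >= target:
            best = length
            break
    return best / fs

fs = 100.0                                   # Hz, synthetic sampling rate
t = np.arange(0, 4.0, 1 / fs)                # one breath lasting 4 s
insp = np.sin(np.pi * t[t < 1.5] / 1.5)      # 1.5 s inspiratory half-sine
exp_ = -np.sin(np.pi * (t[(t >= 1.8) & (t < 3.8)] - 1.8) / 2.0)  # 2.0 s expiration

t_i = shortest_period_for_fraction(insp, fs)
t_e = shortest_period_for_fraction(exp_, fs)
t_trans = 4.0 - t_i - t_e                    # remainder of the breath
print(round(t_i, 2), round(t_e, 2), round(t_trans, 2))
```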

  13. Multiscale CNNs for Brain Tumor Segmentation and Diagnosis.

    PubMed

    Zhao, Liya; Jia, Kebin

    2016-01-01

    Early brain tumor detection and diagnosis are critical in the clinic. Thus segmentation of the focused tumor area needs to be accurate, efficient, and robust. In this paper, we propose an automatic brain tumor segmentation method based on Convolutional Neural Networks (CNNs). Traditional CNNs focus only on local features and ignore global region features, which are both important for pixel classification and recognition. Besides, a brain tumor can appear in any place in the brain and be of any size and shape across patients. We design a three-stream framework named multiscale CNNs which can automatically detect the optimum top-three scales of the image sizes and combine information from different scales of the regions around that pixel. Datasets provided by the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized by MICCAI 2013 are utilized for both training and testing. The designed multiscale CNNs framework also combines multimodal features from T1, T1-enhanced, T2, and FLAIR MRI images. By comparison with traditional CNNs and the best two methods in BRATS 2012 and 2013, our framework shows advances in brain tumor segmentation accuracy and robustness.
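
    To make the three-stream idea concrete, here is a toy PyTorch sketch of a multi-scale patch classifier: three patches of different sizes centred on the same pixel feed three parallel CNN streams whose features are concatenated for per-pixel classification. The patch sizes, channel counts, and five-class output are assumptions for illustration, not the published BRATS architecture.

    ```python
    import torch
    import torch.nn as nn

    class Stream(nn.Module):
        def __init__(self, in_ch=4):                 # e.g. T1, T1c, T2, FLAIR
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),              # scale-independent pooling
            )

        def forward(self, x):
            return self.features(x).flatten(1)        # (N, 32)

    class MultiScaleCNN(nn.Module):
        def __init__(self, in_ch=4, n_classes=5):
            super().__init__()
            self.streams = nn.ModuleList([Stream(in_ch) for _ in range(3)])
            self.classifier = nn.Linear(3 * 32, n_classes)

        def forward(self, patches):                   # list of 3 patch tensors
            feats = [s(p) for s, p in zip(self.streams, patches)]
            return self.classifier(torch.cat(feats, dim=1))

    if __name__ == "__main__":
        model = MultiScaleCNN()
        # a batch of 8 pixels, each described by 4-channel patches at 3 scales
        patches = [torch.randn(8, 4, s, s) for s in (13, 25, 49)]
        print(model(patches).shape)                   # torch.Size([8, 5])
    ```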

  14. Usher syndrome type 1–associated cadherins shape the photoreceptor outer segment

    PubMed Central

    Parain, Karine; Aghaie, Asadollah; Picaud, Serge

    2017-01-01

    Usher syndrome type 1 (USH1) causes combined hearing and sight defects, but how mutations in USH1 genes lead to retinal dystrophy in patients remains elusive. The USH1 protein complex is associated with calyceal processes, which are microvilli of unknown function surrounding the base of the photoreceptor outer segment. We show that in Xenopus tropicalis, these processes are connected to the outer-segment membrane by links composed of protocadherin-15 (USH1F protein). Protocadherin-15 deficiency, obtained by a knockdown approach, leads to impaired photoreceptor function and abnormally shaped photoreceptor outer segments. Rod basal outer disks displayed excessive outgrowth, and cone outer segments were curved, with lamellae of heterogeneous sizes, defects also observed upon knockdown of Cdh23, encoding cadherin-23 (USH1D protein). The calyceal processes were virtually absent in cones and displayed markedly reduced F-actin content in rods, suggesting that protocadherin-15–containing links are essential for their development and/or maintenance. We propose that calyceal processes, together with their associated links, control the sizing of rod disks and cone lamellae throughout their daily renewal. PMID:28495838

  15. Usher syndrome type 1-associated cadherins shape the photoreceptor outer segment.

    PubMed

    Schietroma, Cataldo; Parain, Karine; Estivalet, Amrit; Aghaie, Asadollah; Boutet de Monvel, Jacques; Picaud, Serge; Sahel, José-Alain; Perron, Muriel; El-Amraoui, Aziz; Petit, Christine

    2017-06-05

    Usher syndrome type 1 (USH1) causes combined hearing and sight defects, but how mutations in USH1 genes lead to retinal dystrophy in patients remains elusive. The USH1 protein complex is associated with calyceal processes, which are microvilli of unknown function surrounding the base of the photoreceptor outer segment. We show that in Xenopus tropicalis, these processes are connected to the outer-segment membrane by links composed of protocadherin-15 (USH1F protein). Protocadherin-15 deficiency, obtained by a knockdown approach, leads to impaired photoreceptor function and abnormally shaped photoreceptor outer segments. Rod basal outer disks displayed excessive outgrowth, and cone outer segments were curved, with lamellae of heterogeneous sizes, defects also observed upon knockdown of Cdh23, encoding cadherin-23 (USH1D protein). The calyceal processes were virtually absent in cones and displayed markedly reduced F-actin content in rods, suggesting that protocadherin-15-containing links are essential for their development and/or maintenance. We propose that calyceal processes, together with their associated links, control the sizing of rod disks and cone lamellae throughout their daily renewal. © 2017 Schietroma et al.

  16. Ultra-Stable Segmented Telescope Sensing and Control Architecture

    NASA Technical Reports Server (NTRS)

    Feinberg, Lee; Bolcar, Matthew; Knight, Scott; Redding, David

    2017-01-01

    The LUVOIR team is conducting two full architecture studies. Architecture A, a 15-meter telescope that folds up in an 8.4 m SLS Block 2 shroud, is nearly complete; Architecture B, a 9.2-meter telescope that uses an existing fairing size, will begin study this fall. This talk will summarize the ultra-stable architecture of the 15 m segmented telescope, including the basic requirements, the basic rationale for the architecture, the technologies employed, and the expected performance. This work builds on several dynamics and thermal studies performed for ATLAST segmented telescope configurations. The most important new element was an approach to actively control segments to correct segment-to-segment motions, which will be discussed later.

  17. Iterative framework for the joint segmentation and CT synthesis of MR images: application to MRI-only radiotherapy treatment planning

    NASA Astrophysics Data System (ADS)

    Burgos, Ninon; Guerreiro, Filipa; McClelland, Jamie; Presles, Benoît; Modat, Marc; Nill, Simeon; Dearnaley, David; deSouza, Nandita; Oelfke, Uwe; Knopf, Antje-Christin; Ourselin, Sébastien; Cardoso, M. Jorge

    2017-06-01

    To tackle the problem of magnetic resonance imaging (MRI)-only radiotherapy treatment planning (RTP), we propose a multi-atlas information propagation scheme that jointly segments organs and generates pseudo x-ray computed tomography (CT) data from structural MR images (T1-weighted and T2-weighted). As the performance of the method strongly depends on the quality of the atlas database composed of multiple sets of aligned MR, CT and segmented images, we also propose a robust way of registering atlas MR and CT images, which combines structure-guided registration, and CT and MR image synthesis. We first evaluated the proposed framework in terms of segmentation and CT synthesis accuracy on 15 subjects with prostate cancer. The segmentations obtained with the proposed method were compared using the Dice similarity coefficient (DSC) to the manual segmentations. Mean DSCs of 0.73, 0.90, 0.77 and 0.90 were obtained for the prostate, bladder, rectum and femur heads, respectively. The mean absolute error (MAE) and the mean error (ME) were computed between the reference CTs (non-rigidly aligned to the MRs) and the pseudo CTs generated with the proposed method. The MAE was on average 45.7 ± 4.6 HU and the ME -1.6 ± 7.7 HU. We then performed a dosimetric evaluation by re-calculating plans on the pseudo CTs and comparing them to the plans optimised on the reference CTs. We compared the cumulative dose volume histograms (DVH) obtained for the pseudo CTs to the DVH obtained for the reference CTs in the planning target volume (PTV) located in the prostate, and in the organs at risk at different DVH points. We obtained average differences of -0.14% in the PTV for D_98%, and between -0.14% and 0.05% in the PTV, bladder, rectum and femur heads for D_mean and D_2%. Overall, we demonstrate that the proposed framework is able to automatically generate accurate pseudo CT images and segmentations in the pelvic region, potentially bypassing the need for a CT scan for accurate RTP.
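
    The MAE and ME figures quoted above are straightforward voxel-wise statistics between the aligned reference CT and the pseudo CT. A minimal sketch follows, assuming synthetic arrays and a trivial body mask; it is illustrative only.

    ```python
    # Mean absolute error (MAE) and mean error (ME), in Hounsfield units,
    # between a pseudo CT and an aligned reference CT, inside a body mask.
    import numpy as np

    def pseudo_ct_errors(reference_ct, pseudo_ct, mask):
        diff = (pseudo_ct - reference_ct)[mask]
        return {"MAE_HU": float(np.abs(diff).mean()),
                "ME_HU": float(diff.mean())}

    rng = np.random.default_rng(1)
    ref = rng.normal(0, 300, size=(64, 64, 64))        # fake CT in HU
    pseudo = ref + rng.normal(-2, 45, size=ref.shape)  # fake synthesis error
    body = np.ones(ref.shape, dtype=bool)              # trivial body mask
    print(pseudo_ct_errors(ref, pseudo, body))         # ME ~ -2 HU, MAE ~ 36 HU
    ```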

  18. Pulmonary lobar volumetry using novel volumetric computer-aided diagnosis and computed tomography

    PubMed Central

    Iwano, Shingo; Kitano, Mariko; Matsuo, Keiji; Kawakami, Kenichi; Koike, Wataru; Kishimoto, Mariko; Inoue, Tsutomu; Li, Yuanzhong; Naganawa, Shinji

    2013-01-01

    OBJECTIVES To compare the accuracy of pulmonary lobar volumetry using the conventional number of segments method and novel volumetric computer-aided diagnosis using 3D computed tomography images. METHODS We acquired 50 consecutive preoperative 3D computed tomography examinations for lung tumours reconstructed at 1-mm slice thicknesses. We calculated the lobar volume and the emphysematous lobar volume (< −950 HU) of each lobe using (i) the slice-by-slice method (reference standard), (ii) the number of segments method, and (iii) semi-automatic and (iv) automatic computer-aided diagnosis. We determined Pearson correlation coefficients between the reference standard and the three other methods for lobar volumes and emphysematous lobar volumes. We also compared the relative errors among the three measurement methods. RESULTS Both semi-automatic and automatic computer-aided diagnosis results were more strongly correlated with the reference standard than the number of segments method. The correlation coefficients for automatic computer-aided diagnosis were slightly lower than those for semi-automatic computer-aided diagnosis because there was one outlier among 50 cases (2%) in the right upper lobe and two outliers among 50 cases (4%) in the other lobes. The number of segments method relative error was significantly greater than those for semi-automatic and automatic computer-aided diagnosis (P < 0.001). The computational time for automatic computer-aided diagnosis was one-half to two-thirds that of semi-automatic computer-aided diagnosis. CONCLUSIONS A novel lobar volumetry computer-aided diagnosis system could more precisely measure lobar volumes than the conventional number of segments method. Because semi-automatic computer-aided diagnosis and automatic computer-aided diagnosis were complementary, in clinical use, it would be more practical to first measure volumes by automatic computer-aided diagnosis, and then use semi-automatic measurements if automatic computer-aided diagnosis failed. PMID:23526418
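
    The volumetry itself reduces to voxel counting once a lobe mask is available. The sketch below illustrates this for lobar volume and for emphysematous volume below -950 HU; the synthetic data, label values, and 1 mm voxel spacing are assumptions, not details from the study.

    ```python
    # Lobar volume = voxel count x voxel volume; emphysematous lobar volume
    # counts only voxels below the -950 HU threshold within the lobe.
    import numpy as np

    def lobar_volumes(ct_hu, lobe_labels, voxel_volume_ml, emphysema_thr=-950):
        results = {}
        for lobe in np.unique(lobe_labels):
            if lobe == 0:                      # 0 = background
                continue
            in_lobe = lobe_labels == lobe
            results[int(lobe)] = {
                "volume_ml": float(in_lobe.sum() * voxel_volume_ml),
                "emphysema_ml": float(((ct_hu < emphysema_thr) & in_lobe).sum()
                                      * voxel_volume_ml),
            }
        return results

    rng = np.random.default_rng(0)
    ct = rng.normal(-820, 90, size=(60, 60, 60))       # fake lung-density CT
    labels = np.zeros(ct.shape, dtype=int)
    labels[:, :30, :], labels[:, 30:, :] = 1, 2        # two fake "lobes"
    voxel_ml = (1.0 * 1.0 * 1.0) / 1000.0              # 1 mm isotropic voxels
    print(lobar_volumes(ct, labels, voxel_ml))
    ```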

  19. Electrocardiographic evaluation of reperfusion therapy in patients with acute myocardial infarction.

    PubMed

    Clemmensen, P

    1996-02-01

    The present thesis is based on 6 previously published clinical studies in patients with AMI. Thrombolytic therapy for patients with AMI improves early infarct coronary artery patency, limits AMI size, and improves left ventricular function and survival, as demonstrated in large placebo-controlled clinical trials. With the advent of interventions aimed at limiting AMI size, it became important to assess the amount of ischemic myocardium in the early phase of AMI, and to develop noninvasive methods for evaluation of these therapies. The aims of the present studies were to develop such methods. The studies included 267 patients with AMI admitted up to 12 hours after onset of symptoms. All included patients had acute ECG ST-segment changes indicating subepicardial ischemia, and patients with bundle branch block were excluded. Serial ECGs were analyzed with quantitative ST-segment measurements in the acute phase and compared to the Selvester QRS score-estimated final AMI size. These ECG indices were compared to and validated through comparisons with other independent noninvasive and invasive methods, used for the purpose of evaluating patients with AMI treated with thrombolytic therapy. It was found that in patients with a first AMI not treated with reperfusion therapies, the QRS score-estimated final AMI size can be predicted from the acute ST-segment elevation. Based on the number of ECG leads with ST-segment elevation and its summated magnitude, formulas were developed to provide an "ST score" for estimating the amount of myocardium in jeopardy during the early phase of AMI. The ST-segment deviation present in the ECG in patients with documented occlusion of the infarct-related coronary artery was subsequently shown to correlate with the degree of regional and global left ventricular dysfunction. Because serial changes in ST-segment elevation during the acute phase of AMI were believed to reflect changes in myocardial ischemia and thus possibly infarct artery patency status, the summated ST-segment elevation present on the admission ECG was compared to that present after administration of intravenous thrombolytic therapy, and immediately prior to angiographic visualization of the infarct-related coronary artery. The entire spectrum of sensitivities and specificities, derived from different cut-off values for the degree of ST-segment normalization, was described for the first time. It was found that a 20% decrease in ST-segment elevation could predict coronary artery patency with a high level of accuracy: positive predictive value = 88% and negative predictive value = 80%. (ABSTRACT TRUNCATED)
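
    The 20% ST-resolution criterion reported above can be expressed as a small function; the lead values in the example are invented, and the sketch is illustrative only, not a clinical tool.

    ```python
    # Summate ST-segment elevation over the involved leads on the admission
    # ECG and on the ECG taken before angiography, and call the infarct artery
    # "patent" when the summated elevation has fallen by at least 20%.
    def st_resolution(st_admission_mm, st_pre_angio_mm):
        """Relative decrease (0..1) in summated ST-segment elevation."""
        total_before = sum(st_admission_mm)
        total_after = sum(st_pre_angio_mm)
        return (total_before - total_after) / total_before

    def predicted_patent(st_admission_mm, st_pre_angio_mm, threshold=0.20):
        return st_resolution(st_admission_mm, st_pre_angio_mm) >= threshold

    admission = [3.0, 2.5, 2.0, 1.5]    # ST elevation (mm) in involved leads
    pre_angio = [2.0, 1.5, 1.0, 1.0]
    print(f"ST resolution: {st_resolution(admission, pre_angio):.0%}")
    print("predicted patent:", predicted_patent(admission, pre_angio))
    ```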

  20. Precise Alignment and Permanent Mounting of Thin and Lightweight X-ray Segments

    NASA Technical Reports Server (NTRS)

    Biskach, Michael P.; Chan, Kai-Wing; Hong, Melinda N.; Mazzarella, James R.; McClelland, Ryan S.; Norman, Michael J.; Saha, Timo T.; Zhang, William W.

    2012-01-01

    To provide observations to support current research efforts in high energy astrophysics, future X-ray telescope designs must provide matching or better angular resolution while significantly increasing the total collecting area. In such a design the permanent mounting of thin and lightweight segments is critical to the overall performance of the complete X-ray optic assembly. The thin and lightweight segments used in the assembly of the modules are designed to maintain and/or exceed the resolution of existing X-ray telescopes while providing a substantial increase in collecting area. Such thin and delicate X-ray segments are easily distorted and yet must be aligned to the arcsecond level and retain accurate alignment for many years. The Next Generation X-ray Optic (NGXO) group at NASA Goddard Space Flight Center has designed, assembled, and implemented new hardware and procedures with the short-term goal of aligning three pairs of X-ray segments in a technology demonstration module while maintaining 10 arcsec alignment through environmental testing, as part of the eventual design and construction of a full-sized module capable of housing hundreds of X-ray segments. The recent attempts at multiple segment pair alignment and permanent mounting are described along with an overview of the procedure used. A look into what the next year will bring for the alignment and permanent segment mounting effort illustrates some of the challenges left to overcome before an attempt to populate a full-sized module can begin.

  1. A method for direct measurement of the first-order mass moments of human body segments.

    PubMed

    Fujii, Yusaku; Shimada, Kazuhito; Maru, Koichi; Ozawa, Junichi; Lu, Rong-Sheng

    2010-01-01

    We propose a simple and direct method for measuring the first-order mass moment of a human body segment. With the proposed method, the first-order mass moment of the body segment can be directly measured by using only one precision scale and one digital camera. In the dummy mass experiment, the relative standard uncertainty of a single set of measurements of the first-order mass moment is estimated to be 1.7%. The measured value will be useful as a reference for evaluating the uncertainty of the body segment inertial parameters (BSPs) estimated using an indirect method.

  2. Asteroid Redirect Mission Proximity Operations for Reference Target Asteroid 2008 EV5

    NASA Technical Reports Server (NTRS)

    Reeves, David M.; Mazanek, Daniel D.; Cichy, Benjamin D.; Broschart, Steve B.; Deweese, Keith D.

    2016-01-01

    NASA's Asteroid Redirect Mission (ARM) is composed of two segments, the Asteroid Redirect Robotic Mission (ARRM) and the Asteroid Redirect Crewed Mission (ARCM). In March of 2015, NASA selected the Robotic Boulder Capture Option as the baseline for the ARRM. This option will capture a multi-ton boulder (typically 2-4 meters in size) from the surface of a large (greater than approx. 100 m diameter) Near-Earth Asteroid (NEA) and return it to cis-lunar space for subsequent human exploration during the ARCM. Further human and robotic missions to the asteroidal material would also be facilitated by its return to cis-lunar space. In addition, prior to departing the asteroid, the Asteroid Redirect Vehicle (ARV) will perform a demonstration of the Enhanced Gravity Tractor (EGT) planetary defense technique. This paper will discuss the proximity operations, which have been broken into three phases: Approach and Characterization, Boulder Capture, and Planetary Defense Demonstration. Each of these phases has been analyzed for the ARRM reference target, 2008 EV5, and a detailed baseline operations concept has been developed.

  3. Evaluation of a High-Resolution Benchtop Micro-CT Scanner for Application in Porous Media Research

    NASA Astrophysics Data System (ADS)

    Tuller, M.; Vaz, C. M.; Lasso, P. O.; Kulkarni, R.; Ferre, T. A.

    2010-12-01

    Recent advances in Micro Computed Tomography (MCT) provided the motivation to thoroughly evaluate and optimize the scanning, image reconstruction/segmentation, and pore-space analysis capabilities of a new-generation benchtop MCT scanner and associated software package. To demonstrate applicability to soil research, the project was focused on determination of porosities and pore size distributions of two Brazilian Oxisols from segmented MCT data. Effects of metal filters and various acquisition parameters (e.g. total rotation, rotation step, and radiograph frame averaging) on image quality and acquisition time are evaluated. Impacts of sample size and scanning resolution on CT-derived porosities and pore-size distributions are illustrated.
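
    As a rough illustration of how porosity and a pore-size distribution are derived from a segmented MCT volume, the sketch below counts pore voxels and labels connected pore components. The random test volume, the face connectivity, and the 10 µm voxel size are assumptions, not details from the study.

    ```python
    # Porosity = pore-voxel fraction; pore "sizes" = volumes of connected
    # pore components in the binary (segmented) volume.
    import numpy as np
    from scipy import ndimage

    def porosity_and_pore_sizes(pore_mask, voxel_volume):
        porosity = pore_mask.mean()
        labeled, n_pores = ndimage.label(pore_mask)   # face connectivity (default)
        sizes = ndimage.sum(pore_mask, labeled, index=range(1, n_pores + 1))
        return porosity, np.asarray(sizes) * voxel_volume

    rng = np.random.default_rng(3)
    pores = rng.random((80, 80, 80)) < 0.15           # fake segmented pore space
    voxel_vol_um3 = 10.0 ** 3                         # 10 µm isotropic voxels
    phi, pore_volumes = porosity_and_pore_sizes(pores, voxel_vol_um3)
    print(f"porosity = {phi:.3f}, pores = {pore_volumes.size}, "
          f"median pore volume = {np.median(pore_volumes):.0f} µm^3")
    ```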

  4. Automated measurement of uptake in cerebellum, liver, and aortic arch in full-body FDG PET/CT scans.

    PubMed

    Bauer, Christian; Sun, Shanhui; Sun, Wenqing; Otis, Justin; Wallace, Audrey; Smith, Brian J; Sunderland, John J; Graham, Michael M; Sonka, Milan; Buatti, John M; Beichel, Reinhard R

    2012-06-01

    The purpose of this work was to develop and validate fully automated methods for uptake measurement of cerebellum, liver, and aortic arch in full-body PET/CT scans. Such measurements are of interest in the context of uptake normalization for quantitative assessment of metabolic activity and/or automated image quality control. Cerebellum, liver, and aortic arch regions were segmented with different automated approaches. Cerebella were segmented in PET volumes by means of a robust active shape model (ASM) based method. For liver segmentation, a largest possible hyperellipsoid was fitted to the liver in PET scans. The aortic arch was first segmented in CT images of a PET/CT scan by a tubular structure analysis approach, and the segmented result was then mapped to the corresponding PET scan. For each of the segmented structures, the average standardized uptake value (SUV) was calculated. To generate an independent reference standard for method validation, expert image analysts were asked to segment several cross sections of each of the three structures in 134 F-18 fluorodeoxyglucose (FDG) PET/CT scans. For each case, the true average SUV was estimated by utilizing statistical models and served as the independent reference standard. For automated aorta and liver SUV measurements, no statistically significant scale or shift differences were observed between automated results and the independent standard. In the case of the cerebellum, the scale and shift were not significantly different, if measured in the same cross sections that were utilized for generating the reference. In contrast, automated results were scaled 5% lower on average although not shifted, if FDG uptake was calculated from the whole segmented cerebellum volume. The estimated reduction in total SUV measurement error ranged between 54.7% and 99.2%, and the reduction was found to be statistically significant for cerebellum and aortic arch. With the proposed methods, the authors have demonstrated that automated SUV uptake measurements in cerebellum, liver, and aortic arch agree with expert-defined independent standards. The proposed methods were found to be accurate and showed less intra- and interobserver variability, compared to manual analysis. The approach provides an alternative to manual uptake quantification, which is time-consuming. Such an approach will be important for application of quantitative PET imaging to large scale clinical trials. © 2012 American Association of Physicists in Medicine.
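
    The abstract reports average SUVs per segmented structure but does not spell out the normalization; the sketch below uses the conventional body-weight SUV definition as an assumption, with invented PET values and a box-shaped liver mask.

    ```python
    # Mean standardized uptake value (SUVbw) over a segmented region:
    # activity concentration divided by (injected dose / body mass).
    import numpy as np

    def mean_suv(activity_bq_per_ml, region_mask, injected_dose_bq, body_weight_g):
        """Mean body-weight SUV over the voxels selected by region_mask."""
        suv = activity_bq_per_ml / (injected_dose_bq / body_weight_g)  # g/ml
        return float(suv[region_mask].mean())

    rng = np.random.default_rng(7)
    pet = rng.normal(8000.0, 500.0, size=(50, 50, 50))   # Bq/ml, fake PET volume
    liver_mask = np.zeros(pet.shape, dtype=bool)
    liver_mask[10:30, 10:30, 10:30] = True               # fake liver segmentation
    print(f"mean liver SUV = {mean_suv(pet, liver_mask, 3.7e8, 75_000):.2f}")
    ```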

  5. Magnetic resonance angiography in the follow-up of distal lower-extremity bypass surgery: comparison with duplex ultrasound and digital subtraction angiography.

    PubMed

    Meissner, Oliver A; Verrel, Frauke; Tató, Federico; Siebert, Uwe; Ramirez, Heldin; Ruppert, Volker; Schoenberg, Stefan O; Reiser, Maximilian

    2004-11-01

    The danger of limb loss as a consequence of acute occlusion of infrapopliteal bypasses underscores the requirement for careful patient follow-up. The objective of this study was to determine the agreement and accuracy of contrast material-enhanced moving-table magnetic resonance (MR) angiography and duplex ultrasonography (US) in the assessment of failing bypass grafts. In cases of discrepancy, digital subtraction angiography (DSA) served as the reference standard. MR angiography was performed in 24 consecutive patients with 26 femorotibial or femoropedal bypass grafts. Each revascularized limb was divided into five segments--(i) native arteries proximal to the graft; (ii) proximal anastomosis; (iii) graft course; (iv) distal anastomosis; and (v) native arteries distal to the graft-resulting in 130 vascular segments. Three readers evaluated all MR angiograms for image quality and the presence of failing grafts. The degree of stenosis was compared to the findings of duplex US, and in case of discrepancy, to DSA findings. Two separate analyses were performed with use of DSA only and a combined diagnostic endpoint as the reference standard. Image quality was rated excellent or intermediate in 119 of 130 vascular segments (92%). Venous overlay was encountered in 26 of 130 segments (20%). In only two segments was evaluation of the outflow region not feasible. One hundred seventeen of 130 vascular segments were available for quantitative analysis. In 109 of 117 segments (93%), MR angiography and duplex US showed concordant findings. In the eight discordant segments in seven patients, duplex US overlooked four high-grade stenoses that were correctly identified by MR angiography and confirmed by DSA. Percutaneous transluminal angioplasty was performed in these cases. In no case did MR angiography miss an area of stenosis of sufficient severity to require treatment. Total accuracy for duplex US ranged from 0.90 to 0.97 depending on the reference standard used, whereas MR angiography was completely accurate (1.00) regardless of the standard definition. Our data strongly suggest that the accuracy of MR angiography for identifying failing grafts in the infrapopliteal circulation is equal to that of duplex US and superior to that of duplex US in cases of complex revascularization. MR angiography should be included in routine follow-up of patients undergoing infrapopliteal bypass surgery.

  6. An automated method for accurate vessel segmentation.

    PubMed

    Yang, Xin; Liu, Chaoyue; Le Minh, Hung; Wang, Zhiwei; Chien, Aichi; Cheng, Kwang-Ting Tim

    2017-05-07

    Vessel segmentation is a critical task for various medical applications, such as diagnosis assistance of diabetic retinopathy, quantification of cerebral aneurysm's growth, and guiding surgery in neurosurgical procedures. Despite technology advances in image segmentation, existing methods still suffer from low accuracy for vessel segmentation in two challenging yet common scenarios in clinical usage: (1) regions with a low signal-to-noise ratio (SNR), and (2) vessel boundaries disturbed by adjacent non-vessel pixels. In this paper, we present an automated system which can achieve highly accurate vessel segmentation for both 2D and 3D images even under these challenging scenarios. Three key contributions achieved by our system are: (1) a progressive contrast enhancement method to adaptively enhance the contrast of challenging pixels that were otherwise indistinguishable, (2) a boundary refinement method to effectively improve segmentation accuracy at vessel borders based on Canny edge detection, and (3) a content-aware region-of-interest (ROI) adjustment method to automatically determine the locations and sizes of ROIs which contain ambiguous pixels and demand further verification. Extensive evaluation of our method is conducted on both 2D and 3D datasets. On a public 2D retinal dataset (named DRIVE (Staal 2004 IEEE Trans. Med. Imaging 23 501-9)) and our 2D clinical cerebral dataset, our approach achieves superior performance to the state-of-the-art methods including a vesselness based method (Frangi 1998 Int. Conf. on Medical Image Computing and Computer-Assisted Intervention) and an optimally oriented flux (OOF) based method (Law and Chung 2008 European Conf. on Computer Vision). An evaluation on 11 clinical 3D CTA cerebral datasets shows that our method can achieve 94% average accuracy with respect to the manual segmentation reference, which is 23% to 33% better than the five baseline methods (Yushkevich 2006 Neuroimage 31 1116-28; Law and Chung 2008 European Conf. on Computer Vision; Law and Chung 2009 IEEE Trans. Image Process. 18 596-612; Wang 2015 J. Neurosci. Methods 241 30-6) with manually optimized parameters. Our system has also been applied clinically for cerebral aneurysm development analysis. Experimental results on 10 patients' data, with two 3D CT scans per patient, show that our system's automatic diagnosis outcomes are consistent with clinicians' manual measurements.
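
    The Canny-based boundary refinement is only named above, not specified. The sketch below shows one simple reading of the idea (growing a rough mask onto Canny edges within a narrow band around its border) using scikit-image; it should not be taken as the authors' algorithm, and the synthetic image, band width, and sigma are assumptions.

    ```python
    import numpy as np
    from skimage import feature, morphology, draw

    def refine_with_canny(image, rough_mask, band_radius=2, sigma=1.5):
        """Add Canny-edge pixels lying in a thin band around the mask border."""
        edges = feature.canny(image, sigma=sigma)
        band = morphology.binary_dilation(rough_mask, morphology.disk(band_radius))
        band &= ~morphology.binary_erosion(rough_mask, morphology.disk(band_radius))
        return rough_mask | (edges & band)

    # Synthetic "vessel": a bright disk, plus a deliberately shrunken rough mask.
    img = np.zeros((128, 128))
    rr, cc = draw.disk((64, 64), 20)
    img[rr, cc] = 1.0
    img += np.random.default_rng(0).normal(0, 0.05, img.shape)
    rough = np.zeros(img.shape, dtype=bool)
    rr, cc = draw.disk((64, 64), 17)             # under-segmented initial mask
    rough[rr, cc] = True

    refined = refine_with_canny(img, rough)
    print("rough px:", rough.sum(), "refined px:", refined.sum())
    ```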

  7. An automated method for accurate vessel segmentation

    NASA Astrophysics Data System (ADS)

    Yang, Xin; Liu, Chaoyue; Le Minh, Hung; Wang, Zhiwei; Chien, Aichi; Cheng, Kwang-Ting (Tim)

    2017-05-01

    Vessel segmentation is a critical task for various medical applications, such as diagnosis assistance of diabetic retinopathy, quantification of cerebral aneurysm’s growth, and guiding surgery in neurosurgical procedures. Despite technology advances in image segmentation, existing methods still suffer from low accuracy for vessel segmentation in the two challenging while common scenarios in clinical usage: (1) regions with a low signal-to-noise-ratio (SNR), and (2) at vessel boundaries disturbed by adjacent non-vessel pixels. In this paper, we present an automated system which can achieve highly accurate vessel segmentation for both 2D and 3D images even under these challenging scenarios. Three key contributions achieved by our system are: (1) a progressive contrast enhancement method to adaptively enhance contrast of challenging pixels that were otherwise indistinguishable, (2) a boundary refinement method to effectively improve segmentation accuracy at vessel borders based on Canny edge detection, and (3) a content-aware region-of-interests (ROI) adjustment method to automatically determine the locations and sizes of ROIs which contain ambiguous pixels and demand further verification. Extensive evaluation of our method is conducted on both 2D and 3D datasets. On a public 2D retinal dataset (named DRIVE (Staal 2004 IEEE Trans. Med. Imaging 23 501-9)) and our 2D clinical cerebral dataset, our approach achieves superior performance to the state-of-the-art methods including a vesselness based method (Frangi 1998 Int. Conf. on Medical Image Computing and Computer-Assisted Intervention) and an optimally oriented flux (OOF) based method (Law and Chung 2008 European Conf. on Computer Vision). An evaluation on 11 clinical 3D CTA cerebral datasets shows that our method can achieve 94% average accuracy with respect to the manual segmentation reference, which is 23% to 33% better than the five baseline methods (Yushkevich 2006 Neuroimage 31 1116-28; Law and Chung 2008 European Conf. on Computer Vision; Law and Chung 2009 IEEE Trans. Image Process. 18 596-612; Wang 2015 J. Neurosci. Methods 241 30-6) with manually optimized parameters. Our system has also been applied clinically for cerebral aneurysm development analysis. Experimental results on 10 patients’ data, with two 3D CT scans per patient, show that our system’s automatic diagnosis outcomes are consistent with clinicians’ manual measurements.

  8. Geologic field-trip guide to Mount Shasta Volcano, northern California

    USGS Publications Warehouse

    Christiansen, Robert L.; Calvert, Andrew T.; Grove, Timothy L.

    2017-08-18

    The southern part of the Cascades Arc formed in two distinct, extended periods of activity: “High Cascades” volcanoes erupted during about the past 6 million years and were built on a wider platform of Tertiary volcanoes and shallow plutons as old as about 30 Ma, generally called the “Western Cascades.” For the most part, the Shasta segment (for example, Hildreth, 2007; segment 4 of Guffanti and Weaver, 1988) of the arc forms a distinct, fairly narrow axis of short-lived small- to moderate-sized High Cascades volcanoes that erupted lavas, mainly of basaltic-andesite or low-silica-andesite compositions. Western Cascades rocks crop out only sparsely in the Shasta segment; almost all of the following descriptions are of High Cascades features except for a few unusual localities where older, Western Cascades rocks are exposed to view along the route of the field trip. The High Cascades arc axis in this segment of the arc is mainly a relatively narrow band of either monogenetic or short-lived shield volcanoes. The belt generally averages about 15 km wide and traverses the length of the Shasta segment, roughly 100 km between about the Klamath River drainage on the north, near the Oregon-California border, and the McCloud River drainage on the south (fig. 1). Superposed across this axis are two major long-lived stratovolcanoes and the large rear-arc Medicine Lake volcano. One of the stratovolcanoes, the Rainbow Mountain volcano of about 1.5–0.8 Ma, straddles the arc near the midpoint of the Shasta segment. The other, Mount Shasta itself, which ranges from about 700 ka to 0 ka, lies distinctly west of the High Cascades axis. It is notable that Mount Shasta and Medicine Lake volcanoes, although volcanologically and petrologically quite different, span about the same range of ages and bracket the High Cascades axis on the west and east, respectively. The field trip begins near the southern end of the Shasta segment, where the Lassen Volcanic Center field trip leaves off, in a field of high-alumina olivine tholeiite lavas (HAOTs, referred to elsewhere in this guide as low-potassium olivine tholeiites, LKOTs). It proceeds around the southern, western, and northern flanks of Mount Shasta and onto a part of the arc axis. The stops feature elements of the Mount Shasta area in an approximately chronological order, from oldest to youngest.

  9. Edges in CNC polishing: from mirror-segments towards semiconductors, paper 1: edges on processing the global surface.

    PubMed

    Walker, David; Yu, Guoyu; Li, Hongyu; Messelink, Wilhelmus; Evans, Rob; Beaucamp, Anthony

    2012-08-27

    Segment-edges for extremely large telescopes are critical for observations requiring high contrast and SNR, e.g. detecting exo-planets. In parallel, industrial requirements for edge-control are emerging in several applications. This paper reports on a new approach, where edges are controlled throughout polishing of the entire surface of a part, which has been pre-machined to its final external dimensions. The method deploys compliant bonnets delivering influence functions of variable diameter, complemented by small pitch tools sized to accommodate aspheric mis-fit. We describe results on witness hexagons in preparation for full size prototype segments for the European Extremely Large Telescope, and comment on wider applications of the technology.

  10. Comparison of computer- and human-derived coronary angiographic end-point measures for controlled therapy trials

    NASA Technical Reports Server (NTRS)

    Mack, W. J.; Selzer, R. H.; Pogoda, J. M.; Lee, P. L.; Shircore, A. M.; Azen, S. P.; Blankenhorn, D. H.

    1992-01-01

    The Cholesterol Lowering Atherosclerosis Study, a randomized angiographic clinical trial, demonstrated the beneficial effect of niacin/colestipol plus diet therapy on coronary atherosclerosis. Outcome was determined by panel-based estimates (viewed in both still and cine modes) of percent stenosis severity and change in native artery and bypass graft lesions. Computer-based quantitative coronary angiography (QCA) was also used to measure lesion and bypass graft stenosis severity and change in individual frames closely matched in orientation, opacification, and cardiac phase. Both methods jointly evaluated 350 nonoccluded lesions. The correlation between QCA and panel estimates of lesion size was 0.70 (p < 0.0001) and for change in lesion size was 0.28 (p = 0.002). Agreement between the two methods in classifying lesion changes (i.e., regression, unchanged, or progression) occurred for 60% (210 of 350) of the lesions (kappa ± SEM = 0.20 ± 0.05, p < 0.001). The panel identified 442 nonoccluded lesions for which QCA stenosis measurements could not be obtained. Lesions not measurable by QCA included those with stenosis greater than 85% that could not be reliably edge tracked, segments with diffuse or ectatic disease that had no reliable reference diameter, and segments for which matched frames could not be located. Seventy-nine lesions, the majority between 21% and 40% stenosis, were identified and measured by QCA but were not identified by the panel. This comparison study demonstrates the need to consider available angiographic measurement methods in relation to the goals of their use.

  11. Early Use of N-acetylcysteine With Nitrate Therapy in Patients Undergoing Primary Percutaneous Coronary Intervention for ST-Segment-Elevation Myocardial Infarction Reduces Myocardial Infarct Size (the NACIAM Trial [N-acetylcysteine in Acute Myocardial Infarction]).

    PubMed

    Pasupathy, Sivabaskari; Tavella, Rosanna; Grover, Suchi; Raman, Betty; Procter, Nathan E K; Du, Yang Timothy; Mahadavan, Gnanadevan; Stafford, Irene; Heresztyn, Tamila; Holmes, Andrew; Zeitz, Christopher; Arstall, Margaret; Selvanayagam, Joseph; Horowitz, John D; Beltrame, John F

    2017-09-05

    Contemporary ST-segment-elevation myocardial infarction management involves primary percutaneous coronary intervention, with ongoing studies focusing on infarct size reduction using ancillary therapies. N-acetylcysteine (NAC) is an antioxidant with reactive oxygen species scavenging properties that also potentiates the effects of nitroglycerin and thus represents a potentially beneficial ancillary therapy in primary percutaneous coronary intervention. The NACIAM trial (N-acetylcysteine in Acute Myocardial Infarction) examined the effects of NAC on infarct size in patients with ST-segment-elevation myocardial infarction undergoing percutaneous coronary intervention. This randomized, double-blind, placebo-controlled, multicenter study evaluated the effects of intravenous high-dose NAC (29 g over 2 days) with background low-dose nitroglycerin (7.2 mg over 2 days) on early cardiac magnetic resonance imaging-assessed infarct size. Secondary end points included cardiac magnetic resonance-determined myocardial salvage and creatine kinase kinetics. Of 112 randomized patients with ST-segment-elevation myocardial infarction, 75 (37 in NAC group, 38 in placebo group) underwent early cardiac magnetic resonance imaging. Median duration of ischemia pretreatment was 2.4 hours. With background nitroglycerin infusion administered to all patients, those randomized to NAC exhibited an absolute 5.5% reduction in cardiac magnetic resonance-assessed infarct size relative to placebo (median, 11.0% [interquartile range, 4.1-16.3] versus 16.5% [interquartile range, 10.7-24.2]; P=0.02). Myocardial salvage was approximately doubled in the NAC group (60%; interquartile range, 37-79) compared with placebo (27%; interquartile range, 14-42; P<0.01) and median creatine kinase areas under the curve were 22 000 and 38 000 IU·h in the NAC and placebo groups, respectively (P=0.08). High-dose intravenous NAC administered with low-dose intravenous nitroglycerin is associated with reduced infarct size in patients with ST-segment-elevation myocardial infarction undergoing percutaneous coronary intervention. A larger study is required to assess the impact of this therapy on clinical cardiac outcomes. Australian New Zealand Clinical Trials Registry. URL: http://www.anzctr.org.au/. Unique identifier: 12610000280000. © 2017 American Heart Association, Inc.

  12. Improved Estimation of Cardiac Function Parameters Using a Combination of Independent Automated Segmentation Results in Cardiovascular Magnetic Resonance Imaging.

    PubMed

    Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe; Frouin, Frederique; Garreau, Mireille

    2015-01-01

    This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert.
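
    STAPLE estimates a consensus segmentation and per-method performance levels by expectation-maximization. As a much simpler, self-contained stand-in that still shows the benefit of combining several automated results, the sketch below fuses toy binary masks by majority voting; it is explicitly not STAPLE, and the masks are invented for illustration.

    ```python
    import numpy as np

    def majority_vote(masks):
        """Fuse binary masks: a pixel is foreground if more than half of the
        input segmentations mark it as foreground."""
        stack = np.stack([m.astype(bool) for m in masks])
        return stack.sum(axis=0) > (len(masks) / 2.0)

    def dice(a, b):
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    rng = np.random.default_rng(5)
    truth = np.zeros((64, 64), dtype=bool)
    truth[20:44, 20:44] = True                      # toy left-ventricle mask
    # three imperfect "automated methods": the truth corrupted by random flips
    methods = [np.logical_xor(truth, rng.random(truth.shape) < 0.05)
               for _ in range(3)]
    fused = majority_vote(methods)

    print("per-method Dice:", [round(dice(m, truth), 3) for m in methods])
    print("fused Dice:     ", round(dice(fused, truth), 3))
    ```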

  13. Improved Estimation of Cardiac Function Parameters Using a Combination of Independent Automated Segmentation Results in Cardiovascular Magnetic Resonance Imaging

    PubMed Central

    Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe

    2015-01-01

    This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert. PMID:26287691

  14. Remedial Sheets for Progress Checks, Segments 19-40.

    ERIC Educational Resources Information Center

    New York Inst. of Tech., Old Westbury.

    The second part of the Self-Paced Physics Course remediation materials is presented for U. S. Naval Academy students who miss core problems on the progress check. The total of 101 problems is incorporated in this volume to match study segments 19 through 40. Each remedial sheet is composed of a statement of the missed problem and references to…

  15. Development and Evaluation of a Semi-automated Segmentation Tool and a Modified Ellipsoid Formula for Volumetric Analysis of the Kidney in Non-contrast T2-Weighted MR Images.

    PubMed

    Seuss, Hannes; Janka, Rolf; Prümmer, Marcus; Cavallaro, Alexander; Hammon, Rebecca; Theis, Ragnar; Sandmair, Martin; Amann, Kerstin; Bäuerle, Tobias; Uder, Michael; Hammon, Matthias

    2017-04-01

    Volumetric analysis of the kidney parenchyma provides additional information for the detection and monitoring of various renal diseases. Therefore the purposes of the study were to develop and evaluate a semi-automated segmentation tool and a modified ellipsoid formula for volumetric analysis of the kidney in non-contrast T2-weighted magnetic resonance (MR)-images. Three readers performed semi-automated segmentation of the total kidney volume (TKV) in axial, non-contrast-enhanced T2-weighted MR-images of 24 healthy volunteers (48 kidneys) twice. A semi-automated threshold-based segmentation tool was developed to segment the kidney parenchyma. Furthermore, the three readers measured renal dimensions (length, width, depth) and applied different formulas to calculate the TKV. Manual segmentation served as a reference volume. Volumes of the different methods were compared and time required was recorded. There was no significant difference between the semi-automatically and manually segmented TKV (p = 0.31). The difference in mean volumes was 0.3 ml (95% confidence interval (CI), -10.1 to 10.7 ml). Semi-automated segmentation was significantly faster than manual segmentation, with a mean difference = 188 s (220 vs. 408 s); p < 0.05. Volumes did not differ significantly comparing the results of different readers. Calculation of TKV with a modified ellipsoid formula (ellipsoid volume × 0.85) did not differ significantly from the reference volume; however, the mean error was three times higher (difference of mean volumes -0.1 ml; CI -31.1 to 30.9 ml; p = 0.95). Applying the modified ellipsoid formula was the fastest way to get an estimation of the renal volume (41 s). Semi-automated segmentation and volumetric analysis of the kidney in native T2-weighted MR data delivers accurate and reproducible results and was significantly faster than manual segmentation. Applying a modified ellipsoid formula quickly provides an accurate kidney volume.
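
    The modified ellipsoid estimate quoted above is a one-line computation once the three kidney diameters are measured. The sketch assumes the standard pi/6 x length x width x depth ellipsoid volume scaled by the 0.85 correction factor; the example dimensions are invented.

    ```python
    import math

    def kidney_volume_modified_ellipsoid(length_cm, width_cm, depth_cm,
                                         correction=0.85):
        # standard ellipsoid volume (cm^3 = ml), then the 0.85 correction
        ellipsoid_ml = math.pi / 6.0 * length_cm * width_cm * depth_cm
        return correction * ellipsoid_ml

    print(f"{kidney_volume_modified_ellipsoid(11.2, 5.4, 4.6):.0f} ml")
    ```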

  16. Microvascular dysfunction in the immediate aftermath of chronic total coronary occlusion recanalization.

    PubMed

    Ladwiniec, Andrew; Cunnington, Michael S; Rossington, Jennifer; Thackray, Simon; Alamgir, Farquad; Hoye, Angela

    2016-05-01

    The aim of this study was to compare microvascular resistance under both baseline and hyperemic conditions immediately after percutaneous coronary intervention (PCI) of a chronic total occlusion (CTO) with an unobstructed reference vessel in the same patient. Microvascular dysfunction has been reported to be prevalent immediately after CTO PCI. However, previous studies have not made comparison with a reference vessel. Patients with a CTO may have global microvascular and/or endothelial dysfunction, making comparison with established normal values misleading. After successful CTO PCI in 21 consecutive patients, coronary pressure and flow velocity were measured at baseline and hyperemia in distal segments of the CTO/target vessel and an unobstructed reference vessel. Hemodynamics including hyperemic microvascular resistance (HMR), basal microvascular resistance (BMR), and instantaneous minimal microvascular resistance at baseline and hyperemia were calculated and compared between reference and target/CTO vessels. After CTO PCI, BMR was reduced in the target/CTO vessel compared with the reference vessel: 3.58 mm Hg/cm/s vs 4.94 mm Hg/cm/s, difference -1.36 mm Hg/cm/s (-2.33 to -0.39, p = 0.008). We did not detect a difference in HMR: 1.82 mm Hg/cm/s vs 2.01 mm Hg/cm/s, difference -0.20 (-0.78 to 0.39, p = 0.49). Instantaneous minimal microvascular resistance correlated strongly with the length of the stented segment at baseline (r = 0.63, p = 0.005) and hyperemia (r = 0.68, p = 0.002). BMR is reduced in a recanalized CTO in the immediate aftermath of PCI compared to an unobstructed reference vessel; however, HMR appears to be preserved. A longer stented segment is associated with increased microvascular resistance. © 2016 Wiley Periodicals, Inc.

  17. Defining And Employing Reference Conditions For Ecological Restoration Of The Lower Missouri River, USA

    NASA Astrophysics Data System (ADS)

    Jacobson, R. B.; Elliott, C. M.; Reuter, J. M.

    2008-12-01

    Ecological reference conditions are especially challenging for large, intensively managed rivers like the Lower Missouri. Historical information provides broad understanding of how the river has changed, but translating historical information into quantitative reference conditions remains a challenge. Historical information is less available for biological and chemical conditions than for physical conditions. For physical conditions, much of the early historical condition is documented in date-specific measurements or maps, and it is difficult to determine how representative these conditions are for a river system that was characterized historically by large floods and high channel migration rates. As an alternative to a historically defined least-disturbed condition, spatial variation within the Missouri River basin provides potential for defining a best-attainable reference condition. A possibility for the best-attainable condition for channel morphology is an unchannelized segment downstream of the lowermost dam (rkm 1298-1203). This segment retains multiple channels and abundant sandbars although it has a highly altered flow regime and a greatly diminished sediment supply. Conversely, downstream river segments have more natural flow regimes, but have been narrowed and simplified for navigation and bank stability. We use two computational tools to compensate for the lack of ideal reference conditions. The first is a hydrologic model that synthesizes natural and altered flow regimes based on 100 years of daily inputs to the river (daily routing model, DRM, US Army Corps of Engineers, 1998); the second tool is hydrodynamic modeling of habitat availability. The flow-regime and hydrodynamic outputs are integrated to define habitat-duration curves as the basis for reference conditions (least-disturbed flow regime and least-disturbed channel morphology). Lacking robust biological response models, we use mean residence time of water and a habitat diversity index as generic ecosystem indicators.

  18. Cg/Stability Map for the Reference H Cycle 3 Supersonic Transport Concept Along the High Speed Research Baseline Mission Profile

    NASA Technical Reports Server (NTRS)

    Giesy, Daniel P.; Christhilf, David M.

    1999-01-01

    A comparison is made between the results of trimming a High Speed Civil Transport (HSCT) concept along a reference mission profile using two trim modes. One mode uses the stabilator. The other mode uses fore and aft placement of the center of gravity. A comparison is made of the throttle settings (cruise segments) or the total acceleration (ascent and descent segments) and of the drag coefficient. The comparative stability of trimming using the two modes is also assessed by comparing the stability margins and the placement of the lateral and longitudinal eigenvalues.

  19. Improving CCTA-based lesions' hemodynamic significance assessment by accounting for partial volume modeling in automatic coronary lumen segmentation.

    PubMed

    Freiman, Moti; Nickisch, Hannes; Prevrhal, Sven; Schmitt, Holger; Vembar, Mani; Maurovich-Horvat, Pál; Donnelly, Patrick; Goshen, Liran

    2017-03-01

    The goal of this study was to assess the potential added benefit of accounting for partial volume effects (PVE) in an automatic coronary lumen segmentation algorithm that is used to determine the hemodynamic significance of a coronary artery stenosis from coronary computed tomography angiography (CCTA). Two sets of data were used in our work: (a) multivendor CCTA datasets of 18 subjects from the MICCAI 2012 challenge with automatically generated centerlines and 3 reference segmentations of 78 coronary segments and (b) additional CCTA datasets of 97 subjects with 132 coronary lesions that had invasive reference standard FFR measurements. We extracted the coronary artery centerlines for the 97 datasets by an automated software program followed by manual correction if required. An automatic machine-learning-based algorithm segmented the coronary tree with and without accounting for the PVE. We obtained CCTA-based FFR measurements using a flow simulation in the coronary trees that were generated by the automatic algorithm with and without accounting for PVE. We assessed the potential added value of PVE integration as a part of the automatic coronary lumen segmentation algorithm by means of segmentation accuracy using the MICCAI 2012 challenge framework and by means of flow simulation overall accuracy, sensitivity, specificity, negative and positive predictive values, and the receiver operated characteristic (ROC) area under the curve. We also evaluated the potential benefit of accounting for PVE in automatic segmentation for flow simulation for lesions that were diagnosed as obstructive based on CCTA which could have indicated a need for an invasive exam and revascularization. Our segmentation algorithm improves the maximal surface distance error by ~39% compared to previously published method on the 18 datasets from the MICCAI 2012 challenge with comparable Dice and mean surface distance. Results with and without accounting for PVE were comparable. In contrast, integrating PVE analysis into an automatic coronary lumen segmentation algorithm improved the flow simulation specificity from 0.6 to 0.68 with the same sensitivity of 0.83. Also, accounting for PVE improved the area under the ROC curve for detecting hemodynamically significant CAD from 0.76 to 0.8 compared to automatic segmentation without PVE analysis with invasive FFR threshold of 0.8 as the reference standard. Accounting for PVE in flow simulation to support the detection of hemodynamic significant disease in CCTA-based obstructive lesions improved specificity from 0.51 to 0.73 with same sensitivity of 0.83 and the area under the curve from 0.69 to 0.79. The improvement in the AUC was statistically significant (N = 76, Delong's test, P = 0.012). Accounting for the partial volume effects in automatic coronary lumen segmentation algorithms has the potential to improve the accuracy of CCTA-based hemodynamic assessment of coronary artery lesions. © 2017 American Association of Physicists in Medicine.
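
    The diagnostic-performance metrics used above (sensitivity, specificity, and ROC AUC against an invasive FFR threshold of 0.8) can be reproduced on simulated numbers as follows. Only the 0.8 cut-off comes from the text; everything else in the sketch is invented for illustration.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)
    invasive_ffr = rng.uniform(0.6, 1.0, size=132)                 # reference
    ct_ffr = np.clip(invasive_ffr + rng.normal(0, 0.05, 132), 0, 1)  # simulated

    significant = invasive_ffr <= 0.8      # hemodynamically significant lesions
    predicted = ct_ffr <= 0.8              # CT-based call at the same cut-off

    tp = np.sum(predicted & significant)
    tn = np.sum(~predicted & ~significant)
    fp = np.sum(predicted & ~significant)
    fn = np.sum(~predicted & significant)

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # AUC treats lower CT-FFR as stronger evidence of disease, hence 1 - ct_ffr
    auc = roc_auc_score(significant, 1.0 - ct_ffr)
    print(f"sens={sensitivity:.2f} spec={specificity:.2f} AUC={auc:.2f}")
    ```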

  20. Automated compromised right lung segmentation method using a robust atlas-based active volume model with sparse shape composition prior in CT.

    PubMed

    Zhou, Jinghao; Yan, Zhennan; Lasio, Giovanni; Huang, Junzhou; Zhang, Baoshe; Sharma, Navesh; Prado, Karl; D'Souza, Warren

    2015-12-01

    To resolve challenges in image segmentation in oncologic patients with severely compromised lung, we propose an automated right lung segmentation framework that uses a robust, atlas-based active volume model with a sparse shape composition prior. The robust atlas is achieved by combining the atlas with the output of sparse shape composition. Thoracic computed tomography images (n=38) from patients with lung tumors were collected. The right lung in each scan was manually segmented to build a reference training dataset against which the performance of the automated segmentation method was assessed. The quantitative results of this proposed segmentation method with sparse shape composition achieved mean Dice similarity coefficient (DSC) of (0.72, 0.81) with 95% CI, mean accuracy (ACC) of (0.97, 0.98) with 95% CI, and mean relative error (RE) of (0.46, 0.74) with 95% CI. Both qualitative and quantitative comparisons suggest that this proposed method can achieve better segmentation accuracy with less variance than other atlas-based segmentation methods in the compromised lung segmentation. Published by Elsevier Ltd.

  1. A semiautomatic segmentation method for prostate in CT images using local texture classification and statistical shape modeling.

    PubMed

    Shahedi, Maysam; Halicek, Martin; Guo, Rongrong; Zhang, Guoyi; Schuster, David M; Fei, Baowei

    2018-06-01

    Prostate segmentation in computed tomography (CT) images is useful for treatment planning and procedure guidance such as external beam radiotherapy and brachytherapy. However, because of the low, soft tissue contrast of CT images, manual segmentation of the prostate is a time-consuming task with high interobserver variation. In this study, we proposed a semiautomated, three-dimensional (3D) segmentation method for prostate CT images using shape and texture analysis and we evaluated the method against manual reference segmentations. The prostate gland usually has a globular shape with a smoothly curved surface, and its shape could be accurately modeled or reconstructed from a limited number of well-distributed surface points. In a training dataset, using the prostate gland centroid point as the origin of a coordinate system, we defined an intersubject correspondence between the prostate surface points based on the spherical coordinates. We applied this correspondence to generate a point distribution model for prostate shape using principal component analysis and to study the local texture difference between prostate and nonprostate tissue close to the different prostate surface subregions. We used the learned shape and texture characteristics of the prostate in CT images and then combined them with user inputs to segment a new image. We trained our segmentation algorithm using 23 CT images and tested the algorithm on two sets of 10 nonbrachytherapy and 37 post-low-dose-rate brachytherapy CT images. We used a set of error metrics to evaluate the segmentation results using two experts' manual reference segmentations. For both nonbrachytherapy and post-brachytherapy image sets, the average measured Dice similarity coefficient (DSC) was 88% and the average mean absolute distance (MAD) was 1.9 mm. The average measured differences between the two experts on both datasets were 92% (DSC) and 1.1 mm (MAD). The proposed, semiautomatic segmentation algorithm showed a fast, robust, and accurate performance for 3D prostate segmentation of CT images, specifically when no previous, intrapatient information, that is, previously segmented images, was available. The accuracy of the algorithm is comparable to the best performance results reported in the literature and approaches the interexpert variability observed in manual segmentation. © 2018 American Association of Physicists in Medicine.
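
    The two evaluation measures, DSC and MAD, are sketched below on toy masks. The symmetric boundary-distance definition of MAD and the 1 mm spacing used here are common choices and are assumptions that may differ in detail from the paper.

    ```python
    import numpy as np
    from scipy import ndimage

    def dice(a, b):
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def boundary(mask):
        return mask & ~ndimage.binary_erosion(mask)

    def mean_absolute_surface_distance(a, b, spacing_mm=1.0):
        ba, bb = boundary(a.astype(bool)), boundary(b.astype(bool))
        # distance (mm) from every voxel to the nearest boundary voxel
        dist_to_a = ndimage.distance_transform_edt(~ba) * spacing_mm
        dist_to_b = ndimage.distance_transform_edt(~bb) * spacing_mm
        return 0.5 * (dist_to_b[ba].mean() + dist_to_a[bb].mean())

    auto = np.zeros((64, 64, 64), dtype=bool)
    ref = np.zeros_like(auto)
    auto[20:44, 20:44, 20:44] = True
    ref[22:46, 21:45, 20:44] = True            # shifted toy "reference" prostate
    print(f"DSC = {dice(auto, ref):.2%}, MAD = "
          f"{mean_absolute_surface_distance(auto, ref):.1f} mm")
    ```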

  2. Semi-automatic segmentation of myocardium at risk in T2-weighted cardiovascular magnetic resonance.

    PubMed

    Sjögren, Jane; Ubachs, Joey F A; Engblom, Henrik; Carlsson, Marcus; Arheden, Håkan; Heiberg, Einar

    2012-01-31

    T2-weighted cardiovascular magnetic resonance (CMR) has been shown to be a promising technique for determination of ischemic myocardium, referred to as myocardium at risk (MaR), after an acute coronary event. Quantification of MaR in T2-weighted CMR has been proposed to be performed by manual delineation or the threshold methods of two standard deviations from remote (2SD), full width half maximum intensity (FWHM) or Otsu. However, manual delineation is subjective and threshold methods have inherent limitations related to threshold definition and lack of a priori information about cardiac anatomy and physiology. Therefore, the aim of this study was to develop an automatic segmentation algorithm for quantification of MaR using anatomical a priori information. Forty-seven patients with first-time acute ST-elevation myocardial infarction underwent T2-weighted CMR within 1 week after admission. Endocardial and epicardial borders of the left ventricle, as well as the hyper enhanced MaR regions were manually delineated by experienced observers and used as reference method. A new automatic segmentation algorithm, called Segment MaR, defines the MaR region as the continuous region most probable of being MaR, by estimating the intensities of normal myocardium and MaR with an expectation maximization algorithm and restricting the MaR region by an a priori model of the maximal extent for the user defined culprit artery. The segmentation by Segment MaR was compared against inter observer variability of manual delineation and the threshold methods of 2SD, FWHM and Otsu. MaR was 32.9 ± 10.9% of left ventricular mass (LVM) when assessed by the reference observer and 31.0 ± 8.8% of LVM assessed by Segment MaR. The bias and correlation was, -1.9 ± 6.4% of LVM, R = 0.81 (p < 0.001) for Segment MaR, -2.3 ± 4.9%, R = 0.91 (p < 0.001) for inter observer variability of manual delineation, -7.7 ± 11.4%, R = 0.38 (p = 0.008) for 2SD, -21.0 ± 9.9%, R = 0.41 (p = 0.004) for FWHM, and 5.3 ± 9.6%, R = 0.47 (p < 0.001) for Otsu. There is a good agreement between automatic Segment MaR and manually assessed MaR in T2-weighted CMR. Thus, the proposed algorithm seems to be a promising, objective method for standardized MaR quantification in T2-weighted CMR.
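
    Two of the threshold baselines compared above, 2SD-from-remote and Otsu, are easy to sketch on synthetic myocardial intensities. The EM-based Segment MaR algorithm itself is not reproduced here, and all intensity values below are invented for illustration.

    ```python
    # "2SD": mean of a remote, non-ischemic region plus two standard deviations;
    # "Otsu": Otsu's automatic threshold over all myocardial intensities.
    import numpy as np
    from skimage.filters import threshold_otsu

    rng = np.random.default_rng(11)
    remote = rng.normal(100, 10, size=4000)        # normal myocardium intensities
    mar = rng.normal(160, 15, size=1000)           # hyperintense MaR intensities
    myocardium = np.concatenate([remote, mar])

    thr_2sd = remote.mean() + 2 * remote.std()     # 2SD-from-remote threshold
    thr_otsu = threshold_otsu(myocardium)

    for name, thr in [("2SD", thr_2sd), ("Otsu", thr_otsu)]:
        mar_fraction = (myocardium > thr).mean()
        print(f"{name:>4}: threshold={thr:.1f}, MaR = {mar_fraction:.1%} of mask")
    ```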

  3. An unsupervised approach for measuring myocardial perfusion in MR image sequences

    NASA Astrophysics Data System (ADS)

    Discher, Antoine; Rougon, Nicolas; Preteux, Francoise

    2005-08-01

    Quantitatively assessing myocardial perfusion is a key issue for the diagnosis, therapeutic planning and patient follow-up of cardiovascular diseases. To this end, perfusion MRI (p-MRI) has emerged as a valuable clinical investigation tool thanks to its ability to dynamically image the first pass of a contrast bolus in the framework of stress/rest exams. However, reliable techniques for automatically computing regional first-pass curves from 2D short-axis cardiac p-MRI sequences remain to be elaborated. We address this problem and develop an unsupervised four-step approach comprising: (i) a coarse spatio-temporal segmentation step, which automatically detects a region of interest for the heart over the whole sequence and selects a reference frame with maximal myocardium contrast; (ii) a model-based variational segmentation step of the reference frame, yielding a bi-ventricular partition of the heart into left ventricle, right ventricle and myocardium components; (iii) a respiratory/cardiac motion artifact compensation step using a novel region-driven, intensity-based nonrigid registration technique, which elastically propagates the reference bi-ventricular segmentation over the whole sequence; (iv) a measurement step, delivering first-pass curves over each region of a segmental model of the myocardium. The performance of this approach is assessed over a database of 15 normal and pathological subjects, and compared with perfusion measurements delivered by an MRI manufacturer's software package based on manual delineations by a medical expert.

  4. Varying behavior of different window sizes on the classification of static and dynamic physical activities from a single accelerometer.

    PubMed

    Fida, Benish; Bernabucci, Ivan; Bibbo, Daniele; Conforto, Silvia; Schmid, Maurizio

    2015-07-01

    The accuracy of systems that recognize daily living activities in real time depends heavily on the signal segmentation step. So far, windowing approaches are used to segment the data, and the window size is usually chosen based on previous studies. However, the literature offers little investigation of its effect on the obtained activity recognition accuracy when both short- and long-duration activities are considered. In this work, we present the impact of window size on the recognition of daily living activities, where transitions between different activities are also taken into account. The study was conducted on nine participants who wore a tri-axial accelerometer on their waist and performed both short-duration (sitting, standing, and transitions between activities) and long-duration (walking, stair descending and stair ascending) activities. Five different classifiers were tested, and among the different window sizes, a 1.5 s window was found to represent the best trade-off in recognition across activities, with an obtained accuracy well above 90%. Differences in recognition accuracy for each activity highlight the utility of developing adaptive segmentation criteria based on the duration of the activities. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
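    As an illustration of the windowing step discussed above, the Python/NumPy sketch below splits a tri-axial accelerometer stream into fixed-size windows (1.5 s, the best trade-off reported) and computes simple per-window features; the 50% overlap and the feature choice are assumptions, not details taken from the paper.

```python
import numpy as np

def sliding_windows(signal, fs, win_s=1.5, overlap=0.5):
    """Split a tri-axial accelerometer signal (n_samples x 3) into fixed-size
    windows of win_s seconds, with the given fractional overlap."""
    win = int(win_s * fs)
    step = max(1, int(win * (1 - overlap)))
    return np.stack([signal[i:i + win]
                     for i in range(0, len(signal) - win + 1, step)])

def window_features(windows):
    """Simple per-window, per-axis features (means and standard deviations)
    of the kind commonly fed to the classifiers compared in such studies."""
    return np.concatenate([windows.mean(axis=1), windows.std(axis=1)], axis=1)

# example: 60 s of synthetic data sampled at 50 Hz
acc = np.random.randn(60 * 50, 3)
feats = window_features(sliding_windows(acc, fs=50))
print(feats.shape)   # (n_windows, 6)
```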

  5. Buckling Design and Analysis of a Payload Fairing One-Sixth Cylindrical Arc-Segment Panel

    NASA Technical Reports Server (NTRS)

    Kosareo, Daniel N.; Oliver, Stanley T.; Bednarcyk, Brett A.

    2013-01-01

    Design and analysis results are reported for a panel that is a one-sixth arc-segment of a full 33-ft diameter cylindrical barrel section of a payload fairing structure. Six such panels could be used to construct the fairing barrel, and, as such, compression buckling testing of a one-sixth arc-segment panel would serve as a validation test of the buckling analyses used to design the fairing panels. In this report, linear and nonlinear buckling analyses have been performed using finite element software for one-sixth arc-segment panels composed of aluminum honeycomb core with graphite/epoxy composite facesheets and an alternative fiber reinforced foam (FRF) composite sandwich design. The cross sections of both concepts were sized to represent realistic Space Launch System (SLS) payload fairing panels. Based on shell-based linear buckling analyses, smaller, more manageable buckling test panel dimensions were determined such that the panel would still be expected to buckle with a circumferential (as opposed to column-like) mode with significant separation between the first and second buckling modes. More detailed nonlinear buckling analyses were then conducted for honeycomb panels of various sizes using both Abaqus and ANSYS finite element codes, and for the smaller size panel, a solid-based finite element analysis was conducted. Finally, for the smaller size FRF panel, a nonlinear buckling analysis was performed wherein geometric imperfections measured from an actual manufactured FRF panel were included. It was found that the measured imperfection did not significantly affect the panel's predicted buckling response.

  6. SEQassembly: A Practical Tools Program for Coding Sequences Splicing

    NASA Astrophysics Data System (ADS)

    Lee, Hongbin; Yang, Hang; Fu, Lei; Qin, Long; Li, Huili; He, Feng; Wang, Bo; Wu, Xiaoming

    A CDS (coding sequence) is the portion of an mRNA sequence that is composed of a number of exon sequence segments. Construction of the CDS sequence is important for in-depth genetic analyses such as genotyping. A program in the MATLAB environment is presented that can process batches of sample sequences into coding segments under the guidance of reference exon models, and splice the coding segments from the same sample source into a CDS according to the exon order in a queue file. This program is useful in transcriptional polymorphism detection and gene function studies.
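    The splicing step amounts to ordered concatenation of coding segments per sample. A minimal Python illustration is given below; the data structures (a dictionary of segments and a list giving the exon order from a hypothetical queue file) are assumptions, and the real tool is a MATLAB program.

```python
# Hypothetical data structures: `segments` maps (sample, exon_id) to a coding
# segment string, and `exon_order` lists exon ids in the order given by the
# queue file. This is only an illustration of the splicing idea.
def splice_cds(segments, exon_order, sample):
    """Concatenate the coding segments of one sample in exon order."""
    return "".join(segments[(sample, exon_id)] for exon_id in exon_order)

# example
segments = {("s1", "exon1"): "ATGGCC", ("s1", "exon2"): "GATTAA"}
print(splice_cds(segments, ["exon1", "exon2"], "s1"))  # ATGGCCGATTAA
```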

  7. Application-Controlled Demand Paging for Out-of-Core Visualization

    NASA Technical Reports Server (NTRS)

    Cox, Michael; Ellsworth, David; Kutler, Paul (Technical Monitor)

    1997-01-01

    In the area of scientific visualization, input data sets are often very large. In visualization of Computational Fluid Dynamics (CFD) in particular, input data sets today can surpass 100 Gbytes, and are expected to scale with the ability of supercomputers to generate them. Some visualization tools already partition large data sets into segments, and load appropriate segments as they are needed. However, this does not remove the problem for two reasons: 1) there are data sets for which even the individual segments are too large for the largest graphics workstations, 2) many practitioners do not have access to workstations with the memory capacity required to load even a segment, especially since the state-of-the-art visualization tools tend to be developed by researchers with much more powerful machines. When the size of the data that must be accessed is larger than the size of memory, some form of virtual memory is simply required. This may be by segmentation, paging, or by paged segments. In this paper we demonstrate that complete reliance on operating system virtual memory for out-of-core visualization leads to poor performance. We then describe a paged segment system that we have implemented, and explore the principles of memory management that can be employed by the application for out-of-core visualization. We show that application control over some of these can significantly improve performance. We show that sparse traversal can be exploited by loading only those data actually required. We show also that application control over data loading can be exploited by 1) loading data from alternative storage format (in particular 3-dimensional data stored in sub-cubes), 2) controlling the page size. Both of these techniques effectively reduce the total memory required by visualization at run-time. We also describe experiments we have done on remote out-of-core visualization (when pages are read by demand from remote disk) whose results are promising.
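    A minimal sketch of the kind of application-controlled paging described above: the 3D field is assumed to be stored on disk as a flat binary file so that fixed-size sub-cubes can be read on demand, with a small least-recently-used cache standing in for the paged-segment system. The file layout, block size, cache policy, and boundary handling are simplifying assumptions, not details of the authors' system.

```python
import numpy as np
from collections import OrderedDict

class SubCubeCache:
    """Load fixed-size sub-cubes of a large 3D field on demand and keep only
    a bounded number resident (simple least-recently-used eviction).
    Partial blocks at the volume boundary are ignored for brevity."""
    def __init__(self, path, shape, block=32, max_blocks=64, dtype=np.float32):
        self.data = np.memmap(path, dtype=dtype, mode="r", shape=shape)
        self.block, self.max_blocks = block, max_blocks
        self.cache = OrderedDict()

    def _block_of(self, i, j, k):
        key = (i // self.block, j // self.block, k // self.block)
        if key not in self.cache:
            if len(self.cache) >= self.max_blocks:
                self.cache.popitem(last=False)        # evict least recently used
            bi, bj, bk = (c * self.block for c in key)
            self.cache[key] = np.array(
                self.data[bi:bi + self.block,
                          bj:bj + self.block,
                          bk:bk + self.block])
        self.cache.move_to_end(key)
        return self.cache[key]

    def sample(self, i, j, k):
        """Return one value, paging in only the sub-cube that contains it."""
        blk = self._block_of(i, j, k)
        return blk[i % self.block, j % self.block, k % self.block]
```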

  8. The head problem. The organizational significance of segmentation in head development.

    PubMed

    Horder, Tim J; Presley, Robert; Slípka, Jaroslav

    2010-01-01

    This review argues for the segmental basis of chordate head organization which, like somite-based segmental organization in the trunk, takes its origin from early mesodermal development. The review builds on, and brings up to date, Goodrich's well-known scheme of head organization. It surveys recent data in support of this scheme and shows how evidence and arguments supposedly in conflict with it can be accommodated. Many of the arguments revolve around matters of methodology; the limitations of older LM, SEM (on which the concept of "somitomeres" is based) and recent molecular evidence (which has sometimes been seen as allocating the central role in head organization to the CNS and the neural crest) are highlighted and shown to explain a number of claims contrary to Goodrich's. We provide (in Part 2) a new, comparative survey of the best available evidence most directly relevant to the Goodrich Bauplan, with a special emphasis on stem chordates. The postotic region has commonly been seen as segmentally organized: the critical issues concern the preotic region. There are many reasons why Goodrich's three preotic segments may become specialized during evolution and why the underlying initial segmental organization may be overridden in later stages during embryonic development; we refer to a number of these. We conclude that the preotic segmental Bauplan is remarkably conserved and most explicitly demonstrated among stem forms, but we also suggest that the concept of the prechordal plate requires careful reexamination. Central to our overall analysis is the importance of the epigenetic nature of embryogenesis; its implications are made clear. Finally we speculate on evolutionary implications for the origin of the head and its specialized features. The review is intended to serve as a resource giving access to references to a wealth of now neglected, older data on anamniote embryology.

  9. Airway Segmentation and Centerline Extraction from Thoracic CT – Comparison of a New Method to State of the Art Commercialized Methods

    PubMed Central

    Reynisson, Pall Jens; Scali, Marta; Smistad, Erik; Hofstad, Erlend Fagertun; Leira, Håkon Olav; Lindseth, Frank; Nagelhus Hernes, Toril Anita; Amundsen, Tore; Sorger, Hanne; Langø, Thomas

    2015-01-01

    Introduction Our motivation is increased bronchoscopic diagnostic yield and optimized preparation for navigated bronchoscopy. In navigated bronchoscopy, virtual 3D airway visualization is often used to guide a bronchoscopic tool to peripheral lesions, synchronized with the real-time video bronchoscopy. The visualization used during navigated bronchoscopy, as well as the segmentation time and methods, differ. Time consumption and logistics are two essential aspects that need to be optimized when integrating such technologies in the interventional room. We compared three different approaches to obtain airway centerlines and surfaces. Method CT lung datasets of 17 patients were processed in Mimics (Materialise, Leuven, Belgium), which provides a Basic module and a Pulmonology module (beta version) (MPM), in OsiriX (Pixmeo, Geneva, Switzerland), and with our Tube Segmentation Framework (TSF) method. Both MPM and TSF were evaluated against a reference segmentation. Automatic and manual settings allowed us to segment the airways and obtain 3D models as well as the centerlines in all datasets. We compared the different procedures by user interactions, such as the number of clicks needed to process the data, and by quantitative measures concerning the quality of the segmentation and centerlines, such as total length of the branches, number of branches, number of generations, and volume of the 3D model. Results The TSF method was the most automatic, while the Mimics Pulmonology Module (MPM) and the Mimics Basic Module (MBM) resulted in the highest number of branches. MPM is the software that demands the fewest clicks to process the data. We found that the freely available OsiriX was less accurate than the other methods regarding segmentation results. However, the TSF method provided results fastest regarding the number of clicks. The MPM was able to find the highest number of branches and generations. On the other hand, the TSF is fully automatic and provides the user with both the segmentation of the airways and the centerlines. Reference segmentation comparison averages and standard deviations for MPM and TSF correspond to the literature. Conclusion The TSF is able to segment the airways and extract the centerlines in one single step. The number of branches found is lower for the TSF method than in Mimics. OsiriX demands the highest number of clicks to process the data, the segmentation is often sparse, and extracting the centerline requires the use of another software system. Two of the software systems, the TSF method and the MPM, performed satisfactorily with respect to preprocessing CT images for navigated bronchoscopy. According to the reference segmentation, both TSF and MPM are comparable with other segmentation methods. The level of automaticity and the resulting high number of branches, plus the fact that both the centerline and the surface of the airways were extracted, are requirements we considered particularly important. The in-house method has the advantage of being an integrated part of a navigation platform for bronchoscopy, whilst the other methods can be considered preprocessing tools for a navigation system. PMID:26657513

  10. Highly Segmented Thermal Barrier Coatings Deposited by Suspension Plasma Spray: Effects of Spray Process on Microstructure

    NASA Astrophysics Data System (ADS)

    Chen, Xiaolong; Honda, Hiroshi; Kuroda, Seiji; Araki, Hiroshi; Murakami, Hideyuki; Watanabe, Makoto; Sakka, Yoshio

    2016-12-01

    Effects of the ceramic powder size used for suspension as well as several processing parameters in suspension plasma spraying of YSZ were investigated experimentally, aiming to fabricate highly segmented microstructures for thermal barrier coating (TBC) applications. Particle image velocimetry (PIV) was used to observe the atomization process and the velocity distribution of atomized droplets and ceramic particles travelling toward the substrates. The tested parameters included the secondary plasma gas (He versus H2), suspension injection flow rate, and substrate surface roughness. Results indicated that a plasma jet with a relatively higher content of He or H2 as the secondary plasma gas was critical to produce highly segmented YSZ TBCs with a crack density up to 12 cracks/mm. The optimized suspension flow rate played an important role to realize coatings with a reduced porosity level and improved adhesion. An increased powder size and higher operation power level were beneficial for the formation of highly segmented coatings onto substrates with a wider range of surface roughness.

  11. Local orientational mobility in regular hyperbranched polymers.

    PubMed

    Dolgushev, Maxim; Markelov, Denis A; Fürstenberg, Florian; Guérin, Thomas

    2016-07-01

    We study the dynamics of local bond orientation in regular hyperbranched polymers modeled by Vicsek fractals. The local dynamics is investigated through the temporal autocorrelation functions of single bonds and the corresponding relaxation forms of the complex dielectric susceptibility. We show that the dynamic behavior of single segments depends on their remoteness from the periphery rather than on the size of the whole macromolecule. Remarkably, the dynamics of the core segments (which are most remote from the periphery) shows a scaling behavior that differs from the dynamics obtained after structural average. We analyze the most relevant processes of single segment motion and provide an analytic approximation for the corresponding relaxation times. Furthermore, we describe an iterative method to calculate the orientational dynamics in the case of very large macromolecular sizes.

  12. Simulating the Structural Response of a Preloaded Bolted Joint

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Phillips, Dawn R.; Raju, Ivatury S.

    2008-01-01

    The present paper describes the structural analyses performed on a preloaded bolted-joint configuration. The modeled joint comprised two L-shaped structures connected together using a single bolt. Each L-shaped structure involved a vertical flat segment (or shell wall) welded to a horizontal segment (or flange). Parametric studies were performed using elasto-plastic, large-deformation nonlinear finite element analyses to determine the influence of several factors on the bolted-joint response. The factors considered included bolt preload, washer-bearing-surface size, edge boundary conditions, joint segment length, and loading history. Joint response is reported in terms of displacements, gap opening, and surface strains. Most of the factors studied were determined to have minimal effect on the bolted-joint response; however, the washer-bearing-surface size affected the response significantly.

  13. What Controls Subduction Earthquake Size and Occurrence?

    NASA Astrophysics Data System (ADS)

    Ruff, L. J.

    2008-12-01

    There is a long history of observational studies on the size and recurrence intervals of the large underthrusting earthquakes in subduction zones. In parallel with this documentation of the variability in both recurrence times and earthquake sizes -- both within and amongst subduction zones -- there have been numerous suggestions for what controls size and occurrence. In addition to the intrinsic scientific interest in these issues, there are direct applications to hazards mitigation. In this overview presentation, I review past progress, consider current paradigms, and look toward future studies that offer some resolution of long- standing questions. Given the definition of seismic moment, earthquake size is the product of overall static stress drop, down-dip fault width, and along-strike fault length. The long-standing consensus viewpoint is that for the largest earthquakes in a subduction zone: stress-drop is constant, fault width is the down-dip extent of the seismogenic portion of the plate boundary, but that along-strike fault length can vary from one large earthquake to the next. While there may be semi-permanent segments along a subduction zone, successive large earthquakes can rupture different combinations of segments. Many investigations emphasize the role of asperities within the segments, rather than segment edges. Thus, the question of earthquake size is translated into: "What controls the along-strike segmentation, and what determines which segments will rupture in a particular earthquake cycle?" There is no consensus response to these questions. Over the years, the suggestions for segmentation control include physical features in the subducted plate, physical features in the over-lying plate, and more obscure -- and possibly ever-changing -- properties of the plate interface such as the hydrologic conditions. It seems that the full global answer requires either some unforeseen breakthrough, or the long-term hard work of falsifying all candidate hypotheses except one. This falsification process requires both concentrated multidisciplinary efforts and patience. Large earthquake recurrence intervals in the same subduction zone segment display a significant, and therefore unfortunate, variability. Over the years, many of us have devised simple models to explain this variability. Of course, there are also more complicated explanations with many additional model parameters. While there has been important observational progress as both historical and paleo-seismological studies continue to add more data pairs of fault length and recurrence intervals, there has been a frustrating lack of progress in elimination of candidate models or processes that explain recurrence time variability. Some of the simple models for recurrence times offer a probabilistic or even deterministic prediction of future recurrence times - and have been used for hazards evaluation. It is important to know if these models are correct. Since we do not have the patience to wait for a strict statistical test, we must find other ways to test these ideas. For example, some of the simple deterministic models for along-strike segment interaction make predictions for variation in tectonic stress state that can be tested during the inter-seismic period. 
We have seen how some observational discoveries in the past decade (e.g., the episodic creep events down-dip of the seismogenic zone) give us additional insight into the physical processes in subduction zones; perhaps multi-disciplinary studies of subduction zones will discover a new way to reliably infer large-scale shear stresses on the plate interface?

  14. Automated Segmentation of Kidneys from MR Images in Patients with Autosomal Dominant Polycystic Kidney Disease

    PubMed Central

    Kim, Youngwoo; Ge, Yinghui; Tao, Cheng; Zhu, Jianbing; Chapman, Arlene B.; Torres, Vicente E.; Yu, Alan S.L.; Mrug, Michal; Bennett, William M.; Flessner, Michael F.; Landsittel, Doug P.

    2016-01-01

    Background and objectives Our study developed a fully automated method for segmentation and volumetric measurements of kidneys from magnetic resonance images in patients with autosomal dominant polycystic kidney disease and assessed the performance of the automated method with the reference manual segmentation method. Design, setting, participants, & measurements Study patients were selected from the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease. At the enrollment of the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease Study in 2000, patients with autosomal dominant polycystic kidney disease were between 15 and 46 years of age with relatively preserved GFRs. Our fully automated segmentation method was based on a spatial prior probability map of the location of kidneys in abdominal magnetic resonance images and regional mapping with total variation regularization and propagated shape constraints that were formulated into a level set framework. T2–weighted magnetic resonance image sets of 120 kidneys were selected from 60 patients with autosomal dominant polycystic kidney disease and divided into the training and test datasets. The performance of the automated method in reference to the manual method was assessed by means of two metrics: Dice similarity coefficient and intraclass correlation coefficient of segmented kidney volume. The training and test sets were swapped for cross-validation and reanalyzed. Results Successful segmentation of kidneys was performed with the automated method in all test patients. The segmented kidney volumes ranged from 177.2 to 2634 ml (mean, 885.4±569.7 ml). The mean Dice similarity coefficient ±SD between the automated and manual methods was 0.88±0.08. The mean correlation coefficient between the two segmentation methods for the segmented volume measurements was 0.97 (P<0.001 for each cross-validation set). The results from the cross-validation sets were highly comparable. Conclusions We have developed a fully automated method for segmentation of kidneys from abdominal magnetic resonance images in patients with autosomal dominant polycystic kidney disease with varying kidney volumes. The performance of the automated method was in good agreement with that of the manual method. PMID:26797708

  15. Automated Segmentation of Kidneys from MR Images in Patients with Autosomal Dominant Polycystic Kidney Disease.

    PubMed

    Kim, Youngwoo; Ge, Yinghui; Tao, Cheng; Zhu, Jianbing; Chapman, Arlene B; Torres, Vicente E; Yu, Alan S L; Mrug, Michal; Bennett, William M; Flessner, Michael F; Landsittel, Doug P; Bae, Kyongtae T

    2016-04-07

    Our study developed a fully automated method for segmentation and volumetric measurements of kidneys from magnetic resonance images in patients with autosomal dominant polycystic kidney disease and assessed the performance of the automated method with the reference manual segmentation method. Study patients were selected from the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease. At the enrollment of the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease Study in 2000, patients with autosomal dominant polycystic kidney disease were between 15 and 46 years of age with relatively preserved GFRs. Our fully automated segmentation method was based on a spatial prior probability map of the location of kidneys in abdominal magnetic resonance images and regional mapping with total variation regularization and propagated shape constraints that were formulated into a level set framework. T2-weighted magnetic resonance image sets of 120 kidneys were selected from 60 patients with autosomal dominant polycystic kidney disease and divided into the training and test datasets. The performance of the automated method in reference to the manual method was assessed by means of two metrics: Dice similarity coefficient and intraclass correlation coefficient of segmented kidney volume. The training and test sets were swapped for cross-validation and reanalyzed. Successful segmentation of kidneys was performed with the automated method in all test patients. The segmented kidney volumes ranged from 177.2 to 2634 ml (mean, 885.4±569.7 ml). The mean Dice similarity coefficient ±SD between the automated and manual methods was 0.88±0.08. The mean correlation coefficient between the two segmentation methods for the segmented volume measurements was 0.97 (P<0.001 for each cross-validation set). The results from the cross-validation sets were highly comparable. We have developed a fully automated method for segmentation of kidneys from abdominal magnetic resonance images in patients with autosomal dominant polycystic kidney disease with varying kidney volumes. The performance of the automated method was in good agreement with that of the manual method. Copyright © 2016 by the American Society of Nephrology.

  16. SU-E-I-96: A Study About the Influence of ROI Variation On Tumor Segmentation in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, L; Tan, S; Lu, W

    2014-06-01

    Purpose: To study the influence of different regions of interest (ROI) on tumor segmentation in PET. Methods: The experiments were conducted on a cylindrical phantom. Six spheres with different volumes (0.5 ml, 1 ml, 6 ml, 12 ml, 16 ml and 20 ml) were placed inside a cylindrical container to mimic tumors of different sizes. The spheres were filled with 11C solution as sources and the cylindrical container was filled with 18F-FDG solution as the background. The phantom was continuously scanned in a Biograph-40 True Point/True View PET/CT scanner, and 42 images were reconstructed with source-to-background ratio (SBR) ranging from 16:1 to 1.8:1. We took a large and a small ROI for each sphere, both of which contained the whole sphere and did not contain any other spheres. Six other ROIs of different sizes were then taken between the large and the small ROI. For each ROI, all images were segmented by eight thresholding methods and eight advanced methods. The segmentation results were evaluated by the Dice similarity index (DSI), classification error (CE) and volume error (VE). The robustness of the different methods to ROI variation was quantified using the interrun variation and a generalized Cohen's kappa. Results: With the change of ROI, the segmentation results of all tested methods changed to some degree. Compared with the advanced methods, thresholding methods were less affected by the ROI change. In addition, most of the thresholding methods gave more accurate segmentation results for all sphere sizes. Conclusion: The results showed that the segmentation performance of all tested methods was affected by the change of ROI. Thresholding methods were more robust to this change and can segment the PET image more accurately. This work was supported in part by National Natural Science Foundation of China (NNSFC), under Grant Nos. 60971112 and 61375018, and Fundamental Research Funds for the Central Universities, under Grant No. 2012QN086. Wei Lu was supported in part by the National Institutes of Health (NIH) Grant No. R01 CA172638.
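    The evaluation metrics named above can be written compactly. The Python/NumPy sketch below gives common forms of the Dice similarity index, classification error, and volume error for binary masks; the exact definitions of CE and VE used in the study are not stated in the abstract, so these are standard forms used only for illustration.

```python
import numpy as np

def dsi(seg, ref):
    """Dice similarity index between binary masks."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())

def classification_error(seg, ref):
    """Misclassified voxels (false positives plus false negatives) relative
    to the reference volume (one common definition)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    return (np.logical_and(seg, ~ref).sum() +
            np.logical_and(~seg, ref).sum()) / ref.sum()

def volume_error(seg, ref):
    """Relative difference between segmented and reference volumes."""
    return abs(int(seg.sum()) - int(ref.sum())) / ref.sum()
```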

  17. What is the best ST-segment recovery parameter to predict clinical outcome and myocardial infarct size? Amplitude, speed, and completeness of ST-segment recovery after primary percutaneous coronary intervention for ST-segment elevation myocardial infarction.

    PubMed

    Kuijt, Wichert J; Green, Cindy L; Verouden, Niels J W; Haeck, Joost D E; Tzivoni, Dan; Koch, Karel T; Stone, Gregg W; Lansky, Alexandra J; Broderick, Samuel; Tijssen, Jan G P; de Winter, Robbert J; Roe, Matthew T; Krucoff, Mitchell W

    ST-segment recovery (STR) is a strong mechanistic correlate of infarct size (IS) and outcome in ST-segment elevation myocardial infarction (STEMI). Characterizing measures of speed, amplitude, and completeness of STR may extend the use of this noninvasive biomarker. Core laboratory continuous 24-h 12-lead Holter ECG monitoring, IS by single-photon emission computed tomography (SPECT), and 30-day mortality from 2 clinical trials of primary percutaneous coronary intervention in STEMI were combined. Multiple ST measures (STR at last contrast injection (LC) measured from peak value and at 30, 60, 90, 120, and 240 min; residual deviation; time to steady ST recovery; and the 3-h area under the time-trend curve [ST-AUC] from LC) were univariably correlated with IS and predictive of mortality. After multivariable adjustment for ST parameters and GRACE risk factors, STR at 240 min remained an additive predictor of mortality. Early STR, residual deviation, and ST-AUC remained associated with IS. Multiple parameters that quantify the speed, amplitude, and completeness of STR predict mortality and correlate with IS. Copyright © 2017. Published by Elsevier Inc.

  18. Physics-Based Image Segmentation Using First Order Statistical Properties and Genetic Algorithm for Inductive Thermography Imaging.

    PubMed

    Gao, Bin; Li, Xiaoqing; Woo, Wai Lok; Tian, Gui Yun

    2018-05-01

    Thermographic inspection has been widely applied to non-destructive testing and evaluation with the capabilities of rapid, contactless, and large surface area detection. Image segmentation is considered essential for identifying and sizing defects. To attain a high-level performance, specific physics-based models that describe defects generation and enable the precise extraction of target region are of crucial importance. In this paper, an effective genetic first-order statistical image segmentation algorithm is proposed for quantitative crack detection. The proposed method automatically extracts valuable spatial-temporal patterns from unsupervised feature extraction algorithm and avoids a range of issues associated with human intervention in laborious manual selection of specific thermal video frames for processing. An internal genetic functionality is built into the proposed algorithm to automatically control the segmentation threshold to render enhanced accuracy in sizing the cracks. Eddy current pulsed thermography will be implemented as a platform to demonstrate surface crack detection. Experimental tests and comparisons have been conducted to verify the efficacy of the proposed method. In addition, a global quantitative assessment index F-score has been adopted to objectively evaluate the performance of different segmentation algorithms.

  19. Defect Detection of Steel Surfaces with Global Adaptive Percentile Thresholding of Gradient Image

    NASA Astrophysics Data System (ADS)

    Neogi, Nirbhar; Mohanta, Dusmanta K.; Dutta, Pranab K.

    2017-12-01

    Steel strips are used extensively for white goods, auto bodies and other purposes where surface defects are not acceptable. On-line surface inspection systems can effectively detect and classify defects and help in taking corrective actions. For defect detection, gradients are widely used to highlight and subsequently segment areas of interest in a surface inspection system. Most of the time, segmentation by a fixed-value threshold leads to unsatisfactory results. As defects can be both very small and very large in size, segmentation of a gradient image based on percentile thresholding can lead to inadequate or excessive segmentation of defective regions. A global adaptive percentile thresholding of the gradient image has been formulated for blister defects and water deposits (a pseudo-defect) in steel strips. The developed method adaptively changes the percentile value used for thresholding depending on the number of pixels above some specific gray levels of the gradient image. The method is able to segment defective regions selectively, preserving the characteristics of defects irrespective of their size. The developed method performs better than the Otsu method of thresholding and an adaptive thresholding method based on local properties.
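    The paper's exact adaptation rule is not given in the abstract, so the Python sketch below only illustrates the general idea: threshold the gradient magnitude at a percentile that adapts to how many pixels exceed a chosen gray level. The gradient operator, gray level, percentile range, and adaptation formula are all illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def adaptive_percentile_segmentation(image, gray_level=40, lo=99.0, hi=99.9):
    """Segment defect candidates by thresholding the gradient magnitude at a
    percentile that adapts to how widespread strong gradients are."""
    gx = ndimage.sobel(image.astype(float), axis=0)
    gy = ndimage.sobel(image.astype(float), axis=1)
    grad = np.hypot(gx, gy)
    frac_high = (grad > gray_level).mean()       # fraction of strong-gradient pixels
    # many strong-gradient pixels (large defects) -> lower percentile;
    # few strong-gradient pixels (small defects) -> higher percentile
    percentile = hi - (hi - lo) * min(frac_high / 0.05, 1.0)
    return grad >= np.percentile(grad, percentile)
```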

  20. A new method for automated discontinuity trace mapping on rock mass 3D surface model

    NASA Astrophysics Data System (ADS)

    Li, Xiaojun; Chen, Jianqin; Zhu, Hehua

    2016-04-01

    This paper presents an automated discontinuity trace mapping method on a 3D surface model of rock mass. Feature points of discontinuity traces are first detected using the Normal Tensor Voting Theory, which is robust to noisy point cloud data. Discontinuity traces are then extracted from feature points in four steps: (1) trace feature point grouping, (2) trace segment growth, (3) trace segment connection, and (4) redundant trace segment removal. A sensitivity analysis is conducted to identify optimal values for the parameters used in the proposed method. The optimal triangular mesh element size is between 5 cm and 6 cm; the angle threshold in the trace segment growth step is between 70° and 90°; the angle threshold in the trace segment connection step is between 50° and 70°, and the distance threshold should be at least 15 times the mean triangular mesh element size. The method is applied to the excavation face trace mapping of a drill-and-blast tunnel. The results show that the proposed discontinuity trace mapping method is fast and effective and could be used as a supplement to traditional direct measurement of discontinuity traces.

  1. Performance Evaluation of Frequency Transform Based Block Classification of Compound Image Segmentation Techniques

    NASA Astrophysics Data System (ADS)

    Selwyn, Ebenezer Juliet; Florinabel, D. Jemi

    2018-04-01

    Compound image segmentation plays a vital role in the compression of computer screen images. Computer screen images are images that mix textual, graphical, and pictorial content. In this paper, we present a comparison of two transform-based block classification approaches for compound images, using metrics such as classification speed, precision, and recall rate. Block-based classification approaches normally divide the compound image into non-overlapping, fixed-size blocks. A frequency transform such as the Discrete Cosine Transform (DCT) or the Discrete Wavelet Transform (DWT) is then applied to each block. Mean and standard deviation are computed for each 8 × 8 block and are used as a feature set to classify the compound images into text/graphics and picture/background blocks. The classification accuracy of the block-classification-based segmentation techniques is measured with evaluation metrics such as precision and recall rate. Compound images with smooth and complex backgrounds containing text of varying size, colour and orientation are considered for testing. Experimental evidence shows that DWT-based segmentation improves recall and precision rates by approximately 2.3% over DCT-based segmentation, at the cost of increased block classification time, for both smooth- and complex-background images.
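    A short Python sketch of the block feature step described above is given below: the image is tiled into non-overlapping 8 × 8 blocks, a 2D DCT is applied to each, and the mean and standard deviation of the coefficients form the feature set. The toy classification rule and its threshold are assumptions for illustration, not the paper's classifier.

```python
import numpy as np
from scipy.fft import dctn

def block_features(gray, block=8):
    """Mean and standard deviation of the 2D DCT coefficients of each
    non-overlapping 8x8 block of a grayscale image."""
    h, w = (gray.shape[0] // block) * block, (gray.shape[1] // block) * block
    feats = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            coeffs = dctn(gray[i:i + block, j:j + block].astype(float),
                          norm="ortho")
            feats.append((coeffs.mean(), coeffs.std()))
    return np.array(feats)

def classify_blocks(feats, std_thresh=25.0):
    """Toy rule: text/graphics blocks tend to have a larger coefficient
    spread than smooth picture/background blocks (threshold is illustrative)."""
    return feats[:, 1] > std_thresh
```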

  2. Deep 3D Convolutional Encoder Networks With Shortcuts for Multiscale Feature Integration Applied to Multiple Sclerosis Lesion Segmentation.

    PubMed

    Brosch, Tom; Tang, Lisa Y W; Youngjin Yoo; Li, David K B; Traboulsee, Anthony; Tam, Roger

    2016-05-01

    We propose a novel segmentation approach based on deep 3D convolutional encoder networks with shortcut connections and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. Our model is a neural network that consists of two interconnected pathways, a convolutional pathway, which learns increasingly more abstract and higher-level image features, and a deconvolutional pathway, which predicts the final segmentation at the voxel level. The joint training of the feature extraction and prediction pathways allows for the automatic learning of features at different scales that are optimized for accuracy for any given combination of image types and segmentation task. In addition, shortcut connections between the two pathways allow high- and low-level features to be integrated, which enables the segmentation of lesions across a wide range of sizes. We have evaluated our method on two publicly available data sets (MICCAI 2008 and ISBI 2015 challenges) with the results showing that our method performs comparably to the top-ranked state-of-the-art methods, even when only relatively small data sets are available for training. In addition, we have compared our method with five freely available and widely used MS lesion segmentation methods (EMS, LST-LPA, LST-LGA, Lesion-TOADS, and SLS) on a large data set from an MS clinical trial. The results show that our method consistently outperforms these other methods across a wide range of lesion sizes.
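    As an illustration of the architecture described above, the sketch below (assuming PyTorch, which is not necessarily the authors' framework) builds a tiny two-level 3D convolutional encoder/deconvolutional decoder with one shortcut connection that sums low- and high-level features before the voxelwise prediction; the kernel sizes, channel counts, and nonlinearities are guesses made only for illustration.

```python
import torch
import torch.nn as nn

class TinyConvEncoderNet(nn.Module):
    """Two-level 3D encoder-decoder with one shortcut, in the spirit of the
    paper's architecture (depths and hyperparameters are illustrative)."""
    def __init__(self, in_ch=2, n_classes=1):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv3d(in_ch, 32, 9, padding=4), nn.ReLU())
        self.pool = nn.AvgPool3d(2)
        self.enc2 = nn.Sequential(nn.Conv3d(32, 64, 9, padding=4), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose3d(64, 32, 2, stride=2), nn.ReLU())
        self.dec1 = nn.Conv3d(32, n_classes, 9, padding=4)

    def forward(self, x):
        f1 = self.enc1(x)                 # low-level features
        f2 = self.enc2(self.pool(f1))     # higher-level, coarser features
        up = self.dec2(f2)                # deconvolutional pathway
        return torch.sigmoid(self.dec1(up + f1))   # shortcut: add low-level features

# voxelwise lesion probability for a two-channel (e.g., FLAIR + T1) patch
print(TinyConvEncoderNet()(torch.randn(1, 2, 32, 32, 32)).shape)
```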

  3. Scaling Relations for the Thermal Structure of Segmented Oceanic Transform Faults

    NASA Astrophysics Data System (ADS)

    Wolfson-Schwehr, M.; Boettcher, M. S.; Behn, M. D.

    2015-12-01

    Mid-ocean ridge-transform faults (RTFs) are a natural laboratory for studying strike-slip earthquake behavior due to their relatively simple geometry, well-constrained slip rates, and quasi-periodic seismic cycles. However, deficiencies in our understanding of the limited size of the largest RTF earthquakes are due, in part, to not considering the effect of short intra-transform spreading centers (ITSCs) on fault thermal structure. We use COMSOL Multiphysics to run a series of 3D finite element simulations of segmented RTFs with a visco-plastic rheology. The models test a range of RTF segment lengths (L = 10-150 km), ITSC offset lengths (O = 1-30 km), and spreading rates (V = 2-14 cm/yr). The lithosphere and upper mantle are approximated as steady-state, incompressible flow. Coulomb failure incorporates brittle processes in the lithosphere, and a temperature-dependent flow law for dislocation creep of olivine activates ductile deformation in the mantle. ITSC offsets as small as 2 km affect the thermal structure underlying many segmented RTFs, reducing the area above the 600 °C isotherm, A600, and thus the size of the largest expected earthquakes, Mc. We develop a scaling relation for the critical ITSC offset length, OC, which significantly reduces the thermal effect of adjacent fault segments of length L1 and L2. OC is defined as the ITSC offset that results in an area-loss ratio of R = (Aunbroken - Acombined)/(Aunbroken - Adecoupled) = 63%, where Aunbroken = C600 (L1 + L2)^1.5 V^-0.6 is A600 for an RTF of length L1 + L2; Adecoupled = C600 (L1^1.5 + L2^1.5) V^-0.6 is the combined A600 of two RTFs of lengths L1 and L2, respectively; Acombined = Aunbroken exp(-O/OC) + Adecoupled (1 - exp(-O/OC)); and C600 is a constant. We use OC and kinematic fault parameters (L1, L2, O, and V) to develop a scaling relation for the approximate seismogenic area, Aseg, for each segment of an RTF system composed of two fault segments. Finally, we estimate the size of Mc on a fault segment based on Aseg. We show that small (<1 km) offsets in the fault trace observed between MW 6 rupture patches on the Gofar and Discovery transform faults, located at ~4°S on the East Pacific Rise, are not sufficient to thermally decouple adjacent fault patches. Thus additional factors, possibly including changes in fault zone material properties, must limit the size of Mc on these faults.
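    The scaling relations above translate directly into a few lines of code. The Python sketch below evaluates them for a hypothetical segmented transform; the constant C600 and the critical offset OC are placeholders that would have to be calibrated against the thermal models, so the numbers are illustrative only.

```python
import numpy as np

def thermal_areas(L1, L2, O, V, O_C, C600=1.0):
    """Areas above the 600 C isotherm from the abstract's scaling relations;
    C600 and O_C are placeholder values, not calibrated constants."""
    A_unbroken = C600 * (L1 + L2) ** 1.5 * V ** -0.6
    A_decoupled = C600 * (L1 ** 1.5 + L2 ** 1.5) * V ** -0.6
    A_combined = (A_unbroken * np.exp(-O / O_C)
                  + A_decoupled * (1.0 - np.exp(-O / O_C)))
    R = (A_unbroken - A_combined) / (A_unbroken - A_decoupled)
    return A_unbroken, A_decoupled, A_combined, R

# at O = O_C the area-loss ratio R equals 1 - 1/e, i.e. about 63%
print(thermal_areas(L1=80.0, L2=60.0, O=10.0, V=5.0, O_C=10.0)[3])  # ~0.632
```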

  4. Characterization and Evaluation of Incorporation the Casting Sand in Mortar

    NASA Astrophysics Data System (ADS)

    Zanelato, E. B.; Azevedo, A. R. G.; Alexandre, J.; Xavier, C. G.; Monteiro, S. N.; Mendonça, T. A. O.

    The casting of metals and alloys proceeds by melting the metal and pouring it into a mold whose dimensions and geometry are close to those of the final piece. Most foundries use sand molds for making the castings. This work aims to characterize and evaluate foundry sand to allow its use in civil engineering applications, creating a viable destination for a residue that would otherwise be discarded. The following characterization tests were performed: particle size, chemical analysis, X-ray diffraction, and real grain density. The test specimens were prepared with a 1:3 cement-to-sand mix, with foundry sand incorporated at 10% and 20% of the total mass in replacement of the common sand, alongside a reference mix. The results show that the best compression and bending performance was obtained by replacing 10% of the common sand with foundry sand.

  5. Atypical patterns of cardiac involvement in Fabry disease.

    PubMed

    Coughlan, J J; Elkholy, K; O'Brien, J; Kiernan, T

    2016-03-17

    A 58-year-old woman was referred to our cardiology service with chest pain, exertional dyspnoea and palpitations on a background of known Fabry disease, diagnosed by genetic testing in 1994. ECG showed sinus rhythm, a shortened PR interval, widespread T-wave inversion, Q waves in the lateral leads and left ventricular hypertrophy (LVH). Coronary angiogram showed only mild atheroma. Transthoracic echocardiogram showed anterolateral LVH and reduced left ventricular cavity size, in keeping with Fabry cardiomyopathy. Cardiac MRI demonstrated asymmetric hypertrophy with evidence of diffuse myocardial fibrosis in the maximally hypertrophied segments from base to apex, with late gadolinium enhancement in the anterior and anteroseptal walls. This was quite an atypical appearance for Fabry cardiomyopathy. This case highlights the heterogeneity of patterns of cardiac involvement that may be associated with this rare X-linked lysosomal disorder. 2016 BMJ Publishing Group Ltd.

  6. Active tectonics of Peru: Heterogeneous interseismic coupling along the Nazca megathrust, rigid motion of the Peruvian Sliver, and Subandean shortening accommodation

    NASA Astrophysics Data System (ADS)

    Villegas-Lanza, J. C.; Chlieh, M.; Cavalié, O.; Tavera, H.; Baby, P.; Chire-Chira, J.; Nocquet, J.-M.

    2016-10-01

    Over 100 GPS sites measured in 2008-2013 in Peru provide new insights into the present-day crustal deformation of the 2200 km long Peruvian margin. This margin is squeezed between the eastward subduction of the oceanic Nazca Plate at the South America trench axis and the westward continental subduction of the South American Plate beneath the Eastern Cordillera and Subandean orogenic wedge. Continental active faults and GPS data reveal the rigid motion of a Peruvian Forearc Sliver that extends from the oceanic trench axis to the Western-Eastern Cordilleras boundary and moves southeastward at 4-5 mm/yr relative to a stable South America reference frame. GPS data indicate that the Subandean shortening increases southward by 2 to 4 mm/yr. In a Peruvian Sliver reference frame, the residual GPS data indicate that the interseismic coupling along the Nazca megathrust is highly heterogeneous. Coupling in northern Peru is shallow and coincides with the site of previous moderate-sized and shallow tsunami-earthquakes. Deep coupling occurs in central and southern Peru, where repeated large and great megathrust earthquakes have occurred. The strong correlation between highly coupled areas and large ruptures suggests that seismic asperities are persistent features of the megathrust. Creeping segments appear at the extremities of great ruptures and where oceanic fracture zones and ridges enter the subduction zone, suggesting that these subducting structures play a major role in the seismic segmentation of the Peruvian margin. In central Peru, we estimate a recurrence time of 305 ± 40 years to reproduce the great 1746 Mw 8.8 Lima-Callao earthquake.

  7. Quantification of Tumor Vessels in Glioblastoma Patients Using Time-of-Flight Angiography at 7 Tesla: A Feasibility Study

    PubMed Central

    Radbruch, Alexander; Eidel, Oliver; Wiestler, Benedikt; Paech, Daniel; Burth, Sina; Kickingereder, Philipp; Nowosielski, Martha; Bäumer, Philipp; Wick, Wolfgang; Schlemmer, Heinz-Peter; Bendszus, Martin; Ladd, Mark; Nagel, Armin Michael; Heiland, Sabine

    2014-01-01

    Purpose To analyze if tumor vessels can be visualized, segmented and quantified in glioblastoma patients with time of flight (ToF) angiography at 7 Tesla and multiscale vessel enhancement filtering. Materials and Methods Twelve patients with newly diagnosed glioblastoma were examined with ToF angiography (TR = 15 ms, TE = 4.8 ms, flip angle = 15°, FOV = 160×210 mm2, voxel size: 0.31×0.31×0.40 mm3) on a whole-body 7 T MR system. A volume of interest (VOI) was placed within the border of the contrast enhancing part on T1-weighted images of the glioblastoma and a reference VOI was placed in the non-affected contralateral white matter. Automated segmentation and quantification of vessels within the two VOIs was achieved using multiscale vessel enhancement filtering in ImageJ. Results Tumor vessels were clearly visible in all patients. When comparing tumor and the reference VOI, total vessel surface (45.3±13.9 mm2 vs. 29.0±21.0 mm2 (p<0.035)) and number of branches (3.5±1.8 vs. 1.0±0.6 (p<0.001) per cubic centimeter were significantly higher, while mean vessel branch length was significantly lower (3.8±1.5 mm vs 7.2±2.8 mm (p<0.001)) in the tumor. Discussion ToF angiography at 7-Tesla MRI enables characterization and quantification of the internal vascular morphology of glioblastoma and may be used for the evaluation of therapy response within future studies. PMID:25415327
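    The study performed its multiscale vessel enhancement filtering in ImageJ; a comparable step can be sketched in Python with scikit-image's Frangi filter, which is a swapped-in tool rather than the study's plug-in. The sigma range and threshold below are illustrative, not the study's settings.

```python
import numpy as np
from skimage.filters import frangi

def vessel_mask(tof_volume, sigmas=range(1, 4), threshold=0.05):
    """Multiscale (Frangi) vessel enhancement of a ToF volume followed by a
    fixed threshold to obtain a binary vessel mask."""
    vesselness = frangi(tof_volume.astype(float), sigmas=sigmas,
                        black_ridges=False)   # bright vessels on dark background
    return vesselness > threshold

def vessel_fraction(mask, voi):
    """Fraction of a boolean volume of interest occupied by segmented vessels,
    a simple surrogate for the per-VOI vessel measures reported above."""
    return mask[voi].mean()
```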

  8. Size and Base Composition of RNA in Supercoiled Plasmid DNA

    PubMed Central

    Williams, Peter H.; Boyer, Herbert W.; Helinski, Donald R.

    1973-01-01

    The average size and base composition of the covalently integrated RNA segment in supercoiled ColE1 DNA synthesized in Escherichia coli in the presence of chloramphenicol (CM-ColE1 DNA) have been determined by two independent methods. The two approaches yielded similar results, indicating that the RNA segment in CM-ColE1 DNA contains GMP at the 5′ end and comprises on the average 25 to 26 ribonucleotides with a base composition of 10-11 G, 3 A, 5-6 C, and 6-7 U. PMID:4359488

  9. Elevated serum uric acid affects myocardial reperfusion and infarct size in patients with ST-segment elevation myocardial infarction undergoing primary percutaneous coronary intervention.

    PubMed

    Mandurino-Mirizzi, Alessandro; Crimi, Gabriele; Raineri, Claudia; Pica, Silvia; Ruffinazzi, Marta; Gianni, Umberto; Repetto, Alessandra; Ferlini, Marco; Marinoni, Barbara; Leonardi, Sergio; De Servi, Stefano; Oltrona Visconti, Luigi; De Ferrari, Gaetano M; Ferrario, Maurizio

    2018-05-01

    Elevated serum uric acid (eSUA) was associated with unfavorable outcome in patients with ST-segment elevation myocardial infarction (STEMI). However, the effect of eSUA on myocardial reperfusion injury and infarct size has been poorly investigated. Our aim was to correlate eSUA with infarct size, infarct size shrinkage, myocardial reperfusion grade and long-term mortality in STEMI patients undergoing primary percutaneous coronary intervention. We performed a post hoc patient-level analysis of two randomized controlled trials testing strategies for myocardial ischemia/reperfusion injury protection. Each patient underwent acute (3-5 days) and follow-up (4-6 months) cardiac magnetic resonance. Infarct size and infarct size shrinkage were outcomes of interest. We assessed T2-weighted edema, myocardial blush grade (MBG), corrected Thrombolysis in myocardial infarction Frame Count, ST-segment resolution and long-term all-cause mortality. A total of 101 (86.1% anterior) STEMI patients were included; eSUA was found in 16 (15.8%) patients. Infarct size was larger in eSUA compared with non-eSUA patients (42.3 ± 22 vs. 29.1 ± 15 ml, P = 0.008). After adjusting for covariates, infarct size was 10.3 ml (95% confidence interval 1.2-19.3 ml, P = 0.001) larger in eSUA. Among patients with anterior myocardial infarction, the difference in delayed enhancement between groups was maintained (42.3 ± 22.4 vs. 29.9 ± 15.4 ml, respectively; P = 0.015). Infarct size shrinkage was similar between the groups. Compared with non-eSUA, eSUA patients had larger T2-weighted edema (53.8 vs. 41.2 ml, P = 0.031) and less favorable MBG (MBG < 2: 44.4 vs. 13.6%, P = 0.045). Corrected Thrombolysis in myocardial infarction Frame Count and ST-segment resolution did not significantly differ between the groups. At a median follow-up of 7.3 years, all-cause mortality was higher in the eSUA group (18.8 vs. 2.4%, P = 0.028). eSUA may affect myocardial reperfusion in patients with STEMI undergoing percutaneous coronary intervention and is associated with larger infarct size and higher long-term mortality.

  10. Teachers' Characteristics: Understanding the Decision to Refer for Special Education Placement

    ERIC Educational Resources Information Center

    Hauck, Deborah Z.

    2010-01-01

    This mixed method study examined elementary teachers' characteristics (efficacy, tolerance, and demographics) and their influences on the decision to refer African American students to special education. A stratified purposeful sample of 115 elementary teachers for the quantitative segment and a subsample of 13 teachers for the qualitative portion…

  11. Total body composition estimated by standing-posture 8-electrode bioelectrical impedance analysis in male wrestlers.

    PubMed

    Cheng, M-F; Chen, Y-Y; Jang, T-R; Lin, W-L; Chen, J; Hsieh, K-C

    2016-12-01

    Standing-posture 8-electrode bioelectrical impedance analysis is a fast and practical method for evaluating body composition in clinical settings, which can be used to estimate percentage body fat (BF%) and skeletal muscle mass in a subject's total body and body segments. In this study, dual-energy X-ray absorptiometry (DXA) was used as a reference method for validating the standing 8-electrode bioelectrical impedance analysis device BC-418 (BIA8, Tanita Corp., Tokyo, Japan). Forty-eight Taiwanese male wrestlers aged from 17.9 to 22.3 years volunteered to participate in this study. The lean soft tissue (LST) and BF% in the total body and body segments were measured in each subject by the BIA8 and DXA. The correlation coefficients between total body, arm, leg segments impedance index (BI, ht2/Z) and lean soft tissue mass measured from DXA were r = 0.902, 0.453, 0.885, respectively (p < 0.01). In addition, the total body and segmental LST estimated by the BIA8 were highly correlated with the DXA data (r = 0.936, 0.466, 0.886, p < 0.01). The estimation of total body and segmental BF% measured by BIA8 and DXA also showed a significant correlation (r > 0.820, p < 0.01). The estimated LST and BF% from BIA8 in the total body and body segments were highly correlated with the DXA results, which indicated that the standing-posture 8-electrode bioelectrical impedance analysis may be used to derive reference measures of LST and BF% in Taiwanese male wrestlers.
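    The validation above rests on the impedance index ht2/Z predicting DXA lean soft tissue. A minimal Python/NumPy sketch of that relationship is given below; the simple linear regression form and any inputs fed to it are illustrative assumptions, not the study's calibration.

```python
import numpy as np

def impedance_index(height_cm, impedance_ohm):
    """BI = ht^2 / Z, the predictor correlated with DXA lean soft tissue."""
    return height_cm ** 2 / impedance_ohm

def fit_lst_model(bi, lst_dxa):
    """Least-squares line LST = a * BI + b and the Pearson r against DXA."""
    bi, lst_dxa = np.asarray(bi, float), np.asarray(lst_dxa, float)
    a, b = np.polyfit(bi, lst_dxa, 1)
    r = np.corrcoef(bi, lst_dxa)[0, 1]
    return a, b, r
```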

  12. Contribution of calcaneal and leg segment rotations to ankle joint dorsiflexion in a weight-bearing task.

    PubMed

    Chizewski, Michael G; Chiu, Loren Z F

    2012-05-01

    Joint angle is the relative rotation between two segments where one is a reference and assumed to be non-moving. However, rotation of the reference segment will influence the system's spatial orientation and joint angle. The purpose of this investigation was to determine the contribution of leg and calcaneal rotations to ankle rotation in a weight-bearing task. Forty-eight individuals performed partial squats recorded using a 3D motion capture system. Markers on the calcaneus and leg were used to model leg and calcaneal segment, and ankle joint rotations. Multiple linear regression was used to determine the contribution of leg and calcaneal segment rotations to ankle joint dorsiflexion. Regression models for left (R² = 0.97) and right (R² = 0.97) ankle dorsiflexion were significant. Sagittal plane leg rotation had a positive influence (left: β=1.411; right: β=1.418) while sagittal plane calcaneal rotation had a negative influence (left: β=-0.573; right: β=-0.650) on ankle dorsiflexion. Sagittal plane rotations of the leg and calcaneus were positively correlated (left: r=0.84, P<0.001; right: r=0.80, P<0.001). During a partial squat, the calcaneus rotates forward. Simultaneous forward calcaneal rotation with ankle dorsiflexion reduces total ankle dorsiflexion angle. Rear foot posture is reoriented during a partial squat, allowing greater leg rotation in the sagittal plane. Segment rotations may provide greater insight into movement mechanics that cannot be explained via joint rotations alone. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Total body composition estimated by standing-posture 8-electrode bioelectrical impedance analysis in male wrestlers

    PubMed Central

    Cheng, M-F; Chen, Y-Y; Jang, T-R; Lin, W-L; Chen, J

    2015-01-01

    Standing-posture 8-electrode bioelectrical impedance analysis is a fast and practical method for evaluating body composition in clinical settings, which can be used to estimate percentage body fat (BF%) and skeletal muscle mass in a subject’s total body and body segments. In this study, dual-energy X-ray absorptiometry (DXA) was used as a reference method for validating the standing 8-electrode bioelectrical impedance analysis device BC-418 (BIA8, Tanita Corp., Tokyo, Japan). Forty-eight Taiwanese male wrestlers aged from 17.9 to 22.3 years volunteered to participate in this study. The lean soft tissue (LST) and BF% in the total body and body segments were measured in each subject by the BIA8 and DXA. The correlation coefficients between total body, arm, leg segments impedance index (BI, ht2/Z) and lean soft tissue mass measured from DXA were r = 0.902, 0.453, 0.885, respectively (p < 0.01). In addition, the total body and segmental LST estimated by the BIA8 were highly correlated with the DXA data (r = 0.936, 0.466, 0.886, p < 0.01). The estimation of total body and segmental BF% measured by BIA8 and DXA also showed a significant correlation (r > 0.820, p < 0.01). The estimated LST and BF% from BIA8 in the total body and body segments were highly correlated with the DXA results, which indicated that the standing-posture 8-electrode bioelectrical impedance analysis may be used to derive reference measures of LST and BF% in Taiwanese male wrestlers. PMID:28090145

  14. Segmentation of facial bone surfaces by patch growing from cone beam CT volumes

    PubMed Central

    Lilja, Mikko; Kalke, Martti

    2016-01-01

    Objectives: The motivation behind this work was to design an automatic algorithm capable of segmenting the exterior of the dental and facial bones, including the mandible, teeth, maxilla and zygomatic bone, with an open surface (a surface with a boundary) from CBCT images for the anatomy-based reconstruction of radiographs. Such an algorithm would provide speed, consistency and improved image quality for clinical workflows, for example, in planning of implants. Methods: We used CBCT images from two studies: first to develop (n = 19) and then to test (n = 30) a segmentation pipeline. The pipeline operates by parameterizing the topology and shape of the target, searching for potential points on the facial bone–soft tissue edge, reconstructing a triangular mesh by growing patches from the edge points with good contrast, and regularizing the result with a surface polynomial. This process is repeated until convergence. Results: The output of the algorithm was benchmarked against a hand-drawn reference and reached an average error of 0.50 ± 1.0 mm and a root-mean-square error of 1.1 mm in Euclidean distance from the reference to our automatically segmented surface. These results were achieved with images affected by inhomogeneity, noise and metal artefacts that are typical for dental CBCT. Conclusions: Previously, this level of accuracy and precision in dental CBCT has been reported in segmenting only the mandible, a much easier target. The segmentation results were consistent throughout the data set and the pipeline was found fast enough (<1-min average computation time) to be considered for clinical use. PMID:27482878

  15. Two phase sampling for wheat acreage estimation. [large area crop inventory experiment

    NASA Technical Reports Server (NTRS)

    Thomas, R. W.; Hay, C. M.

    1977-01-01

    A two-phase LANDSAT-based sample allocation and wheat proportion estimation method was developed. This technique employs manual, LANDSAT full-frame-based wheat or cultivated-land proportion estimates from a large number of segments comprising the first sample phase to optimally allocate a smaller phase-two sample of computer- or manually-processed segments. Application to the Kansas Southwest CRD for 1974 produced a wheat acreage estimate for that CRD within 2.42 percent of the USDA SRS-based estimate, using a lower CRD inventory budget than a simulated reference LACIE system. Improvements in cost or precision of a factor of 2 or greater relative to the reference system were obtained.
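
    The two-phase (double sampling) idea above can be illustrated with a regression estimator: a cheap proportion estimate is available for a large phase-one sample of segments and an accurate estimate for a small phase-two subsample. The numbers below are hypothetical and the estimator is a generic textbook form, not necessarily the exact estimator used in the study.

        # Sketch of a two-phase (double sampling) regression estimator for crop proportion.
        import numpy as np

        rng = np.random.default_rng(2)
        n1 = 200                                   # phase-one segments (cheap full-frame estimates)
        x1 = rng.beta(2.0, 5.0, n1)                # phase-one wheat proportion estimates

        # Phase two: a small subsample is re-measured with an accurate (expensive) method.
        idx2 = rng.choice(n1, size=30, replace=False)
        x2 = x1[idx2]
        y2 = np.clip(x2 + rng.normal(0.0, 0.03, x2.size), 0.0, 1.0)   # accurate proportions

        # Regression estimator: adjust the accurate mean by slope times the phase-one offset.
        b = np.polyfit(x2, y2, 1)[0]
        y_hat = y2.mean() + b * (x1.mean() - x2.mean())
        print(f"two-phase regression estimate of wheat proportion: {y_hat:.3f}")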

  16. GPU accelerated fuzzy connected image segmentation by using CUDA.

    PubMed

    Zhuge, Ying; Cao, Yong; Miller, Robert W

    2009-01-01

    Image segmentation techniques using fuzzy connectedness principles have shown their effectiveness in segmenting a variety of objects in several large applications in recent years. However, one problem of these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides high parallel computing power. In this paper, we present a parallel fuzzy connected image segmentation algorithm on Nvidia's Compute Unified Device Architecture (CUDA) platform for segmenting large medical image data sets. Our experiments on three data sets of small, medium, and large size demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 7.2x, 7.3x, and 14.4x, respectively, over the sequential implementation of the fuzzy connected image segmentation algorithm on the CPU.
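
    Fuzzy connectedness rests on a local affinity between neighbouring voxels. The sketch below shows one common Gaussian-style affinity on a toy image; it is a CPU illustration of the idea only and makes no claim about the paper's CUDA kernels or its exact affinity definition.

        # Sketch: a simple fuzzy affinity between 4-neighbours, the building block of
        # fuzzy connected segmentation (illustrative Gaussian form).
        import numpy as np

        def affinity(img, mean, sigma):
            """Affinity of each pixel with its right and lower neighbour."""
            avg_right = 0.5 * (img[:, :-1] + img[:, 1:])
            avg_down = 0.5 * (img[:-1, :] + img[1:, :])
            aff_right = np.exp(-((avg_right - mean) ** 2) / (2.0 * sigma ** 2))
            aff_down = np.exp(-((avg_down - mean) ** 2) / (2.0 * sigma ** 2))
            return aff_right, aff_down

        rng = np.random.default_rng(3)
        image = rng.normal(100.0, 10.0, (64, 64))      # toy intensity image
        right, down = affinity(image, mean=100.0, sigma=10.0)
        print(right.shape, down.shape)                  # (64, 63) and (63, 64)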

  17. Medical image segmentation using 3D MRI data

    NASA Astrophysics Data System (ADS)

    Voronin, V.; Marchuk, V.; Semenishchev, E.; Cen, Yigang; Agaian, S.

    2017-05-01

    Precise segmentation of three-dimensional (3D) magnetic resonance imaging (MRI) images can be a very useful computer-aided diagnosis (CAD) tool in clinical routines. Accurate automatic extraction of a 3D component from images obtained by magnetic resonance imaging (MRI) is a challenging segmentation problem due to the small size of the objects of interest (e.g., blood vessels, bones) in each 2D MRA slice and the complex surrounding anatomical structures. Our objective is to develop a specific segmentation scheme for accurately extracting parts of bones from MRI images. In this paper, we use a segmentation algorithm based on a modified active contour method to extract the parts of bones from magnetic resonance imaging (MRI) data sets. As a result, the proposed method demonstrates good accuracy in comparison with existing segmentation approaches on real MRI data.
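
    A minimal active-contour (snake) sketch in the spirit of the method above, using a generic scikit-image call on a synthetic bright object; the parameters and data are illustrative assumptions, not the authors' implementation.

        # Sketch: active-contour extraction of a bright structure from a 2D slice.
        import numpy as np
        from skimage.filters import gaussian
        from skimage.segmentation import active_contour

        # Synthetic 2D slice: a bright disk ("bone") on a dark background.
        yy, xx = np.mgrid[0:200, 0:200]
        image = ((yy - 100) ** 2 + (xx - 100) ** 2 < 40 ** 2).astype(float)

        # Initial circular contour placed around the object (row, column coordinates).
        theta = np.linspace(0.0, 2.0 * np.pi, 200)
        init = np.column_stack([100 + 60 * np.sin(theta), 100 + 60 * np.cos(theta)])

        snake = active_contour(gaussian(image, sigma=3.0, preserve_range=True),
                               init, alpha=0.015, beta=10.0, gamma=0.001)
        print(snake.shape)   # refined contour coordinates, (200, 2)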

  18. Active hexagonally segmented mirror to investigate new optical phasing technologies for segmented telescopes.

    PubMed

    Gonté, Frédéric; Dupuy, Christophe; Luong, Bruno; Frank, Christoph; Brast, Roland; Sedghi, Baback

    2009-11-10

    The primary mirror of the future European Extremely Large Telescope will be equipped with 984 hexagonal segments. The alignment of the segments in piston, tip, and tilt within a few nanometers requires an optical phasing sensor. A test bench has been designed to study four different optical phasing sensor technologies. The core element of the test bench is an active segmented mirror composed of 61 flat hexagonal segments with a size of 17 mm side to side. Each of them can be controlled in piston, tip, and tilt by three piezoactuators with a precision better than 1 nm. The context of this development, the requirements, the design, and the integration of this system are explained. The first results on the final precision obtained in closed-loop control are also presented.

  19. Assessing condition of macroinvertebrate communities and sediment toxicity in the St. Lawrence River at Massena Area-of-Concern

    USGS Publications Warehouse

    Duffy, Brian T.; Baldigo, Barry P.; Smith, Alexander J.; George, Scott D.; David, Anthony M.

    2016-01-01

    In 1972, the USA and Canada agreed to restore the chemical, physical, and biological integrity of the Great Lakes ecosystem under the first Great Lakes Water Quality Agreement. In subsequent amendments, part of the St. Lawrence River at Massena, New York and segments of three tributaries, were designated as an Area of Concern (AOC) due to the effects of polychlorinated biphenyls (PCBs), lead and copper contamination, and habitat degradation and resulting impairment to several beneficial uses. Because sediments have been largely remediated, the present study was initiated to evaluate the current status of the benthic macroinvertebrate (benthos) beneficial use impairment (BUI). Benthic macroinvertebrate communities and sediment toxicity tests using Chironomus dilutus were used to test the hypotheses that community condition and sediment toxicity at AOC sites were not significantly different from those of adjacent reference sites. Grain size was found to be the main driver of community composition and macroinvertebrate assemblages, and bioassessment metrics did not differ significantly between AOC and reference sites of the same sediment class. Median growth of C. dilutus and its survival in three of the four river systems did not differ significantly in sediments from AOC and reference sites. Comparable macroinvertebrate assemblages and general lack of toxicity across most AOC and reference sites suggest that the quality of sediments should not significantly impair benthic macroinvertebrate communities in most sites in the St. Lawrence River AOC.

  20. Application of single- and dual-energy CT brain tissue segmentation to PET monitoring of proton therapy

    NASA Astrophysics Data System (ADS)

    Berndt, Bianca; Landry, Guillaume; Schwarz, Florian; Tessonnier, Thomas; Kamp, Florian; Dedes, George; Thieke, Christian; Würl, Matthias; Kurz, Christopher; Ganswindt, Ute; Verhaegen, Frank; Debus, Jürgen; Belka, Claus; Sommer, Wieland; Reiser, Maximilian; Bauer, Julia; Parodi, Katia

    2017-03-01

    The purpose of this work was to evaluate the ability of single and dual energy computed tomography (SECT, DECT) to estimate tissue composition and density for usage in Monte Carlo (MC) simulations of irradiation induced β + activity distributions. This was done to assess the impact on positron emission tomography (PET) range verification in proton therapy. A DECT-based brain tissue segmentation method was developed for white matter (WM), grey matter (GM) and cerebrospinal fluid (CSF). The elemental composition of reference tissues was assigned to closest CT numbers in DECT space (DECTdist). The method was also applied to SECT data (SECTdist). In a validation experiment, the proton irradiation induced PET activity of three brain equivalent solutions (BES) was compared to simulations based on different tissue segmentations. Five patients scanned with a dual source DECT scanner were analyzed to compare the different segmentation methods. A single magnetic resonance (MR) scan was used for comparison with an established segmentation toolkit. Additionally, one patient with SECT and post-treatment PET scans was investigated. For BES, DECTdist and SECTdist reduced differences to the reference simulation by up to 62% when compared to the conventional stoichiometric segmentation (SECTSchneider). In comparison to MR brain segmentation, Dice similarity coefficients for WM, GM and CSF were 0.61, 0.67 and 0.66 for DECTdist and 0.54, 0.41 and 0.66 for SECTdist. MC simulations of PET treatment verification in patients showed important differences between DECTdist/SECTdist and SECTSchneider for patients with large CSF areas within the treatment field but not in WM and GM. Differences could be misinterpreted as PET derived range shifts of up to 4 mm. DECTdist and SECTdist yielded comparable activity distributions, and comparison of SECTdist to a measured patient PET scan showed improved agreement when compared to SECTSchneider. The agreement between predicted and measured PET activity distributions was improved by employing a brain specific segmentation applicable to both DECT and SECT data.
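
    The Dice similarity coefficients quoted above compare two label maps per tissue class. A minimal sketch of that computation on toy 3D labels follows; the data are synthetic, not the study's segmentations.

        # Sketch: Dice similarity coefficient per tissue label between two label maps.
        import numpy as np

        def dice(a, b, label):
            a, b = (a == label), (b == label)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        rng = np.random.default_rng(4)
        reference = rng.integers(0, 3, size=(64, 64, 64))        # 0=CSF, 1=GM, 2=WM (toy)
        test = reference.copy()
        flip = rng.random(reference.shape) < 0.1                  # perturb 10% of voxels
        test[flip] = rng.integers(0, 3, size=flip.sum())

        for name, lbl in [("CSF", 0), ("GM", 1), ("WM", 2)]:
            print(name, round(dice(reference, test, lbl), 3))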

  1. Rapid Contour-based Segmentation for 18F-FDG PET Imaging of Lung Tumors by Using ITK-SNAP: Comparison to Expert-based Segmentation.

    PubMed

    Besson, Florent L; Henry, Théophraste; Meyer, Céline; Chevance, Virgile; Roblot, Victoire; Blanchet, Elise; Arnould, Victor; Grimon, Gilles; Chekroun, Malika; Mabille, Laurence; Parent, Florence; Seferian, Andrei; Bulifon, Sophie; Montani, David; Humbert, Marc; Chaumet-Riffaud, Philippe; Lebon, Vincent; Durand, Emmanuel

    2018-04-03

    Purpose To assess the performance of the ITK-SNAP software for fluorodeoxyglucose (FDG) positron emission tomography (PET) segmentation of complex-shaped lung tumors compared with an optimized, expert-based manual reference standard. Materials and Methods Seventy-six FDG PET images of thoracic lesions were retrospectively segmented by using ITK-SNAP software. Each tumor was manually segmented by six raters to generate an optimized reference standard by using the simultaneous truth and performance level estimation algorithm. Four raters segmented 76 FDG PET images of lung tumors twice by using the ITK-SNAP active contour algorithm. Accuracy of the ITK-SNAP procedure was assessed by using the Dice coefficient and the Hausdorff metric. Interrater and intrarater reliability were estimated by using intraclass correlation coefficients of the output volumes. Finally, the ITK-SNAP procedure was compared with currently recommended PET tumor delineation methods based on thresholding the volume of interest (VOI) at 41% (VOI41) and 50% (VOI50) of the tumor's maximal metabolic intensity. Results Accuracy estimates for the ITK-SNAP procedure indicated a Dice coefficient of 0.83 (95% confidence interval: 0.77, 0.89) and a Hausdorff distance of 12.6 mm (95% confidence interval: 9.82, 15.32). Interrater reliability was an intraclass correlation coefficient of 0.94 (95% confidence interval: 0.91, 0.96). The intrarater reliabilities were intraclass correlation coefficients above 0.97. Finally, the VOI41 and VOI50 accuracy metrics were as follows: Dice coefficient, 0.48 (95% confidence interval: 0.44, 0.51) and 0.34 (95% confidence interval: 0.30, 0.38), respectively, and Hausdorff distance, 25.6 mm (95% confidence interval: 21.7, 31.4) and 31.3 mm (95% confidence interval: 26.8, 38.4), respectively. Conclusion ITK-SNAP is accurate and reliable for active-contour-based segmentation of heterogeneous thoracic PET tumors. ITK-SNAP surpassed the recommended PET methods compared with ground truth manual segmentation. © RSNA, 2018.
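
    The Hausdorff metric reported above is the symmetric maximum of the two directed point-set distances; a minimal sketch on toy point clouds follows (synthetic points, not the study's contours).

        # Sketch: symmetric Hausdorff distance between two point sets (mm).
        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        rng = np.random.default_rng(5)
        reference_pts = rng.normal(0.0, 10.0, (500, 3))                   # reference surface points
        segmented_pts = reference_pts + rng.normal(0.0, 1.0, (500, 3))    # perturbed segmentation

        h_fwd = directed_hausdorff(reference_pts, segmented_pts)[0]
        h_bwd = directed_hausdorff(segmented_pts, reference_pts)[0]
        print(f"Hausdorff distance: {max(h_fwd, h_bwd):.2f} mm")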

  2. Ground truth crop proportion summaries for US segments, 1976-1979

    NASA Technical Reports Server (NTRS)

    Horvath, R. (Principal Investigator); Rice, D.; Wessling, T.

    1981-01-01

    The original ground truth data were collected, digitized, and registered to LANDSAT data for use in the LACIE and AgRISTARS projects. The numerous ground truth categories were consolidated into fewer classes of crops or crop conditions, and the occurrences of these classes were counted for each segment. Tables are presented in which the individual entries are the percentage of total segment area assigned to a given class. The ground truth summaries were prepared from a 20% sample of the scene. An analysis indicates that this sample size provides sufficient accuracy for use of the data in initial segment screening.

  3. Global Radius of Curvature Estimation and Control System for Segmented Mirrors

    NASA Technical Reports Server (NTRS)

    Rakoczy, John M. (Inventor)

    2006-01-01

    An apparatus controls the positions of plural mirror segments in a segmented mirror with an edge sensor system and a controller. Current mirror segment edge sensor measurements and edge sensor reference measurements are compared with calculated edge sensor bias measurements representing a global radius of curvature. Accumulated prior actuator commands output from an edge sensor control unit are combined with an estimator matrix to form the edge sensor bias measurements. An optimal control matrix unit then accumulates the plurality of edge sensor error signals calculated by the summation unit and outputs the corresponding plurality of actuator commands. The plural mirror actuators respond to the actuator commands by moving the respective positions of the mirror segments. A predetermined number of boundary conditions, corresponding to a plurality of hexagonal mirror locations, are removed to enable the mathematical matrix calculation.

  4. The design and networking of dynamic satellite constellations for global mobile communication systems

    NASA Technical Reports Server (NTRS)

    Cullen, Cionaith J.; Benedicto, Xavier; Tafazolli, Rahim; Evans, Barry

    1993-01-01

    Various design factors for mobile satellite systems, whose aim is to provide worldwide voice and data communications to users with hand-held terminals, are examined. Two network segments are identified - the ground segment (GS) and the space segment (SS) - and are seen to be highly dependent on each other. The overall architecture must therefore be adapted to both of these segments, rather than each being optimized according to its own criteria. Terrestrial networks are grouped and called the terrestrial segment (TS). In the SS, of fundamental importance is the constellation altitude. The effect of the altitude on decisions such as constellation design choice and on network aspects like call handover statistics are fundamental. Orbit resonance is introduced and referred to throughout. It is specifically examined for its useful properties relating to GS/SS connectivities.

  5. Absolute measurements of large mirrors

    NASA Astrophysics Data System (ADS)

    Su, Peng

    The ability to produce mirrors for large astronomical telescopes is limited by the accuracy of the systems used to test the surfaces of such mirrors. Typically the mirror surfaces are measured by comparing their actual shapes to a precision master, which may be created using combinations of mirrors, lenses, and holograms. The work presented here develops several optical testing techniques that do not rely on a large or expensive precision master reference surface. In a sense these techniques provide absolute optical testing. The Giant Magellan Telescope (GMT) has been designed with a 350 m² collecting area provided by a 25 m diameter primary mirror made up of seven independent circular mirror segments. These segments create an equivalent f/0.7 paraboloidal primary mirror consisting of a central segment and six outer segments. Each of the outer segments is 8.4 m in diameter and has an off-axis aspheric shape departing 14.5 mm from the best-fitting sphere. Much of the work in this dissertation is motivated by the need to measure the surfaces of such large mirrors accurately, without relying on a large or expensive precision reference surface. One method for absolute testing described in this dissertation uses multiple measurements relative to a reference surface that is located in different positions with respect to the test surface of interest. The test measurements are processed with an algorithm based on the maximum likelihood (ML) method. Methodologies for measuring large flat surfaces in the 2 m diameter range and for measuring the GMT primary mirror segments were specifically developed. For example, the optical figure of a 1.6-m flat mirror was determined to 2 nm rms accuracy using multiple 1-meter sub-aperture measurements. The optical figure of the reference surface used in the 1-meter sub-aperture measurements was also determined to the 2 nm level. The optical test methodology for a 1.7-m off-axis parabola was evaluated by moving the mirror under test several times in relation to the test system. The result was a separation of the errors of the optical test system from those of the mirror under test. This method proved to be accurate to 12 nm rms. Another absolute measurement technique discussed in this dissertation utilizes the property of a paraboloidal surface that rays parallel to its optical axis are reflected to its focal point. We have developed a scanning pentaprism technique that exploits this geometry to measure off-axis paraboloidal mirrors such as the GMT segments. This technique was demonstrated on a 1.7 m diameter prototype and proved to have a precision of about 50 nm rms.

  6. Bilayer segmentation of webcam videos using tree-based classifiers.

    PubMed

    Yin, Pei; Criminisi, Antonio; Winn, John; Essa, Irfan

    2011-01-01

    This paper presents an automatic segmentation algorithm for video frames captured by a (monocular) webcam that closely approximates depth segmentation from a stereo camera. The frames are segmented into foreground and background layers that comprise a subject (participant) and other objects and individuals. The algorithm produces correct segmentations even in the presence of large background motion with a nearly stationary foreground. This research makes three key contributions: First, we introduce a novel motion representation, referred to as "motons," inspired by research in object recognition. Second, we propose estimating the segmentation likelihood from the spatial context of motion. The estimation is efficiently learned by random forests. Third, we introduce a general taxonomy of tree-based classifiers that facilitates both theoretical and experimental comparisons of several known classification algorithms and generates new ones. In our bilayer segmentation algorithm, diverse visual cues such as motion, motion context, color, contrast, and spatial priors are fused by means of a conditional random field (CRF) model. Segmentation is then achieved by binary min-cut. Experiments on many sequences of our videochat application demonstrate that our algorithm, which requires no initialization, is effective in a variety of scenes, and the segmentation results are comparable to those obtained by stereo systems.
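
    A minimal sketch of pixel-wise foreground/background classification with a random forest, one ingredient of the bilayer approach above; the feature names below are generic placeholders, not the paper's "moton" features or its CRF/min-cut stage.

        # Sketch: per-pixel foreground/background classification with a random forest.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(6)
        n = 5000
        # Toy per-pixel features: colour, local contrast, and a motion-magnitude cue.
        features = rng.normal(size=(n, 3))
        labels = (features[:, 2] + 0.5 * features[:, 0] + rng.normal(0, 0.5, n) > 0).astype(int)

        clf = RandomForestClassifier(n_estimators=50, random_state=0)
        clf.fit(features[:4000], labels[:4000])
        print("held-out accuracy:", round(clf.score(features[4000:], labels[4000:]), 3))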

  7. Graph-based surface reconstruction from stereo pairs using image segmentation

    NASA Astrophysics Data System (ADS)

    Bleyer, Michael; Gelautz, Margrit

    2005-01-01

    This paper describes a novel stereo matching algorithm for epipolar rectified images. The method applies colour segmentation on the reference image. The use of segmentation makes the algorithm capable of handling large untextured regions, estimating precise depth boundaries and propagating disparity information to occluded regions, which are challenging tasks for conventional stereo methods. We model disparity inside a segment by a planar equation. Initial disparity segments are clustered to form a set of disparity layers, which are planar surfaces that are likely to occur in the scene. Assignments of segments to disparity layers are then derived by minimization of a global cost function via a robust optimization technique that employs graph cuts. The cost function is defined on the pixel level, as well as on the segment level. While the pixel level measures the data similarity based on the current disparity map and detects occlusions symmetrically in both views, the segment level propagates the segmentation information and incorporates a smoothness term. New planar models are then generated based on the disparity layers' spatial extents. Results obtained for benchmark and self-recorded image pairs indicate that the proposed method is able to compete with the best-performing state-of-the-art algorithms.
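
    The planar disparity model inside a segment can be written d(x, y) = a*x + b*y + c and fitted by least squares; a minimal sketch on synthetic disparities follows (toy data, not the benchmark image pairs).

        # Sketch: least-squares fit of a planar disparity model inside one colour segment.
        import numpy as np

        rng = np.random.default_rng(7)
        xs = rng.uniform(0, 100, 300)                  # pixel coordinates inside the segment
        ys = rng.uniform(0, 100, 300)
        disp = 0.05 * xs - 0.02 * ys + 12.0 + rng.normal(0.0, 0.3, 300)   # noisy disparities

        A = np.column_stack([xs, ys, np.ones_like(xs)])
        (a, b, c), *_ = np.linalg.lstsq(A, disp, rcond=None)
        print(f"plane: d = {a:.3f}*x + {b:.3f}*y + {c:.2f}")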

  8. The use of atlas registration and graph cuts for prostate segmentation in magnetic resonance images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korsager, Anne Sofie, E-mail: asko@hst.aau.dk; Østergaard, Lasse Riis; Fortunati, Valerio

    2015-04-15

    Purpose: An automatic method for 3D prostate segmentation in magnetic resonance (MR) images is presented for planning image-guided radiotherapy treatment of prostate cancer. Methods: A spatial prior based on intersubject atlas registration is combined with organ-specific intensity information in a graph cut segmentation framework. The segmentation is tested on 67 axial T2-weighted MR images in a leave-one-out cross validation experiment and compared with both manual reference segmentations and multiatlas-based segmentations using majority voting atlas fusion. The impact of atlas selection is investigated in both the traditional atlas-based segmentation and the new graph cut method that combines atlas and intensity information in order to improve the segmentation accuracy. Best results were achieved using the method that combines intensity information, shape information, and atlas selection in the graph cut framework. Results: A mean Dice similarity coefficient (DSC) of 0.88 and a mean surface distance (MSD) of 1.45 mm with respect to the manual delineation were achieved. Conclusions: This approaches the interobserver DSC of 0.90 and interobserver MSD of 1.15 mm and is comparable to other studies performing prostate segmentation in MR.

  9. Sparse intervertebral fence composition for 3D cervical vertebra segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yang, Jian; Song, Shuang; Cong, Weijian; Jiao, Peifeng; Song, Hong; Ai, Danni; Jiang, Yurong; Wang, Yongtian

    2018-06-01

    Statistical shape models are capable of extracting shape prior information, and are usually utilized to assist the task of segmentation of medical images. However, such models require large training datasets in the case of multi-object structures, and it also is difficult to achieve satisfactory results for complex shapes. This study proposed a novel statistical model for cervical vertebra segmentation, called sparse intervertebral fence composition (SiFC), which can reconstruct the boundary between adjacent vertebrae by modeling intervertebral fences. The complex shape of the cervical spine is replaced by a simple intervertebral fence, which considerably reduces the difficulty of cervical segmentation. The final segmentation results are obtained by using a 3D active contour deformation model without shape constraint, which substantially enhances the recognition capability of the proposed method for objects with complex shapes. The proposed segmentation framework is tested on a dataset with CT images from 20 patients. A quantitative comparison against corresponding reference vertebral segmentation yields an overall mean absolute surface distance of 0.70 mm and a dice similarity index of 95.47% for cervical vertebral segmentation. The experimental results show that the SiFC method achieves competitive cervical vertebral segmentation performances, and completely eliminates inter-process overlap.

  10. Identification of QTLs for rice grain size using a novel set of chromosomal segment substitution lines derived from Yamadanishiki in the genetic background of Koshihikari

    PubMed Central

    Okada, Satoshi; Onogi, Akio; Iijima, Ken; Hori, Kiyosumi; Iwata, Hiroyoshi; Yokoyama, Wakana; Suehiro, Miki; Yamasaki, Masanori

    2018-01-01

    Grain size is important for brewing-rice cultivars, but the genetic basis for this trait is still unclear. This paper aims to identify QTLs for grain size using novel chromosomal segment substitution lines (CSSLs) harboring chromosomal segments from Yamadanishiki, an excellent sake-brewing rice, in the genetic background of Koshihikari, a cooking cultivar. We developed a set of 49 CSSLs. Grain length (GL), grain width (GWh), grain thickness (GT), 100-grain weight (GWt) and days to heading (DTH) were evaluated, and a CSSL-QTL analysis was conducted. Eighteen QTLs for grain size and DTH were identified. Seven (qGL11, qGWh5, qGWh10, qGWt6-2, qGWt10-2, qDTH3, and qDTH6) that were detected in F2 and recombinant inbred lines (RILs) from Koshihikari/Yamadanishiki were validated, suggesting that they are important for large grain size and heading date in Yamadanishiki. Additionally, QTL reanalysis for GWt showed that qGWt10-2 was only detected in early-flowering RILs, while qGWt5 (in the same region as qGWh5) was only detected in late-flowering RILs, suggesting that these QTLs show different responses to the environment. Our study revealed that grain size in the Yamadanishiki cultivar is determined by a complex genetic mechanism. These findings could be useful for the breeding of both cooking and brewing rice. PMID:29875604

  11. Automatic mouse ultrasound detector (A-MUD): A new tool for processing rodent vocalizations.

    PubMed

    Zala, Sarah M; Reitschmidt, Doris; Noll, Anton; Balazs, Peter; Penn, Dustin J

    2017-01-01

    House mice (Mus musculus) emit complex ultrasonic vocalizations (USVs) during social and sexual interactions, which have features similar to bird song (i.e., they are composed of several different types of syllables, uttered in succession over time to form a pattern of sequences). Manually processing complex vocalization data is time-consuming and potentially subjective, and therefore, we developed an algorithm that automatically detects mouse ultrasonic vocalizations (Automatic Mouse Ultrasound Detector or A-MUD). A-MUD is a script that runs on STx acoustic software (S_TOOLS-STx version 4.2.2), which is free for scientific use. This algorithm improved the efficiency of processing USV files, as it was 4-12 times faster than manual segmentation, depending upon the size of the file. We evaluated A-MUD error rates using manually segmented sound files as a 'gold standard' reference, and compared them to a commercially available program. A-MUD had lower error rates than the commercial software, as it detected significantly more correct positives, and fewer false positives and false negatives. The errors generated by A-MUD were mainly false negatives, rather than false positives. This study is the first to systematically compare error rates for automatic ultrasonic vocalization detection methods, and A-MUD and subsequent versions will be made available for the scientific community.
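
    Detector error rates of the kind compared above reduce to counts of correct positives, false positives and false negatives; a minimal sketch of the derived precision, recall and F1 follows. The counts are illustrative only, not the A-MUD results.

        # Sketch: precision, recall and F1 from detection counts (illustrative numbers).
        true_pos, false_pos, false_neg = 930, 25, 70

        precision = true_pos / (true_pos + false_pos)
        recall = true_pos / (true_pos + false_neg)
        f1 = 2 * precision * recall / (precision + recall)
        print(f"precision={precision:.3f}, recall={recall:.3f}, F1={f1:.3f}")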

  12. Particle size distribution of main-channel-bed sediments along the upper Mississippi River, USA

    USGS Publications Warehouse

    Remo, Jonathan; Heine, Ruben A.; Ickes, Brian

    2016-01-01

    In this study, we compared pre-lock-and-dam (ca. 1925) with a modern longitudinal survey of main-channel-bed sediments along a 740-km segment of the upper Mississippi River (UMR) between Davenport, IA, and Cairo, IL. This comparison was undertaken to gain a better understanding of how bed sediments are distributed longitudinally and to assess change since the completion of the UMR lock and dam navigation system and Missouri River dams (i.e., mid-twentieth century). The comparison of the historic and modern longitudinal bed sediment surveys showed similar bed sediment sizes and distributions along the study segment with the majority (> 90%) of bed sediment samples having a median diameter (D50) of fine to coarse sand. The fine tail (≤ D10) of the sediment size distributions was very fine to medium sand, and the coarse tail (≥ D90) of sediment-size distribution was coarse sand to gravel. Coarsest sediments in both surveys were found within or immediately downstream of bedrock-floored reaches. Statistical analysis revealed that the particle-size distributions between the survey samples were statistically identical, suggesting no overall difference in main-channel-bed sediment-size distribution between 1925 and present. This was a surprising result given the magnitude of river engineering undertaken along the study segment over the past ~ 90 years. The absence of substantial differences in main-channel-bed-sediment size suggests that flow competencies within the highly engineered navigation channel today are similar to conditions within the less-engineered historic channel.
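
    The D10, D50 and D90 values discussed above are percentiles of the particle-diameter distribution; a minimal sketch on a synthetic sand-sized sample follows (not the survey data).

        # Sketch: D10, D50 and D90 of a grain-size sample via percentiles (diameters in mm).
        import numpy as np

        rng = np.random.default_rng(8)
        diameters_mm = rng.lognormal(mean=np.log(0.4), sigma=0.6, size=1000)   # synthetic grains

        d10, d50, d90 = np.percentile(diameters_mm, [10, 50, 90])
        print(f"D10={d10:.3f} mm, D50={d50:.3f} mm, D90={d90:.3f} mm")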

  13. Cortical bone fracture analysis using XFEM - case study.

    PubMed

    Idkaidek, Ashraf; Jasiuk, Iwona

    2017-04-01

    We aim to achieve an accurate simulation of human cortical bone fracture using the extended finite element method within the commercial finite element software Abaqus. A two-dimensional unit cell model of cortical bone is built based on a microscopy image of the mid-diaphysis of the tibia of a 70-year-old human male donor. Each phase of this model (interstitial bone, cement line, and osteon) is considered linear elastic and isotropic, with material properties obtained by nanoindentation, taken from the literature. The effects of the fracture analysis method (cohesive segment approach versus linear elastic fracture mechanics approach), finite element type, and boundary conditions (traction, displacement, and mixed) on cortical bone crack initiation and propagation are studied. In this study, cohesive segment damage evolution with a traction-separation law based on energy and displacement is used. In addition, the effects of the increment size and mesh density on analysis results are investigated. We find that both the cohesive segment and linear elastic fracture mechanics approaches within the extended finite element method can effectively simulate cortical bone fracture. Mesh density and simulation increment size can influence analysis results when employing either approach, and using a finer mesh and/or smaller increment size does not always provide more accurate results. Both approaches provide close but not identical results, and crack propagation speed is found to be slower when using the cohesive segment approach. Also, using reduced integration elements along with the cohesive segment approach decreases crack propagation speed compared with using full integration elements. Copyright © 2016 John Wiley & Sons, Ltd.

  14. Testosterone Delivered with a Scaffold Is as Effective as Bone Morphologic Protein-2 in Promoting the Repair of Critical-Size Segmental Defect of Femoral Bone in Mice

    PubMed Central

    Cheng, Bi-Hua; Chu, Tien-Min G.; Chang, Chawnshang; Kang, Hong-Yo; Huang, Ko-En

    2013-01-01

    Loss of large bone segments due to fracture resulting from trauma or tumor removal is a common clinical problem. The goal of this study was to evaluate the use of scaffolds containing testosterone, bone morphogenetic protein-2 (BMP-2), or a combination of both for treatment of critical-size segmental bone defects in mice. A 2.5-mm wide osteotomy was created on the left femur of wildtype and androgen receptor knockout (ARKO) mice. Testosterone, BMP-2, or both were delivered locally using a scaffold that bridged the fracture. Results of X-ray imaging showed that in both wildtype and ARKO mice, BMP-2 treatment induced callus formation within 14 days after initiation of the treatment. Testosterone treatment also induced callus formation within 14 days in wildtype but not in ARKO mice. Micro-computed tomography and histological examinations revealed that testosterone treatment caused similar degrees of callus formation as BMP-2 treatment in wildtype mice, but had no such effect in ARKO mice, suggesting that the androgen receptor is required for testosterone to initiate fracture healing. These results demonstrate that testosterone is as effective as BMP-2 in promoting the healing of critical-size segmental defects and that combination therapy with testosterone and BMP-2 is superior to single therapy. Results of this study may provide a foundation to develop a cost effective and efficient therapeutic modality for treatment of bone fractures with segmental defects. PMID:23940550

  15. Effects of Pore Size on the Osteoconductivity and Mechanical Properties of Calcium Phosphate Cement in a Rabbit Model.

    PubMed

    Zhao, Yi-Nan; Fan, Jun-Jun; Li, Zhi-Quan; Liu, Yan-Wu; Wu, Yao-Ping; Liu, Jian

    2017-02-01

    Calcium phosphate cement (CPC) porous scaffold is widely used as a suitable bone substitute to repair bone defect, but the optimal pore size is unclear yet. The current study aimed to evaluate the effect of different pore sizes on the processing of bone formation in repairing segmental bone defect of rabbits using CPC porous scaffolds. Three kinds of CPC porous scaffolds with 5 mm diameters and 12 mm length were prepared with the same porosity but different pore sizes (Group A: 200-300 µm, Group B: 300-450 µm, Group C: 450-600 µm, respectively). Twelve millimeter segmental bone defects were created in the middle of the radius bone and filled with different kinds of CPC cylindrical scaffolds. After 4, 12, and 24 weeks, alkaline phosphatase (ALP), histological assessment, and mechanical properties evaluation were performed in all three groups. After 4 weeks, ALP activity increased in all groups but was highest in Group A with smallest pore size. The new bone formation within the scaffolds was not obvious in all groups. After 12 weeks, the new bone formation within the scaffolds was obvious in each group and highest in Group A. At 24 weeks, no significant difference in new bone formation was observed among different groups. Besides the osteoconductive effect, Group A with smallest pore size also had the best mechanical properties in vivo at 12 weeks. We demonstrate that pore size has a significant effect on the osteoconductivity and mechanical properties of calcium phosphate cement porous scaffold in vivo. Small pore size favors the bone formation in the early stage and may be more suitable for repairing segmental bone defect in vivo. © 2016 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  16. Ureter tracking and segmentation in CT urography (CTU) using COMPASS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadjiiski, Lubomir, E-mail: lhadjisk@umich.edu; Zick, David; Chan, Heang-Ping

    2014-12-15

    Purpose: The authors are developing a computerized system for automated segmentation of ureters in CTU, referred to as combined model-guided path-finding analysis and segmentation system (COMPASS). Ureter segmentation is a critical component for computer-aided diagnosis of ureter cancer. Methods: COMPASS consists of three stages: (1) rule-based adaptive thresholding and region growing, (2) path-finding and propagation, and (3) edge profile extraction and feature analysis. With institutional review board approval, 79 CTU scans performed with intravenous (IV) contrast material enhancement were collected retrospectively from 79 patient files. One hundred twenty-four ureters were selected from the 79 CTU volumes. On average, the ureters spanned 283 computed tomography slices (range: 116–399, median: 301). More than half of the ureters contained malignant or benign lesions and some had ureter wall thickening due to malignancy. A starting point for each of the 124 ureters was identified manually to initialize the tracking by COMPASS. In addition, the centerline of each ureter was manually marked and used as the reference standard for evaluation of tracking performance. The performance of COMPASS was quantitatively assessed by estimating the percentage of the length that was successfully tracked and segmented for each ureter and by estimating the average distance and the average maximum distance between the computer and the manually tracked centerlines. Results: Of the 124 ureters, 120 (97%) were segmented completely (100%), 121 (98%) were segmented through at least 70%, and 123 (99%) were segmented through at least 50% of their length. In comparison, using our previous method, 85 (69%) ureters were segmented completely (100%), 100 (81%) were segmented through at least 70%, and 107 (86%) were segmented through at least 50% of their length. With COMPASS, the average distance between the computer and the manually generated centerlines is 0.54 mm, and the average maximum distance is 2.02 mm. With our previous method, the average distance between the centerlines was 0.80 mm, and the average maximum distance was 3.38 mm. The improvements in the ureteral tracking length and both distance measures were statistically significant (p < 0.0001). Conclusions: COMPASS significantly improved ureter tracking, including regions across ureter lesions, wall thickening, and narrowing of the lumen.

  17. The study of muscle remodeling in Drosophila metamorphosis using in vivo microscopy and bioimage informatics

    PubMed Central

    2012-01-01

    Background Metamorphosis in insects transforms the larval into an adult body plan and comprises the destruction and remodeling of larval and the generation of adult tissues. The remodeling of larval into adult muscles promises to be a genetic model for human atrophy since it is associated with dramatic alteration in cell size. Furthermore, muscle development is amenable to 3D in vivo microscopy at high cellular resolution. However, multi-dimensional image acquisition leads to sizeable amounts of data that demand novel approaches in image processing and analysis. Results To handle, visualize and quantify time-lapse datasets recorded in multiple locations, we designed a workflow comprising three major modules. First, the previously introduced TLM-converter concatenates stacks of single time-points. The second module, TLM-2D-Explorer, creates maximum intensity projections for rapid inspection and allows the temporal alignment of multiple datasets. The transition between prepupal and pupal stage serves as reference point to compare datasets of different genotypes or treatments. We demonstrate how the temporal alignment can reveal novel insights into the east gene which is involved in muscle remodeling. The third module, TLM-3D-Segmenter, performs semi-automated segmentation of selected muscle fibers over multiple frames. 3D image segmentation consists of 3 stages. First, the user places a seed into a muscle of a key frame and performs surface detection based on level-set evolution. Second, the surface is propagated to subsequent frames. Third, automated segmentation detects nuclei inside the muscle fiber. The detected surfaces can be used to visualize and quantify the dynamics of cellular remodeling. To estimate the accuracy of our segmentation method, we performed a comparison with a manually created ground truth. Key and predicted frames achieved a performance of 84% and 80%, respectively. Conclusions We describe an analysis pipeline for the efficient handling and analysis of time-series microscopy data that enhances productivity and facilitates the phenotypic characterization of genetic perturbations. Our methodology can easily be scaled up for genome-wide genetic screens using readily available resources for RNAi based gene silencing in Drosophila and other animal models. PMID:23282138

  18. Wavelet-based adaptive thresholding method for image segmentation

    NASA Astrophysics Data System (ADS)

    Chen, Zikuan; Tao, Yang; Chen, Xin; Griffis, Carl

    2001-05-01

    A nonuniform background distribution may cause a global thresholding method to fail to segment objects. One solution is using a local thresholding method that adapts to local surroundings. In this paper, we propose a novel local thresholding method for image segmentation, using multiscale threshold functions obtained by wavelet synthesis with weighted detail coefficients. In particular, the coarse-to-fine synthesis with attenuated detail coefficients produces a threshold function corresponding to a high-frequency-reduced signal. This wavelet-based local thresholding method adapts to both local size and local surroundings, and its implementation can take advantage of the fast wavelet algorithm. We applied this technique to physical contaminant detection for poultry meat inspection using x-ray imaging. Experiments showed that inclusion objects in deboned poultry could be extracted at multiple resolutions despite their irregular sizes and uneven backgrounds.
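
    A sketch of the idea above: attenuate the wavelet detail coefficients, reconstruct a smooth threshold surface, and segment by comparing the image to that surface. The PyWavelets calls are generic and the parameters and offset are illustrative assumptions, not the authors' settings.

        # Sketch: wavelet-based local threshold surface via attenuated detail coefficients.
        import numpy as np
        import pywt

        rng = np.random.default_rng(9)
        # Toy image: a bright inclusion on a slowly varying, nonuniform background.
        yy, xx = np.mgrid[0:128, 0:128]
        background = 100.0 + 0.3 * xx + 0.2 * yy
        image = background + rng.normal(0.0, 2.0, (128, 128))
        image[40:44, 60:64] += 25.0                        # small inclusion

        coeffs = pywt.wavedec2(image, "db2", level=4)
        attenuation = 0.1                                   # weight for detail coefficients
        coeffs = [coeffs[0]] + [tuple(attenuation * d for d in level) for level in coeffs[1:]]
        threshold_surface = pywt.waverec2(coeffs, "db2")[:128, :128]

        mask = image > threshold_surface + 10.0             # offset above the local background
        print("segmented pixels:", int(mask.sum()))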

  19. Comparison of MRI segmentation techniques for measuring liver cyst volumes in autosomal dominant polycystic kidney disease.

    PubMed

    Farooq, Zerwa; Behzadi, Ashkan Heshmatzadeh; Blumenfeld, Jon D; Zhao, Yize; Prince, Martin R

    To compare MRI segmentation methods for measuring liver cyst volumes in autosomal dominant polycystic kidney disease (ADPKD). Liver cyst volumes in 42 ADPKD patients were measured using region growing, thresholding, and cyst diameter techniques. Manual segmentation was the reference standard. Root mean square deviation was 113, 155, and 500 for the cyst diameter, thresholding, and region growing techniques, respectively. The thresholding error for cyst volumes below 500 ml was 550%, vs 17% for cyst volumes above 500 ml (p < 0.001). For measuring the volume of a small number of cysts, the cyst diameter and manual segmentation methods are recommended. For severe disease with numerous, large hepatic cysts, thresholding is an acceptable alternative. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Current Methods to Define Metabolic Tumor Volume in Positron Emission Tomography: Which One is Better?

    PubMed

    Im, Hyung-Jun; Bradshaw, Tyler; Solaiyappan, Meiyappan; Cho, Steve Y

    2018-02-01

    Numerous methods to segment tumors using 18F-fluorodeoxyglucose positron emission tomography (FDG PET) have been introduced. Metabolic tumor volume (MTV) refers to the metabolically active volume of the tumor segmented using FDG PET, and has been shown to be useful in predicting patient outcome and in assessing treatment response. Also, tumor segmentation using FDG PET has useful applications in radiotherapy treatment planning. Despite extensive research on MTV showing promising results, MTV is not used in standard clinical practice yet, mainly because there is no consensus on the optimal method to segment tumors in FDG PET images. In this review, we discuss currently available methods to measure MTV using FDG PET, and assess the advantages and disadvantages of the methods.

  1. Fabrication, testing and modeling of a new flexible armor inspired from natural fish scales and osteoderms.

    PubMed

    Chintapalli, Ravi Kiran; Mirkhalaf, Mohammad; Dastjerdi, Ahmad Khayer; Barthelat, Francois

    2014-09-01

    Crocodiles, armadillos, turtles, fish and many other animal species have evolved flexible armored skins in the form of hard scales or osteoderms, which can be described as hard plates of finite size embedded in softer tissues. The individual hard segments provide protection from predators, while the relative motion of these segments provides the flexibility required for efficient locomotion. In this work, we duplicated these broad concepts in a bio-inspired segmented armor. Hexagonal segments of well-defined size and shape were carved within a thin glass plate using laser engraving. The engraved plate was then placed on a soft substrate which simulated soft tissues, and then punctured with a sharp needle mounted on a miniature loading stage. The resistance of our segmented armor was significantly higher when smaller hexagons were used, and our bio-inspired segmented glass displayed an increase in puncture resistance of up to 70% compared to a continuous plate of glass of the same thickness. Detailed structural analyses aided by finite elements revealed that this extraordinary improvement is due to the reduced span of individual segments, which decreases flexural stresses and delays fracture. This effect can, however, only be achieved if the plates are at least 1000 times stiffer than the underlying substrate, which is the case for natural armor systems. Our bio-inspired system also displayed many of the attributes of natural armors: flexible and robust, with 'multi-hit' capabilities. This new segmented glass therefore suggests interesting bio-inspired strategies and mechanisms which could be systematically exploited in high-performance flexible armors. This study also provides new insights and a better understanding of the mechanics of natural armors such as scales and osteoderms.

  2. A segmentation approach for a delineation of terrestrial ecoregions

    NASA Astrophysics Data System (ADS)

    Nowosad, J.; Stepinski, T.

    2017-12-01

    Terrestrial ecoregions are the result of regionalization of land into homogeneous units of similar ecological and physiographic features. Terrestrial Ecoregions of the World (TEW) is a commonly used global ecoregionalization based on expert knowledge and in situ observations. Ecological Land Units (ELUs) is a global classification of 250-meter cells into 4000 types on the basis of the categorical values of four environmental variables. ELUs are automatically calculated and reproducible, but they are not a regionalization, which makes them impractical for GIS-based spatial analysis and for comparison with TEW. We have regionalized terrestrial ecosystems on the basis of patterns of the same variables (land cover, soils, landform, and bioclimate) previously used in ELUs. Considering patterns of categorical variables makes segmentation, and thus regionalization, possible. The original raster datasets of the four variables are first transformed into regular grids of square-sized blocks of their cells called eco-sites. Eco-sites are elementary land units containing local patterns of physiographic characteristics and are thus assumed to contain a single ecosystem. Next, eco-sites are locally aggregated using a procedure analogous to image segmentation. The procedure optimizes the pattern homogeneity of all four environmental variables within each segment. The result is a regionalization of the landmass into land units characterized by uniform patterns of land cover, soils, landforms, and climate and, by inference, by uniform ecosystems. Because several disjoint segments may have very similar characteristics, we cluster the segments to obtain a smaller set of segment types, which we identify with ecoregions. Our approach is automatic, reproducible, updatable, and customizable. It yields the first automatic delineation of ecoregions on the global scale. In the resulting vector database, each ecoregion/segment is described by numerous attributes, which makes it a valuable GIS resource for global ecological and conservation studies.

  3. Feasibility and scalability of spring parameters in distraction enterogenesis in a murine model.

    PubMed

    Huynh, Nhan; Dubrovsky, Genia; Rouch, Joshua D; Scott, Andrew; Stelzner, Matthias; Shekherdimian, Shant; Dunn, James C Y

    2017-07-01

    Distraction enterogenesis has been investigated as a novel treatment for short bowel syndrome (SBS). With variable intestinal sizes, it is critical to determine safe, translatable spring characteristics in differently sized animal models before clinical use. Nitinol springs have been shown to lengthen intestines in rats and pigs. Here, we show spring-mediated intestinal lengthening is scalable and feasible in a murine model. A 10-mm nitinol spring was compressed to 3 mm and placed in a 5-mm intestinal segment isolated from continuity in mice. A noncompressed spring placed in a similar fashion served as a control. Spring parameters were proportionally extrapolated from previous spring parameters to accommodate the smaller size of murine intestines. After 2-3 wk, the intestinal segments were examined for size and histology. Experimental group with spring constants, k = 0.2-1.4 N/m, showed intestinal lengthening from 5.0 ± 0.6 mm to 9.5 ± 0.8 mm (P < 0.0001), whereas control segments lengthened from 5.3 ± 0.5 mm to 6.4 ± 1.0 mm (P < 0.02). Diameter increased similarly in both groups. Isolated segment perforation was noted when k ≥ 0.8 N/m. Histologically, lengthened segments had increased muscularis thickness and crypt depth in comparison to normal intestine. Nitinol springs with k ≤ 0.4 N/m can safely yield nearly 2-fold distraction enterogenesis in length and diameter in a scalable mouse model. Not only does this study derive the safe ranges and translatable spring characteristics in a scalable murine model for patients with short bowel syndrome, it also demonstrates the feasibility of spring-mediated intestinal lengthening in a mouse, which can be used to study underlying mechanisms in the future. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. A NDVI assisted remote sensing image adaptive scale segmentation method

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Shen, Jinxiang; Ma, Yanmei

    2018-03-01

    Multiscale segmentation can effectively form the boundaries of objects at different scales. However, for remote sensing images with wide coverage and complicated ground objects, the number of suitable segmentation scales and the size of each scale are still difficult to determine accurately, which severely restricts rapid information extraction from such images. A great number of experiments have shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents an NDVI-assisted adaptive-scale segmentation method for remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. For different regions consisting of different targets, different segmentation scale boundaries can be created. The experimental results show that the NDVI-based adaptive segmentation method can effectively create object boundaries for different ground objects in remote sensing images.
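
    A minimal sketch of the quantity the method above is driven by: NDVI computed from red and near-infrared bands, plus a simplified similarity check between neighbouring pixels. The bands are synthetic and the similarity rule is a stand-in, not the paper's scale-selection procedure.

        # Sketch: NDVI from red and near-infrared bands, with a toy similarity threshold.
        import numpy as np

        rng = np.random.default_rng(10)
        red = rng.uniform(0.05, 0.3, (100, 100))        # red reflectance (toy)
        nir = rng.uniform(0.3, 0.6, (100, 100))         # near-infrared reflectance (toy)

        ndvi = (nir - red) / (nir + red + 1e-9)

        # Simplified local-similarity check between a pixel and its right neighbour.
        similar = np.abs(ndvi[:, :-1] - ndvi[:, 1:]) < 0.05
        print("mean NDVI:", round(float(ndvi.mean()), 3),
              "| fraction of similar neighbour pairs:", round(float(similar.mean()), 3))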

  5. Atlas-based segmentation in breast cancer radiotherapy: Evaluation of specific and generic-purpose atlases.

    PubMed

    Ciardo, Delia; Gerardi, Marianna Alessandra; Vigorito, Sabrina; Morra, Anna; Dell'acqua, Veronica; Diaz, Federico Javier; Cattani, Federica; Zaffino, Paolo; Ricotti, Rosalinda; Spadea, Maria Francesca; Riboldi, Marco; Orecchia, Roberto; Baroni, Guido; Leonardi, Maria Cristina; Jereczek-Fossa, Barbara Alicja

    2017-04-01

    Atlas-based automatic segmentation (ABAS) addresses the challenges of accuracy and reliability in manual segmentation. We aim to evaluate the contribution of specific-purpose libraries in ABAS of breast cancer (BC) patients with respect to generic-purpose libraries. One generic-purpose and 9 specific-purpose libraries, stratified according to type of surgery and size of thorax circumference, were obtained from the computed tomography of 200 BC patients. Keywords about contralateral breast volume and presence of breast expander/prostheses were recorded. ABAS was validated on 47 independent patients, considering manual segmentation from scratch as reference. Five ABAS datasets were obtained, testing single-ABAS and multi-ABAS with simultaneous truth and performance level estimation (STAPLE). Center of mass distance (CMD), average Hausdorff distance (AHD) and Dice similarity coefficient (DSC) between corresponding ABAS and manual structures were evaluated, and statistically significant differences between different surgeries, structures and ABAS strategies were investigated. Statistically significant differences were found between patients who underwent different surgeries, with superior results for the conservative-surgery group, and between different structures: ABAS of heart, lungs, kidneys and liver was satisfactory (median values: CMD<2 mm, DSC≥0.80, AHD<1.5 mm), whereas chest wall, breast and spinal cord obtained moderate performance (median values: 2 mm ≤ CMD<5 mm, 0.60 ≤ DSC<0.80, 1.5 mm ≤ AHD<4 mm) and esophagus, stomach, brachial plexus and supraclavicular nodes obtained poor performance (median CMD≥5 mm, DSC<0.60, AHD≥4 mm). The application of the STAPLE algorithm generally yields higher performance, and the use of keywords improves results for breast ABAS. The homogeneity in the selection of atlases based on multiple anatomical and clinical features and the use of specific-purpose libraries can improve ABAS performance with respect to generic-purpose libraries. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Genetic Structure of Avian Influenza Viruses from Ducks of the Atlantic Flyway of North America

    PubMed Central

    Huang, Yanyan; Wille, Michelle; Dobbin, Ashley; Walzthöni, Natasha M.; Robertson, Gregory J.; Ojkic, Davor; Whitney, Hugh; Lang, Andrew S.

    2014-01-01

    Wild birds, including waterfowl such as ducks, are reservoir hosts of influenza A viruses. Despite the increased number of avian influenza virus (AIV) genome sequences available, our understanding of AIV genetic structure and transmission through space and time in waterfowl in North America is still limited. In particular, AIVs in ducks of the Atlantic flyway of North America have not been thoroughly investigated. To begin to address this gap, we analyzed 109 AIV genome sequences from ducks in the Atlantic flyway to determine their genetic structure and to document the extent of gene flow in the context of sequences from other locations and other avian and mammalian host groups. The analyses included 25 AIVs from ducks from Newfoundland, Canada, from 2008–2011 and 84 available reference duck AIVs from the Atlantic flyway from 2006–2011. A vast diversity of viral genes and genomes was identified in the 109 viruses. The genetic structure differed amongst the 8 viral segments with predominant single lineages found for the PB2, PB1 and M segments, increased diversity found for the PA, NP and NS segments (2, 3 and 3 lineages, respectively), and the highest diversity found for the HA and NA segments (12 and 9 lineages, respectively). Identification of inter-hemispheric transmissions was rare with only 2% of the genes of Eurasian origin. Virus transmission between ducks and other bird groups was investigated, with 57.3% of the genes having highly similar (≥99% nucleotide identity) genes detected in birds other than ducks. Transmission between North American flyways has been frequent and 75.8% of the genes were highly similar to genes found in other North American flyways. However, the duck AIV genes did display spatial distribution bias, which was demonstrated by the different population sizes of specific viral genes in one or two neighbouring flyways compared to more distant flyways. PMID:24498009

  7. A REFERENCE GRAMMAR OF ADAMAWA FULANI. AFRICAN LANGUAGE MONOGRAPH NUMBER 8.

    ERIC Educational Resources Information Center

    STENNES, LESLIE H.

    THIS REFERENCE WORK IS A STRUCTURAL GRAMMAR OF THE ADAMAWA DIALECT OF FULANI AS SPOKEN IN NIGERIA AND CAMEROUN. IT IS PRIMARILY WRITTEN FOR LINGUISTS AND THOSE WHO ALREADY KNOW FULANI. THE GRAMMAR IS DIVIDED INTO THREE PARTS--(1) PHONEMICS AND MORPHOPHONEMICS, DISCUSSING SEGMENTAL AND SUPRASEGMENTAL PHONEMES, PERMITTED SEQUENCES OF PHONEMES,…

  8. 40 CFR 86.1333-90 - Transient test cycle generation.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... zero percent speed specified in the engine dynamometer schedules (appendix I (f)(1), (f)(2), or (f)(3... feedback torque equal to zero (using, for example, clutch disengagement, speed to torque control switching... reference speed and reference torque are zero percent values. For each idle segment that is ten seconds or...

  9. 40 CFR 86.1333-90 - Transient test cycle generation.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... zero percent speed specified in the engine dynamometer schedules (appendix I (f)(1), (f)(2), or (f)(3... feedback torque equal to zero (using, for example, clutch disengagement, speed to torque control switching... reference speed and reference torque are zero percent values. For each idle segment that is ten seconds or...

  10. 40 CFR 86.1333-90 - Transient test cycle generation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... zero percent speed specified in the engine dynamometer schedules (appendix I (f)(1), (f)(2), or (f)(3... feedback torque equal to zero (using, for example, clutch disengagement, speed to torque control switching... reference speed and reference torque are zero percent values. For each idle segment that is ten seconds or...

  11. Automatic knee cartilage delineation using inheritable segmentation

    NASA Astrophysics Data System (ADS)

    Dries, Sebastian P. M.; Pekar, Vladimir; Bystrov, Daniel; Heese, Harald S.; Blaffert, Thomas; Bos, Clemens; van Muiswinkel, Arianne M. C.

    2008-03-01

    We present a fully automatic method for segmentation of knee joint cartilage from fat-suppressed MRI. The method first applies 3-D model-based segmentation technology, which allows the femur, patella, and tibia to be reliably segmented by iterative adaptation of the model according to image gradients. Thin plate spline interpolation is used in the next step to position deformable cartilage models for each of the three bones with reference to the segmented bone models. After initialization, the cartilage models are fine-adjusted by automatic iterative adaptation to the image data based on gray value gradients. The method has been validated on a collection of 8 (3 left, 5 right) fat-suppressed datasets and demonstrated a sensitivity of 83 ± 6% compared to manual segmentation on a per-voxel basis as the primary endpoint. Gross cartilage volume measurement yielded an average error of 9 ± 7% as the secondary endpoint. For cartilage, being a thin structure, even small deviations in distance result in large errors on a per-voxel basis, rendering the primary endpoint a hard criterion.

  12. Arabic OCR: toward a complete system

    NASA Astrophysics Data System (ADS)

    El-Bialy, Ahmed M.; Kandil, Ahmed H.; Hashish, Mohamed; Yamany, Sameh M.

    1999-12-01

    Latin and Chinese OCR systems have been studied extensively in the literature, yet little work has been done on Arabic character recognition. This is due to the technical challenges posed by Arabic text. Because of its cursive nature, a powerful and stable text segmentation is needed. In addition, features capturing the characteristics of the rich Arabic character representation are needed to build an Arabic OCR. In this paper a novel segmentation technique that is font and size independent is introduced. This technique can segment a cursive written text line even if the line suffers from small skewness. The technique is not sensitive to the location of the centerline of the text line and can segment different font sizes and types (for different character sets) occurring on the same line. Feature extraction is considered one of the most important phases of a text reading system. Ideally, the features extracted from a character image should capture the essential characteristics of the character independently of font type and size. In such an ideal case, the classifier stores a single prototype per character. However, it is practically challenging to find such an ideal set of features. In this paper, a set of features that reflect the topological aspects of Arabic characters is proposed. These proposed features, integrated with a topological matching technique, yield an Arabic text reading system that is semi-omnifont.

  13. Local site preference rationalizes disentangling by DNA topoisomerases

    NASA Astrophysics Data System (ADS)

    Liu, Zhirong; Zechiedrich, Lynn; Chan, Hue Sun

    2010-03-01

    To rationalize the disentangling action of type II topoisomerases, an improved wormlike DNA model was used to delineate the degree of unknotting and decatenating achievable by selective segment passage at specific juxtaposition geometries and to determine how these activities were affected by DNA circle size and solution ionic strength. We found that segment passage at hooked geometries can reduce knot populations as dramatically as seen in experiments. Selective segment passage also provided theoretical underpinning for an intriguing empirical scaling relation between unknotting and decatenating potentials.

  14. Comparison of image segmentation of lungs using methods: connected threshold, neighborhood connected, and threshold level set segmentation

    NASA Astrophysics Data System (ADS)

    Amanda, A. R.; Widita, R.

    2016-03-01

    The aim of this research is to compare several lung image segmentation methods based on performance evaluation parameters (Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR)). The methods compared were connected threshold, neighborhood connected, and threshold level set segmentation applied to lung images. These three methods require one important parameter, i.e., the threshold. The threshold interval was obtained from the histogram of the original image. The software used to segment the images was InsightToolkit-4.7.0 (ITK). Five lung images were analyzed, and the results were compared using the performance evaluation parameters computed in MATLAB. A segmentation method is considered to have good quality if it has the smallest MSE value and the highest PSNR. The results show that for four of the sample images the connected threshold method met these criteria, while for one sample the threshold level set segmentation did. Therefore, it can be concluded that the connected threshold method is better than the other two methods for these cases.
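
    A minimal sketch (not from the record above) of the two evaluation parameters it relies on, assuming 8-bit grayscale images stored as NumPy arrays; the function names are illustrative:

```python
import numpy as np

def mse(reference: np.ndarray, segmented: np.ndarray) -> float:
    """Mean squared error between two images of the same shape."""
    diff = reference.astype(np.float64) - segmented.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(reference: np.ndarray, segmented: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    error = mse(reference, segmented)
    if error == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / error)
```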

  15. Basic test framework for the evaluation of text line segmentation and text parameter extraction.

    PubMed

    Brodić, Darko; Milivojević, Dragan R; Milivojević, Zoran

    2010-01-01

    Text line segmentation is an essential stage in off-line optical character recognition (OCR) systems. It is key because inaccurately segmented text lines will lead to OCR failure. Text line segmentation of handwritten documents is a complex and diverse problem, complicated by the nature of handwriting. Hence, text line segmentation is a leading challenge in handwritten document image processing. Due to inconsistencies in the measurement and evaluation of text segmentation algorithm quality, a basic set of measurement methods is required. Currently, there is no commonly accepted set, and all algorithm evaluation is custom oriented. In this paper, a basic test framework for the evaluation of text feature extraction algorithms is proposed. The framework consists of a few experiments primarily linked to text line segmentation, skew rate and reference text line evaluation. Although they are mutually independent, the results obtained are strongly cross-linked. Its suitability for different types of letters and languages, as well as its adaptability, are its main advantages. Thus, the paper presents an efficient evaluation method for text analysis algorithms.

  16. Basic Test Framework for the Evaluation of Text Line Segmentation and Text Parameter Extraction

    PubMed Central

    Brodić, Darko; Milivojević, Dragan R.; Milivojević, Zoran

    2010-01-01

    Text line segmentation is an essential stage in off-line optical character recognition (OCR) systems. It is key because inaccurately segmented text lines will lead to OCR failure. Text line segmentation of handwritten documents is a complex and diverse problem, complicated by the nature of handwriting. Hence, text line segmentation is a leading challenge in handwritten document image processing. Due to inconsistencies in the measurement and evaluation of text segmentation algorithm quality, a basic set of measurement methods is required. Currently, there is no commonly accepted set, and all algorithm evaluation is custom oriented. In this paper, a basic test framework for the evaluation of text feature extraction algorithms is proposed. The framework consists of a few experiments primarily linked to text line segmentation, skew rate and reference text line evaluation. Although they are mutually independent, the results obtained are strongly cross-linked. Its suitability for different types of letters and languages, as well as its adaptability, are its main advantages. Thus, the paper presents an efficient evaluation method for text analysis algorithms. PMID:22399932

  17. Diagnostic accuracy of ovarian cyst segmentation in B-mode ultrasound images

    NASA Astrophysics Data System (ADS)

    Bibicu, Dorin; Moraru, Luminita; Stratulat (Visan), Mirela

    2013-11-01

    Cystic and polycystic ovary syndrome is an endocrine disorder affecting women of fertile age. The Moore Neighbor Contour, Watershed Method, Active Contour Models, and a recent method based on the Active Contour Model with Selective Binary and Gaussian Filtering Regularized Level Set (ACM&SBGFRLS) were used in this paper to detect the border of the ovarian cyst in echography images. In order to analyze the efficiency of the segmentation, an original computer-aided software application developed in MATLAB was proposed. The results of the segmentation were compared and evaluated against the reference contour manually delineated by a sonography specialist. Both the accuracy and the time complexity of the segmentation tasks were investigated. The Fréchet distance (FD), as a similarity measure between two curves, and the area error rate (AER), as the difference between the segmented areas, are used as estimators of the segmentation accuracy. In this study, the most efficient methods for the segmentation of the ovarian cyst were identified. The research was carried out on a set of 34 ultrasound images of the ovarian cyst.
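
    The record does not spell out the exact formula for the area error rate, so the sketch below assumes it is the absolute difference between the automatically segmented area and the reference area, relative to the reference area; masks are assumed to be binary NumPy arrays and the names are illustrative:

```python
import numpy as np

def area_error_rate(auto_mask: np.ndarray, reference_mask: np.ndarray) -> float:
    """Relative difference between the automatically segmented area and the
    reference (manually delineated) area, expressed as a fraction."""
    auto_area = float(np.count_nonzero(auto_mask))
    ref_area = float(np.count_nonzero(reference_mask))
    return abs(auto_area - ref_area) / ref_area
```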

  18. Optimal reinforcement of training datasets in semi-supervised landmark-based segmentation

    NASA Astrophysics Data System (ADS)

    Ibragimov, Bulat; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2015-03-01

    During the last couple of decades, the development of computerized image segmentation shifted from unsupervised to supervised methods, which made segmentation results more accurate and robust. However, the main disadvantage of supervised segmentation is the need for manual image annotation, which is time-consuming and subject to human error. To reduce the need for manual annotation, we propose a novel learning approach for training dataset reinforcement in the area of landmark-based segmentation, where newly detected landmarks are optimally combined with reference landmarks from the training dataset and therefore enrich the training process. The approach is formulated as a nonlinear optimization problem whose solution is a vector of weighting factors that measures how reliable the detected landmarks are. Detected landmarks that are found to be more reliable are included in the training procedure with higher weighting factors, whereas those found to be less reliable are included with lower weighting factors. The approach is integrated into the landmark-based game-theoretic segmentation framework and validated on the problem of lung field segmentation from chest radiographs.

  19. Association of ST segment depression >5 min after exercise testing with severity of coronary artery disease.

    PubMed

    Shaikh, Ayaz Hussain; Hanif, Bashir; Siddiqui, Adeel M; Shahab, Hunaina; Qazi, Hammad Ali; Mujtaba, Iqbal

    2010-04-01

    To determine the association of prolonged ST segment depression after an exercise test with the severity of coronary artery disease. A cross-sectional study of 100 consecutive patients referred to the cardiology laboratory for stress myocardial perfusion imaging (MPI) was conducted between April and August 2008. All selected patients were monitored until their ST segment depression recovered to baseline. ST segment recovery time was categorized into less than and more than 5 minutes. Subsequent gated SPECT-MPI was performed and stratified according to the severity of the perfusion defect. The association between post-exercise ST segment depression recovery time (<5 minutes and >5 minutes) and the severity of the perfusion defect on MPI was determined. The mean age of the patients was 57.12 +/- 9.0 years. The results showed a statistically insignificant association (p > 0.05) between ST segment recovery times of <5 minutes and >5 minutes and low, intermediate or high risk MPI. Our findings suggest that the cut-off commonly used in the literature for prolonged post-exercise ST segment depression (>5 minutes into the recovery phase) does not correlate with the severity of ischaemia based on MPI results.

  20. Reference pricing system and competition: case study from Portugal.

    PubMed

    Portela, Conceição

    2009-10-01

    To characterize the patterns of competition for a sample of drugs in the Portuguese pharmaceutical market before (January 2002-March 2003) and after (April 2003-June 2003) the introduction of the reference pricing system (RPS). We performed a descriptive, retrospective, longitudinal analysis, with monthly observations from January 2002 until June 2003 of 15 homogeneous groups. The groups represented the upper limit of public pharmaceutical expenditure in the RPS segment in 2003 (n=270). Measures of competition were: 1) number of presentations; 2) prescriptions' concentration in the generic and originator (brand) segments, using Herfindahl-Hirschman Index (HHI); and 3) dominant positions of market leader in the homogeneous group. A correlation analysis between the number of presentations, the HHI, and the dominant position of the market leader was performed using Pearson coefficient of correlation. The structure of the market changed with the introduction of RPS. We found an increasing number of generic presentations (from 4+/-3 to 7+/-4; mean+/-standard deviation) and a decrease in the HHI for the generics market segment (from 0.7+/-0.2 to 0.6+/-0.3). There was a negative correlation between those variables that increased after the introduction of RPS (from -0.6 to -0.8). The HHI for brands and the dominant positions remained unchanged. After the implementation of RPS, the increased competition was mainly driven by economic and social agents in the generics market segment but not in the brands market segment.
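
    A minimal illustration of the Herfindahl-Hirschman Index used above as a concentration measure, computed on a 0-1 share scale (an assumption; HHI is sometimes reported on a 0-10,000 scale):

```python
def herfindahl_hirschman_index(market_shares):
    """HHI on a 0-1 scale: the sum of squared market shares.

    `market_shares` should sum to (approximately) 1 for one homogeneous group;
    values near 1 indicate a concentrated market, values near 1/n a fragmented one.
    """
    return sum(share ** 2 for share in market_shares)

# Example: four generic presentations with shares 40%, 30%, 20%, 10%
print(herfindahl_hirschman_index([0.4, 0.3, 0.2, 0.1]))  # 0.30
```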

  1. Response to selection, heritability and genetic correlations between body weight and body size in Pacific white shrimp, Litopenaeus vannamei

    NASA Astrophysics Data System (ADS)

    Andriantahina, Farafidy; Liu, Xiaolin; Huang, Hao; Xiang, Jianhai

    2012-03-01

    To quantify the response to selection, heritability and genetic correlations between weight and size of Litopenaeus vannamei, the body weight (BW), total length (TL), body length (BL), first abdominal segment depth (FASD), third abdominal segment depth (TASD), first abdominal segment width (FASW), and partial carapace length (PCL) of 5-month-old parents and of offspring were measured, giving seven body measurements for offspring produced by a nested mating design. Seventeen half-sib families and 42 full-sib families of L. vannamei were produced using artificial fertilization from 2-4 dams per sire, and measured at around five months post-metamorphosis. The results show that heritabilities among various traits were high: 0.515±0.030 for body weight and 0.394±0.030 for total length. After one generation of selection, the selection response was 10.70% for offspring growth. In the 5th month, the realized heritability for weight was 0.296 in the offspring generation. Genetic correlations between body weight and body size were highly variable. The results indicate that external morphological parameters can be applied during breeder selection to enhance growth without sacrificing animals to determine body size and breeding ability, and that selective breeding can be improved significantly, simultaneously with increased production.
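
    The realized heritability reported above is conventionally computed from the breeder's equation as the ratio of the selection response to the selection differential; the sketch below illustrates that relation with made-up numbers, not data from the record:

```python
def realized_heritability(base_mean, selected_parent_mean, offspring_mean):
    """Realized heritability from the breeder's equation h2 = R / S,
    where R is the response to selection and S the selection differential."""
    selection_differential = selected_parent_mean - base_mean
    response = offspring_mean - base_mean
    return response / selection_differential

# Illustrative numbers only (not from the record):
print(realized_heritability(base_mean=20.0,
                            selected_parent_mean=25.0,
                            offspring_mean=21.5))  # 0.3
```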

  2. Segmentation of white blood cells and comparison of cell morphology by linear and naïve Bayes classifiers.

    PubMed

    Prinyakupt, Jaroonrut; Pluempitiwiriyawej, Charnchai

    2015-06-30

    Blood smear microscopic images are routinely investigated by haematologists to diagnose most blood diseases. However, the task is quite tedious and time consuming. An automatic detection and classification of white blood cells within such images can accelerate the process tremendously. In this paper we propose a system to locate white blood cells within microscopic blood smear images, segment them into nucleus and cytoplasm regions, extract suitable features and finally, classify them into five types: basophil, eosinophil, neutrophil, lymphocyte and monocyte. Two sets of blood smear images were used in this study's experiments. Dataset 1, collected from Rangsit University, were normal peripheral blood slides under light microscope with 100× magnification; 555 images with 601 white blood cells were captured by a Nikon DS-Fi2 high-definition color camera and saved in JPG format of size 960 × 1,280 pixels at 15 pixels per 1 μm resolution. In dataset 2, 477 cropped white blood cell images were downloaded from CellaVision.com. They are in JPG format of size 360 × 363 pixels. The resolution is estimated to be 10 pixels per 1 μm. The proposed system comprises a pre-processing step, nucleus segmentation, cell segmentation, feature extraction, feature selection and classification. The main concept of the segmentation algorithm employed uses white blood cell's morphological properties and the calibrated size of a real cell relative to image resolution. The segmentation process combined thresholding, morphological operation and ellipse curve fitting. Consequently, several features were extracted from the segmented nucleus and cytoplasm regions. Prominent features were then chosen by a greedy search algorithm called sequential forward selection. Finally, with a set of selected prominent features, both linear and naïve Bayes classifiers were applied for performance comparison. This system was tested on normal peripheral blood smear slide images from two datasets. Two sets of comparison were performed: segmentation and classification. The automatically segmented results were compared to the ones obtained manually by a haematologist. It was found that the proposed method is consistent and coherent in both datasets, with dice similarity of 98.9 and 91.6% for average segmented nucleus and cell regions, respectively. Furthermore, the overall correction rate in the classification phase is about 98 and 94% for linear and naïve Bayes models, respectively. The proposed system, based on normal white blood cell morphology and its characteristics, was applied to two different datasets. The results of the calibrated segmentation process on both datasets are fast, robust, efficient and coherent. Meanwhile, the classification of normal white blood cells into five types shows high sensitivity in both linear and naïve Bayes models, with slightly better results in the linear classifier.
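
    A minimal sketch of the Dice similarity coefficient used above to compare automatic and manual segmentations, assuming the segmentations are available as binary NumPy masks:

```python
import numpy as np

def dice_similarity(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks (1 = perfect overlap)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total
```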

  3. Accurate Detection of Dysmorphic Nuclei Using Dynamic Programming and Supervised Classification.

    PubMed

    Verschuuren, Marlies; De Vylder, Jonas; Catrysse, Hannes; Robijns, Joke; Philips, Wilfried; De Vos, Winnok H

    2017-01-01

    A vast array of pathologies is typified by the presence of nuclei with an abnormal morphology. Dysmorphic nuclear phenotypes feature dramatic size changes or foldings, but also entail much subtler deviations such as nuclear protrusions called blebs. Due to their unpredictable size, shape and intensity, dysmorphic nuclei are often not accurately detected in standard image analysis routines. To enable accurate detection of dysmorphic nuclei in confocal and widefield fluorescence microscopy images, we have developed an automated segmentation algorithm, called Blebbed Nuclei Detector (BleND), which relies on two-pass thresholding for initial nuclear contour detection, and an optimal path finding algorithm, based on dynamic programming, for refining these contours. Using a robust error metric, we show that our method matches manual segmentation in terms of precision and outperforms state-of-the-art nuclear segmentation methods. Its high performance allowed for building and integrating a robust classifier that recognizes dysmorphic nuclei with an accuracy above 95%. The combined segmentation-classification routine is bound to facilitate nucleus-based diagnostics and enable real-time recognition of dysmorphic nuclei in intelligent microscopy workflows.

  4. Accurate Detection of Dysmorphic Nuclei Using Dynamic Programming and Supervised Classification

    PubMed Central

    Verschuuren, Marlies; De Vylder, Jonas; Catrysse, Hannes; Robijns, Joke; Philips, Wilfried

    2017-01-01

    A vast array of pathologies is typified by the presence of nuclei with an abnormal morphology. Dysmorphic nuclear phenotypes feature dramatic size changes or foldings, but also entail much subtler deviations such as nuclear protrusions called blebs. Due to their unpredictable size, shape and intensity, dysmorphic nuclei are often not accurately detected in standard image analysis routines. To enable accurate detection of dysmorphic nuclei in confocal and widefield fluorescence microscopy images, we have developed an automated segmentation algorithm, called Blebbed Nuclei Detector (BleND), which relies on two-pass thresholding for initial nuclear contour detection, and an optimal path finding algorithm, based on dynamic programming, for refining these contours. Using a robust error metric, we show that our method matches manual segmentation in terms of precision and outperforms state-of-the-art nuclear segmentation methods. Its high performance allowed for building and integrating a robust classifier that recognizes dysmorphic nuclei with an accuracy above 95%. The combined segmentation-classification routine is bound to facilitate nucleus-based diagnostics and enable real-time recognition of dysmorphic nuclei in intelligent microscopy workflows. PMID:28125723

  5. Video segmentation using keywords

    NASA Astrophysics Data System (ADS)

    Ton-That, Vinh; Vong, Chi-Tai; Nguyen-Dao, Xuan-Truong; Tran, Minh-Triet

    2018-04-01

    In the DAVIS-2016 Challenge, many state-of-the-art video segmentation methods achieve promising results, but they still depend heavily on annotated frames to distinguish between background and foreground, and creating these annotations accurately takes a lot of time and effort. In this paper, we introduce a method to segment objects from video based on keywords given by the user. First, we use a real-time object detection system, YOLOv2, to identify regions in the first frame containing objects whose labels match the given keywords. Then, for each region identified in the previous step, we use the Pyramid Scene Parsing Network to assign each pixel as foreground or background. These frames can be used as input frames for the Object Flow algorithm to perform segmentation on the entire video. We conduct experiments on a subset of the DAVIS-2016 dataset at half its original size, which shows that our method can handle many popular classes in the PASCAL VOC 2012 dataset with acceptable accuracy, about 75.03%. We suggest broader testing, combining other methods, to improve this result in the future.

  6. Tensor scale-based fuzzy connectedness image segmentation

    NASA Astrophysics Data System (ADS)

    Saha, Punam K.; Udupa, Jayaram K.

    2003-05-01

    Tangible solutions to image segmentation are vital in many medical imaging applications. Toward this goal, a framework based on fuzzy connectedness was developed in our laboratory. A fundamental notion called "affinity" - a local fuzzy hanging togetherness relation on voxels - determines the effectiveness of this segmentation framework in real applications. In this paper, we introduce the notion of "tensor scale" - a recently developed local morphometric parameter - into the affinity definition and study its effectiveness. Although our previous notion of "local scale" using the spherical model successfully incorporated local structure size into affinity and resulted in measurable improvements in segmentation results, a major limitation of the previous approach was that it ignored local structural orientation and anisotropy. The current approach of using tensor scale in affinity computation allows an effective utilization of local size, orientation, and anisotropy in a unified manner. Tensor scale is used for computing both the homogeneity- and object-feature-based components of affinity. Preliminary results of the proposed method on several medical images and computer generated phantoms of realistic shapes are presented. Further extensions of this work are discussed.

  7. Size-dependent trophic patterns of pallid sturgeon and shovelnose sturgeon in a large river system

    USGS Publications Warehouse

    French, William E.; Graeb, Brian D. S.; Bertrand, Katie N.; Chipps, Steven R.; Klumb, Robert A.

    2013-01-01

    This study compared patterns of δ15N and δ13C enrichment of pallid sturgeon Scaphirhynchus albus and shovelnose sturgeon S. platorynchus in the Missouri River, United States, to infer their trophic position in a large river system. We examined enrichment and energy flow for pallid sturgeon in three segments of the Missouri River (Montana/North Dakota, Nebraska/South Dakota, and Nebraska/Iowa) and made comparisons between species in the two downstream segments (Nebraska/South Dakota and Nebraska/Iowa). Patterns in isotopic composition for pallid sturgeon were consistent with gut content analyses indicating an ontogenetic diet shift from invertebrates to fish prey at sizes of >500-mm fork length (FL) in all three segments of the Missouri River. Isotopic patterns revealed shovelnose sturgeon did not experience an ontogenetic shift in diet and used similar prey resources as small (<500-mm FL) pallid sturgeon in the two downstream segments. We found stable isotope analysis to be an effective tool for evaluating the trophic position of sturgeons within a large river food web.

  8. Segmentation of Natural Gas Customers in Industrial Sector Using Self-Organizing Map (SOM) Method

    NASA Astrophysics Data System (ADS)

    Masbar Rus, A. M.; Pramudita, R.; Surjandari, I.

    2018-03-01

    The usage of natural gas, a non-renewable energy source, needs to be more efficient. Therefore, customer segmentation becomes necessary to set up a marketing strategy that is right on target or to determine an appropriate fee. This research was conducted at PT PGN using a data mining method, the Self-Organizing Map (SOM). The clustering process is based on the characteristics of its customers as a reference for creating the customer segmentation of natural gas customers. The input variables of this research are area, type of customer, industrial sector, average usage, standard deviation of the usage, and total deviation. As a result, 37 clusters and 9 segments are formed from 838 customer records. These 9 segments are then employed to illustrate the general characteristics of the natural gas customers of PT PGN.
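
    A minimal SOM clustering sketch in the spirit of the record, using the third-party minisom package (an assumption; the record does not name its implementation) and randomly generated stand-in features:

```python
import numpy as np
from minisom import MiniSom  # third-party package, not named in the record

# Illustrative feature matrix: one row per customer with
# [average usage, standard deviation of usage, total deviation], standardized.
rng = np.random.default_rng(0)
features = rng.normal(size=(838, 3))

som = MiniSom(7, 7, input_len=3, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(features)
som.train_random(features, num_iteration=5000)

# Each customer is assigned to its best-matching unit (a candidate cluster).
clusters = [som.winner(row) for row in features]
print(len(set(clusters)), "occupied SOM nodes")
```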

  9. A Learning-Based Wrapper Method to Correct Systematic Errors in Automatic Image Segmentation: Consistently Improved Performance in Hippocampus, Cortex and Brain Segmentation

    PubMed Central

    Wang, Hongzhi; Das, Sandhitsu R.; Suh, Jung Wook; Altinay, Murat; Pluta, John; Craige, Caryne; Avants, Brian; Yushkevich, Paul A.

    2011-01-01

    We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject, and serves as a wrapper method around a given host segmentation method. The wrapper method attempts to learn the intensity, spatial and contextual patterns associated with systematic segmentation errors produced by the host method on training data for which manual segmentations are available. The method then attempts to correct such errors in segmentations produced by the host method on new images. One practical use of the proposed wrapper method is to adapt existing segmentation tools, without explicit modification, to imaging data and segmentation protocols that are different from those on which the tools were trained and tuned. An open-source implementation of the proposed wrapper method is provided, and can be applied to a wide range of image segmentation problems. The wrapper method is evaluated with four host brain MRI segmentation methods: hippocampus segmentation using FreeSurfer (Fischl et al., 2002); hippocampus segmentation using multi-atlas label fusion (Artaechevarria et al., 2009); brain extraction using BET (Smith, 2002); and brain tissue segmentation using FAST (Zhang et al., 2001). The wrapper method generates 72%, 14%, 29% and 21% fewer erroneously segmented voxels than the respective host segmentation methods. In the hippocampus segmentation experiment with multi-atlas label fusion as the host method, the average Dice overlap between reference segmentations and segmentations produced by the wrapper method is 0.908 for normal controls and 0.893 for patients with mild cognitive impairment. Average Dice overlaps of 0.964, 0.905 and 0.951 are obtained for brain extraction, white matter segmentation and gray matter segmentation, respectively. PMID:21237273

  10. Nerve ultrasound normal values - Readjustment of the ultrasound pattern sum score UPSS.

    PubMed

    Grimm, Alexander; Axer, Hubertus; Heiling, Bianka; Winter, Natalie

    2018-07-01

    Reference values are crucial for nerve ultrasound. Here, we reevaluated normal nerve and fascicle cross-sectional area (CSA) values in humans and compared them to published values. Based on these data, ultrasound pattern sum score (UPSS) boundary values were revisited and readjusted. Ultrasound of different peripheral nerves was performed in 100 healthy subjects at anatomically defined landmarks. Correlations with age, gender, height and weight were calculated. Overall, correspondence to other published reference values was high. Gender dependency was found for the proximal median nerve, and height dependency for the tibial nerve (TN); weight dependency was not found. However, the most obvious differences were found in the TN between men >60 years and women <60 years. Thus, general boundary values were defined as the mean plus two standard deviations for all subjects and nerve segments except for the TN, for which different cut-offs were proposed for older men. Accordingly, the cut-offs for the UPSS were re-adjusted; no individual scored more than 2 points. The influence of distinct epidemiological factors on nerve size is most prominent in the TN, for which several normal values are therefore useful. Adjusted reference values improve the accuracy of the UPSS. Copyright © 2018 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
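
    A small sketch of the boundary-value rule described above (mean plus two standard deviations), with illustrative CSA values rather than the study's data:

```python
import numpy as np

def reference_cutoff(csa_values, k: float = 2.0) -> float:
    """Upper reference boundary as mean + k standard deviations,
    as used in the record for most nerve segments (k = 2)."""
    values = np.asarray(csa_values, dtype=float)
    return float(values.mean() + k * values.std(ddof=1))

# Illustrative CSA values in mm^2 (not real data):
print(reference_cutoff([8.1, 9.3, 7.8, 10.2, 9.0, 8.6]))
```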

  11. Causal Video Object Segmentation From Persistence of Occlusions

    DTIC Science & Technology

    2015-05-01

    Precision, recall, and F-measure are reported on the ground truth annotations converted to binary masks.
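
    A minimal sketch of the pixel-wise precision, recall and F-measure evaluation mentioned in the recoverable part of this record, assuming predictions and ground truth are binary NumPy masks:

```python
import numpy as np

def precision_recall_f1(pred_mask: np.ndarray, gt_mask: np.ndarray):
    """Pixel-wise precision, recall and F-measure for binary masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```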

  12. Automatic Measurement of Fetal Brain Development from Magnetic Resonance Imaging: New Reference Data.

    PubMed

    Link, Daphna; Braginsky, Michael B; Joskowicz, Leo; Ben Sira, Liat; Harel, Shaul; Many, Ariel; Tarrasch, Ricardo; Malinger, Gustavo; Artzi, Moran; Kapoor, Cassandra; Miller, Elka; Ben Bashat, Dafna

    2018-01-01

    Accurate fetal brain volume estimation is of paramount importance in evaluating fetal development. The aim of this study was to develop an automatic method for fetal brain segmentation from magnetic resonance imaging (MRI) data, and to create for the first time a normal volumetric growth chart based on a large cohort. A semi-automatic segmentation method based on Seeded Region Growing algorithm was developed and applied to MRI data of 199 typically developed fetuses between 18 and 37 weeks' gestation. The accuracy of the algorithm was tested against a sub-cohort of ground truth manual segmentations. A quadratic regression analysis was used to create normal growth charts. The sensitivity of the method to identify developmental disorders was demonstrated on 9 fetuses with intrauterine growth restriction (IUGR). The developed method showed high correlation with manual segmentation (r2 = 0.9183, p < 0.001) as well as mean volume and volume overlap differences of 4.77 and 18.13%, respectively. New reference data on 199 normal fetuses were created, and all 9 IUGR fetuses were at or below the third percentile of the normal growth chart. The proposed method is fast, accurate, reproducible, user independent, applicable with retrospective data, and is suggested for use in routine clinical practice. © 2017 S. Karger AG, Basel.
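
    A small sketch of fitting a quadratic growth curve as described above; the volume numbers are invented for illustration, and the residual-quantile way of forming a percentile band is an assumption, not necessarily the paper's procedure:

```python
import numpy as np

# Illustrative data: gestational age (weeks) and fetal brain volume (cm^3).
ga = np.array([20, 22, 24, 26, 28, 30, 32, 34, 36], dtype=float)
volume = np.array([30, 45, 65, 90, 120, 155, 195, 240, 290], dtype=float)

# Quadratic regression, as described in the record.
coeffs = np.polyfit(ga, volume, deg=2)
fitted = np.polyval(coeffs, ga)

# One simple way (an assumption) to form a lower percentile band:
# offset the fitted curve by a quantile of the residuals.
residuals = volume - fitted
third_percentile_curve = fitted + np.percentile(residuals, 3)
print(coeffs, third_percentile_curve[:3])
```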

  13. Reactive power and voltage control strategy based on dynamic and adaptive segment for DG inverter

    NASA Astrophysics Data System (ADS)

    Zhai, Jianwei; Lin, Xiaoming; Zhang, Yongjun

    2018-03-01

    The inverter of distributed generation (DG) can supply reactive power to help solve the problem of out-of-limit voltage in an active distribution network (ADN). Therefore, a reactive voltage control strategy based on dynamic and adaptive segmentation for the DG inverter is put forward in this paper to actively control voltage. The proposed strategy adjusts the segmented voltage thresholds of the Q(U) droop curve dynamically and adaptively according to the voltage at the grid-connected point and the power direction of the adjacent downstream line. The reactive power reference of the DG inverter is then obtained through the modified Q(U) control strategy, and the reactive power of the inverter is controlled to track the reference value. The proposed control strategy not only controls the local voltage at the grid-connected point but also helps to maintain the voltage within the qualified range, considering the terminal voltage of the distribution feeder and the reactive support for the adjacent downstream DG. The scheme using the proposed strategy is compared with a scheme without the reactive support of the DG inverter and a scheme using the Q(U) control strategy with constant segmented voltage thresholds. The simulation results suggest that the proposed method yields a significant improvement in solving the problem of out-of-limit voltage, restraining voltage variation and improving voltage quality.
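
    A minimal sketch of a piecewise-linear Q(U) droop curve of the kind the strategy above modifies; the voltage thresholds and the per-unit convention are illustrative assumptions, whereas the proposed method adjusts the thresholds dynamically:

```python
def q_reference(u_pu: float,
                u_min: float = 0.95, u_low: float = 0.99,
                u_high: float = 1.01, u_max: float = 1.05,
                q_max: float = 1.0) -> float:
    """Piecewise-linear Q(U) droop: full capacitive injection below u_min,
    zero reactive power in the dead band [u_low, u_high], full inductive
    absorption above u_max, linear in between. Threshold values are
    illustrative; the record adjusts them dynamically and adaptively."""
    if u_pu <= u_min:
        return q_max
    if u_pu < u_low:
        return q_max * (u_low - u_pu) / (u_low - u_min)
    if u_pu <= u_high:
        return 0.0
    if u_pu < u_max:
        return -q_max * (u_pu - u_high) / (u_max - u_high)
    return -q_max

print(q_reference(0.97), q_reference(1.0), q_reference(1.03))  # 0.5 0.0 -0.5
```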

  14. Lung segment geometry study: simulation of largest possible tumours that fit into bronchopulmonary segments.

    PubMed

    Welter, S; Stöcker, C; Dicken, V; Kühl, H; Krass, S; Stamatis, G

    2012-03-01

    Segmental resection in stage I non-small cell lung cancer (NSCLC) has been well described and is considered to have similar survival rates as lobectomy but with increased rates of local tumour recurrence due to inadequate parenchymal margins. In consequence, today segmentectomy is only performed when the tumour is smaller than 2 cm. Three-dimensional reconstructions from 11 thin-slice CT scans of bronchopulmonary segments were generated, and virtual spherical tumours were placed over the segments, respecting all segmental borders. As a next step, virtual parenchymal safety margins of 2 cm and 3 cm were subtracted and the size of the remaining tumour calculated. The maximum tumour diameters with a 30-mm parenchymal safety margin ranged from 26.1 mm in right-sided segments 7 + 8 to 59.8 mm in the left apical segments 1-3. Using a three-dimensional reconstruction of lung CT scans, we demonstrated that segmentectomy or resection of segmental groups should be feasible with adequate margins, even for larger tumours in selected cases.

  15. SOV_refine: A further refined definition of segment overlap score and its significance for protein structure similarity.

    PubMed

    Liu, Tong; Wang, Zheng

    2018-01-01

    The segment overlap score (SOV) has been used to evaluate predicted protein secondary structures, a sequence composed of helix (H), strand (E), and coil (C), by comparing it with the native or reference secondary structure, another sequence of H, E, and C. SOV's advantage is that it can consider the size of continuous overlapping segments and assign extra allowance to longer continuous overlapping segments, instead of only judging from the percentage of overlapping individual positions as the Q3 score does. However, we have found a drawback in its previous definition: it does not ensure that the assigned allowance increases when more residues in a segment are predicted accurately. A new way of assigning allowance has been designed, which keeps all the advantages of the previous SOV score definitions and ensures that the amount of allowance assigned is incremental when more elements in a segment are predicted accurately. Furthermore, our improved SOV has achieved a higher correlation with the quality of protein models measured by GDT-TS score and TM-score, indicating its better ability to evaluate tertiary structure quality at the secondary structure level. We analyzed the statistical significance of SOV scores and found the threshold values for distinguishing two protein structures (SOV_refine > 0.19) and for indicating whether two proteins are under the same CATH fold (SOV_refine > 0.94 and > 0.90 for three- and eight-state secondary structures respectively). We provide two further example applications: using the score as a machine learning feature for protein model quality assessment, and comparing different definitions of topologically associating domains. We proved that our newly defined SOV score resulted in better performance. The SOV score can be widely used in bioinformatics research and other fields that need to compare two sequences of letters in which continuous segments have important meanings. We also generalized the previous SOV definitions so that they work for sequences composed of more than three states (e.g., the eight-state definition of protein secondary structures). A standalone software package has been implemented in Perl with source code released. The software can be downloaded from http://dna.cs.miami.edu/SOV/.
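
    For contrast with SOV, a minimal sketch of the position-wise Q3 score mentioned above, which ignores how matches are distributed over continuous segments; the helper name is illustrative:

```python
def q3_score(predicted: str, reference: str) -> float:
    """Q3: fraction of positions where the predicted three-state secondary
    structure (H/E/C) matches the reference."""
    if len(predicted) != len(reference):
        raise ValueError("sequences must have equal length")
    matches = sum(p == r for p, r in zip(predicted, reference))
    return matches / len(reference)

print(q3_score("HHHHCCEEEE", "HHHCCCEEEE"))  # 0.9
```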

  16. New Stopping Criteria for Segmenting DNA Sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Wentian

    2001-06-18

    We propose a solution to the stopping criterion in segmenting inhomogeneous DNA sequences with complex statistical patterns. This new stopping criterion is based on the Bayesian information criterion in the model selection framework. When this criterion is applied to the telomere of S. cerevisiae and the complete sequence of E. coli, borders of biologically meaningful units were identified, and a more reasonable number of domains was obtained. We also introduce a measure called segmentation strength which can be used to control the delineation of large domains. The relationship between the average domain size and the threshold of segmentation strength is determined for several genome sequences.

  17. Segmentation of time series with long-range fractal correlations.

    PubMed

    Bernaola-Galván, P; Oliver, J L; Hackenberg, M; Coronado, A V; Ivanov, P Ch; Carpena, P

    2012-06-01

    Segmentation is a standard method of data analysis to identify change-points dividing a nonstationary time series into homogeneous segments. However, for long-range fractal correlated series, most of the segmentation techniques detect spurious change-points which are simply due to the heterogeneities induced by the correlations and not to real nonstationarities. To avoid this oversegmentation, we present a segmentation algorithm which takes as a reference for homogeneity, instead of a random i.i.d. series, a correlated series modeled by a fractional noise with the same degree of correlations as the series to be segmented. We apply our algorithm to artificial series with long-range correlations and show that it systematically detects only the change-points produced by real nonstationarities and not those created by the correlations of the signal. Further, we apply the method to the sequence of the long arm of human chromosome 21, which is known to have long-range fractal correlations. We obtain only three segments that clearly correspond to the three regions of different G + C composition revealed by means of a multi-scale wavelet plot. Similar results have been obtained when segmenting all human chromosome sequences, showing the existence of previously unknown huge compositional superstructures in the human genome.

  18. [Detection of endpoint for segmentation between consonants and vowels in aphasia rehabilitation software based on artificial intelligence scheduling].

    PubMed

    Deng, Xingjuan; Chen, Ji; Shuai, Jie

    2009-08-01

    To improve the efficiency of aphasia rehabilitation training, an artificial intelligence scheduling function was added to the aphasia rehabilitation software, improving the software's performance. Taking into account the characteristics of aphasia patients' voices as well as the needs of the artificial intelligence scheduling functions, the authors designed an endpoint detection algorithm. It determines the reference endpoints, then extracts every word and, using the reference endpoints, establishes reasonable segmentation points between consonants and vowels. Experimental results show that the algorithm achieves endpoint detection with a high accuracy rate. Therefore, it is applicable to endpoint detection in the speech of aphasia patients.
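
    The record gives no algorithmic details beyond the use of reference endpoints, so the sketch below shows only a generic short-time-energy endpoint detector as a rough illustration of the idea; the frame length and threshold are arbitrary assumptions:

```python
import numpy as np

def energy_endpoints(signal: np.ndarray, frame_len: int = 256,
                     threshold_ratio: float = 0.1):
    """Detect a coarse start/end frame of speech from short-time energy.
    The threshold is a fixed fraction of the peak frame energy (an assumption;
    the record's algorithm is tuned for aphasic speech)."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames.astype(np.float64) ** 2).sum(axis=1)
    active = np.nonzero(energy > threshold_ratio * energy.max())[0]
    if active.size == 0:
        return None
    return int(active[0]), int(active[-1])

# Synthetic example: silence, then louder "speech", then silence again.
rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(0, 0.01, 8000),
                         rng.normal(0, 0.5, 8000),
                         rng.normal(0, 0.01, 8000)])
print(energy_endpoints(signal))
```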

  19. Segmentation of left atrial intracardiac ultrasound images for image guided cardiac ablation therapy

    NASA Astrophysics Data System (ADS)

    Rettmann, M. E.; Stephens, T.; Holmes, D. R.; Linte, C.; Packer, D. L.; Robb, R. A.

    2013-03-01

    Intracardiac echocardiography (ICE), a technique in which structures of the heart are imaged using a catheter navigated inside the cardiac chambers, is an important imaging technique for guidance in cardiac ablation therapy. Automatic segmentation of these images is valuable for guidance and targeting of treatment sites. In this paper, we describe an approach to segment ICE images by generating an empirical model of blood pool and tissue intensities. Normal, Weibull, Gamma, and Generalized Extreme Value (GEV) distributions are fit to histograms of tissue and blood pool pixels from a series of ICE scans. A total of 40 images from 4 separate studies were evaluated. The model was trained and tested using two approaches. In the first approach, the model was trained on all images from 3 studies and subsequently tested on the images from the 4th study. This procedure was repeated 4 times using a leave-one-out strategy. This is termed the between-subjects approach. In the second approach, the model was trained on 10 randomly selected images from a single study and tested on the remaining 30 images in that study. This is termed the within-subjects approach. For both approaches, the model was used to automatically segment ICE images into blood and tissue regions. Each pixel is classified using the Generalized Likelihood Ratio Test across neighborhood sizes ranging from 1 to 49. Automatic segmentation results were compared against manual segmentations for all images. In the between-subjects approach, the GEV distribution using a neighborhood size of 17 was found to be the most accurate with a misclassification rate of approximately 17%. In the within-subjects approach, the GEV distribution using a neighborhood size of 19 was found to be the most accurate with a misclassification rate of approximately 15%. As expected, the majority of misclassified pixels were located near the boundaries between tissue and blood pool regions for both methods.
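
    A minimal sketch of fitting the four candidate distributions named above to intensity samples with scipy.stats and comparing their log-likelihoods; the synthetic samples stand in for labelled blood-pool pixels:

```python
import numpy as np
from scipy import stats

# Illustrative pixel intensities for one class (e.g., blood pool); real models
# would be fit to histograms of labelled ICE pixels.
rng = np.random.default_rng(0)
samples = rng.gamma(shape=2.0, scale=15.0, size=5000)

candidates = {
    "normal": stats.norm,
    "weibull": stats.weibull_min,
    "gamma": stats.gamma,
    "gev": stats.genextreme,
}

for name, dist in candidates.items():
    params = dist.fit(samples)           # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(samples, *params))
    print(f"{name:8s} log-likelihood = {loglik:.1f}")
```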

  20. Electromigration model for the prediction of lifetime based on the failure unit statistics in aluminum metallization

    NASA Astrophysics Data System (ADS)

    Park, Jong Ho; Ahn, Byung Tae

    2003-01-01

    A failure model for electromigration based on the "failure unit model" was presented for the prediction of lifetime in metal lines. The failure unit model, which consists of failure units in parallel and series, can predict both the median time to failure (MTTF) and the deviation in the time to failure (DTTF) in Al metal lines. However, that model can describe them only qualitatively. In our model, the probability functions of the failure unit in both single-grain segments and polygrain segments are considered, instead of polygrain segments alone. Based on our model, we calculated the MTTF, DTTF, and activation energy for different median grain sizes, grain size distributions, linewidths, line lengths, current densities, and temperatures. Comparisons between our results and published experimental data showed good agreement, and our model could explain previously unexplained phenomena. Our advanced failure unit model might be further applied to other electromigration characteristics of metal lines.

  1. GPU-based relative fuzzy connectedness image segmentation.

    PubMed

    Zhuge, Ying; Ciesielski, Krzysztof C; Udupa, Jayaram K; Miller, Robert W

    2013-01-01

    Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.

  2. GPU-based relative fuzzy connectedness image segmentation

    PubMed Central

    Zhuge, Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Miller, Robert W.

    2013-01-01

    Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA’s Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology. PMID:23298094

  3. Image segmentation by hierarchical agglomeration of polygons using ecological statistics

    DOEpatents

    Prasad, Lakshman; Swaminarayan, Sriram

    2013-04-23

    A method for rapid hierarchical image segmentation based on perceptually driven contour completion and scene statistics is disclosed. The method begins with an initial fine-scale segmentation of an image, such as obtained by perceptual completion of partial contours into polygonal regions using region-contour correspondences established by Delaunay triangulation of edge pixels as implemented in VISTA. The resulting polygons are analyzed with respect to their size and color/intensity distributions and the structural properties of their boundaries. Statistical estimates of granularity of size, similarity of color, texture, and saliency of intervening boundaries are computed and formulated into logical (Boolean) predicates. The combined satisfiability of these Boolean predicates by a pair of adjacent polygons at a given segmentation level qualifies them for merging into a larger polygon representing a coarser, larger-scale feature of the pixel image and collectively obtains the next level of polygonal segments in a hierarchy of fine-to-coarse segmentations. The iterative application of this process precipitates textured regions as polygons with highly convolved boundaries and helps distinguish them from objects which typically have more regular boundaries. The method yields a multiscale decomposition of an image into constituent features that enjoy a hierarchical relationship with features at finer and coarser scales. This provides a traversable graph structure from which feature content and context in terms of other features can be derived, aiding in automated image understanding tasks. The method disclosed is highly efficient and can be used to decompose and analyze large images.

  4. Improvements of the Ray-Tracing Based Method Calculating Hypocentral Loci for Earthquake Location

    NASA Astrophysics Data System (ADS)

    Zhao, A. H.

    2014-12-01

    Hypocentral loci are very useful for reliable and visual earthquake location. However, they can hardly be expressed analytically when the velocity model is complex. One method for calculating them numerically is based on a minimum traveltime tree algorithm for tracing rays: a focal locus is represented in terms of ray paths in its residual field from the minimum point (namely the initial point) to low-residual points (referred to as reference points of the focal locus). The method places no restrictions on the complexity of the velocity model but still lacks the ability to deal correctly with multi-segment loci. Additionally, it is rather laborious to set the calculation parameters needed to obtain loci with satisfactory completeness and fineness. In this study, we improve the ray-tracing based numerical method to overcome these shortcomings. (1) Reference points of a hypocentral locus are selected from the nodes of the model cells it passes through, by means of a so-called peeling method. (2) The calculation domain of a hypocentral locus is defined as a low-residual area whose connected regions each include one segment of the locus; all the focal locus segments are then calculated with the minimum traveltime tree ray-tracing algorithm by repeatedly assigning as an initial point the minimum-residual reference point among those that have not yet been traced. (3) Short ray paths without branching are removed to make the calculated locus finer. Numerical tests show that the improved method is capable of efficiently calculating complete and fine hypocentral loci of earthquakes in a complex model.

  5. Laboratory and telescope demonstration of the TP3-WFS for the adaptive optics segment of AOLI

    NASA Astrophysics Data System (ADS)

    Colodro-Conde, C.; Velasco, S.; Fernández-Valdivia, J. J.; López, R.; Oscoz, A.; Rebolo, R.; Femenía, B.; King, D. L.; Labadie, L.; Mackay, C.; Muthusubramanian, B.; Pérez Garrido, A.; Puga, M.; Rodríguez-Coira, G.; Rodríguez-Ramos, L. F.; Rodríguez-Ramos, J. M.; Toledo-Moreo, R.; Villó-Pérez, I.

    2017-05-01

    Adaptive Optics Lucky Imager (AOLI) is a state-of-the-art instrument that combines adaptive optics (AO) and lucky imaging (LI) with the objective of obtaining diffraction-limited images at visible wavelengths on mid- and large-sized ground-based telescopes. The key innovation of AOLI is the development and use of the new Two Pupil Plane Positions Wavefront Sensor (TP3-WFS). The TP3-WFS, working in the visible band, represents an advance over classical wavefront sensors such as the Shack-Hartmann WFS because it can theoretically use fainter natural reference stars, which would ultimately provide better sky coverage to AO instruments using this newer sensor. This paper describes the software, algorithms and procedures that enabled AOLI to become the first astronomical instrument performing real-time AO corrections in a telescope with this new type of WFS, including the first control-related results at the William Herschel Telescope.

  6. An Unexpected Adverse Event during Colonoscopy Screening: Bochdalek Hernia.

    PubMed

    Lee, Joon Seop; Kim, Eun Soo; Jung, Min Kyu; Kim, Sung Kook; Jin, Sun; Lee, Deok Heon; Seo, Jun Won

    2018-05-25

    Bochdalek hernia (BH) is defined as herniated abdominal contents appearing through the posterolateral segment of the diaphragm. It is usually observed during the prenatal or newborn period. Here, we report a case of an adult patient with herniated omentum and colon due to BH that was discovered during a colonoscopy. A 41-year-old woman was referred to our hospital with severe left chest and abdominal pain that began during a colonoscopy. Her chest radiography showed a colonic shadow filling the lower half of the left thoracic cavity. A computed tomography scan revealed an approximately 6 cm left posterolateral diaphragmatic defect and a herniated omentum and colon. The patient underwent thoracoscopic surgery, during which the diaphragmatic defect was closed and the herniated omentum was repaired. The patient was discharged without further complications. To the best of our knowledge, this case is the first report of BH in an adult found during a routine colonoscopy screening.

  7. A segmented, enriched N-type germanium detector for neutrinoless double beta-decay experiments

    NASA Astrophysics Data System (ADS)

    Leviner, L. E.; Aalseth, C. E.; Ahmed, M. W.; Avignone, F. T.; Back, H. O.; Barabash, A. S.; Boswell, M.; De Braeckeleer, L.; Brudanin, V. B.; Chan, Y.-D.; Egorov, V. G.; Elliott, S. R.; Gehman, V. M.; Hossbach, T. W.; Kephart, J. D.; Kidd, M. F.; Konovalov, S. I.; Lesko, K. T.; Li, Jingyi; Mei, D.-M.; Mikhailov, S.; Miley, H.; Radford, D. C.; Reeves, J.; Sandukovsky, V. G.; Umatov, V. I.; Underwood, T. A.; Tornow, W.; Wu, Y. K.; Young, A. R.

    2014-01-01

    We present data characterizing the performance of the first segmented, N-type Ge detector, isotopically enriched to 85% 76Ge. This detector, based on the Ortec PT6×2 design and referred to as SEGA (Segmented, Enriched Germanium Assembly), was developed as a possible prototype for neutrinoless double beta-decay measurements by the MAJORANA collaboration. We present some of the general characteristics (including bias potential, efficiency, leakage current, and integral cross-talk) for this detector in its temporary cryostat. We also present an analysis of the resolution of the detector, and demonstrate that for all but two segments there is at least one channel that reaches the MAJORANA resolution goal below 4 keV FWHM at 2039 keV, and all channels are below 4.5 keV FWHM.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruland, Robert

    The Visible-Infrared SASE Amplifier (VISA) undulator consists of four 99 cm long segments. Each undulator segment is set up on a pulsed-wire bench to characterize the magnetic properties and to locate the magnetic axis of the FODO array. Subsequently, the location of the magnetic axis, as defined by the wire, is referenced to tooling balls on each magnet segment by means of a straightness interferometer. After installation in the vacuum chamber, the four magnet segments are aligned with respect to themselves and globally to the beam line reference laser. A specially designed alignment fixture is used to mount one straightness interferometer each in the horizontal and vertical plane of the beam. The goal of these procedures is to keep the combined rms trajectory error, due to magnetic and alignment errors, to 50 µm.

  9. Quantification of osteolytic bone lesions in a preclinical rat trial

    NASA Astrophysics Data System (ADS)

    Fränzle, Andrea; Bretschi, Maren; Bäuerle, Tobias; Giske, Kristina; Hillengass, Jens; Bendl, Rolf

    2013-10-01

    In breast cancer, most of the patients who die have developed bone metastases as the disease progresses. Bone metastases in breast cancer are mainly bone-destructive (osteolytic). To understand the pathogenesis and to analyse the response to different treatments, animal models, in our case rats, are examined. For the assessment of treatment response to bone remodelling therapies, exact segmentations of osteolytic lesions are needed. Manual segmentations are not only time-consuming but also lack reproducibility, so computerized segmentation tools are essential. In this paper we present an approach for the computerized quantification of osteolytic lesion volumes using a comparison to a healthy reference model. The presented qualitative and quantitative evaluation of the reconstructed bone volumes shows that the automatically segmented lesion volumes complete missing bone in a reasonable way.

  10. An algorithm for automating the registration of USDA segment ground data to LANDSAT MSS data

    NASA Technical Reports Server (NTRS)

    Graham, M. H. (Principal Investigator)

    1981-01-01

    The algorithm is referred to as the Automatic Segment Matching Algorithm (ASMA). The ASMA uses control points or the annotation record of a P-format LANDSAT computer compatible tape as the initial registration to relate latitude and longitude to LANDSAT rows and columns. It searches a given area of LANDSAT data with a 2x2 sliding window and computes gradient values for bands 5 and 7 to match the segment boundaries. The gradient values are held in memory during the shifting (or matching) process. The reconstructed segment array, containing ones (1's) for boundaries and zeros elsewhere, is compared by computer to the LANDSAT array and the best match is computed. Initial testing of the ASMA indicates that it has good potential for replacing the manual technique.
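    The abstract outlines the core of the ASMA: a 2x2 gradient over LANDSAT bands 5 and 7, and a shift search that matches the reconstructed 0/1 segment-boundary array against the gradient image. The sketch below illustrates that idea under stated assumptions (a Roberts-style 2x2 gradient, a gradient image padded by the maximum shift); it is not the published implementation.

```python
import numpy as np

def gradient_2x2(band):
    """Gradient magnitude from a 2x2 sliding window (Roberts-style),
    a rough stand-in for the per-band edge measure the ASMA computes."""
    a = band[:-1, :-1].astype(float); b = band[:-1, 1:].astype(float)
    c = band[1:, :-1].astype(float);  d = band[1:, 1:].astype(float)
    return np.abs(a - d) + np.abs(b - c)

def best_shift(boundary, grad, max_shift=5):
    """Slide the 0/1 segment-boundary array over the gradient image and
    return the (row, col) offset whose boundary pixels lie on the
    strongest gradients. 'grad' is assumed to be at least 2*max_shift
    larger than 'boundary' in each dimension."""
    rows, cols = boundary.shape
    best, best_score = (0, 0), -np.inf
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            window = grad[max_shift + dr:max_shift + dr + rows,
                          max_shift + dc:max_shift + dc + cols]
            score = (window * boundary).sum()
            if score > best_score:
                best, best_score = (dr, dc), score
    return best

# grad = gradient_2x2(band5) + gradient_2x2(band7)  # combine MSS bands 5 and 7
```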

  11. Modeling of current characteristics of segmented Langmuir probe on DEMETER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imtiaz, Nadia; Marchand, Richard; Lebreton, Jean-Pierre

    We model the current characteristics of the DEMETER Segmented Langmuir probe (SLP). The probe is used to measure electron density and temperature in the ionosphere at an altitude of approximately 700 km. It is also used to measure the plasma flow velocity in the satellite frame of reference. The probe is partitioned into seven collectors: six electrically insulated spherical segments and a guard electrode (the rest of the sphere and the small post). Comparisons are made between the predictions of the model and DEMETER measurements for actual ionospheric plasma conditions encountered along the satellite orbit. Segment characteristics are computed numerically with PTetra, a three-dimensional particle-in-cell simulation code. In PTetra, space is discretized with an unstructured tetrahedral mesh, thus enabling a good representation of the probe geometry. The model also accounts for several physical effects of importance in the interaction of spacecraft with the space environment, including satellite charging, photoelectron emission, and secondary electron emission. The model is electrostatic, but it accounts for the presence of a uniform background magnetic field. PTetra simulation results show different characteristics for the different probe segments. The current collected by each segment depends on its orientation with respect to the ram direction, the plasma composition, and the magnitude and orientation of the magnetic field. It is observed that the presence of light H+ ions leads to a significant increase in the ion current branch of the I-V curves of the negatively polarized SLP. The effect of the magnetic field is demonstrated by varying its magnitude and direction with respect to the reference magnetic field. It is found that the magnetic field appreciably affects the electron current branch of the I-V curves of certain segments on the SLP, whereas the ion current branch remains almost unaffected. PTetra simulations are validated by comparing the computed characteristics and their angular anisotropy with the DEMETER measurements; the simulation results are found to be in good agreement with the measurements.

  12. In vivo organ mass of Korean adults obtained from whole-body magnetic resonance data.

    PubMed

    Park, S; Lee, J K; Kim, J I; Lee, Y J; Lim, Y K; Kim, C S; Lee, C

    2006-01-01

    In vivo organ masses of Korean adult males and females are presented for the purpose of radiation protection. A total of 121 healthy volunteers (66 males and 55 females), whose body dimensions were close to those of average Korean adults, were recruited for this study. Whole-body magnetic resonance (MR) images were obtained, and contours of 15 organs (brain, eye, gall bladder, heart, kidney, liver, lung, pancreas, stomach, spleen, testes, thymus, thyroid, urinary bladder and uterus) and 9 bones (femur, tibia + fibula, humerus, radius + ulna, pelvis, cervical spine, thoracic and lumbar spine, skull and clavicle) were segmented for organ volume rendering by anatomists using commercial software. Organ and bone masses were calculated by multiplying the Asian reference densities of the corresponding organs and bones by the measured volumes. The resulting organ and bone masses were compared with those of the International Commission on Radiological Protection (ICRP) and the Asian reference data. Significantly large standard deviations were observed in the moving organs of the respiratory and circulatory systems and in the alimentary and urogenital organs, whose volumes vary within a single person. The gall bladder and pancreas showed uniquely Korean organ masses compared with those of the ICRP and the Asian reference adults. Unlike anatomical data based on autopsy, the in vivo volumes and masses in this study more exactly describe the organ volumes of living human subjects for radiation protection. A larger sample size would be required to obtain statistically more reliable results. The reference organ masses of younger age groups also need to be established, although it is difficult to recruit such volunteers and to immobilise the subjects for long-time MR scanning. At present, the data from this study will contribute to the establishment of a Korean reference database.
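    The mass computation itself is a single multiplication of measured volume by a reference density. A small illustration follows; the density values are placeholders, not the Asian reference values used in the study.

```python
# Hypothetical reference densities in g/cm^3; the study's Asian reference
# values are not reproduced here.
REFERENCE_DENSITY = {"liver": 1.05, "lung": 0.26, "kidney": 1.05}

def organ_mass_g(organ, segmented_volume_cm3):
    """Organ mass obtained by multiplying the MR-segmented volume by the
    reference density of that organ, as described in the study."""
    return REFERENCE_DENSITY[organ] * segmented_volume_cm3

print(organ_mass_g("liver", 1400.0))  # e.g. 1470 g for a 1400 cm^3 liver
```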

  13. Project W-320, 241-C-106 sluicing electrical calculations, Volume 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, J.W.

    1998-08-07

    This supporting document has been prepared to make the FDNW calculations for Project W-320 readily retrievable. These calculations are required: To determine the power requirements needed to power electrical heat tracing segments contained within three manufactured insulated tubing assemblies; To verify thermal adequacy of tubing assembly selection by others; To size the heat tracing feeder and branch circuit conductors and conduits; To size protective circuit breakers and fuses; and To accomplish thermal design for two electrical heat tracing segments: one at C-106 tank riser 7 (CCTV) and one at the exhaust hatchway (condensate drain). Contents include: C-Farm electrical heat tracing; cable ampacity, lighting, conduit fill and voltage drop; and control circuit sizing and voltage drop analysis for the seismic shutdown system.

  14. Anaphora: A Cross Disciplinary Survey. Technical Report No. 31.

    ERIC Educational Resources Information Center

    Nash-Webber, Bonnie Lynn

    Two fundamental assumptions guide this survey of recent research on anaphora. The first is that anaphoric expressions do not refer to segments in a text or discourse, but to entities that are assumed to be in the language receiver's mind. The second assumption is that a text serves to suggest the referents for anaphora, as does the nonlinguistic…

  15. Segmental maxillary distraction with a novel device for closure of a wide alveolar cleft

    PubMed Central

    Bousdras, Vasilios A.; Liyanage, Chandra; Mars, Michael; Ayliffe, Peter R

    2014-01-01

    Treatment of a wide alveolar cleft with initial application of segmental distraction osteogenesis is reported, in order to minimise cleft size prior to secondary alveolar bone grafting. The lesser maxillary segment was mobilised with an osteotomy at the Le Fort I level, and a novel distractor facilitated horizontal movement of the dental/alveolar segment along the curvature of the maxillary dental arch. Following a latency period of 4 days, distraction was applied for 7 days at a rate of 0.5 mm twice daily. Radiographic, ultrasonographic and clinical assessment revealed new bone and soft tissue formation 8 weeks after completion of the distraction phase. Overall, the maxillary segment moved, minimising the width of the cleft and allowing successful closure with a secondary alveolar bone graft. PMID:24987601

  16. Segmental maxillary distraction with a novel device for closure of a wide alveolar cleft.

    PubMed

    Bousdras, Vasilios A; Liyanage, Chandra; Mars, Michael; Ayliffe, Peter R

    2014-01-01

    Treatment of a wide alveolar cleft with initial application of segmental distraction osteogenesis is reported, in order to minimise cleft size prior to secondary alveolar bone grafting. The lesser maxillary segment was mobilised with an osteotomy at the Le Fort I level, and a novel distractor facilitated horizontal movement of the dental/alveolar segment along the curvature of the maxillary dental arch. Following a latency period of 4 days, distraction was applied for 7 days at a rate of 0.5 mm twice daily. Radiographic, ultrasonographic and clinical assessment revealed new bone and soft tissue formation 8 weeks after completion of the distraction phase. Overall, the maxillary segment moved, minimising the width of the cleft and allowing successful closure with a secondary alveolar bone graft.

  17. Genetics Home Reference: spondylothoracic dysostosis

    MedlinePlus

    ... the MESP2 protein is nonfunctional or absent, somite segmentation does not occur properly, which results in ...

  18. Determining the maximum diameter for holes in the shoe without compromising shoe integrity when using a multi-segment foot model.

    PubMed

    Shultz, Rebecca; Jenkyn, Thomas

    2012-01-01

    Measuring individual foot joint motions requires a multi-segment foot model, even when the subject is wearing a shoe. Each foot segment must be tracked with at least three skin-mounted markers, but for these markers to be visible to an optical motion capture system, holes or 'windows' must be cut into the structure of the shoe. The holes must be sufficiently large to avoid interfering with the markers, but small enough that they do not compromise the shoe's structural integrity. The objective of this study was to determine the maximum size of hole that could be cut into a running shoe upper without significantly compromising its structural integrity or changing the kinematics of the foot within the shoe. Three shoe designs were tested: (1) neutral cushioning, (2) motion control and (3) stability shoes. Holes were cut progressively larger, with four sizes tested in all. Foot joint motions were measured: (1) hindfoot with respect to midfoot in the frontal plane, (2) forefoot twist with respect to midfoot in the frontal plane, (3) the height-to-length ratio of the medial longitudinal arch and (4) the hallux angle with respect to the first metatarsal in the sagittal plane. A single subject performed level walking at her preferred pace in each of the three shoes, with ten repetitions for each hole size. The largest hole that did not disrupt shoe integrity was an oval of 1.7 cm × 2.5 cm. The smallest shoe deformations were seen with the motion control shoe. The least change in foot joint motion was forefoot twist in both the neutral shoe and the stability shoe for any hole size. This study demonstrates that, for a hole smaller than this size, optical motion capture with a cluster-based multi-segment foot model is feasible for measuring foot-in-shoe kinematics in vivo. Copyright © 2011. Published by Elsevier Ltd.

  19. Segmenting the Femoral Head and Acetabulum in the Hip Joint Automatically Using a Multi-Step Scheme

    NASA Astrophysics Data System (ADS)

    Wang, Ji; Cheng, Yuanzhi; Fu, Yili; Zhou, Shengjun; Tamura, Shinichi

    We describe a multi-step approach for automatic segmentation of the femoral head and the acetabulum in the hip joint from three dimensional (3D) CT images. Our segmentation method consists of the following steps: 1) construction of the valley-emphasized image by subtracting valleys from the original images; 2) initial segmentation of the bone regions by using conventional techniques including the initial threshold and binary morphological operations from the valley-emphasized image; 3) further segmentation of the bone regions by using the iterative adaptive classification with the initial segmentation result; 4) detection of the rough bone boundaries based on the segmented bone regions; 5) 3D reconstruction of the bone surface using the rough bone boundaries obtained in step 4) by a network of triangles; 6) correction of all vertices of the 3D bone surface based on the normal direction of vertices; 7) adjustment of the bone surface based on the corrected vertices. We evaluated our approach on 35 CT patient data sets. Our experimental results show that our segmentation algorithm is more accurate and robust against noise than other conventional approaches for automatic segmentation of the femoral head and the acetabulum. Average root-mean-square (RMS) distance from manual reference segmentations created by experienced users was approximately 0.68mm (in-plane resolution of the CT data).
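    A hedged sketch of the first two steps (valley-emphasized image and initial bone mask). The paper does not spell out the valley detector reproduced here; this version assumes a grey-scale black top-hat (closing minus original), and the threshold value is left as a study-specific parameter.

```python
import numpy as np
from scipy import ndimage

def valley_emphasized(ct_slice, size=3):
    """Sketch of step 1: detect intensity valleys with a grey-scale
    black top-hat (closing minus original) and subtract them from the
    original image, which deepens the narrow gaps between adjacent bones."""
    img = ct_slice.astype(float)
    closing = ndimage.grey_closing(img, size=(size, size))
    valleys = closing - img                 # bright where narrow dark gaps lie
    return img - valleys

def initial_bone_mask(valley_img, threshold):
    """Step 2: a simple threshold plus binary morphology on the
    valley-emphasized image (the threshold value is study-specific)."""
    mask = valley_img > threshold
    mask = ndimage.binary_opening(mask, iterations=1)
    return ndimage.binary_fill_holes(mask)
```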

  20. Job-Transitions in the Administrative Labor Market in Higher Education: Some Methodological Considerations.

    ERIC Educational Resources Information Center

    Smolansky, Bettie M.

    The question of whether the market for administrators is segmented by institutional types (i.e., region, affiliation, size, mission, and resource level) was investigated. One facet of the research was the applicability of segmentation theory to the occupational labor market for college managers. Principal data were provided by career histories of…

  1. Valley segments, stream reaches, and channel units [Chapter 2

    Treesearch

    Peter A. Bisson; David R. Montgomery; John M. Buffington

    2006-01-01

    Valley segments, stream reaches, and channel units are three hierarchically nested subdivisions of the drainage network (Frissell et al. 1986), falling in size between landscapes and watersheds (see Chapter 1) and individual point measurements made along the stream network (Table 2.1; also see Chapters 3 and 4). These three subdivisions compose the habitat for large,...

  2. Generating Ground Reference Data for a Global Impervious Surface Survey

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; deColstoun, Eric Brown; Wolfe, Robert E.; Tan, Bin; Huang, Chengquan

    2012-01-01

    We are engaged in a project to produce a 30m impervious cover data set of the entire Earth for the years 2000 and 2010 based on the Landsat Global Land Survey (GLS) data set. The GLS data from Landsat provide an unprecedented opportunity to map global urbanization at this resolution for the first time, with unprecedented detail and accuracy. Moreover, the spatial resolution of Landsat is absolutely essential to accurately resolve urban targets such as buildings, roads and parking lots. Finally, with GLS data available for the 1975, 1990, 2000, and 2005 time periods, and soon for the 2010 period, the land cover/use changes due to urbanization can now be quantified at this spatial scale as well. Our approach works across spatial scales using very high spatial resolution commercial satellite data to both produce and evaluate continental scale products at the 30m spatial resolution of Landsat data. We are developing continental scale training data at 1m or so resolution and aggregating these to 30m for training a regression tree algorithm. Because the quality of the input training data are critical, we have developed an interactive software tool, called HSegLearn, to facilitate the photo-interpretation of high resolution imagery data, such as Quickbird or Ikonos data, into an impervious versus non-impervious map. Previous work has shown that photo-interpretation of high resolution data at 1 meter resolution will generate an accurate 30m resolution ground reference when coarsened to that resolution. Since this process can be very time consuming when using standard clustering classification algorithms, we are looking at image segmentation as a potential avenue to not only improve the training process but also provide a semi-automated approach for generating the ground reference data. HSegLearn takes as its input a hierarchical set of image segmentations produced by the HSeg image segmentation program [1, 2]. HSegLearn lets an analyst specify pixel locations as being either positive or negative examples, and displays a classification of the study area based on these examples. For our study, the positive examples are examples of impervious surfaces and negative examples are examples of non-impervious surfaces. HSegLearn searches the hierarchical segmentation from HSeg for the coarsest level of segmentation at which selected positive example locations do not conflict with negative example locations and labels the image accordingly. The negative example regions are always defined at the finest level of segmentation detail. The resulting classification map can be then further edited at a region object level using the previously developed HSegViewer tool [3]. After providing an overview of the HSeg image segmentation program, we provide a detailed description of the HSegLearn software tool. We then give examples of using HSegLearn to generate ground reference data and conclude with comments on the effectiveness of the HSegLearn tool.
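    A rough sketch of the labelling idea attributed to HSegLearn — growing each positive (impervious) example to the coarsest hierarchy level that contains no negative example, while negatives stay at the finest level — written against an assumed representation of the hierarchy as a list of label images ordered fine to coarse; it is not the actual tool.

```python
import numpy as np

def label_from_examples(hierarchy, positives, negatives):
    """'hierarchy' is a list of integer label images, fine -> coarse;
    positives/negatives are lists of (row, col) example locations.
    Returns 1 for impervious, -1 for non-impervious, 0 for unlabelled."""
    out = np.zeros(hierarchy[0].shape, dtype=np.int8)
    neg_set = set(negatives)
    for r, c in positives:
        chosen = None
        for labels in reversed(hierarchy):          # try coarsest levels first
            region = labels == labels[r, c]
            if not any(region[nr, nc] for nr, nc in neg_set):
                chosen = region                     # coarsest conflict-free level
                break
        if chosen is None:                          # fall back to the finest level
            chosen = hierarchy[0] == hierarchy[0][r, c]
        out[chosen] = 1
    for nr, nc in negatives:                        # negatives at finest detail
        out[hierarchy[0] == hierarchy[0][nr, nc]] = -1
    return out
```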

  3. Incorporating scale into digital terrain analysis

    NASA Astrophysics Data System (ADS)

    Dragut, L. D.; Eisank, C.; Strasser, T.

    2009-04-01

    Digital Elevation Models (DEMs) and their derived terrain attributes are commonly used in soil-landscape modeling. Process-based terrain attributes meaningful to the soil properties of interest are sought to be produced through digital terrain analysis. Typically, the standard 3 X 3 window-based algorithms are used for this purpose, thus tying the scale of resulting layers to the spatial resolution of the available DEM. But this is likely to induce mismatches between scale domains of terrain information and soil properties of interest, which further propagate biases in soil-landscape modeling. We have started developing a procedure to incorporate scale into digital terrain analysis for terrain-based environmental modeling (Drăguţ et al., in press). The workflow was exemplified on crop yield data. Terrain information was generalized into successive scale levels with focal statistics on increasing neighborhood size. The degree of association between each terrain derivative and crop yield values was established iteratively for all scale levels through correlation analysis. The first peak of correlation indicated the scale level to be further retained. While in a standard 3 X 3 window-based analysis mean curvature was one of the poorest correlated terrain attribute, after generalization it turned into the best correlated variable. To illustrate the importance of scale, we compared the regression results of unfiltered and filtered mean curvature vs. crop yield. The comparison shows an improvement of R squared from a value of 0.01 when the curvature was not filtered, to 0.16 when the curvature was filtered within 55 X 55 m neighborhood size. This indicates the optimum size of curvature information (scale) that influences soil fertility. We further used these results in an object-based image analysis environment to create terrain objects containing aggregated values of both terrain derivatives and crop yield. Hence, we introduce terrain segmentation as an alternative method for generating scale levels in terrain-based environmental modeling. Based on segments, R squared improved up to a value of 0.47. Before integrating the procedure described above into a software application, thorough comparison between the results of different generalization techniques, on different datasets and terrain conditions is necessary. This is the subject of our ongoing research as part of the SCALA project (Scales and Hierarchies in Landform Classification). References: Drăguţ, L., Schauppenlehner, T., Muhar, A., Strobl, J. and Blaschke, T., in press. Optimization of scale and parametrization for terrain segmentation: an application to soil-landscape modeling, Computers & Geosciences.
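    The scale-selection loop described above (focal means over increasing neighborhood sizes, then picking the first correlation peak against crop yield) can be illustrated as follows; the window sizes and the use of a simple uniform filter are assumptions, not the authors' exact generalization method.

```python
import numpy as np
from scipy import ndimage

def best_scale(terrain_attr, crop_yield, window_sizes):
    """Generalize a terrain derivative with focal means of increasing
    neighborhood size and return the first window size at which its
    correlation with crop yield peaks (both rasters share one grid)."""
    y = crop_yield.ravel()
    corrs = []
    for w in window_sizes:
        smoothed = ndimage.uniform_filter(terrain_attr.astype(float), size=w)
        r = np.corrcoef(smoothed.ravel(), y)[0, 1]
        corrs.append(abs(r))
    for i in range(1, len(corrs) - 1):              # first local maximum of |r|
        if corrs[i] >= corrs[i - 1] and corrs[i] >= corrs[i + 1]:
            return window_sizes[i], corrs[i]
    i = int(np.argmax(corrs))                       # fallback: overall maximum
    return window_sizes[i], corrs[i]

# e.g. best_scale(mean_curvature, yield_raster, [3, 5, 11, 19, 37, 55])
```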

  4. A First Look at the Upcoming SISO Space Reference FOM

    NASA Technical Reports Server (NTRS)

    Mueller, Bjorn; Crues, Edwin Z.; Dexter, Dan; Garro, Alfredo; Skuratovskiy, Anton; Vankov, Alexander

    2016-01-01

    Spaceflight is difficult, dangerous and expensive; human spaceflight even more so. In order to mitigate some of the danger and expense, professionals in the space domain have relied, and continue to rely, on computer simulation. Simulation is used at every level including concept, design, analysis, construction, testing, training and ultimately flight. As space systems have grown more complex, new simulation technologies have been developed, adopted and applied. Distributed simulation is one of those technologies. Distributed simulation provides a base technology for segmenting these complex space systems into smaller, and usually simpler, component systems or subsystems. This segmentation also supports the separation of responsibilities between participating organizations. It is particularly useful for complex space systems like the International Space Station (ISS), which is composed of many elements from many nations along with visiting vehicles from many nations. This is likely to be the case for future human space exploration activities. Over the years, a number of distributed simulations have been built within the space domain. While many use the High Level Architecture (HLA) to provide the infrastructure for interoperability, HLA without a Federation Object Model (FOM) is insufficient by itself to ensure interoperability. As a result, the Simulation Interoperability Standards Organization (SISO) is developing a Space Reference FOM. The Space Reference FOM Product Development Group is composed of members from several countries. They contribute experiences from projects within NASA, ESA and other organizations and represent government, academia and industry. The initial version of the Space Reference FOM focuses on time and space and will provide the following: (i) a flexible positioning system using reference frames for arbitrary bodies in space, (ii) naming conventions for well-known reference frames, (iii) definitions of common time scales, (iv) federation agreements for common types of time management with a focus on time-stepped simulation, and (v) support for physical entities, such as space vehicles and astronauts. The Space Reference FOM is expected to make collaboration politically, contractually and technically easier. It is also expected to make collaboration easier to manage and extend.

  5. On some basic principles of the wave planetology illustrated by real shapes and tectonic patterns of celestial bodies

    NASA Astrophysics Data System (ADS)

    Kochemasov, G. G.

    2011-10-01

    The physical background. Celestial bodies move in orbits and keep them due to the equality of centrifugal and attraction forces. These forces are oppositely directed. There is a third force - the inertia-gravity one - directed at a right angle to those mentioned above and thus not interfering with them (Fig. 1). This force is caused by the movement of all celestial bodies in non-circular keplerian orbits with periodically changing accelerations. A clear illustration of the status of this third force is a stretched rope never achieving a straight line because of the uncompensated rope weight acting at a right angle to the stretching forces. In the case of cosmic bodies this "uncompensated" inertia-gravity force is absorbed in a cosmic body's mass, making this mass warp and undulate. This warping, in the form of standing waves in rotating bodies, is decomposed in four interfering directions (ortho- and diagonal) (Fig. 2), producing uplifted (+, ++), subsided (-, --) and neutral (0) blocks (Fig. 2). An interference of fundamental waves 1, of length 2πR, makes an ever-present tectonic dichotomy in bodies: an opposition of two hemispheres-segments - one uplifted, another subsided (Fig. 2-6). The first overtone of wave 1 - wave 2, of length πR - makes tectonic sectors superimposed on the segments-hemispheres (Fig. 2, 7, 8). Along with the segment-sectoral pattern, tectonic granulation develops in cosmic bodies (Fig. 9, 10). The granule sizes are inversely proportional to orbital frequencies [1-3]. The sectoral tectonic blocks are also clearly visible on Venus and the icy satellites of Saturn, especially in polar views. Earth and the photosphere are remarkable reference points of this fundamental dependence: orbits - tectonic granulation (Fig. 9, 10).

  6. Prenatal development of the normal human vertebral corpora in different segments of the spine.

    PubMed

    Nolting, D; Hansen, B F; Keeling, J; Kjaer, I

    1998-11-01

    Vertebral columns from 13 normal human fetuses (10-24 weeks of gestation) that had aborted spontaneously were investigated as part of the legal autopsy procedure. The investigation included spinal cord analysis. The aim was to analyze the formation of the normal human vertebral corpora along the spine, including the early location and disappearance of the notochord. Reference material on the development of the normal human vertebral corpora is needed for interpretation of published observations on prenatal malformations in the spine, which include observations of various types of malformation (anencephaly, spina bifida) and various genotypes (trisomy 18, 21 and 13, as well as triploidy). The vertebral columns were studied by using radiography (Faxitron X-ray apparatus, Faxitron Model 43,855, Hewlett Packard) in lateral, frontal, and axial views and histology (decalcification, followed by toluidine blue and alcian blue staining) in an axial view. Immunohistochemical marking with Keratin Wide Spectrum also was done. Notochordal tissue (positive on marking with Keratin Wide Spectrum [DAKO, Denmark]) was located anterior to the cartilaginous body center in the youngest fetuses. The process of disintegration of the notochord and the morphology of the osseous vertebral corpora in the lumbosacral, thoracic, and cervical segments are described. Marked differences appeared in axial views, which were verified on horizontal histologic sections. Also, the increase in size differed between segments, being most pronounced in the thoracic and upper lumbar bodies. The lower thoracic bodies were the first to ossify. The morphologic changes observed by radiography were verified histologically. In this study, normal prenatal standards were established for the early development of the vertebral column. These standards can be used in the future for evaluation of pathologic deviations in the human vertebral column in the second trimester.

  7. Digital photogrammetry for quantitative wear analysis of retrieved TKA components.

    PubMed

    Grochowsky, J C; Alaways, L W; Siskey, R; Most, E; Kurtz, S M

    2006-11-01

    The use of new materials in knee arthroplasty demands a way in which to accurately quantify wear in retrieved components. Methods such as damage scoring, coordinate measurement, and in vivo wear analysis have been used in the past. The limitations in these methods illustrate a need for a different methodology that can accurately quantify wear, which is relatively easy to perform and uses a minimal amount of expensive equipment. Off-the-shelf digital photogrammetry represents a potentially quick and easy alternative to what is readily available. Eighty tibial inserts were visually examined for front and backside wear and digitally photographed in the presence of two calibrated reference fields. All images were segmented (via manual and automated algorithms) using Adobe Photoshop and National Institute of Health ImageJ. Finally, wear was determined using ImageJ and Rhinoceros software. The absolute accuracy of the method and repeatability/reproducibility by different observers were measured in order to determine the uncertainty of wear measurements. To determine if variation in wear measurements was due to implant design, 35 implants of the three most prevalent designs were subjected to retrieval analysis. The overall accuracy of area measurements was 97.8%. The error in automated segmentation was found to be significantly lower than that of manual segmentation. The photogrammetry method was found to be reasonably accurate and repeatable in measuring 2-D areas and applicable to determining wear. There was no significant variation in uncertainty detected among different implant designs. Photogrammetry has a broad range of applicability since it is size- and design-independent. A minimal amount of off-the-shelf equipment is needed for the procedure and no proprietary knowledge of the implant is needed. (c) 2006 Wiley Periodicals, Inc.
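    The area measurement reduces to converting segmented pixels to physical units via the calibrated reference field photographed alongside each insert. A minimal sketch, assuming binary masks for the wear region and for a reference field of known area (names are illustrative, not from the paper):

```python
import numpy as np

def worn_area_mm2(segmented_mask, ref_mask, ref_area_mm2):
    """Convert a segmented wear region from pixels to mm^2 using a
    calibrated reference field photographed in the same image plane."""
    mm2_per_pixel = ref_area_mm2 / ref_mask.sum()   # scale from the known reference
    return segmented_mask.sum() * mm2_per_pixel

# e.g. worn_area_mm2(wear_mask, calibration_square_mask, 400.0)  # 20 mm x 20 mm field
```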

  8. Specific Stimuli Induce Specific Adaptations: Sensorimotor Training vs. Reactive Balance Training

    PubMed Central

    Freyler, Kathrin; Krause, Anne; Gollhofer, Albert; Ritzmann, Ramona

    2016-01-01

    Typically, balance training has been used as an intervention paradigm either as static or as reactive balance training. Possible differences in functional outcomes between the two modalities have not been profoundly studied. The objective of the study was to investigate the specificity of neuromuscular adaptations in response to two balance intervention modalities within test and intervention paradigms containing characteristics of both profiles: classical sensorimotor training (SMT) referring to a static ledger pivoting around the ankle joint vs. reactive balance training (RBT) using externally applied perturbations to deteriorate body equilibrium. Thirty-eight subjects were assigned to either SMT or RBT. Before and after four weeks of intervention training, postural sway and electromyographic activities of shank and thigh muscles were recorded and co-contraction indices (CCI) were calculated. We argue that specificity of training interventions could be transferred into corresponding test settings containing properties of SMT and RBT, respectively. The results revealed that i) postural sway was reduced in both intervention groups in all test paradigms; magnitude of changes and effect sizes differed dependent on the paradigm: when training and paradigm coincided most, effects were augmented (P<0.05). ii) These specificities were accompanied by segmental modulations in the amount of CCI, with a greater reduction within the CCI of thigh muscles after RBT compared to the shank muscles after SMT (P<0.05). The results clearly indicate the relationship between test and intervention specificity in balance performance. Hence, specific training modalities of postural control cause multi-segmental and context-specific adaptations, depending upon the characteristics of the trained postural strategy. In relation to fall prevention, perturbation training could serve as an extension to SMT to include the proximal segment, and thus the control of structures near to the body’s centre of mass, into training. PMID:27911944

  9. Assessment of adult pallid sturgeon fish condition, Lower Missouri River—Application of new information to the Missouri River Recovery Program

    USGS Publications Warehouse

    Randall, Michael T.; Colvin, Michael E.; Steffensen, Kirk D.; Welker, Timothy L.; Pierce, Landon L.; Jacobson, Robert B.

    2017-10-11

    During spring 2015, Nebraska Game and Parks Commission (NGPC) biologists noted that pallid sturgeon (Scaphirhynchus albus) were in poor condition during sampling associated with the Pallid Sturgeon Population Assessment Project and NGPC’s annual pallid sturgeon broodstock collection effort. These observations prompted concerns that reduced fish condition could compromise reproductive health and population growth of pallid sturgeon. There was a further concern that compromised condition could possibly be linked to U.S. Army Corps of Engineers management actions and increase jeopardy to the species. An evaluation request was made to the Missouri River Recovery Program and the Effects Analysis Team was chartered to evaluate the issue. Data on all Missouri River pallid sturgeon captures were requested and received from the National Pallid Sturgeon Database. All data were examined for completeness and accuracy; 12,053 records of captures between 200 millimeters fork length (mm FL) and 1,200 mm FL were accepted. We analyzed condition using (1) the condition formula (Kn) from Shuman and others (2011); (2) a second Kn formulation derived from the 12,053 records (hereafter referred to as “Alternative Kn”); and (3) an analysis of covariance (ANCOVA) approach that did not rely on a Kn formulation. The Kn data were analyzed using group (average annual Kn) and individual (percentage in low, normal, and robust conditions) approaches. Using the Shuman Kn formulation, annual mean Kn was fairly static from 2005 to 2011 (although always higher in the upper basin), declined from 2012 to 2015, then remained either static (lower basin) or increasing (upper basin) in 2016. Under the Alternative Kn formulation, the upper basin showed no decline in Kn, whereas the lower basin displayed the same trend as the Shuman Kn formulation. Using both formulations, the individual approach revealed a more complex situation; at the same times and locations that there are fish in poor condition, there are nearby fish in normal or robust condition. The ANCOVA approach revealed that fish condition at size changed between 400 and 600 mm and that some of the apparent trend in low condition was caused by differences in sample size across the size range of the population (that is, greater catch of intermediate-sized fish compared to large fish). We examined basin, year, origin (hatchery compared to wild), segment, and size class for effects on condition and concluded that, since 2012, there has been an increase in the percentage of pallid sturgeon in low condition. There are basin, year, and segment effects; origin and size class do not seem to have an effect. The lower basin, in particular segment 9 (Platte River to Kansas River), had a high percentage of low-condition fish. Within the segment, there were bend-level effects, but the bend effect was not spatially contiguous. We concluded that existing data confirm concerns about declining fish condition, especially in the segments between Sioux City, Iowa, and Kansas City, Missouri. Although the evidence is strong that fish condition has been in decline from 2011 to 2015, additional analysis of individual fish histories may provide more confidence in this conclusion; such analysis was beyond the scope of this effort but is part of our recommendations. The most recent data in 2016 indicate that decline of condition may have leveled off; however, the length of record is insufficient to determine whether recent declines are within the background range of variation. 
We recommend that monitoring of fish condition be increased and enhanced with additional health metrics. We also recommend that, should condition continue to decline, processes be deployed to bring low-condition adult fish into the hatchery to improve nutrition and condition. We could not determine the cause of declining fish condition with available data, but we compiled information on several dominant hypotheses in two main categories: inter- or intraspecific competition for resources and habitat conditions. Data are insufficient to indicate a specific causation or solution, and it is possible that multiple causes apply. We make recommendations for additional research that can be pursued to address uncertainties in trends in fish health as well as potential causes.
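    For readers unfamiliar with the condition metric, Kn is a relative condition factor: observed weight divided by the weight expected from a length-weight regression fit to reference capture records. A minimal sketch follows; the regression coefficients shown are placeholders, not the Shuman and others (2011) or Alternative Kn values.

```python
def relative_condition(weight_g, fork_length_mm, a=2.4e-6, b=3.2):
    """Relative condition factor Kn = W / W_expected, where
    W_expected = a * L**b comes from a length-weight regression.
    The coefficients a and b here are illustrative placeholders."""
    expected_weight = a * fork_length_mm ** b
    return weight_g / expected_weight

# Kn near 1 indicates condition typical of the reference population;
# values well below 1 flag fish in low condition.
```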

  10. Automatic detection of diabetic foot complications with infrared thermography by asymmetric analysis

    NASA Astrophysics Data System (ADS)

    Liu, Chanjuan; van Netten, Jaap J.; van Baal, Jeff G.; Bus, Sicco A.; van der Heijden, Ferdi

    2015-02-01

    Early identification of diabetic foot complications and their precursors is essential in preventing their devastating consequences, such as foot infection and amputation. Frequent, automatic risk assessment by an intelligent telemedicine system might be feasible and cost effective. Infrared thermography is a promising modality for such a system. The temperature differences between corresponding areas on contralateral feet are the clinically significant parameters. This asymmetric analysis is hindered by (1) foot segmentation errors, especially when the foot temperature and the ambient temperature are comparable, and by (2) different shapes and sizes between contralateral feet due to deformities or minor amputations. To circumvent the first problem, we used a color image and a thermal image acquired synchronously. Foot regions, detected in the color image, were rigidly registered to the thermal image. This resulted in 97.8%±1.1% sensitivity and 98.4%±0.5% specificity over 76 high-risk diabetic patients with manual annotation as a reference. Nonrigid landmark-based registration with B-splines solved the second problem. Corresponding points in the two feet could be found regardless of the shapes and sizes of the feet. With that, the temperature difference of the left and right feet could be obtained.
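    The clinically significant quantity is the per-location temperature difference between registered contralateral feet. A minimal sketch of that asymmetric analysis, assuming the right-foot thermogram has already been mirrored and registered to the left foot; the hotspot threshold is an assumption, not a value from the paper.

```python
import numpy as np

def asymmetry_map(left_temp, right_temp_registered, foot_mask, threshold_c=2.2):
    """Per-pixel temperature difference between the left foot and the
    mirrored, registered right foot; pixels whose absolute difference
    exceeds a chosen threshold are flagged as potential hotspots.
    The 2.2 degree C threshold is an assumption for illustration."""
    diff = left_temp.astype(float) - right_temp_registered.astype(float)
    hotspots = foot_mask & (np.abs(diff) > threshold_c)
    return np.where(foot_mask, diff, 0.0), hotspots
```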

  11. Serum chemistry reference ranges for Steller sea lion (Eumetopias jubatus) pups from Alaska: stock differentiation and comparisons within a North Pacific sentinel species.

    PubMed

    Lander, Michelle E; Fadely, Brian S; Gelatt, Thomas S; Rea, Lorrie D; Loughlin, Thomas R

    2013-12-01

    Blood chemistry and hematologic reference ranges are useful for population health assessment and establishing a baseline for future comparisons in the event of ecosystem changes due to natural or anthropogenic factors. The objectives of this study were to determine if there was any population spatial structure for blood variables of Steller sea lion (Eumetopias jubatus), an established sentinel species, and to report reference ranges for appropriate populations using standardized analyses. In addition to comparing reference ranges between populations with contrasting abundance trends, data were examined for evidence of disease or nutritional stress. From 1998 to 2011, blood samples were collected from 1,231 pups captured on 37 rookeries across their Alaskan range. Reference ranges are reported separately for the western and eastern distinct population segments (DPS) of Steller sea lion after cluster analysis and discriminant function analysis (DFA) supported underlying stock structure. Variables with greater loading scores for the DFA (creatinine, total protein, calcium, albumin, cholesterol, and alkaline phosphatase) also were greater for sea lions from the endangered western DPS, supporting previous studies that indicated pup condition in the west was not compromised during the first month postpartum. Differences between population segments were likely a result of ecological, physiological, or age related differences.

  12. Pancreas and cyst segmentation

    NASA Astrophysics Data System (ADS)

    Dmitriev, Konstantin; Gutenko, Ievgeniia; Nadeem, Saad; Kaufman, Arie

    2016-03-01

    Accurate segmentation of abdominal organs from medical images is an essential part of surgical planning and computer-aided disease diagnosis. Many existing algorithms are specialized for the segmentation of healthy organs. Cystic pancreas segmentation is especially challenging due to its low contrast boundaries, variability in shape, location and the stage of the pancreatic cancer. We present a semi-automatic segmentation algorithm for pancreata with cysts. In contrast to existing automatic segmentation approaches for healthy pancreas segmentation which are amenable to atlas/statistical shape approaches, a pancreas with cysts can have even higher variability with respect to the shape of the pancreas due to the size and shape of the cyst(s). Hence, fine results are better attained with semi-automatic steerable approaches. We use a novel combination of random walker and region growing approaches to delineate the boundaries of the pancreas and cysts with respective best Dice coefficients of 85.1% and 86.7%, and respective best volumetric overlap errors of 26.0% and 23.5%. Results show that the proposed algorithm for pancreas and pancreatic cyst segmentation is accurate and stable.
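    Of the two components combined in the paper, region growing is the simpler to illustrate. Below is a generic intensity-based region grower from a user seed, offered as a sketch only; it is not the authors' random-walker/region-growing combination.

```python
from collections import deque
import numpy as np

def region_grow(image, seed, tol):
    """Grow a region from a user-placed seed voxel: a neighbour is added
    when its intensity lies within 'tol' of the mean of the region grown
    so far (6-connectivity, 3-D image)."""
    mask = np.zeros(image.shape, bool)
    mask[seed] = True
    queue = deque([seed])
    total, count = float(image[seed]), 1
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < image.shape[i] for i in range(3)) and not mask[n]:
                if abs(float(image[n]) - total / count) <= tol:
                    mask[n] = True
                    total += float(image[n]); count += 1
                    queue.append(n)
    return mask
```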

  13. A fully convolutional networks (FCN) based image segmentation algorithm in binocular imaging system

    NASA Astrophysics Data System (ADS)

    Long, Zourong; Wei, Biao; Feng, Peng; Yu, Pengwei; Liu, Yuanyuan

    2018-01-01

    This paper proposes an image segmentation algorithm based on fully convolutional networks (FCN) for a binocular imaging system under various circumstances. The segmentation task is treated as semantic segmentation: the FCN classifies every pixel, achieving pixel-level semantic labels. Unlike classical convolutional neural networks (CNN), an FCN uses convolution layers instead of fully connected layers, so it can accept images of arbitrary size. In this paper, we combine the convolutional neural network with scale-invariant feature matching to solve the problem of visual positioning under different scenarios. All high-resolution images are captured with our calibrated binocular imaging system and several groups of test data are collected to verify this method. The experimental results show that the binocular images are effectively segmented without over-segmentation. With these segmented images, feature matching via the SURF method is implemented to obtain regional information for further image processing. The final positioning procedure shows that the results are acceptable in the range of 1.4-1.6 m, with a distance error of less than 10 mm.
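    The key architectural point — convolutions and a 1x1 per-pixel classifier instead of fully connected layers, so inputs of arbitrary size are accepted — can be shown with a toy network (PyTorch is assumed here; the paper's actual architecture is not reproduced).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    """Minimal fully convolutional network: a convolutional feature
    extractor, a 1x1 convolution as the per-pixel classifier, and
    bilinear upsampling back to the input size. No fully connected
    layers, so any input resolution works. Toy illustration only."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        scores = self.classifier(self.features(x))
        return F.interpolate(scores, size=(h, w), mode="bilinear", align_corners=False)

# logits = TinyFCN()(torch.randn(1, 3, 480, 640))  # -> shape (1, 2, 480, 640)
```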

  14. Application of single- and dual-energy CT brain tissue segmentation to PET monitoring of proton therapy.

    PubMed

    Berndt, Bianca; Landry, Guillaume; Schwarz, Florian; Tessonnier, Thomas; Kamp, Florian; Dedes, George; Thieke, Christian; Würl, Matthias; Kurz, Christopher; Ganswindt, Ute; Verhaegen, Frank; Debus, Jürgen; Belka, Claus; Sommer, Wieland; Reiser, Maximilian; Bauer, Julia; Parodi, Katia

    2017-03-21

    The purpose of this work was to evaluate the ability of single and dual energy computed tomography (SECT, DECT) to estimate tissue composition and density for usage in Monte Carlo (MC) simulations of irradiation-induced β+ activity distributions. This was done to assess the impact on positron emission tomography (PET) range verification in proton therapy. A DECT-based brain tissue segmentation method was developed for white matter (WM), grey matter (GM) and cerebrospinal fluid (CSF). The elemental composition of reference tissues was assigned to the closest CT numbers in DECT space (DECT_dist). The method was also applied to SECT data (SECT_dist). In a validation experiment, the proton irradiation induced PET activity of three brain equivalent solutions (BES) was compared to simulations based on different tissue segmentations. Five patients scanned with a dual source DECT scanner were analyzed to compare the different segmentation methods. A single magnetic resonance (MR) scan was used for comparison with an established segmentation toolkit. Additionally, one patient with SECT and post-treatment PET scans was investigated. For BES, DECT_dist and SECT_dist reduced differences to the reference simulation by up to 62% when compared to the conventional stoichiometric segmentation (SECT_Schneider). In comparison to MR brain segmentation, Dice similarity coefficients for WM, GM and CSF were 0.61, 0.67 and 0.66 for DECT_dist and 0.54, 0.41 and 0.66 for SECT_dist. MC simulations of PET treatment verification in patients showed important differences between DECT_dist/SECT_dist and SECT_Schneider for patients with large CSF areas within the treatment field, but not in WM and GM. Differences could be misinterpreted as PET-derived range shifts of up to 4 mm. DECT_dist and SECT_dist yielded comparable activity distributions, and comparison of SECT_dist to a measured patient PET scan showed improved agreement when compared to SECT_Schneider. The agreement between predicted and measured PET activity distributions was improved by employing a brain-specific segmentation applicable to both DECT and SECT data.
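    The DECT_dist assignment reduces to a nearest-neighbour lookup in the two-dimensional space of low- and high-energy CT numbers. A minimal sketch, with placeholder reference CT numbers rather than the study's calibrated values:

```python
import numpy as np

# Hypothetical reference points in DECT space: (HU at low kVp, HU at high kVp)
# for the three brain tissues; the study's actual values are not reproduced.
REFERENCE_HU = {"WM": (34.0, 33.0), "GM": (43.0, 40.0), "CSF": (8.0, 10.0)}

def segment_dect(hu_low, hu_high):
    """Assign each voxel the reference tissue whose (low, high) CT-number
    pair is closest in DECT space (Euclidean distance). With
    hu_low == hu_high this degenerates to a SECT_dist-style lookup."""
    names = list(REFERENCE_HU)
    refs = np.array([REFERENCE_HU[n] for n in names])             # (3, 2)
    voxels = np.stack([hu_low, hu_high], axis=-1)[..., None, :]   # (..., 1, 2)
    dist = np.linalg.norm(voxels - refs, axis=-1)                 # (..., 3)
    return np.asarray(names)[np.argmin(dist, axis=-1)]
```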

  15. Effect of supersaturated oxygen delivery on infarct size after percutaneous coronary intervention in acute myocardial infarction.

    PubMed

    Stone, Gregg W; Martin, Jack L; de Boer, Menko-Jan; Margheri, Massimo; Bramucci, Ezio; Blankenship, James C; Metzger, D Christopher; Gibbons, Raymond J; Lindsay, Barbara S; Weiner, Bonnie H; Lansky, Alexandra J; Krucoff, Mitchell W; Fahy, Martin; Boscardin, W John

    2009-10-01

    Myocardial salvage is often suboptimal after percutaneous coronary intervention in ST-segment elevation myocardial infarction. Posthoc subgroup analysis from a previous trial (AMIHOT I) suggested that intracoronary delivery of supersaturated oxygen (SSO(2)) may reduce infarct size in patients with large ST-segment elevation myocardial infarction treated early. A prospective, multicenter trial was performed in which 301 patients with anterior ST-segment elevation myocardial infarction undergoing percutaneous coronary intervention within 6 hours of symptom onset were randomized to a 90-minute intracoronary SSO(2) infusion in the left anterior descending artery infarct territory (n=222) or control (n=79). The primary efficacy measure was infarct size in the intention-to-treat population (powered for superiority), and the primary safety measure was composite major adverse cardiovascular events at 30 days in the intention-to-treat and per-protocol populations (powered for noninferiority), with Bayesian hierarchical modeling used to allow partial pooling of evidence from AMIHOT I. Among 281 randomized patients with tc-99m-sestamibi single-photon emission computed tomography data in AMIHOT II, median (interquartile range) infarct size was 26.5% (8.5%, 44%) with control compared with 20% (6%, 37%) after SSO(2). The pooled adjusted infarct size was 25% (7%, 42%) with control compared with 18.5% (3.5%, 34.5%) after SSO(2) (P(Wilcoxon)=0.02; Bayesian posterior probability of superiority, 96.9%). The Bayesian pooled 30-day mean (+/-SE) rates of major adverse cardiovascular events were 5.0+/-1.4% for control and 5.9+/-1.4% for SSO(2) by intention-to-treat, and 5.1+/-1.5% for control and 4.7+/-1.5% for SSO(2) by per-protocol analysis (posterior probability of noninferiority, 99.5% and 99.9%, respectively). Among patients with anterior ST-segment elevation myocardial infarction undergoing percutaneous coronary intervention within 6 hours of symptom onset, infusion of SSO(2) into the left anterior descending artery infarct territory results in a significant reduction in infarct size with noninferior rates of major adverse cardiovascular events at 30 days. Clinical Trial Registration- clinicaltrials.gov Identifier: NCT00175058.

  16. Design and Analysis of Mirror Modules for IXO and Beyond

    NASA Technical Reports Server (NTRS)

    McClelland, Ryan S.; Powell, Cory; Saha, Timo T.; Zhang, William W.

    2011-01-01

    Advancements in X-ray astronomy demand thin, light, and closely packed optics, which lend themselves to segmentation of the annular mirrors and, in turn, a modular approach to the mirror design. The functionality requirements of such a mirror module are well understood. A baseline modular concept for the proposed International X-Ray Observatory (IXO) Flight Mirror Assembly (FMA), consisting of 14,000 glass mirror segments divided into 60 modules, was developed and extensively analyzed. Through this development, our understanding of module loads, mirror stress, thermal performance, and gravity distortion has greatly progressed. The latest progress in each of these areas is discussed herein. Gravity distortion during horizontal X-ray testing and on-orbit thermal performance have proved especially difficult design challenges. In light of these challenges, fundamental trades in modular X-ray mirror design have been performed. Future directions in modular X-ray mirror design are explored, including the development of a 1.8 m diameter FMA utilizing smaller mirror modules. The effect of module size on mirror stress, module self-weight distortion, thermal control, and the range of segment sizes required is explored, with smaller modules showing advantages in most cases.

  17. An Approach to a Comprehensive Test Framework for Analysis and Evaluation of Text Line Segmentation Algorithms

    PubMed Central

    Brodic, Darko; Milivojevic, Dragan R.; Milivojevic, Zoran N.

    2011-01-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key action for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms deal with text databases as reference templates. Because of the mismatch, the reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multi-like text samples and real handwritten text as well. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency based on the obtained error type classification are proposed. The first is based on the segmentation line error description, while the second one incorporates well-known signal detection theory. Each of them has different capabilities and convenience, but they can be used as supplements to make the evaluation process efficient. Overall the proposed procedure based on the segmentation line error description has some advantages, characterized by five measures that describe measurement procedures. PMID:22164106

  18. An approach to a comprehensive test framework for analysis and evaluation of text line segmentation algorithms.

    PubMed

    Brodic, Darko; Milivojevic, Dragan R; Milivojevic, Zoran N

    2011-01-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key action for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms deal with text databases as reference templates. Because of the mismatch, the reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multi-like text samples and real handwritten text as well. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency based on the obtained error type classification are proposed. The first is based on the segmentation line error description, while the second one incorporates well-known signal detection theory. Each of them has different capabilities and convenience, but they can be used as supplements to make the evaluation process efficient. Overall the proposed procedure based on the segmentation line error description has some advantages, characterized by five measures that describe measurement procedures.

  19. Growth of in situ functionalized luminescent silver nanoclusters by direct reduction and size focusing.

    PubMed

    Muhammed, Madathumpady Abubaker Habeeb; Aldeek, Fadi; Palui, Goutam; Trapiella-Alfonso, Laura; Mattoussi, Hedi

    2012-10-23

    We have used one phase growth reaction to prepare a series of silver nanoparticles (NPs) and luminescent nanoclusters (NCs) using sodium borohydride (NaBH(4)) reduction of silver nitrate in the presence of molecular scale ligands made of polyethylene glycol (PEG) appended with lipoic acid (LA) groups at one end and reactive (-COOH/-NH(2)) or inert (-OCH(3)) functional groups at the other end. The PEG segment in the ligand promotes solubility in a variety of solvents including water, while LAs provide multidentate coordinating groups that promote Ag-ligand complex formation and strong anchoring onto the NP/NC surface. The particle size and properties were primarily controlled by varying the Ag-to-ligand (Ag:L) molar ratios and the molar amount of NaBH(4) used. We found that while higher Ag:L ratios produced NPs, luminescent NCs were formed at lower ratios. We also found that nonluminescent NPs can be converted into luminescent clusters, via a process referred to as "size focusing", in the presence of added excess ligands and reducing agent. The nanoclusters emit in the far red region of the optical spectrum with a quantum yield of ~12%. They can be redispersed in a number of solvents with varying polarity while maintaining their optical and spectroscopic properties. Our synthetic protocol also allowed control over the number and type of reactive functional groups per nanocluster.

  20. Influence of riparian and watershed alterations on sandbars in a Great Plains river

    USGS Publications Warehouse

    Fischer, Jeffrey M.; Paukert, Craig P.; Daniels, M.L.

    2014-01-01

    Anthropogenic alterations have caused sandbar habitats in rivers and the biota dependent on them to decline. Restoring large river sandbars may be needed as these habitats are important components of river ecosystems and provide essential habitat to terrestrial and aquatic organisms. We quantified factors within the riparian zone of the Kansas River, USA, and within its tributaries that influenced sandbar size and density using aerial photographs and land use/land cover (LULC) data. We developed, a priori, 16 linear regression models focused on LULC at the local, adjacent upstream river bend, and the segment (18–44 km upstream) scales and used an information theoretic approach to determine what alterations best predicted the size and density of sandbars. Variation in sandbar density was best explained by the LULC within contributing tributaries at the segment scale, which indicated reduced sandbar density with increased forest cover within tributary watersheds. Similarly, LULC within contributing tributary watersheds at the segment scale best explained variation in sandbar size. These models indicated that sandbar size increased with agriculture and forest and decreased with urban cover within tributary watersheds. Our findings suggest that sediment supply and delivery from upstream tributary watersheds may be influential on sandbars within the Kansas River and that preserving natural grassland and reducing woody encroachment within tributary watersheds in Great Plains rivers may help improve sediment delivery to help restore natural river function.

  1. K-Means Based Fingerprint Segmentation with Sensor Interoperability

    NASA Astrophysics Data System (ADS)

    Yang, Gongping; Zhou, Guang-Tong; Yin, Yilong; Yang, Xiukun

    2010-12-01

    A critical step in an automatic fingerprint recognition system is the segmentation of fingerprint images. Existing methods are usually designed to segment fingerprint images originated from a certain sensor. Thus their performances are significantly affected when dealing with fingerprints collected by different sensors. This work studies the sensor interoperability of fingerprint segmentation algorithms, which refers to the algorithm's ability to adapt to the raw fingerprints obtained from different sensors. We empirically analyze the sensor interoperability problem, and effectively address the issue by proposing a K-means based segmentation method called SKI. SKI clusters foreground and background blocks of a fingerprint image based on the K-means algorithm, where a fingerprint block is represented by a 3-dimensional feature vector consisting of block-wise coherence, mean, and variance (abbreviated as CMV). SKI also employs morphological postprocessing to achieve favorable segmentation results. We perform SKI on each fingerprint to ensure sensor interoperability. The interoperability and robustness of our method are validated by experiments performed on a number of fingerprint databases which are obtained from various sensors.
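    The clustering step of SKI is ordinary K-means on the three-dimensional CMV feature of each block. A compact sketch follows (without the morphological post-processing the method also applies); details such as initialization are assumptions.

```python
import numpy as np

def kmeans_segment(cmv, k=2, iters=50, seed=0):
    """Cluster fingerprint blocks into foreground/background from their
    coherence-mean-variance (CMV) features with plain K-means;
    'cmv' is an (n_blocks, 3) array."""
    rng = np.random.default_rng(seed)
    centers = cmv[rng.choice(len(cmv), size=k, replace=False)]
    for _ in range(iters):
        # assign each block to its nearest center
        labels = np.argmin(np.linalg.norm(cmv[:, None, :] - centers, axis=2), axis=1)
        # recompute centers, keeping the old center if a cluster is empty
        new_centers = np.array([cmv[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```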

  2. Segmented frequency-domain fluorescence lifetime measurements: minimizing the effects of photobleaching within a multi-component system.

    PubMed

    Marwani, Hadi M; Lowry, Mark; Keating, Patrick; Warner, Isiah M; Cook, Robert L

    2007-11-01

    This study introduces a newly developed frequency segmentation and recombination method for frequency-domain fluorescence lifetime measurements to address the effects of changing fractional contributions over time and minimize the effects of photobleaching within multi-component systems. Frequency segmentation and recombination experiments were evaluated using a two component system consisting of fluorescein and rhodamine B. Comparison of experimental data collected in traditional and segmented fashion with simulated data, generated using different changing fractional contributions, demonstrated the validity of the technique. Frequency segmentation and recombination was also applied to a more complex system consisting of pyrene with Suwannee River fulvic acid reference and was shown to improve recovered lifetimes and fractional intensity contributions. It was observed that photobleaching in both systems led to errors in recovered lifetimes which can complicate the interpretation of lifetime results. Results showed clear evidence that the frequency segmentation and recombination method reduced errors resulting from a changing fractional contribution in a multi-component system, and allowed photobleaching issues to be addressed by commercially available instrumentation.

  3. Reassortment between Influenza B Lineages and the Emergence of a Coadapted PB1–PB2–HA Gene Complex

    PubMed Central

    Dudas, Gytis; Bedford, Trevor; Lycett, Samantha; Rambaut, Andrew

    2015-01-01

    Influenza B viruses make a considerable contribution to morbidity attributed to seasonal influenza. Currently circulating influenza B isolates are known to belong to two antigenically distinct lineages referred to as B/Victoria and B/Yamagata. Frequent exchange of genomic segments of these two lineages has been noted in the past, but the observed patterns of reassortment have not been formalized in detail. We investigate interlineage reassortments by comparing phylogenetic trees across genomic segments. Our analyses indicate that of the eight segments of influenza B viruses only segments coding for polymerase basic 1 and 2 (PB1 and PB2) and hemagglutinin (HA) proteins have maintained separate Victoria and Yamagata lineages and that currently circulating strains possess PB1, PB2, and HA segments derived entirely from one or the other lineage; other segments have repeatedly reassorted between lineages thereby reducing genetic diversity. We argue that this difference between segments is due to selection against reassortant viruses with mixed-lineage PB1, PB2, and HA segments. Given sufficient time and continued recruitment to the reassortment-isolated PB1–PB2–HA gene complex, we expect influenza B viruses to eventually undergo sympatric speciation. PMID:25323575

  4. Segmentation of time series with long-range fractal correlations

    PubMed Central

    Bernaola-Galván, P.; Oliver, J.L.; Hackenberg, M.; Coronado, A.V.; Ivanov, P.Ch.; Carpena, P.

    2012-01-01

    Segmentation is a standard method of data analysis to identify change-points dividing a nonstationary time series into homogeneous segments. However, for long-range fractal correlated series, most of the segmentation techniques detect spurious change-points which are simply due to the heterogeneities induced by the correlations and not to real nonstationarities. To avoid this oversegmentation, we present a segmentation algorithm which takes as a reference for homogeneity, instead of a random i.i.d. series, a correlated series modeled by a fractional noise with the same degree of correlations as the series to be segmented. We apply our algorithm to artificial series with long-range correlations and show that it systematically detects only the change-points produced by real nonstationarities and not those created by the correlations of the signal. Further, we apply the method to the sequence of the long arm of human chromosome 21, which is known to have long-range fractal correlations. We obtain only three segments that clearly correspond to the three regions of different G + C composition revealed by means of a multi-scale wavelet plot. Similar results have been obtained when segmenting all human chromosome sequences, showing the existence of previously unknown huge compositional superstructures in the human genome. PMID:23645997
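
    The key idea is to replace the i.i.d. null series with a surrogate that has the same degree of long-range correlation. The sketch below generates such a surrogate by approximate Fourier filtering; it is an illustrative stand-in, assuming a known Hurst exponent H, and is not the authors' fractional-noise implementation.

      # A minimal sketch of a correlated reference series with spectrum ~ f^-(2H-1).
      import numpy as np

      def correlated_surrogate(n, hurst, seed=None):
          """Approximate fractional Gaussian noise of length n via spectral shaping."""
          rng = np.random.default_rng(seed)
          white = rng.standard_normal(n)
          freqs = np.fft.rfftfreq(n)
          freqs[0] = freqs[1]                      # avoid division by zero at the DC bin
          beta = 2.0 * hurst - 1.0                 # power-law exponent of the FGN spectrum
          spectrum = np.fft.rfft(white) * freqs ** (-beta / 2.0)
          series = np.fft.irfft(spectrum, n)
          return (series - series.mean()) / series.std()

      # A segmentation test would then compare the significance of a candidate
      # change-point against the distribution obtained from many such surrogates,
      # e.g. [correlated_surrogate(len(x), H) for _ in range(1000)].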

  5. An algorithm for calculi segmentation on ureteroscopic images.

    PubMed

    Rosa, Benoît; Mozer, Pierre; Szewczyk, Jérôme

    2011-03-01

    The purpose of the study is to develop an algorithm for the segmentation of renal calculi on ureteroscopic images. Renal calculi are a common source of urological obstruction, and laser lithotripsy during ureteroscopy is a possible therapy. A laser-based system to sweep the calculus surface and vaporize it was developed to automate a very tedious manual task. The distal tip of the ureteroscope is directed using image guidance, and this operation is not possible without an efficient segmentation of renal calculi on the ureteroscopic images. We proposed and developed a region growing algorithm to segment renal calculi on ureteroscopic images. Using real video images to establish a reference segmentation as ground truth, we computed statistics on different image metrics, such as Precision, Recall, and the Yasnoff Measure, to compare our segmentation with the ground truth. The algorithm and its parameters were established for the most likely clinical scenarios. The segmentation results are encouraging: the developed algorithm was able to correctly detect more than 90% of the surface of the calculi, according to an expert observer. Implementation of an algorithm for the segmentation of calculi on ureteroscopic images is feasible. The next step is the integration of our algorithm into the command scheme of a motorized system to build a complete operating prototype.
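
    The evaluation step compares a candidate segmentation with a reference mask using pixel-wise metrics. The sketch below computes Precision and Recall for binary masks (the Yasnoff discrepancy measure is omitted for brevity); it is a generic illustration, not the authors' evaluation code.

      # A minimal sketch of pixel-wise Precision/Recall against a reference mask.
      import numpy as np

      def precision_recall(seg, ref):
          seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
          tp = np.logical_and(seg, ref).sum()
          fp = np.logical_and(seg, ~ref).sum()
          fn = np.logical_and(~seg, ref).sum()
          precision = tp / (tp + fp) if tp + fp else 0.0
          recall = tp / (tp + fn) if tp + fn else 0.0   # a recall above 0.9 would match the reported detection rate
          return float(precision), float(recall)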

  6. Robust nuclei segmentation in cyto-histopathological images using statistical level set approach with topology preserving constraint

    NASA Astrophysics Data System (ADS)

    Taheri, Shaghayegh; Fevens, Thomas; Bui, Tien D.

    2017-02-01

    Computerized assessments for diagnosis or malignancy grading of cyto-histopathological specimens have drawn increased attention in the field of digital pathology. Automatic segmentation of cell nuclei is a fundamental step in such automated systems. Despite considerable research, nuclei segmentation is still a challenging task due to noise, nonuniform illumination, and, most importantly in 2D projection images, overlapping and touching nuclei. In most published approaches, nuclei refinement is a post-processing step after segmentation, which usually refers to the task of detaching the aggregated nuclei or merging the over-segmented nuclei. In this work, we present a novel segmentation technique which effectively addresses the problem of individually segmenting touching or overlapping cell nuclei during the segmentation process. The proposed framework is a region-based segmentation method, which consists of three major modules: i) the image is passed through a color deconvolution step to extract the desired stains; ii) the generalized fast radial symmetry transform is then applied to the image, followed by non-maxima suppression, to specify the initial seed points for nuclei and their corresponding GFRS ellipses, which are interpreted as the initial nuclei borders for segmentation; iii) finally, these initial nuclei border curves are evolved through the use of a statistical level-set approach along with topology preserving criteria for simultaneous segmentation and separation of nuclei. The proposed method is evaluated using Hematoxylin and Eosin stained and fluorescent stained images, performing qualitative and quantitative analysis, showing that the method outperforms thresholding and watershed segmentation approaches.
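
    The sketch below illustrates step (i) and a simplified stand-in for step (ii): separating the hematoxylin stain by color deconvolution and picking nucleus seed points as local maxima of the smoothed stain channel. The generalized fast radial symmetry transform used by the authors is not available in scikit-image, so this peak picking is only an illustrative substitute, and the sigma and min_distance values are assumptions.

      # A rough sketch: stain deconvolution followed by seed-point detection.
      import numpy as np
      from skimage.color import rgb2hed
      from skimage.filters import gaussian
      from skimage.feature import peak_local_max

      def nucleus_seeds(rgb_image, sigma=3.0, min_dist=10):
          hed = rgb2hed(rgb_image)                    # separate H, E, DAB channels
          hematoxylin = gaussian(hed[..., 0], sigma=sigma)
          return peak_local_max(hematoxylin, min_distance=min_dist)  # (row, col) seed coordinates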

  7. MIA-Clustering: a novel method for segmentation of paleontological material.

    PubMed

    Dunmore, Christopher J; Wollny, Gert; Skinner, Matthew M

    2018-01-01

    Paleontological research increasingly uses high-resolution micro-computed tomography (μCT) to study the inner architecture of modern and fossil bone material to answer important questions regarding vertebrate evolution. This non-destructive method allows for the measurement of otherwise inaccessible morphology. Digital measurement is predicated on the accurate segmentation of modern or fossilized bone from other structures imaged in μCT scans, as errors in segmentation can result in inaccurate calculations of structural parameters. Several approaches to image segmentation have been proposed with varying degrees of automation, ranging from completely manual segmentation, to the selection of input parameters required for computational algorithms. Many of these segmentation algorithms provide speed and reproducibility at the cost of flexibility that manual segmentation provides. In particular, the segmentation of modern and fossil bone in the presence of materials such as desiccated soft tissue, soil matrix or precipitated crystalline material can be difficult. Here we present a free open-source segmentation algorithm application capable of segmenting modern and fossil bone, which also reduces subjective user decisions to a minimum. We compare the effectiveness of this algorithm with another leading method by using both to measure the parameters of a known dimension reference object, as well as to segment an example problematic fossil scan. The results demonstrate that the medical image analysis-clustering method produces accurate segmentations and offers more flexibility than those of equivalent precision. Its free availability, flexibility to deal with non-bone inclusions and limited need for user input give it broad applicability in anthropological, anatomical, and paleontological contexts.

  8. Reaching the Non-Traditional Stopout Population: A Segmentation Approach

    ERIC Educational Resources Information Center

    Schatzel, Kim; Callahan, Thomas; Scott, Crystal J.; Davis, Timothy

    2011-01-01

    An estimated 21% of 25-34-year-olds in the United States, about eight million individuals, have attended college and quit before completing a degree. These non-traditional students may or may not return to college. Those who return to college are referred to as stopouts, whereas those who do not return are referred to as stayouts. In the face of…

  9. Accounting for phase drifts in SSVEP-based BCIs by means of biphasic stimulation.

    PubMed

    Wu, Hung-Yi; Lee, Po-Lei; Chang, Hsiang-Chih; Hsieh, Jen-Chuen

    2011-05-01

    This study proposes a novel biphasic stimulation technique to solve the issue of phase drifts in steady-state visual evoked potentials (SSVEPs) in phase-tagged systems. Phase calibration was embedded in stimulus sequences using a biphasic flicker, which is driven by a sequence with alternating reference and phase-shift states. Nine subjects were recruited to participate in off-line and online tests. Signals were bandpass filtered and segmented by trigger signals into reference and phase-shift epochs. Frequency components of the SSVEP in the reference and phase-shift epochs were extracted using the Fourier method with a 50% overlapped sliding window. The real and imaginary parts of the SSVEP frequency components were organized into complex vectors in each epoch. Hotelling's t-square test was used to determine the significance of nonzero mean vectors. The rejection of noisy data segments and the validation of gaze detections were made based on p values. The phase difference between the valid mean vectors of reference and phase-shift epochs was used to identify the user's gazed targets in this system. Data showed an average information transfer rate of 44.55 and 38.21 bits/min in off-line and online tests, respectively. © 2011 IEEE
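
    The sketch below shows the core signal-processing step in simplified form: extracting the complex Fourier component at the flicker frequency with 50%-overlapping windows and taking the phase difference between reference and phase-shift epochs. It is a hedged illustration under assumed sampling parameters (fs, f_stim, win) and omits the Hotelling's t-square validation described in the abstract.

      # A minimal sketch of SSVEP phase extraction with 50%-overlapping windows.
      import numpy as np

      def complex_components(epoch, fs, f_stim, win):
          """Complex Fourier coefficient at f_stim for each 50%-overlapped window."""
          hop = win // 2
          comps = []
          for start in range(0, len(epoch) - win + 1, hop):
              seg = epoch[start:start + win]
              spec = np.fft.rfft(seg * np.hanning(win))
              k = int(round(f_stim * win / fs))        # FFT bin of the stimulation frequency
              comps.append(spec[k])
          return np.asarray(comps)

      def phase_difference(ref_epoch, shift_epoch, fs, f_stim, win=512):
          v_ref = complex_components(ref_epoch, fs, f_stim, win).mean()
          v_shift = complex_components(shift_epoch, fs, f_stim, win).mean()
          return float(np.angle(v_shift / v_ref))      # radians; maps to a target's phase tag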

  10. In Search of Conversational Grain Size: Modelling Semantic Structure Using Moving Stanza Windows

    ERIC Educational Resources Information Center

    Siebert-Evenstone, Amanda L.; Irgens, Golnaz Arastoopour; Collier, Wesley; Swiecki, Zachari; Ruis, Andrew R.; Shaffer, David Williamson

    2017-01-01

    Analyses of learning based on student discourse need to account not only for the content of the utterances but also for the ways in which students make connections across turns of talk. This requires segmentation of discourse data to define when connections are likely to be meaningful. In this paper, we present an approach to segmenting data for…

  11. The Axolotl Fibula as a Model for the Induction of Regeneration across Large Segment Defects in Long Bones of the Extremities

    PubMed Central

    Chen, Xiaoping; Song, Fengyu; Jhamb, Deepali; Li, Jiliang; Bottino, Marco C.; Palakal, Mathew J.; Stocum, David L.

    2015-01-01

    We tested the ability of the axolotl (Ambystoma mexicanum) fibula to regenerate across segment defects of different size in the absence of intervention or after implant of a unique 8-braid pig small intestine submucosa (SIS) scaffold, with or without incorporated growth factor combinations or tissue protein extract. Fractures and defects of 10% and 20% of the total limb length regenerated well without any intervention, but 40% and 50% defects failed to regenerate after either simple removal of bone or implanting SIS scaffold alone. By contrast, scaffold soaked in the growth factor combination BMP-4/HGF or in protein extract of intact limb tissue promoted partial or extensive induction of cartilage and bone across 50% segment defects in 30%-33% of cases. These results show that BMP-4/HGF and intact tissue protein extract can promote the events required to induce cartilage and bone formation across a segment defect larger than critical size and that the long bones of axolotl limbs are an inexpensive model to screen soluble factors and natural and synthetic scaffolds for their efficacy in stimulating this process. PMID:26098852

  12. Comparison of using single- or multi-polarimetric TerraSAR-X images for segmentation and classification of man-made maritime objects

    NASA Astrophysics Data System (ADS)

    Teutsch, Michael; Saur, Günter

    2011-11-01

    Spaceborne SAR imagery offers high capability for wide-ranging maritime surveillance especially in situations, where AIS (Automatic Identification System) data is not available. Therefore, maritime objects have to be detected and optional information such as size, orientation, or object/ship class is desired. In recent research work, we proposed a SAR processing chain consisting of pre-processing, detection, segmentation, and classification for single-polarimetric (HH) TerraSAR-X StripMap images to finally assign detection hypotheses to class "clutter", "non-ship", "unstructured ship", or "ship structure 1" (bulk carrier appearance) respectively "ship structure 2" (oil tanker appearance). In this work, we extend the existing processing chain and are now able to handle full-polarimetric (HH, HV, VH, VV) TerraSAR-X data. With the possibility of better noise suppression using the different polarizations, we slightly improve both the segmentation and the classification process. In several experiments we demonstrate the potential benefit for segmentation and classification. Precision of size and orientation estimation as well as correct classification rates are calculated individually for single- and quad-polarization and compared to each other.

  13. Hand-Based Biometric Analysis

    NASA Technical Reports Server (NTRS)

    Bebis, George (Inventor); Amayeh, Gholamreza (Inventor)

    2015-01-01

    Hand-based biometric analysis systems and techniques are described which provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis utilizes re-use of commonly-seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.
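
    As a hedged illustration of the descriptor step, the sketch below computes rotation-invariant Zernike moment magnitudes for a binary hand-segment mask using the mahotas library and compares a probe against an enrollment template by Euclidean distance. This is not the patented implementation, and the choices of radius, degree, and distance measure are assumptions.

      # A minimal sketch of Zernike-moment descriptors for a segment mask.
      import numpy as np
      import mahotas

      def segment_descriptor(segment_mask, degree=8):
          radius = max(segment_mask.shape) // 2       # enclosing radius for the moment integral
          return mahotas.features.zernike_moments(segment_mask.astype(float), radius, degree=degree)

      def match_score(probe, template):
          return float(np.linalg.norm(probe - template))  # smaller distance = better match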

  14. Segmental dilatation of the ileum covered almost entirely by gastric mucosa: report of a case.

    PubMed

    Kobayashi, Tsutomu; Uchida, Nobuyuki; Shiojima, Masayuki; Sasamoto, Hajime; Shimura, Tatsuo; Takahasi, Atsusi; Kuwano, Hiroyuki

    2007-01-01

    A 13-year-old boy was referred to our hospital for investigation of intermittent abdominal colic pain and vomiting. He underwent an emergency laparotomy, which revealed a volvulus and segmental dilatation of the ileum. The dilated intestine was not associated with poor intestinal circulation. Because the dilated ileum did not seem to be the cause of the volvulus, we simply released the volvulus. However, after surgery, the patient still suffered from persistent abdominal pain, further episodes of volvulus, and invagination of the dilated ileum. Thus, we performed a second operation to resect the segmental dilatation of the ileum. Pathological examination revealed that most of the mucosa of the dilated ileum was composed of ectopic gastric mucosa. We postulate that the ectopic gastric mucosa led to the formation of segmental dilatation of the ileum.

  15. Hand-Based Biometric Analysis

    NASA Technical Reports Server (NTRS)

    Bebis, George

    2013-01-01

    Hand-based biometric analysis systems and techniques provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis re-uses commonly seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.

  16. Body Composition Assessment in Axial CT Images Using FEM-Based Automatic Segmentation of Skeletal Muscle.

    PubMed

    Popuri, Karteek; Cobzas, Dana; Esfandiari, Nina; Baracos, Vickie; Jägersand, Martin

    2016-02-01

    The proportions of muscle and fat tissues in the human body, referred to as body composition is a vital measurement for cancer patients. Body composition has been recently linked to patient survival and the onset/recurrence of several types of cancers in numerous cancer research studies. This paper introduces a fully automatic framework for the segmentation of muscle and fat tissues from CT images to estimate body composition. We developed a novel finite element method (FEM) deformable model that incorporates a priori shape information via a statistical deformation model (SDM) within the template-based segmentation framework. The proposed method was validated on 1000 abdominal and 530 thoracic CT images and we obtained very good segmentation results with Jaccard scores in excess of 90% for both the muscle and fat regions.

  17. Chain and microphase-separated structures of ultrathin polyurethane films

    NASA Astrophysics Data System (ADS)

    Kojio, Ken; Uchiba, Yusuke; Yamamoto, Yasunori; Motokucho, Suguru; Furukawa, Mutsuhisa

    2009-08-01

    Measurements are presented showing how the chain and microphase-separated structures of ultrathin polyurethane (PU) films are controlled by film thickness. The film thickness was varied via the solution concentration used for spin coating. The systems are PUs prepared from commercial raw materials. Fourier-transform infrared spectroscopic measurements revealed that the degree of hydrogen bonding among hard segment chains decreased and increased with decreasing film thickness for the strong and weak microphase separation systems, respectively. The microphase-separated structure, which is formed from hard segment domains and a surrounding soft segment matrix, was observed by atomic force microscopy. The size of the hard segment domains decreased with decreasing film thickness, and both systems showed indications of specific orientation of the hard segment chains. These results are attributed to the decreasing space available for the formation of the microphase-separated structure.

  18. Incorporating partially identified sample segments into acreage estimation procedures: Estimates using only observations from the current year

    NASA Technical Reports Server (NTRS)

    Sielken, R. L., Jr. (Principal Investigator)

    1981-01-01

    Several methods of estimating individual crop acreages using a mixture of completely identified and partially identified (generic) segments from a single growing year are derived and discussed. A small Monte Carlo study of eight estimators is presented. The relative empirical behavior of these estimators is discussed as are the effects of segment sample size and amount of partial identification. The principal recommendations are (1) not to exclude, but rather to incorporate, partially identified sample segments into the estimation procedure; (2) to avoid having a large percentage (say 80%) of only partially identified segments in the sample; and (3) to use the maximum likelihood estimator, although the weighted least squares estimator and least squares ratio estimator both perform almost as well. Sets of spring small grains (North Dakota) data were used.

  19. Standardized evaluation framework for evaluating coronary artery stenosis detection, stenosis quantification and lumen segmentation algorithms in computed tomography angiography.

    PubMed

    Kirişli, H A; Schaap, M; Metz, C T; Dharampal, A S; Meijboom, W B; Papadopoulou, S L; Dedic, A; Nieman, K; de Graaf, M A; Meijs, M F L; Cramer, M J; Broersen, A; Cetin, S; Eslami, A; Flórez-Valencia, L; Lor, K L; Matuszewski, B; Melki, I; Mohr, B; Oksüz, I; Shahzad, R; Wang, C; Kitslaar, P H; Unal, G; Katouzian, A; Örkisz, M; Chen, C M; Precioso, F; Najman, L; Masood, S; Ünay, D; van Vliet, L; Moreno, R; Goldenberg, R; Vuçini, E; Krestin, G P; Niessen, W J; van Walsum, T

    2013-12-01

    Though conventional coronary angiography (CCA) has been the standard of reference for diagnosing coronary artery disease in the past decades, computed tomography angiography (CTA) has rapidly emerged, and is nowadays widely used in clinical practice. Here, we introduce a standardized evaluation framework to reliably evaluate and compare the performance of the algorithms devised to detect and quantify the coronary artery stenoses, and to segment the coronary artery lumen in CTA data. The objective of this evaluation framework is to demonstrate the feasibility of dedicated algorithms to: (1) (semi-)automatically detect and quantify stenosis on CTA, in comparison with quantitative coronary angiography (QCA) and CTA consensus reading, and (2) (semi-)automatically segment the coronary lumen on CTA, in comparison with expert's manual annotation. A database consisting of 48 multicenter multivendor cardiac CTA datasets with corresponding reference standards are described and made available. The algorithms from 11 research groups were quantitatively evaluated and compared. The results show that (1) some of the current stenosis detection/quantification algorithms may be used for triage or as a second-reader in clinical practice, and that (2) automatic lumen segmentation is possible with a precision similar to that obtained by experts. The framework is open for new submissions through the website, at http://coronary.bigr.nl/stenoses/. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Evaluating unsupervised methods to size and classify suspended particles using digital in-line holography

    USGS Publications Warehouse

    Davies, Emlyn J.; Buscombe, Daniel D.; Graham, George W.; Nimmo-Smith, W. Alex M.

    2015-01-01

    Substantial information can be gained from digital in-line holography of marine particles, eliminating depth-of-field and focusing errors associated with standard lens-based imaging methods. However, for the technique to reach its full potential in oceanographic research, fully unsupervised (automated) methods are required for focusing, segmentation, sizing and classification of particles. These computational challenges are the subject of this paper, in which we draw upon data collected using a variety of holographic systems developed at Plymouth University, UK, from a significant range of particle types, sizes and shapes. A new method for noise reduction in reconstructed planes is found to be successful in aiding particle segmentation and sizing. The performance of an automated routine for deriving particle characteristics (and subsequent size distributions) is evaluated against equivalent size metrics obtained by a trained operative measuring grain axes on screen. The unsupervised method is found to be reliable, despite some errors resulting from over-segmentation of particles. A simple unsupervised particle classification system is developed, and is capable of successfully differentiating sand grains, bubbles and diatoms from within the surf-zone. Avoiding miscounting bubbles and biological particles as sand grains enables more accurate estimates of sand concentrations, and is especially important in deployments of particle monitoring instrumentation in aerated water. Perhaps the greatest potential for further development in the computational aspects of particle holography is in the area of unsupervised particle classification. The simple method proposed here provides a foundation upon which further development could lead to reliable identification of more complex particle populations, such as those containing phytoplankton, zooplankton, flocculated cohesive sediments and oil droplets.

  1. Compaction of quasi-one-dimensional elastoplastic materials.

    PubMed

    Shaebani, M Reza; Najafi, Javad; Farnudi, Ali; Bonn, Daniel; Habibi, Mehdi

    2017-06-06

    Insight into crumpling or compaction of one-dimensional objects is important for understanding biopolymer packaging and designing innovative technological devices. By compacting various types of wires in rigid confinements and characterizing the morphology of the resulting crumpled structures, here, we report how friction, plasticity and torsion enhance disorder, leading to a transition from coiled to folded morphologies. In the latter case, where folding dominates the crumpling process, we find that reducing the relative wire thickness counter-intuitively causes the maximum packing density to decrease. The segment size distribution gradually becomes more asymmetric during compaction, reflecting an increase of spatial correlations. We introduce a self-avoiding random walk model and verify that the cumulative injected wire length follows a universal dependence on segment size, allowing for the prediction of the efficiency of compaction as a function of material properties, container size and injection force.

  2. Alumina+Silica+/-Germanium Alteration in Smectite-Bearing Marathon Valley, Endeavour Crater Rim, Mars

    NASA Technical Reports Server (NTRS)

    Mittlefehldt, D. W.; Gellert, R.; Van Bommel, S.; Arvidson, R. E.; Clark, B. C.; Ming, D. W.; Schroeder, C.; Yen, A. S.; Fox, V. K.; Farrand, W. H.; hide

    2016-01-01

    Mars Exploration Rover Opportunity has been exploring Mars for 12+ years, and is presently investigating the geology of a western rim segment of the 22-kilometer-diameter, Noachian-aged Endeavour crater. The Alpha Particle X-ray Spectrometer has determined the compositions of a pre-impact lithology, the Matijevic fm., and polymict impact breccias ejected from the crater, the Shoemaker fm. Opportunity is now investigating a region named Marathon Valley that cuts southwest-northeast through the central portion of the rim segment and provides a window into the lower stratigraphic record. (Geographic names used here are informal.) At the head of Marathon Valley, referred to here as Upper Marathon Valley, is a shallow, ovoid depression approximately 25×35 meters in size, named Spirit of Saint Louis. Layering inside Spirit of Saint Louis appears continuous with the Upper Marathon Valley rocks outside, indicating they are coeval. Spirit of Saint Louis is partly bounded by an approximately 10-20 centimeter wide zone containing reddish altered rocks (red zone). Red zones also form prominent curvilinear features in Marathon Valley. Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) spectra provide evidence for an areally extensive Fe-Mg smectite in the Marathon Valley region, indicating distinct styles of aqueous alteration. The CRISM detections of smectites are based on metal-OH absorptions at approximately 2.3 and 2.4 microns that are at least two times the background noise level.

  3. Differentiating invasive and pre-invasive lung cancer by quantitative analysis of histopathologic images

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Sun, Hongliu; Chan, Heang-Ping; Chughtai, Aamer; Wei, Jun; Hadjiiski, Lubomir; Kazerooni, Ella

    2018-02-01

    We are developing an automated radiopathomics method for the diagnosis of lung nodule subtypes. In this study, we investigated the feasibility of using quantitative methods to analyze the tumor nuclei and cytoplasm in pathologic whole-slide images for the classification of pathologic subtypes of invasive nodules and pre-invasive nodules. We developed a multiscale blob detection method with watershed transform (MBD-WT) to segment the tumor cells. Pathomic features were extracted to characterize the size, morphology, sharpness, and gray level variation in each segmented nucleus and the heterogeneity patterns of tumor nuclei and cytoplasm. With permission of the National Lung Screening Trial (NLST) project, a data set containing 90 digital haematoxylin and eosin (HE) whole-slide images from 48 cases was used in this study. The 48 cases contain 77 regions of invasive subtypes and 43 regions of pre-invasive subtypes outlined by a pathologist on the HE images using the pathological tumor region description provided by NLST as reference. A logistic regression model (LRM) was built using leave-one-case-out resampling and receiver operating characteristic (ROC) analysis for classification of invasive and pre-invasive subtypes. With 11 selected features, the LRM achieved a test area under the ROC curve (AUC) value of 0.91+/-0.03. The results demonstrated that the pathologic invasiveness of lung adenocarcinomas could be categorized with high accuracy using pathomics analysis.
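
    The evaluation design (a logistic regression classifier assessed with leave-one-case-out resampling and ROC analysis) can be illustrated with scikit-learn as below. This is a generic sketch, not the authors' pipeline; X, y, and case_ids are assumed inputs holding the pathomic feature matrix, region labels, and case identifiers.

      # A minimal sketch of leave-one-case-out evaluation of a logistic regression model.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import LeaveOneGroupOut
      from sklearn.metrics import roc_auc_score

      def loco_auc(X, y, case_ids):
          """Pool out-of-case scores across folds and report a single test AUC."""
          scores = np.zeros(len(y), dtype=float)
          for train, test in LeaveOneGroupOut().split(X, y, groups=case_ids):
              clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
              scores[test] = clf.predict_proba(X[test])[:, 1]
          return roc_auc_score(y, scores)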

  4. Automatic mouse ultrasound detector (A-MUD): A new tool for processing rodent vocalizations

    PubMed Central

    Reitschmidt, Doris; Noll, Anton; Balazs, Peter; Penn, Dustin J.

    2017-01-01

    House mice (Mus musculus) emit complex ultrasonic vocalizations (USVs) during social and sexual interactions, which have features similar to bird song (i.e., they are composed of several different types of syllables, uttered in succession over time to form a pattern of sequences). Manually processing complex vocalization data is time-consuming and potentially subjective, and therefore, we developed an algorithm that automatically detects mouse ultrasonic vocalizations (Automatic Mouse Ultrasound Detector or A-MUD). A-MUD is a script that runs on STx acoustic software (S_TOOLS-STx version 4.2.2), which is free for scientific use. This algorithm improved the efficiency of processing USV files, as it was 4–12 times faster than manual segmentation, depending upon the size of the file. We evaluated A-MUD error rates using manually segmented sound files as a ‘gold standard’ reference, and compared them to a commercially available program. A-MUD had lower error rates than the commercial software, as it detected significantly more correct positives, and fewer false positives and false negatives. The errors generated by A-MUD were mainly false negatives, rather than false positives. This study is the first to systematically compare error rates for automatic ultrasonic vocalization detection methods, and A-MUD and subsequent versions will be made available for the scientific community. PMID:28727808

  5. Effective user guidance in online interactive semantic segmentation

    NASA Astrophysics Data System (ADS)

    Petersen, Jens; Bendszus, Martin; Debus, Jürgen; Heiland, Sabine; Maier-Hein, Klaus H.

    2017-03-01

    With the recent success of machine learning based solutions for automatic image parsing, the availability of reference image annotations for algorithm training is one of the major bottlenecks in medical image segmentation. We are interested in interactive semantic segmentation methods that can be used in an online fashion to generate expert segmentations. These can be used to train automated segmentation techniques or, from an application perspective, for quick and accurate tumor progression monitoring. Using simulated user interactions in a MRI glioblastoma segmentation task, we show that if the user possesses knowledge of the correct segmentation it is significantly (p <= 0.009) better to present data and current segmentation to the user in such a manner that they can easily identify falsely classified regions compared to guiding the user to regions where the classifier exhibits high uncertainty, resulting in differences of mean Dice scores between +0.070 (Whole tumor) and +0.136 (Tumor Core) after 20 iterations. The annotation process should cover all classes equally, which results in a significant (p <= 0.002) improvement compared to completely random annotations anywhere in falsely classified regions for small tumor regions such as the necrotic tumor core (mean Dice +0.151 after 20 it.) and non-enhancing abnormalities (mean Dice +0.069 after 20 it.). These findings provide important insights for the development of efficient interactive segmentation systems and user interfaces.

  6. Functional significance of the taper of vertebrate cone photoreceptors

    PubMed Central

    Hárosi, Ferenc I.

    2012-01-01

    Vertebrate photoreceptors are commonly distinguished based on the shape of their outer segments: those of cones taper, whereas the ones from rods do not. The functional advantages of cone taper, a common occurrence in vertebrate retinas, remain elusive. In this study, we investigate this topic using theoretical analyses aimed at revealing structure–function relationships in photoreceptors. Geometrical optics combined with spectrophotometric and morphological data are used to support the analyses and to test predictions. Three functions are considered for correlations between taper and functionality. The first function proposes that outer segment taper serves to compensate for self-screening of the visual pigment contained within. The second function links outer segment taper to compensation for a signal-to-noise ratio decline along the longitudinal dimension. Both functions are supported by the data: real cones taper more than required for these compensatory roles. The third function relates outer segment taper to the optical properties of the inner compartment whereby the primary determinant is the inner segment’s ability to concentrate light via its ellipsoid. In support of this idea, the rod/cone ratios of primarily diurnal animals are predicted based on a principle of equal light flux gathering between photoreceptors. In addition, ellipsoid concentration factor, a measure of ellipsoid ability to concentrate light onto the outer segment, correlates positively with outer segment taper expressed as a ratio of characteristic lengths, where critical taper is the yardstick. Depending on a light-funneling property and the presence of focusing organelles such as oil droplets, cone outer segments can be reduced in size to various degrees. We conclude that outer segment taper is but one component of a miniaturization process that reduces metabolic costs while improving signal detection. Compromise solutions in the various retinas and retinal regions occur between ellipsoid size and acuity, on the one hand, and faster response time and reduced light sensitivity, on the other. PMID:22250013

  7. Two-stage atlas subset selection in multi-atlas based image segmentation.

    PubMed

    Zhao, Tingting; Ruan, Dan

    2015-06-01

    Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. The authors have developed a novel two-stage atlas subset selection scheme for multi-atlas based segmentation. It achieves good segmentation accuracy with significantly reduced computation cost, making it a suitable configuration in the presence of extensive heterogeneous atlases.
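
    The two-stage idea can be summarized schematically as below: a cheap screening pass over the whole atlas collection, followed by an expensive re-ranking of the surviving augmented subset. This is a structural sketch only; coarse_similarity and fine_similarity are placeholders for the registration-based relevance metrics, which are not specified here.

      # A schematic sketch of two-stage atlas subset selection.
      def two_stage_selection(target, atlases, coarse_similarity, fine_similarity,
                              augmented_size, fusion_size):
          # Stage 1: cheap screening of the whole (possibly huge) atlas collection.
          coarse_ranked = sorted(atlases, key=lambda a: coarse_similarity(target, a), reverse=True)
          augmented = coarse_ranked[:augmented_size]   # sized so desired atlases survive with high probability
          # Stage 2: expensive full-fledged registration restricted to the augmented subset.
          fine_ranked = sorted(augmented, key=lambda a: fine_similarity(target, a), reverse=True)
          return fine_ranked[:fusion_size]             # atlases passed on to label fusion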

  8. Corpus callosum segmentation using deep neural networks with prior information from multi-atlas images

    NASA Astrophysics Data System (ADS)

    Park, Gilsoon; Hong, Jinwoo; Lee, Jong-Min

    2018-03-01

    In the human brain, the corpus callosum (CC) is the largest white matter structure, connecting the right and left hemispheres. Structural features such as the shape and size of the CC in the midsagittal plane are of great significance for analyzing various neurological diseases, for example Alzheimer's disease, autism and epilepsy. For quantitative and qualitative studies of the CC in brain MR images, robust segmentation of the CC is important. In this paper, we present a novel method for CC segmentation. Our approach is based on deep neural networks and prior information generated from multi-atlas images. Deep neural networks have recently shown good performance in various image processing fields, and convolutional neural networks (CNNs) in particular have shown outstanding performance for classification and segmentation in medical imaging. We used convolutional neural networks for CC segmentation. Multi-atlas based segmentation models have been widely used in medical image segmentation because an atlas, consisting of MR images and corresponding manual segmentations, carries strong information about the target structure to be segmented. We combined prior information derived from the multi-atlas images, such as the location and intensity distribution of the target structure (i.e., the CC), into the CNN training process to further improve training. The CNN with prior information showed better segmentation performance than the CNN without it.

  9. Origin of amphibian and avian chromosomes by fission, fusion, and retention of ancestral chromosomes

    PubMed Central

    Voss, Stephen R.; Kump, D. Kevin; Putta, Srikrishna; Pauly, Nathan; Reynolds, Anna; Henry, Rema J.; Basa, Saritha; Walker, John A.; Smith, Jeramiah J.

    2011-01-01

    Amphibian genomes differ greatly in DNA content and chromosome size, morphology, and number. Investigations of this diversity are needed to identify mechanisms that have shaped the evolution of vertebrate genomes. We used comparative mapping to investigate the organization of genes in the Mexican axolotl (Ambystoma mexicanum), a species that presents relatively few chromosomes (n = 14) and a gigantic genome (>20 pg/N). We show extensive conservation of synteny between Ambystoma, chicken, and human, and a positive correlation between the length of conserved segments and genome size. Ambystoma segments are estimated to be four to 51 times longer than homologous human and chicken segments. Strikingly, genes demarking the structures of 28 chicken chromosomes are ordered among linkage groups defining the Ambystoma genome, and we show that these same chromosomal segments are also conserved in a distantly related anuran amphibian (Xenopus tropicalis). Using linkage relationships from the amphibian maps, we predict that three chicken chromosomes originated by fusion, nine to 14 originated by fission, and 12–17 evolved directly from ancestral tetrapod chromosomes. We further show that some ancestral segments were fused prior to the divergence of salamanders and anurans, while others fused independently and randomly as chromosome numbers were reduced in lineages leading to Ambystoma and Xenopus. The maintenance of gene order relationships between chromosomal segments that have greatly expanded and contracted in salamander and chicken genomes, respectively, suggests selection to maintain synteny relationships and/or extremely low rates of chromosomal rearrangement. Overall, the results demonstrate the value of data from diverse, amphibian genomes in studies of vertebrate genome evolution. PMID:21482624

  10. Combining deep learning with anatomical analysis for segmentation of the portal vein for liver SBRT planning

    NASA Astrophysics Data System (ADS)

    Ibragimov, Bulat; Toesca, Diego; Chang, Daniel; Koong, Albert; Xing, Lei

    2017-12-01

    Automated segmentation of the portal vein (PV) for liver radiotherapy planning is a challenging task due to potentially low vasculature contrast, complex PV anatomy and image artifacts originating from fiducial markers and vasculature stents. In this paper, we propose a novel framework for automated segmentation of the PV from computed tomography (CT) images. We apply convolutional neural networks (CNNs) to learn the consistent appearance patterns of the PV using a training set of CT images with reference annotations and then enhance the PV in previously unseen CT images. Markov random fields (MRFs) were further used to smooth the results of the CNN enhancement and remove isolated mis-segmented regions. Finally, CNN-MRF-based enhancement was augmented with PV centerline detection that relied on PV anatomical properties such as tubularity and branch composition. The framework was validated on a clinical database with 72 CT images of patients scheduled for liver stereotactic body radiation therapy. The obtained segmentation accuracy was a DSC of 0.83.

  11. The effect of particle size on the morphology and thermodynamics of diblock copolymer/tethered-particle membranes.

    PubMed

    Zhang, Bo; Edwards, Brian J

    2015-06-07

    A combination of self-consistent field theory and density functional theory was used to examine the effect of particle size on the stable, 3-dimensional equilibrium morphologies formed by diblock copolymers with a tethered nanoparticle attached either between the two blocks or at the end of one of the blocks. Particle size was varied between one and four tenths of the radius of gyration of the diblock polymer chain for neutral particles as well as those either favoring or disfavoring segments of the copolymer blocks. Phase diagrams were constructed and analyzed in terms of thermodynamic diagrams to understand the physics associated with the molecular-level self-assembly processes. Typical morphologies were observed, such as lamellar, spheroidal, cylindrical, gyroidal, and perforated lamellar, with the primary concentration region of the tethered particles being influenced heavily by particle size and tethering location, strength of the particle-segment energetic interactions, chain length, and copolymer radius of gyration. The effect of the simulation box size on the observed morphology and system thermodynamics was also investigated, indicating possible effects of confinement upon the system self-assembly processes.

  12. A systematic review of definitions and classification systems of adjacent segment pathology.

    PubMed

    Kraemer, Paul; Fehlings, Michael G; Hashimoto, Robin; Lee, Michael J; Anderson, Paul A; Chapman, Jens R; Raich, Annie; Norvell, Daniel C

    2012-10-15

    Systematic review. To undertake a systematic review to determine how "adjacent segment degeneration," "adjacent segment disease," or clinical pathological processes that serve as surrogates for adjacent segment pathology are classified and defined in the peer-reviewed literature. Adjacent segment degeneration and adjacent segment disease are terms referring to degenerative changes known to occur after reconstructive spine surgery, most commonly at an immediately adjacent functional spinal unit. These can include disc degeneration, instability, spinal stenosis, facet degeneration, and deformity. The true incidence and clinical impact of degenerative changes at the adjacent segment is unclear because there is a lack of a universally accepted classification system that rigorously addresses clinical and radiological issues. A systematic review of the English language literature was undertaken and articles were classified using the Grades of Recommendation Assessment, Development, and Evaluation criteria. Seven classification systems of spinal degeneration, including degeneration at the adjacent segment, were identified. None have been evaluated for reliability or validity specific to patients with degeneration at the adjacent segment. The ways in which terms related to adjacent segment "degeneration" or "disease" are defined in the peer-reviewed literature are highly variable. On the basis of the systematic review presented in this article, no formal classification system for either cervical or thoracolumbar adjacent segment disorders currently exists. No recommendations regarding the use of current classification of degeneration at any segments can be made based on the available literature. A new comprehensive definition for adjacent segment pathology (ASP, the now preferred terminology) has been proposed in this Focus Issue, which reflects the diverse pathology observed at functional spinal units adjacent to previous spinal reconstruction and balances detailed stratification with clinical utility. A comprehensive classification system is being developed through expert opinion and will require validation as well as peer review. Strength of Statement: Strong.

  13. Elaboration of a semi-automated algorithm for brain arteriovenous malformation segmentation: initial results.

    PubMed

    Clarençon, Frédéric; Maizeroi-Eugène, Franck; Bresson, Damien; Maingreaud, Flavien; Sourour, Nader; Couquet, Claude; Ayoub, David; Chiras, Jacques; Yardin, Catherine; Mounayer, Charbel

    2015-02-01

    The purpose of our study was to distinguish the different components of a brain arteriovenous malformation (bAVM) on 3D rotational angiography (3D-RA) using a semi-automated segmentation algorithm. Data from 3D-RA of 15 patients (8 males, 7 females; 14 supratentorial bAVMs, 1 infratentorial) were used to test the algorithm. Segmentation was performed in two steps: (1) nidus segmentation from propagation (vertical then horizontal) of tagging on the reference slice (i.e., the slice on which the nidus had the biggest surface); (2) contiguity propagation (based on density and variance) from tagging of arteries and veins distant from the nidus. Segmentation quality was evaluated by comparison with six frame/s DSA by two independent reviewers. Analysis of supraselective microcatheterisation was performed to resolve discrepancies. Mean duration for bAVM segmentation was 64 ± 26 min. Quality of segmentation was evaluated as good or fair in 93% of cases. Segmentation gave better results than six frame/s DSA for the depiction of a focal ectasia on the main draining vein and for the evaluation of the venous drainage pattern. This segmentation algorithm is a promising tool that may help improve the understanding of bAVM angio-architecture, especially the venous drainage. • The segmentation algorithm allows for the distinction of the AVM's components • This algorithm helps to see the venous drainage of bAVMs more precisely • This algorithm may help to reduce the treatment-related complication rate.

  14. Interactive vs. automatic ultrasound image segmentation methods for staging hepatic lipidosis.

    PubMed

    Weijers, Gert; Starke, Alexander; Haudum, Alois; Thijssen, Johan M; Rehage, Jürgen; De Korte, Chris L

    2010-07-01

    The aim of this study was to test the hypothesis that automatic segmentation of vessels in ultrasound (US) images can produce similar or better results in grading fatty livers than interactive segmentation. A study was performed in postpartum dairy cows (N=151), as an animal model of human fatty liver disease, to test this hypothesis. Five transcutaneous and five intraoperative US liver images were acquired in each animal and a liver biopsy was taken. In liver tissue samples, triacylglycerol (TAG) was measured by biochemical analysis and hepatic diseases other than hepatic lipidosis were excluded by histopathologic examination. Ultrasonic tissue characterization (UTC) parameters--mean echo level, standard deviation (SD) of echo level, signal-to-noise ratio (SNR), residual attenuation coefficient (ResAtt) and axial and lateral speckle size--were derived using a computer-aided US (CAUS) protocol and software package. First, the liver tissue was interactively segmented by two observers. With increasing fat content, fewer hepatic vessels were visible in the ultrasound images and, therefore, a smaller proportion of the liver needed to be excluded from these images. Automatic-segmentation algorithms were implemented and it was investigated whether better results could be achieved than with the subjective and time-consuming interactive-segmentation procedure. The automatic-segmentation algorithms were based on both fixed and adaptive thresholding techniques in combination with a 'speckle'-shaped moving-window exclusion technique. All data were analyzed with and without postprocessing as contained in CAUS and with different automated-segmentation techniques. This enabled us to study the effect of the applied postprocessing steps on single and multiple linear regressions of the various UTC parameters with TAG. Improved correlations for all US parameters were found by using automatic-segmentation techniques. Stepwise multiple linear-regression formulas were derived and used to predict TAG level in the liver. Receiver-operating-characteristics (ROC) analysis was applied to assess the performance and area under the curve (AUC) of predicting TAG and to compare the sensitivity and specificity of the methods. Best speckle-size estimates and overall performance (R2 = 0.71, AUC = 0.94) were achieved by using an SNR-based adaptive automatic-segmentation method (used TAG threshold: 50 mg/g liver wet weight). Automatic segmentation is thus feasible and profitable.

  15. Detection of Crohn disease lesions of the small and large bowel in pediatric patients: diagnostic value of MR enterography versus reference examinations.

    PubMed

    Maccioni, Francesca; Al Ansari, Najwa; Mazzamurro, Fabrizio; Civitelli, Fortunata; Viola, Franca; Cucchiara, Salvatore; Catalano, Carlo

    2014-11-01

    The purpose of this article is to prospectively determine the accuracy of MR enterography in detecting Crohn disease lesions from the jejunum to the anorectal region in pediatric patients, in comparison with main reference investigations. Fifty consecutive children with known Crohn disease underwent MR enterography with oral contrast agent and gadolinium-chelate intravenous injection. Two radiologists detected and localized lesions by dividing the bowel into nine segments (450 analyzed segments in 50 patients). Ileocolonoscopy, barium studies, intestinal ultrasound, and capsule endoscopy were considered as first- and second-level reference examinations and were performed within 15 days of MR enterography. MR enterography detected lesions in 164 of 450 segments, with 155 true-positive and nine false-positive findings; overall sensitivity, specificity, and positive and negative predictive values for small- and large-bowel lesions were 94.5%, 97%, 94.5%, and 97%, respectively (ĸ = 0.93; 95% CI, 0.89-0.97). Sensitivity and specificity values were 88% and 97%, respectively, for the jejunum, 100% and 97% for the proximal-to-mid ileum, 100% and 100% for the distal ileum, 93% and 100% for the cecum, 70% and 97% for the ascending colon, 80% and 100% for the transverse colon, 100% and 92% for the descending colon, 96% and 90% for the sigmoid colon, and 96% and 88% for the rectum. From jejunum to rectum, the AUC value ranged between 0.916 (jejunum) and 1.00 (distal ileum). Perianal fistulas were diagnosed in 15 patients, and other complications were found in 13 patients. MR enterography showed an accuracy comparable to that of reference investigations, for both small- and large-bowel lesions. Because MR enterography is safer and more comprehensive than the reference examinations, it should be considered the primary examination for detecting Crohn disease lesions in children.

  16. Monitoring hydrofrac-induced seismicity by surface arrays - the DHM-Project Basel case study

    NASA Astrophysics Data System (ADS)

    Blascheck, P.; Häge, M.; Joswig, M.

    2012-04-01

    The method of "nanoseismic monitoring" was applied during the hydraulic stimulation at the Deep-Heat-Mining-Project (DHM-Project) Basel. Two small arrays at distances of 2.1 km and 4.8 km from the borehole recorded continuously for two days, during which more than 2500 seismic events were detected. This surface monitoring of induced seismicity was compared to the reference provided by the hydrofrac monitoring, which was conducted by Geothermal Explorers Limited using a network of borehole seismometers. Array processing provides an outlier-resistant, graphical jack-knifing localization method, which resulted in an average deviation of 850 m from the reference locations. Additionally, by applying the relative localization master-event method, the NNW-SSE strike direction of the reference was confirmed. It was shown that, at the event rate and detection sensitivity present, 3 h segments of data are sufficient to estimate the magnitude of completeness as well as the b-value. This is supported by two segments out of over 13 h of evaluated data, chosen to represent both the high seismic noise during normal daytime working hours and the minimum anthropogenic noise at night. The low signal-to-noise ratio was compensated by applying sonogram event detection as well as a coincidence analysis within each array. Sonograms use autoadaptive, non-linear filtering to enhance signals whose amplitudes are just above the noise level. For these events the magnitude was determined by the master-event method, allowing the magnitude of completeness to be computed with the entire-magnitude-range method provided by the ZMAP toolbox. Additionally, the b-values were determined and compared to the reference values. An introduction to the method of "nanoseismic monitoring" is given as well as a comparison to the reference data in the Basel case study.
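
    For readers unfamiliar with the b-value step, the sketch below shows the standard Aki/Utsu maximum-likelihood estimate that toolboxes such as ZMAP implement, given a catalog of magnitudes and a magnitude of completeness. It does not reproduce the entire-magnitude-range treatment of completeness used in the study, and the bin width is an assumption.

      # A minimal sketch of the maximum-likelihood b-value estimate.
      import numpy as np

      def b_value(magnitudes, mc, bin_width=0.1):
          """Aki/Utsu maximum-likelihood b-value for events at or above completeness mc."""
          m = np.asarray(magnitudes, dtype=float)
          m = m[m >= mc]
          # Utsu's correction shifts mc by half a magnitude bin.
          return np.log10(np.e) / (m.mean() - (mc - bin_width / 2.0))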

  17. SU-C-BRA-04: Automated Segmentation of Head-And-Neck CT Images for Radiotherapy Treatment Planning Via Multi-Atlas Machine Learning (MAML)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, X; Gao, H; Sharp, G

    Purpose: Accurate image segmentation is a crucial step during image guided radiation therapy. This work proposes a multi-atlas machine learning (MAML) algorithm for automated segmentation of head-and-neck CT images. Methods: In the first step, the algorithm uses normalized mutual information as the similarity metric and affine registration combined with multiresolution B-Spline registration, and then fuses the registered atlases using the label fusion strategy in Plastimatch. In the second step, a feature selection strategy is proposed to extract five feature components from reference or atlas images: intensity (I), distance map (D), box (B), center of gravity (C) and stable point (S). The box feature B is novel: it describes the relative position of each point with respect to the minimum inscribed rectangle of the ROI. The center-of-gravity feature C is the 3D Euclidean distance from a sample point to the ROI center of gravity, and the stable-point feature S is the distance from the sample point to the landmarks. A random forest (RF) classifier from Scikit-learn, a Python module integrating a wide range of state-of-the-art machine learning algorithms, is then adopted. Different feature and atlas strategies are used for different ROIs for improved performance, such as a multi-atlas strategy with the reference box for the brainstem, and a single-atlas strategy with the reference landmark for the optic chiasm. Results: The algorithm was validated on a set of 33 CT images with manual contours using a leave-one-out cross-validation strategy. Dice similarity coefficients between manual contours and automated contours were calculated: the proposed MAML method improved the Dice coefficient from 0.79 to 0.83 for the brainstem and from 0.11 to 0.52 for the optic chiasm with respect to the multi-atlas segmentation method (MA). Conclusion: A MAML method has been proposed for automated segmentation of head-and-neck CT images with improved performance. It provides comparable results for the brainstem and improved results for the optic chiasm compared with MA. Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
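
    The classification step can be pictured with the short sketch below: a random forest from Scikit-learn trained on a per-voxel feature matrix whose five columns stand in for the I, D, B, C and S components named above. The features and labels here are random placeholders; the registration, label fusion and actual feature extraction are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-voxel feature matrix: the five columns stand in for the
# intensity (I), distance-map (D), box (B), centre-of-gravity (C) and
# stable-point (S) components; values and labels are random placeholders.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, 5))        # features of sampled atlas voxels
y_train = rng.integers(0, 2, size=5000)     # 1 = inside ROI, 0 = background

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

X_target = rng.normal(size=(10, 5))         # features of voxels in a target CT
roi_labels = clf.predict(X_target)          # voxel-wise ROI decision
```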

  18. Model-based segmentation of hand radiographs

    NASA Astrophysics Data System (ADS)

    Weiler, Frank; Vogelsang, Frank

    1998-06-01

    An important procedure in pediatrics is to determine the skeletal maturity of a patient from radiographs of the hand. There is great interest in the automation of this tedious and time-consuming task. We present a new method for the segmentation of the bones of the hand, which allows the assessment of the skeletal maturity with an appropriate database of reference bones, similar to the atlas based methods. The proposed algorithm uses an extended active contour model for the segmentation of the hand bones, which incorporates a-priori knowledge of shape and topology of the bones in an additional energy term. This `scene knowledge' is integrated in a complex hierarchical image model, that is used for the image analysis task.

  19. [C57BL/6 mice open field behaviour qualitatively depends on arena size].

    PubMed

    Lebedev, I V; Pleskacheva, M G; Anokhin, K V

    2012-01-01

    Open field behavior is well known to depend on the physical characteristics of the apparatus. However, many of these effects are poorly described, especially with modern methods of behavioral registration and analysis, and previous experiments on the effect of arena size on behavior are few and contradictory. We compared the behavioral scores of four groups of C57BL/6 mice in round open field arenas of four different sizes (diameters 35, 75, 150 and 220 cm). Behavior was registered and analyzed using Noldus EthoVision, WinTrack and SegmentAnalyzer software. A significant effect of arena size was found: traveled distance and velocity increased, but not in proportion to the increase in arena size, and a significant effect on segment characteristics of the trajectory was also revealed. Detailed analysis showed drastic differences in trajectory structure and number of rears between the smaller (35 and 75 cm) and bigger (150 and 220 cm) arenas. We conclude that the character of exploration in smaller and bigger arenas depends on the relative size of the central open zone of the arena; apparently, enlarging this zone increases the motivational heterogeneity of the space, which requires a different exploration strategy than in smaller arenas.

  20. Scaling bioinformatics applications on HPC.

    PubMed

    Mikailov, Mike; Luo, Fu-Jyh; Barkley, Stuart; Valleru, Lohit; Whitney, Stephen; Liu, Zhichao; Thakkar, Shraddha; Tong, Weida; Petrick, Nicholas

    2017-12-28

    Recent breakthroughs in molecular biology and next generation sequencing technologies have led to the exponential growth of sequence databases. Researchers use BLAST for processing these sequences. However, traditional software parallelization techniques (threads, message passing interface) applied in newer versions of BLAST are not adequate for processing these sequences in a timely manner. A new method for array job parallelization has been developed which offers O(T) theoretical speed-up in comparison to multi-threading and MPI techniques. Here T is the number of array job tasks. (The number of CPUs that will be used to complete the job equals the product of T multiplied by the number of CPUs used by a single task.) The approach is based on segmentation of both input datasets to the BLAST process, combining partial solutions published earlier (Dhanker and Gupta, Int J Comput Sci Inf Technol 5:4818-4820, 2014), (Grant et al., Bioinformatics 18:765-766, 2002), (Mathog, Bioinformatics 19:1865-1866, 2003). It is accordingly referred to as a "dual segmentation" method. In order to implement the new method, the BLAST source code was modified to allow the researcher to pass to the program the number of records (effective number of sequences) in the original database. The team also developed methods to manage and consolidate the large number of partial results that get produced. Dual segmentation allows for massive parallelization, which lifts the scaling ceiling in exciting ways. BLAST jobs that hitherto failed or slogged inefficiently to completion now finish with speeds that characteristically reduce wallclock time from 27 days on 40 CPUs to a single day using 4104 tasks, each task utilizing eight CPUs and taking less than 7 minutes to complete. The massive increase in the number of tasks when running an analysis job with dual segmentation reduces the size, scope and execution time of each task. Besides significant speed of completion, additional benefits include fine-grained checkpointing and increased flexibility of job submission. "Trickling in" a swarm of individual small tasks tempers competition for CPU time in the shared HPC environment, and jobs submitted during quiet periods can complete in extraordinarily short time frames. The smaller task size also allows the use of older and less powerful hardware. The CDRH workhorse cluster was commissioned in 2010, yet its eight-core CPUs with only 24GB RAM work well in 2017 for these dual segmentation jobs. Finally, these techniques are excitingly friendly to budget-conscious scientific research organizations where probabilistic algorithms such as BLAST might discourage attempts at greater certainty because single runs represent a major resource drain. If a job that used to take 24 days can now be completed in less than an hour or on a space-available basis (which is the case at CDRH), repeated runs for more exhaustive analyses can be usefully contemplated.
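
    A minimal sketch of the query-side half of dual segmentation is shown below: splitting a FASTA query file into chunks so that each array-job task processes one chunk. File names and the chunk count are hypothetical, and the database-side split, the BLAST invocation and the result consolidation described above are not reproduced.

```python
# Hypothetical helper: split a FASTA query file into n_chunks files named
# query_chunk_0000.fa, query_chunk_0001.fa, ... so that each array-job task
# can BLAST one chunk. Round-robin assignment keeps chunk sizes roughly even.
def split_fasta(path, n_chunks, prefix="query_chunk"):
    with open(path) as fh:
        records, current = [], []
        for line in fh:
            if line.startswith(">") and current:
                records.append(current)
                current = []
            current.append(line)
        if current:
            records.append(current)
    for i in range(n_chunks):
        with open(f"{prefix}_{i:04d}.fa", "w") as out:
            for rec in records[i::n_chunks]:
                out.writelines(rec)

split_fasta("queries.fa", n_chunks=128)   # file name and task count are made up
```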

  1. Weakly Supervised Segmentation-Aided Classification of Urban Scenes from 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Guinard, S.; Landrieu, L.

    2017-05-01

    We consider the problem of the semantic classification of 3D LiDAR point clouds obtained from urban scenes when the training set is limited. We propose a non-parametric segmentation model for urban scenes composed of anthropic objects of simple shapes, partitioning the scene into geometrically homogeneous segments whose size is determined by the local complexity. This segmentation can be integrated into a conditional random field classifier (CRF) in order to capture the high-level structure of the scene. For each cluster, this allows us to aggregate the noisy predictions of a weakly-supervised classifier to produce a higher confidence data term. We demonstrate the improvement provided by our method over two publicly-available large-scale data sets.

  2. Segmental Refinement: A Multigrid Technique for Data Locality

    DOE PAGES

    Adams, Mark F.; Brown, Jed; Knepley, Matt; ...

    2016-08-04

    In this paper, we investigate a domain decomposed multigrid technique, termed segmental refinement, for solving general nonlinear elliptic boundary value problems. We extend the method first proposed in 1994 by analytically and experimentally investigating its complexity. We confirm that communication of traditional parallel multigrid is eliminated on fine grids, with modest amounts of extra work and storage, while maintaining the asymptotic exactness of full multigrid. We observe an accuracy dependence on the segmental refinement subdomain size, which was not considered in the original analysis. Finally, we present a communication complexity analysis that quantifies the communication costs ameliorated by segmental refinement and report performance results with up to 64K cores on a Cray XC30.

  3. Lung tumor segmentation in PET images using graph cuts.

    PubMed

    Ballangan, Cherry; Wang, Xiuying; Fulham, Michael; Eberl, Stefan; Feng, David Dagan

    2013-03-01

    The aim of segmentation of tumor regions in positron emission tomography (PET) is to provide more accurate measurements of tumor size and extension into adjacent structures, than is possible with visual assessment alone and hence improve patient management decisions. We propose a segmentation energy function for the graph cuts technique to improve lung tumor segmentation with PET. Our segmentation energy is based on an analysis of the tumor voxels in PET images combined with a standardized uptake value (SUV) cost function and a monotonic downhill SUV feature. The monotonic downhill feature avoids segmentation leakage into surrounding tissues with similar or higher PET tracer uptake than the tumor and the SUV cost function improves the boundary definition and also addresses situations where the lung tumor is heterogeneous. We evaluated the method in 42 clinical PET volumes from patients with non-small cell lung cancer (NSCLC). Our method improves segmentation and performs better than region growing approaches, the watershed technique, fuzzy-c-means, region-based active contour and tumor customized downhill. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  4. The segment as the minimal planning unit in speech production and reading aloud: evidence and implications.

    PubMed

    Kawamoto, Alan H; Liu, Qiang; Kello, Christopher T

    2015-01-01

    Speech production and reading aloud studies have much in common, especially the last stages involved in producing a response. We focus on the minimal planning unit (MPU) in articulation. Although most researchers now assume that the MPU is the syllable, we argue that it is at least as small as the segment based on negative response latencies (i.e., response initiation before presentation of the complete target) and longer initial segment durations in a reading aloud task where the initial segment is primed. We also discuss why such evidence was not found in earlier studies. Next, we rebut arguments that the segment cannot be the MPU by appealing to flexible planning scope whereby planning units of different sizes can be used due to individual differences, as well as stimulus and experimental design differences. We also discuss why negative response latencies do not arise in some situations and why anticipatory coarticulation does not preclude the segment MPU. Finally, we argue that the segment MPU is also important because it provides an alternative explanation of results implicated in the serial vs. parallel processing debate.

  5. A low-cost three-dimensional laser surface scanning approach for defining body segment parameters.

    PubMed

    Pandis, Petros; Bull, Anthony Mj

    2017-11-01

    Body segment parameters are used in many different applications in ergonomics as well as in dynamic modelling of the musculoskeletal system. Body segment parameters can be defined using different methods, including techniques that involve time-consuming manual measurements of the human body, used in conjunction with models or equations. In this study, a scanning technique for measuring subject-specific body segment parameters in an easy, fast, accurate and low-cost way was developed and validated. The scanner can obtain the body segment parameters in a single scanning operation, which takes between 8 and 10 s. The results obtained with the system show a standard deviation of 2.5% in volumetric measurements of the upper limb of a mannequin and 3.1% difference between scanning volume and actual volume. Finally, the maximum mean error for the moment of inertia by scanning a standard-sized homogeneous object was 2.2%. This study shows that a low-cost system can provide quick and accurate subject-specific body segment parameter estimates.
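
    The parameter computation itself can be illustrated with a small sketch: given a boolean voxel mask of a scanned segment, a voxel edge length and an assumed uniform density, it estimates volume, mass, centre of mass and the moment of inertia about one axis. This is a generic illustration under those assumptions, not the authors' processing pipeline.

```python
import numpy as np

def segment_parameters(mask, voxel_size_m, density_kg_m3=1000.0):
    """Volume, mass, centre of mass and moment of inertia about the axis
    parallel to the third array dimension through the centre of mass,
    assuming a uniform density. Illustrative only."""
    coords = np.argwhere(mask) * voxel_size_m     # voxel centres in metres
    v_vox = voxel_size_m ** 3
    volume = mask.sum() * v_vox
    mass = volume * density_kg_m3
    com = coords.mean(axis=0)
    # squared distance of each voxel to the long axis through the COM
    r2 = (coords[:, 0] - com[0]) ** 2 + (coords[:, 1] - com[1]) ** 2
    inertia = density_kg_m3 * v_vox * r2.sum()    # sum of m_i * r_i^2
    return volume, mass, com, inertia

toy_limb = np.zeros((100, 100, 300), dtype=bool)
toy_limb[30:70, 30:70, :] = True                  # crude block as a stand-in
print(segment_parameters(toy_limb, voxel_size_m=0.002))
```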

  6. Initialisation of 3D level set for hippocampus segmentation from volumetric brain MR images

    NASA Astrophysics Data System (ADS)

    Hajiesmaeili, Maryam; Dehmeshki, Jamshid; Bagheri Nakhjavanlo, Bashir; Ellis, Tim

    2014-04-01

    Shrinkage of the hippocampus is a primary biomarker for Alzheimer's disease and can be measured through accurate segmentation of brain MR images. The paper will describe the problem of initialisation of a 3D level set algorithm for hippocampus segmentation that must cope with some challenging characteristics, such as small size, wide range of intensities, narrow width, and shape variation. In addition, MR images require bias correction, to account for additional inhomogeneity associated with the scanner technology. Due to these inhomogeneities, using a single initialisation seed region inside the hippocampus is prone to failure. Alternative initialisation strategies are explored, such as using multiple initialisations in different sections (such as the head, body and tail) of the hippocampus. The Dice metric is used to validate our segmentation results with respect to ground truth for a dataset of 25 MR images. Experimental results indicate significant improvement in segmentation performance using the multiple initialisations techniques, yielding more accurate segmentation results for the hippocampus.

  7. Muscle segmentation in time series images of Drosophila metamorphosis.

    PubMed

    Yadav, Kuleesha; Lin, Feng; Wasser, Martin

    2015-01-01

    In order to study genes associated with muscular disorders, we characterize the phenotypic changes in Drosophila muscle cells during metamorphosis caused by genetic perturbations. We collect in vivo images of muscle fibers during remodeling of larval to adult muscles. In this paper, we focus on the new image processing pipeline designed to quantify the changes in shape and size of muscles. We propose a new two-step approach to muscle segmentation in time series images. First, we implement a watershed algorithm to divide the image into edge-preserving regions, and then we classify these regions into muscle and non-muscle classes on the basis of shape and intensity. The advantage of our method is two-fold: first, better results are obtained because the classification of regions is constrained by the shape of the muscle cell from the previous time point; and second, minimal user intervention results in faster processing time. The segmentation results are used to compare the changes in cell size between controls and reduction of the autophagy related gene Atg 9 during Drosophila metamorphosis.

  8. Outdoor recreation activity trends by volume segments: U.S. and Northeast market analyses, 1982-1989

    Treesearch

    Rodney B. Warnick

    1992-01-01

    The purpose of this review was to examine volume segmentation within three selected outdoor recreational activities -- swimming, hunting and downhill skiing over an eight-year period, from 1982 through 1989 at the national level and within the Northeast Region of the U.S.; and to determine if trend patterns existed within any of these activities when the market size...

  9. KSOS Computer Program Development Specifications (Type B-5). (Kernelized Secure Operating System). I. Security Kernel (CDRL 0002AF). II. UNIX Emulator (CDRL 0002AG). III. Security-Related Software (CDRL 0002AH).

    DTIC Science & Technology

    1980-12-01

    Communications Corporation, Palo Alto, CA (March 1978). g. [Walter et al. 74] Walter, K.G. et al., "Primitive Models for Computer Security", ESD-TR... discussion is followed by a presentation of the Kernel primitive operations upon these objects. All Kernel objects shall be referenced by a common... set of sizes. All process segments, regardless of domain, shall be manipulated by the same set of Kernel segment primitives. User domain segments

  10. Polarization sensitive corneal and anterior segment swept-source optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Lim, Yiheng; Yamanari, Masahiro; Yasuno, Yoshiaki

    2010-02-01

    We develop a compact polarization-sensitive corneal and anterior segment swept-source optical coherence tomography (PS-CAS-OCT) system for evaluating the usefulness of PS-OCT and for enabling large-scale studies of the tissue properties of normal and diseased eyes. PS-OCT provides better tissue discrimination than conventional OCT by visualizing the fibrous tissues in the anterior eye segment. For portability, our polarization-sensitive interferometer is reduced in size to fit into a 19-inch box, and for usability the probe is integrated into a position-adjustable scanning head.

  11. Parallelized seeded region growing using CUDA.

    PubMed

    Park, Seongjin; Lee, Jeongjin; Lee, Hyunna; Shin, Juneseuk; Seo, Jinwook; Lee, Kyoung Ho; Shin, Yeong-Gil; Kim, Bohyoung

    2014-01-01

    This paper presents a novel method for parallelizing the seeded region growing (SRG) algorithm using Compute Unified Device Architecture (CUDA) technology, with the intention of overcoming the theoretical weakness of the SRG algorithm, namely that its computation time is directly proportional to the size of the segmented region. The segmentation performance of the proposed CUDA-based SRG is compared with SRG implementations on single-core CPUs, quad-core CPUs, and shader language programming, using synthetic datasets and 20 body CT scans. Based on the experimental results, the CUDA-based SRG outperforms the other three implementations, suggesting that it can substantially assist the segmentation during massive CT screening tests.
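
    For reference, a plain serial version of seeded region growing is sketched below, since this is the algorithm being parallelized; the CUDA kernel itself is not reproduced. The acceptance rule (intensity within a tolerance of the running region mean) is one common variant and is an assumption here.

```python
import numpy as np
from collections import deque

def seeded_region_growing(volume, seed, tol=50.0):
    """Serial SRG: grow from `seed` (z, y, x), absorbing 6-connected voxels
    whose intensity lies within `tol` of the running region mean."""
    grown = np.zeros(volume.shape, dtype=bool)
    grown[seed] = True
    total, count = float(volume[seed]), 1
    queue = deque([seed])
    neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in neighbours:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not grown[nz, ny, nx]
                    and abs(float(volume[nz, ny, nx]) - total / count) <= tol):
                grown[nz, ny, nx] = True
                total += float(volume[nz, ny, nx])
                count += 1
                queue.append((nz, ny, nx))
    return grown
```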

  12. Comparison of usefulness of N-terminal pro-brain natriuretic peptide as an independent predictor of cardiac function among admission cardiac serum biomarkers in patients with anterior wall versus nonanterior wall ST-segment elevation myocardial infarction undergoing primary percutaneous coronary intervention.

    PubMed

    Haeck, Joost D E; Verouden, Niels J W; Kuijt, Wichert J; Koch, Karel T; Van Straalen, Jan P; Fischer, Johan; Groenink, Maarten; Bilodeau, Luc; Tijssen, Jan G P; Krucoff, Mitchell W; De Winter, Robbert J

    2010-04-15

    The purpose of the present study was to determine the prognostic value of N-terminal pro-brain natriuretic peptide (NT-pro-BNP), among other serum biomarkers, on cardiac magnetic resonance (CMR) imaging parameters of cardiac function and infarct size in patients with ST-segment elevation myocardial infarction undergoing primary percutaneous coronary intervention. We measured NT-pro-BNP, cardiac troponin T, creatinine kinase-MB fraction, high-sensitivity C-reactive protein, and creatinine on the patients' arrival at the catheterization laboratory in 206 patients with ST-segment elevation myocardial infarction. The NT-pro-BNP levels were divided into quartiles and correlated with left ventricular function and infarct size measured by CMR imaging at 4 to 6 months. Compared to the lower quartiles, patients with nonanterior wall myocardial infarction in the highest quartile of NT-pro-BNP (> or = 260 pg/ml) more often had a greater left ventricular end-systolic volume (68 vs 39 ml/m(2), p <0.001), a lower left ventricular ejection fraction (42% vs 54%, p <0.001), a larger infarct size (9 vs 4 g/m(2), p = 0.002), and a larger number of transmural segments (11% of segments vs 3% of segments, p <0.001). Multivariate analysis revealed that a NT-pro-BNP level of > or = 260 pg/ml was the strongest independent predictor of left ventricular ejection fraction in patients with nonanterior wall myocardial infarction compared to the other serum biomarkers (beta = -5.8; p = 0.019). In conclusion, in patients with nonanterior wall myocardial infarction undergoing primary percutaneous coronary intervention, an admission NT-pro-BNP level of > or = 260 pg/ml was a strong, independent predictor of left ventricular function assessed by CMR imaging at follow-up. Our findings suggest that NT-pro-BNP, a widely available biomarker, might be helpful in the early risk stratification of patients with nonanterior wall myocardial infarction. Copyright 2010 Elsevier Inc. All rights reserved.

  13. Relaxation dynamics of internal segments of DNA chains in nanochannels

    NASA Astrophysics Data System (ADS)

    Jain, Aashish; Muralidhar, Abhiram; Dorfman, Kevin; Dorfman Group Team

    We will present relaxation dynamics of internal segments of a DNA chain confined in nanochannel. The results have direct application in genome mapping technology, where long DNA molecules containing sequence-specific fluorescent probes are passed through an array of nanochannels to linearize them, and then the distances between these probes (the so-called ``DNA barcode'') are measured. The relaxation dynamics of internal segments set the experimental error due to dynamic fluctuations. We developed a multi-scale simulation algorithm, combining a Pruned-Enriched Rosenbluth Method (PERM) simulation of a discrete wormlike chain model with hard spheres with Brownian dynamics (BD) simulations of a bead-spring chain. Realistic parameters such as the bead friction coefficient and spring force law parameters are obtained from PERM simulations and then mapped onto the bead-spring model. The BD simulations are carried out to obtain the extension autocorrelation functions of various segments, which furnish their relaxation times. Interestingly, we find that (i) corner segments relax faster than the center segments and (ii) relaxation times of corner segments do not depend on the contour length of DNA chain, whereas the relaxation times of center segments increase linearly with DNA chain size.
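
    The post-processing step described above (extension autocorrelation and a relaxation time) can be sketched as follows for a toy time series; the 1/e read-off used here is a simple stand-in for however the relaxation times were actually extracted, and the AR(1) signal is purely illustrative.

```python
import numpy as np

def relaxation_time(extension, dt):
    """Normalised autocorrelation of an extension time series; the relaxation
    time is read off as the lag where it first drops below 1/e (a simple
    stand-in for a proper exponential fit)."""
    x = np.asarray(extension, dtype=float)
    x -= x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf /= acf[0]
    below = np.nonzero(acf < 1.0 / np.e)[0]
    return below[0] * dt if below.size else np.nan

# Toy AR(1) signal with a known correlation time of roughly 100 steps.
rng = np.random.default_rng(1)
x = np.zeros(20000)
for i in range(1, x.size):
    x[i] = 0.99 * x[i - 1] + rng.normal()
print(relaxation_time(x, dt=1.0))
```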

  14. 3-D segmentation of articular cartilages by graph cuts using knee MR images from osteoarthritis initiative

    NASA Astrophysics Data System (ADS)

    Shim, Hackjoon; Lee, Soochan; Kim, Bohyeong; Tao, Cheng; Chang, Samuel; Yun, Il Dong; Lee, Sang Uk; Kwoh, Kent; Bae, Kyongtae

    2008-03-01

    Knee osteoarthritis is the most common debilitating health condition affecting elderly population. MR imaging of the knee is highly sensitive for diagnosis and evaluation of the extent of knee osteoarthritis. Quantitative analysis of the progression of osteoarthritis is commonly based on segmentation and measurement of articular cartilage from knee MR images. Segmentation of the knee articular cartilage, however, is extremely laborious and technically demanding, because the cartilage is of complex geometry and thin and small in size. To improve precision and efficiency of the segmentation of the cartilage, we have applied a semi-automated segmentation method that is based on an s/t graph cut algorithm. The cost function was defined integrating regional and boundary cues. While regional cues can encode any intensity distributions of two regions, "object" (cartilage) and "background" (the rest), boundary cues are based on the intensity differences between neighboring pixels. For three-dimensional (3-D) segmentation, hard constraints are also specified in 3-D way facilitating user interaction. When our proposed semi-automated method was tested on clinical patients' MR images (160 slices, 0.7 mm slice thickness), a considerable amount of segmentation time was saved with improved efficiency, compared to a manual segmentation approach.

  15. Description and User Instructions for the Quaternion_to_Orbit_v3 Software

    NASA Technical Reports Server (NTRS)

    Strekalov, Dmitry V.; Kruizinga, Gerhard L.; Paik, Meegyeong; Yuan, Dah-Ning; Asmar, Sami W.

    2012-01-01

    For a given inertial frame of reference, the software combines the spacecraft orbits with the spacecraft attitude quaternions, and rotates the body-fixed reference frame of a particular spacecraft to the inertial reference frame. The conversion assumes that the two spacecraft are aligned with respect to the mutual line of sight, with a parameterized time tag. The software is implemented in Python and is completely open source. It is very versatile, and may be applied under various circumstances and for other related purposes. Based on the solid linear algebra analysis, it has an extra option for compensating the linear pitch. This software has been designed for simulation of the calibration maneuvers performed by the two spacecraft comprising the GRAIL mission to the Moon, but has potential use for other applications. In simulations of formation flights, one needs to coordinate the spacecraft orbits represented in an appropriate inertial reference frame and the spacecraft attitudes. The latter are usually given as the time series of quaternions rotating the body-fixed reference frame of a particular spacecraft to the inertial reference frame. It is often desirable to simulate the same maneuver for different segments of the orbit. It is also useful to study various maneuvers that could be performed at the same orbit segment. These two lines of study are more time- and labor-efficient if the attitude and orbit data are generated independently, so that the part of the data that has not been changed can be recycled in the course of multiple simulations.
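
    The core frame rotation described above can be illustrated with a generic quaternion-to-rotation-matrix sketch; the scalar-first convention and body-to-inertial sense are assumptions for illustration and are not taken from the Quaternion_to_Orbit_v3 source.

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z), scalar first,
    interpreted as a body-to-inertial rotation (convention assumed here)."""
    w, x, y, z = q / np.linalg.norm(q)
    R = np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])
    return R @ v

q = np.array([np.cos(np.pi / 8), 0.0, 0.0, np.sin(np.pi / 8)])  # 45 deg about z
print(quat_rotate(q, np.array([1.0, 0.0, 0.0])))                # ~[0.707, 0.707, 0]
```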

  16. Genetics Home Reference: atelosteogenesis type 3

    MedlinePlus

    ... in the gene encoding filamin B disrupt vertebral segmentation, joint formation and skeletogenesis. Nat Genet. 2004 Apr; ...

  17. Genetics Home Reference: boomerang dysplasia

    MedlinePlus

    ... in the gene encoding filamin B disrupt vertebral segmentation, joint formation and skeletogenesis. Nat Genet. 2004 Apr; ...

  18. Genetics Home Reference: atelosteogenesis type 1

    MedlinePlus

    ... in the gene encoding filamin B disrupt vertebral segmentation, joint formation and skeletogenesis. Nat Genet. 2004 Apr; ...

  19. Genetics Home Reference: Smith-Magenis syndrome

    MedlinePlus

    ... segment most often includes approximately 3.7 million DNA building blocks (base pairs), also written as 3. ... AM, Lupski JR, Potocki L. Cognitive and adaptive behavior profiles in Smith-Magenis syndrome. J Dev Behav ...

  20. Genetics Home Reference: Potocki-Lupski syndrome

    MedlinePlus

    ... of this segment causes a related condition called Smith-Magenis syndrome .) In the remaining one-third of ... L. Neurodevelopmental Disorders Associated with Abnormal Gene Dosage: Smith-Magenis and Potocki-Lupski Syndromes. J Pediatr Genet. ...

  1. Rapid surface defect detection based on singular value decomposition using steel strips as an example

    NASA Astrophysics Data System (ADS)

    Sun, Qianlai; Wang, Yin; Sun, Zhiyi

    2018-05-01

    For most surface defect detection methods based on image processing, image segmentation is a prerequisite for determining and locating the defect. In our previous work, a method based on singular value decomposition (SVD) was used to determine and approximately locate surface defects on steel strips without image segmentation. For the SVD-based method, the image to be inspected was projected onto its first left and right singular vectors respectively. If there were defects in the image, there would be sharp changes in the projections. Then the defects may be determined and located according to sharp changes in the projections of each image to be inspected. This method was simple and practical, but the SVD had to be performed for each image to be inspected. Owing to the high time complexity of SVD itself, it did not have a significant advantage in terms of time consumption over image segmentation-based methods. Here, we present an improved SVD-based method. In the improved method, a defect-free image is considered as the reference image, which is acquired under the same environment as the image to be inspected. The singular vectors of each image to be inspected are replaced by the singular vectors of the reference image, and SVD is performed only once for the reference image off-line before detection of the defects, thus greatly reducing the time required. The improved method is more conducive to real-time defect detection. Experimental results confirm its validity.
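
    A rough sketch of the improved scheme as described: the first left and right singular vectors are computed once from a defect-free reference image, each inspected image is projected onto them, and abrupt jumps in the projections are flagged. The thresholding rule and parameter below are illustrative guesses, not the authors' settings.

```python
import numpy as np

def fit_reference(reference):
    """SVD of the defect-free reference image, done once off-line."""
    U, _, Vt = np.linalg.svd(reference.astype(float), full_matrices=False)
    return U[:, 0], Vt[0, :]          # first left / right singular vectors

def detect(image, u1, v1, k=5.0):
    """Project an inspected image onto u1 and v1 and flag abrupt jumps in the
    projections; the jump test (mean + k sigma of the differences) is a guess."""
    img = image.astype(float)
    row_proj = img @ v1               # one value per image row
    col_proj = u1 @ img               # one value per image column
    flagged = []
    for proj in (row_proj, col_proj):
        d = np.abs(np.diff(proj))
        flagged.append(np.nonzero(d > d.mean() + k * d.std())[0])
    return flagged                    # candidate defect rows, then columns
```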

  2. A stochastic population model to evaluate Moapa dace (Moapa coriacea) population growth under alternative management scenarios

    USGS Publications Warehouse

    Perry, Russell W.; Jones, Edward; Scoppettone, G. Gary

    2015-07-14

    Increasing or decreasing the total carrying capacity of all stream segments resulted in changes in equilibrium population size that were directly proportional to the change in capacity. However, changes in carrying capacity to some stream segments but not others could result in disproportionate changes in equilibrium population sizes by altering density-dependent movement and survival in the stream network. These simulations show how our IBM can provide a useful management tool for understanding the effect of restoration actions or reintroductions on carrying capacity, and, in turn, how these changes affect Moapa dace abundance. Such tools are critical for devising management strategies to achieve recovery goals.

  3. Iris recognition: on the segmentation of degraded images acquired in the visible wavelength.

    PubMed

    Proença, Hugo

    2010-08-01

    Iris recognition imaging constraints are receiving increasing attention. There are several proposals to develop systems that operate in the visible wavelength and in less constrained environments. These imaging conditions engender acquired noisy artifacts that lead to severely degraded images, making iris segmentation a major issue. Having observed that existing iris segmentation methods tend to fail in these challenging conditions, we present a segmentation method that can handle degraded images acquired in less constrained conditions. We offer the following contributions: 1) to consider the sclera the most easily distinguishable part of the eye in degraded images, 2) to propose a new type of feature that measures the proportion of sclera in each direction and is fundamental in segmenting the iris, and 3) to run the entire procedure in deterministically linear time in respect to the size of the image, making the procedure suitable for real-time applications.

  4. Matrix Analysis of the Digital Divide in eHealth Services Using Awareness, Want, and Adoption Gap

    PubMed Central

    2012-01-01

    Background The digital divide usually refers to access or usage, but some studies have identified two other divides: awareness and demand (want). Given that the hierarchical stages of the innovation adoption process of a customer are interrelated, it is necessary and meaningful to analyze the digital divide in eHealth services through three main stages, namely, awareness, want, and adoption. Objective By following the three main integrated stages of the innovation diffusion theory, from the customer segment viewpoint, this study aimed to propose a new matrix analysis of the digital divide using the awareness, want, and adoption gap ratio (AWAG). I compared the digital divide among different groups. Furthermore, I conducted an empirical study on eHealth services to present the practicability of the proposed methodology. Methods Through a review and discussion of the literature, I proposed hypotheses and a new matrix analysis. To test the proposed method, 3074 Taiwanese respondents, aged 15 years and older, were surveyed by telephone. I used the stratified simple random sampling method, with sample size allocation proportioned by the population distribution of 23 cities and counties (strata). Results This study proposed the AWAG segment matrix to analyze the digital divide in eHealth services. First, awareness and want rates were divided into two levels at the middle point of 50%, and then the 2-dimensional cross of the awareness and want segment matrix was divided into four categories: opened group, desire-deficiency group, perception-deficiency group, and closed group. Second, according to the degrees of awareness and want, each category was further divided into four subcategories. I also defined four possible strategies, namely, hold, improve, evaluate, and leave, for different regions in the proposed matrix. An empirical test on two recently promoted eHealth services, the digital medical service (DMS) and the digital home care service (DHCS), was conducted. Results showed that for both eHealth services, the digital divides of awareness, want, and adoption existed across demographic variables, as well as between computer owners and nonowners, and between Internet users and nonusers. With respect to the analysis of the AWAG segment matrix for DMS, most of the segments, except for people with marriage status of Other or without computers, were positioned in the opened group. With respect to DHCS, segments were separately positioned in the opened, perception-deficiency, and closed groups. Conclusions Adoption does not closely follow people’s awareness or want, and a huge digital divide in adoption exists in DHS and DHCS. Thus, a strategy to promote adoption should be used for most demographic segments. PMID:22329958
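
    The first-level grouping of the proposed matrix can be sketched as below, splitting awareness and want rates at the 50% point and crossing them into the four named groups; the assignment of labels to quadrants follows the natural reading of the abstract, and the gap-ratio computation and the four sub-categories per group are not reproduced.

```python
def awag_group(awareness_rate, want_rate, cut=0.5):
    """First-level AWAG grouping: cross awareness and want rates at 50%."""
    if awareness_rate >= cut and want_rate >= cut:
        return "opened"
    if awareness_rate >= cut:
        return "desire-deficiency"       # aware of the service but not wanting it
    if want_rate >= cut:
        return "perception-deficiency"   # wanting the service but unaware of it
    return "closed"

print(awag_group(0.72, 0.61))            # hypothetical segment -> "opened"
```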

  5. Indoor Spatial Updating with Reduced Visual Information

    PubMed Central

    Legge, Gordon E.; Gage, Rachel; Baek, Yihwa; Bochsler, Tiana M.

    2016-01-01

    Purpose Spatial updating refers to the ability to keep track of position and orientation while moving through an environment. People with impaired vision may be less accurate in spatial updating with adverse consequences for indoor navigation. In this study, we asked how artificial restrictions on visual acuity and field size affect spatial updating, and also judgments of the size of rooms. Methods Normally sighted young adults were tested with artificial restriction of acuity in Mild Blur (Snellen 20/135) and Severe Blur (Snellen 20/900) conditions, and a Narrow Field (8°) condition. The subjects estimated the dimensions of seven rectangular rooms with and without these visual restrictions. They were also guided along three-segment paths in the rooms. At the end of each path, they were asked to estimate the distance and direction to the starting location. In Experiment 1, the subjects walked along the path. In Experiment 2, they were pushed in a wheelchair to determine if reduced proprioceptive input would result in poorer spatial updating. Results With unrestricted vision, mean Weber fractions for room-size estimates were near 20%. Severe Blur but not Mild Blur yielded larger errors in room-size judgments. The Narrow Field was associated with increased error, but less than with Severe Blur. There was no effect of visual restriction on estimates of distance back to the starting location, and only Severe Blur yielded larger errors in the direction estimates. Contrary to expectation, the wheelchair subjects did not exhibit poorer updating performance than the walking subjects, nor did they show greater dependence on visual condition. Discussion If our results generalize to people with low vision, severe deficits in acuity or field will adversely affect the ability to judge the size of indoor spaces, but updating of position and orientation may be less affected by visual impairment. PMID:26943674

  6. Indoor Spatial Updating with Reduced Visual Information.

    PubMed

    Legge, Gordon E; Gage, Rachel; Baek, Yihwa; Bochsler, Tiana M

    2016-01-01

    Spatial updating refers to the ability to keep track of position and orientation while moving through an environment. People with impaired vision may be less accurate in spatial updating with adverse consequences for indoor navigation. In this study, we asked how artificial restrictions on visual acuity and field size affect spatial updating, and also judgments of the size of rooms. Normally sighted young adults were tested with artificial restriction of acuity in Mild Blur (Snellen 20/135) and Severe Blur (Snellen 20/900) conditions, and a Narrow Field (8°) condition. The subjects estimated the dimensions of seven rectangular rooms with and without these visual restrictions. They were also guided along three-segment paths in the rooms. At the end of each path, they were asked to estimate the distance and direction to the starting location. In Experiment 1, the subjects walked along the path. In Experiment 2, they were pushed in a wheelchair to determine if reduced proprioceptive input would result in poorer spatial updating. With unrestricted vision, mean Weber fractions for room-size estimates were near 20%. Severe Blur but not Mild Blur yielded larger errors in room-size judgments. The Narrow Field was associated with increased error, but less than with Severe Blur. There was no effect of visual restriction on estimates of distance back to the starting location, and only Severe Blur yielded larger errors in the direction estimates. Contrary to expectation, the wheelchair subjects did not exhibit poorer updating performance than the walking subjects, nor did they show greater dependence on visual condition. If our results generalize to people with low vision, severe deficits in acuity or field will adversely affect the ability to judge the size of indoor spaces, but updating of position and orientation may be less affected by visual impairment.

  7. Caution on the use of NBS 30 biotite for hydrogen-isotope measurements with on-line high-temperature conversion systems

    USGS Publications Warehouse

    Qi, Haiping; Coplen, Tyler B.; Olack, Gerard; Vennemann, Torsten W.

    2014-01-01

    RATIONALE: The supply of NBS 30 biotite is nearly exhausted. During measurements of NBS 30 and potential replacements, reproducible δ2HVSMOW-SLAP values could not be obtained by three laboratories using high-temperature conversion (HTC) systems. The cause of this issue has been investigated using the silver-tube technique for hydrogen-isotope measurements of water. METHODS: The δ2HVSMOW-SLAP values of NBS 30 biotite, other biotites, muscovites, and kaolinite with different particle sizes, along with IAEA-CH-7 polyethylene, and reference waters and NBS 22 oil that were sealed in silver-tube segments, were measured. The effect of absorbed water on mineral surfaces was investigated with waters both enriched and depleted in 2H. The quantitative conversion of hydrogen from biotite into gaseous hydrogen as a function of mass and particle size was also investigated. RESULTS: The δ2HVSMOW-SLAP values of NBS 30 obtained by three laboratories were as much as 21 ‰ too high compared with the accepted value of −65.7 ‰, determined by conventional off-line measurements. The experiments showed a strong correlation between grain size and the δ2HVSMOW-SLAP value of NBS 30 biotite, but not of biotites with lower iron content. The δ2HVSMOW-SLAP values of NBS 30 as a function of particle size show a clear trend toward −65.7 ‰ with finer grain size. CONCLUSIONS: Determination of the δ2HVSMOW-SLAP values of hydrous minerals and of NBS 30 biotite by on-line HTC systems coupled to isotope-ratio mass spectrometers may be unreliable because hydrogen in this biotite may not be converted quantitatively into molecular hydrogen. Extreme caution in the use and interpretation of δ2HVSMOW-SLAP on-line measurements of hydrous minerals is recommended.

  8. Identifying trout refuges in the Indian and Hudson Rivers in northern New York through airborne thermal infrared remote sensing

    USGS Publications Warehouse

    Ernst, Anne G.; Baldigo, Barry P.; Calef, Fred J.; Freehafer, Douglas A.; Kremens, Robert L.

    2015-10-09

    The locations and sizes of potential cold-water refuges for trout were examined in 2005 along a 27-kilometer segment of the Indian and Hudson Rivers in northern New York to evaluate the extent of refuges, the effects of routine flow releases from an impoundment, and how these refuges and releases might influence trout survival in reaches that otherwise would be thermally stressed. This river segment supports small populations of brook trout (Salvelinus fontinalis), brown trout (Salmo trutta), and rainbow trout (Oncorhynchus mykiss) and also receives regular releases of reservoir-surface waters to support rafting during the summer, when water temperatures in both the reservoir and the river frequently exceed thermal thresholds for trout survival. Airborne thermal infrared imaging was supplemented with continuous, in-stream temperature loggers to identify potential refuges that may be associated with tributary inflows or groundwater seeps and to define the extent to which the release flows decrease the size of existing refuges. In general, the release flows overwhelmed the refuge areas and greatly decreased the size and number of the areas. Mean water temperatures were unaffected by the releases, but small-scale heterogeneity was diminished. At a larger scale, water temperatures in the upper and lower segments of the reach were consistently warmer than in the middle segment, even during passage of release waters. The inability of remote thermal infrared images to consistently distinguish land from water (in shaded areas) and to detect groundwater seeps (away from the shallow edges of the stream) limited data analysis and the ability to identify potential thermal refuge areas.

  9. SU-F-J-113: Multi-Atlas Based Automatic Organ Segmentation for Lung Radiotherapy Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, J; Han, J; Ailawadi, S

    Purpose: Normal organ segmentation is one time-consuming and labor-intensive step for lung radiotherapy treatment planning. The aim of this study is to evaluate the performance of a multi-atlas based segmentation approach for automatic organs at risk (OAR) delineation. Methods: Fifteen Lung stereotactic body radiation therapy patients were randomly selected. Planning CT images and OAR contours of the heart - HT, aorta - AO, vena cava - VC, pulmonary trunk - PT, and esophagus - ES were exported and used as reference and atlas sets. For automatic organ delineation for a given target CT, 1) all atlas sets were deformably warped to the target CT, 2) the deformed sets were accumulated and normalized to produce organ probability density (OPD) maps, and 3) the OPD maps were converted to contours via image thresholding. Optimal threshold for each organ was empirically determined by comparing the auto-segmented contours against their respective reference contours. The delineated results were evaluated by measuring contour similarity metrics: DICE, mean distance (MD), and true detection rate (TD), where DICE=(intersection volume/sum of two volumes) and TD = {1.0 - (false positive + false negative)/2.0}. Diffeomorphic Demons algorithm was employed for CT-CT deformable image registrations. Results: Optimal thresholds were determined to be 0.53 for HT, 0.38 for AO, 0.28 for PT, 0.43 for VC, and 0.31 for ES. The mean similarity metrics (DICE[%], MD[mm], TD[%]) were (88, 3.2, 89) for HT, (79, 3.2, 82) for AO, (75, 2.7, 77) for PT, (68, 3.4, 73) for VC, and (51, 2.7, 60) for ES. Conclusion: The investigated multi-atlas based approach produced reliable segmentations for the organs with large and relatively clear boundaries (HT and AO). However, the detection of small and narrow organs with diffused boundaries (ES) was challenging. Sophisticated atlas selection and multi-atlas fusion algorithms may further improve the quality of segmentations.
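
    The two overlap metrics can be sketched as follows. Dice is written in the conventional 2|A∩B|/(|A|+|B|) form, and for TD the abstract does not state how the false-positive and false-negative terms are normalised, so expressing both as fractions of the reference volume is an assumption made here.

```python
import numpy as np

def dice(auto, ref):
    auto, ref = auto.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(auto, ref).sum() / (auto.sum() + ref.sum())

def true_detection(auto, ref):
    # FP and FN expressed as fractions of the reference volume (an assumption).
    auto, ref = auto.astype(bool), ref.astype(bool)
    fp = np.logical_and(auto, ~ref).sum() / ref.sum()
    fn = np.logical_and(~auto, ref).sum() / ref.sum()
    return 1.0 - (fp + fn) / 2.0
```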

  10. SU-E-J-109: Accurate Contour Transfer Between Different Image Modalities Using a Hybrid Deformable Image Registration and Fuzzy Connected Image Segmentation Method.

    PubMed

    Yang, C; Paulson, E; Li, X

    2012-06-01

    To develop and evaluate a tool that can improve the accuracy of contour transfer between different image modalities under challenging conditions of low image contrast and large image deformation, compared to a few commonly used methods, for radiation treatment planning. The software tool includes the following steps and functionalities: (1) accepting input images of different modalities, (2) converting existing contours on reference images (e.g., MRI) into delineated volumes and adjusting the intensity within the volumes to match the intensity distribution of the target images (e.g., CT) for an enhanced similarity metric, (3) registering reference and target images using appropriate deformable registration algorithms (e.g., B-spline, demons) and generating deformed contours, (4) mapping the deformed volumes onto the target images and calculating mean, variance, and center of mass as the initialization parameters for subsequent fuzzy connectedness (FC) image segmentation on the target images, (5) generating an affinity map from the FC segmentation, and (6) obtaining final contours by modifying the deformed contours using the affinity map with a gradient distance weighting algorithm. The tool was tested with the CT and MR images of four pancreatic cancer patients acquired at the same respiration phase to minimize motion distortion. Dice's coefficient was calculated against direct delineation on the target image. Contours generated by various methods, including rigid transfer, auto-segmentation, deformable-only transfer and the proposed method, were compared. Fuzzy connected image segmentation needs careful parameter initialization and user involvement. Automatic contour transfer by multi-modality deformable registration leads to up to 10% accuracy improvement over the rigid transfer. The two extra proposed steps of adjusting the intensity distribution and modifying the deformed contour with the affinity map improve the transfer accuracy by a further 14% on average. Deformable image registration aided by contrast adjustment and fuzzy connectedness segmentation improves the contour transfer accuracy between multi-modality images, particularly with large deformation and low image contrast. © 2012 American Association of Physicists in Medicine.

  11. Giant café-au-lait macule in neurofibromatosis 1: a type 2 segmental manifestation of neurofibromatosis 1?

    PubMed

    Yang, Chao-Chun; Happle, Rudolf; Chao, Sheau-Chiou; Yu-Yun Lee, Julia; Chen, WenChieh

    2008-03-01

    Type 2 segmental manifestation of autosomal dominant dermatoses refers to pronounced segmental lesions superimposed on the ordinary nonsegmental phenotype, indicating loss of heterozygosity occurring at an early stage of embryogenesis. We describe a 20-year-old Taiwanese woman with typical lesions of neurofibromatosis type 1 (NF1) in the form of characteristic café-au-lait spots, neurofibromas, axillary freckling and Lisch nodules. In addition, a giant garment-like or "bathing-trunk" café-au-lait macule involved the lower half of the trunk, the buttocks, and parts of the thighs, being superimposed on the ordinary smaller spots of NF1. This large café-au-lait macule may be best explained as an example of type 2 segmental NF1. A novel mutation (3009delG) in exon 23 was also identified in this patient, which has not yet been described in sporadic and familial NF1.

  12. Allocation of attentional resources during habituation and dishabituation of male sexual arousal.

    PubMed

    Koukounas, E; Over, R

    1999-12-01

    A secondary-task probe (tone) was presented intermittently while men viewed erotic film segments across a session involving 18 trials with the same film segment (habituation), then 2 trials with different film segments (novelty) and 2 trials with reinstatement of the original segment (dishabituation). Reaction time to the tone (an index of the extent processing resources were being committed to the erotic stimulus) shifted during the session in parallel with changes that occurred in penile tumescence and subjective sexual arousal. The decrease in sexual arousal over the first 18 trials in the session was accompanied by a progressively faster reaction to the tone, novel stimulation led to recovery of sexual arousal and a slower reaction to the tone, and on trials 21 and 22 sexual arousal and reaction time levels were above the values that prevailed immediately prior to novel stimulation. Results are discussed with reference to the relationship between habituation and attention.

  13. One size (never) fits all: segment differences observed following a school-based alcohol social marketing program.

    PubMed

    Dietrich, Timo; Rundle-Thiele, Sharyn; Leo, Cheryl; Connor, Jason

    2015-04-01

    According to commercial marketing theory, a market orientation leads to improved performance. Drawing on the social marketing principles of segmentation and audience research, the current study seeks to identify segments to examine responses to a school-based alcohol social marketing program. A sample of 371 year 10 students (aged 14-16 years; 51.4% boys) participated in a prospective (pre-post) multisite alcohol social marketing program. The Game On: Know Alcohol (GO:KA) program included six student-centered, interactive lessons to teach adolescents about alcohol and strategies to abstain from or moderate drinking. A repeated measures design was used. Baseline demographics, drinking attitudes, drinking intentions, and alcohol knowledge were cluster analyzed to identify segments. Change on key program outcome measures and satisfaction with program components were assessed by segment. Three segments were identified: (1) Skeptics, (2) Risky Males, and (3) Good Females. Segments 2 and 3 showed the greatest change in drinking attitudes and intentions. Good Females reported the highest satisfaction with all program components and Skeptics the lowest. The three segments, each differing on psychographic and demographic variables, exhibited different change patterns following participation in GO:KA. Post hoc analysis identified that satisfaction with program components differed by segment, offering opportunities for further research. © 2015, American School Health Association.
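
    The segmentation step (cluster analysis of baseline measures) can be pictured with the short sketch below; the variables are random stand-ins for drinking attitudes, intentions and knowledge, and k-means with three clusters is an illustrative choice rather than the study's actual clustering procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Random stand-ins for baseline drinking attitudes, intentions and knowledge.
rng = np.random.default_rng(0)
baseline = rng.normal(size=(371, 3))

X = StandardScaler().fit_transform(baseline)
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(segments))    # segment sizes
```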

  14. Boundary overlap for medical image segmentation evaluation

    NASA Astrophysics Data System (ADS)

    Yeghiazaryan, Varduhi; Voiculescu, Irina

    2017-03-01

    All medical image segmentation algorithms need to be validated and compared, and yet no evaluation framework is widely accepted within the imaging community. Collections of segmentation results often need to be compared and ranked by their effectiveness. Evaluation measures which are popular in the literature are based on region overlap or boundary distance. None of these are consistent in the way they rank segmentation results: they tend to be sensitive to one or another type of segmentation error (size, location, shape) but no single measure covers all error types. We introduce a new family of measures, with hybrid characteristics. These measures quantify similarity/difference of segmented regions by considering their overlap around the region boundaries. This family is more sensitive than other measures in the literature to combinations of segmentation error types. We compare measure performance on collections of segmentation results sourced from carefully compiled 2D synthetic data, and also on 3D medical image volumes. We show that our new measure: (1) penalises errors successfully, especially those around region boundaries; (2) gives a low similarity score when existing measures disagree, thus avoiding overly inflated scores; and (3) scores segmentation results over a wider range of values. We consider a representative measure from this family and the effect of its only free parameter on error sensitivity, typical value range, and running time.
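
    One plausible member of such a family is a "boundary Dice" computed only within a band around each region's boundary, sketched below; this is a generic illustration in the spirit described, not the paper's exact measure, and the band width stands in for its free parameter.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def boundary_band(mask, tol=2):
    """Band of width ~tol voxels around the boundary of a binary region."""
    mask = mask.astype(bool)
    return binary_dilation(mask, iterations=tol) & ~binary_erosion(mask, iterations=tol)

def boundary_dice(a, b, tol=2):
    """Dice overlap restricted to the two boundary bands (generic sketch)."""
    ba, bb = boundary_band(a, tol), boundary_band(b, tol)
    return 2.0 * np.logical_and(ba, bb).sum() / (ba.sum() + bb.sum())
```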

  15. Total and segmental colon transit time in constipated children assessed by scintigraphy with 111In-DTPA given orally.

    PubMed

    Vattimo, A; Burroni, L; Bertelli, P; Messina, M; Meucci, D; Tota, G

    1993-12-01

    Serial colon scintigraphy using 111In-DTPA (2 MBq) given orally was performed in 39 children referred for constipation, and the total and segmental colon transit times were measured. The bowel movements during the study were recorded and the intervals between defecations (ID) were calculated. This method proved able to identify children with normal colon morphology (no. = 32) and those with dolichocolon (no. = 7). Normal children were not included for ethical reasons and we used the normal range determined by others using x-ray methods (29 +/- 4 hours). Total and segmental colon transit times were found to be prolonged in all children with dolichocolon (TC: 113.55 +/- 41.20 hours; RC: 39.85 +/- 26.39 hours; LC: 43.05 +/- 18.30 hours; RS: 30.66 +/- 26.89 hours). In the group of children with a normal colon shape, 13 presented total and segmental colon transit times within the reference normal range (TC: 27.79 +/- 4.10 hours; RC: 9.11 +/- 2.53 hours; LC: 9.80 +/- 3.50 hours; RS: 8.88 +/- 4.09 hours) and normal bowel function (ID: 23.37 +/- 5.93 hours). In the remaining children, 5 presented prolonged retention in the rectum (RS: 53.36 +/- 29.66 hours), and 14 a prolonged transit time in all segments. A good correlation was found between the transit time and bowel function. From the point of view of radiation dosimetry, the most heavily irradiated organs were the lower large intestine and the ovaries, and the level of radiation burden depended on the colon transit time. We can conclude that the described method is safe, accurate and fully diagnostic.

  16. The Expansion Segments of 28S Ribosomal RNA Extensively Match Human Messenger RNAs

    PubMed Central

    Parker, Michael S.; Balasubramaniam, Ambikaipakan; Sallee, Floyd R.; Parker, Steven L.

    2018-01-01

    Eukaryote ribosomal RNAs (rRNAs) have expanded in the course of phylogeny by addition of nucleotides in specific insertion areas, the expansion segments. These number about 40 in the larger (25–28S) rRNA (up to 2,400 nucleotides), and about 12 in the smaller (18S) rRNA (<700 nucleotides). Expansion of the larger rRNA shows a clear phylogenetic increase, with a dramatic rise in mammals and especially in hominids. Substantial portions of expansion segments in this RNA are not bound to ribosomal proteins, and may engage extraneous interactants, including messenger RNAs (mRNAs). Studies on the ribosome-mRNA interaction have focused on proteins of the smaller ribosomal subunit, with some examination of 18S rRNA. However, the expansion segments of human 28S rRNA show much higher density and numbers of mRNA matches than those of 18S rRNA, and also a higher density and match numbers than its own core parts. We have studied this matching for frequent and potentially stable matches of 7–15 nucleotides. The expansion segments of 28S rRNA average more than 50 matches per mRNA even assuming only 5% of their sequence as available for such interaction. Large expansion segments 7, 15, and 27 of 28S rRNA also have copious long (≥10-nucleotide) matches to most human mRNAs, with frequencies much higher than in other 28S rRNA parts. Expansion segments 7 and 27 and especially segment 15 of 28S rRNA show large size increase in mammals compared to other metazoans, which could reflect a gain of function related to interaction with non-ribosomal partners. The 28S rRNA expansion segment 15 shows very high increments in size, guanosine, and cytidine nucleotide content and mRNA matching in mammals, and especially in hominids. With these segments (but not with other 28S rRNA or any 18S rRNA expansion segments) the density and number of matches are much higher in 5′-terminal than in 3′-terminal untranslated mRNA regions, which may relate to mRNA mobilization via 5′ termini. Matches in the expansion segments 7, 15, and 27 of human 28S rRNA appear as candidates for general interaction with mRNAs, especially those associated with intracellular matrices such as the endoplasmic reticulum. PMID:29563925
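
    The basic matching count behind such an analysis can be sketched as a simple exact k-mer lookup between an expansion-segment sequence and an mRNA sequence; the sequences below are placeholders, and reverse-complement handling, stability scoring and the 5'/3' UTR split are not reproduced.

```python
def kmer_matches(segment_seq, mrna_seq, k=10):
    """Count positions in mrna_seq whose k-mer occurs exactly in segment_seq."""
    seg_kmers = {segment_seq[i:i + k] for i in range(len(segment_seq) - k + 1)}
    return sum(1 for i in range(len(mrna_seq) - k + 1)
               if mrna_seq[i:i + k] in seg_kmers)

# Placeholder sequences, not real rRNA/mRNA fragments.
print(kmer_matches("GGCGCCUGGCAGGGCCUGGC", "AUGGCGCCUGGCAGGACC", k=10))  # -> 4
```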

  17. Twin ruptures grew to build up the giant 2011 Tohoku, Japan, earthquake.

    PubMed

    Maercklin, Nils; Festa, Gaetano; Colombelli, Simona; Zollo, Aldo

    2012-01-01

    The 2011 Tohoku megathrust earthquake had an unexpected size for the region. To image the earthquake rupture in detail, we applied a novel backprojection technique to waveforms from local accelerometer networks. The earthquake began as a small-size twin rupture, slowly propagating mainly updip and triggering the break of a larger-size asperity at shallower depths, resulting in up to 50 m slip and causing high-amplitude tsunami waves. For a long time the rupture remained in a 100-150 km wide slab segment delimited by oceanic fractures, before propagating further to the southwest. The occurrence of large slip at shallow depths likely favored the propagation across contiguous slab segments and contributed to build up a giant earthquake. The lateral variations in the slab geometry may act as geometrical or mechanical barriers finally controlling the earthquake rupture nucleation, evolution and arrest.

  18. Twin ruptures grew to build up the giant 2011 Tohoku, Japan, earthquake

    PubMed Central

    Maercklin, Nils; Festa, Gaetano; Colombelli, Simona; Zollo, Aldo

    2012-01-01

    The 2011 Tohoku megathrust earthquake had an unexpected size for the region. To image the earthquake rupture in detail, we applied a novel backprojection technique to waveforms from local accelerometer networks. The earthquake began as a small-size twin rupture, slowly propagating mainly updip and triggering the break of a larger-size asperity at shallower depths, resulting in up to 50 m slip and causing high-amplitude tsunami waves. For a long time the rupture remained in a 100–150 km wide slab segment delimited by oceanic fractures, before propagating further to the southwest. The occurrence of large slip at shallow depths likely favored the propagation across contiguous slab segments and contributed to build up a giant earthquake. The lateral variations in the slab geometry may act as geometrical or mechanical barriers finally controlling the earthquake rupture nucleation, evolution and arrest. PMID:23050093

  19. Compaction of quasi-one-dimensional elastoplastic materials

    PubMed Central

    Shaebani, M. Reza; Najafi, Javad; Farnudi, Ali; Bonn, Daniel; Habibi, Mehdi

    2017-01-01

    Insight into crumpling or compaction of one-dimensional objects is important for understanding biopolymer packaging and designing innovative technological devices. By compacting various types of wires in rigid confinements and characterizing the morphology of the resulting crumpled structures, here, we report how friction, plasticity and torsion enhance disorder, leading to a transition from coiled to folded morphologies. In the latter case, where folding dominates the crumpling process, we find that reducing the relative wire thickness counter-intuitively causes the maximum packing density to decrease. The segment size distribution gradually becomes more asymmetric during compaction, reflecting an increase of spatial correlations. We introduce a self-avoiding random walk model and verify that the cumulative injected wire length follows a universal dependence on segment size, allowing for the prediction of the efficiency of compaction as a function of material properties, container size and injection force. PMID:28585550

  20. A universal reference sample derived from clone vector for improved detection of differential gene expression

    PubMed Central

    Khan, Rishi L; Gonye, Gregory E; Gao, Guang; Schwaber, James S

    2006-01-01

    Background Using microarrays by co-hybridizing two samples labeled with different dyes enables differential gene expression measurements and comparisons across slides while controlling for within-slide variability. Typically one dye produces weaker signal intensities than the other often causing signals to be undetectable. In addition, undetectable spots represent a large problem for two-color microarray designs and most arrays contain at least 40% undetectable spots even when labeled with reference samples such as Stratagene's Universal Reference RNAs™. Results We introduce a novel universal reference sample that produces strong signal for all spots on the array, increasing the average fraction of detectable spots to 97%. Maximizing detectable spots on the reference image channel also decreases the variability of microarray data allowing for reliable detection of smaller differential gene expression changes. The reference sample is derived from sequence contained in the parental EST clone vector pT7T3D-Pac and is called vector RNA (vRNA). We show that vRNA can also be used for quality control of microarray printing and PCR product quality, detection of hybridization anomalies, and simplification of spot finding and segmentation tasks. This reference sample can be made inexpensively in large quantities as a renewable resource that is consistent across experiments. Conclusion Results of this study show that vRNA provides a useful universal reference that yields high signal for almost all spots on a microarray, reduces variation and allows for comparisons between experiments and laboratories. Further, it can be used for quality control of microarray printing and PCR product quality, detection of hybridization anomalies, and simplification of spot finding and segmentation tasks. This type of reference allows for detection of small changes in differential expression while reference designs in general allow for large-scale multivariate experimental designs. vRNA in combination with reference designs enable systems biology microarray experiments of small physiologically relevant changes. PMID:16677381

  1. Enhancing the effectiveness of antismoking messages via self-congruent appeals.

    PubMed

    Chang, Chingching

    2009-01-01

    A self-congruent effect model was applied to understand adolescents' responses to antismoking advertising that referred to the self or others. Experiment 1 showed that self-referring ads generated more negative smoking attitudes than other-referring ads among adolescents with independent self-construals, whereas other-referring ads generated more negative smoking attitudes than self-referring ads among adolescents with interdependent self-construals. A survey further showed that smokers rated themselves higher on a measure of independent self-construal than nonsmokers. Experiment 2 then found that self-referring ads are more effective than other-referring ads for smokers, who have independent self-construals. Findings supported the idea that health communication campaign designers can maximize message effectiveness by developing different messages for different target segments of the population based on their self-construals.

  2. The Segmental Morphometric Properties of the Horse Cervical Spinal Cord: A Study of Cadaver

    PubMed Central

    Bahar, Sadullah; Bolat, Durmus; Selcuk, Muhammet Lutfi

    2013-01-01

    Although the cervical spinal cord (CSC) of the horse has particular importance in diseases of CNS, there is very little information about its segmental morphometry. The objective of the present study was to determine the morphometric features of the CSC segments in the horse and possible relationships among the morphometric features. The segmented CSC from five mature animals was used. Length, weight, diameter, and volume measurements of the segments were performed macroscopically. Lengths and diameters of segments were measured histologically, and area and volume measurements were performed using stereological methods. The length, weight, and volume of the CSC were 61.6 ± 3.2 cm, 107.2 ± 10.4 g, and 95.5 ± 8.3 cm3, respectively. The length of the segments was increased from C1 to C3, while it decreased from C3 to C8. The gross section (GS), white matter (WM), grey matter (GM), dorsal horn (DH), and ventral horn (VH) had the largest cross-section areas at C8. The highest volume was found for the total segment and WM at C4, GM, DH, and VH at C7, and the central canal (CC) at C3. The data obtained not only contribute to the knowledge of the normal anatomy of the CSC but may also provide reference data for veterinary pathologists and clinicians. PMID:23476145

  3. [Object-oriented segmentation and classification of forest gap based on QuickBird remote sensing image.

    PubMed

    Mao, Xue Gang; Du, Zi Han; Liu, Jia Qian; Chen, Shu Xin; Hou, Ji Yu

    2018-01-01

    Traditional field investigation and manual interpretation cannot satisfy the need for forest gap extraction at regional scale. High-spatial-resolution remote sensing imagery makes regional forest gap extraction possible. In this study, we used an object-oriented classification method to segment and classify forest gaps based on QuickBird high-resolution optical remote sensing imagery of the Jiangle National Forestry Farm, Fujian Province. In the object-oriented classification, 10 scales (10-100, with a step length of 10) were adopted to segment the QuickBird image, and the intersection area of the reference object (RA_or) and the intersection area of the segmented object (RA_os) were adopted to evaluate the segmentation result at each scale. For the segmentation result at each scale, 16 spectral characteristics and a support vector machine (SVM) classifier were further used to classify forest gaps, non-forest gaps and others. The results showed that the optimal segmentation scale was 40, where RA_or was equal to RA_os. The accuracy difference between the maximum and minimum across segmentation scales was 22%. At the optimal scale, the overall classification accuracy was 88% (Kappa=0.82) with the SVM classifier. Combining high-resolution remote sensing imagery with an object-oriented classification method could replace traditional field investigation and manual interpretation for identifying and classifying forest gaps at regional scale.
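
    A minimal sketch of the scale-selection criterion above, assuming RA_or and RA_os denote the intersection area normalized by the reference-object area and by the segmented-object area, respectively (binary masks stand in for the segmented image objects; these definitions are an assumption):

      import numpy as np

      def ra_or_ra_os(reference: np.ndarray, segmented: np.ndarray):
          """Intersection area of two binary masks normalized by the reference
          area (RA_or) and by the segmented-object area (RA_os)."""
          inter = np.logical_and(reference, segmented).sum()
          return inter / reference.sum(), inter / segmented.sum()

      # Hypothetical usage: pick the scale whose segmentation best balances both ratios.
      # masks_by_scale = {10: seg10, 20: seg20, ...}   # one binary mask per scale
      # best_scale = min(masks_by_scale,
      #                  key=lambda s: abs(np.subtract(*ra_or_ra_os(ref, masks_by_scale[s]))))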

  4. Analysis of a kinetic multi-segment foot model. Part I: Model repeatability and kinematic validity.

    PubMed

    Bruening, Dustin A; Cooney, Kevin M; Buczek, Frank L

    2012-04-01

    Kinematic multi-segment foot models are still evolving, but have seen increased use in clinical and research settings. The addition of kinetics may increase knowledge of foot and ankle function as well as influence multi-segment foot model evolution; however, previous kinetic models are too complex for clinical use. In this study we present a three-segment kinetic foot model and thorough evaluation of model performance during normal gait. In this first of two companion papers, model reference frames and joint centers are analyzed for repeatability, joint translations are measured, segment rigidity characterized, and sample joint angles presented. Within-tester and between-tester repeatability were first assessed using 10 healthy pediatric participants, while kinematic parameters were subsequently measured on 17 additional healthy pediatric participants. Repeatability errors were generally low for all sagittal plane measures as well as transverse plane Hindfoot and Forefoot segments (median < 3°), while the least repeatable orientations were the Hindfoot coronal plane and Hallux transverse plane. Joint translations were generally less than 2 mm in any one direction, while segment rigidity analysis suggested rigid body behavior for the Shank and Hindfoot, with the Forefoot violating the rigid body assumptions in terminal stance/pre-swing. Joint excursions were consistent with previously published studies. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. MRBrainS Challenge: Online Evaluation Framework for Brain Image Segmentation in 3T MRI Scans.

    PubMed

    Mendrik, Adriënne M; Vincken, Koen L; Kuijf, Hugo J; Breeuwer, Marcel; Bouvy, Willem H; de Bresser, Jeroen; Alansary, Amir; de Bruijne, Marleen; Carass, Aaron; El-Baz, Ayman; Jog, Amod; Katyal, Ranveer; Khan, Ali R; van der Lijn, Fedde; Mahmood, Qaiser; Mukherjee, Ryan; van Opbroek, Annegreet; Paneri, Sahil; Pereira, Sérgio; Persson, Mikael; Rajchl, Martin; Sarikaya, Duygu; Smedby, Örjan; Silva, Carlos A; Vrooman, Henri A; Vyas, Saurabh; Wang, Chunliang; Zhao, Liang; Biessels, Geert Jan; Viergever, Max A

    2015-01-01

    Many methods have been proposed for tissue segmentation in brain MRI scans. The multitude of methods proposed complicates the choice of one method above others. We have therefore established the MRBrainS online evaluation framework for evaluating (semi)automatic algorithms that segment gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) on 3T brain MRI scans of elderly subjects (65-80 y). Participants apply their algorithms to the provided data, after which their results are evaluated and ranked. Full manual segmentations of GM, WM, and CSF are available for all scans and used as the reference standard. Five datasets are provided for training and fifteen for testing. The evaluated methods are ranked based on their overall performance to segment GM, WM, and CSF and evaluated using three evaluation metrics (Dice, H95, and AVD) and the results are published on the MRBrainS13 website. We present the results of eleven segmentation algorithms that participated in the MRBrainS13 challenge workshop at MICCAI, where the framework was launched, and three commonly used freeware packages: FreeSurfer, FSL, and SPM. The MRBrainS evaluation framework provides an objective and direct comparison of all evaluated algorithms and can aid in selecting the best performing method for the segmentation goal at hand.

  6. MRBrainS Challenge: Online Evaluation Framework for Brain Image Segmentation in 3T MRI Scans

    PubMed Central

    Mendrik, Adriënne M.; Vincken, Koen L.; Kuijf, Hugo J.; Breeuwer, Marcel; Bouvy, Willem H.; de Bresser, Jeroen; Alansary, Amir; de Bruijne, Marleen; Carass, Aaron; El-Baz, Ayman; Jog, Amod; Katyal, Ranveer; Khan, Ali R.; van der Lijn, Fedde; Mahmood, Qaiser; Mukherjee, Ryan; van Opbroek, Annegreet; Paneri, Sahil; Pereira, Sérgio; Rajchl, Martin; Sarikaya, Duygu; Smedby, Örjan; Silva, Carlos A.; Vrooman, Henri A.; Vyas, Saurabh; Wang, Chunliang; Zhao, Liang; Biessels, Geert Jan; Viergever, Max A.

    2015-01-01

    Many methods have been proposed for tissue segmentation in brain MRI scans. The multitude of methods proposed complicates the choice of one method above others. We have therefore established the MRBrainS online evaluation framework for evaluating (semi)automatic algorithms that segment gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) on 3T brain MRI scans of elderly subjects (65–80 y). Participants apply their algorithms to the provided data, after which their results are evaluated and ranked. Full manual segmentations of GM, WM, and CSF are available for all scans and used as the reference standard. Five datasets are provided for training and fifteen for testing. The evaluated methods are ranked based on their overall performance to segment GM, WM, and CSF and evaluated using three evaluation metrics (Dice, H95, and AVD) and the results are published on the MRBrainS13 website. We present the results of eleven segmentation algorithms that participated in the MRBrainS13 challenge workshop at MICCAI, where the framework was launched, and three commonly used freeware packages: FreeSurfer, FSL, and SPM. The MRBrainS evaluation framework provides an objective and direct comparison of all evaluated algorithms and can aid in selecting the best performing method for the segmentation goal at hand. PMID:26759553

  7. Twelve automated thresholding methods for segmentation of PET images: a phantom study.

    PubMed

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M

    2012-06-21

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
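
    For reference, the Ridler method named above is the classic isodata clustering threshold; a generic sketch (not the authors' implementation) is shown next to the conventional 42%-of-maximum PET threshold used as the comparison standard:

      import numpy as np

      def ridler_threshold(image: np.ndarray, tol: float = 1e-3, max_iter: int = 100) -> float:
          """Ridler-Calvard (isodata) threshold: iterate
          t <- (mean(values <= t) + mean(values > t)) / 2 until stable."""
          values = image.ravel().astype(float)
          t = values.mean()
          for _ in range(max_iter):
              lo, hi = values[values <= t], values[values > t]
              if lo.size == 0 or hi.size == 0:
                  break
              t_new = 0.5 * (lo.mean() + hi.mean())
              if abs(t_new - t) < tol:
                  return t_new
              t = t_new
          return t

      # Conventional PET reference used in the study for comparison:
      # threshold_42 = 0.42 * image.max()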

  8. Twelve automated thresholding methods for segmentation of PET images: a phantom study

    NASA Astrophysics Data System (ADS)

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M.

    2012-06-01

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.

  9. A medical imaging analysis system for trigger finger using an adaptive texture-based active shape model (ATASM) in ultrasound images

    PubMed Central

    Chuang, Bo-I; Kuo, Li-Chieh; Yang, Tai-Hua; Su, Fong-Chin; Jou, I-Ming; Lin, Wei-Jr; Sun, Yung-Nien

    2017-01-01

    Trigger finger has become a prevalent disease that greatly affects occupational activity and daily life. Ultrasound imaging is commonly used for the clinical diagnosis of trigger finger severity. Due to image property variations, traditional methods cannot effectively segment the finger joint’s tendon structure. In this study, an adaptive texture-based active shape model method is used for segmenting the tendon and synovial sheath. Adapted weights are applied in the segmentation process to adjust the contribution of energy terms depending on image characteristics at different positions. The pathology is then determined according to the wavelet and co-occurrence texture features of the segmented tendon area. In the experiments, the segmentation results have fewer errors, with respect to the ground truth, than contours drawn by regular users. The mean values of the absolute segmentation difference of the tendon and synovial sheath are 3.14 and 4.54 pixels, respectively. The average accuracy of pathological determination is 87.14%. The segmentation results are acceptable for both clear- and fuzzy-boundary cases across all 74 images, and the symptom classifications of 42 cases also provide a good reference for diagnosis according to the expert clinicians’ opinions. PMID:29077737

  10. Segmentation-based retrospective shading correction in fluorescence microscopy E. coli images for quantitative analysis

    NASA Astrophysics Data System (ADS)

    Mai, Fei; Chang, Chunqi; Liu, Wenqing; Xu, Weichao; Hung, Yeung S.

    2009-10-01

    Due to the inherent imperfections in the imaging process, fluorescence microscopy images often suffer from spurious intensity variations, usually referred to as intensity inhomogeneity, intensity non-uniformity, shading or bias field. In this paper, a retrospective shading correction method for fluorescence microscopy Escherichia coli (E. coli) images is proposed based on the segmentation result. Segmentation and shading correction are coupled together, so we iteratively correct the shading effects based on the segmentation result and refine the segmentation by segmenting the image after shading correction. A fluorescence microscopy E. coli image can be segmented (based on its intensity value) into two classes: the background and the cells, where the intensity variation within each class is close to zero if there is no shading. Therefore, we make use of this characteristic to correct the shading in each iteration. Shading is mathematically modeled as a multiplicative component and an additive noise component. The additive component is removed by a denoising process, and the multiplicative component is estimated using a fast algorithm to minimize the intra-class intensity variation. We tested our method on synthetic images and real fluorescence E. coli images. It works well not only for visual inspection, but also for numerical evaluation. Our proposed method should be useful for further quantitative analysis, especially for protein expression value comparison.
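
    The following sketch illustrates the coupled segmentation/shading-correction idea under simplifying assumptions (a crude global threshold for the two classes and a Gaussian-smoothed multiplicative field); it is not the authors' exact estimator:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def correct_shading(image: np.ndarray, n_iter: int = 5, sigma: float = 30.0):
          """Alternate (1) a two-class split of the pixels and (2) estimation of
          a multiplicative shading field as a smoothed ratio between the image
          and a piecewise-constant 'ideal' image built from the class means.
          Assumes both classes (cells and background) are present."""
          corrected = image.astype(float)
          for _ in range(n_iter):
              t = corrected.mean()                      # crude global two-class split
              cells = corrected > t
              ideal = np.where(cells, corrected[cells].mean(),
                               corrected[~cells].mean())
              shading = gaussian_filter(corrected / ideal, sigma)
              corrected = corrected / np.maximum(shading, 1e-6)
          return corrected, cells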

  11. Object-oriented feature extraction approach for mapping supraglacial debris in Schirmacher Oasis using very high-resolution satellite data

    NASA Astrophysics Data System (ADS)

    Jawak, Shridhar D.; Jadhav, Ajay; Luis, Alvarinho J.

    2016-05-01

    Supraglacial debris was mapped in the Schirmacher Oasis, east Antarctica, by using WorldView-2 (WV-2) high resolution optical remote sensing data consisting of 8-band calibrated Gram Schmidt (GS)-sharpened and atmospherically corrected WV-2 imagery. This study is a preliminary attempt to develop an object-oriented rule set to extract supraglacial debris for Antarctic region using 8-spectral band imagery. Supraglacial debris was manually digitized from the satellite imagery to generate the ground reference data. Several trials were performed using few existing traditional pixel-based classification techniques and color-texture based object-oriented classification methods to extract supraglacial debris over a small domain of the study area. Multi-level segmentation and attributes such as scale, shape, size, compactness along with spectral information from the data were used for developing the rule set. The quantitative analysis of error was carried out against the manually digitized reference data to test the practicability of our approach over the traditional pixel-based methods. Our results indicate that OBIA-based approach (overall accuracy: 93%) for extracting supraglacial debris performed better than all the traditional pixel-based methods (overall accuracy: 80-85%). The present attempt provides a comprehensive improved method for semiautomatic feature extraction in supraglacial environment and a new direction in the cryospheric research.

  12. A fast and efficient segmentation scheme for cell microscopic image.

    PubMed

    Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H

    2007-04-27

    Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast enough to process huge quantities of images. Recent studies have focused on improving segmentation quality. Several segmentation schemes have good quality, but their processing time is too expensive to deal with a great number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial since it requires most of the processing time needed to segment an image. The main contribution of this work is focused on how to reduce the complexity of decision functions produced by support vector machines (SVM) while preserving recognition rate. Vector quantization is used in order to reduce the inherent redundancy present in huge pixel databases (i.e. images with expert pixel segmentation). Hybrid color space design is also used in order to improve the data set size reduction rate and the recognition rate. A new decision function quality criterion is defined to select a good trade-off between recognition rate and processing time of the pixel decision function. The first results of this study show that fast and efficient pixel classification with SVM is possible. Moreover, posterior class pixel probability estimation is easy to compute with the Platt method. A new segmentation scheme using probabilistic pixel classification has then been developed. This scheme has several free parameters whose automatic selection must be dealt with, but criteria for evaluating segmentation quality are not well adapted to cell segmentation, especially when comparison with expert pixel segmentation must be achieved. Another important contribution of this paper is the definition of a new quality criterion for the evaluation of cell segmentation. The results presented here show that selecting the free parameters of the segmentation scheme by optimisation of the new cell segmentation quality criterion produces efficient cell segmentation.
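
    A minimal sketch of the general pipeline described above, using scikit-learn: k-means vector quantization to thin the pixel database, then an SVM whose Platt-scaled posterior probabilities drive probabilistic pixel classification. Parameters, feature construction and prototype counts are placeholders, not the paper's exact settings.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.svm import SVC

      def train_pixel_svm(features: np.ndarray, labels: np.ndarray, n_proto: int = 64):
          """Vector-quantise each class of the pixel database with k-means to cut
          redundancy, then train an SVM on the prototypes; probability=True
          enables Platt-scaled posterior class probabilities."""
          protos, proto_labels = [], []
          for c in np.unique(labels):
              km = KMeans(n_clusters=n_proto, n_init=10).fit(features[labels == c])
              protos.append(km.cluster_centers_)
              proto_labels.append(np.full(n_proto, c))
          svm = SVC(kernel="rbf", probability=True)
          svm.fit(np.vstack(protos), np.concatenate(proto_labels))
          return svm

      # posteriors = train_pixel_svm(X_train, y_train).predict_proba(pixel_features)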

  13. Changepoint detection in base-resolution methylome data reveals a robust signature of methylated domain landscape.

    PubMed

    Yokoyama, Takao; Miura, Fumihito; Araki, Hiromitsu; Okamura, Kohji; Ito, Takashi

    2015-08-12

    Base-resolution methylome data generated by whole-genome bisulfite sequencing (WGBS) is often used to segment the genome into domains with distinct methylation levels. However, most segmentation methods include many parameters to be carefully tuned and/or fail to exploit the unsurpassed resolution of the data. Furthermore, there is no simple method that displays the composition of the domains to grasp global trends in each methylome. We propose to use changepoint detection for domain demarcation based on base-resolution methylome data. While the proposed method segments the methylome in a largely comparable manner to conventional approaches, it has only a single parameter to be tuned. Furthermore, it fully exploits the base-resolution of the data to enable simultaneous detection of methylation changes in even contrasting size ranges, such as focal hypermethylation and global hypomethylation in cancer methylomes. We also propose a simple plot termed methylated domain landscape (MDL) that globally displays the size, the methylation level and the number of the domains thus defined, thereby enabling one to intuitively grasp trends in each methylome. Since the pattern of MDL often reflects cell lineages and is largely unaffected by data size, it can serve as a novel signature of methylome. Changepoint detection in base-resolution methylome data followed by MDL plotting provides a novel method for methylome characterization and will facilitate global comparison among various WGBS data differing in size and even species origin.
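
    A toy sketch of single-parameter changepoint segmentation on per-CpG methylation levels (greedy binary segmentation with a penalty; the authors' actual algorithm may differ):

      import numpy as np

      def binary_segmentation(x: np.ndarray, penalty: float):
          """Recursively split a segment at the point that most reduces the
          squared error around the segment mean, if the reduction exceeds
          `penalty` (the single tuning parameter). Returns sorted breakpoints."""
          def cost(seg):
              return ((seg - seg.mean()) ** 2).sum() if seg.size else 0.0

          def split(lo, hi, out):
              base = cost(x[lo:hi])
              best_gain, best_k = 0.0, None
              for k in range(lo + 1, hi):
                  gain = base - cost(x[lo:k]) - cost(x[k:hi])
                  if gain > best_gain:
                      best_gain, best_k = gain, k
              if best_k is not None and best_gain > penalty:
                  split(lo, best_k, out)
                  out.append(best_k)
                  split(best_k, hi, out)

          breakpoints = []
          split(0, len(x), breakpoints)
          return breakpoints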

  14. GPU-based relative fuzzy connectedness image segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhuge, Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.

    2013-01-15

    Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.

  15. [Target volume segmentation of PET images by an iterative method based on threshold value].

    PubMed

    Castro, P; Huerga, C; Glaría, L A; Plaza, R; Rodado, S; Marín, M D; Mañas, A; Serrada, A; Núñez, L

    2014-01-01

    An automatic segmentation method is presented for PET images, based on an iterative approximation by threshold value that includes the influence of both lesion size and the background present during the acquisition. Optimal threshold values representing a correct segmentation of volumes were determined from a PET phantom study that contained spheres of different sizes and different known radiation environments. These optimal values were normalized to background and adjusted by regression techniques to a two-variable function of lesion volume and signal-to-background ratio (SBR). This adjustment function was used to build an iterative segmentation method and then, based on this method, a procedure for automatic delineation was proposed. This procedure was validated on phantom images and its viability was confirmed by applying it retrospectively to two oncology patients. The resulting adjustment function depended linearly on the SBR and inversely (negatively) on the volume. During the validation of the proposed method, it was found that the volume deviations with respect to the real value and the CT volume were below 10% and 9%, respectively, except for lesions with a volume below 0.6 ml. The automatic segmentation method proposed can be applied in clinical practice to tumor radiotherapy treatment planning in a simple and reliable way, with a precision close to the resolution of PET images. Copyright © 2013 Elsevier España, S.L.U. and SEMNIM. All rights reserved.
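
    A sketch of the iterative threshold loop described above; the coefficients of the adjustment function f(volume, SBR) are hypothetical placeholders for the regression fit obtained from the phantom study, and the background-corrected threshold form is an assumption:

      import numpy as np

      def iterative_threshold_segmentation(pet: np.ndarray, background: float,
                                           voxel_ml: float, n_iter: int = 20):
          """Alternate between applying a fractional threshold derived from
          f(volume, SBR) and re-measuring the segmented volume, until the
          volume estimate stabilises."""
          def f(volume_ml, sbr):
              # placeholder form: linear in SBR, decreasing with volume
              return 0.30 + 0.05 * sbr - 0.02 * volume_ml

          volume_ml = 10.0                      # initial volume guess (ml)
          for _ in range(n_iter):
              sbr = pet.max() / background
              thr = background + f(volume_ml, sbr) * (pet.max() - background)
              mask = pet >= thr
              new_volume = mask.sum() * voxel_ml
              if abs(new_volume - volume_ml) < 0.01:
                  break
              volume_ml = new_volume
          return mask, volume_ml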

  16. LDR segmented mirror technology assessment study

    NASA Technical Reports Server (NTRS)

    Krim, M.; Russo, J.

    1983-01-01

    In the mid-1990s, NASA plans to orbit a giant telescope, whose aperture may be as great as 30 meters, for infrared and sub-millimeter astronomy. Its primary mirror will be deployed or assembled in orbit from a mosaic of possibly hundreds of mirror segments. Each segment must be shaped to precise curvature tolerances so that diffraction-limited performance will be achieved at 30 microns (nominal operating wavelength). All panels must lie within 1 micron of a theoretical surface described by the optical prescription of the telescope's primary mirror. To attain diffraction-limited performance, the issues of alignment and/or position sensing, position control to micron tolerances, and structural, thermal, and mechanical considerations for stowing, deploying, and erecting the reflector must be resolved. Radius of curvature precision influences panel size, shape, material, and type of construction. Two superior material choices emerged: fused quartz (sufficiently homogeneous with respect to thermal expansivity to permit a thin shell substrate to be drape molded between graphite dies to a precise enough off-axis asphere for optical finishing of the as-received segment) and Pyrex or Duran (less expensive than quartz and formable at lower temperatures). The optimal reflector panel size is between 1-1/2 and 2 meters. Making one two-meter mirror every two weeks requires new approaches to manufacturing off-axis parabolic or aspheric segments (drape molding on precision dies and subsequent finishing on a nonrotationally symmetric dependent machine). Proof-of-concept developmental programs were identified to prove the feasibility of the materials and manufacturing ideas.

  17. Do Indo-Asians have smaller coronary arteries?

    PubMed

    Lip, G Y; Rathore, V S; Katira, R; Watson, R D; Singh, S P

    1999-08-01

    There is a widespread belief that coronary arteries are smaller in Indo-Asians. The aim of the present study was to compare the size of atheroma-free proximal and distal epicardial coronary arteries of Indo-Asians and Caucasians. We analysed normal coronary angiograms from 77 Caucasians and 39 Indo-Asians. The two groups were comparable for dominance of the coronary arteries. Indo-Asian patients had generally smaller coronary arteries, with a statistically significant difference in the mean diameters of the left main coronary artery, proximal, mid and left anterior descending, and proximal and distal right coronary artery segments. There was a non-significant trend towards smaller coronary artery segment diameters for the distal left anterior descending, proximal and distal circumflex, and obtuse marginal artery segments. However, after correction for body surface area, none of these differences in size were statistically significant. Thus, the smaller coronary arteries in Indo-Asian patients were explained by body size alone and were not due to ethnic origin per se. This finding nevertheless has important therapeutic implications, since smaller coronary arteries may give rise to technical difficulties during bypass graft and intervention procedures such as percutaneous transluminal coronary angioplasty, stents and atherectomy. On smaller arteries, atheroma may also give an impression of more severe disease than on larger diameter arteries.

  18. Virtual modeling of polycrystalline structures of materials using particle packing algorithms and Laguerre cells

    NASA Astrophysics Data System (ADS)

    Morfa, Carlos Recarey; Farias, Márcio Muniz de; Morales, Irvin Pablo Pérez; Navarra, Eugenio Oñate Ibañez de; Valera, Roberto Roselló

    2018-04-01

    The influence of the microstructural heterogeneities is an important topic in the study of materials. In the context of computational mechanics, it is therefore necessary to generate virtual materials that are statistically equivalent to the microstructure under study, and to connect that geometrical description to the different numerical methods. Herein, the authors present a procedure to model continuous solid polycrystalline materials, such as rocks and metals, preserving their representative statistical grain size distribution. The first phase of the procedure consists of segmenting an image of the material into adjacent polyhedral grains representing the individual crystals. This segmentation allows estimating the grain size distribution, which is used as the input for an advancing front sphere packing algorithm. Finally, Laguerre diagrams are calculated from the obtained sphere packings. The centers of the spheres give the centers of the Laguerre cells, and their radii determine the cells' weights. The cell sizes in the obtained Laguerre diagrams have a distribution similar to that of the grains obtained from the image segmentation. That is why those diagrams are a convenient model of the original crystalline structure. The above-outlined procedure has been used to model real polycrystalline metallic materials. The main difference with previously existing methods lies in the use of a better particle packing algorithm.

  19. Wavelet-based Encoding Scheme for Controlling Size of Compressed ECG Segments in Telecardiology Systems.

    PubMed

    Al-Busaidi, Asiya M; Khriji, Lazhar; Touati, Farid; Rasid, Mohd Fadlee; Mnaouer, Adel Ben

    2017-09-12

    One of the major issues in time-critical medical applications using wireless technology is the size of the payload packet, which is generally designed to be very small to improve the transmission process. Using small packets to transmit continuous ECG data is still costly. Thus, data compression is commonly used to reduce the huge amount of ECG data transmitted through telecardiology devices. In this paper, a new ECG compression scheme is introduced to ensure that the compressed ECG segments fit into the available limited payload packets, while maintaining a fixed compression ratio (CR) to preserve the diagnostic information. The scheme automatically divides the ECG block into segments, while keeping the other compression parameters fixed. This scheme adopts the discrete wavelet transform (DWT) method to decompose the ECG data, the bit-field preserving (BFP) method to preserve the quality of the DWT coefficients, and a modified run-length encoding (RLE) scheme to encode the coefficients. The proposed dynamic compression scheme showed promising results with a percentage packet reduction (PR) of about 85.39% at low percentage root-mean-square difference (PRD) values, less than 1%. ECG records from the MIT-BIH Arrhythmia Database were used to test the proposed method. The simulation results showed promising performance that satisfies the needs of portable telecardiology systems, like the limited payload size and low power consumption.
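
    A simplified sketch of the DWT-plus-run-length idea using PyWavelets; the bit-field preserving step and the packet-size-driven segment splitting of the actual scheme are omitted, and the coefficient-keeping rule is an assumption:

      import numpy as np
      import pywt  # PyWavelets

      def compress_segment(segment: np.ndarray, wavelet: str = "db4",
                           level: int = 4, keep: float = 0.1):
          """Decompose one ECG segment, zero the smallest coefficients, and
          run-length encode the zero runs. Returns the encoded stream and the
          per-band coefficient counts needed for reconstruction."""
          coeffs = pywt.wavedec(segment, wavelet, level=level)
          flat = np.concatenate(coeffs)
          thr = np.quantile(np.abs(flat), 1.0 - keep)   # keep ~10% largest coefficients
          flat[np.abs(flat) < thr] = 0.0

          encoded, i = [], 0
          while i < flat.size:                          # simple RLE of zero runs
              if flat[i] == 0.0:
                  j = i
                  while j < flat.size and flat[j] == 0.0:
                      j += 1
                  encoded.append(("Z", j - i))
                  i = j
              else:
                  encoded.append(("V", float(flat[i])))
                  i += 1
          return encoded, [c.size for c in coeffs]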

  20. Accurate segmentation of lung fields on chest radiographs using deep convolutional networks

    NASA Astrophysics Data System (ADS)

    Arbabshirani, Mohammad R.; Dallal, Ahmed H.; Agarwal, Chirag; Patel, Aalpan; Moore, Gregory

    2017-02-01

    Accurate segmentation of lung fields on chest radiographs is the primary step for computer-aided detection of various conditions such as lung cancer and tuberculosis. The size, shape and texture of lung fields are key parameters for chest X-ray (CXR) based lung disease diagnosis in which the lung field segmentation is a significant primary step. Although many methods have been proposed for this problem, lung field segmentation remains as a challenge. In recent years, deep learning has shown state of the art performance in many visual tasks such as object detection, image classification and semantic image segmentation. In this study, we propose a deep convolutional neural network (CNN) framework for segmentation of lung fields. The algorithm was developed and tested on 167 clinical posterior-anterior (PA) CXR images collected retrospectively from picture archiving and communication system (PACS) of Geisinger Health System. The proposed multi-scale network is composed of five convolutional and two fully connected layers. The framework achieved IOU (intersection over union) of 0.96 on the testing dataset as compared to manual segmentation. The suggested framework outperforms state of the art registration-based segmentation by a significant margin. To our knowledge, this is the first deep learning based study of lung field segmentation on CXR images developed on a heterogeneous clinical dataset. The results suggest that convolutional neural networks could be employed reliably for lung field segmentation.
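
    The IOU metric reported above can be computed directly from binary masks; a minimal sketch:

      import numpy as np

      def iou(pred: np.ndarray, ref: np.ndarray) -> float:
          """Intersection over union between predicted and reference lung-field masks."""
          pred, ref = pred.astype(bool), ref.astype(bool)
          union = np.logical_or(pred, ref).sum()
          return np.logical_and(pred, ref).sum() / union if union else 1.0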

  1. Track structure model of microscopic energy deposition by protons and heavy ions in segments of neuronal cell dendrites represented by cylinders or spheres

    PubMed Central

    Alp, Murat; Cucinotta, Francis A.

    2017-01-01

    Changes to cognition, including memory, following radiation exposure are a concern for cosmic ray exposures to astronauts and in Hadron therapy with proton and heavy ion beams. The purpose of the present work is to develop computational methods to evaluate microscopic energy deposition (ED) in volumes representative of neuron cell structures, including segments of dendrites and spines, using a stochastic track structure model. A challenge for biophysical models of neuronal damage is the large sizes (>100 μm) and variability in volumes of possible dendritic segments and pre-synaptic elements (spines and filopodia). We consider cylindrical and spherical microscopic volumes of varying geometric parameters and aspect ratios from 0.5 to 5 irradiated by protons, and 3He and 12C particles at energies corresponding to a distance of 1 cm to the Bragg peak, which represent particles of interest in Hadron therapy as well as space radiation exposure. We investigate the optimal axis length of dendritic segments to evaluate microscopic ED and hit probabilities along the dendritic branches at a given macroscopic dose. Because of large computation times to analyze ED in volumes of varying sizes, we developed an analytical method to find the mean primary dose in spheres that can guide numerical methods to find the primary dose distribution for cylinders. Considering cylindrical segments of varying aspect ratio at constant volume, we assess the chord length distribution, mean number of hits and ED profiles by primary particles and secondary electrons (δ-rays). For biophysical modeling applications, segments on dendritic branches are proposed to have equal diameters and axes lengths along the varying diameter of a dendritic branch. PMID:28554507
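
    As a quick worked check related to the chord-length analysis above, Cauchy's mean-chord formula 4V/S gives the average path length of isotropic random traversals through a convex target such as a cylindrical dendritic segment (illustrative only, not the authors' track-structure code):

      import math

      def mean_chord_length_cylinder(radius_um: float, height_um: float) -> float:
          """Cauchy mean chord length 4V/S specialised to a cylinder, useful for
          sanity-checking mean traversal lengths at different aspect ratios."""
          volume = math.pi * radius_um ** 2 * height_um
          surface = 2 * math.pi * radius_um * (radius_um + height_um)
          return 4 * volume / surface

      # e.g. compare aspect ratios 0.5-5 at fixed volume:
      # print(mean_chord_length_cylinder(1.0, 5.0), mean_chord_length_cylinder(2.0, 1.25))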

  2. Segment-Wise Genome-Wide Association Analysis Identifies a Candidate Region Associated with Schizophrenia in Three Independent Samples

    PubMed Central

    Rietschel, Marcella; Mattheisen, Manuel; Breuer, René; Schulze, Thomas G.; Nöthen, Markus M.; Levinson, Douglas; Shi, Jianxin; Gejman, Pablo V.; Cichon, Sven; Ophoff, Roel A.

    2012-01-01

    Recent studies suggest that variation in complex disorders (e.g., schizophrenia) is explained by a large number of genetic variants with small effect size (Odds Ratio∼1.05–1.1). The statistical power to detect these genetic variants in Genome Wide Association (GWA) studies with large numbers of cases and controls (∼15,000) is still low. As it will be difficult to further increase sample size, we decided to explore an alternative method for analyzing GWA data in a study of schizophrenia, dramatically reducing the number of statistical tests. The underlying hypothesis was that at least some of the genetic variants related to a common outcome are collocated in segments of chromosomes at a wider scale than single genes. Our approach was therefore to study the association between relatively large segments of DNA and disease status. An association test was performed for each SNP and the number of nominally significant tests in a segment was counted. We then performed a permutation-based binomial test to determine whether this region contained significantly more nominally significant SNPs than expected under the null hypothesis of no association, taking linkage into account. Genome Wide Association data of three independent schizophrenia case/control cohorts with European ancestry (Dutch, German, and US) using segments of DNA with variable length (2 to 32 Mbp) was analyzed. Using this approach we identified a region at chromosome 5q23.3-q31.3 (128–160 Mbp) that was significantly enriched with nominally associated SNPs in three independent case-control samples. We conclude that considering relatively wide segments of chromosomes may reveal reliable relationships between the genome and schizophrenia, suggesting novel methodological possibilities as well as raising theoretical questions. PMID:22723893
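
    A sketch of a permutation-style enrichment test for one DNA segment, in the spirit of the approach above; the circular-shift null used here to roughly respect local correlation (LD) is an assumption, and the paper's exact null construction may differ:

      import numpy as np

      def segment_enrichment_p(pvals: np.ndarray, seg: slice, alpha: float = 0.05,
                               n_perm: int = 10000, rng=None):
          """Is the count of nominally significant SNPs (p < alpha) inside the
          segment larger than expected under random placement of the segment?"""
          rng = rng or np.random.default_rng()
          sig = (pvals < alpha).astype(int)
          observed = sig[seg].sum()
          null = np.array([np.roll(sig, rng.integers(sig.size))[seg].sum()
                           for _ in range(n_perm)])
          return (1 + (null >= observed).sum()) / (n_perm + 1)

      # e.g. segment_enrichment_p(genomewide_pvals, slice(first_snp, last_snp))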

  3. Track structure model of microscopic energy deposition by protons and heavy ions in segments of neuronal cell dendrites represented by cylinders or spheres

    NASA Astrophysics Data System (ADS)

    Alp, Murat; Cucinotta, Francis A.

    2017-05-01

    Changes to cognition, including memory, following radiation exposure are a concern for cosmic ray exposures to astronauts and in Hadron therapy with proton and heavy ion beams. The purpose of the present work is to develop computational methods to evaluate microscopic energy deposition (ED) in volumes representative of neuron cell structures, including segments of dendrites and spines, using a stochastic track structure model. A challenge for biophysical models of neuronal damage is the large sizes (> 100 μm) and variability in volumes of possible dendritic segments and pre-synaptic elements (spines and filopodia). We consider cylindrical and spherical microscopic volumes of varying geometric parameters and aspect ratios from 0.5 to 5 irradiated by protons, and 3He and 12C particles at energies corresponding to a distance of 1 cm to the Bragg peak, which represent particles of interest in Hadron therapy as well as space radiation exposure. We investigate the optimal axis length of dendritic segments to evaluate microscopic ED and hit probabilities along the dendritic branches at a given macroscopic dose. Because of large computation times to analyze ED in volumes of varying sizes, we developed an analytical method to find the mean primary dose in spheres that can guide numerical methods to find the primary dose distribution for cylinders. Considering cylindrical segments of varying aspect ratio at constant volume, we assess the chord length distribution, mean number of hits and ED profiles by primary particles and secondary electrons (δ-rays). For biophysical modeling applications, segments on dendritic branches are proposed to have equal diameters and axes lengths along the varying diameter of a dendritic branch.

  4. A novel measure and significance testing in data analysis of cell image segmentation.

    PubMed

    Wu, Jin Chu; Halter, Michael; Kacker, Raghu N; Elliott, John T; Plant, Anne L

    2017-03-14

    Cell image segmentation (CIS) is an essential part of quantitative imaging of biological cells. Designing a performance measure and conducting significance testing are critical for evaluating and comparing the CIS algorithms for image-based cell assays in cytometry. Many measures and methods have been proposed and implemented to evaluate segmentation methods. However, computing the standard errors (SE) of the measures and their correlation coefficient is not described, and thus the statistical significance of performance differences between CIS algorithms cannot be assessed. We propose the total error rate (TER), a novel performance measure for segmenting all cells in the supervised evaluation. The TER statistically aggregates all misclassification error rates (MER) by taking cell sizes as weights. The MERs are for segmenting each single cell in the population. The TER is fully supported by the pairwise comparisons of MERs using 106 manually segmented ground-truth cells with different sizes and seven CIS algorithms taken from ImageJ. Further, the SE and 95% confidence interval (CI) of TER are computed based on the SE of MER that is calculated using the bootstrap method. An algorithm for computing the correlation coefficient of TERs between two CIS algorithms is also provided. Hence, the 95% CI error bars can be used to classify CIS algorithms. The SEs of TERs and their correlation coefficient can be employed to conduct the hypothesis testing, while the CIs overlap, to determine the statistical significance of the performance differences between CIS algorithms. A novel measure TER of CIS is proposed. The TER's SEs and correlation coefficient are computed. Thereafter, CIS algorithms can be evaluated and compared statistically by conducting the significance testing.
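
    A minimal sketch of the TER and its bootstrap standard error, assuming the cell-size weighting of per-cell misclassification error rates described in the abstract:

      import numpy as np

      def ter(mer: np.ndarray, cell_size: np.ndarray) -> float:
          """Total error rate: cell-size-weighted aggregate of per-cell MERs."""
          return float(np.sum(cell_size * mer) / np.sum(cell_size))

      def ter_bootstrap_se(mer, cell_size, n_boot: int = 2000, rng=None) -> float:
          """Bootstrap standard error of TER by resampling cells with replacement."""
          rng = rng or np.random.default_rng()
          mer, cell_size = np.asarray(mer, float), np.asarray(cell_size, float)
          idx = rng.integers(0, mer.size, size=(n_boot, mer.size))
          boots = [ter(mer[i], cell_size[i]) for i in idx]
          return float(np.std(boots, ddof=1))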

  5. Track structure model of microscopic energy deposition by protons and heavy ions in segments of neuronal cell dendrites represented by cylinders or spheres.

    PubMed

    Alp, Murat; Cucinotta, Francis A

    2017-05-01

    Changes to cognition, including memory, following radiation exposure are a concern for cosmic ray exposures to astronauts and in Hadron therapy with proton and heavy ion beams. The purpose of the present work is to develop computational methods to evaluate microscopic energy deposition (ED) in volumes representative of neuron cell structures, including segments of dendrites and spines, using a stochastic track structure model. A challenge for biophysical models of neuronal damage is the large sizes (>100 µm) and variability in volumes of possible dendritic segments and pre-synaptic elements (spines and filopodia). We consider cylindrical and spherical microscopic volumes of varying geometric parameters and aspect ratios from 0.5 to 5 irradiated by protons, and 3He and 12C particles at energies corresponding to a distance of 1 cm to the Bragg peak, which represent particles of interest in Hadron therapy as well as space radiation exposure. We investigate the optimal axis length of dendritic segments to evaluate microscopic ED and hit probabilities along the dendritic branches at a given macroscopic dose. Because of large computation times to analyze ED in volumes of varying sizes, we developed an analytical method to find the mean primary dose in spheres that can guide numerical methods to find the primary dose distribution for cylinders. Considering cylindrical segments of varying aspect ratio at constant volume, we assess the chord length distribution, mean number of hits and ED profiles by primary particles and secondary electrons (δ-rays). For biophysical modeling applications, segments on dendritic branches are proposed to have equal diameters and axes lengths along the varying diameter of a dendritic branch. Copyright © 2017. Published by Elsevier Ltd.

  6. Automatic detection and segmentation of brain metastases on multimodal MR images with a deep convolutional neural network.

    PubMed

    Charron, Odelin; Lallement, Alex; Jarnet, Delphine; Noblet, Vincent; Clavier, Jean-Baptiste; Meyer, Philippe

    2018-04-01

    Stereotactic treatments are today the reference techniques for the irradiation of brain metastases in radiotherapy. The dose per fraction is very high, and delivered in small volumes (diameter <1 cm). As part of these treatments, effective detection and precise segmentation of lesions are imperative. Many methods based on deep-learning approaches have been developed for the automatic segmentation of gliomas, but very little for that of brain metastases. We adapted an existing 3D convolutional neural network (DeepMedic) to detect and segment brain metastases on MRI. At first, we sought to adapt the network parameters to brain metastases. We then explored the single or combined use of different MRI modalities, by evaluating network performance in terms of detection and segmentation. We also studied the interest of increasing the database with virtual patients or of using an additional database in which the active parts of the metastases are separated from the necrotic parts. Our results indicated that a deep network approach is promising for the detection and the segmentation of brain metastases on multimodal MRI. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. 3D Segmentation with an application of level set-method using MRI volumes for image guided surgery.

    PubMed

    Bosnjak, A; Montilla, G; Villegas, R; Jara, I

    2007-01-01

    This paper proposes an innovation for image-guided surgery based on a comparative study of three different segmentation methods. This segmentation approach is faster than manual segmentation of images, with the advantage that it allows the same patient to be used as the anatomical reference, which is more precise than a generic atlas. The new methodology for 3D information extraction is based on a processing chain structured into the following modules: 1) 3D filtering: the purpose is to preserve the contours of the structures and to smooth the homogeneous areas; several filters were tested and finally an anisotropic diffusion filter was used. 2) 3D segmentation: this module compares three different methods, a region-growing algorithm, hand-assisted cubic splines, and a level set method, and then proposes a level set approach based on front propagation that allows reconstruction of the internal walls of the anatomical structures of the brain. 3) 3D visualization: the new contribution of this work consists of the visualization of the segmented model and its use in pre-surgery planning.
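
    As an example of the edge-preserving filtering stage mentioned in module 1, a standard Perona-Malik anisotropic diffusion sketch for a 2D slice follows; the authors' filter and its parameters are not given, so the values here are placeholders:

      import numpy as np

      def anisotropic_diffusion(img: np.ndarray, n_iter: int = 20,
                                kappa: float = 30.0, lam: float = 0.2) -> np.ndarray:
          """Perona-Malik anisotropic diffusion: smooth homogeneous regions while
          preserving contours by reducing conduction across strong gradients."""
          u = img.astype(float).copy()
          for _ in range(n_iter):
              # finite-difference gradients toward the 4 neighbours
              dn = np.roll(u, -1, axis=0) - u
              ds = np.roll(u, 1, axis=0) - u
              de = np.roll(u, -1, axis=1) - u
              dw = np.roll(u, 1, axis=1) - u
              # conduction coefficients: small across strong edges
              cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
              ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
              u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
          return u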

  8. Robust pulmonary lobe segmentation against incomplete fissures

    NASA Astrophysics Data System (ADS)

    Gu, Suicheng; Zheng, Qingfeng; Siegfried, Jill; Pu, Jiantao

    2012-03-01

    The lobes are important anatomical landmarks of the human lung, and accurate lobe segmentation may be useful for characterizing specific lung diseases (e.g., inflammatory, granulomatous, and neoplastic diseases). A number of investigations showed that pulmonary fissures were often incomplete in image depiction, thereby making the computerized identification of individual lobes a challenging task. Our purpose is to develop a fully automated algorithm for accurate identification of individual lobes regardless of the integrity of pulmonary fissures. The underlying idea of the developed lobe segmentation scheme is to use piecewise planes to approximate the detected fissures. After a rotation and a global smoothing, a number of small planes were fitted using local fissure points. The local surfaces are finally combined for lobe segmentation using a quadratic B-spline weighting strategy to assure that the segmentation is smooth. The performance of the developed scheme was assessed by comparison with a manually created reference standard on a dataset of 30 lung CT examinations. These examinations covered a number of lung diseases and were selected from a large chronic obstructive pulmonary disease (COPD) dataset. The results indicate that our lobe segmentation scheme is efficient and accurate even in the presence of incomplete fissures.

  9. Genetics Home Reference: Peters anomaly

    MedlinePlus

    ... eye (cornea). During development of the eye, the elements of the anterior segment form separate structures. However, ... Genetic Changes: Mutations in the FOXC1 , PAX6 , PITX2 , ...

  10. Automated segmentation of the prostate in 3D MR images using a probabilistic atlas and a spatially constrained deformable model.

    PubMed

    Martin, Sébastien; Troccaz, Jocelyne; Daanen, Vincent

    2010-04-01

    The authors present a fully automatic algorithm for the segmentation of the prostate in three-dimensional magnetic resonance (MR) images. The approach requires the use of an anatomical atlas which is built by computing transformation fields mapping a set of manually segmented images to a common reference. These transformation fields are then applied to the manually segmented structures of the training set in order to get a probabilistic map on the atlas. The segmentation is then realized through a two stage procedure. In the first stage, the processed image is registered to the probabilistic atlas. Subsequently, a probabilistic segmentation is obtained by mapping the probabilistic map of the atlas to the patient's anatomy. In the second stage, a deformable surface evolves toward the prostate boundaries by merging information coming from the probabilistic segmentation, an image feature model and a statistical shape model. During the evolution of the surface, the probabilistic segmentation allows the introduction of a spatial constraint that prevents the deformable surface from leaking in an unlikely configuration. The proposed method is evaluated on 36 exams that were manually segmented by a single expert. A median Dice similarity coefficient of 0.86 and an average surface error of 2.41 mm are achieved. By merging prior knowledge, the presented method achieves a robust and completely automatic segmentation of the prostate in MR images. Results show that the use of a spatial constraint is useful to increase the robustness of the deformable model comparatively to a deformable surface that is only driven by an image appearance model.
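
    The Dice similarity coefficient used as the headline metric above can be computed from binary masks; a minimal sketch:

      import numpy as np

      def dice(a: np.ndarray, b: np.ndarray) -> float:
          """Dice similarity coefficient between automatic and manual binary masks."""
          a, b = a.astype(bool), b.astype(bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0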

  11. Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET.

    PubMed

    Hatt, M; Lamare, F; Boussion, N; Turzo, A; Collet, C; Salzenstein, F; Roux, C; Jarritt, P; Carson, K; Cheze-Le Rest, C; Visvikis, D

    2007-06-21

    Accurate volume of interest (VOI) estimation in PET is crucial in different oncology applications such as response-to-therapy evaluation and radiotherapy treatment planning. The objective of our study was to compare the performance of the proposed algorithm for automatic lesion volume delineation, namely fuzzy hidden Markov chains (FHMC), with that of the threshold-based techniques that are the current state of the art in clinical practice. Like the classical hidden Markov chain (HMC) algorithm, FHMC takes into account noise, voxel intensity and spatial correlation in order to classify a voxel as background or functional VOI. However, the novelty of the fuzzy model lies in the inclusion of an estimate of imprecision, which should subsequently lead to better modelling of the 'fuzzy' nature of object-of-interest boundaries in emission tomography data. The performance of the algorithms has been assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1) and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm³ and 64 mm³). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold-based methodology for functional volume determination and activity concentration recovery at a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore, the differences between classification and volume estimation errors were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithms was less susceptible to image noise levels in comparison to the threshold-based techniques. The analysis of both simulated and acquired datasets led to similar results and conclusions as far as the performance of the segmentation algorithms under evaluation is concerned.

  12. Application of the Low-dose One-stop-shop Cardiac CT Protocol with Third-generation Dual-source CT.

    PubMed

    Lin, Lu; Wang, Yining; Yi, Yan; Cao, Jian; Kong, Lingyan; Qian, Hao; Zhang, Hongzhi; Wu, Wei; Wang, Yun; Jin, Zhengyu

    2017-02-20

    Objective To evaluate the feasibility of a low-dose one-stop-shop cardiac CT imaging protocol with third-generation dual-source CT (DSCT). Methods A total of 23 coronary artery disease (CAD) patients were prospectively enrolled between March and September 2016. All patients underwent ATP stress dynamic myocardial perfusion imaging (MPI) (data acquired with prospective ECG triggering during end systole in table-shuttle mode over 32 seconds) at 70 kV, combined with prospectively ECG-triggered high-pitch coronary CT angiography (CCTA), on a third-generation DSCT system. Myocardial blood flow (MBF) was quantified and compared between perfusion-normal and perfusion-abnormal myocardial segments based on the AHA 17-segment model. CCTA images were evaluated qualitatively based on the SCCT 18-segment model, and the effective dose (ED) was calculated. In patients with subsequent catheter coronary angiography (CCA) as reference, the diagnostic performance of MPI (for per-vessel ≥50% and ≥70% stenosis) and CCTA (for ≥50% stenosis) was assessed. Results Of the 23 patients who completed the ATP stress MPI plus CCTA examination, 12 received follow-up CCA. At ATP stress MPI, 77 segments (19.7%) in 13 patients (56.5%) had perfusion abnormalities. The MBF values of hypo-perfused myocardial segments were significantly lower than those of normal segments [(93±22) ml/(100 ml·min) vs. (147±27) ml/(100 ml·min); t=15.978, P=0.000]. At CCTA, 93.9% (308/328) of the coronary segments had diagnostic image quality. With CCA as the reference standard, the per-vessel and per-segment sensitivity, specificity, and accuracy of CCTA for stenosis ≥50% were 94.1%, 93.5%, and 93.7% and 90.9%, 97.8%, and 96.8%, respectively, and the per-vessel sensitivity, specificity, and accuracy of ATP stress MPI for stenosis ≥50% and ≥70% were 68.7%, 100%, and 89.5% and 91.7%, 100%, and 97.9%, respectively. The total ED of MPI and CCTA was (3.9±1.3) mSv [MPI: (3.5±1.2) mSv, CCTA: (0.3±0.1) mSv]. Conclusion Third-generation DSCT stress dynamic MPI at 70 kV combined with prospectively ECG-triggered high-pitch CCTA is a feasible and reliable tool for clinical diagnosis, with a remarkably reduced radiation dose.

  13. Transposon-containing DNA cloning vector and uses thereof

    DOEpatents

    Berg, C.M.; Berg, D.E.; Wang, G.

    1997-07-08

    The present invention discloses a rapid method of restriction mapping, sequencing or localizing genetic features in a segment of deoxyribonucleic acid (DNA) that is up to 42 kb in size. The method in part comprises cloning of the DNA segment in a specialized cloning vector and then isolating nested deletions in either direction in vivo by intramolecular transposition into the cloned DNA. A plasmid has been prepared and disclosed. 4 figs.

  14. Transposon-containing DNA cloning vector and uses thereof

    DOEpatents

    Berg, Claire M.; Berg, Douglas E.; Wang, Gan

    1997-01-01

    The present invention discloses a rapid method of restriction mapping, sequencing or localizing genetic features in a segment of deoxyribonucleic acid (DNA) that is up to 42 kb in size. The method in part comprises cloning of the DNA segment in a specialized cloning vector and then isolating nested deletions in either direction in vivo by intramolecular transposition into the cloned DNA. A plasmid has been prepared and disclosed.

  15. Segmented polynomial taper equation incorporating years since thinning for loblolly pine plantations

    Treesearch

    A. Gordon Holley; Thomas B. Lynch; Charles T. Stiff; William Stansfield

    2010-01-01

    Data from 108 trees felled from 16 loblolly pine stands owned by Temple-Inland Forest Products Corp. were used to determine effects of years since thinning (YST) on stem taper using the Max–Burkhart type segmented polynomial taper model. Sample tree YST ranged from two to nine years prior to destructive sampling. In an effort to equalize sample sizes, tree data were...
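
    For orientation, the Max–Burkhart segmented polynomial taper model referred to above is commonly written in the following form, where d is the diameter at height h, D is diameter at breast height, H is total height, a1 and a2 are the join points, and I1, I2 are indicator variables; this is the generic form only, and the coefficients and years-since-thinning modifications fitted in this study are not reproduced here.

```latex
\[
\frac{d^2}{D^2} \;=\; b_1\!\left(\frac{h}{H}-1\right)
              + b_2\!\left(\frac{h^2}{H^2}-1\right)
              + b_3\!\left(a_1-\frac{h}{H}\right)^{2} I_1
              + b_4\!\left(a_2-\frac{h}{H}\right)^{2} I_2,
\qquad
I_j \;=\; \begin{cases} 1, & h/H \le a_j \\ 0, & \text{otherwise.} \end{cases}
\]
```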

  16. Scale-based fuzzy connectivity: a novel image segmentation methodology and its validation

    NASA Astrophysics Data System (ADS)

    Saha, Punam K.; Udupa, Jayaram K.

    1999-05-01

    This paper extends a previously reported theory and algorithms for fuzzy connected object definition. It introduces 'object scale' for determining the neighborhood size used to define affinity, the degree of local hanging-togetherness between image elements. Object scale allows us to use a varying neighborhood size in different parts of the image. This paper argues that scale-based fuzzy connectivity is natural in object definition and demonstrates that it leads to more effective object segmentation than fuzzy connectedness without scale. Affinity is described as consisting of a homogeneity-based and an object-feature-based component. Families of non-scale-based and scale-based affinity relations are constructed. An effective method for obtaining a rough estimate of scale at different locations in the image is presented. The original theoretical and algorithmic framework remains more or less the same, but considerably improved segmentations result. A quantitative statistical comparison between the non-scale-based and scale-based methods was made on phantom images generated from patient MR brain studies by first segmenting the objects and then adding noise, blurring, and a background variation component. Both the statistical and the subjective tests clearly indicate the superiority of the scale-based method in capturing details and in robustness to noise.

  17. 3D prostate TRUS segmentation using globally optimized volume-preserving prior.

    PubMed

    Qiu, Wu; Rajchl, Martin; Guo, Fumin; Sun, Yue; Ukwatta, Eranga; Fenster, Aaron; Yuan, Jing

    2014-01-01

    Efficient and accurate segmentation of 3D transrectal ultrasound (TRUS) images plays an important role in the planning and treatment of 3D TRUS guided prostate biopsy. However, meaningful segmentation of 3D TRUS images tends to suffer from ultrasound speckle, shadowing, missing edges, etc., which make it challenging to delineate the correct prostate boundaries. In this paper, we propose a novel convex-optimization-based approach to extracting the prostate surface from a given 3D TRUS image while preserving a new global volume-size prior. In particular, we study the proposed combinatorial optimization problem by convex relaxation and introduce its dual continuous max-flow formulation with a new bounded flow conservation constraint, which results in an efficient numerical solver implemented on GPUs. Experimental results using 12 patient 3D TRUS images show that the proposed approach, while preserving the volume-size prior, yielded a mean DSC of 89.5% ± 2.4%, a MAD of 1.4 ± 0.6 mm, a MAXD of 5.2 ± 3.2 mm, and a VD of 7.5% ± 6.2% in about 1 minute, demonstrating the advantages of both accuracy and efficiency. In addition, the low standard deviation of the segmentation accuracy shows good reliability of the proposed approach.

  18. Color normalization for robust evaluation of microscopy images

    NASA Astrophysics Data System (ADS)

    Švihlík, Jan; Kybic, Jan; Habart, David

    2015-09-01

    This paper deals with color normalization of microscopy images of Langerhans islets in order to increase robustness of the islet segmentation to illumination changes. The main application is automatic quantitative evaluation of the islet parameters, useful for determining the feasibility of islet transplantation in diabetes. First, background illumination inhomogeneity is compensated and a preliminary foreground/background segmentation is performed. The color normalization itself is done in either lαβ or logarithmic RGB color spaces, by comparison with a reference image. The color-normalized images are segmented using color-based features and pixel-wise logistic regression, trained on manually labeled images. Finally, relevant statistics such as the total islet area are evaluated in order to determine the success likelihood of the transplantation.
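
    The per-channel statistical matching against a reference image can be sketched as follows; this is an illustrative stand-in (mean/standard-deviation matching in logarithmic RGB, one of the two color spaces named above), not the authors' code, and the image arrays are random placeholders.

```python
import numpy as np

def normalize_to_reference(image, reference, eps=1e-6):
    """Match per-channel mean and std of `image` to those of `reference`.

    Both inputs are float RGB arrays in (0, 1]; the matching is done in
    logarithmic RGB space, one of the color spaces mentioned in the abstract.
    """
    log_img = np.log(np.clip(image, eps, None))
    log_ref = np.log(np.clip(reference, eps, None))
    out = np.empty_like(log_img)
    for c in range(3):                                   # per channel
        mu_i, sd_i = log_img[..., c].mean(), log_img[..., c].std()
        mu_r, sd_r = log_ref[..., c].mean(), log_ref[..., c].std()
        out[..., c] = (log_img[..., c] - mu_i) / (sd_i + eps) * sd_r + mu_r
    return np.clip(np.exp(out), 0.0, 1.0)

# Usage sketch with random stand-ins for the microscopy and reference images
rng = np.random.default_rng(1)
img = rng.uniform(0.2, 0.9, (64, 64, 3))
ref = rng.uniform(0.1, 0.8, (64, 64, 3))
normalized = normalize_to_reference(img, ref)
```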

  19. Geodemographic segmentation systems for screening health data.

    PubMed Central

    Openshaw, S; Blake, M

    1995-01-01

    AIM--To describe how geodemographic segmentation systems might be useful as a quick and easy way of exploring postcoded health databases for potentially interesting patterns related to deprivation and other socioeconomic characteristics. DESIGN AND SETTING--This is demonstrated using GB Profiles, a freely available geodemographic classification system developed at Leeds University. It is used here to screen a database of colorectal cancer registrations as a first step in the analysis of those data. RESULTS AND CONCLUSION--Conventional geodemographics is a fairly simple technology, and a number of outstanding methodological problems are identified. A solution to some of these problems is illustrated by using neural-net-based classifiers and then by reference to a more sophisticated geodemographic approach via a data-optimal segmentation technique. PMID:8594132

  20. Automatic video segmentation and indexing

    NASA Astrophysics Data System (ADS)

    Chahir, Youssef; Chen, Liming

    1999-08-01

    Indexing is an important aspect of video database management. Video indexing involves the analysis of video sequences, which is a computationally intensive process. However, effective management of digital video requires robust indexing techniques. The main purpose of our proposed video segmentation is twofold. First, we develop an algorithm that identifies camera shot boundaries. The approach is based on a combination of color histograms and a block-based technique. Next, each temporal segment is represented by a color reference frame that specifies the shot similarities and is used in the constitution of scenes. Experimental results using a variety of videos selected from the corpus of the French Audiovisual National Institute are presented to demonstrate the effectiveness of the shot detection, the content characterization of shots and the scene constitution.
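
    The color-histogram half of the shot-boundary step can be sketched as a simple histogram-difference detector; the `detect_shot_boundaries` function and its threshold are hypothetical choices, and the block-based refinement mentioned above is not shown.

```python
import numpy as np

def detect_shot_boundaries(frames, bins=16, threshold=0.4):
    """Flag candidate shot boundaries from color-histogram differences.

    frames: iterable of HxWx3 uint8 frames.  Returns indices i where the
    normalized histogram distance between frame i-1 and frame i exceeds
    `threshold`.
    """
    boundaries, prev_hist = [], None
    for i, frame in enumerate(frames):
        hist, _ = np.histogramdd(
            frame.reshape(-1, 3), bins=(bins, bins, bins),
            range=((0, 256),) * 3)
        hist = hist.ravel() / hist.sum()
        if prev_hist is not None:
            dist = 0.5 * np.abs(hist - prev_hist).sum()   # L1 / 2, in [0, 1]
            if dist > threshold:
                boundaries.append(i)
        prev_hist = hist
    return boundaries

# Toy clip: five identical dark frames followed by five identical bright frames
dark = np.full((48, 64, 3), 30, dtype=np.uint8)
bright = np.full((48, 64, 3), 200, dtype=np.uint8)
print(detect_shot_boundaries([dark] * 5 + [bright] * 5))   # -> [5]
```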

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marques da Silva, A; Narciso, L

    Purpose: Commercial workstations usually have their own software to calculate dynamic renal function. However, they usually offer little flexibility and involve subjectivity in delimiting the kidney and background areas. The aim of this paper is to present a public-domain software tool, called RenalQuant, capable of semi-automatically drawing regions of interest on dynamic renal scintigraphies, extracting data, and generating renal function quantification parameters. Methods: The software was developed in Java and written as an ImageJ-based plugin. The preprocessing and segmentation steps include the user's selection of one time frame with higher activity in the kidney region compared with the background and low activity in the liver. Next, the chosen time frame is smoothed using a Gaussian low-pass spatial filter (σ = 3) for noise reduction and better delimitation of the kidneys. The maximum entropy thresholding method is used for segmentation. A background area is automatically placed below each kidney, and the user confirms whether these regions are correctly segmented and positioned. Quantitative data are extracted, and each renogram and relative renal function (RRF) value is calculated and displayed. Results: The RenalQuant plugin was validated using 20 retrospective patients' 99mTc-DTPA exams and compared with results produced by commercial workstation software, referred to as the reference. The intraclass correlation coefficients (ICC) of the renograms were calculated, and false-negative and false-positive RRF values were analyzed. The results showed that ICC values between the RenalQuant plugin and the reference software for both kidneys' renograms were higher than 0.75, showing excellent reliability. Conclusion: Our results indicated that the RenalQuant plugin can be reliably used to generate renograms, using DICOM dynamic renal scintigraphy exams as input. It is user-friendly, and user interaction is kept to a minimum. Further studies should investigate how to increase RRF accuracy and how to address limitations in the segmentation step, mainly when the background region has higher activity than the kidneys. Financial support by CAPES.
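
    The smoothing and maximum-entropy thresholding steps described above can be sketched as follows; the Kapur-style entropy maximization shown is a generic formulation, not code from the RenalQuant plugin, and the input frame is a random placeholder.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def max_entropy_threshold(image, nbins=256):
    """Maximum-entropy (Kapur) threshold of a grayscale image.

    Returns the gray level that maximizes the summed entropies of the
    background and foreground histogram classes.
    """
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, nbins - 1):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h0 = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))
        h1 = -np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return edges[best_t]

# Pipeline sketch: smooth the chosen frame, then threshold to get a kidney mask
frame = np.random.default_rng(2).poisson(20, (128, 128)).astype(float)
smoothed = gaussian_filter(frame, sigma=3)   # sigma = 3 as in the abstract
kidney_mask = smoothed > max_entropy_threshold(smoothed)
```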

  2. Ischemic stroke lesion segmentation in multi-spectral MR images with support vector machine classifiers

    NASA Astrophysics Data System (ADS)

    Maier, Oskar; Wilms, Matthias; von der Gablentz, Janina; Krämer, Ulrike; Handels, Heinz

    2014-03-01

    Automatic segmentation of ischemic stroke lesions in magnetic resonance (MR) images is important in clinical practice and for neuroscientific trials. The key problem is to detect largely inhomogeneous regions of varying sizes, shapes and locations. We present a stroke lesion segmentation method based on local features extracted from multi-spectral MR data that are selected to model a human observer's discrimination criteria. A support vector machine classifier is trained on expert-segmented examples and then used to classify formerly unseen images. Leave-one-out cross validation on eight datasets with lesions of varying appearances is performed, showing our method to compare favourably with other published approaches in terms of accuracy and robustness. Furthermore, we compare a number of feature selectors and closely examine each feature's and MR sequence's contribution.
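
    A minimal sketch of voxel-wise classification with a support vector machine is shown below, assuming scikit-learn; the feature layout, kernel choice, and train/test split are illustrative assumptions rather than the configuration used in the cited work.

```python
import numpy as np
from sklearn.svm import SVC

# `features` would hold local multi-spectral descriptors per voxel and
# `labels` the expert segmentation (1 = lesion, 0 = background); both are
# synthetic stand-ins here so the snippet runs on its own.
rng = np.random.default_rng(3)
features = rng.normal(size=(5000, 8))                      # 5000 voxels, 8 features
labels = (features[:, 0] + 0.3 * features[:, 1] > 0.5).astype(int)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")              # generic RBF SVM
clf.fit(features[:4000], labels[:4000])                    # train on labeled voxels
pred = clf.predict(features[4000:])                        # classify unseen voxels
print("held-out accuracy:", (pred == labels[4000:]).mean())
```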

  3. Capturing 'R&D excellence': indicators, international statistics, and innovative universities.

    PubMed

    Tijssen, Robert J W; Winnink, Jos J

    2018-01-01

    Excellent research may contribute to successful science-based technological innovation. We define 'R&D excellence' in terms of scientific research that has contributed to the development of influential technologies, where 'excellence' refers to the top segment of a statistical distribution based on internationally comparative performance scores. Our measurements are derived from frequency counts of literature references ('citations') from patents to research publications during the last 15 years. The 'D' part in R&D is represented by the top 10% most highly cited 'excellent' patents worldwide. The 'R' part is captured by research articles in international scholarly journals that are cited by these patented technologies. After analyzing millions of citing patents and cited research publications, we find very large differences between countries worldwide in terms of the volume of domestic science contributing to those patented technologies. Where the USA produces the largest numbers of cited research publications (partly because of database biases), Switzerland and Israel outperform the US after correcting for the size of their national science systems. To tease out possible explanatory factors, which may significantly affect or determine these performance differentials, we first studied high-income nations and advanced economies. Here we find that the size of R&D expenditure correlates with the sheer size of cited publications, as does the degree of university research cooperation with domestic firms. When broadening our comparative framework to 70 countries (including many medium-income nations) while correcting for size of national science systems, the important explanatory factors become the availability of human resources and quality of science systems. Focusing on the latter factor, our in-depth analysis of 716 research-intensive universities worldwide reveals several universities with very high scores on our two R&D excellence indicators. Confirming the above macro-level findings, an in-depth study of 27 leading US universities identifies research expenditure size as a prime determinant. Our analytical model and quantitative indicators provides a supplementary perspective to input-oriented statistics based on R&D expenditures. The country-level findings are indicative of significant disparities between national R&D systems. Comparing the performance of individual universities, we observe large differences within national science systems. The top ranking 'innovative' research universities contribute significantly to the development of advanced science-based technologies.

  4. Evaluation of segmentation algorithms for optical coherence tomography images of ovarian tissue

    NASA Astrophysics Data System (ADS)

    Sawyer, Travis W.; Rice, Photini F. S.; Sawyer, David M.; Koevary, Jennifer W.; Barton, Jennifer K.

    2018-02-01

    Ovarian cancer has the lowest survival rate among all gynecologic cancers due to predominantly late diagnosis. Early detection of ovarian cancer can increase 5-year survival rates from 40% up to 92%, yet no reliable early detection techniques exist. Optical coherence tomography (OCT) is an emerging technique that provides depth-resolved, high-resolution images of biological tissue in real time and demonstrates great potential for imaging of ovarian tissue. Mouse models are crucial to quantitatively assess the diagnostic potential of OCT for ovarian cancer imaging; however, due to small organ size, the ovaries must first be separated from the image background using the process of segmentation. Manual segmentation is time-intensive, as OCT yields three-dimensional data. Furthermore, speckle noise complicates OCT images, frustrating many processing techniques. While much work has investigated noise-reduction and automated segmentation for retinal OCT imaging, little has considered the application to the ovaries, which exhibit higher variance and inhomogeneity than the retina. To address these challenges, we evaluated a set of algorithms to segment OCT images of mouse ovaries. We examined five preprocessing techniques and six segmentation algorithms. While all preprocessing methods improve segmentation, Gaussian filtering is most effective, showing an improvement of 32% ± 1.2%. Of the segmentation algorithms, active contours performs best, segmenting with an accuracy of 0.948 ± 0.012 compared with manual segmentation (1.0 being identical). Nonetheless, further optimization could lead to maximizing the performance for segmenting OCT images of the ovaries.

  5. A multi-atlas based method for automated anatomical Macaca fascicularis brain MRI segmentation and PET kinetic extraction.

    PubMed

    Ballanger, Bénédicte; Tremblay, Léon; Sgambato-Faure, Véronique; Beaudoin-Gobert, Maude; Lavenne, Franck; Le Bars, Didier; Costes, Nicolas

    2013-08-15

    MRI templates and digital atlases are needed for automated and reproducible quantitative analysis of non-human primate PET studies. Segmenting brain images via multiple atlases outperforms single-atlas labelling in humans. We present a set of atlases manually delineated on brain MRI scans of the monkey Macaca fascicularis. We use this multi-atlas dataset to evaluate two automated methods in terms of accuracy, robustness and reliability in segmenting brain structures on MRI and extracting regional PET measures. Twelve individual Macaca fascicularis high-resolution 3DT1 MR images were acquired. Four individual atlases were created by manually drawing 42 anatomical structures, including cortical and sub-cortical structures, white matter regions, and ventricles. To create the MRI template, we first chose one MRI to define a reference space, and then performed a two-step iterative procedure: affine registration of individual MRIs to the reference MRI, followed by averaging of the twelve resampled MRIs. Automated segmentation in native space was obtained in two ways: 1) Maximum probability atlases were created by decision fusion of two to four individual atlases in the reference space, and transformation back into the individual native space (MAXPROB)(.) 2) One to four individual atlases were registered directly to the individual native space, and combined by decision fusion (PROPAG). Accuracy was evaluated by computing the Dice similarity index and the volume difference. The robustness and reproducibility of PET regional measurements obtained via automated segmentation was evaluated on four co-registered MRI/PET datasets, which included test-retest data. Dice indices were always over 0.7 and reached maximal values of 0.9 for PROPAG with all four individual atlases. There was no significant mean volume bias. The standard deviation of the bias decreased significantly when increasing the number of individual atlases. MAXPROB performed better when increasing the number of atlases used. When all four atlases were used for the MAXPROB creation, the accuracy of morphometric segmentation approached that of the PROPAG method. PET measures extracted either via automatic methods or via the manually defined regions were strongly correlated, with no significant regional differences between methods. Intra-class correlation coefficients for test-retest data were over 0.87. Compared to single atlas extractions, multi-atlas methods improve the accuracy of region definition. They also perform comparably to manually defined regions for PET quantification. Multiple atlases of Macaca fascicularis brains are now available and allow reproducible and simplified analyses. Copyright © 2013 Elsevier Inc. All rights reserved.
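
    The decision-fusion step can be sketched as a per-voxel majority vote over co-registered atlas label maps, followed by a Dice comparison; this is a generic sketch (registration and the MAXPROB/PROPAG variants are not reproduced), with random arrays standing in for atlas segmentations.

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse co-registered atlas label maps by per-voxel majority vote.

    label_maps: list of integer label arrays of identical shape.
    """
    stack = np.stack(label_maps)                          # (n_atlases, ...)
    n_labels = int(stack.max()) + 1
    votes = np.stack([(stack == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

def dice(a, b):
    """Dice similarity index between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy fusion of three "atlas" segmentations of a single structure (label 1)
rng = np.random.default_rng(4)
atlases = [(rng.random((32, 32, 32)) > 0.5).astype(int) for _ in range(3)]
fused = majority_vote(atlases)
print(dice(fused == 1, atlases[0] == 1))
```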

  6. The effect of particle size on the morphology and thermodynamics of diblock copolymer/tethered-particle membranes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Bo; Edwards, Brian J., E-mail: bje@utk.edu

    A combination of self-consistent field theory and density functional theory was used to examine the effect of particle size on the stable, 3-dimensional equilibrium morphologies formed by diblock copolymers with a tethered nanoparticle attached either between the two blocks or at the end of one of the blocks. Particle size was varied between one and four tenths of the radius of gyration of the diblock polymer chain for neutral particles as well as those either favoring or disfavoring segments of the copolymer blocks. Phase diagrams were constructed and analyzed in terms of thermodynamic diagrams to understand the physics associated with the molecular-level self-assembly processes. Typical morphologies were observed, such as lamellar, spheroidal, cylindrical, gyroidal, and perforated lamellar, with the primary concentration region of the tethered particles being influenced heavily by particle size and tethering location, strength of the particle-segment energetic interactions, chain length, and copolymer radius of gyration. The effect of the simulation box size on the observed morphology and system thermodynamics was also investigated, indicating possible effects of confinement upon the system self-assembly processes.

  7. Insights in the Diffusion Controlled Interfacial Flow Synthesis of Au Nanostructures in a Microfluidic System.

    PubMed

    Kulkarni, Amol A; Sebastian Cabeza, Victor

    2017-12-19

    Continuous segmented flow interfacial synthesis of Au nanostructures is demonstrated in a microchannel reactor. This study brings new insights into the growth of nanostructures at continuous interfaces. The size as well as the shape of the nanostructures showed significant dependence on the reactant concentrations, reaction time, temperature, and surface tension, which actually controlled the interfacial mass transfer. The microchannel reactor assisted in achieving a high interfacial area, as well as uniformity in mass transfer effects. Hexagonal nanostructures were seen to be formed in synthesis times as short as 10 min. The wettability of the channel showed significant effect on the particle size as well as the actual shape. The hydrophobic channel yielded hexagonal structures of relatively smaller size than the hydrophilic microchannel, which yielded sharp hexagonal bipyramidal particles (diagonal distance of 30 nm). The evolution of particle size and shape for the case of hydrophilic microchannel is also shown as a function of the residence time. The interfacial synthesis approach based on a stable segmented flow promoted an excellent control on the reaction extent, reduction in axial dispersion as well as the particle size distribution.

  8. Prosthetic component segmentation with blur compensation: a fast method for 3D fluoroscopy.

    PubMed

    Tarroni, Giacomo; Tersi, Luca; Corsi, Cristiana; Stagni, Rita

    2012-06-01

    A new method for prosthetic component segmentation from fluoroscopic images is presented. The hybrid approach we propose combines diffusion filtering, region growing and level-set techniques without exploiting any a priori knowledge of the analyzed geometry. The method was evaluated on a synthetic dataset including 270 images of knee and hip prosthesis merged to real fluoroscopic data simulating different conditions of blurring and illumination gradient. The performance of the method was assessed by comparing estimated contours to references using different metrics. Results showed that the segmentation procedure is fast, accurate, independent on the operator as well as on the specific geometrical characteristics of the prosthetic component, and able to compensate for amount of blurring and illumination gradient. Importantly, the method allows a strong reduction of required user interaction time when compared to traditional segmentation techniques. Its effectiveness and robustness in different image conditions, together with simplicity and fast implementation, make this prosthetic component segmentation procedure promising and suitable for multiple clinical applications including assessment of in vivo joint kinematics in a variety of cases.

  9. Energy-efficient rings mechanism for greening multisegment fiber-wireless access networks

    NASA Astrophysics Data System (ADS)

    Gong, Xiaoxue; Guo, Lei; Hou, Weigang; Zhang, Lincong

    2013-07-01

    By integrating the advantages of optical and wireless communications, Fiber-Wireless (FiWi) has become a promising solution for "last-mile" broadband access. In particular, greening FiWi has attracted extensive attention, because the access network is a main energy contributor in the whole infrastructure. However, prior solutions for greening FiWi shut down or put to sleep unused or minimally used optical network units within a single segment, where only one optical line terminal is deployed. We propose a green mechanism referred to as the energy-efficient ring (EER) for multisegment FiWi access networks. We utilize an integer linear programming model and a genetic algorithm to generate clusters, each consisting of fully connected segments with the shortest total distance. Leveraging a backtracking method for each cluster, we then connect segments through fiber links, constructing the shortest-distance fiber ring. Finally, our sleeping scheme puts low-load segments to sleep and forwards the affected traffic to other active segments on the same fiber ring. Experimental results show that our EER mechanism significantly reduces the energy consumption at the slightly additional cost of deploying fiber links.

  10. Comparison of thyroid segmentation techniques for 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Wunderling, T.; Golla, B.; Poudel, P.; Arens, C.; Friebe, M.; Hansen, C.

    2017-02-01

    The segmentation of the thyroid in ultrasound images is a field of active research. The thyroid is a gland of the endocrine system and regulates several body functions. Measuring the volume of the thyroid is regular practice of diagnosing pathological changes. In this work, we compare three approaches for semi-automatic thyroid segmentation in freehand-tracked three-dimensional ultrasound images. The approaches are based on level set, graph cut and feature classification. For validation, sixteen 3D ultrasound records were created with ground truth segmentations, which we make publicly available. The properties analyzed are the Dice coefficient when compared against the ground truth reference and the effort of required interaction. Our results show that in terms of Dice coefficient, all algorithms perform similarly. For interaction, however, each algorithm has advantages over the other. The graph cut-based approach gives the practitioner direct influence on the final segmentation. Level set and feature classifier require less interaction, but offer less control over the result. All three compared methods show promising results for future work and provide several possible extensions.

  11. Toward a standard for the evaluation of PET-Auto-Segmentation methods following the recommendations of AAPM task group No. 211: Requirements and implementation.

    PubMed

    Berthon, Beatrice; Spezi, Emiliano; Galavis, Paulina; Shepherd, Tony; Apte, Aditya; Hatt, Mathieu; Fayad, Hadi; De Bernardi, Elisabetta; Soffientini, Chiara D; Ross Schmidtlein, C; El Naqa, Issam; Jeraj, Robert; Lu, Wei; Das, Shiva; Zaidi, Habib; Mawlawi, Osama R; Visvikis, Dimitris; Lee, John A; Kirov, Assen S

    2017-08-01

    The aim of this paper is to define the requirements and describe the design and implementation of a standard benchmark tool for evaluation and validation of PET-auto-segmentation (PET-AS) algorithms. This work follows the recommendations of Task Group 211 (TG211) appointed by the American Association of Physicists in Medicine (AAPM). The recommendations published in the AAPM TG211 report were used to derive a set of required features and to guide the design and structure of a benchmarking software tool. These items included the selection of appropriate representative data and reference contours obtained from established approaches and the description of available metrics. The benchmark was designed in a way that it could be extendable by inclusion of bespoke segmentation methods, while maintaining its main purpose of being a standard testing platform for newly developed PET-AS methods. An example of implementation of the proposed framework, named PETASset, was built. In this work, a selection of PET-AS methods representing common approaches to PET image segmentation was evaluated within PETASset for the purpose of testing and demonstrating the capabilities of the software as a benchmark platform. A selection of clinical, physical, and simulated phantom data, including "best estimates" reference contours from macroscopic specimens, simulation template, and CT scans was built into the PETASset application database. Specific metrics such as Dice Similarity Coefficient (DSC), Positive Predictive Value (PPV), and Sensitivity (S), were included to allow the user to compare the results of any given PET-AS algorithm to the reference contours. In addition, a tool to generate structured reports on the evaluation of the performance of PET-AS algorithms against the reference contours was built. The variation of the metric agreement values with the reference contours across the PET-AS methods evaluated for demonstration were between 0.51 and 0.83, 0.44 and 0.86, and 0.61 and 1.00 for DSC, PPV, and the S metric, respectively. Examples of agreement limits were provided to show how the software could be used to evaluate a new algorithm against the existing state-of-the art. PETASset provides a platform that allows standardizing the evaluation and comparison of different PET-AS methods on a wide range of PET datasets. The developed platform will be available to users willing to evaluate their PET-AS methods and contribute with more evaluation datasets. © 2017 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
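
    The three metrics named above have standard definitions that can be computed directly from binary masks; the sketch below is a generic implementation of those definitions, not code extracted from PETASset.

```python
import numpy as np

def overlap_metrics(auto_mask, ref_mask):
    """DSC, PPV and sensitivity of an auto-segmentation vs. a reference contour.

    Both inputs are boolean arrays of identical shape.
    """
    auto, ref = auto_mask.astype(bool), ref_mask.astype(bool)
    tp = np.logical_and(auto, ref).sum()      # voxels inside both contours
    fp = np.logical_and(auto, ~ref).sum()     # auto-only voxels
    fn = np.logical_and(~auto, ref).sum()     # reference-only voxels
    dsc = 2.0 * tp / (2.0 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    ppv = tp / (tp + fp) if (tp + fp) else 0.0
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    return dsc, ppv, sens

# Toy usage with two random masks standing in for PET-AS and reference contours
rng = np.random.default_rng(5)
auto = rng.random((64, 64, 64)) > 0.7
ref = rng.random((64, 64, 64)) > 0.7
print(overlap_metrics(auto, ref))
```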

  12. A local segmentation parameter optimization approach for mapping heterogeneous urban environments using VHR imagery

    NASA Astrophysics Data System (ADS)

    Grippa, Tais; Georganos, Stefanos; Lennert, Moritz; Vanhuysse, Sabine; Wolff, Eléonore

    2017-10-01

    Mapping large heterogeneous urban areas using object-based image analysis (OBIA) remains challenging, especially with respect to the segmentation process. This can be explained both by the complex arrangement of heterogeneous land-cover classes and by the high diversity of urban patterns encountered throughout the scene. In this context, using a single segmentation parameter to obtain satisfying segmentation results for the whole scene can be impossible. Nonetheless, it is possible to subdivide the whole city into smaller local zones that are rather homogeneous in terms of urban pattern. These zones can then be used to optimize the segmentation parameter locally, instead of using the whole image or a single representative spatial subset. This paper assesses the contribution of a local approach to segmentation parameter optimization compared to a global approach. Ouagadougou, located in sub-Saharan Africa, is used as a case study. First, the whole scene is segmented using a single globally optimized segmentation parameter. Second, the city is subdivided into 283 local zones, homogeneous in terms of building size and building density. Each local zone is then segmented using a locally optimized segmentation parameter. Unsupervised segmentation parameter optimization (USPO), relying on an optimization function which tends to maximize both intra-object homogeneity and inter-object heterogeneity, is used to select the segmentation parameter automatically for both approaches. Finally, a land-use/land-cover classification is performed using the Random Forest (RF) classifier. The results reveal that the local approach outperforms the global one, especially by limiting confusion between buildings and their bare-soil neighbors.
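
    The idea of unsupervised segmentation parameter optimization can be sketched as scoring candidate parameter values by intra-object homogeneity and inter-object heterogeneity. The sketch below uses scikit-image's Felzenszwalb segmentation and the variance of segment means as stand-ins for the segmenter and the spatial-autocorrelation term actually used, so it illustrates the principle rather than the authors' USPO implementation.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def score_segmentation(image, labels):
    """Return (weighted intra-segment variance, variance of segment means)."""
    means, wvar = [], 0.0
    for lab in np.unique(labels):
        vals = image[labels == lab]
        means.append(vals.mean())
        wvar += vals.size * vals.var()        # size-weighted within-segment variance
    return wvar / image.size, np.var(means)

def optimize_scale(image, candidates=(10, 50, 100, 200, 400)):
    """Pick the Felzenszwalb `scale` balancing homogeneity and heterogeneity."""
    scores = [score_segmentation(image, felzenszwalb(image, scale=s, sigma=0.8,
                                                     min_size=20))
              for s in candidates]
    wv = np.array([s[0] for s in scores])     # lower = more homogeneous objects
    bv = np.array([s[1] for s in scores])     # higher = more heterogeneous objects
    def norm(x):
        return (x - x.min()) / (np.ptp(x) + 1e-12)
    combined = (1.0 - norm(wv)) + norm(bv)    # equal weights on both criteria
    return candidates[int(np.argmax(combined))]

# Toy usage: a noisy two-region image
rng = np.random.default_rng(6)
img = np.zeros((80, 80)); img[:, 40:] = 1.0
img += rng.normal(scale=0.05, size=img.shape)
print(optimize_scale(img))
```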

  13. Development and Prototyping of the PROSPECT Antineutrino Detector

    NASA Astrophysics Data System (ADS)

    Commeford, Kelley; Prospect Collaboration

    2017-01-01

    The PROSPECT experiment will make the most precise measurement of the 235U reactor antineutrino spectrum as well as search for sterile neutrinos using a segmented Li-loaded liquid scintillator neutrino detector. Several prototype detectors of increasing size, complexity, and fidelity have been constructed and tested as part of the PROSPECT detector development program. The challenges to overcome include the efficient rejection of cosmogenic background and collection of optical photons in a compact volume. Design choices regarding segment structure and layout, calibration source deployment, and optical collection methods are discussed. Results from the most recent multi-segment prototype, PROSPECT-50, will also be shown.

  14. Parallelized Seeded Region Growing Using CUDA

    PubMed Central

    Park, Seongjin; Lee, Hyunna; Seo, Jinwook; Lee, Kyoung Ho; Shin, Yeong-Gil; Kim, Bohyoung

    2014-01-01

    This paper presents a novel method for parallelizing the seeded region growing (SRG) algorithm using Compute Unified Device Architecture (CUDA) technology, with the intention of overcoming the theoretical weakness of the SRG algorithm that its computation time is directly proportional to the size of the segmented region. The segmentation performance of the proposed CUDA-based SRG is compared with SRG implementations on single-core CPUs, quad-core CPUs, and in shader language programming, using synthetic datasets and 20 body CT scans. Based on the experimental results, the CUDA-based SRG outperforms the other three implementations, suggesting that it can substantially assist segmentation during large-scale CT screening tests.
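
    The frontier-expansion pattern that makes SRG amenable to GPU parallelization can be sketched sequentially in NumPy: each iteration updates the whole frontier at once, which is the step a CUDA kernel would distribute across threads. The tolerance-based growing criterion below is an illustrative assumption, not the criterion of the cited implementation.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def seeded_region_growing(volume, seed, tol=10.0):
    """Grow a region from `seed` by iterative frontier expansion.

    At each iteration, every voxel adjacent to the current region whose
    intensity is within `tol` of the running region mean is added.
    """
    region = np.zeros(volume.shape, dtype=bool)
    region[seed] = True
    while True:
        frontier = binary_dilation(region) & ~region
        accept = frontier & (np.abs(volume - volume[region].mean()) <= tol)
        if not accept.any():
            return region
        region |= accept

# Toy volume: a bright cube on a dark background
vol = np.full((40, 40, 40), 10.0)
vol[10:30, 10:30, 10:30] = 100.0
mask = seeded_region_growing(vol, seed=(20, 20, 20), tol=20.0)
print(mask.sum())   # ~ 20**3 voxels
```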

  15. Real-time myocardium segmentation for the assessment of cardiac function variation

    NASA Astrophysics Data System (ADS)

    Zoehrer, Fabian; Huellebrand, Markus; Chitiboi, Teodora; Oechtering, Thekla; Sieren, Malte; Frahm, Jens; Hahn, Horst K.; Hennemuth, Anja

    2017-03-01

    Recent developments in MRI enable the acquisition of image sequences with high spatio-temporal resolution. Cardiac motion can be captured without gating or triggering. Image size and contrast relations differ from conventional cardiac MRI cine sequences, requiring newly adapted analysis methods. We suggest a novel segmentation approach utilizing contrast-invariant polar scanning techniques. It has been tested on 20 datasets of arrhythmia patients. The differences between automatic and manual segmentations are not significantly larger than the differences between observers. This indicates that the presented solution could enable clinical applications of real-time MRI for the examination of arrhythmic cardiac motion in the future.

  16. Katja — the 24th week of virtual pregnancy for dosimetric calculations

    NASA Astrophysics Data System (ADS)

    Becker, Janine; Zankl, Maria; Fill, Ute; Hoeschen, Christoph

    2008-01-01

    Virtual human models, a.k.a. voxel models, are currently the state of the art in radiation protection for computing organ doses without difficult or morally unfeasible experiments. They are based on medical image data of human patients and offer a realistic, three-dimensional representation of human anatomy. We present our newest voxel model, Katja, a virtual woman in the 24th week of pregnancy. Katja integrates two previous voxel models: one obtained from the abdominal MRI scan of a pregnant patient and an already segmented model of a non-pregnant woman. The latter is the ICRP-AF, which fits the reference values for standard height, weight and organ masses given by the International Commission on Radiological Protection (ICRP). The dataset was altered in order to fit the segmented foetus taken from the abdominal MRI scan. The resulting pregnant woman model, Katja, complies with the ICRP reference values for the adult female.

  17. Wound size measurement of lower extremity ulcers using segmentation algorithms

    NASA Astrophysics Data System (ADS)

    Dadkhah, Arash; Pang, Xing; Solis, Elizabeth; Fang, Ruogu; Godavarty, Anuradha

    2016-03-01

    Lower extremity ulcers are among the most common complications that not only affect many people around the world but also have a huge economic impact, since substantial resources are spent on treatment and prevention of these diseases. Clinical studies have shown that a reduction in wound size of 40% within 4 weeks is acceptable progress in the healing process. Quantification of the wound size plays a crucial role in assessing the extent of healing and determining the treatment process. To date, wound healing is visually inspected and the wound size is measured from surface images. The extent of wound healing internally may differ from that at the surface. A near-infrared (NIR) optical imaging approach has been developed for non-contact imaging of wounds internally and for differentiating healing from non-healing wounds. Herein, quantitative wound size measurements from NIR and white-light images are estimated using graph-cuts and region-growing image segmentation algorithms. The extent of wound healing from NIR imaging of lower extremity ulcers in diabetic subjects is quantified and compared across NIR and white-light images. NIR imaging and wound size measurements can play a significant role in predicting the extent of internal healing, thus allowing better treatment plans when implemented for periodic imaging in the future.
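
    Once a wound mask is available from either segmentation algorithm, the size measurement and the 40%-in-4-weeks benchmark reduce to simple arithmetic; the masks and pixel spacing below are hypothetical placeholders.

```python
import numpy as np

def wound_area_mm2(mask, pixel_spacing_mm):
    """Wound area from a binary segmentation mask and the pixel spacing (mm)."""
    return mask.sum() * pixel_spacing_mm ** 2

def percent_reduction(area_week0, area_week4):
    """Percent wound-size reduction; >= 40% over 4 weeks is the cited benchmark."""
    return 100.0 * (area_week0 - area_week4) / area_week0

# Hypothetical masks from the segmentation step (graph cuts or region growing)
rng = np.random.default_rng(7)
mask0 = rng.random((256, 256)) < 0.10           # baseline wound mask
mask4 = rng.random((256, 256)) < 0.05           # 4-week follow-up wound mask
a0 = wound_area_mm2(mask0, pixel_spacing_mm=0.2)
a4 = wound_area_mm2(mask4, pixel_spacing_mm=0.2)
print(f"reduction: {percent_reduction(a0, a4):.1f}%")
```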

  18. Microplastics reduced posterior segment regeneration rate of the polychaete Perinereis aibuhitensis.

    PubMed

    Leung, Julia; Chan, Kit Yu Karen

    2018-04-01

    Microplastics are found in abundance in and on coastal sediments, and yet whether exposure to this emerging pollutant negatively impacts whole-organism function is unknown. Focusing on a commercially important polychaete, Perinereis aibuhitensis, we demonstrated that the presence of microplastics increased mortality and reduced the rate of posterior segment regeneration. The impact of the micro-polystyrene beads was size-dependent, with smaller beads (8-12 μm in diameter) being more detrimental than larger ones (32-38 μm). This observed difference suggests that microplastic impact could be affected by physical properties, e.g., sinking speed, surface area available for sorption of chemicals and bacteria, and the selective feeding behaviors of the target organism. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. METHOD AND MEANS FOR RECOGNIZING COMPLEX PATTERNS

    DOEpatents

    Hough, P.V.C.

    1962-12-18

    This patent relates to a method and means for recognizing a complex pattern in a picture. The picture is divided into framelets, each framelet being sized so that any segment of the complex pattern therewithin is essentially a straight line. Each framelet is scanned to produce an electrical pulse for each point scanned on the segment therewithin. Each of the electrical pulses of each segment is then transformed into a separate straight line to form a plane transform in a pictorial display. Each line in the plane transform of a segment is positioned laterally so that a point on the line midway between the top and the bottom of the pictorial display occurs at a distance from the left edge of the pictorial display equal to the distance of the generating point in the segment from the left edge of the framelet. Each line in the plane transform of a segment is inclined in the pictorial display at an angle to the vertical whose tangent is proportional to the vertical displacement of the generating point in the segment from the center of the framelet. The coordinate position of the point of intersection of the lines in the pictorial display for each segment is determined and recorded. The sum total of said recorded coordinate positions is representative of the complex pattern. (AEC)
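
    A sketch of this idea in its modern form is given below: every edge point votes for all lines through it in a (rho, theta) accumulator, and accumulator peaks mark detected lines. This rho-theta parameterization is the formulation that descends from the patented method, not the framelet/slope construction described in the patent text itself.

```python
import numpy as np

def hough_lines(edge_points, shape, n_theta=180):
    """Accumulate votes for straight lines through the given edge points.

    Uses rho = x*cos(theta) + y*sin(theta): every point votes for all lines
    passing through it, and peaks in the accumulator mark detected lines.
    """
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for y, x in edge_points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, thetas, diag

# Points on the line y = x should produce a strong peak near theta = 135 degrees
pts = [(i, i) for i in range(50)]
acc, thetas, diag = hough_lines(pts, shape=(64, 64))
peak = np.unravel_index(acc.argmax(), acc.shape)
print("peak rho:", peak[0] - diag, "theta (deg):", np.degrees(thetas[peak[1]]))
```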

  20. A Parametric Finite-Element Model for Evaluating Segmented Mirrors with Discrete, Edgewise Connectivity

    NASA Technical Reports Server (NTRS)

    Gersh-Range, Jessica A.; Arnold, William R.; Peck, Mason A.; Stahl, H. Philip

    2011-01-01

    Since future astrophysics missions require space telescopes with apertures of at least 10 meters, there is a need for on-orbit assembly methods that decouple the size of the primary mirror from the choice of launch vehicle. One option is to connect the segments edgewise using mechanisms analogous to damped springs. To evaluate the feasibility of this approach, a parametric ANSYS model that calculates the mode shapes, natural frequencies, and disturbance response of such a mirror, as well as of the equivalent monolithic mirror, has been developed. This model constructs a mirror using rings of hexagonal segments that are either connected continuously along the edges (to form a monolith) or at discrete locations corresponding to the mechanism locations (to form a segmented mirror). As an example, this paper presents the case of a mirror whose segments are connected edgewise by mechanisms analogous to a set of four collocated single-degree-of-freedom damped springs. The results of a set of parameter studies suggest that such mechanisms can be used to create a 15-m segmented mirror that behaves similarly to a monolith, although fully predicting the segmented mirror performance would require incorporating measured mechanism properties into the model. Keywords: segmented mirror, edgewise connectivity, space telescope

  1. Multi-Scale Correlative Tomography of a Li-Ion Battery Composite Cathode

    PubMed Central

    Moroni, Riko; Börner, Markus; Zielke, Lukas; Schroeder, Melanie; Nowak, Sascha; Winter, Martin; Manke, Ingo; Zengerle, Roland; Thiele, Simon

    2016-01-01

    Focused ion beam/scanning electron microscopy tomography (FIB/SEMt) and synchrotron X-ray tomography (Xt) are used to investigate the same lithium manganese oxide composite cathode at the same specific spot. This correlative approach allows the investigation of three central issues in the tomographic analysis of composite battery electrodes: (i) Validation of state-of-the-art binary active material (AM) segmentation: Although threshold segmentation by standard algorithms leads to very good segmentation results, limited Xt resolution results in an AM underestimation of 6 vol% and severe overestimation of AM connectivity. (ii) Carbon binder domain (CBD) segmentation in Xt data: While threshold segmentation cannot be applied for this purpose, a suitable classification method is introduced. Based on correlative tomography, it allows for reliable ternary segmentation of Xt data into the pore space, CBD, and AM. (iii) Pore space analysis in the micrometer regime: This segmentation technique is applied to an Xt reconstruction with several hundred microns edge length, thus validating the segmentation of pores within the micrometer regime for the first time. The analyzed cathode volume exhibits a bimodal pore size distribution in the ranges between 0–1 μm and 1–12 μm. These ranges can be attributed to different pore formation mechanisms. PMID:27456201

  2. Automatic detection of diabetic foot complications with infrared thermography by asymmetric analysis.

    PubMed

    Liu, Chanjuan; van Netten, Jaap J; van Baal, Jeff G; Bus, Sicco A; van der Heijden, Ferdi

    2015-02-01

    Early identification of diabetic foot complications and their precursors is essential in preventing their devastating consequences, such as foot infection and amputation. Frequent, automatic risk assessment by an intelligent telemedicine system might be feasible and cost effective. Infrared thermography is a promising modality for such a system. The temperature differences between corresponding areas on contralateral feet are the clinically significant parameters. This asymmetric analysis is hindered by (1) foot segmentation errors, especially when the foot temperature and the ambient temperature are comparable, and by (2) different shapes and sizes between contralateral feet due to deformities or minor amputations. To circumvent the first problem, we used a color image and a thermal image acquired synchronously. Foot regions, detected in the color image, were rigidly registered to the thermal image. This resulted in 97.8% ± 1.1% sensitivity and 98.4% ± 0.5% specificity over 76 high-risk diabetic patients with manual annotation as a reference. Nonrigid landmark-based registration with B-splines solved the second problem. Corresponding points in the two feet could be found regardless of the shapes and sizes of the feet. With that, the temperature difference of the left and right feet could be obtained. © 2015 Society of Photo-Optical Instrumentation Engineers (SPIE)
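
    After registration, the asymmetric analysis reduces to a per-pixel temperature difference between corresponding points on the two feet; the mirroring shortcut and the 2.2 deg C hot-spot threshold below are illustrative assumptions, not values taken from the abstract.

```python
import numpy as np

def asymmetry_map(temp_left, temp_right):
    """Per-pixel temperature difference between registered contralateral feet.

    temp_left and temp_right are temperature images (deg C) already brought
    into pixel-wise correspondence by the registration step; the right foot
    is mirrored so anatomically matching points line up.
    """
    return temp_left - np.fliplr(temp_right)

# Hypothetical example: flag spots whose left/right difference exceeds 2.2 deg C
rng = np.random.default_rng(8)
left = 30 + rng.normal(0, 0.3, (128, 64))
right = 30 + rng.normal(0, 0.3, (128, 64))
hot_spots = np.abs(asymmetry_map(left, right)) > 2.2
print(hot_spots.mean())
```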

  3. The role of ventral and preventral organs as attachment sites for segmental limb muscles in Onychophora

    PubMed Central

    2013-01-01

    Background The so-called ventral organs are amongst the most enigmatic structures in Onychophora (velvet worms). They were described as segmental, ectodermal thickenings in the onychophoran embryo, but the same term has also been applied to mid-ventral, cuticular structures in adults, although the relationship between the embryonic and adult ventral organs is controversial. In the embryo, these structures have been regarded as anlagen of segmental ganglia, but recent studies suggest that they are not associated with neural development. Hence, their function remains obscure. Moreover, their relationship to the anteriorly located preventral organs, described from several onychophoran species, is also unclear. To clarify these issues, we studied the anatomy and development of the ventral and preventral organs in several species of Onychophora. Results Our anatomical data, based on histology, and light, confocal and scanning electron microscopy in five species of Peripatidae and three species of Peripatopsidae, revealed that the ventral and preventral organs are present in all species studied. These structures are covered externally with cuticle that forms an internal, longitudinal, apodeme-like ridge. Moreover, phalloidin-rhodamine labelling for f-actin revealed that the anterior and posterior limb depressor muscles in each trunk and the slime papilla segment attach to the preventral and ventral organs, respectively. During embryonic development, the ventral and preventral organs arise as large segmental, paired ectodermal thickenings that decrease in size and are subdivided into the smaller, anterior anlagen of the preventral organs and the larger, posterior anlagen of the ventral organs, both of which persist as paired, medially-fused structures in adults. Our expression data of the genes Delta and Notch from embryos of Euperipatoides rowelli revealed that these genes are expressed in two, paired domains in each body segment, corresponding in number, position and size with the anlagen of the ventral and preventral organs. Conclusions Our findings suggest that the ventral and preventral organs are a common feature of onychophorans that serve as attachment sites for segmental limb depressor muscles. The origin of these structures can be traced back in the embryo as latero-ventral segmental, ectodermal thickenings, previously suggested to be associated with the development of the nervous system. PMID:24308783

  4. Two-stage atlas subset selection in multi-atlas based image segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu

    2015-06-15

    Purpose: Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation, but also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs of large atlas collections with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors have developed a novel two-stage atlas subset selection scheme for multi-atlas based segmentation. It achieves good segmentation accuracy with significantly reduced computation cost, making it a suitable configuration in the presence of extensive heterogeneous atlases.
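
    The two-stage idea can be sketched with pluggable relevance metrics: a cheap metric screens all atlases, and an expensive metric re-ranks only the survivors. The `two_stage_selection` function, its default metrics, and the subset sizes are hypothetical stand-ins, not the inference-model-derived quantities of the cited work.

```python
import numpy as np

def two_stage_selection(target, atlases, n_augmented=20, n_fusion=5,
                        cheap_metric=None, accurate_metric=None):
    """Two-stage atlas subset selection with pluggable relevance metrics.

    Stage 1 ranks all atlases with a cheap metric (here, similarity on a
    coarse subsample, standing in for a low-cost registration) and keeps
    `n_augmented` of them.  Stage 2 re-ranks only that augmented subset with
    an expensive metric (full-resolution similarity, standing in for
    full-fledged registration) and keeps `n_fusion` for label fusion.
    """
    cheap_metric = cheap_metric or (lambda a, b: -np.mean((a[::4] - b[::4]) ** 2))
    accurate_metric = accurate_metric or (lambda a, b: -np.mean((a - b) ** 2))
    stage1 = sorted(range(len(atlases)),
                    key=lambda i: cheap_metric(target, atlases[i]),
                    reverse=True)[:n_augmented]
    stage2 = sorted(stage1,
                    key=lambda i: accurate_metric(target, atlases[i]),
                    reverse=True)[:n_fusion]
    return stage2   # indices of atlases passed on to label fusion

# Toy usage with random images standing in for registered atlas images
rng = np.random.default_rng(9)
target = rng.normal(size=(64, 64))
atlas_imgs = [target + rng.normal(scale=s, size=(64, 64))
              for s in np.linspace(0.1, 2.0, 40)]
print(two_stage_selection(target, atlas_imgs))
```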

  5. Large Constituent Families Help Children Parse Compounds

    ERIC Educational Resources Information Center

    Krott, Andrea; Nicoladis, Elena

    2005-01-01

    The family size of the constituents of compound words, or the number of compounds sharing the constituents, has been shown to affect adults' access to compound words in the mental lexicon. The present study was designed to see if family size would affect children's segmentation of compounds. Twenty-five English-speaking children between 3;7 and…

  6. Holokinetic drive: centromere drive in chromosomes without centromeres.

    PubMed

    Bureš, Petr; Zedek, František

    2014-08-01

    Similar to how the model of centromere drive explains the size and complexity of centromeres in monocentrics (organisms with localized centromeres), our model of holokinetic drive is consistent with the divergent evolution of chromosomal size and number in holocentrics (organisms with nonlocalized centromeres) exhibiting holokinetic meiosis (holokinetics). Holokinetic drive is proposed to facilitate chromosomal fission and/or repetitive DNA removal (or any segmental deletion) when smaller homologous chromosomes are preferentially inherited or chromosomal fusion and/or repetitive DNA proliferation (or any segmental duplication) when larger homologs are preferred. The hypothesis of holokinetic drive is supported primarily by the negative correlation between chromosome number and genome size that is documented in holokinetic lineages. The supporting value of two older cross-experiments on holokinetic structural heterozygotes (the rush Luzula elegans and butterflies of the genus Antheraea) that indicate the presence of size-preferential homolog transmission via female meiosis for holokinetic drive is discussed, along with the further potential consequences of holokinetic drive in comparison with centromere drive. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.

  7. Morphing Wing Weight Predictors and Their Application in a Template-Based Morphing Aircraft Sizing Environment II. Part 2; Morphing Aircraft Sizing via Multi-level Optimization

    NASA Technical Reports Server (NTRS)

    Skillen, Michael D.; Crossley, William A.

    2008-01-01

    This report presents an approach for sizing a morphing aircraft based upon multi-level design optimization. For this effort, a morphing wing is one whose planform can make significant shape changes in flight - increasing wing area by 50% or more from the lowest possible area, changing sweep by 30 degrees or more, and/or increasing aspect ratio by as much as 200% from the lowest possible value. The top-level optimization problem seeks to minimize the gross weight of the aircraft by determining a set of "baseline" variables - common aircraft sizing variables - along with a set of "morphing limit" variables, which describe the maximum shape change for a particular morphing strategy. The sub-level optimization problems represent each segment in the morphing aircraft's design mission; here, each sub-level optimizer minimizes the fuel consumed during its mission segment by changing the wing planform within the bounds set by the baseline and morphing limit variables from the top-level problem.

  8. Referent control and motor equivalence of reaching from standing

    PubMed Central

    Tomita, Yosuke; Feldman, Anatol G.

    2016-01-01

    Motor actions may result from central changes in the referent body configuration, defined as the body posture at which muscles begin to be activated or deactivated. The actual body configuration deviates from the referent configuration, particularly because of body inertia and environmental forces. Within these constraints, the system tends to minimize the difference between these configurations. For pointing movement, this strategy can be expressed as the tendency to minimize the difference between the referent trajectory (RT) and actual trajectory (QT) of the effector (hand). This process may underlie motor equivalent behavior that maintains the pointing trajectory regardless of the number of body segments involved. We tested the hypothesis that the minimization process is used to produce pointing in standing subjects. With eyes closed, 10 subjects reached from a standing position to a remembered target located beyond arm length. In randomly chosen trials, hip flexion was unexpectedly prevented, forcing subjects to take a step during pointing to prevent falling. The task was repeated when subjects were instructed to intentionally take a step during pointing. In most cases, reaching accuracy and trajectory curvature were preserved due to adaptive condition-specific changes in interjoint coordination. Results suggest that referent control and the minimization process associated with it may underlie motor equivalence in pointing. NEW & NOTEWORTHY Motor actions may result from minimization of the deflection of the actual body configuration from the centrally specified referent body configuration, within the limits of neuromuscular and environmental constraints. The minimization process may maintain reaching trajectory and accuracy regardless of the number of body segments involved (motor equivalence), as confirmed in this study of reaching from standing in young healthy individuals. Results suggest that the referent control process may underlie motor equivalence in reaching. PMID:27784802

  9. Brain tumor segmentation in multi-spectral MRI using convolutional neural networks (CNN).

    PubMed

    Iqbal, Sajid; Ghani, M Usman; Saba, Tanzila; Rehman, Amjad

    2018-04-01

    A tumor can be found in any area of the brain and can be of any size, shape, and contrast, and multiple tumors of different types may exist in a human brain at the same time. Accurate segmentation of the tumor area is considered a primary step in the treatment of brain tumors. Deep learning offers a set of promising techniques that can provide better results than non-deep-learning techniques for segmenting the tumorous part of a brain. This article presents a deep convolutional neural network (CNN) to segment brain tumors in MRIs. The proposed network uses the BRATS segmentation challenge dataset, which is composed of images obtained through four different modalities. Accordingly, we present an extended version of an existing network to solve the segmentation problem. The network architecture consists of multiple neural network layers connected in sequential order, with convolutional feature maps fed in at the peer level. Experimental results on the BRATS 2015 benchmark data show the usability of the proposed approach and its superiority over other approaches in this area of research. © 2018 Wiley Periodicals, Inc.
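
    For illustration, a minimal PyTorch sketch of a multi-modality patch classifier is shown below. The layer sizes, patch size, and class count are assumptions and do not reproduce the article's architecture or its peer-level feature-map feeding.

```python
import torch
import torch.nn as nn

class TumorPatchCNN(nn.Module):
    """Toy multi-modality CNN: 4 MRI channels in, per-pixel class logits out."""
    def __init__(self, n_modalities: int = 4, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_modalities, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
        )
        # 1x1 convolution gives a class map (downsampled by the pooling layer).
        self.classifier = nn.Conv2d(128, n_classes, kernel_size=1)

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: a batch of 33x33 patches with T1, T1c, T2 and FLAIR channels.
logits = TumorPatchCNN()(torch.randn(8, 4, 33, 33))
print(logits.shape)  # torch.Size([8, 5, 16, 16])
```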

  10. Semi-automated Neuron Boundary Detection and Nonbranching Process Segmentation in Electron Microscopy Images

    PubMed Central

    Jurrus, Elizabeth; Watanabe, Shigeki; Giuly, Richard J.; Paiva, Antonio R. C.; Ellisman, Mark H.; Jorgensen, Erik M.; Tasdizen, Tolga

    2013-01-01

    Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of these data make human interpretation a labor-intensive task. To aid in the analysis, new segmentation techniques for identifying neurons in these feature-rich datasets are required. This paper presents a method for neuron boundary detection and nonbranching process segmentation in electron microscopy images, and for visualizing them in three dimensions. It combines automated segmentation techniques with a graphical user interface for correcting mistakes in the automated process. The automated process first uses machine learning and image processing techniques to identify the neuron membranes that delineate the cells in each two-dimensional section. To segment nonbranching processes, the cell regions in each two-dimensional section are connected in 3D using correlation of regions between sections. The combination of this method with a graphical user interface specially designed for this purpose enables users to quickly segment cellular processes in large volumes. PMID:22644867
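
    The 3D linking step can be illustrated compactly. The sketch below assumes a per-section classifier has already produced binary cell-interior masks and links regions between adjacent sections by relative overlap, a simplified proxy for the region correlation used in the paper; `min_overlap` is a hypothetical parameter.

```python
import numpy as np
from scipy import ndimage

def label_sections(interior_masks):
    # Connected components per 2D section; interior_masks are boolean arrays.
    return [ndimage.label(mask)[0] for mask in interior_masks]

def link_sections(labels_a, labels_b, min_overlap=0.5):
    # Connect each region in section A to its best-overlapping region in section B.
    links = []
    for ra in range(1, labels_a.max() + 1):
        region_a = labels_a == ra
        best, best_score = None, 0.0
        for rb in range(1, labels_b.max() + 1):
            score = np.logical_and(region_a, labels_b == rb).sum() / region_a.sum()
            if score > best_score:
                best, best_score = rb, score
        if best is not None and best_score >= min_overlap:
            links.append((ra, best))
    return links
```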

  11. Semi-Automated Neuron Boundary Detection and Nonbranching Process Segmentation in Electron Microscopy Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jurrus, Elizabeth R.; Watanabe, Shigeki; Giuly, Richard J.

    2013-01-01

    Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of these data make human interpretation a labor-intensive task. To aid in the analysis, new segmentation techniques for identifying neurons in these feature-rich datasets are required. This paper presents a method for neuron boundary detection and nonbranching process segmentation in electron microscopy images, and for visualizing them in three dimensions. It combines automated segmentation techniques with a graphical user interface for correcting mistakes in the automated process. The automated process first uses machine learning and image processing techniques to identify the neuron membranes that delineate the cells in each two-dimensional section. To segment nonbranching processes, the cell regions in each two-dimensional section are connected in 3D using correlation of regions between sections. The combination of this method with a graphical user interface specially designed for this purpose enables users to quickly segment cellular processes in large volumes.

  12. [The private vaccines market in Brazil: privatization of public health].

    PubMed

    Temporão, José Gomes

    2003-01-01

    The main objective of this article is to analyze the vaccines market in Brazil, which consists of two segments with distinct practices and logics: the public segment, focused on supply within the Unified National Health System (SUS), and the private segment, organized around private clinics, physicians' offices, and similar private health facilities. The private vaccines market segment, studied here for the first time, is characterized in relation to its supply and demand structure. Historical aspects of its structure are analyzed, based on the creation of one of the first immunization clinics in the country. The segment is analyzed in relation to its economic dimensions (imports and sales), principal manufacturers, and products marketed. Its economic size proved much greater than initially hypothesized; the figures allow one to view it as one of the main segments of the pharmaceutical industry in Brazil as measured by sales volume. One detects the penetration of a privatizing logic into a sphere that has always been essentially public, thereby introducing into the SUS a new space for disregarding the principles of equity and universality.

  13. Active Segmentation.

    PubMed

    Mishra, Ajay; Aloimonos, Yiannis

    2009-01-01

    The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene, which can be either an object or just a part of one. We define as a basic segmentation problem the task of segmenting the region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour, a connected set of boundary edge fragments in the edge map of the scene, around the fixation; this enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue-independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring visual processing to the next level. Our approach is different from current approaches: while existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach.
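
    As a generic illustration of segmenting the region that contains a fixation point, the sketch below resamples an edge-cost map to polar coordinates about the fixation and traces a low-cost closed path with dynamic programming. This is a common stand-in for the task, not the authors' cue-combination algorithm; `edge_cost` is assumed normalised to [0, 1] (high where edges are strong), and closure across the 0/2π boundary is ignored for brevity.

```python
import numpy as np

def enclosing_contour(edge_cost, fixation, n_angles=360, n_radii=100, smooth=1):
    """Return (radius per angle, angles) of a low-cost contour around `fixation`."""
    fy, fx = fixation
    h, w = edge_cost.shape
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    radii = np.linspace(1.0, min(h, w) / 2.0, n_radii)

    # Polar cost image: low cost where edge strength is high.
    ys = np.clip((fy + np.outer(np.sin(angles), radii)).astype(int), 0, h - 1)
    xs = np.clip((fx + np.outer(np.cos(angles), radii)).astype(int), 0, w - 1)
    polar_cost = 1.0 - edge_cost[ys, xs]

    # Dynamic programming across angles; radius may change by at most `smooth` bins.
    acc = polar_cost.copy()
    for a in range(1, n_angles):
        for r in range(n_radii):
            lo, hi = max(0, r - smooth), min(n_radii, r + smooth + 1)
            acc[a, r] += acc[a - 1, lo:hi].min()

    # Greedy back-trace of the best radius per angle (kept simple for brevity).
    best = np.empty(n_angles, dtype=int)
    best[-1] = int(acc[-1].argmin())
    for a in range(n_angles - 2, -1, -1):
        lo, hi = max(0, best[a + 1] - smooth), min(n_radii, best[a + 1] + smooth + 1)
        best[a] = lo + int(acc[a, lo:hi].argmin())
    return radii[best], angles
```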

  14. Multi-atlas pancreas segmentation: Atlas selection based on vessel structure.

    PubMed

    Karasawa, Ken'ichi; Oda, Masahiro; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Chu, Chengwen; Zheng, Guoyan; Rueckert, Daniel; Mori, Kensaku

    2017-07-01

    Automated organ segmentation from medical images is an indispensable component for clinical applications such as computer-aided diagnosis (CAD) and computer-assisted surgery (CAS). We utilize a multi-atlas segmentation scheme, which has recently been used in different approaches in the literature to achieve more accurate and robust segmentation of anatomical structures in computed tomography (CT) volume data. Among abdominal organs, the pancreas has large inter-patient variability in its position, size and shape. Moreover, the CT intensity of the pancreas closely resembles adjacent tissues, rendering its segmentation a challenging task. Due to this, conventional intensity-based atlas selection for pancreas segmentation often fails to select atlases that are similar in pancreas position and shape to those of the unlabeled target volume. In this paper, we propose a new atlas selection strategy based on vessel structure around the pancreatic tissue and demonstrate its application to a multi-atlas pancreas segmentation. Our method utilizes vessel structure around the pancreas to select atlases with high pancreatic resemblance to the unlabeled volume. Also, we investigate two types of applications of the vessel structure information to the atlas selection. Our segmentations were evaluated on 150 abdominal contrast-enhanced CT volumes. The experimental results showed that our approach can segment the pancreas with an average Jaccard index of 66.3% and an average Dice overlap coefficient of 78.5%. Copyright © 2017 Elsevier B.V. All rights reserved.
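
    The selection-then-fusion structure can be sketched as follows. The vessel descriptors here are an unspecified feature vector standing in for the paper's vessel-structure information, and the atlas label volumes are assumed to be already registered to the target; majority voting is used as a simple stand-in for the label fusion step.

```python
import numpy as np

def select_atlases(target_descriptor, atlas_descriptors, k):
    # Rank atlases by descriptor distance to the target and keep the k most similar.
    dists = [np.linalg.norm(target_descriptor - d) for d in atlas_descriptors]
    return np.argsort(dists)[:k]

def majority_vote(label_volumes):
    # label_volumes: list of binary pancreas masks warped into the target space.
    stacked = np.stack(label_volumes, axis=0)
    return (stacked.mean(axis=0) >= 0.5).astype(np.uint8)
```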

  15. Quantitative evaluation of hidden defects in cast iron components using ultrasound activated lock-in vibrothermography.

    PubMed

    Montanini, R; Freni, F; Rossi, G L

    2012-09-01

    This paper reports one of the first experimental results on the application of ultrasound activated lock-in vibrothermography for quantitative assessment of buried flaws in complex cast parts. The use of amplitude-modulated ultrasonic heat generation allowed a selective response of defective areas within the part, as the defect itself is turned into a local thermal wave emitter. Quantitative evaluation of hidden damage was accomplished by estimating independently both the area and the depth extension of the buried flaws, while x-ray 3D computed tomography was used as a reference for assessing sizing accuracy. To retrieve the flaw area, a simple yet effective histogram-based phase image segmentation algorithm with automatic pixel classification has been developed. A clear correlation was found between the thermal (phase) signature measured by the infrared camera on the target surface and the actual mean cross-section area of the flaw. Due to the very fast cycle time (<30 s/part), the method could potentially be applied for 100% quality control of cast components.
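
    A hedged stand-in for the histogram-based phase segmentation is an Otsu threshold on the phase image followed by pixel counting. The sketch below assumes defect pixels take the higher phase values and that the pixel pitch on the target surface is known; it is not the authors' algorithm.

```python
import numpy as np

def otsu_threshold(phase, n_bins=256):
    # Classic Otsu: pick the threshold that maximises between-class variance.
    hist, edges = np.histogram(phase.ravel(), bins=n_bins)
    hist = hist.astype(float)
    centers = 0.5 * (edges[:-1] + edges[1:])
    weight1 = np.cumsum(hist)
    weight2 = np.cumsum(hist[::-1])[::-1]
    mean1 = np.cumsum(hist * centers) / np.maximum(weight1, 1e-12)
    mean2 = (np.cumsum((hist * centers)[::-1]) / np.maximum(weight2[::-1], 1e-12))[::-1]
    variance = weight1[:-1] * weight2[1:] * (mean1[:-1] - mean2[1:]) ** 2
    return centers[np.argmax(variance)]

def flaw_area_mm2(phase, pixel_pitch_mm):
    # Defect pixels assumed to lie above the threshold; area = count * pixel area.
    mask = phase > otsu_threshold(phase)
    return mask.sum() * pixel_pitch_mm ** 2
```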

  16. Distribution of small channels on the Martian surface

    NASA Technical Reports Server (NTRS)

    Pieri, D.

    1976-01-01

    The distribution of small channels on Mars has been mapped from Mariner 9 images at the 1:5,000,000 scale. The small channels referred to here are small valleys ranging in width from the resolution limit of the Mariner 9 wide-angle images (about 1 km) to about 10 km. The greatest density of small channels occurs in dark cratered terrain. This dark zone forms a broad subequatorial band around the planet. The observed distribution may be the result of decreased small-channel visibility in bright areas due to obscuration by a high albedo dust or sediment mantle. Crater densities within two small-channel segments show crater size-frequency distributions consistent with those of the oldest of the heavily cratered plains units. Such crater densities coupled with the almost exclusive occurrence of small channels in old cratered terrain and the generally degraded appearance of small channels in the high-resolution images (about 100 m) imply a major episode of small-channel formation early in Martian geologic history.

  17. Multi-scale structural community organisation of the human genome.

    PubMed

    Boulos, Rasha E; Tremblay, Nicolas; Arneodo, Alain; Borgnat, Pierre; Audit, Benjamin

    2017-04-11

    Structural interaction frequency matrices between all genome loci are now experimentally achievable thanks to high-throughput chromosome conformation capture technologies. This raises a new methodological challenge for computational biology, which consists in objectively extracting from these data the structural motifs characteristic of genome organisation. We deployed a fast multi-scale community mining algorithm based on spectral graph wavelets to characterise the networks of intra-chromosomal interactions in human cell lines. We observed that structural domains exist at all sizes up to chromosome length and demonstrated that the set of structural communities forms a hierarchy of chromosome segments. Hence, at all scales, chromosome folding predominantly involves interactions between neighbouring sites rather than the formation of links between distant loci. Multi-scale structural decomposition of human chromosomes provides an original framework for questioning structural organisation and its relationship to functional regulation across scales. By construction, the proposed methodology is independent of the precise assembly of the reference genome and is thus directly applicable to genomes whose assembly is not fully determined.
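
    As a simplified analogue of this multi-scale decomposition, the toy sketch below diffuses a heat kernel exp(-tL) over the intra-chromosomal contact graph and clusters the diffusion signatures at several scales. The paper's method uses spectral graph wavelets and community mining; the scale and domain-count values here are arbitrary assumptions, and the contact matrix is assumed already normalised.

```python
import numpy as np
from scipy.linalg import expm
from sklearn.cluster import KMeans

def multiscale_domains(contacts, scales=(0.1, 1.0, 10.0), n_domains=(20, 10, 5)):
    # contacts: symmetric (n_loci x n_loci) interaction frequency matrix.
    degree = contacts.sum(axis=1)
    laplacian = np.diag(degree) - contacts
    labels_per_scale = {}
    for t, k in zip(scales, n_domains):
        signatures = expm(-t * laplacian)  # heat-kernel diffusion at scale t
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(signatures)
        labels_per_scale[t] = labels       # larger t -> smoother -> coarser domains
    return labels_per_scale
```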

  18. Distribution of small channels on the Martian surface

    USGS Publications Warehouse

    Pieri, D.

    1976-01-01

    The distribution of small channels on Mars has been mapped from Mariner 9 images, at the 1:5,000,000 scale, by the author. The small channels referred to here are small valleys ranging in width from the resolution limit of the Mariner 9 wide-angle images (~1 km) to about 10 km. The greatest density of small channels occurs in dark cratered terrain. This dark zone forms a broad subequatorial band around the planet. The observed distribution may be the result of decreased small-channel visibility in bright areas due to obscuration by a high-albedo dust or sediment mantle. Crater densities within two small-channel segments show crater size-frequency distributions consistent with those of the oldest of the heavily cratered plains units. Such crater densities, coupled with the almost exclusive occurrence of small channels in old cratered terrain and the generally degraded appearance of small channels in the high-resolution images (~100 m), imply a major episode of small-channel formation early in Martian geologic history.

  19. Neural-net-based image matching

    NASA Astrophysics Data System (ADS)

    Jerebko, Anna K.; Barabanov, Nikita E.; Luciv, Vadim R.; Allinson, Nigel M.

    2000-04-01

    The paper describes a neural-based method for matching spatially distorted image sets. The matching of partially overlapping images is important in many applications: integrating information from images formed in different spectral ranges, detecting changes in a scene, and identifying objects of differing orientations and sizes. Our approach consists of extracting contour features from both images, describing the contour curves as sets of line segments, comparing these sets, determining the corresponding curves and their common reference points, and calculating the image-to-image coordinate transformation parameters on the basis of the most successful variant of the derived curve relationships. The main steps are performed by custom neural networks. The algorithms described in this paper have been successfully tested on a large set of images of the same terrain taken in different spectral ranges, at different seasons, and rotated by various angles. In general, this experimental verification indicates that the proposed method for image fusion allows the robust detection of similar objects in noisy, distorted scenes where traditional approaches often fail.
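
    Only the final step of such a pipeline, computing the image-to-image coordinate transformation from corresponding reference points, is sketched below as a least-squares similarity fit (Umeyama-style); the contour extraction, line-segment description, and neural matching stages are not reproduced here.

```python
import numpy as np

def estimate_similarity(src, dst):
    """Least-squares rotation, scale and translation mapping src points onto dst.

    src, dst: (n, 2) arrays of corresponding reference points.
    Returns (scale, rotation_matrix, translation) with dst ≈ scale * R @ src + t.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    s, d = src - mu_s, dst - mu_d
    u, sigma, vt = np.linalg.svd(d.T @ s)
    sign = np.sign(np.linalg.det(u @ vt))    # guard against reflections
    diag = np.array([1.0, sign])
    rotation = u @ np.diag(diag) @ vt
    scale = (sigma * diag).sum() / (s ** 2).sum()
    translation = mu_d - scale * rotation @ mu_s
    return scale, rotation, translation
```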

  20. The diagnostic and clinical significance of café-au-lait macules.

    PubMed

    Shah, Kara N

    2010-10-01

    Café-au-lait macules, also referred to as café-au-lait spots, present as well-circumscribed, evenly pigmented macules and patches that range in size from 1 to 2 mm to greater than 20 cm in greatest diameter. Café-au-lait macules are common in children. Although most present as 1 or 2 spots in an otherwise healthy child, the presence of multiple café-au-lait macules, large segmental café-au-lait macules, associated facial dysmorphism, other cutaneous anomalies, or unusual findings on physical examination should suggest the possibility of an associated syndrome. While neurofibromatosis type 1 is the most common syndrome seen in children with multiple café-au-lait macules, other syndromes associated with one or more café-au-lait macules include McCune-Albright syndrome, Legius syndrome, Noonan syndrome and other neuro-cardio-facial-cutaneous syndromes, ring chromosome syndromes, and constitutional mismatch repair deficiency syndrome. Copyright © 2010 Elsevier Inc. All rights reserved.
