Sample records for database fill segment

  1. Introduction to the enhanced logistics intratheater support tool (ELIST) mission application and its segments: global data segment version 8.1.0.0, database instance segment version 8.1.0.0, database fill segment version 8.1.0.0, database segment versio

    DOT National Transportation Integrated Search

    2002-02-26

    This document, the Introduction to the Enhanced Logistics Intratheater Support Tool (ELIST) Mission Application and its Segments, satisfies the following objectives: It identifies the mission application, known in brief as ELIST, and all seven ...

  2. Automated tissue segmentation of MR brain images in the presence of white matter lesions.

    PubMed

    Valverde, Sergi; Oliver, Arnau; Roura, Eloy; González-Villà, Sandra; Pareto, Deborah; Vilanova, Joan C; Ramió-Torrentà, Lluís; Rovira, Àlex; Lladó, Xavier

    2017-01-01

    Over the last few years, increasing interest in brain tissue volume measurements in clinical settings has led to the development of a large number of automated tissue segmentation methods. However, white matter (WM) lesions are known to reduce the performance of these methods, requiring manual annotation and refilling of the lesions before segmentation, which is tedious and time-consuming. Here, we propose a new, fully automated T1-w/FLAIR tissue segmentation approach designed to deal with images in the presence of WM lesions. This approach integrates a robust partial volume tissue segmentation with WM outlier rejection and filling, combining intensity with probabilistic and morphological prior maps. We evaluate the performance of this method on the MRBrainS13 tissue segmentation challenge database, which contains images with vascular WM lesions, and also on a set of multiple sclerosis (MS) patient images. On both databases, we compare the performance of our method with that of other state-of-the-art techniques. On the MRBrainS13 data, the presented approach was, at the time of submission, the best-ranked unsupervised intensity model method of the challenge (7th position overall) and clearly outperformed other unsupervised pipelines such as FAST and SPM12. On MS data, the differences in tissue segmentation between images segmented with our method and the same images in which manual expert annotations were used to refill lesions on T1-w images before segmentation were lower than or similar to those of the best state-of-the-art pipeline incorporating automated lesion segmentation and filling. Our results show that the proposed pipeline achieves very competitive results on both vascular and MS lesions. A public version of this approach is available to download for the neuro-imaging community. Copyright © 2016 Elsevier B.V. All rights reserved.
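    The lesion refilling step described above can be illustrated with a minimal sketch: replace the voxels inside a lesion mask with intensities drawn from the surrounding normal-appearing white matter. This is a hypothetical simplification, not the authors' pipeline; the function name and the `wm_mask` argument are ours.

```python
import numpy as np

def fill_lesions(image, lesion_mask, wm_mask, rng=None):
    """Replace lesion voxels with intensities sampled from
    normal-appearing white matter (simplified illustration only)."""
    rng = np.random.default_rng(0) if rng is None else rng
    filled = image.astype(float)
    # Intensities of white matter voxels outside the lesions.
    wm_values = image[wm_mask & ~lesion_mask]
    # Draw a plausible WM intensity for every lesion voxel.
    filled[lesion_mask] = rng.normal(wm_values.mean(), wm_values.std(),
                                     size=int(lesion_mask.sum()))
    return filled
```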

  3. Global Binary Continuity for Color Face Detection With Complex Background

    NASA Astrophysics Data System (ADS)

    Belavadi, Bhaskar; Mahendra Prashanth, K. V.; Joshi, Sujay S.; Suprathik, N.

    2017-08-01

    In this paper, we propose a method to detect human faces in color images with complex backgrounds. The proposed algorithm makes use of two color space models, HSV and YCgCr. The color-segmented image is filled uniformly with a single (binary) color, and all unwanted discontinuous lines are then removed to obtain the final image. Experimental results on the Caltech database show that the proposed model accomplishes far better segmentation for faces of varying orientation, skin color and background environment.
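    The post-processing described above (uniform binary filling followed by removal of unwanted fragments) can be sketched roughly as follows. The HSV/YCgCr threshold rules themselves are not reproduced here, and the function name and `min_size` parameter are illustrative only.

```python
import numpy as np
from scipy import ndimage

def clean_skin_mask(mask, min_size=50):
    """Fill holes in a binary skin-colour mask and drop small
    disconnected fragments (illustrative post-processing only)."""
    # Fill enclosed holes so each candidate face is a solid region.
    filled = ndimage.binary_fill_holes(mask)
    # Label connected components and measure their pixel counts.
    labels, n = ndimage.label(filled)
    sizes = ndimage.sum(filled, labels, range(1, n + 1))
    # Keep only components at least min_size pixels in area.
    keep_ids = np.nonzero(sizes >= min_size)[0] + 1
    return np.isin(labels, keep_ids)
```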

  4. [Research report of experimental database establishment of digitized virtual Chinese No.1 female].

    PubMed

    Zhong, Shi-zhen; Yuan, Lin; Tang, Lei; Huang, Wen-hua; Dai, Jing-xing; Li, Jian-yi; Liu, Chang; Wang, Xing-hai; Li, Hua; Luo, Shu-qian; Qin, Dulie; Zeng, Shao-qun; Wu, Tao; Zhang, Mei-chao; Wu, Kun-cheng; Jiao, Pei-feng; Lu, Yun-tao; Chen, Hao; Li, Pei-liang; Gao, Yuan; Wang, Tong; Fan, Ji-hong

    2003-03-01

    To establish the digitized virtual Chinese No.1 female (VCH-F1) image database. A 19-year-old female cadaver was scanned by CT and MRI, and perfused with red filling material through the femoral artery before freezing and embedding. The whole body was cut by a JZ1500A vertical milling machine at 0.2 mm inter-spacing. All images were captured with a Fuji FinePix S2 Pro camera. The body index of VCH-F1 was 94%. We cut 8,556 sections of the whole body; each image was 17.5 MB in size and the whole database reached 149.7 GB. Six versions of the database were produced for different applications. Compared with other databases, VCH-F1 gives a good representation of the Chinese body shape, and the colored filling material in the blood vessels provides enough information for future registration and segmentation. Vertical embedding and cutting helped to retain a normal human physiological posture, and image quality and operating efficiency were improved by various techniques such as one-time freezing and fixation, a double-temperature icehouse, a large-diameter milling disc and whole-body cutting.

  5. Assessing the Robustness of Complete Bacterial Genome Segmentations

    NASA Astrophysics Data System (ADS)

    Devillers, Hugo; Chiapello, Hélène; Schbath, Sophie; El Karoui, Meriem

    Comparison of closely related bacterial genomes has revealed the presence of highly conserved sequences forming a "backbone" that is interrupted by numerous, less conserved, DNA fragments. Segmentation of bacterial genomes into backbone and variable regions is particularly useful to investigate bacterial genome evolution. Several software tools have been designed to compare complete bacterial chromosomes and a few online databases store pre-computed genome comparisons. However, very few statistical methods are available to evaluate the reliability of these software tools and to compare the results obtained with them. To fill this gap, we have developed two local scores to measure the robustness of bacterial genome segmentations. Our method uses a simulation procedure based on random perturbations of the compared genomes. The scores presented in this paper are simple to implement and our results show that they make it easy to discriminate between robust and non-robust bacterial genome segmentations when using aligners such as MAUVE and MGA.

  6. Method to planarize three-dimensional structures to enable conformal electrodes

    DOEpatents

    Nikolic, Rebecca J; Conway, Adam M; Graff, Robert T; Reinhardt, Catherine; Voss, Lars F; Shao, Qinghui

    2012-11-20

    Methods for fabricating three-dimensional PIN structures having conformal electrodes are provided, as well as the structures themselves. The structures include a first layer and an array of pillars with cavity regions between the pillars. A first end of each pillar is in contact with the first layer. A segment is formed on the second end of each pillar. The cavity regions are filled with a fill material, which may be a functional material such as a neutron-sensitive material. The fill material covers each segment. A portion of the fill material is etched back to produce an exposed portion of the segment. A first electrode is deposited onto the fill material and each exposed segment, thereby forming a conductive layer that provides a common contact to each exposed segment. A second electrode is deposited onto the first layer.

  7. Automatic comparison of striation marks and automatic classification of shoe prints

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Keijzer, Jan; Keereweer, Isaac

    1995-09-01

    A database for toolmarks (named TRAX) and a database for footwear outsole designs (named REBEZO) have been developed on a PC. The databases are filled with video images and administrative data about the toolmarks and footwear designs. An algorithm for the automatic comparison of digitized striation patterns has been developed for TRAX. The algorithm appears to work well for deep and complete striation marks and will be implemented in TRAX. For REBEZO, some efforts have been made toward the automatic classification of outsole patterns. The algorithm first segments the shoe profile. Fourier features are selected for the separate elements and classified with a neural network. Future developments will include information on invariant moments of the shape and on rotation angle in the neural network.

  8. Automated segmentation and feature extraction of product inspection items

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1997-03-01

    X-ray film and linescan images of pistachio nuts on conveyor trays for product inspection are considered. The final objective is the categorization of pistachios into good, blemished and infested nuts. A crucial step before classification is the separation of touching products and the extraction of features essential for classification. This paper addresses new detection and segmentation algorithms to isolate touching or overlapping items. These algorithms employ a new filter, a new watershed algorithm, and morphological processing to produce nutmeat-only images. Tests on a large database of x-ray film and real-time x-ray linescan images of around 2900 small, medium and large nuts showed excellent segmentation results. A new technique to detect and segment dark regions in nutmeat images is also presented and tested on approximately 300 x-ray film and approximately 300 real-time linescan x-ray images with 95-97 percent detection and correct segmentation. New algorithms are described that determine nutmeat fill ratio and locate splits in nutmeat. The techniques formulated in this paper are of general use in many different product inspection and computer vision problems.
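    The separation of touching items can be sketched with the marker step that typically precedes watershed flooding: distance-transform peaks yield one marker per item even when plain connected-component labeling sees a single blob. This is a generic illustration, not the paper's new filter or its specific watershed algorithm.

```python
import numpy as np
from scipy import ndimage

def count_touching_items(binary_img, peak_frac=0.6):
    """Estimate the number of touching items in a binary image by
    finding distance-transform peaks, the marker step that precedes
    watershed flooding (the flooding itself is omitted here)."""
    # Distance of each foreground pixel to the nearest background pixel.
    dist = ndimage.distance_transform_edt(binary_img)
    # Each item's interior produces a distance peak; thresholding near
    # the maximum splits merged blobs into one marker per item.
    markers = dist > peak_frac * dist.max()
    _, n = ndimage.label(markers)
    return n
```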

  9. Targeted delayed scanning at CT urography: a worthwhile use of radiation?

    PubMed

    Hack, Kalesha; Pinto, Patricia A; Gollub, Marc J

    2012-10-01

    To determine whether ureteral segments not filled with contrast material at computed tomographic (CT) urography ever contain tumor detectable only by filling these segments with contrast material. In this institutional review board-approved, HIPAA-compliant retrospective study, with waiver of informed consent, databases were searched for all patients who underwent heminephroureterectomy or ureteroscopy between January 1, 2001, and December 31, 2009, with available CT urography findings in the 12 months prior to surgery or biopsy and patients who had undergone at least two CT urography procedures with a minimum 5-year follow-up between studies. One of two radiologists blinded to results of pathologic examination recorded location of unfilled segments, time of scan, subsequent filling, and pathologic or 5-year follow-up CT urography results. Tumors were considered missed in an unfilled segment if tumor was found at pathologic examination or follow-up CT urography in the same one-third of the ureter and there were no secondary signs of a mass with other index CT urography sequences. Estimated radiation dose for additional delayed sequences was calculated with a 32-cm phantom. In 59 male and 33 female patients (mean age, 66 years) undergoing heminephroureterectomy, 27 tumors were present in 41 partially nonopacified ureters in 20 patients. Six tumors were present in nonopacified segments (one multifocal, none bilateral); all were identifiable by means of secondary signs present with earlier sequences. Among 182 lesions biopsied at ureteroscopy in 124 male and 53 female patients (mean age, 69 years), 28 tumors were present in nonopacified segments in 25 patients (four multifocal, none bilateral), all with secondary imaging signs detectable without delayed scanning. 
In 64 male and 29 female patients (mean age, 69 years) who underwent 5-year follow-up CT urography, three new tumors were revealed in three patients; none occurred in the unfilled ureter at index CT urography. Estimated radiation dose from additional sequences was 4.3 mSv per patient. Targeted delayed scanning at CT urography yielded no additional ureteral tumors and resulted in additional radiation exposure. © RSNA, 2012.

  10. Use of landsat ETM+ SLC-off segment-based gap-filled imagery for crop type mapping

    USGS Publications Warehouse

    Maxwell, S.K.; Craig, M.E.

    2008-01-01

    Failure of the Scan Line Corrector (SLC) on the Landsat ETM+ sensor has had a major impact on many applications that rely on continuous medium resolution imagery to meet their objectives. The United States Department of Agriculture (USDA) Cropland Data Layer (CDL) program uses Landsat imagery as the primary source of data to produce crop-specific maps for 20 states in the USA. A new method has been developed to fill the image gaps resulting from the SLC failure to support the needs of Landsat users who require coincident spectral data, such as for crop type mapping and monitoring. We tested the new gap-fill method for a CDL crop type mapping project in eastern Nebraska. Scan line gaps were simulated on two Landsat 5 images (spring and late summer 2003) and then gap-filled using landscape boundary models, or segment models, derived from 1992 and 2002 Landsat images. Various date combinations of original and gap-filled images were used to derive crop maps using a supervised classification process. Overall kappa values were slightly higher for crop maps derived from SLC-off gap-filled images compared to crop maps derived from the original imagery (0.3–1.3% higher). Although the age of the segment model used to derive the SLC-off gap-filled product did not negatively impact overall agreement, differences in individual cover type agreement did increase (−0.8% to 1.6% using the 2002 segment model versus −5.0% to 5.1% using the 1992 segment model). Classification agreement also decreased for most classes as the size of the segment used in the gap-fill process increased.

  11. Attention during adaptation weakens negative afterimages of perceptually colour-spread surfaces.

    PubMed

    Lak, Armin

    2008-06-01

    The visual system can complete coloured surfaces from stimulus fragments, inducing the subjective perception of a colour-spread figure. Negative afterimages of these induced colours were first reported by S. Shimojo, Y. Kamitani, and S. Nishida (2001). Two experiments were conducted to examine the effect of attention on the duration of these afterimages. The results showed that shifting attention to the colour-spread figure during the adaptation phase weakened the subsequent afterimage. On the basis of previous findings that the duration of these afterimages is correlated with the strength of perceptual filling-in (grouping) among local inducers during the adaptation phase, it is proposed that attention weakens perceptual filling-in during the adaptation phase and thereby prevents the stimulus from being segmented into an illusory figure. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

  12. Ridge 2000 Data Management System

    NASA Astrophysics Data System (ADS)

    Goodwillie, A. M.; Carbotte, S. M.; Arko, R. A.; Haxby, W. F.; Ryan, W. B.; Chayes, D. N.; Lehnert, K. A.; Shank, T. M.

    2005-12-01

    Hosted at Lamont by the marine geoscience Data Management group, mgDMS, the NSF-funded Ridge 2000 electronic database, http://www.marine-geo.org/ridge2000/, is a key component of the Ridge 2000 multi-disciplinary program. The database covers each of the three Ridge 2000 Integrated Study Sites: Endeavour Segment, Lau Basin, and 8-11N Segment. It promotes the sharing of information to the broader community, facilitates integration of the suite of information collected at each study site, and enables comparisons between sites. The Ridge 2000 data system provides easy web access to a relational database that is built around a catalogue of cruise metadata. Any web browser can be used to perform a versatile text-based search which returns basic cruise and submersible dive information, sample and data inventories, navigation, and other relevant metadata such as shipboard personnel and links to NSF program awards. In addition, non-proprietary data files, images, and derived products which are hosted locally or in national repositories, as well as science and technical reports, can be freely downloaded. On the Ridge 2000 database page, our Data Link allows users to search the database using a broad range of parameters including data type, cruise ID, chief scientist, and geographical location. The first Ridge 2000 field programs sailed in 2004 and, in addition to numerous data sets collected prior to the Ridge 2000 program, the database currently contains information on fifteen Ridge 2000-funded cruises and almost sixty Alvin dives. Track lines can be viewed using a recently implemented Web Map Service button labelled Map View. The Ridge 2000 database is fully integrated with databases hosted by the mgDMS group for MARGINS and the Antarctic multibeam and seismic reflection data initiatives. Links are provided to partner databases including PetDB, SIOExplorer, and the ODP Janus system. 
Inter-operability with existing and new partner repositories continues to be strengthened. One major effort involves the gradual unification of metadata across these partner databases. Standardised electronic metadata forms that can be filled in at sea are available from our web site. Interactive map-based exploration and visualisation of the Ridge 2000 database is provided by GeoMapApp, a freely-available Java(tm) application being developed within the mgDMS group. GeoMapApp includes high-resolution bathymetric grids for the 8-11N EPR segment and allows customised maps and grids for any of the Ridge 2000 ISS to be created. Vent and instrument locations can be plotted and saved as images, and Alvin dive photos are also available.

  13. A multi-scale segmentation approach to filling gaps in Landsat ETM+ SLC-off images

    USGS Publications Warehouse

    Maxwell, S.K.; Schmidt, Gail L.; Storey, James C.

    2007-01-01

    On 31 May 2003, the Landsat Enhanced Thematic Mapper Plus (ETM+) Scan Line Corrector (SLC) failed, causing the scanning pattern to exhibit wedge-shaped scan-to-scan gaps. We developed a method that uses coincident spectral data to fill the image gaps. This method uses a multi-scale segment model, derived from a previous Landsat SLC-on image (an image acquired prior to the SLC failure), to guide the spectral interpolation across the gaps in SLC-off images (images acquired after the SLC failure). This paper describes the process used to generate the segment model, provides details of the gap-fill algorithm used in deriving the segment-based gap-fill product, and presents the results of the gap-fill process applied to grassland, cropland, and forest landscapes. Our results indicate this product will be useful for a wide variety of applications, including regional-scale studies, general land cover mapping (e.g. forest, urban, and grass), crop-specific mapping and monitoring, and visual assessments. Applications that should be cautious when using pixels in the gap areas include those that require per-pixel accuracy, such as urban characterization or impervious surface mapping, applications that use texture to characterize landscape features, and applications that require accurate measurements of small or narrow landscape features such as roads, farmsteads, and riparian areas.
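    The core idea, interpolating across a gap using only pixels that belong to the same landscape segment, can be sketched as below. This is a deliberate simplification: the actual algorithm uses a multi-scale segment model and more careful spectral interpolation than a per-segment mean, and all names here are ours.

```python
import numpy as np

def segment_gap_fill(band, gap_mask, segments):
    """Fill SLC-off gap pixels with the mean of valid pixels in the
    same segment (simplified sketch of segment-guided gap filling)."""
    filled = band.astype(float)
    for seg_id in np.unique(segments):
        in_seg = segments == seg_id
        # Pixels of this segment that are outside the scan gaps.
        valid = in_seg & ~gap_mask
        if valid.any():
            filled[in_seg & gap_mask] = band[valid].mean()
    return filled
```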

  14. Asynchronous (segmental early) relaxation impairs left ventricular filling in patients with coronary artery disease and normal systolic function.

    PubMed

    Vanoverschelde, J L; Wijns, W; Michel, X; Cosyns, J; Detry, J M

    1991-11-01

    Asynchronous segmental early relaxation, defined as a localized early segmental outward motion of the left ventricular endocardium during isovolumetric relaxation, has been associated with an altered left ventricular relaxation rate. To determine whether asynchronous segmental early relaxation also results in impaired left ventricular filling, early diastolic ventricular wall motion and Doppler-derived left ventricular filling indexes were examined in 25 patients with documented coronary artery disease and normal systolic function. Patients were further classified into two groups according to the presence (n = 15, group 1) or absence (n = 10, group 2) of asynchronous early relaxation at left ventriculography. A third group of 10 age-matched normal subjects served as a control group. No differences were observed between the two patient groups with coronary artery disease with respect to age, gender distribution, heart rate, left ventricular systolic and diastolic pressures or extent and severity of coronary artery disease. No differences in transmitral filling dynamics were observed between group 2 patients and age-matched control subjects. Conversely, group 1 patients had significantly lower peak early filling velocities (44 ± 11 vs. 58 ± 11 cm/s, p < 0.01), larger atrial filling fraction (45 ± 4% vs. 38 ± 4%, p < 0.001), lower ratio of early to late transmitral filling velocities (0.6 ± 0.08 vs. 0.99 ± 0.18, p < 0.001) and a longer isovolumetric relaxation period (114 ± 12 vs. 90 ± 6 ms, p < 0.001) compared with group 2 patients and control subjects. (ABSTRACT TRUNCATED AT 250 WORDS)

  15. Controlling retention, selectivity and magnitude of EOF by segmented monolithic columns consisting of octadecyl and naphthyl monolithic segments--applications to RP-CEC of both neutral and charged solutes.

    PubMed

    Karenga, Samuel; El Rassi, Ziad

    2011-04-01

    Monolithic capillaries made of two adjoining segments, each filled with a different monolith, were introduced for the control and manipulation of the electroosmotic flow (EOF), retention and selectivity in reversed phase-capillary electrochromatography (RP-CEC). These columns were called segmented monolithic columns (SMCs): one segment was filled with a naphthyl methacrylate monolith (NMM) to provide hydrophobic and π-interactions, while the other segment was filled with an octadecyl acrylate monolith (ODM) to provide solely hydrophobic interaction. The ODM segment not only provided hydrophobic interactions but also functioned as the EOF accelerator segment. The average EOF of the SMC increased linearly with the fractional length of the ODM segment. The neutral SMC provided a convenient way for tuning EOF, selectivity and retention in the absence of annoying electrostatic interactions and irreversible solute adsorption. The SMCs allowed the separation of a wide range of neutral solutes including polycyclic aromatic hydrocarbons (PAHs) that are difficult to separate using conventional alkyl-bonded stationary phases. In all cases, the k' of a given solute was a linear function of the fractional length of the ODM or NMM segment in the SMCs, thus facilitating the tailoring of a given SMC to solve a given separation problem. At some ODM fractional lengths, the fabricated SMC allowed the separation of charged solutes such as peptides and proteins that could not otherwise be achieved on a monolithic column made from NMM as an isotropic stationary phase, due to the lower EOF exhibited by that monolith. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
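    The reported linear dependence of the average EOF on the fractional length of the ODM segment is consistent with a simple length-weighted series model, sketched below. The model and all symbols are illustrative only, not taken from the paper.

```python
def smc_average_eof(mu_odm, mu_nmm, f_odm):
    """Average EOF mobility of a segmented monolithic column as a
    length-weighted combination of its two segments: a simple model
    consistent with the reported linear dependence on the ODM
    fractional length f_odm (0 <= f_odm <= 1)."""
    return f_odm * mu_odm + (1.0 - f_odm) * mu_nmm
```

Under this model, a column that is all NMM (f_odm = 0) exhibits the lower NMM mobility, and lengthening the ODM fraction raises the average EOF linearly toward the ODM value.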

  16. Systolic ventricular filling.

    PubMed

    Torrent-Guasp, Francisco; Kocica, Mladen J; Corno, Antonio; Komeda, Masashi; Cox, James; Flotats, A; Ballester-Rodes, Manel; Carreras-Costa, Francesc

    2004-03-01

    The evidence of the ventricular myocardial band (VMB) has revealed unavoidable coherence and mutual coupling of form and function in the ventricular myocardium, making it possible to understand the principles governing electrical, mechanical and energetic events within the human heart. Since Erasistratus' earliest observations, the principal mechanisms responsible for ventricular filling have remained obscure. Contemporary experimental and clinical investigations unequivocally support the view that only a powerful suction force, developed by the normal ventricles, would be able to produce an efficient filling of the ventricular cavities. The true origin and the precise time frame for generating such force are still controversial. Elastic recoil and muscular contraction are the most commonly mentioned, yet still not clearly explained, mechanisms involved in ventricular suction. Classical concepts about the timing of successive mechanical events during the cardiac cycle also do not offer an understandable insight into the mechanism of ventricular filling. The net result is the current state of insufficient knowledge of systolic and particularly diastolic function of the normal and diseased heart. Here we summarize experimental evidence and theoretical backgrounds, which could be useful in understanding the phenomenon of ventricular filling. Anatomy of the VMB, and recent proofs for its segmental electrical and mechanical activation, undoubtedly indicate that ventricular filling is the consequence of an active muscular contraction. Contraction of the ascendent segment of the VMB, with simultaneous shortening and rectifying of its fibers, produces the paradoxical increase of the ventricular volume and lengthening of its long axis. 
Specific spatial arrangement of the ascendent segment fibers, their interaction with adjacent descendent segment fibers, elastic elements and intra-cavitary blood volume (hemoskeleton), explain the physical principles involved in this action. This contraction occurs during the last part of classical systole and the first part of diastole. Therefore, the most important part of ventricular diastole (i.e. the rapid filling phase), in which it receives >70% of the stroke volume, belongs to the active muscular contraction of the ascendent segment. We hope that these facts will give rise to new understanding of the principal mechanisms involved in normal and abnormal diastolic heart function.

  17. A database of aerothermal measurements in hypersonic flow for CFD validation

    NASA Technical Reports Server (NTRS)

    Holden, M. S.; Moselle, J. R.

    1992-01-01

    This paper presents an experimental database selected and compiled from aerothermal measurements obtained on basic model configurations on which fundamental flow phenomena could be most easily examined. The experimental studies were conducted in hypersonic flows in 48-inch, 96-inch, and 6-foot shock tunnels. A special computer program was constructed to provide easy access to the measurements in the database as well as the means to plot the measurements and compare them with imported data. The database contains tabulations of model configurations, freestream conditions, and measurements of heat transfer, pressure, and skin friction for each of the studies selected for inclusion. The first segment contains measurements in laminar flow emphasizing shock-wave boundary-layer interaction. In the second segment, measurements in transitional flows over flat plates and cones are given. The third segment comprises measurements in regions of shock-wave/turbulent-boundary-layer interactions. Studies of the effects of surface roughness of nosetips and conical afterbodies are presented in the fourth segment of the database. Detailed measurements in regions of shock/shock boundary layer interaction are contained in the fifth segment. Measurements in regions of wall jet and transpiration cooling are presented in the final two segments.

  18. Acoustic fill factors for a 120 inch diameter fairing

    NASA Technical Reports Server (NTRS)

    Lee, Y. Albert

    1992-01-01

    Data from the acoustic test of a 120-inch diameter payload fairing were collected and an analysis of acoustic fill factors was performed. Correction factors for obtaining a weighted spatial average of the interior sound pressure level (SPL) were derived based on this database and a normalized 200-inch diameter fairing database. The weighted fill factors were determined and compared with fill factors derived from statistical energy analysis (the VAPEPS code). The comparison was found to be reasonable.

  19. Automatic lung nodule graph cuts segmentation with deep learning false positive reduction

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Huang, Xia; Tseng, Tzu-Liang Bill; Qian, Wei

    2017-03-01

    To automatically detect lung nodules from CT images, we designed a two-stage computer-aided detection (CAD) system. The first stage is graph cuts segmentation to identify and segment the nodule candidates, and the second stage is a convolutional neural network for false positive reduction. The dataset contains 595 CT cases randomly selected from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC/IDRI), and the 305 pulmonary nodules that achieved diagnostic consensus among all four experienced radiologists were our detection targets. Considering each slice as an individual sample, 2844 nodules were included in our database. The graph cuts segmentation was conducted in a two-dimensional manner; 2733 lung nodule ROIs were successfully identified and segmented. With false positive reduction by a seven-layer convolutional neural network, 2535 nodules remained detected while the false positive rate dropped to 31.6%. The average F-measure of segmented lung nodule tissue is 0.8501.
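    The F-measure reported above for segmented nodule tissue belongs to the familiar F = 2PR/(P + R) family, which for binary masks reduces to the Dice overlap. A minimal sketch is given below; the paper's exact evaluation protocol is not specified here, and the function name is ours.

```python
import numpy as np

def f_measure(pred_mask, ref_mask):
    """F-measure between a predicted segmentation mask and a
    reference mask: F = 2PR/(P+R), with P = precision, R = recall
    (equivalent to the Dice coefficient for binary masks)."""
    tp = np.logical_and(pred_mask, ref_mask).sum()
    precision = tp / pred_mask.sum()
    recall = tp / ref_mask.sum()
    return 2 * precision * recall / (precision + recall)
```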

  20. Primary versus secondary achalasia: New signs on barium esophagogram

    PubMed Central

    Gupta, Pankaj; Debi, Uma; Sinha, Saroj Kant; Prasad, Kaushal Kishor

    2015-01-01

    Aim: To investigate new signs on barium swallow that can differentiate primary from secondary achalasia. Materials and Methods: Records of 30 patients with primary achalasia and 17 patients with secondary achalasia were reviewed. Clinical, endoscopic, and manometric data was recorded. Barium esophagograms were evaluated for peristalsis and morphology of distal esophageal segment (length, symmetry, nodularity, shouldering, filling defects, and “tram-track sign”). Results: Mean age at presentation was 39 years in primary achalasia and 49 years in secondary achalasia. The mean duration of symptoms was 3.5 years in primary achalasia and 3 months in secondary achalasia. False-negative endoscopic results were noted in the first instance in five patients. In the secondary achalasia group, five patients had distal esophageal segment morphology indistinguishable from that of primary achalasia. None of the patients with primary achalasia and 35% patients with secondary achalasia had a length of the distal segment approaching combined height of two vertebral bodies. None of the patients with secondary achalasia and 34% patients with primary achalasia had maximum caliber of esophagus approaching combined height of two vertebral bodies. Tertiary contractions were noted in 90% patients with primary achalasia and 24% patients with secondary achalasia. Tram-track sign was found in 55% patients with primary achalasia. Filling defects in the distal esophageal segment were noted in 94% patients with secondary achalasia. Conclusion: Length of distal esophageal segment, tertiary contractions, tram-track sign, and filling defects in distal esophageal segment are useful esophagographic features distinguishing primary from secondary achalasia. PMID:26288525

  1. [Comparison of initial and delayed myocardial imaging with beta-methyl-p-[123I]-iodophenylpentadecanoic acid in acute myocardial infarction].

    PubMed

    Naruse, H; Yoshimura, N; Yamamoto, J; Morita, M; Fukutake, N; Ohyanagi, M; Iwasaki, T; Fukuchi, M

    1994-01-01

    Myocardial imaging using beta-methyl-p-[123I]-iodophenylpentadecanoic acid (BMIPP) was performed in 15 patients with acute myocardial infarction to assess "fill-in" and "washout" defects in the delayed myocardial image. The initial and delayed images were evaluated visually and by a quantitative washout-rate method. Visual judgement found that 8/180 segments (4%) showed "fill-in" defects, and 24/180 segments (13%) showed "washout" defects. There was no relationship between the number of days from onset to the study and the frequency of fill-in and washout defects. The mean washout rate in segments with "fill-in" defects was 9.0 ± 16.6%, and that in segments with "washout" defects was 24.9 ± 18.1%, which was significantly higher than in controls (8.7 ± 15.4%, p < 0.05). There was no correlation between mean washout rate and total blood lipids, total cholesterol, triglycerides or HDL-cholesterol. Therefore, neither time from onset nor blood lipid level was related to changes from the initial image to the delayed image. These changes may be relative (false) findings due to changes in circumference, and may reflect myocardial characteristics after myocardial infarction and/or reperfusion.

  2. 21 CFR 640.15 - Segments for testing.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... cells. (c) All segments accompanying a unit of Red Blood Cells shall be filled at the time the blood is... ADDITIONAL STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Red Blood Cells § 640.15 Segments for testing... provided with each unit of Whole Blood or Red Blood Cells when issued or reissued. (b) Before they are...

  3. 21 CFR 640.15 - Segments for testing.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... cells. (c) All segments accompanying a unit of Red Blood Cells shall be filled at the time the blood is... ADDITIONAL STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Red Blood Cells § 640.15 Segments for testing... provided with each unit of Whole Blood or Red Blood Cells when issued or reissued. (b) Before they are...

  4. 21 CFR 640.15 - Segments for testing.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... cells. (c) All segments accompanying a unit of Red Blood Cells shall be filled at the time the blood is... ADDITIONAL STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Red Blood Cells § 640.15 Segments for testing... provided with each unit of Whole Blood or Red Blood Cells when issued or reissued. (b) Before they are...

  5. 21 CFR 640.15 - Segments for testing.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... cells. (c) All segments accompanying a unit of Red Blood Cells shall be filled at the time the blood is... ADDITIONAL STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Red Blood Cells § 640.15 Segments for testing... provided with each unit of Whole Blood or Red Blood Cells when issued or reissued. (b) Before they are...

  6. 21 CFR 640.15 - Segments for testing.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... ADDITIONAL STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Red Blood Cells § 640.15 Segments for testing... provided with each unit of Whole Blood or Red Blood Cells when issued or reissued. (b) Before they are... cells. (c) All segments accompanying a unit of Red Blood Cells shall be filled at the time the blood is...

  7. Quantifying the impact of new freeway segments.

    DOT National Transportation Integrated Search

    2013-05-01

    Many freeway users complain that new freeway segments immediately fill up with traffic after they are constructed. This diminishes the advantages of reduced costs and reduced driving time that would make freeways theoretically superior to arteria...

  8. Video-assisted segmentation of speech and audio track

    NASA Astrophysics Data System (ADS)

    Pandit, Medha; Yusoff, Yusseri; Kittler, Josef; Christmas, William J.; Chilton, E. H. S.

    1999-08-01

    Video database research is commonly concerned with the storage and retrieval of visual information, involving sequence segmentation, shot representation and video clip retrieval. In multimedia applications, video sequences are usually accompanied by a sound track. The sound track contains potential cues to aid shot segmentation, such as different speakers, background music, singing and distinctive sounds. These different acoustic categories can be modeled to allow for effective database retrieval. In this paper, we address the problem of automatic segmentation of the audio track of multimedia material. This audio-based segmentation can be combined with video scene shot detection in order to achieve partitioning of the multimedia material into semantically significant segments.
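
The audio-track segmentation idea above can be illustrated with a crude, hypothetical energy-based change detector; the actual paper models acoustic classes (speech, music, singing, distinctive sounds), which this sketch does not attempt:

```python
def frame_energies(samples, frame_len):
    """Mean-square energy per non-overlapping frame of an audio signal."""
    return [sum(x * x for x in samples[i:i + frame_len]) / frame_len
            for i in range(0, len(samples) - frame_len + 1, frame_len)]

def change_points(energies, ratio=4.0):
    """Frame indices where energy jumps or drops by more than the given
    ratio relative to the previous frame -- a crude stand-in for the
    acoustic-class boundary detection described above."""
    eps = 1e-12  # guard against silent (zero-energy) frames
    return [i for i in range(1, len(energies))
            if max(energies[i], eps) / max(energies[i - 1], eps) > ratio
            or max(energies[i - 1], eps) / max(energies[i], eps) > ratio]
```

A silent passage followed by loud audio yields a single change point at the transition frame.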

  9. Universal null DTE (data terminal equipment)

    DOEpatents

    George, M.; Pierson, L.G.; Wilkins, M.E.

    1987-11-09

    A communication device in the form of data terminal equipment permits two data communication equipments, each having its own master clock and operating at substantially the same nominal clock rate, to communicate with each other in a multi-segment circuit configuration of a general communication network even when phase or frequency errors exist between the two clocks. Data transmitted between communication equipments of two segments of the communication network is buffered. A variable buffer fill circuit is provided to fill the buffer to a selectable extent prior to initiation of data output clocking. Selection switches are provided to select the degree of buffer preload. A dynamic buffer fill circuit may be incorporated for automatically selecting the buffer fill level as a function of the difference in clock frequencies of the two equipments. Controllable alarm circuitry is provided for selectively generating an underflow or an overflow alarm to one or both of the communicating equipments. 5 figs.

  10. Universal null DTE

    DOEpatents

    George, Michael; Pierson, Lyndon G.; Wilkins, Mark E.

    1989-01-01

    A communication device in the form of data terminal equipment permits two data communication equipments, each having its own master clock and operating at substantially the same nominal clock rate, to communicate with each other in a multi-segment circuit configuration of a general communication network even when phase or frequency errors exist between the two clocks. Data transmitted between communication equipments of two segments of the communication network is buffered. A variable buffer fill circuit is provided to fill the buffer to a selectable extent prior to initiation of data output clocking. Selection switches are provided to select the degree of buffer preload. A dynamic buffer fill circuit may be incorporated for automatically selecting the buffer fill level as a function of the difference in clock frequencies of the two equipments. Controllable alarm circuitry is provided for selectively generating an underflow or an overflow alarm to one or both of the communicating equipments.
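
The variable/dynamic buffer fill idea in these two patent records can be sketched roughly as follows; the function name, sizing heuristic and parameters are hypothetical, chosen only to illustrate preloading an elastic buffer as a function of the clock-rate mismatch between the two equipments:

```python
def preload_depth(buffer_size, clock_a_hz, clock_b_hz, segment_bits):
    """Choose how many bits to preload into the elastic buffer before data
    output clocking begins, so that neither underflow nor overflow occurs
    over one segment at the worst-case clock drift.

    Sketch of the idea only: drift_bits estimates how far the two clocks
    can slip over the segment; preloading half the buffer plus the expected
    slip keeps the fill level centered.
    """
    drift = abs(clock_a_hz - clock_b_hz) / min(clock_a_hz, clock_b_hz)
    drift_bits = int(segment_bits * drift) + 1
    # Center the buffer, then bias by the expected slip, capped at capacity.
    return min(buffer_size, buffer_size // 2 + drift_bits)
```

With a 100 Hz mismatch between two ~1 MHz clocks and a 10 000-bit segment, the expected slip is only a couple of bits, so the preload sits just above the buffer midpoint.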

  11. filltex: Automatic queries to ADS and INSPIRE databases to fill LaTex bibliography

    NASA Astrophysics Data System (ADS)

    Gerosa, Davide; Vallisneri, Michele

    2017-05-01

    filltex is a simple tool to fill LaTex reference lists with records from the ADS and INSPIRE databases. ADS and INSPIRE are the most common databases used among the theoretical physics and astronomy scientific communities, respectively. filltex automatically looks for all citation labels present in a tex document and, by means of web-scraping, downloads all the required citation records from either of the two databases. filltex significantly speeds up the LaTex scientific writing workflow, as all required actions (compile the tex file, fill the bibliography, compile the bibliography, compile the tex file again) are automated in a single command. We also provide an integration of filltex for the macOS LaTex editor TexShop.
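
The first step filltex performs, scanning a TeX document for citation labels, might look roughly like this (a simplified sketch; the tool's actual parsing and its ADS/INSPIRE web-scraping are more involved):

```python
import re

def citation_keys(tex_source):
    """Collect unique citation keys from \\cite-like commands in a TeX
    source string, in order of first appearance -- a simplified sketch of
    what a tool like filltex must do before querying ADS or INSPIRE."""
    keys, seen = [], set()
    # Matches \cite, \citep, \citet, ..., with optional [..] arguments.
    pattern = r'\\cite[a-zA-Z]*\s*(?:\[[^\]]*\]\s*)*\{([^}]*)\}'
    for match in re.finditer(pattern, tex_source):
        for key in match.group(1).split(','):
            key = key.strip()
            if key and key not in seen:
                seen.add(key)
                keys.append(key)
    return keys
```

Each returned key would then be resolved against the chosen database and appended to the .bib file.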

  12. Seismic Behavior and Design of Segmental Precast Post-Tensioned Concrete Piers

    DOT National Transportation Integrated Search

    2011-06-01

    Segmental precast column construction is an economical, environmentally friendly solution for accelerating bridge construction in the United States. Also, concrete-filled fiber reinforced polymer tubes (CFFT) represent a potential economical solution for du...

  13. Divide and Conquer: Applying the Marketing Concept of "Segmentation" to the Placement Function.

    ERIC Educational Resources Information Center

    Cowles, Deborah; Franzak, Frank

    1991-01-01

    Describes the concept of market segmentation, then the segmentation approach used by a college career planning and placement office, which had the objectives of gaining a better understanding of the needs of employers looking to fill entry-level positions with marketing major graduates and of collaborating more effectively with academic faculty in…

  14. SparCLeS: dynamic l₁ sparse classifiers with level sets for robust beard/moustache detection and segmentation.

    PubMed

    Le, T Hoang Ngan; Luu, Khoa; Savvides, Marios

    2013-08-01

    Robust facial hair detection and segmentation is a highly valued soft biometric attribute for carrying out forensic facial analysis. In this paper, we propose a novel and fully automatic system, called SparCLeS, for beard/moustache detection and segmentation in challenging facial images. SparCLeS uses the multiscale self-quotient (MSQ) algorithm to preprocess facial images and deal with illumination variation. Histogram of oriented gradients (HOG) features are extracted from the preprocessed images and a dynamic sparse classifier is built using these features to classify a facial region as either containing skin or facial hair. A level set based approach, which makes use of the advantages of both global and local information, is then used to segment the regions of a face containing facial hair. Experimental results demonstrate the effectiveness of our proposed system in detecting and segmenting facial hair regions in images drawn from three databases, i.e., the NIST Multiple Biometric Grand Challenge (MBGC) still face database, the NIST Color Facial Recognition Technology FERET database, and the Labeled Faces in the Wild (LFW) database.

  15. From 20th century metabolic wall charts to 21st century systems biology: database of mammalian metabolic enzymes

    PubMed Central

    Corcoran, Callan C.; Grady, Cameron R.; Pisitkun, Trairak; Parulekar, Jaya

    2017-01-01

    The organization of the mammalian genome into gene subsets corresponding to specific functional classes has provided key tools for systems biology research. Here, we have created a web-accessible resource called the Mammalian Metabolic Enzyme Database (https://hpcwebapps.cit.nih.gov/ESBL/Database/MetabolicEnzymes/MetabolicEnzymeDatabase.html) keyed to the biochemical reactions represented on iconic metabolic pathway wall charts created in the previous century. Overall, we have mapped 1,647 genes to these pathways, representing ~7 percent of the protein-coding genome. To illustrate the use of the database, we apply it to the area of kidney physiology. In so doing, we have created an additional database (Database of Metabolic Enzymes in Kidney Tubule Segments: https://hpcwebapps.cit.nih.gov/ESBL/Database/MetabolicEnzymes/), mapping mRNA abundance measurements (mined from RNA-Seq studies) for all metabolic enzymes to each of 14 renal tubule segments. We carry out bioinformatics analysis of the enzyme expression pattern among renal tubule segments and mine various data sources to identify vasopressin-regulated metabolic enzymes in the renal collecting duct. PMID:27974320

  16. Fast vessel segmentation in retinal images using multi-scale enhancement and second-order local entropy

    NASA Astrophysics Data System (ADS)

    Yu, H.; Barriga, S.; Agurto, C.; Zamora, G.; Bauman, W.; Soliz, P.

    2012-03-01

    Retinal vasculature is one of the most important anatomical structures in digital retinal photographs. Accurate segmentation of retinal blood vessels is an essential task in the automated analysis of retinopathy. This paper presents a new and effective vessel segmentation algorithm that features computational simplicity and fast implementation. The method uses morphological pre-processing to decrease the disturbance of bright structures and lesions before vessel extraction. Next, a vessel probability map is generated by computing the eigenvalues of the second derivatives of the Gaussian-filtered image at multiple scales. Then, second-order local entropy thresholding is applied to segment the vessel map. Lastly, a rule-based decision step, which measures the geometric shape difference between vessels and lesions, is applied to reduce false positives. The algorithm is evaluated on the low-resolution DRIVE and STARE databases and on the publicly available high-resolution image database from Friedrich-Alexander University Erlangen-Nuremberg, Germany. The proposed method achieved performance comparable to state-of-the-art unsupervised vessel segmentation methods at a competitively faster speed on the DRIVE and STARE databases. For the high-resolution fundus image database, the proposed algorithm outperforms an existing approach in both performance and speed. This efficiency and robustness make the blood vessel segmentation method described here suitable for broad application in the automated analysis of retinal images.
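
The Hessian-eigenvalue step behind the vessel probability map can be sketched as follows. This is a toy illustration with a simplified vesselness score; it is not the paper's exact multiscale formula:

```python
import math

def hessian_eigenvalues(dxx, dxy, dyy):
    """Eigenvalues of the 2x2 image Hessian [[dxx, dxy], [dxy, dyy]],
    sorted by absolute value (|l1| <= |l2|)."""
    trace_half = (dxx + dyy) / 2.0
    disc = math.sqrt(((dxx - dyy) / 2.0) ** 2 + dxy ** 2)
    l1, l2 = trace_half - disc, trace_half + disc
    return sorted((l1, l2), key=abs)

def vessel_response(dxx, dxy, dyy):
    """Toy vesselness score: a dark tubular structure on a bright
    background gives one large positive eigenvalue and one near zero,
    so we score by l2 damped by the ratio |l1|/|l2| (a simplification of
    multiscale Hessian vesselness filters, not the paper's formula)."""
    l1, l2 = hessian_eigenvalues(dxx, dxy, dyy)
    return max(l2, 0.0) * math.exp(-abs(l1) / (abs(l2) + 1e-9))
```

A tube-like point (one large, one small eigenvalue) scores higher than a blob-like point with two equal eigenvalues.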

  17. Inductive coupler for downhole components and method for making same

    DOEpatents

    Hall, David R.; Hall, Jr., H. Tracy; Pixton, David S.; Dahlgren, Scott; Briscoe, Michael A.; Sneddon, Cameron; Fox, Joe

    2006-05-09

    The present invention includes a method of making an inductive coupler for downhole components. The method includes providing an annular housing, preferably made of steel, the housing having a recess. A conductor, preferably an insulated wire, is also provided along with a plurality of generally U-shaped magnetically conducting, electrically insulating (MCEI) segments. Preferably, the MCEI segments comprise ferrite. An assembly is formed by placing the plurality of MCEI segments within the recess in the annular housing. The segments are aligned to form a generally circular trough. A first portion of the conductor is placed within the circular trough. This assembly is consolidated with a meltable polymer which fills spaces between the segments, annular housing and the first portion of the conductor. The invention also includes an inductive coupler including an annular housing having a recess defined by a bottom portion and two opposing side wall portions. At least one side wall portion includes a lip extending toward but not reaching the other side wall portion. A plurality of generally U-shaped MCEI segments, preferably comprised of ferrite, are disposed in the recess and aligned so as to form a circular trough. The coupler further includes a conductor disposed within the circular trough and a polymer filling spaces between the segments, the annular housing and the conductor.

  18. Wavefront Control Testbed (WCT) Experiment Results

    NASA Technical Reports Server (NTRS)

    Burns, Laura A.; Basinger, Scott A.; Campion, Scott D.; Faust, Jessica A.; Feinberg, Lee D.; Hayden, William L.; Lowman, Andrew E.; Ohara, Catherine M.; Petrone, Peter P., III

    2004-01-01

    The Wavefront Control Testbed (WCT) was created to develop and test wavefront sensing and control algorithms and software for the segmented James Webb Space Telescope (JWST). Last year, we changed the system configuration from three sparse aperture segments to a filled aperture with three pie-shaped segments. With this upgrade we have performed experiments on fine phasing with line-of-sight and segment-to-segment jitter; dispersed fringe visibility and grism angle; high dynamic range tilt sensing; coarse phasing with large aberrations; and sampled sub-aperture testing. This paper reviews the results of these experiments.

  19. Towards online iris and periocular recognition under relaxed imaging constraints.

    PubMed

    Tan, Chun-Wei; Kumar, Ajay

    2013-10-01

    Online iris recognition using distantly acquired images in a less constrained imaging environment requires the development of an efficient iris segmentation approach and a recognition strategy that can exploit the multiple features available for potential identification. This paper presents an effective solution toward addressing such a problem. The developed iris segmentation approach exploits a random walker algorithm to efficiently estimate coarsely segmented iris images. These coarsely segmented iris images are postprocessed using a sequence of operations that can effectively improve the segmentation accuracy. The robustness of the proposed iris segmentation approach is ascertained by providing comparison with other state-of-the-art algorithms using the publicly available UBIRIS.v2, FRGC, and CASIA.v4-distance databases. Our experimental results achieve improvements of 9.5%, 4.3%, and 25.7% in average segmentation accuracy, respectively, for the UBIRIS.v2, FRGC, and CASIA.v4-distance databases, as compared with most competing approaches. We also exploit the simultaneously extracted periocular features to achieve significant performance improvement. The joint segmentation and combination strategy suggests promising results, achieving average improvements of 132.3%, 7.45%, and 17.5% in recognition performance, respectively, on the UBIRIS.v2, FRGC, and CASIA.v4-distance databases, as compared with the related competing approaches.

  20. A dynamic appearance descriptor approach to facial actions temporal modeling.

    PubMed

    Jiang, Bihan; Valstar, Michel; Martinez, Brais; Pantic, Maja

    2014-02-01

    Both the configuration and the dynamics of facial expressions are crucial for the interpretation of human facial behavior. Yet to date, the vast majority of reported efforts in the field either do not take the dynamics of facial expressions into account, or focus only on prototypic facial expressions of the six basic emotions. Facial dynamics can be explicitly analyzed by detecting the constituent temporal segments of Facial Action Coding System (FACS) Action Units (AUs): onset, apex, and offset. In this paper, we present a novel approach to explicit analysis of the temporal dynamics of facial actions using the dynamic appearance descriptor Local Phase Quantization from Three Orthogonal Planes (LPQ-TOP). Temporal segments are detected by combining a discriminative classifier, which detects the temporal segments on a frame-by-frame basis, with Markov models that enforce temporal consistency over the whole episode. The system is evaluated in detail on the MMI facial expression database, the UNBC-McMaster pain database, the SAL database, and the GEMEP-FERA dataset in database-dependent experiments, and in cross-database experiments using the Cohn-Kanade and SEMAINE databases. The comparison with other state-of-the-art methods shows that the proposed LPQ-TOP method outperforms the other approaches for the problem of AU temporal segment detection, and that overall AU activation detection benefits from dynamic appearance information.
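
The combination of a frame-by-frame classifier with a Markov model that enforces temporal consistency can be sketched with a generic Viterbi decoder over an allowed-transition graph (e.g. neutral → onset → apex → offset); the states, scores and transition structure here are illustrative, not the paper's model:

```python
def viterbi_smooth(frame_scores, transitions):
    """Smooth per-frame classifier scores with a Markov chain that only
    permits the listed state transitions, returning the best label sequence.

    frame_scores: list of dicts, state -> log-score per frame.
    transitions: dict, state -> set of allowed next states (self-loops
    must be listed explicitly).  A generic sketch, not the paper's model.
    """
    states = list(frame_scores[0])
    # best[s] = (total score, label path) of the best sequence ending in s.
    best = {s: (frame_scores[0][s], [s]) for s in states}
    for scores in frame_scores[1:]:
        new = {}
        for s in states:
            candidates = [(p + scores[s], path)
                          for prev, (p, path) in best.items()
                          if s in transitions[prev]]
            if candidates:
                p, path = max(candidates)
                new[s] = (p, path + [s])
        best = new
    return max(best.values())[1]
```

Because apex is only reachable through onset here, a frame-wise preference for jumping straight from neutral to apex is overridden by the transition constraints.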

  1. Performance of an open-source heart sound segmentation algorithm on eight independent databases.

    PubMed

    Liu, Chengyu; Springer, David; Clifford, Gari D

    2017-08-01

    Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, using a wider variety of independently acquired data of varying quality. First, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120 000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. The HSMM-based segmentation method was then evaluated on the assembled eight databases. The common evaluation metrics of sensitivity, specificity and accuracy, as well as the F1 measure, were used. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprising 102 306 heart sounds. An average F1 score of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals was observed. The F1 score was shown to increase as the tolerance window size increased, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirms the algorithm's effectiveness.
The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for evaluators who need to test their algorithms with realistic data and share reproducible results.
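
Scoring a segmentation against reference annotations with a tolerance window, as in this evaluation, can be sketched as follows; the greedy one-to-one matching and the metric details are assumptions, not necessarily the Challenge's exact scoring:

```python
def match_with_tolerance(detected, reference, tol):
    """Count detected boundaries that fall within +/-tol of an unmatched
    reference boundary, then return sensitivity, positive predictivity,
    and F1 (a hypothetical scoring sketch)."""
    used = [False] * len(reference)
    tp = 0
    for d in detected:
        for i, r in enumerate(reference):
            if not used[i] and abs(d - r) <= tol:
                used[i] = True  # each reference boundary matches at most once
                tp += 1
                break
    se = tp / len(reference) if reference else 0.0
    ppv = tp / len(detected) if detected else 0.0
    f1 = 2 * se * ppv / (se + ppv) if se + ppv else 0.0
    return se, ppv, f1
```

Widening `tol` can only increase the number of matches, which is why the reported score grows with the tolerance window.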

  2. From 20th century metabolic wall charts to 21st century systems biology: database of mammalian metabolic enzymes.

    PubMed

    Corcoran, Callan C; Grady, Cameron R; Pisitkun, Trairak; Parulekar, Jaya; Knepper, Mark A

    2017-03-01

    The organization of the mammalian genome into gene subsets corresponding to specific functional classes has provided key tools for systems biology research. Here, we have created a web-accessible resource called the Mammalian Metabolic Enzyme Database ( https://hpcwebapps.cit.nih.gov/ESBL/Database/MetabolicEnzymes/MetabolicEnzymeDatabase.html) keyed to the biochemical reactions represented on iconic metabolic pathway wall charts created in the previous century. Overall, we have mapped 1,647 genes to these pathways, representing ~7 percent of the protein-coding genome. To illustrate the use of the database, we apply it to the area of kidney physiology. In so doing, we have created an additional database ( Database of Metabolic Enzymes in Kidney Tubule Segments: https://hpcwebapps.cit.nih.gov/ESBL/Database/MetabolicEnzymes/), mapping mRNA abundance measurements (mined from RNA-Seq studies) for all metabolic enzymes to each of 14 renal tubule segments. We carry out bioinformatics analysis of the enzyme expression pattern among renal tubule segments and mine various data sources to identify vasopressin-regulated metabolic enzymes in the renal collecting duct. Copyright © 2017 the American Physiological Society.

  3. Benchmarking Insulin Treatment Persistence Among Patients with Type 2 Diabetes Across Different U.S. Payer Segments.

    PubMed

    Wei, Wenhui; Jiang, Jenny; Lou, Youbei; Ganguli, Sohini; Matusik, Mark S

    2017-03-01

    Treatment persistence with basal insulins is crucial to achieving sustained glycemic control, which is associated with a reduced risk of microvascular disease and other complications of type 2 diabetes (T2D). However, studies suggest that persistence with basal insulin treatment is often poor. To measure and benchmark real-world basal insulin treatment persistence among patients with T2D across different payer segments in the United States. This was a retrospective observational study of data from a national pharmacy database (Walgreen Co., Deerfield, IL). The analysis included patients with T2D aged ≥ 18 years who filled ≥ 1 prescription for basal insulins between January 2013 and June 2013 (the index prescription) and who had also filled prescriptions for ≥ 1 oral antidiabetes drug in the database. Patients with claims for premixed insulin were excluded. Treatment persistence was defined as remaining on the study medication(s) during the 1-year follow-up period. Patients were stratified according to treatment history (existing basal insulin users vs. new insulin users), payer segments (commercially insured, Medicare, Medicaid, or cash-pay), type of basal insulin (insulin glargine, insulin detemir, or neutral protamine Hagedorn insulin [NPH]), and device for insulin administration (pen or vial/syringe). A total of 274,102 patients were included in this analysis, 82% of whom were existing insulin users. In terms of payer segments, 45.3% of patients were commercially insured, 47.8% had Medicare, 5.9% had Medicaid, and 1.1% were cash-pay. At the 1-year follow-up, basal insulin treatment persistence rate was 66.8% overall, 61.7% for new users, and 67.9% for existing users. In general, for both existing and new basal insulin users, higher persistence rate and duration were associated with Medicare versus cash-pay patients, use of insulin pens versus vial/syringe, and use of insulin glargine versus NPH. 
This large-scale study provides a benchmark of basal insulin treatment persistence across different payers in the United States. Findings indicate that basal insulin persistence patterns are significantly different across different payers, basal insulin types, and devices. This information may be useful in developing targeted approaches to improve T2D patients' persistence with insulin treatment for better glycemic control. This study was funded by Sanofi U.S. through a grant provided to Walgreens for research services. Matusik, Jiang, and Lou are employed by Walgreen Co. Wei and Ganguli were employed by Sanofi U.S. at the time of this study. Study concept and design were contributed by Wei, Ganguli, and Matusik, with assistance from Lou. Jiang took the lead in data collection, along with Lou, and data interpretation was performed by Wei, Lou, and Jiang, along with Ganguli and Matusik. The manuscript was written by Wei and Jiang, along with Ganguli and Matusik, and revised by Wei and Ganguli, along with the other authors.
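
A claims-based persistence check of the kind used in such pharmacy-database studies can be sketched as follows; the 30-day grace period and the gap logic are common conventions in adherence research, assumed here rather than taken from the study:

```python
from datetime import date, timedelta

def is_persistent(fill_dates, days_supply, follow_up_end, grace_days=30):
    """Treat a patient as persistent if no gap between the end of one
    fill's supply and the next fill exceeds grace_days, through the end
    of follow-up (a common claims-based definition; the study's exact
    gap allowance is not stated in the abstract)."""
    fills = sorted(fill_dates)
    covered_until = fills[0] + timedelta(days=days_supply)
    for f in fills[1:]:
        if (f - covered_until).days > grace_days:
            return False  # refill came too late: discontinuation
        covered_until = max(covered_until, f + timedelta(days=days_supply))
    return (follow_up_end - covered_until).days <= grace_days
```

A patient refilling a 30-day supply each month stays persistent; a single fill followed by months of no refills does not.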

  4. Standardizing terminology and definitions of medication adherence and persistence in research employing electronic databases.

    PubMed

    Raebel, Marsha A; Schmittdiel, Julie; Karter, Andrew J; Konieczny, Jennifer L; Steiner, John F

    2013-08-01

    To propose a unifying set of definitions for prescription adherence research utilizing electronic health record prescribing databases, prescription dispensing databases, and pharmacy claims databases, and to provide a conceptual framework to operationalize these definitions consistently across studies. We reviewed recent literature to identify definitions in electronic database studies of prescription-filling patterns for chronic oral medications. We then developed a conceptual model and proposed standardized terminology and definitions to describe prescription-filling behavior from electronic databases. The conceptual model we propose defines 2 separate constructs: medication adherence and persistence. We define primary and secondary adherence as distinct subtypes of adherence. Metrics for estimating secondary adherence are discussed and critiqued, including a newer metric (the New Prescription Medication Gap measure) that enables estimation of both primary and secondary adherence. Terminology currently used in prescription adherence research employing electronic databases lacks consistency. We propose a clear, consistent, broadly applicable conceptual model and terminology for such studies. The model and definitions facilitate research utilizing electronic medication prescribing, dispensing, and/or claims databases and encompass the entire continuum of prescription-filling behavior. Employing conceptually clear and consistent terminology to define medication adherence and persistence will facilitate future comparative effectiveness research and meta-analytic studies that utilize electronic prescription and dispensing records.

  5. Thigh muscle segmentation of chemical shift encoding-based water-fat magnetic resonance images: The reference database MyoSegmenTUM.

    PubMed

    Schlaeger, Sarah; Freitag, Friedemann; Klupp, Elisabeth; Dieckmeyer, Michael; Weidlich, Dominik; Inhuber, Stephanie; Deschauer, Marcus; Schoser, Benedikt; Bublitz, Sarah; Montagnese, Federica; Zimmer, Claus; Rummeny, Ernst J; Karampinos, Dimitrios C; Kirschke, Jan S; Baum, Thomas

    2018-01-01

    Magnetic resonance imaging (MRI) can non-invasively assess muscle anatomy, exercise effects, and pathologies with different underlying causes such as neuromuscular diseases (NMD). Quantitative MRI, including fat fraction mapping using chemical shift encoding-based water-fat MRI, has emerged for the reliable determination of muscle volume and fat composition. The analysis of water-fat images requires segmentation of the different muscles, which in the past has mainly been performed manually and is very time-consuming, currently limiting clinical applicability. Automating the segmentation process would allow a more time-efficient analysis. In the present work, the manually segmented thigh magnetic resonance imaging database MyoSegmenTUM is presented. It hosts water-fat MR images of both thighs of 15 healthy subjects and 4 patients with NMD, with a voxel size of 3.2×2×4 mm³, together with the corresponding segmentation masks for four functional muscle groups: quadriceps femoris, sartorius, gracilis, and hamstrings. The database is freely accessible online at https://osf.io/svwa7/?view_only=c2c980c17b3a40fca35d088a3cdd83e2. The database is mainly meant as ground truth that can be used as training and test data for automatic muscle segmentation algorithms. The segmentation allows extraction of muscle cross-sectional area (CSA) and volume. Proton density fat fraction (PDFF) of the defined muscle groups from the corresponding images, together with quadriceps muscle strength measurements and neurological muscle strength ratings, can be used for benchmarking purposes.
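
Extracting muscle volume and mean PDFF from a segmentation mask, as the database is intended to support, reduces to voxel counting; a minimal sketch using the stated 3.2×2×4 mm voxel size (function names hypothetical):

```python
def muscle_volume_ml(mask, voxel_dims_mm=(3.2, 2.0, 4.0)):
    """Muscle volume in millilitres from a binary segmentation mask given
    as nested lists [slice][row][column]; the default voxel size is the
    database's 3.2 x 2 x 4 mm resolution."""
    voxel_mm3 = voxel_dims_mm[0] * voxel_dims_mm[1] * voxel_dims_mm[2]
    n_voxels = sum(v for sl in mask for row in sl for v in row)
    return n_voxels * voxel_mm3 / 1000.0  # 1000 mm^3 = 1 ml

def mean_pdff(pdff_map, mask):
    """Mean proton density fat fraction over the masked voxels."""
    vals = [p for ps, ms in zip(pdff_map, mask)
              for pr, mr in zip(ps, ms)
              for p, m in zip(pr, mr) if m]
    return sum(vals) / len(vals)
```

CSA follows the same pattern per slice, multiplying the in-plane voxel count by the in-plane voxel area.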

  6. Web-based Visualization and Query of semantically segmented multiresolution 3D Models in the Field of Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Auer, M.; Agugiaro, G.; Billen, N.; Loos, L.; Zipf, A.

    2014-05-01

    Many important Cultural Heritage sites have been studied over long periods of time with different technical equipment, methods, and intentions by different researchers. This has led to huge amounts of heterogeneous "traditional" datasets and formats. The rising popularity of 3D models in the field of Cultural Heritage in recent years has brought additional data formats and makes it even more necessary to find solutions to manage, publish and study these data in an integrated way. The MayaArch3D project aims to realize such an integrative approach by establishing a web-based research platform that brings spatial and non-spatial databases together and provides visualization and analysis tools. The 3D components of the platform, in particular, use hierarchical segmentation concepts to structure the data and to perform queries on semantic entities. This paper presents a database schema that organizes not only segmented models but also different levels of detail and other representations of the same entity. It is implemented in a spatial database, which allows georeferenced 3D data to be stored and enables organization and queries by semantic, geometric and spatial properties. As a service for delivering the segmented models, a standardization candidate of the Open Geospatial Consortium (OGC), the Web 3D Service (W3DS), has been extended to cope with the new database schema and to deliver a web-friendly format for WebGL rendering. Finally, a generic user interface is presented that uses the segments as a navigation metaphor to browse and query the semantic segmentation levels and retrieve information from an external database of the German Archaeological Institute (DAI).
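
The entity/representation organization described above can be sketched as a small relational schema; the table and column names are hypothetical, and the project's actual spatial-database schema is richer:

```python
import sqlite3

# Minimal relational sketch (names hypothetical) of the described idea:
# one semantic entity can have several representations, each at its own
# level of detail (LoD), and entities nest hierarchically.
SCHEMA = """
CREATE TABLE entity (
    id INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES entity(id),  -- hierarchical segmentation
    name TEXT NOT NULL
);
CREATE TABLE representation (
    id INTEGER PRIMARY KEY,
    entity_id INTEGER NOT NULL REFERENCES entity(id),
    lod INTEGER NOT NULL,          -- level of detail
    format TEXT NOT NULL,          -- e.g. mesh, point cloud
    uri TEXT NOT NULL
);
"""

def best_lod(conn, entity_name, max_lod):
    """Return the URI of the most detailed representation of an entity
    that does not exceed max_lod."""
    row = conn.execute(
        """SELECT r.uri FROM representation r
           JOIN entity e ON e.id = r.entity_id
           WHERE e.name = ? AND r.lod <= ?
           ORDER BY r.lod DESC LIMIT 1""",
        (entity_name, max_lod)).fetchone()
    return row[0] if row else None
```

A W3DS-style service would run such a query per request, then stream the referenced model in a web-friendly format.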

  7. Retinal blood vessel segmentation using fully convolutional network with transfer learning.

    PubMed

    Jiang, Zhexin; Zhang, Hao; Wang, Yi; Ko, Seok-Bum

    2018-04-26

    Since the retinal blood vessel has been acknowledged as an indispensable element in both ophthalmological and cardiovascular disease diagnosis, the accurate segmentation of the retinal vessel tree has become the prerequisite step for automated or computer-aided diagnosis systems. In this paper, a supervised method is presented based on a pre-trained fully convolutional network through transfer learning. This proposed method has simplified the typical retinal vessel segmentation problem from full-size image segmentation to regional vessel element recognition and result merging. Meanwhile, additional unsupervised image post-processing techniques are applied to this proposed method so as to refine the final result. Extensive experiments have been conducted on DRIVE, STARE, CHASE_DB1 and HRF databases, and the accuracy of the cross-database test on these four databases is state-of-the-art, which also presents the high robustness of the proposed approach. This successful result has not only contributed to the area of automated retinal blood vessel segmentation but also supports the effectiveness of transfer learning when applying deep learning technique to medical imaging. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Element for use in an inductive coupler for downhole drilling components

    DOEpatents

    Hall, David R.; Hall, Jr., H. Tracy; Pixton, David S.; Dahlgren, Scott; Fox, Joe; Sneddon, Cameron

    2006-08-29

    The present invention includes an element for use in an inductive coupler in a downhole component. The element includes a plurality of ductile, generally U-shaped leaves that are electrically conductive. The leaves are less than about 0.0625" thick and are separated by an electrically insulating material. These leaves are aligned so as to form a generally circular trough. The invention also includes an inductive coupler for use in downhole components, the inductive coupler including an annular housing having a recess with a magnetically conductive, electrically insulating (MCEI) element disposed in the recess. The MCEI element includes a plurality of segments where each segment further includes a plurality of ductile, generally U-shaped electrically conductive leaves. Each leaf is less than about 0.0625" thick and separated from the otherwise adjacent leaves by electrically insulating material. The segments and leaves are aligned so as to form a generally circular trough. The inductive coupler further includes an insulated conductor disposed within the generally circular trough. A polymer fills spaces between otherwise adjacent segments, the annular housing, insulated conductor, and further fills the circular trough.

  9. Optimal retinal cyst segmentation from OCT images

    NASA Astrophysics Data System (ADS)

    Oguz, Ipek; Zhang, Li; Abramoff, Michael D.; Sonka, Milan

    2016-03-01

    Accurate and reproducible segmentation of cysts and fluid-filled regions from retinal OCT images is an important step allowing quantification of the disease status, longitudinal disease progression, and response to therapy in wet-pathology retinal diseases. However, segmentation of fluid-filled regions from OCT images is a challenging task due to their inhomogeneous appearance, the unpredictability of their number, size and location, as well as the intensity profile similarity between such regions and certain healthy tissue types. While machine learning techniques can be beneficial for this task, they require large training datasets and are often over-fitted to the appearance models of specific scanner vendors. We propose a knowledge-based approach that leverages a carefully designed cost function and graph-based segmentation techniques to provide a vendor-independent solution to this problem. We illustrate the results of this approach on two publicly available datasets with a variety of scanner vendors and retinal disease status. Compared to a previous machine-learning based approach, the volume similarity error was dramatically reduced from 81.3±56.4% to 22.2±21.3% (paired t-test, p << 0.001).

  10. Evaluation of Landsat-7 SLC-off image products for forest change detection

    USGS Publications Warehouse

    Wulder, Michael A.; Ortlepp, Stephanie M.; White, Joanne C.; Maxwell, Susan

    2008-01-01

    Since July 2003, Landsat-7 ETM+ has been operating without the scan line corrector (SLC), which compensates for the forward motion of the satellite in the imagery acquired. Data collected in SLC-off mode have gaps in a systematic wedge-shaped pattern outside of the central 22 km swath of the imagery; however, the spatial and spectral quality of the remaining portions of the imagery is not diminished. To explore the continued use of Landsat-7 ETM+ SLC-off imagery to characterize change in forested environments, we compare the change detection results generated from a reference image pair (a 1999 Landsat-7 ETM+ image and a 2003 Landsat-5 TM image) with change detection results generated from the same 1999 Landsat-7 ETM+ image coupled with three different 2003 Landsat-7 SLC-off products: unremediated SLC-off (i.e., with gaps); histogram-based gap-filled; and segment-based gap-filled. The results are compared on both a pixel and polygon basis; on a pixel basis, the unremediated SLC-off product missed 35% of the change identified by the reference data, and the histogram- and segment-based gap-filled products missed 23% and 21% of the change, respectively. When using forest inventory polygons as a context for change (to reduce commission error), the amount of change missed was 31%, 14%, and 12% for each of the unremediated, histogram-based gap-filled, and segment-based gap-filled products, respectively. Our results indicate that over the time period considered, and given the types and spatial distribution of change events within our study area, the gap-filled products can provide a useful data source for change detection in forested environments. The selection of which product to use is, however, very dependent on the nature of the application and the spatial configuration of change events. © 2008 Government of Canada.

  11. Dual-energy-based metal segmentation for metal artifact reduction in dental computed tomography.

    PubMed

    Hegazy, Mohamed A A; Eldib, Mohamed Elsayed; Hernandez, Daniel; Cho, Myung Hye; Cho, Min Hyoung; Lee, Soo Yeol

    2018-02-01

    In a dental CT scan, the presence of dental fillings or dental implants generates severe metal artifacts that often compromise readability of the CT images. Many metal artifact reduction (MAR) techniques have been introduced, but dental CT scans still suffer from severe metal artifacts particularly when multiple dental fillings or implants exist around the region of interest. The high attenuation coefficient of teeth often causes erroneous metal segmentation, compromising the MAR performance. We propose a metal segmentation method for a dental CT that is based on dual-energy imaging with a narrow energy gap. Unlike a conventional dual-energy CT, we acquire two projection data sets at two close tube voltages (80 and 90 kVp), and then compute the difference image between the two projection images with an optimized weighting factor so as to maximize the contrast of the metal regions. We reconstruct CT images from the weighted difference image to identify the metal region with global thresholding. We forward project the identified metal region to designate the metal trace on the projection image. We substitute the pixel values on the metal trace with the ones computed by the region filling method. The region filling in the metal trace removes high-intensity data made by the metallic objects from the projection image. We reconstruct final CT images from the region-filled projection image with the fusion-based approach. We have done imaging experiments on a dental phantom and a human skull phantom using a lab-built micro-CT and a commercial dental CT system. We have corrected the projection images of a dental phantom and a human skull phantom using the single-energy and dual-energy-based metal segmentation methods. The single-energy-based method often failed in correcting the metal artifacts on the slices on which tooth enamel exists.
The dual-energy-based method showed better MAR performances in all cases regardless of the presence of tooth enamel on the slice of interest. We have compared the MAR performances between both methods in terms of the relative error (REL), the sum of squared difference (SSD) and the normalized absolute difference (NAD). For the dental phantom images corrected by the single-energy-based method, the metric values were 95.3%, 94.5%, and 90.6%, respectively, while they were 90.1%, 90.05%, and 86.4%, respectively, for the images corrected by the dual-energy-based method. For the human skull phantom images, the metric values were improved from 95.6%, 91.5%, and 89.6%, respectively, to 88.2%, 82.5%, and 81.3%, respectively. The proposed dual-energy-based method has shown better performance in metal segmentation leading to better MAR performance in dental imaging. We expect the proposed metal segmentation method can be used to improve the MAR performance of existing MAR techniques that have metal segmentation steps in their correction procedures. © 2017 American Association of Physicists in Medicine.
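
    The projection-domain steps described above can be sketched in a few lines: form a weighted difference of two close-kV projections, threshold the enhanced metal signal to mark the metal trace, then fill the trace from its neighbours. This is a minimal illustrative sketch; linear interpolation stands in for the paper's region filling method, and all values are made up.

```python
def metal_trace(p_low, p_high, w, thresh):
    """Weighted difference of two close-kV projections (flat lists), then
    global thresholding to mark the metal trace."""
    diff = [a - w * b for a, b in zip(p_low, p_high)]
    return [v > thresh for v in diff]

def fill_trace(proj, trace):
    """Region filling of the metal trace: replace flagged samples by linear
    interpolation from the nearest unflagged neighbours.  Assumes the trace
    does not cover an entire projection row."""
    out = list(proj)
    n = len(proj)
    for i in range(n):
        if not trace[i]:
            continue
        l = i - 1
        while l >= 0 and trace[l]:
            l -= 1
        r = i + 1
        while r < n and trace[r]:
            r += 1
        if l < 0:          # trace touches the left edge
            out[i] = proj[r]
        elif r >= n:       # trace touches the right edge
            out[i] = proj[l]
        else:
            t = (i - l) / (r - l)
            out[i] = proj[l] + t * (proj[r] - proj[l])
    return out
```

    In the paper the weighting factor w is optimized to maximize metal contrast, and the final images are reconstructed from the region-filled projections with a fusion-based approach.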

  12. The VirusBanker database uses a Java program to allow flexible searching through Bunyaviridae sequences.

    PubMed

    Fourment, Mathieu; Gibbs, Mark J

    2008-02-05

    Viruses of the Bunyaviridae have segmented negative-stranded RNA genomes and several of them cause significant disease. Many partial sequences have been obtained from the segments so that GenBank searches give complex results. Sequence databases usually use HTML pages to mediate remote sorting, but this approach can be limiting and may discourage a user from exploring a database. The VirusBanker database contains Bunyaviridae sequences and alignments and is presented as two spreadsheets generated by a Java program that interacts with a MySQL database on a server. Sequences are displayed in rows and may be sorted using information that is displayed in columns and includes data relating to the segment, gene, protein, species, strain, sequence length, terminal sequence and date and country of isolation. Bunyaviridae sequences and alignments may be downloaded from the second spreadsheet with titles defined by the user from the columns, or viewed when passed directly to the sequence editor, Jalview. VirusBanker allows large datasets of aligned nucleotide and protein sequences from the Bunyaviridae to be compiled and winnowed rapidly using criteria that are formulated heuristically.

  13. ECG signal quality during arrhythmia and its application to false alarm reduction.

    PubMed

    Behar, Joachim; Oster, Julien; Li, Qiao; Clifford, Gari D

    2013-06-01

    An automated algorithm to assess electrocardiogram (ECG) quality for both normal and abnormal rhythms is presented for false arrhythmia alarm suppression of intensive care unit (ICU) monitors. A particular focus is given to the quality assessment of a wide variety of arrhythmias. Data from three databases were used: the Physionet Challenge 2011 dataset, the MIT-BIH arrhythmia database, and the MIMIC II database. The quality of more than 33 000 single-lead 10 s ECG segments were manually assessed and another 12 000 bad-quality single-lead ECG segments were generated using the Physionet noise stress test database. Signal quality indices (SQIs) were derived from the ECGs segments and used as the inputs to a support vector machine classifier with a Gaussian kernel. This classifier was trained to estimate the quality of an ECG segment. Classification accuracies of up to 99% on the training and test set were obtained for normal sinus rhythm and up to 95% for arrhythmias, although performance varied greatly depending on the type of rhythm. Additionally, the association between 4050 ICU alarms from the MIMIC II database and the signal quality, as evaluated by the classifier, was studied. Results suggest that the SQIs should be rhythm specific and that the classifier should be trained for each rhythm call independently. This would require a substantially increased set of labeled data in order to train an accurate algorithm.
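
    The pipeline above extracts signal quality indices (SQIs) from each 10 s segment and feeds them to a kernel classifier. The sketch below is illustrative only: the two indices shown are generic quality cues, not the paper's exact SQI set, and a kernelised nearest neighbour stands in for the trained support vector machine with Gaussian kernel.

```python
import math
import statistics

def sqi_features(seg):
    """Two simple signal quality indices (SQIs) for one ECG segment:
    kurtosis (clean ECG is strongly peaked) and the fraction of flat
    successive samples (a crude flatline/saturation cue)."""
    mu = statistics.fmean(seg)
    sd = statistics.pstdev(seg)
    kurt = sum(((x - mu) / sd) ** 4 for x in seg) / len(seg) if sd else 0.0
    flat = sum(1 for a, b in zip(seg, seg[1:]) if a == b) / (len(seg) - 1)
    return (kurt, flat)

def rbf_nn_classify(x, labelled, gamma=1.0):
    """Classify a feature vector by the labelled example with the largest
    Gaussian-kernel similarity -- a kernelised nearest neighbour standing
    in for the trained support vector machine."""
    def k(a, b):
        return math.exp(-gamma * sum((u - v) ** 2 for u, v in zip(a, b)))
    return max(labelled, key=lambda fv_label: k(x, fv_label[0]))[1]
```

    An alarm whose surrounding ECG segment is classified as bad quality is then a candidate for suppression as a likely false alarm.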

  14. MS lesion segmentation using a multi-channel patch-based approach with spatial consistency

    NASA Astrophysics Data System (ADS)

    Mechrez, Roey; Goldberger, Jacob; Greenspan, Hayit

    2015-03-01

    This paper presents an automatic method for segmentation of Multiple Sclerosis (MS) in Magnetic Resonance Images (MRI) of the brain. The approach is based on similarities between multi-channel patches (T1, T2 and FLAIR). An MS lesion patch database is built using training images for which the label maps are known. For each patch in the testing image, k similar patches are retrieved from the database. The matching labels for these k patches are then combined to produce an initial segmentation map for the test case. Finally a novel iterative patch-based label refinement process based on the initial segmentation map is performed to ensure spatial consistency of the detected lesions. A leave-one-out evaluation is done for each testing image in the MS lesion segmentation challenge of MICCAI 2008. Results are shown to compete with the state-of-the-art methods on the MICCAI 2008 challenge.
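
    The initial-segmentation step described above (retrieve k similar patches, combine their labels) can be sketched directly. This is a minimal illustrative version with flattened patches and majority voting; the paper's multi-channel similarity and iterative spatial refinement are omitted.

```python
def knn_label_fusion(patch, database, k=3):
    """For a multi-channel test patch (flattened intensities), retrieve the
    k most similar patches from a labelled patch database and combine their
    labels by majority vote -- the initial segmentation step sketched above."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    nearest = sorted(database, key=lambda pair: dist(patch, pair[0]))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)
```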

  15. New Embedded Denotes Fuzzy C-Mean Application for Breast Cancer Density Segmentation in Digital Mammograms

    NASA Astrophysics Data System (ADS)

    Othman, Khairulnizam; Ahmad, Afandi

    2016-11-01

    In this research we explore the application of newly proposed normalized-denotes techniques, within an advanced fast c-means framework, to the problem of segmenting different breast tissue regions in mammograms. The goal of the segmentation algorithm is to determine whether the new denotes fuzzy c-means algorithm can separate the different densities of the different breast patterns. The new density segmentation is applied with multi-selection of seed labels to provide the hard constraint, where the seed labels are user defined. The new denotes fuzzy c-means has been explored on images of various imaging modalities, but not yet on large-format digital mammograms. Therefore, this project mainly focuses on using the normalized-denotes techniques employed in fuzzy c-means to perform segmentation that increases the visibility of different breast densities in mammography images. Segmentation of the mammogram into different mammographic densities is useful for risk assessment and for quantitative evaluation of density changes. Our proposed methodology for segmenting mammograms into density-based categories has been tested on the MIAS database and the Trueta database.
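
    For reference, the baseline the entry builds on is standard fuzzy c-means: each pixel intensity belongs to every density cluster with a membership degree, and centres are membership-weighted means. A minimal 1-D sketch (illustrative only; the paper's normalization, seed constraints, and 2-D handling are omitted):

```python
def fuzzy_c_means(xs, c=2, m=2.0, iters=50):
    """1-D fuzzy c-means on a list of intensities: every sample belongs to
    every cluster with a membership degree in [0, 1]; centres are the
    membership-weighted means."""
    centres = [min(xs), max(xs)] if c == 2 else list(xs[:c])
    u = [[0.0] * c for _ in xs]
    for _ in range(iters):
        for i, x in enumerate(xs):
            d = [abs(x - v) or 1e-12 for v in centres]  # avoid divide-by-zero
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / dk) ** (2.0 / (m - 1.0)) for dk in d)
        for j in range(c):
            den = sum(u[i][j] ** m for i in range(len(xs)))
            centres[j] = sum((u[i][j] ** m) * x for i, x in enumerate(xs)) / den
    return centres, u
```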

  16. Image segmentation evaluation for very-large datasets

    NASA Astrophysics Data System (ADS)

    Reeves, Anthony P.; Liu, Shuang; Xie, Yiting

    2016-03-01

    With the advent of modern machine learning methods and fully automated image analysis there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. Current approaches of visual inspection and manual markings do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes is achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automatically facilitate the measurement of a number of important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.
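
    One way to read the review-reduction idea above: score each case quantitatively and queue only low-scoring cases for visual inspection. The Dice agreement between two independent algorithms used below is an illustrative criterion, not the authors' exact measure.

```python
def dice(a, b):
    """Dice overlap of two binary masks given as flat 0/1 lists."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    size = sum(a) + sum(b)
    return 2 * inter / size if size else 1.0

def needs_review(mask_alg1, mask_alg2, agreement=0.9):
    """Route a case to visual inspection only when two independent
    algorithms disagree; high-agreement cases pass without manual marking."""
    return dice(mask_alg1, mask_alg2) < agreement
```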

  17. Multiatlas whole heart segmentation of CT data using conditional entropy for atlas ranking and selection.

    PubMed

    Zhuang, Xiahai; Bai, Wenjia; Song, Jingjing; Zhan, Songhua; Qian, Xiaohua; Shi, Wenzhe; Lian, Yanyun; Rueckert, Daniel

    2015-07-01

    Cardiac computed tomography (CT) is widely used in clinical diagnosis of cardiovascular diseases. Whole heart segmentation (WHS) plays a vital role in developing new clinical applications of cardiac CT. However, the shape and appearance of the heart can vary greatly across different scans, making the automatic segmentation particularly challenging. The objective of this work is to develop and evaluate a multiatlas segmentation (MAS) scheme using a new atlas ranking and selection algorithm for automatic WHS of CT data. Research on different MAS strategies and their influence on WHS performance are limited. This work provides a detailed comparison study evaluating the impacts of label fusion, atlas ranking, and sizes of the atlas database on the segmentation performance. Atlases in a database were registered to the target image using a hierarchical registration scheme specifically designed for cardiac images. A subset of the atlases were selected for label fusion, according to the authors' proposed atlas ranking criterion which evaluated the performance of each atlas by computing the conditional entropy of the target image given the propagated atlas labeling. Joint label fusion was used to combine multiple label estimates to obtain the final segmentation. The authors used 30 clinical cardiac CT angiography (CTA) images to evaluate the proposed MAS scheme and to investigate different segmentation strategies. The mean WHS Dice score of the proposed MAS method was 0.918 ± 0.021, and the mean runtime for one case was 13.2 min on a workstation. This MAS scheme using joint label fusion generated significantly better Dice scores than the other label fusion strategies, including majority voting (0.901 ± 0.276, p < 0.01), locally weighted voting (0.905 ± 0.0247, p < 0.01), and probabilistic patch-based fusion (0.909 ± 0.0249, p < 0.01). 
In the atlas ranking study, the proposed criterion based on conditional entropy yielded a performance curve with higher WHS Dice scores compared to the conventional schemes (p < 0.03). In the atlas database study, the authors showed that the MAS using larger atlas databases generated better performance curves than the MAS using smaller ones, indicating larger atlas databases could produce more accurate segmentation. The authors have developed a new MAS framework for automatic WHS of CTA and investigated alternative implementations of MAS. With the proposed atlas ranking algorithm and joint label fusion, the MAS scheme is able to generate accurate segmentation within practically acceptable computation time. This method can be useful for the development of new clinical applications of cardiac CT.
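
    The ranking criterion above scores each atlas by the conditional entropy of the target image given the propagated atlas labelling. A minimal sketch of that computation (illustrative; real use would quantise intensities and work on registered 3-D volumes):

```python
import math
from collections import Counter

def conditional_entropy(target, labels):
    """H(target | label): entropy of the (quantised) target intensities given
    the propagated atlas labels, from their joint histogram.  A lower value
    means the atlas labelling predicts the target image better."""
    n = len(target)
    joint = Counter(zip(target, labels))
    marginal = Counter(labels)
    h = 0.0
    for (t, lab), c in joint.items():
        h -= (c / n) * math.log2(c / marginal[lab])
    return h

def rank_atlases(target, atlas_labelings, top_k=2):
    """Keep the top_k atlases whose propagated labelings best predict the
    target, i.e. with the lowest conditional entropy."""
    return sorted(atlas_labelings,
                  key=lambda labels: conditional_entropy(target, labels))[:top_k]
```

    Joint label fusion then combines the label estimates of the selected atlases into the final segmentation.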

  18. MOSAIC: an online database dedicated to the comparative genomics of bacterial strains at the intra-species level.

    PubMed

    Chiapello, Hélène; Gendrault, Annie; Caron, Christophe; Blum, Jérome; Petit, Marie-Agnès; El Karoui, Meriem

    2008-11-27

    The recent availability of complete sequences for numerous closely related bacterial genomes opens up new challenges in comparative genomics. Several methods have been developed to align complete genomes at the nucleotide level but their use and the biological interpretation of results are not straightforward. It is therefore necessary to develop new resources to access, analyze, and visualize genome comparisons. Here we present recent developments on MOSAIC, a generalist comparative bacterial genome database. This database provides the bacteriologist community with easy access to comparisons of complete bacterial genomes at the intra-species level. The strategy we developed for comparison allows us to define two types of regions in bacterial genomes: backbone segments (i.e., regions conserved in all compared strains) and variable segments (i.e., regions that are either specific to or variable in one of the aligned genomes). Definition of these segments at the nucleotide level allows precise comparative and evolutionary analyses of both coding and non-coding regions of bacterial genomes. Such work is easily performed using the MOSAIC Web interface, which allows browsing and graphical visualization of genome comparisons. The MOSAIC database now includes 493 pairwise comparisons and 35 multiple maximal comparisons representing 78 bacterial species. Genome conserved regions (backbones) and variable segments are presented in various formats for further analysis. A graphical interface allows visualization of aligned genomes and functional annotations. The MOSAIC database is available online at http://genome.jouy.inra.fr/mosaic.
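
    The backbone/variable distinction above can be sketched on a pairwise alignment: runs of identical columns become backbone segments, everything else variable segments. This is an illustrative simplification (MOSAIC works on whole-genome alignments across multiple strains; the minimum run length here is arbitrary).

```python
def backbone_variable_segments(aln_a, aln_b, min_len=3):
    """Partition two aligned sequences (equal length, '-' for gaps) into
    backbone segments (runs of at least min_len identical columns) and
    variable segments (everything else).  Returns (kind, start, end) tuples."""
    match = [a == b and a != '-' for a, b in zip(aln_a, aln_b)]
    segments, start = [], 0
    for i in range(1, len(match) + 1):
        if i == len(match) or match[i] != match[start]:
            kind = 'backbone' if match[start] and i - start >= min_len else 'variable'
            segments.append((kind, start, i))
            start = i
    merged = []  # fold short conserved runs into surrounding variable segments
    for seg in segments:
        if merged and merged[-1][0] == 'variable' and seg[0] == 'variable':
            merged[-1] = ('variable', merged[-1][1], seg[2])
        else:
            merged.append(seg)
    return merged
```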

  19. The VirusBanker database uses a Java program to allow flexible searching through Bunyaviridae sequences

    PubMed Central

    Fourment, Mathieu; Gibbs, Mark J

    2008-01-01

    Background Viruses of the Bunyaviridae have segmented negative-stranded RNA genomes and several of them cause significant disease. Many partial sequences have been obtained from the segments so that GenBank searches give complex results. Sequence databases usually use HTML pages to mediate remote sorting, but this approach can be limiting and may discourage a user from exploring a database. Results The VirusBanker database contains Bunyaviridae sequences and alignments and is presented as two spreadsheets generated by a Java program that interacts with a MySQL database on a server. Sequences are displayed in rows and may be sorted using information that is displayed in columns and includes data relating to the segment, gene, protein, species, strain, sequence length, terminal sequence and date and country of isolation. Bunyaviridae sequences and alignments may be downloaded from the second spreadsheet with titles defined by the user from the columns, or viewed when passed directly to the sequence editor, Jalview. Conclusion VirusBanker allows large datasets of aligned nucleotide and protein sequences from the Bunyaviridae to be compiled and winnowed rapidly using criteria that are formulated heuristically. PMID:18251994

  20. Word-level recognition of multifont Arabic text using a feature vector matching approach

    NASA Astrophysics Data System (ADS)

    Erlandson, Erik J.; Trenkle, John M.; Vogt, Robert C., III

    1996-03-01

    Many text recognition systems recognize text imagery at the character level and assemble words from the recognized characters. An alternative approach is to recognize text imagery at the word level, without analyzing individual characters. This approach avoids the problem of individual character segmentation, and can overcome local errors in character recognition. A word-level recognition system for machine-printed Arabic text has been implemented. Arabic is a script language, and is therefore difficult to segment at the character level. Character segmentation has been avoided by recognizing text imagery of complete words. The Arabic recognition system computes a vector of image-morphological features on a query word image. This vector is matched against a precomputed database of vectors from a lexicon of Arabic words. Vectors from the database with the highest match score are returned as hypotheses for the unknown image. Several feature vectors may be stored for each word in the database. Database feature vectors generated using multiple fonts and noise models allow the system to be tuned to its input stream. Used in conjunction with database pruning techniques, this Arabic recognition system has obtained promising word recognition rates on low-quality multifont text imagery.
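
    The matching step above reduces to nearest-vector search over a lexicon in which each word may carry several precomputed vectors (one per font or noise model). A minimal sketch, with made-up words and a plain Euclidean match score standing in for the system's actual morphological-feature comparison:

```python
def match_word(query_vec, lexicon_db, top_n=3):
    """Match a word-image feature vector against a precomputed lexicon of
    (word, vector) entries; a word may carry several vectors, e.g. one per
    font or noise model.  Returns top_n word hypotheses by best match score."""
    def score(a, b):  # negative Euclidean distance as the match score
        return -sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    best = {}
    for word, vec in lexicon_db:
        s = score(query_vec, vec)
        if word not in best or s > best[word]:
            best[word] = s
    return sorted(best, key=best.get, reverse=True)[:top_n]
```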

  1. Utilization of tooth filling services by people with disabilities in Taiwan.

    PubMed

    Chen, Ming-Chuan; Kung, Pei-Tseng; Su, Hsun-Pi; Yen, Suh-May; Chiu, Li-Ting; Tsai, Wen-Chen

    2016-04-05

    The oral condition of people with disabilities has considerable influence on their physical and mental health. However, nationwide surveys regarding this group have not been conducted. For this study, we used the National Health Insurance Research Database to explore tooth filling utilization among people with disabilities. We merged the 2008 registry of people with disabilities from the database of the Ministry of the Interior with the 2008 medical claims database of the Bureau of National Health Insurance to calculate tooth filling utilization and to analyze related factors. We recruited 993,487 people with disabilities as the research sample. The tooth filling utilization rate was 17.53%. The multiple logistic regression results showed that the utilization rate of men was lower than that of women (OR = 0.78, 95% CI = 0.77-0.79) and older people had lower utilization rates (aged over 75, OR = 0.22, 95% CI = 0.22-0.23) compared to those under the age of 20. Other factors significantly associated with low tooth filling utilization included a low education level, living in less urbanized areas, low economic capacity, dementia, and severe disability. We identified the factors that decrease the tooth-filling service utilization rate: male sex, old age, low education level, being married, indigenous ethnicity, residing in a low-urbanization area, low income, chronic circulatory system diseases, dementia, and severe disabilities. We suggest establishing proper medical care environments for these high-risk groups to maintain their quality of life.

  2. Automated analysis of high-throughput B-cell sequencing data reveals a high frequency of novel immunoglobulin V gene segment alleles.

    PubMed

    Gadala-Maria, Daniel; Yaari, Gur; Uduman, Mohamed; Kleinstein, Steven H

    2015-02-24

    Individual variation in germline and expressed B-cell immunoglobulin (Ig) repertoires has been associated with aging, disease susceptibility, and differential response to infection and vaccination. Repertoire properties can now be studied at large-scale through next-generation sequencing of rearranged Ig genes. Accurate analysis of these repertoire-sequencing (Rep-Seq) data requires identifying the germline variable (V), diversity (D), and joining (J) gene segments used by each Ig sequence. Current V(D)J assignment methods work by aligning sequences to a database of known germline V(D)J segment alleles. However, existing databases are likely to be incomplete and novel polymorphisms are hard to differentiate from the frequent occurrence of somatic hypermutations in Ig sequences. Here we develop a Tool for Ig Genotype Elucidation via Rep-Seq (TIgGER). TIgGER analyzes mutation patterns in Rep-Seq data to identify novel V segment alleles, and also constructs a personalized germline database containing the specific set of alleles carried by a subject. This information is then used to improve the initial V segment assignments from existing tools, like IMGT/HighV-QUEST. The application of TIgGER to Rep-Seq data from seven subjects identified 11 novel V segment alleles, including at least one in every subject examined. These novel alleles constituted 13% of the total number of unique alleles in these subjects, and impacted 3% of V(D)J segment assignments. These results reinforce the highly polymorphic nature of human Ig V genes, and suggest that many novel alleles remain to be discovered. The integration of TIgGER into Rep-Seq processing pipelines will increase the accuracy of V segment assignments, thus improving B-cell repertoire analyses.
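
    The core intuition behind the mutation-pattern analysis above: somatic hypermutation scatters mutations across reads, whereas an unrecorded germline polymorphism recurs in (nearly) every read assigned to that V allele. A heavily simplified sketch of that test (TIgGER itself uses mutation-frequency regression, not a single threshold; sequences and threshold here are made up):

```python
from collections import Counter

def candidate_novel_positions(germline, reads, freq_threshold=0.9):
    """Flag positions where at least freq_threshold of the reads share the
    same non-germline base -- a hint that the subject carries an allele
    missing from the germline database rather than a somatic hypermutation."""
    hits = []
    for i, g in enumerate(germline):
        base, count = Counter(r[i] for r in reads).most_common(1)[0]
        if base != g and count / len(reads) >= freq_threshold:
            hits.append((i, g, base))
    return hits
```

    Flagged positions would then feed a personalized germline database used to correct the initial V segment assignments.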

  3. Reconstruction of ECG signals in presence of corruption.

    PubMed

    Ganeshapillai, Gartheeban; Liu, Jessica F; Guttag, John

    2011-01-01

    We present an approach to identifying and reconstructing corrupted regions in a multi-parameter physiological signal. The method, which uses information in correlated signals, is specifically designed to preserve clinically significant aspects of the signals. We use template matching to jointly segment the multi-parameter signal, morphological dissimilarity to estimate the quality of the signal segment, similarity search using features on a database of templates to find the closest match, and time-warping to reconstruct the corrupted segment with the matching template. In experiments carried out on the MIT-BIH Arrhythmia Database, a two-parameter database with many clinically significant arrhythmias, our method improved the classification accuracy of the beat type by more than 7 times on a signal corrupted with white Gaussian noise, and increased the similarity to the original signal, as measured by the normalized residual distance, by more than 2.5 times.
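
    Two of the building blocks above, matching a candidate segment against a template and warping the winner onto the corrupted region, can be sketched as follows. Dynamic time warping gives the match cost; plain linear resampling stands in here for the paper's time-warping reconstruction, and all signals are illustrative.

```python
def dtw(a, b):
    """Dynamic-time-warping distance between two 1-D signal segments."""
    inf = float('inf')
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def reconstruct(corrupted_len, template):
    """Resample the best-matching template to the corrupted segment's length
    by linear interpolation -- a crude stand-in for warping it into place."""
    n, m = corrupted_len, len(template)
    out = []
    for i in range(n):
        t = i * (m - 1) / (n - 1) if n > 1 else 0.0
        j = int(t)
        frac = t - j
        nxt = template[min(j + 1, m - 1)]
        out.append(template[j] * (1 - frac) + nxt * frac)
    return out
```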

  4. Effect of segmentation algorithms on the performance of computerized detection of lung nodules in CT

    PubMed Central

    Guo, Wei; Li, Qiang

    2014-01-01

    Purpose: The purpose of this study is to reveal how the performance of lung nodule segmentation algorithm impacts the performance of lung nodule detection, and to provide guidelines for choosing an appropriate segmentation algorithm with appropriate parameters in a computer-aided detection (CAD) scheme. Methods: The database consisted of 85 CT scans with 111 nodules of 3 mm or larger in diameter from the standard CT lung nodule database created by the Lung Image Database Consortium. The initial nodule candidates were identified as those with strong response to a selective nodule enhancement filter. A uniform viewpoint reformation technique was applied to a three-dimensional nodule candidate to generate 24 two-dimensional (2D) reformatted images, which would be used to effectively distinguish between true nodules and false positives. Six different algorithms were employed to segment the initial nodule candidates in the 2D reformatted images. Finally, 2D features from the segmented areas in the 24 reformatted images were determined, selected, and classified for removal of false positives. Therefore, there were six similar CAD schemes, in which only the segmentation algorithms were different. The six segmentation algorithms included the fixed thresholding (FT), Otsu thresholding (OTSU), fuzzy C-means (FCM), Gaussian mixture model (GMM), Chan and Vese model (CV), and local binary fitting (LBF). The mean Jaccard index and the mean absolute distance (Dmean) were employed to evaluate the performance of segmentation algorithms, and the number of false positives at a fixed sensitivity was employed to evaluate the performance of the CAD schemes. Results: For the segmentation algorithms of FT, OTSU, FCM, GMM, CV, and LBF, the highest mean Jaccard indices between the segmented nodules and the ground truth were 0.601, 0.586, 0.588, 0.563, 0.543, and 0.553, respectively, and the corresponding Dmean values were 1.74, 1.80, 2.32, 2.80, 3.48, and 3.18 pixels, respectively.
With these segmentation results of the six segmentation algorithms, the six CAD schemes reported 4.4, 8.8, 3.4, 9.2, 13.6, and 10.4 false positives per CT scan at a sensitivity of 80%. Conclusions: When multiple algorithms are available for segmenting nodule candidates in a CAD scheme, the “optimal” segmentation algorithm did not necessarily lead to the “optimal” CAD detection performance. PMID:25186393
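
    Of the six algorithms compared above, Otsu thresholding is the simplest to state: pick the grey level that maximises the between-class variance of the resulting split. A compact sketch (illustrative; the study applied it to 2D reformatted nodule images):

```python
def otsu_threshold(pixels, levels=256):
    """Otsu thresholding: pick the grey level that maximises the
    between-class variance of the resulting foreground/background split."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(levels):
        w0 += hist[t]           # pixels at or below threshold t
        sum0 += t * hist[t]
        if w0 == 0 or w0 == n:  # one class empty: variance undefined
            continue
        m0 = sum0 / w0
        m1 = (total - sum0) / (n - w0)
        var = w0 * (n - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```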

  5. Unified framework for automated iris segmentation using distantly acquired face images.

    PubMed

    Tan, Chun-Wei; Kumar, Ajay

    2012-09-01

    Remote human identification using iris biometrics has a wide range of civilian and surveillance applications, and its success requires the development of a robust segmentation algorithm to automatically extract the iris region. This paper presents a new iris segmentation framework which can robustly segment the iris images acquired using near infrared or visible illumination. The proposed approach exploits multiple higher order local pixel dependencies to robustly classify the eye region pixels into iris or noniris regions. Face and eye detection modules have been incorporated in the unified framework to automatically provide the localized eye region from facial image for iris segmentation. We develop robust postprocessing operations to effectively mitigate the noisy pixels caused by misclassification. Experimental results presented in this paper suggest significant improvement in the average segmentation errors over the previously proposed approaches, i.e., 47.5%, 34.1%, and 32.6% on UBIRIS.v2, FRGC, and CASIA.v4 at-a-distance databases, respectively. The usefulness of the proposed approach is also ascertained from recognition experiments on three different publicly available databases.

  6. Geochemical Database for Igneous Rocks of the Ancestral Cascades Arc - Southern Segment, California and Nevada

    USGS Publications Warehouse

    du Bray, Edward A.; John, David A.; Putirka, Keith; Cousens, Brian L.

    2009-01-01

    Volcanic rocks that form the southern segment of the Cascades magmatic arc are an important manifestation of Cenozoic subduction and associated magmatism in western North America. Until recently, these rocks had been little studied and no systematic compilation of existing composition data had been assembled. This report is a compilation of all available chemical data for igneous rocks that constitute the southern segment of the ancestral Cascades magmatic arc and complements a previously completed companion compilation that pertains to rocks of the northern segment of the arc. Data for more than 2,000 samples from a diversity of sources were identified and incorporated in the database. The association between these igneous rocks and spatially and temporally associated mineral deposits is well established and suggests a probable genetic relationship. The ultimate goal of the related research is an evaluation of the time-space-compositional evolution of magmatism associated with the southern Cascades arc segment and identification of genetic associations between magmatism and mineral deposits in this region.

  7. Multiatlas whole heart segmentation of CT data using conditional entropy for atlas ranking and selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhuang, Xiahai, E-mail: zhuangxiahai@sjtu.edu.cn; Qian, Xiaohua; Bai, Wenjia

    Purpose: Cardiac computed tomography (CT) is widely used in clinical diagnosis of cardiovascular diseases. Whole heart segmentation (WHS) plays a vital role in developing new clinical applications of cardiac CT. However, the shape and appearance of the heart can vary greatly across different scans, making the automatic segmentation particularly challenging. The objective of this work is to develop and evaluate a multiatlas segmentation (MAS) scheme using a new atlas ranking and selection algorithm for automatic WHS of CT data. Research on different MAS strategies and their influence on WHS performance is limited. This work provides a detailed comparison study evaluating the impacts of label fusion, atlas ranking, and sizes of the atlas database on the segmentation performance. Methods: Atlases in a database were registered to the target image using a hierarchical registration scheme specifically designed for cardiac images. A subset of the atlases were selected for label fusion, according to the authors’ proposed atlas ranking criterion which evaluated the performance of each atlas by computing the conditional entropy of the target image given the propagated atlas labeling. Joint label fusion was used to combine multiple label estimates to obtain the final segmentation. The authors used 30 clinical cardiac CT angiography (CTA) images to evaluate the proposed MAS scheme and to investigate different segmentation strategies. Results: The mean WHS Dice score of the proposed MAS method was 0.918 ± 0.021, and the mean runtime for one case was 13.2 min on a workstation. This MAS scheme using joint label fusion generated significantly better Dice scores than the other label fusion strategies, including majority voting (0.901 ± 0.276, p < 0.01), locally weighted voting (0.905 ± 0.0247, p < 0.01), and probabilistic patch-based fusion (0.909 ± 0.0249, p < 0.01).
In the atlas ranking study, the proposed criterion based on conditional entropy yielded a performance curve with higher WHS Dice scores compared to the conventional schemes (p < 0.03). In the atlas database study, the authors showed that the MAS using larger atlas databases generated better performance curves than the MAS using smaller ones, indicating that larger atlas databases could produce more accurate segmentations. Conclusions: The authors have developed a new MAS framework for automatic WHS of CTA and investigated alternative implementations of MAS. With the proposed atlas ranking algorithm and joint label fusion, the MAS scheme is able to generate accurate segmentation within practically acceptable computation time. This method can be useful for the development of new clinical applications of cardiac CT.
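The atlas-ranking idea, scoring each propagated atlas labeling by the conditional entropy of the target intensities given the labels and keeping the lowest-scoring atlases, can be sketched as follows (the binning scheme and helper names are my own simplification, not the authors' implementation):

```python
import numpy as np

def conditional_entropy(target, labels, bins=32):
    # H(target | labels) from the joint histogram of binned target
    # intensities and propagated atlas labels; lower means the atlas
    # labeling explains the target image better
    t = ((target - target.min()) / (np.ptp(target) + 1e-12) * bins).astype(int)
    t = np.clip(t.ravel(), 0, bins - 1)
    l = labels.ravel().astype(int)
    joint = np.zeros((bins, l.max() + 1))
    np.add.at(joint, (t, l), 1.0)
    p = joint / joint.sum()
    p_label = p.sum(axis=0)  # marginal over intensity bins
    with np.errstate(divide="ignore", invalid="ignore"):
        # H(T|L) = -sum p(t,l) log p(t|l); nansum skips 0*log(0) terms
        return -np.nansum(p * np.log(p / p_label))

def rank_atlases(target, propagated_labelings):
    # ascending conditional entropy: best-matching atlases first
    scores = [conditional_entropy(target, lab) for lab in propagated_labelings]
    return np.argsort(scores)
```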

  8. Towards Automatic Image Segmentation Using Optimised Region Growing Technique

    NASA Astrophysics Data System (ADS)

    Alazab, Mamoun; Islam, Mofakharul; Venkatraman, Sitalakshmi

    Image analysis is being adopted extensively in many applications such as digital forensics, medical treatment, industrial inspection, etc., primarily for diagnostic purposes. Hence, there is a growing interest among researchers in developing new segmentation techniques to aid the diagnosis process. Manual segmentation of images is labour-intensive, extremely time-consuming, and prone to human error, and hence an automated real-time technique is warranted in such applications. There is no universally applicable automated segmentation technique that will work for all images, as image segmentation is quite complex and unique to each domain application. Hence, to fill this gap, this paper presents an efficient segmentation algorithm that can segment a digital image of interest into a more meaningful arrangement of regions and objects. Our algorithm combines a region growing approach with optimised elimination of false boundaries to arrive at more meaningful segments automatically. We demonstrate this using X-ray teeth images that were taken for real-life dental diagnosis.
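A bare-bones version of seeded region growing, the core of the approach described above (the adaptive running-mean criterion below is a common textbook variant, not the authors' exact optimised procedure):

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10.0):
    # grow a region from `seed`, absorbing 4-connected neighbours whose
    # intensity lies within `tol` of the running region mean
    h, w = image.shape
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    total, count = float(image[seed]), 1
    frontier = deque([seed])
    while frontier:
        y, x = frontier.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(image[ny, nx] - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(image[ny, nx])
                    count += 1
                    frontier.append((ny, nx))
    return mask
```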

  9. Automated volumetric segmentation of retinal fluid on optical coherence tomography

    PubMed Central

    Wang, Jie; Zhang, Miao; Pechauer, Alex D.; Liu, Liang; Hwang, Thomas S.; Wilson, David J.; Li, Dengwang; Jia, Yali

    2016-01-01

    We propose a novel automated volumetric segmentation method to detect and quantify retinal fluid on optical coherence tomography (OCT). The fuzzy level set method was introduced for identifying the boundaries of fluid filled regions on B-scans (x and y-axes) and C-scans (z-axis). The boundaries identified from three types of scans were combined to generate a comprehensive volumetric segmentation of retinal fluid. Then, artefactual fluid regions were removed using morphological characteristics and by identifying vascular shadowing with OCT angiography obtained from the same scan. The accuracy of retinal fluid detection and quantification was evaluated on 10 eyes with diabetic macular edema. Automated segmentation had good agreement with manual segmentation qualitatively and quantitatively. The fluid map can be integrated with OCT angiogram for intuitive clinical evaluation. PMID:27446676

  10. Topological materials discovery using electron filling constraints

    NASA Astrophysics Data System (ADS)

    Chen, Ru; Po, Hoi Chun; Neaton, Jeffrey B.; Vishwanath, Ashvin

    2018-01-01

    Nodal semimetals are classes of topological materials that have nodal-point or nodal-line Fermi surfaces, which give them novel transport and topological properties. Despite being highly sought after, there are currently very few experimental realizations, and identifying new materials candidates has mainly relied on exhaustive database searches. Here we show how recent studies on the interplay between electron filling and nonsymmorphic space-group symmetries can guide the search for filling-enforced nodal semimetals. We recast the previously derived constraints on the allowed band-insulator fillings in any space group into a new form, which enables effective screening of materials candidates based solely on their space group, electron count in the formula unit, and multiplicity of the formula unit. This criterion greatly reduces the computation load for discovering topological materials in a database of previously synthesized compounds. As a demonstration, we focus on a few selected nonsymmorphic space groups which are predicted to host filling-enforced Dirac semimetals. Of the more than 30,000 entries listed, our filling criterion alone eliminates 96% of the entries before they are passed on for further analysis. We discover a handful of candidates from this guided search; among them, the monoclinic crystal Ca2Pt2Ga is particularly promising.
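The screening step reduces, for each database entry, to a divisibility check of the total electron count per primitive cell against the fillings at which the space group permits a band insulator. A schematic sketch; the filling table below contains placeholder values for illustration, not the published constraints:

```python
# Placeholder table: band-insulator filling step (electrons per primitive
# cell must be a multiple of this) for a few space groups -- illustrative
# values only, not the published constraints.
INSULATOR_FILLING_STEP = {14: 4, 62: 8}

def passes_filling_criterion(space_group, electrons_per_fu, fu_per_cell):
    """True if a band insulator is impossible at this electron count,
    flagging the entry as a filling-enforced (semi)metal candidate."""
    n = electrons_per_fu * fu_per_cell
    step = INSULATOR_FILLING_STEP.get(space_group)
    if step is None:
        return False          # no constraint tabulated for this group
    return n % step != 0      # count incompatible with any band insulator
```

Applied to every entry of a synthesized-compounds database, this cheap test discards the bulk of entries before any electronic-structure calculation is run.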

  11. Development and Implementation of a Segment/Junction Box Level Database for the ITS Fiber Optic Conduit Network

    DOT National Transportation Integrated Search

    2012-03-01

    This project initiated the development of a computerized database of ITS facilities, including conduits, junction : boxes, cameras, connections, etc. The current system consists of a database of conduit sections of various lengths. : Over the length ...

  12. QENS investigation of filled rubbers

    NASA Astrophysics Data System (ADS)

    Triolo, A.; Lo Celso, F.; Negroni, F.; Arrighi, V.; Qian, H.; Lechner, R. E.; Desmedt, A.; Pieper, J.; Frick, B.; Triolo, R.

    The segmental dynamics of the polymer are investigated in a series of silica-filled rubbers. The presence of inert fillers in polymers greatly affects the mechanical and physical performance of the final materials. For example, silica has been proposed as a reinforcing agent of elastomers in tire production. Results from quasielastic neutron scattering and Dynamic Mechanical Thermal Analysis (DMTA) measurements are presented on styrene-ran-butadiene rubber filled with silica. A clear indication is obtained of the existence of a bimodal dynamics, which can be rationalized in terms of the relaxation of bulk rubber and the much slower relaxation of the rubber adsorbed on the filler surface.

  13. Resolving Large Pre-glacial Valleys Buried by Glacial Sediment Using Electric Resistivity Imaging (ERI)

    NASA Astrophysics Data System (ADS)

    Schmitt, D. R.; Welz, M.; Rokosh, C. D.; Pontbriand, M.-C.; Smith, D. G.

    2004-05-01

    Two-dimensional electric resistivity imaging (ERI) is the most exciting and promising geological tool in geomorphology and stratigraphy since the development of ground-penetrating radar. Recent innovations in 2-D ERI provide a non-intrusive means of efficiently resolving complex shallow subsurface structures under a number of different geological scenarios. In this paper, we test the capacity of ERI to image two large pre-late Wisconsinan-aged valley-fills in central Alberta and north-central Montana. Valley-fills record the history of pre-glacial and glacial sedimentary deposits. These fills are of considerable economic value as groundwater aquifers, aggregate resources (sand and gravel), placers (gold, diamond) and sometimes gas reservoirs in Alberta. Although the approximate locations of pre-glacial valley-fills have been mapped, the scarcity of borehole (well log) information and sediment exposures makes accurate reconstruction of their stratigraphy and cross-section profiles difficult. When coupled with borehole information, ERI successfully imaged three large pre-glacial valley-fills representing three contrasting geological settings. The Sand Coulee segment of the ancestral Missouri River, which has never been glaciated, is filled by electrically conductive pro-glacial lacustrine deposits over resistive sandstone bedrock. By comparison, the Big Sandy segment of the ancestral Missouri River valley has a complex valley-fill composed of till units interbedded with glaciofluvial gravel and varved clays over conductive shale. The fill is capped by floodplain, paludal and low alluvial fan deposits. The pre-glacial Onoway Valley (the ancestral North Saskatchewan River valley) is filled with thick, resistive fluvial gravel over conductive shale and capped with conductive till. The cross-sectional profile of each surveyed pre-glacial valley exhibits discrete benches (terraces) connected by steep drops, features that are hard to map using only boreholes.
Best quality ERI results were obtained along the Sand Coulee and Onoway transects where the contrast between the bedrock and valley-fill was large and the surficial sediment was homogeneous. The effects of decreasing reliability with depth, 3-D anomalies, principles of equivalence and suppression, and surface inhomogeneity on the image quality are discussed.

  14. Automatic Skin Lesion Segmentation Using Deep Fully Convolutional Networks With Jaccard Distance.

    PubMed

    Yuan, Yading; Chao, Ming; Lo, Yeh-Chi

    2017-09-01

    Automatic skin lesion segmentation in dermoscopic images is a challenging task due to the low contrast between lesion and the surrounding skin, the irregular and fuzzy lesion borders, the existence of various artifacts, and various imaging acquisition conditions. In this paper, we present a fully automatic method for skin lesion segmentation by leveraging a 19-layer deep convolutional neural network that is trained end-to-end and does not rely on prior knowledge of the data. We propose a set of strategies to ensure effective and efficient learning with limited training data. Furthermore, we design a novel loss function based on Jaccard distance to eliminate the need for sample re-weighting, a typical procedure when using cross entropy as the loss function for image segmentation due to the strong imbalance between the number of foreground and background pixels. We evaluated the effectiveness, efficiency, as well as the generalization capability of the proposed framework on two publicly available databases. One is from the ISBI 2016 skin lesion analysis towards melanoma detection challenge, and the other is the PH2 database. Experimental results showed that the proposed method outperformed other state-of-the-art algorithms on these two databases. Our method is general enough and only needs minimal pre- and post-processing, which allows its adoption in a variety of medical image segmentation tasks.
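A Jaccard-distance loss of the kind described above has a simple "soft" form over a predicted probability map and a binary mask; a framework-agnostic NumPy sketch of the formula (the smoothing constant `eps` is a common convention, not necessarily the authors' choice):

```python
import numpy as np

def jaccard_loss(pred, target, eps=1e-7):
    # soft Jaccard (IoU) distance between a probability map and a binary
    # ground-truth mask; being a ratio, it needs no foreground/background
    # re-weighting even when the classes are strongly imbalanced
    inter = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - inter
    return 1.0 - (inter + eps) / (union + eps)
```

In training, the same expression would be written with the deep-learning framework's tensor ops so that gradients flow through `pred`.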

  15. Michigan urban trunkline segments safety performance functions (SPFs) : final report.

    DOT National Transportation Integrated Search

    2016-07-01

    This study involves the development of safety performance functions (SPFs) for urban and suburban trunkline segments in the : state of Michigan. Extensive databases were developed through the integration of traffic crash information, traffic volumes,...

  16. Distance-based over-segmentation for single-frame RGB-D images

    NASA Astrophysics Data System (ADS)

    Fang, Zhuoqun; Wu, Chengdong; Chen, Dongyue; Jia, Tong; Yu, Xiaosheng; Zhang, Shihong; Qi, Erzhao

    2017-11-01

    Over-segmentation, known as super-pixels, is a widely used preprocessing step in segmentation algorithms. An over-segmentation algorithm partitions an image into regions of perceptually similar pixels, but performs poorly in indoor environments when based on the color image alone. Fortunately, RGB-D images can improve performance on images of indoor scenes. In order to segment RGB-D images into super-pixels effectively, we propose a novel algorithm, DBOS (Distance-Based Over-Segmentation), which realizes full coverage of super-pixels on the image. DBOS fills the holes in depth images to fully utilize the depth information, and applies a SLIC-like framework for fast running. Additionally, depth features such as plane projection distance are extracted to compute the distance measure at the core of SLIC-like frameworks. Experiments on RGB-D images of the NYU Depth V2 dataset demonstrate that DBOS outperforms state-of-the-art methods in quality while maintaining comparable speed.
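Two ingredients of such a pipeline are easy to make concrete: filling depth-map holes with the nearest valid measurement, and a SLIC-style distance extended with a depth term. A sketch under my own simplifying assumptions (nearest-neighbour hole filling; the weights and the plain depth difference are illustrative, not the DBOS parameters or features):

```python
import numpy as np
from scipy import ndimage

def fill_depth_holes(depth):
    # replace missing (zero) depth readings with the nearest valid value
    invalid = depth == 0
    _, idx = ndimage.distance_transform_edt(invalid, return_indices=True)
    return depth[tuple(idx)]

def rgbd_distance(lab1, xy1, z1, lab2, xy2, z2, m_c=10.0, S=15.0, m_z=0.5):
    # SLIC-like distance with an extra depth term; weights are illustrative
    d_c = np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float))
    d_s = np.linalg.norm(np.asarray(xy1, float) - np.asarray(xy2, float))
    d_z = abs(z1 - z2)
    return np.sqrt((d_c / m_c) ** 2 + (d_s / S) ** 2 + (d_z / m_z) ** 2)
```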

  17. Segmentation of pulmonary nodules in computed tomography using a regression neural network approach and its application to the Lung Image Database Consortium and Image Database Resource Initiative dataset.

    PubMed

    Messay, Temesguen; Hardie, Russell C; Tuinstra, Timothy R

    2015-05-01

    We present new pulmonary nodule segmentation algorithms for computed tomography (CT). These include a fully-automated (FA) system, a semi-automated (SA) system, and a hybrid system. Like most traditional systems, the new FA system requires only a single user-supplied cue point. On the other hand, the SA system represents a new algorithm class requiring 8 user-supplied control points. This does increase the burden on the user, but we show that the resulting system is highly robust and can handle a variety of challenging cases. The proposed hybrid system starts with the FA system. If improved segmentation results are needed, the SA system is then deployed. The FA segmentation engine has 2 free parameters, and the SA system has 3. These parameters are adaptively determined for each nodule in a search process guided by a regression neural network (RNN). The RNN uses a number of features computed for each candidate segmentation. We train and test our systems using the new Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) data. To the best of our knowledge, this is one of the first nodule-specific performance benchmarks using the new LIDC-IDRI dataset. We also compare the performance of the proposed methods with several previously reported results on the same data used by those other methods. Our results suggest that the proposed FA system improves upon the state-of-the-art, and the SA system offers a considerable boost over the FA system. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  18. Quantitative performance evaluation of 124I PET/MRI lesion dosimetry in differentiated thyroid cancer

    NASA Astrophysics Data System (ADS)

    Wierts, R.; Jentzen, W.; Quick, H. H.; Wisselink, H. J.; Pooters, I. N. A.; Wildberger, J. E.; Herrmann, K.; Kemerink, G. J.; Backes, W. H.; Mottaghy, F. M.

    2018-01-01

    The aim was to investigate the quantitative performance of 124I PET/MRI for pre-therapy lesion dosimetry in differentiated thyroid cancer (DTC). Phantom measurements were performed on a PET/MRI system (Biograph mMR, Siemens Healthcare) using 124I and 18F. The PET calibration factor and the influence of radiofrequency coil attenuation were determined using a cylindrical phantom homogeneously filled with radioactivity. The calibration factor was 1.00  ±  0.02 for 18F and 0.88  ±  0.02 for 124I. Near the radiofrequency surface coil an underestimation of less than 5% in radioactivity concentration was observed. Soft-tissue sphere recovery coefficients were determined using the NEMA IEC body phantom. Recovery coefficients were systematically higher for 18F than for 124I. In addition, the six spheres of the phantom were segmented using a PET-based iterative segmentation algorithm. For all 124I measurements, the deviations in segmented lesion volume and mean radioactivity concentration relative to the actual values were smaller than 15% and 25%, respectively. The effect of MR-based attenuation correction (three- and four-segment µ-maps) on bone lesion quantification was assessed using radioactive spheres filled with a K2HPO4 solution mimicking bone lesions. The four-segment µ-map resulted in an underestimation of the imaged radioactivity concentration of up to 15%, whereas the three-segment µ-map resulted in an overestimation of up to 10%. For twenty lesions identified in six patients, a comparison of 124I PET/MRI to PET/CT was performed with respect to segmented lesion volume and radioactivity concentration. The interclass correlation coefficients showed excellent agreement in segmented lesion volume and radioactivity concentration (0.999 and 0.95, respectively). In conclusion, accurate quantitative 124I PET/MRI is feasible and could be used to perform radioiodine pre-therapy lesion dosimetry in DTC.

  19. ASM Based Synthesis of Handwritten Arabic Text Pages

    PubMed Central

    Al-Hamadi, Ayoub; Elzobi, Moftah; El-etriby, Sherif; Ghoneim, Ahmed

    2015-01-01

    Document analysis tasks such as text recognition, word spotting, or segmentation are highly dependent on comprehensive and suitable databases for training and validation. However, their generation is expensive in terms of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development. This is especially true for Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods, each with individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents with detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-Spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever insufficient natural ground-truthed data is available. PMID:26295059

  20. ASM Based Synthesis of Handwritten Arabic Text Pages.

    PubMed

    Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif; Ghoneim, Ahmed

    2015-01-01

    Document analysis tasks such as text recognition, word spotting, or segmentation are highly dependent on comprehensive and suitable databases for training and validation. However, their generation is expensive in terms of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development. This is especially true for Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods, each with individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents with detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-Spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever insufficient natural ground-truthed data is available.

  1. White blood cell segmentation by color-space-based k-means clustering.

    PubMed

    Zhang, Congcong; Xiao, Xiaoyan; Li, Xiaomei; Chen, Ying-Jie; Zhen, Wu; Chang, Jun; Zheng, Chengyun; Liu, Zhi

    2014-09-01

    White blood cell (WBC) segmentation, which is important for cytometry, is a challenging issue because of the morphological diversity of WBCs and the complex and uncertain background of blood smear images. This paper proposes a novel method for the nucleus and cytoplasm segmentation of WBCs for cytometry. A color adjustment step was also introduced before segmentation. Color space decomposition and k-means clustering were combined for segmentation. A database including 300 microscopic blood smear images was used to evaluate the performance of our method. The proposed segmentation method achieves 95.7% and 91.3% overall accuracy for nucleus segmentation and cytoplasm segmentation, respectively. Experimental results demonstrate that the proposed method can segment WBCs effectively with high accuracy.
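The clustering step can be reproduced in miniature: run k-means on per-pixel colour features (for example, one colour-space component reshaped to a column vector) and read the cluster labels back as a segmentation. A self-contained sketch with a hand-rolled k-means; the initialisation and channel choice are my own, not the paper's:

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    # plain Lloyd's k-means on an (n_pixels, n_features) array
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign every pixel to its nearest cluster centre
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):      # guard against empty clusters
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers
```

Reshaping `labels` back to the image shape yields one candidate region per cluster (e.g. nucleus, cytoplasm, background).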

  2. Geodemographic segmentation systems for screening health data.

    PubMed Central

    Openshaw, S; Blake, M

    1995-01-01

    AIM--To describe how geodemographic segmentation systems might be useful as a quick and easy way of exploring postcoded health databases for potentially interesting patterns related to deprivation and other socioeconomic characteristics. DESIGN AND SETTING--This is demonstrated using GB Profiles, a freely available geodemographic classification system developed at Leeds University. It is used here to screen a database of colorectal cancer registrations as a first step in the analysis of that data. RESULTS AND CONCLUSION--Conventional geodemographics is a fairly simple technology and a number of outstanding methodological problems are identified. A solution to some problems is illustrated by using neural net based classifiers and then by reference to a more sophisticated geodemographic approach via a data optimal segmentation technique. PMID:8594132

  3. Automatic detection and segmentation of brain metastases on multimodal MR images with a deep convolutional neural network.

    PubMed

    Charron, Odelin; Lallement, Alex; Jarnet, Delphine; Noblet, Vincent; Clavier, Jean-Baptiste; Meyer, Philippe

    2018-04-01

    Stereotactic treatments are today the reference techniques for the irradiation of brain metastases in radiotherapy. The dose per fraction is very high, and delivered in small volumes (diameter <1 cm). As part of these treatments, effective detection and precise segmentation of lesions are imperative. Many methods based on deep-learning approaches have been developed for the automatic segmentation of gliomas, but very little for that of brain metastases. We adapted an existing 3D convolutional neural network (DeepMedic) to detect and segment brain metastases on MRI. At first, we sought to adapt the network parameters to brain metastases. We then explored the single or combined use of different MRI modalities, by evaluating network performance in terms of detection and segmentation. We also studied the interest of increasing the database with virtual patients or of using an additional database in which the active parts of the metastases are separated from the necrotic parts. Our results indicated that a deep network approach is promising for the detection and the segmentation of brain metastases on multimodal MRI. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Data Processing on Database Management Systems with Fuzzy Query

    NASA Astrophysics Data System (ADS)

    Şimşek, Irfan; Topuz, Vedat

    In this study, a fuzzy query tool (SQLf) for non-fuzzy database management systems was developed. In addition, sample fuzzy queries were made on real data with the tool developed in this study. The performance of SQLf was tested with data about the Marmara University students' food grant. The food grant data were collected in a MySQL database via a form filled out on the web, in which students described their social and economic conditions for the food grant request. This form consists of questions which have fuzzy and crisp answers. The main purpose of the fuzzy query is to determine the students who deserve the grant. SQLf easily found the eligible students for the grant through predefined fuzzy values. The fuzzy query tool (SQLf) could be used easily with other database systems such as Oracle and SQL Server.
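A fuzzy query over crisp relational data boils down to scoring each row with a membership function and thresholding the score. A toy sketch of that mechanism; the membership breakpoints and the sample rows are invented for illustration and are not the study's data:

```python
def trapezoid(x, a, b, c, d):
    # trapezoidal membership: 0 outside (a, d), 1 on [b, c], linear ramps between
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def low_income(income):
    # fuzzy set "low income"; the breakpoints are illustrative
    return trapezoid(income, -1, 0, 300, 600)

# hypothetical rows: (student, monthly family income)
students = [("student_a", 200), ("student_b", 500), ("student_c", 900)]

# fuzzy SELECT: keep rows whose membership degree exceeds 0.5
eligible = [(name, low_income(inc)) for name, inc in students if low_income(inc) > 0.5]
```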

  5. An unsupervised video foreground co-localization and segmentation process by incorporating motion cues and frame features

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Zhang, Qian; Zheng, Chi; Qiu, Guoping

    2018-04-01

    Video foreground segmentation is one of the key problems in video processing. In this paper, we propose a novel and fully unsupervised approach for foreground object co-localization and segmentation of unconstrained videos. We first compute both the actual edges and the motion boundaries of the video frames, and then align them via their HOG feature maps. Then, by filling the occlusions generated by the aligned edges, we obtain more precise masks of the foreground object. These motion-based masks serve as a motion-based likelihood, and a color-based likelihood is additionally adopted for the segmentation process. Experimental results show that our approach outperforms most state-of-the-art algorithms.

  6. Texture-based CAD improves diagnosis for low-dose CT colonography

    NASA Astrophysics Data System (ADS)

    Liang, Zhengrong; Cohen, Harris; Posniak, Erica; Fiore, Eddie; Wang, Zigang; Li, Bin; Andersen, Joseph; Harrington, Donald

    2008-03-01

    Computed tomography (CT)-based virtual colonoscopy or CT colonography (CTC) currently utilizes oral contrast solutions to tag the colonic fluid and possibly residual stool for differentiation from the colon wall and polyps. The enhanced image density of the tagged colonic materials causes a significant partial volume (PV) effect into the colon wall as well as the lumen space (filled with air or CO2). The PV effect on the colon wall can "bury" polyps as large as 5 mm by increasing their image densities to a noticeable level, resulting in false negatives. It can also create false positives when the PV effect extends into the lumen space. We have been modeling the PV effect for mixture-based image segmentation and developing texture-based computer-aided detection of polyps (CADpolyp) by utilizing the PV mixture-based image segmentation. This work presents some preliminary results of developing and applying the texture-based CADpolyp technique to low-dose CTC studies. A total of 114 studies of asymptomatic patients older than 50, who underwent CTC and then optical colonoscopy (OC) on the same day, were selected from a database, which was accumulated in the past decade and contains various bowel preparations and CT scanning protocols. The participating radiologists found ten polyps of greater than 5 mm from a total of 16 OC-proved polyps, i.e., a detection sensitivity of 63%. They scored 23 false positives from the database, i.e., a 20% false positive rate. Approximately 70% of the datasets were marked as having imperfect bowel cleansing and/or image artifacts. The impact of imperfect bowel cleansing and image artifacts on VC performance is significant. The texture-based CADpolyp detected all the polyps with an average of 2.68 false positives per patient. This indicates that texture-based CADpolyp can improve CTC performance in cases of imperfectly cleansed bowels and in the presence of image artifacts.

  7. Determination of absorption coefficient based on laser beam thermal blooming in gas-filled tube.

    PubMed

    Hafizi, B; Peñano, J; Fischer, R; DiComo, G; Ting, A

    2014-08-01

    Thermal blooming of a laser beam propagating in a gas-filled tube is investigated both analytically and experimentally. A self-consistent formulation taking into account heating of the gas and the resultant laser beam spreading (including diffraction) is presented. The heat equation is used to determine the temperature variation while the paraxial wave equation is solved in the eikonal approximation to determine the temporal and spatial variation of the Gaussian laser spot radius, Gouy phase (longitudinal phase delay), and wavefront curvature. The analysis is benchmarked against a thermal blooming experiment in the literature using a CO₂ laser beam propagating in a tube filled with air and propane. New experimental results are presented in which a CW fiber laser (1 μm) propagates in a tube filled with nitrogen and water vapor. By matching laboratory and theoretical results, the absorption coefficient of water vapor is found to agree with calculations using MODTRAN (the MODerate-resolution atmospheric TRANsmission molecular absorption database) and HITRAN (the HIgh-resolution atmospheric TRANsmission molecular absorption database).

  8. A review of automatic mass detection and segmentation in mammographic images.

    PubMed

    Oliver, Arnau; Freixenet, Jordi; Martí, Joan; Pérez, Elsa; Pont, Josep; Denton, Erika R E; Zwiggelaar, Reyer

    2010-04-01

    The aim of this paper is to review existing approaches to the automatic detection and segmentation of masses in mammographic images, highlighting the key points and main differences between the strategies used. The key objective is to point out the advantages and disadvantages of the various approaches. In contrast with other reviews which only describe and compare different approaches qualitatively, this review also provides a quantitative comparison. The performance of seven mass detection methods is compared using two different mammographic databases: a public digitised database and a local full-field digital database. The results are given in terms of Receiver Operating Characteristic (ROC) and Free-response Receiver Operating Characteristic (FROC) analysis. Copyright 2009 Elsevier B.V. All rights reserved.

  9. Pleural effusion segmentation in thin-slice CT

    NASA Astrophysics Data System (ADS)

    Donohue, Rory; Shearer, Andrew; Bruzzi, John; Khosa, Huma

    2009-02-01

    A pleural effusion is excess fluid that collects in the pleural cavity, the fluid-filled space that surrounds the lungs. Surplus amounts of such fluid can impair breathing by limiting the expansion of the lungs during inhalation. Measuring the fluid volume is indicative of the effectiveness of any treatment, but, due to the similarity to surrounding regions, the presence of fragments of collapsed lung, and topological changes, accurate quantification of the effusion volume is a difficult imaging problem. A novel code is presented which performs conditional region growth to accurately segment the effusion shape across a dataset. We demonstrate the applicability of our technique in the segmentation of pleural effusion and pulmonary masses.
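    The paper's conditional region-growth code is not reproduced here; a generic intensity-window region grower on a made-up toy slice sketches the idea (the window [low, high] stands in for whatever growth condition the authors actually use):

```python
from collections import deque

def region_grow(image, seed, low, high):
    """Grow a region from `seed`, adding 4-connected pixels whose
    intensity lies in [low, high] (the growth condition)."""
    rows, cols = len(image), len(image[0])
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region \
               and low <= image[nr][nc] <= high:
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# Toy slice: fluid-like intensities (~50 HU) surrounded by aerated lung (~-800 HU).
img = [[-800, -800, -800, -800],
       [-800,   55,   50, -800],
       [-800,   48, -800, -800]]
effusion = region_grow(img, (1, 1), 40, 60)
```

    Summing the grown region over all slices of the dataset would give the effusion volume estimate.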

  10. CT Urography: Segmentation of Urinary Bladder using CLASS with Local Contour Refinement

    PubMed Central

    Cha, Kenny; Hadjiiski, Lubomir; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.; Zhou, Chuan

    2016-01-01

    Purpose: We are developing a computerized system for bladder segmentation on CT urography (CTU), as a critical component for computer-aided detection of bladder cancer. Methods: The presence of regions filled with intravenous contrast and without contrast presents a challenge for bladder segmentation. Previously, we proposed a Conjoint Level set Analysis and Segmentation System (CLASS). In case the bladder is partially filled with contrast, CLASS segments the non-contrast (NC) region and the contrast-filled (C) region separately and automatically conjoins the NC and C region contours; however, inaccuracies in the NC and C region contours may cause the conjoint contour to exclude portions of the bladder. To alleviate this problem, we implemented a local contour refinement (LCR) method that exploits model-guided refinement (MGR) and energy-driven wavefront propagation (EDWP). MGR propagates the C region contours if the level set propagation in the C region stops prematurely due to substantial non-uniformity of the contrast. EDWP with regularized energies further propagates the conjoint contours to the correct bladder boundary. EDWP uses changes in energies, smoothness criteria of the contour, and the previous slice contour to determine when to stop the propagation, following decision rules derived from training. A data set of 173 cases was collected for this study: 81 cases in the training set (42 lesions, 21 wall thickenings, 18 normal bladders) and 92 cases in the test set (43 lesions, 36 wall thickenings, 13 normal bladders). For all cases, 3D hand-segmented contours were obtained as the reference standard and used for the evaluation of the computerized segmentation accuracy.
Results: For CLASS with LCR, the average volume intersection ratio, average volume error, absolute average volume error, average minimum distance and Jaccard index were 84.2±11.4%, 8.2±17.4%, 13.0±14.1%, 3.5±1.9 mm and 78.8±11.6%, respectively, for the training set, and 78.0±14.7%, 16.4±16.9%, 18.2±15.0%, 3.8±2.3 mm and 73.8±13.4%, respectively, for the test set. With CLASS only, the corresponding values were 75.1±13.2%, 18.7±19.5%, 22.5±14.9%, 4.3±2.2 mm and 71.0±12.6%, respectively, for the training set, and 67.3±14.3%, 29.3±15.9%, 29.4±15.6%, 4.9±2.6 mm and 65.0±13.3%, respectively, for the test set. The differences between the two methods for all five measures were statistically significant (p<0.001) for both the training and test sets. Conclusions: The results demonstrate the potential of CLASS with LCR for segmentation of the bladder. PMID:24801066

  11. 30 CFR 250.1751 - How do I decommission a pipeline in place?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... to be decommissioned; and (4) Length (feet) of segment remaining. (b) Pig the pipeline, unless the Regional Supervisor determines that pigging is not practical; (c) Flush the pipeline; (d) Fill the pipeline...

  12. 30 CFR 250.1751 - How do I decommission a pipeline in place?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... to be decommissioned; and (4) Length (feet) of segment remaining. (b) Pig the pipeline, unless the Regional Supervisor determines that pigging is not practical; (c) Flush the pipeline; (d) Fill the pipeline...

  13. 30 CFR 250.1751 - How do I decommission a pipeline in place?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Length (feet) of segment remaining. (b) Pig the pipeline, unless the Regional Supervisor determines that pigging is not practical; (c) Flush the pipeline; (d) Fill the pipeline with seawater; (e) Cut and plug...

  14. 30 CFR 250.1751 - How do I decommission a pipeline in place?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... to be decommissioned; and (4) Length (feet) of segment remaining. (b) Pig the pipeline, unless the Regional Supervisor determines that pigging is not practical; (c) Flush the pipeline; (d) Fill the pipeline...

  15. 30 CFR 250.1751 - How do I decommission a pipeline in place?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... to be decommissioned; and (4) Length (feet) of segment remaining. (b) Pig the pipeline, unless the Regional Supervisor determines that pigging is not practical; (c) Flush the pipeline; (d) Fill the pipeline...

  16. Text Detection and Translation from Natural Scenes

    DTIC Science & Technology

    2001-06-01

    ... there are no explicit tags around Chinese words. A module for Chinese word segmentation is included in the system. This segmentor uses a word-frequency list to make segmentation decisions. We tested the EBMT-based method using 50 randomly selected signs from our database, assuming perfect sign

  17. A lane line segmentation algorithm based on adaptive threshold and connected domain theory

    NASA Astrophysics Data System (ADS)

    Feng, Hui; Xu, Guo-sheng; Han, Yi; Liu, Yang

    2018-04-01

    Before detecting cracks and repairs on road lanes, it is necessary to eliminate the influence of lane lines on the recognition result in road lane images. To address the problems caused by lane lines, an image segmentation algorithm based on adaptive thresholding and connected domain theory is proposed. First, by analyzing features such as grey-level distribution and the illumination of the images, the algorithm uses the Hough transform to divide the images into different sections and convert them into binary images separately. It then uses connected domain theory to amend the outcome of the segmentation, remove noise and fill the interior of the lane lines. Experiments have proved that this method can eliminate the influence of illumination and lane line abrasion, removing noise thoroughly while maintaining high segmentation precision.
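    The connected-domain cleanup step can be sketched with a plain flood-fill component labelling that discards components below a size threshold (a hypothetical minimal version; the paper's adaptive thresholding and amendment rules are more involved):

```python
from collections import deque

def remove_small_components(binary, min_size):
    """Label 4-connected components in a binary image and keep only
    those with at least `min_size` pixels (noise removal)."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                # Flood-fill one component.
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < rows and 0 <= nx < cols \
                           and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) >= min_size:
                    for y, x in comp:
                        out[y][x] = 1
    return out

# Toy binary mask: one lane-line blob plus two single-pixel noise spots.
mask = [[1, 1, 0, 0, 1],
        [1, 1, 0, 0, 0],
        [0, 0, 0, 1, 0]]
clean = remove_small_components(mask, min_size=3)
```

    The same labelling pass can also be used to fill the interior of a retained component, as the abstract describes.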

  18. Machine Learning Techniques for the Detection of Shockable Rhythms in Automated External Defibrillators

    PubMed Central

    Irusta, Unai; Morgado, Eduardo; Aramendi, Elisabete; Ayala, Unai; Wik, Lars; Kramer-Johansen, Jo; Eftestøl, Trygve; Alonso-Atienza, Felipe

    2016-01-01

    Early recognition of ventricular fibrillation (VF) and electrical therapy are key for the survival of out-of-hospital cardiac arrest (OHCA) patients treated with automated external defibrillators (AED). AED algorithms for VF-detection are customarily assessed using Holter recordings from public electrocardiogram (ECG) databases, which may be different from the ECG seen during OHCA events. This study evaluates VF-detection using data from both OHCA patients and public Holter recordings. ECG segments of 4-s and 8-s duration were analyzed. For each segment 30 features were computed and fed to state-of-the-art machine learning (ML) algorithms. ML algorithms with built-in feature selection capabilities were used to determine the optimal feature subsets for both databases. Patient-wise bootstrap techniques were used to evaluate algorithm performance in terms of sensitivity (Se), specificity (Sp) and balanced error rate (BER). Performance was significantly better for public data, with a mean Se of 96.6%, Sp of 98.8% and BER of 2.2%, compared to a mean Se of 94.7%, Sp of 96.5% and BER of 4.4% for OHCA data. OHCA data required twice as many features as the data from public databases for accurate detection (6 vs. 3). No significant differences in performance were found for different segment lengths; the BER differences were below 0.5 points in all cases. Our results show that VF-detection is more challenging for OHCA data than for data from public databases, and that accurate VF-detection is possible with segments as short as 4 s. PMID:27441719
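    The three reported figures of merit (Se, Sp, BER) follow directly from confusion-matrix counts; a minimal sketch on invented shock/no-shock predictions:

```python
def shock_metrics(predictions, truths):
    """Sensitivity, specificity and balanced error rate for a binary
    shock/no-shock classifier (1 = shockable rhythm)."""
    tp = sum(p == 1 and t == 1 for p, t in zip(predictions, truths))
    tn = sum(p == 0 and t == 0 for p, t in zip(predictions, truths))
    fn = sum(p == 0 and t == 1 for p, t in zip(predictions, truths))
    fp = sum(p == 1 and t == 0 for p, t in zip(predictions, truths))
    se = tp / (tp + fn)          # sensitivity: shockable rhythms correctly flagged
    sp = tn / (tn + fp)          # specificity: non-shockable rhythms correctly passed
    ber = 1 - 0.5 * (se + sp)    # balanced error rate
    return se, sp, ber

# Six toy ECG segments: ground truth vs. classifier output.
se, sp, ber = shock_metrics([1, 1, 0, 0, 1, 0], [1, 1, 1, 0, 0, 0])
```

    In the study these statistics are computed per patient-wise bootstrap replicate and then averaged.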

  19. Machine Learning Techniques for the Detection of Shockable Rhythms in Automated External Defibrillators.

    PubMed

    Figuera, Carlos; Irusta, Unai; Morgado, Eduardo; Aramendi, Elisabete; Ayala, Unai; Wik, Lars; Kramer-Johansen, Jo; Eftestøl, Trygve; Alonso-Atienza, Felipe

    2016-01-01

    Early recognition of ventricular fibrillation (VF) and electrical therapy are key for the survival of out-of-hospital cardiac arrest (OHCA) patients treated with automated external defibrillators (AED). AED algorithms for VF-detection are customarily assessed using Holter recordings from public electrocardiogram (ECG) databases, which may be different from the ECG seen during OHCA events. This study evaluates VF-detection using data from both OHCA patients and public Holter recordings. ECG segments of 4-s and 8-s duration were analyzed. For each segment 30 features were computed and fed to state-of-the-art machine learning (ML) algorithms. ML algorithms with built-in feature selection capabilities were used to determine the optimal feature subsets for both databases. Patient-wise bootstrap techniques were used to evaluate algorithm performance in terms of sensitivity (Se), specificity (Sp) and balanced error rate (BER). Performance was significantly better for public data, with a mean Se of 96.6%, Sp of 98.8% and BER of 2.2%, compared to a mean Se of 94.7%, Sp of 96.5% and BER of 4.4% for OHCA data. OHCA data required twice as many features as the data from public databases for accurate detection (6 vs. 3). No significant differences in performance were found for different segment lengths; the BER differences were below 0.5 points in all cases. Our results show that VF-detection is more challenging for OHCA data than for data from public databases, and that accurate VF-detection is possible with segments as short as 4 s.

  20. Automatic cardiac LV segmentation in MRI using modified graph cuts with smoothness and interslice constraints.

    PubMed

    Albà, Xènia; Figueras I Ventura, Rosa M; Lekadir, Karim; Tobon-Gomez, Catalina; Hoogendoorn, Corné; Frangi, Alejandro F

    2014-12-01

    Magnetic resonance imaging (MRI), specifically late-enhanced MRI, is the standard clinical imaging protocol to assess cardiac viability. Segmentation of myocardial walls is a prerequisite for this assessment. Automatic and robust multisequence segmentation is required to support processing massive quantities of data. A generic rule-based framework to automatically segment the left ventricle myocardium is presented here. We use intensity information, and include shape and interslice smoothness constraints, providing robustness to subject- and study-specific changes. Our automatic initialization considers the geometrical and appearance properties of the left ventricle, as well as interslice information. The segmentation algorithm uses a decoupled, modified graph cut approach with control points, providing a good balance between flexibility and robustness. The method was evaluated on late-enhanced MRI images from a 20-patient in-house database, and on cine-MRI images from a 15-patient open access database, both using manually delineated contours as reference. Segmentation agreement, measured using the Dice coefficient, was 0.81±0.05 and 0.92±0.04 for late-enhanced MRI and cine-MRI, respectively. The method also compared favorably with a three-dimensional Active Shape Model approach. The experimental validation with two magnetic resonance sequences demonstrates increased accuracy and versatility. © 2013 Wiley Periodicals, Inc.
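    The Dice coefficient used to report segmentation agreement is simple to state; a sketch on two tiny made-up masks represented as voxel sets:

```python
def dice(a, b):
    """Dice coefficient between two binary masks given as sets of voxels:
    2|A ∩ B| / (|A| + |B|). 1.0 means perfect overlap, 0.0 none."""
    if not a and not b:
        return 1.0  # two empty masks agree trivially
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical automatic vs. manual contour fills (voxel coordinates).
auto   = {(0, 0), (0, 1), (1, 0), (1, 1)}
manual = {(0, 1), (1, 0), (1, 1), (2, 1)}
score = dice(auto, manual)
```

    For real images the sets would be the voxels inside the automatic and the manually delineated contours, computed per slice or per volume.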

  1. Draft environmental impact statement: Space Shuttle Advanced Solid Rocket Motor Program

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The proposed action is design, development, testing, and evaluation of Advanced Solid Rocket Motors (ASRM) to replace the motors currently used to launch the Space Shuttle. The proposed action includes design, construction, and operation of new government-owned, contractor-operated facilities for manufacturing and testing the ASRM's. The proposed action also includes transport of propellant-filled rocket motor segments from the manufacturing facility to the testing and launch sites and the return of used and/or refurbished segments to the manufacturing site.

  2. Surgical management of an ACM aneurysm eight years after coiling.

    PubMed

    Pogády, P; Fellner, F; Trenkler, J; Wurm, G

    2007-04-01

    The authors present a case report of rebleeding from a middle cerebral artery (MCA) aneurysm eight years after complete endovascular coiling. The initially successfully coiled MCA aneurysm showed local regrowth which, however, was not the source of the rebleeding. The angiogram demonstrated no evidence of contrast filling of the coiled segment, but according to intraoperative findings (haematoma location, displacement of coils, evident site of rupture) there is no doubt that the coiled segment of the aneurysm was responsible for the haemorrhage.

  3. Geophysical investigation, Lake Sherwood dam site, east-central Missouri.

    DOT National Transportation Integrated Search

    2011-10-01

    Electrical resistivity and self-potential (SP) data were acquired across a selected segment of the Lake Sherwood earth-fill dam and in designated areas immediately adjacent to the dam. The 2-D electrical resistivity profile data were acquired with ...

  4. Consistency Analysis of Genome-Scale Models of Bacterial Metabolism: A Metamodel Approach

    PubMed Central

    Ponce-de-Leon, Miguel; Calle-Espinosa, Jorge; Peretó, Juli; Montero, Francisco

    2015-01-01

    Genome-scale metabolic models usually contain inconsistencies that manifest as blocked reactions and gap metabolites. To detect recurrent inconsistencies in metabolic models, a large-scale analysis was performed using a previously published dataset of 130 genome-scale models. The results showed that a large number of reactions (~22%) are blocked in all the models where they are present. To unravel the nature of such inconsistencies, a metamodel was constructed by joining the 130 models in a single network. This metamodel was manually curated using the unconnected-modules approach and then used as a reference network to perform gap-filling on each individual genome-scale model. Finally, a set of 36 models that had not been considered during the construction of the metamodel was used, as a proof of concept, to extend the metamodel with new biochemical information and to assess its impact on gap-filling results. The analysis performed on the metamodel led to the following conclusions: 1) the recurrent inconsistencies found in the models were already present in the metabolic database used during the reconstruction process; 2) the presence of inconsistencies in a metabolic database can be propagated to the reconstructed models; 3) there are reactions not manifested as blocked which are active as a consequence of some classes of artifacts; and 4) the results of automatic gap-filling are highly dependent on the consistency and completeness of the metamodel or metabolic database used as the reference network. In conclusion, the consistency analysis should be applied to metabolic databases in order to detect and fill gaps as well as to detect and remove artifacts and redundant information. PMID:26629901
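    A highly simplified, reachability-based notion of a blocked reaction (the paper's consistency analysis rests on flux-based criteria, which this toy sketch does not attempt) can still illustrate how a gap metabolite blocks downstream reactions; the network below is invented:

```python
def find_blocked(reactions, seeds):
    """Iteratively mark metabolites producible from `seeds`; reactions
    whose substrates never all become producible are flagged as blocked.
    `reactions` maps a reaction id to (substrates, products)."""
    producible = set(seeds)
    fired = set()
    changed = True
    while changed:
        changed = False
        for rid, (subs, prods) in reactions.items():
            if rid not in fired and all(s in producible for s in subs):
                fired.add(rid)
                new = set(prods) - producible
                if new:
                    producible |= new
                    changed = True
    return set(reactions) - fired

net = {
    "R1": (["glc"], ["g6p"]),
    "R2": (["g6p"], ["pyr"]),
    "R3": (["orphan"], ["pyr"]),   # 'orphan' is a gap metabolite: never produced
}
blocked = find_blocked(net, seeds={"glc"})
```

    Gap-filling against a reference network then amounts to searching that network for reactions whose addition makes the blocked reactions' substrates producible.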

  5. Associative memory model for searching an image database by image snippet

    NASA Astrophysics Data System (ADS)

    Khan, Javed I.; Yun, David Y.

    1994-09-01

    This paper presents an associative memory called multidimensional holographic associative computing (MHAC), which can potentially be used to perform feature-based image database queries using an image snippet. MHAC has the unique capability to selectively focus on specific segments of a query frame during associative retrieval. As a result, this model can perform search on the basis of featural significance described by a subset of the snippet pixels. This capability is critical for visual query in image databases because quite often the cognitive index features in the snippet are statistically weak. Unlike conventional artificial associative memories, MHAC uses a two-level representation and incorporates additional meta-knowledge about the reliability status of the segments of information it receives and forwards. In this paper we present an analysis of the focus characteristics of MHAC.

  6. SU-E-I-96: A Study About the Influence of ROI Variation On Tumor Segmentation in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, L; Tan, S; Lu, W

    2014-06-01

    Purpose: To study the influence of different regions of interest (ROI) on tumor segmentation in PET. Methods: The experiments were conducted on a cylindrical phantom. Six spheres with different volumes (0.5 ml, 1 ml, 6 ml, 12 ml, 16 ml and 20 ml) were placed inside a cylindrical container to mimic tumors of different sizes. The spheres were filled with 11C solution as sources and the cylindrical container was filled with 18F-FDG solution as the background. The phantom was continuously scanned in a Biograph-40 True Point/True View PET/CT scanner, and 42 images were reconstructed with source-to-background ratio (SBR) ranging from 16:1 to 1.8:1. We took a large and a small ROI for each sphere, both of which contain the whole sphere and do not contain any other spheres. Six other ROIs of different sizes were then taken between the large and the small ROI. For each ROI, all images were segmented by eight thresholding methods and eight advanced methods, respectively. The segmentation results were evaluated by the dice similarity index (DSI), classification error (CE) and volume error (VE). The robustness of different methods to ROI variation was quantified using the interrun variation and a generalized Cohen's kappa. Results: With the change of ROI, the segmentation results of all tested methods changed to varying degrees. Compared with the advanced methods, the thresholding methods were less affected by the ROI change. In addition, most of the thresholding methods produced more accurate segmentation results for all sphere sizes. Conclusion: The results showed that the segmentation performance of all tested methods was affected by the change of ROI. Thresholding methods were more robust to this change and can segment PET images more accurately. This work was supported in part by the National Natural Science Foundation of China (NNSFC), under Grant Nos. 60971112 and 61375018, and the Fundamental Research Funds for the Central Universities, under Grant No. 2012QN086.
Wei Lu was supported in part by the National Institutes of Health (NIH) Grant No. R01 CA172638.
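    The eight thresholding methods tested are not identified in the abstract; as a stand-in, a classic fixed-fraction-of-maximum threshold applied inside an ROI looks like this (the 42% figure is a conventional choice in PET delineation, not a value from this study):

```python
def threshold_segment(roi, fraction):
    """Fixed-threshold PET segmentation: keep voxels whose value
    exceeds `fraction` of the maximum intensity inside the ROI."""
    peak = max(max(row) for row in roi)
    thresh = fraction * peak
    return [[1 if v >= thresh else 0 for v in row] for row in roi]

# Toy ROI: hot sphere (values ~10) in warm background (~2).
roi = [[2,  2, 2, 2],
       [2, 10, 9, 2],
       [2,  8, 2, 2]]
mask = threshold_segment(roi, fraction=0.42)
```

    Because the threshold depends on the in-ROI maximum (and, for adaptive variants, on the in-ROI background), enlarging or shrinking the ROI changes the result, which is exactly the sensitivity the study quantifies.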

  7. Donor cycle and donor segmentation: new tools for improving blood donor management.

    PubMed

    Veldhuizen, I; Folléa, G; de Kort, W

    2013-07-01

    An adequate donor population is of key importance for the entire blood transfusion chain. For good donor management, a detailed overview of the donor database is therefore imperative. This study offers a new description of the donor cycle related to the donor management process. It also presents the outcomes of a European project, Donor Management IN Europe (DOMAINE), regarding the segmentation of the donor population into donor types. Blood establishments (BEs) from 18 European countries, the Thalassaemia International Federation and a representative from the South-Eastern Europe Health Network joined forces in DOMAINE. A questionnaire assessed blood donor management practices and the composition of the donor population using the newly proposed DOMAINE donor segmentation. 48 BEs in 34 European countries were invited to participate. The response rate was high (88%). However, only 14 BEs could deliver data on the composition of their donor population. The data showed large variations and major imbalances in the donor population. In 79% of the countries, inactive donors formed the dominant donor type. Only in 21% were regular donors the largest subgroup, and in 29% the proportion of first-time donors was higher than the proportion of regular donors. Good donor management depends on a thorough insight into the flow of donors through their donor career. Segmentation of the donor database is an essential tool to understand the influx and efflux of donors. The DOMAINE donor segmentation helps BEs understand their donor database and adapt their donor recruitment and retention practices accordingly. Ways to use this new tool are proposed. © 2013 International Society of Blood Transfusion.

  8. 3D marker-controlled watershed for kidney segmentation in clinical CT exams.

    PubMed

    Wieclawek, Wojciech

    2018-02-27

    Image segmentation is an essential and nontrivial task in computer vision and medical image analysis. Computed tomography (CT) is one of the most accessible medical examination techniques to visualize the interior of a patient's body. Among different computer-aided diagnostic systems, the applications dedicated to kidney segmentation represent a relatively small group. In addition, literature solutions are verified on relatively small databases. The goal of this research is to develop a novel algorithm for fully automated kidney segmentation. This approach is designed for large database analysis including both physiological and pathological cases. This study presents a 3D marker-controlled watershed transform developed and employed for fully automated CT kidney segmentation. The original and most complex step in the current proposition is the automatic generation of 3D marker images. The final kidney segmentation step is an analysis of the labelled image obtained from the marker-controlled watershed transform. It consists of morphological operations and shape analysis. The implementation is conducted in a MATLAB environment, Version 2017a, using, among others, the Image Processing Toolbox. 170 clinical CT abdominal studies have been subjected to the analysis. The dataset includes normal as well as various pathological cases (agenesis, renal cysts, tumors, renal cell carcinoma, kidney cirrhosis, partial or radical nephrectomy, hematoma and nephrolithiasis). Manual and semi-automated delineations have been used as a gold standard. Among 67 delineated medical cases, 62 cases are 'Very good', whereas only 5 are 'Good' according to Cohen's Kappa interpretation. The segmentation results show that mean values of Sensitivity, Specificity, Dice, Jaccard, Cohen's Kappa and Accuracy are 90.29, 99.96, 91.68, 85.04, 91.62 and 99.89% respectively.
All 170 medical cases (with and without outlines) have been classified by three independent medical experts as 'Very good' in 143-148 cases, as 'Good' in 15-21 cases and as 'Moderate' in 6-8 cases. An automatic kidney segmentation approach for CT studies, competitive with commonly known solutions, was developed. The algorithm gives promising results, confirmed during a validation procedure on a relatively large database of 170 CTs including both physiological and pathological cases.
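    Cohen's kappa, used above to grade agreement, corrects raw agreement for chance; a sketch on invented categorical ratings of the same cases:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two categorical ratings of the same cases:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    chance = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (observed - chance) / (1 - chance)

# Hypothetical per-case verdicts from two raters (not the study's data).
a = ["ok", "ok", "bad", "ok", "bad", "ok"]
b = ["ok", "ok", "bad", "bad", "bad", "ok"]
kappa = cohens_kappa(a, b)
```

    Conventional interpretation bands (e.g. kappa above 0.8 read as "Very good") are what the abstract's quality labels refer to.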

  9. Puzzling Little Martian Spheres That Don't Taste Like Blueberries

    NASA Image and Video Library

    2012-09-14

    Small spherical objects fill the field in this mosaic combining four images from the Microscopic Imager on NASA's Mars Exploration Rover Opportunity at an outcrop called Kirkwood in the Cape York segment of the western rim of Endeavour Crater.

  10. NORTHERLY STRETCH OF MILLBURY PORTION; GENERAL VIEW ACROSS CANAL PRISM ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    NORTHERLY STRETCH OF MILLBURY PORTION; GENERAL VIEW ACROSS CANAL PRISM TO TOWPATH BERM (LATER FILL ENCROACHING LEFT) NEAR CENTER OF THIS STRETCH; VIEW TO SOUTHWEST - Blackstone Canal Worcester-Millbury Segment, Eastern bank of Blackstone River, Millbury, Worcester County, MA

  11. A motivation-based explanatory model of street drinking among young people.

    PubMed

    Martín-Santana, Josefa D; Beerli-Palacio, Asunción; Fernández-Monroy, Margarita

    2014-01-01

    This social marketing study focuses on street drinking behavior among young people. The objective is to divide the market of young people who engage in this activity into segments according to their motivations. For the three segments identified, a behavior model is created using the beliefs, attitudes, behavior, and social belonging of young people who engage in street drinking. The methodology used individual questionnaires filled in by a representative sample of young people. The results show that the behavior model follows the sequence of attitudes-beliefs-behavior and that social belonging influences these three variables. Similarly, differences are observed in the behavior model depending on the segment individuals belong to.

  12. A multifractal approach to space-filling recovery for PET quantification.

    PubMed

    Willaime, Julien M Y; Aboagye, Eric O; Tsoumpas, Charalampos; Turkheimer, Federico E

    2014-11-01

    A new image-based methodology is developed for estimating the apparent space-filling properties of an object of interest in PET imaging without need for a robust segmentation step and used to recover accurate estimates of total lesion activity (TLA). A multifractal approach and the fractal dimension are proposed to recover the apparent space-filling index of a lesion (tumor volume, TV) embedded in nonzero background. A practical implementation is proposed, and the index is subsequently used with mean standardized uptake value (SUV mean) to correct TLA estimates obtained from approximate lesion contours. The methodology is illustrated on fractal and synthetic objects contaminated by partial volume effects (PVEs), validated on realistic (18)F-fluorodeoxyglucose PET simulations and tested for its robustness using a clinical (18)F-fluorothymidine PET test-retest dataset. TLA estimates were stable for a range of resolutions typical in PET oncology (4-6 mm). By contrast, the space-filling index and intensity estimates were resolution dependent. TLA was generally recovered within 15% of ground truth on postfiltered PET images affected by PVEs. Volumes were recovered within 15% variability in the repeatability study. Results indicated that TLA is a more robust index than other traditional metrics such as SUV mean or TV measurements across imaging protocols. The fractal procedure reported here is proposed as a simple and effective computational alternative to existing methodologies which require the incorporation of image preprocessing steps (i.e., partial volume correction and automatic segmentation) prior to quantification.
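    The box-counting estimate of a space-filling (fractal) dimension can be sketched in a few lines; the point set and box sizes below are illustrative, and this is not the paper's multifractal implementation:

```python
import math

def box_counting_dimension(points, sizes):
    """Estimate a box-counting dimension from a 2-D point set by
    counting occupied boxes at several box sizes and fitting
    log N(s) ~ -D log s by least squares."""
    logs, logn = [], []
    for s in sizes:
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        logs.append(math.log(s))
        logn.append(math.log(len(boxes)))
    n = len(sizes)
    mean_s = sum(logs) / n
    mean_n = sum(logn) / n
    slope = (sum((ls - mean_s) * (ln - mean_n) for ls, ln in zip(logs, logn))
             / sum((ls - mean_s) ** 2 for ls in logs))
    return -slope  # D is minus the fitted slope

# A densely filled unit square should give a dimension near 2.
pts = [(x / 64, y / 64) for x in range(64) for y in range(64)]
dim = box_counting_dimension(pts, sizes=[1/4, 1/8, 1/16])
```

    For a PET lesion the "points" would be voxels above some activity level, and the dimension serves as the apparent space-filling index that corrects TLA estimates.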

  13. High Tech High School Interns Develop a Mid-Ocean Ridge Database for Research and Education

    NASA Astrophysics Data System (ADS)

    Staudigel, D.; Delaney, R.; Staudigel, H.; Koppers, A. A.; Miller, S. P.

    2004-12-01

    Mid-ocean ridges (MOR) are among the most important geographical and geological features on planet Earth. MORs are the locations where plates spread apart; they are the locations of the majority of the Earth's volcanoes, which harbor some of the most extreme life forms. These concepts attract much research, but mid-ocean ridges are still effectively underrepresented in Earth science classrooms. As two High Tech High School students, we began an internship at Scripps to develop a database for mid-ocean ridges as a resource for science and education. This Ridge Catalog will be accessible via http://earthref.org/databases/RC/ and applies a similar structure, design and data archival principle as the Seamount Catalog under EarthRef.org. Major research goals of this project include the development of (1) an archival structure for multibeam and sidescan data, standard bathymetric maps (including ODP-DSDP drill site and dredge locations) or any other arbitrary digital objects relating to MORs, and (2) a global data set for some of the most defining characteristics of every ridge segment, including ridge segment length, depth, azimuth and half spreading rates. One of the challenges was the need to make MOR data useful to the scientist as well as the teacher in the classroom. Since the basic structure follows the design of the Seamount Catalog closely, we could turn our attention to the basic data population of the database. We have pulled together multibeam data for the MOR segments from various public archives (SIOExplorer, SIO-GDC, NGDC, Lamont), and pre-processed it for public use. In particular, we have created individual bathymetric maps for each ridge segment, while merging the multibeam data with global satellite bathymetry data from Smith & Sandwell (1997). The global scale of this database will give it the ability to be used for any number of applications, from cruise planning to data
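    Per-segment length and azimuth can be computed from endpoint coordinates with the standard haversine and initial-bearing formulas; the endpoints below are hypothetical, not taken from the catalog:

```python
import math

def segment_length_azimuth(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle length (haversine) and initial azimuth (bearing),
    in km and degrees, of a segment between two (lat, lon) endpoints."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    # Haversine great-circle distance.
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    length = 2 * radius_km * math.asin(math.sqrt(a))
    # Initial bearing, measured clockwise from north.
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    azimuth = math.degrees(math.atan2(y, x)) % 360
    return length, azimuth

# Hypothetical segment running due north along a meridian.
length, az = segment_length_azimuth(0.0, -25.0, 1.0, -25.0)
```

    One degree of latitude spans roughly 111 km, and a segment along a meridian has azimuth 0 (due north), which makes this easy to sanity-check.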

  14. SU-D-BRD-06: Automated Population-Based Planning for Whole Brain Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreibmann, E; Fox, T; Crocker, I

    2014-06-01

    Purpose: Treatment planning for whole brain radiation treatment is technically a simple process, but in practice it takes valuable clinical time in repetitive and tedious tasks. This report presents a method that automatically segments the relevant target and normal tissues and creates a treatment plan only a few minutes after patient simulation. Methods: Segmentation is performed automatically through morphological operations on the soft tissue. The treatment plan is generated by searching a database of previous cases for patients with similar anatomy. In this search, each database case is ranked in terms of similarity using a customized metric designed for sensitivity by including only geometrical changes that affect the dose distribution. The database case with the best match is automatically modified to replace the relevant patient info and isocenter position while maintaining the original beam and MLC settings. Results: Fifteen patients were used to validate the method. In each of these cases the anatomy was accurately segmented, with mean Dice coefficients of 0.970 ± 0.008 for the brain, 0.846 ± 0.009 for the eyes and 0.672 ± 0.111 for the lens, as compared to clinical segmentations. Each case was then matched against a database of 70 validated treatment plans, and the best matching plan (termed auto-planned) was compared retrospectively with the clinical plans in terms of brain coverage and maximum doses to critical structures. Maximum doses were reduced by up to 20.809 Gy for the left eye (mean 3.533), by 13.352 (1.311) for the right eye, and by 27.471 (4.856) and 25.218 (6.315) for the left and right lens. Time from simulation to auto-plan was 3-4 minutes. Conclusion: Automated database-based matching is an alternative to classical treatment planning that improves quality while providing a cost-effective solution to planning, by modifying previous validated plans to match a current patient's anatomy.

  15. Shot boundary detection and label propagation for spatio-temporal video segmentation

    NASA Astrophysics Data System (ADS)

    Piramanayagam, Sankaranaryanan; Saber, Eli; Cahill, Nathan D.; Messinger, David

    2015-02-01

    This paper proposes a two-stage algorithm for streaming video segmentation. In the first stage, shot boundaries are detected within a window of frames by comparing the dissimilarity between 2-D segmentations of the frames. In the second stage, the 2-D segments are propagated across the window of frames in both the spatial and temporal directions. The window is moved across the video to find all shot transitions and obtain spatio-temporal segments simultaneously. As opposed to techniques that operate on the entire video, the proposed approach consumes significantly less memory and enables segmentation of lengthy videos. We tested our segmentation-based shot detection method on the TRECVID 2007 video dataset and compared it with a block-based technique. Cut detection results on the TRECVID 2007 dataset indicate that our algorithm is comparable to the best of the block-based methods. The streaming video segmentation routine also achieves promising results on a challenging video segmentation benchmark database.
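
    The paper's cut detector compares 2-D segmentations of neighboring frames; as a simplified stand-in, the sketch below detects a cut whenever a gray-level histogram dissimilarity between consecutive frames exceeds a threshold (the dissimilarity measure and threshold are assumptions, not the authors' method):

```python
import numpy as np

def hist_dissimilarity(f1, f2, bins=16):
    """L1 distance between normalized gray-level histograms, in [0, 1]."""
    h1 = np.histogram(f1, bins=bins, range=(0, 255))[0] / f1.size
    h2 = np.histogram(f2, bins=bins, range=(0, 255))[0] / f2.size
    return 0.5 * np.abs(h1 - h2).sum()

def detect_cuts(frames, thresh=0.5):
    """Indices where a new shot starts, scanning consecutive frame pairs."""
    return [i + 1 for i in range(len(frames) - 1)
            if hist_dissimilarity(frames[i], frames[i + 1]) > thresh]

# Synthetic clip: four dark frames, then four bright frames -> one cut.
dark = np.full((16, 16), 20.0)
bright = np.full((16, 16), 200.0)
cuts = detect_cuts([dark] * 4 + [bright] * 4)   # cut at frame index 4
```

    A streaming implementation would evaluate this only within the current sliding window, which is what keeps memory usage bounded.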

  16. A structure-based approach for colon gland segmentation in digital pathology

    NASA Astrophysics Data System (ADS)

    Ben Cheikh, Bassem; Bertheau, Philippe; Racoceanu, Daniel

    2016-03-01

    The morphology of intestinal glands is an important indicator of the severity of inflammatory bowel disease, and is also used routinely by pathologists to evaluate the malignancy and prognosis of colorectal cancers such as adenocarcinomas. Extracting meaningful information describing the morphology of glands relies on an accurate segmentation method. In this work, we propose a novel technique based on mathematical morphology that characterizes the spatial positioning of nuclei for intestinal gland segmentation in histopathological images. According to their appearance, glands can be divided into two types: hollow glands and solid glands. Hollow glands are composed of lumen and/or goblet-cell cytoplasm, or are filled with abscess in some advanced stages of the disease, while solid glands are composed of bunches of cells clustered together and can also be filled with necrotic debris. Given this scheme, an efficient characterization of the spatial distribution of cells is sufficient to carry out the segmentation. In this approach, hollow glands are first identified as regions empty of nuclei and surrounded by thick layers of epithelial cells; solid glands are then identified by detecting regions crowded with nuclei. First, cell nuclei are identified by color classification. Then, morphological maps are generated by means of advanced morphological operators applied to the nuclei objects in order to interpret their spatial distribution and properties, identifying candidate gland central regions and epithelial layers that are combined to extract the glandular structures.

  17. The heterogeneity of segmental dynamics of filled EPDM by (1)H transverse relaxation NMR.

    PubMed

    Moldovan, D; Fechete, R; Demco, D E; Culea, E; Blümich, B; Herrmann, V; Heinz, M

    2011-01-01

    Residual second moments of dipolar interactions M(2) and distributions of segmental correlation times were measured by Hahn-echo decays in combination with an inverse Laplace transform for a series of unfilled and filled EPDM samples as a function of carbon-black N683 filler content. The filler-polymer chain interactions, which dramatically restrict the mobility of bound rubber, modify the dynamics of the mobile chains. These changes depend on the filler content and can be evaluated from distributions of M(2). A dipolar filter was applied to eliminate the contribution of bound rubber. In the first approach, the Hahn-echo decays were fitted with a theoretical relationship to obtain the average values of the (1)H residual second moment and correlation time <τ(c)>. For the mobile EPDM segments the power-law distribution of the correlation function was compared to the exponential correlation function and found inadequate in the long-time regime. In the second approach, a log-Gauss distribution for the correlation time was assumed. Furthermore, using an averaged value of the correlation time, the distributions of the residual second moment were determined using an inverse Laplace transform for the entire series of measured samples. The unfilled EPDM sample shows a bimodal distribution of residual second moments, with one mode associated with the mobile polymer sub-chains (M(2) ≅ 6.1 rad(2) s(-2)) and the second associated with the dangling chains (M(2) ≅ 5.4 rad(2) s(-2)). By restraining the mobility of bound rubber, the carbon-black fillers introduce diversity into the segmental dynamics, such as the appearance of a distinct mobile component and changes in the distribution of mobile and free-end polymer segments. Copyright © 2010 Elsevier Inc. All rights reserved.

  19. Segmentation of lung nodules in computed tomography images using dynamic programming and multidirection fusion techniques.

    PubMed

    Wang, Qian; Song, Enmin; Jin, Renchao; Han, Ping; Wang, Xiaotong; Zhou, Yanying; Zeng, Jianchao

    2009-06-01

    The aim of this study was to develop a novel algorithm for segmenting lung nodules on three-dimensional (3D) computed tomography images to improve the performance of computer-aided diagnosis (CAD) systems. The database used in this study consists of two data sets obtained from the Lung Imaging Database Consortium. The first data set, containing 23 nodules (22% irregular nodules, 13% nonsolid nodules, 17% nodules attached to other structures), was used for training. The second data set, containing 64 nodules (37% irregular nodules, 40% nonsolid nodules, 62% nodules attached to other structures), was used for testing. Two key techniques were developed in the segmentation algorithm: (1) a 3D extended dynamic programming model with a newly defined internal cost function based on the information between adjacent slices, allowing parameters to be adapted to each slice, and (2) a multidirection fusion technique, which exploits the complementary relationships among different directions to improve the final segmentation accuracy. The performance of this approach was evaluated by the overlap criterion, complemented by the true-positive fraction and false-positive fraction criteria. The mean values of the overlap, true-positive fraction, and false-positive fraction achieved using the segmentation scheme were 66%, 75%, and 15%, respectively, for the first data set, and 58%, 71%, and 22%, respectively, for the second data set. The experimental results indicate that this segmentation scheme achieves better performance for nodule segmentation than two existing algorithms reported in the literature. The proposed 3D extended dynamic programming model is an effective way to segment sequential images of lung nodules. The proposed multidirection fusion technique is capable of reducing segmentation errors, especially for slices containing no nodule and slices near the nodule ends, thus resulting in better overall performance.
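
    The three evaluation criteria above can be computed directly from binary masks. A small sketch (toy masks; the false-positive fraction is normalized by the reference size here, which is one common convention but an assumption about the paper's exact definition):

```python
import numpy as np

def nodule_metrics(seg, ref):
    """Overlap (intersection over union), true-positive fraction, and
    false-positive fraction of a segmentation against a reference mask."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    tp = np.logical_and(seg, ref).sum()
    fp = np.logical_and(seg, ~ref).sum()
    overlap = tp / np.logical_or(seg, ref).sum()
    return overlap, tp / ref.sum(), fp / ref.sum()

ref = np.zeros((8, 8), dtype=bool)
ref[2:6, 2:6] = True     # 16-pixel reference nodule
seg = np.zeros((8, 8), dtype=bool)
seg[2:6, 3:7] = True     # segmentation shifted right by one pixel
ov, tpf, fpf = nodule_metrics(seg, ref)   # 12/20, 12/16, 4/16
```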

  20. Radioactive 133-Xenon gas-filled balloon to prevent restenosis: dosimetry, efficacy, and safety considerations.

    PubMed

    Apple, Marc; Waksman, Ron; Chan, Rosanna C; Vodovotz, Yoram; Fournadjiev, Jana; Bass, Bill G

    2002-08-06

    Ionizing radiation administered intraluminally via catheter-based systems using solid beta and gamma sources or liquid-filled balloons has been shown to reduce neointima formation after injury in the porcine model. We propose a novel system that uses a 133-Xenon (133Xe) radioactive gas-filled balloon catheter. Overstretch balloon injury was performed in the coronary arteries of 33 domestic pigs. A novel 133Xe radioactive gas-filled balloon (3.5/45 mm) was positioned to overlap the injured segment with margins. After a vacuum was obtained in the balloon catheter, approximately 2.5 cc of 133Xe gas was injected to fill the balloon. Doses of 0, 7.5, 15, and 30 Gy were delivered to a distance of 0.25 mm from the balloon surface. The dwell time ranged from 1.0 to 4.0 minutes, depending on the dose. Localization of 133Xe in the balloon was verified by a gamma camera. The average activity in a 3.5/45-mm balloon was measured at 67.7+/-12.1 mCi, and the total diffusion loss was 0.26% of the injected dose per minute. Bedside radiation exposure measured between 2 and 6 mR/h, and the shallow dose equivalent was calculated as 0.037 mrem per treatment. Histomorphometric analysis at 2 weeks showed inhibition of the intimal area (intimal area corrected for medial fracture length [IA/FL]) in the irradiated segments: 0.26+/-0.08 with 30 Gy, 0.07+/-0.24 with 15 Gy, and 0.12+/-0.89 with 7.5 Gy, versus 0.76+/-0.08 for controls (P<0.001). The 133Xe gas-filled balloon is feasible and effective in reducing neointima formation in the porcine model and safe for use in coronary arteries.

  1. Methodology for the Evaluation of the Algorithms for Text Line Segmentation Based on Extended Binary Classification

    NASA Astrophysics Data System (ADS)

    Brodic, D.

    2011-01-01

    Text line segmentation is a key element of the optical character recognition process; hence, testing of text line segmentation algorithms has substantial relevance. Previously proposed testing methods deal mainly with a text database used as a template, which serves both for testing and for evaluating the text segmentation algorithm. In this manuscript, a methodology for evaluating text line segmentation algorithms based on extended binary classification is proposed. It is established on various multiline text samples linked with text segmentation, whose results are distributed according to a binary classification. The final result is obtained by comparative analysis of the cross-linked data. Its suitability for different types of scripts represents its main advantage.
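
    Evaluation by binary classification reduces to counting, per text line, correct detections, spurious detections, and misses, and then reporting standard scores. A minimal sketch (the counts below are invented; the paper's "extended" scheme adds structure beyond this basic version):

```python
def line_seg_scores(tp, fp, fn):
    """Precision, recall and F1 from per-line binary classification counts:
    tp = correctly segmented lines, fp = spurious lines, fn = missed lines."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# e.g. 45 correctly segmented lines, 5 spurious detections, 10 missed lines
p, r, f1 = line_seg_scores(45, 5, 10)
```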

  2. Automatic graph-cut based segmentation of bones from knee magnetic resonance images for osteoarthritis research.

    PubMed

    Ababneh, Sufyan Y; Prescott, Jeff W; Gurcan, Metin N

    2011-08-01

    In this paper, a new, fully automated, content-based system is proposed for knee bone segmentation from magnetic resonance images (MRI). The purpose of the bone segmentation is to support the discovery and characterization of imaging biomarkers for the incidence and progression of osteoarthritis, a debilitating joint disease that affects a large portion of the aging population. The segmentation algorithm includes a novel content-based, two-pass disjoint block discovery mechanism, which is designed to support automation, segmentation initialization, and post-processing. The block discovery is achieved by classifying the image content into bone and background blocks according to their similarity to the categories in training data collected from typical bone structures. The classified blocks are then used to design an efficient graph-cut based segmentation algorithm. This algorithm constructs a graph from the image pixel data and then applies a maximum-flow algorithm, which generates a minimum graph cut corresponding to an initial image segmentation. Content-based refinements and morphological operations are then applied to obtain the final segmentation. The proposed segmentation technique does not require any user interaction and can distinguish between bone and highly similar adjacent structures, such as fat tissue, with high accuracy. The performance of the proposed system was evaluated by testing it on 376 MR images from the Osteoarthritis Initiative (OAI) database. This database included a selection of single images containing the femur and tibia from 200 subjects with varying levels of osteoarthritis severity. Additionally, a full three-dimensional segmentation of the bones from ten subjects with 14 slices each, and synthetic images whose background has intensity and spatial characteristics similar to those of bone, were used to assess the robustness and consistency of the developed algorithm. The results show an automatic bone detection rate of 0.99 and an average segmentation accuracy of 0.95 using the Dice similarity index. Copyright © 2011 Elsevier B.V. All rights reserved.
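
    The core of any graph-cut segmenter is the max-flow/min-cut step: terminal links encode how well each pixel fits the foreground or background model, neighbor links penalize label changes, and the minimum cut yields the labeling. The sketch below shows only that generic step on a toy 1-D intensity profile (the class means and smoothness weight are made up; the paper's content-based block discovery is not reproduced):

```python
from collections import deque

def edmonds_karp(graph, s, t):
    """BFS-based max flow; returns the residual graph. graph: {u: {v: cap}}."""
    res = {u: dict(nbrs) for u, nbrs in graph.items()}
    for u in list(res):
        for v in list(res[u]):
            res.setdefault(v, {}).setdefault(u, 0)   # ensure reverse edges
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:                 # shortest augmenting path
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return res
        bott, v = float("inf"), t
        while parent[v] is not None:                 # bottleneck capacity
            bott, v = min(bott, res[parent[v]][v]), parent[v]
        v = t
        while parent[v] is not None:                 # push flow along the path
            u = parent[v]
            res[u][v] -= bott
            res[v][u] += bott
            v = u

def graph_cut_1d(intensities, mu_bg, mu_fg, lam):
    """Binary labeling of a 1-D profile by min cut."""
    g = {"s": {}, "t": {}}
    for i, x in enumerate(intensities):
        g[i] = {}
        g["s"][i] = abs(x - mu_bg)   # paid if pixel i is labeled background
        g[i]["t"] = abs(x - mu_fg)   # paid if pixel i is labeled foreground
        if i:
            g[i - 1][i] = lam        # smoothness between neighbors
            g[i][i - 1] = lam
    res = edmonds_karp(g, "s", "t")
    seen, q = {"s"}, deque(["s"])
    while q:                         # source side of the min cut = foreground
        u = q.popleft()
        for v, c in res[u].items():
            if c > 0 and v not in seen:
                seen.add(v)
                q.append(v)
    return [1 if i in seen else 0 for i in range(len(intensities))]

labels = graph_cut_1d([40, 55, 60, 190, 210, 205], mu_bg=50, mu_fg=200, lam=30)
```

    Production systems use specialized max-flow solvers on 2-D/3-D grids, but the energy being minimized has exactly this data-plus-smoothness form.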

  3. Left atrial appendage segmentation and quantitative assisted diagnosis of atrial fibrillation based on fusion of temporal-spatial information.

    PubMed

    Jin, Cheng; Feng, Jianjiang; Wang, Lei; Yu, Heng; Liu, Jiang; Lu, Jiwen; Zhou, Jie

    2018-05-01

    In this paper, we present an approach for fast multi-phase segmentation of the left atrial appendage (LAA) and quantitative assisted diagnosis of atrial fibrillation (AF) based on 4D-CT data. We take full advantage of the temporal dimension to segment the moving LAA using a parametric max-flow method and a graph-cut approach, building a 3-D model of each phase. To assist the diagnosis of AF, we calculate the volumes of the 3-D models and then generate a "volume-phase" curve from which the important dynamic metrics are computed: ejection fraction, filling flux, and emptying flux of the LAA's blood by volume. This approach yields more precise results than conventional approaches that calculate metrics by area, and allows quick analysis of LAA-volume pattern changes within a cardiac cycle. It may also provide insight into individual differences in LAA lesions. Furthermore, we apply support vector machines (SVMs) to achieve a quantitative auto-diagnosis of AF by exploiting seven features derived from volume change ratios of the LAA, and perform multivariate logistic regression analysis for the risk of LAA thrombosis. The 100 cases utilized in this research were acquired on a Philips 256-iCT scanner. The experimental results demonstrate that our approach constructs the 3-D LAA geometries robustly compared to manual annotations, and reasonably infers that the LAA undergoes filling, emptying and re-filling, re-emptying in a cardiac cycle. This research offers a potential means of exploring various physiological functions of the LAA and quantitatively estimating the risk of stroke in patients with AF. Copyright © 2018 Elsevier Ltd. All rights reserved.
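
    Once a per-phase volume curve is available, the dynamic metrics follow from simple differences along the curve. A sketch under assumed definitions (ejection fraction as the relative drop from maximum to minimum volume; filling/emptying flux as summed positive/negative volume increments — the paper's exact formulas may differ):

```python
def laa_volume_metrics(volumes):
    """Dynamic metrics from an LAA 'volume-phase' curve (one cardiac cycle)."""
    v_max, v_min = max(volumes), min(volumes)
    ef = (v_max - v_min) / v_max                     # ejection fraction
    filling = sum(b - a for a, b in zip(volumes, volumes[1:]) if b > a)
    emptying = sum(a - b for a, b in zip(volumes, volumes[1:]) if b < a)
    return ef, filling, emptying

# Hypothetical per-phase volumes in mL over one cycle
vols = [6.0, 7.5, 9.0, 8.0, 5.0, 4.5, 6.0]
ef, fill, empty = laa_volume_metrics(vols)
```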

  4. Collaborative SDOCT Segmentation and Analysis Software.

    PubMed

    Yun, Yeyi; Carass, Aaron; Lang, Andrew; Prince, Jerry L; Antony, Bhavna J

    2017-02-01

    Spectral domain optical coherence tomography (SDOCT) is routinely used in the management and diagnosis of a variety of ocular diseases. This imaging modality also finds widespread use in research, where quantitative measurements obtained from the images are used to track disease progression. In recent years, the number of available scanners and imaging protocols has grown, and there is a distinct absence of a unified tool capable of visualizing, segmenting, and analyzing the data. This is especially noteworthy in longitudinal studies, where data from older scanners and/or protocols may need to be analyzed. Here, we present a graphical user interface (GUI) that allows users to visualize and analyze SDOCT images obtained from two commonly used scanners. The retinal surfaces in the scans can be segmented using a previously described method, and the retinal layer thicknesses can be compared to a normative database. If necessary, the segmented surfaces can also be corrected and the changes applied. The interface also allows users to import and export retinal layer thickness data to an SQL database, thereby allowing for the collation of data from a number of collaborating sites.
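
    Exporting layer thicknesses to an SQL database can be as simple as a flat table keyed by subject, scan, and layer. A minimal sketch with Python's built-in sqlite3 (the schema, layer names, and values here are hypothetical, not the tool's actual format):

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # stand-in for a shared site database
conn.execute("""CREATE TABLE layer_thickness (
    subject_id TEXT, scan_date TEXT, layer TEXT, mean_um REAL)""")

rows = [("S001", "2016-01-10", "RNFL", 92.4),
        ("S001", "2016-01-10", "GCL", 81.7),
        ("S002", "2016-02-03", "RNFL", 88.9)]
conn.executemany("INSERT INTO layer_thickness VALUES (?, ?, ?, ?)", rows)

# e.g. pooled mean RNFL thickness across collaborating sites
mean_rnfl = conn.execute(
    "SELECT AVG(mean_um) FROM layer_thickness WHERE layer = 'RNFL'"
).fetchone()[0]
```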

  5. Partially Filled Aperture Interferometric Telescopes: Achieving Large Aperture and Coronagraphic Performance

    NASA Astrophysics Data System (ADS)

    Moretto, G.; Kuhn, J.; Langlois, M.; Berdugyna, S.; Tallon, M.

    2017-09-01

    Telescopes larger than the currently planned 30-m class instruments must break the mass-aperture scaling relationship of the Keck generation of multi-segmented telescopes. Partially filled aperture but highly redundant baseline interferometric instruments may achieve both large aperture and high dynamic range. The PLANETS FOUNDATION group has explored hybrid telescope-interferometer concepts for narrow-field optical systems that exhibit coronagraphic performance over narrow fields of view. This paper describes how the Colossus and Exo-Life Finder telescope designs achieve 10x lower moving masses than current Extremely Large Telescopes.

  6. Pangea break-up: from passive to active margin in the Colombian Caribbean Realm

    NASA Astrophysics Data System (ADS)

    Gómez, Cristhian; Kammer, Andreas

    2017-04-01

    The break-up of Western Pangea led to a back-arc-type tectonic setting along the periphery of Gondwana, with the generation of syn-rift basins filled with sedimentary and volcanic sequences during the Middle to Late Triassic. The Indios and Corual formations in the Santa Marta massif of the Northern Andes were deposited in this setting. In this contribution we elaborate a stratigraphic model for the Indios and Corual formations based on the description and classification of sedimentary facies and their architecture, together with a provenance analysis. Furthermore, geotectonic environments are postulated for the volcanic and volcanoclastic rocks of both units. The Indios Formation is a shallow-marine syn-rift basin fill and contains gravity-flow deposits. This unit is divided into three segments; the lower and upper segments are related to fan-deltas, while the middle segment is associated with offshore deposits containing lobe incursions of submarine fans. The volcanoclastic and volcanic rocks of the Indios and Corual formations are bimodal in composition and are associated with alkaline basalts. Volcanogenic deposits comprise debris, pyroclastic, and lava flows from both effusive and explosive eruptions. Together these units record multiple phases of rifting and reveal a first stage in the break-up of Pangea during the Middle and Late Triassic in northern Colombia.

  7. Atlas-based fuzzy connectedness segmentation and intensity nonuniformity correction applied to brain MRI.

    PubMed

    Zhou, Yongxin; Bai, Jing

    2007-01-01

    A framework that combines atlas registration, fuzzy connectedness (FC) segmentation, and parametric bias field correction (PABIC) is proposed for the automatic segmentation of brain magnetic resonance imaging (MRI). First, the atlas is registered onto the MRI to initialize the subsequent FC segmentation, and original techniques are proposed to estimate the necessary initial parameters of the FC segmentation. The result of the FC segmentation is then used to initialize the PABIC algorithm. Finally, we re-apply the FC technique to the PABIC-corrected MRI to obtain the final segmentation. Thus, we avoid expert human intervention and provide a fully automatic method for brain MRI segmentation. Experiments on both simulated and real MR images demonstrate the validity of the method as well as its limitations. Being fully automatic, it is expected to find wide application in areas such as three-dimensional visualization, radiation therapy planning, and medical database construction.

  8. Method to acquire regions of fruit, branch and leaf from image of red apple in orchard

    NASA Astrophysics Data System (ADS)

    Lv, Jidong; Xu, Liming

    2017-07-01

    This work proposed a method to acquire the fruit, branch, and leaf regions from images of red apples in an orchard. To acquire the fruit image, the R-G difference image was extracted from the RGB image and processed by erosion, hole filling, small-region removal, dilation, and an opening operation, in that order; the fruit image was then obtained by threshold segmentation. To acquire the leaf image, the fruit image was subtracted from the RGB image before extracting the 2G-R-B image; the leaf image was then obtained by small-region removal and threshold segmentation. To acquire the branch image, dynamic threshold segmentation was conducted on the R-G image; the segmented image was added to the fruit image, and the result, together with the leaf image, was subtracted from the RGB image. Finally, the branch image was obtained by an opening operation, small-region removal, and threshold segmentation after extracting the R-G image from the subtracted image. Compared with previous methods, this method acquires more complete fruit, leaf, and branch images from red apple images.
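
    The first step, picking out red fruit pixels from the R-G difference image, can be sketched with a simple channel difference and threshold (the fixed threshold and toy colors below are assumptions standing in for whatever threshold-selection rule the authors used):

```python
import numpy as np

def rg_fruit_mask(rgb, thresh=40):
    """Threshold the R-G difference image to pick out red (fruit) pixels."""
    r = rgb[..., 0].astype(int)   # int avoids uint8 wrap-around on subtraction
    g = rgb[..., 1].astype(int)
    return (r - g) > thresh

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = [200, 60, 40]    # red "apple" patch: R - G = 140
img[0, :] = [70, 120, 60]        # green "leaf" row:  R - G = -50
mask = rg_fruit_mask(img)        # only the 2x2 apple patch survives
```

    In the full pipeline this binary mask would then go through erosion, hole filling, small-region removal, dilation, and opening before the final threshold segmentation.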

  9. Feasibility Study on a Segmented Ferrofluid Flow Linear Generator for Increasing the Time-Varying Magnetic Flux.

    PubMed

    Lee, Won-Ho; Lee, Se-Hee; Lee, Sangyoup; Lee, Jong-Chul

    2018-09-01

    Nanoparticles and nanofluids have been implemented in energy harvesting devices, and energy harvesting based on magnetic nanofluid flow was recently achieved by using a layer-built magnet and micro-bubble injection to induce a voltage on the order of 10⁻¹ mV. However, this is not yet sufficient for commercial purposes. In order to further increase the electric voltage and current from this kind of energy harvester, the air bubbles must be segmented in the base fluid, and the magnetic flux of the segmented flow must change substantially over time. The focus of this research is the development of a segmented ferrofluid flow linear generator that scavenges electrical power from waste heat. Experiments were conducted to obtain the induced voltage, which was generated by moving a ferrofluid-filled capsule inside a multi-turn coil. Computations were then performed to explain the fundamental physical basis of the motion of the segmented flow of the ferrofluids and the air layers.
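
    The induced voltage follows Faraday's law: the EMF is proportional to the rate of change of flux linkage as the magnetized segments pass through the coil, which is why a strongly time-varying flux matters. A sketch with finite differences (the flux samples and turn count are invented for illustration):

```python
def induced_emf(flux, dt, turns=200):
    """EMF from Faraday's law, e = -N * dPhi/dt, via finite differences.

    flux: magnetic flux samples in Wb; dt: sample spacing in s."""
    return [-turns * (b - a) / dt for a, b in zip(flux, flux[1:])]

# Flux rising as a ferrofluid segment enters the coil, then holding steady
flux = [0.0, 1e-6, 3e-6, 3e-6]
emf = induced_emf(flux, dt=0.01, turns=200)   # volts per interval
```

    Note the last interval, with constant flux, induces no voltage at all — segmenting the flow is what keeps dPhi/dt nonzero.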

  10. Epidermis area detection for immunofluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Dovganich, Andrey; Krylov, Andrey; Nasonov, Andrey; Makhneva, Natalia

    2018-04-01

    We propose a novel image segmentation method for immunofluorescence microscopy images of skin tissue, for the diagnosis of various skin diseases. The segmentation is based on machine learning algorithms. The feature vector is filled with three groups of features: statistical features, Laws' texture energy measures, and local binary patterns. The images are preprocessed for better learning. Several machine learning algorithms were evaluated, and the best results were obtained with the random forest algorithm. We use the proposed method to detect the epidermis region as part of a pemphigus diagnosis system.
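
    Of the three feature groups, local binary patterns are the easiest to show compactly: each pixel gets an 8-bit code from thresholding its 3x3 neighborhood against the center. A minimal sketch (basic LBP with an assumed clockwise bit order; the paper may use a rotation-invariant or multi-radius variant):

```python
import numpy as np

def lbp_code(patch):
    """8-bit local binary pattern of a 3x3 patch, clockwise from top-left."""
    c = patch[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if patch[i, j] >= c else 0 for i, j in order]
    return sum(b << k for k, b in enumerate(bits))

patch = np.array([[9, 8, 7],
                  [1, 5, 6],
                  [2, 3, 4]])
code = lbp_code(patch)   # top and right neighbors >= 5 -> bits 0..3 set -> 15
```

    A histogram of these codes over an image region is what actually enters the feature vector for the random forest.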

  11. Age-specific MRI templates for pediatric neuroimaging

    PubMed Central

    Sanchez, Carmen E.; Richards, John E.; Almli, C. Robert

    2012-01-01

    This study created a database of pediatric age-specific MRI brain templates for normalization and segmentation. Participants included children from 4.5 through 19.5 years, totaling 823 scans from 494 subjects. Open-source processing programs (FSL, SPM, ANTS) constructed head, brain and segmentation templates in 6 month intervals. The tissue classification (WM, GM, CSF) showed changes over age similar to previous reports. A volumetric analysis of age-related changes in WM and GM based on these templates showed expected increase/decrease pattern in GM and an increase in WM over the sampled ages. This database is available for use for neuroimaging studies (blindedforreview). PMID:22799759

  12. High temperature in-situ observations of multi-segmented metal nanowires encapsulated within carbon nanotubes by in-situ filling technique.

    PubMed

    Hayashi, Yasuhiko; Tokunaga, Tomoharu; Iijima, Toru; Iwata, Takuya; Kalita, Golap; Tanemura, Masaki; Sasaki, Katsuhiro; Kuroda, Kotaro

    2012-08-08

    Multi-segmented one-dimensional metal nanowires were encapsulated within carbon nanotubes (CNTs) through an in-situ filling technique during a plasma-enhanced chemical vapor deposition process. Transmission electron microscopy (TEM) and environmental TEM were employed to characterize the as-prepared sample at room temperature and at high temperature. Selected-area electron diffraction revealed that a Pd4Si nanowire, and a face-centered-cubic Co nanowire on top of a Pd nanowire, were encapsulated within the bottom and tip parts of the multiwall CNT, respectively. Although strain-induced deformation of the graphite walls was observed, the solid-state phases of Pd4Si and Co-Pd remained even above their expected melting temperatures, up to 1,550 ± 50°C. Finally, the encapsulated metals melted and flowed out from the tip of the CNT after 2 h at the same temperature, owing to the increase in the internal pressure of the CNT.

  13. Dentalmaps: Automatic Dental Delineation for Radiotherapy Planning in Head-and-Neck Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thariat, Juliette, E-mail: jthariat@hotmail.com; Ramus, Liliane; INRIA

    Purpose: To propose an automatic atlas-based segmentation framework of the dental structures, called Dentalmaps, and to assess its accuracy and relevance to guide dental care in the context of intensity-modulated radiotherapy. Methods and Materials: A multi-atlas-based segmentation, less sensitive to artifacts than previously published head-and-neck segmentation methods, was used. The manual segmentations of a 21-patient database were first deformed onto the query using nonlinear registrations with the training images and then fused to estimate the consensus segmentation of the query. Results: The framework was evaluated with a leave-one-out protocol. The maximum doses estimated using manual contours were considered as ground truth and compared with the maximum doses estimated using automatic contours. The dose estimation error was within 2-Gy accuracy in 75% of cases (with a median of 0.9 Gy), whereas it was within 2-Gy accuracy in only 30% of cases with the visual estimation method without any contour, which is the routine practice procedure. Conclusions: Dose estimates using this framework were more accurate than visual estimates without dental contours. Dentalmaps represents a useful documentation and communication tool between radiation oncologists and dentists in routine practice. Prospective multicenter assessment is underway on patients extrinsic to the database.

  14. Predicting missing values in a home care database using an adaptive uncertainty rule method.

    PubMed

    Konias, S; Gogou, G; Bamidis, P D; Vlahavas, I; Maglaveras, N

    2005-01-01

    Contemporary literature offers an abundance of adaptive algorithms for mining association rules. However, most of it does not deal with peculiarities, such as missing values and dynamic data creation, that are frequently encountered in fields like medicine. This paper proposes an uncertainty rule method that uses an adaptive threshold for filling missing values in newly added records. A new approach for mining uncertainty rules and filling missing values is proposed that is particularly suitable for dynamic databases, like the ones used in home care systems. In this study, a new data mining method named FiMV (Filling Missing Values) is presented, based on the mined uncertainty rules. Uncertainty rules have a structure quite similar to association rules and are extracted by an algorithm proposed in previous work, namely AURG (Adaptive Uncertainty Rule Generation). The main target was to implement an appropriate method for recovering missing values in a dynamic database, where new records are continuously added, without needing to specify any thresholds beforehand. The method was applied to a home care monitoring system database. Multiple missing values were randomly introduced into each record's attributes (at rates of 5-20%, in 5% increments) in the initial dataset. FiMV demonstrated 100% completion rates with over 90% success in each case, while usual approaches, in which all records with missing values are ignored or thresholds are required, experienced significantly reduced completion and success rates. It is concluded that the proposed method is appropriate for the data-cleaning step of the knowledge discovery process in databases, a step that carries much significance for the output efficiency of any data mining technique and can improve the quality of the mined information.
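
    The core idea — fill a missing field using the most confident rule whose antecedent matches the record — can be sketched as follows. The rule structure, field names, and the fixed confidence cutoff are all assumptions for illustration; AURG's adaptive threshold is not reproduced here:

```python
def fill_missing(record, rules, min_confidence=0.8):
    """Fill None fields using the most confident matching rule.

    rules: list of (antecedent_dict, (field, value), confidence)."""
    filled = dict(record)
    for antecedent, (field, value), conf in sorted(rules, key=lambda r: -r[2]):
        if conf < min_confidence or filled.get(field) is not None:
            continue                      # below cutoff, or already filled
        if all(filled.get(k) == v for k, v in antecedent.items()):
            filled[field] = value
    return filled

# Hypothetical uncertainty rules mined from a home care database
rules = [({"mobility": "low"}, ("risk", "high"), 0.9),
         ({"age_group": "40-60"}, ("risk", "medium"), 0.7)]
rec = {"age_group": "40-60", "mobility": "low", "risk": None}
out = fill_missing(rec, rules)   # the 0.9-confidence rule wins
```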

  15. A multifractal approach to space-filling recovery for PET quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willaime, Julien M. Y., E-mail: julien.willaime@siemens.com; Aboagye, Eric O.; Tsoumpas, Charalampos

    2014-11-01

    Purpose: A new image-based methodology is developed for estimating the apparent space-filling properties of an object of interest in PET imaging, without need for a robust segmentation step, and used to recover accurate estimates of total lesion activity (TLA). Methods: A multifractal approach and the fractal dimension are proposed to recover the apparent space-filling index of a lesion (tumor volume, TV) embedded in nonzero background. A practical implementation is proposed, and the index is subsequently used with the mean standardized uptake value (SUVmean) to correct TLA estimates obtained from approximate lesion contours. The methodology is illustrated on fractal and synthetic objects contaminated by partial volume effects (PVEs), validated on realistic 18F-fluorodeoxyglucose PET simulations, and tested for its robustness using a clinical 18F-fluorothymidine PET test-retest dataset. Results: TLA estimates were stable for a range of resolutions typical in PET oncology (4-6 mm). By contrast, the space-filling index and intensity estimates were resolution dependent. TLA was generally recovered within 15% of ground truth on postfiltered PET images affected by PVEs. Volumes were recovered within 15% variability in the repeatability study. Results indicated that TLA is a more robust index than other traditional metrics such as SUVmean or TV measurements across imaging protocols. Conclusions: The fractal procedure reported here is proposed as a simple and effective computational alternative to existing methodologies, which require the incorporation of image preprocessing steps (i.e., partial volume correction and automatic segmentation) prior to quantification.
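
    The classic way to estimate a space-filling (fractal) index is box counting: cover the object with boxes of decreasing size and fit the slope of log(count) against log(1/size). A minimal 2-D sketch (the paper's multifractal formulation is richer than this single-exponent version):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8)):
    """Estimate the box-counting dimension of a square 2-D binary mask."""
    n = mask.shape[0]
    counts = []
    for s in sizes:
        c = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                if mask[i:i + s, j:j + s].any():   # box touches the object
                    c += 1
        counts.append(c)
    # slope of log(count) versus log(1/size) gives the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# A completely filled square should have dimension 2
filled = np.ones((16, 16), dtype=bool)
d = box_counting_dimension(filled)
```

    A lesion that fills space less completely than a solid blob yields a dimension below 2 (below 3 in the volumetric case), which is the index used to correct the TLA estimate.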

  16. DR HAGIS-a fundus image database for the automatic extraction of retinal surface vessels from diabetic patients.

    PubMed

    Holm, Sven; Russell, Greg; Nourrit, Vincent; McLoughlin, Niall

    2017-01-01

    A database of retinal fundus images, the DR HAGIS database, is presented. This database consists of 39 high-resolution color fundus images obtained from a diabetic retinopathy screening program in the UK. The NHS screening program uses service providers that employ different fundus and digital cameras. This results in a range of different image sizes and resolutions. Furthermore, patients enrolled in such programs often display other comorbidities in addition to diabetes. Therefore, in an effort to replicate the normal range of images examined by grading experts during screening, the DR HAGIS database consists of images of varying image sizes and resolutions and four comorbidity subgroups, collectively defined as the Diabetic Retinopathy, Hypertension, Age-related macular degeneration and Glaucoma Image Set (DR HAGIS). For each image, the vasculature has been manually segmented to provide a realistic set of images on which to test automatic vessel extraction algorithms. Modified versions of two previously published vessel extraction algorithms were applied to this database to provide some baseline measurements. A method based purely on the intensity of image pixels resulted in a mean segmentation accuracy of 95.83% ([Formula: see text]), whereas an algorithm based on Gabor filters generated an accuracy of 95.71% ([Formula: see text]).

  17. Poster — Thur Eve — 59: Atlas Selection for Automated Segmentation of Pelvic CT for Prostate Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mallawi, A; Farrell, T; Diamond, K

    2014-08-15

    Automated atlas-based segmentation has recently been evaluated for use in planning prostate cancer radiotherapy. In the typical approach, the essential step is the selection of an atlas from a database that best matches the target image. This work proposes an atlas selection strategy and evaluates its impact on the final segmentation accuracy. Prostate length (PL), right femoral head diameter (RFHD), and left femoral head diameter (LFHD) were measured in CT images of 20 patients. Each subject was then taken as the target image to which all remaining 19 images were affinely registered. For each pair of registered images, the overlap between prostate and femoral head contours was quantified using the Dice Similarity Coefficient (DSC). Finally, we designed an atlas selection strategy that computed the ratio of PL (prostate segmentation), RFHD (right femur segmentation), and LFHD (left femur segmentation) between the target subject and each subject in the atlas database. Five atlas subjects yielding ratios nearest to one were then selected for further analysis. RFHD and LFHD were excellent parameters for atlas selection, achieving a mean femoral head DSC of 0.82 ± 0.06. PL had a moderate ability to select the most similar prostate, with a mean DSC of 0.63 ± 0.18. The DSC values obtained with the proposed selection method were slightly lower than the maxima established using brute force, but this does not include potential improvements expected with deformable registration. Atlas selection based on PL for prostate and femoral diameter for femoral heads provides reasonable segmentation accuracy.
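The two quantities driving this study, the Dice Similarity Coefficient and the ratio-nearest-one selection rule, are simple to state in code. This is a hedged sketch with hypothetical atlas names, not the authors' pipeline:

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient between two binary masks:
    2|A∩B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def select_atlases(target_len, atlas_lens, k=5):
    """Pick the k atlases whose measurement ratio to the target is
    nearest one, mirroring the PL/RFHD/LFHD selection rule."""
    ratios = {name: length / target_len for name, length in atlas_lens.items()}
    return sorted(ratios, key=lambda n: abs(ratios[n] - 1.0))[:k]
```

The same `select_atlases` call would be run three times in the paper's setting, once per measurement (PL, RFHD, LFHD), each feeding its own structure's segmentation.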

  18. Waste tire and shingle scrap/bituminous paving test sections on the Munger Recreational Trail Gateway segment. interim report

    DOT National Transportation Integrated Search

    1991-02-01

    The need to reduce Minnesota's dependence on land fills resulted in a unique cooperative venture by three state agencies. A partnership was forged between the Minnesota Pollution Control Agency (MPCA), the Minnesota Department of Natural Resources (D...

  19. Dictionary learning-based CT detection of pulmonary nodules

    NASA Astrophysics Data System (ADS)

    Wu, Panpan; Xia, Kewen; Zhang, Yanbo; Qian, Xiaohua; Wang, Ge; Yu, Hengyong

    2016-10-01

    Segmentation of lung features is one of the most important steps for computer-aided detection (CAD) of pulmonary nodules with computed tomography (CT). However, irregular shapes, complicated anatomical background and poor pulmonary nodule contrast make CAD a very challenging problem. Here, we propose a novel scheme for feature extraction and classification of pulmonary nodules through dictionary learning from training CT images, which does not require accurately segmented pulmonary nodules. Specifically, two classification-oriented dictionaries and one background dictionary are learnt to solve a two-category problem. In terms of the classification-oriented dictionaries, we calculate sparse coefficient matrices to extract intrinsic features for pulmonary nodule classification. The support vector machine (SVM) classifier is then designed to optimize the performance. Our proposed methodology is evaluated with the lung image database consortium and image database resource initiative (LIDC-IDRI) database, and the results demonstrate that the proposed strategy is promising.

  20. Sedimentation in the central segment of the Aleutian Trench: Sources, transport, and depositional style

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevenson, A.J.; Scholl, D.W.; Vallier, T.L.

    1990-05-01

    The central segment of the Aleutian Trench (162{degree}W to 175{degree}E) is an intraoceanic subduction zone that contains an anomalously thick sedimentary fill (4 km maximum). The fill is an arcward-thickening and slightly tilted wedge of sediment characterized acoustically by laterally continuous, closely spaced, parallel reflectors. These relations are indicative of turbidite deposition. The trench floor and reflection horizons are planar, showing no evidence of an axial channel or any transverse fan bodies. Cores of surface sediment recover turbidite layers, implying that sediment transport and deposition occur via diffuse, sheetlike, fine-grained turbidite flows that occupy the full width of the trench. The mineralogy of Holocene trench sediments documents a mixture of island-arc (dominant) and continental source terranes. GLORIA side-scan sonar images reveal a westward-flowing axial trench channel that conducts sediment to the eastern margin of the central segment, where channelized flow ceases. Much of the sediment transported in this channel is derived from glaciated drainages surrounding the Gulf of Alaska which empty into the eastern trench segment via deep-sea channel systems (Surveyor and others) and submarine canyons (Hinchinbrook and others). Insular sediment transport is more difficult to define. GLORIA images show the efficiency with which the actively growing accretionary wedge impounds sediment that manages to cross a broad fore-arc terrace. It is likely that island-arc sediment reaches the trench either directly via air fall, via recycling of the accretionary prism, or via overtopping of the accretionary ridges by the upper parts of thick turbidite flows.

  1. SS-Wrapper: a package of wrapper applications for similarity searches on Linux clusters.

    PubMed

    Wang, Chunlin; Lefkowitz, Elliot J

    2004-10-28

    Large-scale sequence comparison is a powerful tool for biological inference in modern molecular biology. Comparing new sequences to those in annotated databases is a useful source of functional and structural information about these sequences. Using software such as the basic local alignment search tool (BLAST) or HMMPFAM to identify statistically significant matches between newly sequenced segments of genetic material and those in databases is an important task for most molecular biologists. Searching algorithms are intrinsically slow and data-intensive, especially in light of the rapid growth of biological sequence databases due to the emergence of high throughput DNA sequencing techniques. Thus, traditional bioinformatics tools are impractical on PCs and even on dedicated UNIX servers. To take advantage of larger databases and more reliable methods, high performance computation becomes necessary. We describe the implementation of SS-Wrapper (Similarity Search Wrapper), a package of wrapper applications that can parallelize similarity search applications on a Linux cluster. Our wrapper utilizes a query segmentation-search (QS-search) approach to parallelize sequence database search applications. It takes into consideration load balancing between each node on the cluster to maximize resource usage. QS-search is designed to wrap many different search tools, such as BLAST and HMMPFAM using the same interface. This implementation does not alter the original program, so newly obtained programs and program updates should be accommodated easily. Benchmark experiments using QS-search to optimize BLAST and HMMPFAM showed that QS-search accelerated the performance of these programs almost linearly in proportion to the number of CPUs used. We have also implemented a wrapper that utilizes a database segmentation approach (DS-BLAST) that provides a complementary solution for BLAST searches when the database is too large to fit into the memory of a single node. 
Used together, QS-search and DS-BLAST provide a flexible solution to adapt sequential similarity searching applications in high performance computing environments. Their ease of use and their ability to wrap a variety of database search programs provide an analytical architecture to assist both the seasoned bioinformaticist and the wet-bench biologist.
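The QS-search idea of balancing query workload across cluster nodes can be sketched as a greedy bin-packing step. This toy version (our own simplification, not the SS-Wrapper code) assigns each sequence to the currently least-loaded node, using total residue count as the load measure:

```python
def split_queries(seqs, n_nodes):
    """Greedy query segmentation: assign each sequence to the node
    with the least total residue count so far, so every node gets a
    roughly equal share of the search workload."""
    bins = [[] for _ in range(n_nodes)]
    loads = [0] * n_nodes
    # longest-first assignment tightens the balance
    for name, seq in sorted(seqs.items(), key=lambda kv: -len(kv[1])):
        i = loads.index(min(loads))
        bins[i].append(name)
        loads[i] += len(seq)
    return bins, loads
```

Each bin would then be written out as a FASTA chunk and searched independently, which is why the speedup reported for QS-search scales almost linearly with the number of CPUs.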

  2. SS-Wrapper: a package of wrapper applications for similarity searches on Linux clusters

    PubMed Central

    Wang, Chunlin; Lefkowitz, Elliot J

    2004-01-01

    Background Large-scale sequence comparison is a powerful tool for biological inference in modern molecular biology. Comparing new sequences to those in annotated databases is a useful source of functional and structural information about these sequences. Using software such as the basic local alignment search tool (BLAST) or HMMPFAM to identify statistically significant matches between newly sequenced segments of genetic material and those in databases is an important task for most molecular biologists. Searching algorithms are intrinsically slow and data-intensive, especially in light of the rapid growth of biological sequence databases due to the emergence of high throughput DNA sequencing techniques. Thus, traditional bioinformatics tools are impractical on PCs and even on dedicated UNIX servers. To take advantage of larger databases and more reliable methods, high performance computation becomes necessary. Results We describe the implementation of SS-Wrapper (Similarity Search Wrapper), a package of wrapper applications that can parallelize similarity search applications on a Linux cluster. Our wrapper utilizes a query segmentation-search (QS-search) approach to parallelize sequence database search applications. It takes into consideration load balancing between each node on the cluster to maximize resource usage. QS-search is designed to wrap many different search tools, such as BLAST and HMMPFAM using the same interface. This implementation does not alter the original program, so newly obtained programs and program updates should be accommodated easily. Benchmark experiments using QS-search to optimize BLAST and HMMPFAM showed that QS-search accelerated the performance of these programs almost linearly in proportion to the number of CPUs used. 
We have also implemented a wrapper that utilizes a database segmentation approach (DS-BLAST) that provides a complementary solution for BLAST searches when the database is too large to fit into the memory of a single node. Conclusions Used together, QS-search and DS-BLAST provide a flexible solution to adapt sequential similarity searching applications in high performance computing environments. Their ease of use and their ability to wrap a variety of database search programs provide an analytical architecture to assist both the seasoned bioinformaticist and the wet-bench biologist. PMID:15511296

  3. Automatic segmentation of solitary pulmonary nodules based on local intensity structure analysis and 3D neighborhood features in 3D chest CT images

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

    2012-03-01

    This paper presents a solitary pulmonary nodule (SPN) segmentation method based on local intensity structure analysis and neighborhood feature analysis in chest CT images. Automated segmentation of SPNs is desirable for a chest computer-aided detection/diagnosis (CAD) system since a SPN may indicate an early stage of lung cancer. Due to the similar intensities of SPNs and other chest structures such as blood vessels, many false positives (FPs) are generated by nodule detection methods. To reduce such FPs, we introduce two features that analyze the relation between each segmented nodule candidate and its neighborhood region. The proposed method utilizes a blob-like structure enhancement (BSE) filter based on Hessian analysis to augment the blob-like structures as initial nodule candidates. Then a fine segmentation is performed to segment a much more accurate region of each nodule candidate. FP reduction is mainly addressed by investigating two neighborhood features based on volume ratio and eigenvector of Hessian that are calculated from the neighborhood region of each nodule candidate. We evaluated the proposed method using 40 chest CT images, including 20 standard-dose CT images randomly chosen from a local database and 20 low-dose CT images randomly chosen from a public database (LIDC). The experimental results revealed that the average TP rate of the proposed method was 93.6% with 12.3 FPs/case.
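A toy 2-D analogue of Hessian-based blob-like structure enhancement can be written using the closed-form eigenvalues of the 2x2 intensity Hessian; this is an illustrative sketch only, not the authors' 3-D BSE filter:

```python
import numpy as np

def blobness(img):
    """Toy 2-D blob-like structure enhancement: at each pixel take the
    eigenvalues l1 <= l2 of the intensity Hessian and respond where
    both are negative (a bright blob), with strength sqrt(l1 * l2)."""
    gy, gx = np.gradient(img.astype(float))
    hyy, hyx = np.gradient(gy)   # second derivatives via finite differences
    hxy, hxx = np.gradient(gx)
    # eigenvalues of the symmetric 2x2 Hessian, closed form
    tr = hxx + hyy
    det = hxx * hyy - hxy * hyx
    disc = np.sqrt(np.maximum(tr * tr - 4 * det, 0.0))
    l1, l2 = (tr - disc) / 2, (tr + disc) / 2
    return np.where((l1 < 0) & (l2 < 0),
                    np.sqrt(np.maximum(l1 * l2, 0.0)), 0.0)
```

On a synthetic Gaussian bright spot the response peaks at the blob center and vanishes on the flat background, which is the behavior the BSE filter exploits to propose nodule candidates.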

  4. Emergence of Convolutional Neural Network in Future Medicine: Why and How. A Review on Brain Tumor Segmentation

    NASA Astrophysics Data System (ADS)

    Alizadeh Savareh, Behrouz; Emami, Hassan; Hajiabadi, Mohamadreza; Ghafoori, Mahyar; Majid Azimi, Seyed

    2018-03-01

    Manual analysis of brain tumor magnetic resonance images is usually problematic, and several techniques have been proposed for brain tumor segmentation. This study searches popular databases for related studies and surveys the theoretical and practical aspects of Convolutional Neural Networks in brain tumor segmentation. Based on our findings, details of the related studies, including the datasets used, evaluation parameters, preferred architectures, and complementary steps, are analyzed. Deep learning, a revolutionary idea in image processing, has achieved brilliant results in brain tumor segmentation as well. This trend is likely to continue until the next revolutionary idea emerges.

  5. GenomeVista

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poliakov, Alexander; Couronne, Olivier

    2002-11-04

    Aligning large vertebrate genomes that are structurally complex poses a variety of problems not encountered on smaller scales. Such genomes are rich in repetitive elements and contain multiple segmental duplications, which increases the difficulty of identifying true orthologous DNA segments in alignments. The sizes of the sequences make many alignment algorithms designed for comparing single proteins extremely inefficient when processing large genomic intervals. We integrated both local and global alignment tools and developed a suite of programs for automatically aligning large vertebrate genomes and identifying conserved non-coding regions in the alignments. Our method uses the BLAT local alignment program to find anchors on the base genome to identify regions of possible homology for a query sequence. These regions are postprocessed to find the best candidates, which are then globally aligned using the AVID global alignment program. In the last step conserved non-coding segments are identified using VISTA. Our methods are fast and the resulting alignments exhibit a high degree of sensitivity, covering more than 90% of known coding exons in the human genome. The GenomeVISTA software is a suite of Perl programs that is built on a MySQL database platform. The scheduler gets control data from the database, builds a queue of jobs, and dispatches them to a PC cluster for execution. The main program, running on each node of the cluster, processes individual sequences. A Perl library acts as an interface between the database and the above programs. The use of a separate library allows the programs to function independently of the database schema. The library also improves on the standard Perl MySQL database interface package by providing auto-reconnect functionality and improved error handling.

  6. Multiple Object Retrieval in Image Databases Using Hierarchical Segmentation Tree

    ERIC Educational Resources Information Center

    Chen, Wei-Bang

    2012-01-01

    The purpose of this research is to develop a new visual information analysis, representation, and retrieval framework for automatic discovery of salient objects of user's interest in large-scale image databases. In particular, this dissertation describes a content-based image retrieval framework which supports multiple-object retrieval. The…

  7. Aerosol Jet Deposition of Ceramic Thin Films for Electromechanical Applications

    DTIC Science & Technology

    2012-03-01

    Ethyl Cellulose (EC) was utilized as a binder and plasticizer constituent. Inks were prepared by the addition of powders to the combination of... [table of NiO/LSM ink compositions with Disperbyk-111, Solsperse-3000, Dispex-A40 and Ethyl Cellulose fractions not recoverable] ...pattern filling was done by using a spiral fill (Fig. 3d, e, f). For this, a decagonal pattern was generated using AutoCAD® polyline segments

  8. Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification.

    PubMed

    Soares, João V B; Leandro, Jorge J G; Cesar Júnior, Roberto M; Jelinek, Herbert F; Cree, Michael J

    2006-09-01

    We present a method for automated segmentation of the vasculature in retinal images. The method produces segmentations by classifying each image pixel as vessel or nonvessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and two-dimensional Gabor wavelet transform responses taken at multiple scales. The Gabor wavelet is capable of tuning to specific frequencies, thus allowing noise filtering and vessel enhancement in a single step. We use a Bayesian classifier with class-conditional probability density functions (likelihoods) described as Gaussian mixtures, yielding a fast classification, while being able to model complex decision surfaces. The probability distributions are estimated based on a training set of labeled pixels obtained from manual segmentations. The method's performance is evaluated on the publicly available DRIVE (Staal et al., 2004) and STARE (Hoover et al., 2000) databases of manually labeled images. On the DRIVE database, it achieves an area under the receiver operating characteristic curve of 0.9614, slightly superior to that of state-of-the-art approaches. We are making our implementation available as open source MATLAB scripts for researchers interested in implementation details, evaluation, or development of methods.

  9. Ultrasensitive Mach-Zehnder Interferometric Temperature Sensor Based on Liquid-Filled D-Shaped Fiber Cavity.

    PubMed

    Zhang, Hui; Gao, Shecheng; Luo, Yunhan; Chen, Zhenshi; Xiong, Songsong; Wan, Lei; Huang, Xincheng; Huang, Bingsen; Feng, Yuanhua; He, Miao; Liu, Weiping; Chen, Zhe; Li, Zhaohui

    2018-04-17

    A liquid-filled D-shaped fiber (DF) cavity serving as an in-fiber Mach–Zehnder interferometer (MZI) has been proposed and experimentally demonstrated for temperature sensing with ultrahigh sensitivity. The miniature MZI is constructed by splicing a segment of DF between two single-mode fibers (SMFs) to form a microcavity (MC) for filling and replacement of various refractive index (RI) liquids. By adjusting the effective RI difference between the DF and MC (the two interference arms), experimental and calculated results indicate that the interference spectra show different degrees of temperature dependence. As the effective RI of the liquid-filled MC approaches that of the DF, temperature sensitivity up to −84.72 nm/°C with a linear correlation coefficient of 0.9953 has been experimentally achieved for a device with the MC length of 456 μm, filled with liquid RI of 1.482. Apart from ultrahigh sensitivity, the proposed MCMZI device possesses additional advantages of its miniature size and simple configuration; these features make it promising and competitive in various temperature sensing applications, such as consumer electronics, biological treatments, and medical diagnosis.

  10. Denoising and segmentation of retinal layers in optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Dash, Puspita; Sigappi, A. N.

    2018-04-01

    Optical Coherence Tomography (OCT) is an imaging technique used to localize the intra-retinal boundaries for the diagnosis of macular diseases. Due to speckle noise and low image contrast, accurate segmentation of individual retinal layers is difficult. Therefore, a method for retinal layer segmentation from OCT images is presented. This paper proposes a pre-processing filtering approach for denoising and a graph-based technique for segmenting retinal layers in OCT images. These techniques are used for segmentation of retinal layers for normal subjects as well as patients with Diabetic Macular Edema. An algorithm based on gradient information and shortest-path search is applied to optimize the edge selection. In this paper the four main layers of the retina are segmented, namely the internal limiting membrane (ILM), retinal pigment epithelium (RPE), inner nuclear layer (INL) and outer nuclear layer (ONL). The proposed method is applied on a database of OCT images of ten normal and twenty DME-affected patients, and the results are found to be promising.
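The shortest-path boundary search used in graph-based layer segmentation is often implemented as column-wise dynamic programming over a cost image; a minimal sketch (our own simplification, not the paper's algorithm):

```python
import numpy as np

def min_cost_boundary(cost):
    """Dynamic-programming shortest path across image columns: at each
    column the path may move to the adjacent row above, the same row,
    or the row below, tracing a layer boundary through a (typically
    gradient-derived) cost map. Returns one row index per column."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()          # accumulated path cost
    back = np.zeros((rows, cols), dtype=int)  # backtracking pointers
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - 1), min(rows, r + 2)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 1, 0, -1):
        path.append(int(back[path[-1], c]))
    return path[::-1]
```

In practice the cost map is low along strong vertical intensity gradients (layer edges), so the minimal path follows the boundary; the ±1-row step constraint keeps the boundary smooth.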

  11. Automatic Structural Parcellation of Mouse Brain MRI Using Multi-Atlas Label Fusion

    PubMed Central

    Ma, Da; Cardoso, Manuel J.; Modat, Marc; Powell, Nick; Wells, Jack; Holmes, Holly; Wiseman, Frances; Tybulewicz, Victor; Fisher, Elizabeth; Lythgoe, Mark F.; Ourselin, Sébastien

    2014-01-01

    Multi-atlas segmentation propagation has evolved quickly in recent years, becoming a state-of-the-art methodology for automatic parcellation of structural images. However, few studies have applied these methods to preclinical research. In this study, we present a fully automatic framework for mouse brain MRI structural parcellation using multi-atlas segmentation propagation. The framework adopts the similarity and truth estimation for propagated segmentations (STEPS) algorithm, which utilises a locally normalised cross correlation similarity metric for atlas selection and an extended simultaneous truth and performance level estimation (STAPLE) framework for multi-label fusion. The segmentation accuracy of the multi-atlas framework was evaluated using publicly available mouse brain atlas databases with pre-segmented manually labelled anatomical structures as the gold standard, and optimised parameters were obtained for the STEPS algorithm in the label fusion to achieve the best segmentation accuracy. We showed that our multi-atlas framework resulted in significantly higher segmentation accuracy compared to single-atlas based segmentation, as well as to the original STAPLE framework. PMID:24475148
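The baseline fusion strategy that STAPLE/STEPS-style methods improve upon is a per-voxel majority vote over the propagated atlas labels, which is compact enough to sketch (illustrative only, not the paper's STEPS implementation):

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse propagated atlas segmentations by per-voxel majority vote:
    each atlas contributes one integer label per voxel and the most
    frequent label wins (ties resolved toward the smaller label)."""
    stack = np.stack(label_maps)          # shape: (n_atlases, ...) int labels
    n_labels = int(stack.max()) + 1
    # count votes per label along the atlas axis, then take the argmax
    votes = np.stack([(stack == lab).sum(axis=0) for lab in range(n_labels)])
    return votes.argmax(axis=0)
```

STAPLE replaces the equal per-atlas vote with iteratively estimated per-atlas performance weights, and STEPS further restricts the vote to locally similar atlases, which is where the reported accuracy gains come from.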

  12. 13. The south segment of the building has a stone ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. The south segment of the building has a stone basement. The alley wall had a number of areaway windows that are now infilled with bricks. These areaways were subsequently filled with earth, probably when the alley was paved. Here the first-floor joists are seen with a make-shift support beam and column. The basement floor originally was part earth and part wood. Some of the earth floor is now covered with a concrete slab; the wood floor remains. Credit GADA/MRM. - Stroud Building, 31-33 North Central Avenue, Phoenix, Maricopa County, AZ

  13. Inductive coupler for downhole components and method for making same

    DOEpatents

    Hall, David R.; Hall, Jr., H. Tracy; Pixton, David S.; Dahlgren, Scott; Sneddon, Cameron; Fox, Joe; Briscoe, Michael A.

    2006-10-03

    An inductive coupler for downhole components. The inductive coupler includes an annular housing having a recess defined by a bottom portion and two opposing side wall portions. At least one side wall portion includes a lip extending toward but not reaching the other side wall portion. A plurality of generally U-shaped MCEI segments, preferably comprised of ferrite, are disposed in the recess and aligned so as to form a circular trough. The coupler further includes a conductor disposed within the circular trough and a polymer filling spaces between the segments, the annular housing and the conductor.

  14. Synthesis of Common Arabic Handwritings to Aid Optical Character Recognition Research.

    PubMed

    Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif

    2016-03-11

    Document analysis tasks such as pattern recognition, word spotting or segmentation, require comprehensive databases for training and validation. Not only variations in writing style but also the used list of words is of importance in the case that training samples should reflect the input of a specific area of application. However, generation of training samples is expensive in the sense of manpower and time, particularly if complete text pages including complex ground truth are required. This is why there is a lack of such databases, especially for Arabic, the second most popular language. However, Arabic handwriting recognition involves different preprocessing, segmentation and recognition methods. Each requires particular ground truth or samples to enable optimal training and validation, which are often not covered by the currently available databases. To overcome this issue, we propose a system that synthesizes Arabic handwritten words and text pages and generates corresponding detailed ground truth. We use these syntheses to validate a new, segmentation based system that recognizes handwritten Arabic words. We found that a modification of the Active Shape Model based character classifier that we proposed earlier improves the word recognition accuracy. Further improvements are achieved by using a vocabulary of the 50,000 most common Arabic words for error correction.
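The vocabulary-based error correction step can be illustrated with a plain edit-distance lookup; this is a generic sketch, and the paper's actual correction procedure may differ:

```python
def levenshtein(a, b):
    """Edit distance between two strings via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[-1] + 1,          # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def correct(word, vocabulary):
    """Vocabulary-based error correction: replace a recognized word by
    the closest entry in a frequent-word list."""
    return min(vocabulary, key=lambda v: levenshtein(word, v))
```

With a 50,000-word vocabulary a real system would prune candidates (e.g. by length or prefix) rather than scan linearly, but the correction principle is the same.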

  15. Synthesis of Common Arabic Handwritings to Aid Optical Character Recognition Research

    PubMed Central

    Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-etriby, Sherif

    2016-01-01

    Document analysis tasks such as pattern recognition, word spotting or segmentation, require comprehensive databases for training and validation. Not only variations in writing style but also the used list of words is of importance in the case that training samples should reflect the input of a specific area of application. However, generation of training samples is expensive in the sense of manpower and time, particularly if complete text pages including complex ground truth are required. This is why there is a lack of such databases, especially for Arabic, the second most popular language. However, Arabic handwriting recognition involves different preprocessing, segmentation and recognition methods. Each requires particular ground truth or samples to enable optimal training and validation, which are often not covered by the currently available databases. To overcome this issue, we propose a system that synthesizes Arabic handwritten words and text pages and generates corresponding detailed ground truth. We use these syntheses to validate a new, segmentation based system that recognizes handwritten Arabic words. We found that a modification of the Active Shape Model based character classifier that we proposed earlier improves the word recognition accuracy. Further improvements are achieved by using a vocabulary of the 50,000 most common Arabic words for error correction. PMID:26978368

  16. Document segmentation via oblique cuts

    NASA Astrophysics Data System (ADS)

    Svendsen, Jeremy; Branzan-Albu, Alexandra

    2013-01-01

    This paper presents a novel solution for the layout segmentation of graphical elements in Business Intelligence documents. We propose a generalization of the recursive X-Y cut algorithm, which allows for cutting along arbitrary oblique directions. An intermediate processing step consisting of line and solid region removal is also necessary due to the presence of decorative elements. The output of the proposed segmentation is a hierarchical structure which allows for the identification of primitives in pie and bar charts. The algorithm was tested on a database composed of charts from business documents. Results are very promising.
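The classic axis-aligned recursive X-Y cut that this paper generalizes splits a binary page mask at the widest whitespace gap in its row or column projection profile and recurses. A compact sketch of the classic algorithm follows (illustrative only; the oblique-cut generalization is not shown):

```python
import numpy as np

def xy_cut(mask, min_gap=1):
    """Recursive X-Y cut: split a binary page mask at the widest
    horizontal or vertical whitespace gap and recurse, returning leaf
    bounding boxes as (row0, row1, col0, col1) half-open intervals."""
    def gaps(profile):
        """Interior runs of zeros in a projection profile."""
        runs, start = [], None
        for i, v in enumerate(profile):
            if v == 0 and start is None:
                start = i
            elif v and start is not None:
                runs.append((start, i))
                start = None
        return [(a, b) for a, b in runs if a > 0 and b - a >= min_gap]

    def recurse(r0, r1, c0, c1):
        sub = mask[r0:r1, c0:c1]
        hg = gaps(sub.sum(axis=1))   # horizontal cut candidates
        vg = gaps(sub.sum(axis=0))   # vertical cut candidates
        if hg:
            a, b = max(hg, key=lambda g: g[1] - g[0])
            return recurse(r0, r0 + a, c0, c1) + recurse(r0 + b, r1, c0, c1)
        if vg:
            a, b = max(vg, key=lambda g: g[1] - g[0])
            return recurse(r0, r1, c0, c0 + a) + recurse(r0, r1, c0 + b, c1)
        return [(r0, r1, c0, c1)]

    return recurse(0, mask.shape[0], 0, mask.shape[1])
```

The oblique generalization replaces the axis-aligned projection profiles with profiles taken along arbitrary cut directions, which lets the recursion separate slanted chart elements that a purely horizontal/vertical cut cannot.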

  17. Automated segmentation and tracking for large-scale analysis of focal adhesion dynamics.

    PubMed

    Würflinger, T; Gamper, I; Aach, T; Sechi, A S

    2011-01-01

    Cell adhesion, a process mediated by the formation of discrete structures known as focal adhesions (FAs), is pivotal to many biological events including cell motility. Much is known about the molecular composition of FAs, although our knowledge of the spatio-temporal recruitment and the relative occupancy of the individual components present in the FAs is still incomplete. To fill this gap, an essential prerequisite is a highly reliable procedure for the recognition, segmentation and tracking of FAs. Although manual segmentation and tracking may provide some advantages when done by an expert, its performance is usually hampered by subjective judgement and the long time required in analysing large data sets. Here, we developed a model-based segmentation and tracking algorithm that overcomes these problems. In addition, we developed a dedicated computational approach to correct segmentation errors that may arise from the analysis of poorly defined FAs. Thus, by achieving accurate and consistent FA segmentation and tracking, our work establishes the basis for a comprehensive analysis of FA dynamics under various experimental regimes and the future development of mathematical models that simulate FA behaviour. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.

  18. Arabic handwritten: pre-processing and segmentation

    NASA Astrophysics Data System (ADS)

    Maliki, Makki; Jassim, Sabah; Al-Jawad, Naseer; Sellahewa, Harin

    2012-06-01

    This paper is concerned with pre-processing and segmentation tasks that influence the performance of Optical Character Recognition (OCR) systems and handwritten/printed text recognition. In Arabic, these tasks are adversely affected by the fact that many words are made up of sub-words; many sub-words have one or more associated diacritics that are not connected to the sub-word's body, and there can be multiple instances of sub-word overlap. To overcome these problems we investigate and develop segmentation techniques that first segment a document into sub-words, link the diacritics with their sub-words, and remove possible overlap between words and sub-words. We also investigate two approaches to pre-processing tasks: estimating sub-word baselines, and determining parameters that yield appropriate slope correction and slant removal. We investigate the use of linear regression on sub-word pixels to determine their central x and y coordinates, as well as their high-density part. We also develop a new incremental rotation procedure to be performed on sub-words that determines the best rotation angle needed to realign baselines. We demonstrate the benefits of these proposals by conducting extensive experiments on publicly available databases and in-house created databases. These algorithms help improve character segmentation accuracy by transforming handwritten Arabic text into a form that could benefit from analysis of printed text.

  19. A multi-scale tensor voting approach for small retinal vessel segmentation in high resolution fundus images.

    PubMed

    Christodoulidis, Argyrios; Hurtut, Thomas; Tahar, Houssem Ben; Cheriet, Farida

    2016-09-01

    Segmenting the retinal vessels from fundus images is a prerequisite for many CAD systems for the automatic detection of diabetic retinopathy lesions. So far, research efforts have concentrated mainly on the accurate localization of the large to medium diameter vessels. However, failure to detect the smallest vessels at the segmentation step can lead to false positive lesion detection counts in a subsequent lesion analysis stage. In this study, a new hybrid method for the segmentation of the smallest vessels is proposed. Line detection and perceptual organization techniques are combined in a multi-scale scheme. Small vessels are reconstructed from the perceptual-based approach via tracking and pixel painting. The segmentation was validated in a high resolution fundus image database including healthy and diabetic subjects using pixel-based as well as perceptual-based measures. The proposed method achieves 85.06% sensitivity rate, while the original multi-scale line detection method achieves 81.06% sensitivity rate for the corresponding images (p<0.05). The improvement in the sensitivity rate for the database is 6.47% when only the smallest vessels are considered (p<0.05). For the perceptual-based measure, the proposed method improves the detection of the vasculature by 7.8% against the original multi-scale line detection method (p<0.05). Copyright © 2016 Elsevier Ltd. All rights reserved.
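Line detectors of this family build on a simple operator: the response at a pixel is the mean intensity along the best-oriented line segment through it minus the mean of the surrounding window, so bright curvilinear structures score high. A minimal single-scale sketch follows; the four-orientation simplification and function name are ours, not the authors' implementation, and the real method aggregates such responses over several line lengths (scales):

```python
import numpy as np

def line_response(img, length=5):
    """Basic line-detector response map: for each pixel, the mean intensity
    along the best of four line orientations minus the local window mean."""
    half = length // 2
    orientations = [
        [(0, d) for d in range(-half, half + 1)],   # 0 degrees
        [(d, d) for d in range(-half, half + 1)],   # 45 degrees
        [(d, 0) for d in range(-half, half + 1)],   # 90 degrees
        [(d, -d) for d in range(-half, half + 1)],  # 135 degrees
    ]
    padded = np.pad(img.astype(float), half, mode='edge')
    resp = np.zeros(img.shape)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            window = padded[y:y + length, x:x + length]
            best = max(
                np.mean([padded[y + half + dy, x + half + dx] for dy, dx in offs])
                for offs in orientations
            )
            resp[y, x] = best - window.mean()
    return resp
```

The proposed hybrid additionally applies perceptual organization (tensor voting) and tracking to reconnect the smallest vessels, which this sketch does not cover.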

  20. Multi-atlas and label fusion approach for patient-specific MRI based skull estimation.

    PubMed

    Torrado-Carvajal, Angel; Herraiz, Joaquin L; Hernandez-Tamames, Juan A; San Jose-Estepar, Raul; Eryaman, Yigitcan; Rozenholc, Yves; Adalsteinsson, Elfar; Wald, Lawrence L; Malpica, Norberto

    2016-04-01

MRI-based skull segmentation is a useful procedure for many imaging applications. This study describes a methodology for automatic segmentation of the complete skull from a single T1-weighted volume. The skull is estimated using a multi-atlas segmentation approach. Using a whole head computed tomography (CT) scan database, the skull in a new MRI volume is detected by nonrigid image registration of the volume to every CT, and combination of the individual segmentations by label fusion. We have compared Majority Voting, Simultaneous Truth and Performance Level Estimation (STAPLE), Shape Based Averaging (SBA), and the Selective and Iterative Method for Performance Level Estimation (SIMPLE) algorithms. The pipeline has been evaluated quantitatively using images from the Retrospective Image Registration Evaluation database (reaching an overlap of 72.46 ± 6.99%), a clinical CT-MR dataset (maximum overlap of 78.31 ± 6.97%), and a whole head CT-MRI pair (maximum overlap 78.68%). A qualitative evaluation has also been performed on MRI acquisitions of volunteers. It is possible to automatically segment the complete skull from MRI data using a multi-atlas and label fusion approach. This will allow the creation of complete MRI-based tissue models that can be used in electromagnetic dosimetry applications and attenuation correction in PET/MR. © 2015 Wiley Periodicals, Inc.
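Among the compared label-fusion rules, Majority Voting is the simplest: a voxel is labelled skull when more than half of the registered atlas segmentations say so. A minimal sketch (function name is ours, not from the paper):

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse binary candidate segmentations (one per registered CT atlas)
    by a per-voxel majority vote."""
    stack = np.stack(label_maps)  # shape: (n_atlases, *volume_shape), values 0/1
    return (stack.sum(axis=0) > stack.shape[0] / 2).astype(np.uint8)
```

STAPLE, SBA and SIMPLE replace this flat vote with performance-weighted, shape-averaged or iteratively pruned combinations, respectively.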

  1. Brain Tumour Segmentation based on Extremely Randomized Forest with high-level features.

    PubMed

    Pinto, Adriano; Pereira, Sergio; Correia, Higino; Oliveira, J; Rasteiro, Deolinda M L D; Silva, Carlos A

    2015-08-01

Gliomas are among the most common and aggressive brain tumours. Segmentation of these tumours is important for surgery and treatment planning, as well as for follow-up evaluations. However, it is a difficult task, given that their size and location are variable and the delineation of all tumour tissue is not trivial, even across the different Magnetic Resonance Imaging (MRI) modalities. We propose a discriminative and fully automatic method for the segmentation of gliomas, using appearance- and context-based features to feed an Extremely Randomized Forest (Extra-Trees). Some of these features are computed over a non-linear transformation of the image. The proposed method was evaluated using the publicly available Challenge database from BraTS 2013, obtaining a Dice score of 0.83, 0.78 and 0.73 for the complete tumour, the core region and the enhanced region, respectively. Our results are competitive when compared against other results reported on the same database.

  2. Bi-model processing for early detection of breast tumor in CAD system

    NASA Astrophysics Data System (ADS)

    Mughal, Bushra; Sharif, Muhammad; Muhammad, Nazeer

    2017-06-01

Early screening of suspicious masses in mammograms may reduce the mortality rate among women. This rate can be further reduced by developing computer-aided diagnosis systems that make fewer false assumptions in medical informatics. This method targets early tumor detection in digitized mammograms. To improve the performance of the system, a novel bi-model processing algorithm is introduced. It divides the region of interest into two parts: the first is the pre-segmented region (breast parenchyma) and the other is the post-segmented region (suspicious region). The system follows a preprocessing scheme of contrast enhancement that is used to segment and extract the desired features of a given mammogram. In the next phase, a hybrid feature block is presented to show the effective performance of computer-aided diagnosis. To assess the effectiveness of the proposed method, it was tested on a database provided by the society of mammographic images. Our experimental outcomes on this database demonstrate the usefulness and robustness of the proposed method.

  3. Physical–chemical determinants of coil conformations in globular proteins

    PubMed Central

    Perskie, Lauren L; Rose, George D

    2010-01-01

We present a method with the potential to generate a library of coil segments from first principles. Proteins are built from α-helices and/or β-strands interconnected by these coil segments. Here, we investigate the conformational determinants of short coil segments, with particular emphasis on chain turns. Toward this goal, we extracted a comprehensive set of two-, three-, and four-residue turns from X-ray–elucidated proteins and classified them by conformation. A remarkably small number of unique conformers account for most of this experimentally determined set, whereas remaining members span a large number of rare conformers, many occurring only once in the entire protein database. Factors determining conformation were identified via Metropolis Monte Carlo simulations devised to test the effectiveness of various energy terms. Simulated structures were validated by comparison to experimental counterparts. After filtering rare conformers, we found that 98% of the remaining experimentally determined turn population could be reproduced by applying a hydrogen bond energy term to an exhaustively generated ensemble of clash-free conformers in which no backbone polar group lacks a hydrogen-bond partner. Further, at least 90% of longer coil segments, ranging from 5 to 20 residues, were found to be structural composites of these shorter primitives. These results are pertinent to protein structure prediction, where approaches can be divided into either empirical or ab initio methods. Empirical methods use database-derived information; ab initio methods rely on physical–chemical principles exclusively. Replacing the database-derived coil library with one generated from first principles would transform any empirically based method into its corresponding ab initio homologue. PMID:20512968

  4. Twelve automated thresholding methods for segmentation of PET images: a phantom study.

    PubMed

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M

    2012-06-21

Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
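Of the twelve algorithms, the Ridler (ISODATA) clustering method is the easiest to sketch: the threshold is iterated to the midpoint of the means of the two classes it currently separates, until it stabilises. A hedged minimal version (function name and tolerances are our assumptions):

```python
import numpy as np

def ridler_threshold(values, tol=1e-4, max_iter=100):
    """Ridler-Calvard (ISODATA) threshold: repeatedly move the threshold to
    the midpoint of the means of the two classes it separates."""
    values = np.asarray(values, dtype=float)
    t = values.mean()  # start from the global mean
    for _ in range(max_iter):
        below, above = values[values <= t], values[values > t]
        if below.size == 0 or above.size == 0:
            break
        new_t = 0.5 * (below.mean() + above.mean())
        if abs(new_t - t) < tol:
            return new_t
        t = new_t
    return t
```

Unlike the fixed 42%-of-maximum rule, this threshold adapts to each image's histogram, which is the property the study exploits.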

  5. Twelve automated thresholding methods for segmentation of PET images: a phantom study

    NASA Astrophysics Data System (ADS)

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M.

    2012-06-01

Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.

  6. Characterizing and reaching high-risk drinkers using audience segmentation.

    PubMed

    Moss, Howard B; Kirby, Susan D; Donodeo, Fred

    2009-08-01

    Market or audience segmentation is widely used in social marketing efforts to help planners identify segments of a population to target for tailored program interventions. Market-based segments are typically defined by behaviors, attitudes, knowledge, opinions, or lifestyles. They are more helpful to health communication and marketing planning than epidemiologically defined groups because market-based segments are similar in respect to how they behave or might react to marketing and communication efforts. However, market segmentation has rarely been used in alcohol research. As an illustration of its utility, we employed commercial data that describes the sociodemographic characteristics of high-risk drinkers as an audience segment, including where they tend to live, lifestyles, interests, consumer behaviors, alcohol consumption behaviors, other health-related behaviors, and cultural values. Such information can be extremely valuable in targeting and planning public health campaigns, targeted mailings, prevention interventions, and research efforts. We described the results of a segmentation analysis of those individuals who self-reported to consume 5 or more drinks per drinking episode at least twice in the last 30 days. The study used the proprietary PRIZM (Claritas, Inc., San Diego, CA) audience segmentation database merged with the Center for Disease Control and Prevention's (CDC) Behavioral Risk Factor Surveillance System (BRFSS) database. The top 10 of the 66 PRIZM audience segments for this risky drinking pattern are described. For five of these segments we provided additional in-depth details about consumer behavior and the estimates of the market areas where these risky drinkers resided. The top 10 audience segments (PRIZM clusters) most likely to engage in high-risk drinking are described. The cluster with the highest concentration of binge-drinking behavior is referred to as the "Cyber Millenials." 
This cluster is characterized as "the nation's tech-savvy singles and couples living in fashionable neighborhoods on the urban fringe." Almost 65% of Cyber Millenials households are found in the Pacific and Middle Atlantic regions of the United States. Additional consumer behaviors of the Cyber Millenials and other segments are also described. Audience segmentation can assist in identifying and describing target audience segments, as well as identifying places where segments congregate on- or offline. This information can be helpful for recruiting subjects for alcohol prevention research as well as planning health promotion campaigns. Through commercial data about high-risk drinkers as "consumers," planners can develop interventions that have heightened salience in terms of opportunities, perceptions, and motivations, and have better media channel identification.

  7. Characterizing and Reaching High-Risk Drinkers Using Audience Segmentation

    PubMed Central

    Moss, Howard B.; Kirby, Susan D.; Donodeo, Fred

    2010-01-01

    Background Market or audience segmentation is widely used in social marketing efforts to help planners identify segments of a population to target for tailored program interventions. Market-based segments are typically defined by behaviors, attitudes, knowledge, opinions, or lifestyles. They are more helpful to health communication and marketing planning than epidemiologically-defined groups because market-based segments are similar in respect to how they behave or might react to marketing and communication efforts. However, market segmentation has rarely been used in alcohol research. As an illustration of its utility, we employed commercial data that describes the sociodemographic characteristics of high-risk drinkers as an audience segment; where they tend to live, lifestyles, interests, consumer behaviors, alcohol consumption behaviors, other health-related behaviors, and cultural values. Such information can be extremely valuable in targeting and planning public health campaigns, targeted mailings, prevention interventions and research efforts. Methods We describe the results of a segmentation analysis of those individuals who self-report consuming five or more drinks per drinking episode at least twice in the last 30-days. The study used the proprietary PRIZM™ audience segmentation database merged with Center for Disease Control and Prevention's (CDC) Behavioral Risk Factor Surveillance System (BRFSS) database. The top ten of the 66 PRIZM™ audience segments for this risky drinking pattern are described. For five of these segments we provide additional in-depth details about consumer behavior and the estimates of the market areas where these risky drinkers reside. Results The top ten audience segments (PRIZM clusters) most likely to engage in high-risk drinking are described. 
The cluster with the highest concentration of binge drinking behavior is referred to as the “Cyber Millenials.” This cluster is characterized as “the nation's tech-savvy singles and couples living in fashionable neighborhoods on the urban fringe.” Almost 65% of Cyber Millenials households are found in the Pacific and Middle Atlantic regions of the U.S. Additional consumer behaviors of the Cyber Millenials and other segments are also described. Conclusions Audience segmentation can assist in identifying and describing target audience segments, as well as identifying places where segments congregate on- or offline. This information can be helpful for recruiting subjects for alcohol prevention research, as well as planning health promotion campaigns. Through commercial data about high-risk drinkers as “consumers,” planners can develop interventions that have heightened salience in terms of opportunities, perceptions, and motivations, and have better media channel identification. PMID:19413650

  8. The processing of English regular inflections: Phonological cues to morphological structure

    PubMed Central

    Post, Brechtje; Marslen-Wilson, William D.; Randall, Billi; Tyler, Lorraine K.

    2008-01-01

    Previous studies suggest that different neural and functional mechanisms are involved in the analysis of irregular (caught) and regular (filled) past tense forms in English. In particular, the comprehension and production of regular forms is argued to require processes of morpho-phonological assembly and disassembly, analysing these forms into a stem plus an inflectional affix (e.g., {fill} + {-ed}), as opposed to irregular forms, which do not have an overt stem + affix structure and must be analysed as full forms [Marslen-Wilson, W. D., & Tyler, L. K. (1997). Dissociating types of mental computation. Nature, 387, 592–594; Marslen-Wilson, W. D., & Tyler, L. K. (1998). Rules, representations, and the English past tense. Trends in Cognitive Science, 2, 428–435]. On this account, any incoming string that shows the critical diagnostic properties of an inflected form – a final coronal consonant (/t/, /d/, /s/, /z/) that agrees in voicing with the preceding segment as in filled, mild, or nilled – will automatically trigger an attempt at segmentation. We report an auditory speeded judgment experiment which explored the contribution of these critical morpho-phonological properties (labelled as the English inflectional rhyme pattern) to the processing of English regular inflections. The results show that any stimulus that can be interpreted as ending in a regular inflection, whether it is a real inflection (filled–fill), a pseudo-inflection (mild–mile) or a phonologically matched nonword (nilled–nill), is responded to more slowly than an unambiguously monomorphemic stimulus pair (e.g., belt–bell). This morpho-phonological effect was independent of phonological effects of voicing and syllabicity. The findings are interpreted as evidence for a basic morpho-phonological parsing process that applies to all items with the criterial phonological properties. PMID:18834584

  9. Relationship of ischemic times and left atrial volume and function in patients with ST-segment elevation myocardial infarction treated with primary percutaneous coronary intervention.

    PubMed

Ilic, Ivan; Stankovic, Ivan; Vidakovic, Radosav; Jovanovic, Vladimir; Vlahovic Stipac, Alja; Putnikovic, Biljana; Neskovic, Aleksandar N

    2015-04-01

Little is known about the impact of duration of ischemia on left atrial (LA) volumes and function during the acute phase of myocardial infarction. We investigated the relationship of ischemic times, echocardiographic indices of diastolic function and LA volumes in patients with ST-segment elevation myocardial infarction (STEMI) treated with primary percutaneous coronary intervention (PCI). A total of 433 consecutive STEMI patients underwent echocardiographic examination within 48 h of primary PCI, including the measurement of LA volumes and the ratio of mitral peak velocity of early filling to early diastolic mitral annular velocity (E/e'). Time intervals from onset of chest pain to hospital admission and reperfusion were collected, and the magnitude of Troponin I release was used to assess infarct size. Patients with LA volume index (LAVI) ≥28 ml/m(2) had longer total ischemic time (410 ± 347 vs. 303 ± 314 min, p = 0.007) and higher E/e' ratio (15 ± 5 vs. 10 ± 3, p < 0.001) than those with LAVI <28 ml/m(2), while the indices of LA function were similar between the study groups (p > 0.05, for all). A significant correlation was found between E/e' and LA volumes at all stages of LA filling and contraction (r = 0.363-0.434; p < 0.001, for all), while total ischemic time, along with E/e' and a restrictive filling pattern, remained independent predictors of LA enlargement. Increased LA volume is associated with longer ischemic times and may be a sensitive marker of increased left ventricular filling pressures in STEMI patients treated with primary PCI.

  10. Gaussian Multiscale Aggregation Applied to Segmentation in Hand Biometrics

    PubMed Central

    de Santos Sierra, Alberto; Ávila, Carmen Sánchez; Casanova, Javier Guerra; del Pozo, Gonzalo Bailador

    2011-01-01

This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out by using a publicly available synthetic database with 408,000 hand images in different backgrounds, comparing the performance in terms of accuracy and computational cost to two competitive segmentation methods existing in the literature, namely Lossy Data Compression (LDC) and Normalized Cuts (NCuts). The results highlight that the proposed method outperforms current competitive segmentation methods with regard to computational cost, time performance, accuracy and memory usage. PMID:22247658

  11. Gaussian multiscale aggregation applied to segmentation in hand biometrics.

    PubMed

    de Santos Sierra, Alberto; Avila, Carmen Sánchez; Casanova, Javier Guerra; del Pozo, Gonzalo Bailador

    2011-01-01

This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out by using a publicly available synthetic database with 408,000 hand images in different backgrounds, comparing the performance in terms of accuracy and computational cost to two competitive segmentation methods existing in the literature, namely Lossy Data Compression (LDC) and Normalized Cuts (NCuts). The results highlight that the proposed method outperforms current competitive segmentation methods with regard to computational cost, time performance, accuracy and memory usage.

  12. Improved Estimation of Cardiac Function Parameters Using a Combination of Independent Automated Segmentation Results in Cardiovascular Magnetic Resonance Imaging.

    PubMed

    Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe; Frouin, Frederique; Garreau, Mireille

    2015-01-01

    This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert.
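As a rough illustration of how STAPLE-style combination works, the following is a minimal expectation-maximization sketch for binary segmentations: it alternates between estimating a soft "true" segmentation and re-estimating each method's sensitivity and specificity. The function name, flat prior and initialisation are our assumptions, not the algorithm as implemented in the study:

```python
import numpy as np

def staple_binary(decisions, prior=0.5, n_iter=25):
    """Minimal binary STAPLE.
    decisions: (n_raters, n_voxels) 0/1 array, one row per candidate segmentation.
    Returns the per-voxel foreground posterior plus each rater's estimated
    sensitivity p and specificity q."""
    decisions = np.asarray(decisions)
    n_raters, _ = decisions.shape
    p = np.full(n_raters, 0.9)  # initial sensitivities
    q = np.full(n_raters, 0.9)  # initial specificities
    for _ in range(n_iter):
        # E-step: posterior probability that each voxel is truly foreground
        fg = prior * np.prod(np.where(decisions == 1, p[:, None], 1.0 - p[:, None]), axis=0)
        bg = (1.0 - prior) * np.prod(np.where(decisions == 0, q[:, None], 1.0 - q[:, None]), axis=0)
        w = fg / (fg + bg + 1e-12)
        # M-step: re-estimate each rater's performance against the soft truth
        p = (decisions * w).sum(axis=1) / (w.sum() + 1e-12)
        q = ((1 - decisions) * (1.0 - w)).sum(axis=1) / ((1.0 - w).sum() + 1e-12)
    return w, p, q
```

Thresholding the posterior `w` at 0.5 yields the fused segmentation; reliable methods receive high `p` and `q` and therefore dominate the vote, which is why the combination can approach expert accuracy.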

  13. Improved Estimation of Cardiac Function Parameters Using a Combination of Independent Automated Segmentation Results in Cardiovascular Magnetic Resonance Imaging

    PubMed Central

    Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe

    2015-01-01

    This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert. PMID:26287691

  14. Use of Knowledge Bases in Education of Database Management

    ERIC Educational Resources Information Center

    Radványi, Tibor; Kovács, Emod

    2008-01-01

In this article we present a segment of the Sulinet Digital Knowledgebase curriculum system in which you can find the sections of subject matter that support the teaching of database management. You can follow the order of the course from the beginning, when some topics first appear in elementary school, through the topics covered in secondary…

  15. Aerodynamic Characteristics, Database Development and Flight Simulation of the X-34 Vehicle

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Brauckmann, Gregory J.; Ruth, Michael J.; Fuhrmann, Henri D.

    2000-01-01

An overview of the aerodynamic characteristics, the development of the preflight aerodynamic database and the flight simulation of the NASA/Orbital X-34 vehicle is presented in this paper. To develop the aerodynamic database, wind tunnel tests from subsonic to hypersonic Mach numbers, including ground effect tests at low subsonic speeds, were conducted in various facilities at the NASA Langley Research Center. Where wind tunnel test data were not available, engineering-level analysis was used to fill the gaps in the database. Using these aerodynamic data, simulations were performed for typical design reference missions of the X-34 vehicle.
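As a schematic illustration of database gap filling, a coefficient measured at a few wind-tunnel Mach numbers can be interpolated onto the full database grid. The actual X-34 database used engineering-level aerodynamic analysis rather than plain interpolation, so this sketch (with assumed names) only conveys the idea of filling untested conditions from tested ones:

```python
import numpy as np

def fill_database_gaps(mach_grid, mach_tested, coeff_tested):
    """Fill untested Mach numbers in an aerodynamic coefficient table by
    piecewise-linear interpolation between the wind-tunnel test points."""
    return np.interp(mach_grid, mach_tested, coeff_tested)
```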

  16. In vitro comparison of gutta-percha-filled area percentages in root canals instrumented and obturated with different techniques.

    PubMed

    Yilmaz, Ayca; Karagoz-Kucukay, Isil

    2017-01-01

The aim was to evaluate the efficacy of different obturation techniques in root canals instrumented either by hand or with rotary instruments, with regard to the percentage of gutta-percha-filled area (PGFA). One hundred and sixty extracted mandibular premolars with single, straight root canals were studied. Root canals were prepared to an apical size of 30 by hand with a modified crown-down technique or with the ProTaper and HEROShaper systems. Teeth were divided into eight groups (n=20) according to the following instrumentation and obturation techniques: G1: Hand files+lateral condensation (LC), G2: Hand files+Thermafil, G3: ProTaper+LC, G4: ProTaper+single-cone, G5: ProTaper+ProTaper-Obturator, G6: HEROShaper+LC, G7: HEROShaper+single-cone, G8: HEROShaper+HEROfill. Horizontal sections were cut at 1, 3, 5, 7, 9, 11 and 13 mm from the apical foramen. The 1120 sections obtained were digitally photographed under a stereomicroscope set at 48X magnification. The cross-sectional area of the canal and the gutta-percha was measured by digital image analysis and the PGFA was calculated for each section. The mean PGFA in the Thermafil (G2), ProTaper-Obturator (G5) and HEROfill (G8) groups was significantly higher than in the other groups. In G3 and G4, PGFA showed no significant difference in the apical segments, whereas PGFA was significantly higher at the middle and coronal segments in G3. In G6 and G7, PGFA showed no significant difference in the apical and middle segments, whereas PGFA was significantly higher at the coronal segments in G6. The carrier-based gutta-percha obturation systems revealed significantly higher PGFA in comparison to single-cone and lateral condensation techniques.
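The PGFA itself is a simple per-section area ratio. A hedged sketch of the digital-image-analysis step, assuming binary masks for the canal and the gutta-percha (names are ours, not the study's software):

```python
import numpy as np

def pgfa(canal_mask, gp_mask):
    """Percentage of gutta-percha-filled area for one cross-section:
    gutta-percha pixel area inside the canal / total canal pixel area * 100."""
    canal_area = np.count_nonzero(canal_mask)
    gp_area = np.count_nonzero(np.logical_and(gp_mask, canal_mask))
    return 100.0 * gp_area / canal_area if canal_area else 0.0
```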

  17. Thermal and water regime of green roof segments filled with Technosol

    NASA Astrophysics Data System (ADS)

    Jelínková, Vladimíra; Šácha, Jan; Dohnal, Michal; Skala, Vojtěch

    2016-04-01

Artificial soil systems and structures comprise an appreciable part of urban areas and are considered promising for a number of reasons. One of the most important lies in the contribution of green roofs and facades to mitigating the heat island effect, improving air quality, reducing storm water, etc. The aim of the presented study is to evaluate the thermal and water regime of these anthropogenic soil systems during the first months of the construction life cycle. Green roof test segments filled with two different anthropogenic soils were built to investigate the benefits of such systems in a temperate climate. Temperature and water balance measurements, complemented by meteorological observations and knowledge of the physical properties of the soil substrates, provided the basis for a detailed analysis of the thermal and hydrological regime. The water balance of the green roof segments was calculated for the available vegetation seasons and for individual rainfall events, and a rainfall-runoff dependency was derived from the analysis of individual events. The difference between measured actual evapotranspiration and calculated potential evapotranspiration is discussed for a period with contrasting moisture-stress conditions. The thermal characteristics of the soil substrates resulted in highly contrasting diurnal variation of soil temperatures. The green roof systems under study were able to reduce the heat load on the roof construction compared with a concrete roof construction. Similarly, received rainfall was significantly reduced; the extent of the reduction depends mainly on the soil, the vegetation status and the experienced weather patterns. The research was realized as part of the University Centre for Energy Efficient Buildings supported by the EU and with financial support from the Czech Science Foundation under project number 14-10455P.
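The event-scale water balance described above can be sketched as a simple budget; the clipping rule and function names are our assumptions for illustration, not the study's model:

```python
def storm_runoff(rainfall_mm, et_mm, storage_change_mm):
    """Simple event water balance for a green-roof segment:
    runoff = rainfall - evapotranspiration - change in stored water,
    clipped at zero (no negative runoff)."""
    return max(0.0, rainfall_mm - et_mm - storage_change_mm)

def retention_percent(rainfall_mm, runoff_mm):
    """Share of the rainfall retained by the roof segment, in percent."""
    return 100.0 * (rainfall_mm - runoff_mm) / rainfall_mm
```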

  18. Earthquake recurrence and risk assessment in circum-Pacific seismic gaps

    USGS Publications Warehouse

    Thatcher, W.

    1989-01-01

    The development of the concept of seismic gaps, regions of low earthquake activity where large events are expected, has been one of the notable achievements of seismology and plate tectonics. Its application to long-term earthquake hazard assessment continues to be an active field of seismological research. Here I have surveyed well-documented case histories of repeated rupture of the same segment of circum-Pacific plate boundary and characterized their general features. I find that variability in fault slip and spatial extent of great earthquakes rupturing the same plate boundary segment is typical rather than exceptional, but sequences of major events fill identified seismic gaps with remarkable order. Earthquakes are concentrated late in the seismic cycle and tend to increase in size. Furthermore, earthquake rupture starts near zones of concentrated moment release, suggesting that high-slip regions control the timing of recurrent events. The absence of major earthquakes early in the seismic cycle indicates a more complex behaviour for lower-slip regions, which may explain the observed cycle-to-cycle diversity of gap-filling sequences. © 1989 Nature Publishing Group.

  19. Automatic segmentation of tumor-laden lung volumes from the LIDC database

    NASA Astrophysics Data System (ADS)

    O'Dell, Walter G.

    2012-03-01

    The segmentation of the lung parenchyma is often a critical pre-processing step prior to application of computer-aided detection of lung nodules. Segmentation of the lung volume can dramatically decrease computation time and reduce the number of false positive detections by excluding extra-pulmonary tissue from consideration. However, while many algorithms are capable of adequately segmenting the healthy lung, none have been demonstrated to work reliably well on tumor-laden lungs. A particular challenge is to preserve tumorous masses attached to the chest wall, mediastinum or major vessels. In this role, lung volume segmentation comprises an important computational step that can adversely affect the performance of the overall CAD algorithm. An automated lung volume segmentation algorithm has been developed with the goal of maximally excluding extra-pulmonary tissue while retaining all true nodules. The algorithm comprises a series of tasks including intensity thresholding, 2-D and 3-D morphological operations, 2-D and 3-D flood-filling, and snake-based clipping of nodules attached to the chest wall. It features the ability to (1) exclude trachea and bowels, (2) snip large attached nodules using snakes, (3) snip small attached nodules using dilation, (4) preserve large masses fully internal to the lung volume, (5) account for basal aspects of the lung where in a 2-D slice the lower sections appear to be disconnected from the main lung, and (6) achieve separation of the right and left hemi-lungs. The algorithm was developed and trained on the first 100 datasets of the LIDC image database.
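    The intensity-thresholding, flood-fill and morphology stages of such a pipeline can be sketched as below. This is an illustrative reconstruction on a synthetic volume, not the author's implementation; the function name, threshold and structuring element are assumptions:

```python
import numpy as np
from scipy import ndimage

def segment_lung_mask(volume, air_threshold=-400):
    """Rough lung-field mask: threshold on density, discard air connected to
    the volume border (flood-fill analogue), keep the two largest components,
    then close the mask to re-attach small juxtapleural structures."""
    air = volume < air_threshold
    labels, _ = ndimage.label(air)
    # Remove components that touch any face of the volume (outside air)
    border = np.unique(np.concatenate([
        labels[0].ravel(), labels[-1].ravel(),
        labels[:, 0].ravel(), labels[:, -1].ravel(),
        labels[:, :, 0].ravel(), labels[:, :, -1].ravel()]))
    for lb in border:
        labels[labels == lb] = 0
    # Keep the two largest remaining components (left and right lung)
    if labels.max() >= 2:
        sizes = ndimage.sum(air, labels, index=np.arange(1, labels.max() + 1))
        keep = np.argsort(sizes)[-2:] + 1
    else:
        keep = np.arange(1, labels.max() + 1)
    mask = np.isin(labels, keep)
    # Morphological closing smooths the boundary of the kept components
    return ndimage.binary_closing(mask, structure=np.ones((3, 3, 3)))

# Synthetic volume: soft-tissue background with two air-filled 'lungs'
vol = np.full((40, 60, 60), 40.0)
vol[5:35, 10:28, 10:50] = -800.0
vol[5:35, 32:50, 10:50] = -800.0
mask = segment_lung_mask(vol)
```

    The snake-based clipping of wall-attached nodules described in the abstract is a separate, more involved step and is not sketched here.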

  20. The effects of becoming an entrepreneur on the use of psychotropics among entrepreneurs and their spouses.

    PubMed

    Dahl, Michael S; Nielsen, Jimmi; Mojtabai, Ramin

    2010-12-01

    Entering entrepreneurship (i.e. becoming an entrepreneur) is known to be a demanding activity with increased workload, financial uncertainty and increased levels of stress. However, there are no systematic studies on how entering entrepreneurship affects the people involved. The authors investigated prescriptions of psychotropics for 6,221 first-time entrepreneurs from 2001-2004 and their 2,381 spouses in the first two years after becoming entrepreneurs in a matched case-control study using linked data from three Danish national registries: the Danish database for Labor Market Research, the Danish Entrepreneurship database and the Danish Prescription database. Entrepreneurs were more likely to fill prescriptions at pharmacies for sedatives/hypnotics (adjusted odds ratio (AOR): 1.45 [95% CI: 1.26-1.66], p < .0001). However, they were less likely to fill prescriptions for antidepressants (AOR: 0.74 [95% CI: 0.59-0.92], p = 0.007). Spouses of these entrepreneurs were also more likely to fill prescriptions for sedatives/hypnotics (AOR: 1.36 [95% CI: 1.10-1.67], p = 0.005). No difference in prescription of antidepressants was found for spouses. This study showed a significant relation between entering entrepreneurship and receiving prescriptions for sedatives/hypnotics, both among the entrepreneurs themselves and their spouses, suggesting that entering entrepreneurship may be associated with increased stress for both entrepreneurs and their families.
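    The reported AORs come from adjusted models fitted to the registry data. The unadjusted analogue of such an odds ratio, with a Wald 95% confidence interval, can be sketched from a 2×2 table; the counts below are hypothetical, chosen only to illustrate the arithmetic, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table:
    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: sedative/hypnotic fills among entrepreneurs vs. controls
or_, lo, hi = odds_ratio_ci(159, 841, 65, 935)
```

    An adjusted odds ratio additionally conditions on covariates (e.g. via logistic regression), which this table-based calculation does not do.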

  1. Pattern-based, multi-scale segmentation and regionalization of EOSD land cover

    NASA Astrophysics Data System (ADS)

    Niesterowicz, Jacek; Stepinski, Tomasz F.

    2017-10-01

    The Earth Observation for Sustainable Development of Forests (EOSD) map is a 25 m resolution thematic map of Canadian forests. Because of its large spatial extent and relatively high resolution, the EOSD is difficult to analyze using standard GIS methods. In this paper we propose multi-scale segmentation and regionalization of the EOSD as new methods for analyzing it on large spatial scales. Segments, which we refer to as forest land units (FLUs), are delineated as tracts of forest characterized by cohesive patterns of EOSD categories; we delineated from 727 to 91,885 FLUs within the spatial extent of the EOSD, depending on the selected scale of a pattern. The pattern of EOSD categories within each FLU is described by 1037 landscape metrics. A shapefile containing the boundaries of all FLUs, together with an attribute table listing the landscape metrics, makes up an SQL-searchable spatial database providing detailed information on the composition and pattern of land cover types in Canadian forests. The shapefile format and the extensive attribute table covering the entire EOSD legend are designed to facilitate a broad range of investigations requiring assessment of the composition and pattern of forest over large areas. We calculated four such databases using different spatial scales of pattern. We illustrate the use of the FLU database by producing forest regionalization maps of two Canadian provinces, Quebec and Ontario. Such maps capture the broad-scale variability of forest at the spatial scale of an entire province. We also demonstrate how the FLU database can be used to map the variability of landscape metrics, and thus the character of the landscape, across the whole of Canada.

  2. Automatic initialization and quality control of large-scale cardiac MRI segmentations.

    PubMed

    Albà, Xènia; Lekadir, Karim; Pereañez, Marco; Medrano-Gracia, Pau; Young, Alistair A; Frangi, Alejandro F

    2018-01-01

    Continuous advances in imaging technologies enable ever more comprehensive phenotyping of human anatomy and physiology. Concomitant reduction of imaging costs has resulted in widespread use of imaging in large clinical trials and population imaging studies. Magnetic Resonance Imaging (MRI), in particular, offers one-stop-shop multidimensional biomarkers of cardiovascular physiology and pathology. A wide range of analysis methods offer sophisticated cardiac image assessment and quantification for clinical and research studies. However, most methods have only been evaluated on relatively small databases often not accessible for open and fair benchmarking. Consequently, published performance indices are not directly comparable across studies and their translation and scalability to large clinical trials or population imaging cohorts is uncertain. Most existing techniques still rely on considerable manual intervention for the initialization and quality control of the segmentation process, becoming prohibitive when dealing with thousands of images. The contributions of this paper are three-fold. First, we propose a fully automatic method for initializing cardiac MRI segmentation, by using image features and random forests regression to predict an initial position of the heart and key anatomical landmarks in an MRI volume. In processing a full imaging database, the technique predicts the optimal corrective displacements and positions in relation to the initial rough intersections of the long and short axis images. Second, we introduce for the first time a quality control measure capable of identifying incorrect cardiac segmentations with no visual assessment. The method uses statistical, pattern and fractal descriptors in a random forest classifier to detect failures to be corrected or removed from subsequent statistical analysis. Finally, we validate these new techniques within a full pipeline for cardiac segmentation applicable to large-scale cardiac MRI databases. The results obtained based on over 1200 cases from the Cardiac Atlas Project show the promise of fully automatic initialization and quality control for population studies. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Learning of perceptual grouping for object segmentation on RGB-D data☆

    PubMed Central

    Richtsfeld, Andreas; Mörwald, Thomas; Prankl, Johann; Zillich, Michael; Vincze, Markus

    2014-01-01

    Object segmentation of unknown objects with arbitrary shape in cluttered scenes is an ambitious goal in computer vision, and it received a great impulse with the introduction of cheap and powerful RGB-D sensors. We introduce a framework for segmenting RGB-D images in which data is processed in a hierarchical fashion. After pre-clustering at the pixel level, parametric surface patches are estimated. Different relations between patch pairs, derived from perceptual grouping principles, are calculated, and support vector machine classification is employed to learn perceptual grouping. Finally, we show that object hypothesis generation with graph-cut finds a globally optimal solution and prevents wrong grouping. Our framework is able to segment objects even if they are stacked or jumbled in cluttered scenes. We also tackle the problem of segmenting objects when they are partially occluded. The work is evaluated on publicly available object segmentation databases and compared with state-of-the-art object segmentation work. PMID:24478571

  4. Segmentation by fusion of histogram-based k-means clusters in different color spaces.

    PubMed

    Mignotte, Max

    2008-05-01

    This paper presents a new, simple, and efficient segmentation approach based on a fusion procedure which aims at combining several segmentation maps, associated with simpler partition models, in order to finally get a more reliable and accurate segmentation result. The different label fields to be fused in our application are given by the same simple (K-means based) clustering technique applied to an input image expressed in different color spaces. Our fusion strategy combines these segmentation maps with a final clustering procedure using, as input features, the local histograms of the class labels previously estimated and associated with each site for all the initial partitions. This fusion framework remains simple to implement, fast, and general enough to be applied to various computer vision applications (e.g., motion detection and segmentation), and it has been successfully applied to the Berkeley image database. The experiments reported in this paper illustrate the potential of this approach compared with the state-of-the-art segmentation methods recently proposed in the literature.
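    A minimal sketch of the fusion idea, assuming a plain Lloyd's k-means with farthest-point initialization and two hypothetical "color spaces" of a synthetic two-region image; the 3×3 window and all names are illustrative, not the paper's exact procedure:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain Lloyd's k-means with deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(1, k):
        d2 = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[np.argmax(d2)])          # farthest point from current centers
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

def fuse_segmentations(label_maps, k, win=3):
    """Fusion step: for every pixel, build the local histogram of class labels
    in each initial segmentation, concatenate the histograms, and re-cluster."""
    h, w = label_maps[0].shape
    feats, r = [], win // 2
    for lm in label_maps:
        hist = np.zeros((h, w, k))
        for i in range(h):
            for j in range(w):
                patch = lm[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
                hist[i, j] = np.bincount(patch.ravel(), minlength=k) / patch.size
        feats.append(hist.reshape(-1, k))
    return kmeans(np.hstack(feats), k).reshape(h, w)

# Synthetic two-region image, 'expressed' in two hypothetical color spaces
img = np.zeros((16, 16)); img[:, 8:] = 1.0
space_a = img + np.random.default_rng(3).normal(0, 0.05, img.shape)
space_b = 2 * img + np.random.default_rng(4).normal(0, 0.05, img.shape)
maps = [kmeans(s.reshape(-1, 1), 2).reshape(16, 16) for s in (space_a, space_b)]
fused = fuse_segmentations(maps, k=2)
```

    The key point is that the final clustering operates on label histograms, not on colors, so it can reconcile partitions whose label indices disagree across color spaces.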

  5. Health claims database study of cyclosporine ophthalmic emulsion treatment patterns in dry eye patients

    PubMed Central

    Stonecipher, Karl G; Chia, Jenny; Onyenwenyi, Ahunna; Villanueva, Linda; Hollander, David A

    2013-01-01

    Background: Dry eye is a multifactorial, symptomatic disease associated with ocular surface inflammation and tear film hyperosmolarity. This study was designed to assess patterns of topical cyclosporine ophthalmic emulsion 0.05% (Restasis®) use in dry eye patients and determine if there were any differences in use based on whether dry eye is physician-coded as a primary or nonprimary diagnosis. Methods: Records for adult patients with a diagnosis of dry eye at an outpatient visit from January 1, 2008 to December 31, 2009 were selected from Truven Health MarketScan® Research Databases. The primary endpoint was the percentage of patients with at least one primary versus no primary dry eye diagnosis who filled a topical cyclosporine prescription. Data analyzed included utilization of topical corticosteroids, oral tetracyclines, and punctal plugs. Results: The analysis included 576,416 patients, accounting for 875,692 dry eye outpatient visits: 74.7% were female, 64.2% were ages 40–69 years, and 84.4% had at least one primary dry eye diagnosis. During 2008–2009, 15.9% of dry eye patients with a primary diagnosis versus 6.5% with no primary diagnosis filled at least one cyclosporine prescription. For patients who filled at least one prescription, the mean months’ supply of cyclosporine filled over 12 months was 4.44. Overall, 33.9% of dry eye patients filled a prescription for topical cyclosporine, topical corticosteroid, or oral tetracycline over 2 years. Conclusion: Patients with a primary dry eye diagnosis were more likely to fill a topical cyclosporine prescription. Although inflammation is key to the pathophysiology of dry eye, most patients seeing a physician for dry eye may not receive anti-inflammatory therapies. PMID:24179335

  6. Health claims database study of cyclosporine ophthalmic emulsion treatment patterns in dry eye patients.

    PubMed

    Stonecipher, Karl G; Chia, Jenny; Onyenwenyi, Ahunna; Villanueva, Linda; Hollander, David A

    2013-01-01

    Dry eye is a multifactorial, symptomatic disease associated with ocular surface inflammation and tear film hyperosmolarity. This study was designed to assess patterns of topical cyclosporine ophthalmic emulsion 0.05% (Restasis®) use in dry eye patients and determine if there were any differences in use based on whether dry eye is physician-coded as a primary or nonprimary diagnosis. Records for adult patients with a diagnosis of dry eye at an outpatient visit from January 1, 2008 to December 31, 2009 were selected from Truven Health MarketScan® Research Databases. The primary endpoint was percentage of patients with at least one primary versus no primary dry eye diagnosis who filled a topical cyclosporine prescription. Data analyzed included utilization of topical corticosteroids, oral tetracyclines, and punctal plugs. The analysis included 576,416 patients, accounting for 875,692 dry eye outpatient visits: 74.7% were female, 64.2% were ages 40-69 years, and 84.4% had at least one primary dry eye diagnosis. During 2008-2009, 15.9% of dry eye patients with a primary diagnosis versus 6.5% with no primary diagnosis filled at least one cyclosporine prescription. For patients who filled at least one prescription, the mean months' supply of cyclosporine filled over 12 months was 4.44. Overall, 33.9% of dry eye patients filled a prescription for topical cyclosporine, topical corticosteroid, or oral tetracycline over 2 years. Patients with a primary dry eye diagnosis were more likely to fill a topical cyclosporine prescription. Although inflammation is key to the pathophysiology of dry eye, most patients seeing a physician for dry eye may not receive anti-inflammatory therapies.

  7. Segmentation of pulmonary nodules in three-dimensional CT images by use of a spiral-scanning technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Jiahui; Engelmann, Roger; Li Qiang

    2007-12-15

    Accurate segmentation of pulmonary nodules in computed tomography (CT) is an important and difficult task for computer-aided diagnosis of lung cancer. Therefore, the authors developed a novel automated method for accurate segmentation of nodules in three-dimensional (3D) CT. First, a volume of interest (VOI) was determined at the location of a nodule. To simplify nodule segmentation, the 3D VOI was transformed into a two-dimensional (2D) image by use of a key 'spiral-scanning' technique, in which a number of radial lines originating from the center of the VOI spirally scanned the VOI from the 'north pole' to the 'south pole'. The voxels scanned by the radial lines provided a transformed 2D image. Because the surface of a nodule in the 3D image became a curve in the transformed 2D image, the spiral-scanning technique considerably simplified the segmentation method and enabled reliable segmentation results to be obtained. A dynamic programming technique was employed to delineate the 'optimal' outline of a nodule in the 2D image, which corresponded to the surface of the nodule in the 3D image. The optimal outline was then transformed back into 3D image space to provide the surface of the nodule. An overlap between nodule regions provided by computer and by the radiologists was employed as a performance metric for evaluating the segmentation method. The database included two Lung Imaging Database Consortium (LIDC) data sets that contained 23 and 86 CT scans, respectively, with 23 and 73 nodules that were 3 mm or larger in diameter. For the two data sets, six and four radiologists manually delineated the outlines of the nodules as reference standards in a performance evaluation for nodule segmentation. The segmentation method was trained on the first and tested on the second LIDC data set. The mean overlap values were 66% and 64% for the nodules in the first and second LIDC data sets, respectively, which represented a higher performance level than those of two existing segmentation methods that were also evaluated by use of the LIDC data sets. The segmentation method provided relatively reliable results for pulmonary nodule segmentation and would be useful for lung cancer quantification, detection, and diagnosis.
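    The spiral-scanning transform can be sketched as follows: radial rays from the VOI centre, whose directions follow a pole-to-pole spiral, are stacked as rows of a 2D image. The parameters and helper name are assumptions, not the authors' values; a centred spherical "nodule" then maps to a near-flat boundary curve in the unrolled image:

```python
import numpy as np

def spiral_scan(voi, n_lines=64, n_samples=32, n_turns=8):
    """Unroll a cubic VOI into a 2D image: each row is one radial ray from the
    VOI centre, sampled from the centre outwards; ray directions follow a
    spherical spiral from the 'north pole' (theta=0) to the 'south pole'."""
    c = (np.array(voi.shape) - 1) / 2.0
    r_max = min(voi.shape) / 2.0 - 1
    t = np.linspace(0.0, 1.0, n_lines)
    theta = t * np.pi                     # polar angle advances pole to pole
    phi = t * n_turns * 2 * np.pi         # azimuth winds around as theta advances
    out = np.empty((n_lines, n_samples))
    for i in range(n_lines):
        d = np.array([np.sin(theta[i]) * np.cos(phi[i]),
                      np.sin(theta[i]) * np.sin(phi[i]),
                      np.cos(theta[i])])
        for j, r in enumerate(np.linspace(0.0, r_max, n_samples)):
            x, y, z = np.rint(c + r * d).astype(int)   # nearest-voxel sampling
            out[i, j] = voi[x, y, z]
    return out

# Spherical 'nodule' of radius 8 at the centre of a 33^3 VOI
zz, yy, xx = np.indices((33, 33, 33))
voi = ((xx - 16) ** 2 + (yy - 16) ** 2 + (zz - 16) ** 2 <= 8 ** 2).astype(float)
img2d = spiral_scan(voi)
```

    In the transformed image the nodule boundary becomes a single-valued curve over the rows, which is what makes the subsequent dynamic-programming outline search tractable.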

  8. Differential modulation of right ventricular strain and right atrial mechanics in mild vs. severe pressure overload

    PubMed Central

    Voeller, Rochus K.; Aziz, Abdulhameed; Maniar, Hersh S.; Ufere, Nneka N.; Taggar, Ajay K.; Bernabe, Noel J.; Cupps, Brian P.

    2011-01-01

    Increased right atrial (RA) and ventricular (RV) chamber volumes are a late maladaptive response to chronic pulmonary hypertension. The purpose of the current investigation was to characterize the early compensatory changes that occur in the right heart during chronic RV pressure overload before the development of chamber dilation. Magnetic resonance imaging with radiofrequency tissue tagging was performed on dogs at baseline and after 10 wk of pulmonary artery banding to yield either mild RV pressure overload (36% rise in RV pressure; n = 5) or severe overload (250% rise in RV pressure; n = 4). The RV free wall was divided into three segments within a midventricular plane, and circumferential myocardial strain was calculated for each segment, the septum, and the left ventricle. Chamber volumes were calculated from stacked MRI images, and RA mechanics were characterized by calculating the RA reservoir, conduit, and pump contribution to RV filling. With mild RV overload, there were no changes in RV strain or RA function. With severe RV overload, RV circumferential strain diminished by 62% anterior (P = 0.04), 42% inferior (P = 0.03), and 50% in the septum (P = 0.02), with no change in the left ventricle (P = 0.12). RV filling became more dependent on RA conduit function, which increased from 30 ± 9 to 43 ± 13% (P = 0.01), than on RA reservoir function, which decreased from 47 ± 6 to 33 ± 4% (P = 0.04), with no change in RA pump function (P = 0.94). RA and RV volumes and RV ejection fraction were unchanged from baseline during either mild (P > 0.10) or severe RV pressure overload (P > 0.53). In response to severe RV pressure overload, RV myocardial strain is segmentally diminished and RV filling becomes more dependent on RA conduit rather than reservoir function. These compensatory mechanisms of the right heart occur early in chronic RV pressure overload before chamber dilation develops. PMID:21926343

  9. Evolution of Fseg/Cseg dimorphism in region III of the Plasmodium falciparum eba-175 gene.

    PubMed

    Yasukochi, Yoshiki; Naka, Izumi; Patarapotikul, Jintana; Hananantachai, Hathairad; Ohashi, Jun

    2017-04-01

    The 175-kDa erythrocyte binding antigen (EBA-175) of the malaria parasite Plasmodium falciparum is important for its invasion into human erythrocytes. The primary structure of eba-175 is divided into seven regions, namely I to VII. Region III contains highly divergent dimorphic segments, termed Fseg and Cseg. The allele frequencies of segmental dimorphism within a P. falciparum population have been extensively examined; however, the molecular evolution of segmental dimorphism is not well understood. A comprehensive comparison of nucleotide sequences among 32 P. falciparum eba-175 alleles identified in our previous study, two Plasmodium reichenowi, and one P. gaboni orthologous alleles obtained from the GenBank database was conducted to uncover the origin and evolutionary processes of segmental dimorphism in P. falciparum eba-175. In the eba-175 nucleotide sequence derived from a P. reichenowi CDC strain, both Fseg and Cseg were found in region III, which implies that the original eba-175 gene had both segments, and deletions of F- and C-segments generated Cseg and Fseg alleles, respectively. We also confirmed the presence of an allele with both Fseg and Cseg in another P. reichenowi strain (SY57) by re-mapping short reads obtained from the GenBank database. On the other hand, the segmental sequence of the eba-175 ortholog in P. gaboni was quite diverged from those of the other species, suggesting that the original eba-175 dimorphism of P. falciparum can be traced back to the stem lineage of P. falciparum and P. reichenowi. Our findings suggest that Fseg and Cseg alleles are derived from a single eba-175 allele containing both segments in the ancestral population of P. falciparum and P. reichenowi, and that the allelic dimorphism of eba-175 was shaped by the independent emergence of similar dimorphic lineages in different species, which has never been observed in any evolutionary mode of allelic dimorphism at other loci in malaria genomes. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Segmentation, surface rendering, and surface simplification of 3-D skull images for the repair of a large skull defect

    NASA Astrophysics Data System (ADS)

    Wan, Weibing; Shi, Pengfei; Li, Shuguang

    2009-10-01

    Given the potential demonstrated by research into bone-tissue engineering, the use of medical image data for the rapid prototyping (RP) of scaffolds is a subject worthy of research. Computer-aided design and manufacture and medical imaging have created new possibilities for RP. Accurate and efficient design and fabrication of anatomic models is critical to these applications. We explore the application of RP computational methods to the repair of a pediatric skull defect. The focus of this study is the segmentation of the defect region seen in computerized tomography (CT) slice images of this patient's skull and the three-dimensional (3-D) surface rendering of the patient's CT-scan data. We assess whether our segmentation and surface-rendering software can improve the generation of an implant model to fill the skull defect.

  11. A completely automated processing pipeline for lung and lung lobe segmentation and its application to the LIDC-IDRI data base

    NASA Astrophysics Data System (ADS)

    Blaffert, Thomas; Wiemker, Rafael; Barschdorf, Hans; Kabus, Sven; Klinder, Tobias; Lorenz, Cristian; Schadewaldt, Nicole; Dharaiya, Ekta

    2010-03-01

    Automated segmentation of lung lobes in thoracic CT images has relevance for various diagnostic purposes, such as localization of tumors within the lung or quantification of emphysema. Since emphysema is a known risk factor for lung cancer, the two purposes are even related to each other. The main steps of the segmentation pipeline described in this paper are lung detection, lung segmentation based on a watershed algorithm, and lung lobe segmentation based on mesh model adaptation. The segmentation procedure was applied to data sets from the database of the Image Database Resource Initiative (IDRI), which currently contains over 500 thoracic CT scans with delineated lung nodule annotations. We visually assessed the reliability of the individual segmentation steps, finding a success rate of 98% for lung detection and 90% for lung delineation. For about 20% of the cases we found the lobe segmentation not to be anatomically plausible. A modeling confidence measure is introduced that gives a quantitative indication of segmentation quality. As a demonstration of the segmentation method, we studied the correlation between emphysema score and malignancy on a per-lobe basis.

  12. Handwritten text line segmentation by spectral clustering

    NASA Astrophysics Data System (ADS)

    Han, Xuecheng; Yao, Hui; Zhong, Guoqiang

    2017-02-01

    Since handwritten text lines are generally skewed and not clearly separated, text line segmentation of handwritten document images is still a challenging problem. In this paper, we propose a novel text line segmentation algorithm based on spectral clustering. Given a handwritten document image, we first convert it to a binary image and then compute the adjacency matrix of the pixel points. We apply spectral clustering to this similarity matrix and use the orthogonal k-means clustering algorithm to group the text lines. Experiments on a Chinese handwritten document database (HIT-MW) demonstrate the effectiveness of the proposed method.
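    The spectral step can be sketched for a two-line case as below. For simplicity this thresholds the Fiedler vector of the normalized Laplacian instead of running the paper's orthogonal k-means, and the Gaussian affinity bandwidth is an assumption:

```python
import numpy as np

def spectral_bipartition(points, sigma=5.0):
    """Two-way spectral split: Gaussian affinity on pairwise distances,
    normalized graph Laplacian, then threshold the Fiedler vector at 0."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))          # affinity (similarity) matrix
    np.fill_diagonal(W, 0.0)
    deg = W.sum(1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    L = np.eye(len(points)) - D_inv_sqrt @ W @ D_inv_sqrt   # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]                         # eigenvector of 2nd-smallest eigenvalue
    return (fiedler > 0).astype(int)

# Two synthetic 'text lines': foreground pixel clouds at y ~ 0 and y ~ 30
rng = np.random.default_rng(0)
line1 = np.c_[rng.uniform(0, 100, 40), rng.normal(0, 1, 40)]
line2 = np.c_[rng.uniform(0, 100, 40), rng.normal(30, 1, 40)]
labels = spectral_bipartition(np.vstack([line1, line2]))
```

    With more than two lines one would instead take the first k eigenvectors as an embedding and cluster the rows, which is where the paper's k-means step comes in.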

  13. MAPS: The Organization of a Spatial Database System Using Imagery, Terrain, and Map Data

    DTIC Science & Technology

    1983-06-01

    segments which share the same pixel position. Finally, in any large system, a logical partitioning of the database must be performed in order to avoid...

  14. Discriminative dictionary learning for abdominal multi-organ segmentation.

    PubMed

    Tong, Tong; Wolz, Robin; Wang, Zehan; Gao, Qinquan; Misawa, Kazunari; Fujiwara, Michitaka; Mori, Kensaku; Hajnal, Joseph V; Rueckert, Daniel

    2015-07-01

    An automated segmentation method is presented for multi-organ segmentation in abdominal CT images. Dictionary learning and sparse coding techniques are used in the proposed method to generate target specific priors for segmentation. The method simultaneously learns dictionaries which have reconstructive power and classifiers which have discriminative ability from a set of selected atlases. Based on the learnt dictionaries and classifiers, probabilistic atlases are then generated to provide priors for the segmentation of unseen target images. The final segmentation is obtained by applying a post-processing step based on a graph-cuts method. In addition, this paper proposes a voxel-wise local atlas selection strategy to deal with high inter-subject variation in abdominal CT images. The segmentation performance of the proposed method with different atlas selection strategies is also compared. Our proposed method has been evaluated on a database of 150 abdominal CT images and achieves a promising segmentation performance with Dice overlap values of 94.9%, 93.6%, 71.1%, and 92.5% for liver, kidneys, pancreas, and spleen, respectively. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
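    The sparse-coding step over a given dictionary can be illustrated with orthogonal matching pursuit. This is a toy sketch, not the paper's pipeline: the dictionary here is orthonormal (so two greedy picks recover the code exactly), whereas learnt dictionaries are typically overcomplete, and the discriminative classifier training is not shown:

```python
import numpy as np

def omp(D, x, n_nonzero=2):
    """Orthogonal matching pursuit: sparse code of x over dictionary D
    (columns = unit-norm atoms). Greedily picks the atom most correlated
    with the residual, then refits all active coefficients by least squares."""
    residual, support = x.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

# Toy dictionary: 6 orthonormal atoms in R^8; x is a 2-atom combination
rng = np.random.default_rng(1)
D, _ = np.linalg.qr(rng.normal(size=(8, 6)))   # orthonormal columns
x = 2.0 * D[:, 1] - 1.5 * D[:, 4]
code = omp(D, x, n_nonzero=2)
```

    In the paper's setting, codes like this (over target-specific learnt dictionaries) feed both the reconstruction of patches and the discriminative classifiers that produce the probabilistic priors.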

  15. New approach for segmentation and recognition of handwritten numeral strings

    NASA Astrophysics Data System (ADS)

    Sadri, Javad; Suen, Ching Y.; Bui, Tien D.

    2004-12-01

    In this paper, we propose a new system for segmentation and recognition of unconstrained handwritten numeral strings. The system uses a combination of foreground and background features for segmentation of touching digits. The method introduces new algorithms for traversing the top/bottom foreground skeletons of the touched digits, for finding feature points on these skeletons, and for matching them to build all the segmentation paths. For the first time, a genetic representation is used to encode all the segmentation hypotheses. Our genetic algorithm searches and evolves the population of candidate segmentations and finds the one with the highest segmentation and recognition confidence. We have also used a new feature extraction method that reduces variation in the shapes of the digits, and an MLP neural network is then utilized to produce the labels and confidence values for those digits. The NIST SD19 and CENPARMI databases are used for evaluating the system. Our system achieves a correct segmentation-recognition rate of 96.07% with a rejection rate of 2.61%, which compares favorably with results reported in the literature.

  16. New approach for segmentation and recognition of handwritten numeral strings

    NASA Astrophysics Data System (ADS)

    Sadri, Javad; Suen, Ching Y.; Bui, Tien D.

    2005-01-01

    In this paper, we propose a new system for segmentation and recognition of unconstrained handwritten numeral strings. The system uses a combination of foreground and background features for segmentation of touching digits. The method introduces new algorithms for traversing the top/bottom foreground skeletons of the touched digits, for finding feature points on these skeletons, and for matching them to build all the segmentation paths. For the first time, a genetic representation is used to encode all the segmentation hypotheses. Our genetic algorithm searches and evolves the population of candidate segmentations and finds the one with the highest segmentation and recognition confidence. We have also used a new feature extraction method that reduces variation in the shapes of the digits, and an MLP neural network is then utilized to produce the labels and confidence values for those digits. The NIST SD19 and CENPARMI databases are used for evaluating the system. Our system achieves a correct segmentation-recognition rate of 96.07% with a rejection rate of 2.61%, which compares favorably with results reported in the literature.

  17. Sensor-oriented feature usability evaluation in fingerprint segmentation

    NASA Astrophysics Data System (ADS)

    Li, Ying; Yin, Yilong; Yang, Gongping

    2013-06-01

    Existing fingerprint segmentation methods usually process fingerprint images captured by different sensors with the same feature or feature set. We propose to improve fingerprint segmentation in view of an important fact: images from different sensors have different characteristics for segmentation. Feature usability evaluation means evaluating the usability of features in order to find a personalized feature or feature set for each sensor and thereby improve segmentation performance. The need for feature usability evaluation in fingerprint segmentation is raised and analyzed as a new issue. To address it, we present a decision-tree-based feature-usability evaluation method, which utilizes a C4.5 decision tree algorithm to evaluate and pick the most suitable feature or feature set for fingerprint segmentation from a typical candidate feature set. We apply the novel method to the FVC2002 database of fingerprint images, which were acquired with four different sensors and technologies. Experimental results show that the accuracy of segmentation is improved, and the time consumed by feature extraction is dramatically reduced with the selected feature(s).
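    C4.5 ranks candidate splits by (gain-ratio-based) information gain; the core computation can be sketched on toy segmentation features, where a coherence-like feature separates foreground from background while a noise feature does not. All names, thresholds and data here are illustrative, not the paper's feature set:

```python
import numpy as np

def entropy(y):
    """Shannon entropy (bits) of a label vector."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def information_gain(feature, y, threshold):
    """Gain of a single binary split, the criterion C4.5 builds on."""
    left, right = y[feature <= threshold], y[feature > threshold]
    w = len(left) / len(y)
    return entropy(y) - (w * entropy(left) + (1 - w) * entropy(right))

# Toy blocks: 1 = fingerprint foreground, 0 = background
rng = np.random.default_rng(2)
y = np.array([1] * 50 + [0] * 50)
coherence = np.where(y == 1, rng.uniform(0.6, 1.0, 100), rng.uniform(0.0, 0.4, 100))
noise = rng.uniform(0, 1, 100)
gain_coh = information_gain(coherence, y, 0.5)
gain_noise = information_gain(noise, y, 0.5)
```

    A feature whose best split yields high gain is "usable" for the sensor at hand; features whose gain stays near zero can be dropped, which is what reduces the feature-extraction time reported above.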

  18. Northeast Artificial Intelligence Consortium Annual Report for 1987. Volume 4. Research in Automated Photointerpretation

    DTIC Science & Technology

    1989-03-01

    Automated Photointerpretation Testbed. [Figure: An Initial Segmentation of an Image] ...Markov random field (MRF) theory provides a powerful alternative texture model and has resulted in intensive research activity in MRF model-based texture analysis... interpretation process. 5. Additional, and perhaps more powerful, features have to be incorporated into the image segmentation procedure. 6. Object detection

  19. The roles of nearest neighbor methods in imputing missing data in forest inventory and monitoring databases

    Treesearch

    Bianca N. I. Eskelson; Hailemariam Temesgen; Valerie Lemay; Tara M. Barrett; Nicholas L. Crookston; Andrew T. Hudak

    2009-01-01

    Almost universally, forest inventory and monitoring databases are incomplete, ranging from missing data for only a few records and a few variables, common for small land areas, to missing data for many observations and many variables, common for large land areas. For a wide variety of applications, nearest neighbor (NN) imputation methods have been developed to fill in...
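
    The NN imputation idea can be sketched independently of any forestry dataset. Below is a minimal k-nearest-neighbour imputer, assuming numeric records with `None` marking missing values; distances are computed over the variables the incomplete record actually observed (function names and the `k` default are illustrative):

```python
def knn_impute(records, k=2):
    """Fill None entries using the mean of the k nearest complete records.

    Distance is Euclidean over the variables observed in the target record.
    """
    complete = [r for r in records if None not in r]
    filled = []
    for r in records:
        if None not in r:
            filled.append(list(r))
            continue

        def dist(c):
            pairs = [(a, b) for a, b in zip(r, c) if a is not None]
            return sum((a - b) ** 2 for a, b in pairs) ** 0.5

        neighbors = sorted(complete, key=dist)[:k]
        filled.append([v if v is not None
                       else sum(c[i] for c in neighbors) / len(neighbors)
                       for i, v in enumerate(r)])
    return filled
```

    For a record missing its second variable, the imputed value is the mean of that variable over the two most similar complete records.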

  20. Multi-level deep supervised networks for retinal vessel segmentation.

    PubMed

    Mo, Juan; Zhang, Lei

    2017-12-01

    Changes in the appearance of retinal blood vessels are an important indicator for various ophthalmologic and cardiovascular diseases, including diabetes, hypertension, arteriosclerosis, and choroidal neovascularization. Vessel segmentation from retinal images is very challenging because of low blood vessel contrast, intricate vessel topology, and the presence of pathologies such as microaneurysms and hemorrhages. To overcome these challenges, we propose a neural network-based method for vessel segmentation. A deep supervised fully convolutional network is developed by leveraging multi-level hierarchical features of the deep networks. To improve the discriminative capability of features in lower layers of the deep network and guide the gradient back propagation to overcome gradient vanishing, deep supervision with auxiliary classifiers is incorporated in some intermediate layers of the network. Moreover, the transferred knowledge learned from other domains is used to alleviate the issue of insufficient medical training data. The proposed approach does not rely on hand-crafted features and needs no problem-specific preprocessing or postprocessing, which reduces the impact of subjective factors. We evaluate the proposed method on three publicly available databases, the DRIVE, STARE, and CHASE_DB1 databases. Extensive experiments demonstrate that our approach achieves better or comparable performance to state-of-the-art methods with a much faster processing speed, making it suitable for real-world clinical applications. The results of cross-training experiments demonstrate its robustness with respect to the training set. The proposed approach segments retinal vessels accurately and can easily be applied to other biomedical segmentation tasks.

  1. New perspectives on the geometry of the Albuquerque Basin, Rio Grande rift, New Mexico: Insights from geophysical models of rift-fill thickness

    USGS Publications Warehouse

    Grauch, V. J.; Connell, Sean D.

    2013-01-01

    Discrepancies among previous models of the geometry of the Albuquerque Basin motivated us to develop a new model using a comprehensive approach. Capitalizing on a natural separation between the densities of mainly Neogene basin fill (Santa Fe Group) and those of older rocks, we developed a three-dimensional (3D) geophysical model of syn-rift basin-fill thickness that incorporates well data, seismic-reflection data, geologic cross sections, and other geophysical data in a constrained gravity inversion. Although the resulting model does not show structures directly, it elucidates important aspects of basin geometry. The main features are three, 3–5-km-deep, interconnected structural depressions, which increase in size, complexity, and segmentation from north to south: the Santo Domingo, Calabacillas, and Belen subbasins. The increase in segmentation and complexity may reflect a transition of the Rio Grande rift from well-defined structural depressions in the north to multiple, segmented basins within a broader region of crustal extension to the south. The modeled geometry of the subbasins and their connections differs from a widely accepted structural model based primarily on seismic-reflection interpretations. Key elements of the previous model are an east-tilted half-graben block on the north separated from a west-tilted half-graben block on the south by a southwest-trending, scissor-like transfer zone. Instead, we find multiple subbasins with predominantly easterly tilts for much of the Albuquerque Basin, a restricted region of westward tilting in the southwestern part of the basin, and a northwesterly trending antiform dividing subbasins in the center of the basin instead of a major scissor-like transfer zone. The overall eastward tilt indicated by the 3D geophysical model generally conforms to stratal tilts observed for the syn-rift succession, implying a prolonged eastward tilting of the basin during Miocene time. 
An extensive north-south synform in the central part of the Belen subbasin suggests a possible path for the ancestral Rio Grande during late Miocene or early Pliocene time. Variations in rift-fill thickness correspond to pre-rift structures in several places, suggesting that a better understanding of pre-rift history may shed light on debates about structural inheritance within the rift.
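
    The full constrained gravity inversion is beyond a snippet, but the first-order link between a gravity low and basin-fill thickness is the infinite Bouguer slab relation Δg = 2πGΔρ·t. The sketch below uses illustrative numbers only (a 50 mGal low and a −400 kg/m³ fill-basement density contrast are assumptions, not values from the paper); they happen to give a thickness on the order of the 3 km depths reported.

```python
from math import pi

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def slab_thickness(delta_g_mgal, delta_rho):
    """First-order basin-fill thickness (m) from a gravity anomaly (mGal)
    and the density contrast (kg/m^3) between fill and basement, using
    the infinite Bouguer slab approximation dg = 2*pi*G*drho*t."""
    delta_g = delta_g_mgal * 1e-5          # 1 mGal = 1e-5 m/s^2
    return delta_g / (2 * pi * G * abs(delta_rho))
```

    Real inversions replace the slab with a 3D density model constrained by wells and seismic data, but the slab formula is a useful order-of-magnitude check.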

  2. Automatic temporal segment detection via bilateral long short-term memory recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Cao, Siming; He, Jun; Yu, Lejun; Li, Liandong

    2017-03-01

    Constrained by physiology, the temporal factors associated with human behavior, whether facial movement or body gesture, are described by four phases: neutral, onset, apex, and offset. Although these phases can benefit related recognition tasks, accurately detecting such temporal segments is not easy. We present an automatic temporal segment detection framework that uses bilateral long short-term memory recurrent neural networks (BLSTM-RNN) to learn high-level temporal-spatial features, synthesizing local and global temporal-spatial information more efficiently. The framework is evaluated in detail on the face and body database (FABO). The comparison shows that the proposed framework outperforms state-of-the-art methods for temporal segment detection.
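
    The four-phase convention can be illustrated with a toy rule-based labeler on a 1-D expression-intensity curve: rising means onset, a high plateau means apex, falling means offset, and anything else is neutral. The BLSTM-RNN in the paper learns this mapping from data instead of hard-coding it, so the thresholds below are purely illustrative.

```python
def label_phases(intensity, rise=0.05, high=0.5):
    """Toy four-phase labeling of a 1-D expression-intensity curve.

    neutral: low and flat; onset: rising; apex: high plateau; offset: falling.
    """
    labels = []
    for i, v in enumerate(intensity):
        # forward difference (last sample compares with itself, i.e. flat)
        d = intensity[min(i + 1, len(intensity) - 1)] - v
        if d > rise:
            labels.append("onset")
        elif d < -rise:
            labels.append("offset")
        elif v >= high:
            labels.append("apex")
        else:
            labels.append("neutral")
    return labels
```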

  3. Sealing properties of one-step root-filling fibre post-obturators vs. two-step delayed fibre post-placement.

    PubMed

    Monticelli, Francesca; Osorio, Raquel; Toledano, Manuel; Ferrari, Marco; Pashley, David H; Tay, Franklin R

    2010-07-01

    The sealing properties of a one-step obturation post-placement technique consisting of Resilon-capped fibre post-obturators were compared with a two-step technique based on initial Resilon root filling followed by 24-h-delayed fibre post-placement. Thirty root segments were shaped to size 40, 0.04 taper and filled with: (1) InnoEndo obturators; (2) Resilon/24-h-delayed FibreKor post-cementation. Obturator, root filling and post-cementation procedures were performed using InnoEndo bonding agent/dual-cured root canal sealer. Fluid flow rate through the filled roots was evaluated at 10 psi using a computerised fluid filtration model before root resection and after 3 and 9 mm apical resections. Fluid flow data were analysed using two-way repeated measures ANOVA and the Tukey test to examine the effects of root-filling post-placement technique and root resection length on fluid leakage from the filled canals (alpha=0.05). A significantly greater amount of fluid leakage was observed with the one-step technique than with the two-step technique. No difference in fluid leakage was observed among intact canals and canals resected at different lengths for either material. The seal of root canals achieved with the one-step obturator is less effective than separate Resilon root fillings followed by a 24-h delay prior to fibre post-placement. Incomplete setting of the sealer and restricted relief of polymerisation shrinkage stresses may be responsible for the inferior seal of the one-step root-filling/post-restoration technique. Copyright 2010 Elsevier Ltd. All rights reserved.

  4. Ultrasensitive Mach-Zehnder Interferometric Temperature Sensor Based on Liquid-Filled D-Shaped Fiber Cavity

    PubMed Central

    Zhang, Hui; Gao, Shecheng; Luo, Yunhan; Xiong, Songsong; Wan, Lei; Huang, Xincheng; Huang, Bingsen; Feng, Yuanhua; He, Miao; Liu, Weiping; Chen, Zhe; Li, Zhaohui

    2018-01-01

    A liquid-filled D-shaped fiber (DF) cavity serving as an in-fiber Mach–Zehnder interferometer (MZI) has been proposed and experimentally demonstrated for temperature sensing with ultrahigh sensitivity. The miniature MZI is constructed by splicing a segment of DF between two single-mode fibers (SMFs) to form a microcavity (MC) for filling and replacement of various refractive index (RI) liquids. By adjusting the effective RI difference between the DF and MC (the two interference arms), experimental and calculated results indicate that the interference spectra show different degrees of temperature dependence. As the effective RI of the liquid-filled MC approaches that of the DF, temperature sensitivity up to −84.72 nm/°C with a linear correlation coefficient of 0.9953 has been experimentally achieved for a device with the MC length of 456 μm, filled with liquid RI of 1.482. Apart from ultrahigh sensitivity, the proposed MCMZI device possesses additional advantages of its miniature size and simple configuration; these features make it promising and competitive in various temperature sensing applications, such as consumer electronics, biological treatments, and medical diagnosis. PMID:29673220
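
    The sensitivity mechanism follows from two-beam interference: the fringe spacing (free spectral range) is approximately FSR ≈ λ²/(Δn·L), so as the effective-index difference Δn between the liquid-filled cavity and the D-shaped fiber shrinks, the fringes widen and small thermally induced index changes shift them strongly. A sketch with assumed values (a 1550 nm wavelength and the Δn values below are illustrative, not measurements from the paper):

```python
def fringe_spacing_nm(wavelength_nm, delta_n, cavity_len_um):
    """Free spectral range of a two-arm interferometer: FSR = lambda^2 / (dn * L)."""
    lam = wavelength_nm * 1e-9          # nm -> m
    L = cavity_len_um * 1e-6            # um -> m
    return lam ** 2 / (delta_n * L) * 1e9   # back to nm
```

    Shrinking Δn by a factor of ten widens the fringes tenfold, which is why matching the liquid's index to the fiber boosts temperature sensitivity.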

  5. Segmentation of hepatic artery in multi-phase liver CT using directional dilation and connectivity analysis

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Schnurr, Alena-Kathrin; Zidowitz, Stephan; Georgii, Joachim; Zhao, Yue; Razavi, Mohammad; Schwier, Michael; Hahn, Horst K.; Hansen, Christian

    2016-03-01

    Segmentation of hepatic arteries in multi-phase computed tomography (CT) images is indispensable in liver surgery planning. During image acquisition, the hepatic artery is enhanced by the injection of contrast agent. The enhanced signals are often not stably acquired due to non-optimal contrast timing. Other vascular structures, such as the hepatic vein or portal vein, can be enhanced as well in the arterial phase, which can adversely affect the segmentation results. Furthermore, the arteries might suffer from partial volume effects due to their small diameter. To overcome these difficulties, we propose a framework for robust hepatic artery segmentation requiring a minimal amount of user interaction. First, an efficient multi-scale Hessian-based vesselness filter is applied to the arterial-phase CT image, aiming to enhance vessel structures within a specified diameter range. Second, the vesselness response is processed by a Bayesian classifier to identify the most probable vessel structures. Since the vesselness filter typically performs poorly at vessel bifurcations and on segments corrupted by noise, two vessel-reconnection techniques are proposed. The first uses a directional morphological operator to dilate vessel segments along their centerline directions, attempting to fill the gap between broken vascular segments. The second analyzes the connectivity of vessel segments and reconnects disconnected segments and branches. Finally, a 3D vessel tree is reconstructed. The algorithm has been evaluated on 18 CT images of the liver. To quantitatively measure the similarity between segmented and reference vessel trees, the skeleton coverage and mean symmetric distance were calculated, resulting in averages of 0.55 ± 0.27 and 12.7 ± 7.9 mm (mean ± standard deviation), respectively.
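
    The connectivity-analysis step can be sketched as a union-find over segment endpoints: segments whose endpoints lie within a gap threshold are grouped as one vessel. This is a simplified stand-in for the paper's reconnection logic; the data layout and `max_gap` parameter are illustrative.

```python
def reconnect_segments(segments, max_gap=2.0):
    """Group broken vessel segments whose endpoints lie within `max_gap`.

    Each segment is a list of (x, y) centerline points; a union-find over
    endpoint proximity merges segments that likely belong to one vessel.
    Returns sorted lists of segment indices, one list per connected group.
    """
    parent = list(range(len(segments)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    def endpoints(s):
        return [s[0], s[-1]]

    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            if any(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 <= max_gap
                   for ax, ay in endpoints(segments[i])
                   for bx, by in endpoints(segments[j])):
                union(i, j)

    groups = {}
    for i in range(len(segments)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```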

  6. Repair of segmental radial defect with autologous bone marrow aspirate and hydroxyapatite in rabbit radius: A clinical and radiographic evaluation

    PubMed Central

    Yassine, Kalbaza Ahmed; Mokhtar, Benchohra; Houari, Hemida; Karim, Amara; Mohamed, Melizi

    2017-01-01

    Aim: Finding an ideal bone substitute to treat large bone defects, delayed unions and nonunions remains a challenge for orthopedic surgeons and researchers. Several studies have been conducted on bone regeneration, each with its own advantages and disadvantages. The aim of this study was to evaluate the effect of a combination of hydroxyapatite (HA) powder with autologous bone marrow (BM) aspirate on the repair of a segmental radial defect in a rabbit model. Materials and Methods: A total of 36 adult male New Zealand rabbits with a mean weight of 2.25 kg were used in this study. An approximately 5 mm defect was created in the mid-shaft of the radius and filled with HA powder in the control group "HA" (n=18) or with a combination of HA powder and autologous BM aspirate in the test group "HA+BM" (n=18). Animals were observed daily for healing by inspection of the surgical site, and six rabbits of each group were sacrificed at 30, 60, and 90 post-operative days for radiographic evaluation of the defect site. Results: The test group showed better and more rapid bone regeneration: the defect was completely filled with mature bone tissue after 90 days. Conclusion: Based on these findings, we infer that adding BM aspirate to HA leads to a better regeneration process and a complete filling of the defect. PMID:28831217

  7. Filling-In Models of Completion: Rejoinder to Kellman, Garrigan, Shipley, and Keane (2007) and Albert (2007)

    ERIC Educational Resources Information Center

    Anderson, Barton L.

    2007-01-01

    There has been a growing interest in understanding the computations involved in the processes underlying visual segmentation and interpolation in conditions of occlusion. P. J. Kellman, P. Garrigan, T. F. Shipley, and B. P. Keane and M. K. Albert defended the view that identical contour interpolation mechanisms underlie modal and amodal…

  8. 75 FR 23253 - Notice of Intent To Prepare a Draft Environmental Impact Statement (EIS) for the Central Palm...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-03

    ... reach. The County has nourished the project area dune toes on several occasions and has planted native dune vegetation at several locations. Due to the narrow beach profile, much of this effort has been... nourishment and dune restoration through filling activities, groins, segmented submerged breakwaters, upland...

  9. The Virtuous All-News Radio Journalist: Perceptions of News Directors.

    ERIC Educational Resources Information Center

    Wulfemeyer, K. Tim; McFadden, Lori L.

    To date, most of the scholarly research and critical articles about ethics in journalism have dealt with newspapers and television rather than with radio. To help fill this gap, a study surveyed a segment of the radio news community to determine some of the attitudes, values, and beliefs of news directors concerning ethics in their workplace.…

  10. Quantification of the cerebrospinal fluid from a new whole body MRI sequence

    NASA Astrophysics Data System (ADS)

    Lebret, Alain; Petit, Eric; Durning, Bruno; Hodel, Jérôme; Rahmouni, Alain; Decq, Philippe

    2012-03-01

    Our work aims to develop a biomechanical model of hydrocephalus, intended both for clinical research and to assist the neurosurgeon in diagnostic decisions. Recently, we defined a new MR imaging sequence based on SPACE (Sampling Perfection with Application optimized Contrast using different flip-angle Evolution). On these images, the cerebrospinal fluid (CSF) appears as a homogeneous hypersignal, making them suitable for segmentation and for volume assessment of the CSF. In this paper we present a fully automatic 3D segmentation of such SPACE MRI sequences. We choose a topological approach, considering that the CSF can be modeled as a simply connected object (i.e., a filled sphere). First, an initial object, which must be strictly included in the CSF and homotopic to a filled sphere, is determined using moment-preserving thresholding. Then a priority function based on a Euclidean distance map is computed to control the thickening process that adds "simple points" to the initial thresholded object. A point is called simple if its addition or suppression does not change the topology of either the object or the background. The method is validated by measuring fluid volume in brain phantoms and by comparing our volume assessments on clinical data to those derived from a segmentation controlled by expert physicians. We then show that a distinction between pathological cases and healthy adults can be achieved by a linear discriminant analysis of the volumes of the ventricular and intracranial subarachnoid spaces.
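
    The thickening process can be sketched as priority-queue growth ordered by the distance map: the most interior candidate point is added first. One deliberate simplification: the paper adds only topology-preserving "simple points", and that test is omitted here for brevity, so this sketch preserves distance ordering but not topology in general.

```python
import heapq

def thicken(seed, mask, dist):
    """Grow `seed` into `mask`, always adding the candidate neighbor with
    the highest distance-map value (most interior point) first.

    seed, mask: sets of (x, y) points; dist: dict mapping (x, y) -> distance.
    """
    grown = set(seed)
    heap = []

    def push_neighbors(p):
        x, y = p
        for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if q in mask and q not in grown:
                heapq.heappush(heap, (-dist[q], q))   # max-priority via negation

    for p in grown:
        push_neighbors(p)
    while heap:
        _, q = heapq.heappop(heap)
        if q in grown:
            continue
        grown.add(q)
        push_neighbors(q)
    return grown
```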

  11. Brain tumor classification and segmentation using sparse coding and dictionary learning.

    PubMed

    Salman Al-Shaikhli, Saif Dawood; Yang, Michael Ying; Rosenhahn, Bodo

    2016-08-01

    This paper presents a novel fully automatic framework for multi-class brain tumor classification and segmentation using a sparse coding and dictionary learning method. The proposed framework consists of two steps: classification and segmentation. The classification of the brain tumors is based on brain topology and texture. The segmentation is based on voxel values of the image data. Using K-SVD, two types of dictionaries are learned from the training data and their associated ground truth segmentation: feature dictionary and voxel-wise coupled dictionaries. The feature dictionary consists of global image features (topological and texture features). The coupled dictionaries consist of coupled information: gray scale voxel values of the training image data and their associated label voxel values of the ground truth segmentation of the training data. For quantitative evaluation, the proposed framework is evaluated using different metrics. The segmentation results of the brain tumor segmentation (MICCAI-BraTS-2013) database are evaluated using five different metric scores, which are computed using the online evaluation tool provided by the BraTS-2013 challenge organizers. Experimental results demonstrate that the proposed approach achieves an accurate brain tumor classification and segmentation and outperforms the state-of-the-art methods.

  12. Retinal vascular segmentation using superpixel-based line operator and its application to vascular topology estimation.

    PubMed

    Na, Tong; Xie, Jianyang; Zhao, Yitian; Zhao, Yifan; Liu, Yue; Wang, Yongtian; Liu, Jiang

    2018-05-09

    Automatic methods for analyzing retinal vascular networks, such as retinal blood vessel detection, vascular network topology estimation, and artery/vein classification, are of great assistance to the ophthalmologist in the diagnosis and treatment of a wide spectrum of diseases. We propose a new framework for precisely segmenting retinal vasculature, constructing the retinal vascular network topology, and separating the arteries and veins. A nonlocal total variation inspired Retinex model is employed to remove image intensity inhomogeneity and correct relatively poor contrast. For better generalizability and segmentation performance, a superpixel-based line operator is proposed to distinguish between lines and edges, thus allowing more tolerance in the position of the respective contours. The concept of dominant sets clustering is adopted to estimate retinal vessel topology and classify the vessel network into arteries and veins. The proposed segmentation method yields competitive results on three public data sets (STARE, DRIVE, and IOSTAR), and it has superior performance compared with unsupervised segmentation methods, with accuracies of 0.954, 0.957, and 0.964, respectively. The topology estimation approach has been applied to five public databases (DRIVE, STARE, INSPIRE, IOSTAR, and VICAVR) and achieved high accuracies of 0.830, 0.910, 0.915, 0.928, and 0.889, respectively. The accuracies of artery/vein classification based on the estimated vascular topology on three public databases (INSPIRE, DRIVE, and VICAVR) are 0.909, 0.910, and 0.907, respectively. The experimental results show that the proposed framework effectively addresses the crossover problem, a bottleneck issue in segmentation and vascular topology reconstruction. The vascular topology information significantly improves the accuracy of artery/vein classification. © 2018 American Association of Physicists in Medicine.

  13. Breast mass segmentation in mammograms combining fuzzy c-means and active contours

    NASA Astrophysics Data System (ADS)

    Hmida, Marwa; Hamrouni, Kamel; Solaiman, Basel; Boussetta, Sana

    2018-04-01

    Segmentation of breast masses in mammograms is a challenging issue due to the nature of mammography and the characteristics of masses. In fact, mammographic images are poor in contrast, and breast masses have various shapes and densities with fuzzy, ill-defined borders. In this paper, we propose a method based on a modified Chan-Vese active contour model for mass segmentation in mammograms. We conduct the experiment on mass Regions of Interest (ROI) extracted from the MIAS database. The proposed method consists of three main stages: first, the ROI is preprocessed to enhance the contrast; next, two fuzzy membership maps are generated from the preprocessed ROI using the fuzzy C-means algorithm; these fuzzy membership maps are finally used to modify the energy of the Chan-Vese model and perform the final segmentation. Experimental results indicate that the proposed method yields good mass segmentation results.
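
    The fuzzy membership maps come from standard fuzzy C-means. A minimal 1-D version is shown below (the paper applies it to ROI pixel intensities and then feeds the maps into the Chan-Vese energy, a step not reproduced here; initialization and iteration count are illustrative choices):

```python
def fuzzy_cmeans_1d(values, c=2, m=2.0, iters=50):
    """Fuzzy c-means on scalar intensities: returns (centers, memberships).

    memberships[i][j] is the degree to which values[i] belongs to cluster j;
    each row sums to 1, which is what makes the maps "fuzzy".
    """
    centers = [min(values), max(values)] if c == 2 else list(values[:c])
    u = [[0.0] * c for _ in values]
    for _ in range(iters):
        # update memberships from current centers
        for i, x in enumerate(values):
            d = [abs(x - ck) or 1e-12 for ck in centers]   # avoid div-by-zero
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / dk) ** (2 / (m - 1)) for dk in d)
        # update centers as membership-weighted means
        for j in range(c):
            num = sum((u[i][j] ** m) * x for i, x in enumerate(values))
            den = sum(u[i][j] ** m for i in range(len(values)))
            centers[j] = num / den
    return centers, u
```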

  14. ECG signal analysis through hidden Markov models.

    PubMed

    Andreão, Rodrigo V; Dorizzi, Bernadette; Boudy, Jérôme

    2006-08-01

    This paper presents an original hidden Markov model (HMM) approach for online beat segmentation and classification of electrocardiograms. The HMM framework was chosen for its ability to perform beat detection, segmentation, and classification, which makes it highly suitable for the electrocardiogram (ECG) problem. Our approach addresses a broad range of topics, some never studied before in other HMM-related works: waveform modeling, multichannel beat segmentation and classification, and unsupervised adaptation to the patient's ECG. Performance was evaluated on the two-channel QT database in terms of waveform segmentation precision, beat detection, and classification. Our waveform segmentation results compare favorably to other systems in the literature. We also obtained high beat detection performance, with a sensitivity of 99.79% and a positive predictivity of 99.96%, using a test set of 59 recordings. Moreover, premature ventricular contraction beats were detected using an original classification strategy. The results obtained validate our approach for real-world application.
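
    At decoding time, HMM-based beat segmentation reduces to the Viterbi algorithm: find the most likely state sequence given the observations. A generic sketch with a toy two-state model (the states, probabilities, and discretized observations below are illustrative, not taken from the QT database):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence for an observation sequence.

    In the ECG setting the states would be waveform labels (P, QRS, T,
    baseline) and obs a discretized feature stream.
    """
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = prob
            back[t][s] = prev
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]
```

    With a "baseline" state emitting mostly low-amplitude observations and a "QRS" state emitting high ones, a low-low-high-high-low stream decodes into a QRS segment bracketed by baseline.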

  15. Blurry-frame detection and shot segmentation in colonoscopy videos

    NASA Astrophysics Data System (ADS)

    Oh, JungHwan; Hwang, Sae; Tavanapong, Wallapak; de Groen, Piet C.; Wong, Johnny

    2003-12-01

    Colonoscopy is an important screening procedure for colorectal cancer. During this procedure, the endoscopist visually inspects the colon. Human inspection, however, is not without error. We hypothesize that colonoscopy videos may contain additional valuable information missed by the endoscopist. Video segmentation is the first necessary step for the content-based video analysis and retrieval to provide efficient access to the important images and video segments from a large colonoscopy video database. Based on the unique characteristics of colonoscopy videos, we introduce a new scheme to detect and remove blurry frames, and segment the videos into shots based on the contents. Our experimental results show that the average precision and recall of the proposed scheme are over 90% for the detection of non-blurry images. The proposed method of blurry frame detection and shot segmentation is extensible to the videos captured from other endoscopic procedures such as upper gastrointestinal endoscopy, enteroscopy, cystoscopy, and laparoscopy.
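
    One common blur cue, plausibly related to what such a detector measures, is gradient energy: blurry frames have weak edges and therefore low energy. A minimal sketch on a plain 2-D intensity grid (the feature choice and threshold are assumptions for illustration, not the paper's scheme):

```python
def gradient_energy(image):
    """Mean squared horizontal+vertical intensity difference of a 2-D grid."""
    h, w = len(image), len(image[0])
    total, count = 0.0, 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                total += (image[y][x + 1] - image[y][x]) ** 2
                count += 1
            if y + 1 < h:
                total += (image[y + 1][x] - image[y][x]) ** 2
                count += 1
    return total / count

def is_blurry(image, threshold):
    """Flag a frame whose edge energy falls below a (data-driven) threshold."""
    return gradient_energy(image) < threshold
```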

  16. Mean curvature and texture constrained composite weighted random walk algorithm for optic disc segmentation towards glaucoma screening.

    PubMed

    Panda, Rashmi; Puhan, N B; Panda, Ganapati

    2018-02-01

    Accurate optic disc (OD) segmentation is an important step in obtaining cup-to-disc ratio-based glaucoma screening using fundus imaging. It is a challenging task because of the subtle OD boundary, blood vessel occlusion and intensity inhomogeneity. In this Letter, the authors propose an improved version of the random walk algorithm for OD segmentation to tackle such challenges. The algorithm incorporates mean curvature and Gabor texture energy features to define a new composite weight function for computing the edge weights. Unlike deformable model-based OD segmentation techniques, the proposed algorithm is unaffected by curve initialisation and the local-energy-minima problem. The effectiveness of the proposed method is verified on DRIVE, DIARETDB1, DRISHTI-GS and MESSIDOR database images using performance measures such as mean absolute distance, overlapping ratio, dice coefficient, sensitivity, specificity and precision. The obtained OD segmentation results and quantitative performance measures show the robustness and superiority of the proposed algorithm in handling the complex challenges of OD segmentation.

  17. Patch-based automatic retinal vessel segmentation in global and local structural context.

    PubMed

    Cao, Shuoying; Bharath, Anil A; Parker, Kim H; Ng, Jeffrey

    2012-01-01

    In this paper, we extend our published work [1] and propose an automated system to segment retinal vessel bed in digital fundus images with enough adaptability to analyze images from fluorescein angiography. This approach takes into account both the global and local context and enables both vessel segmentation and microvascular centreline extraction. These tools should allow researchers and clinicians to estimate and assess vessel diameter, capillary blood volume and microvascular topology for early stage disease detection, monitoring and treatment. Global vessel bed segmentation is achieved by combining phase-invariant orientation fields with neighbourhood pixel intensities in a patch-based feature vector for supervised learning. This approach is evaluated against benchmarks on the DRIVE database [2]. Local microvascular centrelines within Regions-of-Interest (ROIs) are segmented by linking the phase-invariant orientation measures with phase-selective local structure features. Our global and local structural segmentation can be used to assess both pathological structural alterations and microemboli occurrence in non-invasive clinical settings in a longitudinal study.

  18. Compound image segmentation of published biomedical figures.

    PubMed

    Li, Pengyuan; Jiang, Xiangying; Kambhamettu, Chandra; Shatkay, Hagit

    2018-04-01

    Images convey essential information in biomedical publications. As such, there is a growing interest within the bio-curation and the bio-databases communities, to store images within publications as evidence for biomedical processes and for experimental results. However, many of the images in biomedical publications are compound images consisting of multiple panels, where each individual panel potentially conveys a different type of information. Segmenting such images into constituent panels is an essential first step toward utilizing images. In this article, we develop a new compound image segmentation system, FigSplit, which is based on Connected Component Analysis. To overcome shortcomings typically manifested by existing methods, we develop a quality assessment step for evaluating and modifying segmentations. Two methods are proposed to re-segment the images if the initial segmentation is inaccurate. Experimental results show the effectiveness of our method compared with other methods. The system is publicly available for use at: https://www.eecis.udel.edu/~compbio/FigSplit. The code is available upon request. shatkay@udel.edu. Supplementary data are available online at Bioinformatics.
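
    A much-simplified version of panel splitting looks for all-blank gutters between panels along one axis. FigSplit's connected-component analysis is considerably more general (it handles both axes, irregular layouts, and re-segmentation), so treat this purely as a one-axis illustration with made-up pixel data:

```python
def split_panels(image, blank=0):
    """Split a compound figure into panels at all-blank column gutters.

    `image` is a list of rows of pixel values; returns (start_col, end_col)
    spans, one per detected panel.
    """
    w = len(image[0])
    blank_col = [all(row[x] == blank for row in image) for x in range(w)]
    panels = []
    start = None
    for x in range(w):
        if not blank_col[x] and start is None:
            start = x                         # panel begins
        elif blank_col[x] and start is not None:
            panels.append((start, x - 1))     # panel ends at the gutter
            start = None
    if start is not None:
        panels.append((start, w - 1))
    return panels
```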

  19. CT-based manual segmentation and evaluation of paranasal sinuses.

    PubMed

    Pirner, S; Tingelhoff, K; Wagner, I; Westphal, R; Rilk, M; Wahl, F M; Bootz, F; Eichhorn, Klaus W G

    2009-04-01

    Manual segmentation of computed tomography (CT) datasets was performed for robot-assisted endoscope movement during functional endoscopic sinus surgery (FESS). Segmented 3D models are needed for the robots' workspace definition. A total of 50 preselected CT datasets were each segmented in 150-200 coronal slices with 24 landmarks being set. Three different colors for segmentation represent diverse risk areas. Extension and volumetric measurements were performed. Three-dimensional reconstruction was generated after segmentation. Manual segmentation took 8-10 h for each CT dataset. The mean volumes were: right maxillary sinus 17.4 cm³, left side 17.9 cm³, right frontal sinus 4.2 cm³, left side 4.0 cm³, total frontal sinuses 7.9 cm³, sphenoid sinus right side 5.3 cm³, left side 5.5 cm³, total sphenoid sinus volume 11.2 cm³. Our manually segmented 3D-models present the patient's individual anatomy with a special focus on structures in danger according to the diverse colored risk areas. For safe robot assistance, the high-accuracy models represent an average of the population for anatomical variations, extension and volumetric measurements. They can be used as a database for automatic model-based segmentation. None of the segmentation methods so far described provide risk segmentation. The robot's maximum distance to the segmented border can be adjusted according to the differently colored areas.

  20. Unraveling Pancreatic Segmentation.

    PubMed

    Renard, Yohann; de Mestier, Louis; Perez, Manuela; Avisse, Claude; Lévy, Philippe; Kianmanesh, Reza

    2018-04-01

    Limited pancreatic resections are increasingly performed, but the rate of postoperative fistula is higher than after classical resections. Pancreatic segmentation, anatomically and radiologically identifiable, may theoretically help the surgeon removing selected anatomical portions with their own segmental pancreatic duct and thus might decrease the postoperative fistula rate. We aimed at systematically and comprehensively reviewing the previously proposed pancreatic segmentations and discuss their relevance and limitations. PubMed database was searched for articles investigating pancreatic segmentation, including human or animal anatomy, and cadaveric or surgical studies. Overall, 47/99 articles were selected and grouped into 4 main hypotheses of pancreatic segmentation methodology: anatomic, vascular, embryologic and lymphatic. The head, body and tail segments are gross description without distinct borders. The arterial territories defined vascular segments and isolate an isthmic paucivascular area. The embryological theory relied on the fusion plans of the embryological buds. The lymphatic drainage pathways defined the lymphatic segmentation. These theories had differences, but converged toward separating the head and body/tail parts, and the anterior from posterior and inferior parts of the pancreatic head. The rate of postoperative fistula was not decreased when surgical resection was performed following any of these segmentation theories; hence, none of them appeared relevant enough to guide pancreatic transections. Current pancreatic segmentation theories do not enable defining anatomical-surgical pancreatic segments. Other approaches should be explored, in particular focusing on pancreatic ducts, through pancreatic ducts reconstructions and embryologic 3D modelization.

  1. Multilevel segmentation of intracranial aneurysms in CT angiography images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yan; Zhang, Yue, E-mail: y.zhang525@gmail.com; Navarro, Laurent

    Purpose: Segmentation of aneurysms plays an important role in interventional planning. Yet, the segmentation of both the lumen and the thrombus of an intracranial aneurysm in computed tomography angiography (CTA) remains a challenge. This paper proposes a multilevel segmentation methodology for efficiently segmenting intracranial aneurysms in CTA images. Methods: The proposed methodology first uses the lattice Boltzmann method (LBM) to extract the lumen part directly from the original image. Then, the LBM is applied again on an intermediate image, whose lumen part is filled with the mean gray-level value outside the lumen, to yield an image region containing part of the aneurysm boundary. After that, an expanding disk is introduced to estimate the complete contour of the aneurysm. Finally, the detected contour is used as the initial contour of a level set with ellipse to refine the aneurysm. Results: The results obtained on 11 patients from different hospitals showed that the proposed segmentation was comparable with manual segmentation, and that quantitatively, the average segmentation matching factor (SMF) reached 86.99%, demonstrating good segmentation accuracy. The Chan–Vese method, Sen's model, and Luca's model were used for comparison, and their average SMF values were 39.98%, 40.76%, and 77.11%, respectively. Conclusions: The authors have presented a multilevel segmentation method based on the LBM and a level set with ellipse for accurate segmentation of intracranial aneurysms. Compared to the three existing methods, across all eleven patients the proposed method successfully segments the lumen with the highest SMF values for nine patients and the second highest for the other two. It also segments the entire aneurysm with the highest SMF values for ten patients and the second highest for the remaining one. This makes it a potential tool for clinical assessment of the volume and aspect ratio of intracranial aneurysms.

  2. Automatic brain tumor segmentation with a fast Mumford-Shah algorithm

    NASA Astrophysics Data System (ADS)

    Müller, Sabine; Weickert, Joachim; Graf, Norbert

    2016-03-01

    We propose a fully-automatic method for brain tumor segmentation that does not require any training phase. Our approach is based on a sequence of segmentations using the Mumford-Shah cartoon model with varying parameters. In order to come up with a very fast implementation, we extend the recent primal-dual algorithm of Strekalovskiy et al. (2014) from the 2D to the medically relevant 3D setting. Moreover, we suggest a new confidence refinement and show that it can increase the precision of our segmentations substantially. Our method is evaluated on 188 data sets with high-grade gliomas and 25 with low-grade gliomas from the BraTS14 database. Within a computation time of only three minutes, we achieve Dice scores that are comparable to state-of-the-art methods.
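
    The Dice scores reported above are the standard overlap measure on BraTS data; as context, a minimal sketch of the metric (voxel sets and values are invented for illustration):

```python
def dice_score(pred, ref):
    """Dice overlap between two sets of voxel coordinates: 2|A∩B| / (|A| + |B|)."""
    pred, ref = set(pred), set(ref)
    if not pred and not ref:
        return 1.0  # both empty: perfect agreement by convention
    return 2 * len(pred & ref) / (len(pred) + len(ref))

# toy example: two small "tumor masks" given as sets of voxel indices
a = {(0, 0), (0, 1), (1, 0), (1, 1)}
b = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice_score(a, b))  # 2*3 / (4+4) = 0.75
```

    In BraTS-style evaluation this is computed per tumor region (whole tumor, core, enhancing) over 3D masks rather than toy 2D sets.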

  3. Knee cartilage segmentation using active shape models and local binary patterns

    NASA Astrophysics Data System (ADS)

    González, Germán; Escalante-Ramírez, Boris

    2014-05-01

    Segmentation of knee cartilage is useful for the timely diagnosis and treatment of osteoarthritis (OA). This paper presents a semiautomatic segmentation technique based on Active Shape Models (ASM) combined with Local Binary Patterns (LBP) and its variants to describe the texture surrounding the femoral cartilage. The proposed technique is tested on a 16-image database of different patients and validated through the Leave-One-Out method. We compare different segmentation techniques: ASM-LBP, ASM-medianLBP, and the ASM proposed by Cootes. The ASM-LBP approaches are tested with different ratios to decide which of them best describes the cartilage texture. The results show that ASM-medianLBP performs better than ASM-LBP and ASM. Furthermore, we add a routine which improves robustness against two principal problems: oversegmentation and initialization.

  4. A compact gas-filled avalanche counter for DANCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, C. Y.; Chyzh, A.; Kwan, E.

    2012-08-04

    A compact gas-filled avalanche counter for the detection of fission fragments was developed for a highly segmented 4π γ-ray calorimeter, namely the Detector for Advanced Neutron Capture Experiments (DANCE) located at the Lujan Center of the Los Alamos Neutron Science Center. It has been used successfully in experiments with 235U, 238Pu, 239Pu, and 241Pu isotopes to provide a unique signature that differentiates fission from the competing neutron-capture reaction channel. We also used it to study spontaneous fission in 252Cf. The design and performance of this avalanche counter for targets with extreme α-decay rates up to ~2.4×10^8/s are described.

  5. Foot Structure in Japanese Speech Errors: Normal vs. Pathological

    ERIC Educational Resources Information Center

    Miyakoda, Haruko

    2008-01-01

    Although many studies of speech errors have been presented in the literature, most have focused on errors occurring at either the segmental or feature level. Few, if any, studies have dealt with the prosodic structure of errors. This paper aims to fill this gap by taking up the issue of prosodic structure in Japanese speech errors, with a focus on…

  6. Neoliberal Imaginary, School Choice, and "New Elites" in Public Secondary Schools

    ERIC Educational Resources Information Center

    Yoon, Ee-Seul

    2016-01-01

    There has been a growing concentration of high-achieving students attending selective public schools of choice as part of the neoliberal reforms of education. While this growth has had an eroding effect on the aim of inclusivity in public education, few have explored this development as a new segment of elite schooling. This paper fills this gap…

  7. Multi-atlas based segmentation using probabilistic label fusion with adaptive weighting of image similarity measures.

    PubMed

    Sjöberg, C; Ahnesjö, A

    2013-06-01

    Label fusion multi-atlas approaches to image segmentation can give better results than single-atlas methods. We present a multi-atlas label fusion strategy based on probabilistic weighting of distance maps. Relationships between image similarities and segmentation similarities are estimated in a learning phase and used to derive fusion weights that are proportional to the probability of each atlas improving the segmentation result. The method was tested using a leave-one-out strategy on a database of 21 pre-segmented prostate patients, for different image registrations combined with different image similarity scorings. The probabilistic weighting yields results that are equal to or better than both fusion with equal weights and the STAPLE algorithm. The experiments demonstrate that label fusion by weighted distance maps is feasible, and that probabilistic weighted fusion improves segmentation quality more, the more strongly the individual atlas segmentation quality depends on the corresponding registered image similarity. The regions used for evaluating the image similarity measures were found to be more important than the choice of similarity measure. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
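
    The fusion step can be illustrated with a toy sketch: each atlas contributes a signed distance map (negative inside the structure), the maps are combined with per-atlas weights, and voxels with a negative fused value are labeled foreground. The weights and maps below are invented for illustration; the paper derives its weights from the learned image-similarity/segmentation-similarity relationship.

```python
def fuse_distance_maps(distance_maps, weights):
    """Weighted average of signed distance maps (negative = inside);
    a voxel is labeled foreground where the fused value is below zero."""
    total = sum(weights)
    fused = [
        sum(w * d[i] for w, d in zip(weights, distance_maps)) / total
        for i in range(len(distance_maps[0]))
    ]
    return [1 if v < 0 else 0 for v in fused]

# three 1D "atlases" disagreeing about where the structure ends
maps = [
    [-2.0, -1.0,  0.5,  1.5],   # atlas 1: structure covers voxels 0-1
    [-2.0, -0.5, -0.2,  1.0],   # atlas 2: structure covers voxels 0-2
    [-1.5, -1.0,  0.8,  2.0],   # atlas 3: structure covers voxels 0-1
]
weights = [0.5, 0.2, 0.3]       # e.g. probability that each atlas improves the result
print(fuse_distance_maps(maps, weights))  # -> [1, 1, 0, 0]
```

    Distance-map fusion produces smoother consensus boundaries than per-voxel label voting, which is the motivation for weighting distance maps rather than binary labels.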

  8. Non-invasive measurement of choroidal volume change and ocular rigidity through automated segmentation of high-speed OCT imaging

    PubMed Central

    Beaton, L.; Mazzaferri, J.; Lalonde, F.; Hidalgo-Aguirre, M.; Descovich, D.; Lesk, M. R.; Costantino, S.

    2015-01-01

    We have developed a novel optical approach to determining pulsatile ocular volume changes using automated segmentation of the choroid, which, together with Dynamic Contour Tonometry (DCT) measurements of intraocular pressure (IOP), allows estimation of the ocular rigidity (OR) coefficient. Spectral Domain Optical Coherence Tomography (OCT) videos were acquired with Enhanced Depth Imaging (EDI) at 7 Hz for ~50 seconds at the fundus. A novel segmentation algorithm based on graph search with an edge-probability weighting scheme was developed to measure choroidal thickness (CT) at each frame. Global ocular volume fluctuations were derived from frame-to-frame CT variations using an approximate eye model. Immediately after imaging, IOP and ocular pulse amplitude (OPA) were measured using DCT. OR was calculated from these peak pressure and volume changes. Our automated segmentation algorithm provides the first non-invasive method for determining the ocular volume change due to pulsatile choroidal filling, and for estimating the OR coefficient. Future applications of this method offer an important avenue to understanding the biomechanical basis of ocular pathophysiology. PMID:26137373
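
    The rigidity estimate from peak pressure and volume changes can be sketched with Friedenwald's classic relation K = ln(IOP₂/IOP₁) / ΔV. The abstract does not state which exact formula the authors use, so treat this form, and all numbers below, as illustrative assumptions:

```python
import math

def ocular_rigidity(iop_baseline, iop_peak, delta_volume_ul):
    """Friedenwald rigidity coefficient: K = ln(IOP2 / IOP1) / dV."""
    return math.log(iop_peak / iop_baseline) / delta_volume_ul

# invented values: IOP swings 15 -> 17 mmHg over a 5 uL pulsatile volume change
k = ocular_rigidity(15.0, 17.0, 5.0)
print(round(k, 4))  # rigidity coefficient, units of 1/uL
```

    In the study's setting, ΔV would come from the segmented choroidal volume fluctuation and the IOP swing from the DCT-measured ocular pulse amplitude.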

  9. TOOTHPASTEV6.11.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sankel, David J.; Clair, Aaron B. St.; Langsfield, Joshua D.

    2006-11-01

    Toothpaste is a graphical user interface and Computer Aided Drafting/Manufacturing (CAD/CAM) software package used to plan tool paths for Galil Motion Control hardware. The software is a tool for computer-controlled dispensing of materials and may be used for solid freeform fabrication of components or the precision printing of inks. Mathematical calculations produce a set of segments and arcs that, when coupled together, fill space. The paths of the segments and arcs are then translated into a machine language that controls the motion of motors and translational stages to produce tool paths in three dimensions. As motion begins, material(s) are dispensed or printed along the three-dimensional pathway.

  10. A scale self-adapting segmentation approach and knowledge transfer for automatically updating land use/cover change databases using high spatial resolution images

    NASA Astrophysics Data System (ADS)

    Wang, Zhihua; Yang, Xiaomei; Lu, Chen; Yang, Fengshuo

    2018-07-01

    Automatic updating of land use/cover change (LUCC) databases using high spatial resolution images (HSRI) is important for environmental monitoring and policy making, especially for coastal areas, which connect land and coast and tend to change frequently. Many object-based change detection methods have been proposed, especially those combining historical LUCC data with HSRI. However, the scale parameter(s) used to segment the serial temporal images, which directly determine the average object size, are hard to choose without expert intervention. The samples transferred from historical LUCC data likewise need expert intervention to avoid insufficient or wrong samples. For scale parameter selection, a Scale Self-Adapting Segmentation (SSAS) approach, based on exponential sampling of a scale parameter and location of the local maximum of a weighted local variance, is proposed to address the scale selection problem when segmenting images constrained by LUCC for change detection. For sample transfer, Knowledge Transfer (KT), in which a classifier trained on historical images with LUCC data is applied to the classification of updated images, is also proposed. Comparison experiments were conducted in a coastal area of Zhujiang, China, using SPOT 5 images acquired in 2005 and 2010. The results reveal that (1) SSAS can segment images effectively without expert intervention, and (2) KT can reach the maximum accuracy of sample transfer without expert intervention. The strategy SSAS + KT is a good choice when the historical image matches the LUCC data and the historical and updated images are obtained from the same source.
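
    The SSAS idea of sampling candidate scales exponentially and picking a local maximum of a weighted local variance can be sketched generically. The objective function below is a stand-in, not the paper's actual measure, and the scale values are invented:

```python
def select_scale(scales, objective):
    """Return the first scale that is a local maximum of the objective,
    falling back to the global maximum if no interior peak exists."""
    values = [objective(s) for s in scales]
    for i in range(1, len(values) - 1):
        if values[i - 1] < values[i] >= values[i + 1]:
            return scales[i]
    return scales[max(range(len(values)), key=values.__getitem__)]

# exponential sampling of candidate scale parameters, e.g. 10 * 2**k
scales = [10 * 2 ** k for k in range(6)]       # 10, 20, 40, 80, 160, 320
toy_objective = lambda s: -(s - 80) ** 2       # stand-in: peaks at scale 80
print(select_scale(scales, toy_objective))     # -> 80
```

    Exponential rather than linear sampling keeps the number of trial segmentations small while still covering several orders of magnitude of object size.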

  11. Soft computing approach to 3D lung nodule segmentation in CT.

    PubMed

    Badura, P; Pietka, E

    2014-10-01

    This paper presents a novel, multilevel approach to the segmentation of various types of pulmonary nodules in computed tomography studies. It is based on two branches of computational intelligence: fuzzy connectedness (FC) and evolutionary computation. First, the image and auxiliary data are prepared for the 3D FC analysis during the first stage of the algorithm: mask generation. Its main goal is to handle specific types of nodules attached to the pleura or vessels, and it consists of basic image processing operations as well as dedicated routines for specific cases of nodules. Evolutionary computation is performed on the image and seed points in order to shorten the FC analysis and improve its accuracy. After the FC step, the remaining vessels are removed during the postprocessing stage. The method has been validated using the first dataset of studies acquired and described by the Lung Image Database Consortium (LIDC) and by its latest release, the LIDC-IDRI (Image Database Resource Initiative) database. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Red Lesion Detection Using Dynamic Shape Features for Diabetic Retinopathy Screening.

    PubMed

    Seoud, Lama; Hurtut, Thomas; Chelbi, Jihed; Cheriet, Farida; Langlois, J M Pierre

    2016-04-01

    The development of an automatic telemedicine system for computer-aided screening and grading of diabetic retinopathy depends on reliable detection of retinal lesions in fundus images. In this paper, a novel method for the automatic detection of both microaneurysms and hemorrhages in color fundus images is described and validated. The main contribution is a new set of shape features, called Dynamic Shape Features, that do not require precise segmentation of the regions to be classified. These features represent the evolution of the shape during image flooding and allow discrimination between lesions and vessel segments. The method is validated per lesion and per image using six databases, four of which are publicly available. It proves to be robust with respect to variability in image resolution, quality, and acquisition system. On the Retinopathy Online Challenge's database, the method achieves a FROC score of 0.420, which ranks it fourth. On the Messidor database, when detecting images with diabetic retinopathy, the proposed method achieves an area under the ROC curve of 0.899, comparable to the score of human experts, and it outperforms state-of-the-art approaches.

  13. K-SPAN: A lexical database of Korean surface phonetic forms and phonological neighborhood density statistics.

    PubMed

    Holliday, Jeffrey J; Turnbull, Rory; Eychenne, Julien

    2017-10-01

    This article presents K-SPAN (Korean Surface Phonetics and Neighborhoods), a database of surface phonetic forms and several measures of phonological neighborhood density for 63,836 Korean words. Currently publicly available Korean corpora are limited by the fact that they only provide orthographic representations in Hangeul, which is problematic since phonetic forms in Korean cannot be reliably predicted from orthographic forms. We describe the method used to derive the surface phonetic forms from a publicly available orthographic corpus of Korean, and report on several statistics calculated using this database; namely, segment unigram frequencies, which are compared to previously reported results, along with segment-based and syllable-based neighborhood density statistics for three types of representation: an "orthographic" form, which is a quasi-phonological representation, a "conservative" form, which maintains all known contrasts, and a "modern" form, which represents the pronunciation of contemporary Seoul Korean. These representations are rendered in an ASCII-encoded scheme, which allows users to query the corpus without having to read Korean orthography, and permits the calculation of a wide range of phonological measures.
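
    Segment-based neighborhood density of the kind reported here is conventionally the count of lexicon words reachable by one segment substitution, deletion, or addition. A minimal sketch over toy transcriptions (the lexicon below is hypothetical; the K-SPAN data itself is not reproduced):

```python
def one_segment_neighbors(word, lexicon):
    """Count lexicon entries at edit distance 1 from `word`
    (one segment substituted, deleted, or inserted)."""
    def edit1(a, b):
        if abs(len(a) - len(b)) > 1 or a == b:
            return False
        if len(a) == len(b):                       # substitution
            return sum(x != y for x, y in zip(a, b)) == 1
        short, long_ = sorted((a, b), key=len)     # deletion / insertion
        return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))
    return sum(edit1(word, w) for w in lexicon)

# toy "lexicon" of segment strings, ASCII-coded in the spirit of K-SPAN
lexicon = ["kam", "kan", "ka", "kamt", "pam", "tam"]
print(one_segment_neighbors("kam", lexicon))  # kan, ka, kamt, pam, tam -> 5
```

    Note that treating each character as one segment is a simplification; K-SPAN's representations encode one phonological segment per ASCII symbol precisely so that this kind of string comparison is valid.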

  14. Event segmentation improves event memory up to one month later.

    PubMed

    Flores, Shaney; Bailey, Heather R; Eisenberg, Michelle L; Zacks, Jeffrey M

    2017-08-01

    When people observe everyday activity, they spontaneously parse it into discrete meaningful events. Individuals who segment activity in a more normative fashion show better subsequent memory for the events. If segmenting events effectively leads to better memory, does asking people to attend to segmentation improve subsequent memory? To answer this question, participants viewed movies of naturalistic activity with instructions to remember the activity for a later test, and in some conditions additionally pressed a button to segment the movies into meaningful events or performed a control condition that required button-pressing but not attending to segmentation. In 5 experiments, memory for the movies was assessed at intervals ranging from immediately following viewing to 1 month later. Performing the event segmentation task led to superior memory at delays ranging from 10 min to 1 month. Further, individual differences in segmentation ability predicted individual differences in memory performance for up to a month following encoding. This study provides the first evidence that manipulating event segmentation affects memory over long delays and that individual differences in event segmentation are related to differences in memory over long delays. These effects suggest that attending to how an activity breaks down into meaningful events contributes to memory formation. Instructing people to more effectively segment events may serve as a potential intervention to alleviate everyday memory complaints in aging and clinical populations. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  15. Gene Expression Profiling Reveals Functional Specialization along the Intestinal Tract of a Carnivorous Teleostean Fish (Dicentrarchus labrax)

    PubMed Central

    Calduch-Giner, Josep A.; Sitjà-Bobadilla, Ariadna; Pérez-Sánchez, Jaume

    2016-01-01

    High-quality sequencing reads from the intestine of European sea bass were assembled, annotated by similarity against protein reference databases, and combined with nucleotide sequences from public and private databases. After redundancy filtering, 24,906 non-redundant annotated sequences encoding 15,367 different gene descriptions were obtained. These annotated sequences were used to design a custom, high-density oligo-microarray (8 × 15 K) for the transcriptomic profiling of anterior (AI), middle (MI), and posterior (PI) intestinal segments. Similar molecular signatures were found for the AI and MI segments, which were combined into a single group (AI-MI), whereas the PI stood out separately, with more than 1900 differentially expressed genes at a fold-change cutoff of 2. Functional analysis revealed that molecular and cellular functions related to feed digestion and nutrient absorption and transport were over-represented in the AI-MI segments. By contrast, the initiation and establishment of immune defense mechanisms became especially relevant in the PI, although the microarray expression profiling, validated by qPCR, indicated that these functional changes are gradual from the anterior to the posterior intestinal segments. This functional divergence occurred in association with spatial transcriptional changes in nutrient transporters and the mucosal chemosensing system via G protein-coupled receptors. These findings help to identify key indicators of gut function and to compare the different feeding strategies and immune defense mechanisms acquired along the evolution of teleosts. PMID:27610085
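
    The differential-expression filter mentioned (fold-change cutoff of 2) amounts to keeping genes whose expression ratio between segment groups is at least 2 in either direction. A generic sketch with invented gene names and expression values:

```python
def differentially_expressed(expr_a, expr_b, cutoff=2.0):
    """Return gene names whose expression ratio between two conditions
    meets the fold-change cutoff in either direction."""
    hits = []
    for gene in expr_a:
        ratio = expr_a[gene] / expr_b[gene]
        if ratio >= cutoff or ratio <= 1.0 / cutoff:
            hits.append(gene)
    return hits

# invented normalized expression values for an AI-MI vs PI comparison
ai_mi = {"slc15a1": 120.0, "defb1": 8.0, "actb": 100.0}
pi    = {"slc15a1": 30.0,  "defb1": 40.0, "actb": 95.0}
print(differentially_expressed(ai_mi, pi))  # -> ['slc15a1', 'defb1']
```

    Real microarray pipelines combine such a fold-change filter with a statistical significance test across replicates; the abstract reports only the fold-change criterion.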

  17. Systematization of the protein sequence diversity in enzymes related to secondary metabolic pathways in plants, in the context of big data biology inspired by the KNApSAcK Motorcycle database.

    PubMed

    Ikeda, Shun; Abe, Takashi; Nakamura, Yukiko; Kibinge, Nelson; Hirai Morita, Aki; Nakatani, Atsushi; Ono, Naoaki; Ikemura, Toshimichi; Nakamura, Kensuke; Altaf-Ul-Amin, Md; Kanaya, Shigehiko

    2013-05-01

    Biology is increasingly becoming a data-intensive science with the recent progress of the omics fields, e.g. genomics, transcriptomics, proteomics and metabolomics. The species-metabolite relationship database, KNApSAcK Core, has been widely utilized and cited in metabolomics research, and chronological analysis of that research work has helped to reveal recent trends in metabolomics research. To meet the needs of these trends, the KNApSAcK database has been extended by incorporating a secondary metabolic pathway database called Motorcycle DB. We examined the enzyme sequence diversity related to secondary metabolism by means of batch-learning self-organizing maps (BL-SOMs). Initially, we constructed a map by using a big data matrix consisting of the frequencies of all possible dipeptides in the protein sequence segments of plants and bacteria. The enzyme sequence diversity of the secondary metabolic pathways was examined by identifying clusters of segments associated with certain enzyme groups in the resulting map. The extent of diversity of 15 secondary metabolic enzyme groups is discussed. Data-intensive approaches such as BL-SOM applied to big data matrices are needed for systematizing protein sequences. Handling big data has become an inevitable part of biology.

  18. Impact of the accuracy of automatic segmentation of cell nuclei clusters on classification of thyroid follicular lesions.

    PubMed

    Jung, Chanho; Kim, Changick

    2014-08-01

    Automatic segmentation of cell nuclei clusters is a key building block in systems for quantitative analysis of microscopy cell images. For that reason, it has received great attention over the last decade, and diverse automatic approaches to segmenting clustered nuclei, with varying levels of performance under different test conditions, have been proposed in the literature. To the best of our knowledge, however, there is so far no comparative study of these methods. This study is a first attempt to fill that research gap. More precisely, its purpose is to present an objective performance comparison of existing state-of-the-art segmentation methods. In particular, the impact of their accuracy on the classification of thyroid follicular lesions is investigated quantitatively under the same experimental conditions, to evaluate the applicability of the methods. Thirteen different segmentation approaches are compared in terms not only of errors in nuclei segmentation and delineation, but also of their impact on the performance of systems that classify thyroid follicular lesions, using different metrics (e.g., diagnostic accuracy, sensitivity, specificity, etc.). Extensive experiments have been conducted on a total of 204 digitized thyroid biopsy specimens. Our study demonstrates that significant diagnostic errors can be avoided using more advanced segmentation approaches. We believe that this comprehensive comparative study serves as a reference point and guide for developers and practitioners in choosing an appropriate automatic segmentation technique for building automated systems to classify follicular thyroid lesions. © 2014 International Society for Advancement of Cytometry.

  19. On the importance of FIB-SEM specific segmentation algorithms for porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salzer, Martin, E-mail: martin.salzer@uni-ulm.de; Thiele, Simon, E-mail: simon.thiele@imtek.uni-freiburg.de; Zengerle, Roland, E-mail: zengerle@imtek.uni-freiburg.de

    2014-09-15

    A new algorithmic approach to the segmentation of highly porous three-dimensional image data gained by focused ion beam (FIB) tomography is described, which extends the key principle of local threshold backpropagation described in Salzer et al. (2012). Focused ion beam tomography has been shown to be capable of imaging the microstructure of functional materials. In order to perform a quantitative analysis of the corresponding microstructure, a segmentation task needs to be performed. However, algorithmic segmentation of images obtained with focused ion beam tomography is a challenging problem for highly porous materials if filling the pore phase, e.g. with epoxy resin, is difficult. The gray intensities of individual voxels are not sufficient to determine the phase they represent, and the usual thresholding methods are not applicable. We thus propose a new approach to segmentation that respects the specifics of the imaging process of focused ion beam tomography. As an application of our approach, the segmentation of three-dimensional images of a cathode material used in polymer electrolyte membrane fuel cells is discussed. We show that our approach preserves significantly more of the original nanostructure than a thresholding approach. - Highlights: • We describe a new approach to the segmentation of FIB-SEM images of porous media. • The first and last occurrences of structures are detected by analysing the z-profiles. • The algorithm is validated by comparing it to a manual segmentation. • The new approach shows significantly fewer artifacts than a thresholding approach. • A structural analysis also shows improved results for the obtained microstructure.
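
    The z-profile idea in the highlights, detecting where solid material first appears along each depth profile, can be sketched generically. The threshold and intensity data below are invented; the actual algorithm propagates local thresholds along the profiles as in Salzer et al. (2012):

```python
def first_occurrence(z_profile, threshold):
    """Index of the first voxel along z whose gray value exceeds the
    threshold, i.e. where solid material first appears; None if never."""
    for z, intensity in enumerate(z_profile):
        if intensity > threshold:
            return z
    return None

# toy gray-value profiles along the milling (z) direction
profiles = [
    [12, 14, 90, 95, 88],   # material surface at z = 2
    [10, 11, 13, 12, 85],   # deep pore: material only appears at z = 4
]
print([first_occurrence(p, 50) for p in profiles])  # -> [2, 4]
```

    The per-profile detection is what makes the approach robust for unfilled pores, where a single global threshold cannot separate pore shine-through from solid material.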

  20. A Multi-Objective Decision Making Approach for Solving the Image Segmentation Fusion Problem.

    PubMed

    Khelifi, Lazhar; Mignotte, Max

    2017-08-01

    Image segmentation fusion is defined as the set of methods which aim at merging several image segmentations in a manner that takes full advantage of the complementarity of each one. Previous relevant research in this field has been impeded by the difficulty of identifying a single, appropriate segmentation fusion criterion that provides the best possible, i.e., the most informative, fusion result. In this paper, we propose a new model of image segmentation fusion based on multi-objective optimization which can mitigate this problem and obtain a final improved segmentation result. Our fusion framework incorporates the dominance concept in order to efficiently combine and optimize two complementary segmentation criteria, namely the global consistency error and the F-measure (precision-recall) criterion. To this end, we present a hierarchical and efficient way to optimize the multi-objective consensus energy function related to this fusion model, which exploits a simple and deterministic iterative relaxation strategy combining the different image segments. This step is followed by a decision-making task based on the so-called "technique for order preference by similarity to ideal solution" (TOPSIS). Results obtained on two publicly available databases with manual ground-truth segmentations clearly show that our multi-objective energy-based model gives better results than the classical mono-objective one.
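
    The final decision step names TOPSIS, whose core idea is to rank candidates by closeness to an ideal point and distance from an anti-ideal point over the criterion scores. A minimal unnormalized sketch with invented candidate names and scores (higher is better for both criteria):

```python
def topsis_rank(candidates):
    """Rank candidate solutions by relative closeness to the ideal point.
    `candidates` maps name -> list of criterion scores (higher = better)."""
    scores = list(candidates.values())
    ideal = [max(col) for col in zip(*scores)]   # best value per criterion
    anti = [min(col) for col in zip(*scores)]    # worst value per criterion
    dist = lambda v, ref: sum((a - b) ** 2 for a, b in zip(v, ref)) ** 0.5
    closeness = {
        name: dist(v, anti) / (dist(v, anti) + dist(v, ideal))
        for name, v in candidates.items()
    }
    return sorted(closeness, key=closeness.get, reverse=True)

# invented candidate segmentations scored on two fusion criteria
candidates = {"seg_a": [0.80, 0.60], "seg_b": [0.70, 0.90], "seg_c": [0.60, 0.55]}
print(topsis_rank(candidates))  # -> ['seg_b', 'seg_a', 'seg_c']
```

    Full TOPSIS also normalizes and weights the criteria before computing distances; that step is omitted here for brevity.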

  1. Spatiotemporal Segmentation and Modeling of the Mitral Valve in Real-Time 3D Echocardiographic Images.

    PubMed

    Pouch, Alison M; Aly, Ahmed H; Lai, Eric K; Yushkevich, Natalie; Stoffers, Rutger H; Gorman, Joseph H; Cheung, Albert T; Gorman, Robert C; Yushkevich, Paul A

    2017-09-01

    Transesophageal echocardiography is the primary imaging modality for preoperative assessment of mitral valves with ischemic mitral regurgitation (IMR). While there are well known echocardiographic insights into the 3D morphology of mitral valves with IMR, such as annular dilation and leaflet tethering, less is understood about how quantification of valve dynamics can inform surgical treatment of IMR or predict short-term recurrence of the disease. As a step towards filling this knowledge gap, we present a novel framework for 4D segmentation and geometric modeling of the mitral valve in real-time 3D echocardiography (rt-3DE). The framework integrates multi-atlas label fusion and template-based medial modeling to generate quantitatively descriptive models of valve dynamics. The novelty of this work is that temporal consistency in the rt-3DE segmentations is enforced during both the segmentation and modeling stages with the use of groupwise label fusion and Kalman filtering. The algorithm is evaluated on rt-3DE data series from 10 patients: five with normal mitral valve morphology and five with severe IMR. In these 10 data series that total 207 individual 3DE images, each 3DE segmentation is validated against manual tracing and temporal consistency between segmentations is demonstrated. The ultimate goal is to generate accurate and consistent representations of valve dynamics that can both visually and quantitatively provide insight into normal and pathological valve function.

  2. A shape prior-based MRF model for 3D masseter muscle segmentation

    NASA Astrophysics Data System (ADS)

    Majeed, Tahir; Fundana, Ketut; Lüthi, Marcel; Beinemann, Jörg; Cattin, Philippe

    2012-02-01

    Medical image segmentation is generally an ill-posed problem that can only be solved by incorporating prior knowledge. The ambiguities arise due to the presence of noise, weak edges, imaging artifacts, inhomogeneous interiors, and adjacent anatomical structures with an intensity profile similar to the target structure. In this paper we propose a novel approach to segmenting the masseter muscle in CT datasets using graph-cuts with additional 3D shape priors, which is robust to noise, artifacts, and shape deformations. The main contribution of this paper is in translating the 3D shape knowledge into both the unary and pairwise potentials of a Markov Random Field (MRF). The segmentation task is cast as a Maximum-A-Posteriori (MAP) estimation of the MRF. Graph-cut is then used to obtain the global minimum, which yields the segmentation of the masseter muscle. The method is tested on 21 CT datasets of the masseter muscle, which are noisy, with almost all possessing mild to severe imaging artifacts such as the high-density artifacts caused by, e.g., the very common dental fillings and dental implants. We show that the proposed technique produces clinically acceptable results for the challenging problem of muscle segmentation, and we further provide a quantitative and qualitative comparison with other methods. We statistically show that adding additional shape priors to both the unary and pairwise potentials increases the robustness of the proposed method on noisy datasets.
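
    The MAP-MRF formulation minimizes an energy that sums unary (data and shape) terms and pairwise (smoothness) terms over neighboring sites; graph-cut finds the labeling with minimum energy. A toy evaluation of such an energy on a 1D labeling, with invented potentials and a simple Potts pairwise term:

```python
def mrf_energy(labels, unary, pairwise_weight):
    """E(x) = sum_i unary[i][x_i] + w * sum_{i~j} [x_i != x_j]
    (Potts pairwise term over neighboring sites in a 1D chain)."""
    data_term = sum(unary[i][x] for i, x in enumerate(labels))
    smooth_term = pairwise_weight * sum(
        labels[i] != labels[i + 1] for i in range(len(labels) - 1)
    )
    return data_term + smooth_term

# invented unary costs for labels {0: background, 1: muscle} at 4 sites
unary = [(0.2, 1.0), (0.9, 0.3), (0.8, 0.2), (0.1, 1.2)]
print(round(mrf_energy([0, 1, 1, 0], unary, pairwise_weight=0.5), 3))  # -> 1.8
```

    In the paper's method the shape prior modifies both the unary costs and the pairwise weights; here the pairwise term is a plain Potts penalty for illustration, and a graph-cut solver would search over all labelings rather than evaluating one.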

  3. From Acoustic Segmentation to Language Processing: Evidence from Optical Imaging

    PubMed Central

    Obrig, Hellmuth; Rossi, Sonja; Telkemeyer, Silke; Wartenburger, Isabell

    2010-01-01

    During language acquisition in infancy and when learning a foreign language, the segmentation of the auditory stream into words and phrases is a complex process. Intuitively, learners use “anchors” to segment the acoustic speech stream into meaningful units like words and phrases. Regularities on a segmental (e.g., phonological) or suprasegmental (e.g., prosodic) level can provide such anchors. Regarding the neuronal processing of these two kinds of linguistic cues, a left-hemispheric dominance for segmental information and a right-hemispheric bias for suprasegmental information have been reported in adults. Though lateralization is common in a number of higher cognitive functions, its prominence in language may also be a key to understanding the rapid emergence of the language network in infants and the ease with which we master our language in adulthood. One question here is whether hemispheric lateralization is driven by linguistic input per se or whether non-linguistic, especially acoustic, factors “guide” the lateralization process. Methodologically, functional magnetic resonance imaging provides unsurpassed anatomical detail for such an enquiry. However, instrumental noise, experimental constraints, and interference with EEG assessment limit its applicability, particularly in infants and when investigating the link between auditory and linguistic processing. Optical methods have the potential to fill this gap. Here we review a number of recent studies using optical imaging to investigate hemispheric differences during segmentation and basic auditory feature analysis in language development. PMID:20725516

  4. Torque test measurement in segmental bone defects using porous calcium phosphate cement implants.

    PubMed

    Kroese-Deutman, Henriette C; Wolke, Joop G C; Spauwen, Paul H M; Jansen, John A

    2010-10-01

    This study was performed to assess the bone-healing characteristics of porous calcium phosphate (Ca-P) cement when implanted in a rabbit segmental defect model, and to determine the reliability of torque testing as a method to verify bone healing. The middiaphyseal radius was chosen as the site for creating bilateral defects of increasing size (5, 10, and 15 mm), which were either filled with porous Ca-P cement or left open as a control. After 12 weeks of implantation, torque test measurements as well as histological and radiographic evaluations were performed. In two of the open 15 mm control defects, bone bridging was visible on radiographic and histological evaluation. Bone was observed to be present in all porous Ca-P cement implants (5, 10, and 15 mm defects) after 12 weeks. No significant differences in torque measurements were observed between the 5 and 10 mm filled and open control defects using a t-test. In addition, the mechanical strength of all operated specimens was similar to that of nonoperated bone samples. The torsion data for the 15 mm open defect appeared to be lower than for the filled 15 mm defect, but no significant difference could be proven. Within the limitations of the study design, porous Ca-P cement implants demonstrated osteoconductive properties and proved to be a suitable scaffold material in a weight-bearing situation. Furthermore, the torque testing method used was found to be unreliable for testing the mechanical properties of the healed bone defects.

  5. Automatic bone outer contour extraction from B-mode ultrasound images based on local phase symmetry and quadratic polynomial fitting

    NASA Astrophysics Data System (ADS)

    Karlita, Tita; Yuniarno, Eko Mulyanto; Purnama, I. Ketut Eddy; Purnomo, Mauridhi Hery

    2017-06-01

    Analyzing ultrasound (US) images to obtain the shapes and structures of particular anatomical regions is an interesting field of study, since US imaging is a non-invasive method to capture internal structures of a human body. However, bone segmentation in US images is still challenging because the images are strongly influenced by speckle noise and have poor quality. This paper proposes a combination of local phase symmetry and quadratic polynomial fitting to extract the bone outer contour (BOC) from two-dimensional (2D) B-mode US images as an initial step towards three-dimensional (3D) bone surface reconstruction. Using local phase symmetry, the bone is initially extracted from the US images. The BOC is then extracted by scanning for one pixel on the bone boundary in each column of the image using a first-phase-features searching method. Quadratic polynomial fitting is utilized to refine and estimate the locations of pixels that fail to be detected during the extraction process. A hole filling method is then applied, utilizing the polynomial coefficients to fill the gaps with new pixels. The proposed method is able to estimate the new pixel positions and ensures smoothness and continuity of the contour path. Evaluations are done using cow and goat bones by comparing the resulting BOCs with contours produced by manual segmentation and by Canny edge detection. The evaluation shows that our proposed method produces excellent results, with an average MSE before and after hole filling of 0.65.
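
The gap-repair step can be sketched with NumPy's polynomial fitting; the contour below is a hypothetical column-indexed boundary with missing detections marked as NaN:

```python
import numpy as np

# Toy sketch: fill missing contour pixels (NaN) in a column-indexed bone
# contour with a quadratic polynomial fitted to the detected pixels.

def fill_contour_gaps(ys):
    ys = np.asarray(ys, dtype=float)
    cols = np.arange(len(ys))
    known = ~np.isnan(ys)
    coeffs = np.polyfit(cols[known], ys[known], 2)     # fit y = a*x^2 + b*x + c
    filled = ys.copy()
    filled[~known] = np.polyval(coeffs, cols[~known])  # evaluate fit at the gaps
    return filled
```

A single quadratic over the whole contour is a simplification; fitting over a local window around each gap would follow the same pattern.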

  6. Image-based modeling of the flow transition from a Berea rock matrix to a propped fracture

    NASA Astrophysics Data System (ADS)

    Sanematsu, P.; Willson, C. S.; Thompson, K. E.

    2013-12-01

    In the past decade, new technologies and advances in horizontal hydraulic fracturing to extract oil and gas from tight rocks have raised questions regarding the physics of the flow and transport processes that occur during production. Many of the multi-dimensional details of flow from the rock matrix into the fracture, and within the proppant-filled fracture, are still unknown, which leads to unreliable well production estimates. In this work, we use X-ray computed microtomography (XCT) to image 30/60 CarboEconoprop lightweight ceramic proppant packed between Berea sandstone cores (6 mm in diameter and ~2 mm in height) under 4000 psi (~28 MPa) loading stress. Image processing and segmentation of the 6-micron voxel resolution tomography dataset into solid and void space involved filtering with anisotropic diffusion (AD), segmentation using an indicator kriging (IK) algorithm, and removal of noise using a remove-islands-and-holes program. Physically representative pore network structures were generated from the XCT images, and a representative elementary volume (REV) was analyzed using both permeability and effective porosity convergence. Boundary conditions were introduced to mimic the flow patterns that occur when fluid moves from the matrix into the proppant-filled fracture and then downstream within the proppant-filled fracture. A smaller domain, containing Berea and proppants close to the interface, was meshed using an in-house unstructured meshing algorithm that allows different levels of refinement. Although most of this domain contains proppants, the Berea section accounted for the majority of the elements due to mesh refinement in this region of smaller pores. A finite element method (FEM) Stokes flow model was used to provide more detailed insight into the flow transition from rock matrix to fracture. Results using different pressure gradients are used to describe the flow transition from the Berea rock matrix to the proppant-filled fracture.

  7. Sequence-based Network Completion Reveals the Integrality of Missing Reactions in Metabolic Networks*

    PubMed Central

    Krumholz, Elias W.; Libourel, Igor G. L.

    2015-01-01

    Genome-scale metabolic models are central in connecting genotypes to metabolic phenotypes. However, even for well studied organisms, such as Escherichia coli, draft networks do not contain a complete biochemical network. Missing reactions are referred to as gaps. These gaps need to be filled to enable functional analysis, and gap-filling choices influence model predictions. To investigate whether functional networks existed where all gap-filling reactions were supported by sequence similarity to annotated enzymes, four draft networks were supplemented with all reactions from the Model SEED database for which minimal sequence similarity was found in their genomes. Quadratic programming revealed that the number of reactions that could partake in a gap-filling solution was vast: 3,270 in the case of E. coli, where 72% of the metabolites in the draft network could connect a gap-filling solution. Nonetheless, no network could be completed without the inclusion of orphaned enzymes, suggesting that parts of the biochemistry integral to biomass precursor formation are uncharacterized. However, many gap-filling reactions were well determined, and the resulting networks showed improved prediction of gene essentiality compared with networks generated through canonical gap filling. In addition, gene essentiality predictions that were sensitive to poorly determined gap-filling reactions were of poor quality, suggesting that damage to the network structure resulting from the inclusion of erroneous gap-filling reactions may be predictable. PMID:26041773
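
A much-simplified sketch of gap filling: greedy reachability over toy reactions, not the paper's quadratic-programming formulation, with all metabolite names hypothetical:

```python
# Reactions are (inputs, outputs) pairs. We add the fewest candidate database
# reactions needed so a target (biomass precursor) becomes producible from
# seed metabolites. Greedy heuristic only; real gap filling optimizes globally.

def producible(seeds, reactions):
    """Metabolites reachable by repeatedly firing reactions whose inputs are met."""
    have = set(seeds)
    changed = True
    while changed:
        changed = False
        for ins, outs in reactions:
            if set(ins) <= have and not set(outs) <= have:
                have |= set(outs)
                changed = True
    return have

def gap_fill(seeds, draft, candidates, target):
    """Greedily add candidate reactions until `target` becomes producible."""
    added, pool = [], list(candidates)
    while target not in producible(seeds, draft + added):
        if not pool:
            raise ValueError("target not producible with given candidates")
        # add the candidate whose inclusion makes the most metabolites producible
        best = max(pool, key=lambda c: len(producible(seeds, draft + added + [c])))
        added.append(best)
        pool.remove(best)
    return added
```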

  8. Molecular-dynamics simulations of crosslinking and confinement effects on structure, segmental mobility and mechanics of filled elastomers

    NASA Astrophysics Data System (ADS)

    Davris, Theodoros; Lyulin, Alexey V.

    2016-05-01

    The significant drop of the storage modulus under uniaxial deformation (Payne effect) restrains the performance of elastomer-based composites and the development of possible new applications. In this paper, molecular-dynamics (MD) computer simulations using the LAMMPS MD package have been performed to study the mechanical properties of a coarse-grained model of this family of nanocomposite materials. Our goal is to provide simulation-based insight into the viscoelastic properties of filled elastomers, and to connect the macroscopic mechanics with the composite microstructure, the strength of the polymer-filler interactions, and the polymer mobility at different scales. To this end we simulate random copolymer films capped between two infinite solid (filler aggregate) walls. We systematically vary the strength of the polymer-substrate adhesion interactions, the degree of polymer confinement (film thickness), and the polymer crosslinking density, and study their influence on the equilibrium and non-equilibrium structure, segmental dynamics, and mechanical properties of the simulated systems. The glass-transition temperature increased once the mesh size became smaller than the chain radius of gyration; otherwise it remained invariant to mesh-size variations. This increase in the glass-transition temperature was accompanied by a monotonic slowing-down of segmental dynamics on all studied length scales, which is attributed to the correspondingly decreased width of the bulk density layer observed in films whose thickness was larger than the end-to-end distance of the bulk polymer chains. To test this hypothesis, additional simulations were performed in which the crystalline walls were replaced with amorphous or rough walls.
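
The chain radius of gyration against which the mesh size is compared can be computed directly from bead coordinates (toy values below; equal bead masses assumed):

```python
import numpy as np

# Radius of gyration: root-mean-square distance of chain beads from their
# center of mass, the length scale the abstract compares to the filler mesh.

def radius_of_gyration(coords):
    r = np.asarray(coords, dtype=float)
    com = r.mean(axis=0)                              # center of mass
    return float(np.sqrt(((r - com) ** 2).sum(axis=1).mean()))
```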

  9. Fault displacement hazard assessment for nuclear installations based on IAEA safety standards

    NASA Astrophysics Data System (ADS)

    Fukushima, Y.

    2016-12-01

    In the IAEA Safety NS-R-3, surface fault displacement hazard assessment (FDHA) is required for the siting of nuclear installations. If any capable faults exist in the candidate site, IAEA recommends the consideration of alternative sites. However, due to the progress in palaeoseismological investigations, capable faults may be found in existing site. In such a case, IAEA recommends to evaluate the safety using probabilistic FDHA (PFDHA), which is an empirical approach based on still quite limited database. Therefore a basic and crucial improvement is to increase the database. In 2015, IAEA produced a TecDoc-1767 on Palaeoseismology as a reference for the identification of capable faults. Another IAEA Safety Report 85 on ground motion simulation based on fault rupture modelling provides an annex introducing recent PFDHAs and fault displacement simulation methodologies. The IAEA expanded the project of FDHA for the probabilistic approach and the physics based fault rupture modelling. The first approach needs a refinement of the empirical methods by building a world wide database, and the second approach needs to shift from kinematic to the dynamic scheme. Both approaches can complement each other, since simulated displacement can fill the gap of a sparse database and geological observations can be useful to calibrate the simulations. The IAEA already supported a workshop in October 2015 to discuss the existing databases with the aim of creating a common worldwide database. A consensus of a unified database was reached. The next milestone is to fill the database with as many fault rupture data sets as possible. Another IAEA work group had a WS in November 2015 to discuss the state-of-the-art PFDHA as well as simulation methodologies. Two groups jointed a consultancy meeting in February 2016, shared information, identified issues, discussed goals and outputs, and scheduled future meetings. Now we may aim at coordinating activities for the whole FDHA tasks jointly.

  10. Algorithm to calculate proportional area transformation factors for digital geographic databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, R.

    1983-01-01

    A computer technique is described for determining proportionate-area factors used to transform thematic data between large geographic areal databases. The number of calculations in the algorithm increases linearly with the number of segments in the polygonal definitions of the databases, and increases with the square root of the total number of chains. Experience is presented in calculating transformation factors for two national databases, the USGS Water Cataloging Unit outlines and DOT county boundaries, which consist of 2100 and 3100 polygons respectively. The technique facilitates using thematic data defined on various natural bases (watersheds, landcover units, etc.) in analyses involving economic and other administrative bases (states, counties, etc.), and vice versa.
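
A minimal sketch of proportionate-area transformation factors, using axis-aligned rectangles in place of general polygon overlays (real databases require polygon clipping; the zone coordinates are invented):

```python
# Each zone is an axis-aligned rectangle (xmin, ymin, xmax, ymax).
# The factor for a target zone is the fraction of the source zone's
# area that falls inside it.

def overlap_area(a, b):
    w = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    h = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    return w * h

def area(r):
    return (r[2] - r[0]) * (r[3] - r[1])

def transfer_factors(source_zone, target_zones):
    """Proportionate-area factors of source_zone against each target zone."""
    return [overlap_area(source_zone, t) / area(source_zone) for t in target_zones]
```

Thematic data defined on the source base (e.g., a watershed total) can then be apportioned to the target base (e.g., counties) by multiplying the value by each factor.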

  11. High-dynamic-range imaging for cloud segmentation

    NASA Astrophysics Data System (ADS)

    Dev, Soumyabrata; Savoy, Florian M.; Lee, Yee Hui; Winkler, Stefan

    2018-04-01

    Sky-cloud images obtained from ground-based sky cameras are usually captured using a fisheye lens with a wide field of view. However, the sky exhibits a large dynamic range in terms of luminance, more than a conventional camera can capture. It is thus difficult to capture the details of an entire scene with a regular camera in a single shot. In most cases, the circumsolar region is overexposed, and the regions near the horizon are underexposed. This renders cloud segmentation for such images difficult. In this paper, we propose HDRCloudSeg - an effective method for cloud segmentation using high-dynamic-range (HDR) imaging based on multi-exposure fusion. We describe the HDR image generation process and release a new database to the community for benchmarking. Our proposed approach is the first using HDR radiance maps for cloud segmentation and achieves very good results.
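
The multi-exposure fusion idea can be sketched as a per-pixel weighted average favoring well-exposed values; this uses a generic well-exposedness weight, not necessarily the HDRCloudSeg pipeline's exact fusion rule:

```python
import numpy as np

# Minimal multi-exposure fusion sketch: pixels near mid-gray (0.5) get high
# weight, so over- and under-exposed regions defer to better exposures.
# Input images are toy arrays with intensities in [0, 1].

def fuse_exposures(images, sigma=0.2):
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0)        # normalize weights across exposures
    return (weights * stack).sum(axis=0)  # per-pixel weighted average
```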

  12. Geodesic Distance Algorithm for Extracting the Ascending Aorta from 3D CT Images

    PubMed Central

    Jang, Yeonggul; Jung, Ho Yub; Hong, Youngtaek; Cho, Iksung; Shim, Hackjoon; Chang, Hyuk-Jae

    2016-01-01

    This paper presents a method for the automatic 3D segmentation of the ascending aorta from coronary computed tomography angiography (CCTA). The segmentation is performed in three steps. First, the initial seed points are selected by minimizing a newly proposed energy function across the Hough circles. Second, the ascending aorta is segmented by geodesic distance transformation. Third, the seed points are effectively transferred through the next axial slice by a novel transfer function. Experiments are performed using a database composed of 10 patients' CCTA images. For the experiment, the ground truths are annotated manually on the axial image slices by a medical expert. A comparative evaluation with state-of-the-art commercial aorta segmentation algorithms shows that our approach is computationally more efficient and accurate under the DSC (Dice Similarity Coefficient) measurements. PMID:26904151
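
A toy grid version of the geodesic-distance-growing step: breadth-first distance from seed points, restricted to pixels whose intensity is close to the seed intensity (the tolerance is a simplified stand-in for the paper's transformation):

```python
from collections import deque

# Grow a region from seed pixels; geodesic distance is path length measured
# inside the intensity-similar region (a stand-in for the aortic lumen).

def geodesic_region(image, seeds, tol):
    rows, cols = len(image), len(image[0])
    ref = sum(image[r][c] for r, c in seeds) / len(seeds)  # seed intensity
    dist = {s: 0 for s in seeds}
    q = deque(seeds)
    while q:
        r, c = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in dist
                    and abs(image[nr][nc] - ref) <= tol):
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    return dist  # pixel -> geodesic distance from the nearest seed
```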

  13. A label field fusion Bayesian model and its penalized maximum Rand estimator for image segmentation.

    PubMed

    Mignotte, Max

    2010-06-01

    This paper presents a novel segmentation approach based on a Markov random field (MRF) fusion model which aims at combining several segmentation results, associated with simpler clustering models, in order to achieve a more reliable and accurate final segmentation. The proposed fusion model is derived from the recently introduced probabilistic Rand measure for comparing one segmentation result to one or more manual segmentations of the same image. This non-parametric measure allows us to derive an appealing fusion model of label fields, easily expressed as a Gibbs distribution, or as a nonstationary MRF model defined on a complete graph. Concretely, this Gibbs energy model encodes the set of binary constraints, in terms of pairs of pixel labels, provided by each segmentation result to be fused. Combined with a prior distribution, this energy-based Gibbs model also allows for the definition of a penalized maximum probabilistic Rand estimator, with which the fusion of simple, quickly estimated segmentation results appears as an interesting alternative to the complex segmentation models existing in the literature. This fusion framework has been successfully applied to the Berkeley image database. The experiments reported in this paper demonstrate that the proposed method is efficient in terms of visual evaluation and quantitative performance measures, and performs well compared to the best state-of-the-art segmentation methods recently proposed in the literature.
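
The pairwise-agreement measure underlying the fusion energy, the (unadjusted) Rand index between two label fields, can be computed directly:

```python
from itertools import combinations

# Rand index: fraction of pixel pairs on which two labelings agree about
# "same segment" vs. "different segments". Invariant to label permutations.

def rand_index(a, b):
    pairs = list(combinations(range(len(a)), 2))
    agree = sum((a[i] == a[j]) == (b[i] == b[j]) for i, j in pairs)
    return agree / len(pairs)
```

The fusion model rewards label fields that maximize this pairwise agreement against all the input segmentations simultaneously.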

  14. Fast Multiclass Segmentation using Diffuse Interface Methods on Graphs

    DTIC Science & Technology

    2013-02-01

    ...000 28 × 28 images of handwritten digits 0 through 9; examples of entries can be found in Figure 6. The images include digits from 0 to 9, and the task is to classify each of the images into the corresponding digit; thus, this is a 10-class segmentation problem. To construct the weight matrix, we used N ... "The MNIST database of handwritten digits." [Online]. Available: http://yann.lecun.com/exdb/mnist/

  15. Model-based segmentation of hand radiographs

    NASA Astrophysics Data System (ADS)

    Weiler, Frank; Vogelsang, Frank

    1998-06-01

    An important procedure in pediatrics is determining the skeletal maturity of a patient from radiographs of the hand, and there is great interest in the automation of this tedious and time-consuming task. We present a new method for the segmentation of the bones of the hand, which allows the assessment of skeletal maturity against an appropriate database of reference bones, similar to atlas-based methods. The proposed algorithm uses an extended active contour model for the segmentation of the hand bones, which incorporates a priori knowledge of the shape and topology of the bones in an additional energy term. This `scene knowledge' is integrated in a complex hierarchical image model that is used for the image analysis task.

  16. Off-lexicon online Arabic handwriting recognition using neural network

    NASA Astrophysics Data System (ADS)

    Yahia, Hamdi; Chaabouni, Aymen; Boubaker, Houcine; Alimi, Adel M.

    2017-03-01

    This paper highlights a new method for online Arabic handwriting recognition based on grapheme segmentation. The main contribution of our work is to explore the utility of the Beta-elliptic model in segmentation and feature extraction for online handwriting recognition. Our method consists of decomposing the input signal into continuous parts, called graphemes, based on the Beta-elliptic model, and classifying them according to their position in the pseudo-word. The segmented graphemes are then described by a combination of geometric features and trajectory shape modeling. The efficiency of the considered features has been evaluated using a feed-forward neural network classifier. Experimental results on the benchmark ADAB database show the performance of the proposed method.

  17. Light-leaking region segmentation of FOG fiber based on quality evaluation of infrared image

    NASA Astrophysics Data System (ADS)

    Liu, Haoting; Wang, Wei; Gao, Feng; Shan, Lianjie; Ma, Yuzhou; Ge, Wenqian

    2014-07-01

    To improve the assembly reliability of the Fiber Optic Gyroscope (FOG), a light leakage detection system and method are developed. First, an agile movement control platform is designed to implement pose control of the FOG optical path component in 6 Degrees of Freedom (DOF). Second, an infrared camera is employed to capture working-state images of the corresponding fibers in the optical path component after the manual assembly of the FOG, so that the entire light transmission process of key sections in the light path can be recorded. Third, an image-quality-evaluation-based region segmentation method is developed for the light leakage images. In contrast to traditional methods, image quality metrics, including region contrast, edge blur, and image noise level, are first computed to characterize the infrared image; robust segmentation algorithms, including graph cut and flood fill, are then applied for region segmentation according to the measured image quality. Finally, after segmentation of the light leakage region, the typical light-leaking types, such as point defects, wedge defects, and surface defects, can be identified. By using the image-quality-based method, the applicability of the proposed system is improved dramatically. Extensive experimental results have proved the validity and effectiveness of this method.
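
The flood-fill step mentioned above can be sketched as a standard breadth-first region grow (the grid and labels below are toy values):

```python
from collections import deque

# Flood fill: starting from a seed pixel, relabel all 4-connected pixels
# that share the seed's original value.

def flood_fill(grid, seed, new_label):
    rows, cols = len(grid), len(grid[0])
    target = grid[seed[0]][seed[1]]
    if target == new_label:
        return grid
    grid[seed[0]][seed[1]] = new_label
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == target:
                grid[nr][nc] = new_label
                q.append((nr, nc))
    return grid
```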

  18. Neural Networks for Segregation of Multiple Objects: Visual Figure-Ground Separation and Auditory Pitch Perception.

    NASA Astrophysics Data System (ADS)

    Wyse, Lonce

    An important component of perceptual object recognition is the segmentation into coherent perceptual units of the "blooming buzzing confusion" that bombards the senses. The work presented herein develops neural network models of some key processes of pre-attentive vision and audition that serve this goal. A neural network model, called an FBF (Feature-Boundary-Feature) network, is proposed for automatic parallel separation of multiple figures from each other and their backgrounds in noisy images. Figure-ground separation is accomplished by iterating operations of a Boundary Contour System (BCS) that generates a boundary segmentation of a scene, and a Feature Contour System (FCS) that compensates for variable illumination and fills in surface properties using boundary signals. A key new feature is the use of the FBF filling-in process for the figure-ground separation of connected regions, which are subsequently more easily recognized. The new CORT-X 2 model is a feed-forward version of the BCS that is designed to detect, regularize, and complete boundaries in up to 50 percent noise. It also exploits the complementary properties of on-cells and off-cells to generate boundary segmentations and to compensate for boundary gaps during filling-in. In the realm of audition, many sounds are dominated by energy at integer multiples, or "harmonics", of a fundamental frequency. For such sounds (e.g., vowels in speech), the individual frequency components fuse, so that they are perceived as one sound source with a pitch at the fundamental frequency. Pitch is integral to separating auditory sources, as well as to speaker identification and speech understanding. A neural network model of pitch perception called SPINET (SPatial PItch NETwork) is developed and used to simulate a broader range of perceptual data than previous spectral models. The model employs a bank of narrowband filters as a simple model of basilar membrane mechanics, spectral on-center off-surround competitive interactions, and a "harmonic sieve" mechanism whereby the strength of a pitch depends only on spectral regions near harmonics. The model is evaluated using data involving mistuned components, shifted harmonics, complex tones with varying phase relationships, and continuous spectra such as rippled noise and narrow noise bands.

  19. K-Means Based Fingerprint Segmentation with Sensor Interoperability

    NASA Astrophysics Data System (ADS)

    Yang, Gongping; Zhou, Guang-Tong; Yin, Yilong; Yang, Xiukun

    2010-12-01

    A critical step in an automatic fingerprint recognition system is the segmentation of fingerprint images. Existing methods are usually designed to segment fingerprint images originated from a certain sensor. Thus their performances are significantly affected when dealing with fingerprints collected by different sensors. This work studies the sensor interoperability of fingerprint segmentation algorithms, which refers to the algorithm's ability to adapt to the raw fingerprints obtained from different sensors. We empirically analyze the sensor interoperability problem, and effectively address the issue by proposing a k-means based segmentation method called SKI. SKI clusters foreground and background blocks of a fingerprint image based on the k-means algorithm, where a fingerprint block is represented by a 3-dimensional feature vector consisting of block-wise coherence, mean, and variance (abbreviated as CMV). SKI also employs morphological postprocessing to achieve favorable segmentation results. We perform SKI on each fingerprint to ensure sensor interoperability. The interoperability and robustness of our method are validated by experiments performed on a number of fingerprint databases which are obtained from various sensors.
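
The block-clustering step can be sketched with a tiny two-cluster k-means over 3-D CMV-style feature vectors (naive initialization; the feature values below are made up, not real fingerprint statistics):

```python
# Toy k-means with k = 2 over (coherence, mean, variance)-style block features.

def kmeans2(points, iters=20):
    centers = [points[0], points[-1]]               # naive initialization
    for _ in range(iters):
        groups = [[], []]
        for p in points:                            # assign to nearest center
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[d.index(min(d))].append(p)
        centers = [tuple(sum(x) / len(g) for x in zip(*g)) if g else c
                   for g, c in zip(groups, centers)]  # recompute centroids
    labels = [min((0, 1), key=lambda k: sum((a - b) ** 2
                                            for a, b in zip(p, centers[k])))
              for p in points]
    return labels, centers
```

In SKI's setting, one of the two clusters would correspond to foreground (ridge) blocks and the other to background, with morphological postprocessing cleaning up the block map.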

  20. Fuzzy-C-Means Clustering Based Segmentation and CNN-Classification for Accurate Segmentation of Lung Nodules

    PubMed

    K, Jalal Deen; R, Ganesan; A, Merline

    2017-07-27

    Objective: Accurate segmentation of abnormal and healthy lungs is crucial for steadfast computer-aided disease diagnostics. Methods: For this purpose, a stack of chest CT scans is processed, and novel methods are proposed for segmentation of the multimodal grayscale lung CT scans. In the conventional method, the required regions of interest (ROI) are identified using a Markov–Gibbs Random Field (MGRF) model. In the proposed method, to obtain an exact boundary of the regions, every empirical dispersion of the image is computed by Fuzzy C-Means clustering segmentation, and a classification process based on a Convolutional Neural Network (CNN) classifier is used to distinguish normal tissue from abnormal tissue. Results: The results of the proposed FCM- and CNN-based process are compared with those obtained from the conventional method using the MGRF model, in an experimental evaluation on the Interstitial Lung Disease (ILD) database. Conclusion: The proposed method can segment various kinds of complex multimodal medical images precisely.

  2. Unsupervised sputum color image segmentation for lung cancer diagnosis based on a Hopfield neural network

    NASA Astrophysics Data System (ADS)

    Sammouda, Rachid; Niki, Noboru; Nishitani, Hiroshi; Nakamura, S.; Mori, Shinichiro

    1997-04-01

    The paper presents a method for automatic segmentation of sputum cells in color images, developing an efficient algorithm for lung cancer diagnosis based on a Hopfield neural network. We formulate the segmentation problem as the minimization of an energy function constructed with two terms: a cost term, defined as a sum of squared errors, and a second term, a temporary noise added to the network as an excitation to escape certain local minima and end up closer to the global minimum. To increase the accuracy in segmenting the regions of interest, a preclassification technique is used to extract the sputum cell regions within the color image and remove those of the debris cells. The result is then given, together with the raw image, to the input of the Hopfield neural network to make a crisp segmentation by assigning each pixel a label such as background, cytoplasm, or nucleus. The proposed technique has yielded correct segmentation of complex scenes of sputum prepared by an ordinary manual staining method in most of the tested images, selected from our database containing thousands of sputum color images.

  3. Segmentation of suspicious objects in an x-ray image using automated region filling approach

    NASA Astrophysics Data System (ADS)

    Fu, Kenneth; Guest, Clark; Das, Pankaj

    2009-08-01

    To accommodate the flow of commerce, cargo inspection systems require a high probability of detection and a low false alarm rate while still maintaining a minimum scan speed. Since objects of interest (high atomic-number metals) will often be heavily shielded to avoid detection, any detection algorithm must be able to identify such objects despite the shielding. Since pixels of a shielded object have a greater opacity than the shielding, we use a clustering method to classify objects in the image by pixel intensity level. We then look within each intensity-level region for sub-clusters of pixels with greater opacity than the surrounding region. A region containing an object has an enclosed-contour region (a hole) inside of it. We apply a region filling technique to fill in the hole, which represents a shielded object of potential interest. One method for region filling is seed growing, which puts a "seed" starting point in the hole area and uses a selected structural element to fill out that region. However, automatic seed point selection is a hard problem; it requires additional information to decide whether a pixel is within an enclosed region. Here, we propose a simple, robust method for region filling that avoids the problem of seed point selection. In our approach, we calculate the gradients Gx and Gy at each pixel of a binary image, fill in 1s along each row between every pair of positions x1 and x2 where Gx(x1, y) = -1 and Gx(x2, y) = 1, and do the same in the y-direction. The intersection of the two results is the filled region. We give a detailed discussion of our algorithm, discuss the strengths this method has over other methods, and show results of using our method.
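
A minimal sketch of the proposed gradient-pair filling on a binary 0/1 image: an enclosed hole is filled because it is bounded in both directions, while an open concavity survives only one of the two passes and is discarded by the intersection:

```python
# Fill 1s between each -1/+1 pair of the horizontal gradient along rows,
# repeat along columns (via transpose), and intersect the two results.

def fill_rows(img):
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        start = None
        for c in range(1, cols):
            g = img[r][c] - img[r][c - 1]          # horizontal gradient Gx
            if g == -1:                            # 1 -> 0: entering a hole
                start = c
            elif g == 1 and start is not None:     # 0 -> 1: the hole closes
                for k in range(start, c):
                    out[r][k] = 1
                start = None
    return out

def transpose(img):
    return [list(col) for col in zip(*img)]

def fill_holes(img):
    horiz = fill_rows(img)
    vert = transpose(fill_rows(transpose(img)))    # same scan in y-direction
    return [[h & v for h, v in zip(hr, vr)] for hr, vr in zip(horiz, vert)]
```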

  4. Can We Rely on Pharmacy Claims Databases to Ascertain Maternal Use of Medications during Pregnancy?

    PubMed

    Zhao, Jin-Ping; Sheehy, Odile; Gorgui, Jessica; Bérard, Anick

    2017-04-03

    Administrative databases are increasingly used to measure drug exposure in perinatal pharmacoepidemiology. We aimed to estimate the concordance between records of prescriptions filled in pharmacies and self-reported drug use during pregnancy. Data on self-reported medication use were collected at each trimester of pregnancy among a sub-sample from the Organization of Teratology Information Specialists Antidepressants in Pregnancy Cohort. Women were eligible if they were Quebec residents and provided their pharmacist's contact information. Maternal self-reports were compared with prescriptions filled in pharmacies, which are transferred to the pharmaceutical services files of the Quebec provincial health plan database (Régie de l'assurance maladie du Québec). Positive and negative predictive values (PPV and NPV) for medications taken chronically (antidepressants, thyroid hormones), acutely (antibiotics), and as needed (antiemetics, asthma medications) were calculated. Among the 93 participants (mean age = 30.2 ± 3.8 years), 41.9% (n = 39) took at least one antidepressant during pregnancy according to self-reports, and 39.8% (n = 37) according to pharmacy records. Other commonly used drugs were antiemetics (self-reported 22.6%, pharmacy record 24.7%), antibiotics (20.4%, 16.1%), asthma medications (15.1%, 15.1%), and thyroid hormones (10.8%, 8.6%). PPVs and NPVs were: (1) chronic medications: antidepressants PPV = 100% (95% confidence interval [CI], 100-100%), NPV = 96% (95% CI, 92-100%); thyroid hormones PPV = 100% (95% CI, 100-100%), NPV = 98% (95% CI, 95-100%); (2) acute medications: antibiotics PPV = 87% (95% CI, 70-100%), NPV = 92% (95% CI, 86-98%); (3) as-needed medications: antiemetics PPV = 78% (95% CI, 62-95%), NPV = 96% (95% CI, 91-100%); asthma PPV = 33% (95% CI, 3-64%), NPV = 99% (95% CI, 97-100%). The high PPVs and NPVs validate the use of filled prescription data in large databases as a measure of medication exposure. Birth Defects Research 109:423-431, 2017.
© 2017 Wiley Periodicals, Inc.
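
    The predictive values reported above follow directly from a 2x2 concordance table of pharmacy records against self-reports. A minimal sketch; the counts below are hypothetical, chosen only to be consistent with the reported antidepressant figures (37 filled prescriptions, all self-reported; 2 self-reports without a fill record among the 93 participants):

```python
def predictive_values(tp, fp, tn, fn):
    """PPV and NPV from a 2x2 concordance table: pharmacy record (test)
    vs. maternal self-report (reference). Illustrative helper, not the
    paper's analysis code."""
    ppv = tp / (tp + fp)   # P(self-reported use | prescription filled)
    npv = tn / (tn + fn)   # P(no self-reported use | no fill record)
    return ppv, npv

# Hypothetical antidepressant counts consistent with the abstract:
ppv, npv = predictive_values(tp=37, fp=0, tn=54, fn=2)
```

    With these counts PPV is 100% and NPV is 54/56 ≈ 96%, matching the reported values.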

  5. An Approach to a Comprehensive Test Framework for Analysis and Evaluation of Text Line Segmentation Algorithms

    PubMed Central

    Brodic, Darko; Milivojevic, Dragan R.; Milivojevic, Zoran N.

    2011-01-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation is a key step for correct optical character recognition. Many tests for the evaluation of text line segmentation algorithms rely on text databases as reference templates; because such databases may not match the target application, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multi-line text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency, based on the obtained error type classification, are proposed. The first is based on the segmentation line error description, while the second incorporates well-known signal detection theory. Each has different capabilities and conveniences, but they can be used as complements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation line error description has some advantages, being characterized by five measures that describe the measurement procedure. PMID:22164106

  6. Shape based segmentation of MRIs of the bones in the knee using phase and intensity information

    NASA Astrophysics Data System (ADS)

    Fripp, Jurgen; Bourgeat, Pierrick; Crozier, Stuart; Ourselin, Sébastien

    2007-03-01

    The segmentation of the bones from MR images is useful for performing subsequent segmentation and quantitative measurements of cartilage tissue. In this paper, we present a shape based segmentation scheme for the bones that uses texture features derived from the phase and intensity information in the complex MR image. The phase can provide additional information about the tissue interfaces, but due to the phase unwrapping problem, this information is usually discarded. By using a Gabor filter bank on the complex MR image, texture features (including phase) can be extracted without requiring phase unwrapping. These texture features are then analyzed using a support vector machine classifier to obtain probability tissue matches. The segmentation of the bone is fully automatic and performed using a 3D active shape model based approach driven using gradient and texture information. The 3D active shape model is automatically initialized using a robust affine registration. The approach is validated using a database of 18 FLASH MR images that are manually segmented, with an average segmentation overlap (Dice similarity coefficient) of 0.92 compared to 0.9 obtained using the classifier only.
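
    The segmentation overlap reported here (and in several of the records below) is the Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|) for binary masks A and B. A generic implementation, not tied to this particular paper:

```python
import numpy as np

def dice(seg, ref):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    seg = np.asarray(seg, bool)
    ref = np.asarray(ref, bool)
    return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())
```

    Identical masks give 1.0; disjoint masks give 0.0, so the 0.92 reported above indicates a very close match to the manual segmentation.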

  7. 3D Multi-segment foot kinematics in children: A developmental study in typically developing boys.

    PubMed

    Deschamps, Kevin; Staes, Filip; Peerlinck, Kathelijne; Van Geet, Christel; Hermans, Cedric; Matricali, Giovanni Arnoldo; Lobet, Sebastien

    2017-02-01

    The relationship between age and 3D rotations objectivized with multisegment foot models has not been quantified until now. The purpose of this study was therefore to investigate the relationship between age and multi-segment foot kinematics in a cross-sectional database. Barefoot multi-segment foot kinematics of thirty two typically developing boys, aged 6-20 years, were captured with the Rizzoli Multi-segment Foot Model. One-dimensional statistical parametric mapping linear regression was used to examine the relationship between age and 3D inter-segment rotations of the dominant leg during the full gait cycle. Age was significantly correlated with sagittal plane kinematics of the midfoot and the calcaneus-metatarsus inter-segment angle (p<0.0125). Age was also correlated with the transverse plane kinematics of the calcaneus-metatarsus angle (p<0.0001). Gait labs should consider age related differences and variability if optimal decision making is pursued. It remains unclear if this is of interest for all foot models, however, the current study highlights that this is of particular relevance for foot models which incorporate a separate midfoot segment. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Retinal Nerve Fiber Layer Segmentation on FD-OCT Scans of Normal Subjects and Glaucoma Patients.

    PubMed

    Mayer, Markus A; Hornegger, Joachim; Mardin, Christian Y; Tornow, Ralf P

    2010-11-08

    Automated measurements of the retinal nerve fiber layer thickness on circular OCT B-Scans provide physicians additional parameters for glaucoma diagnosis. We propose a novel retinal nerve fiber layer segmentation algorithm for frequency domain data that can be applied on scans from both normal healthy subjects, as well as glaucoma patients, using the same set of parameters. In addition, the algorithm remains almost unaffected by image quality. The main part of the segmentation process is based on the minimization of an energy function consisting of gradient and local smoothing terms. A quantitative evaluation comparing the automated segmentation results to manually corrected segmentations from three reviewers is performed. A total of 72 scans from glaucoma patients and 132 scans from normal subjects, all from different persons, composed the database for the evaluation of the segmentation algorithm. A mean absolute error per A-Scan of 2.9 µm was achieved on glaucomatous eyes, and 3.6 µm on healthy eyes. The mean absolute segmentation error over all A-Scans lies below 10 µm on 95.1% of the images. Thus our approach provides a reliable tool for extracting diagnostic relevant parameters from OCT B-Scans for glaucoma diagnosis.

  9. Retinal Nerve Fiber Layer Segmentation on FD-OCT Scans of Normal Subjects and Glaucoma Patients

    PubMed Central

    Mayer, Markus A.; Hornegger, Joachim; Mardin, Christian Y.; Tornow, Ralf P.

    2010-01-01

    Automated measurements of the retinal nerve fiber layer thickness on circular OCT B-Scans provide physicians additional parameters for glaucoma diagnosis. We propose a novel retinal nerve fiber layer segmentation algorithm for frequency domain data that can be applied on scans from both normal healthy subjects, as well as glaucoma patients, using the same set of parameters. In addition, the algorithm remains almost unaffected by image quality. The main part of the segmentation process is based on the minimization of an energy function consisting of gradient and local smoothing terms. A quantitative evaluation comparing the automated segmentation results to manually corrected segmentations from three reviewers is performed. A total of 72 scans from glaucoma patients and 132 scans from normal subjects, all from different persons, composed the database for the evaluation of the segmentation algorithm. A mean absolute error per A-Scan of 2.9 µm was achieved on glaucomatous eyes, and 3.6 µm on healthy eyes. The mean absolute segmentation error over all A-Scans lies below 10 µm on 95.1% of the images. Thus our approach provides a reliable tool for extracting diagnostic relevant parameters from OCT B-Scans for glaucoma diagnosis. PMID:21258556

  10. An approach to a comprehensive test framework for analysis and evaluation of text line segmentation algorithms.

    PubMed

    Brodic, Darko; Milivojevic, Dragan R; Milivojevic, Zoran N

    2011-01-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation is a key step for correct optical character recognition. Many tests for the evaluation of text line segmentation algorithms rely on text databases as reference templates; because such databases may not match the target application, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multi-line text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency, based on the obtained error type classification, are proposed. The first is based on the segmentation line error description, while the second incorporates well-known signal detection theory. Each has different capabilities and conveniences, but they can be used as complements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation line error description has some advantages, being characterized by five measures that describe the measurement procedure.

  11. Hidden Markov random field model and Broyden-Fletcher-Goldfarb-Shanno algorithm for brain image segmentation

    NASA Astrophysics Data System (ADS)

    Guerrout, EL-Hachemi; Ait-Aoudia, Samy; Michelucci, Dominique; Mahiou, Ramdane

    2018-05-01

    Many routine medical examinations produce images of patients suffering from various pathologies. With the huge number of medical images, manual analysis and interpretation have become a tedious task, so automatic image segmentation has become essential for diagnosis assistance. Segmentation consists in dividing the image into homogeneous and significant regions. We focus on hidden Markov random fields, referred to as HMRF, to model the segmentation problem. This modelling leads to a classical function minimisation problem. The Broyden-Fletcher-Goldfarb-Shanno algorithm, referred to as BFGS, is one of the most powerful methods for solving unconstrained optimisation problems. In this paper, we investigate the combination of HMRF and the BFGS algorithm to perform the segmentation operation. The proposed method shows very good segmentation results compared with well-known approaches. The tests are conducted on brain magnetic resonance image databases (BrainWeb and IBSR) that are widely used for objective comparison of results. The well-known Dice coefficient (DC) was used as the similarity metric. The experimental results show that, in many cases, our proposed method approaches perfect segmentation with a Dice coefficient above 0.9. Moreover, it generally outperforms the other methods in the tests conducted.

  12. An improved pulse coupled neural network with spectral residual for infrared pedestrian segmentation

    NASA Astrophysics Data System (ADS)

    He, Fuliang; Guo, Yongcai; Gao, Chao

    2017-12-01

    The pulse coupled neural network (PCNN) has become a significant tool for infrared pedestrian segmentation, and a variety of related methods have been developed. However, existing models commonly suffer from poor adaptability to infrared noise, inaccurate segmentation results, and fairly complex parameter determination. This paper presents an improved PCNN model that integrates a simplified framework and spectral residual saliency to alleviate these problems. In this model, firstly, the weight matrix of the feeding input field is designed with anisotropic Gaussian kernels (ANGKs) to suppress infrared noise effectively. Secondly, the normalized spectral residual saliency is introduced as the linking coefficient to markedly enhance the edges and structural characteristics of segmented pedestrians. Finally, an improved dynamic threshold based on the average gray values of the iterative segmentation is employed to simplify the original PCNN model. Experiments on the IEEE OTCBVS benchmark and an infrared pedestrian image database built by our laboratory demonstrate the superiority of our model in both subjective visual effects and objective quantitative evaluations of information differences and segmentation errors, compared with other classic segmentation methods.
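
    The spectral residual saliency used as the linking coefficient is a standard construction: the log-amplitude spectrum minus its local average, recombined with the original phase and transformed back. A generic numpy/scipy sketch of that construction, not the authors' implementation; the use of log1p, the 3x3 averaging window, and the smoothing sigma are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(img):
    """Spectral residual saliency map: subtract the locally averaged
    log-amplitude spectrum from itself, keep the phase, invert, smooth,
    and normalize to [0, 1] for use as a linking coefficient."""
    f = np.fft.fft2(img.astype(float))
    log_amp = np.log1p(np.abs(f))          # log1p avoids log(0) (assumption)
    phase = np.angle(f)
    # Spectral residual: log amplitude minus its local average.
    residual = log_amp - uniform_filter(log_amp, size=3)
    sal = np.abs(np.fft.ifft2(np.exp(residual) * np.exp(1j * phase))) ** 2
    sal = gaussian_filter(sal, sigma=2.5)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
```

    The residual suppresses the statistically redundant part of the spectrum, so the recovered map is large where the image deviates from its smooth background, such as at pedestrian edges.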

  13. A fast and efficient segmentation scheme for cell microscopic image.

    PubMed

    Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H

    2007-04-27

    Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast enough to process huge quantities of images. Recent studies have focused on improving segmentation quality. Several segmentation schemes have good quality, but their processing time is too expensive to deal with a great number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial since it accounts for most of the processing time necessary to segment an image. The main contribution of this work is focused on how to reduce the complexity of decision functions produced by support vector machines (SVM) while preserving the recognition rate. Vector quantization is used to reduce the inherent redundancy present in huge pixel databases (i.e. images with expert pixel segmentation). Hybrid color space design is also used to improve both the data set size reduction rate and the recognition rate. A new decision function quality criterion is defined to select a good trade-off between the recognition rate and the processing time of the pixel decision function. The first results of this study show that fast and efficient pixel classification with SVM is possible. Moreover, posterior class pixel probability estimation is easy to compute with Platt's method. A new segmentation scheme using probabilistic pixel classification has then been developed. This scheme has several free parameters whose automatic selection must be dealt with, but existing criteria for evaluating segmentation quality are not well adapted for cell segmentation, especially when comparison with expert pixel segmentation must be achieved. Another important contribution of this paper is therefore the definition of a new quality criterion for the evaluation of cell segmentation. The results presented here show that selecting the free parameters of the segmentation scheme by optimisation of the new cell segmentation quality criterion produces efficient cell segmentation.
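
    The training-set reduction described above can be sketched as k-means vector quantization per class followed by a Platt-calibrated SVM (in scikit-learn, probability=True fits Platt scaling, matching the posterior probability estimation mentioned). A sketch under those assumptions; the per-class codebook size is an illustrative parameter, not the paper's:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def train_reduced_svm(pixels, labels, n_codewords=10):
    """Replace each class's raw pixel samples by k-means codewords, then
    train an SVM with Platt-calibrated probabilities on the codebook.
    The SVM sees n_classes * n_codewords samples instead of the full set."""
    X, y = [], []
    for cls in np.unique(labels):
        km = KMeans(n_clusters=n_codewords, n_init=10, random_state=0)
        km.fit(pixels[labels == cls])
        X.append(km.cluster_centers_)        # codewords replace raw pixels
        y.extend([cls] * n_codewords)
    return SVC(probability=True).fit(np.vstack(X), np.array(y))
```

    Fewer training samples typically means fewer support vectors, which is exactly the decision-function complexity the paper aims to reduce.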

  14. Semantic Segmentation of Building Elements Using Point Cloud Hashing

    NASA Astrophysics Data System (ADS)

    Chizhova, M.; Gurianov, A.; Hess, M.; Luhmann, T.; Brunn, A.; Stilla, U.

    2018-05-01

    For the interpretation of point clouds, the semantic definition of extracted segments from point clouds or images is a common problem. Usually, the semantics of geometrically pre-segmented point cloud elements are determined using probabilistic networks and scene databases. The proposed semantic segmentation method is based on the psychological human interpretation of geometric objects, especially on fundamental rules of primary comprehension. Starting from these rules, buildings can be quite well and simply classified by a human operator (e.g. an architect) into different building types and structural elements (dome, nave, transept etc.), including particular building parts which are visually detected. The key part of the procedure is a novel method based on hashing, where point cloud projections are transformed into binary pixel representations. The segmentation approach, demonstrated on the example of classical Orthodox churches, is also suitable for other buildings and objects characterized by a particular typology in their construction (e.g. industrial objects in standardized environments with strict component design allowing clear semantic modelling).

  15. Combining deep learning with anatomical analysis for segmentation of the portal vein for liver SBRT planning

    NASA Astrophysics Data System (ADS)

    Ibragimov, Bulat; Toesca, Diego; Chang, Daniel; Koong, Albert; Xing, Lei

    2017-12-01

    Automated segmentation of the portal vein (PV) for liver radiotherapy planning is a challenging task due to potentially low vasculature contrast, complex PV anatomy and image artifacts originating from fiducial markers and vasculature stents. In this paper, we propose a novel framework for automated segmentation of the PV from computed tomography (CT) images. We apply convolutional neural networks (CNNs) to learn the consistent appearance patterns of the PV using a training set of CT images with reference annotations and then enhance the PV in previously unseen CT images. Markov random fields (MRFs) were further used to smooth the results of the CNN enhancement and remove isolated mis-segmented regions. Finally, the CNN-MRF-based enhancement was augmented with PV centerline detection that relied on PV anatomical properties such as tubularity and branch composition. The framework was validated on a clinical database with 72 CT images of patients scheduled for liver stereotactic body radiation therapy. The obtained segmentation accuracy was DSC = 0.83.

  16. Automated construction of arterial and venous trees in retinal images.

    PubMed

    Hu, Qiao; Abràmoff, Michael D; Garvin, Mona K

    2015-10-01

    While many approaches exist to segment retinal vessels in fundus photographs, only a limited number focus on the construction and disambiguation of arterial and venous trees. Previous approaches are local and/or greedy in nature, making them susceptible to errors or limiting their applicability to large vessels. We propose a more global framework to generate arteriovenous trees in retinal images, given a vessel segmentation. In particular, our approach consists of three stages. The first stage is to generate an overconnected vessel network, named the vessel potential connectivity map (VPCM), consisting of vessel segments and the potential connectivity between them. The second stage is to disambiguate the VPCM into multiple anatomical trees, using a graph-based metaheuristic algorithm. The third stage is to classify these trees into arterial or venous (A/V) trees. We evaluated our approach with a ground truth built based on a public database, showing a pixel-wise classification accuracy of 88.15% using a manual vessel segmentation as input, and 86.11% using an automatic vessel segmentation as input.

  17. Boundary-to-Marker Evidence-Controlled Segmentation and MDL-Based Contour Inference for Overlapping Nuclei.

    PubMed

    Song, Jie; Xiao, Liang; Lian, Zhichao

    2017-03-01

    This paper presents a novel method for automated morphology delineation and analysis of cell nuclei in histopathology images. Combining initial segmentation information with a concavity measurement, the proposed method first segments clusters of nuclei into individual pieces, avoiding the segmentation errors introduced by scale-constrained Laplacian-of-Gaussian filtering. A boundary-to-marker evidence computation is then introduced to delineate individual objects after the refined segmentation process. The obtained evidence set is modeled by periodic B-splines under the minimum description length principle, which achieves a practical compromise between the complexity of the nuclear structure and its coverage of the fluorescence signal, avoiding both underfitting and overfitting. The algorithm is computationally efficient and has been tested on a synthetic database as well as 45 real histopathology images. Comparing the proposed method with several state-of-the-art methods, experimental results show the superior recognition performance of our method and indicate its potential for analyzing the intrinsic features of nuclei morphology.

  18. An automatic multi-atlas prostate segmentation in MRI using a multiscale representation and a label fusion strategy

    NASA Astrophysics Data System (ADS)

    Álvarez, Charlens; Martínez, Fabio; Romero, Eduardo

    2015-01-01

    Pelvic magnetic resonance (MR) images are used in prostate cancer radiotherapy (RT) as part of radiation planning. Modern protocols require manual delineation, a tedious and variable activity that may take about 20 minutes per patient, even for trained experts. That considerable time is an important workflow burden in most radiological services. Automatic or semi-automatic methods might improve efficiency by decreasing measurement times while conserving the required accuracy. This work presents a fully automatic atlas-based segmentation strategy that selects the most similar templates for a new MRI using a robust multi-scale SURF analysis. A new segmentation is then achieved by a linear combination of the selected templates, which are first non-rigidly registered towards the new image. The proposed method shows reliable segmentations, obtaining an average Dice coefficient of 79% compared with expert manual segmentation, under a leave-one-out scheme with the training database.
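
    The fusion step, a linear combination of the selected and registered templates, can be sketched as a weighted average of binary atlas masks followed by thresholding (a majority vote when the weights are uniform). Registration and SURF-based atlas selection are assumed to have been done upstream; this is an illustration of the fusion idea, not the authors' code:

```python
import numpy as np

def fuse_labels(atlas_masks, weights=None):
    """Label fusion by linear combination of registered atlas masks.
    With uniform weights and a 0.5 threshold this is a majority vote."""
    masks = np.asarray(atlas_masks, float)         # (n_atlases, H, W)
    if weights is None:
        weights = np.full(len(masks), 1.0 / len(masks))
    fused = np.tensordot(weights, masks, axes=1)   # weighted average per pixel
    return fused >= 0.5
```

    Non-uniform weights (e.g. derived from the SURF similarity scores) would let the most similar templates dominate the vote.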

  19. The Filled Arm Fizeau Telescope (FFT)

    NASA Technical Reports Server (NTRS)

    Synnott, S. P.

    1991-01-01

    Attention is given to the design of a Mills Cross imaging interferometer in which the arms are fully filled with mirror segments of a Ritchey-Chretien primary and which has sensitivity to 27th magnitude per pixel and resolution a factor of 10 greater than Hubble. The optical design, structural configuration, thermal disturbances, and vibration, material, control, and metrology issues, as well as scientific capabilities are discussed, and technology needs are identified. The technologies under consideration are similar to those required for the development of the other imaging interferometers that have been proposed over the past decade. A comparison of the imaging capabilities of a 30-m diameter FFT, an 8-m telescope with a collecting area equal to that of the FFT, and the HST is presented.

  20. [Human venous hemodynamics in microgravity and prediction of orthostatic tolerance in flight].

    PubMed

    Kotovskaya, A R; Fomina, G A

    2013-01-01

    The paper presents the results of investigating the lower-limb venous status of cosmonauts (n = 13) with the use of occlusion plethysmography during 6-month missions to the Russian segment of the International Space Station (ISS). An interrelation between shifts in venous capacitance, compliance and filling and orthostatic tolerance (OT) in the lower body negative pressure (LBNP) test was established. The predictability of OT from leg vein status in the course of space flight was demonstrated, and the objective changes of the veins predictive of OT reduction were identified. Three levels of changes in venous capacitance, compliance and filling prognosticate respective reductions in LBNP tolerance; these predictions were borne out in 91% of the in-flight LBNP tests.

  1. An Unsupervised Approach for Extraction of Blood Vessels from Fundus Images.

    PubMed

    Dash, Jyotiprava; Bhoi, Nilamani

    2018-04-26

    Pathological disorders may arise from small changes in retinal blood vessels which may later turn into blindness. Hence, the accurate segmentation of blood vessels is becoming a challenging task for pathological analysis. This paper offers an unsupervised recursive method for the extraction of blood vessels from ophthalmoscope images. First, a vessel-enhanced image is generated with the help of gamma correction and contrast-limited adaptive histogram equalization (CLAHE). Next, the vessels are extracted iteratively by applying an adaptive thresholding technique. At last, the final vessel-segmented image is produced by applying a morphological cleaning operation. Evaluations are carried out on the publicly available Digital Retinal Images for Vessel Extraction (DRIVE) and Child Heart And Health Study in England (CHASE_DB1) databases using nine different measurements. The proposed method achieves average accuracies of 0.957 and 0.952 on the DRIVE and CHASE_DB1 databases respectively.
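
    The pipeline above (enhancement, adaptive thresholding, morphological cleaning) can be sketched with numpy/scipy. For brevity this sketch replaces CLAHE with simple normalization and uses a single local-mean thresholding pass rather than the paper's iterative scheme; gamma, block, offset and min_size are illustrative parameters, not the paper's:

```python
import numpy as np
from scipy import ndimage as ndi

def extract_vessels(green, gamma=1.5, block=15, offset=0.02, min_size=30):
    """Vessel extraction sketch on the green channel: gamma correction,
    local-mean adaptive thresholding (vessels are darker than their
    surroundings), and size-based morphological cleaning."""
    img = green.astype(float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    img = img ** gamma                            # gamma correction
    # Adaptive threshold: keep pixels darker than the local mean.
    local_mean = ndi.uniform_filter(img, size=block)
    vessels = img < local_mean - offset
    # Cleaning: drop connected components smaller than min_size pixels.
    lbl, n = ndi.label(vessels)
    sizes = ndi.sum(vessels, lbl, index=np.arange(1, n + 1))
    return np.isin(lbl, np.arange(1, n + 1)[sizes >= min_size])
```

    On fundus images the green channel is used because it shows the highest vessel/background contrast; the cleaning step removes isolated dark speckle that survives thresholding.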

  2. A compositional segmentation of the human mitochondrial genome is related to heterogeneities in the guanine mutation rate

    PubMed Central

    Samuels, David C.; Boys, Richard J.; Henderson, Daniel A.; Chinnery, Patrick F.

    2003-01-01

    We applied a hidden Markov model segmentation method to the human mitochondrial genome to identify patterns in the sequence, to compare these patterns to the gene structure of mtDNA and to see whether these patterns reveal additional characteristics important for our understanding of genome evolution, structure and function. Our analysis identified three segmentation categories based upon the sequence transition probabilities. Category 2 segments corresponded to the tRNA and rRNA genes, with a greater strand-symmetry in these segments. Category 1 and 3 segments covered the protein-coding genes and almost all of the non-coding D-loop. Compared to category 1, the mtDNA segments assigned to category 3 had much lower guanine abundance. A comparison to two independent databases of mitochondrial mutations and polymorphisms showed that the high substitution rate of guanine in human mtDNA is largest in the category 3 segments. Analysis of synonymous mutations showed the same pattern. This suggests that this heterogeneity in the mutation rate is partly independent of respiratory chain function and is a direct property of the genome sequence itself. This has important implications for our understanding of mtDNA evolution and its use as a ‘molecular clock’ to determine the rate of population and species divergence. PMID:14530452
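
    The per-category base abundances underlying the guanine observation can be computed directly once each position carries a segmentation label, as an HMM segmentation would provide. An illustrative stdlib helper (not the paper's code), with toy data in the test rather than real mtDNA:

```python
from collections import Counter

def base_composition(seq, segments):
    """Per-category base composition for a segmented sequence.
    `segments` assigns a category label to each position of `seq`;
    returns {category: {base: fraction}}."""
    counts = {}
    for base, cat in zip(seq, segments):
        counts.setdefault(cat, Counter())[base] += 1
    return {cat: {b: n / sum(c.values()) for b, n in c.items()}
            for cat, c in counts.items()}
```

    Comparing the 'G' fractions across categories is exactly the comparison behind the category 1 vs. category 3 guanine-abundance result.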

  3. Automatic segmentation of the bone and extraction of the bone cartilage interface from magnetic resonance images of the knee

    NASA Astrophysics Data System (ADS)

    Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K.; Ourselin, Sébastien

    2007-03-01

    The accurate segmentation of the articular cartilages from magnetic resonance (MR) images of the knee is important for clinical studies and drug trials into conditions like osteoarthritis. Currently, segmentations are obtained using time-consuming manual or semi-automatic algorithms which have high inter- and intra-observer variabilities. This paper presents an important step towards obtaining automatic and accurate segmentations of the cartilages, namely an approach to automatically segment the bones and extract the bone-cartilage interfaces (BCI) in the knee. The segmentation is performed using three-dimensional active shape models, which are initialized using an affine registration to an atlas. The BCI are then extracted using image information and prior knowledge about the likelihood of each point belonging to the interface. The accuracy and robustness of the approach was experimentally validated using an MR database of fat suppressed spoiled gradient recall images. The (femur, tibia, patella) bone segmentation had a median Dice similarity coefficient of (0.96, 0.96, 0.89) and an average point-to-surface error of 0.16 mm on the BCI. The extracted BCI had a median surface overlap of 0.94 with the real interface, demonstrating its usefulness for subsequent cartilage segmentation or quantitative analysis.

  4. Mammogram segmentation using maximal cell strength updation in cellular automata.

    PubMed

    Anitha, J; Peter, J Dinesh

    2015-08-01

    Breast cancer is the most frequently diagnosed type of cancer among women. Mammogram is one of the most effective tools for early detection of the breast cancer. Various computer-aided systems have been introduced to detect the breast cancer from mammogram images. In a computer-aided diagnosis system, detection and segmentation of breast masses from the background tissues is an important issue. In this paper, an automatic segmentation method is proposed to identify and segment the suspicious mass regions of mammogram using a modified transition rule named maximal cell strength updation in cellular automata (CA). In coarse-level segmentation, the proposed method performs an adaptive global thresholding based on the histogram peak analysis to obtain the rough region of interest. An automatic seed point selection is proposed using gray-level co-occurrence matrix-based sum average feature in the coarse segmented image. Finally, the method utilizes CA with the identified initial seed point and the modified transition rule to segment the mass region. The proposed approach is evaluated over the dataset of 70 mammograms with mass from mini-MIAS database. Experimental results show that the proposed approach yields promising results to segment the mass region in the mammograms with the sensitivity of 92.25% and accuracy of 93.48%.
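
    The coarse-level thresholding described above can be sketched as: locate the dominant (background) peak of the intensity histogram and place a global threshold part-way between it and the maximum intensity. The placement rule below is an illustrative assumption; the abstract does not specify the exact peak-analysis heuristic:

```python
import numpy as np

def histogram_peak_threshold(img, bins=256, factor=0.5):
    """Global threshold from histogram peak analysis: take the dominant
    (background) peak and move a fraction `factor` of the way toward the
    maximum intensity. Returns the binary mask and the threshold."""
    hist, edges = np.histogram(img, bins=bins)
    peak = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])
    thresh = peak + factor * (img.max() - peak)
    return img > thresh, thresh
```

    This yields only a rough region of interest; in the paper's pipeline the seed selection and cellular automaton refine it into the final mass segmentation.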

  5. Firearm suppressor having enhanced thermal management for rapid heat dissipation

    DOEpatents

    Moss, William C.; Anderson, Andrew T.

    2014-08-19

    A suppressor is disclosed for use with a weapon having a barrel through which a bullet is fired. The suppressor has an inner portion having a bore extending coaxially therethrough. The inner portion is adapted to be secured to a distal end of the barrel. A plurality of axial flow segments project radially from the inner portion and form axial flow paths through which expanding propellant gasses discharged from the barrel flow through. The axial flow segments have radially extending wall portions that define sections which may be filled with thermally conductive material, which in one example is a thermally conductive foam. The conductive foam helps to dissipate heat deposited within the suppressor during firing of the weapon.

  6. The efficacy of the Self-Adjusting File versus WaveOne in removal of root filling residue that remains in oval canals after the use of ProTaper retreatment files: A cone-beam computed tomography study

    PubMed Central

    Pawar, Ajinkya M; Thakur, Bhagyashree; Metzger, Zvi; Kfir, Anda; Pawar, Mansing

    2016-01-01

    Aim: The current ex vivo study compared the efficacy of removing root fillings using ProTaper retreatment files followed by either a WaveOne reciprocating file or the Self-Adjusting File (SAF). Materials and Methods: Forty maxillary canines with a single oval root canal were selected and sectioned to obtain 18-mm root segments. The root canals were instrumented with WaveOne primary files, followed by obturation using warm lateral compaction, and the sealer was allowed to fully set. The teeth were then divided into two equal groups (N = 20). Initial removal of the bulk of root filling material was performed with ProTaper retreatment files, followed by either WaveOne files (Group 1) or the SAF (Group 2). Endosolv R was used as a gutta-percha softener. Preoperative and postoperative high-resolution cone-beam computed tomography (CBCT) was used to measure the volume of the root filling residue left after the procedure. Statistical analysis was performed using the t-test. Results: The mean volume of root filling residue in Group 1 was 9.4 (±0.5) mm3, whereas in Group 2 the residue volume was 2.6 (±0.4) mm3 (P < 0.001; t-test). Conclusions: When the SAF was used after ProTaper retreatment files, significantly less root filling residue was left in the canals than when WaveOne was used. PMID:26957798

  7. Brain tumor segmentation with Vander Lugt correlator based active contour.

    PubMed

    Essadike, Abdelaziz; Ouabida, Elhoussaine; Bouzid, Abdenbi

    2018-07-01

    The manual segmentation of brain tumors from medical images is an error-prone, sensitive, and time-consuming process. This paper presents an automatic and fast method of brain tumor segmentation. In the proposed method, a numerical simulation of the optical Vander Lugt correlator is used to automatically detect the abnormal tissue region. The tumor filter, used in the simulated optical correlation, is tailored to all brain tumor types and especially to Glioblastoma, which is considered the most aggressive cancer. The simulated optical correlation, computed between Magnetic Resonance Images (MRI) and this filter, automatically and precisely estimates an initial contour inside the tumorous tissue. In the segmentation part, the detected initial contour is then used to define an active contour model, posing segmentation as an energy-minimization problem. This initial contour assists the algorithm in evolving an active contour model toward the exact tumor boundaries. Equally important, for comparison purposes, we considered different active contour models and investigated their impact on the performance of the segmentation task. Several images from the BRATS database, with tumors of varying size, contrast, shape, and location, are used to test the proposed system. Furthermore, several performance metrics are computed to present an aggregate overview of the proposed method's advantages. The proposed method achieves high accuracy in detecting the tumorous tissue via a parameter returned by the simulated optical correlation. In addition, the proposed method yields better performance than the active contour based methods, with averages of Sensitivity = 0.9733, Dice coefficient = 0.9663, Hausdorff distance = 2.6540, and Specificity = 0.9994, and is faster, with an average computational time of 0.4119 s per image. Results reported on the BRATS database reveal that our proposed system improves over recently published state-of-the-art methods in brain tumor detection and segmentation. Copyright © 2018 Elsevier B.V. All rights reserved.
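    A Vander Lugt correlator acts as an optical matched filter: the correlation output peaks where the filter best matches the input scene, which is how the initial contour location is found. The 1-D digital sketch below, with an invented template and signal, only illustrates that matched-filter idea; the paper performs a simulated optical correlation over 2-D MRI slices with a tumor-tailored filter.

```python
def cross_correlate(signal, template):
    """Valid-mode cross-correlation: slide the template over the
    signal and record the inner product at each offset."""
    n, m = len(signal), len(template)
    return [sum(signal[i + j] * template[j] for j in range(m))
            for i in range(n - m + 1)]

def detect(signal, template):
    """Return the offset with the strongest correlation response,
    i.e. where the matched filter responds most strongly."""
    scores = cross_correlate(signal, template)
    return max(range(len(scores)), key=lambda i: scores[i])

# a 'tumor-like' bump embedded at offset 6 of a flat background
template = [1, 3, 1]
signal = [0, 0, 0, 0, 0, 0, 1, 3, 1, 0, 0, 0]
print(detect(signal, template))  # → 6
```

    In the paper's setting, the analogous 2-D correlation peak seeds the active contour inside the tumorous tissue.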

  8. Sodium Heat Pipe Module Processing For the SAFE-100 Reactor Concept

    NASA Technical Reports Server (NTRS)

    Martin, James; Salvail, Pat

    2003-01-01

    To support development and hardware-based testing of various space reactor concepts, the Early Flight Fission-Test Facility (EFF-TF) team established a specialized glove box unit with ancillary systems to handle and process alkali metals. Recently, these systems were commissioned with sodium, supporting the fill of stainless steel heat pipe modules for use with a 100 kW thermal heat pipe reactor design. As part of this effort, procedures were developed and refined to govern each segment of the process: fill, leak check, vacuum processing, weld closeout, and final "wet in". A series of 316 stainless steel modules, used as precursors to the actual 321 stainless steel modules, were filled with 35 +/- 1 grams of sodium using a known-volume canister to control the dispensed mass. Each module was leak checked to less than 10^-10 std cc/sec helium and vacuum conditioned at 250 C to assist in the removal of trapped gases. A welding procedure was developed to close out the fill stem, preventing external gases from entering the evacuated module. Finally, the completed modules were vacuum fired at 750 C, allowing the sodium to fully wet the internal surface and wick structure of the heat pipe module.

  9. Sodium Heat Pipe Module Processing For the SAFE-100 Reactor Concept

    NASA Astrophysics Data System (ADS)

    Martin, James; Salvail, Pat

    2004-02-01

    To support development and hardware-based testing of various space reactor concepts, the Early Flight Fission-Test Facility (EFF-TF) team established a specialized glove box unit with ancillary systems to handle and process alkali metals. Recently, these systems were commissioned with sodium, supporting the fill of stainless steel heat pipe modules for use with a 100 kW thermal heat pipe reactor design. As part of this effort, procedures were developed and refined to govern each segment of the process: fill, leak check, vacuum processing, weld closeout, and final "wet in". A series of 316 stainless steel modules, used as precursors to the actual 321 stainless steel modules, were filled with 35 +/- 1 grams of sodium using a known-volume canister to control the dispensed mass. Each module was leak checked to <10^-10 std cc/sec helium and vacuum conditioned at 250 °C to assist in the removal of trapped gases. A welding procedure was developed to close out the fill stem, preventing external gases from entering the evacuated module. Finally, the completed modules were vacuum fired at 750 °C, allowing the sodium to fully wet the internal surface and wick structure of the heat pipe module.

  10. Lessons Learned from Deploying an Analytical Task Management Database

    NASA Technical Reports Server (NTRS)

    O'Neil, Daniel A.; Welch, Clara; Arceneaux, Joshua; Bulgatz, Dennis; Hunt, Mitch; Young, Stephen

    2007-01-01

    Defining requirements, missions, technologies, and concepts for space exploration involves multiple levels of organizations, teams of people with complementary skills, and analytical models and simulations. Analytical activities range from filling a To-Be-Determined (TBD) in a requirement to creating animations and simulations of exploration missions. In a program as large as returning to the Moon, there are hundreds of simultaneous analysis activities. A way to manage and integrate efforts of this magnitude is to deploy a centralized database that provides the capability to define tasks, identify resources, describe products, schedule deliveries, and generate a variety of reports. This paper describes a web-accessible task management system and explains the lessons learned during the development and deployment of the database. Through the database, managers and team leaders can define tasks, establish review schedules, assign teams, link tasks to specific requirements, identify products, and link the task data records to external repositories that contain the products. Data filters and spreadsheet export utilities provide a powerful capability to create custom reports. Import utilities provide a means to populate the database from previously filled form files. Within a four-month period, a small team analyzed requirements, developed a prototype, conducted multiple system demonstrations, and deployed a working system supporting hundreds of users across the aerospace community. Open-source technologies and agile software development techniques, applied by a skilled team, enabled this impressive achievement. Topics in the paper cover the web application technologies, agile software development, an overview of the system's functions and features, dealing with increasing scope, and deploying new versions of the system.

  11. Watershed Data Management (WDM) database for Salt Creek streamflow simulation, DuPage County, Illinois, water years 2005-11

    USGS Publications Warehouse

    Bera, Maitreyee

    2014-01-01

    The U.S. Geological Survey (USGS), in cooperation with the DuPage County Stormwater Management Division, maintains a USGS database of hourly meteorologic and hydrologic data for use in a near real-time streamflow simulation system, which assists in the management and operation of reservoirs and other flood-control structures in the Salt Creek watershed in DuPage County, Illinois. Most of the precipitation data are collected from a tipping-bucket rain-gage network located in and near DuPage County. The other meteorologic data (wind speed, solar radiation, air temperature, and dewpoint temperature) are collected at Argonne National Laboratory in Argonne, Ill. Potential evapotranspiration is computed from the meteorologic data. The hydrologic data (discharge and stage) are collected at USGS streamflow-gaging stations in DuPage County. These data are stored in a Watershed Data Management (WDM) database. An earlier report describes in detail the development of the WDM database, including the processing of data from January 1, 1997, through September 30, 2004, into the SEP04.WDM database. SEP04.WDM was updated with data appended from October 1, 2004, through September 30, 2011 (water years 2005–11), and renamed SEP11.WDM. This report details the processing of meteorologic and hydrologic data in SEP11.WDM. It provides a record of snow-affected periods and the data used to fill missing-record periods for each precipitation site during water years 2005–11. The meteorologic data-filling methods are described in detail in Over and others (2010), and an update is provided in this report.

  12. Noise/spike detection in phonocardiogram signal as a cyclic random process with non-stationary period interval.

    PubMed

    Naseri, H; Homaeinezhad, M R; Pourkhajeh, H

    2013-09-01

    The major aim of this study is to describe a unified procedure for detecting noisy segments and spikes in transduced signals with a cyclic but non-stationary periodic nature. According to this procedure, the cycles of the signal (onset and offset locations) are detected. Then, the cycles are clustered into a finite number of groups based on appropriate geometrical- and frequency-based time series. Next, the median template of each time series of each cluster is calculated. Afterwards, a correlation-based technique is devised for comparing a test cycle's features against the associated time series of each cluster. Finally, by applying a suitably chosen threshold to the calculated correlation values, a segment is labeled either clean or noisy. As a key merit of this research, the procedure can support the decision between applying accurate orthogonal-expansion-based filtering and removing noisy segments outright. In this paper, the application of the proposed method is comprehensively described by applying it to phonocardiogram (PCG) signals to find noisy cycles. The database consists of 126 records from several patients of a domestic research station, acquired with a 3M Littmann® 3200 electronic stethoscope at a 4 kHz sampling frequency. By running the noisy-segment detection algorithm on this database, a sensitivity of Se = 91.41% and a positive predictive value of PPV = 92.86% were obtained based on physicians' assessments. Copyright © 2013 Elsevier Ltd. All rights reserved.
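    The median-template comparison can be sketched as follows. The five-sample cycle features, the cluster, and the 0.8 correlation cutoff are illustrative assumptions, not the study's actual feature series or tuned threshold.

```python
from statistics import median

def pearson(a, b):
    """Pearson correlation between two equal-length feature series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def median_template(cycles):
    """Point-wise median across a cluster of cycle feature series."""
    return [median(vals) for vals in zip(*cycles)]

def classify(cycle, template, threshold=0.8):
    """Label a test cycle clean if it correlates with the cluster's
    median template above the chosen threshold."""
    return "clean" if pearson(cycle, template) >= threshold else "noisy"

clean_cluster = [[0, 2, 5, 2, 0], [0, 2, 6, 2, 0], [0, 3, 5, 2, 0]]
tmpl = median_template(clean_cluster)      # [0, 2, 5, 2, 0]
print(classify([0, 2, 5, 3, 0], tmpl))     # similar shape → clean
print(classify([5, 0, 1, 4, 2], tmpl))     # erratic spike → noisy
```

    The same decision rule, applied per cluster and per feature series, drives the clean/noisy labeling described in the abstract.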

  13. An ex post facto evaluation framework for place-based police interventions.

    PubMed

    Braga, Anthony A; Hureau, David M; Papachristos, Andrew V

    2011-12-01

    A small but growing body of research evidence suggests that place-based police interventions generate significant crime control gains. While place-based policing strategies have been adopted by a majority of U.S. police departments, very few agencies make a priori commitments to rigorous evaluations. Recent methodological developments were applied to conduct a rigorous ex post facto evaluation of the Boston Police Department's Safe Street Team (SST) hot spots policing program. A nonrandomized quasi-experimental design was used to evaluate the violent crime control benefits of the SST program at treated street segments and intersections relative to untreated street segments and intersections. Propensity score matching techniques were used to identify comparison places in Boston. Growth curve regression models were used to analyze violent crime trends at treatment places relative to control places. Using computerized mapping and database software, a micro-level place database of violent index crimes at all street segments and intersections in Boston was created as the unit of analysis. Yearly counts of violent index crimes between 2000 and 2009 at the treatment and comparison street segments and intersections served as the key outcome measure. The SST program was associated with a statistically significant reduction in violent index crimes at the treatment places relative to the comparison places, without displacing crime into proximate areas. To overcome the challenges of evaluation in real-world settings, evaluators need to continuously develop innovative approaches that take advantage of new theoretical and methodological approaches.

  14. New algorithm for detecting smaller retinal blood vessels in fundus images

    NASA Astrophysics Data System (ADS)

    LeAnder, Robert; Bidari, Praveen I.; Mohammed, Tauseef A.; Das, Moumita; Umbaugh, Scott E.

    2010-03-01

    About 4.1 million Americans suffer from diabetic retinopathy. To help automatically diagnose various stages of the disease, a new blood-vessel-segmentation algorithm based on spatial high-pass filtering was developed to automatically segment blood vessels, including the smaller ones, with low noise. Methods: Image database: Forty 584 x 565-pixel images were collected from the DRIVE image database. Preprocessing: Green-band extraction was used to obtain better contrast, which facilitated better visualization of retinal blood vessels. A spatial high-pass filter of mask size 11 was applied. A histogram stretch was performed to enhance contrast. A median filter was applied to mitigate noise. At this point, the gray-scale image was converted to a binary image using a binary thresholding operation. Then, a NOT operation was performed by inverting gray-level values between 0 and 255. Postprocessing: The resulting image was AND-ed with its corresponding ring mask to remove the outer-ring (lens-edge) artifact. At this point, the above algorithm steps had extracted most of the major and minor vessels, with some intersections and bifurcations missing. Vessel segments were reintegrated using the Hough transform. Results: After applying the Hough transform, both the average peak SNR and the RMS error improved by 10%. Pratt's Figure of Merit (PFM) decreased by 6%. Those averages were better than those of [1] by 10-30%. Conclusions: The new algorithm successfully preserved the details of smaller blood vessels and should prove successful as a segmentation step for automatically identifying diseases that affect retinal blood vessels.
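    The high-pass / stretch / threshold / invert sequence can be sketched on a toy patch. A 3x3 local-mean high-pass stands in for the paper's mask-size-11 filter, and the 5x5 "fundus patch" is invented; the median filter, ring masking, and Hough reintegration steps are omitted for brevity.

```python
def highpass(img):
    """Spatial high-pass: subtract the 3x3 local mean from each pixel
    (a small stand-in for the paper's mask-size-11 filter)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            out[y][x] = img[y][x] - sum(vals) // len(vals)
    return out

def stretch(img, lo=0, hi=255):
    """Histogram stretch: map the observed min..max range onto lo..hi."""
    flat = [p for row in img for p in row]
    mn, mx = min(flat), max(flat)
    scale = (hi - lo) / (mx - mn) if mx > mn else 0
    return [[int(lo + (p - mn) * scale) for p in row] for row in img]

def binarize(img, t=128):
    """Threshold, then invert 0/255 so dark vessels end up bright."""
    return [[0 if p >= t else 255 for p in row] for row in img]

# toy fundus patch: a faint dark 'vessel' column in a bright field
patch = [[200, 200, 160, 200, 200] for _ in range(5)]
vessels = binarize(stretch(highpass(patch)))
print(vessels[2])   # the dark middle column is marked as vessel
```

    On real DRIVE images the same chain, with the larger mask and the omitted steps restored, produces the binary vessel map fed to the Hough transform.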

  15. Sequencing artifacts in the type A influenza databases and attempts to correct them.

    PubMed

    Suarez, David L; Chester, Nikki; Hatfield, Jason

    2014-07-01

    There are over 276 000 influenza gene sequences in public databases, with the quality of the sequences determined by the contributor. As part of a high school class project, influenza sequences with possible errors were identified in the public databases based on the size of the gene being longer than expected, with the hypothesis that these sequences would contain an error. Students contacted sequence submitters, alerting them to the possible sequence issue(s) and requesting that the suspect sequence(s) be corrected as appropriate. Type A influenza viruses were screened, and gene segments longer than the accepted size were identified for further analysis. Attention was placed on sequences with additional nucleotides upstream or downstream of the highly conserved non-coding ends of the viral segments. A total of 1081 sequences were identified that met this criterion. Three types of errors were commonly observed: non-influenza primer sequence was not removed from the sequence; the PCR product was cloned and plasmid sequence was included in the sequence; and Taq polymerase added an adenine at the end of the PCR product. Internal insertions of nucleotide sequence were also commonly observed, but in many cases it was unclear whether the sequence was correct or actually contained an error. A total of 215 sequences, or 22.8% of the suspect sequences, were corrected in the public databases in the first year of the student project. Unfortunately, 138 additional sequences with possible errors were added to the databases in the second year. Additional awareness of the need for data integrity of sequences submitted to public databases is needed to fully reap the benefits of these large data sets. © 2014 The Authors. Influenza and Other Respiratory Viruses Published by John Wiley & Sons Ltd.
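    A length-based screening pass of this kind is easy to sketch. The segment-size limits and the records below are hypothetical placeholders, not the accepted influenza A segment lengths, and the trailing-adenine check only illustrates how one of the three error types might be flagged.

```python
# Hypothetical expected maximum lengths for two influenza A segments;
# the real screen used the accepted size for each of the 8 segments.
EXPECTED_MAX = {"M": 1027, "NS": 890}

def flag_suspects(records):
    """Yield (name, reasons) for sequences longer than the accepted
    segment size. A trailing adenine is reported as a likely
    Taq-polymerase artifact."""
    for name, segment, seq in records:
        if len(seq) <= EXPECTED_MAX[segment]:
            continue
        reasons = ["longer than expected"]
        if seq.endswith("A"):
            reasons.append("possible Taq 3' adenine")
        yield name, reasons

records = [
    ("isolate1", "M", "G" * 1027),           # accepted size: passes
    ("isolate2", "M", "G" * 1030 + "A"),     # too long, ends in A
]
for name, reasons in flag_suspects(records):
    print(name, reasons)
```

    In the project itself, flagged records were then inspected around the conserved non-coding segment ends before submitters were contacted.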

  16. Active Segmentation.

    PubMed

    Mishra, Ajay; Aloimonos, Yiannis

    2009-01-01

    The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene, which can be either an object or just a part of one. We define as a basic segmentation problem the task of segmenting the region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour, a connected set of boundary edge fragments in the edge map of the scene, around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue-independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring visual processing to the next level. Our approach is different from current approaches: while existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach.
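    The core idea, segment exactly the region that encloses the fixation, can be illustrated with a plain flood fill that stops at edge pixels. This BFS sketch on an invented 5x5 edge map is a simplified stand-in for the paper's algorithm, which finds the optimal enclosing contour by combining monocular, stereo, and motion cues.

```python
from collections import deque

def segment_from_fixation(edge_map, fixation):
    """Flood-fill outward from the fixation point, stopping at edge
    pixels, so the returned region is bounded by the enclosing contour."""
    h, w = len(edge_map), len(edge_map[0])
    region, queue = set(), deque([fixation])
    while queue:
        y, x = queue.popleft()
        if (y, x) in region or not (0 <= y < h and 0 <= x < w):
            continue
        if edge_map[y][x]:          # hit the bounding contour
            continue
        region.add((y, x))
        queue.extend([(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)])
    return region

# 5x5 edge map: a closed square contour around the centre pixel
E = [[0, 0, 0, 0, 0],
     [0, 1, 1, 1, 0],
     [0, 1, 0, 1, 0],
     [0, 1, 1, 1, 0],
     [0, 0, 0, 0, 0]]
print(segment_from_fixation(E, (2, 2)))  # → {(2, 2)}
```

    Fixating inside the contour yields only the enclosed region; fixating outside it yields the surrounding area instead, mirroring the fixation-dependent behaviour the paper describes.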

  17. Automated 3D Ultrasound Image Segmentation to Aid Breast Cancer Image Interpretation

    PubMed Central

    Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A.; Yuan, Jie; Wang, Xueding; Carson, Paul L.

    2015-01-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer. PMID:26547117
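    The volume-comparison metrics in the abstract can be made concrete on voxel sets. The abstract does not specify which definition of "overlap ratio" was used, so the Jaccard form below is an assumption, with the Dice coefficient shown for comparison; the two small masks are invented.

```python
def overlap_ratio(a, b):
    """Jaccard-style overlap: |A ∩ B| / |A ∪ B| for two voxel sets."""
    return len(a & b) / len(a | b)

def dice(a, b):
    """Dice similarity coefficient for the same two voxel sets."""
    return 2 * len(a & b) / (len(a) + len(b))

auto   = {(0, 0), (0, 1), (1, 0), (1, 1)}   # automated mask voxels
manual = {(0, 1), (1, 0), (1, 1), (2, 1)}   # manual mask voxels
print(overlap_ratio(auto, manual))  # 3 shared of 5 total → 0.6
print(dice(auto, manual))           # → 0.75
```

    Computed per tissue class over whole 3D volumes, such set overlaps give the kind of similarity figures (e.g. 74.54%) reported above.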

  18. Automated 3D ultrasound image segmentation for assistant diagnosis of breast cancer

    NASA Astrophysics Data System (ADS)

    Wang, Yuxin; Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A.; Du, Sidan; Yuan, Jie; Wang, Xueding; Carson, Paul L.

    2016-04-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer.

  19. Hierarchical layered and semantic-based image segmentation using ergodicity map

    NASA Astrophysics Data System (ADS)

    Yadegar, Jacob; Liu, Xiaoqing

    2010-04-01

    Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made and progress achieved on automatic/semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging. Human vision is still far superior compared to computer vision, especially in interpreting semantic meanings/objects in images. We present a hierarchical/layered semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchical layered/multi-scaled semantic regions/objects with contextual topological relationships. The proposed algorithm bridges the gap between high-level semantics and low-level visual features/cues (such as color, intensity, edge, etc.) through utilizing a layered/hierarchical ergodicity map, where ergodicity is computed based on a space filling fractal concept and used as a region dissimilarity measurement. The algorithm applies a highly scalable, efficient, and adaptive Peano-Cesaro triangulation/tiling technique to decompose the given image into a set of similar/homogenous regions based on low-level visual cues in a top-down manner. The layered/hierarchical ergodicity map is built through a bottom-up region dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides efficient neighbor retrieval mechanisms for contextual topological object/region relationship generation. Experiments have been conducted within the maritime image environment where the segmented layered semantic objects include the basic level objects (i.e. sky/land/water) and deeper level objects in the sky/land/water surfaces.
Experimental results demonstrate the proposed algorithm has the capability to robustly and efficiently segment images into layered semantic objects/regions with contextual topological relationships.

  20. Localized Charges Control Exciton Energetics and Energy Dissipation in Doped Carbon Nanotubes.

    PubMed

    Eckstein, Klaus H; Hartleb, Holger; Achsnich, Melanie M; Schöppler, Friedrich; Hertel, Tobias

    2017-10-24

    Doping by chemical or physical means is key for the development of future semiconductor technologies. Ideally, charge carriers should be able to move freely in a homogeneous environment. Here, we report on evidence suggesting that excess carriers in electrochemically p-doped semiconducting single-wall carbon nanotubes (s-SWNTs) become localized, most likely due to poorly screened Coulomb interactions with counterions in the Helmholtz layer. A quantitative analysis of blue-shift, broadening, and asymmetry of the first exciton absorption band also reveals that doping leads to hard segmentation of s-SWNTs with intrinsic undoped segments being separated by randomly distributed charge puddles approximately 4 nm in width. Light absorption in these doped segments is associated with the formation of trions, spatially separated from neutral excitons. Acceleration of exciton decay in doped samples is governed by diffusive exciton transport to, and nonradiative decay at charge puddles within 3.2 ps in moderately doped s-SWNTs. The results suggest that conventional band-filling in s-SWNTs breaks down due to inhomogeneous electrochemical doping.

  1. 3-D Object Pose Determination Using Complex EGI

    DTIC Science & Technology

    1990-10-01

    the length of edges of the polyhedron from the EGI. Dane and Bajcsy [4] make use of the Gaussian Image to spatially segment a group of range points...involving real range data of two smooth objects were conducted. The two smooth objects are the torus and ellipsoid, whose databases have been created...in the simulations earlier. 5.0.1 Implementational Issues The torus and ellipsoid were crafted out of clay to resemble the models whose databases were

  2. Statewide crash analysis and forecasting.

    DOT National Transportation Integrated Search

    2008-11-20

    There is a need for the development of safety analysis tools to allow PennDOT to better assess the safety performance of road segments in the Commonwealth. The project utilized a safety management system database at PennDOT that integrates crash,...

  3. Brain Tumor Segmentation Using Deep Belief Networks and Pathological Knowledge.

    PubMed

    Zhan, Tianming; Chen, Yi; Hong, Xunning; Lu, Zhenyu; Chen, Yunjie

    2017-01-01

    In this paper, we propose an automatic brain tumor segmentation method based on Deep Belief Networks (DBNs) and pathological knowledge. The proposed method is targeted at gliomas (both low and high grade) in multi-sequence magnetic resonance images (MRIs). First, a novel deep architecture is proposed to combine multi-sequence intensity feature extraction with classification, yielding the classification probability of each voxel. Then, graph-cut-based optimization is executed on the classification probabilities to strengthen the spatial relationships of voxels. Finally, pathological knowledge of gliomas is applied to remove some false positives. Our method was validated on the Brain Tumor Segmentation Challenge 2012 and 2013 databases (BRATS 2012, 2013). The performance of the segmentation results demonstrates that our proposal provides a solution competitive with state-of-the-art methods. Copyright© Bentham Science Publishers.

  4. Thermogram breast cancer prediction approach based on Neutrosophic sets and fuzzy c-means algorithm.

    PubMed

    Gaber, Tarek; Ismail, Gehad; Anter, Ahmed; Soliman, Mona; Ali, Mona; Semary, Noura; Hassanien, Aboul Ella; Snasel, Vaclav

    2015-08-01

    Early detection of breast cancer enables many women to survive. In this paper, a CAD system classifying breast cancer thermograms as normal or abnormal is proposed. This approach consists of two main phases: automatic segmentation and classification. For the former phase, an improved segmentation approach based on both Neutrosophic sets (NS) and an optimized Fast Fuzzy c-means (F-FCM) algorithm was proposed. A post-segmentation process was also suggested to segment breast parenchyma (i.e. the ROI) from thermogram images. For the classification, different kernel functions of the Support Vector Machine (SVM) were used to classify breast parenchyma into normal or abnormal cases. Using a benchmark database, the proposed CAD system was evaluated based on precision, recall, and accuracy, as well as through a comparison with related work. The experimental results showed that our system would be a very promising step toward automatic diagnosis of breast cancer using thermograms, as the accuracy reached 100%.
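    The fuzzy c-means core of the segmentation phase can be sketched in one dimension. This is plain FCM, without the Neutrosophic-set preprocessing or the paper's "Fast" optimization, and the temperature-like data are invented.

```python
def fcm_1d(data, c=2, m=2.0, iters=50):
    """Plain 1-D fuzzy c-means: alternate membership and centre
    updates (fuzzifier m) until the centres settle."""
    centers = [min(data), max(data)]          # simple initialisation
    for _ in range(iters):
        # membership of point x in cluster j: standard FCM update
        u = []
        for x in data:
            row = []
            for cj in centers:
                dj = abs(x - cj) or 1e-12
                row.append(1.0 / sum((dj / (abs(x - ck) or 1e-12))
                                     ** (2 / (m - 1)) for ck in centers))
            u.append(row)
        # centre update: mean of the data weighted by u^m
        centers = [sum(u[i][j] ** m * data[i] for i in range(len(data)))
                   / sum(u[i][j] ** m for i in range(len(data)))
                   for j in range(c)]
    return centers

# two obvious intensity clusters (e.g. cool vs. warm regions)
data = [30.0, 31.0, 32.0, 40.0, 41.0, 42.0]
lo, hi = sorted(fcm_1d(data))
print(round(lo), round(hi))   # centres near 31 and 41
```

    In the full pipeline, an analogous clustering over thermogram pixel values (after NS preprocessing) separates the breast parenchyma from the background.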

  5. Sequence-based Network Completion Reveals the Integrality of Missing Reactions in Metabolic Networks.

    PubMed

    Krumholz, Elias W; Libourel, Igor G L

    2015-07-31

    Genome-scale metabolic models are central in connecting genotypes to metabolic phenotypes. However, even for well studied organisms, such as Escherichia coli, draft networks do not contain a complete biochemical network. Missing reactions are referred to as gaps. These gaps need to be filled to enable functional analysis, and gap-filling choices influence model predictions. To investigate whether functional networks existed where all gap-filling reactions were supported by sequence similarity to annotated enzymes, four draft networks were supplemented with all reactions from the Model SEED database for which minimal sequence similarity was found in their genomes. Quadratic programming revealed that the number of reactions that could partake in a gap-filling solution was vast: 3,270 in the case of E. coli, where 72% of the metabolites in the draft network could connect a gap-filling solution. Nonetheless, no network could be completed without the inclusion of orphaned enzymes, suggesting that parts of the biochemistry integral to biomass precursor formation are uncharacterized. However, many gap-filling reactions were well determined, and the resulting networks showed improved prediction of gene essentiality compared with networks generated through canonical gap filling. In addition, gene essentiality predictions that were sensitive to poorly determined gap-filling reactions were of poor quality, suggesting that damage to the network structure resulting from the inclusion of erroneous gap-filling reactions may be predictable. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.

  6. Users’ guide to the surgical literature: how to perform a high-quality literature search

    PubMed Central

    Waltho, Daniel; Kaur, Manraj Nirmal; Haynes, R. Brian; Farrokhyar, Forough; Thoma, Achilleas

    2015-01-01

    Summary The article “Users’ guide to the surgical literature: how to perform a literature search” was published in 2003, but the continuing technological developments in databases and search filters have rendered that guide out of date. The present guide fills an existing gap in this area; it provides the reader with strategies for developing a searchable clinical question, creating an efficient search strategy, accessing appropriate databases, and skillfully retrieving the best evidence to address the research question. PMID:26384150

  7. Automated identification and geometrical features extraction of individual trees from Mobile Laser Scanning data in Budapest

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Folly-Ritvay, Zoltán; Skobrák, Ferenc; Koenig, Kristina; Höfle, Bernhard

    2016-04-01

    Mobile Laser Scanning (MLS) is an evolving operational measurement technique for urban environments, providing large amounts of high-resolution information about trees, street features and pole-like objects on street sides or near motorways. In this study we investigate a robust segmentation method to extract individual trees automatically in order to build an object-based tree database system. We focused on the large urban parks in Budapest (Margitsziget and Városliget; KARESZ project), which contain a large diversity of tree species. The MLS data comprised high-density point clouds with 1-8 cm mean absolute accuracy at 80-100 m distance from the streets. The robust segmentation method consists of the following steps: first, the ground points are determined. Second, cylinders are fitted in a vertical slice 1-1.5 m above ground, which determines the potential location of each single tree trunk and cylinder-like object. Finally, residual values are calculated as the deviation of each point from a vertically expanded fitted cylinder; these residuals are used to separate cylinder-like objects from individual trees. After successful parameterization, the model parameters and the corresponding residual values of the fitted objects are extracted and imported into the tree database. Additionally, geometric features are calculated for each segmented tree, such as crown base, crown width, crown length, trunk diameter and tree volume. For incompletely scanned trees, the extraction of geometric features is based on fitted circles. The result of the study is a tree database containing detailed information about urban trees, which can be a valuable dataset for ecologists and city planners and for planting and mapping purposes. Furthermore, the established database will be the starting point for classifying the trees into individual species. The MLS data used in this project were measured in the framework of the KARESZ project for the whole of Budapest. BSz contributed as an Alexander von Humboldt Research Fellow.
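The trunk-detection step above fits cylinders in a thin horizontal slice; restricted to one slice, this reduces to fitting a circle to the slice points and scoring residuals. A minimal sketch, assuming an algebraic (Kåsa) least-squares circle fit rather than the authors' actual fitting routine:

```python
import math

def fit_circle(pts):
    """Algebraic (Kasa) least-squares circle fit: x^2 + y^2 = 2ax + 2by + c."""
    # Build the normal equations S u = t for u = (a, b, c).
    S = [[0.0] * 3 for _ in range(3)]
    t = [0.0] * 3
    for x, y in pts:
        row = (2 * x, 2 * y, 1.0)
        z = x * x + y * y
        for i in range(3):
            t[i] += row[i] * z
            for j in range(3):
                S[i][j] += row[i] * row[j]
    # Gauss-Jordan elimination on the 3x3 system.
    M = [S[i] + [t[i]] for i in range(3)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [M[r][k] - f * M[i][k] for k in range(4)]
    a, b, c = (M[i][3] / M[i][i] for i in range(3))
    return (a, b), math.sqrt(c + a * a + b * b)

def residuals(pts, center, r):
    """Absolute deviation of each point from the fitted circle."""
    return [abs(math.hypot(x - center[0], y - center[1]) - r) for x, y in pts]

# Points on a circle of radius 0.2 m around (1.0, 2.0) -- a clean "trunk" slice.
slice_pts = [(1.0 + 0.2 * math.cos(k), 2.0 + 0.2 * math.sin(k)) for k in range(12)]
center, r = fit_circle(slice_pts)
assert max(residuals(slice_pts, center, r)) < 1e-6
```

In the pipeline described above, points with large residuals against the expanded cylinder would be classified as tree crown rather than trunk or pole.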

  8. Sea-floor drainage features of Cascadia Basin and the adjacent continental slope, northeast Pacific Ocean

    USGS Publications Warehouse

    Hampton, M.A.; Karl, Herman A.; Kenyon, Neil H.

    1989-01-01

    Sea-floor drainage features of Cascadia Basin and the adjacent continental slope include canyons, primary fan valleys, deep-sea valleys, and remnant valley segments. Long-range sidescan sonographs and associated seismic-reflection profiles indicate that the canyons may originate along a mid-slope escarpment and grow upslope by mass wasting and downslope by valley erosion or aggradation. Most canyons are partly filled with sediment, and Quillayute Canyon is almost completely filled. Under normal growth conditions, the larger canyons connect with primary fan valleys or deep-sea valleys in Cascadia Basin, but development of accretionary ridges blocks or re-routes most canyons, forcing abandonment of the associated valleys in the basin. Astoria Fan has a primary fan valley that connects with Astoria Canyon at the fan apex. The fan valley is bordered by parallel levees on the upper fan but becomes obscure on the lower fan, where a few valley segments appear on the sonographs. Apparently, Nitinat Fan does not presently have a primary fan valley; none of the numerous valleys on the fan connect with a canyon. The Willapa-Cascadia-Vancouver-Juan de Fuca deep-sea valley system bypasses the submarine fans and ranges from deeply incised valleys to broad shallow swales, and includes within-valley terraces and hanging-valley confluences. © 1989.

  9. Development of a hybrid image processing algorithm for automatic evaluation of intramuscular fat content in beef M. longissimus dorsi.

    PubMed

    Du, Cheng-Jin; Sun, Da-Wen; Jackman, Patrick; Allen, Paul

    2008-12-01

    An automatic method for estimating the content of intramuscular fat (IMF) in beef M. longissimus dorsi (LD) was developed using a sequence of image processing algorithms. To extract IMF particles within the LD muscle from the structural features of intermuscular fat surrounding the muscle, a three-step image processing algorithm was developed, i.e. bilateral filtering for noise removal, kernel fuzzy c-means clustering (KFCM) for segmentation, and vector confidence connected and flood fill for IMF extraction. Bilateral filtering was first applied to reduce the noise and enhance the contrast of the beef image. KFCM was then used to segment the filtered beef image into lean, fat, and background. The IMF was finally extracted from the original beef image using the techniques of vector confidence connected and flood filling. The performance of the algorithm was verified by correlation analysis between the IMF characteristics and the percentage of chemically extractable IMF content (P<0.05). Five IMF features are very significantly correlated with the fat content (P<0.001): count densities of middle (CDMiddle) and large (CDLarge) fat particles, area densities of middle and large fat particles, and total fat area per unit LD area. The highest coefficient is 0.852 for CDLarge.
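The flood-fill stage of the extraction step can be sketched on a toy intensity grid: starting from a seed pixel, the connected region whose intensities stay close to the seed value is collected. This is a plain 4-connected flood fill, not the vector confidence connected filter used in the paper, and the image values are made up:

```python
from collections import deque

def flood_fill(img, seed, tol):
    """Collect the 4-connected pixel region around `seed` whose intensity
    stays within `tol` of the seed value."""
    h, w = len(img), len(img[0])
    base = img[seed[0]][seed[1]]
    region, q = {seed}, deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region \
               and abs(img[nr][nc] - base) <= tol:
                region.add((nr, nc))
                q.append((nr, nc))
    return region

# Tiny synthetic image: bright "fat" blob (200) on darker "lean" muscle (80).
img = [[80] * 6 for _ in range(6)]
for r, c in [(2, 2), (2, 3), (3, 2), (3, 3)]:
    img[r][c] = 200
blob = flood_fill(img, (2, 2), tol=30)
assert blob == {(2, 2), (2, 3), (3, 2), (3, 3)}
```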

  10. Automatic video segmentation and indexing

    NASA Astrophysics Data System (ADS)

    Chahir, Youssef; Chen, Liming

    1999-08-01

    Indexing is an important aspect of video database management. Video indexing involves the analysis of video sequences, which is a computationally intensive process. However, effective management of digital video requires robust indexing techniques. The main purpose of our proposed video segmentation is twofold. First, we develop an algorithm that identifies camera shot boundaries, based on a combination of color histograms and a block-based technique. Next, each temporal segment is represented by a color reference frame, which captures shot similarity and is used in the constitution of scenes. Experimental results using a variety of videos selected from the corpus of the French Audiovisual National Institute are presented to demonstrate the effectiveness of shot detection, content characterization of shots and scene constitution.
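The shot-boundary idea described above (comparing color histograms of consecutive frames) can be sketched as follows; the bin count and threshold are illustrative choices, not the paper's parameters:

```python
def histogram(frame, bins=4, maxval=256):
    """Coarse intensity histogram of a frame given as a flat list of pixel values."""
    h = [0] * bins
    for v in frame:
        h[v * bins // maxval] += 1
    return h

def shot_boundaries(frames, threshold):
    """Flag a cut wherever the L1 distance between consecutive frame
    histograms exceeds `threshold`."""
    cuts = []
    for i in range(1, len(frames)):
        d = sum(abs(a - b) for a, b in zip(histogram(frames[i - 1]),
                                           histogram(frames[i])))
        if d > threshold:
            cuts.append(i)
    return cuts

# Two synthetic "shots": dark frames, then bright frames; the cut is at index 3.
frames = [[10] * 16] * 3 + [[240] * 16] * 3
assert shot_boundaries(frames, threshold=16) == [3]
```

A block-based variant, as in the record, would compute such distances per image block and vote, which makes the detector less sensitive to localized motion.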

  11. 34 CFR 682.210 - Deferment.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... elementary or secondary school teachers; or (B) A specific grade level or academic, instructional, subject... are filled by teachers who are certified, but who are teaching in academic subject areas other than... Secretary or from an authoritative electronic database maintained or authorized by the Secretary that...

  12. 34 CFR 682.210 - Deferment.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... elementary or secondary school teachers; or (B) A specific grade level or academic, instructional, subject... are filled by teachers who are certified, but who are teaching in academic subject areas other than... Secretary or from an authoritative electronic database maintained or authorized by the Secretary that...

  13. 34 CFR 682.210 - Deferment.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... elementary or secondary school teachers; or (B) A specific grade level or academic, instructional, subject... are filled by teachers who are certified, but who are teaching in academic subject areas other than... Secretary or from an authoritative electronic database maintained or authorized by the Secretary that...

  14. 17 CFR 37.205 - Audit trail.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... trading; and (iv) Identification of each account to which fills are allocated. (3) Electronic analysis capability. A swap execution facility's audit trail program shall include electronic analysis capability with respect to all audit trail data in the transaction history database. Such electronic analysis capability...

  15. Thermal Barrier/Seal for Extreme Temperature Applications

    NASA Technical Reports Server (NTRS)

    Steinetz, Bruce M.; Dunlap, Patrick H., Jr.; Phelps, Jack; Bauer, Paul; Bond, Bruce; McCool, Alex (Technical Monitor)

    2002-01-01

    Large solid rocket motors, as found on the Space Shuttle, are fabricated in segments for manufacturing considerations, bolted together, and sealed using conventional Viton O-ring seals. Similarly the large solid rocket motor nozzles are assembled from several different segments, bolted together, and sealed at six joint locations using conventional O-ring seals. The 5500 F combustion gases are generally kept a safe distance away from the seals by thick layers of phenolic or rubber insulation. Joint-fill compounds, including RTV (room temperature vulcanized compound) and polysulfide filler, are used to fill the joints in the insulation to prevent a direct flow-path to the O-rings. Normally these two stages of protection are enough to prevent a direct flow-path of the 900-psi hot gases from reaching the temperature-sensitive O-ring seals. However, in the current design 1 out of 15 Space Shuttle solid rocket motors experiences hot gas effects on the Joint 6 wiper (sacrificial) O-rings. Also worrisome is the fact that joints have experienced heat effects on materials between the RTV and the O-rings, and in two cases O-rings have experienced heat effects. These conditions lead to extensive reviews of the post-flight conditions as part of the effort to monitor flight safety. We have developed a braided carbon fiber thermal barrier to replace the joint-fill compounds in the Space Shuttle solid rocket motor nozzles to reduce the incoming 5500 F combustion gas temperature and permit only cool (approximately 100 F) gas to reach the temperature-sensitive O-ring seals. Implementation of this thermal barrier provides more robust, consistent operation with shorter turnaround times between Shuttle launches.

  16. HBLAST: Parallelised sequence similarity--A Hadoop MapReducable basic local alignment search tool.

    PubMed

    O'Driscoll, Aisling; Belogrudov, Vladislav; Carroll, John; Kropp, Kai; Walsh, Paul; Ghazal, Peter; Sleator, Roy D

    2015-04-01

    The recent exponential growth of genomic databases has resulted in the common task of sequence alignment becoming one of the major bottlenecks in the field of computational biology. It is typical for these large datasets and complex computations to require cost prohibitive High Performance Computing (HPC) to function. As such, parallelised solutions have been proposed but many exhibit scalability limitations and are incapable of effectively processing "Big Data" - the name attributed to datasets that are extremely large, complex and require rapid processing. The Hadoop framework, comprised of distributed storage and a parallelised programming framework known as MapReduce, is specifically designed to work with such datasets but it is not trivial to efficiently redesign and implement bioinformatics algorithms according to this paradigm. The parallelisation strategy of "divide and conquer" for alignment algorithms can be applied to both data sets and input query sequences. However, scalability is still an issue due to memory constraints or large databases, with very large database segmentation leading to additional performance decline. Herein, we present Hadoop Blast (HBlast), a parallelised BLAST algorithm that proposes a flexible method to partition both databases and input query sequences using "virtual partitioning". HBlast presents improved scalability over existing solutions and well balanced computational work load while keeping database segmentation and recompilation to a minimum. Enhanced BLAST search performance on cheap memory constrained hardware has significant implications for in field clinical diagnostic testing; enabling faster and more accurate identification of pathogenic DNA in human blood or tissue samples. Copyright © 2015 Elsevier Inc. All rights reserved.
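The "virtual partitioning" strategy can be illustrated schematically: both the database and the query set are split into slices, and every (database slice, query chunk) pair becomes an independent map task. This is a generic sketch of the idea, not HBlast's actual partitioning code:

```python
def virtual_partitions(n_items, n_parts):
    """Split indices 0..n_items-1 into n_parts contiguous, near-equal slices."""
    base, extra = divmod(n_items, n_parts)
    parts, start = [], 0
    for i in range(n_parts):
        size = base + (1 if i < extra else 0)
        parts.append(range(start, start + size))
        start += size
    return parts

def work_units(n_db_seqs, n_queries, db_parts, query_parts):
    """Cross every database slice with every query chunk: one alignment task each."""
    return [(db, q)
            for db in virtual_partitions(n_db_seqs, db_parts)
            for q in virtual_partitions(n_queries, query_parts)]

units = work_units(n_db_seqs=10, n_queries=4, db_parts=3, query_parts=2)
assert len(units) == 6          # 3 database slices x 2 query chunks
```

Because the slices are near-equal, map tasks receive balanced workloads, which is the load-balancing property the record emphasizes.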

  17. BDVC (Bimodal Database of Violent Content): A database of violent audio and video

    NASA Astrophysics Data System (ADS)

    Rivera Martínez, Jose Luis; Mijes Cruz, Mario Humberto; Rodríguez Vázqu, Manuel Antonio; Rodríguez Espejo, Luis; Montoya Obeso, Abraham; García Vázquez, Mireya Saraí; Ramírez Acosta, Alejandro Álvaro

    2017-09-01

    Nowadays there is a trend towards the use of unimodal databases for multimedia content description, organization and retrieval, i.e. applications built on a single type of content such as text, voice or images; bimodal databases, in contrast, allow two different types of content, such as audio-video or image-text, to be associated semantically. Generating a bimodal audio-video database implies creating a connection between the multimedia contents through the semantic relation that associates the actions in both types of information. This paper describes in detail the characteristics and methodology used for the creation of the bimodal database of violent content; the semantic relationship is established by the proposed concepts that describe the audiovisual information. The use of bimodal databases in applications related to audiovisual content processing allows an increase in semantic performance only if those applications process both types of content. The bimodal database contains 580 annotated audiovisual segments, with a total duration of 28 minutes, divided into 41 classes. Bimodal databases are a tool for generating applications for the semantic web.

  18. Automated cortical bone segmentation for multirow-detector CT imaging with validation and application to human studies

    PubMed Central

    Li, Cheng; Jin, Dakai; Chen, Cheng; Letuchy, Elena M.; Janz, Kathleen F.; Burns, Trudy L.; Torner, James C; Levy, Steven M.; Saha, Punam K

    2015-01-01

    Purpose: Cortical bone supports and protects human skeletal functions and plays an important role in determining bone strength and fracture risk. Cortical bone segmentation at a peripheral site using multirow-detector CT (MD-CT) imaging is useful for in vivo assessment of bone strength and fracture risk. Major challenges for the task emerge from limited spatial resolution, low signal-to-noise ratio, presence of cortical pores, and structural complexity over the transition between trabecular and cortical bones. An automated algorithm for cortical bone segmentation at the distal tibia from in vivo MD-CT imaging is presented and its performance and application are examined. Methods: The algorithm is completed in two major steps—(1) bone filling, alignment, and region-of-interest computation and (2) segmentation of cortical bone. After the first step, the following sequence of tasks is performed to accomplish cortical bone segmentation—(1) detection of marrow space and possible pores, (2) computation of cortical bone thickness, detection of recession points, and confirmation and filling of true pores, and (3) detection of endosteal boundary and delineation of cortical bone. Effective generalizations of several digital topologic and geometric techniques are introduced and a fully automated algorithm is presented for cortical bone segmentation. Results: An accuracy of 95.1% in terms of volume of agreement with manual outlining of cortical bone was observed in human MD-CT scans, while an accuracy of 88.5% was achieved when compared with manual outlining on postregistered high resolution micro-CT imaging. An intraclass correlation coefficient of 0.98 was obtained in cadaveric repeat scans. A pilot study was conducted to describe gender differences in cortical bone properties. This study involved 51 female and 46 male participants (age: 19–20 yr) from the Iowa Bone Development Study. 
Results from this pilot study suggest that, on average after adjustment for height and weight differences, males have thicker cortex (mean difference 0.33 mm and effect size 0.92 at the anterior region) with lower bone mineral density (mean difference −28.73 mg/cm3 and effect size 1.35 at the posterior region) as compared to females. Conclusions: The algorithm presented is suitable for fully automated segmentation of cortical bone in MD-CT imaging of the distal tibia with high accuracy and reproducibility. Analysis of data from a pilot study demonstrated that the cortical bone indices allow quantification of gender differences in cortical bone from MD-CT imaging. Application to larger population groups, including those with compromised bone, is needed. PMID:26233184

  19. Structural and Geophysical Characterization of Oklahoma Basement

    NASA Astrophysics Data System (ADS)

    Morgan, C.; Johnston, C. S.; Carpenter, B. M.; Reches, Z.

    2017-12-01

    Oklahoma has experienced a large increase in seismicity since 2009 that has been attributed to wastewater injection. Most earthquakes, including four M5+ earthquakes, nucleated at depths > 4 km, well within the pre-Cambrian crystalline basement, even though wastewater injection occurred almost exclusively in the sedimentary sequence above. To better understand the structural characteristics of the rhyolite and granite that make up the midcontinent basement, we analyzed a 150 m long core recovered from a basement borehole (Shads 4) in Rogers County, NE Oklahoma. The analysis of the fracture network in the rhyolite core included measurements of fracture inclination, aperture, and density, the examination of fracture surface features and fill mineralogy, as well as x-ray diffraction analysis of secondary mineralization. We also analyzed the highly fractured and faulted segments of the core with a portable gamma-ray detector, magnetometer, and rebound hammer. The preliminary analysis of the fractures within the rhyolite core showed: (1) Fracture density increases with depth by a factor of 10, from 4 fractures/10 m in the upper core segment to 40 fractures/10 m 150 m deeper. (2) The fractures are primarily sub-vertical, inclined 10-20° from the axis of the vertical core. (3) The secondary mineralization is dominated by calcite and epidote. (4) Fracture aperture ranges from 0.35 to 2.35 mm based on the thickness of secondary filling. (5) About 8% of the examined fractures display slickenside striations. (6) Increases in elasticity (by rebound hammer) and gamma-ray emissions are systematically correlated with a decrease in magnetic susceptibility in core segments of high fracture density and/or faulting; this observation suggests diagenetic fracture re-mineralization.

  20. Continuous on-line monitoring of left ventricular function with a new nonimaging detector:validation and clinical use in the evaluation of patients post angioplasty.

    PubMed

    Breisblatt, W M; Schulman, D S; Follansbee, W P

    1991-06-01

    A new miniaturized nonimaging radionuclide detector (Cardioscint, Oxford, England) was evaluated for the continuous on-line assessment of left ventricular function. This cesium iodide probe can be placed on the patient's chest and can be interfaced to an IBM compatible personal computer conveniently placed at the patient's bedside. This system can provide a beat-to-beat or gated determination of left ventricular ejection fraction and ST segment analysis. In 28 patients this miniaturized probe was correlated against a high resolution gamma camera study. Over a wide range of ejection fraction (31% to 76%) in patients with and without regional wall motion abnormalities, the correlation between the Cardioscint detector and the gamma camera was excellent (r = 0.94, SEE +/- 2.1). This detector system has high temporal (10 msec) resolution, and comparison of peak filling rate (PFR) and time to peak filling (TPFR) also showed close agreement with the gamma camera (PFR, r = 0.94, SEE +/- 0.17; TPFR, r = 0.92, SEE +/- 6.8). In 18 patients on bed rest the long-term stability of this system for measuring ejection fraction and ST segments was verified. During the monitoring period (108 +/- 28 minutes) only minor changes in ejection fraction occurred (coefficient of variation 0.035 +/- 0.016) and ST segment analysis showed no significant change from baseline. To determine whether continuous on-line measurement of ejection fraction would be useful after coronary angioplasty, 12 patients who had undergone a successful procedure were evaluated for 280 +/- 35 minutes with the Cardioscint system.(ABSTRACT TRUNCATED AT 250 WORDS)

  1. Automatic atlas-based three-label cartilage segmentation from MR knee images

    PubMed Central

    Shan, Liang; Zach, Christopher; Charles, Cecil; Niethammer, Marc

    2016-01-01

    Osteoarthritis (OA) is the most common form of joint disease and often characterized by cartilage changes. Accurate quantitative methods are needed to rapidly screen large image databases to assess changes in cartilage morphology. We therefore propose a new automatic atlas-based cartilage segmentation method for future automatic OA studies. Atlas-based segmentation methods have been demonstrated to be robust and accurate in brain imaging and therefore also hold high promise to allow for reliable and high-quality segmentations of cartilage. Nevertheless, atlas-based methods have not been well explored for cartilage segmentation. A particular challenge is the thinness of cartilage, its relatively small volume in comparison to surrounding tissue and the difficulty to locate cartilage interfaces – for example the interface between femoral and tibial cartilage. This paper focuses on the segmentation of femoral and tibial cartilage, proposing a multi-atlas segmentation strategy with non-local patch-based label fusion which can robustly identify candidate regions of cartilage. This method is combined with a novel three-label segmentation method which guarantees the spatial separation of femoral and tibial cartilage, and ensures spatial regularity while preserving the thin cartilage shape through anisotropic regularization. Our segmentation energy is convex and therefore guarantees globally optimal solutions. We perform an extensive validation of the proposed method on 706 images of the Pfizer Longitudinal Study. Our validation includes comparisons of different atlas segmentation strategies, different local classifiers, and different types of regularizers. To compare to other cartilage segmentation approaches we validate based on the 50 images of the SKI10 dataset. PMID:25128683
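The label-fusion step can be illustrated with the simplest possible baseline, per-voxel majority voting across registered atlases. The paper uses non-local patch-based fusion, which weights atlas patches by similarity; majority voting is shown here only to make the fusion step concrete, with made-up labels:

```python
from collections import Counter

def majority_vote_fusion(atlas_labels):
    """Fuse per-voxel labels from several registered atlases by majority vote.
    `atlas_labels` is a list of equal-length label lists (one list per atlas)."""
    fused = []
    for votes in zip(*atlas_labels):
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

# Three hypothetical atlases labelling five voxels:
# 0 = background, 1 = femoral cartilage, 2 = tibial cartilage.
atlases = [
    [0, 1, 1, 2, 2],
    [0, 1, 2, 2, 2],
    [0, 1, 1, 2, 0],
]
assert majority_vote_fusion(atlases) == [0, 1, 1, 2, 2]
```

The three-label formulation in the record then enforces, on top of such candidate labels, that femoral and tibial cartilage remain spatially separated.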

  2. Echogenicity based approach to detect, segment and track the common carotid artery in 2D ultrasound images.

    PubMed

    Narayan, Nikhil S; Marziliano, Pina

    2015-08-01

    Automatic detection and segmentation of the common carotid artery in transverse ultrasound (US) images of the thyroid gland play a vital role in the success of US guided intervention procedures. We propose in this paper a novel method to accurately detect, segment and track the carotid in 2D and 2D+t US images of the thyroid gland using concepts based on tissue echogenicity and ultrasound image formation. We first segment the hypoechoic anatomical regions of interest using local phase and energy in the input image. We then make use of a Hessian based blob like analysis to detect the carotid within the segmented hypoechoic regions. The carotid artery is segmented by making use of least squares ellipse fit for the edge points around the detected carotid candidate. Experiments performed on a multivendor dataset of 41 images show that the proposed algorithm can segment the carotid artery with high sensitivity (99.6 ± 0.2%) and specificity (92.9 ± 0.1%). Further experiments on a public database containing 971 images of the carotid artery showed that the proposed algorithm can achieve a detection accuracy of 95.2% with a 2% increase in performance when compared to the state-of-the-art method.

  3. Numerical Simulation on the Induced Voltage Across the Coil Terminal by the Segmented Flow of Ferrofluid and Air-Layer.

    PubMed

    Lee, Won-Ho; Lee, Sangyoup; Lee, Jong-Chul

    2018-09-01

    Nanoparticles and nanofluids have been implemented in energy harvesting devices, and energy harvesting based on magnetic nanofluid flow was recently achieved by using a layer-built magnet and microbubble injection to induce a voltage on the order of 10⁻¹ mV. However, this is not yet suitable for commercial purposes. The air bubbles must be segmented in the base fluid, and the magnetic flux of the ferrofluids should change over time to increase the amount of electric voltage and current from energy harvesting. In this study, we proposed a novel technique to achieve segmented flow of the ferrofluids and the air layers. This segmented ferrofluid flow linear generator can increase the magnitude of the induced voltage from the energy harvesting system. In our experiments, a ferrofluid-filled capsule produced time-dependent changes in the magnetic flux through a multi-turn coil, and an induced voltage was generated on the order of 10¹ mV at a low frequency of 2 Hz. A finite element analysis was used to describe the time-dependent change of the magnetic flux through the coil according to the motion of the segmented flow of the ferrofluid and the air-layer, and the induced voltage was generated on the order of 10² mV at a high frequency of 12.5 Hz.
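The induced voltage follows Faraday's law, emf = −N dΦ/dt, so a time-varying flux through the coil can be turned into a voltage estimate by finite differences. The turn count and flux amplitude below are hypothetical, chosen only so the result lands in the millivolt range the record reports:

```python
import math

def induced_emf(flux, dt, turns):
    """Finite-difference Faraday's law: emf_i = -N * (flux[i+1] - flux[i]) / dt."""
    return [-turns * (flux[i + 1] - flux[i]) / dt for i in range(len(flux) - 1)]

# Hypothetical sinusoidal flux (Wb) through the coil as the capsule oscillates at 2 Hz.
f, amp, turns, dt = 2.0, 1e-6, 500, 1e-3
t = [k * dt for k in range(0, 251)]
flux = [amp * math.sin(2 * math.pi * f * tk) for tk in t]
emf = induced_emf(flux, dt, turns)
peak_mV = max(abs(e) for e in emf) * 1e3
assert 5.0 < peak_mV < 7.0      # analytic peak is N * amp * 2*pi*f ~ 6.3 mV
```

Raising the oscillation frequency raises dΦ/dt proportionally, which is why the 12.5 Hz case in the record yields an order of magnitude more voltage than the 2 Hz case.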

  4. Innovative telescope architectures for future large space observatories

    NASA Astrophysics Data System (ADS)

    Polidan, Ronald S.; Breckinridge, James B.; Lillie, Charles F.; MacEwen, Howard A.; Flannery, Martin R.; Dailey, Dean R.

    2016-10-01

    Over the past few years, we have developed a concept for an evolvable space telescope (EST) that is assembled on orbit in three stages, growing from a 4×12-m telescope in Stage 1, to a 12-m filled aperture in Stage 2, and then to a 20-m filled aperture in Stage 3. Stage 1 is launched as a fully functional telescope and begins gathering science data immediately after checkout on orbit. This observatory is then periodically augmented in space with additional mirror segments, structures, and newer instruments to evolve the telescope over the years to a 20-m space telescope. We discuss the EST architecture, the motivation for this approach, and the benefits it provides over current approaches to building and maintaining large space observatories.

  5. Gap filling of 3-D microvascular networks by tensor voting.

    PubMed

    Risser, L; Plouraboue, F; Descombes, X

    2008-05-01

    We present a new algorithm which merges discontinuities in 3-D images of tubular structures presenting undesirable gaps. The proposed method is mainly aimed at large 3-D images of microvascular networks. In order to recover the real network topology, we need to fill the gaps between the closest discontinuous vessels. The algorithm presented in this paper aims at achieving this goal. This algorithm is based on the skeletonization of the segmented network followed by a tensor voting method. It can merge the most common kinds of discontinuities found in microvascular networks. It is robust, easy to use, and relatively fast. The microvascular network images were obtained using synchrotron tomography imaging at the European Synchrotron Radiation Facility. These images exhibit samples of intracortical networks. Representative results are illustrated.
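The gap-merging step can be caricatured as pairing nearby endpoints of skeletonized vessel segments. Tensor voting additionally uses local orientation to decide which endpoints belong together; the sketch below uses distance alone, with hypothetical segment IDs and coordinates:

```python
import math

def merge_gaps(endpoints, max_gap):
    """Pair skeleton endpoints closer than `max_gap`, greedily by distance.
    Each endpoint is (segment_id, (x, y, z)); endpoints of the same segment
    are never paired with each other."""
    cand = sorted(
        (math.dist(p, q), (i, j))
        for i, (si, p) in enumerate(endpoints)
        for j, (sj, q) in enumerate(endpoints)
        if i < j and si != sj
    )
    pairs, used = [], set()
    for d, (i, j) in cand:
        if d <= max_gap and i not in used and j not in used:
            pairs.append((i, j))
            used |= {i, j}
    return pairs

# Two vessel segments separated by a small gap, plus one far-away segment.
eps = [(0, (0.0, 0.0, 0.0)), (1, (0.4, 0.0, 0.0)), (2, (9.0, 9.0, 9.0))]
assert merge_gaps(eps, max_gap=1.0) == [(0, 1)]
```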

  6. REGRESSION MODELS OF RESIDENTIAL EXPOSURE TO CHLORPYRIFOS AND DIAZINON

    EPA Science Inventory

    This study examines the ability of regression models to predict residential exposures to chlorpyrifos and diazinon, based on the information from the NHEXAS-AZ database. The robust method was used to generate "fill-in" values for samples that are below the detection l...

  7. Computer Based Melanocytic and Nevus Image Enhancement and Segmentation.

    PubMed

    Jamil, Uzma; Akram, M Usman; Khalid, Shehzad; Abbas, Sarmad; Saleem, Kashif

    2016-01-01

    Digital dermoscopy aids dermatologists in monitoring potentially cancerous skin lesions. Melanoma is the fifth most common form of skin cancer; it is comparatively rare but the most dangerous. Melanoma is curable if it is detected at an early stage. Automated segmentation of a cancerous lesion from normal skin is the most critical yet tricky part in computerized lesion detection and classification. The effectiveness and accuracy of lesion classification are critically dependent on the quality of lesion segmentation. In this paper, we have proposed a novel approach that can automatically preprocess the image and then segment the lesion. The system filters unwanted artifacts including hairs, gel, bubbles, and specular reflection. A novel approach is presented using the concept of wavelets for detecting and inpainting the hairs present in the cancer images. The contrast of the lesion with the skin is enhanced using an adaptive sigmoidal function that takes care of the localized intensity distribution within a given lesion image. We then present a segmentation approach to precisely segment the lesion from the background. The proposed approach is tested on the European database of dermoscopic images. Results are compared with competing methods to demonstrate the superiority of the suggested approach.
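The adaptive sigmoidal contrast enhancement mentioned above can be sketched as a logistic remapping whose midpoint adapts to the local mean intensity; the gain value here is illustrative, not the paper's:

```python
import math

def sigmoid_contrast(pixels, gain, cutoff=None):
    """Sigmoidal contrast stretch. The cutoff adapts to the mean intensity
    when not given, so darker lesions are remapped around their own midpoint."""
    if cutoff is None:
        cutoff = sum(pixels) / len(pixels)      # adaptive midpoint
    return [1.0 / (1.0 + math.exp(-gain * (p - cutoff))) for p in pixels]

# Low-contrast lesion values around 0.4 get spread toward 0 and 1.
vals = [0.35, 0.40, 0.45]
out = sigmoid_contrast(vals, gain=20.0)
assert out[0] < 0.3 and out[2] > 0.7
```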

  8. Application of the 3D slicer chest imaging platform segmentation algorithm for large lung nodule delineation

    PubMed Central

    Parmar, Chintan; Blezek, Daniel; Estepar, Raul San Jose; Pieper, Steve; Kim, John; Aerts, Hugo J. W. L.

    2017-01-01

    Purpose Accurate segmentation of lung nodules is crucial in the development of imaging biomarkers for predicting malignancy of the nodules. Manual segmentation is time consuming and affected by inter-observer variability. We evaluated the robustness and accuracy of a publicly available semiautomatic segmentation algorithm that is implemented in the 3D Slicer Chest Imaging Platform (CIP) and compared it with the performance of manual segmentation. Methods CT images of 354 manually segmented nodules were downloaded from the LIDC database. Four radiologists performed the manual segmentation and assessed various nodule characteristics. The semiautomatic CIP segmentation was initialized using the centroid of the manual segmentations, thereby generating four contours for each nodule. The robustness of both segmentation methods was assessed using the region of uncertainty (δ) and the Dice similarity index (DSI), and compared using the Wilcoxon signed-rank test (p < 0.05). The Dice similarity index between the manual and CIP segmentations (DSIAgree) was computed to estimate the accuracy of the semiautomatic contours. Results The median computational time of the CIP segmentation was 10 s. The median CIP and manually segmented volumes were 477 ml and 309 ml, respectively. CIP segmentations were significantly more robust than manual segmentations (median δ = 14 ml and median DSI = 99% for CIP vs. median δ = 222 ml and median DSI = 82% for manual; Wilcoxon p ≈ 10⁻¹⁶). The agreement between CIP and manual segmentations had a median DSIAgree of 60%. While 13% (47/354) of the nodules did not require any manual adjustment, minor to substantial manual adjustments were needed for 87% (305/354) of the nodules. CIP segmentations were observed to perform poorly (median DSIAgree ≈ 50%) for non-/sub-solid nodules with subtle appearances and poorly defined boundaries. 
Conclusion Semi-automatic CIP segmentation can potentially reduce the physician workload for 13% of nodules owing to its computational efficiency and superior stability compared to manual segmentation. Although manual adjustment is needed for many cases, CIP segmentation provides a preliminary contour for physicians as a starting point. PMID:28594880
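The agreement measure used in the record above reduces to simple set arithmetic on binary masks. A minimal sketch of the Dice similarity index (illustrative only, not the CIP implementation; the toy masks are assumptions):

```python
import numpy as np

def dice_similarity(a, b):
    """Dice similarity index of two binary masks, as a percentage."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 100.0  # both masks empty: treat as perfect agreement
    return 200.0 * np.logical_and(a, b).sum() / denom

# Toy 4x4 masks: a 4-voxel "manual" contour vs. a 6-voxel "semiautomatic" one
manual = np.zeros((4, 4), dtype=bool); manual[1:3, 1:3] = True
auto = np.zeros((4, 4), dtype=bool); auto[1:3, 1:4] = True
print(dice_similarity(manual, auto))  # 2*4/(4+6) * 100 = 80.0
```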

  9. Materials And Processes Technical Information System (MAPTIS) LDEF materials database

    NASA Technical Reports Server (NTRS)

    Davis, John M.; Strickland, John W.

    1992-01-01

    The Materials and Processes Technical Information System (MAPTIS) is a collection of materials data which was computerized and is available to engineers in the aerospace community involved in the design and development of spacecraft and related hardware. Consisting of various database segments, MAPTIS provides the user with information such as material properties, test data derived from tests specifically conducted for qualification of materials for use in space, verification and control, project management, material information, and various administrative requirements. A recent addition to the project management segment consists of materials data derived from the LDEF flight. This tremendous quantity of data consists of both pre-flight and post-flight data in such diverse areas as optical/thermal, mechanical and electrical properties, atomic concentration surface analysis data, as well as general data such as sample placement on the satellite, A-O flux, equivalent sun hours, etc. Each data point is referenced to the primary investigator(s) and the published paper from which the data was taken. The MAPTIS system is envisioned to become the central location for all LDEF materials data. This paper consists of multiple parts, comprising a general overview of the MAPTIS System and the types of data contained within, and the specific LDEF data element and the data contained in that segment.

  10. Contour Tracking in Echocardiographic Sequences via Sparse Representation and Dictionary Learning

    PubMed Central

    Huang, Xiaojie; Dione, Donald P.; Compas, Colin B.; Papademetris, Xenophon; Lin, Ben A.; Bregasi, Alda; Sinusas, Albert J.; Staib, Lawrence H.; Duncan, James S.

    2013-01-01

    This paper presents a dynamical appearance model based on sparse representation and dictionary learning for tracking both endocardial and epicardial contours of the left ventricle in echocardiographic sequences. Instead of learning offline spatiotemporal priors from databases, we exploit the inherent spatiotemporal coherence of individual data to constrain cardiac contour estimation. The contour tracker is initialized with a manual tracing of the first frame. It employs multiscale sparse representation of local image appearance and learns online multiscale appearance dictionaries in a boosting framework as the image sequence is segmented frame by frame. The weights of multiscale appearance dictionaries are optimized automatically. Our region-based level set segmentation integrates a spectrum of complementary multilevel information including intensity, multiscale local appearance, and dynamical shape prediction. The approach is validated on twenty-six 4D echocardiographic image sequences acquired from both healthy and post-infarct canines. The segmentation results agree well with expert manual tracings. The ejection fraction estimates also show good agreement with manual results. Advantages of our approach are demonstrated by comparisons with a conventional pure intensity model, a registration-based contour tracker, and a state-of-the-art database-dependent offline dynamical shape model. We also demonstrate the feasibility of clinical application by applying the method to four 4D human data sets. PMID:24292554

  11. Acceleration of Image Segmentation Algorithm for (Breast) Mammogram Images Using High-Performance Reconfigurable Dataflow Computers

    PubMed Central

    Filipovic, Nenad D.

    2017-01-01

    Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. Breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. It is often the case that already existing image databases have large sets of data whose processing requires a lot of time, and thus the acceleration of each of the processing stages in breast cancer detection is a very important issue. In this paper, the implementation of the already existing algorithm for region-of-interest based image segmentation for mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. Maxeler's acceleration card is used as the dataflow engine (DFE) of such an HPRDC. The experiments for examining the acceleration of that algorithm on the Reconfigurable Dataflow Computers (RDCs) are performed with two types of mammogram images with different resolutions. There were also several DFE configurations, each of which gave a different acceleration of algorithm execution. Those acceleration values are presented, and experimental results showed good acceleration. PMID:28611851

  12. Acceleration of Image Segmentation Algorithm for (Breast) Mammogram Images Using High-Performance Reconfigurable Dataflow Computers.

    PubMed

    Milankovic, Ivan L; Mijailovic, Nikola V; Filipovic, Nenad D; Peulic, Aleksandar S

    2017-01-01

    Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. Breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. It is often the case that already existing image databases have large sets of data whose processing requires a lot of time, and thus the acceleration of each of the processing stages in breast cancer detection is a very important issue. In this paper, the implementation of the already existing algorithm for region-of-interest based image segmentation for mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. Maxeler's acceleration card is used as the dataflow engine (DFE) of such an HPRDC. The experiments for examining the acceleration of that algorithm on the Reconfigurable Dataflow Computers (RDCs) are performed with two types of mammogram images with different resolutions. There were also several DFE configurations, each of which gave a different acceleration of algorithm execution. Those acceleration values are presented, and experimental results showed good acceleration.

  13. Spotting words in handwritten Arabic documents

    NASA Astrophysics Data System (ADS)

    Srihari, Sargur; Srinivasan, Harish; Babu, Pavithra; Bhole, Chetan

    2006-01-01

    The design and performance of a system for spotting handwritten Arabic words in scanned document images is presented. The three main components of the system are a word segmenter, a shape-based matcher for words, and a search interface. The user types an English query into a search window; the system finds the equivalent Arabic word, e.g., by dictionary look-up, and locates word images in an indexed (segmented) set of documents. A two-step approach is employed in performing the search: (1) prototype selection: the query is used to obtain a set of handwritten samples of that word from a known set of writers (these are the prototypes), and (2) word matching: the prototypes are used to spot each occurrence of those words in the indexed document database. A ranking is performed on the entire set of test word images, where the ranking criterion is a similarity score between each prototype word and the candidate words based on global word shape features. A database of 20,000 word images contained in 100 scanned handwritten Arabic documents written by 10 different writers was used to study retrieval performance. With five writers providing prototypes, the other five used for testing, and manually segmented documents, 55% precision is obtained at 50% recall. Performance increases as more writers are used for training.
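The retrieval figure quoted above (55% precision at 50% recall) comes from sweeping down the ranked candidate list until half of the true occurrences have been found. A small sketch of that computation (the ranking below is a hypothetical example, not the paper's data):

```python
def precision_at_recall(ranked_relevance, target_recall):
    """Precision at the rank where recall first reaches target_recall.

    ranked_relevance: 0/1 flags for ranked candidates, 1 meaning the
    candidate really is an occurrence of the query word.
    """
    total_relevant = sum(ranked_relevance)
    hits = 0
    for rank, rel in enumerate(ranked_relevance, start=1):
        hits += rel
        if total_relevant and hits / total_relevant >= target_recall:
            return hits / rank
    return 0.0

# Toy ranking of 10 candidate word images (4 true occurrences in total)
ranking = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
print(round(precision_at_recall(ranking, 0.5), 3))  # 2 of 4 found at rank 3 -> 0.667
```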

  14. Segmentation of MR images via discriminative dictionary learning and sparse coding: application to hippocampus labeling.

    PubMed

    Tong, Tong; Wolz, Robin; Coupé, Pierrick; Hajnal, Joseph V; Rueckert, Daniel

    2013-08-01

    We propose a novel method for the automatic segmentation of brain MRI images by using discriminative dictionary learning and sparse coding techniques. In the proposed method, dictionaries and classifiers are learned simultaneously from a set of brain atlases, which can then be used for the reconstruction and segmentation of an unseen target image. The proposed segmentation strategy is based on image reconstruction, which is in contrast to most existing atlas-based labeling approaches that rely on comparing image similarities between atlases and target images. In addition, we propose a Fixed Discriminative Dictionary Learning for Segmentation (F-DDLS) strategy, which can learn dictionaries offline and perform segmentations online, enabling a significant speed-up in the segmentation stage. The proposed method has been evaluated for the hippocampus segmentation of 80 healthy ICBM subjects and 202 ADNI images. The robustness of the proposed method, especially of our F-DDLS strategy, was validated by training and testing on different subject groups in the ADNI database. The influence of different parameters was studied and the performance of the proposed method was also compared with that of the nonlocal patch-based approach. The proposed method achieved a median Dice coefficient of 0.879 on 202 ADNI images and 0.890 on 80 ICBM subjects, which is competitive compared with state-of-the-art methods. Copyright © 2013 Elsevier Inc. All rights reserved.
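The reconstruction-based labeling idea above can be sketched in a few lines: each class keeps its own dictionary, and a voxel patch is assigned to the class whose dictionary reconstructs it best. The sketch below substitutes plain least squares for a true sparse solver, and the dictionaries are random stand-ins rather than learned atoms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-class dictionaries: columns are "learned" atoms
D_hippocampus = rng.normal(size=(16, 5))
D_background = rng.normal(size=(16, 5))

def residual(D, x):
    """Norm of the least-squares reconstruction error of patch x by dictionary D.
    (A stand-in for a sparse code; F-DDLS would solve a sparsity-constrained problem.)"""
    coef, *_ = np.linalg.lstsq(D, x, rcond=None)
    return np.linalg.norm(x - D @ coef)

def label_patch(x):
    """Assign the patch to the class with the smaller reconstruction residual."""
    r_h = residual(D_hippocampus, x)
    r_b = residual(D_background, x)
    return "hippocampus" if r_h < r_b else "background"

# A patch lying in the span of the hippocampus dictionary is labeled accordingly
x = D_hippocampus @ np.ones(5)
print(label_patch(x))  # hippocampus
```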

  15. Scale-space for empty catheter segmentation in PCI fluoroscopic images.

    PubMed

    Bacchuwar, Ketan; Cousty, Jean; Vaillant, Régis; Najman, Laurent

    2017-07-01

    In this article, we present a method for empty guiding catheter segmentation in fluoroscopic X-ray images. Because the guiding catheter is a commonly visible landmark, its segmentation is an important and difficult building block for Percutaneous Coronary Intervention (PCI) procedure modeling. In a number of clinical situations, the catheter is empty and appears as a low-contrast structure with two parallel and partially disconnected edges. To segment it, we work on the level-set scale-space of the image, the min tree, to extract curve blobs. We then propose a novel structural scale-space, a hierarchy built on these curve blobs. The deep connected component, i.e., the cluster of curve blobs on this hierarchy that maximizes the likelihood of being an empty catheter, is retained as the final segmentation. We evaluate the performance of the algorithm on a database of 1250 fluoroscopic images from 6 patients. We obtain very good qualitative and quantitative segmentation performance, with mean precision and recall of 80.48% and 63.04%, respectively. We develop a novel structural scale-space to segment a structured object, the empty catheter, in challenging situations where the information content of the images is very sparse. Fully automatic empty catheter segmentation in X-ray fluoroscopic images is an important preliminary step in PCI procedure modeling, as it aids in tagging the arrival and removal locations of other interventional tools.

  16. Project W-320, 241-C-106 sluicing electrical calculations, Volume 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, J.W.

    1998-08-07

    This supporting document has been prepared to make the FDNW calculations for Project W-320 readily retrievable. These calculations are required: to determine the power requirements needed to power electrical heat tracing segments contained within three manufactured insulated tubing assemblies; to verify thermal adequacy of tubing assembly selection by others; to size the heat tracing feeder and branch circuit conductors and conduits; to size protective circuit breakers and fuses; and to accomplish thermal design for two electrical heat tracing segments: one at C-106 tank riser 7 (CCTV) and one at the exhaust hatchway (condensate drain). Contents include: C-Farm electrical heat tracing; cable ampacity, lighting, conduit fill, and voltage drop; and control circuit sizing and voltage drop analysis for the seismic shutdown system.
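Branch-circuit sizing of the kind listed above hinges on a simple round-trip voltage-drop check. The numbers below (supply voltage, load current, conductor resistance, circuit length, design limit) are illustrative assumptions, not values from the W-320 calculations:

```python
def voltage_drop(current_a, ohms_per_km, length_m):
    """Round-trip voltage drop of a two-wire branch circuit, in volts."""
    return 2 * current_a * (ohms_per_km / 1000.0) * length_m

supply_v = 120.0                       # assumed supply voltage
drop = voltage_drop(current_a=12.0,    # assumed heat-trace load current
                    ohms_per_km=5.2,   # roughly 12 AWG copper
                    length_m=20.0)     # assumed one-way circuit length
print(f"{drop:.2f} V drop, {100 * drop / supply_v:.1f}% of supply")
# a common design practice keeps branch-circuit drop under about 3%
assert drop / supply_v < 0.03
```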

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holcomb, R.T.; Moore, J.G.; Lipman, P.W.

    The GLORIA long-range sonar imaging system has revealed fields of large lava flows in the Hawaiian Trough east and south of Hawaii in water as deep as 5.5 km. Flows in the most extensive field (110 km long) have erupted from the deep submarine segment of Kilauea's east rift zone. Other flows have been erupted from Loihi and Mauna Loa. This discovery confirms a suspicion, long held from subaerial studies, that voluminous submarine flows are erupted from Hawaiian volcanoes, and it supports an inference that summit calderas repeatedly collapse and fill at intervals of centuries to millennia owing to voluminous eruptions. These extensive flows differ greatly in form from pillow lavas found previously along shallower segments of the rift zones; therefore, revision of concepts of volcano stratigraphy and structure may be required.

  18. ASRM test report: Autoclave cure process development

    NASA Technical Reports Server (NTRS)

    Nachbar, D. L.; Mitchell, Suzanne

    1992-01-01

    ASRM insulated segments will be autoclave cured following insulation pre-form installation and strip wind operations. Following competitive bidding, Aerojet ASRM Division (AAD) Purchase Order 100142 was awarded to American Fuel Cell and Coated Fabrics Company, Inc. (Amfuel), Magnolia, AR, for subcontracted insulation autoclave cure process development. Autoclave cure process development test requirements were included in Task 3 of TM05514, Manufacturing Process Development Specification for Integrated Insulation Characterization and Stripwind Process Development. The test objective was to establish autoclave cure process parameters for ASRM insulated segments. Six tasks were completed to: (1) evaluate cure parameters that control acceptable vulcanization of ASRM Kevlar-filled EPDM insulation material; (2) identify first and second order impact parameters on the autoclave cure process; and (3) evaluate insulation material flow-out characteristics to support pre-form configuration design.

  19. DNA profiles, computer searches, and the Fourth Amendment.

    PubMed

    Kimel, Catherine W

    2013-01-01

    Pursuant to federal statutes and to laws in all fifty states, the United States government has assembled a database containing the DNA profiles of over eleven million citizens. Without judicial authorization, the government searches each of these profiles one-hundred thousand times every day, seeking to link database subjects to crimes they are not suspected of committing. Yet, courts and scholars that have addressed DNA databasing have focused their attention almost exclusively on the constitutionality of the government's seizure of the biological samples from which the profiles are generated. This Note fills a gap in the scholarship by examining the Fourth Amendment problems that arise when the government searches its vast DNA database. This Note argues that each attempt to match two DNA profiles constitutes a Fourth Amendment search because each attempted match infringes upon database subjects' expectations of privacy in their biological relationships and physical movements. The Note further argues that database searches are unreasonable as they are currently conducted, and it suggests an adaptation of computer-search procedures to remedy the constitutional deficiency.

  20. ACME: Automated Cell Morphology Extractor for Comprehensive Reconstruction of Cell Membranes

    PubMed Central

    Mosaliganti, Kishore R.; Noche, Ramil R.; Xiong, Fengzhu; Swinburne, Ian A.; Megason, Sean G.

    2012-01-01

    The quantification of cell shape, cell migration, and cell rearrangements is important for addressing classical questions in developmental biology such as patterning and tissue morphogenesis. Time-lapse microscopic imaging of transgenic embryos expressing fluorescent reporters is the method of choice for tracking morphogenetic changes and establishing cell lineages and fate maps in vivo. However, the manual steps involved in curating thousands of putative cell segmentations have been a major bottleneck in the application of these technologies, especially for cell membranes. Segmentation of cell membranes, while more difficult than nuclear segmentation, is necessary for quantifying the relations between changes in cell morphology and morphogenesis. We present a novel and fully automated method to first reconstruct membrane signals and then segment out cells from 3D membrane images even in dense tissues. The approach has three stages: 1) detection of local membrane planes, 2) voting to fill structural gaps, and 3) region segmentation. We demonstrate the superior performance of the algorithms quantitatively on time-lapse confocal and two-photon images of zebrafish neuroectoderm and paraxial mesoderm by comparing its results with those derived from human inspection. We also compared against synthetic microscopic images generated by simulating the process of imaging with fluorescent reporters under varying conditions of noise. Both the over-segmentation and under-segmentation percentages of our method are around 5%. The volume overlap of individual cells, compared to expert manual segmentation, is consistently over 84%. By using our software (ACME) to study somite formation, we were able to segment touching cells with high accuracy and reliably quantify changes in morphogenetic parameters such as cell shape and size, and the arrangement of epithelial and mesenchymal cells. 
Our software has been developed and tested on Windows, Mac, and Linux platforms and is available publicly under an open source BSD license (https://github.com/krm15/ACME). PMID:23236265

  1. Profiling the different needs and expectations of patients for population-based medicine: a case study using segmentation analysis

    PubMed Central

    2012-01-01

    Background This study illustrates an evidence-based method for the segmentation analysis of patients that could greatly improve the approach to population-based medicine, by filling a gap in the empirical analysis of this topic. Segmentation facilitates individual patient care in the context of the culture, health status, and the health needs of the entire population to which that patient belongs. Because many health systems are engaged in developing better chronic care management initiatives, patient profiles are critical to understanding whether some patients can move toward effective self-management and can play a central role in determining their own care, which fosters a sense of responsibility for their own health. A review of the literature on patient segmentation provided the background for this research. Method First, we conducted a literature review on patient satisfaction and segmentation to build a survey. Then, we administered 3,461 surveys to users of outpatient services. The key structures on which the subjects’ perception of outpatient services was based were extracted using principal component factor analysis with varimax rotation. After the factor analysis, segmentation was performed through cluster analysis to better analyze the influence of individual attitudes on the results. Results Four segments were identified through factor and cluster analysis: the “unpretentious,” the “informed and supported,” the “experts” and the “advanced” patients. Their policy and managerial implications are outlined. Conclusions With this research, we provide the following: – a method for profiling patients based on common patient satisfaction surveys that is easily replicable in all health systems and contexts; – a proposal for segments based on the results of a broad-based analysis conducted in the Italian National Health System (INHS). Segments represent profiles of patients requiring different strategies for delivering health services. 
Their knowledge and analysis might support an effort to build an effective population-based medicine approach. PMID:23256543
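The cluster-analysis step described above can be sketched with a minimal Lloyd's k-means on synthetic factor scores (a toy stand-in for the survey data; a real analysis would use a statistics package and the actual factor scores):

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal Lloyd's k-means; centers initialized at evenly spaced rows."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # assign each respondent to the nearest center, then recompute centers
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(1)
# Synthetic scores of 200 respondents on two extracted factors, forming two
# hypothetical profiles (say, "unpretentious" vs. "expert" patients)
X = np.vstack([
    rng.normal([-2.0, -2.0], 0.3, size=(100, 2)),
    rng.normal([2.0, 2.0], 0.3, size=(100, 2)),
])
labels, centers = kmeans(X, k=2)
print(labels[0] != labels[150])  # the two profiles land in different clusters
```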

  2. Slow Joining of Newly Replicated DNA Chains in DNA Polymerase I-Deficient Escherichia coli Mutants*

    PubMed Central

    Okazaki, Reiji; Arisawa, Mikio; Sugino, Akio

    1971-01-01

    In Escherichia coli mutants deficient in DNA polymerase I, newly replicated short DNA is joined at about 10% of the rate in the wild-type strains. It is postulated that DNA polymerase I normally functions in filling gaps between the nascent short segments synthesized by the replication complex. Possible implications of the finding are discussed in relation to other abnormal properties of these mutants. PMID:4943548

  3. Improved Net-Level Filling And Finishing Of Large Castings

    NASA Technical Reports Server (NTRS)

    Johnson, Erik P.; Brown, Richard F.

    1995-01-01

    Improved method of vacuum casting of large, generally cylindrical objects to net sizes and shapes reduces amount of direct manual labor by workers in proximity to cast material. Original application for which method devised is fabrication of solid rocket-motor segments containing solid propellant, wherein need to minimize exposure of workers to propellant material being cast. Improved method adaptable to other applications involving large castings of toxic, flammable, or otherwise hazardous materials.

  4. Rectus sheath catheters for continuous analgesia after upper abdominal surgery.

    PubMed

    Cornish, Philip; Deacon, Alf

    2007-01-01

    The segmental nerves T6-T11 pass through and innervate the rectus abdominis muscle and overlying skin. The arcuate lines compartmentalize the rectus, but they are deficient posteriorly, and hence a catheter tunnelled into the posterior sheath can be used to achieve an effective continuous analgesic block. Sufficient volume is important to fill the compartment. It is a simple surgical procedure that has several advantages and appears to be a viable alternative to epidural analgesia.

  5. Using remote sensing data to predict road fill areas and areas affected by fill erosion with planned forest road construction: a case study in Kastamonu Regional Forest Directorate (Turkey).

    PubMed

    Aricak, Burak

    2015-07-01

    Forest roads are essential for transport in managed forests, yet road construction causes environmental disturbance, both in the surface area the road covers and in erosion and downslope deposition of road fill material. The factors affecting the deposition distance of eroded road fill are the slope gradient and the density of plant cover. Thus, it is important to take these factors into consideration during road planning to minimize their disturbance. The aim of this study was to use remote sensing and field surveying to predict the locations that would be affected by downslope deposition of eroding road fill and to compile the data into a geographic information system (GIS) database. The construction of 99,500 m of forest roads is proposed for the Kastamonu Regional Forest Directorate in Turkey. Using GeoEye satellite images and a digital elevation model (DEM) for the region, the location and extent of downslope deposition of road fill were determined for the roads as planned. It was found that if the proposed roads were constructed by excavators, the fill material would cover 910,621 m(2) and the affected surface area would be 1,302,740 m(2). Application of the method used here can minimize the adverse effects of forest roads.

  6. Objects Grouping for Segmentation of Roads Network in High Resolution Images of Urban Areas

    NASA Astrophysics Data System (ADS)

    Maboudi, M.; Amini, J.; Hahn, M.

    2016-06-01

    Updated road databases are required for many purposes such as urban planning, disaster management, car navigation, route planning, traffic management and emergency handling. In the last decade, the improvement in spatial resolution of VHR civilian satellite sensors, the main source of large-scale mapping applications, was so considerable that the ground sample distance (GSD) has become finer than the size of common urban objects of interest such as buildings, trees, and road parts. This technological advancement pushed the development of "Object-based Image Analysis (OBIA)" as an alternative to pixel-based image analysis methods. Segmentation, as one of the main stages of OBIA, provides the image objects on which most of the following processes will be applied. Therefore, the success of an OBIA approach is strongly affected by the segmentation quality. In this paper, we propose a purpose-dependent refinement strategy to group road segments in urban areas using maximal similarity based region merging. For investigations with the proposed method, we use high-resolution images of several urban sites. The promising results suggest that the proposed approach is applicable to grouping of road segments in urban areas.
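Maximal-similarity merging of the kind used above typically compares normalized region descriptors; the Bhattacharyya coefficient over intensity histograms is one common choice. A toy sketch (the segment names and histogram values are made up for illustration):

```python
import math

def bhattacharyya(h1, h2):
    """Similarity of two normalized histograms; 1.0 means identical."""
    return sum(math.sqrt(a * b) for a, b in zip(h1, h2))

# Toy regions: normalized 4-bin intensity histograms of candidate segments
regions = {
    "seg_a": [0.7, 0.2, 0.1, 0.0],
    "seg_b": [0.6, 0.3, 0.1, 0.0],  # similar asphalt-like histogram
    "roof":  [0.0, 0.1, 0.2, 0.7],
}

def most_similar_pair(regions):
    """Return the pair of regions a maximal-similarity merge would fuse first."""
    names = list(regions)
    pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
    return max(pairs, key=lambda p: bhattacharyya(regions[p[0]], regions[p[1]]))

print(most_similar_pair(regions))  # ('seg_a', 'seg_b')
```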

  7. Concurrent Tumor Segmentation and Registration with Uncertainty-based Sparse non-Uniform Graphs

    PubMed Central

    Parisot, Sarah; Wells, William; Chemouny, Stéphane; Duffau, Hugues; Paragios, Nikos

    2014-01-01

    In this paper, we present a graph-based concurrent brain tumor segmentation and atlas to diseased patient registration framework. Both segmentation and registration problems are modeled using a unified pairwise discrete Markov Random Field model on a sparse grid superimposed on the image domain. Segmentation is addressed based on pattern classification techniques, while registration is performed by maximizing the similarity between volumes and is modular with respect to the matching criterion. The two problems are coupled by relaxing the registration term in the tumor area, corresponding to areas of high classification score and high dissimilarity between volumes. In order to overcome the main shortcomings of discrete approaches regarding appropriate sampling of the solution space as well as important memory requirements, content-driven samplings of the discrete displacement set and the sparse grid are considered, based on the local segmentation and registration uncertainties recovered by the min-marginal energies. State-of-the-art results on a substantial low-grade glioma database demonstrate the potential of our method, while our proposed approach shows maintained performance and strongly reduced complexity of the model. PMID:24717540

  8. Setting a good example: supervisors as work-life-friendly role models within the context of boundary management.

    PubMed

    Koch, Anna R; Binnewies, Carmen

    2015-01-01

    This multisource, multilevel study examined the importance of supervisors as work-life-friendly role models for employees' boundary management. Particularly, we tested whether supervisors' work-home segmentation behavior represents work-life-friendly role modeling for their employees. Furthermore, we tested whether work-life-friendly role modeling is positively related to employees' work-home segmentation behavior. Also, we examined whether work-life-friendly role modeling is positively related to employees' well-being in terms of feeling less exhausted and disengaged. In total, 237 employees and their 75 supervisors participated in our study. Results from hierarchical linear models revealed that supervisors who showed more segmentation behavior to separate work and home were more likely perceived as work-life-friendly role models. Employees with work-life-friendly role models were more likely to segment between work and home, and they felt less exhausted and disengaged. One may conclude that supervisors as work-life-friendly role models are highly important for employees' work-home segmentation behavior and gatekeepers to implement a work-life-friendly organizational culture. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  9. Automated construction of arterial and venous trees in retinal images

    PubMed Central

    Hu, Qiao; Abràmoff, Michael D.; Garvin, Mona K.

    2015-01-01

    While many approaches exist to segment retinal vessels in fundus photographs, only a limited number focus on the construction and disambiguation of arterial and venous trees. Previous approaches are local and/or greedy in nature, making them susceptible to errors or limiting their applicability to large vessels. We propose a more global framework to generate arteriovenous trees in retinal images, given a vessel segmentation. In particular, our approach consists of three stages. The first stage is to generate an overconnected vessel network, named the vessel potential connectivity map (VPCM), consisting of vessel segments and the potential connectivity between them. The second stage is to disambiguate the VPCM into multiple anatomical trees, using a graph-based metaheuristic algorithm. The third stage is to classify these trees into arterial or venous (A/V) trees. We evaluated our approach with a ground truth built based on a public database, showing a pixel-wise classification accuracy of 88.15% using a manual vessel segmentation as input, and 86.11% using an automatic vessel segmentation as input. PMID:26636114

  10. Superpixel Cut for Figure-Ground Image Segmentation

    NASA Astrophysics Data System (ADS)

    Yang, Michael Ying; Rosenhahn, Bodo

    2016-06-01

    Figure-ground image segmentation has been a challenging problem in computer vision. Apart from the difficulties in establishing an effective framework to divide the image pixels into meaningful groups, the notions of figure and ground often need to be properly defined by providing either user inputs or object models. In this paper, we propose a novel graph-based segmentation framework, called superpixel cut. The key idea is to formulate foreground segmentation as finding a subset of superpixels that partitions a graph over superpixels. The problem is formulated as Min-Cut. Therefore, we propose a novel cost function that simultaneously minimizes the inter-class similarity while maximizing the intra-class similarity. This cost function is optimized using parametric programming. After a small learning step, our approach is fully automatic and fully bottom-up, which requires no high-level knowledge such as shape priors and scene content. It recovers coherent components of images, providing a set of multiscale hypotheses for high-level reasoning. We evaluate our proposed framework by comparing it to other generic figure-ground segmentation approaches. Our method achieves improved performance on state-of-the-art benchmark databases.

  11. A framework for comparing different image segmentation methods and its use in studying equivalences between level set and fuzzy connectedness frameworks

    PubMed Central

    Ciesielski, Krzysztof Chris; Udupa, Jayaram K.

    2011-01-01

    In the current vast image segmentation literature, there seems to be considerable redundancy among algorithms, while there is a serious lack of methods that would allow their theoretical comparison to establish their similarity, equivalence, or distinctness. In this paper, we make an attempt to fill this gap. To accomplish this goal, we argue that: (1) every digital segmentation algorithm A should have a well-defined continuous counterpart M_A, referred to as its model, which constitutes an asymptotic of A when image resolution goes to infinity; (2) the equality of two such models M_A and M_A′ establishes a theoretical (asymptotic) equivalence of their digital counterparts A and A′. Such a comparison is of full theoretical value only when, for each involved algorithm A, its model M_A is proved to be an asymptotic of A. So far, such proofs do not appear anywhere in the literature, even in the case of algorithms introduced as digitizations of continuous models, like level set segmentation algorithms. The main goal of this article is to explore a line of investigation for formally pairing digital segmentation algorithms with their asymptotic models, justifying such relations with mathematical proofs, and using the results to compare the segmentation algorithms in this general theoretical framework. As a first step towards this general goal, we prove here that the gradient-based thresholding model M_∇ is the asymptotic for the fuzzy connectedness segmentation algorithm of Udupa and Samarasekera used with the gradient-based affinity A_∇. We also argue that, in a sense, M_∇ is the asymptotic for the original front propagation level set algorithm of Malladi, Sethian, and Vemuri, thus establishing a theoretical equivalence between these two specific algorithms. Experimental evidence of this last equivalence is also provided. PMID:21442014

  12. Breast mass segmentation in mammography using plane fitting and dynamic programming.

    PubMed

    Song, Enmin; Jiang, Luan; Jin, Renchao; Zhang, Lin; Yuan, Yuan; Li, Qiang

    2009-07-01

    Segmentation is an important and challenging task in a computer-aided diagnosis (CAD) system. Accurate segmentation could improve the accuracy of lesion detection and characterization. The objective of this study is to develop and test a new segmentation method aimed at improving the performance of breast mass segmentation in mammography, which could be used to provide accurate features for classification. This automated segmentation method consists of two main steps and combines the edge gradient, the pixel intensity, and the shape characteristics of the lesions to achieve good segmentation results. First, a plane fitting method was applied to a background-trend corrected region of interest (ROI) of a mass to obtain the edge candidate points. Second, a dynamic programming technique was used to find the "optimal" contour of the mass from the edge candidate points. Area-based similarity measures based on the radiologist's manually marked annotation and the segmented region were employed as criteria to evaluate the performance of the segmentation method. With these criteria, the new method was compared with 1) the dynamic programming method developed by Timp and Karssemeijer, and 2) the normalized cut segmentation method, on 337 ROIs extracted from a publicly available image database. The experimental results indicate that our segmentation method achieves a higher performance level than the other two methods, and the improvements in segmentation performance were statistically significant. For instance, the mean overlap percentage for the new algorithm was 0.71, whereas those for Timp's dynamic programming method and the normalized cut segmentation method were 0.63 (P < .001) and 0.61 (P < .001), respectively. We developed a new segmentation method using plane fitting and dynamic programming, which achieved a relatively high performance level. The new segmentation method would be useful for improving the accuracy of computerized detection and classification of breast cancer in mammography.
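
    The dynamic programming step described above can be illustrated with a minimal sketch (my own simplification, not the authors' implementation): given an edge-cost map with one candidate radius per angle around the mass center, DP finds the minimum-cost closed contour subject to a smoothness penalty on radius changes between adjacent angles.

    ```python
    import numpy as np

    def dp_contour(cost, smooth_penalty=1.0):
        """Pick one radius index per angle, minimizing total edge cost
        plus a penalty on radius jumps between adjacent angles."""
        n_angles, n_radii = cost.shape
        acc = cost[0].copy()                       # running best cost per radius
        back = np.zeros((n_angles, n_radii), dtype=int)
        radii = np.arange(n_radii)
        for a in range(1, n_angles):
            # transition cost: penalize large radius changes
            trans = smooth_penalty * np.abs(radii[None, :] - radii[:, None])
            total = acc[:, None] + trans           # (prev_radius, cur_radius)
            back[a] = np.argmin(total, axis=0)     # best predecessor per radius
            acc = total[back[a], radii] + cost[a]
        # backtrack the optimal path
        path = np.empty(n_angles, dtype=int)
        path[-1] = int(np.argmin(acc))
        for a in range(n_angles - 1, 0, -1):
            path[a - 1] = back[a, path[a]]
        return path
    ```

    In the paper's setting the per-angle costs would combine edge gradient and intensity terms; here they are left abstract.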

  13. The geometrical precision of virtual bone models derived from clinical computed tomography data for forensic anthropology.

    PubMed

    Colman, Kerri L; Dobbe, Johannes G G; Stull, Kyra E; Ruijter, Jan M; Oostra, Roelof-Jan; van Rijn, Rick R; van der Merwe, Alie E; de Boer, Hans H; Streekstra, Geert J

    2017-07-01

    Almost all European countries lack contemporary skeletal collections for the development and validation of forensic anthropological methods. Furthermore, legal, ethical and practical considerations hinder the development of skeletal collections. A virtual skeletal database derived from clinical computed tomography (CT) scans provides a potential solution. However, clinical CT scans are typically generated with varying settings. This study investigates the effects of image segmentation and varying imaging conditions on the precision of virtual modelled pelves. An adult human cadaver was scanned using varying imaging conditions, such as scanner type and standard patient scanning protocol, slice thickness and exposure level. The pelvis was segmented from the various CT images resulting in virtually modelled pelves. The precision of the virtual modelling was determined per polygon mesh point. The fraction of mesh points resulting in point-to-point distance variations of 2 mm or less (95% confidence interval (CI)) was reported. Colour mapping was used to visualise modelling variability. At almost all (>97%) locations across the pelvis, the point-to-point distance variation is less than 2 mm (CI = 95%). In >91% of the locations, the point-to-point distance variation was less than 1 mm (CI = 95%). This indicates that the geometric variability of the virtual pelvis as a result of segmentation and imaging conditions rarely exceeds the generally accepted linear error of 2 mm. Colour mapping shows that areas with large variability are predominantly joint surfaces. Therefore, results indicate that segmented bone elements from patient-derived CT scans are a sufficiently precise source for creating a virtual skeletal database.
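
    The precision metric reported above can be computed as follows; this is a minimal sketch under the assumption that point-to-point distances between repeated virtual models are already available as an array, which is my framing rather than the paper's exact procedure.

    ```python
    import numpy as np

    def precision_fractions(dists, thresholds=(1.0, 2.0), ci=95):
        """dists: (n_points, n_repeats) point-to-point distances in mm
        between repeatedly modelled meshes and a reference mesh.
        Returns, per threshold, the fraction of mesh points whose
        ci-th percentile distance stays at or below the threshold."""
        per_point = np.percentile(dists, ci, axis=1)
        return {t: float(np.mean(per_point <= t)) for t in thresholds}
    ```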

  14. Segmentation of Retinal Blood Vessels Based on Cake Filter

    PubMed Central

    Bao, Xi-Rong; Ge, Xin; She, Li-Huang; Zhang, Shi

    2015-01-01

    Segmentation of retinal blood vessels is significant for the diagnosis and evaluation of ocular diseases like glaucoma and systemic diseases such as diabetes and hypertension. Segmentation of small and low-contrast vessels remains a challenging problem. To solve this problem, a new method based on a cake filter is proposed. First, a quadrature filter bank called the cake filter bank is constructed in the Fourier domain. Then, real-component fusion is used to separate the blood vessels from the background. Finally, the blood vessel network is obtained by a self-adaptive threshold. Experiments on the STARE database indicate that the new method outperforms traditional ones in small-vessel extraction, average accuracy, and true and false positive rates. PMID:26636095

  15. Airport take-off noise assessment aimed at identifying responsible aircraft classes.

    PubMed

    Sanchez-Perez, Luis A; Sanchez-Fernandez, Luis P; Shaout, Adnan; Suarez-Guerra, Sergio

    2016-01-15

    Assessment of aircraft noise is an important task for today's airports in the fight against environmental noise pollution, given recent findings on the negative effects of noise exposure on human health. Noise monitoring and estimation around airports mostly use aircraft noise signals only for computing statistical indicators and depend on additional data sources to determine required inputs such as the aircraft class responsible for the noise. Efforts have therefore been made to improve noise monitoring and estimation systems by creating methods that obtain more information from aircraft noise signals, especially real-time aircraft class recognition. Consequently, this paper proposes a multilayer neural-fuzzy model for aircraft class recognition based on take-off noise signal segmentation. It uses a fuzzy inference system to build a final response for each class p based on the aggregation of K parallel neural network outputs Op(k) with respect to Linear Predictive Coding (LPC) features extracted from K adjacent signal segments. In extensive experiments over two databases with real-time take-off noise measurements, the proposed model performs better than other methods in the literature, particularly when aircraft classes are strongly correlated with each other. A new strictly cross-checked database is introduced, including more complex classes and real-time take-off noise measurements from modern aircraft. The new model is at least 5% more accurate on the previous database and successfully classifies 87% of measurements in the new database. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Pre-filled syringe - a ready-to-use drug delivery system: a review.

    PubMed

    Ingle, Rahul G; Agarwal, Aayush S

    2014-09-01

    Fueled by growing global expectations of the health and medical fields, billions of dollars, euros, and pounds are invested every year in the research of new biological and chemical entities. However, little interest is seen in the development of novel drug delivery systems. One such system, the pre-filled syringe (PFS), was invented decades ago but is still a rare mode of delivery in many therapeutic segments. This review covers the properties and effects of extractables and leachables and discusses the characteristics of PFS technology: its composition, glass and polymer types, PFS configurations, advantages over glass, and technical and commercial applicability; its significance for patients, industry, quality, the environment, and cost; and its business potential. We briefly discuss PFS use in various major and life-threatening disorders, along with future prospects. The review offers readers, industrialists, and researchers a broad view of PFS drug delivery technology. The PFS drug delivery system opens a promising avenue for lifesaving drugs that are currently only available in conventional vials and ampoules. A Form-Fill-Seal approach can also be adopted for this particular ready-to-use dosage form, opening new global doors for budding researchers in the field of pre-filled drug delivery systems.

  17. An automatic graph-based approach for artery/vein classification in retinal images.

    PubMed

    Dashtbozorg, Behdad; Mendonça, Ana Maria; Campilho, Aurélio

    2014-03-01

    The classification of retinal vessels into artery/vein (A/V) is an important phase for automating the detection of vascular changes, and for the calculation of characteristic signs associated with several systemic diseases such as diabetes, hypertension, and other cardiovascular conditions. This paper presents an automatic approach for A/V classification based on the analysis of a graph extracted from the retinal vasculature. The proposed method classifies the entire vascular tree deciding on the type of each intersection point (graph nodes) and assigning one of two labels to each vessel segment (graph links). Final classification of a vessel segment as A/V is performed through the combination of the graph-based labeling results with a set of intensity features. The results of this proposed method are compared with manual labeling for three public databases. Accuracy values of 88.3%, 87.4%, and 89.8% are obtained for the images of the INSPIRE-AVR, DRIVE, and VICAVR databases, respectively. These results demonstrate that our method outperforms recent approaches for A/V classification.

  18. Epididymal genomics and the search for a male contraceptive.

    PubMed

    Turner, T T; Johnston, D S; Jelinsky, S A

    2006-05-16

    This report represents the joint efforts of three laboratories, one with a primary interest in understanding regulatory processes in the epididymal epithelium (TTT) and two with a primary interest in identifying and characterizing new contraceptive targets (DSJ and SAJ). We have developed a highly refined mouse epididymal transcriptome and have used it as a starting point for determining genes in the human epididymis, which may serve as targets for male contraceptives. Our database represents gene expression information for approximately 39,000 transcripts, of which over 17,000 are significantly expressed in at least one segment of the mouse epididymis. Over 2000 of these transcripts are up- or down-regulated by at least four-fold between at least two segments. In addition, human databases have been queried to determine expression of orthologs in the human epididymis and the specificity of their expression in the epididymis. Genes highly regulated in the human epididymis and showing high tissue specificity are potential targets for male contraceptives.

  19. Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance

    PubMed Central

    2017-01-01

    This paper puts forward a novel image enhancement method, Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image while preserving brightness and details better than some other methods based on histogram equalization (HE). First, the histogram of the input image is divided into four segments based on the mean and variance of the luminance component, and the histogram bins of each segment are modified and equalized, respectively. Second, the result is obtained via concatenation of the processed subhistograms. Last, normalization is applied to the intensity levels, and the processed image is blended with the input image. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experimental results show that the algorithm not only enhances image information effectively but also preserves the brightness and details of the original image well. PMID:29403529
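
    A rough sketch of the histogram-splitting idea follows; it is not the authors' exact MVSIHE pipeline, and the segment boundaries at mean ± standard deviation plus the rank-based equalization within each segment are my assumptions for illustration.

    ```python
    import numpy as np

    def mvsihe_sketch(img):
        """Split the luminance range at mean - std, mean, and mean + std,
        then equalize each sub-range separately."""
        img = np.asarray(img, dtype=np.float64)
        m, s = img.mean(), img.std()
        lo0, hi0 = img.min(), img.max()
        bounds = [lo0, np.clip(m - s, lo0, hi0), m, np.clip(m + s, lo0, hi0), hi0]
        out = img.copy()
        for i, (lo, hi) in enumerate(zip(bounds[:-1], bounds[1:])):
            last = i == 3
            mask = (img >= lo) & ((img <= hi) if last else (img < hi))
            if not mask.any() or hi <= lo:
                continue
            vals = img[mask]
            # rank-based equalization: spread values uniformly over [lo, hi]
            ranks = np.searchsorted(np.sort(vals), vals, side="right")
            out[mask] = lo + (hi - lo) * ranks / vals.size
        return out
    ```

    Because each sub-range is equalized in place, the global brightness ordering of the four segments is preserved, which is the mechanism behind MVSIHE's brightness preservation.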

  20. Virus Database and Online Inquiry System Based on Natural Vectors.

    PubMed

    Dong, Rui; Zheng, Hui; Tian, Kun; Yau, Shek-Chung; Mao, Weiguang; Yu, Wenping; Yin, Changchuan; Yu, Chenglong; He, Rong Lucy; Yang, Jie; Yau, Stephen St

    2017-01-01

    We construct a virus database called VirusDB (http://yaulab.math.tsinghua.edu.cn/VirusDB/) and an online inquiry system to serve people who are interested in viral classification and prediction. The database stores all viral genomes, their corresponding natural vectors, and the classification information of the single/multiple-segmented viral reference sequences downloaded from the National Center for Biotechnology Information. The online inquiry system serves the purpose of computing natural vectors and their distances based on submitted genomes, providing an online interface for accessing and using the database for viral classification and prediction, and back-end processes for automatic and manual updating of database content to synchronize with GenBank. Submitted genome data in FASTA format are processed, and the prediction results with the 5 closest neighbors and their classifications are returned by email. Considering the one-to-one correspondence between sequence and natural vector, its time efficiency, and its high accuracy, the natural vector method is a significant advance compared with alignment methods, which makes VirusDB a useful database for further research.
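
    The natural vector referred to above can be sketched as follows. The common 12-dimensional form uses, per nucleotide, the count, the mean position, and a normalized second central moment of the positions; exact normalization conventions vary in the literature, so treat this as an assumption rather than VirusDB's precise definition.

    ```python
    from collections import defaultdict

    def natural_vector(seq):
        """12-dimensional natural vector of a DNA sequence: for each of
        A, C, G, T, its count, mean 1-based position, and a second
        central moment of its positions normalized by count and length."""
        seq = seq.upper()
        n = len(seq)
        pos = defaultdict(list)
        for i, c in enumerate(seq, start=1):
            if c in "ACGT":
                pos[c].append(i)
        vec = []
        for c in "ACGT":
            p = pos[c]
            nk = len(p)
            if nk == 0:
                vec += [0, 0.0, 0.0]
                continue
            mu = sum(p) / nk
            d2 = sum((x - mu) ** 2 for x in p) / (nk * n)
            vec += [nk, mu, d2]
        return vec
    ```

    Classification then reduces to nearest-neighbor search under Euclidean distance between such vectors, which is what makes the approach alignment-free and fast.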

  1. Performance of an Artificial Multi-observer Deep Neural Network for Fully Automated Segmentation of Polycystic Kidneys.

    PubMed

    Kline, Timothy L; Korfiatis, Panagiotis; Edwards, Marie E; Blais, Jaime D; Czerwiec, Frank S; Harris, Peter C; King, Bernard F; Torres, Vicente E; Erickson, Bradley J

    2017-08-01

    Deep learning techniques are being rapidly applied to medical imaging tasks, from organ and lesion segmentation to tissue and tumor classification. These techniques are becoming the leading algorithmic approaches to solve inherently difficult image processing tasks. Currently, the most critical requirement for successful implementation lies in the need for relatively large datasets that can be used for training the deep learning networks. Based on our initial studies of MR imaging examinations of the kidneys of patients affected by polycystic kidney disease (PKD), we have generated a unique database of imaging data and corresponding reference standard segmentations of polycystic kidneys. In the study of PKD, segmentation of the kidneys is needed in order to measure total kidney volume (TKV). Automated methods to segment the kidneys and measure TKV are needed to increase measurement throughput and alleviate the inherent variability of human-derived measurements. We hypothesize that deep learning techniques can be leveraged to perform fast, accurate, reproducible, and fully automated segmentation of polycystic kidneys. Here, we describe a fully automated approach for segmenting PKD kidneys within MR images that simulates a multi-observer approach in order to create an accurate and robust method for the task of segmentation and computation of TKV for PKD patients. A total of 2000 cases were used for training and validation, and 400 cases were used for testing. The multi-observer ensemble method had mean ± SD percent volume difference of 0.68 ± 2.2% compared with the reference standard segmentations. The complete framework performs fully automated segmentation at a level comparable with interobserver variability and could be considered as a replacement for the task of segmentation of PKD kidneys by a human.

  2. Temporal decline in filling prescriptions for terfenadine closely in time with those for either ketoconazole or erythromycin.

    PubMed

    Burkhart, G A; Sevka, M J; Temple, R; Honig, P K

    1997-01-01

    Temporal changes in the rates of filling terfenadine prescriptions within 2 days of those for either oral erythromycin or oral ketoconazole were described with use of paid pharmacy claims data from 1988 through 1994 in state Medicaid programs from Michigan and Ohio and in a large health maintenance organization. There were rapid and significant declines in the rates of filling prescriptions for either erythromycin or ketoconazole within 2 days of prescriptions for terfenadine in all three databases that coincided with 1992 publicity about the cardiovascular risk of terfenadine. These findings suggest that the use of terfenadine with contraindicated medications has declined in response to relabeling and publicity concerning the safe use of terfenadine. Further study is necessary to estimate the absolute level of concurrent use of terfenadine with contraindicated medications.

  3. A Genealogical Look at Shared Ancestry on the X Chromosome.

    PubMed

    Buffalo, Vince; Mount, Stephen M; Coop, Graham

    2016-09-01

    Close relatives can share large segments of their genome identical by descent (IBD) that can be identified in genome-wide polymorphism data sets. There are a range of methods to use these IBD segments to identify relatives and estimate their relationship. These methods have focused on sharing on the autosomes, as they provide a rich source of information about genealogical relationships. We hope to learn additional information about recent ancestry through shared IBD segments on the X chromosome, but currently lack the theoretical framework to use this information fully. Here, we fill this gap by developing probability distributions for the number and length of X chromosome segments shared IBD between an individual and an ancestor k generations back, as well as between half- and full-cousin relationships. Due to the inheritance pattern of the X and the fact that X homologous recombination occurs only in females (outside of the pseudoautosomal regions), the number of females along a genealogical lineage is a key quantity for understanding the number and length of the IBD segments shared among relatives. When inferring relationships among individuals, the number of female ancestors along a genealogical lineage will often be unknown. Therefore, our IBD segment length and number distributions marginalize over this unknown number of recombinational meioses through a distribution of recombinational meioses we derive. By using Bayes' theorem to invert these distributions, we can estimate the number of female ancestors between two relatives, giving us details about the genealogical relations between individuals not possible with autosomal data alone. Copyright © 2016 by the Genetics Society of America.
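
    The dependence on the number of females along a lineage can be made concrete with a small enumeration (my own illustrative sketch, not the paper's derivation): males receive their single X from their mother, while females receive one X from each parent, so the number of X-genealogy lineages grows like a Fibonacci sequence, and each lineage carries a different number of recombining (female) meioses.

    ```python
    def x_lineages(sex, k):
        """Enumerate X-chromosome genealogical lineages k generations back
        from an individual of given sex ('F' or 'M'), returning the number
        of female ancestors along each lineage (roughly, the number of
        recombinational meioses on that lineage)."""
        if k == 0:
            return [0]
        out = []
        # the mother contributes an X to every child
        out += [n + 1 for n in x_lineages('F', k - 1)]
        if sex == 'F':
            # a female also receives an X from her father
            out += x_lineages('M', k - 1)
        return out
    ```

    Marginalizing IBD segment-number and segment-length distributions over these per-lineage female counts is the step the paper formalizes.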

  4. Abnormal early diastolic intraventricular flow 'kinetic energy index' assessed by vector flow mapping in patients with elevated filling pressure.

    PubMed

    Nogami, Yoshie; Ishizu, Tomoko; Atsumi, Akiko; Yamamoto, Masayoshi; Kawamura, Ryo; Seo, Yoshihiro; Aonuma, Kazutaka

    2013-03-01

    Recently developed vector flow mapping (VFM) enables evaluation of local flow dynamics without angle dependency. This study used VFM to evaluate quantitatively an index of intraventricular haemodynamic kinetic energy in patients with left ventricular (LV) diastolic dysfunction and to compare it with that of normal subjects. We studied 25 patients with estimated high left atrial (LA) pressure (pseudonormal: PN group) and 36 normal subjects (control group). The left ventricle was divided into basal, mid, and apical segments. Intraventricular haemodynamic energy was evaluated in the dimension of speed and was defined as the kinetic energy index. We calculated this index and created time-energy index curves. The time interval from the electrocardiogram (ECG) R wave to the peak index was measured, and time differences of the peak index between the basal and other segments were defined as ΔT-mid and ΔT-apex. In both groups, the early diastolic peak kinetic energy index in the mid and apical segments was significantly lower than that in the basal segment. Time to peak index did not differ among the apical, mid, and basal segments in the control group but was significantly longer in the apex than in the basal segment in the PN group. ΔT-mid and ΔT-apex were significantly larger in the PN group than in the control group. Multiple regression analysis showed sphericity index and E/E' to be significant independent variables determining ΔT-apex. Retarded apical kinetic energy fluid dynamics were detected using VFM and were closely associated with LV spherical remodelling in patients with high LA pressure.

  5. Filling Terrorism Gaps: VEOs, Evaluating Databases, and Applying Risk Terrain Modeling to Terrorism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagan, Ross F.

    2016-08-29

    This paper aims to address three issues: the lack of literature differentiating terrorism and violent extremist organizations (VEOs), terrorism incident databases, and the applicability of Risk Terrain Modeling (RTM) to terrorism. Current open source literature and publicly available government sources do not differentiate between terrorism and VEOs; furthermore, they fail to define them. Addressing the lack of a comprehensive comparison of existing terrorism data sources, a matrix comparing a dozen terrorism databases is constructed, providing insight toward the array of data available. RTM, a method for spatial risk analysis at a micro level, has some applicability to terrorism research, particularly for studies looking at risk indicators of terrorism. Leveraging attack data from multiple databases, combined with RTM, offers one avenue for closing existing research gaps in terrorism literature.

  6. Accurate GM atrophy quantification in MS using lesion-filling with co-registered 2D lesion masks

    PubMed Central

    Popescu, V.; Ran, N.C.G.; Barkhof, F.; Chard, D.T.; Wheeler-Kingshott, C.A.; Vrenken, H.

    2014-01-01

    Background In multiple sclerosis (MS), brain atrophy quantification is affected by white matter lesions. LEAP and FSL-lesion_filling replace lesion voxels with white matter intensities; however, they require precise lesion identification on 3DT1-images. Aim To determine whether 2DT2 lesion masks co-registered to 3DT1 images yield grey and white matter volumes comparable to precise lesion masks. Methods 2DT2 lesion masks were linearly co-registered to 20 3DT1-images of MS patients, with nearest-neighbor (NNI) and tri-linear interpolation. As the gold standard, lesion masks were manually outlined on 3DT1-images. LEAP and FSL-lesion_filling were applied with each lesion mask. Grey (GM) and white matter (WM) volumes were quantified with FSL-FAST, and deep grey matter (DGM) volumes with FSL-FIRST. Volumes were compared between lesion mask types using paired Wilcoxon tests. Results Lesion-filling with gold-standard lesion masks compared to native images reduced GM overestimation by 1.93 mL (p < .001) for LEAP, and 1.21 mL (p = .002) for FSL-lesion_filling. Similar effects were achieved with NNI lesion masks from 2DT2. Global WM underestimation was not significantly influenced. GM and WM volumes from NNI did not differ significantly from the gold standard. GM segmentation differed between lesion masks in the lesion area, and also elsewhere. Using the gold standard, FSL-FAST quantified as GM on average 0.4% of the lesion area with LEAP and 24.5% with FSL-lesion_filling. Lesion-filling did not influence DGM volumes from FSL-FIRST. Discussion These results demonstrate that for global GM volumetry, precise lesion masks on 3DT1 images can be replaced by co-registered 2DT2 lesion masks. This makes lesion-filling a feasible method for GM atrophy measurements in MS. PMID:24567908
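
    The core filling operation can be sketched in a few lines. This is a simplification for illustration; LEAP and FSL-lesion_filling use more careful local intensity models than the global Gaussian assumed here.

    ```python
    import numpy as np

    def fill_lesions(t1, lesion_mask, wm_mask, rng=None):
        """Replace lesion voxel intensities with values drawn from the
        normal-appearing white matter intensity distribution (simplified
        sketch of lesion filling, not LEAP or FSL-lesion_filling)."""
        rng = np.random.default_rng(rng)
        filled = t1.astype(np.float64).copy()
        # intensities of white matter outside the lesions
        wm_vals = t1[wm_mask & ~lesion_mask]
        mu, sd = wm_vals.mean(), wm_vals.std()
        # draw mean/std-matched Gaussian intensities for lesion voxels
        filled[lesion_mask] = rng.normal(mu, sd, size=int(lesion_mask.sum()))
        return filled
    ```

    After filling, a standard tissue segmentation (e.g. FSL-FAST) can be run on the filled image, which is the point of the comparison in this study.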

  7. The role of attention in figure-ground segregation in areas V1 and V4 of the visual cortex.

    PubMed

    Poort, Jasper; Raudies, Florian; Wannig, Aurel; Lamme, Victor A F; Neumann, Heiko; Roelfsema, Pieter R

    2012-07-12

    Our visual system segments images into objects and background. Figure-ground segregation relies on the detection of feature discontinuities that signal boundaries between the figures and the background and on a complementary region-filling process that groups together image regions with similar features. The neuronal mechanisms for these processes are not well understood and it is unknown how they depend on visual attention. We measured neuronal activity in V1 and V4 in a task where monkeys either made an eye movement to texture-defined figures or ignored them. V1 activity predicted the timing and the direction of the saccade if the figures were task relevant. We found that boundary detection is an early process that depends little on attention, whereas region filling occurs later and is facilitated by visual attention, which acts in an object-based manner. Our findings are explained by a model with local, bottom-up computations for boundary detection and feedback processing for region filling. Copyright © 2012 Elsevier Inc. All rights reserved.

  8. Riparian Land Use/Land Cover Data for Three Study Units in Group II of the Nutrient Enrichment Effects Topical Study of the National Water-Quality Assessment Program

    USGS Publications Warehouse

    Johnson, Michaela R.; Clark, Jimmy M.; Dickinson, Ross G.; Sanocki, Chris A.; Tranmer, Andrew W.

    2009-01-01

    This data set was developed as part of the National Water-Quality Assessment (NAWQA) Program, Nutrient Enrichment Effects Topical (NEET) study. This report is concerned with three of the eight NEET study units distributed across the United States: Ozark Plateaus, Upper Mississippi River Basin, and Upper Snake River Basin, collectively known as Group II of the NEET study. Ninety stream reaches were investigated during 2006-08 in these three study units. Stream segments, with lengths equal to the base-10 logarithm of the basin area, were delineated upstream from the stream reaches through the use of digital orthophoto quarter-quadrangle (DOQQ) imagery. The analysis area for each stream segment was defined by a streamside buffer extending laterally to 250 meters from the stream segment. Delineation of land-use and land-cover (LULC) map units within stream-segment buffers was completed using on-screen digitizing of riparian LULC classes interpreted from the DOQQ. LULC units were classified using a strategy consisting of nine classes. National Wetlands Inventory (NWI) data were used to aid in wetland classification. Longitudinal riparian transects (lines offset from the stream segments) were generated digitally, used to sample the LULC maps, and partitioned in accord with the intersected LULC map-unit types. These longitudinal samples yielded the relative linear extent and sequence of each LULC type within the riparian zone at the segment scale. The resulting areal and linear estimates of LULC extent filled in the spatial-scale gap between the 30-meter resolution of the 1990s National Land Cover Dataset and the reach-level habitat assessment data collected onsite routinely for NAWQA ecological sampling. 
The resulting data consisted of 12 geospatial data sets: LULC within 25 meters of the stream reach (polygon); LULC within 50 meters of the stream reach (polygon); LULC within 50 meters of the stream segment (polygon); LULC within 100 meters of the stream segment (polygon); LULC within 150 meters of the stream segment (polygon); LULC within 250 meters of the stream segment (polygon); frequency of gaps in woody vegetation at the reach scale (arc); stream reaches (arc); longitudinal LULC transect sample at the reach scale (arc); frequency of gaps in woody vegetation at the segment scale (arc); stream segments (arc); and longitudinal LULC transect sample at the segment scale (arc).

  9. An open access database for the evaluation of heart sound algorithms.

    PubMed

    Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D

    2016-12-01

    In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential value to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand corrected annotations for different heart sound states, the scoring mechanism, and associated open source code are provided. In addition, several potential benefits from the public heart sound database are discussed.

  10. Likelihood-based gene annotations for gap filling and quality assessment in genome-scale metabolic models

    DOE PAGES

    Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; ...

    2014-10-16

    Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genesmore » and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. 
This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface.
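
    The reaction-likelihood step the abstract describes (alternative gene functions scored by homology, then combined per reaction) can be sketched as follows; the gene-protein-reaction (GPR) encoding and the function name are illustrative assumptions, not KBase's actual API:

```python
def reaction_likelihood(gpr, gene_likelihoods):
    """Likelihood of a reaction from its GPR rule, encoded as an OR of
    isozymes (each a set of jointly required genes): take the min within
    an isozyme and the max across isozymes. Unknown genes score 0."""
    return max(
        min(gene_likelihoods.get(g, 0.0) for g in isozyme)
        for isozyme in gpr
    )

# hypothetical reaction catalysed by (g1 AND g2) OR g3,
# with homology-derived annotation likelihoods
likelihood = reaction_likelihood(
    [{"g1", "g2"}, {"g3"}],
    {"g1": 0.9, "g2": 0.4, "g3": 0.7},
)
```

    In likelihood-based gap filling, candidate reactions could then be penalized by (1 - likelihood), so that genomically supported reactions are preferred over equally parsimonious but unsupported ones.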

  11. Likelihood-Based Gene Annotations for Gap Filling and Quality Assessment in Genome-Scale Metabolic Models

    PubMed Central

    Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; Chia, Nicholas; Price, Nathan D.

    2014-01-01

    Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. 
This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface. PMID:25329157

  12. Efficient patient modeling for visuo-haptic VR simulation using a generic patient atlas.

    PubMed

    Mastmeyer, Andre; Fortmeier, Dirk; Handels, Heinz

    2016-08-01

    This work presents a new time-saving virtual patient modeling system, illustrated by way of example on an existing visuo-haptic training and planning virtual reality (VR) system for percutaneous transhepatic cholangio-drainage (PTCD). Our modeling process starts from a generic patient atlas, defined by organ-specific optimized models, method modules and parameters, i.e. mainly individual segmentation masks, transfer functions to fill the gaps between the masks, and intensity image data. In this contribution, we show how generic patient atlases can be generalized to new patient data. The methodology consists of patient-specific, locally-adaptive transfer functions and dedicated modeling methods such as multi-atlas segmentation, vessel filtering and spline modeling. Our full image volume segmentation algorithm yields median DICE coefficients of 0.98, 0.93, 0.82, 0.74, 0.51 and 0.48 for soft tissue, liver, bone, skin, blood and bile vessels, respectively, for ten test patients and three selected reference patients. Compared to standard slice-wise manual contouring, the time saving is remarkable. Our segmentation process demonstrates efficiency and robustness for upper abdominal puncture simulation systems. This marks a significant step toward establishing patient-specific training and hands-on planning systems in a clinical environment. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
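
    The DICE coefficients quoted above measure mask overlap: twice the intersection over the sum of the two mask sizes. A minimal sketch, assuming binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|); 1.0 for two empty masks by convention."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# toy 2-D masks standing in for segmentation results
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
score = dice(a, b)   # 2 overlapping pixels out of 3 + 3
```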

  13. Biomechanical Evaluation of Different Fixation Methods for Mandibular Anterior Segmental Osteotomy Using Finite Element Analysis, Part Two: Superior Repositioning Surgery With Bone Allograft.

    PubMed

    Kilinç, Yeliz; Erkmen, Erkan; Kurt, Ahmet

    2016-01-01

    In this study, the biomechanical behavior of different fixation methods used to fix the mandibular anterior segment following various amounts of superior repositioning was evaluated by using finite element analysis (FEA). Three-dimensional finite element models representing 3 and 5 mm of superior repositioning were generated. The gap between the segments was assumed to be filled by a block bone allograft in perfect contact with the mandible and the segmented bone. Six different finite element models were created, combining the 2 mobilization amounts with 3 different fixation configurations: double right L (DRL), double left L (DLL), or double I (DI) miniplates with monocortical screws. A comparative evaluation was made under vertical, horizontal and oblique loads. The von Mises and principal maximum stress (Pmax) values were calculated by the finite element solver program. The first part of our ongoing FEA research addressed the mechanical behavior of the same fixation configurations in nongrafted models. In comparison with the findings of that first part, it was concluded that the bone graft offers superior mechanical stability without any limitation of mobilization, and less stress on the fixation appliances as well as in the bone.

  14. Computer aided system for segmentation and visualization of microcalcifications in digital mammograms.

    PubMed

    Reljin, Branimir; Milosević, Zorica; Stojić, Tomislav; Reljin, Irini

    2009-01-01

    Two methods for segmentation and visualization of microcalcifications in digital or digitized mammograms are described. The first method is based on modern mathematical morphology, while the second uses a multifractal approach. In the first method, an appropriate combination of morphological operations yields high local contrast enhancement together with significant suppression of background tissue, irrespective of its radiological density. Through an iterative procedure, this method strongly emphasizes only small bright details, i.e., possible microcalcifications. In the multifractal approach, corresponding multifractal "images" are created from the initial mammogram, from which the radiologist is free to change the level of segmentation. A user-friendly computer-aided visualization (CAV) system embedding the two methods has been realized. The interactive approach enables the physician to control the level and quality of segmentation. The methods were tested on mammograms from the MIAS database as a gold standard, and on images from clinical practice, using digitized films and digital images from a full-field digital mammograph.
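
    One classic morphological combination with exactly the behavior described (boosting small bright details while flattening larger background structures) is the white top-hat: the image minus its grey opening. This is a hedged illustration of the idea, not the paper's specific operator chain, and the 3x3 window is an arbitrary choice:

```python
import numpy as np

def _local(img, k, fn):
    """Apply fn (np.min or np.max) over a k-by-k sliding window."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.empty(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = fn(padded[i:i + k, j:j + k])
    return out

def white_tophat(img, k=3):
    """Image minus its grey opening (erosion followed by dilation):
    large only where bright details smaller than the window sit."""
    opened = _local(_local(img, k, np.min), k, np.max)
    return img - opened

img = np.full((9, 9), 1.0)   # smooth background tissue
img[4, 4] = 11.0             # one tiny bright detail (candidate spot)
out = white_tophat(img)      # spot survives, background is flattened to 0
```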

  15. Supervised segmentation of microelectrode recording artifacts using power spectral density.

    PubMed

    Bakstein, Eduard; Schneider, Jakub; Sieger, Tomas; Novak, Daniel; Wild, Jiri; Jech, Robert

    2015-08-01

    Appropriate detection of clean signal segments in extracellular microelectrode recordings (MER) is vital for maintaining high signal-to-noise ratio in MER studies. Existing alternatives to manual signal inspection are based on unsupervised change-point detection. We present a method of supervised MER artifact classification, based on power spectral density (PSD) and evaluate its performance on a database of 95 labelled MER signals. The proposed method yielded test-set accuracy of 90%, which was close to the accuracy of annotation (94%). The unsupervised methods achieved accuracy of about 77% on both training and testing data.
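
    A supervised classifier of the kind described needs PSD-derived features per MER segment. A minimal sketch of periodogram band powers as such features; the band edges are illustrative assumptions, not the paper's:

```python
import numpy as np

def band_powers(signal, fs, bands=((0, 500), (500, 2000), (2000, 6000))):
    """Summed periodogram power in each frequency band (Hz); a simple
    PSD feature vector a supervised artifact classifier could be trained on."""
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return np.array([psd[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])

fs = 24000
t = np.arange(2400) / fs
tone = np.sin(2 * np.pi * 1000 * t)   # synthetic 1 kHz component
powers = band_powers(tone, fs)        # dominated by the 500-2000 Hz band
```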

  16. Establishing homologies in protein sequences

    NASA Technical Reports Server (NTRS)

    Dayhoff, M. O.; Barker, W. C.; Hunt, L. T.

    1983-01-01

    Computer-based statistical techniques used to determine homologies between proteins occurring in different species are reviewed. The technique is based on comparison of two protein sequences, either by relating all segments of a given length in one sequence to all segments of the second or by finding the best alignment of the two sequences. Approaches discussed include selection using printed tabulations, identification of very similar sequences, and computer searches of a database. The use of the SEARCH, RELATE, and ALIGN programs (Dayhoff, 1979) is explained; sample data are presented in graphs, diagrams, and tables and the construction of scoring matrices is considered.
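
    The first comparison mode described (all segments of one sequence against all segments of the other) can be sketched as below, with a simple identity count standing in for a Dayhoff-style scoring matrix; names are illustrative:

```python
def best_segment_pair(a, b, k=3):
    """Score every length-k segment of a against every length-k segment
    of b by counting identities; return (best_score, (i, j)) positions."""
    best = (-1, None)
    for i in range(len(a) - k + 1):
        for j in range(len(b) - k + 1):
            score = sum(x == y for x, y in zip(a[i:i + k], b[j:j + k]))
            if score > best[0]:
                best = (score, (i, j))
    return best

best = best_segment_pair("ACDEFG", "XXDEFX")   # "DEF" matches "DEF"
```

    A real mutation-data (PAM) scoring matrix would replace the identity count with per-residue-pair scores, which is what allows distant homologies to surface.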

  17. Automatic Segmentation and Quantification of Filamentous Structures in Electron Tomography

    PubMed Central

    Loss, Leandro A.; Bebis, George; Chang, Hang; Auer, Manfred; Sarkar, Purbasha; Parvin, Bahram

    2016-01-01

    Electron tomography is a promising technology for imaging ultrastructures at nanoscale resolutions. However, image and quantitative analyses are often hindered by high levels of noise, staining heterogeneity, and material damage either as a result of the electron beam or sample preparation. We have developed and built a framework that allows for automatic segmentation and quantification of filamentous objects in 3D electron tomography. Our approach consists of three steps: (i) local enhancement of filaments by Hessian filtering; (ii) detection and completion (e.g., gap filling) of filamentous structures through tensor voting; and (iii) delineation of the filamentous networks. Our approach allows for quantification of filamentous networks in terms of their compositional and morphological features. We first validate our approach using a set of specifically designed synthetic data. We then apply our segmentation framework to tomograms of plant cell walls that have undergone different chemical treatments for polysaccharide extraction. The subsequent compositional and morphological analyses of the plant cell walls reveal their organizational characteristics and the effects of the different chemical protocols on specific polysaccharides. PMID:28090597

  18. Automatic Segmentation and Quantification of Filamentous Structures in Electron Tomography.

    PubMed

    Loss, Leandro A; Bebis, George; Chang, Hang; Auer, Manfred; Sarkar, Purbasha; Parvin, Bahram

    2012-10-01

    Electron tomography is a promising technology for imaging ultrastructures at nanoscale resolutions. However, image and quantitative analyses are often hindered by high levels of noise, staining heterogeneity, and material damage either as a result of the electron beam or sample preparation. We have developed and built a framework that allows for automatic segmentation and quantification of filamentous objects in 3D electron tomography. Our approach consists of three steps: (i) local enhancement of filaments by Hessian filtering; (ii) detection and completion (e.g., gap filling) of filamentous structures through tensor voting; and (iii) delineation of the filamentous networks. Our approach allows for quantification of filamentous networks in terms of their compositional and morphological features. We first validate our approach using a set of specifically designed synthetic data. We then apply our segmentation framework to tomograms of plant cell walls that have undergone different chemical treatments for polysaccharide extraction. The subsequent compositional and morphological analyses of the plant cell walls reveal their organizational characteristics and the effects of the different chemical protocols on specific polysaccharides.

  19. Segmenting overlapping nano-objects in atomic force microscopy image

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Han, Yuexing; Li, Qing; Wang, Bing; Konagaya, Akihiko

    2018-01-01

    Recently, techniques for nanoparticles have been developed rapidly for various fields, such as materials science, medicine, and biology. In particular, image processing methods have been widely used to analyze nanoparticles automatically. A technique to automatically segment overlapping nanoparticles with image processing and machine learning is proposed. Two tasks are necessary: elimination of image noise and separation of the overlapping shapes. For the first task, mean square error and the seed fill algorithm are adopted to remove noise and improve the quality of the original image. For the second task, four steps are needed to segment the overlapping nanoparticles. First, candidate split lines are obtained by connecting high-curvature pixels on the contours. Second, the candidate split lines are classified with a machine learning algorithm. Third, the overlapping regions are detected with density-based spatial clustering of applications with noise (DBSCAN). Finally, the best split lines are selected with a constrained minimum value. We give some experimental examples and compare our technique with two other methods. The results show the effectiveness of the proposed technique.
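
    The seed fill (flood fill) step mentioned for cleaning the image can be sketched as a basic 4-connected relabeling routine; the grid and labels are toy data:

```python
def seed_fill(grid, seed, new_label):
    """Iterative 4-connected flood fill: relabel the region of cells that
    share the seed's value and are connected to it."""
    rows, cols = len(grid), len(grid[0])
    old = grid[seed[0]][seed[1]]
    if old == new_label:
        return grid
    stack = [seed]
    while stack:
        r, c = stack.pop()
        if 0 <= r < rows and 0 <= c < cols and grid[r][c] == old:
            grid[r][c] = new_label
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return grid

grid = [[0, 0, 1],
        [0, 1, 1],
        [1, 1, 0]]
seed_fill(grid, (0, 0), 2)   # relabels only the region touching the corner
```

    In noise removal, the same routine lets small isolated regions be detected and filled with the surrounding value.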

  20. Atomizing nozzle and process

    DOEpatents

    Anderson, I.E.; Figliola, R.S.; Molnar, H.M.

    1993-07-20

    High pressure atomizing nozzle includes a high pressure gas manifold having a divergent expansion chamber between a gas inlet and arcuate manifold segment to minimize standing shock wave patterns in the manifold and thereby improve filling of the manifold with high pressure gas for improved melt atomization. The atomizing nozzle is especially useful in atomizing rare earth-transition metal alloys to form fine powder particles wherein a majority of the powder particles exhibit particle sizes having near-optimum magnetic properties.

  1. Atomizing nozzle and process

    DOEpatents

    Anderson, Iver E.; Figliola, Richard S.; Molnar, Holly M.

    1992-06-30

    High pressure atomizing nozzle includes a high pressure gas manifold having a divergent expansion chamber between a gas inlet and arcuate manifold segment to minimize standing shock wave patterns in the manifold and thereby improve filling of the manifold with high pressure gas for improved melt atomization. The atomizing nozzle is especially useful in atomizing rare earth-transition metal alloys to form fine powder particles wherein a majority of the powder particles exhibit particle sizes having near-optimum magnetic properties.

  2. Transcriptional sequencing and analysis of major genes involved in the adventitious root formation of mango cotyledon segments.

    PubMed

    Li, Yun-He; Zhang, Hong-Na; Wu, Qing-Song; Muday, Gloria K

    2017-06-01

    A total of 74,745 unigenes were generated and 1975 DEGs were identified. Candidate genes that may be involved in the adventitious root formation of mango cotyledon segments were revealed. Adventitious root formation is a crucial step in plant vegetative propagation, but its molecular mechanism remains unclear. Adventitious roots formed only at the proximal cut surface (PCS) of mango cotyledon segments, whereas no roots were formed on the opposite, distal cut surface (DCS). To identify the transcript abundance changes linked to adventitious root development, RNA was isolated from the PCS and DCS at 0, 4 and 7 days after culture. Illumina sequencing of libraries generated from these samples yielded 62.36 Gb of high-quality reads that were assembled into 74,745 unigenes with an average sequence length of 807 base pairs; 33,252 of the assembled unigenes had homologs in at least one of the public databases. Comparative analysis of these transcriptome databases revealed 1966 differentially expressed genes (DEGs) between the different time points at the PCS, but only 51 DEGs for the PCS vs. DCS comparison of time-matched samples. Of these DEGs, 1636 were assigned to gene ontology (GO) classes, the majority of which were involved in cellular processes, metabolic processes and single-organism processes. Candidate genes that may be involved in the adventitious root formation of mango cotyledon segments are predicted to encode polar auxin transport carriers, auxin-regulated proteins, cell wall remodeling enzymes and ethylene-related proteins. To validate the RNA-sequencing results, we further analyzed the expression profiles of 20 genes by quantitative real-time PCR. This study expands the transcriptome information for Mangifera indica and identifies candidate genes involved in adventitious root formation in cotyledon segments of mango.

  3. Segmentation of the spinous process and its acoustic shadow in vertebral ultrasound images.

    PubMed

    Berton, Florian; Cheriet, Farida; Miron, Marie-Claude; Laporte, Catherine

    2016-05-01

    Spinal ultrasound imaging is emerging as a low-cost, radiation-free alternative to conventional X-ray imaging for the clinical follow-up of patients with scoliosis. Currently, deformity measurement relies almost entirely on manual identification of key vertebral landmarks. However, the interpretation of vertebral ultrasound images is challenging, primarily because acoustic waves are entirely reflected by bone. To alleviate this problem, we propose an algorithm to segment these images into three regions: the spinous process, its acoustic shadow and other tissues. This method consists, first, in the extraction of several image features and the selection of the most relevant ones for the discrimination of the three regions. Then, using this set of features and linear discriminant analysis, each pixel of the image is classified as belonging to one of the three regions. Finally, the image is segmented by regularizing the pixel-wise classification results to account for some geometrical properties of vertebrae. The feature set was first validated by analyzing the classification results across a learning database. The database contained 107 vertebral ultrasound images acquired with convex and linear probes. Classification rates of 84%, 92% and 91% were achieved for the spinous process, the acoustic shadow and other tissues, respectively. Dice similarity coefficients of 0.72 and 0.88 were obtained respectively for the spinous process and acoustic shadow, confirming that the proposed method accurately segments the spinous process and its acoustic shadow in vertebral ultrasound images. Furthermore, the centroid of the automatically segmented spinous process was located at an average distance of 0.38 mm from that of the manually labeled spinous process, which is on the order of image resolution. 
This suggests that the proposed method is a promising tool for the measurement of the Spinous Process Angle and, more generally, for assisting ultrasound-based assessment of scoliosis progression. Copyright © 2016 Elsevier Ltd. All rights reserved.
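
    The pixel-wise classification step uses linear discriminant analysis. A minimal two-class Fisher discriminant sketch on synthetic features (the data, feature dimensions, and midpoint decision rule are illustrative, not the paper's):

```python
import numpy as np

def fisher_lda(X0, X1):
    """Fisher linear discriminant for two classes: project onto
    w = Sw^-1 (m1 - m0), with a midpoint threshold between class means."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))  # within-class scatter
    w = np.linalg.solve(Sw, m1 - m0)
    threshold = w @ (m0 + m1) / 2.0
    return w, threshold

def classify(x, w, threshold):
    return int(w @ x > threshold)   # 0 or 1

# toy, well-separated feature vectors for two pixel classes
X0 = np.array([[0.0, 0.0], [0.5, 0.2], [0.2, 0.5], [-0.3, 0.1]])
X1 = X0 + 3.0
w, threshold = fisher_lda(X0, X1)
```

    The regularization step described in the abstract would then smooth such per-pixel decisions using the geometry of the vertebra.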

  4. Physical-chemical property based sequence motifs and methods regarding same

    DOEpatents

    Braun, Werner [Friendswood, TX; Mathura, Venkatarajan S [Sarasota, FL; Schein, Catherine H [Friendswood, TX

    2008-09-09

    A data analysis system, program, and/or method, e.g., a data mining/data exploration method, using physical-chemical property motifs. For example, a sequence database may be searched for identifying segments thereof having physical-chemical properties similar to the physical-chemical property motifs.

  5. Polydopamine Particle-Filled Shape-Memory Polyurethane Composites with Fast Near-Infrared Light Responsibility.

    PubMed

    Yang, Li; Tong, Rui; Wang, Zhanhua; Xia, Hesheng

    2018-03-25

    A new kind of fast near-infrared (NIR) light-responsive shape-memory polymer composite was prepared by introducing polydopamine particles (PDAPs) into a commercial shape-memory polyurethane (SMPU). The toughness and strength of the polydopamine-particle-filled polyurethane composites (SMPU-PDAPs) were significantly enhanced with the addition of PDAPs, owing to the strong interfacial interaction between the PDAPs and the polyurethane segments. Owing to the outstanding photothermal effect of the PDAPs, the composites exhibit a rapid light-responsive shape-memory process within 60 s at a PDAP content of 0.01 wt%. Given their excellent dispersion and convenient preparation, PDAPs have great potential as high-efficiency and environmentally friendly fillers for novel photoactive functional polymer composites. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Spectral Skyline Separation: Extended Landmark Databases and Panoramic Imaging

    PubMed Central

    Differt, Dario; Möller, Ralf

    2016-01-01

    Evidence from behavioral experiments suggests that insects use the skyline as a cue for visual navigation. However, changes of lighting conditions, over hours, days or possibly seasons, significantly affect the appearance of the sky and ground objects. One possible solution to this problem is to extract the “skyline” by an illumination-invariant classification of the environment into two classes, ground objects and sky. In a previous study (Insect models of illumination-invariant skyline extraction from UV (ultraviolet) and green channels), we examined the idea of using two different color channels available to many insects (UV and green) to perform this segmentation. We found that for suburban scenes in temperate zones, where the skyline is dominated by trees and artificial objects like houses, a “local” UV segmentation with adaptive thresholds applied to individual images leads to the most reliable classification. Furthermore, a “global” segmentation with fixed thresholds (trained on an image dataset recorded over several days) using UV-only information is only slightly worse than using both the UV and green channels. In this study, we address three issues: First, to extend the limited range of environments covered by the dataset collected in the previous study, we gathered additional data samples of skylines consisting of minerals (stones, sand, earth) as ground objects. We could show that also for mineral-rich environments, UV-only segmentation achieves a quality comparable to multi-spectral (UV and green) segmentation. Second, we collected a wide variety of ground objects to examine their spectral characteristics under different lighting conditions. On the one hand, we found that the special case of diffusely-illuminated minerals makes it more difficult to reliably separate ground objects from the sky. 
On the other hand, the spectral characteristics of this collection of ground objects agree well with the data collected in the skyline databases, which, given the increased variety of ground objects, strengthens the validity of our findings for novel environments. Third, we collected omnidirectional images of skylines, as often used for visual navigation tasks, with a UV-reflective hyperbolic mirror. We could show that “local” separation techniques can be adapted to panoramic images by splitting the image into segments and finding individual thresholds for each segment. In contrast, this is not possible for “global” separation techniques. PMID:27690053

  7. Automatic segmentation of brain MRIs and mapping neuroanatomy across the human lifespan

    NASA Astrophysics Data System (ADS)

    Keihaninejad, Shiva; Heckemann, Rolf A.; Gousias, Ioannis S.; Rueckert, Daniel; Aljabar, Paul; Hajnal, Joseph V.; Hammers, Alexander

    2009-02-01

    A robust model for the automatic segmentation of human brain images into anatomically defined regions across the human lifespan would be highly desirable, but such structural segmentations of brain MRI are challenging due to age-related changes. We have developed a new method, based on established algorithms for automatic segmentation of young adults' brains. We used prior information from 30 anatomical atlases, which had been manually segmented into 83 anatomical structures. Target MRIs came from 80 subjects (~12 individuals/decade) from 20 to 90 years, with equal numbers of men and women; data came from two different scanners (1.5T, 3T), using the IXI database. Each of the adult atlases was registered to each target MR image. By using additional information from segmentation into tissue classes (GM, WM and CSF) to initialise the warping based on label consistency similarity before feeding this into the previous normalised mutual information non-rigid registration, the registration became robust enough to accommodate atrophy and ventricular enlargement with age. The final segmentation was obtained by combination of the 30 propagated atlases using decision fusion. Kernel smoothing was used for modelling the structural volume changes with aging. Example linear correlation coefficients with age were, for lateral ventricular volume, r_male = 0.76, r_female = 0.58 and, for hippocampal volume, r_male = -0.6, r_female = -0.4 (all p < 0.01).
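
    The decision-fusion step combines the 30 propagated atlas labelings per voxel. A common instantiation is a per-voxel majority vote, sketched here on 1-D toy arrays standing in for label volumes (the abstract does not specify the exact fusion rule, so this is an illustrative assumption):

```python
import numpy as np

def fuse_labels(propagated):
    """Majority-vote decision fusion: for each voxel, pick the label most
    atlases agree on. propagated: list of equally shaped label arrays."""
    stack = np.stack(propagated)            # (n_atlases, n_voxels)

    def vote(col):
        vals, counts = np.unique(col, return_counts=True)
        return vals[np.argmax(counts)]

    return np.apply_along_axis(vote, 0, stack)

# three propagated atlas labelings for three voxels
atlases = [np.array([1, 2, 3]), np.array([1, 2, 2]), np.array([4, 2, 3])]
fused = fuse_labels(atlases)
```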

  8. Segmentation of lung fields using Chan-Vese active contour model in chest radiographs

    NASA Astrophysics Data System (ADS)

    Sohn, Kiwon

    2011-03-01

    A CAD tool for chest radiographs consists of several procedures, and the very first step is segmentation of lung fields. We develop a novel methodology for segmentation of lung fields in chest radiographs that satisfies the following two requirements. First, we aim to develop a segmentation method that does not need a training stage with manual estimation of anatomical features in a large training dataset of images. Secondly, for the ease of implementation, it is desirable to apply a well-established model that is widely used for various image-partitioning practices. The Chan-Vese active contour model, which is based on the Mumford-Shah functional in the level set framework, is applied for segmentation of lung fields. With this model, segmentation of lung fields can be carried out without detailed prior knowledge of the radiographic anatomy of the chest, yet in some chest radiographs, the trachea regions are unfavorably segmented out in addition to the lung field contours. To eliminate artifacts from the trachea, we locate the upper end of the trachea, find a vertical center line of the trachea and delineate it, and then brighten the trachea region to make it less distinctive. The segmentation process is finalized by subsequent morphological operations. We randomly selected 30 images from the Japanese Society of Radiological Technology image database to test the proposed methodology and the results are shown. We hope our segmentation technique can help promote CAD tools, especially for emerging chest radiographic imaging techniques such as dual-energy radiography and chest tomosynthesis.
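
    The data term at the heart of the Chan-Vese model alternates between estimating the two region means (c1, c2) and reassigning pixels to the nearer mean. This sketch shows only that piecewise-constant core; the full model's level-set evolution and curvature/length regularization are deliberately omitted:

```python
import numpy as np

def two_phase(img, n_iter=10):
    """Piecewise-constant two-phase segmentation (Chan-Vese data term only):
    iterate between region means c1, c2 and nearest-mean reassignment."""
    mask = img > img.mean()                 # crude initialization
    for _ in range(n_iter):
        c1, c2 = img[mask].mean(), img[~mask].mean()
        mask = (img - c1) ** 2 < (img - c2) ** 2
    return mask

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0          # bright region on dark background
mask = two_phase(img)        # recovers the bright square
```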

  9. Segmenting patients and physicians using preferences from discrete choice experiments.

    PubMed

    Deal, Ken

    2014-01-01

    People often form groups or segments that have similar interests and needs and seek similar benefits from health providers. Health organizations need to understand whether the same health treatments, prevention programs, services, and products should be applied to everyone in the relevant population or whether different treatments need to be provided to each of several segments that are relatively homogeneous internally but heterogeneous among segments. Our objective was to explain the purposes, benefits, and methods of segmentation for health organizations, and to illustrate the process of segmenting health populations based on preference coefficients from a discrete choice conjoint experiment (DCE) using an example study of prevention of cyberbullying among university students. We followed a two-level procedure for investigating segmentation incorporating several methods for forming segments in Level 1 using DCE preference coefficients and testing their quality, reproducibility, and usability by health decision makers. Covariates (demographic, behavioral, lifestyle, and health state variables) were included in Level 2 to further evaluate quality and to support the scoring of large databases and developing typing tools for assigning those in the relevant population, but not in the sample, to the segments. Several segmentation solution candidates were found during the Level 1 analysis, and the relationship of the preference coefficients to the segments was investigated using predictive methods. Those segmentations were tested for their quality and reproducibility and three were found to be very close in quality. While one seemed better than others in the Level 1 analysis, another was very similar in quality and proved ultimately better in predicting segment membership using covariates in Level 2. 
The two segments in the final solution were profiled for attributes that would support the development and acceptance of cyberbullying prevention programs among university students. The two segments were very different: one wanted substantial penalties against cyberbullies and was willing to devote time to a prevention program, while the other felt no need to be involved in prevention and wanted only minor penalties. Segmentation recognizes key differences in why patients and physicians prefer different health programs and treatments. A viable segmentation solution may lead to adapting prevention programs and treatments for each targeted segment and/or to educating and communicating to better inform those in each segment of the program/treatment benefits. Segment members' revealed preferences showing behavioral changes provide the ultimate basis for evaluating the segmentation benefits to the health organization.

  10. Automatic 3D liver segmentation based on deep learning and globally optimized surface evolution

    NASA Astrophysics Data System (ADS)

    Hu, Peijun; Wu, Fa; Peng, Jialin; Liang, Ping; Kong, Dexing

    2016-12-01

    The detection and delineation of the liver from abdominal 3D computed tomography (CT) images are fundamental tasks in computer-assisted liver surgery planning. However, automatic and accurate segmentation, especially liver detection, remains challenging due to complex backgrounds, ambiguous boundaries, heterogeneous appearances and highly varied shapes of the liver. To address these difficulties, we propose an automatic segmentation framework based on 3D convolutional neural network (CNN) and globally optimized surface evolution. First, a deep 3D CNN is trained to learn a subject-specific probability map of the liver, which gives the initial surface and acts as a shape prior in the following segmentation step. Then, both global and local appearance information from the prior segmentation are adaptively incorporated into a segmentation model, which is globally optimized in a surface evolution way. The proposed method has been validated on 42 CT images from the public Sliver07 database and local hospitals. On the Sliver07 online testing set, the proposed method can achieve an overall score of 80.3 ± 4.5, yielding a mean Dice similarity coefficient of 97.25 ± 0.65%, and an average symmetric surface distance of 0.84 ± 0.25 mm. The quantitative validations and comparisons show that the proposed method is accurate and effective for clinical application.

  11. Segmented Detector Calibration Techniques for the PROSPECT Experiment

    NASA Astrophysics Data System (ADS)

    Davee, Daniel; Prospect Collaboration

    2016-03-01

    PROSPECT will make the most precise measurement of the 235U anti-neutrino spectrum to date and search for eV-scale sterile neutrinos. The proposed detector is composed of 120 6Li-loaded liquid-scintillator-filled cells, and uses inverse beta decay (IBD), ν̄ + p → e⁺ + n, to detect reactor anti-neutrinos. Because the positron produced in IBD carries most of the anti-neutrino energy, the response throughout the entire segmented detector to electron-like energy depositions must be determined with high precision via an extensive calibration program. To this end the detector is designed to allow for the insertion of both optical and radioactive sources to test the performance of each cell individually without changing the optical response. In addition to these measures, cosmogenic sources will be used to probe the energy response of the detector at high energies.

  12. Video shot boundary detection using region-growing-based watershed method

    NASA Astrophysics Data System (ADS)

    Wang, Jinsong; Patel, Nilesh; Grosky, William

    2004-10-01

    In this paper, a novel shot boundary detection approach is presented, based on the popular region-growing segmentation method: watershed segmentation. In image processing, gray-scale pictures can be considered as topographic reliefs, in which the numerical value of each pixel of a given image represents the elevation at that point. The watershed method segments images by filling up basins with water starting at local minima, and at points where water coming from different basins meets, dams are built. In our method, each frame in the video sequence is first transformed from the feature space into the topographic space based on a density function. Low-level features are extracted from frame to frame, and each frame is then treated as a point in the feature space. The density of each point is defined as the sum of the influence functions of all neighboring data points. The height function originally used in watershed segmentation is then replaced by the inverted density at each point, so that all the highest density values are transformed into local minima. Subsequently, watershed segmentation is performed in the topographic space by carefully extracting the markers and choosing the stopping criterion. The intuitive idea behind our method is that frames within a shot are highly agglomerative in the feature space and have a high possibility of being merged together, while frames between shots, which represent shot changes, have lower density values and are less likely to be clustered.
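    A 1-D sketch of the density idea (hypothetical per-frame scalar features; the paper uses richer low-level features and a full watershed): frames inside a shot reinforce each other's density, while a transition frame between shots is isolated and shows up as the density minimum.

```python
import numpy as np

def influence_density(features, sigma=1.0):
    """Density of each frame = sum of Gaussian influences of all frames."""
    d = features[:, None] - features[None, :]
    return np.exp(-(d ** 2) / (2 * sigma ** 2)).sum(axis=1)

# Two synthetic "shots" clustered around 0 and 10, with one transition frame at 5.
features = np.array([0.0, 0.1, 0.2, 0.1, 5.0, 10.0, 10.1, 10.2])
density = influence_density(features)
height = -density                    # invert: dense shot interiors become basins
boundary = int(np.argmin(density))   # the watershed "dam" falls at the sparse frame
print(boundary)  # frame 4, the transition between the two shots
```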

  13. Segmentation of organs at risk in CT volumes of head, thorax, abdomen, and pelvis

    NASA Astrophysics Data System (ADS)

    Han, Miaofei; Ma, Jinfeng; Li, Yan; Li, Meiling; Song, Yanli; Li, Qiang

    2015-03-01

    Accurate segmentation of organs at risk (OARs) is a key step in the treatment planning system (TPS) of image-guided radiation therapy. We are developing three classes of methods to segment 17 organs at risk throughout the whole body, including brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, liver, kidneys, spleen, prostate, rectum, femoral heads, and skin. The three classes of segmentation methods include (1) threshold-based methods for organs with large contrast against adjacent structures, such as lungs, trachea, and skin; (2) context-driven Generalized Hough Transform-based methods combined with a graph cut algorithm for robust localization and segmentation of liver, kidneys, and spleen; and (3) atlas- and registration-based methods for segmentation of the heart and all organs in CT volumes of the head and pelvis. The segmentation accuracy for the seventeen organs was subjectively evaluated by two medical experts on a three-level score: 0, poor (unusable in clinical practice); 1, acceptable (minor revision needed); and 2, good (nearly no revision needed). A database was collected from Ruijin Hospital, Huashan Hospital, and Xuhui Central Hospital in Shanghai, China, including 127 head scans, 203 thoracic scans, 154 abdominal scans, and 73 pelvic scans. The percentages of "good" segmentation results were 97.6%, 92.9%, 81.1%, 87.4%, 85.0%, 78.7%, 94.1%, 91.1%, 81.3%, 86.7%, 82.5%, 86.4%, 79.9%, 72.6%, 68.5%, 93.2%, and 96.9% for brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, liver, kidneys, spleen, prostate, rectum, femoral heads, and skin, respectively. Various organs at risk can be reliably segmented from CT scans by use of the three classes of segmentation methods.
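    Class (1) above exploits the large Hounsfield-unit contrast of air-filled organs against soft tissue; a trivial sketch (the HU window is an illustrative assumption, not the paper's):

```python
import numpy as np

def threshold_segment(ct_hu, lo, hi):
    """Binary mask of voxels whose Hounsfield values fall inside [lo, hi]."""
    return (ct_hu >= lo) & (ct_hu <= hi)

# Toy 1-D "scan": air (-1000 HU), lung parenchyma (~-750 HU), soft tissue (~40 HU).
ct = np.array([-1000, -760, -740, 35, 45, -1000])
lung = threshold_segment(ct, -900, -500)   # hypothetical lung window
print(lung.astype(int).tolist())  # [0, 1, 1, 0, 0, 0]
```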

  14. Discriminative confidence estimation for probabilistic multi-atlas label fusion.

    PubMed

    Benkarim, Oualid M; Piella, Gemma; González Ballester, Miguel Angel; Sanroma, Gerard

    2017-12-01

    Quantitative neuroimaging analyses often rely on the accurate segmentation of anatomical brain structures. In contrast to manual segmentation, automatic methods offer reproducible outputs and provide scalability to study large databases. Among existing approaches, multi-atlas segmentation has recently been shown to yield state-of-the-art performance in automatic segmentation of brain images. It consists of propagating the labelmaps from a set of atlases to the anatomy of a target image using image registration, and then fusing these multiple warped labelmaps into a consensus segmentation on the target image. Accurately estimating the contribution of each atlas labelmap to the final segmentation is a critical step for the success of multi-atlas segmentation. Common approaches to label fusion rely on local patch similarity, probabilistic statistical frameworks, or a combination of both. In this work, we propose a probabilistic label fusion framework based on atlas label confidences computed at each voxel of the structure of interest. Maximum likelihood atlas confidences are estimated using a supervised approach, explicitly modeling the relationship between local image appearances and segmentation errors produced by each of the atlases. We evaluate different spatial pooling strategies for modeling local segmentation errors. We also present a novel type of label-dependent appearance features based on atlas labelmaps that are used during confidence estimation to increase the accuracy of our label fusion. Our approach is evaluated on the segmentation of seven subcortical brain structures from the MICCAI 2013 SATA Challenge dataset and the hippocampi from the ADNI dataset. Overall, our results indicate that the proposed label fusion framework achieves superior performance to state-of-the-art approaches in the majority of the evaluated brain structures and shows more robustness to registration errors. Copyright © 2017 Elsevier B.V. All rights reserved.
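    The fusion step can be pictured as confidence-weighted voting at each voxel; a minimal numpy sketch under that reading (array names and the toy confidences are illustrative, not the paper's estimator):

```python
import numpy as np

def fuse_labels(warped_labels, confidences):
    """Confidence-weighted voting over warped atlas labelmaps at each voxel.

    warped_labels: (n_atlases, n_voxels) integer labels
    confidences:   (n_atlases, n_voxels) per-voxel atlas confidences
    """
    labels = np.unique(warped_labels)
    # Accumulate confidence mass per candidate label, then take the argmax.
    votes = np.stack([(warped_labels == l) * confidences for l in labels]).sum(axis=1)
    return labels[np.argmax(votes, axis=0)]

warped = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])
conf = np.array([[0.9, 0.2, 0.1], [0.8, 0.3, 0.1], [0.1, 0.4, 0.9]])
print(fuse_labels(warped, conf).tolist())  # [1, 0, 0]
```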

  15. SpArcFiRe: Scalable automated detection of spiral galaxy arm segments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Darren R.; Hayes, Wayne B., E-mail: drdavis@uci.edu, E-mail: whayes@uci.edu

    Given an approximately centered image of a spiral galaxy, we describe an entirely automated method that finds, centers, and sizes the galaxy (possibly masking nearby stars and other objects if necessary in order to isolate the galaxy itself) and then automatically extracts structural information about the spiral arms. For each arm segment found, we list the pixels in that segment, allowing image analysis on a per-arm-segment basis. We also perform a least-squares fit of a logarithmic spiral arc to the pixels in that segment, giving per-arc parameters such as the pitch angle, arm segment length, location, etc. The algorithm takes about one minute per galaxy and can easily be scaled using parallelism. We have run it on all ∼644,000 Sloan objects that are larger than 40 pixels across and classified as 'galaxies'. We find a very good correlation between our quantitative description of a spiral structure and the qualitative description provided by Galaxy Zoo humans. Our objective, quantitative measures of structure demonstrate the difficulty in defining exactly what constitutes a spiral 'arm', leading us to prefer the term 'arm segment'. We find that pitch angle often varies significantly segment-to-segment in a single spiral galaxy, making it difficult to define the pitch angle for a single galaxy. We demonstrate how our new database of arm segments can be queried to find galaxies satisfying specific quantitative visual criteria. For example, even though our code does not explicitly find rings, a good surrogate is to look for galaxies having one long, low-pitch-angle arm, which is how our code views ring galaxies. SpArcFiRe is available at http://sparcfire.ics.uci.edu.
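    A logarithmic spiral is linear in log-radius, so the per-arc least-squares fit reduces to a straight-line fit; a sketch on synthetic data (the pitch-angle convention, arctan of the growth rate, is one common choice and an assumption here):

```python
import numpy as np

def fit_log_spiral(theta, r):
    """Least-squares fit of r = r0 * exp(b * theta); returns (r0, b, pitch_deg)."""
    b, ln_r0 = np.polyfit(theta, np.log(r), 1)   # linear fit in (theta, ln r)
    return np.exp(ln_r0), b, np.degrees(np.arctan(b))

theta = np.linspace(0.0, 2 * np.pi, 50)
r = 2.0 * np.exp(0.3 * theta)             # synthetic arm segment
r0, b, pitch = fit_log_spiral(theta, r)
print(round(r0, 3), round(b, 3), round(pitch, 1))  # 2.0 0.3 16.7
```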

  16. Evaluation of a New Ensemble Learning Framework for Mass Classification in Mammograms.

    PubMed

    Rahmani Seryasat, Omid; Haddadnia, Javad

    2018-06-01

    Mammography is the most common screening method for diagnosis of breast cancer. In this study, a computer-aided system for diagnosing benign and malignant masses in mammogram images was implemented. In the computer-aided diagnosis system, we first reduce the noise in the mammograms using an effective noise-removal technique. After the noise removal, the mass in the region of interest is segmented using a deformable model. After the mass segmentation, a number of features are extracted from it, including features of the mass shape and border, tissue properties, and the fractal dimension. After extracting a large number of features, a proper subset must be chosen from among them; in this study, we make use of a new method based on a genetic algorithm to select a proper set of features. After determining the proper features, a classifier is trained. To classify the samples, a new architecture for combining classifiers is proposed, in which easy and difficult samples are identified and trained using different classifiers. Finally, the proposed mass diagnosis system was tested on the mini-MIAS (Mammographic Image Analysis Society) and Digital Database for Screening Mammography (DDSM) databases. The obtained results indicate that the proposed system can compete with state-of-the-art methods in terms of accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.
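    Genetic-algorithm feature selection can be sketched as evolving binary inclusion masks with crossover and mutation; the fitness function below is a stand-in toy (the paper's fitness would come from classifier performance, and all names here are illustrative):

```python
import random

def ga_select(n_feats, fitness, pop=20, gens=40, p_mut=0.1, seed=0):
    """Tiny genetic algorithm over binary feature masks (illustrative only)."""
    rng = random.Random(seed)
    popn = [[rng.randint(0, 1) for _ in range(n_feats)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        parents = popn[: pop // 2]                    # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_feats)
            child = a[:cut] + b[cut:]                 # one-point crossover
            for i in range(n_feats):                  # bit-flip mutation
                if rng.random() < p_mut:
                    child[i] ^= 1
            children.append(child)
        popn = parents + children
    return max(popn, key=fitness)

# Hypothetical fitness: features 0 and 3 are informative, extras are penalized.
fitness = lambda mask: 2 * (mask[0] + mask[3]) - sum(mask)
print(ga_select(6, fitness))
```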

  17. Extended Multiscale Image Segmentation for Castellated Wall Management

    NASA Astrophysics Data System (ADS)

    Sakamoto, M.; Tsuguchi, M.; Chhatkuli, S.; Satoh, T.

    2018-05-01

    Castellated walls are regarded as tangible cultural heritage and require regular maintenance to preserve their original state. For demolition and repair work on a castellated wall, it is necessary to identify the individual stones constituting the wall. However, conventional approaches using laser scanning or integrated circuit (IC) tags were very time-consuming and cumbersome. We therefore propose an efficient approach for castellated wall management based on an extended multiscale image segmentation technique. In this approach, individual stone polygons are extracted from the castellated wall image and are associated with a stone management database. First, to improve the extraction of individual stone polygons having a convex shape, we developed a new shape criterion named convex hull fitness in the image segmentation process and confirmed its effectiveness. Next, we discuss the stone management database and its beneficial utilization in the repair work of castellated walls. Subsequently, we propose irregular-shape indexes that are helpful for evaluating the stone shape and the stability of the stone arrangement in castellated walls. Finally, we demonstrate an application of the proposed method to a typical castellated wall in Japan. We confirmed that the stone polygons can be extracted at an acceptable level, and that the condition of the shapes and the layout of the stones could be visually judged with the proposed irregular-shape indexes.
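    A "convex hull fitness" criterion can be read as the ratio of a region's area to the area of its convex hull, near 1.0 for convex stone faces and smaller for concave ones; a self-contained sketch under that interpretation (the paper's exact definition may differ):

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(verts):
    """Shoelace formula for a simple polygon."""
    n = len(verts)
    return abs(sum(verts[i][0]*verts[(i+1) % n][1] - verts[(i+1) % n][0]*verts[i][1]
                   for i in range(n))) / 2.0

def convex_hull_fitness(polygon):
    """Polygon area over its convex-hull area (1.0 for a convex shape)."""
    return polygon_area(polygon) / polygon_area(convex_hull(polygon))

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
l_shape = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]
print(convex_hull_fitness(square), round(convex_hull_fitness(l_shape), 2))  # 1.0 0.86
```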

  18. One size does not fit all: how the tobacco industry has altered cigarette design to target consumer groups with specific psychological and psychosocial needs.

    PubMed

    Cook, Benjamin Lê; Wayne, Geoffrey Ferris; Keithly, Lois; Connolly, Gregory

    2003-11-01

    To identify whether the tobacco industry has targeted cigarette product design towards individuals with varying psychological/psychosocial needs. Internal industry documents were identified through searches of an online archival document research tool database using relevancy criteria of consumer segmentation and needs assessment. The industry segmented consumer markets based on psychological needs (stress relief, behavioral arousal, performance enhancement, obesity reduction) and psychosocial needs (social acceptance, personal image). Associations between these segments and smoking behaviors, brand and design preferences were used to create cigarette brands targeting individuals with these needs. Cigarette brands created to address the psychological/psychosocial needs of smokers may increase the likelihood of smoking initiation and addiction. Awareness of targeted product development will improve smoking cessation and prevention efforts.

  19. EXTENSIBLE DATABASE FRAMEWORK FOR MANAGEMENT OF UNSTRUCTURED AND SEMI-STRUCTURED DOCUMENTS

    NASA Technical Reports Server (NTRS)

    Gawdiak, Yuri O. (Inventor); La, Tracy T. (Inventor); Lin, Shu-Chun Y. (Inventor); Malof, David A. (Inventor); Tran, Khai Peter B. (Inventor)

    2005-01-01

    Method and system for querying a collection of unstructured or semi-structured documents to identify the presence of, and provide context and/or content for, keywords and/or keyphrases. The documents are analyzed and assigned a node structure, including an ordered sequence of mutually exclusive node segments or strings. Each node has an associated set of at least four, five or six attributes with node information and can represent a format marker or text, with the last node in any node segment usually being a text node. A keyword (or keyphrase) is specified, and the last node in each node segment is searched for a match with the keyword. When a match is found at a query node, or at a node determined with reference to a query node, the system displays the context and/or the content of the query node.
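    A miniature of the querying scheme described above, with a hypothetical two-attribute node model (the patent specifies at least four attributes; this sketch keeps only node kind and content):

```python
# A document as an ordered sequence of node segments; each segment ends in a
# text node. A query searches the last node of each segment for the keyword
# and reports that query node's content as context.
segments = [
    [("format", "<title>"), ("text", "Extensible database framework")],
    [("format", "<p>"), ("text", "Keyword queries over semi-structured documents")],
    [("format", "<p>"), ("text", "Node attributes carry structural information")],
]

def query(segments, keyword):
    hits = []
    for i, seg in enumerate(segments):
        kind, content = seg[-1]          # last node in a segment is text
        if kind == "text" and keyword.lower() in content.lower():
            hits.append((i, content))
    return hits

print(query(segments, "keyword"))
```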

  20. The Effect of Arc Proximity on Hydrothermal Activity Along Spreading Centers: New Evidence From the Mariana Back Arc (12.7°N-18.3°N)

    NASA Astrophysics Data System (ADS)

    Baker, Edward T.; Walker, Sharon L.; Resing, Joseph A.; Chadwick, William W.; Merle, Susan G.; Anderson, Melissa O.; Butterfield, David A.; Buck, Nathan J.; Michael, Susanna

    2017-11-01

    Back-arc spreading centers (BASCs) form a distinct class of ocean spreading ridges distinguished by steep along-axis gradients in spreading rate and by additional magma supplied through subduction. These characteristics can affect the population and distribution of hydrothermal activity on BASCs compared to mid-ocean ridges (MORs). To investigate this hypothesis, we comprehensively explored 600 km of the southern half of the Mariana BASC. We used water column mapping and seafloor imaging to identify 19 active vent sites, an increase of 13 over the current listing in the InterRidge Database (IRDB), on the bathymetric highs of 7 of the 11 segments. We identified both high and low (i.e., characterized by a weak or negligible particle plume) temperature discharge occurring on segment types spanning dominantly magmatic to dominantly tectonic. Active sites are concentrated on the two southernmost segments, where distance to the adjacent arc is shortest (<40 km), spreading rate is highest (>48 mm/yr), and tectonic extension is pervasive. Re-examination of hydrothermal data from other BASCs supports the generalization that hydrothermal site density increases on segments <90 km from an adjacent arc. Although exploration quality varies greatly among BASCs, present data suggest that, for a given spreading rate, the mean spatial density of hydrothermal activity varies little between MORs and BASCs. The present global database, however, may be misleading. On both BASCs and MORs, the spatial density of hydrothermal sites mapped by high-quality water-column surveys is 2-7 times greater than predicted by the existing IRDB trend of site density versus spreading rate.

  1. Mini-DNA barcode in identification of the ornamental fish: A case study from Northeast India.

    PubMed

    Dhar, Bishal; Ghosh, Sankar Kumar

    2017-09-05

    Ornamental fishes are exported under trade names or generic names, creating problems in species identification. In this regard, DNA barcoding can effectively elucidate the actual species status. However, problems arise if the specimen has taxonomic disputes or is falsified by trade/generic names. On the other hand, barcoding archival museum specimens would be of great benefit in addressing such issues, as it would create a firm, error-free reference database for rapid identification of any species. This can be achieved only by generating short sequences, as DNA from chemically preserved specimens is mostly degraded. Here we aimed to identify a short stretch of informative sites within the full-length barcode segment capable of delineating the diverse group of ornamental fish species commonly traded from NE India. We analyzed 287 full-length barcode sequences from the major fish orders, compared the interspecific K2P distance with nucleotide substitution patterns, and found a strong correlation of interspecies distance with transversions (0.95, p<0.001). We therefore proposed a short, transversion-rich 171-bp segment as a mini-barcode. The proposed segment was compared with the full-length barcodes and found to delineate the species effectively. Successful PCR amplification and sequencing of the 171-bp segment using primers designed for different orders validated it as a mini-barcode for ornamental fishes. Thus, our findings should help strengthen the global database with sequences of archived fish species and serve as an effective, less time-consuming, cost-effective, field-based tool for identifying traded ornamental fish species. Copyright © 2017 Elsevier B.V. All rights reserved.
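    The K2P (Kimura two-parameter) distance used above weights transitions and transversions separately; a minimal implementation for two aligned sequences (the toy sequences are illustrative):

```python
import math

PURINES = {"A", "G"}  # transitions stay within purines or within pyrimidines

def k2p_distance(seq1, seq2):
    """Kimura two-parameter distance between two aligned sequences."""
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != "-" and b != "-"]
    n = len(pairs)
    ts = sum(1 for a, b in pairs
             if a != b and (a in PURINES) == (b in PURINES))   # transitions
    tv = sum(1 for a, b in pairs
             if a != b and (a in PURINES) != (b in PURINES))   # transversions
    p, q = ts / n, tv / n
    return -0.5 * math.log((1 - 2 * p - q) * math.sqrt(1 - 2 * q))

s1 = "ACGTACGTAC"
s2 = "ACGTACGTGC"   # one A->G change: a transition, so p = 0.1, q = 0
print(round(k2p_distance(s1, s2), 4))  # -0.5*ln(0.8) ≈ 0.1116
```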

  2. Segmentation method of eye region based on fuzzy logic system for classifying open and closed eyes

    NASA Astrophysics Data System (ADS)

    Kim, Ki Wan; Lee, Won Oh; Kim, Yeong Gon; Hong, Hyung Gil; Lee, Eui Chul; Park, Kang Ryoung

    2015-03-01

    The classification of eye openness and closure has been researched in various fields, e.g., driver drowsiness detection, physiological status analysis, and eye fatigue measurement. For a classification with high accuracy, accurate segmentation of the eye region is required. Most previous research used segmentation by image binarization on the basis that the eyeball is darker than skin, but the performance of this approach is frequently affected by thick eyelashes or shadows around the eye. Thus, we propose a fuzzy-based method for classifying eye openness and closure. First, the proposed method uses I and K color information from the HSI and CMYK color spaces, respectively, for eye segmentation. Second, the eye region is binarized using a fuzzy logic system based on the I and K inputs, which is less affected by eyelashes and shadows around the eye; the combined image of I and K pixels is obtained through the fuzzy logic system. Third, to reflect the effect of all inference values on the output score of the fuzzy system, we use a revised weighted-average method, in which all the rectangular regions defined by the inference values are considered when calculating the output score. Fourth, the proposed fuzzy-based method successfully classifies eye openness and closure in low-resolution eye images captured in an environment where people watch TV at a distance. By using the fuzzy logic system, our method requires no additional training procedure, irrespective of the chosen database. Experimental results with two databases of eye images show that our method is superior to previous approaches.
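    The I and K inputs come from simple per-pixel transforms of RGB; a sketch of one common definition (mean of RGB for I, one minus the normalized maximum channel for K), which we assume here. A dark pupil pixel scores low I and high K, a bright skin pixel the reverse:

```python
import numpy as np

def i_channel(rgb):
    """Intensity from the HSI model: mean of the normalized RGB channels."""
    return rgb.mean(axis=-1) / 255.0

def k_channel(rgb):
    """Black component from the CMYK model: 1 - max(R, G, B) / 255."""
    return 1.0 - rgb.max(axis=-1) / 255.0

pixels = np.array([[20, 20, 25],       # dark pupil pixel
                   [200, 170, 150]],   # bright skin pixel
                  dtype=float)
print(np.round(i_channel(pixels), 2).tolist(),
      np.round(k_channel(pixels), 2).tolist())  # [0.08, 0.68] [0.9, 0.22]
```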

  3. Psychiatric and medical comorbidities, associated pain, and health care utilization of patients prescribed buprenorphine.

    PubMed

    Mark, Tami L; Dilonardo, Joan; Vandivort, Rita; Miller, Kay

    2013-01-01

    This study describes the comorbidities and health care utilization of individuals treated with buprenorphine using the 2007-2009 MarketScan Research Databases. Buprenorphine recipients had a high prevalence of comorbidities associated with chronic pain, including back problems (42%), connective tissue disease (24-27%), and nontraumatic joint disorders (20-23%). Approximately 69% of recipients filled prescriptions for opioid agonist medications in the 6 months before buprenorphine initiation. Buprenorphine recipients were frequently diagnosed with anxiety (23-42%) and mood disorders (39-51%) and filled prescriptions for antidepressants (47-56%) and benzodiazepines (47-56%) at high rates. Surprisingly, only 53-54% of patients filling a prescription for buprenorphine had a coded opioid abuse/dependence diagnosis. Research is needed to better understand buprenorphine's effectiveness in the context of prescription drug abuse and the best way to coordinate services to address the patient's comorbid addiction, pain, and psychiatric illnesses. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. Radio-Frequency Tank Eigenmode Sensor for Propellant Quantity Gauging

    NASA Technical Reports Server (NTRS)

    Zimmerli, Gregory A.; Buchanan, David A.; Follo, Jeffrey C.; Vaden, Karl R.; Wagner, James D.; Asipauskas, Marius; Herlacher, Michael D.

    2010-01-01

    Although there are several methods for determining liquid level in a tank, there are no proven methods to quickly gauge the amount of propellant in a tank while it is in low gravity or under low-settling thrust conditions where propellant sloshing is an issue. Having the ability to quickly and accurately gauge propellant tanks in low-gravity is an enabling technology that would allow a spacecraft crew or mission control to always know the amount of propellant onboard, thus increasing the chances for a successful mission. The Radio Frequency Mass Gauge (RFMG) technique measures the electromagnetic eigenmodes, or natural resonant frequencies, of a tank containing a dielectric fluid. The essential hardware components consist of an RF network analyzer that measures the reflected power from an antenna probe mounted internal to the tank. At a resonant frequency, there is a drop in the reflected power, and these inverted peaks in the reflected power spectrum are identified as the tank eigenmode frequencies using a peak-detection software algorithm. This information is passed to a pattern-matching algorithm, which compares the measured eigenmode frequencies with a database of simulated eigenmode frequencies at various fill levels. A best match between the simulated and measured frequency values occurs at some fill level, which is then reported as the gauged fill level. The database of simulated eigenmode frequencies is created by using RF simulation software to calculate the tank eigenmodes at various fill levels. The input to the simulations consists of a fairly high-fidelity tank model with proper dimensions and including internal tank hardware, the dielectric properties of the fluid, and a defined liquid/vapor interface. 
Because of small discrepancies between the model and the actual hardware, the measured empty-tank spectra and simulations are used to create a set of correction factors for each mode (typically in the range of 0.999 to 1.001), which effectively accounts for the small discrepancies. These correction factors are multiplied to the modes at all fill levels. By comparing several measured modes with the simulations, it is possible to accurately gauge the amount of propellant in the tank. An advantage of the RFMG approach of applying computer simulations and a pattern-matching algorithm is that the
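    The gauging loop described above (simulate eigenmodes per fill level, correct with the empty-tank measurement, report the best-matching level) can be sketched with made-up numbers; the frequencies and fill grid below are illustrative, not flight data:

```python
import numpy as np

# Hypothetical database: three eigenmode frequencies (GHz) simulated at
# several fill levels (a real RFMG database would be far denser).
fill_levels = np.array([0.0, 0.25, 0.50, 0.75])
simulated = np.array([
    [1.000, 1.400, 1.800],   # empty tank
    [0.960, 1.350, 1.742],
    [0.915, 1.296, 1.677],
    [0.862, 1.230, 1.601],
])

def gauge(measured, measured_empty):
    # Per-mode correction factors from the empty-tank measurement,
    # applied to the simulations at every fill level.
    correction = measured_empty / simulated[0]
    corrected = simulated * correction
    # Best match = fill level minimizing the squared frequency residual.
    residuals = ((corrected - measured) ** 2).sum(axis=1)
    return fill_levels[np.argmin(residuals)]

measured_empty = np.array([1.001, 1.399, 1.801])  # slight model discrepancy
measured = np.array([0.916, 1.295, 1.678])        # near the 50% simulation
print(gauge(measured, measured_empty))  # 0.5
```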

  5. Breast histopathology image segmentation using spatio-colour-texture based graph partition method.

    PubMed

    Belsare, A D; Mushrif, M M; Pangarkar, M A; Meshram, N

    2016-06-01

    This paper proposes a novel integrated spatio-colour-texture based graph partitioning method for segmentation of nuclear arrangement in tubules with a lumen, or in solid islands without a lumen, from digitized Hematoxylin-Eosin stained breast histology images, in order to automate the process of histology breast image analysis and assist pathologists. We propose a new similarity-based superpixel generation method and integrate it with texton representation to form a spatio-colour-texture map of the breast histology image. A new weighted-distance-based similarity measure is then used for generation of the graph, and the final segmentation is obtained using the normalized cuts method. The extensive experiments carried out show that the proposed algorithm can segment nuclear arrangement in normal as well as malignant ducts in breast histology tissue images. For evaluation of the proposed method, a ground-truth image database of 100 malignant and nonmalignant breast histology images was created with the help of two expert pathologists, and a quantitative evaluation of the proposed breast histology image segmentation was performed, showing that the proposed method outperforms other methods. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.

  6. Unsupervised MRI segmentation of brain tissues using a local linear model and level set.

    PubMed

    Rivest-Hénault, David; Cheriet, Mohamed

    2011-02-01

    Real-world magnetic resonance imaging of the brain is affected by intensity nonuniformity (INU) phenomena, which make it difficult to fully automate the segmentation process. This difficult task is accomplished in this work by using a new method with two original features: (1) each brain tissue class is locally modeled using a local linear region representative, which allows us to account for the INU in an implicit way and to more accurately position the region's boundaries; and (2) the region models are embedded in the level set framework, so that the spatial coherence of the segmentation can be controlled in a natural way. Our new method has been tested on the ground-truthed Internet Brain Segmentation Repository (IBSR) database and gave promising results, with Tanimoto indexes ranging from 0.61 to 0.79 for the classification of the white matter and from 0.72 to 0.84 for the gray matter. To our knowledge, this is the first time a region-based level set model has been used to perform the segmentation of real-world MRI brain scans with convincing results. Copyright © 2011 Elsevier Inc. All rights reserved.
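    The Tanimoto index used for evaluation is the ratio of intersection to union of the two segmentations (the Jaccard index); a minimal sketch on toy masks:

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto (Jaccard) index: |A ∩ B| / |A ∪ B| for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

seg = np.array([1, 1, 1, 0, 0, 1])
ref = np.array([1, 1, 0, 0, 1, 1])
print(round(tanimoto(seg, ref), 2))  # 3 shared / 5 in union = 0.6
```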

  7. Concurrent tumor segmentation and registration with uncertainty-based sparse non-uniform graphs.

    PubMed

    Parisot, Sarah; Wells, William; Chemouny, Stéphane; Duffau, Hugues; Paragios, Nikos

    2014-05-01

    In this paper, we present a graph-based framework for concurrent brain tumor segmentation and atlas-to-diseased-patient registration. Both segmentation and registration problems are modeled using a unified pairwise discrete Markov Random Field model on a sparse grid superimposed on the image domain. Segmentation is addressed based on pattern classification techniques, while registration is performed by maximizing the similarity between volumes and is modular with respect to the matching criterion. The two problems are coupled by relaxing the registration term in the tumor area, corresponding to areas of high classification score and high dissimilarity between volumes. In order to overcome the main shortcomings of discrete approaches regarding appropriate sampling of the solution space as well as important memory requirements, content-driven samplings of the discrete displacement set and the sparse grid are considered, based on the local segmentation and registration uncertainties recovered by the min-marginal energies. State-of-the-art results on a substantial low-grade glioma database demonstrate the potential of our method, while our proposed approach shows maintained performance and strongly reduced complexity of the model. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Semisupervised learning using denoising autoencoders for brain lesion detection and segmentation.

    PubMed

    Alex, Varghese; Vaidhya, Kiran; Thirunavukkarasu, Subramaniam; Kesavadas, Chandrasekharan; Krishnamurthi, Ganapathy

    2017-10-01

    The work explores the use of denoising autoencoders (DAEs) for brain lesion detection, segmentation, and false-positive reduction. Stacked denoising autoencoders (SDAEs) were pretrained using a large number of unlabeled patient volumes and fine-tuned with patches drawn from a limited number of patients ([Formula: see text], 40, 65). The results show negligible loss in performance even when SDAE was fine-tuned using 20 labeled patients. Low grade glioma (LGG) segmentation was achieved using a transfer learning approach in which a network pretrained with high grade glioma data was fine-tuned using LGG image patches. The networks were also shown to generalize well and provide good segmentation on unseen BraTS 2013 and BraTS 2015 test data. The manuscript also includes the use of a single layer DAE, referred to as novelty detector (ND). ND was trained to accurately reconstruct nonlesion patches. The reconstruction error maps of test data were used to localize lesions. The error maps were shown to assign unique error distributions to various constituents of the glioma, enabling localization. The ND learns the nonlesion brain accurately as it was also shown to provide good segmentation performance on ischemic brain lesions in images from a different database.

  9. Laparoscopic liver surgery: towards a day-case management.

    PubMed

    Tranchart, Hadrien; Fuks, David; Lainas, Panagiotis; Gaillard, Martin; Dagher, Ibrahim; Gayet, Brice

    2017-12-01

    Ambulatory surgery (AS) is a contemporary subject of interest. The feasibility and safety of AS for solid abdominal organs are still dubious. In the present study, we aimed at defining potential surgical criteria for AS by analyzing a large database of patients who underwent laparoscopic liver surgery (LLS) in two French expert centers. This study was performed using prospectively filled databases including patients that underwent pure LLS between 1998 and 2015. Patients whose perioperative medical characteristics (ASA score <3, no associated extra-hepatic procedure, surgical duration ≤180 min, blood loss ≤300 mL, no intraoperative anesthesiological or surgical complication, no postoperative drainage) were potentially adapted for ambulatory LLS were included in the analysis. In order to determine the risk factors for postoperative complications, multivariate analysis was carried out. During the study period, pure LLS was performed in 994 patients. After preoperative and intraoperative characteristics screening, 174 (17.5%) patients were considered for the final analysis. Lesions (benign (46%) and liver metastases (43%)) were predominantly single with a mean size of 37 ± 32 mm in an underlying normal or steatotic liver parenchyma (94.8%). The vast majority of LLS performed were single procedures including wedge resections and liver cyst unroofing or left lateral sectionectomies (74%). The global morbidity rate was 14% and six patients presented a major complication (Dindo-Clavien ≥III). The mean length of stay was 5 ± 4 days. Multivariate analysis showed that major hepatectomy [OR 29.04 (2.26-37.19); P = 0.01] and resection of tumors localized in central segments [OR 41.24 (1.08-156.47); P = 0.04] were independent predictors of postoperative morbidity. 
In experienced teams, approximately 7% of highly selected patients requiring laparoscopic hepatic surgery (wedge resection, liver cyst unroofing, or left lateral sectionectomy) could benefit from ambulatory surgery management.

  10. Riparian Land Use/Land Cover Data for Five Study Units in the Nutrient Enrichment Effects Topical Study of the National Water-Quality Assessment Program

    USGS Publications Warehouse

    Johnson, Michaela R.; Buell, Gary R.; Kim, Moon H.; Nardi, Mark R.

    2007-01-01

    This dataset was developed as part of the National Water-Quality Assessment (NAWQA) Program, Nutrient Enrichment Effects Topical (NEET) study for five study units distributed across the United States: Apalachicola-Chattahoochee-Flint River Basin, Central Columbia Plateau-Yakima River Basin, Central Nebraska Basins, Potomac River Basin and Delmarva Peninsula, and White, Great and Little Miami River Basins. One hundred forty-three stream reaches were examined as part of the NEET study conducted 2003-04. Stream segments, with lengths equal to the logarithm of the basin area, were delineated upstream from the downstream ends of the stream reaches with the use of digital orthophoto quarter quadrangles (DOQQ) or selected from the high-resolution National Hydrography Dataset (NHD). Use of the NHD was necessary when the stream was not distinguishable in the DOQQ because of dense tree canopy. The analysis area for each stream segment was defined by a buffer beginning at the segment extending to 250 meters lateral to the stream segment. Delineation of land use/land cover (LULC) map units within stream segment buffers was conducted using on-screen digitizing of riparian LULC classes interpreted from the DOQQ. LULC units were mapped using a classification strategy consisting of nine classes. National Wetlands Inventory (NWI) data were used to aid in wetland classification. Longitudinal transect sampling lines offset from the stream segments were generated and partitioned into the underlying LULC types. These longitudinal samples yielded the relative linear extent and sequence of each LULC type within the riparian zone at the segment scale. The resulting areal and linear LULC data filled in the spatial-scale gap between the 30-meter resolution of the National Land Cover Dataset and the reach-level habitat assessment data collected onsite routinely for NAWQA ecological sampling. 
The final data consisted of 12 geospatial datasets: LULC within 25 meters of the stream reach (polygon); LULC within 50 meters of the stream reach (polygon); LULC within 50 meters of the stream segment (polygon); LULC within 100 meters of the stream segment (polygon); LULC within 150 meters of the stream segment (polygon); LULC within 250 meters of the stream segment (polygon); frequency of gaps in woody vegetation LULC at the reach scale (arc); stream reaches (arc); longitudinal LULC at the reach scale (arc); frequency of gaps in woody vegetation LULC at the segment scale (arc); stream segments (arc); and longitudinal LULC at the segment scale (arc).

  11. Electrocardiogram ST-Segment Morphology Delineation Method Using Orthogonal Transformations

    PubMed Central

    2016-01-01

    Differentiation between ischaemic and non-ischaemic transient ST segment events in long-term ambulatory electrocardiograms is a persisting weakness in present ischaemia detection systems. Traditional ST segment level measurement is not a sufficiently precise technique, owing to its single point of measurement and the severe noise that is often present. We developed a robust, noise-resistant, orthogonal-transformation-based delineation method, which allows tracing the shape of transient ST segment morphology changes over the entire ST segment in terms of diagnostic and morphologic feature-vector time series, and also allows further analysis. For these purposes, we developed a new Legendre Polynomials based Transformation (LPT) of the ST segment. Its basis functions have shapes similar to the typical transient changes of ST segment morphology categories during myocardial ischaemia (level, slope and scooping), thus providing direct insight into the types of time-domain morphology changes through the LPT feature-vector space. We also generated new Karhunen-Loève Transformation (KLT) ST segment basis functions using a robust covariance matrix constructed from the ST segment pattern vectors derived from the Long Term ST Database (LTST DB). As for the delineation of significant transient ischaemic and non-ischaemic ST segment episodes, we present a study on the representation of transient ST segment morphology categories, and an evaluation study on the classification power of the KLT- and LPT-based feature vectors to classify between ischaemic and non-ischaemic ST segment episodes of the LTST DB. Classification accuracy using the KLT and LPT feature vectors was 90% and 82%, respectively, when using the k-Nearest Neighbors (k = 3) classifier and 10-fold cross-validation. New sets of feature-vector time series for both transformations were derived for the records of the LTST DB, which is freely available on the PhysioNet website, and were contributed to the LTST DB. 
The KLT and LPT present new possibilities for human-expert diagnostics, and for automated ischaemia detection. PMID:26863140
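A minimal sketch of the Legendre-polynomial projection underlying an LPT-style feature vector. The ST segment below is synthetic, and the three recovered coefficients correspond to the level, slope and scooping morphology categories named in the abstract:

```python
import numpy as np
from numpy.polynomial import legendre

# Synthetic ST segment: a baseline level, a downslope, and a scooping dip,
# built from the first three Legendre polynomials P0, P1, P2 on [-1, 1].
t = np.linspace(-1, 1, 80)
st = 0.12 - 0.05 * t - 0.08 * (3 * t**2 - 1) / 2

def lpt_features(signal, order=3):
    """Least-squares projection onto the first `order` Legendre polynomials."""
    coeffs = legendre.legfit(t, signal, deg=order - 1)
    return coeffs[:order]

level, slope, scoop = lpt_features(st)
print(round(level, 3), round(slope, 3), round(scoop, 3))  # 0.12 -0.05 -0.08
```

Because the synthetic segment is built exactly from P0-P2, the fit recovers the three coefficients; on real noisy ECG data the projection instead acts as a smoothing feature extractor.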

  12. Simultaneous segmentation of the bone and cartilage surfaces of a knee joint in 3D

    NASA Astrophysics Data System (ADS)

    Yin, Y.; Zhang, X.; Anderson, D. D.; Brown, T. D.; Hofwegen, C. Van; Sonka, M.

    2009-02-01

    We present a novel framework for the simultaneous segmentation of multiple interacting surfaces belonging to multiple mutually interacting objects. The method is a non-trivial extension of our previously reported optimal multi-surface segmentation. Considering an example application of knee-cartilage segmentation, the framework consists of the following main steps: 1) Shape model construction: Building a mean shape for each bone of the joint (femur, tibia, patella) from interactively segmented volumetric datasets. Using the resulting mean-shape model, identification of cartilage, non-cartilage, and transition areas on the mean-shape bone model surfaces. 2) Presegmentation: Employment of an iterative optimal surface detection method to achieve approximate segmentation of individual bone surfaces. 3) Cross-object surface mapping: Detection of inter-bone equidistant separating sheets to help identify corresponding vertex pairs for all interacting surfaces. 4) Multi-object, multi-surface graph construction and final segmentation: Construction of a single multi-bone, multi-surface graph so that two surfaces (bone and cartilage) with zero and non-zero intervening distances can be detected for each bone of the joint, according to whether cartilage is locally absent or present on the bone. To define inter-object relationships, corresponding vertex pairs identified using the separating sheets were interlinked in the graph. The graph optimization algorithm acted on the entire multi-object, multi-surface graph to yield a globally optimal solution. The segmentation framework was tested on 16 MR-DESS knee-joint datasets from the Osteoarthritis Initiative database. The average signed surface positioning error for the 6 detected surfaces ranged from 0.00 to 0.12 mm. When independently initialized, the signed reproducibility error of bone and cartilage segmentation ranged from 0.00 to 0.26 mm. 
The results showed that this framework provides robust, accurate, and reproducible segmentation of the knee joint bone and cartilage surfaces of the femur, tibia, and patella. As a general segmentation tool, the developed framework can be applied to a broad range of multi-object segmentation problems.

  13. Planning the data transition of a VLDB: a case study

    NASA Astrophysics Data System (ADS)

    Finken, Shirley J.

    1997-02-01

    This paper describes the technical and programmatic plans for moving and checking certain data from the IDentification Automated Services (IDAS) system to the new Interstate Identification Index/Federal Bureau of Investigation (III/FBI) Segment database--one of the three components of the Integrated Automated Fingerprint Identification System (IAFIS) being developed by the Federal Bureau of Investigation, Criminal Justice Information Services Division. Transitioning IDAS to III/FBI includes putting the data into an entirely new target database structure (i.e. from IBM VSAM files to ORACLE7 RDBMS tables). Only four IDAS files were transitioned (CCN, CCR, CCA, and CRS), but their total size is estimated at 500 GB. Transitioning of this Very Large Database is planned as two processes.

  14. Impact of Resolution in Multi-Conjugate Adaptive Optics Systems Using Segmented Mirrors (Preprint)

    DTIC Science & Technology

    2009-06-01

    and 100 percent fill factor. The DM1 influence function for each subaperture is modelled as a rectangle. As the apparent resolution of DM1 in the...modelled as a continuous facesheet. To account for the impact of adjoining actuators, an influence function is applied which essentially smoothes out...continuous DMs.20 Lukin’s influence function is closer to that of Jagourel and Gafford,21 or more simplified than the general higher order Gaussian function

  15. EDITSPEC: System Manual. Volume IV. Data Handler.

    DTIC Science & Technology

    1980-11-01

    PRINTS AND ABORTS OR RETURNS WITHOUT SAYING ANYTHING DKFBF FILL BUFFER ROUTINE: BT ENTRY AT IBTAD IS IN D GET BLOCK NBL OF DATA SET NSW IN AND WAIT FOR...READ COMPLETION DKFND ROUTINE TO LOCATE BLOCK NBL SEGMENT NSG OF DATA SET NSW. N SEARCHES BT’S FIRST’THEN READS INTO CORE RETURNS IBTAD=THE BT ENTRY...WHICH IS RETURNED IN NBL . DKMIC ROUTINE TO SEARCH IN CORE BUFFER TABLES FOR ONE WITH DATA SET NOS FILENAME FILNM AND RETURN THE ONE WITH THE MOST

  16. Solar harvesting by a heterostructured cell with built-in variable width quantum wells

    NASA Astrophysics Data System (ADS)

    Brooks, W.; Wang, H.; Mil'shtein, S.

    2018-02-01

    We propose cascaded heterostructured p-i-n solar cells in which the i-region contains a set of Quantum Wells (QWs) of variable thickness to enhance absorption of different photon energies and provide quick relaxation for high-energy carriers. Our p-i-n heterostructure carries top p-type and bottom n-type 11.3 Å thick AlAs layers, which are doped with acceptor and donor densities up to 10^19/cm^3. The intrinsic region is divided into 10 segments, where each segment carries ten QWs of the same width and the width of the QWs in each subsequent segment gradually increases. The top segment consists of 10 QWs with widths of 56.5 Å, followed by a segment with 10 wider QWs with widths of 84.75 Å, and so on with increasing QW widths until the last segment has 10 QWs with widths of 565 Å, bringing the total number of QWs to 100. The QW wall height is controlled by alternating AlAs and GaAs layers, where the AlAs layers are all 11.3 Å thick, throughout the entire intrinsic region. The configuration of variable-width QWs prescribes sets of energy levels suitable for absorption of a wide range of photon energies and will dissipate high electron-hole energies rapidly, reducing the heat load on the solar cell. We expect that the heating of the solar cell will be reduced by 8-11%, enhancing efficiency. The efficiency of the designed solar cell is 43.71%, the Fill Factor is 0.86, the short circuit current density (ISC) will not exceed 338 A/m^2, and the open circuit voltage (VOC) is 1.51 V.
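As a rough illustration of why narrower wells target higher photon energies, the textbook infinite-square-well formula gives the ground-state confinement energy for the quoted widths. The GaAs effective mass and the infinite-barrier assumption are simplifications here; the real device has finite 11.3 Å AlAs barriers, so these numbers are only indicative of the trend:

```python
import numpy as np

H = 6.626e-34          # Planck constant, J*s
M_E = 9.109e-31        # electron rest mass, kg
M_EFF = 0.067 * M_E    # GaAs conduction-band effective mass (assumed)
EV = 1.602e-19         # J per eV

def ground_state_energy_ev(width_angstrom):
    """Infinite square well ground state: E1 = h^2 / (8 m* L^2)."""
    L = width_angstrom * 1e-10
    return H**2 / (8 * M_EFF * L**2) / EV

# Widths from the narrowest to the widest segment described in the text:
# confinement energy drops as 1/L^2, spreading absorption over a wide range.
for w in (56.5, 84.75, 565.0):
    print(f"{w:7.2f} A -> {ground_state_energy_ev(w) * 1000:7.1f} meV")
```

The narrowest well gives a confinement energy on the order of 0.18 eV above the GaAs band edge, while the widest well contributes only a few meV, consistent with the idea of staggering the absorption edges across segments.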

  17. Brain tumor classification using the diffusion tensor image segmentation (D-SEG) technique.

    PubMed

    Jones, Timothy L; Byrnes, Tiernan J; Yang, Guang; Howe, Franklyn A; Bell, B Anthony; Barrick, Thomas R

    2015-03-01

    There is an increasing demand for noninvasive brain tumor biomarkers to guide surgery and subsequent oncotherapy. We present a novel whole-brain diffusion tensor imaging (DTI) segmentation (D-SEG) to delineate tumor volumes of interest (VOIs) for subsequent classification of tumor type. D-SEG uses isotropic (p) and anisotropic (q) components of the diffusion tensor to segment regions with similar diffusion characteristics. DTI scans were acquired from 95 patients with low- and high-grade glioma, metastases, and meningioma and from 29 healthy subjects. D-SEG uses k-means clustering of the 2D (p,q) space to generate segments with different isotropic and anisotropic diffusion characteristics. Our results are visualized using a novel RGB color scheme incorporating p, q and T2-weighted information within each segment. The volumetric contribution of each segment to gray matter, white matter, and cerebrospinal fluid spaces was used to generate healthy tissue D-SEG spectra. Tumor VOIs were extracted using a semiautomated flood-filling technique and D-SEG spectra were computed within the VOI. Classification of tumor type using D-SEG spectra was performed using support vector machines. D-SEG was computationally fast and stable and delineated regions of healthy tissue from tumor and edema. D-SEG spectra were consistent for each tumor type, with constituent diffusion characteristics potentially reflecting regional differences in tissue microstructure. Support vector machines classified tumor type with an overall accuracy of 94.7%, providing better classification than previously reported. D-SEG presents a user-friendly, semiautomated biomarker that may provide a valuable adjunct in noninvasive brain tumor diagnosis and treatment planning. © The Author(s) 2014. Published by Oxford University Press on behalf of the Society for Neuro-Oncology.
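The k-means clustering of the 2D (p, q) space can be sketched with a plain NumPy implementation on synthetic diffusion features. This is not the authors' pipeline, and the spread-out initialization is a simplification (a real pipeline would use k-means++ or multiple restarts):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic (p, q) diffusion features for three tissue-like clusters.
centers = np.array([[0.7, 0.1], [1.2, 0.5], [3.0, 0.2]])
pq = np.vstack([c + 0.05 * rng.normal(size=(200, 2)) for c in centers])

def kmeans(data, k, iters=50):
    """Plain k-means: alternate nearest-centroid assignment and mean update."""
    centroids = data[:: max(1, len(data) // k)][:k].copy()
    for _ in range(iters):
        dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array([
            data[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
    return labels, centroids

labels, centroids = kmeans(pq, k=3)
print(np.sort(centroids[:, 0]).round(1))  # p-coordinates of the recovered centres
```

Each cluster of (p, q) values plays the role of one D-SEG segment; in the paper the segment memberships are then summarized into per-tissue spectra.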

  18. Experimental Investigation of Heat Pipe Startup Under Reflux Mode

    NASA Technical Reports Server (NTRS)

    Ku, Jentung

    2018-01-01

    In the absence of body forces such as gravity, a heat pipe will start as soon as its evaporator temperature reaches the saturation temperature. If the heat pipe operates under a reflux mode in ground testing, the liquid puddle will fill the entire cross sectional area of the evaporator. Under this condition, the heat pipe may not start when the evaporator temperature reaches the saturation temperature. Instead, a superheat is required in order for the liquid to vaporize through nucleate boiling. The amount of superheat depends on several factors such as the roughness of the heat pipe internal surface and the gravity head. This paper describes an experimental investigation of the effect of gravity pressure head on the startup of a heat pipe under reflux mode. In this study, a heat pipe with internal axial grooves was placed in a vertical position with different tilt angles relative to the horizontal plane. Heat was applied to the evaporator at the bottom and cooling was provided to the condenser at the top. The liquid-flooded evaporator was divided into seven segments along the axial direction, and an electrical heater was attached to each evaporator segment. Heat was applied to individual heaters in various combinations and sequences. Other test variables included the condenser sink temperature and tilt angle. Test results show that as long as an individual evaporator segment was flooded with liquid initially, a superheat was required to vaporize the liquid in that segment. The amount of superheat required for liquid vaporization was a function of gravity pressure head imposed on that evaporator segment and the initial temperature of the heat pipe. The most efficient and effective way to start the heat pipe was to apply a heat load with a high heat flux to the lowest segment of the evaporator.

  19. Prognostic validation of a 17-segment score derived from a 20-segment score for myocardial perfusion SPECT interpretation.

    PubMed

    Berman, Daniel S; Abidov, Aiden; Kang, Xingping; Hayes, Sean W; Friedman, John D; Sciammarella, Maria G; Cohen, Ishac; Gerlach, James; Waechter, Parker B; Germano, Guido; Hachamovitch, Rory

    2004-01-01

    Recently, a 17-segment model of the left ventricle has been recommended as an optimally weighted approach for interpreting myocardial perfusion single photon emission computed tomography (SPECT). Methods to convert databases from previous 20- to new 17-segment data and criteria for abnormality for the 17-segment scores are needed. Initially, for derivation of the conversion algorithm, 65 patients were studied (algorithm population) (pilot group, n = 28; validation group, n = 37). Three conversion algorithms were derived: algorithm 1, which used mid, distal, and apical scores; algorithm 2, which used distal and apical scores alone; and algorithm 3, which used maximal scores of the distal septal, lateral, and apical segments in the 20-segment model for 3 corresponding segments of the 17-segment model. The prognosis population comprised 16,020 consecutive patients (mean age, 65 +/- 12 years; 41% women) who had exercise or vasodilator stress technetium 99m sestamibi myocardial perfusion SPECT and were followed up for 2.1 +/- 0.8 years. In this population, 17-segment scores were derived from 20-segment scores by use of algorithm 2, which demonstrated the best agreement with expert 17-segment reading in the algorithm population. The prognostic value of the 20- and 17-segment scores was compared by converting the respective summed scores into percent myocardium abnormal. Conversion algorithm 2 was found to be highly concordant with expert visual analysis by the 17-segment model (r = 0.982; kappa = 0.866) in the algorithm population. In the prognosis population, 456 cardiac deaths occurred during follow-up. When the conversion algorithm was applied, extent and severity of perfusion defects were nearly identical by 20- and derived 17-segment scores. The receiver operating characteristic curve areas by 20- and 17-segment perfusion scores were identical for predicting cardiac death (both 0.77 +/- 0.02, P = not significant). 
The optimal prognostic cutoff value for either 20- or derived 17-segment models was confirmed to be 5% myocardium abnormal, corresponding to a summed stress score greater than 3. Of note, the 17-segment model demonstrated a trend toward fewer mildly abnormal scans and more normal and severely abnormal scans. An algorithm for conversion of 20-segment perfusion scores to 17-segment scores has been developed that is highly concordant with expert visual analysis by the 17-segment model and provides nearly identical prognostic information. This conversion model may provide a mechanism for comparison of studies analyzed by the 17-segment system with previous studies analyzed by the 20-segment approach.
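A small illustration of the "percent myocardium abnormal" normalization used to compare the two models. It assumes the common convention of dividing the summed score by the maximum possible score of 4 points per segment; the paper's exact conversion algorithm is not reproduced here:

```python
def percent_myocardium_abnormal(summed_score, n_segments=17, max_per_segment=4):
    """Convert a summed perfusion score to % myocardium abnormal.

    Assumes the common normalization by the maximum possible score
    (4 points per segment) -- an illustration, not the paper's code.
    """
    return 100.0 * summed_score / (n_segments * max_per_segment)

# A summed stress score of 4 on the 17-segment model exceeds the 5% cutoff,
# while a score of 3 falls below it, matching the cutoff quoted in the text.
print(round(percent_myocardium_abnormal(4), 1))  # 5.9
print(round(percent_myocardium_abnormal(3), 1))  # 4.4
```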

  20. Searching fee and non-fee toxicology information resources: an overview of selected databases.

    PubMed

    Wright, L L

    2001-01-12

    Toxicology profiles organize information by broad subjects, the first of which affirms the identity of the agent studied. Studies here show that two non-fee databases (ChemFinder and ChemIDplus) verify the identity of compounds with high efficiency (63% and 73%, respectively), with the fee-based Chemical Abstracts Registry file serving well to fill data gaps (100%). Continued searching proceeds using knowledge of structure, scope and content to select databases. Valuable sources for information are factual databases that collect data and facts in special subject areas, organized in formats available for analysis or use. Some sources representative of factual files are RTECS, CCRIS, HSDB, GENE-TOX and IRIS. Numerous factual databases offer a wealth of reliable information; however, exhaustive searches probe information published in journal articles and/or technical reports, with records residing in bibliographic databases such as BIOSIS, EMBASE, MEDLINE, TOXLINE and Web of Science. Listed with descriptions are numerous factual and bibliographic databases supplied by 11 producers. Given the multitude of options and resources, it is often necessary to seek service desk assistance. Questions were posed by telephone and e-mail to service desks at DIALOG, ISI, MEDLARS, Micromedex and STN International. Results of the survey are reported.

  1. Anatomy-guided joint tissue segmentation and topological correction for 6-month infant brain MRI with risk of autism.

    PubMed

    Wang, Li; Li, Gang; Adeli, Ehsan; Liu, Mingxia; Wu, Zhengwang; Meng, Yu; Lin, Weili; Shen, Dinggang

    2018-06-01

    Tissue segmentation of infant brain MRIs with risk of autism is critically important for characterizing early brain development and identifying biomarkers. However, it is challenging due to low tissue contrast caused by inherent ongoing myelination and maturation. In particular, at around 6 months of age, the voxel intensities in both gray matter and white matter are within similar ranges, thus leading to the lowest image contrast in the first postnatal year. Previous studies typically employed intensity images and tentatively estimated tissue probabilities to train a sequence of classifiers for tissue segmentation. However, the important prior knowledge of brain anatomy is largely ignored during the segmentation. Consequently, the segmentation accuracy is still limited and topological errors frequently exist, which will significantly degrade the performance of subsequent analyses. Although topological errors could be partially handled by retrospective topological correction methods, their results may still be anatomically incorrect. To address these challenges, in this article, we propose an anatomy-guided joint tissue segmentation and topological correction framework for isointense infant MRI. Particularly, we adopt a signed distance map with respect to the outer cortical surface as anatomical prior knowledge, and incorporate such prior information into the proposed framework to guide segmentation in ambiguous regions. Experimental results on subjects acquired from the National Database for Autism Research demonstrate the method's effectiveness in correcting topological errors, as well as some robustness to motion. Comparisons with the state-of-the-art methods further demonstrate the advantages of the proposed method in terms of both segmentation accuracy and topological correctness. © 2018 Wiley Periodicals, Inc.
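A signed distance map with respect to a surface can be computed from a binary mask with a Euclidean distance transform. The toy sketch below uses SciPy on a small synthetic volume; the sign convention (negative inside, positive outside) is an assumption, and the paper defines its map with respect to the outer cortical surface rather than a sphere:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy binary "brain" mask on a small 3-D grid: a sphere of radius 6.
grid = np.indices((24, 24, 24))
mask = ((grid - 11.5) ** 2).sum(axis=0) <= 6.0 ** 2

# Signed distance to the mask boundary: distance_transform_edt measures the
# distance from each True voxel to the nearest False voxel, so subtracting
# the inside transform from the outside transform gives the signed map.
signed = distance_transform_edt(~mask) - distance_transform_edt(mask)

print(signed[11, 11, 11] < 0, signed[0, 0, 0] > 0)  # inside negative, outside positive
```

In the framework above, this per-voxel signed distance acts as the anatomical prior that disambiguates voxels whose intensities alone are uninformative.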

  2. [Conserved motifs in voltage sensing proteins].

    PubMed

    Wang, Chang-He; Xie, Zhen-Li; Lv, Jian-Wei; Yu, Zhi-Dan; Shao, Shu-Li

    2012-08-25

    This paper aimed to study the conserved motifs of voltage sensing proteins (VSPs) and to establish a voltage sensing model. All VSPs were collected from the Uniprot database using a comprehensive keyword search followed by manual curation, and the results indicated that there are only two types of known VSPs: voltage gated ion channels and voltage dependent phosphatases. All the VSPs have a common domain of four helical transmembrane segments (TMS, S1-S4), which constitute the voltage sensing module of the VSPs. The S1 segment was shown to be responsible for membrane targeting and insertion of these proteins, while the S2-S4 segments, which sense membrane potential, determine the proteins' functional properties. Conserved motifs/residues and their functional significance in each TMS were identified using profile-to-profile sequence alignments. Conserved motifs in these four segments are strikingly similar for all VSPs; in particular, the conserved motif [RK]-X(2)-R-X(2)-R-X(2)-[RK] was present in all the S4 segments, with positively charged arginine (R) alternating with two hydrophobic or uncharged residues. Movement of these arginines across the membrane electric field is the core mechanism by which the VSPs detect changes in membrane potential. The negatively charged aspartate (D) in the S3 segment is universally conserved in all the VSPs, suggesting that this aspartate residue may be involved in the voltage sensing properties of VSPs as well as in electrostatic interactions with the positively charged residues of the S4 segment, which may enhance the thermodynamic stability of the S4 segments in the plasma membrane.
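The PROSITE-style motif [RK]-X(2)-R-X(2)-R-X(2)-[RK] translates directly into a regular expression. The sequence below is a synthetic S4-like stretch constructed for illustration, not a curated database entry:

```python
import re

# [RK]-X(2)-R-X(2)-R-X(2)-[RK]: a basic residue, two arbitrary residues, R,
# two arbitrary, R, two arbitrary, then a basic residue again (10 residues).
S4_MOTIF = re.compile(r"[RK].{2}R.{2}R.{2}[RK]")

# Synthetic S4-like stretch with a basic residue every third position.
seq = "AILSVLRVIRLVRVFRIFKLSRHS"
m = S4_MOTIF.search(seq)
print(m.group(0) if m else "no match")  # RVIRLVRVFR
```

The alternation of arginines with pairs of hydrophobic residues is exactly what the regex encodes; scanning full-length sequences this way is a simple first pass before profile-based alignment.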

  3. Improved segmentation of cerebellar structures in children

    PubMed Central

    Narayanan, Priya Lakshmi; Boonazier, Natalie; Warton, Christopher; Molteno, Christopher D; Joseph, Jesuchristopher; Jacobson, Joseph L; Jacobson, Sandra W; Zöllei, Lilla; Meintjes, Ernesta M

    2016-01-01

    Background: Consistent localization of cerebellar cortex in a standard coordinate system is important for functional studies and detection of anatomical alterations in studies of morphometry. To date, no pediatric cerebellar atlas is available. New method: The probabilistic Cape Town Pediatric Cerebellar Atlas (CAPCA18) was constructed in the age-appropriate National Institute of Health Pediatric Database asymmetric template space using manual tracings of 16 cerebellar compartments in 18 healthy children (9-13 years) from Cape Town, South Africa. The individual atlases of the training subjects were also used to implement multi-atlas label fusion using multi-atlas majority voting (MAMV) and multi-atlas generative model (MAGM) approaches. Segmentation accuracy in 14 test subjects was compared for each method to 'gold standard' manual tracings. Results: Spatial overlap between manual tracings and CAPCA18 automated segmentation was 73% or higher for all lobules in both hemispheres, except VIIb and X. Automated segmentation using MAGM yielded the best segmentation accuracy over all lobules (mean Dice Similarity Coefficient 0.76; range 0.55-0.91). Comparison with existing methods: In all lobules, spatial overlap of CAPCA18 segmentations with manual tracings was similar to or higher than that obtained with SUIT (spatially unbiased infra-tentorial template), providing additional evidence of the benefits of an age-appropriate atlas. MAGM segmentation accuracy was comparable to values reported recently by Park et al. (2014) in adults (across all lobules mean DSC = 0.73, range 0.40-0.89). Conclusions: CAPCA18 and the associated multi-atlases of the training subjects yield improved segmentation of cerebellar structures in children. PMID:26743973
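Multi-atlas majority voting (MAMV) label fusion reduces, per voxel, to taking the most frequent label across the registered atlases. A synthetic sketch (labels and error rates are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Labels proposed by 5 registered atlases for 1000 voxels (4 possible labels).
true_labels = rng.integers(0, 4, size=1000)
atlases = np.stack([
    np.where(rng.random(1000) < 0.8, true_labels, rng.integers(0, 4, size=1000))
    for _ in range(5)
])                                            # shape (n_atlases, n_voxels)

def majority_vote(label_stack, n_labels=4):
    """Per-voxel majority vote across atlases (MAMV-style label fusion)."""
    counts = np.stack([(label_stack == c).sum(axis=0) for c in range(n_labels)])
    return counts.argmax(axis=0)

fused = majority_vote(atlases)
single = (atlases[0] == true_labels).mean()
print((fused == true_labels).mean() >= single)  # fusion beats a single atlas
```

The generative-model variant (MAGM) replaces this hard vote with a probabilistic weighting of atlases, which is why it edges out MAMV in the reported accuracies.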

  4. Constructing Benchmark Databases and Protocols for Medical Image Analysis: Diabetic Retinopathy

    PubMed Central

    Kauppi, Tomi; Kämäräinen, Joni-Kristian; Kalesnykiene, Valentina; Sorri, Iiris; Uusitalo, Hannu; Kälviäinen, Heikki

    2013-01-01

    We address the performance evaluation practices for developing medical image analysis methods, in particular, how to establish and share databases of medical images with verified ground truth and solid evaluation protocols. Such databases support the development of better algorithms, execution of profound method comparisons, and, consequently, technology transfer from research laboratories to clinical practice. For this purpose, we propose a framework consisting of reusable methods and tools for the laborious task of constructing a benchmark database. We provide a software tool for medical image annotation helping to collect class label, spatial span, and expert's confidence on lesions and a method to appropriately combine the manual segmentations from multiple experts. The tool and all necessary functionality for method evaluation are provided as public software packages. As a case study, we utilized the framework and tools to establish the DiaRetDB1 V2.1 database for benchmarking diabetic retinopathy detection algorithms. The database contains a set of retinal images, ground truth based on information from multiple experts, and a baseline algorithm for the detection of retinopathy lesions. PMID:23956787

  5. A variational approach to liver segmentation using statistics from multiple sources

    NASA Astrophysics Data System (ADS)

    Zheng, Shenhai; Fang, Bin; Li, Laquan; Gao, Mingqi; Wang, Yi

    2018-01-01

    Medical image segmentation plays an important role in digital medical research, and therapy planning and delivery. However, the presence of noise and low contrast renders automatic liver segmentation an extremely challenging task. In this study, we focus on a variational approach to liver segmentation in computed tomography scan volumes in a semiautomatic and slice-by-slice manner. In this method, one slice is selected and its connected component liver region is determined manually to initialize the subsequent automatic segmentation process. From this guiding slice, we execute the proposed method downward to the last slice and upward to the first one, respectively. A segmentation energy function is proposed by combining the statistical shape prior, global Gaussian intensity analysis, and an enforced local statistical feature under the level set framework. During segmentation, the shape of the liver is estimated by minimization of this function. The improved Chan-Vese model is used to refine the shape to capture the long and narrow regions of the liver. The proposed method was verified on two independent public databases, the 3D-IRCADb and the SLIVER07. Among all the tested methods, our method yielded the best volumetric overlap error (VOE) of 6.5 +/- 2.8%, the best root mean square symmetric surface distance (RMSD) of 2.1 +/- 0.8 mm, and the best maximum symmetric surface distance (MSD) of 18.9 +/- 8.3 mm on the 3D-IRCADb dataset, and the best average symmetric surface distance (ASD) of 0.8 +/- 0.5 mm and the best RMSD of 1.5 +/- 1.1 mm on the SLIVER07 dataset. The results of the quantitative comparison show that the proposed liver segmentation method achieves competitive segmentation performance with state-of-the-art techniques.
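For reference, the Chan-Vese refinement mentioned above minimizes a region-based level-set energy. The classical form is shown below; the authors augment it with shape and local statistical terms, so this is the standard functional rather than their exact energy:

```latex
E(c_1, c_2, \phi) = \mu \int_{\Omega} \delta(\phi)\,\lvert\nabla\phi\rvert\,dx
  + \lambda_1 \int_{\Omega} \lvert I(x) - c_1\rvert^2\, H(\phi)\,dx
  + \lambda_2 \int_{\Omega} \lvert I(x) - c_2\rvert^2\,\bigl(1 - H(\phi)\bigr)\,dx
```

where phi is the level-set function, H the Heaviside function, delta its derivative, and c1, c2 the mean intensities inside and outside the evolving contour; minimizing over c1, c2 and phi drives the contour toward a two-region partition of the image.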

  6. Automated segmentation of 3D anatomical structures on CT images by using a deep convolutional network based on end-to-end learning approach

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Takayama, Ryosuke; Wang, Song; Zhou, Xinxin; Hara, Takeshi; Fujita, Hiroshi

    2017-02-01

    We have proposed an end-to-end learning approach that trains a deep convolutional neural network (CNN) for automatic CT image segmentation, accomplishing a voxel-wise multi-class classification that directly maps each voxel of a 3D CT image to an anatomical label automatically. The novelties of our proposed method were (1) transforming the segmentation of anatomical structures on 3D CT images into a majority vote over the results of 2D semantic image segmentation on a number of 2D slices from different image orientations, and (2) using "convolution" and "deconvolution" networks to achieve the conventional "coarse recognition" and "fine extraction" functions, which were integrated into a compact all-in-one deep CNN for CT image segmentation. The advantage compared to previous works was its capability to accomplish real-time image segmentation on 2D slices of arbitrary CT-scan range (e.g. body, chest, abdomen) and to produce correspondingly sized output. In this paper, we propose an improvement of our approach by adding an organ localization module to limit the CT image range for training and testing the deep CNNs. A database consisting of 240 3D CT scans and human-annotated ground truth was used for training (228 cases) and testing (the remaining 12 cases). We applied the improved method to segment the pancreas and left kidney regions, respectively. The preliminary results showed that the segmentation accuracy improved significantly (the Jaccard index increased by 34% for the pancreas and by 8% for the kidney compared with our previous results). The effectiveness and usefulness of the proposed improvement for CT image segmentation were confirmed.
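The per-voxel majority vote across 2D segmentations from different orientations can be sketched as follows, with noisy stand-ins for the three 2D CNN outputs (the volumes, labels and error rates are synthetic):

```python
import numpy as np

rng = np.random.default_rng(3)
volume_labels = rng.integers(0, 3, size=(8, 8, 8))   # ground-truth toy labels

# Stand-ins for 2-D CNN predictions along axial / coronal / sagittal slices:
# each "network" returns the true labels with 15% random corruption.
def noisy_prediction(vol):
    noise = rng.random(vol.shape) < 0.15
    return np.where(noise, rng.integers(0, 3, size=vol.shape), vol)

pred_axial = noisy_prediction(volume_labels)
pred_coronal = noisy_prediction(volume_labels)
pred_sagittal = noisy_prediction(volume_labels)

# Per-voxel majority vote across the three orientations; a three-way tie
# falls back to the lowest label index (argmax convention).
stack = np.stack([pred_axial, pred_coronal, pred_sagittal])
counts = np.stack([(stack == c).sum(axis=0) for c in range(3)])
fused = counts.argmax(axis=0)

print((fused == volume_labels).mean() >= (pred_axial == volume_labels).mean())
```

Because the three orientations make largely independent errors, the fused labels are more accurate than any single-orientation prediction, which is the premise of step (1) above.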

  7. Towards automatic music transcription: note extraction based on independent subspace analysis

    NASA Astrophysics Data System (ADS)

    Wellhausen, Jens; Hoynck, Michael

    2005-01-01

    Due to the increasing amount of music available electronically, the need for automatic search, retrieval and classification systems for music becomes more and more important. In this paper, an algorithm for automatic transcription of polyphonic piano music into MIDI data is presented, which is a very interesting basis for database applications, music analysis and music classification. The first part of the algorithm performs a note-accurate temporal audio segmentation. In the second part, the resulting segments are examined using Independent Subspace Analysis to extract sounding notes. Finally, the results are used to build a MIDI file as a new representation of the piece of music being examined.

  8. Towards automatic music transcription: note extraction based on independent subspace analysis

    NASA Astrophysics Data System (ADS)

    Wellhausen, Jens; Höynck, Michael

    2004-12-01

    Due to the increasing amount of music available electronically, the need for automatic search, retrieval and classification systems for music is becoming more and more important. In this paper, an algorithm for automatic transcription of polyphonic piano music into MIDI data is presented, which is an interesting basis for database applications, music analysis and music classification. The first part of the algorithm performs a note-accurate temporal audio segmentation. In the second part, the resulting segments are examined using Independent Subspace Analysis to extract the sounding notes. Finally, the results are used to build a MIDI file as a new representation of the piece of music under examination.

  9. Effect of the connection gap on the heat-load characteristics of a liquid nitrogen bayonet coupling

    NASA Astrophysics Data System (ADS)

    Tsai, H. H.; Liu, C. P.; Hsiao, F. Z.; Huang, T. Y.; Li, H. C.; Chiou, W. S.; Chang, S. H.; Lin, T. F.

    2012-12-01

    A transfer system for liquid nitrogen (LN2), installed at the National Synchrotron Radiation Research Center (NSRRC) to provide the LN2 required by superconducting equipment and experimental stations, comprises a 160 m transfer line with a pipeline of 25 mm inner diameter, a 250 L phase separator and an automatic filling station. The end uses include two cryogenic systems, one Superconducting Radio Frequency (SRF) cavity, five superconducting magnets, monochromators for the beam line and the filling of mobile Dewars. The transfer line is segmented and connected with bayonet couplings. The aim of this work was to investigate, by numerical simulation, the effects of the gap thickness of the bayonet assembly and the thickness of the vacuum insulation on the heat load. A numerical correlation was created that has become a basis for minimizing the heat load in future designs of bayonet couplings.

  10. Complex evolutionary footprints revealed in an analysis of reused protein segments of diverse lengths

    PubMed Central

    Nepomnyachiy, Sergey; Ben-Tal, Nir; Kolodny, Rachel

    2017-01-01

    Proteins share similar segments with one another. Such “reused parts”—which have been successfully incorporated into other proteins—are likely to offer an evolutionary advantage over de novo evolved segments, as most of the latter will not even have the capacity to fold. To systematically explore the evolutionary traces of segment “reuse” across proteins, we developed an automated methodology that identifies reused segments from protein alignments. We search for “themes”—segments of at least 35 residues of similar sequence and structure—reused within representative sets of 15,016 domains [Evolutionary Classification of Protein Domains (ECOD) database] or 20,398 chains [Protein Data Bank (PDB)]. We observe that theme reuse is highly prevalent and that reuse is more extensive when the length threshold for identifying a theme is lower. Structural domains, the best characterized form of reuse in proteins, are just one of many complex and intertwined evolutionary traces. Others include long themes shared among a few proteins, which encompass and overlap with shorter themes that recur in numerous proteins. The observed complexity is consistent with evolution by duplication and divergence, and some of the themes might include descendants of ancestral segments. The observed recursive footprints, where the same amino acid can simultaneously participate in several intertwined themes, could be a useful concept for protein design. Data are available at http://trachel-srv.cs.haifa.ac.il/rachel/ppi/themes/. PMID:29078314

  11. Early detection of lung cancer from CT images: nodule segmentation and classification using deep learning

    NASA Astrophysics Data System (ADS)

    Sharma, Manu; Bhatt, Jignesh S.; Joshi, Manjunath V.

    2018-04-01

    Lung cancer is one of the most common causes of cancer deaths worldwide. It has a low survival rate, mainly due to late diagnosis. With the hardware advancements in computed tomography (CT) technology, it is now possible to capture high-resolution images of the lung region. However, this needs to be augmented by efficient algorithms that detect lung cancer at earlier stages from the acquired CT images. To this end, we propose a two-step algorithm for early detection of lung cancer. Given the CT image, we first extract a patch centered at the nodule location and segment the lung nodule region. We propose to use the Otsu method followed by morphological operations for the segmentation. This step enables accurate segmentation due to the use of a data-driven threshold. Unlike other methods, we perform the segmentation without using the complete contour information of the nodule. In the second step, a deep convolutional neural network (CNN) is used for better classification (malignant or benign) of the nodule present in the segmented patch. Accurate segmentation of even a tiny nodule followed by classification with the deep CNN enables the early detection of lung cancer. Experiments were conducted using 6306 CT images of the LIDC-IDRI database. We achieved a test accuracy of 84.13%, with a sensitivity and specificity of 91.69% and 73.16%, respectively, clearly outperforming state-of-the-art algorithms.
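    The data-driven threshold step can be illustrated with a self-contained Otsu implementation (a plain-NumPy sketch, not the authors' pipeline; a real system would follow it with morphological opening/closing of the binary mask):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    hist = hist.astype(float)
    w_bg = np.cumsum(hist)                       # class weight below each cut
    w_fg = np.cumsum(hist[::-1])[::-1]           # class weight above each cut
    m_bg = np.cumsum(hist * centers) / np.maximum(w_bg, 1e-12)
    m_fg = (np.cumsum((hist * centers)[::-1]) /
            np.maximum(np.cumsum(hist[::-1]), 1e-12))[::-1]
    # Between-class variance for every possible cut position.
    var_between = w_bg[:-1] * w_fg[1:] * (m_bg[:-1] - m_fg[1:]) ** 2
    return centers[np.argmax(var_between)]
```

    Because the threshold is recomputed from each patch's histogram, no fixed intensity cutoff has to be assumed across scans.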

  12. Turtle Graphics of Morphic Sequences

    NASA Astrophysics Data System (ADS)

    Zantema, Hans

    2016-02-01

    The simplest infinite sequences that are not ultimately periodic are pure morphic sequences: fixed points of particular morphisms mapping single symbols to strings of symbols. A basic way to visualize a sequence is by a turtle curve: fix an angle for every alphabet symbol, and then consecutively, for each sequence element, draw a unit segment and turn the drawing direction by the corresponding angle. This paper investigates turtle curves of pure morphic sequences. In particular, criteria are given for a turtle curve being finite (consisting of finitely many segments), and for being fractal or self-similar, that is, containing an up-scaled copy of itself. Space-filling turtle curves are also considered, along with a turtle curve that is dense in the plane. As a particular result, we give an exact relationship between the Koch curve and a turtle curve for the Thue-Morse sequence, where until now only approximations were known for such a result.
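    The turtle-curve construction can be made concrete with the Thue-Morse sequence (a small sketch; the angle assignment below is illustrative, not the paper's exact choice):

```python
import math

def thue_morse(n):
    """First n symbols of the Thue-Morse sequence, the fixed point of
    the morphism 0 -> 01, 1 -> 10."""
    seq = [0]
    while len(seq) < n:
        seq = seq + [1 - s for s in seq]   # append the bit-flipped prefix
    return seq[:n]

def turtle_curve(symbols, angle_deg):
    """Turtle curve vertices: for each symbol draw a unit segment, then
    turn by the angle assigned to that symbol."""
    x, y, heading = 0.0, 0.0, 0.0
    pts = [(x, y)]
    for s in symbols:
        x += math.cos(heading)
        y += math.sin(heading)
        pts.append((x, y))
        heading += math.radians(angle_deg[s])
    return pts
```

    For example, `turtle_curve(thue_morse(64), {0: 60, 1: -120})` traces one hypothetical angle assignment; the paper's criteria concern which assignments yield finite, fractal, or space-filling curves.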

  13. Management of segmental bony defects: the role of osteoconductive orthobiologics.

    PubMed

    McKee, Michael D

    2006-01-01

    Our knowledge about, and the availability of, orthobiologic materials has increased exponentially in the last decade. Although previously confined to the experimental or animal-model realm, several orthobiologics have been shown to be useful in a variety of clinical situations. As surgical techniques in vascular anastomosis, soft-tissue coverage, limb salvage, and fracture stabilization have improved, the size and frequency of bony defects (commensurate with the severity of the initial injury) have increased, as well. Because all methods of managing segmental bony defects have drawbacks, a need remains for a readily available, void-filling, inexpensive bone substitute. Such a bone substitute fulfills a permissive role in allowing new bone to grow into a given defect. Such potential osteoconductive materials include ceramics, calcium sulfate or calcium phosphate compounds, hydroxyapatite, deproteinized bone, corals, and recently developed polymers. Some materials that have osteoinductive properties, such as demineralized bone matrix, also display prominent osteoconductive properties.

  14. Two new stygobiotic species of Elaphoidella (Crustacea: Copepoda: Harpacticoida) with comments on geographical distribution and ecology of harpacticoids from caves in Thailand.

    PubMed

    Watiroyram, Santi; Brancelj, Anton; Sanoamuang, La-Orsri

    2015-02-16

    Elaphoidella thailandensis sp. nov. and E. jaesornensis sp. nov., collected during an investigation of cave-dwelling copepod fauna in the northern part of Thailand, are described and figured herein. The new species were collected from pools filled by percolating water from the unsaturated zone of a karstic aquifer in Phitsanulok and Lampang Provinces, respectively. Elaphoidella thailandensis, from Tham Khun cave, is distinguished from its congeners by the two-segmented endopod of pediger 1, the absence of endopod on pediger 4, and the setal formula 4, 5, 6 for the distal exopodal segment of pedigers 2-4. Elaphoidella jaesornensis, from Tham Phar Ngam cave, is distinguished from its most closely related species, E. namnaoensis Brancelj, Watiroyram & Sanoamuang, 2010, by the armature formula of the endopod of pedigers 2-5. The geographical distribution and ecology of Harpacticoida from Thai caves is also presented.

  15. A state-of-the-art review on segmentation algorithms in intravascular ultrasound (IVUS) images.

    PubMed

    Katouzian, Amin; Angelini, Elsa D; Carlier, Stéphane G; Suri, Jasjit S; Navab, Nassir; Laine, Andrew F

    2012-09-01

    Over the past two decades, intravascular ultrasound (IVUS) image segmentation has remained a challenge for researchers while the use of this imaging modality is rapidly growing in catheterization procedures and in research studies. IVUS provides cross-sectional grayscale images of the arterial wall and the extent of atherosclerotic plaques with high spatial resolution in real time. In this paper, we review recently developed image processing methods for the detection of media-adventitia and luminal borders in IVUS images acquired with different transducers operating at frequencies ranging from 20 to 45 MHz. We discuss methodological challenges, lack of diversity in reported datasets, and weaknesses of quantification metrics that make IVUS segmentation still an open problem despite all efforts. In conclusion, we call for a common reference database, validation metrics, and ground-truth definition with which new and existing algorithms could be benchmarked.

  16. Random forest feature selection approach for image segmentation

    NASA Astrophysics Data System (ADS)

    Lefkovits, László; Lefkovits, Szidónia; Emerich, Simina; Vaida, Mircea Florin

    2017-03-01

    In the field of image segmentation, discriminative models have shown promising performance. Generally, every such model begins with the extraction of numerous features from annotated images. Most authors create their discriminative model using many features, without any selection criteria. A more reliable model can be built with a framework that selects the variables important from the point of view of classification and eliminates the unimportant ones. In this article we present a framework for feature selection and data dimensionality reduction. The methodology is built around the random forest (RF) algorithm and its variable importance evaluation. In order to deal with datasets so large as to be practically unmanageable, we propose an algorithm based on RF that reduces the dimension of the database by eliminating irrelevant features. Furthermore, this framework is applied to optimize our discriminative model for brain tumor segmentation.
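    The variable-importance selection idea can be sketched with scikit-learn's random forest (`RandomForestClassifier` and its `feature_importances_` attribute are standard scikit-learn API; the toy dataset and the top-k retention rule are assumptions for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy data: only the first two of six features carry class information.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
importance = rf.feature_importances_        # impurity-based variable importance
keep = np.argsort(importance)[::-1][:2]     # retain the top-ranked features
X_reduced = X[:, np.sort(keep)]             # dimensionality-reduced dataset
```

    Irrelevant features receive near-zero importance and are dropped before the discriminative model is trained, which is the dimensionality-reduction step the abstract describes.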

  17. Vessel Segmentation in Retinal Images Using Multi-scale Line Operator and K-Means Clustering.

    PubMed

    Saffarzadeh, Vahid Mohammadi; Osareh, Alireza; Shadgar, Bita

    2014-04-01

    Detecting blood vessels is a vital task in retinal image analysis. The task is more challenging in the presence of bright and dark lesions in retinal images. Here, a method is proposed to detect vessels in both normal and abnormal retinal fundus images based on their linear features. First, the negative impact of bright lesions is reduced by using K-means segmentation in a perceptual space. Then, a multi-scale line operator is utilized to detect vessels while ignoring some of the dark lesions, which have intensity structures different from the line-shaped vessels in the retina. The proposed algorithm is tested on the two publicly available STARE and DRIVE databases. The performance of the method is measured by calculating the area under the receiver operating characteristic curve and the segmentation accuracy. The proposed method achieves localization accuracies of 0.9483 and 0.9387 on STARE and DRIVE, respectively.

  18. Iris Segmentation and Normalization Algorithm Based on Zigzag Collarette

    NASA Astrophysics Data System (ADS)

    Rizky Faundra, M.; Ratna Sulistyaningrum, Dwi

    2017-01-01

    In this paper, we propose an iris segmentation and normalization algorithm based on the zigzag collarette. First, iris images are processed using Canny edge detection to detect the pupil edge; the center and radius of the pupil are then found with the circular Hough transform. Next, the important part of the iris is isolated based on the zigzag collarette area. Finally, the Daugman rubber sheet model is applied to obtain a fixed-dimension (normalized) iris by transforming Cartesian into polar coordinates, and a thresholding technique removes eyelids and eyelashes. The experiments are conducted on grayscale eye images from the Chinese Academy of Sciences Institute of Automation (CASIA) iris database, which is reliable and widely used in iris biometrics research. The results show that a threshold level of 0.3 gives better accuracy than others, so the present algorithm can be used for zigzag-collarette segmentation and normalization with an accuracy of 98.88%.
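    The rubber-sheet step, unwrapping the annulus between the pupil and iris boundaries into a fixed-size rectangle, can be sketched as follows (nearest-neighbour sampling for brevity; in the algorithm above the circle parameters would come from the Hough step, here they are assumed inputs):

```python
import numpy as np

def rubber_sheet(img, cx, cy, r_pupil, r_iris, n_radial=64, n_angular=256):
    """Daugman rubber-sheet model: sample along rays from the pupil boundary
    to the iris boundary (Cartesian -> polar), yielding a fixed-size array."""
    thetas = np.linspace(0, 2 * np.pi, n_angular, endpoint=False)
    out = np.zeros((n_radial, n_angular), dtype=img.dtype)
    for i, r in enumerate(np.linspace(0, 1, n_radial)):
        rho = r_pupil + r * (r_iris - r_pupil)   # interpolate between boundaries
        xs = np.clip(np.round(cx + rho * np.cos(thetas)).astype(int), 0, img.shape[1] - 1)
        ys = np.clip(np.round(cy + rho * np.sin(thetas)).astype(int), 0, img.shape[0] - 1)
        out[i] = img[ys, xs]
    return out
```

    The output dimensions are fixed regardless of pupil dilation, which is what makes normalized iris codes comparable.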

  19. New approach for logo recognition

    NASA Astrophysics Data System (ADS)

    Chen, Jingying; Leung, Maylor K. H.; Gao, Yongsheng

    2000-03-01

    The problem of logo recognition is of great interest in the document domain, especially for document databases. By recognizing the logo we obtain semantic information about the document, which may be useful in deciding whether or not to analyze the textual components. In order to develop a logo recognition method that is efficient to compute and produces intuitively reasonable results, we investigate the Line Segment Hausdorff Distance for logo recognition. Researchers apply the Hausdorff distance to measure the dissimilarity of two point sets; it has been extended to match two sets of line segments. The new approach has the advantage of incorporating structural and spatial information into the dissimilarity computation. The added information can conceptually provide more and better distinctive capability for recognition. The proposed technique has been applied to line segments of logos with encouraging results that support the concept experimentally. This may suggest a new way to perform logo recognition.
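    The underlying point-set Hausdorff distance, which the Line Segment Hausdorff Distance extends to line segments with structural information, can be computed directly:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets A and B."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    h_ab = d.min(axis=1).max()   # farthest A-point from its nearest B-point
    h_ba = d.min(axis=0).max()   # and vice versa
    return max(h_ab, h_ba)
```

    A small mismatch anywhere in either shape dominates the measure, which is why the line-segment extension adds orientation and structural terms.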

  20. FragFit: a web-application for interactive modeling of protein segments into cryo-EM density maps.

    PubMed

    Tiemann, Johanna K S; Rose, Alexander S; Ismer, Jochen; Darvish, Mitra D; Hilal, Tarek; Spahn, Christian M T; Hildebrand, Peter W

    2018-05-21

    Cryo-electron microscopy (cryo-EM) is a standard method to determine the three-dimensional structures of molecular complexes. However, easy to use tools for modeling of protein segments into cryo-EM maps are sparse. Here, we present the FragFit web-application, a web server for interactive modeling of segments of up to 35 amino acids length into cryo-EM density maps. The fragments are provided by a regularly updated database containing at the moment about 1 billion entries extracted from PDB structures and can be readily integrated into a protein structure. Fragments are selected based on geometric criteria, sequence similarity and fit into a given cryo-EM density map. Web-based molecular visualization with the NGL Viewer allows interactive selection of fragments. The FragFit web-application, accessible at http://proteinformatics.de/FragFit, is free and open to all users, without any login requirements.

  1. Evaluation of a Phylogenetic Marker Based on Genomic Segment B of Infectious Bursal Disease Virus: Facilitating a Feasible Incorporation of this Segment to the Molecular Epidemiology Studies for this Viral Agent.

    PubMed

    Alfonso-Morales, Abdulahi; Rios, Liliam; Martínez-Pérez, Orlando; Dolz, Roser; Valle, Rosa; Perera, Carmen L; Bertran, Kateri; Frías, Maria T; Ganges, Llilianne; Díaz de Arce, Heidy; Majó, Natàlia; Núñez, José I; Pérez, Lester J

    2015-01-01

    Infectious bursal disease (IBD) is a highly contagious and acute viral disease, which has caused high mortality rates in birds and considerable economic losses in different parts of the world for more than two decades, and it still represents a considerable threat to poultry. The current study was designed to rigorously measure the reliability of a phylogenetic marker included in segment B. This marker can facilitate molecular epidemiology studies, incorporating this segment of the viral genome, to better explain the links between emergence, spreading and maintenance of the very virulent IBD virus (vvIBDV) strains worldwide. Sequences of the segment B gene from IBDV strains isolated from diverse geographic locations were obtained from the GenBank Database; Cuban sequences were obtained in the current work. A phylogenetic marker named B-marker was assessed by different phylogenetic principles such as saturation of substitution, phylogenetic noise and high consistency. This last parameter is based on the ability of B-marker to reconstruct the same topology as the complete segment B of the viral genome. From the results obtained from B-marker, the demographic history of both main lineages of IBDV regarding segment B was inferred by Bayesian skyline plot analysis. Phylogenetic analysis of both segments of the IBDV genome was also performed, revealing the presence of a natural reassortant strain with segment A from vvIBDV strains and segment B from non-vvIBDV strains within the Cuban IBDV population. This study contributes to a better understanding of the emergence of vvIBDV strains, describing the molecular epidemiology of IBDV using state-of-the-art methodology for phylogenetic reconstruction. It also revealed the presence of a novel natural reassortant strain as a possible manifestation of change in the genetic structure and stability of the vvIBDV strains. Therefore, it highlights the need to obtain information about both genome segments of IBDV for molecular epidemiology studies.

  2. Novel algorithm by low complexity filter on retinal vessel segmentation

    NASA Astrophysics Data System (ADS)

    Rostampour, Samad

    2011-10-01

    This article presents a new method to detect blood vessels of the retina in digital images. Retinal vessel segmentation is important for detecting side effects of diabetes, because diabetes can form new capillaries which are very brittle. The research was done in two phases: preprocessing and processing. The preprocessing phase applies a new filter that produces a suitable output: it shows vessels in dark color on a white background, creating good contrast between vessels and background. Its complexity is very low, and extraneous image content is eliminated. The second phase, processing, uses a Bayesian method, a supervised classification method that uses the mean and variance of pixel intensities to calculate class probabilities. Finally, the pixels of the image are divided into two classes: vessels and background. The images used are from the DRIVE database. After performing this operation, the calculation gives an average efficiency of 95 percent. The method was also applied to a retinopathy sample from outside the DRIVE database, and a perfect result was obtained.
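    The Bayesian classification described, modeling each class only by the mean and variance of its intensities, amounts to a per-pixel Gaussian likelihood comparison (a sketch with assumed class statistics, not the article's fitted values):

```python
import numpy as np

def gaussian_log_likelihood(x, mean, var):
    """Log-likelihood of intensities x under a Gaussian with given mean/variance."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def classify_pixels(intensities, stats):
    """Assign each pixel to the class (e.g. vessel vs. background) whose
    Gaussian model gives it the highest likelihood."""
    ll = np.stack([gaussian_log_likelihood(intensities, m, v) for m, v in stats])
    return ll.argmax(axis=0)
```

    In a supervised setting, the per-class `(mean, variance)` pairs would be estimated from labeled training pixels.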

  3. Blood Vessel Extraction in Color Retinal Fundus Images with Enhancement Filtering and Unsupervised Classification

    PubMed Central

    2017-01-01

    Retinal blood vessels have a significant role in the diagnosis and treatment of various retinal diseases such as diabetic retinopathy, glaucoma, arteriosclerosis, and hypertension. For this reason, retinal vasculature extraction is important in order to help specialists in the diagnosis and treatment of systemic diseases. In this paper, a novel approach is developed to extract the retinal blood vessel network. Our method comprises four stages: (1) a preprocessing stage to prepare the dataset for segmentation; (2) an enhancement procedure including Gabor, Frangi, and Gauss filters applied separately before a top-hat transform; (3) a hard and soft clustering stage, using K-means and Fuzzy C-means (FCM), to obtain the binary vessel map; and (4) a postprocessing step which removes falsely segmented isolated regions. The method is tested on color retinal images obtained from the STARE and DRIVE databases, which are available online. As a result, the Gabor filter followed by K-means clustering achieves 95.94% and 95.71% accuracy for the STARE and DRIVE databases, respectively, which is acceptable for diagnosis systems. PMID:29065611
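    The top-hat step can be sketched with SciPy's grayscale morphology (`scipy.ndimage.white_tophat` is standard SciPy API; the synthetic image and structuring-element size below are illustrative — vessel pipelines typically invert the green channel first so vessels appear bright):

```python
import numpy as np
from scipy import ndimage

def tophat_enhance(img, size=7):
    """White top-hat: image minus its grayscale opening. Bright structures
    thinner than `size` survive; the smooth background is removed."""
    return ndimage.white_tophat(img, size=size)
```

    Because the opening erases structures narrower than the structuring element, subtracting it isolates exactly the thin, vessel-like detail for the subsequent clustering stage.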

  4. Method for introducing unidirectional nested deletions

    DOEpatents

    Dunn, J.J.; Quesada, M.A.; Randesi, M.

    1999-07-27

    Disclosed is a method for the introduction of unidirectional deletions in a cloned DNA segment. More specifically, the method comprises providing a recombinant DNA construct comprising a DNA segment of interest inserted in a cloning vector. The cloning vector has an f1 endonuclease recognition sequence adjacent to the insertion site of the DNA segment of interest. The recombinant DNA construct is then contacted with the protein pII encoded by gene II of phage f1 thereby generating a single-stranded nick. The nicked DNA is then contacted with E. coli Exonuclease III thereby expanding the single-stranded nick into a single-stranded gap. The single-stranded gapped DNA is then contacted with a single-strand-specific endonuclease thereby producing a linearized DNA molecule containing a double-stranded deletion corresponding in size to the single-stranded gap. The DNA treated in this manner is then incubated with DNA ligase under conditions appropriate for ligation. Also disclosed is a method for producing single-stranded DNA probes. In this embodiment, single-stranded gapped DNA, produced as described above, is contacted with a DNA polymerase in the presence of labeled nucleotides to fill in the gap. This DNA is then linearized by digestion with a restriction enzyme which cuts outside the DNA segment of interest. The product of this digestion is then denatured to produce a labeled single-stranded nucleic acid probe. 1 fig.

  5. Method for introducing unidirectional nested deletions

    DOEpatents

    Dunn, John J.; Quesada, Mark A.; Randesi, Matthew

    1999-07-27

    Disclosed is a method for the introduction of unidirectional deletions in a cloned DNA segment. More specifically, the method comprises providing a recombinant DNA construct comprising a DNA segment of interest inserted in a cloning vector, the cloning vector having an f1 endonuclease recognition sequence adjacent to the insertion site of the DNA segment of interest. The recombinant DNA construct is then contacted with the protein pII encoded by gene II of phage f1 thereby generating a single-stranded nick. The nicked DNA is then contacted with E. coli Exonuclease III thereby expanding the single-stranded nick into a single-stranded gap. The single-stranded gapped DNA is then contacted with a single-strand-specific endonuclease thereby producing a linearized DNA molecule containing a double-stranded deletion corresponding in size to the single-stranded gap. The DNA treated in this manner is then incubated with DNA ligase under conditions appropriate for ligation. Also disclosed is a method for producing single-stranded DNA probes. In this embodiment, single-stranded gapped DNA, produced as described above, is contacted with a DNA polymerase in the presence of labeled nucleotides to fill in the gap. This DNA is then linearized by digestion with a restriction enzyme which cuts outside the DNA segment of interest. The product of this digestion is then denatured to produce a labeled single-stranded nucleic acid probe.

  6. Method for producing labeled single-stranded nucleic acid probes

    DOEpatents

    Dunn, John J.; Quesada, Mark A.; Randesi, Matthew

    1999-10-19

    Disclosed is a method for the introduction of unidirectional deletions in a cloned DNA segment. More specifically, the method comprises providing a recombinant DNA construct comprising a DNA segment of interest inserted in a cloning vector, the cloning vector having an f1 endonuclease recognition sequence adjacent to the insertion site of the DNA segment of interest. The recombinant DNA construct is then contacted with the protein pII encoded by gene II of phage f1 thereby generating a single-stranded nick. The nicked DNA is then contacted with E. coli Exonuclease III thereby expanding the single-stranded nick into a single-stranded gap. The single-stranded gapped DNA is then contacted with a single-strand-specific endonuclease thereby producing a linearized DNA molecule containing a double-stranded deletion corresponding in size to the single-stranded gap. The DNA treated in this manner is then incubated with DNA ligase under conditions appropriate for ligation. Also disclosed is a method for producing single-stranded DNA probes. In this embodiment, single-stranded gapped DNA, produced as described above, is contacted with a DNA polymerase in the presence of labeled nucleotides to fill in the gap. This DNA is then linearized by digestion with a restriction enzyme which cuts outside the DNA segment of interest. The product of this digestion is then denatured to produce a labeled single-stranded nucleic acid probe.

  7. Graph Databases for Large-Scale Healthcare Systems: A Framework for Efficient Data Management and Data Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Yubin; Shankar, Mallikarjun; Park, Byung H.

    Designing a database system for both efficient data management and data services has been one of the enduring challenges in the healthcare domain. In many healthcare systems, data services and data management are often viewed as two orthogonal tasks; data services refer to retrieval and analytic queries such as search, joins, statistical data extraction, and simple data mining algorithms, while data management refers to building error-tolerant and non-redundant database systems. The gap between service and management has resulted in rigid database systems and schemas that do not support effective analytics. We compose a rich graph structure from an abstracted healthcare RDBMS to illustrate how we can fill this gap in practice. We show how a healthcare graph can be automatically constructed from a normalized relational database using the proposed 3NF Equivalent Graph (3EG) transformation. We discuss a set of real-world graph queries, such as finding self-referrals, shared providers, and collaborative filtering, and evaluate their performance over a relational database and its 3EG-transformed graph. Experimental results show that the graph representation serves as multiple de-normalized tables, thus reducing complexity in a database and enhancing data accessibility for users. Based on this finding, we propose an ensemble framework of databases for healthcare applications.
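    The rows-to-edges idea behind such a relational-to-graph transformation can be illustrated in a few lines (the schema, table data, and names below are purely hypothetical, not the paper's 3EG algorithm):

```python
from collections import defaultdict

# Toy normalized table: a visits relation linking patients to providers.
visits = [
    ("pat1", "drA"), ("pat1", "drB"),
    ("pat2", "drA"), ("pat3", "drC"),
]

# Each row of the join table becomes an edge in a bipartite graph.
graph = defaultdict(set)
for patient, provider in visits:
    graph[patient].add(provider)
    graph[provider].add(patient)

def shared_providers(p1, p2):
    """Providers seen by both patients: a one-hop neighborhood intersection
    in the graph, instead of a relational self-join."""
    return graph[p1] & graph[p2]
```

    Queries like shared providers or self-referrals, which require self-joins over large tables in an RDBMS, reduce to local neighborhood operations in the graph form.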

  8. Mated Fingerprint Card Pairs (Volumes 1-5)

    National Institute of Standards and Technology Data Gateway

    NIST Mated Fingerprint Card Pairs (Volumes 1-5) (Web, free access)   The NIST database of mated fingerprint card pairs (Special Database 9) consists of multiple volumes. Currently five volumes have been released. Each volume will be a 3-disk set with each CD-ROM containing 90 mated card pairs of segmented 8-bit gray scale fingerprint images (900 fingerprint image pairs per CD-ROM). A newer version of the compression/decompression software on the CDROM can be found at the website http://www.nist.gov/itl/iad/ig/nigos.cfm as part of the NBIS package.

  9. A unified framework for gesture recognition and spatiotemporal gesture segmentation.

    PubMed

    Alon, Jonathan; Athitsos, Vassilis; Yuan, Quan; Sclaroff, Stan

    2009-09-01

    Within the context of hand gesture recognition, spatiotemporal gesture segmentation is the task of determining, in a video sequence, where the gesturing hand is located and when the gesture starts and ends. Existing gesture recognition methods typically assume either known spatial segmentation or known temporal segmentation, or both. This paper introduces a unified framework for simultaneously performing spatial segmentation, temporal segmentation, and recognition. In the proposed framework, information flows both bottom-up and top-down. A gesture can be recognized even when the hand location is highly ambiguous and when information about when the gesture begins and ends is unavailable. Thus, the method can be applied to continuous image streams where gestures are performed in front of moving, cluttered backgrounds. The proposed method consists of three novel contributions: a spatiotemporal matching algorithm that can accommodate multiple candidate hand detections in every frame, a classifier-based pruning framework that enables accurate and early rejection of poor matches to gesture models, and a subgesture reasoning algorithm that learns which gesture models can falsely match parts of other longer gestures. The performance of the approach is evaluated on two challenging applications: recognition of hand-signed digits gestured by users wearing short-sleeved shirts, in front of a cluttered background, and retrieval of occurrences of signs of interest in a video database containing continuous, unsegmented signing in American Sign Language (ASL).

  10. Segmentation and feature extraction of cervical spine x-ray images

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Thoma, George R.

    1999-05-01

    As part of an R&D project in mixed text/image database design, the National Library of Medicine has archived a collection of 17,000 digitized x-ray images of the cervical and lumbar spine which were collected as part of the second National Health and Nutrition Examination Survey (NHANES II). To make this image data available and usable to a wide audience, we are investigating techniques for indexing the image content by automated or semi-automated means. Indexing of the images by features of interest to researchers in spine disease and structure requires effective segmentation of the vertebral anatomy. This paper describes work in progress toward this segmentation of the cervical spine images into anatomical components of interest, including anatomical landmarks for vertebral location, and segmentation and identification of individual vertebrae. Our work includes developing a reliable method for automatically fixing an anatomy-based coordinate system in the images, and work to adaptively threshold the images, using methods previously applied by researchers in cardioangiography. We describe the motivation for our work and present our current results in both areas.

  11. Robust finger vein ROI localization based on flexible segmentation.

    PubMed

    Lu, Yu; Xie, Shan Juan; Yoon, Sook; Yang, Jucheng; Park, Dong Sun

    2013-10-24

    Finger veins have proved to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by influences such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition and so degrade the performance of a finger vein identification system. To address this problem, we propose in this paper a finger vein ROI localization method that is highly effective and robust against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correctly calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database MMCBNU_6000 to verify the robustness of the proposed method. The proposed method shows a segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms per acquired image, which satisfies the criterion of a real-time finger vein identification system.

  12. Robust Finger Vein ROI Localization Based on Flexible Segmentation

    PubMed Central

    Lu, Yu; Xie, Shan Juan; Yoon, Sook; Yang, Jucheng; Park, Dong Sun

    2013-01-01

    Finger veins have proven to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by factors such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All of these factors may lead to inaccurate region of interest (ROI) definition and thus degrade the performance of a finger vein identification system. To address this problem, we propose a finger vein ROI localization method that is highly effective and robust against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correctly calculated orientation support each other to localize ROIs with higher accuracy. Extensive experiments have been performed on the finger vein image database MMCBNU_6000 to verify the robustness of the proposed method. The proposed method achieves a segmentation accuracy of 100%. Furthermore, its average processing time is 22 ms per acquired image, which satisfies the criterion of a real-time finger vein identification system. PMID:24284769

  13. Automatic segmentation of left ventricle in cardiac cine MRI images based on deep learning

    NASA Astrophysics Data System (ADS)

    Zhou, Tian; Icke, Ilknur; Dogdas, Belma; Parimal, Sarayu; Sampath, Smita; Forbes, Joseph; Bagchi, Ansuman; Chin, Chih-Liang; Chen, Antong

    2017-02-01

    In developing treatment of cardiovascular diseases, short axis cine MRI has been used as a standard technique for understanding the global structural and functional characteristics of the heart, e.g. ventricle dimensions, stroke volume, and ejection fraction. To conduct an accurate assessment, heart structures need to be segmented from the cine MRI images with high precision, which can be a laborious task when performed manually. Herein a fully automatic framework is proposed for the segmentation of the left ventricle from the slices of short axis cine MRI scans of porcine subjects using a deep learning approach. For training the deep learning models, which generally requires a large set of data, a public database of human cine MRI scans is used. Experiments on the 3150 cine slices of 7 porcine subjects have shown that, when comparing the automatic and manual segmentations, the mean slice-wise Dice coefficient is about 0.930, the point-to-curve error is 1.07 mm, and the mean slice-wise Hausdorff distance is around 3.70 mm, which demonstrates the accuracy and robustness of the proposed inter-species translational approach.
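The slice-wise Dice coefficient used in the evaluation above measures the overlap between an automatic and a manual binary mask; a minimal sketch (masks flattened to 0/1 lists, variable names hypothetical):

```python
def dice_coefficient(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks of equal size."""
    inter = sum(p and t for p, t in zip(pred, truth))  # overlapping foreground
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0       # both empty: perfect

manual    = [0, 1, 1, 1, 0, 0]
automatic = [0, 1, 1, 0, 0, 0]
score = dice_coefficient(automatic, manual)  # 2*2 / (2+3) = 0.8
```

A score of 1.0 means perfect agreement; the 0.930 reported above indicates the automatic contours track the manual ones closely on most slices.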

  14. Component-Level Electronic-Assembly Repair (CLEAR) System Architecture

    NASA Technical Reports Server (NTRS)

    Oeftering, Richard C.; Bradish, Martin A.; Juergens, Jeffrey R.; Lewis, Michael J.; Vrnak, Daniel R.

    2011-01-01

    This document captures the system architecture for a Component-Level Electronic-Assembly Repair (CLEAR) capability needed for electronics maintenance and repair of the Constellation Program (CxP). CLEAR is intended to improve flight system supportability and reduce the mass of spares required to maintain the electronics of human rated spacecraft on long duration missions. By necessity it allows the crew to make repairs that would otherwise be performed by Earth based repair depots. Because of the practical knowledge and skill limitations of small spaceflight crews, they must be augmented by Earth based support crews and automated repair equipment. This system architecture covers the complete system from ground-user to flight hardware and flight crew and defines an Earth segment and a Space segment. The Earth Segment involves database management, operational planning, and remote equipment programming and validation processes. The Space Segment involves the automated diagnostic, test, and repair equipment required for a complete repair process. This document defines three major subsystems: tele-operations that link the flight hardware to ground support, highly reconfigurable diagnostics and test instruments, and a CLEAR Repair Apparatus that automates the physical repair process.

  15. Video indexing based on image and sound

    NASA Astrophysics Data System (ADS)

    Faudemay, Pascal; Montacie, Claude; Caraty, Marie-Jose

    1997-10-01

    Video indexing is a major challenge for both scientific and economic reasons. Information extraction can sometimes be easier from the sound channel than from the image channel. We first present a multi-channel and multi-modal query interface, to query sound, image and script through 'pull' and 'push' queries. We then summarize the segmentation phase, which needs information from the image channel. Detection of critical segments is proposed. It should speed up both automatic and manual indexing. We then present an overview of the information extraction phase. Information can be extracted from the sound channel, through speaker recognition, vocal dictation with unconstrained vocabularies, and script alignment with speech. We present experimental results for these various techniques. Speaker recognition methods were tested on the TIMIT and NTIMIT databases. Vocal dictation was tested on newspaper sentences spoken by several speakers. Script alignment was tested on part of a cartoon movie, 'Ivanhoe'. For good quality sound segments, error rates are low enough for use in indexing applications. Major issues are the processing of sound segments with noise or music, and performance improvement through the use of appropriate, low-cost architectures or networks of workstations.

  16. Drawing the line between constituent structure and coherence relations in visual narratives.

    PubMed

    Cohn, Neil; Bender, Patrick

    2017-02-01

    Theories of visual narrative understanding have often focused on the changes in meaning across a sequence, like shifts in characters, spatial location, and causation, as cues for breaks in the structure of a discourse. In contrast, the theory of visual narrative grammar posits that hierarchic "grammatical" structures operate at the discourse level using categorical roles for images, which may or may not co-occur with shifts in coherence. We therefore examined the relationship between narrative structure and coherence shifts in the segmentation of visual narrative sequences using a "segmentation task" where participants drew lines between images in order to divide them into subepisodes. We used regressions to analyze the influence of the expected constituent structure boundary, narrative categories, and semantic coherence relationships on the segmentation of visual narrative sequences. Narrative categories were a stronger predictor of segmentation than linear coherence relationships between panels, though both influenced participants' divisions. Altogether, these results support the theory that meaningful sequential images use a narrative grammar that extends above and beyond linear semantic shifts between discourse units. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Detecting the optic disc boundary in digital fundus images using morphological, edge detection, and feature extraction techniques.

    PubMed

    Aquino, Arturo; Gegundez-Arias, Manuel Emilio; Marin, Diego

    2010-11-01

    Optic disc (OD) detection is an important step in developing systems for automated diagnosis of various serious ophthalmic pathologies. This paper presents a new template-based methodology for segmenting the OD from digital retinal images. This methodology uses morphological and edge detection techniques followed by the Circular Hough Transform to obtain a circular OD boundary approximation. It requires a pixel located within the OD as initial information. For this purpose, a location methodology based on a voting-type algorithm is also proposed. The algorithms were evaluated on the 1200 images of the publicly available MESSIDOR database. The location procedure succeeded in 99% of cases, taking an average computational time of 1.67 s with a standard deviation of 0.14 s. The segmentation algorithm, in turn, yielded an average overlap of 86% between the automated segmentations and the true OD regions. The average computational time was 5.69 s with a standard deviation of 0.54 s. Moreover, a discussion of the advantages and disadvantages of the models most commonly used for OD segmentation is also presented in this paper.
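The Circular Hough Transform at the core of this methodology has each edge pixel vote for the possible centers of a circle of a given radius; the most-voted accumulator cell approximates the circle center. A minimal single-radius sketch (not the authors' implementation; a real OD detector would also search over a range of radii):

```python
import math
from collections import Counter

def hough_circle_center(edge_points, radius, n_angles=72):
    """Each edge point casts votes along a circle of `radius` around itself;
    the accumulator cell with the most votes is the estimated center."""
    votes = Counter()
    for (x, y) in edge_points:
        for k in range(n_angles):
            t = 2 * math.pi * k / n_angles
            cx = round(x - radius * math.cos(t))
            cy = round(y - radius * math.sin(t))
            votes[(cx, cy)] += 1
    return votes.most_common(1)[0][0]

# synthetic edge points on a circle centered at (30, 40) with radius 10
pts = {(round(30 + 10 * math.cos(2 * math.pi * k / 60)),
        round(40 + 10 * math.sin(2 * math.pi * k / 60))) for k in range(60)}
center = hough_circle_center(pts, radius=10)
```

The voting makes the estimate robust to gaps and spurious edge pixels, which is why it suits the bright but often partially occluded OD boundary.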

  18. Infrared thermography based on artificial intelligence for carpal tunnel syndrome diagnosis.

    PubMed

    Jesensek Papez, B; Palfy, M; Turk, Z

    2008-01-01

    Thermography for the measurement of surface temperatures is well known in industry, although it is not established in medicine despite being safe, painless, non-invasive, easily reproducible, and inexpensive to run. Promising results have been achieved in nerve entrapment syndromes, although thermography has never represented a real alternative to electromyography. Here an attempt is described to improve the diagnosis of carpal tunnel syndrome with thermography using a computer-based system employing artificial neural networks to analyse the images. Method reliability was tested on 112 images (depicting the dorsal and palmar sides of 26 healthy and 30 pathological hands), with the hand divided into 12 segments and compared relative to a reference. Palmar segments appeared to have no beneficial influence on classification outcome, whereas dorsal segments gave improved outcome with classification success rates near or above 80%, and finger segments influenced by the median nerve appeared to be of greatest importance. These are preliminary results from a limited number of images and further research will be undertaken as our image database grows.

  19. Behavioral Health Services Utilization among Older Adults Identified within a State Abuse Hotline Database

    ERIC Educational Resources Information Center

    Schonfeld, Lawrence; Larsen, Rebecca G.; Stiles, Paul G.

    2006-01-01

    Purpose: This study examined the extent to which older adults identified in a statewide abuse hotline registry utilized behavioral health services. This is important as mental health issues have been identified as a high priority for filling gaps in services for victims of mistreatment. Design and Methods: We compared Medicaid and Medicare claims…

  20. The Past in the Future: Problems and Potentials of Historical Reception Studies.

    ERIC Educational Resources Information Center

    Jensen, Klaus Bruhn

    1993-01-01

    Gives examples of how qualitative methodologies have been employed to study media reception in the present. Identifies some forms of evidence that can creatively fill the gaps in knowledge about media reception in the past. Argues that the field must develop databases documenting media reception, which may broaden the scope of audience research in…

  1. nStudy: A System for Researching Information Problem Solving

    ERIC Educational Resources Information Center

    Winne, Philip H.; Nesbit, John C.; Popowich, Fred

    2017-01-01

    A bottleneck in gathering big data about learning is instrumentation designed to record data about processes students use to learn and information on which those processes operate. The software system nStudy fills this gap. nStudy is an extension to the Chrome web browser plus a server side database for logged trace data plus peripheral modules…

  2. Database of Schools Enrolling Migrant Children: An Overview.

    ERIC Educational Resources Information Center

    Henderson, Allison

    Until recently, there was no information on U.S. schools attended by migrant children and their characteristics. Migrant children and youth often were excluded from major educational studies because of the lack of a nationally reliable sampling frame of schools or districts enrolling migrant children. In an effort to fill this gap, the U.S.…

  3. Use of a Corona Discharge to Selectively Pattern a Hydrophilic/Hydrophobic Interface for Integrating Segmented Flow with Microchip Electrophoresis and Electrochemical Detection

    PubMed Central

    Filla, Laura A.; Kirkpatrick, Douglas C.; Martin, R. Scott

    2011-01-01

    Segmented flow in microfluidic devices involves the use of droplets that are generated either on- or off-chip. When used with off-chip sampling methods, segmented flow has been shown to prevent analyte dispersion and improve temporal resolution by periodically surrounding an aqueous flow stream with an immiscible carrier phase as it is transferred to the microchip. To analyze the droplets by methods such as electrochemistry or electrophoresis, a method to “desegment” the flow into separate aqueous and immiscible carrier phase streams is needed. In this paper, a simple and straightforward approach for this desegmentation process was developed by first creating an air/water junction in natively hydrophobic and perpendicular PDMS channels. The air-filled channel was treated with a corona discharge electrode to create a hydrophilic/hydrophobic interface. When a segmented flow stream encounters this interface, only the aqueous sample phase enters the hydrophilic channel, where it can be subsequently analyzed by electrochemistry or microchip-based electrophoresis with electrochemical detection. It is shown that the desegmentation process does not significantly degrade the temporal resolution of the system, with rise times as low as 12 s reported after droplets are recombined into a continuous flow stream. This approach demonstrates significant advantages over previous studies in that the treatment process takes only a few minutes, fabrication is relatively simple, and reversible sealing of the microchip is possible. This work should enable future studies where off-chip processes such as microdialysis can be integrated with segmented flow and electrochemical-based detection. PMID:21718004

  4. Partitioning an Artificial Anterior Chamber With a Latex Diaphragm to Simulate Anterior and Posterior Segment Pressure Dynamics: The "DMEK Practice Stage," Where Surgeons Can Rehearse the "DMEK Dance".

    PubMed

    Sáles, Christopher S; Straiko, Michael D; Fernandez, Ana Alzaga; Odell, Kelly; Dye, Philip K; Tran, Khoa D

    2018-02-01

    To present a novel apparatus for simulating the anterior and posterior segment pressure dynamics involved in executing Descemet membrane endothelial keratoplasty (DMEK) surgery when using a chamber-shallowing technique. An artificial anterior chamber (AAC), 18-mm trephine, latex glove, two 3-mL syringes, and one donor cornea comprising an intact corneoscleral cap from which a DMEK tissue was peeled and punched are required for the model. After making the corneal incisions with the corneoscleral cap mounted on the AAC in the usual fashion, the corneoscleral cap is remounted onto the dried AAC over an 18-mm latex diaphragm. The space between the latex diaphragm and the cornea is filled with saline to pressurize the anterior chamber, and the posterior segment is pressurized with air from a syringe. The resulting apparatus comprises a posterior segment and anterior chamber that exert pressure on each other by way of a distensible latex diaphragm. A novice and experienced DMEK surgeon and 2 eye bank technicians were able to assemble the apparatus and perform the routine steps of a DMEK procedure, including maneuvers that require shallowing the anterior chamber and lowering its pressure. Only one cornea was required per apparatus. We present a novel in vitro model of the human eye that more closely mimics the anterior and posterior segment pressure dynamics of in vivo DMEK surgery than average human and animal cadaveric globes. The model is easy to assemble, inexpensive, and applicable to a range of teaching environments.

  5. The morphology, processes, and evolution of Monterey Fan: a revisit

    USGS Publications Warehouse

    Gardner, James V.; Bohannon, Robert G.; Field, Michael E.; Masson, Douglas G.

    2010-01-01

    Long-range (GLORIA) and mid-range (TOBI) sidescan imagery and seismic-reflection profiles have revealed the surface morphology and architecture of the complete Monterey Fan. The fan has not developed a classic wedge shape because it has been blocked for much of its history by Morro Fracture Zone. The barrier has caused the fan to develop an upper-fan and lower-fan sequence that are distinctly different from one another. The upper-fan sequence is characterized by Monterey and Ascension Channels and associated Monterey Channel-levee system. The lower-fan sequence is characterized by depositional lobes of the Ascension, Monterey, and Sur-Parkington-Lucia systems, with the Monterey depositional lobe being the youngest. Presently, the Monterey depositional lobe is being downcut because the system has reached a new, lower base level in the Murray Fracture Zone. A five-step evolution of Monterey Fan is presented, starting with initial fan deposition in the Late Miocene, about 5.5 Ma. This first stage was one of filling bathymetric lows in the oceanic basement in what was to become the upper-fan segment. The second stage involved filling the bathymetric low on the north side of Morro Fracture Zone, and probably not much sediment was transported beyond the fracture zone. The third stage witnessed sediment being transported around both ends of Morro Fracture Zone and initial sedimentation on the lower-fan segment. During the fourth stage Ascension Channel was diverted into Monterey Channel, thereby cutting off sedimentation to the Ascension depositional lobe.

  6. Automated detection of discourse segment and experimental types from the text of cancer pathway results sections.

    PubMed

    Burns, Gully A P C; Dasigi, Pradeep; de Waard, Anita; Hovy, Eduard H

    2016-01-01

    Automated machine-reading biocuration systems typically use sentence-by-sentence information extraction to construct meaning representations for use by curators. This does not directly reflect the typical discourse structure used by scientists to construct an argument from the experimental data available within an article, and is therefore less likely to correspond to representations typically used in biomedical informatics systems (let alone to the mental models that scientists have). In this study, we develop Natural Language Processing methods to locate, extract, and classify the individual passages of text from articles' Results sections that refer to experimental data. In our domain of interest (molecular biology studies of cancer signal transduction pathways), individual articles may contain as many as 30 small-scale individual experiments describing a variety of findings, upon which authors base their overall research conclusions. Our system automatically classifies discourse segments in these texts into seven categories (fact, hypothesis, problem, goal, method, result, implication) with an F-score of 0.68. These segments describe the essential building blocks of scientific discourse to (i) provide context for each experiment, (ii) report experimental details and (iii) explain the data's meaning in context. We evaluate our system on text passages from articles that were curated in molecular biology databases (the Pathway Logic Datum repository, the Molecular Interaction MINT and INTACT databases) linking individual experiments in articles to the type of assay used (coprecipitation, phosphorylation, translocation etc.). We use supervised machine learning techniques on text passages containing unambiguous references to experiments to obtain baseline F1 scores of 0.59 for MINT, 0.71 for INTACT and 0.63 for Pathway Logic. Although preliminary, these results support the notion that targeting information extraction methods to experimental results could provide accurate, automated methods for biocuration. We also suggest the need for finer-grained curation of experimental methods used when constructing molecular biology databases. © The Author(s) 2016. Published by Oxford University Press.
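The F1 scores quoted above are the harmonic mean of precision and recall over the extracted passages; a minimal sketch of the computation (the counts below are illustrative, not taken from the paper):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from raw counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0  # correct / predicted
    recall = tp / (tp + fn) if tp + fn else 0.0     # correct / actual
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# e.g. 40 correctly classified segments, 20 false positives, 15 misses
score = f1_score(40, 20, 15)   # ≈ 0.696
```

Because F1 penalizes both over- and under-extraction, it is a stricter summary than accuracy when, as here, most sentences do not refer to any experiment.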

  7. Filling gaps in large ecological databases: consequences for the study of global-scale plant functional trait patterns

    NASA Astrophysics Data System (ADS)

    Schrodt, Franziska; Shan, Hanhuai; Fazayeli, Farideh; Karpatne, Anuj; Kattge, Jens; Banerjee, Arindam; Reichstein, Markus; Reich, Peter

    2013-04-01

    With the advent of remotely sensed data and coordinated efforts to create global databases, the ecological community has progressively become more data-intensive. However, in contrast to other disciplines, statistical ways of handling these large data sets, especially the gaps that are inherent to them, are lacking. Widely used theoretical approaches, for example model averaging based on Akaike's information criterion (AIC), are sensitive to missing values. Yet, the most common way of handling sparse matrices - the deletion of cases with missing data (complete case analysis) - is known to severely reduce statistical power as well as to induce biased parameter estimates. In order to address these issues, we present novel approaches to gap filling in large ecological data sets using matrix factorization techniques. Factorization-based matrix completion was developed in a recommender system context and has since been widely used to impute missing data in fields outside the ecological community. Here, we evaluate the effectiveness of probabilistic matrix factorization techniques for imputing missing data in ecological matrices using two imputation techniques. Hierarchical Probabilistic Matrix Factorization (HPMF) effectively incorporates hierarchical phylogenetic information (phylogenetic group, family, genus, species and individual plant) into the trait imputation. Advanced Hierarchical Probabilistic Matrix Factorization (aHPMF), on the other hand, includes climate and soil information in the matrix factorization by regressing the environmental variables against residuals of the HPMF. One unique opportunity opened up by aHPMF is out-of-sample prediction, where traits can be predicted for specific species at locations different from those sampled in the past. This has potentially far-reaching consequences for the study of global-scale plant functional trait patterns. We test the accuracy and effectiveness of HPMF and aHPMF in filling sparse matrices, using the TRY database of plant functional traits (http://www.try-db.org). TRY is one of the largest global compilations of plant trait databases (750 traits of 1 million plants), encompassing data on morphological, anatomical, biochemical, phenological and physiological features of plants. However, despite its unprecedented coverage, the TRY database is still very sparse, severely limiting joint trait analyses. Plant traits are the key to understanding how plants as primary producers adjust to changes in environmental conditions and in turn influence them. Forming the basis for Dynamic Global Vegetation Models (DGVMs), plant traits are also fundamental in global change studies for predicting future ecosystem changes. It is thus imperative that missing data are imputed in as accurate and precise a way as possible. In this study, we show the advantages and disadvantages of applying probabilistic matrix factorization techniques that incorporate hierarchical and environmental information for the prediction of missing plant traits, as compared to conventional imputation techniques such as the complete case and mean approaches. We will discuss the implications of using gap-filled data for global-scale studies of plant functional trait-environment relationships as opposed to the above-mentioned conventional techniques, using examples of out-of-sample predictions of foliar nitrogen across several species' ranges and biomes.
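The core idea of factorization-based matrix completion can be illustrated with a toy rank-1 model fitted by stochastic gradient descent on the observed entries only. This is a deliberately simplified sketch, not HPMF or aHPMF (which add hierarchical phylogenetic priors and environmental covariates):

```python
import random

def complete_rank1(observed, n_rows, n_cols, lr=0.01, epochs=4000, seed=0):
    """Fill a matrix from its observed entries by fitting a rank-1 model
    M[i][j] ≈ p[i] * q[j] with stochastic gradient descent."""
    rng = random.Random(seed)
    p = [rng.uniform(0.1, 0.5) for _ in range(n_rows)]
    q = [rng.uniform(0.1, 0.5) for _ in range(n_cols)]
    cells = list(observed.items())
    for _ in range(epochs):
        rng.shuffle(cells)
        for (i, j), r in cells:
            e = r - p[i] * q[j]          # residual on this observed cell
            p[i], q[j] = p[i] + lr * e * q[j], q[j] + lr * e * p[i]
    # the fitted factors predict every cell, including the missing ones
    return [[p[i] * q[j] for j in range(n_cols)] for i in range(n_rows)]

# ground truth is rank-1: M[i][j] = u[i] * v[j]; hide entry (2, 2)
u, v = [1.0, 2.0, 3.0], [2.0, 1.0, 3.0]
obs = {(i, j): u[i] * v[j] for i in range(3) for j in range(3)}
del obs[(2, 2)]
filled = complete_rank1(obs, 3, 3)   # filled[2][2] should be close to 9.0
```

The same principle scales to a traits-by-species matrix like TRY: low-rank structure learned from the observed cells supplies predictions for the missing ones, which is what makes imputation preferable to complete case deletion.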

  8. Performance evaluation of an automatic segmentation method of cerebral arteries in MRA images by use of a large image database

    NASA Astrophysics Data System (ADS)

    Uchiyama, Yoshikazu; Asano, Tatsunori; Hara, Takeshi; Fujita, Hiroshi; Kinosada, Yasutomi; Asano, Takahiko; Kato, Hiroki; Kanematsu, Masayuki; Hoshi, Hiroaki; Iwama, Toru

    2009-02-01

    The detection of cerebrovascular diseases such as unruptured aneurysm, stenosis, and occlusion is a major application of magnetic resonance angiography (MRA). However, their accurate detection is often difficult for radiologists. Therefore, several computer-aided diagnosis (CAD) schemes have been developed in order to assist radiologists with image interpretation. The purpose of this study was to develop a computerized method for segmenting cerebral arteries, which is an essential component of CAD schemes. For the segmentation of vessel regions, we first used a gray level transformation to calibrate voxel values. To adjust for variations in the positioning of patients, registration was subsequently employed to maximize the overlap of the vessel regions in the target image and reference image. The vessel regions were then segmented from the background using gray-level thresholding and region growing techniques. Finally, rule-based schemes with features such as size, shape, and anatomical location were employed to distinguish between vessel regions and false positives. Our method was applied to 854 clinical cases obtained from two different hospitals. The segmentation of cerebral arteries was judged acceptable in 97.1% (829/854) of the MRA studies. Therefore, our computerized method would be useful in CAD schemes for the detection of cerebrovascular diseases in MRA images.
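The region growing step used above can be sketched as a seeded flood fill with an intensity-similarity criterion; a minimal 4-connected 2-D version (pure Python, with a simplified fixed-tolerance rule rather than the authors' exact criterion, which also operates in 3-D):

```python
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed`: add 4-connected neighbors whose intensity
    is within `tol` of the seed value (a simplified growth criterion)."""
    h, w = len(img), len(img[0])
    sy, sx = seed
    base = img[sy][sx]
    region = {(sy, sx)}
    queue = deque([(sy, sx)])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(img[ny][nx] - base) <= tol):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region

# bright vessel-like strip (values ~200) in a dark background
img = [[10, 10, 10, 10],
       [200, 205, 198, 10],
       [10, 10, 10, 10]]
vessel = region_grow(img, seed=(1, 0), tol=20)
```

Starting from voxels that survive the initial gray-level threshold, growth of this kind follows the connected bright vessel while leaving the dark background untouched.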

  9. A perceptive method for handwritten text segmentation

    NASA Astrophysics Data System (ADS)

    Lemaitre, Aurélie; Camillerapp, Jean; Coüasnon, Bertrand

    2011-01-01

    This paper presents a new method to address the problem of handwritten text segmentation into text lines and words. Thus, we propose a method based on the cooperation among points of view that enables the localization of the text lines in a low resolution image, and then the association of the pixels at a higher level of resolution. Thanks to the combination of levels of vision, we can detect overlapping characters and re-segment the connected components during the analysis. Then, we propose a segmentation of lines into words based on the cooperation among digital data and symbolic knowledge. The digital data are obtained from distances inside a Delaunay graph, which gives a precise distance between connected components, at the pixel level. We introduce structural rules in order to take into account some generic knowledge about the organization of a text page. This cooperation among information sources provides greater expressive power and ensures the global coherence of the recognition. We validate this work using the metrics and the database proposed for the segmentation contest of ICDAR 2009, and show that our method obtains very promising results compared with the other methods in the literature. More precisely, we are able to deal with slope and curvature, overlapping text lines, and varied kinds of writing, which are the main difficulties met by the other methods.

  10. Automatic morphometry in Alzheimer's disease and mild cognitive impairment

    PubMed Central

    Heckemann, Rolf A.; Keihaninejad, Shiva; Aljabar, Paul; Gray, Katherine R.; Nielsen, Casper; Rueckert, Daniel; Hajnal, Joseph V.; Hammers, Alexander

    2011-01-01

    This paper presents a novel, publicly available repository of anatomically segmented brain images of healthy subjects as well as patients with mild cognitive impairment and Alzheimer's disease. The underlying magnetic resonance images have been obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. T1-weighted screening and baseline images (1.5 T and 3 T) have been processed with the multi-atlas based MAPER procedure, resulting in labels for 83 regions covering the whole brain in 816 subjects. Selected segmentations were subjected to visual assessment. The segmentations are self-consistent, as evidenced by strong agreement between segmentations of paired images acquired at different field strengths (Jaccard coefficient: 0.802 ± 0.0146). Morphometric comparisons between diagnostic groups (normal; stable mild cognitive impairment; mild cognitive impairment with progression to Alzheimer's disease; Alzheimer's disease) showed highly significant group differences for individual regions, the majority of which were located in the temporal lobe. Additionally, significant effects were seen in the parietal lobe. Increased left/right asymmetry was found in posterior cortical regions. An automatically derived white-matter hypointensities index was found to be a suitable means of quantifying white-matter disease. This repository of segmentations is a potentially valuable resource to researchers working with ADNI data. PMID:21397703
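The Jaccard coefficient used above to quantify agreement between segmentations of paired images is the ratio of intersection to union of the two label sets; a minimal sketch (toy voxel coordinates, variable names hypothetical):

```python
def jaccard(a, b):
    """Jaccard coefficient |A ∩ B| / |A ∪ B| between two voxel label sets."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

seg_15T = {(1, 1), (1, 2), (2, 1), (2, 2)}   # voxels labeled at 1.5 T
seg_3T  = {(1, 1), (1, 2), (2, 1), (3, 1)}   # same region at 3 T
overlap = jaccard(seg_15T, seg_3T)           # 3 / 5 = 0.6
```

Values approach 1 only when the two segmentations coincide voxel for voxel, so the reported 0.802 ± 0.0146 across field strengths indicates strong self-consistency.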

  11. Multifractal geometry in analysis and processing of digital retinal photographs for early diagnosis of human diabetic macular edema.

    PubMed

    Tălu, Stefan

    2013-07-01

    The purpose of this paper is to provide a quantitative assessment of the human retinal vascular network architecture in patients with diabetic macular edema (DME). Multifractal geometry and lacunarity parameters are used in this study. A set of 10 segmented and skeletonized human retinal images from the DRIVE database, corresponding to both normal (five images) and DME (five images) states of the retina, was analyzed using the ImageJ software. Statistical analyses were performed using Microsoft Office Excel 2003 and GraphPad InStat software. The human retinal vascular network architecture has a multifractal geometry. The average of the generalized dimensions (Dq) for q = 0, 1, 2 of the normal images (segmented versions) is similar to that of the DME cases (segmented versions). The average of the generalized dimensions (Dq) for q = 0, 1 of the normal images (skeletonized versions) is slightly greater than that of the DME cases (skeletonized versions). However, the average of D2 for the normal images (skeletonized versions) is similar to that of the DME images. The average of the lacunarity parameter, Λ, for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values for the DME images (segmented and skeletonized versions). Multifractal and lacunarity analysis provides a non-invasive, complementary predictive tool for the early diagnosis of patients with DME.
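The generalized dimension D0 (the q = 0 case of the Dq spectrum discussed above) is the classical box-counting dimension; a minimal sketch of its estimation (toy point set and least-squares fit, not the authors' ImageJ-based workflow):

```python
import math

def box_counting_dimension(points, sizes):
    """Estimate D0: count occupied boxes N(s) at each box size s and fit
    the slope of log N(s) versus log(1/s) by least squares."""
    xs, ys = [], []
    for s in sizes:
        boxes = {(x // s, y // s) for (x, y) in points}  # occupied boxes
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# a filled 64x64 square is 2-dimensional; its estimated D0 should be ~2
square = [(x, y) for x in range(64) for y in range(64)]
d0 = box_counting_dimension(square, sizes=[1, 2, 4, 8])
```

For a retinal vessel skeleton the same procedure yields a non-integer D0 between 1 and 2, and repeating the count with q-weighted box measures produces the full Dq spectrum compared between normal and DME images above.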

  12. Segmentation and determination of joint space width in foot radiographs

    NASA Astrophysics Data System (ADS)

    Schenk, O.; de Muinck Keizer, D. M.; Bernelot Moens, H. J.; Slump, C. H.

    2016-03-01

    Joint damage in rheumatoid arthritis is frequently assessed using radiographs of hands and feet. Evaluation includes measurements of the joint space width (JSW) and detection of erosions. Current visual scoring methods are time-consuming and subject to inter- and intra-observer variability. Automated measurement methods avoid these limitations and have been fairly successful in hand radiographs. This contribution aims at foot radiographs. Starting from an earlier proposed automated segmentation method, we have developed a novel model-based image analysis algorithm for JSW measurements. This method uses active appearance and active shape models to identify individual bones. The model comprises ten submodels, each representing a specific bone of the foot (metatarsals 1-5, proximal phalanges 1-5). We have performed segmentation experiments using 24 foot radiographs, randomly selected from a large database from the rheumatology department of a local hospital: 10 for training and 14 for testing. Segmentation was considered successful if the joint locations were correctly determined. Segmentation was successful in only 14%. To improve results a step-by-step analysis will be performed. We performed JSW measurements on 14 randomly selected radiographs. JSW was successfully measured in 75%; the mean and standard deviation are 2.30 ± 0.36 mm. This is a first step towards automated determination of RA progression and therapy response in feet using radiographs.

  13. Prescription drug use in pregnancy: a retrospective, population-based study in British Columbia, Canada (2001-2006).

    PubMed

    Daw, Jamie R; Mintzes, Barbara; Law, Michael R; Hanley, Gillian E; Morgan, Steven G

    2012-01-01

    Owing to the paucity of evidence available on the risks and benefits of drug use in pregnancy, the use of prescription medicines is a concern for both pregnant women and their health care providers. The aim of this study was to measure the frequency, timing, and type of medicines used before, during, and after pregnancy in a Canadian population. This retrospective cohort analysis used population-based health care data from all pregnancies ending in live births in hospitals in British Columbia from April 2001 to June 2006 (n = 163,082). Data from hospital records were linked to those in outpatient prescription-drug claims. Data from prescriptions filled from 6 months before pregnancy to 6 months postpartum were analyzed. Drugs were classified by therapeutic category and US Food and Drug Administration (FDA) pregnancy risk categories. Prescriptions were filled in 63.5% of pregnancies. Evidence on safety is limited for many of the medicines most frequently filled in pregnancy, including codeine, salbutamol, and betamethasone. At least 1 prescription for a category D or X medicine was filled in 7.8% of pregnancies (5.5% category D; 2.5% category X). The most frequently filled prescriptions for category D drugs were benzodiazepines and antidepressants. The most frequently filled prescriptions for category X drugs were oral contraceptives and ovulation stimulants filled in the first trimester. The majority of pregnant women in British Columbia filled at least 1 prescription, and ~1 in 13 filled a prescription for a drug categorized as D or X by the FDA. The prevalence of maternal prescription drug use emphasizes the need for postmarketing evaluation of the risk-benefit profiles of pharmaceuticals in pregnancy. 
Future research on prenatal drug use based on administrative databases should examine maternal treatment adherence and the determinants of maternal drug use, considering maternal health status, sociodemographics, and the characteristics and providers of prenatal care. Copyright © 2012 Elsevier HS Journals, Inc. All rights reserved.

  14. Systematic review of "filling" procedures for lip augmentation regarding types of material, outcomes and complications.

    PubMed

    San Miguel Moragas, Joan; Reddy, Rajgopal R; Hernández Alfaro, Federico; Mommaerts, Maurice Y

    2015-07-01

    The ideal lip augmentation technique provides the longest period of efficacy, the lowest complication rate, and the best aesthetic results. A myriad of techniques have been described for lip augmentation, but the optimal approach has not yet been established. This systematic review with meta-regression focuses on the various filling procedures for lip augmentation (FPLA), with the goal of determining the optimal approach. A systematic search for all English, French, Spanish, German, Italian, Portuguese, and Dutch language studies involving FPLA was performed using these databases: Elsevier Science Direct, PubMed, Highwire Press, Springer Standard Collection, SAGE, DOAJ, SwetsWise, Free E-Journals, Ovid Lippincott Williams & Wilkins, Wiley Online Library Journals, and Cochrane Plus. The reference section of every study selected through this database search was subsequently examined to identify additional relevant studies. The database search yielded 29 studies. Nine more studies were retrieved from the reference sections of these 29 studies. The level-of-evidence ratings of these 38 studies were as follows: level Ib, four studies; level IIb, four studies; level IIIb, one study; and level IV, 29 studies. Ten studies were prospective. This systematic review sought to highlight all the quality data currently available regarding FPLA. Because of the considerable diversity of procedures, no definitive comparisons or conclusions were possible. Additional prospective studies and clinical trials are required to determine more conclusively the most appropriate approach for this procedure. IV. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  15. Atlas selection for hippocampus segmentation: Relevance evaluation of three meta-information parameters.

    PubMed

    Dill, Vanderson; Klein, Pedro Costa; Franco, Alexandre Rosa; Pinho, Márcio Sarroglia

    2018-04-01

    Current state-of-the-art methods for whole and subfield hippocampus segmentation use pre-segmented templates, also known as atlases, in the pre-processing stages. Typically, the input image is registered to the template, which provides prior information for the segmentation process. Using a single standard atlas increases the difficulty of dealing with individuals whose brain anatomy is morphologically different from the atlas, especially in older brains. To increase the segmentation precision in these cases, without any manual intervention, multiple atlases can be used. However, registration to many templates leads to a high computational cost. Researchers have proposed an atlas pre-selection technique based on meta-information, followed by the selection of an atlas based on image similarity. Unfortunately, this method also has a high computational cost due to the image-similarity step. Thus, it is desirable to pre-select a smaller number of atlases as long as this does not impact segmentation quality. To pick an atlas that provides the best registration, we evaluate the use of three meta-information parameters (medical condition, age range, and gender) to choose the atlas. In this work, 24 atlases were defined, each based on a combination of the three meta-information parameters. These atlases were used to segment 352 volumes from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Hippocampus segmentation with each of these atlases was evaluated and compared to reference segmentations of the hippocampus, which are available from ADNI. The use of atlas selection by meta-information led to a significant gain in the Dice similarity coefficient, which reached 0.68 ± 0.11, compared to 0.62 ± 0.12 when using only the standard MNI152 atlas. Statistical analysis showed that the three meta-information parameters provided a significant improvement in segmentation accuracy. Copyright © 2018 Elsevier Ltd. All rights reserved.
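
    The Dice similarity coefficient used to score these segmentations compares a candidate mask against a reference mask; a minimal generic sketch (not the authors' pipeline) is:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# toy example: two overlapping 2x3 masks
ref  = np.array([[1, 1, 0], [0, 1, 0]])
pred = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(ref, pred))  # 2*2/(3+3) ≈ 0.667
```

    A Dice value of 0.68 versus 0.62, as reported above, therefore means the meta-information-selected atlases recovered noticeably more overlapping hippocampus voxels than the single MNI152 atlas.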

  16. Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images.

    PubMed

    Pereira, Sergio; Pinto, Adriano; Alves, Victor; Silva, Carlos A

    2016-05-01

    Among brain tumors, gliomas are the most common and aggressive, leading to a very short life expectancy at their highest grade. Thus, treatment planning is a key stage in improving the quality of life of oncological patients. Magnetic resonance imaging (MRI) is a widely used imaging technique for assessing these tumors, but the large amount of data produced by MRI prevents manual segmentation in a reasonable time, limiting the use of precise quantitative measurements in clinical practice. Automatic and reliable segmentation methods are therefore required; however, the large spatial and structural variability among brain tumors makes automatic segmentation a challenging problem. In this paper, we propose an automatic segmentation method based on Convolutional Neural Networks (CNN), exploring small 3×3 kernels. The use of small kernels allows the design of a deeper architecture and has a positive effect against overfitting, given the smaller number of weights in the network. We also investigated the use of intensity normalization as a pre-processing step which, though not common in CNN-based segmentation methods, proved together with data augmentation to be very effective for brain tumor segmentation in MRI images. Our proposal was validated on the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013), simultaneously obtaining the first position for the complete, core, and enhancing regions in the Dice similarity coefficient metric (0.88, 0.83, 0.77) for the Challenge data set. It also obtained the overall first position on the online evaluation platform. We also participated in the on-site BRATS 2015 Challenge using the same model, obtaining second place, with Dice similarity coefficient values of 0.78, 0.65, and 0.75 for the complete, core, and enhancing regions, respectively.

  17. A segmentation approach for a delineation of terrestrial ecoregions

    NASA Astrophysics Data System (ADS)

    Nowosad, J.; Stepinski, T.

    2017-12-01

    Terrestrial ecoregions are the result of regionalizing land into homogeneous units with similar ecological and physiographic features. Terrestrial Ecoregions of the World (TEW) is a commonly used global ecoregionalization based on expert knowledge and in situ observations. Ecological Land Units (ELUs) is a global classification of 250-meter cells into 4,000 types on the basis of the categorical values of four environmental variables. ELUs are automatically calculated and reproducible, but they are not a regionalization, which makes them impractical for GIS-based spatial analysis and for comparison with TEW. We have regionalized terrestrial ecosystems on the basis of patterns of the same variables (land cover, soils, landform, and bioclimate) previously used in ELUs. Considering patterns of categorical variables makes segmentation, and thus regionalization, possible. The original raster datasets of the four variables are first transformed into regular grids of square blocks of their cells, called eco-sites. Eco-sites are elementary land units containing local patterns of physiographic characteristics and are thus assumed to contain a single ecosystem. Next, eco-sites are locally aggregated using a procedure analogous to image segmentation. The procedure optimizes pattern homogeneity of all four environmental variables within each segment. The result is a regionalization of the landmass into land units characterized by a uniform pattern of land cover, soils, landforms, and climate and, by inference, by a uniform ecosystem. Because several disjoint segments may have very similar characteristics, we cluster the segments to obtain a smaller set of segment types, which we identify with ecoregions. Our approach is automatic, reproducible, updatable, and customizable. It yields the first automatic delineation of ecoregions at the global scale. In the resulting vector database, each ecoregion/segment is described by numerous attributes, which makes it a valuable GIS resource for global ecological and conservation studies.

  18. [Analysis of genomic copy number variations in two unrelated neonates with 8p deletion and duplication associated with congenital heart disease].

    PubMed

    Mei, Mei; Yang, Lin; Zhan, Guodong; Wang, Huijun; Ma, Duan; Zhou, Wenhao; Huang, Guoying

    2014-06-01

    To screen for genomic copy number variations (CNVs) in two unrelated neonates with multiple congenital abnormalities using an Affymetrix SNP chip, and to identify the critical region associated with congenital heart disease. Two neonates were tested for genomic copy number variations using a cytogenetic SNP chip. Rare CNVs with potential clinical significance, with deletion segments larger than 50 kb and duplication segments larger than 150 kb, were selected based on analysis with the ChAS software, excluding false-positive CNVs and segments found in the normal population. The identified CNVs were compared with those of the cases in the DECIPHER and ISCA databases. Eleven rare CNVs ranging in size from 546.6 to 27 892 kb were identified in the 2 neonates. The deletion region and size of case 1 were 8p23.3-p23.1 (387 912-11 506 771 bp) and 11.1 Mb, respectively; the duplication region and size of case 1 were 8p23.1-p11.1 (11 508 387-43 321 279 bp) and 31.8 Mb, respectively. The deletion region and size of case 2 were 8p23.3-p23.1 (46 385-7 809 878 bp) and 7.8 Mb, respectively; the duplication region and size of case 2 were 8p23.1-p11.21 (12 260 914-40 917 092 bp) and 28.7 Mb, respectively. The comparison with the DECIPHER and ISCA databases supported the previous viewpoint that 8p23.1 is associated with congenital heart disease and that the region between 7 809 878 and 11 506 771 bp may play a role in the severe cardiac defects associated with 8p23.1 deletions. Case 1 had serious cardiac abnormalities; its GATA4 gene was located in the duplication segment with increased copy number, while SOX7 was located in the deletion segment with decreased copy number. The region between 7 809 878 and 11 506 771 bp in 8p23.1 is associated with heart defects, and copy number variants of SOX7 and GATA4 may result in congenital heart disease.

  19. Determination of Isthmocele Using a Foley Catheter During Laparoscopic Repair of Cesarean Scar Defect.

    PubMed

    Akdemir, Ali; Sahin, Cagdas; Ari, Sabahattin Anil; Ergenoglu, Mete; Ulukus, Murat; Karadadas, Nedim

    2018-01-01

    To demonstrate a new technique of isthmocele repair via laparoscopic surgery. Case report (Canadian Task Force classification III). The local Ethics Committee waived the requirement for approval. An isthmocele, localized at the lower uterine segment, is a defect of a previous cesarean scar due to poor myometrial healing after surgery [1]. This pouch accumulates menstrual bleeding, which can cause various disturbances and irregularities, including abnormal uterine bleeding, infertility, pelvic pain, and scar pregnancy [2-6]. Given the absence of a clearly defined surgical method in the literature, choosing the proper approach to treating an isthmocele can be arduous. Laparoscopy provides a minimally invasive procedure in women with previous cesarean scar defects. A 28-year-old woman, gravida 2 para 2, presented with a complaint of prolonged postmenstrual bleeding for 5 years. She had undergone 2 cesarean deliveries. Transvaginal ultrasonography revealed a hypoechogenic area with menstrual blood in the anterior lower uterine segment. Magnetic resonance imaging showed an isthmocele localized at the anterior left lateral side of the uterus, with an estimated volume of approximately 12 cm³. After patient preparation, laparoscopy was performed. To repair the defect, the uterovesical peritoneal fold was incised and the bladder was mobilized from the lower uterine segment. During this surgery, differentiating the isthmocele from the abdomen can be challenging. Here we used a Foley catheter to identify the isthmocele. To do this, after mobilizing the bladder from the lower uterine segment, we inserted a Foley catheter into the uterine cavity through the cervical canal. We then filled the balloon of the catheter at the lower uterine segment under laparoscopic view, which allowed clear identification of the isthmocele pouch. The uterine defect was then incised.
The isthmocele cavity was accessed, the margins of the pouch were debrided, and the edges were surgically reapproximated with continuous nonlocking single-layer 2-0 polydioxanone sutures. We believed that single-layer suturing could provide for proper healing without necrosis due to suturing. During the procedure, the vesicouterine space was dissected without difficulty. The urine collection bag contained clear urine, and there was no gas leakage; thus, we considered a safety test for the bladder superfluous. Based on concerns about a possible increased risk of adhesions, we did not cover the suture with peritoneum. The patient experienced no associated complications, and she reported complete resolution of prolonged postmenstrual bleeding at a 3-month follow-up. Even though the literature in this area is limited, a laparoscopic approach to repairing an isthmocele is a safe and minimally invasive procedure. Our approach described here involves inserting a Foley catheter into the uterine cavity through the cervical canal, then filling the balloon in the lower uterine segment under laparoscopic view to identify the isthmocele. Copyright © 2017 AAGL. Published by Elsevier Inc. All rights reserved.

  20. Analysis of electricity consumption: a study in the wood products industry

    Treesearch

    Henry Quesada-Pineda; Jan Wiedenbeck; Brian Bond

    2016-01-01

    This paper evaluates the effect of industry segment, year, and US region on electricity consumption per employee, per dollar of sales, and per square foot of plant area for wood products industries. Data were extracted from the Industrial Assessment Center (IAC) database and imported into MS Excel. The extracted dataset was examined for outliers and abnormalities with...

  1. Computer-based Interactive Literature Searching for CSU-Chico Chemistry Students.

    ERIC Educational Resources Information Center

    Cooke, Ron C.; And Others

    The intent of this instructional manual, which is aimed at exploring the literature of a discipline and presented in a self-paced, course segment format applicable to any course content, is to enable college students to conduct computer-based interactive searches through multiple databases. The manual is divided into 10 chapters: (1) Introduction,…

  2. WEPP FuME Analysis for a North Idaho Site

    Treesearch

    William Elliot; Ina Sue Miller; David Hall

    2007-01-01

    A computer interface has been developed to assist with analyzing soil erosion rates associated with fuel management activities. This interface uses the Water Erosion Prediction Project (WEPP) model to predict sediment yields from hillslopes and road segments to the stream network. The simple interface has a large database of climates, vegetation files and forest soil...

  3. The Danish Testicular Cancer database.

    PubMed

    Daugaard, Gedske; Kier, Maria Gry Gundgaard; Bandak, Mikkel; Mortensen, Mette Saksø; Larsson, Heidi; Søgaard, Mette; Toft, Birgitte Groenkaer; Engvad, Birte; Agerbæk, Mads; Holm, Niels Vilstrup; Lauritsen, Jakob

    2016-01-01

    The nationwide Danish Testicular Cancer database consists of a retrospective research database (DaTeCa database) and a prospective clinical database (Danish Multidisciplinary Cancer Group [DMCG] DaTeCa database). The aim is to improve the quality of care for patients with testicular cancer (TC) in Denmark, that is, by identifying risk factors for relapse and treatment-related toxicity, and by focusing on late effects. All Danish male patients with a histologically verified germ cell cancer diagnosis in the Danish Pathology Registry are included in the DaTeCa databases. Data collection has been performed from 1984 to 2007 and from 2013 onward, respectively. The retrospective DaTeCa database contains detailed information, with more than 300 variables related to histology, stage, treatment, relapses, pathology, tumor markers, kidney function, lung function, etc. A questionnaire related to late effects has been conducted, which includes questions regarding social relationships, life situation, general health status, family background, diseases, symptoms, use of medication, marital status, psychosocial issues, fertility, and sexuality. TC survivors alive in October 2014 were invited to fill in this questionnaire, which includes 160 validated questions. Collection of questionnaires is still ongoing. A biobank including blood/sputum samples for future genetic analyses has been established; samples related to both the DaTeCa and DMCG DaTeCa databases are included. The prospective DMCG DaTeCa database includes variables regarding histology, stage, prognostic group, and treatment. The DMCG DaTeCa database has existed since 2013 and is a young clinical database. It is necessary to extend the data collection in the prospective database in order to answer quality-related questions. Data from the retrospective database will be added to the prospective data. This will result in a large and very comprehensive database for future studies on TC patients.

  4. How to prepare a systematic review of economic evaluations for clinical practice guidelines: database selection and search strategy development (part 2/3).

    PubMed

    Thielen, F W; Van Mastrigt, Gapg; Burgers, L T; Bramer, W M; Majoie, Hjm; Evers, Smaa; Kleijnen, J

    2016-12-01

    This article is part of the series "How to prepare a systematic review of economic evaluations (EEs) for informing evidence-based healthcare decisions", in which a five-step approach is proposed. Areas covered: This paper focuses on the selection of relevant databases and the development of a search strategy for detecting EEs, as well as on how to perform the search and how to extract relevant data from retrieved records. Expert commentary: Thus far, little has been published on how to conduct systematic reviews of EEs (SR-EEs). Moreover, reliable sources of information, such as the Health Economic Evaluation Database, have ceased to publish updates. Researchers are thus left without authoritative guidance on how to conduct SR-EEs. Together with van Mastrigt et al. we seek to fill this gap.

  5. Single crowns versus conventional fillings for the restoration of root filled teeth.

    PubMed

    Fedorowicz, Zbys; Carter, Ben; de Souza, Raphael Freitas; Chaves, Carolina de Andrade Lima; Nasser, Mona; Sequeira-Byron, Patrick

    2012-05-16

    Endodontic treatment involves removal of the dental pulp and its replacement with a root canal filling. Restoration of root filled teeth can be challenging due to structural differences between vital and non-vital root filled teeth. Direct restoration involves placement of a restorative material, e.g. amalgam or composite, directly into the tooth. Indirect restorations consist of cast metal or ceramic (porcelain) crowns. The choice of restoration depends on the amount of remaining tooth, which may influence long-term survival and cost. The comparative in-service clinical performance of crowns or conventional fillings used to restore root filled teeth is unclear. To assess the effects of restoration of endodontically treated teeth (with or without post and core) by crowns versus conventional filling materials. We searched the following databases: the Cochrane Oral Health Group's Trials Register, CENTRAL, MEDLINE via OVID, EMBASE via OVID, CINAHL via EBSCO, LILACS via BIREME, and the reference lists of articles as well as ongoing trials registries. There were no restrictions regarding language or date of publication. The date of the last search was 13 February 2012. Randomised controlled trials (RCTs) or quasi-randomised controlled trials in participants with permanent teeth which have undergone endodontic treatment. Single full-coverage crowns were compared with any type of filling material for direct restoration, as well as indirect partial restorations (e.g. inlays and onlays). Comparisons considered the type of post and core used (cast or prefabricated post), if any. Two review authors independently assessed trial quality and extracted data. One trial, judged to be at high risk of bias due to missing outcome data, was included. 117 participants with a root filled premolar tooth restored with a carbon fibre post were randomised to either a full-coverage metal-ceramic crown or a direct adhesive composite restoration.
At 3 years there was no reported difference between the non-catastrophic failure rates in both groups. Decementation of the post and marginal gap formation occurred in a small number of teeth. There is insufficient evidence to support or refute the effectiveness of conventional fillings over crowns for the restoration of root filled teeth. Until more evidence becomes available clinicians should continue to base decisions on how to restore root filled teeth on their own clinical experience, whilst taking into consideration the individual circumstances and preferences of their patients.

  6. Brain tumor segmentation from multimodal magnetic resonance images via sparse representation.

    PubMed

    Li, Yuhong; Jia, Fucang; Qin, Jing

    2016-10-01

    Accurately segmenting and quantifying brain gliomas from magnetic resonance (MR) images remains a challenging task because of the large spatial and structural variability among brain tumors. To develop a fully automatic and accurate brain tumor segmentation algorithm, we present a probabilistic model of multimodal MR brain tumor segmentation. This model combines sparse representation and a Markov random field (MRF) to address the spatial and structural variability problem. We formulate tumor segmentation as a multi-classification task, labeling each voxel with the class of maximum posterior probability. We estimate the maximum a posteriori (MAP) probability by introducing sparse representation into the likelihood probability and an MRF into the prior probability. Because MAP estimation is an NP-hard problem, we convert it into a minimum energy optimization problem and employ graph cuts to find the solution. Our method is evaluated using the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013) and obtained Dice coefficient values of 0.85, 0.75, and 0.69 on the high-grade Challenge data set, 0.73, 0.56, and 0.54 on the high-grade Challenge LeaderBoard data set, and 0.84, 0.54, and 0.57 on the low-grade Challenge data set for the complete, core, and enhancing regions. The experimental results show that the proposed algorithm is valid and ranks 2nd among the state-of-the-art tumor segmentation algorithms in the MICCAI BRATS 2013 challenge. Copyright © 2016 Elsevier B.V. All rights reserved.
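
    The MAP-to-energy conversion described above has the standard MRF form; in generic notation (ours, not taken verbatim from the paper):

```latex
% MAP labeling of voxel labels x_i, given observations y, rewritten
% as an energy minimization:
\mathbf{x}^{*} = \arg\max_{\mathbf{x}} P(\mathbf{y} \mid \mathbf{x})\, P(\mathbf{x})
               = \arg\min_{\mathbf{x}} E(\mathbf{x}),

% Unary terms from the (sparse-representation) likelihood,
% pairwise terms from the MRF prior over the neighborhood system \mathcal{N}:
E(\mathbf{x}) = \sum_i \psi_i(x_i)
              + \lambda \sum_{(i,j) \in \mathcal{N}} \psi_{ij}(x_i, x_j)
```

    When the pairwise terms are submodular (e.g. a Potts smoothness prior), graph cuts can minimize this energy efficiently, which is what makes the conversion worthwhile despite MAP estimation being NP-hard in general.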

  7. Design of Content Based Image Retrieval Scheme for Diabetic Retinopathy Images using Harmony Search Algorithm.

    PubMed

    Sivakamasundari, J; Natarajan, V

    2015-01-01

    Diabetic Retinopathy (DR) is a disorder that affects the structure of retinal blood vessels due to long-standing diabetes mellitus. Automated segmentation of blood vessels is vital for periodic screening and timely diagnosis. An attempt has been made to generate continuous retinal vasculature for the design of a Content Based Image Retrieval (CBIR) application. Typical normal and abnormal retinal images are preprocessed to improve vessel contrast. The blood vessels are segmented using the evolutionary Harmony Search Algorithm (HSA) combined with the Otsu Multilevel Thresholding (MLT) method with the best objective functions. The segmentation results are validated against corresponding ground truth images using binary similarity measures. Statistical, textural, and structural features are obtained from the segmented images of normal and DR-affected retinas and are analyzed. CBIR systems are used in medical image retrieval applications to assist physicians in clinical decision support and research. A CBIR system is developed using the HSA-based Otsu MLT segmentation technique and the features obtained from the segmented images. Similarity matching is carried out between the features of the query and database images using the Euclidean distance measure. Similar images are ranked and retrieved. The retrieval performance of the CBIR system is evaluated in terms of precision and recall. The CBIR systems developed using the HSA-based Otsu MLT and conventional Otsu MLT methods are compared. The precision and recall are found to be 96% and 58%, respectively, for the CBIR system using HSA-based Otsu MLT segmentation. This automated CBIR system could be recommended for use in computer-assisted diagnosis for diabetic retinopathy screening.
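
    The retrieval step described above (Euclidean-distance ranking of feature vectors, scored by precision and recall) can be sketched generically; the feature values and labels below are toy data, not from the paper:

```python
import numpy as np

def retrieve(query_vec, db_vecs, top_k=5):
    """Rank database feature vectors by Euclidean distance to the query;
    return the indices and distances of the top_k closest entries."""
    dists = np.linalg.norm(db_vecs - query_vec, axis=1)
    order = np.argsort(dists)[:top_k]
    return order, dists[order]

def precision_recall(retrieved_labels, query_label, total_relevant):
    """Precision and recall of one retrieved set for one query."""
    hits = sum(1 for lbl in retrieved_labels if lbl == query_label)
    return hits / len(retrieved_labels), hits / total_relevant

# toy database: 6 feature vectors with class labels (0 = normal, 1 = DR)
db = np.array([[0.10, 0.20], [0.90, 0.80], [0.15, 0.25],
               [0.80, 0.90], [0.20, 0.10], [0.85, 0.75]])
labels = [0, 1, 0, 1, 0, 1]
query = np.array([0.12, 0.22])  # a "normal" query

idx, _ = retrieve(query, db, top_k=3)
p, r = precision_recall([labels[i] for i in idx],
                        query_label=0, total_relevant=3)
```

    In a real system the 2-D toy vectors would be replaced by the statistical, textural, and structural features the paper extracts from segmented vessel maps.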

  8. The Development of Vocational Vehicle Drive Cycles and Segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duran, Adam W.; Phillips, Caleb T.; Konan, Arnaud M.

    Under a collaborative interagency agreement between the U.S. Environmental Protection Agency and the U.S. Department of Energy (DOE), the National Renewable Energy Laboratory (NREL) performed a series of in-depth analyses to characterize the on-road driving behavior, including distributions of vehicle speed, idle time, accelerations and decelerations, and other driving metrics, of medium- and heavy-duty vocational vehicles operating within the United States. As part of this effort, NREL researchers segmented U.S. medium- and heavy-duty vocational vehicle driving characteristics into three distinct operating groups, or clusters, using real-world drive cycle data collected at 1 Hz and stored in NREL's Fleet DNA database. The Fleet DNA database contains millions of miles of historical real-world drive cycle data captured from medium- and heavy-duty vehicles operating across the United States. The data encompass existing DOE activities as well as contributions from industry stakeholder participants. For this project, data captured from 913 unique vehicles comprising 16,250 days of operation were drawn from the Fleet DNA database and examined. The Fleet DNA data used as a source for this analysis were collected from a total of 30 unique fleets/data providers operating across 22 unique geographic locations spread across the United States, including locations with topology ranging from the foothills of Denver, Colorado, to the flats of Miami, Florida. The range of fleets, geographic locations, and total number of vehicles analyzed ensures results that include the influence of these factors. While no analysis will be perfect without unlimited resources and data, it is the researchers' understanding that the Fleet DNA database is the largest and most thorough publicly accessible vocational vehicle usage database currently in operation.
This report includes an introduction to the Fleet DNA database and the data contained within, a presentation of the results of the statistical analysis performed by NREL, a review of the logistic model developed to predict cluster membership, and a discussion and detailed summary of the development of the vocational drive cycle weights and representative transient drive cycles for testing and simulation. Additional discussion of known limitations and potential future work is also included in the report.

  9. Video segmentation and camera motion characterization using compressed data

    NASA Astrophysics Data System (ADS)

    Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain

    1997-10-01

    We address the problem of automatically extracting visual indexes from videos in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments and the characterization of each shot by camera motion parameters. For the first task, we use a Bayesian classification approach to detect scene cuts by analyzing motion vectors. For the second task, a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. To guarantee the highest processing speed, all techniques process and analyze MPEG-1 motion vectors directly, without the need for video decompression. Experimental results are reported for a database of news video clips.
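
    A least-squares fit of camera motion parameters to a motion-vector field can be sketched as follows. The three-parameter model used here (translation plus isotropic zoom about the image center) is an illustrative simplification, not necessarily the authors' exact parameterization:

```python
import numpy as np

def fit_pan_tilt_zoom(points, vectors):
    """Least-squares fit of a simplified pan/tilt/zoom model to motion vectors.

    Illustrative model: u = pan + zoom * x,  v = tilt + zoom * y,
    with (x, y) measured from the image center.
    Returns the parameter vector [pan, tilt, zoom].
    """
    x, y = points[:, 0], points[:, 1]
    u, v = vectors[:, 0], vectors[:, 1]
    n = len(points)
    # Stack both component equations into one linear system A @ p = b
    A = np.zeros((2 * n, 3))
    A[:n, 0] = 1.0   # pan appears in the u equations
    A[:n, 2] = x     # zoom * x in the u equations
    A[n:, 1] = 1.0   # tilt appears in the v equations
    A[n:, 2] = y     # zoom * y in the v equations
    b = np.concatenate([u, v])
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params

# synthetic field: pure zoom of 0.05 plus a horizontal pan of 2.0
pts = np.array([[-10.0, -10.0], [10.0, -10.0], [-10.0, 10.0], [10.0, 10.0]])
vecs = np.column_stack([2.0 + 0.05 * pts[:, 0], 0.05 * pts[:, 1]])
pan, tilt, zoom = fit_pan_tilt_zoom(pts, vecs)
```

    With MPEG-1 data, `points` would be macroblock centers and `vectors` the decoded motion vectors, so no pixel decompression is needed, which is the speed advantage the abstract highlights.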

  10. Learning to segment mouse embryo cells

    NASA Astrophysics Data System (ADS)

    León, Juan; Pardo, Alejandro; Arbeláez, Pablo

    2017-11-01

    Recent advances in microscopy enable the capture of temporal sequences during cell development stages. However, the study of such sequences is a complex and time-consuming task. In this paper we propose an automatic strategy to address the problem of semantic and instance segmentation of mouse embryos using NYU's Mouse Embryo Tracking Database. We obtain our instance proposals as refined predictions from the generalized Hough transform, using prior knowledge of the embryos' locations and their current cell stage. We use two main approaches to learn the priors: hand-crafted features and automatically learned features. Our strategy increases the baseline Jaccard index from 0.12 to 0.24 using hand-crafted features and to 0.28 using automatically learned ones.

  11. Massive bone allograft: a salvage procedure for complex bone loss due to high-velocity missiles--a long-term follow-up.

    PubMed

    Salai, M; Volks, S; Blankstein, A; Chechik, A; Amit, Y; Horosowski, H

    1990-07-01

    The treatment of high-velocity missile injury to the limbs is often associated with segmental bone loss, as well as damage to neurovascular and soft tissue. In such "limb threatening" cases, massive bone allograft can fill the bone defect and offer stability to the soft tissue reconstruction. The return of function in the affected limb is relatively rapid when using this method as a primary procedure. The indications for use of this technique and illustrative case reports are presented and discussed.

  12. Universal Design for Learning (UDL): A Content Analysis of Peer-Reviewed Journal Papers from 2012 to 2015

    ERIC Educational Resources Information Center

    Al-Azawei, Ahmed; Serenelli, Fabio; Lundqvist, Karsten

    2016-01-01

    The Universal Design for Learning (UDL) framework is increasingly drawing the attention of researchers and educators as an effective solution for filling the gap between learner ability and individual differences. This study aims to analyse the content of twelve papers, where the UDL was adopted. The articles were chosen from several databases and…

  13. Hard-Rock Stability Analysis for Span Design in Entry-Type Excavations with Learning Classifiers

    PubMed Central

    García-Gonzalo, Esperanza; Fernández-Muñiz, Zulima; García Nieto, Paulino José; Bernardo Sánchez, Antonio; Menéndez Fernández, Marta

    2016-01-01

    The mining industry relies heavily on empirical analysis for design and prediction. An empirical design method, called the critical span graph, was developed specifically for rock stability analysis in entry-type excavations, based on an extensive case-history database of cut and fill mining in Canada. This empirical span design chart plots the critical span against rock mass rating for the observed case histories and has been accepted by many mining operations for the initial span design of cut and fill stopes. Different types of analysis have been used to classify the observed cases into stable, potentially unstable and unstable groups. The main purpose of this paper is to present a new method for defining rock stability areas of the critical span graph, which applies machine learning classifiers (support vector machine and extreme learning machine). The results show a reasonable correlation with previous guidelines. These machine learning methods are good tools for developing empirical methods, since they make no assumptions about the regression function. With this software, it is easy to add new field observations to a previous database, improving prediction output with the addition of data that consider the local conditions for each mine. PMID:28773653

  14. Hard-Rock Stability Analysis for Span Design in Entry-Type Excavations with Learning Classifiers.

    PubMed

    García-Gonzalo, Esperanza; Fernández-Muñiz, Zulima; García Nieto, Paulino José; Bernardo Sánchez, Antonio; Menéndez Fernández, Marta

    2016-06-29

    The mining industry relies heavily on empirical analysis for design and prediction. An empirical design method, called the critical span graph, was developed specifically for rock stability analysis in entry-type excavations, based on an extensive case-history database of cut and fill mining in Canada. This empirical span design chart plots the critical span against rock mass rating for the observed case histories and has been accepted by many mining operations for the initial span design of cut and fill stopes. Different types of analysis have been used to classify the observed cases into stable, potentially unstable and unstable groups. The main purpose of this paper is to present a new method for defining rock stability areas of the critical span graph, which applies machine learning classifiers (support vector machine and extreme learning machine). The results show a reasonable correlation with previous guidelines. These machine learning methods are good tools for developing empirical methods, since they make no assumptions about the regression function. With this software, it is easy to add new field observations to a previous database, improving prediction output with the addition of data that consider the local conditions for each mine.

  15. GLUCAGON PRESCRIPTION PATTERNS IN PATIENTS WITH EITHER TYPE 1 OR 2 DIABETES WITH NEWLY PRESCRIBED INSULIN.

    PubMed

    Mitchell, Beth D; He, Xuanyao; Sturdy, Ian M; Cagle, Andrew P; Settles, Julie A

    2016-02-01

    To describe glucagon prescription patterns in patients with type 1 (T1DM) or type 2 diabetes (T2DM) who received an initial insulin prescription. Retrospective analyses were conducted with data from Truven Health MarketScan databases to assess time to glucagon prescriptions: filled within 1.5 months after index date (early) or after 1.5 months postindex (nonearly). The index date was the date of first insulin prescription between January 1, 2009 and December 31, 2011; for T2DM, without an insulin prescription in the previous 6 months; for T1DM, diabetes diagnosis preindex or within 3 months postindex. Analysis included 8,814 patients with T1DM and 47,051 with T2DM (49.3% and 2.4%, respectively) who had glucagon prescriptions filled. The median times to first glucagon prescription were 196 days (T1DM) and 288 days (T2DM). The rates of filling glucagon were highest in the first 1.5 months. The times to first hypoglycemia-related emergency room (ER) visit for T1DM and T2DM cohorts were initially similar for those with early glucagon versus nonearly glucagon prescriptions. After 10.8 and 2.5 months postindex, respectively, the percentage of hypoglycemia-related ER visits was lower for those with early glucagon prescriptions. Glucagon prescriptions filled for patients with diabetes who are initiating insulin are low. Patients with T1DM who were younger and healthier filled glucagon prescriptions more often; patients with T2DM who were younger and sicker and had a higher percentage of hypoglycemia-related ER visit history filled glucagon prescriptions more often. Glucagon filled early was associated with a lower incidence of hypoglycemia-related ER visits.

  16. Clinical evaluation of multi-atlas based segmentation of lymph node regions in head and neck and prostate cancer patients.

    PubMed

    Sjöberg, Carl; Lundmark, Martin; Granberg, Christoffer; Johansson, Silvia; Ahnesjö, Anders; Montelius, Anders

    2013-10-03

    Semi-automated segmentation using deformable registration of selected atlas cases consisting of expert segmented patient images has been proposed to facilitate the delineation of lymph node regions for three-dimensional conformal and intensity-modulated radiotherapy planning of head and neck and prostate tumours. Our aim is to investigate if fusion of multiple atlases will lead to clinical workload reductions and more accurate segmentation proposals compared to the use of a single atlas segmentation, due to a more complete representation of the anatomical variations. Atlases for lymph node regions were constructed using 11 head and neck patients and 15 prostate patients based on published recommendations for segmentations. A commercial registration software (Velocity AI) was used to create individual segmentations through deformable registration. Ten head and neck patients, and ten prostate patients, all different from the atlas patients, were randomly chosen for the study from retrospective data. Each patient was first delineated three times, (a) manually by a radiation oncologist, (b) automatically using a single atlas segmentation proposal from a chosen atlas and (c) automatically by fusing the atlas proposals from all cases in the database using the probabilistic weighting fusion algorithm. In a subsequent step a radiation oncologist corrected the segmentation proposals achieved from step (b) and (c) without using the result from method (a) as reference. The time spent for editing the segmentations was recorded separately for each method and for each individual structure. Finally, the Dice Similarity Coefficient and the volume of the structures were used to evaluate the similarity between the structures delineated with the different methods. 
For the single-atlas method, the time reduction compared to manual segmentation was 29% and 23% for head and neck and pelvis lymph nodes, respectively, while editing the fused atlas proposal resulted in time reductions of 49% and 34%. The average volume of the fused atlas proposals was only 74% of the manual segmentation for the head and neck cases and 82% for the prostate cases, due to a blurring effect from the fusion process. After editing of the proposals, the resulting volume differences were no longer statistically significant, although a slight influence of the proposals could still be noticed: the average edited volume remained slightly smaller than the manual segmentation, by 9% and 5%, respectively. Segmentation based on fusion of multiple atlases reduces the time needed for delineation of lymph node regions compared to the use of a single atlas segmentation. Even though the time saving is large, the quality of the segmentation is maintained compared to manual segmentation.
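The Dice Similarity Coefficient used for the evaluation above is defined as DSC = 2|A∩B| / (|A| + |B|) for two segmentations A and B. A minimal sketch over voxel-coordinate sets (the set representation is a simplification for illustration):

```python
def dice(a, b):
    """Dice Similarity Coefficient between two binary segmentations,
    given as sets of voxel coordinates: 2|A∩B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # both empty: perfect agreement by convention
    return 2 * len(a & b) / (len(a) + len(b))
```

DSC weights the intersection against the average size of the two structures, so unlike a raw volume ratio it also penalizes misplacement, which is why the study reports both DSC and volumes.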

  17. Automatic 3D segmentation of multiphoton images: a key step for the quantification of human skin.

    PubMed

    Decencière, Etienne; Tancrède-Bohin, Emmanuelle; Dokládal, Petr; Koudoro, Serge; Pena, Ana-Maria; Baldeweck, Thérèse

    2013-05-01

    Multiphoton microscopy has emerged in the past decade as a useful noninvasive imaging technique for in vivo human skin characterization. However, it has not been used until now in evaluation clinical trials, mainly because of the lack of specific image processing tools that would allow the investigator to extract pertinent quantitative three-dimensional (3D) information from the different skin components. We propose a 3D automatic segmentation method of multiphoton images which is a key step for epidermis and dermis quantification. This method, based on the morphological watershed and graph cuts algorithms, takes into account the real shape of the skin surface and of the dermal-epidermal junction, and allows separating in 3D the epidermis and the superficial dermis. The automatic segmentation method and the associated quantitative measurements have been developed and validated on a clinical database designed for aging characterization. The segmentation achieves its goals for epidermis-dermis separation and allows quantitative measurements inside the different skin compartments with sufficient relevance. This study shows that multiphoton microscopy associated with specific image processing tools provides access to new quantitative measurements on the various skin components. The proposed 3D automatic segmentation method will contribute to build a powerful tool for characterizing human skin condition. To our knowledge, this is the first 3D approach to the segmentation and quantification of these original images. © 2013 John Wiley & Sons A/S. Published by Blackwell Publishing Ltd.

  18. Exploring Short Linear Motifs Using the ELM Database and Tools.

    PubMed

    Gouw, Marc; Sámano-Sánchez, Hugo; Van Roey, Kim; Diella, Francesca; Gibson, Toby J; Dinkel, Holger

    2017-06-27

The Eukaryotic Linear Motif (ELM) resource is dedicated to the characterization and prediction of short linear motifs (SLiMs). SLiMs are compact, degenerate peptide segments found in many proteins and essential to almost all cellular processes. However, despite their abundance, SLiMs remain largely uncharacterized. The ELM database is a collection of manually annotated SLiM instances curated from experimental literature. In this article we illustrate how to browse and search the database for curated SLiM data, and cover the different types of data integrated in the resource. We also cover how to use this resource in order to predict SLiMs in known as well as novel proteins, and how to interpret the results generated by the ELM prediction pipeline. The ELM database is a very rich resource, and in the following protocols we give helpful examples to demonstrate how this knowledge can be used to improve your own research. © 2017 by John Wiley & Sons, Inc.
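ELM describes motif classes as regular expressions over degenerate amino-acid positions, and prediction amounts to scanning sequences against those patterns. A minimal sketch; the pattern below is a hypothetical SLiM-style regex written for illustration, not an actual ELM entry:

```python
import re

# Hypothetical SLiM-style pattern: [ST]P.[KR] — a serine or threonine,
# a proline, any residue, then lysine or arginine. Real ELM classes use
# the same regular-expression formalism, often with added context filters.
SLIM = re.compile(r"[ST]P.[KR]")

def find_slims(seq):
    """Return (start_index, matched_segment) for each motif hit in `seq`."""
    return [(m.start(), m.group()) for m in SLIM.finditer(seq)]
```

Because SLiMs are short and degenerate, raw regex scans produce many chance hits; the ELM pipeline layers taxonomy, structural, and cellular-context filters on top to suppress false positives.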

  19. Database of potential sources for earthquakes larger than magnitude 6 in Northern California

    USGS Publications Warehouse


    1996-01-01

The Northern California Earthquake Potential (NCEP) working group, composed of many contributors and reviewers in industry, academia and government, has pooled its collective expertise and knowledge of regional tectonics to identify potential sources of large earthquakes in northern California. We have created a map and database of active faults, both surficial and buried, that forms the basis for the northern California portion of the national map of probabilistic seismic hazard. The database contains 62 potential sources, including fault segments and areally distributed zones. The working group has integrated constraints from broadly based plate tectonic and VLBI models with local geologic slip rates, geodetic strain rate, and microseismicity. Our earthquake source database derives from a scientific consensus that accounts for conflict in the diverse data. Our preliminary product, as described in this report, brings to light many gaps in the data, including a need for better information on the proportion of deformation in fault systems that is aseismic.

  20. Image query and indexing for digital x rays

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Thoma, George R.

    1998-12-01

The Web-based Medical Information Retrieval System (WebMIRS) allows Internet access to databases containing 17,000 digitized x-ray spine images and associated text data from the National Health and Nutrition Examination Surveys (NHANES). WebMIRS allows SQL query of the text, and viewing of the returned text records and images using a standard browser. We are now working (1) to determine the utility of data directly derived from the images in our databases, and (2) to investigate the feasibility of computer-assisted or automated indexing of the images to support retrieval of images of interest to biomedical researchers in the field of osteoarthritis. To build an initial database based on image data, we are manually segmenting a subset of the vertebrae, using techniques from vertebral morphometry. From this, we will derive vertebral features and add them to the database. This image-derived data will enhance the user's data access capability by enabling the creation of combined SQL/image-content queries.
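A combined SQL/image-content query of the kind described can be sketched with an in-memory SQLite database. The schema, column names, and morphometry values below are invented for illustration; the real NHANES/WebMIRS schema is not shown in the abstract.

```python
import sqlite3

# Hypothetical schema mixing survey text fields (age) with image-derived
# vertebral morphometry (anterior/posterior heights). All names invented.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE spine_records (
    subject_id INTEGER, age INTEGER, vertebra TEXT,
    anterior_height_mm REAL, posterior_height_mm REAL)""")
conn.executemany(
    "INSERT INTO spine_records VALUES (?, ?, ?, ?, ?)",
    [(1, 64, "L1", 23.0, 27.5),
     (2, 58, "L1", 26.1, 26.8),
     (3, 71, "L2", 21.4, 27.0)])

# Combined query: a survey criterion (age) plus an image-derived one
# (anterior/posterior height ratio suggesting vertebral wedging).
rows = conn.execute("""
    SELECT subject_id, vertebra,
           anterior_height_mm / posterior_height_mm AS ratio
    FROM spine_records
    WHERE age >= 60 AND anterior_height_mm / posterior_height_mm < 0.85
    ORDER BY ratio""").fetchall()
```

Once vertebral features derived from the segmentations are loaded as ordinary columns, this kind of query needs no special image-retrieval machinery on the SQL side.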
